Merge lp:~openstack-charmers/charms/precise/glance/python-redux into lp:~charmers/charms/precise/glance/trunk
Proposed by: Adam Gandelman
Status: Merged
Merged at revision: 37
Proposed branch: lp:~openstack-charmers/charms/precise/glance/python-redux
Merge into: lp:~charmers/charms/precise/glance/trunk
Diff against target: 6608 lines (+4868/-1413), 47 files modified:
- .coveragerc (+6/-0)
- Makefile (+11/-0)
- README.md (+89/-0)
- charm-helpers.yaml (+9/-0)
- config.yaml (+12/-2)
- hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
- hooks/charmhelpers/contrib/hahelpers/cluster.py (+183/-0)
- hooks/charmhelpers/contrib/openstack/context.py (+522/-0)
- hooks/charmhelpers/contrib/openstack/neutron.py (+117/-0)
- hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
- hooks/charmhelpers/contrib/openstack/templating.py (+280/-0)
- hooks/charmhelpers/contrib/openstack/utils.py (+365/-0)
- hooks/charmhelpers/contrib/storage/linux/ceph.py (+359/-0)
- hooks/charmhelpers/contrib/storage/linux/loopback.py (+62/-0)
- hooks/charmhelpers/contrib/storage/linux/lvm.py (+88/-0)
- hooks/charmhelpers/contrib/storage/linux/utils.py (+25/-0)
- hooks/charmhelpers/core/hookenv.py (+340/-0)
- hooks/charmhelpers/core/host.py (+241/-0)
- hooks/charmhelpers/fetch/__init__.py (+209/-0)
- hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
- hooks/charmhelpers/fetch/bzrurl.py (+49/-0)
- hooks/charmhelpers/payload/__init__.py (+1/-0)
- hooks/charmhelpers/payload/execd.py (+50/-0)
- hooks/glance-common (+0/-133)
- hooks/glance-relations (+0/-464)
- hooks/glance_contexts.py (+89/-0)
- hooks/glance_relations.py (+320/-0)
- hooks/glance_utils.py (+193/-0)
- hooks/lib/openstack-common (+0/-813)
- metadata.yaml (+2/-0)
- revision (+1/-1)
- templates/ceph.conf (+12/-0)
- templates/essex/glance-api-paste.ini (+51/-0)
- templates/essex/glance-api.conf (+86/-0)
- templates/essex/glance-registry-paste.ini (+28/-0)
- templates/folsom/glance-api-paste.ini (+68/-0)
- templates/folsom/glance-api.conf (+94/-0)
- templates/folsom/glance-registry-paste.ini (+28/-0)
- templates/glance-registry.conf (+19/-0)
- templates/grizzly/glance-api-paste.ini (+68/-0)
- templates/grizzly/glance-registry-paste.ini (+30/-0)
- templates/haproxy.cfg (+37/-0)
- templates/havana/glance-api-paste.ini (+71/-0)
- templates/openstack_https_frontend (+23/-0)
- unit_tests/__init__.py (+3/-0)
- unit_tests/test_glance_relations.py (+401/-0)
- unit_tests/test_utils.py (+118/-0)
To merge this branch: bzr merge lp:~openstack-charmers/charms/precise/glance/python-redux
Related bugs:
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| charmers | Pending | | |
Review via email:
Commit message
Description of the change
Update of all Havana / Saucy / python-redux work:
* Full Python rewrite using the new OpenStack charm-helpers (see the sketch below).
* Test coverage
* Havana support
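
For readers unfamiliar with the new helpers, the rewrite is organised around the charm-helpers templating layer added in this branch (hooks/charmhelpers/contrib/openstack/templating.py and context.py in the diff below). The following is a minimal, illustrative sketch of that flow, based on the usage notes in templating.py; the registered file paths and the release name are example values, not the charm's actual hook code:

    # Sketch of the OSConfigRenderer flow that the rewritten hooks build on.
    # Class and method names come from the diff below; the registered config
    # files and the OpenStack release are illustrative only.
    from charmhelpers.contrib.openstack import context, templating

    configs = templating.OSConfigRenderer(templates_dir='templates/',
                                          openstack_release='folsom')

    # Each config file is registered with the context generators that can
    # populate its template from charm config and relation data.
    configs.register(config_file='/etc/glance/glance-registry.conf',
                     contexts=[context.SharedDBContext(),
                               context.IdentityServiceContext()])
    configs.register(config_file='/etc/glance/glance-api.conf',
                     contexts=[context.SharedDBContext(),
                               context.IdentityServiceContext(),
                               context.CephContext()])

    # Hooks then render and write every registered file in one call.
    configs.write_all()

The charm-specific pieces added by this branch live in hooks/glance_contexts.py and hooks/glance_utils.py (see the file list above).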
Preview Diff
1 | === added file '.coveragerc' |
2 | --- .coveragerc 1970-01-01 00:00:00 +0000 |
3 | +++ .coveragerc 2013-10-15 01:35:02 +0000 |
4 | @@ -0,0 +1,6 @@ |
5 | +[report] |
6 | +# Regexes for lines to exclude from consideration |
7 | +exclude_lines = |
8 | + if __name__ == .__main__.: |
9 | +include= |
10 | + hooks/glance_* |
11 | |
12 | === added file 'Makefile' |
13 | --- Makefile 1970-01-01 00:00:00 +0000 |
14 | +++ Makefile 2013-10-15 01:35:02 +0000 |
15 | @@ -0,0 +1,11 @@ |
16 | +#!/usr/bin/make |
17 | + |
18 | +lint: |
19 | + @flake8 --exclude hooks/charmhelpers hooks |
20 | + @charm proof |
21 | + |
22 | +sync: |
23 | + @charm-helper-sync -c charm-helpers.yaml |
24 | + |
25 | +test: |
26 | + @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests |
27 | |
28 | === added file 'README.md' |
29 | --- README.md 1970-01-01 00:00:00 +0000 |
30 | +++ README.md 2013-10-15 01:35:02 +0000 |
31 | @@ -0,0 +1,89 @@ |
32 | +Overview |
33 | +-------- |
34 | + |
35 | +This charm provides the Glance image service for OpenStack. It is intended to |
36 | +be used alongside the other OpenStack components, starting with the Essex |
37 | +release in Ubuntu 12.04. |
38 | + |
39 | +Usage |
40 | +----- |
41 | + |
42 | +Glance may be deployed in a number of ways. This charm focuses on 3 main |
43 | +configurations. All require the existence of the other core OpenStack |
44 | +services deployed via Juju charms, specifically: mysql, keystone and |
45 | +nova-cloud-controller. The following assumes these services have already |
46 | +been deployed. |
47 | + |
48 | +Local Storage |
49 | +============= |
50 | + |
51 | +In this configuration, Glance uses the local storage available on the server |
52 | +to store image data: |
53 | + |
54 | + juju deploy glance |
55 | + juju add-relation glance keystone |
56 | + juju add-relation glance mysql |
57 | + juju add-relation glance nova-cloud-controller |
58 | + |
59 | +Swift backed storage |
60 | +==================== |
61 | + |
62 | +Glance can also use Swift Object storage for image storage. Swift is often |
63 | +deployed as part of an OpenStack cloud and provides increased resilience and |
64 | +scale when compared to using local disk storage. This configuration assumes |
65 | +that you have already deployed Swift using the swift-proxy and swift-storage |
66 | +charms: |
67 | + |
68 | + juju deploy glance |
69 | + juju add-relation glance keystone |
70 | + juju add-relation glance mysql |
71 | + juju add-relation glance nova-cloud-controller |
72 | + juju add-relation glance swift-proxy |
73 | + |
74 | +This configuration can be used to support Glance in HA/Scale-out deployments. |
75 | + |
76 | +Ceph backed storage |
77 | +=================== |
78 | + |
79 | +In this configuration, Glance uses Ceph based object storage to provide |
80 | +scalable, resilient storage of images. This configuration assumes that you |
81 | +have already deployed Ceph using the ceph charm: |
82 | + |
83 | + juju deploy glance |
84 | + juju add-relation glance keystone |
85 | + juju add-relation glance mysql |
86 | + juju add-relation glance nova-cloud-controller |
87 | + juju add-relation glance ceph |
88 | + |
89 | +This configuration can also be used to support Glance in HA/Scale-out |
90 | +deployments. |
91 | + |
92 | +Glance HA/Scale-out |
93 | +=================== |
94 | + |
95 | +The Glance charm can also be used in a HA/scale-out configuration using |
96 | +the hacluster charm: |
97 | + |
98 | + juju deploy -n 3 glance |
99 | + juju deploy hacluster haglance |
100 | + juju set glance vip=<virtual IP address to access glance over> |
101 | + juju add-relation glance haglance |
102 | + juju add-relation glance mysql |
103 | + juju add-relation glance keystone |
104 | + juju add-relation glance nova-cloud-controller |
105 | + juju add-relation glance ceph|swift-proxy |
106 | + |
107 | +In this configuration, 3 service units host the Glance image service; |
108 | +API requests are load balanced across all 3 service units via the |
109 | +configured virtual IP address (which is also registered into Keystone |
110 | +as the endpoint for Glance). |
111 | + |
112 | +Note that Glance in this configuration must be used with either Ceph or |
113 | +Swift providing backing image storage. |
114 | + |
115 | +Contact Information |
116 | +------------------- |
117 | + |
118 | +Author: Adam Gandelman <adamg@canonical.com> |
119 | +Report bugs at: http://bugs.launchpad.net/charms |
120 | +Location: http://jujucharms.com |
121 | |
122 | === added file 'charm-helpers.yaml' |
123 | --- charm-helpers.yaml 1970-01-01 00:00:00 +0000 |
124 | +++ charm-helpers.yaml 2013-10-15 01:35:02 +0000 |
125 | @@ -0,0 +1,9 @@ |
126 | +branch: lp:charm-helpers |
127 | +destination: hooks/charmhelpers |
128 | +include: |
129 | + - core |
130 | + - fetch |
131 | + - contrib.openstack |
132 | + - contrib.hahelpers |
133 | + - contrib.storage.linux.ceph |
134 | + - payload.execd |
135 | |
136 | === modified file 'config.yaml' |
137 | --- config.yaml 2013-09-18 09:12:13 +0000 |
138 | +++ config.yaml 2013-10-15 01:35:02 +0000 |
139 | @@ -14,11 +14,11 @@ |
140 | Note that updating this setting to a source that is known to |
141 | provide a later version of OpenStack will trigger a software |
142 | upgrade. |
143 | - db-user: |
144 | + database-user: |
145 | default: glance |
146 | type: string |
147 | description: Database username |
148 | - glance-db: |
149 | + database: |
150 | default: glance |
151 | type: string |
152 | description: Glance database name. |
153 | @@ -26,6 +26,16 @@ |
154 | default: RegionOne |
155 | type: string |
156 | description: OpenStack Region |
157 | + ceph-osd-replication-count: |
158 | + default: 2 |
159 | + type: int |
160 | + description: | |
161 | + This value dictates the number of replicas ceph must make of any |
162 | + object it stores within the images rbd pool. Of course, this only |
163 | + applies if using Ceph as a backend store. Note that once the images |
164 | + rbd pool has been created, changing this value will not have any |
165 | + effect (although it can be changed in ceph by manually configuring |
166 | + your ceph cluster). |
167 | # HA configuration settings |
168 | vip: |
169 | type: string |
170 | |
171 | === added file 'hooks/__init__.py' |
172 | === added symlink 'hooks/ceph-relation-broken' |
173 | === target is u'glance_relations.py' |
174 | === modified symlink 'hooks/ceph-relation-changed' |
175 | === target changed u'glance-relations' => u'glance_relations.py' |
176 | === modified symlink 'hooks/ceph-relation-joined' |
177 | === target changed u'glance-relations' => u'glance_relations.py' |
178 | === added directory 'hooks/charmhelpers' |
179 | === added file 'hooks/charmhelpers/__init__.py' |
180 | === added directory 'hooks/charmhelpers/contrib' |
181 | === added file 'hooks/charmhelpers/contrib/__init__.py' |
182 | === added directory 'hooks/charmhelpers/contrib/hahelpers' |
183 | === added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py' |
184 | === added file 'hooks/charmhelpers/contrib/hahelpers/apache.py' |
185 | --- hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000 |
186 | +++ hooks/charmhelpers/contrib/hahelpers/apache.py 2013-10-15 01:35:02 +0000 |
187 | @@ -0,0 +1,58 @@ |
188 | +# |
189 | +# Copyright 2012 Canonical Ltd. |
190 | +# |
191 | +# This file is sourced from lp:openstack-charm-helpers |
192 | +# |
193 | +# Authors: |
194 | +# James Page <james.page@ubuntu.com> |
195 | +# Adam Gandelman <adamg@ubuntu.com> |
196 | +# |
197 | + |
198 | +import subprocess |
199 | + |
200 | +from charmhelpers.core.hookenv import ( |
201 | + config as config_get, |
202 | + relation_get, |
203 | + relation_ids, |
204 | + related_units as relation_list, |
205 | + log, |
206 | + INFO, |
207 | +) |
208 | + |
209 | + |
210 | +def get_cert(): |
211 | + cert = config_get('ssl_cert') |
212 | + key = config_get('ssl_key') |
213 | + if not (cert and key): |
214 | + log("Inspecting identity-service relations for SSL certificate.", |
215 | + level=INFO) |
216 | + cert = key = None |
217 | + for r_id in relation_ids('identity-service'): |
218 | + for unit in relation_list(r_id): |
219 | + if not cert: |
220 | + cert = relation_get('ssl_cert', |
221 | + rid=r_id, unit=unit) |
222 | + if not key: |
223 | + key = relation_get('ssl_key', |
224 | + rid=r_id, unit=unit) |
225 | + return (cert, key) |
226 | + |
227 | + |
228 | +def get_ca_cert(): |
229 | + ca_cert = None |
230 | + log("Inspecting identity-service relations for CA SSL certificate.", |
231 | + level=INFO) |
232 | + for r_id in relation_ids('identity-service'): |
233 | + for unit in relation_list(r_id): |
234 | + if not ca_cert: |
235 | + ca_cert = relation_get('ca_cert', |
236 | + rid=r_id, unit=unit) |
237 | + return ca_cert |
238 | + |
239 | + |
240 | +def install_ca_cert(ca_cert): |
241 | + if ca_cert: |
242 | + with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt', |
243 | + 'w') as crt: |
244 | + crt.write(ca_cert) |
245 | + subprocess.check_call(['update-ca-certificates', '--fresh']) |
246 | |
247 | === added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py' |
248 | --- hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000 |
249 | +++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2013-10-15 01:35:02 +0000 |
250 | @@ -0,0 +1,183 @@ |
251 | +# |
252 | +# Copyright 2012 Canonical Ltd. |
253 | +# |
254 | +# Authors: |
255 | +# James Page <james.page@ubuntu.com> |
256 | +# Adam Gandelman <adamg@ubuntu.com> |
257 | +# |
258 | + |
259 | +import subprocess |
260 | +import os |
261 | + |
262 | +from socket import gethostname as get_unit_hostname |
263 | + |
264 | +from charmhelpers.core.hookenv import ( |
265 | + log, |
266 | + relation_ids, |
267 | + related_units as relation_list, |
268 | + relation_get, |
269 | + config as config_get, |
270 | + INFO, |
271 | + ERROR, |
272 | + unit_get, |
273 | +) |
274 | + |
275 | + |
276 | +class HAIncompleteConfig(Exception): |
277 | + pass |
278 | + |
279 | + |
280 | +def is_clustered(): |
281 | + for r_id in (relation_ids('ha') or []): |
282 | + for unit in (relation_list(r_id) or []): |
283 | + clustered = relation_get('clustered', |
284 | + rid=r_id, |
285 | + unit=unit) |
286 | + if clustered: |
287 | + return True |
288 | + return False |
289 | + |
290 | + |
291 | +def is_leader(resource): |
292 | + cmd = [ |
293 | + "crm", "resource", |
294 | + "show", resource |
295 | + ] |
296 | + try: |
297 | + status = subprocess.check_output(cmd) |
298 | + except subprocess.CalledProcessError: |
299 | + return False |
300 | + else: |
301 | + if get_unit_hostname() in status: |
302 | + return True |
303 | + else: |
304 | + return False |
305 | + |
306 | + |
307 | +def peer_units(): |
308 | + peers = [] |
309 | + for r_id in (relation_ids('cluster') or []): |
310 | + for unit in (relation_list(r_id) or []): |
311 | + peers.append(unit) |
312 | + return peers |
313 | + |
314 | + |
315 | +def oldest_peer(peers): |
316 | + local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1]) |
317 | + for peer in peers: |
318 | + remote_unit_no = int(peer.split('/')[1]) |
319 | + if remote_unit_no < local_unit_no: |
320 | + return False |
321 | + return True |
322 | + |
323 | + |
324 | +def eligible_leader(resource): |
325 | + if is_clustered(): |
326 | + if not is_leader(resource): |
327 | + log('Deferring action to CRM leader.', level=INFO) |
328 | + return False |
329 | + else: |
330 | + peers = peer_units() |
331 | + if peers and not oldest_peer(peers): |
332 | + log('Deferring action to oldest service unit.', level=INFO) |
333 | + return False |
334 | + return True |
335 | + |
336 | + |
337 | +def https(): |
338 | + ''' |
339 | + Determines whether enough data has been provided in configuration |
340 | + or relation data to configure HTTPS. |
341 | + |
342 | + returns: boolean |
343 | + ''' |
344 | + if config_get('use-https') == "yes": |
345 | + return True |
346 | + if config_get('ssl_cert') and config_get('ssl_key'): |
347 | + return True |
348 | + for r_id in relation_ids('identity-service'): |
349 | + for unit in relation_list(r_id): |
350 | + rel_state = [ |
351 | + relation_get('https_keystone', rid=r_id, unit=unit), |
352 | + relation_get('ssl_cert', rid=r_id, unit=unit), |
353 | + relation_get('ssl_key', rid=r_id, unit=unit), |
354 | + relation_get('ca_cert', rid=r_id, unit=unit), |
355 | + ] |
356 | + # NOTE: works around (LP: #1203241) |
357 | + if (None not in rel_state) and ('' not in rel_state): |
358 | + return True |
359 | + return False |
360 | + |
361 | + |
362 | +def determine_api_port(public_port): |
363 | + ''' |
364 | + Determine correct API server listening port based on |
365 | + existence of HTTPS reverse proxy and/or haproxy. |
366 | + |
367 | + public_port: int: standard public port for given service |
368 | + |
369 | + returns: int: the correct listening port for the API service |
370 | + ''' |
371 | + i = 0 |
372 | + if len(peer_units()) > 0 or is_clustered(): |
373 | + i += 1 |
374 | + if https(): |
375 | + i += 1 |
376 | + return public_port - (i * 10) |
377 | + |
378 | + |
379 | +def determine_haproxy_port(public_port): |
380 | + ''' |
381 | + Description: Determine correct proxy listening port based on public IP + |
382 | + existence of HTTPS reverse proxy. |
383 | + |
384 | + public_port: int: standard public port for given service |
385 | + |
386 | + returns: int: the correct listening port for the HAProxy service |
387 | + ''' |
388 | + i = 0 |
389 | + if https(): |
390 | + i += 1 |
391 | + return public_port - (i * 10) |
392 | + |
393 | + |
394 | +def get_hacluster_config(): |
395 | + ''' |
396 | + Obtains all relevant configuration from charm configuration required |
397 | + for initiating a relation to hacluster: |
398 | + |
399 | + ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr |
400 | + |
401 | + returns: dict: A dict containing settings keyed by setting name. |
402 | + raises: HAIncompleteConfig if settings are missing. |
403 | + ''' |
404 | + settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr'] |
405 | + conf = {} |
406 | + for setting in settings: |
407 | + conf[setting] = config_get(setting) |
408 | + missing = [] |
409 | + [missing.append(s) for s, v in conf.iteritems() if v is None] |
410 | + if missing: |
411 | + log('Insufficient config data to configure hacluster.', level=ERROR) |
412 | + raise HAIncompleteConfig |
413 | + return conf |
414 | + |
415 | + |
416 | +def canonical_url(configs, vip_setting='vip'): |
417 | + ''' |
418 | + Returns the correct HTTP URL to this host given the state of HTTPS |
419 | + configuration and hacluster. |
420 | + |
421 | + :configs : OSConfigRenderer: A config templating object to inspect for |
422 | + a complete https context. |
423 | + :vip_setting: str: Setting in charm config that specifies |
424 | + VIP address. |
425 | + ''' |
426 | + scheme = 'http' |
427 | + if 'https' in configs.complete_contexts(): |
428 | + scheme = 'https' |
429 | + if is_clustered(): |
430 | + addr = config_get(vip_setting) |
431 | + else: |
432 | + addr = unit_get('private-address') |
433 | + return '%s://%s' % (scheme, addr) |
434 | |
435 | === added directory 'hooks/charmhelpers/contrib/openstack' |
436 | === added file 'hooks/charmhelpers/contrib/openstack/__init__.py' |
437 | === added file 'hooks/charmhelpers/contrib/openstack/context.py' |
438 | --- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000 |
439 | +++ hooks/charmhelpers/contrib/openstack/context.py 2013-10-15 01:35:02 +0000 |
440 | @@ -0,0 +1,522 @@ |
441 | +import json |
442 | +import os |
443 | + |
444 | +from base64 import b64decode |
445 | + |
446 | +from subprocess import ( |
447 | + check_call |
448 | +) |
449 | + |
450 | + |
451 | +from charmhelpers.fetch import ( |
452 | + apt_install, |
453 | + filter_installed_packages, |
454 | +) |
455 | + |
456 | +from charmhelpers.core.hookenv import ( |
457 | + config, |
458 | + local_unit, |
459 | + log, |
460 | + relation_get, |
461 | + relation_ids, |
462 | + related_units, |
463 | + unit_get, |
464 | + unit_private_ip, |
465 | + ERROR, |
466 | + WARNING, |
467 | +) |
468 | + |
469 | +from charmhelpers.contrib.hahelpers.cluster import ( |
470 | + determine_api_port, |
471 | + determine_haproxy_port, |
472 | + https, |
473 | + is_clustered, |
474 | + peer_units, |
475 | +) |
476 | + |
477 | +from charmhelpers.contrib.hahelpers.apache import ( |
478 | + get_cert, |
479 | + get_ca_cert, |
480 | +) |
481 | + |
482 | +from charmhelpers.contrib.openstack.neutron import ( |
483 | + neutron_plugin_attribute, |
484 | +) |
485 | + |
486 | +CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' |
487 | + |
488 | + |
489 | +class OSContextError(Exception): |
490 | + pass |
491 | + |
492 | + |
493 | +def ensure_packages(packages): |
494 | + '''Install but do not upgrade required plugin packages''' |
495 | + required = filter_installed_packages(packages) |
496 | + if required: |
497 | + apt_install(required, fatal=True) |
498 | + |
499 | + |
500 | +def context_complete(ctxt): |
501 | + _missing = [] |
502 | + for k, v in ctxt.iteritems(): |
503 | + if v is None or v == '': |
504 | + _missing.append(k) |
505 | + if _missing: |
506 | + log('Missing required data: %s' % ' '.join(_missing), level='INFO') |
507 | + return False |
508 | + return True |
509 | + |
510 | + |
511 | +class OSContextGenerator(object): |
512 | + interfaces = [] |
513 | + |
514 | + def __call__(self): |
515 | + raise NotImplementedError |
516 | + |
517 | + |
518 | +class SharedDBContext(OSContextGenerator): |
519 | + interfaces = ['shared-db'] |
520 | + |
521 | + def __init__(self, database=None, user=None, relation_prefix=None): |
522 | + ''' |
523 | + Allows inspecting relation for settings prefixed with relation_prefix. |
524 | + This is useful for parsing access for multiple databases returned via |
525 | + the shared-db interface (eg, nova_password, quantum_password) |
526 | + ''' |
527 | + self.relation_prefix = relation_prefix |
528 | + self.database = database |
529 | + self.user = user |
530 | + |
531 | + def __call__(self): |
532 | + self.database = self.database or config('database') |
533 | + self.user = self.user or config('database-user') |
534 | + if None in [self.database, self.user]: |
535 | + log('Could not generate shared_db context. ' |
536 | + 'Missing required charm config options. ' |
537 | + '(database name and user)') |
538 | + raise OSContextError |
539 | + ctxt = {} |
540 | + |
541 | + password_setting = 'password' |
542 | + if self.relation_prefix: |
543 | + password_setting = self.relation_prefix + '_password' |
544 | + |
545 | + for rid in relation_ids('shared-db'): |
546 | + for unit in related_units(rid): |
547 | + passwd = relation_get(password_setting, rid=rid, unit=unit) |
548 | + ctxt = { |
549 | + 'database_host': relation_get('db_host', rid=rid, |
550 | + unit=unit), |
551 | + 'database': self.database, |
552 | + 'database_user': self.user, |
553 | + 'database_password': passwd, |
554 | + } |
555 | + if context_complete(ctxt): |
556 | + return ctxt |
557 | + return {} |
558 | + |
559 | + |
560 | +class IdentityServiceContext(OSContextGenerator): |
561 | + interfaces = ['identity-service'] |
562 | + |
563 | + def __call__(self): |
564 | + log('Generating template context for identity-service') |
565 | + ctxt = {} |
566 | + |
567 | + for rid in relation_ids('identity-service'): |
568 | + for unit in related_units(rid): |
569 | + ctxt = { |
570 | + 'service_port': relation_get('service_port', rid=rid, |
571 | + unit=unit), |
572 | + 'service_host': relation_get('service_host', rid=rid, |
573 | + unit=unit), |
574 | + 'auth_host': relation_get('auth_host', rid=rid, unit=unit), |
575 | + 'auth_port': relation_get('auth_port', rid=rid, unit=unit), |
576 | + 'admin_tenant_name': relation_get('service_tenant', |
577 | + rid=rid, unit=unit), |
578 | + 'admin_user': relation_get('service_username', rid=rid, |
579 | + unit=unit), |
580 | + 'admin_password': relation_get('service_password', rid=rid, |
581 | + unit=unit), |
582 | + # XXX: Hard-coded http. |
583 | + 'service_protocol': 'http', |
584 | + 'auth_protocol': 'http', |
585 | + } |
586 | + if context_complete(ctxt): |
587 | + return ctxt |
588 | + return {} |
589 | + |
590 | + |
591 | +class AMQPContext(OSContextGenerator): |
592 | + interfaces = ['amqp'] |
593 | + |
594 | + def __call__(self): |
595 | + log('Generating template context for amqp') |
596 | + conf = config() |
597 | + try: |
598 | + username = conf['rabbit-user'] |
599 | + vhost = conf['rabbit-vhost'] |
600 | + except KeyError as e: |
601 | + log('Could not generate amqp context. ' |
602 | + 'Missing required charm config options: %s.' % e) |
603 | + raise OSContextError |
604 | + |
605 | + ctxt = {} |
606 | + for rid in relation_ids('amqp'): |
607 | + for unit in related_units(rid): |
608 | + if relation_get('clustered', rid=rid, unit=unit): |
609 | + ctxt['clustered'] = True |
610 | + ctxt['rabbitmq_host'] = relation_get('vip', rid=rid, |
611 | + unit=unit) |
612 | + else: |
613 | + ctxt['rabbitmq_host'] = relation_get('private-address', |
614 | + rid=rid, unit=unit) |
615 | + ctxt.update({ |
616 | + 'rabbitmq_user': username, |
617 | + 'rabbitmq_password': relation_get('password', rid=rid, |
618 | + unit=unit), |
619 | + 'rabbitmq_virtual_host': vhost, |
620 | + }) |
621 | + if context_complete(ctxt): |
622 | + # Sufficient information found = break out! |
623 | + break |
624 | + # Used for active/active rabbitmq >= grizzly |
625 | + ctxt['rabbitmq_hosts'] = [] |
626 | + for unit in related_units(rid): |
627 | + ctxt['rabbitmq_hosts'].append(relation_get('private-address', |
628 | + rid=rid, unit=unit)) |
629 | + if not context_complete(ctxt): |
630 | + return {} |
631 | + else: |
632 | + return ctxt |
633 | + |
634 | + |
635 | +class CephContext(OSContextGenerator): |
636 | + interfaces = ['ceph'] |
637 | + |
638 | + def __call__(self): |
639 | + '''This generates context for /etc/ceph/ceph.conf templates''' |
640 | + if not relation_ids('ceph'): |
641 | + return {} |
642 | + log('Generating template context for ceph') |
643 | + mon_hosts = [] |
644 | + auth = None |
645 | + key = None |
646 | + for rid in relation_ids('ceph'): |
647 | + for unit in related_units(rid): |
648 | + mon_hosts.append(relation_get('private-address', rid=rid, |
649 | + unit=unit)) |
650 | + auth = relation_get('auth', rid=rid, unit=unit) |
651 | + key = relation_get('key', rid=rid, unit=unit) |
652 | + |
653 | + ctxt = { |
654 | + 'mon_hosts': ' '.join(mon_hosts), |
655 | + 'auth': auth, |
656 | + 'key': key, |
657 | + } |
658 | + |
659 | + if not os.path.isdir('/etc/ceph'): |
660 | + os.mkdir('/etc/ceph') |
661 | + |
662 | + if not context_complete(ctxt): |
663 | + return {} |
664 | + |
665 | + ensure_packages(['ceph-common']) |
666 | + |
667 | + return ctxt |
668 | + |
669 | + |
670 | +class HAProxyContext(OSContextGenerator): |
671 | + interfaces = ['cluster'] |
672 | + |
673 | + def __call__(self): |
674 | + ''' |
675 | + Builds half a context for the haproxy template, which describes |
676 | + all peers to be included in the cluster. Each charm needs to include |
677 | + its own context generator that describes the port mapping. |
678 | + ''' |
679 | + if not relation_ids('cluster'): |
680 | + return {} |
681 | + |
682 | + cluster_hosts = {} |
683 | + l_unit = local_unit().replace('/', '-') |
684 | + cluster_hosts[l_unit] = unit_get('private-address') |
685 | + |
686 | + for rid in relation_ids('cluster'): |
687 | + for unit in related_units(rid): |
688 | + _unit = unit.replace('/', '-') |
689 | + addr = relation_get('private-address', rid=rid, unit=unit) |
690 | + cluster_hosts[_unit] = addr |
691 | + |
692 | + ctxt = { |
693 | + 'units': cluster_hosts, |
694 | + } |
695 | + if len(cluster_hosts.keys()) > 1: |
696 | + # Enable haproxy when we have enough peers. |
697 | + log('Ensuring haproxy enabled in /etc/default/haproxy.') |
698 | + with open('/etc/default/haproxy', 'w') as out: |
699 | + out.write('ENABLED=1\n') |
700 | + return ctxt |
701 | + log('HAProxy context is incomplete, this unit has no peers.') |
702 | + return {} |
703 | + |
704 | + |
705 | +class ImageServiceContext(OSContextGenerator): |
706 | + interfaces = ['image-service'] |
707 | + |
708 | + def __call__(self): |
709 | + ''' |
710 | + Obtains the glance API server from the image-service relation. Useful |
711 | + in nova and cinder (currently). |
712 | + ''' |
713 | + log('Generating template context for image-service.') |
714 | + rids = relation_ids('image-service') |
715 | + if not rids: |
716 | + return {} |
717 | + for rid in rids: |
718 | + for unit in related_units(rid): |
719 | + api_server = relation_get('glance-api-server', |
720 | + rid=rid, unit=unit) |
721 | + if api_server: |
722 | + return {'glance_api_servers': api_server} |
723 | + log('ImageService context is incomplete. ' |
724 | + 'Missing required relation data.') |
725 | + return {} |
726 | + |
727 | + |
728 | +class ApacheSSLContext(OSContextGenerator): |
729 | + """ |
730 | + Generates a context for an apache vhost configuration that configures |
731 | + HTTPS reverse proxying for one or many endpoints. Generated context |
732 | + looks something like: |
733 | + { |
734 | + 'namespace': 'cinder', |
735 | + 'private_address': 'iscsi.mycinderhost.com', |
736 | + 'endpoints': [(8776, 8766), (8777, 8767)] |
737 | + } |
738 | + |
739 | + The endpoints list consists of tuples mapping external ports |
740 | + to internal ports. |
741 | + """ |
742 | + interfaces = ['https'] |
743 | + |
744 | + # charms should inherit this context and set external ports |
745 | + # and service namespace accordingly. |
746 | + external_ports = [] |
747 | + service_namespace = None |
748 | + |
749 | + def enable_modules(self): |
750 | + cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http'] |
751 | + check_call(cmd) |
752 | + |
753 | + def configure_cert(self): |
754 | + if not os.path.isdir('/etc/apache2/ssl'): |
755 | + os.mkdir('/etc/apache2/ssl') |
756 | + ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace) |
757 | + if not os.path.isdir(ssl_dir): |
758 | + os.mkdir(ssl_dir) |
759 | + cert, key = get_cert() |
760 | + with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out: |
761 | + cert_out.write(b64decode(cert)) |
762 | + with open(os.path.join(ssl_dir, 'key'), 'w') as key_out: |
763 | + key_out.write(b64decode(key)) |
764 | + ca_cert = get_ca_cert() |
765 | + if ca_cert: |
766 | + with open(CA_CERT_PATH, 'w') as ca_out: |
767 | + ca_out.write(b64decode(ca_cert)) |
768 | + check_call(['update-ca-certificates']) |
769 | + |
770 | + def __call__(self): |
771 | + if isinstance(self.external_ports, basestring): |
772 | + self.external_ports = [self.external_ports] |
773 | + if (not self.external_ports or not https()): |
774 | + return {} |
775 | + |
776 | + self.configure_cert() |
777 | + self.enable_modules() |
778 | + |
779 | + ctxt = { |
780 | + 'namespace': self.service_namespace, |
781 | + 'private_address': unit_get('private-address'), |
782 | + 'endpoints': [] |
783 | + } |
784 | + for ext_port in self.external_ports: |
785 | + if peer_units() or is_clustered(): |
786 | + int_port = determine_haproxy_port(ext_port) |
787 | + else: |
788 | + int_port = determine_api_port(ext_port) |
789 | + portmap = (int(ext_port), int(int_port)) |
790 | + ctxt['endpoints'].append(portmap) |
791 | + return ctxt |
792 | + |
793 | + |
794 | +class NeutronContext(object): |
795 | + interfaces = [] |
796 | + |
797 | + @property |
798 | + def plugin(self): |
799 | + return None |
800 | + |
801 | + @property |
802 | + def network_manager(self): |
803 | + return None |
804 | + |
805 | + @property |
806 | + def packages(self): |
807 | + return neutron_plugin_attribute( |
808 | + self.plugin, 'packages', self.network_manager) |
809 | + |
810 | + @property |
811 | + def neutron_security_groups(self): |
812 | + return None |
813 | + |
814 | + def _ensure_packages(self): |
815 | + [ensure_packages(pkgs) for pkgs in self.packages] |
816 | + |
817 | + def _save_flag_file(self): |
818 | + if self.network_manager == 'quantum': |
819 | + _file = '/etc/nova/quantum_plugin.conf' |
820 | + else: |
821 | + _file = '/etc/nova/neutron_plugin.conf' |
822 | + with open(_file, 'wb') as out: |
823 | + out.write(self.plugin + '\n') |
824 | + |
825 | + def ovs_ctxt(self): |
826 | + driver = neutron_plugin_attribute(self.plugin, 'driver', |
827 | + self.network_manager) |
828 | + |
829 | + ovs_ctxt = { |
830 | + 'core_plugin': driver, |
831 | + 'neutron_plugin': 'ovs', |
832 | + 'neutron_security_groups': self.neutron_security_groups, |
833 | + 'local_ip': unit_private_ip(), |
834 | + } |
835 | + |
836 | + return ovs_ctxt |
837 | + |
838 | + def __call__(self): |
839 | + self._ensure_packages() |
840 | + |
841 | + if self.network_manager not in ['quantum', 'neutron']: |
842 | + return {} |
843 | + |
844 | + if not self.plugin: |
845 | + return {} |
846 | + |
847 | + ctxt = {'network_manager': self.network_manager} |
848 | + |
849 | + if self.plugin == 'ovs': |
850 | + ctxt.update(self.ovs_ctxt()) |
851 | + |
852 | + self._save_flag_file() |
853 | + return ctxt |
854 | + |
855 | + |
856 | +class OSConfigFlagContext(OSContextGenerator): |
857 | + ''' |
858 | + Responsible for adding user-defined config-flags from charm config |
859 | + to a template context. |
860 | + ''' |
861 | + def __call__(self): |
862 | + config_flags = config('config-flags') |
863 | + if not config_flags or config_flags in ['None', '']: |
864 | + return {} |
865 | + config_flags = config_flags.split(',') |
866 | + flags = {} |
867 | + for flag in config_flags: |
868 | + if '=' not in flag: |
869 | + log('Improperly formatted config-flag, expected k=v ' |
870 | + 'got %s' % flag, level=WARNING) |
871 | + continue |
872 | + k, v = flag.split('=') |
873 | + flags[k.strip()] = v |
874 | + ctxt = {'user_config_flags': flags} |
875 | + return ctxt |
876 | + |
877 | + |
878 | +class SubordinateConfigContext(OSContextGenerator): |
879 | + """ |
880 | + Responsible for inspecting relations to subordinates that |
881 | + may be exporting required config via a json blob. |
882 | + |
883 | + The subordinate interface allows subordinates to export their |
884 | + configuration requirements to the principal for multiple config |
885 | + files and multiple services. E.g., a subordinate that has interfaces |
886 | + to both glance and nova may export the following YAML blob as JSON: |
887 | + |
888 | + glance: |
889 | + /etc/glance/glance-api.conf: |
890 | + sections: |
891 | + DEFAULT: |
892 | + - [key1, value1] |
893 | + /etc/glance/glance-registry.conf: |
894 | + MYSECTION: |
895 | + - [key2, value2] |
896 | + nova: |
897 | + /etc/nova/nova.conf: |
898 | + sections: |
899 | + DEFAULT: |
900 | + - [key3, value3] |
901 | + |
902 | + |
903 | + It is then up to the principal charms to subscribe this context to |
904 | + the service+config file it is interested in. Configuration data will |
905 | + be available in the template context, in glance's case, as: |
906 | + ctxt = { |
907 | + ... other context ... |
908 | + 'subordinate_config': { |
909 | + 'DEFAULT': { |
910 | + 'key1': 'value1', |
911 | + }, |
912 | + 'MYSECTION': { |
913 | + 'key2': 'value2', |
914 | + }, |
915 | + } |
916 | + } |
917 | + |
918 | + """ |
919 | + def __init__(self, service, config_file, interface): |
920 | + """ |
921 | + :param service : Service name key to query in any subordinate |
922 | + data found |
923 | + :param config_file : Service's config file to query sections |
924 | + :param interface : Subordinate interface to inspect |
925 | + """ |
926 | + self.service = service |
927 | + self.config_file = config_file |
928 | + self.interface = interface |
929 | + |
930 | + def __call__(self): |
931 | + ctxt = {} |
932 | + for rid in relation_ids(self.interface): |
933 | + for unit in related_units(rid): |
934 | + sub_config = relation_get('subordinate_configuration', |
935 | + rid=rid, unit=unit) |
936 | + if sub_config and sub_config != '': |
937 | + try: |
938 | + sub_config = json.loads(sub_config) |
939 | + except: |
940 | + log('Could not parse JSON from subordinate_config ' |
941 | + 'setting from %s' % rid, level=ERROR) |
942 | + continue |
943 | + |
944 | + if self.service not in sub_config: |
945 | + log('Found subordinate_config on %s but it contained ' |
946 | + 'nothing for %s service' % (rid, self.service)) |
947 | + continue |
948 | + |
949 | + sub_config = sub_config[self.service] |
950 | + if self.config_file not in sub_config: |
951 | + log('Found subordinate_config on %s but it contained ' |
952 | + 'nothing for %s' % (rid, self.config_file)) |
953 | + continue |
954 | + |
955 | + sub_config = sub_config[self.config_file] |
956 | + for k, v in sub_config.iteritems(): |
957 | + ctxt[k] = v |
958 | + |
959 | + if not ctxt: |
960 | + ctxt['sections'] = {} |
961 | + |
962 | + return ctxt |
963 | |
964 | === added file 'hooks/charmhelpers/contrib/openstack/neutron.py' |
965 | --- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000 |
966 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2013-10-15 01:35:02 +0000 |
967 | @@ -0,0 +1,117 @@ |
968 | +# Various utilities for dealing with Neutron and the renaming from Quantum. |
969 | + |
970 | +from subprocess import check_output |
971 | + |
972 | +from charmhelpers.core.hookenv import ( |
973 | + config, |
974 | + log, |
975 | + ERROR, |
976 | +) |
977 | + |
978 | +from charmhelpers.contrib.openstack.utils import os_release |
979 | + |
980 | + |
981 | +def headers_package(): |
982 | + """Ensures correct linux-headers for running kernel are installed, |
983 | + for building DKMS package""" |
984 | + kver = check_output(['uname', '-r']).strip() |
985 | + return 'linux-headers-%s' % kver |
986 | + |
987 | + |
988 | +# legacy |
989 | +def quantum_plugins(): |
990 | + from charmhelpers.contrib.openstack import context |
991 | + return { |
992 | + 'ovs': { |
993 | + 'config': '/etc/quantum/plugins/openvswitch/' |
994 | + 'ovs_quantum_plugin.ini', |
995 | + 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.' |
996 | + 'OVSQuantumPluginV2', |
997 | + 'contexts': [ |
998 | + context.SharedDBContext(user=config('neutron-database-user'), |
999 | + database=config('neutron-database'), |
1000 | + relation_prefix='neutron')], |
1001 | + 'services': ['quantum-plugin-openvswitch-agent'], |
1002 | + 'packages': [[headers_package(), 'openvswitch-datapath-dkms'], |
1003 | + ['quantum-plugin-openvswitch-agent']], |
1004 | + }, |
1005 | + 'nvp': { |
1006 | + 'config': '/etc/quantum/plugins/nicira/nvp.ini', |
1007 | + 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.' |
1008 | + 'QuantumPlugin.NvpPluginV2', |
1009 | + 'services': [], |
1010 | + 'packages': [], |
1011 | + } |
1012 | + } |
1013 | + |
1014 | + |
1015 | +def neutron_plugins(): |
1016 | + from charmhelpers.contrib.openstack import context |
1017 | + return { |
1018 | + 'ovs': { |
1019 | + 'config': '/etc/neutron/plugins/openvswitch/' |
1020 | + 'ovs_neutron_plugin.ini', |
1021 | + 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.' |
1022 | + 'OVSNeutronPluginV2', |
1023 | + 'contexts': [ |
1024 | + context.SharedDBContext(user=config('neutron-database-user'), |
1025 | + database=config('neutron-database'), |
1026 | + relation_prefix='neutron')], |
1027 | + 'services': ['neutron-plugin-openvswitch-agent'], |
1028 | + 'packages': [[headers_package(), 'openvswitch-datapath-dkms'], |
1029 | + ['quantum-plugin-openvswitch-agent']], |
1030 | + }, |
1031 | + 'nvp': { |
1032 | + 'config': '/etc/neutron/plugins/nicira/nvp.ini', |
1033 | + 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.' |
1034 | + 'NeutronPlugin.NvpPluginV2', |
1035 | + 'services': [], |
1036 | + 'packages': [], |
1037 | + } |
1038 | + } |
1039 | + |
1040 | + |
1041 | +def neutron_plugin_attribute(plugin, attr, net_manager=None): |
1042 | + manager = net_manager or network_manager() |
1043 | + if manager == 'quantum': |
1044 | + plugins = quantum_plugins() |
1045 | + elif manager == 'neutron': |
1046 | + plugins = neutron_plugins() |
1047 | + else: |
1048 | + log('Error: Network manager does not support plugins.') |
1049 | + raise Exception |
1050 | + |
1051 | + try: |
1052 | + _plugin = plugins[plugin] |
1053 | + except KeyError: |
1054 | + log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR) |
1055 | + raise Exception |
1056 | + |
1057 | + try: |
1058 | + return _plugin[attr] |
1059 | + except KeyError: |
1060 | + return None |
1061 | + |
1062 | + |
1063 | +def network_manager(): |
1064 | + ''' |
1065 | + Deals with the renaming of Quantum to Neutron in H and any situations |
1066 | + that require compatibility (e.g., deploying H with network-manager=quantum, |
1067 | + upgrading from G). |
1068 | + ''' |
1069 | + release = os_release('nova-common') |
1070 | + manager = config('network-manager').lower() |
1071 | + |
1072 | + if manager not in ['quantum', 'neutron']: |
1073 | + return manager |
1074 | + |
1075 | + if release in ['essex']: |
1076 | + # E does not support neutron |
1077 | + log('Neutron networking not supported in Essex.', level=ERROR) |
1078 | + raise Exception |
1079 | + elif release in ['folsom', 'grizzly']: |
1080 | + # neutron is named quantum in F and G |
1081 | + return 'quantum' |
1082 | + else: |
1083 | + # ensure accurate naming for all releases post-H |
1084 | + return 'neutron' |
1085 | |
1086 | === added directory 'hooks/charmhelpers/contrib/openstack/templates' |
1087 | === added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py' |
1088 | --- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000 |
1089 | +++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2013-10-15 01:35:02 +0000 |
1090 | @@ -0,0 +1,2 @@ |
1091 | +# dummy __init__.py to fool syncer into thinking this is a syncable python |
1092 | +# module |
1093 | |
1094 | === added file 'hooks/charmhelpers/contrib/openstack/templating.py' |
1095 | --- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000 |
1096 | +++ hooks/charmhelpers/contrib/openstack/templating.py 2013-10-15 01:35:02 +0000 |
1097 | @@ -0,0 +1,280 @@ |
1098 | +import os |
1099 | + |
1100 | +from charmhelpers.fetch import apt_install |
1101 | + |
1102 | +from charmhelpers.core.hookenv import ( |
1103 | + log, |
1104 | + ERROR, |
1105 | + INFO |
1106 | +) |
1107 | + |
1108 | +from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES |
1109 | + |
1110 | +try: |
1111 | + from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions |
1112 | +except ImportError: |
1113 | + # python-jinja2 may not be installed yet, or we're running unittests. |
1114 | + FileSystemLoader = ChoiceLoader = Environment = exceptions = None |
1115 | + |
1116 | + |
1117 | +class OSConfigException(Exception): |
1118 | + pass |
1119 | + |
1120 | + |
1121 | +def get_loader(templates_dir, os_release): |
1122 | + """ |
1123 | + Create a jinja2.ChoiceLoader containing template dirs up to |
1124 | + and including os_release. If a release's template directory |
1125 | + is missing at templates_dir, it will be omitted from the loader. |
1126 | + templates_dir is added to the bottom of the search list as a base |
1127 | + loading dir. |
1128 | + |
1129 | + A charm may also ship a templates dir with this module |
1130 | + and it will be appended to the bottom of the search list, eg: |
1131 | + hooks/charmhelpers/contrib/openstack/templates. |
1132 | + |
1133 | + :param templates_dir: str: Base template directory containing release |
1134 | + sub-directories. |
1135 | + :param os_release : str: OpenStack release codename to construct template |
1136 | + loader. |
1137 | + |
1138 | + :returns : jinja2.ChoiceLoader constructed with a list of |
1139 | + jinja2.FilesystemLoaders, ordered in descending |
1140 | + order by OpenStack release. |
1141 | + """ |
1142 | + tmpl_dirs = [(rel, os.path.join(templates_dir, rel)) |
1143 | + for rel in OPENSTACK_CODENAMES.itervalues()] |
1144 | + |
1145 | + if not os.path.isdir(templates_dir): |
1146 | + log('Templates directory not found @ %s.' % templates_dir, |
1147 | + level=ERROR) |
1148 | + raise OSConfigException |
1149 | + |
1150 | + # the bottom contains templates_dir and possibly a common templates dir |
1151 | + # shipped with the helper. |
1152 | + loaders = [FileSystemLoader(templates_dir)] |
1153 | + helper_templates = os.path.join(os.path.dirname(__file__), 'templates') |
1154 | + if os.path.isdir(helper_templates): |
1155 | + loaders.append(FileSystemLoader(helper_templates)) |
1156 | + |
1157 | + for rel, tmpl_dir in tmpl_dirs: |
1158 | + if os.path.isdir(tmpl_dir): |
1159 | + loaders.insert(0, FileSystemLoader(tmpl_dir)) |
1160 | + if rel == os_release: |
1161 | + break |
1162 | + log('Creating choice loader with dirs: %s' % |
1163 | + [l.searchpath for l in loaders], level=INFO) |
1164 | + return ChoiceLoader(loaders) |
1165 | + |
1166 | + |
1167 | +class OSConfigTemplate(object): |
1168 | + """ |
1169 | + Associates a config file template with a list of context generators. |
1170 | + Responsible for constructing a template context based on those generators. |
1171 | + """ |
1172 | + def __init__(self, config_file, contexts): |
1173 | + self.config_file = config_file |
1174 | + |
1175 | + if hasattr(contexts, '__call__'): |
1176 | + self.contexts = [contexts] |
1177 | + else: |
1178 | + self.contexts = contexts |
1179 | + |
1180 | + self._complete_contexts = [] |
1181 | + |
1182 | + def context(self): |
1183 | + ctxt = {} |
1184 | + for context in self.contexts: |
1185 | + _ctxt = context() |
1186 | + if _ctxt: |
1187 | + ctxt.update(_ctxt) |
1188 | + # track interfaces for every complete context. |
1189 | + [self._complete_contexts.append(interface) |
1190 | + for interface in context.interfaces |
1191 | + if interface not in self._complete_contexts] |
1192 | + return ctxt |
1193 | + |
1194 | + def complete_contexts(self): |
1195 | + ''' |
1196 | + Return a list of interfaces that have satisfied contexts. |
1197 | + ''' |
1198 | + if self._complete_contexts: |
1199 | + return self._complete_contexts |
1200 | + self.context() |
1201 | + return self._complete_contexts |
1202 | + |
1203 | + |
1204 | +class OSConfigRenderer(object): |
1205 | + """ |
1206 | + This class provides a common templating system to be used by OpenStack |
1207 | + charms. It is intended to help charms share common code and templates, |
1208 | + and ease the burden of managing config templates across multiple OpenStack |
1209 | + releases. |
1210 | + |
1211 | + Basic usage: |
1212 | + # import some common context generators from charmhelpers |
1213 | + from charmhelpers.contrib.openstack import context |
1214 | + |
1215 | + # Create a renderer object for a specific OS release. |
1216 | + configs = OSConfigRenderer(templates_dir='/tmp/templates', |
1217 | + openstack_release='folsom') |
1218 | + # register some config files with context generators. |
1219 | + configs.register(config_file='/etc/nova/nova.conf', |
1220 | + contexts=[context.SharedDBContext(), |
1221 | + context.AMQPContext()]) |
1222 | + configs.register(config_file='/etc/nova/api-paste.ini', |
1223 | + contexts=[context.IdentityServiceContext()]) |
1224 | + configs.register(config_file='/etc/haproxy/haproxy.conf', |
1225 | + contexts=[context.HAProxyContext()]) |
1226 | + # write out a single config |
1227 | + configs.write('/etc/nova/nova.conf') |
1228 | + # write out all registered configs |
1229 | + configs.write_all() |
1230 | + |
1231 | + Details: |
1232 | + |
1233 | + OpenStack Releases and template loading |
1234 | + --------------------------------------- |
1235 | + When the object is instantiated, it is associated with a specific OS |
1236 | + release. This dictates how the template loader will be constructed. |
1237 | + |
1238 | + The constructed loader attempts to load the template from several places |
1239 | + in the following order: |
1240 | + - from the most recent OS release-specific template dir (if one exists) |
1241 | + - the base templates_dir |
1242 | + - a template directory shipped in the charm with this helper file. |
1243 | + |
1244 | + |
1245 | + For the example above, '/tmp/templates' contains the following structure: |
1246 | + /tmp/templates/nova.conf |
1247 | + /tmp/templates/api-paste.ini |
1248 | + /tmp/templates/grizzly/api-paste.ini |
1249 | + /tmp/templates/havana/api-paste.ini |
1250 | + |
1251 | + Since it was registered with the grizzly release, it first searches |
1252 | + the grizzly directory for nova.conf, then the templates dir. |
1253 | + |
1254 | + When writing api-paste.ini, it will find the template in the grizzly |
1255 | + directory. |
1256 | + |
1257 | + If the object were created with folsom, it would fall back to the |
1258 | + base templates dir for its api-paste.ini template. |
1259 | + |
1260 | + This system should help manage changes in config files through |
1261 | + openstack releases, allowing charms to fall back to the most recently |
1262 | + updated config template for a given release. |
1263 | + |
1264 | + The haproxy.conf, since it is not shipped in the templates dir, will |
1265 | + be loaded from the module directory's template directory, eg |
1266 | + $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows |
1267 | + us to ship common templates (haproxy, apache) with the helpers. |
1268 | + |
1269 | + Context generators |
1270 | + --------------------------------------- |
1271 | + Context generators are used to generate template contexts during hook |
1272 | + execution. Doing so may require inspecting service relations, charm |
1273 | + config, etc. When registered, a config file is associated with a list |
1274 | + of generators. When a template is rendered and written, all context |
1275 | + generators are called in a chain to generate the context dictionary |
1276 | + passed to the jinja2 template. See context.py for more info. |
1277 | + """ |
1278 | + def __init__(self, templates_dir, openstack_release): |
1279 | + if not os.path.isdir(templates_dir): |
1280 | + log('Could not locate templates dir %s' % templates_dir, |
1281 | + level=ERROR) |
1282 | + raise OSConfigException |
1283 | + |
1284 | + self.templates_dir = templates_dir |
1285 | + self.openstack_release = openstack_release |
1286 | + self.templates = {} |
1287 | + self._tmpl_env = None |
1288 | + |
1289 | + if None in [Environment, ChoiceLoader, FileSystemLoader]: |
1290 | + # if this code is running, the object is created pre-install hook. |
1291 | + # jinja2 shouldn't get touched until the module is reloaded on next |
1292 | + # hook execution, with proper jinja2 bits successfully imported. |
1293 | + apt_install('python-jinja2') |
1294 | + |
1295 | + def register(self, config_file, contexts): |
1296 | + """ |
1297 | + Register a config file with a list of context generators to be called |
1298 | + during rendering. |
1299 | + """ |
1300 | + self.templates[config_file] = OSConfigTemplate(config_file=config_file, |
1301 | + contexts=contexts) |
1302 | + log('Registered config file: %s' % config_file, level=INFO) |
1303 | + |
1304 | + def _get_tmpl_env(self): |
1305 | + if not self._tmpl_env: |
1306 | + loader = get_loader(self.templates_dir, self.openstack_release) |
1307 | + self._tmpl_env = Environment(loader=loader) |
1308 | + |
1309 | + def _get_template(self, template): |
1310 | + self._get_tmpl_env() |
1311 | + template = self._tmpl_env.get_template(template) |
1312 | + log('Loaded template from %s' % template.filename, level=INFO) |
1313 | + return template |
1314 | + |
1315 | + def render(self, config_file): |
1316 | + if config_file not in self.templates: |
1317 | + log('Config not registered: %s' % config_file, level=ERROR) |
1318 | + raise OSConfigException |
1319 | + ctxt = self.templates[config_file].context() |
1320 | + |
1321 | + _tmpl = os.path.basename(config_file) |
1322 | + try: |
1323 | + template = self._get_template(_tmpl) |
1324 | + except exceptions.TemplateNotFound: |
1325 | + # if no template is found with basename, try looking for it |
1326 | + # using a munged full path, eg: |
1327 | + # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf |
1328 | + _tmpl = '_'.join(config_file.split('/')[1:]) |
1329 | + try: |
1330 | + template = self._get_template(_tmpl) |
1331 | + except exceptions.TemplateNotFound as e: |
1332 | + log('Could not load template from %s by %s or %s.' % |
1333 | + (self.templates_dir, os.path.basename(config_file), _tmpl), |
1334 | + level=ERROR) |
1335 | + raise e |
1336 | + |
1337 | + log('Rendering from template: %s' % _tmpl, level=INFO) |
1338 | + return template.render(ctxt) |
1339 | + |
1340 | + def write(self, config_file): |
1341 | + """ |
1342 | + Write a single config file, raises if config file is not registered. |
1343 | + """ |
1344 | + if config_file not in self.templates: |
1345 | + log('Config not registered: %s' % config_file, level=ERROR) |
1346 | + raise OSConfigException |
1347 | + |
1348 | + _out = self.render(config_file) |
1349 | + |
1350 | + with open(config_file, 'wb') as out: |
1351 | + out.write(_out) |
1352 | + |
1353 | + log('Wrote template %s.' % config_file, level=INFO) |
1354 | + |
1355 | + def write_all(self): |
1356 | + """ |
1357 | + Write out all registered config files. |
1358 | + """ |
1359 | + [self.write(k) for k in self.templates.iterkeys()] |
1360 | + |
1361 | + def set_release(self, openstack_release): |
1362 | + """ |
1363 | + Resets the template environment and generates a new template loader |
1364 | + based on the new OpenStack release. |
1365 | + """ |
1366 | + self._tmpl_env = None |
1367 | + self.openstack_release = openstack_release |
1368 | + self._get_tmpl_env() |
1369 | + |
1370 | + def complete_contexts(self): |
1371 | + ''' |
1372 | + Returns a list of context interfaces that yield a complete context. |
1373 | + ''' |
1374 | + interfaces = [] |
1375 | + [interfaces.extend(i.complete_contexts()) |
1376 | + for i in self.templates.itervalues()] |
1377 | + return interfaces |
1378 | |
1379 | === added file 'hooks/charmhelpers/contrib/openstack/utils.py' |
1380 | --- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000 |
1381 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2013-10-15 01:35:02 +0000 |
1382 | @@ -0,0 +1,365 @@ |
1383 | +#!/usr/bin/python |
1384 | + |
1385 | +# Common python helper functions used for OpenStack charms. |
1386 | +from collections import OrderedDict |
1387 | + |
1388 | +import apt_pkg as apt |
1389 | +import subprocess |
1390 | +import os |
1391 | +import socket |
1392 | +import sys |
1393 | + |
1394 | +from charmhelpers.core.hookenv import ( |
1395 | + config, |
1396 | + log as juju_log, |
1397 | + charm_dir, |
1398 | +) |
1399 | + |
1400 | +from charmhelpers.core.host import ( |
1401 | + lsb_release, |
1402 | +) |
1403 | + |
1404 | +from charmhelpers.fetch import ( |
1405 | + apt_install, |
1406 | +) |
1407 | + |
1408 | +CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" |
1409 | +CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' |
1410 | + |
1411 | +UBUNTU_OPENSTACK_RELEASE = OrderedDict([ |
1412 | + ('oneiric', 'diablo'), |
1413 | + ('precise', 'essex'), |
1414 | + ('quantal', 'folsom'), |
1415 | + ('raring', 'grizzly'), |
1416 | + ('saucy', 'havana'), |
1417 | +]) |
1418 | + |
1419 | + |
1420 | +OPENSTACK_CODENAMES = OrderedDict([ |
1421 | + ('2011.2', 'diablo'), |
1422 | + ('2012.1', 'essex'), |
1423 | + ('2012.2', 'folsom'), |
1424 | + ('2013.1', 'grizzly'), |
1425 | + ('2013.2', 'havana'), |
1426 | + ('2014.1', 'icehouse'), |
1427 | +]) |
1428 | + |
1429 | +# The ugly duckling |
1430 | +SWIFT_CODENAMES = OrderedDict([ |
1431 | + ('1.4.3', 'diablo'), |
1432 | + ('1.4.8', 'essex'), |
1433 | + ('1.7.4', 'folsom'), |
1434 | + ('1.8.0', 'grizzly'), |
1435 | + ('1.7.7', 'grizzly'), |
1436 | + ('1.7.6', 'grizzly'), |
1437 | + ('1.10.0', 'havana'), |
1438 | + ('1.9.1', 'havana'), |
1439 | + ('1.9.0', 'havana'), |
1440 | +]) |
1441 | + |
1442 | + |
1443 | +def error_out(msg): |
1444 | + juju_log("FATAL ERROR: %s" % msg, level='ERROR') |
1445 | + sys.exit(1) |
1446 | + |
1447 | + |
1448 | +def get_os_codename_install_source(src): |
1449 | + '''Derive OpenStack release codename from a given installation source.''' |
1450 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
1451 | + rel = '' |
1452 | + if src == 'distro': |
1453 | + try: |
1454 | + rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel] |
1455 | + except KeyError: |
1456 | + e = 'Could not derive openstack release for '\ |
1457 | + 'this Ubuntu release: %s' % ubuntu_rel |
1458 | + error_out(e) |
1459 | + return rel |
1460 | + |
1461 | + if src.startswith('cloud:'): |
1462 | + ca_rel = src.split(':')[1] |
1463 | + ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0] |
1464 | + return ca_rel |
1465 | + |
1466 | + # Best guess match based on deb string provided |
1467 | + if src.startswith('deb') or src.startswith('ppa'): |
1468 | + for k, v in OPENSTACK_CODENAMES.iteritems(): |
1469 | + if v in src: |
1470 | + return v |
1471 | + |
1472 | + |
1473 | +def get_os_version_install_source(src): |
1474 | + codename = get_os_codename_install_source(src) |
1475 | + return get_os_version_codename(codename) |
1476 | + |
1477 | + |
1478 | +def get_os_codename_version(vers): |
1479 | + '''Determine OpenStack codename from version number.''' |
1480 | + try: |
1481 | + return OPENSTACK_CODENAMES[vers] |
1482 | + except KeyError: |
1483 | + e = 'Could not determine OpenStack codename for version %s' % vers |
1484 | + error_out(e) |
1485 | + |
1486 | + |
1487 | +def get_os_version_codename(codename): |
1488 | + '''Determine OpenStack version number from codename.''' |
1489 | + for k, v in OPENSTACK_CODENAMES.iteritems(): |
1490 | + if v == codename: |
1491 | + return k |
1492 | + e = 'Could not derive OpenStack version for '\ |
1493 | + 'codename: %s' % codename |
1494 | + error_out(e) |
1495 | + |
1496 | + |
1497 | +def get_os_codename_package(package, fatal=True): |
1498 | + '''Derive OpenStack release codename from an installed package.''' |
1499 | + apt.init() |
1500 | + cache = apt.Cache() |
1501 | + |
1502 | + try: |
1503 | + pkg = cache[package] |
1504 | + except: |
1505 | + if not fatal: |
1506 | + return None |
1507 | + # the package is unknown to the current apt cache. |
1508 | + e = 'Could not determine version of package with no installation '\ |
1509 | + 'candidate: %s' % package |
1510 | + error_out(e) |
1511 | + |
1512 | + if not pkg.current_ver: |
1513 | + if not fatal: |
1514 | + return None |
1515 | + # package is known, but no version is currently installed. |
1516 | + e = 'Could not determine version of uninstalled package: %s' % package |
1517 | + error_out(e) |
1518 | + |
1519 | + vers = apt.upstream_version(pkg.current_ver.ver_str) |
1520 | + |
1521 | + try: |
1522 | + if 'swift' in pkg.name: |
1523 | + swift_vers = vers[:5] |
1524 | + if swift_vers not in SWIFT_CODENAMES: |
1525 | + # Deal with 1.10.0 upward |
1526 | + swift_vers = vers[:6] |
1527 | + return SWIFT_CODENAMES[swift_vers] |
1528 | + else: |
1529 | + vers = vers[:6] |
1530 | + return OPENSTACK_CODENAMES[vers] |
1531 | + except KeyError: |
1532 | + e = 'Could not determine OpenStack codename for version %s' % vers |
1533 | + error_out(e) |
1534 | + |
1535 | + |
1536 | +def get_os_version_package(pkg, fatal=True): |
1537 | + '''Derive OpenStack version number from an installed package.''' |
1538 | + codename = get_os_codename_package(pkg, fatal=fatal) |
1539 | + |
1540 | + if not codename: |
1541 | + return None |
1542 | + |
1543 | + if 'swift' in pkg: |
1544 | + vers_map = SWIFT_CODENAMES |
1545 | + else: |
1546 | + vers_map = OPENSTACK_CODENAMES |
1547 | + |
1548 | + for version, cname in vers_map.iteritems(): |
1549 | + if cname == codename: |
1550 | + return version |
1551 | + #e = "Could not determine OpenStack version for package: %s" % pkg |
1552 | + #error_out(e) |
1553 | + |
1554 | + |
1555 | +os_rel = None |
1556 | + |
1557 | + |
1558 | +def os_release(package, base='essex'): |
1559 | + ''' |
1560 | + Returns OpenStack release codename from a cached global. |
1561 | + If the codename can not be determined from either an installed package or |
1562 | + the installation source, the earliest release supported by the charm should |
1563 | + be returned. |
1564 | + ''' |
1565 | + global os_rel |
1566 | + if os_rel: |
1567 | + return os_rel |
1568 | + os_rel = (get_os_codename_package(package, fatal=False) or |
1569 | + get_os_codename_install_source(config('openstack-origin')) or |
1570 | + base) |
1571 | + return os_rel |
1572 | + |
1573 | + |
1574 | +def import_key(keyid): |
1575 | + cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \ |
1576 | + "--recv-keys %s" % keyid |
1577 | + try: |
1578 | + subprocess.check_call(cmd.split(' ')) |
1579 | + except subprocess.CalledProcessError: |
1580 | + error_out("Error importing repo key %s" % keyid) |
1581 | + |
1582 | + |
1583 | +def configure_installation_source(rel): |
1584 | + '''Configure apt installation source.''' |
1585 | + if rel == 'distro': |
1586 | + return |
1587 | + elif rel[:4] == "ppa:": |
1588 | + src = rel |
1589 | + subprocess.check_call(["add-apt-repository", "-y", src]) |
1590 | + elif rel[:3] == "deb": |
1591 | + l = len(rel.split('|')) |
1592 | + if l == 2: |
1593 | + src, key = rel.split('|') |
1594 | + juju_log("Importing PPA key from keyserver for %s" % src) |
1595 | + import_key(key) |
1596 | + elif l == 1: |
1597 | + src = rel |
1598 | + with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: |
1599 | + f.write(src) |
1600 | + elif rel[:6] == 'cloud:': |
1601 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
1602 | + rel = rel.split(':')[1] |
1603 | + u_rel = rel.split('-')[0] |
1604 | + ca_rel = rel.split('-')[1] |
1605 | + |
1606 | + if u_rel != ubuntu_rel: |
1607 | + e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\ |
1608 | + 'version (%s)' % (ca_rel, ubuntu_rel) |
1609 | + error_out(e) |
1610 | + |
1611 | + if 'staging' in ca_rel: |
1612 | + # staging is just a regular PPA. |
1613 | + os_rel = ca_rel.split('/')[0] |
1614 | + ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel |
1615 | + cmd = 'add-apt-repository -y %s' % ppa |
1616 | + subprocess.check_call(cmd.split(' ')) |
1617 | + return |
1618 | + |
1619 | + # map charm config options to actual archive pockets. |
1620 | + pockets = { |
1621 | + 'folsom': 'precise-updates/folsom', |
1622 | + 'folsom/updates': 'precise-updates/folsom', |
1623 | + 'folsom/proposed': 'precise-proposed/folsom', |
1624 | + 'grizzly': 'precise-updates/grizzly', |
1625 | + 'grizzly/updates': 'precise-updates/grizzly', |
1626 | + 'grizzly/proposed': 'precise-proposed/grizzly', |
1627 | + 'havana': 'precise-updates/havana', |
1628 | + 'havana/updates': 'precise-updates/havana', |
1629 | + 'havana/proposed': 'precise-proposed/havana', |
1630 | + } |
1631 | + |
1632 | + try: |
1633 | + pocket = pockets[ca_rel] |
1634 | + except KeyError: |
1635 | + e = 'Invalid Cloud Archive release specified: %s' % rel |
1636 | + error_out(e) |
1637 | + |
1638 | + src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket) |
1639 | + apt_install('ubuntu-cloud-keyring', fatal=True) |
1640 | + |
1641 | + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f: |
1642 | + f.write(src) |
1643 | + else: |
1644 | + error_out("Invalid openstack-release specified: %s" % rel) |
1645 | + |
1646 | + |
1647 | +def save_script_rc(script_path="scripts/scriptrc", **env_vars): |
1648 | + """ |
1649 | + Write an rc file in the charm-delivered directory containing |
1650 | + exported environment variables provided by env_vars. Any charm scripts run |
1651 | + outside the juju hook environment can source this scriptrc to obtain |
1652 | + updated config information necessary to perform health checks or |
1653 | + service changes. |
1654 | + """ |
1655 | + juju_rc_path = "%s/%s" % (charm_dir(), script_path) |
1656 | + if not os.path.exists(os.path.dirname(juju_rc_path)): |
1657 | + os.mkdir(os.path.dirname(juju_rc_path)) |
1658 | + with open(juju_rc_path, 'wb') as rc_script: |
1659 | + rc_script.write( |
1660 | + "#!/bin/bash\n") |
1661 | + [rc_script.write('export %s=%s\n' % (u, p)) |
1662 | + for u, p in env_vars.iteritems() if u != "script_path"] |
1663 | + |
1664 | + |
1665 | +def openstack_upgrade_available(package): |
1666 | + """ |
1667 | + Determines if an OpenStack upgrade is available from installation |
1668 | + source, based on version of installed package. |
1669 | + |
1670 | + :param package: str: Name of installed package. |
1671 | + |
1672 | +    :returns: bool: True if the configured installation source offers
1673 | +                    a newer version of the package.
1674 | + |
1675 | + """ |
1676 | + |
1677 | + src = config('openstack-origin') |
1678 | + cur_vers = get_os_version_package(package) |
1679 | + available_vers = get_os_version_install_source(src) |
1680 | + apt.init() |
1681 | + return apt.version_compare(available_vers, cur_vers) == 1 |
1682 | + |
1683 | + |
1684 | +def is_ip(address): |
1685 | + """ |
1686 | + Returns True if address is a valid IP address. |
1687 | + """ |
1688 | + try: |
1689 | + # Test to see if already an IPv4 address |
1690 | + socket.inet_aton(address) |
1691 | + return True |
1692 | + except socket.error: |
1693 | + return False |
1694 | + |
1695 | + |
1696 | +def ns_query(address): |
1697 | + try: |
1698 | + import dns.resolver |
1699 | + except ImportError: |
1700 | + apt_install('python-dnspython') |
1701 | + import dns.resolver |
1702 | + |
1703 | + if isinstance(address, dns.name.Name): |
1704 | + rtype = 'PTR' |
1705 | + elif isinstance(address, basestring): |
1706 | + rtype = 'A' |
1707 | + |
1708 | + answers = dns.resolver.query(address, rtype) |
1709 | + if answers: |
1710 | + return str(answers[0]) |
1711 | + return None |
1712 | + |
1713 | + |
1714 | +def get_host_ip(hostname): |
1715 | + """ |
1716 | + Resolves the IP for a given hostname, or returns |
1717 | + the input if it is already an IP. |
1718 | + """ |
1719 | + if is_ip(hostname): |
1720 | + return hostname |
1721 | + |
1722 | + return ns_query(hostname) |
1723 | + |
1724 | + |
1725 | +def get_hostname(address): |
1726 | + """ |
1727 | + Resolves hostname for given IP, or returns the input |
1728 | + if it is already a hostname. |
1729 | + """ |
1730 | + if not is_ip(address): |
1731 | + return address |
1732 | + |
1733 | + try: |
1734 | + import dns.reversename |
1735 | + except ImportError: |
1736 | + apt_install('python-dnspython') |
1737 | + import dns.reversename |
1738 | + |
1739 | + rev = dns.reversename.from_address(address) |
1740 | + result = ns_query(rev) |
1741 | + if not result: |
1742 | + return None |
1743 | + |
1744 | + # strip trailing . |
1745 | + if result.endswith('.'): |
1746 | + return result[:-1] |
1747 | + return result |
1748 | |
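A minimal sketch of how a charm hook might drive the helpers above, assuming a charm with an 'openstack-origin' config option and a 'glance-common' package (both names are illustrative, not part of this branch):

    from charmhelpers.core.hookenv import config
    from charmhelpers.fetch import apt_update
    from charmhelpers.contrib.openstack.utils import (
        configure_installation_source,
        openstack_upgrade_available,
    )

    def config_changed():
        # Point apt at the configured source and refresh the cache.
        configure_installation_source(config('openstack-origin'))
        apt_update(fatal=True)
        if openstack_upgrade_available('glance-common'):
            # run the charm's package upgrade path for the release
            # reported by os_release('glance-common') here
            pass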
1749 | === added directory 'hooks/charmhelpers/contrib/storage' |
1750 | === added file 'hooks/charmhelpers/contrib/storage/__init__.py' |
1751 | === added directory 'hooks/charmhelpers/contrib/storage/linux' |
1752 | === added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py' |
1753 | === added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py' |
1754 | --- hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000 |
1755 | +++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2013-10-15 01:35:02 +0000 |
1756 | @@ -0,0 +1,359 @@ |
1757 | +# |
1758 | +# Copyright 2012 Canonical Ltd. |
1759 | +# |
1760 | +# This file is sourced from lp:openstack-charm-helpers |
1761 | +# |
1762 | +# Authors: |
1763 | +# James Page <james.page@ubuntu.com> |
1764 | +# Adam Gandelman <adamg@ubuntu.com> |
1765 | +# |
1766 | + |
1767 | +import os |
1768 | +import shutil |
1769 | +import json |
1770 | +import time |
1771 | + |
1772 | +from subprocess import ( |
1773 | + check_call, |
1774 | + check_output, |
1775 | + CalledProcessError |
1776 | +) |
1777 | + |
1778 | +from charmhelpers.core.hookenv import ( |
1779 | + relation_get, |
1780 | + relation_ids, |
1781 | + related_units, |
1782 | + log, |
1783 | + INFO, |
1784 | + WARNING, |
1785 | + ERROR |
1786 | +) |
1787 | + |
1788 | +from charmhelpers.core.host import ( |
1789 | + mount, |
1790 | + mounts, |
1791 | + service_start, |
1792 | + service_stop, |
1793 | + service_running, |
1794 | + umount, |
1795 | +) |
1796 | + |
1797 | +from charmhelpers.fetch import ( |
1798 | + apt_install, |
1799 | +) |
1800 | + |
1801 | +KEYRING = '/etc/ceph/ceph.client.{}.keyring' |
1802 | +KEYFILE = '/etc/ceph/ceph.client.{}.key' |
1803 | + |
1804 | +CEPH_CONF = """[global] |
1805 | + auth supported = {auth} |
1806 | + keyring = {keyring} |
1807 | + mon host = {mon_hosts} |
1808 | +""" |
1809 | + |
1810 | + |
1811 | +def install(): |
1812 | + ''' Basic Ceph client installation ''' |
1813 | + ceph_dir = "/etc/ceph" |
1814 | + if not os.path.exists(ceph_dir): |
1815 | + os.mkdir(ceph_dir) |
1816 | + apt_install('ceph-common', fatal=True) |
1817 | + |
1818 | + |
1819 | +def rbd_exists(service, pool, rbd_img): |
1820 | + ''' Check to see if a RADOS block device exists ''' |
1821 | + try: |
1822 | + out = check_output(['rbd', 'list', '--id', service, |
1823 | + '--pool', pool]) |
1824 | + except CalledProcessError: |
1825 | + return False |
1826 | + else: |
1827 | + return rbd_img in out |
1828 | + |
1829 | + |
1830 | +def create_rbd_image(service, pool, image, sizemb): |
1831 | + ''' Create a new RADOS block device ''' |
1832 | + cmd = [ |
1833 | + 'rbd', |
1834 | + 'create', |
1835 | + image, |
1836 | + '--size', |
1837 | + str(sizemb), |
1838 | + '--id', |
1839 | + service, |
1840 | + '--pool', |
1841 | + pool |
1842 | + ] |
1843 | + check_call(cmd) |
1844 | + |
1845 | + |
1846 | +def pool_exists(service, name): |
1847 | + ''' Check to see if a RADOS pool already exists ''' |
1848 | + try: |
1849 | + out = check_output(['rados', '--id', service, 'lspools']) |
1850 | + except CalledProcessError: |
1851 | + return False |
1852 | + else: |
1853 | + return name in out |
1854 | + |
1855 | + |
1856 | +def get_osds(service): |
1857 | + ''' |
1858 | + Return a list of all Ceph Object Storage Daemons |
1859 | + currently in the cluster |
1860 | + ''' |
1861 | + return json.loads(check_output(['ceph', '--id', service, |
1862 | + 'osd', 'ls', '--format=json'])) |
1863 | + |
1864 | + |
1865 | +def create_pool(service, name, replicas=2): |
1866 | + ''' Create a new RADOS pool ''' |
1867 | + if pool_exists(service, name): |
1868 | + log("Ceph pool {} already exists, skipping creation".format(name), |
1869 | + level=WARNING) |
1870 | + return |
1871 | + # Calculate the number of placement groups based |
1872 | + # on upstream recommended best practices. |
1873 | + pgnum = (len(get_osds(service)) * 100 / replicas) |
1874 | + cmd = [ |
1875 | + 'ceph', '--id', service, |
1876 | + 'osd', 'pool', 'create', |
1877 | + name, str(pgnum) |
1878 | + ] |
1879 | + check_call(cmd) |
1880 | + cmd = [ |
1881 | + 'ceph', '--id', service, |
1882 | + 'osd', 'pool', 'set', name, |
1883 | + 'size', str(replicas) |
1884 | + ] |
1885 | + check_call(cmd) |
1886 | + |
1887 | + |
1888 | +def delete_pool(service, name): |
1889 | + ''' Delete a RADOS pool from ceph ''' |
1890 | + cmd = [ |
1891 | + 'ceph', '--id', service, |
1892 | + 'osd', 'pool', 'delete', |
1893 | + name, '--yes-i-really-really-mean-it' |
1894 | + ] |
1895 | + check_call(cmd) |
1896 | + |
1897 | + |
1898 | +def _keyfile_path(service): |
1899 | + return KEYFILE.format(service) |
1900 | + |
1901 | + |
1902 | +def _keyring_path(service): |
1903 | + return KEYRING.format(service) |
1904 | + |
1905 | + |
1906 | +def create_keyring(service, key): |
1907 | +    ''' Create a new Ceph keyring containing the given key '''
1908 | + keyring = _keyring_path(service) |
1909 | + if os.path.exists(keyring): |
1910 | + log('ceph: Keyring exists at %s.' % keyring, level=WARNING) |
1911 | + return |
1912 | + cmd = [ |
1913 | + 'ceph-authtool', |
1914 | + keyring, |
1915 | + '--create-keyring', |
1916 | + '--name=client.{}'.format(service), |
1917 | + '--add-key={}'.format(key) |
1918 | + ] |
1919 | + check_call(cmd) |
1920 | + log('ceph: Created new ring at %s.' % keyring, level=INFO) |
1921 | + |
1922 | + |
1923 | +def create_key_file(service, key): |
1924 | + ''' Create a file containing key ''' |
1925 | + keyfile = _keyfile_path(service) |
1926 | + if os.path.exists(keyfile): |
1927 | + log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING) |
1928 | + return |
1929 | + with open(keyfile, 'w') as fd: |
1930 | + fd.write(key) |
1931 | + log('ceph: Created new keyfile at %s.' % keyfile, level=INFO) |
1932 | + |
1933 | + |
1934 | +def get_ceph_nodes(): |
1935 | +    ''' Query named relation 'ceph' to determine current nodes '''
1936 | + hosts = [] |
1937 | + for r_id in relation_ids('ceph'): |
1938 | + for unit in related_units(r_id): |
1939 | + hosts.append(relation_get('private-address', unit=unit, rid=r_id)) |
1940 | + return hosts |
1941 | + |
1942 | + |
1943 | +def configure(service, key, auth): |
1944 | + ''' Perform basic configuration of Ceph ''' |
1945 | + create_keyring(service, key) |
1946 | + create_key_file(service, key) |
1947 | + hosts = get_ceph_nodes() |
1948 | + with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: |
1949 | + ceph_conf.write(CEPH_CONF.format(auth=auth, |
1950 | + keyring=_keyring_path(service), |
1951 | + mon_hosts=",".join(map(str, hosts)))) |
1952 | + modprobe('rbd') |
1953 | + |
1954 | + |
1955 | +def image_mapped(name): |
1956 | + ''' Determine whether a RADOS block device is mapped locally ''' |
1957 | + try: |
1958 | + out = check_output(['rbd', 'showmapped']) |
1959 | + except CalledProcessError: |
1960 | + return False |
1961 | + else: |
1962 | + return name in out |
1963 | + |
1964 | + |
1965 | +def map_block_storage(service, pool, image): |
1966 | + ''' Map a RADOS block device for local use ''' |
1967 | + cmd = [ |
1968 | + 'rbd', |
1969 | + 'map', |
1970 | + '{}/{}'.format(pool, image), |
1971 | + '--user', |
1972 | + service, |
1973 | + '--secret', |
1974 | + _keyfile_path(service), |
1975 | + ] |
1976 | + check_call(cmd) |
1977 | + |
1978 | + |
1979 | +def filesystem_mounted(fs): |
1980 | +    ''' Determine whether a filesystem is already mounted '''
1981 | + return fs in [f for f, m in mounts()] |
1982 | + |
1983 | + |
1984 | +def make_filesystem(blk_device, fstype='ext4', timeout=10): |
1985 | + ''' Make a new filesystem on the specified block device ''' |
1986 | + count = 0 |
1987 | + e_noent = os.errno.ENOENT |
1988 | + while not os.path.exists(blk_device): |
1989 | + if count >= timeout: |
1990 | + log('ceph: gave up waiting on block device %s' % blk_device, |
1991 | + level=ERROR) |
1992 | + raise IOError(e_noent, os.strerror(e_noent), blk_device) |
1993 | + log('ceph: waiting for block device %s to appear' % blk_device, |
1994 | + level=INFO) |
1995 | + count += 1 |
1996 | + time.sleep(1) |
1997 | + else: |
1998 | + log('ceph: Formatting block device %s as filesystem %s.' % |
1999 | + (blk_device, fstype), level=INFO) |
2000 | + check_call(['mkfs', '-t', fstype, blk_device]) |
2001 | + |
2002 | + |
2003 | +def place_data_on_block_device(blk_device, data_src_dst): |
2004 | + ''' Migrate data in data_src_dst to blk_device and then remount ''' |
2005 | + # mount block device into /mnt |
2006 | + mount(blk_device, '/mnt') |
2007 | + # copy data to /mnt |
2008 | + copy_files(data_src_dst, '/mnt') |
2009 | + # umount block device |
2010 | + umount('/mnt') |
2011 | + # Grab user/group ID's from original source |
2012 | + _dir = os.stat(data_src_dst) |
2013 | + uid = _dir.st_uid |
2014 | + gid = _dir.st_gid |
2015 | + # re-mount where the data should originally be |
2016 | + # TODO: persist is currently a NO-OP in core.host |
2017 | + mount(blk_device, data_src_dst, persist=True) |
2018 | + # ensure original ownership of new mount. |
2019 | + os.chown(data_src_dst, uid, gid) |
2020 | + |
2021 | + |
2022 | +# TODO: re-use |
2023 | +def modprobe(module): |
2024 | + ''' Load a kernel module and configure for auto-load on reboot ''' |
2025 | + log('ceph: Loading kernel module', level=INFO) |
2026 | + cmd = ['modprobe', module] |
2027 | + check_call(cmd) |
2028 | + with open('/etc/modules', 'r+') as modules: |
2029 | + if module not in modules.read(): |
2030 | + modules.write(module) |
2031 | + |
2032 | + |
2033 | +def copy_files(src, dst, symlinks=False, ignore=None): |
2034 | + ''' Copy files from src to dst ''' |
2035 | + for item in os.listdir(src): |
2036 | + s = os.path.join(src, item) |
2037 | + d = os.path.join(dst, item) |
2038 | + if os.path.isdir(s): |
2039 | + shutil.copytree(s, d, symlinks, ignore) |
2040 | + else: |
2041 | + shutil.copy2(s, d) |
2042 | + |
2043 | + |
2044 | +def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, |
2045 | + blk_device, fstype, system_services=[]): |
2046 | + """ |
2047 | + NOTE: This function must only be called from a single service unit for |
2048 | +    the same rbd_img; otherwise data loss will occur.
2049 | + |
2050 | + Ensures given pool and RBD image exists, is mapped to a block device, |
2051 | + and the device is formatted and mounted at the given mount_point. |
2052 | + |
2053 | + If formatting a device for the first time, data existing at mount_point |
2054 | + will be migrated to the RBD device before being re-mounted. |
2055 | + |
2056 | + All services listed in system_services will be stopped prior to data |
2057 | + migration and restarted when complete. |
2058 | + """ |
2059 | + # Ensure pool, RBD image, RBD mappings are in place. |
2060 | + if not pool_exists(service, pool): |
2061 | + log('ceph: Creating new pool {}.'.format(pool)) |
2062 | + create_pool(service, pool) |
2063 | + |
2064 | + if not rbd_exists(service, pool, rbd_img): |
2065 | + log('ceph: Creating RBD image ({}).'.format(rbd_img)) |
2066 | + create_rbd_image(service, pool, rbd_img, sizemb) |
2067 | + |
2068 | + if not image_mapped(rbd_img): |
2069 | + log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img)) |
2070 | + map_block_storage(service, pool, rbd_img) |
2071 | + |
2072 | + # make file system |
2073 | + # TODO: What happens if for whatever reason this is run again and |
2074 | + # the data is already in the rbd device and/or is mounted?? |
2075 | + # When it is mounted already, it will fail to make the fs |
2076 | + # XXX: This is really sketchy! Need to at least add an fstab entry |
2077 | + # otherwise this hook will blow away existing data if its executed |
2078 | + # after a reboot. |
2079 | + if not filesystem_mounted(mount_point): |
2080 | + make_filesystem(blk_device, fstype) |
2081 | + |
2082 | + for svc in system_services: |
2083 | + if service_running(svc): |
2084 | +            log('ceph: Stopping service {} prior to migrating data.'
2085 | + .format(svc)) |
2086 | + service_stop(svc) |
2087 | + |
2088 | + place_data_on_block_device(blk_device, mount_point) |
2089 | + |
2090 | + for svc in system_services: |
2091 | + log('ceph: Starting service {} after migrating data.' |
2092 | + .format(svc)) |
2093 | + service_start(svc) |
2094 | + |
2095 | + |
2096 | +def ensure_ceph_keyring(service, user=None, group=None): |
2097 | + ''' |
2098 | + Ensures a ceph keyring is created for a named service |
2099 | + and optionally ensures user and group ownership. |
2100 | + |
2101 | + Returns False if no ceph key is available in relation state. |
2102 | + ''' |
2103 | + key = None |
2104 | + for rid in relation_ids('ceph'): |
2105 | + for unit in related_units(rid): |
2106 | + key = relation_get('key', rid=rid, unit=unit) |
2107 | + if key: |
2108 | + break |
2109 | + if not key: |
2110 | + return False |
2111 | + create_keyring(service=service, key=key) |
2112 | + keyring = _keyring_path(service) |
2113 | + if user and group: |
2114 | + check_call(['chown', '%s.%s' % (user, group), keyring]) |
2115 | + return True |
2116 | |
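A sketch of how a ceph-relation-changed hook could use the helpers above, assuming the consuming service runs as the 'glance' user (user/group are illustrative):

    from charmhelpers.core.hookenv import relation_get, service_name
    from charmhelpers.contrib.storage.linux.ceph import (
        ensure_ceph_keyring,
        configure,
    )

    def ceph_changed():
        svc = service_name()
        # Wait until the ceph cluster has published a key on the relation.
        if not ensure_ceph_keyring(service=svc, user='glance', group='glance'):
            return
        # Writes /etc/ceph/ceph.conf and loads the rbd kernel module.
        configure(service=svc, key=relation_get('key'),
                  auth=relation_get('auth'))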
2117 | === added file 'hooks/charmhelpers/contrib/storage/linux/loopback.py' |
2118 | --- hooks/charmhelpers/contrib/storage/linux/loopback.py 1970-01-01 00:00:00 +0000 |
2119 | +++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-10-15 01:35:02 +0000 |
2120 | @@ -0,0 +1,62 @@ |
2121 | + |
2122 | +import os |
2123 | +import re |
2124 | + |
2125 | +from subprocess import ( |
2126 | + check_call, |
2127 | + check_output, |
2128 | +) |
2129 | + |
2130 | + |
2131 | +################################################## |
2132 | +# loopback device helpers. |
2133 | +################################################## |
2134 | +def loopback_devices(): |
2135 | + ''' |
2136 | + Parse through 'losetup -a' output to determine currently mapped |
2137 | + loopback devices. Output is expected to look like: |
2138 | + |
2139 | + /dev/loop0: [0807]:961814 (/tmp/my.img) |
2140 | + |
2141 | + :returns: dict: a dict mapping {loopback_dev: backing_file} |
2142 | + ''' |
2143 | + loopbacks = {} |
2144 | + cmd = ['losetup', '-a'] |
2145 | + devs = [d.strip().split(' ') for d in |
2146 | + check_output(cmd).splitlines() if d != ''] |
2147 | + for dev, _, f in devs: |
2148 | + loopbacks[dev.replace(':', '')] = re.search('\((\S+)\)', f).groups()[0] |
2149 | + return loopbacks |
2150 | + |
2151 | + |
2152 | +def create_loopback(file_path): |
2153 | + ''' |
2154 | + Create a loopback device for a given backing file. |
2155 | + |
2156 | + :returns: str: Full path to new loopback device (eg, /dev/loop0) |
2157 | + ''' |
2158 | + file_path = os.path.abspath(file_path) |
2159 | + check_call(['losetup', '--find', file_path]) |
2160 | + for d, f in loopback_devices().iteritems(): |
2161 | + if f == file_path: |
2162 | + return d |
2163 | + |
2164 | + |
2165 | +def ensure_loopback_device(path, size): |
2166 | + ''' |
2167 | + Ensure a loopback device exists for a given backing file path and size. |
2168 | +    If a loopback device is not already mapped to the file, one will be created.
2169 | + |
2170 | + TODO: Confirm size of found loopback device. |
2171 | + |
2172 | + :returns: str: Full path to the ensured loopback device (eg, /dev/loop0) |
2173 | + ''' |
2174 | + for d, f in loopback_devices().iteritems(): |
2175 | + if f == path: |
2176 | + return d |
2177 | + |
2178 | + if not os.path.exists(path): |
2179 | + cmd = ['truncate', '--size', size, path] |
2180 | + check_call(cmd) |
2181 | + |
2182 | + return create_loopback(path) |
2183 | |
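For example (the backing file path and size are illustrative):

    from charmhelpers.contrib.storage.linux.loopback import (
        ensure_loopback_device,
    )

    # Creates /srv/glance.img if needed and maps it; returns e.g. /dev/loop0.
    device = ensure_loopback_device('/srv/glance.img', '5G')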
2184 | === added file 'hooks/charmhelpers/contrib/storage/linux/lvm.py' |
2185 | --- hooks/charmhelpers/contrib/storage/linux/lvm.py 1970-01-01 00:00:00 +0000 |
2186 | +++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2013-10-15 01:35:02 +0000 |
2187 | @@ -0,0 +1,88 @@ |
2188 | +from subprocess import ( |
2189 | + CalledProcessError, |
2190 | + check_call, |
2191 | + check_output, |
2192 | + Popen, |
2193 | + PIPE, |
2194 | +) |
2195 | + |
2196 | + |
2197 | +################################################## |
2198 | +# LVM helpers. |
2199 | +################################################## |
2200 | +def deactivate_lvm_volume_group(block_device): |
2201 | + ''' |
2202 | +    Deactivate any volume group associated with an LVM physical volume.
2203 | + |
2204 | + :param block_device: str: Full path to LVM physical volume |
2205 | + ''' |
2206 | + vg = list_lvm_volume_group(block_device) |
2207 | + if vg: |
2208 | + cmd = ['vgchange', '-an', vg] |
2209 | + check_call(cmd) |
2210 | + |
2211 | + |
2212 | +def is_lvm_physical_volume(block_device): |
2213 | + ''' |
2214 | + Determine whether a block device is initialized as an LVM PV. |
2215 | + |
2216 | + :param block_device: str: Full path of block device to inspect. |
2217 | + |
2218 | + :returns: boolean: True if block device is a PV, False if not. |
2219 | + ''' |
2220 | + try: |
2221 | + check_output(['pvdisplay', block_device]) |
2222 | + return True |
2223 | + except CalledProcessError: |
2224 | + return False |
2225 | + |
2226 | + |
2227 | +def remove_lvm_physical_volume(block_device): |
2228 | + ''' |
2229 | + Remove LVM PV signatures from a given block device. |
2230 | + |
2231 | + :param block_device: str: Full path of block device to scrub. |
2232 | + ''' |
2233 | + p = Popen(['pvremove', '-ff', block_device], |
2234 | + stdin=PIPE) |
2235 | + p.communicate(input='y\n') |
2236 | + |
2237 | + |
2238 | +def list_lvm_volume_group(block_device): |
2239 | + ''' |
2240 | + List LVM volume group associated with a given block device. |
2241 | + |
2242 | + Assumes block device is a valid LVM PV. |
2243 | + |
2244 | + :param block_device: str: Full path of block device to inspect. |
2245 | + |
2246 | + :returns: str: Name of volume group associated with block device or None |
2247 | + ''' |
2248 | + vg = None |
2249 | + pvd = check_output(['pvdisplay', block_device]).splitlines() |
2250 | + for l in pvd: |
2251 | + if l.strip().startswith('VG Name'): |
2252 | + vg = ' '.join(l.split()).split(' ').pop() |
2253 | + return vg |
2254 | + |
2255 | + |
2256 | +def create_lvm_physical_volume(block_device): |
2257 | + ''' |
2258 | + Initialize a block device as an LVM physical volume. |
2259 | + |
2260 | + :param block_device: str: Full path of block device to initialize. |
2261 | + |
2262 | + ''' |
2263 | + check_call(['pvcreate', block_device]) |
2264 | + |
2265 | + |
2266 | +def create_lvm_volume_group(volume_group, block_device): |
2267 | + ''' |
2268 | + Create an LVM volume group backed by a given block device. |
2269 | + |
2270 | + Assumes block device has already been initialized as an LVM PV. |
2271 | + |
2272 | + :param volume_group: str: Name of volume group to create. |
2273 | + :block_device: str: Full path of PV-initialized block device. |
2274 | + ''' |
2275 | + check_call(['vgcreate', volume_group, block_device]) |
2276 | |
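A short sketch of the scrub-before-reuse flow these helpers support (the device path is illustrative):

    from charmhelpers.contrib.storage.linux.lvm import (
        deactivate_lvm_volume_group,
        is_lvm_physical_volume,
        remove_lvm_physical_volume,
    )

    device = '/dev/vdb'
    if is_lvm_physical_volume(device):
        # Deactivate any volume group first, then wipe the PV signatures.
        deactivate_lvm_volume_group(device)
        remove_lvm_physical_volume(device)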
2277 | === added file 'hooks/charmhelpers/contrib/storage/linux/utils.py' |
2278 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000 |
2279 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 2013-10-15 01:35:02 +0000 |
2280 | @@ -0,0 +1,25 @@ |
2281 | +from os import stat |
2282 | +from stat import S_ISBLK |
2283 | + |
2284 | +from subprocess import ( |
2285 | + check_call |
2286 | +) |
2287 | + |
2288 | + |
2289 | +def is_block_device(path): |
2290 | + ''' |
2291 | + Confirm device at path is a valid block device node. |
2292 | + |
2293 | + :returns: boolean: True if path is a block device, False if not. |
2294 | + ''' |
2295 | + return S_ISBLK(stat(path).st_mode) |
2296 | + |
2297 | + |
2298 | +def zap_disk(block_device): |
2299 | + ''' |
2300 | + Clear a block device of partition table. Relies on sgdisk, which is |
2301 | + installed as pat of the 'gdisk' package in Ubuntu. |
2302 | + |
2303 | + :param block_device: str: Full path of block device to clean. |
2304 | + ''' |
2305 | + check_call(['sgdisk', '--zap-all', block_device]) |
2306 | |
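For example, guarding a destructive wipe behind the block-device check (device path illustrative; sgdisk must be present):

    from charmhelpers.contrib.storage.linux.utils import (
        is_block_device,
        zap_disk,
    )

    if is_block_device('/dev/vdb'):
        zap_disk('/dev/vdb')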
2307 | === added directory 'hooks/charmhelpers/core' |
2308 | === added file 'hooks/charmhelpers/core/__init__.py' |
2309 | === added file 'hooks/charmhelpers/core/hookenv.py' |
2310 | --- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000 |
2311 | +++ hooks/charmhelpers/core/hookenv.py 2013-10-15 01:35:02 +0000 |
2312 | @@ -0,0 +1,340 @@ |
2313 | +"Interactions with the Juju environment" |
2314 | +# Copyright 2013 Canonical Ltd. |
2315 | +# |
2316 | +# Authors: |
2317 | +# Charm Helpers Developers <juju@lists.ubuntu.com> |
2318 | + |
2319 | +import os |
2320 | +import json |
2321 | +import yaml |
2322 | +import subprocess |
2323 | +import UserDict |
2324 | + |
2325 | +CRITICAL = "CRITICAL" |
2326 | +ERROR = "ERROR" |
2327 | +WARNING = "WARNING" |
2328 | +INFO = "INFO" |
2329 | +DEBUG = "DEBUG" |
2330 | +MARKER = object() |
2331 | + |
2332 | +cache = {} |
2333 | + |
2334 | + |
2335 | +def cached(func): |
2336 | + ''' Cache return values for multiple executions of func + args |
2337 | + |
2338 | + For example: |
2339 | + |
2340 | + @cached |
2341 | + def unit_get(attribute): |
2342 | + pass |
2343 | + |
2344 | + unit_get('test') |
2345 | + |
2346 | + will cache the result of unit_get + 'test' for future calls. |
2347 | + ''' |
2348 | + def wrapper(*args, **kwargs): |
2349 | + global cache |
2350 | + key = str((func, args, kwargs)) |
2351 | + try: |
2352 | + return cache[key] |
2353 | + except KeyError: |
2354 | + res = func(*args, **kwargs) |
2355 | + cache[key] = res |
2356 | + return res |
2357 | + return wrapper |
2358 | + |
2359 | + |
2360 | +def flush(key): |
2361 | + ''' Flushes any entries from function cache where the |
2362 | + key is found in the function+args ''' |
2363 | + flush_list = [] |
2364 | + for item in cache: |
2365 | + if key in item: |
2366 | + flush_list.append(item) |
2367 | + for item in flush_list: |
2368 | + del cache[item] |
2369 | + |
2370 | + |
2371 | +def log(message, level=None): |
2372 | + "Write a message to the juju log" |
2373 | + command = ['juju-log'] |
2374 | + if level: |
2375 | + command += ['-l', level] |
2376 | + command += [message] |
2377 | + subprocess.call(command) |
2378 | + |
2379 | + |
2380 | +class Serializable(UserDict.IterableUserDict): |
2381 | + "Wrapper, an object that can be serialized to yaml or json" |
2382 | + |
2383 | + def __init__(self, obj): |
2384 | + # wrap the object |
2385 | + UserDict.IterableUserDict.__init__(self) |
2386 | + self.data = obj |
2387 | + |
2388 | + def __getattr__(self, attr): |
2389 | + # See if this object has attribute. |
2390 | + if attr in ("json", "yaml", "data"): |
2391 | + return self.__dict__[attr] |
2392 | + # Check for attribute in wrapped object. |
2393 | + got = getattr(self.data, attr, MARKER) |
2394 | + if got is not MARKER: |
2395 | + return got |
2396 | + # Proxy to the wrapped object via dict interface. |
2397 | + try: |
2398 | + return self.data[attr] |
2399 | + except KeyError: |
2400 | + raise AttributeError(attr) |
2401 | + |
2402 | + def __getstate__(self): |
2403 | + # Pickle as a standard dictionary. |
2404 | + return self.data |
2405 | + |
2406 | + def __setstate__(self, state): |
2407 | + # Unpickle into our wrapper. |
2408 | + self.data = state |
2409 | + |
2410 | + def json(self): |
2411 | + "Serialize the object to json" |
2412 | + return json.dumps(self.data) |
2413 | + |
2414 | + def yaml(self): |
2415 | + "Serialize the object to yaml" |
2416 | + return yaml.dump(self.data) |
2417 | + |
2418 | + |
2419 | +def execution_environment(): |
2420 | + """A convenient bundling of the current execution context""" |
2421 | + context = {} |
2422 | + context['conf'] = config() |
2423 | + if relation_id(): |
2424 | + context['reltype'] = relation_type() |
2425 | + context['relid'] = relation_id() |
2426 | + context['rel'] = relation_get() |
2427 | + context['unit'] = local_unit() |
2428 | + context['rels'] = relations() |
2429 | + context['env'] = os.environ |
2430 | + return context |
2431 | + |
2432 | + |
2433 | +def in_relation_hook(): |
2434 | + "Determine whether we're running in a relation hook" |
2435 | + return 'JUJU_RELATION' in os.environ |
2436 | + |
2437 | + |
2438 | +def relation_type(): |
2439 | + "The scope for the current relation hook" |
2440 | + return os.environ.get('JUJU_RELATION', None) |
2441 | + |
2442 | + |
2443 | +def relation_id(): |
2444 | + "The relation ID for the current relation hook" |
2445 | + return os.environ.get('JUJU_RELATION_ID', None) |
2446 | + |
2447 | + |
2448 | +def local_unit(): |
2449 | + "Local unit ID" |
2450 | + return os.environ['JUJU_UNIT_NAME'] |
2451 | + |
2452 | + |
2453 | +def remote_unit(): |
2454 | + "The remote unit for the current relation hook" |
2455 | + return os.environ['JUJU_REMOTE_UNIT'] |
2456 | + |
2457 | + |
2458 | +def service_name(): |
2459 | +    "The name of the service group this unit belongs to"
2460 | + return local_unit().split('/')[0] |
2461 | + |
2462 | + |
2463 | +@cached |
2464 | +def config(scope=None): |
2465 | + "Juju charm configuration" |
2466 | + config_cmd_line = ['config-get'] |
2467 | + if scope is not None: |
2468 | + config_cmd_line.append(scope) |
2469 | + config_cmd_line.append('--format=json') |
2470 | + try: |
2471 | + return json.loads(subprocess.check_output(config_cmd_line)) |
2472 | + except ValueError: |
2473 | + return None |
2474 | + |
2475 | + |
2476 | +@cached |
2477 | +def relation_get(attribute=None, unit=None, rid=None): |
2478 | + _args = ['relation-get', '--format=json'] |
2479 | + if rid: |
2480 | + _args.append('-r') |
2481 | + _args.append(rid) |
2482 | + _args.append(attribute or '-') |
2483 | + if unit: |
2484 | + _args.append(unit) |
2485 | + try: |
2486 | + return json.loads(subprocess.check_output(_args)) |
2487 | + except ValueError: |
2488 | + return None |
2489 | + |
2490 | + |
2491 | +def relation_set(relation_id=None, relation_settings={}, **kwargs): |
2492 | + relation_cmd_line = ['relation-set'] |
2493 | + if relation_id is not None: |
2494 | + relation_cmd_line.extend(('-r', relation_id)) |
2495 | + for k, v in (relation_settings.items() + kwargs.items()): |
2496 | + if v is None: |
2497 | + relation_cmd_line.append('{}='.format(k)) |
2498 | + else: |
2499 | + relation_cmd_line.append('{}={}'.format(k, v)) |
2500 | + subprocess.check_call(relation_cmd_line) |
2501 | + # Flush cache of any relation-gets for local unit |
2502 | + flush(local_unit()) |
2503 | + |
2504 | + |
2505 | +@cached |
2506 | +def relation_ids(reltype=None): |
2507 | + "A list of relation_ids" |
2508 | + reltype = reltype or relation_type() |
2509 | + relid_cmd_line = ['relation-ids', '--format=json'] |
2510 | + if reltype is not None: |
2511 | + relid_cmd_line.append(reltype) |
2512 | + return json.loads(subprocess.check_output(relid_cmd_line)) or [] |
2513 | + return [] |
2514 | + |
2515 | + |
2516 | +@cached |
2517 | +def related_units(relid=None): |
2518 | + "A list of related units" |
2519 | + relid = relid or relation_id() |
2520 | + units_cmd_line = ['relation-list', '--format=json'] |
2521 | + if relid is not None: |
2522 | + units_cmd_line.extend(('-r', relid)) |
2523 | + return json.loads(subprocess.check_output(units_cmd_line)) or [] |
2524 | + |
2525 | + |
2526 | +@cached |
2527 | +def relation_for_unit(unit=None, rid=None): |
2528 | +    "Get the JSON representation of a unit's relation"
2529 | + unit = unit or remote_unit() |
2530 | + relation = relation_get(unit=unit, rid=rid) |
2531 | + for key in relation: |
2532 | + if key.endswith('-list'): |
2533 | + relation[key] = relation[key].split() |
2534 | + relation['__unit__'] = unit |
2535 | + return relation |
2536 | + |
2537 | + |
2538 | +@cached |
2539 | +def relations_for_id(relid=None): |
2540 | + "Get relations of a specific relation ID" |
2541 | + relation_data = [] |
2542 | + relid = relid or relation_ids() |
2543 | + for unit in related_units(relid): |
2544 | + unit_data = relation_for_unit(unit, relid) |
2545 | + unit_data['__relid__'] = relid |
2546 | + relation_data.append(unit_data) |
2547 | + return relation_data |
2548 | + |
2549 | + |
2550 | +@cached |
2551 | +def relations_of_type(reltype=None): |
2552 | + "Get relations of a specific type" |
2553 | + relation_data = [] |
2554 | + reltype = reltype or relation_type() |
2555 | + for relid in relation_ids(reltype): |
2556 | + for relation in relations_for_id(relid): |
2557 | + relation['__relid__'] = relid |
2558 | + relation_data.append(relation) |
2559 | + return relation_data |
2560 | + |
2561 | + |
2562 | +@cached |
2563 | +def relation_types(): |
2564 | + "Get a list of relation types supported by this charm" |
2565 | + charmdir = os.environ.get('CHARM_DIR', '') |
2566 | + mdf = open(os.path.join(charmdir, 'metadata.yaml')) |
2567 | + md = yaml.safe_load(mdf) |
2568 | + rel_types = [] |
2569 | + for key in ('provides', 'requires', 'peers'): |
2570 | + section = md.get(key) |
2571 | + if section: |
2572 | + rel_types.extend(section.keys()) |
2573 | + mdf.close() |
2574 | + return rel_types |
2575 | + |
2576 | + |
2577 | +@cached |
2578 | +def relations(): |
2579 | + rels = {} |
2580 | + for reltype in relation_types(): |
2581 | + relids = {} |
2582 | + for relid in relation_ids(reltype): |
2583 | + units = {local_unit(): relation_get(unit=local_unit(), rid=relid)} |
2584 | + for unit in related_units(relid): |
2585 | + reldata = relation_get(unit=unit, rid=relid) |
2586 | + units[unit] = reldata |
2587 | + relids[relid] = units |
2588 | + rels[reltype] = relids |
2589 | + return rels |
2590 | + |
2591 | + |
2592 | +def open_port(port, protocol="TCP"): |
2593 | + "Open a service network port" |
2594 | + _args = ['open-port'] |
2595 | + _args.append('{}/{}'.format(port, protocol)) |
2596 | + subprocess.check_call(_args) |
2597 | + |
2598 | + |
2599 | +def close_port(port, protocol="TCP"): |
2600 | + "Close a service network port" |
2601 | + _args = ['close-port'] |
2602 | + _args.append('{}/{}'.format(port, protocol)) |
2603 | + subprocess.check_call(_args) |
2604 | + |
2605 | + |
2606 | +@cached |
2607 | +def unit_get(attribute): |
2608 | + _args = ['unit-get', '--format=json', attribute] |
2609 | + try: |
2610 | + return json.loads(subprocess.check_output(_args)) |
2611 | + except ValueError: |
2612 | + return None |
2613 | + |
2614 | + |
2615 | +def unit_private_ip(): |
2616 | + return unit_get('private-address') |
2617 | + |
2618 | + |
2619 | +class UnregisteredHookError(Exception): |
2620 | + pass |
2621 | + |
2622 | + |
2623 | +class Hooks(object): |
2624 | + def __init__(self): |
2625 | + super(Hooks, self).__init__() |
2626 | + self._hooks = {} |
2627 | + |
2628 | + def register(self, name, function): |
2629 | + self._hooks[name] = function |
2630 | + |
2631 | + def execute(self, args): |
2632 | + hook_name = os.path.basename(args[0]) |
2633 | + if hook_name in self._hooks: |
2634 | + self._hooks[hook_name]() |
2635 | + else: |
2636 | + raise UnregisteredHookError(hook_name) |
2637 | + |
2638 | + def hook(self, *hook_names): |
2639 | + def wrapper(decorated): |
2640 | + for hook_name in hook_names: |
2641 | + self.register(hook_name, decorated) |
2642 | + else: |
2643 | + self.register(decorated.__name__, decorated) |
2644 | + if '_' in decorated.__name__: |
2645 | + self.register( |
2646 | + decorated.__name__.replace('_', '-'), decorated) |
2647 | + return decorated |
2648 | + return wrapper |
2649 | + |
2650 | + |
2651 | +def charm_dir(): |
2652 | + return os.environ.get('CHARM_DIR') |
2653 | |
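Hooks maps the invoked hook name (the basename of argv[0]) to a registered function, so a single script can serve all hook symlinks. A minimal sketch (hook names illustrative):

    import sys
    from charmhelpers.core.hookenv import Hooks, UnregisteredHookError, log

    hooks = Hooks()

    @hooks.hook('install')
    def install():
        log('install hook ran')

    @hooks.hook('config-changed')
    def config_changed():
        log('config-changed hook ran')

    if __name__ == '__main__':
        try:
            hooks.execute(sys.argv)
        except UnregisteredHookError as e:
            log('Unknown hook {} - skipping.'.format(e))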
2654 | === added file 'hooks/charmhelpers/core/host.py' |
2655 | --- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000 |
2656 | +++ hooks/charmhelpers/core/host.py 2013-10-15 01:35:02 +0000 |
2657 | @@ -0,0 +1,241 @@ |
2658 | +"""Tools for working with the host system""" |
2659 | +# Copyright 2012 Canonical Ltd. |
2660 | +# |
2661 | +# Authors: |
2662 | +# Nick Moffitt <nick.moffitt@canonical.com> |
2663 | +# Matthew Wedgwood <matthew.wedgwood@canonical.com> |
2664 | + |
2665 | +import os |
2666 | +import pwd |
2667 | +import grp |
2668 | +import random |
2669 | +import string |
2670 | +import subprocess |
2671 | +import hashlib |
2672 | + |
2673 | +from collections import OrderedDict |
2674 | + |
2675 | +from hookenv import log |
2676 | + |
2677 | + |
2678 | +def service_start(service_name): |
2679 | + return service('start', service_name) |
2680 | + |
2681 | + |
2682 | +def service_stop(service_name): |
2683 | + return service('stop', service_name) |
2684 | + |
2685 | + |
2686 | +def service_restart(service_name): |
2687 | + return service('restart', service_name) |
2688 | + |
2689 | + |
2690 | +def service_reload(service_name, restart_on_failure=False): |
2691 | + service_result = service('reload', service_name) |
2692 | + if not service_result and restart_on_failure: |
2693 | + service_result = service('restart', service_name) |
2694 | + return service_result |
2695 | + |
2696 | + |
2697 | +def service(action, service_name): |
2698 | + cmd = ['service', service_name, action] |
2699 | + return subprocess.call(cmd) == 0 |
2700 | + |
2701 | + |
2702 | +def service_running(service): |
2703 | + try: |
2704 | + output = subprocess.check_output(['service', service, 'status']) |
2705 | + except subprocess.CalledProcessError: |
2706 | + return False |
2707 | + else: |
2708 | + if ("start/running" in output or "is running" in output): |
2709 | + return True |
2710 | + else: |
2711 | + return False |
2712 | + |
2713 | + |
2714 | +def adduser(username, password=None, shell='/bin/bash', system_user=False): |
2715 | + """Add a user""" |
2716 | + try: |
2717 | + user_info = pwd.getpwnam(username) |
2718 | + log('user {0} already exists!'.format(username)) |
2719 | + except KeyError: |
2720 | + log('creating user {0}'.format(username)) |
2721 | + cmd = ['useradd'] |
2722 | + if system_user or password is None: |
2723 | + cmd.append('--system') |
2724 | + else: |
2725 | + cmd.extend([ |
2726 | + '--create-home', |
2727 | + '--shell', shell, |
2728 | + '--password', password, |
2729 | + ]) |
2730 | + cmd.append(username) |
2731 | + subprocess.check_call(cmd) |
2732 | + user_info = pwd.getpwnam(username) |
2733 | + return user_info |
2734 | + |
2735 | + |
2736 | +def add_user_to_group(username, group): |
2737 | + """Add a user to a group""" |
2738 | + cmd = [ |
2739 | + 'gpasswd', '-a', |
2740 | + username, |
2741 | + group |
2742 | + ] |
2743 | + log("Adding user {} to group {}".format(username, group)) |
2744 | + subprocess.check_call(cmd) |
2745 | + |
2746 | + |
2747 | +def rsync(from_path, to_path, flags='-r', options=None): |
2748 | + """Replicate the contents of a path""" |
2749 | + options = options or ['--delete', '--executability'] |
2750 | + cmd = ['/usr/bin/rsync', flags] |
2751 | + cmd.extend(options) |
2752 | + cmd.append(from_path) |
2753 | + cmd.append(to_path) |
2754 | + log(" ".join(cmd)) |
2755 | + return subprocess.check_output(cmd).strip() |
2756 | + |
2757 | + |
2758 | +def symlink(source, destination): |
2759 | + """Create a symbolic link""" |
2760 | + log("Symlinking {} as {}".format(source, destination)) |
2761 | + cmd = [ |
2762 | + 'ln', |
2763 | + '-sf', |
2764 | + source, |
2765 | + destination, |
2766 | + ] |
2767 | + subprocess.check_call(cmd) |
2768 | + |
2769 | + |
2770 | +def mkdir(path, owner='root', group='root', perms=0555, force=False): |
2771 | + """Create a directory""" |
2772 | + log("Making dir {} {}:{} {:o}".format(path, owner, group, |
2773 | + perms)) |
2774 | + uid = pwd.getpwnam(owner).pw_uid |
2775 | + gid = grp.getgrnam(group).gr_gid |
2776 | + realpath = os.path.abspath(path) |
2777 | + if os.path.exists(realpath): |
2778 | + if force and not os.path.isdir(realpath): |
2779 | + log("Removing non-directory file {} prior to mkdir()".format(path)) |
2780 | + os.unlink(realpath) |
2781 | + else: |
2782 | + os.makedirs(realpath, perms) |
2783 | + os.chown(realpath, uid, gid) |
2784 | + |
2785 | + |
2786 | +def write_file(path, content, owner='root', group='root', perms=0444): |
2787 | + """Create or overwrite a file with the contents of a string""" |
2788 | + log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) |
2789 | + uid = pwd.getpwnam(owner).pw_uid |
2790 | + gid = grp.getgrnam(group).gr_gid |
2791 | + with open(path, 'w') as target: |
2792 | + os.fchown(target.fileno(), uid, gid) |
2793 | + os.fchmod(target.fileno(), perms) |
2794 | + target.write(content) |
2795 | + |
2796 | + |
2797 | +def mount(device, mountpoint, options=None, persist=False): |
2798 | + '''Mount a filesystem''' |
2799 | + cmd_args = ['mount'] |
2800 | + if options is not None: |
2801 | + cmd_args.extend(['-o', options]) |
2802 | + cmd_args.extend([device, mountpoint]) |
2803 | + try: |
2804 | + subprocess.check_output(cmd_args) |
2805 | + except subprocess.CalledProcessError, e: |
2806 | + log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) |
2807 | + return False |
2808 | + if persist: |
2809 | + # TODO: update fstab |
2810 | + pass |
2811 | + return True |
2812 | + |
2813 | + |
2814 | +def umount(mountpoint, persist=False): |
2815 | + '''Unmount a filesystem''' |
2816 | + cmd_args = ['umount', mountpoint] |
2817 | + try: |
2818 | + subprocess.check_output(cmd_args) |
2819 | + except subprocess.CalledProcessError, e: |
2820 | + log('Error unmounting {}\n{}'.format(mountpoint, e.output)) |
2821 | + return False |
2822 | + if persist: |
2823 | + # TODO: update fstab |
2824 | + pass |
2825 | + return True |
2826 | + |
2827 | + |
2828 | +def mounts(): |
2829 | + '''List of all mounted volumes as [[mountpoint,device],[...]]''' |
2830 | + with open('/proc/mounts') as f: |
2831 | + # [['/mount/point','/dev/path'],[...]] |
2832 | + system_mounts = [m[1::-1] for m in [l.strip().split() |
2833 | + for l in f.readlines()]] |
2834 | + return system_mounts |
2835 | + |
2836 | + |
2837 | +def file_hash(path): |
2838 | +    ''' Generate an MD5 hash of the contents of 'path', or None if not found '''
2839 | + if os.path.exists(path): |
2840 | + h = hashlib.md5() |
2841 | + with open(path, 'r') as source: |
2842 | + h.update(source.read()) # IGNORE:E1101 - it does have update |
2843 | + return h.hexdigest() |
2844 | + else: |
2845 | + return None |
2846 | + |
2847 | + |
2848 | +def restart_on_change(restart_map): |
2849 | + ''' Restart services based on configuration files changing |
2850 | + |
2851 | +    This function is used as a decorator, for example:
2852 | + |
2853 | + @restart_on_change({ |
2854 | + '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] |
2855 | + }) |
2856 | + def ceph_client_changed(): |
2857 | + ... |
2858 | + |
2859 | + In this example, the cinder-api and cinder-volume services |
2860 | + would be restarted if /etc/ceph/ceph.conf is changed by the |
2861 | + ceph_client_changed function. |
2862 | + ''' |
2863 | + def wrap(f): |
2864 | + def wrapped_f(*args): |
2865 | + checksums = {} |
2866 | + for path in restart_map: |
2867 | + checksums[path] = file_hash(path) |
2868 | + f(*args) |
2869 | + restarts = [] |
2870 | + for path in restart_map: |
2871 | + if checksums[path] != file_hash(path): |
2872 | + restarts += restart_map[path] |
2873 | + for service_name in list(OrderedDict.fromkeys(restarts)): |
2874 | + service('restart', service_name) |
2875 | + return wrapped_f |
2876 | + return wrap |
2877 | + |
2878 | + |
2879 | +def lsb_release(): |
2880 | + '''Return /etc/lsb-release in a dict''' |
2881 | + d = {} |
2882 | + with open('/etc/lsb-release', 'r') as lsb: |
2883 | + for l in lsb: |
2884 | + k, v = l.split('=') |
2885 | + d[k.strip()] = v.strip() |
2886 | + return d |
2887 | + |
2888 | + |
2889 | +def pwgen(length=None): |
2890 | +    '''Generate a random password.'''
2891 | + if length is None: |
2892 | + length = random.choice(range(35, 45)) |
2893 | + alphanumeric_chars = [ |
2894 | + l for l in (string.letters + string.digits) |
2895 | + if l not in 'l0QD1vAEIOUaeiou'] |
2896 | + random_chars = [ |
2897 | + random.choice(alphanumeric_chars) for _ in range(length)] |
2898 | + return(''.join(random_chars)) |
2899 | |
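A small sketch combining the file and service helpers above (paths, ownership and service name are illustrative):

    from charmhelpers.core.host import mkdir, write_file, service_reload

    mkdir('/etc/glance', owner='glance', group='glance', perms=0o750)
    write_file('/etc/glance/policy.json', '{}\n',
               owner='glance', group='glance', perms=0o640)
    # Reload the consuming service; fall back to a restart if reload fails.
    service_reload('glance-api', restart_on_failure=True)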
2900 | === added directory 'hooks/charmhelpers/fetch' |
2901 | === added file 'hooks/charmhelpers/fetch/__init__.py' |
2902 | --- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000 |
2903 | +++ hooks/charmhelpers/fetch/__init__.py 2013-10-15 01:35:02 +0000 |
2904 | @@ -0,0 +1,209 @@ |
2905 | +import importlib |
2906 | +from yaml import safe_load |
2907 | +from charmhelpers.core.host import ( |
2908 | + lsb_release |
2909 | +) |
2910 | +from urlparse import ( |
2911 | + urlparse, |
2912 | + urlunparse, |
2913 | +) |
2914 | +import subprocess |
2915 | +from charmhelpers.core.hookenv import ( |
2916 | + config, |
2917 | + log, |
2918 | +) |
2919 | +import apt_pkg |
2920 | + |
2921 | +CLOUD_ARCHIVE = """# Ubuntu Cloud Archive |
2922 | +deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
2923 | +""" |
2924 | +PROPOSED_POCKET = """# Proposed |
2925 | +deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted |
2926 | +""" |
2927 | + |
2928 | + |
2929 | +def filter_installed_packages(packages): |
2930 | + """Returns a list of packages that require installation""" |
2931 | + apt_pkg.init() |
2932 | + cache = apt_pkg.Cache() |
2933 | + _pkgs = [] |
2934 | + for package in packages: |
2935 | + try: |
2936 | + p = cache[package] |
2937 | + p.current_ver or _pkgs.append(package) |
2938 | + except KeyError: |
2939 | + log('Package {} has no installation candidate.'.format(package), |
2940 | + level='WARNING') |
2941 | + _pkgs.append(package) |
2942 | + return _pkgs |
2943 | + |
2944 | + |
2945 | +def apt_install(packages, options=None, fatal=False): |
2946 | + """Install one or more packages""" |
2947 | + options = options or [] |
2948 | + cmd = ['apt-get', '-y'] |
2949 | + cmd.extend(options) |
2950 | + cmd.append('install') |
2951 | + if isinstance(packages, basestring): |
2952 | + cmd.append(packages) |
2953 | + else: |
2954 | + cmd.extend(packages) |
2955 | + log("Installing {} with options: {}".format(packages, |
2956 | + options)) |
2957 | + if fatal: |
2958 | + subprocess.check_call(cmd) |
2959 | + else: |
2960 | + subprocess.call(cmd) |
2961 | + |
2962 | + |
2963 | +def apt_update(fatal=False): |
2964 | + """Update local apt cache""" |
2965 | + cmd = ['apt-get', 'update'] |
2966 | + if fatal: |
2967 | + subprocess.check_call(cmd) |
2968 | + else: |
2969 | + subprocess.call(cmd) |
2970 | + |
2971 | + |
2972 | +def apt_purge(packages, fatal=False): |
2973 | + """Purge one or more packages""" |
2974 | + cmd = ['apt-get', '-y', 'purge'] |
2975 | + if isinstance(packages, basestring): |
2976 | + cmd.append(packages) |
2977 | + else: |
2978 | + cmd.extend(packages) |
2979 | + log("Purging {}".format(packages)) |
2980 | + if fatal: |
2981 | + subprocess.check_call(cmd) |
2982 | + else: |
2983 | + subprocess.call(cmd) |
2984 | + |
2985 | + |
2986 | +def add_source(source, key=None): |
2987 | + if ((source.startswith('ppa:') or |
2988 | + source.startswith('http:'))): |
2989 | + subprocess.check_call(['add-apt-repository', '--yes', source]) |
2990 | + elif source.startswith('cloud:'): |
2991 | + apt_install(filter_installed_packages(['ubuntu-cloud-keyring']), |
2992 | + fatal=True) |
2993 | + pocket = source.split(':')[-1] |
2994 | + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: |
2995 | + apt.write(CLOUD_ARCHIVE.format(pocket)) |
2996 | + elif source == 'proposed': |
2997 | + release = lsb_release()['DISTRIB_CODENAME'] |
2998 | + with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: |
2999 | + apt.write(PROPOSED_POCKET.format(release)) |
3000 | + if key: |
3001 | + subprocess.check_call(['apt-key', 'import', key]) |
3002 | + |
3003 | + |
3004 | +class SourceConfigError(Exception): |
3005 | + pass |
3006 | + |
3007 | + |
3008 | +def configure_sources(update=False, |
3009 | + sources_var='install_sources', |
3010 | + keys_var='install_keys'): |
3011 | + """ |
3012 | + Configure multiple sources from charm configuration |
3013 | + |
3014 | + Example config: |
3015 | + install_sources: |
3016 | + - "ppa:foo" |
3017 | + - "http://example.com/repo precise main" |
3018 | + install_keys: |
3019 | + - null |
3020 | + - "a1b2c3d4" |
3021 | + |
3022 | + Note that 'null' (a.k.a. None) should not be quoted. |
3023 | + """ |
3024 | + sources = safe_load(config(sources_var)) |
3025 | + keys = safe_load(config(keys_var)) |
3026 | + if isinstance(sources, basestring) and isinstance(keys, basestring): |
3027 | + add_source(sources, keys) |
3028 | + else: |
3029 | + if not len(sources) == len(keys): |
3030 | + msg = 'Install sources and keys lists are different lengths' |
3031 | + raise SourceConfigError(msg) |
3032 | + for src_num in range(len(sources)): |
3033 | + add_source(sources[src_num], keys[src_num]) |
3034 | + if update: |
3035 | + apt_update(fatal=True) |
3036 | + |
3037 | +# The order of this list is very important. Handlers should be listed
3038 | +# in order from least- to most-specific URL matching.
3039 | +FETCH_HANDLERS = ( |
3040 | + 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', |
3041 | + 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', |
3042 | +) |
3043 | + |
3044 | + |
3045 | +class UnhandledSource(Exception): |
3046 | + pass |
3047 | + |
3048 | + |
3049 | +def install_remote(source): |
3050 | + """ |
3051 | + Install a file tree from a remote source |
3052 | + |
3053 | + The specified source should be a url of the form: |
3054 | + scheme://[host]/path[#[option=value][&...]] |
3055 | + |
3056 | +    Schemes supported are based on this module's submodules.
3057 | +    Options supported are submodule-specific."""
3058 | + # We ONLY check for True here because can_handle may return a string |
3059 | + # explaining why it can't handle a given source. |
3060 | + handlers = [h for h in plugins() if h.can_handle(source) is True] |
3061 | + installed_to = None |
3062 | + for handler in handlers: |
3063 | + try: |
3064 | + installed_to = handler.install(source) |
3065 | + except UnhandledSource: |
3066 | + pass |
3067 | + if not installed_to: |
3068 | + raise UnhandledSource("No handler found for source {}".format(source)) |
3069 | + return installed_to |
3070 | + |
3071 | + |
3072 | +def install_from_config(config_var_name): |
3073 | + charm_config = config() |
3074 | + source = charm_config[config_var_name] |
3075 | + return install_remote(source) |
3076 | + |
3077 | + |
3078 | +class BaseFetchHandler(object): |
3079 | + """Base class for FetchHandler implementations in fetch plugins""" |
3080 | + def can_handle(self, source): |
3081 | + """Returns True if the source can be handled. Otherwise returns |
3082 | + a string explaining why it cannot""" |
3083 | + return "Wrong source type" |
3084 | + |
3085 | + def install(self, source): |
3086 | + """Try to download and unpack the source. Return the path to the |
3087 | + unpacked files or raise UnhandledSource.""" |
3088 | + raise UnhandledSource("Wrong source type {}".format(source)) |
3089 | + |
3090 | + def parse_url(self, url): |
3091 | + return urlparse(url) |
3092 | + |
3093 | + def base_url(self, url): |
3094 | + """Return url without querystring or fragment""" |
3095 | + parts = list(self.parse_url(url)) |
3096 | + parts[4:] = ['' for i in parts[4:]] |
3097 | + return urlunparse(parts) |
3098 | + |
3099 | + |
3100 | +def plugins(fetch_handlers=None): |
3101 | + if not fetch_handlers: |
3102 | + fetch_handlers = FETCH_HANDLERS |
3103 | + plugin_list = [] |
3104 | + for handler_name in fetch_handlers: |
3105 | + package, classname = handler_name.rsplit('.', 1) |
3106 | + try: |
3107 | + handler_class = getattr(importlib.import_module(package), classname) |
3108 | + plugin_list.append(handler_class()) |
3109 | + except (ImportError, AttributeError): |
3110 | +            # Skip missing plugins so that they can be omitted from
3111 | + # installation if desired |
3112 | + log("FetchHandler {} not found, skipping plugin".format(handler_name)) |
3113 | + return plugin_list |
3114 | |
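Typical install-hook usage of the apt helpers above (the archive pocket and package names are illustrative):

    from charmhelpers.fetch import (
        add_source,
        apt_install,
        apt_update,
        filter_installed_packages,
    )

    add_source('cloud:precise-updates/havana')
    apt_update(fatal=True)
    apt_install(filter_installed_packages(['glance', 'haproxy']), fatal=True)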
3115 | === added file 'hooks/charmhelpers/fetch/archiveurl.py' |
3116 | --- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000 |
3117 | +++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-15 01:35:02 +0000 |
3118 | @@ -0,0 +1,48 @@ |
3119 | +import os |
3120 | +import urllib2 |
3121 | +from charmhelpers.fetch import ( |
3122 | + BaseFetchHandler, |
3123 | + UnhandledSource |
3124 | +) |
3125 | +from charmhelpers.payload.archive import ( |
3126 | + get_archive_handler, |
3127 | + extract, |
3128 | +) |
3129 | +from charmhelpers.core.host import mkdir |
3130 | + |
3131 | + |
3132 | +class ArchiveUrlFetchHandler(BaseFetchHandler): |
3133 | + """Handler for archives via generic URLs""" |
3134 | + def can_handle(self, source): |
3135 | + url_parts = self.parse_url(source) |
3136 | + if url_parts.scheme not in ('http', 'https', 'ftp', 'file'): |
3137 | + return "Wrong source type" |
3138 | + if get_archive_handler(self.base_url(source)): |
3139 | + return True |
3140 | + return False |
3141 | + |
3142 | + def download(self, source, dest): |
3143 | + # propagate all exceptions
3144 | + # URLError, OSError, etc |
3145 | + response = urllib2.urlopen(source) |
3146 | + try: |
3147 | + with open(dest, 'w') as dest_file: |
3148 | + dest_file.write(response.read()) |
3149 | + except Exception as e: |
3150 | + if os.path.isfile(dest): |
3151 | + os.unlink(dest) |
3152 | + raise e |
3153 | + |
3154 | + def install(self, source): |
3155 | + url_parts = self.parse_url(source) |
3156 | + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') |
3157 | + if not os.path.exists(dest_dir): |
3158 | + mkdir(dest_dir, perms=0755) |
3159 | + dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) |
3160 | + try: |
3161 | + self.download(source, dld_file) |
3162 | + except urllib2.URLError as e: |
3163 | + raise UnhandledSource(e.reason) |
3164 | + except OSError as e: |
3165 | + raise UnhandledSource(e.strerror) |
3166 | + return extract(dld_file) |
3167 | |
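The archive handler in isolation, as a hedged sketch (the URL below is hypothetical and CHARM_DIR must point somewhere writable):

    import os
    from charmhelpers.fetch.archiveurl import ArchiveUrlFetchHandler

    os.environ.setdefault('CHARM_DIR', '/tmp/example-charm')
    handler = ArchiveUrlFetchHandler()

    src = 'http://example.com/payload.tar.gz'   # hypothetical archive URL
    # can_handle() is True only for http/https/ftp/file URLs whose basename
    # looks like a supported archive; otherwise it returns an explanatory string.
    if handler.can_handle(src) is True:
        # Downloads to $CHARM_DIR/fetched/payload.tar.gz, extracts it and
        # returns the extraction path.
        print handler.install(src)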
3168 | === added file 'hooks/charmhelpers/fetch/bzrurl.py' |
3169 | --- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000 |
3170 | +++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-15 01:35:02 +0000 |
3171 | @@ -0,0 +1,49 @@ |
3172 | +import os |
3173 | +from charmhelpers.fetch import ( |
3174 | + BaseFetchHandler, |
3175 | + UnhandledSource |
3176 | +) |
3177 | +from charmhelpers.core.host import mkdir |
3178 | + |
3179 | +try: |
3180 | + from bzrlib.branch import Branch |
3181 | +except ImportError: |
3182 | + from charmhelpers.fetch import apt_install |
3183 | + apt_install("python-bzrlib") |
3184 | + from bzrlib.branch import Branch |
3185 | + |
3186 | +class BzrUrlFetchHandler(BaseFetchHandler): |
3187 | + """Handler for bazaar branches via generic and lp URLs""" |
3188 | + def can_handle(self, source): |
3189 | + url_parts = self.parse_url(source) |
3190 | + if url_parts.scheme not in ('bzr+ssh', 'lp'): |
3191 | + return False |
3192 | + else: |
3193 | + return True |
3194 | + |
3195 | + def branch(self, source, dest): |
3196 | + url_parts = self.parse_url(source) |
3197 | + # If we use lp:branchname scheme we need to load plugins |
3198 | + if not self.can_handle(source): |
3199 | + raise UnhandledSource("Cannot handle {}".format(source)) |
3200 | + if url_parts.scheme == "lp": |
3201 | + from bzrlib.plugin import load_plugins |
3202 | + load_plugins() |
3203 | + try: |
3204 | + remote_branch = Branch.open(source) |
3205 | + remote_branch.bzrdir.sprout(dest).open_branch() |
3206 | + except Exception as e: |
3207 | + raise e |
3208 | + |
3209 | + def install(self, source): |
3210 | + url_parts = self.parse_url(source) |
3211 | + branch_name = url_parts.path.strip("/").split("/")[-1] |
3212 | + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name) |
3213 | + if not os.path.exists(dest_dir): |
3214 | + mkdir(dest_dir, perms=0755) |
3215 | + try: |
3216 | + self.branch(source, dest_dir) |
3217 | + except OSError as e: |
3218 | + raise UnhandledSource(e.strerror) |
3219 | + return dest_dir |
3220 | + |
3221 | |
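And the bzr handler, sketched under the assumption that python-bzrlib is already installed; the lp: branch shown is a placeholder.

    import os
    from charmhelpers.fetch.bzrurl import BzrUrlFetchHandler

    os.environ.setdefault('CHARM_DIR', '/tmp/example-charm')
    handler = BzrUrlFetchHandler()

    src = 'lp:~example/some-project/trunk'      # hypothetical branch URL
    if handler.can_handle(src):                 # only bzr+ssh and lp schemes
        # Branches into $CHARM_DIR/fetched/trunk and returns that directory.
        print handler.install(src)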
3222 | === added directory 'hooks/charmhelpers/payload' |
3223 | === added file 'hooks/charmhelpers/payload/__init__.py' |
3224 | --- hooks/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000 |
3225 | +++ hooks/charmhelpers/payload/__init__.py 2013-10-15 01:35:02 +0000 |
3226 | @@ -0,0 +1,1 @@ |
3227 | +"Tools for working with files injected into a charm just before deployment." |
3228 | |
3229 | === added file 'hooks/charmhelpers/payload/execd.py' |
3230 | --- hooks/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000 |
3231 | +++ hooks/charmhelpers/payload/execd.py 2013-10-15 01:35:02 +0000 |
3232 | @@ -0,0 +1,50 @@ |
3233 | +#!/usr/bin/env python |
3234 | + |
3235 | +import os |
3236 | +import sys |
3237 | +import subprocess |
3238 | +from charmhelpers.core import hookenv |
3239 | + |
3240 | + |
3241 | +def default_execd_dir(): |
3242 | + return os.path.join(os.environ['CHARM_DIR'], 'exec.d') |
3243 | + |
3244 | + |
3245 | +def execd_module_paths(execd_dir=None): |
3246 | + """Generate a list of full paths to modules within execd_dir.""" |
3247 | + if not execd_dir: |
3248 | + execd_dir = default_execd_dir() |
3249 | + |
3250 | + if not os.path.exists(execd_dir): |
3251 | + return |
3252 | + |
3253 | + for subpath in os.listdir(execd_dir): |
3254 | + module = os.path.join(execd_dir, subpath) |
3255 | + if os.path.isdir(module): |
3256 | + yield module |
3257 | + |
3258 | + |
3259 | +def execd_submodule_paths(command, execd_dir=None): |
3260 | + """Generate a list of full paths to the specified command within execd_dir.
3261 | + """ |
3262 | + for module_path in execd_module_paths(execd_dir): |
3263 | + path = os.path.join(module_path, command) |
3264 | + if os.access(path, os.X_OK) and os.path.isfile(path): |
3265 | + yield path |
3266 | + |
3267 | + |
3268 | +def execd_run(command, execd_dir=None, die_on_error=False, stderr=None): |
3269 | + """Run command for each module within execd_dir which defines it.""" |
3270 | + for submodule_path in execd_submodule_paths(command, execd_dir): |
3271 | + try: |
3272 | + subprocess.check_call(submodule_path, shell=True, stderr=stderr) |
3273 | + except subprocess.CalledProcessError as e: |
3274 | + hookenv.log("Error ({}) running {}. Output: {}".format( |
3275 | + e.returncode, e.cmd, e.output)) |
3276 | + if die_on_error: |
3277 | + sys.exit(e.returncode) |
3278 | + |
3279 | + |
3280 | +def execd_preinstall(execd_dir=None): |
3281 | + """Run charm-pre-install for each module within execd_dir.""" |
3282 | + execd_run('charm-pre-install', execd_dir=execd_dir) |
3283 | |
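A sketch of the layout execd_preinstall() consumes; the module name below is made up for illustration.

    # Expected layout under the charm directory:
    #
    #   $CHARM_DIR/exec.d/extra-repos/charm-pre-install   (must be executable)
    #
    # execd_preinstall() walks each directory under exec.d and runs its
    # charm-pre-install script, so payloads injected at deploy time can run
    # before the charm installs any packages.
    from charmhelpers.payload.execd import execd_preinstall

    execd_preinstall()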
3284 | === modified symlink 'hooks/cluster-relation-changed' |
3285 | === target changed u'glance-relations' => u'glance_relations.py' |
3286 | === modified symlink 'hooks/cluster-relation-departed' |
3287 | === target changed u'glance-relations' => u'glance_relations.py' |
3288 | === modified symlink 'hooks/config-changed' |
3289 | === target changed u'glance-relations' => u'glance_relations.py' |
3290 | === removed file 'hooks/glance-common' |
3291 | --- hooks/glance-common 2013-06-03 18:39:29 +0000 |
3292 | +++ hooks/glance-common 1970-01-01 00:00:00 +0000 |
3293 | @@ -1,133 +0,0 @@ |
3294 | -#!/bin/bash |
3295 | - |
3296 | -CHARM="glance" |
3297 | - |
3298 | -SERVICES="glance-api glance-registry" |
3299 | -PACKAGES="glance python-mysqldb python-swift python-keystone uuid haproxy" |
3300 | - |
3301 | -GLANCE_REGISTRY_CONF="/etc/glance/glance-registry.conf" |
3302 | -GLANCE_REGISTRY_PASTE_INI="/etc/glance/glance-registry-paste.ini" |
3303 | -GLANCE_API_CONF="/etc/glance/glance-api.conf" |
3304 | -GLANCE_API_PASTE_INI="/etc/glance/glance-api-paste.ini" |
3305 | -CONF_DIR="/etc/glance" |
3306 | -HOOKS_DIR="$CHARM_DIR/hooks" |
3307 | - |
3308 | -# Flag used to track config changes. |
3309 | -CONFIG_CHANGED="False" |
3310 | -if [[ -e "$HOOKS_DIR/lib/openstack-common" ]] ; then |
3311 | - . $HOOKS_DIR/lib/openstack-common |
3312 | -else |
3313 | - juju-log "ERROR: Couldn't load $HOOKS_DIR/lib/openstack-common." && exit 1 |
3314 | -fi |
3315 | - |
3316 | -function set_or_update { |
3317 | - local key="$1" |
3318 | - local value="$2" |
3319 | - local file="$3" |
3320 | - local section="$4" |
3321 | - local conf="" |
3322 | - [[ -z $key ]] && juju-log "ERROR: set_or_update(): value $value missing key" \ |
3323 | - && exit 1 |
3324 | - [[ -z $value ]] && juju-log "ERROR: set_or_update(): key $key missing value" \ |
3325 | - && exit 1 |
3326 | - |
3327 | - case "$file" in |
3328 | - "api") conf=$GLANCE_API_CONF ;; |
3329 | - "api-paste") conf=$GLANCE_API_PASTE_INI ;; |
3330 | - "registry") conf=$GLANCE_REGISTRY_CONF ;; |
3331 | - "registry-paste") conf=$GLANCE_REGISTRY_PASTE_INI ;; |
3332 | - *) juju-log "ERROR: set_or_update(): Invalid or no config file specified." \ |
3333 | - && exit 1 ;; |
3334 | - esac |
3335 | - |
3336 | - [[ ! -e $conf ]] && juju-log "ERROR: set_or_update(): File not found $conf" \ |
3337 | - && exit 1 |
3338 | - |
3339 | - if [[ "$(local_config_get "$conf" "$key" "$section")" == "$value" ]] ; then |
3340 | - juju-log "$CHARM: set_or_update(): $key=$value already set in $conf." |
3341 | - return 0 |
3342 | - fi |
3343 | - |
3344 | - cfg_set_or_update "$key" "$value" "$conf" "$section" |
3345 | - CONFIG_CHANGED="True" |
3346 | -} |
3347 | - |
3348 | -do_openstack_upgrade() { |
3349 | - # update openstack components to those provided by a new installation source |
3350 | - # it is assumed the calling hook has confirmed that the upgrade is sane. |
3351 | - local rel="$1" |
3352 | - shift |
3353 | - local packages=$@ |
3354 | - orig_os_rel=$(get_os_codename_package "glance-common") |
3355 | - new_rel=$(get_os_codename_install_source "$rel") |
3356 | - |
3357 | - # Backup the config directory. |
3358 | - local stamp=$(date +"%Y%m%d%M%S") |
3359 | - tar -pcf /var/lib/juju/$CHARM-backup-$stamp.tar $CONF_DIR |
3360 | - |
3361 | - # Setup apt repository access and kick off the actual package upgrade. |
3362 | - configure_install_source "$rel" |
3363 | - apt-get update |
3364 | - DEBIAN_FRONTEND=noninteractive apt-get --option Dpkg::Options::=--force-confnew -y \ |
3365 | - install --no-install-recommends $packages |
3366 | - |
3367 | - # Update the new config files for existing relations. |
3368 | - local r_id="" |
3369 | - |
3370 | - r_id=$(relation-ids shared-db) |
3371 | - if [[ -n "$r_id" ]] ; then |
3372 | - juju-log "$CHARM: Configuring database after upgrade to $rel." |
3373 | - db_changed $r_id |
3374 | - fi |
3375 | - |
3376 | - r_id=$(relation-ids identity-service) |
3377 | - if [[ -n "$r_id" ]] ; then |
3378 | - juju-log "$CHARM: Configuring identity service after upgrade to $rel." |
3379 | - keystone_changed $r_id |
3380 | - fi |
3381 | - |
3382 | - local ceph_ids="$(relation-ids ceph)" |
3383 | - [[ -n "$ceph_ids" ]] && apt-get -y install ceph-common python-ceph |
3384 | - for r_id in $ceph_ids ; do |
3385 | - for unit in $(relation-list -r $r_id) ; do |
3386 | - ceph_changed "$r_id" "$unit" |
3387 | - done |
3388 | - done |
3389 | - |
3390 | - [[ -n "$(relation-ids object-store)" ]] && object-store_joined |
3391 | -} |
3392 | - |
3393 | -configure_https() { |
3394 | - # request openstack-common setup reverse proxy mapping for API and registry |
3395 | - # servers |
3396 | - service_ctl glance-api stop |
3397 | - if [[ -n "$(peer_units)" ]] || is_clustered ; then |
3398 | - # haproxy may already be configured. need to push it back in the request |
3399 | - # pipeline in preparation for a change from: |
3400 | - # from: haproxy (9292) -> glance_api (9282) |
3401 | - # to: ssl (9292) -> haproxy (9291) -> glance_api (9272) |
3402 | - local next_server=$(determine_haproxy_port 9292) |
3403 | - local api_port=$(determine_api_port 9292) |
3404 | - configure_haproxy "glance_api:$next_server:$api_port" |
3405 | - else |
3406 | - # if not clustered, the glance-api is next in the pipeline. |
3407 | - local api_port=$(determine_api_port 9292) |
3408 | - local next_server=$api_port |
3409 | - fi |
3410 | - |
3411 | - # setup https to point to either haproxy or directly to api server, depending. |
3412 | - setup_https 9292:$next_server |
3413 | - |
3414 | - # configure servers to listen on new ports accordingly. |
3415 | - set_or_update bind_port "$api_port" "api" |
3416 | - service_ctl all start |
3417 | - |
3418 | - local r_id="" |
3419 | - # (re)configure ks endpoint accordingly in ks and nova. |
3420 | - for r_id in $(relation-ids identity-service) ; do |
3421 | - keystone_joined "$r_id" |
3422 | - done |
3423 | - for r_id in $(relation-ids image-service) ; do |
3424 | - image-service_joined "$r_id" |
3425 | - done |
3426 | -} |
3427 | |
3428 | === removed file 'hooks/glance-relations' |
3429 | --- hooks/glance-relations 2013-09-18 18:40:06 +0000 |
3430 | +++ hooks/glance-relations 1970-01-01 00:00:00 +0000 |
3431 | @@ -1,464 +0,0 @@ |
3432 | -#!/bin/bash -e |
3433 | - |
3434 | -HOOKS_DIR="$CHARM_DIR/hooks" |
3435 | -ARG0=${0##*/} |
3436 | - |
3437 | -if [[ -e $HOOKS_DIR/glance-common ]] ; then |
3438 | - . $HOOKS_DIR/glance-common |
3439 | -else |
3440 | - echo "ERROR: Could not load glance-common from $HOOKS_DIR" |
3441 | -fi |
3442 | - |
3443 | -function install_hook { |
3444 | - juju-log "Installing glance packages" |
3445 | - apt-get -y install python-software-properties || exit 1 |
3446 | - |
3447 | - configure_install_source "$(config-get openstack-origin)" |
3448 | - |
3449 | - apt-get update || exit 1 |
3450 | - apt-get -y install $PACKAGES || exit 1 |
3451 | - |
3452 | - service_ctl all stop |
3453 | - |
3454 | - # TODO: Make debug logging a config option. |
3455 | - set_or_update verbose True api |
3456 | - set_or_update debug True api |
3457 | - set_or_update verbose True registry |
3458 | - set_or_update debug True registry |
3459 | - |
3460 | - configure_https |
3461 | -} |
3462 | - |
3463 | -function db_joined { |
3464 | - local glance_db=$(config-get glance-db) |
3465 | - local db_user=$(config-get db-user) |
3466 | - local hostname=$(unit-get private-address) |
3467 | - juju-log "$CHARM - db_joined: requesting database access to $glance_db for "\ |
3468 | - "$db_user@$hostname" |
3469 | - relation-set database=$glance_db username=$db_user hostname=$hostname |
3470 | -} |
3471 | - |
3472 | -function db_changed { |
3473 | - # serves as the main shared-db changed hook but may also be called with a |
3474 | - # relation-id to configure new config files for existing relations. |
3475 | - local r_id="$1" |
3476 | - local r_args="" |
3477 | - if [[ -n "$r_id" ]] ; then |
3478 | - # set up environment for an existing relation to a single unit. |
3479 | - export JUJU_REMOTE_UNIT=$(relation-list -r $r_id | head -n1) |
3480 | - export JUJU_RELATION="shared-db" |
3481 | - export JUJU_RELATION_ID="$r_id" |
3482 | - local r_args="-r $JUJU_RELATION_ID" |
3483 | - juju-log "$CHARM - db_changed: Running hook for existing relation to "\ |
3484 | - "$JUJU_REMOTE_UNIT-$JUJU_RELATION_ID" |
3485 | - fi |
3486 | - |
3487 | - local db_host=$(relation-get $r_args db_host) |
3488 | - local db_password=$(relation-get $r_args password) |
3489 | - |
3490 | - if [[ -z "$db_host" ]] || [[ -z "$db_password" ]] ; then |
3491 | - juju-log "$CHARM - db_changed: db_host||db_password set, will retry." |
3492 | - exit 0 |
3493 | - fi |
3494 | - |
3495 | - local glance_db=$(config-get glance-db) |
3496 | - local db_user=$(config-get db-user) |
3497 | - local rel=$(get_os_codename_package glance-common) |
3498 | - |
3499 | - if [[ -n "$r_id" ]] ; then |
3500 | - unset JUJU_REMOTE_UNIT JUJU_RELATION JUJU_RELATION_ID |
3501 | - fi |
3502 | - |
3503 | - juju-log "$CHARM - db_changed: Configuring glance.conf for access to $glance_db" |
3504 | - |
3505 | - set_or_update sql_connection "mysql://$db_user:$db_password@$db_host/$glance_db" registry |
3506 | - |
3507 | - # since folsom, a db connection setting in glance-api.conf is required. |
3508 | - [[ "$rel" != "essex" ]] && |
3509 | - set_or_update sql_connection "mysql://$db_user:$db_password@$db_host/$glance_db" api |
3510 | - |
3511 | - if eligible_leader 'res_glance_vip'; then |
3512 | - if [[ "$rel" == "essex" ]] ; then |
3513 | - # Essex required initializing new databases to version 0 |
3514 | - if ! glance-manage db_version >/dev/null 2>&1; then |
3515 | - juju-log "Setting glance database version to 0" |
3516 | - glance-manage version_control 0 |
3517 | - fi |
3518 | - fi |
3519 | - juju-log "$CHARM - db_changed: Running database migrations for $rel." |
3520 | - glance-manage db_sync |
3521 | - fi |
3522 | - service_ctl all restart |
3523 | -} |
3524 | - |
3525 | -function image-service_joined { |
3526 | - # Check to see if unit is potential leader |
3527 | - local r_id="$1" |
3528 | - [[ -n "$r_id" ]] && r_id="-r $r_id" |
3529 | - eligible_leader 'res_glance_vip' || return 0 |
3530 | - https && scheme="https" || scheme="http" |
3531 | - is_clustered && local host=$(config-get vip) || |
3532 | - local host=$(unit-get private-address) |
3533 | - url="$scheme://$host:9292" |
3534 | - juju-log "glance: image-service_joined: To peer glance-api-server=$url" |
3535 | - relation-set $r_id glance-api-server=$url |
3536 | -} |
3537 | - |
3538 | -function object-store_joined { |
3539 | - local relids="$(relation-ids identity-service)" |
3540 | - [[ -z "$relids" ]] && \ |
3541 | - juju-log "$CHARM: Deferring swift store configuration until " \ |
3542 | - "an identity-service relation exists." && exit 0 |
3543 | - |
3544 | - set_or_update default_store swift api |
3545 | - set_or_update swift_store_create_container_on_put true api |
3546 | - |
3547 | - for relid in $relids ; do |
3548 | - local unit=$(relation-list -r $relid) |
3549 | - local svc_tenant=$(relation-get -r $relid service_tenant $unit) |
3550 | - local svc_username=$(relation-get -r $relid service_username $unit) |
3551 | - local svc_password=$(relation-get -r $relid service_password $unit) |
3552 | - local auth_host=$(relation-get -r $relid private-address $unit) |
3553 | - local port=$(relation-get -r $relid service_port $unit) |
3554 | - local auth_url="" |
3555 | - |
3556 | - [[ -n "$auth_host" ]] && [[ -n "$port" ]] && |
3557 | - auth_url="http://$auth_host:$port/v2.0/" |
3558 | - |
3559 | - [[ -n "$svc_tenant" ]] && [[ -n "$svc_username" ]] && |
3560 | - set_or_update swift_store_user "$svc_tenant:$svc_username" api |
3561 | - [[ -n "$svc_password" ]] && |
3562 | - set_or_update swift_store_key "$svc_password" api |
3563 | - [[ -n "$auth_url" ]] && |
3564 | - set_or_update swift_store_auth_address "$auth_url" api |
3565 | - done |
3566 | - service_ctl glance-api restart |
3567 | -} |
3568 | - |
3569 | -function object-store_changed { |
3570 | - exit 0 |
3571 | -} |
3572 | - |
3573 | -function ceph_joined { |
3574 | - mkdir -p /etc/ceph |
3575 | - apt-get -y install ceph-common python-ceph || exit 1 |
3576 | -} |
3577 | - |
3578 | -function ceph_changed { |
3579 | - local r_id="$1" |
3580 | - local unit_id="$2" |
3581 | - local r_arg="" |
3582 | - [[ -n "$r_id" ]] && r_arg="-r $r_id" |
3583 | - SERVICE_NAME=`echo $JUJU_UNIT_NAME | cut -d / -f 1` |
3584 | - KEYRING=/etc/ceph/ceph.client.$SERVICE_NAME.keyring |
3585 | - KEY=`relation-get $r_arg key $unit_id` |
3586 | - if [ -n "$KEY" ]; then |
3587 | - # But only once |
3588 | - if [ ! -f $KEYRING ]; then |
3589 | - ceph-authtool $KEYRING \ |
3590 | - --create-keyring --name=client.$SERVICE_NAME \ |
3591 | - --add-key="$KEY" |
3592 | - chmod +r $KEYRING |
3593 | - fi |
3594 | - else |
3595 | - # No key - bail for the time being |
3596 | - exit 0 |
3597 | - fi |
3598 | - |
3599 | - MONS=`relation-list $r_arg` |
3600 | - mon_hosts="" |
3601 | - for mon in $MONS; do |
3602 | - mon_hosts="$mon_hosts $(get_ip $(relation-get $r_arg private-address $mon)):6789," |
3603 | - done |
3604 | - cat > /etc/ceph/ceph.conf << EOF |
3605 | -[global] |
3606 | - auth supported = $(relation-get $r_arg auth $unit_id) |
3607 | - keyring = /etc/ceph/\$cluster.\$name.keyring |
3608 | - mon host = $mon_hosts |
3609 | -EOF |
3610 | - |
3611 | - # Create the images pool if it does not already exist |
3612 | - if ! rados --id $SERVICE_NAME lspools | grep -q images; then |
3613 | - local num_osds=$(ceph --id $SERVICE_NAME osd ls| egrep "[^\s]"| wc -l) |
3614 | - local cfg_key='ceph-osd-replication-count' |
3615 | - local rep_count="$(config-get $cfg_key)" |
3616 | - if [ -z "$rep_count" ] |
3617 | - then |
3618 | - rep_count=2 |
3619 | - juju-log "config returned empty string for $cfg_key - using value of 2" |
3620 | - fi |
3621 | - local num_pgs=$(((num_osds*100)/rep_count)) |
3622 | - ceph --id $SERVICE_NAME osd pool create images $num_pgs $num_pgs |
3623 | - ceph --id $SERVICE_NAME osd pool set images size $rep_count |
3624 | - # TODO: set appropriate crush ruleset |
3625 | - fi |
3626 | - |
3627 | - # Configure glance for ceph storage options |
3628 | - set_or_update default_store rbd api |
3629 | - set_or_update rbd_store_ceph_conf /etc/ceph/ceph.conf api |
3630 | - set_or_update rbd_store_user $SERVICE_NAME api |
3631 | - set_or_update rbd_store_pool images api |
3632 | - set_or_update rbd_store_chunk_size 8 api |
3633 | - # This option only applies to Grizzly. |
3634 | - [ "`get_os_codename_package "glance-common"`" = "grizzly" ] && \ |
3635 | - set_or_update show_image_direct_url 'True' api |
3636 | - |
3637 | - service_ctl glance-api restart |
3638 | -} |
3639 | - |
3640 | -function keystone_joined { |
3641 | - # Leadership check |
3642 | - eligible_leader 'res_glance_vip' || return 0 |
3643 | - local r_id="$1" |
3644 | - [[ -n "$r_id" ]] && r_id=" -r $r_id" |
3645 | - |
3646 | - # determine correct endpoint URL |
3647 | - https && scheme="https" || scheme="http" |
3648 | - is_clustered && local host=$(config-get vip) || |
3649 | - local host=$(unit-get private-address) |
3650 | - url="$scheme://$host:9292" |
3651 | - |
3652 | - # advertise our API endpoint to keystone |
3653 | - relation-set service="glance" \ |
3654 | - region="$(config-get region)" public_url=$url admin_url=$url internal_url=$url |
3655 | -} |
3656 | - |
3657 | -function keystone_changed { |
3658 | - # serves as the main identity-service changed hook, but may also be called |
3659 | - # with a relation-id to configure new config files for existing relations. |
3660 | - local r_id="$1" |
3661 | - local r_args="" |
3662 | - if [[ -n "$r_id" ]] ; then |
3663 | - # set up environment for an existing relation to a single unit. |
3664 | - export JUJU_REMOTE_UNIT=$(relation-list -r $r_id | head -n1) |
3665 | - export JUJU_RELATION="identity-service" |
3666 | - export JUJU_RELATION_ID="$r_id" |
3667 | - local r_args="-r $JUJU_RELATION_ID" |
3668 | - juju-log "$CHARM - db_changed: Running hook for existing relation to "\ |
3669 | - "$JUJU_REMOTE_UNIT-$JUJU_RELATION_ID" |
3670 | - fi |
3671 | - |
3672 | - token=$(relation-get $r_args $r_args admin_token) |
3673 | - service_port=$(relation-get $r_args service_port) |
3674 | - auth_port=$(relation-get $r_args auth_port) |
3675 | - service_username=$(relation-get $r_args service_username) |
3676 | - service_password=$(relation-get $r_args service_password) |
3677 | - service_tenant=$(relation-get $r_args service_tenant) |
3678 | - [[ -z "$token" ]] || [[ -z "$service_port" ]] || [[ -z "$auth_port" ]] || |
3679 | - [[ -z "$service_username" ]] || [[ -z "$service_password" ]] || |
3680 | - [[ -z "$service_tenant" ]] && juju-log "keystone_changed: Peer not ready" && |
3681 | - exit 0 |
3682 | - [[ "$token" == "-1" ]] && |
3683 | - juju-log "keystone_changed: admin token error" && exit 1 |
3684 | - juju-log "keystone_changed: Acquired admin. token" |
3685 | - keystone_host=$(relation-get $r_args auth_host) |
3686 | - |
3687 | - if [[ -n "$r_id" ]] ; then |
3688 | - unset JUJU_REMOTE_UNIT JUJU_RELATION JUJU_RELATION_ID |
3689 | - fi |
3690 | - |
3691 | - set_or_update "flavor" "keystone" "api" "paste_deploy" |
3692 | - set_or_update "flavor" "keystone" "registry" "paste_deploy" |
3693 | - |
3694 | - local sect="filter:authtoken" |
3695 | - for i in api-paste registry-paste ; do |
3696 | - set_or_update "service_host" "$keystone_host" $i $sect |
3697 | - set_or_update "service_port" "$service_port" $i $sect |
3698 | - set_or_update "auth_host" "$keystone_host" $i $sect |
3699 | - set_or_update "auth_port" "$auth_port" $i $sect |
3700 | - set_or_update "auth_uri" "http://$keystone_host:$service_port/" $i $sect |
3701 | - set_or_update "admin_token" "$token" $i $sect |
3702 | - set_or_update "admin_tenant_name" "$service_tenant" $i $sect |
3703 | - set_or_update "admin_user" "$service_username" $i $sect |
3704 | - set_or_update "admin_password" "$service_password" $i $sect |
3705 | - done |
3706 | - service_ctl all restart |
3707 | - |
3708 | - # Configure any object-store / swift relations now that we have an |
3709 | - # identity-service |
3710 | - if [[ -n "$(relation-ids object-store)" ]] ; then |
3711 | - object-store_joined |
3712 | - fi |
3713 | - |
3714 | - # possibly configure HTTPS for API and registry |
3715 | - configure_https |
3716 | -} |
3717 | - |
3718 | -function config_changed() { |
3719 | - # Determine whether or not we should do an upgrade, based on whether or not |
3720 | - # the version offered in openstack-origin is greater than what is installed. |
3721 | - |
3722 | - local install_src=$(config-get openstack-origin) |
3723 | - local cur=$(get_os_codename_package "glance-common") |
3724 | - local available=$(get_os_codename_install_source "$install_src") |
3725 | - |
3726 | - if [[ "$available" != "unknown" ]] ; then |
3727 | - if dpkg --compare-versions $(get_os_version_codename "$cur") lt \ |
3728 | - $(get_os_version_codename "$available") ; then |
3729 | - juju-log "$CHARM: Upgrading OpenStack release: $cur -> $available." |
3730 | - do_openstack_upgrade "$install_src" $PACKAGES |
3731 | - fi |
3732 | - fi |
3733 | - configure_https |
3734 | - service_ctl all restart |
3735 | - |
3736 | - # Save our scriptrc env variables for health checks |
3737 | - declare -a env_vars=( |
3738 | - "OPENSTACK_PORT_MCASTPORT=$(config-get ha-mcastport)" |
3739 | - 'OPENSTACK_SERVICE_API=glance-api' |
3740 | - 'OPENSTACK_SERVICE_REGISTRY=glance-registry') |
3741 | - save_script_rc ${env_vars[@]} |
3742 | -} |
3743 | - |
3744 | -function cluster_changed() { |
3745 | - configure_haproxy "glance_api:9292" |
3746 | -} |
3747 | - |
3748 | -function upgrade_charm() { |
3749 | - cluster_changed |
3750 | -} |
3751 | - |
3752 | -function ha_relation_joined() { |
3753 | - local corosync_bindiface=`config-get ha-bindiface` |
3754 | - local corosync_mcastport=`config-get ha-mcastport` |
3755 | - local vip=`config-get vip` |
3756 | - local vip_iface=`config-get vip_iface` |
3757 | - local vip_cidr=`config-get vip_cidr` |
3758 | - if [ -n "$vip" ] && [ -n "$vip_iface" ] && \ |
3759 | - [ -n "$vip_cidr" ] && [ -n "$corosync_bindiface" ] && \ |
3760 | - [ -n "$corosync_mcastport" ]; then |
3761 | - # TODO: This feels horrible but the data required by the hacluster |
3762 | - # charm is quite complex and is python ast parsed. |
3763 | - resources="{ |
3764 | -'res_glance_vip':'ocf:heartbeat:IPaddr2', |
3765 | -'res_glance_haproxy':'lsb:haproxy' |
3766 | -}" |
3767 | - resource_params="{ |
3768 | -'res_glance_vip': 'params ip=\"$vip\" cidr_netmask=\"$vip_cidr\" nic=\"$vip_iface\"', |
3769 | -'res_glance_haproxy': 'op monitor interval=\"5s\"' |
3770 | -}" |
3771 | - init_services="{ |
3772 | -'res_glance_haproxy':'haproxy' |
3773 | -}" |
3774 | - groups="{ |
3775 | -'grp_glance_haproxy':'res_glance_vip res_glance_haproxy' |
3776 | -}" |
3777 | - relation-set corosync_bindiface=$corosync_bindiface \ |
3778 | - corosync_mcastport=$corosync_mcastport \ |
3779 | - resources="$resources" resource_params="$resource_params" \ |
3780 | - init_services="$init_services" groups="$groups" |
3781 | - else |
3782 | - juju-log "Insufficient configuration data to configure hacluster" |
3783 | - exit 1 |
3784 | - fi |
3785 | -} |
3786 | - |
3787 | -function ha_relation_changed() { |
3788 | - local clustered=`relation-get clustered` |
3789 | - if [ -n "$clustered" ] && is_leader 'res_glance_vip'; then |
3790 | - local port=$((9292 + 10000)) |
3791 | - local host=$(config-get vip) |
3792 | - local url="http://$host:$port" |
3793 | - for r_id in `relation-ids identity-service`; do |
3794 | - relation-set -r $r_id service="glance" \ |
3795 | - region="$(config-get region)" \ |
3796 | - public_url="$url" admin_url="$url" internal_url="$url" |
3797 | - done |
3798 | - for r_id in `relation-ids image-service`; do |
3799 | - relation-set -r $r_id \ |
3800 | - glance-api-server="$host:$port" |
3801 | - done |
3802 | - fi |
3803 | -} |
3804 | - |
3805 | - |
3806 | -function cluster_changed() { |
3807 | - [[ -z "$(peer_units)" ]] && |
3808 | - juju-log "cluster_changed() with no peers." && exit 0 |
3809 | - local haproxy_port=$(determine_haproxy_port 9292) |
3810 | - local backend_port=$(determine_api_port 9292) |
3811 | - service glance-api stop |
3812 | - configure_haproxy "glance_api:$haproxy_port:$backend_port" |
3813 | - set_or_update bind_port "$backend_port" "api" |
3814 | - service glance-api start |
3815 | -} |
3816 | - |
3817 | -function upgrade_charm() { |
3818 | - cluster_changed |
3819 | -} |
3820 | - |
3821 | -function ha_relation_joined() { |
3822 | - local corosync_bindiface=`config-get ha-bindiface` |
3823 | - local corosync_mcastport=`config-get ha-mcastport` |
3824 | - local vip=`config-get vip` |
3825 | - local vip_iface=`config-get vip_iface` |
3826 | - local vip_cidr=`config-get vip_cidr` |
3827 | - if [ -n "$vip" ] && [ -n "$vip_iface" ] && \ |
3828 | - [ -n "$vip_cidr" ] && [ -n "$corosync_bindiface" ] && \ |
3829 | - [ -n "$corosync_mcastport" ]; then |
3830 | - # TODO: This feels horrible but the data required by the hacluster |
3831 | - # charm is quite complex and is python ast parsed. |
3832 | - resources="{ |
3833 | -'res_glance_vip':'ocf:heartbeat:IPaddr2', |
3834 | -'res_glance_haproxy':'lsb:haproxy' |
3835 | -}" |
3836 | - resource_params="{ |
3837 | -'res_glance_vip': 'params ip=\"$vip\" cidr_netmask=\"$vip_cidr\" nic=\"$vip_iface\"', |
3838 | -'res_glance_haproxy': 'op monitor interval=\"5s\"' |
3839 | -}" |
3840 | - init_services="{ |
3841 | -'res_glance_haproxy':'haproxy' |
3842 | -}" |
3843 | - clones="{ |
3844 | -'cl_glance_haproxy': 'res_glance_haproxy' |
3845 | -}" |
3846 | - relation-set corosync_bindiface=$corosync_bindiface \ |
3847 | - corosync_mcastport=$corosync_mcastport \ |
3848 | - resources="$resources" resource_params="$resource_params" \ |
3849 | - init_services="$init_services" clones="$clones" |
3850 | - else |
3851 | - juju-log "Insufficient configuration data to configure hacluster" |
3852 | - exit 1 |
3853 | - fi |
3854 | -} |
3855 | - |
3856 | -function ha_relation_changed() { |
3857 | - local clustered=`relation-get clustered` |
3858 | - if [ -n "$clustered" ] && is_leader 'res_glance_vip'; then |
3859 | - local host=$(config-get vip) |
3860 | - https && local scheme="https" || local scheme="http" |
3861 | - local url="$scheme://$host:9292" |
3862 | - |
3863 | - for r_id in `relation-ids identity-service`; do |
3864 | - relation-set -r $r_id service="glance" \ |
3865 | - region="$(config-get region)" \ |
3866 | - public_url="$url" admin_url="$url" internal_url="$url" |
3867 | - done |
3868 | - for r_id in `relation-ids image-service`; do |
3869 | - relation-set -r $r_id \ |
3870 | - glance-api-server="$scheme://$host:9292" |
3871 | - done |
3872 | - fi |
3873 | -} |
3874 | - |
3875 | - |
3876 | -case $ARG0 in |
3877 | - "start"|"stop") service_ctl all $ARG0 ;; |
3878 | - "install") install_hook ;; |
3879 | - "config-changed") config_changed ;; |
3880 | - "shared-db-relation-joined") db_joined ;; |
3881 | - "shared-db-relation-changed") db_changed;; |
3882 | - "image-service-relation-joined") image-service_joined ;; |
3883 | - "image-service-relation-changed") exit 0 ;; |
3884 | - "object-store-relation-joined") object-store_joined ;; |
3885 | - "object-store-relation-changed") object-store_changed ;; |
3886 | - "identity-service-relation-joined") keystone_joined ;; |
3887 | - "identity-service-relation-changed") keystone_changed ;; |
3888 | - "ceph-relation-joined") ceph_joined;; |
3889 | - "ceph-relation-changed") ceph_changed;; |
3890 | - "cluster-relation-changed") cluster_changed ;; |
3891 | - "cluster-relation-departed") cluster_changed ;; |
3892 | - "ha-relation-joined") ha_relation_joined ;; |
3893 | - "ha-relation-changed") ha_relation_changed ;; |
3894 | - "upgrade-charm") upgrade_charm ;; |
3895 | -esac |
3896 | |
3897 | === added file 'hooks/glance_contexts.py' |
3898 | --- hooks/glance_contexts.py 1970-01-01 00:00:00 +0000 |
3899 | +++ hooks/glance_contexts.py 2013-10-15 01:35:02 +0000 |
3900 | @@ -0,0 +1,89 @@ |
3901 | +from charmhelpers.core.hookenv import ( |
3902 | + relation_ids, |
3903 | + related_units, |
3904 | + relation_get, |
3905 | + service_name, |
3906 | +) |
3907 | + |
3908 | +from charmhelpers.contrib.openstack.context import ( |
3909 | + OSContextGenerator, |
3910 | + ApacheSSLContext as SSLContext, |
3911 | +) |
3912 | + |
3913 | +from charmhelpers.contrib.hahelpers.cluster import ( |
3914 | + determine_api_port, |
3915 | + determine_haproxy_port, |
3916 | +) |
3917 | + |
3918 | + |
3919 | +def is_relation_made(relation, key='private-address'): |
3920 | + for r_id in relation_ids(relation): |
3921 | + for unit in related_units(r_id): |
3922 | + if relation_get(key, rid=r_id, unit=unit): |
3923 | + return True |
3924 | + return False |
3925 | + |
3926 | + |
3927 | +class CephGlanceContext(OSContextGenerator): |
3928 | + interfaces = ['ceph-glance'] |
3929 | + |
3930 | + def __call__(self): |
3931 | + """ |
3932 | + Used to generate template context to be added to glance-api.conf in |
3933 | + the presence of a ceph relation. |
3934 | + """ |
3935 | + if not is_relation_made(relation="ceph", |
3936 | + key="key"): |
3937 | + return {} |
3938 | + service = service_name() |
3939 | + return { |
3940 | + # ensure_ceph_pool() creates pool based on service name. |
3941 | + 'rbd_pool': service, |
3942 | + 'rbd_user': service, |
3943 | + } |
3944 | + |
3945 | + |
3946 | +class ObjectStoreContext(OSContextGenerator): |
3947 | + interfaces = ['object-store'] |
3948 | + |
3949 | + def __call__(self): |
3950 | + """ |
3951 | + Used to generate template context to be added to glance-api.conf in |
3952 | + the presence of an 'object-store' relation.
3953 | + """ |
3954 | + if not relation_ids('object-store'): |
3955 | + return {} |
3956 | + return { |
3957 | + 'swift_store': True, |
3958 | + } |
3959 | + |
3960 | + |
3961 | +class HAProxyContext(OSContextGenerator): |
3962 | + interfaces = ['cluster'] |
3963 | + |
3964 | + def __call__(self): |
3965 | + ''' |
3966 | + Extends the main charmhelpers HAProxyContext with a port mapping |
3967 | + specific to this charm. |
3968 | + Also used to extend glance-api.conf context with correct bind_port |
3969 | + ''' |
3970 | + haproxy_port = determine_haproxy_port(9292) |
3971 | + api_port = determine_api_port(9292) |
3972 | + |
3973 | + ctxt = { |
3974 | + 'service_ports': {'glance_api': [haproxy_port, api_port]}, |
3975 | + 'bind_port': api_port, |
3976 | + } |
3977 | + return ctxt |
3978 | + |
3979 | + |
3980 | +class ApacheSSLContext(SSLContext): |
3981 | + interfaces = ['https'] |
3982 | + external_ports = [9292] |
3983 | + service_namespace = 'glance' |
3984 | + |
3985 | + def __call__(self): |
3986 | + #from glance_utils import service_enabled |
3987 | + #if not service_enabled('glance-api'): |
3988 | + # return {} |
3989 | + return super(ApacheSSLContext, self).__call__() |
3990 | |
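Roughly, the shapes these context generators hand to the templates (illustrative values only; the real port numbers depend on clustering and SSL state):

    from glance_contexts import CephGlanceContext, ObjectStoreContext, HAProxyContext

    # With a completed ceph relation on a service named 'glance':
    #   CephGlanceContext()()  -> {'rbd_pool': 'glance', 'rbd_user': 'glance'}
    # With any object-store relation present:
    #   ObjectStoreContext()() -> {'swift_store': True}
    # HAProxyContext()() maps the public port to a haproxy/backend pair, e.g.:
    #   {'service_ports': {'glance_api': [<haproxy_port>, <api_port>]},
    #    'bind_port': <api_port>}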
3991 | === added file 'hooks/glance_relations.py' |
3992 | --- hooks/glance_relations.py 1970-01-01 00:00:00 +0000 |
3993 | +++ hooks/glance_relations.py 2013-10-15 01:35:02 +0000 |
3994 | @@ -0,0 +1,320 @@ |
3995 | +#!/usr/bin/python |
3996 | +import os |
3997 | +import sys |
3998 | + |
3999 | +from glance_utils import ( |
4000 | + do_openstack_upgrade, |
4001 | + ensure_ceph_pool, |
4002 | + migrate_database, |
4003 | + register_configs, |
4004 | + restart_map, |
4005 | + CLUSTER_RES, |
4006 | + PACKAGES, |
4007 | + SERVICES, |
4008 | + CHARM, |
4009 | + GLANCE_REGISTRY_CONF, |
4010 | + GLANCE_REGISTRY_PASTE_INI, |
4011 | + GLANCE_API_CONF, |
4012 | + GLANCE_API_PASTE_INI, |
4013 | + HAPROXY_CONF, |
4014 | + CEPH_CONF, ) |
4015 | + |
4016 | +from charmhelpers.core.hookenv import ( |
4017 | + config, |
4018 | + Hooks, |
4019 | + log as juju_log, |
4020 | + open_port, |
4021 | + relation_get, |
4022 | + relation_set, |
4023 | + relation_ids, |
4024 | + service_name, |
4025 | + unit_get, |
4026 | + UnregisteredHookError, ) |
4027 | + |
4028 | +from charmhelpers.core.host import restart_on_change, service_stop |
4029 | + |
4030 | +from charmhelpers.fetch import apt_install, apt_update |
4031 | + |
4032 | +from charmhelpers.contrib.hahelpers.cluster import ( |
4033 | + canonical_url, eligible_leader, is_leader) |
4034 | + |
4035 | +from charmhelpers.contrib.openstack.utils import ( |
4036 | + configure_installation_source, |
4037 | + get_os_codename_package, |
4038 | + openstack_upgrade_available, |
4039 | + lsb_release, ) |
4040 | + |
4041 | +from charmhelpers.contrib.storage.linux.ceph import ensure_ceph_keyring |
4042 | +from charmhelpers.payload.execd import execd_preinstall |
4043 | + |
4044 | +from subprocess import ( |
4045 | + check_call, ) |
4046 | + |
4047 | +from commands import getstatusoutput |
4048 | + |
4049 | +hooks = Hooks() |
4050 | + |
4051 | +CONFIGS = register_configs() |
4052 | + |
4053 | + |
4054 | +@hooks.hook('install') |
4055 | +def install_hook(): |
4056 | + juju_log('Installing glance packages') |
4057 | + execd_preinstall() |
4058 | + src = config('openstack-origin') |
4059 | + if (lsb_release()['DISTRIB_CODENAME'] == 'precise' and |
4060 | + src == 'distro'): |
4061 | + src = 'cloud:precise-folsom' |
4062 | + |
4063 | + configure_installation_source(src) |
4064 | + |
4065 | + apt_update() |
4066 | + apt_install(PACKAGES) |
4067 | + |
4068 | + for service in SERVICES: |
4069 | + service_stop(service) |
4070 | + |
4071 | + |
4072 | +@hooks.hook('shared-db-relation-joined') |
4073 | +def db_joined(): |
4074 | + relation_set(database=config('database'), username=config('database-user'), |
4075 | + hostname=unit_get('private-address')) |
4076 | + |
4077 | + |
4078 | +@hooks.hook('shared-db-relation-changed') |
4079 | +@restart_on_change(restart_map()) |
4080 | +def db_changed(): |
4081 | + rel = get_os_codename_package("glance-common") |
4082 | + |
4083 | + if 'shared-db' not in CONFIGS.complete_contexts(): |
4084 | + juju_log('shared-db relation incomplete. Peer not ready?') |
4085 | + return |
4086 | + |
4087 | + CONFIGS.write(GLANCE_REGISTRY_CONF) |
4088 | + # since folsom, a db connection setting in glance-api.conf is required. |
4089 | + if rel != "essex": |
4090 | + CONFIGS.write(GLANCE_API_CONF) |
4091 | + |
4092 | + if eligible_leader(CLUSTER_RES): |
4093 | + if rel == "essex": |
4094 | + (status, output) = getstatusoutput('glance-manage db_version') |
4095 | + if status != 0: |
4096 | + juju_log('Setting version_control to 0') |
4097 | + check_call(["glance-manage", "version_control", "0"]) |
4098 | + |
4099 | + juju_log('Cluster leader, performing db sync') |
4100 | + migrate_database() |
4101 | + |
4102 | + |
4103 | +@hooks.hook('image-service-relation-joined') |
4104 | +def image_service_joined(relation_id=None): |
4105 | + if not eligible_leader(CLUSTER_RES): |
4106 | + return |
4107 | + |
4108 | + relation_data = { |
4109 | + 'glance-api-server': canonical_url(CONFIGS) + ":9292" |
4110 | + } |
4111 | + |
4112 | + juju_log("%s: image-service_joined: To peer glance-api-server=%s" % |
4113 | + (CHARM, relation_data['glance-api-server'])) |
4114 | + |
4115 | + relation_set(relation_id=relation_id, **relation_data) |
4116 | + |
4117 | + |
4118 | +@hooks.hook('object-store-relation-joined') |
4119 | +@restart_on_change(restart_map()) |
4120 | +def object_store_joined(): |
4121 | + |
4122 | + if 'identity-service' not in CONFIGS.complete_contexts(): |
4123 | + juju_log('Deferring swift store configuration until '
4124 | + 'an identity-service relation exists') |
4125 | + return |
4126 | + |
4127 | + if 'object-store' not in CONFIGS.complete_contexts(): |
4128 | + juju_log('swift relation incomplete') |
4129 | + return |
4130 | + |
4131 | + CONFIGS.write(GLANCE_API_CONF) |
4132 | + |
4133 | + |
4134 | +@hooks.hook('ceph-relation-joined') |
4135 | +def ceph_joined(): |
4136 | + if not os.path.isdir('/etc/ceph'): |
4137 | + os.mkdir('/etc/ceph') |
4138 | + apt_install(['ceph-common', 'python-ceph']) |
4139 | + |
4140 | + |
4141 | +@hooks.hook('ceph-relation-changed') |
4142 | +@restart_on_change(restart_map()) |
4143 | +def ceph_changed(): |
4144 | + if 'ceph' not in CONFIGS.complete_contexts(): |
4145 | + juju_log('ceph relation incomplete. Peer not ready?') |
4146 | + return |
4147 | + |
4148 | + service = service_name() |
4149 | + |
4150 | + if not ensure_ceph_keyring(service=service, |
4151 | + user='glance', group='glance'): |
4152 | + juju_log('Could not create ceph keyring: peer not ready?') |
4153 | + return |
4154 | + |
4155 | + CONFIGS.write(GLANCE_API_CONF) |
4156 | + CONFIGS.write(CEPH_CONF) |
4157 | + |
4158 | + if eligible_leader(CLUSTER_RES): |
4159 | + _config = config() |
4160 | + ensure_ceph_pool(service=service, |
4161 | + replicas=_config['ceph-osd-replication-count']) |
4162 | + |
4163 | + |
4164 | +@hooks.hook('identity-service-relation-joined') |
4165 | +def keystone_joined(relation_id=None): |
4166 | + if not eligible_leader(CLUSTER_RES): |
4167 | + juju_log('Deferring keystone_joined() to service leader.') |
4168 | + return |
4169 | + |
4170 | + url = canonical_url(CONFIGS) + ":9292" |
4171 | + relation_data = { |
4172 | + 'service': 'glance', |
4173 | + 'region': config('region'), |
4174 | + 'public_url': url, |
4175 | + 'admin_url': url, |
4176 | + 'internal_url': url, } |
4177 | + |
4178 | + relation_set(relation_id=relation_id, **relation_data) |
4179 | + |
4180 | + |
4181 | +@hooks.hook('identity-service-relation-changed') |
4182 | +@restart_on_change(restart_map()) |
4183 | +def keystone_changed(): |
4184 | + if 'identity-service' not in CONFIGS.complete_contexts(): |
4185 | + juju_log('identity-service relation incomplete. Peer not ready?') |
4186 | + return |
4187 | + |
4188 | + CONFIGS.write(GLANCE_API_CONF) |
4189 | + CONFIGS.write(GLANCE_REGISTRY_CONF) |
4190 | + |
4191 | + CONFIGS.write(GLANCE_API_PASTE_INI) |
4192 | + CONFIGS.write(GLANCE_REGISTRY_PASTE_INI) |
4193 | + |
4194 | + # Configure any object-store / swift relations now that we have an |
4195 | + # identity-service |
4196 | + if relation_ids('object-store'): |
4197 | + object_store_joined() |
4198 | + |
4199 | + # possibly configure HTTPS for API and registry |
4200 | + configure_https() |
4201 | + |
4202 | + |
4203 | +@hooks.hook('config-changed') |
4204 | +@restart_on_change(restart_map()) |
4205 | +def config_changed(): |
4206 | + if openstack_upgrade_available('glance-common'): |
4207 | + juju_log('Upgrading OpenStack release') |
4208 | + do_openstack_upgrade(CONFIGS) |
4209 | + |
4210 | + open_port(9292) |
4211 | + configure_https() |
4212 | + |
4213 | + #env_vars = {'OPENSTACK_PORT_MCASTPORT': config("ha-mcastport"), |
4214 | + # 'OPENSTACK_SERVICE_API': "glance-api", |
4215 | + # 'OPENSTACK_SERVICE_REGISTRY': "glance-registry"} |
4216 | + #save_script_rc(**env_vars) |
4217 | + |
4218 | + |
4219 | +@hooks.hook('cluster-relation-changed') |
4220 | +@restart_on_change(restart_map()) |
4221 | +def cluster_changed(): |
4222 | + CONFIGS.write(GLANCE_API_CONF) |
4223 | + CONFIGS.write(HAPROXY_CONF) |
4224 | + |
4225 | + |
4226 | +@hooks.hook('upgrade-charm') |
4227 | +def upgrade_charm(): |
4228 | + cluster_changed() |
4229 | + |
4230 | + |
4231 | +@hooks.hook('ha-relation-joined') |
4232 | +def ha_relation_joined(): |
4233 | + corosync_bindiface = config("ha-bindiface") |
4234 | + corosync_mcastport = config("ha-mcastport") |
4235 | + vip = config("vip") |
4236 | + vip_iface = config("vip_iface") |
4237 | + vip_cidr = config("vip_cidr") |
4238 | + |
4239 | + #if vip and vip_iface and vip_cidr and \ |
4240 | + # corosync_bindiface and corosync_mcastport: |
4241 | + |
4242 | + resources = { |
4243 | + 'res_glance_vip': 'ocf:heartbeat:IPaddr2', |
4244 | + 'res_glance_haproxy': 'lsb:haproxy', } |
4245 | + |
4246 | + resource_params = { |
4247 | + 'res_glance_vip': 'params ip="%s" cidr_netmask="%s" nic="%s"' % |
4248 | + (vip, vip_cidr, vip_iface), |
4249 | + 'res_glance_haproxy': 'op monitor interval="5s"', } |
4250 | + |
4251 | + init_services = { |
4252 | + 'res_glance_haproxy': 'haproxy', } |
4253 | + |
4254 | + clones = { |
4255 | + 'cl_glance_haproxy': 'res_glance_haproxy', } |
4256 | + |
4257 | + relation_set(init_services=init_services, |
4258 | + corosync_bindiface=corosync_bindiface, |
4259 | + corosync_mcastport=corosync_mcastport, |
4260 | + resources=resources, |
4261 | + resource_params=resource_params, |
4262 | + clones=clones) |
4263 | + |
4264 | + |
4265 | +@hooks.hook('ha-relation-changed') |
4266 | +def ha_relation_changed(): |
4267 | + clustered = relation_get('clustered') |
4268 | + if not clustered or clustered in [None, 'None', '']: |
4269 | + juju_log('ha_changed: hacluster subordinate is not fully clustered.') |
4270 | + return |
4271 | + if not is_leader(CLUSTER_RES): |
4272 | + juju_log('ha_changed: hacluster complete but we are not leader.') |
4273 | + return |
4274 | + |
4275 | + # reconfigure endpoint in keystone to point to clustered VIP. |
4276 | + [keystone_joined(rid) for rid in relation_ids('identity-service')] |
4277 | + |
4278 | + # notify glance client services of reconfigured URL. |
4279 | + [image_service_joined(rid) for rid in relation_ids('image-service')] |
4280 | + |
4281 | + |
4282 | +@hooks.hook('ceph-relation-broken', |
4283 | + 'identity-service-relation-broken', |
4284 | + 'object-store-relation-broken', |
4285 | + 'shared-db-relation-broken') |
4286 | +def relation_broken(): |
4287 | + CONFIGS.write_all() |
4288 | + |
4289 | + |
4290 | +def configure_https(): |
4291 | + ''' |
4292 | + Enables SSL API Apache config if appropriate and kicks |
4293 | + identity-service and image-service with any required |
4294 | + updates |
4295 | + ''' |
4296 | + CONFIGS.write_all() |
4297 | + if 'https' in CONFIGS.complete_contexts(): |
4298 | + cmd = ['a2ensite', 'openstack_https_frontend'] |
4299 | + check_call(cmd) |
4300 | + else: |
4301 | + cmd = ['a2dissite', 'openstack_https_frontend'] |
4302 | + check_call(cmd) |
4303 | + |
4304 | + for r_id in relation_ids('identity-service'): |
4305 | + keystone_joined(relation_id=r_id) |
4306 | + for r_id in relation_ids('image-service'): |
4307 | + image_service_joined(relation_id=r_id) |
4308 | + |
4309 | + |
4310 | +if __name__ == '__main__': |
4311 | + try: |
4312 | + hooks.execute(sys.argv) |
4313 | + except UnregisteredHookError as e: |
4314 | + juju_log('Unknown hook {} - skipping.'.format(e))
4315 | |
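The old case statement is replaced by pointing every hook symlink at glance_relations.py and letting Hooks.execute() route on the hook name. A minimal standalone analogue of that pattern (a sketch using only the standard library, not the charmhelpers implementation itself) looks like:

    #!/usr/bin/python
    import os
    import sys

    class Hooks(object):
        """Toy stand-in for charmhelpers.core.hookenv.Hooks."""
        def __init__(self):
            self._registry = {}

        def hook(self, *names):
            def wrapper(fn):
                for name in names or (fn.__name__,):
                    self._registry[name] = fn
                return fn
            return wrapper

        def execute(self, args):
            # All hook symlinks point at one script, so the hook name is
            # simply the basename it was invoked as, e.g. 'config-changed'.
            self._registry[os.path.basename(args[0])]()

    hooks = Hooks()

    @hooks.hook('config-changed')
    def config_changed():
        print 'config-changed fired'

    if __name__ == '__main__':
        hooks.execute(sys.argv)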
4316 | === added file 'hooks/glance_utils.py' |
4317 | --- hooks/glance_utils.py 1970-01-01 00:00:00 +0000 |
4318 | +++ hooks/glance_utils.py 2013-10-15 01:35:02 +0000 |
4319 | @@ -0,0 +1,193 @@ |
4320 | +#!/usr/bin/python |
4321 | + |
4322 | +import os |
4323 | +import subprocess |
4324 | + |
4325 | +import glance_contexts |
4326 | + |
4327 | +from collections import OrderedDict |
4328 | + |
4329 | +from charmhelpers.fetch import ( |
4330 | + apt_install, |
4331 | + apt_update, ) |
4332 | + |
4333 | +from charmhelpers.core.hookenv import ( |
4334 | + config, |
4335 | + log as juju_log, |
4336 | + relation_ids) |
4337 | + |
4338 | +from charmhelpers.contrib.openstack import ( |
4339 | + templating, |
4340 | + context, ) |
4341 | + |
4342 | +from charmhelpers.contrib.hahelpers.cluster import ( |
4343 | + eligible_leader, |
4344 | +) |
4345 | + |
4346 | +from charmhelpers.contrib.storage.linux.ceph import ( |
4347 | + create_pool as ceph_create_pool, |
4348 | + pool_exists as ceph_pool_exists) |
4349 | + |
4350 | +from charmhelpers.contrib.openstack.utils import ( |
4351 | + get_os_codename_install_source, |
4352 | + get_os_codename_package, |
4353 | + configure_installation_source, ) |
4354 | + |
4355 | +CLUSTER_RES = "res_glance_vip" |
4356 | + |
4357 | +PACKAGES = [ |
4358 | + "apache2", "glance", "python-mysqldb", "python-swift", |
4359 | + "python-keystone", "uuid", "haproxy", ] |
4360 | + |
4361 | +SERVICES = [ |
4362 | + "glance-api", "glance-registry", ] |
4363 | + |
4364 | +CHARM = "glance" |
4365 | + |
4366 | +GLANCE_REGISTRY_CONF = "/etc/glance/glance-registry.conf" |
4367 | +GLANCE_REGISTRY_PASTE_INI = "/etc/glance/glance-registry-paste.ini" |
4368 | +GLANCE_API_CONF = "/etc/glance/glance-api.conf" |
4369 | +GLANCE_API_PASTE_INI = "/etc/glance/glance-api-paste.ini" |
4370 | +CEPH_CONF = "/etc/ceph/ceph.conf" |
4371 | +HAPROXY_CONF = "/etc/haproxy/haproxy.cfg" |
4372 | +HTTPS_APACHE_CONF = "/etc/apache2/sites-available/openstack_https_frontend" |
4373 | +HTTPS_APACHE_24_CONF = "/etc/apache2/sites-available/" \ |
4374 | + "openstack_https_frontend.conf" |
4375 | + |
4376 | +CONF_DIR = "/etc/glance" |
4377 | + |
4378 | +TEMPLATES = 'templates/' |
4379 | + |
4380 | +CONFIG_FILES = OrderedDict([ |
4381 | + (GLANCE_REGISTRY_CONF, { |
4382 | + 'hook_contexts': [context.SharedDBContext(), |
4383 | + context.IdentityServiceContext()], |
4384 | + 'services': ['glance-registry'] |
4385 | + }), |
4386 | + (GLANCE_API_CONF, { |
4387 | + 'hook_contexts': [context.SharedDBContext(), |
4388 | + context.IdentityServiceContext(), |
4389 | + glance_contexts.CephGlanceContext(), |
4390 | + glance_contexts.ObjectStoreContext(), |
4391 | + glance_contexts.HAProxyContext()], |
4392 | + 'services': ['glance-api'] |
4393 | + }), |
4394 | + (GLANCE_API_PASTE_INI, { |
4395 | + 'hook_contexts': [context.IdentityServiceContext()], |
4396 | + 'services': ['glance-api'] |
4397 | + }), |
4398 | + (GLANCE_REGISTRY_PASTE_INI, { |
4399 | + 'hook_contexts': [context.IdentityServiceContext()], |
4400 | + 'services': ['glance-registry'] |
4401 | + }), |
4402 | + (CEPH_CONF, { |
4403 | + 'hook_contexts': [context.CephContext()], |
4404 | + 'services': [] |
4405 | + }), |
4406 | + (HAPROXY_CONF, { |
4407 | + 'hook_contexts': [context.HAProxyContext(), |
4408 | + glance_contexts.HAProxyContext()], |
4409 | + 'services': ['haproxy'], |
4410 | + }), |
4411 | + (HTTPS_APACHE_CONF, { |
4412 | + 'hook_contexts': [glance_contexts.ApacheSSLContext()], |
4413 | + 'services': ['apache2'], |
4414 | + }), |
4415 | + (HTTPS_APACHE_24_CONF, { |
4416 | + 'hook_contexts': [glance_contexts.ApacheSSLContext()], |
4417 | + 'services': ['apache2'], |
4418 | + }) |
4419 | +]) |
4420 | + |
4421 | + |
4422 | +def register_configs(): |
4423 | + # Register config files with their respective contexts. |
4424 | + # Registration of some configs may not be required depending on
4425 | + # the existence of certain relations.
4426 | + release = get_os_codename_package('glance-common', fatal=False) or 'essex' |
4427 | + configs = templating.OSConfigRenderer(templates_dir=TEMPLATES, |
4428 | + openstack_release=release) |
4429 | + |
4430 | + confs = [GLANCE_REGISTRY_CONF, |
4431 | + GLANCE_API_CONF, |
4432 | + GLANCE_API_PASTE_INI, |
4433 | + GLANCE_REGISTRY_PASTE_INI, |
4434 | + HAPROXY_CONF] |
4435 | + |
4436 | + if relation_ids('ceph'): |
4437 | + if not os.path.isdir('/etc/ceph'): |
4438 | + os.mkdir('/etc/ceph') |
4439 | + confs.append(CEPH_CONF) |
4440 | + |
4441 | + for conf in confs: |
4442 | + configs.register(conf, CONFIG_FILES[conf]['hook_contexts']) |
4443 | + |
4444 | + if os.path.exists('/etc/apache2/conf-available'): |
4445 | + configs.register(HTTPS_APACHE_24_CONF, |
4446 | + CONFIG_FILES[HTTPS_APACHE_24_CONF]['hook_contexts']) |
4447 | + else: |
4448 | + configs.register(HTTPS_APACHE_CONF, |
4449 | + CONFIG_FILES[HTTPS_APACHE_CONF]['hook_contexts']) |
4450 | + |
4451 | + return configs |
4452 | + |
4453 | + |
4454 | +def migrate_database(): |
4455 | + '''Runs glance-manage to initialize a new database or migrate existing''' |
4456 | + cmd = ['glance-manage', 'db_sync'] |
4457 | + subprocess.check_call(cmd) |
4458 | + |
4459 | + |
4460 | +def ensure_ceph_pool(service, replicas): |
4461 | + '''Creates a ceph pool for service if one does not exist''' |
4462 | + # TODO: Ditto about moving somewhere sharable. |
4463 | + if not ceph_pool_exists(service=service, name=service): |
4464 | + ceph_create_pool(service=service, name=service, replicas=replicas) |
4465 | + |
4466 | + |
4467 | +def do_openstack_upgrade(configs): |
4468 | + """ |
4469 | + Perform an upgrade of glance. Takes care of upgrading packages, rewriting
4470 | + configs + database migration and potentially any other post-upgrade |
4471 | + actions. |
4472 | + |
4473 | + :param configs: The charm's main OSConfigRenderer object.
4474 | + |
4475 | + """ |
4476 | + new_src = config('openstack-origin') |
4477 | + new_os_rel = get_os_codename_install_source(new_src) |
4478 | + |
4479 | + juju_log('Performing OpenStack upgrade to %s.' % (new_os_rel)) |
4480 | + |
4481 | + configure_installation_source(new_src) |
4482 | + dpkg_opts = [ |
4483 | + '--option', 'Dpkg::Options::=--force-confnew', |
4484 | + '--option', 'Dpkg::Options::=--force-confdef', |
4485 | + ] |
4486 | + apt_update() |
4487 | + apt_install(packages=PACKAGES, options=dpkg_opts, fatal=True) |
4488 | + |
4489 | + # set CONFIGS to load templates from new release and regenerate config |
4490 | + configs.set_release(openstack_release=new_os_rel) |
4491 | + configs.write_all() |
4492 | + |
4493 | + if eligible_leader(CLUSTER_RES): |
4494 | + migrate_database() |
4495 | + |
4496 | + |
4497 | +def restart_map(): |
4498 | + ''' |
4499 | + Determine the correct resource map to be passed to |
4500 | + charmhelpers.core.host.restart_on_change() based on the services configured.
4501 | + |
4502 | + :returns: dict: A dictionary mapping config file to lists of services |
4503 | + that should be restarted when file changes. |
4504 | + ''' |
4505 | + _map = [] |
4506 | + for f, ctxt in CONFIG_FILES.iteritems(): |
4507 | + svcs = [] |
4508 | + for svc in ctxt['services']: |
4509 | + svcs.append(svc) |
4510 | + if svcs: |
4511 | + _map.append((f, svcs)) |
4512 | + return OrderedDict(_map) |
4513 | |
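For reference, restart_map() built from CONFIG_FILES above yields approximately the mapping below on a typical unit (ceph.conf is dropped because its 'services' list is empty); this is what drives @restart_on_change in the hook functions.

    from glance_utils import restart_map

    # Approximate result:
    # OrderedDict([
    #     ('/etc/glance/glance-registry.conf', ['glance-registry']),
    #     ('/etc/glance/glance-api.conf', ['glance-api']),
    #     ('/etc/glance/glance-api-paste.ini', ['glance-api']),
    #     ('/etc/glance/glance-registry-paste.ini', ['glance-registry']),
    #     ('/etc/haproxy/haproxy.cfg', ['haproxy']),
    #     ('/etc/apache2/sites-available/openstack_https_frontend', ['apache2']),
    #     ('/etc/apache2/sites-available/openstack_https_frontend.conf', ['apache2']),
    # ])
    print restart_map()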
4514 | === modified symlink 'hooks/ha-relation-changed' |
4515 | === target changed u'glance-relations' => u'glance_relations.py' |
4516 | === modified symlink 'hooks/ha-relation-joined' |
4517 | === target changed u'glance-relations' => u'glance_relations.py' |
4518 | === added symlink 'hooks/identity-service-relation-broken' |
4519 | === target is u'glance_relations.py' |
4520 | === modified symlink 'hooks/identity-service-relation-changed' |
4521 | === target changed u'glance-relations' => u'glance_relations.py' |
4522 | === modified symlink 'hooks/identity-service-relation-joined' |
4523 | === target changed u'glance-relations' => u'glance_relations.py' |
4524 | === modified symlink 'hooks/image-service-relation-changed' |
4525 | === target changed u'glance-relations' => u'glance_relations.py' |
4526 | === modified symlink 'hooks/image-service-relation-joined' |
4527 | === target changed u'glance-relations' => u'glance_relations.py' |
4528 | === modified symlink 'hooks/install' |
4529 | === target changed u'glance-relations' => u'glance_relations.py' |
4530 | === removed directory 'hooks/lib' |
4531 | === removed file 'hooks/lib/openstack-common' |
4532 | --- hooks/lib/openstack-common 2013-06-03 18:39:29 +0000 |
4533 | +++ hooks/lib/openstack-common 1970-01-01 00:00:00 +0000 |
4534 | @@ -1,813 +0,0 @@ |
4535 | -#!/bin/bash -e |
4536 | - |
4537 | -# Common utility functions used across all OpenStack charms. |
4538 | - |
4539 | -error_out() { |
4540 | - juju-log "$CHARM ERROR: $@" |
4541 | - exit 1 |
4542 | -} |
4543 | - |
4544 | -function service_ctl_status { |
4545 | - # Return 0 if a service is running, 1 otherwise. |
4546 | - local svc="$1" |
4547 | - local status=$(service $svc status | cut -d/ -f1 | awk '{ print $2 }') |
4548 | - case $status in |
4549 | - "start") return 0 ;; |
4550 | - "stop") return 1 ;; |
4551 | - *) error_out "Unexpected status of service $svc: $status" ;; |
4552 | - esac |
4553 | -} |
4554 | - |
4555 | -function service_ctl { |
4556 | - # control a specific service, or all (as defined by $SERVICES) |
4557 | - # service restarts will only occur depending on global $CONFIG_CHANGED, |
4558 | - # which should be updated in charm's set_or_update(). |
4559 | - local config_changed=${CONFIG_CHANGED:-True} |
4560 | - if [[ $1 == "all" ]] ; then |
4561 | - ctl="$SERVICES" |
4562 | - else |
4563 | - ctl="$1" |
4564 | - fi |
4565 | - action="$2" |
4566 | - if [[ -z "$ctl" ]] || [[ -z "$action" ]] ; then |
4567 | - error_out "ERROR service_ctl: Not enough arguments" |
4568 | - fi |
4569 | - |
4570 | - for i in $ctl ; do |
4571 | - case $action in |
4572 | - "start") |
4573 | - service_ctl_status $i || service $i start ;; |
4574 | - "stop") |
4575 | - service_ctl_status $i && service $i stop || return 0 ;; |
4576 | - "restart") |
4577 | - if [[ "$config_changed" == "True" ]] ; then |
4578 | - service_ctl_status $i && service $i restart || service $i start |
4579 | - fi |
4580 | - ;; |
4581 | - esac |
4582 | - if [[ $? != 0 ]] ; then |
4583 | - juju-log "$CHARM: service_ctl ERROR - Service $i failed to $action" |
4584 | - fi |
4585 | - done |
4586 | - # all configs should have been reloaded on restart of all services, reset |
4587 | -  # flag if it's being used.
4588 | - if [[ "$action" == "restart" ]] && [[ -n "$CONFIG_CHANGED" ]] && |
4589 | - [[ "$ctl" == "all" ]]; then |
4590 | - CONFIG_CHANGED="False" |
4591 | - fi |
4592 | -} |
4593 | - |
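In the python rewrite, ad-hoc service control like service_ctl is handled by helpers in charmhelpers.core.host. A rough sketch, assuming the service_restart and service_stop helpers from that module (service names shown are examples):

    from charmhelpers.core.host import service_restart, service_stop

    service_restart('glance-api')       # restart a single service explicitly
    service_stop('glance-registry')
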
4594 | -function configure_install_source { |
4595 | - # Setup and configure installation source based on a config flag. |
4596 | - local src="$1" |
4597 | - |
4598 | - # Default to installing from the main Ubuntu archive. |
4599 | - [[ $src == "distro" ]] || [[ -z "$src" ]] && return 0 |
4600 | - |
4601 | - . /etc/lsb-release |
4602 | - |
4603 | - # standard 'ppa:someppa/name' format. |
4604 | - if [[ "${src:0:4}" == "ppa:" ]] ; then |
4605 | - juju-log "$CHARM: Configuring installation from custom src ($src)" |
4606 | - add-apt-repository -y "$src" || error_out "Could not configure PPA access." |
4607 | - return 0 |
4608 | - fi |
4609 | - |
4610 | - # standard 'deb http://url/ubuntu main' entries. gpg key ids must |
4611 | - # be appended to the end of url after a |, ie: |
4612 | - # 'deb http://url/ubuntu main|$GPGKEYID' |
4613 | - if [[ "${src:0:3}" == "deb" ]] ; then |
4614 | - juju-log "$CHARM: Configuring installation from custom src URL ($src)" |
4615 | - if echo "$src" | grep -q "|" ; then |
4616 | -      # gpg key id tagged to end of url followed by a |
4617 | - url=$(echo $src | cut -d'|' -f1) |
4618 | - key=$(echo $src | cut -d'|' -f2) |
4619 | - juju-log "$CHARM: Importing repository key: $key" |
4620 | - apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key" || \ |
4621 | - juju-log "$CHARM WARN: Could not import key from keyserver: $key" |
4622 | - else |
4623 | - juju-log "$CHARM No repository key specified." |
4624 | - url="$src" |
4625 | - fi |
4626 | - echo "$url" > /etc/apt/sources.list.d/juju_deb.list |
4627 | - return 0 |
4628 | - fi |
4629 | - |
4630 | - # Cloud Archive |
4631 | - if [[ "${src:0:6}" == "cloud:" ]] ; then |
4632 | - |
4633 | - # current os releases supported by the UCA. |
4634 | - local cloud_archive_versions="folsom grizzly" |
4635 | - |
4636 | - local ca_rel=$(echo $src | cut -d: -f2) |
4637 | - local u_rel=$(echo $ca_rel | cut -d- -f1) |
4638 | - local os_rel=$(echo $ca_rel | cut -d- -f2 | cut -d/ -f1) |
4639 | - |
4640 | - [[ "$u_rel" != "$DISTRIB_CODENAME" ]] && |
4641 | - error_out "Cannot install from Cloud Archive pocket $src " \ |
4642 | - "on this Ubuntu version ($DISTRIB_CODENAME)!" |
4643 | - |
4644 | - valid_release="" |
4645 | - for rel in $cloud_archive_versions ; do |
4646 | - if [[ "$os_rel" == "$rel" ]] ; then |
4647 | - valid_release=1 |
4648 | - juju-log "Installing OpenStack ($os_rel) from the Ubuntu Cloud Archive." |
4649 | - fi |
4650 | - done |
4651 | - if [[ -z "$valid_release" ]] ; then |
4652 | - error_out "OpenStack release ($os_rel) not supported by "\ |
4653 | - "the Ubuntu Cloud Archive." |
4654 | - fi |
4655 | - |
4656 | - # CA staging repos are standard PPAs. |
4657 | - if echo $ca_rel | grep -q "staging" ; then |
4658 | - add-apt-repository -y ppa:ubuntu-cloud-archive/${os_rel}-staging |
4659 | - return 0 |
4660 | - fi |
4661 | - |
4662 | - # the others are LP-external deb repos. |
4663 | - case "$ca_rel" in |
4664 | - "$u_rel-$os_rel"|"$u_rel-$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;; |
4665 | - "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;; |
4666 | - "$u_rel-$os_rel"|"$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;; |
4667 | - "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;; |
4668 | - *) error_out "Invalid Cloud Archive repo specified: $src" |
4669 | - esac |
4670 | - |
4671 | - apt-get -y install ubuntu-cloud-keyring |
4672 | - entry="deb http://ubuntu-cloud.archive.canonical.com/ubuntu $pocket main" |
4673 | - echo "$entry" \ |
4674 | - >/etc/apt/sources.list.d/ubuntu-cloud-archive-$DISTRIB_CODENAME.list |
4675 | - return 0 |
4676 | - fi |
4677 | - |
4678 | - error_out "Invalid installation source specified in config: $src" |
4679 | - |
4680 | -} |
4681 | - |
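The installation-source formats accepted here map onto the charm's openstack-origin config option, and the python rewrite handles the same strings via charm-helpers (charmhelpers.contrib.openstack.utils). Illustrative juju 1.x settings; the deb URL and key are placeholders:

    juju set glance openstack-origin=distro
    juju set glance openstack-origin=cloud:precise-grizzly
    juju set glance openstack-origin="deb http://archive.example.com/ubuntu precise main|ABCD1234"
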
4682 | -get_os_codename_install_source() { |
4683 | - # derive the openstack release provided by a supported installation source. |
4684 | - local rel="$1" |
4685 | - local codename="unknown" |
4686 | - . /etc/lsb-release |
4687 | - |
4688 | - # map ubuntu releases to the openstack version shipped with it. |
4689 | - if [[ "$rel" == "distro" ]] ; then |
4690 | - case "$DISTRIB_CODENAME" in |
4691 | - "oneiric") codename="diablo" ;; |
4692 | - "precise") codename="essex" ;; |
4693 | - "quantal") codename="folsom" ;; |
4694 | - "raring") codename="grizzly" ;; |
4695 | - esac |
4696 | - fi |
4697 | - |
4698 | - # derive version from cloud archive strings. |
4699 | - if [[ "${rel:0:6}" == "cloud:" ]] ; then |
4700 | - rel=$(echo $rel | cut -d: -f2) |
4701 | - local u_rel=$(echo $rel | cut -d- -f1) |
4702 | - local ca_rel=$(echo $rel | cut -d- -f2) |
4703 | - if [[ "$u_rel" == "$DISTRIB_CODENAME" ]] ; then |
4704 | - case "$ca_rel" in |
4705 | - "folsom"|"folsom/updates"|"folsom/proposed"|"folsom/staging") |
4706 | - codename="folsom" ;; |
4707 | - "grizzly"|"grizzly/updates"|"grizzly/proposed"|"grizzly/staging") |
4708 | - codename="grizzly" ;; |
4709 | - esac |
4710 | - fi |
4711 | - fi |
4712 | - |
4713 | - # have a guess based on the deb string provided |
4714 | - if [[ "${rel:0:3}" == "deb" ]] || \ |
4715 | - [[ "${rel:0:3}" == "ppa" ]] ; then |
4716 | - CODENAMES="diablo essex folsom grizzly havana" |
4717 | - for cname in $CODENAMES; do |
4718 | - if echo $rel | grep -q $cname; then |
4719 | - codename=$cname |
4720 | - fi |
4721 | - done |
4722 | - fi |
4723 | - echo $codename |
4724 | -} |
4725 | - |
4726 | -get_os_codename_package() { |
4727 | - local pkg_vers=$(dpkg -l | grep "$1" | awk '{ print $3 }') || echo "none" |
4728 | - pkg_vers=$(echo $pkg_vers | cut -d: -f2) # epochs |
4729 | - case "${pkg_vers:0:6}" in |
4730 | - "2011.2") echo "diablo" ;; |
4731 | - "2012.1") echo "essex" ;; |
4732 | - "2012.2") echo "folsom" ;; |
4733 | - "2013.1") echo "grizzly" ;; |
4734 | - "2013.2") echo "havana" ;; |
4735 | - esac |
4736 | -} |
4737 | - |
4738 | -get_os_version_codename() { |
4739 | - case "$1" in |
4740 | - "diablo") echo "2011.2" ;; |
4741 | - "essex") echo "2012.1" ;; |
4742 | - "folsom") echo "2012.2" ;; |
4743 | - "grizzly") echo "2013.1" ;; |
4744 | - "havana") echo "2013.2" ;; |
4745 | - esac |
4746 | -} |
4747 | - |
4748 | -get_ip() { |
4749 | - dpkg -l | grep -q python-dnspython || { |
4750 | - apt-get -y install python-dnspython 2>&1 > /dev/null |
4751 | - } |
4752 | - hostname=$1 |
4753 | - python -c " |
4754 | -import dns.resolver |
4755 | -import socket |
4756 | -try: |
4757 | - # Test to see if already an IPv4 address |
4758 | - socket.inet_aton('$hostname') |
4759 | - print '$hostname' |
4760 | -except socket.error: |
4761 | - try: |
4762 | - answers = dns.resolver.query('$hostname', 'A') |
4763 | - if answers: |
4764 | - print answers[0].address |
4765 | - except dns.resolver.NXDOMAIN: |
4766 | - pass |
4767 | -" |
4768 | -} |
4769 | - |
4770 | -# Common storage routines used by cinder, nova-volume and swift-storage. |
4771 | -clean_storage() { |
4772 | - # if configured to overwrite existing storage, we unmount the block-dev |
4773 | - # if mounted and clear any previous pv signatures |
4774 | - local block_dev="$1" |
4775 | -  juju-log "Cleaning storage '$block_dev'"
4776 | - if grep -q "^$block_dev" /proc/mounts ; then |
4777 | - mp=$(grep "^$block_dev" /proc/mounts | awk '{ print $2 }') |
4778 | - juju-log "Unmounting $block_dev from $mp" |
4779 | - umount "$mp" || error_out "ERROR: Could not unmount storage from $mp" |
4780 | - fi |
4781 | - if pvdisplay "$block_dev" >/dev/null 2>&1 ; then |
4782 | - juju-log "Removing existing LVM PV signatures from $block_dev" |
4783 | - |
4784 | - # deactivate any volgroups that may be built on this dev |
4785 | - vg=$(pvdisplay $block_dev | grep "VG Name" | awk '{ print $3 }') |
4786 | - if [[ -n "$vg" ]] ; then |
4787 | - juju-log "Deactivating existing volume group: $vg" |
4788 | - vgchange -an "$vg" || |
4789 | - error_out "ERROR: Could not deactivate volgroup $vg. Is it in use?" |
4790 | - fi |
4791 | - echo "yes" | pvremove -ff "$block_dev" || |
4792 | - error_out "Could not pvremove $block_dev" |
4793 | - else |
4794 | - juju-log "Zapping disk of all GPT and MBR structures" |
4795 | - sgdisk --zap-all $block_dev || |
4796 | - error_out "Unable to zap $block_dev" |
4797 | - fi |
4798 | -} |
4799 | - |
4800 | -function get_block_device() { |
4801 | - # given a string, return full path to the block device for that |
4802 | - # if input is not a block device, find a loopback device |
4803 | - local input="$1" |
4804 | - |
4805 | - case "$input" in |
4806 | - /dev/*) [[ ! -b "$input" ]] && error_out "$input does not exist." |
4807 | - echo "$input"; return 0;; |
4808 | - /*) :;; |
4809 | - *) [[ ! -b "/dev/$input" ]] && error_out "/dev/$input does not exist." |
4810 | - echo "/dev/$input"; return 0;; |
4811 | - esac |
4812 | - |
4813 | - # this represents a file |
4814 | - # support "/path/to/file|5G" |
4815 | - local fpath size oifs="$IFS" |
4816 | - if [ "${input#*|}" != "${input}" ]; then |
4817 | - size=${input##*|} |
4818 | - fpath=${input%|*} |
4819 | - else |
4820 | - fpath=${input} |
4821 | - size=5G |
4822 | - fi |
4823 | - |
4824 | - ## loop devices are not namespaced. This is bad for containers. |
4825 | - ## it means that the output of 'losetup' may have the given $fpath |
4826 | -  ## in it, but that may not represent this container's $fpath, but
4827 | -  ## another container's. To address that, we really need to
4828 | -  ## allow some unique container-id to be expanded within path.
4829 | - ## TODO: find a unique container-id that will be consistent for |
4830 | - ## this container throughout its lifetime and expand it |
4831 | - ## in the fpath. |
4832 | - # fpath=${fpath//%{id}/$THAT_ID} |
4833 | - |
4834 | - local found="" |
4835 | - # parse through 'losetup -a' output, looking for this file |
4836 | - # output is expected to look like: |
4837 | - # /dev/loop0: [0807]:961814 (/tmp/my.img) |
4838 | - found=$(losetup -a | |
4839 | - awk 'BEGIN { found=0; } |
4840 | - $3 == f { sub(/:$/,"",$1); print $1; found=found+1; } |
4841 | - END { if( found == 0 || found == 1 ) { exit(0); }; exit(1); }' \ |
4842 | - f="($fpath)") |
4843 | - |
4844 | - if [ $? -ne 0 ]; then |
4845 | - echo "multiple devices found for $fpath: $found" 1>&2 |
4846 | - return 1; |
4847 | - fi |
4848 | - |
4849 | - [ -n "$found" -a -b "$found" ] && { echo "$found"; return 1; } |
4850 | - |
4851 | - if [ -n "$found" ]; then |
4852 | - echo "confused, $found is not a block device for $fpath"; |
4853 | - return 1; |
4854 | - fi |
4855 | - |
4856 | - # no existing device was found, create one |
4857 | - mkdir -p "${fpath%/*}" |
4858 | - truncate --size "$size" "$fpath" || |
4859 | - { echo "failed to create $fpath of size $size"; return 1; } |
4860 | - |
4861 | - found=$(losetup --find --show "$fpath") || |
4862 | - { echo "failed to setup loop device for $fpath" 1>&2; return 1; } |
4863 | - |
4864 | - echo "$found" |
4865 | - return 0 |
4866 | -} |
4867 | - |
4868 | -HAPROXY_CFG=/etc/haproxy/haproxy.cfg |
4869 | -HAPROXY_DEFAULT=/etc/default/haproxy |
4870 | -########################################################################## |
4871 | -# Description: Configures HAProxy services for OpenStack APIs
4872 | -# Parameters: |
4873 | -#   Space delimited list of service:haproxy_port:api_port:mode combinations
4874 | -#   for which haproxy service configuration should be generated. The function
4875 | -# assumes the name of the peer relation is 'cluster' and that every |
4876 | -# service unit in the peer relation is running the same services. |
4877 | -# |
4878 | -# Services that do not specify :mode in parameter will default to http. |
4879 | -# |
4880 | -# Example |
4881 | -# configure_haproxy cinder_api:8776:8756:tcp nova_api:8774:8764:http |
4882 | -########################################################################## |
4883 | -configure_haproxy() { |
4884 | - local address=`unit-get private-address` |
4885 | - local name=${JUJU_UNIT_NAME////-} |
4886 | - cat > $HAPROXY_CFG << EOF |
4887 | -global |
4888 | - log 127.0.0.1 local0 |
4889 | - log 127.0.0.1 local1 notice |
4890 | - maxconn 20000 |
4891 | - user haproxy |
4892 | - group haproxy |
4893 | - spread-checks 0 |
4894 | - |
4895 | -defaults |
4896 | - log global |
4897 | - mode http |
4898 | - option httplog |
4899 | - option dontlognull |
4900 | - retries 3 |
4901 | - timeout queue 1000 |
4902 | - timeout connect 1000 |
4903 | - timeout client 30000 |
4904 | - timeout server 30000 |
4905 | - |
4906 | -listen stats :8888 |
4907 | - mode http |
4908 | - stats enable |
4909 | - stats hide-version |
4910 | - stats realm Haproxy\ Statistics |
4911 | - stats uri / |
4912 | - stats auth admin:password |
4913 | - |
4914 | -EOF |
4915 | - for service in $@; do |
4916 | - local service_name=$(echo $service | cut -d : -f 1) |
4917 | - local haproxy_listen_port=$(echo $service | cut -d : -f 2) |
4918 | - local api_listen_port=$(echo $service | cut -d : -f 3) |
4919 | - local mode=$(echo $service | cut -d : -f 4) |
4920 | - [[ -z "$mode" ]] && mode="http" |
4921 | - juju-log "Adding haproxy configuration entry for $service "\ |
4922 | - "($haproxy_listen_port -> $api_listen_port)" |
4923 | - cat >> $HAPROXY_CFG << EOF |
4924 | -listen $service_name 0.0.0.0:$haproxy_listen_port |
4925 | - balance roundrobin |
4926 | - mode $mode |
4927 | - option ${mode}log |
4928 | - server $name $address:$api_listen_port check |
4929 | -EOF |
4930 | - local r_id="" |
4931 | - local unit="" |
4932 | - for r_id in `relation-ids cluster`; do |
4933 | - for unit in `relation-list -r $r_id`; do |
4934 | - local unit_name=${unit////-} |
4935 | - local unit_address=`relation-get -r $r_id private-address $unit` |
4936 | - if [ -n "$unit_address" ]; then |
4937 | - echo " server $unit_name $unit_address:$api_listen_port check" \ |
4938 | - >> $HAPROXY_CFG |
4939 | - fi |
4940 | - done |
4941 | - done |
4942 | - done |
4943 | - echo "ENABLED=1" > $HAPROXY_DEFAULT |
4944 | - service haproxy restart |
4945 | -} |
4946 | - |
4947 | -########################################################################## |
4948 | -# Description: Query HA interface to determine if the cluster is configured
4949 | -# Returns: 0 if configured, 1 if not configured |
4950 | -########################################################################## |
4951 | -is_clustered() { |
4952 | - local r_id="" |
4953 | - local unit="" |
4954 | - for r_id in $(relation-ids ha); do |
4955 | - if [ -n "$r_id" ]; then |
4956 | - for unit in $(relation-list -r $r_id); do |
4957 | - clustered=$(relation-get -r $r_id clustered $unit) |
4958 | - if [ -n "$clustered" ]; then |
4959 | - juju-log "Unit is haclustered" |
4960 | - return 0 |
4961 | - fi |
4962 | - done |
4963 | - fi |
4964 | - done |
4965 | - juju-log "Unit is not haclustered" |
4966 | - return 1 |
4967 | -} |
4968 | - |
4969 | -########################################################################## |
4970 | -# Description: Return a list of all peers in cluster relations |
4971 | -########################################################################## |
4972 | -peer_units() { |
4973 | - local peers="" |
4974 | - local r_id="" |
4975 | - for r_id in $(relation-ids cluster); do |
4976 | - peers="$peers $(relation-list -r $r_id)" |
4977 | - done |
4978 | - echo $peers |
4979 | -} |
4980 | - |
4981 | -########################################################################## |
4982 | -# Description: Determines whether the current unit is the oldest of all |
4983 | -# its peers - supports partial leader election |
4984 | -# Returns: 0 if oldest, 1 if not |
4985 | -########################################################################## |
4986 | -oldest_peer() { |
4987 | - peers=$1 |
4988 | - local l_unit_no=$(echo $JUJU_UNIT_NAME | cut -d / -f 2) |
4989 | - for peer in $peers; do |
4990 | - echo "Comparing $JUJU_UNIT_NAME with peers: $peers" |
4991 | - local r_unit_no=$(echo $peer | cut -d / -f 2) |
4992 | - if (($r_unit_no<$l_unit_no)); then |
4993 | - juju-log "Not oldest peer; deferring" |
4994 | - return 1 |
4995 | - fi |
4996 | - done |
4997 | - juju-log "Oldest peer; might take charge?" |
4998 | - return 0 |
4999 | -} |
5000 | - |
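The shell cluster helpers above (is_clustered, peer_units, oldest_peer) are superseded in this branch by charmhelpers.contrib.hahelpers.cluster. A rough sketch of the equivalent leader check, as used alongside migrate_database() earlier in this diff; the resource name is illustrative, the charm defines its own:

    from charmhelpers.contrib.hahelpers.cluster import eligible_leader

    CLUSTER_RES = 'res_glance_vip'  # illustrative resource name

    if eligible_leader(CLUSTER_RES):
        # Only the elected (or oldest) peer runs one-off tasks such as
        # the database migration.
        migrate_database()
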
The diff has been truncated for viewing.