Merge lp:~thumper/charms/trusty/python-django/support-1.7 into lp:charms/python-django
Status: Superseded
Proposed branch: lp:~thumper/charms/trusty/python-django/support-1.7
Merge into: lp:charms/python-django
Diff against target: 6301 lines (+530/-5354), 33 files modified
- Makefile (+2/-0)
- charm-helpers.yaml (+0/-1)
- hooks/charmhelpers/contrib/ansible/__init__.py (+0/-165)
- hooks/charmhelpers/contrib/charmhelpers/__init__.py (+0/-184)
- hooks/charmhelpers/contrib/charmsupport/nrpe.py (+0/-216)
- hooks/charmhelpers/contrib/charmsupport/volumes.py (+0/-156)
- hooks/charmhelpers/contrib/hahelpers/apache.py (+0/-59)
- hooks/charmhelpers/contrib/hahelpers/cluster.py (+0/-183)
- hooks/charmhelpers/contrib/jujugui/utils.py (+0/-602)
- hooks/charmhelpers/contrib/network/ip.py (+0/-69)
- hooks/charmhelpers/contrib/network/ovs/__init__.py (+0/-75)
- hooks/charmhelpers/contrib/openstack/alternatives.py (+0/-17)
- hooks/charmhelpers/contrib/openstack/context.py (+0/-700)
- hooks/charmhelpers/contrib/openstack/neutron.py (+0/-171)
- hooks/charmhelpers/contrib/openstack/templates/__init__.py (+0/-2)
- hooks/charmhelpers/contrib/openstack/templating.py (+0/-280)
- hooks/charmhelpers/contrib/openstack/utils.py (+0/-450)
- hooks/charmhelpers/contrib/peerstorage/__init__.py (+0/-83)
- hooks/charmhelpers/contrib/python/packages.py (+0/-76)
- hooks/charmhelpers/contrib/python/version.py (+0/-18)
- hooks/charmhelpers/contrib/saltstack/__init__.py (+0/-102)
- hooks/charmhelpers/contrib/ssl/__init__.py (+0/-78)
- hooks/charmhelpers/contrib/ssl/service.py (+0/-267)
- hooks/charmhelpers/contrib/storage/linux/ceph.py (+0/-387)
- hooks/charmhelpers/contrib/storage/linux/loopback.py (+0/-62)
- hooks/charmhelpers/contrib/storage/linux/lvm.py (+0/-88)
- hooks/charmhelpers/contrib/storage/linux/utils.py (+0/-35)
- hooks/charmhelpers/contrib/templating/contexts.py (+0/-104)
- hooks/charmhelpers/contrib/templating/pyformat.py (+0/-13)
- hooks/charmhelpers/contrib/unison/__init__.py (+0/-257)
- hooks/hooks.py (+406/-329)
- hooks/tests/test_template.py (+0/-125)
- hooks/tests/test_unit.py (+122/-0)
To merge this branch: bzr merge lp:~thumper/charms/trusty/python-django/support-1.7
Related bugs: none
Reviewer: charmers (status: Pending)
Review via email: mp+260013@code.launchpad.net
This proposal has been superseded by a proposal from 2015-05-27.
Commit message
Description of the change
Well, this fixes the last merge I attempted. It was very broken.
To make sure I really didn't break it this time, I added some unit tests, which meant refactoring the hooks module so it had no global state. What I thought would take an hour at most became an eight-hour marathon.
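For context, the kind of refactor described here usually means turning module-level state into explicit function parameters so hooks can be exercised with plain values. This is only an illustrative sketch, not the charm's actual code; the function and key names are invented:

```python
# Hypothetical sketch of removing global state from a hooks module.
# Instead of reading module-level config at import time, the hook
# logic takes explicit dicts, so a unit test needs no juju environment.

def configure_site(config, juju_state=None):
    """Build site settings from an explicit config dict and
    (optionally) relation data, instead of module globals."""
    juju_state = juju_state or {}
    return {
        'project_dir': config.get('application_path', '/srv/app'),
        'db_host': juju_state.get('db_host', 'localhost'),
    }


# A test can now call the function directly with plain dicts:
settings = configure_site({'application_path': '/srv/django'},
                          {'db_host': '10.0.0.2'})
assert settings['project_dir'] == '/srv/django'
assert settings['db_host'] == '10.0.0.2'
```

The payoff is exactly what the description says: the hook functions become testable with nosetests, with no juju tools on the path.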
'make check' now runs the hook unit tests.
Two of the files in the hooks/tests directory were direct copies from gunicorn. I removed one and renamed the other so it doesn't get in the way.
Also, trying to run the tests indicated that the charmhandler/
There is more cleanup to do, but I left some obvious things to fix as markers so the diff will show I didn't replace *everything*.
I have also tested manual deployment, and it works with postgresql, gunicorn and the django_settings relations.
51. By Tim Penhey: lint fixes
52. By Tim Penhey: Merge prev
53. By Tim Penhey: mock out admin location
54. By Tim Penhey: Attempting to get bundletest working properly, adding 1.8 test.
55. By Tim Penhey: make check uses virtual env.
56. By Tim Penhey: Must update pip during upgrade too.
57. By Tim Penhey: Make sure to use --noinput for Django 1.7+ migrate.
58. By Tim Penhey: Make the lint target depend on flake8 in the virtual env.
59. By Tim Penhey: Make the django test executable.
60. By Tim Penhey: Tweak makefile and test calls.
61. By Tim Penhey: Fix the django 1.8 with postgresql test. Seems that the autocommit option being used was removed in Django 1.8.
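The --noinput change in revision 57 matters because without it `manage.py migrate` can stop and wait for console input, which would hang a juju hook. A minimal sketch of how such an invocation might be built (the manage.py path and helper name are illustrative, not the charm's actual code):

```python
# Sketch: build the Django migrate command for a charm hook.
# Passing --noinput on Django 1.7+ keeps migrate from prompting
# interactively, which would block the hook forever.

def migrate_command(manage_py='/srv/app/manage.py',
                    django_17_or_later=True):
    cmd = ['python', manage_py, 'migrate']
    if django_17_or_later:
        # Never prompt; answer "no" to any interactive question.
        cmd.append('--noinput')
    return cmd


assert '--noinput' in migrate_command()
assert '--noinput' not in migrate_command(django_17_or_later=False)
```

In the real hook this list would be handed to subprocess.check_call (or similar) during config-changed or upgrade-charm.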
Unmerged revisions
Preview Diff
1 | === modified file 'Makefile' |
2 | --- Makefile 2014-09-26 21:30:42 +0000 |
3 | +++ Makefile 2015-05-27 22:03:05 +0000 |
4 | @@ -28,3 +28,5 @@ |
5 | integration-test: |
6 | juju test --set-e -p SKIP_SLOW_TESTS,DEPLOYER_TARGET,JUJU_HOME,JUJU_ENV -v --timeout 3000s |
7 | |
8 | +check: |
9 | + nosetests hooks |
10 | |
11 | === modified file 'charm-helpers.yaml' |
12 | --- charm-helpers.yaml 2013-11-26 17:12:54 +0000 |
13 | +++ charm-helpers.yaml 2015-05-27 22:03:05 +0000 |
14 | @@ -3,4 +3,3 @@ |
15 | include: |
16 | - core |
17 | - fetch |
18 | - - contrib |
19 | |
20 | === removed directory 'hooks/charmhelpers/contrib' |
21 | === removed file 'hooks/charmhelpers/contrib/__init__.py' |
22 | === removed directory 'hooks/charmhelpers/contrib/ansible' |
23 | === removed file 'hooks/charmhelpers/contrib/ansible/__init__.py' |
24 | --- hooks/charmhelpers/contrib/ansible/__init__.py 2013-11-26 17:12:54 +0000 |
25 | +++ hooks/charmhelpers/contrib/ansible/__init__.py 1970-01-01 00:00:00 +0000 |
26 | @@ -1,165 +0,0 @@ |
27 | -# Copyright 2013 Canonical Ltd. |
28 | -# |
29 | -# Authors: |
30 | -# Charm Helpers Developers <juju@lists.ubuntu.com> |
31 | -"""Charm Helpers ansible - declare the state of your machines. |
32 | - |
33 | -This helper enables you to declare your machine state, rather than |
34 | -program it procedurally (and have to test each change to your procedures). |
35 | -Your install hook can be as simple as: |
36 | - |
37 | -{{{ |
38 | -import charmhelpers.contrib.ansible |
39 | - |
40 | - |
41 | -def install(): |
42 | - charmhelpers.contrib.ansible.install_ansible_support() |
43 | - charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml') |
44 | -}}} |
45 | - |
46 | -and won't need to change (nor will its tests) when you change the machine |
47 | -state. |
48 | - |
49 | -All of your juju config and relation-data are available as template |
50 | -variables within your playbooks and templates. An install playbook looks |
51 | -something like: |
52 | - |
53 | -{{{ |
54 | ---- |
55 | -- hosts: localhost |
56 | - user: root |
57 | - |
58 | - tasks: |
59 | - - name: Add private repositories. |
60 | - template: |
61 | - src: ../templates/private-repositories.list.jinja2 |
62 | - dest: /etc/apt/sources.list.d/private.list |
63 | - |
64 | - - name: Update the cache. |
65 | - apt: update_cache=yes |
66 | - |
67 | - - name: Install dependencies. |
68 | - apt: pkg={{ item }} |
69 | - with_items: |
70 | - - python-mimeparse |
71 | - - python-webob |
72 | - - sunburnt |
73 | - |
74 | - - name: Setup groups. |
75 | - group: name={{ item.name }} gid={{ item.gid }} |
76 | - with_items: |
77 | - - { name: 'deploy_user', gid: 1800 } |
78 | - - { name: 'service_user', gid: 1500 } |
79 | - |
80 | - ... |
81 | -}}} |
82 | - |
83 | -Read more online about playbooks[1] and standard ansible modules[2]. |
84 | - |
85 | -[1] http://www.ansibleworks.com/docs/playbooks.html |
86 | -[2] http://www.ansibleworks.com/docs/modules.html |
87 | -""" |
88 | -import os |
89 | -import subprocess |
90 | - |
91 | -import charmhelpers.contrib.templating.contexts |
92 | -import charmhelpers.core.host |
93 | -import charmhelpers.core.hookenv |
94 | -import charmhelpers.fetch |
95 | - |
96 | - |
97 | -charm_dir = os.environ.get('CHARM_DIR', '') |
98 | -ansible_hosts_path = '/etc/ansible/hosts' |
99 | -# Ansible will automatically include any vars in the following |
100 | -# file in its inventory when run locally. |
101 | -ansible_vars_path = '/etc/ansible/host_vars/localhost' |
102 | - |
103 | - |
104 | -def install_ansible_support(from_ppa=True): |
105 | - """Installs the ansible package. |
106 | - |
107 | - By default it is installed from the PPA [1] linked from |
108 | - the ansible website [2]. |
109 | - |
110 | - [1] https://launchpad.net/~rquillo/+archive/ansible |
111 | - [2] http://www.ansibleworks.com/docs/gettingstarted.html#ubuntu-and-debian |
112 | - |
113 | - If from_ppa is false, you must ensure that the package is available |
114 | - from a configured repository. |
115 | - """ |
116 | - if from_ppa: |
117 | - charmhelpers.fetch.add_source('ppa:rquillo/ansible') |
118 | - charmhelpers.fetch.apt_update(fatal=True) |
119 | - charmhelpers.fetch.apt_install('ansible') |
120 | - with open(ansible_hosts_path, 'w+') as hosts_file: |
121 | - hosts_file.write('localhost ansible_connection=local') |
122 | - |
123 | - |
124 | -def apply_playbook(playbook, tags=None): |
125 | - tags = tags or [] |
126 | - tags = ",".join(tags) |
127 | - charmhelpers.contrib.templating.contexts.juju_state_to_yaml( |
128 | - ansible_vars_path, namespace_separator='__', |
129 | - allow_hyphens_in_keys=False) |
130 | - call = [ |
131 | - 'ansible-playbook', |
132 | - '-c', |
133 | - 'local', |
134 | - playbook, |
135 | - ] |
136 | - if tags: |
137 | - call.extend(['--tags', '{}'.format(tags)]) |
138 | - subprocess.check_call(call) |
139 | - |
140 | - |
141 | -class AnsibleHooks(charmhelpers.core.hookenv.Hooks): |
142 | - """Run a playbook with the hook-name as the tag. |
143 | - |
144 | - This helper builds on the standard hookenv.Hooks helper, |
145 | - but additionally runs the playbook with the hook-name specified |
146 | - using --tags (ie. running all the tasks tagged with the hook-name). |
147 | - |
148 | - Example: |
149 | - hooks = AnsibleHooks(playbook_path='playbooks/my_machine_state.yaml') |
150 | - |
151 | - # All the tasks within my_machine_state.yaml tagged with 'install' |
152 | - # will be run automatically after do_custom_work() |
153 | - @hooks.hook() |
154 | - def install(): |
155 | - do_custom_work() |
156 | - |
157 | - # For most of your hooks, you won't need to do anything other |
158 | - # than run the tagged tasks for the hook: |
159 | - @hooks.hook('config-changed', 'start', 'stop') |
160 | - def just_use_playbook(): |
161 | - pass |
162 | - |
163 | - # As a convenience, you can avoid the above noop function by specifying |
164 | - # the hooks which are handled by ansible-only and they'll be registered |
165 | - # for you: |
166 | - # hooks = AnsibleHooks( |
167 | - # 'playbooks/my_machine_state.yaml', |
168 | - # default_hooks=['config-changed', 'start', 'stop']) |
169 | - |
170 | - if __name__ == "__main__": |
171 | - # execute a hook based on the name the program is called by |
172 | - hooks.execute(sys.argv) |
173 | - """ |
174 | - |
175 | - def __init__(self, playbook_path, default_hooks=None): |
176 | - """Register any hooks handled by ansible.""" |
177 | - super(AnsibleHooks, self).__init__() |
178 | - |
179 | - self.playbook_path = playbook_path |
180 | - |
181 | - default_hooks = default_hooks or [] |
182 | - noop = lambda *args, **kwargs: None |
183 | - for hook in default_hooks: |
184 | - self.register(hook, noop) |
185 | - |
186 | - def execute(self, args): |
187 | - """Execute the hook followed by the playbook using the hook as tag.""" |
188 | - super(AnsibleHooks, self).execute(args) |
189 | - hook_name = os.path.basename(args[0]) |
190 | - charmhelpers.contrib.ansible.apply_playbook( |
191 | - self.playbook_path, tags=[hook_name]) |
192 | |
193 | === removed directory 'hooks/charmhelpers/contrib/charmhelpers' |
194 | === removed file 'hooks/charmhelpers/contrib/charmhelpers/__init__.py' |
195 | --- hooks/charmhelpers/contrib/charmhelpers/__init__.py 2013-11-26 17:12:54 +0000 |
196 | +++ hooks/charmhelpers/contrib/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000 |
197 | @@ -1,184 +0,0 @@ |
198 | -# Copyright 2012 Canonical Ltd. This software is licensed under the |
199 | -# GNU Affero General Public License version 3 (see the file LICENSE). |
200 | - |
201 | -import warnings |
202 | -warnings.warn("contrib.charmhelpers is deprecated", DeprecationWarning) |
203 | - |
204 | -"""Helper functions for writing Juju charms in Python.""" |
205 | - |
206 | -__metaclass__ = type |
207 | -__all__ = [ |
208 | - #'get_config', # core.hookenv.config() |
209 | - #'log', # core.hookenv.log() |
210 | - #'log_entry', # core.hookenv.log() |
211 | - #'log_exit', # core.hookenv.log() |
212 | - #'relation_get', # core.hookenv.relation_get() |
213 | - #'relation_set', # core.hookenv.relation_set() |
214 | - #'relation_ids', # core.hookenv.relation_ids() |
215 | - #'relation_list', # core.hookenv.relation_units() |
216 | - #'config_get', # core.hookenv.config() |
217 | - #'unit_get', # core.hookenv.unit_get() |
218 | - #'open_port', # core.hookenv.open_port() |
219 | - #'close_port', # core.hookenv.close_port() |
220 | - #'service_control', # core.host.service() |
221 | - 'unit_info', # client-side, NOT IMPLEMENTED |
222 | - 'wait_for_machine', # client-side, NOT IMPLEMENTED |
223 | - 'wait_for_page_contents', # client-side, NOT IMPLEMENTED |
224 | - 'wait_for_relation', # client-side, NOT IMPLEMENTED |
225 | - 'wait_for_unit', # client-side, NOT IMPLEMENTED |
226 | -] |
227 | - |
228 | -import operator |
229 | -from shelltoolbox import ( |
230 | - command, |
231 | -) |
232 | -import tempfile |
233 | -import time |
234 | -import urllib2 |
235 | -import yaml |
236 | - |
237 | -SLEEP_AMOUNT = 0.1 |
238 | -# We create a juju_status Command here because it makes testing much, |
239 | -# much easier. |
240 | -juju_status = lambda: command('juju')('status') |
241 | - |
242 | -# re-implemented as charmhelpers.fetch.configure_sources() |
243 | -#def configure_source(update=False): |
244 | -# source = config_get('source') |
245 | -# if ((source.startswith('ppa:') or |
246 | -# source.startswith('cloud:') or |
247 | -# source.startswith('http:'))): |
248 | -# run('add-apt-repository', source) |
249 | -# if source.startswith("http:"): |
250 | -# run('apt-key', 'import', config_get('key')) |
251 | -# if update: |
252 | -# run('apt-get', 'update') |
253 | - |
254 | - |
255 | -# DEPRECATED: client-side only |
256 | -def make_charm_config_file(charm_config): |
257 | - charm_config_file = tempfile.NamedTemporaryFile() |
258 | - charm_config_file.write(yaml.dump(charm_config)) |
259 | - charm_config_file.flush() |
260 | - # The NamedTemporaryFile instance is returned instead of just the name |
261 | - # because we want to take advantage of garbage collection-triggered |
262 | - # deletion of the temp file when it goes out of scope in the caller. |
263 | - return charm_config_file |
264 | - |
265 | - |
266 | -# DEPRECATED: client-side only |
267 | -def unit_info(service_name, item_name, data=None, unit=None): |
268 | - if data is None: |
269 | - data = yaml.safe_load(juju_status()) |
270 | - service = data['services'].get(service_name) |
271 | - if service is None: |
272 | - # XXX 2012-02-08 gmb: |
273 | - # This allows us to cope with the race condition that we |
274 | - # have between deploying a service and having it come up in |
275 | - # `juju status`. We could probably do with cleaning it up so |
276 | - # that it fails a bit more noisily after a while. |
277 | - return '' |
278 | - units = service['units'] |
279 | - if unit is not None: |
280 | - item = units[unit][item_name] |
281 | - else: |
282 | - # It might seem odd to sort the units here, but we do it to |
283 | - # ensure that when no unit is specified, the first unit for the |
284 | - # service (or at least the one with the lowest number) is the |
285 | - # one whose data gets returned. |
286 | - sorted_unit_names = sorted(units.keys()) |
287 | - item = units[sorted_unit_names[0]][item_name] |
288 | - return item |
289 | - |
290 | - |
291 | -# DEPRECATED: client-side only |
292 | -def get_machine_data(): |
293 | - return yaml.safe_load(juju_status())['machines'] |
294 | - |
295 | - |
296 | -# DEPRECATED: client-side only |
297 | -def wait_for_machine(num_machines=1, timeout=300): |
298 | - """Wait `timeout` seconds for `num_machines` machines to come up. |
299 | - |
300 | - This wait_for... function can be called by other wait_for functions |
301 | - whose timeouts might be too short in situations where only a bare |
302 | - Juju setup has been bootstrapped. |
303 | - |
304 | - :return: A tuple of (num_machines, time_taken). This is used for |
305 | - testing. |
306 | - """ |
307 | - # You may think this is a hack, and you'd be right. The easiest way |
308 | - # to tell what environment we're working in (LXC vs EC2) is to check |
309 | - # the dns-name of the first machine. If it's localhost we're in LXC |
310 | - # and we can just return here. |
311 | - if get_machine_data()[0]['dns-name'] == 'localhost': |
312 | - return 1, 0 |
313 | - start_time = time.time() |
314 | - while True: |
315 | - # Drop the first machine, since it's the Zookeeper and that's |
316 | - # not a machine that we need to wait for. This will only work |
317 | - # for EC2 environments, which is why we return early above if |
318 | - # we're in LXC. |
319 | - machine_data = get_machine_data() |
320 | - non_zookeeper_machines = [ |
321 | - machine_data[key] for key in machine_data.keys()[1:]] |
322 | - if len(non_zookeeper_machines) >= num_machines: |
323 | - all_machines_running = True |
324 | - for machine in non_zookeeper_machines: |
325 | - if machine.get('instance-state') != 'running': |
326 | - all_machines_running = False |
327 | - break |
328 | - if all_machines_running: |
329 | - break |
330 | - if time.time() - start_time >= timeout: |
331 | - raise RuntimeError('timeout waiting for service to start') |
332 | - time.sleep(SLEEP_AMOUNT) |
333 | - return num_machines, time.time() - start_time |
334 | - |
335 | - |
336 | -# DEPRECATED: client-side only |
337 | -def wait_for_unit(service_name, timeout=480): |
338 | - """Wait `timeout` seconds for a given service name to come up.""" |
339 | - wait_for_machine(num_machines=1) |
340 | - start_time = time.time() |
341 | - while True: |
342 | - state = unit_info(service_name, 'agent-state') |
343 | - if 'error' in state or state == 'started': |
344 | - break |
345 | - if time.time() - start_time >= timeout: |
346 | - raise RuntimeError('timeout waiting for service to start') |
347 | - time.sleep(SLEEP_AMOUNT) |
348 | - if state != 'started': |
349 | - raise RuntimeError('unit did not start, agent-state: ' + state) |
350 | - |
351 | - |
352 | -# DEPRECATED: client-side only |
353 | -def wait_for_relation(service_name, relation_name, timeout=120): |
354 | - """Wait `timeout` seconds for a given relation to come up.""" |
355 | - start_time = time.time() |
356 | - while True: |
357 | - relation = unit_info(service_name, 'relations').get(relation_name) |
358 | - if relation is not None and relation['state'] == 'up': |
359 | - break |
360 | - if time.time() - start_time >= timeout: |
361 | - raise RuntimeError('timeout waiting for relation to be up') |
362 | - time.sleep(SLEEP_AMOUNT) |
363 | - |
364 | - |
365 | -# DEPRECATED: client-side only |
366 | -def wait_for_page_contents(url, contents, timeout=120, validate=None): |
367 | - if validate is None: |
368 | - validate = operator.contains |
369 | - start_time = time.time() |
370 | - while True: |
371 | - try: |
372 | - stream = urllib2.urlopen(url) |
373 | - except (urllib2.HTTPError, urllib2.URLError): |
374 | - pass |
375 | - else: |
376 | - page = stream.read() |
377 | - if validate(page, contents): |
378 | - return page |
379 | - if time.time() - start_time >= timeout: |
380 | - raise RuntimeError('timeout waiting for contents of ' + url) |
381 | - time.sleep(SLEEP_AMOUNT) |
382 | |
383 | === removed directory 'hooks/charmhelpers/contrib/charmsupport' |
384 | === removed file 'hooks/charmhelpers/contrib/charmsupport/__init__.py' |
385 | === removed file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py' |
386 | --- hooks/charmhelpers/contrib/charmsupport/nrpe.py 2013-11-26 17:12:54 +0000 |
387 | +++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000 |
388 | @@ -1,216 +0,0 @@ |
389 | -"""Compatibility with the nrpe-external-master charm""" |
390 | -# Copyright 2012 Canonical Ltd. |
391 | -# |
392 | -# Authors: |
393 | -# Matthew Wedgwood <matthew.wedgwood@canonical.com> |
394 | - |
395 | -import subprocess |
396 | -import pwd |
397 | -import grp |
398 | -import os |
399 | -import re |
400 | -import shlex |
401 | -import yaml |
402 | - |
403 | -from charmhelpers.core.hookenv import ( |
404 | - config, |
405 | - local_unit, |
406 | - log, |
407 | - relation_ids, |
408 | - relation_set, |
409 | -) |
410 | - |
411 | -from charmhelpers.core.host import service |
412 | - |
413 | -# This module adds compatibility with the nrpe-external-master and plain nrpe |
414 | -# subordinate charms. To use it in your charm: |
415 | -# |
416 | -# 1. Update metadata.yaml |
417 | -# |
418 | -# provides: |
419 | -# (...) |
420 | -# nrpe-external-master: |
421 | -# interface: nrpe-external-master |
422 | -# scope: container |
423 | -# |
424 | -# and/or |
425 | -# |
426 | -# provides: |
427 | -# (...) |
428 | -# local-monitors: |
429 | -# interface: local-monitors |
430 | -# scope: container |
431 | - |
432 | -# |
433 | -# 2. Add the following to config.yaml |
434 | -# |
435 | -# nagios_context: |
436 | -# default: "juju" |
437 | -# type: string |
438 | -# description: | |
439 | -# Used by the nrpe subordinate charms. |
440 | -# A string that will be prepended to instance name to set the host name |
441 | -# in nagios. So for instance the hostname would be something like: |
442 | -# juju-myservice-0 |
443 | -# If you're running multiple environments with the same services in them |
444 | -# this allows you to differentiate between them. |
445 | -# |
446 | -# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master |
447 | -# |
448 | -# 4. Update your hooks.py with something like this: |
449 | -# |
450 | -# from charmsupport.nrpe import NRPE |
451 | -# (...) |
452 | -# def update_nrpe_config(): |
453 | -# nrpe_compat = NRPE() |
454 | -# nrpe_compat.add_check( |
455 | -# shortname = "myservice", |
456 | -# description = "Check MyService", |
457 | -# check_cmd = "check_http -w 2 -c 10 http://localhost" |
458 | -# ) |
459 | -# nrpe_compat.add_check( |
460 | -# "myservice_other", |
461 | -# "Check for widget failures", |
462 | -# check_cmd = "/srv/myapp/scripts/widget_check" |
463 | -# ) |
464 | -# nrpe_compat.write() |
465 | -# |
466 | -# def config_changed(): |
467 | -# (...) |
468 | -# update_nrpe_config() |
469 | -# |
470 | -# def nrpe_external_master_relation_changed(): |
471 | -# update_nrpe_config() |
472 | -# |
473 | -# def local_monitors_relation_changed(): |
474 | -# update_nrpe_config() |
475 | -# |
476 | -# 5. ln -s hooks.py nrpe-external-master-relation-changed |
477 | -# ln -s hooks.py local-monitors-relation-changed |
478 | - |
479 | - |
480 | -class CheckException(Exception): |
481 | - pass |
482 | - |
483 | - |
484 | -class Check(object): |
485 | - shortname_re = '[A-Za-z0-9-_]+$' |
486 | - service_template = (""" |
487 | -#--------------------------------------------------- |
488 | -# This file is Juju managed |
489 | -#--------------------------------------------------- |
490 | -define service {{ |
491 | - use active-service |
492 | - host_name {nagios_hostname} |
493 | - service_description {nagios_hostname}[{shortname}] """ |
494 | - """{description} |
495 | - check_command check_nrpe!{command} |
496 | - servicegroups {nagios_servicegroup} |
497 | -}} |
498 | -""") |
499 | - |
500 | - def __init__(self, shortname, description, check_cmd): |
501 | - super(Check, self).__init__() |
502 | - # XXX: could be better to calculate this from the service name |
503 | - if not re.match(self.shortname_re, shortname): |
504 | - raise CheckException("shortname must match {}".format( |
505 | - Check.shortname_re)) |
506 | - self.shortname = shortname |
507 | - self.command = "check_{}".format(shortname) |
508 | - # Note: a set of invalid characters is defined by the |
509 | - # Nagios server config |
510 | - # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()= |
511 | - self.description = description |
512 | - self.check_cmd = self._locate_cmd(check_cmd) |
513 | - |
514 | - def _locate_cmd(self, check_cmd): |
515 | - search_path = ( |
516 | - '/usr/lib/nagios/plugins', |
517 | - '/usr/local/lib/nagios/plugins', |
518 | - ) |
519 | - parts = shlex.split(check_cmd) |
520 | - for path in search_path: |
521 | - if os.path.exists(os.path.join(path, parts[0])): |
522 | - command = os.path.join(path, parts[0]) |
523 | - if len(parts) > 1: |
524 | - command += " " + " ".join(parts[1:]) |
525 | - return command |
526 | - log('Check command not found: {}'.format(parts[0])) |
527 | - return '' |
528 | - |
529 | - def write(self, nagios_context, hostname): |
530 | - nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format( |
531 | - self.command) |
532 | - with open(nrpe_check_file, 'w') as nrpe_check_config: |
533 | - nrpe_check_config.write("# check {}\n".format(self.shortname)) |
534 | - nrpe_check_config.write("command[{}]={}\n".format( |
535 | - self.command, self.check_cmd)) |
536 | - |
537 | - if not os.path.exists(NRPE.nagios_exportdir): |
538 | - log('Not writing service config as {} is not accessible'.format( |
539 | - NRPE.nagios_exportdir)) |
540 | - else: |
541 | - self.write_service_config(nagios_context, hostname) |
542 | - |
543 | - def write_service_config(self, nagios_context, hostname): |
544 | - for f in os.listdir(NRPE.nagios_exportdir): |
545 | - if re.search('.*{}.cfg'.format(self.command), f): |
546 | - os.remove(os.path.join(NRPE.nagios_exportdir, f)) |
547 | - |
548 | - templ_vars = { |
549 | - 'nagios_hostname': hostname, |
550 | - 'nagios_servicegroup': nagios_context, |
551 | - 'description': self.description, |
552 | - 'shortname': self.shortname, |
553 | - 'command': self.command, |
554 | - } |
555 | - nrpe_service_text = Check.service_template.format(**templ_vars) |
556 | - nrpe_service_file = '{}/service__{}_{}.cfg'.format( |
557 | - NRPE.nagios_exportdir, hostname, self.command) |
558 | - with open(nrpe_service_file, 'w') as nrpe_service_config: |
559 | - nrpe_service_config.write(str(nrpe_service_text)) |
560 | - |
561 | - def run(self): |
562 | - subprocess.call(self.check_cmd) |
563 | - |
564 | - |
565 | -class NRPE(object): |
566 | - nagios_logdir = '/var/log/nagios' |
567 | - nagios_exportdir = '/var/lib/nagios/export' |
568 | - nrpe_confdir = '/etc/nagios/nrpe.d' |
569 | - |
570 | - def __init__(self): |
571 | - super(NRPE, self).__init__() |
572 | - self.config = config() |
573 | - self.nagios_context = self.config['nagios_context'] |
574 | - self.unit_name = local_unit().replace('/', '-') |
575 | - self.hostname = "{}-{}".format(self.nagios_context, self.unit_name) |
576 | - self.checks = [] |
577 | - |
578 | - def add_check(self, *args, **kwargs): |
579 | - self.checks.append(Check(*args, **kwargs)) |
580 | - |
581 | - def write(self): |
582 | - try: |
583 | - nagios_uid = pwd.getpwnam('nagios').pw_uid |
584 | - nagios_gid = grp.getgrnam('nagios').gr_gid |
585 | - except: |
586 | - log("Nagios user not set up, nrpe checks not updated") |
587 | - return |
588 | - |
589 | - if not os.path.exists(NRPE.nagios_logdir): |
590 | - os.mkdir(NRPE.nagios_logdir) |
591 | - os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid) |
592 | - |
593 | - nrpe_monitors = {} |
594 | - monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}} |
595 | - for nrpecheck in self.checks: |
596 | - nrpecheck.write(self.nagios_context, self.hostname) |
597 | - nrpe_monitors[nrpecheck.shortname] = { |
598 | - "command": nrpecheck.command, |
599 | - } |
600 | - |
601 | - service('restart', 'nagios-nrpe-server') |
602 | - |
603 | - for rid in relation_ids("local-monitors"): |
604 | - relation_set(relation_id=rid, monitors=yaml.dump(monitors)) |
605 | |
606 | === removed file 'hooks/charmhelpers/contrib/charmsupport/volumes.py' |
607 | --- hooks/charmhelpers/contrib/charmsupport/volumes.py 2013-11-26 17:12:54 +0000 |
608 | +++ hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000 |
609 | @@ -1,156 +0,0 @@ |
610 | -''' |
611 | -Functions for managing volumes in juju units. One volume is supported per unit. |
612 | -Subordinates may have their own storage, provided it is on its own partition. |
613 | - |
614 | -Configuration stanzas: |
615 | - volume-ephemeral: |
616 | - type: boolean |
617 | - default: true |
618 | - description: > |
619 | - If false, a volume is mounted as sepecified in "volume-map" |
620 | - If true, ephemeral storage will be used, meaning that log data |
621 | - will only exist as long as the machine. YOU HAVE BEEN WARNED. |
622 | - volume-map: |
623 | - type: string |
624 | - default: {} |
625 | - description: > |
626 | - YAML map of units to device names, e.g: |
627 | - "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }" |
628 | - Service units will raise a configure-error if volume-ephemeral |
629 | - is 'true' and no volume-map value is set. Use 'juju set' to set a |
630 | - value and 'juju resolved' to complete configuration. |
631 | - |
632 | -Usage: |
633 | - from charmsupport.volumes import configure_volume, VolumeConfigurationError |
634 | - from charmsupport.hookenv import log, ERROR |
635 | - def post_mount_hook(): |
636 | - stop_service('myservice') |
637 | - def post_mount_hook(): |
638 | - start_service('myservice') |
639 | - |
640 | - if __name__ == '__main__': |
641 | - try: |
642 | - configure_volume(before_change=pre_mount_hook, |
643 | - after_change=post_mount_hook) |
644 | - except VolumeConfigurationError: |
645 | - log('Storage could not be configured', ERROR) |
646 | -''' |
647 | - |
648 | -# XXX: Known limitations |
649 | -# - fstab is neither consulted nor updated |
650 | - |
651 | -import os |
652 | -from charmhelpers.core import hookenv |
653 | -from charmhelpers.core import host |
654 | -import yaml |
655 | - |
656 | - |
657 | -MOUNT_BASE = '/srv/juju/volumes' |
658 | - |
659 | - |
660 | -class VolumeConfigurationError(Exception): |
661 | - '''Volume configuration data is missing or invalid''' |
662 | - pass |
663 | - |
664 | - |
665 | -def get_config(): |
666 | - '''Gather and sanity-check volume configuration data''' |
667 | - volume_config = {} |
668 | - config = hookenv.config() |
669 | - |
670 | - errors = False |
671 | - |
672 | - if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'): |
673 | - volume_config['ephemeral'] = True |
674 | - else: |
675 | - volume_config['ephemeral'] = False |
676 | - |
677 | - try: |
678 | - volume_map = yaml.safe_load(config.get('volume-map', '{}')) |
679 | - except yaml.YAMLError as e: |
680 | - hookenv.log("Error parsing YAML volume-map: {}".format(e), |
681 | - hookenv.ERROR) |
682 | - errors = True |
683 | - if volume_map is None: |
684 | - # probably an empty string |
685 | - volume_map = {} |
686 | - elif not isinstance(volume_map, dict): |
687 | - hookenv.log("Volume-map should be a dictionary, not {}".format( |
688 | - type(volume_map))) |
689 | - errors = True |
690 | - |
691 | - volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME']) |
692 | - if volume_config['device'] and volume_config['ephemeral']: |
693 | - # asked for ephemeral storage but also defined a volume ID |
694 | - hookenv.log('A volume is defined for this unit, but ephemeral ' |
695 | - 'storage was requested', hookenv.ERROR) |
696 | - errors = True |
697 | - elif not volume_config['device'] and not volume_config['ephemeral']: |
698 | - # asked for permanent storage but did not define volume ID |
699 | - hookenv.log('Ephemeral storage was requested, but there is no volume ' |
700 | - 'defined for this unit.', hookenv.ERROR) |
701 | - errors = True |
702 | - |
703 | - unit_mount_name = hookenv.local_unit().replace('/', '-') |
704 | - volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name) |
705 | - |
706 | - if errors: |
707 | - return None |
708 | - return volume_config |
709 | - |
710 | - |
711 | -def mount_volume(config): |
712 | - if os.path.exists(config['mountpoint']): |
713 | - if not os.path.isdir(config['mountpoint']): |
714 | - hookenv.log('Not a directory: {}'.format(config['mountpoint'])) |
715 | - raise VolumeConfigurationError() |
716 | - else: |
717 | - host.mkdir(config['mountpoint']) |
718 | - if os.path.ismount(config['mountpoint']): |
719 | - unmount_volume(config) |
720 | - if not host.mount(config['device'], config['mountpoint'], persist=True): |
721 | - raise VolumeConfigurationError() |
722 | - |
723 | - |
724 | -def unmount_volume(config): |
725 | - if os.path.ismount(config['mountpoint']): |
726 | - if not host.umount(config['mountpoint'], persist=True): |
727 | - raise VolumeConfigurationError() |
728 | - |
729 | - |
730 | -def managed_mounts(): |
731 | - '''List of all mounted managed volumes''' |
732 | - return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts()) |
733 | - |
734 | - |
735 | -def configure_volume(before_change=lambda: None, after_change=lambda: None): |
736 | - '''Set up storage (or don't) according to the charm's volume configuration. |
737 | - Returns the mount point or "ephemeral". before_change and after_change |
738 | - are optional functions to be called if the volume configuration changes. |
739 | - ''' |
740 | - |
741 | - config = get_config() |
742 | - if not config: |
743 | - hookenv.log('Failed to read volume configuration', hookenv.CRITICAL) |
744 | - raise VolumeConfigurationError() |
745 | - |
746 | - if config['ephemeral']: |
747 | - if os.path.ismount(config['mountpoint']): |
748 | - before_change() |
749 | - unmount_volume(config) |
750 | - after_change() |
751 | - return 'ephemeral' |
752 | - else: |
753 | - # persistent storage |
754 | - if os.path.ismount(config['mountpoint']): |
755 | - mounts = dict(managed_mounts()) |
756 | - if mounts.get(config['mountpoint']) != config['device']: |
757 | - before_change() |
758 | - unmount_volume(config) |
759 | - mount_volume(config) |
760 | - after_change() |
761 | - else: |
762 | - before_change() |
763 | - mount_volume(config) |
764 | - after_change() |
765 | - return config['mountpoint'] |
766 | |
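The removed `volumes.py` helper above validates the `volume-map` and `volume-ephemeral` charm options and rejects inconsistent combinations: a device mapped for the unit together with ephemeral storage, or persistent storage requested with no device mapped. A minimal, dependency-free sketch of that decision table follows; the function name, return shape, and the `/srv/juju/volumes` mount base are illustrative placeholders, not part of charm-helpers:

```python
import os


def validate_volume_config(ephemeral, device, unit_name,
                           mount_base='/srv/juju/volumes'):
    """Sketch of the consistency checks in the removed get_config().

    Returns a config dict, or None when the combination is invalid
    (mirroring how get_config() returned None on errors).
    """
    if device and ephemeral:
        # A concrete volume is mapped, yet ephemeral storage was requested.
        return None
    if not device and not ephemeral:
        # Persistent storage requested, but no volume mapped to this unit.
        return None
    return {
        'ephemeral': ephemeral,
        'device': device,
        # Unit names such as 'python-django/0' become 'python-django-0'.
        'mountpoint': os.path.join(mount_base, unit_name.replace('/', '-')),
    }
```

For example, `validate_volume_config(False, '/dev/vdb', 'python-django/0')` yields a mountpoint of `/srv/juju/volumes/python-django-0`, while either conflicting combination returns `None`.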
767 | === removed directory 'hooks/charmhelpers/contrib/hahelpers' |
768 | === removed file 'hooks/charmhelpers/contrib/hahelpers/__init__.py' |
769 | === removed file 'hooks/charmhelpers/contrib/hahelpers/apache.py' |
770 | --- hooks/charmhelpers/contrib/hahelpers/apache.py 2014-05-09 20:11:59 +0000 |
771 | +++ hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000 |
772 | @@ -1,59 +0,0 @@ |
773 | -# |
774 | -# Copyright 2012 Canonical Ltd. |
775 | -# |
776 | -# This file is sourced from lp:openstack-charm-helpers |
777 | -# |
778 | -# Authors: |
779 | -# James Page <james.page@ubuntu.com> |
780 | -# Adam Gandelman <adamg@ubuntu.com> |
781 | -# |
782 | - |
783 | -import subprocess |
784 | - |
785 | -from charmhelpers.core.hookenv import ( |
786 | - config as config_get, |
787 | - relation_get, |
788 | - relation_ids, |
789 | - related_units as relation_list, |
790 | - log, |
791 | - INFO, |
792 | -) |
793 | - |
794 | - |
795 | -def get_cert(): |
796 | - cert = config_get('ssl_cert') |
797 | - key = config_get('ssl_key') |
798 | - if not (cert and key): |
799 | - log("Inspecting identity-service relations for SSL certificate.", |
800 | - level=INFO) |
801 | - cert = key = None |
802 | - for r_id in relation_ids('identity-service'): |
803 | - for unit in relation_list(r_id): |
804 | - if not cert: |
805 | - cert = relation_get('ssl_cert', |
806 | - rid=r_id, unit=unit) |
807 | - if not key: |
808 | - key = relation_get('ssl_key', |
809 | - rid=r_id, unit=unit) |
810 | - return (cert, key) |
811 | - |
812 | - |
813 | -def get_ca_cert(): |
814 | - ca_cert = config_get('ssl_ca') |
815 | - if ca_cert is None: |
816 | - log("Inspecting identity-service relations for CA SSL certificate.", |
817 | - level=INFO) |
818 | - for r_id in relation_ids('identity-service'): |
819 | - for unit in relation_list(r_id): |
820 | - if ca_cert is None: |
821 | - ca_cert = relation_get('ca_cert', |
822 | - rid=r_id, unit=unit) |
823 | - return ca_cert |
824 | - |
825 | - |
826 | -def install_ca_cert(ca_cert): |
827 | - if ca_cert: |
828 | - with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt', |
829 | - 'w') as crt: |
830 | - crt.write(ca_cert) |
831 | - subprocess.check_call(['update-ca-certificates', '--fresh']) |
832 | |
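The removed `apache.py` helpers implement a simple fallback order: prefer `ssl_cert`/`ssl_key` from charm config, and only scan `identity-service` relation units when either value is missing. A standalone sketch of that lookup order, with plain dicts standing in for the hookenv config and relation APIs (the data shapes here are illustrative):

```python
def get_cert(config, relation_units):
    """Take cert/key from charm config when both are set; otherwise
    scan relation units, filling in whichever value is still missing.

    `relation_units` is a list of per-unit relation-data dicts.
    """
    cert = config.get('ssl_cert')
    key = config.get('ssl_key')
    if cert and key:
        return cert, key
    # Note: like the removed helper, partial config is discarded here
    # and both values are re-sought from relation data.
    cert = key = None
    for unit_data in relation_units:
        if not cert:
            cert = unit_data.get('ssl_cert')
        if not key:
            key = unit_data.get('ssl_key')
    return cert, key
```

One quirk worth noting: because the original resets both variables before scanning relations, a cert supplied only in charm config (without its key) is dropped rather than combined with a key from relation data.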
833 | === removed file 'hooks/charmhelpers/contrib/hahelpers/cluster.py' |
834 | --- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-05-09 20:11:59 +0000 |
835 | +++ hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000 |
836 | @@ -1,183 +0,0 @@ |
837 | -# |
838 | -# Copyright 2012 Canonical Ltd. |
839 | -# |
840 | -# Authors: |
841 | -# James Page <james.page@ubuntu.com> |
842 | -# Adam Gandelman <adamg@ubuntu.com> |
843 | -# |
844 | - |
845 | -import subprocess |
846 | -import os |
847 | - |
848 | -from socket import gethostname as get_unit_hostname |
849 | - |
850 | -from charmhelpers.core.hookenv import ( |
851 | - log, |
852 | - relation_ids, |
853 | - related_units as relation_list, |
854 | - relation_get, |
855 | - config as config_get, |
856 | - INFO, |
857 | - ERROR, |
858 | - unit_get, |
859 | -) |
860 | - |
861 | - |
862 | -class HAIncompleteConfig(Exception): |
863 | - pass |
864 | - |
865 | - |
866 | -def is_clustered(): |
867 | - for r_id in (relation_ids('ha') or []): |
868 | - for unit in (relation_list(r_id) or []): |
869 | - clustered = relation_get('clustered', |
870 | - rid=r_id, |
871 | - unit=unit) |
872 | - if clustered: |
873 | - return True |
874 | - return False |
875 | - |
876 | - |
877 | -def is_leader(resource): |
878 | - cmd = [ |
879 | - "crm", "resource", |
880 | - "show", resource |
881 | - ] |
882 | - try: |
883 | - status = subprocess.check_output(cmd) |
884 | - except subprocess.CalledProcessError: |
885 | - return False |
886 | - else: |
887 | - if get_unit_hostname() in status: |
888 | - return True |
889 | - else: |
890 | - return False |
891 | - |
892 | - |
893 | -def peer_units(): |
894 | - peers = [] |
895 | - for r_id in (relation_ids('cluster') or []): |
896 | - for unit in (relation_list(r_id) or []): |
897 | - peers.append(unit) |
898 | - return peers |
899 | - |
900 | - |
901 | -def oldest_peer(peers): |
902 | - local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1]) |
903 | - for peer in peers: |
904 | - remote_unit_no = int(peer.split('/')[1]) |
905 | - if remote_unit_no < local_unit_no: |
906 | - return False |
907 | - return True |
908 | - |
909 | - |
910 | -def eligible_leader(resource): |
911 | - if is_clustered(): |
912 | - if not is_leader(resource): |
913 | - log('Deferring action to CRM leader.', level=INFO) |
914 | - return False |
915 | - else: |
916 | - peers = peer_units() |
917 | - if peers and not oldest_peer(peers): |
918 | - log('Deferring action to oldest service unit.', level=INFO) |
919 | - return False |
920 | - return True |
921 | - |
922 | - |
923 | -def https(): |
924 | - ''' |
925 | - Determines whether enough data has been provided in configuration |
926 | - or relation data to configure HTTPS |
927 | - . |
928 | - returns: boolean |
929 | - ''' |
930 | - if config_get('use-https') == "yes": |
931 | - return True |
932 | - if config_get('ssl_cert') and config_get('ssl_key'): |
933 | - return True |
934 | - for r_id in relation_ids('identity-service'): |
935 | - for unit in relation_list(r_id): |
936 | - rel_state = [ |
937 | - relation_get('https_keystone', rid=r_id, unit=unit), |
938 | - relation_get('ssl_cert', rid=r_id, unit=unit), |
939 | - relation_get('ssl_key', rid=r_id, unit=unit), |
940 | - relation_get('ca_cert', rid=r_id, unit=unit), |
941 | - ] |
942 | - # NOTE: works around (LP: #1203241) |
943 | - if (None not in rel_state) and ('' not in rel_state): |
944 | - return True |
945 | - return False |
946 | - |
947 | - |
948 | -def determine_api_port(public_port): |
949 | - ''' |
950 | - Determine correct API server listening port based on |
951 | - existence of HTTPS reverse proxy and/or haproxy. |
952 | - |
953 | - public_port: int: standard public port for given service |
954 | - |
955 | - returns: int: the correct listening port for the API service |
956 | - ''' |
957 | - i = 0 |
958 | - if len(peer_units()) > 0 or is_clustered(): |
959 | - i += 1 |
960 | - if https(): |
961 | - i += 1 |
962 | - return public_port - (i * 10) |
963 | - |
964 | - |
965 | -def determine_apache_port(public_port): |
966 | - ''' |
967 | - Description: Determine correct apache listening port based on public IP + |
968 | - state of the cluster. |
969 | - |
970 | - public_port: int: standard public port for given service |
971 | - |
972 | - returns: int: the correct listening port for the HAProxy service |
973 | - ''' |
974 | - i = 0 |
975 | - if len(peer_units()) > 0 or is_clustered(): |
976 | - i += 1 |
977 | - return public_port - (i * 10) |
978 | - |
979 | - |
980 | -def get_hacluster_config(): |
981 | - ''' |
982 | - Obtains all relevant configuration from charm configuration required |
983 | - for initiating a relation to hacluster: |
984 | - |
985 | - ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr |
986 | - |
987 | - returns: dict: A dict containing settings keyed by setting name. |
988 | - raises: HAIncompleteConfig if settings are missing. |
989 | - ''' |
990 | - settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr'] |
991 | - conf = {} |
992 | - for setting in settings: |
993 | - conf[setting] = config_get(setting) |
994 | - missing = [] |
995 | - [missing.append(s) for s, v in conf.iteritems() if v is None] |
996 | - if missing: |
997 | - log('Insufficient config data to configure hacluster.', level=ERROR) |
998 | - raise HAIncompleteConfig |
999 | - return conf |
1000 | - |
1001 | - |
1002 | -def canonical_url(configs, vip_setting='vip'): |
1003 | - ''' |
1004 | - Returns the correct HTTP URL to this host given the state of HTTPS |
1005 | - configuration and hacluster. |
1006 | - |
1007 | - :configs : OSTemplateRenderer: A config tempating object to inspect for |
1008 | - a complete https context. |
1009 | - :vip_setting: str: Setting in charm config that specifies |
1010 | - VIP address. |
1011 | - ''' |
1012 | - scheme = 'http' |
1013 | - if 'https' in configs.complete_contexts(): |
1014 | - scheme = 'https' |
1015 | - if is_clustered(): |
1016 | - addr = config_get(vip_setting) |
1017 | - else: |
1018 | - addr = unit_get('private-address') |
1019 | - return '%s://%s' % (scheme, addr) |
1020 | |
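The `determine_api_port()` and `determine_apache_port()` helpers removed above both derive a listening port by stepping the public port down in units of 10: one step when the service has peers or is clustered (haproxy sits in front), and, for the API port only, a second step when HTTPS termination is active. A pure-function sketch of that arithmetic, with boolean parameters standing in for the peer/cluster and HTTPS checks (the parameter names are illustrative):

```python
def determine_api_port(public_port, has_peers=False, https=False):
    """Step the public port down by 10 per proxy layer in front of the API."""
    steps = int(has_peers) + int(https)
    return public_port - steps * 10


def determine_apache_port(public_port, has_peers=False):
    """The apache/haproxy front end only steps down for clustering."""
    return public_port - int(has_peers) * 10
```

For instance, a clustered service with HTTPS and a public port of 9696 would have its API process listen on 9676, with apache on 9686 and haproxy on 9696 itself.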
1021 | === removed directory 'hooks/charmhelpers/contrib/jujugui' |
1022 | === removed file 'hooks/charmhelpers/contrib/jujugui/__init__.py' |
1023 | === removed file 'hooks/charmhelpers/contrib/jujugui/utils.py' |
1024 | --- hooks/charmhelpers/contrib/jujugui/utils.py 2013-11-26 17:12:54 +0000 |
1025 | +++ hooks/charmhelpers/contrib/jujugui/utils.py 1970-01-01 00:00:00 +0000 |
1026 | @@ -1,602 +0,0 @@ |
1027 | -"""Juju GUI charm utilities.""" |
1028 | - |
1029 | -__all__ = [ |
1030 | - 'AGENT', |
1031 | - 'APACHE', |
1032 | - 'API_PORT', |
1033 | - 'CURRENT_DIR', |
1034 | - 'HAPROXY', |
1035 | - 'IMPROV', |
1036 | - 'JUJU_DIR', |
1037 | - 'JUJU_GUI_DIR', |
1038 | - 'JUJU_GUI_SITE', |
1039 | - 'JUJU_PEM', |
1040 | - 'WEB_PORT', |
1041 | - 'bzr_checkout', |
1042 | - 'chain', |
1043 | - 'cmd_log', |
1044 | - 'fetch_api', |
1045 | - 'fetch_gui', |
1046 | - 'find_missing_packages', |
1047 | - 'first_path_in_dir', |
1048 | - 'get_api_address', |
1049 | - 'get_npm_cache_archive_url', |
1050 | - 'get_release_file_url', |
1051 | - 'get_staging_dependencies', |
1052 | - 'get_zookeeper_address', |
1053 | - 'legacy_juju', |
1054 | - 'log_hook', |
1055 | - 'merge', |
1056 | - 'parse_source', |
1057 | - 'prime_npm_cache', |
1058 | - 'render_to_file', |
1059 | - 'save_or_create_certificates', |
1060 | - 'setup_apache', |
1061 | - 'setup_gui', |
1062 | - 'start_agent', |
1063 | - 'start_gui', |
1064 | - 'start_improv', |
1065 | - 'write_apache_config', |
1066 | -] |
1067 | - |
1068 | -from contextlib import contextmanager |
1069 | -import errno |
1070 | -import json |
1071 | -import os |
1072 | -import logging |
1073 | -import shutil |
1074 | -from subprocess import CalledProcessError |
1075 | -import tempfile |
1076 | -from urlparse import urlparse |
1077 | - |
1078 | -import apt |
1079 | -import tempita |
1080 | - |
1081 | -from launchpadlib.launchpad import Launchpad |
1082 | -from shelltoolbox import ( |
1083 | - Serializer, |
1084 | - apt_get_install, |
1085 | - command, |
1086 | - environ, |
1087 | - install_extra_repositories, |
1088 | - run, |
1089 | - script_name, |
1090 | - search_file, |
1091 | - su, |
1092 | -) |
1093 | -from charmhelpers.core.host import ( |
1094 | - service_start, |
1095 | -) |
1096 | -from charmhelpers.core.hookenv import ( |
1097 | - log, |
1098 | - config, |
1099 | - unit_get, |
1100 | -) |
1101 | - |
1102 | - |
1103 | -AGENT = 'juju-api-agent' |
1104 | -APACHE = 'apache2' |
1105 | -IMPROV = 'juju-api-improv' |
1106 | -HAPROXY = 'haproxy' |
1107 | - |
1108 | -API_PORT = 8080 |
1109 | -WEB_PORT = 8000 |
1110 | - |
1111 | -CURRENT_DIR = os.getcwd() |
1112 | -JUJU_DIR = os.path.join(CURRENT_DIR, 'juju') |
1113 | -JUJU_GUI_DIR = os.path.join(CURRENT_DIR, 'juju-gui') |
1114 | -JUJU_GUI_SITE = '/etc/apache2/sites-available/juju-gui' |
1115 | -JUJU_GUI_PORTS = '/etc/apache2/ports.conf' |
1116 | -JUJU_PEM = 'juju.includes-private-key.pem' |
1117 | -BUILD_REPOSITORIES = ('ppa:chris-lea/node.js-legacy',) |
1118 | -DEB_BUILD_DEPENDENCIES = ( |
1119 | - 'bzr', 'imagemagick', 'make', 'nodejs', 'npm', |
1120 | -) |
1121 | -DEB_STAGE_DEPENDENCIES = ( |
1122 | - 'zookeeper', |
1123 | -) |
1124 | - |
1125 | - |
1126 | -# Store the configuration from on invocation to the next. |
1127 | -config_json = Serializer('/tmp/config.json') |
1128 | -# Bazaar checkout command. |
1129 | -bzr_checkout = command('bzr', 'co', '--lightweight') |
1130 | -# Whether or not the charm is deployed using juju-core. |
1131 | -# If juju-core has been used to deploy the charm, an agent.conf file must |
1132 | -# be present in the charm parent directory. |
1133 | -legacy_juju = lambda: not os.path.exists( |
1134 | - os.path.join(CURRENT_DIR, '..', 'agent.conf')) |
1135 | - |
1136 | - |
1137 | -def _get_build_dependencies(): |
1138 | - """Install deb dependencies for building.""" |
1139 | - log('Installing build dependencies.') |
1140 | - cmd_log(install_extra_repositories(*BUILD_REPOSITORIES)) |
1141 | - cmd_log(apt_get_install(*DEB_BUILD_DEPENDENCIES)) |
1142 | - |
1143 | - |
1144 | -def get_api_address(unit_dir): |
1145 | - """Return the Juju API address stored in the uniter agent.conf file.""" |
1146 | - import yaml # python-yaml is only installed if juju-core is used. |
1147 | - # XXX 2013-03-27 frankban bug=1161443: |
1148 | - # currently the uniter agent.conf file does not include the API |
1149 | - # address. For now retrieve it from the machine agent file. |
1150 | - base_dir = os.path.abspath(os.path.join(unit_dir, '..')) |
1151 | - for dirname in os.listdir(base_dir): |
1152 | - if dirname.startswith('machine-'): |
1153 | - agent_conf = os.path.join(base_dir, dirname, 'agent.conf') |
1154 | - break |
1155 | - else: |
1156 | - raise IOError('Juju agent configuration file not found.') |
1157 | - contents = yaml.load(open(agent_conf)) |
1158 | - return contents['apiinfo']['addrs'][0] |
1159 | - |
1160 | - |
1161 | -def get_staging_dependencies(): |
1162 | - """Install deb dependencies for the stage (improv) environment.""" |
1163 | - log('Installing stage dependencies.') |
1164 | - cmd_log(apt_get_install(*DEB_STAGE_DEPENDENCIES)) |
1165 | - |
1166 | - |
1167 | -def first_path_in_dir(directory): |
1168 | - """Return the full path of the first file/dir in *directory*.""" |
1169 | - return os.path.join(directory, os.listdir(directory)[0]) |
1170 | - |
1171 | - |
1172 | -def _get_by_attr(collection, attr, value): |
1173 | - """Return the first item in collection having attr == value. |
1174 | - |
1175 | - Return None if the item is not found. |
1176 | - """ |
1177 | - for item in collection: |
1178 | - if getattr(item, attr) == value: |
1179 | - return item |
1180 | - |
1181 | - |
1182 | -def get_release_file_url(project, series_name, release_version): |
1183 | - """Return the URL of the release file hosted in Launchpad. |
1184 | - |
1185 | - The returned URL points to a release file for the given project, series |
1186 | - name and release version. |
1187 | - The argument *project* is a project object as returned by launchpadlib. |
1188 | - The arguments *series_name* and *release_version* are strings. If |
1189 | - *release_version* is None, the URL of the latest release will be returned. |
1190 | - """ |
1191 | - series = _get_by_attr(project.series, 'name', series_name) |
1192 | - if series is None: |
1193 | - raise ValueError('%r: series not found' % series_name) |
1194 | - # Releases are returned by Launchpad in reverse date order. |
1195 | - releases = list(series.releases) |
1196 | - if not releases: |
1197 | - raise ValueError('%r: series does not contain releases' % series_name) |
1198 | - if release_version is not None: |
1199 | - release = _get_by_attr(releases, 'version', release_version) |
1200 | - if release is None: |
1201 | - raise ValueError('%r: release not found' % release_version) |
1202 | - releases = [release] |
1203 | - for release in releases: |
1204 | - for file_ in release.files: |
1205 | - if str(file_).endswith('.tgz'): |
1206 | - return file_.file_link |
1207 | - raise ValueError('%r: file not found' % release_version) |
1208 | - |
1209 | - |
1210 | -def get_zookeeper_address(agent_file_path): |
1211 | - """Retrieve the Zookeeper address contained in the given *agent_file_path*. |
1212 | - |
1213 | - The *agent_file_path* is a path to a file containing a line similar to the |
1214 | - following:: |
1215 | - |
1216 | - env JUJU_ZOOKEEPER="address" |
1217 | - """ |
1218 | - line = search_file('JUJU_ZOOKEEPER', agent_file_path).strip() |
1219 | - return line.split('=')[1].strip('"') |
1220 | - |
1221 | - |
1222 | -@contextmanager |
1223 | -def log_hook(): |
1224 | - """Log when a hook starts and stops its execution. |
1225 | - |
1226 | - Also log to stdout possible CalledProcessError exceptions raised executing |
1227 | - the hook. |
1228 | - """ |
1229 | - script = script_name() |
1230 | - log(">>> Entering {}".format(script)) |
1231 | - try: |
1232 | - yield |
1233 | - except CalledProcessError as err: |
1234 | - log('Exception caught:') |
1235 | - log(err.output) |
1236 | - raise |
1237 | - finally: |
1238 | - log("<<< Exiting {}".format(script)) |
1239 | - |
1240 | - |
1241 | -def parse_source(source): |
1242 | - """Parse the ``juju-gui-source`` option. |
1243 | - |
1244 | - Return a tuple of two elements representing info on how to deploy Juju GUI. |
1245 | - Examples: |
1246 | - - ('stable', None): latest stable release; |
1247 | - - ('stable', '0.1.0'): stable release v0.1.0; |
1248 | - - ('trunk', None): latest trunk release; |
1249 | - - ('trunk', '0.1.0+build.1'): trunk release v0.1.0 bzr revision 1; |
1250 | - - ('branch', 'lp:juju-gui'): release is made from a branch; |
1251 | - - ('url', 'http://example.com/gui'): release from a downloaded file. |
1252 | - """ |
1253 | - if source.startswith('url:'): |
1254 | - source = source[4:] |
1255 | - # Support file paths, including relative paths. |
1256 | - if urlparse(source).scheme == '': |
1257 | - if not source.startswith('/'): |
1258 | - source = os.path.join(os.path.abspath(CURRENT_DIR), source) |
1259 | - source = "file://%s" % source |
1260 | - return 'url', source |
1261 | - if source in ('stable', 'trunk'): |
1262 | - return source, None |
1263 | - if source.startswith('lp:') or source.startswith('http://'): |
1264 | - return 'branch', source |
1265 | - if 'build' in source: |
1266 | - return 'trunk', source |
1267 | - return 'stable', source |
1268 | - |
1269 | - |
1270 | -def render_to_file(template_name, context, destination): |
1271 | - """Render the given *template_name* into *destination* using *context*. |
1272 | - |
1273 | - The tempita template language is used to render contents |
1274 | - (see http://pythonpaste.org/tempita/). |
1275 | - The argument *template_name* is the name or path of the template file: |
1276 | - it may be either a path relative to ``../config`` or an absolute path. |
1277 | - The argument *destination* is a file path. |
1278 | - The argument *context* is a dict-like object. |
1279 | - """ |
1280 | - template_path = os.path.abspath(template_name) |
1281 | - template = tempita.Template.from_filename(template_path) |
1282 | - with open(destination, 'w') as stream: |
1283 | - stream.write(template.substitute(context)) |
1284 | - |
1285 | - |
1286 | -results_log = None |
1287 | - |
1288 | - |
1289 | -def _setupLogging(): |
1290 | - global results_log |
1291 | - if results_log is not None: |
1292 | - return |
1293 | - cfg = config() |
1294 | - logging.basicConfig( |
1295 | - filename=cfg['command-log-file'], |
1296 | - level=logging.INFO, |
1297 | - format="%(asctime)s: %(name)s@%(levelname)s %(message)s") |
1298 | - results_log = logging.getLogger('juju-gui') |
1299 | - |
1300 | - |
1301 | -def cmd_log(results): |
1302 | - global results_log |
1303 | - if not results: |
1304 | - return |
1305 | - if results_log is None: |
1306 | - _setupLogging() |
1307 | - # Since 'results' may be multi-line output, start it on a separate line |
1308 | - # from the logger timestamp, etc. |
1309 | - results_log.info('\n' + results) |
1310 | - |
1311 | - |
1312 | -def start_improv(staging_env, ssl_cert_path, |
1313 | - config_path='/etc/init/juju-api-improv.conf'): |
1314 | - """Start a simulated juju environment using ``improv.py``.""" |
1315 | - log('Setting up staging start up script.') |
1316 | - context = { |
1317 | - 'juju_dir': JUJU_DIR, |
1318 | - 'keys': ssl_cert_path, |
1319 | - 'port': API_PORT, |
1320 | - 'staging_env': staging_env, |
1321 | - } |
1322 | - render_to_file('config/juju-api-improv.conf.template', context, config_path) |
1323 | - log('Starting the staging backend.') |
1324 | - with su('root'): |
1325 | - service_start(IMPROV) |
1326 | - |
1327 | - |
1328 | -def start_agent( |
1329 | - ssl_cert_path, config_path='/etc/init/juju-api-agent.conf', |
1330 | - read_only=False): |
1331 | - """Start the Juju agent and connect to the current environment.""" |
1332 | - # Retrieve the Zookeeper address from the start up script. |
1333 | - unit_dir = os.path.realpath(os.path.join(CURRENT_DIR, '..')) |
1334 | - agent_file = '/etc/init/juju-{0}.conf'.format(os.path.basename(unit_dir)) |
1335 | - zookeeper = get_zookeeper_address(agent_file) |
1336 | - log('Setting up API agent start up script.') |
1337 | - context = { |
1338 | - 'juju_dir': JUJU_DIR, |
1339 | - 'keys': ssl_cert_path, |
1340 | - 'port': API_PORT, |
1341 | - 'zookeeper': zookeeper, |
1342 | - 'read_only': read_only |
1343 | - } |
1344 | - render_to_file('config/juju-api-agent.conf.template', context, config_path) |
1345 | - log('Starting API agent.') |
1346 | - with su('root'): |
1347 | - service_start(AGENT) |
1348 | - |
1349 | - |
1350 | -def start_gui( |
1351 | - console_enabled, login_help, readonly, in_staging, ssl_cert_path, |
1352 | - charmworld_url, serve_tests, haproxy_path='/etc/haproxy/haproxy.cfg', |
1353 | - config_js_path=None, secure=True, sandbox=False): |
1354 | - """Set up and start the Juju GUI server.""" |
1355 | - with su('root'): |
1356 | - run('chown', '-R', 'ubuntu:', JUJU_GUI_DIR) |
1357 | - # XXX 2013-02-05 frankban bug=1116320: |
1358 | - # External insecure resources are still loaded when testing in the |
1359 | - # debug environment. For now, switch to the production environment if |
1360 | - # the charm is configured to serve tests. |
1361 | - if in_staging and not serve_tests: |
1362 | - build_dirname = 'build-debug' |
1363 | - else: |
1364 | - build_dirname = 'build-prod' |
1365 | - build_dir = os.path.join(JUJU_GUI_DIR, build_dirname) |
1366 | - log('Generating the Juju GUI configuration file.') |
1367 | - is_legacy_juju = legacy_juju() |
1368 | - user, password = None, None |
1369 | - if (is_legacy_juju and in_staging) or sandbox: |
1370 | - user, password = 'admin', 'admin' |
1371 | - else: |
1372 | - user, password = None, None |
1373 | - |
1374 | - api_backend = 'python' if is_legacy_juju else 'go' |
1375 | - if secure: |
1376 | - protocol = 'wss' |
1377 | - else: |
1378 | - log('Running in insecure mode! Port 80 will serve unencrypted.') |
1379 | - protocol = 'ws' |
1380 | - |
1381 | - context = { |
1382 | - 'raw_protocol': protocol, |
1383 | - 'address': unit_get('public-address'), |
1384 | - 'console_enabled': json.dumps(console_enabled), |
1385 | - 'login_help': json.dumps(login_help), |
1386 | - 'password': json.dumps(password), |
1387 | - 'api_backend': json.dumps(api_backend), |
1388 | - 'readonly': json.dumps(readonly), |
1389 | - 'user': json.dumps(user), |
1390 | - 'protocol': json.dumps(protocol), |
1391 | - 'sandbox': json.dumps(sandbox), |
1392 | - 'charmworld_url': json.dumps(charmworld_url), |
1393 | - } |
1394 | - if config_js_path is None: |
1395 | - config_js_path = os.path.join( |
1396 | - build_dir, 'juju-ui', 'assets', 'config.js') |
1397 | - render_to_file('config/config.js.template', context, config_js_path) |
1398 | - |
1399 | - write_apache_config(build_dir, serve_tests) |
1400 | - |
1401 | - log('Generating haproxy configuration file.') |
1402 | - if is_legacy_juju: |
1403 | - # The PyJuju API agent is listening on localhost. |
1404 | - api_address = '127.0.0.1:{0}'.format(API_PORT) |
1405 | - else: |
1406 | - # Retrieve the juju-core API server address. |
1407 | - api_address = get_api_address(os.path.join(CURRENT_DIR, '..')) |
1408 | - context = { |
1409 | - 'api_address': api_address, |
1410 | - 'api_pem': JUJU_PEM, |
1411 | - 'legacy_juju': is_legacy_juju, |
1412 | - 'ssl_cert_path': ssl_cert_path, |
1413 | - # In PyJuju environments, use the same certificate for both HTTPS and |
1414 | - # WebSocket connections. In juju-core the system already has the proper |
1415 | - # certificate installed. |
1416 | - 'web_pem': JUJU_PEM, |
1417 | - 'web_port': WEB_PORT, |
1418 | - 'secure': secure |
1419 | - } |
1420 | - render_to_file('config/haproxy.cfg.template', context, haproxy_path) |
1421 | - log('Starting Juju GUI.') |
1422 | - |
1423 | - |
1424 | -def write_apache_config(build_dir, serve_tests=False): |
1425 | - log('Generating the apache site configuration file.') |
1426 | - context = { |
1427 | - 'port': WEB_PORT, |
1428 | - 'serve_tests': serve_tests, |
1429 | - 'server_root': build_dir, |
1430 | - 'tests_root': os.path.join(JUJU_GUI_DIR, 'test', ''), |
1431 | - } |
1432 | - render_to_file('config/apache-ports.template', context, JUJU_GUI_PORTS) |
1433 | - render_to_file('config/apache-site.template', context, JUJU_GUI_SITE) |
1434 | - |
1435 | - |
1436 | -def get_npm_cache_archive_url(Launchpad=Launchpad): |
1437 | - """Figure out the URL of the most recent NPM cache archive on Launchpad.""" |
1438 | - launchpad = Launchpad.login_anonymously('Juju GUI charm', 'production') |
1439 | - project = launchpad.projects['juju-gui'] |
1440 | - # Find the URL of the most recently created NPM cache archive. |
1441 | - npm_cache_url = get_release_file_url(project, 'npm-cache', None) |
1442 | - return npm_cache_url |
1443 | - |
1444 | - |
1445 | -def prime_npm_cache(npm_cache_url): |
1446 | - """Download NPM cache archive and prime the NPM cache with it.""" |
1447 | - # Download the cache archive and then uncompress it into the NPM cache. |
1448 | - npm_cache_archive = os.path.join(CURRENT_DIR, 'npm-cache.tgz') |
1449 | - cmd_log(run('curl', '-L', '-o', npm_cache_archive, npm_cache_url)) |
1450 | - npm_cache_dir = os.path.expanduser('~/.npm') |
1451 | - # The NPM cache directory probably does not exist, so make it if not. |
1452 | - try: |
1453 | - os.mkdir(npm_cache_dir) |
1454 | - except OSError, e: |
1455 | - # If the directory already exists then ignore the error. |
1456 | - if e.errno != errno.EEXIST: # File exists. |
1457 | - raise |
1458 | - uncompress = command('tar', '-x', '-z', '-C', npm_cache_dir, '-f') |
1459 | - cmd_log(uncompress(npm_cache_archive)) |
1460 | - |
1461 | - |
1462 | -def fetch_gui(juju_gui_source, logpath): |
1463 | - """Retrieve the Juju GUI release/branch.""" |
1464 | - # Retrieve a Juju GUI release. |
1465 | - origin, version_or_branch = parse_source(juju_gui_source) |
1466 | - if origin == 'branch': |
1467 | - # Make sure we have the dependencies necessary for us to actually make |
1468 | - # a build. |
1469 | - _get_build_dependencies() |
1470 | - # Create a release starting from a branch. |
1471 | - juju_gui_source_dir = os.path.join(CURRENT_DIR, 'juju-gui-source') |
1472 | - log('Retrieving Juju GUI source checkout from %s.' % version_or_branch) |
1473 | - cmd_log(run('rm', '-rf', juju_gui_source_dir)) |
1474 | - cmd_log(bzr_checkout(version_or_branch, juju_gui_source_dir)) |
1475 | - log('Preparing a Juju GUI release.') |
1476 | - logdir = os.path.dirname(logpath) |
1477 | - fd, name = tempfile.mkstemp(prefix='make-distfile-', dir=logdir) |
1478 | - log('Output from "make distfile" sent to %s' % name) |
1479 | - with environ(NO_BZR='1'): |
1480 | - run('make', '-C', juju_gui_source_dir, 'distfile', |
1481 | - stdout=fd, stderr=fd) |
1482 | - release_tarball = first_path_in_dir( |
1483 | - os.path.join(juju_gui_source_dir, 'releases')) |
1484 | - else: |
1485 | - log('Retrieving Juju GUI release.') |
1486 | - if origin == 'url': |
1487 | - file_url = version_or_branch |
1488 | - else: |
1489 | - # Retrieve a release from Launchpad. |
1490 | - launchpad = Launchpad.login_anonymously( |
1491 | - 'Juju GUI charm', 'production') |
1492 | - project = launchpad.projects['juju-gui'] |
1493 | - file_url = get_release_file_url(project, origin, version_or_branch) |
1494 | - log('Downloading release file from %s.' % file_url) |
1495 | - release_tarball = os.path.join(CURRENT_DIR, 'release.tgz') |
1496 | - cmd_log(run('curl', '-L', '-o', release_tarball, file_url)) |
1497 | - return release_tarball |
1498 | - |
1499 | - |
1500 | -def fetch_api(juju_api_branch): |
1501 | - """Retrieve the Juju branch.""" |
1502 | - # Retrieve Juju API source checkout. |
1503 | - log('Retrieving Juju API source checkout.') |
1504 | - cmd_log(run('rm', '-rf', JUJU_DIR)) |
1505 | - cmd_log(bzr_checkout(juju_api_branch, JUJU_DIR)) |
1506 | - |
1507 | - |
1508 | -def setup_gui(release_tarball): |
1509 | - """Set up Juju GUI.""" |
1510 | - # Uncompress the release tarball. |
1511 | - log('Installing Juju GUI.') |
1512 | - release_dir = os.path.join(CURRENT_DIR, 'release') |
1513 | - cmd_log(run('rm', '-rf', release_dir)) |
1514 | - os.mkdir(release_dir) |
1515 | - uncompress = command('tar', '-x', '-z', '-C', release_dir, '-f') |
1516 | - cmd_log(uncompress(release_tarball)) |
1517 | - # Link the Juju GUI dir to the contents of the release tarball. |
1518 | - cmd_log(run('ln', '-sf', first_path_in_dir(release_dir), JUJU_GUI_DIR)) |
1519 | - |
1520 | - |
1521 | -def setup_apache(): |
1522 | - """Set up apache.""" |
1523 | - log('Setting up apache.') |
1524 | - if not os.path.exists(JUJU_GUI_SITE): |
1525 | - cmd_log(run('touch', JUJU_GUI_SITE)) |
1526 | - cmd_log(run('chown', 'ubuntu:', JUJU_GUI_SITE)) |
1527 | - cmd_log( |
1528 | - run('ln', '-s', JUJU_GUI_SITE, |
1529 | - '/etc/apache2/sites-enabled/juju-gui')) |
1530 | - |
1531 | - if not os.path.exists(JUJU_GUI_PORTS): |
1532 | - cmd_log(run('touch', JUJU_GUI_PORTS)) |
1533 | - cmd_log(run('chown', 'ubuntu:', JUJU_GUI_PORTS)) |
1534 | - |
1535 | - with su('root'): |
1536 | - run('a2dissite', 'default') |
1537 | - run('a2ensite', 'juju-gui') |
1538 | - |
1539 | - |
1540 | -def save_or_create_certificates( |
1541 | - ssl_cert_path, ssl_cert_contents, ssl_key_contents): |
1542 | - """Generate the SSL certificates. |
1543 | - |
1544 | - If both *ssl_cert_contents* and *ssl_key_contents* are provided, use them |
1545 | - as certificates; otherwise, generate them. |
1546 | - |
1547 | - Also create a pem file, suitable for use in the haproxy configuration, |
1548 | - concatenating the key and the certificate files. |
1549 | - """ |
1550 | - crt_path = os.path.join(ssl_cert_path, 'juju.crt') |
1551 | - key_path = os.path.join(ssl_cert_path, 'juju.key') |
1552 | - if not os.path.exists(ssl_cert_path): |
1553 | - os.makedirs(ssl_cert_path) |
1554 | - if ssl_cert_contents and ssl_key_contents: |
1555 | - # Save the provided certificates. |
1556 | - with open(crt_path, 'w') as cert_file: |
1557 | - cert_file.write(ssl_cert_contents) |
1558 | - with open(key_path, 'w') as key_file: |
1559 | - key_file.write(ssl_key_contents) |
1560 | - else: |
1561 | - # Generate certificates. |
1562 | - # See http://superuser.com/questions/226192/openssl-without-prompt |
1563 | - cmd_log(run( |
1564 | - 'openssl', 'req', '-new', '-newkey', 'rsa:4096', |
1565 | - '-days', '365', '-nodes', '-x509', '-subj', |
1566 | - # These are arbitrary test values for the certificate. |
1567 | - '/C=GB/ST=Juju/L=GUI/O=Ubuntu/CN=juju.ubuntu.com', |
1568 | - '-keyout', key_path, '-out', crt_path)) |
1569 | - # Generate the pem file. |
1570 | - pem_path = os.path.join(ssl_cert_path, JUJU_PEM) |
1571 | - if os.path.exists(pem_path): |
1572 | - os.remove(pem_path) |
1573 | - with open(pem_path, 'w') as pem_file: |
1574 | - shutil.copyfileobj(open(key_path), pem_file) |
1575 | - shutil.copyfileobj(open(crt_path), pem_file) |
1576 | - |
1577 | - |
1578 | -def find_missing_packages(*packages): |
1579 | - """Given a list of packages, return the packages which are not installed. |
1580 | - """ |
1581 | - cache = apt.Cache() |
1582 | - missing = set() |
1583 | - for pkg_name in packages: |
1584 | - try: |
1585 | - pkg = cache[pkg_name] |
1586 | - except KeyError: |
1587 | - missing.add(pkg_name) |
1588 | - continue |
1589 | - if pkg.is_installed: |
1590 | - continue |
1591 | - missing.add(pkg_name) |
1592 | - return missing |
1593 | - |
1594 | - |
1595 | -## Backend support decorators |
1596 | - |
1597 | -def chain(name): |
1598 | - """Helper method to compose a set of mixin objects into a callable. |
1599 | - |
1600 | - Each method is called in the context of its mixin instance, and its |
1601 | - argument is the Backend instance. |
1602 | - """ |
1603 | - # Chain method calls through all implementing mixins. |
1604 | - def method(self): |
1605 | - for mixin in self.mixins: |
1606 | - a_callable = getattr(type(mixin), name, None) |
1607 | - if a_callable: |
1608 | - a_callable(mixin, self) |
1609 | - |
1610 | - method.__name__ = name |
1611 | - return method |
1612 | - |
1613 | - |
1614 | -def merge(name): |
1615 | - """Helper to merge a property from a set of strategy objects |
1616 | - into a unified set. |
1617 | - """ |
1618 | - # Return merged property from every providing mixin as a set. |
1619 | - @property |
1620 | - def method(self): |
1621 | - result = set() |
1622 | - for mixin in self.mixins: |
1623 | - segment = getattr(type(mixin), name, None) |
1624 | - if segment and isinstance(segment, (list, tuple, set)): |
1625 | - result |= set(segment) |
1626 | - |
1627 | - return result |
1628 | - return method |
1629 | |
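The `chain` and `merge` decorators removed above compose a set of mixin objects into a single backend object: `chain` calls the named method on every mixin in order, and `merge` unions a list-like attribute across mixins into one set. A standalone sketch of that behaviour (the `AptMixin`/`GuiMixin`/`Backend` classes here are hypothetical stand-ins, not charm code):

```python
def chain(name):
    """Call method `name` on each mixin, passing the backend instance."""
    def method(self):
        for mixin in self.mixins:
            a_callable = getattr(type(mixin), name, None)
            if a_callable:
                a_callable(mixin, self)
    method.__name__ = name
    return method

def merge(name):
    """Union the `name` attribute of every providing mixin into a set."""
    @property
    def method(self):
        result = set()
        for mixin in self.mixins:
            segment = getattr(type(mixin), name, None)
            if segment and isinstance(segment, (list, tuple, set)):
                result |= set(segment)
        return result
    return method

class AptMixin:
    packages = ['curl']
    def install(self, backend):
        backend.log.append('apt:install')

class GuiMixin:
    packages = ['nginx']
    def install(self, backend):
        backend.log.append('gui:install')

class Backend:
    install = chain('install')    # runs every mixin's install()
    packages = merge('packages')  # union of every mixin's packages
    def __init__(self):
        self.mixins = [AptMixin(), GuiMixin()]
        self.log = []

backend = Backend()
backend.install()
print(backend.log)              # → ['apt:install', 'gui:install']
print(sorted(backend.packages)) # → ['curl', 'nginx']
```

Note that `chain` preserves the mixin order, so install steps run in the sequence the mixins are listed.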
1630 | === removed directory 'hooks/charmhelpers/contrib/network' |
1631 | === removed file 'hooks/charmhelpers/contrib/network/__init__.py' |
1632 | === removed file 'hooks/charmhelpers/contrib/network/ip.py' |
1633 | --- hooks/charmhelpers/contrib/network/ip.py 2014-05-09 20:11:59 +0000 |
1634 | +++ hooks/charmhelpers/contrib/network/ip.py 1970-01-01 00:00:00 +0000 |
1635 | @@ -1,69 +0,0 @@ |
1636 | -import sys |
1637 | - |
1638 | -from charmhelpers.fetch import apt_install |
1639 | -from charmhelpers.core.hookenv import ( |
1640 | - ERROR, log, |
1641 | -) |
1642 | - |
1643 | -try: |
1644 | - import netifaces |
1645 | -except ImportError: |
1646 | - apt_install('python-netifaces') |
1647 | - import netifaces |
1648 | - |
1649 | -try: |
1650 | - import netaddr |
1651 | -except ImportError: |
1652 | - apt_install('python-netaddr') |
1653 | - import netaddr |
1654 | - |
1655 | - |
1656 | -def _validate_cidr(network): |
1657 | - try: |
1658 | - netaddr.IPNetwork(network) |
1659 | - except (netaddr.core.AddrFormatError, ValueError): |
1660 | - raise ValueError("Network (%s) is not in CIDR presentation format" % |
1661 | - network) |
1662 | - |
1663 | - |
1664 | -def get_address_in_network(network, fallback=None, fatal=False): |
1665 | - """ |
1666 | - Get an IPv4 address within the network from the host. |
1667 | - |
1668 | - Args: |
1669 | - network (str): CIDR presentation format. For example, |
1670 | - '192.168.1.0/24'. |
1671 | - fallback (str): If no address is found, return fallback. |
1672 | - fatal (boolean): If no address is found, fallback is not |
1673 | - set, and fatal is True, then exit(1). |
1674 | - """ |
1675 | - |
1676 | - def not_found_error_out(): |
1677 | - log("No IP address found in network: %s" % network, |
1678 | - level=ERROR) |
1679 | - sys.exit(1) |
1680 | - |
1681 | - if network is None: |
1682 | - if fallback is not None: |
1683 | - return fallback |
1684 | - else: |
1685 | - if fatal: |
1686 | - not_found_error_out() |
1687 | - |
1688 | - _validate_cidr(network) |
1689 | - for iface in netifaces.interfaces(): |
1690 | - addresses = netifaces.ifaddresses(iface) |
1691 | - if netifaces.AF_INET in addresses: |
1692 | - addr = addresses[netifaces.AF_INET][0]['addr'] |
1693 | - netmask = addresses[netifaces.AF_INET][0]['netmask'] |
1694 | - cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask)) |
1695 | - if cidr in netaddr.IPNetwork(network): |
1696 | - return str(cidr.ip) |
1697 | - |
1698 | - if fallback is not None: |
1699 | - return fallback |
1700 | - |
1701 | - if fatal: |
1702 | - not_found_error_out() |
1703 | - |
1704 | - return None |
1705 | |
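The removed `get_address_in_network` helper walks each interface's (address, netmask) pairs and returns the first address that falls inside the requested CIDR, falling back when none match. The core membership test can be sketched with the stdlib `ipaddress` module instead of the `netifaces`/`netaddr` dependencies the real helper installs; `candidates` here stands in for the per-interface data that `netifaces` would supply:

```python
import ipaddress

def first_address_in_network(candidates, network, fallback=None):
    """Return the first candidate address inside `network`, else fallback.

    candidates: iterable of (address, netmask) pairs, e.g. from interface
    enumeration. network: CIDR string such as '192.168.1.0/24'.
    """
    net = ipaddress.ip_network(network)  # raises ValueError if not CIDR
    for addr, netmask in candidates:
        iface = ipaddress.ip_interface('%s/%s' % (addr, netmask))
        if iface.ip in net:
            return str(iface.ip)
    return fallback

print(first_address_in_network(
    [('10.0.0.5', '255.255.255.0'), ('192.168.1.20', '255.255.255.0')],
    '192.168.1.0/24'))  # → 192.168.1.20
```

Unlike the charm helper, this sketch has no `fatal`/`exit(1)` path; it simply returns the fallback (or `None`).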
1706 | === removed directory 'hooks/charmhelpers/contrib/network/ovs' |
1707 | === removed file 'hooks/charmhelpers/contrib/network/ovs/__init__.py' |
1708 | --- hooks/charmhelpers/contrib/network/ovs/__init__.py 2013-11-26 17:12:54 +0000 |
1709 | +++ hooks/charmhelpers/contrib/network/ovs/__init__.py 1970-01-01 00:00:00 +0000 |
1710 | @@ -1,75 +0,0 @@ |
1711 | -''' Helpers for interacting with OpenvSwitch ''' |
1712 | -import subprocess |
1713 | -import os |
1714 | -from charmhelpers.core.hookenv import ( |
1715 | - log, WARNING |
1716 | -) |
1717 | -from charmhelpers.core.host import ( |
1718 | - service |
1719 | -) |
1720 | - |
1721 | - |
1722 | -def add_bridge(name): |
1723 | - ''' Add the named bridge to openvswitch ''' |
1724 | - log('Creating bridge {}'.format(name)) |
1725 | - subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-br", name]) |
1726 | - |
1727 | - |
1728 | -def del_bridge(name): |
1729 | - ''' Delete the named bridge from openvswitch ''' |
1730 | - log('Deleting bridge {}'.format(name)) |
1731 | - subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-br", name]) |
1732 | - |
1733 | - |
1734 | -def add_bridge_port(name, port): |
1735 | - ''' Add a port to the named openvswitch bridge ''' |
1736 | - log('Adding port {} to bridge {}'.format(port, name)) |
1737 | - subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-port", |
1738 | - name, port]) |
1739 | - subprocess.check_call(["ip", "link", "set", port, "up"]) |
1740 | - |
1741 | - |
1742 | -def del_bridge_port(name, port): |
1743 | - ''' Delete a port from the named openvswitch bridge ''' |
1744 | - log('Deleting port {} from bridge {}'.format(port, name)) |
1745 | - subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-port", |
1746 | - name, port]) |
1747 | - subprocess.check_call(["ip", "link", "set", port, "down"]) |
1748 | - |
1749 | - |
1750 | -def set_manager(manager): |
1751 | - ''' Set the controller for the local openvswitch ''' |
1752 | - log('Setting manager for local ovs to {}'.format(manager)) |
1753 | - subprocess.check_call(['ovs-vsctl', 'set-manager', |
1754 | - 'ssl:{}'.format(manager)]) |
1755 | - |
1756 | - |
1757 | -CERT_PATH = '/etc/openvswitch/ovsclient-cert.pem' |
1758 | - |
1759 | - |
1760 | -def get_certificate(): |
1761 | - ''' Read openvswitch certificate from disk ''' |
1762 | - if os.path.exists(CERT_PATH): |
1763 | - log('Reading ovs certificate from {}'.format(CERT_PATH)) |
1764 | - with open(CERT_PATH, 'r') as cert: |
1765 | - full_cert = cert.read() |
1766 | - begin_marker = "-----BEGIN CERTIFICATE-----" |
1767 | - end_marker = "-----END CERTIFICATE-----" |
1768 | - begin_index = full_cert.find(begin_marker) |
1769 | - end_index = full_cert.rfind(end_marker) |
1770 | - if end_index == -1 or begin_index == -1: |
1771 | - raise RuntimeError("Certificate does not contain valid begin" |
1772 | - " and end markers.") |
1773 | - full_cert = full_cert[begin_index:(end_index + len(end_marker))] |
1774 | - return full_cert |
1775 | - else: |
1776 | - log('Certificate not found', level=WARNING) |
1777 | - return None |
1778 | - |
1779 | - |
1780 | -def full_restart(): |
1781 | - ''' Full restart and reload of openvswitch ''' |
1782 | - if os.path.exists('/etc/init/openvswitch-force-reload-kmod.conf'): |
1783 | - service('start', 'openvswitch-force-reload-kmod') |
1784 | - else: |
1785 | - service('force-reload-kmod', 'openvswitch-switch') |
1786 | |
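The removed `get_certificate` helper trims an Open vSwitch PEM file down to the span between the BEGIN/END certificate markers, raising if either marker is missing. That slicing logic is self-contained and can be exercised on its own (the sample `pem` string is illustrative, not a real certificate):

```python
BEGIN = "-----BEGIN CERTIFICATE-----"
END = "-----END CERTIFICATE-----"

def extract_certificate(full_cert):
    """Return the substring from the BEGIN marker through the END marker."""
    begin_index = full_cert.find(BEGIN)
    end_index = full_cert.rfind(END)
    if end_index == -1 or begin_index == -1:
        raise RuntimeError("Certificate does not contain valid begin"
                           " and end markers.")
    return full_cert[begin_index:end_index + len(END)]

pem = "junk header\n" + BEGIN + "\nMIIB...base64...\n" + END + "\ntrailing junk"
cert = extract_certificate(pem)
print(cert.startswith(BEGIN) and cert.endswith(END))  # → True
```

Using `rfind` for the end marker means the slice runs to the *last* END marker, so a file holding a chain of certificates is returned whole.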
1787 | === removed directory 'hooks/charmhelpers/contrib/openstack' |
1788 | === removed file 'hooks/charmhelpers/contrib/openstack/__init__.py' |
1789 | === removed file 'hooks/charmhelpers/contrib/openstack/alternatives.py' |
1790 | --- hooks/charmhelpers/contrib/openstack/alternatives.py 2013-11-26 17:12:54 +0000 |
1791 | +++ hooks/charmhelpers/contrib/openstack/alternatives.py 1970-01-01 00:00:00 +0000 |
1792 | @@ -1,17 +0,0 @@ |
1793 | -''' Helper for managing alternatives for file conflict resolution ''' |
1794 | - |
1795 | -import subprocess |
1796 | -import shutil |
1797 | -import os |
1798 | - |
1799 | - |
1800 | -def install_alternative(name, target, source, priority=50): |
1801 | - ''' Install alternative configuration ''' |
1802 | - if (os.path.exists(target) and not os.path.islink(target)): |
1803 | - # Move existing file/directory away before installing |
1804 | - shutil.move(target, '{}.bak'.format(target)) |
1805 | - cmd = [ |
1806 | - 'update-alternatives', '--force', '--install', |
1807 | - target, name, source, str(priority) |
1808 | - ] |
1809 | - subprocess.check_call(cmd) |
1810 | |
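The removed `install_alternative` helper registers a file with Debian's `update-alternatives`, first moving any existing non-symlink target aside to `<target>.bak`. A minimal sketch of the command it builds (the MySQL paths below are hypothetical examples, not from the charm):

```python
def alternative_cmd(name, target, source, priority=50):
    """Build the update-alternatives invocation used by the removed helper.

    The real helper runs this via subprocess.check_call after backing up
    any pre-existing regular file at `target`.
    """
    return ['update-alternatives', '--force', '--install',
            target, name, source, str(priority)]

# Hypothetical paths for illustration only.
print(alternative_cmd('my.cnf', '/etc/mysql/my.cnf',
                      '/etc/mysql/charm.cnf', 25))
```

`--force` lets the registration replace a target that is not already managed by the alternatives system, which is why the helper moves the original file out of the way first.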
1811 | === removed file 'hooks/charmhelpers/contrib/openstack/context.py' |
1812 | --- hooks/charmhelpers/contrib/openstack/context.py 2014-05-09 20:11:59 +0000 |
1813 | +++ hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000 |
1814 | @@ -1,700 +0,0 @@ |
1815 | -import json |
1816 | -import os |
1817 | -import time |
1818 | - |
1819 | -from base64 import b64decode |
1820 | - |
1821 | -from subprocess import ( |
1822 | - check_call |
1823 | -) |
1824 | - |
1825 | - |
1826 | -from charmhelpers.fetch import ( |
1827 | - apt_install, |
1828 | - filter_installed_packages, |
1829 | -) |
1830 | - |
1831 | -from charmhelpers.core.hookenv import ( |
1832 | - config, |
1833 | - local_unit, |
1834 | - log, |
1835 | - relation_get, |
1836 | - relation_ids, |
1837 | - related_units, |
1838 | - unit_get, |
1839 | - unit_private_ip, |
1840 | - ERROR, |
1841 | -) |
1842 | - |
1843 | -from charmhelpers.contrib.hahelpers.cluster import ( |
1844 | - determine_apache_port, |
1845 | - determine_api_port, |
1846 | - https, |
1847 | - is_clustered |
1848 | -) |
1849 | - |
1850 | -from charmhelpers.contrib.hahelpers.apache import ( |
1851 | - get_cert, |
1852 | - get_ca_cert, |
1853 | -) |
1854 | - |
1855 | -from charmhelpers.contrib.openstack.neutron import ( |
1856 | - neutron_plugin_attribute, |
1857 | -) |
1858 | - |
1859 | -CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' |
1860 | - |
1861 | - |
1862 | -class OSContextError(Exception): |
1863 | - pass |
1864 | - |
1865 | - |
1866 | -def ensure_packages(packages): |
1867 | - '''Install but do not upgrade required plugin packages''' |
1868 | - required = filter_installed_packages(packages) |
1869 | - if required: |
1870 | - apt_install(required, fatal=True) |
1871 | - |
1872 | - |
1873 | -def context_complete(ctxt): |
1874 | - _missing = [] |
1875 | - for k, v in ctxt.iteritems(): |
1876 | - if v is None or v == '': |
1877 | - _missing.append(k) |
1878 | - if _missing: |
1879 | - log('Missing required data: %s' % ' '.join(_missing), level='INFO') |
1880 | - return False |
1881 | - return True |
1882 | - |
1883 | - |
1884 | -def config_flags_parser(config_flags): |
1885 | - if config_flags.find('==') >= 0: |
1886 | - log("config_flags is not in expected format (key=value)", |
1887 | - level=ERROR) |
1888 | - raise OSContextError |
1889 | - # strip the following from each value. |
1890 | - post_strippers = ' ,' |
1891 | - # we strip any leading/trailing '=' or ' ' from the string then |
1892 | - # split on '='. |
1893 | - split = config_flags.strip(' =').split('=') |
1894 | - limit = len(split) |
1895 | - flags = {} |
1896 | - for i in xrange(0, limit - 1): |
1897 | - current = split[i] |
1898 | - next = split[i + 1] |
1899 | - vindex = next.rfind(',') |
1900 | - if (i == limit - 2) or (vindex < 0): |
1901 | - value = next |
1902 | - else: |
1903 | - value = next[:vindex] |
1904 | - |
1905 | - if i == 0: |
1906 | - key = current |
1907 | - else: |
1908 | - # if this not the first entry, expect an embedded key. |
1909 | - index = current.rfind(',') |
1910 | - if index < 0: |
1911 | - log("invalid config value(s) at index %s" % (i), |
1912 | - level=ERROR) |
1913 | - raise OSContextError |
1914 | - key = current[index + 1:] |
1915 | - |
1916 | - # Add to collection. |
1917 | - flags[key.strip(post_strippers)] = value.rstrip(post_strippers) |
1918 | - return flags |
1919 | - |
1920 | - |
1921 | -class OSContextGenerator(object): |
1922 | - interfaces = [] |
1923 | - |
1924 | - def __call__(self): |
1925 | - raise NotImplementedError |
1926 | - |
1927 | - |
1928 | -class SharedDBContext(OSContextGenerator): |
1929 | - interfaces = ['shared-db'] |
1930 | - |
1931 | - def __init__(self, |
1932 | - database=None, user=None, relation_prefix=None, ssl_dir=None): |
1933 | - ''' |
1934 | - Allows inspecting relation for settings prefixed with relation_prefix. |
1935 | - This is useful for parsing access for multiple databases returned via |
1936 | - the shared-db interface (eg, nova_password, quantum_password) |
1937 | - ''' |
1938 | - self.relation_prefix = relation_prefix |
1939 | - self.database = database |
1940 | - self.user = user |
1941 | - self.ssl_dir = ssl_dir |
1942 | - |
1943 | - def __call__(self): |
1944 | - self.database = self.database or config('database') |
1945 | - self.user = self.user or config('database-user') |
1946 | - if None in [self.database, self.user]: |
1947 | - log('Could not generate shared_db context. ' |
1948 | - 'Missing required charm config options. ' |
1949 | - '(database name and user)') |
1950 | - raise OSContextError |
1951 | - ctxt = {} |
1952 | - |
1953 | - password_setting = 'password' |
1954 | - if self.relation_prefix: |
1955 | - password_setting = self.relation_prefix + '_password' |
1956 | - |
1957 | - for rid in relation_ids('shared-db'): |
1958 | - for unit in related_units(rid): |
1959 | - rdata = relation_get(rid=rid, unit=unit) |
1960 | - ctxt = { |
1961 | - 'database_host': rdata.get('db_host'), |
1962 | - 'database': self.database, |
1963 | - 'database_user': self.user, |
1964 | - 'database_password': rdata.get(password_setting), |
1965 | - 'database_type': 'mysql' |
1966 | - } |
1967 | - if context_complete(ctxt): |
1968 | - db_ssl(rdata, ctxt, self.ssl_dir) |
1969 | - return ctxt |
1970 | - return {} |
1971 | - |
1972 | - |
1973 | -class PostgresqlDBContext(OSContextGenerator): |
1974 | - interfaces = ['pgsql-db'] |
1975 | - |
1976 | - def __init__(self, database=None): |
1977 | - self.database = database |
1978 | - |
1979 | - def __call__(self): |
1980 | - self.database = self.database or config('database') |
1981 | - if self.database is None: |
1982 | - log('Could not generate postgresql_db context. ' |
1983 | - 'Missing required charm config options. ' |
1984 | - '(database name)') |
1985 | - raise OSContextError |
1986 | - ctxt = {} |
1987 | - |
1988 | - for rid in relation_ids(self.interfaces[0]): |
1989 | - for unit in related_units(rid): |
1990 | - ctxt = { |
1991 | - 'database_host': relation_get('host', rid=rid, unit=unit), |
1992 | - 'database': self.database, |
1993 | - 'database_user': relation_get('user', rid=rid, unit=unit), |
1994 | - 'database_password': relation_get('password', rid=rid, unit=unit), |
1995 | - 'database_type': 'postgresql', |
1996 | - } |
1997 | - if context_complete(ctxt): |
1998 | - return ctxt |
1999 | - return {} |
2000 | - |
2001 | - |
2002 | -def db_ssl(rdata, ctxt, ssl_dir): |
2003 | - if 'ssl_ca' in rdata and ssl_dir: |
2004 | - ca_path = os.path.join(ssl_dir, 'db-client.ca') |
2005 | - with open(ca_path, 'w') as fh: |
2006 | - fh.write(b64decode(rdata['ssl_ca'])) |
2007 | - ctxt['database_ssl_ca'] = ca_path |
2008 | - elif 'ssl_ca' in rdata: |
2009 | - log("Charm not setup for ssl support but ssl ca found") |
2010 | - return ctxt |
2011 | - if 'ssl_cert' in rdata: |
2012 | - cert_path = os.path.join( |
2013 | - ssl_dir, 'db-client.cert') |
2014 | - if not os.path.exists(cert_path): |
2015 | - log("Waiting 1m for ssl client cert validity") |
2016 | - time.sleep(60) |
2017 | - with open(cert_path, 'w') as fh: |
2018 | - fh.write(b64decode(rdata['ssl_cert'])) |
2019 | - ctxt['database_ssl_cert'] = cert_path |
2020 | - key_path = os.path.join(ssl_dir, 'db-client.key') |
2021 | - with open(key_path, 'w') as fh: |
2022 | - fh.write(b64decode(rdata['ssl_key'])) |
2023 | - ctxt['database_ssl_key'] = key_path |
2024 | - return ctxt |
2025 | - |
2026 | - |
2027 | -class IdentityServiceContext(OSContextGenerator): |
2028 | - interfaces = ['identity-service'] |
2029 | - |
2030 | - def __call__(self): |
2031 | - log('Generating template context for identity-service') |
2032 | - ctxt = {} |
2033 | - |
2034 | - for rid in relation_ids('identity-service'): |
2035 | - for unit in related_units(rid): |
2036 | - rdata = relation_get(rid=rid, unit=unit) |
2037 | - ctxt = { |
2038 | - 'service_port': rdata.get('service_port'), |
2039 | - 'service_host': rdata.get('service_host'), |
2040 | - 'auth_host': rdata.get('auth_host'), |
2041 | - 'auth_port': rdata.get('auth_port'), |
2042 | - 'admin_tenant_name': rdata.get('service_tenant'), |
2043 | - 'admin_user': rdata.get('service_username'), |
2044 | - 'admin_password': rdata.get('service_password'), |
2045 | - 'service_protocol': |
2046 | - rdata.get('service_protocol') or 'http', |
2047 | - 'auth_protocol': |
2048 | - rdata.get('auth_protocol') or 'http', |
2049 | - } |
2050 | - if context_complete(ctxt): |
2051 | - # NOTE(jamespage) this is required for >= icehouse |
2052 | - # so a missing value just indicates keystone needs |
2053 | - # upgrading |
2054 | - ctxt['admin_tenant_id'] = rdata.get('service_tenant_id') |
2055 | - return ctxt |
2056 | - return {} |
2057 | - |
2058 | - |
2059 | -class AMQPContext(OSContextGenerator): |
2060 | - interfaces = ['amqp'] |
2061 | - |
2062 | - def __init__(self, ssl_dir=None): |
2063 | - self.ssl_dir = ssl_dir |
2064 | - |
2065 | - def __call__(self): |
2066 | - log('Generating template context for amqp') |
2067 | - conf = config() |
2068 | - try: |
2069 | - username = conf['rabbit-user'] |
2070 | - vhost = conf['rabbit-vhost'] |
2071 | - except KeyError as e: |
2072 | - log('Could not generate shared_db context. ' |
2073 | - 'Missing required charm config options: %s.' % e) |
2074 | - raise OSContextError |
2075 | - ctxt = {} |
2076 | - for rid in relation_ids('amqp'): |
2077 | - ha_vip_only = False |
2078 | - for unit in related_units(rid): |
2079 | - if relation_get('clustered', rid=rid, unit=unit): |
2080 | - ctxt['clustered'] = True |
2081 | - ctxt['rabbitmq_host'] = relation_get('vip', rid=rid, |
2082 | - unit=unit) |
2083 | - else: |
2084 | - ctxt['rabbitmq_host'] = relation_get('private-address', |
2085 | - rid=rid, unit=unit) |
2086 | - ctxt.update({ |
2087 | - 'rabbitmq_user': username, |
2088 | - 'rabbitmq_password': relation_get('password', rid=rid, |
2089 | - unit=unit), |
2090 | - 'rabbitmq_virtual_host': vhost, |
2091 | - }) |
2092 | - |
2093 | - ssl_port = relation_get('ssl_port', rid=rid, unit=unit) |
2094 | - if ssl_port: |
2095 | - ctxt['rabbit_ssl_port'] = ssl_port |
2096 | - ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit) |
2097 | - if ssl_ca: |
2098 | - ctxt['rabbit_ssl_ca'] = ssl_ca |
2099 | - |
2100 | - if relation_get('ha_queues', rid=rid, unit=unit) is not None: |
2101 | - ctxt['rabbitmq_ha_queues'] = True |
2102 | - |
2103 | - ha_vip_only = relation_get('ha-vip-only', |
2104 | - rid=rid, unit=unit) is not None |
2105 | - |
2106 | - if context_complete(ctxt): |
2107 | - if 'rabbit_ssl_ca' in ctxt: |
2108 | - if not self.ssl_dir: |
2109 | - log(("Charm not setup for ssl support " |
2110 | - "but ssl ca found")) |
2111 | - break |
2112 | - ca_path = os.path.join( |
2113 | - self.ssl_dir, 'rabbit-client-ca.pem') |
2114 | - with open(ca_path, 'w') as fh: |
2115 | - fh.write(b64decode(ctxt['rabbit_ssl_ca'])) |
2116 | - ctxt['rabbit_ssl_ca'] = ca_path |
2117 | - # Sufficient information found = break out! |
2118 | - break |
2119 | - # Used for active/active rabbitmq >= grizzly |
2120 | - if ('clustered' not in ctxt or ha_vip_only) \ |
2121 | - and len(related_units(rid)) > 1: |
2122 | - rabbitmq_hosts = [] |
2123 | - for unit in related_units(rid): |
2124 | - rabbitmq_hosts.append(relation_get('private-address', |
2125 | - rid=rid, unit=unit)) |
2126 | - ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts) |
2127 | - if not context_complete(ctxt): |
2128 | - return {} |
2129 | - else: |
2130 | - return ctxt |
2131 | - |
2132 | - |
2133 | -class CephContext(OSContextGenerator): |
2134 | - interfaces = ['ceph'] |
2135 | - |
2136 | - def __call__(self): |
2137 | - '''This generates context for /etc/ceph/ceph.conf templates''' |
2138 | - if not relation_ids('ceph'): |
2139 | - return {} |
2140 | - |
2141 | - log('Generating template context for ceph') |
2142 | - |
2143 | - mon_hosts = [] |
2144 | - auth = None |
2145 | - key = None |
2146 | - use_syslog = str(config('use-syslog')).lower() |
2147 | - for rid in relation_ids('ceph'): |
2148 | - for unit in related_units(rid): |
2149 | - mon_hosts.append(relation_get('private-address', rid=rid, |
2150 | - unit=unit)) |
2151 | - auth = relation_get('auth', rid=rid, unit=unit) |
2152 | - key = relation_get('key', rid=rid, unit=unit) |
2153 | - |
2154 | - ctxt = { |
2155 | - 'mon_hosts': ' '.join(mon_hosts), |
2156 | - 'auth': auth, |
2157 | - 'key': key, |
2158 | - 'use_syslog': use_syslog |
2159 | - } |
2160 | - |
2161 | - if not os.path.isdir('/etc/ceph'): |
2162 | - os.mkdir('/etc/ceph') |
2163 | - |
2164 | - if not context_complete(ctxt): |
2165 | - return {} |
2166 | - |
2167 | - ensure_packages(['ceph-common']) |
2168 | - |
2169 | - return ctxt |
2170 | - |
2171 | - |
2172 | -class HAProxyContext(OSContextGenerator): |
2173 | - interfaces = ['cluster'] |
2174 | - |
2175 | - def __call__(self): |
2176 | - ''' |
2177 | - Builds half a context for the haproxy template, which describes |
2178 | - all peers to be included in the cluster. Each charm needs to include |
2179 | - its own context generator that describes the port mapping. |
2180 | - ''' |
2181 | - if not relation_ids('cluster'): |
2182 | - return {} |
2183 | - |
2184 | - cluster_hosts = {} |
2185 | - l_unit = local_unit().replace('/', '-') |
2186 | - cluster_hosts[l_unit] = unit_get('private-address') |
2187 | - |
2188 | - for rid in relation_ids('cluster'): |
2189 | - for unit in related_units(rid): |
2190 | - _unit = unit.replace('/', '-') |
2191 | - addr = relation_get('private-address', rid=rid, unit=unit) |
2192 | - cluster_hosts[_unit] = addr |
2193 | - |
2194 | - ctxt = { |
2195 | - 'units': cluster_hosts, |
2196 | - } |
2197 | - if len(cluster_hosts.keys()) > 1: |
2198 | - # Enable haproxy when we have enough peers. |
2199 | - log('Ensuring haproxy enabled in /etc/default/haproxy.') |
2200 | - with open('/etc/default/haproxy', 'w') as out: |
2201 | - out.write('ENABLED=1\n') |
2202 | - return ctxt |
2203 | - log('HAProxy context is incomplete, this unit has no peers.') |
2204 | - return {} |
2205 | - |
2206 | - |
2207 | -class ImageServiceContext(OSContextGenerator): |
2208 | - interfaces = ['image-service'] |
2209 | - |
2210 | - def __call__(self): |
2211 | - ''' |
2212 | - Obtains the glance API server from the image-service relation. Useful |
2213 | - in nova and cinder (currently). |
2214 | - ''' |
2215 | - log('Generating template context for image-service.') |
2216 | - rids = relation_ids('image-service') |
2217 | - if not rids: |
2218 | - return {} |
2219 | - for rid in rids: |
2220 | - for unit in related_units(rid): |
2221 | - api_server = relation_get('glance-api-server', |
2222 | - rid=rid, unit=unit) |
2223 | - if api_server: |
2224 | - return {'glance_api_servers': api_server} |
2225 | - log('ImageService context is incomplete. ' |
2226 | - 'Missing required relation data.') |
2227 | - return {} |
2228 | - |
2229 | - |
2230 | -class ApacheSSLContext(OSContextGenerator): |
2231 | - |
2232 | - """ |
2233 | - Generates a context for an apache vhost configuration that configures |
2234 | - HTTPS reverse proxying for one or many endpoints. Generated context |
2235 | - looks something like: |
2236 | - { |
2237 | - 'namespace': 'cinder', |
2238 | - 'private_address': 'iscsi.mycinderhost.com', |
2239 | - 'endpoints': [(8776, 8766), (8777, 8767)] |
2240 | - } |
2241 | - |
2242 | - The endpoints list consists of tuples mapping external ports |
2243 | - to internal ports. |
2244 | - """ |
2245 | - interfaces = ['https'] |
2246 | - |
2247 | - # charms should inherit this context and set external ports |
2248 | - # and service namespace accordingly. |
2249 | - external_ports = [] |
2250 | - service_namespace = None |
2251 | - |
2252 | - def enable_modules(self): |
2253 | - cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http'] |
2254 | - check_call(cmd) |
2255 | - |
2256 | - def configure_cert(self): |
2257 | - if not os.path.isdir('/etc/apache2/ssl'): |
2258 | - os.mkdir('/etc/apache2/ssl') |
2259 | - ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace) |
2260 | - if not os.path.isdir(ssl_dir): |
2261 | - os.mkdir(ssl_dir) |
2262 | - cert, key = get_cert() |
2263 | - with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out: |
2264 | - cert_out.write(b64decode(cert)) |
2265 | - with open(os.path.join(ssl_dir, 'key'), 'w') as key_out: |
2266 | - key_out.write(b64decode(key)) |
2267 | - ca_cert = get_ca_cert() |
2268 | - if ca_cert: |
2269 | - with open(CA_CERT_PATH, 'w') as ca_out: |
2270 | - ca_out.write(b64decode(ca_cert)) |
2271 | - check_call(['update-ca-certificates']) |
2272 | - |
2273 | - def __call__(self): |
2274 | - if isinstance(self.external_ports, basestring): |
2275 | - self.external_ports = [self.external_ports] |
2276 | - if (not self.external_ports or not https()): |
2277 | - return {} |
2278 | - |
2279 | - self.configure_cert() |
2280 | - self.enable_modules() |
2281 | - |
2282 | - ctxt = { |
2283 | - 'namespace': self.service_namespace, |
2284 | - 'private_address': unit_get('private-address'), |
2285 | - 'endpoints': [] |
2286 | - } |
2287 | - if is_clustered(): |
2288 | - ctxt['private_address'] = config('vip') |
2289 | - for api_port in self.external_ports: |
2290 | - ext_port = determine_apache_port(api_port) |
2291 | - int_port = determine_api_port(api_port) |
2292 | - portmap = (int(ext_port), int(int_port)) |
2293 | - ctxt['endpoints'].append(portmap) |
2294 | - return ctxt |
2295 | - |
2296 | - |
2297 | -class NeutronContext(OSContextGenerator): |
2298 | - interfaces = [] |
2299 | - |
2300 | - @property |
2301 | - def plugin(self): |
2302 | - return None |
2303 | - |
2304 | - @property |
2305 | - def network_manager(self): |
2306 | - return None |
2307 | - |
2308 | - @property |
2309 | - def packages(self): |
2310 | - return neutron_plugin_attribute( |
2311 | - self.plugin, 'packages', self.network_manager) |
2312 | - |
2313 | - @property |
2314 | - def neutron_security_groups(self): |
2315 | - return None |
2316 | - |
2317 | - def _ensure_packages(self): |
2318 | - [ensure_packages(pkgs) for pkgs in self.packages] |
2319 | - |
2320 | - def _save_flag_file(self): |
2321 | - if self.network_manager == 'quantum': |
2322 | - _file = '/etc/nova/quantum_plugin.conf' |
2323 | - else: |
2324 | - _file = '/etc/nova/neutron_plugin.conf' |
2325 | - with open(_file, 'wb') as out: |
2326 | - out.write(self.plugin + '\n') |
2327 | - |
2328 | - def ovs_ctxt(self): |
2329 | - driver = neutron_plugin_attribute(self.plugin, 'driver', |
2330 | - self.network_manager) |
2331 | - config = neutron_plugin_attribute(self.plugin, 'config', |
2332 | - self.network_manager) |
2333 | - ovs_ctxt = { |
2334 | - 'core_plugin': driver, |
2335 | - 'neutron_plugin': 'ovs', |
2336 | - 'neutron_security_groups': self.neutron_security_groups, |
2337 | - 'local_ip': unit_private_ip(), |
2338 | - 'config': config |
2339 | - } |
2340 | - |
2341 | - return ovs_ctxt |
2342 | - |
2343 | - def nvp_ctxt(self): |
2344 | - driver = neutron_plugin_attribute(self.plugin, 'driver', |
2345 | - self.network_manager) |
2346 | - config = neutron_plugin_attribute(self.plugin, 'config', |
2347 | - self.network_manager) |
2348 | - nvp_ctxt = { |
2349 | - 'core_plugin': driver, |
2350 | - 'neutron_plugin': 'nvp', |
2351 | - 'neutron_security_groups': self.neutron_security_groups, |
2352 | - 'local_ip': unit_private_ip(), |
2353 | - 'config': config |
2354 | - } |
2355 | - |
2356 | - return nvp_ctxt |
2357 | - |
2358 | - def neutron_ctxt(self): |
2359 | - if https(): |
2360 | - proto = 'https' |
2361 | - else: |
2362 | - proto = 'http' |
2363 | - if is_clustered(): |
2364 | - host = config('vip') |
2365 | - else: |
2366 | - host = unit_get('private-address') |
2367 | - url = '%s://%s:%s' % (proto, host, '9696') |
2368 | - ctxt = { |
2369 | - 'network_manager': self.network_manager, |
2370 | - 'neutron_url': url, |
2371 | - } |
2372 | - return ctxt |
2373 | - |
2374 | - def __call__(self): |
2375 | - self._ensure_packages() |
2376 | - |
2377 | - if self.network_manager not in ['quantum', 'neutron']: |
2378 | - return {} |
2379 | - |
2380 | - if not self.plugin: |
2381 | - return {} |
2382 | - |
2383 | - ctxt = self.neutron_ctxt() |
2384 | - |
2385 | - if self.plugin == 'ovs': |
2386 | - ctxt.update(self.ovs_ctxt()) |
2387 | - elif self.plugin == 'nvp': |
2388 | - ctxt.update(self.nvp_ctxt()) |
2389 | - |
2390 | - alchemy_flags = config('neutron-alchemy-flags') |
2391 | - if alchemy_flags: |
2392 | - flags = config_flags_parser(alchemy_flags) |
2393 | - ctxt['neutron_alchemy_flags'] = flags |
2394 | - |
2395 | - self._save_flag_file() |
2396 | - return ctxt |
2397 | - |
2398 | - |
2399 | -class OSConfigFlagContext(OSContextGenerator): |
2400 | - |
2401 | - """ |
2402 | - Responsible for adding user-defined config-flags in charm config to a |
2403 | - template context. |
2404 | - |
2405 | - NOTE: the value of config-flags may be a comma-separated list of |
2406 | - key=value pairs and some Openstack config files support |
2407 | - comma-separated lists as values. |
2408 | - """ |
2409 | - |
2410 | - def __call__(self): |
2411 | - config_flags = config('config-flags') |
2412 | - if not config_flags: |
2413 | - return {} |
2414 | - |
2415 | - flags = config_flags_parser(config_flags) |
2416 | - return {'user_config_flags': flags} |
2417 | - |
2418 | - |
2419 | -class SubordinateConfigContext(OSContextGenerator): |
2420 | - |
2421 | - """ |
2422 | - Responsible for inspecting relations to subordinates that |
2423 | - may be exporting required config via a json blob. |
2424 | - |
2425 | - The subordinate interface allows subordinates to export their |
2426 | - configuration requirements to the principal for multiple config |
2427 | - files and multiple services. I.e., a subordinate that has interfaces |
2428 | - to both glance and nova may export the following yaml blob as json: |
2429 | - |
2430 | - glance: |
2431 | - /etc/glance/glance-api.conf: |
2432 | - sections: |
2433 | - DEFAULT: |
2434 | - - [key1, value1] |
2435 | - /etc/glance/glance-registry.conf: |
2436 | - MYSECTION: |
2437 | - - [key2, value2] |
2438 | - nova: |
2439 | - /etc/nova/nova.conf: |
2440 | - sections: |
2441 | - DEFAULT: |
2442 | - - [key3, value3] |
2443 | - |
2444 | - |
2445 | - It is then up to the principal charms to subscribe this context to |
2446 | - the service+config file it is interested in. Configuration data will |
2447 | - be available in the template context, in glance's case, as: |
2448 | - ctxt = { |
2449 | - ... other context ... |
2450 | - 'subordinate_config': { |
2451 | - 'DEFAULT': { |
2452 | - 'key1': 'value1', |
2453 | - }, |
2454 | - 'MYSECTION': { |
2455 | - 'key2': 'value2', |
2456 | - }, |
2457 | - } |
2458 | - } |
2459 | - |
2460 | - """ |
2461 | - |
2462 | - def __init__(self, service, config_file, interface): |
2463 | - """ |
2464 | - :param service : Service name key to query in any subordinate |
2465 | - data found |
2466 | - :param config_file : Service's config file to query sections |
2467 | - :param interface : Subordinate interface to inspect |
2468 | - """ |
2469 | - self.service = service |
2470 | - self.config_file = config_file |
2471 | - self.interface = interface |
2472 | - |
2473 | - def __call__(self): |
2474 | - ctxt = {} |
2475 | - for rid in relation_ids(self.interface): |
2476 | - for unit in related_units(rid): |
2477 | - sub_config = relation_get('subordinate_configuration', |
2478 | - rid=rid, unit=unit) |
2479 | - if sub_config and sub_config != '': |
2480 | - try: |
2481 | - sub_config = json.loads(sub_config) |
2482 | - except: |
2483 | - log('Could not parse JSON from subordinate_config ' |
2484 | - 'setting from %s' % rid, level=ERROR) |
2485 | - continue |
2486 | - |
2487 | - if self.service not in sub_config: |
2488 | - log('Found subordinate_config on %s but it contained ' |
2489 | - 'nothing for %s service' % (rid, self.service)) |
2490 | - continue |
2491 | - |
2492 | - sub_config = sub_config[self.service] |
2493 | - if self.config_file not in sub_config: |
2494 | - log('Found subordinate_config on %s but it contained ' |
2495 | - 'nothing for %s' % (rid, self.config_file)) |
2496 | - continue |
2497 | - |
2498 | - sub_config = sub_config[self.config_file] |
2499 | - for k, v in sub_config.iteritems(): |
2500 | - ctxt[k] = v |
2501 | - |
2502 | - if not ctxt: |
2503 | - ctxt['sections'] = {} |
2504 | - |
2505 | - return ctxt |
2506 | - |
2507 | - |
2508 | -class SyslogContext(OSContextGenerator): |
2509 | - |
2510 | - def __call__(self): |
2511 | - ctxt = { |
2512 | - 'use_syslog': config('use-syslog') |
2513 | - } |
2514 | - return ctxt |
2515 | |
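For context on the removed `SubordinateConfigContext` above: its section-merging behaviour can be shown as a standalone sketch. This is not the charmhelpers API — the hook-environment calls (`relation_ids`, `related_units`, `relation_get`) are replaced here with a plain iterable of JSON blobs so the logic runs outside a Juju hook.

```python
import json

# Standalone sketch of the merging logic from the removed
# SubordinateConfigContext.__call__ (hook-environment calls stubbed out).
def merge_subordinate_config(raw_blobs, service, config_file):
    """Collect per-section settings exported by subordinate units.

    raw_blobs: iterable of JSON strings, one per related unit.
    Returns a dict shaped like the template context, e.g.
    {'sections': {'DEFAULT': [...]}}.
    """
    ctxt = {}
    for blob in raw_blobs:
        try:
            sub_config = json.loads(blob)
        except ValueError:
            # The original logged an ERROR and continued on bad JSON.
            continue
        sections = sub_config.get(service, {}).get(config_file)
        if sections:
            ctxt.update(sections)
    if not ctxt:
        # Mirror the original's fallback: always expose a 'sections' key.
        ctxt['sections'] = {}
    return ctxt
```

As in the removed code, data from later units overwrites earlier keys, and an empty result still yields `{'sections': {}}` so templates can iterate safely.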
2516 | === removed file 'hooks/charmhelpers/contrib/openstack/neutron.py' |
2517 | --- hooks/charmhelpers/contrib/openstack/neutron.py 2014-05-09 20:11:59 +0000 |
2518 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000 |
2519 | @@ -1,171 +0,0 @@ |
2520 | -# Various utilities for dealing with Neutron and the renaming from Quantum. |
2521 | - |
2522 | -from subprocess import check_output |
2523 | - |
2524 | -from charmhelpers.core.hookenv import ( |
2525 | - config, |
2526 | - log, |
2527 | - ERROR, |
2528 | -) |
2529 | - |
2530 | -from charmhelpers.contrib.openstack.utils import os_release |
2531 | - |
2532 | - |
2533 | -def headers_package(): |
2534 | - """Ensures correct linux-headers for running kernel are installed, |
2535 | - for building DKMS package""" |
2536 | - kver = check_output(['uname', '-r']).strip() |
2537 | - return 'linux-headers-%s' % kver |
2538 | - |
2539 | -QUANTUM_CONF_DIR = '/etc/quantum' |
2540 | - |
2541 | - |
2542 | -def kernel_version(): |
2543 | - """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """ |
2544 | - kver = check_output(['uname', '-r']).strip() |
2545 | - kver = kver.split('.') |
2546 | - return (int(kver[0]), int(kver[1])) |
2547 | - |
2548 | - |
2549 | -def determine_dkms_package(): |
2550 | - """ Determine which DKMS package should be used based on kernel version """ |
2551 | - # NOTE: 3.13 kernels have support for GRE and VXLAN native |
2552 | - if kernel_version() >= (3, 13): |
2553 | - return [] |
2554 | - else: |
2555 | - return ['openvswitch-datapath-dkms'] |
2556 | - |
2557 | - |
2558 | -# legacy |
2559 | - |
2560 | - |
2561 | -def quantum_plugins(): |
2562 | - from charmhelpers.contrib.openstack import context |
2563 | - return { |
2564 | - 'ovs': { |
2565 | - 'config': '/etc/quantum/plugins/openvswitch/' |
2566 | - 'ovs_quantum_plugin.ini', |
2567 | - 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.' |
2568 | - 'OVSQuantumPluginV2', |
2569 | - 'contexts': [ |
2570 | - context.SharedDBContext(user=config('neutron-database-user'), |
2571 | - database=config('neutron-database'), |
2572 | - relation_prefix='neutron', |
2573 | - ssl_dir=QUANTUM_CONF_DIR)], |
2574 | - 'services': ['quantum-plugin-openvswitch-agent'], |
2575 | - 'packages': [[headers_package()] + determine_dkms_package(), |
2576 | - ['quantum-plugin-openvswitch-agent']], |
2577 | - 'server_packages': ['quantum-server', |
2578 | - 'quantum-plugin-openvswitch'], |
2579 | - 'server_services': ['quantum-server'] |
2580 | - }, |
2581 | - 'nvp': { |
2582 | - 'config': '/etc/quantum/plugins/nicira/nvp.ini', |
2583 | - 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.' |
2584 | - 'QuantumPlugin.NvpPluginV2', |
2585 | - 'contexts': [ |
2586 | - context.SharedDBContext(user=config('neutron-database-user'), |
2587 | - database=config('neutron-database'), |
2588 | - relation_prefix='neutron', |
2589 | - ssl_dir=QUANTUM_CONF_DIR)], |
2590 | - 'services': [], |
2591 | - 'packages': [], |
2592 | - 'server_packages': ['quantum-server', |
2593 | - 'quantum-plugin-nicira'], |
2594 | - 'server_services': ['quantum-server'] |
2595 | - } |
2596 | - } |
2597 | - |
2598 | -NEUTRON_CONF_DIR = '/etc/neutron' |
2599 | - |
2600 | - |
2601 | -def neutron_plugins(): |
2602 | - from charmhelpers.contrib.openstack import context |
2603 | - release = os_release('nova-common') |
2604 | - plugins = { |
2605 | - 'ovs': { |
2606 | - 'config': '/etc/neutron/plugins/openvswitch/' |
2607 | - 'ovs_neutron_plugin.ini', |
2608 | - 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.' |
2609 | - 'OVSNeutronPluginV2', |
2610 | - 'contexts': [ |
2611 | - context.SharedDBContext(user=config('neutron-database-user'), |
2612 | - database=config('neutron-database'), |
2613 | - relation_prefix='neutron', |
2614 | - ssl_dir=NEUTRON_CONF_DIR)], |
2615 | - 'services': ['neutron-plugin-openvswitch-agent'], |
2616 | - 'packages': [[headers_package()] + determine_dkms_package(), |
2617 | - ['neutron-plugin-openvswitch-agent']], |
2618 | - 'server_packages': ['neutron-server', |
2619 | - 'neutron-plugin-openvswitch'], |
2620 | - 'server_services': ['neutron-server'] |
2621 | - }, |
2622 | - 'nvp': { |
2623 | - 'config': '/etc/neutron/plugins/nicira/nvp.ini', |
2624 | - 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.' |
2625 | - 'NeutronPlugin.NvpPluginV2', |
2626 | - 'contexts': [ |
2627 | - context.SharedDBContext(user=config('neutron-database-user'), |
2628 | - database=config('neutron-database'), |
2629 | - relation_prefix='neutron', |
2630 | - ssl_dir=NEUTRON_CONF_DIR)], |
2631 | - 'services': [], |
2632 | - 'packages': [], |
2633 | - 'server_packages': ['neutron-server', |
2634 | - 'neutron-plugin-nicira'], |
2635 | - 'server_services': ['neutron-server'] |
2636 | - } |
2637 | - } |
2638 | - # NOTE: patch in ml2 plugin for icehouse onwards |
2639 | - if release >= 'icehouse': |
2640 | - plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini' |
2641 | - plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin' |
2642 | - plugins['ovs']['server_packages'] = ['neutron-server', |
2643 | - 'neutron-plugin-ml2'] |
2644 | - return plugins |
2645 | - |
2646 | - |
2647 | -def neutron_plugin_attribute(plugin, attr, net_manager=None): |
2648 | - manager = net_manager or network_manager() |
2649 | - if manager == 'quantum': |
2650 | - plugins = quantum_plugins() |
2651 | - elif manager == 'neutron': |
2652 | - plugins = neutron_plugins() |
2653 | - else: |
2654 | - log('Error: Network manager does not support plugins.') |
2655 | - raise Exception |
2656 | - |
2657 | - try: |
2658 | - _plugin = plugins[plugin] |
2659 | - except KeyError: |
2660 | - log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR) |
2661 | - raise Exception |
2662 | - |
2663 | - try: |
2664 | - return _plugin[attr] |
2665 | - except KeyError: |
2666 | - return None |
2667 | - |
2668 | - |
2669 | -def network_manager(): |
2670 | - ''' |
2671 | - Deals with the renaming of Quantum to Neutron in H and any situations |
2672 | - that require compatibility (e.g., deploying H with network-manager=quantum, |
2673 | - upgrading from G). |
2674 | - ''' |
2675 | - release = os_release('nova-common') |
2676 | - manager = config('network-manager').lower() |
2677 | - |
2678 | - if manager not in ['quantum', 'neutron']: |
2679 | - return manager |
2680 | - |
2681 | - if release in ['essex']: |
2682 | - # E does not support neutron |
2683 | - log('Neutron networking not supported in Essex.', level=ERROR) |
2684 | - raise Exception |
2685 | - elif release in ['folsom', 'grizzly']: |
2686 | - # neutron is named quantum in F and G |
2687 | - return 'quantum' |
2688 | - else: |
2689 | - # ensure accurate naming for all releases post-H |
2690 | - return 'neutron' |
2691 | |
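The removed `kernel_version`/`determine_dkms_package` pair above decides whether the `openvswitch-datapath-dkms` package is needed: 3.13+ kernels support GRE and VXLAN natively. A minimal standalone sketch of that decision, with the `uname -r` call replaced by a string parameter:

```python
# Standalone sketch of the removed kernel_version() and
# determine_dkms_package() helpers, with `uname -r` stubbed out.
def parse_kernel_version(uname_r):
    """Turn a `uname -r` string like '3.13.0-24-generic' into (3, 13)."""
    major, minor = uname_r.split('.')[:2]
    return (int(major), int(minor))

def dkms_packages(uname_r):
    """Return the DKMS packages needed for this kernel, if any."""
    # 3.13+ kernels have native GRE/VXLAN support, so no DKMS build.
    if parse_kernel_version(uname_r) >= (3, 13):
        return []
    return ['openvswitch-datapath-dkms']
```

Note the tuple comparison: `(3, 2) >= (3, 13)` is False, which is why the original compared version tuples rather than the raw string.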
2692 | === removed directory 'hooks/charmhelpers/contrib/openstack/templates' |
2693 | === removed file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py' |
2694 | --- hooks/charmhelpers/contrib/openstack/templates/__init__.py 2013-11-26 17:12:54 +0000 |
2695 | +++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000 |
2696 | @@ -1,2 +0,0 @@ |
2697 | -# dummy __init__.py to fool syncer into thinking this is a syncable python |
2698 | -# module |
2699 | |
2700 | === removed file 'hooks/charmhelpers/contrib/openstack/templating.py' |
2701 | --- hooks/charmhelpers/contrib/openstack/templating.py 2013-11-26 17:12:54 +0000 |
2702 | +++ hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000 |
2703 | @@ -1,280 +0,0 @@ |
2704 | -import os |
2705 | - |
2706 | -from charmhelpers.fetch import apt_install |
2707 | - |
2708 | -from charmhelpers.core.hookenv import ( |
2709 | - log, |
2710 | - ERROR, |
2711 | - INFO |
2712 | -) |
2713 | - |
2714 | -from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES |
2715 | - |
2716 | -try: |
2717 | - from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions |
2718 | -except ImportError: |
2719 | - # python-jinja2 may not be installed yet, or we're running unittests. |
2720 | - FileSystemLoader = ChoiceLoader = Environment = exceptions = None |
2721 | - |
2722 | - |
2723 | -class OSConfigException(Exception): |
2724 | - pass |
2725 | - |
2726 | - |
2727 | -def get_loader(templates_dir, os_release): |
2728 | - """ |
2729 | - Create a jinja2.ChoiceLoader containing template dirs up to |
2730 | - and including os_release. If a release's template directory |
2731 | - is missing under templates_dir, it is omitted from the loader. |
2732 | - templates_dir is added to the bottom of the search list as a base |
2733 | - loading dir. |
2734 | - |
2735 | - A charm may also ship a templates dir with this module |
2736 | - and it will be appended to the bottom of the search list, eg: |
2737 | - hooks/charmhelpers/contrib/openstack/templates. |
2738 | - |
2739 | - :param templates_dir: str: Base template directory containing release |
2740 | - sub-directories. |
2741 | - :param os_release : str: OpenStack release codename to construct template |
2742 | - loader. |
2743 | - |
2744 | - :returns : jinja2.ChoiceLoader constructed with a list of |
2745 | - jinja2.FilesystemLoaders, ordered in descending |
2746 | - order by OpenStack release. |
2747 | - """ |
2748 | - tmpl_dirs = [(rel, os.path.join(templates_dir, rel)) |
2749 | - for rel in OPENSTACK_CODENAMES.itervalues()] |
2750 | - |
2751 | - if not os.path.isdir(templates_dir): |
2752 | - log('Templates directory not found @ %s.' % templates_dir, |
2753 | - level=ERROR) |
2754 | - raise OSConfigException |
2755 | - |
2756 | - # the bottom contains templates_dir and possibly a common templates dir |
2757 | - # shipped with the helper. |
2758 | - loaders = [FileSystemLoader(templates_dir)] |
2759 | - helper_templates = os.path.join(os.path.dirname(__file__), 'templates') |
2760 | - if os.path.isdir(helper_templates): |
2761 | - loaders.append(FileSystemLoader(helper_templates)) |
2762 | - |
2763 | - for rel, tmpl_dir in tmpl_dirs: |
2764 | - if os.path.isdir(tmpl_dir): |
2765 | - loaders.insert(0, FileSystemLoader(tmpl_dir)) |
2766 | - if rel == os_release: |
2767 | - break |
2768 | - log('Creating choice loader with dirs: %s' % |
2769 | - [l.searchpath for l in loaders], level=INFO) |
2770 | - return ChoiceLoader(loaders) |
2771 | - |
2772 | - |
2773 | -class OSConfigTemplate(object): |
2774 | - """ |
2775 | - Associates a config file template with a list of context generators. |
2776 | - Responsible for constructing a template context based on those generators. |
2777 | - """ |
2778 | - def __init__(self, config_file, contexts): |
2779 | - self.config_file = config_file |
2780 | - |
2781 | - if hasattr(contexts, '__call__'): |
2782 | - self.contexts = [contexts] |
2783 | - else: |
2784 | - self.contexts = contexts |
2785 | - |
2786 | - self._complete_contexts = [] |
2787 | - |
2788 | - def context(self): |
2789 | - ctxt = {} |
2790 | - for context in self.contexts: |
2791 | - _ctxt = context() |
2792 | - if _ctxt: |
2793 | - ctxt.update(_ctxt) |
2794 | - # track interfaces for every complete context. |
2795 | - [self._complete_contexts.append(interface) |
2796 | - for interface in context.interfaces |
2797 | - if interface not in self._complete_contexts] |
2798 | - return ctxt |
2799 | - |
2800 | - def complete_contexts(self): |
2801 | - ''' |
2802 | - Return a list of interfaces that have satisfied contexts. |
2803 | - ''' |
2804 | - if self._complete_contexts: |
2805 | - return self._complete_contexts |
2806 | - self.context() |
2807 | - return self._complete_contexts |
2808 | - |
2809 | - |
2810 | -class OSConfigRenderer(object): |
2811 | - """ |
2812 | - This class provides a common templating system to be used by OpenStack |
2813 | - charms. It is intended to help charms share common code and templates, |
2814 | - and ease the burden of managing config templates across multiple OpenStack |
2815 | - releases. |
2816 | - |
2817 | - Basic usage: |
2818 | - # import some common context generators from charmhelpers |
2819 | - from charmhelpers.contrib.openstack import context |
2820 | - |
2821 | - # Create a renderer object for a specific OS release. |
2822 | - configs = OSConfigRenderer(templates_dir='/tmp/templates', |
2823 | - openstack_release='folsom') |
2824 | - # register some config files with context generators. |
2825 | - configs.register(config_file='/etc/nova/nova.conf', |
2826 | - contexts=[context.SharedDBContext(), |
2827 | - context.AMQPContext()]) |
2828 | - configs.register(config_file='/etc/nova/api-paste.ini', |
2829 | - contexts=[context.IdentityServiceContext()]) |
2830 | - configs.register(config_file='/etc/haproxy/haproxy.conf', |
2831 | - contexts=[context.HAProxyContext()]) |
2832 | - # write out a single config |
2833 | - configs.write('/etc/nova/nova.conf') |
2834 | - # write out all registered configs |
2835 | - configs.write_all() |
2836 | - |
2837 | - Details: |
2838 | - |
2839 | - OpenStack Releases and template loading |
2840 | - --------------------------------------- |
2841 | - When the object is instantiated, it is associated with a specific OS |
2842 | - release. This dictates how the template loader will be constructed. |
2843 | - |
2844 | - The constructed loader attempts to load the template from several places |
2845 | - in the following order: |
2846 | - - from the most recent OS release-specific template dir (if one exists) |
2847 | - - the base templates_dir |
2848 | - - a template directory shipped in the charm with this helper file. |
2849 | - |
2850 | - |
2851 | - For the example above, '/tmp/templates' contains the following structure: |
2852 | - /tmp/templates/nova.conf |
2853 | - /tmp/templates/api-paste.ini |
2854 | - /tmp/templates/grizzly/api-paste.ini |
2855 | - /tmp/templates/havana/api-paste.ini |
2856 | - |
2857 | - Since it was registered with the grizzly release, it first searches |
2858 | - the grizzly directory for nova.conf, then the templates dir. |
2859 | - |
2860 | - When writing api-paste.ini, it will find the template in the grizzly |
2861 | - directory. |
2862 | - |
2863 | - If the object were created with folsom, it would fall back to the |
2864 | - base templates dir for its api-paste.ini template. |
2865 | - |
2866 | - This system should help manage changes in config files through |
2867 | - OpenStack releases, allowing charms to fall back to the most recently |
2868 | - updated config template for a given release. |
2869 | - |
2870 | - The haproxy.conf, since it is not shipped in the templates dir, will |
2871 | - be loaded from the module directory's template directory, eg |
2872 | - $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows |
2873 | - us to ship common templates (haproxy, apache) with the helpers. |
2874 | - |
2875 | - Context generators |
2876 | - --------------------------------------- |
2877 | - Context generators are used to generate template contexts during hook |
2878 | - execution. Doing so may require inspecting service relations, charm |
2879 | - config, etc. When registered, a config file is associated with a list |
2880 | - of generators. When a template is rendered and written, all context |
2881 | - generators are called in a chain to generate the context dictionary |
2882 | - passed to the jinja2 template. See context.py for more info. |
2883 | - """ |
2884 | - def __init__(self, templates_dir, openstack_release): |
2885 | - if not os.path.isdir(templates_dir): |
2886 | - log('Could not locate templates dir %s' % templates_dir, |
2887 | - level=ERROR) |
2888 | - raise OSConfigException |
2889 | - |
2890 | - self.templates_dir = templates_dir |
2891 | - self.openstack_release = openstack_release |
2892 | - self.templates = {} |
2893 | - self._tmpl_env = None |
2894 | - |
2895 | - if None in [Environment, ChoiceLoader, FileSystemLoader]: |
2896 | - # if this code is running, the object is created pre-install hook. |
2897 | - # jinja2 shouldn't get touched until the module is reloaded on next |
2898 | - # hook execution, with proper jinja2 bits successfully imported. |
2899 | - apt_install('python-jinja2') |
2900 | - |
2901 | - def register(self, config_file, contexts): |
2902 | - """ |
2903 | - Register a config file with a list of context generators to be called |
2904 | - during rendering. |
2905 | - """ |
2906 | - self.templates[config_file] = OSConfigTemplate(config_file=config_file, |
2907 | - contexts=contexts) |
2908 | - log('Registered config file: %s' % config_file, level=INFO) |
2909 | - |
2910 | - def _get_tmpl_env(self): |
2911 | - if not self._tmpl_env: |
2912 | - loader = get_loader(self.templates_dir, self.openstack_release) |
2913 | - self._tmpl_env = Environment(loader=loader) |
2914 | - |
2915 | - def _get_template(self, template): |
2916 | - self._get_tmpl_env() |
2917 | - template = self._tmpl_env.get_template(template) |
2918 | - log('Loaded template from %s' % template.filename, level=INFO) |
2919 | - return template |
2920 | - |
2921 | - def render(self, config_file): |
2922 | - if config_file not in self.templates: |
2923 | - log('Config not registered: %s' % config_file, level=ERROR) |
2924 | - raise OSConfigException |
2925 | - ctxt = self.templates[config_file].context() |
2926 | - |
2927 | - _tmpl = os.path.basename(config_file) |
2928 | - try: |
2929 | - template = self._get_template(_tmpl) |
2930 | - except exceptions.TemplateNotFound: |
2931 | - # if no template is found with basename, try looking for it |
2932 | - # using a munged full path, eg: |
2933 | - # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf |
2934 | - _tmpl = '_'.join(config_file.split('/')[1:]) |
2935 | - try: |
2936 | - template = self._get_template(_tmpl) |
2937 | - except exceptions.TemplateNotFound as e: |
2938 | - log('Could not load template from %s by %s or %s.' % |
2939 | - (self.templates_dir, os.path.basename(config_file), _tmpl), |
2940 | - level=ERROR) |
2941 | - raise e |
2942 | - |
2943 | - log('Rendering from template: %s' % _tmpl, level=INFO) |
2944 | - return template.render(ctxt) |
2945 | - |
2946 | - def write(self, config_file): |
2947 | - """ |
2948 | - Write a single config file, raises if config file is not registered. |
2949 | - """ |
2950 | - if config_file not in self.templates: |
2951 | - log('Config not registered: %s' % config_file, level=ERROR) |
2952 | - raise OSConfigException |
2953 | - |
2954 | - _out = self.render(config_file) |
2955 | - |
2956 | - with open(config_file, 'wb') as out: |
2957 | - out.write(_out) |
2958 | - |
2959 | - log('Wrote template %s.' % config_file, level=INFO) |
2960 | - |
2961 | - def write_all(self): |
2962 | - """ |
2963 | - Write out all registered config files. |
2964 | - """ |
2965 | - [self.write(k) for k in self.templates.iterkeys()] |
2966 | - |
2967 | - def set_release(self, openstack_release): |
2968 | - """ |
2969 | - Resets the template environment and generates a new template loader |
2970 | - based on the new OpenStack release. |
2971 | - """ |
2972 | - self._tmpl_env = None |
2973 | - self.openstack_release = openstack_release |
2974 | - self._get_tmpl_env() |
2975 | - |
2976 | - def complete_contexts(self): |
2977 | - ''' |
2978 | - Returns a list of context interfaces that yield a complete context. |
2979 | - ''' |
2980 | - interfaces = [] |
2981 | - [interfaces.extend(i.complete_contexts()) |
2982 | - for i in self.templates.itervalues()] |
2983 | - return interfaces |
2984 | |
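The key idea in the removed `get_loader` above is the search order: release-specific template directories are stacked newest-first on top of the base `templates_dir`. That ordering can be shown standalone without jinja2 (the directory-existence filter from the original is dropped for brevity, and only a subset of the codename map is used):

```python
import os
from collections import OrderedDict

# Subset of the OPENSTACK_CODENAMES map from the removed utils.py,
# ordered oldest -> newest as in the original OrderedDict.
OPENSTACK_CODENAMES = OrderedDict([
    ('2012.2', 'folsom'),
    ('2013.1', 'grizzly'),
    ('2013.2', 'havana'),
])

def template_search_dirs(templates_dir, os_release):
    """Return candidate template dirs, most release-specific first,
    mirroring the loader order built by the removed get_loader()."""
    dirs = [templates_dir]  # base dir sits at the bottom of the search list
    for rel in OPENSTACK_CODENAMES.values():
        # Each newer release is pushed to the front, up to os_release.
        dirs.insert(0, os.path.join(templates_dir, rel))
        if rel == os_release:
            break
    return dirs
```

So a renderer registered for grizzly checks `templates_dir/grizzly`, then `templates_dir/folsom`, then the base directory — which is how a charm falls back to the most recently updated template for its release.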
2985 | === removed file 'hooks/charmhelpers/contrib/openstack/utils.py' |
2986 | --- hooks/charmhelpers/contrib/openstack/utils.py 2014-05-09 20:11:59 +0000 |
2987 | +++ hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000 |
2988 | @@ -1,450 +0,0 @@ |
2989 | -#!/usr/bin/python |
2990 | - |
2991 | -# Common python helper functions used for OpenStack charms. |
2992 | -from collections import OrderedDict |
2993 | - |
2994 | -import apt_pkg as apt |
2995 | -import subprocess |
2996 | -import os |
2997 | -import socket |
2998 | -import sys |
2999 | - |
3000 | -from charmhelpers.core.hookenv import ( |
3001 | - config, |
3002 | - log as juju_log, |
3003 | - charm_dir, |
3004 | - ERROR, |
3005 | - INFO |
3006 | -) |
3007 | - |
3008 | -from charmhelpers.contrib.storage.linux.lvm import ( |
3009 | - deactivate_lvm_volume_group, |
3010 | - is_lvm_physical_volume, |
3011 | - remove_lvm_physical_volume, |
3012 | -) |
3013 | - |
3014 | -from charmhelpers.core.host import lsb_release, mounts, umount |
3015 | -from charmhelpers.fetch import apt_install |
3016 | -from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk |
3017 | -from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device |
3018 | - |
3019 | -CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" |
3020 | -CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' |
3021 | - |
3022 | -DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed ' |
3023 | - 'restricted main multiverse universe') |
3024 | - |
3025 | - |
3026 | -UBUNTU_OPENSTACK_RELEASE = OrderedDict([ |
3027 | - ('oneiric', 'diablo'), |
3028 | - ('precise', 'essex'), |
3029 | - ('quantal', 'folsom'), |
3030 | - ('raring', 'grizzly'), |
3031 | - ('saucy', 'havana'), |
3032 | - ('trusty', 'icehouse') |
3033 | -]) |
3034 | - |
3035 | - |
3036 | -OPENSTACK_CODENAMES = OrderedDict([ |
3037 | - ('2011.2', 'diablo'), |
3038 | - ('2012.1', 'essex'), |
3039 | - ('2012.2', 'folsom'), |
3040 | - ('2013.1', 'grizzly'), |
3041 | - ('2013.2', 'havana'), |
3042 | - ('2014.1', 'icehouse'), |
3043 | -]) |
3044 | - |
3045 | -# The ugly duckling |
3046 | -SWIFT_CODENAMES = OrderedDict([ |
3047 | - ('1.4.3', 'diablo'), |
3048 | - ('1.4.8', 'essex'), |
3049 | - ('1.7.4', 'folsom'), |
3050 | - ('1.8.0', 'grizzly'), |
3051 | - ('1.7.7', 'grizzly'), |
3052 | - ('1.7.6', 'grizzly'), |
3053 | - ('1.10.0', 'havana'), |
3054 | - ('1.9.1', 'havana'), |
3055 | - ('1.9.0', 'havana'), |
3056 | - ('1.13.1', 'icehouse'), |
3057 | - ('1.13.0', 'icehouse'), |
3058 | - ('1.12.0', 'icehouse'), |
3059 | - ('1.11.0', 'icehouse'), |
3060 | -]) |
3061 | - |
3062 | -DEFAULT_LOOPBACK_SIZE = '5G' |
3063 | - |
3064 | - |
3065 | -def error_out(msg): |
3066 | - juju_log("FATAL ERROR: %s" % msg, level='ERROR') |
3067 | - sys.exit(1) |
3068 | - |
3069 | - |
3070 | -def get_os_codename_install_source(src): |
3071 | - '''Derive OpenStack release codename from a given installation source.''' |
3072 | - ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
3073 | - rel = '' |
3074 | - if src in ['distro', 'distro-proposed']: |
3075 | - try: |
3076 | - rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel] |
3077 | - except KeyError: |
3078 | - e = 'Could not derive openstack release for '\ |
3079 | - 'this Ubuntu release: %s' % ubuntu_rel |
3080 | - error_out(e) |
3081 | - return rel |
3082 | - |
3083 | - if src.startswith('cloud:'): |
3084 | - ca_rel = src.split(':')[1] |
3085 | - ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0] |
3086 | - return ca_rel |
3087 | - |
3088 | - # Best guess match based on deb string provided |
3089 | - if src.startswith('deb') or src.startswith('ppa'): |
3090 | - for k, v in OPENSTACK_CODENAMES.iteritems(): |
3091 | - if v in src: |
3092 | - return v |
3093 | - |
3094 | - |
3095 | -def get_os_version_install_source(src): |
3096 | - codename = get_os_codename_install_source(src) |
3097 | - return get_os_version_codename(codename) |
3098 | - |
3099 | - |
3100 | -def get_os_codename_version(vers): |
3101 | - '''Determine OpenStack codename from version number.''' |
3102 | - try: |
3103 | - return OPENSTACK_CODENAMES[vers] |
3104 | - except KeyError: |
3105 | - e = 'Could not determine OpenStack codename for version %s' % vers |
3106 | - error_out(e) |
3107 | - |
3108 | - |
3109 | -def get_os_version_codename(codename): |
3110 | - '''Determine OpenStack version number from codename.''' |
3111 | - for k, v in OPENSTACK_CODENAMES.iteritems(): |
3112 | - if v == codename: |
3113 | - return k |
3114 | - e = 'Could not derive OpenStack version for '\ |
3115 | - 'codename: %s' % codename |
3116 | - error_out(e) |
3117 | - |
3118 | - |
3119 | -def get_os_codename_package(package, fatal=True): |
3120 | - '''Derive OpenStack release codename from an installed package.''' |
3121 | - apt.init() |
3122 | - cache = apt.Cache() |
3123 | - |
3124 | - try: |
3125 | - pkg = cache[package] |
3126 | - except: |
3127 | - if not fatal: |
3128 | - return None |
3129 | - # the package is unknown to the current apt cache. |
3130 | - e = 'Could not determine version of package with no installation '\ |
3131 | - 'candidate: %s' % package |
3132 | - error_out(e) |
3133 | - |
3134 | - if not pkg.current_ver: |
3135 | - if not fatal: |
3136 | - return None |
3137 | - # package is known, but no version is currently installed. |
3138 | - e = 'Could not determine version of uninstalled package: %s' % package |
3139 | - error_out(e) |
3140 | - |
3141 | - vers = apt.upstream_version(pkg.current_ver.ver_str) |
3142 | - |
3143 | - try: |
3144 | - if 'swift' in pkg.name: |
3145 | - swift_vers = vers[:5] |
3146 | - if swift_vers not in SWIFT_CODENAMES: |
3147 | - # Deal with 1.10.0 upward |
3148 | - swift_vers = vers[:6] |
3149 | - return SWIFT_CODENAMES[swift_vers] |
3150 | - else: |
3151 | - vers = vers[:6] |
3152 | - return OPENSTACK_CODENAMES[vers] |
3153 | - except KeyError: |
3154 | - e = 'Could not determine OpenStack codename for version %s' % vers |
3155 | - error_out(e) |
3156 | - |
3157 | - |
3158 | -def get_os_version_package(pkg, fatal=True): |
3159 | - '''Derive OpenStack version number from an installed package.''' |
3160 | - codename = get_os_codename_package(pkg, fatal=fatal) |
3161 | - |
3162 | - if not codename: |
3163 | - return None |
3164 | - |
3165 | - if 'swift' in pkg: |
3166 | - vers_map = SWIFT_CODENAMES |
3167 | - else: |
3168 | - vers_map = OPENSTACK_CODENAMES |
3169 | - |
3170 | - for version, cname in vers_map.iteritems(): |
3171 | - if cname == codename: |
3172 | - return version |
3173 | - #e = "Could not determine OpenStack version for package: %s" % pkg |
3174 | - #error_out(e) |
3175 | - |
3176 | - |
3177 | -os_rel = None |
3178 | - |
3179 | - |
3180 | -def os_release(package, base='essex'): |
3181 | - ''' |
3182 | - Returns OpenStack release codename from a cached global. |
3183 | - If the codename can not be determined from either an installed package or |
3184 | - the installation source, the earliest release supported by the charm should |
3185 | - be returned. |
3186 | - ''' |
3187 | - global os_rel |
3188 | - if os_rel: |
3189 | - return os_rel |
3190 | - os_rel = (get_os_codename_package(package, fatal=False) or |
3191 | - get_os_codename_install_source(config('openstack-origin')) or |
3192 | - base) |
3193 | - return os_rel |
3194 | - |
3195 | - |
3196 | -def import_key(keyid): |
3197 | - cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \ |
3198 | - "--recv-keys %s" % keyid |
3199 | - try: |
3200 | - subprocess.check_call(cmd.split(' ')) |
3201 | - except subprocess.CalledProcessError: |
3202 | - error_out("Error importing repo key %s" % keyid) |
3203 | - |
3204 | - |
3205 | -def configure_installation_source(rel): |
3206 | - '''Configure apt installation source.''' |
3207 | - if rel == 'distro': |
3208 | - return |
3209 | - elif rel == 'distro-proposed': |
3210 | - ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
3211 | - with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: |
3212 | - f.write(DISTRO_PROPOSED % ubuntu_rel) |
3213 | - elif rel[:4] == "ppa:": |
3214 | - src = rel |
3215 | - subprocess.check_call(["add-apt-repository", "-y", src]) |
3216 | - elif rel[:3] == "deb": |
3217 | - l = len(rel.split('|')) |
3218 | - if l == 2: |
3219 | - src, key = rel.split('|') |
3220 | - juju_log("Importing PPA key from keyserver for %s" % src) |
3221 | - import_key(key) |
3222 | - elif l == 1: |
3223 | - src = rel |
3224 | - with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: |
3225 | - f.write(src) |
3226 | - elif rel[:6] == 'cloud:': |
3227 | - ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
3228 | - rel = rel.split(':')[1] |
3229 | - u_rel = rel.split('-')[0] |
3230 | - ca_rel = rel.split('-')[1] |
3231 | - |
3232 | - if u_rel != ubuntu_rel: |
3233 | - e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\ |
3234 | - 'version (%s)' % (ca_rel, ubuntu_rel) |
3235 | - error_out(e) |
3236 | - |
3237 | - if 'staging' in ca_rel: |
3238 | - # staging is just a regular PPA. |
3239 | - os_rel = ca_rel.split('/')[0] |
3240 | - ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel |
3241 | - cmd = 'add-apt-repository -y %s' % ppa |
3242 | - subprocess.check_call(cmd.split(' ')) |
3243 | - return |
3244 | - |
3245 | - # map charm config options to actual archive pockets. |
3246 | - pockets = { |
3247 | - 'folsom': 'precise-updates/folsom', |
3248 | - 'folsom/updates': 'precise-updates/folsom', |
3249 | - 'folsom/proposed': 'precise-proposed/folsom', |
3250 | - 'grizzly': 'precise-updates/grizzly', |
3251 | - 'grizzly/updates': 'precise-updates/grizzly', |
3252 | - 'grizzly/proposed': 'precise-proposed/grizzly', |
3253 | - 'havana': 'precise-updates/havana', |
3254 | - 'havana/updates': 'precise-updates/havana', |
3255 | - 'havana/proposed': 'precise-proposed/havana', |
3256 | - 'icehouse': 'precise-updates/icehouse', |
3257 | - 'icehouse/updates': 'precise-updates/icehouse', |
3258 | - 'icehouse/proposed': 'precise-proposed/icehouse', |
3259 | - } |
3260 | - |
3261 | - try: |
3262 | - pocket = pockets[ca_rel] |
3263 | - except KeyError: |
3264 | - e = 'Invalid Cloud Archive release specified: %s' % rel |
3265 | - error_out(e) |
3266 | - |
3267 | - src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket) |
3268 | - apt_install('ubuntu-cloud-keyring', fatal=True) |
3269 | - |
3270 | - with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f: |
3271 | - f.write(src) |
3272 | - else: |
3273 | - error_out("Invalid openstack-release specified: %s" % rel) |
3274 | - |
3275 | - |
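The `cloud:` branch above splits a spec like `cloud:precise-icehouse/updates` into an Ubuntu series and a Cloud Archive release, then maps the release to an apt pocket. As a standalone sketch of that parsing (the function name is illustrative and the pocket table is abbreviated from the removed code):

```python
# Illustrative sketch of the cloud-archive spec parsing in the removed
# configure_installation_source() branch above. Only the parsing pattern
# mirrors the original; the real helper also writes the apt source file.
POCKETS = {
    'icehouse': 'precise-updates/icehouse',
    'icehouse/updates': 'precise-updates/icehouse',
    'icehouse/proposed': 'precise-proposed/icehouse',
}


def parse_cloud_spec(rel, ubuntu_rel):
    """Split 'cloud:<series>-<ca_release>' and map it to an apt pocket."""
    rel = rel.split(':')[1]            # e.g. 'precise-icehouse/updates'
    u_rel, ca_rel = rel.split('-', 1)  # series vs. Cloud Archive release
    if u_rel != ubuntu_rel:
        raise ValueError('Cannot install %s on %s' % (ca_rel, ubuntu_rel))
    try:
        return POCKETS[ca_rel]
    except KeyError:
        raise ValueError('Invalid Cloud Archive release: %s' % rel)
```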
3276 | -def save_script_rc(script_path="scripts/scriptrc", **env_vars): |
3277 | - """ |
3278 | - Write an rc file in the charm-delivered directory containing |
3279 | - exported environment variables provided by env_vars. Any charm scripts run |
3280 | - outside the juju hook environment can source this scriptrc to obtain |
3281 | - updated config information necessary to perform health checks or |
3282 | - service changes. |
3283 | - """ |
3284 | - juju_rc_path = "%s/%s" % (charm_dir(), script_path) |
3285 | - if not os.path.exists(os.path.dirname(juju_rc_path)): |
3286 | - os.mkdir(os.path.dirname(juju_rc_path)) |
3287 | - with open(juju_rc_path, 'wb') as rc_script: |
3288 | - rc_script.write( |
3289 | - "#!/bin/bash\n") |
3290 | - [rc_script.write('export %s=%s\n' % (u, p)) |
3291 | - for u, p in env_vars.iteritems() if u != "script_path"] |
3292 | - |
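The removed `save_script_rc()` above writes a sourceable bash rc file of exported variables. A standalone sketch (using a plain loop rather than the original's side-effecting list comprehension, and a caller-supplied path instead of `charm_dir()`):

```python
import os
import tempfile


def save_script_rc(path, **env_vars):
    """Write a bash rc file exporting env_vars, mirroring the removed
    save_script_rc() helper above (illustrative standalone version)."""
    if not os.path.exists(os.path.dirname(path)):
        os.mkdir(os.path.dirname(path))
    with open(path, 'w') as rc:
        rc.write('#!/bin/bash\n')
        for name, value in env_vars.items():
            # The original skips the reserved 'script_path' keyword.
            if name != 'script_path':
                rc.write('export %s=%s\n' % (name, value))
```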
3293 | - |
3294 | -def openstack_upgrade_available(package): |
3295 | - """ |
3296 | - Determines if an OpenStack upgrade is available from installation |
3297 | - source, based on version of installed package. |
3298 | - |
3299 | - :param package: str: Name of installed package. |
3300 | - |
3301 | - :returns: bool: : Returns True if configured installation source offers |
3302 | - a newer version of package. |
3303 | - |
3304 | - """ |
3305 | - |
3306 | - src = config('openstack-origin') |
3307 | - cur_vers = get_os_version_package(package) |
3308 | - available_vers = get_os_version_install_source(src) |
3309 | - apt.init() |
3310 | - return apt.version_compare(available_vers, cur_vers) == 1 |
3311 | - |
3312 | - |
3313 | -def ensure_block_device(block_device): |
3314 | - ''' |
3315 | - Confirm block_device, create as loopback if necessary. |
3316 | - |
3317 | - :param block_device: str: Full path of block device to ensure. |
3318 | - |
3319 | - :returns: str: Full path of ensured block device. |
3320 | - ''' |
3321 | - _none = ['None', 'none', None] |
3322 | - if (block_device in _none): |
3323 | - error_out('prepare_storage(): Missing required input: ' |
3324 | - 'block_device=%s.' % block_device, level=ERROR) |
3325 | - |
3326 | - if block_device.startswith('/dev/'): |
3327 | - bdev = block_device |
3328 | - elif block_device.startswith('/'): |
3329 | - _bd = block_device.split('|') |
3330 | - if len(_bd) == 2: |
3331 | - bdev, size = _bd |
3332 | - else: |
3333 | - bdev = block_device |
3334 | - size = DEFAULT_LOOPBACK_SIZE |
3335 | - bdev = ensure_loopback_device(bdev, size) |
3336 | - else: |
3337 | - bdev = '/dev/%s' % block_device |
3338 | - |
3339 | - if not is_block_device(bdev): |
3340 | - error_out('Failed to locate valid block device at %s' % bdev, |
3341 | - level=ERROR) |
3342 | - |
3343 | - return bdev |
3344 | - |
3345 | - |
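`ensure_block_device()` above accepts three spec shapes: a `/dev/` path, a file path with an optional `|size` suffix for loopback creation, and a bare device name. The pure parsing step can be sketched as follows (the loopback creation and `is_block_device` validation of the original are omitted):

```python
DEFAULT_LOOPBACK_SIZE = '5G'  # placeholder; the real default lives elsewhere


def parse_block_device(spec):
    """Interpret a block-device spec the way the removed
    ensure_block_device() above does; returns (path, loopback_size),
    where loopback_size is None for real devices."""
    if spec.startswith('/dev/'):
        return spec, None
    if spec.startswith('/'):
        parts = spec.split('|')
        if len(parts) == 2:
            return parts[0], parts[1]
        return spec, DEFAULT_LOOPBACK_SIZE
    return '/dev/%s' % spec, None
```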
3346 | -def clean_storage(block_device): |
3347 | - ''' |
3348 | - Ensures a block device is clean. That is: |
3349 | - - unmounted |
3350 | - - any lvm volume groups are deactivated |
3351 | - - any lvm physical device signatures removed |
3352 | - - partition table wiped |
3353 | - |
3354 | - :param block_device: str: Full path to block device to clean. |
3355 | - ''' |
3356 | - for mp, d in mounts(): |
3357 | - if d == block_device: |
3358 | - juju_log('clean_storage(): %s is mounted @ %s, unmounting.' % |
3359 | - (d, mp), level=INFO) |
3360 | - umount(mp, persist=True) |
3361 | - |
3362 | - if is_lvm_physical_volume(block_device): |
3363 | - deactivate_lvm_volume_group(block_device) |
3364 | - remove_lvm_physical_volume(block_device) |
3365 | - else: |
3366 | - zap_disk(block_device) |
3367 | - |
3368 | - |
3369 | -def is_ip(address): |
3370 | - """ |
3371 | - Returns True if address is a valid IP address. |
3372 | - """ |
3373 | - try: |
3374 | - # Test to see if already an IPv4 address |
3375 | - socket.inet_aton(address) |
3376 | - return True |
3377 | - except socket.error: |
3378 | - return False |
3379 | - |
3380 | - |
3381 | -def ns_query(address): |
3382 | - try: |
3383 | - import dns.resolver |
3384 | - except ImportError: |
3385 | - apt_install('python-dnspython') |
3386 | - import dns.resolver |
3387 | - |
3388 | - if isinstance(address, dns.name.Name): |
3389 | - rtype = 'PTR' |
3390 | - elif isinstance(address, basestring): |
3391 | - rtype = 'A' |
3392 | - else: |
3393 | - return None |
3394 | - |
3395 | - answers = dns.resolver.query(address, rtype) |
3396 | - if answers: |
3397 | - return str(answers[0]) |
3398 | - return None |
3399 | - |
3400 | - |
3401 | -def get_host_ip(hostname): |
3402 | - """ |
3403 | - Resolves the IP for a given hostname, or returns |
3404 | - the input if it is already an IP. |
3405 | - """ |
3406 | - if is_ip(hostname): |
3407 | - return hostname |
3408 | - |
3409 | - return ns_query(hostname) |
3410 | - |
3411 | - |
3412 | -def get_hostname(address, fqdn=True): |
3413 | - """ |
3414 | - Resolves hostname for given IP, or returns the input |
3415 | - if it is already a hostname. |
3416 | - """ |
3417 | - if is_ip(address): |
3418 | - try: |
3419 | - import dns.reversename |
3420 | - except ImportError: |
3421 | - apt_install('python-dnspython') |
3422 | - import dns.reversename |
3423 | - |
3424 | - rev = dns.reversename.from_address(address) |
3425 | - result = ns_query(rev) |
3426 | - if not result: |
3427 | - return None |
3428 | - else: |
3429 | - result = address |
3430 | - |
3431 | - if fqdn: |
3432 | - # strip trailing . |
3433 | - if result.endswith('.'): |
3434 | - return result[:-1] |
3435 | - else: |
3436 | - return result |
3437 | - else: |
3438 | - return result.split('.')[0] |
3439 | |
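The `is_ip()` check above gates the DNS helpers: addresses pass through unchanged, hostnames go to `ns_query()`. It is a direct `inet_aton` probe, reproduced here standalone (note `inet_aton` also accepts shorthand forms such as `127.1`, so this is an IPv4-only, permissive check):

```python
import socket


def is_ip(address):
    """True if address parses as an IPv4 address, matching the removed
    is_ip() helper above."""
    try:
        socket.inet_aton(address)
        return True
    except socket.error:
        return False
```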
3440 | === removed directory 'hooks/charmhelpers/contrib/peerstorage' |
3441 | === removed file 'hooks/charmhelpers/contrib/peerstorage/__init__.py' |
3442 | --- hooks/charmhelpers/contrib/peerstorage/__init__.py 2014-05-09 20:11:59 +0000 |
3443 | +++ hooks/charmhelpers/contrib/peerstorage/__init__.py 1970-01-01 00:00:00 +0000 |
3444 | @@ -1,83 +0,0 @@ |
3445 | -from charmhelpers.core.hookenv import ( |
3446 | - relation_ids, |
3447 | - relation_get, |
3448 | - local_unit, |
3449 | - relation_set, |
3450 | -) |
3451 | - |
3452 | -""" |
3453 | -This helper provides functions to support use of a peer relation |
3454 | -for basic key/value storage, with the added benefit that all storage |
3455 | -can be replicated across peer units, so this is really useful for |
3456 | -services that issue usernames/passwords to remote services. |
3457 | - |
3458 | -def shared_db_changed() |
3459 | - # Only the lead unit should create passwords |
3460 | - if not is_leader(): |
3461 | - return |
3462 | - username = relation_get('username') |
3463 | - key = '{}.password'.format(username) |
3464 | - # Attempt to retrieve any existing password for this user |
3465 | - password = peer_retrieve(key) |
3466 | - if password is None: |
3467 | - # New user, create password and store |
3468 | - password = pwgen(length=64) |
3469 | - peer_store(key, password) |
3470 | - create_access(username, password) |
3471 | - relation_set(password=password) |
3472 | - |
3473 | - |
3474 | -def cluster_changed() |
3475 | - # Echo any relation data other than *-address |
3476 | - # back onto the peer relation so all units have |
3477 | - # all *.password keys stored on their local relation |
3478 | - # for later retrieval. |
3479 | - peer_echo() |
3480 | - |
3481 | -""" |
3482 | - |
3483 | - |
3484 | -def peer_retrieve(key, relation_name='cluster'): |
3485 | - """ Retrieve a named key from peer relation relation_name """ |
3486 | - cluster_rels = relation_ids(relation_name) |
3487 | - if len(cluster_rels) > 0: |
3488 | - cluster_rid = cluster_rels[0] |
3489 | - return relation_get(attribute=key, rid=cluster_rid, |
3490 | - unit=local_unit()) |
3491 | - else: |
3492 | - raise ValueError('Unable to detect ' |
3493 | - 'peer relation {}'.format(relation_name)) |
3494 | - |
3495 | - |
3496 | -def peer_store(key, value, relation_name='cluster'): |
3497 | - """ Store the key/value pair on the named peer relation relation_name """ |
3498 | - cluster_rels = relation_ids(relation_name) |
3499 | - if len(cluster_rels) > 0: |
3500 | - cluster_rid = cluster_rels[0] |
3501 | - relation_set(relation_id=cluster_rid, |
3502 | - relation_settings={key: value}) |
3503 | - else: |
3504 | - raise ValueError('Unable to detect ' |
3505 | - 'peer relation {}'.format(relation_name)) |
3506 | - |
3507 | - |
3508 | -def peer_echo(includes=None): |
3509 | - """Echo filtered attributes back onto the same relation for storage |
3510 | - |
3511 | - Note that this helper must only be called within a peer relation |
3512 | - changed hook |
3513 | - """ |
3514 | - rdata = relation_get() |
3515 | - echo_data = {} |
3516 | - if includes is None: |
3517 | - echo_data = rdata.copy() |
3518 | - for ex in ['private-address', 'public-address']: |
3519 | - if ex in echo_data: |
3520 | - echo_data.pop(ex) |
3521 | - else: |
3522 | - for attribute, value in rdata.iteritems(): |
3523 | - for include in includes: |
3524 | - if include in attribute: |
3525 | - echo_data[attribute] = value |
3526 | - if len(echo_data) > 0: |
3527 | - relation_set(relation_settings=echo_data) |
3528 | |
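The filtering in the removed `peer_echo()` above — echo everything except the address keys, or only attributes matching an `includes` substring — is pure dict logic and can be isolated (the `relation_get`/`relation_set` calls of the original are omitted; the function name is illustrative):

```python
def filter_echo_data(rdata, includes=None):
    """Replicate the attribute filtering in the removed peer_echo():
    drop address keys when echoing everything, otherwise keep only
    attributes containing one of the 'includes' substrings."""
    if includes is None:
        echo_data = dict(rdata)
        for ex in ('private-address', 'public-address'):
            echo_data.pop(ex, None)
        return echo_data
    return {attr: val for attr, val in rdata.items()
            if any(inc in attr for inc in includes)}
```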
3529 | === removed directory 'hooks/charmhelpers/contrib/python' |
3530 | === removed file 'hooks/charmhelpers/contrib/python/__init__.py' |
3531 | === removed file 'hooks/charmhelpers/contrib/python/packages.py' |
3532 | --- hooks/charmhelpers/contrib/python/packages.py 2014-05-09 20:11:59 +0000 |
3533 | +++ hooks/charmhelpers/contrib/python/packages.py 1970-01-01 00:00:00 +0000 |
3534 | @@ -1,76 +0,0 @@ |
3535 | -#!/usr/bin/env python |
3536 | -# coding: utf-8 |
3537 | - |
3538 | -__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" |
3539 | - |
3540 | -from charmhelpers.fetch import apt_install |
3541 | -from charmhelpers.core.hookenv import log |
3542 | - |
3543 | -try: |
3544 | - from pip import main as pip_execute |
3545 | -except ImportError: |
3546 | - apt_install('python-pip') |
3547 | - from pip import main as pip_execute |
3548 | - |
3549 | - |
3550 | -def parse_options(given, available): |
3551 | - """Given a set of options, check if available""" |
3552 | - for key, value in given.items(): |
3553 | - if key in available: |
3554 | - yield "--{0}={1}".format(key, value) |
3555 | - |
3556 | - |
3557 | -def pip_install_requirements(requirements, **options): |
3558 | - """Install a requirements file """ |
3559 | - command = ["install"] |
3560 | - |
3561 | - available_options = ('proxy', 'src', 'log', ) |
3562 | - for option in parse_options(options, available_options): |
3563 | - command.append(option) |
3564 | - |
3565 | - command.append("-r {0}".format(requirements)) |
3566 | - log("Installing from file: {} with options: {}".format(requirements, |
3567 | - command)) |
3568 | - pip_execute(command) |
3569 | - |
3570 | - |
3571 | -def pip_install(package, fatal=False, **options): |
3572 | - """Install a python package""" |
3573 | - command = ["install"] |
3574 | - |
3575 | - available_options = ('proxy', 'src', 'log', "index-url", ) |
3576 | - for option in parse_options(options, available_options): |
3577 | - command.append(option) |
3578 | - |
3579 | - if isinstance(package, list): |
3580 | - command.extend(package) |
3581 | - else: |
3582 | - command.append(package) |
3583 | - |
3584 | - log("Installing {} package with options: {}".format(package, |
3585 | - command)) |
3586 | - pip_execute(command) |
3587 | - |
3588 | - |
3589 | -def pip_uninstall(package, **options): |
3590 | - """Uninstall a python package""" |
3591 | - command = ["uninstall", "-q", "-y"] |
3592 | - |
3593 | - available_options = ('proxy', 'log', ) |
3594 | - for option in parse_options(options, available_options): |
3595 | - command.append(option) |
3596 | - |
3597 | - if isinstance(package, list): |
3598 | - command.extend(package) |
3599 | - else: |
3600 | - command.append(package) |
3601 | - |
3602 | - log("Uninstalling {} package with options: {}".format(package, |
3603 | - command)) |
3604 | - pip_execute(command) |
3605 | - |
3606 | - |
3607 | -def pip_list(): |
3608 | - """Return the list of currently installed Python packages. |
3609 | - """ |
3610 | - return pip_execute(["list"]) |
3611 | |
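The `parse_options()` generator above is the common core of the removed pip helpers: it turns recognised keyword arguments into `--key=value` flags. It is self-contained as written:

```python
def parse_options(given, available):
    """Yield pip-style --key=value flags for recognised options, as in
    the removed parse_options() helper above."""
    for key, value in given.items():
        if key in available:
            yield "--{0}={1}".format(key, value)
```

One caveat worth noting for anyone reproducing these helpers: `pip_install_requirements()` appends `"-r {0}"` as a single argv element, whereas pip expects `-r` and the filename as separate arguments, so a standalone caller may want to split them.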
3612 | === removed file 'hooks/charmhelpers/contrib/python/version.py' |
3613 | --- hooks/charmhelpers/contrib/python/version.py 2014-05-09 20:11:59 +0000 |
3614 | +++ hooks/charmhelpers/contrib/python/version.py 1970-01-01 00:00:00 +0000 |
3615 | @@ -1,18 +0,0 @@ |
3616 | -#!/usr/bin/env python |
3617 | -# coding: utf-8 |
3618 | - |
3619 | -__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" |
3620 | - |
3621 | -import sys |
3622 | - |
3623 | - |
3624 | -def current_version(): |
3625 | - """Current system python version""" |
3626 | - return sys.version_info |
3627 | - |
3628 | - |
3629 | -def current_version_string(): |
3630 | - """Current system python version as string major.minor.micro""" |
3631 | - return "{0}.{1}.{2}".format(sys.version_info.major, |
3632 | - sys.version_info.minor, |
3633 | - sys.version_info.micro) |
3634 | |
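The version helper above formats `sys.version_info` as a dotted string; it runs unchanged outside the charm:

```python
import sys


def current_version_string():
    """Return the running interpreter's version as 'major.minor.micro',
    as the removed current_version_string() above does."""
    return "{0}.{1}.{2}".format(sys.version_info.major,
                                sys.version_info.minor,
                                sys.version_info.micro)
```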
3635 | === removed directory 'hooks/charmhelpers/contrib/saltstack' |
3636 | === removed file 'hooks/charmhelpers/contrib/saltstack/__init__.py' |
3637 | --- hooks/charmhelpers/contrib/saltstack/__init__.py 2013-11-26 17:12:54 +0000 |
3638 | +++ hooks/charmhelpers/contrib/saltstack/__init__.py 1970-01-01 00:00:00 +0000 |
3639 | @@ -1,102 +0,0 @@ |
3640 | -"""Charm Helpers saltstack - declare the state of your machines. |
3641 | - |
3642 | -This helper enables you to declare your machine state, rather than |
3643 | -program it procedurally (and have to test each change to your procedures). |
3644 | -Your install hook can be as simple as: |
3645 | - |
3646 | -{{{ |
3647 | -from charmhelpers.contrib.saltstack import ( |
3648 | - install_salt_support, |
3649 | - update_machine_state, |
3650 | -) |
3651 | - |
3652 | - |
3653 | -def install(): |
3654 | - install_salt_support() |
3655 | - update_machine_state('machine_states/dependencies.yaml') |
3656 | - update_machine_state('machine_states/installed.yaml') |
3657 | -}}} |
3658 | - |
3659 | -and won't need to change (nor will its tests) when you change the machine |
3660 | -state. |
3661 | - |
3662 | -It's using a python package called salt-minion which allows various formats for |
3663 | -specifying resources, such as: |
3664 | - |
3665 | -{{{ |
3666 | -/srv/{{ basedir }}: |
3667 | - file.directory: |
3668 | - - group: ubunet |
3669 | - - user: ubunet |
3670 | - - require: |
3671 | - - user: ubunet |
3672 | - - recurse: |
3673 | - - user |
3674 | - - group |
3675 | - |
3676 | -ubunet: |
3677 | - group.present: |
3678 | - - gid: 1500 |
3679 | - user.present: |
3680 | - - uid: 1500 |
3681 | - - gid: 1500 |
3682 | - - createhome: False |
3683 | - - require: |
3684 | - - group: ubunet |
3685 | -}}} |
3686 | - |
3687 | -The docs for all the different state definitions are at: |
3688 | - http://docs.saltstack.com/ref/states/all/ |
3689 | - |
3690 | - |
3691 | -TODO: |
3692 | - * Add test helpers which will ensure that machine state definitions |
3693 | - are functionally (but not necessarily logically) correct (i.e. getting |
3694 | - salt to parse all state defs). |
3695 | - * Add a link to a public bootstrap charm example / blogpost. |
3696 | - * Find a way to obviate the need to use the grains['charm_dir'] syntax |
3697 | - in templates. |
3698 | -""" |
3699 | -# Copyright 2013 Canonical Ltd. |
3700 | -# |
3701 | -# Authors: |
3702 | -# Charm Helpers Developers <juju@lists.ubuntu.com> |
3703 | -import subprocess |
3704 | - |
3705 | -import charmhelpers.contrib.templating.contexts |
3706 | -import charmhelpers.core.host |
3707 | -import charmhelpers.core.hookenv |
3708 | - |
3709 | - |
3710 | -salt_grains_path = '/etc/salt/grains' |
3711 | - |
3712 | - |
3713 | -def install_salt_support(from_ppa=True): |
3714 | - """Installs the salt-minion helper for machine state. |
3715 | - |
3716 | - By default the salt-minion package is installed from |
3717 | - the saltstack PPA. If from_ppa is False you must ensure |
3718 | - that the salt-minion package is available in the apt cache. |
3719 | - """ |
3720 | - if from_ppa: |
3721 | - subprocess.check_call([ |
3722 | - '/usr/bin/add-apt-repository', |
3723 | - '--yes', |
3724 | - 'ppa:saltstack/salt', |
3725 | - ]) |
3726 | - subprocess.check_call(['/usr/bin/apt-get', 'update']) |
3727 | - # We install salt-common as salt-minion would run the salt-minion |
3728 | - # daemon. |
3729 | - charmhelpers.fetch.apt_install('salt-common') |
3730 | - |
3731 | - |
3732 | -def update_machine_state(state_path): |
3733 | - """Update the machine state using the provided state declaration.""" |
3734 | - charmhelpers.contrib.templating.contexts.juju_state_to_yaml( |
3735 | - salt_grains_path) |
3736 | - subprocess.check_call([ |
3737 | - 'salt-call', |
3738 | - '--local', |
3739 | - 'state.template', |
3740 | - state_path, |
3741 | - ]) |
3742 | |
3743 | === removed directory 'hooks/charmhelpers/contrib/ssl' |
3744 | === removed file 'hooks/charmhelpers/contrib/ssl/__init__.py' |
3745 | --- hooks/charmhelpers/contrib/ssl/__init__.py 2013-11-26 17:12:54 +0000 |
3746 | +++ hooks/charmhelpers/contrib/ssl/__init__.py 1970-01-01 00:00:00 +0000 |
3747 | @@ -1,78 +0,0 @@ |
3748 | -import subprocess |
3749 | -from charmhelpers.core import hookenv |
3750 | - |
3751 | - |
3752 | -def generate_selfsigned(keyfile, certfile, keysize="1024", config=None, subject=None, cn=None): |
3753 | - """Generate selfsigned SSL keypair |
3754 | - |
3755 | - You must provide one of the 3 optional arguments: |
3756 | - config, subject or cn |
3757 | - If more than one is provided the leftmost will be used |
3758 | - |
3759 | - Arguments: |
3760 | - keyfile -- (required) full path to the keyfile to be created |
3761 | - certfile -- (required) full path to the certfile to be created |
3762 | - keysize -- (optional) SSL key length |
3763 | - config -- (optional) openssl configuration file |
3764 | - subject -- (optional) dictionary with SSL subject variables |
3765 | - cn -- (optional) certificate common name |
3766 | - |
3767 | - Required keys in subject dict: |
3768 | - cn -- Common name (eq. FQDN) |
3769 | - |
3770 | - Optional keys in subject dict |
3771 | - country -- Country Name (2 letter code) |
3772 | - state -- State or Province Name (full name) |
3773 | - locality -- Locality Name (eg, city) |
3774 | - organization -- Organization Name (eg, company) |
3775 | - organizational_unit -- Organizational Unit Name (eg, section) |
3776 | - email -- Email Address |
3777 | - """ |
3778 | - |
3779 | - cmd = [] |
3780 | - if config: |
3781 | - cmd = ["/usr/bin/openssl", "req", "-new", "-newkey", |
3782 | - "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509", |
3783 | - "-keyout", keyfile, |
3784 | - "-out", certfile, "-config", config] |
3785 | - elif subject: |
3786 | - ssl_subject = "" |
3787 | - if "country" in subject: |
3788 | - ssl_subject = ssl_subject + "/C={}".format(subject["country"]) |
3789 | - if "state" in subject: |
3790 | - ssl_subject = ssl_subject + "/ST={}".format(subject["state"]) |
3791 | - if "locality" in subject: |
3792 | - ssl_subject = ssl_subject + "/L={}".format(subject["locality"]) |
3793 | - if "organization" in subject: |
3794 | - ssl_subject = ssl_subject + "/O={}".format(subject["organization"]) |
3795 | - if "organizational_unit" in subject: |
3796 | - ssl_subject = ssl_subject + "/OU={}".format(subject["organizational_unit"]) |
3797 | - if "cn" in subject: |
3798 | - ssl_subject = ssl_subject + "/CN={}".format(subject["cn"]) |
3799 | - else: |
3800 | - hookenv.log("When using \"subject\" argument you must " |
3801 | - "provide \"cn\" field at very least") |
3802 | - return False |
3803 | - if "email" in subject: |
3804 | - ssl_subject = ssl_subject + "/emailAddress={}".format(subject["email"]) |
3805 | - |
3806 | - cmd = ["/usr/bin/openssl", "req", "-new", "-newkey", |
3807 | - "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509", |
3808 | - "-keyout", keyfile, |
3809 | - "-out", certfile, "-subj", ssl_subject] |
3810 | - elif cn: |
3811 | - cmd = ["/usr/bin/openssl", "req", "-new", "-newkey", |
3812 | - "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509", |
3813 | - "-keyout", keyfile, |
3814 | - "-out", certfile, "-subj", "/CN={}".format(cn)] |
3815 | - |
3816 | - if not cmd: |
3817 | - hookenv.log("No config, subject or cn provided, " |
3818 | - "unable to generate self signed SSL certificates") |
3819 | - return False |
3820 | - try: |
3821 | - subprocess.check_call(cmd) |
3822 | - return True |
3823 | - except Exception as e: |
3824 | - print "Execution of openssl command failed:\n{}".format(e) |
3825 | - return False |
3826 | |
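The `subject`-dict branch of `generate_selfsigned()` above assembles an openssl `-subj` string field by field. That assembly can be sketched on its own (the function name is illustrative; the openssl invocation itself is omitted):

```python
def build_ssl_subject(subject):
    """Assemble the openssl -subj string from a subject dict, following
    the field order used by the removed generate_selfsigned() above.
    Returns None when the required 'cn' key is missing."""
    if "cn" not in subject:
        return None
    fields = (("country", "C"), ("state", "ST"), ("locality", "L"),
              ("organization", "O"), ("organizational_unit", "OU"))
    subj = ""
    for key, code in fields:
        if key in subject:
            subj += "/{}={}".format(code, subject[key])
    subj += "/CN={}".format(subject["cn"])
    if "email" in subject:
        subj += "/emailAddress={}".format(subject["email"])
    return subj
```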
3827 | === removed file 'hooks/charmhelpers/contrib/ssl/service.py' |
3828 | --- hooks/charmhelpers/contrib/ssl/service.py 2014-05-09 20:11:59 +0000 |
3829 | +++ hooks/charmhelpers/contrib/ssl/service.py 1970-01-01 00:00:00 +0000 |
3830 | @@ -1,267 +0,0 @@ |
3831 | -import logging |
3832 | -import os |
3833 | -from os.path import join as path_join |
3834 | -from os.path import exists |
3835 | -import subprocess |
3836 | - |
3837 | - |
3838 | -log = logging.getLogger("service_ca") |
3839 | - |
3840 | -logging.basicConfig(level=logging.DEBUG) |
3841 | - |
3842 | -STD_CERT = "standard" |
3843 | - |
3844 | -# Mysql server is fairly picky about cert creation |
3845 | -# and types, spec its creation separately for now. |
3846 | -MYSQL_CERT = "mysql" |
3847 | - |
3848 | - |
3849 | -class ServiceCA(object): |
3850 | - |
3851 | - default_expiry = str(365 * 2) |
3852 | - default_ca_expiry = str(365 * 6) |
3853 | - |
3854 | - def __init__(self, name, ca_dir, cert_type=STD_CERT): |
3855 | - self.name = name |
3856 | - self.ca_dir = ca_dir |
3857 | - self.cert_type = cert_type |
3858 | - |
3859 | - ############### |
3860 | - # Hook Helper API |
3861 | - @staticmethod |
3862 | - def get_ca(type=STD_CERT): |
3863 | - service_name = os.environ['JUJU_UNIT_NAME'].split('/')[0] |
3864 | - ca_path = os.path.join(os.environ['CHARM_DIR'], 'ca') |
3865 | - ca = ServiceCA(service_name, ca_path, type) |
3866 | - ca.init() |
3867 | - return ca |
3868 | - |
3869 | - @classmethod |
3870 | - def get_service_cert(cls, type=STD_CERT): |
3871 | - service_name = os.environ['JUJU_UNIT_NAME'].split('/')[0] |
3872 | - ca = cls.get_ca() |
3873 | - crt, key = ca.get_or_create_cert(service_name) |
3874 | - return crt, key, ca.get_ca_bundle() |
3875 | - |
3876 | - ############### |
3877 | - |
3878 | - def init(self): |
3879 | - log.debug("initializing service ca") |
3880 | - if not exists(self.ca_dir): |
3881 | - self._init_ca_dir(self.ca_dir) |
3882 | - self._init_ca() |
3883 | - |
3884 | - @property |
3885 | - def ca_key(self): |
3886 | - return path_join(self.ca_dir, 'private', 'cacert.key') |
3887 | - |
3888 | - @property |
3889 | - def ca_cert(self): |
3890 | - return path_join(self.ca_dir, 'cacert.pem') |
3891 | - |
3892 | - @property |
3893 | - def ca_conf(self): |
3894 | - return path_join(self.ca_dir, 'ca.cnf') |
3895 | - |
3896 | - @property |
3897 | - def signing_conf(self): |
3898 | - return path_join(self.ca_dir, 'signing.cnf') |
3899 | - |
3900 | - def _init_ca_dir(self, ca_dir): |
3901 | - os.mkdir(ca_dir) |
3902 | - for i in ['certs', 'crl', 'newcerts', 'private']: |
3903 | - sd = path_join(ca_dir, i) |
3904 | - if not exists(sd): |
3905 | - os.mkdir(sd) |
3906 | - |
3907 | - if not exists(path_join(ca_dir, 'serial')): |
3908 | - with open(path_join(ca_dir, 'serial'), 'wb') as fh: |
3909 | - fh.write('02\n') |
3910 | - |
3911 | - if not exists(path_join(ca_dir, 'index.txt')): |
3912 | - with open(path_join(ca_dir, 'index.txt'), 'wb') as fh: |
3913 | - fh.write('') |
3914 | - |
3915 | - def _init_ca(self): |
3916 | - """Generate the root ca's cert and key. |
3917 | - """ |
3918 | - if not exists(path_join(self.ca_dir, 'ca.cnf')): |
3919 | - with open(path_join(self.ca_dir, 'ca.cnf'), 'wb') as fh: |
3920 | - fh.write( |
3921 | - CA_CONF_TEMPLATE % (self.get_conf_variables())) |
3922 | - |
3923 | - if not exists(path_join(self.ca_dir, 'signing.cnf')): |
3924 | - with open(path_join(self.ca_dir, 'signing.cnf'), 'wb') as fh: |
3925 | - fh.write( |
3926 | - SIGNING_CONF_TEMPLATE % (self.get_conf_variables())) |
3927 | - |
3928 | - if exists(self.ca_cert) or exists(self.ca_key): |
3929 | - raise RuntimeError("init() called when CA already exists") |
3930 | - cmd = ['openssl', 'req', '-config', self.ca_conf, |
3931 | - '-x509', '-nodes', '-newkey', 'rsa', |
3932 | - '-days', self.default_ca_expiry, |
3933 | - '-keyout', self.ca_key, '-out', self.ca_cert, |
3934 | - '-outform', 'PEM'] |
3935 | - output = subprocess.check_output(cmd, stderr=subprocess.STDOUT) |
3936 | - log.debug("CA Init:\n %s", output) |
3937 | - |
3938 | - def get_conf_variables(self): |
3939 | - return dict( |
3940 | - org_name="juju", |
3941 | - org_unit_name="%s service" % self.name, |
3942 | - common_name=self.name, |
3943 | - ca_dir=self.ca_dir) |
3944 | - |
3945 | - def get_or_create_cert(self, common_name): |
3946 | - if common_name in self: |
3947 | - return self.get_certificate(common_name) |
3948 | - return self.create_certificate(common_name) |
3949 | - |
3950 | - def create_certificate(self, common_name): |
3951 | - if common_name in self: |
3952 | - return self.get_certificate(common_name) |
3953 | - key_p = path_join(self.ca_dir, "certs", "%s.key" % common_name) |
3954 | - crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name) |
3955 | - csr_p = path_join(self.ca_dir, "certs", "%s.csr" % common_name) |
3956 | - self._create_certificate(common_name, key_p, csr_p, crt_p) |
3957 | - return self.get_certificate(common_name) |
3958 | - |
3959 | - def get_certificate(self, common_name): |
3960 | - if not common_name in self: |
3961 | - raise ValueError("No certificate for %s" % common_name) |
3962 | - key_p = path_join(self.ca_dir, "certs", "%s.key" % common_name) |
3963 | - crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name) |
3964 | - with open(crt_p) as fh: |
3965 | - crt = fh.read() |
3966 | - with open(key_p) as fh: |
3967 | - key = fh.read() |
3968 | - return crt, key |
3969 | - |
3970 | - def __contains__(self, common_name): |
3971 | - crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name) |
3972 | - return exists(crt_p) |
3973 | - |
3974 | - def _create_certificate(self, common_name, key_p, csr_p, crt_p): |
3975 | - template_vars = self.get_conf_variables() |
3976 | - template_vars['common_name'] = common_name |
3977 | - subj = '/O=%(org_name)s/OU=%(org_unit_name)s/CN=%(common_name)s' % ( |
3978 | - template_vars) |
3979 | - |
3980 | - log.debug("CA Create Cert %s", common_name) |
3981 | - cmd = ['openssl', 'req', '-sha1', '-newkey', 'rsa:2048', |
3982 | - '-nodes', '-days', self.default_expiry, |
3983 | - '-keyout', key_p, '-out', csr_p, '-subj', subj] |
3984 | - subprocess.check_call(cmd) |
3985 | - cmd = ['openssl', 'rsa', '-in', key_p, '-out', key_p] |
3986 | - subprocess.check_call(cmd) |
3987 | - |
3988 | - log.debug("CA Sign Cert %s", common_name) |
3989 | - if self.cert_type == MYSQL_CERT: |
3990 | - cmd = ['openssl', 'x509', '-req', |
3991 | - '-in', csr_p, '-days', self.default_expiry, |
3992 | - '-CA', self.ca_cert, '-CAkey', self.ca_key, |
3993 | - '-set_serial', '01', '-out', crt_p] |
3994 | - else: |
3995 | - cmd = ['openssl', 'ca', '-config', self.signing_conf, |
3996 | - '-extensions', 'req_extensions', |
3997 | - '-days', self.default_expiry, '-notext', |
3998 | - '-in', csr_p, '-out', crt_p, '-subj', subj, '-batch'] |
3999 | - log.debug("running %s", " ".join(cmd)) |
4000 | - subprocess.check_call(cmd) |
4001 | - |
4002 | - def get_ca_bundle(self): |
4003 | - with open(self.ca_cert) as fh: |
4004 | - return fh.read() |
4005 | - |
4006 | - |
4007 | -CA_CONF_TEMPLATE = """ |
4008 | -[ ca ] |
4009 | -default_ca = CA_default |
4010 | - |
4011 | -[ CA_default ] |
4012 | -dir = %(ca_dir)s |
4013 | -policy = policy_match |
4014 | -database = $dir/index.txt |
4015 | -serial = $dir/serial |
4016 | -certs = $dir/certs |
4017 | -crl_dir = $dir/crl |
4018 | -new_certs_dir = $dir/newcerts |
4019 | -certificate = $dir/cacert.pem |
4020 | -private_key = $dir/private/cacert.key |
4021 | -RANDFILE = $dir/private/.rand |
4022 | -default_md = default |
4023 | - |
4024 | -[ req ] |
4025 | -default_bits = 1024 |
4026 | -default_md = sha1 |
4027 | - |
4028 | -prompt = no |
4029 | -distinguished_name = ca_distinguished_name |
4030 | - |
4031 | -x509_extensions = ca_extensions |
4032 | - |
4033 | -[ ca_distinguished_name ] |
4034 | -organizationName = %(org_name)s |
4035 | -organizationalUnitName = %(org_unit_name)s Certificate Authority |
4036 | - |
4037 | - |
4038 | -[ policy_match ] |
4039 | -countryName = optional |
4040 | -stateOrProvinceName = optional |
4041 | -organizationName = match |
4042 | -organizationalUnitName = optional |
4043 | -commonName = supplied |
4044 | - |
4045 | -[ ca_extensions ] |
4046 | -basicConstraints = critical,CA:true |
4047 | -subjectKeyIdentifier = hash |
4048 | -authorityKeyIdentifier = keyid:always, issuer |
4049 | -keyUsage = cRLSign, keyCertSign |
4050 | -""" |
4051 | - |
4052 | - |
4053 | -SIGNING_CONF_TEMPLATE = """ |
4054 | -[ ca ] |
4055 | -default_ca = CA_default |
4056 | - |
4057 | -[ CA_default ] |
4058 | -dir = %(ca_dir)s |
4059 | -policy = policy_match |
4060 | -database = $dir/index.txt |
4061 | -serial = $dir/serial |
4062 | -certs = $dir/certs |
4063 | -crl_dir = $dir/crl |
4064 | -new_certs_dir = $dir/newcerts |
4065 | -certificate = $dir/cacert.pem |
4066 | -private_key = $dir/private/cacert.key |
4067 | -RANDFILE = $dir/private/.rand |
4068 | -default_md = default |
4069 | - |
4070 | -[ req ] |
4071 | -default_bits = 1024 |
4072 | -default_md = sha1 |
4073 | - |
4074 | -prompt = no |
4075 | -distinguished_name = req_distinguished_name |
4076 | - |
4077 | -x509_extensions = req_extensions |
4078 | - |
4079 | -[ req_distinguished_name ] |
4080 | -organizationName = %(org_name)s |
4081 | -organizationalUnitName = %(org_unit_name)s machine resources |
4082 | -commonName = %(common_name)s |
4083 | - |
4084 | -[ policy_match ] |
4085 | -countryName = optional |
4086 | -stateOrProvinceName = optional |
4087 | -organizationName = match |
4088 | -organizationalUnitName = optional |
4089 | -commonName = supplied |
4090 | - |
4091 | -[ req_extensions ] |
4092 | -basicConstraints = CA:false |
4093 | -subjectKeyIdentifier = hash |
4094 | -authorityKeyIdentifier = keyid:always, issuer |
4095 | -keyUsage = digitalSignature, keyEncipherment, keyAgreement |
4096 | -extendedKeyUsage = serverAuth, clientAuth |
4097 | -""" |
4098 | |
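Much of `ServiceCA` above is path bookkeeping: certificates live under `<ca_dir>/certs/<name>.crt`, and `__contains__` is a file-existence check that lets `get_or_create_cert()` be idempotent. A minimal sketch of that pattern (class and method names are illustrative; all openssl work is omitted):

```python
import os
import tempfile


class CertStore(object):
    """Minimal sketch of the ServiceCA path layout above: certs live
    under <ca_dir>/certs/<name>.crt and membership is a file check,
    as in ServiceCA.__contains__."""

    def __init__(self, ca_dir):
        self.ca_dir = ca_dir
        os.makedirs(os.path.join(ca_dir, 'certs'), exist_ok=True)

    def cert_path(self, common_name):
        return os.path.join(self.ca_dir, 'certs', '%s.crt' % common_name)

    def __contains__(self, common_name):
        return os.path.exists(self.cert_path(common_name))
```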
4099 | === removed directory 'hooks/charmhelpers/contrib/storage' |
4100 | === removed file 'hooks/charmhelpers/contrib/storage/__init__.py' |
4101 | === removed directory 'hooks/charmhelpers/contrib/storage/linux' |
4102 | === removed file 'hooks/charmhelpers/contrib/storage/linux/__init__.py' |
4103 | === removed file 'hooks/charmhelpers/contrib/storage/linux/ceph.py' |
4104 | --- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-05-09 20:11:59 +0000 |
4105 | +++ hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000 |
4106 | @@ -1,387 +0,0 @@ |
4107 | -# |
4108 | -# Copyright 2012 Canonical Ltd. |
4109 | -# |
4110 | -# This file is sourced from lp:openstack-charm-helpers |
4111 | -# |
4112 | -# Authors: |
4113 | -# James Page <james.page@ubuntu.com> |
4114 | -# Adam Gandelman <adamg@ubuntu.com> |
4115 | -# |
4116 | - |
4117 | -import os |
4118 | -import shutil |
4119 | -import json |
4120 | -import time |
4121 | - |
4122 | -from subprocess import ( |
4123 | - check_call, |
4124 | - check_output, |
4125 | - CalledProcessError |
4126 | -) |
4127 | - |
4128 | -from charmhelpers.core.hookenv import ( |
4129 | - relation_get, |
4130 | - relation_ids, |
4131 | - related_units, |
4132 | - log, |
4133 | - INFO, |
4134 | - WARNING, |
4135 | - ERROR |
4136 | -) |
4137 | - |
4138 | -from charmhelpers.core.host import ( |
4139 | - mount, |
4140 | - mounts, |
4141 | - service_start, |
4142 | - service_stop, |
4143 | - service_running, |
4144 | - umount, |
4145 | -) |
4146 | - |
4147 | -from charmhelpers.fetch import ( |
4148 | - apt_install, |
4149 | -) |
4150 | - |
4151 | -KEYRING = '/etc/ceph/ceph.client.{}.keyring' |
4152 | -KEYFILE = '/etc/ceph/ceph.client.{}.key' |
4153 | - |
4154 | -CEPH_CONF = """[global] |
4155 | - auth supported = {auth} |
4156 | - keyring = {keyring} |
4157 | - mon host = {mon_hosts} |
4158 | - log to syslog = {use_syslog} |
4159 | - err to syslog = {use_syslog} |
4160 | - clog to syslog = {use_syslog} |
4161 | -""" |
4162 | - |
4163 | - |
4164 | -def install(): |
4165 | - ''' Basic Ceph client installation ''' |
4166 | - ceph_dir = "/etc/ceph" |
4167 | - if not os.path.exists(ceph_dir): |
4168 | - os.mkdir(ceph_dir) |
4169 | - apt_install('ceph-common', fatal=True) |
4170 | - |
4171 | - |
4172 | -def rbd_exists(service, pool, rbd_img): |
4173 | - ''' Check to see if a RADOS block device exists ''' |
4174 | - try: |
4175 | - out = check_output(['rbd', 'list', '--id', service, |
4176 | - '--pool', pool]) |
4177 | - except CalledProcessError: |
4178 | - return False |
4179 | - else: |
4180 | - return rbd_img in out |
4181 | - |
4182 | - |
4183 | -def create_rbd_image(service, pool, image, sizemb): |
4184 | - ''' Create a new RADOS block device ''' |
4185 | - cmd = [ |
4186 | - 'rbd', |
4187 | - 'create', |
4188 | - image, |
4189 | - '--size', |
4190 | - str(sizemb), |
4191 | - '--id', |
4192 | - service, |
4193 | - '--pool', |
4194 | - pool |
4195 | - ] |
4196 | - check_call(cmd) |
4197 | - |
4198 | - |
4199 | -def pool_exists(service, name): |
4200 | - ''' Check to see if a RADOS pool already exists ''' |
4201 | - try: |
4202 | - out = check_output(['rados', '--id', service, 'lspools']) |
4203 | - except CalledProcessError: |
4204 | - return False |
4205 | - else: |
4206 | - return name in out |
4207 | - |
4208 | - |
4209 | -def get_osds(service): |
4210 | - ''' |
4211 | - Return a list of all Ceph Object Storage Daemons |
4212 | - currently in the cluster |
4213 | - ''' |
4214 | - version = ceph_version() |
4215 | - if version and version >= '0.56': |
4216 | - return json.loads(check_output(['ceph', '--id', service, |
4217 | - 'osd', 'ls', '--format=json'])) |
4218 | - else: |
4219 | - return None |
4220 | - |
4221 | - |
4222 | -def create_pool(service, name, replicas=2): |
4223 | - ''' Create a new RADOS pool ''' |
4224 | - if pool_exists(service, name): |
4225 | - log("Ceph pool {} already exists, skipping creation".format(name), |
4226 | - level=WARNING) |
4227 | - return |
4228 | - # Calculate the number of placement groups based |
4229 | - # on upstream recommended best practices. |
4230 | - osds = get_osds(service) |
4231 | - if osds: |
4232 | - pgnum = (len(osds) * 100 / replicas) |
4233 | - else: |
4234 | - # NOTE(james-page): Default to 200 for older ceph versions |
4235 | - # which don't support OSD query from cli |
4236 | - pgnum = 200 |
4237 | - cmd = [ |
4238 | - 'ceph', '--id', service, |
4239 | - 'osd', 'pool', 'create', |
4240 | - name, str(pgnum) |
4241 | - ] |
4242 | - check_call(cmd) |
4243 | - cmd = [ |
4244 | - 'ceph', '--id', service, |
4245 | - 'osd', 'pool', 'set', name, |
4246 | - 'size', str(replicas) |
4247 | - ] |
4248 | - check_call(cmd) |
4249 | - |
4250 | - |
4251 | -def delete_pool(service, name): |
4252 | - ''' Delete a RADOS pool from ceph ''' |
4253 | - cmd = [ |
4254 | - 'ceph', '--id', service, |
4255 | - 'osd', 'pool', 'delete', |
4256 | - name, '--yes-i-really-really-mean-it' |
4257 | - ] |
4258 | - check_call(cmd) |
4259 | - |
4260 | - |
4261 | -def _keyfile_path(service): |
4262 | - return KEYFILE.format(service) |
4263 | - |
4264 | - |
4265 | -def _keyring_path(service): |
4266 | - return KEYRING.format(service) |
4267 | - |
4268 | - |
4269 | -def create_keyring(service, key): |
4270 | - ''' Create a new Ceph keyring containing key''' |
4271 | - keyring = _keyring_path(service) |
4272 | - if os.path.exists(keyring): |
4273 | - log('ceph: Keyring exists at %s.' % keyring, level=WARNING) |
4274 | - return |
4275 | - cmd = [ |
4276 | - 'ceph-authtool', |
4277 | - keyring, |
4278 | - '--create-keyring', |
4279 | - '--name=client.{}'.format(service), |
4280 | - '--add-key={}'.format(key) |
4281 | - ] |
4282 | - check_call(cmd) |
4283 | - log('ceph: Created new ring at %s.' % keyring, level=INFO) |
4284 | - |
4285 | - |
4286 | -def create_key_file(service, key): |
4287 | - ''' Create a file containing key ''' |
4288 | - keyfile = _keyfile_path(service) |
4289 | - if os.path.exists(keyfile): |
4290 | - log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING) |
4291 | - return |
4292 | - with open(keyfile, 'w') as fd: |
4293 | - fd.write(key) |
4294 | - log('ceph: Created new keyfile at %s.' % keyfile, level=INFO) |
4295 | - |
4296 | - |
4297 | -def get_ceph_nodes(): |
4298 | -    ''' Query named relation 'ceph' to determine current nodes '''
4299 | - hosts = [] |
4300 | - for r_id in relation_ids('ceph'): |
4301 | - for unit in related_units(r_id): |
4302 | - hosts.append(relation_get('private-address', unit=unit, rid=r_id)) |
4303 | - return hosts |
4304 | - |
4305 | - |
4306 | -def configure(service, key, auth, use_syslog): |
4307 | - ''' Perform basic configuration of Ceph ''' |
4308 | - create_keyring(service, key) |
4309 | - create_key_file(service, key) |
4310 | - hosts = get_ceph_nodes() |
4311 | - with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: |
4312 | - ceph_conf.write(CEPH_CONF.format(auth=auth, |
4313 | - keyring=_keyring_path(service), |
4314 | - mon_hosts=",".join(map(str, hosts)), |
4315 | - use_syslog=use_syslog)) |
4316 | - modprobe('rbd') |
4317 | - |
4318 | - |
4319 | -def image_mapped(name): |
4320 | - ''' Determine whether a RADOS block device is mapped locally ''' |
4321 | - try: |
4322 | - out = check_output(['rbd', 'showmapped']) |
4323 | - except CalledProcessError: |
4324 | - return False |
4325 | - else: |
4326 | - return name in out |
4327 | - |
4328 | - |
4329 | -def map_block_storage(service, pool, image): |
4330 | - ''' Map a RADOS block device for local use ''' |
4331 | - cmd = [ |
4332 | - 'rbd', |
4333 | - 'map', |
4334 | - '{}/{}'.format(pool, image), |
4335 | - '--user', |
4336 | - service, |
4337 | - '--secret', |
4338 | - _keyfile_path(service), |
4339 | - ] |
4340 | - check_call(cmd) |
4341 | - |
4342 | - |
4343 | -def filesystem_mounted(fs): |
4344 | -    ''' Determine whether a filesystem is already mounted '''
4345 | - return fs in [f for f, m in mounts()] |
4346 | - |
4347 | - |
4348 | -def make_filesystem(blk_device, fstype='ext4', timeout=10): |
4349 | - ''' Make a new filesystem on the specified block device ''' |
4350 | - count = 0 |
4351 | - e_noent = os.errno.ENOENT |
4352 | - while not os.path.exists(blk_device): |
4353 | - if count >= timeout: |
4354 | - log('ceph: gave up waiting on block device %s' % blk_device, |
4355 | - level=ERROR) |
4356 | - raise IOError(e_noent, os.strerror(e_noent), blk_device) |
4357 | - log('ceph: waiting for block device %s to appear' % blk_device, |
4358 | - level=INFO) |
4359 | - count += 1 |
4360 | - time.sleep(1) |
4361 | - else: |
4362 | - log('ceph: Formatting block device %s as filesystem %s.' % |
4363 | - (blk_device, fstype), level=INFO) |
4364 | - check_call(['mkfs', '-t', fstype, blk_device]) |
4365 | - |
4366 | - |
4367 | -def place_data_on_block_device(blk_device, data_src_dst): |
4368 | - ''' Migrate data in data_src_dst to blk_device and then remount ''' |
4369 | - # mount block device into /mnt |
4370 | - mount(blk_device, '/mnt') |
4371 | - # copy data to /mnt |
4372 | - copy_files(data_src_dst, '/mnt') |
4373 | - # umount block device |
4374 | - umount('/mnt') |
4375 | - # Grab user/group ID's from original source |
4376 | - _dir = os.stat(data_src_dst) |
4377 | - uid = _dir.st_uid |
4378 | - gid = _dir.st_gid |
4379 | - # re-mount where the data should originally be |
4380 | - # TODO: persist is currently a NO-OP in core.host |
4381 | - mount(blk_device, data_src_dst, persist=True) |
4382 | - # ensure original ownership of new mount. |
4383 | - os.chown(data_src_dst, uid, gid) |
4384 | - |
4385 | - |
4386 | -# TODO: re-use |
4387 | -def modprobe(module): |
4388 | - ''' Load a kernel module and configure for auto-load on reboot ''' |
4389 | - log('ceph: Loading kernel module', level=INFO) |
4390 | - cmd = ['modprobe', module] |
4391 | - check_call(cmd) |
4392 | - with open('/etc/modules', 'r+') as modules: |
4393 | - if module not in modules.read(): |
4394 | - modules.write(module) |
4395 | - |
4396 | - |
4397 | -def copy_files(src, dst, symlinks=False, ignore=None): |
4398 | - ''' Copy files from src to dst ''' |
4399 | - for item in os.listdir(src): |
4400 | - s = os.path.join(src, item) |
4401 | - d = os.path.join(dst, item) |
4402 | - if os.path.isdir(s): |
4403 | - shutil.copytree(s, d, symlinks, ignore) |
4404 | - else: |
4405 | - shutil.copy2(s, d) |
4406 | - |
4407 | - |
4408 | -def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, |
4409 | - blk_device, fstype, system_services=[]): |
4410 | - """ |
4411 | - NOTE: This function must only be called from a single service unit for |
4412 | - the same rbd_img otherwise data loss will occur. |
4413 | - |
4414 | - Ensures given pool and RBD image exists, is mapped to a block device, |
4415 | - and the device is formatted and mounted at the given mount_point. |
4416 | - |
4417 | - If formatting a device for the first time, data existing at mount_point |
4418 | - will be migrated to the RBD device before being re-mounted. |
4419 | - |
4420 | - All services listed in system_services will be stopped prior to data |
4421 | - migration and restarted when complete. |
4422 | - """ |
4423 | - # Ensure pool, RBD image, RBD mappings are in place. |
4424 | - if not pool_exists(service, pool): |
4425 | - log('ceph: Creating new pool {}.'.format(pool)) |
4426 | - create_pool(service, pool) |
4427 | - |
4428 | - if not rbd_exists(service, pool, rbd_img): |
4429 | - log('ceph: Creating RBD image ({}).'.format(rbd_img)) |
4430 | - create_rbd_image(service, pool, rbd_img, sizemb) |
4431 | - |
4432 | - if not image_mapped(rbd_img): |
4433 | - log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img)) |
4434 | - map_block_storage(service, pool, rbd_img) |
4435 | - |
4436 | - # make file system |
4437 | - # TODO: What happens if for whatever reason this is run again and |
4438 | - # the data is already in the rbd device and/or is mounted?? |
4439 | - # When it is mounted already, it will fail to make the fs |
4440 | - # XXX: This is really sketchy! Need to at least add an fstab entry |
4441 | - # otherwise this hook will blow away existing data if its executed |
4442 | - # after a reboot. |
4443 | - if not filesystem_mounted(mount_point): |
4444 | - make_filesystem(blk_device, fstype) |
4445 | - |
4446 | - for svc in system_services: |
4447 | - if service_running(svc): |
4448 | - log('ceph: Stopping services {} prior to migrating data.' |
4449 | - .format(svc)) |
4450 | - service_stop(svc) |
4451 | - |
4452 | - place_data_on_block_device(blk_device, mount_point) |
4453 | - |
4454 | - for svc in system_services: |
4455 | - log('ceph: Starting service {} after migrating data.' |
4456 | - .format(svc)) |
4457 | - service_start(svc) |
4458 | - |
4459 | - |
4460 | -def ensure_ceph_keyring(service, user=None, group=None): |
4461 | - ''' |
4462 | - Ensures a ceph keyring is created for a named service |
4463 | - and optionally ensures user and group ownership. |
4464 | - |
4465 | - Returns False if no ceph key is available in relation state. |
4466 | - ''' |
4467 | - key = None |
4468 | - for rid in relation_ids('ceph'): |
4469 | - for unit in related_units(rid): |
4470 | - key = relation_get('key', rid=rid, unit=unit) |
4471 | - if key: |
4472 | - break |
4473 | - if not key: |
4474 | - return False |
4475 | - create_keyring(service=service, key=key) |
4476 | - keyring = _keyring_path(service) |
4477 | - if user and group: |
4478 | - check_call(['chown', '%s.%s' % (user, group), keyring]) |
4479 | - return True |
4480 | - |
4481 | - |
4482 | -def ceph_version(): |
4483 | - ''' Retrieve the local version of ceph ''' |
4484 | - if os.path.exists('/usr/bin/ceph'): |
4485 | - cmd = ['ceph', '-v'] |
4486 | - output = check_output(cmd) |
4487 | - output = output.split() |
4488 | - if len(output) > 3: |
4489 | - return output[2] |
4490 | - else: |
4491 | - return None |
4492 | - else: |
4493 | - return None |
4494 | |
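The removed `create_pool` helper above sizes placement groups from the cluster's OSD count: 100 PGs per OSD divided by the replica count, falling back to 200 when the OSD list cannot be queried (older Ceph releases without `osd ls --format=json`). A minimal standalone sketch of that calculation (the `pg_count` name is ours, not part of charm-helpers):

```python
def pg_count(osds, replicas=2):
    """Placement-group count per the removed create_pool() logic:
    100 PGs per OSD spread across replicas, or 200 when the OSD
    list could not be queried from the CLI."""
    if osds:
        return len(osds) * 100 // replicas
    return 200
```

For example, a six-OSD cluster with two replicas gets 300 PGs, while an unqueryable cluster gets the conservative 200 default.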
4495 | === removed file 'hooks/charmhelpers/contrib/storage/linux/loopback.py' |
4496 | --- hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-11-26 17:12:54 +0000 |
4497 | +++ hooks/charmhelpers/contrib/storage/linux/loopback.py 1970-01-01 00:00:00 +0000 |
4498 | @@ -1,62 +0,0 @@ |
4499 | - |
4500 | -import os |
4501 | -import re |
4502 | - |
4503 | -from subprocess import ( |
4504 | - check_call, |
4505 | - check_output, |
4506 | -) |
4507 | - |
4508 | - |
4509 | -################################################## |
4510 | -# loopback device helpers. |
4511 | -################################################## |
4512 | -def loopback_devices(): |
4513 | - ''' |
4514 | - Parse through 'losetup -a' output to determine currently mapped |
4515 | - loopback devices. Output is expected to look like: |
4516 | - |
4517 | - /dev/loop0: [0807]:961814 (/tmp/my.img) |
4518 | - |
4519 | - :returns: dict: a dict mapping {loopback_dev: backing_file} |
4520 | - ''' |
4521 | - loopbacks = {} |
4522 | - cmd = ['losetup', '-a'] |
4523 | - devs = [d.strip().split(' ') for d in |
4524 | - check_output(cmd).splitlines() if d != ''] |
4525 | - for dev, _, f in devs: |
4526 | - loopbacks[dev.replace(':', '')] = re.search('\((\S+)\)', f).groups()[0] |
4527 | - return loopbacks |
4528 | - |
4529 | - |
4530 | -def create_loopback(file_path): |
4531 | - ''' |
4532 | - Create a loopback device for a given backing file. |
4533 | - |
4534 | - :returns: str: Full path to new loopback device (eg, /dev/loop0) |
4535 | - ''' |
4536 | - file_path = os.path.abspath(file_path) |
4537 | - check_call(['losetup', '--find', file_path]) |
4538 | - for d, f in loopback_devices().iteritems(): |
4539 | - if f == file_path: |
4540 | - return d |
4541 | - |
4542 | - |
4543 | -def ensure_loopback_device(path, size): |
4544 | - ''' |
4545 | - Ensure a loopback device exists for a given backing file path and size. |
4546 | -    If a loopback device is not mapped to the file, a new one will be created.
4547 | - |
4548 | - TODO: Confirm size of found loopback device. |
4549 | - |
4550 | - :returns: str: Full path to the ensured loopback device (eg, /dev/loop0) |
4551 | - ''' |
4552 | - for d, f in loopback_devices().iteritems(): |
4553 | - if f == path: |
4554 | - return d |
4555 | - |
4556 | - if not os.path.exists(path): |
4557 | - cmd = ['truncate', '--size', size, path] |
4558 | - check_call(cmd) |
4559 | - |
4560 | - return create_loopback(path) |
4561 | |
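The removed `loopback_devices` helper parses `losetup -a` output of the form `/dev/loop0: [0807]:961814 (/tmp/my.img)` into a device-to-backing-file mapping. The parsing step can be sketched on its own, without shelling out (`parse_losetup` is our name for this illustration):

```python
import re


def parse_losetup(output):
    """Parse `losetup -a` output into {device: backing_file},
    mirroring the removed loopback_devices() helper."""
    loopbacks = {}
    for line in output.splitlines():
        if not line.strip():
            continue
        # e.g. "/dev/loop0: [0807]:961814 (/tmp/my.img)"
        dev, _, rest = line.strip().split(' ', 2)
        loopbacks[dev.rstrip(':')] = re.search(r'\((\S+)\)', rest).group(1)
    return loopbacks
```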
4562 | === removed file 'hooks/charmhelpers/contrib/storage/linux/lvm.py' |
4563 | --- hooks/charmhelpers/contrib/storage/linux/lvm.py 2013-11-26 17:12:54 +0000 |
4564 | +++ hooks/charmhelpers/contrib/storage/linux/lvm.py 1970-01-01 00:00:00 +0000 |
4565 | @@ -1,88 +0,0 @@ |
4566 | -from subprocess import ( |
4567 | - CalledProcessError, |
4568 | - check_call, |
4569 | - check_output, |
4570 | - Popen, |
4571 | - PIPE, |
4572 | -) |
4573 | - |
4574 | - |
4575 | -################################################## |
4576 | -# LVM helpers. |
4577 | -################################################## |
4578 | -def deactivate_lvm_volume_group(block_device): |
4579 | - ''' |
4580 | -    Deactivate any volume group associated with an LVM physical volume.
4581 | - |
4582 | - :param block_device: str: Full path to LVM physical volume |
4583 | - ''' |
4584 | - vg = list_lvm_volume_group(block_device) |
4585 | - if vg: |
4586 | - cmd = ['vgchange', '-an', vg] |
4587 | - check_call(cmd) |
4588 | - |
4589 | - |
4590 | -def is_lvm_physical_volume(block_device): |
4591 | - ''' |
4592 | - Determine whether a block device is initialized as an LVM PV. |
4593 | - |
4594 | - :param block_device: str: Full path of block device to inspect. |
4595 | - |
4596 | - :returns: boolean: True if block device is a PV, False if not. |
4597 | - ''' |
4598 | - try: |
4599 | - check_output(['pvdisplay', block_device]) |
4600 | - return True |
4601 | - except CalledProcessError: |
4602 | - return False |
4603 | - |
4604 | - |
4605 | -def remove_lvm_physical_volume(block_device): |
4606 | - ''' |
4607 | - Remove LVM PV signatures from a given block device. |
4608 | - |
4609 | - :param block_device: str: Full path of block device to scrub. |
4610 | - ''' |
4611 | - p = Popen(['pvremove', '-ff', block_device], |
4612 | - stdin=PIPE) |
4613 | - p.communicate(input='y\n') |
4614 | - |
4615 | - |
4616 | -def list_lvm_volume_group(block_device): |
4617 | - ''' |
4618 | - List LVM volume group associated with a given block device. |
4619 | - |
4620 | - Assumes block device is a valid LVM PV. |
4621 | - |
4622 | - :param block_device: str: Full path of block device to inspect. |
4623 | - |
4624 | - :returns: str: Name of volume group associated with block device or None |
4625 | - ''' |
4626 | - vg = None |
4627 | - pvd = check_output(['pvdisplay', block_device]).splitlines() |
4628 | - for l in pvd: |
4629 | - if l.strip().startswith('VG Name'): |
4630 | - vg = ' '.join(l.split()).split(' ').pop() |
4631 | - return vg |
4632 | - |
4633 | - |
4634 | -def create_lvm_physical_volume(block_device): |
4635 | - ''' |
4636 | - Initialize a block device as an LVM physical volume. |
4637 | - |
4638 | - :param block_device: str: Full path of block device to initialize. |
4639 | - |
4640 | - ''' |
4641 | - check_call(['pvcreate', block_device]) |
4642 | - |
4643 | - |
4644 | -def create_lvm_volume_group(volume_group, block_device): |
4645 | - ''' |
4646 | - Create an LVM volume group backed by a given block device. |
4647 | - |
4648 | - Assumes block device has already been initialized as an LVM PV. |
4649 | - |
4650 | - :param volume_group: str: Name of volume group to create. |
4651 | - :block_device: str: Full path of PV-initialized block device. |
4652 | - ''' |
4653 | - check_call(['vgcreate', volume_group, block_device]) |
4654 | |
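The removed `list_lvm_volume_group` helper scrapes the volume-group name out of `pvdisplay` output by looking for the `VG Name` row and taking its last whitespace-separated token. A simplified sketch of that extraction, returning the first match (the function name is ours):

```python
def vg_from_pvdisplay(output):
    """Extract the 'VG Name' value from pvdisplay output, as the
    removed list_lvm_volume_group() did; returns None when absent."""
    for line in output.splitlines():
        if line.strip().startswith('VG Name'):
            # Collapse whitespace and take the final token.
            return ' '.join(line.split()).split(' ').pop()
    return None
```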
4655 | === removed file 'hooks/charmhelpers/contrib/storage/linux/utils.py' |
4656 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-05-09 20:11:59 +0000 |
4657 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000 |
4658 | @@ -1,35 +0,0 @@ |
4659 | -from os import stat |
4660 | -from stat import S_ISBLK |
4661 | - |
4662 | -from subprocess import ( |
4663 | - check_call, |
4664 | - check_output, |
4665 | - call |
4666 | -) |
4667 | - |
4668 | - |
4669 | -def is_block_device(path): |
4670 | - ''' |
4671 | - Confirm device at path is a valid block device node. |
4672 | - |
4673 | - :returns: boolean: True if path is a block device, False if not. |
4674 | - ''' |
4675 | - return S_ISBLK(stat(path).st_mode) |
4676 | - |
4677 | - |
4678 | -def zap_disk(block_device): |
4679 | - ''' |
4680 | -    Clear a block device of its partition table. Relies on sgdisk, which is
4681 | -    installed as part of the 'gdisk' package in Ubuntu.
4682 | - |
4683 | - :param block_device: str: Full path of block device to clean. |
4684 | - ''' |
4685 | - # sometimes sgdisk exits non-zero; this is OK, dd will clean up |
4686 | - call(['sgdisk', '--zap-all', '--mbrtogpt', |
4687 | - '--clear', block_device]) |
4688 | - dev_end = check_output(['blockdev', '--getsz', block_device]) |
4689 | - gpt_end = int(dev_end.split()[0]) - 100 |
4690 | - check_call(['dd', 'if=/dev/zero', 'of=%s'%(block_device), |
4691 | - 'bs=1M', 'count=1']) |
4692 | - check_call(['dd', 'if=/dev/zero', 'of=%s'%(block_device), |
4693 | - 'bs=512', 'count=100', 'seek=%s'%(gpt_end)]) |
4694 | |
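The removed `zap_disk` helper zeroes two regions after running sgdisk: the first 1 MiB (MBR plus primary GPT) and the last 100 sectors of the device (the backup GPT), where the tail offset comes from `blockdev --getsz` (total 512-byte sectors) minus 100. A small sketch of that offset arithmetic, under the assumption of 512-byte sectors (the helper name is ours):

```python
def gpt_wipe_offsets(total_sectors, tail_sectors=100):
    """Return (head_bytes, tail_seek_sector) matching the removed
    zap_disk(): zero the first 1 MiB, then seek to total - 100
    sectors and zero the backup GPT at the end of the disk."""
    return 1024 * 1024, total_sectors - tail_sectors
```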
4695 | === removed directory 'hooks/charmhelpers/contrib/templating' |
4696 | === removed file 'hooks/charmhelpers/contrib/templating/__init__.py' |
4697 | === removed file 'hooks/charmhelpers/contrib/templating/contexts.py' |
4698 | --- hooks/charmhelpers/contrib/templating/contexts.py 2014-05-09 20:11:59 +0000 |
4699 | +++ hooks/charmhelpers/contrib/templating/contexts.py 1970-01-01 00:00:00 +0000 |
4700 | @@ -1,104 +0,0 @@ |
4701 | -# Copyright 2013 Canonical Ltd. |
4702 | -# |
4703 | -# Authors: |
4704 | -# Charm Helpers Developers <juju@lists.ubuntu.com> |
4705 | -"""A helper to create a yaml cache of config with namespaced relation data.""" |
4706 | -import os |
4707 | -import yaml |
4708 | - |
4709 | -import charmhelpers.core.hookenv |
4710 | - |
4711 | - |
4712 | -charm_dir = os.environ.get('CHARM_DIR', '') |
4713 | - |
4714 | - |
4715 | -def dict_keys_without_hyphens(a_dict): |
4716 | -    """Return a new dict with underscores instead of hyphens in keys."""
4717 | - return dict( |
4718 | - (key.replace('-', '_'), val) for key, val in a_dict.items()) |
4719 | - |
4720 | - |
4721 | -def update_relations(context, namespace_separator=':'): |
4722 | - """Update the context with the relation data.""" |
4723 | - # Add any relation data prefixed with the relation type. |
4724 | - relation_type = charmhelpers.core.hookenv.relation_type() |
4725 | - relations = [] |
4726 | - context['current_relation'] = {} |
4727 | - if relation_type is not None: |
4728 | - relation_data = charmhelpers.core.hookenv.relation_get() |
4729 | - context['current_relation'] = relation_data |
4730 | - # Deprecated: the following use of relation data as keys |
4731 | - # directly in the context will be removed. |
4732 | - relation_data = dict( |
4733 | - ("{relation_type}{namespace_separator}{key}".format( |
4734 | - relation_type=relation_type, |
4735 | - key=key, |
4736 | - namespace_separator=namespace_separator), val) |
4737 | - for key, val in relation_data.items()) |
4738 | - relation_data = dict_keys_without_hyphens(relation_data) |
4739 | - context.update(relation_data) |
4740 | - relations = charmhelpers.core.hookenv.relations_of_type(relation_type) |
4741 | - relations = [dict_keys_without_hyphens(rel) for rel in relations] |
4742 | - |
4743 | - if 'relations_deprecated' not in context: |
4744 | - context['relations_deprecated'] = {} |
4745 | - if relation_type is not None: |
4746 | - relation_type = relation_type.replace('-', '_') |
4747 | - context['relations_deprecated'][relation_type] = relations |
4748 | - |
4749 | - context['relations'] = charmhelpers.core.hookenv.relations() |
4750 | - |
4751 | - |
4752 | -def juju_state_to_yaml(yaml_path, namespace_separator=':', |
4753 | - allow_hyphens_in_keys=True): |
4754 | - """Update the juju config and state in a yaml file. |
4755 | - |
4756 | - This includes any current relation-get data, and the charm |
4757 | - directory. |
4758 | - |
4759 | - This function was created for the ansible and saltstack |
4760 | - support, as those libraries can use a yaml file to supply |
4761 | - context to templates, but it may be useful generally to |
4762 | - create and update an on-disk cache of all the config, including |
4763 | - previous relation data. |
4764 | - |
4765 | - By default, hyphens are allowed in keys as this is supported |
4766 | - by yaml, but for tools like ansible, hyphens are not valid [1]. |
4767 | - |
4768 | - [1] http://www.ansibleworks.com/docs/playbooks_variables.html#what-makes-a-valid-variable-name |
4769 | - """ |
4770 | - config = charmhelpers.core.hookenv.config() |
4771 | - |
4772 | - # Add the charm_dir which we will need to refer to charm |
4773 | - # file resources etc. |
4774 | - config['charm_dir'] = charm_dir |
4775 | - config['local_unit'] = charmhelpers.core.hookenv.local_unit() |
4776 | - config['unit_private_address'] = charmhelpers.core.hookenv.unit_private_ip() |
4777 | - config['unit_public_address'] = charmhelpers.core.hookenv.unit_get( |
4778 | - 'public-address' |
4779 | - ) |
4780 | - |
4781 | - # Don't use non-standard tags for unicode which will not |
4782 | - # work when salt uses yaml.load_safe. |
4783 | - yaml.add_representer(unicode, lambda dumper, |
4784 | - value: dumper.represent_scalar( |
4785 | - u'tag:yaml.org,2002:str', value)) |
4786 | - |
4787 | - yaml_dir = os.path.dirname(yaml_path) |
4788 | - if not os.path.exists(yaml_dir): |
4789 | - os.makedirs(yaml_dir) |
4790 | - |
4791 | - if os.path.exists(yaml_path): |
4792 | - with open(yaml_path, "r") as existing_vars_file: |
4793 | - existing_vars = yaml.load(existing_vars_file.read()) |
4794 | - else: |
4795 | - existing_vars = {} |
4796 | - |
4797 | - if not allow_hyphens_in_keys: |
4798 | - config = dict_keys_without_hyphens(config) |
4799 | - existing_vars.update(config) |
4800 | - |
4801 | - update_relations(existing_vars, namespace_separator) |
4802 | - |
4803 | - with open(yaml_path, "w+") as fp: |
4804 | - fp.write(yaml.dump(existing_vars, default_flow_style=False)) |
4805 | |
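The removed contexts module does two small key transformations before writing the YAML cache: replacing hyphens with underscores (for ansible-safe variable names) and prefixing relation keys with the relation type plus a separator. Both can be shown standalone; `namespace_relation` is our illustrative name for the second step:

```python
def dict_keys_without_hyphens(a_dict):
    """Return a new dict with underscores instead of hyphens in keys,
    as the removed contexts helper did."""
    return {key.replace('-', '_'): val for key, val in a_dict.items()}


def namespace_relation(relation_type, data, sep=':'):
    """Prefix relation keys with the relation type, mirroring the
    deprecated key-namespacing in the removed update_relations()."""
    return {'{}{}{}'.format(relation_type, sep, k): v for k, v in data.items()}
```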
4806 | === removed file 'hooks/charmhelpers/contrib/templating/pyformat.py' |
4807 | --- hooks/charmhelpers/contrib/templating/pyformat.py 2013-11-26 17:12:54 +0000 |
4808 | +++ hooks/charmhelpers/contrib/templating/pyformat.py 1970-01-01 00:00:00 +0000 |
4809 | @@ -1,13 +0,0 @@ |
4810 | -''' |
4811 | -Templating using standard Python str.format() method. |
4812 | -''' |
4813 | - |
4814 | -from charmhelpers.core import hookenv |
4815 | - |
4816 | - |
4817 | -def render(template, extra={}, **kwargs): |
4818 | - """Return the template rendered using Python's str.format().""" |
4819 | - context = hookenv.execution_environment() |
4820 | - context.update(extra) |
4821 | - context.update(kwargs) |
4822 | - return template.format(**context) |
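The removed `pyformat.render` merges the hook execution environment with any extra mappings and keyword arguments, then applies Python's `str.format()`. A hedged sketch with the context passed explicitly instead of read from `hookenv` (so it runs outside a hook):

```python
def render(template, context, extra=None, **kwargs):
    """str.format()-based templating in the style of the removed
    pyformat.render(); later sources override earlier ones."""
    merged = dict(context)
    merged.update(extra or {})
    merged.update(kwargs)
    return template.format(**merged)
```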
4823 | |
4824 | === removed directory 'hooks/charmhelpers/contrib/unison' |
4825 | === removed file 'hooks/charmhelpers/contrib/unison/__init__.py' |
4826 | --- hooks/charmhelpers/contrib/unison/__init__.py 2014-05-09 20:11:59 +0000 |
4827 | +++ hooks/charmhelpers/contrib/unison/__init__.py 1970-01-01 00:00:00 +0000 |
4828 | @@ -1,257 +0,0 @@ |
4829 | -# Easy file synchronization among peer units using ssh + unison. |
4830 | -# |
4831 | -# From *both* peer relation -joined and -changed, add a call to |
4832 | -# ssh_authorized_peers() describing the peer relation and the desired |
4833 | -# user + group. After all peer relations have settled, all hosts should |
4834 | -# be able to connect to one another via key auth'd ssh as the specified user.
4835 | -# |
4836 | -# Other hooks are then free to synchronize files and directories using |
4837 | -# sync_to_peers(). |
4838 | -# |
4839 | -# For a peer relation named 'cluster', for example: |
4840 | -# |
4841 | -# cluster-relation-joined: |
4842 | -# ... |
4843 | -# ssh_authorized_peers(peer_interface='cluster', |
4844 | -# user='juju_ssh', group='juju_ssh', |
4845 | -# ensure_user=True) |
4846 | -# ... |
4847 | -# |
4848 | -# cluster-relation-changed: |
4849 | -# ... |
4850 | -# ssh_authorized_peers(peer_interface='cluster', |
4851 | -# user='juju_ssh', group='juju_ssh', |
4852 | -# ensure_user=True) |
4853 | -# ... |
4854 | -# |
4855 | -# Hooks are now free to sync files as easily as: |
4856 | -# |
4857 | -# files = ['/etc/fstab', '/etc/apt.conf.d/'] |
4858 | -# sync_to_peers(peer_interface='cluster', |
4859 | -# user='juju_ssh, paths=[files]) |
4860 | -#                   user='juju_ssh', paths=[files]) 
4861 | -# It is assumed the charm itself has setup permissions on each unit |
4862 | -# such that 'juju_ssh' has read + write permissions. Also assumed |
4863 | -# that the calling charm takes care of leader delegation. |
4864 | -# |
4865 | -# Additionally, files can be synchronized only to a specific unit:
4866 | -# sync_to_peer(slave_address, user='juju_ssh', |
4867 | -# paths=[files], verbose=False) |
4868 | - |
4869 | -import os |
4870 | -import pwd |
4871 | - |
4872 | -from copy import copy |
4873 | -from subprocess import check_call, check_output |
4874 | - |
4875 | -from charmhelpers.core.host import ( |
4876 | - adduser, |
4877 | - add_user_to_group, |
4878 | -) |
4879 | - |
4880 | -from charmhelpers.core.hookenv import ( |
4881 | - log, |
4882 | - hook_name, |
4883 | - relation_ids, |
4884 | - related_units, |
4885 | - relation_set, |
4886 | - relation_get, |
4887 | - unit_private_ip, |
4888 | - ERROR, |
4889 | -) |
4890 | - |
4891 | -BASE_CMD = ['unison', '-auto', '-batch=true', '-confirmbigdel=false', |
4892 | - '-fastcheck=true', '-group=false', '-owner=false', |
4893 | - '-prefer=newer', '-times=true'] |
4894 | - |
4895 | - |
4896 | -def get_homedir(user): |
4897 | - try: |
4898 | - user = pwd.getpwnam(user) |
4899 | - return user.pw_dir |
4900 | - except KeyError: |
4901 | - log('Could not get homedir for user %s: user exists?', ERROR) |
4902 | - raise Exception |
4903 | - |
4904 | - |
4905 | -def create_private_key(user, priv_key_path): |
4906 | - if not os.path.isfile(priv_key_path): |
4907 | - log('Generating new SSH key for user %s.' % user) |
4908 | - cmd = ['ssh-keygen', '-q', '-N', '', '-t', 'rsa', '-b', '2048', |
4909 | - '-f', priv_key_path] |
4910 | - check_call(cmd) |
4911 | - else: |
4912 | - log('SSH key already exists at %s.' % priv_key_path) |
4913 | - check_call(['chown', user, priv_key_path]) |
4914 | - check_call(['chmod', '0600', priv_key_path]) |
4915 | - |
4916 | - |
4917 | -def create_public_key(user, priv_key_path, pub_key_path): |
4918 | - if not os.path.isfile(pub_key_path): |
4919 | - log('Generating missing ssh public key @ %s.' % pub_key_path) |
4920 | - cmd = ['ssh-keygen', '-y', '-f', priv_key_path] |
4921 | - p = check_output(cmd).strip() |
4922 | - with open(pub_key_path, 'wb') as out: |
4923 | - out.write(p) |
4924 | - check_call(['chown', user, pub_key_path]) |
4925 | - |
4926 | - |
4927 | -def get_keypair(user): |
4928 | - home_dir = get_homedir(user) |
4929 | - ssh_dir = os.path.join(home_dir, '.ssh') |
4930 | - priv_key = os.path.join(ssh_dir, 'id_rsa') |
4931 | - pub_key = '%s.pub' % priv_key |
4932 | - |
4933 | - if not os.path.isdir(ssh_dir): |
4934 | - os.mkdir(ssh_dir) |
4935 | - check_call(['chown', '-R', user, ssh_dir]) |
4936 | - |
4937 | - create_private_key(user, priv_key) |
4938 | - create_public_key(user, priv_key, pub_key) |
4939 | - |
4940 | - with open(priv_key, 'r') as p: |
4941 | - _priv = p.read().strip() |
4942 | - |
4943 | - with open(pub_key, 'r') as p: |
4944 | - _pub = p.read().strip() |
4945 | - |
4946 | - return (_priv, _pub) |
4947 | - |
4948 | - |
4949 | -def write_authorized_keys(user, keys): |
4950 | - home_dir = get_homedir(user) |
4951 | - ssh_dir = os.path.join(home_dir, '.ssh') |
4952 | - auth_keys = os.path.join(ssh_dir, 'authorized_keys') |
4953 | - log('Syncing authorized_keys @ %s.' % auth_keys) |
4954 | - with open(auth_keys, 'wb') as out: |
4955 | - for k in keys: |
4956 | - out.write('%s\n' % k) |
4957 | - |
4958 | - |
4959 | -def write_known_hosts(user, hosts): |
4960 | - home_dir = get_homedir(user) |
4961 | - ssh_dir = os.path.join(home_dir, '.ssh') |
4962 | - known_hosts = os.path.join(ssh_dir, 'known_hosts') |
4963 | - khosts = [] |
4964 | - for host in hosts: |
4965 | - cmd = ['ssh-keyscan', '-H', '-t', 'rsa', host] |
4966 | - remote_key = check_output(cmd).strip() |
4967 | - khosts.append(remote_key) |
4968 | - log('Syncing known_hosts @ %s.' % known_hosts) |
4969 | - with open(known_hosts, 'wb') as out: |
4970 | - for host in khosts: |
4971 | - out.write('%s\n' % host) |
4972 | - |
4973 | - |
4974 | -def ensure_user(user, group=None): |
4975 | - adduser(user) |
4976 | - if group: |
4977 | - add_user_to_group(user, group) |
4978 | - |
4979 | - |
4980 | -def ssh_authorized_peers(peer_interface, user, group=None, |
4981 | - ensure_local_user=False): |
4982 | - """ |
4983 | - Main setup function, should be called from both peer -changed and -joined |
4984 | - hooks with the same parameters. |
4985 | - """ |
4986 | - if ensure_local_user: |
4987 | - ensure_user(user, group) |
4988 | - priv_key, pub_key = get_keypair(user) |
4989 | - hook = hook_name() |
4990 | - if hook == '%s-relation-joined' % peer_interface: |
4991 | - relation_set(ssh_pub_key=pub_key) |
4992 | - elif hook == '%s-relation-changed' % peer_interface: |
4993 | - hosts = [] |
4994 | - keys = [] |
4995 | - |
4996 | - for r_id in relation_ids(peer_interface): |
4997 | - for unit in related_units(r_id): |
4998 | - ssh_pub_key = relation_get('ssh_pub_key', |
4999 | - rid=r_id, |
5000 | - unit=unit) |