Merge lp:~michael.nelson/charms/trusty/elasticsearch/upgrade-charm-helpers into lp:charms/trusty/elasticsearch
Proposed by
Michael Nelson
Status: | Merged |
---|---|
Merged at revision: | 36 |
Proposed branch: | lp:~michael.nelson/charms/trusty/elasticsearch/upgrade-charm-helpers |
Merge into: | lp:charms/trusty/elasticsearch |
Diff against target: |
2083 lines (+1393/-181) 19 files modified
README.md (+4/-0)
hooks/charmhelpers/contrib/ansible/__init__.py (+63/-57)
hooks/charmhelpers/contrib/templating/contexts.py (+19/-7)
hooks/charmhelpers/core/fstab.py (+116/-0)
hooks/charmhelpers/core/hookenv.py (+138/-7)
hooks/charmhelpers/core/host.py (+107/-13)
hooks/charmhelpers/core/services/__init__.py (+2/-0)
hooks/charmhelpers/core/services/base.py (+313/-0)
hooks/charmhelpers/core/services/helpers.py (+239/-0)
hooks/charmhelpers/core/sysctl.py (+34/-0)
hooks/charmhelpers/core/templating.py (+51/-0)
hooks/charmhelpers/fetch/__init__.py (+196/-90)
hooks/charmhelpers/fetch/archiveurl.py (+49/-4)
hooks/charmhelpers/fetch/bzrurl.py (+2/-1)
hooks/charmhelpers/fetch/giturl.py (+44/-0)
hooks/hooks.py (+1/-0)
playbook.yaml (+4/-1)
tasks/peer-relations.yml (+0/-1)
tasks/setup-ufw.yml (+11/-0)
To merge this branch: | bzr merge lp:~michael.nelson/charms/trusty/elasticsearch/upgrade-charm-helpers |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
charmers | Pending | ||
Review via email: mp+239933@code.launchpad.net |
Commit message
Description of the change
This branch is mechanical work that:
1) Resyncs the charmhelpers library to the latest upstream code.
2) Re-merges a branch which ensures the uncomplicated firewall (ufw) is also used for the peer port 9300.
This branch does not work on its own; the follow-on branch makes the actual updates to the charm to ensure that it works with the new charmhelpers.
Preview Diff
1 | === modified file 'README.md' |
2 | --- README.md 2014-08-19 16:47:21 +0000 |
3 | +++ README.md 2014-10-29 02:58:49 +0000 |
4 | @@ -37,6 +37,10 @@ |
5 | epoch timestamp cluster status node.total node.data shards ... |
6 | 1404728290 10:18:10 elasticsearch green 2 2 0 |
7 | |
8 | +Note that the admin port (9200) is only accessible from the instance itself |
9 | +and any clients that join. Similarly the node-to-node communication |
10 | +port (9300) is only available to other units in the elasticsearch service. |
11 | + |
12 | See the separate HACKING.md for information about deploying this charm |
13 | from a local repository. |
14 | |
15 | |
16 | === modified file 'hooks/charmhelpers/contrib/ansible/__init__.py' |
17 | --- hooks/charmhelpers/contrib/ansible/__init__.py 2014-02-06 12:54:59 +0000 |
18 | +++ hooks/charmhelpers/contrib/ansible/__init__.py 2014-10-29 02:58:49 +0000 |
19 | @@ -6,58 +6,59 @@ |
20 | |
21 | This helper enables you to declare your machine state, rather than |
22 | program it procedurally (and have to test each change to your procedures). |
23 | -Your install hook can be as simple as: |
24 | - |
25 | -{{{ |
26 | -import charmhelpers.contrib.ansible |
27 | - |
28 | - |
29 | -def install(): |
30 | - charmhelpers.contrib.ansible.install_ansible_support() |
31 | - charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml') |
32 | -}}} |
33 | +Your install hook can be as simple as:: |
34 | + |
35 | + {{{ |
36 | + import charmhelpers.contrib.ansible |
37 | + |
38 | + |
39 | + def install(): |
40 | + charmhelpers.contrib.ansible.install_ansible_support() |
41 | + charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml') |
42 | + }}} |
43 | |
44 | and won't need to change (nor will its tests) when you change the machine |
45 | state. |
46 | |
47 | All of your juju config and relation-data are available as template |
48 | variables within your playbooks and templates. An install playbook looks |
49 | -something like: |
50 | - |
51 | -{{{ |
52 | ---- |
53 | -- hosts: localhost |
54 | - user: root |
55 | - |
56 | - tasks: |
57 | - - name: Add private repositories. |
58 | - template: |
59 | - src: ../templates/private-repositories.list.jinja2 |
60 | - dest: /etc/apt/sources.list.d/private.list |
61 | - |
62 | - - name: Update the cache. |
63 | - apt: update_cache=yes |
64 | - |
65 | - - name: Install dependencies. |
66 | - apt: pkg={{ item }} |
67 | - with_items: |
68 | - - python-mimeparse |
69 | - - python-webob |
70 | - - sunburnt |
71 | - |
72 | - - name: Setup groups. |
73 | - group: name={{ item.name }} gid={{ item.gid }} |
74 | - with_items: |
75 | - - { name: 'deploy_user', gid: 1800 } |
76 | - - { name: 'service_user', gid: 1500 } |
77 | - |
78 | - ... |
79 | -}}} |
80 | - |
81 | -Read more online about playbooks[1] and standard ansible modules[2]. |
82 | - |
83 | -[1] http://www.ansibleworks.com/docs/playbooks.html |
84 | -[2] http://www.ansibleworks.com/docs/modules.html |
85 | +something like:: |
86 | + |
87 | + {{{ |
88 | + --- |
89 | + - hosts: localhost |
90 | + user: root |
91 | + |
92 | + tasks: |
93 | + - name: Add private repositories. |
94 | + template: |
95 | + src: ../templates/private-repositories.list.jinja2 |
96 | + dest: /etc/apt/sources.list.d/private.list |
97 | + |
98 | + - name: Update the cache. |
99 | + apt: update_cache=yes |
100 | + |
101 | + - name: Install dependencies. |
102 | + apt: pkg={{ item }} |
103 | + with_items: |
104 | + - python-mimeparse |
105 | + - python-webob |
106 | + - sunburnt |
107 | + |
108 | + - name: Setup groups. |
109 | + group: name={{ item.name }} gid={{ item.gid }} |
110 | + with_items: |
111 | + - { name: 'deploy_user', gid: 1800 } |
112 | + - { name: 'service_user', gid: 1500 } |
113 | + |
114 | + ... |
115 | + }}} |
116 | + |
117 | +Read more online about `playbooks`_ and standard ansible `modules`_. |
118 | + |
119 | +.. _playbooks: http://www.ansibleworks.com/docs/playbooks.html |
120 | +.. _modules: http://www.ansibleworks.com/docs/modules.html |
121 | + |
122 | """ |
123 | import os |
124 | import subprocess |
125 | @@ -75,20 +76,20 @@ |
126 | ansible_vars_path = '/etc/ansible/host_vars/localhost' |
127 | |
128 | |
129 | -def install_ansible_support(from_ppa=True): |
130 | +def install_ansible_support(from_ppa=True, ppa_location='ppa:rquillo/ansible'): |
131 | """Installs the ansible package. |
132 | |
133 | - By default it is installed from the PPA [1] linked from |
134 | - the ansible website [2]. |
135 | - |
136 | - [1] https://launchpad.net/~rquillo/+archive/ansible |
137 | - [2] http://www.ansibleworks.com/docs/gettingstarted.html#ubuntu-and-debian |
138 | - |
139 | - If from_ppa is false, you must ensure that the package is available |
140 | + By default it is installed from the `PPA`_ linked from |
141 | + the ansible `website`_ or from a ppa specified by a charm config. |
142 | + |
143 | + .. _PPA: https://launchpad.net/~rquillo/+archive/ansible |
144 | + .. _website: http://docs.ansible.com/intro_installation.html#latest-releases-via-apt-ubuntu |
145 | + |
146 | + If from_ppa is empty, you must ensure that the package is available |
147 | from a configured repository. |
148 | """ |
149 | if from_ppa: |
150 | - charmhelpers.fetch.add_source('ppa:rquillo/ansible') |
151 | + charmhelpers.fetch.add_source(ppa_location) |
152 | charmhelpers.fetch.apt_update(fatal=True) |
153 | charmhelpers.fetch.apt_install('ansible') |
154 | with open(ansible_hosts_path, 'w+') as hosts_file: |
155 | @@ -101,6 +102,9 @@ |
156 | charmhelpers.contrib.templating.contexts.juju_state_to_yaml( |
157 | ansible_vars_path, namespace_separator='__', |
158 | allow_hyphens_in_keys=False) |
159 | + # we want ansible's log output to be unbuffered |
160 | + env = os.environ.copy() |
161 | + env['PYTHONUNBUFFERED'] = "1" |
162 | call = [ |
163 | 'ansible-playbook', |
164 | '-c', |
165 | @@ -109,7 +113,7 @@ |
166 | ] |
167 | if tags: |
168 | call.extend(['--tags', '{}'.format(tags)]) |
169 | - subprocess.check_call(call) |
170 | + subprocess.check_call(call, env=env) |
171 | |
172 | |
173 | class AnsibleHooks(charmhelpers.core.hookenv.Hooks): |
174 | @@ -119,7 +123,8 @@ |
175 | but additionally runs the playbook with the hook-name specified |
176 | using --tags (ie. running all the tasks tagged with the hook-name). |
177 | |
178 | - Example: |
179 | + Example:: |
180 | + |
181 | hooks = AnsibleHooks(playbook_path='playbooks/my_machine_state.yaml') |
182 | |
183 | # All the tasks within my_machine_state.yaml tagged with 'install' |
184 | @@ -144,6 +149,7 @@ |
185 | if __name__ == "__main__": |
186 | # execute a hook based on the name the program is called by |
187 | hooks.execute(sys.argv) |
188 | + |
189 | """ |
190 | |
191 | def __init__(self, playbook_path, default_hooks=None): |
192 | |
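The unbuffered-output change above (copying the hook's environment and setting `PYTHONUNBUFFERED` before invoking `ansible-playbook`) can be illustrated with a self-contained sketch. The playbook path and tag below are hypothetical, and the command is shown rather than executed:

```python
import os
import subprocess


def build_playbook_call(playbook, tags=None):
    """Assemble an ansible-playbook invocation the way apply_playbook does."""
    call = ['ansible-playbook', '-c', 'local', playbook]
    if tags:
        call.extend(['--tags', '{}'.format(tags)])
    return call


def unbuffered_env():
    """Copy the hook's environment and force unbuffered Python output so
    ansible's log lines reach the juju log as they happen, not at exit.

    Copying os.environ first preserves PATH, JUJU_* variables, etc.;
    mutating the copy never leaks back into the parent process.
    """
    env = os.environ.copy()
    env['PYTHONUNBUFFERED'] = '1'
    return env


# Usage (hypothetical playbook and tag):
# subprocess.check_call(build_playbook_call('playbook.yaml', tags='install'),
#                       env=unbuffered_env())
```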
193 | === modified file 'hooks/charmhelpers/contrib/templating/contexts.py' |
194 | --- hooks/charmhelpers/contrib/templating/contexts.py 2014-04-08 14:58:22 +0000 |
195 | +++ hooks/charmhelpers/contrib/templating/contexts.py 2014-10-29 02:58:49 +0000 |
196 | @@ -40,13 +40,25 @@ |
197 | relations = charmhelpers.core.hookenv.relations_of_type(relation_type) |
198 | relations = [dict_keys_without_hyphens(rel) for rel in relations] |
199 | |
200 | - if 'relations_deprecated' not in context: |
201 | - context['relations_deprecated'] = {} |
202 | - if relation_type is not None: |
203 | - relation_type = relation_type.replace('-', '_') |
204 | - context['relations_deprecated'][relation_type] = relations |
205 | + context['relations_full'] = charmhelpers.core.hookenv.relations() |
206 | |
207 | - context['relations'] = charmhelpers.core.hookenv.relations() |
208 | + # the hookenv.relations() data structure is effectively unusable in |
209 | + # templates and other contexts when trying to access relation data other |
210 | + # than the current relation. So provide a more useful structure that works |
211 | + # with any hook. |
212 | + local_unit = charmhelpers.core.hookenv.local_unit() |
213 | + relations = {} |
214 | + for rname, rids in context['relations_full'].items(): |
215 | + relations[rname] = [] |
216 | + for rid, rdata in rids.items(): |
217 | + data = rdata.copy() |
218 | + if local_unit in rdata: |
219 | + data.pop(local_unit) |
220 | + for unit_name, rel_data in data.items(): |
221 | + new_data = {'__relid__': rid, '__unit__': unit_name} |
222 | + new_data.update(rel_data) |
223 | + relations[rname].append(new_data) |
224 | + context['relations'] = relations |
225 | |
226 | |
227 | def juju_state_to_yaml(yaml_path, namespace_separator=':', |
228 | @@ -101,4 +113,4 @@ |
229 | update_relations(existing_vars, namespace_separator) |
230 | |
231 | with open(yaml_path, "w+") as fp: |
232 | - fp.write(yaml.dump(existing_vars)) |
233 | + fp.write(yaml.dump(existing_vars, default_flow_style=False)) |
234 | |
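The reshaping of `hookenv.relations()` data above can be sketched in isolation. The sample input below is made up, but has the same nested `{relation name: {relation id: {unit: data}}}` shape hookenv produces; the helper name is ours, not charmhelpers':

```python
def flatten_relations(relations_full, local_unit):
    """Flatten nested relation data into a per-relation-name list of dicts,
    tagging each remote unit's data with '__relid__' and '__unit__' and
    dropping the local unit's own settings, as the new context code does."""
    relations = {}
    for rname, rids in relations_full.items():
        relations[rname] = []
        for rid, rdata in rids.items():
            data = rdata.copy()
            data.pop(local_unit, None)  # skip our own unit's settings
            for unit_name, rel_data in data.items():
                new_data = {'__relid__': rid, '__unit__': unit_name}
                new_data.update(rel_data)
                relations[rname].append(new_data)
    return relations


# Hypothetical two-unit peer relation:
sample = {
    'peer': {
        'peer:0': {
            'elasticsearch/0': {'private-address': '10.0.0.1'},
            'elasticsearch/1': {'private-address': '10.0.0.2'},
        },
    },
}
flat = flatten_relations(sample, 'elasticsearch/0')
```

A template can then iterate `relations['peer']` directly from any hook, instead of digging through relation ids.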
235 | === added file 'hooks/charmhelpers/core/fstab.py' |
236 | --- hooks/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000 |
237 | +++ hooks/charmhelpers/core/fstab.py 2014-10-29 02:58:49 +0000 |
238 | @@ -0,0 +1,116 @@ |
239 | +#!/usr/bin/env python |
240 | +# -*- coding: utf-8 -*- |
241 | + |
242 | +__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>' |
243 | + |
244 | +import os |
245 | + |
246 | + |
247 | +class Fstab(file): |
248 | + """This class extends file in order to implement a file reader/writer |
249 | + for file `/etc/fstab` |
250 | + """ |
251 | + |
252 | + class Entry(object): |
253 | + """Entry class represents a non-comment line on the `/etc/fstab` file |
254 | + """ |
255 | + def __init__(self, device, mountpoint, filesystem, |
256 | + options, d=0, p=0): |
257 | + self.device = device |
258 | + self.mountpoint = mountpoint |
259 | + self.filesystem = filesystem |
260 | + |
261 | + if not options: |
262 | + options = "defaults" |
263 | + |
264 | + self.options = options |
265 | + self.d = d |
266 | + self.p = p |
267 | + |
268 | + def __eq__(self, o): |
269 | + return str(self) == str(o) |
270 | + |
271 | + def __str__(self): |
272 | + return "{} {} {} {} {} {}".format(self.device, |
273 | + self.mountpoint, |
274 | + self.filesystem, |
275 | + self.options, |
276 | + self.d, |
277 | + self.p) |
278 | + |
279 | + DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab') |
280 | + |
281 | + def __init__(self, path=None): |
282 | + if path: |
283 | + self._path = path |
284 | + else: |
285 | + self._path = self.DEFAULT_PATH |
286 | + file.__init__(self, self._path, 'r+') |
287 | + |
288 | + def _hydrate_entry(self, line): |
289 | + # NOTE: use split with no arguments to split on any |
290 | + # whitespace including tabs |
291 | + return Fstab.Entry(*filter( |
292 | + lambda x: x not in ('', None), |
293 | + line.strip("\n").split())) |
294 | + |
295 | + @property |
296 | + def entries(self): |
297 | + self.seek(0) |
298 | + for line in self.readlines(): |
299 | + try: |
300 | + if not line.startswith("#"): |
301 | + yield self._hydrate_entry(line) |
302 | + except ValueError: |
303 | + pass |
304 | + |
305 | + def get_entry_by_attr(self, attr, value): |
306 | + for entry in self.entries: |
307 | + e_attr = getattr(entry, attr) |
308 | + if e_attr == value: |
309 | + return entry |
310 | + return None |
311 | + |
312 | + def add_entry(self, entry): |
313 | + if self.get_entry_by_attr('device', entry.device): |
314 | + return False |
315 | + |
316 | + self.write(str(entry) + '\n') |
317 | + self.truncate() |
318 | + return entry |
319 | + |
320 | + def remove_entry(self, entry): |
321 | + self.seek(0) |
322 | + |
323 | + lines = self.readlines() |
324 | + |
325 | + found = False |
326 | + for index, line in enumerate(lines): |
327 | + if not line.startswith("#"): |
328 | + if self._hydrate_entry(line) == entry: |
329 | + found = True |
330 | + break |
331 | + |
332 | + if not found: |
333 | + return False |
334 | + |
335 | + lines.remove(line) |
336 | + |
337 | + self.seek(0) |
338 | + self.write(''.join(lines)) |
339 | + self.truncate() |
340 | + return True |
341 | + |
342 | + @classmethod |
343 | + def remove_by_mountpoint(cls, mountpoint, path=None): |
344 | + fstab = cls(path=path) |
345 | + entry = fstab.get_entry_by_attr('mountpoint', mountpoint) |
346 | + if entry: |
347 | + return fstab.remove_entry(entry) |
348 | + return False |
349 | + |
350 | + @classmethod |
351 | + def add(cls, device, mountpoint, filesystem, options=None, path=None): |
352 | + return cls(path=path).add_entry(Fstab.Entry(device, |
353 | + mountpoint, filesystem, |
354 | + options=options)) |
355 | |
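The parsing at the heart of the new Fstab helper boils down to whitespace-splitting non-comment lines into an entry object. A minimal standalone sketch of that logic (without the file subclassing):

```python
class Entry(object):
    """A single non-comment /etc/fstab line, mirroring Fstab.Entry."""
    def __init__(self, device, mountpoint, filesystem,
                 options=None, d=0, p=0):
        self.device = device
        self.mountpoint = mountpoint
        self.filesystem = filesystem
        # An empty options field falls back to "defaults".
        self.options = options or "defaults"
        self.d = d
        self.p = p

    def __eq__(self, other):
        return str(self) == str(other)

    def __str__(self):
        return "{} {} {} {} {} {}".format(
            self.device, self.mountpoint, self.filesystem,
            self.options, self.d, self.p)


def hydrate(line):
    # split() with no arguments splits on any whitespace, including tabs
    fields = [f for f in line.strip("\n").split() if f]
    return Entry(*fields)


entry = hydrate("/dev/sdb1\t/srv/data ext4 defaults 0 2")
```

String-based equality is what lets `remove_entry` match a parsed line against a caller-supplied entry.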
356 | === modified file 'hooks/charmhelpers/core/hookenv.py' |
357 | --- hooks/charmhelpers/core/hookenv.py 2014-02-06 12:54:59 +0000 |
358 | +++ hooks/charmhelpers/core/hookenv.py 2014-10-29 02:58:49 +0000 |
359 | @@ -25,7 +25,7 @@ |
360 | def cached(func): |
361 | """Cache return values for multiple executions of func + args |
362 | |
363 | - For example: |
364 | + For example:: |
365 | |
366 | @cached |
367 | def unit_get(attribute): |
368 | @@ -155,6 +155,127 @@ |
369 | return os.path.basename(sys.argv[0]) |
370 | |
371 | |
372 | +class Config(dict): |
373 | + """A dictionary representation of the charm's config.yaml, with some |
374 | + extra features: |
375 | + |
376 | + - See which values in the dictionary have changed since the previous hook. |
377 | + - For values that have changed, see what the previous value was. |
378 | + - Store arbitrary data for use in a later hook. |
379 | + |
380 | + NOTE: Do not instantiate this object directly - instead call |
381 | + ``hookenv.config()``, which will return an instance of :class:`Config`. |
382 | + |
383 | + Example usage:: |
384 | + |
385 | + >>> # inside a hook |
386 | + >>> from charmhelpers.core import hookenv |
387 | + >>> config = hookenv.config() |
388 | + >>> config['foo'] |
389 | + 'bar' |
390 | + >>> # store a new key/value for later use |
391 | + >>> config['mykey'] = 'myval' |
392 | + |
393 | + |
394 | + >>> # user runs `juju set mycharm foo=baz` |
395 | + >>> # now we're inside subsequent config-changed hook |
396 | + >>> config = hookenv.config() |
397 | + >>> config['foo'] |
398 | + 'baz' |
399 | + >>> # test to see if this val has changed since last hook |
400 | + >>> config.changed('foo') |
401 | + True |
402 | + >>> # what was the previous value? |
403 | + >>> config.previous('foo') |
404 | + 'bar' |
405 | + >>> # keys/values that we add are preserved across hooks |
406 | + >>> config['mykey'] |
407 | + 'myval' |
408 | + |
409 | + """ |
410 | + CONFIG_FILE_NAME = '.juju-persistent-config' |
411 | + |
412 | + def __init__(self, *args, **kw): |
413 | + super(Config, self).__init__(*args, **kw) |
414 | + self.implicit_save = True |
415 | + self._prev_dict = None |
416 | + self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) |
417 | + if os.path.exists(self.path): |
418 | + self.load_previous() |
419 | + |
420 | + def __getitem__(self, key): |
421 | + """For regular dict lookups, check the current juju config first, |
422 | + then the previous (saved) copy. This ensures that user-saved values |
423 | + will be returned by a dict lookup. |
424 | + |
425 | + """ |
426 | + try: |
427 | + return dict.__getitem__(self, key) |
428 | + except KeyError: |
429 | + return (self._prev_dict or {})[key] |
430 | + |
431 | + def keys(self): |
432 | + prev_keys = [] |
433 | + if self._prev_dict is not None: |
434 | + prev_keys = self._prev_dict.keys() |
435 | + return list(set(prev_keys + dict.keys(self))) |
436 | + |
437 | + def load_previous(self, path=None): |
438 | + """Load previous copy of config from disk. |
439 | + |
440 | + In normal usage you don't need to call this method directly - it |
441 | + is called automatically at object initialization. |
442 | + |
443 | + :param path: |
444 | + |
445 | + File path from which to load the previous config. If `None`, |
446 | + config is loaded from the default location. If `path` is |
447 | + specified, subsequent `save()` calls will write to the same |
448 | + path. |
449 | + |
450 | + """ |
451 | + self.path = path or self.path |
452 | + with open(self.path) as f: |
453 | + self._prev_dict = json.load(f) |
454 | + |
455 | + def changed(self, key): |
456 | + """Return True if the current value for this key is different from |
457 | + the previous value. |
458 | + |
459 | + """ |
460 | + if self._prev_dict is None: |
461 | + return True |
462 | + return self.previous(key) != self.get(key) |
463 | + |
464 | + def previous(self, key): |
465 | + """Return previous value for this key, or None if there |
466 | + is no previous value. |
467 | + |
468 | + """ |
469 | + if self._prev_dict: |
470 | + return self._prev_dict.get(key) |
471 | + return None |
472 | + |
473 | + def save(self): |
474 | + """Save this config to disk. |
475 | + |
476 | + If the charm is using the :mod:`Services Framework <services.base>` |
477 | + or :meth:'@hook <Hooks.hook>' decorator, this |
478 | + is called automatically at the end of successful hook execution. |
479 | + Otherwise, it should be called directly by user code. |
480 | + |
481 | + To disable automatic saves, set ``implicit_save=False`` on this |
482 | + instance. |
483 | + |
484 | + """ |
485 | + if self._prev_dict: |
486 | + for k, v in self._prev_dict.iteritems(): |
487 | + if k not in self: |
488 | + self[k] = v |
489 | + with open(self.path, 'w') as f: |
490 | + json.dump(self, f) |
491 | + |
492 | + |
493 | @cached |
494 | def config(scope=None): |
495 | """Juju charm configuration""" |
496 | @@ -163,7 +284,10 @@ |
497 | config_cmd_line.append(scope) |
498 | config_cmd_line.append('--format=json') |
499 | try: |
500 | - return json.loads(subprocess.check_output(config_cmd_line)) |
501 | + config_data = json.loads(subprocess.check_output(config_cmd_line)) |
502 | + if scope is not None: |
503 | + return config_data |
504 | + return Config(config_data) |
505 | except ValueError: |
506 | return None |
507 | |
508 | @@ -188,8 +312,9 @@ |
509 | raise |
510 | |
511 | |
512 | -def relation_set(relation_id=None, relation_settings={}, **kwargs): |
513 | +def relation_set(relation_id=None, relation_settings=None, **kwargs): |
514 | """Set relation information for the current unit""" |
515 | + relation_settings = relation_settings if relation_settings else {} |
516 | relation_cmd_line = ['relation-set'] |
517 | if relation_id is not None: |
518 | relation_cmd_line.extend(('-r', relation_id)) |
519 | @@ -348,27 +473,29 @@ |
520 | class Hooks(object): |
521 | """A convenient handler for hook functions. |
522 | |
523 | - Example: |
524 | + Example:: |
525 | + |
526 | hooks = Hooks() |
527 | |
528 | # register a hook, taking its name from the function name |
529 | @hooks.hook() |
530 | def install(): |
531 | - ... |
532 | + pass # your code here |
533 | |
534 | # register a hook, providing a custom hook name |
535 | @hooks.hook("config-changed") |
536 | def config_changed(): |
537 | - ... |
538 | + pass # your code here |
539 | |
540 | if __name__ == "__main__": |
541 | # execute a hook based on the name the program is called by |
542 | hooks.execute(sys.argv) |
543 | """ |
544 | |
545 | - def __init__(self): |
546 | + def __init__(self, config_save=True): |
547 | super(Hooks, self).__init__() |
548 | self._hooks = {} |
549 | + self._config_save = config_save |
550 | |
551 | def register(self, name, function): |
552 | """Register a hook""" |
553 | @@ -379,6 +506,10 @@ |
554 | hook_name = os.path.basename(args[0]) |
555 | if hook_name in self._hooks: |
556 | self._hooks[hook_name]() |
557 | + if self._config_save: |
558 | + cfg = config() |
559 | + if cfg.implicit_save: |
560 | + cfg.save() |
561 | else: |
562 | raise UnregisteredHookError(hook_name) |
563 | |
564 | |
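The new `Config` class's changed/previous semantics can be demonstrated without juju by seeding the previous-hook dict directly (the real class loads it from `.juju-persistent-config`; this cut-down stand-in keeps only the lookup logic):

```python
class MiniConfig(dict):
    """Simplified stand-in for hookenv.Config: current values live in the
    dict itself, the previous hook's saved values in _prev_dict."""
    def __init__(self, *args, **kw):
        super(MiniConfig, self).__init__(*args, **kw)
        self._prev_dict = None

    def __getitem__(self, key):
        # Check the current config first, then the saved copy, so
        # user-stored values survive into later hooks.
        try:
            return dict.__getitem__(self, key)
        except KeyError:
            return (self._prev_dict or {})[key]

    def changed(self, key):
        # With no saved copy at all, every key counts as changed.
        if self._prev_dict is None:
            return True
        return self.previous(key) != self.get(key)

    def previous(self, key):
        if self._prev_dict:
            return self._prev_dict.get(key)
        return None


# Simulate: previous hook saw foo=bar and stored mykey; user then
# ran `juju set mycharm foo=baz`.
cfg = MiniConfig({'foo': 'baz'})
cfg._prev_dict = {'foo': 'bar', 'mykey': 'myval'}
```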
565 | === modified file 'hooks/charmhelpers/core/host.py' |
566 | --- hooks/charmhelpers/core/host.py 2014-03-05 15:19:02 +0000 |
567 | +++ hooks/charmhelpers/core/host.py 2014-10-29 02:58:49 +0000 |
568 | @@ -6,16 +6,19 @@ |
569 | # Matthew Wedgwood <matthew.wedgwood@canonical.com> |
570 | |
571 | import os |
572 | +import re |
573 | import pwd |
574 | import grp |
575 | import random |
576 | import string |
577 | import subprocess |
578 | import hashlib |
579 | +from contextlib import contextmanager |
580 | |
581 | from collections import OrderedDict |
582 | |
583 | from hookenv import log |
584 | +from fstab import Fstab |
585 | |
586 | |
587 | def service_start(service_name): |
588 | @@ -34,7 +37,8 @@ |
589 | |
590 | |
591 | def service_reload(service_name, restart_on_failure=False): |
592 | - """Reload a system service, optionally falling back to restart if reload fails""" |
593 | + """Reload a system service, optionally falling back to restart if |
594 | + reload fails""" |
595 | service_result = service('reload', service_name) |
596 | if not service_result and restart_on_failure: |
597 | service_result = service('restart', service_name) |
598 | @@ -50,7 +54,7 @@ |
599 | def service_running(service): |
600 | """Determine whether a system service is running""" |
601 | try: |
602 | - output = subprocess.check_output(['service', service, 'status']) |
603 | + output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT) |
604 | except subprocess.CalledProcessError: |
605 | return False |
606 | else: |
607 | @@ -60,6 +64,16 @@ |
608 | return False |
609 | |
610 | |
611 | +def service_available(service_name): |
612 | + """Determine whether a system service is available""" |
613 | + try: |
614 | + subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT) |
615 | + except subprocess.CalledProcessError as e: |
616 | + return 'unrecognized service' not in e.output |
617 | + else: |
618 | + return True |
619 | + |
620 | + |
621 | def adduser(username, password=None, shell='/bin/bash', system_user=False): |
622 | """Add a user to the system""" |
623 | try: |
624 | @@ -143,7 +157,19 @@ |
625 | target.write(content) |
626 | |
627 | |
628 | -def mount(device, mountpoint, options=None, persist=False): |
629 | +def fstab_remove(mp): |
630 | + """Remove the given mountpoint entry from /etc/fstab |
631 | + """ |
632 | + return Fstab.remove_by_mountpoint(mp) |
633 | + |
634 | + |
635 | +def fstab_add(dev, mp, fs, options=None): |
636 | + """Adds the given device entry to the /etc/fstab file |
637 | + """ |
638 | + return Fstab.add(dev, mp, fs, options=options) |
639 | + |
640 | + |
641 | +def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"): |
642 | """Mount a filesystem at a particular mountpoint""" |
643 | cmd_args = ['mount'] |
644 | if options is not None: |
645 | @@ -154,9 +180,9 @@ |
646 | except subprocess.CalledProcessError, e: |
647 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) |
648 | return False |
649 | + |
650 | if persist: |
651 | - # TODO: update fstab |
652 | - pass |
653 | + return fstab_add(device, mountpoint, filesystem, options=options) |
654 | return True |
655 | |
656 | |
657 | @@ -168,9 +194,9 @@ |
658 | except subprocess.CalledProcessError, e: |
659 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) |
660 | return False |
661 | + |
662 | if persist: |
663 | - # TODO: update fstab |
664 | - pass |
665 | + return fstab_remove(mountpoint) |
666 | return True |
667 | |
668 | |
669 | @@ -183,10 +209,15 @@ |
670 | return system_mounts |
671 | |
672 | |
673 | -def file_hash(path): |
674 | - """Generate a md5 hash of the contents of 'path' or None if not found """ |
675 | +def file_hash(path, hash_type='md5'): |
676 | + """ |
677 | + Generate a hash checksum of the contents of 'path' or None if not found. |
678 | + |
679 | + :param str hash_type: Any hash algorithm supported by :mod:`hashlib`, |
680 | + such as md5, sha1, sha256, sha512, etc. |
681 | + """ |
682 | if os.path.exists(path): |
683 | - h = hashlib.md5() |
684 | + h = getattr(hashlib, hash_type)() |
685 | with open(path, 'r') as source: |
686 | h.update(source.read()) # IGNORE:E1101 - it does have update |
687 | return h.hexdigest() |
688 | @@ -194,16 +225,36 @@ |
689 | return None |
690 | |
691 | |
692 | +def check_hash(path, checksum, hash_type='md5'): |
693 | + """ |
694 | + Validate a file using a cryptographic checksum. |
695 | + |
696 | + :param str checksum: Value of the checksum used to validate the file. |
697 | + :param str hash_type: Hash algorithm used to generate `checksum`. |
698 | + Can be any hash algorithm supported by :mod:`hashlib`, |
699 | + such as md5, sha1, sha256, sha512, etc. |
700 | + :raises ChecksumError: If the file fails the checksum |
701 | + |
702 | + """ |
703 | + actual_checksum = file_hash(path, hash_type) |
704 | + if checksum != actual_checksum: |
705 | + raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum)) |
706 | + |
707 | + |
708 | +class ChecksumError(ValueError): |
709 | + pass |
710 | + |
711 | + |
712 | def restart_on_change(restart_map, stopstart=False): |
713 | """Restart services based on configuration files changing |
714 | |
715 | - This function is used a decorator, for example |
716 | + This function is used a decorator, for example:: |
717 | |
718 | @restart_on_change({ |
719 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] |
720 | }) |
721 | def ceph_client_changed(): |
722 | - ... |
723 | + pass # your code here |
724 | |
725 | In this example, the cinder-api and cinder-volume services |
726 | would be restarted if /etc/ceph/ceph.conf is changed by the |
727 | @@ -266,7 +317,13 @@ |
728 | ip_output = (line for line in ip_output if line) |
729 | for line in ip_output: |
730 | if line.split()[1].startswith(int_type): |
731 | - interfaces.append(line.split()[1].replace(":", "")) |
732 | + matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line) |
733 | + if matched: |
734 | + interface = matched.groups()[0] |
735 | + else: |
736 | + interface = line.split()[1].replace(":", "") |
737 | + interfaces.append(interface) |
738 | + |
739 | return interfaces |
740 | |
741 | |
742 | @@ -295,3 +352,40 @@ |
743 | if 'link/ether' in words: |
744 | hwaddr = words[words.index('link/ether') + 1] |
745 | return hwaddr |
746 | + |
747 | + |
748 | +def cmp_pkgrevno(package, revno, pkgcache=None): |
749 | + '''Compare supplied revno with the revno of the installed package |
750 | + |
751 | + * 1 => Installed revno is greater than supplied arg |
752 | + * 0 => Installed revno is the same as supplied arg |
753 | + * -1 => Installed revno is less than supplied arg |
754 | + |
755 | + ''' |
756 | + import apt_pkg |
757 | + from charmhelpers.fetch import apt_cache |
758 | + if not pkgcache: |
759 | + pkgcache = apt_cache() |
760 | + pkg = pkgcache[package] |
761 | + return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) |
762 | + |
763 | + |
764 | +@contextmanager |
765 | +def chdir(d): |
766 | + cur = os.getcwd() |
767 | + try: |
768 | + yield os.chdir(d) |
769 | + finally: |
770 | + os.chdir(cur) |
771 | + |
772 | + |
773 | +def chownr(path, owner, group): |
774 | + uid = pwd.getpwnam(owner).pw_uid |
775 | + gid = grp.getgrnam(group).gr_gid |
776 | + |
777 | + for root, dirs, files in os.walk(path): |
778 | + for name in dirs + files: |
779 | + full = os.path.join(root, name) |
780 | + broken_symlink = os.path.lexists(full) and not os.path.exists(full) |
781 | + if not broken_symlink: |
782 | + os.chown(full, uid, gid) |
783 | |
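The `file_hash`/`check_hash` pair added above is small enough to sketch standalone. This version opens the file in binary mode for byte-exact hashing (the diffed code uses text mode); the temp-file usage at the bottom is illustrative:

```python
import hashlib
import os
import tempfile


class ChecksumError(ValueError):
    pass


def file_hash(path, hash_type='md5'):
    """Hash a file with any hashlib algorithm; None if the path is missing."""
    if not os.path.exists(path):
        return None
    h = getattr(hashlib, hash_type)()  # e.g. hashlib.sha256()
    with open(path, 'rb') as source:
        h.update(source.read())
    return h.hexdigest()


def check_hash(path, checksum, hash_type='md5'):
    """Raise ChecksumError unless the file's hash matches `checksum`."""
    actual = file_hash(path, hash_type)
    if checksum != actual:
        raise ChecksumError("'%s' != '%s'" % (checksum, actual))


# Illustrative round trip with a temporary file:
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)
digest = file_hash(path, 'sha256')
check_hash(path, digest, 'sha256')  # matching checksum: no exception
os.unlink(path)
```

This is what lets fetch handlers validate downloaded archives against a configured checksum before unpacking them.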
784 | === added directory 'hooks/charmhelpers/core/services' |
785 | === added file 'hooks/charmhelpers/core/services/__init__.py' |
786 | --- hooks/charmhelpers/core/services/__init__.py 1970-01-01 00:00:00 +0000 |
787 | +++ hooks/charmhelpers/core/services/__init__.py 2014-10-29 02:58:49 +0000 |
788 | @@ -0,0 +1,2 @@ |
789 | +from .base import * # NOQA |
790 | +from .helpers import * # NOQA |
791 | |
792 | === added file 'hooks/charmhelpers/core/services/base.py' |
793 | --- hooks/charmhelpers/core/services/base.py 1970-01-01 00:00:00 +0000 |
794 | +++ hooks/charmhelpers/core/services/base.py 2014-10-29 02:58:49 +0000 |
795 | @@ -0,0 +1,313 @@ |
796 | +import os |
797 | +import re |
798 | +import json |
799 | +from collections import Iterable |
800 | + |
801 | +from charmhelpers.core import host |
802 | +from charmhelpers.core import hookenv |
803 | + |
804 | + |
805 | +__all__ = ['ServiceManager', 'ManagerCallback', |
806 | + 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports', |
807 | + 'service_restart', 'service_stop'] |
808 | + |
809 | + |
810 | +class ServiceManager(object): |
811 | + def __init__(self, services=None): |
812 | + """ |
813 | + Register a list of services, given their definitions. |
814 | + |
815 | + Service definitions are dicts in the following formats (all keys except |
816 | + 'service' are optional):: |
817 | + |
818 | + { |
819 | + "service": <service name>, |
820 | + "required_data": <list of required data contexts>, |
821 | + "provided_data": <list of provided data contexts>, |
822 | + "data_ready": <one or more callbacks>, |
823 | + "data_lost": <one or more callbacks>, |
824 | + "start": <one or more callbacks>, |
825 | + "stop": <one or more callbacks>, |
826 | + "ports": <list of ports to manage>, |
827 | + } |
828 | + |
829 | + The 'required_data' list should contain dicts of required data (or |
830 | + dependency managers that act like dicts and know how to collect the data). |
831 | + Only when all items in the 'required_data' list are populated are the list |
832 | + of 'data_ready' and 'start' callbacks executed. See `is_ready()` for more |
833 | + information. |
834 | + |
835 | + The 'provided_data' list should contain relation data providers, most likely |
836 | + a subclass of :class:`charmhelpers.core.services.helpers.RelationContext`, |
837 | + that will indicate a set of data to set on a given relation. |
838 | + |
839 | + The 'data_ready' value should be either a single callback, or a list of |
840 | + callbacks, to be called when all items in 'required_data' pass `is_ready()`. |
841 | + Each callback will be called with the service name as the only parameter. |
842 | + After all of the 'data_ready' callbacks are called, the 'start' callbacks |
843 | + are fired. |
844 | + |
845 | + The 'data_lost' value should be either a single callback, or a list of |
846 | + callbacks, to be called when a 'required_data' item no longer passes |
847 | + `is_ready()`. Each callback will be called with the service name as the |
848 | + only parameter. After all of the 'data_lost' callbacks are called, |
849 | + the 'stop' callbacks are fired. |
850 | + |
851 | + The 'start' value should be either a single callback, or a list of |
852 | + callbacks, to be called when starting the service, after the 'data_ready' |
853 | + callbacks are complete. Each callback will be called with the service |
854 | + name as the only parameter. This defaults to |
855 | + `[host.service_start, services.open_ports]`. |
856 | + |
857 | + The 'stop' value should be either a single callback, or a list of |
858 | + callbacks, to be called when stopping the service. If the service is |
859 | + being stopped because it no longer has all of its 'required_data', this |
860 | + will be called after all of the 'data_lost' callbacks are complete. |
861 | + Each callback will be called with the service name as the only parameter. |
862 | + This defaults to `[services.close_ports, host.service_stop]`. |
863 | + |
864 | + The 'ports' value should be a list of ports to manage. The default |
865 | + 'start' handler will open the ports after the service is started, |
866 | + and the default 'stop' handler will close the ports prior to stopping |
867 | + the service. |
868 | + |
869 | + |
870 | + Examples: |
871 | + |
872 | + The following registers an Upstart service called bingod that depends on |
873 | + a mongodb relation and which runs a custom `db_migrate` function prior to |
874 | + restarting the service, and a Runit service called spadesd:: |
875 | + |
876 | + manager = services.ServiceManager([ |
877 | + { |
878 | + 'service': 'bingod', |
879 | + 'ports': [80, 443], |
880 | + 'required_data': [MongoRelation(), config(), {'my': 'data'}], |
881 | + 'data_ready': [ |
882 | + services.template(source='bingod.conf'), |
883 | + services.template(source='bingod.ini', |
884 | + target='/etc/bingod.ini', |
885 | + owner='bingo', perms=0400), |
886 | + ], |
887 | + }, |
888 | + { |
889 | + 'service': 'spadesd', |
890 | + 'data_ready': services.template(source='spadesd_run.j2', |
891 | + target='/etc/sv/spadesd/run', |
892 | + perms=0555), |
893 | + 'start': runit_start, |
894 | + 'stop': runit_stop, |
895 | + }, |
896 | + ]) |
897 | + manager.manage() |
898 | + """ |
899 | + self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json') |
900 | + self._ready = None |
901 | + self.services = {} |
902 | + for service in services or []: |
903 | + service_name = service['service'] |
904 | + self.services[service_name] = service |
905 | + |
906 | + def manage(self): |
907 | + """ |
908 | + Handle the current hook by doing The Right Thing with the registered services. |
909 | + """ |
910 | + hook_name = hookenv.hook_name() |
911 | + if hook_name == 'stop': |
912 | + self.stop_services() |
913 | + else: |
914 | + self.provide_data() |
915 | + self.reconfigure_services() |
916 | + cfg = hookenv.config() |
917 | + if cfg.implicit_save: |
918 | + cfg.save() |
919 | + |
920 | + def provide_data(self): |
921 | + """ |
922 | + Set the relation data for each provider in the ``provided_data`` list. |
923 | + |
924 | + A provider must have a `name` attribute, which indicates which relation |
925 | + to set data on, and a `provide_data()` method, which returns a dict of |
926 | + data to set. |
927 | + """ |
928 | + hook_name = hookenv.hook_name() |
929 | + for service in self.services.values(): |
930 | + for provider in service.get('provided_data', []): |
931 | + if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name): |
932 | + data = provider.provide_data() |
933 | + _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data |
934 | + if _ready: |
935 | + hookenv.relation_set(None, data) |
936 | + |
937 | + def reconfigure_services(self, *service_names): |
938 | + """ |
939 | + Update all files for one or more registered services, and, |
940 | + if ready, optionally restart them. |
941 | + |
942 | + If no service names are given, reconfigures all registered services. |
943 | + """ |
944 | + for service_name in service_names or self.services.keys(): |
945 | + if self.is_ready(service_name): |
946 | + self.fire_event('data_ready', service_name) |
947 | + self.fire_event('start', service_name, default=[ |
948 | + service_restart, |
949 | + manage_ports]) |
950 | + self.save_ready(service_name) |
951 | + else: |
952 | + if self.was_ready(service_name): |
953 | + self.fire_event('data_lost', service_name) |
954 | + self.fire_event('stop', service_name, default=[ |
955 | + manage_ports, |
956 | + service_stop]) |
957 | + self.save_lost(service_name) |
958 | + |
959 | + def stop_services(self, *service_names): |
960 | + """ |
961 | + Stop one or more registered services, by name. |
962 | + |
963 | + If no service names are given, stops all registered services. |
964 | + """ |
965 | + for service_name in service_names or self.services.keys(): |
966 | + self.fire_event('stop', service_name, default=[ |
967 | + manage_ports, |
968 | + service_stop]) |
969 | + |
970 | + def get_service(self, service_name): |
971 | + """ |
972 | + Given the name of a registered service, return its service definition. |
973 | + """ |
974 | + service = self.services.get(service_name) |
975 | + if not service: |
976 | + raise KeyError('Service not registered: %s' % service_name) |
977 | + return service |
978 | + |
979 | + def fire_event(self, event_name, service_name, default=None): |
980 | + """ |
981 | + Fire a data_ready, data_lost, start, or stop event on a given service. |
982 | + """ |
983 | + service = self.get_service(service_name) |
984 | + callbacks = service.get(event_name, default) |
985 | + if not callbacks: |
986 | + return |
987 | + if not isinstance(callbacks, Iterable): |
988 | + callbacks = [callbacks] |
989 | + for callback in callbacks: |
990 | + if isinstance(callback, ManagerCallback): |
991 | + callback(self, service_name, event_name) |
992 | + else: |
993 | + callback(service_name) |
994 | + |
995 | + def is_ready(self, service_name): |
996 | + """ |
997 | + Determine if a registered service is ready, by checking its 'required_data'. |
998 | + |
999 | + A 'required_data' item can be any mapping type, and is considered ready |
1000 | + if `bool(item)` evaluates as True. |
1001 | + """ |
1002 | + service = self.get_service(service_name) |
1003 | + reqs = service.get('required_data', []) |
1004 | + return all(bool(req) for req in reqs) |
1005 | + |
1006 | + def _load_ready_file(self): |
1007 | + if self._ready is not None: |
1008 | + return |
1009 | + if os.path.exists(self._ready_file): |
1010 | + with open(self._ready_file) as fp: |
1011 | + self._ready = set(json.load(fp)) |
1012 | + else: |
1013 | + self._ready = set() |
1014 | + |
1015 | + def _save_ready_file(self): |
1016 | + if self._ready is None: |
1017 | + return |
1018 | + with open(self._ready_file, 'w') as fp: |
1019 | + json.dump(list(self._ready), fp) |
1020 | + |
1021 | + def save_ready(self, service_name): |
1022 | + """ |
1023 | + Save an indicator that the given service is now data_ready. |
1024 | + """ |
1025 | + self._load_ready_file() |
1026 | + self._ready.add(service_name) |
1027 | + self._save_ready_file() |
1028 | + |
1029 | + def save_lost(self, service_name): |
1030 | + """ |
1031 | + Save an indicator that the given service is no longer data_ready. |
1032 | + """ |
1033 | + self._load_ready_file() |
1034 | + self._ready.discard(service_name) |
1035 | + self._save_ready_file() |
1036 | + |
1037 | + def was_ready(self, service_name): |
1038 | + """ |
1039 | + Determine if the given service was previously data_ready. |
1040 | + """ |
1041 | + self._load_ready_file() |
1042 | + return service_name in self._ready |
1043 | + |
1044 | + |
1045 | +class ManagerCallback(object): |
1046 | + """ |
1047 | + Special case of a callback that takes the `ServiceManager` instance |
1048 | + in addition to the service name. |
1049 | + |
1050 | + Subclasses should implement `__call__` which should accept three parameters: |
1051 | + |
1052 | + * `manager` The `ServiceManager` instance |
1053 | + * `service_name` The name of the service it's being triggered for |
1054 | + * `event_name` The name of the event that this callback is handling |
1055 | + """ |
1056 | + def __call__(self, manager, service_name, event_name): |
1057 | + raise NotImplementedError() |
1058 | + |
1059 | + |
1060 | +class PortManagerCallback(ManagerCallback): |
1061 | + """ |
1062 | + Callback class that will open or close ports, for use as either |
1063 | + a start or stop action. |
1064 | + """ |
1065 | + def __call__(self, manager, service_name, event_name): |
1066 | + service = manager.get_service(service_name) |
1067 | + new_ports = service.get('ports', []) |
1068 | + port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name)) |
1069 | + if os.path.exists(port_file): |
1070 | + with open(port_file) as fp: |
1071 | + old_ports = fp.read().split(',') |
1072 | + for old_port in old_ports: |
1073 | + if bool(old_port): |
1074 | + old_port = int(old_port) |
1075 | + if old_port not in new_ports: |
1076 | + hookenv.close_port(old_port) |
1077 | + with open(port_file, 'w') as fp: |
1078 | + fp.write(','.join(str(port) for port in new_ports)) |
1079 | + for port in new_ports: |
1080 | + if event_name == 'start': |
1081 | + hookenv.open_port(port) |
1082 | + elif event_name == 'stop': |
1083 | + hookenv.close_port(port) |
1084 | + |
1085 | + |
1086 | +def service_stop(service_name): |
1087 | + """ |
1088 | + Wrapper around host.service_stop to prevent spurious "unknown service" |
1089 | + messages in the logs. |
1090 | + """ |
1091 | + if host.service_running(service_name): |
1092 | + host.service_stop(service_name) |
1093 | + |
1094 | + |
1095 | +def service_restart(service_name): |
1096 | + """ |
1097 | + Wrapper around host.service_restart to prevent spurious "unknown service" |
1098 | + messages in the logs. |
1099 | + """ |
1100 | + if host.service_available(service_name): |
1101 | + if host.service_running(service_name): |
1102 | + host.service_restart(service_name) |
1103 | + else: |
1104 | + host.service_start(service_name) |
1105 | + |
1106 | + |
1107 | +# Convenience aliases |
1108 | +open_ports = close_ports = manage_ports = PortManagerCallback() |
1109 | |
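Reviewer note: the `fire_event` dispatch added above accepts either a single callback or a list, and passes extra arguments only to `ManagerCallback` instances. A stand-alone sketch of that normalization (pure Python 3, using `collections.abc` rather than the `collections` import in the diff; the `demo_`/`Demo` names are illustrative, not part of charmhelpers):

```python
# Sketch of ServiceManager.fire_event's callback normalization: a bare
# callable is wrapped in a list, and ManagerCallback-style objects get
# (manager, service_name, event_name) instead of just the service name.
from collections.abc import Iterable

calls = []

def simple_cb(service_name):
    calls.append(('simple', service_name))

class DemoManagerCallback:
    """Stands in for charmhelpers' ManagerCallback in this sketch."""
    def __call__(self, manager, service_name, event_name):
        calls.append(('manager', service_name, event_name))

def fire(callbacks, service_name, event_name):
    if not isinstance(callbacks, Iterable):
        callbacks = [callbacks]          # normalize a single callable
    for cb in callbacks:
        if isinstance(cb, DemoManagerCallback):
            cb(None, service_name, event_name)
        else:
            cb(service_name)

fire(simple_cb, 'bingod', 'start')                    # single callable
fire([simple_cb, DemoManagerCallback()], 'bingod', 'stop')
print(calls)
```

This is why service definitions can freely mix plain functions with `PortManagerCallback`/`TemplateCallback` instances in the same `start`/`stop` lists.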
1110 | === added file 'hooks/charmhelpers/core/services/helpers.py' |
1111 | --- hooks/charmhelpers/core/services/helpers.py 1970-01-01 00:00:00 +0000 |
1112 | +++ hooks/charmhelpers/core/services/helpers.py 2014-10-29 02:58:49 +0000 |
1113 | @@ -0,0 +1,239 @@ |
1114 | +import os |
1115 | +import yaml |
1116 | +from charmhelpers.core import hookenv |
1117 | +from charmhelpers.core import templating |
1118 | + |
1119 | +from charmhelpers.core.services.base import ManagerCallback |
1120 | + |
1121 | + |
1122 | +__all__ = ['RelationContext', 'TemplateCallback', |
1123 | + 'render_template', 'template'] |
1124 | + |
1125 | + |
1126 | +class RelationContext(dict): |
1127 | + """ |
1128 | + Base class for a context generator that gets relation data from juju. |
1129 | + |
1130 | + Subclasses must provide the attributes `name`, which is the name of the |
1131 | + interface of interest, `interface`, which is the type of the interface of |
1132 | + interest, and `required_keys`, which is the set of keys required for the |
1133 | + relation to be considered complete. The data for all interfaces matching |
1134 | + the `name` attribute that are complete will be used to populate the dictionary |
1135 | + values (see `get_data`, below). |
1136 | + |
1137 | + The generated context will be namespaced under the relation :attr:`name`, |
1138 | + to prevent potential naming conflicts. |
1139 | + |
1140 | + :param str name: Override the relation :attr:`name`, since it can vary from charm to charm |
1141 | + :param list additional_required_keys: Extend the list of :attr:`required_keys` |
1142 | + """ |
1143 | + name = None |
1144 | + interface = None |
1145 | + required_keys = [] |
1146 | + |
1147 | + def __init__(self, name=None, additional_required_keys=None): |
1148 | + if name is not None: |
1149 | + self.name = name |
1150 | + if additional_required_keys is not None: |
1151 | + self.required_keys.extend(additional_required_keys) |
1152 | + self.get_data() |
1153 | + |
1154 | + def __bool__(self): |
1155 | + """ |
1156 | + Returns True if all of the required_keys are available. |
1157 | + """ |
1158 | + return self.is_ready() |
1159 | + |
1160 | + __nonzero__ = __bool__ |
1161 | + |
1162 | + def __repr__(self): |
1163 | + return super(RelationContext, self).__repr__() |
1164 | + |
1165 | + def is_ready(self): |
1166 | + """ |
1167 | + Returns True if all of the `required_keys` are available from any units. |
1168 | + """ |
1169 | + ready = len(self.get(self.name, [])) > 0 |
1170 | + if not ready: |
1171 | + hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG) |
1172 | + return ready |
1173 | + |
1174 | + def _is_ready(self, unit_data): |
1175 | + """ |
1176 | + Helper method that tests a set of relation data and returns True if |
1177 | + all of the `required_keys` are present. |
1178 | + """ |
1179 | + return set(unit_data.keys()).issuperset(set(self.required_keys)) |
1180 | + |
1181 | + def get_data(self): |
1182 | + """ |
1183 | + Retrieve the relation data for each unit involved in a relation and, |
1184 | + if complete, store it in a list under `self[self.name]`. This |
1185 | + is automatically called when the RelationContext is instantiated. |
1186 | + |
1187 | + The units are sorted lexicographically first by the service ID, then by |
1188 | + the unit ID. Thus, if an interface has two other services, 'db:1' |
1189 | + and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1', |
1190 | + and 'db:2' having one unit, 'mediawiki/0', all of which have a complete |
1191 | + set of data, the relation data for the units will be stored in the |
1192 | + order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'. |
1193 | + |
1194 | + If you only care about a single unit on the relation, you can just |
1195 | + access it as `{{ interface[0]['key'] }}`. However, if you can at all |
1196 | + support multiple units on a relation, you should iterate over the list, |
1197 | + like:: |
1198 | + |
1199 | + {% for unit in interface -%} |
1200 | + {{ unit['key'] }}{% if not loop.last %},{% endif %} |
1201 | + {%- endfor %} |
1202 | + |
1203 | + Note that since all sets of relation data from all related services and |
1204 | + units are in a single list, if you need to know which service or unit a |
1205 | + set of data came from, you'll need to extend this class to preserve |
1206 | + that information. |
1207 | + """ |
1208 | + if not hookenv.relation_ids(self.name): |
1209 | + return |
1210 | + |
1211 | + ns = self.setdefault(self.name, []) |
1212 | + for rid in sorted(hookenv.relation_ids(self.name)): |
1213 | + for unit in sorted(hookenv.related_units(rid)): |
1214 | + reldata = hookenv.relation_get(rid=rid, unit=unit) |
1215 | + if self._is_ready(reldata): |
1216 | + ns.append(reldata) |
1217 | + |
1218 | + def provide_data(self): |
1219 | + """ |
1220 | + Return data to be relation_set for this interface. |
1221 | + """ |
1222 | + return {} |
1223 | + |
1224 | + |
1225 | +class MysqlRelation(RelationContext): |
1226 | + """ |
1227 | + Relation context for the `mysql` interface. |
1228 | + |
1229 | + :param str name: Override the relation :attr:`name`, since it can vary from charm to charm |
1230 | + :param list additional_required_keys: Extend the list of :attr:`required_keys` |
1231 | + """ |
1232 | + name = 'db' |
1233 | + interface = 'mysql' |
1234 | + required_keys = ['host', 'user', 'password', 'database'] |
1235 | + |
1236 | + |
1237 | +class HttpRelation(RelationContext): |
1238 | + """ |
1239 | + Relation context for the `http` interface. |
1240 | + |
1241 | + :param str name: Override the relation :attr:`name`, since it can vary from charm to charm |
1242 | + :param list additional_required_keys: Extend the list of :attr:`required_keys` |
1243 | + """ |
1244 | + name = 'website' |
1245 | + interface = 'http' |
1246 | + required_keys = ['host', 'port'] |
1247 | + |
1248 | + def provide_data(self): |
1249 | + return { |
1250 | + 'host': hookenv.unit_get('private-address'), |
1251 | + 'port': 80, |
1252 | + } |
1253 | + |
1254 | + |
1255 | +class RequiredConfig(dict): |
1256 | + """ |
1257 | + Data context that loads config options with one or more mandatory options. |
1258 | + |
1259 | + Once the required options have been changed from their default values, all |
1260 | + config options will be available, namespaced under `config` to prevent |
1261 | + potential naming conflicts (for example, between a config option and a |
1262 | + relation property). |
1263 | + |
1264 | + :param list *args: List of options that must be changed from their default values. |
1265 | + """ |
1266 | + |
1267 | + def __init__(self, *args): |
1268 | + self.required_options = args |
1269 | + self['config'] = hookenv.config() |
1270 | + with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as fp: |
1271 | + self.config = yaml.load(fp).get('options', {}) |
1272 | + |
1273 | + def __bool__(self): |
1274 | + for option in self.required_options: |
1275 | + if option not in self['config']: |
1276 | + return False |
1277 | + current_value = self['config'][option] |
1278 | + default_value = self.config[option].get('default') |
1279 | + if current_value == default_value: |
1280 | + return False |
1281 | + if current_value in (None, '') and default_value in (None, ''): |
1282 | + return False |
1283 | + return True |
1284 | + |
1285 | + def __nonzero__(self): |
1286 | + return self.__bool__() |
1287 | + |
1288 | + |
1289 | +class StoredContext(dict): |
1290 | + """ |
1291 | + A data context that always returns the data that it was first created with. |
1292 | + |
1293 | + This is useful to do a one-time generation of things like passwords, that |
1294 | + will thereafter use the same value that was originally generated, instead |
1295 | + of generating a new value each time it is run. |
1296 | + """ |
1297 | + def __init__(self, file_name, config_data): |
1298 | + """ |
1299 | + If the file exists, populate `self` with the data from the file. |
1300 | + Otherwise, populate with the given data and persist it to the file. |
1301 | + """ |
1302 | + if os.path.exists(file_name): |
1303 | + self.update(self.read_context(file_name)) |
1304 | + else: |
1305 | + self.store_context(file_name, config_data) |
1306 | + self.update(config_data) |
1307 | + |
1308 | + def store_context(self, file_name, config_data): |
1309 | + if not os.path.isabs(file_name): |
1310 | + file_name = os.path.join(hookenv.charm_dir(), file_name) |
1311 | + with open(file_name, 'w') as file_stream: |
1312 | + os.fchmod(file_stream.fileno(), 0600) |
1313 | + yaml.dump(config_data, file_stream) |
1314 | + |
1315 | + def read_context(self, file_name): |
1316 | + if not os.path.isabs(file_name): |
1317 | + file_name = os.path.join(hookenv.charm_dir(), file_name) |
1318 | + with open(file_name, 'r') as file_stream: |
1319 | + data = yaml.load(file_stream) |
1320 | + if not data: |
1321 | + raise OSError("%s is empty" % file_name) |
1322 | + return data |
1323 | + |
1324 | + |
1325 | +class TemplateCallback(ManagerCallback): |
1326 | + """ |
1327 | + Callback class that will render a Jinja2 template, for use as a ready action. |
1328 | + |
1329 | + :param str source: The template source file, relative to `$CHARM_DIR/templates` |
1330 | + :param str target: The target to write the rendered template to |
1331 | + :param str owner: The owner of the rendered file |
1332 | + :param str group: The group of the rendered file |
1333 | + :param int perms: The permissions of the rendered file |
1334 | + """ |
1335 | + def __init__(self, source, target, owner='root', group='root', perms=0444): |
1336 | + self.source = source |
1337 | + self.target = target |
1338 | + self.owner = owner |
1339 | + self.group = group |
1340 | + self.perms = perms |
1341 | + |
1342 | + def __call__(self, manager, service_name, event_name): |
1343 | + service = manager.get_service(service_name) |
1344 | + context = {} |
1345 | + for ctx in service.get('required_data', []): |
1346 | + context.update(ctx) |
1347 | + templating.render(self.source, self.target, context, |
1348 | + self.owner, self.group, self.perms) |
1349 | + |
1350 | + |
1351 | +# Convenience aliases for templates |
1352 | +render_template = template = TemplateCallback |
1353 | |
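Reviewer note: the readiness test in `RelationContext._is_ready` above is just a set-superset check per unit. A stand-alone sketch (the keys used here match the `MysqlRelation` example in the diff; the dict values are illustrative):

```python
# Sketch of RelationContext._is_ready(): a unit's relation data counts as
# complete only when it carries every key listed in required_keys.
required_keys = ['host', 'user', 'password', 'database']

def unit_is_ready(unit_data):
    # set.issuperset accepts any iterable of keys
    return set(unit_data.keys()).issuperset(set(required_keys))

complete = {'host': 'db1', 'user': 'u', 'password': 'p', 'database': 'wp'}
partial = {'host': 'db1', 'user': 'u'}
print(unit_is_ready(complete), unit_is_ready(partial))
```

Only complete units are appended to `self[self.name]` by `get_data()`, which is what makes `bool(context)` usable as the whole-relation readiness signal.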
1354 | === added file 'hooks/charmhelpers/core/sysctl.py' |
1355 | --- hooks/charmhelpers/core/sysctl.py 1970-01-01 00:00:00 +0000 |
1356 | +++ hooks/charmhelpers/core/sysctl.py 2014-10-29 02:58:49 +0000 |
1357 | @@ -0,0 +1,34 @@ |
1358 | +#!/usr/bin/env python |
1359 | +# -*- coding: utf-8 -*- |
1360 | + |
1361 | +__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>' |
1362 | + |
1363 | +import yaml |
1364 | + |
1365 | +from subprocess import check_call |
1366 | + |
1367 | +from charmhelpers.core.hookenv import ( |
1368 | + log, |
1369 | + DEBUG, |
1370 | +) |
1371 | + |
1372 | + |
1373 | +def create(sysctl_dict, sysctl_file): |
1374 | + """Creates a sysctl.conf file from a YAML associative array |
1375 | + |
1376 | + :param sysctl_dict: a YAML string of sysctl options eg "{ 'kernel.max_pid': 1337 }" |
1377 | + :type sysctl_dict: str or unicode |
1378 | + :param sysctl_file: path to the sysctl file to be saved |
1379 | + :type sysctl_file: str or unicode |
1380 | + :returns: None |
1381 | + """ |
1382 | + sysctl_dict = yaml.load(sysctl_dict) |
1383 | + |
1384 | + with open(sysctl_file, "w") as fd: |
1385 | + for key, value in sysctl_dict.items(): |
1386 | + fd.write("{}={}\n".format(key, value)) |
1387 | + |
1388 | + log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict), |
1389 | + level=DEBUG) |
1390 | + |
1391 | + check_call(["sysctl", "-p", sysctl_file]) |
1392 | |
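Reviewer note: despite the original docstring wording, `create()` above `yaml.load()`s its first argument, so it actually takes a YAML string. The file-rendering step it then performs is trivial; a stand-alone sketch (the sysctl keys are illustrative, and the final `sysctl -p` call is omitted since it needs root):

```python
# Sketch of the rendering step in sysctl.create(): each parsed key/value
# pair becomes a "key=value" line in the target sysctl file.
sysctl_dict = {"net.ipv4.ip_forward": 1, "vm.swappiness": 10}

lines = "".join("{}={}\n".format(key, value)
                for key, value in sysctl_dict.items())
print(lines)
```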
1393 | === added file 'hooks/charmhelpers/core/templating.py' |
1394 | --- hooks/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000 |
1395 | +++ hooks/charmhelpers/core/templating.py 2014-10-29 02:58:49 +0000 |
1396 | @@ -0,0 +1,51 @@ |
1397 | +import os |
1398 | + |
1399 | +from charmhelpers.core import host |
1400 | +from charmhelpers.core import hookenv |
1401 | + |
1402 | + |
1403 | +def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None): |
1404 | + """ |
1405 | + Render a template. |
1406 | + |
1407 | + The `source` path, if not absolute, is relative to the `templates_dir`. |
1408 | + |
1409 | + The `target` path should be absolute. |
1410 | + |
1411 | + The context should be a dict containing the values to be replaced in the |
1412 | + template. |
1413 | + |
1414 | + The `owner`, `group`, and `perms` options will be passed to `write_file`. |
1415 | + |
1416 | + If omitted, `templates_dir` defaults to the `templates` folder in the charm. |
1417 | + |
1418 | + Note: Using this requires python-jinja2; if it is not installed, calling |
1419 | + this will attempt to use charmhelpers.fetch.apt_install to install it. |
1420 | + """ |
1421 | + try: |
1422 | + from jinja2 import FileSystemLoader, Environment, exceptions |
1423 | + except ImportError: |
1424 | + try: |
1425 | + from charmhelpers.fetch import apt_install |
1426 | + except ImportError: |
1427 | + hookenv.log('Could not import jinja2, and could not import ' |
1428 | + 'charmhelpers.fetch to install it', |
1429 | + level=hookenv.ERROR) |
1430 | + raise |
1431 | + apt_install('python-jinja2', fatal=True) |
1432 | + from jinja2 import FileSystemLoader, Environment, exceptions |
1433 | + |
1434 | + if templates_dir is None: |
1435 | + templates_dir = os.path.join(hookenv.charm_dir(), 'templates') |
1436 | + loader = Environment(loader=FileSystemLoader(templates_dir)) |
1437 | + try: |
1438 | + source = source |
1439 | + template = loader.get_template(source) |
1440 | + except exceptions.TemplateNotFound as e: |
1441 | + hookenv.log('Could not load template %s from %s.' % |
1442 | + (source, templates_dir), |
1443 | + level=hookenv.ERROR) |
1444 | + raise e |
1445 | + content = template.render(context) |
1446 | + host.mkdir(os.path.dirname(target)) |
1447 | + host.write_file(target, content, owner, group, perms) |
1448 | |
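Reviewer note: `templating.render()` above is the Jinja2 pipeline the elasticsearch playbook can lean on. A stand-alone sketch of the same substitute-and-write idea, using `string.Template` so it runs without python-jinja2 (the template text and context keys below are illustrative, not from the charm):

```python
# Sketch of templating.render(): load a template, substitute a context
# dict, then write the result out. The real helper resolves `source`
# relative to $CHARM_DIR/templates and persists the content via
# host.write_file(target, content, owner, group, perms).
from string import Template

source = Template("cluster.name: $cluster_name\nnode.name: $node_name\n")
context = {"cluster_name": "elasticsearch", "node_name": "unit-0"}

content = source.substitute(context)
print(content)
```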
1449 | === modified file 'hooks/charmhelpers/fetch/__init__.py' |
1450 | --- hooks/charmhelpers/fetch/__init__.py 2014-04-08 09:11:34 +0000 |
1451 | +++ hooks/charmhelpers/fetch/__init__.py 2014-10-29 02:58:49 +0000 |
1452 | @@ -1,4 +1,6 @@ |
1453 | import importlib |
1454 | +from tempfile import NamedTemporaryFile |
1455 | +import time |
1456 | from yaml import safe_load |
1457 | from charmhelpers.core.host import ( |
1458 | lsb_release |
1459 | @@ -12,9 +14,9 @@ |
1460 | config, |
1461 | log, |
1462 | ) |
1463 | -import apt_pkg |
1464 | import os |
1465 | |
1466 | + |
1467 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive |
1468 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
1469 | """ |
1470 | @@ -54,13 +56,69 @@ |
1471 | 'icehouse/proposed': 'precise-proposed/icehouse', |
1472 | 'precise-icehouse/proposed': 'precise-proposed/icehouse', |
1473 | 'precise-proposed/icehouse': 'precise-proposed/icehouse', |
1474 | + # Juno |
1475 | + 'juno': 'trusty-updates/juno', |
1476 | + 'trusty-juno': 'trusty-updates/juno', |
1477 | + 'trusty-juno/updates': 'trusty-updates/juno', |
1478 | + 'trusty-updates/juno': 'trusty-updates/juno', |
1479 | + 'juno/proposed': 'trusty-proposed/juno', |
1481 | + 'trusty-juno/proposed': 'trusty-proposed/juno', |
1482 | + 'trusty-proposed/juno': 'trusty-proposed/juno', |
1483 | } |
1484 | |
1485 | +# The order of this list is very important. Handlers should be listed in |
1486 | +# order from least- to most-specific URL matching. |
1487 | +FETCH_HANDLERS = ( |
1488 | + 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', |
1489 | + 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', |
1490 | + 'charmhelpers.fetch.giturl.GitUrlFetchHandler', |
1491 | +) |
1492 | + |
1493 | +APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. |
1494 | +APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks. |
1495 | +APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times. |
1496 | + |
1497 | + |
1498 | +class SourceConfigError(Exception): |
1499 | + pass |
1500 | + |
1501 | + |
1502 | +class UnhandledSource(Exception): |
1503 | + pass |
1504 | + |
1505 | + |
1506 | +class AptLockError(Exception): |
1507 | + pass |
1508 | + |
1509 | + |
1510 | +class BaseFetchHandler(object): |
1511 | + |
1512 | + """Base class for FetchHandler implementations in fetch plugins""" |
1513 | + |
1514 | + def can_handle(self, source): |
1515 | + """Returns True if the source can be handled. Otherwise returns |
1516 | + a string explaining why it cannot""" |
1517 | + return "Wrong source type" |
1518 | + |
1519 | + def install(self, source): |
1520 | + """Try to download and unpack the source. Return the path to the |
1521 | + unpacked files or raise UnhandledSource.""" |
1522 | + raise UnhandledSource("Wrong source type {}".format(source)) |
1523 | + |
1524 | + def parse_url(self, url): |
1525 | + return urlparse(url) |
1526 | + |
1527 | + def base_url(self, url): |
1528 | + """Return url without querystring or fragment""" |
1529 | + parts = list(self.parse_url(url)) |
1530 | + parts[4:] = ['' for i in parts[4:]] |
1531 | + return urlunparse(parts) |
1532 | + |
1533 | |
1534 | def filter_installed_packages(packages): |
1535 | """Returns a list of packages that require installation""" |
1536 | - apt_pkg.init() |
1537 | - cache = apt_pkg.Cache() |
1538 | + cache = apt_cache() |
1539 | _pkgs = [] |
1540 | for package in packages: |
1541 | try: |
1542 | @@ -73,6 +131,16 @@ |
1543 | return _pkgs |
1544 | |
1545 | |
1546 | +def apt_cache(in_memory=True): |
1547 | + """Build and return an apt cache""" |
1548 | + import apt_pkg |
1549 | + apt_pkg.init() |
1550 | + if in_memory: |
1551 | + apt_pkg.config.set("Dir::Cache::pkgcache", "") |
1552 | + apt_pkg.config.set("Dir::Cache::srcpkgcache", "") |
1553 | + return apt_pkg.Cache() |
1554 | + |
1555 | + |
1556 | def apt_install(packages, options=None, fatal=False): |
1557 | """Install one or more packages""" |
1558 | if options is None: |
1559 | @@ -87,14 +155,7 @@ |
1560 | cmd.extend(packages) |
1561 | log("Installing {} with options: {}".format(packages, |
1562 | options)) |
1563 | - env = os.environ.copy() |
1564 | - if 'DEBIAN_FRONTEND' not in env: |
1565 | - env['DEBIAN_FRONTEND'] = 'noninteractive' |
1566 | - |
1567 | - if fatal: |
1568 | - subprocess.check_call(cmd, env=env) |
1569 | - else: |
1570 | - subprocess.call(cmd, env=env) |
1571 | + _run_apt_command(cmd, fatal) |
1572 | |
1573 | |
1574 | def apt_upgrade(options=None, fatal=False, dist=False): |
1575 | @@ -109,24 +170,13 @@ |
1576 | else: |
1577 | cmd.append('upgrade') |
1578 | log("Upgrading with options: {}".format(options)) |
1579 | - |
1580 | - env = os.environ.copy() |
1581 | - if 'DEBIAN_FRONTEND' not in env: |
1582 | - env['DEBIAN_FRONTEND'] = 'noninteractive' |
1583 | - |
1584 | - if fatal: |
1585 | - subprocess.check_call(cmd, env=env) |
1586 | - else: |
1587 | - subprocess.call(cmd, env=env) |
1588 | + _run_apt_command(cmd, fatal) |
1589 | |
1590 | |
1591 | def apt_update(fatal=False): |
1592 | """Update local apt cache""" |
1593 | cmd = ['apt-get', 'update'] |
1594 | - if fatal: |
1595 | - subprocess.check_call(cmd) |
1596 | - else: |
1597 | - subprocess.call(cmd) |
1598 | + _run_apt_command(cmd, fatal) |
1599 | |
1600 | |
1601 | def apt_purge(packages, fatal=False): |
1602 | @@ -137,10 +187,7 @@ |
1603 | else: |
1604 | cmd.extend(packages) |
1605 | log("Purging {}".format(packages)) |
1606 | - if fatal: |
1607 | - subprocess.check_call(cmd) |
1608 | - else: |
1609 | - subprocess.call(cmd) |
1610 | + _run_apt_command(cmd, fatal) |
1611 | |
1612 | |
1613 | def apt_hold(packages, fatal=False): |
1614 | @@ -151,6 +198,7 @@ |
1615 | else: |
1616 | cmd.extend(packages) |
1617 | log("Holding {}".format(packages)) |
1618 | + |
1619 | if fatal: |
1620 | subprocess.check_call(cmd) |
1621 | else: |
1622 | @@ -158,6 +206,29 @@ |
1623 | |
1624 | |
1625 | def add_source(source, key=None): |
1626 | + """Add a package source to this system. |
1627 | + |
1628 | + @param source: a URL or sources.list entry, as supported by |
1629 | + add-apt-repository(1). Examples:: |
1630 | + |
1631 | + ppa:charmers/example |
1632 | + deb https://stub:key@private.example.com/ubuntu trusty main |
1633 | + |
1634 | + In addition: |
1635 | + 'proposed:' may be used to enable the standard 'proposed' |
1636 | + pocket for the release. |
1637 | + 'cloud:' may be used to activate official cloud archive pockets, |
1638 | + such as 'cloud:icehouse' |
1639 | + 'distro' may be used as a noop |
1640 | + |
1641 | + @param key: A key to be added to the system's APT keyring and used |
1642 | + to verify the signatures on packages. Ideally, this should be an |
1643 | + ASCII format GPG public key including the block headers. A GPG key |
1644 | + id may also be used, but be aware that only insecure protocols are |
1645 | + available to retrieve the actual public key from a public keyserver |
1646 | + placing your Juju environment at risk. ppa and cloud archive keys |
1647 | + are securely added automtically, so sould not be provided. |
1648 | + """ |
1649 | if source is None: |
1650 | log('Source is not present. Skipping') |
1651 | return |
1652 | @@ -182,76 +253,98 @@ |
1653 | release = lsb_release()['DISTRIB_CODENAME'] |
1654 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: |
1655 | apt.write(PROPOSED_POCKET.format(release)) |
1656 | + elif source == 'distro': |
1657 | + pass |
1658 | + else: |
1659 | + log("Unknown source: {!r}".format(source)) |
1660 | + |
1661 | if key: |
1662 | - subprocess.check_call(['apt-key', 'adv', '--keyserver', |
1663 | - 'keyserver.ubuntu.com', '--recv', |
1664 | - key]) |
1665 | - |
1666 | - |
1667 | -class SourceConfigError(Exception): |
1668 | - pass |
1669 | + if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key: |
1670 | + with NamedTemporaryFile() as key_file: |
1671 | + key_file.write(key) |
1672 | + key_file.flush() |
1673 | + key_file.seek(0) |
1674 | + subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file) |
1675 | + else: |
1676 | + # Note that hkp: is in no way a secure protocol. Using a |
1677 | + # GPG key id is pointless from a security POV unless you |
1678 | + # absolutely trust your network and DNS. |
1679 | + subprocess.check_call(['apt-key', 'adv', '--keyserver', |
1680 | + 'hkp://keyserver.ubuntu.com:80', '--recv', |
1681 | + key]) |
1682 | |
1683 | |
1684 | def configure_sources(update=False, |
1685 | sources_var='install_sources', |
1686 | keys_var='install_keys'): |
1687 | """ |
1688 | - Configure multiple sources from charm configuration |
1689 | + Configure multiple sources from charm configuration. |
1690 | + |
1691 | + The lists are encoded as yaml fragments in the configuration. |
1692 | + The fragment needs to be included as a string. Sources and their |
1693 | + corresponding keys are of the types supported by add_source(). |
1694 | |
1695 | Example config: |
1696 | - install_sources: |
1697 | + install_sources: | |
1698 | - "ppa:foo" |
1699 | - "http://example.com/repo precise main" |
1700 | - install_keys: |
1701 | + install_keys: | |
1702 | - null |
1703 | - "a1b2c3d4" |
1704 | |
1705 | Note that 'null' (a.k.a. None) should not be quoted. |
1706 | """ |
1707 | - sources = safe_load(config(sources_var)) |
1708 | - keys = config(keys_var) |
1709 | - if keys is not None: |
1710 | - keys = safe_load(keys) |
1711 | - if isinstance(sources, basestring) and ( |
1712 | - keys is None or isinstance(keys, basestring)): |
1713 | - add_source(sources, keys) |
1714 | + sources = safe_load((config(sources_var) or '').strip()) or [] |
1715 | + keys = safe_load((config(keys_var) or '').strip()) or None |
1716 | + |
1717 | + if isinstance(sources, basestring): |
1718 | + sources = [sources] |
1719 | + |
1720 | + if keys is None: |
1721 | + for source in sources: |
1722 | + add_source(source, None) |
1723 | else: |
1724 | - if not len(sources) == len(keys): |
1725 | - msg = 'Install sources and keys lists are different lengths' |
1726 | - raise SourceConfigError(msg) |
1727 | - for src_num in range(len(sources)): |
1728 | - add_source(sources[src_num], keys[src_num]) |
1729 | + if isinstance(keys, basestring): |
1730 | + keys = [keys] |
1731 | + |
1732 | + if len(sources) != len(keys): |
1733 | + raise SourceConfigError( |
1734 | + 'Install sources and keys lists are different lengths') |
1735 | + for source, key in zip(sources, keys): |
1736 | + add_source(source, key) |
1737 | if update: |
1738 | apt_update(fatal=True) |
1739 | |
1740 | -# The order of this list is very important. Handlers should be listed in from |
1741 | -# least- to most-specific URL matching. |
1742 | -FETCH_HANDLERS = ( |
1743 | - 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', |
1744 | - 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', |
1745 | -) |
1746 | - |
1747 | - |
1748 | -class UnhandledSource(Exception): |
1749 | - pass |
1750 | - |
1751 | - |
1752 | -def install_remote(source): |
1753 | + |
1754 | +def install_remote(source, *args, **kwargs): |
1755 | """ |
1756 | Install a file tree from a remote source |
1757 | |
1758 | The specified source should be a url of the form: |
1759 | scheme://[host]/path[#[option=value][&...]] |
1760 | |
1761 | - Schemes supported are based on this modules submodules |
1762 | - Options supported are submodule-specific""" |
1763 | + Schemes supported are based on this module's submodules. |
1764 | + Options supported are submodule-specific. |
1765 | + Additional arguments are passed through to the submodule. |
1766 | + |
1767 | + For example:: |
1768 | + |
1769 | + dest = install_remote('http://example.com/archive.tgz', |
1770 | + checksum='deadbeef', |
1771 | + hash_type='sha1') |
1772 | + |
1773 | + This will download `archive.tgz`, validate it using SHA1 and, if |
1774 | + the file is ok, extract it and return the directory in which it |
1775 | + was extracted. If the checksum fails, it will raise |
1776 | + :class:`charmhelpers.core.host.ChecksumError`. |
1777 | + """ |
1778 | # We ONLY check for True here because can_handle may return a string |
1779 | # explaining why it can't handle a given source. |
1780 | handlers = [h for h in plugins() if h.can_handle(source) is True] |
1781 | installed_to = None |
1782 | for handler in handlers: |
1783 | try: |
1784 | - installed_to = handler.install(source) |
1785 | + installed_to = handler.install(source, *args, **kwargs) |
1786 | except UnhandledSource: |
1787 | pass |
1788 | if not installed_to: |
1789 | @@ -265,30 +358,6 @@ |
1790 | return install_remote(source) |
1791 | |
1792 | |
1793 | -class BaseFetchHandler(object): |
1794 | - |
1795 | - """Base class for FetchHandler implementations in fetch plugins""" |
1796 | - |
1797 | - def can_handle(self, source): |
1798 | - """Returns True if the source can be handled. Otherwise returns |
1799 | - a string explaining why it cannot""" |
1800 | - return "Wrong source type" |
1801 | - |
1802 | - def install(self, source): |
1803 | - """Try to download and unpack the source. Return the path to the |
1804 | - unpacked files or raise UnhandledSource.""" |
1805 | - raise UnhandledSource("Wrong source type {}".format(source)) |
1806 | - |
1807 | - def parse_url(self, url): |
1808 | - return urlparse(url) |
1809 | - |
1810 | - def base_url(self, url): |
1811 | - """Return url without querystring or fragment""" |
1812 | - parts = list(self.parse_url(url)) |
1813 | - parts[4:] = ['' for i in parts[4:]] |
1814 | - return urlunparse(parts) |
1815 | - |
1816 | - |
1817 | def plugins(fetch_handlers=None): |
1818 | if not fetch_handlers: |
1819 | fetch_handlers = FETCH_HANDLERS |
1820 | @@ -306,3 +375,40 @@ |
1821 | log("FetchHandler {} not found, skipping plugin".format( |
1822 | handler_name)) |
1823 | return plugin_list |
1824 | + |
1825 | + |
1826 | +def _run_apt_command(cmd, fatal=False): |
1827 | + """ |
1828 | + Run an APT command, checking output and retrying if the fatal flag is set |
1829 | + to True. |
1830 | + |
1831 | + :param: cmd: str: The apt command to run. |
1832 | + :param: fatal: bool: Whether the command's output should be checked and |
1833 | + retried. |
1834 | + """ |
1835 | + env = os.environ.copy() |
1836 | + |
1837 | + if 'DEBIAN_FRONTEND' not in env: |
1838 | + env['DEBIAN_FRONTEND'] = 'noninteractive' |
1839 | + |
1840 | + if fatal: |
1841 | + retry_count = 0 |
1842 | + result = None |
1843 | + |
1844 | + # If the command is considered "fatal", we need to retry if the apt |
1845 | + # lock was not acquired. |
1846 | + |
1847 | + while result is None or result == APT_NO_LOCK: |
1848 | + try: |
1849 | + result = subprocess.check_call(cmd, env=env) |
1850 | + except subprocess.CalledProcessError, e: |
1851 | + retry_count = retry_count + 1 |
1852 | + if retry_count > APT_NO_LOCK_RETRY_COUNT: |
1853 | + raise |
1854 | + result = e.returncode |
1855 | + log("Couldn't acquire DPKG lock. Will retry in {} seconds." |
1856 | + "".format(APT_NO_LOCK_RETRY_DELAY)) |
1857 | + time.sleep(APT_NO_LOCK_RETRY_DELAY) |
1858 | + |
1859 | + else: |
1860 | + subprocess.call(cmd, env=env) |
1861 | |
1862 | === modified file 'hooks/charmhelpers/fetch/archiveurl.py' |
1863 | --- hooks/charmhelpers/fetch/archiveurl.py 2014-04-08 09:11:34 +0000 |
1864 | +++ hooks/charmhelpers/fetch/archiveurl.py 2014-10-29 02:58:49 +0000 |
1865 | @@ -1,6 +1,8 @@ |
1866 | import os |
1867 | import urllib2 |
1868 | +from urllib import urlretrieve |
1869 | import urlparse |
1870 | +import hashlib |
1871 | |
1872 | from charmhelpers.fetch import ( |
1873 | BaseFetchHandler, |
1874 | @@ -10,11 +12,19 @@ |
1875 | get_archive_handler, |
1876 | extract, |
1877 | ) |
1878 | -from charmhelpers.core.host import mkdir |
1879 | +from charmhelpers.core.host import mkdir, check_hash |
1880 | |
1881 | |
1882 | class ArchiveUrlFetchHandler(BaseFetchHandler): |
1883 | - """Handler for archives via generic URLs""" |
1884 | + """ |
1885 | + Handler to download archive files from arbitrary URLs. |
1886 | + |
1887 | + Can fetch from http, https, ftp, and file URLs. |
1888 | + |
1889 | + Can install either tarballs (.tar, .tgz, .tbz2, etc) or zip files. |
1890 | + |
1891 | + Installs the contents of the archive in $CHARM_DIR/fetched/. |
1892 | + """ |
1893 | def can_handle(self, source): |
1894 | url_parts = self.parse_url(source) |
1895 | if url_parts.scheme not in ('http', 'https', 'ftp', 'file'): |
1896 | @@ -24,6 +34,12 @@ |
1897 | return False |
1898 | |
1899 | def download(self, source, dest): |
1900 | + """ |
1901 | + Download an archive file. |
1902 | + |
1903 | + :param str source: URL pointing to an archive file. |
1904 | + :param str dest: Local path location to download archive file to. |
1905 | + """ |
1906 | # propogate all exceptions |
1907 | # URLError, OSError, etc |
1908 | proto, netloc, path, params, query, fragment = urlparse.urlparse(source) |
1909 | @@ -48,7 +64,30 @@ |
1910 | os.unlink(dest) |
1911 | raise e |
1912 | |
1913 | - def install(self, source): |
1914 | + # Mandatory file validation via Sha1 or MD5 hashing. |
1915 | + def download_and_validate(self, url, hashsum, validate="sha1"): |
1916 | + tempfile, headers = urlretrieve(url) |
1917 | + check_hash(tempfile, hashsum, validate) |
1918 | + return tempfile |
1919 | + |
1920 | + def install(self, source, dest=None, checksum=None, hash_type='sha1'): |
1921 | + """ |
1922 | + Download and install an archive file, with optional checksum validation. |
1923 | + |
1924 | + The checksum can also be given on the `source` URL's fragment. |
1925 | + For example:: |
1926 | + |
1927 | + handler.install('http://example.com/file.tgz#sha1=deadbeef') |
1928 | + |
1929 | + :param str source: URL pointing to an archive file. |
1930 | + :param str dest: Local destination path to install to. If not given, |
1931 | + installs to `$CHARM_DIR/archives/archive_file_name`. |
1932 | + :param str checksum: If given, validate the archive file after download. |
1933 | + :param str hash_type: Algorithm used to generate `checksum`. |
1934 | + Can be any hash algorithm supported by :mod:`hashlib`, |
1935 | + such as md5, sha1, sha256, sha512, etc. |
1936 | + |
1937 | + """ |
1938 | url_parts = self.parse_url(source) |
1939 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') |
1940 | if not os.path.exists(dest_dir): |
1941 | @@ -60,4 +99,10 @@ |
1942 | raise UnhandledSource(e.reason) |
1943 | except OSError as e: |
1944 | raise UnhandledSource(e.strerror) |
1945 | - return extract(dld_file) |
1946 | + options = urlparse.parse_qs(url_parts.fragment) |
1947 | + for key, value in options.items(): |
1948 | + if key in hashlib.algorithms: |
1949 | + check_hash(dld_file, value, key) |
1950 | + if checksum: |
1951 | + check_hash(dld_file, checksum, hash_type) |
1952 | + return extract(dld_file, dest) |
1953 | |
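The new checksum handling in `ArchiveUrlFetchHandler.install()` is worth seeing end to end: hash options can arrive either as an explicit `checksum`/`hash_type` pair or embedded in the URL fragment (`#sha1=...`). A minimal Python 3 sketch of that flow, assuming a simplified `check_hash` that hashes raw bytes rather than a file path (the real one lives in `charmhelpers.core.host`):

```python
import hashlib
from urllib.parse import urlparse, parse_qs


def check_hash(data, expected, hash_type="sha1"):
    """Simplified stand-in for charmhelpers.core.host.check_hash:
    hash the given bytes and compare against the expected hex digest."""
    digest = hashlib.new(hash_type, data).hexdigest()
    if digest != expected:
        raise ValueError("checksum mismatch: %s != %s" % (digest, expected))


def validate_from_fragment(url, data):
    """Apply any algo=digest pairs found in the URL fragment, as the
    new install() does for URLs like http://host/file.tgz#sha1=deadbeef."""
    options = parse_qs(urlparse(url).fragment)
    for algo, values in options.items():
        if algo in hashlib.algorithms_available:
            check_hash(data, values[0], algo)
```

Note `parse_qs` returns a list per key; this sketch validates against the first entry, which is an assumption about intent rather than a copy of the diff's exact call.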
1954 | === modified file 'hooks/charmhelpers/fetch/bzrurl.py' |
1955 | --- hooks/charmhelpers/fetch/bzrurl.py 2014-02-06 12:54:59 +0000 |
1956 | +++ hooks/charmhelpers/fetch/bzrurl.py 2014-10-29 02:58:49 +0000 |
1957 | @@ -39,7 +39,8 @@ |
1958 | def install(self, source): |
1959 | url_parts = self.parse_url(source) |
1960 | branch_name = url_parts.path.strip("/").split("/")[-1] |
1961 | - dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name) |
1962 | + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", |
1963 | + branch_name) |
1964 | if not os.path.exists(dest_dir): |
1965 | mkdir(dest_dir, perms=0755) |
1966 | try: |
1967 | |
1968 | === added file 'hooks/charmhelpers/fetch/giturl.py' |
1969 | --- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000 |
1970 | +++ hooks/charmhelpers/fetch/giturl.py 2014-10-29 02:58:49 +0000 |
1971 | @@ -0,0 +1,44 @@ |
1972 | +import os |
1973 | +from charmhelpers.fetch import ( |
1974 | + BaseFetchHandler, |
1975 | + UnhandledSource |
1976 | +) |
1977 | +from charmhelpers.core.host import mkdir |
1978 | + |
1979 | +try: |
1980 | + from git import Repo |
1981 | +except ImportError: |
1982 | + from charmhelpers.fetch import apt_install |
1983 | + apt_install("python-git") |
1984 | + from git import Repo |
1985 | + |
1986 | + |
1987 | +class GitUrlFetchHandler(BaseFetchHandler): |
1988 | + """Handler for git branches via generic and github URLs""" |
1989 | + def can_handle(self, source): |
1990 | + url_parts = self.parse_url(source) |
1991 | + #TODO (mattyw) no support for ssh git@ yet |
1992 | + if url_parts.scheme not in ('http', 'https', 'git'): |
1993 | + return False |
1994 | + else: |
1995 | + return True |
1996 | + |
1997 | + def clone(self, source, dest, branch): |
1998 | + if not self.can_handle(source): |
1999 | + raise UnhandledSource("Cannot handle {}".format(source)) |
2000 | + |
2001 | + repo = Repo.clone_from(source, dest) |
2002 | + repo.git.checkout(branch) |
2003 | + |
2004 | + def install(self, source, branch="master"): |
2005 | + url_parts = self.parse_url(source) |
2006 | + branch_name = url_parts.path.strip("/").split("/")[-1] |
2007 | + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", |
2008 | + branch_name) |
2009 | + if not os.path.exists(dest_dir): |
2010 | + mkdir(dest_dir, perms=0755) |
2011 | + try: |
2012 | + self.clone(source, dest_dir, branch) |
2013 | + except OSError as e: |
2014 | + raise UnhandledSource(e.strerror) |
2015 | + return dest_dir |
2016 | |
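The new git handler's URL filtering and destination naming can be illustrated with a standard-library Python 3 sketch (GitPython's `Repo.clone_from` itself is not exercised here; both helper names are hypothetical):

```python
from urllib.parse import urlparse


def can_handle_git_url(source):
    """Mirror of GitUrlFetchHandler.can_handle: accept http(s) and git
    schemes. ssh-style git@host:path URLs parse with no scheme, so they
    are rejected, matching the handler's TODO note."""
    return urlparse(source).scheme in ("http", "https", "git")


def branch_dir_name(source):
    """How install() picks its destination under $CHARM_DIR/fetched/:
    the last path component of the source URL."""
    return urlparse(source).path.strip("/").split("/")[-1]
```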
2017 | === modified file 'hooks/hooks.py' |
2018 | --- hooks/hooks.py 2014-07-08 09:18:18 +0000 |
2019 | +++ hooks/hooks.py 2014-10-29 02:58:49 +0000 |
2020 | @@ -13,6 +13,7 @@ |
2021 | 'config-changed', |
2022 | 'cluster-relation-joined', |
2023 | 'peer-relation-joined', |
2024 | + 'peer-relation-departed', |
2025 | 'nrpe-external-master-relation-changed', |
2026 | 'rest-relation-joined', |
2027 | 'start', |
2028 | |
2029 | === added symlink 'hooks/peer-relation-departed' |
2030 | === target is u'hooks.py' |
2031 | === modified file 'playbook.yaml' |
2032 | --- playbook.yaml 2014-08-19 16:47:21 +0000 |
2033 | +++ playbook.yaml 2014-10-29 02:58:49 +0000 |
2034 | @@ -13,17 +13,20 @@ |
2035 | vars: |
2036 | - service_name: "{{ local_unit.split('/')[0] }}" |
2037 | - client_relation_id: "{{ relations['client'].keys()[0] | default('') }}" |
2038 | + - peer_relation_id: "{{ relations['peer'].keys()[0] | default('') }}" |
2039 | |
2040 | tasks: |
2041 | |
2042 | - include: tasks/install-elasticsearch.yml |
2043 | + - include: tasks/peer-relations.yml |
2044 | - include: tasks/setup-ufw.yml |
2045 | tags: |
2046 | - install |
2047 | - upgrade-charm |
2048 | - client-relation-joined |
2049 | - client-relation-departed |
2050 | - - include: tasks/peer-relations.yml |
2051 | + - peer-relation-joined |
2052 | + - peer-relation-departed |
2053 | |
2054 | - name: Update configuration |
2055 | tags: |
2056 | |
2057 | === modified file 'tasks/peer-relations.yml' |
2058 | --- tasks/peer-relations.yml 2014-06-06 14:40:08 +0000 |
2059 | +++ tasks/peer-relations.yml 2014-10-29 02:58:49 +0000 |
2060 | @@ -54,4 +54,3 @@ |
2061 | - peer-relation-joined |
2062 | fail: msg="Unit failed to join cluster during peer-relation-joined" |
2063 | when: cluster_health.json.number_of_nodes == 1 and cluster_health_after_restart.json.number_of_nodes == 1 |
2064 | - |
2065 | |
2066 | === modified file 'tasks/setup-ufw.yml' |
2067 | --- tasks/setup-ufw.yml 2014-07-30 06:35:59 +0000 |
2068 | +++ tasks/setup-ufw.yml 2014-10-29 02:58:49 +0000 |
2069 | @@ -27,3 +27,14 @@ |
2070 | |
2071 | - name: Deny all other requests on 9200 |
2072 | ufw: rule=deny port=9200 |
2073 | + |
2074 | +- name: Open the firewall for all peers |
2075 | + ufw: rule=allow src={{ item.value['private-address'] }} port=9300 proto=tcp |
2076 | + with_dict: relations["peer"]["{{ peer_relation_id }}"] | default({}) |
2077 | + when: not item.key == local_unit |
2078 | + |
2079 | +# Only deny incoming on 9300 once the unit is part of a cluster. |
2080 | +- name: Deny all incoming requests on 9300 once unit is part of cluster |
2081 | + ufw: rule=deny port=9300 |
2082 | + when: not peer_relation_id == "" |
2083 | + |
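The effect of the new ufw tasks can be expressed as a pure function: given the peer relation data, emit one allow rule per remote peer on port 9300, skipping the local unit, and only deny 9300 once a peer relation exists. A Python sketch (the helper name is hypothetical; the dict shape mirrors what the playbook's `relations["peer"]` lookup provides):

```python
def peer_allow_rules(peers, local_unit, port=9300):
    """Return (private-address, port) pairs for every peer except
    the unit itself, as the with_dict/when pair in the task does."""
    return [(info["private-address"], port)
            for unit, info in sorted(peers.items())
            if unit != local_unit]
```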