Merge lp:~michael.nelson/charms/trusty/elasticsearch/upgrade-charm-helpers into lp:charms/trusty/elasticsearch

Proposed by Michael Nelson
Status: Merged
Merged at revision: 36
Proposed branch: lp:~michael.nelson/charms/trusty/elasticsearch/upgrade-charm-helpers
Merge into: lp:charms/trusty/elasticsearch
Diff against target: 2083 lines (+1393/-181)
19 files modified
README.md (+4/-0)
hooks/charmhelpers/contrib/ansible/__init__.py (+63/-57)
hooks/charmhelpers/contrib/templating/contexts.py (+19/-7)
hooks/charmhelpers/core/fstab.py (+116/-0)
hooks/charmhelpers/core/hookenv.py (+138/-7)
hooks/charmhelpers/core/host.py (+107/-13)
hooks/charmhelpers/core/services/__init__.py (+2/-0)
hooks/charmhelpers/core/services/base.py (+313/-0)
hooks/charmhelpers/core/services/helpers.py (+239/-0)
hooks/charmhelpers/core/sysctl.py (+34/-0)
hooks/charmhelpers/core/templating.py (+51/-0)
hooks/charmhelpers/fetch/__init__.py (+196/-90)
hooks/charmhelpers/fetch/archiveurl.py (+49/-4)
hooks/charmhelpers/fetch/bzrurl.py (+2/-1)
hooks/charmhelpers/fetch/giturl.py (+44/-0)
hooks/hooks.py (+1/-0)
playbook.yaml (+4/-1)
tasks/peer-relations.yml (+0/-1)
tasks/setup-ufw.yml (+11/-0)
To merge this branch: bzr merge lp:~michael.nelson/charms/trusty/elasticsearch/upgrade-charm-helpers
Reviewer: charmers (status: Pending)
Review via email: mp+239933@code.launchpad.net

Description of the change

This branch is mechanical work that:

1) Resyncs the charmhelpers library to the latest upstream revision.
2) Re-merges a branch which ensures that ufw (Uncomplicated Firewall) is also used to restrict the peer port (9300).

This branch doesn't work on its own; the follow-on branch makes the actual updates to the charm so that it works with the new charmhelpers.


Preview Diff

=== modified file 'README.md'
--- README.md 2014-08-19 16:47:21 +0000
+++ README.md 2014-10-29 02:58:49 +0000
@@ -37,6 +37,10 @@
     epoch timestamp cluster status node.total node.data shards ...
     1404728290 10:18:10 elasticsearch green 2 2 0
 
+Note that the admin port (9200) is only accessible from the instance itself
+and any clients that join. Similarly the node-to-node communication
+port (9300) is only available to other units in the elasticsearch service.
+
 See the separate HACKING.md for information about deploying this charm
 from a local repository.
 
 
=== modified file 'hooks/charmhelpers/contrib/ansible/__init__.py'
--- hooks/charmhelpers/contrib/ansible/__init__.py 2014-02-06 12:54:59 +0000
+++ hooks/charmhelpers/contrib/ansible/__init__.py 2014-10-29 02:58:49 +0000
@@ -6,58 +6,59 @@
66
7This helper enables you to declare your machine state, rather than7This helper enables you to declare your machine state, rather than
8program it procedurally (and have to test each change to your procedures).8program it procedurally (and have to test each change to your procedures).
9Your install hook can be as simple as:9Your install hook can be as simple as::
1010
11{{{11 {{{
12import charmhelpers.contrib.ansible12 import charmhelpers.contrib.ansible
1313
1414
15def install():15 def install():
16 charmhelpers.contrib.ansible.install_ansible_support()16 charmhelpers.contrib.ansible.install_ansible_support()
17 charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml')17 charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml')
18}}}18 }}}
1919
20and won't need to change (nor will its tests) when you change the machine20and won't need to change (nor will its tests) when you change the machine
21state.21state.
2222
23All of your juju config and relation-data are available as template23All of your juju config and relation-data are available as template
24variables within your playbooks and templates. An install playbook looks24variables within your playbooks and templates. An install playbook looks
25something like:25something like::
2626
27{{{27 {{{
28---28 ---
29- hosts: localhost29 - hosts: localhost
30 user: root30 user: root
3131
32 tasks:32 tasks:
33 - name: Add private repositories.33 - name: Add private repositories.
34 template:34 template:
35 src: ../templates/private-repositories.list.jinja235 src: ../templates/private-repositories.list.jinja2
36 dest: /etc/apt/sources.list.d/private.list36 dest: /etc/apt/sources.list.d/private.list
3737
38 - name: Update the cache.38 - name: Update the cache.
39 apt: update_cache=yes39 apt: update_cache=yes
4040
41 - name: Install dependencies.41 - name: Install dependencies.
42 apt: pkg={{ item }}42 apt: pkg={{ item }}
43 with_items:43 with_items:
44 - python-mimeparse44 - python-mimeparse
45 - python-webob45 - python-webob
46 - sunburnt46 - sunburnt
4747
48 - name: Setup groups.48 - name: Setup groups.
49 group: name={{ item.name }} gid={{ item.gid }}49 group: name={{ item.name }} gid={{ item.gid }}
50 with_items:50 with_items:
51 - { name: 'deploy_user', gid: 1800 }51 - { name: 'deploy_user', gid: 1800 }
52 - { name: 'service_user', gid: 1500 }52 - { name: 'service_user', gid: 1500 }
5353
54 ...54 ...
55}}}55 }}}
5656
57Read more online about playbooks[1] and standard ansible modules[2].57Read more online about `playbooks`_ and standard ansible `modules`_.
5858
59[1] http://www.ansibleworks.com/docs/playbooks.html59.. _playbooks: http://www.ansibleworks.com/docs/playbooks.html
60[2] http://www.ansibleworks.com/docs/modules.html60.. _modules: http://www.ansibleworks.com/docs/modules.html
61
61"""62"""
62import os63import os
63import subprocess64import subprocess
@@ -75,20 +76,20 @@
75ansible_vars_path = '/etc/ansible/host_vars/localhost'76ansible_vars_path = '/etc/ansible/host_vars/localhost'
7677
7778
78def install_ansible_support(from_ppa=True):79def install_ansible_support(from_ppa=True, ppa_location='ppa:rquillo/ansible'):
79 """Installs the ansible package.80 """Installs the ansible package.
8081
81 By default it is installed from the PPA [1] linked from82 By default it is installed from the `PPA`_ linked from
82 the ansible website [2].83 the ansible `website`_ or from a ppa specified by a charm config..
8384
84 [1] https://launchpad.net/~rquillo/+archive/ansible85 .. _PPA: https://launchpad.net/~rquillo/+archive/ansible
85 [2] http://www.ansibleworks.com/docs/gettingstarted.html#ubuntu-and-debian86 .. _website: http://docs.ansible.com/intro_installation.html#latest-releases-via-apt-ubuntu
8687
87 If from_ppa is false, you must ensure that the package is available88 If from_ppa is empty, you must ensure that the package is available
88 from a configured repository.89 from a configured repository.
89 """90 """
90 if from_ppa:91 if from_ppa:
91 charmhelpers.fetch.add_source('ppa:rquillo/ansible')92 charmhelpers.fetch.add_source(ppa_location)
92 charmhelpers.fetch.apt_update(fatal=True)93 charmhelpers.fetch.apt_update(fatal=True)
93 charmhelpers.fetch.apt_install('ansible')94 charmhelpers.fetch.apt_install('ansible')
94 with open(ansible_hosts_path, 'w+') as hosts_file:95 with open(ansible_hosts_path, 'w+') as hosts_file:
@@ -101,6 +102,9 @@
101 charmhelpers.contrib.templating.contexts.juju_state_to_yaml(102 charmhelpers.contrib.templating.contexts.juju_state_to_yaml(
102 ansible_vars_path, namespace_separator='__',103 ansible_vars_path, namespace_separator='__',
103 allow_hyphens_in_keys=False)104 allow_hyphens_in_keys=False)
105 # we want ansible's log output to be unbuffered
106 env = os.environ.copy()
107 env['PYTHONUNBUFFERED'] = "1"
104 call = [108 call = [
105 'ansible-playbook',109 'ansible-playbook',
106 '-c',110 '-c',
@@ -109,7 +113,7 @@
109 ]113 ]
110 if tags:114 if tags:
111 call.extend(['--tags', '{}'.format(tags)])115 call.extend(['--tags', '{}'.format(tags)])
112 subprocess.check_call(call)116 subprocess.check_call(call, env=env)
113117
114118
115class AnsibleHooks(charmhelpers.core.hookenv.Hooks):119class AnsibleHooks(charmhelpers.core.hookenv.Hooks):
@@ -119,7 +123,8 @@
119 but additionally runs the playbook with the hook-name specified123 but additionally runs the playbook with the hook-name specified
120 using --tags (ie. running all the tasks tagged with the hook-name).124 using --tags (ie. running all the tasks tagged with the hook-name).
121125
122 Example:126 Example::
127
123 hooks = AnsibleHooks(playbook_path='playbooks/my_machine_state.yaml')128 hooks = AnsibleHooks(playbook_path='playbooks/my_machine_state.yaml')
124129
125 # All the tasks within my_machine_state.yaml tagged with 'install'130 # All the tasks within my_machine_state.yaml tagged with 'install'
@@ -144,6 +149,7 @@
144 if __name__ == "__main__":149 if __name__ == "__main__":
145 # execute a hook based on the name the program is called by150 # execute a hook based on the name the program is called by
146 hooks.execute(sys.argv)151 hooks.execute(sys.argv)
152
147 """153 """
148154
149 def __init__(self, playbook_path, default_hooks=None):155 def __init__(self, playbook_path, default_hooks=None):
150156
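As a usage sketch of the resynced helper above (the playbook path and hook names are illustrative, not taken from this charm): install_ansible_support() now accepts a ppa_location, and AnsibleHooks runs the plays tagged with each registered hook name with unbuffered output:

    # Sketch of a hooks.py using the resynced ansible helper; names are
    # illustrative only.
    import sys

    import charmhelpers.contrib.ansible

    hooks = charmhelpers.contrib.ansible.AnsibleHooks(
        playbook_path='playbook.yaml',
        default_hooks=['config-changed', 'peer-relation-joined'])


    @hooks.hook('install')
    def install():
        # ppa_location is the new keyword argument added by this sync; the
        # default remains ppa:rquillo/ansible.
        charmhelpers.contrib.ansible.install_ansible_support(
            from_ppa=True, ppa_location='ppa:rquillo/ansible')


    if __name__ == '__main__':
        # Runs the registered hook, then applies the playbook with
        # --tags <hook-name> and PYTHONUNBUFFERED=1 in the environment.
        hooks.execute(sys.argv)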
=== modified file 'hooks/charmhelpers/contrib/templating/contexts.py'
--- hooks/charmhelpers/contrib/templating/contexts.py 2014-04-08 14:58:22 +0000
+++ hooks/charmhelpers/contrib/templating/contexts.py 2014-10-29 02:58:49 +0000
@@ -40,13 +40,25 @@
40 relations = charmhelpers.core.hookenv.relations_of_type(relation_type)40 relations = charmhelpers.core.hookenv.relations_of_type(relation_type)
41 relations = [dict_keys_without_hyphens(rel) for rel in relations]41 relations = [dict_keys_without_hyphens(rel) for rel in relations]
4242
43 if 'relations_deprecated' not in context:43 context['relations_full'] = charmhelpers.core.hookenv.relations()
44 context['relations_deprecated'] = {}
45 if relation_type is not None:
46 relation_type = relation_type.replace('-', '_')
47 context['relations_deprecated'][relation_type] = relations
4844
49 context['relations'] = charmhelpers.core.hookenv.relations()45 # the hookenv.relations() data structure is effectively unusable in
46 # templates and other contexts when trying to access relation data other
47 # than the current relation. So provide a more useful structure that works
48 # with any hook.
49 local_unit = charmhelpers.core.hookenv.local_unit()
50 relations = {}
51 for rname, rids in context['relations_full'].items():
52 relations[rname] = []
53 for rid, rdata in rids.items():
54 data = rdata.copy()
55 if local_unit in rdata:
56 data.pop(local_unit)
57 for unit_name, rel_data in data.items():
58 new_data = {'__relid__': rid, '__unit__': unit_name}
59 new_data.update(rel_data)
60 relations[rname].append(new_data)
61 context['relations'] = relations
5062
5163
52def juju_state_to_yaml(yaml_path, namespace_separator=':',64def juju_state_to_yaml(yaml_path, namespace_separator=':',
@@ -101,4 +113,4 @@
101 update_relations(existing_vars, namespace_separator)113 update_relations(existing_vars, namespace_separator)
102114
103 with open(yaml_path, "w+") as fp:115 with open(yaml_path, "w+") as fp:
104 fp.write(yaml.dump(existing_vars))116 fp.write(yaml.dump(existing_vars, default_flow_style=False))
105117
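To illustrate the reworked context above: relations_full keeps the raw hookenv.relations() mapping, while relations is flattened per relation name, one entry per remote unit, tagged with its relation id and unit name. A rough sketch of the resulting shape (relation and unit names invented):

    # Approximate shape of the variables written for templates/playbooks,
    # assuming one 'peer' relation with a single remote unit (names invented).
    context = {
        'relations_full': {},   # raw hookenv.relations() output, omitted here
        'relations': {
            'peer': [
                {
                    '__relid__': 'peer:0',           # relation id
                    '__unit__': 'elasticsearch/1',   # remote unit
                    'private-address': '10.0.3.57',  # that unit's settings
                },
            ],
        },
    }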
=== added file 'hooks/charmhelpers/core/fstab.py'
--- hooks/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/fstab.py 2014-10-29 02:58:49 +0000
@@ -0,0 +1,116 @@
1#!/usr/bin/env python
2# -*- coding: utf-8 -*-
3
4__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
5
6import os
7
8
9class Fstab(file):
10 """This class extends file in order to implement a file reader/writer
11 for file `/etc/fstab`
12 """
13
14 class Entry(object):
15 """Entry class represents a non-comment line on the `/etc/fstab` file
16 """
17 def __init__(self, device, mountpoint, filesystem,
18 options, d=0, p=0):
19 self.device = device
20 self.mountpoint = mountpoint
21 self.filesystem = filesystem
22
23 if not options:
24 options = "defaults"
25
26 self.options = options
27 self.d = d
28 self.p = p
29
30 def __eq__(self, o):
31 return str(self) == str(o)
32
33 def __str__(self):
34 return "{} {} {} {} {} {}".format(self.device,
35 self.mountpoint,
36 self.filesystem,
37 self.options,
38 self.d,
39 self.p)
40
41 DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
42
43 def __init__(self, path=None):
44 if path:
45 self._path = path
46 else:
47 self._path = self.DEFAULT_PATH
48 file.__init__(self, self._path, 'r+')
49
50 def _hydrate_entry(self, line):
51 # NOTE: use split with no arguments to split on any
52 # whitespace including tabs
53 return Fstab.Entry(*filter(
54 lambda x: x not in ('', None),
55 line.strip("\n").split()))
56
57 @property
58 def entries(self):
59 self.seek(0)
60 for line in self.readlines():
61 try:
62 if not line.startswith("#"):
63 yield self._hydrate_entry(line)
64 except ValueError:
65 pass
66
67 def get_entry_by_attr(self, attr, value):
68 for entry in self.entries:
69 e_attr = getattr(entry, attr)
70 if e_attr == value:
71 return entry
72 return None
73
74 def add_entry(self, entry):
75 if self.get_entry_by_attr('device', entry.device):
76 return False
77
78 self.write(str(entry) + '\n')
79 self.truncate()
80 return entry
81
82 def remove_entry(self, entry):
83 self.seek(0)
84
85 lines = self.readlines()
86
87 found = False
88 for index, line in enumerate(lines):
89 if not line.startswith("#"):
90 if self._hydrate_entry(line) == entry:
91 found = True
92 break
93
94 if not found:
95 return False
96
97 lines.remove(line)
98
99 self.seek(0)
100 self.write(''.join(lines))
101 self.truncate()
102 return True
103
104 @classmethod
105 def remove_by_mountpoint(cls, mountpoint, path=None):
106 fstab = cls(path=path)
107 entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
108 if entry:
109 return fstab.remove_entry(entry)
110 return False
111
112 @classmethod
113 def add(cls, device, mountpoint, filesystem, options=None, path=None):
114 return cls(path=path).add_entry(Fstab.Entry(device,
115 mountpoint, filesystem,
116 options=options))
0117
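The new Fstab class backs the persist flag that host.mount()/umount() gain further down in this diff. A minimal sketch (device and mountpoint are invented):

    from charmhelpers.core import host
    from charmhelpers.core.fstab import Fstab

    # Mount and persist the entry to /etc/fstab in one call; 'filesystem'
    # is the new argument and defaults to ext3.
    host.mount('/dev/vdb', '/srv/elasticsearch-data', options='noatime',
               persist=True, filesystem='ext4')

    # The Fstab helpers can also be used directly.
    Fstab.add('/dev/vdb', '/srv/elasticsearch-data', 'ext4', options='noatime')
    Fstab.remove_by_mountpoint('/srv/elasticsearch-data')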
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2014-02-06 12:54:59 +0000
+++ hooks/charmhelpers/core/hookenv.py 2014-10-29 02:58:49 +0000
@@ -25,7 +25,7 @@
25def cached(func):25def cached(func):
26 """Cache return values for multiple executions of func + args26 """Cache return values for multiple executions of func + args
2727
28 For example:28 For example::
2929
30 @cached30 @cached
31 def unit_get(attribute):31 def unit_get(attribute):
@@ -155,6 +155,127 @@
155 return os.path.basename(sys.argv[0])155 return os.path.basename(sys.argv[0])
156156
157157
158class Config(dict):
159 """A dictionary representation of the charm's config.yaml, with some
160 extra features:
161
162 - See which values in the dictionary have changed since the previous hook.
163 - For values that have changed, see what the previous value was.
164 - Store arbitrary data for use in a later hook.
165
166 NOTE: Do not instantiate this object directly - instead call
167 ``hookenv.config()``, which will return an instance of :class:`Config`.
168
169 Example usage::
170
171 >>> # inside a hook
172 >>> from charmhelpers.core import hookenv
173 >>> config = hookenv.config()
174 >>> config['foo']
175 'bar'
176 >>> # store a new key/value for later use
177 >>> config['mykey'] = 'myval'
178
179
180 >>> # user runs `juju set mycharm foo=baz`
181 >>> # now we're inside subsequent config-changed hook
182 >>> config = hookenv.config()
183 >>> config['foo']
184 'baz'
185 >>> # test to see if this val has changed since last hook
186 >>> config.changed('foo')
187 True
188 >>> # what was the previous value?
189 >>> config.previous('foo')
190 'bar'
191 >>> # keys/values that we add are preserved across hooks
192 >>> config['mykey']
193 'myval'
194
195 """
196 CONFIG_FILE_NAME = '.juju-persistent-config'
197
198 def __init__(self, *args, **kw):
199 super(Config, self).__init__(*args, **kw)
200 self.implicit_save = True
201 self._prev_dict = None
202 self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
203 if os.path.exists(self.path):
204 self.load_previous()
205
206 def __getitem__(self, key):
207 """For regular dict lookups, check the current juju config first,
208 then the previous (saved) copy. This ensures that user-saved values
209 will be returned by a dict lookup.
210
211 """
212 try:
213 return dict.__getitem__(self, key)
214 except KeyError:
215 return (self._prev_dict or {})[key]
216
217 def keys(self):
218 prev_keys = []
219 if self._prev_dict is not None:
220 prev_keys = self._prev_dict.keys()
221 return list(set(prev_keys + dict.keys(self)))
222
223 def load_previous(self, path=None):
224 """Load previous copy of config from disk.
225
226 In normal usage you don't need to call this method directly - it
227 is called automatically at object initialization.
228
229 :param path:
230
231 File path from which to load the previous config. If `None`,
232 config is loaded from the default location. If `path` is
233 specified, subsequent `save()` calls will write to the same
234 path.
235
236 """
237 self.path = path or self.path
238 with open(self.path) as f:
239 self._prev_dict = json.load(f)
240
241 def changed(self, key):
242 """Return True if the current value for this key is different from
243 the previous value.
244
245 """
246 if self._prev_dict is None:
247 return True
248 return self.previous(key) != self.get(key)
249
250 def previous(self, key):
251 """Return previous value for this key, or None if there
252 is no previous value.
253
254 """
255 if self._prev_dict:
256 return self._prev_dict.get(key)
257 return None
258
259 def save(self):
260 """Save this config to disk.
261
262 If the charm is using the :mod:`Services Framework <services.base>`
263 or :meth:'@hook <Hooks.hook>' decorator, this
264 is called automatically at the end of successful hook execution.
265 Otherwise, it should be called directly by user code.
266
267 To disable automatic saves, set ``implicit_save=False`` on this
268 instance.
269
270 """
271 if self._prev_dict:
272 for k, v in self._prev_dict.iteritems():
273 if k not in self:
274 self[k] = v
275 with open(self.path, 'w') as f:
276 json.dump(self, f)
277
278
158@cached279@cached
159def config(scope=None):280def config(scope=None):
160 """Juju charm configuration"""281 """Juju charm configuration"""
@@ -163,7 +284,10 @@
163 config_cmd_line.append(scope)284 config_cmd_line.append(scope)
164 config_cmd_line.append('--format=json')285 config_cmd_line.append('--format=json')
165 try:286 try:
166 return json.loads(subprocess.check_output(config_cmd_line))287 config_data = json.loads(subprocess.check_output(config_cmd_line))
288 if scope is not None:
289 return config_data
290 return Config(config_data)
167 except ValueError:291 except ValueError:
168 return None292 return None
169293
@@ -188,8 +312,9 @@
188 raise312 raise
189313
190314
191def relation_set(relation_id=None, relation_settings={}, **kwargs):315def relation_set(relation_id=None, relation_settings=None, **kwargs):
192 """Set relation information for the current unit"""316 """Set relation information for the current unit"""
317 relation_settings = relation_settings if relation_settings else {}
193 relation_cmd_line = ['relation-set']318 relation_cmd_line = ['relation-set']
194 if relation_id is not None:319 if relation_id is not None:
195 relation_cmd_line.extend(('-r', relation_id))320 relation_cmd_line.extend(('-r', relation_id))
@@ -348,27 +473,29 @@
348class Hooks(object):473class Hooks(object):
349 """A convenient handler for hook functions.474 """A convenient handler for hook functions.
350475
351 Example:476 Example::
477
352 hooks = Hooks()478 hooks = Hooks()
353479
354 # register a hook, taking its name from the function name480 # register a hook, taking its name from the function name
355 @hooks.hook()481 @hooks.hook()
356 def install():482 def install():
357 ...483 pass # your code here
358484
359 # register a hook, providing a custom hook name485 # register a hook, providing a custom hook name
360 @hooks.hook("config-changed")486 @hooks.hook("config-changed")
361 def config_changed():487 def config_changed():
362 ...488 pass # your code here
363489
364 if __name__ == "__main__":490 if __name__ == "__main__":
365 # execute a hook based on the name the program is called by491 # execute a hook based on the name the program is called by
366 hooks.execute(sys.argv)492 hooks.execute(sys.argv)
367 """493 """
368494
369 def __init__(self):495 def __init__(self, config_save=True):
370 super(Hooks, self).__init__()496 super(Hooks, self).__init__()
371 self._hooks = {}497 self._hooks = {}
498 self._config_save = config_save
372499
373 def register(self, name, function):500 def register(self, name, function):
374 """Register a hook"""501 """Register a hook"""
@@ -379,6 +506,10 @@
379 hook_name = os.path.basename(args[0])506 hook_name = os.path.basename(args[0])
380 if hook_name in self._hooks:507 if hook_name in self._hooks:
381 self._hooks[hook_name]()508 self._hooks[hook_name]()
509 if self._config_save:
510 cfg = config()
511 if cfg.implicit_save:
512 cfg.save()
382 else:513 else:
383 raise UnregisteredHookError(hook_name)514 raise UnregisteredHookError(hook_name)
384515
385516
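The Config object added above is what hookenv.config() now returns when called without a scope. A small sketch of the persistence features, assuming an illustrative 'cluster-name' option:

    from charmhelpers.core import hookenv

    config = hookenv.config()

    if config.changed('cluster-name'):
        hookenv.log('cluster-name: {} -> {}'.format(
            config.previous('cluster-name'), config['cluster-name']))

    # Arbitrary extra keys survive into later hooks via the
    # .juju-persistent-config file in the charm directory.
    config['bootstrapped'] = True

    # Hooks registered with @hooks.hook() now save this automatically at the
    # end of a successful run; otherwise call save() explicitly (or set
    # config.implicit_save = False to opt out).
    config.save()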
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2014-03-05 15:19:02 +0000
+++ hooks/charmhelpers/core/host.py 2014-10-29 02:58:49 +0000
@@ -6,16 +6,19 @@
6# Matthew Wedgwood <matthew.wedgwood@canonical.com>6# Matthew Wedgwood <matthew.wedgwood@canonical.com>
77
8import os8import os
9import re
9import pwd10import pwd
10import grp11import grp
11import random12import random
12import string13import string
13import subprocess14import subprocess
14import hashlib15import hashlib
16from contextlib import contextmanager
1517
16from collections import OrderedDict18from collections import OrderedDict
1719
18from hookenv import log20from hookenv import log
21from fstab import Fstab
1922
2023
21def service_start(service_name):24def service_start(service_name):
@@ -34,7 +37,8 @@
3437
3538
36def service_reload(service_name, restart_on_failure=False):39def service_reload(service_name, restart_on_failure=False):
37 """Reload a system service, optionally falling back to restart if reload fails"""40 """Reload a system service, optionally falling back to restart if
41 reload fails"""
38 service_result = service('reload', service_name)42 service_result = service('reload', service_name)
39 if not service_result and restart_on_failure:43 if not service_result and restart_on_failure:
40 service_result = service('restart', service_name)44 service_result = service('restart', service_name)
@@ -50,7 +54,7 @@
50def service_running(service):54def service_running(service):
51 """Determine whether a system service is running"""55 """Determine whether a system service is running"""
52 try:56 try:
53 output = subprocess.check_output(['service', service, 'status'])57 output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
54 except subprocess.CalledProcessError:58 except subprocess.CalledProcessError:
55 return False59 return False
56 else:60 else:
@@ -60,6 +64,16 @@
60 return False64 return False
6165
6266
67def service_available(service_name):
68 """Determine whether a system service is available"""
69 try:
70 subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
71 except subprocess.CalledProcessError as e:
72 return 'unrecognized service' not in e.output
73 else:
74 return True
75
76
63def adduser(username, password=None, shell='/bin/bash', system_user=False):77def adduser(username, password=None, shell='/bin/bash', system_user=False):
64 """Add a user to the system"""78 """Add a user to the system"""
65 try:79 try:
@@ -143,7 +157,19 @@
143 target.write(content)157 target.write(content)
144158
145159
146def mount(device, mountpoint, options=None, persist=False):160def fstab_remove(mp):
161 """Remove the given mountpoint entry from /etc/fstab
162 """
163 return Fstab.remove_by_mountpoint(mp)
164
165
166def fstab_add(dev, mp, fs, options=None):
167 """Adds the given device entry to the /etc/fstab file
168 """
169 return Fstab.add(dev, mp, fs, options=options)
170
171
172def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
147 """Mount a filesystem at a particular mountpoint"""173 """Mount a filesystem at a particular mountpoint"""
148 cmd_args = ['mount']174 cmd_args = ['mount']
149 if options is not None:175 if options is not None:
@@ -154,9 +180,9 @@
154 except subprocess.CalledProcessError, e:180 except subprocess.CalledProcessError, e:
155 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))181 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
156 return False182 return False
183
157 if persist:184 if persist:
158 # TODO: update fstab185 return fstab_add(device, mountpoint, filesystem, options=options)
159 pass
160 return True186 return True
161187
162188
@@ -168,9 +194,9 @@
168 except subprocess.CalledProcessError, e:194 except subprocess.CalledProcessError, e:
169 log('Error unmounting {}\n{}'.format(mountpoint, e.output))195 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
170 return False196 return False
197
171 if persist:198 if persist:
172 # TODO: update fstab199 return fstab_remove(mountpoint)
173 pass
174 return True200 return True
175201
176202
@@ -183,10 +209,15 @@
183 return system_mounts209 return system_mounts
184210
185211
186def file_hash(path):212def file_hash(path, hash_type='md5'):
187 """Generate a md5 hash of the contents of 'path' or None if not found """213 """
214 Generate a hash checksum of the contents of 'path' or None if not found.
215
216 :param str hash_type: Any hash alrgorithm supported by :mod:`hashlib`,
217 such as md5, sha1, sha256, sha512, etc.
218 """
188 if os.path.exists(path):219 if os.path.exists(path):
189 h = hashlib.md5()220 h = getattr(hashlib, hash_type)()
190 with open(path, 'r') as source:221 with open(path, 'r') as source:
191 h.update(source.read()) # IGNORE:E1101 - it does have update222 h.update(source.read()) # IGNORE:E1101 - it does have update
192 return h.hexdigest()223 return h.hexdigest()
@@ -194,16 +225,36 @@
194 return None225 return None
195226
196227
228def check_hash(path, checksum, hash_type='md5'):
229 """
230 Validate a file using a cryptographic checksum.
231
232 :param str checksum: Value of the checksum used to validate the file.
233 :param str hash_type: Hash algorithm used to generate `checksum`.
234 Can be any hash alrgorithm supported by :mod:`hashlib`,
235 such as md5, sha1, sha256, sha512, etc.
236 :raises ChecksumError: If the file fails the checksum
237
238 """
239 actual_checksum = file_hash(path, hash_type)
240 if checksum != actual_checksum:
241 raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum))
242
243
244class ChecksumError(ValueError):
245 pass
246
247
197def restart_on_change(restart_map, stopstart=False):248def restart_on_change(restart_map, stopstart=False):
198 """Restart services based on configuration files changing249 """Restart services based on configuration files changing
199250
200 This function is used a decorator, for example251 This function is used a decorator, for example::
201252
202 @restart_on_change({253 @restart_on_change({
203 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]254 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
204 })255 })
205 def ceph_client_changed():256 def ceph_client_changed():
206 ...257 pass # your code here
207258
208 In this example, the cinder-api and cinder-volume services259 In this example, the cinder-api and cinder-volume services
209 would be restarted if /etc/ceph/ceph.conf is changed by the260 would be restarted if /etc/ceph/ceph.conf is changed by the
@@ -266,7 +317,13 @@
266 ip_output = (line for line in ip_output if line)317 ip_output = (line for line in ip_output if line)
267 for line in ip_output:318 for line in ip_output:
268 if line.split()[1].startswith(int_type):319 if line.split()[1].startswith(int_type):
269 interfaces.append(line.split()[1].replace(":", ""))320 matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
321 if matched:
322 interface = matched.groups()[0]
323 else:
324 interface = line.split()[1].replace(":", "")
325 interfaces.append(interface)
326
270 return interfaces327 return interfaces
271328
272329
@@ -295,3 +352,40 @@
295 if 'link/ether' in words:352 if 'link/ether' in words:
296 hwaddr = words[words.index('link/ether') + 1]353 hwaddr = words[words.index('link/ether') + 1]
297 return hwaddr354 return hwaddr
355
356
357def cmp_pkgrevno(package, revno, pkgcache=None):
358 '''Compare supplied revno with the revno of the installed package
359
360 * 1 => Installed revno is greater than supplied arg
361 * 0 => Installed revno is the same as supplied arg
362 * -1 => Installed revno is less than supplied arg
363
364 '''
365 import apt_pkg
366 from charmhelpers.fetch import apt_cache
367 if not pkgcache:
368 pkgcache = apt_cache()
369 pkg = pkgcache[package]
370 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
371
372
373@contextmanager
374def chdir(d):
375 cur = os.getcwd()
376 try:
377 yield os.chdir(d)
378 finally:
379 os.chdir(cur)
380
381
382def chownr(path, owner, group):
383 uid = pwd.getpwnam(owner).pw_uid
384 gid = grp.getgrnam(group).gr_gid
385
386 for root, dirs, files in os.walk(path):
387 for name in dirs + files:
388 full = os.path.join(root, name)
389 broken_symlink = os.path.lexists(full) and not os.path.exists(full)
390 if not broken_symlink:
391 os.chown(full, uid, gid)
298392
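A few of the new host helpers above, exercised in a minimal sketch (paths, digests and package names are illustrative):

    from charmhelpers.core import host
    from charmhelpers.core.hookenv import log

    # check_hash() raises host.ChecksumError on mismatch.
    host.check_hash('/tmp/elasticsearch.deb',
                    'd41d8cd98f00b204e9800998ecf8427e', hash_type='md5')

    # cmp_pkgrevno() returns 1, 0 or -1 comparing the installed version
    # of a package against the supplied revision.
    if host.cmp_pkgrevno('elasticsearch', '1.0.0') < 0:
        log('installed elasticsearch is older than 1.0.0')

    # chownr() recursively chowns a tree, skipping broken symlinks.
    host.chownr('/srv/elasticsearch-data', 'elasticsearch', 'elasticsearch')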
=== added directory 'hooks/charmhelpers/core/services'
=== added file 'hooks/charmhelpers/core/services/__init__.py'
--- hooks/charmhelpers/core/services/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/services/__init__.py 2014-10-29 02:58:49 +0000
@@ -0,0 +1,2 @@
1from .base import * # NOQA
2from .helpers import * # NOQA
03
=== added file 'hooks/charmhelpers/core/services/base.py'
--- hooks/charmhelpers/core/services/base.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/services/base.py 2014-10-29 02:58:49 +0000
@@ -0,0 +1,313 @@
1import os
2import re
3import json
4from collections import Iterable
5
6from charmhelpers.core import host
7from charmhelpers.core import hookenv
8
9
10__all__ = ['ServiceManager', 'ManagerCallback',
11 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports',
12 'service_restart', 'service_stop']
13
14
15class ServiceManager(object):
16 def __init__(self, services=None):
17 """
18 Register a list of services, given their definitions.
19
20 Service definitions are dicts in the following formats (all keys except
21 'service' are optional)::
22
23 {
24 "service": <service name>,
25 "required_data": <list of required data contexts>,
26 "provided_data": <list of provided data contexts>,
27 "data_ready": <one or more callbacks>,
28 "data_lost": <one or more callbacks>,
29 "start": <one or more callbacks>,
30 "stop": <one or more callbacks>,
31 "ports": <list of ports to manage>,
32 }
33
34 The 'required_data' list should contain dicts of required data (or
35 dependency managers that act like dicts and know how to collect the data).
36 Only when all items in the 'required_data' list are populated are the list
37 of 'data_ready' and 'start' callbacks executed. See `is_ready()` for more
38 information.
39
40 The 'provided_data' list should contain relation data providers, most likely
41 a subclass of :class:`charmhelpers.core.services.helpers.RelationContext`,
42 that will indicate a set of data to set on a given relation.
43
44 The 'data_ready' value should be either a single callback, or a list of
45 callbacks, to be called when all items in 'required_data' pass `is_ready()`.
46 Each callback will be called with the service name as the only parameter.
47 After all of the 'data_ready' callbacks are called, the 'start' callbacks
48 are fired.
49
50 The 'data_lost' value should be either a single callback, or a list of
51 callbacks, to be called when a 'required_data' item no longer passes
52 `is_ready()`. Each callback will be called with the service name as the
53 only parameter. After all of the 'data_lost' callbacks are called,
54 the 'stop' callbacks are fired.
55
56 The 'start' value should be either a single callback, or a list of
57 callbacks, to be called when starting the service, after the 'data_ready'
58 callbacks are complete. Each callback will be called with the service
59 name as the only parameter. This defaults to
60 `[host.service_start, services.open_ports]`.
61
62 The 'stop' value should be either a single callback, or a list of
63 callbacks, to be called when stopping the service. If the service is
64 being stopped because it no longer has all of its 'required_data', this
65 will be called after all of the 'data_lost' callbacks are complete.
66 Each callback will be called with the service name as the only parameter.
67 This defaults to `[services.close_ports, host.service_stop]`.
68
69 The 'ports' value should be a list of ports to manage. The default
70 'start' handler will open the ports after the service is started,
71 and the default 'stop' handler will close the ports prior to stopping
72 the service.
73
74
75 Examples:
76
77 The following registers an Upstart service called bingod that depends on
78 a mongodb relation and which runs a custom `db_migrate` function prior to
79 restarting the service, and a Runit service called spadesd::
80
81 manager = services.ServiceManager([
82 {
83 'service': 'bingod',
84 'ports': [80, 443],
85 'required_data': [MongoRelation(), config(), {'my': 'data'}],
86 'data_ready': [
87 services.template(source='bingod.conf'),
88 services.template(source='bingod.ini',
89 target='/etc/bingod.ini',
90 owner='bingo', perms=0400),
91 ],
92 },
93 {
94 'service': 'spadesd',
95 'data_ready': services.template(source='spadesd_run.j2',
96 target='/etc/sv/spadesd/run',
97 perms=0555),
98 'start': runit_start,
99 'stop': runit_stop,
100 },
101 ])
102 manager.manage()
103 """
104 self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
105 self._ready = None
106 self.services = {}
107 for service in services or []:
108 service_name = service['service']
109 self.services[service_name] = service
110
111 def manage(self):
112 """
113 Handle the current hook by doing The Right Thing with the registered services.
114 """
115 hook_name = hookenv.hook_name()
116 if hook_name == 'stop':
117 self.stop_services()
118 else:
119 self.provide_data()
120 self.reconfigure_services()
121 cfg = hookenv.config()
122 if cfg.implicit_save:
123 cfg.save()
124
125 def provide_data(self):
126 """
127 Set the relation data for each provider in the ``provided_data`` list.
128
129 A provider must have a `name` attribute, which indicates which relation
130 to set data on, and a `provide_data()` method, which returns a dict of
131 data to set.
132 """
133 hook_name = hookenv.hook_name()
134 for service in self.services.values():
135 for provider in service.get('provided_data', []):
136 if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name):
137 data = provider.provide_data()
138 _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data
139 if _ready:
140 hookenv.relation_set(None, data)
141
142 def reconfigure_services(self, *service_names):
143 """
144 Update all files for one or more registered services, and,
145 if ready, optionally restart them.
146
147 If no service names are given, reconfigures all registered services.
148 """
149 for service_name in service_names or self.services.keys():
150 if self.is_ready(service_name):
151 self.fire_event('data_ready', service_name)
152 self.fire_event('start', service_name, default=[
153 service_restart,
154 manage_ports])
155 self.save_ready(service_name)
156 else:
157 if self.was_ready(service_name):
158 self.fire_event('data_lost', service_name)
159 self.fire_event('stop', service_name, default=[
160 manage_ports,
161 service_stop])
162 self.save_lost(service_name)
163
164 def stop_services(self, *service_names):
165 """
166 Stop one or more registered services, by name.
167
168 If no service names are given, stops all registered services.
169 """
170 for service_name in service_names or self.services.keys():
171 self.fire_event('stop', service_name, default=[
172 manage_ports,
173 service_stop])
174
175 def get_service(self, service_name):
176 """
177 Given the name of a registered service, return its service definition.
178 """
179 service = self.services.get(service_name)
180 if not service:
181 raise KeyError('Service not registered: %s' % service_name)
182 return service
183
184 def fire_event(self, event_name, service_name, default=None):
185 """
186 Fire a data_ready, data_lost, start, or stop event on a given service.
187 """
188 service = self.get_service(service_name)
189 callbacks = service.get(event_name, default)
190 if not callbacks:
191 return
192 if not isinstance(callbacks, Iterable):
193 callbacks = [callbacks]
194 for callback in callbacks:
195 if isinstance(callback, ManagerCallback):
196 callback(self, service_name, event_name)
197 else:
198 callback(service_name)
199
200 def is_ready(self, service_name):
201 """
202 Determine if a registered service is ready, by checking its 'required_data'.
203
204 A 'required_data' item can be any mapping type, and is considered ready
205 if `bool(item)` evaluates as True.
206 """
207 service = self.get_service(service_name)
208 reqs = service.get('required_data', [])
209 return all(bool(req) for req in reqs)
210
211 def _load_ready_file(self):
212 if self._ready is not None:
213 return
214 if os.path.exists(self._ready_file):
215 with open(self._ready_file) as fp:
216 self._ready = set(json.load(fp))
217 else:
218 self._ready = set()
219
220 def _save_ready_file(self):
221 if self._ready is None:
222 return
223 with open(self._ready_file, 'w') as fp:
224 json.dump(list(self._ready), fp)
225
226 def save_ready(self, service_name):
227 """
228 Save an indicator that the given service is now data_ready.
229 """
230 self._load_ready_file()
231 self._ready.add(service_name)
232 self._save_ready_file()
233
234 def save_lost(self, service_name):
235 """
236 Save an indicator that the given service is no longer data_ready.
237 """
238 self._load_ready_file()
239 self._ready.discard(service_name)
240 self._save_ready_file()
241
242 def was_ready(self, service_name):
243 """
244 Determine if the given service was previously data_ready.
245 """
246 self._load_ready_file()
247 return service_name in self._ready
248
249
250class ManagerCallback(object):
251 """
252 Special case of a callback that takes the `ServiceManager` instance
253 in addition to the service name.
254
255 Subclasses should implement `__call__` which should accept three parameters:
256
257 * `manager` The `ServiceManager` instance
258 * `service_name` The name of the service it's being triggered for
259 * `event_name` The name of the event that this callback is handling
260 """
261 def __call__(self, manager, service_name, event_name):
262 raise NotImplementedError()
263
264
265class PortManagerCallback(ManagerCallback):
266 """
267 Callback class that will open or close ports, for use as either
268 a start or stop action.
269 """
270 def __call__(self, manager, service_name, event_name):
271 service = manager.get_service(service_name)
272 new_ports = service.get('ports', [])
273 port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name))
274 if os.path.exists(port_file):
275 with open(port_file) as fp:
276 old_ports = fp.read().split(',')
277 for old_port in old_ports:
278 if bool(old_port):
279 old_port = int(old_port)
280 if old_port not in new_ports:
281 hookenv.close_port(old_port)
282 with open(port_file, 'w') as fp:
283 fp.write(','.join(str(port) for port in new_ports))
284 for port in new_ports:
285 if event_name == 'start':
286 hookenv.open_port(port)
287 elif event_name == 'stop':
288 hookenv.close_port(port)
289
290
291def service_stop(service_name):
292 """
293 Wrapper around host.service_stop to prevent spurious "unknown service"
294 messages in the logs.
295 """
296 if host.service_running(service_name):
297 host.service_stop(service_name)
298
299
300def service_restart(service_name):
301 """
302 Wrapper around host.service_restart to prevent spurious "unknown service"
303 messages in the logs.
304 """
305 if host.service_available(service_name):
306 if host.service_running(service_name):
307 host.service_restart(service_name)
308 else:
309 host.service_start(service_name)
310
311
312# Convenience aliases
313open_ports = close_ports = manage_ports = PortManagerCallback()
0314
=== added file 'hooks/charmhelpers/core/services/helpers.py'
--- hooks/charmhelpers/core/services/helpers.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/services/helpers.py 2014-10-29 02:58:49 +0000
@@ -0,0 +1,239 @@
1import os
2import yaml
3from charmhelpers.core import hookenv
4from charmhelpers.core import templating
5
6from charmhelpers.core.services.base import ManagerCallback
7
8
9__all__ = ['RelationContext', 'TemplateCallback',
10 'render_template', 'template']
11
12
13class RelationContext(dict):
14 """
15 Base class for a context generator that gets relation data from juju.
16
17 Subclasses must provide the attributes `name`, which is the name of the
18 interface of interest, `interface`, which is the type of the interface of
19 interest, and `required_keys`, which is the set of keys required for the
20 relation to be considered complete. The data for all interfaces matching
21 the `name` attribute that are complete will used to populate the dictionary
22 values (see `get_data`, below).
23
24 The generated context will be namespaced under the relation :attr:`name`,
25 to prevent potential naming conflicts.
26
27 :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
28 :param list additional_required_keys: Extend the list of :attr:`required_keys`
29 """
30 name = None
31 interface = None
32 required_keys = []
33
34 def __init__(self, name=None, additional_required_keys=None):
35 if name is not None:
36 self.name = name
37 if additional_required_keys is not None:
38 self.required_keys.extend(additional_required_keys)
39 self.get_data()
40
41 def __bool__(self):
42 """
43 Returns True if all of the required_keys are available.
44 """
45 return self.is_ready()
46
47 __nonzero__ = __bool__
48
49 def __repr__(self):
50 return super(RelationContext, self).__repr__()
51
52 def is_ready(self):
53 """
54 Returns True if all of the `required_keys` are available from any units.
55 """
56 ready = len(self.get(self.name, [])) > 0
57 if not ready:
58 hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG)
59 return ready
60
61 def _is_ready(self, unit_data):
62 """
63 Helper method that tests a set of relation data and returns True if
64 all of the `required_keys` are present.
65 """
66 return set(unit_data.keys()).issuperset(set(self.required_keys))
67
68 def get_data(self):
69 """
70 Retrieve the relation data for each unit involved in a relation and,
71 if complete, store it in a list under `self[self.name]`. This
72 is automatically called when the RelationContext is instantiated.
73
74 The units are sorted lexographically first by the service ID, then by
75 the unit ID. Thus, if an interface has two other services, 'db:1'
76 and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1',
77 and 'db:2' having one unit, 'mediawiki/0', all of which have a complete
78 set of data, the relation data for the units will be stored in the
79 order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'.
80
81 If you only care about a single unit on the relation, you can just
82 access it as `{{ interface[0]['key'] }}`. However, if you can at all
83 support multiple units on a relation, you should iterate over the list,
84 like::
85
86 {% for unit in interface -%}
87 {{ unit['key'] }}{% if not loop.last %},{% endif %}
88 {%- endfor %}
89
90 Note that since all sets of relation data from all related services and
91 units are in a single list, if you need to know which service or unit a
92 set of data came from, you'll need to extend this class to preserve
93 that information.
94 """
95 if not hookenv.relation_ids(self.name):
96 return
97
98 ns = self.setdefault(self.name, [])
99 for rid in sorted(hookenv.relation_ids(self.name)):
100 for unit in sorted(hookenv.related_units(rid)):
101 reldata = hookenv.relation_get(rid=rid, unit=unit)
102 if self._is_ready(reldata):
103 ns.append(reldata)
104
105 def provide_data(self):
106 """
107 Return data to be relation_set for this interface.
108 """
109 return {}
110
111
112class MysqlRelation(RelationContext):
113 """
114 Relation context for the `mysql` interface.
115
116 :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
117 :param list additional_required_keys: Extend the list of :attr:`required_keys`
118 """
119 name = 'db'
120 interface = 'mysql'
121 required_keys = ['host', 'user', 'password', 'database']
122
123
124class HttpRelation(RelationContext):
125 """
126 Relation context for the `http` interface.
127
128 :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
129 :param list additional_required_keys: Extend the list of :attr:`required_keys`
130 """
131 name = 'website'
132 interface = 'http'
133 required_keys = ['host', 'port']
134
135 def provide_data(self):
136 return {
137 'host': hookenv.unit_get('private-address'),
138 'port': 80,
139 }
140
141
142class RequiredConfig(dict):
143 """
144 Data context that loads config options with one or more mandatory options.
145
146 Once the required options have been changed from their default values, all
147 config options will be available, namespaced under `config` to prevent
148 potential naming conflicts (for example, between a config option and a
149 relation property).
150
151 :param list *args: List of options that must be changed from their default values.
152 """
153
154 def __init__(self, *args):
155 self.required_options = args
156 self['config'] = hookenv.config()
157 with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as fp:
158 self.config = yaml.load(fp).get('options', {})
159
160 def __bool__(self):
161 for option in self.required_options:
162 if option not in self['config']:
163 return False
164 current_value = self['config'][option]
165 default_value = self.config[option].get('default')
166 if current_value == default_value:
167 return False
168 if current_value in (None, '') and default_value in (None, ''):
169 return False
170 return True
171
172 def __nonzero__(self):
173 return self.__bool__()
174
175
176class StoredContext(dict):
177 """
178 A data context that always returns the data that it was first created with.
179
180 This is useful to do a one-time generation of things like passwords, that
181 will thereafter use the same value that was originally generated, instead
182 of generating a new value each time it is run.
183 """
184 def __init__(self, file_name, config_data):
185 """
186 If the file exists, populate `self` with the data from the file.
187 Otherwise, populate with the given data and persist it to the file.
188 """
189 if os.path.exists(file_name):
190 self.update(self.read_context(file_name))
191 else:
192 self.store_context(file_name, config_data)
193 self.update(config_data)
194
195 def store_context(self, file_name, config_data):
196 if not os.path.isabs(file_name):
197 file_name = os.path.join(hookenv.charm_dir(), file_name)
198 with open(file_name, 'w') as file_stream:
199 os.fchmod(file_stream.fileno(), 0600)
200 yaml.dump(config_data, file_stream)
201
202 def read_context(self, file_name):
203 if not os.path.isabs(file_name):
204 file_name = os.path.join(hookenv.charm_dir(), file_name)
205 with open(file_name, 'r') as file_stream:
206 data = yaml.load(file_stream)
207 if not data:
208 raise OSError("%s is empty" % file_name)
209 return data
210
211
212class TemplateCallback(ManagerCallback):
213 """
214 Callback class that will render a Jinja2 template, for use as a ready action.
215
216 :param str source: The template source file, relative to `$CHARM_DIR/templates`
217 :param str target: The target to write the rendered template to
218 :param str owner: The owner of the rendered file
219 :param str group: The group of the rendered file
220 :param int perms: The permissions of the rendered file
221 """
222 def __init__(self, source, target, owner='root', group='root', perms=0444):
223 self.source = source
224 self.target = target
225 self.owner = owner
226 self.group = group
227 self.perms = perms
228
229 def __call__(self, manager, service_name, event_name):
230 service = manager.get_service(service_name)
231 context = {}
232 for ctx in service.get('required_data', []):
233 context.update(ctx)
234 templating.render(self.source, self.target, context,
235 self.owner, self.group, self.perms)
236
237
238# Convenience aliases for templates
239render_template = template = TemplateCallback
0240
=== added file 'hooks/charmhelpers/core/sysctl.py'
--- hooks/charmhelpers/core/sysctl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/sysctl.py 2014-10-29 02:58:49 +0000
@@ -0,0 +1,34 @@
1#!/usr/bin/env python
2# -*- coding: utf-8 -*-
3
4__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
5
6import yaml
7
8from subprocess import check_call
9
10from charmhelpers.core.hookenv import (
11 log,
12 DEBUG,
13)
14
15
16def create(sysctl_dict, sysctl_file):
17 """Creates a sysctl.conf file from a YAML associative array
18
19 :param sysctl_dict: a dict of sysctl options eg { 'kernel.max_pid': 1337 }
20 :type sysctl_dict: dict
21 :param sysctl_file: path to the sysctl file to be saved
22 :type sysctl_file: str or unicode
23 :returns: None
24 """
25 sysctl_dict = yaml.load(sysctl_dict)
26
27 with open(sysctl_file, "w") as fd:
28 for key, value in sysctl_dict.items():
29 fd.write("{}={}\n".format(key, value))
30
31 log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict),
32 level=DEBUG)
33
34 check_call(["sysctl", "-p", sysctl_file])
035
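Note that create() above yaml.load()s its first argument, so in practice it takes YAML text rather than an already-built dict. A sketch with illustrative values:

    from charmhelpers.core import sysctl

    # Writes the file and then runs `sysctl -p <file>`.
    sysctl.create("{vm.swappiness: 1, vm.max_map_count: 262144}",
                  '/etc/sysctl.d/50-elasticsearch.conf')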
=== added file 'hooks/charmhelpers/core/templating.py'
--- hooks/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/templating.py 2014-10-29 02:58:49 +0000
@@ -0,0 +1,51 @@
1import os
2
3from charmhelpers.core import host
4from charmhelpers.core import hookenv
5
6
7def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
8 """
9 Render a template.
10
11 The `source` path, if not absolute, is relative to the `templates_dir`.
12
13 The `target` path should be absolute.
14
15 The context should be a dict containing the values to be replaced in the
16 template.
17
18 The `owner`, `group`, and `perms` options will be passed to `write_file`.
19
20 If omitted, `templates_dir` defaults to the `templates` folder in the charm.
21
22 Note: Using this requires python-jinja2; if it is not installed, calling
23 this will attempt to use charmhelpers.fetch.apt_install to install it.
24 """
25 try:
26 from jinja2 import FileSystemLoader, Environment, exceptions
27 except ImportError:
28 try:
29 from charmhelpers.fetch import apt_install
30 except ImportError:
31 hookenv.log('Could not import jinja2, and could not import '
32 'charmhelpers.fetch to install it',
33 level=hookenv.ERROR)
34 raise
35 apt_install('python-jinja2', fatal=True)
36 from jinja2 import FileSystemLoader, Environment, exceptions
37
38 if templates_dir is None:
39 templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
40 loader = Environment(loader=FileSystemLoader(templates_dir))
41 try:
42 source = source
43 template = loader.get_template(source)
44 except exceptions.TemplateNotFound as e:
45 hookenv.log('Could not load template %s from %s.' %
46 (source, templates_dir),
47 level=hookenv.ERROR)
48 raise e
49 content = template.render(context)
50 host.mkdir(os.path.dirname(target))
51 host.write_file(target, content, owner, group, perms)
052
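A minimal sketch of the new render() helper (template name, target and context are invented); python-jinja2 is apt-installed on demand if missing:

    from charmhelpers.core import templating

    # Renders templates/elasticsearch.yml.j2 from the charm's templates
    # directory to the target path with the given ownership and mode.
    templating.render(
        source='elasticsearch.yml.j2',
        target='/etc/elasticsearch/elasticsearch.yml',
        context={'cluster_name': 'elasticsearch'},
        owner='root', group='elasticsearch', perms=0440)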
=== modified file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 2014-04-08 09:11:34 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2014-10-29 02:58:49 +0000
@@ -1,4 +1,6 @@
1import importlib1import importlib
2from tempfile import NamedTemporaryFile
3import time
2from yaml import safe_load4from yaml import safe_load
3from charmhelpers.core.host import (5from charmhelpers.core.host import (
4 lsb_release6 lsb_release
@@ -12,9 +14,9 @@
12 config,14 config,
13 log,15 log,
14)16)
15import apt_pkg
16import os17import os
1718
19
18CLOUD_ARCHIVE = """# Ubuntu Cloud Archive20CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
19deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main21deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
20"""22"""
@@ -54,13 +56,69 @@
54 'icehouse/proposed': 'precise-proposed/icehouse',56 'icehouse/proposed': 'precise-proposed/icehouse',
55 'precise-icehouse/proposed': 'precise-proposed/icehouse',57 'precise-icehouse/proposed': 'precise-proposed/icehouse',
56 'precise-proposed/icehouse': 'precise-proposed/icehouse',58 'precise-proposed/icehouse': 'precise-proposed/icehouse',
59 # Juno
60 'juno': 'trusty-updates/juno',
61 'trusty-juno': 'trusty-updates/juno',
62 'trusty-juno/updates': 'trusty-updates/juno',
63 'trusty-updates/juno': 'trusty-updates/juno',
64 'juno/proposed': 'trusty-proposed/juno',
65 'juno/proposed': 'trusty-proposed/juno',
66 'trusty-juno/proposed': 'trusty-proposed/juno',
67 'trusty-proposed/juno': 'trusty-proposed/juno',
57}68}
5869
70# The order of this list is very important. Handlers should be listed in from
71# least- to most-specific URL matching.
72FETCH_HANDLERS = (
73 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
74 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
75 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
76)
77
78APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
79APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
80APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
81
82
83class SourceConfigError(Exception):
84 pass
85
86
87class UnhandledSource(Exception):
88 pass
89
90
91class AptLockError(Exception):
92 pass
93
94
95class BaseFetchHandler(object):
96
97 """Base class for FetchHandler implementations in fetch plugins"""
98
99 def can_handle(self, source):
100 """Returns True if the source can be handled. Otherwise returns
101 a string explaining why it cannot"""
102 return "Wrong source type"
103
104 def install(self, source):
105 """Try to download and unpack the source. Return the path to the
106 unpacked files or raise UnhandledSource."""
107 raise UnhandledSource("Wrong source type {}".format(source))
108
109 def parse_url(self, url):
110 return urlparse(url)
111
112 def base_url(self, url):
113 """Return url without querystring or fragment"""
114 parts = list(self.parse_url(url))
115 parts[4:] = ['' for i in parts[4:]]
116 return urlunparse(parts)
117
59118
60def filter_installed_packages(packages):119def filter_installed_packages(packages):
61 """Returns a list of packages that require installation"""120 """Returns a list of packages that require installation"""
62 apt_pkg.init()121 cache = apt_cache()
63 cache = apt_pkg.Cache()
64 _pkgs = []122 _pkgs = []
65 for package in packages:123 for package in packages:
66 try:124 try:
@@ -73,6 +131,16 @@
73 return _pkgs131 return _pkgs
74132
75133
134def apt_cache(in_memory=True):
135 """Build and return an apt cache"""
136 import apt_pkg
137 apt_pkg.init()
138 if in_memory:
139 apt_pkg.config.set("Dir::Cache::pkgcache", "")
140 apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
141 return apt_pkg.Cache()
142
143
76def apt_install(packages, options=None, fatal=False):144def apt_install(packages, options=None, fatal=False):
77 """Install one or more packages"""145 """Install one or more packages"""
78 if options is None:146 if options is None:
@@ -87,14 +155,7 @@
87 cmd.extend(packages)155 cmd.extend(packages)
88 log("Installing {} with options: {}".format(packages,156 log("Installing {} with options: {}".format(packages,
89 options))157 options))
90 env = os.environ.copy()158 _run_apt_command(cmd, fatal)
91 if 'DEBIAN_FRONTEND' not in env:
92 env['DEBIAN_FRONTEND'] = 'noninteractive'
93
94 if fatal:
95 subprocess.check_call(cmd, env=env)
96 else:
97 subprocess.call(cmd, env=env)
98159
99160
100def apt_upgrade(options=None, fatal=False, dist=False):161def apt_upgrade(options=None, fatal=False, dist=False):
@@ -109,24 +170,13 @@
109 else:170 else:
110 cmd.append('upgrade')171 cmd.append('upgrade')
111 log("Upgrading with options: {}".format(options))172 log("Upgrading with options: {}".format(options))
112173 _run_apt_command(cmd, fatal)
113 env = os.environ.copy()
114 if 'DEBIAN_FRONTEND' not in env:
115 env['DEBIAN_FRONTEND'] = 'noninteractive'
116
117 if fatal:
118 subprocess.check_call(cmd, env=env)
119 else:
120 subprocess.call(cmd, env=env)
121174
122175
123def apt_update(fatal=False):176def apt_update(fatal=False):
124 """Update local apt cache"""177 """Update local apt cache"""
125 cmd = ['apt-get', 'update']178 cmd = ['apt-get', 'update']
126 if fatal:179 _run_apt_command(cmd, fatal)
127 subprocess.check_call(cmd)
128 else:
129 subprocess.call(cmd)
130180
131181
132def apt_purge(packages, fatal=False):182def apt_purge(packages, fatal=False):
@@ -137,10 +187,7 @@
137 else:187 else:
138 cmd.extend(packages)188 cmd.extend(packages)
139 log("Purging {}".format(packages))189 log("Purging {}".format(packages))
140 if fatal:190 _run_apt_command(cmd, fatal)
141 subprocess.check_call(cmd)
142 else:
143 subprocess.call(cmd)
144191
145192
146def apt_hold(packages, fatal=False):193def apt_hold(packages, fatal=False):
@@ -151,6 +198,7 @@
151 else:198 else:
152 cmd.extend(packages)199 cmd.extend(packages)
153 log("Holding {}".format(packages))200 log("Holding {}".format(packages))
201
154 if fatal:202 if fatal:
155 subprocess.check_call(cmd)203 subprocess.check_call(cmd)
156 else:204 else:
@@ -158,6 +206,29 @@
158206
159207
160def add_source(source, key=None):208def add_source(source, key=None):
209 """Add a package source to this system.
210
211 @param source: a URL or sources.list entry, as supported by
212 add-apt-repository(1). Examples::
213
214 ppa:charmers/example
215 deb https://stub:key@private.example.com/ubuntu trusty main
216
217 In addition:
218 'proposed:' may be used to enable the standard 'proposed'
219 pocket for the release.
220 'cloud:' may be used to activate official cloud archive pockets,
221 such as 'cloud:icehouse'
222 'distro' may be used as a noop
223
224 @param key: A key to be added to the system's APT keyring and used
225 to verify the signatures on packages. Ideally, this should be an
226 ASCII format GPG public key including the block headers. A GPG key
227 id may also be used, but be aware that only insecure protocols are
228 available to retrieve the actual public key from a public keyserver
229 placing your Juju environment at risk. ppa and cloud archive keys
230 are securely added automatically, so should not be provided.
231 """
161 if source is None:232 if source is None:
162 log('Source is not present. Skipping')233 log('Source is not present. Skipping')
163 return234 return
@@ -182,76 +253,98 @@
182 release = lsb_release()['DISTRIB_CODENAME']253 release = lsb_release()['DISTRIB_CODENAME']
183 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:254 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
184 apt.write(PROPOSED_POCKET.format(release))255 apt.write(PROPOSED_POCKET.format(release))
256 elif source == 'distro':
257 pass
258 else:
259 log("Unknown source: {!r}".format(source))
260
185 if key:261 if key:
186 subprocess.check_call(['apt-key', 'adv', '--keyserver',262 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
187 'keyserver.ubuntu.com', '--recv',263 with NamedTemporaryFile() as key_file:
188 key])264 key_file.write(key)
189265 key_file.flush()
190266 key_file.seek(0)
191class SourceConfigError(Exception):267 subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file)
192 pass268 else:
269 # Note that hkp: is in no way a secure protocol. Using a
270 # GPG key id is pointless from a security POV unless you
271 # absolutely trust your network and DNS.
272 subprocess.check_call(['apt-key', 'adv', '--keyserver',
273 'hkp://keyserver.ubuntu.com:80', '--recv',
274 key])
193275
194276
195def configure_sources(update=False,277def configure_sources(update=False,
196 sources_var='install_sources',278 sources_var='install_sources',
197 keys_var='install_keys'):279 keys_var='install_keys'):
198 """280 """
199 Configure multiple sources from charm configuration281 Configure multiple sources from charm configuration.
282
283 The lists are encoded as yaml fragments in the configuration.
284 The fragment needs to be included as a string. Sources and their
285 corresponding keys are of the types supported by add_source().
200286
201 Example config:287 Example config:
202 install_sources:288 install_sources: |
203 - "ppa:foo"289 - "ppa:foo"
204 - "http://example.com/repo precise main"290 - "http://example.com/repo precise main"
205 install_keys:291 install_keys: |
206 - null292 - null
207 - "a1b2c3d4"293 - "a1b2c3d4"
208294
209 Note that 'null' (a.k.a. None) should not be quoted.295 Note that 'null' (a.k.a. None) should not be quoted.
210 """296 """
211 sources = safe_load(config(sources_var))297 sources = safe_load((config(sources_var) or '').strip()) or []
212 keys = config(keys_var)298 keys = safe_load((config(keys_var) or '').strip()) or None
213 if keys is not None:299
214 keys = safe_load(keys)300 if isinstance(sources, basestring):
215 if isinstance(sources, basestring) and (301 sources = [sources]
216 keys is None or isinstance(keys, basestring)):302
217 add_source(sources, keys)303 if keys is None:
304 for source in sources:
305 add_source(source, None)
218 else:306 else:
219 if not len(sources) == len(keys):307 if isinstance(keys, basestring):
220 msg = 'Install sources and keys lists are different lengths'308 keys = [keys]
221 raise SourceConfigError(msg)309
222 for src_num in range(len(sources)):310 if len(sources) != len(keys):
223 add_source(sources[src_num], keys[src_num])311 raise SourceConfigError(
312 'Install sources and keys lists are different lengths')
313 for source, key in zip(sources, keys):
314 add_source(source, key)
224 if update:315 if update:
225 apt_update(fatal=True)316 apt_update(fatal=True)
226317
227# The order of this list is very important. Handlers should be listed in from318
228# least- to most-specific URL matching.319def install_remote(source, *args, **kwargs):
229FETCH_HANDLERS = (
230 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
231 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
232)
233
234
235class UnhandledSource(Exception):
236 pass
237
238
239def install_remote(source):
240 """320 """
241 Install a file tree from a remote source321 Install a file tree from a remote source
242322
243 The specified source should be a url of the form:323 The specified source should be a url of the form:
244 scheme://[host]/path[#[option=value][&...]]324 scheme://[host]/path[#[option=value][&...]]
245325
246 Schemes supported are based on this modules submodules326 Schemes supported are based on this module's submodules.
247 Options supported are submodule-specific"""327 Options supported are submodule-specific.
328 Additional arguments are passed through to the submodule.
329
330 For example::
331
332 dest = install_remote('http://example.com/archive.tgz',
333 checksum='deadbeef',
334 hash_type='sha1')
335
336 This will download `archive.tgz`, validate it using SHA1 and, if
337 the file is ok, extract it and return the directory in which it
338 was extracted. If the checksum fails, it will raise
339 :class:`charmhelpers.core.host.ChecksumError`.
340 """
248 # We ONLY check for True here because can_handle may return a string341 # We ONLY check for True here because can_handle may return a string
249 # explaining why it can't handle a given source.342 # explaining why it can't handle a given source.
250 handlers = [h for h in plugins() if h.can_handle(source) is True]343 handlers = [h for h in plugins() if h.can_handle(source) is True]
251 installed_to = None344 installed_to = None
252 for handler in handlers:345 for handler in handlers:
253 try:346 try:
254 installed_to = handler.install(source)347 installed_to = handler.install(source, *args, **kwargs)
255 except UnhandledSource:348 except UnhandledSource:
256 pass349 pass
257 if not installed_to:350 if not installed_to:
@@ -265,30 +358,6 @@
265 return install_remote(source)358 return install_remote(source)
266359
267360
268class BaseFetchHandler(object):
269
270 """Base class for FetchHandler implementations in fetch plugins"""
271
272 def can_handle(self, source):
273 """Returns True if the source can be handled. Otherwise returns
274 a string explaining why it cannot"""
275 return "Wrong source type"
276
277 def install(self, source):
278 """Try to download and unpack the source. Return the path to the
279 unpacked files or raise UnhandledSource."""
280 raise UnhandledSource("Wrong source type {}".format(source))
281
282 def parse_url(self, url):
283 return urlparse(url)
284
285 def base_url(self, url):
286 """Return url without querystring or fragment"""
287 parts = list(self.parse_url(url))
288 parts[4:] = ['' for i in parts[4:]]
289 return urlunparse(parts)
290
291
292def plugins(fetch_handlers=None):361def plugins(fetch_handlers=None):
293 if not fetch_handlers:362 if not fetch_handlers:
294 fetch_handlers = FETCH_HANDLERS363 fetch_handlers = FETCH_HANDLERS
@@ -306,3 +375,40 @@
306 log("FetchHandler {} not found, skipping plugin".format(375 log("FetchHandler {} not found, skipping plugin".format(
307 handler_name))376 handler_name))
308 return plugin_list377 return plugin_list
378
379
380def _run_apt_command(cmd, fatal=False):
381 """
382 Run an APT command, checking output and retrying if the fatal flag is set
383 to True.
384
385 :param: cmd: str: The apt command to run.
386 :param: fatal: bool: Whether the command's output should be checked and
387 retried.
388 """
389 env = os.environ.copy()
390
391 if 'DEBIAN_FRONTEND' not in env:
392 env['DEBIAN_FRONTEND'] = 'noninteractive'
393
394 if fatal:
395 retry_count = 0
396 result = None
397
398 # If the command is considered "fatal", we need to retry if the apt
399 # lock was not acquired.
400
401 while result is None or result == APT_NO_LOCK:
402 try:
403 result = subprocess.check_call(cmd, env=env)
404 except subprocess.CalledProcessError, e:
405 retry_count = retry_count + 1
406 if retry_count > APT_NO_LOCK_RETRY_COUNT:
407 raise
408 result = e.returncode
409 log("Couldn't acquire DPKG lock. Will retry in {} seconds."
410 "".format(APT_NO_LOCK_RETRY_DELAY))
411 time.sleep(APT_NO_LOCK_RETRY_DELAY)
412
413 else:
414 subprocess.call(cmd, env=env)
309415
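
The resynced configure_sources() above now tolerates plain strings as well as YAML lists for install_sources/install_keys, and the new _run_apt_command() retries while the dpkg lock is held. A minimal sketch of how a charm's install hook might drive these helpers; the package name and the config values it assumes are illustrative only, not part of this charm::

    # Sketch only: assumes config.yaml defines install_sources/install_keys
    # as YAML-fragment strings (the default option names read by
    # configure_sources()).
    from charmhelpers.fetch import (
        apt_install,
        configure_sources,
        filter_installed_packages,
    )

    def install():
        # Adds each configured source via add_source(), then runs
        # apt_update(fatal=True) because update=True.
        configure_sources(update=True)
        # Only packages not already installed are passed to apt-get;
        # fatal=True makes _run_apt_command() retry on a held dpkg lock.
        apt_install(filter_installed_packages(['openjdk-7-jre-headless']),
                    fatal=True)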
=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 2014-04-08 09:11:34 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 2014-10-29 02:58:49 +0000
@@ -1,6 +1,8 @@
1import os1import os
2import urllib22import urllib2
3from urllib import urlretrieve
3import urlparse4import urlparse
5import hashlib
46
5from charmhelpers.fetch import (7from charmhelpers.fetch import (
6 BaseFetchHandler,8 BaseFetchHandler,
@@ -10,11 +12,19 @@
10 get_archive_handler,12 get_archive_handler,
11 extract,13 extract,
12)14)
13from charmhelpers.core.host import mkdir15from charmhelpers.core.host import mkdir, check_hash
1416
1517
16class ArchiveUrlFetchHandler(BaseFetchHandler):18class ArchiveUrlFetchHandler(BaseFetchHandler):
17 """Handler for archives via generic URLs"""19 """
20 Handler to download archive files from arbitrary URLs.
21
22 Can fetch from http, https, ftp, and file URLs.
23
24 Can install either tarballs (.tar, .tgz, .tbz2, etc) or zip files.
25
26 Installs the contents of the archive in $CHARM_DIR/fetched/.
27 """
18 def can_handle(self, source):28 def can_handle(self, source):
19 url_parts = self.parse_url(source)29 url_parts = self.parse_url(source)
20 if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):30 if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
@@ -24,6 +34,12 @@
24 return False34 return False
2535
26 def download(self, source, dest):36 def download(self, source, dest):
37 """
38 Download an archive file.
39
40 :param str source: URL pointing to an archive file.
41 :param str dest: Local path location to download archive file to.
42 """
27 # propagate all exceptions43 # propagate all exceptions
28 # URLError, OSError, etc44 # URLError, OSError, etc
29 proto, netloc, path, params, query, fragment = urlparse.urlparse(source)45 proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
@@ -48,7 +64,30 @@
48 os.unlink(dest)64 os.unlink(dest)
49 raise e65 raise e
5066
51 def install(self, source):67 # Mandatory file validation via Sha1 or MD5 hashing.
68 def download_and_validate(self, url, hashsum, validate="sha1"):
69 tempfile, headers = urlretrieve(url)
70 check_hash(tempfile, hashsum, validate)
71 return tempfile
72
73 def install(self, source, dest=None, checksum=None, hash_type='sha1'):
74 """
75 Download and install an archive file, with optional checksum validation.
76
77 The checksum can also be given on the `source` URL's fragment.
78 For example::
79
80 handler.install('http://example.com/file.tgz#sha1=deadbeef')
81
82 :param str source: URL pointing to an archive file.
83 :param str dest: Local destination path to install to. If not given,
84 installs to `$CHARM_DIR/archives/archive_file_name`.
85 :param str checksum: If given, validate the archive file after download.
86 :param str hash_type: Algorithm used to generate `checksum`.
87 Can be any hash algorithm supported by :mod:`hashlib`,
88 such as md5, sha1, sha256, sha512, etc.
89
90 """
52 url_parts = self.parse_url(source)91 url_parts = self.parse_url(source)
53 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')92 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
54 if not os.path.exists(dest_dir):93 if not os.path.exists(dest_dir):
@@ -60,4 +99,10 @@
60 raise UnhandledSource(e.reason)99 raise UnhandledSource(e.reason)
61 except OSError as e:100 except OSError as e:
62 raise UnhandledSource(e.strerror)101 raise UnhandledSource(e.strerror)
63 return extract(dld_file)102 options = urlparse.parse_qs(url_parts.fragment)
103 for key, value in options.items():
104 if key in hashlib.algorithms:
105 check_hash(dld_file, value, key)
106 if checksum:
107 check_hash(dld_file, checksum, hash_type)
108 return extract(dld_file, dest)
64109
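
For the checksum support added above, a short sketch of driving it through install_remote(); the URL and digest are placeholders. Extra keyword arguments are forwarded to ArchiveUrlFetchHandler.install(), and a mismatch raises charmhelpers.core.host.ChecksumError (the URL-fragment form in the docstring above is documented as an alternative)::

    from charmhelpers.fetch import install_remote

    # Downloads the archive to $CHARM_DIR/fetched/, verifies the SHA-256
    # digest of the downloaded file, then extracts it and returns the
    # extraction path. Placeholder URL and digest shown.
    dest = install_remote('http://example.com/elasticsearch-1.3.2.tar.gz',
                          checksum='0123456789abcdef', hash_type='sha256')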
=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 2014-02-06 12:54:59 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 2014-10-29 02:58:49 +0000
@@ -39,7 +39,8 @@
39 def install(self, source):39 def install(self, source):
40 url_parts = self.parse_url(source)40 url_parts = self.parse_url(source)
41 branch_name = url_parts.path.strip("/").split("/")[-1]41 branch_name = url_parts.path.strip("/").split("/")[-1]
42 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)42 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
43 branch_name)
43 if not os.path.exists(dest_dir):44 if not os.path.exists(dest_dir):
44 mkdir(dest_dir, perms=0755)45 mkdir(dest_dir, perms=0755)
45 try:46 try:
4647
=== added file 'hooks/charmhelpers/fetch/giturl.py'
--- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/giturl.py 2014-10-29 02:58:49 +0000
@@ -0,0 +1,44 @@
1import os
2from charmhelpers.fetch import (
3 BaseFetchHandler,
4 UnhandledSource
5)
6from charmhelpers.core.host import mkdir
7
8try:
9 from git import Repo
10except ImportError:
11 from charmhelpers.fetch import apt_install
12 apt_install("python-git")
13 from git import Repo
14
15
16class GitUrlFetchHandler(BaseFetchHandler):
17 """Handler for git branches via generic and github URLs"""
18 def can_handle(self, source):
19 url_parts = self.parse_url(source)
20 #TODO (mattyw) no support for ssh git@ yet
21 if url_parts.scheme not in ('http', 'https', 'git'):
22 return False
23 else:
24 return True
25
26 def clone(self, source, dest, branch):
27 if not self.can_handle(source):
28 raise UnhandledSource("Cannot handle {}".format(source))
29
30 repo = Repo.clone_from(source, dest)
31 repo.git.checkout(branch)
32
33 def install(self, source, branch="master"):
34 url_parts = self.parse_url(source)
35 branch_name = url_parts.path.strip("/").split("/")[-1]
36 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
37 branch_name)
38 if not os.path.exists(dest_dir):
39 mkdir(dest_dir, perms=0755)
40 try:
41 self.clone(source, dest_dir, branch)
42 except OSError as e:
43 raise UnhandledSource(e.strerror)
44 return dest_dir
045
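
The new git handler is registered in FETCH_HANDLERS, so install_remote() will pick it up for http, https and git URLs. A minimal sketch of using it directly; the repository URL and branch are placeholders::

    from charmhelpers.fetch.giturl import GitUrlFetchHandler

    handler = GitUrlFetchHandler()
    # Clones into $CHARM_DIR/fetched/<last path segment> and checks out the
    # requested branch; ssh git@ URLs are not handled yet (see the TODO).
    dest_dir = handler.install('https://github.com/example/payload',
                               branch='stable')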
=== modified file 'hooks/hooks.py'
--- hooks/hooks.py 2014-07-08 09:18:18 +0000
+++ hooks/hooks.py 2014-10-29 02:58:49 +0000
@@ -13,6 +13,7 @@
13 'config-changed',13 'config-changed',
14 'cluster-relation-joined',14 'cluster-relation-joined',
15 'peer-relation-joined',15 'peer-relation-joined',
16 'peer-relation-departed',
16 'nrpe-external-master-relation-changed',17 'nrpe-external-master-relation-changed',
17 'rest-relation-joined',18 'rest-relation-joined',
18 'start',19 'start',
1920
=== added symlink 'hooks/peer-relation-departed'
=== target is u'hooks.py'
=== modified file 'playbook.yaml'
--- playbook.yaml 2014-08-19 16:47:21 +0000
+++ playbook.yaml 2014-10-29 02:58:49 +0000
@@ -13,17 +13,20 @@
13 vars:13 vars:
14 - service_name: "{{ local_unit.split('/')[0] }}"14 - service_name: "{{ local_unit.split('/')[0] }}"
15 - client_relation_id: "{{ relations['client'].keys()[0] | default('') }}"15 - client_relation_id: "{{ relations['client'].keys()[0] | default('') }}"
16 - peer_relation_id: "{{ relations['peer'].keys()[0] | default('') }}"
1617
17 tasks:18 tasks:
1819
19 - include: tasks/install-elasticsearch.yml20 - include: tasks/install-elasticsearch.yml
21 - include: tasks/peer-relations.yml
20 - include: tasks/setup-ufw.yml22 - include: tasks/setup-ufw.yml
21 tags:23 tags:
22 - install24 - install
23 - upgrade-charm25 - upgrade-charm
24 - client-relation-joined26 - client-relation-joined
25 - client-relation-departed27 - client-relation-departed
26 - include: tasks/peer-relations.yml28 - peer-relation-joined
29 - peer-relation-departed
2730
28 - name: Update configuration31 - name: Update configuration
29 tags:32 tags:
3033
=== modified file 'tasks/peer-relations.yml'
--- tasks/peer-relations.yml 2014-06-06 14:40:08 +0000
+++ tasks/peer-relations.yml 2014-10-29 02:58:49 +0000
@@ -54,4 +54,3 @@
54 - peer-relation-joined54 - peer-relation-joined
55 fail: msg="Unit failed to join cluster during peer-relation-joined"55 fail: msg="Unit failed to join cluster during peer-relation-joined"
56 when: cluster_health.json.number_of_nodes == 1 and cluster_health_after_restart.json.number_of_nodes == 156 when: cluster_health.json.number_of_nodes == 1 and cluster_health_after_restart.json.number_of_nodes == 1
57
5857
=== modified file 'tasks/setup-ufw.yml'
--- tasks/setup-ufw.yml 2014-07-30 06:35:59 +0000
+++ tasks/setup-ufw.yml 2014-10-29 02:58:49 +0000
@@ -27,3 +27,14 @@
2727
28- name: Deny all other requests on 920028- name: Deny all other requests on 9200
29 ufw: rule=deny port=920029 ufw: rule=deny port=9200
30
31- name: Open the firewall for all peers
32 ufw: rule=allow src={{ item.value['private-address'] }} port=9300 proto=tcp
33 with_dict: relations["peer"]["{{ peer_relation_id }}"] | default({})
34 when: not item.key == local_unit
35
36# Only deny incoming on 9300 once the unit is part of a cluster.
37- name: Deny all incoming requests on 9300 once unit is part of cluster
38 ufw: rule=deny port=9300
39 when: not peer_relation_id == ""
40

Subscribers

People subscribed via source and target branches