Merge lp:~gnuoy/charms/precise/keystone/add-preinstall-hook into lp:~charmers/charms/precise/keystone/trunk
Status: | Merged |
---|---|
Merged at revision: | 50 |
Proposed branch: | lp:~gnuoy/charms/precise/keystone/add-preinstall-hook |
Merge into: | lp:~charmers/charms/precise/keystone/trunk |
Diff against target: | 790 lines (+736/-1), 7 files modified |
- charm-helpers.yaml (+5/-0)
- hooks/charmhelpers/core/hookenv.py (+395/-0)
- hooks/charmhelpers/core/host.py (+281/-0)
- hooks/charmhelpers/payload/__init__.py (+1/-0)
- hooks/charmhelpers/payload/execd.py (+50/-0)
- hooks/keystone_hooks.py (+3/-0)
- revision (+1/-1)
To merge this branch: | bzr merge lp:~gnuoy/charms/precise/keystone/add-preinstall-hook |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Liam Young (community) | Needs Resubmitting | ||
James Page | Needs Fixing | ||
Review via email: mp+194895@code.launchpad.net
Commit message
Description of the change
Add preinstall hook.
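For context, the preinstall mechanism added by this branch (see hooks/charmhelpers/payload/execd.py in the diff below) runs any executable charm-pre-install script found one directory deep under exec.d in the charm root. A minimal, self-contained sketch of that lookup, using a hypothetical temporary layout (the `basenode` module name is illustrative, not part of this branch):

```python
import os
import stat
import tempfile

def execd_submodule_paths(command, execd_dir):
    """Yield executable <execd_dir>/<module>/<command> files.

    Mirrors the lookup done by charmhelpers.payload.execd."""
    if not os.path.exists(execd_dir):
        return
    for subpath in sorted(os.listdir(execd_dir)):
        module = os.path.join(execd_dir, subpath)
        path = os.path.join(module, command)
        if (os.path.isdir(module) and os.path.isfile(path)
                and os.access(path, os.X_OK)):
            yield path

# Hypothetical layout: <charm>/exec.d/basenode/charm-pre-install
charm_dir = tempfile.mkdtemp()
module = os.path.join(charm_dir, 'exec.d', 'basenode')
os.makedirs(module)
script = os.path.join(module, 'charm-pre-install')
with open(script, 'w') as f:
    f.write('#!/bin/sh\necho preinstall\n')
os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)

found = list(execd_submodule_paths('charm-pre-install',
                                   os.path.join(charm_dir, 'exec.d')))
print(found)  # one entry, ending in 'charm-pre-install'
```

The charm's install hook then only needs to call `execd_preinstall()` before doing anything else, which is exactly the one-line change to keystone_hooks.py in the diff.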
Michael Nelson (michael.nelson) wrote : | # |
Liam Young (gnuoy) wrote : | # |
Hi Michael,
It's used by the charm_helpers_
James Page (james-page) wrote : | # |
2013-11-18 11:59:59 INFO juju.worker.uniter modes.go:101 found queued "upgrade-charm" hook
2013-11-18 11:59:59 INFO juju.worker.uniter uniter.go:348 running "upgrade-charm" hook
2013-11-18 11:59:59 INFO worker.uniter.jujuc server.go:108 running hook tool "unit-get" ["private-address"]
2013-11-18 12:00:00 INFO juju.worker.uniter context.go:255 HOOK Traceback (most recent call last):
2013-11-18 12:00:00 INFO juju.worker.uniter context.go:255 HOOK File "/var/lib/
2013-11-18 12:00:00 INFO juju.worker.uniter context.go:255 HOOK from charmhelpers.
2013-11-18 12:00:00 INFO juju.worker.uniter context.go:255 HOOK File "/var/lib/
2013-11-18 12:00:00 INFO juju.worker.uniter context.go:255 HOOK from charmhelpers.core import hookenv
2013-11-18 12:00:00 INFO juju.worker.uniter context.go:255 HOOK ImportError: No module named core
2013-11-18 12:00:00 ERROR juju.worker.uniter uniter.go:350 hook failed: exit status 1
2013-11-18 12:00:00 INFO juju.worker.uniter modes.go:421 ModeContinue starting
2013-11-18 12:00:00 INFO juju.worker.uniter modes.go:114 awaiting error resolution for "upgrade-charm" hook
2013-11-18 12:00:00 INFO juju.worker.uniter modes.go:421 ModeHookError starting
James Page (james-page) wrote : | # |
Hi Liam
Please can you add the missing core charmhelpers to your branch and re-push
Thanks!
- 51. By Liam Young
  Added charmhelpers core which was accidentally missed
Liam Young (gnuoy) wrote : | # |
Sorry about that James, I've now added core as well
Liam Young (gnuoy) : | # |
Preview Diff
1 | === added file 'charm-helpers.yaml' |
2 | --- charm-helpers.yaml 1970-01-01 00:00:00 +0000 |
3 | +++ charm-helpers.yaml 2013-11-18 14:31:24 +0000 |
4 | @@ -0,0 +1,5 @@ |
5 | +branch: lp:charm-helpers |
6 | +destination: hooks/charmhelpers |
7 | +include: |
8 | + - core |
9 | + - payload.execd |
10 | |
11 | === added directory 'exec.d' |
12 | === added directory 'hooks/charmhelpers' |
13 | === added file 'hooks/charmhelpers/__init__.py' |
14 | === added directory 'hooks/charmhelpers/core' |
15 | === added file 'hooks/charmhelpers/core/__init__.py' |
16 | === added file 'hooks/charmhelpers/core/hookenv.py' |
17 | --- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000 |
18 | +++ hooks/charmhelpers/core/hookenv.py 2013-11-18 14:31:24 +0000 |
19 | @@ -0,0 +1,395 @@ |
20 | +"Interactions with the Juju environment" |
21 | +# Copyright 2013 Canonical Ltd. |
22 | +# |
23 | +# Authors: |
24 | +# Charm Helpers Developers <juju@lists.ubuntu.com> |
25 | + |
26 | +import os |
27 | +import json |
28 | +import yaml |
29 | +import subprocess |
30 | +import UserDict |
31 | +from subprocess import CalledProcessError |
32 | + |
33 | +CRITICAL = "CRITICAL" |
34 | +ERROR = "ERROR" |
35 | +WARNING = "WARNING" |
36 | +INFO = "INFO" |
37 | +DEBUG = "DEBUG" |
38 | +MARKER = object() |
39 | + |
40 | +cache = {} |
41 | + |
42 | + |
43 | +def cached(func): |
44 | + """Cache return values for multiple executions of func + args |
45 | + |
46 | + For example: |
47 | + |
48 | + @cached |
49 | + def unit_get(attribute): |
50 | + pass |
51 | + |
52 | + unit_get('test') |
53 | + |
54 | + will cache the result of unit_get + 'test' for future calls. |
55 | + """ |
56 | + def wrapper(*args, **kwargs): |
57 | + global cache |
58 | + key = str((func, args, kwargs)) |
59 | + try: |
60 | + return cache[key] |
61 | + except KeyError: |
62 | + res = func(*args, **kwargs) |
63 | + cache[key] = res |
64 | + return res |
65 | + return wrapper |
66 | + |
67 | + |
68 | +def flush(key): |
69 | + """Flushes any entries from function cache where the |
70 | + key is found in the function+args """ |
71 | + flush_list = [] |
72 | + for item in cache: |
73 | + if key in item: |
74 | + flush_list.append(item) |
75 | + for item in flush_list: |
76 | + del cache[item] |
77 | + |
78 | + |
79 | +def log(message, level=None): |
80 | + """Write a message to the juju log""" |
81 | + command = ['juju-log'] |
82 | + if level: |
83 | + command += ['-l', level] |
84 | + command += [message] |
85 | + subprocess.call(command) |
86 | + |
87 | + |
88 | +class Serializable(UserDict.IterableUserDict): |
89 | + """Wrapper, an object that can be serialized to yaml or json""" |
90 | + |
91 | + def __init__(self, obj): |
92 | + # wrap the object |
93 | + UserDict.IterableUserDict.__init__(self) |
94 | + self.data = obj |
95 | + |
96 | + def __getattr__(self, attr): |
97 | + # See if this object has attribute. |
98 | + if attr in ("json", "yaml", "data"): |
99 | + return self.__dict__[attr] |
100 | + # Check for attribute in wrapped object. |
101 | + got = getattr(self.data, attr, MARKER) |
102 | + if got is not MARKER: |
103 | + return got |
104 | + # Proxy to the wrapped object via dict interface. |
105 | + try: |
106 | + return self.data[attr] |
107 | + except KeyError: |
108 | + raise AttributeError(attr) |
109 | + |
110 | + def __getstate__(self): |
111 | + # Pickle as a standard dictionary. |
112 | + return self.data |
113 | + |
114 | + def __setstate__(self, state): |
115 | + # Unpickle into our wrapper. |
116 | + self.data = state |
117 | + |
118 | + def json(self): |
119 | + """Serialize the object to json""" |
120 | + return json.dumps(self.data) |
121 | + |
122 | + def yaml(self): |
123 | + """Serialize the object to yaml""" |
124 | + return yaml.dump(self.data) |
125 | + |
126 | + |
127 | +def execution_environment(): |
128 | + """A convenient bundling of the current execution context""" |
129 | + context = {} |
130 | + context['conf'] = config() |
131 | + if relation_id(): |
132 | + context['reltype'] = relation_type() |
133 | + context['relid'] = relation_id() |
134 | + context['rel'] = relation_get() |
135 | + context['unit'] = local_unit() |
136 | + context['rels'] = relations() |
137 | + context['env'] = os.environ |
138 | + return context |
139 | + |
140 | + |
141 | +def in_relation_hook(): |
142 | + """Determine whether we're running in a relation hook""" |
143 | + return 'JUJU_RELATION' in os.environ |
144 | + |
145 | + |
146 | +def relation_type(): |
147 | + """The scope for the current relation hook""" |
148 | + return os.environ.get('JUJU_RELATION', None) |
149 | + |
150 | + |
151 | +def relation_id(): |
152 | + """The relation ID for the current relation hook""" |
153 | + return os.environ.get('JUJU_RELATION_ID', None) |
154 | + |
155 | + |
156 | +def local_unit(): |
157 | + """Local unit ID""" |
158 | + return os.environ['JUJU_UNIT_NAME'] |
159 | + |
160 | + |
161 | +def remote_unit(): |
162 | + """The remote unit for the current relation hook""" |
163 | + return os.environ['JUJU_REMOTE_UNIT'] |
164 | + |
165 | + |
166 | +def service_name(): |
167 | +"""The name of the service group this unit belongs to""" |
168 | + return local_unit().split('/')[0] |
169 | + |
170 | + |
171 | +@cached |
172 | +def config(scope=None): |
173 | + """Juju charm configuration""" |
174 | + config_cmd_line = ['config-get'] |
175 | + if scope is not None: |
176 | + config_cmd_line.append(scope) |
177 | + config_cmd_line.append('--format=json') |
178 | + try: |
179 | + return json.loads(subprocess.check_output(config_cmd_line)) |
180 | + except ValueError: |
181 | + return None |
182 | + |
183 | + |
184 | +@cached |
185 | +def relation_get(attribute=None, unit=None, rid=None): |
186 | + """Get relation information""" |
187 | + _args = ['relation-get', '--format=json'] |
188 | + if rid: |
189 | + _args.append('-r') |
190 | + _args.append(rid) |
191 | + _args.append(attribute or '-') |
192 | + if unit: |
193 | + _args.append(unit) |
194 | + try: |
195 | + return json.loads(subprocess.check_output(_args)) |
196 | + except ValueError: |
197 | + return None |
198 | + except CalledProcessError, e: |
199 | + if e.returncode == 2: |
200 | + return None |
201 | + raise |
202 | + |
203 | + |
204 | +def relation_set(relation_id=None, relation_settings={}, **kwargs): |
205 | + """Set relation information for the current unit""" |
206 | + relation_cmd_line = ['relation-set'] |
207 | + if relation_id is not None: |
208 | + relation_cmd_line.extend(('-r', relation_id)) |
209 | + for k, v in (relation_settings.items() + kwargs.items()): |
210 | + if v is None: |
211 | + relation_cmd_line.append('{}='.format(k)) |
212 | + else: |
213 | + relation_cmd_line.append('{}={}'.format(k, v)) |
214 | + subprocess.check_call(relation_cmd_line) |
215 | + # Flush cache of any relation-gets for local unit |
216 | + flush(local_unit()) |
217 | + |
218 | + |
219 | +@cached |
220 | +def relation_ids(reltype=None): |
221 | + """A list of relation_ids""" |
222 | + reltype = reltype or relation_type() |
223 | + relid_cmd_line = ['relation-ids', '--format=json'] |
224 | + if reltype is not None: |
225 | + relid_cmd_line.append(reltype) |
226 | + return json.loads(subprocess.check_output(relid_cmd_line)) or [] |
227 | + return [] |
228 | + |
229 | + |
230 | +@cached |
231 | +def related_units(relid=None): |
232 | + """A list of related units""" |
233 | + relid = relid or relation_id() |
234 | + units_cmd_line = ['relation-list', '--format=json'] |
235 | + if relid is not None: |
236 | + units_cmd_line.extend(('-r', relid)) |
237 | + return json.loads(subprocess.check_output(units_cmd_line)) or [] |
238 | + |
239 | + |
240 | +@cached |
241 | +def relation_for_unit(unit=None, rid=None): |
242 | +"""Get the json representation of a unit's relation""" |
243 | + unit = unit or remote_unit() |
244 | + relation = relation_get(unit=unit, rid=rid) |
245 | + for key in relation: |
246 | + if key.endswith('-list'): |
247 | + relation[key] = relation[key].split() |
248 | + relation['__unit__'] = unit |
249 | + return relation |
250 | + |
251 | + |
252 | +@cached |
253 | +def relations_for_id(relid=None): |
254 | + """Get relations of a specific relation ID""" |
255 | + relation_data = [] |
256 | + relid = relid or relation_ids() |
257 | + for unit in related_units(relid): |
258 | + unit_data = relation_for_unit(unit, relid) |
259 | + unit_data['__relid__'] = relid |
260 | + relation_data.append(unit_data) |
261 | + return relation_data |
262 | + |
263 | + |
264 | +@cached |
265 | +def relations_of_type(reltype=None): |
266 | + """Get relations of a specific type""" |
267 | + relation_data = [] |
268 | + reltype = reltype or relation_type() |
269 | + for relid in relation_ids(reltype): |
270 | + for relation in relations_for_id(relid): |
271 | + relation['__relid__'] = relid |
272 | + relation_data.append(relation) |
273 | + return relation_data |
274 | + |
275 | + |
276 | +@cached |
277 | +def relation_types(): |
278 | + """Get a list of relation types supported by this charm""" |
279 | + charmdir = os.environ.get('CHARM_DIR', '') |
280 | + mdf = open(os.path.join(charmdir, 'metadata.yaml')) |
281 | + md = yaml.safe_load(mdf) |
282 | + rel_types = [] |
283 | + for key in ('provides', 'requires', 'peers'): |
284 | + section = md.get(key) |
285 | + if section: |
286 | + rel_types.extend(section.keys()) |
287 | + mdf.close() |
288 | + return rel_types |
289 | + |
290 | + |
291 | +@cached |
292 | +def relations(): |
293 | + """Get a nested dictionary of relation data for all related units""" |
294 | + rels = {} |
295 | + for reltype in relation_types(): |
296 | + relids = {} |
297 | + for relid in relation_ids(reltype): |
298 | + units = {local_unit(): relation_get(unit=local_unit(), rid=relid)} |
299 | + for unit in related_units(relid): |
300 | + reldata = relation_get(unit=unit, rid=relid) |
301 | + units[unit] = reldata |
302 | + relids[relid] = units |
303 | + rels[reltype] = relids |
304 | + return rels |
305 | + |
306 | + |
307 | +@cached |
308 | +def is_relation_made(relation, keys='private-address'): |
309 | + ''' |
310 | + Determine whether a relation is established by checking for |
311 | + presence of key(s). If a list of keys is provided, they |
312 | + must all be present for the relation to be identified as made |
313 | + ''' |
314 | + if isinstance(keys, str): |
315 | + keys = [keys] |
316 | + for r_id in relation_ids(relation): |
317 | + for unit in related_units(r_id): |
318 | + context = {} |
319 | + for k in keys: |
320 | + context[k] = relation_get(k, rid=r_id, |
321 | + unit=unit) |
322 | + if None not in context.values(): |
323 | + return True |
324 | + return False |
325 | + |
326 | + |
327 | +def open_port(port, protocol="TCP"): |
328 | + """Open a service network port""" |
329 | + _args = ['open-port'] |
330 | + _args.append('{}/{}'.format(port, protocol)) |
331 | + subprocess.check_call(_args) |
332 | + |
333 | + |
334 | +def close_port(port, protocol="TCP"): |
335 | + """Close a service network port""" |
336 | + _args = ['close-port'] |
337 | + _args.append('{}/{}'.format(port, protocol)) |
338 | + subprocess.check_call(_args) |
339 | + |
340 | + |
341 | +@cached |
342 | +def unit_get(attribute): |
343 | + """Get the unit ID for the remote unit""" |
344 | + _args = ['unit-get', '--format=json', attribute] |
345 | + try: |
346 | + return json.loads(subprocess.check_output(_args)) |
347 | + except ValueError: |
348 | + return None |
349 | + |
350 | + |
351 | +def unit_private_ip(): |
352 | + """Get this unit's private IP address""" |
353 | + return unit_get('private-address') |
354 | + |
355 | + |
356 | +class UnregisteredHookError(Exception): |
357 | + """Raised when an undefined hook is called""" |
358 | + pass |
359 | + |
360 | + |
361 | +class Hooks(object): |
362 | + """A convenient handler for hook functions. |
363 | + |
364 | + Example: |
365 | + hooks = Hooks() |
366 | + |
367 | + # register a hook, taking its name from the function name |
368 | + @hooks.hook() |
369 | + def install(): |
370 | + ... |
371 | + |
372 | + # register a hook, providing a custom hook name |
373 | + @hooks.hook("config-changed") |
374 | + def config_changed(): |
375 | + ... |
376 | + |
377 | + if __name__ == "__main__": |
378 | + # execute a hook based on the name the program is called by |
379 | + hooks.execute(sys.argv) |
380 | + """ |
381 | + |
382 | + def __init__(self): |
383 | + super(Hooks, self).__init__() |
384 | + self._hooks = {} |
385 | + |
386 | + def register(self, name, function): |
387 | + """Register a hook""" |
388 | + self._hooks[name] = function |
389 | + |
390 | + def execute(self, args): |
391 | + """Execute a registered hook based on args[0]""" |
392 | + hook_name = os.path.basename(args[0]) |
393 | + if hook_name in self._hooks: |
394 | + self._hooks[hook_name]() |
395 | + else: |
396 | + raise UnregisteredHookError(hook_name) |
397 | + |
398 | + def hook(self, *hook_names): |
399 | +"""Decorator, registering the decorated function under the given hook names""" |
400 | + def wrapper(decorated): |
401 | + for hook_name in hook_names: |
402 | + self.register(hook_name, decorated) |
403 | + else: |
404 | + self.register(decorated.__name__, decorated) |
405 | + if '_' in decorated.__name__: |
406 | + self.register( |
407 | + decorated.__name__.replace('_', '-'), decorated) |
408 | + return decorated |
409 | + return wrapper |
410 | + |
411 | + |
412 | +def charm_dir(): |
413 | + """Return the root directory of the current charm""" |
414 | + return os.environ.get('CHARM_DIR') |
415 | |
416 | === added file 'hooks/charmhelpers/core/host.py' |
417 | --- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000 |
418 | +++ hooks/charmhelpers/core/host.py 2013-11-18 14:31:24 +0000 |
419 | @@ -0,0 +1,281 @@ |
420 | +"""Tools for working with the host system""" |
421 | +# Copyright 2012 Canonical Ltd. |
422 | +# |
423 | +# Authors: |
424 | +# Nick Moffitt <nick.moffitt@canonical.com> |
425 | +# Matthew Wedgwood <matthew.wedgwood@canonical.com> |
426 | + |
427 | +import os |
428 | +import pwd |
429 | +import grp |
430 | +import random |
431 | +import string |
432 | +import subprocess |
433 | +import hashlib |
434 | + |
435 | +from collections import OrderedDict |
436 | + |
437 | +from hookenv import log |
438 | + |
439 | + |
440 | +def service_start(service_name): |
441 | + """Start a system service""" |
442 | + return service('start', service_name) |
443 | + |
444 | + |
445 | +def service_stop(service_name): |
446 | + """Stop a system service""" |
447 | + return service('stop', service_name) |
448 | + |
449 | + |
450 | +def service_restart(service_name): |
451 | + """Restart a system service""" |
452 | + return service('restart', service_name) |
453 | + |
454 | + |
455 | +def service_reload(service_name, restart_on_failure=False): |
456 | + """Reload a system service, optionally falling back to restart if reload fails""" |
457 | + service_result = service('reload', service_name) |
458 | + if not service_result and restart_on_failure: |
459 | + service_result = service('restart', service_name) |
460 | + return service_result |
461 | + |
462 | + |
463 | +def service(action, service_name): |
464 | + """Control a system service""" |
465 | + cmd = ['service', service_name, action] |
466 | + return subprocess.call(cmd) == 0 |
467 | + |
468 | + |
469 | +def service_running(service): |
470 | + """Determine whether a system service is running""" |
471 | + try: |
472 | + output = subprocess.check_output(['service', service, 'status']) |
473 | + except subprocess.CalledProcessError: |
474 | + return False |
475 | + else: |
476 | + if ("start/running" in output or "is running" in output): |
477 | + return True |
478 | + else: |
479 | + return False |
480 | + |
481 | + |
482 | +def adduser(username, password=None, shell='/bin/bash', system_user=False): |
483 | + """Add a user to the system""" |
484 | + try: |
485 | + user_info = pwd.getpwnam(username) |
486 | + log('user {0} already exists!'.format(username)) |
487 | + except KeyError: |
488 | + log('creating user {0}'.format(username)) |
489 | + cmd = ['useradd'] |
490 | + if system_user or password is None: |
491 | + cmd.append('--system') |
492 | + else: |
493 | + cmd.extend([ |
494 | + '--create-home', |
495 | + '--shell', shell, |
496 | + '--password', password, |
497 | + ]) |
498 | + cmd.append(username) |
499 | + subprocess.check_call(cmd) |
500 | + user_info = pwd.getpwnam(username) |
501 | + return user_info |
502 | + |
503 | + |
504 | +def add_user_to_group(username, group): |
505 | + """Add a user to a group""" |
506 | + cmd = [ |
507 | + 'gpasswd', '-a', |
508 | + username, |
509 | + group |
510 | + ] |
511 | + log("Adding user {} to group {}".format(username, group)) |
512 | + subprocess.check_call(cmd) |
513 | + |
514 | + |
515 | +def rsync(from_path, to_path, flags='-r', options=None): |
516 | + """Replicate the contents of a path""" |
517 | + options = options or ['--delete', '--executability'] |
518 | + cmd = ['/usr/bin/rsync', flags] |
519 | + cmd.extend(options) |
520 | + cmd.append(from_path) |
521 | + cmd.append(to_path) |
522 | + log(" ".join(cmd)) |
523 | + return subprocess.check_output(cmd).strip() |
524 | + |
525 | + |
526 | +def symlink(source, destination): |
527 | + """Create a symbolic link""" |
528 | + log("Symlinking {} as {}".format(source, destination)) |
529 | + cmd = [ |
530 | + 'ln', |
531 | + '-sf', |
532 | + source, |
533 | + destination, |
534 | + ] |
535 | + subprocess.check_call(cmd) |
536 | + |
537 | + |
538 | +def mkdir(path, owner='root', group='root', perms=0555, force=False): |
539 | + """Create a directory""" |
540 | + log("Making dir {} {}:{} {:o}".format(path, owner, group, |
541 | + perms)) |
542 | + uid = pwd.getpwnam(owner).pw_uid |
543 | + gid = grp.getgrnam(group).gr_gid |
544 | + realpath = os.path.abspath(path) |
545 | + if os.path.exists(realpath): |
546 | + if force and not os.path.isdir(realpath): |
547 | + log("Removing non-directory file {} prior to mkdir()".format(path)) |
548 | + os.unlink(realpath) |
549 | + else: |
550 | + os.makedirs(realpath, perms) |
551 | + os.chown(realpath, uid, gid) |
552 | + |
553 | + |
554 | +def write_file(path, content, owner='root', group='root', perms=0444): |
555 | + """Create or overwrite a file with the contents of a string""" |
556 | + log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) |
557 | + uid = pwd.getpwnam(owner).pw_uid |
558 | + gid = grp.getgrnam(group).gr_gid |
559 | + with open(path, 'w') as target: |
560 | + os.fchown(target.fileno(), uid, gid) |
561 | + os.fchmod(target.fileno(), perms) |
562 | + target.write(content) |
563 | + |
564 | + |
565 | +def mount(device, mountpoint, options=None, persist=False): |
566 | + """Mount a filesystem at a particular mountpoint""" |
567 | + cmd_args = ['mount'] |
568 | + if options is not None: |
569 | + cmd_args.extend(['-o', options]) |
570 | + cmd_args.extend([device, mountpoint]) |
571 | + try: |
572 | + subprocess.check_output(cmd_args) |
573 | + except subprocess.CalledProcessError, e: |
574 | + log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) |
575 | + return False |
576 | + if persist: |
577 | + # TODO: update fstab |
578 | + pass |
579 | + return True |
580 | + |
581 | + |
582 | +def umount(mountpoint, persist=False): |
583 | + """Unmount a filesystem""" |
584 | + cmd_args = ['umount', mountpoint] |
585 | + try: |
586 | + subprocess.check_output(cmd_args) |
587 | + except subprocess.CalledProcessError, e: |
588 | + log('Error unmounting {}\n{}'.format(mountpoint, e.output)) |
589 | + return False |
590 | + if persist: |
591 | + # TODO: update fstab |
592 | + pass |
593 | + return True |
594 | + |
595 | + |
596 | +def mounts(): |
597 | + """Get a list of all mounted volumes as [[mountpoint,device],[...]]""" |
598 | + with open('/proc/mounts') as f: |
599 | + # [['/mount/point','/dev/path'],[...]] |
600 | + system_mounts = [m[1::-1] for m in [l.strip().split() |
601 | + for l in f.readlines()]] |
602 | + return system_mounts |
603 | + |
604 | + |
605 | +def file_hash(path): |
606 | +"""Generate an md5 hash of the contents of 'path', or None if not found""" |
607 | + if os.path.exists(path): |
608 | + h = hashlib.md5() |
609 | + with open(path, 'r') as source: |
610 | + h.update(source.read()) # IGNORE:E1101 - it does have update |
611 | + return h.hexdigest() |
612 | + else: |
613 | + return None |
614 | + |
615 | + |
616 | +def restart_on_change(restart_map): |
617 | + """Restart services based on configuration files changing |
618 | + |
619 | + This function is used as a decorator, for example: |
620 | + |
621 | + @restart_on_change({ |
622 | + '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] |
623 | + }) |
624 | + def ceph_client_changed(): |
625 | + ... |
626 | + |
627 | + In this example, the cinder-api and cinder-volume services |
628 | + would be restarted if /etc/ceph/ceph.conf is changed by the |
629 | + ceph_client_changed function. |
630 | + """ |
631 | + def wrap(f): |
632 | + def wrapped_f(*args): |
633 | + checksums = {} |
634 | + for path in restart_map: |
635 | + checksums[path] = file_hash(path) |
636 | + f(*args) |
637 | + restarts = [] |
638 | + for path in restart_map: |
639 | + if checksums[path] != file_hash(path): |
640 | + restarts += restart_map[path] |
641 | + for service_name in list(OrderedDict.fromkeys(restarts)): |
642 | + service('restart', service_name) |
643 | + return wrapped_f |
644 | + return wrap |
645 | + |
646 | + |
647 | +def lsb_release(): |
648 | + """Return /etc/lsb-release in a dict""" |
649 | + d = {} |
650 | + with open('/etc/lsb-release', 'r') as lsb: |
651 | + for l in lsb: |
652 | + k, v = l.split('=') |
653 | + d[k.strip()] = v.strip() |
654 | + return d |
655 | + |
656 | + |
657 | +def pwgen(length=None): |
658 | +"""Generate a random password.""" |
659 | + if length is None: |
660 | + length = random.choice(range(35, 45)) |
661 | + alphanumeric_chars = [ |
662 | + l for l in (string.letters + string.digits) |
663 | + if l not in 'l0QD1vAEIOUaeiou'] |
664 | + random_chars = [ |
665 | + random.choice(alphanumeric_chars) for _ in range(length)] |
666 | + return(''.join(random_chars)) |
667 | + |
668 | + |
669 | +def list_nics(nic_type): |
670 | + '''Return a list of nics of given type(s)''' |
671 | + if isinstance(nic_type, basestring): |
672 | + int_types = [nic_type] |
673 | + else: |
674 | + int_types = nic_type |
675 | + interfaces = [] |
676 | + for int_type in int_types: |
677 | + cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] |
678 | + ip_output = subprocess.check_output(cmd).split('\n') |
679 | + ip_output = (line for line in ip_output if line) |
680 | + for line in ip_output: |
681 | + if line.split()[1].startswith(int_type): |
682 | + interfaces.append(line.split()[1].replace(":", "")) |
683 | + return interfaces |
684 | + |
685 | + |
686 | +def set_nic_mtu(nic, mtu): |
687 | + '''Set MTU on a network interface''' |
688 | + cmd = ['ip', 'link', 'set', nic, 'mtu', mtu] |
689 | + subprocess.check_call(cmd) |
690 | + |
691 | + |
692 | +def get_nic_mtu(nic): |
693 | + cmd = ['ip', 'addr', 'show', nic] |
694 | + ip_output = subprocess.check_output(cmd).split('\n') |
695 | + mtu = "" |
696 | + for line in ip_output: |
697 | + words = line.split() |
698 | + if 'mtu' in words: |
699 | + mtu = words[words.index("mtu") + 1] |
700 | + return mtu |
701 | |
702 | === added directory 'hooks/charmhelpers/payload' |
703 | === added file 'hooks/charmhelpers/payload/__init__.py' |
704 | --- hooks/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000 |
705 | +++ hooks/charmhelpers/payload/__init__.py 2013-11-18 14:31:24 +0000 |
706 | @@ -0,0 +1,1 @@ |
707 | +"Tools for working with files injected into a charm just before deployment." |
708 | |
709 | === added file 'hooks/charmhelpers/payload/execd.py' |
710 | --- hooks/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000 |
711 | +++ hooks/charmhelpers/payload/execd.py 2013-11-18 14:31:24 +0000 |
712 | @@ -0,0 +1,50 @@ |
713 | +#!/usr/bin/env python |
714 | + |
715 | +import os |
716 | +import sys |
717 | +import subprocess |
718 | +from charmhelpers.core import hookenv |
719 | + |
720 | + |
721 | +def default_execd_dir(): |
722 | + return os.path.join(os.environ['CHARM_DIR'], 'exec.d') |
723 | + |
724 | + |
725 | +def execd_module_paths(execd_dir=None): |
726 | + """Generate a list of full paths to modules within execd_dir.""" |
727 | + if not execd_dir: |
728 | + execd_dir = default_execd_dir() |
729 | + |
730 | + if not os.path.exists(execd_dir): |
731 | + return |
732 | + |
733 | + for subpath in os.listdir(execd_dir): |
734 | + module = os.path.join(execd_dir, subpath) |
735 | + if os.path.isdir(module): |
736 | + yield module |
737 | + |
738 | + |
739 | +def execd_submodule_paths(command, execd_dir=None): |
740 | + """Generate a list of full paths to the specified command within exec_dir. |
741 | + """ |
742 | + for module_path in execd_module_paths(execd_dir): |
743 | + path = os.path.join(module_path, command) |
744 | + if os.access(path, os.X_OK) and os.path.isfile(path): |
745 | + yield path |
746 | + |
747 | + |
748 | +def execd_run(command, execd_dir=None, die_on_error=False, stderr=None): |
749 | + """Run command for each module within execd_dir which defines it.""" |
750 | + for submodule_path in execd_submodule_paths(command, execd_dir): |
751 | + try: |
752 | + subprocess.check_call(submodule_path, shell=True, stderr=stderr) |
753 | + except subprocess.CalledProcessError as e: |
754 | + hookenv.log("Error ({}) running {}. Output: {}".format( |
755 | + e.returncode, e.cmd, e.output)) |
756 | + if die_on_error: |
757 | + sys.exit(e.returncode) |
758 | + |
759 | + |
760 | +def execd_preinstall(execd_dir=None): |
761 | + """Run charm-pre-install for each module within execd_dir.""" |
762 | + execd_run('charm-pre-install', execd_dir=execd_dir) |
763 | |
764 | === modified file 'hooks/keystone_hooks.py' |
765 | --- hooks/keystone_hooks.py 2013-09-03 12:03:32 +0000 |
766 | +++ hooks/keystone_hooks.py 2013-11-18 14:31:24 +0000 |
767 | @@ -41,6 +41,8 @@ |
768 | import lib.cluster_utils as cluster |
769 | import lib.haproxy_utils as haproxy |
770 | |
771 | +from charmhelpers.payload.execd import execd_preinstall |
772 | + |
773 | config = config_get() |
774 | |
775 | packages = [ |
776 | @@ -98,6 +100,7 @@ |
777 | |
778 | |
779 | def install_hook(): |
780 | + execd_preinstall() |
781 | utils.configure_source() |
782 | utils.install(*packages) |
783 | update_config_block('DEFAULT', |
784 | |
785 | === modified file 'revision' |
786 | --- revision 2013-10-03 17:06:42 +0000 |
787 | +++ revision 2013-11-18 14:31:24 +0000 |
788 | @@ -1,1 +1,1 @@ |
789 | -225 |
790 | +226 |
Hi Liam!
More for my own learning than a review, but is the charm-helpers.yaml something that is now automatically detected during juju-deployer build? And if so, do we still need to include the imported modules in this branch?
/me goes off to look at the new juju-deployer.
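For what it's worth, charm-helpers.yaml is consumed by the charm-helpers sync tooling rather than being detected by juju-deployer itself, which is why the imported modules still need to be committed to the branch. A rough, hypothetical model of what the sync step does with the keys from the file above (the dict below just mirrors charm-helpers.yaml; a real run would parse it with PyYAML and copy code from the named branch):

```python
import os

# Mirrors the keys in charm-helpers.yaml from this branch.
config = {
    'branch': 'lp:charm-helpers',
    'destination': 'hooks/charmhelpers',
    'include': ['core', 'payload.execd'],
}

def sync_targets(config):
    """Map each dotted include entry to the path it would be copied to
    under the destination. Simplified, illustrative model only."""
    dest = config['destination']
    return [os.path.join(dest, *inc.split('.'))
            for inc in config['include']]

print(sync_targets(config))
# e.g. ['hooks/charmhelpers/core', 'hooks/charmhelpers/payload/execd']
```

That matches the directories actually added in this diff, which is why the branch ships both the yaml and the synced modules.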