Merge lp:~bloodearnest/charms/precise/gunicorn/cleanup into lp:~charmers/charms/precise/gunicorn/trunk
Status: Merged
Merged at revision: 29
Proposed branch: lp:~bloodearnest/charms/precise/gunicorn/cleanup
Merge into: lp:~charmers/charms/precise/gunicorn/trunk
Diff against target: 2086 lines (+1481/-446), 15 files modified:
- .bzrignore (+1/-0)
- Makefile (+12/-0)
- charm-helpers.yaml (+5/-0)
- config.yaml (+6/-0)
- hooks/charmhelpers/core/hookenv.py (+401/-0)
- hooks/charmhelpers/core/host.py (+291/-0)
- hooks/charmhelpers/fetch/__init__.py (+279/-0)
- hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
- hooks/charmhelpers/fetch/bzrurl.py (+49/-0)
- hooks/hooks.py (+84/-444)
- hooks/tests/test_hooks.py (+157/-0)
- hooks/tests/test_template.py (+125/-0)
- metadata.yaml (+4/-0)
- templates/upstart.tmpl (+13/-2)
- test_requirments.txt (+6/-0)
To merge this branch: bzr merge lp:~bloodearnest/charms/precise/gunicorn/cleanup
Related bugs: none
Reviews: Marco Ceppi (community): Approve; Michael Nelson (community): Approve
Review via email: mp+208558@code.launchpad.net
Commit message
Clean up of the gunicorn charm, adding charmhelpers and tests, and re-adding the env_extra param
Description of the change
Clean up of gunicorn charm.
Apologies for the large diff - it's mostly the bundled charm-helpers code.
Main highlights:
- use charmhelpers, removing a lot of code that duplicated its functionality
- tests \o/
- slight cleanups and refactors (e.g. not shelling out to multiprocess when we can just import it)
- add support back in for "env_extra", which we use in our IS version to set process envvars (it was strangely removed in the python rewrite), albeit with a slightly different (better) format.
- add support for config-changed hook (only when wsgi-file relation is present)
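To illustrate the new env_extra format (shell-style KEY="value" pairs, as shown in the config.yaml example in the diff), here is a minimal parsing sketch; the function name and the use of shlex are assumptions for illustration, not taken from the charm's actual hooks.py:

```python
import shlex


def parse_env_extra(env_extra):
    """Parse space-separated KEY="value" pairs into a dict.

    Illustrative sketch only; the charm's real parsing may differ.
    shlex honours the quoting, so BAZ="1 2 3" stays a single token.
    """
    env = {}
    for token in shlex.split(env_extra):
        if '=' in token:
            key, _, value = token.partition('=')
            env[key] = value
    return env


print(parse_env_extra('FOO="bar" BAZ="1 2 3"'))
```

With the format from the config description, this yields one entry per variable, with quoted whitespace preserved in the value.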
- 31. By Simon Davy: added config changed hook
- 32. By Simon Davy: fix config-changed, and cleaner access log handling
- 33. By Simon Davy: minor formatting tweaks
- 34. By Simon Davy: changed to restart rather than reload
- 35. By Simon Davy: add make unit_test target
Preview Diff
1 | === added file '.bzrignore' |
2 | --- .bzrignore 1970-01-01 00:00:00 +0000 |
3 | +++ .bzrignore 2014-03-04 16:16:33 +0000 |
4 | @@ -0,0 +1,1 @@ |
5 | +bin |
6 | |
7 | === added file 'Makefile' |
8 | --- Makefile 1970-01-01 00:00:00 +0000 |
9 | +++ Makefile 2014-03-04 16:16:33 +0000 |
10 | @@ -0,0 +1,12 @@ |
11 | +#!/usr/bin/make |
12 | +PYTHON := /usr/bin/env python |
13 | + |
14 | +unit_test: |
15 | + nosetests hooks |
16 | + |
17 | +sync-charm-helpers: bin/charm_helpers_sync.py |
18 | + @mkdir -p bin |
19 | + @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml |
20 | + |
21 | +bin/charm_helpers_sync.py: |
22 | + @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py > bin/charm_helpers_sync.py |
23 | |
24 | === added file 'charm-helpers.yaml' |
25 | --- charm-helpers.yaml 1970-01-01 00:00:00 +0000 |
26 | +++ charm-helpers.yaml 2014-03-04 16:16:33 +0000 |
27 | @@ -0,0 +1,5 @@ |
28 | +branch: lp:charm-helpers |
29 | +destination: hooks/charmhelpers |
30 | +include: |
31 | + - core |
32 | + - fetch |
33 | |
34 | === modified file 'config.yaml' |
35 | --- config.yaml 2013-05-29 18:40:56 +0000 |
36 | +++ config.yaml 2014-03-04 16:16:33 +0000 |
37 | @@ -79,3 +79,9 @@ |
38 | type: int |
39 | default: 8080 |
40 | description: "Port the application will be listenning." |
41 | + env_extra: |
42 | + type: string |
43 | + default: "" |
44 | + description: > |
45 | + List of environment variables for the wsgi process. |
46 | + e.g. FOO="bar" BAZ="1 2 3" |
47 | |
48 | === added directory 'hooks/charmhelpers' |
49 | === added file 'hooks/charmhelpers/__init__.py' |
50 | === added directory 'hooks/charmhelpers/core' |
51 | === added file 'hooks/charmhelpers/core/__init__.py' |
52 | === added file 'hooks/charmhelpers/core/hookenv.py' |
53 | --- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000 |
54 | +++ hooks/charmhelpers/core/hookenv.py 2014-03-04 16:16:33 +0000 |
55 | @@ -0,0 +1,401 @@ |
56 | +"Interactions with the Juju environment" |
57 | +# Copyright 2013 Canonical Ltd. |
58 | +# |
59 | +# Authors: |
60 | +# Charm Helpers Developers <juju@lists.ubuntu.com> |
61 | + |
62 | +import os |
63 | +import json |
64 | +import yaml |
65 | +import subprocess |
66 | +import sys |
67 | +import UserDict |
68 | +from subprocess import CalledProcessError |
69 | + |
70 | +CRITICAL = "CRITICAL" |
71 | +ERROR = "ERROR" |
72 | +WARNING = "WARNING" |
73 | +INFO = "INFO" |
74 | +DEBUG = "DEBUG" |
75 | +MARKER = object() |
76 | + |
77 | +cache = {} |
78 | + |
79 | + |
80 | +def cached(func): |
81 | + """Cache return values for multiple executions of func + args |
82 | + |
83 | + For example: |
84 | + |
85 | + @cached |
86 | + def unit_get(attribute): |
87 | + pass |
88 | + |
89 | + unit_get('test') |
90 | + |
91 | + will cache the result of unit_get + 'test' for future calls. |
92 | + """ |
93 | + def wrapper(*args, **kwargs): |
94 | + global cache |
95 | + key = str((func, args, kwargs)) |
96 | + try: |
97 | + return cache[key] |
98 | + except KeyError: |
99 | + res = func(*args, **kwargs) |
100 | + cache[key] = res |
101 | + return res |
102 | + return wrapper |
103 | + |
104 | + |
105 | +def flush(key): |
106 | + """Flushes any entries from function cache where the |
107 | + key is found in the function+args """ |
108 | + flush_list = [] |
109 | + for item in cache: |
110 | + if key in item: |
111 | + flush_list.append(item) |
112 | + for item in flush_list: |
113 | + del cache[item] |
114 | + |
115 | + |
116 | +def log(message, level=None): |
117 | + """Write a message to the juju log""" |
118 | + command = ['juju-log'] |
119 | + if level: |
120 | + command += ['-l', level] |
121 | + command += [message] |
122 | + subprocess.call(command) |
123 | + |
124 | + |
125 | +class Serializable(UserDict.IterableUserDict): |
126 | + """Wrapper, an object that can be serialized to yaml or json""" |
127 | + |
128 | + def __init__(self, obj): |
129 | + # wrap the object |
130 | + UserDict.IterableUserDict.__init__(self) |
131 | + self.data = obj |
132 | + |
133 | + def __getattr__(self, attr): |
134 | + # See if this object has attribute. |
135 | + if attr in ("json", "yaml", "data"): |
136 | + return self.__dict__[attr] |
137 | + # Check for attribute in wrapped object. |
138 | + got = getattr(self.data, attr, MARKER) |
139 | + if got is not MARKER: |
140 | + return got |
141 | + # Proxy to the wrapped object via dict interface. |
142 | + try: |
143 | + return self.data[attr] |
144 | + except KeyError: |
145 | + raise AttributeError(attr) |
146 | + |
147 | + def __getstate__(self): |
148 | + # Pickle as a standard dictionary. |
149 | + return self.data |
150 | + |
151 | + def __setstate__(self, state): |
152 | + # Unpickle into our wrapper. |
153 | + self.data = state |
154 | + |
155 | + def json(self): |
156 | + """Serialize the object to json""" |
157 | + return json.dumps(self.data) |
158 | + |
159 | + def yaml(self): |
160 | + """Serialize the object to yaml""" |
161 | + return yaml.dump(self.data) |
162 | + |
163 | + |
164 | +def execution_environment(): |
165 | + """A convenient bundling of the current execution context""" |
166 | + context = {} |
167 | + context['conf'] = config() |
168 | + if relation_id(): |
169 | + context['reltype'] = relation_type() |
170 | + context['relid'] = relation_id() |
171 | + context['rel'] = relation_get() |
172 | + context['unit'] = local_unit() |
173 | + context['rels'] = relations() |
174 | + context['env'] = os.environ |
175 | + return context |
176 | + |
177 | + |
178 | +def in_relation_hook(): |
179 | + """Determine whether we're running in a relation hook""" |
180 | + return 'JUJU_RELATION' in os.environ |
181 | + |
182 | + |
183 | +def relation_type(): |
184 | + """The scope for the current relation hook""" |
185 | + return os.environ.get('JUJU_RELATION', None) |
186 | + |
187 | + |
188 | +def relation_id(): |
189 | + """The relation ID for the current relation hook""" |
190 | + return os.environ.get('JUJU_RELATION_ID', None) |
191 | + |
192 | + |
193 | +def local_unit(): |
194 | + """Local unit ID""" |
195 | + return os.environ['JUJU_UNIT_NAME'] |
196 | + |
197 | + |
198 | +def remote_unit(): |
199 | + """The remote unit for the current relation hook""" |
200 | + return os.environ['JUJU_REMOTE_UNIT'] |
201 | + |
202 | + |
203 | +def service_name(): |
204 | + """The name service group this unit belongs to""" |
205 | + return local_unit().split('/')[0] |
206 | + |
207 | + |
208 | +def hook_name(): |
209 | + """The name of the currently executing hook""" |
210 | + return os.path.basename(sys.argv[0]) |
211 | + |
212 | + |
213 | +@cached |
214 | +def config(scope=None): |
215 | + """Juju charm configuration""" |
216 | + config_cmd_line = ['config-get'] |
217 | + if scope is not None: |
218 | + config_cmd_line.append(scope) |
219 | + config_cmd_line.append('--format=json') |
220 | + try: |
221 | + return json.loads(subprocess.check_output(config_cmd_line)) |
222 | + except ValueError: |
223 | + return None |
224 | + |
225 | + |
226 | +@cached |
227 | +def relation_get(attribute=None, unit=None, rid=None): |
228 | + """Get relation information""" |
229 | + _args = ['relation-get', '--format=json'] |
230 | + if rid: |
231 | + _args.append('-r') |
232 | + _args.append(rid) |
233 | + _args.append(attribute or '-') |
234 | + if unit: |
235 | + _args.append(unit) |
236 | + try: |
237 | + return json.loads(subprocess.check_output(_args)) |
238 | + except ValueError: |
239 | + return None |
240 | + except CalledProcessError, e: |
241 | + if e.returncode == 2: |
242 | + return None |
243 | + raise |
244 | + |
245 | + |
246 | +def relation_set(relation_id=None, relation_settings={}, **kwargs): |
247 | + """Set relation information for the current unit""" |
248 | + relation_cmd_line = ['relation-set'] |
249 | + if relation_id is not None: |
250 | + relation_cmd_line.extend(('-r', relation_id)) |
251 | + for k, v in (relation_settings.items() + kwargs.items()): |
252 | + if v is None: |
253 | + relation_cmd_line.append('{}='.format(k)) |
254 | + else: |
255 | + relation_cmd_line.append('{}={}'.format(k, v)) |
256 | + subprocess.check_call(relation_cmd_line) |
257 | + # Flush cache of any relation-gets for local unit |
258 | + flush(local_unit()) |
259 | + |
260 | + |
261 | +@cached |
262 | +def relation_ids(reltype=None): |
263 | + """A list of relation_ids""" |
264 | + reltype = reltype or relation_type() |
265 | + relid_cmd_line = ['relation-ids', '--format=json'] |
266 | + if reltype is not None: |
267 | + relid_cmd_line.append(reltype) |
268 | + return json.loads(subprocess.check_output(relid_cmd_line)) or [] |
269 | + return [] |
270 | + |
271 | + |
272 | +@cached |
273 | +def related_units(relid=None): |
274 | + """A list of related units""" |
275 | + relid = relid or relation_id() |
276 | + units_cmd_line = ['relation-list', '--format=json'] |
277 | + if relid is not None: |
278 | + units_cmd_line.extend(('-r', relid)) |
279 | + return json.loads(subprocess.check_output(units_cmd_line)) or [] |
280 | + |
281 | + |
282 | +@cached |
283 | +def relation_for_unit(unit=None, rid=None): |
284 | + """Get the json represenation of a unit's relation""" |
285 | + unit = unit or remote_unit() |
286 | + relation = relation_get(unit=unit, rid=rid) |
287 | + for key in relation: |
288 | + if key.endswith('-list'): |
289 | + relation[key] = relation[key].split() |
290 | + relation['__unit__'] = unit |
291 | + return relation |
292 | + |
293 | + |
294 | +@cached |
295 | +def relations_for_id(relid=None): |
296 | + """Get relations of a specific relation ID""" |
297 | + relation_data = [] |
298 | + relid = relid or relation_ids() |
299 | + for unit in related_units(relid): |
300 | + unit_data = relation_for_unit(unit, relid) |
301 | + unit_data['__relid__'] = relid |
302 | + relation_data.append(unit_data) |
303 | + return relation_data |
304 | + |
305 | + |
306 | +@cached |
307 | +def relations_of_type(reltype=None): |
308 | + """Get relations of a specific type""" |
309 | + relation_data = [] |
310 | + reltype = reltype or relation_type() |
311 | + for relid in relation_ids(reltype): |
312 | + for relation in relations_for_id(relid): |
313 | + relation['__relid__'] = relid |
314 | + relation_data.append(relation) |
315 | + return relation_data |
316 | + |
317 | + |
318 | +@cached |
319 | +def relation_types(): |
320 | + """Get a list of relation types supported by this charm""" |
321 | + charmdir = os.environ.get('CHARM_DIR', '') |
322 | + mdf = open(os.path.join(charmdir, 'metadata.yaml')) |
323 | + md = yaml.safe_load(mdf) |
324 | + rel_types = [] |
325 | + for key in ('provides', 'requires', 'peers'): |
326 | + section = md.get(key) |
327 | + if section: |
328 | + rel_types.extend(section.keys()) |
329 | + mdf.close() |
330 | + return rel_types |
331 | + |
332 | + |
333 | +@cached |
334 | +def relations(): |
335 | + """Get a nested dictionary of relation data for all related units""" |
336 | + rels = {} |
337 | + for reltype in relation_types(): |
338 | + relids = {} |
339 | + for relid in relation_ids(reltype): |
340 | + units = {local_unit(): relation_get(unit=local_unit(), rid=relid)} |
341 | + for unit in related_units(relid): |
342 | + reldata = relation_get(unit=unit, rid=relid) |
343 | + units[unit] = reldata |
344 | + relids[relid] = units |
345 | + rels[reltype] = relids |
346 | + return rels |
347 | + |
348 | + |
349 | +@cached |
350 | +def is_relation_made(relation, keys='private-address'): |
351 | + ''' |
352 | + Determine whether a relation is established by checking for |
353 | + presence of key(s). If a list of keys is provided, they |
354 | + must all be present for the relation to be identified as made |
355 | + ''' |
356 | + if isinstance(keys, str): |
357 | + keys = [keys] |
358 | + for r_id in relation_ids(relation): |
359 | + for unit in related_units(r_id): |
360 | + context = {} |
361 | + for k in keys: |
362 | + context[k] = relation_get(k, rid=r_id, |
363 | + unit=unit) |
364 | + if None not in context.values(): |
365 | + return True |
366 | + return False |
367 | + |
368 | + |
369 | +def open_port(port, protocol="TCP"): |
370 | + """Open a service network port""" |
371 | + _args = ['open-port'] |
372 | + _args.append('{}/{}'.format(port, protocol)) |
373 | + subprocess.check_call(_args) |
374 | + |
375 | + |
376 | +def close_port(port, protocol="TCP"): |
377 | + """Close a service network port""" |
378 | + _args = ['close-port'] |
379 | + _args.append('{}/{}'.format(port, protocol)) |
380 | + subprocess.check_call(_args) |
381 | + |
382 | + |
383 | +@cached |
384 | +def unit_get(attribute): |
385 | + """Get the unit ID for the remote unit""" |
386 | + _args = ['unit-get', '--format=json', attribute] |
387 | + try: |
388 | + return json.loads(subprocess.check_output(_args)) |
389 | + except ValueError: |
390 | + return None |
391 | + |
392 | + |
393 | +def unit_private_ip(): |
394 | + """Get this unit's private IP address""" |
395 | + return unit_get('private-address') |
396 | + |
397 | + |
398 | +class UnregisteredHookError(Exception): |
399 | + """Raised when an undefined hook is called""" |
400 | + pass |
401 | + |
402 | + |
403 | +class Hooks(object): |
404 | + """A convenient handler for hook functions. |
405 | + |
406 | + Example: |
407 | + hooks = Hooks() |
408 | + |
409 | + # register a hook, taking its name from the function name |
410 | + @hooks.hook() |
411 | + def install(): |
412 | + ... |
413 | + |
414 | + # register a hook, providing a custom hook name |
415 | + @hooks.hook("config-changed") |
416 | + def config_changed(): |
417 | + ... |
418 | + |
419 | + if __name__ == "__main__": |
420 | + # execute a hook based on the name the program is called by |
421 | + hooks.execute(sys.argv) |
422 | + """ |
423 | + |
424 | + def __init__(self): |
425 | + super(Hooks, self).__init__() |
426 | + self._hooks = {} |
427 | + |
428 | + def register(self, name, function): |
429 | + """Register a hook""" |
430 | + self._hooks[name] = function |
431 | + |
432 | + def execute(self, args): |
433 | + """Execute a registered hook based on args[0]""" |
434 | + hook_name = os.path.basename(args[0]) |
435 | + if hook_name in self._hooks: |
436 | + self._hooks[hook_name]() |
437 | + else: |
438 | + raise UnregisteredHookError(hook_name) |
439 | + |
440 | + def hook(self, *hook_names): |
441 | + """Decorator, registering them as hooks""" |
442 | + def wrapper(decorated): |
443 | + for hook_name in hook_names: |
444 | + self.register(hook_name, decorated) |
445 | + else: |
446 | + self.register(decorated.__name__, decorated) |
447 | + if '_' in decorated.__name__: |
448 | + self.register( |
449 | + decorated.__name__.replace('_', '-'), decorated) |
450 | + return decorated |
451 | + return wrapper |
452 | + |
453 | + |
454 | +def charm_dir(): |
455 | + """Return the root directory of the current charm""" |
456 | + return os.environ.get('CHARM_DIR') |
457 | |
458 | === added file 'hooks/charmhelpers/core/host.py' |
459 | --- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000 |
460 | +++ hooks/charmhelpers/core/host.py 2014-03-04 16:16:33 +0000 |
461 | @@ -0,0 +1,291 @@ |
462 | +"""Tools for working with the host system""" |
463 | +# Copyright 2012 Canonical Ltd. |
464 | +# |
465 | +# Authors: |
466 | +# Nick Moffitt <nick.moffitt@canonical.com> |
467 | +# Matthew Wedgwood <matthew.wedgwood@canonical.com> |
468 | + |
469 | +import os |
470 | +import pwd |
471 | +import grp |
472 | +import random |
473 | +import string |
474 | +import subprocess |
475 | +import hashlib |
476 | + |
477 | +from collections import OrderedDict |
478 | + |
479 | +from hookenv import log |
480 | + |
481 | + |
482 | +def service_start(service_name): |
483 | + """Start a system service""" |
484 | + return service('start', service_name) |
485 | + |
486 | + |
487 | +def service_stop(service_name): |
488 | + """Stop a system service""" |
489 | + return service('stop', service_name) |
490 | + |
491 | + |
492 | +def service_restart(service_name): |
493 | + """Restart a system service""" |
494 | + return service('restart', service_name) |
495 | + |
496 | + |
497 | +def service_reload(service_name, restart_on_failure=False): |
498 | + """Reload a system service, optionally falling back to restart if reload fails""" |
499 | + service_result = service('reload', service_name) |
500 | + if not service_result and restart_on_failure: |
501 | + service_result = service('restart', service_name) |
502 | + return service_result |
503 | + |
504 | + |
505 | +def service(action, service_name): |
506 | + """Control a system service""" |
507 | + cmd = ['service', service_name, action] |
508 | + return subprocess.call(cmd) == 0 |
509 | + |
510 | + |
511 | +def service_running(service): |
512 | + """Determine whether a system service is running""" |
513 | + try: |
514 | + output = subprocess.check_output(['service', service, 'status']) |
515 | + except subprocess.CalledProcessError: |
516 | + return False |
517 | + else: |
518 | + if ("start/running" in output or "is running" in output): |
519 | + return True |
520 | + else: |
521 | + return False |
522 | + |
523 | + |
524 | +def adduser(username, password=None, shell='/bin/bash', system_user=False): |
525 | + """Add a user to the system""" |
526 | + try: |
527 | + user_info = pwd.getpwnam(username) |
528 | + log('user {0} already exists!'.format(username)) |
529 | + except KeyError: |
530 | + log('creating user {0}'.format(username)) |
531 | + cmd = ['useradd'] |
532 | + if system_user or password is None: |
533 | + cmd.append('--system') |
534 | + else: |
535 | + cmd.extend([ |
536 | + '--create-home', |
537 | + '--shell', shell, |
538 | + '--password', password, |
539 | + ]) |
540 | + cmd.append(username) |
541 | + subprocess.check_call(cmd) |
542 | + user_info = pwd.getpwnam(username) |
543 | + return user_info |
544 | + |
545 | + |
546 | +def add_user_to_group(username, group): |
547 | + """Add a user to a group""" |
548 | + cmd = [ |
549 | + 'gpasswd', '-a', |
550 | + username, |
551 | + group |
552 | + ] |
553 | + log("Adding user {} to group {}".format(username, group)) |
554 | + subprocess.check_call(cmd) |
555 | + |
556 | + |
557 | +def rsync(from_path, to_path, flags='-r', options=None): |
558 | + """Replicate the contents of a path""" |
559 | + options = options or ['--delete', '--executability'] |
560 | + cmd = ['/usr/bin/rsync', flags] |
561 | + cmd.extend(options) |
562 | + cmd.append(from_path) |
563 | + cmd.append(to_path) |
564 | + log(" ".join(cmd)) |
565 | + return subprocess.check_output(cmd).strip() |
566 | + |
567 | + |
568 | +def symlink(source, destination): |
569 | + """Create a symbolic link""" |
570 | + log("Symlinking {} as {}".format(source, destination)) |
571 | + cmd = [ |
572 | + 'ln', |
573 | + '-sf', |
574 | + source, |
575 | + destination, |
576 | + ] |
577 | + subprocess.check_call(cmd) |
578 | + |
579 | + |
580 | +def mkdir(path, owner='root', group='root', perms=0555, force=False): |
581 | + """Create a directory""" |
582 | + log("Making dir {} {}:{} {:o}".format(path, owner, group, |
583 | + perms)) |
584 | + uid = pwd.getpwnam(owner).pw_uid |
585 | + gid = grp.getgrnam(group).gr_gid |
586 | + realpath = os.path.abspath(path) |
587 | + if os.path.exists(realpath): |
588 | + if force and not os.path.isdir(realpath): |
589 | + log("Removing non-directory file {} prior to mkdir()".format(path)) |
590 | + os.unlink(realpath) |
591 | + else: |
592 | + os.makedirs(realpath, perms) |
593 | + os.chown(realpath, uid, gid) |
594 | + |
595 | + |
596 | +def write_file(path, content, owner='root', group='root', perms=0444): |
597 | + """Create or overwrite a file with the contents of a string""" |
598 | + log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) |
599 | + uid = pwd.getpwnam(owner).pw_uid |
600 | + gid = grp.getgrnam(group).gr_gid |
601 | + with open(path, 'w') as target: |
602 | + os.fchown(target.fileno(), uid, gid) |
603 | + os.fchmod(target.fileno(), perms) |
604 | + target.write(content) |
605 | + |
606 | + |
607 | +def mount(device, mountpoint, options=None, persist=False): |
608 | + """Mount a filesystem at a particular mountpoint""" |
609 | + cmd_args = ['mount'] |
610 | + if options is not None: |
611 | + cmd_args.extend(['-o', options]) |
612 | + cmd_args.extend([device, mountpoint]) |
613 | + try: |
614 | + subprocess.check_output(cmd_args) |
615 | + except subprocess.CalledProcessError, e: |
616 | + log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) |
617 | + return False |
618 | + if persist: |
619 | + # TODO: update fstab |
620 | + pass |
621 | + return True |
622 | + |
623 | + |
624 | +def umount(mountpoint, persist=False): |
625 | + """Unmount a filesystem""" |
626 | + cmd_args = ['umount', mountpoint] |
627 | + try: |
628 | + subprocess.check_output(cmd_args) |
629 | + except subprocess.CalledProcessError, e: |
630 | + log('Error unmounting {}\n{}'.format(mountpoint, e.output)) |
631 | + return False |
632 | + if persist: |
633 | + # TODO: update fstab |
634 | + pass |
635 | + return True |
636 | + |
637 | + |
638 | +def mounts(): |
639 | + """Get a list of all mounted volumes as [[mountpoint,device],[...]]""" |
640 | + with open('/proc/mounts') as f: |
641 | + # [['/mount/point','/dev/path'],[...]] |
642 | + system_mounts = [m[1::-1] for m in [l.strip().split() |
643 | + for l in f.readlines()]] |
644 | + return system_mounts |
645 | + |
646 | + |
647 | +def file_hash(path): |
648 | + """Generate a md5 hash of the contents of 'path' or None if not found """ |
649 | + if os.path.exists(path): |
650 | + h = hashlib.md5() |
651 | + with open(path, 'r') as source: |
652 | + h.update(source.read()) # IGNORE:E1101 - it does have update |
653 | + return h.hexdigest() |
654 | + else: |
655 | + return None |
656 | + |
657 | + |
658 | +def restart_on_change(restart_map): |
659 | + """Restart services based on configuration files changing |
660 | + |
661 | + This function is used a decorator, for example |
662 | + |
663 | + @restart_on_change({ |
664 | + '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] |
665 | + }) |
666 | + def ceph_client_changed(): |
667 | + ... |
668 | + |
669 | + In this example, the cinder-api and cinder-volume services |
670 | + would be restarted if /etc/ceph/ceph.conf is changed by the |
671 | + ceph_client_changed function. |
672 | + """ |
673 | + def wrap(f): |
674 | + def wrapped_f(*args): |
675 | + checksums = {} |
676 | + for path in restart_map: |
677 | + checksums[path] = file_hash(path) |
678 | + f(*args) |
679 | + restarts = [] |
680 | + for path in restart_map: |
681 | + if checksums[path] != file_hash(path): |
682 | + restarts += restart_map[path] |
683 | + for service_name in list(OrderedDict.fromkeys(restarts)): |
684 | + service('restart', service_name) |
685 | + return wrapped_f |
686 | + return wrap |
687 | + |
688 | + |
689 | +def lsb_release(): |
690 | + """Return /etc/lsb-release in a dict""" |
691 | + d = {} |
692 | + with open('/etc/lsb-release', 'r') as lsb: |
693 | + for l in lsb: |
694 | + k, v = l.split('=') |
695 | + d[k.strip()] = v.strip() |
696 | + return d |
697 | + |
698 | + |
699 | +def pwgen(length=None): |
700 | + """Generate a random pasword.""" |
701 | + if length is None: |
702 | + length = random.choice(range(35, 45)) |
703 | + alphanumeric_chars = [ |
704 | + l for l in (string.letters + string.digits) |
705 | + if l not in 'l0QD1vAEIOUaeiou'] |
706 | + random_chars = [ |
707 | + random.choice(alphanumeric_chars) for _ in range(length)] |
708 | + return(''.join(random_chars)) |
709 | + |
710 | + |
711 | +def list_nics(nic_type): |
712 | + '''Return a list of nics of given type(s)''' |
713 | + if isinstance(nic_type, basestring): |
714 | + int_types = [nic_type] |
715 | + else: |
716 | + int_types = nic_type |
717 | + interfaces = [] |
718 | + for int_type in int_types: |
719 | + cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] |
720 | + ip_output = subprocess.check_output(cmd).split('\n') |
721 | + ip_output = (line for line in ip_output if line) |
722 | + for line in ip_output: |
723 | + if line.split()[1].startswith(int_type): |
724 | + interfaces.append(line.split()[1].replace(":", "")) |
725 | + return interfaces |
726 | + |
727 | + |
728 | +def set_nic_mtu(nic, mtu): |
729 | + '''Set MTU on a network interface''' |
730 | + cmd = ['ip', 'link', 'set', nic, 'mtu', mtu] |
731 | + subprocess.check_call(cmd) |
732 | + |
733 | + |
734 | +def get_nic_mtu(nic): |
735 | + cmd = ['ip', 'addr', 'show', nic] |
736 | + ip_output = subprocess.check_output(cmd).split('\n') |
737 | + mtu = "" |
738 | + for line in ip_output: |
739 | + words = line.split() |
740 | + if 'mtu' in words: |
741 | + mtu = words[words.index("mtu") + 1] |
742 | + return mtu |
743 | + |
744 | + |
745 | +def get_nic_hwaddr(nic): |
746 | + cmd = ['ip', '-o', '-0', 'addr', 'show', nic] |
747 | + ip_output = subprocess.check_output(cmd) |
748 | + hwaddr = "" |
749 | + words = ip_output.split() |
750 | + if 'link/ether' in words: |
751 | + hwaddr = words[words.index('link/ether') + 1] |
752 | + return hwaddr |
753 | |
754 | === added directory 'hooks/charmhelpers/fetch' |
755 | === added file 'hooks/charmhelpers/fetch/__init__.py' |
756 | --- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000 |
757 | +++ hooks/charmhelpers/fetch/__init__.py 2014-03-04 16:16:33 +0000 |
758 | @@ -0,0 +1,279 @@ |
759 | +import importlib |
760 | +from yaml import safe_load |
761 | +from charmhelpers.core.host import ( |
762 | + lsb_release |
763 | +) |
764 | +from urlparse import ( |
765 | + urlparse, |
766 | + urlunparse, |
767 | +) |
768 | +import subprocess |
769 | +from charmhelpers.core.hookenv import ( |
770 | + config, |
771 | + log, |
772 | +) |
773 | +import apt_pkg |
774 | +import os |
775 | + |
776 | +CLOUD_ARCHIVE = """# Ubuntu Cloud Archive |
777 | +deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
778 | +""" |
779 | +PROPOSED_POCKET = """# Proposed |
780 | +deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted |
781 | +""" |
782 | +CLOUD_ARCHIVE_POCKETS = { |
783 | + # Folsom |
784 | + 'folsom': 'precise-updates/folsom', |
785 | + 'precise-folsom': 'precise-updates/folsom', |
786 | + 'precise-folsom/updates': 'precise-updates/folsom', |
787 | + 'precise-updates/folsom': 'precise-updates/folsom', |
788 | + 'folsom/proposed': 'precise-proposed/folsom', |
789 | + 'precise-folsom/proposed': 'precise-proposed/folsom', |
790 | + 'precise-proposed/folsom': 'precise-proposed/folsom', |
791 | + # Grizzly |
792 | + 'grizzly': 'precise-updates/grizzly', |
793 | + 'precise-grizzly': 'precise-updates/grizzly', |
794 | + 'precise-grizzly/updates': 'precise-updates/grizzly', |
795 | + 'precise-updates/grizzly': 'precise-updates/grizzly', |
796 | + 'grizzly/proposed': 'precise-proposed/grizzly', |
797 | + 'precise-grizzly/proposed': 'precise-proposed/grizzly', |
798 | + 'precise-proposed/grizzly': 'precise-proposed/grizzly', |
799 | + # Havana |
800 | + 'havana': 'precise-updates/havana', |
801 | + 'precise-havana': 'precise-updates/havana', |
802 | + 'precise-havana/updates': 'precise-updates/havana', |
803 | + 'precise-updates/havana': 'precise-updates/havana', |
804 | + 'havana/proposed': 'precise-proposed/havana', |
805 | + 'precise-havana/proposed': 'precise-proposed/havana', |
806 | + 'precise-proposed/havana': 'precise-proposed/havana', |
807 | + # Icehouse |
808 | + 'icehouse': 'precise-updates/icehouse', |
809 | + 'precise-icehouse': 'precise-updates/icehouse', |
810 | + 'precise-icehouse/updates': 'precise-updates/icehouse', |
811 | + 'precise-updates/icehouse': 'precise-updates/icehouse', |
812 | + 'icehouse/proposed': 'precise-proposed/icehouse', |
813 | + 'precise-icehouse/proposed': 'precise-proposed/icehouse', |
814 | + 'precise-proposed/icehouse': 'precise-proposed/icehouse', |
815 | +} |
816 | + |
817 | + |
818 | +def filter_installed_packages(packages): |
819 | + """Returns a list of packages that require installation""" |
820 | + apt_pkg.init() |
821 | + cache = apt_pkg.Cache() |
822 | + _pkgs = [] |
823 | + for package in packages: |
824 | + try: |
825 | + p = cache[package] |
826 | + p.current_ver or _pkgs.append(package) |
827 | + except KeyError: |
828 | + log('Package {} has no installation candidate.'.format(package), |
829 | + level='WARNING') |
830 | + _pkgs.append(package) |
831 | + return _pkgs |
832 | + |
833 | + |
834 | +def apt_install(packages, options=None, fatal=False): |
835 | + """Install one or more packages""" |
836 | + if options is None: |
837 | + options = ['--option=Dpkg::Options::=--force-confold'] |
838 | + |
839 | + cmd = ['apt-get', '--assume-yes'] |
840 | + cmd.extend(options) |
841 | + cmd.append('install') |
842 | + if isinstance(packages, basestring): |
843 | + cmd.append(packages) |
844 | + else: |
845 | + cmd.extend(packages) |
846 | + log("Installing {} with options: {}".format(packages, |
847 | + options)) |
848 | + env = os.environ.copy() |
849 | + if 'DEBIAN_FRONTEND' not in env: |
850 | + env['DEBIAN_FRONTEND'] = 'noninteractive' |
851 | + |
852 | + if fatal: |
853 | + subprocess.check_call(cmd, env=env) |
854 | + else: |
855 | + subprocess.call(cmd, env=env) |
856 | + |
857 | + |
858 | +def apt_update(fatal=False): |
859 | + """Update local apt cache""" |
860 | + cmd = ['apt-get', 'update'] |
861 | + if fatal: |
862 | + subprocess.check_call(cmd) |
863 | + else: |
864 | + subprocess.call(cmd) |
865 | + |
866 | + |
867 | +def apt_purge(packages, fatal=False): |
868 | + """Purge one or more packages""" |
869 | + cmd = ['apt-get', '--assume-yes', 'purge'] |
870 | + if isinstance(packages, basestring): |
871 | + cmd.append(packages) |
872 | + else: |
873 | + cmd.extend(packages) |
874 | + log("Purging {}".format(packages)) |
875 | + if fatal: |
876 | + subprocess.check_call(cmd) |
877 | + else: |
878 | + subprocess.call(cmd) |
879 | + |
880 | + |
881 | +def apt_hold(packages, fatal=False): |
882 | + """Hold one or more packages""" |
883 | + cmd = ['apt-mark', 'hold'] |
884 | + if isinstance(packages, basestring): |
885 | + cmd.append(packages) |
886 | + else: |
887 | + cmd.extend(packages) |
888 | + log("Holding {}".format(packages)) |
889 | + if fatal: |
890 | + subprocess.check_call(cmd) |
891 | + else: |
892 | + subprocess.call(cmd) |
893 | + |
894 | + |
895 | +def add_source(source, key=None): |
896 | + if (source.startswith('ppa:') or |
897 | + source.startswith('http:') or |
898 | + source.startswith('deb ') or |
899 | + source.startswith('cloud-archive:')): |
900 | + subprocess.check_call(['add-apt-repository', '--yes', source]) |
901 | + elif source.startswith('cloud:'): |
902 | + apt_install(filter_installed_packages(['ubuntu-cloud-keyring']), |
903 | + fatal=True) |
904 | + pocket = source.split(':')[-1] |
905 | + if pocket not in CLOUD_ARCHIVE_POCKETS: |
906 | + raise SourceConfigError( |
907 | + 'Unsupported cloud: source option %s' % |
908 | + pocket) |
909 | + actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket] |
910 | + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: |
911 | + apt.write(CLOUD_ARCHIVE.format(actual_pocket)) |
912 | + elif source == 'proposed': |
913 | + release = lsb_release()['DISTRIB_CODENAME'] |
914 | + with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: |
915 | + apt.write(PROPOSED_POCKET.format(release)) |
916 | + if key: |
917 | + subprocess.check_call(['apt-key', 'import', key]) |
918 | + |
919 | + |
920 | +class SourceConfigError(Exception): |
921 | + pass |
922 | + |
923 | + |
924 | +def configure_sources(update=False, |
925 | + sources_var='install_sources', |
926 | + keys_var='install_keys'): |
927 | + """ |
928 | + Configure multiple sources from charm configuration |
929 | + |
930 | + Example config: |
931 | + install_sources: |
932 | + - "ppa:foo" |
933 | + - "http://example.com/repo precise main" |
934 | + install_keys: |
935 | + - null |
936 | + - "a1b2c3d4" |
937 | + |
938 | + Note that 'null' (a.k.a. None) should not be quoted. |
939 | + """ |
940 | + sources = safe_load(config(sources_var)) |
941 | + keys = config(keys_var) |
942 | + if keys is not None: |
943 | + keys = safe_load(keys) |
944 | + if isinstance(sources, basestring) and ( |
945 | + keys is None or isinstance(keys, basestring)): |
946 | + add_source(sources, keys) |
947 | + else: |
948 | + if not len(sources) == len(keys): |
949 | + msg = 'Install sources and keys lists are different lengths' |
950 | + raise SourceConfigError(msg) |
951 | + for src_num in range(len(sources)): |
952 | + add_source(sources[src_num], keys[src_num]) |
953 | + if update: |
954 | + apt_update(fatal=True) |
955 | + |
956 | +# The order of this list is very important. Handlers should be listed |
957 | +# from least- to most-specific URL matching. |
958 | +FETCH_HANDLERS = ( |
959 | + 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', |
960 | + 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', |
961 | +) |
962 | + |
963 | + |
964 | +class UnhandledSource(Exception): |
965 | + pass |
966 | + |
967 | + |
968 | +def install_remote(source): |
969 | + """ |
970 | + Install a file tree from a remote source |
971 | + |
972 | + The specified source should be a url of the form: |
973 | + scheme://[host]/path[#[option=value][&...]] |
974 | + |
975 | + Schemes supported are based on this module's submodules |
976 | + Options supported are submodule-specific""" |
977 | + # We ONLY check for True here because can_handle may return a string |
978 | + # explaining why it can't handle a given source. |
979 | + handlers = [h for h in plugins() if h.can_handle(source) is True] |
980 | + installed_to = None |
981 | + for handler in handlers: |
982 | + try: |
983 | + installed_to = handler.install(source) |
984 | + except UnhandledSource: |
985 | + pass |
986 | + if not installed_to: |
987 | + raise UnhandledSource("No handler found for source {}".format(source)) |
988 | + return installed_to |
989 | + |
990 | + |
991 | +def install_from_config(config_var_name): |
992 | + charm_config = config() |
993 | + source = charm_config[config_var_name] |
994 | + return install_remote(source) |
995 | + |
996 | + |
997 | +class BaseFetchHandler(object): |
998 | + |
999 | + """Base class for FetchHandler implementations in fetch plugins""" |
1000 | + |
1001 | + def can_handle(self, source): |
1002 | + """Returns True if the source can be handled. Otherwise returns |
1003 | + a string explaining why it cannot""" |
1004 | + return "Wrong source type" |
1005 | + |
1006 | + def install(self, source): |
1007 | + """Try to download and unpack the source. Return the path to the |
1008 | + unpacked files or raise UnhandledSource.""" |
1009 | + raise UnhandledSource("Wrong source type {}".format(source)) |
1010 | + |
1011 | + def parse_url(self, url): |
1012 | + return urlparse(url) |
1013 | + |
1014 | + def base_url(self, url): |
1015 | + """Return url without querystring or fragment""" |
1016 | + parts = list(self.parse_url(url)) |
1017 | + parts[4:] = ['' for i in parts[4:]] |
1018 | + return urlunparse(parts) |
1019 | + |
1020 | + |
1021 | +def plugins(fetch_handlers=None): |
1022 | + if not fetch_handlers: |
1023 | + fetch_handlers = FETCH_HANDLERS |
1024 | + plugin_list = [] |
1025 | + for handler_name in fetch_handlers: |
1026 | + package, classname = handler_name.rsplit('.', 1) |
1027 | + try: |
1028 | + handler_class = getattr( |
1029 | + importlib.import_module(package), |
1030 | + classname) |
1031 | + plugin_list.append(handler_class()) |
1032 | + except (ImportError, AttributeError): |
1033 | + # Skip missing plugins so that they can be omitted from |
1034 | + # installation if desired |
1035 | + log("FetchHandler {} not found, skipping plugin".format( |
1036 | + handler_name)) |
1037 | + return plugin_list |
1038 | |
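The `install_remote` helper added above dispatches to the first plugin whose `can_handle()` returns exactly `True` (a string return is a refusal message, which is why the code checks `is True`). A minimal standalone sketch of that dispatch pattern, using hypothetical handler classes in place of the real fetch plugins:

```python
class Unhandled(Exception):
    pass


class HttpHandler:
    # Hypothetical stand-in for ArchiveUrlFetchHandler.
    def can_handle(self, source):
        return True if source.startswith('http') else "Wrong source type"

    def install(self, source):
        return '/tmp/fetched/example'


class BzrHandler:
    # Hypothetical stand-in for BzrUrlFetchHandler.
    def can_handle(self, source):
        return True if source.startswith('lp:') else "Wrong source type"

    def install(self, source):
        return '/tmp/fetched/branch'


def install_remote(source, handlers):
    # Only `is True` counts: a string result means "cannot handle, and
    # here is why", mirroring the charmhelpers convention above.
    for handler in handlers:
        if handler.can_handle(source) is True:
            try:
                return handler.install(source)
            except Unhandled:
                continue
    raise Unhandled("No handler found for source {}".format(source))
```

The string-on-refusal convention lets a caller log *why* every handler declined, instead of just failing with a generic error.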
1039 | === added file 'hooks/charmhelpers/fetch/archiveurl.py' |
1040 | --- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000 |
1041 | +++ hooks/charmhelpers/fetch/archiveurl.py 2014-03-04 16:16:33 +0000 |
1042 | @@ -0,0 +1,48 @@ |
1043 | +import os |
1044 | +import urllib2 |
1045 | +from charmhelpers.fetch import ( |
1046 | + BaseFetchHandler, |
1047 | + UnhandledSource |
1048 | +) |
1049 | +from charmhelpers.payload.archive import ( |
1050 | + get_archive_handler, |
1051 | + extract, |
1052 | +) |
1053 | +from charmhelpers.core.host import mkdir |
1054 | + |
1055 | + |
1056 | +class ArchiveUrlFetchHandler(BaseFetchHandler): |
1057 | + """Handler for archives via generic URLs""" |
1058 | + def can_handle(self, source): |
1059 | + url_parts = self.parse_url(source) |
1060 | + if url_parts.scheme not in ('http', 'https', 'ftp', 'file'): |
1061 | + return "Wrong source type" |
1062 | + if get_archive_handler(self.base_url(source)): |
1063 | + return True |
1064 | + return False |
1065 | + |
1066 | + def download(self, source, dest): |
1067 | + # propagate all exceptions |
1068 | + # URLError, OSError, etc |
1069 | + response = urllib2.urlopen(source) |
1070 | + try: |
1071 | + with open(dest, 'w') as dest_file: |
1072 | + dest_file.write(response.read()) |
1073 | + except Exception as e: |
1074 | + if os.path.isfile(dest): |
1075 | + os.unlink(dest) |
1076 | + raise e |
1077 | + |
1078 | + def install(self, source): |
1079 | + url_parts = self.parse_url(source) |
1080 | + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') |
1081 | + if not os.path.exists(dest_dir): |
1082 | + mkdir(dest_dir, perms=0755) |
1083 | + dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) |
1084 | + try: |
1085 | + self.download(source, dld_file) |
1086 | + except urllib2.URLError as e: |
1087 | + raise UnhandledSource(e.reason) |
1088 | + except OSError as e: |
1089 | + raise UnhandledSource(e.strerror) |
1090 | + return extract(dld_file) |
1091 | |
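`ArchiveUrlFetchHandler.can_handle()` relies on `BaseFetchHandler.base_url()` to strip the querystring and fragment before probing for an archive extension. A minimal sketch of that stripping, written against Python 3's `urllib.parse` (the charm itself targets Python 2's `urlparse` module):

```python
from urllib.parse import urlparse, urlunparse


def base_url(url):
    # urlparse returns a 6-tuple: (scheme, netloc, path, params,
    # query, fragment). Blanking out elements 4 and 5 drops the
    # querystring and fragment while keeping scheme, host and path.
    parts = list(urlparse(url))
    parts[4:] = ['' for _ in parts[4:]]
    return urlunparse(parts)
```

This matters for URLs like `http://host/app.tar.gz?token=abc`, where the archive-type check would otherwise see `.tar.gz?token=abc` and fail to match.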
1092 | === added file 'hooks/charmhelpers/fetch/bzrurl.py' |
1093 | --- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000 |
1094 | +++ hooks/charmhelpers/fetch/bzrurl.py 2014-03-04 16:16:33 +0000 |
1095 | @@ -0,0 +1,49 @@ |
1096 | +import os |
1097 | +from charmhelpers.fetch import ( |
1098 | + BaseFetchHandler, |
1099 | + UnhandledSource |
1100 | +) |
1101 | +from charmhelpers.core.host import mkdir |
1102 | + |
1103 | +try: |
1104 | + from bzrlib.branch import Branch |
1105 | +except ImportError: |
1106 | + from charmhelpers.fetch import apt_install |
1107 | + apt_install("python-bzrlib") |
1108 | + from bzrlib.branch import Branch |
1109 | + |
1110 | + |
1111 | +class BzrUrlFetchHandler(BaseFetchHandler): |
1112 | + """Handler for bazaar branches via generic and lp URLs""" |
1113 | + def can_handle(self, source): |
1114 | + url_parts = self.parse_url(source) |
1115 | + if url_parts.scheme not in ('bzr+ssh', 'lp'): |
1116 | + return False |
1117 | + else: |
1118 | + return True |
1119 | + |
1120 | + def branch(self, source, dest): |
1121 | + url_parts = self.parse_url(source) |
1122 | + # If we use lp:branchname scheme we need to load plugins |
1123 | + if not self.can_handle(source): |
1124 | + raise UnhandledSource("Cannot handle {}".format(source)) |
1125 | + if url_parts.scheme == "lp": |
1126 | + from bzrlib.plugin import load_plugins |
1127 | + load_plugins() |
1128 | + try: |
1129 | + remote_branch = Branch.open(source) |
1130 | + remote_branch.bzrdir.sprout(dest).open_branch() |
1131 | + except Exception as e: |
1132 | + raise e |
1133 | + |
1134 | + def install(self, source): |
1135 | + url_parts = self.parse_url(source) |
1136 | + branch_name = url_parts.path.strip("/").split("/")[-1] |
1137 | + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name) |
1138 | + if not os.path.exists(dest_dir): |
1139 | + mkdir(dest_dir, perms=0755) |
1140 | + try: |
1141 | + self.branch(source, dest_dir) |
1142 | + except OSError as e: |
1143 | + raise UnhandledSource(e.strerror) |
1144 | + return dest_dir |
1145 | |
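`BzrUrlFetchHandler.install()` derives the checkout directory from the last path segment of the branch URL. A small sketch of that derivation (the `charm_dir` default is a hypothetical stand-in for `os.environ['CHARM_DIR']`):

```python
from urllib.parse import urlparse


def branch_dest(source, charm_dir='/var/lib/juju/charm'):
    # The destination directory name is the final path segment of the
    # branch URL, e.g. 'trunk' or 'cleanup', placed under
    # $CHARM_DIR/fetched/ as in BzrUrlFetchHandler.install() above.
    path = urlparse(source).path
    branch_name = path.strip('/').split('/')[-1]
    return '{}/fetched/{}'.format(charm_dir, branch_name)
```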
1146 | === added symlink 'hooks/config-changed' |
1147 | === target is u'hooks.py' |
1148 | === modified file 'hooks/hooks.py' |
1149 | --- hooks/hooks.py 2013-06-05 12:55:47 +0000 |
1150 | +++ hooks/hooks.py 2014-03-04 16:16:33 +0000 |
1151 | @@ -1,313 +1,24 @@ |
1152 | #!/usr/bin/env python |
1153 | # vim: et ai ts=4 sw=4: |
1154 | |
1155 | -import json |
1156 | import os |
1157 | -import re |
1158 | -import subprocess |
1159 | import sys |
1160 | -import time |
1161 | -from pwd import getpwnam |
1162 | -from grp import getgrnam |
1163 | - |
1164 | -CHARM_PACKAGES = ["gunicorn"] |
1165 | - |
1166 | -INJECTED_WARNING = """ |
1167 | -#------------------------------------------------------------------------------ |
1168 | -# The following is the import code for the settings directory injected by Juju |
1169 | -#------------------------------------------------------------------------------ |
1170 | -""" |
1171 | - |
1172 | +from multiprocessing import cpu_count |
1173 | +import shlex |
1174 | + |
1175 | +from charmhelpers.core import hookenv |
1176 | +from charmhelpers.core import host |
1177 | +from charmhelpers import fetch |
1178 | + |
1179 | +hooks = hookenv.Hooks() |
1180 | + |
1181 | + |
1182 | +CHARM_PACKAGES = ["gunicorn", "python-jinja2"] |
1183 | |
1184 | ############################################################################### |
1185 | # Supporting functions |
1186 | ############################################################################### |
1187 | -MSG_CRITICAL = "CRITICAL" |
1188 | -MSG_DEBUG = "DEBUG" |
1189 | -MSG_INFO = "INFO" |
1190 | -MSG_ERROR = "ERROR" |
1191 | -MSG_WARNING = "WARNING" |
1192 | - |
1193 | - |
1194 | -def juju_log(level, msg): |
1195 | - subprocess.call(['juju-log', '-l', level, msg]) |
1196 | - |
1197 | -#------------------------------------------------------------------------------ |
1198 | -# run: Run a command, return the output |
1199 | -#------------------------------------------------------------------------------ |
1200 | -def run(command, exit_on_error=True, cwd=None): |
1201 | - try: |
1202 | - juju_log(MSG_DEBUG, command) |
1203 | - return subprocess.check_output( |
1204 | - command, stderr=subprocess.STDOUT, shell=True, cwd=cwd) |
1205 | - except subprocess.CalledProcessError, e: |
1206 | - juju_log(MSG_ERROR, "status=%d, output=%s" % (e.returncode, e.output)) |
1207 | - if exit_on_error: |
1208 | - sys.exit(e.returncode) |
1209 | - else: |
1210 | - raise |
1211 | - |
1212 | - |
1213 | -#------------------------------------------------------------------------------ |
1214 | -# install_file: install a file resource. overwites existing files. |
1215 | -#------------------------------------------------------------------------------ |
1216 | -def install_file(contents, dest, owner="root", group="root", mode=0600): |
1217 | - uid = getpwnam(owner)[2] |
1218 | - gid = getgrnam(group)[2] |
1219 | - dest_fd = os.open(dest, os.O_WRONLY | os.O_TRUNC | os.O_CREAT, mode) |
1220 | - os.fchown(dest_fd, uid, gid) |
1221 | - with os.fdopen(dest_fd, 'w') as destfile: |
1222 | - destfile.write(str(contents)) |
1223 | - |
1224 | - |
1225 | -#------------------------------------------------------------------------------ |
1226 | -# install_dir: create a directory |
1227 | -#------------------------------------------------------------------------------ |
1228 | -def install_dir(dirname, owner="root", group="root", mode=0700): |
1229 | - command = \ |
1230 | - '/usr/bin/install -o {} -g {} -m {} -d {}'.format(owner, group, oct(mode), |
1231 | - dirname) |
1232 | - return run(command) |
1233 | - |
1234 | -#------------------------------------------------------------------------------ |
1235 | -# config_get: Returns a dictionary containing all of the config information |
1236 | -# Optional parameter: scope |
1237 | -# scope: limits the scope of the returned configuration to the |
1238 | -# desired config item. |
1239 | -#------------------------------------------------------------------------------ |
1240 | -def config_get(scope=None): |
1241 | - try: |
1242 | - config_cmd_line = ['config-get'] |
1243 | - if scope is not None: |
1244 | - config_cmd_line.append(scope) |
1245 | - config_cmd_line.append('--format=json') |
1246 | - config_data = json.loads(subprocess.check_output(config_cmd_line)) |
1247 | - except: |
1248 | - config_data = None |
1249 | - finally: |
1250 | - return(config_data) |
1251 | - |
1252 | - |
1253 | -#------------------------------------------------------------------------------ |
1254 | -# relation_json: Returns json-formatted relation data |
1255 | -# Optional parameters: scope, relation_id |
1256 | -# scope: limits the scope of the returned data to the |
1257 | -# desired item. |
1258 | -# unit_name: limits the data ( and optionally the scope ) |
1259 | -# to the specified unit |
1260 | -# relation_id: specify relation id for out of context usage. |
1261 | -#------------------------------------------------------------------------------ |
1262 | -def relation_json(scope=None, unit_name=None, relation_id=None): |
1263 | - command = ['relation-get', '--format=json'] |
1264 | - if relation_id is not None: |
1265 | - command.extend(('-r', relation_id)) |
1266 | - if scope is not None: |
1267 | - command.append(scope) |
1268 | - else: |
1269 | - command.append('-') |
1270 | - if unit_name is not None: |
1271 | - command.append(unit_name) |
1272 | - output = subprocess.check_output(command, stderr=subprocess.STDOUT) |
1273 | - return output or None |
1274 | - |
1275 | - |
1276 | -#------------------------------------------------------------------------------ |
1277 | -# relation_get: Returns a dictionary containing the relation information |
1278 | -# Optional parameters: scope, relation_id |
1279 | -# scope: limits the scope of the returned data to the |
1280 | -# desired item. |
1281 | -# unit_name: limits the data ( and optionally the scope ) |
1282 | -# to the specified unit |
1283 | -#------------------------------------------------------------------------------ |
1284 | -def relation_get(scope=None, unit_name=None, relation_id=None): |
1285 | - j = relation_json(scope, unit_name, relation_id) |
1286 | - if j: |
1287 | - return json.loads(j) |
1288 | - else: |
1289 | - return None |
1290 | - |
1291 | - |
1292 | -def relation_set(keyvalues, relation_id=None): |
1293 | - args = [] |
1294 | - if relation_id: |
1295 | - args.extend(['-r', relation_id]) |
1296 | - args.extend(["{}='{}'".format(k, v or '') for k, v in keyvalues.items()]) |
1297 | - run("relation-set {}".format(' '.join(args))) |
1298 | - |
1299 | - ## Posting json to relation-set doesn't seem to work as documented? |
1300 | - ## Bug #1116179 |
1301 | - ## |
1302 | - ## cmd = ['relation-set'] |
1303 | - ## if relation_id: |
1304 | - ## cmd.extend(['-r', relation_id]) |
1305 | - ## p = Popen( |
1306 | - ## cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, |
1307 | - ## stderr=subprocess.PIPE) |
1308 | - ## (out, err) = p.communicate(json.dumps(keyvalues)) |
1309 | - ## if p.returncode: |
1310 | - ## juju_log(MSG_ERROR, err) |
1311 | - ## sys.exit(1) |
1312 | - ## juju_log(MSG_DEBUG, "relation-set {}".format(repr(keyvalues))) |
1313 | - |
1314 | - |
1315 | -def relation_list(relation_id=None): |
1316 | - """Return the list of units participating in the relation.""" |
1317 | - if relation_id is None: |
1318 | - relation_id = os.environ['JUJU_RELATION_ID'] |
1319 | - cmd = ['relation-list', '--format=json', '-r', relation_id] |
1320 | - json_units = subprocess.check_output(cmd).strip() |
1321 | - if json_units: |
1322 | - return json.loads(subprocess.check_output(cmd)) |
1323 | - return [] |
1324 | - |
1325 | - |
1326 | -#------------------------------------------------------------------------------ |
1327 | -# relation_ids: Returns a list of relation ids |
1328 | -# optional parameters: relation_type |
1329 | -# relation_type: return relations only of this type |
1330 | -#------------------------------------------------------------------------------ |
1331 | -def relation_ids(relation_types=('db',)): |
1332 | - # accept strings or iterators |
1333 | - if isinstance(relation_types, basestring): |
1334 | - reltypes = [relation_types, ] |
1335 | - else: |
1336 | - reltypes = relation_types |
1337 | - relids = [] |
1338 | - for reltype in reltypes: |
1339 | - relid_cmd_line = ['relation-ids', '--format=json', reltype] |
1340 | - json_relids = subprocess.check_output(relid_cmd_line).strip() |
1341 | - if json_relids: |
1342 | - relids.extend(json.loads(json_relids)) |
1343 | - return relids |
1344 | - |
1345 | - |
1346 | -#------------------------------------------------------------------------------ |
1347 | -# relation_get_all: Returns a dictionary containing the relation information |
1348 | -# optional parameters: relation_type |
1349 | -# relation_type: limits the scope of the returned data to the |
1350 | -# desired item. |
1351 | -#------------------------------------------------------------------------------ |
1352 | -def relation_get_all(*args, **kwargs): |
1353 | - relation_data = [] |
1354 | - relids = relation_ids(*args, **kwargs) |
1355 | - for relid in relids: |
1356 | - units_cmd_line = ['relation-list', '--format=json', '-r', relid] |
1357 | - json_units = subprocess.check_output(units_cmd_line).strip() |
1358 | - if json_units: |
1359 | - for unit in json.loads(json_units): |
1360 | - unit_data = \ |
1361 | - json.loads(relation_json(relation_id=relid, |
1362 | - unit_name=unit)) |
1363 | - for key in unit_data: |
1364 | - if key.endswith('-list'): |
1365 | - unit_data[key] = unit_data[key].split() |
1366 | - unit_data['relation-id'] = relid |
1367 | - unit_data['unit'] = unit |
1368 | - relation_data.append(unit_data) |
1369 | - return relation_data |
1370 | - |
1371 | -def apt_get_update(): |
1372 | - cmd_line = ['apt-get', 'update'] |
1373 | - return(subprocess.call(cmd_line)) |
1374 | - |
1375 | - |
1376 | -#------------------------------------------------------------------------------ |
1377 | -# apt_get_install( packages ): Installs package(s) |
1378 | -#------------------------------------------------------------------------------ |
1379 | -def apt_get_install(packages=None): |
1380 | - if packages is None: |
1381 | - return(False) |
1382 | - cmd_line = ['apt-get', '-y', 'install', '-qq'] |
1383 | - if isinstance(packages, list): |
1384 | - cmd_line.extend(packages) |
1385 | - else: |
1386 | - cmd_line.append(packages) |
1387 | - return(subprocess.call(cmd_line)) |
1388 | - |
1389 | - |
1390 | -#------------------------------------------------------------------------------ |
1391 | -# pip_install( package ): Installs a python package |
1392 | -#------------------------------------------------------------------------------ |
1393 | -def pip_install(packages=None, upgrade=False): |
1394 | - cmd_line = ['pip', 'install'] |
1395 | - if packages is None: |
1396 | - return(False) |
1397 | - if upgrade: |
1398 | - cmd_line.append('-u') |
1399 | - if packages.startswith('svn+') or packages.startswith('git+') or \ |
1400 | - packages.startswith('hg+') or packages.startswith('bzr+'): |
1401 | - cmd_line.append('-e') |
1402 | - cmd_line.append(packages) |
1403 | - return run(cmd_line) |
1404 | - |
1405 | -#------------------------------------------------------------------------------ |
1406 | -# pip_install_req( path ): Installs a requirements file |
1407 | -#------------------------------------------------------------------------------ |
1408 | -def pip_install_req(path=None, upgrade=False): |
1409 | - cmd_line = ['pip', 'install'] |
1410 | - if path is None: |
1411 | - return(False) |
1412 | - if upgrade: |
1413 | - cmd_line.append('-u') |
1414 | - cmd_line.append('-r') |
1415 | - cmd_line.append(path) |
1416 | - cwd = os.path.dirname(path) |
1417 | - return run(cmd_line) |
1418 | - |
1419 | -#------------------------------------------------------------------------------ |
1420 | -# open_port: Convenience function to open a port in juju to |
1421 | -# expose a service |
1422 | -#------------------------------------------------------------------------------ |
1423 | -def open_port(port=None, protocol="TCP"): |
1424 | - if port is None: |
1425 | - return(None) |
1426 | - return(subprocess.call(['open-port', "%d/%s" % |
1427 | - (int(port), protocol)])) |
1428 | - |
1429 | - |
1430 | -#------------------------------------------------------------------------------ |
1431 | -# close_port: Convenience function to close a port in juju to |
1432 | -# unexpose a service |
1433 | -#------------------------------------------------------------------------------ |
1434 | -def close_port(port=None, protocol="TCP"): |
1435 | - if port is None: |
1436 | - return(None) |
1437 | - return(subprocess.call(['close-port', "%d/%s" % |
1438 | - (int(port), protocol)])) |
1439 | - |
1440 | - |
1441 | -#------------------------------------------------------------------------------ |
1442 | -# update_service_ports: Convenience function that evaluate the old and new |
1443 | -# service ports to decide which ports need to be |
1444 | -# opened and which to close |
1445 | -#------------------------------------------------------------------------------ |
1446 | -def update_service_port(old_service_port=None, new_service_port=None): |
1447 | - if old_service_port is None or new_service_port is None: |
1448 | - return(None) |
1449 | - if new_service_port != old_service_port: |
1450 | - close_port(old_service_port) |
1451 | - open_port(new_service_port) |
1452 | - |
1453 | -# |
1454 | -# Utils |
1455 | -# |
1456 | - |
1457 | -def install_or_append(contents, dest, owner="root", group="root", mode=0600): |
1458 | - if os.path.exists(dest): |
1459 | - uid = getpwnam(owner)[2] |
1460 | - gid = getgrnam(group)[2] |
1461 | - dest_fd = os.open(dest, os.O_APPEND, mode) |
1462 | - os.fchown(dest_fd, uid, gid) |
1463 | - with os.fdopen(dest_fd, 'a') as destfile: |
1464 | - destfile.write(str(contents)) |
1465 | - else: |
1466 | - install_file(contents, dest, owner, group, mode) |
1467 | - |
1468 | -def token_sql_safe(value): |
1469 | - # Only allow alphanumeric + underscore in database identifiers |
1470 | - if re.search('[^A-Za-z0-9_]', value): |
1471 | - return False |
1472 | - return True |
1473 | + |
1474 | |
1475 | def sanitize(s): |
1476 | s = s.replace(':', '_') |
1477 | @@ -317,175 +28,104 @@ |
1478 | s = s.replace("'", '_') |
1479 | return s |
1480 | |
1481 | -def user_name(relid, remote_unit, admin=False, schema=False): |
1482 | - components = [sanitize(relid), sanitize(remote_unit)] |
1483 | - if admin: |
1484 | - components.append("admin") |
1485 | - elif schema: |
1486 | - components.append("schema") |
1487 | - return "_".join(components) |
1488 | - |
1489 | -def get_relation_host(): |
1490 | - remote_host = run("relation-get ip") |
1491 | - if not remote_host: |
1492 | - # remote unit $JUJU_REMOTE_UNIT uses deprecated 'ip=' component of |
1493 | - # interface. |
1494 | - remote_host = run("relation-get private-address") |
1495 | - return remote_host |
1496 | - |
1497 | - |
1498 | -def get_unit_host(): |
1499 | - this_host = run("unit-get private-address") |
1500 | - return this_host.strip() |
1501 | + |
1502 | +def sanitized_service_name(): |
1503 | + return sanitize(hookenv.remote_unit().split('/')[0]) |
1504 | + |
1505 | + |
1506 | +def upstart_conf_path(name): |
1507 | + return '/etc/init/%s.conf' % name |
1508 | + |
1509 | |
1510 | def process_template(template_name, template_vars, destination): |
1511 | - # --- exported service configuration file |
1512 | + # deferred import so install hook can run to install jinja2 |
1513 | from jinja2 import Environment, FileSystemLoader |
1514 | - template_env = Environment( |
1515 | - loader=FileSystemLoader(os.path.join(os.environ['CHARM_DIR'], |
1516 | - 'templates'))) |
1517 | + path = os.path.join(os.environ['CHARM_DIR'], 'templates') |
1518 | + template_env = Environment(loader=FileSystemLoader(path)) |
1519 | |
1520 | template = \ |
1521 | template_env.get_template(template_name).render(template_vars) |
1522 | |
1523 | with open(destination, 'w') as inject_file: |
1524 | inject_file.write(str(template)) |
1525 | - |
1526 | -def append_template(template_name, template_vars, path, try_append=False): |
1527 | - |
1528 | - # --- exported service configuration file |
1529 | - from jinja2 import Environment, FileSystemLoader |
1530 | - template_env = Environment( |
1531 | - loader=FileSystemLoader(os.path.join(os.environ['CHARM_DIR'], |
1532 | - 'templates'))) |
1533 | - |
1534 | - template = \ |
1535 | - template_env.get_template(template_name).render(template_vars) |
1536 | - |
1537 | - append = False |
1538 | - if os.path.exists(path): |
1539 | - with open(path, 'r') as inject_file: |
1540 | - if not str(template) in inject_file: |
1541 | - append = True |
1542 | - else: |
1543 | - append = True |
1544 | - |
1545 | - if append == True: |
1546 | - with open(path, 'a') as inject_file: |
1547 | - inject_file.write(INJECTED_WARNING) |
1548 | - inject_file.write(str(template)) |
1549 | - |
1550 | + hookenv.log('written gunicorn upstart config to %s' % destination) |
1551 | |
1552 | |
1553 | ############################################################################### |
1554 | # Hook functions |
1555 | ############################################################################### |
1556 | + |
1557 | + |
1558 | +@hooks.hook('install', 'upgrade-charm') |
1559 | def install(): |
1560 | - |
1561 | - for retry in xrange(0,24): |
1562 | - if apt_get_install(CHARM_PACKAGES): |
1563 | - time.sleep(10) |
1564 | - else: |
1565 | - break |
1566 | - |
1567 | -def upgrade(): |
1568 | - |
1569 | - apt_get_update() |
1570 | - for retry in xrange(0,24): |
1571 | - if apt_get_install(CHARM_PACKAGES): |
1572 | - time.sleep(10) |
1573 | - else: |
1574 | - break |
1575 | - |
1576 | -def wsgi_file_relation_joined_changed(): |
1577 | - wsgi_config = config_data |
1578 | - relation_id = os.environ['JUJU_RELATION_ID'] |
1579 | - juju_log(MSG_INFO, "JUJU_RELATION_ID: %s".format(relation_id)) |
1580 | - |
1581 | - remote_unit_name = sanitize(os.environ['JUJU_REMOTE_UNIT'].split('/')[0]) |
1582 | - juju_log(MSG_INFO, "JUJU_REMOTE_UNIT: %s".format(remote_unit_name)) |
1583 | - wsgi_config['unit_name'] = remote_unit_name |
1584 | - |
1585 | - project_conf = '/etc/init/%s.conf' % remote_unit_name |
1586 | - |
1587 | - working_dir = relation_get('working_dir') |
1588 | + fetch.apt_update() |
1589 | + fetch.apt_install(CHARM_PACKAGES) |
1590 | + |
1591 | + |
1592 | +@hooks.hook( |
1593 | + "config-changed", |
1594 | + "wsgi-file-relation-joined", |
1595 | + "wsgi-file-relation-changed") |
1596 | +def configure_gunicorn(): |
1597 | + wsgi_config = hookenv.config() |
1598 | + |
1599 | + relations = hookenv.relations_of_type('wsgi-file') |
1600 | + if not relations: |
1601 | + hookenv.log("No wsgi-file relation, nothing to do") |
1602 | + return |
1603 | + |
1604 | + relation_data = relations[0] |
1605 | + |
1606 | + service_name = sanitized_service_name() |
1607 | + wsgi_config['unit_name'] = service_name |
1608 | + |
1609 | + project_conf = upstart_conf_path(service_name) |
1610 | + |
1611 | + working_dir = relation_data.get('working_dir', None) |
1612 | if not working_dir: |
1613 | return |
1614 | |
1615 | wsgi_config['working_dir'] = working_dir |
1616 | - wsgi_config['project_name'] = remote_unit_name |
1617 | + wsgi_config['project_name'] = service_name |
1618 | |
1619 | - for v in wsgi_config.keys(): |
1620 | - if v.startswith('wsgi_') or v in ['python_path', 'listen_ip', 'port']: |
1621 | - upstream_value = relation_get(v) |
1622 | - if upstream_value: |
1623 | - wsgi_config[v] = upstream_value |
1624 | + # any valid config item can be overridden by a relation item |
1625 | + for key, relation_value in relation_data.items(): |
1626 | + if key in wsgi_config: |
1627 | + wsgi_config[key] = relation_value |
1628 | |
1629 | if wsgi_config['wsgi_worker_class'] == 'eventlet': |
1630 | - apt_get_install('python-eventlet') |
1631 | - elif wsgi_config['wsgi_worker_class'] == 'gevent': |
1632 | - apt_get_install('python-gevent') |
1633 | + fetch.apt_install('python-eventlet') |
1634 | + elif wsgi_config['wsgi_worker_class'] == 'gevent': |
1635 | + fetch.apt_install('python-gevent') |
1636 | elif wsgi_config['wsgi_worker_class'] == 'tornado': |
1637 | - apt_get_install('python-tornado') |
1638 | + fetch.apt_install('python-tornado') |
1639 | |
1640 | if wsgi_config['wsgi_workers'] == 0: |
1641 | - res = run('python -c "import multiprocessing ; print(multiprocessing.cpu_count())"') |
1642 | - wsgi_config['wsgi_workers'] = int(res) + 1 |
1643 | - |
1644 | - if wsgi_config['wsgi_access_logfile']: |
1645 | - wsgi_config['wsgi_extra'] = " ".join([ |
1646 | - wsgi_config['wsgi_extra'], |
1647 | - '--access-logformat=%s' % wsgi_config['wsgi_access_logfile'], |
1648 | - '--access-logformat="%s"' % wsgi_config['wsgi_access_logformat'] |
1649 | - ]) |
1650 | - |
1651 | - wsgi_config['wsgi_wsgi_file'] = wsgi_config['wsgi_wsgi_file'] |
1652 | + wsgi_config['wsgi_workers'] = cpu_count() + 1 |
1653 | + |
1654 | + env_extra = wsgi_config.get('env_extra', '') |
1655 | + wsgi_config['env_extra'] = [ |
1656 | + v.split('=') for v in shlex.split(env_extra) if '=' in v |
1657 | + ] |
1658 | |
1659 | process_template('upstart.tmpl', wsgi_config, project_conf) |
1660 | |
1661 | - |
1662 | - # We need this because when the contained charm configuration or code changed |
1663 | - # Gunicorn needs to restart to run the new code. |
1664 | - run("service %s restart || service %s start" % (remote_unit_name, remote_unit_name)) |
1665 | - |
1666 | - |
1667 | + # We need this because when the contained charm configuration or code |
1668 | + # changed Gunicorn needs to restart to run the new code. |
1669 | + host.service_restart(service_name) |
1670 | + |
1671 | + |
1672 | +@hooks.hook("wsgi_file_relation_broken") |
1673 | def wsgi_file_relation_broken(): |
1674 | - remote_unit_name = sanitize(os.environ['JUJU_REMOTE_UNIT'].split('/')[0]) |
1675 | - |
1676 | - run('service %s stop' % remote_unit_name) |
1677 | - run('rm /etc/init/%s.conf' % remote_unit_name) |
1678 | - |
1679 | - |
1680 | -############################################################################### |
1681 | -# Global variables |
1682 | -############################################################################### |
1683 | -config_data = config_get() |
1684 | -juju_log(MSG_DEBUG, "got config: %s" % str(config_data)) |
1685 | - |
1686 | -unit_name = os.environ['JUJU_UNIT_NAME'].split('/')[0] |
1687 | - |
1688 | -hook_name = os.path.basename(sys.argv[0]) |
1689 | - |
1690 | -############################################################################### |
1691 | -# Main section |
1692 | -############################################################################### |
1693 | -def main(): |
1694 | - juju_log(MSG_INFO, "Running {} hook".format(hook_name)) |
1695 | - if hook_name == "install": |
1696 | - install() |
1697 | - |
1698 | - elif hook_name == "upgrade-charm": |
1699 | - upgrade() |
1700 | - |
1701 | - elif hook_name in ["wsgi-file-relation-joined", "wsgi-file-relation-changed"]: |
1702 | - wsgi_file_relation_joined_changed() |
1703 | - |
1704 | - elif hook_name == "wsgi-file-relation-broken": |
1705 | - wsgi_file_relation_broken() |
1706 | - |
1707 | - else: |
1708 | - print "Unknown hook {}".format(hook_name) |
1709 | - raise SystemExit(1) |
1710 | - |
1711 | -if __name__ == '__main__': |
1712 | - raise SystemExit(main()) |
1713 | + service_name = sanitized_service_name() |
1714 | + host.service_stop(service_name) |
1715 | + try: |
1716 | + os.remove(upstart_conf_path(service_name)) |
1717 | + except OSError as exc: |
1718 | + if exc.errno != 2: |
1719 | + raise |
1720 | + hookenv.log("removed gunicorn upstart config") |
1721 | + |
1722 | + |
1723 | +if __name__ == "__main__": |
1724 | + hooks.execute(sys.argv) |
1725 | |
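The `env_extra` handling added to hooks.py above leans on `shlex` to honour shell-style quoting. A standalone sketch of that parsing rule — `parse_env_extra` is an illustrative name, not part of the charm:

```python
import shlex

def parse_env_extra(env_extra):
    """Split a shell-quoted 'NAME=value' string into [name, value] pairs.

    Mirrors the list comprehension in the new hooks.py: tokens are
    produced by shlex (so quoted values may contain spaces), and any
    token without an '=' is silently dropped.
    """
    return [v.split('=') for v in shlex.split(env_extra) if '=' in v]

print(parse_env_extra('A=1 B="2" C="3 4" D= E'))
# [['A', '1'], ['B', '2'], ['C', '3 4'], ['D', '']]
```

Note that a bare token (`E` above) yields no pair, and `D=` yields an empty value — exactly the behaviour pinned down by `test_env_extra_parsing` in the new test suite.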
1726 | === added directory 'hooks/tests' |
1727 | === added file 'hooks/tests/test_hooks.py' |
1728 | --- hooks/tests/test_hooks.py 1970-01-01 00:00:00 +0000 |
1729 | +++ hooks/tests/test_hooks.py 2014-03-04 16:16:33 +0000 |
1730 | @@ -0,0 +1,157 @@ |
1731 | +from unittest import TestCase |
1732 | +from mock import patch |
1733 | + |
1734 | +import yaml |
1735 | + |
1736 | +import hooks |
1737 | + |
1738 | + |
1739 | +def load_config_defaults(): |
1740 | + with open("config.yaml") as conf: |
1741 | + config_schema = yaml.safe_load(conf) |
1742 | + defaults = {} |
1743 | + for key, value in config_schema['options'].items(): |
1744 | + defaults[key] = value['default'] |
1745 | + return defaults |
1746 | + |
1747 | +DEFAULTS = load_config_defaults() |
1748 | + |
1749 | + |
1750 | +class HookTestCase(TestCase): |
1751 | + maxDiff = None |
1752 | + |
1753 | + SERVICE_NAME = 'some_juju_service' |
1754 | + WORKING_DIR = '/some_path' |
1755 | + |
1756 | + _object = object() |
1757 | + mocks = {} |
1758 | + |
1759 | + def apply_patch(self, name, value=_object, return_value=_object): |
1760 | + if value is not self._object: |
1761 | + patcher = patch(name, value) |
1762 | + else: |
1763 | + patcher = patch(name) |
1764 | + |
1765 | + mock_obj = patcher.start() |
1766 | + self.addCleanup(patcher.stop) |
1767 | + |
1768 | + if value is self._object and return_value is not self._object: |
1769 | + mock_obj.return_value = return_value |
1770 | + |
1771 | + self.mocks[name] = mock_obj |
1772 | + return mock_obj |
1773 | + |
1774 | + def setUp(self): |
1775 | + super(HookTestCase, self).setUp() |
1776 | + # There's quite a bit of mocking here, due to the large amounts of |
1777 | + # environment context to juju hooks |
1778 | + |
1779 | + self.config = DEFAULTS.copy() |
1780 | + self.relation_data = {'working_dir': self.WORKING_DIR} |
1781 | + |
1782 | + # intercept all charmsupport usage |
1783 | + self.hookenv = self.apply_patch('hooks.hookenv') |
1784 | + self.fetch = self.apply_patch('hooks.fetch') |
1785 | + self.host = self.apply_patch('hooks.host') |
1786 | + |
1787 | + self.hookenv.config.return_value = self.config |
1788 | + self.hookenv.relations_of_type.return_value = [self.relation_data] |
1789 | + |
1790 | + # mocking utilities that touch the host/environment |
1791 | + self.process_template = self.apply_patch('hooks.process_template') |
1792 | + self.apply_patch( |
1793 | + 'hooks.sanitized_service_name', return_value=self.SERVICE_NAME) |
1794 | + self.apply_patch('hooks.cpu_count', return_value=1) |
1795 | + |
1796 | + def assert_wsgi_config_applied(self, expected): |
1797 | + tmpl, config, path = self.process_template.call_args[0] |
1798 | + self.assertEqual(tmpl, 'upstart.tmpl') |
1799 | + self.assertEqual(path, '/etc/init/%s.conf' % self.SERVICE_NAME) |
1800 | + self.assertEqual(config, expected) |
1801 | + self.host.service_restart.assert_called_once_with(self.SERVICE_NAME) |
1802 | + |
1803 | + def get_default_context(self): |
1804 | + expected = DEFAULTS.copy() |
1805 | + expected['unit_name'] = self.SERVICE_NAME |
1806 | + expected['working_dir'] = self.WORKING_DIR |
1807 | + expected['project_name'] = self.SERVICE_NAME |
1808 | + expected['wsgi_workers'] = 2 |
1809 | + expected['env_extra'] = [] |
1810 | + fmt = expected['wsgi_access_logformat'].replace('"', '\\"') |
1811 | + expected['wsgi_access_logformat'] = fmt |
1812 | + return expected |
1813 | + |
1814 | + def test_python_install_hook(self): |
1815 | + hooks.install() |
1816 | + self.assertTrue(self.fetch.apt_update.called) |
1817 | + self.fetch.apt_install.assert_called_once_with( |
1818 | + ['gunicorn', 'python-jinja2']) |
1819 | + |
1820 | + def test_default_configure_gunicorn(self): |
1821 | + hooks.configure_gunicorn() |
1822 | + expected = self.get_default_context() |
1823 | + self.assert_wsgi_config_applied(expected) |
1824 | + |
1825 | + def test_configure_gunicorn_no_working_dir(self): |
1826 | + del self.relation_data['working_dir'] |
1827 | + hooks.configure_gunicorn() |
1828 | + self.assertFalse(self.process_template.called) |
1829 | + self.assertFalse(self.host.service_restart.called) |
1830 | + |
1831 | + def test_configure_gunicorn_relation_data(self): |
1832 | + self.relation_data['port'] = 9999 |
1833 | + self.relation_data['wsgi_workers'] = 1 |
1834 | + self.relation_data['unknown'] = 'value' |
1835 | + |
1836 | + hooks.configure_gunicorn() |
1837 | + |
1838 | + self.assertFalse(self.fetch.apt_install.called) |
1839 | + |
1840 | + expected = self.get_default_context() |
1841 | + expected['wsgi_workers'] = 1 |
1842 | + expected['port'] = 9999 |
1843 | + |
1844 | + self.assert_wsgi_config_applied(expected) |
1845 | + |
1846 | + def test_env_extra_parsing(self): |
1847 | + self.relation_data['env_extra'] = 'A=1 B="2" C="3 4" D= E' |
1848 | + |
1849 | + hooks.configure_gunicorn() |
1850 | + |
1851 | + expected = self.get_default_context() |
1852 | + expected['env_extra'] = [ |
1853 | + ['A', '1'], |
1854 | + ['B', '2'], |
1855 | + ['C', '3 4'], |
1856 | + ['D', ''], |
1857 | + # no E |
1858 | + ] |
1859 | + |
1860 | + self.assert_wsgi_config_applied(expected) |
1861 | + |
1862 | + def do_worker_class(self, worker_class): |
1863 | + self.relation_data['wsgi_worker_class'] = worker_class |
1864 | + hooks.configure_gunicorn() |
1865 | + self.fetch.apt_install.assert_called_once_with( |
1866 | + 'python-%s' % worker_class) |
1867 | + expected = self.get_default_context() |
1868 | + expected['wsgi_worker_class'] = worker_class |
1869 | + self.assert_wsgi_config_applied(expected) |
1870 | + |
1871 | + def test_configure_worker_class_eventlet(self): |
1872 | + self.do_worker_class('eventlet') |
1873 | + |
1874 | + def test_configure_worker_class_tornado(self): |
1875 | + self.do_worker_class('tornado') |
1876 | + |
1877 | + def test_configure_worker_class_gevent(self): |
1878 | + self.do_worker_class('gevent') |
1879 | + |
1880 | + |
1881 | + |
1882 | + @patch('hooks.os.remove') |
1883 | + def test_wsgi_file_relation_broken(self, remove): |
1884 | + hooks.wsgi_file_relation_broken() |
1885 | + self.host.service_stop.assert_called_once_with(self.SERVICE_NAME) |
1886 | + remove.assert_called_once_with( |
1887 | + '/etc/init/%s.conf' % self.SERVICE_NAME) |
1888 | |
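The tests above patch `cpu_count` to return 1, which is why the default context expects `wsgi_workers == 2`. The underlying sizing rule from the new hooks.py can be sketched in isolation — `effective_workers` is an illustrative name, not charm code:

```python
from multiprocessing import cpu_count

def effective_workers(configured):
    # 0 means "auto-size": one worker per CPU core, plus one.
    # Any non-zero setting is used verbatim.
    if configured == 0:
        return cpu_count() + 1
    return configured

print(effective_workers(4))  # 4
print(effective_workers(0))  # cpu_count() + 1 on this machine
```

Importing `multiprocessing` directly is what replaces the old charm's shelling out to `python -c "import multiprocessing; ..."`.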
1889 | === added file 'hooks/tests/test_template.py' |
1890 | --- hooks/tests/test_template.py 1970-01-01 00:00:00 +0000 |
1891 | +++ hooks/tests/test_template.py 2014-03-04 16:16:33 +0000 |
1892 | @@ -0,0 +1,125 @@ |
1893 | +import os |
1894 | +from unittest import TestCase |
1895 | +from mock import patch, MagicMock |
1896 | + |
1897 | +import hooks |
1898 | + |
1899 | +EXPECTED = """ |
1900 | +#-------------------------------------------------------------- |
1901 | +# This file is managed by Juju; ANY CHANGES WILL BE OVERWRITTEN |
1902 | +#-------------------------------------------------------------- |
1903 | + |
1904 | +description "Gunicorn daemon for the PROJECT_NAME project" |
1905 | + |
1906 | +start on (local-filesystems and net-device-up IFACE=eth0) |
1907 | +stop on runlevel [!12345] |
1908 | + |
1909 | +# If the process quits unexpectadly trigger a respawn |
1910 | +respawn |
1911 | +respawn limit 10 5 |
1912 | + |
1913 | +setuid WSGI_USER |
1914 | +setgid WSGI_GROUP |
1915 | +chdir WORKING_DIR |
1916 | + |
1917 | +# This line can be removed and replace with the --pythonpath PYTHON_PATH \\ |
1918 | +# option with Gunicorn>1.17 |
1919 | +env PYTHONPATH=PYTHON_PATH |
1920 | +env A="1" |
1921 | +env B="1 2" |
1922 | + |
1923 | + |
1924 | +exec gunicorn \\ |
1925 | + --name=PROJECT_NAME \\ |
1926 | + --workers=WSGI_WORKERS \\ |
1927 | + --worker-class=WSGI_WORKER_CLASS \\ |
1928 | + --worker-connections=WSGI_WORKER_CONNECTIONS \\ |
1929 | + --max-requests=WSGI_MAX_REQUESTS \\ |
1930 | + --backlog=WSGI_BACKLOG \\ |
1931 | + --timeout=WSGI_TIMEOUT \\ |
1932 | + --keep-alive=WSGI_KEEP_ALIVE \\ |
1933 | + --umask=WSGI_UMASK \\ |
1934 | + --bind=LISTEN_IP:PORT \\ |
1935 | + --log-file=WSGI_LOG_FILE \\ |
1936 | + --log-level=WSGI_LOG_LEVEL \\ |
1937 | + --access-logfile=WSGI_ACCESS_LOGFILE \\ |
1938 | + --access-logformat=WSGI_ACCESS_LOGFORMAT \\ |
1939 | + WSGI_EXTRA \\ |
1940 | + WSGI_WSGI_FILE |
1941 | +""".strip() |
1942 | + |
1943 | + |
1944 | +class TemplateTestCase(TestCase): |
1945 | + maxDiff = None |
1946 | + |
1947 | + def setUp(self): |
1948 | + super(TemplateTestCase, self).setUp() |
1949 | + patch_open = patch('hooks.open', create=True) |
1950 | + self.open = patch_open.start() |
1951 | + self.addCleanup(patch_open.stop) |
1952 | + |
1953 | + self.open.return_value = MagicMock(spec=file) |
1954 | + self.file = self.open.return_value.__enter__.return_value |
1955 | + |
1956 | + patch_environ = patch.dict(os.environ, CHARM_DIR='.') |
1957 | + patch_environ.start() |
1958 | + self.addCleanup(patch_environ.stop) |
1959 | + |
1960 | + patch_hookenv = patch('hooks.hookenv') |
1961 | + patch_hookenv.start() |
1962 | + self.addCleanup(patch_hookenv.stop) |
1963 | + |
1964 | + def get_test_context(self): |
1965 | + keys = [ |
1966 | + 'project_name', |
1967 | + 'wsgi_user', |
1968 | + 'wsgi_group', |
1969 | + 'working_dir', |
1970 | + 'python_path', |
1971 | + 'wsgi_workers', |
1972 | + 'wsgi_worker_class', |
1973 | + 'wsgi_worker_connections', |
1974 | + 'wsgi_max_requests', |
1975 | + 'wsgi_backlog', |
1976 | + 'wsgi_timeout', |
1977 | + 'wsgi_keep_alive', |
1978 | + 'wsgi_umask', |
1979 | + 'wsgi_log_file', |
1980 | + 'wsgi_log_level', |
1981 | + 'wsgi_access_logfile', |
1982 | + 'wsgi_access_logformat', |
1983 | + 'listen_ip', |
1984 | + 'port', |
1985 | + 'wsgi_extra', |
1986 | + 'wsgi_wsgi_file', |
1987 | + ] |
1988 | + ctx = dict((k, k.upper()) for k in keys) |
1989 | + ctx['env_extra'] = [["A", "1"], ["B", "1 2"]] |
1990 | + return ctx |
1991 | + |
1992 | + def test_template(self): |
1993 | + |
1994 | + ctx = self.get_test_context() |
1995 | + |
1996 | + hooks.process_template('upstart.tmpl', ctx, 'path') |
1997 | + output = self.file.write.call_args[0][0] |
1998 | + |
1999 | + self.assertMultiLineEqual(EXPECTED, output) |
2000 | + |
2001 | + def test_no_access_logfile(self): |
2002 | + ctx = self.get_test_context() |
2003 | + ctx['wsgi_access_logfile'] = "" |
2004 | + |
2005 | + hooks.process_template('upstart.tmpl', ctx, 'path') |
2006 | + output = self.file.write.call_args[0][0] |
2007 | + |
2008 | + self.assertNotIn('--access-logfile', output) |
2009 | + |
2010 | + def test_no_access_logformat(self): |
2011 | + ctx = self.get_test_context() |
2012 | + ctx['wsgi_access_logformat'] = "" |
2013 | + |
2014 | + hooks.process_template('upstart.tmpl', ctx, 'path') |
2015 | + output = self.file.write.call_args[0][0] |
2016 | + |
2017 | + self.assertNotIn('--access-logformat', output) |
2018 | |
2019 | === modified file 'metadata.yaml' |
2020 | --- metadata.yaml 2013-06-04 18:36:25 +0000 |
2021 | +++ metadata.yaml 2014-03-04 16:16:33 +0000 |
2022 | @@ -7,6 +7,10 @@ |
2023 | pre-fork worker model ported from Ruby's Unicorn project. The Gunicorn server |
2024 | is broadly compatible with various web frameworks, simply implemented, light on |
2025 | server resources, and fairly speedy. |
2026 | + |
2027 | + The providers of the wsgi relation must provide working_dir, plus optionally |
2028 | + any of the charm's config directives, which will override the current config |
2029 | + for the charm. |
2030 | subordinate: true |
2031 | requires: |
2032 | wsgi-file: |
2033 | |
2034 | === modified file 'templates/upstart.tmpl' |
2035 | --- templates/upstart.tmpl 2013-05-31 15:41:27 +0000 |
2036 | +++ templates/upstart.tmpl 2014-03-04 16:16:33 +0000 |
2037 | @@ -2,13 +2,14 @@ |
2038 | # This file is managed by Juju; ANY CHANGES WILL BE OVERWRITTEN |
2039 | #-------------------------------------------------------------- |
2040 | |
2041 | -description "Gunicorn daemon for the {{ project_name }} project" |
2042 | +description "Gunicorn daemon for the {{ project_name }} project" |
2043 | |
2044 | start on (local-filesystems and net-device-up IFACE=eth0) |
2045 | stop on runlevel [!12345] |
2046 | |
2047 | # If the process quits unexpectadly trigger a respawn |
2048 | respawn |
2049 | +respawn limit 10 5 |
2050 | |
2051 | setuid {{ wsgi_user }} |
2052 | setgid {{ wsgi_group }} |
2053 | @@ -17,6 +18,9 @@ |
2054 | # This line can be removed and replace with the --pythonpath {{ python_path }} \ |
2055 | # option with Gunicorn>1.17 |
2056 | env PYTHONPATH={{ python_path }} |
2057 | +{% for name, value in env_extra -%} |
2058 | +env {{ name }}="{{ value }}" |
2059 | +{% endfor %} |
2060 | |
2061 | exec gunicorn \ |
2062 | --name={{ project_name }} \ |
2063 | @@ -31,4 +35,11 @@ |
2064 | --bind={{ listen_ip }}:{{ port }} \ |
2065 | --log-file={{ wsgi_log_file }} \ |
2066 | --log-level={{ wsgi_log_level }} \ |
2067 | - {{ wsgi_extra }} {{ wsgi_wsgi_file }} |
2068 | + {%- if wsgi_access_logfile %} |
2069 | + --access-logfile={{ wsgi_access_logfile }} \ |
2070 | + {%- endif %} |
2071 | + {%- if wsgi_access_logformat %} |
2072 | + --access-logformat={{ wsgi_access_logformat }} \ |
2073 | + {%- endif %} |
2074 | + {{ wsgi_extra }} \ |
2075 | + {{ wsgi_wsgi_file }} |
2076 | |
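The `env_extra` loop added to upstart.tmpl above is plain Jinja2; rendering just that fragment outside the charm's `process_template` helper (a minimal sketch, assuming jinja2 is installed, as the new test_requirments.txt requires) shows what ends up in the upstart file:

```python
from jinja2 import Template

# Only the new env_extra loop from upstart.tmpl, rendered in isolation.
# The '-%}' on the for tag strips the newline after it, so each pair
# lands on its own 'env NAME="value"' line.
fragment = (
    '{% for name, value in env_extra -%}\n'
    'env {{ name }}="{{ value }}"\n'
    '{% endfor %}'
)
output = Template(fragment).render(env_extra=[['A', '1'], ['B', '1 2']])
print(output)
# env A="1"
# env B="1 2"
```

This matches the `env A="1"` / `env B="1 2"` lines asserted in the EXPECTED text of test_template.py.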
2077 | === added file 'test_requirments.txt' |
2078 | --- test_requirments.txt 1970-01-01 00:00:00 +0000 |
2079 | +++ test_requirments.txt 2014-03-04 16:16:33 +0000 |
2080 | @@ -0,0 +1,6 @@ |
2081 | +mock |
2082 | +nose |
2083 | +pyaml |
2084 | +python-apt |
2085 | +jinja2 |
2086 | + |
+1. I'm now using this version of the charm for deploying a new service (as I needed the env_extra option).
Most of the green diff below is just the result of syncing a recent charm-helpers.
Great to see some tests, and great to trim down the hooks.py by using charmhelpers instead.
Thanks!