Merge lp:~sidnei/charms/precise/apache2/trunk into lp:charms/apache2
Proposed by
Sidnei da Silva
Status: | Merged |
---|---|
Merged at revision: | 46 |
Proposed branch: | lp:~sidnei/charms/precise/apache2/trunk |
Merge into: | lp:charms/apache2 |
Diff against target: |
3641 lines (+2804/-374) 31 files modified
.bzrignore (+6/-0)
Makefile (+39/-0)
README.md (+187/-78)
charm-helpers.yaml (+4/-0)
cm.py (+193/-0)
config-manager.txt (+6/-0)
config.yaml (+10/-0)
data/balancer.template (+1/-0)
data/logrotate.conf.template (+8/-0)
data/nrpe_service.template (+0/-11)
data/syslog-apache.conf (+13/-0)
data/syslog-rsyslog.conf (+26/-0)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+218/-0)
hooks/charmhelpers/contrib/charmsupport/volumes.py (+156/-0)
hooks/charmhelpers/core/hookenv.py (+340/-0)
hooks/charmhelpers/core/host.py (+239/-0)
hooks/charmhelpers/fetch/__init__.py (+209/-0)
hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
hooks/charmhelpers/fetch/bzrurl.py (+44/-0)
hooks/hooks.py (+221/-284)
hooks/install (+9/-0)
hooks/tests/fixtures/bar.balancer (+7/-0)
hooks/tests/fixtures/foo.balancer (+6/-0)
hooks/tests/fixtures/nrpe_check_config (+2/-0)
hooks/tests/fixtures/nrpe_service_config (+11/-0)
hooks/tests/test_balancer_hook.py (+652/-0)
hooks/tests/test_nrpe_hooks.py (+134/-0)
metadata.yaml (+5/-0)
revision (+0/-1)
setup.cfg (+4/-0)
tarmac_tests.sh (+6/-0) |
To merge this branch: | bzr merge lp:~sidnei/charms/precise/apache2/trunk |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Marco Ceppi (community) | Approve | ||
Review via email: mp+190504@code.launchpad.net |
Commit message
Description of the change
Greatly improved test coverage. Support for 'all_services' set from haproxy relation. Improved documentation.
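As a rough sketch of the `all_services` support this branch adds: services advertised by the related charm (e.g. haproxy) become `<juju_service_name>_<sub_service_name>` template variables, with unsupported characters stripped, per the updated README. The `template_variables` helper below is hypothetical, not the charm's actual hook code, and the relation data is shown already parsed rather than as the YAML string set on the relation:

```python
import re

# Hypothetical helper illustrating the naming rule from the README.
# In the charm, the 'all_services' list arrives as YAML on the relation;
# here it is already parsed into a list of dicts.
def template_variables(juju_service_name, services):
    """Build <juju_service_name>_<sub_service_name> template variables,
    stripping characters that are not valid in a jinja2 identifier."""
    variables = {}
    prefix = re.sub(r"[^A-Za-z0-9_]", "", juju_service_name)
    for entry in services:
        name = re.sub(r"[^A-Za-z0-9_]", "", entry["service_name"])
        variables["%s_%s" % (prefix, name)] = entry["service_port"]
    return variables

services = [
    {"service_name": "gunicorn", "service_port": 80},
    {"service_name": "solr", "service_port": 8080},
    {"service_name": "my-webapp", "service_port": 9090},
]
# 'my-webapp' loses its hyphen, matching the README's {{ haproxy_mywebapp }}.
print(template_variables("haproxy", services))
```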
Preview Diff
1 | === modified file '.bzrignore' |
2 | --- .bzrignore 2013-01-24 08:48:41 +0000 |
3 | +++ .bzrignore 2013-10-10 22:50:52 +0000 |
4 | @@ -1,4 +1,10 @@ |
5 | revision |
6 | basenode/ |
7 | +_trial_temp |
8 | +.coverage |
9 | +coverage.xml |
10 | +exec.d/ |
11 | *.crt |
12 | *.key |
13 | +lib/* |
14 | +build/charm-helpers |
15 | |
16 | === added file 'Makefile' |
17 | --- Makefile 1970-01-01 00:00:00 +0000 |
18 | +++ Makefile 2013-10-10 22:50:52 +0000 |
19 | @@ -0,0 +1,39 @@ |
20 | +PWD := $(shell pwd) |
21 | +SOURCEDEPS_DIR ?= $(shell dirname $(PWD))/.sourcecode |
22 | +HOOKS_DIR := $(PWD)/hooks |
23 | +TEST_PREFIX := PYTHONPATH=$(HOOKS_DIR) |
24 | +TEST_DIR := $(PWD)/hooks/tests |
25 | +CHARM_DIR := $(PWD) |
26 | +PYTHON := /usr/bin/env python |
27 | + |
28 | + |
29 | +build: test lint proof |
30 | + |
31 | +revision: |
32 | + @test -f revision || echo 0 > revision |
33 | + |
34 | +proof: revision |
35 | + @echo Proofing charm... |
36 | + @(charm proof $(PWD) || [ $$? -eq 100 ]) && echo OK |
37 | + @test `cat revision` = 0 && rm revision |
38 | + |
39 | +test: |
40 | + @echo Starting tests... |
41 | + @CHARM_DIR=$(CHARM_DIR) $(TEST_PREFIX) nosetests $(TEST_DIR) |
42 | + |
43 | +lint: |
44 | + @echo Checking for Python syntax... |
45 | + @flake8 $(HOOKS_DIR) --ignore=E123 --exclude=$(HOOKS_DIR)/charmhelpers && echo OK |
46 | + |
47 | +sourcedeps: $(PWD)/config-manager.txt |
48 | + @echo Updating source dependencies... |
49 | + @$(PYTHON) cm.py -c $(PWD)/config-manager.txt \ |
50 | + -p $(SOURCEDEPS_DIR) \ |
51 | + -t $(PWD) |
52 | + @$(PYTHON) build/charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \ |
53 | + -c charm-helpers.yaml \ |
54 | + -b build/charm-helpers \ |
55 | + -d hooks/charmhelpers |
56 | + @echo Do not forget to commit the updated files if any. |
57 | + |
58 | +.PHONY: revision proof test lint sourcedeps charm-payload |
59 | |
60 | === modified file 'README.md' |
61 | --- README.md 2013-02-01 23:11:54 +0000 |
62 | +++ README.md 2013-10-10 22:50:52 +0000 |
63 | @@ -1,42 +1,88 @@ |
64 | -Juju charm apache |
65 | -================= |
66 | - |
67 | -The Apache Software Foundation's goal is to build a secure, efficient |
68 | -and extensible HTTP server as standards-compliant open source |
69 | -software. The result has long been the number one web server on the |
70 | -Internet. It features support for HTTPS, virtual hosting, CGI, SSI, |
71 | -IPv6, easy scripting and database integration, request/response |
72 | -filtering, many flexible authentication schemes, and more. |
73 | - |
74 | -How to deploy the charm |
75 | ------------------------ |
76 | - juju deploy apache2 |
77 | - juju set apache2 "vhost_http_template=$(base64 < vhost.tmpl)" |
78 | +# Juju charm for Apache |
79 | + |
80 | +The Apache Software Foundation's goal is to build a secure, |
81 | +efficient and extensible HTTP server as standards-compliant open |
82 | +source software. The result has long been the number one web server |
83 | +on the Internet. It features support for HTTPS, virtual hosting, |
84 | +CGI, SSI, IPv6, easy scripting and database integration, |
85 | +request/response filtering, many flexible authentication schemes, |
86 | +and more. |
87 | + |
88 | +## Development |
89 | + |
90 | + |
91 | +The following steps are needed for testing and development of the |
92 | +charm, but **not** for deployment: |
93 | + |
94 | + sudo apt-get install python-software-properties |
95 | + sudo add-apt-repository ppa:chrisjohnston/flake8 |
96 | + sudo apt-get update |
97 | + sudo apt-get install flake8 python-nose python-coverage python-testtools |
98 | + |
99 | +To fetch additional source dependencies and run the tests: |
100 | + |
101 | + make build |
102 | + |
103 | +... will run the unit tests, run flake8 over the source to warn |
104 | +about formatting issues and output a code coverage summary of the |
105 | +'hooks.py' module. |
106 | + |
107 | +## How to deploy the charm |
108 | + |
109 | +This assumes you have a copy of the charm in a `charms/$distrocodename/apache2` |
110 | +directory relative to your current directory. |
111 | + |
112 | +To perform a deployment, execute the following steps: |
113 | + |
114 | + juju deploy --repository=charms local:apache2 |
115 | + juju set apache2 "vhost_http_template=$(base64 < http_vhost.tmpl)" |
116 | + |
117 | + |
118 | # and / or |
119 | - juju set apache2 "vhost_https_template=$(base64 < vhost.tmpl)" |
120 | + juju set apache2 "vhost_https_template=$(base64 < https_vhost.tmpl)" |
121 | + |
122 | +If you want a simple `reverseproxy` relation to your services (only |
123 | +really useful if you have a single unit on the other side of the |
124 | +relation): |
125 | + |
126 | juju add-relation apache2:reverseproxy haproxy:website |
127 | - |
128 | -Vhost templates |
129 | ---------------- |
130 | -The charm expects a jinja2 template to be passed in. The variables in |
131 | -the template should relate to the services that apache will be proxying |
132 | --- obviously no variables need to be specified if no proxying is needed. |
133 | - |
134 | -The charm will create the service variable, with the unit_name, when |
135 | -the reverseproxy relationship is joined and present this to the template |
136 | -at which point the vhost will be generated from the template again. |
137 | -All config settings are also available to the template. |
138 | - |
139 | -For example to access squid then the {{ squid }} variable should be used. |
140 | -This will be populated with the hostname:port of the squid service. The |
141 | -individual hostname and port can also be accessed via squid_hostname and |
142 | -squid_port. |
143 | - |
144 | -Note: The service name should be used, not the charm name. If deploying |
145 | - a charm with a different service name, use that instaed. |
146 | - |
147 | -The joining charm may also set an all_services variable which contains |
148 | -a list of services it provides in yaml format (list of associative arrays): |
149 | + # and / or |
150 | + juju add-relation apache2:reverseproxy squid-reverseproxy:cached-website |
151 | + |
152 | +Alternatively, you can use the `balancer` relation so that requests |
153 | +are load balanced across multiple units of your services. For more information see the section on `Using the balancer relation`: |
154 | + |
155 | + juju add-relation apache2:balancer haproxy:website |
156 | + # and / or |
157 | + juju add-relation apache2:balancer squid-reverseproxy:cached-website |
158 | + |
159 | +## VirtualHost templates |
160 | + |
161 | +The charm expects a jinja2 template to be passed in. The variables |
162 | +in the template should relate to the services that apache will be |
163 | +proxying -- obviously no variables need to be specified if no |
164 | +proxying is needed. |
165 | + |
166 | +### Using the reverseproxy relation |
167 | + |
168 | +The charm will create the service variable, with the `unit_name`, |
169 | +when the `reverseproxy` relationship is joined and present this to |
170 | +the template at which point the vhost will be generated from the |
171 | +template again. All config settings are also available to the |
172 | +template. |
173 | + |
174 | +For example to access squid then the `{{ squid }}` variable should |
175 | +be used. This will be populated with the hostname:port of the squid |
176 | +service. The individual hostname and port can also be accessed via |
177 | +`squid_hostname` and `squid_port`. |
178 | + |
179 | +Note: The service name should be used, not the charm name. If |
180 | + deploying a charm with a different service name, use that |
181 | + instead. |
182 | + |
183 | +The joining charm may set an `all_services` variable which |
184 | +contains a list of services it provides in yaml format (list of |
185 | +associative arrays): |
186 | |
187 | # ... in haproxy charm, website-relation-joined |
188 | relation-set all_services=" |
189 | @@ -45,24 +91,24 @@ |
190 | - {service_name: my-webapp, service_port: 9090} |
191 | " |
192 | |
193 | -then variables for each service would be available to the jinja2 template |
194 | -in <juju_service_name>_<sub_service_name>. In our example above |
195 | -haproxy contains stanzas named gunicorn, solr and my-webapp. These are |
196 | -accessed as {{ haproxy_gunicorn }}, {{ haproxy_solr }} and |
197 | -{{ haproxy_mywebapp }} respectively. If any unsupported characters are in |
198 | -your juju service name or the service names exposed through "all_services", |
199 | -they will be stripped. |
200 | +then variables for each service would be available to the jinja2 |
201 | +template in `<juju_service_name>_<sub_service_name>`. In our example |
202 | +above haproxy contains stanzas named gunicorn, solr and my-webapp. |
203 | +These are accessed as `{{ haproxy_gunicorn }}`, `{{ haproxy_solr }}` and |
204 | +`{{ haproxy_mywebapp }}` respectively. If any unsupported characters |
205 | +are in your juju service name or the service names exposed through |
206 | +"all_services", they will be stripped. |
207 | |
208 | For example a vhost that will pass all traffic on to an haproxy instance: |
209 | |
210 | <VirtualHost *:80> |
211 | - ServerName radiotiptop.org.uk |
212 | - |
213 | - CustomLog /var/log/apache2/radiotiptop-access.log combined |
214 | - ErrorLog /var/log/apache2/radiotiptop-error.log |
215 | - |
216 | + ServerName radiotiptop.org.uk |
217 | + |
218 | + CustomLog /var/log/apache2/radiotiptop-access.log combined |
219 | + ErrorLog /var/log/apache2/radiotiptop-error.log |
220 | + |
221 | DocumentRoot /srv/radiotiptop/www/root |
222 | - |
223 | + |
224 | ProxyRequests off |
225 | <Proxy *> |
226 | Order Allow,Deny |
227 | @@ -72,48 +118,111 @@ |
228 | ErrorDocument 502 /offline.html |
229 | ErrorDocument 503 /offline.html |
230 | </Proxy> |
231 | - |
232 | + |
233 | ProxyPreserveHost off |
234 | ProxyPassReverse / http://{{ haproxy_gunicorn }}/ |
235 | - |
236 | + |
237 | RewriteEngine on |
238 | - |
239 | + |
240 | RewriteRule ^/$ /index.html [L] |
241 | RewriteRule ^/(.*)$ http://{{ haproxy_gunicorn }}/$1 [P,L] |
242 | </VirtualHost> |
243 | |
244 | -Certs, keys and chains |
245 | ----------------------- |
246 | -ssl_keylocation, ssl_certlocation and ssl_chainlocation are file names in the |
247 | -charm /data directory. If found, they will be copied as follows: |
248 | +### Using the `balancer` relation |
249 | + |
250 | +Using the balancer relation will set up named balancers using |
251 | +Apache's mod_balancer. Each balancer will be named after the |
252 | +`sitenames` or `all_services` setting exported from the other side |
253 | +of the relation. Requests sent through those balancers will have a |
254 | +`X-Balancer-Name` header set, which can be used by the related |
255 | +service to appropriately route requests internally. |
256 | + |
257 | +The joining charm may set an `all_services` variable which |
258 | +contains a list of services it provides in yaml format (list of |
259 | +associative arrays): |
260 | + |
261 | + # ... in haproxy charm, website-relation-joined |
262 | + relation-set all_services=" |
263 | + - {service_name: gunicorn, service_port: 80} |
264 | + - {service_name: solr, service_port: 8080} |
265 | + - {service_name: my-webapp, service_port: 9090} |
266 | + " |
267 | + |
268 | +Each separate service name will cause a new `balancer` definition to be created on the Apache side, like: |
269 | + |
270 | + <Proxy balancer://gunicorn> |
271 | + ProxySet lbmethod=byrequests |
272 | + RequestHeader set X-Balancer-Name "gunicorn" |
273 | + </Proxy> |
274 | + |
275 | +For example a vhost that will pass specific requests to the `gunicorn` service that's defined in haproxy: |
276 | + |
277 | + <VirtualHost *:80> |
278 | + ServerName radiotiptop.org.uk |
279 | + |
280 | + CustomLog /var/log/apache2/radiotiptop-access.log combined |
281 | + ErrorLog /var/log/apache2/radiotiptop-error.log |
282 | + |
283 | + DocumentRoot /srv/radiotiptop/www/root |
284 | + |
285 | + ProxyRequests off |
286 | + <Proxy *> |
287 | + Order Allow,Deny |
288 | + Allow from All |
289 | + ErrorDocument 403 /offline.html |
290 | + ErrorDocument 500 /offline.html |
291 | + ErrorDocument 502 /offline.html |
292 | + ErrorDocument 503 /offline.html |
293 | + </Proxy> |
294 | + |
295 | + ProxyPreserveHost on |
296 | + |
297 | + RewriteEngine on |
298 | + |
299 | + RewriteRule ^/$ /index.html [L] |
300 | + RewriteRule ^/(.*)$ balancer://gunicorn/$1 [P,L] |
301 | + </VirtualHost> |
302 | + |
303 | + |
304 | +## Certs, keys and chains |
305 | + |
306 | +`ssl_keylocation`, `ssl_certlocation` and `ssl_chainlocation` are |
307 | +file names in the charm `/data` directory. If found, they will be |
308 | +copied as follows: |
309 | |
310 | - /etc/ssl/private/<ssl_keylocation> |
311 | - /etc/ssl/certs/<ssl_certlocation> |
312 | - /etc/ssl/certs/<ssl_chainlocation> |
313 | |
314 | -ssl_key and ssl_cert can also be specified which are are assumed to be |
315 | -base64 encoded. If specified, they will be written to appropriate directories |
316 | -given the values in ssl_keylocation and ssl_certlocation as listed above. |
317 | - |
318 | -ssl_cert may also be set to SELFSIGNED, which will generate a certificate. |
319 | -This, of course, is mostly useful for testing and staging purposes. The |
320 | -generated certifcate/key will be placed according to ssl_certlocation and |
321 | -ssl_keylocation as listed above. |
322 | - |
323 | -{enable,disable}_modules |
324 | ------------------------- |
325 | +`ssl_key` and `ssl_cert` can also be specified; they are assumed |
326 | +to be base64 encoded. If specified, they will be written to |
327 | +appropriate directories given the values in ssl_keylocation and |
328 | +ssl_certlocation as listed above. |
329 | + |
330 | +`ssl_cert` may also be set to SELFSIGNED, which will generate a |
331 | +certificate. This, of course, is mostly useful for testing and |
332 | +staging purposes. The generated certificate/key will be placed |
333 | +according to `ssl_certlocation` and `ssl_keylocation` as listed |
334 | +above. |
335 | + |
336 | +## `{enable,disable}_modules` |
337 | + |
338 | Space separated list of modules to be enabled or disabled. If a module to |
339 | be enabled cannot be found then the charm will attempt to install it. |
340 | |
341 | -TODO: |
342 | ------ |
343 | +## TODO: |
344 | |
345 | * Document the use of balancer, nrpe, logging and website-cache |
346 | - * Method to deliver site content. This maybe by converting the charm to a |
347 | - subordinate and making it the master charms problem |
348 | - * Implement secure method for delivering key. Juju will likely need to provide |
349 | - this. |
350 | - * Tuning. No tuning options are present. Convert apache2.conf to a template |
351 | - and expose config options |
352 | - * Testing. I have only tested the relationship setup with 1 haproxy instance. |
353 | - Needs testing against multiple instances |
354 | + |
355 | + * Method to deliver site content. This may be done by converting the |
356 | + charm to a subordinate and making it the master charm's problem |
357 | + |
358 | + * Implement secure method for delivering key. Juju will likely |
359 | + need to provide this. |
360 | + |
361 | + * Tuning. No tuning options are present. Convert apache2.conf to a |
362 | + template and expose config options |
363 | + |
364 | + * The all_services variable can be passed as part of the http interface and is |
365 | + optional. However, it's somewhat hidden, and it would be more obvious if a |
366 | + separate interface was used, like http-allservices. |
367 | |
368 | === added directory 'build' |
369 | === added file 'charm-helpers.yaml' |
370 | --- charm-helpers.yaml 1970-01-01 00:00:00 +0000 |
371 | +++ charm-helpers.yaml 2013-10-10 22:50:52 +0000 |
372 | @@ -0,0 +1,4 @@ |
373 | +include: |
374 | + - core |
375 | + - fetch |
376 | + - contrib.charmsupport |
377 | \ No newline at end of file |
378 | |
379 | === added file 'cm.py' |
380 | --- cm.py 1970-01-01 00:00:00 +0000 |
381 | +++ cm.py 2013-10-10 22:50:52 +0000 |
382 | @@ -0,0 +1,193 @@ |
383 | +# Copyright 2010-2013 Canonical Ltd. All rights reserved. |
384 | +import os |
385 | +import re |
386 | +import sys |
387 | +import errno |
388 | +import hashlib |
389 | +import subprocess |
390 | +import optparse |
391 | + |
392 | +from os import curdir |
393 | +from bzrlib.branch import Branch |
394 | +from bzrlib.plugin import load_plugins |
395 | +load_plugins() |
396 | +from bzrlib.plugins.launchpad import account as lp_account |
397 | + |
398 | +if 'GlobalConfig' in dir(lp_account): |
399 | + from bzrlib.config import LocationConfig as LocationConfiguration |
400 | + _ = LocationConfiguration |
401 | +else: |
402 | + from bzrlib.config import LocationStack as LocationConfiguration |
403 | + _ = LocationConfiguration |
404 | + |
405 | + |
406 | +def get_branch_config(config_file): |
407 | + """ |
408 | + Retrieves the sourcedeps configuration for a source dir. |
409 | + Returns a dict of (branch, revspec) tuples, keyed by branch name. |
410 | + """ |
411 | + branches = {} |
412 | + with open(config_file, 'r') as stream: |
413 | + for line in stream: |
414 | + line = line.split('#')[0].strip() |
415 | + bzr_match = re.match(r'(\S+)\s+' |
416 | + 'lp:([^;]+)' |
417 | + '(?:;revno=(\d+))?', line) |
418 | + if bzr_match: |
419 | + name, branch, revno = bzr_match.group(1, 2, 3) |
420 | + if revno is None: |
421 | + revspec = -1 |
422 | + else: |
423 | + revspec = revno |
424 | + branches[name] = (branch, revspec) |
425 | + continue |
426 | + dir_match = re.match(r'(\S+)\s+' |
427 | + '\(directory\)', line) |
428 | + if dir_match: |
429 | + name = dir_match.group(1) |
430 | + branches[name] = None |
431 | + return branches |
432 | + |
433 | + |
434 | +def main(config_file, parent_dir, target_dir, verbose): |
435 | + """Do the deed.""" |
436 | + |
437 | + try: |
438 | + os.makedirs(parent_dir) |
439 | + except OSError, e: |
440 | + if e.errno != errno.EEXIST: |
441 | + raise |
442 | + |
443 | + branches = sorted(get_branch_config(config_file).items()) |
444 | + for branch_name, spec in branches: |
445 | + if spec is None: |
446 | + # It's a directory, just create it and move on. |
447 | + destination_path = os.path.join(target_dir, branch_name) |
448 | + if not os.path.isdir(destination_path): |
449 | + os.makedirs(destination_path) |
450 | + continue |
451 | + |
452 | + (quoted_branch_spec, revspec) = spec |
453 | + revno = int(revspec) |
454 | + |
455 | + # qualify mirror branch name with hash of remote repo path to deal |
456 | + # with changes to the remote branch URL over time |
457 | + branch_spec_digest = hashlib.sha1(quoted_branch_spec).hexdigest() |
458 | + branch_directory = branch_spec_digest |
459 | + |
460 | + source_path = os.path.join(parent_dir, branch_directory) |
461 | + destination_path = os.path.join(target_dir, branch_name) |
462 | + |
463 | + # Remove leftover symlinks/stray files. |
464 | + try: |
465 | + os.remove(destination_path) |
466 | + except OSError, e: |
467 | + if e.errno != errno.EISDIR and e.errno != errno.ENOENT: |
468 | + raise |
469 | + |
470 | + lp_url = "lp:" + quoted_branch_spec |
471 | + |
472 | + # Create the local mirror branch if it doesn't already exist |
473 | + if verbose: |
474 | + sys.stderr.write('%30s: ' % (branch_name,)) |
475 | + sys.stderr.flush() |
476 | + |
477 | + fresh = False |
478 | + if not os.path.exists(source_path): |
479 | + subprocess.check_call(['bzr', 'branch', '-q', '--no-tree', |
480 | + '--', lp_url, source_path]) |
481 | + fresh = True |
482 | + |
483 | + if not fresh: |
484 | + source_branch = Branch.open(source_path) |
485 | + if revno == -1: |
486 | + orig_branch = Branch.open(lp_url) |
487 | + fresh = source_branch.revno() == orig_branch.revno() |
488 | + else: |
489 | + fresh = source_branch.revno() == revno |
490 | + |
491 | + # Freshen the source branch if required. |
492 | + if not fresh: |
493 | + subprocess.check_call(['bzr', 'pull', '-q', '--overwrite', '-r', |
494 | + str(revno), '-d', source_path, |
495 | + '--', lp_url]) |
496 | + |
497 | + if os.path.exists(destination_path): |
498 | + # Overwrite the destination with the appropriate revision. |
499 | + subprocess.check_call(['bzr', 'clean-tree', '--force', '-q', |
500 | + '--ignored', '-d', destination_path]) |
501 | + subprocess.check_call(['bzr', 'pull', '-q', '--overwrite', |
502 | + '-r', str(revno), |
503 | + '-d', destination_path, '--', source_path]) |
504 | + else: |
505 | + # Create a new branch. |
506 | + subprocess.check_call(['bzr', 'branch', '-q', '--hardlink', |
507 | + '-r', str(revno), |
508 | + '--', source_path, destination_path]) |
509 | + |
510 | + # Check the state of the destination branch. |
511 | + destination_branch = Branch.open(destination_path) |
512 | + destination_revno = destination_branch.revno() |
513 | + |
514 | + if verbose: |
515 | + sys.stderr.write('checked out %4s of %s\n' % |
516 | + ("r" + str(destination_revno), lp_url)) |
517 | + sys.stderr.flush() |
518 | + |
519 | + if revno != -1 and destination_revno != revno: |
520 | + raise RuntimeError("Expected revno %d but got revno %d" % |
521 | + (revno, destination_revno)) |
522 | + |
523 | +if __name__ == '__main__': |
524 | + parser = optparse.OptionParser( |
525 | + usage="%prog [options]", |
526 | + description=( |
527 | + "Add a lightweight checkout in <target> for each " |
528 | + "corresponding file in <parent>."), |
529 | + add_help_option=False) |
530 | + parser.add_option( |
531 | + '-p', '--parent', dest='parent', |
532 | + default=None, |
533 | + help=("The directory of the parent tree."), |
534 | + metavar="DIR") |
535 | + parser.add_option( |
536 | + '-t', '--target', dest='target', default=curdir, |
537 | + help=("The directory of the target tree."), |
538 | + metavar="DIR") |
539 | + parser.add_option( |
540 | + '-c', '--config', dest='config', default=None, |
541 | + help=("The config file to be used for config-manager."), |
542 | + metavar="DIR") |
543 | + parser.add_option( |
544 | + '-q', '--quiet', dest='verbose', action='store_false', |
545 | + help="Be less verbose.") |
546 | + parser.add_option( |
547 | + '-v', '--verbose', dest='verbose', action='store_true', |
548 | + help="Be more verbose.") |
549 | + parser.add_option( |
550 | + '-h', '--help', action='help', |
551 | + help="Show this help message and exit.") |
552 | + parser.set_defaults(verbose=True) |
553 | + |
554 | + options, args = parser.parse_args() |
555 | + |
556 | + if options.parent is None: |
557 | + options.parent = os.environ.get( |
558 | + "SOURCEDEPS_DIR", |
559 | + os.path.join(curdir, ".sourcecode")) |
560 | + |
561 | + if options.target is None: |
562 | + parser.error( |
563 | + "Target directory not specified.") |
564 | + |
565 | + if options.config is None: |
566 | + config = [arg for arg in args |
567 | + if arg != "update"] |
568 | + if not config or len(config) > 1: |
569 | + parser.error("Config not specified") |
570 | + options.config = config[0] |
571 | + |
572 | + sys.exit(main(config_file=options.config, |
573 | + parent_dir=options.parent, |
574 | + target_dir=options.target, |
575 | + verbose=options.verbose)) |
576 | |
577 | === added file 'config-manager.txt' |
578 | --- config-manager.txt 1970-01-01 00:00:00 +0000 |
579 | +++ config-manager.txt 2013-10-10 22:50:52 +0000 |
580 | @@ -0,0 +1,6 @@ |
581 | +# After making changes to this file, to ensure that your sourcedeps are |
582 | +# up-to-date do: |
583 | +# |
584 | +# make sourcedeps |
585 | + |
586 | +./build/charm-helpers lp:charm-helpers;revno=70 |
587 | |
588 | === modified file 'config.yaml' |
589 | --- config.yaml 2013-03-18 07:31:45 +0000 |
590 | +++ config.yaml 2013-10-10 22:50:52 +0000 |
591 | @@ -121,6 +121,12 @@ |
592 | description: > |
593 | Use daily extension like YYYMMDD instead of simply adding a number |
594 | default: True |
595 | + package_status: |
596 | + default: "install" |
597 | + type: "string" |
598 | + description: > |
599 | + The status of service-affecting packages will be set to this value in the dpkg database. |
600 | + Useful valid values are "install" and "hold". |
601 | use_rsyslog: |
602 | type: boolean |
603 | description: >- |
604 | @@ -157,3 +163,7 @@ |
605 | type: string |
606 | description: Security setting. Set to one of On Off extended |
607 | default: "On" |
608 | + extra_packages: |
609 | + type: string |
610 | + description: List of extra packages to be installed (e.g. commercial GeoIP package) |
611 | + default: "" |
612 | |
613 | === modified file 'data/balancer.template' |
614 | --- data/balancer.template 2012-10-16 12:26:37 +0000 |
615 | +++ data/balancer.template 2013-10-10 22:50:52 +0000 |
616 | @@ -3,4 +3,5 @@ |
617 | BalancerMember http://{{ host }} timeout={{ lb_balancer_timeout }} |
618 | {% endfor %} |
619 | ProxySet lbmethod=byrequests |
620 | + RequestHeader set X-Balancer-Name "{{ balancer_name }}" |
621 | </Proxy> |
622 | |
623 | === modified file 'data/logrotate.conf.template' |
624 | --- data/logrotate.conf.template 2013-01-29 17:31:09 +0000 |
625 | +++ data/logrotate.conf.template 2013-10-10 22:50:52 +0000 |
626 | @@ -10,6 +10,13 @@ |
627 | compress |
628 | delaycompress |
629 | notifempty |
630 | +{%- if use_rsyslog %} |
631 | + create 640 syslog adm |
632 | + sharedscripts |
633 | + postrotate |
634 | + reload rsyslog >/dev/null 2>&1 || true |
635 | + endscript |
636 | +{%- else %} |
637 | create 640 root adm |
638 | sharedscripts |
639 | postrotate |
640 | @@ -20,4 +27,5 @@ |
641 | run-parts /etc/logrotate.d/httpd-prerotate; \ |
642 | fi; \ |
643 | endscript |
644 | +{%- endif %} |
645 | } |
646 | |
647 | === removed file 'data/nrpe_service.template' |
648 | --- data/nrpe_service.template 2012-10-16 11:02:00 +0000 |
649 | +++ data/nrpe_service.template 1970-01-01 00:00:00 +0000 |
650 | @@ -1,11 +0,0 @@ |
651 | -#--------------------------------------------------- |
652 | -# This file is Juju managed |
653 | -#--------------------------------------------------- |
654 | -define service { |
655 | - use active-service |
656 | - host_name {{ nagios_hostname }} |
657 | - service_description {{ nagios_hostname }} Check Apache Vhost |
658 | - check_command check_nrpe!check_vhost |
659 | - servicegroups {{ nagios_servicegroup }} |
660 | - |
661 | -} |
662 | |
663 | === added file 'data/syslog-apache.conf' |
664 | --- data/syslog-apache.conf 1970-01-01 00:00:00 +0000 |
665 | +++ data/syslog-apache.conf 2013-10-10 22:50:52 +0000 |
666 | @@ -0,0 +1,13 @@ |
667 | +# |
668 | +# " " |
669 | +# mmm m m mmm m m |
670 | +# # # # # # # |
671 | +# # # # # # # |
672 | +# # "mm"# # "mm"# |
673 | +# # # |
674 | +# "" "" |
675 | +# This file is managed by Juju. Do not make local changes. |
676 | +# |
677 | + |
678 | +ErrorLog syslog |
679 | +CustomLog "|/usr/bin/logger -p local0.info -t apache2" vhost_combined |
680 | |
681 | === added file 'data/syslog-rsyslog.conf' |
682 | --- data/syslog-rsyslog.conf 1970-01-01 00:00:00 +0000 |
683 | +++ data/syslog-rsyslog.conf 2013-10-10 22:50:52 +0000 |
684 | @@ -0,0 +1,26 @@ |
685 | +# |
686 | +# " " |
687 | +# mmm m m mmm m m |
688 | +# # # # # # # |
689 | +# # # # # # # |
690 | +# # "mm"# # "mm"# |
691 | +# # # |
692 | +# "" "" |
693 | +# This file is managed by Juju. Do not make local changes. |
694 | +# |
695 | + |
696 | +# Create a template to print just the raw message to avoid duplicate timestamps |
697 | +# and other info not needed. Also, as documented on the rsyslog website |
698 | +# http://www.rsyslog.com/log-normalization-and-the-leading-space/ |
699 | +# msg has a leading whitespace so strip that. |
700 | +$template ApacheLogFormat,"%msg:2:10000%\n" |
701 | + |
702 | +# We want all access entries, even if rsyslog deems them as repeated msgs. |
703 | +$RepeatedMsgReduction off |
704 | + |
705 | +# Error logs |
706 | +if $syslogfacility-text == 'local0' and $syslogseverity == 3 and $syslogtag == "apache2:" then /var/log/apache2/error.log;ApacheLogFormat |
707 | +& ~ |
708 | +# Access logs |
709 | +if $syslogfacility-text == 'local0' and $syslogseverity == 6 and $syslogtag == "apache2:" then /var/log/apache2/access.log;ApacheLogFormat |
710 | +& ~ |
711 | |
712 | === added directory 'hooks/charmhelpers' |
713 | === added file 'hooks/charmhelpers/__init__.py' |
714 | === added directory 'hooks/charmhelpers/contrib' |
715 | === added file 'hooks/charmhelpers/contrib/__init__.py' |
716 | === added directory 'hooks/charmhelpers/contrib/charmsupport' |
717 | === added file 'hooks/charmhelpers/contrib/charmsupport/__init__.py' |
718 | === added file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py' |
719 | --- hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000 |
720 | +++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2013-10-10 22:50:52 +0000 |
721 | @@ -0,0 +1,218 @@ |
722 | +"""Compatibility with the nrpe-external-master charm""" |
723 | +# Copyright 2012 Canonical Ltd. |
724 | +# |
725 | +# Authors: |
726 | +# Matthew Wedgwood <matthew.wedgwood@canonical.com> |
727 | + |
728 | +import subprocess |
729 | +import pwd |
730 | +import grp |
731 | +import os |
732 | +import re |
733 | +import shlex |
734 | +import yaml |
735 | + |
736 | +from charmhelpers.core.hookenv import ( |
737 | + config, |
738 | + local_unit, |
739 | + log, |
740 | + relation_ids, |
741 | + relation_set, |
742 | +) |
743 | + |
744 | +from charmhelpers.core.host import service |
745 | + |
746 | +# This module adds compatibility with the nrpe-external-master and plain nrpe |
747 | +# subordinate charms. To use it in your charm: |
748 | +# |
749 | +# 1. Update metadata.yaml |
750 | +# |
751 | +# provides: |
752 | +# (...) |
753 | +# nrpe-external-master: |
754 | +# interface: nrpe-external-master |
755 | +# scope: container |
756 | +# |
757 | +# and/or |
758 | +# |
759 | +# provides: |
760 | +# (...) |
761 | +# local-monitors: |
762 | +# interface: local-monitors |
763 | +# scope: container |
764 | + |
765 | +# |
766 | +# 2. Add the following to config.yaml |
767 | +# |
768 | +# nagios_context: |
769 | +# default: "juju" |
770 | +# type: string |
771 | +# description: | |
772 | +# Used by the nrpe subordinate charms. |
773 | +# A string that will be prepended to instance name to set the host name |
774 | +# in nagios. So for instance the hostname would be something like: |
775 | +# juju-myservice-0 |
776 | +# If you're running multiple environments with the same services in them |
777 | +# this allows you to differentiate between them. |
778 | +# |
779 | +# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master |
780 | +# |
781 | +# 4. Update your hooks.py with something like this: |
782 | +# |
783 | +# from charmsupport.nrpe import NRPE |
784 | +# (...) |
785 | +# def update_nrpe_config(): |
786 | +# nrpe_compat = NRPE() |
787 | +# nrpe_compat.add_check( |
788 | +# shortname = "myservice", |
789 | +# description = "Check MyService", |
790 | +# check_cmd = "check_http -w 2 -c 10 http://localhost" |
791 | +# ) |
792 | +# nrpe_compat.add_check( |
793 | +# "myservice_other", |
794 | +# "Check for widget failures", |
795 | +# check_cmd = "/srv/myapp/scripts/widget_check" |
796 | +# ) |
797 | +# nrpe_compat.write() |
798 | +# |
799 | +# def config_changed(): |
800 | +# (...) |
801 | +# update_nrpe_config() |
802 | +# |
803 | +# def nrpe_external_master_relation_changed(): |
804 | +# update_nrpe_config() |
805 | +# |
806 | +# def local_monitors_relation_changed(): |
807 | +# update_nrpe_config() |
808 | +# |
809 | +# 5. ln -s hooks.py nrpe-external-master-relation-changed |
810 | +# ln -s hooks.py local-monitors-relation-changed |
811 | + |
812 | + |
813 | +class CheckException(Exception): |
814 | + pass |
815 | + |
816 | + |
817 | +class Check(object): |
818 | + shortname_re = '[A-Za-z0-9-_]+$' |
819 | + service_template = (""" |
820 | +#--------------------------------------------------- |
821 | +# This file is Juju managed |
822 | +#--------------------------------------------------- |
823 | +define service {{ |
824 | + use active-service |
825 | + host_name {nagios_hostname} |
826 | + service_description {nagios_hostname}[{shortname}] """ |
827 | + """{description} |
828 | + check_command check_nrpe!{command} |
829 | + servicegroups {nagios_servicegroup} |
830 | +}} |
831 | +""") |
832 | + |
833 | + def __init__(self, shortname, description, check_cmd): |
834 | + super(Check, self).__init__() |
835 | + # XXX: could be better to calculate this from the service name |
836 | + if not re.match(self.shortname_re, shortname): |
837 | + raise CheckException("shortname must match {}".format( |
838 | + Check.shortname_re)) |
839 | + self.shortname = shortname |
840 | + self.command = "check_{}".format(shortname) |
841 | + # Note: a set of invalid characters is defined by the |
842 | + # Nagios server config |
843 | + # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()= |
844 | + self.description = description |
845 | + self.check_cmd = self._locate_cmd(check_cmd) |
846 | + |
847 | + def _locate_cmd(self, check_cmd): |
848 | + search_path = ( |
849 | + '/', |
850 | + os.path.join(os.environ['CHARM_DIR'], |
851 | + 'files/nrpe-external-master'), |
852 | + '/usr/lib/nagios/plugins', |
853 | + ) |
854 | + parts = shlex.split(check_cmd) |
855 | + for path in search_path: |
856 | + if os.path.exists(os.path.join(path, parts[0])): |
857 | + command = os.path.join(path, parts[0]) |
858 | + if len(parts) > 1: |
859 | + command += " " + " ".join(parts[1:]) |
860 | + return command |
861 | + log('Check command not found: {}'.format(parts[0])) |
862 | + return '' |
863 | + |
864 | + def write(self, nagios_context, hostname): |
865 | + nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format( |
866 | + self.command) |
867 | + with open(nrpe_check_file, 'w') as nrpe_check_config: |
868 | + nrpe_check_config.write("# check {}\n".format(self.shortname)) |
869 | + nrpe_check_config.write("command[{}]={}\n".format( |
870 | + self.command, self.check_cmd)) |
871 | + |
872 | + if not os.path.exists(NRPE.nagios_exportdir): |
873 | + log('Not writing service config as {} is not accessible'.format( |
874 | + NRPE.nagios_exportdir)) |
875 | + else: |
876 | + self.write_service_config(nagios_context, hostname) |
877 | + |
878 | + def write_service_config(self, nagios_context, hostname): |
879 | + for f in os.listdir(NRPE.nagios_exportdir): |
880 | + if re.search('.*{}.cfg'.format(self.command), f): |
881 | + os.remove(os.path.join(NRPE.nagios_exportdir, f)) |
882 | + |
883 | + templ_vars = { |
884 | + 'nagios_hostname': hostname, |
885 | + 'nagios_servicegroup': nagios_context, |
886 | + 'description': self.description, |
887 | + 'shortname': self.shortname, |
888 | + 'command': self.command, |
889 | + } |
890 | + nrpe_service_text = Check.service_template.format(**templ_vars) |
891 | + nrpe_service_file = '{}/service__{}_{}.cfg'.format( |
892 | + NRPE.nagios_exportdir, hostname, self.command) |
893 | + with open(nrpe_service_file, 'w') as nrpe_service_config: |
894 | + nrpe_service_config.write(str(nrpe_service_text)) |
895 | + |
896 | + def run(self): |
897 | + subprocess.call(self.check_cmd) |
898 | + |
899 | + |
900 | +class NRPE(object): |
901 | + nagios_logdir = '/var/log/nagios' |
902 | + nagios_exportdir = '/var/lib/nagios/export' |
903 | + nrpe_confdir = '/etc/nagios/nrpe.d' |
904 | + |
905 | + def __init__(self): |
906 | + super(NRPE, self).__init__() |
907 | + self.config = config() |
908 | + self.nagios_context = self.config['nagios_context'] |
909 | + self.unit_name = local_unit().replace('/', '-') |
910 | + self.hostname = "{}-{}".format(self.nagios_context, self.unit_name) |
911 | + self.checks = [] |
912 | + |
913 | + def add_check(self, *args, **kwargs): |
914 | + self.checks.append(Check(*args, **kwargs)) |
915 | + |
916 | + def write(self): |
917 | + try: |
918 | + nagios_uid = pwd.getpwnam('nagios').pw_uid |
919 | + nagios_gid = grp.getgrnam('nagios').gr_gid |
920 | +        except KeyError:
921 | + log("Nagios user not set up, nrpe checks not updated") |
922 | + return |
923 | + |
924 | + if not os.path.exists(NRPE.nagios_logdir): |
925 | + os.mkdir(NRPE.nagios_logdir) |
926 | + os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid) |
927 | + |
928 | + nrpe_monitors = {} |
929 | + monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}} |
930 | + for nrpecheck in self.checks: |
931 | + nrpecheck.write(self.nagios_context, self.hostname) |
932 | + nrpe_monitors[nrpecheck.shortname] = { |
933 | + "command": nrpecheck.command, |
934 | + } |
935 | + |
936 | + service('restart', 'nagios-nrpe-server') |
937 | + |
938 | + for rid in relation_ids("local-monitors"): |
939 | + relation_set(relation_id=rid, monitors=yaml.dump(monitors)) |
940 | |
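The nrpe.py module above validates a check's shortname and renders a Nagios service stanza from a template. A minimal, self-contained sketch of that flow (the template and hostname below are simplified stand-ins for the real ones, not the charm's exact output):

```python
import re

# Simplified from the Check class above: validate the shortname,
# then render a Nagios service definition from a template.
SHORTNAME_RE = r'[A-Za-z0-9-_]+$'
SERVICE_TEMPLATE = (
    "define service {{\n"
    "    use                 active-service\n"
    "    host_name           {hostname}\n"
    "    service_description {hostname}[{shortname}] {description}\n"
    "    check_command       check_nrpe!{command}\n"
    "}}\n"
)


def render_check(shortname, description, hostname):
    # Reject names Nagios would choke on, as Check.__init__ does.
    if not re.match(SHORTNAME_RE, shortname):
        raise ValueError("shortname must match " + SHORTNAME_RE)
    return SERVICE_TEMPLATE.format(
        hostname=hostname,
        shortname=shortname,
        description=description,
        command="check_{}".format(shortname),
    )


print(render_check("myservice", "Check MyService", "juju-myservice-0"))
```

The real class additionally resolves the check command against a search path and writes both the nrpe.d command file and the exported service file.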
941 | === added file 'hooks/charmhelpers/contrib/charmsupport/volumes.py' |
942 | --- hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000 |
943 | +++ hooks/charmhelpers/contrib/charmsupport/volumes.py 2013-10-10 22:50:52 +0000 |
944 | @@ -0,0 +1,156 @@ |
945 | +''' |
946 | +Functions for managing volumes in juju units. One volume is supported per unit. |
947 | +Subordinates may have their own storage, provided it is on its own partition. |
948 | + |
949 | +Configuration stanzas: |
950 | + volume-ephemeral: |
951 | + type: boolean |
952 | + default: true |
953 | + description: > |
954 | +        If false, a volume is mounted as specified in "volume-map"
955 | + If true, ephemeral storage will be used, meaning that log data |
956 | + will only exist as long as the machine. YOU HAVE BEEN WARNED. |
957 | + volume-map: |
958 | + type: string |
959 | + default: {} |
960 | + description: > |
961 | + YAML map of units to device names, e.g: |
962 | + "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }" |
963 | + Service units will raise a configure-error if volume-ephemeral |
964 | +        is 'false' and no volume-map value is set. Use 'juju set' to set a
965 | + value and 'juju resolved' to complete configuration. |
966 | + |
967 | +Usage: |
968 | + from charmsupport.volumes import configure_volume, VolumeConfigurationError |
969 | + from charmsupport.hookenv import log, ERROR |
970 | +    def pre_mount_hook():
971 | + stop_service('myservice') |
972 | + def post_mount_hook(): |
973 | + start_service('myservice') |
974 | + |
975 | + if __name__ == '__main__': |
976 | + try: |
977 | + configure_volume(before_change=pre_mount_hook, |
978 | + after_change=post_mount_hook) |
979 | + except VolumeConfigurationError: |
980 | + log('Storage could not be configured', ERROR) |
981 | +''' |
982 | + |
983 | +# XXX: Known limitations |
984 | +# - fstab is neither consulted nor updated |
985 | + |
986 | +import os |
987 | +from charmhelpers.core import hookenv |
988 | +from charmhelpers.core import host |
989 | +import yaml |
990 | + |
991 | + |
992 | +MOUNT_BASE = '/srv/juju/volumes' |
993 | + |
994 | + |
995 | +class VolumeConfigurationError(Exception): |
996 | + '''Volume configuration data is missing or invalid''' |
997 | + pass |
998 | + |
999 | + |
1000 | +def get_config(): |
1001 | + '''Gather and sanity-check volume configuration data''' |
1002 | + volume_config = {} |
1003 | + config = hookenv.config() |
1004 | + |
1005 | + errors = False |
1006 | + |
1007 | + if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'): |
1008 | + volume_config['ephemeral'] = True |
1009 | + else: |
1010 | + volume_config['ephemeral'] = False |
1011 | + |
1012 | + try: |
1013 | + volume_map = yaml.safe_load(config.get('volume-map', '{}')) |
1014 | + except yaml.YAMLError as e: |
1015 | + hookenv.log("Error parsing YAML volume-map: {}".format(e), |
1016 | + hookenv.ERROR) |
1017 | +        errors = True
1017 | +        volume_map = {}
1018 | + if volume_map is None: |
1019 | + # probably an empty string |
1020 | + volume_map = {} |
1021 | + elif not isinstance(volume_map, dict): |
1022 | + hookenv.log("Volume-map should be a dictionary, not {}".format( |
1023 | + type(volume_map))) |
1024 | + errors = True |
1025 | + |
1026 | + volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME']) |
1027 | + if volume_config['device'] and volume_config['ephemeral']: |
1028 | + # asked for ephemeral storage but also defined a volume ID |
1029 | + hookenv.log('A volume is defined for this unit, but ephemeral ' |
1030 | + 'storage was requested', hookenv.ERROR) |
1031 | + errors = True |
1032 | + elif not volume_config['device'] and not volume_config['ephemeral']: |
1033 | + # asked for permanent storage but did not define volume ID |
1034 | +        hookenv.log('Persistent storage was requested, but there is no '
1035 | +                    'volume defined for this unit.', hookenv.ERROR)
1036 | + errors = True |
1037 | + |
1038 | + unit_mount_name = hookenv.local_unit().replace('/', '-') |
1039 | + volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name) |
1040 | + |
1041 | + if errors: |
1042 | + return None |
1043 | + return volume_config |
1044 | + |
1045 | + |
1046 | +def mount_volume(config): |
1047 | + if os.path.exists(config['mountpoint']): |
1048 | + if not os.path.isdir(config['mountpoint']): |
1049 | + hookenv.log('Not a directory: {}'.format(config['mountpoint'])) |
1050 | + raise VolumeConfigurationError() |
1051 | + else: |
1052 | + host.mkdir(config['mountpoint']) |
1053 | + if os.path.ismount(config['mountpoint']): |
1054 | + unmount_volume(config) |
1055 | + if not host.mount(config['device'], config['mountpoint'], persist=True): |
1056 | + raise VolumeConfigurationError() |
1057 | + |
1058 | + |
1059 | +def unmount_volume(config): |
1060 | + if os.path.ismount(config['mountpoint']): |
1061 | + if not host.umount(config['mountpoint'], persist=True): |
1062 | + raise VolumeConfigurationError() |
1063 | + |
1064 | + |
1065 | +def managed_mounts(): |
1066 | + '''List of all mounted managed volumes''' |
1067 | + return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts()) |
1068 | + |
1069 | + |
1070 | +def configure_volume(before_change=lambda: None, after_change=lambda: None): |
1071 | + '''Set up storage (or don't) according to the charm's volume configuration. |
1072 | + Returns the mount point or "ephemeral". before_change and after_change |
1073 | + are optional functions to be called if the volume configuration changes. |
1074 | + ''' |
1075 | + |
1076 | + config = get_config() |
1077 | + if not config: |
1078 | + hookenv.log('Failed to read volume configuration', hookenv.CRITICAL) |
1079 | + raise VolumeConfigurationError() |
1080 | + |
1081 | + if config['ephemeral']: |
1082 | + if os.path.ismount(config['mountpoint']): |
1083 | + before_change() |
1084 | + unmount_volume(config) |
1085 | + after_change() |
1086 | + return 'ephemeral' |
1087 | + else: |
1088 | + # persistent storage |
1089 | + if os.path.ismount(config['mountpoint']): |
1090 | + mounts = dict(managed_mounts()) |
1091 | + if mounts.get(config['mountpoint']) != config['device']: |
1092 | + before_change() |
1093 | + unmount_volume(config) |
1094 | + mount_volume(config) |
1095 | + after_change() |
1096 | + else: |
1097 | + before_change() |
1098 | + mount_volume(config) |
1099 | + after_change() |
1100 | + return config['mountpoint'] |
1101 | |
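The validation in `get_config` above enforces that exactly one of ephemeral storage or a per-unit device mapping is in effect. That decision table can be sketched as a pure function (names and return convention here are illustrative, not the helper's API):

```python
def check_volume_config(unit_name, volume_ephemeral, volume_map):
    """Mirror the get_config() checks above: return the device mapped to
    this unit, the string 'ephemeral', or None when the configuration is
    inconsistent (a contradiction or a missing device)."""
    if not isinstance(volume_map, dict):
        return None
    device = volume_map.get(unit_name)
    if volume_ephemeral:
        # Ephemeral storage requested: a mapped device is a contradiction.
        return None if device else 'ephemeral'
    # Persistent storage requested: a mapped device is mandatory.
    return device or None


print(check_volume_config('rsyslog/0', False, {'rsyslog/0': '/dev/vdb'}))
```

The real helper additionally parses `volume-map` from YAML, logs each failure at ERROR level, and derives the mountpoint under `/srv/juju/volumes`.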
1102 | === added directory 'hooks/charmhelpers/core' |
1103 | === added file 'hooks/charmhelpers/core/__init__.py' |
1104 | === added file 'hooks/charmhelpers/core/hookenv.py' |
1105 | --- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000 |
1106 | +++ hooks/charmhelpers/core/hookenv.py 2013-10-10 22:50:52 +0000 |
1107 | @@ -0,0 +1,340 @@ |
1108 | +"Interactions with the Juju environment" |
1109 | +# Copyright 2013 Canonical Ltd. |
1110 | +# |
1111 | +# Authors: |
1112 | +# Charm Helpers Developers <juju@lists.ubuntu.com> |
1113 | + |
1114 | +import os |
1115 | +import json |
1116 | +import yaml |
1117 | +import subprocess |
1118 | +import UserDict |
1119 | + |
1120 | +CRITICAL = "CRITICAL" |
1121 | +ERROR = "ERROR" |
1122 | +WARNING = "WARNING" |
1123 | +INFO = "INFO" |
1124 | +DEBUG = "DEBUG" |
1125 | +MARKER = object() |
1126 | + |
1127 | +cache = {} |
1128 | + |
1129 | + |
1130 | +def cached(func): |
1131 | + ''' Cache return values for multiple executions of func + args |
1132 | + |
1133 | + For example: |
1134 | + |
1135 | + @cached |
1136 | + def unit_get(attribute): |
1137 | + pass |
1138 | + |
1139 | + unit_get('test') |
1140 | + |
1141 | + will cache the result of unit_get + 'test' for future calls. |
1142 | + ''' |
1143 | + def wrapper(*args, **kwargs): |
1144 | + global cache |
1145 | + key = str((func, args, kwargs)) |
1146 | + try: |
1147 | + return cache[key] |
1148 | + except KeyError: |
1149 | + res = func(*args, **kwargs) |
1150 | + cache[key] = res |
1151 | + return res |
1152 | + return wrapper |
1153 | + |
1154 | + |
1155 | +def flush(key): |
1156 | + ''' Flushes any entries from function cache where the |
1157 | + key is found in the function+args ''' |
1158 | + flush_list = [] |
1159 | + for item in cache: |
1160 | + if key in item: |
1161 | + flush_list.append(item) |
1162 | + for item in flush_list: |
1163 | + del cache[item] |
1164 | + |
1165 | + |
1166 | +def log(message, level=None): |
1167 | + "Write a message to the juju log" |
1168 | + command = ['juju-log'] |
1169 | + if level: |
1170 | + command += ['-l', level] |
1171 | + command += [message] |
1172 | + subprocess.call(command) |
1173 | + |
1174 | + |
1175 | +class Serializable(UserDict.IterableUserDict): |
1176 | + "Wrapper, an object that can be serialized to yaml or json" |
1177 | + |
1178 | + def __init__(self, obj): |
1179 | + # wrap the object |
1180 | + UserDict.IterableUserDict.__init__(self) |
1181 | + self.data = obj |
1182 | + |
1183 | + def __getattr__(self, attr): |
1184 | + # See if this object has attribute. |
1185 | + if attr in ("json", "yaml", "data"): |
1186 | + return self.__dict__[attr] |
1187 | + # Check for attribute in wrapped object. |
1188 | + got = getattr(self.data, attr, MARKER) |
1189 | + if got is not MARKER: |
1190 | + return got |
1191 | + # Proxy to the wrapped object via dict interface. |
1192 | + try: |
1193 | + return self.data[attr] |
1194 | + except KeyError: |
1195 | + raise AttributeError(attr) |
1196 | + |
1197 | + def __getstate__(self): |
1198 | + # Pickle as a standard dictionary. |
1199 | + return self.data |
1200 | + |
1201 | + def __setstate__(self, state): |
1202 | + # Unpickle into our wrapper. |
1203 | + self.data = state |
1204 | + |
1205 | + def json(self): |
1206 | + "Serialize the object to json" |
1207 | + return json.dumps(self.data) |
1208 | + |
1209 | + def yaml(self): |
1210 | + "Serialize the object to yaml" |
1211 | + return yaml.dump(self.data) |
1212 | + |
1213 | + |
1214 | +def execution_environment(): |
1215 | + """A convenient bundling of the current execution context""" |
1216 | + context = {} |
1217 | + context['conf'] = config() |
1218 | + if relation_id(): |
1219 | + context['reltype'] = relation_type() |
1220 | + context['relid'] = relation_id() |
1221 | + context['rel'] = relation_get() |
1222 | + context['unit'] = local_unit() |
1223 | + context['rels'] = relations() |
1224 | + context['env'] = os.environ |
1225 | + return context |
1226 | + |
1227 | + |
1228 | +def in_relation_hook(): |
1229 | + "Determine whether we're running in a relation hook" |
1230 | + return 'JUJU_RELATION' in os.environ |
1231 | + |
1232 | + |
1233 | +def relation_type(): |
1234 | + "The scope for the current relation hook" |
1235 | + return os.environ.get('JUJU_RELATION', None) |
1236 | + |
1237 | + |
1238 | +def relation_id(): |
1239 | + "The relation ID for the current relation hook" |
1240 | + return os.environ.get('JUJU_RELATION_ID', None) |
1241 | + |
1242 | + |
1243 | +def local_unit(): |
1244 | + "Local unit ID" |
1245 | + return os.environ['JUJU_UNIT_NAME'] |
1246 | + |
1247 | + |
1248 | +def remote_unit(): |
1249 | + "The remote unit for the current relation hook" |
1250 | + return os.environ['JUJU_REMOTE_UNIT'] |
1251 | + |
1252 | + |
1253 | +def service_name(): |
1254 | +    "The name of the service group this unit belongs to"
1255 | + return local_unit().split('/')[0] |
1256 | + |
1257 | + |
1258 | +@cached |
1259 | +def config(scope=None): |
1260 | + "Juju charm configuration" |
1261 | + config_cmd_line = ['config-get'] |
1262 | + if scope is not None: |
1263 | + config_cmd_line.append(scope) |
1264 | + config_cmd_line.append('--format=json') |
1265 | + try: |
1266 | + return json.loads(subprocess.check_output(config_cmd_line)) |
1267 | + except ValueError: |
1268 | + return None |
1269 | + |
1270 | + |
1271 | +@cached |
1272 | +def relation_get(attribute=None, unit=None, rid=None): |
1273 | + _args = ['relation-get', '--format=json'] |
1274 | + if rid: |
1275 | + _args.append('-r') |
1276 | + _args.append(rid) |
1277 | + _args.append(attribute or '-') |
1278 | + if unit: |
1279 | + _args.append(unit) |
1280 | + try: |
1281 | + return json.loads(subprocess.check_output(_args)) |
1282 | + except ValueError: |
1283 | + return None |
1284 | + |
1285 | + |
1286 | +def relation_set(relation_id=None, relation_settings={}, **kwargs): |
1287 | + relation_cmd_line = ['relation-set'] |
1288 | + if relation_id is not None: |
1289 | + relation_cmd_line.extend(('-r', relation_id)) |
1290 | + for k, v in (relation_settings.items() + kwargs.items()): |
1291 | + if v is None: |
1292 | + relation_cmd_line.append('{}='.format(k)) |
1293 | + else: |
1294 | + relation_cmd_line.append('{}={}'.format(k, v)) |
1295 | + subprocess.check_call(relation_cmd_line) |
1296 | + # Flush cache of any relation-gets for local unit |
1297 | + flush(local_unit()) |
1298 | + |
1299 | + |
1300 | +@cached |
1301 | +def relation_ids(reltype=None): |
1302 | + "A list of relation_ids" |
1303 | + reltype = reltype or relation_type() |
1304 | + relid_cmd_line = ['relation-ids', '--format=json'] |
1305 | + if reltype is not None: |
1306 | + relid_cmd_line.append(reltype) |
1307 | + return json.loads(subprocess.check_output(relid_cmd_line)) or [] |
1309 | + |
1310 | + |
1311 | +@cached |
1312 | +def related_units(relid=None): |
1313 | + "A list of related units" |
1314 | + relid = relid or relation_id() |
1315 | + units_cmd_line = ['relation-list', '--format=json'] |
1316 | + if relid is not None: |
1317 | + units_cmd_line.extend(('-r', relid)) |
1318 | + return json.loads(subprocess.check_output(units_cmd_line)) or [] |
1319 | + |
1320 | + |
1321 | +@cached |
1322 | +def relation_for_unit(unit=None, rid=None): |
1323 | +    "Get the json representation of a unit's relation"
1324 | + unit = unit or remote_unit() |
1325 | + relation = relation_get(unit=unit, rid=rid) |
1326 | + for key in relation: |
1327 | + if key.endswith('-list'): |
1328 | + relation[key] = relation[key].split() |
1329 | + relation['__unit__'] = unit |
1330 | + return relation |
1331 | + |
1332 | + |
1333 | +@cached |
1334 | +def relations_for_id(relid=None): |
1335 | + "Get relations of a specific relation ID" |
1336 | + relation_data = [] |
1337 | +    relid = relid or relation_id()
1338 | + for unit in related_units(relid): |
1339 | + unit_data = relation_for_unit(unit, relid) |
1340 | + unit_data['__relid__'] = relid |
1341 | + relation_data.append(unit_data) |
1342 | + return relation_data |
1343 | + |
1344 | + |
1345 | +@cached |
1346 | +def relations_of_type(reltype=None): |
1347 | + "Get relations of a specific type" |
1348 | + relation_data = [] |
1349 | + reltype = reltype or relation_type() |
1350 | + for relid in relation_ids(reltype): |
1351 | + for relation in relations_for_id(relid): |
1352 | + relation['__relid__'] = relid |
1353 | + relation_data.append(relation) |
1354 | + return relation_data |
1355 | + |
1356 | + |
1357 | +@cached |
1358 | +def relation_types(): |
1359 | + "Get a list of relation types supported by this charm" |
1360 | + charmdir = os.environ.get('CHARM_DIR', '') |
1361 | + mdf = open(os.path.join(charmdir, 'metadata.yaml')) |
1362 | + md = yaml.safe_load(mdf) |
1363 | + rel_types = [] |
1364 | + for key in ('provides', 'requires', 'peers'): |
1365 | + section = md.get(key) |
1366 | + if section: |
1367 | + rel_types.extend(section.keys()) |
1368 | + mdf.close() |
1369 | + return rel_types |
1370 | + |
1371 | + |
1372 | +@cached |
1373 | +def relations(): |
1374 | + rels = {} |
1375 | + for reltype in relation_types(): |
1376 | + relids = {} |
1377 | + for relid in relation_ids(reltype): |
1378 | + units = {local_unit(): relation_get(unit=local_unit(), rid=relid)} |
1379 | + for unit in related_units(relid): |
1380 | + reldata = relation_get(unit=unit, rid=relid) |
1381 | + units[unit] = reldata |
1382 | + relids[relid] = units |
1383 | + rels[reltype] = relids |
1384 | + return rels |
1385 | + |
1386 | + |
1387 | +def open_port(port, protocol="TCP"): |
1388 | + "Open a service network port" |
1389 | + _args = ['open-port'] |
1390 | + _args.append('{}/{}'.format(port, protocol)) |
1391 | + subprocess.check_call(_args) |
1392 | + |
1393 | + |
1394 | +def close_port(port, protocol="TCP"): |
1395 | + "Close a service network port" |
1396 | + _args = ['close-port'] |
1397 | + _args.append('{}/{}'.format(port, protocol)) |
1398 | + subprocess.check_call(_args) |
1399 | + |
1400 | + |
1401 | +@cached |
1402 | +def unit_get(attribute): |
1403 | + _args = ['unit-get', '--format=json', attribute] |
1404 | + try: |
1405 | + return json.loads(subprocess.check_output(_args)) |
1406 | + except ValueError: |
1407 | + return None |
1408 | + |
1409 | + |
1410 | +def unit_private_ip(): |
1411 | + return unit_get('private-address') |
1412 | + |
1413 | + |
1414 | +class UnregisteredHookError(Exception): |
1415 | + pass |
1416 | + |
1417 | + |
1418 | +class Hooks(object): |
1419 | + def __init__(self): |
1420 | + super(Hooks, self).__init__() |
1421 | + self._hooks = {} |
1422 | + |
1423 | + def register(self, name, function): |
1424 | + self._hooks[name] = function |
1425 | + |
1426 | + def execute(self, args): |
1427 | + hook_name = os.path.basename(args[0]) |
1428 | + if hook_name in self._hooks: |
1429 | + self._hooks[hook_name]() |
1430 | + else: |
1431 | + raise UnregisteredHookError(hook_name) |
1432 | + |
1433 | + def hook(self, *hook_names): |
1434 | + def wrapper(decorated): |
1435 | +            for hook_name in hook_names:
1436 | +                self.register(hook_name, decorated)
1437 | +            self.register(decorated.__name__, decorated)
1438 | +            if '_' in decorated.__name__:
1439 | +                self.register(
1440 | +                    decorated.__name__.replace('_', '-'), decorated)
1442 | + return decorated |
1443 | + return wrapper |
1444 | + |
1445 | + |
1446 | +def charm_dir(): |
1447 | + return os.environ.get('CHARM_DIR') |
1448 | |
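The `@cached`/`flush` pair in hookenv.py above memoises hook-tool output on a string key built from the function and its arguments, and `flush` evicts any entry whose key contains a given substring (which is how `relation_set` invalidates prior `relation-get` results for the local unit). A self-contained sketch of the same pattern:

```python
cache = {}


def cached(func):
    """Memoise func on a string key of (name, args, kwargs),
    as hookenv.cached does above."""
    def wrapper(*args, **kwargs):
        key = str((func.__name__, args, kwargs))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper


def flush(key):
    """Evict every cache entry whose key contains the given substring."""
    for item in [k for k in cache if key in k]:
        del cache[item]


calls = []


@cached
def unit_get(attribute):
    # Stand-in for shelling out to the 'unit-get' hook tool.
    calls.append(attribute)
    return 'value-of-' + attribute


unit_get('private-address')
unit_get('private-address')   # served from cache; the body runs once
flush('private-address')      # evicted by substring match on the key
unit_get('private-address')   # recomputed after the flush
```

Keying on the stringified arguments is what makes substring-based flushing possible, at the cost of requiring arguments with stable `repr` output.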
1449 | === added file 'hooks/charmhelpers/core/host.py' |
1450 | --- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000 |
1451 | +++ hooks/charmhelpers/core/host.py 2013-10-10 22:50:52 +0000 |
1452 | @@ -0,0 +1,239 @@ |
1453 | +"""Tools for working with the host system""" |
1454 | +# Copyright 2012 Canonical Ltd. |
1455 | +# |
1456 | +# Authors: |
1457 | +# Nick Moffitt <nick.moffitt@canonical.com> |
1458 | +# Matthew Wedgwood <matthew.wedgwood@canonical.com> |
1459 | + |
1460 | +import os |
1461 | +import pwd |
1462 | +import grp |
1463 | +import random |
1464 | +import string |
1465 | +import subprocess |
1466 | +import hashlib |
1467 | + |
1468 | +from collections import OrderedDict |
1469 | + |
1470 | +from hookenv import log |
1471 | + |
1472 | + |
1473 | +def service_start(service_name): |
1474 | + service('start', service_name) |
1475 | + |
1476 | + |
1477 | +def service_stop(service_name): |
1478 | + service('stop', service_name) |
1479 | + |
1480 | + |
1481 | +def service_restart(service_name): |
1482 | + service('restart', service_name) |
1483 | + |
1484 | + |
1485 | +def service_reload(service_name, restart_on_failure=False): |
1486 | + if not service('reload', service_name) and restart_on_failure: |
1487 | + service('restart', service_name) |
1488 | + |
1489 | + |
1490 | +def service(action, service_name): |
1491 | + cmd = ['service', service_name, action] |
1492 | + return subprocess.call(cmd) == 0 |
1493 | + |
1494 | + |
1495 | +def service_running(service): |
1496 | + try: |
1497 | + output = subprocess.check_output(['service', service, 'status']) |
1498 | + except subprocess.CalledProcessError: |
1499 | + return False |
1500 | + else: |
1501 | + if ("start/running" in output or "is running" in output): |
1502 | + return True |
1503 | + else: |
1504 | + return False |
1505 | + |
1506 | + |
1507 | +def adduser(username, password=None, shell='/bin/bash', system_user=False): |
1508 | + """Add a user""" |
1509 | + try: |
1510 | + user_info = pwd.getpwnam(username) |
1511 | + log('user {0} already exists!'.format(username)) |
1512 | + except KeyError: |
1513 | + log('creating user {0}'.format(username)) |
1514 | + cmd = ['useradd'] |
1515 | + if system_user or password is None: |
1516 | + cmd.append('--system') |
1517 | + else: |
1518 | + cmd.extend([ |
1519 | + '--create-home', |
1520 | + '--shell', shell, |
1521 | + '--password', password, |
1522 | + ]) |
1523 | + cmd.append(username) |
1524 | + subprocess.check_call(cmd) |
1525 | + user_info = pwd.getpwnam(username) |
1526 | + return user_info |
1527 | + |
1528 | + |
1529 | +def add_user_to_group(username, group): |
1530 | + """Add a user to a group""" |
1531 | + cmd = [ |
1532 | + 'gpasswd', '-a', |
1533 | + username, |
1534 | + group |
1535 | + ] |
1536 | + log("Adding user {} to group {}".format(username, group)) |
1537 | + subprocess.check_call(cmd) |
1538 | + |
1539 | + |
1540 | +def rsync(from_path, to_path, flags='-r', options=None): |
1541 | + """Replicate the contents of a path""" |
1542 | + options = options or ['--delete', '--executability'] |
1543 | + cmd = ['/usr/bin/rsync', flags] |
1544 | + cmd.extend(options) |
1545 | + cmd.append(from_path) |
1546 | + cmd.append(to_path) |
1547 | + log(" ".join(cmd)) |
1548 | + return subprocess.check_output(cmd).strip() |
1549 | + |
1550 | + |
1551 | +def symlink(source, destination): |
1552 | + """Create a symbolic link""" |
1553 | + log("Symlinking {} as {}".format(source, destination)) |
1554 | + cmd = [ |
1555 | + 'ln', |
1556 | + '-sf', |
1557 | + source, |
1558 | + destination, |
1559 | + ] |
1560 | + subprocess.check_call(cmd) |
1561 | + |
1562 | + |
1563 | +def mkdir(path, owner='root', group='root', perms=0555, force=False): |
1564 | + """Create a directory""" |
1565 | + log("Making dir {} {}:{} {:o}".format(path, owner, group, |
1566 | + perms)) |
1567 | + uid = pwd.getpwnam(owner).pw_uid |
1568 | + gid = grp.getgrnam(group).gr_gid |
1569 | + realpath = os.path.abspath(path) |
1570 | + if os.path.exists(realpath): |
1571 | + if force and not os.path.isdir(realpath): |
1572 | + log("Removing non-directory file {} prior to mkdir()".format(path)) |
1573 | + os.unlink(realpath) |
1574 | + else: |
1575 | + os.makedirs(realpath, perms) |
1576 | + os.chown(realpath, uid, gid) |
1577 | + |
1578 | + |
1579 | +def write_file(path, content, owner='root', group='root', perms=0444): |
1580 | + """Create or overwrite a file with the contents of a string""" |
1581 | + log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) |
1582 | + uid = pwd.getpwnam(owner).pw_uid |
1583 | + gid = grp.getgrnam(group).gr_gid |
1584 | + with open(path, 'w') as target: |
1585 | + os.fchown(target.fileno(), uid, gid) |
1586 | + os.fchmod(target.fileno(), perms) |
1587 | + target.write(content) |
1588 | + |
1589 | + |
1590 | +def mount(device, mountpoint, options=None, persist=False): |
1591 | + '''Mount a filesystem''' |
1592 | + cmd_args = ['mount'] |
1593 | + if options is not None: |
1594 | + cmd_args.extend(['-o', options]) |
1595 | + cmd_args.extend([device, mountpoint]) |
1596 | + try: |
1597 | + subprocess.check_output(cmd_args) |
1598 | + except subprocess.CalledProcessError, e: |
1599 | + log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) |
1600 | + return False |
1601 | + if persist: |
1602 | + # TODO: update fstab |
1603 | + pass |
1604 | + return True |
1605 | + |
1606 | + |
1607 | +def umount(mountpoint, persist=False): |
1608 | + '''Unmount a filesystem''' |
1609 | + cmd_args = ['umount', mountpoint] |
1610 | + try: |
1611 | + subprocess.check_output(cmd_args) |
1612 | + except subprocess.CalledProcessError, e: |
1613 | + log('Error unmounting {}\n{}'.format(mountpoint, e.output)) |
1614 | + return False |
1615 | + if persist: |
1616 | + # TODO: update fstab |
1617 | + pass |
1618 | + return True |
1619 | + |
1620 | + |
1621 | +def mounts(): |
1622 | + '''List of all mounted volumes as [[mountpoint,device],[...]]''' |
1623 | + with open('/proc/mounts') as f: |
1624 | + # [['/mount/point','/dev/path'],[...]] |
1625 | + system_mounts = [m[1::-1] for m in [l.strip().split() |
1626 | + for l in f.readlines()]] |
1627 | + return system_mounts |
1628 | + |
1629 | + |
1630 | +def file_hash(path): |
1631 | + ''' Generate a md5 hash of the contents of 'path' or None if not found ''' |
1632 | + if os.path.exists(path): |
1633 | + h = hashlib.md5() |
1634 | + with open(path, 'r') as source: |
1635 | + h.update(source.read()) # IGNORE:E1101 - it does have update |
1636 | + return h.hexdigest() |
1637 | + else: |
1638 | + return None |
1639 | + |
1640 | + |
1641 | +def restart_on_change(restart_map): |
1642 | + ''' Restart services based on configuration files changing |
1643 | + |
1644 | + This function is used a decorator, for example |
1645 | + |
1646 | + @restart_on_change({ |
1647 | + '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] |
1648 | + }) |
1649 | + def ceph_client_changed(): |
1650 | + ... |
1651 | + |
1652 | + In this example, the cinder-api and cinder-volume services |
1653 | + would be restarted if /etc/ceph/ceph.conf is changed by the |
1654 | + ceph_client_changed function. |
1655 | + ''' |
1656 | + def wrap(f): |
1657 | + def wrapped_f(*args): |
1658 | + checksums = {} |
1659 | + for path in restart_map: |
1660 | + checksums[path] = file_hash(path) |
1661 | + f(*args) |
1662 | + restarts = [] |
1663 | + for path in restart_map: |
1664 | + if checksums[path] != file_hash(path): |
1665 | + restarts += restart_map[path] |
1666 | + for service_name in list(OrderedDict.fromkeys(restarts)): |
1667 | + service('restart', service_name) |
1668 | + return wrapped_f |
1669 | + return wrap |
1670 | + |
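The decorator snapshots checksums before calling the wrapped hook, compares them afterwards, and restarts each mapped service whose file changed, deduplicating while preserving order via `OrderedDict.fromkeys`. A self-contained sketch with a stubbed `service()` call (`fake_service` and `config_changed` are illustrative names, not part of charmhelpers):

```python
from collections import OrderedDict
import hashlib
import os
import tempfile

restarted = []

def fake_service(action, name):          # stand-in for host.service()
    restarted.append((action, name))

def file_hash(path):
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

def restart_on_change(restart_map):
    def wrap(f):
        def wrapped_f(*args):
            checksums = {p: file_hash(p) for p in restart_map}
            f(*args)
            restarts = []
            for p in restart_map:
                if checksums[p] != file_hash(p):
                    restarts += restart_map[p]
            for name in OrderedDict.fromkeys(restarts):  # dedupe, keep order
                fake_service('restart', name)
        return wrapped_f
    return wrap

tmp = tempfile.mkdtemp()
conf = os.path.join(tmp, 'app.conf')

@restart_on_change({conf: ['apache2']})
def config_changed():
    with open(conf, 'w') as f:
        f.write('new setting\n')

config_changed()
```

Writing the file inside the hook changes its hash, so the mapped service is restarted exactly once.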
1671 | + |
1672 | +def lsb_release(): |
1673 | + '''Return /etc/lsb-release in a dict''' |
1674 | + d = {} |
1675 | + with open('/etc/lsb-release', 'r') as lsb: |
1676 | + for l in lsb: |
1677 | + k, v = l.split('=') |
1678 | + d[k.strip()] = v.strip() |
1679 | + return d |
1680 | + |
1681 | + |
1682 | +def pwgen(length=None): |
1683	 | +    '''Generate a random password.'''
1684 | + if length is None: |
1685 | + length = random.choice(range(35, 45)) |
1686 | + alphanumeric_chars = [ |
1687 | + l for l in (string.letters + string.digits) |
1688 | + if l not in 'l0QD1vAEIOUaeiou'] |
1689 | + random_chars = [ |
1690 | + random.choice(alphanumeric_chars) for _ in range(length)] |
1691 | + return(''.join(random_chars)) |
1692 | |
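`pwgen` draws from letters and digits minus look-alike characters ('l', '0', '1', 'Q', 'D') and vowels, and picks a length of 35-44 when none is given. A Python 3 sketch (`string.letters` became `string.ascii_letters` in Python 3):

```python
import random
import string

def pwgen(length=None):
    if length is None:
        length = random.choice(range(35, 45))  # 35..44 inclusive
    # drop visually ambiguous characters and vowels, as the helper does
    alphanumeric_chars = [
        c for c in (string.ascii_letters + string.digits)
        if c not in 'l0QD1vAEIOUaeiou']
    return ''.join(random.choice(alphanumeric_chars) for _ in range(length))

pw = pwgen(20)
```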
1693 | === added directory 'hooks/charmhelpers/fetch' |
1694 | === added file 'hooks/charmhelpers/fetch/__init__.py' |
1695 | --- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000 |
1696 | +++ hooks/charmhelpers/fetch/__init__.py 2013-10-10 22:50:52 +0000 |
1697 | @@ -0,0 +1,209 @@ |
1698 | +import importlib |
1699 | +from yaml import safe_load |
1700 | +from charmhelpers.core.host import ( |
1701 | + lsb_release |
1702 | +) |
1703 | +from urlparse import ( |
1704 | + urlparse, |
1705 | + urlunparse, |
1706 | +) |
1707 | +import subprocess |
1708 | +from charmhelpers.core.hookenv import ( |
1709 | + config, |
1710 | + log, |
1711 | +) |
1712 | +import apt_pkg |
1713 | + |
1714 | +CLOUD_ARCHIVE = """# Ubuntu Cloud Archive |
1715 | +deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
1716 | +""" |
1717 | +PROPOSED_POCKET = """# Proposed |
1718 | +deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted |
1719 | +""" |
1720 | + |
1721 | + |
1722 | +def filter_installed_packages(packages): |
1723 | + """Returns a list of packages that require installation""" |
1724 | + apt_pkg.init() |
1725 | + cache = apt_pkg.Cache() |
1726 | + _pkgs = [] |
1727 | + for package in packages: |
1728 | + try: |
1729 | + p = cache[package] |
1730 | + p.current_ver or _pkgs.append(package) |
1731 | + except KeyError: |
1732 | + log('Package {} has no installation candidate.'.format(package), |
1733 | + level='WARNING') |
1734 | + _pkgs.append(package) |
1735 | + return _pkgs |
1736 | + |
1737 | + |
1738 | +def apt_install(packages, options=None, fatal=False): |
1739 | + """Install one or more packages""" |
1740 | + options = options or [] |
1741 | + cmd = ['apt-get', '-y'] |
1742 | + cmd.extend(options) |
1743 | + cmd.append('install') |
1744 | + if isinstance(packages, basestring): |
1745 | + cmd.append(packages) |
1746 | + else: |
1747 | + cmd.extend(packages) |
1748 | + log("Installing {} with options: {}".format(packages, |
1749 | + options)) |
1750 | + if fatal: |
1751 | + subprocess.check_call(cmd) |
1752 | + else: |
1753 | + subprocess.call(cmd) |
1754 | + |
1755 | + |
1756 | +def apt_update(fatal=False): |
1757 | + """Update local apt cache""" |
1758 | + cmd = ['apt-get', 'update'] |
1759 | + if fatal: |
1760 | + subprocess.check_call(cmd) |
1761 | + else: |
1762 | + subprocess.call(cmd) |
1763 | + |
1764 | + |
1765 | +def apt_purge(packages, fatal=False): |
1766 | + """Purge one or more packages""" |
1767 | + cmd = ['apt-get', '-y', 'purge'] |
1768 | + if isinstance(packages, basestring): |
1769 | + cmd.append(packages) |
1770 | + else: |
1771 | + cmd.extend(packages) |
1772 | + log("Purging {}".format(packages)) |
1773 | + if fatal: |
1774 | + subprocess.check_call(cmd) |
1775 | + else: |
1776 | + subprocess.call(cmd) |
1777 | + |
1778 | + |
1779 | +def add_source(source, key=None): |
1780 | + if ((source.startswith('ppa:') or |
1781 | + source.startswith('http:'))): |
1782 | + subprocess.check_call(['add-apt-repository', '--yes', source]) |
1783 | + elif source.startswith('cloud:'): |
1784 | + apt_install(filter_installed_packages(['ubuntu-cloud-keyring']), |
1785 | + fatal=True) |
1786 | + pocket = source.split(':')[-1] |
1787 | + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: |
1788 | + apt.write(CLOUD_ARCHIVE.format(pocket)) |
1789 | + elif source == 'proposed': |
1790 | + release = lsb_release()['DISTRIB_CODENAME'] |
1791 | + with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: |
1792 | + apt.write(PROPOSED_POCKET.format(release)) |
1793 | + if key: |
1794 | + subprocess.check_call(['apt-key', 'import', key]) |
1795 | + |
1796 | + |
1797 | +class SourceConfigError(Exception): |
1798 | + pass |
1799 | + |
1800 | + |
1801 | +def configure_sources(update=False, |
1802 | + sources_var='install_sources', |
1803 | + keys_var='install_keys'): |
1804 | + """ |
1805 | + Configure multiple sources from charm configuration |
1806 | + |
1807 | + Example config: |
1808 | + install_sources: |
1809 | + - "ppa:foo" |
1810 | + - "http://example.com/repo precise main" |
1811 | + install_keys: |
1812 | + - null |
1813 | + - "a1b2c3d4" |
1814 | + |
1815 | + Note that 'null' (a.k.a. None) should not be quoted. |
1816 | + """ |
1817 | + sources = safe_load(config(sources_var)) |
1818 | + keys = safe_load(config(keys_var)) |
1819 | + if isinstance(sources, basestring) and isinstance(keys, basestring): |
1820 | + add_source(sources, keys) |
1821 | + else: |
1822 | + if not len(sources) == len(keys): |
1823 | + msg = 'Install sources and keys lists are different lengths' |
1824 | + raise SourceConfigError(msg) |
1825 | + for src_num in range(len(sources)): |
1826 | + add_source(sources[src_num], keys[src_num]) |
1827 | + if update: |
1828 | + apt_update(fatal=True) |
1829 | + |
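`configure_sources` pairs the i-th source with the i-th key and rejects mismatched list lengths. The pairing logic can be expressed with `zip` after the explicit length check; the sketch below skips the YAML decoding and config lookup, and `add_source` is a recording stand-in rather than the real helper:

```python
added = []

def add_source(source, key=None):   # hypothetical stand-in recording calls
    added.append((source, key))

class SourceConfigError(Exception):
    pass

def configure_sources(sources, keys):
    # strings mean a single source/key pair; lists are paired index-wise
    if isinstance(sources, str) and isinstance(keys, str):
        add_source(sources, keys)
        return
    if len(sources) != len(keys):
        raise SourceConfigError(
            'Install sources and keys lists are different lengths')
    for src, key in zip(sources, keys):
        add_source(src, key)

configure_sources(['ppa:foo', 'http://example.com/repo precise main'],
                  [None, 'a1b2c3d4'])
```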
1830	 | +# The order of this list is very important. Handlers should be listed in order
1831 | +# least- to most-specific URL matching. |
1832 | +FETCH_HANDLERS = ( |
1833 | + 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', |
1834 | + 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', |
1835 | +) |
1836 | + |
1837 | + |
1838 | +class UnhandledSource(Exception): |
1839 | + pass |
1840 | + |
1841 | + |
1842 | +def install_remote(source): |
1843 | + """ |
1844 | + Install a file tree from a remote source |
1845 | + |
1846 | + The specified source should be a url of the form: |
1847 | + scheme://[host]/path[#[option=value][&...]] |
1848 | + |
1849	 | +    Schemes supported are based on this module's submodules
1850 | + Options supported are submodule-specific""" |
1851 | + # We ONLY check for True here because can_handle may return a string |
1852 | + # explaining why it can't handle a given source. |
1853 | + handlers = [h for h in plugins() if h.can_handle(source) is True] |
1854 | + installed_to = None |
1855 | + for handler in handlers: |
1856 | + try: |
1857 | + installed_to = handler.install(source) |
1858 | + except UnhandledSource: |
1859 | + pass |
1860 | + if not installed_to: |
1861 | + raise UnhandledSource("No handler found for source {}".format(source)) |
1862 | + return installed_to |
1863 | + |
1864 | + |
1865 | +def install_from_config(config_var_name): |
1866 | + charm_config = config() |
1867 | + source = charm_config[config_var_name] |
1868 | + return install_remote(source) |
1869 | + |
1870 | + |
1871 | +class BaseFetchHandler(object): |
1872 | + """Base class for FetchHandler implementations in fetch plugins""" |
1873 | + def can_handle(self, source): |
1874 | + """Returns True if the source can be handled. Otherwise returns |
1875 | + a string explaining why it cannot""" |
1876 | + return "Wrong source type" |
1877 | + |
1878 | + def install(self, source): |
1879 | + """Try to download and unpack the source. Return the path to the |
1880 | + unpacked files or raise UnhandledSource.""" |
1881 | + raise UnhandledSource("Wrong source type {}".format(source)) |
1882 | + |
1883 | + def parse_url(self, url): |
1884 | + return urlparse(url) |
1885 | + |
1886 | + def base_url(self, url): |
1887 | + """Return url without querystring or fragment""" |
1888 | + parts = list(self.parse_url(url)) |
1889 | + parts[4:] = ['' for i in parts[4:]] |
1890 | + return urlunparse(parts) |
1891 | + |
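`base_url` blanks the parse components from index 4 onward (query and fragment) so handlers can match on the path alone. The same logic in Python 3, where `urlparse`/`urlunparse` moved to `urllib.parse`:

```python
from urllib.parse import urlparse, urlunparse

def base_url(url):
    parts = list(urlparse(url))
    parts[4:] = ['' for _ in parts[4:]]  # drop query and fragment
    return urlunparse(parts)

clean = base_url('http://example.com/files/app.tar.gz?token=abc#frag')
```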
1892 | + |
1893 | +def plugins(fetch_handlers=None): |
1894 | + if not fetch_handlers: |
1895 | + fetch_handlers = FETCH_HANDLERS |
1896 | + plugin_list = [] |
1897 | + for handler_name in fetch_handlers: |
1898 | + package, classname = handler_name.rsplit('.', 1) |
1899 | + try: |
1900 | + handler_class = getattr(importlib.import_module(package), classname) |
1901 | + plugin_list.append(handler_class()) |
1902 | + except (ImportError, AttributeError): |
1903	 | +            # Skip missing plugins so that they can be omitted from
1904 | + # installation if desired |
1905 | + log("FetchHandler {} not found, skipping plugin".format(handler_name)) |
1906 | + return plugin_list |
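`plugins()` resolves each dotted handler name into a class via `rsplit`, `importlib.import_module`, and `getattr`, instantiates it, and silently skips names that fail to import. A sketch of that mechanism using stdlib classes instead of the fetch handlers:

```python
import importlib

def load_plugins(handler_names):
    plugin_list = []
    for handler_name in handler_names:
        package, classname = handler_name.rsplit('.', 1)
        try:
            handler_class = getattr(importlib.import_module(package),
                                    classname)
            plugin_list.append(handler_class())
        except (ImportError, AttributeError):
            # skip missing plugins instead of failing hard
            pass
    return plugin_list

# stdlib stand-ins: one real class, one missing module, one missing attribute
loaded = load_plugins(['collections.OrderedDict',
                       'no_such_module.Handler',
                       'collections.NoSuchClass'])
```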
1907 | |
1908 | === added file 'hooks/charmhelpers/fetch/archiveurl.py' |
1909 | --- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000 |
1910 | +++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-10 22:50:52 +0000 |
1911 | @@ -0,0 +1,48 @@ |
1912 | +import os |
1913 | +import urllib2 |
1914 | +from charmhelpers.fetch import ( |
1915 | + BaseFetchHandler, |
1916 | + UnhandledSource |
1917 | +) |
1918 | +from charmhelpers.payload.archive import ( |
1919 | + get_archive_handler, |
1920 | + extract, |
1921 | +) |
1922 | +from charmhelpers.core.host import mkdir |
1923 | + |
1924 | + |
1925 | +class ArchiveUrlFetchHandler(BaseFetchHandler): |
1926 | + """Handler for archives via generic URLs""" |
1927 | + def can_handle(self, source): |
1928 | + url_parts = self.parse_url(source) |
1929 | + if url_parts.scheme not in ('http', 'https', 'ftp', 'file'): |
1930 | + return "Wrong source type" |
1931 | + if get_archive_handler(self.base_url(source)): |
1932 | + return True |
1933 | + return False |
1934 | + |
1935 | + def download(self, source, dest): |
1936	 | +        # propagate all exceptions
1937 | + # URLError, OSError, etc |
1938 | + response = urllib2.urlopen(source) |
1939 | + try: |
1940 | + with open(dest, 'w') as dest_file: |
1941 | + dest_file.write(response.read()) |
1942 | + except Exception as e: |
1943 | + if os.path.isfile(dest): |
1944 | + os.unlink(dest) |
1945 | + raise e |
1946 | + |
1947 | + def install(self, source): |
1948 | + url_parts = self.parse_url(source) |
1949 | + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') |
1950 | + if not os.path.exists(dest_dir): |
1951 | + mkdir(dest_dir, perms=0755) |
1952 | + dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) |
1953 | + try: |
1954 | + self.download(source, dld_file) |
1955 | + except urllib2.URLError as e: |
1956 | + raise UnhandledSource(e.reason) |
1957 | + except OSError as e: |
1958 | + raise UnhandledSource(e.strerror) |
1959 | + return extract(dld_file) |
1960 | |
1961 | === added file 'hooks/charmhelpers/fetch/bzrurl.py' |
1962 | --- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000 |
1963 | +++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-10 22:50:52 +0000 |
1964 | @@ -0,0 +1,44 @@ |
1965 | +import os |
1966 | +from bzrlib.branch import Branch |
1967 | +from charmhelpers.fetch import ( |
1968 | + BaseFetchHandler, |
1969 | + UnhandledSource |
1970 | +) |
1971 | +from charmhelpers.core.host import mkdir |
1972 | + |
1973 | + |
1974 | +class BzrUrlFetchHandler(BaseFetchHandler): |
1975 | + """Handler for bazaar branches via generic and lp URLs""" |
1976 | + def can_handle(self, source): |
1977 | + url_parts = self.parse_url(source) |
1978 | + if url_parts.scheme not in ('bzr+ssh', 'lp'): |
1979 | + return False |
1980 | + else: |
1981 | + return True |
1982 | + |
1983 | + def branch(self, source, dest): |
1984 | + url_parts = self.parse_url(source) |
1985 | + # If we use lp:branchname scheme we need to load plugins |
1986 | + if not self.can_handle(source): |
1987 | + raise UnhandledSource("Cannot handle {}".format(source)) |
1988 | + if url_parts.scheme == "lp": |
1989 | + from bzrlib.plugin import load_plugins |
1990 | + load_plugins() |
1991 | + try: |
1992 | + remote_branch = Branch.open(source) |
1993 | + remote_branch.bzrdir.sprout(dest).open_branch() |
1994 | + except Exception as e: |
1995 | + raise e |
1996 | + |
1997 | + def install(self, source): |
1998 | + url_parts = self.parse_url(source) |
1999 | + branch_name = url_parts.path.strip("/").split("/")[-1] |
2000 | + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name) |
2001 | + if not os.path.exists(dest_dir): |
2002 | + mkdir(dest_dir, perms=0755) |
2003 | + try: |
2004 | + self.branch(source, dest_dir) |
2005 | + except OSError as e: |
2006 | + raise UnhandledSource(e.strerror) |
2007 | + return dest_dir |
2008 | + |
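`BzrUrlFetchHandler.install` derives the destination directory from the last non-empty path segment of the branch URL. That derivation is pure string handling and easy to check in isolation (Python 3's `urllib.parse` stands in for `urlparse`; the URLs are illustrative):

```python
from urllib.parse import urlparse

def branch_name_from(source):
    # last non-empty path segment, e.g. .../project/trunk -> trunk
    return urlparse(source).path.strip('/').split('/')[-1]

name = branch_name_from('bzr+ssh://bazaar.launchpad.net/~owner/project/trunk/')
lp_name = branch_name_from('lp:~owner/project/trunk')
```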
2009 | |
2010 | === modified file 'hooks/hooks.py' |
2011 | --- hooks/hooks.py 2013-04-30 08:30:51 +0000 |
2012 | +++ hooks/hooks.py 2013-10-10 22:50:52 +0000 |
2013 | @@ -1,6 +1,5 @@ |
2014 | #!/usr/bin/env python |
2015 | |
2016 | -import json |
2017 | import os |
2018 | import re |
2019 | import socket |
2020 | @@ -12,7 +11,15 @@ |
2021 | import pwd |
2022 | import shutil |
2023 | import os.path |
2024 | -import glob |
2025 | + |
2026 | +from charmhelpers.core.hookenv import ( |
2027 | + open_port, |
2028 | + close_port, |
2029 | + log, |
2030 | + config as config_get, |
2031 | + relations_of_type, |
2032 | +) |
2033 | +from charmhelpers.contrib.charmsupport import nrpe |
2034 | |
2035 | ############################################################################### |
2036 | # Global variables |
2037 | @@ -21,7 +28,7 @@ |
2038 | default_apache2_config = "%s/apache2.conf" % default_apache2_config_dir |
2039 | default_apache2_service_config_dir = "/var/run/apache2" |
2040 | test_apache2_vhost = "/etc/apache2/sites-available/testvhost" |
2041 | -hook_name = os.path.basename(sys.argv[0]) |
2042 | +service_affecting_packages = ['apache2'] |
2043 | |
2044 | juju_warning_header = """# |
2045 | # " " |
2046 | @@ -39,137 +46,24 @@ |
2047 | # Supporting functions |
2048 | ############################################################################### |
2049 | |
2050 | - |
2051 | -#------------------------------------------------------------------------------ |
2052 | -# juju_log: Convenience wrapper around juju-log |
2053 | -#------------------------------------------------------------------------------ |
2054 | -def juju_log(msg="MARK"): |
2055 | - subprocess.call(['juju-log', str(msg)]) |
2056 | - |
2057 | - |
2058 | -#------------------------------------------------------------------------------ |
2059 | -# open_port: Convenience function to open a port in juju to |
2060 | -# expose a service |
2061 | -#------------------------------------------------------------------------------ |
2062 | -def open_port(port=None, protocol="TCP"): |
2063 | - if port is None: |
2064 | - return(None) |
2065 | - return(subprocess.call(['open-port', "%d/%s" |
2066 | - % (int(port), protocol)])) |
2067 | - |
2068 | - |
2069 | -#------------------------------------------------------------------------------ |
2070 | -# close_port: Convenience function to close a port in juju to |
2071 | -# unexpose a service |
2072 | -#------------------------------------------------------------------------------ |
2073 | -def close_port(port=None, protocol="TCP"): |
2074 | - if port is None: |
2075 | - return(None) |
2076 | - return(subprocess.call(['close-port', "%d/%s" |
2077 | - % (int(port), protocol)])) |
2078 | - |
2079 | - |
2080 | -#------------------------------------------------------------------------------ |
2081 | -# config_get: Returns a dictionary containing all of the config information |
2082 | -# Optional parameter: scope |
2083 | -# scope: limits the scope of the returned configuration to the |
2084 | -# desired config item. |
2085 | -#------------------------------------------------------------------------------ |
2086 | -def config_get(scope=None): |
2087 | - try: |
2088 | - config_cmd_line = ['config-get'] |
2089 | - if scope is not None: |
2090 | - config_cmd_line.append(scope) |
2091 | - config_cmd_line.append('--format=json') |
2092 | - config_data = json.loads(subprocess.check_output(config_cmd_line)) |
2093 | - if not config_data["servername"]: |
2094 | - config_data["servername"] = run(["unit-get", "public-address"]).rstrip() |
2095 | - |
2096 | - except Exception, e: |
2097 | - juju_log(e) |
2098 | - config_data = None |
2099 | - finally: |
2100 | - return(config_data) |
2101 | - |
2102 | - |
2103 | -#------------------------------------------------------------------------------ |
2104 | -# relation_get: Returns a dictionary containing the relation information |
2105 | -# Optional parameters: scope, relation_id |
2106 | -# scope: limits the scope of the returned data to the |
2107 | -# desired item. |
2108 | -# unit_name: limits the data ( and optionally the scope ) |
2109 | -# to the specified unit |
2110 | -# relation_id: specify relation id for out of context usage. |
2111 | -#------------------------------------------------------------------------------ |
2112 | -def relation_get(scope=None, unit_name=None, relation_id=None): |
2113 | - try: |
2114 | - relation_cmd_line = ['relation-get', '--format=json'] |
2115 | - if relation_id is not None: |
2116 | - relation_cmd_line.extend(('-r', relation_id)) |
2117 | - if scope is not None: |
2118 | - relation_cmd_line.append(scope) |
2119 | - if unit_name is not None: |
2120 | - relation_cmd_line.append('-') |
2121 | - relation_cmd_line.append(unit_name) |
2122 | - juju_log('Calling: %s' % relation_cmd_line) |
2123 | - relation_data = json.loads(subprocess.check_output(relation_cmd_line)) |
2124 | - except Exception: |
2125 | - relation_data = None |
2126 | - finally: |
2127 | - return(relation_data) |
2128 | - |
2129 | - |
2130 | -def relation_ids_get(relation_name=None): |
2131 | - try: |
2132 | - relation_cmd_line = ['relation-ids', '--format=json'] |
2133 | - if relation_name is not None: |
2134 | - relation_cmd_line.append(relation_name) |
2135 | - juju_log('Calling: %s' % relation_cmd_line) |
2136 | - relation_ids = json.loads(subprocess.check_output(relation_cmd_line)) |
2137 | - except Exception: |
2138 | - relation_ids = None |
2139 | - finally: |
2140 | - return(relation_ids) |
2141 | - |
2142 | - |
2143 | -def relation_list_get(relation_id=None): |
2144 | - try: |
2145 | - relation_cmd_line = ['relation-list', '--format=json'] |
2146 | - if relation_id is not None: |
2147 | - relation_cmd_line.extend(('-r', relation_id)) |
2148 | - juju_log('Calling: %s' % relation_cmd_line) |
2149 | - relations = json.loads(subprocess.check_output(relation_cmd_line)) |
2150 | - except Exception: |
2151 | - relations = None |
2152 | - finally: |
2153 | - return(relations) |
2154 | - |
2155 | - |
2156 | -def all_relation_data_get(relation_name=None): |
2157 | - relation_ids = relation_ids_get(relation_name) |
2158 | - if relation_ids is None: |
2159 | - return() |
2160 | - try: |
2161 | - all_relation_data = {} |
2162 | - for rid in relation_ids: |
2163 | - units = relation_list_get(relation_id=rid) |
2164 | - for unit in units: |
2165 | - all_relation_data[unit.replace('/', '-')] = relation_get(relation_id=rid, unit_name=unit) |
2166 | - except Exception: |
2167 | - all_relation_data = None |
2168 | - finally: |
2169 | - return(all_relation_data) |
2170 | - |
2171 | - |
2172 | #------------------------------------------------------------------------------ |
2173 | # apt_get_install( package ): Installs a package |
2174 | #------------------------------------------------------------------------------ |
2175 | def apt_get_install(packages=None): |
2176 | if packages is None: |
2177 | - return(False) |
2178 | + return False |
2179 | cmd_line = ['apt-get', '-y', 'install', '-qq'] |
2180 | cmd_line.append(packages) |
2181 | - return(subprocess.call(cmd_line)) |
2182 | + return subprocess.call(cmd_line) |
2183 | + |
2184 | + |
2185 | +def ensure_package_status(packages, status): |
2186 | + if status in ['install', 'hold']: |
2187 | + selections = ''.join(['{} {}\n'.format(package, status) |
2188 | + for package in packages]) |
2189 | + dpkg = subprocess.Popen(['dpkg', '--set-selections'], |
2190 | + stdin=subprocess.PIPE) |
2191 | + dpkg.communicate(input=selections) |
2192 | |
2193 | |
2194 | #------------------------------------------------------------------------------ |
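`ensure_package_status` feeds `dpkg --set-selections` one `"package status"` line per package over stdin. The string construction can be verified without touching dpkg; `build_selections` is an illustrative extraction, not a function in the charm:

```python
def build_selections(packages, status):
    # one "name status" line per package, as dpkg --set-selections expects
    if status not in ('install', 'hold'):
        return None
    return ''.join('{} {}\n'.format(package, status) for package in packages)

selections = build_selections(['apache2'], 'hold')
```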
2195 | @@ -177,10 +71,10 @@ |
2196 | #------------------------------------------------------------------------------ |
2197 | def apt_get_purge(packages=None): |
2198 | if packages is None: |
2199 | - return(False) |
2200 | + return False |
2201 | cmd_line = ['apt-get', '-y', 'purge', '-qq'] |
2202 | cmd_line.append(packages) |
2203 | - return(subprocess.call(cmd_line)) |
2204 | + return subprocess.call(cmd_line) |
2205 | |
2206 | |
2207 | #------------------------------------------------------------------------------ |
2208 | @@ -189,21 +83,13 @@ |
2209 | #------------------------------------------------------------------------------ |
2210 | def service_apache2(action=None): |
2211 | if action is None: |
2212 | - return(None) |
2213 | + return |
2214 | elif action == "check": |
2215 | - retVal = subprocess.call(['/usr/sbin/apache2ctl', 'configtest']) |
2216 | - if retVal == 1: |
2217 | - return(False) |
2218 | - elif retVal == 0: |
2219 | - return(True) |
2220 | - else: |
2221 | - return(False) |
2222 | + args = ['/usr/sbin/apache2ctl', 'configtest'] |
2223 | else: |
2224 | - retVal = subprocess.call(['service', 'apache2', action]) |
2225 | - if retVal == 0: |
2226 | - return(True) |
2227 | - else: |
2228 | - return(False) |
2229 | + args = ['service', 'apache2', action] |
2230 | + ret_val = subprocess.call(args) |
2231 | + return ret_val == 0 |
2232 | |
2233 | |
2234 | def run(command, *args, **kwargs): |
2235 | @@ -211,54 +97,63 @@ |
2236 | output = subprocess.check_output(command, *args, **kwargs) |
2237 | return output |
2238 | except Exception, e: |
2239 | - print str(e) |
2240 | + print e |
2241 | raise |
2242 | |
2243 | |
2244 | def enable_module(module=None): |
2245 | if module is None: |
2246 | - return(True) |
2247 | + return True |
2248 | if os.path.exists("/etc/apache2/mods-enabled/%s.load" % (module)): |
2249 | - juju_log("Module already loaded: %s" % module) |
2250 | - return(True) |
2251 | + log("Module already loaded: %s" % module) |
2252 | + return True |
2253 | if not os.path.exists("/etc/apache2/mods-available/%s.load" % (module)): |
2254 | - retVal = apt_get_install("libapache2-mod-%s" % (module)) |
2255 | - if retVal != 0: |
2256 | - juju_log("Installing module %s failed" % module) |
2257 | - return(False) |
2258 | - retVal = subprocess.call(['/usr/sbin/a2enmod', module]) |
2259 | - if retVal != 0: |
2260 | - return(False) |
2261 | + return_value = apt_get_install("libapache2-mod-%s" % (module)) |
2262 | + if return_value != 0: |
2263 | + log("Installing module %s failed" % (module)) |
2264 | + return False |
2265 | + return_value = subprocess.call(['/usr/sbin/a2enmod', module]) |
2266 | + if return_value != 0: |
2267 | + return False |
2268 | if service_apache2("check"): |
2269 | service_apache2("reload") |
2270 | - return(True) |
2271 | + return True |
2272 | |
2273 | |
2274 | def disable_module(module=None): |
2275 | if module is None: |
2276 | - return(True) |
2277 | + return True |
2278 | if not os.path.exists("/etc/apache2/mods-enabled/%s.load" % (module)): |
2279 | - juju_log("Module already disabled: %s" % module) |
2280 | - return(True) |
2281 | - retVal = subprocess.call(['/usr/sbin/a2dismod', module]) |
2282 | - if retVal != 0: |
2283 | - return(False) |
2284 | + log("Module already disabled: %s" % module) |
2285 | + return True |
2286 | + return_value = subprocess.call(['/usr/sbin/a2dismod', module]) |
2287 | + if return_value != 0: |
2288 | + return False |
2289 | if service_apache2("check"): |
2290 | service_apache2("reload") |
2291 | - return(True) |
2292 | + return True |
2293 | |
2294 | |
2295 | ############################################################################### |
2296 | # Hook functions |
2297 | ############################################################################### |
2298 | def install_hook(): |
2299 | - for f in glob.glob('exec.d/*/charm-pre-install'): |
2300 | - if os.path.isfile(f) and os.access(f, os.X_OK): |
2301 | - subprocess.check_call(['sh', '-c', f]) |
2302 | if not os.path.exists(default_apache2_service_config_dir): |
2303 | os.mkdir(default_apache2_service_config_dir, 0600) |
2304 | apt_get_install("python-jinja2") |
2305 | - return (apt_get_install("apache2")) |
2306 | + install_status = apt_get_install("apache2") |
2307 | + if install_status == 0: |
2308 | + ensure_package_status(service_affecting_packages, config_get('package_status')) |
2309 | + ensure_extra_packages() |
2310 | + return install_status |
2311 | + |
2312 | + |
2313 | +def ensure_extra_packages(): |
2314 | + extra = str(config_get('extra_packages')) |
2315 | + if extra: |
2316 | + install_status = apt_get_install(extra) |
2317 | + if install_status == 0: |
2318 | + ensure_package_status(extra, config_get('package_status')) |
2319 | |
2320 | |
2321 | def dump_data(data2dump, log_prefix): |
2322 | @@ -271,59 +166,95 @@ |
2323 | |
2324 | |
2325 | def get_reverseproxy_data(relation='reverseproxy'): |
2326 | - relation_data = all_relation_data_get(relation_name=relation) |
2327 | + relation_data = relations_of_type(relation) |
2328 | reverseproxy_data = {} |
2329 | if relation_data is None or len(relation_data) == 0: |
2330 | return reverseproxy_data |
2331 | - for unit_name in relation_data.keys(): |
2332 | - if 'port' not in relation_data[unit_name]: |
2333 | + for unit_data in relation_data: |
2334 | + unit_name = unit_data["__unit__"] |
2335 | + if 'port' not in unit_data: |
2336 | return reverseproxy_data |
2337 | - |
2338 | # unit_name: <service-name>-<unit_number> |
2339 | # jinja2 templates require python-type variables, remove all characters |
2340 | # that do not comply |
2341 | - unit_type = re.sub(r'(.*)-[0-9]*', r'\1', unit_name) |
2342 | + unit_type = re.sub(r'(.*)/[0-9]*', r'\1', unit_name) |
2343 | unit_type = re.sub('[^a-zA-Z0-9_]*', '', unit_type) |
2344 | - juju_log('unit_type: %s' % unit_type) |
2345 | + log('unit_type: %s' % unit_type) |
2346 | |
2347 | - host = relation_data[unit_name]['private-address'] |
2348 | - for config_setting in relation_data[unit_name].keys(): |
2349 | - config_key = '%s_%s' % (unit_type, config_setting) |
2350 | - reverseproxy_data[config_key] = relation_data[unit_name][config_setting] |
2351 | - reverseproxy_data[unit_type] = '%s:%s' % (host, relation_data[unit_name]['port']) |
2352 | - if 'all_services' in relation_data[unit_name]: |
2353 | - service_data = yaml.load(relation_data[unit_name]['all_services']) |
2354 | + host = unit_data['private-address'] |
2355 | + if unit_type in reverseproxy_data: |
2356 | + continue |
2357 | + for config_setting in unit_data.keys(): |
2358 | + if config_setting in ("__unit__", "__relid__"): |
2359 | + continue |
2360 | + config_key = '%s_%s' % (unit_type, |
2361 | + config_setting.replace("-", "_")) |
2362 | + config_key = re.sub('[^a-zA-Z0-9_]*', '', config_key) |
2363 | + reverseproxy_data[config_key] = unit_data[ |
2364 | + config_setting] |
2365 | + reverseproxy_data[unit_type] = '%s:%s' % ( |
2366 | + host, unit_data['port']) |
2367 | + if 'all_services' in unit_data: |
2368 | + service_data = yaml.safe_load(unit_data['all_services']) |
2369 | for service_item in service_data: |
2370 | service_name = service_item['service_name'] |
2371 | service_port = service_item['service_port'] |
2372 | service_key = '%s_%s' % (unit_type, service_name) |
2373 | service_key = re.sub('[^a-zA-Z0-9_]*', '', service_key) |
2374 | reverseproxy_data[service_key] = '%s:%s' % (host, service_port) |
2375 | - relation_data.update(reverseproxy_data) |
2376 | - return relation_data |
2377 | + return reverseproxy_data |
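The rewritten hook derives a template-safe variable name from the unit name (`haproxy/0` becomes `haproxy`, since jinja2 variables cannot contain `/` or `-`) and sanitizes relation keys into identifiers. A sketch of just those two transforms (the unit name is illustrative):

```python
import re

def unit_type_for(unit_name):
    # strip the "/<number>" suffix, then any non-identifier characters
    unit_type = re.sub(r'(.*)/[0-9]*', r'\1', unit_name)
    return re.sub('[^a-zA-Z0-9_]*', '', unit_type)

def config_key_for(unit_type, setting):
    key = '%s_%s' % (unit_type, setting.replace('-', '_'))
    return re.sub('[^a-zA-Z0-9_]*', '', key)

ut = unit_type_for('haproxy/0')
key = config_key_for(ut, 'private-address')
```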
2378 | |
2379 | |
2380 | def update_balancers(): |
2381 | - relation_data = all_relation_data_get(relation_name='balancer') |
2382 | - config_data = config_get() |
2383 | + relation_data = relations_of_type('balancer') |
2384 | if relation_data is None or len(relation_data) == 0: |
2385 | - juju_log('No relation data exiting') |
2386 | + log("No relation data, exiting.") |
2387 | return |
2388 | + |
2389 | unit_dict = {} |
2390 | - for unit_name in relation_data.keys(): |
2391 | - if 'port' not in relation_data[unit_name]: |
2392 | - return |
2393 | - unit_type = re.sub(r'(.*)-[0-9]*', r'\1', unit_name) |
2394 | - host = relation_data[unit_name]['private-address'] |
2395 | - port = relation_data[unit_name]['port'] |
2396 | - if unit_type in unit_dict: |
2397 | - current_units = unit_dict[unit_type] |
2398 | - current_units.append('%s:%s' % (host, port)) |
2399 | - unit_dict[unit_type] = current_units |
2400 | + for unit_data in relation_data: |
2401 | + unit_name = unit_data["__unit__"] |
2402 | + if "port" not in unit_data: |
2403 | + log("No port in relation data for '%s', skipping." % unit_name) |
2404 | + continue |
2405 | + port = unit_data["port"] |
2406 | + if "private-address" not in unit_data: |
2407 | + log("No private-address in relation data for '%s', skipping." % |
2408 | + unit_name) |
2409 | + continue |
2410 | + host = unit_data['private-address'] |
2411 | + |
2412 | + if "all_services" in unit_data: |
2413 | + service_data = yaml.safe_load(unit_data[ |
2414 | + "all_services"]) |
2415 | + for service_item in service_data: |
2416 | + service_port = service_item["service_port"] |
2417 | + current_units = unit_dict.setdefault( |
2418 | + service_item["service_name"], []) |
2419 | + current_units.append("%s:%s" % (host, service_port)) |
2420 | else: |
2421 | - unit_dict[unit_type] = ['%s:%s' % (host, port)] |
2422 | + if "sitenames" in unit_data: |
2423 | + unit_types = unit_data["sitenames"].split() |
2424 | + else: |
2425 | + unit_types = (re.sub(r"(.*)/[0-9]*", r"\1", unit_name),) |
2426 | + |
2427 | + for unit_type in unit_types: |
2428 | + current_units = unit_dict.setdefault(unit_type, []) |
2429 | + current_units.append("%s:%s" % (host, port)) |
2430 | + |
2431 | + if not unit_dict: |
2432 | + return |
2433 | + |
2434 | + write_balancer_config(unit_dict) |
2435 | + return unit_dict |
2436 | + |
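`update_balancers` groups backend `host:port` pairs by balancer name with `dict.setdefault`, honoring `all_services` when present and otherwise falling back to the unit name (or `sitenames`). A trimmed sketch of the grouping step, with `all_services` already decoded and invented sample relation data:

```python
import re

def group_backends(relation_data):
    unit_dict = {}
    for unit_data in relation_data:
        host = unit_data['private-address']
        if 'all_services' in unit_data:
            for item in unit_data['all_services']:
                unit_dict.setdefault(item['service_name'], []).append(
                    '%s:%s' % (host, item['service_port']))
        else:
            unit_type = re.sub(r'(.*)/[0-9]*', r'\1', unit_data['__unit__'])
            unit_dict.setdefault(unit_type, []).append(
                '%s:%s' % (host, unit_data['port']))
    return unit_dict

backends = group_backends([
    {'__unit__': 'haproxy/0', 'private-address': '10.0.0.2',
     'all_services': [{'service_name': 'web', 'service_port': 80}]},
    {'__unit__': 'app/1', 'private-address': '10.0.0.3', 'port': 8080},
])
```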
2437 | + |
2438 | +def write_balancer_config(unit_dict): |
2439 | + config_data = config_get() |
2440 | + |
2441 | from jinja2 import Environment, FileSystemLoader |
2442 | - template_env = Environment(loader=FileSystemLoader(os.path.join(os.environ['CHARM_DIR'], 'data'))) |
2443 | + template_env = Environment(loader=FileSystemLoader(os.path.join( |
2444 | + os.environ['CHARM_DIR'], 'data'))) |
2445 | for balancer_name in unit_dict.keys(): |
2446 | balancer_host_file = '%s/%s.balancer' % (default_apache2_config_dir, |
2447 | balancer_name) |
2448 | @@ -334,63 +265,31 @@ |
2449 | } |
2450 | template = template_env.get_template( |
2451 | 'balancer.template').render(templ_vars) |
2452 | - juju_log("Writing file: %s with data: %s" % (balancer_host_file, templ_vars)) |
2453 | + log("Writing file: %s with data: %s" % (balancer_host_file, |
2454 | + templ_vars)) |
2455 | with open(balancer_host_file, 'w') as balancer_config: |
2456 | balancer_config.write(str(template)) |
2457 | |
2458 | |
2459 | def update_nrpe_checks(): |
2460 | - config_data = config_get() |
2461 | - if 'nagios_check_http_params' not in config_data or len(config_data['nagios_check_http_params']) == 0: |
2462 | - juju_log("No nrpe configuration, skipping") |
2463 | - return |
2464 | - try: |
2465 | - nagios_uid = pwd.getpwnam('nagios').pw_uid |
2466 | - nagios_gid = grp.getgrnam('nagios').gr_gid |
2467 | - except: |
2468 | - juju_log("Nagios user not setup, exiting") |
2469 | - return |
2470 | - |
2471 | - unit_name = os.environ['JUJU_UNIT_NAME'].replace('/', '-') |
2472 | - nrpe_check_file = '/etc/nagios/nrpe.d/check_vhost.cfg' |
2473 | - nagios_hostname = "%s-%s" % (config_data['nagios_context'], unit_name) |
2474 | - nagios_logdir = '/var/log/nagios' |
2475 | - nagios_exportdir = '/var/lib/nagios/export' |
2476 | - nrpe_service_file = '/var/lib/nagios/export/service__%s_check_vhost.cfg' %\ |
2477 | - (nagios_hostname) |
2478 | - if not os.path.exists(nagios_logdir): |
2479 | - os.mkdir(nagios_logdir) |
2480 | - os.chown(nagios_logdir, nagios_uid, nagios_gid) |
2481 | - if not os.path.exists(nagios_exportdir): |
2482 | - juju_log('Exiting as %s is not accessible' % nagios_exportdir) |
2483 | - return |
2484 | - for f in os.listdir(nagios_exportdir): |
2485 | - if re.search('.*check_vhost.cfg', f): |
2486 | - os.remove(os.path.join(nagios_exportdir, f)) |
2487 | - from jinja2 import Environment, FileSystemLoader |
2488 | - template_env = Environment(loader=FileSystemLoader(os.path.join(os.environ['CHARM_DIR'], 'data'))) |
2489 | - templ_vars = { |
2490 | - 'nagios_hostname': nagios_hostname, |
2491 | - 'nagios_servicegroup': config_data['nagios_context'], |
2492 | - } |
2493 | - template = \ |
2494 | - template_env.get_template('nrpe_service.template').render(templ_vars) |
2495 | - with open(nrpe_service_file, 'w') as nrpe_service_config: |
2496 | - nrpe_service_config.write(str(template)) |
2497 | - with open(nrpe_check_file, 'w') as nrpe_check_config: |
2498 | - nrpe_check_config.write("# check Apache vhosts\n") |
2499 | - nrpe_check_config.write( |
2500 | - "command[check_vhost]=/usr/lib/nagios/plugins/check_http %s\n" % |
2501 | - (config_data['nagios_check_http_params'])) |
2502 | - if os.path.isfile('/etc/init.d/nagios-nrpe-server'): |
2503 | - subprocess.call(['service', 'nagios-nrpe-server', 'reload']) |
2504 | + nrpe_compat = nrpe.NRPE() |
2505 | + conf = nrpe_compat.config |
2506 | + check_http_params = conf.get('nagios_check_http_params') |
2507 | + if check_http_params: |
2508 | + nrpe_compat.add_check( |
2509 | + shortname='vhost', |
2510 | + description='Check Virtual Host', |
2511 | + check_cmd='check_http %s' % check_http_params |
2512 | + ) |
2513 | + nrpe_compat.write() |
2514 | |
2515 | |
2516 | def create_mpm_workerfile(): |
2517 | config_data = config_get() |
2518 | mpm_workerfile = '/etc/apache2/conf.d/000mpm-worker' |
2519 | from jinja2 import Environment, FileSystemLoader |
2520 | - template_env = Environment(loader=FileSystemLoader(os.path.join(os.environ['CHARM_DIR'], 'data'))) |
2521 | + template_env = Environment(loader=FileSystemLoader(os.path.join( |
2522 | + os.environ['CHARM_DIR'], 'data'))) |
2523 | templ_vars = { |
2524 | 'mpm_type': config_data['mpm_type'], |
2525 | 'mpm_startservers': config_data['mpm_startservers'], |
2526 | @@ -412,7 +311,8 @@ |
2527 | config_data = config_get() |
2528 | securityfile = '/etc/apache2/conf.d/security' |
2529 | from jinja2 import Environment, FileSystemLoader |
2530 | - template_env = Environment(loader=FileSystemLoader(os.path.join(os.environ['CHARM_DIR'], 'data'))) |
2531 | + template_env = Environment(loader=FileSystemLoader(os.path.join( |
2532 | + os.environ['CHARM_DIR'], 'data'))) |
2533 | templ_vars = { |
2534 | 'juju_warning_header': juju_warning_header, |
2535 | 'server_tokens': config_data['server_tokens'], |
2536 | @@ -429,14 +329,16 @@ |
2537 | config_data = config_get() |
2538 | logrotate_file = '/etc/logrotate.d/apache2' |
2539 | from jinja2 import Environment, FileSystemLoader |
2540 | - template_env = Environment(loader=FileSystemLoader(os.path.join(os.environ['CHARM_DIR'], 'data'))) |
2541 | + template_env = Environment(loader=FileSystemLoader(os.path.join( |
2542 | + os.environ['CHARM_DIR'], 'data'))) |
2543 | templ_vars = { |
2544 | 'juju_warning_header': juju_warning_header, |
2545 | 'logrotate_rotate': config_data['logrotate_rotate'], |
2546 | 'logrotate_count': config_data['logrotate_count'], |
2547 | 'logrotate_dateext': config_data['logrotate_dateext'], |
2548 | } |
2549 | - template = template_env.get_template('logrotate.conf.template').render(templ_vars) |
2550 | + template = template_env.get_template('logrotate.conf.template').render( |
2551 | + templ_vars) |
2552 | with open(logrotate_file, 'w') as logrotate_conf: |
2553 | logrotate_conf.write(str(template)) |
2554 | |
2555 | @@ -444,6 +346,10 @@ |
2556 | def config_changed(): |
2557 | relationship_data = {} |
2558 | config_data = config_get() |
2559 | + |
2560 | + ensure_package_status(service_affecting_packages, config_data['package_status']) |
2561 | + ensure_extra_packages() |
2562 | + |
2563 | relationship_data.update(get_reverseproxy_data(relation='reverseproxy')) |
2564 | relationship_data.update(get_reverseproxy_data(relation='website-cache')) |
2565 | update_balancers() |
2566 | @@ -476,7 +382,7 @@ |
2567 | from jinja2 import Template |
2568 | template = Template( |
2569 | str(base64.b64decode(config_data[template_var]))) |
2570 | - juju_log("Writing file: %s with data: %s" % (vhost_file, template_data)) |
2571 | + log("Writing file: %s with data: %s" % (vhost_file, template_data)) |
2572 | with open(vhost_file, 'w') as vhost: |
2573 | vhost.write(str(template.render(template_data))) |
2574 | open_port(ports[proto]) |
2575 | @@ -510,7 +416,7 @@ |
2576 | else: |
2577 | # Certificate provided as base64-encoded string. |
2578 | if config_data['ssl_cert']: |
2579 | - juju_log("Writing cert from config ssl_cert: %s" % cert_file) |
2580 | + log("Writing cert from config ssl_cert: %s" % cert_file) |
2581 | with open(cert_file, 'w') as f: |
2582 | f.write(str(base64.b64decode(config_data['ssl_cert']))) |
2583 | # Use certificate file shipped out with charm. |
2584 | @@ -520,11 +426,11 @@ |
2585 | if os.path.exists(source): |
2586 | shutil.copy(source, cert_file) |
2587 | else: |
2588 | - juju_log("Certificate not found, ignoring: %s" % source) |
2589 | + log("Certificate not found, ignoring: %s" % source) |
2590 | |
2591 | # Private key provided as base64-encoded string. |
2592 | if config_data['ssl_key']: |
2593 | - juju_log("Writing key from config ssl_key: %s" % key_file) |
2594 | + log("Writing key from config ssl_key: %s" % key_file) |
2595 | with open(key_file, 'w') as f: |
2596 | f.write(str(base64.b64decode(config_data['ssl_key']))) |
2597 | # Use private key shipped out with charm. |
2598 | @@ -534,12 +440,13 @@ |
2599 | if os.path.exists(source): |
2600 | shutil.copy(source, key_file) |
2601 | else: |
2602 | - juju_log("Key file not found, ignoring: %s" % source) |
2603 | + log("Key file not found, ignoring: %s" % source) |
2604 | |
2605 | if chain_file is not None: |
2606 | # Chain certificates provided as base64-encoded string. |
2607 | if config_data['ssl_chain']: |
2608 | - juju_log("Writing chain certificates file from config ssl_chain: %s" % chain_file) |
2609 | + log("Writing chain certificates file from" |
2610 | + "config ssl_chain: %s" % chain_file) |
2611 | with open(chain_file, 'w') as f: |
2612 | f.write(str(base64.b64decode(config_data['ssl_chain']))) |
2613 | # Use chain certificates shipped out with charm. |
2614 | @@ -549,7 +456,8 @@ |
2615 | if os.path.exists(source): |
2616 | shutil.copy(source, chain_file) |
2617 | else: |
2618 | - juju_log("Chain certificates not found, ignoring: %s" % source) |
2619 | + log("Chain certificates not found, " |
2620 | + "ignoring: %s" % source) |
2621 | |
2622 | # Tighten permissions on private key file. |
2623 | if os.path.exists(key_file): |
2624 | @@ -557,6 +465,24 @@ |
2625 | os.chown(key_file, pwd.getpwnam('root').pw_uid, |
2626 | grp.getgrnam('ssl-cert').gr_gid) |
2627 | |
2628 | + apache_syslog_conf = "/etc/apache2/conf.d/syslog" |
2629 | + rsyslog_apache_conf = "/etc/rsyslog.d/45-apache2.conf" |
2630 | + if config_data['use_rsyslog']: |
2631 | + shutil.copy2("data/syslog-apache.conf", apache_syslog_conf) |
2632 | + shutil.copy2("data/syslog-rsyslog.conf", rsyslog_apache_conf) |
2633 | + # Fix permissions of access.log and error.log to allow syslog user to |
2634 | + # write to |
2635 | + os.chown("/var/log/apache2/access.log", pwd.getpwnam('syslog').pw_uid, |
2636 | + pwd.getpwnam('syslog').pw_gid) |
2637 | + os.chown("/var/log/apache2/error.log", pwd.getpwnam('syslog').pw_uid, |
2638 | + pwd.getpwnam('syslog').pw_gid) |
2639 | + else: |
2640 | + if os.path.exists(apache_syslog_conf): |
2641 | + os.unlink(apache_syslog_conf) |
2642 | + if os.path.exists(rsyslog_apache_conf): |
2643 | + os.unlink(rsyslog_apache_conf) |
2644 | + run(["/usr/sbin/service", "rsyslog", "restart"]) |
2645 | + |
2646 | # Disable the default website because we don't want people to see the |
2647 | # "It works!" page on production services and remove the |
2648 | # conf.d/other-vhosts-access-log conf. |
2649 | @@ -606,36 +532,47 @@ |
2650 | ############################################################################### |
2651 | # Main section |
2652 | ############################################################################### |
2653 | -if hook_name == "install": |
2654 | - install_hook() |
2655 | -elif hook_name == "config-changed" or hook_name == "upgrade-charm": |
2656 | - config_changed() |
2657 | -elif hook_name == "start": |
2658 | - start_hook() |
2659 | -elif hook_name == "stop": |
2660 | - stop_hook() |
2661 | -elif hook_name == "reverseproxy-relation-broken": |
2662 | - config_changed() |
2663 | -elif hook_name == "reverseproxy-relation-changed": |
2664 | - config_changed() |
2665 | -elif hook_name == "reverseproxy-relation-joined": |
2666 | - config_changed() |
2667 | -elif hook_name == "balancer-relation-broken": |
2668 | - config_changed() |
2669 | -elif hook_name == "balancer-relation-changed": |
2670 | - config_changed() |
2671 | -elif hook_name == "balancer-relation-joined": |
2672 | - config_changed() |
2673 | -elif hook_name == "website-cache-relation-broken": |
2674 | - config_changed() |
2675 | -elif hook_name == "website-cache-relation-changed": |
2676 | - config_changed() |
2677 | -elif hook_name == "website-cache-relation-joined": |
2678 | - config_changed() |
2679 | -elif hook_name == "website-relation-joined": |
2680 | - website_interface("joined") |
2681 | -elif hook_name == "nrpe-external-master-relation-changed": |
2682 | - update_nrpe_checks() |
2683 | -else: |
2684 | - print "Unknown hook" |
2685 | - sys.exit(1) |
2686 | +def main(hook_name): |
2687 | + if hook_name == "install": |
2688 | + install_hook() |
2689 | + elif hook_name == "config-changed" or hook_name == "upgrade-charm": |
2690 | + config_changed() |
2691 | + elif hook_name == "start": |
2692 | + start_hook() |
2693 | + elif hook_name == "stop": |
2694 | + stop_hook() |
2695 | + elif hook_name == "reverseproxy-relation-broken": |
2696 | + config_changed() |
2697 | + elif hook_name == "reverseproxy-relation-changed": |
2698 | + config_changed() |
2699 | + elif hook_name == "reverseproxy-relation-joined": |
2700 | + config_changed() |
2701 | + elif hook_name == "balancer-relation-broken": |
2702 | + config_changed() |
2703 | + elif hook_name == "balancer-relation-changed": |
2704 | + config_changed() |
2705 | + elif hook_name == "balancer-relation-joined": |
2706 | + config_changed() |
2707 | + elif hook_name == "website-cache-relation-broken": |
2708 | + config_changed() |
2709 | + elif hook_name == "website-cache-relation-changed": |
2710 | + config_changed() |
2711 | + elif hook_name == "website-cache-relation-joined": |
2712 | + config_changed() |
2713 | + elif hook_name == "website-relation-joined": |
2714 | + website_interface("joined") |
2715 | + elif hook_name in ("nrpe-external-master-relation-changed", |
2716 | + "local-monitors-relation-changed"): |
2717 | + update_nrpe_checks() |
2718 | + else: |
2719 | + print "Unknown hook" |
2720 | + sys.exit(1) |
2721 | + |
2722 | +if __name__ == "__main__": |
2723 | + hook_name = os.path.basename(sys.argv[0]) |
2724 | + # Also support being invoked directly with hook as argument name. |
2725 | + if hook_name == "hooks.py": |
2726 | + if len(sys.argv) < 2: |
2727 | + sys.exit("Missing required hook name argument.") |
2728 | + hook_name = sys.argv[1] |
2729 | + main(hook_name) |
2730 | |
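The refactor above wraps the hook selection in a `main(hook_name)` function but keeps the long if/elif chain. An equivalent, more compact pattern is a dict-based dispatch table; the sketch below is illustrative only (the handler bodies are placeholders, not the charm's real hooks):

```python
import sys

# Placeholder handlers standing in for the charm's real hook functions.
def install_hook():
    print("install")

def config_changed():
    print("config-changed")

def start_hook():
    print("start")

# Table-driven equivalent of the if/elif chain in hooks.py.
HOOKS = {
    "install": install_hook,
    "start": start_hook,
}

# Many hook names share one handler; map them in a loop instead of
# repeating elif branches.
for name in ("config-changed", "upgrade-charm",
             "reverseproxy-relation-changed", "balancer-relation-changed",
             "website-cache-relation-changed"):
    HOOKS[name] = config_changed

def main(hook_name):
    handler = HOOKS.get(hook_name)
    if handler is None:
        print("Unknown hook")
        sys.exit(1)
    handler()

main("upgrade-charm")  # dispatches to config_changed
```

The table makes the "many relations funnel into config_changed" structure visible at a glance, and adding a new hook is a one-line change.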
2731 | === modified symlink 'hooks/install' (properties changed: -x to +x) |
2732 | === target was u'hooks.py' |
2733 | --- hooks/install 1970-01-01 00:00:00 +0000 |
2734 | +++ hooks/install 2013-10-10 22:50:52 +0000 |
2735 | @@ -0,0 +1,9 @@ |
2736 | +#!/bin/sh |
2737 | + |
2738 | +set -eu |
2739 | + |
2740 | +juju-log 'Invoking charm-pre-install hooks' |
2741 | +[ -d exec.d ] && ( for f in exec.d/*/charm-pre-install; do [ -x $f ] && /bin/sh -c "$f"; done ) |
2742 | + |
2743 | +juju-log 'Invoking python-based install hook' |
2744 | +python hooks/hooks.py install |
2745 | |
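The new shell `install` hook scans `exec.d/*/charm-pre-install` and runs each executable script before handing off to the Python hook. A Python sketch of that convention, useful for testing the behaviour outside a charm (directory names here are throwaway examples, not part of the charm):

```python
import os
import subprocess
import tempfile

def run_pre_install_hooks(charm_dir):
    """Run every executable exec.d/*/charm-pre-install, in sorted order.

    Mirrors the shell loop in hooks/install; returns the scripts that ran.
    """
    exec_d = os.path.join(charm_dir, "exec.d")
    if not os.path.isdir(exec_d):
        return []
    ran = []
    for entry in sorted(os.listdir(exec_d)):
        script = os.path.join(exec_d, entry, "charm-pre-install")
        if os.path.isfile(script) and os.access(script, os.X_OK):
            subprocess.check_call(["/bin/sh", "-c", script])
            ran.append(script)
    return ran

# Demo with a throwaway charm directory.
charm_dir = tempfile.mkdtemp()
hook_dir = os.path.join(charm_dir, "exec.d", "example")
os.makedirs(hook_dir)
script = os.path.join(hook_dir, "charm-pre-install")
with open(script, "w") as f:
    f.write("#!/bin/sh\necho pre-install ran\n")
os.chmod(script, 0o755)

ran = run_pre_install_hooks(charm_dir)
```

Non-executable or missing scripts are silently skipped, matching the `[ -x $f ] &&` guard in the shell version.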
2746 | === added symlink 'hooks/local-monitors-relation-changed' |
2747 | === target is u'hooks.py' |
2748 | === added directory 'hooks/tests' |
2749 | === added file 'hooks/tests/__init__.py' |
2750 | === added directory 'hooks/tests/fixtures' |
2751 | === added file 'hooks/tests/fixtures/bar.balancer' |
2752 | --- hooks/tests/fixtures/bar.balancer 1970-01-01 00:00:00 +0000 |
2753 | +++ hooks/tests/fixtures/bar.balancer 2013-10-10 22:50:52 +0000 |
2754 | @@ -0,0 +1,7 @@ |
2755 | +<Proxy balancer://bar> |
2756 | +BalancerMember http://10.11.12.14 timeout=123 |
2757 | +BalancerMember http://10.11.12.15 timeout=123 |
2758 | + |
2759 | + ProxySet lbmethod=byrequests |
2760 | + RequestHeader set X-Balancer-Name "bar" |
2761 | +</Proxy> |
2762 | \ No newline at end of file |
2763 | |
2764 | === added file 'hooks/tests/fixtures/foo.balancer' |
2765 | --- hooks/tests/fixtures/foo.balancer 1970-01-01 00:00:00 +0000 |
2766 | +++ hooks/tests/fixtures/foo.balancer 2013-10-10 22:50:52 +0000 |
2767 | @@ -0,0 +1,6 @@ |
2768 | +<Proxy balancer://foo> |
2769 | +BalancerMember http://10.11.12.13 timeout=123 |
2770 | + |
2771 | + ProxySet lbmethod=byrequests |
2772 | + RequestHeader set X-Balancer-Name "foo" |
2773 | +</Proxy> |
2774 | \ No newline at end of file |
2775 | |
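These fixtures are the expected output of rendering `data/balancer.template` with jinja2, as `write_balancer_config` does. A minimal inline stand-in for that template (the real one ships with the charm; this version only mirrors the fixture shape) shows the rendering step:

```python
from jinja2 import Template

# Inline approximation of data/balancer.template, shaped to match the
# foo.balancer / bar.balancer fixtures above.
BALANCER_TEMPLATE = """\
<Proxy balancer://{{ name }}>
{% for addr in addresses -%}
BalancerMember http://{{ addr }} timeout={{ timeout }}
{% endfor %}
    ProxySet lbmethod=byrequests
    RequestHeader set X-Balancer-Name "{{ name }}"
</Proxy>"""

rendered = Template(BALANCER_TEMPLATE).render(
    name="foo", addresses=["10.11.12.13"], timeout=123)
print(rendered)
```

Each unit address becomes one `BalancerMember` line, so adding a unit to the relation adds a member to the `<Proxy>` block on the next hook run.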
2776 | === added file 'hooks/tests/fixtures/nrpe_check_config' |
2777 | --- hooks/tests/fixtures/nrpe_check_config 1970-01-01 00:00:00 +0000 |
2778 | +++ hooks/tests/fixtures/nrpe_check_config 2013-10-10 22:50:52 +0000 |
2779 | @@ -0,0 +1,2 @@ |
2780 | +# check Apache vhosts |
2781 | +command[check_vhost]=/usr/lib/nagios/plugins/check_http ['foo', 'bar'] |
2782 | |
2783 | === added file 'hooks/tests/fixtures/nrpe_service_config' |
2784 | --- hooks/tests/fixtures/nrpe_service_config 1970-01-01 00:00:00 +0000 |
2785 | +++ hooks/tests/fixtures/nrpe_service_config 2013-10-10 22:50:52 +0000 |
2786 | @@ -0,0 +1,11 @@ |
2787 | +#--------------------------------------------------- |
2788 | +# This file is Juju managed |
2789 | +#--------------------------------------------------- |
2790 | +define service { |
2791 | + use active-service |
2792 | + host_name somecontext-some-unit |
2793 | + service_description somecontext-some-unit Check Apache Vhost |
2794 | + check_command check_nrpe!check_vhost |
2795 | + servicegroups somecontext |
2796 | + |
2797 | +} |
2798 | \ No newline at end of file |
2799 | |
2800 | === added file 'hooks/tests/test_balancer_hook.py' |
2801 | --- hooks/tests/test_balancer_hook.py 1970-01-01 00:00:00 +0000 |
2802 | +++ hooks/tests/test_balancer_hook.py 2013-10-10 22:50:52 +0000 |
2803 | @@ -0,0 +1,652 @@ |
2804 | +from pprint import pformat |
2805 | +import os |
2806 | +import shutil |
2807 | +from StringIO import StringIO |
2808 | +import tempfile |
2809 | +import yaml |
2810 | + |
2811 | +from testtools import TestCase |
2812 | +from mock import patch, call, MagicMock |
2813 | + |
2814 | +import hooks |
2815 | + |
2816 | + |
2817 | +FIXTURES = os.path.join(os.path.dirname(__file__), 'fixtures') |
2818 | + |
2819 | + |
2820 | +original_open = open |
2821 | + |
2822 | + |
2823 | +class BalancerRelationTest(TestCase): |
2824 | + |
2825 | + def setUp(self): |
2826 | + super(BalancerRelationTest, self).setUp() |
2827 | + |
2828 | + self.relations_of_type = self.patch_hook("relations_of_type") |
2829 | + self.config_get = self.patch_hook("config_get") |
2830 | + self.log = self.patch_hook("log") |
2831 | + self.write_balancer_config = self.patch_hook("write_balancer_config") |
2832 | + |
2833 | + def patch_hook(self, hook_name): |
2834 | + mock_controller = patch.object(hooks, hook_name) |
2835 | + mock = mock_controller.start() |
2836 | + self.addCleanup(mock_controller.stop) |
2837 | + return mock |
2838 | + |
2839 | + def test_relation_data_returns_no_relations(self): |
2840 | + self.relations_of_type.return_value = [] |
2841 | + self.assertIs(None, hooks.update_balancers()) |
2842 | + self.log.assert_called_once_with("No relation data, exiting.") |
2843 | + self.write_balancer_config.assert_not_called() |
2844 | + |
2845 | + def test_no_port_in_relation_data(self): |
2846 | + self.relations_of_type.return_value = [ |
2847 | + {"private-address": "1.2.3.4", |
2848 | + "__unit__": "foo/1"}, |
2849 | + ] |
2850 | + self.assertIs(None, hooks.update_balancers()) |
2851 | + self.log.assert_called_once_with( |
2852 | + "No port in relation data for 'foo/1', skipping.") |
2853 | + self.write_balancer_config.assert_not_called() |
2854 | + |
2855 | + def test_no_private_address_in_relation_data(self): |
2856 | + self.relations_of_type.return_value = [ |
2857 | + {"port": 4242, |
2858 | + "__unit__": "foo/1"}, |
2859 | + ] |
2860 | + self.assertIs(None, hooks.update_balancers()) |
2861 | + self.log.assert_called_once_with( |
2862 | + "No private-address in relation data for 'foo/1', skipping.") |
2863 | + self.write_balancer_config.assert_not_called() |
2864 | + |
2865 | + def test_sitenames_in_relation_data(self): |
2866 | + self.relations_of_type.return_value = [ |
2867 | + {"private-address": "1.2.3.4", |
2868 | + "port": 4242, |
2869 | + "sitenames": "foo.internal bar.internal", |
2870 | + "__unit__": "foo/1"}, |
2871 | + ] |
2872 | + expected = { |
2873 | + "foo.internal": ["1.2.3.4:4242"], |
2874 | + "bar.internal": ["1.2.3.4:4242"], |
2875 | + } |
2876 | + self.assertEqual(hooks.update_balancers(), expected) |
2877 | + self.write_balancer_config.assert_called_once_with(expected) |
2878 | + |
2879 | + def test_all_services_in_relation_data(self): |
2880 | + self.relations_of_type.return_value = [ |
2881 | + {"private-address": "1.2.3.4", |
2882 | + "port": 80, |
2883 | + "__unit__": "foo/1", |
2884 | + "all_services": yaml.dump( |
2885 | + [ |
2886 | + {"service_name": "foo.internal", |
2887 | + "service_port": 4242}, |
2888 | + {"service_name": "bar.internal", |
2889 | + "service_port": 4243}, |
2890 | + ] |
2891 | + ), |
2892 | + }, |
2893 | + ] |
2894 | + expected = { |
2895 | + "foo.internal": ["1.2.3.4:4242"], |
2896 | + "bar.internal": ["1.2.3.4:4243"], |
2897 | + } |
2898 | + self.assertEqual(hooks.update_balancers(), expected) |
2899 | + self.write_balancer_config.assert_called_once_with(expected) |
2900 | + |
2901 | + def test_unit_names_in_relation_data(self): |
2902 | + self.relations_of_type.return_value = [ |
2903 | + {"private-address": "1.2.3.4", |
2904 | + "port": 4242, |
2905 | + "__unit__": "foo/1"}, |
2906 | + {"private-address": "1.2.3.5", |
2907 | + "port": 4242, |
2908 | + "__unit__": "foo/2"}, |
2909 | + ] |
2910 | + expected = { |
2911 | + "foo": ["1.2.3.4:4242", "1.2.3.5:4242"], |
2912 | + } |
2913 | + self.assertEqual(hooks.update_balancers(), expected) |
2914 | + self.write_balancer_config.assert_called_once_with(expected) |
2915 | + |
2916 | + |
2917 | +class HelpersTest(TestCase): |
2918 | + def setUp(self): |
2919 | + super(HelpersTest, self).setUp() |
2920 | + |
2921 | + self.tempdir = tempfile.mkdtemp() |
2922 | + |
2923 | + def tearDown(self): |
2924 | + super(HelpersTest, self).tearDown() |
2925 | + |
2926 | + shutil.rmtree(self.tempdir, ignore_errors=True) |
2927 | + |
2928 | + def read_fixture(self, basename): |
2929 | + fixture = os.path.join(FIXTURES, basename) |
2930 | + with original_open(fixture) as f: |
2931 | + content = f.read() |
2932 | + return content |
2933 | + |
2934 | + @patch('subprocess.call') |
2935 | + def test_installs_packages(self, mock_call): |
2936 | + mock_call.return_value = 'some result' |
2937 | + |
2938 | + result = hooks.apt_get_install('foo bar') |
2939 | + |
2940 | + self.assertEqual(result, 'some result') |
2941 | + mock_call.assert_called_with(['apt-get', '-y', 'install', '-qq', |
2942 | + 'foo bar']) |
2943 | + |
2944 | + @patch('subprocess.call') |
2945 | + def test_installs_nothing_if_package_not_provided(self, mock_call): |
2946 | + self.assertFalse(hooks.apt_get_install()) |
2947 | + self.assertFalse(mock_call.called) |
2948 | + |
2949 | + @patch('subprocess.call') |
2950 | + def test_purges_packages(self, mock_call): |
2951 | + mock_call.return_value = 'some result' |
2952 | + |
2953 | + result = hooks.apt_get_purge('foo bar') |
2954 | + |
2955 | + self.assertEqual(result, 'some result') |
2956 | + mock_call.assert_called_with(['apt-get', '-y', 'purge', '-qq', |
2957 | + 'foo bar']) |
2958 | + |
2959 | + @patch('subprocess.call') |
2960 | + def test_purges_nothing_if_package_not_provided(self, mock_call): |
2961 | + self.assertFalse(hooks.apt_get_purge()) |
2962 | + self.assertFalse(mock_call.called) |
2963 | + |
2964 | + @patch('subprocess.call') |
2965 | + def test_starts_apache_successfully(self, mock_call): |
2966 | + mock_call.return_value = 0 |
2967 | + action = 'start' |
2968 | + |
2969 | + self.assertTrue(hooks.service_apache2(action)) |
2970 | + mock_call.assert_called_with(['service', 'apache2', action]) |
2971 | + |
2972 | + @patch('subprocess.call') |
2973 | + def test_fails_to_start_apache(self, mock_call): |
2974 | + mock_call.return_value = 1 |
2975 | + action = 'start' |
2976 | + |
2977 | + self.assertFalse(hooks.service_apache2(action)) |
2978 | + mock_call.assert_called_with(['service', 'apache2', action]) |
2979 | + |
2980 | + @patch('subprocess.call') |
2981 | + def test_checks_apache_successfully(self, mock_call): |
2982 | + mock_call.return_value = 0 |
2983 | + |
2984 | + self.assertTrue(hooks.service_apache2('check')) |
2985 | + mock_call.assert_called_with(['/usr/sbin/apache2ctl', 'configtest']) |
2986 | + |
2987 | + @patch('subprocess.call') |
2988 | + def test_fails_to_check_apache(self, mock_call): |
2989 | + mock_call.return_value = 1 |
2990 | + |
2991 | + self.assertFalse(hooks.service_apache2('check')) |
2992 | + mock_call.assert_called_with(['/usr/sbin/apache2ctl', 'configtest']) |
2993 | + |
2994 | + @patch('subprocess.call') |
2995 | + def test_fails_to_check_apache_with_another_return_value(self, mock_call): |
2996 | + mock_call.return_value = 2 |
2997 | + |
2998 | + self.assertFalse(hooks.service_apache2('check')) |
2999 | + mock_call.assert_called_with(['/usr/sbin/apache2ctl', 'configtest']) |
3000 | + |
3001 | + @patch('subprocess.call') |
3002 | + def test_does_nothing_if_action_not_provided(self, mock_call): |
3003 | + self.assertIsNone(hooks.service_apache2()) |
3004 | + self.assertFalse(mock_call.called) |
3005 | + |
3006 | + @patch('subprocess.check_output') |
3007 | + def test_runs_an_arbitrary_command(self, check_output): |
3008 | + check_output.return_value = 'some result' |
3009 | + |
3010 | + self.assertEqual(hooks.run('foo', 1, 2, bar='baz'), 'some result') |
3011 | + check_output.assert_called_with('foo', 1, 2, bar='baz') |
3012 | + |
3013 | + @patch('subprocess.check_output') |
3014 | + @patch('sys.stdout.write') |
3015 | + def test_prints_and_reraises_run_error(self, write, check_output): |
3016 | + check_output.side_effect = RuntimeError('some error') |
3017 | + |
3018 | + self.assertRaises(RuntimeError, hooks.run, 'some command') |
3019 | + self.assertEqual(write.mock_calls, [ |
3020 | + call('some error'), |
3021 | + call('\n'), |
3022 | + ]) |
3023 | + |
3024 | + @patch('subprocess.call') |
3025 | + @patch('hooks.log') |
3026 | + @patch('hooks.service_apache2') |
3027 | + @patch('os.path.exists') |
3028 | + @patch('hooks.apt_get_install') |
3029 | + def test_enables_a_module(self, apt_get_install, path_exists, |
3030 | + service_apache2, log, mock_call): |
3031 | + module = 'foo' |
3032 | + module_already_enabled = False |
3033 | + module_available = False |
3034 | + apache_check = True |
3035 | + apache_reload = None |
3036 | + module_installed = 0 |
3037 | + module_finally_enabled = 0 |
3038 | + |
3039 | + path_exists.side_effect = [module_already_enabled, module_available] |
3040 | + service_apache2.side_effect = [apache_check, apache_reload] |
3041 | + apt_get_install.return_value = module_installed |
3042 | + mock_call.return_value = module_finally_enabled |
3043 | + |
3044 | + result = hooks.enable_module(module) |
3045 | + |
3046 | + self.assertTrue(result) |
3047 | + path_exists.assert_has_calls([ |
3048 | + call("/etc/apache2/mods-enabled/%s.load" % (module)), |
3049 | + call("/etc/apache2/mods-available/%s.load" % (module)) |
3050 | + ]) |
3051 | + apt_get_install.assert_called_with("libapache2-mod-%s" % (module)) |
3052 | + mock_call.assert_called_with(['/usr/sbin/a2enmod', module]) |
3053 | + service_apache2.assert_has_calls([call('check'), call('reload')]) |
3054 | + self.assertFalse(log.called) |
3055 | + |
3056 | + @patch('subprocess.call') |
3057 | + @patch('hooks.log') |
3058 | + @patch('hooks.service_apache2') |
3059 | + @patch('os.path.exists') |
3060 | + @patch('hooks.apt_get_install') |
3061 | + def test_doesnt_enable_if_module_not_provided(self, apt_get_install, |
3062 | + path_exists, service_apache2, |
3063 | + log, mock_call): |
3064 | + module = None |
3065 | + |
3066 | + result = hooks.enable_module(module) |
3067 | + |
3068 | + self.assertTrue(result) |
3069 | + self.assertFalse(apt_get_install.called) |
3070 | + self.assertFalse(path_exists.called) |
3071 | + self.assertFalse(service_apache2.called) |
3072 | + self.assertFalse(log.called) |
3073 | + self.assertFalse(mock_call.called) |
3074 | + |
3075 | + @patch('subprocess.call') |
3076 | + @patch('hooks.log') |
3077 | + @patch('hooks.service_apache2') |
3078 | + @patch('os.path.exists') |
3079 | + @patch('hooks.apt_get_install') |
3080 | + def test_doesnt_enable_if_module_already_enabled(self, apt_get_install, |
3081 | + path_exists, |
3082 | + service_apache2, log, |
3083 | + mock_call): |
3084 | + module = 'foo' |
3085 | + module_already_enabled = True |
3086 | + |
3087 | + path_exists.return_value = module_already_enabled |
3088 | + |
3089 | + result = hooks.enable_module(module) |
3090 | + |
3091 | + self.assertTrue(result) |
3092 | + path_exists.assert_called_with( |
3093 | + "/etc/apache2/mods-enabled/%s.load" % (module)) |
3094 | + log.assert_called_with("Module already loaded: foo") |
3095 | + self.assertFalse(apt_get_install.called) |
3096 | + self.assertFalse(service_apache2.called) |
3097 | + self.assertFalse(mock_call.called) |
3098 | + |
3099 | + @patch('subprocess.call') |
3100 | + @patch('hooks.log') |
3101 | + @patch('hooks.service_apache2') |
3102 | + @patch('os.path.exists') |
3103 | + @patch('hooks.apt_get_install') |
3104 | + def test_fails_to_enable_if_module_not_installed(self, apt_get_install, |
3105 | + path_exists, |
3106 | + service_apache2, log, |
3107 | + mock_call): |
3108 | + module = 'foo' |
3109 | + module_already_enabled = False |
3110 | + module_available = False |
3111 | + module_installed = 1 |
3112 | + |
3113 | + path_exists.side_effect = [module_already_enabled, module_available] |
3114 | + apt_get_install.return_value = module_installed |
3115 | + |
3116 | + result = hooks.enable_module(module) |
3117 | + |
3118 | + self.assertFalse(result) |
3119 | + path_exists.assert_has_calls([ |
3120 | + call("/etc/apache2/mods-enabled/%s.load" % (module)), |
3121 | + call("/etc/apache2/mods-available/%s.load" % (module)) |
3122 | + ]) |
3123 | + apt_get_install.assert_called_with("libapache2-mod-%s" % (module)) |
3124 | + log.assert_called_with("Installing module %s failed" % (module)) |
3125 | + self.assertFalse(mock_call.called) |
3126 | + self.assertFalse(service_apache2.called) |
3127 | + |
3128 | + @patch('subprocess.call') |
3129 | + @patch('hooks.log') |
3130 | + @patch('hooks.service_apache2') |
3131 | + @patch('os.path.exists') |
3132 | + @patch('hooks.apt_get_install') |
3133 | + def test_fails_to_enable_if_enmod_fails(self, apt_get_install, |
3134 | + path_exists, service_apache2, |
3135 | + log, mock_call): |
3136 | + module = 'foo' |
3137 | + module_already_enabled = False |
3138 | + module_available = False |
3139 | + module_installed = 0 |
3140 | + module_finally_enabled = 1 |
3141 | + |
3142 | + path_exists.side_effect = [module_already_enabled, module_available] |
3143 | + apt_get_install.return_value = module_installed |
3144 | + mock_call.return_value = module_finally_enabled |
3145 | + |
3146 | + result = hooks.enable_module(module) |
3147 | + |
3148 | + self.assertFalse(result) |
3149 | + path_exists.assert_has_calls([ |
3150 | + call("/etc/apache2/mods-enabled/%s.load" % (module)), |
3151 | + call("/etc/apache2/mods-available/%s.load" % (module)) |
3152 | + ]) |
3153 | + apt_get_install.assert_called_with("libapache2-mod-%s" % (module)) |
3154 | + mock_call.assert_called_with(['/usr/sbin/a2enmod', module]) |
3155 | + self.assertFalse(log.called) |
3156 | + self.assertFalse(service_apache2.called) |
3157 | + |
3158 | + @patch('subprocess.call') |
3159 | + @patch('hooks.log') |
3160 | + @patch('hooks.service_apache2') |
3161 | + @patch('os.path.exists') |
3162 | + def test_disables_a_module(self, path_exists, service_apache2, log, |
3163 | + mock_call): |
3164 | + module = 'foo' |
3165 | + module_still_enabled = True |
3166 | + apache_check = True |
3167 | + apache_reload = None |
3168 | + module_finally_disabled = 0 |
3169 | + |
3170 | + path_exists.return_value = module_still_enabled |
3171 | + service_apache2.side_effect = [apache_check, apache_reload] |
3172 | + mock_call.return_value = module_finally_disabled |
3173 | + |
3174 | + result = hooks.disable_module(module) |
3175 | + |
3176 | + self.assertTrue(result) |
3177 | + path_exists.assert_called_with( |
3178 | + "/etc/apache2/mods-enabled/%s.load" % (module)) |
3179 | + mock_call.assert_called_with(['/usr/sbin/a2dismod', module]) |
3180 | + service_apache2.assert_has_calls([call('check'), call('reload')]) |
3181 | + self.assertFalse(log.called) |
3182 | + |
3183 | + @patch('subprocess.call') |
3184 | + @patch('hooks.log') |
3185 | + @patch('hooks.service_apache2') |
3186 | + @patch('os.path.exists') |
3187 | + def test_doest_disable_if_module_not_provided(self, path_exists, |
3188 | + service_apache2, log, |
3189 | + mock_call): |
3190 | + module = None |
3191 | + |
3192 | + result = hooks.disable_module(module) |
3193 | + |
3194 | + self.assertTrue(result) |
3195 | + self.assertFalse(path_exists.called) |
3196 | + self.assertFalse(service_apache2.called) |
3197 | + self.assertFalse(log.called) |
3198 | + self.assertFalse(mock_call.called) |
3199 | + |
3200 | + @patch('subprocess.call') |
3201 | + @patch('hooks.log') |
3202 | + @patch('hooks.service_apache2') |
3203 | + @patch('os.path.exists') |
3204 | + def test_does_nothing_if_module_already_disabled(self, path_exists, |
3205 | + service_apache2, log, |
3206 | + mock_call): |
3207 | + module = 'foo' |
3208 | + module_still_enabled = False |
3209 | + |
3210 | + path_exists.return_value = module_still_enabled |
3211 | + |
3212 | + result = hooks.disable_module(module) |
3213 | + |
3214 | + self.assertTrue(result) |
3215 | + path_exists.assert_called_with( |
3216 | + "/etc/apache2/mods-enabled/%s.load" % (module)) |
3217 | + log.assert_called_with("Module already disabled: foo") |
3218 | + |
3219 | + @patch('subprocess.call') |
3220 | + @patch('hooks.log') |
3221 | + @patch('hooks.service_apache2') |
3222 | + @patch('os.path.exists') |
3223 | + def test_fails_to_disable_if_dismod_fails(self, path_exists, |
3224 | + service_apache2, log, mock_call): |
3225 | + module = 'foo' |
3226 | + module_still_enabled = True |
3227 | + apache_check = True |
3228 | + apache_reload = None |
3229 | + module_finally_disabled = 1 |
3230 | + |
3231 | + path_exists.return_value = module_still_enabled |
3232 | + service_apache2.side_effect = [apache_check, apache_reload] |
3233 | + mock_call.return_value = module_finally_disabled |
3234 | + |
3235 | + result = hooks.disable_module(module) |
3236 | + |
3237 | + self.assertFalse(result) |
3238 | + path_exists.assert_called_with( |
3239 | + "/etc/apache2/mods-enabled/%s.load" % (module)) |
3240 | + mock_call.assert_called_with(['/usr/sbin/a2dismod', module]) |
3241 | + self.assertFalse(service_apache2.called) |
3242 | + self.assertFalse(log.called) |
3243 | + |
3244 | + @patch('hooks.config_get') |
3245 | + @patch('hooks.log') |
3246 | + def test_writes_balancer_config(self, log, config_get): |
3247 | + config_get.return_value = { |
3248 | + 'lb_balancer_timeout': 123, |
3249 | + } |
3250 | + balancer_config = { |
3251 | + 'foo': ['10.11.12.13'], |
3252 | + 'bar': ['10.11.12.14', '10.11.12.15'], |
3253 | + } |
3254 | + with patch('hooks.default_apache2_config_dir', self.tempdir): |
3255 | + hooks.write_balancer_config(balancer_config) |
3256 | + for balancer in balancer_config.keys(): |
3257 | + basename = '%s.balancer' % balancer |
3258 | + exp_path = os.path.join(FIXTURES, basename) |
3259 | + res_path = os.path.join(self.tempdir, basename) |
3260 | + with open(exp_path) as exp, open(res_path) as res: |
3261 | + self.assertEqual(exp.read(), res.read()) |
3262 | + |
3263 | + |
3264 | +class HooksTest(TestCase): |
3265 | + def setUp(self): |
3266 | + super(HooksTest, self).setUp() |
3267 | + self.executable_file = '/some/executable' |
3268 | + self.unexecutable_file = '/some/unexecutable' |
3269 | + self.is_a_file = True |
3270 | + self.not_a_file = False |
3271 | + self.is_executable = True |
3272 | + self.not_executable = False |
3273 | + self.dir_exists = True |
3274 | + self.not_a_dir = False |
3275 | + self.log_prefix = 'baz' |
3276 | + self.log_path = '/tmp/pprint-%s.log' % self.log_prefix |
3277 | + |
3278 | + def tearDown(self): |
3279 | + super(HooksTest, self).tearDown() |
3280 | + if os.path.exists(self.log_path): |
3281 | + os.remove(self.log_path) |
3282 | + |
3283 | + @patch('hooks.config_get') |
3284 | + @patch('os.path.exists') |
3285 | + @patch('os.mkdir') |
3286 | + @patch('hooks.apt_get_install') |
3287 | + @patch('hooks.log', MagicMock()) |
3288 | + def test_installs_hook(self, apt_get_install, mkdir, exists, config_get): |
3289 | + exists.return_value = self.not_a_dir |
3290 | + config_get.return_value = None |
3291 | + apt_get_install.return_value = 'some result' |
3292 | + |
3293 | + result = hooks.install_hook() |
3294 | + |
3295 | + self.assertEqual(result, 'some result') |
3296 | + exists.assert_called_with(hooks.default_apache2_service_config_dir) |
3297 | + mkdir.assert_called_with(hooks.default_apache2_service_config_dir, |
3298 | + 0600) |
3299 | + apt_get_install.assert_has_calls([ |
3300 | + call('python-jinja2'), |
3301 | + call('apache2'), |
3302 | + ]) |
3303 | + |
3304 | + @patch('hooks.config_get') |
3305 | + @patch('os.path.exists') |
3306 | + @patch('os.mkdir') |
3307 | + @patch('hooks.apt_get_install') |
3308 | + @patch('hooks.log', MagicMock()) |
3309 | + def test_install_hook_installs_extra_packages( |
3310 | + self, apt_get_install, mkdir, exists, config_get): |
3311 | + exists.return_value = self.dir_exists |
3312 | + config_get.return_value = "extra" |
3313 | + apt_get_install.return_value = 'some result' |
3314 | + |
3315 | + result = hooks.install_hook() |
3316 | + |
3317 | + self.assertEqual(result, 'some result') |
3318 | + apt_get_install.assert_has_calls([ |
3319 | + call('python-jinja2'), |
3320 | + call('apache2'), |
3321 | + call('extra'), |
3322 | + ]) |
3323 | + |
3324 | + @patch('hooks.config_get') |
3325 | + @patch('os.path.exists') |
3326 | + @patch('os.mkdir') |
3327 | + @patch('hooks.apt_get_install') |
3328 | + @patch('hooks.log', MagicMock()) |
3329 | + def test_doesnt_create_dir_to_install_hooks_if_not_needed( |
3330 | + self, apt_get_install, mkdir, exists, config_get): |
3331 | + exists.return_value = self.dir_exists |
3332 | + config_get.return_value = None |
3333 | + apt_get_install.return_value = 'some result' |
3334 | + |
3335 | + result = hooks.install_hook() |
3336 | + |
3337 | + self.assertEqual(result, 'some result') |
3338 | + exists.assert_called_with(hooks.default_apache2_service_config_dir) |
3339 | + self.assertFalse(mkdir.called) |
3340 | + apt_get_install.assert_has_calls([ |
3341 | + call('python-jinja2'), |
3342 | + call('apache2'), |
3343 | + ]) |
3344 | + |
3345 | + def test_dumps_data_into_file(self): |
3346 | + data = {'foo': 'bar'} |
3347 | + |
3348 | + hooks.dump_data(data, self.log_prefix) |
3349 | + |
3350 | + with open(self.log_path) as f: |
3351 | + self.assertEqual(f.read().strip(), pformat(data).strip()) |
3352 | + |
3353 | + def test_dumps_nothing_if_data_not_provided(self): |
3354 | + data = None |
3355 | + |
3356 | + hooks.dump_data(data, self.log_prefix) |
3357 | + |
3358 | + self.assertFalse(os.path.exists(self.log_path)) |
3359 | + |
3360 | + @patch('hooks.relations_of_type') |
3361 | + @patch('yaml.safe_load') |
3362 | + @patch('hooks.log') |
3363 | + def test_gets_reverseproxy_data(self, log, load, relations_of_type): |
3364 | + relation_data = [ |
3365 | + {'port': 1234, |
3366 | + 'private-address': '10.11.12.13', |
3367 | + 'all_services': '/some/yaml/file', |
3368 | + '__unit__': 'foo-unit/1', |
3369 | + }, |
3370 | + {'port': 1234, |
3371 | + 'private-address': '10.11.12.14', |
3372 | + 'all_services': '/some/yaml/file2', |
3373 | + '__unit__': 'foo-unit/2', |
3374 | + }, |
3375 | + {'port': 3214, |
3376 | + 'private-address': '10.11.12.13', |
3377 | + 'all_services': '/some/yaml/file3', |
3378 | + '__unit__': 'bar-unit/1', |
3379 | + }, |
3380 | + ] |
3381 | + yaml_content = [ |
3382 | + { |
3383 | + 'service_name': 'some-service', |
3384 | + 'service_port': '2345', |
3385 | + }, |
3386 | + ] |
3387 | + |
3388 | + relations_of_type.return_value = relation_data |
3389 | + load.return_value = yaml_content |
3390 | + |
3391 | + result = hooks.get_reverseproxy_data(relation='baz-proxy') |
3392 | + |
3393 | + self.assertEqual(result, { |
3394 | + 'barunit': '10.11.12.13:3214', |
3395 | + 'barunit_all_services': '/some/yaml/file3', |
3396 | + 'barunit_port': 3214, |
3397 | + 'barunit_private_address': '10.11.12.13', |
3398 | + 'barunit_someservice': '10.11.12.13:2345', |
3399 | + 'foounit': '10.11.12.13:1234', |
3400 | + 'foounit_all_services': '/some/yaml/file', |
3401 | + 'foounit_port': 1234, |
3402 | + 'foounit_private_address': '10.11.12.13', |
3403 | + 'foounit_someservice': '10.11.12.13:2345', |
3404 | + }) |
3405 | + relations_of_type.assert_called_with('baz-proxy') |
3406 | + self.assertEqual(sorted(load.mock_calls), sorted([ |
3407 | + call('/some/yaml/file'), |
3408 | + call('/some/yaml/file3'), |
3409 | + ])) |
3410 | + self.assertEqual(sorted(log.mock_calls), sorted([ |
3411 | + call('unit_type: barunit'), |
3412 | + call('unit_type: foounit'), |
3413 | + call('unit_type: foounit'), |
3414 | + ])) |
3415 | + |
3416 | + @patch('hooks.relations_of_type') |
3417 | + @patch('yaml.safe_load') |
3418 | + @patch('hooks.log') |
3419 | + def test_truncates_reverseproxy_data_if_unit_has_no_port( |
3420 | + self, log, load, relations_of_type): |
3421 | + relation_data = [ |
3422 | + {'port': 1234, |
3423 | + 'private-address': '10.11.12.13', |
3424 | + 'all_services': '/some/yaml/file', |
3425 | + '__unit__': 'foo-unit/1', |
3426 | + }, |
3427 | + {'private-address': '10.11.12.13', |
3428 | + 'all_services': '/some/yaml/file3', |
3429 | + '__unit__': 'bar-unit/1', |
3430 | + }, |
3431 | + ] |
3432 | + |
3433 | + relations_of_type.return_value = relation_data |
3434 | + |
3435 | + result = hooks.get_reverseproxy_data(relation='baz-proxy') |
3436 | + |
3437 | + self.assertEqual(result, {'foounit': '10.11.12.13:1234', |
3438 | + 'foounit_all_services': '/some/yaml/file', |
3439 | + 'foounit_port': 1234, |
3440 | + 'foounit_private_address': '10.11.12.13'}) |
3441 | + relations_of_type.assert_called_with('baz-proxy') |
3442 | + self.assertTrue(load.called) |
3443 | + |
3444 | + @patch('hooks.relations_of_type') |
3445 | + @patch('yaml.safe_load') |
3446 | + @patch('hooks.log') |
3447 | + def test_truncates_reverseproxy_data_if_no_data_available( |
3448 | + self, log, load, relations_of_type): |
3449 | + relations_of_type.return_value = [] |
3450 | + |
3451 | + result = hooks.get_reverseproxy_data(relation='baz-proxy') |
3452 | + |
3453 | + self.assertEqual(result, {}) |
3454 | + relations_of_type.assert_called_with('baz-proxy') |
3455 | + self.assertFalse(load.called) |
3456 | |
3457 | === added file 'hooks/tests/test_nrpe_hooks.py' |
3458 | --- hooks/tests/test_nrpe_hooks.py 1970-01-01 00:00:00 +0000 |
3459 | +++ hooks/tests/test_nrpe_hooks.py 2013-10-10 22:50:52 +0000 |
3460 | @@ -0,0 +1,134 @@ |
3461 | +import os |
3462 | +import grp |
3463 | +import pwd |
3464 | +import subprocess |
3465 | +from testtools import TestCase |
3466 | +from mock import patch, call |
3467 | + |
3468 | +import hooks |
3469 | +from charmhelpers.contrib.charmsupport import nrpe |
3470 | +from charmhelpers.core.hookenv import Serializable |
3471 | + |
3472 | + |
3473 | +class NRPERelationTest(TestCase): |
3474 | + """Tests for the update_nrpe_checks hook. |
3475 | + |
3476 | + Half of this is already tested in the tests for charmsupport.nrpe, but |
3477 | + as the hook in the charm pre-dates that, the tests are left here to ensure |
3478 | + backwards-compatibility. |
3479 | + |
3480 | + """ |
3481 | + patches = { |
3482 | + 'config': {'object': nrpe}, |
3483 | + 'log': {'object': nrpe}, |
3484 | + 'getpwnam': {'object': pwd}, |
3485 | + 'getgrnam': {'object': grp}, |
3486 | + 'mkdir': {'object': os}, |
3487 | + 'chown': {'object': os}, |
3488 | + 'exists': {'object': os.path}, |
3489 | + 'listdir': {'object': os}, |
3490 | + 'remove': {'object': os}, |
3491 | + 'open': {'object': nrpe, 'create': True}, |
3492 | + 'isfile': {'object': os.path}, |
3493 | + 'call': {'object': subprocess}, |
3494 | + 'relation_ids': {'object': nrpe}, |
3495 | + 'relation_set': {'object': nrpe}, |
3496 | + } |
3497 | + |
3498 | + def setUp(self): |
3499 | + super(NRPERelationTest, self).setUp() |
3500 | + self.patched = {} |
3501 | + # Mock the universe. |
3502 | + for attr, data in self.patches.items(): |
3503 | + create = data.get('create', False) |
3504 | + patcher = patch.object(data['object'], attr, create=create) |
3505 | + self.patched[attr] = patcher.start() |
3506 | + self.addCleanup(patcher.stop) |
3507 | + if not 'JUJU_UNIT_NAME' in os.environ: |
3508 | + os.environ['JUJU_UNIT_NAME'] = 'test' |
3509 | + |
3510 | + def check_call_counts(self, **kwargs): |
3511 | + for attr, expected in kwargs.items(): |
3512 | + patcher = self.patched[attr] |
3513 | + self.assertEqual(expected, patcher.call_count, attr) |
3514 | + |
3515 | + def test_update_nrpe_no_nagios_bails(self): |
3516 | + config = {'nagios_context': 'test'} |
3517 | + self.patched['config'].return_value = Serializable(config) |
3518 | + self.patched['getpwnam'].side_effect = KeyError |
3519 | + |
3520 | + self.assertEqual(None, hooks.update_nrpe_checks()) |
3521 | + |
3522 | + expected = 'Nagios user not set up, nrpe checks not updated' |
3523 | + self.patched['log'].assert_called_once_with(expected) |
3524 | + self.check_call_counts(log=1, config=1, getpwnam=1) |
3525 | + |
3526 | + def test_update_nrpe_removes_existing_config(self): |
3527 | + config = { |
3528 | + 'nagios_context': 'test', |
3529 | + 'nagios_check_http_params': '-u http://example.com/url', |
3530 | + } |
3531 | + self.patched['config'].return_value = Serializable(config) |
3532 | + self.patched['exists'].return_value = True |
3533 | + self.patched['listdir'].return_value = [ |
3534 | + 'foo', 'bar.cfg', 'check_vhost.cfg'] |
3535 | + |
3536 | + self.assertEqual(None, hooks.update_nrpe_checks()) |
3537 | + |
3538 | + expected = '/var/lib/nagios/export/check_vhost.cfg' |
3539 | + self.patched['remove'].assert_called_once_with(expected) |
3540 | + self.check_call_counts(config=1, getpwnam=1, getgrnam=1, |
3541 | + exists=3, remove=1, open=2, listdir=1) |
3542 | + |
3543 | + def test_update_nrpe_with_check_url(self): |
3544 | + config = { |
3545 | + 'nagios_context': 'test', |
3546 | + 'nagios_check_http_params': '-u foo -H bar', |
3547 | + } |
3548 | + self.patched['config'].return_value = Serializable(config) |
3549 | + self.patched['exists'].return_value = True |
3550 | + self.patched['isfile'].return_value = False |
3551 | + |
3552 | + self.assertEqual(None, hooks.update_nrpe_checks()) |
3553 | + self.assertEqual(2, self.patched['open'].call_count) |
3554 | + filename = 'check_vhost.cfg' |
3555 | + |
3556 | + service_file_contents = """ |
3557 | +#--------------------------------------------------- |
3558 | +# This file is Juju managed |
3559 | +#--------------------------------------------------- |
3560 | +define service { |
3561 | + use active-service |
3562 | + host_name test-test |
3563 | + service_description test-test[vhost] Check Virtual Host |
3564 | + check_command check_nrpe!check_vhost |
3565 | + servicegroups test |
3566 | +} |
3567 | +""" |
3568 | + self.patched['open'].assert_has_calls( |
3569 | + [call('/etc/nagios/nrpe.d/%s' % filename, 'w'), |
3570 | + call('/var/lib/nagios/export/service__test-test_%s' % |
3571 | + filename, 'w'), |
3572 | + call().__enter__().write(service_file_contents), |
3573 | + call().__enter__().write('# check vhost\n'), |
3574 | + call().__enter__().write( |
3575 | + 'command[check_vhost]=/check_http -u foo -H bar\n')], |
3576 | + any_order=True) |
3577 | + |
3578 | + self.check_call_counts(config=1, getpwnam=1, getgrnam=1, |
3579 | + exists=3, open=2, listdir=1) |
3580 | + |
3581 | + def test_update_nrpe_restarts_service(self): |
3582 | + config = { |
3583 | + 'nagios_context': 'test', |
3584 | + 'nagios_check_http_params': '-u foo -p 3128' |
3585 | + } |
3586 | + self.patched['config'].return_value = Serializable(config) |
3587 | + self.patched['exists'].return_value = True |
3588 | + |
3589 | + self.assertEqual(None, hooks.update_nrpe_checks()) |
3590 | + |
3591 | + expected = ['service', 'nagios-nrpe-server', 'restart'] |
3592 | + self.assertEqual(expected, self.patched['call'].call_args[0][0]) |
3593 | + self.check_call_counts(config=1, getpwnam=1, getgrnam=1, |
3594 | + exists=3, open=2, listdir=1, call=1) |
3595 | |
3596 | === modified file 'metadata.yaml' |
3597 | --- metadata.yaml 2013-04-22 19:37:54 +0000 |
3598 | +++ metadata.yaml 2013-10-10 22:50:52 +0000 |
3599 | @@ -13,6 +13,9 @@ |
3600 | nrpe-external-master: |
3601 | interface: nrpe-external-master |
3602 | scope: container |
3603 | + local-monitors: |
3604 | + interface: local-monitors |
3605 | + scope: container |
3606 | website: |
3607 | interface: http |
3608 | requires: |
3609 | @@ -22,3 +25,5 @@ |
3610 | interface: http |
3611 | balancer: |
3612 | interface: http |
3613 | + logging: |
3614 | + interface: syslog |
3615 | |
3616 | === removed file 'revision' |
3617 | --- revision 2013-04-30 08:49:11 +0000 |
3618 | +++ revision 1970-01-01 00:00:00 +0000 |
3619 | @@ -1,1 +0,0 @@ |
3620 | -5 |
3621 | |
3622 | === added file 'setup.cfg' |
3623 | --- setup.cfg 1970-01-01 00:00:00 +0000 |
3624 | +++ setup.cfg 2013-10-10 22:50:52 +0000 |
3625 | @@ -0,0 +1,4 @@ |
3626 | +[nosetests] |
3627 | +with-coverage=1 |
3628 | +cover-erase=1 |
3629 | +cover-package=hooks |
3630 | \ No newline at end of file |
3631 | |
3632 | === added file 'tarmac_tests.sh' |
3633 | --- tarmac_tests.sh 1970-01-01 00:00:00 +0000 |
3634 | +++ tarmac_tests.sh 2013-10-10 22:50:52 +0000 |
3635 | @@ -0,0 +1,6 @@ |
3636 | +#!/bin/sh |
3637 | +# How the tests are run in Jenkins by Tarmac |
3638 | + |
3639 | +set -e |
3640 | + |
3641 | +make build |
This looks good to me. Since you're using charmhelpers2 now, you might want to consider using the Hooks.hook decorator from core.hookenv, which will allow you to simplify your methods by removing the giant case statement at the bottom, e.g.:
```
from charmhelpers.core.hookenv import Hooks

hooks = Hooks()

@hooks.hook('config-changed', 'reverseproxy-relation-broken', ...)
def config_changed():
    config_data = config_get()
    ...
```
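To illustrate the decorator-dispatch pattern the reviewer is suggesting, here is a minimal self-contained sketch. It is not the real charmhelpers implementation (which also handles unregistered-hook errors and relation name conventions differently); the registry class, the `config_changed` body, and the return values are illustrative assumptions. The idea is simply that hook names map to functions, so the charm's entry point dispatches on `sys.argv[0]` instead of a long if/elif chain.

```python
import sys


class Hooks(object):
    """Sketch of a charmhelpers-style hook registry (not the real API):
    decorating a function with one or more hook names registers it, and
    execute() dispatches on the hook executable's basename."""

    def __init__(self):
        self._registry = {}

    def hook(self, *hook_names):
        def wrapper(fn):
            # One function may serve several hooks, e.g. config-changed
            # and a relation-broken hook that both rebuild the config.
            for name in hook_names:
                self._registry[name] = fn
            return fn
        return wrapper

    def execute(self, args):
        # Juju invokes hooks via symlinks, so the hook name is the
        # basename of argv[0].
        hook_name = args[0].rsplit('/', 1)[-1]
        if hook_name not in self._registry:
            raise KeyError('hook not registered: %s' % hook_name)
        return self._registry[hook_name]()


hooks = Hooks()


@hooks.hook('config-changed', 'reverseproxy-relation-broken')
def config_changed():
    # Hypothetical handler body; a real charm would read config_get()
    # here and regenerate the Apache vhost files.
    return 'config handled'


if __name__ == '__main__':
    hooks.execute(sys.argv)
```

With this shape, adding a new hook is one decorator line rather than another branch in a dispatch block.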