Merge lp:~sidnei/charms/precise/haproxy/trunk into lp:charms/haproxy
Status: Superseded
Proposed branch: lp:~sidnei/charms/precise/haproxy/trunk
Merge into: lp:charms/haproxy
Diff against target: 5083 lines (+3681/-904), 30 files modified:
  .bzrignore (+10/-0)
  Makefile (+39/-0)
  README.md (+22/-15)
  charm-helpers.yaml (+4/-0)
  cm.py (+193/-0)
  config-manager.txt (+6/-0)
  config.yaml (+13/-1)
  files/nrpe/check_haproxy.sh (+2/-3)
  hooks/charmhelpers/contrib/charmsupport/nrpe.py (+218/-0)
  hooks/charmhelpers/contrib/charmsupport/volumes.py (+156/-0)
  hooks/charmhelpers/core/hookenv.py (+340/-0)
  hooks/charmhelpers/core/host.py (+239/-0)
  hooks/charmhelpers/fetch/__init__.py (+209/-0)
  hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
  hooks/charmhelpers/fetch/bzrurl.py (+44/-0)
  hooks/hooks.py (+508/-450)
  hooks/install (+13/-0)
  hooks/nrpe.py (+0/-170)
  hooks/test_hooks.py (+0/-263)
  hooks/tests/test_config_changed_hooks.py (+120/-0)
  hooks/tests/test_helpers.py (+745/-0)
  hooks/tests/test_nrpe_hooks.py (+24/-0)
  hooks/tests/test_peer_hooks.py (+200/-0)
  hooks/tests/test_reverseproxy_hooks.py (+345/-0)
  hooks/tests/test_website_hooks.py (+145/-0)
  hooks/tests/utils_for_tests.py (+21/-0)
  metadata.yaml (+7/-1)
  revision (+0/-1)
  setup.cfg (+4/-0)
  tarmac_tests.sh (+6/-0)
To merge this branch: bzr merge lp:~sidnei/charms/precise/haproxy/trunk
Related bugs: (none)
Reviewers: charmers (review pending)
Review via email: mp+181421@code.launchpad.net
This proposal supersedes a proposal from 2013-04-18.
Commit message
Description of the change
* The 'all_services' config now supports a static list of servers to be used *in addition* to the ones provided via the relation.
* When more than one haproxy unit exists, the configured service is upgraded in place to a mode where traffic is routed to a single haproxy unit (the first one in unit-name order) and the remaining units are configured as 'backup'. This allows a 'maxconn' limit to be enforced on the configured services, which would otherwise not be possible.
* Changes to the configured services are properly propagated to the upstream relation.
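The primary/backup election described above can be sketched roughly as follows. This is a hypothetical illustration, not the charm's actual hook code; the function and variable names are invented, and a real implementation would sort unit numbers numerically rather than lexicographically:

```python
def peer_server_options(unit_name, peer_unit_names):
    """Return extra haproxy server options for this unit.

    The first unit in unit-name order stays primary; every other unit
    is written into the configured services as 'backup', so traffic is
    routed through a single haproxy unit and 'maxconn' can be enforced
    there.
    """
    # Note: lexicographic sort; real code would compare unit numbers
    # numerically so that e.g. haproxy/10 sorts after haproxy/2.
    all_units = sorted(peer_unit_names + [unit_name])
    return [] if all_units[0] == unit_name else ["backup"]


print(peer_server_options("haproxy/0", ["haproxy/1", "haproxy/2"]))  # []
print(peer_server_options("haproxy/2", ["haproxy/0", "haproxy/1"]))  # ['backup']
```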
Juan L. Negron (negronjl) wrote (posted in a previous version of this proposal):
Hi Sidnei:
I ran charm proof on the charm (after merging it locally) and I get the following:
negronjl@
W: website-
W: nrpe-external-
W: reverseproxy-
W: peer-relation-
W: install not executable
W: start not executable
W: stop not executable
W: config-changed not executable
As I try to deploy the charm, I get this:
negronjl@
2013-02-27 10:17:30,036 INFO Searching for charm local:precise/
2013-02-27 10:17:30,244 INFO Connecting to environment...
2013-02-27 10:17:33,069 INFO Connected to environment.
2013-02-27 10:17:33,337 ERROR [Errno 2] No such file or directory: '/home/
Sidnei da Silva (sidnei) wrote (posted in a previous version of this proposal):
Ah, lib/charmsupport is a symlink and those don't get included in the charm IIRC. Maybe 'charm proof' should check for that.
I'll add charm proof to our tarmac test step and fix the other issue.
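The Makefile added in this proposal encodes a warnings-only pass rule for that test step: `charm proof` exits 0 when clean and 100 when it only emitted warnings, and the Makefile treats both as success via `charm proof $(PWD) || [ $$? -eq 100 ]`. A minimal Python sketch of the same convention (the helper names here are invented for illustration):

```python
import subprocess


def proof_passes(returncode):
    """Apply the Makefile's warnings-only pass rule: exit code 0
    (clean) or 100 (warnings only) counts as a pass; any other code
    fails the build."""
    return returncode in (0, 100)


def run_proof(charm_dir):
    # Run `charm proof` on the charm directory and interpret its
    # exit status with the rule above.
    rc = subprocess.call(["charm", "proof", charm_dir])
    return proof_passes(rc)


print(proof_passes(0), proof_passes(100), proof_passes(200))  # True True False
```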
Mark Mims (mark-mims) wrote (posted in a previous version of this proposal):
same story as apache... please reserve /tests for charm tests.
James Page (james-page) wrote (posted in a previous version of this proposal):
Sidnei
This all looks like great work; I really like the approach you are taking to testing (gonna steal some of that goodness for my own charms) and you have pointed me at charmsupport, which overlaps a lot with the openstack-
I have one gripe right now: as is, this change will make the charm un-deployable from the charm store, because you have introduced a requirement to branch it locally and pull in the other dependencies.
I personally don't think this is acceptable; the haproxy branch should ship the required revisions of its dependencies so that this will continue to work.
James Page (james-page) wrote (posted in a previous version of this proposal):
Sorry - that should have been 'Needs Fixing' not 'Disapprove'.
Not got my head on straight this morning...
Sidnei da Silva (sidnei) wrote (posted in a previous version of this proposal):
Hi James,
re: dependencies that's actually not the case anymore, and I'll update the README to match. Notice how I've changed the install hook to be a shell script that adds a PPA and then installs the package dependencies, such that it still works if you deploy directly from the charm store.
David Britton (dpb) wrote (posted in a previous version of this proposal):
Sidnei: the README.md needs the following ppa url:
sudo add-apt-repository ppa:cjohnston/
it's misspelled right now.
David Britton (dpb) wrote (posted in a previous version of this proposal):
[2]: flake8 in the apt-get install line should be python-flake8
David Britton (dpb) wrote (posted in a previous version of this proposal):
[3]: also add python-nosexcover
David Britton (dpb) wrote (posted in a previous version of this proposal):
Just tested this branch with an upstream apache2 charm and a downstream landscape charm that depends on the relation-driven proxying change in r64. It's missing the functionality in r64 from trunk right now.
Unmerged revisions

- 83. By Tom Haddon: Merge from U1 charms (an upstream review is also in process for this). Adds charmhelpers and updates some tests.
- 84. By Matthias Arnason: [ev r=tiaz] Always write at least one listen stanza; do not pass null relation IDs to subprocess.
- 85. By Matthias Arnason: [sidnei r=tiaz] Switch to using the service name instead of the hostname in the backend server name, filter frontend-only options into the frontend, and create frontend/backend stanzas instead of a single listen stanza. Old listen stanzas are still supported when parsing, for backwards compatibility.
- 86. By David Ames: [dames, r=mthaddon] Revert revno 84 and make the notify_website call more explicit for relation_ids.
- 87. By JuanJo Ciarlante: [sidnei, r=jjo] Duplicate mode http/tcp and option httplog/tcplog between frontend and backend.
Preview Diff
1 | === added file '.bzrignore' | |||
2 | --- .bzrignore 1970-01-01 00:00:00 +0000 | |||
3 | +++ .bzrignore 2013-10-10 22:34:35 +0000 | |||
4 | @@ -0,0 +1,10 @@ | |||
5 | 1 | revision | ||
6 | 2 | _trial_temp | ||
7 | 3 | .coverage | ||
8 | 4 | coverage.xml | ||
9 | 5 | *.crt | ||
10 | 6 | *.key | ||
11 | 7 | lib/* | ||
12 | 8 | *.pyc | ||
13 | 9 | exec.d | ||
14 | 10 | build/charm-helpers | ||
15 | 0 | 11 | ||
16 | === added file 'Makefile' | |||
17 | --- Makefile 1970-01-01 00:00:00 +0000 | |||
18 | +++ Makefile 2013-10-10 22:34:35 +0000 | |||
19 | @@ -0,0 +1,39 @@ | |||
20 | 1 | PWD := $(shell pwd) | ||
21 | 2 | SOURCEDEPS_DIR ?= $(shell dirname $(PWD))/.sourcecode | ||
22 | 3 | HOOKS_DIR := $(PWD)/hooks | ||
23 | 4 | TEST_PREFIX := PYTHONPATH=$(HOOKS_DIR) | ||
24 | 5 | TEST_DIR := $(PWD)/hooks/tests | ||
25 | 6 | CHARM_DIR := $(PWD) | ||
26 | 7 | PYTHON := /usr/bin/env python | ||
27 | 8 | |||
28 | 9 | |||
29 | 10 | build: test lint proof | ||
30 | 11 | |||
31 | 12 | revision: | ||
32 | 13 | @test -f revision || echo 0 > revision | ||
33 | 14 | |||
34 | 15 | proof: revision | ||
35 | 16 | @echo Proofing charm... | ||
36 | 17 | @(charm proof $(PWD) || [ $$? -eq 100 ]) && echo OK | ||
37 | 18 | @test `cat revision` = 0 && rm revision | ||
38 | 19 | |||
39 | 20 | test: | ||
40 | 21 | @echo Starting tests... | ||
41 | 22 | @CHARM_DIR=$(CHARM_DIR) $(TEST_PREFIX) nosetests $(TEST_DIR) | ||
42 | 23 | |||
43 | 24 | lint: | ||
44 | 25 | @echo Checking for Python syntax... | ||
45 | 26 | @flake8 $(HOOKS_DIR) --ignore=E123 --exclude=$(HOOKS_DIR)/charmhelpers && echo OK | ||
46 | 27 | |||
47 | 28 | sourcedeps: $(PWD)/config-manager.txt | ||
48 | 29 | @echo Updating source dependencies... | ||
49 | 30 | @$(PYTHON) cm.py -c $(PWD)/config-manager.txt \ | ||
50 | 31 | -p $(SOURCEDEPS_DIR) \ | ||
51 | 32 | -t $(PWD) | ||
52 | 33 | @$(PYTHON) build/charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \ | ||
53 | 34 | -c charm-helpers.yaml \ | ||
54 | 35 | -b build/charm-helpers \ | ||
55 | 36 | -d hooks/charmhelpers | ||
56 | 37 | @echo Do not forget to commit the updated files if any. | ||
57 | 38 | |||
58 | 39 | .PHONY: revision proof test lint sourcedeps charm-payload | ||
59 | 0 | 40 | ||
60 | === modified file 'README.md' | |||
61 | --- README.md 2013-02-12 23:43:54 +0000 | |||
62 | +++ README.md 2013-10-10 22:34:35 +0000 | |||
63 | @@ -1,5 +1,5 @@ | |||
66 | 1 | Juju charm haproxy | 1 | Juju charm for HAProxy |
67 | 2 | ================== | 2 | ====================== |
68 | 3 | 3 | ||
69 | 4 | HAProxy is a free, very fast and reliable solution offering high availability, | 4 | HAProxy is a free, very fast and reliable solution offering high availability, |
70 | 5 | load balancing, and proxying for TCP and HTTP-based applications. It is | 5 | load balancing, and proxying for TCP and HTTP-based applications. It is |
71 | @@ -9,6 +9,23 @@ | |||
72 | 9 | integration into existing architectures very easy and riskless, while still | 9 | integration into existing architectures very easy and riskless, while still |
73 | 10 | offering the possibility not to expose fragile web servers to the Net. | 10 | offering the possibility not to expose fragile web servers to the Net. |
74 | 11 | 11 | ||
75 | 12 | Development | ||
76 | 13 | ----------- | ||
77 | 14 | The following steps are needed for testing and development of the charm, | ||
78 | 15 | but **not** for deployment: | ||
79 | 16 | |||
80 | 17 | sudo apt-get install python-software-properties | ||
81 | 18 | sudo add-apt-repository ppa:cjohnston/flake8 | ||
82 | 19 | sudo apt-get update | ||
83 | 20 | sudo apt-get install python-mock python-flake8 python-nose python-nosexcover | ||
84 | 21 | |||
85 | 22 | To run the tests: | ||
86 | 23 | |||
87 | 24 | make build | ||
88 | 25 | |||
89 | 26 | ... will run the unit tests, run flake8 over the source to warn about | ||
90 | 27 | formatting issues and output a code coverage summary of the 'hooks.py' module. | ||
91 | 28 | |||
92 | 12 | How to deploy the charm | 29 | How to deploy the charm |
93 | 13 | ----------------------- | 30 | ----------------------- |
94 | 14 | juju deploy haproxy | 31 | juju deploy haproxy |
95 | @@ -27,7 +44,7 @@ | |||
96 | 27 | the "Website Relation" section for more information about that. | 44 | the "Website Relation" section for more information about that. |
97 | 28 | 45 | ||
98 | 29 | When your charm hooks into reverseproxy you have two general approaches | 46 | When your charm hooks into reverseproxy you have two general approaches |
100 | 30 | which can be used to notify haproxy about what services you are running. | 47 | which can be used to notify haproxy about what services you are running. |
101 | 31 | 1) Single-service proxying or 2) Multi-service or relation-driven proxying. | 48 | 1) Single-service proxying or 2) Multi-service or relation-driven proxying. |
102 | 32 | 49 | ||
103 | 33 | ** 1) Single-Service Proxying ** | 50 | ** 1) Single-Service Proxying ** |
104 | @@ -67,7 +84,7 @@ | |||
105 | 67 | 84 | ||
106 | 68 | #!/bin/bash | 85 | #!/bin/bash |
107 | 69 | # hooks/website-relation-changed | 86 | # hooks/website-relation-changed |
109 | 70 | 87 | ||
110 | 71 | host=$(unit-get private-address) | 88 | host=$(unit-get private-address) |
111 | 72 | port=80 | 89 | port=80 |
112 | 73 | 90 | ||
113 | @@ -80,7 +97,7 @@ | |||
114 | 80 | " | 97 | " |
115 | 81 | 98 | ||
116 | 82 | Once set, haproxy will union multiple `servers` stanzas from any units | 99 | Once set, haproxy will union multiple `servers` stanzas from any units |
118 | 83 | joining with the same `service_name` under one listen stanza. | 100 | joining with the same `service_name` under one listen stanza. |
119 | 84 | `service-options` and `server_options` will be overwritten, so ensure they | 101 | `service-options` and `server_options` will be overwritten, so ensure they |
120 | 85 | are set uniformly on all services with the same name. | 102 | are set uniformly on all services with the same name. |
121 | 86 | 103 | ||
122 | @@ -102,18 +119,8 @@ | |||
123 | 102 | Many of the haproxy settings can be altered via the standard juju configuration | 119 | Many of the haproxy settings can be altered via the standard juju configuration |
124 | 103 | settings. Please see the config.yaml file as each is fairly clearly documented. | 120 | settings. Please see the config.yaml file as each is fairly clearly documented. |
125 | 104 | 121 | ||
126 | 105 | Testing | ||
127 | 106 | ------- | ||
128 | 107 | This charm has a simple unit-test program. Please expand it and make sure new | ||
129 | 108 | changes are covered by simple unit tests. To run the unit tests: | ||
130 | 109 | |||
131 | 110 | sudo apt-get install python-mocker | ||
132 | 111 | sudo apt-get install python-twisted-core | ||
133 | 112 | cd hooks; trial test_hooks | ||
134 | 113 | |||
135 | 114 | TODO: | 122 | TODO: |
136 | 115 | ----- | 123 | ----- |
137 | 116 | 124 | ||
138 | 117 | * Expand Single-Service section as I have not tested that mode fully. | 125 | * Expand Single-Service section as I have not tested that mode fully. |
139 | 118 | * Trigger website-relation-changed when the reverse-proxy relation changes | 126 | * Trigger website-relation-changed when the reverse-proxy relation changes |
140 | 119 | |||
141 | 120 | 127 | ||
142 | === added directory 'build' | |||
143 | === added file 'charm-helpers.yaml' | |||
144 | --- charm-helpers.yaml 1970-01-01 00:00:00 +0000 | |||
145 | +++ charm-helpers.yaml 2013-10-10 22:34:35 +0000 | |||
146 | @@ -0,0 +1,4 @@ | |||
147 | 1 | include: | ||
148 | 2 | - core | ||
149 | 3 | - fetch | ||
150 | 4 | - contrib.charmsupport | ||
151 | 0 | \ No newline at end of file | 5 | \ No newline at end of file |
152 | 1 | 6 | ||
153 | === added file 'cm.py' | |||
154 | --- cm.py 1970-01-01 00:00:00 +0000 | |||
155 | +++ cm.py 2013-10-10 22:34:35 +0000 | |||
156 | @@ -0,0 +1,193 @@ | |||
157 | 1 | # Copyright 2010-2013 Canonical Ltd. All rights reserved. | ||
158 | 2 | import os | ||
159 | 3 | import re | ||
160 | 4 | import sys | ||
161 | 5 | import errno | ||
162 | 6 | import hashlib | ||
163 | 7 | import subprocess | ||
164 | 8 | import optparse | ||
165 | 9 | |||
166 | 10 | from os import curdir | ||
167 | 11 | from bzrlib.branch import Branch | ||
168 | 12 | from bzrlib.plugin import load_plugins | ||
169 | 13 | load_plugins() | ||
170 | 14 | from bzrlib.plugins.launchpad import account as lp_account | ||
171 | 15 | |||
172 | 16 | if 'GlobalConfig' in dir(lp_account): | ||
173 | 17 | from bzrlib.config import LocationConfig as LocationConfiguration | ||
174 | 18 | _ = LocationConfiguration | ||
175 | 19 | else: | ||
176 | 20 | from bzrlib.config import LocationStack as LocationConfiguration | ||
177 | 21 | _ = LocationConfiguration | ||
178 | 22 | |||
179 | 23 | |||
180 | 24 | def get_branch_config(config_file): | ||
181 | 25 | """ | ||
182 | 26 | Retrieves the sourcedeps configuration for an source dir. | ||
183 | 27 | Returns a dict of (branch, revspec) tuples, keyed by branch name. | ||
184 | 28 | """ | ||
185 | 29 | branches = {} | ||
186 | 30 | with open(config_file, 'r') as stream: | ||
187 | 31 | for line in stream: | ||
188 | 32 | line = line.split('#')[0].strip() | ||
189 | 33 | bzr_match = re.match(r'(\S+)\s+' | ||
190 | 34 | 'lp:([^;]+)' | ||
191 | 35 | '(?:;revno=(\d+))?', line) | ||
192 | 36 | if bzr_match: | ||
193 | 37 | name, branch, revno = bzr_match.group(1, 2, 3) | ||
194 | 38 | if revno is None: | ||
195 | 39 | revspec = -1 | ||
196 | 40 | else: | ||
197 | 41 | revspec = revno | ||
198 | 42 | branches[name] = (branch, revspec) | ||
199 | 43 | continue | ||
200 | 44 | dir_match = re.match(r'(\S+)\s+' | ||
201 | 45 | '\(directory\)', line) | ||
202 | 46 | if dir_match: | ||
203 | 47 | name = dir_match.group(1) | ||
204 | 48 | branches[name] = None | ||
205 | 49 | return branches | ||
206 | 50 | |||
207 | 51 | |||
208 | 52 | def main(config_file, parent_dir, target_dir, verbose): | ||
209 | 53 | """Do the deed.""" | ||
210 | 54 | |||
211 | 55 | try: | ||
212 | 56 | os.makedirs(parent_dir) | ||
213 | 57 | except OSError, e: | ||
214 | 58 | if e.errno != errno.EEXIST: | ||
215 | 59 | raise | ||
216 | 60 | |||
217 | 61 | branches = sorted(get_branch_config(config_file).items()) | ||
218 | 62 | for branch_name, spec in branches: | ||
219 | 63 | if spec is None: | ||
220 | 64 | # It's a directory, just create it and move on. | ||
221 | 65 | destination_path = os.path.join(target_dir, branch_name) | ||
222 | 66 | if not os.path.isdir(destination_path): | ||
223 | 67 | os.makedirs(destination_path) | ||
224 | 68 | continue | ||
225 | 69 | |||
226 | 70 | (quoted_branch_spec, revspec) = spec | ||
227 | 71 | revno = int(revspec) | ||
228 | 72 | |||
229 | 73 | # qualify mirror branch name with hash of remote repo path to deal | ||
230 | 74 | # with changes to the remote branch URL over time | ||
231 | 75 | branch_spec_digest = hashlib.sha1(quoted_branch_spec).hexdigest() | ||
232 | 76 | branch_directory = branch_spec_digest | ||
233 | 77 | |||
234 | 78 | source_path = os.path.join(parent_dir, branch_directory) | ||
235 | 79 | destination_path = os.path.join(target_dir, branch_name) | ||
236 | 80 | |||
237 | 81 | # Remove leftover symlinks/stray files. | ||
238 | 82 | try: | ||
239 | 83 | os.remove(destination_path) | ||
240 | 84 | except OSError, e: | ||
241 | 85 | if e.errno != errno.EISDIR and e.errno != errno.ENOENT: | ||
242 | 86 | raise | ||
243 | 87 | |||
244 | 88 | lp_url = "lp:" + quoted_branch_spec | ||
245 | 89 | |||
246 | 90 | # Create the local mirror branch if it doesn't already exist | ||
247 | 91 | if verbose: | ||
248 | 92 | sys.stderr.write('%30s: ' % (branch_name,)) | ||
249 | 93 | sys.stderr.flush() | ||
250 | 94 | |||
251 | 95 | fresh = False | ||
252 | 96 | if not os.path.exists(source_path): | ||
253 | 97 | subprocess.check_call(['bzr', 'branch', '-q', '--no-tree', | ||
254 | 98 | '--', lp_url, source_path]) | ||
255 | 99 | fresh = True | ||
256 | 100 | |||
257 | 101 | if not fresh: | ||
258 | 102 | source_branch = Branch.open(source_path) | ||
259 | 103 | if revno == -1: | ||
260 | 104 | orig_branch = Branch.open(lp_url) | ||
261 | 105 | fresh = source_branch.revno() == orig_branch.revno() | ||
262 | 106 | else: | ||
263 | 107 | fresh = source_branch.revno() == revno | ||
264 | 108 | |||
265 | 109 | # Freshen the source branch if required. | ||
266 | 110 | if not fresh: | ||
267 | 111 | subprocess.check_call(['bzr', 'pull', '-q', '--overwrite', '-r', | ||
268 | 112 | str(revno), '-d', source_path, | ||
269 | 113 | '--', lp_url]) | ||
270 | 114 | |||
271 | 115 | if os.path.exists(destination_path): | ||
272 | 116 | # Overwrite the destination with the appropriate revision. | ||
273 | 117 | subprocess.check_call(['bzr', 'clean-tree', '--force', '-q', | ||
274 | 118 | '--ignored', '-d', destination_path]) | ||
275 | 119 | subprocess.check_call(['bzr', 'pull', '-q', '--overwrite', | ||
276 | 120 | '-r', str(revno), | ||
277 | 121 | '-d', destination_path, '--', source_path]) | ||
278 | 122 | else: | ||
279 | 123 | # Create a new branch. | ||
280 | 124 | subprocess.check_call(['bzr', 'branch', '-q', '--hardlink', | ||
281 | 125 | '-r', str(revno), | ||
282 | 126 | '--', source_path, destination_path]) | ||
283 | 127 | |||
284 | 128 | # Check the state of the destination branch. | ||
285 | 129 | destination_branch = Branch.open(destination_path) | ||
286 | 130 | destination_revno = destination_branch.revno() | ||
287 | 131 | |||
288 | 132 | if verbose: | ||
289 | 133 | sys.stderr.write('checked out %4s of %s\n' % | ||
290 | 134 | ("r" + str(destination_revno), lp_url)) | ||
291 | 135 | sys.stderr.flush() | ||
292 | 136 | |||
293 | 137 | if revno != -1 and destination_revno != revno: | ||
294 | 138 | raise RuntimeError("Expected revno %d but got revno %d" % | ||
295 | 139 | (revno, destination_revno)) | ||
296 | 140 | |||
297 | 141 | if __name__ == '__main__': | ||
298 | 142 | parser = optparse.OptionParser( | ||
299 | 143 | usage="%prog [options]", | ||
300 | 144 | description=( | ||
301 | 145 | "Add a lightweight checkout in <target> for each " | ||
302 | 146 | "corresponding file in <parent>."), | ||
303 | 147 | add_help_option=False) | ||
304 | 148 | parser.add_option( | ||
305 | 149 | '-p', '--parent', dest='parent', | ||
306 | 150 | default=None, | ||
307 | 151 | help=("The directory of the parent tree."), | ||
308 | 152 | metavar="DIR") | ||
309 | 153 | parser.add_option( | ||
310 | 154 | '-t', '--target', dest='target', default=curdir, | ||
311 | 155 | help=("The directory of the target tree."), | ||
312 | 156 | metavar="DIR") | ||
313 | 157 | parser.add_option( | ||
314 | 158 | '-c', '--config', dest='config', default=None, | ||
315 | 159 | help=("The config file to be used for config-manager."), | ||
316 | 160 | metavar="DIR") | ||
317 | 161 | parser.add_option( | ||
318 | 162 | '-q', '--quiet', dest='verbose', action='store_false', | ||
319 | 163 | help="Be less verbose.") | ||
320 | 164 | parser.add_option( | ||
321 | 165 | '-v', '--verbose', dest='verbose', action='store_true', | ||
322 | 166 | help="Be more verbose.") | ||
323 | 167 | parser.add_option( | ||
324 | 168 | '-h', '--help', action='help', | ||
325 | 169 | help="Show this help message and exit.") | ||
326 | 170 | parser.set_defaults(verbose=True) | ||
327 | 171 | |||
328 | 172 | options, args = parser.parse_args() | ||
329 | 173 | |||
330 | 174 | if options.parent is None: | ||
331 | 175 | options.parent = os.environ.get( | ||
332 | 176 | "SOURCEDEPS_DIR", | ||
333 | 177 | os.path.join(curdir, ".sourcecode")) | ||
334 | 178 | |||
335 | 179 | if options.target is None: | ||
336 | 180 | parser.error( | ||
337 | 181 | "Target directory not specified.") | ||
338 | 182 | |||
339 | 183 | if options.config is None: | ||
340 | 184 | config = [arg for arg in args | ||
341 | 185 | if arg != "update"] | ||
342 | 186 | if not config or len(config) > 1: | ||
343 | 187 | parser.error("Config not specified") | ||
344 | 188 | options.config = config[0] | ||
345 | 189 | |||
346 | 190 | sys.exit(main(config_file=options.config, | ||
347 | 191 | parent_dir=options.parent, | ||
348 | 192 | target_dir=options.target, | ||
349 | 193 | verbose=options.verbose)) | ||
350 | 0 | 194 | ||
351 | === added file 'config-manager.txt' | |||
352 | --- config-manager.txt 1970-01-01 00:00:00 +0000 | |||
353 | +++ config-manager.txt 2013-10-10 22:34:35 +0000 | |||
354 | @@ -0,0 +1,6 @@ | |||
355 | 1 | # After making changes to this file, to ensure that your sourcedeps are | ||
356 | 2 | # up-to-date do: | ||
357 | 3 | # | ||
358 | 4 | # make sourcedeps | ||
359 | 5 | |||
360 | 6 | ./build/charm-helpers lp:charm-helpers;revno=70 | ||
361 | 0 | 7 | ||
362 | === modified file 'config.yaml' | |||
363 | --- config.yaml 2012-10-10 14:38:47 +0000 | |||
364 | +++ config.yaml 2013-10-10 22:34:35 +0000 | |||
365 | @@ -59,7 +59,7 @@ | |||
366 | 59 | restarting, a turn-around timer of 1 second is applied before a retry | 59 | restarting, a turn-around timer of 1 second is applied before a retry |
367 | 60 | occurs. | 60 | occurs. |
368 | 61 | default_timeouts: | 61 | default_timeouts: |
370 | 62 | default: "queue 1000, connect 1000, client 1000, server 1000" | 62 | default: "queue 20000, client 50000, connect 5000, server 50000" |
371 | 63 | type: string | 63 | type: string |
372 | 64 | description: Default timeouts | 64 | description: Default timeouts |
373 | 65 | enable_monitoring: | 65 | enable_monitoring: |
374 | @@ -90,6 +90,12 @@ | |||
375 | 90 | default: 3 | 90 | default: 3 |
376 | 91 | type: int | 91 | type: int |
377 | 92 | description: Monitoring interface refresh interval (in seconds) | 92 | description: Monitoring interface refresh interval (in seconds) |
378 | 93 | package_status: | ||
379 | 94 | default: "install" | ||
380 | 95 | type: "string" | ||
381 | 96 | description: | | ||
382 | 97 | The status of service-affecting packages will be set to this value in the dpkg database. | ||
383 | 98 | Useful valid values are "install" and "hold". | ||
384 | 93 | services: | 99 | services: |
385 | 94 | default: | | 100 | default: | |
386 | 95 | - service_name: haproxy_service | 101 | - service_name: haproxy_service |
387 | @@ -106,6 +112,12 @@ | |||
388 | 106 | before the first variable, service_name, as above. Service options is a | 112 | before the first variable, service_name, as above. Service options is a |
389 | 107 | comma separated list, server options will be appended as a string to | 113 | comma separated list, server options will be appended as a string to |
390 | 108 | the individual server lines for a given listen stanza. | 114 | the individual server lines for a given listen stanza. |
391 | 115 | sysctl: | ||
392 | 116 | default: "" | ||
393 | 117 | type: string | ||
394 | 118 | description: > | ||
395 | 119 | YAML-formatted list of sysctl values, e.g.: | ||
396 | 120 | '{ net.ipv4.tcp_max_syn_backlog : 65536 }' | ||
397 | 109 | nagios_context: | 121 | nagios_context: |
398 | 110 | default: "juju" | 122 | default: "juju" |
399 | 111 | type: string | 123 | type: string |
400 | 112 | 124 | ||
401 | === renamed directory 'files/nrpe-external-master' => 'files/nrpe' | |||
402 | === modified file 'files/nrpe/check_haproxy.sh' | |||
403 | --- files/nrpe-external-master/check_haproxy.sh 2012-11-07 22:32:06 +0000 | |||
404 | +++ files/nrpe/check_haproxy.sh 2013-10-10 22:34:35 +0000 | |||
405 | @@ -2,7 +2,7 @@ | |||
406 | 2 | #-------------------------------------------- | 2 | #-------------------------------------------- |
407 | 3 | # This file is managed by Juju | 3 | # This file is managed by Juju |
408 | 4 | #-------------------------------------------- | 4 | #-------------------------------------------- |
410 | 5 | # | 5 | # |
411 | 6 | # Copyright 2009,2012 Canonical Ltd. | 6 | # Copyright 2009,2012 Canonical Ltd. |
412 | 7 | # Author: Tom Haddon | 7 | # Author: Tom Haddon |
413 | 8 | 8 | ||
414 | @@ -13,7 +13,7 @@ | |||
415 | 13 | 13 | ||
416 | 14 | for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2'}); | 14 | for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2'}); |
417 | 15 | do | 15 | do |
419 | 16 | output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 10000 --regex="class=\"active(2|3).*${appserver}" -e ' 200 OK') | 16 | output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 10000 --regex="class=\"(active|backup)(2|3).*${appserver}" -e ' 200 OK') |
420 | 17 | if [ $? != 0 ]; then | 17 | if [ $? != 0 ]; then |
421 | 18 | date >> $LOGFILE | 18 | date >> $LOGFILE |
422 | 19 | echo $output >> $LOGFILE | 19 | echo $output >> $LOGFILE |
423 | @@ -30,4 +30,3 @@ | |||
424 | 30 | 30 | ||
425 | 31 | echo "OK: All haproxy instances looking good" | 31 | echo "OK: All haproxy instances looking good" |
426 | 32 | exit 0 | 32 | exit 0 |
427 | 33 | |||
428 | 34 | 33 | ||
429 | === added directory 'hooks/charmhelpers' | |||
430 | === added file 'hooks/charmhelpers/__init__.py' | |||
431 | === added directory 'hooks/charmhelpers/contrib' | |||
432 | === added file 'hooks/charmhelpers/contrib/__init__.py' | |||
433 | === added directory 'hooks/charmhelpers/contrib/charmsupport' | |||
434 | === added file 'hooks/charmhelpers/contrib/charmsupport/__init__.py' | |||
435 | === added file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py' | |||
436 | --- hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000 | |||
437 | +++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2013-10-10 22:34:35 +0000 | |||
438 | @@ -0,0 +1,218 @@ | |||
439 | 1 | """Compatibility with the nrpe-external-master charm""" | ||
440 | 2 | # Copyright 2012 Canonical Ltd. | ||
441 | 3 | # | ||
442 | 4 | # Authors: | ||
443 | 5 | # Matthew Wedgwood <matthew.wedgwood@canonical.com> | ||
444 | 6 | |||
445 | 7 | import subprocess | ||
446 | 8 | import pwd | ||
447 | 9 | import grp | ||
448 | 10 | import os | ||
449 | 11 | import re | ||
450 | 12 | import shlex | ||
451 | 13 | import yaml | ||
452 | 14 | |||
453 | 15 | from charmhelpers.core.hookenv import ( | ||
454 | 16 | config, | ||
455 | 17 | local_unit, | ||
456 | 18 | log, | ||
457 | 19 | relation_ids, | ||
458 | 20 | relation_set, | ||
459 | 21 | ) | ||
460 | 22 | |||
461 | 23 | from charmhelpers.core.host import service | ||
462 | 24 | |||
463 | 25 | # This module adds compatibility with the nrpe-external-master and plain nrpe | ||
464 | 26 | # subordinate charms. To use it in your charm: | ||
465 | 27 | # | ||
466 | 28 | # 1. Update metadata.yaml | ||
467 | 29 | # | ||
468 | 30 | # provides: | ||
469 | 31 | # (...) | ||
470 | 32 | # nrpe-external-master: | ||
471 | 33 | # interface: nrpe-external-master | ||
472 | 34 | # scope: container | ||
473 | 35 | # | ||
474 | 36 | # and/or | ||
475 | 37 | # | ||
476 | 38 | # provides: | ||
477 | 39 | # (...) | ||
478 | 40 | # local-monitors: | ||
479 | 41 | # interface: local-monitors | ||
480 | 42 | # scope: container | ||
481 | 43 | |||
482 | 44 | # | ||
483 | 45 | # 2. Add the following to config.yaml | ||
484 | 46 | # | ||
485 | 47 | # nagios_context: | ||
486 | 48 | # default: "juju" | ||
487 | 49 | # type: string | ||
488 | 50 | # description: | | ||
489 | 51 | # Used by the nrpe subordinate charms. | ||
490 | 52 | # A string that will be prepended to instance name to set the host name | ||
491 | 53 | # in nagios. So for instance the hostname would be something like: | ||
492 | 54 | # juju-myservice-0 | ||
493 | 55 | # If you're running multiple environments with the same services in them | ||
494 | 56 | # this allows you to differentiate between them. | ||
495 | 57 | # | ||
496 | 58 | # 3. Add custom checks (Nagios plugins) to files/nrpe-external-master | ||
497 | 59 | # | ||
498 | 60 | # 4. Update your hooks.py with something like this: | ||
499 | 61 | # | ||
500 | 62 | # from charmsupport.nrpe import NRPE | ||
501 | 63 | # (...) | ||
502 | 64 | # def update_nrpe_config(): | ||
503 | 65 | # nrpe_compat = NRPE() | ||
504 | 66 | # nrpe_compat.add_check( | ||
505 | 67 | # shortname = "myservice", | ||
506 | 68 | # description = "Check MyService", | ||
507 | 69 | # check_cmd = "check_http -w 2 -c 10 http://localhost" | ||
508 | 70 | # ) | ||
509 | 71 | # nrpe_compat.add_check( | ||
510 | 72 | # "myservice_other", | ||
511 | 73 | # "Check for widget failures", | ||
512 | 74 | # check_cmd = "/srv/myapp/scripts/widget_check" | ||
513 | 75 | # ) | ||
514 | 76 | # nrpe_compat.write() | ||
515 | 77 | # | ||
516 | 78 | # def config_changed(): | ||
517 | 79 | # (...) | ||
518 | 80 | # update_nrpe_config() | ||
519 | 81 | # | ||
520 | 82 | # def nrpe_external_master_relation_changed(): | ||
521 | 83 | # update_nrpe_config() | ||
522 | 84 | # | ||
523 | 85 | # def local_monitors_relation_changed(): | ||
524 | 86 | # update_nrpe_config() | ||
525 | 87 | # | ||
526 | 88 | # 5. ln -s hooks.py nrpe-external-master-relation-changed | ||
527 | 89 | # ln -s hooks.py local-monitors-relation-changed | ||
528 | 90 | |||
529 | 91 | |||
530 | 92 | class CheckException(Exception): | ||
531 | 93 | pass | ||
532 | 94 | |||
533 | 95 | |||
534 | 96 | class Check(object): | ||
535 | 97 | shortname_re = '[A-Za-z0-9-_]+$' | ||
536 | 98 | service_template = (""" | ||
537 | 99 | #--------------------------------------------------- | ||
538 | 100 | # This file is Juju managed | ||
539 | 101 | #--------------------------------------------------- | ||
540 | 102 | define service {{ | ||
541 | 103 | use active-service | ||
542 | 104 | host_name {nagios_hostname} | ||
543 | 105 | service_description {nagios_hostname}[{shortname}] """ | ||
544 | 106 | """{description} | ||
545 | 107 | check_command check_nrpe!{command} | ||
546 | 108 | servicegroups {nagios_servicegroup} | ||
547 | 109 | }} | ||
548 | 110 | """) | ||
549 | 111 | |||
550 | 112 | def __init__(self, shortname, description, check_cmd): | ||
551 | 113 | super(Check, self).__init__() | ||
552 | 114 | # XXX: could be better to calculate this from the service name | ||
553 | 115 | if not re.match(self.shortname_re, shortname): | ||
554 | 116 | raise CheckException("shortname must match {}".format( | ||
555 | 117 | Check.shortname_re)) | ||
556 | 118 | self.shortname = shortname | ||
557 | 119 | self.command = "check_{}".format(shortname) | ||
558 | 120 | # Note: a set of invalid characters is defined by the | ||
559 | 121 | # Nagios server config | ||
560 | 122 | # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()= | ||
561 | 123 | self.description = description | ||
562 | 124 | self.check_cmd = self._locate_cmd(check_cmd) | ||
563 | 125 | |||
564 | 126 | def _locate_cmd(self, check_cmd): | ||
565 | 127 | search_path = ( | ||
566 | 128 | '/', | ||
567 | 129 | os.path.join(os.environ['CHARM_DIR'], | ||
568 | 130 | 'files/nrpe-external-master'), | ||
569 | 131 | '/usr/lib/nagios/plugins', | ||
570 | 132 | ) | ||
571 | 133 | parts = shlex.split(check_cmd) | ||
572 | 134 | for path in search_path: | ||
573 | 135 | if os.path.exists(os.path.join(path, parts[0])): | ||
574 | 136 | command = os.path.join(path, parts[0]) | ||
575 | 137 | if len(parts) > 1: | ||
576 | 138 | command += " " + " ".join(parts[1:]) | ||
577 | 139 | return command | ||
578 | 140 | log('Check command not found: {}'.format(parts[0])) | ||
579 | 141 | return '' | ||
580 | 142 | |||
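The lookup in `_locate_cmd` walks a fixed search path and returns the first directory containing the named executable, keeping any trailing arguments. A standalone sketch of the same idea, with a hypothetical `locate_cmd` that takes the search path as a parameter instead of reading `CHARM_DIR`:

```python
import os
import shlex
import tempfile

def locate_cmd(check_cmd, search_path):
    """Resolve the executable named in check_cmd against search_path,
    preserving its arguments; return '' when nothing matches."""
    parts = shlex.split(check_cmd)
    for path in search_path:
        candidate = os.path.join(path, parts[0])
        if os.path.exists(candidate):
            return " ".join([candidate] + parts[1:])
    return ''

# Throwaway plugin directory standing in for /usr/lib/nagios/plugins.
plugins = tempfile.mkdtemp()
open(os.path.join(plugins, 'check_http'), 'w').close()
resolved = locate_cmd('check_http -w 2 -c 10', [plugins])
missing = locate_cmd('check_nonexistent', [plugins])
```

Only the command word is resolved; arguments pass through untouched, and an empty string signals "not found" just as in the class above.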
581 | 143 | def write(self, nagios_context, hostname): | ||
582 | 144 | nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format( | ||
583 | 145 | self.command) | ||
584 | 146 | with open(nrpe_check_file, 'w') as nrpe_check_config: | ||
585 | 147 | nrpe_check_config.write("# check {}\n".format(self.shortname)) | ||
586 | 148 | nrpe_check_config.write("command[{}]={}\n".format( | ||
587 | 149 | self.command, self.check_cmd)) | ||
588 | 150 | |||
589 | 151 | if not os.path.exists(NRPE.nagios_exportdir): | ||
590 | 152 | log('Not writing service config as {} is not accessible'.format( | ||
591 | 153 | NRPE.nagios_exportdir)) | ||
592 | 154 | else: | ||
593 | 155 | self.write_service_config(nagios_context, hostname) | ||
594 | 156 | |||
595 | 157 | def write_service_config(self, nagios_context, hostname): | ||
596 | 158 | for f in os.listdir(NRPE.nagios_exportdir): | ||
597 | 159 | if re.search('.*{}.cfg'.format(self.command), f): | ||
598 | 160 | os.remove(os.path.join(NRPE.nagios_exportdir, f)) | ||
599 | 161 | |||
600 | 162 | templ_vars = { | ||
601 | 163 | 'nagios_hostname': hostname, | ||
602 | 164 | 'nagios_servicegroup': nagios_context, | ||
603 | 165 | 'description': self.description, | ||
604 | 166 | 'shortname': self.shortname, | ||
605 | 167 | 'command': self.command, | ||
606 | 168 | } | ||
607 | 169 | nrpe_service_text = Check.service_template.format(**templ_vars) | ||
608 | 170 | nrpe_service_file = '{}/service__{}_{}.cfg'.format( | ||
609 | 171 | NRPE.nagios_exportdir, hostname, self.command) | ||
610 | 172 | with open(nrpe_service_file, 'w') as nrpe_service_config: | ||
611 | 173 | nrpe_service_config.write(str(nrpe_service_text)) | ||
612 | 174 | |||
613 | 175 | def run(self): | ||
614 | 176 | subprocess.call(self.check_cmd) | ||
615 | 177 | |||
616 | 178 | |||
617 | 179 | class NRPE(object): | ||
618 | 180 | nagios_logdir = '/var/log/nagios' | ||
619 | 181 | nagios_exportdir = '/var/lib/nagios/export' | ||
620 | 182 | nrpe_confdir = '/etc/nagios/nrpe.d' | ||
621 | 183 | |||
622 | 184 | def __init__(self): | ||
623 | 185 | super(NRPE, self).__init__() | ||
624 | 186 | self.config = config() | ||
625 | 187 | self.nagios_context = self.config['nagios_context'] | ||
626 | 188 | self.unit_name = local_unit().replace('/', '-') | ||
627 | 189 | self.hostname = "{}-{}".format(self.nagios_context, self.unit_name) | ||
628 | 190 | self.checks = [] | ||
629 | 191 | |||
630 | 192 | def add_check(self, *args, **kwargs): | ||
631 | 193 | self.checks.append(Check(*args, **kwargs)) | ||
632 | 194 | |||
633 | 195 | def write(self): | ||
634 | 196 | try: | ||
635 | 197 | nagios_uid = pwd.getpwnam('nagios').pw_uid | ||
636 | 198 | nagios_gid = grp.getgrnam('nagios').gr_gid | ||
637 | 199 | except KeyError: | ||
638 | 200 | log("Nagios user not set up, nrpe checks not updated") | ||
639 | 201 | return | ||
640 | 202 | |||
641 | 203 | if not os.path.exists(NRPE.nagios_logdir): | ||
642 | 204 | os.mkdir(NRPE.nagios_logdir) | ||
643 | 205 | os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid) | ||
644 | 206 | |||
645 | 207 | nrpe_monitors = {} | ||
646 | 208 | monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}} | ||
647 | 209 | for nrpecheck in self.checks: | ||
648 | 210 | nrpecheck.write(self.nagios_context, self.hostname) | ||
649 | 211 | nrpe_monitors[nrpecheck.shortname] = { | ||
650 | 212 | "command": nrpecheck.command, | ||
651 | 213 | } | ||
652 | 214 | |||
653 | 215 | service('restart', 'nagios-nrpe-server') | ||
654 | 216 | |||
655 | 217 | for rid in relation_ids("local-monitors"): | ||
656 | 218 | relation_set(relation_id=rid, monitors=yaml.dump(monitors)) | ||
658 | === added file 'hooks/charmhelpers/contrib/charmsupport/volumes.py' | |||
659 | --- hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000 | |||
660 | +++ hooks/charmhelpers/contrib/charmsupport/volumes.py 2013-10-10 22:34:35 +0000 | |||
661 | @@ -0,0 +1,156 @@ | |||
662 | 1 | ''' | ||
663 | 2 | Functions for managing volumes in juju units. One volume is supported per unit. | ||
664 | 3 | Subordinates may have their own storage, provided it is on its own partition. | ||
665 | 4 | |||
666 | 5 | Configuration stanzas: | ||
667 | 6 | volume-ephemeral: | ||
668 | 7 | type: boolean | ||
669 | 8 | default: true | ||
670 | 9 | description: > | ||
671 | 10 | If false, a volume is mounted as specified in "volume-map" | ||
672 | 11 | If true, ephemeral storage will be used, meaning that log data | ||
673 | 12 | will only exist as long as the machine. YOU HAVE BEEN WARNED. | ||
674 | 13 | volume-map: | ||
675 | 14 | type: string | ||
676 | 15 | default: {} | ||
677 | 16 | description: > | ||
678 | 17 | YAML map of units to device names, e.g: | ||
679 | 18 | "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }" | ||
680 | 19 | Service units will raise a configure-error if volume-ephemeral | ||
681 | 20 | is 'false' and no volume-map value is set. Use 'juju set' to set a | ||
682 | 21 | value and 'juju resolved' to complete configuration. | ||
683 | 22 | |||
684 | 23 | Usage: | ||
685 | 24 | from charmsupport.volumes import configure_volume, VolumeConfigurationError | ||
686 | 25 | from charmsupport.hookenv import log, ERROR | ||
687 | 26 | def pre_mount_hook(): | ||
688 | 27 | stop_service('myservice') | ||
689 | 28 | def post_mount_hook(): | ||
690 | 29 | start_service('myservice') | ||
691 | 30 | |||
692 | 31 | if __name__ == '__main__': | ||
693 | 32 | try: | ||
694 | 33 | configure_volume(before_change=pre_mount_hook, | ||
695 | 34 | after_change=post_mount_hook) | ||
696 | 35 | except VolumeConfigurationError: | ||
697 | 36 | log('Storage could not be configured', ERROR) | ||
698 | 37 | ''' | ||
699 | 38 | |||
700 | 39 | # XXX: Known limitations | ||
701 | 40 | # - fstab is neither consulted nor updated | ||
702 | 41 | |||
703 | 42 | import os | ||
704 | 43 | from charmhelpers.core import hookenv | ||
705 | 44 | from charmhelpers.core import host | ||
706 | 45 | import yaml | ||
707 | 46 | |||
708 | 47 | |||
709 | 48 | MOUNT_BASE = '/srv/juju/volumes' | ||
710 | 49 | |||
711 | 50 | |||
712 | 51 | class VolumeConfigurationError(Exception): | ||
713 | 52 | '''Volume configuration data is missing or invalid''' | ||
714 | 53 | pass | ||
715 | 54 | |||
716 | 55 | |||
717 | 56 | def get_config(): | ||
718 | 57 | '''Gather and sanity-check volume configuration data''' | ||
719 | 58 | volume_config = {} | ||
720 | 59 | config = hookenv.config() | ||
721 | 60 | |||
722 | 61 | errors = False | ||
723 | 62 | |||
724 | 63 | if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'): | ||
725 | 64 | volume_config['ephemeral'] = True | ||
726 | 65 | else: | ||
727 | 66 | volume_config['ephemeral'] = False | ||
728 | 67 | |||
729 | 68 | try: | ||
730 | 69 | volume_map = yaml.safe_load(config.get('volume-map', '{}')) | ||
731 | 70 | except yaml.YAMLError as e: | ||
732 | 71 | hookenv.log("Error parsing YAML volume-map: {}".format(e), | ||
733 | 72 | hookenv.ERROR) | ||
734 | 73 | errors = True | ||
734 | 73 | volume_map = {} # avoid NameError in the checks below | ||
735 | 74 | if volume_map is None: | ||
736 | 75 | # probably an empty string | ||
737 | 76 | volume_map = {} | ||
738 | 77 | elif not isinstance(volume_map, dict): | ||
739 | 78 | hookenv.log("Volume-map should be a dictionary, not {}".format( | ||
740 | 79 | type(volume_map)), hookenv.ERROR) | ||
741 | 80 | errors = True | ||
742 | 81 | |||
743 | 82 | volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME']) | ||
744 | 83 | if volume_config['device'] and volume_config['ephemeral']: | ||
745 | 84 | # asked for ephemeral storage but also defined a volume ID | ||
746 | 85 | hookenv.log('A volume is defined for this unit, but ephemeral ' | ||
747 | 86 | 'storage was requested', hookenv.ERROR) | ||
748 | 87 | errors = True | ||
749 | 88 | elif not volume_config['device'] and not volume_config['ephemeral']: | ||
750 | 89 | # asked for permanent storage but did not define volume ID | ||
751 | 90 | hookenv.log('Persistent storage was requested, but there is no volume ' | ||
752 | 91 | 'defined for this unit.', hookenv.ERROR) | ||
753 | 92 | errors = True | ||
754 | 93 | |||
755 | 94 | unit_mount_name = hookenv.local_unit().replace('/', '-') | ||
756 | 95 | volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name) | ||
757 | 96 | |||
758 | 97 | if errors: | ||
759 | 98 | return None | ||
760 | 99 | return volume_config | ||
761 | 100 | |||
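The ephemeral/device cross-check in `get_config` reduces to a small truth table: exactly one of "ephemeral requested" and "device mapped" must hold. A standalone sketch with a hypothetical `validate_volume` helper (no Juju environment needed):

```python
def validate_volume(ephemeral, device):
    """Return an error string, or None when the pair is consistent."""
    if device and ephemeral:
        return 'a volume is defined, but ephemeral storage was requested'
    if not device and not ephemeral:
        return 'persistent storage requested, but no volume is defined'
    return None

ok_ephemeral = validate_volume(True, None)          # consistent
ok_persistent = validate_volume(False, '/dev/vdb')  # consistent
conflict = validate_volume(True, '/dev/vdb')        # contradictory
```

`get_config` reports these cases through `hookenv.log(..., ERROR)` and returns `None`, which `configure_volume` then escalates to a `VolumeConfigurationError`.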
762 | 101 | |||
763 | 102 | def mount_volume(config): | ||
764 | 103 | if os.path.exists(config['mountpoint']): | ||
765 | 104 | if not os.path.isdir(config['mountpoint']): | ||
766 | 105 | hookenv.log('Not a directory: {}'.format(config['mountpoint'])) | ||
767 | 106 | raise VolumeConfigurationError() | ||
768 | 107 | else: | ||
769 | 108 | host.mkdir(config['mountpoint']) | ||
770 | 109 | if os.path.ismount(config['mountpoint']): | ||
771 | 110 | unmount_volume(config) | ||
772 | 111 | if not host.mount(config['device'], config['mountpoint'], persist=True): | ||
773 | 112 | raise VolumeConfigurationError() | ||
774 | 113 | |||
775 | 114 | |||
776 | 115 | def unmount_volume(config): | ||
777 | 116 | if os.path.ismount(config['mountpoint']): | ||
778 | 117 | if not host.umount(config['mountpoint'], persist=True): | ||
779 | 118 | raise VolumeConfigurationError() | ||
780 | 119 | |||
781 | 120 | |||
782 | 121 | def managed_mounts(): | ||
783 | 122 | '''List of all mounted managed volumes''' | ||
784 | 123 | return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts()) | ||
785 | 124 | |||
786 | 125 | |||
787 | 126 | def configure_volume(before_change=lambda: None, after_change=lambda: None): | ||
788 | 127 | '''Set up storage (or don't) according to the charm's volume configuration. | ||
789 | 128 | Returns the mount point or "ephemeral". before_change and after_change | ||
790 | 129 | are optional functions to be called if the volume configuration changes. | ||
791 | 130 | ''' | ||
792 | 131 | |||
793 | 132 | config = get_config() | ||
794 | 133 | if not config: | ||
795 | 134 | hookenv.log('Failed to read volume configuration', hookenv.CRITICAL) | ||
796 | 135 | raise VolumeConfigurationError() | ||
797 | 136 | |||
798 | 137 | if config['ephemeral']: | ||
799 | 138 | if os.path.ismount(config['mountpoint']): | ||
800 | 139 | before_change() | ||
801 | 140 | unmount_volume(config) | ||
802 | 141 | after_change() | ||
803 | 142 | return 'ephemeral' | ||
804 | 143 | else: | ||
805 | 144 | # persistent storage | ||
806 | 145 | if os.path.ismount(config['mountpoint']): | ||
807 | 146 | mounts = dict(managed_mounts()) | ||
808 | 147 | if mounts.get(config['mountpoint']) != config['device']: | ||
809 | 148 | before_change() | ||
810 | 149 | unmount_volume(config) | ||
811 | 150 | mount_volume(config) | ||
812 | 151 | after_change() | ||
813 | 152 | else: | ||
814 | 153 | before_change() | ||
815 | 154 | mount_volume(config) | ||
816 | 155 | after_change() | ||
817 | 156 | return config['mountpoint'] | ||
819 | === added directory 'hooks/charmhelpers/core' | |||
820 | === added file 'hooks/charmhelpers/core/__init__.py' | |||
821 | === added file 'hooks/charmhelpers/core/hookenv.py' | |||
822 | --- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000 | |||
823 | +++ hooks/charmhelpers/core/hookenv.py 2013-10-10 22:34:35 +0000 | |||
824 | @@ -0,0 +1,340 @@ | |||
825 | 1 | "Interactions with the Juju environment" | ||
826 | 2 | # Copyright 2013 Canonical Ltd. | ||
827 | 3 | # | ||
828 | 4 | # Authors: | ||
829 | 5 | # Charm Helpers Developers <juju@lists.ubuntu.com> | ||
830 | 6 | |||
831 | 7 | import os | ||
832 | 8 | import json | ||
833 | 9 | import yaml | ||
834 | 10 | import subprocess | ||
835 | 11 | import UserDict | ||
836 | 12 | |||
837 | 13 | CRITICAL = "CRITICAL" | ||
838 | 14 | ERROR = "ERROR" | ||
839 | 15 | WARNING = "WARNING" | ||
840 | 16 | INFO = "INFO" | ||
841 | 17 | DEBUG = "DEBUG" | ||
842 | 18 | MARKER = object() | ||
843 | 19 | |||
844 | 20 | cache = {} | ||
845 | 21 | |||
846 | 22 | |||
847 | 23 | def cached(func): | ||
848 | 24 | ''' Cache return values for multiple executions of func + args | ||
849 | 25 | |||
850 | 26 | For example: | ||
851 | 27 | |||
852 | 28 | @cached | ||
853 | 29 | def unit_get(attribute): | ||
854 | 30 | pass | ||
855 | 31 | |||
856 | 32 | unit_get('test') | ||
857 | 33 | |||
858 | 34 | will cache the result of unit_get + 'test' for future calls. | ||
859 | 35 | ''' | ||
860 | 36 | def wrapper(*args, **kwargs): | ||
861 | 37 | global cache | ||
862 | 38 | key = str((func, args, kwargs)) | ||
863 | 39 | try: | ||
864 | 40 | return cache[key] | ||
865 | 41 | except KeyError: | ||
866 | 42 | res = func(*args, **kwargs) | ||
867 | 43 | cache[key] = res | ||
868 | 44 | return res | ||
869 | 45 | return wrapper | ||
870 | 46 | |||
871 | 47 | |||
872 | 48 | def flush(key): | ||
873 | 49 | ''' Flushes any entries from function cache where the | ||
874 | 50 | key is found in the function+args ''' | ||
875 | 51 | flush_list = [] | ||
876 | 52 | for item in cache: | ||
877 | 53 | if key in item: | ||
878 | 54 | flush_list.append(item) | ||
879 | 55 | for item in flush_list: | ||
880 | 56 | del cache[item] | ||
881 | 57 | |||
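Taken together, `cached` and `flush` form a keyed memo table: the cache key is the string of `(func, args, kwargs)`, and `flush` drops every entry whose key string mentions a given substring (e.g. the local unit name after a `relation-set`). A minimal sketch, with a hypothetical `calls` list added only to observe when the wrapped function really runs:

```python
cache = {}

def cached(func):
    """Memoize func by a string key built from the function and its args."""
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        try:
            return cache[key]
        except KeyError:
            res = func(*args, **kwargs)
            cache[key] = res
            return res
    return wrapper

def flush(key):
    """Drop every cache entry whose key string mentions `key`."""
    for item in [k for k in cache if key in k]:
        del cache[item]

calls = []

@cached
def unit_get(attribute):
    calls.append(attribute)   # record real executions
    return 'value-' + attribute

unit_get('test')
unit_get('test')   # second call served from the cache
flush('test')      # e.g. after relation_set invalidates local data
unit_get('test')   # recomputed after the flush
```

Because the key is a plain substring match, flushing by unit name also clears any other cached call whose arguments happen to contain that name; that coarseness is acceptable here since flushing only costs a recomputation.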
882 | 58 | |||
883 | 59 | def log(message, level=None): | ||
884 | 60 | "Write a message to the juju log" | ||
885 | 61 | command = ['juju-log'] | ||
886 | 62 | if level: | ||
887 | 63 | command += ['-l', level] | ||
888 | 64 | command += [message] | ||
889 | 65 | subprocess.call(command) | ||
890 | 66 | |||
891 | 67 | |||
892 | 68 | class Serializable(UserDict.IterableUserDict): | ||
893 | 69 | "Wrapper, an object that can be serialized to yaml or json" | ||
894 | 70 | |||
895 | 71 | def __init__(self, obj): | ||
896 | 72 | # wrap the object | ||
897 | 73 | UserDict.IterableUserDict.__init__(self) | ||
898 | 74 | self.data = obj | ||
899 | 75 | |||
900 | 76 | def __getattr__(self, attr): | ||
901 | 77 | # See if this object has attribute. | ||
902 | 78 | if attr in ("json", "yaml", "data"): | ||
903 | 79 | return self.__dict__[attr] | ||
904 | 80 | # Check for attribute in wrapped object. | ||
905 | 81 | got = getattr(self.data, attr, MARKER) | ||
906 | 82 | if got is not MARKER: | ||
907 | 83 | return got | ||
908 | 84 | # Proxy to the wrapped object via dict interface. | ||
909 | 85 | try: | ||
910 | 86 | return self.data[attr] | ||
911 | 87 | except KeyError: | ||
912 | 88 | raise AttributeError(attr) | ||
913 | 89 | |||
914 | 90 | def __getstate__(self): | ||
915 | 91 | # Pickle as a standard dictionary. | ||
916 | 92 | return self.data | ||
917 | 93 | |||
918 | 94 | def __setstate__(self, state): | ||
919 | 95 | # Unpickle into our wrapper. | ||
920 | 96 | self.data = state | ||
921 | 97 | |||
922 | 98 | def json(self): | ||
923 | 99 | "Serialize the object to json" | ||
924 | 100 | return json.dumps(self.data) | ||
925 | 101 | |||
926 | 102 | def yaml(self): | ||
927 | 103 | "Serialize the object to yaml" | ||
928 | 104 | return yaml.dump(self.data) | ||
929 | 105 | |||
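`Serializable` proxies attribute access through to the keys of the wrapped object, so relation data can be read as `rel.some_key`. A trimmed sketch of the same proxy idea using a plain class rather than `UserDict` (so it runs on either Python major version); the pickling hooks are omitted:

```python
import json

class Serializable(object):
    """Proxy attribute access through to keys of a wrapped dict."""
    def __init__(self, obj):
        self.__dict__['data'] = obj

    def __getattr__(self, attr):
        # __getattr__ only fires when normal lookup fails, so 'data'
        # and 'json' resolve first and never recurse into here
        try:
            return self.data[attr]
        except KeyError:
            raise AttributeError(attr)

    def json(self):
        return json.dumps(self.data)

rel = Serializable({'private-address': '10.0.0.1', 'port': 80})
port = rel.port                     # dict key via attribute access
roundtrip = json.loads(rel.json())
```

Missing keys surface as `AttributeError` rather than `KeyError`, matching normal attribute semantics for callers that use `getattr(rel, name, default)`.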
930 | 106 | |||
931 | 107 | def execution_environment(): | ||
932 | 108 | """A convenient bundling of the current execution context""" | ||
933 | 109 | context = {} | ||
934 | 110 | context['conf'] = config() | ||
935 | 111 | if relation_id(): | ||
936 | 112 | context['reltype'] = relation_type() | ||
937 | 113 | context['relid'] = relation_id() | ||
938 | 114 | context['rel'] = relation_get() | ||
939 | 115 | context['unit'] = local_unit() | ||
940 | 116 | context['rels'] = relations() | ||
941 | 117 | context['env'] = os.environ | ||
942 | 118 | return context | ||
943 | 119 | |||
944 | 120 | |||
945 | 121 | def in_relation_hook(): | ||
946 | 122 | "Determine whether we're running in a relation hook" | ||
947 | 123 | return 'JUJU_RELATION' in os.environ | ||
948 | 124 | |||
949 | 125 | |||
950 | 126 | def relation_type(): | ||
951 | 127 | "The scope for the current relation hook" | ||
952 | 128 | return os.environ.get('JUJU_RELATION', None) | ||
953 | 129 | |||
954 | 130 | |||
955 | 131 | def relation_id(): | ||
956 | 132 | "The relation ID for the current relation hook" | ||
957 | 133 | return os.environ.get('JUJU_RELATION_ID', None) | ||
958 | 134 | |||
959 | 135 | |||
960 | 136 | def local_unit(): | ||
961 | 137 | "Local unit ID" | ||
962 | 138 | return os.environ['JUJU_UNIT_NAME'] | ||
963 | 139 | |||
964 | 140 | |||
965 | 141 | def remote_unit(): | ||
966 | 142 | "The remote unit for the current relation hook" | ||
967 | 143 | return os.environ['JUJU_REMOTE_UNIT'] | ||
968 | 144 | |||
969 | 145 | |||
970 | 146 | def service_name(): | ||
971 | 147 | "The name of the service group this unit belongs to" | ||
972 | 148 | return local_unit().split('/')[0] | ||
973 | 149 | |||
974 | 150 | |||
975 | 151 | @cached | ||
976 | 152 | def config(scope=None): | ||
977 | 153 | "Juju charm configuration" | ||
978 | 154 | config_cmd_line = ['config-get'] | ||
979 | 155 | if scope is not None: | ||
980 | 156 | config_cmd_line.append(scope) | ||
981 | 157 | config_cmd_line.append('--format=json') | ||
982 | 158 | try: | ||
983 | 159 | return json.loads(subprocess.check_output(config_cmd_line)) | ||
984 | 160 | except ValueError: | ||
985 | 161 | return None | ||
986 | 162 | |||
987 | 163 | |||
988 | 164 | @cached | ||
989 | 165 | def relation_get(attribute=None, unit=None, rid=None): | ||
990 | 166 | _args = ['relation-get', '--format=json'] | ||
991 | 167 | if rid: | ||
992 | 168 | _args.append('-r') | ||
993 | 169 | _args.append(rid) | ||
994 | 170 | _args.append(attribute or '-') | ||
995 | 171 | if unit: | ||
996 | 172 | _args.append(unit) | ||
997 | 173 | try: | ||
998 | 174 | return json.loads(subprocess.check_output(_args)) | ||
999 | 175 | except ValueError: | ||
1000 | 176 | return None | ||
1001 | 177 | |||
1002 | 178 | |||
1003 | 179 | def relation_set(relation_id=None, relation_settings={}, **kwargs): | ||
1004 | 180 | relation_cmd_line = ['relation-set'] | ||
1005 | 181 | if relation_id is not None: | ||
1006 | 182 | relation_cmd_line.extend(('-r', relation_id)) | ||
1007 | 183 | for k, v in (relation_settings.items() + kwargs.items()): | ||
1008 | 184 | if v is None: | ||
1009 | 185 | relation_cmd_line.append('{}='.format(k)) | ||
1010 | 186 | else: | ||
1011 | 187 | relation_cmd_line.append('{}={}'.format(k, v)) | ||
1012 | 188 | subprocess.check_call(relation_cmd_line) | ||
1013 | 189 | # Flush cache of any relation-gets for local unit | ||
1014 | 190 | flush(local_unit()) | ||
1015 | 191 | |||
1016 | 192 | |||
1017 | 193 | @cached | ||
1018 | 194 | def relation_ids(reltype=None): | ||
1019 | 195 | "A list of relation_ids" | ||
1020 | 196 | reltype = reltype or relation_type() | ||
1021 | 197 | relid_cmd_line = ['relation-ids', '--format=json'] | ||
1022 | 198 | if reltype is not None: | ||
1023 | 199 | relid_cmd_line.append(reltype) | ||
1024 | 200 | return json.loads(subprocess.check_output(relid_cmd_line)) or [] | ||
1026 | 202 | |||
1027 | 203 | |||
1028 | 204 | @cached | ||
1029 | 205 | def related_units(relid=None): | ||
1030 | 206 | "A list of related units" | ||
1031 | 207 | relid = relid or relation_id() | ||
1032 | 208 | units_cmd_line = ['relation-list', '--format=json'] | ||
1033 | 209 | if relid is not None: | ||
1034 | 210 | units_cmd_line.extend(('-r', relid)) | ||
1035 | 211 | return json.loads(subprocess.check_output(units_cmd_line)) or [] | ||
1036 | 212 | |||
1037 | 213 | |||
1038 | 214 | @cached | ||
1039 | 215 | def relation_for_unit(unit=None, rid=None): | ||
1040 | 216 | "Get the json representation of a unit's relation" | ||
1041 | 217 | unit = unit or remote_unit() | ||
1042 | 218 | relation = relation_get(unit=unit, rid=rid) | ||
1043 | 219 | for key in relation: | ||
1044 | 220 | if key.endswith('-list'): | ||
1045 | 221 | relation[key] = relation[key].split() | ||
1046 | 222 | relation['__unit__'] = unit | ||
1047 | 223 | return relation | ||
1048 | 224 | |||
1049 | 225 | |||
1050 | 226 | @cached | ||
1051 | 227 | def relations_for_id(relid=None): | ||
1052 | 228 | "Get relations of a specific relation ID" | ||
1053 | 229 | relation_data = [] | ||
1054 | 230 | relid = relid or relation_id() | ||
1055 | 231 | for unit in related_units(relid): | ||
1056 | 232 | unit_data = relation_for_unit(unit, relid) | ||
1057 | 233 | unit_data['__relid__'] = relid | ||
1058 | 234 | relation_data.append(unit_data) | ||
1059 | 235 | return relation_data | ||
1060 | 236 | |||
1061 | 237 | |||
1062 | 238 | @cached | ||
1063 | 239 | def relations_of_type(reltype=None): | ||
1064 | 240 | "Get relations of a specific type" | ||
1065 | 241 | relation_data = [] | ||
1066 | 242 | reltype = reltype or relation_type() | ||
1067 | 243 | for relid in relation_ids(reltype): | ||
1068 | 244 | for relation in relations_for_id(relid): | ||
1069 | 245 | relation['__relid__'] = relid | ||
1070 | 246 | relation_data.append(relation) | ||
1071 | 247 | return relation_data | ||
1072 | 248 | |||
1073 | 249 | |||
1074 | 250 | @cached | ||
1075 | 251 | def relation_types(): | ||
1076 | 252 | "Get a list of relation types supported by this charm" | ||
1077 | 253 | charmdir = os.environ.get('CHARM_DIR', '') | ||
1078 | 254 | mdf = open(os.path.join(charmdir, 'metadata.yaml')) | ||
1079 | 255 | md = yaml.safe_load(mdf) | ||
1080 | 256 | rel_types = [] | ||
1081 | 257 | for key in ('provides', 'requires', 'peers'): | ||
1082 | 258 | section = md.get(key) | ||
1083 | 259 | if section: | ||
1084 | 260 | rel_types.extend(section.keys()) | ||
1085 | 261 | mdf.close() | ||
1086 | 262 | return rel_types | ||
1087 | 263 | |||
1088 | 264 | |||
1089 | 265 | @cached | ||
1090 | 266 | def relations(): | ||
1091 | 267 | rels = {} | ||
1092 | 268 | for reltype in relation_types(): | ||
1093 | 269 | relids = {} | ||
1094 | 270 | for relid in relation_ids(reltype): | ||
1095 | 271 | units = {local_unit(): relation_get(unit=local_unit(), rid=relid)} | ||
1096 | 272 | for unit in related_units(relid): | ||
1097 | 273 | reldata = relation_get(unit=unit, rid=relid) | ||
1098 | 274 | units[unit] = reldata | ||
1099 | 275 | relids[relid] = units | ||
1100 | 276 | rels[reltype] = relids | ||
1101 | 277 | return rels | ||
1102 | 278 | |||
1103 | 279 | |||
1104 | 280 | def open_port(port, protocol="TCP"): | ||
1105 | 281 | "Open a service network port" | ||
1106 | 282 | _args = ['open-port'] | ||
1107 | 283 | _args.append('{}/{}'.format(port, protocol)) | ||
1108 | 284 | subprocess.check_call(_args) | ||
1109 | 285 | |||
1110 | 286 | |||
1111 | 287 | def close_port(port, protocol="TCP"): | ||
1112 | 288 | "Close a service network port" | ||
1113 | 289 | _args = ['close-port'] | ||
1114 | 290 | _args.append('{}/{}'.format(port, protocol)) | ||
1115 | 291 | subprocess.check_call(_args) | ||
1116 | 292 | |||
1117 | 293 | |||
1118 | 294 | @cached | ||
1119 | 295 | def unit_get(attribute): | ||
1120 | 296 | _args = ['unit-get', '--format=json', attribute] | ||
1121 | 297 | try: | ||
1122 | 298 | return json.loads(subprocess.check_output(_args)) | ||
1123 | 299 | except ValueError: | ||
1124 | 300 | return None | ||
1125 | 301 | |||
1126 | 302 | |||
1127 | 303 | def unit_private_ip(): | ||
1128 | 304 | return unit_get('private-address') | ||
1129 | 305 | |||
1130 | 306 | |||
1131 | 307 | class UnregisteredHookError(Exception): | ||
1132 | 308 | pass | ||
1133 | 309 | |||
1134 | 310 | |||
1135 | 311 | class Hooks(object): | ||
1136 | 312 | def __init__(self): | ||
1137 | 313 | super(Hooks, self).__init__() | ||
1138 | 314 | self._hooks = {} | ||
1139 | 315 | |||
1140 | 316 | def register(self, name, function): | ||
1141 | 317 | self._hooks[name] = function | ||
1142 | 318 | |||
1143 | 319 | def execute(self, args): | ||
1144 | 320 | hook_name = os.path.basename(args[0]) | ||
1145 | 321 | if hook_name in self._hooks: | ||
1146 | 322 | self._hooks[hook_name]() | ||
1147 | 323 | else: | ||
1148 | 324 | raise UnregisteredHookError(hook_name) | ||
1149 | 325 | |||
1150 | 326 | def hook(self, *hook_names): | ||
1151 | 327 | def wrapper(decorated): | ||
1152 | 328 | for hook_name in hook_names: | ||
1153 | 329 | self.register(hook_name, decorated) | ||
1154 | 330 | # the loop has no break, so also (always) register under | ||
1154 | 330 | # the function's own name and its hyphenated alias | ||
1155 | 331 | self.register(decorated.__name__, decorated) | ||
1156 | 332 | if '_' in decorated.__name__: | ||
1157 | 333 | self.register( | ||
1158 | 334 | decorated.__name__.replace('_', '-'), decorated) | ||
1159 | 335 | return decorated | ||
1160 | 336 | return wrapper | ||
1161 | 337 | |||
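The dispatcher resolves the hook from the basename of the invoking script, which is why step 5 of the NRPE instructions symlinks hooks.py under hook names. A trimmed, standalone sketch of the same pattern; the `fired` list is hypothetical, added only to observe dispatch:

```python
import os

class UnregisteredHookError(Exception):
    pass

class Hooks(object):
    def __init__(self):
        self._hooks = {}

    def register(self, name, function):
        self._hooks[name] = function

    def execute(self, args):
        # Juju invokes the charm via a symlink named after the hook,
        # so argv[0]'s basename selects the handler.
        hook_name = os.path.basename(args[0])
        if hook_name in self._hooks:
            self._hooks[hook_name]()
        else:
            raise UnregisteredHookError(hook_name)

    def hook(self, *hook_names):
        def wrapper(decorated):
            for hook_name in hook_names:
                self.register(hook_name, decorated)
            # always register the function's own and hyphenated names too
            self.register(decorated.__name__, decorated)
            if '_' in decorated.__name__:
                self.register(
                    decorated.__name__.replace('_', '-'), decorated)
            return decorated
        return wrapper

hooks = Hooks()
fired = []

@hooks.hook('config-changed')
def config_changed():
    fired.append('config-changed')

hooks.execute(['hooks/config-changed'])
```

One decorated function can therefore back several hook files (explicit names, its Python name, and the hyphenated alias), and an unhandled hook fails loudly instead of silently succeeding.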
1162 | 338 | |||
1163 | 339 | def charm_dir(): | ||
1164 | 340 | return os.environ.get('CHARM_DIR') | ||
1166 | === added file 'hooks/charmhelpers/core/host.py' | |||
1167 | --- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000 | |||
1168 | +++ hooks/charmhelpers/core/host.py 2013-10-10 22:34:35 +0000 | |||
1169 | @@ -0,0 +1,239 @@ | |||
1170 | 1 | """Tools for working with the host system""" | ||
1171 | 2 | # Copyright 2012 Canonical Ltd. | ||
1172 | 3 | # | ||
1173 | 4 | # Authors: | ||
1174 | 5 | # Nick Moffitt <nick.moffitt@canonical.com> | ||
1175 | 6 | # Matthew Wedgwood <matthew.wedgwood@canonical.com> | ||
1176 | 7 | |||
1177 | 8 | import os | ||
1178 | 9 | import pwd | ||
1179 | 10 | import grp | ||
1180 | 11 | import random | ||
1181 | 12 | import string | ||
1182 | 13 | import subprocess | ||
1183 | 14 | import hashlib | ||
1184 | 15 | |||
1185 | 16 | from collections import OrderedDict | ||
1186 | 17 | |||
1187 | 18 | from hookenv import log | ||
1188 | 19 | |||
1189 | 20 | |||
1190 | 21 | def service_start(service_name): | ||
1191 | 22 | service('start', service_name) | ||
1192 | 23 | |||
1193 | 24 | |||
1194 | 25 | def service_stop(service_name): | ||
1195 | 26 | service('stop', service_name) | ||
1196 | 27 | |||
1197 | 28 | |||
1198 | 29 | def service_restart(service_name): | ||
1199 | 30 | service('restart', service_name) | ||
1200 | 31 | |||
1201 | 32 | |||
1202 | 33 | def service_reload(service_name, restart_on_failure=False): | ||
1203 | 34 | if not service('reload', service_name) and restart_on_failure: | ||
1204 | 35 | service('restart', service_name) | ||
1205 | 36 | |||
1206 | 37 | |||
1207 | 38 | def service(action, service_name): | ||
1208 | 39 | cmd = ['service', service_name, action] | ||
1209 | 40 | return subprocess.call(cmd) == 0 | ||
1210 | 41 | |||
1211 | 42 | |||
1212 | 43 | def service_running(service): | ||
1213 | 44 | try: | ||
1214 | 45 | output = subprocess.check_output(['service', service, 'status']) | ||
1215 | 46 | except subprocess.CalledProcessError: | ||
1216 | 47 | return False | ||
1217 | 48 | else: | ||
1218 | 49 | if ("start/running" in output or "is running" in output): | ||
1219 | 50 | return True | ||
1220 | 51 | else: | ||
1221 | 52 | return False | ||
1222 | 53 | |||
1223 | 54 | |||
1224 | 55 | def adduser(username, password=None, shell='/bin/bash', system_user=False): | ||
1225 | 56 | """Add a user""" | ||
1226 | 57 | try: | ||
1227 | 58 | user_info = pwd.getpwnam(username) | ||
1228 | 59 | log('user {0} already exists!'.format(username)) | ||
1229 | 60 | except KeyError: | ||
1230 | 61 | log('creating user {0}'.format(username)) | ||
1231 | 62 | cmd = ['useradd'] | ||
1232 | 63 | if system_user or password is None: | ||
1233 | 64 | cmd.append('--system') | ||
1234 | 65 | else: | ||
1235 | 66 | cmd.extend([ | ||
1236 | 67 | '--create-home', | ||
1237 | 68 | '--shell', shell, | ||
1238 | 69 | '--password', password, | ||
1239 | 70 | ]) | ||
1240 | 71 | cmd.append(username) | ||
1241 | 72 | subprocess.check_call(cmd) | ||
1242 | 73 | user_info = pwd.getpwnam(username) | ||
1243 | 74 | return user_info | ||
1244 | 75 | |||
1245 | 76 | |||
1246 | 77 | def add_user_to_group(username, group): | ||
1247 | 78 | """Add a user to a group""" | ||
1248 | 79 | cmd = [ | ||
1249 | 80 | 'gpasswd', '-a', | ||
1250 | 81 | username, | ||
1251 | 82 | group | ||
1252 | 83 | ] | ||
1253 | 84 | log("Adding user {} to group {}".format(username, group)) | ||
1254 | 85 | subprocess.check_call(cmd) | ||
1255 | 86 | |||
1256 | 87 | |||
1257 | 88 | def rsync(from_path, to_path, flags='-r', options=None): | ||
1258 | 89 | """Replicate the contents of a path""" | ||
1259 | 90 | options = options or ['--delete', '--executability'] | ||
1260 | 91 | cmd = ['/usr/bin/rsync', flags] | ||
1261 | 92 | cmd.extend(options) | ||
1262 | 93 | cmd.append(from_path) | ||
1263 | 94 | cmd.append(to_path) | ||
1264 | 95 | log(" ".join(cmd)) | ||
1265 | 96 | return subprocess.check_output(cmd).strip() | ||
1266 | 97 | |||
1267 | 98 | |||
1268 | 99 | def symlink(source, destination): | ||
1269 | 100 | """Create a symbolic link""" | ||
1270 | 101 | log("Symlinking {} as {}".format(source, destination)) | ||
1271 | 102 | cmd = [ | ||
1272 | 103 | 'ln', | ||
1273 | 104 | '-sf', | ||
1274 | 105 | source, | ||
1275 | 106 | destination, | ||
1276 | 107 | ] | ||
1277 | 108 | subprocess.check_call(cmd) | ||
1278 | 109 | |||
1279 | 110 | |||
1280 | 111 | def mkdir(path, owner='root', group='root', perms=0555, force=False): | ||
1281 | 112 | """Create a directory""" | ||
1282 | 113 | log("Making dir {} {}:{} {:o}".format(path, owner, group, | ||
1283 | 114 | perms)) | ||
1284 | 115 | uid = pwd.getpwnam(owner).pw_uid | ||
1285 | 116 | gid = grp.getgrnam(group).gr_gid | ||
1286 | 117 | realpath = os.path.abspath(path) | ||
1287 | 118 | if os.path.exists(realpath): | ||
1288 | 119 | if force and not os.path.isdir(realpath): | ||
1289 | 120 | log("Removing non-directory file {} prior to mkdir()".format(path)) | ||
1290 | 121 | os.unlink(realpath) | ||
1291 | 122 | else: | ||
1292 | 123 | os.makedirs(realpath, perms) | ||
1293 | 124 | os.chown(realpath, uid, gid) | ||
1294 | 125 | |||
1295 | 126 | |||
1296 | 127 | def write_file(path, content, owner='root', group='root', perms=0444): | ||
1297 | 128 | """Create or overwrite a file with the contents of a string""" | ||
1298 | 129 | log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) | ||
1299 | 130 | uid = pwd.getpwnam(owner).pw_uid | ||
1300 | 131 | gid = grp.getgrnam(group).gr_gid | ||
1301 | 132 | with open(path, 'w') as target: | ||
1302 | 133 | os.fchown(target.fileno(), uid, gid) | ||
1303 | 134 | os.fchmod(target.fileno(), perms) | ||
1304 | 135 | target.write(content) | ||
1305 | 136 | |||
1306 | 137 | |||
1307 | 138 | def mount(device, mountpoint, options=None, persist=False): | ||
1308 | 139 | '''Mount a filesystem''' | ||
1309 | 140 | cmd_args = ['mount'] | ||
1310 | 141 | if options is not None: | ||
1311 | 142 | cmd_args.extend(['-o', options]) | ||
1312 | 143 | cmd_args.extend([device, mountpoint]) | ||
1313 | 144 | try: | ||
1314 | 145 | subprocess.check_output(cmd_args) | ||
1315 | 146 | except subprocess.CalledProcessError, e: | ||
1316 | 147 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) | ||
1317 | 148 | return False | ||
1318 | 149 | if persist: | ||
1319 | 150 | # TODO: update fstab | ||
1320 | 151 | pass | ||
1321 | 152 | return True | ||
1322 | 153 | |||
1323 | 154 | |||
1324 | 155 | def umount(mountpoint, persist=False): | ||
1325 | 156 | '''Unmount a filesystem''' | ||
1326 | 157 | cmd_args = ['umount', mountpoint] | ||
1327 | 158 | try: | ||
1328 | 159 | subprocess.check_output(cmd_args) | ||
1329 | 160 | except subprocess.CalledProcessError, e: | ||
1330 | 161 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) | ||
1331 | 162 | return False | ||
1332 | 163 | if persist: | ||
1333 | 164 | # TODO: update fstab | ||
1334 | 165 | pass | ||
1335 | 166 | return True | ||
1336 | 167 | |||
1337 | 168 | |||
1338 | 169 | def mounts(): | ||
1339 | 170 | '''List of all mounted volumes as [[mountpoint,device],[...]]''' | ||
1340 | 171 | with open('/proc/mounts') as f: | ||
1341 | 172 | # [['/mount/point','/dev/path'],[...]] | ||
1342 | 173 | system_mounts = [m[1::-1] for m in [l.strip().split() | ||
1343 | 174 | for l in f.readlines()]] | ||
1344 | 175 | return system_mounts | ||
1345 | 176 | |||
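The `mounts` helper above reverses each `/proc/mounts` field pair with the `[1::-1]` slice so the mountpoint comes first. A minimal sketch against canned input (the sample lines are invented for illustration):

```python
# Sample lines in /proc/mounts format: device, mountpoint, fstype, options...
sample = (
    "/dev/sda1 / ext4 rw,errors=remount-ro 0 0\n"
    "proc /proc proc rw,noexec 0 0\n"
)

# [1::-1] keeps fields 0 and 1 and reverses them: [mountpoint, device]
system_mounts = [l.strip().split()[1::-1] for l in sample.splitlines()]
print(system_mounts)  # [['/', '/dev/sda1'], ['/proc', 'proc']]
```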
1346 | 177 | |||
1347 | 178 | def file_hash(path): | ||
1348 | 179 | ''' Generate an md5 hash of the contents of 'path' or None if not found ''' | ||
1349 | 180 | if os.path.exists(path): | ||
1350 | 181 | h = hashlib.md5() | ||
1351 | 182 | with open(path, 'r') as source: | ||
1352 | 183 | h.update(source.read()) # IGNORE:E1101 - it does have update | ||
1353 | 184 | return h.hexdigest() | ||
1354 | 185 | else: | ||
1355 | 186 | return None | ||
1356 | 187 | |||
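The `file_hash` helper underpins the restart-on-change logic below; its behaviour can be sketched standalone (binary-mode reads used here for portability):

```python
import hashlib
import os
import tempfile

def file_hash(path):
    # md5 hex digest of a file's contents, or None if the file is absent
    if not os.path.exists(path):
        return None
    h = hashlib.md5()
    with open(path, 'rb') as source:
        h.update(source.read())
    return h.hexdigest()

path = os.path.join(tempfile.mkdtemp(), 'haproxy.cfg')
assert file_hash(path) is None  # missing file
with open(path, 'w') as f:
    f.write('hello')
digest = file_hash(path)
print(digest)
```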
1357 | 188 | |||
1358 | 189 | def restart_on_change(restart_map): | ||
1359 | 190 | ''' Restart services based on configuration files changing | ||
1360 | 191 | |||
1361 | 192 | This function is used as a decorator, for example | ||
1362 | 193 | |||
1363 | 194 | @restart_on_change({ | ||
1364 | 195 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] | ||
1365 | 196 | }) | ||
1366 | 197 | def ceph_client_changed(): | ||
1367 | 198 | ... | ||
1368 | 199 | |||
1369 | 200 | In this example, the cinder-api and cinder-volume services | ||
1370 | 201 | would be restarted if /etc/ceph/ceph.conf is changed by the | ||
1371 | 202 | ceph_client_changed function. | ||
1372 | 203 | ''' | ||
1373 | 204 | def wrap(f): | ||
1374 | 205 | def wrapped_f(*args): | ||
1375 | 206 | checksums = {} | ||
1376 | 207 | for path in restart_map: | ||
1377 | 208 | checksums[path] = file_hash(path) | ||
1378 | 209 | f(*args) | ||
1379 | 210 | restarts = [] | ||
1380 | 211 | for path in restart_map: | ||
1381 | 212 | if checksums[path] != file_hash(path): | ||
1382 | 213 | restarts += restart_map[path] | ||
1383 | 214 | for service_name in list(OrderedDict.fromkeys(restarts)): | ||
1384 | 215 | service('restart', service_name) | ||
1385 | 216 | return wrapped_f | ||
1386 | 217 | return wrap | ||
1387 | 218 | |||
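A self-contained harness for the decorator pattern above, with a stubbed `service()` so no real daemon is restarted (file and service names are illustrative):

```python
import hashlib
import os
import tempfile
from collections import OrderedDict

restarted = []

def service(action, name):
    # Stub standing in for charmhelpers' service(); records calls only.
    restarted.append((action, name))

def file_hash(path):
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

def restart_on_change(restart_map):
    def wrap(f):
        def wrapped_f(*args):
            # Checksum each watched file before and after the hook runs.
            checksums = {path: file_hash(path) for path in restart_map}
            f(*args)
            restarts = []
            for path in restart_map:
                if checksums[path] != file_hash(path):
                    restarts += restart_map[path]
            # OrderedDict.fromkeys() de-duplicates while keeping order.
            for name in OrderedDict.fromkeys(restarts):
                service('restart', name)
        return wrapped_f
    return wrap

cfg = os.path.join(tempfile.mkdtemp(), 'haproxy.cfg')

@restart_on_change({cfg: ['haproxy']})
def config_changed():
    with open(cfg, 'w') as f:
        f.write('listen stats 0.0.0.0:10000\n')

config_changed()
print(restarted)  # [('restart', 'haproxy')]
```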
1388 | 219 | |||
1389 | 220 | def lsb_release(): | ||
1390 | 221 | '''Return /etc/lsb-release in a dict''' | ||
1391 | 222 | d = {} | ||
1392 | 223 | with open('/etc/lsb-release', 'r') as lsb: | ||
1393 | 224 | for l in lsb: | ||
1394 | 225 | k, v = l.split('=') | ||
1395 | 226 | d[k.strip()] = v.strip() | ||
1396 | 227 | return d | ||
1397 | 228 | |||
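The parsing in `lsb_release` is a plain `KEY=value` split; against sample file contents (invented here to match the charm's target series) it behaves like:

```python
# Typical /etc/lsb-release contents
sample = (
    "DISTRIB_ID=Ubuntu\n"
    "DISTRIB_RELEASE=12.04\n"
    "DISTRIB_CODENAME=precise\n"
)

d = {}
for line in sample.splitlines():
    k, v = line.split('=')
    d[k.strip()] = v.strip()
print(d['DISTRIB_CODENAME'])  # precise
```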
1398 | 229 | |||
1399 | 230 | def pwgen(length=None): | ||
1400 | 231 | '''Generate a random password.''' | ||
1401 | 232 | if length is None: | ||
1402 | 233 | length = random.choice(range(35, 45)) | ||
1403 | 234 | alphanumeric_chars = [ | ||
1404 | 235 | l for l in (string.letters + string.digits) | ||
1405 | 236 | if l not in 'l0QD1vAEIOUaeiou'] | ||
1406 | 237 | random_chars = [ | ||
1407 | 238 | random.choice(alphanumeric_chars) for _ in range(length)] | ||
1408 | 239 | return(''.join(random_chars)) | ||
1409 | 0 | 240 | ||
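`pwgen` excludes easily-confused characters from its alphabet. A Python 3 equivalent sketch, with `string.ascii_letters` replacing the Python 2-only `string.letters`:

```python
import random
import string

def pwgen(length=None):
    if length is None:
        length = random.choice(range(35, 45))
    # Drop look-alike characters (l/1, 0/Q/D, ...) and vowels.
    alphanumeric_chars = [
        c for c in (string.ascii_letters + string.digits)
        if c not in 'l0QD1vAEIOUaeiou']
    return ''.join(random.choice(alphanumeric_chars) for _ in range(length))

pw = pwgen(20)
print(len(pw))  # 20
assert not set(pw) & set('l0QD1vAEIOUaeiou')
```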
1410 | === added directory 'hooks/charmhelpers/fetch' | |||
1411 | === added file 'hooks/charmhelpers/fetch/__init__.py' | |||
1412 | --- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000 | |||
1413 | +++ hooks/charmhelpers/fetch/__init__.py 2013-10-10 22:34:35 +0000 | |||
1414 | @@ -0,0 +1,209 @@ | |||
1415 | 1 | import importlib | ||
1416 | 2 | from yaml import safe_load | ||
1417 | 3 | from charmhelpers.core.host import ( | ||
1418 | 4 | lsb_release | ||
1419 | 5 | ) | ||
1420 | 6 | from urlparse import ( | ||
1421 | 7 | urlparse, | ||
1422 | 8 | urlunparse, | ||
1423 | 9 | ) | ||
1424 | 10 | import subprocess | ||
1425 | 11 | from charmhelpers.core.hookenv import ( | ||
1426 | 12 | config, | ||
1427 | 13 | log, | ||
1428 | 14 | ) | ||
1429 | 15 | import apt_pkg | ||
1430 | 16 | |||
1431 | 17 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive | ||
1432 | 18 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main | ||
1433 | 19 | """ | ||
1434 | 20 | PROPOSED_POCKET = """# Proposed | ||
1435 | 21 | deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted | ||
1436 | 22 | """ | ||
1437 | 23 | |||
1438 | 24 | |||
1439 | 25 | def filter_installed_packages(packages): | ||
1440 | 26 | """Returns a list of packages that require installation""" | ||
1441 | 27 | apt_pkg.init() | ||
1442 | 28 | cache = apt_pkg.Cache() | ||
1443 | 29 | _pkgs = [] | ||
1444 | 30 | for package in packages: | ||
1445 | 31 | try: | ||
1446 | 32 | p = cache[package] | ||
1447 | 33 | p.current_ver or _pkgs.append(package) | ||
1448 | 34 | except KeyError: | ||
1449 | 35 | log('Package {} has no installation candidate.'.format(package), | ||
1450 | 36 | level='WARNING') | ||
1451 | 37 | _pkgs.append(package) | ||
1452 | 38 | return _pkgs | ||
1453 | 39 | |||
1454 | 40 | |||
1455 | 41 | def apt_install(packages, options=None, fatal=False): | ||
1456 | 42 | """Install one or more packages""" | ||
1457 | 43 | options = options or [] | ||
1458 | 44 | cmd = ['apt-get', '-y'] | ||
1459 | 45 | cmd.extend(options) | ||
1460 | 46 | cmd.append('install') | ||
1461 | 47 | if isinstance(packages, basestring): | ||
1462 | 48 | cmd.append(packages) | ||
1463 | 49 | else: | ||
1464 | 50 | cmd.extend(packages) | ||
1465 | 51 | log("Installing {} with options: {}".format(packages, | ||
1466 | 52 | options)) | ||
1467 | 53 | if fatal: | ||
1468 | 54 | subprocess.check_call(cmd) | ||
1469 | 55 | else: | ||
1470 | 56 | subprocess.call(cmd) | ||
1471 | 57 | |||
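The apt wrappers differ mainly in how they assemble the command line; isolating that construction (no `apt-get` is invoked in this sketch, and `str` stands in for the original's `basestring`):

```python
def apt_install_cmd(packages, options=None):
    # Build the argv that apt_install would pass to subprocess.
    cmd = ['apt-get', '-y'] + list(options or []) + ['install']
    if isinstance(packages, str):
        cmd.append(packages)
    else:
        cmd.extend(packages)
    return cmd

print(apt_install_cmd('haproxy'))
print(apt_install_cmd(['haproxy', 'python-apt'],
                      options=['--no-install-recommends']))
```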
1472 | 58 | |||
1473 | 59 | def apt_update(fatal=False): | ||
1474 | 60 | """Update local apt cache""" | ||
1475 | 61 | cmd = ['apt-get', 'update'] | ||
1476 | 62 | if fatal: | ||
1477 | 63 | subprocess.check_call(cmd) | ||
1478 | 64 | else: | ||
1479 | 65 | subprocess.call(cmd) | ||
1480 | 66 | |||
1481 | 67 | |||
1482 | 68 | def apt_purge(packages, fatal=False): | ||
1483 | 69 | """Purge one or more packages""" | ||
1484 | 70 | cmd = ['apt-get', '-y', 'purge'] | ||
1485 | 71 | if isinstance(packages, basestring): | ||
1486 | 72 | cmd.append(packages) | ||
1487 | 73 | else: | ||
1488 | 74 | cmd.extend(packages) | ||
1489 | 75 | log("Purging {}".format(packages)) | ||
1490 | 76 | if fatal: | ||
1491 | 77 | subprocess.check_call(cmd) | ||
1492 | 78 | else: | ||
1493 | 79 | subprocess.call(cmd) | ||
1494 | 80 | |||
1495 | 81 | |||
1496 | 82 | def add_source(source, key=None): | ||
1497 | 83 | if ((source.startswith('ppa:') or | ||
1498 | 84 | source.startswith('http:'))): | ||
1499 | 85 | subprocess.check_call(['add-apt-repository', '--yes', source]) | ||
1500 | 86 | elif source.startswith('cloud:'): | ||
1501 | 87 | apt_install(filter_installed_packages(['ubuntu-cloud-keyring']), | ||
1502 | 88 | fatal=True) | ||
1503 | 89 | pocket = source.split(':')[-1] | ||
1504 | 90 | with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: | ||
1505 | 91 | apt.write(CLOUD_ARCHIVE.format(pocket)) | ||
1506 | 92 | elif source == 'proposed': | ||
1507 | 93 | release = lsb_release()['DISTRIB_CODENAME'] | ||
1508 | 94 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: | ||
1509 | 95 | apt.write(PROPOSED_POCKET.format(release)) | ||
1510 | 96 | if key: | ||
1511 | 97 | subprocess.check_call(['apt-key', 'import', key]) | ||
1512 | 98 | |||
1513 | 99 | |||
1514 | 100 | class SourceConfigError(Exception): | ||
1515 | 101 | pass | ||
1516 | 102 | |||
1517 | 103 | |||
1518 | 104 | def configure_sources(update=False, | ||
1519 | 105 | sources_var='install_sources', | ||
1520 | 106 | keys_var='install_keys'): | ||
1521 | 107 | """ | ||
1522 | 108 | Configure multiple sources from charm configuration | ||
1523 | 109 | |||
1524 | 110 | Example config: | ||
1525 | 111 | install_sources: | ||
1526 | 112 | - "ppa:foo" | ||
1527 | 113 | - "http://example.com/repo precise main" | ||
1528 | 114 | install_keys: | ||
1529 | 115 | - null | ||
1530 | 116 | - "a1b2c3d4" | ||
1531 | 117 | |||
1532 | 118 | Note that 'null' (a.k.a. None) should not be quoted. | ||
1533 | 119 | """ | ||
1534 | 120 | sources = safe_load(config(sources_var)) | ||
1535 | 121 | keys = safe_load(config(keys_var)) | ||
1536 | 122 | if isinstance(sources, basestring) and isinstance(keys, basestring): | ||
1537 | 123 | add_source(sources, keys) | ||
1538 | 124 | else: | ||
1539 | 125 | if not len(sources) == len(keys): | ||
1540 | 126 | msg = 'Install sources and keys lists are different lengths' | ||
1541 | 127 | raise SourceConfigError(msg) | ||
1542 | 128 | for src_num in range(len(sources)): | ||
1543 | 129 | add_source(sources[src_num], keys[src_num]) | ||
1544 | 130 | if update: | ||
1545 | 131 | apt_update(fatal=True) | ||
1546 | 132 | |||
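The list-length check in `configure_sources` can be exercised without touching apt, using the example config from its docstring (YAML loading elided):

```python
class SourceConfigError(Exception):
    pass

def pair_sources(sources, keys):
    # Mirrors configure_sources' validation: a single source/key pair,
    # or two equal-length lists zipped together.
    if isinstance(sources, str) and isinstance(keys, str):
        return [(sources, keys)]
    if len(sources) != len(keys):
        raise SourceConfigError(
            'Install sources and keys lists are different lengths')
    return list(zip(sources, keys))

pairs = pair_sources(
    ['ppa:foo', 'http://example.com/repo precise main'],
    [None, 'a1b2c3d4'])
print(pairs)

try:
    pair_sources(['ppa:foo'], [])
except SourceConfigError as e:
    print(e)
```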
1547 | 133 | # The order of this list is very important. Handlers should be listed | ||
1548 | 134 | # from least- to most-specific URL matching. | ||
1549 | 135 | FETCH_HANDLERS = ( | ||
1550 | 136 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', | ||
1551 | 137 | 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', | ||
1552 | 138 | ) | ||
1553 | 139 | |||
1554 | 140 | |||
1555 | 141 | class UnhandledSource(Exception): | ||
1556 | 142 | pass | ||
1557 | 143 | |||
1558 | 144 | |||
1559 | 145 | def install_remote(source): | ||
1560 | 146 | """ | ||
1561 | 147 | Install a file tree from a remote source | ||
1562 | 148 | |||
1563 | 149 | The specified source should be a url of the form: | ||
1564 | 150 | scheme://[host]/path[#[option=value][&...]] | ||
1565 | 151 | |||
1566 | 152 | Schemes supported are based on this module's submodules | ||
1567 | 153 | Options supported are submodule-specific""" | ||
1568 | 154 | # We ONLY check for True here because can_handle may return a string | ||
1569 | 155 | # explaining why it can't handle a given source. | ||
1570 | 156 | handlers = [h for h in plugins() if h.can_handle(source) is True] | ||
1571 | 157 | installed_to = None | ||
1572 | 158 | for handler in handlers: | ||
1573 | 159 | try: | ||
1574 | 160 | installed_to = handler.install(source) | ||
1575 | 161 | except UnhandledSource: | ||
1576 | 162 | pass | ||
1577 | 163 | if not installed_to: | ||
1578 | 164 | raise UnhandledSource("No handler found for source {}".format(source)) | ||
1579 | 165 | return installed_to | ||
1580 | 166 | |||
1581 | 167 | |||
1582 | 168 | def install_from_config(config_var_name): | ||
1583 | 169 | charm_config = config() | ||
1584 | 170 | source = charm_config[config_var_name] | ||
1585 | 171 | return install_remote(source) | ||
1586 | 172 | |||
1587 | 173 | |||
1588 | 174 | class BaseFetchHandler(object): | ||
1589 | 175 | """Base class for FetchHandler implementations in fetch plugins""" | ||
1590 | 176 | def can_handle(self, source): | ||
1591 | 177 | """Returns True if the source can be handled. Otherwise returns | ||
1592 | 178 | a string explaining why it cannot""" | ||
1593 | 179 | return "Wrong source type" | ||
1594 | 180 | |||
1595 | 181 | def install(self, source): | ||
1596 | 182 | """Try to download and unpack the source. Return the path to the | ||
1597 | 183 | unpacked files or raise UnhandledSource.""" | ||
1598 | 184 | raise UnhandledSource("Wrong source type {}".format(source)) | ||
1599 | 185 | |||
1600 | 186 | def parse_url(self, url): | ||
1601 | 187 | return urlparse(url) | ||
1602 | 188 | |||
1603 | 189 | def base_url(self, url): | ||
1604 | 190 | """Return url without querystring or fragment""" | ||
1605 | 191 | parts = list(self.parse_url(url)) | ||
1606 | 192 | parts[4:] = ['' for i in parts[4:]] | ||
1607 | 193 | return urlunparse(parts) | ||
1608 | 194 | |||
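`base_url` blanks everything from the query onward in the parsed 6-tuple; under Python 3 the `urlparse` module becomes `urllib.parse`, but the slice trick is unchanged:

```python
from urllib.parse import urlparse, urlunparse

def base_url(url):
    # Indices 4 and 5 of the parse result are query and fragment.
    parts = list(urlparse(url))
    parts[4:] = ['' for _ in parts[4:]]
    return urlunparse(parts)

print(base_url('http://example.com/files/app.tar.gz?token=abc#frag'))
```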
1609 | 195 | |||
1610 | 196 | def plugins(fetch_handlers=None): | ||
1611 | 197 | if not fetch_handlers: | ||
1612 | 198 | fetch_handlers = FETCH_HANDLERS | ||
1613 | 199 | plugin_list = [] | ||
1614 | 200 | for handler_name in fetch_handlers: | ||
1615 | 201 | package, classname = handler_name.rsplit('.', 1) | ||
1616 | 202 | try: | ||
1617 | 203 | handler_class = getattr(importlib.import_module(package), classname) | ||
1618 | 204 | plugin_list.append(handler_class()) | ||
1619 | 205 | except (ImportError, AttributeError): | ||
1620 | 206 | # Skip missing plugins so that they can be omitted from | ||
1621 | 207 | # installation if desired | ||
1622 | 208 | log("FetchHandler {} not found, skipping plugin".format(handler_name)) | ||
1623 | 209 | return plugin_list | ||
1624 | 0 | 210 | ||
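The plugin loader resolves dotted paths with `rsplit` plus `importlib`, silently skipping anything unimportable. Demonstrated with a stand-in class path from the standard library (the real list holds the two fetch handlers above, and the real loader also instantiates each class):

```python
import importlib

def plugins(fetch_handlers):
    plugin_list = []
    for handler_name in fetch_handlers:
        package, classname = handler_name.rsplit('.', 1)
        try:
            handler_class = getattr(
                importlib.import_module(package), classname)
            plugin_list.append(handler_class)
        except (ImportError, AttributeError):
            # Missing plugins are skipped, not fatal.
            pass
    return plugin_list

loaded = plugins(['collections.OrderedDict', 'nosuch.module.Handler'])
print([c.__name__ for c in loaded])  # ['OrderedDict']
```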
1625 | === added file 'hooks/charmhelpers/fetch/archiveurl.py' | |||
1626 | --- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000 | |||
1627 | +++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-10 22:34:35 +0000 | |||
1628 | @@ -0,0 +1,48 @@ | |||
1629 | 1 | import os | ||
1630 | 2 | import urllib2 | ||
1631 | 3 | from charmhelpers.fetch import ( | ||
1632 | 4 | BaseFetchHandler, | ||
1633 | 5 | UnhandledSource | ||
1634 | 6 | ) | ||
1635 | 7 | from charmhelpers.payload.archive import ( | ||
1636 | 8 | get_archive_handler, | ||
1637 | 9 | extract, | ||
1638 | 10 | ) | ||
1639 | 11 | from charmhelpers.core.host import mkdir | ||
1640 | 12 | |||
1641 | 13 | |||
1642 | 14 | class ArchiveUrlFetchHandler(BaseFetchHandler): | ||
1643 | 15 | """Handler for archives via generic URLs""" | ||
1644 | 16 | def can_handle(self, source): | ||
1645 | 17 | url_parts = self.parse_url(source) | ||
1646 | 18 | if url_parts.scheme not in ('http', 'https', 'ftp', 'file'): | ||
1647 | 19 | return "Wrong source type" | ||
1648 | 20 | if get_archive_handler(self.base_url(source)): | ||
1649 | 21 | return True | ||
1650 | 22 | return False | ||
1651 | 23 | |||
1652 | 24 | def download(self, source, dest): | ||
1653 | 25 | # propagate all exceptions | ||
1654 | 26 | # URLError, OSError, etc | ||
1655 | 27 | response = urllib2.urlopen(source) | ||
1656 | 28 | try: | ||
1657 | 29 | with open(dest, 'w') as dest_file: | ||
1658 | 30 | dest_file.write(response.read()) | ||
1659 | 31 | except Exception as e: | ||
1660 | 32 | if os.path.isfile(dest): | ||
1661 | 33 | os.unlink(dest) | ||
1662 | 34 | raise e | ||
1663 | 35 | |||
1664 | 36 | def install(self, source): | ||
1665 | 37 | url_parts = self.parse_url(source) | ||
1666 | 38 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') | ||
1667 | 39 | if not os.path.exists(dest_dir): | ||
1668 | 40 | mkdir(dest_dir, perms=0755) | ||
1669 | 41 | dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) | ||
1670 | 42 | try: | ||
1671 | 43 | self.download(source, dld_file) | ||
1672 | 44 | except urllib2.URLError as e: | ||
1673 | 45 | raise UnhandledSource(e.reason) | ||
1674 | 46 | except OSError as e: | ||
1675 | 47 | raise UnhandledSource(e.strerror) | ||
1676 | 48 | return extract(dld_file) | ||
1677 | 0 | 49 | ||
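The download/cleanup contract of `ArchiveUrlFetchHandler` (remove the partial file, then re-raise) can be sketched without network access; `failing_read` is a stand-in for a failing `urllib2.urlopen(...).read`:

```python
import os
import tempfile

def download(read_source, dest):
    # Same cleanup shape as ArchiveUrlFetchHandler.download: remove a
    # partially-written dest before re-raising.
    try:
        with open(dest, 'wb') as dest_file:
            dest_file.write(read_source())
    except Exception:
        if os.path.isfile(dest):
            os.unlink(dest)
        raise

dest = os.path.join(tempfile.mkdtemp(), 'app.tar.gz')

def failing_read():
    raise OSError('connection reset')

try:
    download(failing_read, dest)
except OSError:
    pass
print(os.path.exists(dest))  # False
```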
1678 | === added file 'hooks/charmhelpers/fetch/bzrurl.py' | |||
1679 | --- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000 | |||
1680 | +++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-10 22:34:35 +0000 | |||
1681 | @@ -0,0 +1,44 @@ | |||
1682 | 1 | import os | ||
1683 | 2 | from bzrlib.branch import Branch | ||
1684 | 3 | from charmhelpers.fetch import ( | ||
1685 | 4 | BaseFetchHandler, | ||
1686 | 5 | UnhandledSource | ||
1687 | 6 | ) | ||
1688 | 7 | from charmhelpers.core.host import mkdir | ||
1689 | 8 | |||
1690 | 9 | |||
1691 | 10 | class BzrUrlFetchHandler(BaseFetchHandler): | ||
1692 | 11 | """Handler for bazaar branches via generic and lp URLs""" | ||
1693 | 12 | def can_handle(self, source): | ||
1694 | 13 | url_parts = self.parse_url(source) | ||
1695 | 14 | if url_parts.scheme not in ('bzr+ssh', 'lp'): | ||
1696 | 15 | return False | ||
1697 | 16 | else: | ||
1698 | 17 | return True | ||
1699 | 18 | |||
1700 | 19 | def branch(self, source, dest): | ||
1701 | 20 | url_parts = self.parse_url(source) | ||
1702 | 21 | # If we use lp:branchname scheme we need to load plugins | ||
1703 | 22 | if not self.can_handle(source): | ||
1704 | 23 | raise UnhandledSource("Cannot handle {}".format(source)) | ||
1705 | 24 | if url_parts.scheme == "lp": | ||
1706 | 25 | from bzrlib.plugin import load_plugins | ||
1707 | 26 | load_plugins() | ||
1708 | 27 | try: | ||
1709 | 28 | remote_branch = Branch.open(source) | ||
1710 | 29 | remote_branch.bzrdir.sprout(dest).open_branch() | ||
1711 | 30 | except Exception as e: | ||
1712 | 31 | raise e | ||
1713 | 32 | |||
1714 | 33 | def install(self, source): | ||
1715 | 34 | url_parts = self.parse_url(source) | ||
1716 | 35 | branch_name = url_parts.path.strip("/").split("/")[-1] | ||
1717 | 36 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name) | ||
1718 | 37 | if not os.path.exists(dest_dir): | ||
1719 | 38 | mkdir(dest_dir, perms=0755) | ||
1720 | 39 | try: | ||
1721 | 40 | self.branch(source, dest_dir) | ||
1722 | 41 | except OSError as e: | ||
1723 | 42 | raise UnhandledSource(e.strerror) | ||
1724 | 43 | return dest_dir | ||
1725 | 44 | |||
1726 | 0 | 45 | ||
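The bzr handler derives its checkout directory from the branch URL's last path segment. In isolation, using Python 3's `urllib.parse` in place of the Python 2 `urlparse` module and an illustrative `charm_dir` value:

```python
import os
from urllib.parse import urlparse

def branch_dest(source, charm_dir='/tmp/charm'):
    # Destination derivation used by BzrUrlFetchHandler.install:
    # last path segment of the branch URL under $CHARM_DIR/fetched.
    branch_name = urlparse(source).path.strip('/').split('/')[-1]
    return os.path.join(charm_dir, 'fetched', branch_name)

print(branch_dest('lp:~sidnei/charms/precise/haproxy/trunk'))
# /tmp/charm/fetched/trunk
```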
1727 | === modified file 'hooks/hooks.py' | |||
1728 | --- hooks/hooks.py 2013-05-23 21:52:06 +0000 | |||
1729 | +++ hooks/hooks.py 2013-10-10 22:34:35 +0000 | |||
1730 | @@ -1,17 +1,31 @@ | |||
1731 | 1 | #!/usr/bin/env python | 1 | #!/usr/bin/env python |
1732 | 2 | 2 | ||
1733 | 3 | import json | ||
1734 | 4 | import glob | 3 | import glob |
1735 | 5 | import os | 4 | import os |
1736 | 6 | import random | ||
1737 | 7 | import re | 5 | import re |
1738 | 8 | import socket | 6 | import socket |
1740 | 9 | import string | 7 | import shutil |
1741 | 10 | import subprocess | 8 | import subprocess |
1742 | 11 | import sys | 9 | import sys |
1743 | 12 | import yaml | 10 | import yaml |
1746 | 13 | import nrpe | 11 | |
1747 | 14 | import time | 12 | from itertools import izip, tee |
1748 | 13 | |||
1749 | 14 | from charmhelpers.core.host import pwgen | ||
1750 | 15 | from charmhelpers.core.hookenv import ( | ||
1751 | 16 | log, | ||
1752 | 17 | config as config_get, | ||
1753 | 18 | relation_set, | ||
1754 | 19 | relation_ids as get_relation_ids, | ||
1755 | 20 | relations_of_type, | ||
1756 | 21 | relations_for_id, | ||
1757 | 22 | relation_id, | ||
1758 | 23 | open_port, | ||
1759 | 24 | close_port, | ||
1760 | 25 | unit_get, | ||
1761 | 26 | ) | ||
1762 | 27 | from charmhelpers.fetch import apt_install | ||
1763 | 28 | from charmhelpers.contrib.charmsupport import nrpe | ||
1764 | 15 | 29 | ||
1765 | 16 | 30 | ||
1766 | 17 | ############################################################################### | 31 | ############################################################################### |
1767 | @@ -20,92 +34,52 @@ | |||
1768 | 20 | default_haproxy_config_dir = "/etc/haproxy" | 34 | default_haproxy_config_dir = "/etc/haproxy" |
1769 | 21 | default_haproxy_config = "%s/haproxy.cfg" % default_haproxy_config_dir | 35 | default_haproxy_config = "%s/haproxy.cfg" % default_haproxy_config_dir |
1770 | 22 | default_haproxy_service_config_dir = "/var/run/haproxy" | 36 | default_haproxy_service_config_dir = "/var/run/haproxy" |
1772 | 23 | HOOK_NAME = os.path.basename(sys.argv[0]) | 37 | service_affecting_packages = ['haproxy'] |
1773 | 38 | |||
1774 | 39 | frontend_only_options = [ | ||
1775 | 40 | "backlog", | ||
1776 | 41 | "bind", | ||
1777 | 42 | "capture cookie", | ||
1778 | 43 | "capture request header", | ||
1779 | 44 | "capture response header", | ||
1780 | 45 | "clitimeout", | ||
1781 | 46 | "default_backend", | ||
1782 | 47 | "maxconn", | ||
1783 | 48 | "monitor fail", | ||
1784 | 49 | "monitor-net", | ||
1785 | 50 | "monitor-uri", | ||
1786 | 51 | "option accept-invalid-http-request", | ||
1787 | 52 | "option clitcpka", | ||
1788 | 53 | "option contstats", | ||
1789 | 54 | "option dontlog-normal", | ||
1790 | 55 | "option dontlognull", | ||
1791 | 56 | "option http-use-proxy-header", | ||
1792 | 57 | "option log-separate-errors", | ||
1793 | 58 | "option logasap", | ||
1794 | 59 | "option socket-stats", | ||
1795 | 60 | "option tcp-smart-accept", | ||
1796 | 61 | "rate-limit sessions", | ||
1797 | 62 | "tcp-request content accept", | ||
1798 | 63 | "tcp-request content reject", | ||
1799 | 64 | "tcp-request inspect-delay", | ||
1800 | 65 | "timeout client", | ||
1801 | 66 | "timeout clitimeout", | ||
1802 | 67 | "use_backend", | ||
1803 | 68 | ] | ||
1804 | 69 | |||
1805 | 24 | 70 | ||
1806 | 25 | ############################################################################### | 71 | ############################################################################### |
1807 | 26 | # Supporting functions | 72 | # Supporting functions |
1808 | 27 | ############################################################################### | 73 | ############################################################################### |
1809 | 28 | 74 | ||
1890 | 29 | def unit_get(*args): | 75 | |
1891 | 30 | """Simple wrapper around unit-get, all arguments passed untouched""" | 76 | def ensure_package_status(packages, status): |
1892 | 31 | get_args = ["unit-get"] | 77 | if status in ['install', 'hold']: |
1893 | 32 | get_args.extend(args) | 78 | selections = ''.join(['{} {}\n'.format(package, status) |
1894 | 33 | return subprocess.check_output(get_args) | 79 | for package in packages]) |
1895 | 34 | 80 | dpkg = subprocess.Popen(['dpkg', '--set-selections'], | |
1896 | 35 | def juju_log(*args): | 81 | stdin=subprocess.PIPE) |
1897 | 36 | """Simple wrapper around juju-log, all arguments are passed untouched""" | 82 | dpkg.communicate(input=selections) |
1818 | 37 | log_args = ["juju-log"] | ||
1819 | 38 | log_args.extend(args) | ||
1820 | 39 | subprocess.call(log_args) | ||
1821 | 40 | |||
1822 | 41 | #------------------------------------------------------------------------------ | ||
1823 | 42 | # config_get: Returns a dictionary containing all of the config information | ||
1824 | 43 | # Optional parameter: scope | ||
1825 | 44 | # scope: limits the scope of the returned configuration to the | ||
1826 | 45 | # desired config item. | ||
1827 | 46 | #------------------------------------------------------------------------------ | ||
1828 | 47 | def config_get(scope=None): | ||
1829 | 48 | try: | ||
1830 | 49 | config_cmd_line = ['config-get'] | ||
1831 | 50 | if scope is not None: | ||
1832 | 51 | config_cmd_line.append(scope) | ||
1833 | 52 | config_cmd_line.append('--format=json') | ||
1834 | 53 | config_data = json.loads(subprocess.check_output(config_cmd_line)) | ||
1835 | 54 | except Exception, e: | ||
1836 | 55 | subprocess.call(['juju-log', str(e)]) | ||
1837 | 56 | config_data = None | ||
1838 | 57 | finally: | ||
1839 | 58 | return(config_data) | ||
1840 | 59 | |||
1841 | 60 | |||
1842 | 61 | #------------------------------------------------------------------------------ | ||
1843 | 62 | # relation_get: Returns a dictionary containing the relation information | ||
1844 | 63 | # Optional parameters: scope, relation_id | ||
1845 | 64 | # scope: limits the scope of the returned data to the | ||
1846 | 65 | # desired item. | ||
1847 | 66 | # unit_name: limits the data ( and optionally the scope ) | ||
1848 | 67 | # to the specified unit | ||
1849 | 68 | # relation_id: specify relation id for out of context usage. | ||
1850 | 69 | #------------------------------------------------------------------------------ | ||
1851 | 70 | def relation_get(scope=None, unit_name=None, relation_id=None): | ||
1852 | 71 | try: | ||
1853 | 72 | relation_cmd_line = ['relation-get', '--format=json'] | ||
1854 | 73 | if relation_id is not None: | ||
1855 | 74 | relation_cmd_line.extend(('-r', relation_id)) | ||
1856 | 75 | if scope is not None: | ||
1857 | 76 | relation_cmd_line.append(scope) | ||
1858 | 77 | else: | ||
1859 | 78 | relation_cmd_line.append('') | ||
1860 | 79 | if unit_name is not None: | ||
1861 | 80 | relation_cmd_line.append(unit_name) | ||
1862 | 81 | relation_data = json.loads(subprocess.check_output(relation_cmd_line)) | ||
1863 | 82 | except Exception, e: | ||
1864 | 83 | subprocess.call(['juju-log', str(e)]) | ||
1865 | 84 | relation_data = None | ||
1866 | 85 | finally: | ||
1867 | 86 | return(relation_data) | ||
1868 | 87 | |||
1869 | 88 | def relation_set(arguments, relation_id=None): | ||
1870 | 89 | """ | ||
1871 | 90 | Wrapper around relation-set | ||
1872 | 91 | @param arguments: list of command line arguments | ||
1873 | 92 | @param relation_id: optional relation-id (passed to -r parameter) to use | ||
1874 | 93 | """ | ||
1875 | 94 | set_args = ["relation-set"] | ||
1876 | 95 | if relation_id is not None: | ||
1877 | 96 | set_args.extend(["-r", str(relation_id)]) | ||
1878 | 97 | set_args.extend(arguments) | ||
1879 | 98 | subprocess.check_call(set_args) | ||
1880 | 99 | |||
1881 | 100 | #------------------------------------------------------------------------------ | ||
1882 | 101 | # apt_get_install( package ): Installs a package | ||
1883 | 102 | #------------------------------------------------------------------------------ | ||
1884 | 103 | def apt_get_install(packages=None): | ||
1885 | 104 | if packages is None: | ||
1886 | 105 | return(False) | ||
1887 | 106 | cmd_line = ['apt-get', '-y', 'install', '-qq'] | ||
1888 | 107 | cmd_line.append(packages) | ||
1889 | 108 | return(subprocess.call(cmd_line)) | ||
1898 | 109 | 83 | ||
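`ensure_package_status` (new in this branch) holds or unholds packages by piping selections into `dpkg --set-selections`; the payload construction alone, with no dpkg invoked:

```python
def selections_for(packages, status):
    # The stdin payload ensure_package_status pipes to
    # `dpkg --set-selections`: one "<package> <status>" line each.
    assert status in ('install', 'hold')
    return ''.join('{} {}\n'.format(package, status)
                   for package in packages)

print(repr(selections_for(['haproxy'], 'hold')))  # 'haproxy hold\n'
```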
1899 | 110 | 84 | ||
1900 | 111 | #------------------------------------------------------------------------------ | 85 | #------------------------------------------------------------------------------ |
1901 | @@ -113,8 +87,8 @@ | |||
1902 | 113 | #------------------------------------------------------------------------------ | 87 | #------------------------------------------------------------------------------ |
1903 | 114 | def enable_haproxy(): | 88 | def enable_haproxy(): |
1904 | 115 | default_haproxy = "/etc/default/haproxy" | 89 | default_haproxy = "/etc/default/haproxy" |
1907 | 116 | enabled_haproxy = \ | 90 | with open(default_haproxy) as f: |
1908 | 117 | open(default_haproxy).read().replace('ENABLED=0', 'ENABLED=1') | 91 | enabled_haproxy = f.read().replace('ENABLED=0', 'ENABLED=1') |
1909 | 118 | with open(default_haproxy, 'w') as f: | 92 | with open(default_haproxy, 'w') as f: |
1910 | 119 | f.write(enabled_haproxy) | 93 | f.write(enabled_haproxy) |
1911 | 120 | 94 | ||
1912 | @@ -137,8 +111,8 @@ | |||
1913 | 137 | if config_data['global_quiet'] is True: | 111 | if config_data['global_quiet'] is True: |
1914 | 138 | haproxy_globals.append(" quiet") | 112 | haproxy_globals.append(" quiet") |
1915 | 139 | haproxy_globals.append(" spread-checks %d" % | 113 | haproxy_globals.append(" spread-checks %d" % |
1918 | 140 | config_data['global_spread_checks']) | 114 | config_data['global_spread_checks']) |
1919 | 141 | return('\n'.join(haproxy_globals)) | 115 | return '\n'.join(haproxy_globals) |
1920 | 142 | 116 | ||
1921 | 143 | 117 | ||
1922 | 144 | #------------------------------------------------------------------------------ | 118 | #------------------------------------------------------------------------------ |
1923 | @@ -157,7 +131,7 @@ | |||
1924 | 157 | haproxy_defaults.append(" retries %d" % config_data['default_retries']) | 131 | haproxy_defaults.append(" retries %d" % config_data['default_retries']) |
1925 | 158 | for timeout_item in default_timeouts: | 132 | for timeout_item in default_timeouts: |
1926 | 159 | haproxy_defaults.append(" timeout %s" % timeout_item.strip()) | 133 | haproxy_defaults.append(" timeout %s" % timeout_item.strip()) |
1928 | 160 | return('\n'.join(haproxy_defaults)) | 134 | return '\n'.join(haproxy_defaults) |
1929 | 161 | 135 | ||
1930 | 162 | 136 | ||
1931 | 163 | #------------------------------------------------------------------------------ | 137 | #------------------------------------------------------------------------------ |
1932 | @@ -168,9 +142,9 @@ | |||
1933 | 168 | #------------------------------------------------------------------------------ | 142 | #------------------------------------------------------------------------------ |
1934 | 169 | def load_haproxy_config(haproxy_config_file="/etc/haproxy/haproxy.cfg"): | 143 | def load_haproxy_config(haproxy_config_file="/etc/haproxy/haproxy.cfg"): |
1935 | 170 | if os.path.isfile(haproxy_config_file): | 144 | if os.path.isfile(haproxy_config_file): |
1937 | 171 | return(open(haproxy_config_file).read()) | 145 | return open(haproxy_config_file).read() |
1938 | 172 | else: | 146 | else: |
1940 | 173 | return(None) | 147 | return None |
1941 | 174 | 148 | ||
1942 | 175 | 149 | ||
1943 | 176 | #------------------------------------------------------------------------------ | 150 | #------------------------------------------------------------------------------ |
1944 | @@ -182,12 +156,12 @@ | |||
1945 | 182 | def get_monitoring_password(haproxy_config_file="/etc/haproxy/haproxy.cfg"): | 156 | def get_monitoring_password(haproxy_config_file="/etc/haproxy/haproxy.cfg"): |
1946 | 183 | haproxy_config = load_haproxy_config(haproxy_config_file) | 157 | haproxy_config = load_haproxy_config(haproxy_config_file) |
1947 | 184 | if haproxy_config is None: | 158 | if haproxy_config is None: |
1949 | 185 | return(None) | 159 | return None |
1950 | 186 | m = re.search("stats auth\s+(\w+):(\w+)", haproxy_config) | 160 | m = re.search("stats auth\s+(\w+):(\w+)", haproxy_config) |
1951 | 187 | if m is not None: | 161 | if m is not None: |
1953 | 188 | return(m.group(2)) | 162 | return m.group(2) |
1954 | 189 | else: | 163 | else: |
1956 | 190 | return(None) | 164 | return None |
1957 | 191 | 165 | ||
1958 | 192 | 166 | ||
1959 | 193 | #------------------------------------------------------------------------------ | 167 | #------------------------------------------------------------------------------ |
1960 | @@ -197,32 +171,29 @@ | |||
1961 | 197 | # to open and close when exposing/unexposing a service | 171 | # to open and close when exposing/unexposing a service |
1962 | 198 | #------------------------------------------------------------------------------ | 172 | #------------------------------------------------------------------------------ |
1963 | 199 | def get_service_ports(haproxy_config_file="/etc/haproxy/haproxy.cfg"): | 173 | def get_service_ports(haproxy_config_file="/etc/haproxy/haproxy.cfg"): |
1964 | 174 | stanzas = get_listen_stanzas(haproxy_config_file=haproxy_config_file) | ||
1965 | 175 | return tuple((int(port) for service, addr, port in stanzas)) | ||
1966 | 176 | |||
1967 | 177 | |||
1968 | 178 | #------------------------------------------------------------------------------ | ||
1969 | 179 | # get_listen_stanzas: Convenience function that scans the existing haproxy | ||
1970 | 180 | # configuration file and returns a list of the existing | ||
1971 | 181 | # listen stanzas configured. | ||
1972 | 182 | #------------------------------------------------------------------------------ | ||
1973 | 183 | def get_listen_stanzas(haproxy_config_file="/etc/haproxy/haproxy.cfg"): | ||
1974 | 200 | haproxy_config = load_haproxy_config(haproxy_config_file) | 184 | haproxy_config = load_haproxy_config(haproxy_config_file) |
1975 | 201 | if haproxy_config is None: | 185 | if haproxy_config is None: |
2000 | 202 | return(None) | 186 | return () |
2001 | 203 | return(re.findall("listen.*:(.*)", haproxy_config)) | 187 | listen_stanzas = re.findall( |
2002 | 204 | 188 | "listen\s+([^\s]+)\s+([^:]+):(.*)", | |
2003 | 205 | 189 | haproxy_config) | |
2004 | 206 | #------------------------------------------------------------------------------ | 190 | bind_stanzas = re.findall( |
2005 | 207 | # open_port: Convenience function to open a port in juju to | 191 | "\s+bind\s+([^:]+):(\d+)\s*\n\s+default_backend\s+([^\s]+)", |
2006 | 208 | # expose a service | 192 | haproxy_config, re.M) |
2007 | 209 | #------------------------------------------------------------------------------ | 193 | return (tuple(((service, addr, int(port)) |
2008 | 210 | def open_port(port=None, protocol="TCP"): | 194 | for service, addr, port in listen_stanzas)) + |
2009 | 211 | if port is None: | 195 | tuple(((service, addr, int(port)) |
2010 | 212 | return(None) | 196 | for addr, port, service in bind_stanzas))) |
1987 | 213 | return(subprocess.call(['open-port', "%d/%s" % | ||
1988 | 214 | (int(port), protocol)])) | ||
1989 | 215 | |||
1990 | 216 | |||
1991 | 217 | #------------------------------------------------------------------------------ | ||
1992 | 218 | # close_port: Convenience function to close a port in juju to | ||
1993 | 219 | # unexpose a service | ||
1994 | 220 | #------------------------------------------------------------------------------ | ||
1995 | 221 | def close_port(port=None, protocol="TCP"): | ||
1996 | 222 | if port is None: | ||
1997 | 223 | return(None) | ||
1998 | 224 | return(subprocess.call(['close-port', "%d/%s" % | ||
1999 | 225 | (int(port), protocol)])) | ||
2011 | 226 | 197 | ||
2012 | 227 | 198 | ||
2013 | 228 | #------------------------------------------------------------------------------ | 199 | #------------------------------------------------------------------------------ |
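For reviewers, a quick sketch of what the two new regular expressions in get_listen_stanzas() accept. The sample config text is illustrative, not taken from a real unit; it covers both the old-style `listen` stanza and the new frontend/backend pair this branch starts emitting:

```python
import re

# Illustrative config with both stanza styles get_listen_stanzas() must parse.
haproxy_config = """\
listen stats_service 0.0.0.0:8000
    mode http

frontend haproxy-0-80
    bind 0.0.0.0:80
    default_backend app_service
"""

listen_stanzas = re.findall(
    r"listen\s+([^\s]+)\s+([^:]+):(.*)", haproxy_config)
bind_stanzas = re.findall(
    r"\s+bind\s+([^:]+):(\d+)\s*\n\s+default_backend\s+([^\s]+)",
    haproxy_config, re.M)

# Same (service, address, port) normalization as in the branch.
stanzas = (tuple((service, addr, int(port))
                 for service, addr, port in listen_stanzas) +
           tuple((service, addr, int(port))
                 for addr, port, service in bind_stanzas))
print(stanzas)
# (('stats_service', '0.0.0.0', 8000), ('app_service', '0.0.0.0', 80))
```

Note the bind regex captures (address, port, backend-name), so the tuple order is swapped when normalizing.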
@@ -232,26 +203,25 @@
 #------------------------------------------------------------------------------
 def update_service_ports(old_service_ports=None, new_service_ports=None):
     if old_service_ports is None or new_service_ports is None:
-        return(None)
+        return None
     for port in old_service_ports:
         if port not in new_service_ports:
             close_port(port)
     for port in new_service_ports:
-        if port not in old_service_ports:
-            open_port(port)
-
-
-#------------------------------------------------------------------------------
-# pwgen: Generates a random password
-#        pwd_length: Defines the length of the password to generate
-#                    default: 20
-#------------------------------------------------------------------------------
-def pwgen(pwd_length=20):
-    alphanumeric_chars = [l for l in (string.letters + string.digits)
-                          if l not in 'Iil0oO1']
-    random_chars = [random.choice(alphanumeric_chars)
-                    for i in range(pwd_length)]
-    return(''.join(random_chars))
+        open_port(port)
+
+
+#------------------------------------------------------------------------------
+# update_sysctl: create a sysctl.conf file from YAML-formatted 'sysctl' config
+#------------------------------------------------------------------------------
+def update_sysctl(config_data):
+    sysctl_dict = yaml.load(config_data.get("sysctl", "{}"))
+    if sysctl_dict:
+        sysctl_file = open("/etc/sysctl.d/50-haproxy.conf", "w")
+        for key in sysctl_dict:
+            sysctl_file.write("{}={}\n".format(key, sysctl_dict[key]))
+        sysctl_file.close()
+        subprocess.call(["sysctl", "-p", "/etc/sysctl.d/50-haproxy.conf"])
 
 
 #------------------------------------------------------------------------------
@@ -271,22 +241,40 @@
                          service_port=None, service_options=None,
                          server_entries=None):
     if service_name is None or service_ip is None or service_port is None:
-        return(None)
+        return None
+    fe_options = []
+    be_options = []
+    if service_options is not None:
+        # Filter provided service options into frontend-only and backend-only.
+        results = izip(
+            (fe_options, be_options),
+            (True, False),
+            tee((o, any(map(o.strip().startswith,
+                            frontend_only_options)))
+                for o in service_options))
+        for out, cond, result in results:
+            out.extend(option for option, match in result if match is cond)
     service_config = []
-    service_config.append("listen %s %s:%s" %
-                          (service_name, service_ip, service_port))
-    if service_options is not None:
-        for service_option in service_options:
-            service_config.append("    %s" % service_option.strip())
-    if server_entries is not None and isinstance(server_entries, list):
-        for (server_name, server_ip, server_port, server_options) \
-                in server_entries:
+    unit_name = os.environ["JUJU_UNIT_NAME"].replace("/", "-")
+    service_config.append("frontend %s-%s" % (unit_name, service_port))
+    service_config.append("    bind %s:%s" %
+                          (service_ip, service_port))
+    service_config.append("    default_backend %s" % (service_name,))
+    service_config.extend("    %s" % service_option.strip()
+                          for service_option in fe_options)
+    service_config.append("")
+    service_config.append("backend %s" % (service_name,))
+    service_config.extend("    %s" % service_option.strip()
+                          for service_option in be_options)
+    if isinstance(server_entries, (list, tuple)):
+        for (server_name, server_ip, server_port,
+                server_options) in server_entries:
             server_line = "    server %s %s:%s" % \
                           (server_name, server_ip, server_port)
             if server_options is not None:
-                server_line += " %s" % server_options
+                server_line += " %s" % " ".join(server_options)
             service_config.append(server_line)
-    return('\n'.join(service_config))
+    return '\n'.join(service_config)
 
 
 #------------------------------------------------------------------------------
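The izip/tee option-splitting above is compact but dense; here is a standalone sketch of the same partitioning, written for Python 3 (zip/tee instead of izip) and with an assumed frontend_only_options list, since the real constant lives outside this hunk:

```python
from itertools import tee

# Assumed subset for illustration; the charm defines the real
# frontend_only_options constant elsewhere in hooks.py.
frontend_only_options = ["acl", "use_backend", "default_backend"]

def split_service_options(service_options):
    """Partition options into frontend-only and backend-only lists."""
    fe_options, be_options = [], []
    # Flag each option once, then tee the flagged stream so both the
    # frontend pass (match is True) and backend pass (match is False)
    # can consume it.
    flagged = ((o, any(map(o.strip().startswith, frontend_only_options)))
               for o in service_options)
    results = zip((fe_options, be_options), (True, False), tee(flagged))
    for out, cond, result in results:
        out.extend(option for option, match in result if match is cond)
    return fe_options, be_options

fe, be = split_service_options(
    ["acl url_stats path_beg /stats", "balance leastconn"])
print(fe)  # ['acl url_stats path_beg /stats']
print(be)  # ['balance leastconn']
```

The tee() call is what lets a single generator feed two filtering passes without re-evaluating the startswith checks.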
@@ -296,216 +284,234 @@
 def create_monitoring_stanza(service_name="haproxy_monitoring"):
     config_data = config_get()
     if config_data['enable_monitoring'] is False:
-        return(None)
+        return None
     monitoring_password = get_monitoring_password()
     if config_data['monitoring_password'] != "changeme":
         monitoring_password = config_data['monitoring_password']
-    elif monitoring_password is None and \
-            config_data['monitoring_password'] == "changeme":
-        monitoring_password = pwgen()
+    elif (monitoring_password is None and
+          config_data['monitoring_password'] == "changeme"):
+        monitoring_password = pwgen(length=20)
     monitoring_config = []
     monitoring_config.append("mode http")
     monitoring_config.append("acl allowed_cidr src %s" %
                              config_data['monitoring_allowed_cidr'])
     monitoring_config.append("block unless allowed_cidr")
     monitoring_config.append("stats enable")
     monitoring_config.append("stats uri /")
     monitoring_config.append("stats realm Haproxy\ Statistics")
     monitoring_config.append("stats auth %s:%s" %
-        (config_data['monitoring_username'], monitoring_password))
+                             (config_data['monitoring_username'],
+                              monitoring_password))
     monitoring_config.append("stats refresh %d" %
-        config_data['monitoring_stats_refresh'])
-    return(create_listen_stanza(service_name,
-        "0.0.0.0",
-        config_data['monitoring_port'],
-        monitoring_config))
+                             config_data['monitoring_stats_refresh'])
+    return create_listen_stanza(service_name,
+                                "0.0.0.0",
+                                config_data['monitoring_port'],
+                                monitoring_config)
 
-def get_host_port(services_list):
-    """
-    Given a services list and global juju information, get a host
-    and port for this system.
-    """
-    host = services_list[0]["service_host"]
-    port = int(services_list[0]["service_port"])
-    return (host, port)
-
+
+#------------------------------------------------------------------------------
+# get_config_services: Convenience function that returns a mapping containing
+#                      all of the services configuration
+#------------------------------------------------------------------------------
 def get_config_services():
-    """
-    Return dict of all services in the configuration, and in the relation
-    where appropriate. If a relation contains a "services" key, read
-    it in as yaml as is the case with the configuration. Set the host and
-    port for any relation initiated service entry as those items cannot be
-    known by the other side of the relation. In the case of a
-    proxy configuration found, ensure the forward for option is set.
-    """
     config_data = config_get()
-    config_services_list = yaml.load(config_data['services'])
-    (host, port) = get_host_port(config_services_list)
-    all_relations = relation_get_all("reverseproxy")
-    services_list = []
-    if hasattr(all_relations, "iteritems"):
-        for relid, reldata in all_relations.iteritems():
-            for unit, relation_info in reldata.iteritems():
-                if relation_info.has_key("services"):
-                    rservices = yaml.load(relation_info["services"])
-                    for r in rservices:
-                        r["service_host"] = host
-                        r["service_port"] = port
-                        port += 1
-                    services_list.extend(rservices)
-    if len(services_list) == 0:
-        services_list = config_services_list
-    return(services_list)
+    services = {}
+    for service in yaml.safe_load(config_data['services']):
+        service_name = service["service_name"]
+        if not services:
+            # 'None' is used as a marker for the first service defined, which
+            # is used as the default service if a proxied server doesn't
+            # specify which service it is bound to.
+            services[None] = {"service_name": service_name}
+        if is_proxy(service_name) and ("option forwardfor" not in
+                                       service["service_options"]):
+            service["service_options"].append("option forwardfor")
+
+        if isinstance(service["server_options"], basestring):
+            service["server_options"] = service["server_options"].split()
+
+        services[service_name] = service
+
+    return services
 
 
 #------------------------------------------------------------------------------
 # get_config_service: Convenience function that returns a dictionary
-#                     of the configuration of a given services configuration
+#                     of the configuration of a given service's configuration
 #------------------------------------------------------------------------------
 def get_config_service(service_name=None):
-    services_list = get_config_services()
-    for service_item in services_list:
-        if service_item['service_name'] == service_name:
-            return(service_item)
-    return(None)
-
-
+    return get_config_services().get(service_name, None)
+
+
+def is_proxy(service_name):
+    flag_path = os.path.join(default_haproxy_service_config_dir,
+                             "%s.is.proxy" % service_name)
+    return os.path.exists(flag_path)
-def relation_get_all(relation_name):
-    """
-    Iterate through all relations, and return large data structure with the
-    relation data set:
-
-    @param relation_name: The name of the relation to check
-
-    Returns:
-
-        relation_id:
-            unit:
-                key: value
-                key2: value
-    """
-    result = {}
-    try:
-        relids = subprocess.Popen(
-            ['relation-ids', relation_name], stdout=subprocess.PIPE)
-        for relid in [x.strip() for x in relids.stdout]:
-            result[relid] = {}
-            for unit in json.loads(
-                subprocess.check_output(
-                    ['relation-list', '--format=json', '-r', relid])):
-                result[relid][unit] = relation_get(None, unit, relid)
-        return result
-    except Exception, e:
-        subprocess.call(['juju-log', str(e)])
-
-def get_services_dict():
-    """
-    Transform the services list into a dict for easier comprehension,
-    and to ensure that we have only one entry per service type. If multiple
-    relations specify the same server_name, try to union the servers
-    entries.
-    """
-    services_list = get_config_services()
-    services_dict = {}
-
-    for service_item in services_list:
-        if not hasattr(service_item, "iteritems"):
-            juju_log("Each 'services' entry must be a dict: %s" % service_item)
-            continue;
-        if "service_name" not in service_item:
-            juju_log("Missing 'service_name': %s" % service_item)
-            continue;
-        name = service_item["service_name"]
-        options = service_item["service_options"]
-        if name in services_dict:
-            if "servers" in services_dict[name]:
-                services_dict[name]["servers"].extend(service_item["servers"])
-        else:
-            services_dict[name] = service_item
-        if os.path.exists("%s/%s.is.proxy" % (
-                default_haproxy_service_config_dir, name)):
-            if 'option forwardfor' not in options:
-                options.append("option forwardfor")
-
-    return services_dict
-
-def get_all_services():
-    """
-    Transform a services dict into an "all_services" relation setting expected
-    by apache2. This is needed to ensure we have the port and hostname setting
-    correct and in the proper format
-    """
-    services = get_services_dict()
-    all_services = []
-    for name in services:
-        s = {"service_name": name,
-             "service_port": services[name]["service_port"]}
-        all_services.append(s)
-    return all_services
+
 
 #------------------------------------------------------------------------------
 # create_services: Function that will create the services configuration
 #                  from the config data and/or relation information
 #------------------------------------------------------------------------------
 def create_services():
-    services_list = get_config_services()
-    services_dict = get_services_dict()
-
-    # service definitions overwrites user specified haproxy file in
-    # a pseudo-template form
-    all_relations = relation_get_all("reverseproxy")
-    for relid, reldata in all_relations.iteritems():
-        for unit, relation_info in reldata.iteritems():
-            if not isinstance(relation_info, dict):
-                sys.exit(0)
-            if "services" in relation_info:
-                juju_log("Relation %s has services override defined" % relid)
-                continue;
-            if "hostname" not in relation_info or "port" not in relation_info:
-                juju_log("Relation %s needs hostname and port defined" % relid)
-                continue;
-            juju_service_name = unit.rpartition('/')[0]
-            # Mandatory switches ( hostname, port )
-            server_name = "%s__%s" % (
-                relation_info['hostname'].replace('.', '_'),
-                relation_info['port'])
-            server_ip = relation_info['hostname']
-            server_port = relation_info['port']
-            # Optional switches ( service_name )
-            if 'service_name' in relation_info:
-                if relation_info['service_name'] in services_dict:
-                    service_name = relation_info['service_name']
-                else:
-                    juju_log("service %s does not exist." % (
-                        relation_info['service_name']))
-                    sys.exit(1)
-            elif juju_service_name + '_service' in services_dict:
-                service_name = juju_service_name + '_service'
-            else:
-                service_name = services_list[0]['service_name']
+    services_dict = get_config_services()
+    if len(services_dict) == 0:
+        log("No services configured, exiting.")
+        return
+
+    relation_data = relations_of_type("reverseproxy")
+
+    for relation_info in relation_data:
+        unit = relation_info['__unit__']
+        juju_service_name = unit.rpartition('/')[0]
+
+        relation_ok = True
+        for required in ("port", "private-address", "hostname"):
+            if not required in relation_info:
+                log("No %s in relation data for '%s', skipping." %
+                    (required, unit))
+                relation_ok = False
+                break
+
+        if not relation_ok:
+            continue
+
+        # Mandatory switches ( private-address, port )
+        host = relation_info['private-address']
+        port = relation_info['port']
+        server_name = ("%s-%s" % (unit.replace("/", "-"), port))
+
+        # Optional switches ( service_name, sitenames )
+        service_names = set()
+        if 'service_name' in relation_info:
+            if relation_info['service_name'] in services_dict:
+                service_names.add(relation_info['service_name'])
+            else:
+                log("Service '%s' does not exist." %
+                    relation_info['service_name'])
+                continue
+
+        if 'sitenames' in relation_info:
+            sitenames = relation_info['sitenames'].split()
+            for sitename in sitenames:
+                if sitename in services_dict:
+                    service_names.add(sitename)
+
+        if juju_service_name + "_service" in services_dict:
+            service_names.add(juju_service_name + "_service")
+
+        if juju_service_name in services_dict:
+            service_names.add(juju_service_name)
+
+        if not service_names:
+            service_names.add(services_dict[None]["service_name"])
+
+        for service_name in service_names:
+            service = services_dict[service_name]
+
             # Add the server entries
-            if not 'servers' in services_dict[service_name]:
-                services_dict[service_name]['servers'] = []
-            services_dict[service_name]['servers'].append((
-                server_name, server_ip, server_port,
-                services_dict[service_name]['server_options']))
-
+            servers = service.setdefault("servers", [])
+            servers.append((server_name, host, port,
+                            services_dict[service_name].get(
+                                'server_options', [])))
+
+    has_servers = False
+    for service_name, service in services_dict.iteritems():
+        if service.get("servers", []):
+            has_servers = True
+
+    if not has_servers:
+        log("No backend servers, exiting.")
+        return
+
+    del services_dict[None]
+    services_dict = apply_peer_config(services_dict)
+    write_service_config(services_dict)
+    return services_dict
+
+
+def apply_peer_config(services_dict):
+    peer_data = relations_of_type("peer")
+
+    peer_services = {}
+    for relation_info in peer_data:
+        unit_name = relation_info["__unit__"]
+        peer_services_data = relation_info.get("all_services")
+        if peer_services_data is None:
+            continue
+        service_data = yaml.safe_load(peer_services_data)
+        for service in service_data:
+            service_name = service["service_name"]
+            if service_name in services_dict:
+                peer_service = peer_services.setdefault(service_name, {})
+                peer_service["service_name"] = service_name
+                peer_service["service_host"] = service["service_host"]
+                peer_service["service_port"] = service["service_port"]
+                peer_service["service_options"] = ["balance leastconn",
+                                                   "mode tcp",
+                                                   "option tcplog"]
+                servers = peer_service.setdefault("servers", [])
+                servers.append((unit_name.replace("/", "-"),
+                                relation_info["private-address"],
+                                service["service_port"] + 1, ["check"]))
+
+    if not peer_services:
+        return services_dict
+
+    unit_name = os.environ["JUJU_UNIT_NAME"].replace("/", "-")
+    private_address = unit_get("private-address")
+    for service_name, peer_service in peer_services.iteritems():
+        original_service = services_dict[service_name]
+
+        # If the original service has timeout settings, copy them over to the
+        # peer service.
+        for option in original_service.get("service_options", ()):
+            if "timeout" in option:
+                peer_service["service_options"].append(option)
+
+        servers = peer_service["servers"]
+        # Add ourselves to the list of servers for the peer listen stanza.
+        servers.append((unit_name, private_address,
+                        original_service["service_port"] + 1,
+                        ["check"]))
+
+        # Make all but the first server in the peer listen stanza a backup
+        # server.
+        servers.sort()
+        for server in servers[1:]:
+            server[3].append("backup")
+
+        # Remap original service port, will now be used by peer listen stanza.
+        original_service["service_port"] += 1
+
+        # Remap original service to a new name, stuff peer listen stanza into
+        # its place.
+        be_service = service_name + "_be"
+        original_service["service_name"] = be_service
+        services_dict[be_service] = original_service
+        services_dict[service_name] = peer_service
+
+    return services_dict
+
+
+def write_service_config(services_dict):
     # Construct the new haproxy.cfg file
-    for service in services_dict:
-        juju_log("Service: ", service)
-        server_entries = None
-        if 'servers' in services_dict[service]:
-            server_entries = services_dict[service]['servers']
-        service_config_file = "%s/%s.service" % (
-            default_haproxy_service_config_dir,
-            services_dict[service]['service_name'])
-        with open(service_config_file, 'w') as service_config:
-            service_config.write(
-                create_listen_stanza(services_dict[service]['service_name'],
-                                     services_dict[service]['service_host'],
-                                     services_dict[service]['service_port'],
-                                     services_dict[service]['service_options'],
-                                     server_entries))
+    for service_key, service_config in services_dict.items():
+        log("Service: %s" % service_key)
+        server_entries = service_config.get('servers')
+
+        service_name = service_config["service_name"]
+        if not os.path.exists(default_haproxy_service_config_dir):
+            os.mkdir(default_haproxy_service_config_dir, 0600)
+        with open(os.path.join(default_haproxy_service_config_dir,
+                               "%s.service" % service_name), 'w') as config:
+            config.write(create_listen_stanza(
+                service_name,
+                service_config['service_host'],
+                service_config['service_port'],
+                service_config['service_options'],
+                server_entries))
 
 
 #------------------------------------------------------------------------------
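The peer-failover arithmetic is the subtle part of apply_peer_config() above: each unit's real backend moves to service_port + 1, the peers are sorted by unit name, and every peer after the first is tagged as a backup server so traffic funnels through a single unit (which is what makes the maxconn enforcement from the change description possible). A toy run of just that marking step, with invented unit names and addresses:

```python
# Invented peer entries: [unit-name, address, remapped port, options],
# mirroring the (name, ip, port, options) tuples the charm builds.
servers = [
    ["haproxy-1", "10.0.0.2", 81, ["check"]],
    ["haproxy-0", "10.0.0.1", 81, ["check"]],
]

# Sort by unit name, then mark all but the first server as backup,
# exactly as the loop in apply_peer_config() does.
servers.sort()
for server in servers[1:]:
    server[3].append("backup")

print(servers)
# [['haproxy-0', '10.0.0.1', 81, ['check']],
#  ['haproxy-1', '10.0.0.2', 81, ['check', 'backup']]]
```

Because the sort is by unit name, the "primary" unit is stable across all peers: every unit computes the same ordering from the same relation data.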
@@ -516,17 +522,19 @@
     services = ''
     if service_name is not None:
         if os.path.exists("%s/%s.service" %
-            (default_haproxy_service_config_dir, service_name)):
-            services = open("%s/%s.service" %
-                (default_haproxy_service_config_dir, service_name)).read()
+                          (default_haproxy_service_config_dir, service_name)):
+            with open("%s/%s.service" % (default_haproxy_service_config_dir,
+                                         service_name)) as f:
+                services = f.read()
         else:
             services = None
     else:
         for service in glob.glob("%s/*.service" %
-            default_haproxy_service_config_dir):
-            services += open(service).read()
-            services += "\n\n"
-    return(services)
+                                 default_haproxy_service_config_dir):
+            with open(service) as f:
+                services += f.read()
+                services += "\n\n"
+    return services
 
 
 #------------------------------------------------------------------------------
2550 | @@ -537,24 +545,24 @@ | |||
2551 | 537 | #------------------------------------------------------------------------------ | 545 | #------------------------------------------------------------------------------ |
2552 | 538 | def remove_services(service_name=None): | 546 | def remove_services(service_name=None): |
2553 | 539 | if service_name is not None: | 547 | if service_name is not None: |
2556 | 540 | if os.path.exists("%s/%s.service" % | 548 | path = "%s/%s.service" % (default_haproxy_service_config_dir, |
2557 | 541 | (default_haproxy_service_config_dir, service_name)): | 549 | service_name) |
2558 | 550 | if os.path.exists(path): | ||
2559 | 542 | try: | 551 | try: |
2563 | 543 | os.remove("%s/%s.service" % | 552 | os.remove(path) |
2561 | 544 | (default_haproxy_service_config_dir, service_name)) | ||
2562 | 545 | return(True) | ||
2564 | 546 | except Exception, e: | 553 | except Exception, e: |
2567 | 547 | subprocess.call(['juju-log', str(e)]) | 554 | log(str(e)) |
2568 | 548 | return(False) | 555 | return False |
2569 | 556 | return True | ||
2570 | 549 | else: | 557 | else: |
2571 | 550 | for service in glob.glob("%s/*.service" % | 558 | for service in glob.glob("%s/*.service" % |
2573 | 551 | default_haproxy_service_config_dir): | 559 | default_haproxy_service_config_dir): |
2574 | 552 | try: | 560 | try: |
2575 | 553 | os.remove(service) | 561 | os.remove(service) |
2576 | 554 | except Exception, e: | 562 | except Exception, e: |
2578 | 555 | subprocess.call(['juju-log', str(e)]) | 563 | log(str(e)) |
2579 | 556 | pass | 564 | pass |
2581 | 557 | return(True) | 565 | return True |
2582 | 558 | 566 | ||
2583 | 559 | 567 | ||
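The bulk-removal branch of `remove_services` above deliberately swallows per-file errors so that one unremovable file does not abort the whole hook. A minimal standalone sketch of that pattern (the function name and `print` logging are illustrative, not from the charm):

```python
import glob
import os


def remove_service_files(config_dir):
    # Best-effort removal of every *.service file; failures are
    # logged and skipped so one bad file does not abort the hook.
    for path in glob.glob(os.path.join(config_dir, "*.service")):
        try:
            os.remove(path)
        except OSError as e:
            print("failed to remove %s: %s" % (path, e))
    return True
```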
2584 | 560 | #------------------------------------------------------------------------------ | 568 | #------------------------------------------------------------------------------ |
2585 | @@ -567,27 +575,18 @@ | |||
2586 | 567 | # optional arguments | 575 | # optional arguments |
2587 | 568 | #------------------------------------------------------------------------------ | 576 | #------------------------------------------------------------------------------ |
2588 | 569 | def construct_haproxy_config(haproxy_globals=None, | 577 | def construct_haproxy_config(haproxy_globals=None, |
2595 | 570 | haproxy_defaults=None, | 578 | haproxy_defaults=None, |
2596 | 571 | haproxy_monitoring=None, | 579 | haproxy_monitoring=None, |
2597 | 572 | haproxy_services=None): | 580 | haproxy_services=None): |
2598 | 573 | if haproxy_globals is None or \ | 581 | if None in (haproxy_globals, haproxy_defaults): |
2599 | 574 | haproxy_defaults is None: | 582 | return |
2594 | 575 | return(None) | ||
2600 | 576 | with open(default_haproxy_config, 'w') as haproxy_config: | 583 | with open(default_haproxy_config, 'w') as haproxy_config: |
2615 | 577 | haproxy_config.write(haproxy_globals) | 584 | config_string = '' |
2616 | 578 | haproxy_config.write("\n") | 585 | for config in (haproxy_globals, haproxy_defaults, haproxy_monitoring, |
2617 | 579 | haproxy_config.write("\n") | 586 | haproxy_services): |
2618 | 580 | haproxy_config.write(haproxy_defaults) | 587 | if config is not None: |
2619 | 581 | haproxy_config.write("\n") | 588 | config_string += config + '\n\n' |
2620 | 582 | haproxy_config.write("\n") | 589 | haproxy_config.write(config_string) |
2607 | 583 | if haproxy_monitoring is not None: | ||
2608 | 584 | haproxy_config.write(haproxy_monitoring) | ||
2609 | 585 | haproxy_config.write("\n") | ||
2610 | 586 | haproxy_config.write("\n") | ||
2611 | 587 | if haproxy_services is not None: | ||
2612 | 588 | haproxy_config.write(haproxy_services) | ||
2613 | 589 | haproxy_config.write("\n") | ||
2614 | 590 | haproxy_config.write("\n") | ||
2621 | 591 | 590 | ||
2622 | 592 | 591 | ||
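The refactored `construct_haproxy_config` replaces the long run of `write()` calls with one pass that concatenates whichever sections are present, each followed by a blank line. The concatenation step can be sketched as (helper name is illustrative):

```python
def build_config(*sections):
    # Concatenate the non-None config sections, each terminated by a
    # blank line, mirroring the single-write refactor in the diff.
    return ''.join(section + '\n\n'
                   for section in sections if section is not None)
```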
2623 | 593 | #------------------------------------------------------------------------------ | 592 | #------------------------------------------------------------------------------ |
2624 | @@ -595,50 +594,37 @@ | |||
2625 | 595 | # the haproxy service | 594 | # the haproxy service |
2626 | 596 | #------------------------------------------------------------------------------ | 595 | #------------------------------------------------------------------------------ |
2627 | 597 | def service_haproxy(action=None, haproxy_config=default_haproxy_config): | 596 | def service_haproxy(action=None, haproxy_config=default_haproxy_config): |
2630 | 598 | if action is None or haproxy_config is None: | 597 | if None in (action, haproxy_config): |
2631 | 599 | return(None) | 598 | return None |
2632 | 600 | elif action == "check": | 599 | elif action == "check": |
2641 | 601 | retVal = subprocess.call( | 600 | command = ['/usr/sbin/haproxy', '-f', haproxy_config, '-c'] |
2634 | 602 | ['/usr/sbin/haproxy', '-f', haproxy_config, '-c']) | ||
2635 | 603 | if retVal == 1: | ||
2636 | 604 | return(False) | ||
2637 | 605 | elif retVal == 0: | ||
2638 | 606 | return(True) | ||
2639 | 607 | else: | ||
2640 | 608 | return(False) | ||
2642 | 609 | else: | 601 | else: |
2658 | 610 | retVal = subprocess.call(['service', 'haproxy', action]) | 602 | command = ['service', 'haproxy', action] |
2659 | 611 | if retVal == 0: | 603 | return_value = subprocess.call(command) |
2660 | 612 | return(True) | 604 | return return_value == 0 |
2646 | 613 | else: | ||
2647 | 614 | return(False) | ||
2648 | 615 | |||
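Both branches of the refactored `service_haproxy` now funnel into a single `subprocess.call` whose exit status is mapped to a boolean, replacing the nested `retVal` comparisons. The idiom, shown here with a generic command since running `haproxy -c` requires the package to be installed (`run_ok` is an illustrative name):

```python
import subprocess


def run_ok(command):
    # subprocess.call returns the child's exit status; zero
    # conventionally means success, anything else is failure.
    return subprocess.call(command) == 0

# e.g. run_ok(['/usr/sbin/haproxy', '-f', config_path, '-c'])
# for a configuration check.
```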
2649 | 616 | def website_notify(): | ||
2650 | 617 | """ | ||
2651 | 618 | Notify any website relations of any configuration changes. | ||
2652 | 619 | """ | ||
2653 | 620 | juju_log("Notifying all website relations of change") | ||
2654 | 621 | all_relations = relation_get_all("website") | ||
2655 | 622 | if hasattr(all_relations, "iteritems"): | ||
2656 | 623 | for relid, reldata in all_relations.iteritems(): | ||
2657 | 624 | relation_set(["time=%s" % time.time()], relation_id=relid) | ||
2661 | 625 | 605 | ||
2662 | 626 | 606 | ||
2663 | 627 | ############################################################################### | 607 | ############################################################################### |
2664 | 628 | # Hook functions | 608 | # Hook functions |
2665 | 629 | ############################################################################### | 609 | ############################################################################### |
2666 | 630 | def install_hook(): | 610 | def install_hook(): |
2667 | 631 | for f in glob.glob('exec.d/*/charm-pre-install'): | ||
2668 | 632 | if os.path.isfile(f) and os.access(f, os.X_OK): | ||
2669 | 633 | subprocess.check_call(['sh', '-c', f]) | ||
2670 | 634 | if not os.path.exists(default_haproxy_service_config_dir): | 611 | if not os.path.exists(default_haproxy_service_config_dir): |
2671 | 635 | os.mkdir(default_haproxy_service_config_dir, 0600) | 612 | os.mkdir(default_haproxy_service_config_dir, 0600) |
2673 | 636 | return ((apt_get_install("haproxy") == enable_haproxy()) is True) | 613 | |
2674 | 614 | apt_install('haproxy', fatal=True) | ||
2675 | 615 | ensure_package_status(service_affecting_packages, | ||
2676 | 616 | config_get('package_status')) | ||
2677 | 617 | enable_haproxy() | ||
2678 | 637 | 618 | ||
2679 | 638 | 619 | ||
2680 | 639 | def config_changed(): | 620 | def config_changed(): |
2681 | 640 | config_data = config_get() | 621 | config_data = config_get() |
2683 | 641 | current_service_ports = get_service_ports() | 622 | |
2684 | 623 | ensure_package_status(service_affecting_packages, | ||
2685 | 624 | config_data['package_status']) | ||
2686 | 625 | |||
2687 | 626 | old_service_ports = get_service_ports() | ||
2688 | 627 | old_stanzas = get_listen_stanzas() | ||
2689 | 642 | haproxy_globals = create_haproxy_globals() | 628 | haproxy_globals = create_haproxy_globals() |
2690 | 643 | haproxy_defaults = create_haproxy_defaults() | 629 | haproxy_defaults = create_haproxy_defaults() |
2691 | 644 | if config_data['enable_monitoring'] is True: | 630 | if config_data['enable_monitoring'] is True: |
2692 | @@ -648,105 +634,177 @@ | |||
2693 | 648 | remove_services() | 634 | remove_services() |
2694 | 649 | create_services() | 635 | create_services() |
2695 | 650 | haproxy_services = load_services() | 636 | haproxy_services = load_services() |
2696 | 637 | update_sysctl(config_data) | ||
2697 | 651 | construct_haproxy_config(haproxy_globals, | 638 | construct_haproxy_config(haproxy_globals, |
2698 | 652 | haproxy_defaults, | 639 | haproxy_defaults, |
2699 | 653 | haproxy_monitoring, | 640 | haproxy_monitoring, |
2700 | 654 | haproxy_services) | 641 | haproxy_services) |
2701 | 655 | 642 | ||
2702 | 656 | if service_haproxy("check"): | 643 | if service_haproxy("check"): |
2705 | 657 | updated_service_ports = get_service_ports() | 644 | update_service_ports(old_service_ports, get_service_ports()) |
2704 | 658 | update_service_ports(current_service_ports, updated_service_ports) | ||
2706 | 659 | service_haproxy("reload") | 645 | service_haproxy("reload") |
2707 | 646 | if not (get_listen_stanzas() == old_stanzas): | ||
2708 | 647 | notify_website() | ||
2709 | 648 | notify_peer() | ||
2710 | 649 | else: | ||
2711 | 650 | # XXX Ideally the config should be restored to a working state if the | ||
2712 | 651 | # check fails, otherwise an inadvertent reload will cause the service | ||
2713 | 652 | # to be broken. | ||
2714 | 653 | log("HAProxy configuration check failed, exiting.") | ||
2715 | 654 | sys.exit(1) | ||
2716 | 660 | 655 | ||
2717 | 661 | 656 | ||
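The new `config_changed` flow validates the generated configuration before reloading, and exits non-zero on failure so juju marks the hook as failed rather than silently leaving a broken config in place. A sketch of that check-then-reload guard, with the check, reload and log callables injected purely for illustration:

```python
import sys


def apply_config(check, reload_service, log):
    # Validate the new configuration before reloading; abort the
    # hook with a non-zero exit if the check fails.
    if check():
        reload_service()
        return True
    log("HAProxy configuration check failed, exiting.")
    sys.exit(1)
```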
2718 | 662 | def start_hook(): | 657 | def start_hook(): |
2719 | 663 | if service_haproxy("status"): | 658 | if service_haproxy("status"): |
2721 | 664 | return(service_haproxy("restart")) | 659 | return service_haproxy("restart") |
2722 | 665 | else: | 660 | else: |
2724 | 666 | return(service_haproxy("start")) | 661 | return service_haproxy("start") |
2725 | 667 | 662 | ||
2726 | 668 | 663 | ||
2727 | 669 | def stop_hook(): | 664 | def stop_hook(): |
2728 | 670 | if service_haproxy("status"): | 665 | if service_haproxy("status"): |
2730 | 671 | return(service_haproxy("stop")) | 666 | return service_haproxy("stop") |
2731 | 672 | 667 | ||
2732 | 673 | 668 | ||
2733 | 674 | def reverseproxy_interface(hook_name=None): | 669 | def reverseproxy_interface(hook_name=None): |
2734 | 675 | if hook_name is None: | 670 | if hook_name is None: |
2742 | 676 | return(None) | 671 | return None |
2743 | 677 | elif hook_name == "changed": | 672 | if hook_name in ("changed", "departed"): |
2744 | 678 | config_changed() | 673 | config_changed() |
2745 | 679 | website_notify() | 674 | |
2739 | 680 | elif hook_name=="departed": | ||
2740 | 681 | config_changed() | ||
2741 | 682 | website_notify() | ||
2746 | 683 | 675 | ||
2747 | 684 | def website_interface(hook_name=None): | 676 | def website_interface(hook_name=None): |
2748 | 685 | if hook_name is None: | 677 | if hook_name is None: |
2750 | 686 | return(None) | 678 | return None |
2751 | 679 | # Notify website relation but only for the current relation in context. | ||
2752 | 680 | notify_website(changed=hook_name == "changed", | ||
2753 | 681 | relation_ids=(relation_id(),)) | ||
2754 | 682 | |||
2755 | 683 | |||
2756 | 684 | def get_hostname(host=None): | ||
2757 | 685 | my_host = socket.gethostname() | ||
2758 | 686 | if host is None or host == "0.0.0.0": | ||
2759 | 687 | # If the listen ip has been set to 0.0.0.0 then pass back the hostname | ||
2760 | 688 | return socket.getfqdn(my_host) | ||
2761 | 689 | elif host == "localhost": | ||
2762 | 690 | # If the fqdn lookup has returned localhost (lxc setups) then return | ||
2763 | 691 | # hostname | ||
2764 | 692 | return my_host | ||
2765 | 693 | return host | ||
2766 | 694 | |||
2767 | 695 | |||
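The new `get_hostname` helper centralizes the wildcard-bind and localhost fallbacks that were previously inlined in `website_interface`. Restated as a self-contained sketch:

```python
import socket


def get_hostname(host=None):
    my_host = socket.gethostname()
    if host is None or host == "0.0.0.0":
        # Wildcard bind address: advertise the FQDN instead.
        return socket.getfqdn(my_host)
    if host == "localhost":
        # FQDN lookups that return localhost (lxc setups):
        # fall back to the bare hostname.
        return my_host
    return host
```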
2768 | 696 | def notify_relation(relation, changed=False, relation_ids=None): | ||
2769 | 697 | config_data = config_get() | ||
2770 | 698 | default_host = get_hostname() | ||
2771 | 687 | default_port = 80 | 699 | default_port = 80 |
2802 | 688 | relation_data = relation_get() | 700 | |
2803 | 689 | 701 | for rid in relation_ids or get_relation_ids(relation): | |
2804 | 690 | # If a specific service has been asked for then return the ip:port for | 702 | service_names = set() |
2805 | 691 | # that service, else pass back the default | 703 | if rid is None: |
2806 | 692 | if 'service_name' in relation_data: | 704 | rid = relation_id() |
2807 | 693 | service_name = relation_data['service_name'] | 705 | for relation_data in relations_for_id(rid): |
2808 | 694 | requestedservice = get_config_service(service_name) | 706 | if 'service_name' in relation_data: |
2809 | 695 | my_host = requestedservice['service_host'] | 707 | service_names.add(relation_data['service_name']) |
2810 | 696 | my_port = requestedservice['service_port'] | 708 | |
2811 | 697 | else: | 709 | if changed: |
2812 | 698 | my_host = socket.getfqdn(socket.gethostname()) | 710 | if 'is-proxy' in relation_data: |
2813 | 699 | my_port = default_port | 711 | remote_service = ("%s__%d" % (relation_data['hostname'], |
2814 | 700 | 712 | relation_data['port'])) | |
2815 | 701 | # If the listen ip has been set to 0.0.0.0 then pass back the hostname | 713 | open("%s/%s.is.proxy" % ( |
2816 | 702 | if my_host == "0.0.0.0": | 714 | default_haproxy_service_config_dir, |
2817 | 703 | my_host = socket.getfqdn(socket.gethostname()) | 715 | remote_service), 'a').close() |
2818 | 704 | 716 | ||
2819 | 705 | # If the fqdn lookup has returned localhost (lxc setups) then return | 717 | service_name = None |
2820 | 706 | # hostname | 718 | if len(service_names) == 1: |
2821 | 707 | if my_host == "localhost": | 719 | service_name = service_names.pop() |
2822 | 708 | my_host = socket.gethostname() | 720 | elif len(service_names) > 1: |
2823 | 709 | subprocess.call( | 721 | log("Remote units requested more than a single service name. " |
2824 | 710 | ['relation-set', 'port=%d' % my_port, 'hostname=%s' % my_host, | 722 | "Falling back to default host/port.") |
2825 | 711 | 'all_services=%s' % yaml.safe_dump(get_all_services())]) | 723 | |
2826 | 712 | if hook_name == "changed": | 724 | if service_name is not None: |
2827 | 713 | if 'is-proxy' in relation_data: | 725 | # If a specific service has been asked for then return the ip:port |
2828 | 714 | service_name = "%s__%d" % \ | 726 | # for that service, else pass back the default |
2829 | 715 | (relation_data['hostname'], relation_data['port']) | 727 | requestedservice = get_config_service(service_name) |
2830 | 716 | open("%s/%s.is.proxy" % | 728 | my_host = get_hostname(requestedservice['service_host']) |
2831 | 717 | (default_haproxy_service_config_dir, service_name), 'a').close() | 729 | my_port = requestedservice['service_port'] |
2832 | 730 | else: | ||
2833 | 731 | my_host = default_host | ||
2834 | 732 | my_port = default_port | ||
2835 | 733 | |||
2836 | 734 | relation_set(relation_id=rid, port=str(my_port), | ||
2837 | 735 | hostname=my_host, | ||
2838 | 736 | all_services=config_data['services']) | ||
2839 | 737 | |||
2840 | 738 | |||
2841 | 739 | def notify_website(changed=False, relation_ids=None): | ||
2842 | 740 | notify_relation("website", changed=changed, relation_ids=relation_ids) | ||
2843 | 741 | |||
2844 | 742 | |||
2845 | 743 | def notify_peer(changed=False, relation_ids=None): | ||
2846 | 744 | notify_relation("peer", changed=changed, relation_ids=relation_ids) | ||
2847 | 745 | |||
2848 | 746 | |||
2849 | 747 | def install_nrpe_scripts(): | ||
2850 | 748 | scripts_src = os.path.join(os.environ["CHARM_DIR"], "files", | ||
2851 | 749 | "nrpe") | ||
2852 | 750 | scripts_dst = "/usr/lib/nagios/plugins" | ||
2853 | 751 | if not os.path.exists(scripts_dst): | ||
2854 | 752 | os.makedirs(scripts_dst) | ||
2855 | 753 | for fname in glob.glob(os.path.join(scripts_src, "*.sh")): | ||
2856 | 754 | shutil.copy2(fname, | ||
2857 | 755 | os.path.join(scripts_dst, os.path.basename(fname))) | ||
2858 | 756 | |||
2859 | 718 | 757 | ||
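`install_nrpe_scripts` relies on `shutil.copy2` so the copied Nagios plugin scripts keep their metadata, including the executable bit. A standalone sketch (the function name and default destination mirror the diff but are assumptions here):

```python
import glob
import os
import shutil


def install_plugin_scripts(src_dir, dst_dir="/usr/lib/nagios/plugins"):
    # Copy every *.sh plugin into the Nagios plugin directory;
    # shutil.copy2 preserves permissions, so executable scripts
    # stay executable.
    if not os.path.exists(dst_dir):
        os.makedirs(dst_dir)
    for fname in glob.glob(os.path.join(src_dir, "*.sh")):
        shutil.copy2(fname, os.path.join(dst_dir, os.path.basename(fname)))
```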
2860 | 719 | def update_nrpe_config(): | 758 | def update_nrpe_config(): |
2861 | 759 | install_nrpe_scripts() | ||
2862 | 720 | nrpe_compat = nrpe.NRPE() | 760 | nrpe_compat = nrpe.NRPE() |
2865 | 721 | nrpe_compat.add_check('haproxy','Check HAProxy', 'check_haproxy.sh') | 761 | nrpe_compat.add_check('haproxy', 'Check HAProxy', 'check_haproxy.sh') |
2866 | 722 | nrpe_compat.add_check('haproxy_queue','Check HAProxy queue depth', 'check_haproxy_queue_depth.sh') | 762 | nrpe_compat.add_check('haproxy_queue', 'Check HAProxy queue depth', |
2867 | 763 | 'check_haproxy_queue_depth.sh') | ||
2868 | 723 | nrpe_compat.write() | 764 | nrpe_compat.write() |
2869 | 724 | 765 | ||
2870 | 766 | |||
2871 | 725 | ############################################################################### | 767 | ############################################################################### |
2872 | 726 | # Main section | 768 | # Main section |
2873 | 727 | ############################################################################### | 769 | ############################################################################### |
2876 | 728 | if __name__ == "__main__": | 770 | |
2877 | 729 | if HOOK_NAME == "install": | 771 | |
2878 | 772 | def main(hook_name): | ||
2879 | 773 | if hook_name == "install": | ||
2880 | 730 | install_hook() | 774 | install_hook() |
2882 | 731 | elif HOOK_NAME == "config-changed": | 775 | elif hook_name in ("config-changed", "upgrade-charm"): |
2883 | 732 | config_changed() | 776 | config_changed() |
2884 | 733 | update_nrpe_config() | 777 | update_nrpe_config() |
2886 | 734 | elif HOOK_NAME == "start": | 778 | elif hook_name == "start": |
2887 | 735 | start_hook() | 779 | start_hook() |
2889 | 736 | elif HOOK_NAME == "stop": | 780 | elif hook_name == "stop": |
2890 | 737 | stop_hook() | 781 | stop_hook() |
2892 | 738 | elif HOOK_NAME == "reverseproxy-relation-broken": | 782 | elif hook_name == "reverseproxy-relation-broken": |
2893 | 739 | config_changed() | 783 | config_changed() |
2895 | 740 | elif HOOK_NAME == "reverseproxy-relation-changed": | 784 | elif hook_name == "reverseproxy-relation-changed": |
2896 | 741 | reverseproxy_interface("changed") | 785 | reverseproxy_interface("changed") |
2898 | 742 | elif HOOK_NAME == "reverseproxy-relation-departed": | 786 | elif hook_name == "reverseproxy-relation-departed": |
2899 | 743 | reverseproxy_interface("departed") | 787 | reverseproxy_interface("departed") |
2901 | 744 | elif HOOK_NAME == "website-relation-joined": | 788 | elif hook_name == "website-relation-joined": |
2902 | 745 | website_interface("joined") | 789 | website_interface("joined") |
2904 | 746 | elif HOOK_NAME == "website-relation-changed": | 790 | elif hook_name == "website-relation-changed": |
2905 | 747 | website_interface("changed") | 791 | website_interface("changed") |
2907 | 748 | elif HOOK_NAME == "nrpe-external-master-relation-changed": | 792 | elif hook_name == "peer-relation-joined": |
2908 | 793 | website_interface("joined") | ||
2909 | 794 | elif hook_name == "peer-relation-changed": | ||
2910 | 795 | reverseproxy_interface("changed") | ||
2911 | 796 | elif hook_name in ("nrpe-external-master-relation-joined", | ||
2912 | 797 | "local-monitors-relation-joined"): | ||
2913 | 749 | update_nrpe_config() | 798 | update_nrpe_config() |
2914 | 750 | else: | 799 | else: |
2915 | 751 | print "Unknown hook" | 800 | print "Unknown hook" |
2916 | 752 | sys.exit(1) | 801 | sys.exit(1) |
2917 | 802 | |||
2918 | 803 | if __name__ == "__main__": | ||
2919 | 804 | hook_name = os.path.basename(sys.argv[0]) | ||
2920 | 805 | # Also support being invoked directly with hook as argument name. | ||
2921 | 806 | if hook_name == "hooks.py": | ||
2922 | 807 | if len(sys.argv) < 2: | ||
2923 | 808 | sys.exit("Missing required hook name argument.") | ||
2924 | 809 | hook_name = sys.argv[1] | ||
2925 | 810 | main(hook_name) | ||
2926 | 753 | 811 | ||
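The new `__main__` block derives the hook name from the symlink the hook was invoked through (e.g. `hooks/config-changed` -> `hooks.py`), falling back to an explicit argument when run directly. A testable sketch of that name resolution:

```python
import os
import sys


def resolve_hook_name(argv):
    # Juju invokes each hook via a symlink named after the hook;
    # os.path.basename recovers that name. When run as hooks.py
    # directly, take the hook name from the first argument instead.
    name = os.path.basename(argv[0])
    if name == "hooks.py":
        if len(argv) < 2:
            raise SystemExit("Missing required hook name argument.")
        name = argv[1]
    return name
```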
2927 | === modified symlink 'hooks/install' (properties changed: -x to +x) | |||
2928 | === target was u'./hooks.py' | |||
2929 | --- hooks/install 1970-01-01 00:00:00 +0000 | |||
2930 | +++ hooks/install 2013-10-10 22:34:35 +0000 | |||
2931 | @@ -0,0 +1,13 @@ | |||
2932 | 1 | #!/bin/sh | ||
2933 | 2 | |||
2934 | 3 | set -eu | ||
2935 | 4 | |||
2936 | 5 | apt_get_install() { | ||
2937 | 6 | DEBIAN_FRONTEND=noninteractive apt-get -y -qq -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install $@ | ||
2938 | 7 | } | ||
2939 | 8 | |||
2940 | 9 | juju-log 'Invoking charm-pre-install hooks' | ||
2941 | 10 | [ -d exec.d ] && ( for f in exec.d/*/charm-pre-install; do [ -x $f ] && /bin/sh -c "$f"; done ) | ||
2942 | 11 | |||
2943 | 12 | juju-log 'Invoking python-based install hook' | ||
2944 | 13 | python hooks/hooks.py install | ||
2945 | 0 | 14 | ||
2946 | === added symlink 'hooks/local-monitors-relation-joined' | |||
2947 | === target is u'./hooks.py' | |||
2948 | === renamed symlink 'hooks/nrpe-external-master-relation-changed' => 'hooks/nrpe-external-master-relation-joined' | |||
2949 | === removed file 'hooks/nrpe.py' | |||
2950 | --- hooks/nrpe.py 2012-12-21 11:08:58 +0000 | |||
2951 | +++ hooks/nrpe.py 1970-01-01 00:00:00 +0000 | |||
2952 | @@ -1,170 +0,0 @@ | |||
2953 | 1 | import json | ||
2954 | 2 | import subprocess | ||
2955 | 3 | import pwd | ||
2956 | 4 | import grp | ||
2957 | 5 | import os | ||
2958 | 6 | import re | ||
2959 | 7 | import shlex | ||
2960 | 8 | |||
2961 | 9 | # This module adds compatibility with the nrpe_external_master | ||
2962 | 10 | # subordinate charm. To use it in your charm: | ||
2963 | 11 | # | ||
2964 | 12 | # 1. Update metadata.yaml | ||
2965 | 13 | # | ||
2966 | 14 | # provides: | ||
2967 | 15 | # (...) | ||
2968 | 16 | # nrpe-external-master: | ||
2969 | 17 | # interface: nrpe-external-master | ||
2970 | 18 | # scope: container | ||
2971 | 19 | # | ||
2972 | 20 | # 2. Add the following to config.yaml | ||
2973 | 21 | # | ||
2974 | 22 | # nagios_context: | ||
2975 | 23 | # default: "juju" | ||
2976 | 24 | # type: string | ||
2977 | 25 | # description: | | ||
2978 | 26 | # Used by the nrpe-external-master subordinate charm. | ||
2979 | 27 | # A string that will be prepended to instance name to set the host name | ||
2980 | 28 | # in nagios. So for instance the hostname would be something like: | ||
2981 | 29 | # juju-myservice-0 | ||
2982 | 30 | # If you're running multiple environments with the same services in them | ||
2983 | 31 | # this allows you to differentiate between them. | ||
2984 | 32 | # | ||
2985 | 33 | # 3. Add custom checks (Nagios plugins) to files/nrpe-external-master | ||
2986 | 34 | # | ||
2987 | 35 | # 4. Update your hooks.py with something like this: | ||
2988 | 36 | # | ||
2989 | 37 | # import nrpe | ||
2990 | 38 | # (...) | ||
2991 | 39 | # def update_nrpe_config(): | ||
2992 | 40 | # nrpe_compat = NRPE("myservice") | ||
2993 | 41 | # nrpe_compat.add_check( | ||
2994 | 42 | # shortname = "myservice", | ||
2995 | 43 | # description = "Check MyService", | ||
2996 | 44 | # check_cmd = "check_http -w 2 -c 10 http://localhost" | ||
2997 | 45 | # ) | ||
2998 | 46 | # nrpe_compat.add_check( | ||
2999 | 47 | # "myservice_other", | ||
3000 | 48 | # "Check for widget failures", | ||
3001 | 49 | # check_cmd = "/srv/myapp/scripts/widget_check" | ||
3002 | 50 | # ) | ||
3003 | 51 | # nrpe_compat.write() | ||
3004 | 52 | # | ||
3005 | 53 | # def config_changed(): | ||
3006 | 54 | # (...) | ||
3007 | 55 | # update_nrpe_config() | ||
3008 | 56 | |||
3009 | 57 | class ConfigurationError(Exception): | ||
3010 | 58 | '''An error interacting with the Juju config''' | ||
3011 | 59 | pass | ||
3012 | 60 | def config_get(scope=None): | ||
3013 | 61 | '''Return the Juju config as a dictionary''' | ||
3014 | 62 | try: | ||
3015 | 63 | config_cmd_line = ['config-get'] | ||
3016 | 64 | if scope is not None: | ||
3017 | 65 | config_cmd_line.append(scope) | ||
3018 | 66 | config_cmd_line.append('--format=json') | ||
3019 | 67 | return json.loads(subprocess.check_output(config_cmd_line)) | ||
3020 | 68 | except (ValueError, OSError, subprocess.CalledProcessError) as error: | ||
3021 | 69 | subprocess.call(['juju-log', str(error)]) | ||
3022 | 70 | raise ConfigurationError(str(error)) | ||
3023 | 71 | |||
3024 | 72 | class CheckException(Exception): pass | ||
3025 | 73 | class Check(object): | ||
3026 | 74 | shortname_re = '[A-Za-z0-9-_]*' | ||
3027 | 75 | service_template = """ | ||
3028 | 76 | #--------------------------------------------------- | ||
3029 | 77 | # This file is Juju managed | ||
3030 | 78 | #--------------------------------------------------- | ||
3031 | 79 | define service {{ | ||
3032 | 80 | use active-service | ||
3033 | 81 | host_name {nagios_hostname} | ||
3034 | 82 | service_description {nagios_hostname} {shortname} {description} | ||
3035 | 83 | check_command check_nrpe!check_{shortname} | ||
3036 | 84 | servicegroups {nagios_servicegroup} | ||
3037 | 85 | }} | ||
3038 | 86 | """ | ||
3039 | 87 | def __init__(self, shortname, description, check_cmd): | ||
3040 | 88 | super(Check, self).__init__() | ||
3041 | 89 | # XXX: could be better to calculate this from the service name | ||
3042 | 90 | if not re.match(self.shortname_re, shortname): | ||
3043 | 91 | raise CheckException("shortname must match {}".format(Check.shortname_re)) | ||
3044 | 92 | self.shortname = shortname | ||
3045 | 93 | self.description = description | ||
3046 | 94 | self.check_cmd = self._locate_cmd(check_cmd) | ||
3047 | 95 | |||
3048 | 96 | def _locate_cmd(self, check_cmd): | ||
3049 | 97 | search_path = ( | ||
3050 | 98 | '/', | ||
3051 | 99 | os.path.join(os.environ['CHARM_DIR'], 'files/nrpe-external-master'), | ||
3052 | 100 | '/usr/lib/nagios/plugins', | ||
3053 | 101 | ) | ||
3054 | 102 | command = shlex.split(check_cmd) | ||
3055 | 103 | for path in search_path: | ||
3056 | 104 | if os.path.exists(os.path.join(path,command[0])): | ||
3057 | 105 | return os.path.join(path, command[0]) + " " + " ".join(command[1:]) | ||
3058 | 106 | subprocess.call(['juju-log', 'Check command not found: {}'.format(command[0])]) | ||
3059 | 107 | return '' | ||
3060 | 108 | |||
3061 | 109 | def write(self, nagios_context, hostname): | ||
3062 | 110 | for f in os.listdir(NRPE.nagios_exportdir): | ||
3063 | 111 | if re.search('.*check_{}.cfg'.format(self.shortname), f): | ||
3064 | 112 | os.remove(os.path.join(NRPE.nagios_exportdir, f)) | ||
3065 | 113 | |||
3066 | 114 | templ_vars = { | ||
3067 | 115 | 'nagios_hostname': hostname, | ||
3068 | 116 | 'nagios_servicegroup': nagios_context, | ||
3069 | 117 | 'description': self.description, | ||
3070 | 118 | 'shortname': self.shortname, | ||
3071 | 119 | } | ||
3072 | 120 | nrpe_service_text = Check.service_template.format(**templ_vars) | ||
3073 | 121 | nrpe_service_file = '{}/service__{}_check_{}.cfg'.format( | ||
3074 | 122 | NRPE.nagios_exportdir, hostname, self.shortname) | ||
3075 | 123 | with open(nrpe_service_file, 'w') as nrpe_service_config: | ||
3076 | 124 | nrpe_service_config.write(str(nrpe_service_text)) | ||
3077 | 125 | |||
3078 | 126 | nrpe_check_file = '/etc/nagios/nrpe.d/check_{}.cfg'.format(self.shortname) | ||
3079 | 127 | with open(nrpe_check_file, 'w') as nrpe_check_config: | ||
3080 | 128 | nrpe_check_config.write("# check {}\n".format(self.shortname)) | ||
3081 | 129 | nrpe_check_config.write("command[check_{}]={}\n".format( | ||
3082 | 130 | self.shortname, self.check_cmd)) | ||
3083 | 131 | |||
3084 | 132 | def run(self): | ||
3085 | 133 | subprocess.call(self.check_cmd) | ||
3086 | 134 | |||
3087 | 135 | class NRPE(object): | ||
3088 | 136 | nagios_logdir = '/var/log/nagios' | ||
3089 | 137 | nagios_exportdir = '/var/lib/nagios/export' | ||
3090 | 138 | nrpe_confdir = '/etc/nagios/nrpe.d' | ||
3091 | 139 | def __init__(self): | ||
3092 | 140 | super(NRPE, self).__init__() | ||
3093 | 141 | self.config = config_get() | ||
3094 | 142 | self.nagios_context = self.config['nagios_context'] | ||
3095 | 143 | self.unit_name = os.environ['JUJU_UNIT_NAME'].replace('/', '-') | ||
3096 | 144 | self.hostname = "{}-{}".format(self.nagios_context, self.unit_name) | ||
3097 | 145 | self.checks = [] | ||
3098 | 146 | |||
3099 | 147 | def add_check(self, *args, **kwargs): | ||
3100 | 148 | self.checks.append( Check(*args, **kwargs) ) | ||
3101 | 149 | |||
3102 | 150 | def write(self): | ||
3103 | 151 | try: | ||
3104 | 152 | nagios_uid = pwd.getpwnam('nagios').pw_uid | ||
3105 | 153 | nagios_gid = grp.getgrnam('nagios').gr_gid | ||
3106 | 154 | except: | ||
3107 | 155 | subprocess.call(['juju-log', "Nagios user not set up, nrpe checks not updated"]) | ||
3108 | 156 | return | ||
3109 | 157 | |||
3110 | 158 | if not os.path.exists(NRPE.nagios_exportdir): | ||
3111 | 159 | subprocess.call(['juju-log', 'Exiting as {} is not accessible'.format(NRPE.nagios_exportdir)]) | ||
3112 | 160 | return | ||
3113 | 161 | |||
3114 | 162 | if not os.path.exists(NRPE.nagios_logdir): | ||
3115 | 163 | os.mkdir(NRPE.nagios_logdir) | ||
3116 | 164 | os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid) | ||
3117 | 165 | |||
3118 | 166 | for nrpecheck in self.checks: | ||
3119 | 167 | nrpecheck.write(self.nagios_context, self.hostname) | ||
3120 | 168 | |||
3121 | 169 | if os.path.isfile('/etc/init.d/nagios-nrpe-server'): | ||
3122 | 170 | subprocess.call(['service', 'nagios-nrpe-server', 'reload']) | ||
3123 | 171 | 0 | ||
3124 | === added symlink 'hooks/peer-relation-changed' | |||
3125 | === target is u'./hooks.py' | |||
3126 | === added symlink 'hooks/peer-relation-joined' | |||
3127 | === target is u'./hooks.py' | |||
3128 | === removed file 'hooks/test_hooks.py' | |||
3129 | --- hooks/test_hooks.py 2013-02-14 21:35:47 +0000 | |||
3130 | +++ hooks/test_hooks.py 1970-01-01 00:00:00 +0000 | |||
3131 | @@ -1,263 +0,0 @@ | |||
3132 | 1 | import hooks | ||
3133 | 2 | import yaml | ||
3134 | 3 | from textwrap import dedent | ||
3135 | 4 | from mocker import MockerTestCase, ARGS | ||
3136 | 5 | |||
3137 | 6 | class JujuHookTest(MockerTestCase): | ||
3138 | 7 | |||
3139 | 8 | def setUp(self): | ||
3140 | 9 | self.config_services = [{ | ||
3141 | 10 | "service_name": "haproxy_test", | ||
3142 | 11 | "service_host": "0.0.0.0", | ||
3143 | 12 | "service_port": "88", | ||
3144 | 13 | "service_options": ["balance leastconn"], | ||
3145 | 14 | "server_options": "maxconn 25"}] | ||
3146 | 15 | self.config_services_extended = [ | ||
3147 | 16 | {"service_name": "unit_service", | ||
3148 | 17 | "service_host": "supplied-hostname", | ||
3149 | 18 | "service_port": "999", | ||
3150 | 19 | "service_options": ["balance leastconn"], | ||
3151 | 20 | "server_options": "maxconn 99"}] | ||
3152 | 21 | self.relation_services = [ | ||
3153 | 22 | {"service_name": "foo_svc", | ||
3154 | 23 | "service_options": ["balance leastconn"], | ||
3155 | 24 | "servers": [("A", "hA", "1", "oA1 oA2")]}, | ||
3156 | 25 | {"service_name": "bar_svc", | ||
3157 | 26 | "service_options": ["balance leastconn"], | ||
3158 | 27 | "servers": [ | ||
3159 | 28 | ("A", "hA", "1", "oA1 oA2"), ("B", "hB", "2", "oB1 oB2")]}] | ||
3160 | 29 | self.relation_services2 = [ | ||
3161 | 30 | {"service_name": "foo_svc", | ||
3162 | 31 | "service_options": ["balance leastconn"], | ||
3163 | 32 | "servers": [("A2", "hA2", "12", "oA12 oA22")]}] | ||
3164 | 33 | hooks.default_haproxy_config_dir = self.makeDir() | ||
3165 | 34 | hooks.default_haproxy_config = self.makeFile() | ||
3166 | 35 | hooks.default_haproxy_service_config_dir = self.makeDir() | ||
3167 | 36 | obj = self.mocker.replace("hooks.juju_log") | ||
3168 | 37 | obj(ARGS) | ||
3169 | 38 | self.mocker.count(0, None) | ||
3170 | 39 | obj = self.mocker.replace("hooks.unit_get") | ||
3171 | 40 | obj("public-address") | ||
3172 | 41 | self.mocker.result("test-host.example.com") | ||
3173 | 42 | self.mocker.count(0, None) | ||
3174 | 43 | self.maxDiff = None | ||
3175 | 44 | |||
3176 | 45 | def _expect_config_get(self, **kwargs): | ||
3177 | 46 | result = { | ||
3178 | 47 | "default_timeouts": "queue 1000, connect 1000, client 1000, server 1000", | ||
3179 | 48 | "global_log": "127.0.0.1 local0, 127.0.0.1 local1 notice", | ||
3180 | 49 | "global_spread_checks": 0, | ||
3181 | 50 | "monitoring_allowed_cidr": "127.0.0.1/32", | ||
3182 | 51 | "monitoring_username": "haproxy", | ||
3183 | 52 | "default_log": "global", | ||
3184 | 53 | "global_group": "haproxy", | ||
3185 | 54 | "monitoring_stats_refresh": 3, | ||
3186 | 55 | "default_retries": 3, | ||
3187 | 56 | "services": yaml.dump(self.config_services), | ||
3188 | 57 | "global_maxconn": 4096, | ||
3189 | 58 | "global_user": "haproxy", | ||
3190 | 59 | "default_options": "httplog, dontlognull", | ||
3191 | 60 | "monitoring_port": 10000, | ||
3192 | 61 | "global_debug": False, | ||
3193 | 62 | "nagios_context": "juju", | ||
3194 | 63 | "global_quiet": False, | ||
3195 | 64 | "enable_monitoring": False, | ||
3196 | 65 | "monitoring_password": "changeme", | ||
3197 | 66 | "default_mode": "http"} | ||
3198 | 67 | obj = self.mocker.replace("hooks.config_get") | ||
3199 | 68 | obj() | ||
3200 | 69 | result.update(kwargs) | ||
3201 | 70 | self.mocker.result(result) | ||
3202 | 71 | self.mocker.count(1, None) | ||
3203 | 72 | |||
3204 | 73 | def _expect_relation_get_all(self, relation, extra={}): | ||
3205 | 74 | obj = self.mocker.replace("hooks.relation_get_all") | ||
3206 | 75 | obj(relation) | ||
3207 | 76 | relation = {"hostname": "10.0.1.2", | ||
3208 | 77 | "private-address": "10.0.1.2", | ||
3209 | 78 | "port": "10000"} | ||
3210 | 79 | relation.update(extra) | ||
3211 | 80 | result = {"1": {"unit/0": relation}} | ||
3212 | 81 | self.mocker.result(result) | ||
3213 | 82 | self.mocker.count(1, None) | ||
3214 | 83 | |||
3215 | 84 | def _expect_relation_get_all_multiple(self, relation_name): | ||
3216 | 85 | obj = self.mocker.replace("hooks.relation_get_all") | ||
3217 | 86 | obj(relation_name) | ||
3218 | 87 | result = { | ||
3219 | 88 | "1": {"unit/0": { | ||
3220 | 89 | "hostname": "10.0.1.2", | ||
3221 | 90 | "private-address": "10.0.1.2", | ||
3222 | 91 | "port": "10000", | ||
3223 | 92 | "services": yaml.dump(self.relation_services)}}, | ||
3224 | 93 | "2": {"unit/1": { | ||
3225 | 94 | "hostname": "10.0.1.3", | ||
3226 | 95 | "private-address": "10.0.1.3", | ||
3227 | 96 | "port": "10001", | ||
3228 | 97 | "services": yaml.dump(self.relation_services2)}}} | ||
3229 | 98 | self.mocker.result(result) | ||
3230 | 99 | self.mocker.count(1, None) | ||
3231 | 100 | |||
3232 | 101 | def _expect_relation_get_all_with_services(self, relation, extra=None): | ||
3233 | 102 | extra = dict(extra or {}, services=yaml.dump(self.relation_services)) | ||
3234 | 103 | return self._expect_relation_get_all(relation, extra) | ||
3235 | 104 | |||
3236 | 105 | def _expect_relation_get(self): | ||
3237 | 106 | obj = self.mocker.replace("hooks.relation_get") | ||
3238 | 107 | obj() | ||
3239 | 108 | result = {} | ||
3240 | 109 | self.mocker.result(result) | ||
3241 | 110 | self.mocker.count(1, None) | ||
3242 | 111 | |||
3243 | 112 | def test_create_services(self): | ||
3244 | 113 | """ | ||
3245 | 114 | Simplest use case: a config stanza seeded in the config file, with a | ||
3246 | 115 | server line added through a simple relation. Many servers can join | ||
3247 | 116 | this, but multiple services will not be presented to the outside. | ||
3248 | 117 | """ | ||
3249 | 118 | self._expect_config_get() | ||
3250 | 119 | self._expect_relation_get_all("reverseproxy") | ||
3251 | 120 | self.mocker.replay() | ||
3252 | 121 | hooks.create_services() | ||
3253 | 122 | services = hooks.load_services() | ||
3254 | 123 | stanza = """\ | ||
3255 | 124 | listen haproxy_test 0.0.0.0:88 | ||
3256 | 125 | balance leastconn | ||
3257 | 126 | server 10_0_1_2__10000 10.0.1.2:10000 maxconn 25 | ||
3258 | 127 | |||
3259 | 128 | """ | ||
3260 | 129 | self.assertEquals(services, dedent(stanza)) | ||
3261 | 130 | |||
3262 | 131 | def test_create_services_extended_with_relation(self): | ||
3263 | 132 | """ | ||
3264 | 133 | This case covers specifying an up-front services file to haproxy | ||
3265 | 134 | in the config. The relation then specifies a single hostname, | ||
3266 | 135 | port and server_options setting, which is filled into the appropriate | ||
3267 | 136 | haproxy stanza based on multiple criteria. | ||
3268 | 137 | """ | ||
3269 | 138 | self._expect_config_get( | ||
3270 | 139 | services=yaml.dump(self.config_services_extended)) | ||
3271 | 140 | self._expect_relation_get_all("reverseproxy") | ||
3272 | 141 | self.mocker.replay() | ||
3273 | 142 | hooks.create_services() | ||
3274 | 143 | services = hooks.load_services() | ||
3275 | 144 | stanza = """\ | ||
3276 | 145 | listen unit_service supplied-hostname:999 | ||
3277 | 146 | balance leastconn | ||
3278 | 147 | server 10_0_1_2__10000 10.0.1.2:10000 maxconn 99 | ||
3279 | 148 | |||
3280 | 149 | """ | ||
3281 | 150 | self.assertEquals(dedent(stanza), services) | ||
3282 | 151 | |||
3283 | 152 | def test_create_services_pure_relation(self): | ||
3284 | 153 | """ | ||
3285 | 154 | In this case, the relation is in control of the haproxy config file. | ||
3286 | 155 | Each relation chooses which servers it creates in the haproxy file, | ||
3287 | 156 | relying on the haproxy service only for the hostname and front-end | ||
3288 | 157 | port. Each member of the relation puts a backend server entry under | ||
3289 | 158 | the desired stanza. Each relation can in fact supply multiple | ||
3290 | 159 | entries from the same juju service unit if desired. | ||
3291 | 160 | """ | ||
3292 | 161 | self._expect_config_get() | ||
3293 | 162 | self._expect_relation_get_all_with_services("reverseproxy") | ||
3294 | 163 | self.mocker.replay() | ||
3295 | 164 | hooks.create_services() | ||
3296 | 165 | services = hooks.load_services() | ||
3297 | 166 | stanza = """\ | ||
3298 | 167 | listen foo_svc 0.0.0.0:88 | ||
3299 | 168 | balance leastconn | ||
3300 | 169 | server A hA:1 oA1 oA2 | ||
3301 | 170 | """ | ||
3302 | 171 | self.assertIn(dedent(stanza), services) | ||
3303 | 172 | stanza = """\ | ||
3304 | 173 | listen bar_svc 0.0.0.0:89 | ||
3305 | 174 | balance leastconn | ||
3306 | 175 | server A hA:1 oA1 oA2 | ||
3307 | 176 | server B hB:2 oB1 oB2 | ||
3308 | 177 | """ | ||
3309 | 178 | self.assertIn(dedent(stanza), services) | ||
3310 | 179 | |||
3311 | 180 | def test_create_services_pure_relation_multiple(self): | ||
3312 | 181 | """ | ||
3313 | 182 | This is much like the pure_relation case, where the relation specifies | ||
3314 | 183 | a "services" override. However, in this case we have multiple relations | ||
3315 | 184 | that partially override each other. We expect that the created haproxy | ||
3316 | 185 | conf file will combine things appropriately. | ||
3317 | 186 | """ | ||
3318 | 187 | self._expect_config_get() | ||
3319 | 188 | self._expect_relation_get_all_multiple("reverseproxy") | ||
3320 | 189 | self.mocker.replay() | ||
3321 | 190 | hooks.create_services() | ||
3322 | 191 | result = hooks.load_services() | ||
3323 | 192 | stanza = """\ | ||
3324 | 193 | listen foo_svc 0.0.0.0:88 | ||
3325 | 194 | balance leastconn | ||
3326 | 195 | server A hA:1 oA1 oA2 | ||
3327 | 196 | server A2 hA2:12 oA12 oA22 | ||
3328 | 197 | """ | ||
3329 | 198 | self.assertIn(dedent(stanza), result) | ||
3330 | 199 | stanza = """\ | ||
3331 | 200 | listen bar_svc 0.0.0.0:89 | ||
3332 | 201 | balance leastconn | ||
3333 | 202 | server A hA:1 oA1 oA2 | ||
3334 | 203 | server B hB:2 oB1 oB2 | ||
3335 | 204 | """ | ||
3336 | 205 | self.assertIn(dedent(stanza), result) | ||
3337 | 206 | |||
3338 | 207 | def test_get_config_services_config_only(self): | ||
3339 | 208 | """ | ||
3340 | 209 | Attempting to catch the case where a relation is not joined yet | ||
3341 | 210 | """ | ||
3342 | 211 | self._expect_config_get() | ||
3343 | 212 | obj = self.mocker.replace("hooks.relation_get_all") | ||
3344 | 213 | obj("reverseproxy") | ||
3345 | 214 | self.mocker.result(None) | ||
3346 | 215 | self.mocker.replay() | ||
3347 | 216 | result = hooks.get_config_services() | ||
3348 | 217 | self.assertEquals(result, self.config_services) | ||
3349 | 218 | |||
3350 | 219 | def test_get_config_services_relation_no_services(self): | ||
3351 | 220 | """ | ||
3352 | 221 | If the config specifies services and the relation does not, just the | ||
3353 | 222 | config services should come through. | ||
3354 | 223 | """ | ||
3355 | 224 | self._expect_config_get() | ||
3356 | 225 | self._expect_relation_get_all("reverseproxy") | ||
3357 | 226 | self.mocker.replay() | ||
3358 | 227 | result = hooks.get_config_services() | ||
3359 | 228 | self.assertEquals(result, self.config_services) | ||
3360 | 229 | |||
3361 | 230 | def test_get_config_services_relation_with_services(self): | ||
3362 | 231 | """ | ||
3363 | 232 | Testing with both the config and relation providing services should | ||
3364 | 233 | yield just the relation's services. | ||
3365 | 234 | """ | ||
3366 | 235 | self._expect_config_get() | ||
3367 | 236 | self._expect_relation_get_all_with_services("reverseproxy") | ||
3368 | 237 | self.mocker.replay() | ||
3369 | 238 | result = hooks.get_config_services() | ||
3370 | 239 | # Just test "servers" since hostname and port and maybe other keys | ||
3371 | 240 | # will be added by the hook | ||
3372 | 241 | self.assertEquals(result[0]["servers"], | ||
3373 | 242 | self.relation_services[0]["servers"]) | ||
3374 | 243 | |||
3375 | 244 | def test_config_generation_idempotent(self): | ||
3376 | 245 | self._expect_config_get() | ||
3377 | 246 | self._expect_relation_get_all_multiple("reverseproxy") | ||
3378 | 247 | self.mocker.replay() | ||
3379 | 248 | |||
3380 | 249 | # Test that we generate the same haproxy.conf file each time | ||
3381 | 250 | hooks.create_services() | ||
3382 | 251 | result1 = hooks.load_services() | ||
3383 | 252 | hooks.create_services() | ||
3384 | 253 | result2 = hooks.load_services() | ||
3385 | 254 | self.assertEqual(result1, result2) | ||
3386 | 255 | |||
3387 | 256 | def test_get_all_services(self): | ||
3388 | 257 | self._expect_config_get() | ||
3389 | 258 | self._expect_relation_get_all_multiple("reverseproxy") | ||
3390 | 259 | self.mocker.replay() | ||
3391 | 260 | baseline = [{"service_name": "foo_svc", "service_port": 88}, | ||
3392 | 261 | {"service_name": "bar_svc", "service_port": 89}] | ||
3393 | 262 | services = hooks.get_all_services() | ||
3394 | 263 | self.assertEqual(baseline, services) | ||
3396 | === added directory 'hooks/tests' | |||
3397 | === added file 'hooks/tests/__init__.py' | |||
3398 | === added file 'hooks/tests/test_config_changed_hooks.py' | |||
3399 | --- hooks/tests/test_config_changed_hooks.py 1970-01-01 00:00:00 +0000 | |||
3400 | +++ hooks/tests/test_config_changed_hooks.py 2013-10-10 22:34:35 +0000 | |||
3401 | @@ -0,0 +1,120 @@ | |||
3402 | 1 | import sys | ||
3403 | 2 | |||
3404 | 3 | from testtools import TestCase | ||
3405 | 4 | from mock import patch | ||
3406 | 5 | |||
3407 | 6 | import hooks | ||
3408 | 7 | from utils_for_tests import patch_open | ||
3409 | 8 | |||
3410 | 9 | |||
3411 | 10 | class ConfigChangedTest(TestCase): | ||
3412 | 11 | |||
3413 | 12 | def setUp(self): | ||
3414 | 13 | super(ConfigChangedTest, self).setUp() | ||
3415 | 14 | self.config_get = self.patch_hook("config_get") | ||
3416 | 15 | self.get_service_ports = self.patch_hook("get_service_ports") | ||
3417 | 16 | self.get_listen_stanzas = self.patch_hook("get_listen_stanzas") | ||
3418 | 17 | self.create_haproxy_globals = self.patch_hook( | ||
3419 | 18 | "create_haproxy_globals") | ||
3420 | 19 | self.create_haproxy_defaults = self.patch_hook( | ||
3421 | 20 | "create_haproxy_defaults") | ||
3422 | 21 | self.remove_services = self.patch_hook("remove_services") | ||
3423 | 22 | self.create_services = self.patch_hook("create_services") | ||
3424 | 23 | self.load_services = self.patch_hook("load_services") | ||
3425 | 24 | self.construct_haproxy_config = self.patch_hook( | ||
3426 | 25 | "construct_haproxy_config") | ||
3427 | 26 | self.service_haproxy = self.patch_hook( | ||
3428 | 27 | "service_haproxy") | ||
3429 | 28 | self.update_sysctl = self.patch_hook( | ||
3430 | 29 | "update_sysctl") | ||
3431 | 30 | self.notify_website = self.patch_hook("notify_website") | ||
3432 | 31 | self.notify_peer = self.patch_hook("notify_peer") | ||
3433 | 32 | self.log = self.patch_hook("log") | ||
3434 | 33 | sys_exit = patch.object(sys, "exit") | ||
3435 | 34 | self.sys_exit = sys_exit.start() | ||
3436 | 35 | self.addCleanup(sys_exit.stop) | ||
3437 | 36 | |||
3438 | 37 | def patch_hook(self, hook_name): | ||
3439 | 38 | mock_controller = patch.object(hooks, hook_name) | ||
3440 | 39 | mock = mock_controller.start() | ||
3441 | 40 | self.addCleanup(mock_controller.stop) | ||
3442 | 41 | return mock | ||
3443 | 42 | |||
3444 | 43 | def test_config_changed_notify_website_changed_stanzas(self): | ||
3445 | 44 | self.service_haproxy.return_value = True | ||
3446 | 45 | self.get_listen_stanzas.side_effect = ( | ||
3447 | 46 | (('foo.internal', '1.2.3.4', 123),), | ||
3448 | 47 | (('foo.internal', '1.2.3.4', 123), | ||
3449 | 48 | ('bar.internal', '1.2.3.5', 234),)) | ||
3450 | 49 | |||
3451 | 50 | hooks.config_changed() | ||
3452 | 51 | |||
3453 | 52 | self.notify_website.assert_called_once_with() | ||
3454 | 53 | self.notify_peer.assert_called_once_with() | ||
3455 | 54 | |||
3456 | 55 | def test_config_changed_no_notify_website_not_changed(self): | ||
3457 | 56 | self.service_haproxy.return_value = True | ||
3458 | 57 | self.get_listen_stanzas.side_effect = ( | ||
3459 | 58 | (('foo.internal', '1.2.3.4', 123),), | ||
3460 | 59 | (('foo.internal', '1.2.3.4', 123),)) | ||
3461 | 60 | |||
3462 | 61 | hooks.config_changed() | ||
3463 | 62 | |||
3464 | 63 | self.notify_website.assert_not_called() | ||
3465 | 64 | self.notify_peer.assert_not_called() | ||
3466 | 65 | |||
3467 | 66 | def test_config_changed_no_notify_website_failed_check(self): | ||
3468 | 67 | self.service_haproxy.return_value = False | ||
3469 | 68 | self.get_listen_stanzas.side_effect = ( | ||
3470 | 69 | (('foo.internal', '1.2.3.4', 123),), | ||
3471 | 70 | (('foo.internal', '1.2.3.4', 123), | ||
3472 | 71 | ('bar.internal', '1.2.3.5', 234),)) | ||
3473 | 72 | |||
3474 | 73 | hooks.config_changed() | ||
3475 | 74 | |||
3476 | 75 | self.notify_website.assert_not_called() | ||
3477 | 76 | self.notify_peer.assert_not_called() | ||
3478 | 77 | self.log.assert_called_once_with( | ||
3479 | 78 | "HAProxy configuration check failed, exiting.") | ||
3480 | 79 | self.sys_exit.assert_called_once_with(1) | ||
3481 | 80 | |||
3482 | 81 | |||
3483 | 82 | class HelpersTest(TestCase): | ||
3484 | 83 | def test_constructs_haproxy_config(self): | ||
3485 | 84 | with patch_open() as (mock_open, mock_file): | ||
3486 | 85 | hooks.construct_haproxy_config('foo-globals', 'foo-defaults', | ||
3487 | 86 | 'foo-monitoring', 'foo-services') | ||
3488 | 87 | |||
3489 | 88 | mock_file.write.assert_called_with( | ||
3490 | 89 | 'foo-globals\n\n' | ||
3491 | 90 | 'foo-defaults\n\n' | ||
3492 | 91 | 'foo-monitoring\n\n' | ||
3493 | 92 | 'foo-services\n\n' | ||
3494 | 93 | ) | ||
3495 | 94 | mock_open.assert_called_with(hooks.default_haproxy_config, 'w') | ||
3496 | 95 | |||
3497 | 96 | def test_constructs_nothing_if_globals_is_none(self): | ||
3498 | 97 | with patch_open() as (mock_open, mock_file): | ||
3499 | 98 | hooks.construct_haproxy_config(None, 'foo-defaults', | ||
3500 | 99 | 'foo-monitoring', 'foo-services') | ||
3501 | 100 | |||
3502 | 101 | self.assertFalse(mock_open.called) | ||
3503 | 102 | self.assertFalse(mock_file.called) | ||
3504 | 103 | |||
3505 | 104 | def test_constructs_nothing_if_defaults_is_none(self): | ||
3506 | 105 | with patch_open() as (mock_open, mock_file): | ||
3507 | 106 | hooks.construct_haproxy_config('foo-globals', None, | ||
3508 | 107 | 'foo-monitoring', 'foo-services') | ||
3509 | 108 | |||
3510 | 109 | self.assertFalse(mock_open.called) | ||
3511 | 110 | self.assertFalse(mock_file.called) | ||
3512 | 111 | |||
3513 | 112 | def test_constructs_haproxy_config_without_optionals(self): | ||
3514 | 113 | with patch_open() as (mock_open, mock_file): | ||
3515 | 114 | hooks.construct_haproxy_config('foo-globals', 'foo-defaults') | ||
3516 | 115 | |||
3517 | 116 | mock_file.write.assert_called_with( | ||
3518 | 117 | 'foo-globals\n\n' | ||
3519 | 118 | 'foo-defaults\n\n' | ||
3520 | 119 | ) | ||
3521 | 120 | mock_open.assert_called_with(hooks.default_haproxy_config, 'w') | ||
3523 | === added file 'hooks/tests/test_helpers.py' | |||
3524 | --- hooks/tests/test_helpers.py 1970-01-01 00:00:00 +0000 | |||
3525 | +++ hooks/tests/test_helpers.py 2013-10-10 22:34:35 +0000 | |||
3526 | @@ -0,0 +1,745 @@ | |||
3527 | 1 | import os | ||
3528 | 2 | |||
3529 | 3 | from contextlib import contextmanager | ||
3530 | 4 | from StringIO import StringIO | ||
3531 | 5 | |||
3532 | 6 | from testtools import TestCase | ||
3533 | 7 | from mock import patch, call, MagicMock | ||
3534 | 8 | |||
3535 | 9 | import hooks | ||
3536 | 10 | from utils_for_tests import patch_open | ||
3537 | 11 | |||
3538 | 12 | |||
3539 | 13 | class HelpersTest(TestCase): | ||
3540 | 14 | |||
3541 | 15 | @patch('hooks.config_get') | ||
3542 | 16 | def test_creates_haproxy_globals(self, config_get): | ||
3543 | 17 | config_get.return_value = { | ||
3544 | 18 | 'global_log': 'foo-log, bar-log', | ||
3545 | 19 | 'global_maxconn': 123, | ||
3546 | 20 | 'global_user': 'foo-user', | ||
3547 | 21 | 'global_group': 'foo-group', | ||
3548 | 22 | 'global_spread_checks': 234, | ||
3549 | 23 | 'global_debug': False, | ||
3550 | 24 | 'global_quiet': False, | ||
3551 | 25 | } | ||
3552 | 26 | result = hooks.create_haproxy_globals() | ||
3553 | 27 | |||
3554 | 28 | expected = '\n'.join([ | ||
3555 | 29 | 'global', | ||
3556 | 30 | ' log foo-log', | ||
3557 | 31 | ' log bar-log', | ||
3558 | 32 | ' maxconn 123', | ||
3559 | 33 | ' user foo-user', | ||
3560 | 34 | ' group foo-group', | ||
3561 | 35 | ' spread-checks 234', | ||
3562 | 36 | ]) | ||
3563 | 37 | self.assertEqual(result, expected) | ||
3564 | 38 | |||
3565 | 39 | @patch('hooks.config_get') | ||
3566 | 40 | def test_creates_haproxy_globals_quietly_with_debug(self, config_get): | ||
3567 | 41 | config_get.return_value = { | ||
3568 | 42 | 'global_log': 'foo-log, bar-log', | ||
3569 | 43 | 'global_maxconn': 123, | ||
3570 | 44 | 'global_user': 'foo-user', | ||
3571 | 45 | 'global_group': 'foo-group', | ||
3572 | 46 | 'global_spread_checks': 234, | ||
3573 | 47 | 'global_debug': True, | ||
3574 | 48 | 'global_quiet': True, | ||
3575 | 49 | } | ||
3576 | 50 | result = hooks.create_haproxy_globals() | ||
3577 | 51 | |||
3578 | 52 | expected = '\n'.join([ | ||
3579 | 53 | 'global', | ||
3580 | 54 | ' log foo-log', | ||
3581 | 55 | ' log bar-log', | ||
3582 | 56 | ' maxconn 123', | ||
3583 | 57 | ' user foo-user', | ||
3584 | 58 | ' group foo-group', | ||
3585 | 59 | ' debug', | ||
3586 | 60 | ' quiet', | ||
3587 | 61 | ' spread-checks 234', | ||
3588 | 62 | ]) | ||
3589 | 63 | self.assertEqual(result, expected) | ||
3590 | 64 | |||
3591 | 65 | def test_enables_haproxy(self): | ||
3592 | 66 | mock_file = MagicMock() | ||
3593 | 67 | |||
3594 | 68 | @contextmanager | ||
3595 | 69 | def mock_open(*args, **kwargs): | ||
3596 | 70 | yield mock_file | ||
3597 | 71 | |||
3598 | 72 | initial_content = """ | ||
3599 | 73 | foo | ||
3600 | 74 | ENABLED=0 | ||
3601 | 75 | bar | ||
3602 | 76 | """ | ||
3603 | 77 | ending_content = initial_content.replace('ENABLED=0', 'ENABLED=1') | ||
3604 | 78 | |||
3605 | 79 | with patch('__builtin__.open', mock_open): | ||
3606 | 80 | mock_file.read.return_value = initial_content | ||
3607 | 81 | |||
3608 | 82 | hooks.enable_haproxy() | ||
3609 | 83 | |||
3610 | 84 | mock_file.write.assert_called_with(ending_content) | ||
3611 | 85 | |||
3612 | 86 | @patch('hooks.config_get') | ||
3613 | 87 | def test_creates_haproxy_defaults(self, config_get): | ||
3614 | 88 | config_get.return_value = { | ||
3615 | 89 | 'default_options': 'foo-option, bar-option', | ||
3616 | 90 | 'default_timeouts': '234, 456', | ||
3617 | 91 | 'default_log': 'foo-log', | ||
3618 | 92 | 'default_mode': 'foo-mode', | ||
3619 | 93 | 'default_retries': 321, | ||
3620 | 94 | } | ||
3621 | 95 | result = hooks.create_haproxy_defaults() | ||
3622 | 96 | |||
3623 | 97 | expected = '\n'.join([ | ||
3624 | 98 | 'defaults', | ||
3625 | 99 | ' log foo-log', | ||
3626 | 100 | ' mode foo-mode', | ||
3627 | 101 | ' option foo-option', | ||
3628 | 102 | ' option bar-option', | ||
3629 | 103 | ' retries 321', | ||
3630 | 104 | ' timeout 234', | ||
3631 | 105 | ' timeout 456', | ||
3632 | 106 | ]) | ||
3633 | 107 | self.assertEqual(result, expected) | ||
3634 | 108 | |||
3635 | 109 | def test_returns_none_when_haproxy_config_doesnt_exist(self): | ||
3636 | 110 | self.assertIsNone(hooks.load_haproxy_config('/some/foo/file')) | ||
3637 | 111 | |||
3638 | 112 | @patch('__builtin__.open') | ||
3639 | 113 | @patch('os.path.isfile') | ||
3640 | 114 | def test_loads_haproxy_config_file(self, isfile, mock_open): | ||
3641 | 115 | content = 'some content' | ||
3642 | 116 | config_file = '/etc/haproxy/haproxy.cfg' | ||
3643 | 117 | file_object = StringIO(content) | ||
3644 | 118 | isfile.return_value = True | ||
3645 | 119 | mock_open.return_value = file_object | ||
3646 | 120 | |||
3647 | 121 | result = hooks.load_haproxy_config() | ||
3648 | 122 | |||
3649 | 123 | self.assertEqual(result, content) | ||
3650 | 124 | isfile.assert_called_with(config_file) | ||
3651 | 125 | mock_open.assert_called_with(config_file) | ||
3652 | 126 | |||
3653 | 127 | @patch('hooks.load_haproxy_config') | ||
3654 | 128 | def test_gets_monitoring_password(self, load_haproxy_config): | ||
3655 | 129 | load_haproxy_config.return_value = 'stats auth foo:bar' | ||
3656 | 130 | |||
3657 | 131 | password = hooks.get_monitoring_password() | ||
3658 | 132 | |||
3659 | 133 | self.assertEqual(password, 'bar') | ||
3660 | 134 | |||
3661 | 135 | @patch('hooks.load_haproxy_config') | ||
3662 | 136 | def test_gets_none_if_different_pattern(self, load_haproxy_config): | ||
3663 | 137 | load_haproxy_config.return_value = 'some other pattern' | ||
3664 | 138 | |||
3665 | 139 | password = hooks.get_monitoring_password() | ||
3666 | 140 | |||
3667 | 141 | self.assertIsNone(password) | ||
3668 | 142 | |||
3669 | 143 | def test_gets_none_pass_if_config_doesnt_exist(self): | ||
3670 | 144 | password = hooks.get_monitoring_password('/some/foo/path') | ||
3671 | 145 | |||
3672 | 146 | self.assertIsNone(password) | ||
3673 | 147 | |||
3674 | 148 | @patch('hooks.load_haproxy_config') | ||
3675 | 149 | def test_gets_service_ports(self, load_haproxy_config): | ||
3676 | 150 | load_haproxy_config.return_value = ''' | ||
3677 | 151 | listen foo.internal 1.2.3.4:123 | ||
3678 | 152 | listen bar.internal 1.2.3.5:234 | ||
3679 | 153 | ''' | ||
3680 | 154 | |||
3681 | 155 | ports = hooks.get_service_ports() | ||
3682 | 156 | |||
3683 | 157 | self.assertEqual(ports, (123, 234)) | ||
3684 | 158 | |||
3685 | 159 | @patch('hooks.load_haproxy_config') | ||
3686 | 160 | def test_get_listen_stanzas(self, load_haproxy_config): | ||
3687 | 161 | load_haproxy_config.return_value = ''' | ||
3688 | 162 | listen foo.internal 1.2.3.4:123 | ||
3689 | 163 | listen bar.internal 1.2.3.5:234 | ||
3690 | 164 | ''' | ||
3691 | 165 | |||
3692 | 166 | stanzas = hooks.get_listen_stanzas() | ||
3693 | 167 | |||
3694 | 168 | self.assertEqual((('foo.internal', '1.2.3.4', 123), | ||
3695 | 169 | ('bar.internal', '1.2.3.5', 234)), | ||
3696 | 170 | stanzas) | ||
3697 | 171 | |||
3698 | 172 | @patch('hooks.load_haproxy_config') | ||
3699 | 173 | def test_get_listen_stanzas_with_frontend(self, load_haproxy_config): | ||
3700 | 174 | load_haproxy_config.return_value = ''' | ||
3701 | 175 | frontend foo-2-123 | ||
3702 | 176 | bind 1.2.3.4:123 | ||
3703 | 177 | default_backend foo.internal | ||
3704 | 178 | frontend foo-2-234 | ||
3705 | 179 | bind 1.2.3.5:234 | ||
3706 | 180 | default_backend bar.internal | ||
3707 | 181 | ''' | ||
3708 | 182 | |||
3709 | 183 | stanzas = hooks.get_listen_stanzas() | ||
3710 | 184 | |||
3711 | 185 | self.assertEqual((('foo.internal', '1.2.3.4', 123), | ||
3712 | 186 | ('bar.internal', '1.2.3.5', 234)), | ||
3713 | 187 | stanzas) | ||
3714 | 188 | |||
3715 | 189 | @patch('hooks.load_haproxy_config') | ||
3716 | 190 | def test_get_empty_tuple_when_no_stanzas(self, load_haproxy_config): | ||
3717 | 191 | load_haproxy_config.return_value = ''' | ||
3718 | 192 | ''' | ||
3719 | 193 | |||
3720 | 194 | stanzas = hooks.get_listen_stanzas() | ||
3721 | 195 | |||
3722 | 196 | self.assertEqual((), stanzas) | ||
3723 | 197 | |||
3724 | 198 | @patch('hooks.load_haproxy_config') | ||
3725 | 199 | def test_get_listen_stanzas_none_configured(self, load_haproxy_config): | ||
3726 | 200 | load_haproxy_config.return_value = "" | ||
3727 | 201 | |||
3728 | 202 | stanzas = hooks.get_listen_stanzas() | ||
3729 | 203 | |||
3730 | 204 | self.assertEqual((), stanzas) | ||
3731 | 205 | |||
3732 | 206 | def test_gets_no_ports_if_config_doesnt_exist(self): | ||
3733 | 207 | ports = hooks.get_service_ports('/some/foo/path') | ||
3734 | 208 | self.assertEqual((), ports) | ||
3735 | 209 | |||
3736 | 210 | @patch('hooks.open_port') | ||
3737 | 211 | @patch('hooks.close_port') | ||
3738 | 212 | def test_updates_service_ports(self, close_port, open_port): | ||
3739 | 213 | old_service_ports = [123, 234, 345] | ||
3740 | 214 | new_service_ports = [345, 456, 567] | ||
3741 | 215 | |||
3742 | 216 | hooks.update_service_ports(old_service_ports, new_service_ports) | ||
3743 | 217 | |||
3744 | 218 | self.assertEqual(close_port.mock_calls, [call(123), call(234)]) | ||
3745 | 219 | self.assertEqual(open_port.mock_calls, | ||
3746 | 220 | [call(345), call(456), call(567)]) | ||
3747 | 221 | |||
3748 | 222 | @patch('hooks.open_port') | ||
3749 | 223 | @patch('hooks.close_port') | ||
3750 | 224 | def test_updates_none_if_service_ports_not_provided(self, close_port, | ||
3751 | 225 | open_port): | ||
3752 | 226 | hooks.update_service_ports() | ||
3753 | 227 | |||
3754 | 228 | self.assertFalse(close_port.called) | ||
3755 | 229 | self.assertFalse(open_port.called) | ||
3756 | 230 | |||
3757 | 231 | @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"}) | ||
3758 | 232 | def test_creates_a_listen_stanza(self): | ||
3759 | 233 | service_name = 'some-name' | ||
3760 | 234 | service_ip = '10.11.12.13' | ||
3761 | 235 | service_port = 1234 | ||
3762 | 236 | service_options = ('foo', 'bar') | ||
3763 | 237 | server_entries = [ | ||
3764 | 238 | ('name-1', 'ip-1', 'port-1', ('foo1', 'bar1')), | ||
3765 | 239 | ('name-2', 'ip-2', 'port-2', ('foo2', 'bar2')), | ||
3766 | 240 | ] | ||
3767 | 241 | |||
3768 | 242 | result = hooks.create_listen_stanza(service_name, service_ip, | ||
3769 | 243 | service_port, service_options, | ||
3770 | 244 | server_entries) | ||
3771 | 245 | |||
3772 | 246 | expected = '\n'.join(( | ||
3773 | 247 | 'frontend haproxy-2-1234', | ||
3774 | 248 | ' bind 10.11.12.13:1234', | ||
3775 | 249 | ' default_backend some-name', | ||
3776 | 250 | '', | ||
3777 | 251 | 'backend some-name', | ||
3778 | 252 | ' foo', | ||
3779 | 253 | ' bar', | ||
3780 | 254 | ' server name-1 ip-1:port-1 foo1 bar1', | ||
3781 | 255 | ' server name-2 ip-2:port-2 foo2 bar2', | ||
3782 | 256 | )) | ||
3783 | 257 | |||
3784 | 258 | self.assertEqual(expected, result) | ||
3785 | 259 | |||
3786 | 260 | @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"}) | ||
3787 | 261 | def test_create_listen_stanza_filters_frontend_options(self): | ||
3788 | 262 | service_name = 'some-name' | ||
3789 | 263 | service_ip = '10.11.12.13' | ||
3790 | 264 | service_port = 1234 | ||
3791 | 265 | service_options = ('capture request header X-Man', | ||
3792 | 266 | 'retries 3', 'balance uri', 'option logasap') | ||
3793 | 267 | server_entries = [ | ||
3794 | 268 | ('name-1', 'ip-1', 'port-1', ('foo1', 'bar1')), | ||
3795 | 269 | ('name-2', 'ip-2', 'port-2', ('foo2', 'bar2')), | ||
3796 | 270 | ] | ||
3797 | 271 | |||
3798 | 272 | result = hooks.create_listen_stanza(service_name, service_ip, | ||
3799 | 273 | service_port, service_options, | ||
3800 | 274 | server_entries) | ||
3801 | 275 | |||
3802 | 276 | expected = '\n'.join(( | ||
3803 | 277 | 'frontend haproxy-2-1234', | ||
3804 | 278 | ' bind 10.11.12.13:1234', | ||
3805 | 279 | ' default_backend some-name', | ||
3806 | 280 | ' capture request header X-Man', | ||
3807 | 281 | ' option logasap', | ||
3808 | 282 | '', | ||
3809 | 283 | 'backend some-name', | ||
3810 | 284 | ' retries 3', | ||
3811 | 285 | ' balance uri', | ||
3812 | 286 | ' server name-1 ip-1:port-1 foo1 bar1', | ||
3813 | 287 | ' server name-2 ip-2:port-2 foo2 bar2', | ||
3814 | 288 | )) | ||
3815 | 289 | |||
3816 | 290 | self.assertEqual(expected, result) | ||
3817 | 291 | |||
3818 | 292 | @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"}) | ||
3819 | 293 | def test_creates_a_listen_stanza_with_tuple_entries(self): | ||
3820 | 294 | service_name = 'some-name' | ||
3821 | 295 | service_ip = '10.11.12.13' | ||
3822 | 296 | service_port = 1234 | ||
3823 | 297 | service_options = ('foo', 'bar') | ||
3824 | 298 | server_entries = ( | ||
3825 | 299 | ('name-1', 'ip-1', 'port-1', ('foo1', 'bar1')), | ||
3826 | 300 | ('name-2', 'ip-2', 'port-2', ('foo2', 'bar2')), | ||
3827 | 301 | ) | ||
3828 | 302 | |||
3829 | 303 | result = hooks.create_listen_stanza(service_name, service_ip, | ||
3830 | 304 | service_port, service_options, | ||
3831 | 305 | server_entries) | ||
3832 | 306 | |||
3833 | 307 | expected = '\n'.join(( | ||
3834 | 308 | 'frontend haproxy-2-1234', | ||
3835 | 309 | ' bind 10.11.12.13:1234', | ||
3836 | 310 | ' default_backend some-name', | ||
3837 | 311 | '', | ||
3838 | 312 | 'backend some-name', | ||
3839 | 313 | ' foo', | ||
3840 | 314 | ' bar', | ||
3841 | 315 | ' server name-1 ip-1:port-1 foo1 bar1', | ||
3842 | 316 | ' server name-2 ip-2:port-2 foo2 bar2', | ||
3843 | 317 | )) | ||
3844 | 318 | |||
3845 | 319 | self.assertEqual(expected, result) | ||
3846 | 320 | |||
3847 | 321 | def test_doesnt_create_listen_stanza_if_args_not_provided(self): | ||
3848 | 322 | self.assertIsNone(hooks.create_listen_stanza()) | ||
3849 | 323 | |||
3850 | 324 | @patch('hooks.create_listen_stanza') | ||
3851 | 325 | @patch('hooks.config_get') | ||
3852 | 326 | @patch('hooks.get_monitoring_password') | ||
3853 | 327 | def test_creates_a_monitoring_stanza(self, get_monitoring_password, | ||
3854 | 328 | config_get, create_listen_stanza): | ||
3855 | 329 | config_get.return_value = { | ||
3856 | 330 | 'enable_monitoring': True, | ||
3857 | 331 | 'monitoring_allowed_cidr': 'some-cidr', | ||
3858 | 332 | 'monitoring_password': 'some-pass', | ||
3859 | 333 | 'monitoring_username': 'some-user', | ||
3860 | 334 | 'monitoring_stats_refresh': 123, | ||
3861 | 335 | 'monitoring_port': 1234, | ||
3862 | 336 | } | ||
3863 | 337 | create_listen_stanza.return_value = 'some result' | ||
3864 | 338 | |||
3865 | 339 | result = hooks.create_monitoring_stanza(service_name="some-service") | ||
3866 | 340 | |||
3867 | 341 | self.assertEqual('some result', result) | ||
3868 | 342 | get_monitoring_password.assert_called_with() | ||
3869 | 343 | create_listen_stanza.assert_called_with( | ||
3870 | 344 | 'some-service', '0.0.0.0', 1234, [ | ||
3871 | 345 | 'mode http', | ||
3872 | 346 | 'acl allowed_cidr src some-cidr', | ||
3873 | 347 | 'block unless allowed_cidr', | ||
3874 | 348 | 'stats enable', | ||
3875 | 349 | 'stats uri /', | ||
3876 | 350 | 'stats realm Haproxy\\ Statistics', | ||
3877 | 351 | 'stats auth some-user:some-pass', | ||
3878 | 352 | 'stats refresh 123', | ||
3879 | 353 | ]) | ||
3880 | 354 | |||
3881 | 355 | @patch('hooks.create_listen_stanza') | ||
3882 | 356 | @patch('hooks.config_get') | ||
3883 | 357 | @patch('hooks.get_monitoring_password') | ||
3884 | 358 | def test_doesnt_create_a_monitoring_stanza_if_monitoring_disabled( | ||
3885 | 359 | self, get_monitoring_password, config_get, create_listen_stanza): | ||
3886 | 360 | config_get.return_value = { | ||
3887 | 361 | 'enable_monitoring': False, | ||
3888 | 362 | } | ||
3889 | 363 | |||
3890 | 364 | result = hooks.create_monitoring_stanza(service_name="some-service") | ||
3891 | 365 | |||
3892 | 366 | self.assertIsNone(result) | ||
3893 | 367 | self.assertFalse(get_monitoring_password.called) | ||
3894 | 368 | self.assertFalse(create_listen_stanza.called) | ||
3895 | 369 | |||
3896 | 370 | @patch('hooks.create_listen_stanza') | ||
3897 | 371 | @patch('hooks.config_get') | ||
3898 | 372 | @patch('hooks.get_monitoring_password') | ||
3899 | 373 | def test_uses_monitoring_password_for_stanza(self, get_monitoring_password, | ||
3900 | 374 | config_get, | ||
3901 | 375 | create_listen_stanza): | ||
3902 | 376 | config_get.return_value = { | ||
3903 | 377 | 'enable_monitoring': True, | ||
3904 | 378 | 'monitoring_allowed_cidr': 'some-cidr', | ||
3905 | 379 | 'monitoring_password': 'changeme', | ||
3906 | 380 | 'monitoring_username': 'some-user', | ||
3907 | 381 | 'monitoring_stats_refresh': 123, | ||
3908 | 382 | 'monitoring_port': 1234, | ||
3909 | 383 | } | ||
3910 | 384 | create_listen_stanza.return_value = 'some result' | ||
3911 | 385 | get_monitoring_password.return_value = 'some-monitoring-pass' | ||
3912 | 386 | |||
3913 | 387 | hooks.create_monitoring_stanza(service_name="some-service") | ||
3914 | 388 | |||
3915 | 389 | get_monitoring_password.assert_called_with() | ||
3916 | 390 | create_listen_stanza.assert_called_with( | ||
3917 | 391 | 'some-service', '0.0.0.0', 1234, [ | ||
3918 | 392 | 'mode http', | ||
3919 | 393 | 'acl allowed_cidr src some-cidr', | ||
3920 | 394 | 'block unless allowed_cidr', | ||
3921 | 395 | 'stats enable', | ||
3922 | 396 | 'stats uri /', | ||
3923 | 397 | 'stats realm Haproxy\\ Statistics', | ||
3924 | 398 | 'stats auth some-user:some-monitoring-pass', | ||
3925 | 399 | 'stats refresh 123', | ||
3926 | 400 | ]) | ||
3927 | 401 | |||
3928 | 402 | @patch('hooks.pwgen') | ||
3929 | 403 | @patch('hooks.create_listen_stanza') | ||
3930 | 404 | @patch('hooks.config_get') | ||
3931 | 405 | @patch('hooks.get_monitoring_password') | ||
3932 | 406 | def test_uses_new_password_for_stanza(self, get_monitoring_password, | ||
3933 | 407 | config_get, create_listen_stanza, | ||
3934 | 408 | pwgen): | ||
3935 | 409 | config_get.return_value = { | ||
3936 | 410 | 'enable_monitoring': True, | ||
3937 | 411 | 'monitoring_allowed_cidr': 'some-cidr', | ||
3938 | 412 | 'monitoring_password': 'changeme', | ||
3939 | 413 | 'monitoring_username': 'some-user', | ||
3940 | 414 | 'monitoring_stats_refresh': 123, | ||
3941 | 415 | 'monitoring_port': 1234, | ||
3942 | 416 | } | ||
3943 | 417 | create_listen_stanza.return_value = 'some result' | ||
3944 | 418 | get_monitoring_password.return_value = None | ||
3945 | 419 | pwgen.return_value = 'some-new-pass' | ||
3946 | 420 | |||
3947 | 421 | hooks.create_monitoring_stanza(service_name="some-service") | ||
3948 | 422 | |||
3949 | 423 | get_monitoring_password.assert_called_with() | ||
3950 | 424 | create_listen_stanza.assert_called_with( | ||
3951 | 425 | 'some-service', '0.0.0.0', 1234, [ | ||
3952 | 426 | 'mode http', | ||
3953 | 427 | 'acl allowed_cidr src some-cidr', | ||
3954 | 428 | 'block unless allowed_cidr', | ||
3955 | 429 | 'stats enable', | ||
3956 | 430 | 'stats uri /', | ||
3957 | 431 | 'stats realm Haproxy\\ Statistics', | ||
3958 | 432 | 'stats auth some-user:some-new-pass', | ||
3959 | 433 | 'stats refresh 123', | ||
3960 | 434 | ]) | ||
3961 | 435 | |||
3962 | 436 | @patch('hooks.is_proxy') | ||
3963 | 437 | @patch('hooks.config_get') | ||
3964 | 438 | @patch('yaml.safe_load') | ||
3965 | 439 | def test_gets_config_services(self, safe_load, config_get, is_proxy): | ||
3966 | 440 | config_get.return_value = { | ||
3967 | 441 | 'services': 'some-services', | ||
3968 | 442 | } | ||
3969 | 443 | safe_load.return_value = [ | ||
3970 | 444 | { | ||
3971 | 445 | 'service_name': 'foo', | ||
3972 | 446 | 'service_options': ['foo1', 'foo2'], | ||
3976 | 450 | 'server_options': ['baz1', 'baz2'], | ||
3977 | 451 | }, | ||
3978 | 452 | { | ||
3979 | 453 | 'service_name': 'bar', | ||
3980 | 454 | 'service_options': ['bar1', 'bar2'], | ||
3981 | 455 | 'server_options': ['baz1', 'baz2'], | ||
3982 | 456 | }, | ||
3983 | 457 | ] | ||
3984 | 458 | is_proxy.return_value = False | ||
3985 | 459 | |||
3986 | 460 | result = hooks.get_config_services() | ||
3987 | 461 | expected = { | ||
3988 | 462 | None: { | ||
3989 | 463 | 'service_name': 'foo', | ||
3990 | 464 | }, | ||
3991 | 465 | 'foo': { | ||
3992 | 466 | 'service_name': 'foo', | ||
3993 | 467 | 'service_options': ['foo1', 'foo2'], | ||
3994 | 468 | 'server_options': ['baz1', 'baz2'], | ||
3995 | 469 | }, | ||
3996 | 470 | 'bar': { | ||
3997 | 471 | 'service_name': 'bar', | ||
3998 | 472 | 'service_options': ['bar1', 'bar2'], | ||
3999 | 473 | 'server_options': ['baz1', 'baz2'], | ||
4000 | 474 | }, | ||
4001 | 475 | } | ||
4002 | 476 | |||
4003 | 477 | self.assertEqual(expected, result) | ||
4004 | 478 | |||
4005 | 479 | @patch('hooks.is_proxy') | ||
4006 | 480 | @patch('hooks.config_get') | ||
4007 | 481 | @patch('yaml.safe_load') | ||
4008 | 482 | def test_gets_config_services_with_forward_option(self, safe_load, | ||
4009 | 483 | config_get, is_proxy): | ||
4010 | 484 | config_get.return_value = { | ||
4011 | 485 | 'services': 'some-services', | ||
4012 | 486 | } | ||
4013 | 487 | safe_load.return_value = [ | ||
4014 | 488 | { | ||
4015 | 489 | 'service_name': 'foo', | ||
4016 | 490 | 'service_options': ['foo1', 'foo2'], | ||
4020 | 494 | 'server_options': ['baz1', 'baz2'], | ||
4021 | 495 | }, | ||
4022 | 496 | { | ||
4023 | 497 | 'service_name': 'bar', | ||
4024 | 498 | 'service_options': ['bar1', 'bar2'], | ||
4025 | 499 | 'server_options': ['baz1', 'baz2'], | ||
4026 | 500 | }, | ||
4027 | 501 | ] | ||
4028 | 502 | is_proxy.return_value = True | ||
4029 | 503 | |||
4030 | 504 | result = hooks.get_config_services() | ||
4031 | 505 | expected = { | ||
4032 | 506 | None: { | ||
4033 | 507 | 'service_name': 'foo', | ||
4034 | 508 | }, | ||
4035 | 509 | 'foo': { | ||
4036 | 510 | 'service_name': 'foo', | ||
4037 | 511 | 'service_options': ['foo1', 'foo2', 'option forwardfor'], | ||
4038 | 512 | 'server_options': ['baz1', 'baz2'], | ||
4039 | 513 | }, | ||
4040 | 514 | 'bar': { | ||
4041 | 515 | 'service_name': 'bar', | ||
4042 | 516 | 'service_options': ['bar1', 'bar2', 'option forwardfor'], | ||
4043 | 517 | 'server_options': ['baz1', 'baz2'], | ||
4044 | 518 | }, | ||
4045 | 519 | } | ||
4046 | 520 | |||
4047 | 521 | self.assertEqual(expected, result) | ||
4048 | 522 | |||
4049 | 523 | @patch('hooks.is_proxy') | ||
4050 | 524 | @patch('hooks.config_get') | ||
4051 | 525 | @patch('yaml.safe_load') | ||
4052 | 526 | def test_gets_config_services_with_options_string(self, safe_load, | ||
4053 | 527 | config_get, is_proxy): | ||
4054 | 528 | config_get.return_value = { | ||
4055 | 529 | 'services': 'some-services', | ||
4056 | 530 | } | ||
4057 | 531 | safe_load.return_value = [ | ||
4058 | 532 | { | ||
4059 | 533 | 'service_name': 'foo', | ||
4060 | 534 | 'service_options': ['foo1', 'foo2'], | ||
4064 | 538 | 'server_options': 'baz1 baz2', | ||
4065 | 539 | }, | ||
4066 | 540 | { | ||
4067 | 541 | 'service_name': 'bar', | ||
4068 | 542 | 'service_options': ['bar1', 'bar2'], | ||
4069 | 543 | 'server_options': 'baz1 baz2', | ||
4070 | 544 | }, | ||
4071 | 545 | ] | ||
4072 | 546 | is_proxy.return_value = False | ||
4073 | 547 | |||
4074 | 548 | result = hooks.get_config_services() | ||
4075 | 549 | expected = { | ||
4076 | 550 | None: { | ||
4077 | 551 | 'service_name': 'foo', | ||
4078 | 552 | }, | ||
4079 | 553 | 'foo': { | ||
4080 | 554 | 'service_name': 'foo', | ||
4081 | 555 | 'service_options': ['foo1', 'foo2'], | ||
4082 | 556 | 'server_options': ['baz1', 'baz2'], | ||
4083 | 557 | }, | ||
4084 | 558 | 'bar': { | ||
4085 | 559 | 'service_name': 'bar', | ||
4086 | 560 | 'service_options': ['bar1', 'bar2'], | ||
4087 | 561 | 'server_options': ['baz1', 'baz2'], | ||
4088 | 562 | }, | ||
4089 | 563 | } | ||
4090 | 564 | |||
4091 | 565 | self.assertEqual(expected, result) | ||
4092 | 566 | |||
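The three `get_config_services` tests above pin down how the charm normalises the `services` config: a whitespace-separated `server_options` string becomes a list, `option forwardfor` is appended when the unit is itself behind a proxy, and the first service doubles as the default under the `None` key. A minimal self-contained sketch of that normalisation, inferred from the tests rather than copied from the charm (the real function reads its inputs via `config_get`, `yaml.safe_load` and `is_proxy` instead of taking arguments):

```python
# Sketch inferred from the tests above, not the charm's actual implementation.
def get_config_services(services, is_proxy=False):
    """Normalise the parsed 'services' config into a dict keyed by name."""
    result = {}
    for service in services:
        service = dict(service)
        # A whitespace-separated server_options string becomes a list.
        if isinstance(service.get('server_options'), str):
            service['server_options'] = service['server_options'].split()
        # When this unit sits behind another proxy, backends need the
        # original client address, so 'option forwardfor' is appended.
        options = list(service.get('service_options', []))
        if is_proxy:
            options.append('option forwardfor')
        service['service_options'] = options
        result[service['service_name']] = service
    # The first service doubles as the default, keyed on None.
    if services:
        result[None] = {'service_name': services[0]['service_name']}
    return result
```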
4093 | 567 | @patch('hooks.get_config_services') | ||
4094 | 568 | def test_gets_a_service_config(self, get_config_services): | ||
4095 | 569 | get_config_services.return_value = { | ||
4096 | 570 | 'foo': 'bar', | ||
4097 | 571 | } | ||
4098 | 572 | |||
4099 | 573 | self.assertEqual('bar', hooks.get_config_service('foo')) | ||
4100 | 574 | |||
4101 | 575 | @patch('hooks.get_config_services') | ||
4102 | 576 | def test_gets_a_service_config_from_none(self, get_config_services): | ||
4103 | 577 | get_config_services.return_value = { | ||
4104 | 578 | None: 'bar', | ||
4105 | 579 | } | ||
4106 | 580 | |||
4107 | 581 | self.assertEqual('bar', hooks.get_config_service()) | ||
4108 | 582 | |||
4109 | 583 | @patch('hooks.get_config_services') | ||
4110 | 584 | def test_gets_a_service_config_as_none(self, get_config_services): | ||
4111 | 585 | get_config_services.return_value = { | ||
4112 | 586 | 'baz': 'bar', | ||
4113 | 587 | } | ||
4114 | 588 | |||
4115 | 589 | self.assertIsNone(hooks.get_config_service()) | ||
4116 | 590 | |||
4117 | 591 | @patch('os.path.exists') | ||
4118 | 592 | def test_mark_as_proxy_when_path_exists(self, path_exists): | ||
4119 | 593 | path_exists.return_value = True | ||
4120 | 594 | |||
4121 | 595 | self.assertTrue(hooks.is_proxy('foo')) | ||
4122 | 596 | path_exists.assert_called_with('/var/run/haproxy/foo.is.proxy') | ||
4123 | 597 | |||
4124 | 598 | @patch('os.path.exists') | ||
4125 | 599 | def test_doesnt_mark_as_proxy_when_path_doesnt_exist(self, path_exists): | ||
4126 | 600 | path_exists.return_value = False | ||
4127 | 601 | |||
4128 | 602 | self.assertFalse(hooks.is_proxy('foo')) | ||
4129 | 603 | path_exists.assert_called_with('/var/run/haproxy/foo.is.proxy') | ||
4130 | 604 | |||
4131 | 605 | @patch('os.path.exists') | ||
4132 | 606 | def test_loads_services_by_name(self, path_exists): | ||
4133 | 607 | with patch_open() as (mock_open, mock_file): | ||
4134 | 608 | path_exists.return_value = True | ||
4135 | 609 | mock_file.read.return_value = 'some content' | ||
4136 | 610 | |||
4137 | 611 | result = hooks.load_services('some-service') | ||
4138 | 612 | |||
4139 | 613 | self.assertEqual('some content', result) | ||
4140 | 614 | mock_open.assert_called_with( | ||
4141 | 615 | '/var/run/haproxy/some-service.service') | ||
4142 | 616 | mock_file.read.assert_called_with() | ||
4143 | 617 | |||
4144 | 618 | @patch('os.path.exists') | ||
4145 | 619 | def test_loads_no_service_if_path_doesnt_exist(self, path_exists): | ||
4146 | 620 | path_exists.return_value = False | ||
4147 | 621 | |||
4148 | 622 | result = hooks.load_services('some-service') | ||
4149 | 623 | |||
4150 | 624 | self.assertIsNone(result) | ||
4151 | 625 | |||
4152 | 626 | @patch('glob.glob') | ||
4153 | 627 | def test_loads_services_within_dir_if_no_name_provided(self, glob): | ||
4154 | 628 | with patch_open() as (mock_open, mock_file): | ||
4155 | 629 | mock_file.read.side_effect = ['foo', 'bar'] | ||
4156 | 630 | glob.return_value = ['foo-file', 'bar-file'] | ||
4157 | 631 | |||
4158 | 632 | result = hooks.load_services() | ||
4159 | 633 | |||
4160 | 634 | self.assertEqual('foo\n\nbar\n\n', result) | ||
4161 | 635 | mock_open.assert_has_calls([call('foo-file'), call('bar-file')]) | ||
4162 | 636 | mock_file.read.assert_has_calls([call(), call()]) | ||
4163 | 637 | |||
4164 | 638 | @patch('hooks.os') | ||
4165 | 639 | def test_removes_services_by_name(self, os_): | ||
4166 | 640 | service_path = '/var/run/haproxy/some-service.service' | ||
4167 | 641 | os_.path.exists.return_value = True | ||
4168 | 642 | |||
4169 | 643 | self.assertTrue(hooks.remove_services('some-service')) | ||
4170 | 644 | |||
4171 | 645 | os_.path.exists.assert_called_with(service_path) | ||
4172 | 646 | os_.remove.assert_called_with(service_path) | ||
4173 | 647 | |||
4174 | 648 | @patch('hooks.os') | ||
4175 | 649 | def test_removes_nothing_if_service_doesnt_exist(self, os_): | ||
4176 | 650 | service_path = '/var/run/haproxy/some-service.service' | ||
4177 | 651 | os_.path.exists.return_value = False | ||
4178 | 652 | |||
4179 | 653 | self.assertTrue(hooks.remove_services('some-service')) | ||
4180 | 654 | |||
4181 | 655 | os_.path.exists.assert_called_with(service_path) | ||
4182 | 656 | |||
4183 | 657 | @patch('hooks.os') | ||
4184 | 658 | @patch('glob.glob') | ||
4185 | 659 | def test_removes_all_services_in_dir_if_name_not_provided(self, glob, os_): | ||
4186 | 660 | glob.return_value = ['foo', 'bar'] | ||
4187 | 661 | |||
4188 | 662 | self.assertTrue(hooks.remove_services()) | ||
4189 | 663 | |||
4190 | 664 | os_.remove.assert_has_calls([call('foo'), call('bar')]) | ||
4191 | 665 | |||
4192 | 666 | @patch('hooks.os') | ||
4193 | 667 | @patch('hooks.log') | ||
4194 | 668 | def test_logs_error_when_failing_to_remove_service_by_name(self, log, os_): | ||
4195 | 669 | error = Exception('some error') | ||
4196 | 670 | os_.path.exists.return_value = True | ||
4197 | 671 | os_.remove.side_effect = error | ||
4198 | 672 | |||
4199 | 673 | self.assertFalse(hooks.remove_services('some-service')) | ||
4200 | 674 | |||
4201 | 675 | log.assert_called_with(str(error)) | ||
4202 | 676 | |||
4203 | 677 | @patch('hooks.os') | ||
4204 | 678 | @patch('hooks.log') | ||
4205 | 679 | @patch('glob.glob') | ||
4206 | 680 | def test_logs_error_when_failing_to_remove_services(self, glob, log, os_): | ||
4207 | 681 | errors = [Exception('some error 1'), Exception('some error 2')] | ||
4208 | 682 | os_.remove.side_effect = errors | ||
4209 | 683 | glob.return_value = ['foo', 'bar'] | ||
4210 | 684 | |||
4211 | 685 | self.assertTrue(hooks.remove_services()) | ||
4212 | 686 | |||
4213 | 687 | log.assert_has_calls([ | ||
4214 | 688 | call(str(errors[0])), | ||
4215 | 689 | call(str(errors[1])), | ||
4216 | 690 | ]) | ||
4217 | 691 | |||
4218 | 692 | @patch('subprocess.call') | ||
4219 | 693 | def test_calls_check_action(self, mock_call): | ||
4220 | 694 | mock_call.return_value = 0 | ||
4221 | 695 | |||
4222 | 696 | result = hooks.service_haproxy('check') | ||
4223 | 697 | |||
4224 | 698 | self.assertTrue(result) | ||
4225 | 699 | mock_call.assert_called_with(['/usr/sbin/haproxy', '-f', | ||
4226 | 700 | hooks.default_haproxy_config, '-c']) | ||
4227 | 701 | |||
4228 | 702 | @patch('subprocess.call') | ||
4229 | 703 | def test_calls_check_action_with_different_config(self, mock_call): | ||
4230 | 704 | mock_call.return_value = 0 | ||
4231 | 705 | |||
4232 | 706 | result = hooks.service_haproxy('check', 'some-config') | ||
4233 | 707 | |||
4234 | 708 | self.assertTrue(result) | ||
4235 | 709 | mock_call.assert_called_with(['/usr/sbin/haproxy', '-f', | ||
4236 | 710 | 'some-config', '-c']) | ||
4237 | 711 | |||
4238 | 712 | @patch('subprocess.call') | ||
4239 | 713 | def test_fails_to_check_config(self, mock_call): | ||
4240 | 714 | mock_call.return_value = 1 | ||
4241 | 715 | |||
4242 | 716 | result = hooks.service_haproxy('check') | ||
4243 | 717 | |||
4244 | 718 | self.assertFalse(result) | ||
4245 | 719 | |||
4246 | 720 | @patch('subprocess.call') | ||
4247 | 721 | def test_calls_different_actions(self, mock_call): | ||
4248 | 722 | mock_call.return_value = 0 | ||
4249 | 723 | |||
4250 | 724 | result = hooks.service_haproxy('foo') | ||
4251 | 725 | |||
4252 | 726 | self.assertTrue(result) | ||
4253 | 727 | mock_call.assert_called_with(['service', 'haproxy', 'foo']) | ||
4254 | 728 | |||
4255 | 729 | @patch('subprocess.call') | ||
4256 | 730 | def test_fails_to_call_different_actions(self, mock_call): | ||
4257 | 731 | mock_call.return_value = 1 | ||
4258 | 732 | |||
4259 | 733 | result = hooks.service_haproxy('foo') | ||
4260 | 734 | |||
4261 | 735 | self.assertFalse(result) | ||
4262 | 736 | |||
4263 | 737 | @patch('subprocess.call') | ||
4264 | 738 | def test_doesnt_call_actions_if_action_not_provided(self, mock_call): | ||
4265 | 739 | self.assertIsNone(hooks.service_haproxy()) | ||
4266 | 740 | self.assertFalse(mock_call.called) | ||
4267 | 741 | |||
4268 | 742 | @patch('subprocess.call') | ||
4269 | 743 | def test_doesnt_call_actions_if_config_is_none(self, mock_call): | ||
4270 | 744 | self.assertIsNone(hooks.service_haproxy('foo', None)) | ||
4271 | 745 | self.assertFalse(mock_call.called) | ||
4272 | 0 | 746 | ||
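The monitoring tests earlier in this file encode the expected behaviour of `create_monitoring_stanza`: return `None` when monitoring is disabled, and otherwise build a `listen` stanza whose `stats auth` password falls back to the stored one (or a freshly generated one) when the config still holds the `changeme` placeholder. A sketch satisfying those tests; the helper functions are simplified stand-ins for the patched charm helpers (`config_get`, `get_monitoring_password`, `pwgen`, `create_listen_stanza`), and the default service name is assumed:

```python
# All helpers below are simplified stand-ins for the charm helpers the
# tests patch; only create_monitoring_stanza mirrors the tested logic.

def config_get():
    # Stand-in: in the charm this reads the Juju service configuration.
    return {
        'enable_monitoring': True,
        'monitoring_allowed_cidr': '10.0.0.0/24',
        'monitoring_password': 'changeme',
        'monitoring_username': 'statsuser',
        'monitoring_stats_refresh': 123,
        'monitoring_port': 1234,
    }

def get_monitoring_password():
    # Stand-in: the charm reads a previously generated password from disk.
    return 'stored-pass'

def pwgen():
    # Stand-in: the charm generates a random password.
    return 'generated-pass'

def create_listen_stanza(name, host, port, options):
    lines = ['listen %s %s:%s' % (name, host, port)]
    lines.extend('    %s' % opt for opt in options)
    return '\n'.join(lines)

def create_monitoring_stanza(service_name='haproxy_monitoring'):
    config = config_get()
    if not config['enable_monitoring']:
        return None
    password = config['monitoring_password']
    if password == 'changeme':
        # Treat the placeholder as unset, as the tests imply: use the
        # stored password, generating a new one if none exists yet.
        password = get_monitoring_password() or pwgen()
    options = [
        'mode http',
        'acl allowed_cidr src %s' % config['monitoring_allowed_cidr'],
        'block unless allowed_cidr',
        'stats enable',
        'stats uri /',
        'stats realm Haproxy\\ Statistics',
        'stats auth %s:%s' % (config['monitoring_username'], password),
        'stats refresh %d' % config['monitoring_stats_refresh'],
    ]
    return create_listen_stanza(service_name, '0.0.0.0',
                                config['monitoring_port'], options)
```

With the stand-in config above, the stanza listens on `0.0.0.0:1234` and authenticates `statsuser` with the stored password.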
4273 | === added file 'hooks/tests/test_nrpe_hooks.py' | |||
4274 | --- hooks/tests/test_nrpe_hooks.py 1970-01-01 00:00:00 +0000 | |||
4275 | +++ hooks/tests/test_nrpe_hooks.py 2013-10-10 22:34:35 +0000 | |||
4276 | @@ -0,0 +1,24 @@ | |||
4277 | 1 | from testtools import TestCase | ||
4278 | 2 | from mock import call, patch, MagicMock | ||
4279 | 3 | |||
4280 | 4 | import hooks | ||
4281 | 5 | |||
4282 | 6 | |||
4283 | 7 | class NRPEHooksTest(TestCase): | ||
4284 | 8 | |||
4285 | 9 | @patch('hooks.install_nrpe_scripts') | ||
4286 | 10 | @patch('charmhelpers.contrib.charmsupport.nrpe.NRPE') | ||
4287 | 11 | def test_update_nrpe_config(self, nrpe, install_nrpe_scripts): | ||
4288 | 12 | nrpe_compat = MagicMock() | ||
4289 | 13 | nrpe_compat.checks = [MagicMock(shortname="haproxy"), | ||
4290 | 14 | MagicMock(shortname="haproxy_queue")] | ||
4291 | 15 | nrpe.return_value = nrpe_compat | ||
4292 | 16 | |||
4293 | 17 | hooks.update_nrpe_config() | ||
4294 | 18 | |||
4295 | 19 | self.assertEqual( | ||
4296 | 20 | nrpe_compat.mock_calls, | ||
4297 | 21 | [call.add_check('haproxy', 'Check HAProxy', 'check_haproxy.sh'), | ||
4298 | 22 | call.add_check('haproxy_queue', 'Check HAProxy queue depth', | ||
4299 | 23 | 'check_haproxy_queue_depth.sh'), | ||
4300 | 24 | call.write()]) | ||
4301 | 0 | 25 | ||
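The NRPE test asserts the exact call sequence `update_nrpe_config` makes: install the check scripts, register the two checks, then write the NRPE configuration. A sketch of a hook satisfying it, with a stand-in `NRPE` class in place of `charmhelpers.contrib.charmsupport.nrpe.NRPE` (the return value is added here only so the example is easy to inspect; the real hook returns nothing):

```python
class NRPE(object):
    # Stand-in for charmhelpers.contrib.charmsupport.nrpe.NRPE.
    def __init__(self):
        self.checks = []

    def add_check(self, shortname, description, check_cmd):
        self.checks.append((shortname, description, check_cmd))

    def write(self):
        # The real helper writes the check definitions out and restarts
        # the nagios-nrpe-server service.
        pass


def install_nrpe_scripts():
    # Stand-in: the charm ships check_haproxy.sh and
    # check_haproxy_queue_depth.sh into the nagios plugins directory.
    pass


def update_nrpe_config():
    install_nrpe_scripts()
    nrpe_compat = NRPE()
    nrpe_compat.add_check('haproxy', 'Check HAProxy', 'check_haproxy.sh')
    nrpe_compat.add_check('haproxy_queue', 'Check HAProxy queue depth',
                          'check_haproxy_queue_depth.sh')
    nrpe_compat.write()
    return nrpe_compat
```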
4302 | === added file 'hooks/tests/test_peer_hooks.py' | |||
4303 | --- hooks/tests/test_peer_hooks.py 1970-01-01 00:00:00 +0000 | |||
4304 | +++ hooks/tests/test_peer_hooks.py 2013-10-10 22:34:35 +0000 | |||
4305 | @@ -0,0 +1,200 @@ | |||
4306 | 1 | import os | ||
4307 | 2 | import yaml | ||
4308 | 3 | |||
4309 | 4 | from testtools import TestCase | ||
4310 | 5 | from mock import patch | ||
4311 | 6 | |||
4312 | 7 | import hooks | ||
4313 | 8 | from utils_for_tests import patch_open | ||
4314 | 9 | |||
4315 | 10 | |||
4316 | 11 | class PeerRelationTest(TestCase): | ||
4317 | 12 | |||
4318 | 13 | def setUp(self): | ||
4319 | 14 | super(PeerRelationTest, self).setUp() | ||
4320 | 15 | |||
4321 | 16 | self.relations_of_type = self.patch_hook("relations_of_type") | ||
4322 | 17 | self.log = self.patch_hook("log") | ||
4323 | 18 | self.unit_get = self.patch_hook("unit_get") | ||
4324 | 19 | |||
4325 | 20 | def patch_hook(self, hook_name): | ||
4326 | 21 | mock_controller = patch.object(hooks, hook_name) | ||
4327 | 22 | mock = mock_controller.start() | ||
4328 | 23 | self.addCleanup(mock_controller.stop) | ||
4329 | 24 | return mock | ||
4330 | 25 | |||
4331 | 26 | @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"}) | ||
4332 | 27 | def test_with_peer_same_services(self): | ||
4333 | 28 | self.unit_get.return_value = "1.2.4.5" | ||
4334 | 29 | self.relations_of_type.return_value = [ | ||
4335 | 30 | {"__unit__": "haproxy/1", | ||
4336 | 31 | "hostname": "haproxy-1", | ||
4337 | 32 | "private-address": "1.2.4.4", | ||
4338 | 33 | "all_services": yaml.dump([ | ||
4339 | 34 | {"service_name": "foo_service", | ||
4340 | 35 | "service_host": "0.0.0.0", | ||
4341 | 36 | "service_options": ["balance leastconn"], | ||
4342 | 37 | "service_port": 4242}, | ||
4343 | 38 | ]) | ||
4344 | 39 | } | ||
4345 | 40 | ] | ||
4346 | 41 | |||
4347 | 42 | services_dict = { | ||
4348 | 43 | "foo_service": { | ||
4349 | 44 | "service_name": "foo_service", | ||
4350 | 45 | "service_host": "0.0.0.0", | ||
4351 | 46 | "service_port": 4242, | ||
4352 | 47 | "service_options": ["balance leastconn"], | ||
4353 | 48 | "server_options": ["maxconn 4"], | ||
4354 | 49 | "servers": [("backend_1__8080", "1.2.3.4", | ||
4355 | 50 | 8080, ["maxconn 4"])], | ||
4356 | 51 | }, | ||
4357 | 52 | } | ||
4358 | 53 | |||
4359 | 54 | expected = { | ||
4360 | 55 | "foo_service": { | ||
4361 | 56 | "service_name": "foo_service", | ||
4362 | 57 | "service_host": "0.0.0.0", | ||
4363 | 58 | "service_port": 4242, | ||
4364 | 59 | "service_options": ["balance leastconn", | ||
4365 | 60 | "mode tcp", | ||
4366 | 61 | "option tcplog"], | ||
4367 | 62 | "servers": [ | ||
4368 | 63 | ("haproxy-1", "1.2.4.4", 4243, ["check"]), | ||
4369 | 64 | ("haproxy-2", "1.2.4.5", 4243, ["check", "backup"]) | ||
4370 | 65 | ], | ||
4371 | 66 | }, | ||
4372 | 67 | "foo_service_be": { | ||
4373 | 68 | "service_name": "foo_service_be", | ||
4374 | 69 | "service_host": "0.0.0.0", | ||
4375 | 70 | "service_port": 4243, | ||
4376 | 71 | "service_options": ["balance leastconn"], | ||
4377 | 72 | "server_options": ["maxconn 4"], | ||
4378 | 73 | "servers": [("backend_1__8080", "1.2.3.4", | ||
4379 | 74 | 8080, ["maxconn 4"])], | ||
4380 | 75 | }, | ||
4381 | 76 | } | ||
4382 | 77 | self.assertEqual(expected, hooks.apply_peer_config(services_dict)) | ||
4383 | 78 | |||
4384 | 79 | @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"}) | ||
4385 | 80 | def test_inherit_timeout_settings(self): | ||
4386 | 81 | self.unit_get.return_value = "1.2.4.5" | ||
4387 | 82 | self.relations_of_type.return_value = [ | ||
4388 | 83 | {"__unit__": "haproxy/1", | ||
4389 | 84 | "hostname": "haproxy-1", | ||
4390 | 85 | "private-address": "1.2.4.4", | ||
4391 | 86 | "all_services": yaml.dump([ | ||
4392 | 87 | {"service_name": "foo_service", | ||
4393 | 88 | "service_host": "0.0.0.0", | ||
4394 | 89 | "service_options": ["timeout server 5000"], | ||
4395 | 90 | "service_port": 4242}, | ||
4396 | 91 | ]) | ||
4397 | 92 | } | ||
4398 | 93 | ] | ||
4399 | 94 | |||
4400 | 95 | services_dict = { | ||
4401 | 96 | "foo_service": { | ||
4402 | 97 | "service_name": "foo_service", | ||
4403 | 98 | "service_host": "0.0.0.0", | ||
4404 | 99 | "service_port": 4242, | ||
4405 | 100 | "service_options": ["timeout server 5000"], | ||
4406 | 101 | "server_options": ["maxconn 4"], | ||
4407 | 102 | "servers": [("backend_1__8080", "1.2.3.4", | ||
4408 | 103 | 8080, ["maxconn 4"])], | ||
4409 | 104 | }, | ||
4410 | 105 | } | ||
4411 | 106 | |||
4412 | 107 | expected = { | ||
4413 | 108 | "foo_service": { | ||
4414 | 109 | "service_name": "foo_service", | ||
4415 | 110 | "service_host": "0.0.0.0", | ||
4416 | 111 | "service_port": 4242, | ||
4417 | 112 | "service_options": ["balance leastconn", | ||
4418 | 113 | "mode tcp", | ||
4419 | 114 | "option tcplog", | ||
4420 | 115 | "timeout server 5000"], | ||
4421 | 116 | "servers": [ | ||
4422 | 117 | ("haproxy-1", "1.2.4.4", 4243, ["check"]), | ||
4423 | 118 | ("haproxy-2", "1.2.4.5", 4243, ["check", "backup"]) | ||
4424 | 119 | ], | ||
4425 | 120 | }, | ||
4426 | 121 | "foo_service_be": { | ||
4427 | 122 | "service_name": "foo_service_be", | ||
4428 | 123 | "service_host": "0.0.0.0", | ||
4429 | 124 | "service_port": 4243, | ||
4430 | 125 | "service_options": ["timeout server 5000"], | ||
4431 | 126 | "server_options": ["maxconn 4"], | ||
4432 | 127 | "servers": [("backend_1__8080", "1.2.3.4", | ||
4433 | 128 | 8080, ["maxconn 4"])], | ||
4434 | 129 | }, | ||
4435 | 130 | } | ||
4436 | 131 | self.assertEqual(expected, hooks.apply_peer_config(services_dict)) | ||
4437 | 132 | |||
4438 | 133 | @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"}) | ||
4439 | 134 | def test_with_no_relation_data(self): | ||
4440 | 135 | self.unit_get.return_value = "1.2.4.5" | ||
4441 | 136 | self.relations_of_type.return_value = [] | ||
4442 | 137 | |||
4443 | 138 | services_dict = { | ||
4444 | 139 | "foo_service": { | ||
4445 | 140 | "service_name": "foo_service", | ||
4446 | 141 | "service_host": "0.0.0.0", | ||
4447 | 142 | "service_port": 4242, | ||
4448 | 143 | "service_options": ["balance leastconn"], | ||
4449 | 144 | "server_options": ["maxconn 4"], | ||
4450 | 145 | "servers": [("backend_1__8080", "1.2.3.4", | ||
4451 | 146 | 8080, ["maxconn 4"])], | ||
4452 | 147 | }, | ||
4453 | 148 | } | ||
4454 | 149 | |||
4455 | 150 | expected = services_dict | ||
4456 | 151 | self.assertEqual(expected, hooks.apply_peer_config(services_dict)) | ||
4457 | 152 | |||
4458 | 153 | @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"}) | ||
4459 | 154 | def test_with_missing_all_services(self): | ||
4460 | 155 | self.unit_get.return_value = "1.2.4.5" | ||
4461 | 156 | self.relations_of_type.return_value = [ | ||
4462 | 157 | {"__unit__": "haproxy/1", | ||
4463 | 158 | "hostname": "haproxy-1", | ||
4464 | 159 | "private-address": "1.2.4.4", | ||
4465 | 160 | } | ||
4466 | 161 | ] | ||
4467 | 162 | |||
4468 | 163 | services_dict = { | ||
4469 | 164 | "foo_service": { | ||
4470 | 165 | "service_name": "foo_service", | ||
4471 | 166 | "service_host": "0.0.0.0", | ||
4472 | 167 | "service_port": 4242, | ||
4473 | 168 | "service_options": ["balance leastconn"], | ||
4474 | 169 | "server_options": ["maxconn 4"], | ||
4475 | 170 | "servers": [("backend_1__8080", "1.2.3.4", | ||
4476 | 171 | 8080, ["maxconn 4"])], | ||
4477 | 172 | }, | ||
4478 | 173 | } | ||
4479 | 174 | |||
4480 | 175 | expected = services_dict | ||
4481 | 176 | self.assertEqual(expected, hooks.apply_peer_config(services_dict)) | ||
4482 | 177 | |||
4483 | 178 | @patch('hooks.create_listen_stanza') | ||
4484 | 179 | def test_writes_service_config(self, create_listen_stanza): | ||
4485 | 180 | create_listen_stanza.return_value = 'some content' | ||
4486 | 181 | services_dict = { | ||
4487 | 182 | 'foo': { | ||
4488 | 183 | 'service_name': 'bar', | ||
4489 | 184 | 'service_host': 'some-host', | ||
4490 | 185 | 'service_port': 'some-port', | ||
4491 | 186 | 'service_options': 'some-options', | ||
4492 | 187 | 'servers': (1, 2), | ||
4493 | 188 | }, | ||
4494 | 189 | } | ||
4495 | 190 | |||
4496 | 191 | with patch.object(os.path, "exists") as exists: | ||
4497 | 192 | exists.return_value = True | ||
4498 | 193 | with patch_open() as (mock_open, mock_file): | ||
4499 | 194 | hooks.write_service_config(services_dict) | ||
4500 | 195 | |||
4501 | 196 | create_listen_stanza.assert_called_with( | ||
4502 | 197 | 'bar', 'some-host', 'some-port', 'some-options', (1, 2)) | ||
4503 | 198 | mock_open.assert_called_with( | ||
4504 | 199 | '/var/run/haproxy/bar.service', 'w') | ||
4505 | 200 | mock_file.write.assert_called_with('some content') | ||
4506 | 0 | 201 | ||
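The peer tests above exercise the behaviour described in the change summary: with multiple haproxy units, the public stanza routes all traffic to the first unit in unit-name order and marks the remaining units as `backup`, while the real backends move to a shadow `<name>_be` stanza one port up, so `maxconn` is enforced by a single unit. A rough self-contained sketch of that transformation, inferred from the tests; the relation plumbing (`relations_of_type`, `unit_get`, `JUJU_UNIT_NAME`) is replaced by explicit arguments, and `all_services` is passed pre-parsed rather than as the YAML string the relation actually carries:

```python
# Sketch inferred from the tests above, not the charm's actual code.
def apply_peer_config(services_dict, peers, local_hostname, local_address):
    peer_hosts = []
    peer_service_names = set()
    for peer in peers:
        if 'all_services' not in peer:
            continue
        peer_hosts.append((peer['hostname'], peer['private-address']))
        for svc in peer['all_services']:
            peer_service_names.add(svc['service_name'])
    if not peer_hosts:
        # No usable peer data: leave the services untouched.
        return services_dict

    result = {}
    for name, service in services_dict.items():
        if name not in peer_service_names:
            result[name] = service
            continue
        be_port = service['service_port'] + 1
        # The real backends move to a shadow '<name>_be' stanza one port
        # up, keeping the original options and servers.
        result[name + '_be'] = dict(
            service, service_name=name + '_be', service_port=be_port)
        # The public stanza balances over the haproxy units themselves:
        # the first unit in name order takes traffic, the rest are backup.
        units = sorted(peer_hosts + [(local_hostname, local_address)])
        servers = [(units[0][0], units[0][1], be_port, ['check'])]
        servers += [(h, a, be_port, ['check', 'backup'])
                    for h, a in units[1:]]
        result[name] = {
            'service_name': name,
            'service_host': service['service_host'],
            'service_port': service['service_port'],
            'service_options': (
                ['balance leastconn', 'mode tcp', 'option tcplog'] +
                [o for o in service['service_options']
                 if o.startswith('timeout')]),
            'servers': servers,
        }
    return result
```

Note how `timeout` options from the original service survive into the rewritten frontend stanza, matching the `test_inherit_timeout_settings` expectations.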
4507 | === added file 'hooks/tests/test_reverseproxy_hooks.py' | |||
4508 | --- hooks/tests/test_reverseproxy_hooks.py 1970-01-01 00:00:00 +0000 | |||
4509 | +++ hooks/tests/test_reverseproxy_hooks.py 2013-10-10 22:34:35 +0000 | |||
4510 | @@ -0,0 +1,345 @@ | |||
4511 | 1 | from testtools import TestCase | ||
4512 | 2 | from mock import patch, call | ||
4513 | 3 | |||
4514 | 4 | import hooks | ||
4515 | 5 | |||
4516 | 6 | |||
4517 | 7 | class ReverseProxyRelationTest(TestCase): | ||
4518 | 8 | |||
4519 | 9 | def setUp(self): | ||
4520 | 10 | super(ReverseProxyRelationTest, self).setUp() | ||
4521 | 11 | |||
4522 | 12 | self.relations_of_type = self.patch_hook("relations_of_type") | ||
4523 | 13 | self.get_config_services = self.patch_hook("get_config_services") | ||
4524 | 14 | self.log = self.patch_hook("log") | ||
4525 | 15 | self.write_service_config = self.patch_hook("write_service_config") | ||
4526 | 16 | self.apply_peer_config = self.patch_hook("apply_peer_config") | ||
4527 | 17 | self.apply_peer_config.side_effect = lambda value: value | ||
4528 | 18 | |||
4529 | 19 | def patch_hook(self, hook_name): | ||
4530 | 20 | mock_controller = patch.object(hooks, hook_name) | ||
4531 | 21 | mock = mock_controller.start() | ||
4532 | 22 | self.addCleanup(mock_controller.stop) | ||
4533 | 23 | return mock | ||
4534 | 24 | |||
4535 | 25 | def test_relation_data_returns_none(self): | ||
4536 | 26 | self.get_config_services.return_value = { | ||
4537 | 27 | "service": { | ||
4538 | 28 | "service_name": "service", | ||
4539 | 29 | }, | ||
4540 | 30 | } | ||
4541 | 31 | self.relations_of_type.return_value = [] | ||
4542 | 32 | self.assertIs(None, hooks.create_services()) | ||
4543 | 33 | self.log.assert_called_once_with("No backend servers, exiting.") | ||
4544 | 34 | self.write_service_config.assert_not_called() | ||
4545 | 35 | |||
4546 | 36 | def test_relation_data_returns_no_relations(self): | ||
4547 | 37 | self.get_config_services.return_value = { | ||
4548 | 38 | "service": { | ||
4549 | 39 | "service_name": "service", | ||
4550 | 40 | }, | ||
4551 | 41 | } | ||
4552 | 42 | self.relations_of_type.return_value = [] | ||
4553 | 43 | self.assertIs(None, hooks.create_services()) | ||
4554 | 44 | self.log.assert_called_once_with("No backend servers, exiting.") | ||
4555 | 45 | self.write_service_config.assert_not_called() | ||
4556 | 46 | |||
4557 | 47 | def test_relation_no_services(self): | ||
4558 | 48 | self.get_config_services.return_value = {} | ||
4559 | 49 | self.relations_of_type.return_value = [ | ||
4560 | 50 | {"port": 4242, | ||
4561 | 51 | "__unit__": "foo/0", | ||
4562 | 52 | "hostname": "backend.1", | ||
4563 | 53 | "private-address": "1.2.3.4"}, | ||
4564 | 54 | ] | ||
4565 | 55 | self.assertIs(None, hooks.create_services()) | ||
4566 | 56 | self.log.assert_called_once_with("No services configured, exiting.") | ||
4567 | 57 | self.write_service_config.assert_not_called() | ||
4568 | 58 | |||
4569 | 59 | def test_no_port_in_relation_data(self): | ||
4570 | 60 | self.get_config_services.return_value = { | ||
4571 | 61 | "service": { | ||
4572 | 62 | "service_name": "service", | ||
4573 | 63 | }, | ||
4574 | 64 | } | ||
4575 | 65 | self.relations_of_type.return_value = [ | ||
4576 | 66 | {"private-address": "1.2.3.4", | ||
4577 | 67 | "__unit__": "foo/0"}, | ||
4578 | 68 | ] | ||
4579 | 69 | self.assertIs(None, hooks.create_services()) | ||
4580 | 70 | self.log.assert_has_calls([call( | ||
4581 | 71 | "No port in relation data for 'foo/0', skipping.")]) | ||
4582 | 72 | self.write_service_config.assert_not_called() | ||
4583 | 73 | |||
4584 | 74 | def test_no_private_address_in_relation_data(self): | ||
4585 | 75 | self.get_config_services.return_value = { | ||
4586 | 76 | "service": { | ||
4587 | 77 | "service_name": "service", | ||
4588 | 78 | }, | ||
4589 | 79 | } | ||
4590 | 80 | self.relations_of_type.return_value = [ | ||
4591 | 81 | {"port": 4242, | ||
4592 | 82 | "__unit__": "foo/0"}, | ||
4593 | 83 | ] | ||
4594 | 84 | self.assertIs(None, hooks.create_services()) | ||
4595 | 85 | self.log.assert_has_calls([call( | ||
4596 | 86 | "No private-address in relation data for 'foo/0', skipping.")]) | ||
4597 | 87 | self.write_service_config.assert_not_called() | ||
4598 | 88 | |||
4599 | 89 | def test_no_hostname_in_relation_data(self): | ||
4600 | 90 | self.get_config_services.return_value = { | ||
4601 | 91 | "service": { | ||
4602 | 92 | "service_name": "service", | ||
4603 | 93 | }, | ||
4604 | 94 | } | ||
4605 | 95 | self.relations_of_type.return_value = [ | ||
4606 | 96 | {"port": 4242, | ||
4607 | 97 | "private-address": "1.2.3.4", | ||
4608 | 98 | "__unit__": "foo/0"}, | ||
4609 | 99 | ] | ||
4610 | 100 | self.assertIs(None, hooks.create_services()) | ||
4611 | 101 | self.log.assert_has_calls([call( | ||
4612 | 102 | "No hostname in relation data for 'foo/0', skipping.")]) | ||
4613 | 103 | self.write_service_config.assert_not_called() | ||
4614 | 104 | |||
4615 | 105 | def test_relation_unknown_service(self): | ||
4616 | 106 | self.get_config_services.return_value = { | ||
4617 | 107 | "service": { | ||
4618 | 108 | "service_name": "service", | ||
4619 | 109 | }, | ||
4620 | 110 | } | ||
4621 | 111 | self.relations_of_type.return_value = [ | ||
4622 | 112 | {"port": 4242, | ||
4623 | 113 | "hostname": "backend.1", | ||
4624 | 114 | "service_name": "invalid", | ||
4625 | 115 | "private-address": "1.2.3.4", | ||
4626 | 116 | "__unit__": "foo/0"}, | ||
4627 | 117 | ] | ||
4628 | 118 | self.assertIs(None, hooks.create_services()) | ||
4629 | 119 | self.log.assert_has_calls([call( | ||
4630 | 120 | "Service 'invalid' does not exist.")]) | ||
4631 | 121 | self.write_service_config.assert_not_called() | ||
4632 | 122 | |||
4633 | 123 | def test_no_relation_but_has_servers_from_config(self): | ||
4634 | 124 | self.get_config_services.return_value = { | ||
4635 | 125 | None: { | ||
4636 | 126 | "service_name": "service", | ||
4637 | 127 | }, | ||
4638 | 128 | "service": { | ||
4639 | 129 | "service_name": "service", | ||
4640 | 130 | "servers": [ | ||
4641 | 131 | ("legacy-backend", "1.2.3.1", 4242, ["maxconn 42"]), | ||
4642 | 132 | ] | ||
4643 | 133 | }, | ||
4644 | 134 | } | ||
4645 | 135 | self.relations_of_type.return_value = [] | ||
4646 | 136 | |||
4647 | 137 | expected = { | ||
4648 | 138 | 'service': { | ||
4649 | 139 | 'service_name': 'service', | ||
4650 | 140 | 'servers': [ | ||
4651 | 141 | ("legacy-backend", "1.2.3.1", 4242, ["maxconn 42"]), | ||
4652 | 142 | ], | ||
4653 | 143 | }, | ||
4654 | 144 | } | ||
4655 | 145 | self.assertEqual(expected, hooks.create_services()) | ||
4656 | 146 | self.write_service_config.assert_called_with(expected) | ||
4657 | 147 | |||
4658 | 148 | def test_relation_default_service(self): | ||
4659 | 149 | self.get_config_services.return_value = { | ||
4660 | 150 | None: { | ||
4661 | 151 | "service_name": "service", | ||
4662 | 152 | }, | ||
4663 | 153 | "service": { | ||
4664 | 154 | "service_name": "service", | ||
4665 | 155 | }, | ||
4666 | 156 | } | ||
4667 | 157 | self.relations_of_type.return_value = [ | ||
4668 | 158 | {"port": 4242, | ||
4669 | 159 | "hostname": "backend.1", | ||
4670 | 160 | "private-address": "1.2.3.4", | ||
4671 | 161 | "__unit__": "foo/0"}, | ||
4672 | 162 | ] | ||
4673 | 163 | |||
4674 | 164 | expected = { | ||
4675 | 165 | 'service': { | ||
4676 | 166 | 'service_name': 'service', | ||
4677 | 167 | 'servers': [('foo-0-4242', '1.2.3.4', 4242, [])], | ||
4678 | 168 | }, | ||
4679 | 169 | } | ||
4680 | 170 | self.assertEqual(expected, hooks.create_services()) | ||
4681 | 171 | self.write_service_config.assert_called_with(expected) | ||
4682 | 172 | |||
4683 | 173 | def test_with_service_options(self): | ||
4684 | 174 | self.get_config_services.return_value = { | ||
4685 | 175 | None: { | ||
4686 | 176 | "service_name": "service", | ||
4687 | 177 | }, | ||
4688 | 178 | "service": { | ||
4689 | 179 | "service_name": "service", | ||
4690 | 180 | "server_options": ["maxconn 4"], | ||
4691 | 181 | }, | ||
4692 | 182 | } | ||
4693 | 183 | self.relations_of_type.return_value = [ | ||
4694 | 184 | {"port": 4242, | ||
4695 | 185 | "hostname": "backend.1", | ||
4696 | 186 | "private-address": "1.2.3.4", | ||
4697 | 187 | "__unit__": "foo/0"}, | ||
4698 | 188 | ] | ||
4699 | 189 | |||
4700 | 190 | expected = { | ||
4701 | 191 | 'service': { | ||
4702 | 192 | 'service_name': 'service', | ||
4703 | 193 | 'server_options': ["maxconn 4"], | ||
4704 | 194 | 'servers': [('foo-0-4242', '1.2.3.4', | ||
4705 | 195 | 4242, ["maxconn 4"])], | ||
4706 | 196 | }, | ||
4707 | 197 | } | ||
4708 | 198 | self.assertEqual(expected, hooks.create_services()) | ||
4709 | 199 | self.write_service_config.assert_called_with(expected) | ||
4710 | 200 | |||
4711 | 201 | def test_with_service_name(self): | ||
4712 | 202 | self.get_config_services.return_value = { | ||
4713 | 203 | None: { | ||
4714 | 204 | "service_name": "service", | ||
4715 | 205 | }, | ||
4716 | 206 | "foo_service": { | ||
4717 | 207 | "service_name": "foo_service", | ||
4718 | 208 | "server_options": ["maxconn 4"], | ||
4719 | 209 | }, | ||
4720 | 210 | } | ||
4721 | 211 | self.relations_of_type.return_value = [ | ||
4722 | 212 | {"port": 4242, | ||
4723 | 213 | "hostname": "backend.1", | ||
4724 | 214 | "service_name": "foo_service", | ||
4725 | 215 | "private-address": "1.2.3.4", | ||
4726 | 216 | "__unit__": "foo/0"}, | ||
4727 | 217 | ] | ||
4728 | 218 | |||
4729 | 219 | expected = { | ||
4730 | 220 | 'foo_service': { | ||
4731 | 221 | 'service_name': 'foo_service', | ||
4732 | 222 | 'server_options': ["maxconn 4"], | ||
4733 | 223 | 'servers': [('foo-0-4242', '1.2.3.4', | ||
4734 | 224 | 4242, ["maxconn 4"])], | ||
4735 | 225 | }, | ||
4736 | 226 | } | ||
4737 | 227 | self.assertEqual(expected, hooks.create_services()) | ||
4738 | 228 | self.write_service_config.assert_called_with(expected) | ||
4739 | 229 | |||
4740 | 230 | def test_no_service_name_unit_name_match_service_name(self): | ||
4741 | 231 | self.get_config_services.return_value = { | ||
4742 | 232 | None: { | ||
4743 | 233 | "service_name": "foo_service", | ||
4744 | 234 | }, | ||
4745 | 235 | "foo_service": { | ||
4746 | 236 | "service_name": "foo_service", | ||
4747 | 237 | "server_options": ["maxconn 4"], | ||
4748 | 238 | }, | ||
4749 | 239 | } | ||
4750 | 240 | self.relations_of_type.return_value = [ | ||
4751 | 241 | {"port": 4242, | ||
4752 | 242 | "hostname": "backend.1", | ||
4753 | 243 | "private-address": "1.2.3.4", | ||
4754 | 244 | "__unit__": "foo/1"}, | ||
4755 | 245 | ] | ||
4756 | 246 | |||
4757 | 247 | expected = { | ||
4758 | 248 | 'foo_service': { | ||
4759 | 249 | 'service_name': 'foo_service', | ||
4760 | 250 | 'server_options': ["maxconn 4"], | ||
4761 | 251 | 'servers': [('foo-1-4242', '1.2.3.4', | ||
4762 | 252 | 4242, ["maxconn 4"])], | ||
4763 | 253 | }, | ||
4764 | 254 | } | ||
4765 | 255 | self.assertEqual(expected, hooks.create_services()) | ||
4766 | 256 | self.write_service_config.assert_called_with(expected) | ||
4767 | 257 | |||
4768 | 258 | def test_with_sitenames_match_service_name(self): | ||
4769 | 259 | self.get_config_services.return_value = { | ||
4770 | 260 | None: { | ||
4771 | 261 | "service_name": "service", | ||
4772 | 262 | }, | ||
4773 | 263 | "foo_srv": { | ||
4774 | 264 | "service_name": "foo_srv", | ||
4775 | 265 | "server_options": ["maxconn 4"], | ||
4776 | 266 | }, | ||
4777 | 267 | } | ||
4778 | 268 | self.relations_of_type.return_value = [ | ||
4779 | 269 | {"port": 4242, | ||
4780 | 270 | "hostname": "backend.1", | ||
4781 | 271 | "sitenames": "foo_srv bar_srv", | ||
4782 | 272 | "private-address": "1.2.3.4", | ||
4783 | 273 | "__unit__": "foo/0"}, | ||
4784 | 274 | ] | ||
4785 | 275 | |||
4786 | 276 | expected = { | ||
4787 | 277 | 'foo_srv': { | ||
4788 | 278 | 'service_name': 'foo_srv', | ||
4789 | 279 | 'server_options': ["maxconn 4"], | ||
4790 | 280 | 'servers': [('foo-0-4242', '1.2.3.4', | ||
4791 | 281 | 4242, ["maxconn 4"])], | ||
4792 | 282 | }, | ||
4793 | 283 | } | ||
4794 | 284 | self.assertEqual(expected, hooks.create_services()) | ||
4795 | 285 | self.write_service_config.assert_called_with(expected) | ||
4796 | 286 | |||
4797 | 287 | def test_with_juju_services_match_service_name(self): | ||
4798 | 288 | self.get_config_services.return_value = { | ||
4799 | 289 | None: { | ||
4800 | 290 | "service_name": "service", | ||
4801 | 291 | }, | ||
4802 | 292 | "foo_service": { | ||
4803 | 293 | "service_name": "foo_service", | ||
4804 | 294 | "server_options": ["maxconn 4"], | ||
4805 | 295 | }, | ||
4806 | 296 | } | ||
4807 | 297 | self.relations_of_type.return_value = [ | ||
4808 | 298 | {"port": 4242, | ||
4809 | 299 | "hostname": "backend.1", | ||
4810 | 300 | "private-address": "1.2.3.4", | ||
4811 | 301 | "__unit__": "foo/1"}, | ||
4812 | 302 | ] | ||
4813 | 303 | |||
4814 | 304 | expected = { | ||
4815 | 305 | 'foo_service': { | ||
4816 | 306 | 'service_name': 'foo_service', | ||
4817 | 307 | 'server_options': ["maxconn 4"], | ||
4818 | 308 | 'servers': [('foo-1-4242', '1.2.3.4', | ||
4819 | 309 | 4242, ["maxconn 4"])], | ||
4820 | 310 | }, | ||
4821 | 311 | } | ||
4822 | 312 | |||
4823 | 313 | result = hooks.create_services() | ||
4824 | 314 | |||
4825 | 315 | self.assertEqual(expected, result) | ||
4826 | 316 | self.write_service_config.assert_called_with(expected) | ||
4827 | 317 | |||
4828 | 318 | def test_with_sitenames_no_match_but_unit_name(self): | ||
4829 | 319 | self.get_config_services.return_value = { | ||
4830 | 320 | None: { | ||
4831 | 321 | "service_name": "service", | ||
4832 | 322 | }, | ||
4833 | 323 | "foo": { | ||
4834 | 324 | "service_name": "foo", | ||
4835 | 325 | "server_options": ["maxconn 4"], | ||
4836 | 326 | }, | ||
4837 | 327 | } | ||
4838 | 328 | self.relations_of_type.return_value = [ | ||
4839 | 329 | {"port": 4242, | ||
4840 | 330 | "hostname": "backend.1", | ||
4841 | 331 | "sitenames": "bar_service baz_service", | ||
4842 | 332 | "private-address": "1.2.3.4", | ||
4843 | 333 | "__unit__": "foo/0"}, | ||
4844 | 334 | ] | ||
4845 | 335 | |||
4846 | 336 | expected = { | ||
4847 | 337 | 'foo': { | ||
4848 | 338 | 'service_name': 'foo', | ||
4849 | 339 | 'server_options': ["maxconn 4"], | ||
4850 | 340 | 'servers': [('foo-0-4242', '1.2.3.4', | ||
4851 | 341 | 4242, ["maxconn 4"])], | ||
4852 | 342 | }, | ||
4853 | 343 | } | ||
4854 | 344 | self.assertEqual(expected, hooks.create_services()) | ||
4855 | 345 | self.write_service_config.assert_called_with(expected) | ||
4856 | 0 | 346 | ||
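The tests above repeatedly assert the same naming convention: a relation unit such as `foo/0` advertising port 4242 becomes the server entry `('foo-0-4242', '1.2.3.4', 4242, server_options)`. A minimal sketch of that mapping — a hypothetical helper for illustration, not the charm's actual `create_services` implementation:

```python
def relation_to_server(relation_data, server_options=None):
    """Build a (name, address, port, options) server tuple from the
    relation data dict used in the tests above (illustrative only)."""
    unit = relation_data["__unit__"]      # e.g. "foo/0"
    port = relation_data["port"]          # e.g. 4242
    # "foo/0" + 4242 -> "foo-0-4242", as the expected dicts assert
    server_name = "%s-%s" % (unit.replace("/", "-"), port)
    return (server_name,
            relation_data["private-address"],
            port,
            list(server_options or []))


if __name__ == "__main__":
    entry = relation_to_server(
        {"port": 4242, "private-address": "1.2.3.4", "__unit__": "foo/0"},
        server_options=["maxconn 4"])
    print(entry)  # ('foo-0-4242', '1.2.3.4', 4242, ['maxconn 4'])
```

Note how the service-level `server_options` list is copied onto every server entry, which is why `test_with_service_options` expects `["maxconn 4"]` inside each server tuple.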
=== added file 'hooks/tests/test_website_hooks.py'
--- hooks/tests/test_website_hooks.py	1970-01-01 00:00:00 +0000
+++ hooks/tests/test_website_hooks.py	2013-10-10 22:34:35 +0000
@@ -0,0 +1,145 @@
from testtools import TestCase
from mock import patch, call

import hooks


class WebsiteRelationTest(TestCase):

    def setUp(self):
        super(WebsiteRelationTest, self).setUp()
        self.notify_website = self.patch_hook("notify_website")

    def patch_hook(self, hook_name):
        mock_controller = patch.object(hooks, hook_name)
        mock = mock_controller.start()
        self.addCleanup(mock_controller.stop)
        return mock

    def test_website_interface_none(self):
        self.assertEqual(None, hooks.website_interface(hook_name=None))
        self.notify_website.assert_not_called()

    def test_website_interface_joined(self):
        hooks.website_interface(hook_name="joined")
        self.notify_website.assert_called_once_with(
            changed=False, relation_ids=(None,))

    def test_website_interface_changed(self):
        hooks.website_interface(hook_name="changed")
        self.notify_website.assert_called_once_with(
            changed=True, relation_ids=(None,))

class NotifyRelationTest(TestCase):

    def setUp(self):
        super(NotifyRelationTest, self).setUp()

        self.relations_for_id = self.patch_hook("relations_for_id")
        self.relation_set = self.patch_hook("relation_set")
        self.config_get = self.patch_hook("config_get")
        self.get_relation_ids = self.patch_hook("get_relation_ids")
        self.get_hostname = self.patch_hook("get_hostname")
        self.log = self.patch_hook("log")
        self.get_config_services = self.patch_hook("get_config_service")

    def patch_hook(self, hook_name):
        mock_controller = patch.object(hooks, hook_name)
        mock = mock_controller.start()
        self.addCleanup(mock_controller.stop)
        return mock

    def test_notify_website_relation_no_relation_ids(self):
        # Stub the relation ids before exercising the hook, so the
        # empty tuple is what notify_relation actually sees.
        self.get_relation_ids.return_value = ()

        hooks.notify_relation("website")

        self.relation_set.assert_not_called()
        self.get_relation_ids.assert_called_once_with("website")

    def test_notify_website_relation_with_default_relation(self):
        self.get_relation_ids.return_value = ()
        self.get_hostname.return_value = "foo.local"
        self.relations_for_id.return_value = [{}]
        self.config_get.return_value = {"services": ""}

        hooks.notify_relation("website", relation_ids=(None,))

        self.get_hostname.assert_called_once_with()
        self.relations_for_id.assert_called_once_with(None)
        self.relation_set.assert_called_once_with(
            relation_id=None, port="80", hostname="foo.local",
            all_services="")
        self.get_relation_ids.assert_not_called()

    def test_notify_website_relation_with_relations(self):
        self.get_relation_ids.return_value = ("website:1",
                                              "website:2")
        self.get_hostname.return_value = "foo.local"
        self.relations_for_id.return_value = [{}]
        self.config_get.return_value = {"services": ""}

        hooks.notify_relation("website")

        self.get_hostname.assert_called_once_with()
        self.get_relation_ids.assert_called_once_with("website")
        self.relations_for_id.assert_has_calls([
            call("website:1"),
            call("website:2"),
        ])

        self.relation_set.assert_has_calls([
            call(relation_id="website:1", port="80", hostname="foo.local",
                 all_services=""),
            call(relation_id="website:2", port="80", hostname="foo.local",
                 all_services=""),
        ])

    def test_notify_website_relation_with_different_sitenames(self):
        self.get_relation_ids.return_value = ("website:1",)
        self.get_hostname.return_value = "foo.local"
        self.relations_for_id.return_value = [{"service_name": "foo"},
                                              {"service_name": "bar"}]
        self.config_get.return_value = {"services": ""}

        hooks.notify_relation("website")

        self.get_hostname.assert_called_once_with()
        self.get_relation_ids.assert_called_once_with("website")
        self.relations_for_id.assert_has_calls([
            call("website:1"),
        ])

        self.relation_set.assert_has_calls([
            call(
                relation_id="website:1", port="80", hostname="foo.local",
                all_services=""),
        ])
        self.log.assert_called_once_with(
            "Remote units requested than a single service name."
            "Falling back to default host/port.")

    def test_notify_website_relation_with_same_sitenames(self):
        self.get_relation_ids.return_value = ("website:1",)
        self.get_hostname.side_effect = ["foo.local", "bar.local"]
        self.relations_for_id.return_value = [{"service_name": "bar"},
                                              {"service_name": "bar"}]
        self.config_get.return_value = {"services": ""}
        self.get_config_services.return_value = {"service_host": "bar.local",
                                                 "service_port": "4242"}

        hooks.notify_relation("website")

        self.get_hostname.assert_has_calls([
            call(),
            call("bar.local")])
        self.get_relation_ids.assert_called_once_with("website")
        self.relations_for_id.assert_has_calls([
            call("website:1"),
        ])

        self.relation_set.assert_has_calls([
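Both test classes in this file define an identical `patch_hook` helper. A hedged sketch of how such a helper could be shared — the branch's `hooks/tests/utils_for_tests.py` would be a natural home, though the mixin name and `target` attribute below are illustrative, not the charm's actual code (the sketch uses the stdlib `unittest.mock` rather than the external `mock` package the charm imports):

```python
from unittest import mock


class HookPatcherMixin(object):
    """Patch attributes on `target` for the duration of a test.

    Subclasses set `target` to the module under test; the charm's
    tests would point it at their `hooks` module. Hypothetical
    refactoring sketch, for illustration only.
    """

    target = None  # module (or object) whose attributes get patched

    def patch_hook(self, hook_name):
        # Start the patch immediately and register the undo with the
        # test's cleanup machinery, exactly as the inline helpers do.
        mock_controller = mock.patch.object(self.target, hook_name)
        patched = mock_controller.start()
        self.addCleanup(mock_controller.stop)
        return patched
```

Using `start()`/`stop()` with `addCleanup` (rather than a decorator per test) keeps each mock available as an instance attribute and guarantees it is unpatched even when a test fails mid-way.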
Reviewing this now.
-Juan