Merge lp:~alexlist/charms/trusty/swift-storage/nrpe-rebased into lp:~openstack-charmers-archive/charms/trusty/swift-storage/next
Status: Needs review
Proposed branch: lp:~alexlist/charms/trusty/swift-storage/nrpe-rebased
Merge into: lp:~openstack-charmers-archive/charms/trusty/swift-storage/next
Diff against target: 931 lines (+691/-32), 15 files modified:
charm-helpers-hooks.yaml (+1/-0) config.yaml (+14/-0) files/nagios/check_swift_service (+25/-0) files/nagios/check_swift_storage.py (+136/-0) files/sudo/swift-storage (+1/-0) hooks/charmhelpers/contrib/charmsupport/nrpe.py (+219/-0) hooks/charmhelpers/contrib/charmsupport/volumes.py (+159/-0) hooks/swift_storage_hooks.py (+58/-2) hooks/swift_storage_utils.py (+33/-2) metadata.yaml (+3/-0) revision (+1/-1) templates/050-swift-storage (+24/-0) templates/rsyncd.conf (+5/-23) unit_tests/test_swift_storage_relations.py (+10/-3) unit_tests/test_swift_storage_utils.py (+2/-1)
To merge this branch: bzr merge lp:~alexlist/charms/trusty/swift-storage/nrpe-rebased
Related bugs:
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
James Page | Needs Fixing | ||
Review via email: mp+231406@code.launchpad.net |
This proposal supersedes a proposal from 2014-07-31.
Commit message
Description of the change
This change adds nrpe-external-master support to the swift-storage charm.
Contains:
- re-sync of charmsupport
- renamed basic checks for swift ring hashes and replication
- status checks for swift storage services
- sudo permissions for above status checks
Rebase on -next as of 2014-08-19
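As a quick reference for reviewers, the warn/crit logic behind the default `nagios-check-params` value (`-m -r 60 180 10 20`) can be modelled standalone like this. `replication_status` is an illustrative distillation, not the plugin's actual code (the real implementation is `files/nagios/check_swift_storage.py` in the preview diff):

```python
# Illustrative model of the replication thresholds used by the new
# check_swift_storage.py Nagios plugin: limits are
# (lag_warn, lag_crit, failures_warn, failures_crit), matching the
# charm's default "-r 60 180 10 20".
STATUS_OK, STATUS_WARN, STATUS_CRIT = 0, 1, 2

def replication_status(lag_seconds, failures, limits):
    results = []
    if lag_seconds >= limits[1]:
        results.append((STATUS_CRIT,
                        "replication lag is {} seconds".format(lag_seconds)))
    elif lag_seconds >= limits[0]:
        results.append((STATUS_WARN,
                        "replication lag is {} seconds".format(lag_seconds)))
    if failures >= limits[3]:
        results.append((STATUS_CRIT,
                        "{} replication failures".format(failures)))
    elif failures >= limits[2]:
        results.append((STATUS_WARN,
                        "{} replication failures".format(failures)))
    # No findings means the storage node reports healthy replication.
    return results or [(STATUS_OK, "OK")]
```

With the defaults, a 70-second lag yields a WARNING and a 200-second lag with 25 failures yields two CRITICALs, which the plugin then joins into a single Nagios status line.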
James Page (james-page) wrote (posted in a previous version of this proposal):
Jacek Nykis (jacekn) wrote (posted in a previous version of this proposal):
> This merge proposal appears to have more than just the nrpe support - lots of
> changes around how rsync is managed as well?
>
> Was this intentional? If so I would prefer that they were split out into two
> MPs to make it easier to test/review.
Hi James,
Yes, it was intentional. The swift-storage charm assumed full control over the rsync config, which could lead to broken configuration when used with subordinate charms.
So the rsync management changes are a prerequisite for n-e-m support: n-e-m needs to be able to add rsync stanzas to the config without destroying the swift config, and the swift charm needs to be able to update its config without breaking the n-e-m config.
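The fragment approach Jacek describes can be sketched as follows. `assemble_rsyncd_conf` and the fragment names here are hypothetical, for illustration only; the actual change renders a slimmed-down `templates/rsyncd.conf` plus a `templates/050-swift-storage` fragment, per the file list above:

```python
# Hypothetical sketch of fragment-based rsync configuration: each charm
# owns one file in an rsync.d-style directory, and the effective config
# is the sorted concatenation, so the swift-storage charm can rewrite
# its own stanza without clobbering a subordinate's.
def assemble_rsyncd_conf(fragments):
    """fragments: mapping of fragment file name -> rsync stanza text."""
    parts = [fragments[name].rstrip() for name in sorted(fragments)]
    return "\n\n".join(parts) + "\n"

# Two independently owned fragments; numeric prefixes fix the ordering.
conf = assemble_rsyncd_conf({
    "050-swift-storage": "[object]\npath = /srv/node",
    "010-nrpe": "[nagios]\npath = /var/lib/nagios",
})
```

The key property is that updating one fragment file is atomic with respect to the others, so neither charm can destroy the other's stanzas.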
Gareth Woolridge (moon127) wrote (posted in a previous version of this proposal):
James,
Rebase of Jacek's earlier work to apply cleanly to /next as we needed to pull in some other updates to PS4.
Can you respond to Jacek's earlier comment above? The nagios-
James Page (james-page) wrote:
OK - sorry it took me a while to get back to this. Currently upgrades are broken, and one of the unit tests fails due to incorrect ordering in the expected calls list; other than that, LGTM.
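The unit test failure James refers to is the kind `mock` produces when an expected calls list is ordered differently from the actual calls. A minimal standalone illustration (the wget URLs and call shapes here are made up, not the charm's actual test data):

```python
from unittest import mock

# Record two calls in the order the code under test would make them.
fetch = mock.Mock()
fetch(['wget', 'http://example.test/account.ring.gz'])
fetch(['wget', 'http://example.test/object.ring.gz'])

# An expected calls list in the wrong order makes the order-sensitive
# assert_has_calls() raise AssertionError, even though every expected
# call did happen.
expected = [
    mock.call(['wget', 'http://example.test/object.ring.gz']),
    mock.call(['wget', 'http://example.test/account.ring.gz']),
]

# Fix: reorder the expected list to match the code, or pass
# any_order=True when the ordering is genuinely irrelevant.
fetch.assert_has_calls(expected, any_order=True)
```

Reordering the expected list is usually preferable in charm unit tests, since it keeps the test sensitive to regressions in call order.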
Ryan Beisner (1chb1n) wrote:
UOSCI bot says:
This MP triggered a test on the Ubuntu OSCI system. Here is a summary of results.
#642 swift-storage-next for alexlist mp231406
charm_lint_check
This build:
http://
MP URL:
https:/
Proposed branch:
lp:~alexlist/charms/trusty/swift-storage/nrpe-rebased
Results summary:
LINT FAIL: lint-check failed
LINT Results (max last 25 lines) from
/var/lib/
hooks/swift_
hooks/swift_
hooks/swift_
hooks/swift_
hooks/swift_
hooks/swift_
hooks/swift_
hooks/swift_
make: *** [lint] Error 1
Ubuntu OSCI Jenkins is currently in development on a Canonical private network, but we plan to publish results to a public instance soon. Tests are triggered if the proposed branch rev changes, or if the MP is placed into "Needs review" status after being otherwise for >= 1hr. Human review of results is still recommended.
http://
Ryan Beisner (1chb1n) wrote:
UOSCI bot says:
This MP triggered a test on the Ubuntu OSCI system. Here is a summary of results.
#459 swift-storage-next for alexlist mp231406
charm_unit_test
This build:
http://
MP URL:
https:/
Proposed branch:
lp:~alexlist/charms/trusty/swift-storage/nrpe-rebased
Results summary:
UNIT FAIL: unit-test failed
UNIT Results (max last 25 lines) from
/var/lib/
call('/srv/node', owner='swift', group='swift'),
call('
=======
FAIL: test_fetch_
-------
Traceback (most recent call last):
File "/var/lib/
self.
AssertionError: [call(['wget', 'http://
call(['wget', 'http://
call(['wget', 'http://
Name Stmts Miss Cover Missing
-------
swift_storage_
swift_storage_hooks 70 16 77% 72, 77, 107-137
swift_storage_utils 111 8 93% 241-248
-------
TOTAL 221 24 89%
-------
Ran 31 tests in 1.384s
FAILED (failures=2)
make: *** [unit_test] Error 1
Ryan Beisner (1chb1n) wrote:
UOSCI bot says:
This MP triggered a test on the Ubuntu OSCI system. Here is a summary of results.
#653 swift-storage-next for alexlist mp231406
charm_lint_check
This build:
http://
MP URL:
https:/
Proposed branch:
lp:~alexlist/charms/trusty/swift-storage/nrpe-rebased
Results summary:
LINT FAIL: lint-check failed
LINT Results (max last 25 lines) from
/var/lib/
hooks/swift_
hooks/swift_
hooks/swift_
hooks/swift_
hooks/swift_
hooks/swift_
hooks/swift_
hooks/swift_
make: *** [lint] Error 1
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #4234 swift-storage-next for alexlist mp231406
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #4251 swift-storage-next for alexlist mp231406
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #3977 swift-storage-next for alexlist mp231406
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
make: *** [unit_test] Error 1
ERROR:root:Make target returned non-zero.
Full unit test output: http://
Build: http://
Unmerged revisions
- 45. By Alexander List: [alexlist] move Nagios checks from scripts/ to files/, add service checks, add sudo permissions for service checks
- 44. By Alexander List: [alexlist] Initial support for nrpe-external-master, fix rsync configuration to support /etc/rsync.d
- 43. By Alexander List: [alexlist] re-sync charm-helpers, import charmsupport
- 42. By Alexander List: [alexlist] add config parameters for nrpe-external-master support
- 41. By Alexander List: [alexlist] add charmsupport dependency
Preview Diff
1 | === modified file 'charm-helpers-hooks.yaml' | |||
2 | --- charm-helpers-hooks.yaml 2014-07-28 12:08:48 +0000 | |||
3 | +++ charm-helpers-hooks.yaml 2014-08-19 15:54:23 +0000 | |||
4 | @@ -10,3 +10,4 @@ | |||
5 | 10 | - cluster | 10 | - cluster |
6 | 11 | - payload.execd | 11 | - payload.execd |
7 | 12 | - contrib.network.ip | 12 | - contrib.network.ip |
8 | 13 | - contrib.charmsupport | ||
9 | 13 | 14 | ||
10 | === modified file 'config.yaml' | |||
11 | --- config.yaml 2014-06-19 08:40:58 +0000 | |||
12 | +++ config.yaml 2014-08-19 15:54:23 +0000 | |||
13 | @@ -63,3 +63,17 @@ | |||
14 | 63 | to not use a per-disk thread pool. It is recommended to keep this value | 63 | to not use a per-disk thread pool. It is recommended to keep this value |
15 | 64 | small, as large values can result in high read latencies due to large | 64 | small, as large values can result in high read latencies due to large |
16 | 65 | queue depths. A good starting point is 4 threads per disk. | 65 | queue depths. A good starting point is 4 threads per disk. |
17 | 66 | nagios-check-params: | ||
18 | 67 | default: "-m -r 60 180 10 20" | ||
19 | 68 | type: string | ||
20 | 69 | description: String appended to nagios check | ||
21 | 70 | nagios_context: | ||
22 | 71 | default: "juju" | ||
23 | 72 | type: string | ||
24 | 73 | description: | | ||
25 | 74 | Used by the nrpe-external-master subordinate charm. | ||
26 | 75 | A string that will be prepended to instance name to set the host name | ||
27 | 76 | in nagios. So for instance the hostname would be something like: | ||
28 | 77 | juju-myservice-0 | ||
29 | 78 | If you're running multiple environments with the same services in them | ||
30 | 79 | this allows you to differentiate between them. | ||
31 | 66 | 80 | ||
32 | === added directory 'files' | |||
33 | === added directory 'files/nagios' | |||
34 | === added file 'files/nagios/check_swift_service' | |||
35 | --- files/nagios/check_swift_service 1970-01-01 00:00:00 +0000 | |||
36 | +++ files/nagios/check_swift_service 2014-08-19 15:54:23 +0000 | |||
37 | @@ -0,0 +1,25 @@ | |||
38 | 1 | #!/bin/sh | ||
39 | 2 | |||
40 | 3 | # | ||
41 | 4 | # check_swift_service --- exactly what it says on the tin | ||
42 | 5 | # | ||
43 | 6 | # Copyright 2013 Canonical Ltd. | ||
44 | 7 | # | ||
45 | 8 | # Authors: | ||
46 | 9 | # Paul Collins <paul.collins@canonical.com> | ||
47 | 10 | # | ||
48 | 11 | |||
49 | 12 | STATUS=$(sudo -u swift swift-init status "$1" 2>&1) | ||
50 | 13 | |||
51 | 14 | case $? in | ||
52 | 15 | 0) | ||
53 | 16 | echo "OK: ${STATUS}" | ||
54 | 17 | exit 0 | ||
55 | 18 | ;; | ||
56 | 19 | *) | ||
57 | 20 | echo "CRITICAL: ${STATUS}" | ||
58 | 21 | exit 2 | ||
59 | 22 | ;; | ||
60 | 23 | esac | ||
61 | 24 | |||
62 | 25 | exit 1 | ||
63 | 0 | 26 | ||
64 | === added file 'files/nagios/check_swift_storage.py' | |||
65 | --- files/nagios/check_swift_storage.py 1970-01-01 00:00:00 +0000 | |||
66 | +++ files/nagios/check_swift_storage.py 2014-08-19 15:54:23 +0000 | |||
67 | @@ -0,0 +1,136 @@ | |||
68 | 1 | #!/usr/bin/env python | ||
69 | 2 | |||
70 | 3 | # Copyright (C) 2014 Canonical | ||
71 | 4 | # All Rights Reserved | ||
72 | 5 | # Author: Jacek Nykis | ||
73 | 6 | |||
74 | 7 | import sys | ||
75 | 8 | import json | ||
76 | 9 | import urllib2 | ||
77 | 10 | import argparse | ||
78 | 11 | import hashlib | ||
79 | 12 | import datetime | ||
80 | 13 | |||
81 | 14 | STATUS_OK = 0 | ||
82 | 15 | STATUS_WARN = 1 | ||
83 | 16 | STATUS_CRIT = 2 | ||
84 | 17 | STATUS_UNKNOWN = 3 | ||
85 | 18 | |||
86 | 19 | |||
87 | 20 | def generate_md5(filename): | ||
88 | 21 | with open(filename, 'rb') as f: | ||
89 | 22 | md5 = hashlib.md5() | ||
90 | 23 | buffer = f.read(2 ** 20) | ||
91 | 24 | while buffer: | ||
92 | 25 | md5.update(buffer) | ||
93 | 26 | buffer = f.read(2 ** 20) | ||
94 | 27 | return md5.hexdigest() | ||
95 | 28 | |||
96 | 29 | |||
97 | 30 | def check_md5(base_url): | ||
98 | 31 | url = base_url + "ringmd5" | ||
99 | 32 | ringfiles = ["/etc/swift/object.ring.gz", | ||
100 | 33 | "/etc/swift/account.ring.gz", | ||
101 | 34 | "/etc/swift/container.ring.gz"] | ||
102 | 35 | results = [] | ||
103 | 36 | try: | ||
104 | 37 | data = urllib2.urlopen(url).read() | ||
105 | 38 | j = json.loads(data) | ||
106 | 39 | except urllib2.URLError: | ||
107 | 40 | return [(STATUS_UNKNOWN, "Can't open url: {}".format(url))] | ||
108 | 41 | except ValueError: | ||
109 | 42 | return [(STATUS_UNKNOWN, "Can't parse status data")] | ||
110 | 43 | |||
111 | 44 | for ringfile in ringfiles: | ||
112 | 45 | try: | ||
113 | 46 | if generate_md5(ringfile) != j[ringfile]: | ||
114 | 47 | results.append((STATUS_CRIT, | ||
115 | 48 | "Ringfile {} MD5 sum mismatch".format(ringfile))) | ||
116 | 49 | except IOError: | ||
117 | 50 | results.append( | ||
118 | 51 | (STATUS_UNKNOWN, "Can't open ringfile {}".format(ringfile))) | ||
119 | 52 | if results: | ||
120 | 53 | return results | ||
121 | 54 | else: | ||
122 | 55 | return [(STATUS_OK, "OK")] | ||
123 | 56 | |||
124 | 57 | |||
125 | 58 | def check_replication(base_url, limits): | ||
126 | 59 | types = ["account", "object", "container"] | ||
127 | 60 | results = [] | ||
128 | 61 | for repl in types: | ||
129 | 62 | url = base_url + "replication/" + repl | ||
130 | 63 | try: | ||
131 | 64 | data = urllib2.urlopen(url).read() | ||
132 | 65 | j = json.loads(data) | ||
133 | 66 | except urllib2.URLError: | ||
134 | 67 | results.append((STATUS_UNKNOWN, "Can't open url: {}".format(url))) | ||
135 | 68 | continue | ||
136 | 69 | except ValueError: | ||
137 | 70 | results.append((STATUS_UNKNOWN, "Can't parse status data")) | ||
138 | 71 | continue | ||
139 | 72 | |||
140 | 73 | if "object_replication_last" in j: | ||
141 | 74 | repl_last = datetime.datetime.fromtimestamp(j["object_replication_last"]) | ||
142 | 75 | else: | ||
143 | 76 | repl_last = datetime.datetime.fromtimestamp(j["replication_last"]) | ||
144 | 77 | delta = datetime.datetime.now() - repl_last | ||
145 | 78 | if delta.seconds >= limits[1]: | ||
146 | 79 | results.append((STATUS_CRIT, | ||
147 | 80 | "'{}' replication lag is {} seconds".format(repl, delta.seconds))) | ||
148 | 81 | elif delta.seconds >= limits[0]: | ||
149 | 82 | results.append((STATUS_WARN, | ||
150 | 83 | "'{}' replication lag is {} seconds".format(repl, delta.seconds))) | ||
151 | 84 | if "replication_stats" in j: | ||
152 | 85 | errors = j["replication_stats"]["failure"] | ||
153 | 86 | if errors >= limits[3]: | ||
154 | 87 | results.append( | ||
155 | 88 | (STATUS_CRIT, "{} replication failures".format(errors))) | ||
156 | 89 | elif errors >= limits[2]: | ||
157 | 90 | results.append( | ||
158 | 91 | (STATUS_WARN, "{} replication failures".format(errors))) | ||
159 | 92 | if results: | ||
160 | 93 | return results | ||
161 | 94 | else: | ||
162 | 95 | return [(STATUS_OK, "OK")] | ||
163 | 96 | |||
164 | 97 | |||
165 | 98 | if __name__ == '__main__': | ||
166 | 99 | parser = argparse.ArgumentParser(description='Check swift-storage health') | ||
167 | 100 | parser.add_argument('-H', '--host', dest='host', default='localhost', | ||
168 | 101 | help='Hostname to query') | ||
169 | 102 | parser.add_argument('-p', '--port', dest='port', default='6000', | ||
170 | 103 | type=int, help='Port number') | ||
171 | 104 | parser.add_argument('-r', '--replication', dest='check_replication', | ||
172 | 105 | type=int, nargs=4, help='Check replication status', | ||
173 | 106 | metavar=('lag_warn', 'lag_crit', 'failures_warn', 'failures_crit')) | ||
174 | 107 | parser.add_argument('-m', '--md5', dest='check_md5', action='store_true', | ||
175 | 108 | help='Compare server rings md5sum with local copy') | ||
176 | 109 | args = parser.parse_args() | ||
177 | 110 | |||
178 | 111 | if not args.check_replication and not args.check_md5: | ||
179 | 112 | print "You must use -r or -m switch" | ||
180 | 113 | sys.exit(STATUS_UNKNOWN) | ||
181 | 114 | |||
182 | 115 | base_url = "http://{}:{}/recon/".format(args.host, args.port) | ||
183 | 116 | results = [] | ||
184 | 117 | if args.check_replication: | ||
185 | 118 | results.extend(check_replication(base_url, args.check_replication)) | ||
186 | 119 | if args.check_md5: | ||
187 | 120 | results.extend(check_md5(base_url)) | ||
188 | 121 | |||
189 | 122 | crits = ';'.join([i[1] for i in results if i[0] == STATUS_CRIT]) | ||
190 | 123 | warns = ';'.join([i[1] for i in results if i[0] == STATUS_WARN]) | ||
191 | 124 | unknowns = ';'.join([i[1] for i in results if i[0] == STATUS_UNKNOWN]) | ||
192 | 125 | if crits: | ||
193 | 126 | print "CRITICAL: " + crits | ||
194 | 127 | sys.exit(STATUS_CRIT) | ||
195 | 128 | elif warns: | ||
196 | 129 | print "WARNING: " + warns | ||
197 | 130 | sys.exit(STATUS_WARN) | ||
198 | 131 | elif unknowns: | ||
199 | 132 | print "UNKNOWN: " + unknowns | ||
200 | 133 | sys.exit(STATUS_UNKNOWN) | ||
201 | 134 | else: | ||
202 | 135 | print "OK" | ||
203 | 136 | sys.exit(0) | ||
204 | 0 | 137 | ||
205 | === added directory 'files/sudo' | |||
206 | === added file 'files/sudo/swift-storage' | |||
207 | --- files/sudo/swift-storage 1970-01-01 00:00:00 +0000 | |||
208 | +++ files/sudo/swift-storage 2014-08-19 15:54:23 +0000 | |||
209 | @@ -0,0 +1,1 @@ | |||
210 | 1 | nagios ALL=(swift) NOPASSWD:/usr/bin/swift-init status * | ||
211 | 0 | 2 | ||
212 | === added directory 'hooks/charmhelpers/contrib/charmsupport' | |||
213 | === added file 'hooks/charmhelpers/contrib/charmsupport/__init__.py' | |||
214 | === added file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py' | |||
215 | --- hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000 | |||
216 | +++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2014-08-19 15:54:23 +0000 | |||
217 | @@ -0,0 +1,219 @@ | |||
218 | 1 | """Compatibility with the nrpe-external-master charm""" | ||
219 | 2 | # Copyright 2012 Canonical Ltd. | ||
220 | 3 | # | ||
221 | 4 | # Authors: | ||
222 | 5 | # Matthew Wedgwood <matthew.wedgwood@canonical.com> | ||
223 | 6 | |||
224 | 7 | import subprocess | ||
225 | 8 | import pwd | ||
226 | 9 | import grp | ||
227 | 10 | import os | ||
228 | 11 | import re | ||
229 | 12 | import shlex | ||
230 | 13 | import yaml | ||
231 | 14 | |||
232 | 15 | from charmhelpers.core.hookenv import ( | ||
233 | 16 | config, | ||
234 | 17 | local_unit, | ||
235 | 18 | log, | ||
236 | 19 | relation_ids, | ||
237 | 20 | relation_set, | ||
238 | 21 | ) | ||
239 | 22 | |||
240 | 23 | from charmhelpers.core.host import service | ||
241 | 24 | |||
242 | 25 | # This module adds compatibility with the nrpe-external-master and plain nrpe | ||
243 | 26 | # subordinate charms. To use it in your charm: | ||
244 | 27 | # | ||
245 | 28 | # 1. Update metadata.yaml | ||
246 | 29 | # | ||
247 | 30 | # provides: | ||
248 | 31 | # (...) | ||
249 | 32 | # nrpe-external-master: | ||
250 | 33 | # interface: nrpe-external-master | ||
251 | 34 | # scope: container | ||
252 | 35 | # | ||
253 | 36 | # and/or | ||
254 | 37 | # | ||
255 | 38 | # provides: | ||
256 | 39 | # (...) | ||
257 | 40 | # local-monitors: | ||
258 | 41 | # interface: local-monitors | ||
259 | 42 | # scope: container | ||
260 | 43 | |||
261 | 44 | # | ||
262 | 45 | # 2. Add the following to config.yaml | ||
263 | 46 | # | ||
264 | 47 | # nagios_context: | ||
265 | 48 | # default: "juju" | ||
266 | 49 | # type: string | ||
267 | 50 | # description: | | ||
268 | 51 | # Used by the nrpe subordinate charms. | ||
269 | 52 | # A string that will be prepended to instance name to set the host name | ||
270 | 53 | # in nagios. So for instance the hostname would be something like: | ||
271 | 54 | # juju-myservice-0 | ||
272 | 55 | # If you're running multiple environments with the same services in them | ||
273 | 56 | # this allows you to differentiate between them. | ||
274 | 57 | # | ||
275 | 58 | # 3. Add custom checks (Nagios plugins) to files/nrpe-external-master | ||
276 | 59 | # | ||
277 | 60 | # 4. Update your hooks.py with something like this: | ||
278 | 61 | # | ||
279 | 62 | # from charmsupport.nrpe import NRPE | ||
280 | 63 | # (...) | ||
281 | 64 | # def update_nrpe_config(): | ||
282 | 65 | # nrpe_compat = NRPE() | ||
283 | 66 | # nrpe_compat.add_check( | ||
284 | 67 | # shortname = "myservice", | ||
285 | 68 | # description = "Check MyService", | ||
286 | 69 | # check_cmd = "check_http -w 2 -c 10 http://localhost" | ||
287 | 70 | # ) | ||
288 | 71 | # nrpe_compat.add_check( | ||
289 | 72 | # "myservice_other", | ||
290 | 73 | # "Check for widget failures", | ||
291 | 74 | # check_cmd = "/srv/myapp/scripts/widget_check" | ||
292 | 75 | # ) | ||
293 | 76 | # nrpe_compat.write() | ||
294 | 77 | # | ||
295 | 78 | # def config_changed(): | ||
296 | 79 | # (...) | ||
297 | 80 | # update_nrpe_config() | ||
298 | 81 | # | ||
299 | 82 | # def nrpe_external_master_relation_changed(): | ||
300 | 83 | # update_nrpe_config() | ||
301 | 84 | # | ||
302 | 85 | # def local_monitors_relation_changed(): | ||
303 | 86 | # update_nrpe_config() | ||
304 | 87 | # | ||
305 | 88 | # 5. ln -s hooks.py nrpe-external-master-relation-changed | ||
306 | 89 | # ln -s hooks.py local-monitors-relation-changed | ||
307 | 90 | |||
308 | 91 | |||
309 | 92 | class CheckException(Exception): | ||
310 | 93 | pass | ||
311 | 94 | |||
312 | 95 | |||
313 | 96 | class Check(object): | ||
314 | 97 | shortname_re = '[A-Za-z0-9-_]+$' | ||
315 | 98 | service_template = (""" | ||
316 | 99 | #--------------------------------------------------- | ||
317 | 100 | # This file is Juju managed | ||
318 | 101 | #--------------------------------------------------- | ||
319 | 102 | define service {{ | ||
320 | 103 | use active-service | ||
321 | 104 | host_name {nagios_hostname} | ||
322 | 105 | service_description {nagios_hostname}[{shortname}] """ | ||
323 | 106 | """{description} | ||
324 | 107 | check_command check_nrpe!{command} | ||
325 | 108 | servicegroups {nagios_servicegroup} | ||
326 | 109 | }} | ||
327 | 110 | """) | ||
328 | 111 | |||
329 | 112 | def __init__(self, shortname, description, check_cmd): | ||
330 | 113 | super(Check, self).__init__() | ||
331 | 114 | # XXX: could be better to calculate this from the service name | ||
332 | 115 | if not re.match(self.shortname_re, shortname): | ||
333 | 116 | raise CheckException("shortname must match {}".format( | ||
334 | 117 | Check.shortname_re)) | ||
335 | 118 | self.shortname = shortname | ||
336 | 119 | self.command = "check_{}".format(shortname) | ||
337 | 120 | # Note: a set of invalid characters is defined by the | ||
338 | 121 | # Nagios server config | ||
339 | 122 | # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()= | ||
340 | 123 | self.description = description | ||
341 | 124 | self.check_cmd = self._locate_cmd(check_cmd) | ||
342 | 125 | |||
343 | 126 | def _locate_cmd(self, check_cmd): | ||
344 | 127 | search_path = ( | ||
345 | 128 | '/usr/lib/nagios/plugins', | ||
346 | 129 | '/usr/local/lib/nagios/plugins', | ||
347 | 130 | ) | ||
348 | 131 | parts = shlex.split(check_cmd) | ||
349 | 132 | for path in search_path: | ||
350 | 133 | if os.path.exists(os.path.join(path, parts[0])): | ||
351 | 134 | command = os.path.join(path, parts[0]) | ||
352 | 135 | if len(parts) > 1: | ||
353 | 136 | command += " " + " ".join(parts[1:]) | ||
354 | 137 | return command | ||
355 | 138 | log('Check command not found: {}'.format(parts[0])) | ||
356 | 139 | return '' | ||
357 | 140 | |||
358 | 141 | def write(self, nagios_context, hostname): | ||
359 | 142 | nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format( | ||
360 | 143 | self.command) | ||
361 | 144 | with open(nrpe_check_file, 'w') as nrpe_check_config: | ||
362 | 145 | nrpe_check_config.write("# check {}\n".format(self.shortname)) | ||
363 | 146 | nrpe_check_config.write("command[{}]={}\n".format( | ||
364 | 147 | self.command, self.check_cmd)) | ||
365 | 148 | |||
366 | 149 | if not os.path.exists(NRPE.nagios_exportdir): | ||
367 | 150 | log('Not writing service config as {} is not accessible'.format( | ||
368 | 151 | NRPE.nagios_exportdir)) | ||
369 | 152 | else: | ||
370 | 153 | self.write_service_config(nagios_context, hostname) | ||
371 | 154 | |||
372 | 155 | def write_service_config(self, nagios_context, hostname): | ||
373 | 156 | for f in os.listdir(NRPE.nagios_exportdir): | ||
374 | 157 | if re.search('.*{}.cfg'.format(self.command), f): | ||
375 | 158 | os.remove(os.path.join(NRPE.nagios_exportdir, f)) | ||
376 | 159 | |||
377 | 160 | templ_vars = { | ||
378 | 161 | 'nagios_hostname': hostname, | ||
379 | 162 | 'nagios_servicegroup': nagios_context, | ||
380 | 163 | 'description': self.description, | ||
381 | 164 | 'shortname': self.shortname, | ||
382 | 165 | 'command': self.command, | ||
383 | 166 | } | ||
384 | 167 | nrpe_service_text = Check.service_template.format(**templ_vars) | ||
385 | 168 | nrpe_service_file = '{}/service__{}_{}.cfg'.format( | ||
386 | 169 | NRPE.nagios_exportdir, hostname, self.command) | ||
387 | 170 | with open(nrpe_service_file, 'w') as nrpe_service_config: | ||
388 | 171 | nrpe_service_config.write(str(nrpe_service_text)) | ||
389 | 172 | |||
390 | 173 | def run(self): | ||
391 | 174 | subprocess.call(self.check_cmd) | ||
392 | 175 | |||
393 | 176 | |||
394 | 177 | class NRPE(object): | ||
395 | 178 | nagios_logdir = '/var/log/nagios' | ||
396 | 179 | nagios_exportdir = '/var/lib/nagios/export' | ||
397 | 180 | nrpe_confdir = '/etc/nagios/nrpe.d' | ||
398 | 181 | |||
399 | 182 | def __init__(self, hostname=None): | ||
400 | 183 | super(NRPE, self).__init__() | ||
401 | 184 | self.config = config() | ||
402 | 185 | self.nagios_context = self.config['nagios_context'] | ||
403 | 186 | self.unit_name = local_unit().replace('/', '-') | ||
404 | 187 | if hostname: | ||
405 | 188 | self.hostname = hostname | ||
406 | 189 | else: | ||
407 | 190 | self.hostname = "{}-{}".format(self.nagios_context, self.unit_name) | ||
408 | 191 | self.checks = [] | ||
409 | 192 | |||
410 | 193 | def add_check(self, *args, **kwargs): | ||
411 | 194 | self.checks.append(Check(*args, **kwargs)) | ||
412 | 195 | |||
413 | 196 | def write(self): | ||
414 | 197 | try: | ||
415 | 198 | nagios_uid = pwd.getpwnam('nagios').pw_uid | ||
416 | 199 | nagios_gid = grp.getgrnam('nagios').gr_gid | ||
417 | 200 | except: | ||
418 | 201 | log("Nagios user not set up, nrpe checks not updated") | ||
419 | 202 | return | ||
420 | 203 | |||
421 | 204 | if not os.path.exists(NRPE.nagios_logdir): | ||
422 | 205 | os.mkdir(NRPE.nagios_logdir) | ||
423 | 206 | os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid) | ||
424 | 207 | |||
425 | 208 | nrpe_monitors = {} | ||
426 | 209 | monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}} | ||
427 | 210 | for nrpecheck in self.checks: | ||
428 | 211 | nrpecheck.write(self.nagios_context, self.hostname) | ||
429 | 212 | nrpe_monitors[nrpecheck.shortname] = { | ||
430 | 213 | "command": nrpecheck.command, | ||
431 | 214 | } | ||
432 | 215 | |||
433 | 216 | service('restart', 'nagios-nrpe-server') | ||
434 | 217 | |||
435 | 218 | for rid in relation_ids("local-monitors"): | ||
436 | 219 | relation_set(relation_id=rid, monitors=yaml.dump(monitors)) | ||
437 | 0 | 220 | ||
438 | === added file 'hooks/charmhelpers/contrib/charmsupport/volumes.py' | |||
439 | --- hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000 | |||
440 | +++ hooks/charmhelpers/contrib/charmsupport/volumes.py 2014-08-19 15:54:23 +0000 | |||
441 | @@ -0,0 +1,159 @@ | |||
442 | 1 | ''' | ||
443 | 2 | Functions for managing volumes in juju units. One volume is supported per unit. | ||
444 | 3 | Subordinates may have their own storage, provided it is on its own partition. | ||
445 | 4 | |||
446 | 5 | Configuration stanzas:: | ||
447 | 6 | |||
448 | 7 | volume-ephemeral: | ||
449 | 8 | type: boolean | ||
450 | 9 | default: true | ||
451 | 10 | description: > | ||
452 | 11 | If false, a volume is mounted as sepecified in "volume-map" | ||
453 | 12 | If true, ephemeral storage will be used, meaning that log data | ||
454 | 13 | will only exist as long as the machine. YOU HAVE BEEN WARNED. | ||
455 | 14 | volume-map: | ||
456 | 15 | type: string | ||
457 | 16 | default: {} | ||
458 | 17 | description: > | ||
459 | 18 | YAML map of units to device names, e.g: | ||
460 | 19 | "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }" | ||
461 | 20 | Service units will raise a configure-error if volume-ephemeral | ||
462 | 21 | is 'true' and no volume-map value is set. Use 'juju set' to set a | ||
463 | 22 | value and 'juju resolved' to complete configuration. | ||
464 | 23 | |||
465 | 24 | Usage:: | ||
466 | 25 | |||
467 | 26 | from charmsupport.volumes import configure_volume, VolumeConfigurationError | ||
468 | 27 | from charmsupport.hookenv import log, ERROR | ||
469 | 28 | def post_mount_hook(): | ||
470 | 29 | stop_service('myservice') | ||
471 | 30 | def post_mount_hook(): | ||
472 | 31 | start_service('myservice') | ||
473 | 32 | |||
474 | 33 | if __name__ == '__main__': | ||
475 | 34 | try: | ||
476 | 35 | configure_volume(before_change=pre_mount_hook, | ||
477 | 36 | after_change=post_mount_hook) | ||
478 | 37 | except VolumeConfigurationError: | ||
479 | 38 | log('Storage could not be configured', ERROR) | ||
480 | 39 | |||
481 | 40 | ''' | ||
482 | 41 | |||
483 | 42 | # XXX: Known limitations | ||
484 | 43 | # - fstab is neither consulted nor updated | ||
485 | 44 | |||
486 | 45 | import os | ||
487 | 46 | from charmhelpers.core import hookenv | ||
488 | 47 | from charmhelpers.core import host | ||
489 | 48 | import yaml | ||
490 | 49 | |||
491 | 50 | |||
492 | 51 | MOUNT_BASE = '/srv/juju/volumes' | ||
493 | 52 | |||
494 | 53 | |||
495 | 54 | class VolumeConfigurationError(Exception): | ||
496 | 55 | '''Volume configuration data is missing or invalid''' | ||
497 | 56 | pass | ||
498 | 57 | |||
499 | 58 | |||
500 | 59 | def get_config(): | ||
501 | 60 | '''Gather and sanity-check volume configuration data''' | ||
502 | 61 | volume_config = {} | ||
503 | 62 | config = hookenv.config() | ||
504 | 63 | |||
505 | 64 | errors = False | ||
506 | 65 | |||
507 | 66 | if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'): | ||
508 | 67 | volume_config['ephemeral'] = True | ||
509 | 68 | else: | ||
510 | 69 | volume_config['ephemeral'] = False | ||
511 | 70 | |||
512 | 71 | try: | ||
513 | 72 | volume_map = yaml.safe_load(config.get('volume-map', '{}')) | ||
514 | 73 | except yaml.YAMLError as e: | ||
515 | 74 | hookenv.log("Error parsing YAML volume-map: {}".format(e), | ||
516 | 75 | hookenv.ERROR) | ||
517 | 76 | errors = True | ||
518 | 77 | if volume_map is None: | ||
519 | 78 | # probably an empty string | ||
520 | 79 | volume_map = {} | ||
521 | 80 | elif not isinstance(volume_map, dict): | ||
+        hookenv.log("Volume-map should be a dictionary, not {}".format(
+            type(volume_map)))
+        errors = True
+
+    volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
+    if volume_config['device'] and volume_config['ephemeral']:
+        # asked for ephemeral storage but also defined a volume ID
+        hookenv.log('A volume is defined for this unit, but ephemeral '
+                    'storage was requested', hookenv.ERROR)
+        errors = True
+    elif not volume_config['device'] and not volume_config['ephemeral']:
+        # asked for permanent storage but did not define volume ID
+        hookenv.log('Ephemeral storage was requested, but there is no volume '
+                    'defined for this unit.', hookenv.ERROR)
+        errors = True
+
+    unit_mount_name = hookenv.local_unit().replace('/', '-')
+    volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
+
+    if errors:
+        return None
+    return volume_config
+
+
+def mount_volume(config):
+    if os.path.exists(config['mountpoint']):
+        if not os.path.isdir(config['mountpoint']):
+            hookenv.log('Not a directory: {}'.format(config['mountpoint']))
+            raise VolumeConfigurationError()
+    else:
+        host.mkdir(config['mountpoint'])
+    if os.path.ismount(config['mountpoint']):
+        unmount_volume(config)
+    if not host.mount(config['device'], config['mountpoint'], persist=True):
+        raise VolumeConfigurationError()
+
+
+def unmount_volume(config):
+    if os.path.ismount(config['mountpoint']):
+        if not host.umount(config['mountpoint'], persist=True):
+            raise VolumeConfigurationError()
+
+
+def managed_mounts():
+    '''List of all mounted managed volumes'''
+    return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
+
+
+def configure_volume(before_change=lambda: None, after_change=lambda: None):
+    '''Set up storage (or don't) according to the charm's volume configuration.
+    Returns the mount point or "ephemeral". before_change and after_change
+    are optional functions to be called if the volume configuration changes.
+    '''
+
+    config = get_config()
+    if not config:
+        hookenv.log('Failed to read volume configuration', hookenv.CRITICAL)
+        raise VolumeConfigurationError()
+
+    if config['ephemeral']:
+        if os.path.ismount(config['mountpoint']):
+            before_change()
+            unmount_volume(config)
+            after_change()
+        return 'ephemeral'
+    else:
+        # persistent storage
+        if os.path.ismount(config['mountpoint']):
+            mounts = dict(managed_mounts())
+            if mounts.get(config['mountpoint']) != config['device']:
+                before_change()
+                unmount_volume(config)
+                mount_volume(config)
+                after_change()
+        else:
+            before_change()
+            mount_volume(config)
+            after_change()
+        return config['mountpoint']
=== added symlink 'hooks/nrpe-external-master-relation-changed'
=== target is u'swift_storage_hooks.py'
=== added symlink 'hooks/nrpe-external-master-relation-joined'
=== target is u'swift_storage_hooks.py'
=== modified file 'hooks/swift_storage_hooks.py'
--- hooks/swift_storage_hooks.py	2013-09-27 16:33:06 +0000
+++ hooks/swift_storage_hooks.py	2014-08-19 15:54:23 +0000
@@ -6,6 +6,7 @@
 from swift_storage_utils import (
     PACKAGES,
     RESTART_MAP,
+    SWIFT_SVCS,
     determine_block_devices,
     do_openstack_upgrade,
     ensure_swift_directories,
@@ -13,6 +14,7 @@
     register_configs,
     save_script_rc,
     setup_storage,
+    concat_rsync_fragments,
 )
 
 from charmhelpers.core.hookenv import (
@@ -21,10 +23,11 @@
     log,
     relation_get,
     relation_set,
+    relations_of_type,
 )
 
 from charmhelpers.fetch import apt_install, apt_update
-from charmhelpers.core.host import restart_on_change
+from charmhelpers.core.host import restart_on_change, rsync
 from charmhelpers.payload.execd import execd_preinstall
 
 from charmhelpers.contrib.openstack.utils import (
@@ -32,9 +35,12 @@
     openstack_upgrade_available,
 )
 
+from charmhelpers.contrib.charmsupport.nrpe import NRPE
+
 hooks = Hooks()
 CONFIGS = register_configs()
-
+NAGIOS_PLUGINS = '/usr/local/lib/nagios/plugins'
+SUDOERS_D = '/etc/sudoers.d'
 
 @hooks.hook()
 def install():
@@ -52,8 +58,24 @@
     if openstack_upgrade_available('swift'):
         do_openstack_upgrade(configs=CONFIGS)
     CONFIGS.write_all()
+
+    # If basenode is not installed and managing rsyncd.conf, replicate
+    # its core functionality. Otherwise, concatenate the fragment files.
+    if not os.path.exists('/etc/rsyncd.d/001-basenode'):
+        with open('templates/rsyncd.conf') as _in:
+            rsync_header = _in.read()
+        with open('/etc/rsyncd.d/050-swift-storage') as _in:
+            rsync_fragment = _in.read()
+        with open('/etc/rsyncd.conf', 'w') as out:
+            out.write(rsync_header + rsync_fragment)
+    else:
+        concat_rsync_fragments()
+
     save_script_rc()
 
+    if relations_of_type('nrpe-external-master'):
+        nrpe_relation()
+
 
 @hooks.hook()
 def swift_storage_relation_joined():
@@ -79,6 +101,40 @@
     CONFIGS.write('/etc/swift/swift.conf')
     fetch_swift_rings(rings_url)
 
+@hooks.hook('nrpe-external-master-relation-joined')
+@hooks.hook('nrpe-external-master-relation-changed')
+def nrpe_relation():
+    log('Refreshing nrpe checks')
+    rsync(os.path.join(os.getenv('CHARM_DIR'), 'files', 'nagios',
+                       'check_swift_storage.py'),
+          os.path.join(NAGIOS_PLUGINS, 'check_swift_storage.py'))
+    rsync(os.path.join(os.getenv('CHARM_DIR'), 'files', 'nagios',
+                       'check_swift_service'),
+          os.path.join(NAGIOS_PLUGINS, 'check_swift_service'))
+    rsync(os.path.join(os.getenv('CHARM_DIR'), 'files', 'sudo',
+                       'swift-storage'),
+          os.path.join(SUDOERS_D, 'swift-storage'))
+    # Find out if nrpe set nagios_hostname
+    hostname = None
+    for rel in relations_of_type('nrpe-external-master'):
+        if 'nagios_hostname' in rel:
+            hostname = rel['nagios_hostname']
+            break
+    nrpe = NRPE(hostname=hostname)
+    nrpe.add_check(
+        shortname='swift_storage',
+        description='Check swift storage ring hashes and replication',
+        check_cmd='check_swift_storage.py {}'.format(
+            config('nagios-check-params'))
+    )
+    # check services are running
+    for service in SWIFT_SVCS:
+        nrpe.add_check(
+            shortname=service,
+            description='swift-storage %s service' % service,
+            check_cmd='check_swift_service %s' % service,
+        )
+    nrpe.write()
 
 def main():
     try:
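For review context, the fallback added to config_changed boils down to: when no 001-basenode fragment exists (i.e. basenode is not managing rsyncd.conf), rebuild /etc/rsyncd.conf from the charm's header template plus the charm's own fragment. A standalone sketch of that branch, with the hard-coded /etc paths parameterised and exercised in a throwaway directory (the helper name and paths here are illustrative, not the charm's API):

```python
import os
import tempfile


def rebuild_rsyncd_conf(etc_dir, template_path):
    """Mirror the config_changed fallback: if basenode does not own
    rsyncd.conf, write header template + the charm's own fragment."""
    fragment_dir = os.path.join(etc_dir, 'rsyncd.d')
    target = os.path.join(etc_dir, 'rsyncd.conf')
    if not os.path.exists(os.path.join(fragment_dir, '001-basenode')):
        with open(template_path) as f:
            header = f.read()
        with open(os.path.join(fragment_dir, '050-swift-storage')) as f:
            fragment = f.read()
        with open(target, 'w') as out:
            out.write(header + fragment)
    return target


# Minimal demonstration against a temp dir instead of /etc
etc = tempfile.mkdtemp()
os.mkdir(os.path.join(etc, 'rsyncd.d'))
tmpl = os.path.join(etc, 'rsyncd.conf.tmpl')
with open(tmpl, 'w') as f:
    f.write('uid = nobody\n')
with open(os.path.join(etc, 'rsyncd.d', '050-swift-storage'), 'w') as f:
    f.write('[account]\n')
conf = rebuild_rsyncd_conf(etc, tmpl)
print(open(conf).read())  # header followed by the swift fragment
```

Note this sketch leaves out the else-branch (concat_rsync_fragments), which is defined in swift_storage_utils.py below.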
=== modified file 'hooks/swift_storage_utils.py'
--- hooks/swift_storage_utils.py	2014-08-11 09:14:47 +0000
+++ hooks/swift_storage_utils.py	2014-08-19 15:54:23 +0000
@@ -70,13 +70,28 @@
 ]
 
 RESTART_MAP = {
-    '/etc/rsyncd.conf': ['rsync'],
+    '/etc/rsyncd.d/050-swift-storage': ['rsync'],
     '/etc/swift/account-server.conf': ACCOUNT_SVCS,
     '/etc/swift/container-server.conf': CONTAINER_SVCS,
     '/etc/swift/object-server.conf': OBJECT_SVCS,
     '/etc/swift/swift.conf': ACCOUNT_SVCS + CONTAINER_SVCS + OBJECT_SVCS
 }
 
+SWIFT_SVCS = [
+    'account-auditor',
+    'account-reaper',
+    'account-replicator',
+    'account-server',
+    'container-auditor',
+    'container-replicator',
+    'container-server',
+    'container-sync',
+    'container-updater',
+    'object-auditor',
+    'object-replicator',
+    'object-server',
+    'object-updater',
+]
 
 def ensure_swift_directories():
     '''
@@ -90,6 +105,11 @@
     ]
     [mkdir(d, owner='swift', group='swift') for d in dirs
      if not os.path.isdir(d)]
+    root_dirs = [
+        '/etc/rsyncd.d',
+    ]
+    [mkdir(d, owner='root', group='root') for d in root_dirs
+     if not os.path.isdir(d)]
 
 
 def register_configs():
@@ -98,7 +118,7 @@
                         openstack_release=release)
     configs.register('/etc/swift/swift.conf',
                      [SwiftStorageContext()])
-    configs.register('/etc/rsyncd.conf',
+    configs.register('/etc/rsyncd.d/050-swift-storage',
                      [RsyncContext()])
     for server in ['account', 'object', 'container']:
         configs.register('/etc/swift/%s-server.conf' % server,
@@ -216,3 +236,14 @@
         'OPENSTACK_URL_%s' % svc: url,
     })
     _save_script_rc(**env_vars)
+
+def concat_rsync_fragments():
+    log('Concatenating rsyncd.d fragments')
+    rsyncd_dir = '/etc/rsyncd.d'
+    rsyncd_conf = ""
+    for filename in sorted(os.listdir(rsyncd_dir)):
+        with open(os.path.join(rsyncd_dir, filename), 'r') as fragment:
+            rsyncd_conf += fragment.read()
+    with open('/etc/rsyncd.conf', 'w') as f:
+        f.write(rsyncd_conf)
 
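One property worth calling out for reviewers: concat_rsync_fragments depends on sorted(os.listdir(...)), so the numeric prefixes (001-basenode, 050-swift-storage) are what guarantee that basenode's header lands before the swift module definitions. A self-contained sketch of that behaviour against a temp dir (function name and paths are illustrative):

```python
import os
import tempfile


def concat_fragments(fragment_dir, target):
    # Mirror of concat_rsync_fragments: the lexicographic sort means the
    # numeric prefixes (001-..., 050-...) control the assembly order.
    parts = []
    for filename in sorted(os.listdir(fragment_dir)):
        with open(os.path.join(fragment_dir, filename)) as fragment:
            parts.append(fragment.read())
    with open(target, 'w') as f:
        f.write(''.join(parts))


d = tempfile.mkdtemp()
frags = os.path.join(d, 'rsyncd.d')
os.mkdir(frags)
# Written out of order on purpose; the sort puts 001 before 050.
with open(os.path.join(frags, '050-swift-storage'), 'w') as f:
    f.write('[account]\n')
with open(os.path.join(frags, '001-basenode'), 'w') as f:
    f.write('uid = nobody\n')
target = os.path.join(d, 'rsyncd.conf')
concat_fragments(frags, target)
print(open(target).read())  # basenode header first, then the module
```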
=== modified file 'metadata.yaml'
--- metadata.yaml	2013-07-11 20:45:19 +0000
+++ metadata.yaml	2014-08-19 15:54:23 +0000
@@ -8,3 +8,6 @@
 provides:
   swift-storage:
     interface: swift
+  nrpe-external-master:
+    interface: nrpe-external-master
+    scope: container
 
=== modified file 'revision'
--- revision	2013-07-19 21:13:59 +0000
+++ revision	2014-08-19 15:54:23 +0000
@@ -1,1 +1,1 @@
-90
+101
 
=== removed directory 'scripts'
=== added file 'templates/050-swift-storage'
--- templates/050-swift-storage	1970-01-01 00:00:00 +0000
+++ templates/050-swift-storage	2014-08-19 15:54:23 +0000
@@ -0,0 +1,24 @@
+[account]
+uid = swift
+gid = swift
+max connections = 2
+path = /srv/node/
+read only = false
+lock file = /var/lock/account.lock
+
+[container]
+uid = swift
+gid = swift
+max connections = 2
+path = /srv/node/
+read only = false
+lock file = /var/lock/container.lock
+
+[object]
+uid = swift
+gid = swift
+max connections = 2
+path = /srv/node/
+read only = false
+lock file = /var/lock/object.lock
+
=== modified file 'templates/rsyncd.conf'
--- templates/rsyncd.conf	2013-07-19 19:52:45 +0000
+++ templates/rsyncd.conf	2014-08-19 15:54:23 +0000
@@ -1,23 +1,5 @@
-uid = swift
-gid = swift
-log file = /var/log/rsyncd.log
-pid file = /var/run/rsyncd.pid
-address = {{ local_ip }}
-
-[account]
-max connections = 2
-path = /srv/node/
-read only = false
-lock file = /var/lock/account.lock
-
-[container]
-max connections = 2
-path = /srv/node/
-read only = false
-lock file = /var/lock/container.lock
-
-[object]
-max connections = 2
-path = /srv/node/
-read only = false
-lock file = /var/lock/object.lock
+uid = nobody
+gid = nogroup
+syslog facility = daemon
+socket options = SO_KEEPALIVE
+
 
=== modified file 'unit_tests/test_swift_storage_relations.py'
--- unit_tests/test_swift_storage_relations.py	2014-06-19 08:40:58 +0000
+++ unit_tests/test_swift_storage_relations.py	2014-08-19 15:54:23 +0000
@@ -1,6 +1,6 @@
 from mock import patch, MagicMock
 
-from test_utils import CharmTestCase
+from test_utils import CharmTestCase, patch_open
 
 import swift_storage_utils as utils
 
@@ -21,6 +21,7 @@
     'log',
     'relation_set',
     'relation_get',
+    'relations_of_type',
     # charmhelpers.core.host
     'apt_update',
     'apt_install',
@@ -61,13 +62,19 @@
 
     def test_config_changed_no_upgrade_available(self):
         self.openstack_upgrade_available.return_value = False
-        hooks.config_changed()
+        self.relations_of_type.return_value = False
+        with patch_open() as (_open, _file):
+            _file.read.return_value = "foo"
+            hooks.config_changed()
         self.assertFalse(self.do_openstack_upgrade.called)
         self.assertTrue(self.CONFIGS.write_all.called)
 
     def test_config_changed_upgrade_available(self):
         self.openstack_upgrade_available.return_value = True
-        hooks.config_changed()
+        self.relations_of_type.return_value = False
+        with patch_open() as (_open, _file):
+            _file.read.return_value = "foo"
+            hooks.config_changed()
         self.assertTrue(self.do_openstack_upgrade.called)
         self.assertTrue(self.CONFIGS.write_all.called)
 
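The tests above pull a patch_open helper from test_utils, which is not part of this diff. For reviewers unfamiliar with it, a plausible minimal implementation looks like the following; this is a guess at the helper's shape, not the charm's actual code, and it uses Python 3's unittest.mock where the charm-era tests used the mock package:

```python
from contextlib import contextmanager
from unittest.mock import MagicMock, patch


@contextmanager
def patch_open():
    """Yield (mock_open, mock_file) while builtins.open is stubbed, so that
    `with open(...) as f` inside the block returns mock_file."""
    mock_open = MagicMock()
    mock_file = MagicMock()

    @contextmanager
    def stub_open(*args, **kwargs):
        mock_open(*args, **kwargs)  # record the call for later assertions
        yield mock_file

    with patch('builtins.open', stub_open):
        yield mock_open, mock_file


# Usage mirroring the tests in this diff:
with patch_open() as (_open, _file):
    _file.read.return_value = "foo"
    with open('/etc/rsyncd.conf') as f:
        data = f.read()
print(data)  # "foo"
```

This is what lets config_changed's new file reads run under test without touching /etc.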
=== modified file 'unit_tests/test_swift_storage_utils.py'
--- unit_tests/test_swift_storage_utils.py	2014-06-19 08:40:58 +0000
+++ unit_tests/test_swift_storage_utils.py	2014-08-19 15:54:23 +0000
@@ -75,6 +75,7 @@
         ex_dirs = [
             call('/etc/swift', owner='swift', group='swift'),
             call('/var/cache/swift', owner='swift', group='swift'),
+            call('/etc/rsyncd.d', owner='root', group='root'),
             call('/srv/node', owner='swift', group='swift')
         ]
         self.assertEquals(ex_dirs, self.mkdir.call_args_list)
@@ -211,7 +212,7 @@
                              openstack_release='grizzly')
         ex = [
             call('/etc/swift/swift.conf', ['swift_server_context']),
-            call('/etc/rsyncd.conf', ['rsync_context']),
+            call('/etc/rsyncd.d/050-swift-storage', ['rsync_context']),
             call('/etc/swift/account-server.conf', ['swift_context']),
             call('/etc/swift/object-server.conf', ['swift_context']),
             call('/etc/swift/container-server.conf', ['swift_context'])
This merge proposal appears to have more than just the nrpe support - lots of changes around how rsync is managed as well?
Was this intentional? If so, I would prefer that they were split out into two MPs to make it easier to test/review.