Merge lp:~gnuoy/charms/trusty/cinder-ceph/1453940 into lp:~openstack-charmers-archive/charms/trusty/cinder-ceph/next
Status: Merged
Merged at revision: 43
Proposed branch: lp:~gnuoy/charms/trusty/cinder-ceph/1453940
Merge into: lp:~openstack-charmers-archive/charms/trusty/cinder-ceph/next
Diff against target: 1382 lines (+929/-101), 8 files modified:
- hooks/charmhelpers/contrib/openstack/context.py (+8/-9)
- hooks/charmhelpers/contrib/storage/linux/ceph.py (+224/-2)
- hooks/cinder_hooks.py (+13/-22)
- tests/basic_deployment.py (+52/-8)
- tests/charmhelpers/contrib/amulet/utils.py (+234/-52)
- tests/charmhelpers/contrib/openstack/amulet/deployment.py (+20/-5)
- tests/charmhelpers/contrib/openstack/amulet/utils.py (+359/-0)
- unit_tests/test_cinder_hooks.py (+19/-3)
To merge this branch: bzr merge lp:~gnuoy/charms/trusty/cinder-ceph/1453940
Related bugs: (none listed)
Reviewers:
- Edward Hope-Morley: Needs Fixing
- Ryan Beisner (community): Needs Fixing
Review via email: mp+269379@code.launchpad.net
Commit message
Description of the change
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #8831 cinder-ceph-next for gnuoy mp269379
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6316 cinder-ceph-next for gnuoy mp269379
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote:
FYI, amulet test failed due to https:/
Re-running...
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6318 cinder-ceph-next for gnuoy mp269379
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote:
Sorry, ignore my previous comment. I didn't look deeply enough; it's not the race bug mentioned above.
Ryan Beisner (1chb1n) wrote:
The proposed changes appear to add extra subordinate relation data (a request-id). If this is an expected behavior change, the amulet test will need to be updated accordingly.
Here's where it tripped:
actual relation data (partial):
'broker_req': '{"api-version": 1, "request-id": "5e38da9e-
expected relation data (partial):
'broker_req': '{"api-version": 1, "ops": [{"replicas": 3, "name": "cinder-ceph", "op": "create-pool"}]}'}
Thanks - holler with any questions!
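For anyone hitting the same test failure: the charm-helpers change in this branch adds a uuid request-id to the serialized broker_req, so exact string comparison of relation data no longer works and requests have to be compared by their ops instead. A minimal sketch of that equivalence check (class and method names taken from the diff below; this is an illustration, not the full implementation):

```python
import json
import uuid


class CephBrokerRq(object):
    """Minimal sketch of the versioned Ceph broker request."""

    def __init__(self, api_version=1, request_id=None):
        self.api_version = api_version
        # Each request gets a unique id unless one is supplied.
        self.request_id = request_id or str(uuid.uuid1())
        self.ops = []

    def add_op_create_pool(self, name, replica_count=3):
        self.ops.append({'op': 'create-pool', 'name': name,
                         'replicas': replica_count})

    @property
    def request(self):
        return json.dumps({'api-version': self.api_version,
                           'ops': self.ops,
                           'request-id': self.request_id})

    def __eq__(self, other):
        # request-id is deliberately excluded: two requests asking for
        # the same ops are functionally equivalent.
        return (isinstance(other, CephBrokerRq) and
                self.api_version == other.api_version and
                self.ops == other.ops)


rq1 = CephBrokerRq()
rq1.add_op_create_pool('cinder-ceph')
rq2 = CephBrokerRq()
rq2.add_op_create_pool('cinder-ceph')
# Different request-ids, so the serialized strings differ...
assert rq1.request != rq2.request
# ...but the requests compare equal on their ops.
assert rq1 == rq2
```

This is why the amulet test now has to parse broker_req as JSON and validate the ops field rather than the raw string.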
45. By Liam Young: Charm helper sync
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #9713 cinder-ceph-next for gnuoy mp269379
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6341 cinder-ceph-next for gnuoy mp269379
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Edward Hope-Morley (hopem) wrote:
LGTM +1 (think amulet fail is unrelated)
Edward Hope-Morley (hopem) wrote:
Apologies, I take that back. This amulet error is real and related to the patch.
46. By Liam Young: Fix amulet tests
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #9829 cinder-ceph-next for gnuoy mp269379
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9054 cinder-ceph-next for gnuoy mp269379
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6363 cinder-ceph-next for gnuoy mp269379
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Edward Hope-Morley (hopem) wrote:
Still getting an amulet fail...
47. By Liam Young: More amulet fixes for ceph broker conversation
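The "ceph broker conversation" fix above moves the charm onto an idempotent send-if-needed pattern: a functionally equivalent request (same ops, regardless of request-id) is never re-sent. A toy sketch of that logic, with an in-memory dict standing in for Juju relation data (the dict and helper bodies here are illustrative assumptions; the real helpers use relation_get/relation_set):

```python
import json
import uuid

# Toy in-memory relation bag standing in for Juju relation data.
relation = {}


def make_request(pool, replicas=3):
    """Build a broker request dict with a fresh uuid request-id."""
    return {'api-version': 1,
            'request-id': str(uuid.uuid1()),
            'ops': [{'op': 'create-pool', 'name': pool,
                     'replicas': replicas}]}


def is_request_sent(req):
    """True if a functionally equivalent request is already on the wire."""
    prev = relation.get('broker_req')
    if not prev:
        return False
    prev = json.loads(prev)
    # Compare api-version and ops, never request-id: equivalent
    # requests must not be re-sent on every hook invocation.
    return (prev['api-version'] == req['api-version'] and
            prev['ops'] == req['ops'])


def send_request_if_needed(req):
    """Publish the request unless an equivalent one was already sent."""
    if is_request_sent(req):
        return False
    relation['broker_req'] = json.dumps(req)
    return True


assert send_request_if_needed(make_request('cinder-ceph')) is True
# A functionally identical request (new uuid) is not sent again.
assert send_request_if_needed(make_request('cinder-ceph')) is False
```

This is the behaviour the reworked ceph_changed hook relies on: it can call send_request_if_needed on every hook run without spamming ceph with duplicate pool-creation requests.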
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #9929 cinder-ceph-next for gnuoy mp269379
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9148 cinder-ceph-next for gnuoy mp269379
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6413 cinder-ceph-next for gnuoy mp269379
AMULET OK: passed
Build: http://
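With the tests now passing, one detail the sync introduces is worth noting: ceph replies on a per-client-unit key rather than the shared broker_rsp, so e.g. cinder-ceph/0 reads its response from broker-rsp-cinder-ceph-0. A sketch of the key derivation, mirroring get_broker_rsp_key from the diff (parameterised on the unit name for illustration; the real helper calls local_unit()):

```python
def get_broker_rsp_key(unit_name):
    """Relation key ceph uses to target a response at one client unit.

    Mirrors charmhelpers' get_broker_rsp_key, but takes the unit name
    as a parameter instead of calling local_unit() (illustration only).
    """
    return 'broker-rsp-' + unit_name.replace('/', '-')


# cinder-ceph/0 reads its broker response from this key:
assert get_broker_rsp_key('cinder-ceph/0') == 'broker-rsp-cinder-ceph-0'
```

This is why the amulet test looks up the response under broker-rsp-cinder-ceph-0 instead of asserting on the legacy broker_rsp value.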
Preview Diff
1 | === modified file 'hooks/charmhelpers/contrib/openstack/context.py' |
2 | --- hooks/charmhelpers/contrib/openstack/context.py 2015-09-03 09:44:48 +0000 |
3 | +++ hooks/charmhelpers/contrib/openstack/context.py 2015-09-14 10:08:16 +0000 |
4 | @@ -485,13 +485,15 @@ |
5 | |
6 | log('Generating template context for ceph', level=DEBUG) |
7 | mon_hosts = [] |
8 | - auth = None |
9 | - key = None |
10 | - use_syslog = str(config('use-syslog')).lower() |
11 | + ctxt = { |
12 | + 'use_syslog': str(config('use-syslog')).lower() |
13 | + } |
14 | for rid in relation_ids('ceph'): |
15 | for unit in related_units(rid): |
16 | - auth = relation_get('auth', rid=rid, unit=unit) |
17 | - key = relation_get('key', rid=rid, unit=unit) |
18 | + if not ctxt.get('auth'): |
19 | + ctxt['auth'] = relation_get('auth', rid=rid, unit=unit) |
20 | + if not ctxt.get('key'): |
21 | + ctxt['key'] = relation_get('key', rid=rid, unit=unit) |
22 | ceph_pub_addr = relation_get('ceph-public-address', rid=rid, |
23 | unit=unit) |
24 | unit_priv_addr = relation_get('private-address', rid=rid, |
25 | @@ -500,10 +502,7 @@ |
26 | ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr |
27 | mon_hosts.append(ceph_addr) |
28 | |
29 | - ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)), |
30 | - 'auth': auth, |
31 | - 'key': key, |
32 | - 'use_syslog': use_syslog} |
33 | + ctxt['mon_hosts'] = ' '.join(sorted(mon_hosts)) |
34 | |
35 | if not os.path.isdir('/etc/ceph'): |
36 | os.mkdir('/etc/ceph') |
37 | |
38 | === modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py' |
39 | --- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-07-29 10:50:52 +0000 |
40 | +++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-09-14 10:08:16 +0000 |
41 | @@ -28,6 +28,7 @@ |
42 | import shutil |
43 | import json |
44 | import time |
45 | +import uuid |
46 | |
47 | from subprocess import ( |
48 | check_call, |
49 | @@ -35,8 +36,10 @@ |
50 | CalledProcessError, |
51 | ) |
52 | from charmhelpers.core.hookenv import ( |
53 | + local_unit, |
54 | relation_get, |
55 | relation_ids, |
56 | + relation_set, |
57 | related_units, |
58 | log, |
59 | DEBUG, |
60 | @@ -411,17 +414,52 @@ |
61 | |
62 | The API is versioned and defaults to version 1. |
63 | """ |
64 | - def __init__(self, api_version=1): |
65 | + def __init__(self, api_version=1, request_id=None): |
66 | self.api_version = api_version |
67 | + if request_id: |
68 | + self.request_id = request_id |
69 | + else: |
70 | + self.request_id = str(uuid.uuid1()) |
71 | self.ops = [] |
72 | |
73 | def add_op_create_pool(self, name, replica_count=3): |
74 | self.ops.append({'op': 'create-pool', 'name': name, |
75 | 'replicas': replica_count}) |
76 | |
77 | + def set_ops(self, ops): |
78 | + """Set request ops to provided value. |
79 | + |
80 | + Useful for injecting ops that come from a previous request |
81 | + to allow comparisons to ensure validity. |
82 | + """ |
83 | + self.ops = ops |
84 | + |
85 | @property |
86 | def request(self): |
87 | - return json.dumps({'api-version': self.api_version, 'ops': self.ops}) |
88 | + return json.dumps({'api-version': self.api_version, 'ops': self.ops, |
89 | + 'request-id': self.request_id}) |
90 | + |
91 | + def _ops_equal(self, other): |
92 | + if len(self.ops) == len(other.ops): |
93 | + for req_no in range(0, len(self.ops)): |
94 | + for key in ['replicas', 'name', 'op']: |
95 | + if self.ops[req_no][key] != other.ops[req_no][key]: |
96 | + return False |
97 | + else: |
98 | + return False |
99 | + return True |
100 | + |
101 | + def __eq__(self, other): |
102 | + if not isinstance(other, self.__class__): |
103 | + return False |
104 | + if self.api_version == other.api_version and \ |
105 | + self._ops_equal(other): |
106 | + return True |
107 | + else: |
108 | + return False |
109 | + |
110 | + def __ne__(self, other): |
111 | + return not self.__eq__(other) |
112 | |
113 | |
114 | class CephBrokerRsp(object): |
115 | @@ -431,14 +469,198 @@ |
116 | |
117 | The API is versioned and defaults to version 1. |
118 | """ |
119 | + |
120 | def __init__(self, encoded_rsp): |
121 | self.api_version = None |
122 | self.rsp = json.loads(encoded_rsp) |
123 | |
124 | @property |
125 | + def request_id(self): |
126 | + return self.rsp.get('request-id') |
127 | + |
128 | + @property |
129 | def exit_code(self): |
130 | return self.rsp.get('exit-code') |
131 | |
132 | @property |
133 | def exit_msg(self): |
134 | return self.rsp.get('stderr') |
135 | + |
136 | + |
137 | +# Ceph Broker Conversation: |
138 | +# If a charm needs an action to be taken by ceph it can create a CephBrokerRq |
139 | +# and send that request to ceph via the ceph relation. The CephBrokerRq has a |
140 | +# unique id so that the client can identify which CephBrokerRsp is associated |
141 | +# with the request. Ceph will also respond to each client unit individually |
142 | +# creating a response key per client unit eg glance/0 will get a CephBrokerRsp |
143 | +# via key broker-rsp-glance-0 |
144 | +# |
145 | +# To use this the charm can just do something like: |
146 | +# |
147 | +# from charmhelpers.contrib.storage.linux.ceph import ( |
148 | +# send_request_if_needed, |
149 | +# is_request_complete, |
150 | +# CephBrokerRq, |
151 | +# ) |
152 | +# |
153 | +# @hooks.hook('ceph-relation-changed') |
154 | +# def ceph_changed(): |
155 | +# rq = CephBrokerRq() |
156 | +# rq.add_op_create_pool(name='poolname', replica_count=3) |
157 | +# |
158 | +# if is_request_complete(rq): |
159 | +# <Request complete actions> |
160 | +# else: |
161 | +# send_request_if_needed(get_ceph_request()) |
162 | +# |
163 | +# CephBrokerRq and CephBrokerRsp are serialized into JSON. Below is an example |
164 | +# of glance having sent a request to ceph which ceph has successfully processed |
165 | +# 'ceph:8': { |
166 | +# 'ceph/0': { |
167 | +# 'auth': 'cephx', |
168 | +# 'broker-rsp-glance-0': '{"request-id": "0bc7dc54", "exit-code": 0}', |
169 | +# 'broker_rsp': '{"request-id": "0da543b8", "exit-code": 0}', |
170 | +# 'ceph-public-address': '10.5.44.103', |
171 | +# 'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==', |
172 | +# 'private-address': '10.5.44.103', |
173 | +# }, |
174 | +# 'glance/0': { |
175 | +# 'broker_req': ('{"api-version": 1, "request-id": "0bc7dc54", ' |
176 | +# '"ops": [{"replicas": 3, "name": "glance", ' |
177 | +# '"op": "create-pool"}]}'), |
178 | +# 'private-address': '10.5.44.109', |
179 | +# }, |
180 | +# } |
181 | + |
182 | +def get_previous_request(rid): |
183 | + """Return the last ceph broker request sent on a given relation |
184 | + |
185 | + @param rid: Relation id to query for request |
186 | + """ |
187 | + request = None |
188 | + broker_req = relation_get(attribute='broker_req', rid=rid, |
189 | + unit=local_unit()) |
190 | + if broker_req: |
191 | + request_data = json.loads(broker_req) |
192 | + request = CephBrokerRq(api_version=request_data['api-version'], |
193 | + request_id=request_data['request-id']) |
194 | + request.set_ops(request_data['ops']) |
195 | + |
196 | + return request |
197 | + |
198 | + |
199 | +def get_request_states(request): |
200 | + """Return a dict of requests per relation id with their corresponding |
201 | + completion state. |
202 | + |
203 | + This allows a charm, which has a request for ceph, to see whether there is |
204 | + an equivalent request already being processed and if so what state that |
205 | + request is in. |
206 | + |
207 | + @param request: A CephBrokerRq object |
208 | + """ |
209 | + complete = [] |
210 | + requests = {} |
211 | + for rid in relation_ids('ceph'): |
212 | + complete = False |
213 | + previous_request = get_previous_request(rid) |
214 | + if request == previous_request: |
215 | + sent = True |
216 | + complete = is_request_complete_for_rid(previous_request, rid) |
217 | + else: |
218 | + sent = False |
219 | + complete = False |
220 | + |
221 | + requests[rid] = { |
222 | + 'sent': sent, |
223 | + 'complete': complete, |
224 | + } |
225 | + |
226 | + return requests |
227 | + |
228 | + |
229 | +def is_request_sent(request): |
230 | + """Check to see if a functionally equivalent request has already been sent |
231 | + |
232 | + Returns True if a similar request has been sent |
233 | + |
234 | + @param request: A CephBrokerRq object |
235 | + """ |
236 | + states = get_request_states(request) |
237 | + for rid in states.keys(): |
238 | + if not states[rid]['sent']: |
239 | + return False |
240 | + |
241 | + return True |
242 | + |
243 | + |
244 | +def is_request_complete(request): |
245 | + """Check to see if a functionally equivalent request has already been |
246 | + completed |
247 | + |
248 | + Returns True if a similar request has been completed |
249 | + |
250 | + @param request: A CephBrokerRq object |
251 | + """ |
252 | + states = get_request_states(request) |
253 | + for rid in states.keys(): |
254 | + if not states[rid]['complete']: |
255 | + return False |
256 | + |
257 | + return True |
258 | + |
259 | + |
260 | +def is_request_complete_for_rid(request, rid): |
261 | + """Check if a given request has been completed on the given relation |
262 | + |
263 | + @param request: A CephBrokerRq object |
264 | + @param rid: Relation ID |
265 | + """ |
266 | + broker_key = get_broker_rsp_key() |
267 | + for unit in related_units(rid): |
268 | + rdata = relation_get(rid=rid, unit=unit) |
269 | + if rdata.get(broker_key): |
270 | + rsp = CephBrokerRsp(rdata.get(broker_key)) |
271 | + if rsp.request_id == request.request_id: |
272 | + if not rsp.exit_code: |
273 | + return True |
274 | + else: |
275 | + # The remote unit sent no reply targeted at this unit so either the |
276 | + # remote ceph cluster does not support unit targeted replies or it |
277 | + # has not processed our request yet. |
278 | + if rdata.get('broker_rsp'): |
279 | + request_data = json.loads(rdata['broker_rsp']) |
280 | + if request_data.get('request-id'): |
281 | + log('Ignoring legacy broker_rsp without unit key as remote ' |
282 | + 'service supports unit specific replies', level=DEBUG) |
283 | + else: |
284 | + log('Using legacy broker_rsp as remote service does not ' |
285 | + 'support unit specific replies', level=DEBUG) |
286 | + rsp = CephBrokerRsp(rdata['broker_rsp']) |
287 | + if not rsp.exit_code: |
288 | + return True |
289 | + |
290 | + return False |
291 | + |
292 | + |
293 | +def get_broker_rsp_key(): |
294 | + """Return broker response key for this unit |
295 | + |
296 | + This is the key that ceph is going to use to pass request status |
297 | + information back to this unit |
298 | + """ |
299 | + return 'broker-rsp-' + local_unit().replace('/', '-') |
300 | + |
301 | + |
302 | +def send_request_if_needed(request): |
303 | + """Send broker request if an equivalent request has not already been sent |
304 | + |
305 | + @param request: A CephBrokerRq object |
306 | + """ |
307 | + if is_request_sent(request): |
308 | + log('Request already sent but not complete, not sending new request', |
309 | + level=DEBUG) |
310 | + else: |
311 | + for rid in relation_ids('ceph'): |
312 | + log('Sending request {}'.format(request.request_id), level=DEBUG) |
313 | + relation_set(relation_id=rid, broker_req=request.request) |
314 | |
315 | === modified file 'hooks/cinder_hooks.py' |
316 | --- hooks/cinder_hooks.py 2015-09-10 09:34:26 +0000 |
317 | +++ hooks/cinder_hooks.py 2015-09-14 10:08:16 +0000 |
318 | @@ -17,12 +17,9 @@ |
319 | UnregisteredHookError, |
320 | config, |
321 | service_name, |
322 | - relation_get, |
323 | relation_set, |
324 | relation_ids, |
325 | log, |
326 | - INFO, |
327 | - ERROR, |
328 | ) |
329 | from charmhelpers.fetch import apt_install, apt_update |
330 | from charmhelpers.core.host import ( |
331 | @@ -30,9 +27,10 @@ |
332 | service_restart, |
333 | ) |
334 | from charmhelpers.contrib.storage.linux.ceph import ( |
335 | + send_request_if_needed, |
336 | + is_request_complete, |
337 | ensure_ceph_keyring, |
338 | CephBrokerRq, |
339 | - CephBrokerRsp, |
340 | delete_keyring, |
341 | ) |
342 | from charmhelpers.payload.execd import execd_preinstall |
343 | @@ -55,6 +53,14 @@ |
344 | os.mkdir('/etc/ceph') |
345 | |
346 | |
347 | +def get_ceph_request(): |
348 | + service = service_name() |
349 | + rq = CephBrokerRq() |
350 | + replicas = config('ceph-osd-replication-count') |
351 | + rq.add_op_create_pool(name=service, replica_count=replicas) |
352 | + return rq |
353 | + |
354 | + |
355 | @hooks.hook('ceph-relation-changed') |
356 | @restart_on_change(restart_map()) |
357 | def ceph_changed(): |
358 | @@ -68,32 +74,17 @@ |
359 | log('Could not create ceph keyring: peer not ready?') |
360 | return |
361 | |
362 | - settings = relation_get() |
363 | - if settings and 'broker_rsp' in settings: |
364 | - rsp = CephBrokerRsp(settings['broker_rsp']) |
365 | - # Non-zero return code implies failure |
366 | - if rsp.exit_code: |
367 | - log("Ceph broker request failed (rc=%s, msg=%s)" % |
368 | - (rsp.exit_code, rsp.exit_msg), level=ERROR) |
369 | - return |
370 | - |
371 | - log("Ceph broker request succeeded (rc=%s, msg=%s)" % |
372 | - (rsp.exit_code, rsp.exit_msg), level=INFO) |
373 | + if is_request_complete(get_ceph_request()): |
374 | + log('Request complete') |
375 | CONFIGS.write_all() |
376 | set_ceph_env_variables(service=service) |
377 | for rid in relation_ids('storage-backend'): |
378 | storage_backend(rid) |
379 | - |
380 | # Ensure that cinder-volume is restarted since only now can we |
381 | # guarantee that ceph resources are ready. |
382 | service_restart('cinder-volume') |
383 | else: |
384 | - rq = CephBrokerRq() |
385 | - replicas = config('ceph-osd-replication-count') |
386 | - rq.add_op_create_pool(name=service, replica_count=replicas) |
387 | - for rid in relation_ids('ceph'): |
388 | - relation_set(relation_id=rid, broker_req=rq.request) |
389 | - log("Request(s) sent to Ceph broker (rid=%s)" % (rid)) |
390 | + send_request_if_needed(get_ceph_request()) |
391 | |
392 | |
393 | @hooks.hook('ceph-relation-broken') |
394 | |
395 | === modified file 'tests/basic_deployment.py' |
396 | --- tests/basic_deployment.py 2015-06-30 20:40:06 +0000 |
397 | +++ tests/basic_deployment.py 2015-09-14 10:08:16 +0000 |
398 | @@ -4,6 +4,7 @@ |
399 | Basic cinder-ceph functional test. |
400 | """ |
401 | import amulet |
402 | +import json |
403 | import time |
404 | |
405 | from charmhelpers.contrib.openstack.amulet.deployment import ( |
406 | @@ -279,40 +280,83 @@ |
407 | amulet.raise_status(amulet.FAIL, |
408 | msg='cinder endpoint: {}'.format(ret)) |
409 | |
410 | + def validate_broker_req(self, unit, relation, expected): |
411 | + rel_data = json.loads(unit.relation( |
412 | + relation[0], |
413 | + relation[1])['broker_req']) |
414 | + if rel_data['api-version'] != expected['api-version']: |
415 | + return "Broker request api mismatch" |
416 | + for index in range(0, len(rel_data['ops'])): |
417 | + actual_op = rel_data['ops'][index] |
418 | + expected_op = expected['ops'][index] |
419 | + for key in ['op', 'name', 'replicas']: |
420 | + if actual_op[key] == expected_op[key]: |
421 | + u.log.debug("OK op {} key {}".format(index, key)) |
422 | + else: |
423 | + return "Mismatch, op: {} key: {}".format(index, key) |
424 | + return None |
425 | + |
426 | + def get_broker_request(self): |
427 | + client_unit = self.cinder_ceph_sentry |
428 | + broker_req = json.loads(client_unit.relation( |
429 | + 'ceph', |
430 | + 'ceph:client')['broker_req']) |
431 | + return broker_req |
432 | + |
433 | + def get_broker_response(self): |
434 | + broker_request = self.get_broker_request() |
435 | + response_key = "broker-rsp-cinder-ceph-0" |
436 | + ceph_sentrys = [self.ceph0_sentry, self.ceph1_sentry, self.ceph2_sentry] |
437 | + for sentry in ceph_sentrys: |
438 | + relation_data = sentry.relation('client', 'cinder-ceph:ceph') |
439 | + if relation_data.get(response_key): |
440 | + broker_response = json.loads(relation_data[response_key]) |
441 | + if broker_request['request-id'] == broker_response['request-id']: |
442 | + return broker_response |
443 | + |
444 | def test_200_cinderceph_ceph_ceph_relation(self): |
445 | u.log.debug('Checking cinder-ceph:ceph to ceph:client ' |
446 | 'relation data...') |
447 | unit = self.cinder_ceph_sentry |
448 | relation = ['ceph', 'ceph:client'] |
449 | |
450 | - req = ('{"api-version": 1, "ops": [{"replicas": 3, "name": ' |
451 | - '"cinder-ceph", "op": "create-pool"}]}') |
452 | - |
453 | + req = { |
454 | + "api-version": 1, |
455 | + "ops": [{"replicas": 3, "name": "cinder-ceph", "op": "create-pool"}] |
456 | + } |
457 | expected = { |
458 | 'private-address': u.valid_ip, |
459 | - 'broker_req': req |
460 | + 'broker_req': u.not_null, |
461 | } |
462 | ret = u.validate_relation_data(unit, relation, expected) |
463 | if ret: |
464 | msg = u.relation_error('cinder-ceph ceph', ret) |
465 | amulet.raise_status(amulet.FAIL, msg=msg) |
466 | + ret = self.validate_broker_req(unit, relation, req) |
467 | + if ret: |
468 | + msg = u.relation_error('cinder-ceph ceph', ret) |
469 | + amulet.raise_status(amulet.FAIL, msg=msg) |
470 | |
471 | def test_201_ceph_cinderceph_ceph_relation(self): |
472 | u.log.debug('Checking ceph:client to cinder-ceph:ceph ' |
473 | 'relation data...') |
474 | - unit = self.ceph0_sentry |
475 | + response_key = "broker-rsp-cinder-ceph-0" |
476 | + ceph_unit = self.ceph0_sentry |
477 | relation = ['client', 'cinder-ceph:ceph'] |
478 | expected = { |
479 | 'key': u.not_null, |
480 | 'private-address': u.valid_ip, |
481 | - 'broker_rsp': '{"exit-code": 0}', |
482 | 'ceph-public-address': u.valid_ip, |
483 | - 'auth': 'none' |
484 | + 'auth': 'none', |
485 | } |
486 | - ret = u.validate_relation_data(unit, relation, expected) |
487 | + ret = u.validate_relation_data(ceph_unit, relation, expected) |
488 | if ret: |
489 | msg = u.relation_error('cinder cinder-ceph storage-backend', ret) |
490 | amulet.raise_status(amulet.FAIL, msg=msg) |
491 | + broker_response = self.get_broker_response() |
492 | + if not broker_response or broker_response['exit-code'] != 0: |
493 | + msg='Broker request invalid or failed: {}'.format(broker_response) |
494 | + amulet.raise_status(amulet.FAIL, msg=msg) |
495 | |
496 | def test_202_cinderceph_cinder_backend_relation(self): |
497 | u.log.debug('Checking cinder-ceph:storage-backend to ' |
498 | |
499 | === modified file 'tests/charmhelpers/contrib/amulet/utils.py' |
500 | --- tests/charmhelpers/contrib/amulet/utils.py 2015-08-18 17:34:34 +0000 |
501 | +++ tests/charmhelpers/contrib/amulet/utils.py 2015-09-14 10:08:16 +0000 |
502 | @@ -19,9 +19,11 @@ |
503 | import logging |
504 | import os |
505 | import re |
506 | +import socket |
507 | import subprocess |
508 | import sys |
509 | import time |
510 | +import uuid |
511 | |
512 | import amulet |
513 | import distro_info |
514 | @@ -114,7 +116,7 @@ |
515 | # /!\ DEPRECATION WARNING (beisner): |
516 | # New and existing tests should be rewritten to use |
517 | # validate_services_by_name() as it is aware of init systems. |
518 | - self.log.warn('/!\\ DEPRECATION WARNING: use ' |
519 | + self.log.warn('DEPRECATION WARNING: use ' |
520 | 'validate_services_by_name instead of validate_services ' |
521 | 'due to init system differences.') |
522 | |
523 | @@ -269,33 +271,52 @@ |
524 | """Get last modification time of directory.""" |
525 | return sentry_unit.directory_stat(directory)['mtime'] |
526 | |
527 | - def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False): |
528 | - """Get process' start time. |
529 | - |
530 | - Determine start time of the process based on the last modification |
531 | - time of the /proc/pid directory. If pgrep_full is True, the process |
532 | - name is matched against the full command line. |
533 | - """ |
534 | - if pgrep_full: |
535 | - cmd = 'pgrep -o -f {}'.format(service) |
536 | - else: |
537 | - cmd = 'pgrep -o {}'.format(service) |
538 | - cmd = cmd + ' | grep -v pgrep || exit 0' |
539 | - cmd_out = sentry_unit.run(cmd) |
540 | - self.log.debug('CMDout: ' + str(cmd_out)) |
541 | - if cmd_out[0]: |
542 | - self.log.debug('Pid for %s %s' % (service, str(cmd_out[0]))) |
543 | - proc_dir = '/proc/{}'.format(cmd_out[0].strip()) |
544 | - return self._get_dir_mtime(sentry_unit, proc_dir) |
545 | + def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None): |
546 | + """Get start time of a process based on the last modification time |
547 | + of the /proc/pid directory. |
548 | + |
549 | + :sentry_unit: The sentry unit to check for the service on |
550 | + :service: service name to look for in process table |
551 | + :pgrep_full: [Deprecated] Use full command line search mode with pgrep |
552 | + :returns: epoch time of service process start |
556 | + """ |
557 | + if pgrep_full is not None: |
558 | + # /!\ DEPRECATION WARNING (beisner): |
559 | + # No longer implemented, as pidof is now used instead of pgrep. |
560 | + # https://bugs.launchpad.net/charm-helpers/+bug/1474030 |
561 | + self.log.warn('DEPRECATION WARNING: pgrep_full bool is no ' |
562 | + 'longer implemented re: lp 1474030.') |
563 | + |
564 | + pid_list = self.get_process_id_list(sentry_unit, service) |
565 | + pid = pid_list[0] |
566 | + proc_dir = '/proc/{}'.format(pid) |
567 | + self.log.debug('Pid for {} on {}: {}'.format( |
568 | + service, sentry_unit.info['unit_name'], pid)) |
569 | + |
570 | + return self._get_dir_mtime(sentry_unit, proc_dir) |
571 | |
572 | def service_restarted(self, sentry_unit, service, filename, |
573 | - pgrep_full=False, sleep_time=20): |
574 | + pgrep_full=None, sleep_time=20): |
575 | """Check if service was restarted. |
576 | |
577 | Compare a service's start time vs a file's last modification time |
578 | (such as a config file for that service) to determine if the service |
579 | has been restarted. |
580 | """ |
581 | + # /!\ DEPRECATION WARNING (beisner): |
582 | + # This method is prone to races in that no before-time is known. |
583 | + # Use validate_service_config_changed instead. |
584 | + |
585 | + # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now |
586 | + # used instead of pgrep. pgrep_full is still passed through to ensure |
587 | + # deprecation WARNS. lp1474030 |
588 | + self.log.warn('DEPRECATION WARNING: use ' |
589 | + 'validate_service_config_changed instead of ' |
590 | + 'service_restarted due to known races.') |
591 | + |
592 | time.sleep(sleep_time) |
593 | if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >= |
594 | self._get_file_mtime(sentry_unit, filename)): |
595 | @@ -304,15 +325,15 @@ |
596 | return False |
597 | |
598 | def service_restarted_since(self, sentry_unit, mtime, service, |
599 | - pgrep_full=False, sleep_time=20, |
600 | - retry_count=2): |
601 | + pgrep_full=None, sleep_time=20, |
602 | + retry_count=2, retry_sleep_time=30): |
603 | """Check if service was been started after a given time. |
604 | |
605 | Args: |
606 | sentry_unit (sentry): The sentry unit to check for the service on |
607 | mtime (float): The epoch time to check against |
608 | service (string): service name to look for in process table |
609 | - pgrep_full (boolean): Use full command line search mode with pgrep |
610 | + pgrep_full: [Deprecated] Use full command line search mode with pgrep |
611 | sleep_time (int): Seconds to sleep before looking for process |
612 | retry_count (int): If service is not found, how many times to retry |
613 | |
614 | @@ -321,30 +342,44 @@ |
615 | False if service is older than mtime or if service was |
616 | not found. |
617 | """ |
618 | - self.log.debug('Checking %s restarted since %s' % (service, mtime)) |
619 | + # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now |
620 | + # used instead of pgrep. pgrep_full is still passed through to ensure |
621 | + # deprecation WARNS. lp1474030 |
622 | + |
623 | + unit_name = sentry_unit.info['unit_name'] |
624 | + self.log.debug('Checking that %s service restarted since %s on ' |
625 | + '%s' % (service, mtime, unit_name)) |
626 | time.sleep(sleep_time) |
627 | - proc_start_time = self._get_proc_start_time(sentry_unit, service, |
628 | - pgrep_full) |
629 | - while retry_count > 0 and not proc_start_time: |
630 | - self.log.debug('No pid file found for service %s, will retry %i ' |
631 | - 'more times' % (service, retry_count)) |
632 | - time.sleep(30) |
633 | - proc_start_time = self._get_proc_start_time(sentry_unit, service, |
634 | - pgrep_full) |
635 | - retry_count = retry_count - 1 |
636 | + proc_start_time = None |
637 | + tries = 0 |
638 | + while tries <= retry_count and not proc_start_time: |
639 | + try: |
640 | + proc_start_time = self._get_proc_start_time(sentry_unit, |
641 | + service, |
642 | + pgrep_full) |
643 | + self.log.debug('Attempt {} to get {} proc start time on {} ' |
644 | + 'OK'.format(tries, service, unit_name)) |
645 | + except IOError: |
646 | + # NOTE(beisner) - race avoidance, proc may not exist yet. |
647 | + # https://bugs.launchpad.net/charm-helpers/+bug/1474030 |
648 | + self.log.debug('Attempt {} to get {} proc start time on {} ' |
649 | + 'failed'.format(tries, service, unit_name)) |
650 | + time.sleep(retry_sleep_time) |
651 | + tries += 1 |
652 | |
653 | if not proc_start_time: |
654 | self.log.warn('No proc start time found, assuming service did ' |
655 | 'not start') |
656 | return False |
657 | if proc_start_time >= mtime: |
658 | - self.log.debug('proc start time is newer than provided mtime' |
659 | - '(%s >= %s)' % (proc_start_time, mtime)) |
660 | + self.log.debug('Proc start time is newer than provided mtime' |
661 | + '(%s >= %s) on %s (OK)' % (proc_start_time, |
662 | + mtime, unit_name)) |
663 | return True |
664 | else: |
665 | - self.log.warn('proc start time (%s) is older than provided mtime ' |
666 | - '(%s), service did not restart' % (proc_start_time, |
667 | - mtime)) |
668 | + self.log.warn('Proc start time (%s) is older than provided mtime ' |
669 | + '(%s) on %s, service did not ' |
670 | + 'restart' % (proc_start_time, mtime, unit_name)) |
671 | return False |
672 | |
673 | def config_updated_since(self, sentry_unit, filename, mtime, |
674 | @@ -374,8 +409,9 @@ |
675 | return False |
676 | |
677 | def validate_service_config_changed(self, sentry_unit, mtime, service, |
678 | - filename, pgrep_full=False, |
679 | - sleep_time=20, retry_count=2): |
680 | + filename, pgrep_full=None, |
681 | + sleep_time=20, retry_count=2, |
682 | + retry_sleep_time=30): |
683 | """Check service and file were updated after mtime |
684 | |
685 | Args: |
686 | @@ -383,9 +419,10 @@ |
687 | mtime (float): The epoch time to check against |
688 | service (string): service name to look for in process table |
689 | filename (string): The file to check mtime of |
690 | - pgrep_full (boolean): Use full command line search mode with pgrep |
691 | - sleep_time (int): Seconds to sleep before looking for process |
692 | + pgrep_full: [Deprecated] Use full command line search mode with pgrep |
693 | + sleep_time (int): Initial sleep in seconds to pass to test helpers |
694 | retry_count (int): If service is not found, how many times to retry |
695 | + retry_sleep_time (int): Time in seconds to wait between retries |
696 | |
697 | Typical Usage: |
698 | u = OpenStackAmuletUtils(ERROR) |
699 | @@ -402,15 +439,25 @@ |
700 | mtime, False if service is older than mtime or if service was |
701 | not found or if filename was modified before mtime. |
702 | """ |
703 | - self.log.debug('Checking %s restarted since %s' % (service, mtime)) |
704 | - time.sleep(sleep_time) |
705 | - service_restart = self.service_restarted_since(sentry_unit, mtime, |
706 | - service, |
707 | - pgrep_full=pgrep_full, |
708 | - sleep_time=0, |
709 | - retry_count=retry_count) |
710 | - config_update = self.config_updated_since(sentry_unit, filename, mtime, |
711 | - sleep_time=0) |
712 | + |
713 | + # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now |
714 | + # used instead of pgrep. pgrep_full is still passed through to ensure |
715 | + # deprecation WARNS. lp1474030 |
716 | + |
717 | + service_restart = self.service_restarted_since( |
718 | + sentry_unit, mtime, |
719 | + service, |
720 | + pgrep_full=pgrep_full, |
721 | + sleep_time=sleep_time, |
722 | + retry_count=retry_count, |
723 | + retry_sleep_time=retry_sleep_time) |
724 | + |
725 | + config_update = self.config_updated_since( |
726 | + sentry_unit, |
727 | + filename, |
728 | + mtime, |
729 | + sleep_time=0) |
730 | + |
731 | return service_restart and config_update |
732 | |
733 | def get_sentry_time(self, sentry_unit): |
734 | @@ -428,7 +475,6 @@ |
735 | """Return a list of all Ubuntu releases in order of release.""" |
736 | _d = distro_info.UbuntuDistroInfo() |
737 | _release_list = _d.all |
738 | - self.log.debug('Ubuntu release list: {}'.format(_release_list)) |
739 | return _release_list |
740 | |
741 | def file_to_url(self, file_rel_path): |
742 | @@ -568,6 +614,142 @@ |
743 | |
744 | return None |
745 | |
746 | + def validate_sectionless_conf(self, file_contents, expected): |
747 | + """A crude conf parser. Useful to inspect configuration files which |
748 | + do not have section headers (as would be necessary in order to use |
749	|	+        the configparser), such as openstack-dashboard or rabbitmq confs."""
750 | + for line in file_contents.split('\n'): |
751 | + if '=' in line: |
752 | + args = line.split('=') |
753 | + if len(args) <= 1: |
754 | + continue |
755 | + key = args[0].strip() |
756 | + value = args[1].strip() |
757 | + if key in expected.keys(): |
758 | + if expected[key] != value: |
759 | + msg = ('Config mismatch. Expected, actual: {}, ' |
760 | + '{}'.format(expected[key], value)) |
761 | + amulet.raise_status(amulet.FAIL, msg=msg) |
762 | + |
763 | + def get_unit_hostnames(self, units): |
764 | + """Return a dict of juju unit names to hostnames.""" |
765 | + host_names = {} |
766 | + for unit in units: |
767 | + host_names[unit.info['unit_name']] = \ |
768 | + str(unit.file_contents('/etc/hostname').strip()) |
769 | + self.log.debug('Unit host names: {}'.format(host_names)) |
770 | + return host_names |
771 | + |
772 | + def run_cmd_unit(self, sentry_unit, cmd): |
773 | + """Run a command on a unit, return the output and exit code.""" |
774 | + output, code = sentry_unit.run(cmd) |
775 | + if code == 0: |
776 | + self.log.debug('{} `{}` command returned {} ' |
777 | + '(OK)'.format(sentry_unit.info['unit_name'], |
778 | + cmd, code)) |
779 | + else: |
780 | + msg = ('{} `{}` command returned {} ' |
781 | + '{}'.format(sentry_unit.info['unit_name'], |
782 | + cmd, code, output)) |
783 | + amulet.raise_status(amulet.FAIL, msg=msg) |
784 | + return str(output), code |
785 | + |
786 | + def file_exists_on_unit(self, sentry_unit, file_name): |
787 | + """Check if a file exists on a unit.""" |
788 | + try: |
789 | + sentry_unit.file_stat(file_name) |
790 | + return True |
791 | + except IOError: |
792 | + return False |
793 | + except Exception as e: |
794 | + msg = 'Error checking file {}: {}'.format(file_name, e) |
795 | + amulet.raise_status(amulet.FAIL, msg=msg) |
796 | + |
797 | + def file_contents_safe(self, sentry_unit, file_name, |
798 | + max_wait=60, fatal=False): |
799 | + """Get file contents from a sentry unit. Wrap amulet file_contents |
800 | + with retry logic to address races where a file checks as existing, |
801 | + but no longer exists by the time file_contents is called. |
802 | + Return None if file not found. Optionally raise if fatal is True.""" |
803 | + unit_name = sentry_unit.info['unit_name'] |
804 | + file_contents = False |
805 | + tries = 0 |
806 | + while not file_contents and tries < (max_wait / 4): |
807 | + try: |
808 | + file_contents = sentry_unit.file_contents(file_name) |
809 | + except IOError: |
810 | + self.log.debug('Attempt {} to open file {} from {} ' |
811 | + 'failed'.format(tries, file_name, |
812 | + unit_name)) |
813 | + time.sleep(4) |
814 | + tries += 1 |
815 | + |
816 | + if file_contents: |
817 | + return file_contents |
818 | + elif not fatal: |
819 | + return None |
820 | + elif fatal: |
821 | + msg = 'Failed to get file contents from unit.' |
822 | + amulet.raise_status(amulet.FAIL, msg) |
823 | + |
824	|	+    def port_knock_tcp(self, host="localhost", port=22, timeout=15):
825	|	+        """Open a TCP socket to check for a listening service on a host.
826 | + |
827 | + :param host: host name or IP address, default to localhost |
828 | + :param port: TCP port number, default to 22 |
829 | + :param timeout: Connect timeout, default to 15 seconds |
830 | + :returns: True if successful, False if connect failed |
831 | + """ |
832 | + |
833 | + # Resolve host name if possible |
834 | + try: |
835 | + connect_host = socket.gethostbyname(host) |
836 | + host_human = "{} ({})".format(connect_host, host) |
837 | + except socket.error as e: |
838 | + self.log.warn('Unable to resolve address: ' |
839 | + '{} ({}) Trying anyway!'.format(host, e)) |
840 | + connect_host = host |
841 | + host_human = connect_host |
842 | + |
843 | + # Attempt socket connection |
844 | + try: |
845 | + knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) |
846 | + knock.settimeout(timeout) |
847 | + knock.connect((connect_host, port)) |
848 | + knock.close() |
849 | + self.log.debug('Socket connect OK for host ' |
850 | + '{} on port {}.'.format(host_human, port)) |
851 | + return True |
852 | + except socket.error as e: |
853 | + self.log.debug('Socket connect FAIL for' |
854 | + ' {} port {} ({})'.format(host_human, port, e)) |
855 | + return False |
856 | + |
857 | + def port_knock_units(self, sentry_units, port=22, |
858 | + timeout=15, expect_success=True): |
859	|	+        """Open a TCP socket to check for a listening service on each
860 | + listed juju unit. |
861 | + |
862 | + :param sentry_units: list of sentry unit pointers |
863 | + :param port: TCP port number, default to 22 |
864 | + :param timeout: Connect timeout, default to 15 seconds |
865 | + :expect_success: True by default, set False to invert logic |
866 | + :returns: None if successful, Failure message otherwise |
867 | + """ |
868 | + for unit in sentry_units: |
869 | + host = unit.info['public-address'] |
870 | + connected = self.port_knock_tcp(host, port, timeout) |
871 | + if not connected and expect_success: |
872 | + return 'Socket connect failed.' |
873 | + elif connected and not expect_success: |
874 | + return 'Socket connected unexpectedly.' |
875 | + |
876 | + def get_uuid_epoch_stamp(self): |
877 | + """Returns a stamp string based on uuid4 and epoch time. Useful in |
878 | + generating test messages which need to be unique-ish.""" |
879 | + return '[{}-{}]'.format(uuid.uuid4(), time.time()) |
880 | + |
881 | +# amulet juju action helpers: |
882 | def run_action(self, unit_sentry, action, |
883 | _check_output=subprocess.check_output): |
884 | """Run the named action on a given unit sentry. |
885 | |
886 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py' |
887 | --- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-08-18 17:34:34 +0000 |
888 | +++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-14 10:08:16 +0000 |
889 | @@ -44,8 +44,15 @@ |
890 | Determine if the local branch being tested is derived from its |
891	|	       stable or next (dev) branch, and based on this, use the corresponding
892 | stable or next branches for the other_services.""" |
893 | + |
894 | + # Charms outside the lp:~openstack-charmers namespace |
895 | base_charms = ['mysql', 'mongodb', 'nrpe'] |
896 | |
897 | + # Force these charms to current series even when using an older series. |
898 | + # ie. Use trusty/nrpe even when series is precise, as the P charm |
899 | + # does not possess the necessary external master config and hooks. |
900 | + force_series_current = ['nrpe'] |
901 | + |
902 | if self.series in ['precise', 'trusty']: |
903 | base_series = self.series |
904 | else: |
905 | @@ -53,11 +60,17 @@ |
906 | |
907 | if self.stable: |
908 | for svc in other_services: |
909 | + if svc['name'] in force_series_current: |
910 | + base_series = self.current_next |
911 | + |
912 | temp = 'lp:charms/{}/{}' |
913 | svc['location'] = temp.format(base_series, |
914 | svc['name']) |
915 | else: |
916 | for svc in other_services: |
917 | + if svc['name'] in force_series_current: |
918 | + base_series = self.current_next |
919 | + |
920 | if svc['name'] in base_charms: |
921 | temp = 'lp:charms/{}/{}' |
922 | svc['location'] = temp.format(base_series, |
923 | @@ -77,21 +90,23 @@ |
924 | |
925 | services = other_services |
926 | services.append(this_service) |
927 | + |
928 | + # Charms which should use the source config option |
929 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', |
930 | 'ceph-osd', 'ceph-radosgw'] |
931 | - # Most OpenStack subordinate charms do not expose an origin option |
932 | - # as that is controlled by the principle. |
933 | - ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe'] |
934 | + |
935	|	+        # Charms which cannot use openstack-origin, i.e. many subordinates
936 | + no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe'] |
937 | |
938 | if self.openstack: |
939 | for svc in services: |
940 | - if svc['name'] not in use_source + ignore: |
941 | + if svc['name'] not in use_source + no_origin: |
942 | config = {'openstack-origin': self.openstack} |
943 | self.d.configure(svc['name'], config) |
944 | |
945 | if self.source: |
946 | for svc in services: |
947 | - if svc['name'] in use_source and svc['name'] not in ignore: |
948 | + if svc['name'] in use_source and svc['name'] not in no_origin: |
949 | config = {'source': self.source} |
950 | self.d.configure(svc['name'], config) |
951 | |
952 | |
953 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py' |
954 | --- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-06-29 14:33:37 +0000 |
955 | +++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-14 10:08:16 +0000 |
956 | @@ -27,6 +27,7 @@ |
957 | import heatclient.v1.client as heat_client |
958 | import keystoneclient.v2_0 as keystone_client |
959 | import novaclient.v1_1.client as nova_client |
960 | +import pika |
961 | import swiftclient |
962 | |
963 | from charmhelpers.contrib.amulet.utils import ( |
964 | @@ -602,3 +603,361 @@ |
965 | self.log.debug('Ceph {} samples (OK): ' |
966 | '{}'.format(sample_type, samples)) |
967 | return None |
968 | + |
969 | +# rabbitmq/amqp specific helpers: |
970 | + def add_rmq_test_user(self, sentry_units, |
971 | + username="testuser1", password="changeme"): |
972 | + """Add a test user via the first rmq juju unit, check connection as |
973 | + the new user against all sentry units. |
974 | + |
975 | + :param sentry_units: list of sentry unit pointers |
976 | + :param username: amqp user name, default to testuser1 |
977 | + :param password: amqp user password |
978 | + :returns: None if successful. Raise on error. |
979 | + """ |
980 | + self.log.debug('Adding rmq user ({})...'.format(username)) |
981 | + |
982 | + # Check that user does not already exist |
983 | + cmd_user_list = 'rabbitmqctl list_users' |
984 | + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list) |
985 | + if username in output: |
986 | + self.log.warning('User ({}) already exists, returning ' |
987 | + 'gracefully.'.format(username)) |
988 | + return |
989 | + |
990 | + perms = '".*" ".*" ".*"' |
991 | + cmds = ['rabbitmqctl add_user {} {}'.format(username, password), |
992 | + 'rabbitmqctl set_permissions {} {}'.format(username, perms)] |
993 | + |
994 | + # Add user via first unit |
995 | + for cmd in cmds: |
996 | + output, _ = self.run_cmd_unit(sentry_units[0], cmd) |
997 | + |
998 | + # Check connection against the other sentry_units |
999 | + self.log.debug('Checking user connect against units...') |
1000 | + for sentry_unit in sentry_units: |
1001 | + connection = self.connect_amqp_by_unit(sentry_unit, ssl=False, |
1002 | + username=username, |
1003 | + password=password) |
1004 | + connection.close() |
1005 | + |
1006 | + def delete_rmq_test_user(self, sentry_units, username="testuser1"): |
1007 | + """Delete a rabbitmq user via the first rmq juju unit. |
1008 | + |
1009 | + :param sentry_units: list of sentry unit pointers |
1010 | + :param username: amqp user name, default to testuser1 |
1011 | + :param password: amqp user password |
1012 | + :returns: None if successful or no such user. |
1013 | + """ |
1014 | + self.log.debug('Deleting rmq user ({})...'.format(username)) |
1015 | + |
1016 | + # Check that the user exists |
1017 | + cmd_user_list = 'rabbitmqctl list_users' |
1018 | + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list) |
1019 | + |
1020 | + if username not in output: |
1021 | + self.log.warning('User ({}) does not exist, returning ' |
1022 | + 'gracefully.'.format(username)) |
1023 | + return |
1024 | + |
1025 | + # Delete the user |
1026 | + cmd_user_del = 'rabbitmqctl delete_user {}'.format(username) |
1027 | + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del) |
1028 | + |
1029 | + def get_rmq_cluster_status(self, sentry_unit): |
1030 | + """Execute rabbitmq cluster status command on a unit and return |
1031 | + the full output. |
1032 | + |
1033 | + :param unit: sentry unit |
1034 | + :returns: String containing console output of cluster status command |
1035 | + """ |
1036 | + cmd = 'rabbitmqctl cluster_status' |
1037 | + output, _ = self.run_cmd_unit(sentry_unit, cmd) |
1038 | + self.log.debug('{} cluster_status:\n{}'.format( |
1039 | + sentry_unit.info['unit_name'], output)) |
1040 | + return str(output) |
1041 | + |
1042 | + def get_rmq_cluster_running_nodes(self, sentry_unit): |
1043 | + """Parse rabbitmqctl cluster_status output string, return list of |
1044 | + running rabbitmq cluster nodes. |
1045 | + |
1046 | + :param unit: sentry unit |
1047 | + :returns: List containing node names of running nodes |
1048 | + """ |
1049 | + # NOTE(beisner): rabbitmqctl cluster_status output is not |
1050 | + # json-parsable, do string chop foo, then json.loads that. |
1051 | + str_stat = self.get_rmq_cluster_status(sentry_unit) |
1052 | + if 'running_nodes' in str_stat: |
1053 | + pos_start = str_stat.find("{running_nodes,") + 15 |
1054 | + pos_end = str_stat.find("]},", pos_start) + 1 |
1055 | + str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"') |
1056 | + run_nodes = json.loads(str_run_nodes) |
1057 | + return run_nodes |
1058 | + else: |
1059 | + return [] |
1060 | + |
1061 | + def validate_rmq_cluster_running_nodes(self, sentry_units): |
1062 | + """Check that all rmq unit hostnames are represented in the |
1063 | + cluster_status output of all units. |
1064 | + |
1065 | + :param host_names: dict of juju unit names to host names |
1066 | + :param units: list of sentry unit pointers (all rmq units) |
1067 | + :returns: None if successful, otherwise return error message |
1068 | + """ |
1069 | + host_names = self.get_unit_hostnames(sentry_units) |
1070 | + errors = [] |
1071 | + |
1072 | + # Query every unit for cluster_status running nodes |
1073 | + for query_unit in sentry_units: |
1074 | + query_unit_name = query_unit.info['unit_name'] |
1075 | + running_nodes = self.get_rmq_cluster_running_nodes(query_unit) |
1076 | + |
1077 | + # Confirm that every unit is represented in the queried unit's |
1078 | + # cluster_status running nodes output. |
1079 | + for validate_unit in sentry_units: |
1080 | + val_host_name = host_names[validate_unit.info['unit_name']] |
1081 | + val_node_name = 'rabbit@{}'.format(val_host_name) |
1082 | + |
1083 | + if val_node_name not in running_nodes: |
1084 | + errors.append('Cluster member check failed on {}: {} not ' |
1085 | + 'in {}\n'.format(query_unit_name, |
1086 | + val_node_name, |
1087 | + running_nodes)) |
1088 | + if errors: |
1089 | + return ''.join(errors) |
1090 | + |
1091 | + def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None): |
1092 | + """Check a single juju rmq unit for ssl and port in the config file.""" |
1093 | + host = sentry_unit.info['public-address'] |
1094 | + unit_name = sentry_unit.info['unit_name'] |
1095 | + |
1096 | + conf_file = '/etc/rabbitmq/rabbitmq.config' |
1097 | + conf_contents = str(self.file_contents_safe(sentry_unit, |
1098 | + conf_file, max_wait=16)) |
1099 | + # Checks |
1100 | + conf_ssl = 'ssl' in conf_contents |
1101 | + conf_port = str(port) in conf_contents |
1102 | + |
1103 | + # Port explicitly checked in config |
1104 | + if port and conf_port and conf_ssl: |
1105 | + self.log.debug('SSL is enabled @{}:{} ' |
1106 | + '({})'.format(host, port, unit_name)) |
1107 | + return True |
1108 | + elif port and not conf_port and conf_ssl: |
1109 | + self.log.debug('SSL is enabled @{} but not on port {} ' |
1110 | + '({})'.format(host, port, unit_name)) |
1111 | + return False |
1112 | + # Port not checked (useful when checking that ssl is disabled) |
1113 | + elif not port and conf_ssl: |
1114 | + self.log.debug('SSL is enabled @{}:{} ' |
1115 | + '({})'.format(host, port, unit_name)) |
1116 | + return True |
1117 | + elif not port and not conf_ssl: |
1118 | + self.log.debug('SSL not enabled @{}:{} ' |
1119 | + '({})'.format(host, port, unit_name)) |
1120 | + return False |
1121 | + else: |
1122 | + msg = ('Unknown condition when checking SSL status @{}:{} ' |
1123 | + '({})'.format(host, port, unit_name)) |
1124 | + amulet.raise_status(amulet.FAIL, msg) |
1125 | + |
1126 | + def validate_rmq_ssl_enabled_units(self, sentry_units, port=None): |
1127 | + """Check that ssl is enabled on rmq juju sentry units. |
1128 | + |
1129 | + :param sentry_units: list of all rmq sentry units |
1130 | + :param port: optional ssl port override to validate |
1131 | + :returns: None if successful, otherwise return error message |
1132 | + """ |
1133 | + for sentry_unit in sentry_units: |
1134 | + if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port): |
1135 | + return ('Unexpected condition: ssl is disabled on unit ' |
1136 | + '({})'.format(sentry_unit.info['unit_name'])) |
1137 | + return None |
1138 | + |
1139 | + def validate_rmq_ssl_disabled_units(self, sentry_units): |
1140	|	+        """Check that ssl is disabled on listed rmq juju sentry units.
1141 | + |
1142 | + :param sentry_units: list of all rmq sentry units |
1143 | + :returns: True if successful. Raise on error. |
1144	|	+        :returns: None if successful, otherwise return error message
1145 | + for sentry_unit in sentry_units: |
1146 | + if self.rmq_ssl_is_enabled_on_unit(sentry_unit): |
1147 | + return ('Unexpected condition: ssl is enabled on unit ' |
1148 | + '({})'.format(sentry_unit.info['unit_name'])) |
1149 | + return None |
1150 | + |
1151 | + def configure_rmq_ssl_on(self, sentry_units, deployment, |
1152 | + port=None, max_wait=60): |
1153 | + """Turn ssl charm config option on, with optional non-default |
1154 | + ssl port specification. Confirm that it is enabled on every |
1155 | + unit. |
1156 | + |
1157 | + :param sentry_units: list of sentry units |
1158 | + :param deployment: amulet deployment object pointer |
1159 | + :param port: amqp port, use defaults if None |
1160 | + :param max_wait: maximum time to wait in seconds to confirm |
1161 | + :returns: None if successful. Raise on error. |
1162 | + """ |
1163 | + self.log.debug('Setting ssl charm config option: on') |
1164 | + |
1165 | + # Enable RMQ SSL |
1166 | + config = {'ssl': 'on'} |
1167 | + if port: |
1168 | + config['ssl_port'] = port |
1169 | + |
1170 | + deployment.configure('rabbitmq-server', config) |
1171 | + |
1172 | + # Confirm |
1173 | + tries = 0 |
1174 | + ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port) |
1175 | + while ret and tries < (max_wait / 4): |
1176 | + time.sleep(4) |
1177 | + self.log.debug('Attempt {}: {}'.format(tries, ret)) |
1178 | + ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port) |
1179 | + tries += 1 |
1180 | + |
1181 | + if ret: |
1182 | + amulet.raise_status(amulet.FAIL, ret) |
1183 | + |
1184 | + def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60): |
1185 | + """Turn ssl charm config option off, confirm that it is disabled |
1186 | + on every unit. |
1187 | + |
1188 | + :param sentry_units: list of sentry units |
1189 | + :param deployment: amulet deployment object pointer |
1190 | + :param max_wait: maximum time to wait in seconds to confirm |
1191 | + :returns: None if successful. Raise on error. |
1192 | + """ |
1193 | + self.log.debug('Setting ssl charm config option: off') |
1194 | + |
1195 | + # Disable RMQ SSL |
1196 | + config = {'ssl': 'off'} |
1197 | + deployment.configure('rabbitmq-server', config) |
1198 | + |
1199 | + # Confirm |
1200 | + tries = 0 |
1201 | + ret = self.validate_rmq_ssl_disabled_units(sentry_units) |
1202 | + while ret and tries < (max_wait / 4): |
1203 | + time.sleep(4) |
1204 | + self.log.debug('Attempt {}: {}'.format(tries, ret)) |
1205 | + ret = self.validate_rmq_ssl_disabled_units(sentry_units) |
1206 | + tries += 1 |
1207 | + |
1208 | + if ret: |
1209 | + amulet.raise_status(amulet.FAIL, ret) |
1210 | + |
1211 | + def connect_amqp_by_unit(self, sentry_unit, ssl=False, |
1212 | + port=None, fatal=True, |
1213 | + username="testuser1", password="changeme"): |
1214 | + """Establish and return a pika amqp connection to the rabbitmq service |
1215 | + running on a rmq juju unit. |
1216 | + |
1217 | + :param sentry_unit: sentry unit pointer |
1218 | + :param ssl: boolean, default to False |
1219 | + :param port: amqp port, use defaults if None |
1220 | + :param fatal: boolean, default to True (raises on connect error) |
1221 | + :param username: amqp user name, default to testuser1 |
1222 | + :param password: amqp user password |
1223 | + :returns: pika amqp connection pointer or None if failed and non-fatal |
1224 | + """ |
1225 | + host = sentry_unit.info['public-address'] |
1226 | + unit_name = sentry_unit.info['unit_name'] |
1227 | + |
1228 | + # Default port logic if port is not specified |
1229 | + if ssl and not port: |
1230 | + port = 5671 |
1231 | + elif not ssl and not port: |
1232 | + port = 5672 |
1233 | + |
1234 | + self.log.debug('Connecting to amqp on {}:{} ({}) as ' |
1235 | + '{}...'.format(host, port, unit_name, username)) |
1236 | + |
1237 | + try: |
1238 | + credentials = pika.PlainCredentials(username, password) |
1239 | + parameters = pika.ConnectionParameters(host=host, port=port, |
1240 | + credentials=credentials, |
1241 | + ssl=ssl, |
1242 | + connection_attempts=3, |
1243 | + retry_delay=5, |
1244 | + socket_timeout=1) |
1245 | + connection = pika.BlockingConnection(parameters) |
1246 | + assert connection.server_properties['product'] == 'RabbitMQ' |
1247 | + self.log.debug('Connect OK') |
1248 | + return connection |
1249 | + except Exception as e: |
1250 | + msg = ('amqp connection failed to {}:{} as ' |
1251 | + '{} ({})'.format(host, port, username, str(e))) |
1252 | + if fatal: |
1253 | + amulet.raise_status(amulet.FAIL, msg) |
1254 | + else: |
1255 | + self.log.warn(msg) |
1256 | + return None |
1257 | + |
1258 | + def publish_amqp_message_by_unit(self, sentry_unit, message, |
1259 | + queue="test", ssl=False, |
1260 | + username="testuser1", |
1261 | + password="changeme", |
1262 | + port=None): |
1263 | + """Publish an amqp message to a rmq juju unit. |
1264 | + |
1265 | + :param sentry_unit: sentry unit pointer |
1266 | + :param message: amqp message string |
1267 | + :param queue: message queue, default to test |
1268 | + :param username: amqp user name, default to testuser1 |
1269 | + :param password: amqp user password |
1270 | + :param ssl: boolean, default to False |
1271 | + :param port: amqp port, use defaults if None |
1272 | + :returns: None. Raises exception if publish failed. |
1273 | + """ |
1274 | + self.log.debug('Publishing message to {} queue:\n{}'.format(queue, |
1275 | + message)) |
1276 | + connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl, |
1277 | + port=port, |
1278 | + username=username, |
1279 | + password=password) |
1280 | + |
1281 | + # NOTE(beisner): extra debug here re: pika hang potential: |
1282 | + # https://github.com/pika/pika/issues/297 |
1283 | + # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw |
1284 | + self.log.debug('Defining channel...') |
1285 | + channel = connection.channel() |
1286 | + self.log.debug('Declaring queue...') |
1287 | + channel.queue_declare(queue=queue, auto_delete=False, durable=True) |
1288 | + self.log.debug('Publishing message...') |
1289 | + channel.basic_publish(exchange='', routing_key=queue, body=message) |
1290 | + self.log.debug('Closing channel...') |
1291 | + channel.close() |
1292 | + self.log.debug('Closing connection...') |
1293 | + connection.close() |
1294 | + |
1295 | + def get_amqp_message_by_unit(self, sentry_unit, queue="test", |
1296 | + username="testuser1", |
1297 | + password="changeme", |
1298 | + ssl=False, port=None): |
1299 | + """Get an amqp message from a rmq juju unit. |
1300 | + |
1301 | + :param sentry_unit: sentry unit pointer |
1302 | + :param queue: message queue, default to test |
1303 | + :param username: amqp user name, default to testuser1 |
1304 | + :param password: amqp user password |
1305 | + :param ssl: boolean, default to False |
1306 | + :param port: amqp port, use defaults if None |
1307 | + :returns: amqp message body as string. Raise if get fails. |
1308 | + """ |
1309 | + connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl, |
1310 | + port=port, |
1311 | + username=username, |
1312 | + password=password) |
1313 | + channel = connection.channel() |
1314 | + method_frame, _, body = channel.basic_get(queue) |
1315 | + |
1316 | + if method_frame: |
1317	|	+            self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
1318 | + body)) |
1319 | + channel.basic_ack(method_frame.delivery_tag) |
1320 | + channel.close() |
1321 | + connection.close() |
1322 | + return body |
1323 | + else: |
1324 | + msg = 'No message retrieved.' |
1325 | + amulet.raise_status(amulet.FAIL, msg) |
1326 | |
1327 | === modified file 'unit_tests/test_cinder_hooks.py' |
1328 | --- unit_tests/test_cinder_hooks.py 2015-09-10 09:34:26 +0000 |
1329 | +++ unit_tests/test_cinder_hooks.py 2015-09-14 10:08:16 +0000 |
1330 | @@ -18,11 +18,12 @@ |
1331 | 'register_configs', |
1332 | 'restart_map', |
1333 | 'set_ceph_env_variables', |
1334 | + 'is_request_complete', |
1335 | + 'send_request_if_needed', |
1336 | 'CONFIGS', |
1337 | # charmhelpers.core.hookenv |
1338 | 'config', |
1339 | 'relation_ids', |
1340 | - 'relation_get', |
1341 | 'relation_set', |
1342 | 'service_name', |
1343 | 'service_restart', |
1344 | @@ -69,9 +70,8 @@ |
1345 | @patch('charmhelpers.core.hookenv.config') |
1346 | def test_ceph_changed(self, mock_config): |
1347 | '''It ensures ceph assets created on ceph changed''' |
1348 | + self.is_request_complete.return_value = True |
1349 | self.CONFIGS.complete_contexts.return_value = ['ceph'] |
1350 | - rsp = json.dumps({'exit_code': 0}) |
1351 | - self.relation_get.return_value = {'broker_rsp': rsp} |
1352 | self.service_name.return_value = 'cinder' |
1353 | self.ensure_ceph_keyring.return_value = True |
1354 | hooks.hooks.execute(['hooks/ceph-relation-changed']) |
1355 | @@ -81,11 +81,27 @@ |
1356 | self.assertTrue(self.CONFIGS.write_all.called) |
1357 | self.set_ceph_env_variables.assert_called_with(service='cinder') |
1358 | |
1359 | + @patch.object(hooks, 'get_ceph_request') |
1360 | + @patch('charmhelpers.core.hookenv.config') |
1361 | + def test_ceph_changed_newrq(self, mock_config, mock_get_ceph_request): |
1362 | + '''It ensures ceph assets created on ceph changed''' |
1363 | + mock_get_ceph_request.return_value = 'cephreq' |
1364 | + self.is_request_complete.return_value = False |
1365 | + self.CONFIGS.complete_contexts.return_value = ['ceph'] |
1366 | + self.service_name.return_value = 'cinder' |
1367 | + self.ensure_ceph_keyring.return_value = True |
1368 | + hooks.hooks.execute(['hooks/ceph-relation-changed']) |
1369 | + self.ensure_ceph_keyring.assert_called_with(service='cinder', |
1370 | + user='cinder', |
1371 | + group='cinder') |
1372 | + self.send_request_if_needed.assert_called_with('cephreq') |
1373 | + |
1374 | @patch('charmhelpers.core.hookenv.config') |
1375 | def test_ceph_changed_no_keys(self, mock_config): |
1376 | '''It ensures ceph assets created on ceph changed''' |
1377 | self.CONFIGS.complete_contexts.return_value = ['ceph'] |
1378 | self.service_name.return_value = 'cinder' |
1379 | + self.is_request_complete.return_value = True |
1380 | self.ensure_ceph_keyring.return_value = False |
1381 | hooks.hooks.execute(['hooks/ceph-relation-changed']) |
1382 | # NOTE(jamespage): If ensure_ceph keyring fails, then |
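Reviewer note on the new `validate_sectionless_conf` helper earlier in the diff: its "crude conf parser" behaviour is easy to check standalone. The sketch below is not from the diff; it reimplements the same key=value scan but collects mismatches instead of calling `amulet.raise_status`, so it can be run without a deployment (the sample conf contents are invented):

```python
def find_conf_mismatches(file_contents, expected):
    """Same naive scan as validate_sectionless_conf: split each line on
    '=', compare stripped key/value pairs against the expected dict."""
    mismatches = []
    for line in file_contents.split('\n'):
        if '=' not in line:
            continue
        args = line.split('=')
        if len(args) <= 1:
            continue
        key, value = args[0].strip(), args[1].strip()
        if key in expected and expected[key] != value:
            mismatches.append((key, expected[key], value))
    return mismatches

conf = "WEBROOT = /horizon\nDEBUG = True\n# a comment\n"
print(find_conf_mismatches(conf, {'DEBUG': 'False'}))
```

One limitation carried over from the helper: only the text up to the second `=` is compared, so values that themselves contain `=` are truncated.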
charm_lint_check #9593 cinder-ceph-next for gnuoy mp269379
LINT OK: passed
Build: http:// 10.245. 162.77: 8080/job/ charm_lint_ check/9593/