Merge lp:~gnuoy/charms/trusty/glance/1453940 into lp:~openstack-charmers-archive/charms/trusty/glance/next

Proposed by Liam Young on 2015-08-20
Status: Merged
Merged at revision: 138
Proposed branch: lp:~gnuoy/charms/trusty/glance/1453940
Merge into: lp:~openstack-charmers-archive/charms/trusty/glance/next
Diff against target: 1305 lines (+886/-104) (has conflicts)
8 files modified
charmhelpers/contrib/openstack/context.py (+8/-9)
charmhelpers/contrib/storage/linux/ceph.py (+224/-2)
hooks/glance_relations.py (+13/-19)
templates/ceph.conf (+2/-1)
tests/charmhelpers/contrib/amulet/utils.py (+234/-52)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+20/-5)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+359/-0)
unit_tests/test_glance_relations.py (+26/-16)
Text conflict in unit_tests/test_glance_relations.py
To merge this branch: bzr merge lp:~gnuoy/charms/trusty/glance/1453940
Reviewer: Edward Hope-Morley
Date requested: 2015-08-20
Status: Approve (2015-09-11)
Review via email: mp+268605@code.launchpad.net

charm_lint_check #8424 glance-next for gnuoy mp268605
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12136056/
Build: http://10.245.162.77:8080/job/charm_lint_check/8424/

charm_unit_test #7818 glance-next for gnuoy mp268605
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/12136058/
Build: http://10.245.162.77:8080/job/charm_unit_test/7818/

charm_amulet_test #5931 glance-next for gnuoy mp268605
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/5931/

charm_lint_check #8481 glance-next for gnuoy mp268605
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12140600/
Build: http://10.245.162.77:8080/job/charm_lint_check/8481/

charm_unit_test #7872 glance-next for gnuoy mp268605
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/12140603/
Build: http://10.245.162.77:8080/job/charm_unit_test/7872/

charm_lint_check #8484 glance-next for gnuoy mp268605
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12141053/
Build: http://10.245.162.77:8080/job/charm_lint_check/8484/

charm_unit_test #7875 glance-next for gnuoy mp268605
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/12141054/
Build: http://10.245.162.77:8080/job/charm_unit_test/7875/

charm_amulet_test #5940 glance-next for gnuoy mp268605
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/5940/

charm_amulet_test #5943 glance-next for gnuoy mp268605
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/5943/

charm_lint_check #8547 glance-next for gnuoy mp268605
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12149010/
Build: http://10.245.162.77:8080/job/charm_lint_check/8547/

charm_unit_test #7935 glance-next for gnuoy mp268605
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/12149013/
Build: http://10.245.162.77:8080/job/charm_unit_test/7935/

charm_lint_check #8548 glance-next for gnuoy mp268605
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12149141/
Build: http://10.245.162.77:8080/job/charm_lint_check/8548/

charm_unit_test #7936 glance-next for gnuoy mp268605
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/12149142/
Build: http://10.245.162.77:8080/job/charm_unit_test/7936/

charm_lint_check #8549 glance-next for gnuoy mp268605
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12149391/
Build: http://10.245.162.77:8080/job/charm_lint_check/8549/

charm_unit_test #7937 glance-next for gnuoy mp268605
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/12149392/
Build: http://10.245.162.77:8080/job/charm_unit_test/7937/

charm_amulet_test #5962 glance-next for gnuoy mp268605
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/5962/

charm_amulet_test #5963 glance-next for gnuoy mp268605
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/5963/

charm_amulet_test #5964 glance-next for gnuoy mp268605
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/5964/

charm_lint_check #8648 glance-next for gnuoy mp268605
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12182292/
Build: http://10.245.162.77:8080/job/charm_lint_check/8648/

charm_unit_test #7985 glance-next for gnuoy mp268605
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/12182293/
Build: http://10.245.162.77:8080/job/charm_unit_test/7985/

charm_amulet_test #6013 glance-next for gnuoy mp268605
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6013/

charm_lint_check #8704 glance-next for gnuoy mp268605
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12193056/
Build: http://10.245.162.77:8080/job/charm_lint_check/8704/

charm_unit_test #8038 glance-next for gnuoy mp268605
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/12193057/
Build: http://10.245.162.77:8080/job/charm_unit_test/8038/

charm_amulet_test #6026 glance-next for gnuoy mp268605
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6026/

charm_lint_check #9383 glance-next for gnuoy mp268605
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/9383/

charm_unit_test #8679 glance-next for gnuoy mp268605
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/12273018/
Build: http://10.245.162.77:8080/job/charm_unit_test/8679/

charm_amulet_test #6248 glance-next for gnuoy mp268605
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6248/

152. By Liam Young on 2015-09-07

Charm helper sync

153. By Liam Young on 2015-09-07

Fix unit tests

charm_lint_check #9537 glance-next for gnuoy mp268605
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/9537/

charm_unit_test #8779 glance-next for gnuoy mp268605
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/8779/

charm_amulet_test #6309 glance-next for gnuoy mp268605
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6309/

154. By Liam Young on 2015-09-10

Charm helper sync

charm_lint_check #9700 glance-next for gnuoy mp268605
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/9700/

charm_unit_test #8933 glance-next for gnuoy mp268605
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/8933/

charm_amulet_test #6347 glance-next for gnuoy mp268605
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12329523/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6347/

Edward Hope-Morley (hopem) wrote:

LGTM +1 (think amulet fail is unrelated)

review: Approve
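
This change drops glance's hand-rolled broker_req/broker_rsp handling in favour of the request helpers synced from charm-helpers (is_request_complete, send_request_if_needed). A minimal sketch of the resulting hook flow, based on the hooks/glance_relations.py and charmhelpers changes in the diff below; hook registration and restart_on_change handling are omitted for brevity:

from charmhelpers.core.hookenv import config, log, service_name
from charmhelpers.contrib.storage.linux.ceph import (
    CephBrokerRq,
    is_request_complete,
    send_request_if_needed,
)


def get_ceph_request():
    # Ask the ceph broker for a pool named after this service, using the
    # charm's configured replica count.
    rq = CephBrokerRq()
    rq.add_op_create_pool(name=service_name(),
                          replica_count=config('ceph-osd-replication-count'))
    return rq


def ceph_changed():
    if is_request_complete(get_ceph_request()):
        # An equivalent request has already been processed by ceph, so it is
        # now safe to write out configs and restart glance-api.
        log('Ceph broker request complete')
    else:
        # Sends the request at most once per relation; completion is checked
        # again on the next ceph-relation-changed hook.
        send_request_if_needed(get_ceph_request())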

Preview Diff

1=== modified file 'charmhelpers/contrib/openstack/context.py'
2--- charmhelpers/contrib/openstack/context.py 2015-09-03 09:41:01 +0000
3+++ charmhelpers/contrib/openstack/context.py 2015-09-10 09:35:29 +0000
4@@ -485,13 +485,15 @@
5
6 log('Generating template context for ceph', level=DEBUG)
7 mon_hosts = []
8- auth = None
9- key = None
10- use_syslog = str(config('use-syslog')).lower()
11+ ctxt = {
12+ 'use_syslog': str(config('use-syslog')).lower()
13+ }
14 for rid in relation_ids('ceph'):
15 for unit in related_units(rid):
16- auth = relation_get('auth', rid=rid, unit=unit)
17- key = relation_get('key', rid=rid, unit=unit)
18+ if not ctxt.get('auth'):
19+ ctxt['auth'] = relation_get('auth', rid=rid, unit=unit)
20+ if not ctxt.get('key'):
21+ ctxt['key'] = relation_get('key', rid=rid, unit=unit)
22 ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
23 unit=unit)
24 unit_priv_addr = relation_get('private-address', rid=rid,
25@@ -500,10 +502,7 @@
26 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
27 mon_hosts.append(ceph_addr)
28
29- ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
30- 'auth': auth,
31- 'key': key,
32- 'use_syslog': use_syslog}
33+ ctxt['mon_hosts'] = ' '.join(sorted(mon_hosts))
34
35 if not os.path.isdir('/etc/ceph'):
36 os.mkdir('/etc/ceph')
37
38=== modified file 'charmhelpers/contrib/storage/linux/ceph.py'
39--- charmhelpers/contrib/storage/linux/ceph.py 2015-07-16 20:17:23 +0000
40+++ charmhelpers/contrib/storage/linux/ceph.py 2015-09-10 09:35:29 +0000
41@@ -28,6 +28,7 @@
42 import shutil
43 import json
44 import time
45+import uuid
46
47 from subprocess import (
48 check_call,
49@@ -35,8 +36,10 @@
50 CalledProcessError,
51 )
52 from charmhelpers.core.hookenv import (
53+ local_unit,
54 relation_get,
55 relation_ids,
56+ relation_set,
57 related_units,
58 log,
59 DEBUG,
60@@ -411,17 +414,52 @@
61
62 The API is versioned and defaults to version 1.
63 """
64- def __init__(self, api_version=1):
65+ def __init__(self, api_version=1, request_id=None):
66 self.api_version = api_version
67+ if request_id:
68+ self.request_id = request_id
69+ else:
70+ self.request_id = str(uuid.uuid1())
71 self.ops = []
72
73 def add_op_create_pool(self, name, replica_count=3):
74 self.ops.append({'op': 'create-pool', 'name': name,
75 'replicas': replica_count})
76
77+ def set_ops(self, ops):
78+ """Set request ops to provided value.
79+
80+ Useful for injecting ops that come from a previous request
81+ to allow comparisons to ensure validity.
82+ """
83+ self.ops = ops
84+
85 @property
86 def request(self):
87- return json.dumps({'api-version': self.api_version, 'ops': self.ops})
88+ return json.dumps({'api-version': self.api_version, 'ops': self.ops,
89+ 'request-id': self.request_id})
90+
91+ def _ops_equal(self, other):
92+ if len(self.ops) == len(other.ops):
93+ for req_no in range(0, len(self.ops)):
94+ for key in ['replicas', 'name', 'op']:
95+ if self.ops[req_no][key] != other.ops[req_no][key]:
96+ return False
97+ else:
98+ return False
99+ return True
100+
101+ def __eq__(self, other):
102+ if not isinstance(other, self.__class__):
103+ return False
104+ if self.api_version == other.api_version and \
105+ self._ops_equal(other):
106+ return True
107+ else:
108+ return False
109+
110+ def __ne__(self, other):
111+ return not self.__eq__(other)
112
113
114 class CephBrokerRsp(object):
115@@ -431,14 +469,198 @@
116
117 The API is versioned and defaults to version 1.
118 """
119+
120 def __init__(self, encoded_rsp):
121 self.api_version = None
122 self.rsp = json.loads(encoded_rsp)
123
124 @property
125+ def request_id(self):
126+ return self.rsp.get('request-id')
127+
128+ @property
129 def exit_code(self):
130 return self.rsp.get('exit-code')
131
132 @property
133 def exit_msg(self):
134 return self.rsp.get('stderr')
135+
136+
137+# Ceph Broker Conversation:
138+# If a charm needs an action to be taken by ceph it can create a CephBrokerRq
139+# and send that request to ceph via the ceph relation. The CephBrokerRq has a
140+# unique id so that the client can identity which CephBrokerRsp is associated
141+# with the request. Ceph will also respond to each client unit individually
142+# creating a response key per client unit eg glance/0 will get a CephBrokerRsp
143+# via key broker-rsp-glance-0
144+#
145+# To use this the charm can just do something like:
146+#
147+# from charmhelpers.contrib.storage.linux.ceph import (
148+# send_request_if_needed,
149+# is_request_complete,
150+# CephBrokerRq,
151+# )
152+#
153+# @hooks.hook('ceph-relation-changed')
154+# def ceph_changed():
155+# rq = CephBrokerRq()
156+# rq.add_op_create_pool(name='poolname', replica_count=3)
157+#
158+# if is_request_complete(rq):
159+# <Request complete actions>
160+# else:
161+# send_request_if_needed(get_ceph_request())
162+#
163+# CephBrokerRq and CephBrokerRsp are serialized into JSON. Below is an example
164+# of glance having sent a request to ceph which ceph has successfully processed
165+# 'ceph:8': {
166+# 'ceph/0': {
167+# 'auth': 'cephx',
168+# 'broker-rsp-glance-0': '{"request-id": "0bc7dc54", "exit-code": 0}',
169+# 'broker_rsp': '{"request-id": "0da543b8", "exit-code": 0}',
170+# 'ceph-public-address': '10.5.44.103',
171+# 'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==',
172+# 'private-address': '10.5.44.103',
173+# },
174+# 'glance/0': {
175+# 'broker_req': ('{"api-version": 1, "request-id": "0bc7dc54", '
176+# '"ops": [{"replicas": 3, "name": "glance", '
177+# '"op": "create-pool"}]}'),
178+# 'private-address': '10.5.44.109',
179+# },
180+# }
181+
182+def get_previous_request(rid):
183+ """Return the last ceph broker request sent on a given relation
184+
185+ @param rid: Relation id to query for request
186+ """
187+ request = None
188+ broker_req = relation_get(attribute='broker_req', rid=rid,
189+ unit=local_unit())
190+ if broker_req:
191+ request_data = json.loads(broker_req)
192+ request = CephBrokerRq(api_version=request_data['api-version'],
193+ request_id=request_data['request-id'])
194+ request.set_ops(request_data['ops'])
195+
196+ return request
197+
198+
199+def get_request_states(request):
200+ """Return a dict of requests per relation id with their corresponding
201+ completion state.
202+
203+ This allows a charm, which has a request for ceph, to see whether there is
204+ an equivalent request already being processed and if so what state that
205+ request is in.
206+
207+ @param request: A CephBrokerRq object
208+ """
209+ complete = []
210+ requests = {}
211+ for rid in relation_ids('ceph'):
212+ complete = False
213+ previous_request = get_previous_request(rid)
214+ if request == previous_request:
215+ sent = True
216+ complete = is_request_complete_for_rid(previous_request, rid)
217+ else:
218+ sent = False
219+ complete = False
220+
221+ requests[rid] = {
222+ 'sent': sent,
223+ 'complete': complete,
224+ }
225+
226+ return requests
227+
228+
229+def is_request_sent(request):
230+ """Check to see if a functionally equivalent request has already been sent
231+
232+ Returns True if a similair request has been sent
233+
234+ @param request: A CephBrokerRq object
235+ """
236+ states = get_request_states(request)
237+ for rid in states.keys():
238+ if not states[rid]['sent']:
239+ return False
240+
241+ return True
242+
243+
244+def is_request_complete(request):
245+ """Check to see if a functionally equivalent request has already been
246+ completed
247+
248+ Returns True if a similair request has been completed
249+
250+ @param request: A CephBrokerRq object
251+ """
252+ states = get_request_states(request)
253+ for rid in states.keys():
254+ if not states[rid]['complete']:
255+ return False
256+
257+ return True
258+
259+
260+def is_request_complete_for_rid(request, rid):
261+ """Check if a given request has been completed on the given relation
262+
263+ @param request: A CephBrokerRq object
264+ @param rid: Relation ID
265+ """
266+ broker_key = get_broker_rsp_key()
267+ for unit in related_units(rid):
268+ rdata = relation_get(rid=rid, unit=unit)
269+ if rdata.get(broker_key):
270+ rsp = CephBrokerRsp(rdata.get(broker_key))
271+ if rsp.request_id == request.request_id:
272+ if not rsp.exit_code:
273+ return True
274+ else:
275+ # The remote unit sent no reply targeted at this unit so either the
276+ # remote ceph cluster does not support unit targeted replies or it
277+ # has not processed our request yet.
278+ if rdata.get('broker_rsp'):
279+ request_data = json.loads(rdata['broker_rsp'])
280+ if request_data.get('request-id'):
281+ log('Ignoring legacy broker_rsp without unit key as remote '
282+ 'service supports unit specific replies', level=DEBUG)
283+ else:
284+ log('Using legacy broker_rsp as remote service does not '
285+ 'supports unit specific replies', level=DEBUG)
286+ rsp = CephBrokerRsp(rdata['broker_rsp'])
287+ if not rsp.exit_code:
288+ return True
289+
290+ return False
291+
292+
293+def get_broker_rsp_key():
294+ """Return broker response key for this unit
295+
296+ This is the key that ceph is going to use to pass request status
297+ information back to this unit
298+ """
299+ return 'broker-rsp-' + local_unit().replace('/', '-')
300+
301+
302+def send_request_if_needed(request):
303+ """Send broker request if an equivalent request has not already been sent
304+
305+ @param request: A CephBrokerRq object
306+ """
307+ if is_request_sent(request):
308+ log('Request already sent but not complete, not sending new request',
309+ level=DEBUG)
310+ else:
311+ for rid in relation_ids('ceph'):
312+ log('Sending request {}'.format(request.request_id), level=DEBUG)
313+ relation_set(relation_id=rid, broker_req=request.request)
314
315=== modified file 'hooks/glance_relations.py'
316--- hooks/glance_relations.py 2015-09-01 06:57:55 +0000
317+++ hooks/glance_relations.py 2015-09-10 09:35:29 +0000
318@@ -29,7 +29,6 @@
319 config,
320 Hooks,
321 log as juju_log,
322- INFO,
323 ERROR,
324 open_port,
325 is_relation_made,
326@@ -66,9 +65,10 @@
327 sync_db_with_multi_ipv6_addresses,
328 )
329 from charmhelpers.contrib.storage.linux.ceph import (
330+ send_request_if_needed,
331+ is_request_complete,
332 ensure_ceph_keyring,
333 CephBrokerRq,
334- CephBrokerRsp,
335 delete_keyring,
336 )
337 from charmhelpers.payload.execd import (
338@@ -240,6 +240,14 @@
339 apt_install(['ceph-common', 'python-ceph'])
340
341
342+def get_ceph_request():
343+ service = service_name()
344+ rq = CephBrokerRq()
345+ replicas = config('ceph-osd-replication-count')
346+ rq.add_op_create_pool(name=service, replica_count=replicas)
347+ return rq
348+
349+
350 @hooks.hook('ceph-relation-changed')
351 @restart_on_change(restart_map())
352 def ceph_changed():
353@@ -253,29 +261,15 @@
354 juju_log('Could not create ceph keyring: peer not ready?')
355 return
356
357- settings = relation_get()
358- if settings and 'broker_rsp' in settings:
359- rsp = CephBrokerRsp(settings['broker_rsp'])
360- # Non-zero return code implies failure
361- if rsp.exit_code:
362- juju_log("Ceph broker request failed (rc=%s, msg=%s)" %
363- (rsp.exit_code, rsp.exit_msg), level=ERROR)
364- return
365-
366- juju_log("Ceph broker request succeeded (rc=%s, msg=%s)" %
367- (rsp.exit_code, rsp.exit_msg), level=INFO)
368+ if is_request_complete(get_ceph_request()):
369+ juju_log('Request complete')
370 CONFIGS.write(GLANCE_API_CONF)
371 CONFIGS.write(ceph_config_file())
372 # Ensure that glance-api is restarted since only now can we
373 # guarantee that ceph resources are ready.
374 service_restart('glance-api')
375 else:
376- rq = CephBrokerRq()
377- replicas = config('ceph-osd-replication-count')
378- rq.add_op_create_pool(name=service, replica_count=replicas)
379- for rid in relation_ids('ceph'):
380- relation_set(relation_id=rid, broker_req=rq.request)
381- juju_log("Request(s) sent to Ceph broker (rid=%s)" % (rid))
382+ send_request_if_needed(get_ceph_request())
383
384
385 @hooks.hook('ceph-relation-broken')
386
387=== modified file 'templates/ceph.conf'
388--- templates/ceph.conf 2014-03-25 18:44:22 +0000
389+++ templates/ceph.conf 2015-09-10 09:35:29 +0000
390@@ -10,7 +10,8 @@
391 keyring = /etc/ceph/ceph.$name.keyring
392 mon host = {{ mon_hosts }}
393 {% endif -%}
394+{% if use_syslog -%}
395 log to syslog = {{ use_syslog }}
396 err to syslog = {{ use_syslog }}
397 clog to syslog = {{ use_syslog }}
398-
399+{% endif -%}
400
401=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
402--- tests/charmhelpers/contrib/amulet/utils.py 2015-08-18 17:34:34 +0000
403+++ tests/charmhelpers/contrib/amulet/utils.py 2015-09-10 09:35:29 +0000
404@@ -19,9 +19,11 @@
405 import logging
406 import os
407 import re
408+import socket
409 import subprocess
410 import sys
411 import time
412+import uuid
413
414 import amulet
415 import distro_info
416@@ -114,7 +116,7 @@
417 # /!\ DEPRECATION WARNING (beisner):
418 # New and existing tests should be rewritten to use
419 # validate_services_by_name() as it is aware of init systems.
420- self.log.warn('/!\\ DEPRECATION WARNING: use '
421+ self.log.warn('DEPRECATION WARNING: use '
422 'validate_services_by_name instead of validate_services '
423 'due to init system differences.')
424
425@@ -269,33 +271,52 @@
426 """Get last modification time of directory."""
427 return sentry_unit.directory_stat(directory)['mtime']
428
429- def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False):
430- """Get process' start time.
431-
432- Determine start time of the process based on the last modification
433- time of the /proc/pid directory. If pgrep_full is True, the process
434- name is matched against the full command line.
435- """
436- if pgrep_full:
437- cmd = 'pgrep -o -f {}'.format(service)
438- else:
439- cmd = 'pgrep -o {}'.format(service)
440- cmd = cmd + ' | grep -v pgrep || exit 0'
441- cmd_out = sentry_unit.run(cmd)
442- self.log.debug('CMDout: ' + str(cmd_out))
443- if cmd_out[0]:
444- self.log.debug('Pid for %s %s' % (service, str(cmd_out[0])))
445- proc_dir = '/proc/{}'.format(cmd_out[0].strip())
446- return self._get_dir_mtime(sentry_unit, proc_dir)
447+ def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None):
448+ """Get start time of a process based on the last modification time
449+ of the /proc/pid directory.
450+
451+ :sentry_unit: The sentry unit to check for the service on
452+ :service: service name to look for in process table
453+ :pgrep_full: [Deprecated] Use full command line search mode with pgrep
454+ :returns: epoch time of service process start
455+ :param commands: list of bash commands
456+ :param sentry_units: list of sentry unit pointers
457+ :returns: None if successful; Failure message otherwise
458+ """
459+ if pgrep_full is not None:
460+ # /!\ DEPRECATION WARNING (beisner):
461+ # No longer implemented, as pidof is now used instead of pgrep.
462+ # https://bugs.launchpad.net/charm-helpers/+bug/1474030
463+ self.log.warn('DEPRECATION WARNING: pgrep_full bool is no '
464+ 'longer implemented re: lp 1474030.')
465+
466+ pid_list = self.get_process_id_list(sentry_unit, service)
467+ pid = pid_list[0]
468+ proc_dir = '/proc/{}'.format(pid)
469+ self.log.debug('Pid for {} on {}: {}'.format(
470+ service, sentry_unit.info['unit_name'], pid))
471+
472+ return self._get_dir_mtime(sentry_unit, proc_dir)
473
474 def service_restarted(self, sentry_unit, service, filename,
475- pgrep_full=False, sleep_time=20):
476+ pgrep_full=None, sleep_time=20):
477 """Check if service was restarted.
478
479 Compare a service's start time vs a file's last modification time
480 (such as a config file for that service) to determine if the service
481 has been restarted.
482 """
483+ # /!\ DEPRECATION WARNING (beisner):
484+ # This method is prone to races in that no before-time is known.
485+ # Use validate_service_config_changed instead.
486+
487+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
488+ # used instead of pgrep. pgrep_full is still passed through to ensure
489+ # deprecation WARNS. lp1474030
490+ self.log.warn('DEPRECATION WARNING: use '
491+ 'validate_service_config_changed instead of '
492+ 'service_restarted due to known races.')
493+
494 time.sleep(sleep_time)
495 if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
496 self._get_file_mtime(sentry_unit, filename)):
497@@ -304,15 +325,15 @@
498 return False
499
500 def service_restarted_since(self, sentry_unit, mtime, service,
501- pgrep_full=False, sleep_time=20,
502- retry_count=2):
503+ pgrep_full=None, sleep_time=20,
504+ retry_count=2, retry_sleep_time=30):
505 """Check if service was been started after a given time.
506
507 Args:
508 sentry_unit (sentry): The sentry unit to check for the service on
509 mtime (float): The epoch time to check against
510 service (string): service name to look for in process table
511- pgrep_full (boolean): Use full command line search mode with pgrep
512+ pgrep_full: [Deprecated] Use full command line search mode with pgrep
513 sleep_time (int): Seconds to sleep before looking for process
514 retry_count (int): If service is not found, how many times to retry
515
516@@ -321,30 +342,44 @@
517 False if service is older than mtime or if service was
518 not found.
519 """
520- self.log.debug('Checking %s restarted since %s' % (service, mtime))
521+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
522+ # used instead of pgrep. pgrep_full is still passed through to ensure
523+ # deprecation WARNS. lp1474030
524+
525+ unit_name = sentry_unit.info['unit_name']
526+ self.log.debug('Checking that %s service restarted since %s on '
527+ '%s' % (service, mtime, unit_name))
528 time.sleep(sleep_time)
529- proc_start_time = self._get_proc_start_time(sentry_unit, service,
530- pgrep_full)
531- while retry_count > 0 and not proc_start_time:
532- self.log.debug('No pid file found for service %s, will retry %i '
533- 'more times' % (service, retry_count))
534- time.sleep(30)
535- proc_start_time = self._get_proc_start_time(sentry_unit, service,
536- pgrep_full)
537- retry_count = retry_count - 1
538+ proc_start_time = None
539+ tries = 0
540+ while tries <= retry_count and not proc_start_time:
541+ try:
542+ proc_start_time = self._get_proc_start_time(sentry_unit,
543+ service,
544+ pgrep_full)
545+ self.log.debug('Attempt {} to get {} proc start time on {} '
546+ 'OK'.format(tries, service, unit_name))
547+ except IOError:
548+ # NOTE(beisner) - race avoidance, proc may not exist yet.
549+ # https://bugs.launchpad.net/charm-helpers/+bug/1474030
550+ self.log.debug('Attempt {} to get {} proc start time on {} '
551+ 'failed'.format(tries, service, unit_name))
552+ time.sleep(retry_sleep_time)
553+ tries += 1
554
555 if not proc_start_time:
556 self.log.warn('No proc start time found, assuming service did '
557 'not start')
558 return False
559 if proc_start_time >= mtime:
560- self.log.debug('proc start time is newer than provided mtime'
561- '(%s >= %s)' % (proc_start_time, mtime))
562+ self.log.debug('Proc start time is newer than provided mtime'
563+ '(%s >= %s) on %s (OK)' % (proc_start_time,
564+ mtime, unit_name))
565 return True
566 else:
567- self.log.warn('proc start time (%s) is older than provided mtime '
568- '(%s), service did not restart' % (proc_start_time,
569- mtime))
570+ self.log.warn('Proc start time (%s) is older than provided mtime '
571+ '(%s) on %s, service did not '
572+ 'restart' % (proc_start_time, mtime, unit_name))
573 return False
574
575 def config_updated_since(self, sentry_unit, filename, mtime,
576@@ -374,8 +409,9 @@
577 return False
578
579 def validate_service_config_changed(self, sentry_unit, mtime, service,
580- filename, pgrep_full=False,
581- sleep_time=20, retry_count=2):
582+ filename, pgrep_full=None,
583+ sleep_time=20, retry_count=2,
584+ retry_sleep_time=30):
585 """Check service and file were updated after mtime
586
587 Args:
588@@ -383,9 +419,10 @@
589 mtime (float): The epoch time to check against
590 service (string): service name to look for in process table
591 filename (string): The file to check mtime of
592- pgrep_full (boolean): Use full command line search mode with pgrep
593- sleep_time (int): Seconds to sleep before looking for process
594+ pgrep_full: [Deprecated] Use full command line search mode with pgrep
595+ sleep_time (int): Initial sleep in seconds to pass to test helpers
596 retry_count (int): If service is not found, how many times to retry
597+ retry_sleep_time (int): Time in seconds to wait between retries
598
599 Typical Usage:
600 u = OpenStackAmuletUtils(ERROR)
601@@ -402,15 +439,25 @@
602 mtime, False if service is older than mtime or if service was
603 not found or if filename was modified before mtime.
604 """
605- self.log.debug('Checking %s restarted since %s' % (service, mtime))
606- time.sleep(sleep_time)
607- service_restart = self.service_restarted_since(sentry_unit, mtime,
608- service,
609- pgrep_full=pgrep_full,
610- sleep_time=0,
611- retry_count=retry_count)
612- config_update = self.config_updated_since(sentry_unit, filename, mtime,
613- sleep_time=0)
614+
615+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
616+ # used instead of pgrep. pgrep_full is still passed through to ensure
617+ # deprecation WARNS. lp1474030
618+
619+ service_restart = self.service_restarted_since(
620+ sentry_unit, mtime,
621+ service,
622+ pgrep_full=pgrep_full,
623+ sleep_time=sleep_time,
624+ retry_count=retry_count,
625+ retry_sleep_time=retry_sleep_time)
626+
627+ config_update = self.config_updated_since(
628+ sentry_unit,
629+ filename,
630+ mtime,
631+ sleep_time=0)
632+
633 return service_restart and config_update
634
635 def get_sentry_time(self, sentry_unit):
636@@ -428,7 +475,6 @@
637 """Return a list of all Ubuntu releases in order of release."""
638 _d = distro_info.UbuntuDistroInfo()
639 _release_list = _d.all
640- self.log.debug('Ubuntu release list: {}'.format(_release_list))
641 return _release_list
642
643 def file_to_url(self, file_rel_path):
644@@ -568,6 +614,142 @@
645
646 return None
647
648+ def validate_sectionless_conf(self, file_contents, expected):
649+ """A crude conf parser. Useful to inspect configuration files which
650+ do not have section headers (as would be necessary in order to use
651+ the configparser). Such as openstack-dashboard or rabbitmq confs."""
652+ for line in file_contents.split('\n'):
653+ if '=' in line:
654+ args = line.split('=')
655+ if len(args) <= 1:
656+ continue
657+ key = args[0].strip()
658+ value = args[1].strip()
659+ if key in expected.keys():
660+ if expected[key] != value:
661+ msg = ('Config mismatch. Expected, actual: {}, '
662+ '{}'.format(expected[key], value))
663+ amulet.raise_status(amulet.FAIL, msg=msg)
664+
665+ def get_unit_hostnames(self, units):
666+ """Return a dict of juju unit names to hostnames."""
667+ host_names = {}
668+ for unit in units:
669+ host_names[unit.info['unit_name']] = \
670+ str(unit.file_contents('/etc/hostname').strip())
671+ self.log.debug('Unit host names: {}'.format(host_names))
672+ return host_names
673+
674+ def run_cmd_unit(self, sentry_unit, cmd):
675+ """Run a command on a unit, return the output and exit code."""
676+ output, code = sentry_unit.run(cmd)
677+ if code == 0:
678+ self.log.debug('{} `{}` command returned {} '
679+ '(OK)'.format(sentry_unit.info['unit_name'],
680+ cmd, code))
681+ else:
682+ msg = ('{} `{}` command returned {} '
683+ '{}'.format(sentry_unit.info['unit_name'],
684+ cmd, code, output))
685+ amulet.raise_status(amulet.FAIL, msg=msg)
686+ return str(output), code
687+
688+ def file_exists_on_unit(self, sentry_unit, file_name):
689+ """Check if a file exists on a unit."""
690+ try:
691+ sentry_unit.file_stat(file_name)
692+ return True
693+ except IOError:
694+ return False
695+ except Exception as e:
696+ msg = 'Error checking file {}: {}'.format(file_name, e)
697+ amulet.raise_status(amulet.FAIL, msg=msg)
698+
699+ def file_contents_safe(self, sentry_unit, file_name,
700+ max_wait=60, fatal=False):
701+ """Get file contents from a sentry unit. Wrap amulet file_contents
702+ with retry logic to address races where a file checks as existing,
703+ but no longer exists by the time file_contents is called.
704+ Return None if file not found. Optionally raise if fatal is True."""
705+ unit_name = sentry_unit.info['unit_name']
706+ file_contents = False
707+ tries = 0
708+ while not file_contents and tries < (max_wait / 4):
709+ try:
710+ file_contents = sentry_unit.file_contents(file_name)
711+ except IOError:
712+ self.log.debug('Attempt {} to open file {} from {} '
713+ 'failed'.format(tries, file_name,
714+ unit_name))
715+ time.sleep(4)
716+ tries += 1
717+
718+ if file_contents:
719+ return file_contents
720+ elif not fatal:
721+ return None
722+ elif fatal:
723+ msg = 'Failed to get file contents from unit.'
724+ amulet.raise_status(amulet.FAIL, msg)
725+
726+ def port_knock_tcp(self, host="localhost", port=22, timeout=15):
727+ """Open a TCP socket to check for a listening sevice on a host.
728+
729+ :param host: host name or IP address, default to localhost
730+ :param port: TCP port number, default to 22
731+ :param timeout: Connect timeout, default to 15 seconds
732+ :returns: True if successful, False if connect failed
733+ """
734+
735+ # Resolve host name if possible
736+ try:
737+ connect_host = socket.gethostbyname(host)
738+ host_human = "{} ({})".format(connect_host, host)
739+ except socket.error as e:
740+ self.log.warn('Unable to resolve address: '
741+ '{} ({}) Trying anyway!'.format(host, e))
742+ connect_host = host
743+ host_human = connect_host
744+
745+ # Attempt socket connection
746+ try:
747+ knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
748+ knock.settimeout(timeout)
749+ knock.connect((connect_host, port))
750+ knock.close()
751+ self.log.debug('Socket connect OK for host '
752+ '{} on port {}.'.format(host_human, port))
753+ return True
754+ except socket.error as e:
755+ self.log.debug('Socket connect FAIL for'
756+ ' {} port {} ({})'.format(host_human, port, e))
757+ return False
758+
759+ def port_knock_units(self, sentry_units, port=22,
760+ timeout=15, expect_success=True):
761+ """Open a TCP socket to check for a listening sevice on each
762+ listed juju unit.
763+
764+ :param sentry_units: list of sentry unit pointers
765+ :param port: TCP port number, default to 22
766+ :param timeout: Connect timeout, default to 15 seconds
767+ :expect_success: True by default, set False to invert logic
768+ :returns: None if successful, Failure message otherwise
769+ """
770+ for unit in sentry_units:
771+ host = unit.info['public-address']
772+ connected = self.port_knock_tcp(host, port, timeout)
773+ if not connected and expect_success:
774+ return 'Socket connect failed.'
775+ elif connected and not expect_success:
776+ return 'Socket connected unexpectedly.'
777+
778+ def get_uuid_epoch_stamp(self):
779+ """Returns a stamp string based on uuid4 and epoch time. Useful in
780+ generating test messages which need to be unique-ish."""
781+ return '[{}-{}]'.format(uuid.uuid4(), time.time())
782+
783+# amulet juju action helpers:
784 def run_action(self, unit_sentry, action,
785 _check_output=subprocess.check_output):
786 """Run the named action on a given unit sentry.
787
788=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
789--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-08-18 17:34:34 +0000
790+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-10 09:35:29 +0000
791@@ -44,8 +44,15 @@
792 Determine if the local branch being tested is derived from its
793 stable or next (dev) branch, and based on this, use the corresonding
794 stable or next branches for the other_services."""
795+
796+ # Charms outside the lp:~openstack-charmers namespace
797 base_charms = ['mysql', 'mongodb', 'nrpe']
798
799+ # Force these charms to current series even when using an older series.
800+ # ie. Use trusty/nrpe even when series is precise, as the P charm
801+ # does not possess the necessary external master config and hooks.
802+ force_series_current = ['nrpe']
803+
804 if self.series in ['precise', 'trusty']:
805 base_series = self.series
806 else:
807@@ -53,11 +60,17 @@
808
809 if self.stable:
810 for svc in other_services:
811+ if svc['name'] in force_series_current:
812+ base_series = self.current_next
813+
814 temp = 'lp:charms/{}/{}'
815 svc['location'] = temp.format(base_series,
816 svc['name'])
817 else:
818 for svc in other_services:
819+ if svc['name'] in force_series_current:
820+ base_series = self.current_next
821+
822 if svc['name'] in base_charms:
823 temp = 'lp:charms/{}/{}'
824 svc['location'] = temp.format(base_series,
825@@ -77,21 +90,23 @@
826
827 services = other_services
828 services.append(this_service)
829+
830+ # Charms which should use the source config option
831 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
832 'ceph-osd', 'ceph-radosgw']
833- # Most OpenStack subordinate charms do not expose an origin option
834- # as that is controlled by the principle.
835- ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
836+
837+ # Charms which can not use openstack-origin, ie. many subordinates
838+ no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
839
840 if self.openstack:
841 for svc in services:
842- if svc['name'] not in use_source + ignore:
843+ if svc['name'] not in use_source + no_origin:
844 config = {'openstack-origin': self.openstack}
845 self.d.configure(svc['name'], config)
846
847 if self.source:
848 for svc in services:
849- if svc['name'] in use_source and svc['name'] not in ignore:
850+ if svc['name'] in use_source and svc['name'] not in no_origin:
851 config = {'source': self.source}
852 self.d.configure(svc['name'], config)
853
854
855=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
856--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-16 20:17:23 +0000
857+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-10 09:35:29 +0000
858@@ -27,6 +27,7 @@
859 import heatclient.v1.client as heat_client
860 import keystoneclient.v2_0 as keystone_client
861 import novaclient.v1_1.client as nova_client
862+import pika
863 import swiftclient
864
865 from charmhelpers.contrib.amulet.utils import (
866@@ -602,3 +603,361 @@
867 self.log.debug('Ceph {} samples (OK): '
868 '{}'.format(sample_type, samples))
869 return None
870+
871+# rabbitmq/amqp specific helpers:
872+ def add_rmq_test_user(self, sentry_units,
873+ username="testuser1", password="changeme"):
874+ """Add a test user via the first rmq juju unit, check connection as
875+ the new user against all sentry units.
876+
877+ :param sentry_units: list of sentry unit pointers
878+ :param username: amqp user name, default to testuser1
879+ :param password: amqp user password
880+ :returns: None if successful. Raise on error.
881+ """
882+ self.log.debug('Adding rmq user ({})...'.format(username))
883+
884+ # Check that user does not already exist
885+ cmd_user_list = 'rabbitmqctl list_users'
886+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
887+ if username in output:
888+ self.log.warning('User ({}) already exists, returning '
889+ 'gracefully.'.format(username))
890+ return
891+
892+ perms = '".*" ".*" ".*"'
893+ cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
894+ 'rabbitmqctl set_permissions {} {}'.format(username, perms)]
895+
896+ # Add user via first unit
897+ for cmd in cmds:
898+ output, _ = self.run_cmd_unit(sentry_units[0], cmd)
899+
900+ # Check connection against the other sentry_units
901+ self.log.debug('Checking user connect against units...')
902+ for sentry_unit in sentry_units:
903+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
904+ username=username,
905+ password=password)
906+ connection.close()
907+
908+ def delete_rmq_test_user(self, sentry_units, username="testuser1"):
909+ """Delete a rabbitmq user via the first rmq juju unit.
910+
911+ :param sentry_units: list of sentry unit pointers
912+ :param username: amqp user name, default to testuser1
913+ :param password: amqp user password
914+ :returns: None if successful or no such user.
915+ """
916+ self.log.debug('Deleting rmq user ({})...'.format(username))
917+
918+ # Check that the user exists
919+ cmd_user_list = 'rabbitmqctl list_users'
920+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
921+
922+ if username not in output:
923+ self.log.warning('User ({}) does not exist, returning '
924+ 'gracefully.'.format(username))
925+ return
926+
927+ # Delete the user
928+ cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
929+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
930+
931+ def get_rmq_cluster_status(self, sentry_unit):
932+ """Execute rabbitmq cluster status command on a unit and return
933+ the full output.
934+
935+ :param unit: sentry unit
936+ :returns: String containing console output of cluster status command
937+ """
938+ cmd = 'rabbitmqctl cluster_status'
939+ output, _ = self.run_cmd_unit(sentry_unit, cmd)
940+ self.log.debug('{} cluster_status:\n{}'.format(
941+ sentry_unit.info['unit_name'], output))
942+ return str(output)
943+
944+ def get_rmq_cluster_running_nodes(self, sentry_unit):
945+ """Parse rabbitmqctl cluster_status output string, return list of
946+ running rabbitmq cluster nodes.
947+
948+ :param unit: sentry unit
949+ :returns: List containing node names of running nodes
950+ """
951+ # NOTE(beisner): rabbitmqctl cluster_status output is not
952+ # json-parsable, do string chop foo, then json.loads that.
953+ str_stat = self.get_rmq_cluster_status(sentry_unit)
954+ if 'running_nodes' in str_stat:
955+ pos_start = str_stat.find("{running_nodes,") + 15
956+ pos_end = str_stat.find("]},", pos_start) + 1
957+ str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
958+ run_nodes = json.loads(str_run_nodes)
959+ return run_nodes
960+ else:
961+ return []
962+
963+ def validate_rmq_cluster_running_nodes(self, sentry_units):
964+ """Check that all rmq unit hostnames are represented in the
965+ cluster_status output of all units.
966+
967+ :param host_names: dict of juju unit names to host names
968+ :param units: list of sentry unit pointers (all rmq units)
969+ :returns: None if successful, otherwise return error message
970+ """
971+ host_names = self.get_unit_hostnames(sentry_units)
972+ errors = []
973+
974+ # Query every unit for cluster_status running nodes
975+ for query_unit in sentry_units:
976+ query_unit_name = query_unit.info['unit_name']
977+ running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
978+
979+ # Confirm that every unit is represented in the queried unit's
980+ # cluster_status running nodes output.
981+ for validate_unit in sentry_units:
982+ val_host_name = host_names[validate_unit.info['unit_name']]
983+ val_node_name = 'rabbit@{}'.format(val_host_name)
984+
985+ if val_node_name not in running_nodes:
986+ errors.append('Cluster member check failed on {}: {} not '
987+ 'in {}\n'.format(query_unit_name,
988+ val_node_name,
989+ running_nodes))
990+ if errors:
991+ return ''.join(errors)
992+
993+ def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
994+ """Check a single juju rmq unit for ssl and port in the config file."""
995+ host = sentry_unit.info['public-address']
996+ unit_name = sentry_unit.info['unit_name']
997+
998+ conf_file = '/etc/rabbitmq/rabbitmq.config'
999+ conf_contents = str(self.file_contents_safe(sentry_unit,
1000+ conf_file, max_wait=16))
1001+ # Checks
1002+ conf_ssl = 'ssl' in conf_contents
1003+ conf_port = str(port) in conf_contents
1004+
1005+ # Port explicitly checked in config
1006+ if port and conf_port and conf_ssl:
1007+ self.log.debug('SSL is enabled @{}:{} '
1008+ '({})'.format(host, port, unit_name))
1009+ return True
1010+ elif port and not conf_port and conf_ssl:
1011+ self.log.debug('SSL is enabled @{} but not on port {} '
1012+ '({})'.format(host, port, unit_name))
1013+ return False
1014+ # Port not checked (useful when checking that ssl is disabled)
1015+ elif not port and conf_ssl:
1016+ self.log.debug('SSL is enabled @{}:{} '
1017+ '({})'.format(host, port, unit_name))
1018+ return True
1019+ elif not port and not conf_ssl:
1020+ self.log.debug('SSL not enabled @{}:{} '
1021+ '({})'.format(host, port, unit_name))
1022+ return False
1023+ else:
1024+ msg = ('Unknown condition when checking SSL status @{}:{} '
1025+ '({})'.format(host, port, unit_name))
1026+ amulet.raise_status(amulet.FAIL, msg)
1027+
1028+ def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
1029+ """Check that ssl is enabled on rmq juju sentry units.
1030+
1031+ :param sentry_units: list of all rmq sentry units
1032+ :param port: optional ssl port override to validate
1033+ :returns: None if successful, otherwise return error message
1034+ """
1035+ for sentry_unit in sentry_units:
1036+ if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
1037+ return ('Unexpected condition: ssl is disabled on unit '
1038+ '({})'.format(sentry_unit.info['unit_name']))
1039+ return None
1040+
1041+ def validate_rmq_ssl_disabled_units(self, sentry_units):
1042+ """Check that ssl is enabled on listed rmq juju sentry units.
1043+
1044+ :param sentry_units: list of all rmq sentry units
1045+ :returns: True if successful. Raise on error.
1046+ """
1047+ for sentry_unit in sentry_units:
1048+ if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
1049+ return ('Unexpected condition: ssl is enabled on unit '
1050+ '({})'.format(sentry_unit.info['unit_name']))
1051+ return None
1052+
1053+ def configure_rmq_ssl_on(self, sentry_units, deployment,
1054+ port=None, max_wait=60):
1055+ """Turn ssl charm config option on, with optional non-default
1056+ ssl port specification. Confirm that it is enabled on every
1057+ unit.
1058+
1059+ :param sentry_units: list of sentry units
1060+ :param deployment: amulet deployment object pointer
1061+ :param port: amqp port, use defaults if None
1062+ :param max_wait: maximum time to wait in seconds to confirm
1063+ :returns: None if successful. Raise on error.
1064+ """
1065+ self.log.debug('Setting ssl charm config option: on')
1066+
1067+ # Enable RMQ SSL
1068+ config = {'ssl': 'on'}
1069+ if port:
1070+ config['ssl_port'] = port
1071+
1072+ deployment.configure('rabbitmq-server', config)
1073+
1074+ # Confirm
1075+ tries = 0
1076+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
1077+ while ret and tries < (max_wait / 4):
1078+ time.sleep(4)
1079+ self.log.debug('Attempt {}: {}'.format(tries, ret))
1080+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
1081+ tries += 1
1082+
1083+ if ret:
1084+ amulet.raise_status(amulet.FAIL, ret)
1085+
1086+ def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
1087+ """Turn ssl charm config option off, confirm that it is disabled
1088+ on every unit.
1089+
1090+ :param sentry_units: list of sentry units
1091+ :param deployment: amulet deployment object pointer
1092+ :param max_wait: maximum time to wait in seconds to confirm
1093+ :returns: None if successful. Raise on error.
1094+ """
1095+ self.log.debug('Setting ssl charm config option: off')
1096+
1097+ # Disable RMQ SSL
1098+ config = {'ssl': 'off'}
1099+ deployment.configure('rabbitmq-server', config)
1100+
1101+ # Confirm
1102+ tries = 0
1103+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
1104+ while ret and tries < (max_wait / 4):
1105+ time.sleep(4)
1106+ self.log.debug('Attempt {}: {}'.format(tries, ret))
1107+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
1108+ tries += 1
1109+
1110+ if ret:
1111+ amulet.raise_status(amulet.FAIL, ret)
1112+
1113+ def connect_amqp_by_unit(self, sentry_unit, ssl=False,
1114+ port=None, fatal=True,
1115+ username="testuser1", password="changeme"):
1116+ """Establish and return a pika amqp connection to the rabbitmq service
1117+ running on a rmq juju unit.
1118+
1119+ :param sentry_unit: sentry unit pointer
1120+ :param ssl: boolean, default to False
1121+ :param port: amqp port, use defaults if None
1122+ :param fatal: boolean, default to True (raises on connect error)
1123+ :param username: amqp user name, default to testuser1
1124+ :param password: amqp user password
1125+ :returns: pika amqp connection pointer or None if failed and non-fatal
1126+ """
1127+ host = sentry_unit.info['public-address']
1128+ unit_name = sentry_unit.info['unit_name']
1129+
1130+ # Default port logic if port is not specified
1131+ if ssl and not port:
1132+ port = 5671
1133+ elif not ssl and not port:
1134+ port = 5672
1135+
1136+ self.log.debug('Connecting to amqp on {}:{} ({}) as '
1137+ '{}...'.format(host, port, unit_name, username))
1138+
1139+ try:
1140+ credentials = pika.PlainCredentials(username, password)
1141+ parameters = pika.ConnectionParameters(host=host, port=port,
1142+ credentials=credentials,
1143+ ssl=ssl,
1144+ connection_attempts=3,
1145+ retry_delay=5,
1146+ socket_timeout=1)
1147+ connection = pika.BlockingConnection(parameters)
1148+ assert connection.server_properties['product'] == 'RabbitMQ'
1149+ self.log.debug('Connect OK')
1150+ return connection
1151+ except Exception as e:
1152+ msg = ('amqp connection failed to {}:{} as '
1153+ '{} ({})'.format(host, port, username, str(e)))
1154+ if fatal:
1155+ amulet.raise_status(amulet.FAIL, msg)
1156+ else:
1157+ self.log.warn(msg)
1158+ return None
1159+
1160+ def publish_amqp_message_by_unit(self, sentry_unit, message,
1161+ queue="test", ssl=False,
1162+ username="testuser1",
1163+ password="changeme",
1164+ port=None):
1165+ """Publish an amqp message to a rmq juju unit.
1166+
1167+ :param sentry_unit: sentry unit pointer
1168+ :param message: amqp message string
1169+ :param queue: message queue, default to test
1170+ :param username: amqp user name, default to testuser1
1171+ :param password: amqp user password
1172+ :param ssl: boolean, default to False
1173+ :param port: amqp port, use defaults if None
1174+ :returns: None. Raises exception if publish failed.
1175+ """
1176+ self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
1177+ message))
1178+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
1179+ port=port,
1180+ username=username,
1181+ password=password)
1182+
1183+ # NOTE(beisner): extra debug here re: pika hang potential:
1184+ # https://github.com/pika/pika/issues/297
1185+ # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
1186+ self.log.debug('Defining channel...')
1187+ channel = connection.channel()
1188+ self.log.debug('Declaring queue...')
1189+ channel.queue_declare(queue=queue, auto_delete=False, durable=True)
1190+ self.log.debug('Publishing message...')
1191+ channel.basic_publish(exchange='', routing_key=queue, body=message)
1192+ self.log.debug('Closing channel...')
1193+ channel.close()
1194+ self.log.debug('Closing connection...')
1195+ connection.close()
1196+
1197+ def get_amqp_message_by_unit(self, sentry_unit, queue="test",
1198+ username="testuser1",
1199+ password="changeme",
1200+ ssl=False, port=None):
1201+ """Get an amqp message from a rmq juju unit.
1202+
1203+ :param sentry_unit: sentry unit pointer
1204+ :param queue: message queue, default to test
1205+ :param username: amqp user name, default to testuser1
1206+ :param password: amqp user password
1207+ :param ssl: boolean, default to False
1208+ :param port: amqp port, use defaults if None
1209+ :returns: amqp message body as string. Raise if get fails.
1210+ """
1211+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
1212+ port=port,
1213+ username=username,
1214+ password=password)
1215+ channel = connection.channel()
1216+ method_frame, _, body = channel.basic_get(queue)
1217+
1218+ if method_frame:
1219+ self.log.debug('Retreived message from {} queue:\n{}'.format(queue,
1220+ body))
1221+ channel.basic_ack(method_frame.delivery_tag)
1222+ channel.close()
1223+ connection.close()
1224+ return body
1225+ else:
1226+ msg = 'No message retrieved.'
1227+ amulet.raise_status(amulet.FAIL, msg)
1228
1229=== modified file 'unit_tests/test_glance_relations.py'
1230--- unit_tests/test_glance_relations.py 2015-08-26 08:43:57 +0000
1231+++ unit_tests/test_glance_relations.py 2015-09-10 09:35:29 +0000
1232@@ -1,5 +1,4 @@
1233 from mock import call, patch, MagicMock
1234-import json
1235 import os
1236 import yaml
1237
1238@@ -392,42 +391,53 @@
1239 'Could not create ceph keyring: peer not ready?'
1240 )
1241
1242+<<<<<<< TREE
1243 @patch("hooks.glance_relations.relation_set")
1244 @patch("hooks.glance_relations.relation_get")
1245+=======
1246+ @patch("glance_relations.get_ceph_request")
1247+ @patch("glance_relations.send_request_if_needed")
1248+ @patch("glance_relations.is_request_complete")
1249+>>>>>>> MERGE-SOURCE
1250 @patch.object(relations, 'CONFIGS')
1251- def test_ceph_changed_broker_send_rq(self, configs, mock_relation_get,
1252- mock_relation_set):
1253+ def test_ceph_changed_broker_send_rq(self, configs, mock_request_complete,
1254+ mock_send_request_if_needed,
1255+ mock_get_ceph_request):
1256+ self.service_name.return_value = 'glance'
1257+ configs.complete_contexts = MagicMock()
1258 configs.complete_contexts.return_value = ['ceph']
1259- self.service_name.return_value = 'glance'
1260+ mock_get_ceph_request.return_value = 'cephrq'
1261 self.ensure_ceph_keyring.return_value = True
1262- self.relation_ids.return_value = ['ceph:0']
1263+ mock_request_complete.return_value = False
1264 relations.hooks.execute(['hooks/ceph-relation-changed'])
1265 self.ensure_ceph_keyring.assert_called_with(service='glance',
1266 user='glance',
1267 group='glance')
1268- req = {'api-version': 1,
1269- 'ops': [{"op": "create-pool", "name": "glance", "replicas": 3}]}
1270- broker_dict = json.dumps(req)
1271- mock_relation_set.assert_called_with(broker_req=broker_dict,
1272- relation_id='ceph:0')
1273 for c in [call('/etc/glance/glance.conf')]:
1274 self.assertNotIn(c, configs.write.call_args_list)
1275
1276+<<<<<<< TREE
1277 @patch("charmhelpers.core.host.service")
1278 @patch("hooks.glance_relations.relation_get", autospec=True)
1279+=======
1280+ @patch("glance_relations.get_ceph_request")
1281+ @patch("glance_relations.send_request_if_needed")
1282+ @patch("glance_relations.is_request_complete")
1283+>>>>>>> MERGE-SOURCE
1284 @patch.object(relations, 'CONFIGS')
1285- def test_ceph_changed_with_key_and_relation_data(self, configs,
1286- mock_relation_get,
1287- mock_service):
1288+ def test_ceph_changed_key_and_relation_data(self, configs,
1289+ mock_request_complete,
1290+ mock_send_request_if_needed,
1291+ mock_service):
1292 configs.complete_contexts = MagicMock()
1293 configs.complete_contexts.return_value = ['ceph']
1294 configs.write = MagicMock()
1295 self.ensure_ceph_keyring.return_value = True
1296- mock_relation_get.return_value = {'broker_rsp':
1297- json.dumps({'exit-code': 0})}
1298+ mock_request_complete.return_value = True
1299+ self.ceph_config_file.return_value = '/etc/ceph/ceph.conf'
1300 relations.ceph_changed()
1301 self.assertEquals([call('/etc/glance/glance-api.conf'),
1302- call(self.ceph_config_file())],
1303+ call('/etc/ceph/ceph.conf')],
1304 configs.write.call_args_list)
1305 self.service_restart.assert_called_with('glance-api')
1306
