Merge lp:~1chb1n/charms/trusty/ceph-osd/next-amulet-update into lp:~openstack-charmers-archive/charms/trusty/ceph-osd/next
Status: Merged
Merged at revision: 42
Proposed branch: lp:~1chb1n/charms/trusty/ceph-osd/next-amulet-update
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph-osd/next
Diff against target: 2197 lines (+1345/-201), 16 files modified:
- .coverage (+2/-1)
- Makefile (+6/-7)
- hooks/charmhelpers/contrib/charmsupport/nrpe.py (+3/-1)
- hooks/charmhelpers/core/hookenv.py (+231/-38)
- hooks/charmhelpers/core/host.py (+25/-7)
- hooks/charmhelpers/core/services/base.py (+43/-19)
- hooks/charmhelpers/fetch/__init__.py (+1/-1)
- hooks/charmhelpers/fetch/giturl.py (+7/-5)
- metadata.yaml (+5/-2)
- tests/00-setup (+6/-2)
- tests/README (+17/-0)
- tests/basic_deployment.py (+337/-47)
- tests/charmhelpers/contrib/amulet/utils.py (+227/-10)
- tests/charmhelpers/contrib/openstack/amulet/deployment.py (+56/-10)
- tests/charmhelpers/contrib/openstack/amulet/utils.py (+361/-51)
- tests/tests.yaml (+18/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/ceph-osd/next-amulet-update
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Corey Bryant (community) | Approve | |
Review via email: mp+262249@code.launchpad.net
Commit message
Description of the change
Amulet tests: update test coverage, enable vivid, and prepare for wily.
Sync hooks/charmhelpers.
Sync tests/charmhelpers.
This MP depends on the charm-helpers updates in:
https:/
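For orientation, the updated tests follow the numeric naming convention documented in tests/README in the diff below (1xx service checks, 2xx relation checks, 3xx config checks, 4xx functional checks, 9xx final checks), relying on test_* methods running in lexical sort order. A minimal, hypothetical sketch of that layout (not the charm's actual test class):

    class NamingConventionSketch(object):
        """Illustrative only: method names group checks so they run in order."""

        def test_100_services(self):
            pass  # 1xx: service and endpoint checks

        def test_200_relations(self):
            pass  # 2xx: relation data checks

        def test_300_config(self):
            pass  # 3xx: config file checks

        def test_400_functional(self):
            pass  # 4xx: functional checks (e.g. create/delete a volume)

        def test_900_final(self):
            pass  # 9xx: restarts and other final checks

    # The runner collects test_* methods and executes them sorted by name, e.g.
    # sorted(m for m in dir(NamingConventionSketch) if m.startswith('test_'))
    # yields the 1xx..9xx order above.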
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5483 ceph-osd-next for 1chb1n mp262249
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4691 ceph-osd-next for 1chb1n mp262249
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4737 ceph-osd-next for 1chb1n mp262249
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4739 ceph-osd-next for 1chb1n mp262249
AMULET OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5602 ceph-osd-next for 1chb1n mp262249
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5234 ceph-osd-next for 1chb1n mp262249
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4771 ceph-osd-next for 1chb1n mp262249
AMULET OK: passed
Build: http://
Ryan Beisner (1chb1n) wrote:
Flipped back to Work in Progress while the tests/charmhelpers changes are still in flight. Everything else here is ready for review and input.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5303 ceph-osd-next for 1chb1n mp262249
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5671 ceph-osd-next for 1chb1n mp262249
LINT OK: passed
- 48. By Ryan Beisner: resync hooks/charmhelpers
- 49. By Ryan Beisner: resync tests/charmhelpers
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5673 ceph-osd-next for 1chb1n mp262249
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5305 ceph-osd-next for 1chb1n mp262249
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4856 ceph-osd-next for 1chb1n mp262249
AMULET OK: passed
Build: http://
- 50. By Ryan Beisner: Update publish target in Makefile; update 00-setup and tests.yaml for dependencies.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5317 ceph-osd-next for 1chb1n mp262249
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5685 ceph-osd-next for 1chb1n mp262249
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4868 ceph-osd-next for 1chb1n mp262249
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
- 51. By Ryan Beisner: fix 00-setup
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5689 ceph-osd-next for 1chb1n mp262249
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5321 ceph-osd-next for 1chb1n mp262249
UNIT OK: passed
Corey Bryant (corey.bryant) wrote:
Just a couple of inline comments below.
- 52. By Ryan Beisner: update test
Ryan Beisner (1chb1n) wrote:
Ack, thanks. Replied inline. Will address the ceph-osd service check.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5326 ceph-osd-next for 1chb1n mp262249
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5694 ceph-osd-next for 1chb1n mp262249
LINT OK: passed
Corey Bryant (corey.bryant) wrote:
Looks good. I'll approve once test_102 is fixed up, the corresponding charm-helpers change lands, and these amulet tests pass.
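For context, test_102 relies on the init-aware service check (validate_services_by_name, synced into tests/charmhelpers in the diff below), which picks the status command based on the unit's Ubuntu release. A rough standalone sketch of that selection logic; the hard-coded release list is an assumption here (the real helper gets it from distro_info), and 'vivid' marks the systemd cutover as in the helper:

    UBUNTU_RELEASES = ['precise', 'quantal', 'raring', 'saucy', 'trusty',
                       'utopic', 'vivid', 'wily']  # assumed subset, oldest first

    def service_status_cmd(release, service_name):
        """Return the command a test would run to check a system service."""
        systemd_switch = UBUNTU_RELEASES.index('vivid')
        if (UBUNTU_RELEASES.index(release) >= systemd_switch or
                service_name == 'rabbitmq-server'):
            # vivid and later (systemd); rabbitmq-server is special-cased
            # exactly as in the helper.
            return 'sudo service {} status'.format(service_name)
        # trusty and earlier (upstart)
        return 'sudo status {}'.format(service_name)

    # service_status_cmd('trusty', 'ceph-osd-all') -> 'sudo status ceph-osd-all'
    # service_status_cmd('vivid', 'ceph-osd-all')  -> 'sudo service ceph-osd-all status'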
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4872 ceph-osd-next for 1chb1n mp262249
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
- 53. By Ryan Beisner: add pre-kilo ceph-mon service status checks
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5329 ceph-osd-next for 1chb1n mp262249
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5697 ceph-osd-next for 1chb1n mp262249
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4877 ceph-osd-next for 1chb1n mp262249
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4880 ceph-osd-next for 1chb1n mp262249
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
Timeout occurred (2700s), printing juju status.
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4885 ceph-osd-next for 1chb1n mp262249
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote:
A test rig issue is causing bootstrap failures; will re-test once that's resolved.
- 54. By Ryan Beisner: update tags for consistency with other OpenStack charms
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5331 ceph-osd-next for 1chb1n mp262249
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5699 ceph-osd-next for 1chb1n mp262249
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4887 ceph-osd-next for 1chb1n mp262249
AMULET OK: passed
Build: http://
Corey Bryant (corey.bryant):
Preview Diff
1 | === modified file '.coverage' |
2 | --- .coverage 2015-04-17 15:03:30 +0000 |
3 | +++ .coverage 2015-07-01 14:47:35 +0000 |
4 | @@ -1,1 +1,2 @@ |
5 | -€}q(U collectorqUcoverage v3.7.1qUlinesq}q(UF/home/jamespage/src/charms/next-resync/ceph-osd/unit_tests/__init__.pyq]qKaUQ/home/jamespage/src/charms/landing-beisner-resync/ceph-osd/unit_tests/__init__.pyq]q Kauu. |
6 | \ No newline at end of file |
7 | +€}q(U collectorqUcoverage v3.7.1qUlinesq}q(UF/home/jamespage/src/charms/next-resync/ceph-osd/unit_tests/__init__.pyq]qKaUQ/home/jamespage/src/charms/landing-beisner-resync/ceph-osd/unit_tests/__init__.pyq]q KaU5/home/ubuntu/bzr/next/ceph-osd/unit_tests/__init__.pyq |
8 | +]q |
9 | Kauu. |
10 | \ No newline at end of file |
11 | |
12 | === modified file 'Makefile' |
13 | --- Makefile 2015-04-16 21:32:00 +0000 |
14 | +++ Makefile 2015-07-01 14:47:35 +0000 |
15 | @@ -2,18 +2,17 @@ |
16 | PYTHON := /usr/bin/env python |
17 | |
18 | lint: |
19 | - @flake8 --exclude hooks/charmhelpers hooks tests unit_tests |
20 | + @flake8 --exclude hooks/charmhelpers,tests/charmhelpers \ |
21 | + hooks tests unit_tests |
22 | @charm proof |
23 | |
24 | -unit_test: |
25 | +test: |
26 | + @# Bundletester expects unit tests here. |
27 | @echo Starting unit tests... |
28 | @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests |
29 | |
30 | -test: |
31 | +functional_test: |
32 | @echo Starting Amulet tests... |
33 | - # coreycb note: The -v should only be temporary until Amulet sends |
34 | - # raise_status() messages to stderr: |
35 | - # https://bugs.launchpad.net/amulet/+bug/1320357 |
36 | @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700 |
37 | |
38 | bin/charm_helpers_sync.py: |
39 | @@ -25,6 +24,6 @@ |
40 | $(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml |
41 | $(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml |
42 | |
43 | -publish: lint |
44 | +publish: lint test |
45 | bzr push lp:charms/ceph-osd |
46 | bzr push lp:charms/trusty/ceph-osd |
47 | |
48 | === modified file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py' |
49 | --- hooks/charmhelpers/contrib/charmsupport/nrpe.py 2015-02-26 13:37:18 +0000 |
50 | +++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2015-07-01 14:47:35 +0000 |
51 | @@ -247,7 +247,9 @@ |
52 | |
53 | service('restart', 'nagios-nrpe-server') |
54 | |
55 | - for rid in relation_ids("local-monitors"): |
56 | + monitor_ids = relation_ids("local-monitors") + \ |
57 | + relation_ids("nrpe-external-master") |
58 | + for rid in monitor_ids: |
59 | relation_set(relation_id=rid, monitors=yaml.dump(monitors)) |
60 | |
61 | |
62 | |
63 | === modified file 'hooks/charmhelpers/core/hookenv.py' |
64 | --- hooks/charmhelpers/core/hookenv.py 2015-04-16 21:32:48 +0000 |
65 | +++ hooks/charmhelpers/core/hookenv.py 2015-07-01 14:47:35 +0000 |
66 | @@ -21,12 +21,16 @@ |
67 | # Charm Helpers Developers <juju@lists.ubuntu.com> |
68 | |
69 | from __future__ import print_function |
70 | +from distutils.version import LooseVersion |
71 | +from functools import wraps |
72 | +import glob |
73 | import os |
74 | import json |
75 | import yaml |
76 | import subprocess |
77 | import sys |
78 | import errno |
79 | +import tempfile |
80 | from subprocess import CalledProcessError |
81 | |
82 | import six |
83 | @@ -58,15 +62,17 @@ |
84 | |
85 | will cache the result of unit_get + 'test' for future calls. |
86 | """ |
87 | + @wraps(func) |
88 | def wrapper(*args, **kwargs): |
89 | global cache |
90 | key = str((func, args, kwargs)) |
91 | try: |
92 | return cache[key] |
93 | except KeyError: |
94 | - res = func(*args, **kwargs) |
95 | - cache[key] = res |
96 | - return res |
97 | + pass # Drop out of the exception handler scope. |
98 | + res = func(*args, **kwargs) |
99 | + cache[key] = res |
100 | + return res |
101 | return wrapper |
102 | |
103 | |
104 | @@ -178,7 +184,7 @@ |
105 | |
106 | def remote_unit(): |
107 | """The remote unit for the current relation hook""" |
108 | - return os.environ['JUJU_REMOTE_UNIT'] |
109 | + return os.environ.get('JUJU_REMOTE_UNIT', None) |
110 | |
111 | |
112 | def service_name(): |
113 | @@ -238,23 +244,7 @@ |
114 | self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) |
115 | if os.path.exists(self.path): |
116 | self.load_previous() |
117 | - |
118 | - def __getitem__(self, key): |
119 | - """For regular dict lookups, check the current juju config first, |
120 | - then the previous (saved) copy. This ensures that user-saved values |
121 | - will be returned by a dict lookup. |
122 | - |
123 | - """ |
124 | - try: |
125 | - return dict.__getitem__(self, key) |
126 | - except KeyError: |
127 | - return (self._prev_dict or {})[key] |
128 | - |
129 | - def keys(self): |
130 | - prev_keys = [] |
131 | - if self._prev_dict is not None: |
132 | - prev_keys = self._prev_dict.keys() |
133 | - return list(set(prev_keys + list(dict.keys(self)))) |
134 | + atexit(self._implicit_save) |
135 | |
136 | def load_previous(self, path=None): |
137 | """Load previous copy of config from disk. |
138 | @@ -273,6 +263,9 @@ |
139 | self.path = path or self.path |
140 | with open(self.path) as f: |
141 | self._prev_dict = json.load(f) |
142 | + for k, v in self._prev_dict.items(): |
143 | + if k not in self: |
144 | + self[k] = v |
145 | |
146 | def changed(self, key): |
147 | """Return True if the current value for this key is different from |
148 | @@ -304,13 +297,13 @@ |
149 | instance. |
150 | |
151 | """ |
152 | - if self._prev_dict: |
153 | - for k, v in six.iteritems(self._prev_dict): |
154 | - if k not in self: |
155 | - self[k] = v |
156 | with open(self.path, 'w') as f: |
157 | json.dump(self, f) |
158 | |
159 | + def _implicit_save(self): |
160 | + if self.implicit_save: |
161 | + self.save() |
162 | + |
163 | |
164 | @cached |
165 | def config(scope=None): |
166 | @@ -353,18 +346,49 @@ |
167 | """Set relation information for the current unit""" |
168 | relation_settings = relation_settings if relation_settings else {} |
169 | relation_cmd_line = ['relation-set'] |
170 | + accepts_file = "--file" in subprocess.check_output( |
171 | + relation_cmd_line + ["--help"], universal_newlines=True) |
172 | if relation_id is not None: |
173 | relation_cmd_line.extend(('-r', relation_id)) |
174 | - for k, v in (list(relation_settings.items()) + list(kwargs.items())): |
175 | - if v is None: |
176 | - relation_cmd_line.append('{}='.format(k)) |
177 | - else: |
178 | - relation_cmd_line.append('{}={}'.format(k, v)) |
179 | - subprocess.check_call(relation_cmd_line) |
180 | + settings = relation_settings.copy() |
181 | + settings.update(kwargs) |
182 | + for key, value in settings.items(): |
183 | + # Force value to be a string: it always should, but some call |
184 | + # sites pass in things like dicts or numbers. |
185 | + if value is not None: |
186 | + settings[key] = "{}".format(value) |
187 | + if accepts_file: |
188 | + # --file was introduced in Juju 1.23.2. Use it by default if |
189 | + # available, since otherwise we'll break if the relation data is |
190 | + # too big. Ideally we should tell relation-set to read the data from |
191 | + # stdin, but that feature is broken in 1.23.2: Bug #1454678. |
192 | + with tempfile.NamedTemporaryFile(delete=False) as settings_file: |
193 | + settings_file.write(yaml.safe_dump(settings).encode("utf-8")) |
194 | + subprocess.check_call( |
195 | + relation_cmd_line + ["--file", settings_file.name]) |
196 | + os.remove(settings_file.name) |
197 | + else: |
198 | + for key, value in settings.items(): |
199 | + if value is None: |
200 | + relation_cmd_line.append('{}='.format(key)) |
201 | + else: |
202 | + relation_cmd_line.append('{}={}'.format(key, value)) |
203 | + subprocess.check_call(relation_cmd_line) |
204 | # Flush cache of any relation-gets for local unit |
205 | flush(local_unit()) |
206 | |
207 | |
208 | +def relation_clear(r_id=None): |
209 | + ''' Clears any relation data already set on relation r_id ''' |
210 | + settings = relation_get(rid=r_id, |
211 | + unit=local_unit()) |
212 | + for setting in settings: |
213 | + if setting not in ['public-address', 'private-address']: |
214 | + settings[setting] = None |
215 | + relation_set(relation_id=r_id, |
216 | + **settings) |
217 | + |
218 | + |
219 | @cached |
220 | def relation_ids(reltype=None): |
221 | """A list of relation_ids""" |
222 | @@ -509,6 +533,11 @@ |
223 | return None |
224 | |
225 | |
226 | +def unit_public_ip(): |
227 | + """Get this unit's public IP address""" |
228 | + return unit_get('public-address') |
229 | + |
230 | + |
231 | def unit_private_ip(): |
232 | """Get this unit's private IP address""" |
233 | return unit_get('private-address') |
234 | @@ -541,10 +570,14 @@ |
235 | hooks.execute(sys.argv) |
236 | """ |
237 | |
238 | - def __init__(self, config_save=True): |
239 | + def __init__(self, config_save=None): |
240 | super(Hooks, self).__init__() |
241 | self._hooks = {} |
242 | - self._config_save = config_save |
243 | + |
244 | + # For unknown reasons, we allow the Hooks constructor to override |
245 | + # config().implicit_save. |
246 | + if config_save is not None: |
247 | + config().implicit_save = config_save |
248 | |
249 | def register(self, name, function): |
250 | """Register a hook""" |
251 | @@ -552,13 +585,16 @@ |
252 | |
253 | def execute(self, args): |
254 | """Execute a registered hook based on args[0]""" |
255 | + _run_atstart() |
256 | hook_name = os.path.basename(args[0]) |
257 | if hook_name in self._hooks: |
258 | - self._hooks[hook_name]() |
259 | - if self._config_save: |
260 | - cfg = config() |
261 | - if cfg.implicit_save: |
262 | - cfg.save() |
263 | + try: |
264 | + self._hooks[hook_name]() |
265 | + except SystemExit as x: |
266 | + if x.code is None or x.code == 0: |
267 | + _run_atexit() |
268 | + raise |
269 | + _run_atexit() |
270 | else: |
271 | raise UnregisteredHookError(hook_name) |
272 | |
273 | @@ -605,3 +641,160 @@ |
274 | |
275 | The results set by action_set are preserved.""" |
276 | subprocess.check_call(['action-fail', message]) |
277 | + |
278 | + |
279 | +def status_set(workload_state, message): |
280 | + """Set the workload state with a message |
281 | + |
282 | + Use status-set to set the workload state with a message which is visible |
283 | + to the user via juju status. If the status-set command is not found then |
284 | + assume this is juju < 1.23 and juju-log the message unstead. |
285 | + |
286 | + workload_state -- valid juju workload state. |
287 | + message -- status update message |
288 | + """ |
289 | + valid_states = ['maintenance', 'blocked', 'waiting', 'active'] |
290 | + if workload_state not in valid_states: |
291 | + raise ValueError( |
292 | + '{!r} is not a valid workload state'.format(workload_state) |
293 | + ) |
294 | + cmd = ['status-set', workload_state, message] |
295 | + try: |
296 | + ret = subprocess.call(cmd) |
297 | + if ret == 0: |
298 | + return |
299 | + except OSError as e: |
300 | + if e.errno != errno.ENOENT: |
301 | + raise |
302 | + log_message = 'status-set failed: {} {}'.format(workload_state, |
303 | + message) |
304 | + log(log_message, level='INFO') |
305 | + |
306 | + |
307 | +def status_get(): |
308 | + """Retrieve the previously set juju workload state |
309 | + |
310 | + If the status-set command is not found then assume this is juju < 1.23 and |
311 | + return 'unknown' |
312 | + """ |
313 | + cmd = ['status-get'] |
314 | + try: |
315 | + raw_status = subprocess.check_output(cmd, universal_newlines=True) |
316 | + status = raw_status.rstrip() |
317 | + return status |
318 | + except OSError as e: |
319 | + if e.errno == errno.ENOENT: |
320 | + return 'unknown' |
321 | + else: |
322 | + raise |
323 | + |
324 | + |
325 | +def translate_exc(from_exc, to_exc): |
326 | + def inner_translate_exc1(f): |
327 | + def inner_translate_exc2(*args, **kwargs): |
328 | + try: |
329 | + return f(*args, **kwargs) |
330 | + except from_exc: |
331 | + raise to_exc |
332 | + |
333 | + return inner_translate_exc2 |
334 | + |
335 | + return inner_translate_exc1 |
336 | + |
337 | + |
338 | +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
339 | +def is_leader(): |
340 | + """Does the current unit hold the juju leadership |
341 | + |
342 | + Uses juju to determine whether the current unit is the leader of its peers |
343 | + """ |
344 | + cmd = ['is-leader', '--format=json'] |
345 | + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) |
346 | + |
347 | + |
348 | +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
349 | +def leader_get(attribute=None): |
350 | + """Juju leader get value(s)""" |
351 | + cmd = ['leader-get', '--format=json'] + [attribute or '-'] |
352 | + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) |
353 | + |
354 | + |
355 | +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
356 | +def leader_set(settings=None, **kwargs): |
357 | + """Juju leader set value(s)""" |
358 | + # Don't log secrets. |
359 | + # log("Juju leader-set '%s'" % (settings), level=DEBUG) |
360 | + cmd = ['leader-set'] |
361 | + settings = settings or {} |
362 | + settings.update(kwargs) |
363 | + for k, v in settings.items(): |
364 | + if v is None: |
365 | + cmd.append('{}='.format(k)) |
366 | + else: |
367 | + cmd.append('{}={}'.format(k, v)) |
368 | + subprocess.check_call(cmd) |
369 | + |
370 | + |
371 | +@cached |
372 | +def juju_version(): |
373 | + """Full version string (eg. '1.23.3.1-trusty-amd64')""" |
374 | + # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1 |
375 | + jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0] |
376 | + return subprocess.check_output([jujud, 'version'], |
377 | + universal_newlines=True).strip() |
378 | + |
379 | + |
380 | +@cached |
381 | +def has_juju_version(minimum_version): |
382 | + """Return True if the Juju version is at least the provided version""" |
383 | + return LooseVersion(juju_version()) >= LooseVersion(minimum_version) |
384 | + |
385 | + |
386 | +_atexit = [] |
387 | +_atstart = [] |
388 | + |
389 | + |
390 | +def atstart(callback, *args, **kwargs): |
391 | + '''Schedule a callback to run before the main hook. |
392 | + |
393 | + Callbacks are run in the order they were added. |
394 | + |
395 | + This is useful for modules and classes to perform initialization |
396 | + and inject behavior. In particular: |
397 | + - Run common code before all of your hooks, such as logging |
398 | + the hook name or interesting relation data. |
399 | + - Defer object or module initialization that requires a hook |
400 | + context until we know there actually is a hook context, |
401 | + making testing easier. |
402 | + - Rather than requiring charm authors to include boilerplate to |
403 | + invoke your helper's behavior, have it run automatically if |
404 | + your object is instantiated or module imported. |
405 | + |
406 | + This is not at all useful after your hook framework as been launched. |
407 | + ''' |
408 | + global _atstart |
409 | + _atstart.append((callback, args, kwargs)) |
410 | + |
411 | + |
412 | +def atexit(callback, *args, **kwargs): |
413 | + '''Schedule a callback to run on successful hook completion. |
414 | + |
415 | + Callbacks are run in the reverse order that they were added.''' |
416 | + _atexit.append((callback, args, kwargs)) |
417 | + |
418 | + |
419 | +def _run_atstart(): |
420 | + '''Hook frameworks must invoke this before running the main hook body.''' |
421 | + global _atstart |
422 | + for callback, args, kwargs in _atstart: |
423 | + callback(*args, **kwargs) |
424 | + del _atstart[:] |
425 | + |
426 | + |
427 | +def _run_atexit(): |
428 | + '''Hook frameworks must invoke this after the main hook body has |
429 | + successfully completed. Do not invoke it if the hook fails.''' |
430 | + global _atexit |
431 | + for callback, args, kwargs in reversed(_atexit): |
432 | + callback(*args, **kwargs) |
433 | + del _atexit[:] |
434 | |
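The hookenv changes above introduce atstart/atexit scheduling around hook execution, plus status_set/status_get for workload status. A minimal, hypothetical hook script using those helpers (hook body and status messages are illustrative; on juju < 1.23 status_set falls back to juju-log, as implemented above):

    import sys

    from charmhelpers.core import hookenv

    hooks = hookenv.Hooks()

    @hooks.hook('config-changed')
    def config_changed():
        hookenv.status_set('maintenance', 'Reconfiguring ceph-osd')
        # ... apply configuration changes here ...
        # Deferred until the hook completes successfully (see _run_atexit above);
        # config() changes are persisted the same way via Config._implicit_save.
        hookenv.atexit(hookenv.status_set, 'active', 'Unit is ready')

    if __name__ == '__main__':
        hooks.execute(sys.argv)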
435 | === modified file 'hooks/charmhelpers/core/host.py' |
436 | --- hooks/charmhelpers/core/host.py 2015-04-16 21:32:48 +0000 |
437 | +++ hooks/charmhelpers/core/host.py 2015-07-01 14:47:35 +0000 |
438 | @@ -24,6 +24,7 @@ |
439 | import os |
440 | import re |
441 | import pwd |
442 | +import glob |
443 | import grp |
444 | import random |
445 | import string |
446 | @@ -90,7 +91,7 @@ |
447 | ['service', service_name, 'status'], |
448 | stderr=subprocess.STDOUT).decode('UTF-8') |
449 | except subprocess.CalledProcessError as e: |
450 | - return 'unrecognized service' not in e.output |
451 | + return b'unrecognized service' not in e.output |
452 | else: |
453 | return True |
454 | |
455 | @@ -269,6 +270,21 @@ |
456 | return None |
457 | |
458 | |
459 | +def path_hash(path): |
460 | + """ |
461 | + Generate a hash checksum of all files matching 'path'. Standard wildcards |
462 | + like '*' and '?' are supported, see documentation for the 'glob' module for |
463 | + more information. |
464 | + |
465 | + :return: dict: A { filename: hash } dictionary for all matched files. |
466 | + Empty if none found. |
467 | + """ |
468 | + return { |
469 | + filename: file_hash(filename) |
470 | + for filename in glob.iglob(path) |
471 | + } |
472 | + |
473 | + |
474 | def check_hash(path, checksum, hash_type='md5'): |
475 | """ |
476 | Validate a file using a cryptographic checksum. |
477 | @@ -296,23 +312,25 @@ |
478 | |
479 | @restart_on_change({ |
480 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] |
481 | + '/etc/apache/sites-enabled/*': [ 'apache2' ] |
482 | }) |
483 | - def ceph_client_changed(): |
484 | + def config_changed(): |
485 | pass # your code here |
486 | |
487 | In this example, the cinder-api and cinder-volume services |
488 | would be restarted if /etc/ceph/ceph.conf is changed by the |
489 | - ceph_client_changed function. |
490 | + ceph_client_changed function. The apache2 service would be |
491 | + restarted if any file matching the pattern got changed, created |
492 | + or removed. Standard wildcards are supported, see documentation |
493 | + for the 'glob' module for more information. |
494 | """ |
495 | def wrap(f): |
496 | def wrapped_f(*args, **kwargs): |
497 | - checksums = {} |
498 | - for path in restart_map: |
499 | - checksums[path] = file_hash(path) |
500 | + checksums = {path: path_hash(path) for path in restart_map} |
501 | f(*args, **kwargs) |
502 | restarts = [] |
503 | for path in restart_map: |
504 | - if checksums[path] != file_hash(path): |
505 | + if path_hash(path) != checksums[path]: |
506 | restarts += restart_map[path] |
507 | services_list = list(OrderedDict.fromkeys(restarts)) |
508 | if not stopstart: |
509 | |
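The restart_on_change docstring above now covers wildcard paths via the new path_hash helper. A short, self-contained sketch of the same pattern; the paths and service names here are illustrative, not taken from this charm:

    from charmhelpers.core.host import restart_on_change

    @restart_on_change({
        '/etc/ceph/ceph.conf': ['ceph-osd-all'],
        '/etc/apache2/sites-enabled/*': ['apache2'],  # changes, additions or removals of matches
    })
    def config_changed():
        pass  # render config files here; mapped services restart if any hash differs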
510 | === modified file 'hooks/charmhelpers/core/services/base.py' |
511 | --- hooks/charmhelpers/core/services/base.py 2015-01-26 11:51:28 +0000 |
512 | +++ hooks/charmhelpers/core/services/base.py 2015-07-01 14:47:35 +0000 |
513 | @@ -15,9 +15,9 @@ |
514 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
515 | |
516 | import os |
517 | -import re |
518 | import json |
519 | -from collections import Iterable |
520 | +from inspect import getargspec |
521 | +from collections import Iterable, OrderedDict |
522 | |
523 | from charmhelpers.core import host |
524 | from charmhelpers.core import hookenv |
525 | @@ -119,7 +119,7 @@ |
526 | """ |
527 | self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json') |
528 | self._ready = None |
529 | - self.services = {} |
530 | + self.services = OrderedDict() |
531 | for service in services or []: |
532 | service_name = service['service'] |
533 | self.services[service_name] = service |
534 | @@ -128,15 +128,18 @@ |
535 | """ |
536 | Handle the current hook by doing The Right Thing with the registered services. |
537 | """ |
538 | - hook_name = hookenv.hook_name() |
539 | - if hook_name == 'stop': |
540 | - self.stop_services() |
541 | - else: |
542 | - self.provide_data() |
543 | - self.reconfigure_services() |
544 | - cfg = hookenv.config() |
545 | - if cfg.implicit_save: |
546 | - cfg.save() |
547 | + hookenv._run_atstart() |
548 | + try: |
549 | + hook_name = hookenv.hook_name() |
550 | + if hook_name == 'stop': |
551 | + self.stop_services() |
552 | + else: |
553 | + self.reconfigure_services() |
554 | + self.provide_data() |
555 | + except SystemExit as x: |
556 | + if x.code is None or x.code == 0: |
557 | + hookenv._run_atexit() |
558 | + hookenv._run_atexit() |
559 | |
560 | def provide_data(self): |
561 | """ |
562 | @@ -145,15 +148,36 @@ |
563 | A provider must have a `name` attribute, which indicates which relation |
564 | to set data on, and a `provide_data()` method, which returns a dict of |
565 | data to set. |
566 | + |
567 | + The `provide_data()` method can optionally accept two parameters: |
568 | + |
569 | + * ``remote_service`` The name of the remote service that the data will |
570 | + be provided to. The `provide_data()` method will be called once |
571 | + for each connected service (not unit). This allows the method to |
572 | + tailor its data to the given service. |
573 | + * ``service_ready`` Whether or not the service definition had all of |
574 | + its requirements met, and thus the ``data_ready`` callbacks run. |
575 | + |
576 | + Note that the ``provided_data`` methods are now called **after** the |
577 | + ``data_ready`` callbacks are run. This gives the ``data_ready`` callbacks |
578 | + a chance to generate any data necessary for the providing to the remote |
579 | + services. |
580 | """ |
581 | - hook_name = hookenv.hook_name() |
582 | - for service in self.services.values(): |
583 | + for service_name, service in self.services.items(): |
584 | + service_ready = self.is_ready(service_name) |
585 | for provider in service.get('provided_data', []): |
586 | - if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name): |
587 | - data = provider.provide_data() |
588 | - _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data |
589 | - if _ready: |
590 | - hookenv.relation_set(None, data) |
591 | + for relid in hookenv.relation_ids(provider.name): |
592 | + units = hookenv.related_units(relid) |
593 | + if not units: |
594 | + continue |
595 | + remote_service = units[0].split('/')[0] |
596 | + argspec = getargspec(provider.provide_data) |
597 | + if len(argspec.args) > 1: |
598 | + data = provider.provide_data(remote_service, service_ready) |
599 | + else: |
600 | + data = provider.provide_data() |
601 | + if data: |
602 | + hookenv.relation_set(relid, data) |
603 | |
604 | def reconfigure_services(self, *service_names): |
605 | """ |
606 | |
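The provide_data changes above call each provider once per connected remote service and optionally pass remote_service and service_ready. A hypothetical provider sketch compatible with that dispatch (relation and key names are made up for illustration):

    class ClusterDataProvider(object):
        # Relation name this provider sets data on.
        name = 'cluster'

        def provide_data(self, remote_service, service_ready):
            # Zero-argument provide_data() implementations still work; the
            # framework inspects the signature (getargspec) before calling.
            if not service_ready:
                return {}  # nothing useful to share yet
            return {'remote': remote_service, 'ready': 'true'}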
607 | === modified file 'hooks/charmhelpers/fetch/__init__.py' |
608 | --- hooks/charmhelpers/fetch/__init__.py 2015-01-26 11:51:28 +0000 |
609 | +++ hooks/charmhelpers/fetch/__init__.py 2015-07-01 14:47:35 +0000 |
610 | @@ -158,7 +158,7 @@ |
611 | |
612 | def apt_cache(in_memory=True): |
613 | """Build and return an apt cache""" |
614 | - import apt_pkg |
615 | + from apt import apt_pkg |
616 | apt_pkg.init() |
617 | if in_memory: |
618 | apt_pkg.config.set("Dir::Cache::pkgcache", "") |
619 | |
620 | === modified file 'hooks/charmhelpers/fetch/giturl.py' |
621 | --- hooks/charmhelpers/fetch/giturl.py 2015-02-26 13:37:18 +0000 |
622 | +++ hooks/charmhelpers/fetch/giturl.py 2015-07-01 14:47:35 +0000 |
623 | @@ -45,14 +45,16 @@ |
624 | else: |
625 | return True |
626 | |
627 | - def clone(self, source, dest, branch): |
628 | + def clone(self, source, dest, branch, depth=None): |
629 | if not self.can_handle(source): |
630 | raise UnhandledSource("Cannot handle {}".format(source)) |
631 | |
632 | - repo = Repo.clone_from(source, dest) |
633 | - repo.git.checkout(branch) |
634 | + if depth: |
635 | + Repo.clone_from(source, dest, branch=branch, depth=depth) |
636 | + else: |
637 | + Repo.clone_from(source, dest, branch=branch) |
638 | |
639 | - def install(self, source, branch="master", dest=None): |
640 | + def install(self, source, branch="master", dest=None, depth=None): |
641 | url_parts = self.parse_url(source) |
642 | branch_name = url_parts.path.strip("/").split("/")[-1] |
643 | if dest: |
644 | @@ -63,7 +65,7 @@ |
645 | if not os.path.exists(dest_dir): |
646 | mkdir(dest_dir, perms=0o755) |
647 | try: |
648 | - self.clone(source, dest_dir, branch) |
649 | + self.clone(source, dest_dir, branch, depth) |
650 | except GitCommandError as e: |
651 | raise UnhandledSource(e.message) |
652 | except OSError as e: |
653 | |
654 | === modified file 'metadata.yaml' |
655 | --- metadata.yaml 2014-10-30 03:30:35 +0000 |
656 | +++ metadata.yaml 2015-07-01 14:47:35 +0000 |
657 | @@ -5,8 +5,11 @@ |
658 | nrpe-external-master: |
659 | interface: nrpe-external-master |
660 | scope: container |
661 | -categories: |
662 | - - misc |
663 | +tags: |
664 | + - openstack |
665 | + - storage |
666 | + - file-servers |
667 | + - misc |
668 | description: | |
669 | Ceph is a distributed storage and network file system designed to provide |
670 | excellent performance, reliability, and scalability. |
671 | |
672 | === modified file 'tests/00-setup' |
673 | --- tests/00-setup 2014-09-27 18:17:20 +0000 |
674 | +++ tests/00-setup 2015-07-01 14:47:35 +0000 |
675 | @@ -5,6 +5,10 @@ |
676 | sudo add-apt-repository --yes ppa:juju/stable |
677 | sudo apt-get update --yes |
678 | sudo apt-get install --yes python-amulet \ |
679 | + python-cinderclient \ |
680 | + python-distro-info \ |
681 | + python-glanceclient \ |
682 | + python-heatclient \ |
683 | python-keystoneclient \ |
684 | - python-glanceclient \ |
685 | - python-novaclient |
686 | + python-novaclient \ |
687 | + python-swiftclient |
688 | |
689 | === modified file 'tests/017-basic-trusty-kilo' (properties changed: -x to +x) |
690 | === modified file 'tests/019-basic-vivid-kilo' (properties changed: -x to +x) |
691 | === modified file 'tests/README' |
692 | --- tests/README 2014-09-27 18:17:20 +0000 |
693 | +++ tests/README 2015-07-01 14:47:35 +0000 |
694 | @@ -1,6 +1,23 @@ |
695 | This directory provides Amulet tests that focus on verification of ceph-osd |
696 | deployments. |
697 | |
698 | +test_* methods are called in lexical sort order. |
699 | + |
700 | +Test name convention to ensure desired test order: |
701 | + 1xx service and endpoint checks |
702 | + 2xx relation checks |
703 | + 3xx config checks |
704 | + 4xx functional checks |
705 | + 9xx restarts and other final checks |
706 | + |
707 | +Common uses of ceph-osd relations in bundle deployments: |
708 | + - - "ceph-osd:mon" |
709 | + - "ceph:osd" |
710 | + |
711 | +More detailed relations of ceph-osd service in a common deployment: |
712 | + relations: |
713 | +???? |
714 | + |
715 | In order to run tests, you'll need charm-tools installed (in addition to |
716 | juju, of course): |
717 | sudo add-apt-repository ppa:juju/stable |
718 | |
719 | === modified file 'tests/basic_deployment.py' |
720 | --- tests/basic_deployment.py 2015-04-16 21:31:30 +0000 |
721 | +++ tests/basic_deployment.py 2015-07-01 14:47:35 +0000 |
722 | @@ -1,13 +1,14 @@ |
723 | -#!/usr/bin/python import amulet |
724 | +#!/usr/bin/python |
725 | |
726 | import amulet |
727 | +import time |
728 | from charmhelpers.contrib.openstack.amulet.deployment import ( |
729 | OpenStackAmuletDeployment |
730 | ) |
731 | -from charmhelpers.contrib.openstack.amulet.utils import ( # noqa |
732 | +from charmhelpers.contrib.openstack.amulet.utils import ( |
733 | OpenStackAmuletUtils, |
734 | DEBUG, |
735 | - ERROR |
736 | + # ERROR |
737 | ) |
738 | |
739 | # Use DEBUG to turn on debug logging |
740 | @@ -36,9 +37,12 @@ |
741 | compatible with the local charm (e.g. stable or next). |
742 | """ |
743 | this_service = {'name': 'ceph-osd'} |
744 | - other_services = [{'name': 'ceph', 'units': 3}, {'name': 'mysql'}, |
745 | - {'name': 'keystone'}, {'name': 'rabbitmq-server'}, |
746 | - {'name': 'nova-compute'}, {'name': 'glance'}, |
747 | + other_services = [{'name': 'ceph', 'units': 3}, |
748 | + {'name': 'mysql'}, |
749 | + {'name': 'keystone'}, |
750 | + {'name': 'rabbitmq-server'}, |
751 | + {'name': 'nova-compute'}, |
752 | + {'name': 'glance'}, |
753 | {'name': 'cinder'}] |
754 | super(CephOsdBasicDeployment, self)._add_services(this_service, |
755 | other_services) |
756 | @@ -98,13 +102,20 @@ |
757 | self.mysql_sentry = self.d.sentry.unit['mysql/0'] |
758 | self.keystone_sentry = self.d.sentry.unit['keystone/0'] |
759 | self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0'] |
760 | - self.nova_compute_sentry = self.d.sentry.unit['nova-compute/0'] |
761 | + self.nova_sentry = self.d.sentry.unit['nova-compute/0'] |
762 | self.glance_sentry = self.d.sentry.unit['glance/0'] |
763 | self.cinder_sentry = self.d.sentry.unit['cinder/0'] |
764 | self.ceph0_sentry = self.d.sentry.unit['ceph/0'] |
765 | self.ceph1_sentry = self.d.sentry.unit['ceph/1'] |
766 | self.ceph2_sentry = self.d.sentry.unit['ceph/2'] |
767 | self.ceph_osd_sentry = self.d.sentry.unit['ceph-osd/0'] |
768 | + u.log.debug('openstack release val: {}'.format( |
769 | + self._get_openstack_release())) |
770 | + u.log.debug('openstack release str: {}'.format( |
771 | + self._get_openstack_release_string())) |
772 | + |
773 | + # Let things settle a bit original moving forward |
774 | + time.sleep(30) |
775 | |
776 | # Authenticate admin with keystone |
777 | self.keystone = u.authenticate_keystone_admin(self.keystone_sentry, |
778 | @@ -112,9 +123,20 @@ |
779 | password='openstack', |
780 | tenant='admin') |
781 | |
782 | + # Authenticate admin with cinder endpoint |
783 | + self.cinder = u.authenticate_cinder_admin(self.keystone_sentry, |
784 | + username='admin', |
785 | + password='openstack', |
786 | + tenant='admin') |
787 | # Authenticate admin with glance endpoint |
788 | self.glance = u.authenticate_glance_admin(self.keystone) |
789 | |
790 | + # Authenticate admin with nova endpoint |
791 | + self.nova = u.authenticate_nova_user(self.keystone, |
792 | + user='admin', |
793 | + password='openstack', |
794 | + tenant='admin') |
795 | + |
796 | # Create a demo tenant/role/user |
797 | self.demo_tenant = 'demoTenant' |
798 | self.demo_role = 'demoRole' |
799 | @@ -141,40 +163,70 @@ |
800 | 'password', |
801 | self.demo_tenant) |
802 | |
803 | - def _ceph_osd_id(self, index): |
804 | - """Produce a shell command that will return a ceph-osd id.""" |
805 | - return "`initctl list | grep 'ceph-osd ' | awk 'NR=={} {{ print $2 }}' | grep -o '[0-9]*'`".format(index + 1) # noqa |
806 | - |
807 | - def test_services(self): |
808 | + def test_100_ceph_processes(self): |
809 | + """Verify that the expected service processes are running |
810 | + on each ceph unit.""" |
811 | + |
812 | + # Process name and quantity of processes to expect on each unit |
813 | + ceph_processes = { |
814 | + 'ceph-mon': 1, |
815 | + 'ceph-osd': 2 |
816 | + } |
817 | + |
818 | + # Units with process names and PID quantities expected |
819 | + expected_processes = { |
820 | + self.ceph0_sentry: ceph_processes, |
821 | + self.ceph1_sentry: ceph_processes, |
822 | + self.ceph2_sentry: ceph_processes, |
823 | + self.ceph_osd_sentry: {'ceph-osd': 2} |
824 | + } |
825 | + |
826 | + actual_pids = u.get_unit_process_ids(expected_processes) |
827 | + ret = u.validate_unit_process_ids(expected_processes, actual_pids) |
828 | + if ret: |
829 | + amulet.raise_status(amulet.FAIL, msg=ret) |
830 | + |
831 | + def test_102_services(self): |
832 | """Verify the expected services are running on the service units.""" |
833 | - commands = { |
834 | - self.mysql_sentry: ['status mysql'], |
835 | - self.rabbitmq_sentry: ['sudo service rabbitmq-server status'], |
836 | - self.nova_compute_sentry: ['status nova-compute'], |
837 | - self.keystone_sentry: ['status keystone'], |
838 | - self.glance_sentry: ['status glance-registry', |
839 | - 'status glance-api'], |
840 | - self.cinder_sentry: ['status cinder-api', |
841 | - 'status cinder-scheduler', |
842 | - 'status cinder-volume'] |
843 | + |
844 | + services = { |
845 | + self.mysql_sentry: ['mysql'], |
846 | + self.rabbitmq_sentry: ['rabbitmq-server'], |
847 | + self.nova_sentry: ['nova-compute'], |
848 | + self.keystone_sentry: ['keystone'], |
849 | + self.glance_sentry: ['glance-registry', |
850 | + 'glance-api'], |
851 | + self.cinder_sentry: ['cinder-api', |
852 | + 'cinder-scheduler', |
853 | + 'cinder-volume'], |
854 | } |
855 | - ceph_services = ['status ceph-mon-all', |
856 | - 'status ceph-mon id=`hostname`'] |
857 | - ceph_osd0 = 'status ceph-osd id={}'.format(self._ceph_osd_id(0)) |
858 | - ceph_osd1 = 'status ceph-osd id={}'.format(self._ceph_osd_id(1)) |
859 | - ceph_osd_services = [ceph_osd0, ceph_osd1, 'status ceph-osd-all'] |
860 | - ceph_services.extend(ceph_osd_services) |
861 | - commands[self.ceph0_sentry] = ceph_services |
862 | - commands[self.ceph1_sentry] = ceph_services |
863 | - commands[self.ceph2_sentry] = ceph_services |
864 | - commands[self.ceph_osd_sentry] = ceph_osd_services |
865 | - |
866 | - ret = u.validate_services(commands) |
867 | + |
868 | + if self._get_openstack_release() < self.vivid_kilo: |
869 | + # For upstart systems only. Ceph services under systemd |
870 | + # are checked by process name instead. |
871 | + ceph_services = [ |
872 | + 'ceph-mon-all', |
873 | + 'ceph-mon id=`hostname`', |
874 | + 'ceph-osd-all', |
875 | + 'ceph-osd id={}'.format(u.get_ceph_osd_id_cmd(0)), |
876 | + 'ceph-osd id={}'.format(u.get_ceph_osd_id_cmd(1)) |
877 | + ] |
878 | + services[self.ceph0_sentry] = ceph_services |
879 | + services[self.ceph1_sentry] = ceph_services |
880 | + services[self.ceph2_sentry] = ceph_services |
881 | + services[self.ceph_osd_sentry] = [ |
882 | + 'ceph-osd-all', |
883 | + 'ceph-osd id={}'.format(u.get_ceph_osd_id_cmd(0)), |
884 | + 'ceph-osd id={}'.format(u.get_ceph_osd_id_cmd(1)) |
885 | + ] |
886 | + |
887 | + ret = u.validate_services_by_name(services) |
888 | if ret: |
889 | amulet.raise_status(amulet.FAIL, msg=ret) |
890 | |
891 | - def test_ceph_osd_ceph_relation(self): |
892 | + def test_200_ceph_osd_ceph_relation(self): |
893 | """Verify the ceph-osd to ceph relation data.""" |
894 | + u.log.debug('Checking ceph-osd:ceph mon relation data...') |
895 | unit = self.ceph_osd_sentry |
896 | relation = ['mon', 'ceph:osd'] |
897 | expected = { |
898 | @@ -186,8 +238,9 @@ |
899 | message = u.relation_error('ceph-osd to ceph', ret) |
900 | amulet.raise_status(amulet.FAIL, msg=message) |
901 | |
902 | - def test_ceph0_to_ceph_osd_relation(self): |
903 | + def test_201_ceph0_to_ceph_osd_relation(self): |
904 | """Verify the ceph0 to ceph-osd relation data.""" |
905 | + u.log.debug('Checking ceph0:ceph-osd mon relation data...') |
906 | unit = self.ceph0_sentry |
907 | relation = ['osd', 'ceph-osd:mon'] |
908 | expected = { |
909 | @@ -203,8 +256,9 @@ |
910 | message = u.relation_error('ceph0 to ceph-osd', ret) |
911 | amulet.raise_status(amulet.FAIL, msg=message) |
912 | |
913 | - def test_ceph1_to_ceph_osd_relation(self): |
914 | + def test_202_ceph1_to_ceph_osd_relation(self): |
915 | """Verify the ceph1 to ceph-osd relation data.""" |
916 | + u.log.debug('Checking ceph1:ceph-osd mon relation data...') |
917 | unit = self.ceph1_sentry |
918 | relation = ['osd', 'ceph-osd:mon'] |
919 | expected = { |
920 | @@ -220,8 +274,9 @@ |
921 | message = u.relation_error('ceph1 to ceph-osd', ret) |
922 | amulet.raise_status(amulet.FAIL, msg=message) |
923 | |
924 | - def test_ceph2_to_ceph_osd_relation(self): |
925 | + def test_203_ceph2_to_ceph_osd_relation(self): |
926 | """Verify the ceph2 to ceph-osd relation data.""" |
927 | + u.log.debug('Checking ceph2:ceph-osd mon relation data...') |
928 | unit = self.ceph2_sentry |
929 | relation = ['osd', 'ceph-osd:mon'] |
930 | expected = { |
931 | @@ -237,8 +292,9 @@ |
932 | message = u.relation_error('ceph2 to ceph-osd', ret) |
933 | amulet.raise_status(amulet.FAIL, msg=message) |
934 | |
935 | - def test_ceph_config(self): |
936 | + def test_300_ceph_osd_config(self): |
937 | """Verify the data in the ceph config file.""" |
938 | + u.log.debug('Checking ceph config file data...') |
939 | unit = self.ceph_osd_sentry |
940 | conf = '/etc/ceph/ceph.conf' |
941 | expected = { |
942 | @@ -271,11 +327,245 @@ |
943 | message = "ceph config error: {}".format(ret) |
944 | amulet.raise_status(amulet.FAIL, msg=message) |
945 | |
946 | - def test_restart_on_config_change(self): |
947 | - """Verify the specified services are restarted on config change.""" |
948 | - # NOTE(coreycb): Test not implemented but should it be? ceph-osd svcs |
949 | - # aren't restarted by charm after config change. Should |
950 | - # they be restarted? |
951 | - if self._get_openstack_release() >= self.precise_essex: |
952 | - u.log.error("Test not implemented") |
953 | - return |
954 | + def test_302_cinder_rbd_config(self): |
955 | + """Verify the cinder config file data regarding ceph.""" |
956 | + u.log.debug('Checking cinder (rbd) config file data...') |
957 | + unit = self.cinder_sentry |
958 | + conf = '/etc/cinder/cinder.conf' |
959 | + expected = { |
960 | + 'DEFAULT': { |
961 | + 'volume_driver': 'cinder.volume.drivers.rbd.RBDDriver' |
962 | + } |
963 | + } |
964 | + for section, pairs in expected.iteritems(): |
965 | + ret = u.validate_config_data(unit, conf, section, pairs) |
966 | + if ret: |
967 | + message = "cinder (rbd) config error: {}".format(ret) |
968 | + amulet.raise_status(amulet.FAIL, msg=message) |
969 | + |
970 | + def test_304_glance_rbd_config(self): |
971 | + """Verify the glance config file data regarding ceph.""" |
972 | + u.log.debug('Checking glance (rbd) config file data...') |
973 | + unit = self.glance_sentry |
974 | + conf = '/etc/glance/glance-api.conf' |
975 | + config = { |
976 | + 'default_store': 'rbd', |
977 | + 'rbd_store_ceph_conf': '/etc/ceph/ceph.conf', |
978 | + 'rbd_store_user': 'glance', |
979 | + 'rbd_store_pool': 'glance', |
980 | + 'rbd_store_chunk_size': '8' |
981 | + } |
982 | + |
983 | + if self._get_openstack_release() >= self.trusty_kilo: |
984 | + # Kilo or later |
985 | + config['stores'] = ('glance.store.filesystem.Store,' |
986 | + 'glance.store.http.Store,' |
987 | + 'glance.store.rbd.Store') |
988 | + section = 'glance_store' |
989 | + else: |
990 | + # Juno or earlier |
991 | + section = 'DEFAULT' |
992 | + |
993 | + expected = {section: config} |
994 | + for section, pairs in expected.iteritems(): |
995 | + ret = u.validate_config_data(unit, conf, section, pairs) |
996 | + if ret: |
997 | + message = "glance (rbd) config error: {}".format(ret) |
998 | + amulet.raise_status(amulet.FAIL, msg=message) |
999 | + |
1000 | + def test_306_nova_rbd_config(self): |
1001 | + """Verify the nova config file data regarding ceph.""" |
1002 | + u.log.debug('Checking nova (rbd) config file data...') |
1003 | + unit = self.nova_sentry |
1004 | + conf = '/etc/nova/nova.conf' |
1005 | + expected = { |
1006 | + 'libvirt': { |
1007 | + 'rbd_pool': 'nova', |
1008 | + 'rbd_user': 'nova-compute', |
1009 | + 'rbd_secret_uuid': u.not_null |
1010 | + } |
1011 | + } |
1012 | + for section, pairs in expected.iteritems(): |
1013 | + ret = u.validate_config_data(unit, conf, section, pairs) |
1014 | + if ret: |
1015 | + message = "nova (rbd) config error: {}".format(ret) |
1016 | + amulet.raise_status(amulet.FAIL, msg=message) |
1017 | + |
1018 | + def test_400_ceph_check_osd_pools(self): |
1019 | + """Check osd pools on all ceph units, expect them to be |
1020 | + identical, and expect specific pools to be present.""" |
1021 | + u.log.debug('Checking pools on ceph units...') |
1022 | + |
1023 | + expected_pools = self.get_ceph_expected_pools() |
1024 | + results = [] |
1025 | + sentries = [ |
1026 | + self.ceph_osd_sentry, |
1027 | + self.ceph0_sentry, |
1028 | + self.ceph1_sentry, |
1029 | + self.ceph2_sentry |
1030 | + ] |
1031 | + |
1032 | + # Check for presence of expected pools on each unit |
1033 | + u.log.debug('Expected pools: {}'.format(expected_pools)) |
1034 | + for sentry_unit in sentries: |
1035 | + pools = u.get_ceph_pools(sentry_unit) |
1036 | + results.append(pools) |
1037 | + |
1038 | + for expected_pool in expected_pools: |
1039 | + if expected_pool not in pools: |
1040 | + msg = ('{} does not have pool: ' |
1041 | + '{}'.format(sentry_unit.info['unit_name'], |
1042 | + expected_pool)) |
1043 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1044 | + u.log.debug('{} has (at least) the expected ' |
1045 | + 'pools.'.format(sentry_unit.info['unit_name'])) |
1046 | + |
1047 | + # Check that all units returned the same pool name:id data |
1048 | + ret = u.validate_list_of_identical_dicts(results) |
1049 | + if ret: |
1050 | + u.log.debug('Pool list results: {}'.format(results)) |
1051 | + msg = ('{}; Pool list results are not identical on all ' |
1052 | + 'ceph units.'.format(ret)) |
1053 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1054 | + else: |
1055 | + u.log.debug('Pool list on all ceph units produced the ' |
1056 | + 'same results (OK).') |
1057 | + |
1058 | + def test_410_ceph_cinder_vol_create(self): |
1059 | + """Create and confirm a ceph-backed cinder volume, and inspect |
1060 | + ceph cinder pool object count as the volume is created |
1061 | + and deleted.""" |
1062 | + sentry_unit = self.ceph0_sentry |
1063 | + obj_count_samples = [] |
1064 | + pool_size_samples = [] |
1065 | + pools = u.get_ceph_pools(self.ceph0_sentry) |
1066 | + cinder_pool = pools['cinder'] |
1067 | + |
1068 | + # Check ceph cinder pool object count, disk space usage and pool name |
1069 | + u.log.debug('Checking ceph cinder pool original samples...') |
1070 | + pool_name, obj_count, kb_used = u.get_ceph_pool_sample(sentry_unit, |
1071 | + cinder_pool) |
1072 | + obj_count_samples.append(obj_count) |
1073 | + pool_size_samples.append(kb_used) |
1074 | + |
1075 | + expected = 'cinder' |
1076 | + if pool_name != expected: |
1077 | + msg = ('Ceph pool {} unexpected name (actual, expected): ' |
1078 | + '{}. {}'.format(cinder_pool, pool_name, expected)) |
1079 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1080 | + |
1081 | + # Create ceph-backed cinder volume |
1082 | + cinder_vol = u.create_cinder_volume(self.cinder) |
1083 | + |
1084 | + # Re-check ceph cinder pool object count and disk usage |
1085 | + time.sleep(10) |
1086 | + u.log.debug('Checking ceph cinder pool samples after volume create...') |
1087 | + pool_name, obj_count, kb_used = u.get_ceph_pool_sample(sentry_unit, |
1088 | + cinder_pool) |
1089 | + obj_count_samples.append(obj_count) |
1090 | + pool_size_samples.append(kb_used) |
1091 | + |
1092 | + # Delete ceph-backed cinder volume |
1093 | + u.delete_resource(self.cinder.volumes, cinder_vol, msg="cinder volume") |
1094 | + |
1095 | + # Final check, ceph cinder pool object count and disk usage |
1096 | + time.sleep(10) |
1097 | + u.log.debug('Checking ceph cinder pool after volume delete...') |
1098 | + pool_name, obj_count, kb_used = u.get_ceph_pool_sample(sentry_unit, |
1099 | + cinder_pool) |
1100 | + obj_count_samples.append(obj_count) |
1101 | + pool_size_samples.append(kb_used) |
1102 | + |
1103 | + # Validate ceph cinder pool object count samples over time |
1104 | + ret = u.validate_ceph_pool_samples(obj_count_samples, |
1105 | + "cinder pool object count") |
1106 | + if ret: |
1107 | + amulet.raise_status(amulet.FAIL, msg=ret) |
1108 | + |
1109 | + # Validate ceph cinder pool disk space usage samples over time |
1110 | + ret = u.validate_ceph_pool_samples(pool_size_samples, |
1111 | + "cinder pool disk usage") |
1112 | + if ret: |
1113 | + amulet.raise_status(amulet.FAIL, msg=ret) |
1114 | + |
1115 | + def test_412_ceph_glance_image_create_delete(self): |
1116 | + """Create and confirm a ceph-backed glance image, and inspect |
1117 | + ceph glance pool object count as the image is created |
1118 | + and deleted.""" |
1119 | + sentry_unit = self.ceph0_sentry |
1120 | + obj_count_samples = [] |
1121 | + pool_size_samples = [] |
1122 | + pools = u.get_ceph_pools(self.ceph0_sentry) |
1123 | + glance_pool = pools['glance'] |
1124 | + |
1125 | + # Check ceph glance pool object count, disk space usage and pool name |
1126 | + u.log.debug('Checking ceph glance pool original samples...') |
1127 | + pool_name, obj_count, kb_used = u.get_ceph_pool_sample(sentry_unit, |
1128 | + glance_pool) |
1129 | + obj_count_samples.append(obj_count) |
1130 | + pool_size_samples.append(kb_used) |
1131 | + |
1132 | + expected = 'glance' |
1133 | + if pool_name != expected: |
1134 | + msg = ('Ceph glance pool {} unexpected name (actual, ' |
1135 | + 'expected): {}. {}'.format(glance_pool, |
1136 | + pool_name, expected)) |
1137 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1138 | + |
1139 | + # Create ceph-backed glance image |
1140 | + glance_img = u.create_cirros_image(self.glance, 'cirros-image-1') |
1141 | + |
1142 | + # Re-check ceph glance pool object count and disk usage |
1143 | + time.sleep(10) |
1144 | + u.log.debug('Checking ceph glance pool samples after image create...') |
1145 | + pool_name, obj_count, kb_used = u.get_ceph_pool_sample(sentry_unit, |
1146 | + glance_pool) |
1147 | + obj_count_samples.append(obj_count) |
1148 | + pool_size_samples.append(kb_used) |
1149 | + |
1150 | + # Delete ceph-backed glance image |
1151 | + u.delete_resource(self.glance.images, |
1152 | + glance_img, msg="glance image") |
1153 | + |
1154 | + # Final check, ceph glance pool object count and disk usage |
1155 | + time.sleep(10) |
1156 | + u.log.debug('Checking ceph glance pool samples after image delete...') |
1157 | + pool_name, obj_count, kb_used = u.get_ceph_pool_sample(sentry_unit, |
1158 | + glance_pool) |
1159 | + obj_count_samples.append(obj_count) |
1160 | + pool_size_samples.append(kb_used) |
1161 | + |
1162 | + # Validate ceph glance pool object count samples over time |
1163 | + ret = u.validate_ceph_pool_samples(obj_count_samples, |
1164 | + "glance pool object count") |
1165 | + if ret: |
1166 | + amulet.raise_status(amulet.FAIL, msg=ret) |
1167 | + |
1168 | + # Validate ceph glance pool disk space usage samples over time |
1169 | + ret = u.validate_ceph_pool_samples(pool_size_samples, |
1170 | + "glance pool disk usage") |
1171 | + if ret: |
1172 | + amulet.raise_status(amulet.FAIL, msg=ret) |
1173 | + |
1174 | + def test_499_ceph_cmds_exit_zero(self): |
1175 | + """Check basic functionality of ceph cli commands against |
1176 | + all ceph units.""" |
1177 | + sentry_units = [ |
1178 | + self.ceph_osd_sentry, |
1179 | + self.ceph0_sentry, |
1180 | + self.ceph1_sentry, |
1181 | + self.ceph2_sentry |
1182 | + ] |
1183 | + commands = [ |
1184 | + 'sudo ceph health', |
1185 | + 'sudo ceph mds stat', |
1186 | + 'sudo ceph pg stat', |
1187 | + 'sudo ceph osd stat', |
1188 | + 'sudo ceph mon stat', |
1189 | + ] |
1190 | + ret = u.check_commands_on_units(commands, sentry_units) |
1191 | + if ret: |
1192 | + amulet.raise_status(amulet.FAIL, msg=ret) |
1193 | + |
1194 | + # FYI: No restart check as ceph services do not restart |
1195 | + # when charm config changes, unless monitor count increases. |
1196 | |
1197 | === modified file 'tests/charmhelpers/contrib/amulet/utils.py' |
1198 | --- tests/charmhelpers/contrib/amulet/utils.py 2015-04-16 21:32:48 +0000 |
1199 | +++ tests/charmhelpers/contrib/amulet/utils.py 2015-07-01 14:47:35 +0000 |
1200 | @@ -14,14 +14,17 @@ |
1201 | # You should have received a copy of the GNU Lesser General Public License |
1202 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1203 | |
1204 | +import amulet |
1205 | import ConfigParser |
1206 | +import distro_info |
1207 | import io |
1208 | import logging |
1209 | +import os |
1210 | import re |
1211 | +import six |
1212 | import sys |
1213 | import time |
1214 | - |
1215 | -import six |
1216 | +import urlparse |
1217 | |
1218 | |
1219 | class AmuletUtils(object): |
1220 | @@ -33,6 +36,7 @@ |
1221 | |
1222 | def __init__(self, log_level=logging.ERROR): |
1223 | self.log = self.get_logger(level=log_level) |
1224 | + self.ubuntu_releases = self.get_ubuntu_releases() |
1225 | |
1226 | def get_logger(self, name="amulet-logger", level=logging.DEBUG): |
1227 | """Get a logger object that will log to stdout.""" |
1228 | @@ -70,15 +74,85 @@ |
1229 | else: |
1230 | return False |
1231 | |
1232 | + def get_ubuntu_release_from_sentry(self, sentry_unit): |
1233 | + """Get Ubuntu release codename from sentry unit. |
1234 | + |
1235 | + :param sentry_unit: amulet sentry/service unit pointer |
1236 | + :returns: list of strings - release codename, failure message |
1237 | + """ |
1238 | + msg = None |
1239 | + cmd = 'lsb_release -cs' |
1240 | + release, code = sentry_unit.run(cmd) |
1241 | + if code == 0: |
1242 | + self.log.debug('{} lsb_release: {}'.format( |
1243 | + sentry_unit.info['unit_name'], release)) |
1244 | + else: |
1245 | + msg = ('{} `{}` returned {} ' |
1246 | + '{}'.format(sentry_unit.info['unit_name'], |
1247 | + cmd, release, code)) |
1248 | + if release not in self.ubuntu_releases: |
1249 | + msg = ("Release ({}) not found in Ubuntu releases " |
1250 | + "({})".format(release, self.ubuntu_releases)) |
1251 | + return release, msg |
1252 | + |
1253 | def validate_services(self, commands): |
1254 | - """Validate services. |
1255 | - |
1256 | - Verify the specified services are running on the corresponding |
1257 | + """Validate that lists of commands succeed on service units. Can be |
1258 | + used to verify system services are running on the corresponding |
1259 | service units. |
1260 | - """ |
1261 | + |
1262 | + :param commands: dict with sentry keys and arbitrary command list vals |
1263 | + :returns: None if successful, Failure string message otherwise |
1264 | + """ |
1265 | + self.log.debug('Checking status of system services...') |
1266 | + |
1267 | + # /!\ DEPRECATION WARNING (beisner): |
1268 | + # New and existing tests should be rewritten to use |
1269 | + # validate_services_by_name() as it is aware of init systems. |
1270 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
1271 | + 'validate_services_by_name instead of validate_services ' |
1272 | + 'due to init system differences.') |
1273 | + |
1274 | for k, v in six.iteritems(commands): |
1275 | for cmd in v: |
1276 | output, code = k.run(cmd) |
1277 | + self.log.debug('{} `{}` returned ' |
1278 | + '{}'.format(k.info['unit_name'], |
1279 | + cmd, code)) |
1280 | + if code != 0: |
1281 | + return "command `{}` returned {}".format(cmd, str(code)) |
1282 | + return None |
1283 | + |
1284 | + def validate_services_by_name(self, sentry_services): |
1285 | + """Validate system service status by service name, automatically |
1286 | + detecting init system based on Ubuntu release codename. |
1287 | + |
1288 | + :param sentry_services: dict with sentry keys and svc list values |
1289 | + :returns: None if successful, Failure string message otherwise |
1290 | + """ |
1291 | + self.log.debug('Checking status of system services...') |
1292 | + |
1293 | + # Point at which systemd became a thing |
1294 | + systemd_switch = self.ubuntu_releases.index('vivid') |
1295 | + |
1296 | + for sentry_unit, services_list in six.iteritems(sentry_services): |
1297 | + # Get lsb_release codename from unit |
1298 | + release, ret = self.get_ubuntu_release_from_sentry(sentry_unit) |
1299 | + if ret: |
1300 | + return ret |
1301 | + |
1302 | + for service_name in services_list: |
1303 | + if (self.ubuntu_releases.index(release) >= systemd_switch or |
1304 | + service_name == "rabbitmq-server"): |
1305 | + # init is systemd |
1306 | + cmd = 'sudo service {} status'.format(service_name) |
1307 | + elif self.ubuntu_releases.index(release) < systemd_switch: |
1308 | + # init is upstart |
1309 | + cmd = 'sudo status {}'.format(service_name) |
1310 | + |
1311 | + output, code = sentry_unit.run(cmd) |
1312 | + self.log.debug('{} `{}` returned ' |
1313 | + '{}'.format(sentry_unit.info['unit_name'], |
1314 | + cmd, code)) |
1315 | if code != 0: |
1316 | return "command `{}` returned {}".format(cmd, str(code)) |
1317 | return None |
1318 | @@ -86,7 +160,11 @@ |
1319 | def _get_config(self, unit, filename): |
1320 | """Get a ConfigParser object for parsing a unit's config file.""" |
1321 | file_contents = unit.file_contents(filename) |
1322 | - config = ConfigParser.ConfigParser() |
1323 | + |
1324 | + # NOTE(beisner): by default, ConfigParser does not handle options |
1325 | + # with no value, such as the flags used in the mysql my.cnf file. |
1326 | + # https://bugs.python.org/issue7005 |
1327 | + config = ConfigParser.ConfigParser(allow_no_value=True) |
1328 | config.readfp(io.StringIO(file_contents)) |
1329 | return config |
1330 | |
1331 | @@ -96,7 +174,15 @@ |
1332 | |
1333 | Verify that the specified section of the config file contains |
1334 | the expected option key:value pairs. |
1335 | + |
1336 | + Compare expected dictionary data vs actual dictionary data. |
1337 | + The values in the 'expected' dictionary can be strings, bools, ints, |
1338 | + longs, or can be a function that evaluates a variable and returns a |
1339 | + bool. |
1340 | """ |
1341 | + self.log.debug('Validating config file data ({} in {} on {})' |
1342 | + '...'.format(section, config_file, |
1343 | + sentry_unit.info['unit_name'])) |
1344 | config = self._get_config(sentry_unit, config_file) |
1345 | |
1346 | if section != 'DEFAULT' and not config.has_section(section): |
1347 | @@ -105,9 +191,20 @@ |
1348 | for k in expected.keys(): |
1349 | if not config.has_option(section, k): |
1350 | return "section [{}] is missing option {}".format(section, k) |
1351 | - if config.get(section, k) != expected[k]: |
1352 | + |
1353 | + actual = config.get(section, k) |
1354 | + v = expected[k] |
1355 | + if (isinstance(v, six.string_types) or |
1356 | + isinstance(v, bool) or |
1357 | + isinstance(v, six.integer_types)): |
1358 | + # handle explicit values |
1359 | + if actual != v: |
1360 | + return "section [{}] {}:{} != expected {}:{}".format( |
1361 | + section, k, actual, k, expected[k]) |
1362 | + # handle function pointers, such as not_null or valid_ip |
1363 | + elif not v(actual): |
1364 | return "section [{}] {}:{} != expected {}:{}".format( |
1365 | - section, k, config.get(section, k), k, expected[k]) |
1366 | + section, k, actual, k, expected[k]) |
1367 | return None |
1368 | |
1369 | def _validate_dict_data(self, expected, actual): |
1370 | @@ -115,7 +212,7 @@ |
1371 | |
1372 | Compare expected dictionary data vs actual dictionary data. |
1373 | The values in the 'expected' dictionary can be strings, bools, ints, |
1374 | - longs, or can be a function that evaluate a variable and returns a |
1375 | + longs, or can be a function that evaluates a variable and returns a |
1376 | bool. |
1377 | """ |
1378 | self.log.debug('actual: {}'.format(repr(actual))) |
1379 | @@ -126,8 +223,10 @@ |
1380 | if (isinstance(v, six.string_types) or |
1381 | isinstance(v, bool) or |
1382 | isinstance(v, six.integer_types)): |
1383 | + # handle explicit values |
1384 | if v != actual[k]: |
1385 | return "{}:{}".format(k, actual[k]) |
1386 | + # handle function pointers, such as not_null or valid_ip |
1387 | elif not v(actual[k]): |
1388 | return "{}:{}".format(k, actual[k]) |
1389 | else: |
1390 | @@ -314,3 +413,121 @@ |
1391 | |
1392 | def endpoint_error(self, name, data): |
1393 | return 'unexpected endpoint data in {} - {}'.format(name, data) |
1394 | + |
1395 | + def get_ubuntu_releases(self): |
1396 | + """Return a list of all Ubuntu releases in order of release.""" |
1397 | + _d = distro_info.UbuntuDistroInfo() |
1398 | + _release_list = _d.all |
1399 | + self.log.debug('Ubuntu release list: {}'.format(_release_list)) |
1400 | + return _release_list |
1401 | + |
1402 | + def file_to_url(self, file_rel_path): |
1403 | + """Convert a relative file path to a file URL.""" |
1404 | + _abs_path = os.path.abspath(file_rel_path) |
1405 | + return urlparse.urlparse(_abs_path, scheme='file').geturl() |
1406 | + |
1407 | + def check_commands_on_units(self, commands, sentry_units): |
1408 | + """Check that all commands in a list exit zero on all |
1409 | + sentry units in a list. |
1410 | + |
1411 | + :param commands: list of bash commands |
1412 | + :param sentry_units: list of sentry unit pointers |
1413 | + :returns: None if successful; Failure message otherwise |
1414 | + """ |
1415 | + self.log.debug('Checking exit codes for {} commands on {} ' |
1416 | + 'sentry units...'.format(len(commands), |
1417 | + len(sentry_units))) |
1418 | + for sentry_unit in sentry_units: |
1419 | + for cmd in commands: |
1420 | + output, code = sentry_unit.run(cmd) |
1421 | + if code == 0: |
1422 | + self.log.debug('{} `{}` returned {} ' |
1423 | + '(OK)'.format(sentry_unit.info['unit_name'], |
1424 | + cmd, code)) |
1425 | + else: |
1426 | + return ('{} `{}` returned {} ' |
1427 | + '{}'.format(sentry_unit.info['unit_name'], |
1428 | + cmd, code, output)) |
1429 | + return None |
1430 | + |
1431 | + def get_process_id_list(self, sentry_unit, process_name): |
1432 | + """Get a list of process ID(s) from a single sentry juju unit |
1433 | + for a single process name. |
1434 | + |
1435 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
1436 | + :param process_name: Process name |
1437 | + :returns: List of process IDs |
1438 | + """ |
1439 | + cmd = 'pidof {}'.format(process_name) |
1440 | + output, code = sentry_unit.run(cmd) |
1441 | + if code != 0: |
1442 | + msg = ('{} `{}` returned {} ' |
1443 | + '{}'.format(sentry_unit.info['unit_name'], |
1444 | + cmd, code, output)) |
1445 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1446 | + return str(output).split() |
1447 | + |
1448 | + def get_unit_process_ids(self, unit_processes): |
1449 | + """Construct a dict containing unit sentries, process names, and |
1450 | + process IDs.""" |
1451 | + pid_dict = {} |
1452 | + for sentry_unit, process_list in unit_processes.iteritems(): |
1453 | + pid_dict[sentry_unit] = {} |
1454 | + for process in process_list: |
1455 | + pids = self.get_process_id_list(sentry_unit, process) |
1456 | + pid_dict[sentry_unit].update({process: pids}) |
1457 | + return pid_dict |
1458 | + |
1459 | + def validate_unit_process_ids(self, expected, actual): |
1460 | + """Validate process id quantities for services on units.""" |
1461 | + self.log.debug('Checking units for running processes...') |
1462 | + self.log.debug('Expected PIDs: {}'.format(expected)) |
1463 | + self.log.debug('Actual PIDs: {}'.format(actual)) |
1464 | + |
1465 | + if len(actual) != len(expected): |
1466 | + return ('Unit count mismatch. expected, actual: {}, ' |
1467 | + '{} '.format(len(expected), len(actual))) |
1468 | + |
1469 | + for (e_sentry, e_proc_names) in expected.iteritems(): |
1470 | + e_sentry_name = e_sentry.info['unit_name'] |
1471 | + if e_sentry in actual.keys(): |
1472 | + a_proc_names = actual[e_sentry] |
1473 | + else: |
1474 | + return ('Expected sentry ({}) not found in actual dict data.' |
1475 | + '{}'.format(e_sentry_name, e_sentry)) |
1476 | + |
1477 | + if len(e_proc_names.keys()) != len(a_proc_names.keys()): |
1478 | + return ('Process name count mismatch. expected, actual: {}, ' |
1479 | + '{}'.format(len(expected), len(actual))) |
1480 | + |
1481 | + for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \ |
1482 | + zip(e_proc_names.items(), a_proc_names.items()): |
1483 | + if e_proc_name != a_proc_name: |
1484 | + return ('Process name mismatch. expected, actual: {}, ' |
1485 | + '{}'.format(e_proc_name, a_proc_name)) |
1486 | + |
1487 | + a_pids_length = len(a_pids) |
1488 | + if e_pids_length != a_pids_length: |
1489 | + return ('PID count mismatch. {} ({}) expected, actual: ' |
1490 | + '{}, {} ({})'.format(e_sentry_name, e_proc_name, |
1491 | + e_pids_length, a_pids_length, |
1492 | + a_pids)) |
1493 | + else: |
1494 | + self.log.debug('PID check OK: {} {} {}: ' |
1495 | + '{}'.format(e_sentry_name, e_proc_name, |
1496 | + e_pids_length, a_pids)) |
1497 | + return None |
1498 | + |
1499 | + def validate_list_of_identical_dicts(self, list_of_dicts): |
1500 | + """Check that all dicts within a list are identical.""" |
1501 | + hashes = [] |
1502 | + for _dict in list_of_dicts: |
1503 | + hashes.append(hash(frozenset(_dict.items()))) |
1504 | + |
1505 | + self.log.debug('Hashes: {}'.format(hashes)) |
1506 | + if len(set(hashes)) == 1: |
1507 | + self.log.debug('Dicts within list are identical') |
1508 | + else: |
1509 | + return 'Dicts within list are not identical' |
1510 | + |
1511 | + return None |
1512 | |
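Note: the init-system handling in validate_services_by_name keys off the ordered codename list from python-distro-info: releases at or after vivid are treated as systemd, earlier ones as upstart, with rabbitmq-server special-cased. A standalone sketch of that decision using only distro_info (the helper function name is illustrative):

    import distro_info

    ubuntu_releases = distro_info.UbuntuDistroInfo().all   # ordered oldest -> newest
    systemd_switch = ubuntu_releases.index('vivid')        # first systemd release

    def status_cmd(release, service_name):
        """Pick the service status command appropriate to the unit's init system."""
        if (ubuntu_releases.index(release) >= systemd_switch or
                service_name == 'rabbitmq-server'):
            return 'sudo service {} status'.format(service_name)   # systemd
        return 'sudo status {}'.format(service_name)                # upstart

    # status_cmd('trusty', 'ceph-osd-all') -> 'sudo status ceph-osd-all'
    # status_cmd('vivid', 'ceph-osd-all')  -> 'sudo service ceph-osd-all status'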
1513 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py' |
1514 | --- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-04-16 21:32:48 +0000 |
1515 | +++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-01 14:47:35 +0000 |
1516 | @@ -46,15 +46,22 @@ |
1517 | stable or next branches for the other_services.""" |
1518 | base_charms = ['mysql', 'mongodb'] |
1519 | |
1520 | + if self.series in ['precise', 'trusty']: |
1521 | + base_series = self.series |
1522 | + else: |
1523 | + base_series = self.current_next |
1524 | + |
1525 | if self.stable: |
1526 | for svc in other_services: |
1527 | - temp = 'lp:charms/{}' |
1528 | - svc['location'] = temp.format(svc['name']) |
1529 | + temp = 'lp:charms/{}/{}' |
1530 | + svc['location'] = temp.format(base_series, |
1531 | + svc['name']) |
1532 | else: |
1533 | for svc in other_services: |
1534 | if svc['name'] in base_charms: |
1535 | - temp = 'lp:charms/{}' |
1536 | - svc['location'] = temp.format(svc['name']) |
1537 | + temp = 'lp:charms/{}/{}' |
1538 | + svc['location'] = temp.format(base_series, |
1539 | + svc['name']) |
1540 | else: |
1541 | temp = 'lp:~openstack-charmers/charms/{}/{}/next' |
1542 | svc['location'] = temp.format(self.current_next, |
1543 | @@ -72,9 +79,9 @@ |
1544 | services.append(this_service) |
1545 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', |
1546 | 'ceph-osd', 'ceph-radosgw'] |
1547 | - # Openstack subordinate charms do not expose an origin option as that |
1548 | - # is controlled by the principle |
1549 | - ignore = ['neutron-openvswitch'] |
1550 | + # Most OpenStack subordinate charms do not expose an origin option |
1551 | + # as that is controlled by the principal. |
1552 | + ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch'] |
1553 | |
1554 | if self.openstack: |
1555 | for svc in services: |
1556 | @@ -99,10 +106,13 @@ |
1557 | Return an integer representing the enum value of the openstack |
1558 | release. |
1559 | """ |
1560 | + # Must be ordered by OpenStack release (not by Ubuntu release): |
1561 | (self.precise_essex, self.precise_folsom, self.precise_grizzly, |
1562 | self.precise_havana, self.precise_icehouse, |
1563 | - self.trusty_icehouse, self.trusty_juno, self.trusty_kilo, |
1564 | - self.utopic_juno, self.vivid_kilo) = range(10) |
1565 | + self.trusty_icehouse, self.trusty_juno, self.utopic_juno, |
1566 | + self.trusty_kilo, self.vivid_kilo, self.trusty_liberty, |
1567 | + self.wily_liberty) = range(12) |
1568 | + |
1569 | releases = { |
1570 | ('precise', None): self.precise_essex, |
1571 | ('precise', 'cloud:precise-folsom'): self.precise_folsom, |
1572 | @@ -112,8 +122,10 @@ |
1573 | ('trusty', None): self.trusty_icehouse, |
1574 | ('trusty', 'cloud:trusty-juno'): self.trusty_juno, |
1575 | ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo, |
1576 | + ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty, |
1577 | ('utopic', None): self.utopic_juno, |
1578 | - ('vivid', None): self.vivid_kilo} |
1579 | + ('vivid', None): self.vivid_kilo, |
1580 | + ('wily', None): self.wily_liberty} |
1581 | return releases[(self.series, self.openstack)] |
1582 | |
1583 | def _get_openstack_release_string(self): |
1584 | @@ -129,9 +141,43 @@ |
1585 | ('trusty', 'icehouse'), |
1586 | ('utopic', 'juno'), |
1587 | ('vivid', 'kilo'), |
1588 | + ('wily', 'liberty'), |
1589 | ]) |
1590 | if self.openstack: |
1591 | os_origin = self.openstack.split(':')[1] |
1592 | return os_origin.split('%s-' % self.series)[1].split('/')[0] |
1593 | else: |
1594 | return releases[self.series] |
1595 | + |
1596 | + def get_ceph_expected_pools(self, radosgw=False): |
1597 | + """Return a list of expected ceph pools in a ceph + cinder + glance |
1598 | + test scenario, based on OpenStack release and whether ceph radosgw |
1599 | + is flagged as present or not.""" |
1600 | + |
1601 | + if self._get_openstack_release() >= self.trusty_kilo: |
1602 | + # Kilo or later |
1603 | + pools = [ |
1604 | + 'rbd', |
1605 | + 'cinder', |
1606 | + 'glance' |
1607 | + ] |
1608 | + else: |
1609 | + # Juno or earlier |
1610 | + pools = [ |
1611 | + 'data', |
1612 | + 'metadata', |
1613 | + 'rbd', |
1614 | + 'cinder', |
1615 | + 'glance' |
1616 | + ] |
1617 | + |
1618 | + if radosgw: |
1619 | + pools.extend([ |
1620 | + '.rgw.root', |
1621 | + '.rgw.control', |
1622 | + '.rgw', |
1623 | + '.rgw.gc', |
1624 | + '.users.uid' |
1625 | + ]) |
1626 | + |
1627 | + return pools |
1628 | |
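Note: the release enum above is deliberately ordered by OpenStack series rather than Ubuntu series, so comparisons such as >= self.trusty_kilo also pick up vivid and later. A sketch of how a charm test built on this deployment class might consume get_ceph_expected_pools (the test method name and sentry attribute are illustrative, not part of this branch; assumes the test class inherits OpenStackAmuletDeployment, has a ceph sentry on self.ceph0_sentry, and an OpenStackAmuletUtils instance `u`, with amulet imported at module level):

    def test_ceph_pools_present(self):
        """Sketch: assert that all expected ceph pools exist."""
        expected_pools = self.get_ceph_expected_pools(radosgw=False)
        actual_pools = u.get_ceph_pools(self.ceph0_sentry)   # {name: id}
        missing = [p for p in expected_pools if p not in actual_pools]
        if missing:
            amulet.raise_status(
                amulet.FAIL,
                msg='Expected ceph pools not found: {}'.format(missing))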
1629 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py' |
1630 | --- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-01-26 11:51:28 +0000 |
1631 | +++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-01 14:47:35 +0000 |
1632 | @@ -14,16 +14,20 @@ |
1633 | # You should have received a copy of the GNU Lesser General Public License |
1634 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1635 | |
1636 | +import amulet |
1637 | +import json |
1638 | import logging |
1639 | import os |
1640 | +import six |
1641 | import time |
1642 | import urllib |
1643 | |
1644 | +import cinderclient.v1.client as cinder_client |
1645 | import glanceclient.v1.client as glance_client |
1646 | +import heatclient.v1.client as heat_client |
1647 | import keystoneclient.v2_0 as keystone_client |
1648 | import novaclient.v1_1.client as nova_client |
1649 | - |
1650 | -import six |
1651 | +import swiftclient |
1652 | |
1653 | from charmhelpers.contrib.amulet.utils import ( |
1654 | AmuletUtils |
1655 | @@ -37,7 +41,7 @@ |
1656 | """OpenStack amulet utilities. |
1657 | |
1658 | This class inherits from AmuletUtils and has additional support |
1659 | - that is specifically for use by OpenStack charms. |
1660 | + that is specifically for use by OpenStack charm tests. |
1661 | """ |
1662 | |
1663 | def __init__(self, log_level=ERROR): |
1664 | @@ -51,6 +55,8 @@ |
1665 | Validate actual endpoint data vs expected endpoint data. The ports |
1666 | are used to find the matching endpoint. |
1667 | """ |
1668 | + self.log.debug('Validating endpoint data...') |
1669 | + self.log.debug('actual: {}'.format(repr(endpoints))) |
1670 | found = False |
1671 | for ep in endpoints: |
1672 | self.log.debug('endpoint: {}'.format(repr(ep))) |
1673 | @@ -77,6 +83,7 @@ |
1674 | Validate a list of actual service catalog endpoints vs a list of |
1675 | expected service catalog endpoints. |
1676 | """ |
1677 | + self.log.debug('Validating service catalog endpoint data...') |
1678 | self.log.debug('actual: {}'.format(repr(actual))) |
1679 | for k, v in six.iteritems(expected): |
1680 | if k in actual: |
1681 | @@ -93,6 +100,7 @@ |
1682 | Validate a list of actual tenant data vs list of expected tenant |
1683 | data. |
1684 | """ |
1685 | + self.log.debug('Validating tenant data...') |
1686 | self.log.debug('actual: {}'.format(repr(actual))) |
1687 | for e in expected: |
1688 | found = False |
1689 | @@ -114,6 +122,7 @@ |
1690 | Validate a list of actual role data vs a list of expected role |
1691 | data. |
1692 | """ |
1693 | + self.log.debug('Validating role data...') |
1694 | self.log.debug('actual: {}'.format(repr(actual))) |
1695 | for e in expected: |
1696 | found = False |
1697 | @@ -134,6 +143,7 @@ |
1698 | Validate a list of actual user data vs a list of expected user |
1699 | data. |
1700 | """ |
1701 | + self.log.debug('Validating user data...') |
1702 | self.log.debug('actual: {}'.format(repr(actual))) |
1703 | for e in expected: |
1704 | found = False |
1705 | @@ -155,17 +165,30 @@ |
1706 | |
1707 | Validate a list of actual flavors vs a list of expected flavors. |
1708 | """ |
1709 | + self.log.debug('Validating flavor data...') |
1710 | self.log.debug('actual: {}'.format(repr(actual))) |
1711 | act = [a.name for a in actual] |
1712 | return self._validate_list_data(expected, act) |
1713 | |
1714 | def tenant_exists(self, keystone, tenant): |
1715 | """Return True if tenant exists.""" |
1716 | + self.log.debug('Checking if tenant exists ({})...'.format(tenant)) |
1717 | return tenant in [t.name for t in keystone.tenants.list()] |
1718 | |
1719 | + def authenticate_cinder_admin(self, keystone_sentry, username, |
1720 | + password, tenant): |
1721 | + """Authenticates admin user with cinder.""" |
1722 | + # NOTE(beisner): cinder python client doesn't accept tokens. |
1723 | + service_ip = \ |
1724 | + keystone_sentry.relation('shared-db', |
1725 | + 'mysql:shared-db')['private-address'] |
1726 | + ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8')) |
1727 | + return cinder_client.Client(username, password, tenant, ept) |
1728 | + |
1729 | def authenticate_keystone_admin(self, keystone_sentry, user, password, |
1730 | tenant): |
1731 | """Authenticates admin user with the keystone admin endpoint.""" |
1732 | + self.log.debug('Authenticating keystone admin...') |
1733 | unit = keystone_sentry |
1734 | service_ip = unit.relation('shared-db', |
1735 | 'mysql:shared-db')['private-address'] |
1736 | @@ -175,6 +198,7 @@ |
1737 | |
1738 | def authenticate_keystone_user(self, keystone, user, password, tenant): |
1739 | """Authenticates a regular user with the keystone public endpoint.""" |
1740 | + self.log.debug('Authenticating keystone user ({})...'.format(user)) |
1741 | ep = keystone.service_catalog.url_for(service_type='identity', |
1742 | endpoint_type='publicURL') |
1743 | return keystone_client.Client(username=user, password=password, |
1744 | @@ -182,19 +206,49 @@ |
1745 | |
1746 | def authenticate_glance_admin(self, keystone): |
1747 | """Authenticates admin user with glance.""" |
1748 | + self.log.debug('Authenticating glance admin...') |
1749 | ep = keystone.service_catalog.url_for(service_type='image', |
1750 | endpoint_type='adminURL') |
1751 | return glance_client.Client(ep, token=keystone.auth_token) |
1752 | |
1753 | + def authenticate_heat_admin(self, keystone): |
1754 | + """Authenticates the admin user with heat.""" |
1755 | + self.log.debug('Authenticating heat admin...') |
1756 | + ep = keystone.service_catalog.url_for(service_type='orchestration', |
1757 | + endpoint_type='publicURL') |
1758 | + return heat_client.Client(endpoint=ep, token=keystone.auth_token) |
1759 | + |
1760 | def authenticate_nova_user(self, keystone, user, password, tenant): |
1761 | """Authenticates a regular user with nova-api.""" |
1762 | + self.log.debug('Authenticating nova user ({})...'.format(user)) |
1763 | ep = keystone.service_catalog.url_for(service_type='identity', |
1764 | endpoint_type='publicURL') |
1765 | return nova_client.Client(username=user, api_key=password, |
1766 | project_id=tenant, auth_url=ep) |
1767 | |
1768 | + def authenticate_swift_user(self, keystone, user, password, tenant): |
1769 | + """Authenticates a regular user with swift api.""" |
1770 | + self.log.debug('Authenticating swift user ({})...'.format(user)) |
1771 | + ep = keystone.service_catalog.url_for(service_type='identity', |
1772 | + endpoint_type='publicURL') |
1773 | + return swiftclient.Connection(authurl=ep, |
1774 | + user=user, |
1775 | + key=password, |
1776 | + tenant_name=tenant, |
1777 | + auth_version='2.0') |
1778 | + |
1779 | def create_cirros_image(self, glance, image_name): |
1780 | - """Download the latest cirros image and upload it to glance.""" |
1781 | + """Download the latest cirros image and upload it to glance, |
1782 | + validate and return a resource pointer. |
1783 | + |
1784 | + :param glance: pointer to authenticated glance connection |
1785 | + :param image_name: display name for new image |
1786 | + :returns: glance image pointer |
1787 | + """ |
1788 | + self.log.debug('Creating glance cirros image ' |
1789 | + '({})...'.format(image_name)) |
1790 | + |
1791 | + # Download cirros image |
1792 | http_proxy = os.getenv('AMULET_HTTP_PROXY') |
1793 | self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy)) |
1794 | if http_proxy: |
1795 | @@ -203,57 +257,67 @@ |
1796 | else: |
1797 | opener = urllib.FancyURLopener() |
1798 | |
1799 | - f = opener.open("http://download.cirros-cloud.net/version/released") |
1800 | + f = opener.open('http://download.cirros-cloud.net/version/released') |
1801 | version = f.read().strip() |
1802 | - cirros_img = "cirros-{}-x86_64-disk.img".format(version) |
1803 | + cirros_img = 'cirros-{}-x86_64-disk.img'.format(version) |
1804 | local_path = os.path.join('tests', cirros_img) |
1805 | |
1806 | if not os.path.exists(local_path): |
1807 | - cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net", |
1808 | + cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net', |
1809 | version, cirros_img) |
1810 | opener.retrieve(cirros_url, local_path) |
1811 | f.close() |
1812 | |
1813 | + # Create glance image |
1814 | with open(local_path) as f: |
1815 | image = glance.images.create(name=image_name, is_public=True, |
1816 | disk_format='qcow2', |
1817 | container_format='bare', data=f) |
1818 | - count = 1 |
1819 | - status = image.status |
1820 | - while status != 'active' and count < 10: |
1821 | - time.sleep(3) |
1822 | - image = glance.images.get(image.id) |
1823 | - status = image.status |
1824 | - self.log.debug('image status: {}'.format(status)) |
1825 | - count += 1 |
1826 | - |
1827 | - if status != 'active': |
1828 | - self.log.error('image creation timed out') |
1829 | - return None |
1830 | + |
1831 | + # Wait for image to reach active status |
1832 | + img_id = image.id |
1833 | + ret = self.resource_reaches_status(glance.images, img_id, |
1834 | + expected_stat='active', |
1835 | + msg='Image status wait') |
1836 | + if not ret: |
1837 | + msg = 'Glance image failed to reach expected state.' |
1838 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1839 | + |
1840 | + # Re-validate new image |
1841 | + self.log.debug('Validating image attributes...') |
1842 | + val_img_name = glance.images.get(img_id).name |
1843 | + val_img_stat = glance.images.get(img_id).status |
1844 | + val_img_pub = glance.images.get(img_id).is_public |
1845 | + val_img_cfmt = glance.images.get(img_id).container_format |
1846 | + val_img_dfmt = glance.images.get(img_id).disk_format |
1847 | + msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} ' |
1848 | + 'container fmt:{} disk fmt:{}'.format( |
1849 | + val_img_name, val_img_pub, img_id, |
1850 | + val_img_stat, val_img_cfmt, val_img_dfmt)) |
1851 | + |
1852 | + if val_img_name == image_name and val_img_stat == 'active' \ |
1853 | + and val_img_pub is True and val_img_cfmt == 'bare' \ |
1854 | + and val_img_dfmt == 'qcow2': |
1855 | + self.log.debug(msg_attr) |
1856 | + else: |
1857 | + msg = ('Image validation failed, {}'.format(msg_attr)) |
1858 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1859 | |
1860 | return image |
1861 | |
1862 | def delete_image(self, glance, image): |
1863 | """Delete the specified image.""" |
1864 | - num_before = len(list(glance.images.list())) |
1865 | - glance.images.delete(image) |
1866 | - |
1867 | - count = 1 |
1868 | - num_after = len(list(glance.images.list())) |
1869 | - while num_after != (num_before - 1) and count < 10: |
1870 | - time.sleep(3) |
1871 | - num_after = len(list(glance.images.list())) |
1872 | - self.log.debug('number of images: {}'.format(num_after)) |
1873 | - count += 1 |
1874 | - |
1875 | - if num_after != (num_before - 1): |
1876 | - self.log.error('image deletion timed out') |
1877 | - return False |
1878 | - |
1879 | - return True |
1880 | + |
1881 | + # /!\ DEPRECATION WARNING |
1882 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
1883 | + 'delete_resource instead of delete_image.') |
1884 | + self.log.debug('Deleting glance image ({})...'.format(image)) |
1885 | + return self.delete_resource(glance.images, image, msg='glance image') |
1886 | |
1887 | def create_instance(self, nova, image_name, instance_name, flavor): |
1888 | """Create the specified instance.""" |
1889 | + self.log.debug('Creating instance ' |
1890 | + '({}|{}|{})'.format(instance_name, image_name, flavor)) |
1891 | image = nova.images.find(name=image_name) |
1892 | flavor = nova.flavors.find(name=flavor) |
1893 | instance = nova.servers.create(name=instance_name, image=image, |
1894 | @@ -276,19 +340,265 @@ |
1895 | |
1896 | def delete_instance(self, nova, instance): |
1897 | """Delete the specified instance.""" |
1898 | - num_before = len(list(nova.servers.list())) |
1899 | - nova.servers.delete(instance) |
1900 | - |
1901 | - count = 1 |
1902 | - num_after = len(list(nova.servers.list())) |
1903 | - while num_after != (num_before - 1) and count < 10: |
1904 | - time.sleep(3) |
1905 | - num_after = len(list(nova.servers.list())) |
1906 | - self.log.debug('number of instances: {}'.format(num_after)) |
1907 | - count += 1 |
1908 | - |
1909 | - if num_after != (num_before - 1): |
1910 | - self.log.error('instance deletion timed out') |
1911 | - return False |
1912 | - |
1913 | - return True |
1914 | + |
1915 | + # /!\ DEPRECATION WARNING |
1916 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
1917 | + 'delete_resource instead of delete_instance.') |
1918 | + self.log.debug('Deleting instance ({})...'.format(instance)) |
1919 | + return self.delete_resource(nova.servers, instance, |
1920 | + msg='nova instance') |
1921 | + |
1922 | + def create_or_get_keypair(self, nova, keypair_name="testkey"): |
1923 | + """Create a new keypair, or return pointer if it already exists.""" |
1924 | + try: |
1925 | + _keypair = nova.keypairs.get(keypair_name) |
1926 | + self.log.debug('Keypair ({}) already exists, ' |
1927 | + 'using it.'.format(keypair_name)) |
1928 | + return _keypair |
1929 | + except: |
1930 | + self.log.debug('Keypair ({}) does not exist, ' |
1931 | + 'creating it.'.format(keypair_name)) |
1932 | + |
1933 | + _keypair = nova.keypairs.create(name=keypair_name) |
1934 | + return _keypair |
1935 | + |
1936 | + def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1, |
1937 | + img_id=None, src_vol_id=None, snap_id=None): |
1938 | + """Create cinder volume, optionally from a glance image, OR |
1939 | + optionally as a clone of an existing volume, OR optionally |
1940 | + from a snapshot. Wait for the new volume status to reach |
1941 | + the expected status, validate and return a resource pointer. |
1942 | + |
1943 | + :param vol_name: cinder volume display name |
1944 | + :param vol_size: size in gigabytes |
1945 | + :param img_id: optional glance image id |
1946 | + :param src_vol_id: optional source volume id to clone |
1947 | + :param snap_id: optional snapshot id to use |
1948 | + :returns: cinder volume pointer |
1949 | + """ |
1950 | + # Handle parameter input and avoid impossible combinations |
1951 | + if img_id and not src_vol_id and not snap_id: |
1952 | + # Create volume from image |
1953 | + self.log.debug('Creating cinder volume from glance image...') |
1954 | + bootable = 'true' |
1955 | + elif src_vol_id and not img_id and not snap_id: |
1956 | + # Clone an existing volume |
1957 | + self.log.debug('Cloning cinder volume...') |
1958 | + bootable = cinder.volumes.get(src_vol_id).bootable |
1959 | + elif snap_id and not src_vol_id and not img_id: |
1960 | + # Create volume from snapshot |
1961 | + self.log.debug('Creating cinder volume from snapshot...') |
1962 | + snap = cinder.volume_snapshots.find(id=snap_id) |
1963 | + vol_size = snap.size |
1964 | + snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id |
1965 | + bootable = cinder.volumes.get(snap_vol_id).bootable |
1966 | + elif not img_id and not src_vol_id and not snap_id: |
1967 | + # Create volume |
1968 | + self.log.debug('Creating cinder volume...') |
1969 | + bootable = 'false' |
1970 | + else: |
1971 | + # Impossible combination of parameters |
1972 | + msg = ('Invalid method use - name:{} size:{} img_id:{} ' |
1973 | + 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size, |
1974 | + img_id, src_vol_id, |
1975 | + snap_id)) |
1976 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1977 | + |
1978 | + # Create new volume |
1979 | + try: |
1980 | + vol_new = cinder.volumes.create(display_name=vol_name, |
1981 | + imageRef=img_id, |
1982 | + size=vol_size, |
1983 | + source_volid=src_vol_id, |
1984 | + snapshot_id=snap_id) |
1985 | + vol_id = vol_new.id |
1986 | + except Exception as e: |
1987 | + msg = 'Failed to create volume: {}'.format(e) |
1988 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1989 | + |
1990 | + # Wait for volume to reach available status |
1991 | + ret = self.resource_reaches_status(cinder.volumes, vol_id, |
1992 | + expected_stat="available", |
1993 | + msg="Volume status wait") |
1994 | + if not ret: |
1995 | + msg = 'Cinder volume failed to reach expected state.' |
1996 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1997 | + |
1998 | + # Re-validate new volume |
1999 | + self.log.debug('Validating volume attributes...') |
2000 | + val_vol_name = cinder.volumes.get(vol_id).display_name |
2001 | + val_vol_boot = cinder.volumes.get(vol_id).bootable |
2002 | + val_vol_stat = cinder.volumes.get(vol_id).status |
2003 | + val_vol_size = cinder.volumes.get(vol_id).size |
2004 | + msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:' |
2005 | + '{} size:{}'.format(val_vol_name, vol_id, |
2006 | + val_vol_stat, val_vol_boot, |
2007 | + val_vol_size)) |
2008 | + |
2009 | + if val_vol_boot == bootable and val_vol_stat == 'available' \ |
2010 | + and val_vol_name == vol_name and val_vol_size == vol_size: |
2011 | + self.log.debug(msg_attr) |
2012 | + else: |
2013 | + msg = ('Volume validation failed, {}'.format(msg_attr)) |
2014 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2015 | + |
2016 | + return vol_new |
2017 | + |
2018 | + def delete_resource(self, resource, resource_id, |
2019 | + msg="resource", max_wait=120): |
2020 | + """Delete one openstack resource, such as one instance, keypair, |
2021 | + image, volume, stack, etc., and confirm deletion within max wait time. |
2022 | + |
2023 | + :param resource: pointer to os resource type, ex:glance_client.images |
2024 | + :param resource_id: unique name or id for the openstack resource |
2025 | + :param msg: text to identify purpose in logging |
2026 | + :param max_wait: maximum wait time in seconds |
2027 | + :returns: True if successful, otherwise False |
2028 | + """ |
2029 | + self.log.debug('Deleting OpenStack resource ' |
2030 | + '{} ({})'.format(resource_id, msg)) |
2031 | + num_before = len(list(resource.list())) |
2032 | + resource.delete(resource_id) |
2033 | + |
2034 | + tries = 0 |
2035 | + num_after = len(list(resource.list())) |
2036 | + while num_after != (num_before - 1) and tries < (max_wait / 4): |
2037 | + self.log.debug('{} delete check: ' |
2038 | + '{} [{}:{}] {}'.format(msg, tries, |
2039 | + num_before, |
2040 | + num_after, |
2041 | + resource_id)) |
2042 | + time.sleep(4) |
2043 | + num_after = len(list(resource.list())) |
2044 | + tries += 1 |
2045 | + |
2046 | + self.log.debug('{}: expected, actual count = {}, ' |
2047 | + '{}'.format(msg, num_before - 1, num_after)) |
2048 | + |
2049 | + if num_after == (num_before - 1): |
2050 | + return True |
2051 | + else: |
2052 | + self.log.error('{} delete timed out'.format(msg)) |
2053 | + return False |
2054 | + |
2055 | + def resource_reaches_status(self, resource, resource_id, |
2056 | + expected_stat='available', |
2057 | + msg='resource', max_wait=120): |
2058 | + """Wait for an openstack resource's status to reach an |
2059 | + expected status within a specified time. Useful to confirm that |
2060 | + nova instances, cinder vols, snapshots, glance images, heat stacks |
2061 | + and other resources eventually reach the expected status. |
2062 | + |
2063 | + :param resource: pointer to os resource type, ex: heat_client.stacks |
2064 | + :param resource_id: unique id for the openstack resource |
2065 | + :param expected_stat: status to expect resource to reach |
2066 | + :param msg: text to identify purpose in logging |
2067 | + :param max_wait: maximum wait time in seconds |
2068 | + :returns: True if successful, False if status is not reached |
2069 | + """ |
2070 | + |
2071 | + tries = 0 |
2072 | + resource_stat = resource.get(resource_id).status |
2073 | + while resource_stat != expected_stat and tries < (max_wait / 4): |
2074 | + self.log.debug('{} status check: ' |
2075 | + '{} [{}:{}] {}'.format(msg, tries, |
2076 | + resource_stat, |
2077 | + expected_stat, |
2078 | + resource_id)) |
2079 | + time.sleep(4) |
2080 | + resource_stat = resource.get(resource_id).status |
2081 | + tries += 1 |
2082 | + |
2083 | + self.log.debug('{}: expected, actual status = {}, ' |
2084 | + '{}'.format(msg, resource_stat, expected_stat)) |
2085 | + |
2086 | + if resource_stat == expected_stat: |
2087 | + return True |
2088 | + else: |
2089 | + self.log.debug('{} never reached expected status: ' |
2090 | + '{}'.format(resource_id, expected_stat)) |
2091 | + return False |
2092 | + |
2093 | + def get_ceph_osd_id_cmd(self, index): |
2094 | + """Produce a shell command that will return a ceph-osd id.""" |
2095 | + return ("`initctl list | grep 'ceph-osd ' | " |
2096 | + "awk 'NR=={} {{ print $2 }}' | " |
2097 | + "grep -o '[0-9]*'`".format(index + 1)) |
2098 | + |
2099 | + def get_ceph_pools(self, sentry_unit): |
2100 | + """Return a dict of ceph pools from a single ceph unit, with |
2101 | + pool name as keys, pool id as vals.""" |
2102 | + pools = {} |
2103 | + cmd = 'sudo ceph osd lspools' |
2104 | + output, code = sentry_unit.run(cmd) |
2105 | + if code != 0: |
2106 | + msg = ('{} `{}` returned {} ' |
2107 | + '{}'.format(sentry_unit.info['unit_name'], |
2108 | + cmd, code, output)) |
2109 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2110 | + |
2111 | + # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance, |
2112 | + for pool in str(output).split(','): |
2113 | + pool_id_name = pool.split(' ') |
2114 | + if len(pool_id_name) == 2: |
2115 | + pool_id = pool_id_name[0] |
2116 | + pool_name = pool_id_name[1] |
2117 | + pools[pool_name] = int(pool_id) |
2118 | + |
2119 | + self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'], |
2120 | + pools)) |
2121 | + return pools |
2122 | + |
2123 | + def get_ceph_df(self, sentry_unit): |
2124 | + """Return dict of ceph df json output, including ceph pool state. |
2125 | + |
2126 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
2127 | + :returns: Dict of ceph df output |
2128 | + """ |
2129 | + cmd = 'sudo ceph df --format=json' |
2130 | + output, code = sentry_unit.run(cmd) |
2131 | + if code != 0: |
2132 | + msg = ('{} `{}` returned {} ' |
2133 | + '{}'.format(sentry_unit.info['unit_name'], |
2134 | + cmd, code, output)) |
2135 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2136 | + return json.loads(output) |
2137 | + |
2138 | + def get_ceph_pool_sample(self, sentry_unit, pool_id=0): |
2139 | + """Take a sample of attributes of a ceph pool, returning ceph |
2140 | + pool name, object count and disk space used for the specified |
2141 | + pool ID number. |
2142 | + |
2143 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
2144 | + :param pool_id: Ceph pool ID |
2145 | + :returns: List of pool name, object count, kb disk space used |
2146 | + """ |
2147 | + df = self.get_ceph_df(sentry_unit) |
2148 | + pool_name = df['pools'][pool_id]['name'] |
2149 | + obj_count = df['pools'][pool_id]['stats']['objects'] |
2150 | + kb_used = df['pools'][pool_id]['stats']['kb_used'] |
2151 | + self.log.debug('Ceph {} pool (ID {}): {} objects, ' |
2152 | + '{} kb used'.format(pool_name, pool_id, |
2153 | + obj_count, kb_used)) |
2154 | + return pool_name, obj_count, kb_used |
2155 | + |
2156 | + def validate_ceph_pool_samples(self, samples, sample_type="resource pool"): |
2157 | + """Validate ceph pool samples taken over time, such as pool |
2158 | + object counts or pool kb used, before adding, after adding, and |
2159 | + after deleting items which affect those pool attributes. The |
2160 | + 2nd element is expected to be greater than the 1st; 3rd is expected |
2161 | + to be less than the 2nd. |
2162 | + |
2163 | + :param samples: List containing 3 data samples |
2164 | + :param sample_type: String for logging and usage context |
2165 | + :returns: None if successful, Failure message otherwise |
2166 | + """ |
2167 | + original, created, deleted = range(3) |
2168 | + if samples[created] <= samples[original] or \ |
2169 | + samples[deleted] >= samples[created]: |
2170 | + return ('Ceph {} samples ({}) ' |
2171 | + 'unexpected.'.format(sample_type, samples)) |
2172 | + else: |
2173 | + self.log.debug('Ceph {} samples (OK): ' |
2174 | + '{}'.format(sample_type, samples)) |
2175 | + return None |
2176 | |
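Note: the pool sampling helpers are meant to be used together, which is what the glance-pool samples at the top of the basic_deployment changes do. A condensed sketch of that flow, assuming an authenticated glance client, a ceph sentry unit `sentry`, an OpenStackAmuletUtils instance `u`, and the glance pool at ID 4 (all illustrative):

    import amulet

    # Sample glance pool disk usage: before, after image create, after delete.
    pool_size_samples = [u.get_ceph_pool_sample(sentry, pool_id=4)[2]]

    image = u.create_cirros_image(glance, 'cirros-sample-img')
    pool_size_samples.append(u.get_ceph_pool_sample(sentry, pool_id=4)[2])

    u.delete_resource(glance.images, image.id, msg='glance image')
    pool_size_samples.append(u.get_ceph_pool_sample(sentry, pool_id=4)[2])

    # Expect usage to rise after the create and fall after the delete.
    ret = u.validate_ceph_pool_samples(pool_size_samples,
                                       'glance pool disk usage')
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)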
2177 | === added file 'tests/tests.yaml' |
2178 | --- tests/tests.yaml 1970-01-01 00:00:00 +0000 |
2179 | +++ tests/tests.yaml 2015-07-01 14:47:35 +0000 |
2180 | @@ -0,0 +1,18 @@ |
2181 | +bootstrap: true |
2182 | +reset: true |
2183 | +virtualenv: true |
2184 | +makefile: |
2185 | + - lint |
2186 | + - test |
2187 | +sources: |
2188 | + - ppa:juju/stable |
2189 | +packages: |
2190 | + - amulet |
2191 | + - python-amulet |
2192 | + - python-cinderclient |
2193 | + - python-distro-info |
2194 | + - python-glanceclient |
2195 | + - python-heatclient |
2196 | + - python-keystoneclient |
2197 | + - python-novaclient |
2198 | + - python-swiftclient |
uosci-testing-bot (uosci-testing-bot) wrote : | #
charm_unit_test #5115 ceph-osd-next for 1chb1n mp262249
UNIT OK: passed
Build: http://10.245.162.77:8080/job/charm_unit_test/5115/