Merge lp:~1chb1n/charms/trusty/ceph/next-amulet-update into lp:~openstack-charmers-archive/charms/trusty/ceph/next

Proposed by Ryan Beisner on 2015-06-15
Status: Merged
Merged at revision: 107
Proposed branch: lp:~1chb1n/charms/trusty/ceph/next-amulet-update
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph/next
Diff against target: 2199 lines (+1327/-214)
14 files modified
Makefile (+6/-7)
hooks/charmhelpers/core/hookenv.py (+231/-38)
hooks/charmhelpers/core/host.py (+25/-7)
hooks/charmhelpers/core/services/base.py (+43/-19)
hooks/charmhelpers/fetch/__init__.py (+1/-1)
hooks/charmhelpers/fetch/giturl.py (+7/-5)
metadata.yaml (+5/-2)
tests/00-setup (+6/-2)
tests/README (+24/-0)
tests/basic_deployment.py (+339/-68)
tests/charmhelpers/contrib/amulet/utils.py (+219/-9)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+42/-5)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+361/-51)
tests/tests.yaml (+18/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/ceph/next-amulet-update
Reviewer: Corey Bryant
Date requested: 2015-06-15
Status: Approve (2015-07-02)
Review via email: mp+262016@code.launchpad.net

Commit Message

Update amulet tests:
Remove unsupported release logic
Add nova, cinder and glance rbd config inspection
Enable Vivid tests, prep for Wily
Add debug logging
Add osd pool inspection
Add functional tests for ceph-backed cinder and glance
Add basic cli functional checks
Sync tests/charmhelpers

This MP depends on the charm-helpers updates in:
https://code.launchpad.net/~1chb1n/charm-helpers/amulet-ceph-cinder-updates/+merge/262013

Description of the Change

Update amulet tests:
Remove unsupported release logic
Add nova, cinder and glance rbd config inspection
Enable Vivid tests, prep for Wily
Add debug logging
Add osd pool inspection
Add functional tests for ceph-backed cinder and glance
Add basic cli functional checks
Sync tests/charmhelpers

This MP depends on the charm-helpers updates in:
https://code.launchpad.net/~1chb1n/charm-helpers/amulet-ceph-cinder-updates/+merge/262013
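
As an illustration of the "basic cli functional checks" item above: the new test_499_ceph_cmds_exit_zero test (see the preview diff below) simply runs a list of ceph CLI commands on every ceph unit and fails if any command exits non-zero. The following minimal sketch shows that pattern outside a deployment; FakeSentry is a hypothetical stand-in for an amulet sentry unit, and check_commands_on_units mirrors the helper of the same name synced into tests/charmhelpers.

    #!/usr/bin/python
    # Minimal sketch of the exit-code check pattern used by
    # test_499_ceph_cmds_exit_zero. FakeSentry stands in for an amulet
    # sentry unit; in a real test the sentries come from the deployment.
    import subprocess


    class FakeSentry(object):
        """Stand-in for an amulet sentry: run() returns (output, exit_code)."""

        def __init__(self, name):
            self.info = {'unit_name': name}

        def run(self, cmd):
            proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT)
            output, _ = proc.communicate()
            return output.strip(), proc.returncode


    def check_commands_on_units(commands, sentry_units):
        """Return None if every command exits zero on every unit,
        otherwise a failure message (mirrors the synced charmhelpers helper)."""
        for unit in sentry_units:
            for cmd in commands:
                output, code = unit.run(cmd)
                if code != 0:
                    return '{} `{}` returned {} {}'.format(
                        unit.info['unit_name'], cmd, code, output)
        return None


    if __name__ == '__main__':
        # The real test runs commands such as 'sudo ceph health' on ceph
        # units; 'true' keeps this sketch runnable on any machine.
        units = [FakeSentry('ceph/0'), FakeSentry('ceph/1'), FakeSentry('ceph/2')]
        ret = check_commands_on_units(['true'], units)
        print('FAIL: {}'.format(ret) if ret else 'OK: all commands exited zero')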


charm_unit_test #5114 ceph-next for 1chb1n mp262016
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5114/

charm_lint_check #5482 ceph-next for 1chb1n mp262016
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5482/

charm_amulet_test #4690 ceph-next for 1chb1n mp262016
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4690/

charm_amulet_test #4738 ceph-next for 1chb1n mp262016
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4738/

charm_unit_test #5170 ceph-next for 1chb1n mp262016
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5170/

charm_lint_check #5538 ceph-next for 1chb1n mp262016
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5538/

charm_amulet_test #4749 ceph-next for 1chb1n mp262016
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4749/

charm_unit_test #5173 ceph-next for 1chb1n mp262016
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5173/

charm_lint_check #5541 ceph-next for 1chb1n mp262016
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5541/

charm_amulet_test #4752 ceph-next for 1chb1n mp262016
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11765315/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4752/

Ryan Beisner (1chb1n) wrote:

FYI, an undercloud issue caused the test failure in #4752.

charm_amulet_test #4779 ceph-next for 1chb1n mp262016
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4779/

Ryan Beisner (1chb1n) wrote:

Flipped back to WIP while the tests/charmhelpers work is in progress. Everything else here is ready for review and input.

119. By Ryan Beisner on 2015-06-29

resync hooks/charmhelpers

120. By Ryan Beisner on 2015-06-29

resync tests/charmhelpers

charm_unit_test #5304 ceph-next for 1chb1n mp262016
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5304/

charm_lint_check #5672 ceph-next for 1chb1n mp262016
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5672/

charm_amulet_test #4855 ceph-next for 1chb1n mp262016
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4855/

121. By Ryan Beisner on 2015-06-29

Update publish target in makefile; update 00-setup and tests.yaml for dependencies.

charm_unit_test #5316 ceph-next for 1chb1n mp262016
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5316/

charm_lint_check #5684 ceph-next for 1chb1n mp262016
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5684/

charm_amulet_test #4867 ceph-next for 1chb1n mp262016
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11794889/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4867/

122. By Ryan Beisner on 2015-06-29

fix 00-setup

charm_unit_test #5320 ceph-next for 1chb1n mp262016
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5320/

charm_lint_check #5688 ceph-next for 1chb1n mp262016
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5688/

Corey Bryant (corey.bryant) wrote:

One comment inline below.

Ryan Beisner (1chb1n) wrote:

Thanks! Reply inline.

123. By Ryan Beisner on 2015-06-29

update test

Corey Bryant (corey.bryant) wrote:

Looks good. I'll approve once the corresponding charm-helpers branch lands and these amulet tests pass.

charm_lint_check #5693 ceph-next for 1chb1n mp262016
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5693/

charm_unit_test #5325 ceph-next for 1chb1n mp262016
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5325/

charm_amulet_test #4871 ceph-next for 1chb1n mp262016
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
Timeout occurred (2700s), printing juju status...environment: osci-sv07
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11795762/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4871/

charm_amulet_test #4876 ceph-next for 1chb1n mp262016
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11796039/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4876/

charm_amulet_test #4882 ceph-next for 1chb1n mp262016
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11797191/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4882/

Ryan Beisner (1chb1n) wrote:

A test-rig issue is causing bootstrap failures; will re-test once it's resolved.

124. By Ryan Beisner on 2015-07-01

update tags for consistency with other openstack charms

charm_unit_test #5330 ceph-next for 1chb1n mp262016
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5330/

charm_lint_check #5698 ceph-next for 1chb1n mp262016
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5698/

charm_amulet_test #4886 ceph-next for 1chb1n mp262016
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4886/

review: Approve

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2015-04-16 21:32:00 +0000
3+++ Makefile 2015-07-01 14:47:51 +0000
4@@ -2,18 +2,17 @@
5 PYTHON := /usr/bin/env python
6
7 lint:
8- @flake8 --exclude hooks/charmhelpers hooks tests unit_tests
9+ @flake8 --exclude hooks/charmhelpers,tests/charmhelpers \
10+ hooks tests unit_tests
11 @charm proof
12
13-unit_test:
14+test:
15+ @# Bundletester expects unit tests here.
16 @echo Starting unit tests...
17 @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
18
19-test:
20+functional_test:
21 @echo Starting Amulet tests...
22- # coreycb note: The -v should only be temporary until Amulet sends
23- # raise_status() messages to stderr:
24- # https://bugs.launchpad.net/amulet/+bug/1320357
25 @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700
26
27 bin/charm_helpers_sync.py:
28@@ -25,6 +24,6 @@
29 $(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
30 $(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
31
32-publish: lint
33+publish: lint test
34 bzr push lp:charms/ceph
35 bzr push lp:charms/trusty/ceph
36
37=== modified file 'hooks/charmhelpers/core/hookenv.py'
38--- hooks/charmhelpers/core/hookenv.py 2015-04-16 10:27:24 +0000
39+++ hooks/charmhelpers/core/hookenv.py 2015-07-01 14:47:51 +0000
40@@ -21,12 +21,16 @@
41 # Charm Helpers Developers <juju@lists.ubuntu.com>
42
43 from __future__ import print_function
44+from distutils.version import LooseVersion
45+from functools import wraps
46+import glob
47 import os
48 import json
49 import yaml
50 import subprocess
51 import sys
52 import errno
53+import tempfile
54 from subprocess import CalledProcessError
55
56 import six
57@@ -58,15 +62,17 @@
58
59 will cache the result of unit_get + 'test' for future calls.
60 """
61+ @wraps(func)
62 def wrapper(*args, **kwargs):
63 global cache
64 key = str((func, args, kwargs))
65 try:
66 return cache[key]
67 except KeyError:
68- res = func(*args, **kwargs)
69- cache[key] = res
70- return res
71+ pass # Drop out of the exception handler scope.
72+ res = func(*args, **kwargs)
73+ cache[key] = res
74+ return res
75 return wrapper
76
77
78@@ -178,7 +184,7 @@
79
80 def remote_unit():
81 """The remote unit for the current relation hook"""
82- return os.environ['JUJU_REMOTE_UNIT']
83+ return os.environ.get('JUJU_REMOTE_UNIT', None)
84
85
86 def service_name():
87@@ -238,23 +244,7 @@
88 self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
89 if os.path.exists(self.path):
90 self.load_previous()
91-
92- def __getitem__(self, key):
93- """For regular dict lookups, check the current juju config first,
94- then the previous (saved) copy. This ensures that user-saved values
95- will be returned by a dict lookup.
96-
97- """
98- try:
99- return dict.__getitem__(self, key)
100- except KeyError:
101- return (self._prev_dict or {})[key]
102-
103- def keys(self):
104- prev_keys = []
105- if self._prev_dict is not None:
106- prev_keys = self._prev_dict.keys()
107- return list(set(prev_keys + list(dict.keys(self))))
108+ atexit(self._implicit_save)
109
110 def load_previous(self, path=None):
111 """Load previous copy of config from disk.
112@@ -273,6 +263,9 @@
113 self.path = path or self.path
114 with open(self.path) as f:
115 self._prev_dict = json.load(f)
116+ for k, v in self._prev_dict.items():
117+ if k not in self:
118+ self[k] = v
119
120 def changed(self, key):
121 """Return True if the current value for this key is different from
122@@ -304,13 +297,13 @@
123 instance.
124
125 """
126- if self._prev_dict:
127- for k, v in six.iteritems(self._prev_dict):
128- if k not in self:
129- self[k] = v
130 with open(self.path, 'w') as f:
131 json.dump(self, f)
132
133+ def _implicit_save(self):
134+ if self.implicit_save:
135+ self.save()
136+
137
138 @cached
139 def config(scope=None):
140@@ -353,18 +346,49 @@
141 """Set relation information for the current unit"""
142 relation_settings = relation_settings if relation_settings else {}
143 relation_cmd_line = ['relation-set']
144+ accepts_file = "--file" in subprocess.check_output(
145+ relation_cmd_line + ["--help"], universal_newlines=True)
146 if relation_id is not None:
147 relation_cmd_line.extend(('-r', relation_id))
148- for k, v in (list(relation_settings.items()) + list(kwargs.items())):
149- if v is None:
150- relation_cmd_line.append('{}='.format(k))
151- else:
152- relation_cmd_line.append('{}={}'.format(k, v))
153- subprocess.check_call(relation_cmd_line)
154+ settings = relation_settings.copy()
155+ settings.update(kwargs)
156+ for key, value in settings.items():
157+ # Force value to be a string: it always should, but some call
158+ # sites pass in things like dicts or numbers.
159+ if value is not None:
160+ settings[key] = "{}".format(value)
161+ if accepts_file:
162+ # --file was introduced in Juju 1.23.2. Use it by default if
163+ # available, since otherwise we'll break if the relation data is
164+ # too big. Ideally we should tell relation-set to read the data from
165+ # stdin, but that feature is broken in 1.23.2: Bug #1454678.
166+ with tempfile.NamedTemporaryFile(delete=False) as settings_file:
167+ settings_file.write(yaml.safe_dump(settings).encode("utf-8"))
168+ subprocess.check_call(
169+ relation_cmd_line + ["--file", settings_file.name])
170+ os.remove(settings_file.name)
171+ else:
172+ for key, value in settings.items():
173+ if value is None:
174+ relation_cmd_line.append('{}='.format(key))
175+ else:
176+ relation_cmd_line.append('{}={}'.format(key, value))
177+ subprocess.check_call(relation_cmd_line)
178 # Flush cache of any relation-gets for local unit
179 flush(local_unit())
180
181
182+def relation_clear(r_id=None):
183+ ''' Clears any relation data already set on relation r_id '''
184+ settings = relation_get(rid=r_id,
185+ unit=local_unit())
186+ for setting in settings:
187+ if setting not in ['public-address', 'private-address']:
188+ settings[setting] = None
189+ relation_set(relation_id=r_id,
190+ **settings)
191+
192+
193 @cached
194 def relation_ids(reltype=None):
195 """A list of relation_ids"""
196@@ -509,6 +533,11 @@
197 return None
198
199
200+def unit_public_ip():
201+ """Get this unit's public IP address"""
202+ return unit_get('public-address')
203+
204+
205 def unit_private_ip():
206 """Get this unit's private IP address"""
207 return unit_get('private-address')
208@@ -541,10 +570,14 @@
209 hooks.execute(sys.argv)
210 """
211
212- def __init__(self, config_save=True):
213+ def __init__(self, config_save=None):
214 super(Hooks, self).__init__()
215 self._hooks = {}
216- self._config_save = config_save
217+
218+ # For unknown reasons, we allow the Hooks constructor to override
219+ # config().implicit_save.
220+ if config_save is not None:
221+ config().implicit_save = config_save
222
223 def register(self, name, function):
224 """Register a hook"""
225@@ -552,13 +585,16 @@
226
227 def execute(self, args):
228 """Execute a registered hook based on args[0]"""
229+ _run_atstart()
230 hook_name = os.path.basename(args[0])
231 if hook_name in self._hooks:
232- self._hooks[hook_name]()
233- if self._config_save:
234- cfg = config()
235- if cfg.implicit_save:
236- cfg.save()
237+ try:
238+ self._hooks[hook_name]()
239+ except SystemExit as x:
240+ if x.code is None or x.code == 0:
241+ _run_atexit()
242+ raise
243+ _run_atexit()
244 else:
245 raise UnregisteredHookError(hook_name)
246
247@@ -605,3 +641,160 @@
248
249 The results set by action_set are preserved."""
250 subprocess.check_call(['action-fail', message])
251+
252+
253+def status_set(workload_state, message):
254+ """Set the workload state with a message
255+
256+ Use status-set to set the workload state with a message which is visible
257+ to the user via juju status. If the status-set command is not found then
258+ assume this is juju < 1.23 and juju-log the message unstead.
259+
260+ workload_state -- valid juju workload state.
261+ message -- status update message
262+ """
263+ valid_states = ['maintenance', 'blocked', 'waiting', 'active']
264+ if workload_state not in valid_states:
265+ raise ValueError(
266+ '{!r} is not a valid workload state'.format(workload_state)
267+ )
268+ cmd = ['status-set', workload_state, message]
269+ try:
270+ ret = subprocess.call(cmd)
271+ if ret == 0:
272+ return
273+ except OSError as e:
274+ if e.errno != errno.ENOENT:
275+ raise
276+ log_message = 'status-set failed: {} {}'.format(workload_state,
277+ message)
278+ log(log_message, level='INFO')
279+
280+
281+def status_get():
282+ """Retrieve the previously set juju workload state
283+
284+ If the status-set command is not found then assume this is juju < 1.23 and
285+ return 'unknown'
286+ """
287+ cmd = ['status-get']
288+ try:
289+ raw_status = subprocess.check_output(cmd, universal_newlines=True)
290+ status = raw_status.rstrip()
291+ return status
292+ except OSError as e:
293+ if e.errno == errno.ENOENT:
294+ return 'unknown'
295+ else:
296+ raise
297+
298+
299+def translate_exc(from_exc, to_exc):
300+ def inner_translate_exc1(f):
301+ def inner_translate_exc2(*args, **kwargs):
302+ try:
303+ return f(*args, **kwargs)
304+ except from_exc:
305+ raise to_exc
306+
307+ return inner_translate_exc2
308+
309+ return inner_translate_exc1
310+
311+
312+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
313+def is_leader():
314+ """Does the current unit hold the juju leadership
315+
316+ Uses juju to determine whether the current unit is the leader of its peers
317+ """
318+ cmd = ['is-leader', '--format=json']
319+ return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
320+
321+
322+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
323+def leader_get(attribute=None):
324+ """Juju leader get value(s)"""
325+ cmd = ['leader-get', '--format=json'] + [attribute or '-']
326+ return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
327+
328+
329+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
330+def leader_set(settings=None, **kwargs):
331+ """Juju leader set value(s)"""
332+ # Don't log secrets.
333+ # log("Juju leader-set '%s'" % (settings), level=DEBUG)
334+ cmd = ['leader-set']
335+ settings = settings or {}
336+ settings.update(kwargs)
337+ for k, v in settings.items():
338+ if v is None:
339+ cmd.append('{}='.format(k))
340+ else:
341+ cmd.append('{}={}'.format(k, v))
342+ subprocess.check_call(cmd)
343+
344+
345+@cached
346+def juju_version():
347+ """Full version string (eg. '1.23.3.1-trusty-amd64')"""
348+ # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1
349+ jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0]
350+ return subprocess.check_output([jujud, 'version'],
351+ universal_newlines=True).strip()
352+
353+
354+@cached
355+def has_juju_version(minimum_version):
356+ """Return True if the Juju version is at least the provided version"""
357+ return LooseVersion(juju_version()) >= LooseVersion(minimum_version)
358+
359+
360+_atexit = []
361+_atstart = []
362+
363+
364+def atstart(callback, *args, **kwargs):
365+ '''Schedule a callback to run before the main hook.
366+
367+ Callbacks are run in the order they were added.
368+
369+ This is useful for modules and classes to perform initialization
370+ and inject behavior. In particular:
371+ - Run common code before all of your hooks, such as logging
372+ the hook name or interesting relation data.
373+ - Defer object or module initialization that requires a hook
374+ context until we know there actually is a hook context,
375+ making testing easier.
376+ - Rather than requiring charm authors to include boilerplate to
377+ invoke your helper's behavior, have it run automatically if
378+ your object is instantiated or module imported.
379+
380+ This is not at all useful after your hook framework as been launched.
381+ '''
382+ global _atstart
383+ _atstart.append((callback, args, kwargs))
384+
385+
386+def atexit(callback, *args, **kwargs):
387+ '''Schedule a callback to run on successful hook completion.
388+
389+ Callbacks are run in the reverse order that they were added.'''
390+ _atexit.append((callback, args, kwargs))
391+
392+
393+def _run_atstart():
394+ '''Hook frameworks must invoke this before running the main hook body.'''
395+ global _atstart
396+ for callback, args, kwargs in _atstart:
397+ callback(*args, **kwargs)
398+ del _atstart[:]
399+
400+
401+def _run_atexit():
402+ '''Hook frameworks must invoke this after the main hook body has
403+ successfully completed. Do not invoke it if the hook fails.'''
404+ global _atexit
405+ for callback, args, kwargs in reversed(_atexit):
406+ callback(*args, **kwargs)
407+ del _atexit[:]
408
409=== modified file 'hooks/charmhelpers/core/host.py'
410--- hooks/charmhelpers/core/host.py 2015-04-16 10:27:24 +0000
411+++ hooks/charmhelpers/core/host.py 2015-07-01 14:47:51 +0000
412@@ -24,6 +24,7 @@
413 import os
414 import re
415 import pwd
416+import glob
417 import grp
418 import random
419 import string
420@@ -90,7 +91,7 @@
421 ['service', service_name, 'status'],
422 stderr=subprocess.STDOUT).decode('UTF-8')
423 except subprocess.CalledProcessError as e:
424- return 'unrecognized service' not in e.output
425+ return b'unrecognized service' not in e.output
426 else:
427 return True
428
429@@ -269,6 +270,21 @@
430 return None
431
432
433+def path_hash(path):
434+ """
435+ Generate a hash checksum of all files matching 'path'. Standard wildcards
436+ like '*' and '?' are supported, see documentation for the 'glob' module for
437+ more information.
438+
439+ :return: dict: A { filename: hash } dictionary for all matched files.
440+ Empty if none found.
441+ """
442+ return {
443+ filename: file_hash(filename)
444+ for filename in glob.iglob(path)
445+ }
446+
447+
448 def check_hash(path, checksum, hash_type='md5'):
449 """
450 Validate a file using a cryptographic checksum.
451@@ -296,23 +312,25 @@
452
453 @restart_on_change({
454 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
455+ '/etc/apache/sites-enabled/*': [ 'apache2' ]
456 })
457- def ceph_client_changed():
458+ def config_changed():
459 pass # your code here
460
461 In this example, the cinder-api and cinder-volume services
462 would be restarted if /etc/ceph/ceph.conf is changed by the
463- ceph_client_changed function.
464+ ceph_client_changed function. The apache2 service would be
465+ restarted if any file matching the pattern got changed, created
466+ or removed. Standard wildcards are supported, see documentation
467+ for the 'glob' module for more information.
468 """
469 def wrap(f):
470 def wrapped_f(*args, **kwargs):
471- checksums = {}
472- for path in restart_map:
473- checksums[path] = file_hash(path)
474+ checksums = {path: path_hash(path) for path in restart_map}
475 f(*args, **kwargs)
476 restarts = []
477 for path in restart_map:
478- if checksums[path] != file_hash(path):
479+ if path_hash(path) != checksums[path]:
480 restarts += restart_map[path]
481 services_list = list(OrderedDict.fromkeys(restarts))
482 if not stopstart:
483
484=== modified file 'hooks/charmhelpers/core/services/base.py'
485--- hooks/charmhelpers/core/services/base.py 2015-01-26 09:46:20 +0000
486+++ hooks/charmhelpers/core/services/base.py 2015-07-01 14:47:51 +0000
487@@ -15,9 +15,9 @@
488 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
489
490 import os
491-import re
492 import json
493-from collections import Iterable
494+from inspect import getargspec
495+from collections import Iterable, OrderedDict
496
497 from charmhelpers.core import host
498 from charmhelpers.core import hookenv
499@@ -119,7 +119,7 @@
500 """
501 self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
502 self._ready = None
503- self.services = {}
504+ self.services = OrderedDict()
505 for service in services or []:
506 service_name = service['service']
507 self.services[service_name] = service
508@@ -128,15 +128,18 @@
509 """
510 Handle the current hook by doing The Right Thing with the registered services.
511 """
512- hook_name = hookenv.hook_name()
513- if hook_name == 'stop':
514- self.stop_services()
515- else:
516- self.provide_data()
517- self.reconfigure_services()
518- cfg = hookenv.config()
519- if cfg.implicit_save:
520- cfg.save()
521+ hookenv._run_atstart()
522+ try:
523+ hook_name = hookenv.hook_name()
524+ if hook_name == 'stop':
525+ self.stop_services()
526+ else:
527+ self.reconfigure_services()
528+ self.provide_data()
529+ except SystemExit as x:
530+ if x.code is None or x.code == 0:
531+ hookenv._run_atexit()
532+ hookenv._run_atexit()
533
534 def provide_data(self):
535 """
536@@ -145,15 +148,36 @@
537 A provider must have a `name` attribute, which indicates which relation
538 to set data on, and a `provide_data()` method, which returns a dict of
539 data to set.
540+
541+ The `provide_data()` method can optionally accept two parameters:
542+
543+ * ``remote_service`` The name of the remote service that the data will
544+ be provided to. The `provide_data()` method will be called once
545+ for each connected service (not unit). This allows the method to
546+ tailor its data to the given service.
547+ * ``service_ready`` Whether or not the service definition had all of
548+ its requirements met, and thus the ``data_ready`` callbacks run.
549+
550+ Note that the ``provided_data`` methods are now called **after** the
551+ ``data_ready`` callbacks are run. This gives the ``data_ready`` callbacks
552+ a chance to generate any data necessary for the providing to the remote
553+ services.
554 """
555- hook_name = hookenv.hook_name()
556- for service in self.services.values():
557+ for service_name, service in self.services.items():
558+ service_ready = self.is_ready(service_name)
559 for provider in service.get('provided_data', []):
560- if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name):
561- data = provider.provide_data()
562- _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data
563- if _ready:
564- hookenv.relation_set(None, data)
565+ for relid in hookenv.relation_ids(provider.name):
566+ units = hookenv.related_units(relid)
567+ if not units:
568+ continue
569+ remote_service = units[0].split('/')[0]
570+ argspec = getargspec(provider.provide_data)
571+ if len(argspec.args) > 1:
572+ data = provider.provide_data(remote_service, service_ready)
573+ else:
574+ data = provider.provide_data()
575+ if data:
576+ hookenv.relation_set(relid, data)
577
578 def reconfigure_services(self, *service_names):
579 """
580
581=== modified file 'hooks/charmhelpers/fetch/__init__.py'
582--- hooks/charmhelpers/fetch/__init__.py 2015-01-26 09:46:20 +0000
583+++ hooks/charmhelpers/fetch/__init__.py 2015-07-01 14:47:51 +0000
584@@ -158,7 +158,7 @@
585
586 def apt_cache(in_memory=True):
587 """Build and return an apt cache"""
588- import apt_pkg
589+ from apt import apt_pkg
590 apt_pkg.init()
591 if in_memory:
592 apt_pkg.config.set("Dir::Cache::pkgcache", "")
593
594=== modified file 'hooks/charmhelpers/fetch/giturl.py'
595--- hooks/charmhelpers/fetch/giturl.py 2015-02-26 11:01:14 +0000
596+++ hooks/charmhelpers/fetch/giturl.py 2015-07-01 14:47:51 +0000
597@@ -45,14 +45,16 @@
598 else:
599 return True
600
601- def clone(self, source, dest, branch):
602+ def clone(self, source, dest, branch, depth=None):
603 if not self.can_handle(source):
604 raise UnhandledSource("Cannot handle {}".format(source))
605
606- repo = Repo.clone_from(source, dest)
607- repo.git.checkout(branch)
608+ if depth:
609+ Repo.clone_from(source, dest, branch=branch, depth=depth)
610+ else:
611+ Repo.clone_from(source, dest, branch=branch)
612
613- def install(self, source, branch="master", dest=None):
614+ def install(self, source, branch="master", dest=None, depth=None):
615 url_parts = self.parse_url(source)
616 branch_name = url_parts.path.strip("/").split("/")[-1]
617 if dest:
618@@ -63,7 +65,7 @@
619 if not os.path.exists(dest_dir):
620 mkdir(dest_dir, perms=0o755)
621 try:
622- self.clone(source, dest_dir, branch)
623+ self.clone(source, dest_dir, branch, depth)
624 except GitCommandError as e:
625 raise UnhandledSource(e.message)
626 except OSError as e:
627
628=== modified file 'metadata.yaml'
629--- metadata.yaml 2015-02-04 15:31:08 +0000
630+++ metadata.yaml 2015-07-01 14:47:51 +0000
631@@ -4,8 +4,11 @@
632 description: |
633 Ceph is a distributed storage and network file system designed to provide
634 excellent performance, reliability, and scalability.
635-categories:
636- - file-servers
637+tags:
638+ - openstack
639+ - storage
640+ - file-servers
641+ - misc
642 peers:
643 mon:
644 interface: ceph
645
646=== modified file 'tests/00-setup'
647--- tests/00-setup 2014-09-27 18:15:47 +0000
648+++ tests/00-setup 2015-07-01 14:47:51 +0000
649@@ -5,6 +5,10 @@
650 sudo add-apt-repository --yes ppa:juju/stable
651 sudo apt-get update --yes
652 sudo apt-get install --yes python-amulet \
653+ python-cinderclient \
654+ python-distro-info \
655+ python-glanceclient \
656+ python-heatclient \
657 python-keystoneclient \
658- python-glanceclient \
659- python-novaclient
660+ python-novaclient \
661+ python-swiftclient
662
663=== modified file 'tests/017-basic-trusty-kilo' (properties changed: -x to +x)
664=== modified file 'tests/019-basic-vivid-kilo' (properties changed: -x to +x)
665=== modified file 'tests/README'
666--- tests/README 2014-09-27 18:15:47 +0000
667+++ tests/README 2015-07-01 14:47:51 +0000
668@@ -1,6 +1,30 @@
669 This directory provides Amulet tests that focus on verification of ceph
670 deployments.
671
672+test_* methods are called in lexical sort order.
673+
674+Test name convention to ensure desired test order:
675+ 1xx service and endpoint checks
676+ 2xx relation checks
677+ 3xx config checks
678+ 4xx functional checks
679+ 9xx restarts and other final checks
680+
681+Common uses of ceph relations in bundle deployments:
682+ - [ nova-compute, ceph ]
683+ - [ glance, ceph ]
684+ - [ cinder, cinder-ceph ]
685+ - [ cinder-ceph, ceph ]
686+
687+More detailed relations of ceph service in a common deployment:
688+ relations:
689+ client:
690+ - cinder-ceph
691+ - glance
692+ - nova-compute
693+ mon:
694+ - ceph
695+
696 In order to run tests, you'll need charm-tools installed (in addition to
697 juju, of course):
698 sudo add-apt-repository ppa:juju/stable
699
700=== modified file 'tests/basic_deployment.py'
701--- tests/basic_deployment.py 2015-04-16 21:31:29 +0000
702+++ tests/basic_deployment.py 2015-07-01 14:47:51 +0000
703@@ -1,13 +1,14 @@
704 #!/usr/bin/python
705
706 import amulet
707+import time
708 from charmhelpers.contrib.openstack.amulet.deployment import (
709 OpenStackAmuletDeployment
710 )
711 from charmhelpers.contrib.openstack.amulet.utils import ( # noqa
712 OpenStackAmuletUtils,
713 DEBUG,
714- ERROR
715+ #ERROR
716 )
717
718 # Use DEBUG to turn on debug logging
719@@ -35,10 +36,12 @@
720 compatible with the local charm (e.g. stable or next).
721 """
722 this_service = {'name': 'ceph', 'units': 3}
723- other_services = [{'name': 'mysql'}, {'name': 'keystone'},
724+ other_services = [{'name': 'mysql'},
725+ {'name': 'keystone'},
726 {'name': 'rabbitmq-server'},
727 {'name': 'nova-compute'},
728- {'name': 'glance'}, {'name': 'cinder'}]
729+ {'name': 'glance'},
730+ {'name': 'cinder'}]
731 super(CephBasicDeployment, self)._add_services(this_service,
732 other_services)
733
734@@ -74,12 +77,9 @@
735 'fsid': '6547bd3e-1397-11e2-82e5-53567c8d32dc',
736 'monitor-secret': 'AQCXrnZQwI7KGBAAiPofmKEXKxu5bUzoYLVkbQ==',
737 'osd-reformat': 'yes',
738- 'ephemeral-unmount': '/mnt'
739+ 'ephemeral-unmount': '/mnt',
740+ 'osd-devices': '/dev/vdb /srv/ceph'
741 }
742- if self._get_openstack_release() >= self.precise_grizzly:
743- ceph_config['osd-devices'] = '/dev/vdb /srv/ceph'
744- else:
745- ceph_config['osd-devices'] = '/dev/vdb'
746
747 configs = {'keystone': keystone_config,
748 'mysql': mysql_config,
749@@ -88,27 +88,44 @@
750 super(CephBasicDeployment, self)._configure_services(configs)
751
752 def _initialize_tests(self):
753- """Perform final initialization before tests get run."""
754+ """Perform final initialization before tests get run."""
755 # Access the sentries for inspecting service units
756 self.mysql_sentry = self.d.sentry.unit['mysql/0']
757 self.keystone_sentry = self.d.sentry.unit['keystone/0']
758 self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
759- self.nova_compute_sentry = self.d.sentry.unit['nova-compute/0']
760+ self.nova_sentry = self.d.sentry.unit['nova-compute/0']
761 self.glance_sentry = self.d.sentry.unit['glance/0']
762 self.cinder_sentry = self.d.sentry.unit['cinder/0']
763 self.ceph0_sentry = self.d.sentry.unit['ceph/0']
764 self.ceph1_sentry = self.d.sentry.unit['ceph/1']
765 self.ceph2_sentry = self.d.sentry.unit['ceph/2']
766+ u.log.debug('openstack release val: {}'.format(
767+ self._get_openstack_release()))
768+ u.log.debug('openstack release str: {}'.format(
769+ self._get_openstack_release_string()))
770+
771+ # Let things settle a bit before moving forward
772+ time.sleep(30)
773
774 # Authenticate admin with keystone
775 self.keystone = u.authenticate_keystone_admin(self.keystone_sentry,
776 user='admin',
777 password='openstack',
778 tenant='admin')
779-
780+ # Authenticate admin with cinder endpoint
781+ self.cinder = u.authenticate_cinder_admin(self.keystone_sentry,
782+ username='admin',
783+ password='openstack',
784+ tenant='admin')
785 # Authenticate admin with glance endpoint
786 self.glance = u.authenticate_glance_admin(self.keystone)
787
788+ # Authenticate admin with nova endpoint
789+ self.nova = u.authenticate_nova_user(self.keystone,
790+ user='admin',
791+ password='openstack',
792+ tenant='admin')
793+
794 # Create a demo tenant/role/user
795 self.demo_tenant = 'demoTenant'
796 self.demo_role = 'demoRole'
797@@ -135,45 +152,64 @@
798 'password',
799 self.demo_tenant)
800
801- def _ceph_osd_id(self, index):
802- """Produce a shell command that will return a ceph-osd id."""
803- return "`initctl list | grep 'ceph-osd ' | awk 'NR=={} {{ print $2 }}' | grep -o '[0-9]*'`".format(index + 1) # noqa
804-
805- def test_services(self):
806+ def test_100_ceph_processes(self):
807+ """Verify that the expected service processes are running
808+ on each ceph unit."""
809+
810+ # Process name and quantity of processes to expect on each unit
811+ ceph_processes = {
812+ 'ceph-mon': 1,
813+ 'ceph-osd': 2
814+ }
815+
816+ # Units with process names and PID quantities expected
817+ expected_processes = {
818+ self.ceph0_sentry: ceph_processes,
819+ self.ceph1_sentry: ceph_processes,
820+ self.ceph2_sentry: ceph_processes
821+ }
822+
823+ actual_pids = u.get_unit_process_ids(expected_processes)
824+ ret = u.validate_unit_process_ids(expected_processes, actual_pids)
825+ if ret:
826+ amulet.raise_status(amulet.FAIL, msg=ret)
827+
828+ def test_102_services(self):
829 """Verify the expected services are running on the service units."""
830- ceph_services = ['status ceph-mon-all',
831- 'status ceph-mon id=`hostname`']
832- commands = {
833- self.mysql_sentry: ['status mysql'],
834- self.rabbitmq_sentry: ['sudo service rabbitmq-server status'],
835- self.nova_compute_sentry: ['status nova-compute'],
836- self.keystone_sentry: ['status keystone'],
837- self.glance_sentry: ['status glance-registry',
838- 'status glance-api'],
839- self.cinder_sentry: ['status cinder-api',
840- 'status cinder-scheduler',
841- 'status cinder-volume']
842+
843+ services = {
844+ self.mysql_sentry: ['mysql'],
845+ self.rabbitmq_sentry: ['rabbitmq-server'],
846+ self.nova_sentry: ['nova-compute'],
847+ self.keystone_sentry: ['keystone'],
848+ self.glance_sentry: ['glance-registry',
849+ 'glance-api'],
850+ self.cinder_sentry: ['cinder-api',
851+ 'cinder-scheduler',
852+ 'cinder-volume'],
853 }
854- if self._get_openstack_release() >= self.precise_grizzly:
855- ceph_osd0 = 'status ceph-osd id={}'.format(self._ceph_osd_id(0))
856- ceph_osd1 = 'status ceph-osd id={}'.format(self._ceph_osd_id(1))
857- ceph_services.extend([ceph_osd0, ceph_osd1, 'status ceph-osd-all'])
858- commands[self.ceph0_sentry] = ceph_services
859- commands[self.ceph1_sentry] = ceph_services
860- commands[self.ceph2_sentry] = ceph_services
861- else:
862- ceph_osd0 = 'status ceph-osd id={}'.format(self._ceph_osd_id(0))
863- ceph_services.append(ceph_osd0)
864- commands[self.ceph0_sentry] = ceph_services
865- commands[self.ceph1_sentry] = ceph_services
866- commands[self.ceph2_sentry] = ceph_services
867-
868- ret = u.validate_services(commands)
869+
870+ if self._get_openstack_release() < self.vivid_kilo:
871+ # For upstart systems only. Ceph services under systemd
872+ # are checked by process name instead.
873+ ceph_services = [
874+ 'ceph-mon-all',
875+ 'ceph-mon id=`hostname`',
876+ 'ceph-osd-all',
877+ 'ceph-osd id={}'.format(u.get_ceph_osd_id_cmd(0)),
878+ 'ceph-osd id={}'.format(u.get_ceph_osd_id_cmd(1))
879+ ]
880+ services[self.ceph0_sentry] = ceph_services
881+ services[self.ceph1_sentry] = ceph_services
882+ services[self.ceph2_sentry] = ceph_services
883+
884+ ret = u.validate_services_by_name(services)
885 if ret:
886 amulet.raise_status(amulet.FAIL, msg=ret)
887
888- def test_ceph_nova_client_relation(self):
889+ def test_200_ceph_nova_client_relation(self):
890 """Verify the ceph to nova ceph-client relation data."""
891+ u.log.debug('Checking ceph:nova-compute ceph relation data...')
892 unit = self.ceph0_sentry
893 relation = ['client', 'nova-compute:ceph']
894 expected = {
895@@ -187,9 +223,10 @@
896 message = u.relation_error('ceph to nova ceph-client', ret)
897 amulet.raise_status(amulet.FAIL, msg=message)
898
899- def test_nova_ceph_client_relation(self):
900- """Verify the nova to ceph ceph-client relation data."""
901- unit = self.nova_compute_sentry
902+ def test_201_nova_ceph_client_relation(self):
903+ """Verify the nova to ceph client relation data."""
904+ u.log.debug('Checking nova-compute:ceph ceph-client relation data...')
905+ unit = self.nova_sentry
906 relation = ['ceph', 'ceph:client']
907 expected = {
908 'private-address': u.valid_ip
909@@ -200,8 +237,9 @@
910 message = u.relation_error('nova to ceph ceph-client', ret)
911 amulet.raise_status(amulet.FAIL, msg=message)
912
913- def test_ceph_glance_client_relation(self):
914+ def test_202_ceph_glance_client_relation(self):
915 """Verify the ceph to glance ceph-client relation data."""
916+ u.log.debug('Checking ceph:glance client relation data...')
917 unit = self.ceph1_sentry
918 relation = ['client', 'glance:ceph']
919 expected = {
920@@ -215,8 +253,9 @@
921 message = u.relation_error('ceph to glance ceph-client', ret)
922 amulet.raise_status(amulet.FAIL, msg=message)
923
924- def test_glance_ceph_client_relation(self):
925- """Verify the glance to ceph ceph-client relation data."""
926+ def test_203_glance_ceph_client_relation(self):
927+ """Verify the glance to ceph client relation data."""
928+ u.log.debug('Checking glance:ceph client relation data...')
929 unit = self.glance_sentry
930 relation = ['ceph', 'ceph:client']
931 expected = {
932@@ -228,8 +267,9 @@
933 message = u.relation_error('glance to ceph ceph-client', ret)
934 amulet.raise_status(amulet.FAIL, msg=message)
935
936- def test_ceph_cinder_client_relation(self):
937+ def test_204_ceph_cinder_client_relation(self):
938 """Verify the ceph to cinder ceph-client relation data."""
939+ u.log.debug('Checking ceph:cinder ceph relation data...')
940 unit = self.ceph2_sentry
941 relation = ['client', 'cinder:ceph']
942 expected = {
943@@ -243,8 +283,9 @@
944 message = u.relation_error('ceph to cinder ceph-client', ret)
945 amulet.raise_status(amulet.FAIL, msg=message)
946
947- def test_cinder_ceph_client_relation(self):
948+ def test_205_cinder_ceph_client_relation(self):
949 """Verify the cinder to ceph ceph-client relation data."""
950+ u.log.debug('Checking cinder:ceph ceph relation data...')
951 unit = self.cinder_sentry
952 relation = ['ceph', 'ceph:client']
953 expected = {
954@@ -256,8 +297,9 @@
955 message = u.relation_error('cinder to ceph ceph-client', ret)
956 amulet.raise_status(amulet.FAIL, msg=message)
957
958- def test_ceph_config(self):
959+ def test_300_ceph_config(self):
960 """Verify the data in the ceph config file."""
961+ u.log.debug('Checking ceph config file data...')
962 unit = self.ceph0_sentry
963 conf = '/etc/ceph/ceph.conf'
964 expected = {
965@@ -267,7 +309,10 @@
966 'log to syslog': 'false',
967 'err to syslog': 'false',
968 'clog to syslog': 'false',
969- 'mon cluster log to syslog': 'false'
970+ 'mon cluster log to syslog': 'false',
971+ 'auth cluster required': 'none',
972+ 'auth service required': 'none',
973+ 'auth client required': 'none'
974 },
975 'mon': {
976 'keyring': '/var/lib/ceph/mon/$cluster-$id/keyring'
977@@ -281,12 +326,6 @@
978 'filestore xattr use omap': 'true'
979 },
980 }
981- if self._get_openstack_release() >= self.precise_grizzly:
982- expected['global']['auth cluster required'] = 'none'
983- expected['global']['auth service required'] = 'none'
984- expected['global']['auth client required'] = 'none'
985- else:
986- expected['global']['auth supported'] = 'none'
987
988 for section, pairs in expected.iteritems():
989 ret = u.validate_config_data(unit, conf, section, pairs)
990@@ -294,11 +333,243 @@
991 message = "ceph config error: {}".format(ret)
992 amulet.raise_status(amulet.FAIL, msg=message)
993
994- def test_restart_on_config_change(self):
995- """Verify the specified services are restarted on config change."""
996- # NOTE(coreycb): Test not implemented but should it be? ceph services
997- # aren't restarted by charm after config change. Should
998- # they be restarted?
999- if self._get_openstack_release() >= self.precise_essex:
1000- u.log.error("Test not implemented")
1001- return
1002+ def test_302_cinder_rbd_config(self):
1003+ """Verify the cinder config file data regarding ceph."""
1004+ u.log.debug('Checking cinder (rbd) config file data...')
1005+ unit = self.cinder_sentry
1006+ conf = '/etc/cinder/cinder.conf'
1007+ expected = {
1008+ 'DEFAULT': {
1009+ 'volume_driver': 'cinder.volume.drivers.rbd.RBDDriver'
1010+ }
1011+ }
1012+ for section, pairs in expected.iteritems():
1013+ ret = u.validate_config_data(unit, conf, section, pairs)
1014+ if ret:
1015+ message = "cinder (rbd) config error: {}".format(ret)
1016+ amulet.raise_status(amulet.FAIL, msg=message)
1017+
1018+ def test_304_glance_rbd_config(self):
1019+ """Verify the glance config file data regarding ceph."""
1020+ u.log.debug('Checking glance (rbd) config file data...')
1021+ unit = self.glance_sentry
1022+ conf = '/etc/glance/glance-api.conf'
1023+ config = {
1024+ 'default_store': 'rbd',
1025+ 'rbd_store_ceph_conf': '/etc/ceph/ceph.conf',
1026+ 'rbd_store_user': 'glance',
1027+ 'rbd_store_pool': 'glance',
1028+ 'rbd_store_chunk_size': '8'
1029+ }
1030+
1031+ if self._get_openstack_release() >= self.trusty_kilo:
1032+ # Kilo or later
1033+ config['stores'] = ('glance.store.filesystem.Store,'
1034+ 'glance.store.http.Store,'
1035+ 'glance.store.rbd.Store')
1036+ section = 'glance_store'
1037+ else:
1038+ # Juno or earlier
1039+ section = 'DEFAULT'
1040+
1041+ expected = {section: config}
1042+ for section, pairs in expected.iteritems():
1043+ ret = u.validate_config_data(unit, conf, section, pairs)
1044+ if ret:
1045+ message = "glance (rbd) config error: {}".format(ret)
1046+ amulet.raise_status(amulet.FAIL, msg=message)
1047+
1048+ def test_306_nova_rbd_config(self):
1049+ """Verify the nova config file data regarding ceph."""
1050+ u.log.debug('Checking nova (rbd) config file data...')
1051+ unit = self.nova_sentry
1052+ conf = '/etc/nova/nova.conf'
1053+ expected = {
1054+ 'libvirt': {
1055+ 'rbd_pool': 'nova',
1056+ 'rbd_user': 'nova-compute',
1057+ 'rbd_secret_uuid': u.not_null
1058+ }
1059+ }
1060+ for section, pairs in expected.iteritems():
1061+ ret = u.validate_config_data(unit, conf, section, pairs)
1062+ if ret:
1063+ message = "nova (rbd) config error: {}".format(ret)
1064+ amulet.raise_status(amulet.FAIL, msg=message)
1065+
1066+ def test_400_ceph_check_osd_pools(self):
1067+ """Check osd pools on all ceph units, expect them to be
1068+ identical, and expect specific pools to be present."""
1069+ u.log.debug('Checking pools on ceph units...')
1070+
1071+ expected_pools = self.get_ceph_expected_pools()
1072+ results = []
1073+ sentries = [
1074+ self.ceph0_sentry,
1075+ self.ceph1_sentry,
1076+ self.ceph2_sentry
1077+ ]
1078+
1079+ # Check for presence of expected pools on each unit
1080+ u.log.debug('Expected pools: {}'.format(expected_pools))
1081+ for sentry_unit in sentries:
1082+ pools = u.get_ceph_pools(sentry_unit)
1083+ results.append(pools)
1084+
1085+ for expected_pool in expected_pools:
1086+ if expected_pool not in pools:
1087+ msg = ('{} does not have pool: '
1088+ '{}'.format(sentry_unit.info['unit_name'],
1089+ expected_pool))
1090+ amulet.raise_status(amulet.FAIL, msg=msg)
1091+ u.log.debug('{} has (at least) the expected '
1092+ 'pools.'.format(sentry_unit.info['unit_name']))
1093+
1094+ # Check that all units returned the same pool name:id data
1095+ ret = u.validate_list_of_identical_dicts(results)
1096+ if ret:
1097+ u.log.debug('Pool list results: {}'.format(results))
1098+ msg = ('{}; Pool list results are not identical on all '
1099+ 'ceph units.'.format(ret))
1100+ amulet.raise_status(amulet.FAIL, msg=msg)
1101+ else:
1102+ u.log.debug('Pool list on all ceph units produced the '
1103+ 'same results (OK).')
1104+
1105+ def test_410_ceph_cinder_vol_create(self):
1106+ """Create and confirm a ceph-backed cinder volume, and inspect
1107+ ceph cinder pool object count as the volume is created
1108+ and deleted."""
1109+ sentry_unit = self.ceph0_sentry
1110+ obj_count_samples = []
1111+ pool_size_samples = []
1112+ pools = u.get_ceph_pools(self.ceph0_sentry)
1113+ cinder_pool = pools['cinder']
1114+
1115+ # Check ceph cinder pool object count, disk space usage and pool name
1116+ u.log.debug('Checking ceph cinder pool original samples...')
1117+ pool_name, obj_count, kb_used = u.get_ceph_pool_sample(sentry_unit,
1118+ cinder_pool)
1119+ obj_count_samples.append(obj_count)
1120+ pool_size_samples.append(kb_used)
1121+
1122+ expected = 'cinder'
1123+ if pool_name != expected:
1124+ msg = ('Ceph pool {} unexpected name (actual, expected): '
1125+ '{}. {}'.format(cinder_pool, pool_name, expected))
1126+ amulet.raise_status(amulet.FAIL, msg=msg)
1127+
1128+ # Create ceph-backed cinder volume
1129+ cinder_vol = u.create_cinder_volume(self.cinder)
1130+
1131+ # Re-check ceph cinder pool object count and disk usage
1132+ time.sleep(10)
1133+ u.log.debug('Checking ceph cinder pool samples after volume create...')
1134+ pool_name, obj_count, kb_used = u.get_ceph_pool_sample(sentry_unit,
1135+ cinder_pool)
1136+ obj_count_samples.append(obj_count)
1137+ pool_size_samples.append(kb_used)
1138+
1139+ # Delete ceph-backed cinder volume
1140+ u.delete_resource(self.cinder.volumes, cinder_vol, msg="cinder volume")
1141+
1142+ # Final check, ceph cinder pool object count and disk usage
1143+ time.sleep(10)
1144+ u.log.debug('Checking ceph cinder pool after volume delete...')
1145+ pool_name, obj_count, kb_used = u.get_ceph_pool_sample(sentry_unit,
1146+ cinder_pool)
1147+ obj_count_samples.append(obj_count)
1148+ pool_size_samples.append(kb_used)
1149+
1150+ # Validate ceph cinder pool object count samples over time
1151+ ret = u.validate_ceph_pool_samples(obj_count_samples,
1152+ "cinder pool object count")
1153+ if ret:
1154+ amulet.raise_status(amulet.FAIL, msg=ret)
1155+
1156+ # Validate ceph cinder pool disk space usage samples over time
1157+ ret = u.validate_ceph_pool_samples(pool_size_samples,
1158+ "cinder pool disk usage")
1159+ if ret:
1160+ amulet.raise_status(amulet.FAIL, msg=ret)
1161+
1162+ def test_412_ceph_glance_image_create_delete(self):
1163+ """Create and confirm a ceph-backed glance image, and inspect
1164+ ceph glance pool object count as the image is created
1165+ and deleted."""
1166+ sentry_unit = self.ceph0_sentry
1167+ obj_count_samples = []
1168+ pool_size_samples = []
1169+ pools = u.get_ceph_pools(self.ceph0_sentry)
1170+ glance_pool = pools['glance']
1171+
1172+ # Check ceph glance pool object count, disk space usage and pool name
1173+ u.log.debug('Checking ceph glance pool original samples...')
1174+ pool_name, obj_count, kb_used = u.get_ceph_pool_sample(sentry_unit,
1175+ glance_pool)
1176+ obj_count_samples.append(obj_count)
1177+ pool_size_samples.append(kb_used)
1178+
1179+ expected = 'glance'
1180+ if pool_name != expected:
1181+ msg = ('Ceph glance pool {} unexpected name (actual, '
1182+ 'expected): {}. {}'.format(glance_pool,
1183+ pool_name, expected))
1184+ amulet.raise_status(amulet.FAIL, msg=msg)
1185+
1186+ # Create ceph-backed glance image
1187+ glance_img = u.create_cirros_image(self.glance, "cirros-image-1")
1188+
1189+ # Re-check ceph glance pool object count and disk usage
1190+ time.sleep(10)
1191+ u.log.debug('Checking ceph glance pool samples after image create...')
1192+ pool_name, obj_count, kb_used = u.get_ceph_pool_sample(sentry_unit,
1193+ glance_pool)
1194+ obj_count_samples.append(obj_count)
1195+ pool_size_samples.append(kb_used)
1196+
1197+ # Delete ceph-backed glance image
1198+ u.delete_resource(self.glance.images,
1199+ glance_img, msg="glance image")
1200+
1201+ # Final check, ceph glance pool object count and disk usage
1202+ time.sleep(10)
1203+ u.log.debug('Checking ceph glance pool samples after image delete...')
1204+ pool_name, obj_count, kb_used = u.get_ceph_pool_sample(sentry_unit,
1205+ glance_pool)
1206+ obj_count_samples.append(obj_count)
1207+ pool_size_samples.append(kb_used)
1208+
1209+ # Validate ceph glance pool object count samples over time
1210+ ret = u.validate_ceph_pool_samples(obj_count_samples,
1211+ "glance pool object count")
1212+ if ret:
1213+ amulet.raise_status(amulet.FAIL, msg=ret)
1214+
1215+ # Validate ceph glance pool disk space usage samples over time
1216+ ret = u.validate_ceph_pool_samples(pool_size_samples,
1217+ "glance pool disk usage")
1218+ if ret:
1219+ amulet.raise_status(amulet.FAIL, msg=ret)
1220+
1221+ def test_499_ceph_cmds_exit_zero(self):
1222+ """Check basic functionality of ceph cli commands against
1223+ all ceph units."""
1224+ sentry_units = [
1225+ self.ceph0_sentry,
1226+ self.ceph1_sentry,
1227+ self.ceph2_sentry
1228+ ]
1229+ commands = [
1230+ 'sudo ceph health',
1231+ 'sudo ceph mds stat',
1232+ 'sudo ceph pg stat',
1233+ 'sudo ceph osd stat',
1234+ 'sudo ceph mon stat',
1235+ ]
1236+ ret = u.check_commands_on_units(commands, sentry_units)
1237+ if ret:
1238+ amulet.raise_status(amulet.FAIL, msg=ret)
1239+
1240+ # FYI: No restart check as ceph services do not restart
1241+ # when charm config changes, unless monitor count increases.
1242
1243=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
1244--- tests/charmhelpers/contrib/amulet/utils.py 2015-04-23 14:53:03 +0000
1245+++ tests/charmhelpers/contrib/amulet/utils.py 2015-07-01 14:47:51 +0000
1246@@ -14,14 +14,17 @@
1247 # You should have received a copy of the GNU Lesser General Public License
1248 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1249
1250+import amulet
1251 import ConfigParser
1252+import distro_info
1253 import io
1254 import logging
1255+import os
1256 import re
1257+import six
1258 import sys
1259 import time
1260-
1261-import six
1262+import urlparse
1263
1264
1265 class AmuletUtils(object):
1266@@ -33,6 +36,7 @@
1267
1268 def __init__(self, log_level=logging.ERROR):
1269 self.log = self.get_logger(level=log_level)
1270+ self.ubuntu_releases = self.get_ubuntu_releases()
1271
1272 def get_logger(self, name="amulet-logger", level=logging.DEBUG):
1273 """Get a logger object that will log to stdout."""
1274@@ -70,12 +74,44 @@
1275 else:
1276 return False
1277
1278+ def get_ubuntu_release_from_sentry(self, sentry_unit):
1279+ """Get Ubuntu release codename from sentry unit.
1280+
1281+ :param sentry_unit: amulet sentry/service unit pointer
1282+ :returns: list of strings - release codename, failure message
1283+ """
1284+ msg = None
1285+ cmd = 'lsb_release -cs'
1286+ release, code = sentry_unit.run(cmd)
1287+ if code == 0:
1288+ self.log.debug('{} lsb_release: {}'.format(
1289+ sentry_unit.info['unit_name'], release))
1290+ else:
1291+ msg = ('{} `{}` returned {} '
1292+ '{}'.format(sentry_unit.info['unit_name'],
1293+ cmd, release, code))
1294+ if release not in self.ubuntu_releases:
1295+ msg = ("Release ({}) not found in Ubuntu releases "
1296+ "({})".format(release, self.ubuntu_releases))
1297+ return release, msg
1298+
1299 def validate_services(self, commands):
1300- """Validate services.
1301-
1302- Verify the specified services are running on the corresponding
1303+ """Validate that lists of commands succeed on service units. Can be
1304+ used to verify system services are running on the corresponding
1305 service units.
1306- """
1307+
1308+ :param commands: dict with sentry keys and arbitrary command list vals
1309+ :returns: None if successful, Failure string message otherwise
1310+ """
1311+ self.log.debug('Checking status of system services...')
1312+
1313+ # /!\ DEPRECATION WARNING (beisner):
1314+ # New and existing tests should be rewritten to use
1315+ # validate_services_by_name() as it is aware of init systems.
1316+ self.log.warn('/!\\ DEPRECATION WARNING: use '
1317+ 'validate_services_by_name instead of validate_services '
1318+ 'due to init system differences.')
1319+
1320 for k, v in six.iteritems(commands):
1321 for cmd in v:
1322 output, code = k.run(cmd)
1323@@ -86,6 +122,41 @@
1324 return "command `{}` returned {}".format(cmd, str(code))
1325 return None
1326
1327+ def validate_services_by_name(self, sentry_services):
1328+ """Validate system service status by service name, automatically
1329+ detecting init system based on Ubuntu release codename.
1330+
1331+ :param sentry_services: dict with sentry keys and svc list values
1332+ :returns: None if successful, Failure string message otherwise
1333+ """
1334+ self.log.debug('Checking status of system services...')
1335+
1336+ # Point at which systemd became a thing
1337+ systemd_switch = self.ubuntu_releases.index('vivid')
1338+
1339+ for sentry_unit, services_list in six.iteritems(sentry_services):
1340+ # Get lsb_release codename from unit
1341+ release, ret = self.get_ubuntu_release_from_sentry(sentry_unit)
1342+ if ret:
1343+ return ret
1344+
1345+ for service_name in services_list:
1346+ if (self.ubuntu_releases.index(release) >= systemd_switch or
1347+ service_name == "rabbitmq-server"):
1348+ # init is systemd
1349+ cmd = 'sudo service {} status'.format(service_name)
1350+ elif self.ubuntu_releases.index(release) < systemd_switch:
1351+ # init is upstart
1352+ cmd = 'sudo status {}'.format(service_name)
1353+
1354+ output, code = sentry_unit.run(cmd)
1355+ self.log.debug('{} `{}` returned '
1356+ '{}'.format(sentry_unit.info['unit_name'],
1357+ cmd, code))
1358+ if code != 0:
1359+ return "command `{}` returned {}".format(cmd, str(code))
1360+ return None
1361+
1362 def _get_config(self, unit, filename):
1363 """Get a ConfigParser object for parsing a unit's config file."""
1364 file_contents = unit.file_contents(filename)
1365@@ -103,7 +174,15 @@
1366
1367 Verify that the specified section of the config file contains
1368 the expected option key:value pairs.
1369+
1370+ Compare expected dictionary data vs actual dictionary data.
1371+ The values in the 'expected' dictionary can be strings, bools, ints,
1372+ longs, or can be a function that evaluates a variable and returns a
1373+ bool.
1374 """
1375+ self.log.debug('Validating config file data ({} in {} on {})'
1376+ '...'.format(section, config_file,
1377+ sentry_unit.info['unit_name']))
1378 config = self._get_config(sentry_unit, config_file)
1379
1380 if section != 'DEFAULT' and not config.has_section(section):
1381@@ -112,9 +191,20 @@
1382 for k in expected.keys():
1383 if not config.has_option(section, k):
1384 return "section [{}] is missing option {}".format(section, k)
1385- if config.get(section, k) != expected[k]:
1386+
1387+ actual = config.get(section, k)
1388+ v = expected[k]
1389+ if (isinstance(v, six.string_types) or
1390+ isinstance(v, bool) or
1391+ isinstance(v, six.integer_types)):
1392+ # handle explicit values
1393+ if actual != v:
1394+ return "section [{}] {}:{} != expected {}:{}".format(
1395+ section, k, actual, k, expected[k])
1396+ # handle function pointers, such as not_null or valid_ip
1397+ elif not v(actual):
1398 return "section [{}] {}:{} != expected {}:{}".format(
1399- section, k, config.get(section, k), k, expected[k])
1400+ section, k, actual, k, expected[k])
1401 return None
1402
1403 def _validate_dict_data(self, expected, actual):
1404@@ -122,7 +212,7 @@
1405
1406 Compare expected dictionary data vs actual dictionary data.
1407 The values in the 'expected' dictionary can be strings, bools, ints,
1408- longs, or can be a function that evaluate a variable and returns a
1409+ longs, or can be a function that evaluates a variable and returns a
1410 bool.
1411 """
1412 self.log.debug('actual: {}'.format(repr(actual)))
1413@@ -133,8 +223,10 @@
1414 if (isinstance(v, six.string_types) or
1415 isinstance(v, bool) or
1416 isinstance(v, six.integer_types)):
1417+ # handle explicit values
1418 if v != actual[k]:
1419 return "{}:{}".format(k, actual[k])
1420+ # handle function pointers, such as not_null or valid_ip
1421 elif not v(actual[k]):
1422 return "{}:{}".format(k, actual[k])
1423 else:
1424@@ -321,3 +413,121 @@
1425
1426 def endpoint_error(self, name, data):
1427 return 'unexpected endpoint data in {} - {}'.format(name, data)
1428+
1429+ def get_ubuntu_releases(self):
1430+ """Return a list of all Ubuntu releases in order of release."""
1431+ _d = distro_info.UbuntuDistroInfo()
1432+ _release_list = _d.all
1433+ self.log.debug('Ubuntu release list: {}'.format(_release_list))
1434+ return _release_list
1435+
1436+ def file_to_url(self, file_rel_path):
1437+ """Convert a relative file path to a file URL."""
1438+ _abs_path = os.path.abspath(file_rel_path)
1439+ return urlparse.urlparse(_abs_path, scheme='file').geturl()
1440+
1441+ def check_commands_on_units(self, commands, sentry_units):
1442+ """Check that all commands in a list exit zero on all
1443+ sentry units in a list.
1444+
1445+ :param commands: list of bash commands
1446+ :param sentry_units: list of sentry unit pointers
1447+ :returns: None if successful; Failure message otherwise
1448+ """
1449+ self.log.debug('Checking exit codes for {} commands on {} '
1450+ 'sentry units...'.format(len(commands),
1451+ len(sentry_units)))
1452+ for sentry_unit in sentry_units:
1453+ for cmd in commands:
1454+ output, code = sentry_unit.run(cmd)
1455+ if code == 0:
1456+ self.log.debug('{} `{}` returned {} '
1457+ '(OK)'.format(sentry_unit.info['unit_name'],
1458+ cmd, code))
1459+ else:
1460+ return ('{} `{}` returned {} '
1461+ '{}'.format(sentry_unit.info['unit_name'],
1462+ cmd, code, output))
1463+ return None
1464+
1465+ def get_process_id_list(self, sentry_unit, process_name):
1466+ """Get a list of process ID(s) from a single sentry juju unit
1467+ for a single process name.
1468+
1469+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
1470+ :param process_name: Process name
1471+ :returns: List of process IDs
1472+ """
1473+ cmd = 'pidof {}'.format(process_name)
1474+ output, code = sentry_unit.run(cmd)
1475+ if code != 0:
1476+ msg = ('{} `{}` returned {} '
1477+ '{}'.format(sentry_unit.info['unit_name'],
1478+ cmd, code, output))
1479+ amulet.raise_status(amulet.FAIL, msg=msg)
1480+ return str(output).split()
1481+
1482+ def get_unit_process_ids(self, unit_processes):
1483+ """Construct a dict containing unit sentries, process names, and
1484+ process IDs."""
1485+ pid_dict = {}
1486+ for sentry_unit, process_list in unit_processes.iteritems():
1487+ pid_dict[sentry_unit] = {}
1488+ for process in process_list:
1489+ pids = self.get_process_id_list(sentry_unit, process)
1490+ pid_dict[sentry_unit].update({process: pids})
1491+ return pid_dict
1492+
1493+ def validate_unit_process_ids(self, expected, actual):
1494+ """Validate process id quantities for services on units."""
1495+ self.log.debug('Checking units for running processes...')
1496+ self.log.debug('Expected PIDs: {}'.format(expected))
1497+ self.log.debug('Actual PIDs: {}'.format(actual))
1498+
1499+ if len(actual) != len(expected):
1500+ return ('Unit count mismatch. expected, actual: {}, '
1501+ '{} '.format(len(expected), len(actual)))
1502+
1503+ for (e_sentry, e_proc_names) in expected.iteritems():
1504+ e_sentry_name = e_sentry.info['unit_name']
1505+ if e_sentry in actual.keys():
1506+ a_proc_names = actual[e_sentry]
1507+ else:
1508+ return ('Expected sentry ({}) not found in actual dict data.'
1509+ '{}'.format(e_sentry_name, e_sentry))
1510+
1511+ if len(e_proc_names.keys()) != len(a_proc_names.keys()):
1512+ return ('Process name count mismatch. expected, actual: {}, '
1513+ '{}'.format(len(e_proc_names), len(a_proc_names)))
1514+
1515+ for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \
1516+ zip(e_proc_names.items(), a_proc_names.items()):
1517+ if e_proc_name != a_proc_name:
1518+ return ('Process name mismatch. expected, actual: {}, '
1519+ '{}'.format(e_proc_name, a_proc_name))
1520+
1521+ a_pids_length = len(a_pids)
1522+ if e_pids_length != a_pids_length:
1523+ return ('PID count mismatch. {} ({}) expected, actual: '
1524+ '{}, {} ({})'.format(e_sentry_name, e_proc_name,
1525+ e_pids_length, a_pids_length,
1526+ a_pids))
1527+ else:
1528+ self.log.debug('PID check OK: {} {} {}: '
1529+ '{}'.format(e_sentry_name, e_proc_name,
1530+ e_pids_length, a_pids))
1531+ return None
1532+
1533+ def validate_list_of_identical_dicts(self, list_of_dicts):
1534+ """Check that all dicts within a list are identical."""
1535+ hashes = []
1536+ for _dict in list_of_dicts:
1537+ hashes.append(hash(frozenset(_dict.items())))
1538+
1539+ self.log.debug('Hashes: {}'.format(hashes))
1540+ if len(set(hashes)) == 1:
1541+ self.log.debug('Dicts within list are identical')
1542+ else:
1543+ return 'Dicts within list are not identical'
1544+
1545+ return None
1546
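For reviewers, a minimal sketch (not part of this diff) of how a charm's amulet test could drive the new command and process-ID helpers above; the 'ceph/0' unit name and the PID counts are illustrative assumptions, not values taken from this MP:

    import logging
    from charmhelpers.contrib.amulet.utils import AmuletUtils

    u = AmuletUtils(logging.DEBUG)

    def check_ceph_processes(deployment):
        """Illustrative check that expected ceph daemons run on one unit."""
        sentry = deployment.d.sentry.unit['ceph/0']  # hypothetical unit name

        # Every command must exit zero on every sentry unit passed in.
        msg = u.check_commands_on_units(['sudo ceph health',
                                         'sudo ceph osd lspools'], [sentry])
        assert msg is None, msg

        # Expected PID count per process name vs. actual `pidof` output.
        expected = {sentry: {'ceph-mon': 1, 'ceph-osd': 2}}  # counts illustrative
        actual = u.get_unit_process_ids({sentry: ['ceph-mon', 'ceph-osd']})
        msg = u.validate_unit_process_ids(expected, actual)
        assert msg is None, msg
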
1547=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
1548--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-04-23 14:53:03 +0000
1549+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-01 14:47:51 +0000
1550@@ -79,9 +79,9 @@
1551 services.append(this_service)
1552 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
1553 'ceph-osd', 'ceph-radosgw']
1554- # Openstack subordinate charms do not expose an origin option as that
1555- # is controlled by the principle
1556- ignore = ['neutron-openvswitch']
1557+ # Most OpenStack subordinate charms do not expose an origin option
1558+ # as that is controlled by the principal charm.
1559+ ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']
1560
1561 if self.openstack:
1562 for svc in services:
1563@@ -110,7 +110,8 @@
1564 (self.precise_essex, self.precise_folsom, self.precise_grizzly,
1565 self.precise_havana, self.precise_icehouse,
1566 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
1567- self.trusty_kilo, self.vivid_kilo) = range(10)
1568+ self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
1569+ self.wily_liberty) = range(12)
1570
1571 releases = {
1572 ('precise', None): self.precise_essex,
1573@@ -121,8 +122,10 @@
1574 ('trusty', None): self.trusty_icehouse,
1575 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
1576 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
1577+ ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
1578 ('utopic', None): self.utopic_juno,
1579- ('vivid', None): self.vivid_kilo}
1580+ ('vivid', None): self.vivid_kilo,
1581+ ('wily', None): self.wily_liberty}
1582 return releases[(self.series, self.openstack)]
1583
1584 def _get_openstack_release_string(self):
1585@@ -138,9 +141,43 @@
1586 ('trusty', 'icehouse'),
1587 ('utopic', 'juno'),
1588 ('vivid', 'kilo'),
1589+ ('wily', 'liberty'),
1590 ])
1591 if self.openstack:
1592 os_origin = self.openstack.split(':')[1]
1593 return os_origin.split('%s-' % self.series)[1].split('/')[0]
1594 else:
1595 return releases[self.series]
1596+
1597+ def get_ceph_expected_pools(self, radosgw=False):
1598+ """Return a list of expected ceph pools in a ceph + cinder + glance
1599+ test scenario, based on OpenStack release and whether ceph radosgw
1600+ is flagged as present or not."""
1601+
1602+ if self._get_openstack_release() >= self.trusty_kilo:
1603+ # Kilo or later
1604+ pools = [
1605+ 'rbd',
1606+ 'cinder',
1607+ 'glance'
1608+ ]
1609+ else:
1610+ # Juno or earlier
1611+ pools = [
1612+ 'data',
1613+ 'metadata',
1614+ 'rbd',
1615+ 'cinder',
1616+ 'glance'
1617+ ]
1618+
1619+ if radosgw:
1620+ pools.extend([
1621+ '.rgw.root',
1622+ '.rgw.control',
1623+ '.rgw',
1624+ '.rgw.gc',
1625+ '.users.uid'
1626+ ])
1627+
1628+ return pools
1629
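As a usage note, a sketch (assumed, not from this branch) of how get_ceph_expected_pools() might be checked against the pools actually present on a ceph unit; the deployment object and unit name are placeholders:

    import amulet

    def check_expected_pools(deployment, u):
        """Verify every expected ceph pool exists on the first ceph unit.

        'deployment' is assumed to subclass OpenStackAmuletDeployment and
        'u' to be an OpenStackAmuletUtils instance.
        """
        sentry = deployment.d.sentry.unit['ceph/0']   # hypothetical unit name

        expected = deployment.get_ceph_expected_pools(radosgw=False)
        actual = u.get_ceph_pools(sentry)             # dict of pool name -> id

        missing = [pool for pool in expected if pool not in actual]
        if missing:
            amulet.raise_status(amulet.FAIL,
                                msg='Missing ceph pools: {}'.format(missing))
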
1630=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
1631--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-01-26 09:46:20 +0000
1632+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-01 14:47:51 +0000
1633@@ -14,16 +14,20 @@
1634 # You should have received a copy of the GNU Lesser General Public License
1635 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1636
1637+import amulet
1638+import json
1639 import logging
1640 import os
1641+import six
1642 import time
1643 import urllib
1644
1645+import cinderclient.v1.client as cinder_client
1646 import glanceclient.v1.client as glance_client
1647+import heatclient.v1.client as heat_client
1648 import keystoneclient.v2_0 as keystone_client
1649 import novaclient.v1_1.client as nova_client
1650-
1651-import six
1652+import swiftclient
1653
1654 from charmhelpers.contrib.amulet.utils import (
1655 AmuletUtils
1656@@ -37,7 +41,7 @@
1657 """OpenStack amulet utilities.
1658
1659 This class inherits from AmuletUtils and has additional support
1660- that is specifically for use by OpenStack charms.
1661+ that is specifically for use by OpenStack charm tests.
1662 """
1663
1664 def __init__(self, log_level=ERROR):
1665@@ -51,6 +55,8 @@
1666 Validate actual endpoint data vs expected endpoint data. The ports
1667 are used to find the matching endpoint.
1668 """
1669+ self.log.debug('Validating endpoint data...')
1670+ self.log.debug('actual: {}'.format(repr(endpoints)))
1671 found = False
1672 for ep in endpoints:
1673 self.log.debug('endpoint: {}'.format(repr(ep)))
1674@@ -77,6 +83,7 @@
1675 Validate a list of actual service catalog endpoints vs a list of
1676 expected service catalog endpoints.
1677 """
1678+ self.log.debug('Validating service catalog endpoint data...')
1679 self.log.debug('actual: {}'.format(repr(actual)))
1680 for k, v in six.iteritems(expected):
1681 if k in actual:
1682@@ -93,6 +100,7 @@
1683 Validate a list of actual tenant data vs list of expected tenant
1684 data.
1685 """
1686+ self.log.debug('Validating tenant data...')
1687 self.log.debug('actual: {}'.format(repr(actual)))
1688 for e in expected:
1689 found = False
1690@@ -114,6 +122,7 @@
1691 Validate a list of actual role data vs a list of expected role
1692 data.
1693 """
1694+ self.log.debug('Validating role data...')
1695 self.log.debug('actual: {}'.format(repr(actual)))
1696 for e in expected:
1697 found = False
1698@@ -134,6 +143,7 @@
1699 Validate a list of actual user data vs a list of expected user
1700 data.
1701 """
1702+ self.log.debug('Validating user data...')
1703 self.log.debug('actual: {}'.format(repr(actual)))
1704 for e in expected:
1705 found = False
1706@@ -155,17 +165,30 @@
1707
1708 Validate a list of actual flavors vs a list of expected flavors.
1709 """
1710+ self.log.debug('Validating flavor data...')
1711 self.log.debug('actual: {}'.format(repr(actual)))
1712 act = [a.name for a in actual]
1713 return self._validate_list_data(expected, act)
1714
1715 def tenant_exists(self, keystone, tenant):
1716 """Return True if tenant exists."""
1717+ self.log.debug('Checking if tenant exists ({})...'.format(tenant))
1718 return tenant in [t.name for t in keystone.tenants.list()]
1719
1720+ def authenticate_cinder_admin(self, keystone_sentry, username,
1721+ password, tenant):
1722+ """Authenticates admin user with cinder."""
1723+ # NOTE(beisner): cinder python client doesn't accept tokens.
1724+ service_ip = \
1725+ keystone_sentry.relation('shared-db',
1726+ 'mysql:shared-db')['private-address']
1727+ ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8'))
1728+ return cinder_client.Client(username, password, tenant, ept)
1729+
1730 def authenticate_keystone_admin(self, keystone_sentry, user, password,
1731 tenant):
1732 """Authenticates admin user with the keystone admin endpoint."""
1733+ self.log.debug('Authenticating keystone admin...')
1734 unit = keystone_sentry
1735 service_ip = unit.relation('shared-db',
1736 'mysql:shared-db')['private-address']
1737@@ -175,6 +198,7 @@
1738
1739 def authenticate_keystone_user(self, keystone, user, password, tenant):
1740 """Authenticates a regular user with the keystone public endpoint."""
1741+ self.log.debug('Authenticating keystone user ({})...'.format(user))
1742 ep = keystone.service_catalog.url_for(service_type='identity',
1743 endpoint_type='publicURL')
1744 return keystone_client.Client(username=user, password=password,
1745@@ -182,19 +206,49 @@
1746
1747 def authenticate_glance_admin(self, keystone):
1748 """Authenticates admin user with glance."""
1749+ self.log.debug('Authenticating glance admin...')
1750 ep = keystone.service_catalog.url_for(service_type='image',
1751 endpoint_type='adminURL')
1752 return glance_client.Client(ep, token=keystone.auth_token)
1753
1754+ def authenticate_heat_admin(self, keystone):
1755+ """Authenticates the admin user with heat."""
1756+ self.log.debug('Authenticating heat admin...')
1757+ ep = keystone.service_catalog.url_for(service_type='orchestration',
1758+ endpoint_type='publicURL')
1759+ return heat_client.Client(endpoint=ep, token=keystone.auth_token)
1760+
1761 def authenticate_nova_user(self, keystone, user, password, tenant):
1762 """Authenticates a regular user with nova-api."""
1763+ self.log.debug('Authenticating nova user ({})...'.format(user))
1764 ep = keystone.service_catalog.url_for(service_type='identity',
1765 endpoint_type='publicURL')
1766 return nova_client.Client(username=user, api_key=password,
1767 project_id=tenant, auth_url=ep)
1768
1769+ def authenticate_swift_user(self, keystone, user, password, tenant):
1770+ """Authenticates a regular user with swift api."""
1771+ self.log.debug('Authenticating swift user ({})...'.format(user))
1772+ ep = keystone.service_catalog.url_for(service_type='identity',
1773+ endpoint_type='publicURL')
1774+ return swiftclient.Connection(authurl=ep,
1775+ user=user,
1776+ key=password,
1777+ tenant_name=tenant,
1778+ auth_version='2.0')
1779+
1780 def create_cirros_image(self, glance, image_name):
1781- """Download the latest cirros image and upload it to glance."""
1782+ """Download the latest cirros image and upload it to glance,
1783+ validate and return a resource pointer.
1784+
1785+ :param glance: pointer to authenticated glance connection
1786+ :param image_name: display name for new image
1787+ :returns: glance image pointer
1788+ """
1789+ self.log.debug('Creating glance cirros image '
1790+ '({})...'.format(image_name))
1791+
1792+ # Download cirros image
1793 http_proxy = os.getenv('AMULET_HTTP_PROXY')
1794 self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
1795 if http_proxy:
1796@@ -203,57 +257,67 @@
1797 else:
1798 opener = urllib.FancyURLopener()
1799
1800- f = opener.open("http://download.cirros-cloud.net/version/released")
1801+ f = opener.open('http://download.cirros-cloud.net/version/released')
1802 version = f.read().strip()
1803- cirros_img = "cirros-{}-x86_64-disk.img".format(version)
1804+ cirros_img = 'cirros-{}-x86_64-disk.img'.format(version)
1805 local_path = os.path.join('tests', cirros_img)
1806
1807 if not os.path.exists(local_path):
1808- cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
1809+ cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net',
1810 version, cirros_img)
1811 opener.retrieve(cirros_url, local_path)
1812 f.close()
1813
1814+ # Create glance image
1815 with open(local_path) as f:
1816 image = glance.images.create(name=image_name, is_public=True,
1817 disk_format='qcow2',
1818 container_format='bare', data=f)
1819- count = 1
1820- status = image.status
1821- while status != 'active' and count < 10:
1822- time.sleep(3)
1823- image = glance.images.get(image.id)
1824- status = image.status
1825- self.log.debug('image status: {}'.format(status))
1826- count += 1
1827-
1828- if status != 'active':
1829- self.log.error('image creation timed out')
1830- return None
1831+
1832+ # Wait for image to reach active status
1833+ img_id = image.id
1834+ ret = self.resource_reaches_status(glance.images, img_id,
1835+ expected_stat='active',
1836+ msg='Image status wait')
1837+ if not ret:
1838+ msg = 'Glance image failed to reach expected state.'
1839+ amulet.raise_status(amulet.FAIL, msg=msg)
1840+
1841+ # Re-validate new image
1842+ self.log.debug('Validating image attributes...')
1843+ val_img_name = glance.images.get(img_id).name
1844+ val_img_stat = glance.images.get(img_id).status
1845+ val_img_pub = glance.images.get(img_id).is_public
1846+ val_img_cfmt = glance.images.get(img_id).container_format
1847+ val_img_dfmt = glance.images.get(img_id).disk_format
1848+ msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} '
1849+ 'container fmt:{} disk fmt:{}'.format(
1850+ val_img_name, val_img_pub, img_id,
1851+ val_img_stat, val_img_cfmt, val_img_dfmt))
1852+
1853+ if val_img_name == image_name and val_img_stat == 'active' \
1854+ and val_img_pub is True and val_img_cfmt == 'bare' \
1855+ and val_img_dfmt == 'qcow2':
1856+ self.log.debug(msg_attr)
1857+ else:
1858+ msg = ('Image validation failed, {}'.format(msg_attr))
1859+ amulet.raise_status(amulet.FAIL, msg=msg)
1860
1861 return image
1862
1863 def delete_image(self, glance, image):
1864 """Delete the specified image."""
1865- num_before = len(list(glance.images.list()))
1866- glance.images.delete(image)
1867-
1868- count = 1
1869- num_after = len(list(glance.images.list()))
1870- while num_after != (num_before - 1) and count < 10:
1871- time.sleep(3)
1872- num_after = len(list(glance.images.list()))
1873- self.log.debug('number of images: {}'.format(num_after))
1874- count += 1
1875-
1876- if num_after != (num_before - 1):
1877- self.log.error('image deletion timed out')
1878- return False
1879-
1880- return True
1881+
1882+ # /!\ DEPRECATION WARNING
1883+ self.log.warn('/!\\ DEPRECATION WARNING: use '
1884+ 'delete_resource instead of delete_image.')
1885+ self.log.debug('Deleting glance image ({})...'.format(image))
1886+ return self.delete_resource(glance.images, image, msg='glance image')
1887
1888 def create_instance(self, nova, image_name, instance_name, flavor):
1889 """Create the specified instance."""
1890+ self.log.debug('Creating instance '
1891+ '({}|{}|{})'.format(instance_name, image_name, flavor))
1892 image = nova.images.find(name=image_name)
1893 flavor = nova.flavors.find(name=flavor)
1894 instance = nova.servers.create(name=instance_name, image=image,
1895@@ -276,19 +340,265 @@
1896
1897 def delete_instance(self, nova, instance):
1898 """Delete the specified instance."""
1899- num_before = len(list(nova.servers.list()))
1900- nova.servers.delete(instance)
1901-
1902- count = 1
1903- num_after = len(list(nova.servers.list()))
1904- while num_after != (num_before - 1) and count < 10:
1905- time.sleep(3)
1906- num_after = len(list(nova.servers.list()))
1907- self.log.debug('number of instances: {}'.format(num_after))
1908- count += 1
1909-
1910- if num_after != (num_before - 1):
1911- self.log.error('instance deletion timed out')
1912- return False
1913-
1914- return True
1915+
1916+ # /!\ DEPRECATION WARNING
1917+ self.log.warn('/!\\ DEPRECATION WARNING: use '
1918+ 'delete_resource instead of delete_instance.')
1919+ self.log.debug('Deleting instance ({})...'.format(instance))
1920+ return self.delete_resource(nova.servers, instance,
1921+ msg='nova instance')
1922+
1923+ def create_or_get_keypair(self, nova, keypair_name="testkey"):
1924+ """Create a new keypair, or return pointer if it already exists."""
1925+ try:
1926+ _keypair = nova.keypairs.get(keypair_name)
1927+ self.log.debug('Keypair ({}) already exists, '
1928+ 'using it.'.format(keypair_name))
1929+ return _keypair
1930+ except:
1931+ self.log.debug('Keypair ({}) does not exist, '
1932+ 'creating it.'.format(keypair_name))
1933+
1934+ _keypair = nova.keypairs.create(name=keypair_name)
1935+ return _keypair
1936+
1937+ def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
1938+ img_id=None, src_vol_id=None, snap_id=None):
1939+ """Create cinder volume, optionally from a glance image, OR
1940+ optionally as a clone of an existing volume, OR optionally
1941+ from a snapshot. Wait for the new volume status to reach
1942+ the expected status, validate and return a resource pointer.
1943+
1944+ :param vol_name: cinder volume display name
1945+ :param vol_size: size in gigabytes
1946+ :param img_id: optional glance image id
1947+ :param src_vol_id: optional source volume id to clone
1948+ :param snap_id: optional snapshot id to use
1949+ :returns: cinder volume pointer
1950+ """
1951+ # Handle parameter input and avoid impossible combinations
1952+ if img_id and not src_vol_id and not snap_id:
1953+ # Create volume from image
1954+ self.log.debug('Creating cinder volume from glance image...')
1955+ bootable = 'true'
1956+ elif src_vol_id and not img_id and not snap_id:
1957+ # Clone an existing volume
1958+ self.log.debug('Cloning cinder volume...')
1959+ bootable = cinder.volumes.get(src_vol_id).bootable
1960+ elif snap_id and not src_vol_id and not img_id:
1961+ # Create volume from snapshot
1962+ self.log.debug('Creating cinder volume from snapshot...')
1963+ snap = cinder.volume_snapshots.find(id=snap_id)
1964+ vol_size = snap.size
1965+ snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
1966+ bootable = cinder.volumes.get(snap_vol_id).bootable
1967+ elif not img_id and not src_vol_id and not snap_id:
1968+ # Create volume
1969+ self.log.debug('Creating cinder volume...')
1970+ bootable = 'false'
1971+ else:
1972+ # Impossible combination of parameters
1973+ msg = ('Invalid method use - name:{} size:{} img_id:{} '
1974+ 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
1975+ img_id, src_vol_id,
1976+ snap_id))
1977+ amulet.raise_status(amulet.FAIL, msg=msg)
1978+
1979+ # Create new volume
1980+ try:
1981+ vol_new = cinder.volumes.create(display_name=vol_name,
1982+ imageRef=img_id,
1983+ size=vol_size,
1984+ source_volid=src_vol_id,
1985+ snapshot_id=snap_id)
1986+ vol_id = vol_new.id
1987+ except Exception as e:
1988+ msg = 'Failed to create volume: {}'.format(e)
1989+ amulet.raise_status(amulet.FAIL, msg=msg)
1990+
1991+ # Wait for volume to reach available status
1992+ ret = self.resource_reaches_status(cinder.volumes, vol_id,
1993+ expected_stat="available",
1994+ msg="Volume status wait")
1995+ if not ret:
1996+ msg = 'Cinder volume failed to reach expected state.'
1997+ amulet.raise_status(amulet.FAIL, msg=msg)
1998+
1999+ # Re-validate new volume
2000+ self.log.debug('Validating volume attributes...')
2001+ val_vol_name = cinder.volumes.get(vol_id).display_name
2002+ val_vol_boot = cinder.volumes.get(vol_id).bootable
2003+ val_vol_stat = cinder.volumes.get(vol_id).status
2004+ val_vol_size = cinder.volumes.get(vol_id).size
2005+ msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
2006+ '{} size:{}'.format(val_vol_name, vol_id,
2007+ val_vol_stat, val_vol_boot,
2008+ val_vol_size))
2009+
2010+ if val_vol_boot == bootable and val_vol_stat == 'available' \
2011+ and val_vol_name == vol_name and val_vol_size == vol_size:
2012+ self.log.debug(msg_attr)
2013+ else:
2014+ msg = ('Volume validation failed, {}'.format(msg_attr))
2015+ amulet.raise_status(amulet.FAIL, msg=msg)
2016+
2017+ return vol_new
2018+
2019+ def delete_resource(self, resource, resource_id,
2020+ msg="resource", max_wait=120):
2021+ """Delete one openstack resource, such as one instance, keypair,
2022+ image, volume, stack, etc., and confirm deletion within max wait time.
2023+
2024+ :param resource: pointer to os resource type, ex:glance_client.images
2025+ :param resource_id: unique name or id for the openstack resource
2026+ :param msg: text to identify purpose in logging
2027+ :param max_wait: maximum wait time in seconds
2028+ :returns: True if successful, otherwise False
2029+ """
2030+ self.log.debug('Deleting OpenStack resource '
2031+ '{} ({})'.format(resource_id, msg))
2032+ num_before = len(list(resource.list()))
2033+ resource.delete(resource_id)
2034+
2035+ tries = 0
2036+ num_after = len(list(resource.list()))
2037+ while num_after != (num_before - 1) and tries < (max_wait / 4):
2038+ self.log.debug('{} delete check: '
2039+ '{} [{}:{}] {}'.format(msg, tries,
2040+ num_before,
2041+ num_after,
2042+ resource_id))
2043+ time.sleep(4)
2044+ num_after = len(list(resource.list()))
2045+ tries += 1
2046+
2047+ self.log.debug('{}: expected, actual count = {}, '
2048+ '{}'.format(msg, num_before - 1, num_after))
2049+
2050+ if num_after == (num_before - 1):
2051+ return True
2052+ else:
2053+ self.log.error('{} delete timed out'.format(msg))
2054+ return False
2055+
2056+ def resource_reaches_status(self, resource, resource_id,
2057+ expected_stat='available',
2058+ msg='resource', max_wait=120):
2059+ """Wait for an openstack resources status to reach an
2060+ expected status within a specified time. Useful to confirm that
2061+ nova instances, cinder vols, snapshots, glance images, heat stacks
2062+ and other resources eventually reach the expected status.
2063+
2064+ :param resource: pointer to os resource type, ex: heat_client.stacks
2065+ :param resource_id: unique id for the openstack resource
2066+ :param expected_stat: status to expect resource to reach
2067+ :param msg: text to identify purpose in logging
2068+ :param max_wait: maximum wait time in seconds
2069+ :returns: True if successful, False if status is not reached
2070+ """
2071+
2072+ tries = 0
2073+ resource_stat = resource.get(resource_id).status
2074+ while resource_stat != expected_stat and tries < (max_wait / 4):
2075+ self.log.debug('{} status check: '
2076+ '{} [{}:{}] {}'.format(msg, tries,
2077+ resource_stat,
2078+ expected_stat,
2079+ resource_id))
2080+ time.sleep(4)
2081+ resource_stat = resource.get(resource_id).status
2082+ tries += 1
2083+
2084+ self.log.debug('{}: expected, actual status = {}, '
2085+ '{}'.format(msg, resource_stat, expected_stat))
2086+
2087+ if resource_stat == expected_stat:
2088+ return True
2089+ else:
2090+ self.log.debug('{} never reached expected status: '
2091+ '{}'.format(resource_id, expected_stat))
2092+ return False
2093+
2094+ def get_ceph_osd_id_cmd(self, index):
2095+ """Produce a shell command that will return a ceph-osd id."""
2096+ return ("`initctl list | grep 'ceph-osd ' | "
2097+ "awk 'NR=={} {{ print $2 }}' | "
2098+ "grep -o '[0-9]*'`".format(index + 1))
2099+
2100+ def get_ceph_pools(self, sentry_unit):
2101+ """Return a dict of ceph pools from a single ceph unit, with
2102+ pool name as keys, pool id as vals."""
2103+ pools = {}
2104+ cmd = 'sudo ceph osd lspools'
2105+ output, code = sentry_unit.run(cmd)
2106+ if code != 0:
2107+ msg = ('{} `{}` returned {} '
2108+ '{}'.format(sentry_unit.info['unit_name'],
2109+ cmd, code, output))
2110+ amulet.raise_status(amulet.FAIL, msg=msg)
2111+
2112+ # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
2113+ for pool in str(output).split(','):
2114+ pool_id_name = pool.split(' ')
2115+ if len(pool_id_name) == 2:
2116+ pool_id = pool_id_name[0]
2117+ pool_name = pool_id_name[1]
2118+ pools[pool_name] = int(pool_id)
2119+
2120+ self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
2121+ pools))
2122+ return pools
2123+
2124+ def get_ceph_df(self, sentry_unit):
2125+ """Return dict of ceph df json output, including ceph pool state.
2126+
2127+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
2128+ :returns: Dict of ceph df output
2129+ """
2130+ cmd = 'sudo ceph df --format=json'
2131+ output, code = sentry_unit.run(cmd)
2132+ if code != 0:
2133+ msg = ('{} `{}` returned {} '
2134+ '{}'.format(sentry_unit.info['unit_name'],
2135+ cmd, code, output))
2136+ amulet.raise_status(amulet.FAIL, msg=msg)
2137+ return json.loads(output)
2138+
2139+ def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
2140+ """Take a sample of attributes of a ceph pool, returning ceph
2141+ pool name, object count and disk space used for the specified
2142+ pool ID number.
2143+
2144+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
2145+ :param pool_id: Ceph pool ID
2146+ :returns: Tuple of pool name, object count, kb of disk space used
2147+ """
2148+ df = self.get_ceph_df(sentry_unit)
2149+ pool_name = df['pools'][pool_id]['name']
2150+ obj_count = df['pools'][pool_id]['stats']['objects']
2151+ kb_used = df['pools'][pool_id]['stats']['kb_used']
2152+ self.log.debug('Ceph {} pool (ID {}): {} objects, '
2153+ '{} kb used'.format(pool_name, pool_id,
2154+ obj_count, kb_used))
2155+ return pool_name, obj_count, kb_used
2156+
2157+ def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
2158+ """Validate ceph pool samples taken over time, such as pool
2159+ object counts or pool kb used, before adding, after adding, and
2160+ after deleting items which affect those pool attributes. The
2161+ 2nd element is expected to be greater than the 1st; 3rd is expected
2162+ to be less than the 2nd.
2163+
2164+ :param samples: List containing 3 data samples
2165+ :param sample_type: String for logging and usage context
2166+ :returns: None if successful, Failure message otherwise
2167+ """
2168+ original, created, deleted = range(3)
2169+ if samples[created] <= samples[original] or \
2170+ samples[deleted] >= samples[created]:
2171+ return ('Ceph {} samples ({}) '
2172+ 'unexpected.'.format(sample_type, samples))
2173+ else:
2174+ self.log.debug('Ceph {} samples (OK): '
2175+ '{}'.format(sample_type, samples))
2176+ return None
2177
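To show how the new sampling helpers fit together, here is an illustrative sketch (assuming an authenticated cinder client, a ceph unit sentry, and a pool named 'cinder') of the create/delete pool-usage check these utilities enable; the actual tests in basic_deployment.py may differ:

    import amulet

    def check_cinder_pool_usage(u, cinder, ceph_sentry):
        """Sample the 'cinder' ceph pool before, during and after a volume."""
        pool_id = u.get_ceph_pools(ceph_sentry)['cinder']

        samples = []
        pool, objs, kb = u.get_ceph_pool_sample(ceph_sentry, pool_id)
        samples.append(objs)

        # Creating the volume from a glance image (img_id=...) would ensure
        # that objects actually land in the pool; omitted here for brevity.
        vol = u.create_cinder_volume(cinder, vol_name='demo-vol', vol_size=1)
        pool, objs, kb = u.get_ceph_pool_sample(ceph_sentry, pool_id)
        samples.append(objs)

        u.delete_resource(cinder.volumes, vol.id, msg='cinder volume')
        pool, objs, kb = u.get_ceph_pool_sample(ceph_sentry, pool_id)
        samples.append(objs)

        ret = u.validate_ceph_pool_samples(samples, 'cinder pool object count')
        if ret:
            amulet.raise_status(amulet.FAIL, msg=ret)
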
2178=== added file 'tests/tests.yaml'
2179--- tests/tests.yaml 1970-01-01 00:00:00 +0000
2180+++ tests/tests.yaml 2015-07-01 14:47:51 +0000
2181@@ -0,0 +1,18 @@
2182+bootstrap: true
2183+reset: true
2184+virtualenv: true
2185+makefile:
2186+ - lint
2187+ - test
2188+sources:
2189+ - ppa:juju/stable
2190+packages:
2191+ - amulet
2192+ - python-amulet
2193+ - python-cinderclient
2194+ - python-distro-info
2195+ - python-glanceclient
2196+ - python-heatclient
2197+ - python-keystoneclient
2198+ - python-novaclient
2199+ - python-swiftclient
