Merge lp:~1chb1n/charms/trusty/glance/next-amulet-update into lp:~openstack-charmers-archive/charms/trusty/glance/next

Proposed by Ryan Beisner on 2015-06-30
Status: Merged
Merged at revision: 122
Proposed branch: lp:~1chb1n/charms/trusty/glance/next-amulet-update
Merge into: lp:~openstack-charmers-archive/charms/trusty/glance/next
Diff against target: 1784 lines (+929/-460)
13 files modified
Makefile (+10/-15)
hooks/charmhelpers/core/hookenv.py (+92/-36)
hooks/charmhelpers/core/services/base.py (+12/-9)
metadata.yaml (+4/-2)
tests/00-setup (+6/-2)
tests/020-basic-trusty-liberty (+11/-0)
tests/021-basic-wily-liberty (+9/-0)
tests/README (+9/-0)
tests/basic_deployment.py (+349/-340)
tests/charmhelpers/contrib/amulet/utils.py (+137/-4)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+35/-3)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+237/-49)
tests/tests.yaml (+18/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/glance/next-amulet-update
Reviewer: Corey Bryant (review requested 2015-06-30, status: Pending)
Review via email: mp+263413@code.launchpad.net

This proposal supersedes a proposal from 2015-06-30.

Description of the change

Update amulet tests for Kilo and prepare for Wily. Sync hooks/charmhelpers and tests/charmhelpers.


charm_lint_check #5714 glance-next for 1chb1n mp263413
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/11808187/
Build: http://10.245.162.77:8080/job/charm_lint_check/5714/

charm_unit_test #5346 glance-next for 1chb1n mp263413
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5346/

charm_lint_check #5716 glance-next for 1chb1n mp263413
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5716/

charm_unit_test #5348 glance-next for 1chb1n mp263413
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5348/

charm_amulet_test #4906 glance-next for 1chb1n mp263413
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11808398/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4906/

charm_amulet_test #4908 glance-next for 1chb1n mp263413
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11808501/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4908/

126. By Ryan Beisner on 2015-07-02

update tests for vivid-kilo

charm_lint_check #5718 glance-next for 1chb1n mp263413
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5718/

charm_unit_test #5350 glance-next for 1chb1n mp263413
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5350/

charm_amulet_test #4910 glance-next for 1chb1n mp263413
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4910/

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2015-04-16 21:32:02 +0000
3+++ Makefile 2015-07-02 12:52:15 +0000
4@@ -2,16 +2,18 @@
5 PYTHON := /usr/bin/env python
6
7 lint:
8- @echo "Running flake8 tests: "
9- @flake8 --exclude hooks/charmhelpers actions hooks unit_tests tests
10- @echo "OK"
11- @echo "Running charm proof: "
12+ @flake8 --exclude hooks/charmhelpers,tests/charmhelpers \
13+ actions hooks unit_tests tests
14 @charm proof
15- @echo "OK"
16
17-unit_test:
18+test:
19+ @# Bundletester expects unit tests here.
20 @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
21
22+functional_test:
23+ @echo Starting Amulet tests...
24+ @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700
25+
26 bin/charm_helpers_sync.py:
27 @mkdir -p bin
28 @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
29@@ -21,15 +23,8 @@
30 @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
31 @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
32
33-test:
34- @echo Starting Amulet tests...
35- # /!\ Note: The -v should only be temporary until Amulet sends
36- # raise_status() messages to stderr:
37- # https://bugs.launchpad.net/amulet/+bug/1320357
38- @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700
39-
40-publish: lint unit_test
41+publish: lint test
42 bzr push lp:charms/glance
43 bzr push lp:charms/trusty/glance
44
45-all: unit_test lint
46+all: test lint
47
48=== modified file 'hooks/charmhelpers/core/hookenv.py'
49--- hooks/charmhelpers/core/hookenv.py 2015-06-10 20:31:46 +0000
50+++ hooks/charmhelpers/core/hookenv.py 2015-07-02 12:52:15 +0000
51@@ -21,7 +21,9 @@
52 # Charm Helpers Developers <juju@lists.ubuntu.com>
53
54 from __future__ import print_function
55+from distutils.version import LooseVersion
56 from functools import wraps
57+import glob
58 import os
59 import json
60 import yaml
61@@ -242,29 +244,7 @@
62 self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
63 if os.path.exists(self.path):
64 self.load_previous()
65-
66- def __getitem__(self, key):
67- """For regular dict lookups, check the current juju config first,
68- then the previous (saved) copy. This ensures that user-saved values
69- will be returned by a dict lookup.
70-
71- """
72- try:
73- return dict.__getitem__(self, key)
74- except KeyError:
75- return (self._prev_dict or {})[key]
76-
77- def get(self, key, default=None):
78- try:
79- return self[key]
80- except KeyError:
81- return default
82-
83- def keys(self):
84- prev_keys = []
85- if self._prev_dict is not None:
86- prev_keys = self._prev_dict.keys()
87- return list(set(prev_keys + list(dict.keys(self))))
88+ atexit(self._implicit_save)
89
90 def load_previous(self, path=None):
91 """Load previous copy of config from disk.
92@@ -283,6 +263,9 @@
93 self.path = path or self.path
94 with open(self.path) as f:
95 self._prev_dict = json.load(f)
96+ for k, v in self._prev_dict.items():
97+ if k not in self:
98+ self[k] = v
99
100 def changed(self, key):
101 """Return True if the current value for this key is different from
102@@ -314,13 +297,13 @@
103 instance.
104
105 """
106- if self._prev_dict:
107- for k, v in six.iteritems(self._prev_dict):
108- if k not in self:
109- self[k] = v
110 with open(self.path, 'w') as f:
111 json.dump(self, f)
112
113+ def _implicit_save(self):
114+ if self.implicit_save:
115+ self.save()
116+
117
118 @cached
119 def config(scope=None):
120@@ -587,10 +570,14 @@
121 hooks.execute(sys.argv)
122 """
123
124- def __init__(self, config_save=True):
125+ def __init__(self, config_save=None):
126 super(Hooks, self).__init__()
127 self._hooks = {}
128- self._config_save = config_save
129+
130+ # For unknown reasons, we allow the Hooks constructor to override
131+ # config().implicit_save.
132+ if config_save is not None:
133+ config().implicit_save = config_save
134
135 def register(self, name, function):
136 """Register a hook"""
137@@ -598,13 +585,16 @@
138
139 def execute(self, args):
140 """Execute a registered hook based on args[0]"""
141+ _run_atstart()
142 hook_name = os.path.basename(args[0])
143 if hook_name in self._hooks:
144- self._hooks[hook_name]()
145- if self._config_save:
146- cfg = config()
147- if cfg.implicit_save:
148- cfg.save()
149+ try:
150+ self._hooks[hook_name]()
151+ except SystemExit as x:
152+ if x.code is None or x.code == 0:
153+ _run_atexit()
154+ raise
155+ _run_atexit()
156 else:
157 raise UnregisteredHookError(hook_name)
158
159@@ -732,13 +722,79 @@
160 @translate_exc(from_exc=OSError, to_exc=NotImplementedError)
161 def leader_set(settings=None, **kwargs):
162 """Juju leader set value(s)"""
163- log("Juju leader-set '%s'" % (settings), level=DEBUG)
164+ # Don't log secrets.
165+ # log("Juju leader-set '%s'" % (settings), level=DEBUG)
166 cmd = ['leader-set']
167 settings = settings or {}
168 settings.update(kwargs)
169- for k, v in settings.iteritems():
170+ for k, v in settings.items():
171 if v is None:
172 cmd.append('{}='.format(k))
173 else:
174 cmd.append('{}={}'.format(k, v))
175 subprocess.check_call(cmd)
176+
177+
178+@cached
179+def juju_version():
180+ """Full version string (eg. '1.23.3.1-trusty-amd64')"""
181+ # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1
182+ jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0]
183+ return subprocess.check_output([jujud, 'version'],
184+ universal_newlines=True).strip()
185+
186+
187+@cached
188+def has_juju_version(minimum_version):
189+ """Return True if the Juju version is at least the provided version"""
190+ return LooseVersion(juju_version()) >= LooseVersion(minimum_version)
191+
192+
193+_atexit = []
194+_atstart = []
195+
196+
197+def atstart(callback, *args, **kwargs):
198+ '''Schedule a callback to run before the main hook.
199+
200+ Callbacks are run in the order they were added.
201+
202+ This is useful for modules and classes to perform initialization
203+ and inject behavior. In particular:
204+ - Run common code before all of your hooks, such as logging
205+ the hook name or interesting relation data.
206+ - Defer object or module initialization that requires a hook
207+ context until we know there actually is a hook context,
208+ making testing easier.
209+ - Rather than requiring charm authors to include boilerplate to
210+ invoke your helper's behavior, have it run automatically if
211+ your object is instantiated or module imported.
212+
213+ This is not at all useful after your hook framework has been launched.
214+ '''
215+ global _atstart
216+ _atstart.append((callback, args, kwargs))
217+
218+
219+def atexit(callback, *args, **kwargs):
220+ '''Schedule a callback to run on successful hook completion.
221+
222+ Callbacks are run in the reverse order that they were added.'''
223+ _atexit.append((callback, args, kwargs))
224+
225+
226+def _run_atstart():
227+ '''Hook frameworks must invoke this before running the main hook body.'''
228+ global _atstart
229+ for callback, args, kwargs in _atstart:
230+ callback(*args, **kwargs)
231+ del _atstart[:]
232+
233+
234+def _run_atexit():
235+ '''Hook frameworks must invoke this after the main hook body has
236+ successfully completed. Do not invoke it if the hook fails.'''
237+ global _atexit
238+ for callback, args, kwargs in reversed(_atexit):
239+ callback(*args, **kwargs)
240+ del _atexit[:]
241
242=== modified file 'hooks/charmhelpers/core/services/base.py'
243--- hooks/charmhelpers/core/services/base.py 2015-06-10 20:31:46 +0000
244+++ hooks/charmhelpers/core/services/base.py 2015-07-02 12:52:15 +0000
245@@ -128,15 +128,18 @@
246 """
247 Handle the current hook by doing The Right Thing with the registered services.
248 """
249- hook_name = hookenv.hook_name()
250- if hook_name == 'stop':
251- self.stop_services()
252- else:
253- self.reconfigure_services()
254- self.provide_data()
255- cfg = hookenv.config()
256- if cfg.implicit_save:
257- cfg.save()
258+ hookenv._run_atstart()
259+ try:
260+ hook_name = hookenv.hook_name()
261+ if hook_name == 'stop':
262+ self.stop_services()
263+ else:
264+ self.reconfigure_services()
265+ self.provide_data()
266+ except SystemExit as x:
267+ if x.code is None or x.code == 0:
268+ hookenv._run_atexit()
269+ hookenv._run_atexit()
270
271 def provide_data(self):
272 """
273
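Both Hooks.execute() and ServiceManager.manage() above now treat sys.exit(0) (or a bare sys.exit()) as a successful hook, running the atexit callbacks before re-raising. A small standalone sketch of that control flow (hypothetical helper names, not the charmhelpers API):

```python
# Sketch of the success-aware SystemExit handling: cleanup (e.g. the
# implicit config save) still runs when a hook calls sys.exit(0).
def run_hook(body, on_success):
    """Run a hook body; treat SystemExit with code 0/None as success."""
    try:
        body()
    except SystemExit as x:
        if x.code is None or x.code == 0:
            on_success()
        raise
    on_success()


events = []


def normal_body():
    events.append('body')


run_hook(normal_body, lambda: events.append('cleanup'))


def exiting_body():
    events.append('exit-body')
    raise SystemExit(0)  # what sys.exit(0) raises


try:
    run_hook(exiting_body, lambda: events.append('exit-cleanup'))
except SystemExit:
    pass  # a real framework lets this propagate to the interpreter
```

A nonzero exit code would skip on_success, matching the "do not invoke it if the hook fails" contract on _run_atexit().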
274=== modified file 'metadata.yaml'
275--- metadata.yaml 2014-10-30 03:30:35 +0000
276+++ metadata.yaml 2015-07-02 12:52:15 +0000
277@@ -6,8 +6,10 @@
278 (Parallax) and an image delivery service (Teller). These services are used
279 in conjunction by Nova to deliver images from object stores, such as
280 OpenStack's Swift service, to Nova's compute nodes.
281-categories:
282- - miscellaneous
283+tags:
284+ - openstack
285+ - storage
286+ - misc
287 provides:
288 nrpe-external-master:
289 interface: nrpe-external-master
290
291=== modified file 'tests/00-setup'
292--- tests/00-setup 2014-10-08 20:18:38 +0000
293+++ tests/00-setup 2015-07-02 12:52:15 +0000
294@@ -5,6 +5,10 @@
295 sudo add-apt-repository --yes ppa:juju/stable
296 sudo apt-get update --yes
297 sudo apt-get install --yes python-amulet \
298+ python-cinderclient \
299+ python-distro-info \
300+ python-glanceclient \
301+ python-heatclient \
302 python-keystoneclient \
303- python-glanceclient \
304- python-novaclient
305+ python-novaclient \
306+ python-swiftclient
307
308=== modified file 'tests/017-basic-trusty-kilo' (properties changed: -x to +x)
309=== modified file 'tests/019-basic-vivid-kilo' (properties changed: -x to +x)
310=== added file 'tests/020-basic-trusty-liberty'
311--- tests/020-basic-trusty-liberty 1970-01-01 00:00:00 +0000
312+++ tests/020-basic-trusty-liberty 2015-07-02 12:52:15 +0000
313@@ -0,0 +1,11 @@
314+#!/usr/bin/python
315+
316+"""Amulet tests on a basic glance deployment on trusty-liberty."""
317+
318+from basic_deployment import GlanceBasicDeployment
319+
320+if __name__ == '__main__':
321+ deployment = GlanceBasicDeployment(series='trusty',
322+ openstack='cloud:trusty-liberty',
323+ source='cloud:trusty-updates/liberty')
324+ deployment.run_tests()
325
326=== added file 'tests/021-basic-wily-liberty'
327--- tests/021-basic-wily-liberty 1970-01-01 00:00:00 +0000
328+++ tests/021-basic-wily-liberty 2015-07-02 12:52:15 +0000
329@@ -0,0 +1,9 @@
330+#!/usr/bin/python
331+
332+"""Amulet tests on a basic glance deployment on wily-liberty."""
333+
334+from basic_deployment import GlanceBasicDeployment
335+
336+if __name__ == '__main__':
337+ deployment = GlanceBasicDeployment(series='wily')
338+ deployment.run_tests()
339
340=== modified file 'tests/README'
341--- tests/README 2014-10-08 20:18:38 +0000
342+++ tests/README 2015-07-02 12:52:15 +0000
343@@ -1,6 +1,15 @@
344 This directory provides Amulet tests that focus on verification of Glance
345 deployments.
346
347+test_* methods are called in lexical sort order.
348+
349+Test name convention to ensure desired test order:
350+ 1xx service and endpoint checks
351+ 2xx relation checks
352+ 3xx config checks
353+ 4xx functional checks
354+ 9xx restarts and other final checks
355+
356 In order to run tests, you'll need charm-tools installed (in addition to
357 juju, of course):
358 sudo add-apt-repository ppa:juju/stable
359
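The README convention above works because the test runner collects test_* methods in lexical (plain string) sort order, which is why basic_deployment.py renames its methods with numeric prefixes. A quick sketch of that ordering, using method names from this branch:

```python
# The 1xx/2xx/3xx prefixes rely on plain lexical sorting of the
# test_* method names; declaration order is irrelevant.
names = [
    'test_302_glance_registry_default_config',
    'test_100_services',
    'test_200_mysql_glance_db_relation',
    'test_102_service_catalog',
]
ordered = sorted(names)  # service/endpoint checks first, config checks last
```

Zero-padding to a fixed width matters: without it, a hypothetical test_20 would sort before test_102.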
360=== modified file 'tests/basic_deployment.py'
361--- tests/basic_deployment.py 2015-05-12 14:49:27 +0000
362+++ tests/basic_deployment.py 2015-07-02 12:52:15 +0000
363@@ -1,7 +1,12 @@
364 #!/usr/bin/python
365
366+"""
367+Basic glance amulet functional tests.
368+"""
369+
370 import amulet
371 import os
372+import time
373 import yaml
374
375 from charmhelpers.contrib.openstack.amulet.deployment import (
376@@ -10,25 +15,24 @@
377
378 from charmhelpers.contrib.openstack.amulet.utils import (
379 OpenStackAmuletUtils,
380- DEBUG, # flake8: noqa
381- ERROR
382+ DEBUG,
383+ # ERROR
384 )
385
386 # Use DEBUG to turn on debug logging
387 u = OpenStackAmuletUtils(DEBUG)
388
389+
390 class GlanceBasicDeployment(OpenStackAmuletDeployment):
391- '''Amulet tests on a basic file-backed glance deployment. Verify relations,
392- service status, endpoint service catalog, create and delete new image.'''
393-
394-# TO-DO(beisner):
395-# * Add tests with different storage back ends
396-# * Resolve Essex->Havana juju set charm bug
397+ """Amulet tests on a basic file-backed glance deployment. Verify
398+ relations, service status, endpoint service catalog, create and
399+ delete new image."""
400
401 def __init__(self, series=None, openstack=None, source=None, git=False,
402 stable=False):
403- '''Deploy the entire test environment.'''
404- super(GlanceBasicDeployment, self).__init__(series, openstack, source, stable)
405+ """Deploy the entire test environment."""
406+ super(GlanceBasicDeployment, self).__init__(series, openstack,
407+ source, stable)
408 self.git = git
409 self._add_services()
410 self._add_relations()
411@@ -37,20 +41,21 @@
412 self._initialize_tests()
413
414 def _add_services(self):
415- '''Add services
416+ """Add services
417
418 Add the services that we're testing, where glance is local,
419 and the rest of the service are from lp branches that are
420 compatible with the local charm (e.g. stable or next).
421- '''
422+ """
423 this_service = {'name': 'glance'}
424- other_services = [{'name': 'mysql'}, {'name': 'rabbitmq-server'},
425+ other_services = [{'name': 'mysql'},
426+ {'name': 'rabbitmq-server'},
427 {'name': 'keystone'}]
428 super(GlanceBasicDeployment, self)._add_services(this_service,
429 other_services)
430
431 def _add_relations(self):
432- '''Add relations for the services.'''
433+ """Add relations for the services."""
434 relations = {'glance:identity-service': 'keystone:identity-service',
435 'glance:shared-db': 'mysql:shared-db',
436 'keystone:shared-db': 'mysql:shared-db',
437@@ -58,7 +63,7 @@
438 super(GlanceBasicDeployment, self)._add_relations(relations)
439
440 def _configure_services(self):
441- '''Configure all of the services.'''
442+ """Configure all of the services."""
443 glance_config = {}
444 if self.git:
445 branch = 'stable/' + self._get_openstack_release_string()
446@@ -76,7 +81,8 @@
447 'http_proxy': amulet_http_proxy,
448 'https_proxy': amulet_http_proxy,
449 }
450- glance_config['openstack-origin-git'] = yaml.dump(openstack_origin_git)
451+ glance_config['openstack-origin-git'] = \
452+ yaml.dump(openstack_origin_git)
453
454 keystone_config = {'admin-password': 'openstack',
455 'admin-token': 'ubuntutesting'}
456@@ -87,12 +93,19 @@
457 super(GlanceBasicDeployment, self)._configure_services(configs)
458
459 def _initialize_tests(self):
460- '''Perform final initialization before tests get run.'''
461+ """Perform final initialization before tests get run."""
462 # Access the sentries for inspecting service units
463 self.mysql_sentry = self.d.sentry.unit['mysql/0']
464 self.glance_sentry = self.d.sentry.unit['glance/0']
465 self.keystone_sentry = self.d.sentry.unit['keystone/0']
466 self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
467+ u.log.debug('openstack release val: {}'.format(
468+ self._get_openstack_release()))
469+ u.log.debug('openstack release str: {}'.format(
470+ self._get_openstack_release_string()))
471+
472+ # Let things settle a bit before moving forward
473+ time.sleep(30)
474
475 # Authenticate admin with keystone
476 self.keystone = u.authenticate_keystone_admin(self.keystone_sentry,
477@@ -103,46 +116,103 @@
478 # Authenticate admin with glance endpoint
479 self.glance = u.authenticate_glance_admin(self.keystone)
480
481- u.log.debug('openstack release: {}'.format(self._get_openstack_release()))
482-
483- def test_services(self):
484- '''Verify that the expected services are running on the
485- corresponding service units.'''
486- commands = {
487- self.mysql_sentry: ['status mysql'],
488- self.keystone_sentry: ['status keystone'],
489- self.glance_sentry: ['status glance-api', 'status glance-registry'],
490- self.rabbitmq_sentry: ['sudo service rabbitmq-server status']
491+ def test_100_services(self):
492+ """Verify that the expected services are running on the
493+ corresponding service units."""
494+ services = {
495+ self.mysql_sentry: ['mysql'],
496+ self.keystone_sentry: ['keystone'],
497+ self.glance_sentry: ['glance-api', 'glance-registry'],
498+ self.rabbitmq_sentry: ['rabbitmq-server']
499 }
500- u.log.debug('commands: {}'.format(commands))
501- ret = u.validate_services(commands)
502+
503+ ret = u.validate_services_by_name(services)
504 if ret:
505 amulet.raise_status(amulet.FAIL, msg=ret)
506
507- def test_service_catalog(self):
508- '''Verify that the service catalog endpoint data'''
509- endpoint_vol = {'adminURL': u.valid_url,
510- 'region': 'RegionOne',
511- 'publicURL': u.valid_url,
512- 'internalURL': u.valid_url}
513- endpoint_id = {'adminURL': u.valid_url,
514- 'region': 'RegionOne',
515- 'publicURL': u.valid_url,
516- 'internalURL': u.valid_url}
517- if self._get_openstack_release() >= self.trusty_icehouse:
518- endpoint_vol['id'] = u.not_null
519- endpoint_id['id'] = u.not_null
520-
521- expected = {'image': [endpoint_id],
522- 'identity': [endpoint_id]}
523+ def test_102_service_catalog(self):
524+ """Verify that the service catalog endpoint data is valid."""
525+ u.log.debug('Checking keystone service catalog...')
526+ endpoint_check = {
527+ 'adminURL': u.valid_url,
528+ 'id': u.not_null,
529+ 'region': 'RegionOne',
530+ 'publicURL': u.valid_url,
531+ 'internalURL': u.valid_url
532+ }
533+ expected = {
534+ 'image': [endpoint_check],
535+ 'identity': [endpoint_check]
536+ }
537 actual = self.keystone.service_catalog.get_endpoints()
538
539 ret = u.validate_svc_catalog_endpoint_data(expected, actual)
540 if ret:
541 amulet.raise_status(amulet.FAIL, msg=ret)
542
543- def test_mysql_glance_db_relation(self):
544- '''Verify the mysql:glance shared-db relation data'''
545+ def test_104_glance_endpoint(self):
546+ """Verify the glance endpoint data."""
547+ u.log.debug('Checking glance api endpoint data...')
548+ endpoints = self.keystone.endpoints.list()
549+ admin_port = internal_port = public_port = '9292'
550+ expected = {
551+ 'id': u.not_null,
552+ 'region': 'RegionOne',
553+ 'adminurl': u.valid_url,
554+ 'internalurl': u.valid_url,
555+ 'publicurl': u.valid_url,
556+ 'service_id': u.not_null
557+ }
558+ ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
559+ public_port, expected)
560+
561+ if ret:
562+ amulet.raise_status(amulet.FAIL,
563+ msg='glance endpoint: {}'.format(ret))
564+
565+ def test_106_keystone_endpoint(self):
566+ """Verify the keystone endpoint data."""
567+ u.log.debug('Checking keystone api endpoint data...')
568+ endpoints = self.keystone.endpoints.list()
569+ admin_port = '35357'
570+ internal_port = public_port = '5000'
571+ expected = {
572+ 'id': u.not_null,
573+ 'region': 'RegionOne',
574+ 'adminurl': u.valid_url,
575+ 'internalurl': u.valid_url,
576+ 'publicurl': u.valid_url,
577+ 'service_id': u.not_null
578+ }
579+ ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
580+ public_port, expected)
581+ if ret:
582+ amulet.raise_status(amulet.FAIL,
583+ msg='keystone endpoint: {}'.format(ret))
584+
585+ def test_110_users(self):
586+ """Verify expected users."""
587+ u.log.debug('Checking keystone users...')
588+ expected = [
589+ {'name': 'glance',
590+ 'enabled': True,
591+ 'tenantId': u.not_null,
592+ 'id': u.not_null,
593+ 'email': 'juju@localhost'},
594+ {'name': 'admin',
595+ 'enabled': True,
596+ 'tenantId': u.not_null,
597+ 'id': u.not_null,
598+ 'email': 'juju@localhost'}
599+ ]
600+ actual = self.keystone.users.list()
601+ ret = u.validate_user_data(expected, actual)
602+ if ret:
603+ amulet.raise_status(amulet.FAIL, msg=ret)
604+
605+ def test_200_mysql_glance_db_relation(self):
606+ """Verify the mysql:glance shared-db relation data"""
607+ u.log.debug('Checking mysql to glance shared-db relation data...')
608 unit = self.mysql_sentry
609 relation = ['shared-db', 'glance:shared-db']
610 expected = {
611@@ -154,8 +224,9 @@
612 message = u.relation_error('mysql shared-db', ret)
613 amulet.raise_status(amulet.FAIL, msg=message)
614
615- def test_glance_mysql_db_relation(self):
616- '''Verify the glance:mysql shared-db relation data'''
617+ def test_201_glance_mysql_db_relation(self):
618+ """Verify the glance:mysql shared-db relation data"""
619+ u.log.debug('Checking glance to mysql shared-db relation data...')
620 unit = self.glance_sentry
621 relation = ['shared-db', 'mysql:shared-db']
622 expected = {
623@@ -169,8 +240,9 @@
624 message = u.relation_error('glance shared-db', ret)
625 amulet.raise_status(amulet.FAIL, msg=message)
626
627- def test_keystone_glance_id_relation(self):
628- '''Verify the keystone:glance identity-service relation data'''
629+ def test_202_keystone_glance_id_relation(self):
630+ """Verify the keystone:glance identity-service relation data"""
631+ u.log.debug('Checking keystone to glance id relation data...')
632 unit = self.keystone_sentry
633 relation = ['identity-service',
634 'glance:identity-service']
635@@ -193,8 +265,9 @@
636 message = u.relation_error('keystone identity-service', ret)
637 amulet.raise_status(amulet.FAIL, msg=message)
638
639- def test_glance_keystone_id_relation(self):
640- '''Verify the glance:keystone identity-service relation data'''
641+ def test_203_glance_keystone_id_relation(self):
642+ """Verify the glance:keystone identity-service relation data"""
643+ u.log.debug('Checking glance to keystone relation data...')
644 unit = self.glance_sentry
645 relation = ['identity-service',
646 'keystone:identity-service']
647@@ -211,8 +284,9 @@
648 message = u.relation_error('glance identity-service', ret)
649 amulet.raise_status(amulet.FAIL, msg=message)
650
651- def test_rabbitmq_glance_amqp_relation(self):
652- '''Verify the rabbitmq-server:glance amqp relation data'''
653+ def test_204_rabbitmq_glance_amqp_relation(self):
654+ """Verify the rabbitmq-server:glance amqp relation data"""
655+ u.log.debug('Checking rmq to glance amqp relation data...')
656 unit = self.rabbitmq_sentry
657 relation = ['amqp', 'glance:amqp']
658 expected = {
659@@ -225,8 +299,9 @@
660 message = u.relation_error('rabbitmq amqp', ret)
661 amulet.raise_status(amulet.FAIL, msg=message)
662
663- def test_glance_rabbitmq_amqp_relation(self):
664- '''Verify the glance:rabbitmq-server amqp relation data'''
665+ def test_205_glance_rabbitmq_amqp_relation(self):
666+ """Verify the glance:rabbitmq-server amqp relation data"""
667+ u.log.debug('Checking glance to rmq amqp relation data...')
668 unit = self.glance_sentry
669 relation = ['amqp', 'rabbitmq-server:amqp']
670 expected = {
671@@ -239,291 +314,225 @@
672 message = u.relation_error('glance amqp', ret)
673 amulet.raise_status(amulet.FAIL, msg=message)
674
675- def test_image_create_delete(self):
676- '''Create new cirros image in glance, verify, then delete it'''
677-
678- # Create a new image
679- image_name = 'cirros-image-1'
680- image_new = u.create_cirros_image(self.glance, image_name)
681-
682- # Confirm image is created and has status of 'active'
683- if not image_new:
684- message = 'glance image create failed'
685- amulet.raise_status(amulet.FAIL, msg=message)
686-
687- # Verify new image name
688- images_list = list(self.glance.images.list())
689- if images_list[0].name != image_name:
690- message = 'glance image create failed or unexpected image name {}'.format(images_list[0].name)
691- amulet.raise_status(amulet.FAIL, msg=message)
692-
693- # Delete the new image
694- u.log.debug('image count before delete: {}'.format(len(list(self.glance.images.list()))))
695- u.delete_image(self.glance, image_new)
696- u.log.debug('image count after delete: {}'.format(len(list(self.glance.images.list()))))
697-
698- def test_glance_api_default_config(self):
699- '''Verify default section configs in glance-api.conf and
700- compare some of the parameters to relation data.'''
701- unit = self.glance_sentry
702- rel_gl_mq = unit.relation('amqp', 'rabbitmq-server:amqp')
703- conf = '/etc/glance/glance-api.conf'
704- expected = {'use_syslog': 'False',
705- 'default_store': 'file',
706- 'filesystem_store_datadir': '/var/lib/glance/images/',
707- 'rabbit_userid': rel_gl_mq['username'],
708- 'log_file': '/var/log/glance/api.log',
709- 'debug': 'False',
710- 'verbose': 'False'}
711- section = 'DEFAULT'
712-
713- if self._get_openstack_release() <= self.precise_havana:
714- # Defaults were different before icehouse
715- expected['debug'] = 'True'
716- expected['verbose'] = 'True'
717-
718- ret = u.validate_config_data(unit, conf, section, expected)
719- if ret:
720- message = "glance-api default config error: {}".format(ret)
721- amulet.raise_status(amulet.FAIL, msg=message)
722-
723- def test_glance_api_auth_config(self):
724- '''Verify authtoken section config in glance-api.conf using
725- glance/keystone relation data.'''
726- unit_gl = self.glance_sentry
727- unit_ks = self.keystone_sentry
728- rel_gl_mq = unit_gl.relation('amqp', 'rabbitmq-server:amqp')
729- rel_ks_gl = unit_ks.relation('identity-service', 'glance:identity-service')
730- conf = '/etc/glance/glance-api.conf'
731- section = 'keystone_authtoken'
732-
733- if self._get_openstack_release() > self.precise_havana:
734- # No auth config exists in this file before icehouse
735- expected = {'admin_user': 'glance',
736- 'admin_password': rel_ks_gl['service_password']}
737-
738- ret = u.validate_config_data(unit_gl, conf, section, expected)
739- if ret:
740- message = "glance-api auth config error: {}".format(ret)
741- amulet.raise_status(amulet.FAIL, msg=message)
742-
743- def test_glance_api_paste_auth_config(self):
744- '''Verify authtoken section config in glance-api-paste.ini using
745- glance/keystone relation data.'''
746- unit_gl = self.glance_sentry
747- unit_ks = self.keystone_sentry
748- rel_gl_mq = unit_gl.relation('amqp', 'rabbitmq-server:amqp')
749- rel_ks_gl = unit_ks.relation('identity-service', 'glance:identity-service')
750+ def _get_keystone_authtoken_expected_dict(self, rel_ks_gl):
751+ """Return expected authtoken dict for OS release"""
752+ expected = {
753+ 'keystone_authtoken': {
754+ 'signing_dir': '/var/cache/glance',
755+ 'admin_tenant_name': 'services',
756+ 'admin_user': 'glance',
757+ 'admin_password': rel_ks_gl['service_password'],
758+ 'auth_uri': u.valid_url
759+ }
760+ }
761+
762+ if self._get_openstack_release() >= self.trusty_kilo:
763+ # Trusty-Kilo and later
764+ expected['keystone_authtoken'].update({
765+ 'identity_uri': u.valid_url,
766+ })
767+ else:
768+ # Utopic-Juno and earlier
769+ expected['keystone_authtoken'].update({
770+ 'auth_host': rel_ks_gl['auth_host'],
771+ 'auth_port': rel_ks_gl['auth_port'],
772+ 'auth_protocol': rel_ks_gl['auth_protocol']
773+ })
774+
775+ return expected
776+
777+ def test_300_glance_api_default_config(self):
778+ """Verify default section configs in glance-api.conf and
779+ compare some of the parameters to relation data."""
780+ u.log.debug('Checking glance api config file...')
781+ unit = self.glance_sentry
782+ unit_ks = self.keystone_sentry
783+ rel_mq_gl = self.rabbitmq_sentry.relation('amqp', 'glance:amqp')
784+ rel_ks_gl = unit_ks.relation('identity-service',
785+ 'glance:identity-service')
786+ rel_my_gl = self.mysql_sentry.relation('shared-db', 'glance:shared-db')
787+ db_uri = "mysql://{}:{}@{}/{}".format('glance', rel_my_gl['password'],
788+ rel_my_gl['db_host'], 'glance')
789+ conf = '/etc/glance/glance-api.conf'
790+ expected = {
791+ 'DEFAULT': {
792+ 'debug': 'False',
793+ 'verbose': 'False',
794+ 'use_syslog': 'False',
795+ 'log_file': '/var/log/glance/api.log',
796+ 'bind_host': '0.0.0.0',
797+ 'bind_port': '9282',
798+ 'registry_host': '0.0.0.0',
799+ 'registry_port': '9191',
800+ 'registry_client_protocol': 'http',
801+ 'delayed_delete': 'False',
802+ 'scrub_time': '43200',
803+ 'notification_driver': 'rabbit',
804+ 'scrubber_datadir': '/var/lib/glance/scrubber',
805+ 'image_cache_dir': '/var/lib/glance/image-cache/',
806+ 'db_enforce_mysql_charset': 'False'
807+ },
808+ }
809+
810+ expected.update(self._get_keystone_authtoken_expected_dict(rel_ks_gl))
811+
812+ if self._get_openstack_release() >= self.trusty_kilo:
813+ # Kilo or later
814+ expected['oslo_messaging_rabbit'] = {
815+ 'rabbit_userid': 'glance',
816+ 'rabbit_virtual_host': 'openstack',
817+ 'rabbit_password': rel_mq_gl['password'],
818+ 'rabbit_host': rel_mq_gl['hostname']
819+ }
820+ expected['glance_store'] = {
821+ 'filesystem_store_datadir': '/var/lib/glance/images/',
822+ 'stores': 'glance.store.filesystem.'
823+ 'Store,glance.store.http.Store',
824+ 'default_store': 'file'
825+ }
826+ expected['database'] = {
827+ 'idle_timeout': '3600',
828+ 'connection': db_uri
829+ }
830+ else:
831+ # Juno or earlier
832+ expected['DEFAULT'].update({
833+ 'rabbit_userid': 'glance',
834+ 'rabbit_virtual_host': 'openstack',
835+ 'rabbit_password': rel_mq_gl['password'],
836+ 'rabbit_host': rel_mq_gl['hostname'],
837+ 'filesystem_store_datadir': '/var/lib/glance/images/',
838+ 'default_store': 'file',
839+ })
840+ expected['database'] = {
841+ 'sql_idle_timeout': '3600',
842+ 'connection': db_uri
843+ }
844+
845+ for section, pairs in expected.iteritems():
846+ ret = u.validate_config_data(unit, conf, section, pairs)
847+ if ret:
848+ message = "glance api config error: {}".format(ret)
849+ amulet.raise_status(amulet.FAIL, msg=message)
850+
851+ def test_302_glance_registry_default_config(self):
852+ """Verify configs in glance-registry.conf"""
853+ u.log.debug('Checking glance registry config file...')
854+ unit = self.glance_sentry
855+ unit_ks = self.keystone_sentry
856+ rel_ks_gl = unit_ks.relation('identity-service',
857+ 'glance:identity-service')
858+ rel_my_gl = self.mysql_sentry.relation('shared-db', 'glance:shared-db')
859+ db_uri = "mysql://{}:{}@{}/{}".format('glance', rel_my_gl['password'],
860+ rel_my_gl['db_host'], 'glance')
861+ conf = '/etc/glance/glance-registry.conf'
862+
863+ expected = {
864+ 'DEFAULT': {
865+ 'use_syslog': 'False',
866+ 'log_file': '/var/log/glance/registry.log',
867+ 'debug': 'False',
868+ 'verbose': 'False',
869+ 'bind_host': '0.0.0.0',
870+ 'bind_port': '9191'
871+ },
872+ }
873+
874+ if self._get_openstack_release() >= self.trusty_kilo:
875+ # Kilo or later
876+ expected['database'] = {
877+ 'idle_timeout': '3600',
878+ 'connection': db_uri
879+ }
880+ else:
881+ # Juno or earlier
882+ expected['database'] = {
883+ 'idle_timeout': '3600',
884+ 'connection': db_uri
885+ }
886+
887+ expected.update(self._get_keystone_authtoken_expected_dict(rel_ks_gl))
888+
889+ for section, pairs in expected.iteritems():
890+ ret = u.validate_config_data(unit, conf, section, pairs)
891+ if ret:
892+ message = "glance registry config error: {}".format(ret)
893+ amulet.raise_status(amulet.FAIL, msg=message)
894+
895+ def _get_filter_factory_expected_dict(self):
896+ """Return expected authtoken filter factory dict for OS release"""
897+ if self._get_openstack_release() >= self.trusty_kilo:
898+ # Kilo and later
899+ val = 'keystonemiddleware.auth_token:filter_factory'
900+ else:
901+ # Juno and earlier
902+ val = 'keystoneclient.middleware.auth_token:filter_factory'
903+
904+ return {'filter:authtoken': {'paste.filter_factory': val}}
905+
906+ def test_304_glance_api_paste_auth_config(self):
907+ """Verify authtoken section config in glance-api-paste.ini using
908+ glance/keystone relation data."""
909+ u.log.debug('Checking glance api paste config file...')
910+ unit = self.glance_sentry
911 conf = '/etc/glance/glance-api-paste.ini'
912- section = 'filter:authtoken'
913-
914- if self._get_openstack_release() <= self.precise_havana:
915- # No auth config exists in this file after havana
916- expected = {'admin_user': 'glance',
917- 'admin_password': rel_ks_gl['service_password']}
918-
919- ret = u.validate_config_data(unit_gl, conf, section, expected)
920+ expected = self._get_filter_factory_expected_dict()
921+
922+ for section, pairs in expected.iteritems():
923+ ret = u.validate_config_data(unit, conf, section, pairs)
924 if ret:
925- message = "glance-api-paste auth config error: {}".format(ret)
926+ message = "glance api paste config error: {}".format(ret)
927 amulet.raise_status(amulet.FAIL, msg=message)
928
929- def test_glance_registry_paste_auth_config(self):
930- '''Verify authtoken section config in glance-registry-paste.ini using
931- glance/keystone relation data.'''
932- unit_gl = self.glance_sentry
933- unit_ks = self.keystone_sentry
934- rel_gl_mq = unit_gl.relation('amqp', 'rabbitmq-server:amqp')
935- rel_ks_gl = unit_ks.relation('identity-service', 'glance:identity-service')
936+ def test_306_glance_registry_paste_auth_config(self):
937+ """Verify authtoken section config in glance-registry-paste.ini using
938+ glance/keystone relation data."""
939+ u.log.debug('Checking glance registry paste config file...')
940+ unit = self.glance_sentry
941 conf = '/etc/glance/glance-registry-paste.ini'
942- section = 'filter:authtoken'
943-
944- if self._get_openstack_release() <= self.precise_havana:
945- # No auth config exists in this file after havana
946- expected = {'admin_user': 'glance',
947- 'admin_password': rel_ks_gl['service_password']}
948-
949- ret = u.validate_config_data(unit_gl, conf, section, expected)
950- if ret:
951- message = "glance-registry-paste auth config error: {}".format(ret)
952- amulet.raise_status(amulet.FAIL, msg=message)
953-
954- def test_glance_registry_default_config(self):
955- '''Verify default section configs in glance-registry.conf'''
956- unit = self.glance_sentry
957- conf = '/etc/glance/glance-registry.conf'
958- expected = {'use_syslog': 'False',
959- 'log_file': '/var/log/glance/registry.log',
960- 'debug': 'False',
961- 'verbose': 'False'}
962- section = 'DEFAULT'
963-
964- if self._get_openstack_release() <= self.precise_havana:
965- # Defaults were different before icehouse
966- expected['debug'] = 'True'
967- expected['verbose'] = 'True'
968-
969- ret = u.validate_config_data(unit, conf, section, expected)
970- if ret:
971- message = "glance-registry default config error: {}".format(ret)
972- amulet.raise_status(amulet.FAIL, msg=message)
973-
974- def test_glance_registry_auth_config(self):
975- '''Verify authtoken section config in glance-registry.conf
976- using glance/keystone relation data.'''
977- unit_gl = self.glance_sentry
978- unit_ks = self.keystone_sentry
979- rel_gl_mq = unit_gl.relation('amqp', 'rabbitmq-server:amqp')
980- rel_ks_gl = unit_ks.relation('identity-service', 'glance:identity-service')
981- conf = '/etc/glance/glance-registry.conf'
982- section = 'keystone_authtoken'
983-
984- if self._get_openstack_release() > self.precise_havana:
985- # No auth config exists in this file before icehouse
986- expected = {'admin_user': 'glance',
987- 'admin_password': rel_ks_gl['service_password']}
988-
989- ret = u.validate_config_data(unit_gl, conf, section, expected)
990- if ret:
991- message = "glance-registry keystone_authtoken config error: {}".format(ret)
992- amulet.raise_status(amulet.FAIL, msg=message)
993-
994- def test_glance_api_database_config(self):
995- '''Verify database config in glance-api.conf and
996- compare with a db uri constructed from relation data.'''
997- unit = self.glance_sentry
998- conf = '/etc/glance/glance-api.conf'
999- relation = self.mysql_sentry.relation('shared-db', 'glance:shared-db')
1000- db_uri = "mysql://{}:{}@{}/{}".format('glance', relation['password'],
1001- relation['db_host'], 'glance')
1002- expected = {'connection': db_uri, 'sql_idle_timeout': '3600'}
1003- section = 'database'
1004-
1005- if self._get_openstack_release() <= self.precise_havana:
1006- # Section and directive for this config changed in icehouse
1007- expected = {'sql_connection': db_uri, 'sql_idle_timeout': '3600'}
1008- section = 'DEFAULT'
1009-
1010- ret = u.validate_config_data(unit, conf, section, expected)
1011- if ret:
1012- message = "glance db config error: {}".format(ret)
1013- amulet.raise_status(amulet.FAIL, msg=message)
1014-
1015- def test_glance_registry_database_config(self):
1016- '''Verify database config in glance-registry.conf and
1017- compare with a db uri constructed from relation data.'''
1018- unit = self.glance_sentry
1019- conf = '/etc/glance/glance-registry.conf'
1020- relation = self.mysql_sentry.relation('shared-db', 'glance:shared-db')
1021- db_uri = "mysql://{}:{}@{}/{}".format('glance', relation['password'],
1022- relation['db_host'], 'glance')
1023- expected = {'connection': db_uri, 'sql_idle_timeout': '3600'}
1024- section = 'database'
1025-
1026- if self._get_openstack_release() <= self.precise_havana:
1027- # Section and directive for this config changed in icehouse
1028- expected = {'sql_connection': db_uri, 'sql_idle_timeout': '3600'}
1029- section = 'DEFAULT'
1030-
1031- ret = u.validate_config_data(unit, conf, section, expected)
1032- if ret:
1033- message = "glance db config error: {}".format(ret)
1034- amulet.raise_status(amulet.FAIL, msg=message)
1035-
1036- def test_glance_endpoint(self):
1037- '''Verify the glance endpoint data.'''
1038- endpoints = self.keystone.endpoints.list()
1039- admin_port = internal_port = public_port = '9292'
1040- expected = {'id': u.not_null,
1041- 'region': 'RegionOne',
1042- 'adminurl': u.valid_url,
1043- 'internalurl': u.valid_url,
1044- 'publicurl': u.valid_url,
1045- 'service_id': u.not_null}
1046- ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
1047- public_port, expected)
1048-
1049- if ret:
1050- amulet.raise_status(amulet.FAIL,
1051- msg='glance endpoint: {}'.format(ret))
1052-
1053- def test_keystone_endpoint(self):
1054- '''Verify the keystone endpoint data.'''
1055- endpoints = self.keystone.endpoints.list()
1056- admin_port = '35357'
1057- internal_port = public_port = '5000'
1058- expected = {'id': u.not_null,
1059- 'region': 'RegionOne',
1060- 'adminurl': u.valid_url,
1061- 'internalurl': u.valid_url,
1062- 'publicurl': u.valid_url,
1063- 'service_id': u.not_null}
1064- ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
1065- public_port, expected)
1066- if ret:
1067- amulet.raise_status(amulet.FAIL,
1068- msg='keystone endpoint: {}'.format(ret))
1069-
1070- def _change_config(self):
1071- if self._get_openstack_release() > self.precise_havana:
1072- self.d.configure('glance', {'debug': 'True'})
1073- else:
1074- self.d.configure('glance', {'debug': 'False'})
1075-
1076- def _restore_config(self):
1077- if self._get_openstack_release() > self.precise_havana:
1078- self.d.configure('glance', {'debug': 'False'})
1079- else:
1080- self.d.configure('glance', {'debug': 'True'})
1081-
1082- def test_z_glance_restart_on_config_change(self):
1083- '''Verify that glance is restarted when the config is changed.
1084-
1085- Note(coreycb): The method name with the _z_ is a little odd
1086- but it forces the test to run last. It just makes things
1087- easier because restarting services requires re-authorization.
1088- '''
1089- if self._get_openstack_release() <= self.precise_havana:
1090- # /!\ NOTE(beisner): Glance charm before Icehouse doesn't respond
1091- # to attempted config changes via juju / juju set.
1092- # https://bugs.launchpad.net/charms/+source/glance/+bug/1340307
1093- u.log.error('NOTE(beisner): skipping glance restart on config ' +
1094- 'change check due to bug 1340307.')
1095- return
1096-
1097- # Make config change to trigger a service restart
1098- self._change_config()
1099-
1100- if not u.service_restarted(self.glance_sentry, 'glance-api',
1101- '/etc/glance/glance-api.conf'):
1102- self._restore_config()
1103- message = "glance service didn't restart after config change"
1104- amulet.raise_status(amulet.FAIL, msg=message)
1105-
1106- if not u.service_restarted(self.glance_sentry, 'glance-registry',
1107- '/etc/glance/glance-registry.conf',
1108- sleep_time=0):
1109- self._restore_config()
1110- message = "glance service didn't restart after config change"
1111- amulet.raise_status(amulet.FAIL, msg=message)
1112-
1113- # Return to original config
1114- self._restore_config()
1115-
1116- def test_users(self):
1117- '''Verify expected users.'''
1118- user0 = {'name': 'glance',
1119- 'enabled': True,
1120- 'tenantId': u.not_null,
1121- 'id': u.not_null,
1122- 'email': 'juju@localhost'}
1123- user1 = {'name': 'admin',
1124- 'enabled': True,
1125- 'tenantId': u.not_null,
1126- 'id': u.not_null,
1127- 'email': 'juju@localhost'}
1128- expected = [user0, user1]
1129- actual = self.keystone.users.list()
1130-
1131- ret = u.validate_user_data(expected, actual)
1132- if ret:
1133- amulet.raise_status(amulet.FAIL, msg=ret)
1134+ expected = self._get_filter_factory_expected_dict()
1135+
1136+ for section, pairs in expected.iteritems():
1137+ ret = u.validate_config_data(unit, conf, section, pairs)
1138+ if ret:
1139+ message = "glance registry paste config error: {}".format(ret)
1140+ amulet.raise_status(amulet.FAIL, msg=message)
1141+
1142+ def test_410_glance_image_create_delete(self):
1143+ """Create new cirros image in glance, verify, then delete it."""
1144+ u.log.debug('Creating, checking and deleting glance image...')
1145+ img_new = u.create_cirros_image(self.glance, "cirros-image-1")
1146+ img_id = img_new.id
1147+ u.delete_resource(self.glance.images, img_id, msg="glance image")
1148+
1149+ def test_900_glance_restart_on_config_change(self):
1150+ """Verify that the specified services are restarted when the config
1151+ is changed."""
1152+ sentry = self.glance_sentry
1153+ juju_service = 'glance'
1154+
1155+ # Expected default and alternate values
1156+ set_default = {'use-syslog': 'False'}
1157+ set_alternate = {'use-syslog': 'True'}
1158+
1159+ # Config file affected by juju set config change
1160+ conf_file = '/etc/glance/glance-api.conf'
1161+
1162+ # Services which are expected to restart upon config change
1163+ services = ['glance-api', 'glance-registry']
1164+
1165+ # Make config change, check for service restarts
1166+ u.log.debug('Making config change on {}...'.format(juju_service))
1167+ self.d.configure(juju_service, set_alternate)
1168+
1169+ sleep_time = 30
1170+ for s in services:
1171+ u.log.debug("Checking that service restarted: {}".format(s))
1172+ if not u.service_restarted(sentry, s,
1173+ conf_file, sleep_time=sleep_time):
1174+ self.d.configure(juju_service, set_default)
1175+ msg = "service {} didn't restart after config change".format(s)
1176+ amulet.raise_status(amulet.FAIL, msg=msg)
1177+ sleep_time = 0
1178+
1179+ self.d.configure(juju_service, set_default)
1180
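The tests above all follow one pattern: build an expected dict of config sections keyed by OpenStack release, then diff it against the deployed INI file with `u.validate_config_data`. A minimal standalone sketch of that pattern (function names and the sample config here are illustrative, not charm code; it uses Python 3's `configparser` rather than the py2 module the amulet helpers use):

```python
import configparser


def expected_config(db_uri, kilo_or_later):
    # Expected {section: {key: value}} dict; the idle-timeout directive
    # name changed from sql_idle_timeout to idle_timeout in Kilo.
    expected = {'DEFAULT': {'debug': 'False', 'bind_port': '9282'}}
    if kilo_or_later:
        expected['database'] = {'idle_timeout': '3600',
                                'connection': db_uri}
    else:
        expected['database'] = {'sql_idle_timeout': '3600',
                                'connection': db_uri}
    return expected


def check_sections(conf_text, expected):
    """Return None if every expected key matches, else an error string."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    for section, pairs in expected.items():
        for key, value in pairs.items():
            if not cfg.has_option(section, key):
                return 'section [{}] is missing option {}'.format(section,
                                                                  key)
            if cfg.get(section, key) != value:
                return 'section [{}] {}: {} != {}'.format(
                    section, key, cfg.get(section, key), value)
    return None


SAMPLE = """\
[DEFAULT]
debug = False
bind_port = 9282

[database]
idle_timeout = 3600
connection = mysql://glance:pw@10.0.0.2/glance
"""

err = check_sections(SAMPLE,
                     expected_config('mysql://glance:pw@10.0.0.2/glance',
                                     kilo_or_later=True))
```

A non-None return value is raised as an amulet FAIL in the real tests; None means every expected section/key matched.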
1181=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
1182--- tests/charmhelpers/contrib/amulet/utils.py 2015-06-19 15:08:48 +0000
1183+++ tests/charmhelpers/contrib/amulet/utils.py 2015-07-02 12:52:15 +0000
1184@@ -185,10 +185,23 @@
1185 for k in expected.keys():
1186 if not config.has_option(section, k):
1187 return "section [{}] is missing option {}".format(section, k)
1188- if config.get(section, k) != expected[k]:
1189- return "section [{}] {}:{} != expected {}:{}".format(
1190- section, k, config.get(section, k), k, expected[k])
1191- return None
1192+
1193+ actual = config.get(section, k)
1194+ v = expected[k]
1195+ if (isinstance(v, six.string_types) or
1196+ isinstance(v, bool) or
1197+ isinstance(v, six.integer_types)):
1198+ # handle explicit values
1199+ if actual != v:
1200+ return "section [{}] {}:{} != expected {}:{}".format(
1201+ section, k, actual, k, expected[k])
1202+ else:
1203+ # handle not_null, valid_ip boolean comparison methods, etc.
1204+ if not v(actual):
1205+ return "section [{}] {}:{} != expected {}:{}".format(
1206+ section, k, actual, k, expected[k])
1207+
1208+ return None
1209
1210 def _validate_dict_data(self, expected, actual):
1211 """Validate dictionary data.
1212@@ -406,3 +419,123 @@
1213 """Convert a relative file path to a file URL."""
1214 _abs_path = os.path.abspath(file_rel_path)
1215 return urlparse.urlparse(_abs_path, scheme='file').geturl()
1216+
1217+ def check_commands_on_units(self, commands, sentry_units):
1218+ """Check that all commands in a list exit zero on all
1219+ sentry units in a list.
1220+
1221+ :param commands: list of bash commands
1222+ :param sentry_units: list of sentry unit pointers
1223+ :returns: None if successful; Failure message otherwise
1224+ """
1225+ self.log.debug('Checking exit codes for {} commands on {} '
1226+ 'sentry units...'.format(len(commands),
1227+ len(sentry_units)))
1228+ for sentry_unit in sentry_units:
1229+ for cmd in commands:
1230+ output, code = sentry_unit.run(cmd)
1231+ if code == 0:
1232+ msg = ('{} `{}` returned {} '
1233+ '(OK)'.format(sentry_unit.info['unit_name'],
1234+ cmd, code))
1235+ self.log.debug(msg)
1236+ else:
1237+ msg = ('{} `{}` returned {} '
1238+ '{}'.format(sentry_unit.info['unit_name'],
1239+ cmd, code, output))
1240+ return msg
1241+ return None
1242+
1243+ def get_process_id_list(self, sentry_unit, process_name):
1244+ """Get a list of process ID(s) from a single sentry juju unit
1245+ for a single process name.
1246+
1247+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
1248+ :param process_name: Process name
1249+ :returns: List of process IDs
1250+ """
1251+ cmd = 'pidof {}'.format(process_name)
1252+ output, code = sentry_unit.run(cmd)
1253+ if code != 0:
1254+ msg = ('{} `{}` returned {} '
1255+ '{}'.format(sentry_unit.info['unit_name'],
1256+ cmd, code, output))
1257+ raise RuntimeError(msg)
1258+ return str(output).split()
1259+
1260+ def get_unit_process_ids(self, unit_processes):
1261+ """Construct a dict containing unit sentries, process names, and
1262+ process IDs."""
1263+ pid_dict = {}
1264+ for sentry_unit, process_list in unit_processes.iteritems():
1265+ pid_dict[sentry_unit] = {}
1266+ for process in process_list:
1267+ pids = self.get_process_id_list(sentry_unit, process)
1268+ pid_dict[sentry_unit].update({process: pids})
1269+ return pid_dict
1270+
1271+ def validate_unit_process_ids(self, expected, actual):
1272+ """Validate process id quantities for services on units."""
1273+ self.log.debug('Checking units for running processes...')
1274+ self.log.debug('Expected PIDs: {}'.format(expected))
1275+ self.log.debug('Actual PIDs: {}'.format(actual))
1276+
1277+ if len(actual) != len(expected):
1278+ msg = ('Unit count mismatch. expected, actual: {}, '
1279+ '{} '.format(len(expected), len(actual)))
1280+ return msg
1281+
1282+ for (e_sentry, e_proc_names) in expected.iteritems():
1283+ e_sentry_name = e_sentry.info['unit_name']
1284+ if e_sentry in actual.keys():
1285+ a_proc_names = actual[e_sentry]
1286+ else:
1287+ msg = ('Expected sentry ({}) not found in actual dict data.'
1288+ '{}'.format(e_sentry_name, e_sentry))
1289+ return msg
1290+
1291+ if len(e_proc_names.keys()) != len(a_proc_names.keys()):
1292+ msg = ('Process name count mismatch. expected, actual: {}, '
1293+ '{}'.format(len(e_proc_names), len(a_proc_names)))
1294+ return msg
1295+
1296+ for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \
1297+ zip(e_proc_names.items(), a_proc_names.items()):
1298+ if e_proc_name != a_proc_name:
1299+ msg = ('Process name mismatch. expected, actual: {}, '
1300+ '{}'.format(e_proc_name, a_proc_name))
1301+ return msg
1302+
1303+ a_pids_length = len(a_pids)
1304+ if e_pids_length != a_pids_length:
1305+ msg = ('PID count mismatch. {} ({}) expected, actual: {}, '
1306+ '{} ({})'.format(e_sentry_name,
1307+ e_proc_name,
1308+ e_pids_length,
1309+ a_pids_length,
1310+ a_pids))
1311+ return msg
1312+ else:
1313+ msg = ('PID check OK: {} {} {}: '
1314+ '{}'.format(e_sentry_name,
1315+ e_proc_name,
1316+ e_pids_length,
1317+ a_pids))
1318+ self.log.debug(msg)
1319+ return None
1320+
1321+ def validate_list_of_identical_dicts(self, list_of_dicts):
1322+ """Check that all dicts within a list are identical."""
1323+ hashes = []
1324+ for _dict in list_of_dicts:
1325+ hashes.append(hash(frozenset(_dict.items())))
1326+
1327+ self.log.debug('Hashes: {}'.format(hashes))
1328+ if len(set(hashes)) == 1:
1329+ msg = 'Dicts within list are identical'
1330+ self.log.debug(msg)
1331+ else:
1332+ msg = 'Dicts within list are not identical'
1333+ return msg
1334+
1335+ return None
1336
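The new `validate_list_of_identical_dicts` helper compares dicts by hashing a frozenset of each dict's items, so key order is irrelevant and identical dicts collapse to a single hash. A standalone sketch of that idea (the function name is illustrative; values must be hashable, and equal hashes can in principle collide):

```python
def dicts_identical(list_of_dicts):
    # hash(frozenset(d.items())) is insensitive to key order, so two
    # dicts with the same key/value pairs yield the same hash.
    hashes = [hash(frozenset(d.items())) for d in list_of_dicts]
    return len(set(hashes)) == 1


same = [{'pid': '1234', 'name': 'glance-api'},
        {'name': 'glance-api', 'pid': '1234'}]
different = [{'pid': '1234'}, {'pid': '5678'}]
```

This is how the helper can assert, for example, that every unit in a service reports the same process layout.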
1337=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
1338--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-19 15:08:48 +0000
1339+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-02 12:52:15 +0000
1340@@ -79,9 +79,9 @@
1341 services.append(this_service)
1342 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
1343 'ceph-osd', 'ceph-radosgw']
1344- # Openstack subordinate charms do not expose an origin option as that
1345- # is controlled by the principle
1346- ignore = ['neutron-openvswitch']
1347+ # Most OpenStack subordinate charms do not expose an origin option
1348+ # as that is controlled by the principal.
1349+ ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']
1350
1351 if self.openstack:
1352 for svc in services:
1353@@ -148,3 +148,35 @@
1354 return os_origin.split('%s-' % self.series)[1].split('/')[0]
1355 else:
1356 return releases[self.series]
1357+
1358+ def get_ceph_expected_pools(self, radosgw=False):
1359+ """Return a list of expected ceph pools based on Ubuntu-OpenStack
1360+ release and whether ceph radosgw is flagged as present or not."""
1361+
1362+ if self._get_openstack_release() >= self.trusty_kilo:
1363+ # Kilo or later
1364+ pools = [
1365+ 'rbd',
1366+ 'cinder',
1367+ 'glance'
1368+ ]
1369+ else:
1370+ # Juno or earlier
1371+ pools = [
1372+ 'data',
1373+ 'metadata',
1374+ 'rbd',
1375+ 'cinder',
1376+ 'glance'
1377+ ]
1378+
1379+ if radosgw:
1380+ pools.extend([
1381+ '.rgw.root',
1382+ '.rgw.control',
1383+ '.rgw',
1384+ '.rgw.gc',
1385+ '.users.uid'
1386+ ])
1387+
1388+ return pools
1389
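`get_ceph_expected_pools` pairs with the `get_ceph_pools` helper synced in the next file, which parses `sudo ceph osd lspools` output of the form `0 data,1 metadata,2 rbd,` into a name-to-id dict. A standalone sketch of that parse plus the expected-pool comparison (`parse_lspools` is an illustrative name, not charm-helpers API):

```python
def parse_lspools(output):
    # Example `ceph osd lspools` output: "0 data,1 metadata,2 rbd,"
    pools = {}
    for pool in str(output).split(','):
        parts = pool.strip().split(' ')
        if len(parts) == 2:
            pool_id, pool_name = parts
            pools[pool_name] = int(pool_id)
    return pools


sample = '0 data,1 metadata,2 rbd,3 cinder,4 glance,'
actual = parse_lspools(sample)
expected = ['data', 'metadata', 'rbd', 'cinder', 'glance']  # Juno-era list
missing = [p for p in expected if p not in actual]
```

A test would fail if `missing` is non-empty; on Kilo and later the expected list drops the `data` and `metadata` pools, as the method above encodes.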
1390=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
1391--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-06-19 15:08:48 +0000
1392+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-02 12:52:15 +0000
1393@@ -14,16 +14,19 @@
1394 # You should have received a copy of the GNU Lesser General Public License
1395 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1396
1397+import json
1398 import logging
1399 import os
1400 import six
1401 import time
1402 import urllib
1403
1404+import cinderclient.v1.client as cinder_client
1405 import glanceclient.v1.client as glance_client
1406 import heatclient.v1.client as heat_client
1407 import keystoneclient.v2_0 as keystone_client
1408 import novaclient.v1_1.client as nova_client
1409+import swiftclient
1410
1411 from charmhelpers.contrib.amulet.utils import (
1412 AmuletUtils
1413@@ -171,6 +174,15 @@
1414 self.log.debug('Checking if tenant exists ({})...'.format(tenant))
1415 return tenant in [t.name for t in keystone.tenants.list()]
1416
1417+ def authenticate_cinder_admin(self, keystone_sentry, username,
1418+ password, tenant):
1419+ """Authenticates admin user with cinder."""
1420+ service_ip = \
1421+ keystone_sentry.relation('shared-db',
1422+ 'mysql:shared-db')['private-address']
1423+ ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8'))
1424+ return cinder_client.Client(username, password, tenant, ept)
1425+
1426 def authenticate_keystone_admin(self, keystone_sentry, user, password,
1427 tenant):
1428 """Authenticates admin user with the keystone admin endpoint."""
1429@@ -212,9 +224,29 @@
1430 return nova_client.Client(username=user, api_key=password,
1431 project_id=tenant, auth_url=ep)
1432
1433+ def authenticate_swift_user(self, keystone, user, password, tenant):
1434+ """Authenticates a regular user with swift api."""
1435+ self.log.debug('Authenticating swift user ({})...'.format(user))
1436+ ep = keystone.service_catalog.url_for(service_type='identity',
1437+ endpoint_type='publicURL')
1438+ return swiftclient.Connection(authurl=ep,
1439+ user=user,
1440+ key=password,
1441+ tenant_name=tenant,
1442+ auth_version='2.0')
1443+
1444 def create_cirros_image(self, glance, image_name):
1445- """Download the latest cirros image and upload it to glance."""
1446- self.log.debug('Creating glance image ({})...'.format(image_name))
1447+ """Download the latest cirros image and upload it to glance,
1448+ validate and return a resource pointer.
1449+
1450+ :param glance: pointer to authenticated glance connection
1451+ :param image_name: display name for new image
1452+ :returns: glance image pointer
1453+ """
1454+ self.log.debug('Creating glance cirros image '
1455+ '({})...'.format(image_name))
1456+
1457+ # Download cirros image
1458 http_proxy = os.getenv('AMULET_HTTP_PROXY')
1459 self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
1460 if http_proxy:
1461@@ -223,33 +255,51 @@
1462 else:
1463 opener = urllib.FancyURLopener()
1464
1465- f = opener.open("http://download.cirros-cloud.net/version/released")
1466+ f = opener.open('http://download.cirros-cloud.net/version/released')
1467 version = f.read().strip()
1468- cirros_img = "cirros-{}-x86_64-disk.img".format(version)
1469+ cirros_img = 'cirros-{}-x86_64-disk.img'.format(version)
1470 local_path = os.path.join('tests', cirros_img)
1471
1472 if not os.path.exists(local_path):
1473- cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
1474+ cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net',
1475 version, cirros_img)
1476 opener.retrieve(cirros_url, local_path)
1477 f.close()
1478
1479+ # Create glance image
1480 with open(local_path) as f:
1481 image = glance.images.create(name=image_name, is_public=True,
1482 disk_format='qcow2',
1483 container_format='bare', data=f)
1484- count = 1
1485- status = image.status
1486- while status != 'active' and count < 10:
1487- time.sleep(3)
1488- image = glance.images.get(image.id)
1489- status = image.status
1490- self.log.debug('image status: {}'.format(status))
1491- count += 1
1492-
1493- if status != 'active':
1494- self.log.error('image creation timed out')
1495- return None
1496+
1497+ # Wait for image to reach active status
1498+ img_id = image.id
1499+ ret = self.resource_reaches_status(glance.images, img_id,
1500+ expected_stat='active',
1501+ msg='Image status wait')
1502+ if not ret:
1503+ msg = 'Glance image failed to reach expected state.'
1504+ raise RuntimeError(msg)
1505+
1506+ # Re-validate new image
1507+ self.log.debug('Validating image attributes...')
1508+ val_img_name = glance.images.get(img_id).name
1509+ val_img_stat = glance.images.get(img_id).status
1510+ val_img_pub = glance.images.get(img_id).is_public
1511+ val_img_cfmt = glance.images.get(img_id).container_format
1512+ val_img_dfmt = glance.images.get(img_id).disk_format
1513+ msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} '
1514+ 'container fmt:{} disk fmt:{}'.format(
1515+ val_img_name, val_img_pub, img_id,
1516+ val_img_stat, val_img_cfmt, val_img_dfmt))
1517+
1518+ if val_img_name == image_name and val_img_stat == 'active' \
1519+ and val_img_pub is True and val_img_cfmt == 'bare' \
1520+ and val_img_dfmt == 'qcow2':
1521+ self.log.debug(msg_attr)
1522+ else:
1523+ msg = ('Image validation failed, {}'.format(msg_attr))
1524+ raise RuntimeError(msg)
1525
1526 return image
1527
1528@@ -260,22 +310,7 @@
1529 self.log.warn('/!\\ DEPRECATION WARNING: use '
1530 'delete_resource instead of delete_image.')
1531 self.log.debug('Deleting glance image ({})...'.format(image))
1532- num_before = len(list(glance.images.list()))
1533- glance.images.delete(image)
1534-
1535- count = 1
1536- num_after = len(list(glance.images.list()))
1537- while num_after != (num_before - 1) and count < 10:
1538- time.sleep(3)
1539- num_after = len(list(glance.images.list()))
1540- self.log.debug('number of images: {}'.format(num_after))
1541- count += 1
1542-
1543- if num_after != (num_before - 1):
1544- self.log.error('image deletion timed out')
1545- return False
1546-
1547- return True
1548+ return self.delete_resource(glance.images, image, msg='glance image')
1549
1550 def create_instance(self, nova, image_name, instance_name, flavor):
1551 """Create the specified instance."""
1552@@ -308,22 +343,8 @@
1553 self.log.warn('/!\\ DEPRECATION WARNING: use '
1554 'delete_resource instead of delete_instance.')
1555 self.log.debug('Deleting instance ({})...'.format(instance))
1556- num_before = len(list(nova.servers.list()))
1557- nova.servers.delete(instance)
1558-
1559- count = 1
1560- num_after = len(list(nova.servers.list()))
1561- while num_after != (num_before - 1) and count < 10:
1562- time.sleep(3)
1563- num_after = len(list(nova.servers.list()))
1564- self.log.debug('number of instances: {}'.format(num_after))
1565- count += 1
1566-
1567- if num_after != (num_before - 1):
1568- self.log.error('instance deletion timed out')
1569- return False
1570-
1571- return True
1572+ return self.delete_resource(nova.servers, instance,
1573+ msg='nova instance')
1574
1575 def create_or_get_keypair(self, nova, keypair_name="testkey"):
1576 """Create a new keypair, or return pointer if it already exists."""
1577@@ -339,6 +360,84 @@
1578 _keypair = nova.keypairs.create(name=keypair_name)
1579 return _keypair
1580
1581+ def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
1582+ img_id=None, src_vol_id=None, snap_id=None):
1583+ """Create cinder volume, optionally from a glance image, or
1584+ optionally as a clone of an existing volume, or optionally
1585+ from a snapshot. Wait for the new volume status to reach
1586+ the expected status, validate and return a resource pointer.
1587+
1588+ :param vol_name: cinder volume display name
1589+ :param vol_size: size in gigabytes
1590+ :param img_id: optional glance image id
1591+ :param src_vol_id: optional source volume id to clone
1592+ :param snap_id: optional snapshot id to use
1593+ :returns: cinder volume pointer
1594+ """
1595+ # Handle parameter input
1596+ if img_id and not src_vol_id and not snap_id:
1597+ self.log.debug('Creating cinder volume from glance image '
1598+ '({})...'.format(img_id))
1599+ bootable = 'true'
1600+ elif src_vol_id and not img_id and not snap_id:
1601+ self.log.debug('Cloning cinder volume...')
1602+ bootable = cinder.volumes.get(src_vol_id).bootable
1603+ elif snap_id and not src_vol_id and not img_id:
1604+ self.log.debug('Creating cinder volume from snapshot...')
1605+ snap = cinder.volume_snapshots.find(id=snap_id)
1606+ vol_size = snap.size
1607+ snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
1608+ bootable = cinder.volumes.get(snap_vol_id).bootable
1609+ elif not img_id and not src_vol_id and not snap_id:
1610+ self.log.debug('Creating cinder volume...')
1611+ bootable = 'false'
1612+ else:
1613+ msg = ('Invalid method use - name:{} size:{} img_id:{} '
1614+ 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
1615+ img_id, src_vol_id,
1616+ snap_id))
1617+ raise RuntimeError(msg)
1618+
1619+ # Create new volume
1620+ try:
1621+ vol_new = cinder.volumes.create(display_name=vol_name,
1622+ imageRef=img_id,
1623+ size=vol_size,
1624+ source_volid=src_vol_id,
1625+ snapshot_id=snap_id)
1626+ vol_id = vol_new.id
1627+ except Exception as e:
1628+ msg = 'Failed to create volume: {}'.format(e)
1629+ raise RuntimeError(msg)
1630+
1631+ # Wait for volume to reach available status
1632+ ret = self.resource_reaches_status(cinder.volumes, vol_id,
1633+ expected_stat="available",
1634+ msg="Volume status wait")
1635+ if not ret:
1636+ msg = 'Cinder volume failed to reach expected state.'
1637+ raise RuntimeError(msg)
1638+
1639+ # Re-validate new volume
1640+ self.log.debug('Validating volume attributes...')
1641+ val_vol_name = cinder.volumes.get(vol_id).display_name
1642+ val_vol_boot = cinder.volumes.get(vol_id).bootable
1643+ val_vol_stat = cinder.volumes.get(vol_id).status
1644+ val_vol_size = cinder.volumes.get(vol_id).size
1645+ msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
1646+ '{} size:{}'.format(val_vol_name, vol_id,
1647+ val_vol_stat, val_vol_boot,
1648+ val_vol_size))
1649+
1650+ if val_vol_boot == bootable and val_vol_stat == 'available' \
1651+ and val_vol_name == vol_name and val_vol_size == vol_size:
1652+ self.log.debug(msg_attr)
1653+ else:
1654+ msg = ('Volume validation failed, {}'.format(msg_attr))
1655+ raise RuntimeError(msg)
1656+
1657+ return vol_new
1658+
1659 def delete_resource(self, resource, resource_id,
1660 msg="resource", max_wait=120):
1661 """Delete one openstack resource, such as one instance, keypair,
1662@@ -350,6 +449,8 @@
1663 :param max_wait: maximum wait time in seconds
1664 :returns: True if successful, otherwise False
1665 """
1666+ self.log.debug('Deleting OpenStack resource '
1667+ '{} ({})'.format(resource_id, msg))
1668 num_before = len(list(resource.list()))
1669 resource.delete(resource_id)
1670
1671@@ -411,3 +512,90 @@
1672 self.log.debug('{} never reached expected status: '
1673 '{}'.format(resource_id, expected_stat))
1674 return False
1675+
1676+ def get_ceph_osd_id_cmd(self, index):
1677+ """Produce a shell command that will return a ceph-osd id."""
1678+ cmd = ("`initctl list | grep 'ceph-osd ' | awk 'NR=={} {{ print $2 }}'"
1679+ " | grep -o '[0-9]*'`".format(index + 1))
1680+ return cmd
1681+
1682+ def get_ceph_pools(self, sentry_unit):
1683+ """Return a dict of ceph pools from a single ceph unit, with
1684+ pool name as keys, pool id as vals."""
1685+ pools = {}
1686+ cmd = 'sudo ceph osd lspools'
1687+ output, code = sentry_unit.run(cmd)
1688+ if code != 0:
1689+ msg = ('{} `{}` returned {} '
1690+ '{}'.format(sentry_unit.info['unit_name'],
1691+ cmd, code, output))
1692+ raise RuntimeError(msg)
1693+
1694+ # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
1695+ for pool in str(output).split(','):
1696+ pool_id_name = pool.split(' ')
1697+ if len(pool_id_name) == 2:
1698+ pool_id = pool_id_name[0]
1699+ pool_name = pool_id_name[1]
1700+ pools[pool_name] = int(pool_id)
1701+
1702+ self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
1703+ pools))
1704+ return pools
1705+
1706+ def get_ceph_df(self, sentry_unit):
1707+ """Return dict of ceph df json output, including ceph pool state.
1708+
1709+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
1710+ :returns: Dict of ceph df output
1711+ """
1712+ cmd = 'sudo ceph df --format=json'
1713+ output, code = sentry_unit.run(cmd)
1714+ if code != 0:
1715+ msg = ('{} `{}` returned {} '
1716+ '{}'.format(sentry_unit.info['unit_name'],
1717+ cmd, code, output))
1718+ raise RuntimeError(msg)
1719+ return json.loads(output)
1720+
1721+ def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
1722+ """Take a sample of attributes of a ceph pool, returning ceph
1723+ pool name, object count and disk space used for the specified
1724+ pool ID number.
1725+
1726+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
1727+ :param pool_id: Ceph pool ID
1728+ :returns: List of pool name, object count, kb disk space used
1729+ """
1730+ df = self.get_ceph_df(sentry_unit)
1731+ pool_name = df['pools'][pool_id]['name']
1732+ obj_count = df['pools'][pool_id]['stats']['objects']
1733+ kb_used = df['pools'][pool_id]['stats']['kb_used']
1734+ self.log.debug('Ceph {} pool (ID {}): {} objects, '
1735+ '{} kb used'.format(pool_name,
1736+ pool_id,
1737+ obj_count,
1738+ kb_used))
1739+ return pool_name, obj_count, kb_used
1740+
1741+ def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
1742+ """Validate ceph pool samples taken over time, such as pool
1743+ object counts or pool kb used, before adding, after adding, and
1744+ after deleting items which affect those pool attributes. The
1745+ 2nd element is expected to be greater than the 1st; 3rd is expected
1746+ to be less than the 2nd.
1747+
1748+ :param samples: List containing 3 data samples
1749+ :param sample_type: String for logging and usage context
1750+ :returns: None if successful, Failure message otherwise
1751+ """
1752+ original, created, deleted = range(3)
1753+ if samples[created] <= samples[original] or \
1754+ samples[deleted] >= samples[created]:
1755+ msg = ('Ceph {} samples ({}) '
1756+ 'unexpected.'.format(sample_type, samples))
1757+ return msg
1758+ else:
1759+ self.log.debug('Ceph {} samples (OK): '
1760+ '{}'.format(sample_type, samples))
1761+ return None
1762
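
The `get_ceph_pools` output parsing and the `validate_ceph_pool_samples` three-sample check synced in above can be exercised outside of a Juju deployment. A minimal standalone sketch of that logic (the function names here are hypothetical mirrors, not part of the charm; the `lspools` output format and the original/created/deleted sample ordering are taken from the diff):

```python
# Hypothetical standalone mirrors of the synced amulet helpers, for illustration.

def parse_lspools(output):
    """Parse `ceph osd lspools` output such as
    "0 data,1 metadata,2 rbd,3 cinder,4 glance," into a dict of
    pool name -> pool id, as get_ceph_pools does."""
    pools = {}
    for pool in str(output).split(','):
        pool_id_name = pool.split(' ')
        if len(pool_id_name) == 2:
            pool_id, pool_name = pool_id_name
            pools[pool_name] = int(pool_id)
    return pools


def validate_samples(samples):
    """Return None if the sample rose after create and fell after delete,
    otherwise an error message (same contract as validate_ceph_pool_samples)."""
    original, created, deleted = range(3)
    if samples[created] <= samples[original] or \
            samples[deleted] >= samples[created]:
        return 'Ceph samples ({}) unexpected.'.format(samples)
    return None
```

The test itself samples a pool's object count before creating a glance image, after creating it, and after deleting it, then feeds those three values to the validator.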
1763=== added file 'tests/tests.yaml'
1764--- tests/tests.yaml 1970-01-01 00:00:00 +0000
1765+++ tests/tests.yaml 2015-07-02 12:52:15 +0000
1766@@ -0,0 +1,18 @@
1767+bootstrap: true
1768+reset: true
1769+virtualenv: true
1770+makefile:
1771+ - lint
1772+ - test
1773+sources:
1774+ - ppa:juju/stable
1775+packages:
1776+ - amulet
1777+ - python-amulet
1778+ - python-cinderclient
1779+ - python-distro-info
1780+ - python-glanceclient
1781+ - python-heatclient
1782+ - python-keystoneclient
1783+ - python-novaclient
1784+ - python-swiftclient
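
The `get_ceph_df` / `get_ceph_pool_sample` helpers added in this sync parse `ceph df --format=json`. A sketch with canned output (the JSON shape and field names are assumed from the helper code above; `pool_sample` is a hypothetical mirror of `get_ceph_pool_sample`):

```python
import json

# Canned `ceph df --format=json` output; shape assumed from the helper code.
CEPH_DF_JSON = json.dumps({
    'pools': [
        {'name': 'rbd', 'stats': {'objects': 0, 'kb_used': 0}},
        {'name': 'glance', 'stats': {'objects': 5, 'kb_used': 2048}},
    ]
})


def pool_sample(ceph_df_output, pool_id=0):
    """Return (pool name, object count, kb used) for one pool,
    mirroring get_ceph_pool_sample."""
    df = json.loads(ceph_df_output)
    pool = df['pools'][pool_id]
    return pool['name'], pool['stats']['objects'], pool['stats']['kb_used']
```

In the real helper the JSON comes from `sentry_unit.run('sudo ceph df --format=json')` on a ceph unit rather than a canned string.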