Merge lp:~1chb1n/charms/trusty/keystone/next-amulet-update into lp:~openstack-charmers-archive/charms/trusty/keystone/next

Proposed by Ryan Beisner
Status: Merged
Merged at revision: 157
Proposed branch: lp:~1chb1n/charms/trusty/keystone/next-amulet-update
Merge into: lp:~openstack-charmers-archive/charms/trusty/keystone/next
Diff against target: 1269 lines (+703/-204)
11 files modified
Makefile (+7/-8)
metadata.yaml (+4/-1)
tests/00-setup (+6/-2)
tests/020-basic-trusty-liberty (+11/-0)
tests/021-basic-wily-liberty (+9/-0)
tests/README (+9/-0)
tests/basic_deployment.py (+235/-138)
tests/charmhelpers/contrib/amulet/utils.py (+128/-3)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+36/-3)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+240/-49)
tests/tests.yaml (+18/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/keystone/next-amulet-update
Reviewer: Corey Bryant (review requested, status: Pending)
Review via email: mp+263460@code.launchpad.net

Description of the change

Update amulet tests for vivid and prep for wily; sync tests/charmhelpers; update the Makefile.
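
The Makefile portion of this change renames the test targets so that bundletester finds the unit tests under "test" and the amulet run moves to "functional_test". A minimal sketch of the resulting invocations (target names and behaviour are taken from the diff below; anything beyond that is assumed):

    make lint             # flake8 on actions, hooks, unit_tests and tests (excluding the charmhelpers syncs), plus charm proof
    make test             # unit tests via nosetests; this is the target bundletester expects
    make functional_test  # amulet tests via 'juju test', passing AMULET_HTTP_PROXY and AMULET_OS_VIP through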

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #5715 keystone-next for 1chb1n mp263460
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5715/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #5347 keystone-next for 1chb1n mp263460
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5347/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4907 keystone-next for 1chb1n mp263460
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4907/

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2015-04-16 21:32:02 +0000
3+++ Makefile 2015-07-01 20:45:21 +0000
4@@ -2,18 +2,17 @@
5 PYTHON := /usr/bin/env python
6
7 lint:
8- @flake8 --exclude hooks/charmhelpers actions hooks unit_tests tests
9+ @flake8 --exclude hooks/charmhelpers,tests/charmhelpers \
10+ actions hooks unit_tests tests
11 @charm proof
12
13-unit_test:
14+test:
15+ @# Bundletester expects unit tests here.
16 @echo Starting unit tests...
17- @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
18+ @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
19
20-test:
21+functional_test:
22 @echo Starting Amulet tests...
23- # coreycb note: The -v should only be temporary until Amulet sends
24- # raise_status() messages to stderr:
25- # https://bugs.launchpad.net/amulet/+bug/1320357
26 @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700
27
28 bin/charm_helpers_sync.py:
29@@ -25,6 +24,6 @@
30 @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
31 @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
32
33-publish: lint unit_test
34+publish: lint test
35 bzr push lp:charms/keystone
36 bzr push lp:charms/trusty/keystone
37
38=== modified file 'metadata.yaml'
39--- metadata.yaml 2015-01-09 15:54:17 +0000
40+++ metadata.yaml 2015-07-01 20:45:21 +0000
41@@ -5,7 +5,10 @@
42 Keystone is an OpenStack project that provides Identity, Token, Catalog and
43 Policy services for use specifically by projects in the OpenStack family. It
44 implements OpenStack’s Identity API.
45-categories: ["misc"]
46+tags:
47+ - openstack
48+ - identity
49+ - misc
50 provides:
51 nrpe-external-master:
52 interface: nrpe-external-master
53
54=== modified file 'tests/00-setup'
55--- tests/00-setup 2014-09-27 21:32:11 +0000
56+++ tests/00-setup 2015-07-01 20:45:21 +0000
57@@ -5,6 +5,10 @@
58 sudo add-apt-repository --yes ppa:juju/stable
59 sudo apt-get update --yes
60 sudo apt-get install --yes python-amulet \
61+ python-cinderclient \
62+ python-distro-info \
63+ python-glanceclient \
64+ python-heatclient \
65 python-keystoneclient \
66- python-glanceclient \
67- python-novaclient
68+ python-novaclient \
69+ python-swiftclient
70
71=== modified file 'tests/017-basic-trusty-kilo' (properties changed: -x to +x)
72=== modified file 'tests/019-basic-vivid-kilo' (properties changed: -x to +x)
73=== added file 'tests/020-basic-trusty-liberty'
74--- tests/020-basic-trusty-liberty 1970-01-01 00:00:00 +0000
75+++ tests/020-basic-trusty-liberty 2015-07-01 20:45:21 +0000
76@@ -0,0 +1,11 @@
77+#!/usr/bin/python
78+
79+"""Amulet tests on a basic keystone deployment on trusty-liberty."""
80+
81+from basic_deployment import KeystoneBasicDeployment
82+
83+if __name__ == '__main__':
84+ deployment = KeystoneBasicDeployment(series='trusty',
85+ openstack='cloud:trusty-liberty',
86+ source='cloud:trusty-updates/liberty')
87+ deployment.run_tests()
88
89=== added file 'tests/021-basic-wily-liberty'
90--- tests/021-basic-wily-liberty 1970-01-01 00:00:00 +0000
91+++ tests/021-basic-wily-liberty 2015-07-01 20:45:21 +0000
92@@ -0,0 +1,9 @@
93+#!/usr/bin/python
94+
95+"""Amulet tests on a basic keystone deployment on wily-liberty."""
96+
97+from basic_deployment import KeystoneBasicDeployment
98+
99+if __name__ == '__main__':
100+ deployment = KeystoneBasicDeployment(series='wily')
101+ deployment.run_tests()
102
103=== modified file 'tests/README'
104--- tests/README 2014-09-27 21:32:11 +0000
105+++ tests/README 2015-07-01 20:45:21 +0000
106@@ -1,6 +1,15 @@
107 This directory provides Amulet tests that focus on verification of Keystone
108 deployments.
109
110+test_* methods are called in lexical sort order.
111+
112+Test name convention to ensure desired test order:
113+ 1xx service and endpoint checks
114+ 2xx relation checks
115+ 3xx config checks
116+ 4xx functional checks
117+ 9xx restarts and other final checks
118+
119 In order to run tests, you'll need charm-tools installed (in addition to
120 juju, of course):
121 sudo add-apt-repository ppa:juju/stable
122
123=== modified file 'tests/basic_deployment.py'
124--- tests/basic_deployment.py 2015-06-10 13:59:24 +0000
125+++ tests/basic_deployment.py 2015-07-01 20:45:21 +0000
126@@ -1,7 +1,12 @@
127 #!/usr/bin/python
128
129+"""
130+Basic keystone amulet functional tests.
131+"""
132+
133 import amulet
134 import os
135+import time
136 import yaml
137
138 from charmhelpers.contrib.openstack.amulet.deployment import (
139@@ -10,8 +15,8 @@
140
141 from charmhelpers.contrib.openstack.amulet.utils import (
142 OpenStackAmuletUtils,
143- DEBUG, # flake8: noqa
144- ERROR
145+ DEBUG,
146+ # ERROR
147 )
148
149 # Use DEBUG to turn on debug logging
150@@ -21,9 +26,11 @@
151 class KeystoneBasicDeployment(OpenStackAmuletDeployment):
152 """Amulet tests on a basic keystone deployment."""
153
154- def __init__(self, series=None, openstack=None, source=None, git=False, stable=False):
155+ def __init__(self, series=None, openstack=None,
156+ source=None, git=False, stable=False):
157 """Deploy the entire test environment."""
158- super(KeystoneBasicDeployment, self).__init__(series, openstack, source, stable)
159+ super(KeystoneBasicDeployment, self).__init__(series, openstack,
160+ source, stable)
161 self.git = git
162 self._add_services()
163 self._add_relations()
164@@ -39,7 +46,8 @@
165 compatible with the local charm (e.g. stable or next).
166 """
167 this_service = {'name': 'keystone'}
168- other_services = [{'name': 'mysql'}, {'name': 'cinder'}]
169+ other_services = [{'name': 'mysql'},
170+ {'name': 'cinder'}]
171 super(KeystoneBasicDeployment, self)._add_services(this_service,
172 other_services)
173
174@@ -69,13 +77,16 @@
175 'http_proxy': amulet_http_proxy,
176 'https_proxy': amulet_http_proxy,
177 }
178- keystone_config['openstack-origin-git'] = yaml.dump(openstack_origin_git)
179+ keystone_config['openstack-origin-git'] = \
180+ yaml.dump(openstack_origin_git)
181
182 mysql_config = {'dataset-size': '50%'}
183 cinder_config = {'block-device': 'None'}
184- configs = {'keystone': keystone_config,
185- 'mysql': mysql_config,
186- 'cinder': cinder_config}
187+ configs = {
188+ 'keystone': keystone_config,
189+ 'mysql': mysql_config,
190+ 'cinder': cinder_config
191+ }
192 super(KeystoneBasicDeployment, self)._configure_services(configs)
193
194 def _initialize_tests(self):
195@@ -84,6 +95,13 @@
196 self.mysql_sentry = self.d.sentry.unit['mysql/0']
197 self.keystone_sentry = self.d.sentry.unit['keystone/0']
198 self.cinder_sentry = self.d.sentry.unit['cinder/0']
199+ u.log.debug('openstack release val: {}'.format(
200+ self._get_openstack_release()))
201+ u.log.debug('openstack release str: {}'.format(
202+ self._get_openstack_release_string()))
203+
204+ # Let things settle a bit before moving forward
205+ time.sleep(30)
206
207 # Authenticate keystone admin
208 self.keystone = u.authenticate_keystone_admin(self.keystone_sentry,
209@@ -100,139 +118,156 @@
210 description='demo tenant',
211 enabled=True)
212 self.keystone.roles.create(name=self.demo_role)
213- self.keystone.users.create(name=self.demo_user, password='password',
214+ self.keystone.users.create(name=self.demo_user,
215+ password='password',
216 tenant_id=tenant.id,
217 email='demo@demo.com')
218
219 # Authenticate keystone demo
220- self.keystone_demo = u.authenticate_keystone_user(self.keystone,
221- user=self.demo_user,
222- password='password',
223- tenant=self.demo_tenant)
224+ self.keystone_demo = u.authenticate_keystone_user(
225+ self.keystone, user=self.demo_user,
226+ password='password', tenant=self.demo_tenant)
227
228- def test_services(self):
229+ def test_100_services(self):
230 """Verify the expected services are running on the corresponding
231 service units."""
232- commands = {
233- self.mysql_sentry: ['status mysql'],
234- self.keystone_sentry: ['status keystone'],
235- self.cinder_sentry: ['status cinder-api', 'status cinder-scheduler',
236- 'status cinder-volume']
237+ services = {
238+ self.mysql_sentry: ['mysql'],
239+ self.keystone_sentry: ['keystone'],
240+ self.cinder_sentry: ['cinder-api',
241+ 'cinder-scheduler',
242+ 'cinder-volume']
243 }
244- ret = u.validate_services(commands)
245+ ret = u.validate_services_by_name(services)
246 if ret:
247 amulet.raise_status(amulet.FAIL, msg=ret)
248
249- def test_tenants(self):
250+ def test_102_keystone_tenants(self):
251 """Verify all existing tenants."""
252- tenant1 = {'enabled': True,
253- 'description': 'Created by Juju',
254- 'name': 'services',
255- 'id': u.not_null}
256- tenant2 = {'enabled': True,
257- 'description': 'demo tenant',
258- 'name': 'demoTenant',
259- 'id': u.not_null}
260- tenant3 = {'enabled': True,
261- 'description': 'Created by Juju',
262- 'name': 'admin',
263- 'id': u.not_null}
264- expected = [tenant1, tenant2, tenant3]
265+ u.log.debug('Checking keystone tenants...')
266+ expected = [
267+ {'name': 'services',
268+ 'enabled': True,
269+ 'description': 'Created by Juju',
270+ 'id': u.not_null},
271+ {'name': 'demoTenant',
272+ 'enabled': True,
273+ 'description': 'demo tenant',
274+ 'id': u.not_null},
275+ {'name': 'admin',
276+ 'enabled': True,
277+ 'description': 'Created by Juju',
278+ 'id': u.not_null}
279+ ]
280 actual = self.keystone.tenants.list()
281
282 ret = u.validate_tenant_data(expected, actual)
283 if ret:
284 amulet.raise_status(amulet.FAIL, msg=ret)
285
286- def test_roles(self):
287+ def test_104_keystone_roles(self):
288 """Verify all existing roles."""
289- role1 = {'name': 'demoRole', 'id': u.not_null}
290- role2 = {'name': 'Admin', 'id': u.not_null}
291- expected = [role1, role2]
292+ u.log.debug('Checking keystone roles...')
293+ expected = [
294+ {'name': 'demoRole',
295+ 'id': u.not_null},
296+ {'name': 'Admin',
297+ 'id': u.not_null}
298+ ]
299 actual = self.keystone.roles.list()
300
301 ret = u.validate_role_data(expected, actual)
302 if ret:
303 amulet.raise_status(amulet.FAIL, msg=ret)
304
305- def test_users(self):
306+ def test_106_keystone_users(self):
307 """Verify all existing roles."""
308- user1 = {'name': 'demoUser',
309- 'enabled': True,
310- 'tenantId': u.not_null,
311- 'id': u.not_null,
312- 'email': 'demo@demo.com'}
313- user2 = {'name': 'admin',
314- 'enabled': True,
315- 'tenantId': u.not_null,
316- 'id': u.not_null,
317- 'email': 'juju@localhost'}
318- user3 = {'name': 'cinder_cinderv2',
319- 'enabled': True,
320- 'tenantId': u.not_null,
321- 'id': u.not_null,
322- 'email': u'juju@localhost'}
323- expected = [user1, user2, user3]
324+ u.log.debug('Checking keystone users...')
325+ expected = [
326+ {'name': 'demoUser',
327+ 'enabled': True,
328+ 'tenantId': u.not_null,
329+ 'id': u.not_null,
330+ 'email': 'demo@demo.com'},
331+ {'name': 'admin',
332+ 'enabled': True,
333+ 'tenantId': u.not_null,
334+ 'id': u.not_null,
335+ 'email': 'juju@localhost'},
336+ {'name': 'cinder_cinderv2',
337+ 'enabled': True,
338+ 'tenantId': u.not_null,
339+ 'id': u.not_null,
340+ 'email': u'juju@localhost'}
341+ ]
342 actual = self.keystone.users.list()
343 ret = u.validate_user_data(expected, actual)
344 if ret:
345 amulet.raise_status(amulet.FAIL, msg=ret)
346
347- def test_service_catalog(self):
348+ def test_108_service_catalog(self):
349 """Verify that the service catalog endpoint data is valid."""
350- endpoint_vol = {'adminURL': u.valid_url,
351- 'region': 'RegionOne',
352- 'publicURL': u.valid_url,
353- 'internalURL': u.valid_url}
354- endpoint_id = {'adminURL': u.valid_url,
355- 'region': 'RegionOne',
356- 'publicURL': u.valid_url,
357- 'internalURL': u.valid_url}
358- if self._get_openstack_release() > self.precise_essex:
359- endpoint_vol['id'] = u.not_null
360- endpoint_id['id'] = u.not_null
361- expected = {'volume': [endpoint_vol], 'identity': [endpoint_id]}
362- actual = self.keystone_demo.service_catalog.get_endpoints()
363+ u.log.debug('Checking keystone service catalog...')
364+ endpoint_check = {
365+ 'adminURL': u.valid_url,
366+ 'id': u.not_null,
367+ 'region': 'RegionOne',
368+ 'publicURL': u.valid_url,
369+ 'internalURL': u.valid_url
370+ }
371+ expected = {
372+ 'volume': [endpoint_check],
373+ 'identity': [endpoint_check]
374+ }
375+ actual = self.keystone.service_catalog.get_endpoints()
376
377 ret = u.validate_svc_catalog_endpoint_data(expected, actual)
378 if ret:
379 amulet.raise_status(amulet.FAIL, msg=ret)
380
381- def test_keystone_endpoint(self):
382+ def test_110_keystone_endpoint(self):
383 """Verify the keystone endpoint data."""
384+ u.log.debug('Checking keystone api endpoint data...')
385 endpoints = self.keystone.endpoints.list()
386 admin_port = '35357'
387 internal_port = public_port = '5000'
388- expected = {'id': u.not_null,
389- 'region': 'RegionOne',
390- 'adminurl': u.valid_url,
391- 'internalurl': u.valid_url,
392- 'publicurl': u.valid_url,
393- 'service_id': u.not_null}
394+ expected = {
395+ 'id': u.not_null,
396+ 'region': 'RegionOne',
397+ 'adminurl': u.valid_url,
398+ 'internalurl': u.valid_url,
399+ 'publicurl': u.valid_url,
400+ 'service_id': u.not_null
401+ }
402 ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
403 public_port, expected)
404 if ret:
405 amulet.raise_status(amulet.FAIL,
406 msg='keystone endpoint: {}'.format(ret))
407
408- def test_cinder_endpoint(self):
409+ def test_112_cinder_endpoint(self):
410 """Verify the cinder endpoint data."""
411+ u.log.debug('Checking cinder endpoint...')
412 endpoints = self.keystone.endpoints.list()
413 admin_port = internal_port = public_port = '8776'
414- expected = {'id': u.not_null,
415- 'region': 'RegionOne',
416- 'adminurl': u.valid_url,
417- 'internalurl': u.valid_url,
418- 'publicurl': u.valid_url,
419- 'service_id': u.not_null}
420+ expected = {
421+ 'id': u.not_null,
422+ 'region': 'RegionOne',
423+ 'adminurl': u.valid_url,
424+ 'internalurl': u.valid_url,
425+ 'publicurl': u.valid_url,
426+ 'service_id': u.not_null
427+ }
428+
429 ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
430 public_port, expected)
431 if ret:
432 amulet.raise_status(amulet.FAIL,
433 msg='cinder endpoint: {}'.format(ret))
434
435- def test_keystone_shared_db_relation(self):
436+ def test_200_keystone_mysql_shared_db_relation(self):
437 """Verify the keystone shared-db relation data"""
438+ u.log.debug('Checking keystone to mysql db relation data...')
439 unit = self.keystone_sentry
440 relation = ['shared-db', 'mysql:shared-db']
441 expected = {
442@@ -246,8 +281,9 @@
443 message = u.relation_error('keystone shared-db', ret)
444 amulet.raise_status(amulet.FAIL, msg=message)
445
446- def test_mysql_shared_db_relation(self):
447+ def test_201_mysql_keystone_shared_db_relation(self):
448 """Verify the mysql shared-db relation data"""
449+ u.log.debug('Checking mysql to keystone db relation data...')
450 unit = self.mysql_sentry
451 relation = ['shared-db', 'keystone:shared-db']
452 expected_data = {
453@@ -260,8 +296,9 @@
454 message = u.relation_error('mysql shared-db', ret)
455 amulet.raise_status(amulet.FAIL, msg=message)
456
457- def test_keystone_identity_service_relation(self):
458+ def test_202_keystone_cinder_identity_service_relation(self):
459 """Verify the keystone identity-service relation data"""
460+ u.log.debug('Checking keystone to cinder id relation data...')
461 unit = self.keystone_sentry
462 relation = ['identity-service', 'cinder:identity-service']
463 expected = {
464@@ -283,8 +320,9 @@
465 message = u.relation_error('keystone identity-service', ret)
466 amulet.raise_status(amulet.FAIL, msg=message)
467
468- def test_cinder_identity_service_relation(self):
469+ def test_203_cinder_keystone_identity_service_relation(self):
470 """Verify the cinder identity-service relation data"""
471+ u.log.debug('Checking cinder to keystone id relation data...')
472 unit = self.cinder_sentry
473 relation = ['identity-service', 'keystone:identity-service']
474 expected = {
475@@ -305,55 +343,114 @@
476 message = u.relation_error('cinder identity-service', ret)
477 amulet.raise_status(amulet.FAIL, msg=message)
478
479- def test_z_restart_on_config_change(self):
480- """Verify that keystone is restarted when the config is changed.
481-
482- Note(coreycb): The method name with the _z_ is a little odd
483- but it forces the test to run last. It just makes things
484- easier because restarting services requires re-authorization.
485- """
486- self.d.configure('keystone', {'verbose': 'True'})
487- if not u.service_restarted(self.keystone_sentry, 'keystone-all',
488- '/etc/keystone/keystone.conf',
489- sleep_time=30):
490- self.d.configure('keystone', {'verbose': 'False'})
491- message = "keystone service didn't restart after config change"
492- amulet.raise_status(amulet.FAIL, msg=message)
493- self.d.configure('keystone', {'verbose': 'False'})
494-
495- def test_default_config(self):
496- """Verify the data in the keystone config file's default section,
497+ def test_300_keystone_default_config(self):
498+ """Verify the data in the keystone config file,
499 comparing some of the variables vs relation data."""
500- unit = self.keystone_sentry
501- conf = '/etc/keystone/keystone.conf'
502- relation = unit.relation('identity-service', 'cinder:identity-service')
503- expected = {'admin_token': relation['admin_token'],
504- 'admin_port': '35347',
505- 'public_port': '4990',
506- 'use_syslog': 'False',
507- 'log_config': '/etc/keystone/logging.conf',
508- 'debug': 'False',
509- 'verbose': 'False'}
510-
511- ret = u.validate_config_data(unit, conf, 'DEFAULT', expected)
512- if ret:
513- message = "keystone config error: {}".format(ret)
514- amulet.raise_status(amulet.FAIL, msg=message)
515-
516- def test_database_config(self):
517- """Verify the data in the keystone config file's database (or sql
518- depending on release) section, comparing vs relation data."""
519- unit = self.keystone_sentry
520- conf = '/etc/keystone/keystone.conf'
521- relation = self.mysql_sentry.relation('shared-db', 'keystone:shared-db')
522- db_uri = "mysql://{}:{}@{}/{}".format('keystone', relation['password'],
523- relation['db_host'], 'keystone')
524- expected = {'connection': db_uri, 'idle_timeout': '200'}
525-
526- if self._get_openstack_release() > self.precise_havana:
527- ret = u.validate_config_data(unit, conf, 'database', expected)
528+ u.log.debug('Checking keystone config file...')
529+ unit = self.keystone_sentry
530+ conf = '/etc/keystone/keystone.conf'
531+ ks_ci_rel = unit.relation('identity-service',
532+ 'cinder:identity-service')
533+ my_ks_rel = self.mysql_sentry.relation('shared-db',
534+ 'keystone:shared-db')
535+ db_uri = "mysql://{}:{}@{}/{}".format('keystone',
536+ my_ks_rel['password'],
537+ my_ks_rel['db_host'],
538+ 'keystone')
539+ expected = {
540+ 'DEFAULT': {
541+ 'debug': 'False',
542+ 'verbose': 'False',
543+ 'admin_token': ks_ci_rel['admin_token'],
544+ 'use_syslog': 'False',
545+ 'log_config': '/etc/keystone/logging.conf',
546+ 'public_endpoint': u.valid_url, # get specific
547+ 'admin_endpoint': u.valid_url, # get specific
548+ },
549+ 'extra_headers': {
550+ 'Distribution': 'Ubuntu'
551+ },
552+ 'database': {
553+ 'connection': db_uri,
554+ 'idle_timeout': '200'
555+ }
556+ }
557+
558+ if self._get_openstack_release() >= self.trusty_kilo:
559+ # Kilo and later
560+ expected['eventlet_server'] = {
561+ 'admin_bind_host': '0.0.0.0',
562+ 'public_bind_host': '0.0.0.0',
563+ 'admin_port': '35347',
564+ 'public_port': '4990',
565+ }
566 else:
567- ret = u.validate_config_data(unit, conf, 'sql', expected)
568- if ret:
569- message = "keystone config error: {}".format(ret)
570- amulet.raise_status(amulet.FAIL, msg=message)
571+ # Juno and earlier
572+ expected['DEFAULT'].update({
573+ 'admin_port': '35347',
574+ 'public_port': '4990',
575+ 'bind_host': '0.0.0.0',
576+ })
577+
578+ for section, pairs in expected.iteritems():
579+ ret = u.validate_config_data(unit, conf, section, pairs)
580+ if ret:
581+ message = "keystone config error: {}".format(ret)
582+ amulet.raise_status(amulet.FAIL, msg=message)
583+
584+ def test_302_keystone_logging_config(self):
585+ """Verify the data in the keystone logging config file"""
586+ u.log.debug('Checking keystone config file...')
587+ unit = self.keystone_sentry
588+ conf = '/etc/keystone/logging.conf'
589+ expected = {
590+ 'logger_root': {
591+ 'level': 'WARNING',
592+ 'handlers': 'file',
593+ },
594+ 'handlers': {
595+ 'keys': 'production,file,devel'
596+ },
597+ 'handler_file': {
598+ 'level': 'DEBUG',
599+ 'args': "('/var/log/keystone/keystone.log', 'a')"
600+ }
601+ }
602+
603+ for section, pairs in expected.iteritems():
604+ ret = u.validate_config_data(unit, conf, section, pairs)
605+ if ret:
606+ message = "keystone logging config error: {}".format(ret)
607+ amulet.raise_status(amulet.FAIL, msg=message)
608+
609+ def test_900_keystone_restart_on_config_change(self):
610+ """Verify that the specified services are restarted when the config
611+ is changed."""
612+ sentry = self.keystone_sentry
613+ juju_service = 'keystone'
614+
615+ # Expected default and alternate values
616+ set_default = {'use-syslog': 'False'}
617+ set_alternate = {'use-syslog': 'True'}
618+
619+ # Config file affected by juju set config change
620+ conf_file = '/etc/keystone/keystone.conf'
621+
622+ # Services which are expected to restart upon config change
623+ services = ['keystone-all']
624+
625+ # Make config change, check for service restarts
626+ u.log.debug('Making config change on {}...'.format(juju_service))
627+ self.d.configure(juju_service, set_alternate)
628+
629+ sleep_time = 30
630+ for s in services:
631+ u.log.debug("Checking that service restarted: {}".format(s))
632+ if not u.service_restarted(sentry, s,
633+ conf_file, sleep_time=sleep_time):
634+ self.d.configure(juju_service, set_default)
635+ msg = "service {} didn't restart after config change".format(s)
636+ amulet.raise_status(amulet.FAIL, msg=msg)
637+ sleep_time = 0
638+
639+ self.d.configure(juju_service, set_default)
640
641=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
642--- tests/charmhelpers/contrib/amulet/utils.py 2015-06-19 14:56:49 +0000
643+++ tests/charmhelpers/contrib/amulet/utils.py 2015-07-01 20:45:21 +0000
644@@ -14,6 +14,7 @@
645 # You should have received a copy of the GNU Lesser General Public License
646 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
647
648+import amulet
649 import ConfigParser
650 import distro_info
651 import io
652@@ -173,6 +174,11 @@
653
654 Verify that the specified section of the config file contains
655 the expected option key:value pairs.
656+
657+ Compare expected dictionary data vs actual dictionary data.
658+ The values in the 'expected' dictionary can be strings, bools, ints,
659+ longs, or can be a function that evaluates a variable and returns a
660+ bool.
661 """
662 self.log.debug('Validating config file data ({} in {} on {})'
663 '...'.format(section, config_file,
664@@ -185,9 +191,20 @@
665 for k in expected.keys():
666 if not config.has_option(section, k):
667 return "section [{}] is missing option {}".format(section, k)
668- if config.get(section, k) != expected[k]:
669+
670+ actual = config.get(section, k)
671+ v = expected[k]
672+ if (isinstance(v, six.string_types) or
673+ isinstance(v, bool) or
674+ isinstance(v, six.integer_types)):
675+ # handle explicit values
676+ if actual != v:
677+ return "section [{}] {}:{} != expected {}:{}".format(
678+ section, k, actual, k, expected[k])
679+ # handle function pointers, such as not_null or valid_ip
680+ elif not v(actual):
681 return "section [{}] {}:{} != expected {}:{}".format(
682- section, k, config.get(section, k), k, expected[k])
683+ section, k, actual, k, expected[k])
684 return None
685
686 def _validate_dict_data(self, expected, actual):
687@@ -195,7 +212,7 @@
688
689 Compare expected dictionary data vs actual dictionary data.
690 The values in the 'expected' dictionary can be strings, bools, ints,
691- longs, or can be a function that evaluate a variable and returns a
692+ longs, or can be a function that evaluates a variable and returns a
693 bool.
694 """
695 self.log.debug('actual: {}'.format(repr(actual)))
696@@ -206,8 +223,10 @@
697 if (isinstance(v, six.string_types) or
698 isinstance(v, bool) or
699 isinstance(v, six.integer_types)):
700+ # handle explicit values
701 if v != actual[k]:
702 return "{}:{}".format(k, actual[k])
703+ # handle function pointers, such as not_null or valid_ip
704 elif not v(actual[k]):
705 return "{}:{}".format(k, actual[k])
706 else:
707@@ -406,3 +425,109 @@
708 """Convert a relative file path to a file URL."""
709 _abs_path = os.path.abspath(file_rel_path)
710 return urlparse.urlparse(_abs_path, scheme='file').geturl()
711+
712+ def check_commands_on_units(self, commands, sentry_units):
713+ """Check that all commands in a list exit zero on all
714+ sentry units in a list.
715+
716+ :param commands: list of bash commands
717+ :param sentry_units: list of sentry unit pointers
718+ :returns: None if successful; Failure message otherwise
719+ """
720+ self.log.debug('Checking exit codes for {} commands on {} '
721+ 'sentry units...'.format(len(commands),
722+ len(sentry_units)))
723+ for sentry_unit in sentry_units:
724+ for cmd in commands:
725+ output, code = sentry_unit.run(cmd)
726+ if code == 0:
727+ self.log.debug('{} `{}` returned {} '
728+ '(OK)'.format(sentry_unit.info['unit_name'],
729+ cmd, code))
730+ else:
731+ return ('{} `{}` returned {} '
732+ '{}'.format(sentry_unit.info['unit_name'],
733+ cmd, code, output))
734+ return None
735+
736+ def get_process_id_list(self, sentry_unit, process_name):
737+ """Get a list of process ID(s) from a single sentry juju unit
738+ for a single process name.
739+
740+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
741+ :param process_name: Process name
742+ :returns: List of process IDs
743+ """
744+ cmd = 'pidof {}'.format(process_name)
745+ output, code = sentry_unit.run(cmd)
746+ if code != 0:
747+ msg = ('{} `{}` returned {} '
748+ '{}'.format(sentry_unit.info['unit_name'],
749+ cmd, code, output))
750+ amulet.raise_status(amulet.FAIL, msg=msg)
751+ return str(output).split()
752+
753+ def get_unit_process_ids(self, unit_processes):
754+ """Construct a dict containing unit sentries, process names, and
755+ process IDs."""
756+ pid_dict = {}
757+ for sentry_unit, process_list in unit_processes.iteritems():
758+ pid_dict[sentry_unit] = {}
759+ for process in process_list:
760+ pids = self.get_process_id_list(sentry_unit, process)
761+ pid_dict[sentry_unit].update({process: pids})
762+ return pid_dict
763+
764+ def validate_unit_process_ids(self, expected, actual):
765+ """Validate process id quantities for services on units."""
766+ self.log.debug('Checking units for running processes...')
767+ self.log.debug('Expected PIDs: {}'.format(expected))
768+ self.log.debug('Actual PIDs: {}'.format(actual))
769+
770+ if len(actual) != len(expected):
771+ return ('Unit count mismatch. expected, actual: {}, '
772+ '{} '.format(len(expected), len(actual)))
773+
774+ for (e_sentry, e_proc_names) in expected.iteritems():
775+ e_sentry_name = e_sentry.info['unit_name']
776+ if e_sentry in actual.keys():
777+ a_proc_names = actual[e_sentry]
778+ else:
779+ return ('Expected sentry ({}) not found in actual dict data.'
780+ '{}'.format(e_sentry_name, e_sentry))
781+
782+ if len(e_proc_names.keys()) != len(a_proc_names.keys()):
783+ return ('Process name count mismatch. expected, actual: {}, '
784+ '{}'.format(len(expected), len(actual)))
785+
786+ for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \
787+ zip(e_proc_names.items(), a_proc_names.items()):
788+ if e_proc_name != a_proc_name:
789+ return ('Process name mismatch. expected, actual: {}, '
790+ '{}'.format(e_proc_name, a_proc_name))
791+
792+ a_pids_length = len(a_pids)
793+ if e_pids_length != a_pids_length:
794+ return ('PID count mismatch. {} ({}) expected, actual: '
795+ '{}, {} ({})'.format(e_sentry_name, e_proc_name,
796+ e_pids_length, a_pids_length,
797+ a_pids))
798+ else:
799+ self.log.debug('PID check OK: {} {} {}: '
800+ '{}'.format(e_sentry_name, e_proc_name,
801+ e_pids_length, a_pids))
802+ return None
803+
804+ def validate_list_of_identical_dicts(self, list_of_dicts):
805+ """Check that all dicts within a list are identical."""
806+ hashes = []
807+ for _dict in list_of_dicts:
808+ hashes.append(hash(frozenset(_dict.items())))
809+
810+ self.log.debug('Hashes: {}'.format(hashes))
811+ if len(set(hashes)) == 1:
812+ self.log.debug('Dicts within list are identical')
813+ else:
814+ return 'Dicts within list are not identical'
815+
816+ return None
817
818=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
819--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-19 14:56:49 +0000
820+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-01 20:45:21 +0000
821@@ -79,9 +79,9 @@
822 services.append(this_service)
823 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
824 'ceph-osd', 'ceph-radosgw']
825- # Openstack subordinate charms do not expose an origin option as that
826- # is controlled by the principle
827- ignore = ['neutron-openvswitch']
828+ # Most OpenStack subordinate charms do not expose an origin option
829+ # as that is controlled by the principle.
830+ ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']
831
832 if self.openstack:
833 for svc in services:
834@@ -148,3 +148,36 @@
835 return os_origin.split('%s-' % self.series)[1].split('/')[0]
836 else:
837 return releases[self.series]
838+
839+ def get_ceph_expected_pools(self, radosgw=False):
840+ """Return a list of expected ceph pools in a ceph + cinder + glance
841+ test scenario, based on OpenStack release and whether ceph radosgw
842+ is flagged as present or not."""
843+
844+ if self._get_openstack_release() >= self.trusty_kilo:
845+ # Kilo or later
846+ pools = [
847+ 'rbd',
848+ 'cinder',
849+ 'glance'
850+ ]
851+ else:
852+ # Juno or earlier
853+ pools = [
854+ 'data',
855+ 'metadata',
856+ 'rbd',
857+ 'cinder',
858+ 'glance'
859+ ]
860+
861+ if radosgw:
862+ pools.extend([
863+ '.rgw.root',
864+ '.rgw.control',
865+ '.rgw',
866+ '.rgw.gc',
867+ '.users.uid'
868+ ])
869+
870+ return pools
871
872=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
873--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-06-19 14:56:49 +0000
874+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-01 20:45:21 +0000
875@@ -14,16 +14,20 @@
876 # You should have received a copy of the GNU Lesser General Public License
877 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
878
879+import amulet
880+import json
881 import logging
882 import os
883 import six
884 import time
885 import urllib
886
887+import cinderclient.v1.client as cinder_client
888 import glanceclient.v1.client as glance_client
889 import heatclient.v1.client as heat_client
890 import keystoneclient.v2_0 as keystone_client
891 import novaclient.v1_1.client as nova_client
892+import swiftclient
893
894 from charmhelpers.contrib.amulet.utils import (
895 AmuletUtils
896@@ -171,6 +175,16 @@
897 self.log.debug('Checking if tenant exists ({})...'.format(tenant))
898 return tenant in [t.name for t in keystone.tenants.list()]
899
900+ def authenticate_cinder_admin(self, keystone_sentry, username,
901+ password, tenant):
902+ """Authenticates admin user with cinder."""
903+ # NOTE(beisner): cinder python client doesn't accept tokens.
904+ service_ip = \
905+ keystone_sentry.relation('shared-db',
906+ 'mysql:shared-db')['private-address']
907+ ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8'))
908+ return cinder_client.Client(username, password, tenant, ept)
909+
910 def authenticate_keystone_admin(self, keystone_sentry, user, password,
911 tenant):
912 """Authenticates admin user with the keystone admin endpoint."""
913@@ -212,9 +226,29 @@
914 return nova_client.Client(username=user, api_key=password,
915 project_id=tenant, auth_url=ep)
916
917+ def authenticate_swift_user(self, keystone, user, password, tenant):
918+ """Authenticates a regular user with swift api."""
919+ self.log.debug('Authenticating swift user ({})...'.format(user))
920+ ep = keystone.service_catalog.url_for(service_type='identity',
921+ endpoint_type='publicURL')
922+ return swiftclient.Connection(authurl=ep,
923+ user=user,
924+ key=password,
925+ tenant_name=tenant,
926+ auth_version='2.0')
927+
928 def create_cirros_image(self, glance, image_name):
929- """Download the latest cirros image and upload it to glance."""
930- self.log.debug('Creating glance image ({})...'.format(image_name))
931+ """Download the latest cirros image and upload it to glance,
932+ validate and return a resource pointer.
933+
934+ :param glance: pointer to authenticated glance connection
935+ :param image_name: display name for new image
936+ :returns: glance image pointer
937+ """
938+ self.log.debug('Creating glance cirros image '
939+ '({})...'.format(image_name))
940+
941+ # Download cirros image
942 http_proxy = os.getenv('AMULET_HTTP_PROXY')
943 self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
944 if http_proxy:
945@@ -223,33 +257,51 @@
946 else:
947 opener = urllib.FancyURLopener()
948
949- f = opener.open("http://download.cirros-cloud.net/version/released")
950+ f = opener.open('http://download.cirros-cloud.net/version/released')
951 version = f.read().strip()
952- cirros_img = "cirros-{}-x86_64-disk.img".format(version)
953+ cirros_img = 'cirros-{}-x86_64-disk.img'.format(version)
954 local_path = os.path.join('tests', cirros_img)
955
956 if not os.path.exists(local_path):
957- cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
958+ cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net',
959 version, cirros_img)
960 opener.retrieve(cirros_url, local_path)
961 f.close()
962
963+ # Create glance image
964 with open(local_path) as f:
965 image = glance.images.create(name=image_name, is_public=True,
966 disk_format='qcow2',
967 container_format='bare', data=f)
968- count = 1
969- status = image.status
970- while status != 'active' and count < 10:
971- time.sleep(3)
972- image = glance.images.get(image.id)
973- status = image.status
974- self.log.debug('image status: {}'.format(status))
975- count += 1
976-
977- if status != 'active':
978- self.log.error('image creation timed out')
979- return None
980+
981+ # Wait for image to reach active status
982+ img_id = image.id
983+ ret = self.resource_reaches_status(glance.images, img_id,
984+ expected_stat='active',
985+ msg='Image status wait')
986+ if not ret:
987+ msg = 'Glance image failed to reach expected state.'
988+ amulet.raise_status(amulet.FAIL, msg=msg)
989+
990+ # Re-validate new image
991+ self.log.debug('Validating image attributes...')
992+ val_img_name = glance.images.get(img_id).name
993+ val_img_stat = glance.images.get(img_id).status
994+ val_img_pub = glance.images.get(img_id).is_public
995+ val_img_cfmt = glance.images.get(img_id).container_format
996+ val_img_dfmt = glance.images.get(img_id).disk_format
997+ msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} '
998+ 'container fmt:{} disk fmt:{}'.format(
999+ val_img_name, val_img_pub, img_id,
1000+ val_img_stat, val_img_cfmt, val_img_dfmt))
1001+
1002+ if val_img_name == image_name and val_img_stat == 'active' \
1003+ and val_img_pub is True and val_img_cfmt == 'bare' \
1004+ and val_img_dfmt == 'qcow2':
1005+ self.log.debug(msg_attr)
1006+ else:
1007+ msg = ('Volume validation failed, {}'.format(msg_attr))
1008+ amulet.raise_status(amulet.FAIL, msg=msg)
1009
1010 return image
1011
1012@@ -260,22 +312,7 @@
1013 self.log.warn('/!\\ DEPRECATION WARNING: use '
1014 'delete_resource instead of delete_image.')
1015 self.log.debug('Deleting glance image ({})...'.format(image))
1016- num_before = len(list(glance.images.list()))
1017- glance.images.delete(image)
1018-
1019- count = 1
1020- num_after = len(list(glance.images.list()))
1021- while num_after != (num_before - 1) and count < 10:
1022- time.sleep(3)
1023- num_after = len(list(glance.images.list()))
1024- self.log.debug('number of images: {}'.format(num_after))
1025- count += 1
1026-
1027- if num_after != (num_before - 1):
1028- self.log.error('image deletion timed out')
1029- return False
1030-
1031- return True
1032+ return self.delete_resource(glance.images, image, msg='glance image')
1033
1034 def create_instance(self, nova, image_name, instance_name, flavor):
1035 """Create the specified instance."""
1036@@ -308,22 +345,8 @@
1037 self.log.warn('/!\\ DEPRECATION WARNING: use '
1038 'delete_resource instead of delete_instance.')
1039 self.log.debug('Deleting instance ({})...'.format(instance))
1040- num_before = len(list(nova.servers.list()))
1041- nova.servers.delete(instance)
1042-
1043- count = 1
1044- num_after = len(list(nova.servers.list()))
1045- while num_after != (num_before - 1) and count < 10:
1046- time.sleep(3)
1047- num_after = len(list(nova.servers.list()))
1048- self.log.debug('number of instances: {}'.format(num_after))
1049- count += 1
1050-
1051- if num_after != (num_before - 1):
1052- self.log.error('instance deletion timed out')
1053- return False
1054-
1055- return True
1056+ return self.delete_resource(nova.servers, instance,
1057+ msg='nova instance')
1058
1059 def create_or_get_keypair(self, nova, keypair_name="testkey"):
1060 """Create a new keypair, or return pointer if it already exists."""
1061@@ -339,6 +362,88 @@
1062 _keypair = nova.keypairs.create(name=keypair_name)
1063 return _keypair
1064
1065+ def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
1066+ img_id=None, src_vol_id=None, snap_id=None):
1067+ """Create cinder volume, optionally from a glance image, OR
1068+ optionally as a clone of an existing volume, OR optionally
1069+ from a snapshot. Wait for the new volume status to reach
1070+ the expected status, validate and return a resource pointer.
1071+
1072+ :param vol_name: cinder volume display name
1073+ :param vol_size: size in gigabytes
1074+ :param img_id: optional glance image id
1075+ :param src_vol_id: optional source volume id to clone
1076+ :param snap_id: optional snapshot id to use
1077+ :returns: cinder volume pointer
1078+ """
1079+ # Handle parameter input and avoid impossible combinations
1080+ if img_id and not src_vol_id and not snap_id:
1081+ # Create volume from image
1082+ self.log.debug('Creating cinder volume from glance image...')
1083+ bootable = 'true'
1084+ elif src_vol_id and not img_id and not snap_id:
1085+ # Clone an existing volume
1086+ self.log.debug('Cloning cinder volume...')
1087+ bootable = cinder.volumes.get(src_vol_id).bootable
1088+ elif snap_id and not src_vol_id and not img_id:
1089+ # Create volume from snapshot
1090+ self.log.debug('Creating cinder volume from snapshot...')
1091+ snap = cinder.volume_snapshots.find(id=snap_id)
1092+ vol_size = snap.size
1093+ snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
1094+ bootable = cinder.volumes.get(snap_vol_id).bootable
1095+ elif not img_id and not src_vol_id and not snap_id:
1096+ # Create volume
1097+ self.log.debug('Creating cinder volume...')
1098+ bootable = 'false'
1099+ else:
1100+ # Impossible combination of parameters
1101+ msg = ('Invalid method use - name:{} size:{} img_id:{} '
1102+ 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
1103+ img_id, src_vol_id,
1104+ snap_id))
1105+ amulet.raise_status(amulet.FAIL, msg=msg)
1106+
1107+ # Create new volume
1108+ try:
1109+ vol_new = cinder.volumes.create(display_name=vol_name,
1110+ imageRef=img_id,
1111+ size=vol_size,
1112+ source_volid=src_vol_id,
1113+ snapshot_id=snap_id)
1114+ vol_id = vol_new.id
1115+ except Exception as e:
1116+ msg = 'Failed to create volume: {}'.format(e)
1117+ amulet.raise_status(amulet.FAIL, msg=msg)
1118+
1119+ # Wait for volume to reach available status
1120+ ret = self.resource_reaches_status(cinder.volumes, vol_id,
1121+ expected_stat="available",
1122+ msg="Volume status wait")
1123+ if not ret:
1124+ msg = 'Cinder volume failed to reach expected state.'
1125+ amulet.raise_status(amulet.FAIL, msg=msg)
1126+
1127+ # Re-validate new volume
1128+ self.log.debug('Validating volume attributes...')
1129+ val_vol_name = cinder.volumes.get(vol_id).display_name
1130+ val_vol_boot = cinder.volumes.get(vol_id).bootable
1131+ val_vol_stat = cinder.volumes.get(vol_id).status
1132+ val_vol_size = cinder.volumes.get(vol_id).size
1133+ msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
1134+ '{} size:{}'.format(val_vol_name, vol_id,
1135+ val_vol_stat, val_vol_boot,
1136+ val_vol_size))
1137+
1138+ if val_vol_boot == bootable and val_vol_stat == 'available' \
1139+ and val_vol_name == vol_name and val_vol_size == vol_size:
1140+ self.log.debug(msg_attr)
1141+ else:
1142+ msg = ('Volume validation failed, {}'.format(msg_attr))
1143+ amulet.raise_status(amulet.FAIL, msg=msg)
1144+
1145+ return vol_new
1146+
1147 def delete_resource(self, resource, resource_id,
1148 msg="resource", max_wait=120):
1149 """Delete one openstack resource, such as one instance, keypair,
1150@@ -350,6 +455,8 @@
1151 :param max_wait: maximum wait time in seconds
1152 :returns: True if successful, otherwise False
1153 """
1154+ self.log.debug('Deleting OpenStack resource '
1155+ '{} ({})'.format(resource_id, msg))
1156 num_before = len(list(resource.list()))
1157 resource.delete(resource_id)
1158
1159@@ -411,3 +518,87 @@
1160 self.log.debug('{} never reached expected status: '
1161 '{}'.format(resource_id, expected_stat))
1162 return False
1163+
1164+ def get_ceph_osd_id_cmd(self, index):
1165+ """Produce a shell command that will return a ceph-osd id."""
1166+ return ("`initctl list | grep 'ceph-osd ' | "
1167+ "awk 'NR=={} {{ print $2 }}' | "
1168+ "grep -o '[0-9]*'`".format(index + 1))
1169+
1170+ def get_ceph_pools(self, sentry_unit):
1171+ """Return a dict of ceph pools from a single ceph unit, with
1172+ pool name as keys, pool id as vals."""
1173+ pools = {}
1174+ cmd = 'sudo ceph osd lspools'
1175+ output, code = sentry_unit.run(cmd)
1176+ if code != 0:
1177+ msg = ('{} `{}` returned {} '
1178+ '{}'.format(sentry_unit.info['unit_name'],
1179+ cmd, code, output))
1180+ amulet.raise_status(amulet.FAIL, msg=msg)
1181+
1182+ # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
1183+ for pool in str(output).split(','):
1184+ pool_id_name = pool.split(' ')
1185+ if len(pool_id_name) == 2:
1186+ pool_id = pool_id_name[0]
1187+ pool_name = pool_id_name[1]
1188+ pools[pool_name] = int(pool_id)
1189+
1190+ self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
1191+ pools))
1192+ return pools
1193+
1194+ def get_ceph_df(self, sentry_unit):
1195+ """Return dict of ceph df json output, including ceph pool state.
1196+
1197+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
1198+ :returns: Dict of ceph df output
1199+ """
1200+ cmd = 'sudo ceph df --format=json'
1201+ output, code = sentry_unit.run(cmd)
1202+ if code != 0:
1203+ msg = ('{} `{}` returned {} '
1204+ '{}'.format(sentry_unit.info['unit_name'],
1205+ cmd, code, output))
1206+ amulet.raise_status(amulet.FAIL, msg=msg)
1207+ return json.loads(output)
1208+
1209+ def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
1210+ """Take a sample of attributes of a ceph pool, returning ceph
1211+ pool name, object count and disk space used for the specified
1212+ pool ID number.
1213+
1214+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
1215+ :param pool_id: Ceph pool ID
1216+ :returns: List of pool name, object count, kb disk space used
1217+ """
1218+ df = self.get_ceph_df(sentry_unit)
1219+ pool_name = df['pools'][pool_id]['name']
1220+ obj_count = df['pools'][pool_id]['stats']['objects']
1221+ kb_used = df['pools'][pool_id]['stats']['kb_used']
1222+ self.log.debug('Ceph {} pool (ID {}): {} objects, '
1223+ '{} kb used'.format(pool_name, pool_id,
1224+ obj_count, kb_used))
1225+ return pool_name, obj_count, kb_used
1226+
1227+ def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
1228+ """Validate ceph pool samples taken over time, such as pool
1229+ object counts or pool kb used, before adding, after adding, and
1230+ after deleting items which affect those pool attributes. The
1231+ 2nd element is expected to be greater than the 1st; 3rd is expected
1232+ to be less than the 2nd.
1233+
1234+ :param samples: List containing 3 data samples
1235+ :param sample_type: String for logging and usage context
1236+ :returns: None if successful, Failure message otherwise
1237+ """
1238+ original, created, deleted = range(3)
1239+ if samples[created] <= samples[original] or \
1240+ samples[deleted] >= samples[created]:
1241+ return ('Ceph {} samples ({}) '
1242+ 'unexpected.'.format(sample_type, samples))
1243+ else:
1244+ self.log.debug('Ceph {} samples (OK): '
1245+ '{}'.format(sample_type, samples))
1246+ return None
1247
1248=== added file 'tests/tests.yaml'
1249--- tests/tests.yaml 1970-01-01 00:00:00 +0000
1250+++ tests/tests.yaml 2015-07-01 20:45:21 +0000
1251@@ -0,0 +1,18 @@
1252+bootstrap: true
1253+reset: true
1254+virtualenv: true
1255+makefile:
1256+ - lint
1257+ - test
1258+sources:
1259+ - ppa:juju/stable
1260+packages:
1261+ - amulet
1262+ - python-amulet
1263+ - python-cinderclient
1264+ - python-distro-info
1265+ - python-glanceclient
1266+ - python-heatclient
1267+ - python-keystoneclient
1268+ - python-novaclient
1269+ - python-swiftclient
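
The added test definitions are executable Python scripts, so besides the Makefile targets noted above they can also be run individually. A rough sketch of a manual run, assuming a bootstrapped Juju environment and the client packages installed by tests/00-setup:

    ./tests/00-setup                    # one-time install of amulet and the python-*client dependencies
    ./tests/020-basic-trusty-liberty    # deploy keystone, mysql and cinder on trusty-liberty and run the checks
    ./tests/021-basic-wily-liberty      # same deployment and checks on wily-liberty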
