Merge lp:~1chb1n/charms/trusty/cinder/next-amulet-updates into lp:~openstack-charmers-archive/charms/trusty/cinder/next
Status: Merged
Merged at revision: 100
Proposed branch: lp:~1chb1n/charms/trusty/cinder/next-amulet-updates
Merge into: lp:~openstack-charmers-archive/charms/trusty/cinder/next
Diff against target: 2085 lines (+964/-647), 11 files modified:
  Makefile (+6/-6)
  hooks/charmhelpers/core/hookenv.py (+92/-36)
  hooks/charmhelpers/core/services/base.py (+12/-9)
  metadata.yaml (+4/-2)
  tests/00-setup (+4/-1)
  tests/README (+9/-0)
  tests/basic_deployment.py (+415/-538)
  tests/charmhelpers/contrib/amulet/utils.py (+128/-3)
  tests/charmhelpers/contrib/openstack/amulet/deployment.py (+36/-3)
  tests/charmhelpers/contrib/openstack/amulet/utils.py (+240/-49)
  tests/tests.yaml (+18/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/cinder/next-amulet-updates
Related bugs: none
Reviewer: Corey Bryant (community), status: Approve
Review via email: mp+262772@code.launchpad.net
Commit message
Update amulet tests for vivid, prep for wily. Refactor existing tests. Move pieces to charmhelpers. Sync hooks/charmhelpers. Sync tests/charmhelpers.
Description of the change
Update amulet tests for vivid, prep for wily. Refactor existing tests. Move pieces to charmhelpers. Sync hooks/charmhelpers. Sync tests/charmhelpers.
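For context on the hooks/charmhelpers sync: the updated hookenv (see the hooks/charmhelpers/core/hookenv.py hunk in the preview diff) adds atstart/atexit hook-phase callbacks, and Config now schedules an implicit save via atexit. The following is a minimal sketch, not part of this merge, of how a charm's hook script interacts with those helpers; it uses only the register/execute flow shown in the diff, and the hook body itself is hypothetical.

    # Sketch: hook-phase callbacks in the synced hookenv.
    # atstart() callbacks run before the hook body; atexit() callbacks
    # run in reverse registration order, and only if the hook exits
    # successfully (see _run_atstart/_run_atexit in the diff).
    import sys

    from charmhelpers.core import hookenv

    hooks = hookenv.Hooks()


    def config_changed():
        # Hypothetical hook body. No explicit config().save() is needed:
        # Config registers an atexit self-save in its constructor.
        hookenv.log('handling config-changed')

    hooks.register('config-changed', config_changed)

    hookenv.atstart(hookenv.log, 'entering hook')          # runs first
    hookenv.atexit(hookenv.log, 'hook completed cleanly')  # runs on success

    if __name__ == '__main__':
        hooks.execute(sys.argv)

The services framework change in the diff follows the same pattern: the handler in services/base.py now wraps its body with hookenv._run_atstart() and hookenv._run_atexit() instead of saving config directly.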
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5221 cinder-next for 1chb1n mp262772
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4755 cinder-next for 1chb1n mp262772
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote:
FYI, undercloud issue caused test failure for #4755.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5590 cinder-next for 1chb1n mp262772
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5222 cinder-next for 1chb1n mp262772
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4777 cinder-next for 1chb1n mp262772
AMULET OK: passed
Build: http://
Ryan Beisner (1chb1n) wrote:
Flipped back to WIP while the tests/charmhelpers work is in progress. The other changes here are ready for review and input.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5675 cinder-next for 1chb1n mp262772
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5307 cinder-next for 1chb1n mp262772
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4858 cinder-next for 1chb1n mp262772
AMULET OK: passed
Build: http://
105. By Ryan Beisner: add tests.yaml, update 00-setup
106. By Ryan Beisner: fix publish target in Makefile
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5683 cinder-next for 1chb1n mp262772
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5315 cinder-next for 1chb1n mp262772
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4866 cinder-next for 1chb1n mp262772
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
107. By Ryan Beisner: fix 00-setup
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5691 cinder-next for 1chb1n mp262772
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5323 cinder-next for 1chb1n mp262772
UNIT OK: passed
Corey Bryant (corey.bryant) wrote:
Looks good (one suggestion inline to ponder, not a blocker to merging). I'll approve once the corresponding c-h lands and these amulet tests are successful.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4874 cinder-next for 1chb1n mp262772
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote:
Corey - Thanks for the review. Regarding the local helper methods, I also think they might have a place in c-h, but I wanted to wait until this iteration of all of the os-charm updates is done to get a better feel for where, or whether, they might be reusable elsewhere; i.e., erring on the side of keeping them out of c-h until there is a need.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4881 cinder-next for 1chb1n mp262772
AMULET OK: passed
Build: http://
Ryan Beisner (1chb1n) wrote:
Test rig issue is causing failures in bootstrapping; will re-test when that's resolved.
108. By Ryan Beisner: update tags for consistency with other openstack charms
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5701 cinder-next for 1chb1n mp262772
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5333 cinder-next for 1chb1n mp262772
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4889 cinder-next for 1chb1n mp262772
AMULET OK: passed
Build: http://
Preview Diff
1 | === modified file 'Makefile' |
2 | --- Makefile 2015-04-16 21:32:02 +0000 |
3 | +++ Makefile 2015-07-01 14:48:09 +0000 |
4 | @@ -2,17 +2,17 @@ |
5 | PYTHON := /usr/bin/env python |
6 | |
7 | lint: |
8 | - @flake8 --exclude hooks/charmhelpers actions hooks unit_tests tests |
9 | + @flake8 --exclude hooks/charmhelpers,tests/charmhelpers \ |
10 | + actions hooks unit_tests tests |
11 | @charm proof |
12 | |
13 | -unit_test: |
14 | +test: |
15 | + @# Bundletester expects unit tests here. |
16 | @echo Starting unit tests... |
17 | @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests |
18 | |
19 | -test: |
20 | +functional_test: |
21 | @echo Starting amulet deployment tests... |
22 | - #NOTE(beisner): can remove -v after bug 1320357 is fixed |
23 | - # https://bugs.launchpad.net/amulet/+bug/1320357 |
24 | @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700 |
25 | |
26 | bin/charm_helpers_sync.py: |
27 | @@ -24,6 +24,6 @@ |
28 | @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml |
29 | @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml |
30 | |
31 | -publish: lint unit_test |
32 | +publish: lint test |
33 | bzr push lp:charms/cinder |
34 | bzr push lp:charms/trusty/cinder |
35 | |
36 | === modified file 'hooks/charmhelpers/core/hookenv.py' |
37 | --- hooks/charmhelpers/core/hookenv.py 2015-06-10 21:37:05 +0000 |
38 | +++ hooks/charmhelpers/core/hookenv.py 2015-07-01 14:48:09 +0000 |
39 | @@ -21,7 +21,9 @@ |
40 | # Charm Helpers Developers <juju@lists.ubuntu.com> |
41 | |
42 | from __future__ import print_function |
43 | +from distutils.version import LooseVersion |
44 | from functools import wraps |
45 | +import glob |
46 | import os |
47 | import json |
48 | import yaml |
49 | @@ -242,29 +244,7 @@ |
50 | self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) |
51 | if os.path.exists(self.path): |
52 | self.load_previous() |
53 | - |
54 | - def __getitem__(self, key): |
55 | - """For regular dict lookups, check the current juju config first, |
56 | - then the previous (saved) copy. This ensures that user-saved values |
57 | - will be returned by a dict lookup. |
58 | - |
59 | - """ |
60 | - try: |
61 | - return dict.__getitem__(self, key) |
62 | - except KeyError: |
63 | - return (self._prev_dict or {})[key] |
64 | - |
65 | - def get(self, key, default=None): |
66 | - try: |
67 | - return self[key] |
68 | - except KeyError: |
69 | - return default |
70 | - |
71 | - def keys(self): |
72 | - prev_keys = [] |
73 | - if self._prev_dict is not None: |
74 | - prev_keys = self._prev_dict.keys() |
75 | - return list(set(prev_keys + list(dict.keys(self)))) |
76 | + atexit(self._implicit_save) |
77 | |
78 | def load_previous(self, path=None): |
79 | """Load previous copy of config from disk. |
80 | @@ -283,6 +263,9 @@ |
81 | self.path = path or self.path |
82 | with open(self.path) as f: |
83 | self._prev_dict = json.load(f) |
84 | + for k, v in self._prev_dict.items(): |
85 | + if k not in self: |
86 | + self[k] = v |
87 | |
88 | def changed(self, key): |
89 | """Return True if the current value for this key is different from |
90 | @@ -314,13 +297,13 @@ |
91 | instance. |
92 | |
93 | """ |
94 | - if self._prev_dict: |
95 | - for k, v in six.iteritems(self._prev_dict): |
96 | - if k not in self: |
97 | - self[k] = v |
98 | with open(self.path, 'w') as f: |
99 | json.dump(self, f) |
100 | |
101 | + def _implicit_save(self): |
102 | + if self.implicit_save: |
103 | + self.save() |
104 | + |
105 | |
106 | @cached |
107 | def config(scope=None): |
108 | @@ -587,10 +570,14 @@ |
109 | hooks.execute(sys.argv) |
110 | """ |
111 | |
112 | - def __init__(self, config_save=True): |
113 | + def __init__(self, config_save=None): |
114 | super(Hooks, self).__init__() |
115 | self._hooks = {} |
116 | - self._config_save = config_save |
117 | + |
118 | + # For unknown reasons, we allow the Hooks constructor to override |
119 | + # config().implicit_save. |
120 | + if config_save is not None: |
121 | + config().implicit_save = config_save |
122 | |
123 | def register(self, name, function): |
124 | """Register a hook""" |
125 | @@ -598,13 +585,16 @@ |
126 | |
127 | def execute(self, args): |
128 | """Execute a registered hook based on args[0]""" |
129 | + _run_atstart() |
130 | hook_name = os.path.basename(args[0]) |
131 | if hook_name in self._hooks: |
132 | - self._hooks[hook_name]() |
133 | - if self._config_save: |
134 | - cfg = config() |
135 | - if cfg.implicit_save: |
136 | - cfg.save() |
137 | + try: |
138 | + self._hooks[hook_name]() |
139 | + except SystemExit as x: |
140 | + if x.code is None or x.code == 0: |
141 | + _run_atexit() |
142 | + raise |
143 | + _run_atexit() |
144 | else: |
145 | raise UnregisteredHookError(hook_name) |
146 | |
147 | @@ -732,13 +722,79 @@ |
148 | @translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
149 | def leader_set(settings=None, **kwargs): |
150 | """Juju leader set value(s)""" |
151 | - log("Juju leader-set '%s'" % (settings), level=DEBUG) |
152 | + # Don't log secrets. |
153 | + # log("Juju leader-set '%s'" % (settings), level=DEBUG) |
154 | cmd = ['leader-set'] |
155 | settings = settings or {} |
156 | settings.update(kwargs) |
157 | - for k, v in settings.iteritems(): |
158 | + for k, v in settings.items(): |
159 | if v is None: |
160 | cmd.append('{}='.format(k)) |
161 | else: |
162 | cmd.append('{}={}'.format(k, v)) |
163 | subprocess.check_call(cmd) |
164 | + |
165 | + |
166 | +@cached |
167 | +def juju_version(): |
168 | + """Full version string (eg. '1.23.3.1-trusty-amd64')""" |
169 | + # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1 |
170 | + jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0] |
171 | + return subprocess.check_output([jujud, 'version'], |
172 | + universal_newlines=True).strip() |
173 | + |
174 | + |
175 | +@cached |
176 | +def has_juju_version(minimum_version): |
177 | + """Return True if the Juju version is at least the provided version""" |
178 | + return LooseVersion(juju_version()) >= LooseVersion(minimum_version) |
179 | + |
180 | + |
181 | +_atexit = [] |
182 | +_atstart = [] |
183 | + |
184 | + |
185 | +def atstart(callback, *args, **kwargs): |
186 | + '''Schedule a callback to run before the main hook. |
187 | + |
188 | + Callbacks are run in the order they were added. |
189 | + |
190 | + This is useful for modules and classes to perform initialization |
191 | + and inject behavior. In particular: |
192 | + - Run common code before all of your hooks, such as logging |
193 | + the hook name or interesting relation data. |
194 | + - Defer object or module initialization that requires a hook |
195 | + context until we know there actually is a hook context, |
196 | + making testing easier. |
197 | + - Rather than requiring charm authors to include boilerplate to |
198 | + invoke your helper's behavior, have it run automatically if |
199 | + your object is instantiated or module imported. |
200 | + |
201 | + This is not at all useful after your hook framework as been launched. |
202 | + ''' |
203 | + global _atstart |
204 | + _atstart.append((callback, args, kwargs)) |
205 | + |
206 | + |
207 | +def atexit(callback, *args, **kwargs): |
208 | + '''Schedule a callback to run on successful hook completion. |
209 | + |
210 | + Callbacks are run in the reverse order that they were added.''' |
211 | + _atexit.append((callback, args, kwargs)) |
212 | + |
213 | + |
214 | +def _run_atstart(): |
215 | + '''Hook frameworks must invoke this before running the main hook body.''' |
216 | + global _atstart |
217 | + for callback, args, kwargs in _atstart: |
218 | + callback(*args, **kwargs) |
219 | + del _atstart[:] |
220 | + |
221 | + |
222 | +def _run_atexit(): |
223 | + '''Hook frameworks must invoke this after the main hook body has |
224 | + successfully completed. Do not invoke it if the hook fails.''' |
225 | + global _atexit |
226 | + for callback, args, kwargs in reversed(_atexit): |
227 | + callback(*args, **kwargs) |
228 | + del _atexit[:] |
229 | |
230 | === modified file 'hooks/charmhelpers/core/services/base.py' |
231 | --- hooks/charmhelpers/core/services/base.py 2015-06-10 21:37:05 +0000 |
232 | +++ hooks/charmhelpers/core/services/base.py 2015-07-01 14:48:09 +0000 |
233 | @@ -128,15 +128,18 @@ |
234 | """ |
235 | Handle the current hook by doing The Right Thing with the registered services. |
236 | """ |
237 | - hook_name = hookenv.hook_name() |
238 | - if hook_name == 'stop': |
239 | - self.stop_services() |
240 | - else: |
241 | - self.reconfigure_services() |
242 | - self.provide_data() |
243 | - cfg = hookenv.config() |
244 | - if cfg.implicit_save: |
245 | - cfg.save() |
246 | + hookenv._run_atstart() |
247 | + try: |
248 | + hook_name = hookenv.hook_name() |
249 | + if hook_name == 'stop': |
250 | + self.stop_services() |
251 | + else: |
252 | + self.reconfigure_services() |
253 | + self.provide_data() |
254 | + except SystemExit as x: |
255 | + if x.code is None or x.code == 0: |
256 | + hookenv._run_atexit() |
257 | + hookenv._run_atexit() |
258 | |
259 | def provide_data(self): |
260 | """ |
261 | |
262 | === modified file 'metadata.yaml' |
263 | --- metadata.yaml 2015-01-09 16:02:39 +0000 |
264 | +++ metadata.yaml 2015-07-01 14:48:09 +0000 |
265 | @@ -3,8 +3,10 @@ |
266 | maintainer: Adam Gandelman <adamg@canonical.com> |
267 | description: | |
268 | Cinder is a storage service for the Openstack project |
269 | -categories: |
270 | - - miscellaneous |
271 | +tags: |
272 | + - openstack |
273 | + - storage |
274 | + - misc |
275 | provides: |
276 | nrpe-external-master: |
277 | interface: nrpe-external-master |
278 | |
279 | === modified file 'tests/00-setup' |
280 | --- tests/00-setup 2015-03-08 02:27:57 +0000 |
281 | +++ tests/00-setup 2015-07-01 14:48:09 +0000 |
282 | @@ -6,6 +6,9 @@ |
283 | sudo apt-get update --yes |
284 | sudo apt-get install --yes python-amulet \ |
285 | python-cinderclient \ |
286 | + python-distro-info \ |
287 | python-glanceclient \ |
288 | + python-heatclient \ |
289 | python-keystoneclient \ |
290 | - python-novaclient |
291 | + python-novaclient \ |
292 | + python-swiftclient |
293 | |
294 | === modified file 'tests/017-basic-trusty-kilo' (properties changed: -x to +x) |
295 | === modified file 'tests/019-basic-vivid-kilo' (properties changed: -x to +x) |
296 | === modified file 'tests/053-basic-trusty-kilo-git' (properties changed: -x to +x) |
297 | === modified file 'tests/054-basic-vivid-kilo-git' (properties changed: -x to +x) |
298 | === modified file 'tests/README' |
299 | --- tests/README 2014-10-07 18:32:59 +0000 |
300 | +++ tests/README 2015-07-01 14:48:09 +0000 |
301 | @@ -1,6 +1,15 @@ |
302 | This directory provides Amulet tests that focus on verification of Cinder |
303 | deployments. |
304 | |
305 | +test_* methods are called in lexical sort order. |
306 | + |
307 | +Test name convention to ensure desired test order: |
308 | + 1xx service and endpoint checks |
309 | + 2xx relation checks |
310 | + 3xx config checks |
311 | + 4xx functional checks |
312 | + 9xx restarts and other final checks |
313 | + |
314 | In order to run tests, you'll need charm-tools installed (in addition to |
315 | juju, of course): |
316 | sudo add-apt-repository ppa:juju/stable |
317 | |
318 | === modified file 'tests/basic_deployment.py' |
319 | --- tests/basic_deployment.py 2015-05-12 14:49:27 +0000 |
320 | +++ tests/basic_deployment.py 2015-07-01 14:48:09 +0000 |
321 | @@ -2,19 +2,17 @@ |
322 | |
323 | import amulet |
324 | import os |
325 | -import types |
326 | -from time import sleep |
327 | +import time |
328 | import yaml |
329 | -import cinderclient.v1.client as cinder_client |
330 | |
331 | from charmhelpers.contrib.openstack.amulet.deployment import ( |
332 | OpenStackAmuletDeployment |
333 | ) |
334 | |
335 | -from charmhelpers.contrib.openstack.amulet.utils import ( # noqa |
336 | +from charmhelpers.contrib.openstack.amulet.utils import ( |
337 | OpenStackAmuletUtils, |
338 | DEBUG, |
339 | - ERROR |
340 | + # ERROR |
341 | ) |
342 | |
343 | # Use DEBUG to turn on debug logging |
344 | @@ -22,17 +20,14 @@ |
345 | |
346 | |
347 | class CinderBasicDeployment(OpenStackAmuletDeployment): |
348 | - '''Amulet tests on a basic lvm-backed cinder deployment. Verify |
349 | + """Amulet tests on a basic lvm-backed cinder deployment. Verify |
350 | relations, service status, users and endpoint service catalog. |
351 | Create, clone, delete volumes. Create volume from glance image. |
352 | - Create volume snapshot. Create volume from snapshot.''' |
353 | - |
354 | - # NOTE(beisner): Features and tests vary across Openstack releases. |
355 | - # https://wiki.openstack.org/wiki/CinderSupportMatrix |
356 | + Create volume snapshot. Create volume from snapshot.""" |
357 | |
358 | def __init__(self, series=None, openstack=None, source=None, git=False, |
359 | stable=False): |
360 | - '''Deploy the entire test environment.''' |
361 | + """Deploy the entire test environment.""" |
362 | super(CinderBasicDeployment, self).__init__(series, openstack, source, |
363 | stable) |
364 | self.git = git |
365 | @@ -56,7 +51,7 @@ |
366 | other_services) |
367 | |
368 | def _add_relations(self): |
369 | - '''Add relations for the services.''' |
370 | + """Add relations for the services.""" |
371 | relations = { |
372 | 'keystone:shared-db': 'mysql:shared-db', |
373 | 'cinder:shared-db': 'mysql:shared-db', |
374 | @@ -70,7 +65,7 @@ |
375 | super(CinderBasicDeployment, self)._add_relations(relations) |
376 | |
377 | def _configure_services(self): |
378 | - '''Configure all of the services.''' |
379 | + """Configure all of the services.""" |
380 | cinder_config = {'block-device': 'vdb', |
381 | 'glance-api-version': '2', |
382 | 'overwrite': 'true'} |
383 | @@ -102,137 +97,211 @@ |
384 | super(CinderBasicDeployment, self)._configure_services(configs) |
385 | |
386 | def _initialize_tests(self): |
387 | - '''Perform final initialization before tests get run.''' |
388 | + """Perform final initialization before tests get run.""" |
389 | # Access the sentries for inspecting service units |
390 | self.cinder_sentry = self.d.sentry.unit['cinder/0'] |
391 | self.glance_sentry = self.d.sentry.unit['glance/0'] |
392 | self.mysql_sentry = self.d.sentry.unit['mysql/0'] |
393 | self.keystone_sentry = self.d.sentry.unit['keystone/0'] |
394 | self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0'] |
395 | + u.log.debug('openstack release val: {}'.format( |
396 | + self._get_openstack_release())) |
397 | + u.log.debug('openstack release str: {}'.format( |
398 | + self._get_openstack_release_string())) |
399 | + |
400 | + # Let things settle a bit before moving forward |
401 | + time.sleep(30) |
402 | |
403 | # Authenticate admin with keystone |
404 | self.keystone = u.authenticate_keystone_admin(self.keystone_sentry, |
405 | user='admin', |
406 | password='openstack', |
407 | tenant='admin') |
408 | + |
409 | # Authenticate admin with cinder endpoint |
410 | - self.cinder = self.authenticate_cinder_admin(username='admin', |
411 | - password='openstack', |
412 | - tenant='admin') |
413 | + self.cinder = u.authenticate_cinder_admin(self.keystone_sentry, |
414 | + username='admin', |
415 | + password='openstack', |
416 | + tenant='admin') |
417 | + |
418 | # Authenticate admin with glance endpoint |
419 | self.glance = u.authenticate_glance_admin(self.keystone) |
420 | |
421 | - u.log.debug('openstack rel: {}'.format(self._get_openstack_release())) |
422 | - # Wait for relations to settle |
423 | - sleep(120) |
424 | - |
425 | - def authenticate_cinder_admin(self, username, password, tenant): |
426 | - """Authenticates admin user with cinder.""" |
427 | - # NOTE(beisner): need to move to charmhelpers, and adjust calls here. |
428 | - # Probably useful on other charm tests. |
429 | - service_ip = \ |
430 | - self.keystone_sentry.relation('shared-db', |
431 | - 'mysql:shared-db')['private-address'] |
432 | - ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8')) |
433 | - return cinder_client.Client(username, password, tenant, ept) |
434 | - |
435 | - def force_list(self, obj): |
436 | - '''Determine the object type and return a list. Some Openstack |
437 | - component API list methods return generators, some return lists. |
438 | - Where obj is cinder.volumes, cinder.volume_snapshots, glance.images, |
439 | - or other Openstack object with a list method.''' |
440 | - # NOTE(beisner): need to move to charmhelpers, and adjust calls here. |
441 | - |
442 | - # NOTE(beisner): Beware - glance's list method returns a generator, |
443 | - # and cinder's list method returns a list! |
444 | - if isinstance(obj.list(), types.ListType): |
445 | - return obj.list() |
446 | - elif isinstance(obj.list(), types.GeneratorType): |
447 | - return list(obj.list()) |
448 | - else: |
449 | - u.log.debug('unhandled object type: {}'.format(type(obj.list()))) |
450 | - return False |
451 | - |
452 | - def delete_all_objs(self, obj, item_desc='object', max_wait=60): |
453 | - '''Delete all objects from openstack component, such as all volumes, |
454 | - all images or all snapshots. Waits and confirms deletion.''' |
455 | - # NOTE(beisner): need to move to charmhelpers, and adjust calls here. |
456 | - # Probably useful on other charm tests. |
457 | - |
458 | - # Get list of objects to delete |
459 | - obj_list = self.force_list(obj) |
460 | - if obj_list is False: |
461 | - return '{} list failed'.format(item_desc) |
462 | - |
463 | - if len(obj_list) == 0: |
464 | - u.log.debug('no {}(s) to delete'.format(item_desc)) |
465 | - return None |
466 | - |
467 | - # Delete objects |
468 | - for obj_this in obj_list: |
469 | - u.log.debug('deleting {}: {}'.format(item_desc, obj_this.id)) |
470 | + def _extend_cinder_volume(self, vol_id, new_size=2): |
471 | + """Extend an existing cinder volume size. |
472 | + |
473 | + :param vol_id: existing cinder volume to extend |
474 | + :param new_size: new size in gigabytes |
475 | + :returns: None if successful; Failure message otherwise |
476 | + """ |
477 | + # Extend existing volume size |
478 | + try: |
479 | + self.cinder.volumes.extend(vol_id, new_size) |
480 | + vol_size_org = self.cinder.volumes.get(vol_id).size |
481 | + except Exception as e: |
482 | + msg = 'Failed to extend volume: {}'.format(e) |
483 | + amulet.raise_status(amulet.FAIL, msg=msg) |
484 | + |
485 | + # Confirm that the volume reaches available status. |
486 | + ret = u.resource_reaches_status(self.cinder.volumes, vol_id, |
487 | + expected_stat="available", |
488 | + msg="Volume status wait") |
489 | + if not ret: |
490 | + msg = ('Cinder volume failed to reach expected state ' |
491 | + 'while extending.') |
492 | + return msg |
493 | + |
494 | + # Validate volume size and status |
495 | + u.log.debug('Validating volume attributes...') |
496 | + vol_size_ext = self.cinder.volumes.get(vol_id).size |
497 | + vol_stat = self.cinder.volumes.get(vol_id).status |
498 | + msg_attr = ('Volume attributes - orig size:{} extended size:{} ' |
499 | + 'stat:{}'.format(vol_size_org, vol_size_ext, vol_stat)) |
500 | + |
501 | + if vol_size_ext > vol_size_org and vol_stat == 'available': |
502 | + u.log.debug(msg_attr) |
503 | + else: |
504 | + msg = ('Volume validation failed, {}'.format(msg_attr)) |
505 | + return msg |
506 | + |
507 | + return None |
508 | + |
509 | + def _snapshot_cinder_volume(self, name='demo-snapshot', vol_id=None): |
510 | + """Create a snapshot of an existing cinder volume. |
511 | + |
512 | + :param name: display name to assign to snapshot |
513 | + :param vol_id: existing cinder volume to snapshot |
514 | + :returns: None if successful; Failure message otherwise |
515 | + """ |
516 | + u.log.debug('Creating snapshot of volume ({})...'.format(vol_id)) |
517 | + # Create snapshot of an existing cinder volume |
518 | + try: |
519 | + snap_new = self.cinder.volume_snapshots.create( |
520 | + volume_id=vol_id, display_name=name) |
521 | + snap_id = snap_new.id |
522 | + except Exception as e: |
523 | + msg = 'Failed to snapshot the volume: {}'.format(e) |
524 | + amulet.raise_status(amulet.FAIL, msg=msg) |
525 | + |
526 | + # Confirm that the volume reaches available status. |
527 | + ret = u.resource_reaches_status(self.cinder.volume_snapshots, |
528 | + snap_id, |
529 | + expected_stat="available", |
530 | + msg="Volume status wait") |
531 | + if not ret: |
532 | + msg = ('Cinder volume failed to reach expected state ' |
533 | + 'while snapshotting.') |
534 | + return msg |
535 | + |
536 | + # Validate snapshot |
537 | + u.log.debug('Validating snapshot attributes...') |
538 | + snap_name = self.cinder.volume_snapshots.get(snap_id).display_name |
539 | + snap_stat = self.cinder.volume_snapshots.get(snap_id).status |
540 | + snap_vol_id = self.cinder.volume_snapshots.get(snap_id).volume_id |
541 | + msg_attr = ('Snapshot attributes - name:{} status:{} ' |
542 | + 'vol_id:{}'.format(snap_name, snap_stat, snap_vol_id)) |
543 | + |
544 | + if snap_name == name and snap_stat == 'available' \ |
545 | + and snap_vol_id == vol_id: |
546 | + u.log.debug(msg_attr) |
547 | + else: |
548 | + msg = ('Snapshot validation failed, {}'.format(msg_attr)) |
549 | + amulet.raise_status(amulet.FAIL, msg=msg) |
550 | + |
551 | + return snap_new |
552 | + |
553 | + def _check_cinder_lvm(self): |
554 | + """Inspect lvm on cinder unit, do basic validation against |
555 | + cinder volumes and snapshots that exist.""" |
556 | + u.log.debug('Checking cinder volumes against lvm volumes...') |
557 | + # Inspect |
558 | + cmd = 'sudo lvs | grep cinder-volumes | awk \'{ print $1 }\'' |
559 | + output, code = self.cinder_sentry.run(cmd) |
560 | + u.log.debug('{} `{}` returned ' |
561 | + '{}'.format(self.cinder_sentry.info['unit_name'], |
562 | + cmd, code)) |
563 | + if code != 0: |
564 | + return "command `{}` returned {}".format(cmd, str(code)) |
565 | + |
566 | + vol_list = self.cinder.volumes.list() |
567 | + lv_id_list = output.split('\n') |
568 | + lv_count = len(lv_id_list) |
569 | + vol_count = len(vol_list) |
570 | + snap_count = len(self.cinder.volume_snapshots.list()) |
571 | + |
572 | + # Expect cinder vol + snap count to match lvm log vol count |
573 | + u.log.debug('vols:{} snaps:{} lvs:{}'.format(vol_count, |
574 | + snap_count, |
575 | + lv_count)) |
576 | + if (vol_count + snap_count) != len(lv_id_list): |
577 | + msg = ('lvm volume count ({}) != cinder volume + snap count ' |
578 | + '({})'.format(len(vol_list), len(lv_id_list))) |
579 | + return msg |
580 | + |
581 | + # Expect all cinder vol IDs to exist in the LVM volume list |
582 | + for vol_this in vol_list: |
583 | try: |
584 | - obj_this.delete() |
585 | + vol_id = vol_this.id |
586 | + vol_name = vol_this.display_name |
587 | + lv_id = 'volume-{}'.format(vol_id) |
588 | + _index = lv_id_list.index(lv_id) |
589 | + u.log.info('Volume ({}) correlates to lv ' |
590 | + '{} ({})'.format(vol_name, |
591 | + _index, |
592 | + lv_id)) |
593 | except: |
594 | - return '{} delete failed for {} with status {}'.format( |
595 | - item_desc, obj_this.id, obj_this.status) |
596 | - |
597 | - # Wait for objects to disappear |
598 | - obj_count = len(self.force_list(obj)) |
599 | - tries = 0 |
600 | - while obj_count != 0 and tries <= (max_wait/4): |
601 | - u.log.debug('{} delete wait: {} {}'.format(item_desc, |
602 | - tries, obj_count)) |
603 | - sleep(4) |
604 | - obj_count = len(self.force_list(obj)) |
605 | - tries += 1 |
606 | - |
607 | - if obj_count != 0: |
608 | - return '{}(s) not deleted, {} remain.'.format(item_desc, |
609 | - obj_count) |
610 | - |
611 | - def obj_is_status(self, obj, obj_id, stat='available', |
612 | - msg='openstack object status check', max_wait=120): |
613 | - ''''Wait for an openstack object status to be as expected. |
614 | - By default, expect an available status within 120s. Useful |
615 | - when confirming cinder volumes, snapshots, glance images, etc. |
616 | - reach a certain state/status within a specified time.''' |
617 | - # NOTE(beisner): need to move to charmhelpers, and adjust calls here. |
618 | - # Probably useful on other charm tests. |
619 | - |
620 | - obj_stat = obj.get(obj_id).status |
621 | - tries = 0 |
622 | - while obj_stat != stat and tries < (max_wait/4): |
623 | - u.log.debug(msg + ': {} [{}:{}] {}'.format(tries, obj_stat, |
624 | - stat, obj_id)) |
625 | - sleep(4) |
626 | - obj_stat = obj.get(obj_id).status |
627 | - tries += 1 |
628 | - if obj_stat == stat: |
629 | - return True |
630 | - else: |
631 | - return False |
632 | - |
633 | - def test_services(self): |
634 | - '''Verify that the expected services are running on the |
635 | - corresponding service units.''' |
636 | - commands = { |
637 | - self.cinder_sentry: ['status cinder-api', |
638 | - 'status cinder-scheduler', |
639 | - 'status cinder-volume'], |
640 | - self.glance_sentry: ['status glance-registry', |
641 | - 'status glance-api'], |
642 | - self.mysql_sentry: ['status mysql'], |
643 | - self.keystone_sentry: ['status keystone'], |
644 | - self.rabbitmq_sentry: ['sudo service rabbitmq-server status'] |
645 | + u.log.error('lvs output: {}'.format(output)) |
646 | + msg = ('Volume ID {} not found in ' |
647 | + 'LVM volume list.'.format(vol_this.id)) |
648 | + return msg |
649 | + |
650 | + return None |
651 | + |
652 | + def test_100_services(self): |
653 | + """Verify that the expected services are running on the |
654 | + corresponding service units.""" |
655 | + services = { |
656 | + self.cinder_sentry: ['cinder-api', |
657 | + 'cinder-scheduler', |
658 | + 'cinder-volume'], |
659 | + self.glance_sentry: ['glance-registry', |
660 | + 'glance-api'], |
661 | + self.mysql_sentry: ['mysql'], |
662 | + self.keystone_sentry: ['keystone'], |
663 | + self.rabbitmq_sentry: ['rabbitmq-server'] |
664 | } |
665 | - u.log.debug('commands: {}'.format(commands)) |
666 | - ret = u.validate_services(commands) |
667 | - if ret: |
668 | - amulet.raise_status(amulet.FAIL, msg=ret) |
669 | - |
670 | - def test_service_catalog(self): |
671 | - '''Verify that the service catalog endpoint data''' |
672 | + ret = u.validate_services_by_name(services) |
673 | + if ret: |
674 | + amulet.raise_status(amulet.FAIL, msg=ret) |
675 | + |
676 | + def test_110_users(self): |
677 | + """Verify expected users.""" |
678 | + u.log.debug('Checking keystone users...') |
679 | + user0 = {'name': 'cinder_cinderv2', |
680 | + 'enabled': True, |
681 | + 'tenantId': u.not_null, |
682 | + 'id': u.not_null, |
683 | + 'email': 'juju@localhost'} |
684 | + user1 = {'name': 'admin', |
685 | + 'enabled': True, |
686 | + 'tenantId': u.not_null, |
687 | + 'id': u.not_null, |
688 | + 'email': 'juju@localhost'} |
689 | + user2 = {'name': 'glance', |
690 | + 'enabled': True, |
691 | + 'tenantId': u.not_null, |
692 | + 'id': u.not_null, |
693 | + 'email': 'juju@localhost'} |
694 | + expected = [user0, user1, user2] |
695 | + actual = self.keystone.users.list() |
696 | + |
697 | + ret = u.validate_user_data(expected, actual) |
698 | + if ret: |
699 | + amulet.raise_status(amulet.FAIL, msg=ret) |
700 | + |
701 | + def test_112_service_catalog(self): |
702 | + """Verify that the service catalog endpoint data""" |
703 | + u.log.debug('Checking keystone service catalog...') |
704 | endpoint_vol = {'adminURL': u.valid_url, |
705 | 'region': 'RegionOne', |
706 | 'publicURL': u.valid_url, |
707 | @@ -254,47 +323,66 @@ |
708 | if ret: |
709 | amulet.raise_status(amulet.FAIL, msg=ret) |
710 | |
711 | - def test_cinder_glance_image_service_relation(self): |
712 | - '''Verify the cinder:glance image-service relation data''' |
713 | + def test_114_cinder_endpoint(self): |
714 | + """Verify the cinder endpoint data.""" |
715 | + u.log.debug('Checking cinder endpoint...') |
716 | + endpoints = self.keystone.endpoints.list() |
717 | + admin_port = internal_port = public_port = '8776' |
718 | + expected = {'id': u.not_null, |
719 | + 'region': 'RegionOne', |
720 | + 'adminurl': u.valid_url, |
721 | + 'internalurl': u.valid_url, |
722 | + 'publicurl': u.valid_url, |
723 | + 'service_id': u.not_null} |
724 | + |
725 | + ret = u.validate_endpoint_data(endpoints, admin_port, internal_port, |
726 | + public_port, expected) |
727 | + if ret: |
728 | + amulet.raise_status(amulet.FAIL, |
729 | + msg='cinder endpoint: {}'.format(ret)) |
730 | + |
731 | + def test_202_cinder_glance_image_service_relation(self): |
732 | + """Verify the cinder:glance image-service relation data""" |
733 | + u.log.debug('Checking cinder:glance image-service relation data...') |
734 | unit = self.cinder_sentry |
735 | relation = ['image-service', 'glance:image-service'] |
736 | expected = {'private-address': u.valid_ip} |
737 | - u.log.debug('') |
738 | ret = u.validate_relation_data(unit, relation, expected) |
739 | if ret: |
740 | msg = u.relation_error('cinder image-service', ret) |
741 | amulet.raise_status(amulet.FAIL, msg=msg) |
742 | |
743 | - def test_glance_cinder_image_service_relation(self): |
744 | - '''Verify the glance:cinder image-service relation data''' |
745 | + def test_203_glance_cinder_image_service_relation(self): |
746 | + """Verify the glance:cinder image-service relation data""" |
747 | + u.log.debug('Checking glance:cinder image-service relation data...') |
748 | unit = self.glance_sentry |
749 | relation = ['image-service', 'cinder:image-service'] |
750 | expected = { |
751 | 'private-address': u.valid_ip, |
752 | 'glance-api-server': u.valid_url |
753 | } |
754 | - u.log.debug('') |
755 | ret = u.validate_relation_data(unit, relation, expected) |
756 | if ret: |
757 | msg = u.relation_error('glance image-service', ret) |
758 | amulet.raise_status(amulet.FAIL, msg=msg) |
759 | |
760 | - def test_mysql_cinder_db_relation(self): |
761 | - '''Verify the mysql:glance shared-db relation data''' |
762 | + def test_204_mysql_cinder_db_relation(self): |
763 | + """Verify the mysql:glance shared-db relation data""" |
764 | + u.log.debug('Checking mysql:cinder db relation data...') |
765 | unit = self.mysql_sentry |
766 | relation = ['shared-db', 'cinder:shared-db'] |
767 | expected = { |
768 | 'private-address': u.valid_ip, |
769 | 'db_host': u.valid_ip |
770 | } |
771 | - u.log.debug('') |
772 | ret = u.validate_relation_data(unit, relation, expected) |
773 | if ret: |
774 | msg = u.relation_error('mysql shared-db', ret) |
775 | amulet.raise_status(amulet.FAIL, msg=msg) |
776 | |
777 | - def test_cinder_mysql_db_relation(self): |
778 | - '''Verify the cinder:mysql shared-db relation data''' |
779 | + def test_205_cinder_mysql_db_relation(self): |
780 | + """Verify the cinder:mysql shared-db relation data""" |
781 | + u.log.debug('Checking cinder:mysql db relation data...') |
782 | unit = self.cinder_sentry |
783 | relation = ['shared-db', 'mysql:shared-db'] |
784 | expected = { |
785 | @@ -303,14 +391,14 @@ |
786 | 'username': 'cinder', |
787 | 'database': 'cinder' |
788 | } |
789 | - u.log.debug('') |
790 | ret = u.validate_relation_data(unit, relation, expected) |
791 | if ret: |
792 | msg = u.relation_error('cinder shared-db', ret) |
793 | amulet.raise_status(amulet.FAIL, msg=msg) |
794 | |
795 | - def test_keystone_cinder_id_relation(self): |
796 | - '''Verify the keystone:cinder identity-service relation data''' |
797 | + def test_206_keystone_cinder_id_relation(self): |
798 | + """Verify the keystone:cinder identity-service relation data""" |
799 | + u.log.debug('Checking keystone:cinder id relation data...') |
800 | unit = self.keystone_sentry |
801 | relation = ['identity-service', |
802 | 'cinder:identity-service'] |
803 | @@ -328,14 +416,14 @@ |
804 | 'service_tenant_id': u.not_null, |
805 | 'service_host': u.valid_ip |
806 | } |
807 | - u.log.debug('') |
808 | ret = u.validate_relation_data(unit, relation, expected) |
809 | if ret: |
810 | msg = u.relation_error('identity-service cinder', ret) |
811 | amulet.raise_status(amulet.FAIL, msg=msg) |
812 | |
813 | - def test_cinder_keystone_id_relation(self): |
814 | - '''Verify the cinder:keystone identity-service relation data''' |
815 | + def test_207_cinder_keystone_id_relation(self): |
816 | + """Verify the cinder:keystone identity-service relation data""" |
817 | + u.log.debug('Checking cinder:keystone id relation data...') |
818 | unit = self.cinder_sentry |
819 | relation = ['identity-service', |
820 | 'keystone:identity-service'] |
821 | @@ -347,14 +435,14 @@ |
822 | 'cinder_admin_url': u.valid_url, |
823 | 'private-address': u.valid_ip |
824 | } |
825 | - u.log.debug('') |
826 | ret = u.validate_relation_data(unit, relation, expected) |
827 | if ret: |
828 | msg = u.relation_error('cinder identity-service', ret) |
829 | amulet.raise_status(amulet.FAIL, msg=msg) |
830 | |
831 | - def test_rabbitmq_cinder_amqp_relation(self): |
832 | - '''Verify the rabbitmq-server:cinder amqp relation data''' |
833 | + def test_208_rabbitmq_cinder_amqp_relation(self): |
834 | + """Verify the rabbitmq-server:cinder amqp relation data""" |
835 | + u.log.debug('Checking rmq:cinder amqp relation data...') |
836 | unit = self.rabbitmq_sentry |
837 | relation = ['amqp', 'cinder:amqp'] |
838 | expected = { |
839 | @@ -362,14 +450,14 @@ |
840 | 'password': u.not_null, |
841 | 'hostname': u.valid_ip |
842 | } |
843 | - u.log.debug('') |
844 | ret = u.validate_relation_data(unit, relation, expected) |
845 | if ret: |
846 | msg = u.relation_error('amqp cinder', ret) |
847 | amulet.raise_status(amulet.FAIL, msg=msg) |
848 | |
849 | - def test_cinder_rabbitmq_amqp_relation(self): |
850 | - '''Verify the cinder:rabbitmq-server amqp relation data''' |
851 | + def test_209_cinder_rabbitmq_amqp_relation(self): |
852 | + """Verify the cinder:rabbitmq-server amqp relation data""" |
853 | + u.log.debug('Checking cinder:rmq amqp relation data...') |
854 | unit = self.cinder_sentry |
855 | relation = ['amqp', 'rabbitmq-server:amqp'] |
856 | expected = { |
857 | @@ -377,71 +465,69 @@ |
858 | 'vhost': 'openstack', |
859 | 'username': u.not_null |
860 | } |
861 | - u.log.debug('') |
862 | ret = u.validate_relation_data(unit, relation, expected) |
863 | if ret: |
864 | msg = u.relation_error('cinder amqp', ret) |
865 | amulet.raise_status(amulet.FAIL, msg=msg) |
866 | |
867 | - def test_cinder_default_config(self): |
868 | - '''Verify default section configs in cinder.conf and |
869 | - compare some of the parameters to relation data.''' |
870 | - unit_ci = self.cinder_sentry |
871 | + def test_300_cinder_config(self): |
872 | + """Verify the data in the cinder.conf file.""" |
873 | + u.log.debug('Checking cinder config file data...') |
874 | + unit = self.cinder_sentry |
875 | + conf = '/etc/cinder/cinder.conf' |
876 | unit_mq = self.rabbitmq_sentry |
877 | - rel_ci_mq = unit_ci.relation('amqp', 'rabbitmq-server:amqp') |
878 | + unit_ks = self.keystone_sentry |
879 | rel_mq_ci = unit_mq.relation('amqp', 'cinder:amqp') |
880 | - u.log.debug('actual ci:mq relation: {}'.format(rel_ci_mq)) |
881 | - u.log.debug('actual mq:ci relation: {}'.format(rel_mq_ci)) |
882 | - conf = '/etc/cinder/cinder.conf' |
883 | - expected = {'use_syslog': 'False', |
884 | - 'debug': 'False', |
885 | - 'verbose': 'False', |
886 | - 'iscsi_helper': 'tgtadm', |
887 | - 'volume_group': 'cinder-volumes', |
888 | - 'rabbit_userid': 'cinder', |
889 | - 'rabbit_password': rel_mq_ci['password'], |
890 | - 'rabbit_host': rel_mq_ci['hostname'], |
891 | - 'auth_strategy': 'keystone', |
892 | - 'volumes_dir': '/var/lib/cinder/volumes'} |
893 | - section = 'DEFAULT' |
894 | - u.log.debug('') |
895 | - ret = u.validate_config_data(unit_ci, conf, section, expected) |
896 | - if ret: |
897 | - msg = 'cinder.conf default config error: {}'.format(ret) |
898 | - amulet.raise_status(amulet.FAIL, msg=msg) |
899 | - |
900 | - def test_cinder_auth_config(self): |
901 | - '''Verify authtoken section config in cinder.conf or |
902 | - api-paste.ini using glance/keystone relation data.''' |
903 | - unit_ci = self.cinder_sentry |
904 | - unit_ks = self.keystone_sentry |
905 | rel_ks_ci = unit_ks.relation('identity-service', |
906 | 'cinder:identity-service') |
907 | - u.log.debug('actual ks:ci relation: {}'.format(rel_ks_ci)) |
908 | - |
909 | - expected = {'admin_user': rel_ks_ci['service_username'], |
910 | - 'admin_password': rel_ks_ci['service_password'], |
911 | - 'admin_tenant_name': rel_ks_ci['service_tenant'], |
912 | - 'auth_host': rel_ks_ci['auth_host']} |
913 | - |
914 | - if self._get_openstack_release() >= self.precise_icehouse: |
915 | - conf = '/etc/cinder/cinder.conf' |
916 | - section = 'keystone_authtoken' |
917 | - auth_uri = 'http://' + rel_ks_ci['auth_host'] + \ |
918 | - ':' + rel_ks_ci['service_port'] + '/' |
919 | - expected['auth_uri'] = auth_uri |
920 | + |
921 | + auth_uri = 'http://' + rel_ks_ci['auth_host'] + \ |
922 | + ':' + rel_ks_ci['service_port'] + '/' |
923 | + |
924 | + expected = { |
925 | + 'DEFAULT': { |
926 | + 'use_syslog': 'False', |
927 | + 'debug': 'False', |
928 | + 'verbose': 'False', |
929 | + 'iscsi_helper': 'tgtadm', |
930 | + 'volume_group': 'cinder-volumes', |
931 | + 'auth_strategy': 'keystone', |
932 | + 'volumes_dir': '/var/lib/cinder/volumes' |
933 | + }, |
934 | + 'keystone_authtoken': { |
935 | + 'admin_user': rel_ks_ci['service_username'], |
936 | + 'admin_password': rel_ks_ci['service_password'], |
937 | + 'admin_tenant_name': rel_ks_ci['service_tenant'], |
938 | + 'auth_uri': auth_uri |
939 | + } |
940 | + } |
941 | + |
942 | + expected_rmq = { |
943 | + 'rabbit_userid': 'cinder', |
944 | + 'rabbit_virtual_host': 'openstack', |
945 | + 'rabbit_password': rel_mq_ci['password'], |
946 | + 'rabbit_host': rel_mq_ci['hostname'], |
947 | + } |
948 | + |
949 | + if self._get_openstack_release() >= self.trusty_kilo: |
950 | + # Kilo or later |
951 | + expected['oslo_messaging_rabbit'] = expected_rmq |
952 | else: |
953 | - conf = '/etc/cinder/api-paste.ini' |
954 | - section = 'filter:authtoken' |
955 | - |
956 | - ret = u.validate_config_data(unit_ci, conf, section, expected) |
957 | - if ret: |
958 | - msg = "cinder auth config error: {}".format(ret) |
959 | - amulet.raise_status(amulet.FAIL, msg=msg) |
960 | - |
961 | - def test_cinder_logging_config(self): |
962 | - ''' Inspect select sections and config pairs in logging.conf.''' |
963 | - unit_ci = self.cinder_sentry |
964 | + # Juno or earlier |
965 | + expected['DEFAULT'].update(expected_rmq) |
966 | + expected['keystone_authtoken']['auth_host'] = \ |
967 | + rel_ks_ci['auth_host'] |
968 | + |
969 | + for section, pairs in expected.iteritems(): |
970 | + ret = u.validate_config_data(unit, conf, section, pairs) |
971 | + if ret: |
972 | + message = "cinder config error: {}".format(ret) |
973 | + amulet.raise_status(amulet.FAIL, msg=message) |
974 | + |
975 | + def test_301_cinder_logging_config(self): |
976 | + """Verify the data in the cinder logging conf file.""" |
977 | + u.log.debug('Checking cinder logging config file data...') |
978 | + unit = self.cinder_sentry |
979 | conf = '/etc/cinder/logging.conf' |
980 | |
981 | expected = { |
982 | @@ -460,347 +546,138 @@ |
983 | } |
984 | |
985 | for section, pairs in expected.iteritems(): |
986 | - ret = u.validate_config_data(unit_ci, conf, section, pairs) |
987 | + ret = u.validate_config_data(unit, conf, section, pairs) |
988 | if ret: |
989 | - msg = "cinder logging config error: {}".format(ret) |
990 | - amulet.raise_status(amulet.FAIL, msg=msg) |
991 | + message = "cinder logging config error: {}".format(ret) |
992 | + amulet.raise_status(amulet.FAIL, msg=message) |
993 | |
994 | - def test_cinder_rootwrap_config(self): |
995 | - ''' Inspect select config pairs in rootwrap.conf. ''' |
996 | - unit_ci = self.cinder_sentry |
997 | + def test_303_cinder_rootwrap_config(self): |
998 | + """Inspect select config pairs in rootwrap.conf.""" |
999 | + u.log.debug('Checking cinder rootwrap config file data...') |
1000 | + unit = self.cinder_sentry |
1001 | conf = '/etc/cinder/rootwrap.conf' |
1002 | - expected = {'filters_path': '/etc/cinder/rootwrap.d,' |
1003 | - '/usr/share/cinder/rootwrap'} |
1004 | section = 'DEFAULT' |
1005 | - |
1006 | - if self._get_openstack_release() >= self.precise_havana: |
1007 | - expected['use_syslog'] = 'False' |
1008 | - expected['exec_dirs'] = '/sbin,/usr/sbin,/bin,/usr/bin' |
1009 | - |
1010 | - ret = u.validate_config_data(unit_ci, conf, section, expected) |
1011 | + expected = { |
1012 | + 'filters_path': '/etc/cinder/rootwrap.d,' |
1013 | + '/usr/share/cinder/rootwrap', |
1014 | + 'use_syslog': 'False', |
1015 | + } |
1016 | + |
1017 | + ret = u.validate_config_data(unit, conf, section, expected) |
1018 | if ret: |
1019 | msg = "cinder rootwrap config error: {}".format(ret) |
1020 | amulet.raise_status(amulet.FAIL, msg=msg) |
1021 | |
1022 | - def test_cinder_endpoint(self): |
1023 | - '''Verify the cinder endpoint data.''' |
1024 | - endpoints = self.keystone.endpoints.list() |
1025 | - admin_port = internal_port = public_port = '8776' |
1026 | - expected = {'id': u.not_null, |
1027 | - 'region': 'RegionOne', |
1028 | - 'adminurl': u.valid_url, |
1029 | - 'internalurl': u.valid_url, |
1030 | - 'publicurl': u.valid_url, |
1031 | - 'service_id': u.not_null} |
1032 | - |
1033 | - ret = u.validate_endpoint_data(endpoints, admin_port, internal_port, |
1034 | - public_port, expected) |
1035 | - if ret: |
1036 | - amulet.raise_status(amulet.FAIL, |
1037 | - msg='glance endpoint: {}'.format(ret)) |
1038 | - |
1039 | - def test_z_cinder_restart_on_config_change(self): |
1040 | - '''Verify cinder services are restarted when the config is changed. |
1041 | - |
1042 | - Note(coreycb): The method name with the _z_ is a little odd |
1043 | - but it forces the test to run last. It just makes things |
1044 | - easier because restarting services requires re-authorization. |
1045 | - ''' |
1046 | - u.log.debug('making charm config change') |
1047 | - mtime = u.get_sentry_time(self.cinder_sentry) |
1048 | - self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'}) |
1049 | - if not u.validate_service_config_changed(self.cinder_sentry, |
1050 | - mtime, |
1051 | - 'cinder-api', |
1052 | - '/etc/cinder/cinder.conf'): |
1053 | - self.d.configure('cinder', {'verbose': 'False', 'debug': 'False'}) |
1054 | - msg = "cinder-api service didn't restart after config change" |
1055 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1056 | - |
1057 | - if not u.validate_service_config_changed(self.cinder_sentry, |
1058 | - mtime, |
1059 | - 'cinder-volume', |
1060 | - '/etc/cinder/cinder.conf', |
1061 | - sleep_time=0): |
1062 | - self.d.configure('cinder', {'verbose': 'False', 'debug': 'False'}) |
1063 | - msg = "cinder-volume service didn't restart after conf change" |
1064 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1065 | - |
1066 | - u.log.debug('returning to original charm config') |
1067 | - self.d.configure('cinder', {'verbose': 'False', 'debug': 'False'}) |
1068 | - |
1069 | - def test_users(self): |
1070 | - '''Verify expected users.''' |
1071 | - user0 = {'name': 'cinder_cinderv2', |
1072 | - 'enabled': True, |
1073 | - 'tenantId': u.not_null, |
1074 | - 'id': u.not_null, |
1075 | - 'email': 'juju@localhost'} |
1076 | - user1 = {'name': 'admin', |
1077 | - 'enabled': True, |
1078 | - 'tenantId': u.not_null, |
1079 | - 'id': u.not_null, |
1080 | - 'email': 'juju@localhost'} |
1081 | - user2 = {'name': 'glance', |
1082 | - 'enabled': True, |
1083 | - 'tenantId': u.not_null, |
1084 | - 'id': u.not_null, |
1085 | - 'email': 'juju@localhost'} |
1086 | - expected = [user0, user1, user2] |
1087 | - actual = self.keystone.users.list() |
1088 | - |
1089 | - ret = u.validate_user_data(expected, actual) |
1090 | - if ret: |
1091 | - amulet.raise_status(amulet.FAIL, msg=ret) |
1092 | - |
1093 | - def test_000_delete_volumes_snapshots_images(self): |
1094 | - '''Delete all volumes, snapshots and images, if they exist, |
1095 | - as the first of the ordered tests. Useful in re-run scenarios.''' |
1096 | - self.test_900_delete_all_snapshots() |
1097 | - self.test_900_glance_delete_all_images() |
1098 | - self.test_999_delete_all_volumes() |
1099 | - |
1100 | - def test_100_create_and_extend_volume(self): |
1101 | - '''Add and confirm a new 1GB volume. In Havana and later, |
1102 | - extend that volume to 2GB.''' |
1103 | - # Create new volume |
1104 | - vol_new = self.cinder.volumes.create(display_name="demo-vol", size=1) |
1105 | - vol_id = vol_new.id |
1106 | - |
1107 | - # Wait for volume status to be available |
1108 | - ret = self.obj_is_status(self.cinder.volumes, obj_id=vol_id, |
1109 | - stat='available', |
1110 | - msg='create vol status wait') |
1111 | - if not ret: |
1112 | - msg = 'volume create failed' |
1113 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1114 | - |
1115 | - # NOTE(beisner): Cinder extend is supported only in Havana or later |
1116 | - if self._get_openstack_release() < self.precise_havana: |
1117 | - u.log.debug('Skipping volume extend due to openstack release < H') |
1118 | - return |
1119 | - |
1120 | - # Extend volume size |
1121 | - self.cinder.volumes.extend(vol_id, '2') |
1122 | - |
1123 | - # Wait for extend |
1124 | - vol_size = self.cinder.volumes.get(vol_id).size |
1125 | - tries = 0 |
1126 | - while vol_size != 2 and tries <= 15: |
1127 | - u.log.debug('volume extend size wait: {} {}'.format(tries, |
1128 | - vol_id)) |
1129 | - sleep(4) |
1130 | - vol_size = self.cinder.volumes.get(vol_id).size |
1131 | - tries += 1 |
1132 | - |
1133 | - if vol_size != 2: |
1134 | - msg = 'Failed to extend volume, size is {}'.format(vol_size) |
1135 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1136 | - |
1137 | - def test_100_glance_image_create(self): |
1138 | - '''Create new cirros glance image, to be referenced by |
1139 | - a cinder volume create tests in Havana or later.''' |
1140 | - |
1141 | - # NOTE(beisner): Cinder create vol-from-img support for lvm and |
1142 | - # rbd(ceph) exists only in Havana or later |
1143 | - if self._get_openstack_release() < self.precise_havana: |
1144 | - u.log.debug('Skipping create glance img due to openstack rel < H') |
1145 | - return |
1146 | - |
1147 | - # Create a new image |
1148 | - image_new = u.create_cirros_image(self.glance, 'cirros-image-1') |
1149 | - |
1150 | - # Confirm image is created and has status of 'active' |
1151 | - if not image_new: |
1152 | - msg = 'image create failed' |
1153 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1154 | - |
1155 | - def test_200_clone_volume(self): |
1156 | - '''Create a new cinder volume, clone it to another cinder volume.''' |
1157 | - # Get volume object and ID |
1158 | - try: |
1159 | - vol = self.cinder.volumes.find(display_name="demo-vol") |
1160 | - vol_id = vol.id |
1161 | - vol_size = vol.size |
1162 | - except: |
1163 | - msg = ('Volume (demo-vol) not found.') |
1164 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1165 | - |
1166 | - if vol.status != 'available': |
1167 | - msg = ('volume status not == available: {}'.format(vol.status)) |
1168 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1169 | - |
1170 | - # Create new clone volume from source volume |
1171 | - vol_clone = self.cinder.volumes.create(display_name="demo-vol-clone", |
1172 | - size=vol_size, |
1173 | - source_volid=vol_id) |
1174 | - |
1175 | - ret = self.obj_is_status(self.cinder.volumes, obj_id=vol_clone.id, |
1176 | - stat='available', |
1177 | - msg='clone vol status wait') |
1178 | - if not ret: |
1179 | - msg = 'volume clone failed - from {}'.format(vol_id) |
1180 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1181 | - |
1182 | - def test_200_create_volume_from_glance_image(self): |
1183 | - '''Create new volume from glance cirros image (Havana and later), |
1184 | - check status and bootable flag.''' |
1185 | - |
1186 | - # NOTE(beisner): Cinder create vol-from-img support for lvm and |
1187 | - # rbd(ceph) exists only in Havana or later |
1188 | - if self._get_openstack_release() < self.precise_havana: |
1189 | - u.log.debug('Skipping create vol from img, openstack rel < H') |
1190 | - return |
1191 | - |
1192 | - # Get image object and id |
1193 | - expected_img_name = 'cirros-image-1' |
1194 | - img_list = list(self.glance.images.list()) |
1195 | - img_count = len(img_list) |
1196 | - |
1197 | - if img_count != 0: |
1198 | - # NOTE(beisner): glance api has no find method, presume 1st image |
1199 | - img_id = img_list[0].id |
1200 | - else: |
1201 | - msg = 'image not found' |
1202 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1203 | - |
1204 | - # Confirm image name |
1205 | - if img_list[0].name != expected_img_name: |
1206 | - msg = 'unexpected image name {}'.format(img_list[0].name) |
1207 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1208 | - |
1209 | - # Create new volume from glance image |
1210 | - vol_new = self.cinder.volumes.create(display_name="demo-vol-cirros", |
1211 | - size=1, imageRef=img_id) |
1212 | - vol_id = vol_new.id |
1213 | - |
1214 | - # Wait for volume stat to be avail, check that it's flagged bootable |
1215 | - ret = self.obj_is_status(self.cinder.volumes, obj_id=vol_id, |
1216 | - stat='available', |
1217 | - msg='create vol from img status wait') |
1218 | - vol_boot = self.cinder.volumes.get(vol_id).bootable |
1219 | - |
1220 | - if not ret or vol_boot != 'true': |
1221 | - vol_stat = self.cinder.volumes.get(vol_id).status |
1222 | - msg = ('vol create failed - from glance img:' |
1223 | - ' id:{} stat:{} boot:{}'.format(vol_id, |
1224 | - vol_stat, |
1225 | - vol_boot)) |
1226 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1227 | - |
1228 | - def test_300_cinder_create_snapshot(self): |
1229 | - '''Create a snapshot of a volume. Use a cirros-based volume where |
1230 | - supported (Havana and newer), and fall back to a vanilla |
1231 | - volume snapshot everywhere else.''' |
1232 | - |
1233 | - if self._get_openstack_release() >= self.precise_havana: |
1234 | - vol_src_name = "demo-vol-cirros" |
1235 | - elif self._get_openstack_release() < self.precise_havana: |
1236 | - vol_src_name = "demo-vol" |
1237 | - |
1238 | - u.log.debug('creating snapshot of volume: {}'.format(vol_src_name)) |
1239 | - |
1240 | - # Get volume object and id |
1241 | - try: |
1242 | - vol_src = self.cinder.volumes.find(display_name=vol_src_name) |
1243 | - vol_id = vol_src.id |
1244 | - except: |
1245 | - msg = ('volume not found while creating snapshot') |
1246 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1247 | - |
1248 | - if vol_src.status != 'available': |
1249 | - msg = ('volume status not == available: {}').format(vol_src.status) |
1250 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1251 | - |
1252 | - # Create new snapshot |
1253 | - snap_new = self.cinder.volume_snapshots.create( |
1254 | - volume_id=vol_id, display_name='demo-snapshot') |
1255 | - snap_id = snap_new.id |
1256 | - |
1257 | - # Wait for snapshot status to become available |
1258 | - ret = self.obj_is_status(self.cinder.volume_snapshots, obj_id=snap_id, |
1259 | - stat='available', |
1260 | - msg='snapshot create status wait') |
1261 | - if not ret: |
1262 | - snap_stat = self.cinder.volume_snapshots.get(snap_id).status |
1263 | - msg = 'volume snapshot failed: {} {}'.format(snap_id, |
1264 | - snap_stat) |
1265 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1266 | - |
1267 | - def test_310_create_volume_from_snapshot(self): |
1268 | - '''Create a new volume from a snapshot of a volume.''' |
1269 | - # Get snapshot object and ID |
1270 | - try: |
1271 | - snap = self.cinder.volume_snapshots.find( |
1272 | - display_name="demo-snapshot") |
1273 | - snap_id = snap.id |
1274 | - snap_size = snap.size |
1275 | - except: |
1276 | - msg = 'snapshot not found while creating volume' |
1277 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1278 | - |
1279 | - if snap.status != 'available': |
1280 | - msg = 'snapshot status not == available: {}'.format(snap.status) |
1281 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1282 | - |
1283 | - # Create new volume from snapshot |
1284 | - vol_new = self.cinder.volumes.create( |
1285 | - display_name="demo-vol-from-snap", |
1286 | - snapshot_id=snap_id, |
1287 | - size=snap_size) |
1288 | - vol_id = vol_new.id |
1289 | - |
1290 | - # Wait for volume status to be == available |
1291 | - ret = self.obj_is_status(self.cinder.volumes, obj_id=vol_id, |
1292 | - stat='available', |
1293 | - msg='vol from snap create status wait') |
1294 | - if not ret: |
1295 | - vol_stat = self.cinder.volumes.get(vol_id).status |
1296 | - msg = 'volume create failed: {} {}'.format(vol_id, |
1297 | - vol_stat) |
1298 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1299 | - |
1300 | - def test_900_confirm_lvm_volume_list(self): |
1301 | - '''Confirm cinder volume IDs with lvm logical volume IDs. |
1302 | - Expect a 1:1 relationship of lvm:cinder volumes.''' |
1303 | - commando = self.cinder_sentry.run('sudo lvs | grep cinder-volumes | ' |
1304 | - 'awk \'{ print $1 }\'') |
1305 | - vol_list = self.cinder.volumes.list() |
1306 | - lv_id_list = commando[0].split('\n') |
1307 | - vol_count = len(vol_list) |
1308 | - snap_count = len(self.cinder.volume_snapshots.list()) |
1309 | - |
1310 | - # Expect cinder vol + snap count to match lvm log vol count |
1311 | - if (vol_count + snap_count) != len(lv_id_list): |
1312 | - msg = ('lvm volume count ({}) != cinder volume + snap count ' |
1313 | - '({})'.format(len(vol_list), len(lv_id_list))) |
1314 | - amulet.raise_status(amulet.FAIL, msg=msg) |
1315 | - |
1316 | - # Expect all cinder vol IDs to exist in the LVM volume list |
1317 | - for vol_this in vol_list: |
1318 | - try: |
1319 | - lv_id_list.index('volume-' + vol_this.id) |
1320 | - except: |
1321 | - msg = ('volume ID {} not found in ' |
1322 | - 'LVM volume list.'.format(vol_this.id)) |
1323 | + def test_400_cinder_api_connection(self): |
1324 | + """Simple api call to check service is up and responding""" |
1325 | + u.log.debug('Checking basic cinder api functionality...') |
1326 | + check = list(self.cinder.volumes.list()) |
1327 | + u.log.debug('Cinder api check (volumes.list): {}'.format(check)) |
1328 | + assert(check == []) |
1329 | + |
1330 | + def test_401_create_delete_volume(self): |
1331 | + """Create a cinder volume and delete it.""" |
1332 | + u.log.debug('Creating, checking and deleting cinder volume...') |
1333 | + vol_new = u.create_cinder_volume(self.cinder) |
1334 | + vol_id = vol_new.id |
1335 | + u.delete_resource(self.cinder.volumes, vol_id, msg="cinder volume") |
1336 | + |
1337 | + def test_402_create_delete_volume_from_image(self): |
1338 | + """Create a cinder volume from a glance image, and delete it.""" |
1339 | +        u.log.debug('Creating, checking and deleting cinder volume ' |
1340 | + 'from glance image...') |
1341 | + img_new = u.create_cirros_image(self.glance, "cirros-image-1") |
1342 | + img_id = img_new.id |
1343 | + vol_new = u.create_cinder_volume(self.cinder, |
1344 | + vol_name="demo-vol-cirros", |
1345 | + img_id=img_id) |
1346 | + vol_id = vol_new.id |
1347 | + u.delete_resource(self.glance.images, img_id, msg="glance image") |
1348 | + u.delete_resource(self.cinder.volumes, vol_id, msg="cinder volume") |
1349 | + |
1350 | + def test_403_volume_snap_clone_extend_inspect(self): |
1351 | + """Create a cinder volume, clone it, extend its size, create a |
1352 | + snapshot of the volume, create a volume from a snapshot, check |
1353 | + status of each, inspect underlying lvm, then delete the resources.""" |
1354 | + u.log.debug('Creating, snapshotting, cloning, extending a ' |
1355 | + 'cinder volume...') |
1356 | + vols = [] |
1357 | + |
1358 | + # Create a 1GB volume |
1359 | + vol_new = u.create_cinder_volume(self.cinder, vol_size=1) |
1360 | + vols.append(vol_new) |
1361 | + vol_id = vol_new.id |
1362 | + |
1363 | + # Snapshot the volume |
1364 | + snap = self._snapshot_cinder_volume(vol_id=vol_id) |
1365 | + snap_id = snap.id |
1366 | + |
1367 | + # Create a volume from the snapshot |
1368 | + vol_from_snap = u.create_cinder_volume(self.cinder, |
1369 | + vol_name="demo-vol-from-snap", |
1370 | + snap_id=snap_id) |
1371 | + vols.append(vol_from_snap) |
1372 | + |
1373 | + # Clone an existing volume |
1374 | + vol_clone = u.create_cinder_volume(self.cinder, |
1375 | + vol_name="demo-vol-clone", |
1376 | + src_vol_id=vol_id) |
1377 | + vols.append(vol_clone) |
1378 | + vol_clone_id = vol_clone.id |
1379 | + |
1380 | + # Extend the cloned volume and confirm new size |
1381 | + ret = self._extend_cinder_volume(vol_clone_id, new_size=2) |
1382 | + if ret: |
1383 | + amulet.raise_status(amulet.FAIL, msg=ret) |
1384 | + |
1385 | + # Inspect logical volumes (lvm) on cinder unit |
1386 | + ret = self._check_cinder_lvm() |
1387 | + if ret: |
1388 | + amulet.raise_status(amulet.FAIL, msg=ret) |
1389 | + |
1390 | + # Cleanup |
1391 | + u.log.debug('Deleting snapshot {}...'.format(snap_id)) |
1392 | + u.delete_resource(self.cinder.volume_snapshots, |
1393 | +                          snap_id, msg="cinder volume snapshot") |
1394 | + |
1395 | + for vol in vols: |
1396 | + u.log.debug('Deleting volume {}...'.format(vol.id)) |
1397 | + u.delete_resource(self.cinder.volumes, vol.id, msg="cinder volume") |
1398 | + |
1399 | + def test_900_restart_on_config_change(self): |
1400 | + """Verify that the specified services are restarted when the |
1401 | + config is changed.""" |
1402 | + |
1403 | + sentry = self.cinder_sentry |
1404 | + juju_service = 'cinder' |
1405 | + |
1406 | + # Expected default and alternate values |
1407 | + set_default = {'debug': 'False'} |
1408 | + set_alternate = {'debug': 'True'} |
1409 | + |
1410 | + # Config file affected by juju set config change |
1411 | + conf_file = '/etc/cinder/cinder.conf' |
1412 | + |
1413 | + # Services which are expected to restart upon config change |
1414 | + services = [ |
1415 | + 'cinder-api', |
1416 | + 'cinder-scheduler', |
1417 | + 'cinder-volume' |
1418 | + ] |
1419 | + |
1420 | + # Make config change, check for service restarts |
1421 | + u.log.debug('Making config change on {}...'.format(juju_service)) |
1422 | + self.d.configure(juju_service, set_alternate) |
1423 | + |
1424 | + sleep_time = 40 |
1425 | + for s in services: |
1426 | + u.log.debug("Checking that service restarted: {}".format(s)) |
1427 | + if not u.service_restarted(sentry, s, |
1428 | + conf_file, sleep_time=sleep_time, |
1429 | + pgrep_full=True): |
1430 | + self.d.configure(juju_service, set_default) |
1431 | + msg = "service {} didn't restart after config change".format(s) |
1432 | amulet.raise_status(amulet.FAIL, msg=msg) |
1433 | - |
1434 | - def test_900_glance_delete_all_images(self): |
1435 | - '''Delete all glance images and confirm deletion.''' |
1436 | - ret = self.delete_all_objs(self.glance.images, item_desc='image') |
1437 | - if ret: |
1438 | - amulet.raise_status(amulet.FAIL, msg=ret) |
1439 | - |
1440 | - def test_900_delete_all_snapshots(self): |
1441 | - '''Delete all cinder volume snapshots and confirm deletion.''' |
1442 | - ret = self.delete_all_objs(self.cinder.volume_snapshots, |
1443 | - item_desc='snapshot') |
1444 | - if ret: |
1445 | - amulet.raise_status(amulet.FAIL, msg=ret) |
1446 | - |
1447 | - def test_999_delete_all_volumes(self): |
1448 | - '''Delete all cinder volumes and confirm deletion, |
1449 | - as the last of the ordered tests.''' |
1450 | - ret = self.delete_all_objs(self.cinder.volumes, item_desc='volume') |
1451 | - if ret: |
1452 | - amulet.raise_status(amulet.FAIL, msg=ret) |
1453 | + sleep_time = 0 |
1454 | + |
1455 | + self.d.configure(juju_service, set_default) |
1456 | |
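The restart test above confirms that services bounce, but it does not assert that the new value actually landed in cinder.conf. A minimal sketch of adding that assertion with the validate_config_data helper synced below (the 'DEFAULT' section name is an assumption):

    # Sketch only: assumes `u` (AmuletUtils) and self.cinder_sentry as used
    # in basic_deployment.py above; the 'DEFAULT' section is an assumption.
    expected = {'debug': 'True'}
    ret = u.validate_config_data(self.cinder_sentry,
                                 '/etc/cinder/cinder.conf',
                                 'DEFAULT', expected)
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)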
1457 | === modified file 'tests/charmhelpers/contrib/amulet/utils.py' |
1458 | --- tests/charmhelpers/contrib/amulet/utils.py 2015-06-19 16:29:27 +0000 |
1459 | +++ tests/charmhelpers/contrib/amulet/utils.py 2015-07-01 14:48:09 +0000 |
1460 | @@ -14,6 +14,7 @@ |
1461 | # You should have received a copy of the GNU Lesser General Public License |
1462 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1463 | |
1464 | +import amulet |
1465 | import ConfigParser |
1466 | import distro_info |
1467 | import io |
1468 | @@ -173,6 +174,11 @@ |
1469 | |
1470 | Verify that the specified section of the config file contains |
1471 | the expected option key:value pairs. |
1472 | + |
1473 | + Compare expected dictionary data vs actual dictionary data. |
1474 | + The values in the 'expected' dictionary can be strings, bools, ints, |
1475 | + longs, or can be a function that evaluates a variable and returns a |
1476 | + bool. |
1477 | """ |
1478 | self.log.debug('Validating config file data ({} in {} on {})' |
1479 | '...'.format(section, config_file, |
1480 | @@ -185,9 +191,20 @@ |
1481 | for k in expected.keys(): |
1482 | if not config.has_option(section, k): |
1483 | return "section [{}] is missing option {}".format(section, k) |
1484 | - if config.get(section, k) != expected[k]: |
1485 | + |
1486 | + actual = config.get(section, k) |
1487 | + v = expected[k] |
1488 | + if (isinstance(v, six.string_types) or |
1489 | + isinstance(v, bool) or |
1490 | + isinstance(v, six.integer_types)): |
1491 | + # handle explicit values |
1492 | + if actual != v: |
1493 | + return "section [{}] {}:{} != expected {}:{}".format( |
1494 | + section, k, actual, k, expected[k]) |
1495 | + # handle function pointers, such as not_null or valid_ip |
1496 | + elif not v(actual): |
1497 | return "section [{}] {}:{} != expected {}:{}".format( |
1498 | - section, k, config.get(section, k), k, expected[k]) |
1499 | + section, k, actual, k, expected[k]) |
1500 | return None |
1501 | |
1502 | def _validate_dict_data(self, expected, actual): |
1503 | @@ -195,7 +212,7 @@ |
1504 | |
1505 | Compare expected dictionary data vs actual dictionary data. |
1506 | The values in the 'expected' dictionary can be strings, bools, ints, |
1507 | - longs, or can be a function that evaluate a variable and returns a |
1508 | + longs, or can be a function that evaluates a variable and returns a |
1509 | bool. |
1510 | """ |
1511 | self.log.debug('actual: {}'.format(repr(actual))) |
1512 | @@ -206,8 +223,10 @@ |
1513 | if (isinstance(v, six.string_types) or |
1514 | isinstance(v, bool) or |
1515 | isinstance(v, six.integer_types)): |
1516 | + # handle explicit values |
1517 | if v != actual[k]: |
1518 | return "{}:{}".format(k, actual[k]) |
1519 | + # handle function pointers, such as not_null or valid_ip |
1520 | elif not v(actual[k]): |
1521 | return "{}:{}".format(k, actual[k]) |
1522 | else: |
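With the change above, values in the 'expected' dict may be callables as well as literals. A hedged sketch of both forms; not_null and valid_ip are the validators the new comments refer to, and their presence on this AmuletUtils version is assumed:

    # Literal values are compared directly; callables are invoked with the
    # actual value and must return True for the check to pass.
    expected = {
        'debug': 'False',               # literal string compare
        'rabbit_host': u.valid_ip,      # assumed validator: True for valid IPs
        'rabbit_password': u.not_null,  # assumed validator: True if non-empty
    }
    ret = u.validate_config_data(self.cinder_sentry,
                                 '/etc/cinder/cinder.conf',
                                 'DEFAULT', expected)
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)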
1523 | @@ -406,3 +425,109 @@ |
1524 | """Convert a relative file path to a file URL.""" |
1525 | _abs_path = os.path.abspath(file_rel_path) |
1526 | return urlparse.urlparse(_abs_path, scheme='file').geturl() |
1527 | + |
1528 | + def check_commands_on_units(self, commands, sentry_units): |
1529 | + """Check that all commands in a list exit zero on all |
1530 | + sentry units in a list. |
1531 | + |
1532 | + :param commands: list of bash commands |
1533 | + :param sentry_units: list of sentry unit pointers |
1534 | + :returns: None if successful; Failure message otherwise |
1535 | + """ |
1536 | + self.log.debug('Checking exit codes for {} commands on {} ' |
1537 | + 'sentry units...'.format(len(commands), |
1538 | + len(sentry_units))) |
1539 | + for sentry_unit in sentry_units: |
1540 | + for cmd in commands: |
1541 | + output, code = sentry_unit.run(cmd) |
1542 | + if code == 0: |
1543 | + self.log.debug('{} `{}` returned {} ' |
1544 | + '(OK)'.format(sentry_unit.info['unit_name'], |
1545 | + cmd, code)) |
1546 | + else: |
1547 | + return ('{} `{}` returned {} ' |
1548 | + '{}'.format(sentry_unit.info['unit_name'], |
1549 | + cmd, code, output)) |
1550 | + return None |
1551 | + |
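A short usage sketch for check_commands_on_units; the upstart-style status commands are illustrative only:

    # Expect every command to exit 0 on every listed sentry unit.
    commands = [
        'status cinder-api',
        'status cinder-scheduler',
        'status cinder-volume',
    ]
    ret = u.check_commands_on_units(commands, [self.cinder_sentry])
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)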
1552 | + def get_process_id_list(self, sentry_unit, process_name): |
1553 | + """Get a list of process ID(s) from a single sentry juju unit |
1554 | + for a single process name. |
1555 | + |
1556 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
1557 | + :param process_name: Process name |
1558 | + :returns: List of process IDs |
1559 | + """ |
1560 | + cmd = 'pidof {}'.format(process_name) |
1561 | + output, code = sentry_unit.run(cmd) |
1562 | + if code != 0: |
1563 | + msg = ('{} `{}` returned {} ' |
1564 | + '{}'.format(sentry_unit.info['unit_name'], |
1565 | + cmd, code, output)) |
1566 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1567 | + return str(output).split() |
1568 | + |
1569 | + def get_unit_process_ids(self, unit_processes): |
1570 | + """Construct a dict containing unit sentries, process names, and |
1571 | + process IDs.""" |
1572 | + pid_dict = {} |
1573 | + for sentry_unit, process_list in unit_processes.iteritems(): |
1574 | + pid_dict[sentry_unit] = {} |
1575 | + for process in process_list: |
1576 | + pids = self.get_process_id_list(sentry_unit, process) |
1577 | + pid_dict[sentry_unit].update({process: pids}) |
1578 | + return pid_dict |
1579 | + |
1580 | + def validate_unit_process_ids(self, expected, actual): |
1581 | + """Validate process id quantities for services on units.""" |
1582 | + self.log.debug('Checking units for running processes...') |
1583 | + self.log.debug('Expected PIDs: {}'.format(expected)) |
1584 | + self.log.debug('Actual PIDs: {}'.format(actual)) |
1585 | + |
1586 | + if len(actual) != len(expected): |
1587 | + return ('Unit count mismatch. expected, actual: {}, ' |
1588 | + '{} '.format(len(expected), len(actual))) |
1589 | + |
1590 | + for (e_sentry, e_proc_names) in expected.iteritems(): |
1591 | + e_sentry_name = e_sentry.info['unit_name'] |
1592 | + if e_sentry in actual.keys(): |
1593 | + a_proc_names = actual[e_sentry] |
1594 | + else: |
1595 | +                return ('Expected sentry ({}) not found in actual dict data: ' |
1596 | + '{}'.format(e_sentry_name, e_sentry)) |
1597 | + |
1598 | + if len(e_proc_names.keys()) != len(a_proc_names.keys()): |
1599 | + return ('Process name count mismatch. expected, actual: {}, ' |
1600 | +                        '{}'.format(len(e_proc_names), len(a_proc_names))) |
1601 | + |
1602 | + for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \ |
1603 | + zip(e_proc_names.items(), a_proc_names.items()): |
1604 | + if e_proc_name != a_proc_name: |
1605 | + return ('Process name mismatch. expected, actual: {}, ' |
1606 | + '{}'.format(e_proc_name, a_proc_name)) |
1607 | + |
1608 | + a_pids_length = len(a_pids) |
1609 | + if e_pids_length != a_pids_length: |
1610 | + return ('PID count mismatch. {} ({}) expected, actual: ' |
1611 | + '{}, {} ({})'.format(e_sentry_name, e_proc_name, |
1612 | + e_pids_length, a_pids_length, |
1613 | + a_pids)) |
1614 | + else: |
1615 | + self.log.debug('PID check OK: {} {} {}: ' |
1616 | + '{}'.format(e_sentry_name, e_proc_name, |
1617 | + e_pids_length, a_pids)) |
1618 | + return None |
1619 | + |
1620 | + def validate_list_of_identical_dicts(self, list_of_dicts): |
1621 | + """Check that all dicts within a list are identical.""" |
1622 | + hashes = [] |
1623 | + for _dict in list_of_dicts: |
1624 | + hashes.append(hash(frozenset(_dict.items()))) |
1625 | + |
1626 | + self.log.debug('Hashes: {}'.format(hashes)) |
1627 | + if len(set(hashes)) == 1: |
1628 | + self.log.debug('Dicts within list are identical') |
1629 | + else: |
1630 | + return 'Dicts within list are not identical' |
1631 | + |
1632 | + return None |
1633 | |
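Taken together, the three PID helpers above support a declarative process check like this sketch; the process names and worker counts are assumptions:

    # Map each sentry unit to the process names it should be running.
    unit_processes = {
        self.cinder_sentry: ['cinder-api', 'cinder-scheduler', 'cinder-volume'],
    }
    actual_pids = u.get_unit_process_ids(unit_processes)

    # Expected data mirrors the shape of actual_pids, but carries the
    # expected PID *count* per process rather than a PID list.
    expected_pids = {
        self.cinder_sentry: {'cinder-api': 3,        # assumed worker count
                             'cinder-scheduler': 1,
                             'cinder-volume': 1},
    }
    ret = u.validate_unit_process_ids(expected_pids, actual_pids)
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)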
1634 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py' |
1635 | --- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-19 16:29:27 +0000 |
1636 | +++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-01 14:48:09 +0000 |
1637 | @@ -79,9 +79,9 @@ |
1638 | services.append(this_service) |
1639 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', |
1640 | 'ceph-osd', 'ceph-radosgw'] |
1641 | - # Openstack subordinate charms do not expose an origin option as that |
1642 | - # is controlled by the principle |
1643 | - ignore = ['neutron-openvswitch'] |
1644 | + # Most OpenStack subordinate charms do not expose an origin option |
1645 | +    # as that is controlled by the principal. |
1646 | + ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch'] |
1647 | |
1648 | if self.openstack: |
1649 | for svc in services: |
1650 | @@ -148,3 +148,36 @@ |
1651 | return os_origin.split('%s-' % self.series)[1].split('/')[0] |
1652 | else: |
1653 | return releases[self.series] |
1654 | + |
1655 | + def get_ceph_expected_pools(self, radosgw=False): |
1656 | + """Return a list of expected ceph pools in a ceph + cinder + glance |
1657 | + test scenario, based on OpenStack release and whether ceph radosgw |
1658 | + is flagged as present or not.""" |
1659 | + |
1660 | + if self._get_openstack_release() >= self.trusty_kilo: |
1661 | + # Kilo or later |
1662 | + pools = [ |
1663 | + 'rbd', |
1664 | + 'cinder', |
1665 | + 'glance' |
1666 | + ] |
1667 | + else: |
1668 | + # Juno or earlier |
1669 | + pools = [ |
1670 | + 'data', |
1671 | + 'metadata', |
1672 | + 'rbd', |
1673 | + 'cinder', |
1674 | + 'glance' |
1675 | + ] |
1676 | + |
1677 | + if radosgw: |
1678 | + pools.extend([ |
1679 | + '.rgw.root', |
1680 | + '.rgw.control', |
1681 | + '.rgw', |
1682 | + '.rgw.gc', |
1683 | + '.users.uid' |
1684 | + ]) |
1685 | + |
1686 | + return pools |
1687 | |
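A sketch of pairing get_ceph_expected_pools with the get_ceph_pools helper from the utils sync below; the ceph sentry attribute name is an assumption:

    # Compare the release-appropriate pool list against a live ceph unit.
    expected_pools = self.get_ceph_expected_pools(radosgw=False)
    actual_pools = u.get_ceph_pools(self.ceph0_sentry)  # assumed sentry name
    for pool in expected_pools:
        if pool not in actual_pools:
            amulet.raise_status(amulet.FAIL,
                                msg='pool {} not found'.format(pool))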
1688 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py' |
1689 | --- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-06-19 16:29:27 +0000 |
1690 | +++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-01 14:48:09 +0000 |
1691 | @@ -14,16 +14,20 @@ |
1692 | # You should have received a copy of the GNU Lesser General Public License |
1693 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1694 | |
1695 | +import amulet |
1696 | +import json |
1697 | import logging |
1698 | import os |
1699 | import six |
1700 | import time |
1701 | import urllib |
1702 | |
1703 | +import cinderclient.v1.client as cinder_client |
1704 | import glanceclient.v1.client as glance_client |
1705 | import heatclient.v1.client as heat_client |
1706 | import keystoneclient.v2_0 as keystone_client |
1707 | import novaclient.v1_1.client as nova_client |
1708 | +import swiftclient |
1709 | |
1710 | from charmhelpers.contrib.amulet.utils import ( |
1711 | AmuletUtils |
1712 | @@ -171,6 +175,16 @@ |
1713 | self.log.debug('Checking if tenant exists ({})...'.format(tenant)) |
1714 | return tenant in [t.name for t in keystone.tenants.list()] |
1715 | |
1716 | + def authenticate_cinder_admin(self, keystone_sentry, username, |
1717 | + password, tenant): |
1718 | + """Authenticates admin user with cinder.""" |
1719 | + # NOTE(beisner): cinder python client doesn't accept tokens. |
1720 | + service_ip = \ |
1721 | + keystone_sentry.relation('shared-db', |
1722 | + 'mysql:shared-db')['private-address'] |
1723 | + ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8')) |
1724 | + return cinder_client.Client(username, password, tenant, ept) |
1725 | + |
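Usage sketch for the new cinder authentication helper; the credentials shown are the usual test defaults and are assumptions here:

    # Build an authenticated cinder client against the keystone v2.0 endpoint.
    self.cinder = u.authenticate_cinder_admin(self.keystone_sentry,
                                              username='admin',
                                              password='openstack',
                                              tenant='admin')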
1726 | def authenticate_keystone_admin(self, keystone_sentry, user, password, |
1727 | tenant): |
1728 | """Authenticates admin user with the keystone admin endpoint.""" |
1729 | @@ -212,9 +226,29 @@ |
1730 | return nova_client.Client(username=user, api_key=password, |
1731 | project_id=tenant, auth_url=ep) |
1732 | |
1733 | + def authenticate_swift_user(self, keystone, user, password, tenant): |
1734 | + """Authenticates a regular user with swift api.""" |
1735 | + self.log.debug('Authenticating swift user ({})...'.format(user)) |
1736 | + ep = keystone.service_catalog.url_for(service_type='identity', |
1737 | + endpoint_type='publicURL') |
1738 | + return swiftclient.Connection(authurl=ep, |
1739 | + user=user, |
1740 | + key=password, |
1741 | + tenant_name=tenant, |
1742 | + auth_version='2.0') |
1743 | + |
1744 | def create_cirros_image(self, glance, image_name): |
1745 | - """Download the latest cirros image and upload it to glance.""" |
1746 | - self.log.debug('Creating glance image ({})...'.format(image_name)) |
1747 | + """Download the latest cirros image and upload it to glance, |
1748 | + validate and return a resource pointer. |
1749 | + |
1750 | + :param glance: pointer to authenticated glance connection |
1751 | + :param image_name: display name for new image |
1752 | + :returns: glance image pointer |
1753 | + """ |
1754 | + self.log.debug('Creating glance cirros image ' |
1755 | + '({})...'.format(image_name)) |
1756 | + |
1757 | + # Download cirros image |
1758 | http_proxy = os.getenv('AMULET_HTTP_PROXY') |
1759 | self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy)) |
1760 | if http_proxy: |
1761 | @@ -223,33 +257,51 @@ |
1762 | else: |
1763 | opener = urllib.FancyURLopener() |
1764 | |
1765 | - f = opener.open("http://download.cirros-cloud.net/version/released") |
1766 | + f = opener.open('http://download.cirros-cloud.net/version/released') |
1767 | version = f.read().strip() |
1768 | - cirros_img = "cirros-{}-x86_64-disk.img".format(version) |
1769 | + cirros_img = 'cirros-{}-x86_64-disk.img'.format(version) |
1770 | local_path = os.path.join('tests', cirros_img) |
1771 | |
1772 | if not os.path.exists(local_path): |
1773 | - cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net", |
1774 | + cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net', |
1775 | version, cirros_img) |
1776 | opener.retrieve(cirros_url, local_path) |
1777 | f.close() |
1778 | |
1779 | + # Create glance image |
1780 | with open(local_path) as f: |
1781 | image = glance.images.create(name=image_name, is_public=True, |
1782 | disk_format='qcow2', |
1783 | container_format='bare', data=f) |
1784 | - count = 1 |
1785 | - status = image.status |
1786 | - while status != 'active' and count < 10: |
1787 | - time.sleep(3) |
1788 | - image = glance.images.get(image.id) |
1789 | - status = image.status |
1790 | - self.log.debug('image status: {}'.format(status)) |
1791 | - count += 1 |
1792 | - |
1793 | - if status != 'active': |
1794 | - self.log.error('image creation timed out') |
1795 | - return None |
1796 | + |
1797 | + # Wait for image to reach active status |
1798 | + img_id = image.id |
1799 | + ret = self.resource_reaches_status(glance.images, img_id, |
1800 | + expected_stat='active', |
1801 | + msg='Image status wait') |
1802 | + if not ret: |
1803 | + msg = 'Glance image failed to reach expected state.' |
1804 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1805 | + |
1806 | + # Re-validate new image |
1807 | + self.log.debug('Validating image attributes...') |
1808 | + val_img_name = glance.images.get(img_id).name |
1809 | + val_img_stat = glance.images.get(img_id).status |
1810 | + val_img_pub = glance.images.get(img_id).is_public |
1811 | + val_img_cfmt = glance.images.get(img_id).container_format |
1812 | + val_img_dfmt = glance.images.get(img_id).disk_format |
1813 | + msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} ' |
1814 | + 'container fmt:{} disk fmt:{}'.format( |
1815 | + val_img_name, val_img_pub, img_id, |
1816 | + val_img_stat, val_img_cfmt, val_img_dfmt)) |
1817 | + |
1818 | + if val_img_name == image_name and val_img_stat == 'active' \ |
1819 | + and val_img_pub is True and val_img_cfmt == 'bare' \ |
1820 | + and val_img_dfmt == 'qcow2': |
1821 | + self.log.debug(msg_attr) |
1822 | + else: |
1823 | +            msg = ('Image validation failed, {}'.format(msg_attr)) |
1824 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1825 | |
1826 | return image |
1827 | |
1828 | @@ -260,22 +312,7 @@ |
1829 | self.log.warn('/!\\ DEPRECATION WARNING: use ' |
1830 | 'delete_resource instead of delete_image.') |
1831 | self.log.debug('Deleting glance image ({})...'.format(image)) |
1832 | - num_before = len(list(glance.images.list())) |
1833 | - glance.images.delete(image) |
1834 | - |
1835 | - count = 1 |
1836 | - num_after = len(list(glance.images.list())) |
1837 | - while num_after != (num_before - 1) and count < 10: |
1838 | - time.sleep(3) |
1839 | - num_after = len(list(glance.images.list())) |
1840 | - self.log.debug('number of images: {}'.format(num_after)) |
1841 | - count += 1 |
1842 | - |
1843 | - if num_after != (num_before - 1): |
1844 | - self.log.error('image deletion timed out') |
1845 | - return False |
1846 | - |
1847 | - return True |
1848 | + return self.delete_resource(glance.images, image, msg='glance image') |
1849 | |
1850 | def create_instance(self, nova, image_name, instance_name, flavor): |
1851 | """Create the specified instance.""" |
1852 | @@ -308,22 +345,8 @@ |
1853 | self.log.warn('/!\\ DEPRECATION WARNING: use ' |
1854 | 'delete_resource instead of delete_instance.') |
1855 | self.log.debug('Deleting instance ({})...'.format(instance)) |
1856 | - num_before = len(list(nova.servers.list())) |
1857 | - nova.servers.delete(instance) |
1858 | - |
1859 | - count = 1 |
1860 | - num_after = len(list(nova.servers.list())) |
1861 | - while num_after != (num_before - 1) and count < 10: |
1862 | - time.sleep(3) |
1863 | - num_after = len(list(nova.servers.list())) |
1864 | - self.log.debug('number of instances: {}'.format(num_after)) |
1865 | - count += 1 |
1866 | - |
1867 | - if num_after != (num_before - 1): |
1868 | - self.log.error('instance deletion timed out') |
1869 | - return False |
1870 | - |
1871 | - return True |
1872 | + return self.delete_resource(nova.servers, instance, |
1873 | + msg='nova instance') |
1874 | |
1875 | def create_or_get_keypair(self, nova, keypair_name="testkey"): |
1876 | """Create a new keypair, or return pointer if it already exists.""" |
1877 | @@ -339,6 +362,88 @@ |
1878 | _keypair = nova.keypairs.create(name=keypair_name) |
1879 | return _keypair |
1880 | |
1881 | + def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1, |
1882 | + img_id=None, src_vol_id=None, snap_id=None): |
1883 | + """Create cinder volume, optionally from a glance image, OR |
1884 | + optionally as a clone of an existing volume, OR optionally |
1885 | + from a snapshot. Wait for the new volume status to reach |
1886 | + the expected status, validate and return a resource pointer. |
1887 | + |
1888 | + :param vol_name: cinder volume display name |
1889 | + :param vol_size: size in gigabytes |
1890 | + :param img_id: optional glance image id |
1891 | + :param src_vol_id: optional source volume id to clone |
1892 | + :param snap_id: optional snapshot id to use |
1893 | + :returns: cinder volume pointer |
1894 | + """ |
1895 | + # Handle parameter input and avoid impossible combinations |
1896 | + if img_id and not src_vol_id and not snap_id: |
1897 | + # Create volume from image |
1898 | + self.log.debug('Creating cinder volume from glance image...') |
1899 | + bootable = 'true' |
1900 | + elif src_vol_id and not img_id and not snap_id: |
1901 | + # Clone an existing volume |
1902 | + self.log.debug('Cloning cinder volume...') |
1903 | + bootable = cinder.volumes.get(src_vol_id).bootable |
1904 | + elif snap_id and not src_vol_id and not img_id: |
1905 | + # Create volume from snapshot |
1906 | + self.log.debug('Creating cinder volume from snapshot...') |
1907 | + snap = cinder.volume_snapshots.find(id=snap_id) |
1908 | + vol_size = snap.size |
1909 | + snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id |
1910 | + bootable = cinder.volumes.get(snap_vol_id).bootable |
1911 | + elif not img_id and not src_vol_id and not snap_id: |
1912 | + # Create volume |
1913 | + self.log.debug('Creating cinder volume...') |
1914 | + bootable = 'false' |
1915 | + else: |
1916 | + # Impossible combination of parameters |
1917 | + msg = ('Invalid method use - name:{} size:{} img_id:{} ' |
1918 | + 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size, |
1919 | + img_id, src_vol_id, |
1920 | + snap_id)) |
1921 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1922 | + |
1923 | + # Create new volume |
1924 | + try: |
1925 | + vol_new = cinder.volumes.create(display_name=vol_name, |
1926 | + imageRef=img_id, |
1927 | + size=vol_size, |
1928 | + source_volid=src_vol_id, |
1929 | + snapshot_id=snap_id) |
1930 | + vol_id = vol_new.id |
1931 | + except Exception as e: |
1932 | + msg = 'Failed to create volume: {}'.format(e) |
1933 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1934 | + |
1935 | + # Wait for volume to reach available status |
1936 | + ret = self.resource_reaches_status(cinder.volumes, vol_id, |
1937 | + expected_stat="available", |
1938 | + msg="Volume status wait") |
1939 | + if not ret: |
1940 | + msg = 'Cinder volume failed to reach expected state.' |
1941 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1942 | + |
1943 | + # Re-validate new volume |
1944 | + self.log.debug('Validating volume attributes...') |
1945 | + val_vol_name = cinder.volumes.get(vol_id).display_name |
1946 | + val_vol_boot = cinder.volumes.get(vol_id).bootable |
1947 | + val_vol_stat = cinder.volumes.get(vol_id).status |
1948 | + val_vol_size = cinder.volumes.get(vol_id).size |
1949 | + msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:' |
1950 | + '{} size:{}'.format(val_vol_name, vol_id, |
1951 | + val_vol_stat, val_vol_boot, |
1952 | + val_vol_size)) |
1953 | + |
1954 | + if val_vol_boot == bootable and val_vol_stat == 'available' \ |
1955 | + and val_vol_name == vol_name and val_vol_size == vol_size: |
1956 | + self.log.debug(msg_attr) |
1957 | + else: |
1958 | + msg = ('Volume validation failed, {}'.format(msg_attr)) |
1959 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1960 | + |
1961 | + return vol_new |
1962 | + |
1963 | def delete_resource(self, resource, resource_id, |
1964 | msg="resource", max_wait=120): |
1965 | """Delete one openstack resource, such as one instance, keypair, |
1966 | @@ -350,6 +455,8 @@ |
1967 | :param max_wait: maximum wait time in seconds |
1968 | :returns: True if successful, otherwise False |
1969 | """ |
1970 | + self.log.debug('Deleting OpenStack resource ' |
1971 | + '{} ({})'.format(resource_id, msg)) |
1972 | num_before = len(list(resource.list())) |
1973 | resource.delete(resource_id) |
1974 | |
1975 | @@ -411,3 +518,87 @@ |
1976 | self.log.debug('{} never reached expected status: ' |
1977 | '{}'.format(resource_id, expected_stat)) |
1978 | return False |
1979 | + |
1980 | + def get_ceph_osd_id_cmd(self, index): |
1981 | + """Produce a shell command that will return a ceph-osd id.""" |
1982 | + return ("`initctl list | grep 'ceph-osd ' | " |
1983 | + "awk 'NR=={} {{ print $2 }}' | " |
1984 | + "grep -o '[0-9]*'`".format(index + 1)) |
1985 | + |
1986 | + def get_ceph_pools(self, sentry_unit): |
1987 | + """Return a dict of ceph pools from a single ceph unit, with |
1988 | + pool name as keys, pool id as vals.""" |
1989 | + pools = {} |
1990 | + cmd = 'sudo ceph osd lspools' |
1991 | + output, code = sentry_unit.run(cmd) |
1992 | + if code != 0: |
1993 | + msg = ('{} `{}` returned {} ' |
1994 | + '{}'.format(sentry_unit.info['unit_name'], |
1995 | + cmd, code, output)) |
1996 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1997 | + |
1998 | + # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance, |
1999 | + for pool in str(output).split(','): |
2000 | + pool_id_name = pool.split(' ') |
2001 | + if len(pool_id_name) == 2: |
2002 | + pool_id = pool_id_name[0] |
2003 | + pool_name = pool_id_name[1] |
2004 | + pools[pool_name] = int(pool_id) |
2005 | + |
2006 | + self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'], |
2007 | + pools)) |
2008 | + return pools |
2009 | + |
2010 | + def get_ceph_df(self, sentry_unit): |
2011 | + """Return dict of ceph df json output, including ceph pool state. |
2012 | + |
2013 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
2014 | + :returns: Dict of ceph df output |
2015 | + """ |
2016 | + cmd = 'sudo ceph df --format=json' |
2017 | + output, code = sentry_unit.run(cmd) |
2018 | + if code != 0: |
2019 | + msg = ('{} `{}` returned {} ' |
2020 | + '{}'.format(sentry_unit.info['unit_name'], |
2021 | + cmd, code, output)) |
2022 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2023 | + return json.loads(output) |
2024 | + |
2025 | + def get_ceph_pool_sample(self, sentry_unit, pool_id=0): |
2026 | + """Take a sample of attributes of a ceph pool, returning ceph |
2027 | + pool name, object count and disk space used for the specified |
2028 | + pool ID number. |
2029 | + |
2030 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
2031 | + :param pool_id: Ceph pool ID |
2032 | + :returns: List of pool name, object count, kb disk space used |
2033 | + """ |
2034 | + df = self.get_ceph_df(sentry_unit) |
2035 | + pool_name = df['pools'][pool_id]['name'] |
2036 | + obj_count = df['pools'][pool_id]['stats']['objects'] |
2037 | + kb_used = df['pools'][pool_id]['stats']['kb_used'] |
2038 | + self.log.debug('Ceph {} pool (ID {}): {} objects, ' |
2039 | + '{} kb used'.format(pool_name, pool_id, |
2040 | + obj_count, kb_used)) |
2041 | + return pool_name, obj_count, kb_used |
2042 | + |
2043 | + def validate_ceph_pool_samples(self, samples, sample_type="resource pool"): |
2044 | + """Validate ceph pool samples taken over time, such as pool |
2045 | + object counts or pool kb used, before adding, after adding, and |
2046 | + after deleting items which affect those pool attributes. The |
2047 | + 2nd element is expected to be greater than the 1st; 3rd is expected |
2048 | + to be less than the 2nd. |
2049 | + |
2050 | + :param samples: List containing 3 data samples |
2051 | + :param sample_type: String for logging and usage context |
2052 | + :returns: None if successful, Failure message otherwise |
2053 | + """ |
2054 | + original, created, deleted = range(3) |
2055 | + if samples[created] <= samples[original] or \ |
2056 | + samples[deleted] >= samples[created]: |
2057 | + return ('Ceph {} samples ({}) ' |
2058 | + 'unexpected.'.format(sample_type, samples)) |
2059 | + else: |
2060 | + self.log.debug('Ceph {} samples (OK): ' |
2061 | + '{}'.format(sample_type, samples)) |
2062 | + return None |
2063 | |
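The sampling helpers above are meant to bracket a create/delete cycle with three data points; a hedged sketch using a glance image (the pool name and sentry attribute are assumptions):

    sentry = self.ceph0_sentry                    # assumed sentry name
    pool_id = u.get_ceph_pools(sentry)['glance']  # assumes a 'glance' pool

    # Sample object count before, after creating, and after deleting an image.
    samples = []
    name, obj_count, kb_used = u.get_ceph_pool_sample(sentry, pool_id)
    samples.append(obj_count)

    image = u.create_cirros_image(self.glance, 'demo-image')
    name, obj_count, kb_used = u.get_ceph_pool_sample(sentry, pool_id)
    samples.append(obj_count)

    u.delete_resource(self.glance.images, image.id, msg='glance image')
    name, obj_count, kb_used = u.get_ceph_pool_sample(sentry, pool_id)
    samples.append(obj_count)

    ret = u.validate_ceph_pool_samples(samples, 'glance pool object count')
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)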
2064 | === added file 'tests/tests.yaml' |
2065 | --- tests/tests.yaml 1970-01-01 00:00:00 +0000 |
2066 | +++ tests/tests.yaml 2015-07-01 14:48:09 +0000 |
2067 | @@ -0,0 +1,18 @@ |
2068 | +bootstrap: true |
2069 | +reset: true |
2070 | +virtualenv: true |
2071 | +makefile: |
2072 | + - lint |
2073 | + - test |
2074 | +sources: |
2075 | + - ppa:juju/stable |
2076 | +packages: |
2077 | + - amulet |
2078 | + - python-amulet |
2079 | + - python-cinderclient |
2080 | + - python-distro-info |
2081 | + - python-glanceclient |
2082 | + - python-heatclient |
2083 | + - python-keystoneclient |
2084 | + - python-novaclient |
2085 | + - python-swiftclient |
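tests.yaml is the control file read by bundletester: it bootstraps and resets the environment, builds a virtualenv, runs the listed makefile targets, and installs the listed packages from the given sources. It is typically run with something like `bundletester -vl DEBUG -r json -o func-results.json`, though the exact flags may vary by bundletester version.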
charm_lint_check #5589 cinder-next for 1chb1n mp262772
LINT OK: passed
Build: http://10.245.162.77:8080/job/charm_lint_check/5589/