Merge lp:~1chb1n/charms/trusty/glance/next-amulet-update into lp:~openstack-charmers-archive/charms/trusty/glance/next
Status: Merged
Merged at revision: 122
Proposed branch: lp:~1chb1n/charms/trusty/glance/next-amulet-update
Merge into: lp:~openstack-charmers-archive/charms/trusty/glance/next
Diff against target: 1784 lines (+929/-460), 13 files modified:
  Makefile (+10/-15)
  hooks/charmhelpers/core/hookenv.py (+92/-36)
  hooks/charmhelpers/core/services/base.py (+12/-9)
  metadata.yaml (+4/-2)
  tests/00-setup (+6/-2)
  tests/020-basic-trusty-liberty (+11/-0)
  tests/021-basic-wily-liberty (+9/-0)
  tests/README (+9/-0)
  tests/basic_deployment.py (+349/-340)
  tests/charmhelpers/contrib/amulet/utils.py (+137/-4)
  tests/charmhelpers/contrib/openstack/amulet/deployment.py (+35/-3)
  tests/charmhelpers/contrib/openstack/amulet/utils.py (+237/-49)
  tests/tests.yaml (+18/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/glance/next-amulet-update
Related bugs:
Reviewer: Corey Bryant (review pending)
Review via email: mp+263413@code.launchpad.net
This proposal supersedes a proposal from 2015-06-30.
Commit message
Description of the change
Update amulet tests for Kilo and prep for Wily. Sync hooks/charmhelpers and tests/charmhelpers.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5346 glance-next for 1chb1n mp263413
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5716 glance-next for 1chb1n mp263413
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5348 glance-next for 1chb1n mp263413
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4906 glance-next for 1chb1n mp263413
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4908 glance-next for 1chb1n mp263413
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Revision 126, by Ryan Beisner: update tests for vivid-kilo
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5718 glance-next for 1chb1n mp263413
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5350 glance-next for 1chb1n mp263413
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4910 glance-next for 1chb1n mp263413
AMULET OK: passed
Build: http://
Preview Diff
=== modified file 'Makefile'
--- Makefile	2015-04-16 21:32:02 +0000
+++ Makefile	2015-07-02 12:52:15 +0000
@@ -2,16 +2,18 @@
 PYTHON := /usr/bin/env python
 
 lint:
-	@echo "Running flake8 tests: "
-	@flake8 --exclude hooks/charmhelpers actions hooks unit_tests tests
-	@echo "OK"
-	@echo "Running charm proof: "
+	@flake8 --exclude hooks/charmhelpers,tests/charmhelpers \
+		actions hooks unit_tests tests
 	@charm proof
-	@echo "OK"
 
-unit_test:
+test:
+	@# Bundletester expects unit tests here.
 	@$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
 
-functional_test:
-	@echo Starting Amulet tests...
-	@juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700
-
 bin/charm_helpers_sync.py:
 	@mkdir -p bin
 	@bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
@@ -21,15 +23,8 @@
 	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
 	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
 
-test:
-	@echo Starting Amulet tests...
-	# /!\ Note: The -v should only be temporary until Amulet sends
-	# raise_status() messages to stderr:
-	# https://bugs.launchpad.net/amulet/+bug/1320357
-	@juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700
-
-publish: lint unit_test
+publish: lint test
 	bzr push lp:charms/glance
 	bzr push lp:charms/trusty/glance
 
-all: unit_test lint
+all: test lint
 
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py	2015-06-10 20:31:46 +0000
+++ hooks/charmhelpers/core/hookenv.py	2015-07-02 12:52:15 +0000
@@ -21,7 +21,9 @@
 # Charm Helpers Developers <juju@lists.ubuntu.com>
 
 from __future__ import print_function
+from distutils.version import LooseVersion
 from functools import wraps
+import glob
 import os
 import json
 import yaml
@@ -242,29 +244,7 @@
         self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
         if os.path.exists(self.path):
             self.load_previous()
-
-    def __getitem__(self, key):
-        """For regular dict lookups, check the current juju config first,
-        then the previous (saved) copy. This ensures that user-saved values
-        will be returned by a dict lookup.
-
-        """
-        try:
-            return dict.__getitem__(self, key)
-        except KeyError:
-            return (self._prev_dict or {})[key]
-
-    def get(self, key, default=None):
-        try:
-            return self[key]
-        except KeyError:
-            return default
-
-    def keys(self):
-        prev_keys = []
-        if self._prev_dict is not None:
-            prev_keys = self._prev_dict.keys()
-        return list(set(prev_keys + list(dict.keys(self))))
+        atexit(self._implicit_save)
 
     def load_previous(self, path=None):
         """Load previous copy of config from disk.
@@ -283,6 +263,9 @@
         self.path = path or self.path
         with open(self.path) as f:
             self._prev_dict = json.load(f)
+        for k, v in self._prev_dict.items():
+            if k not in self:
+                self[k] = v
 
     def changed(self, key):
         """Return True if the current value for this key is different from
@@ -314,13 +297,13 @@
         instance.
 
         """
-        if self._prev_dict:
-            for k, v in six.iteritems(self._prev_dict):
-                if k not in self:
-                    self[k] = v
         with open(self.path, 'w') as f:
             json.dump(self, f)
 
+    def _implicit_save(self):
+        if self.implicit_save:
+            self.save()
+
 
 @cached
 def config(scope=None):
@@ -587,10 +570,14 @@
         hooks.execute(sys.argv)
     """
 
-    def __init__(self, config_save=True):
+    def __init__(self, config_save=None):
         super(Hooks, self).__init__()
         self._hooks = {}
-        self._config_save = config_save
+
+        # For unknown reasons, we allow the Hooks constructor to override
+        # config().implicit_save.
+        if config_save is not None:
+            config().implicit_save = config_save
 
     def register(self, name, function):
         """Register a hook"""
@@ -598,13 +585,16 @@
 
     def execute(self, args):
         """Execute a registered hook based on args[0]"""
+        _run_atstart()
         hook_name = os.path.basename(args[0])
         if hook_name in self._hooks:
-            self._hooks[hook_name]()
-            if self._config_save:
-                cfg = config()
-                if cfg.implicit_save:
-                    cfg.save()
+            try:
+                self._hooks[hook_name]()
+            except SystemExit as x:
+                if x.code is None or x.code == 0:
+                    _run_atexit()
+                raise
+            _run_atexit()
         else:
             raise UnregisteredHookError(hook_name)
 
@@ -732,13 +722,79 @@
 @translate_exc(from_exc=OSError, to_exc=NotImplementedError)
 def leader_set(settings=None, **kwargs):
     """Juju leader set value(s)"""
-    log("Juju leader-set '%s'" % (settings), level=DEBUG)
+    # Don't log secrets.
+    # log("Juju leader-set '%s'" % (settings), level=DEBUG)
     cmd = ['leader-set']
     settings = settings or {}
     settings.update(kwargs)
-    for k, v in settings.iteritems():
+    for k, v in settings.items():
         if v is None:
             cmd.append('{}='.format(k))
         else:
             cmd.append('{}={}'.format(k, v))
     subprocess.check_call(cmd)
+
+
+@cached
+def juju_version():
+    """Full version string (eg. '1.23.3.1-trusty-amd64')"""
+    # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1
+    jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0]
+    return subprocess.check_output([jujud, 'version'],
+                                   universal_newlines=True).strip()
+
+
+@cached
+def has_juju_version(minimum_version):
+    """Return True if the Juju version is at least the provided version"""
+    return LooseVersion(juju_version()) >= LooseVersion(minimum_version)
+
+
+_atexit = []
+_atstart = []
+
+
+def atstart(callback, *args, **kwargs):
+    '''Schedule a callback to run before the main hook.
+
+    Callbacks are run in the order they were added.
+
+    This is useful for modules and classes to perform initialization
+    and inject behavior. In particular:
+        - Run common code before all of your hooks, such as logging
+          the hook name or interesting relation data.
+        - Defer object or module initialization that requires a hook
+          context until we know there actually is a hook context,
+          making testing easier.
+        - Rather than requiring charm authors to include boilerplate to
+          invoke your helper's behavior, have it run automatically if
+          your object is instantiated or module imported.
+
+    This is not at all useful after your hook framework as been launched.
+    '''
+    global _atstart
+    _atstart.append((callback, args, kwargs))
+
+
+def atexit(callback, *args, **kwargs):
+    '''Schedule a callback to run on successful hook completion.
+
+    Callbacks are run in the reverse order that they were added.'''
+    _atexit.append((callback, args, kwargs))
+
+
+def _run_atstart():
+    '''Hook frameworks must invoke this before running the main hook body.'''
+    global _atstart
+    for callback, args, kwargs in _atstart:
+        callback(*args, **kwargs)
+    del _atstart[:]
+
+
+def _run_atexit():
+    '''Hook frameworks must invoke this after the main hook body has
    successfully completed. Do not invoke it if the hook fails.'''
+    global _atexit
+    for callback, args, kwargs in reversed(_atexit):
+        callback(*args, **kwargs)
+    del _atexit[:]
 
=== modified file 'hooks/charmhelpers/core/services/base.py'
--- hooks/charmhelpers/core/services/base.py	2015-06-10 20:31:46 +0000
+++ hooks/charmhelpers/core/services/base.py	2015-07-02 12:52:15 +0000
@@ -128,15 +128,18 @@
         """
         Handle the current hook by doing The Right Thing with the registered services.
         """
-        hook_name = hookenv.hook_name()
-        if hook_name == 'stop':
-            self.stop_services()
-        else:
-            self.reconfigure_services()
-            self.provide_data()
-        cfg = hookenv.config()
-        if cfg.implicit_save:
-            cfg.save()
+        hookenv._run_atstart()
+        try:
+            hook_name = hookenv.hook_name()
+            if hook_name == 'stop':
+                self.stop_services()
+            else:
+                self.reconfigure_services()
+                self.provide_data()
+        except SystemExit as x:
+            if x.code is None or x.code == 0:
+                hookenv._run_atexit()
+        hookenv._run_atexit()
 
     def provide_data(self):
         """
 
=== modified file 'metadata.yaml'
--- metadata.yaml	2014-10-30 03:30:35 +0000
+++ metadata.yaml	2015-07-02 12:52:15 +0000
@@ -6,8 +6,10 @@
   (Parallax) and an image delivery service (Teller). These services are used
   in conjunction by Nova to deliver images from object stores, such as
   OpenStack's Swift service, to Nova's compute nodes.
-categories:
-  - miscellaneous
+tags:
+  - openstack
+  - storage
+  - misc
 provides:
   nrpe-external-master:
     interface: nrpe-external-master
 
=== modified file 'tests/00-setup'
--- tests/00-setup	2014-10-08 20:18:38 +0000
+++ tests/00-setup	2015-07-02 12:52:15 +0000
@@ -5,6 +5,10 @@
 sudo add-apt-repository --yes ppa:juju/stable
 sudo apt-get update --yes
 sudo apt-get install --yes python-amulet \
+                           python-cinderclient \
+                           python-distro-info \
+                           python-glanceclient \
+                           python-heatclient \
                            python-keystoneclient \
-                           python-glanceclient \
-                           python-novaclient
+                           python-novaclient \
+                           python-swiftclient
 
=== modified file 'tests/017-basic-trusty-kilo' (properties changed: -x to +x)
=== modified file 'tests/019-basic-vivid-kilo' (properties changed: -x to +x)
=== added file 'tests/020-basic-trusty-liberty'
--- tests/020-basic-trusty-liberty	1970-01-01 00:00:00 +0000
+++ tests/020-basic-trusty-liberty	2015-07-02 12:52:15 +0000
@@ -0,0 +1,11 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic glance deployment on trusty-liberty."""
+
+from basic_deployment import GlanceBasicDeployment
+
+if __name__ == '__main__':
+    deployment = GlanceBasicDeployment(series='trusty',
+                                       openstack='cloud:trusty-liberty',
+                                       source='cloud:trusty-updates/liberty')
+    deployment.run_tests()

=== added file 'tests/021-basic-wily-liberty'
--- tests/021-basic-wily-liberty	1970-01-01 00:00:00 +0000
+++ tests/021-basic-wily-liberty	2015-07-02 12:52:15 +0000
@@ -0,0 +1,9 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic glance deployment on wily-liberty."""
+
+from basic_deployment import GlanceBasicDeployment
+
+if __name__ == '__main__':
+    deployment = GlanceBasicDeployment(series='wily')
+    deployment.run_tests()
=== modified file 'tests/README'
--- tests/README	2014-10-08 20:18:38 +0000
+++ tests/README	2015-07-02 12:52:15 +0000
@@ -1,6 +1,15 @@
 This directory provides Amulet tests that focus on verification of Glance
 deployments.
 
+test_* methods are called in lexical sort order.
+
+Test name convention to ensure desired test order:
+    1xx service and endpoint checks
+    2xx relation checks
+    3xx config checks
+    4xx functional checks
+    9xx restarts and other final checks
+
 In order to run tests, you'll need charm-tools installed (in addition to
 juju, of course):
     sudo add-apt-repository ppa:juju/stable
 
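The README addition above relies on the test runner calling test_* methods in lexical (string) sort order, which is why the zero-padded 1xx/2xx/9xx prefixes pin execution order. The sample method names below are illustrative:

```python
# Lexical sorting of test method names: zero-padded numeric prefixes
# make string order match the intended execution order.
names = [
    'test_900_restart_on_config_change',
    'test_100_services',
    'test_200_mysql_glance_db_relation',
    'test_102_service_catalog',
]
print(sorted(names))
# ['test_100_services', 'test_102_service_catalog',
#  'test_200_mysql_glance_db_relation', 'test_900_restart_on_config_change']
```

Without the padding, a name like test_20_foo would sort before test_9_bar even though 9 < 20.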
=== modified file 'tests/basic_deployment.py'
--- tests/basic_deployment.py	2015-05-12 14:49:27 +0000
+++ tests/basic_deployment.py	2015-07-02 12:52:15 +0000
@@ -1,7 +1,12 @@
 #!/usr/bin/python
 
+"""
+Basic glance amulet functional tests.
+"""
+
 import amulet
 import os
+import time
 import yaml
 
 from charmhelpers.contrib.openstack.amulet.deployment import (
@@ -10,25 +15,24 @@
 
 from charmhelpers.contrib.openstack.amulet.utils import (
     OpenStackAmuletUtils,
-    DEBUG, # flake8: noqa
-    ERROR
+    DEBUG,
+    # ERROR
 )
 
 # Use DEBUG to turn on debug logging
 u = OpenStackAmuletUtils(DEBUG)
 
+
 class GlanceBasicDeployment(OpenStackAmuletDeployment):
-    '''Amulet tests on a basic file-backed glance deployment. Verify relations,
-    service status, endpoint service catalog, create and delete new image.'''
-
-    # TO-DO(beisner):
-    # * Add tests with different storage back ends
-    # * Resolve Essex->Havana juju set charm bug
+    """Amulet tests on a basic file-backed glance deployment. Verify
+    relations, service status, endpoint service catalog, create and
+    delete new image."""
 
     def __init__(self, series=None, openstack=None, source=None, git=False,
                  stable=False):
-        '''Deploy the entire test environment.'''
-        super(GlanceBasicDeployment, self).__init__(series, openstack, source, stable)
+        """Deploy the entire test environment."""
+        super(GlanceBasicDeployment, self).__init__(series, openstack,
+                                                    source, stable)
         self.git = git
         self._add_services()
         self._add_relations()
@@ -37,20 +41,21 @@
         self._initialize_tests()
 
     def _add_services(self):
-        '''Add services
+        """Add services
 
         Add the services that we're testing, where glance is local,
         and the rest of the service are from lp branches that are
         compatible with the local charm (e.g. stable or next).
-        '''
+        """
         this_service = {'name': 'glance'}
-        other_services = [{'name': 'mysql'}, {'name': 'rabbitmq-server'},
+        other_services = [{'name': 'mysql'},
+                          {'name': 'rabbitmq-server'},
                           {'name': 'keystone'}]
         super(GlanceBasicDeployment, self)._add_services(this_service,
                                                          other_services)
 
     def _add_relations(self):
-        '''Add relations for the services.'''
+        """Add relations for the services."""
         relations = {'glance:identity-service': 'keystone:identity-service',
                      'glance:shared-db': 'mysql:shared-db',
                      'keystone:shared-db': 'mysql:shared-db',
@@ -58,7 +63,7 @@
         super(GlanceBasicDeployment, self)._add_relations(relations)
 
     def _configure_services(self):
-        '''Configure all of the services.'''
+        """Configure all of the services."""
         glance_config = {}
         if self.git:
             branch = 'stable/' + self._get_openstack_release_string()
@@ -76,7 +81,8 @@
                 'http_proxy': amulet_http_proxy,
                 'https_proxy': amulet_http_proxy,
             }
-            glance_config['openstack-origin-git'] = yaml.dump(openstack_origin_git)
+            glance_config['openstack-origin-git'] = \
+                yaml.dump(openstack_origin_git)
 
         keystone_config = {'admin-password': 'openstack',
                            'admin-token': 'ubuntutesting'}
@@ -87,12 +93,19 @@
         super(GlanceBasicDeployment, self)._configure_services(configs)
 
     def _initialize_tests(self):
-        '''Perform final initialization before tests get run.'''
+        """Perform final initialization before tests get run."""
         # Access the sentries for inspecting service units
         self.mysql_sentry = self.d.sentry.unit['mysql/0']
         self.glance_sentry = self.d.sentry.unit['glance/0']
         self.keystone_sentry = self.d.sentry.unit['keystone/0']
         self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
+        u.log.debug('openstack release val: {}'.format(
+            self._get_openstack_release()))
+        u.log.debug('openstack release str: {}'.format(
+            self._get_openstack_release_string()))
+
+        # Let things settle a bit before moving forward
+        time.sleep(30)
 
         # Authenticate admin with keystone
         self.keystone = u.authenticate_keystone_admin(self.keystone_sentry,
@@ -103,46 +116,103 @@
         # Authenticate admin with glance endpoint
         self.glance = u.authenticate_glance_admin(self.keystone)
 
-        u.log.debug('openstack release: {}'.format(self._get_openstack_release()))
-
-    def test_services(self):
-        '''Verify that the expected services are running on the
-        corresponding service units.'''
-        commands = {
-            self.mysql_sentry: ['status mysql'],
-            self.keystone_sentry: ['status keystone'],
-            self.glance_sentry: ['status glance-api', 'status glance-registry'],
-            self.rabbitmq_sentry: ['sudo service rabbitmq-server status']
+    def test_100_services(self):
+        """Verify that the expected services are running on the
+        corresponding service units."""
+        services = {
+            self.mysql_sentry: ['mysql'],
+            self.keystone_sentry: ['keystone'],
+            self.glance_sentry: ['glance-api', 'glance-registry'],
+            self.rabbitmq_sentry: ['rabbitmq-server']
         }
-        u.log.debug('commands: {}'.format(commands))
-        ret = u.validate_services(commands)
+
+        ret = u.validate_services_by_name(services)
         if ret:
             amulet.raise_status(amulet.FAIL, msg=ret)
 
-    def test_service_catalog(self):
-        '''Verify that the service catalog endpoint data'''
-        endpoint_vol = {'adminURL': u.valid_url,
-                        'region': 'RegionOne',
-                        'publicURL': u.valid_url,
-                        'internalURL': u.valid_url}
-        endpoint_id = {'adminURL': u.valid_url,
-                       'region': 'RegionOne',
-                       'publicURL': u.valid_url,
-                       'internalURL': u.valid_url}
-        if self._get_openstack_release() >= self.trusty_icehouse:
-            endpoint_vol['id'] = u.not_null
-            endpoint_id['id'] = u.not_null
-
-        expected = {'image': [endpoint_id],
-                    'identity': [endpoint_id]}
+    def test_102_service_catalog(self):
+        """Verify that the service catalog endpoint data is valid."""
+        u.log.debug('Checking keystone service catalog...')
+        endpoint_check = {
+            'adminURL': u.valid_url,
+            'id': u.not_null,
+            'region': 'RegionOne',
+            'publicURL': u.valid_url,
+            'internalURL': u.valid_url
+        }
+        expected = {
+            'image': [endpoint_check],
+            'identity': [endpoint_check]
+        }
         actual = self.keystone.service_catalog.get_endpoints()
 
         ret = u.validate_svc_catalog_endpoint_data(expected, actual)
         if ret:
             amulet.raise_status(amulet.FAIL, msg=ret)
 
-    def test_mysql_glance_db_relation(self):
-        '''Verify the mysql:glance shared-db relation data'''
+    def test_104_glance_endpoint(self):
+        """Verify the glance endpoint data."""
+        u.log.debug('Checking glance api endpoint data...')
+        endpoints = self.keystone.endpoints.list()
+        admin_port = internal_port = public_port = '9292'
+        expected = {
+            'id': u.not_null,
+            'region': 'RegionOne',
+            'adminurl': u.valid_url,
+            'internalurl': u.valid_url,
+            'publicurl': u.valid_url,
+            'service_id': u.not_null
+        }
+        ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
+                                       public_port, expected)
+
+        if ret:
+            amulet.raise_status(amulet.FAIL,
+                                msg='glance endpoint: {}'.format(ret))
+
+    def test_106_keystone_endpoint(self):
+        """Verify the keystone endpoint data."""
+        u.log.debug('Checking keystone api endpoint data...')
+        endpoints = self.keystone.endpoints.list()
+        admin_port = '35357'
+        internal_port = public_port = '5000'
+        expected = {
+            'id': u.not_null,
+            'region': 'RegionOne',
+            'adminurl': u.valid_url,
+            'internalurl': u.valid_url,
+            'publicurl': u.valid_url,
+            'service_id': u.not_null
+        }
+        ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
+                                       public_port, expected)
+        if ret:
+            amulet.raise_status(amulet.FAIL,
+                                msg='keystone endpoint: {}'.format(ret))
+
+    def test_110_users(self):
+        """Verify expected users."""
+        u.log.debug('Checking keystone users...')
+        expected = [
+            {'name': 'glance',
+             'enabled': True,
+             'tenantId': u.not_null,
+             'id': u.not_null,
+             'email': 'juju@localhost'},
+            {'name': 'admin',
+             'enabled': True,
+             'tenantId': u.not_null,
+             'id': u.not_null,
+             'email': 'juju@localhost'}
+        ]
+        actual = self.keystone.users.list()
+        ret = u.validate_user_data(expected, actual)
+        if ret:
+            amulet.raise_status(amulet.FAIL, msg=ret)
+
+    def test_200_mysql_glance_db_relation(self):
+        """Verify the mysql:glance shared-db relation data"""
607 | 215 | u.log.debug('Checking mysql to glance shared-db relation data...') | ||
608 | 146 | unit = self.mysql_sentry | 216 | unit = self.mysql_sentry |
609 | 147 | relation = ['shared-db', 'glance:shared-db'] | 217 | relation = ['shared-db', 'glance:shared-db'] |
610 | 148 | expected = { | 218 | expected = { |
611 | @@ -154,8 +224,9 @@ | |||
612 | 154 | message = u.relation_error('mysql shared-db', ret) | 224 | message = u.relation_error('mysql shared-db', ret) |
613 | 155 | amulet.raise_status(amulet.FAIL, msg=message) | 225 | amulet.raise_status(amulet.FAIL, msg=message) |
614 | 156 | 226 | ||
617 | 157 | def test_glance_mysql_db_relation(self): | 227 | def test_201_glance_mysql_db_relation(self): |
618 | 158 | '''Verify the glance:mysql shared-db relation data''' | 228 | """Verify the glance:mysql shared-db relation data""" |
619 | 229 | u.log.debug('Checking glance to mysql shared-db relation data...') | ||
620 | 159 | unit = self.glance_sentry | 230 | unit = self.glance_sentry |
621 | 160 | relation = ['shared-db', 'mysql:shared-db'] | 231 | relation = ['shared-db', 'mysql:shared-db'] |
622 | 161 | expected = { | 232 | expected = { |
623 | @@ -169,8 +240,9 @@ | |||
624 | 169 | message = u.relation_error('glance shared-db', ret) | 240 | message = u.relation_error('glance shared-db', ret) |
625 | 170 | amulet.raise_status(amulet.FAIL, msg=message) | 241 | amulet.raise_status(amulet.FAIL, msg=message) |
626 | 171 | 242 | ||
629 | 172 | def test_keystone_glance_id_relation(self): | 243 | def test_202_keystone_glance_id_relation(self): |
630 | 173 | '''Verify the keystone:glance identity-service relation data''' | 244 | """Verify the keystone:glance identity-service relation data""" |
631 | 245 | u.log.debug('Checking keystone to glance id relation data...') | ||
632 | 174 | unit = self.keystone_sentry | 246 | unit = self.keystone_sentry |
633 | 175 | relation = ['identity-service', | 247 | relation = ['identity-service', |
634 | 176 | 'glance:identity-service'] | 248 | 'glance:identity-service'] |
635 | @@ -193,8 +265,9 @@ | |||
636 | 193 | message = u.relation_error('keystone identity-service', ret) | 265 | message = u.relation_error('keystone identity-service', ret) |
637 | 194 | amulet.raise_status(amulet.FAIL, msg=message) | 266 | amulet.raise_status(amulet.FAIL, msg=message) |
638 | 195 | 267 | ||
641 | 196 | def test_glance_keystone_id_relation(self): | 268 | def test_203_glance_keystone_id_relation(self): |
642 | 197 | '''Verify the glance:keystone identity-service relation data''' | 269 | """Verify the glance:keystone identity-service relation data""" |
643 | 270 | u.log.debug('Checking glance to keystone relation data...') | ||
644 | 198 | unit = self.glance_sentry | 271 | unit = self.glance_sentry |
645 | 199 | relation = ['identity-service', | 272 | relation = ['identity-service', |
646 | 200 | 'keystone:identity-service'] | 273 | 'keystone:identity-service'] |
647 | @@ -211,8 +284,9 @@ | |||
648 | 211 | message = u.relation_error('glance identity-service', ret) | 284 | message = u.relation_error('glance identity-service', ret) |
649 | 212 | amulet.raise_status(amulet.FAIL, msg=message) | 285 | amulet.raise_status(amulet.FAIL, msg=message) |
650 | 213 | 286 | ||
653 | 214 | def test_rabbitmq_glance_amqp_relation(self): | 287 | def test_204_rabbitmq_glance_amqp_relation(self): |
654 | 215 | '''Verify the rabbitmq-server:glance amqp relation data''' | 288 | """Verify the rabbitmq-server:glance amqp relation data""" |
655 | 289 | u.log.debug('Checking rmq to glance amqp relation data...') | ||
656 | 216 | unit = self.rabbitmq_sentry | 290 | unit = self.rabbitmq_sentry |
657 | 217 | relation = ['amqp', 'glance:amqp'] | 291 | relation = ['amqp', 'glance:amqp'] |
658 | 218 | expected = { | 292 | expected = { |
659 | @@ -225,8 +299,9 @@ | |||
660 | 225 | message = u.relation_error('rabbitmq amqp', ret) | 299 | message = u.relation_error('rabbitmq amqp', ret) |
661 | 226 | amulet.raise_status(amulet.FAIL, msg=message) | 300 | amulet.raise_status(amulet.FAIL, msg=message) |
662 | 227 | 301 | ||
665 | 228 | def test_glance_rabbitmq_amqp_relation(self): | 302 | def test_205_glance_rabbitmq_amqp_relation(self): |
666 | 229 | '''Verify the glance:rabbitmq-server amqp relation data''' | 303 | """Verify the glance:rabbitmq-server amqp relation data""" |
667 | 304 | u.log.debug('Checking glance to rmq amqp relation data...') | ||
668 | 230 | unit = self.glance_sentry | 305 | unit = self.glance_sentry |
669 | 231 | relation = ['amqp', 'rabbitmq-server:amqp'] | 306 | relation = ['amqp', 'rabbitmq-server:amqp'] |
670 | 232 | expected = { | 307 | expected = { |
671 | @@ -239,291 +314,225 @@ | |||
672 | 239 | message = u.relation_error('glance amqp', ret) | 314 | message = u.relation_error('glance amqp', ret) |
673 | 240 | amulet.raise_status(amulet.FAIL, msg=message) | 315 | amulet.raise_status(amulet.FAIL, msg=message) |
674 | 241 | 316 | ||
750 | 242 | def test_image_create_delete(self): | 317 | def _get_keystone_authtoken_expected_dict(self, rel_ks_gl): |
751 | 243 | '''Create new cirros image in glance, verify, then delete it''' | 318 | """Return expected authtoken dict for OS release""" |
752 | 244 | 319 | expected = { | |
753 | 245 | # Create a new image | 320 | 'keystone_authtoken': { |
754 | 246 | image_name = 'cirros-image-1' | 321 | 'signing_dir': '/var/cache/glance', |
755 | 247 | image_new = u.create_cirros_image(self.glance, image_name) | 322 | 'admin_tenant_name': 'services', |
756 | 248 | 323 | 'admin_user': 'glance', | |
757 | 249 | # Confirm image is created and has status of 'active' | 324 | 'admin_password': rel_ks_gl['service_password'], |
758 | 250 | if not image_new: | 325 | 'auth_uri': u.valid_url |
759 | 251 | message = 'glance image create failed' | 326 | } |
760 | 252 | amulet.raise_status(amulet.FAIL, msg=message) | 327 | } |
761 | 253 | 328 | ||
762 | 254 | # Verify new image name | 329 | if self._get_openstack_release() >= self.trusty_kilo: |
763 | 255 | images_list = list(self.glance.images.list()) | 330 | # Trusty-Kilo and later |
764 | 256 | if images_list[0].name != image_name: | 331 | expected['keystone_authtoken'].update({ |
765 | 257 | message = 'glance image create failed or unexpected image name {}'.format(images_list[0].name) | 332 | 'identity_uri': u.valid_url, |
766 | 258 | amulet.raise_status(amulet.FAIL, msg=message) | 333 | }) |
767 | 259 | 334 | else: | |
768 | 260 | # Delete the new image | 335 | # Utopic-Juno and earlier |
769 | 261 | u.log.debug('image count before delete: {}'.format(len(list(self.glance.images.list())))) | 336 | expected['keystone_authtoken'].update({ |
770 | 262 | u.delete_image(self.glance, image_new) | 337 | 'auth_host': rel_ks_gl['auth_host'], |
771 | 263 | u.log.debug('image count after delete: {}'.format(len(list(self.glance.images.list())))) | 338 | 'auth_port': rel_ks_gl['auth_port'], |
772 | 264 | 339 | 'auth_protocol': rel_ks_gl['auth_protocol'] | |
773 | 265 | def test_glance_api_default_config(self): | 340 | }) |
774 | 266 | '''Verify default section configs in glance-api.conf and | 341 | |
775 | 267 | compare some of the parameters to relation data.''' | 342 | return expected |
776 | 268 | unit = self.glance_sentry | 343 | |
777 | 269 | rel_gl_mq = unit.relation('amqp', 'rabbitmq-server:amqp') | 344 | def test_300_glance_api_default_config(self): |
778 | 270 | conf = '/etc/glance/glance-api.conf' | 345 | """Verify default section configs in glance-api.conf and |
779 | 271 | expected = {'use_syslog': 'False', | 346 | compare some of the parameters to relation data.""" |
780 | 272 | 'default_store': 'file', | 347 | u.log.debug('Checking glance api config file...') |
781 | 273 | 'filesystem_store_datadir': '/var/lib/glance/images/', | 348 | unit = self.glance_sentry |
782 | 274 | 'rabbit_userid': rel_gl_mq['username'], | 349 | unit_ks = self.keystone_sentry |
783 | 275 | 'log_file': '/var/log/glance/api.log', | 350 | rel_mq_gl = self.rabbitmq_sentry.relation('amqp', 'glance:amqp') |
784 | 276 | 'debug': 'False', | 351 | rel_ks_gl = unit_ks.relation('identity-service', |
785 | 277 | 'verbose': 'False'} | 352 | 'glance:identity-service') |
786 | 278 | section = 'DEFAULT' | 353 | rel_my_gl = self.mysql_sentry.relation('shared-db', 'glance:shared-db') |
787 | 279 | 354 | db_uri = "mysql://{}:{}@{}/{}".format('glance', rel_my_gl['password'], | |
788 | 280 | if self._get_openstack_release() <= self.precise_havana: | 355 | rel_my_gl['db_host'], 'glance') |
789 | 281 | # Defaults were different before icehouse | 356 | conf = '/etc/glance/glance-api.conf' |
790 | 282 | expected['debug'] = 'True' | 357 | expected = { |
791 | 283 | expected['verbose'] = 'True' | 358 | 'DEFAULT': { |
792 | 284 | 359 | 'debug': 'False', | |
793 | 285 | ret = u.validate_config_data(unit, conf, section, expected) | 360 | 'verbose': 'False', |
794 | 286 | if ret: | 361 | 'use_syslog': 'False', |
795 | 287 | message = "glance-api default config error: {}".format(ret) | 362 | 'log_file': '/var/log/glance/api.log', |
796 | 288 | amulet.raise_status(amulet.FAIL, msg=message) | 363 | 'bind_host': '0.0.0.0', |
797 | 289 | 364 | 'bind_port': '9282', | |
798 | 290 | def test_glance_api_auth_config(self): | 365 | 'registry_host': '0.0.0.0', |
799 | 291 | '''Verify authtoken section config in glance-api.conf using | 366 | 'registry_port': '9191', |
800 | 292 | glance/keystone relation data.''' | 367 | 'registry_client_protocol': 'http', |
801 | 293 | unit_gl = self.glance_sentry | 368 | 'delayed_delete': 'False', |
802 | 294 | unit_ks = self.keystone_sentry | 369 | 'scrub_time': '43200', |
803 | 295 | rel_gl_mq = unit_gl.relation('amqp', 'rabbitmq-server:amqp') | 370 | 'notification_driver': 'rabbit', |
804 | 296 | rel_ks_gl = unit_ks.relation('identity-service', 'glance:identity-service') | 371 | 'scrubber_datadir': '/var/lib/glance/scrubber', |
805 | 297 | conf = '/etc/glance/glance-api.conf' | 372 | 'image_cache_dir': '/var/lib/glance/image-cache/', |
806 | 298 | section = 'keystone_authtoken' | 373 | 'db_enforce_mysql_charset': 'False' |
807 | 299 | 374 | }, | |
808 | 300 | if self._get_openstack_release() > self.precise_havana: | 375 | } |
809 | 301 | # No auth config exists in this file before icehouse | 376 | |
810 | 302 | expected = {'admin_user': 'glance', | 377 | expected.update(self._get_keystone_authtoken_expected_dict(rel_ks_gl)) |
811 | 303 | 'admin_password': rel_ks_gl['service_password']} | 378 | |
812 | 304 | 379 | if self._get_openstack_release() >= self.trusty_kilo: | |
813 | 305 | ret = u.validate_config_data(unit_gl, conf, section, expected) | 380 | # Kilo or later |
814 | 306 | if ret: | 381 | expected['oslo_messaging_rabbit'] = { |
815 | 307 | message = "glance-api auth config error: {}".format(ret) | 382 | 'rabbit_userid': 'glance', |
816 | 308 | amulet.raise_status(amulet.FAIL, msg=message) | 383 | 'rabbit_virtual_host': 'openstack', |
817 | 309 | 384 | 'rabbit_password': rel_mq_gl['password'], | |
818 | 310 | def test_glance_api_paste_auth_config(self): | 385 | 'rabbit_host': rel_mq_gl['hostname'] |
819 | 311 | '''Verify authtoken section config in glance-api-paste.ini using | 386 | } |
820 | 312 | glance/keystone relation data.''' | 387 | expected['glance_store'] = { |
821 | 313 | unit_gl = self.glance_sentry | 388 | 'filesystem_store_datadir': '/var/lib/glance/images/', |
822 | 314 | unit_ks = self.keystone_sentry | 389 | 'stores': 'glance.store.filesystem.' |
823 | 315 | rel_gl_mq = unit_gl.relation('amqp', 'rabbitmq-server:amqp') | 390 | 'Store,glance.store.http.Store', |
824 | 316 | rel_ks_gl = unit_ks.relation('identity-service', 'glance:identity-service') | 391 | 'default_store': 'file' |
825 | 392 | } | ||
826 | 393 | expected['database'] = { | ||
827 | 394 | 'idle_timeout': '3600', | ||
828 | 395 | 'connection': db_uri | ||
829 | 396 | } | ||
830 | 397 | else: | ||
831 | 398 | # Juno or earlier | ||
832 | 399 | expected['DEFAULT'].update({ | ||
833 | 400 | 'rabbit_userid': 'glance', | ||
834 | 401 | 'rabbit_virtual_host': 'openstack', | ||
835 | 402 | 'rabbit_password': rel_mq_gl['password'], | ||
836 | 403 | 'rabbit_host': rel_mq_gl['hostname'], | ||
837 | 404 | 'filesystem_store_datadir': '/var/lib/glance/images/', | ||
838 | 405 | 'default_store': 'file', | ||
839 | 406 | }) | ||
840 | 407 | expected['database'] = { | ||
841 | 408 | 'sql_idle_timeout': '3600', | ||
842 | 409 | 'connection': db_uri | ||
843 | 410 | } | ||
844 | 411 | |||
845 | 412 | for section, pairs in expected.iteritems(): | ||
846 | 413 | ret = u.validate_config_data(unit, conf, section, pairs) | ||
847 | 414 | if ret: | ||
848 | 415 | message = "glance api config error: {}".format(ret) | ||
849 | 416 | amulet.raise_status(amulet.FAIL, msg=message) | ||
850 | 417 | |||
851 | 418 | def test_302_glance_registry_default_config(self): | ||
852 | 419 | """Verify configs in glance-registry.conf""" | ||
853 | 420 | u.log.debug('Checking glance registry config file...') | ||
854 | 421 | unit = self.glance_sentry | ||
855 | 422 | unit_ks = self.keystone_sentry | ||
856 | 423 | rel_ks_gl = unit_ks.relation('identity-service', | ||
857 | 424 | 'glance:identity-service') | ||
858 | 425 | rel_my_gl = self.mysql_sentry.relation('shared-db', 'glance:shared-db') | ||
859 | 426 | db_uri = "mysql://{}:{}@{}/{}".format('glance', rel_my_gl['password'], | ||
860 | 427 | rel_my_gl['db_host'], 'glance') | ||
861 | 428 | conf = '/etc/glance/glance-registry.conf' | ||
862 | 429 | |||
863 | 430 | expected = { | ||
864 | 431 | 'DEFAULT': { | ||
865 | 432 | 'use_syslog': 'False', | ||
866 | 433 | 'log_file': '/var/log/glance/registry.log', | ||
867 | 434 | 'debug': 'False', | ||
868 | 435 | 'verbose': 'False', | ||
869 | 436 | 'bind_host': '0.0.0.0', | ||
870 | 437 | 'bind_port': '9191' | ||
871 | 438 | }, | ||
872 | 439 | } | ||
873 | 440 | |||
874 | 441 | if self._get_openstack_release() >= self.trusty_kilo: | ||
875 | 442 | # Kilo or later | ||
876 | 443 | expected['database'] = { | ||
877 | 444 | 'idle_timeout': '3600', | ||
878 | 445 | 'connection': db_uri | ||
879 | 446 | } | ||
880 | 447 | else: | ||
881 | 448 | # Juno or earlier | ||
882 | 449 | expected['database'] = { | ||
883 | 450 | 'idle_timeout': '3600', | ||
884 | 451 | 'connection': db_uri | ||
885 | 452 | } | ||
886 | 453 | |||
887 | 454 | expected.update(self._get_keystone_authtoken_expected_dict(rel_ks_gl)) | ||
888 | 455 | |||
889 | 456 | for section, pairs in expected.iteritems(): | ||
890 | 457 | ret = u.validate_config_data(unit, conf, section, pairs) | ||
891 | 458 | if ret: | ||
892 | 459 | message = "glance registry config error: {}".format(ret) | |
893 | 460 | amulet.raise_status(amulet.FAIL, msg=message) | ||
894 | 461 | |||
895 | 462 | def _get_filter_factory_expected_dict(self): | ||
896 | 463 | """Return expected authtoken filter factory dict for OS release""" | ||
897 | 464 | if self._get_openstack_release() >= self.trusty_kilo: | ||
898 | 465 | # Kilo and later | ||
899 | 466 | val = 'keystonemiddleware.auth_token:filter_factory' | ||
900 | 467 | else: | ||
901 | 468 | # Juno and earlier | ||
902 | 469 | val = 'keystoneclient.middleware.auth_token:filter_factory' | ||
903 | 470 | |||
904 | 471 | return {'filter:authtoken': {'paste.filter_factory': val}} | ||
905 | 472 | |||
906 | 473 | def test_304_glance_api_paste_auth_config(self): | ||
907 | 474 | """Verify authtoken section config in glance-api-paste.ini using | ||
908 | 475 | glance/keystone relation data.""" | ||
909 | 476 | u.log.debug('Checking glance api paste config file...') | ||
910 | 477 | unit = self.glance_sentry | ||
911 | 317 | conf = '/etc/glance/glance-api-paste.ini' | 478 | conf = '/etc/glance/glance-api-paste.ini' |
920 | 318 | section = 'filter:authtoken' | 479 | expected = self._get_filter_factory_expected_dict() |
921 | 319 | 480 | ||
922 | 320 | if self._get_openstack_release() <= self.precise_havana: | 481 | for section, pairs in expected.iteritems(): |
923 | 321 | # No auth config exists in this file after havana | 482 | ret = u.validate_config_data(unit, conf, section, pairs) |
916 | 322 | expected = {'admin_user': 'glance', | ||
917 | 323 | 'admin_password': rel_ks_gl['service_password']} | ||
918 | 324 | |||
919 | 325 | ret = u.validate_config_data(unit_gl, conf, section, expected) | ||
924 | 326 | if ret: | 483 | if ret: |
926 | 327 | message = "glance-api-paste auth config error: {}".format(ret) | 484 | message = "glance api paste config error: {}".format(ret) |
927 | 328 | amulet.raise_status(amulet.FAIL, msg=message) | 485 | amulet.raise_status(amulet.FAIL, msg=message) |
928 | 329 | 486 | ||
936 | 330 | def test_glance_registry_paste_auth_config(self): | 487 | def test_306_glance_registry_paste_auth_config(self): |
937 | 331 | '''Verify authtoken section config in glance-registry-paste.ini using | 488 | """Verify authtoken section config in glance-registry-paste.ini using |
938 | 332 | glance/keystone relation data.''' | 489 | glance/keystone relation data.""" |
939 | 333 | unit_gl = self.glance_sentry | 490 | u.log.debug('Checking glance registry paste config file...') |
940 | 334 | unit_ks = self.keystone_sentry | 491 | unit = self.glance_sentry |
934 | 335 | rel_gl_mq = unit_gl.relation('amqp', 'rabbitmq-server:amqp') | ||
935 | 336 | rel_ks_gl = unit_ks.relation('identity-service', 'glance:identity-service') | ||
941 | 337 | conf = '/etc/glance/glance-registry-paste.ini' | 492 | conf = '/etc/glance/glance-registry-paste.ini' |
1134 | 338 | section = 'filter:authtoken' | 493 | expected = self._get_filter_factory_expected_dict() |
1135 | 339 | 494 | ||
1136 | 340 | if self._get_openstack_release() <= self.precise_havana: | 495 | for section, pairs in expected.iteritems(): |
1137 | 341 | # No auth config exists in this file after havana | 496 | ret = u.validate_config_data(unit, conf, section, pairs) |
1138 | 342 | expected = {'admin_user': 'glance', | 497 | if ret: |
1139 | 343 | 'admin_password': rel_ks_gl['service_password']} | 498 | message = "glance registry paste config error: {}".format(ret) |
1140 | 344 | 499 | amulet.raise_status(amulet.FAIL, msg=message) | |
1141 | 345 | ret = u.validate_config_data(unit_gl, conf, section, expected) | 500 | |
1142 | 346 | if ret: | 501 | def test_410_glance_image_create_delete(self): |
1143 | 347 | message = "glance-registry-paste auth config error: {}".format(ret) | 502 | """Create new cirros image in glance, verify, then delete it.""" |
1144 | 348 | amulet.raise_status(amulet.FAIL, msg=message) | 503 | u.log.debug('Creating, checking and deleting glance image...') |
1145 | 349 | 504 | img_new = u.create_cirros_image(self.glance, "cirros-image-1") | |
1146 | 350 | def test_glance_registry_default_config(self): | 505 | img_id = img_new.id |
1147 | 351 | '''Verify default section configs in glance-registry.conf''' | 506 | u.delete_resource(self.glance.images, img_id, msg="glance image") |
1148 | 352 | unit = self.glance_sentry | 507 | |
1149 | 353 | conf = '/etc/glance/glance-registry.conf' | 508 | def test_900_glance_restart_on_config_change(self): |
1150 | 354 | expected = {'use_syslog': 'False', | 509 | """Verify that the specified services are restarted when the config |
1151 | 355 | 'log_file': '/var/log/glance/registry.log', | 510 | is changed.""" |
1152 | 356 | 'debug': 'False', | 511 | sentry = self.glance_sentry |
1153 | 357 | 'verbose': 'False'} | 512 | juju_service = 'glance' |
1154 | 358 | section = 'DEFAULT' | 513 | |
1155 | 359 | 514 | # Expected default and alternate values | |
1156 | 360 | if self._get_openstack_release() <= self.precise_havana: | 515 | set_default = {'use-syslog': 'False'} |
1157 | 361 | # Defaults were different before icehouse | 516 | set_alternate = {'use-syslog': 'True'} |
1158 | 362 | expected['debug'] = 'True' | 517 | |
1159 | 363 | expected['verbose'] = 'True' | 518 | # Config file affected by juju set config change |
1160 | 364 | 519 | conf_file = '/etc/glance/glance-api.conf' | |
1161 | 365 | ret = u.validate_config_data(unit, conf, section, expected) | 520 | |
1162 | 366 | if ret: | 521 | # Services which are expected to restart upon config change |
1163 | 367 | message = "glance-registry default config error: {}".format(ret) | 522 | services = ['glance-api', 'glance-registry'] |
1164 | 368 | amulet.raise_status(amulet.FAIL, msg=message) | 523 | |
1165 | 369 | 524 | # Make config change, check for service restarts | |
1166 | 370 | def test_glance_registry_auth_config(self): | 525 | u.log.debug('Making config change on {}...'.format(juju_service)) |
1167 | 371 | '''Verify authtoken section config in glance-registry.conf | 526 | self.d.configure(juju_service, set_alternate) |
1168 | 372 | using glance/keystone relation data.''' | 527 | |
1169 | 373 | unit_gl = self.glance_sentry | 528 | sleep_time = 30 |
1170 | 374 | unit_ks = self.keystone_sentry | 529 | for s in services: |
1171 | 375 | rel_gl_mq = unit_gl.relation('amqp', 'rabbitmq-server:amqp') | 530 | u.log.debug("Checking that service restarted: {}".format(s)) |
1172 | 376 | rel_ks_gl = unit_ks.relation('identity-service', 'glance:identity-service') | 531 | if not u.service_restarted(sentry, s, |
1173 | 377 | conf = '/etc/glance/glance-registry.conf' | 532 | conf_file, sleep_time=sleep_time): |
1174 | 378 | section = 'keystone_authtoken' | 533 | self.d.configure(juju_service, set_default) |
1175 | 379 | 534 | msg = "service {} didn't restart after config change".format(s) | |
1176 | 380 | if self._get_openstack_release() > self.precise_havana: | 535 | amulet.raise_status(amulet.FAIL, msg=msg) |
1177 | 381 | # No auth config exists in this file before icehouse | 536 | sleep_time = 0 |
1178 | 382 | expected = {'admin_user': 'glance', | 537 | |
1179 | 383 | 'admin_password': rel_ks_gl['service_password']} | 538 | self.d.configure(juju_service, set_default) |
988 | 384 | |||
989 | 385 | ret = u.validate_config_data(unit_gl, conf, section, expected) | ||
990 | 386 | if ret: | ||
991 | 387 | message = "glance-registry keystone_authtoken config error: {}".format(ret) | ||
992 | 388 | amulet.raise_status(amulet.FAIL, msg=message) | ||
993 | 389 | |||
994 | 390 | def test_glance_api_database_config(self): | ||
995 | 391 | '''Verify database config in glance-api.conf and | ||
996 | 392 | compare with a db uri constructed from relation data.''' | ||
997 | 393 | unit = self.glance_sentry | ||
998 | 394 | conf = '/etc/glance/glance-api.conf' | ||
999 | 395 | relation = self.mysql_sentry.relation('shared-db', 'glance:shared-db') | ||
1000 | 396 | db_uri = "mysql://{}:{}@{}/{}".format('glance', relation['password'], | ||
1001 | 397 | relation['db_host'], 'glance') | ||
1002 | 398 | expected = {'connection': db_uri, 'sql_idle_timeout': '3600'} | ||
1003 | 399 | section = 'database' | ||
1004 | 400 | |||
1005 | 401 | if self._get_openstack_release() <= self.precise_havana: | ||
1006 | 402 | # Section and directive for this config changed in icehouse | ||
1007 | 403 | expected = {'sql_connection': db_uri, 'sql_idle_timeout': '3600'} | ||
1008 | 404 | section = 'DEFAULT' | ||
1009 | 405 | |||
1010 | 406 | ret = u.validate_config_data(unit, conf, section, expected) | ||
1011 | 407 | if ret: | ||
1012 | 408 | message = "glance db config error: {}".format(ret) | ||
1013 | 409 | amulet.raise_status(amulet.FAIL, msg=message) | ||
1014 | 410 | |||
1015 | 411 | def test_glance_registry_database_config(self): | ||
1016 | 412 | '''Verify database config in glance-registry.conf and | ||
1017 | 413 | compare with a db uri constructed from relation data.''' | ||
1018 | 414 | unit = self.glance_sentry | ||
1019 | 415 | conf = '/etc/glance/glance-registry.conf' | ||
1020 | 416 | relation = self.mysql_sentry.relation('shared-db', 'glance:shared-db') | ||
1021 | 417 | db_uri = "mysql://{}:{}@{}/{}".format('glance', relation['password'], | ||
1022 | 418 | relation['db_host'], 'glance') | ||
1023 | 419 | expected = {'connection': db_uri, 'sql_idle_timeout': '3600'} | ||
1024 | 420 | section = 'database' | ||
1025 | 421 | |||
1026 | 422 | if self._get_openstack_release() <= self.precise_havana: | ||
1027 | 423 | # Section and directive for this config changed in icehouse | ||
1028 | 424 | expected = {'sql_connection': db_uri, 'sql_idle_timeout': '3600'} | ||
1029 | 425 | section = 'DEFAULT' | ||
1030 | 426 | |||
1031 | 427 | ret = u.validate_config_data(unit, conf, section, expected) | ||
1032 | 428 | if ret: | ||
1033 | 429 | message = "glance db config error: {}".format(ret) | ||
1034 | 430 | amulet.raise_status(amulet.FAIL, msg=message) | ||
1035 | 431 | |||
1036 | 432 | def test_glance_endpoint(self): | ||
1037 | 433 | '''Verify the glance endpoint data.''' | ||
1038 | 434 | endpoints = self.keystone.endpoints.list() | ||
1039 | 435 | admin_port = internal_port = public_port = '9292' | ||
1040 | 436 | expected = {'id': u.not_null, | ||
1041 | 437 | 'region': 'RegionOne', | ||
1042 | 438 | 'adminurl': u.valid_url, | ||
1043 | 439 | 'internalurl': u.valid_url, | ||
1044 | 440 | 'publicurl': u.valid_url, | ||
1045 | 441 | 'service_id': u.not_null} | ||
1046 | 442 | ret = u.validate_endpoint_data(endpoints, admin_port, internal_port, | ||
1047 | 443 | public_port, expected) | ||
1048 | 444 | |||
1049 | 445 | if ret: | ||
1050 | 446 | amulet.raise_status(amulet.FAIL, | ||
1051 | 447 | msg='glance endpoint: {}'.format(ret)) | ||
1052 | 448 | |||
1053 | 449 | def test_keystone_endpoint(self): | ||
1054 | 450 | '''Verify the keystone endpoint data.''' | ||
1055 | 451 | endpoints = self.keystone.endpoints.list() | ||
1056 | 452 | admin_port = '35357' | ||
1057 | 453 | internal_port = public_port = '5000' | ||
1058 | 454 | expected = {'id': u.not_null, | ||
1059 | 455 | 'region': 'RegionOne', | ||
1060 | 456 | 'adminurl': u.valid_url, | ||
1061 | 457 | 'internalurl': u.valid_url, | ||
1062 | 458 | 'publicurl': u.valid_url, | ||
1063 | 459 | 'service_id': u.not_null} | ||
1064 | 460 | ret = u.validate_endpoint_data(endpoints, admin_port, internal_port, | ||
1065 | 461 | public_port, expected) | ||
1066 | 462 | if ret: | ||
1067 | 463 | amulet.raise_status(amulet.FAIL, | ||
1068 | 464 | msg='keystone endpoint: {}'.format(ret)) | ||
1069 | 465 | |||
1070 | 466 | def _change_config(self): | ||
1071 | 467 | if self._get_openstack_release() > self.precise_havana: | ||
1072 | 468 | self.d.configure('glance', {'debug': 'True'}) | ||
1073 | 469 | else: | ||
1074 | 470 | self.d.configure('glance', {'debug': 'False'}) | ||
1075 | 471 | |||
1076 | 472 | def _restore_config(self): | ||
1077 | 473 | if self._get_openstack_release() > self.precise_havana: | ||
1078 | 474 | self.d.configure('glance', {'debug': 'False'}) | ||
1079 | 475 | else: | ||
1080 | 476 | self.d.configure('glance', {'debug': 'True'}) | ||
1081 | 477 | |||
1082 | 478 | def test_z_glance_restart_on_config_change(self): | ||
1083 | 479 | '''Verify that glance is restarted when the config is changed. | ||
1084 | 480 | |||
1085 | 481 | Note(coreycb): The method name with the _z_ is a little odd | ||
1086 | 482 | but it forces the test to run last. It just makes things | ||
1087 | 483 | easier because restarting services requires re-authorization. | ||
1088 | 484 | ''' | ||
1089 | 485 | if self._get_openstack_release() <= self.precise_havana: | ||
1090 | 486 | # /!\ NOTE(beisner): Glance charm before Icehouse doesn't respond | ||
1091 | 487 | # to attempted config changes via juju / juju set. | ||
1092 | 488 | # https://bugs.launchpad.net/charms/+source/glance/+bug/1340307 | ||
1093 | 489 | u.log.error('NOTE(beisner): skipping glance restart on config ' + | ||
1094 | 490 | 'change check due to bug 1340307.') | ||
1095 | 491 | return | ||
1096 | 492 | |||
1097 | 493 | # Make config change to trigger a service restart | ||
1098 | 494 | self._change_config() | ||
1099 | 495 | |||
1100 | 496 | if not u.service_restarted(self.glance_sentry, 'glance-api', | ||
1101 | 497 | '/etc/glance/glance-api.conf'): | ||
1102 | 498 | self._restore_config() | ||
1103 | 499 | message = "glance service didn't restart after config change" | ||
1104 | 500 | amulet.raise_status(amulet.FAIL, msg=message) | ||
1105 | 501 | |||
1106 | 502 | if not u.service_restarted(self.glance_sentry, 'glance-registry', | ||
1107 | 503 | '/etc/glance/glance-registry.conf', | ||
1108 | 504 | sleep_time=0): | ||
1109 | 505 | self._restore_config() | ||
1110 | 506 | message = "glance service didn't restart after config change" | ||
1111 | 507 | amulet.raise_status(amulet.FAIL, msg=message) | ||
1112 | 508 | |||
1113 | 509 | # Return to original config | ||
1114 | 510 | self._restore_config() | ||
1115 | 511 | |||
1116 | 512 | def test_users(self): | ||
1117 | 513 | '''Verify expected users.''' | ||
1118 | 514 | user0 = {'name': 'glance', | ||
1119 | 515 | 'enabled': True, | ||
1120 | 516 | 'tenantId': u.not_null, | ||
1121 | 517 | 'id': u.not_null, | ||
1122 | 518 | 'email': 'juju@localhost'} | ||
1123 | 519 | user1 = {'name': 'admin', | ||
1124 | 520 | 'enabled': True, | ||
1125 | 521 | 'tenantId': u.not_null, | ||
1126 | 522 | 'id': u.not_null, | ||
1127 | 523 | 'email': 'juju@localhost'} | ||
1128 | 524 | expected = [user0, user1] | ||
1129 | 525 | actual = self.keystone.users.list() | ||
1130 | 526 | |||
1131 | 527 | ret = u.validate_user_data(expected, actual) | ||
1132 | 528 | if ret: | ||
1133 | 529 | amulet.raise_status(amulet.FAIL, msg=ret) | ||
1180 | 530 | 539 | ||
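
The refactored tests above build one nested `expected` dict per config file and layer release-specific keys on top via helpers such as `_get_keystone_authtoken_expected_dict`. A minimal standalone sketch of that pattern — release names and config values here are illustrative, not taken from the charm; it relies on OpenStack release names sorting alphabetically:

```python
# Sketch of the release-gated expected-config pattern used in the
# refactored amulet tests. Values and URLs are illustrative only.

def build_expected(release):
    """Return the expected config sections for a given release name."""
    # Base expectations shared by all releases.
    expected = {
        'keystone_authtoken': {
            'admin_user': 'glance',
            'admin_tenant_name': 'services',
        },
    }
    # OpenStack release names are alphabetical, so a plain string
    # comparison orders them correctly (juno < kilo < liberty ...).
    if release >= 'kilo':
        # Kilo and later use an identity_uri-style auth config.
        expected['keystone_authtoken']['identity_uri'] = 'http://ks:35357'
    else:
        # Juno and earlier use separate host/port keys.
        expected['keystone_authtoken'].update({
            'auth_host': 'ks',
            'auth_port': '35357',
        })
    return expected
```

The test body then iterates `expected` section by section, so a release bump only touches the helper, not every assertion.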
1181 | === modified file 'tests/charmhelpers/contrib/amulet/utils.py' | |||
1182 | --- tests/charmhelpers/contrib/amulet/utils.py 2015-06-19 15:08:48 +0000 | |||
1183 | +++ tests/charmhelpers/contrib/amulet/utils.py 2015-07-02 12:52:15 +0000 | |||
1184 | @@ -185,10 +185,23 @@ | |||
1185 | 185 | for k in expected.keys(): | 185 | for k in expected.keys(): |
1186 | 186 | if not config.has_option(section, k): | 186 | if not config.has_option(section, k): |
1187 | 187 | return "section [{}] is missing option {}".format(section, k) | 187 | return "section [{}] is missing option {}".format(section, k) |
1192 | 188 | if config.get(section, k) != expected[k]: | 188 | |
1193 | 189 | return "section [{}] {}:{} != expected {}:{}".format( | 189 | actual = config.get(section, k) |
1194 | 190 | section, k, config.get(section, k), k, expected[k]) | 190 | v = expected[k] |
1195 | 191 | return None | 191 | if (isinstance(v, six.string_types) or |
1196 | 192 | isinstance(v, bool) or | ||
1197 | 193 | isinstance(v, six.integer_types)): | ||
1198 | 194 | # handle explicit values | ||
1199 | 195 | if actual != v: | ||
1200 | 196 | return "section [{}] {}:{} != expected {}:{}".format( | ||
1201 | 197 | section, k, actual, k, expected[k]) | ||
1202 | 198 | else: | ||
1203 | 199 | # handle not_null, valid_ip boolean comparison methods, etc. | ||
1204 | 200 | if v(actual): | ||
1205 | 201 | return None | ||
1206 | 202 | else: | ||
1207 | 203 | return "section [{}] {}:{} != expected {}:{}".format( | ||
1208 | 204 | section, k, actual, k, expected[k]) | ||
1209 | 192 | 205 | ||
1210 | 193 | def _validate_dict_data(self, expected, actual): | 206 | def _validate_dict_data(self, expected, actual): |
1211 | 194 | """Validate dictionary data. | 207 | """Validate dictionary data. |
1212 | @@ -406,3 +419,123 @@ | |||
1213 | 406 | """Convert a relative file path to a file URL.""" | 419 | """Convert a relative file path to a file URL.""" |
1214 | 407 | _abs_path = os.path.abspath(file_rel_path) | 420 | _abs_path = os.path.abspath(file_rel_path) |
1215 | 408 | return urlparse.urlparse(_abs_path, scheme='file').geturl() | 421 | return urlparse.urlparse(_abs_path, scheme='file').geturl() |
1216 | 422 | |||
1217 | 423 | def check_commands_on_units(self, commands, sentry_units): | ||
1218 | 424 | """Check that all commands in a list exit zero on all | ||
1219 | 425 | sentry units in a list. | ||
1220 | 426 | |||
1221 | 427 | :param commands: list of bash commands | ||
1222 | 428 | :param sentry_units: list of sentry unit pointers | ||
1223 | 429 | :returns: None if successful; Failure message otherwise | ||
1224 | 430 | """ | ||
1225 | 431 | self.log.debug('Checking exit codes for {} commands on {} ' | ||
1226 | 432 | 'sentry units...'.format(len(commands), | ||
1227 | 433 | len(sentry_units))) | ||
1228 | 434 | for sentry_unit in sentry_units: | ||
1229 | 435 | for cmd in commands: | ||
1230 | 436 | output, code = sentry_unit.run(cmd) | ||
1231 | 437 | if code == 0: | ||
1232 | 438 | msg = ('{} `{}` returned {} ' | ||
1233 | 439 | '(OK)'.format(sentry_unit.info['unit_name'], | ||
1234 | 440 | cmd, code)) | ||
1235 | 441 | self.log.debug(msg) | ||
1236 | 442 | else: | ||
1237 | 443 | msg = ('{} `{}` returned {} ' | ||
1238 | 444 | '{}'.format(sentry_unit.info['unit_name'], | ||
1239 | 445 | cmd, code, output)) | ||
1240 | 446 | return msg | ||
1241 | 447 | return None | ||
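The new `check_commands_on_units` helper loops commands across sentry units and short-circuits on the first non-zero exit code. A minimal local sketch of the same pattern (running shell commands via `subprocess` instead of amulet sentries, which is an assumption for illustration only):

```python
import subprocess


def check_commands(commands):
    """Run each shell command locally; return None only if every
    command exits zero, otherwise a failure message (mirrors the
    short-circuit style of check_commands_on_units)."""
    for cmd in commands:
        proc = subprocess.Popen(cmd, shell=True,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        output, _ = proc.communicate()
        if proc.returncode != 0:
            return '`{}` returned {}: {}'.format(cmd, proc.returncode, output)
    return None
```

On a POSIX shell, `check_commands(['true'])` returns None while `check_commands(['false'])` returns a message.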
1242 | 448 | |||
1243 | 449 | def get_process_id_list(self, sentry_unit, process_name): | ||
1244 | 450 | """Get a list of process ID(s) from a single sentry juju unit | ||
1245 | 451 | for a single process name. | ||
1246 | 452 | |||
1247 | 453 | :param sentry_unit: Pointer to amulet sentry instance (juju unit) | ||
1248 | 454 | :param process_name: Process name | ||
1249 | 455 | :returns: List of process IDs | ||
1250 | 456 | """ | ||
1251 | 457 | cmd = 'pidof {}'.format(process_name) | ||
1252 | 458 | output, code = sentry_unit.run(cmd) | ||
1253 | 459 | if code != 0: | ||
1254 | 460 | msg = ('{} `{}` returned {} ' | ||
1255 | 461 | '{}'.format(sentry_unit.info['unit_name'], | ||
1256 | 462 | cmd, code, output)) | ||
1257 | 463 | raise RuntimeError(msg) | ||
1258 | 464 | return str(output).split() | ||
1259 | 465 | |||
1260 | 466 | def get_unit_process_ids(self, unit_processes): | ||
1261 | 467 | """Construct a dict containing unit sentries, process names, and | ||
1262 | 468 | process IDs.""" | ||
1263 | 469 | pid_dict = {} | ||
1264 | 470 | for sentry_unit, process_list in unit_processes.iteritems(): | ||
1265 | 471 | pid_dict[sentry_unit] = {} | ||
1266 | 472 | for process in process_list: | ||
1267 | 473 | pids = self.get_process_id_list(sentry_unit, process) | ||
1268 | 474 | pid_dict[sentry_unit].update({process: pids}) | ||
1269 | 475 | return pid_dict | ||
1270 | 476 | |||
1271 | 477 | def validate_unit_process_ids(self, expected, actual): | ||
1272 | 478 | """Validate process id quantities for services on units.""" | ||
1273 | 479 | self.log.debug('Checking units for running processes...') | ||
1274 | 480 | self.log.debug('Expected PIDs: {}'.format(expected)) | ||
1275 | 481 | self.log.debug('Actual PIDs: {}'.format(actual)) | ||
1276 | 482 | |||
1277 | 483 | if len(actual) != len(expected): | ||
1278 | 484 | msg = ('Unit count mismatch. expected, actual: {}, ' | ||
1279 | 485 | '{} '.format(len(expected), len(actual))) | ||
1280 | 486 | return msg | ||
1281 | 487 | |||
1282 | 488 | for (e_sentry, e_proc_names) in expected.iteritems(): | ||
1283 | 489 | e_sentry_name = e_sentry.info['unit_name'] | ||
1284 | 490 | if e_sentry in actual.keys(): | ||
1285 | 491 | a_proc_names = actual[e_sentry] | ||
1286 | 492 | else: | ||
1287 | 493 | msg = ('Expected sentry ({}) not found in actual dict data.' | ||
1288 | 494 | '{}'.format(e_sentry_name, e_sentry)) | ||
1289 | 495 | return msg | ||
1290 | 496 | |||
1291 | 497 | if len(e_proc_names.keys()) != len(a_proc_names.keys()): | ||
1292 | 498 | msg = ('Process name count mismatch. expected, actual: {}, ' | ||
1293 | 499 | '{}'.format(len(e_proc_names), len(a_proc_names)) | ||
1294 | 500 | return msg | ||
1295 | 501 | |||
1296 | 502 | for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \ | ||
1297 | 503 | zip(e_proc_names.items(), a_proc_names.items()): | ||
1298 | 504 | if e_proc_name != a_proc_name: | ||
1299 | 505 | msg = ('Process name mismatch. expected, actual: {}, ' | ||
1300 | 506 | '{}'.format(e_proc_name, a_proc_name)) | ||
1301 | 507 | return msg | ||
1302 | 508 | |||
1303 | 509 | a_pids_length = len(a_pids) | ||
1304 | 510 | if e_pids_length != a_pids_length: | ||
1305 | 511 | msg = ('PID count mismatch. {} ({}) expected, actual: {}, ' | ||
1306 | 512 | '{} ({})'.format(e_sentry_name, | ||
1307 | 513 | e_proc_name, | ||
1308 | 514 | e_pids_length, | ||
1309 | 515 | a_pids_length, | ||
1310 | 516 | a_pids)) | ||
1311 | 517 | return msg | ||
1312 | 518 | else: | ||
1313 | 519 | msg = ('PID check OK: {} {} {}: ' | ||
1314 | 520 | '{}'.format(e_sentry_name, | ||
1315 | 521 | e_proc_name, | ||
1316 | 522 | e_pids_length, | ||
1317 | 523 | a_pids)) | ||
1318 | 524 | self.log.debug(msg) | ||
1319 | 525 | return None | ||
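The `validate_unit_process_ids` helper above compares expected PID counts per process per unit against the PID lists collected by `get_unit_process_ids`. A simplified standalone sketch of that comparison (dict shapes and names here are illustrative, not the helper's exact API):

```python
def validate_pid_counts(expected, actual):
    """expected maps unit -> {process name: expected PID count};
    actual maps unit -> {process name: list of PIDs found}.
    Return None on success, a failure message otherwise."""
    if len(actual) != len(expected):
        return 'Unit count mismatch'
    for unit, procs in expected.items():
        if unit not in actual:
            return 'Expected unit ({}) not found'.format(unit)
        for proc, want in procs.items():
            got = len(actual[unit].get(proc, []))
            if got != want:
                return 'PID count mismatch for {} {}'.format(unit, proc)
    return None
```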
1320 | 526 | |||
1321 | 527 | def validate_list_of_identical_dicts(self, list_of_dicts): | ||
1322 | 528 | """Check that all dicts within a list are identical.""" | ||
1323 | 529 | hashes = [] | ||
1324 | 530 | for _dict in list_of_dicts: | ||
1325 | 531 | hashes.append(hash(frozenset(_dict.items()))) | ||
1326 | 532 | |||
1327 | 533 | self.log.debug('Hashes: {}'.format(hashes)) | ||
1328 | 534 | if len(set(hashes)) == 1: | ||
1329 | 535 | msg = 'Dicts within list are identical' | ||
1330 | 536 | self.log.debug(msg) | ||
1331 | 537 | else: | ||
1332 | 538 | msg = 'Dicts within list are not identical' | ||
1333 | 539 | return msg | ||
1334 | 540 | |||
1335 | 541 | return None | ||
1336 | 409 | 542 | ||
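The `validate_list_of_identical_dicts` helper added above relies on hashing each dict's `frozenset` of items: identical key/value pairs hash identically, so a list of identical dicts collapses to one unique hash. A minimal sketch of that trick:

```python
def all_dicts_identical(list_of_dicts):
    """True if every dict in a non-empty list has identical items,
    using the same frozenset-hash comparison as the synced helper."""
    hashes = [hash(frozenset(d.items())) for d in list_of_dicts]
    # One unique hash means every dict carried the same key/value pairs.
    return len(set(hashes)) == 1
```

Note this requires hashable values; dicts containing lists would raise `TypeError`, which the charm-helpers tests avoid by only comparing flat config dicts.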
1337 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py' | |||
1338 | --- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-19 15:08:48 +0000 | |||
1339 | +++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-02 12:52:15 +0000 | |||
1340 | @@ -79,9 +79,9 @@ | |||
1341 | 79 | services.append(this_service) | 79 | services.append(this_service) |
1342 | 80 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', | 80 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', |
1343 | 81 | 'ceph-osd', 'ceph-radosgw'] | 81 | 'ceph-osd', 'ceph-radosgw'] |
1347 | 82 | # Openstack subordinate charms do not expose an origin option as that | 82 | # Most OpenStack subordinate charms do not expose an origin option |
1348 | 83 | # is controlled by the principle | 83 | # as that is controlled by the principal. |
1349 | 84 | ignore = ['neutron-openvswitch'] | 84 | ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch'] |
1350 | 85 | 85 | ||
1351 | 86 | if self.openstack: | 86 | if self.openstack: |
1352 | 87 | for svc in services: | 87 | for svc in services: |
1353 | @@ -148,3 +148,35 @@ | |||
1354 | 148 | return os_origin.split('%s-' % self.series)[1].split('/')[0] | 148 | return os_origin.split('%s-' % self.series)[1].split('/')[0] |
1355 | 149 | else: | 149 | else: |
1356 | 150 | return releases[self.series] | 150 | return releases[self.series] |
1357 | 151 | |||
1358 | 152 | def get_ceph_expected_pools(self, radosgw=False): | ||
1359 | 153 | """Return a list of expected ceph pools based on Ubuntu-OpenStack | ||
1360 | 154 | release and whether ceph radosgw is flagged as present or not.""" | ||
1361 | 155 | |||
1362 | 156 | if self._get_openstack_release() >= self.trusty_kilo: | ||
1363 | 157 | # Kilo or later | ||
1364 | 158 | pools = [ | ||
1365 | 159 | 'rbd', | ||
1366 | 160 | 'cinder', | ||
1367 | 161 | 'glance' | ||
1368 | 162 | ] | ||
1369 | 163 | else: | ||
1370 | 164 | # Juno or earlier | ||
1371 | 165 | pools = [ | ||
1372 | 166 | 'data', | ||
1373 | 167 | 'metadata', | ||
1374 | 168 | 'rbd', | ||
1375 | 169 | 'cinder', | ||
1376 | 170 | 'glance' | ||
1377 | 171 | ] | ||
1378 | 172 | |||
1379 | 173 | if radosgw: | ||
1380 | 174 | pools.extend([ | ||
1381 | 175 | '.rgw.root', | ||
1382 | 176 | '.rgw.control', | ||
1383 | 177 | '.rgw', | ||
1384 | 178 | '.rgw.gc', | ||
1385 | 179 | '.users.uid' | ||
1386 | 180 | ]) | ||
1387 | 181 | |||
1388 | 182 | return pools | ||
1389 | 151 | 183 | ||
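The new `get_ceph_expected_pools` encodes a release boundary: Kilo dropped the default `data` and `metadata` pools, so only Juno-and-earlier deployments expect them, and radosgw adds its own pool set on top. A standalone sketch of that selection logic (the boolean release flag is a simplification of the charm's `_get_openstack_release()` comparison):

```python
def expected_ceph_pools(kilo_or_later, radosgw=False):
    """Return the ceph pools an Ubuntu-OpenStack deploy should have."""
    if kilo_or_later:
        pools = ['rbd', 'cinder', 'glance']
    else:
        # Juno or earlier also ships the default data/metadata pools.
        pools = ['data', 'metadata', 'rbd', 'cinder', 'glance']
    if radosgw:
        pools.extend(['.rgw.root', '.rgw.control', '.rgw',
                      '.rgw.gc', '.users.uid'])
    return pools
```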
1390 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py' | |||
1391 | --- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-06-19 15:08:48 +0000 | |||
1392 | +++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-02 12:52:15 +0000 | |||
1393 | @@ -14,16 +14,19 @@ | |||
1394 | 14 | # You should have received a copy of the GNU Lesser General Public License | 14 | # You should have received a copy of the GNU Lesser General Public License |
1395 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1396 | 16 | 16 | ||
1397 | 17 | import json | ||
1398 | 17 | import logging | 18 | import logging |
1399 | 18 | import os | 19 | import os |
1400 | 19 | import six | 20 | import six |
1401 | 20 | import time | 21 | import time |
1402 | 21 | import urllib | 22 | import urllib |
1403 | 22 | 23 | ||
1404 | 24 | import cinderclient.v1.client as cinder_client | ||
1405 | 23 | import glanceclient.v1.client as glance_client | 25 | import glanceclient.v1.client as glance_client |
1406 | 24 | import heatclient.v1.client as heat_client | 26 | import heatclient.v1.client as heat_client |
1407 | 25 | import keystoneclient.v2_0 as keystone_client | 27 | import keystoneclient.v2_0 as keystone_client |
1408 | 26 | import novaclient.v1_1.client as nova_client | 28 | import novaclient.v1_1.client as nova_client |
1409 | 29 | import swiftclient | ||
1410 | 27 | 30 | ||
1411 | 28 | from charmhelpers.contrib.amulet.utils import ( | 31 | from charmhelpers.contrib.amulet.utils import ( |
1412 | 29 | AmuletUtils | 32 | AmuletUtils |
1413 | @@ -171,6 +174,15 @@ | |||
1414 | 171 | self.log.debug('Checking if tenant exists ({})...'.format(tenant)) | 174 | self.log.debug('Checking if tenant exists ({})...'.format(tenant)) |
1415 | 172 | return tenant in [t.name for t in keystone.tenants.list()] | 175 | return tenant in [t.name for t in keystone.tenants.list()] |
1416 | 173 | 176 | ||
1417 | 177 | def authenticate_cinder_admin(self, keystone_sentry, username, | ||
1418 | 178 | password, tenant): | ||
1419 | 179 | """Authenticates admin user with cinder.""" | ||
1420 | 180 | service_ip = \ | ||
1421 | 181 | keystone_sentry.relation('shared-db', | ||
1422 | 182 | 'mysql:shared-db')['private-address'] | ||
1423 | 183 | ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8')) | ||
1424 | 184 | return cinder_client.Client(username, password, tenant, ept) | ||
1425 | 185 | |||
1426 | 174 | def authenticate_keystone_admin(self, keystone_sentry, user, password, | 186 | def authenticate_keystone_admin(self, keystone_sentry, user, password, |
1427 | 175 | tenant): | 187 | tenant): |
1428 | 176 | """Authenticates admin user with the keystone admin endpoint.""" | 188 | """Authenticates admin user with the keystone admin endpoint.""" |
1429 | @@ -212,9 +224,29 @@ | |||
1430 | 212 | return nova_client.Client(username=user, api_key=password, | 224 | return nova_client.Client(username=user, api_key=password, |
1431 | 213 | project_id=tenant, auth_url=ep) | 225 | project_id=tenant, auth_url=ep) |
1432 | 214 | 226 | ||
1433 | 227 | def authenticate_swift_user(self, keystone, user, password, tenant): | ||
1434 | 228 | """Authenticates a regular user with swift api.""" | ||
1435 | 229 | self.log.debug('Authenticating swift user ({})...'.format(user)) | ||
1436 | 230 | ep = keystone.service_catalog.url_for(service_type='identity', | ||
1437 | 231 | endpoint_type='publicURL') | ||
1438 | 232 | return swiftclient.Connection(authurl=ep, | ||
1439 | 233 | user=user, | ||
1440 | 234 | key=password, | ||
1441 | 235 | tenant_name=tenant, | ||
1442 | 236 | auth_version='2.0') | ||
1443 | 237 | |||
1444 | 215 | def create_cirros_image(self, glance, image_name): | 238 | def create_cirros_image(self, glance, image_name): |
1447 | 216 | """Download the latest cirros image and upload it to glance.""" | 239 | """Download the latest cirros image and upload it to glance, |
1448 | 217 | self.log.debug('Creating glance image ({})...'.format(image_name)) | 240 | validate and return a resource pointer. |
1449 | 241 | |||
1450 | 242 | :param glance: pointer to authenticated glance connection | ||
1451 | 243 | :param image_name: display name for new image | ||
1452 | 244 | :returns: glance image pointer | ||
1453 | 245 | """ | ||
1454 | 246 | self.log.debug('Creating glance cirros image ' | ||
1455 | 247 | '({})...'.format(image_name)) | ||
1456 | 248 | |||
1457 | 249 | # Download cirros image | ||
1458 | 218 | http_proxy = os.getenv('AMULET_HTTP_PROXY') | 250 | http_proxy = os.getenv('AMULET_HTTP_PROXY') |
1459 | 219 | self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy)) | 251 | self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy)) |
1460 | 220 | if http_proxy: | 252 | if http_proxy: |
1461 | @@ -223,33 +255,51 @@ | |||
1462 | 223 | else: | 255 | else: |
1463 | 224 | opener = urllib.FancyURLopener() | 256 | opener = urllib.FancyURLopener() |
1464 | 225 | 257 | ||
1466 | 226 | f = opener.open("http://download.cirros-cloud.net/version/released") | 258 | f = opener.open('http://download.cirros-cloud.net/version/released') |
1467 | 227 | version = f.read().strip() | 259 | version = f.read().strip() |
1469 | 228 | cirros_img = "cirros-{}-x86_64-disk.img".format(version) | 260 | cirros_img = 'cirros-{}-x86_64-disk.img'.format(version) |
1470 | 229 | local_path = os.path.join('tests', cirros_img) | 261 | local_path = os.path.join('tests', cirros_img) |
1471 | 230 | 262 | ||
1472 | 231 | if not os.path.exists(local_path): | 263 | if not os.path.exists(local_path): |
1474 | 232 | cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net", | 264 | cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net', |
1475 | 233 | version, cirros_img) | 265 | version, cirros_img) |
1476 | 234 | opener.retrieve(cirros_url, local_path) | 266 | opener.retrieve(cirros_url, local_path) |
1477 | 235 | f.close() | 267 | f.close() |
1478 | 236 | 268 | ||
1479 | 269 | # Create glance image | ||
1480 | 237 | with open(local_path) as f: | 270 | with open(local_path) as f: |
1481 | 238 | image = glance.images.create(name=image_name, is_public=True, | 271 | image = glance.images.create(name=image_name, is_public=True, |
1482 | 239 | disk_format='qcow2', | 272 | disk_format='qcow2', |
1483 | 240 | container_format='bare', data=f) | 273 | container_format='bare', data=f) |
1496 | 241 | count = 1 | 274 | |
1497 | 242 | status = image.status | 275 | # Wait for image to reach active status |
1498 | 243 | while status != 'active' and count < 10: | 276 | img_id = image.id |
1499 | 244 | time.sleep(3) | 277 | ret = self.resource_reaches_status(glance.images, img_id, |
1500 | 245 | image = glance.images.get(image.id) | 278 | expected_stat='active', |
1501 | 246 | status = image.status | 279 | msg='Image status wait') |
1502 | 247 | self.log.debug('image status: {}'.format(status)) | 280 | if not ret: |
1503 | 248 | count += 1 | 281 | msg = 'Glance image failed to reach expected state.' |
1504 | 249 | 282 | raise RuntimeError(msg) | |
1505 | 250 | if status != 'active': | 283 | |
1506 | 251 | self.log.error('image creation timed out') | 284 | # Re-validate new image |
1507 | 252 | return None | 285 | self.log.debug('Validating image attributes...') |
1508 | 286 | val_img_name = glance.images.get(img_id).name | ||
1509 | 287 | val_img_stat = glance.images.get(img_id).status | ||
1510 | 288 | val_img_pub = glance.images.get(img_id).is_public | ||
1511 | 289 | val_img_cfmt = glance.images.get(img_id).container_format | ||
1512 | 290 | val_img_dfmt = glance.images.get(img_id).disk_format | ||
1513 | 291 | msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} ' | ||
1514 | 292 | 'container fmt:{} disk fmt:{}'.format( | ||
1515 | 293 | val_img_name, val_img_pub, img_id, | ||
1516 | 294 | val_img_stat, val_img_cfmt, val_img_dfmt)) | ||
1517 | 295 | |||
1518 | 296 | if val_img_name == image_name and val_img_stat == 'active' \ | ||
1519 | 297 | and val_img_pub is True and val_img_cfmt == 'bare' \ | ||
1520 | 298 | and val_img_dfmt == 'qcow2': | ||
1521 | 299 | self.log.debug(msg_attr) | ||
1522 | 300 | else: | ||
1523 | 301 | msg = ('Image validation failed, {}'.format(msg_attr)) | ||
1524 | 302 | raise RuntimeError(msg) | ||
1525 | 253 | 303 | ||
1526 | 254 | return image | 304 | return image |
1527 | 255 | 305 | ||
1528 | @@ -260,22 +310,7 @@ | |||
1529 | 260 | self.log.warn('/!\\ DEPRECATION WARNING: use ' | 310 | self.log.warn('/!\\ DEPRECATION WARNING: use ' |
1530 | 261 | 'delete_resource instead of delete_image.') | 311 | 'delete_resource instead of delete_image.') |
1531 | 262 | self.log.debug('Deleting glance image ({})...'.format(image)) | 312 | self.log.debug('Deleting glance image ({})...'.format(image)) |
1548 | 263 | num_before = len(list(glance.images.list())) | 313 | return self.delete_resource(glance.images, image, msg='glance image') |
1533 | 264 | glance.images.delete(image) | ||
1534 | 265 | |||
1535 | 266 | count = 1 | ||
1536 | 267 | num_after = len(list(glance.images.list())) | ||
1537 | 268 | while num_after != (num_before - 1) and count < 10: | ||
1538 | 269 | time.sleep(3) | ||
1539 | 270 | num_after = len(list(glance.images.list())) | ||
1540 | 271 | self.log.debug('number of images: {}'.format(num_after)) | ||
1541 | 272 | count += 1 | ||
1542 | 273 | |||
1543 | 274 | if num_after != (num_before - 1): | ||
1544 | 275 | self.log.error('image deletion timed out') | ||
1545 | 276 | return False | ||
1546 | 277 | |||
1547 | 278 | return True | ||
1549 | 279 | 314 | ||
1550 | 280 | def create_instance(self, nova, image_name, instance_name, flavor): | 315 | def create_instance(self, nova, image_name, instance_name, flavor): |
1551 | 281 | """Create the specified instance.""" | 316 | """Create the specified instance.""" |
1552 | @@ -308,22 +343,8 @@ | |||
1553 | 308 | self.log.warn('/!\\ DEPRECATION WARNING: use ' | 343 | self.log.warn('/!\\ DEPRECATION WARNING: use ' |
1554 | 309 | 'delete_resource instead of delete_instance.') | 344 | 'delete_resource instead of delete_instance.') |
1555 | 310 | self.log.debug('Deleting instance ({})...'.format(instance)) | 345 | self.log.debug('Deleting instance ({})...'.format(instance)) |
1572 | 311 | num_before = len(list(nova.servers.list())) | 346 | return self.delete_resource(nova.servers, instance, |
1573 | 312 | nova.servers.delete(instance) | 347 | msg='nova instance') |
1558 | 313 | |||
1559 | 314 | count = 1 | ||
1560 | 315 | num_after = len(list(nova.servers.list())) | ||
1561 | 316 | while num_after != (num_before - 1) and count < 10: | ||
1562 | 317 | time.sleep(3) | ||
1563 | 318 | num_after = len(list(nova.servers.list())) | ||
1564 | 319 | self.log.debug('number of instances: {}'.format(num_after)) | ||
1565 | 320 | count += 1 | ||
1566 | 321 | |||
1567 | 322 | if num_after != (num_before - 1): | ||
1568 | 323 | self.log.error('instance deletion timed out') | ||
1569 | 324 | return False | ||
1570 | 325 | |||
1571 | 326 | return True | ||
1574 | 327 | 348 | ||
1575 | 328 | def create_or_get_keypair(self, nova, keypair_name="testkey"): | 349 | def create_or_get_keypair(self, nova, keypair_name="testkey"): |
1576 | 329 | """Create a new keypair, or return pointer if it already exists.""" | 350 | """Create a new keypair, or return pointer if it already exists.""" |
1577 | @@ -339,6 +360,84 @@ | |||
1578 | 339 | _keypair = nova.keypairs.create(name=keypair_name) | 360 | _keypair = nova.keypairs.create(name=keypair_name) |
1579 | 340 | return _keypair | 361 | return _keypair |
1580 | 341 | 362 | ||
1581 | 363 | def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1, | ||
1582 | 364 | img_id=None, src_vol_id=None, snap_id=None): | ||
1583 | 365 | """Create cinder volume, optionally from a glance image, or | ||
1584 | 366 | optionally as a clone of an existing volume, or optionally | ||
1585 | 367 | from a snapshot. Wait for the new volume status to reach | ||
1586 | 368 | the expected status, validate and return a resource pointer. | ||
1587 | 369 | |||
1588 | 370 | :param vol_name: cinder volume display name | ||
1589 | 371 | :param vol_size: size in gigabytes | ||
1590 | 372 | :param img_id: optional glance image id | ||
1591 | 373 | :param src_vol_id: optional source volume id to clone | ||
1592 | 374 | :param snap_id: optional snapshot id to use | ||
1593 | 375 | :returns: cinder volume pointer | ||
1594 | 376 | """ | ||
1595 | 377 | # Handle parameter input | ||
1596 | 378 | if img_id and not src_vol_id and not snap_id: | ||
1597 | 379 | self.log.debug('Creating cinder volume from glance image ' | ||
1598 | 380 | '({})...'.format(img_id)) | ||
1599 | 381 | bootable = 'true' | ||
1600 | 382 | elif src_vol_id and not img_id and not snap_id: | ||
1601 | 383 | self.log.debug('Cloning cinder volume...') | ||
1602 | 384 | bootable = cinder.volumes.get(src_vol_id).bootable | ||
1603 | 385 | elif snap_id and not src_vol_id and not img_id: | ||
1604 | 386 | self.log.debug('Creating cinder volume from snapshot...') | ||
1605 | 387 | snap = cinder.volume_snapshots.find(id=snap_id) | ||
1606 | 388 | vol_size = snap.size | ||
1607 | 389 | snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id | ||
1608 | 390 | bootable = cinder.volumes.get(snap_vol_id).bootable | ||
1609 | 391 | elif not img_id and not src_vol_id and not snap_id: | ||
1610 | 392 | self.log.debug('Creating cinder volume...') | ||
1611 | 393 | bootable = 'false' | ||
1612 | 394 | else: | ||
1613 | 395 | msg = ('Invalid method use - name:{} size:{} img_id:{} ' | ||
1614 | 396 | 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size, | ||
1615 | 397 | img_id, src_vol_id, | ||
1616 | 398 | snap_id)) | ||
1617 | 399 | raise RuntimeError(msg) | ||
1618 | 400 | |||
1619 | 401 | # Create new volume | ||
1620 | 402 | try: | ||
1621 | 403 | vol_new = cinder.volumes.create(display_name=vol_name, | ||
1622 | 404 | imageRef=img_id, | ||
1623 | 405 | size=vol_size, | ||
1624 | 406 | source_volid=src_vol_id, | ||
1625 | 407 | snapshot_id=snap_id) | ||
1626 | 408 | vol_id = vol_new.id | ||
1627 | 409 | except Exception as e: | ||
1628 | 410 | msg = 'Failed to create volume: {}'.format(e) | ||
1629 | 411 | raise RuntimeError(msg) | ||
1630 | 412 | |||
1631 | 413 | # Wait for volume to reach available status | ||
1632 | 414 | ret = self.resource_reaches_status(cinder.volumes, vol_id, | ||
1633 | 415 | expected_stat="available", | ||
1634 | 416 | msg="Volume status wait") | ||
1635 | 417 | if not ret: | ||
1636 | 418 | msg = 'Cinder volume failed to reach expected state.' | ||
1637 | 419 | raise RuntimeError(msg) | ||
1638 | 420 | |||
1639 | 421 | # Re-validate new volume | ||
1640 | 422 | self.log.debug('Validating volume attributes...') | ||
1641 | 423 | val_vol_name = cinder.volumes.get(vol_id).display_name | ||
1642 | 424 | val_vol_boot = cinder.volumes.get(vol_id).bootable | ||
1643 | 425 | val_vol_stat = cinder.volumes.get(vol_id).status | ||
1644 | 426 | val_vol_size = cinder.volumes.get(vol_id).size | ||
1645 | 427 | msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:' | ||
1646 | 428 | '{} size:{}'.format(val_vol_name, vol_id, | ||
1647 | 429 | val_vol_stat, val_vol_boot, | ||
1648 | 430 | val_vol_size)) | ||
1649 | 431 | |||
1650 | 432 | if val_vol_boot == bootable and val_vol_stat == 'available' \ | ||
1651 | 433 | and val_vol_name == vol_name and val_vol_size == vol_size: | ||
1652 | 434 | self.log.debug(msg_attr) | ||
1653 | 435 | else: | ||
1654 | 436 | msg = ('Volume validation failed, {}'.format(msg_attr)) | ||
1655 | 437 | raise RuntimeError(msg) | ||
1656 | 438 | |||
1657 | 439 | return vol_new | ||
1658 | 440 | |||
1659 | 342 | def delete_resource(self, resource, resource_id, | 441 | def delete_resource(self, resource, resource_id, |
1660 | 343 | msg="resource", max_wait=120): | 442 | msg="resource", max_wait=120): |
1661 | 344 | """Delete one openstack resource, such as one instance, keypair, | 443 | """Delete one openstack resource, such as one instance, keypair, |
1662 | @@ -350,6 +449,8 @@ | |||
1663 | 350 | :param max_wait: maximum wait time in seconds | 449 | :param max_wait: maximum wait time in seconds |
1664 | 351 | :returns: True if successful, otherwise False | 450 | :returns: True if successful, otherwise False |
1665 | 352 | """ | 451 | """ |
1666 | 452 | self.log.debug('Deleting OpenStack resource ' | ||
1667 | 453 | '{} ({})'.format(resource_id, msg)) | ||
1668 | 353 | num_before = len(list(resource.list())) | 454 | num_before = len(list(resource.list())) |
1669 | 354 | resource.delete(resource_id) | 455 | resource.delete(resource_id) |
1670 | 355 | 456 | ||
1671 | @@ -411,3 +512,90 @@ | |||
1672 | 411 | self.log.debug('{} never reached expected status: ' | 512 | self.log.debug('{} never reached expected status: ' |
1673 | 412 | '{}'.format(resource_id, expected_stat)) | 513 | '{}'.format(resource_id, expected_stat)) |
1674 | 413 | return False | 514 | return False |
1675 | 515 | |||
1676 | 516 | def get_ceph_osd_id_cmd(self, index): | ||
1677 | 517 | """Produce a shell command that will return a ceph-osd id.""" | ||
1678 | 518 | cmd = ("`initctl list | grep 'ceph-osd ' | awk 'NR=={} {{ print $2 }}'" | ||
1679 | 519 | " | grep -o '[0-9]*'`".format(index + 1)) | ||
1680 | 520 | return cmd | ||
1681 | 521 | |||
1682 | 522 | def get_ceph_pools(self, sentry_unit): | ||
1683 | 523 | """Return a dict of ceph pools from a single ceph unit, with | ||
1684 | 524 | pool name as keys, pool id as vals.""" | ||
1685 | 525 | pools = {} | ||
1686 | 526 | cmd = 'sudo ceph osd lspools' | ||
1687 | 527 | output, code = sentry_unit.run(cmd) | ||
1688 | 528 | if code != 0: | ||
1689 | 529 | msg = ('{} `{}` returned {} ' | ||
1690 | 530 | '{}'.format(sentry_unit.info['unit_name'], | ||
1691 | 531 | cmd, code, output)) | ||
1692 | 532 | raise RuntimeError(msg) | ||
1693 | 533 | |||
1694 | 534 | # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance, | ||
1695 | 535 | for pool in str(output).split(','): | ||
1696 | 536 | pool_id_name = pool.split(' ') | ||
1697 | 537 | if len(pool_id_name) == 2: | ||
1698 | 538 | pool_id = pool_id_name[0] | ||
1699 | 539 | pool_name = pool_id_name[1] | ||
1700 | 540 | pools[pool_name] = int(pool_id) | ||
1701 | 541 | |||
1702 | 542 | self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'], | ||
1703 | 543 | pools)) | ||
1704 | 544 | return pools | ||
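The parsing loop in `get_ceph_pools` above splits `ceph osd lspools` output of the form `0 data,1 metadata,2 rbd,` on commas, then on spaces, tolerating the trailing empty field. Extracted as a pure function for clarity:

```python
def parse_lspools(output):
    """Parse `ceph osd lspools` output such as '0 data,1 metadata,2 rbd,'
    into a dict of pool name -> integer pool id (same loop as the
    synced get_ceph_pools helper)."""
    pools = {}
    for pool in str(output).split(','):
        pool_id_name = pool.split(' ')
        # The trailing comma yields an empty field; skip anything
        # that is not exactly '<id> <name>'.
        if len(pool_id_name) == 2:
            pools[pool_id_name[1]] = int(pool_id_name[0])
    return pools
```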
1705 | 545 | |||
1706 | 546 | def get_ceph_df(self, sentry_unit): | ||
1707 | 547 | """Return dict of ceph df json output, including ceph pool state. | ||
1708 | 548 | |||
1709 | 549 | :param sentry_unit: Pointer to amulet sentry instance (juju unit) | ||
1710 | 550 | :returns: Dict of ceph df output | ||
1711 | 551 | """ | ||
1712 | 552 | cmd = 'sudo ceph df --format=json' | ||
1713 | 553 | output, code = sentry_unit.run(cmd) | ||
1714 | 554 | if code != 0: | ||
1715 | 555 | msg = ('{} `{}` returned {} ' | ||
1716 | 556 | '{}'.format(sentry_unit.info['unit_name'], | ||
1717 | 557 | cmd, code, output)) | ||
1718 | 558 | raise RuntimeError(msg) | ||
1719 | 559 | return json.loads(output) | ||
1720 | 560 | |||
1721 | 561 | def get_ceph_pool_sample(self, sentry_unit, pool_id=0): | ||
1722 | 562 | """Take a sample of attributes of a ceph pool, returning ceph | ||
1723 | 563 | pool name, object count and disk space used for the specified | ||
1724 | 564 | pool ID number. | ||
1725 | 565 | |||
1726 | 566 | :param sentry_unit: Pointer to amulet sentry instance (juju unit) | ||
1727 | 567 | :param pool_id: Ceph pool ID | ||
1728 | 568 | :returns: List of pool name, object count, kb disk space used | ||
1729 | 569 | """ | ||
1730 | 570 | df = self.get_ceph_df(sentry_unit) | ||
1731 | 571 | pool_name = df['pools'][pool_id]['name'] | ||
1732 | 572 | obj_count = df['pools'][pool_id]['stats']['objects'] | ||
1733 | 573 | kb_used = df['pools'][pool_id]['stats']['kb_used'] | ||
1734 | 574 | self.log.debug('Ceph {} pool (ID {}): {} objects, ' | ||
1735 | 575 | '{} kb used'.format(pool_name, | ||
1736 | 576 | pool_id, | ||
1737 | 577 | obj_count, | ||
1738 | 578 | kb_used)) | ||
1739 | 579 | return pool_name, obj_count, kb_used | ||
1740 | 580 | |||
1741 | 581 | def validate_ceph_pool_samples(self, samples, sample_type="resource pool"): | ||
1742 | 582 | """Validate ceph pool samples taken over time, such as pool | ||
1743 | 583 | object counts or pool kb used, before adding, after adding, and | ||
1744 | 584 | after deleting items which affect those pool attributes. The | ||
1745 | 585 | 2nd element is expected to be greater than the 1st; 3rd is expected | ||
1746 | 586 | to be less than the 2nd. | ||
1747 | 587 | |||
1748 | 588 | :param samples: List containing 3 data samples | ||
1749 | 589 | :param sample_type: String for logging and usage context | ||
1750 | 590 | :returns: None if successful, Failure message otherwise | ||
1751 | 591 | """ | ||
1752 | 592 | original, created, deleted = range(3) | ||
1753 | 593 | if samples[created] <= samples[original] or \ | ||
1754 | 594 | samples[deleted] >= samples[created]: | ||
1755 | 595 | msg = ('Ceph {} samples ({}) ' | ||
1756 | 596 | 'unexpected.'.format(sample_type, samples)) | ||
1757 | 597 | return msg | ||
1758 | 598 | else: | ||
1759 | 599 | self.log.debug('Ceph {} samples (OK): ' | ||
1760 | 600 | '{}'.format(sample_type, samples)) | ||
1761 | 601 | return None | ||
1762 | 414 | 602 | ||
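The three-sample check in `validate_ceph_pool_samples` captures the test flow: sample a pool attribute before creating a resource, after creating it, and after deleting it; creation must increase the value and deletion must decrease it again. A minimal sketch of that invariant:

```python
def validate_pool_samples(samples):
    """samples is [original, after-create, after-delete]; return None
    if the middle sample grew and the last shrank, else a message."""
    original, created, deleted = samples
    if created <= original or deleted >= created:
        return 'Ceph samples ({}) unexpected.'.format(samples)
    return None
```

Note the check only requires the post-delete sample to drop below the post-create one, not return to the original value, since ceph may retain some pool overhead.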
1763 | === added file 'tests/tests.yaml' | |||
1764 | --- tests/tests.yaml 1970-01-01 00:00:00 +0000 | |||
1765 | +++ tests/tests.yaml 2015-07-02 12:52:15 +0000 | |||
1766 | @@ -0,0 +1,18 @@ | |||
1767 | 1 | bootstrap: true | ||
1768 | 2 | reset: true | ||
1769 | 3 | virtualenv: true | ||
1770 | 4 | makefile: | ||
1771 | 5 | - lint | ||
1772 | 6 | - test | ||
1773 | 7 | sources: | ||
1774 | 8 | - ppa:juju/stable | ||
1775 | 9 | packages: | ||
1776 | 10 | - amulet | ||
1777 | 11 | - python-amulet | ||
1778 | 12 | - python-cinderclient | ||
1779 | 13 | - python-distro-info | ||
1780 | 14 | - python-glanceclient | ||
1781 | 15 | - python-heatclient | ||
1782 | 16 | - python-keystoneclient | ||
1783 | 17 | - python-novaclient | ||
1784 | 18 | - python-swiftclient |
charm_lint_check #5714 glance-next for 1chb1n mp263413
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.
Full lint test output: http://paste.ubuntu.com/11808187/
Build: http://10.245.162.77:8080/job/charm_lint_check/5714/