Merge lp:~1chb1n/charms/trusty/heat/next-amulet-init into lp:~openstack-charmers-archive/charms/trusty/heat/next
Status: Merged
Merged at revision: 44
Proposed branch: lp:~1chb1n/charms/trusty/heat/next-amulet-init
Merge into: lp:~openstack-charmers-archive/charms/trusty/heat/next
Diff against target: 2349 lines (+2079/-59), 24 files modified:
  Makefile (+16/-12)
  charm-helpers-tests.yaml (+5/-0)
  hooks/charmhelpers/contrib/hahelpers/cluster.py (+12/-3)
  hooks/charmhelpers/contrib/openstack/ip.py (+49/-44)
  tests/00-setup (+11/-0)
  tests/014-basic-precise-icehouse (+11/-0)
  tests/015-basic-trusty-icehouse (+9/-0)
  tests/016-basic-trusty-juno (+11/-0)
  tests/017-basic-trusty-kilo (+11/-0)
  tests/018-basic-utopic-juno (+9/-0)
  tests/019-basic-vivid-kilo (+9/-0)
  tests/README (+76/-0)
  tests/basic_deployment.py (+606/-0)
  tests/charmhelpers/__init__.py (+38/-0)
  tests/charmhelpers/contrib/__init__.py (+15/-0)
  tests/charmhelpers/contrib/amulet/__init__.py (+15/-0)
  tests/charmhelpers/contrib/amulet/deployment.py (+93/-0)
  tests/charmhelpers/contrib/amulet/utils.py (+408/-0)
  tests/charmhelpers/contrib/openstack/__init__.py (+15/-0)
  tests/charmhelpers/contrib/openstack/amulet/__init__.py (+15/-0)
  tests/charmhelpers/contrib/openstack/amulet/deployment.py (+151/-0)
  tests/charmhelpers/contrib/openstack/amulet/utils.py (+413/-0)
  tests/files/hot_hello_world.yaml (+66/-0)
  tests/tests.yaml (+15/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/heat/next-amulet-init
Related bugs:
Reviewers:
  Corey Bryant (community): Approve
  OpenStack Charmers: Pending
Review via email: mp+258105@code.launchpad.net
Commit message
Description of the change
Add basic amulet tests; sync tests/charmhelpers; sync charmhelpers.
Depends on this charm-helpers mp also landing: https:/
Tracking bug: https:/
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5183 heat-next for 1chb1n mp258105
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4546 heat-next for 1chb1n mp258105
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #4864 heat-next for 1chb1n mp258105
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5185 heat-next for 1chb1n mp258105
LINT OK: passed
Ryan Beisner (1chb1n) wrote:
Silly rabbit service has to be different. Amulet passed locally, uosci bot will report back with a full run.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4548 heat-next for 1chb1n mp258105
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4550 heat-next for 1chb1n mp258105
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Corey Bryant (corey.bryant) wrote:
Hello! Some inline comments below.
Ryan Beisner (1chb1n) wrote:
Thanks for the detailed review, appreciate it! Acks, further info and questions also commented inline...
Corey Bryant (corey.bryant) wrote:
No problem, responses to responses below.
Ryan Beisner (1chb1n) wrote:
Reply in-line. TA!
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #4972 heat-next for 1chb1n mp258105
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5339 heat-next for 1chb1n mp258105
LINT OK: passed
Ryan Beisner (1chb1n) wrote:
Ready for review, release to the bot.
Corey Bryant (corey.bryant) wrote:
Approved but waiting for c-h changes to land before this lands. And waiting on tests. Thanks!
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5343 heat-next for 1chb1n mp258105
LINT OK: passed
Ryan Beisner (1chb1n) wrote:
@Corey:
Amulet passed all targets, but there is a bzr bot comment issue with the uosci amulet commentator (addressing that separately).
Pasting amulet success output here: http://
Preview Diff
1 | === modified file 'Makefile' |
2 | --- Makefile 2014-12-15 09:16:40 +0000 |
3 | +++ Makefile 2015-06-11 15:38:49 +0000 |
4 | @@ -2,13 +2,17 @@ |
5 | PYTHON := /usr/bin/env python |
6 | |
7 | lint: |
8 | - @echo -n "Running flake8 tests: " |
9 | - @flake8 --exclude hooks/charmhelpers hooks |
10 | - @flake8 unit_tests |
11 | - @echo "OK" |
12 | - @echo -n "Running charm proof: " |
13 | + @echo Lint inspections and charm proof... |
14 | + @flake8 --exclude hooks/charmhelpers hooks tests unit_tests |
15 | @charm proof |
16 | - @echo "OK" |
17 | + |
18 | +test: |
19 | + @# Bundletester expects unit tests here. |
20 | + @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests |
21 | + |
22 | +functional_test: |
23 | + @echo Starting all functional, lint and unit tests... |
24 | + @juju test -v -p AMULET_HTTP_PROXY --timeout 2700 |
25 | |
26 | bin/charm_helpers_sync.py: |
27 | @mkdir -p bin |
28 | @@ -16,9 +20,9 @@ |
29 | > bin/charm_helpers_sync.py |
30 | |
31 | sync: bin/charm_helpers_sync.py |
32 | - @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml |
33 | - |
34 | -unit_test: |
35 | - @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests |
36 | - |
37 | -all: unit_test lint |
38 | + @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml |
39 | + @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml |
40 | + |
41 | +publish: lint unit_test |
42 | + bzr push lp:charms/heat |
43 | + bzr push lp:charms/trusty/heat |
44 | |
45 | === renamed file 'charm-helpers.yaml' => 'charm-helpers-hooks.yaml' |
46 | === added file 'charm-helpers-tests.yaml' |
47 | --- charm-helpers-tests.yaml 1970-01-01 00:00:00 +0000 |
48 | +++ charm-helpers-tests.yaml 2015-06-11 15:38:49 +0000 |
49 | @@ -0,0 +1,5 @@ |
50 | +branch: lp:charm-helpers |
51 | +destination: tests/charmhelpers |
52 | +include: |
53 | + - contrib.amulet |
54 | + - contrib.openstack.amulet |
55 | |
56 | === modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py' |
57 | --- hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-06-04 08:45:25 +0000 |
58 | +++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-06-11 15:38:49 +0000 |
59 | @@ -64,6 +64,10 @@ |
60 | pass |
61 | |
62 | |
63 | +class CRMDCNotFound(Exception): |
64 | + pass |
65 | + |
66 | + |
67 | def is_elected_leader(resource): |
68 | """ |
69 | Returns True if the charm executing this is the elected cluster leader. |
70 | @@ -116,8 +120,9 @@ |
71 | status = subprocess.check_output(cmd, stderr=subprocess.STDOUT) |
72 | if not isinstance(status, six.text_type): |
73 | status = six.text_type(status, "utf-8") |
74 | - except subprocess.CalledProcessError: |
75 | - return False |
76 | + except subprocess.CalledProcessError as ex: |
77 | + raise CRMDCNotFound(str(ex)) |
78 | + |
79 | current_dc = '' |
80 | for line in status.split('\n'): |
81 | if line.startswith('Current DC'): |
82 | @@ -125,10 +130,14 @@ |
83 | current_dc = line.split(':')[1].split()[0] |
84 | if current_dc == get_unit_hostname(): |
85 | return True |
86 | + elif current_dc == 'NONE': |
87 | + raise CRMDCNotFound('Current DC: NONE') |
88 | + |
89 | return False |
90 | |
91 | |
92 | -@retry_on_exception(5, base_delay=2, exc_type=CRMResourceNotFound) |
93 | +@retry_on_exception(5, base_delay=2, |
94 | + exc_type=(CRMResourceNotFound, CRMDCNotFound)) |
95 | def is_crm_leader(resource, retry=False): |
96 | """ |
97 | Returns True if the charm calling this is the elected corosync leader, |
98 | |
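The cluster.py hunk above changes is_crm_leader to retry on CRMDCNotFound as well as CRMResourceNotFound, by passing a tuple as exc_type. As a rough stdlib-only sketch of what the real decorator (charmhelpers.core.decorators.retry_on_exception) does — names and delay handling here are illustrative, not the exact implementation:

```python
import time


def retry_on_exception(num_retries, base_delay=0, exc_type=Exception):
    """Sketch of retry_on_exception: retry the wrapped call when exc_type
    (an exception class or a tuple of classes, as in the diff above) is
    raised, doubling the delay between attempts."""
    def _decorator(f):
        def _wrapped(*args, **kwargs):
            delay = base_delay
            for attempt in range(num_retries):
                try:
                    return f(*args, **kwargs)
                except exc_type:
                    if attempt == num_retries - 1:
                        raise  # out of retries; propagate the last error
                    if delay:
                        time.sleep(delay)
                        delay *= 2
        return _wrapped
    return _decorator


class CRMDCNotFound(Exception):
    pass


attempts = []


@retry_on_exception(5, base_delay=0, exc_type=(CRMDCNotFound,))
def flaky_leader_check():
    """Hypothetical stand-in for is_crm_leader: fails twice, then succeeds,
    mimicking a corosync cluster that briefly reports 'Current DC: NONE'."""
    attempts.append(1)
    if len(attempts) < 3:
        raise CRMDCNotFound('Current DC: NONE')
    return True
```

With the tuple form, a transient 'Current DC: NONE' no longer makes the leader check return False immediately; it is retried just like CRMResourceNotFound.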
99 | === modified file 'hooks/charmhelpers/contrib/openstack/ip.py' |
100 | --- hooks/charmhelpers/contrib/openstack/ip.py 2015-02-24 11:04:31 +0000 |
101 | +++ hooks/charmhelpers/contrib/openstack/ip.py 2015-06-11 15:38:49 +0000 |
102 | @@ -17,6 +17,7 @@ |
103 | from charmhelpers.core.hookenv import ( |
104 | config, |
105 | unit_get, |
106 | + service_name, |
107 | ) |
108 | from charmhelpers.contrib.network.ip import ( |
109 | get_address_in_network, |
110 | @@ -26,8 +27,6 @@ |
111 | ) |
112 | from charmhelpers.contrib.hahelpers.cluster import is_clustered |
113 | |
114 | -from functools import partial |
115 | - |
116 | PUBLIC = 'public' |
117 | INTERNAL = 'int' |
118 | ADMIN = 'admin' |
119 | @@ -35,15 +34,18 @@ |
120 | ADDRESS_MAP = { |
121 | PUBLIC: { |
122 | 'config': 'os-public-network', |
123 | - 'fallback': 'public-address' |
124 | + 'fallback': 'public-address', |
125 | + 'override': 'os-public-hostname', |
126 | }, |
127 | INTERNAL: { |
128 | 'config': 'os-internal-network', |
129 | - 'fallback': 'private-address' |
130 | + 'fallback': 'private-address', |
131 | + 'override': 'os-internal-hostname', |
132 | }, |
133 | ADMIN: { |
134 | 'config': 'os-admin-network', |
135 | - 'fallback': 'private-address' |
136 | + 'fallback': 'private-address', |
137 | + 'override': 'os-admin-hostname', |
138 | } |
139 | } |
140 | |
141 | @@ -57,15 +59,50 @@ |
142 | :param endpoint_type: str endpoint type to resolve. |
143 | :param returns: str base URL for services on the current service unit. |
144 | """ |
145 | - scheme = 'http' |
146 | - if 'https' in configs.complete_contexts(): |
147 | - scheme = 'https' |
148 | + scheme = _get_scheme(configs) |
149 | + |
150 | address = resolve_address(endpoint_type) |
151 | if is_ipv6(address): |
152 | address = "[{}]".format(address) |
153 | + |
154 | return '%s://%s' % (scheme, address) |
155 | |
156 | |
157 | +def _get_scheme(configs): |
158 | + """Returns the scheme to use for the url (either http or https) |
159 | + depending upon whether https is in the configs value. |
160 | + |
161 | + :param configs: OSTemplateRenderer config templating object to inspect |
162 | + for a complete https context. |
163 | + :returns: either 'http' or 'https' depending on whether https is |
164 | + configured within the configs context. |
165 | + """ |
166 | + scheme = 'http' |
167 | + if configs and 'https' in configs.complete_contexts(): |
168 | + scheme = 'https' |
169 | + return scheme |
170 | + |
171 | + |
172 | +def _get_address_override(endpoint_type=PUBLIC): |
173 | + """Returns any address overrides that the user has defined based on the |
174 | + endpoint type. |
175 | + |
176 | + Note: this function allows for the service name to be inserted into the |
177 | + address if the user specifies {service_name}.somehost.org. |
178 | + |
179 | + :param endpoint_type: the type of endpoint to retrieve the override |
180 | + value for. |
181 | + :returns: any endpoint address or hostname that the user has overridden |
182 | + or None if an override is not present. |
183 | + """ |
184 | + override_key = ADDRESS_MAP[endpoint_type]['override'] |
185 | + addr_override = config(override_key) |
186 | + if not addr_override: |
187 | + return None |
188 | + else: |
189 | + return addr_override.format(service_name=service_name()) |
190 | + |
191 | + |
192 | def resolve_address(endpoint_type=PUBLIC): |
193 | """Return unit address depending on net config. |
194 | |
195 | @@ -77,7 +114,10 @@ |
196 | |
197 | :param endpoint_type: Network endpoing type |
198 | """ |
199 | - resolved_address = None |
200 | + resolved_address = _get_address_override(endpoint_type) |
201 | + if resolved_address: |
202 | + return resolved_address |
203 | + |
204 | vips = config('vip') |
205 | if vips: |
206 | vips = vips.split() |
207 | @@ -109,38 +149,3 @@ |
208 | "clustered=%s)" % (net_type, clustered)) |
209 | |
210 | return resolved_address |
211 | - |
212 | - |
213 | -def endpoint_url(configs, url_template, port, endpoint_type=PUBLIC, |
214 | - override=None): |
215 | - """Returns the correct endpoint URL to advertise to Keystone. |
216 | - |
217 | - This method provides the correct endpoint URL which should be advertised to |
218 | - the keystone charm for endpoint creation. This method allows for the url to |
219 | - be overridden to force a keystone endpoint to have specific URL for any of |
220 | - the defined scopes (admin, internal, public). |
221 | - |
222 | - :param configs: OSTemplateRenderer config templating object to inspect |
223 | - for a complete https context. |
224 | - :param url_template: str format string for creating the url template. Only |
225 | - two values will be passed - the scheme+hostname |
226 | - returned by the canonical_url and the port. |
227 | - :param endpoint_type: str endpoint type to resolve. |
228 | - :param override: str the name of the config option which overrides the |
229 | - endpoint URL defined by the charm itself. None will |
230 | - disable any overrides (default). |
231 | - """ |
232 | - if override: |
233 | - # Return any user-defined overrides for the keystone endpoint URL. |
234 | - user_value = config(override) |
235 | - if user_value: |
236 | - return user_value.strip() |
237 | - |
238 | - return url_template % (canonical_url(configs, endpoint_type), port) |
239 | - |
240 | - |
241 | -public_endpoint = partial(endpoint_url, endpoint_type=PUBLIC) |
242 | - |
243 | -internal_endpoint = partial(endpoint_url, endpoint_type=INTERNAL) |
244 | - |
245 | -admin_endpoint = partial(endpoint_url, endpoint_type=ADMIN) |
246 | |
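The ip.py hunk above adds per-endpoint hostname overrides (os-public-hostname and friends), with `{service_name}` expanding to the unit's service name. A standalone sketch of that behaviour — a plain dict and a constant stand in for the real hookenv.config() and hookenv.service_name() calls:

```python
# Endpoint types and override keys as defined in the diff above.
PUBLIC, INTERNAL, ADMIN = 'public', 'int', 'admin'

ADDRESS_MAP = {
    PUBLIC: {'override': 'os-public-hostname'},
    INTERNAL: {'override': 'os-internal-hostname'},
    ADMIN: {'override': 'os-admin-hostname'},
}

# Hypothetical charm config: only the public endpoint is overridden.
charm_config = {'os-public-hostname': '{service_name}.example.com'}


def service_name():
    # Stand-in for charmhelpers.core.hookenv.service_name()
    return 'heat'


def get_address_override(endpoint_type=PUBLIC):
    """Mirror of the new _get_address_override(): return the user-defined
    hostname for this endpoint type, or None when no override is set."""
    override_key = ADDRESS_MAP[endpoint_type]['override']
    addr_override = charm_config.get(override_key)
    if not addr_override:
        return None
    # '{service_name}' in the configured value expands to the service
    # name, so 'heat' deployed as-is yields 'heat.example.com'.
    return addr_override.format(service_name=service_name())
```

resolve_address() consults this first, so a configured override wins over VIPs and network-derived addresses; endpoints without an override fall through to the existing resolution path.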
247 | === added directory 'tests' |
248 | === added file 'tests/00-setup' |
249 | --- tests/00-setup 1970-01-01 00:00:00 +0000 |
250 | +++ tests/00-setup 2015-06-11 15:38:49 +0000 |
251 | @@ -0,0 +1,11 @@ |
252 | +#!/bin/bash |
253 | + |
254 | +set -ex |
255 | + |
256 | +sudo add-apt-repository --yes ppa:juju/stable |
257 | +sudo apt-get update --yes |
258 | +sudo apt-get install --yes python-amulet \ |
259 | + python-distro-info \ |
260 | + python-glanceclient \ |
261 | + python-keystoneclient \ |
262 | + python-novaclient |
263 | |
264 | === added file 'tests/014-basic-precise-icehouse' |
265 | --- tests/014-basic-precise-icehouse 1970-01-01 00:00:00 +0000 |
266 | +++ tests/014-basic-precise-icehouse 2015-06-11 15:38:49 +0000 |
267 | @@ -0,0 +1,11 @@ |
268 | +#!/usr/bin/python |
269 | + |
270 | +"""Amulet tests on a basic heat deployment on precise-icehouse.""" |
271 | + |
272 | +from basic_deployment import HeatBasicDeployment |
273 | + |
274 | +if __name__ == '__main__': |
275 | + deployment = HeatBasicDeployment(series='precise', |
276 | + openstack='cloud:precise-icehouse', |
277 | + source='cloud:precise-updates/icehouse') |
278 | + deployment.run_tests() |
279 | |
280 | === added file 'tests/015-basic-trusty-icehouse' |
281 | --- tests/015-basic-trusty-icehouse 1970-01-01 00:00:00 +0000 |
282 | +++ tests/015-basic-trusty-icehouse 2015-06-11 15:38:49 +0000 |
283 | @@ -0,0 +1,9 @@ |
284 | +#!/usr/bin/python |
285 | + |
286 | +"""Amulet tests on a basic heat deployment on trusty-icehouse.""" |
287 | + |
288 | +from basic_deployment import HeatBasicDeployment |
289 | + |
290 | +if __name__ == '__main__': |
291 | + deployment = HeatBasicDeployment(series='trusty') |
292 | + deployment.run_tests() |
293 | |
294 | === added file 'tests/016-basic-trusty-juno' |
295 | --- tests/016-basic-trusty-juno 1970-01-01 00:00:00 +0000 |
296 | +++ tests/016-basic-trusty-juno 2015-06-11 15:38:49 +0000 |
297 | @@ -0,0 +1,11 @@ |
298 | +#!/usr/bin/python |
299 | + |
300 | +"""Amulet tests on a basic heat deployment on trusty-juno.""" |
301 | + |
302 | +from basic_deployment import HeatBasicDeployment |
303 | + |
304 | +if __name__ == '__main__': |
305 | + deployment = HeatBasicDeployment(series='trusty', |
306 | + openstack='cloud:trusty-juno', |
307 | + source='cloud:trusty-updates/juno') |
308 | + deployment.run_tests() |
309 | |
310 | === added file 'tests/017-basic-trusty-kilo' |
311 | --- tests/017-basic-trusty-kilo 1970-01-01 00:00:00 +0000 |
312 | +++ tests/017-basic-trusty-kilo 2015-06-11 15:38:49 +0000 |
313 | @@ -0,0 +1,11 @@ |
314 | +#!/usr/bin/python |
315 | + |
316 | +"""Amulet tests on a basic heat deployment on trusty-kilo.""" |
317 | + |
318 | +from basic_deployment import HeatBasicDeployment |
319 | + |
320 | +if __name__ == '__main__': |
321 | + deployment = HeatBasicDeployment(series='trusty', |
322 | + openstack='cloud:trusty-kilo', |
323 | + source='cloud:trusty-updates/kilo') |
324 | + deployment.run_tests() |
325 | |
326 | === added file 'tests/018-basic-utopic-juno' |
327 | --- tests/018-basic-utopic-juno 1970-01-01 00:00:00 +0000 |
328 | +++ tests/018-basic-utopic-juno 2015-06-11 15:38:49 +0000 |
329 | @@ -0,0 +1,9 @@ |
330 | +#!/usr/bin/python |
331 | + |
332 | +"""Amulet tests on a basic heat deployment on utopic-juno.""" |
333 | + |
334 | +from basic_deployment import HeatBasicDeployment |
335 | + |
336 | +if __name__ == '__main__': |
337 | + deployment = HeatBasicDeployment(series='utopic') |
338 | + deployment.run_tests() |
339 | |
340 | === added file 'tests/019-basic-vivid-kilo' |
341 | --- tests/019-basic-vivid-kilo 1970-01-01 00:00:00 +0000 |
342 | +++ tests/019-basic-vivid-kilo 2015-06-11 15:38:49 +0000 |
343 | @@ -0,0 +1,9 @@ |
344 | +#!/usr/bin/python |
345 | + |
346 | +"""Amulet tests on a basic heat deployment on vivid-kilo.""" |
347 | + |
348 | +from basic_deployment import HeatBasicDeployment |
349 | + |
350 | +if __name__ == '__main__': |
351 | + deployment = HeatBasicDeployment(series='vivid') |
352 | + deployment.run_tests() |
353 | |
354 | === added file 'tests/README' |
355 | --- tests/README 1970-01-01 00:00:00 +0000 |
356 | +++ tests/README 2015-06-11 15:38:49 +0000 |
357 | @@ -0,0 +1,76 @@ |
358 | +This directory provides Amulet tests that focus on verification of heat |
359 | +deployments. |
360 | + |
361 | +test_* methods are called in lexical sort order. |
362 | + |
363 | +Test name convention to ensure desired test order: |
364 | + 1xx service and endpoint checks |
365 | + 2xx relation checks |
366 | + 3xx config checks |
367 | + 4xx functional checks |
368 | + 9xx restarts and other final checks |
369 | + |
370 | +Common uses of heat relations in deployments: |
371 | + - [ heat, mysql ] |
372 | + - [ heat, keystone ] |
373 | + - [ heat, rabbitmq-server ] |
374 | + |
375 | +More detailed relations of heat service in a common deployment: |
376 | + relations: |
377 | + amqp: |
378 | + - rabbitmq-server |
379 | + identity-service: |
380 | + - keystone |
381 | + shared-db: |
382 | + - mysql |
383 | + |
384 | +In order to run tests, you'll need charm-tools installed (in addition to |
385 | +juju, of course): |
386 | + sudo add-apt-repository ppa:juju/stable |
387 | + sudo apt-get update |
388 | + sudo apt-get install charm-tools |
389 | + |
390 | +If you use a web proxy server to access the web, you'll need to set the |
391 | +AMULET_HTTP_PROXY environment variable to the http URL of the proxy server. |
392 | + |
393 | +The following examples demonstrate different ways that tests can be executed. |
394 | +All examples are run from the charm's root directory. |
395 | + |
396 | + * To run all tests (starting with 00-setup): |
397 | + |
398 | + make test |
399 | + |
400 | + * To run a specific test module (or modules): |
401 | + |
402 | + juju test -v -p AMULET_HTTP_PROXY 15-basic-trusty-icehouse |
403 | + |
404 | + * To run a specific test module (or modules), and keep the environment |
405 | + deployed after a failure: |
406 | + |
407 | + juju test --set-e -v -p AMULET_HTTP_PROXY 15-basic-trusty-icehouse |
408 | + |
409 | + * To re-run a test module against an already deployed environment (one |
410 | + that was deployed by a previous call to 'juju test --set-e'): |
411 | + |
412 | + ./tests/15-basic-trusty-icehouse |
413 | + |
414 | +For debugging and test development purposes, all code should be idempotent. |
415 | +In other words, the code should have the ability to be re-run without changing |
416 | +the results beyond the initial run. This enables editing and re-running of a |
417 | +test module against an already deployed environment, as described above. |
418 | + |
419 | +Manual debugging tips: |
420 | + |
421 | + * Set the following env vars before using the OpenStack CLI as admin: |
422 | + export OS_AUTH_URL=http://`juju-deployer -f keystone 2>&1 | tail -n 1`:5000/v2.0 |
423 | + export OS_TENANT_NAME=admin |
424 | + export OS_USERNAME=admin |
425 | + export OS_PASSWORD=openstack |
426 | + export OS_REGION_NAME=RegionOne |
427 | + |
428 | + * Set the following env vars before using the OpenStack CLI as demoUser: |
429 | + export OS_AUTH_URL=http://`juju-deployer -f keystone 2>&1 | tail -n 1`:5000/v2.0 |
430 | + export OS_TENANT_NAME=demoTenant |
431 | + export OS_USERNAME=demoUser |
432 | + export OS_PASSWORD=password |
433 | + export OS_REGION_NAME=RegionOne |
434 | |
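The README above relies on test_* methods running in lexical sort order, which is what the 1xx/2xx/3xx/4xx/9xx prefix convention exploits. A quick illustration with hypothetical method names:

```python
# Hypothetical test method names following the README's convention:
# 1xx service checks, 2xx relations, 4xx functional, 9xx restarts.
methods = [
    'test_900_restart_on_config_change',
    'test_402_stack_create',
    'test_200_heat_mysql_relation',
    'test_100_services',
]

# Lexical sort is exactly the order the test runner uses.
ordered = sorted(methods)
```

So service checks always run first and restart checks last, regardless of the order the methods are defined in the module.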
435 | === added file 'tests/basic_deployment.py' |
436 | --- tests/basic_deployment.py 1970-01-01 00:00:00 +0000 |
437 | +++ tests/basic_deployment.py 2015-06-11 15:38:49 +0000 |
438 | @@ -0,0 +1,606 @@ |
439 | +#!/usr/bin/python |
440 | + |
441 | +""" |
442 | +Basic heat functional test. |
443 | +""" |
444 | +import amulet |
445 | +import time |
446 | +from heatclient.common import template_utils |
447 | + |
448 | +from charmhelpers.contrib.openstack.amulet.deployment import ( |
449 | + OpenStackAmuletDeployment |
450 | +) |
451 | + |
452 | +from charmhelpers.contrib.openstack.amulet.utils import ( |
453 | + OpenStackAmuletUtils, |
454 | + DEBUG, |
455 | + #ERROR |
456 | +) |
457 | + |
458 | +# Use DEBUG to turn on debug logging |
459 | +u = OpenStackAmuletUtils(DEBUG) |
460 | + |
461 | +# Resource and name constants |
462 | +IMAGE_NAME = 'cirros-image-1' |
463 | +KEYPAIR_NAME = 'testkey' |
464 | +STACK_NAME = 'hello_world' |
465 | +RESOURCE_TYPE = 'server' |
466 | +TEMPLATE_REL_PATH = 'tests/files/hot_hello_world.yaml' |
467 | + |
468 | + |
469 | +class HeatBasicDeployment(OpenStackAmuletDeployment): |
470 | + """Amulet tests on a basic heat deployment.""" |
471 | + |
472 | + def __init__(self, series=None, openstack=None, source=None, git=False, |
473 | + stable=False): |
474 | + """Deploy the entire test environment.""" |
475 | + super(HeatBasicDeployment, self).__init__(series, openstack, |
476 | + source, stable) |
477 | + self.git = git |
478 | + self._add_services() |
479 | + self._add_relations() |
480 | + self._configure_services() |
481 | + self._deploy() |
482 | + self._initialize_tests() |
483 | + |
484 | + def _add_services(self): |
485 | + """Add services |
486 | + |
487 | + Add the services that we're testing, where heat is local, |
488 | + and the rest of the service are from lp branches that are |
489 | + compatible with the local charm (e.g. stable or next). |
490 | + """ |
491 | + this_service = {'name': 'heat'} |
492 | + other_services = [{'name': 'keystone'}, |
493 | + {'name': 'rabbitmq-server'}, |
494 | + {'name': 'mysql'}, |
495 | + {'name': 'glance'}, |
496 | + {'name': 'nova-cloud-controller'}, |
497 | + {'name': 'nova-compute'}] |
498 | + super(HeatBasicDeployment, self)._add_services(this_service, |
499 | + other_services) |
500 | + |
501 | + def _add_relations(self): |
502 | + """Add all of the relations for the services.""" |
503 | + |
504 | + relations = { |
505 | + 'heat:amqp': 'rabbitmq-server:amqp', |
506 | + 'heat:identity-service': 'keystone:identity-service', |
507 | + 'heat:shared-db': 'mysql:shared-db', |
508 | + 'nova-compute:image-service': 'glance:image-service', |
509 | + 'nova-compute:shared-db': 'mysql:shared-db', |
510 | + 'nova-compute:amqp': 'rabbitmq-server:amqp', |
511 | + 'nova-cloud-controller:shared-db': 'mysql:shared-db', |
512 | + 'nova-cloud-controller:identity-service': |
513 | + 'keystone:identity-service', |
514 | + 'nova-cloud-controller:amqp': 'rabbitmq-server:amqp', |
515 | + 'nova-cloud-controller:cloud-compute': |
516 | + 'nova-compute:cloud-compute', |
517 | + 'nova-cloud-controller:image-service': 'glance:image-service', |
518 | + 'keystone:shared-db': 'mysql:shared-db', |
519 | + 'glance:identity-service': 'keystone:identity-service', |
520 | + 'glance:shared-db': 'mysql:shared-db', |
521 | + 'glance:amqp': 'rabbitmq-server:amqp' |
522 | + } |
523 | + super(HeatBasicDeployment, self)._add_relations(relations) |
524 | + |
525 | + def _configure_services(self): |
526 | + """Configure all of the services.""" |
527 | + nova_config = {'config-flags': 'auto_assign_floating_ip=False', |
528 | + 'enable-live-migration': 'False'} |
529 | + keystone_config = {'admin-password': 'openstack', |
530 | + 'admin-token': 'ubuntutesting'} |
531 | + configs = {'nova-compute': nova_config, 'keystone': keystone_config} |
532 | + super(HeatBasicDeployment, self)._configure_services(configs) |
533 | + |
534 | + def _initialize_tests(self): |
535 | + """Perform final initialization before tests get run.""" |
536 | + # Access the sentries for inspecting service units |
537 | + self.heat_sentry = self.d.sentry.unit['heat/0'] |
538 | + self.mysql_sentry = self.d.sentry.unit['mysql/0'] |
539 | + self.keystone_sentry = self.d.sentry.unit['keystone/0'] |
540 | + self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0'] |
541 | + self.nova_compute_sentry = self.d.sentry.unit['nova-compute/0'] |
542 | + self.glance_sentry = self.d.sentry.unit['glance/0'] |
543 | + u.log.debug('openstack release val: {}'.format( |
544 | + self._get_openstack_release())) |
545 | + u.log.debug('openstack release str: {}'.format( |
546 | + self._get_openstack_release_string())) |
547 | + |
548 | + # Let things settle a bit before moving forward |
549 | + time.sleep(30) |
550 | + |
551 | + # Authenticate admin with keystone |
552 | + self.keystone = u.authenticate_keystone_admin(self.keystone_sentry, |
553 | + user='admin', |
554 | + password='openstack', |
555 | + tenant='admin') |
556 | + |
557 | + # Authenticate admin with glance endpoint |
558 | + self.glance = u.authenticate_glance_admin(self.keystone) |
559 | + |
560 | + # Authenticate admin with nova endpoint |
561 | + self.nova = u.authenticate_nova_user(self.keystone, |
562 | + user='admin', |
563 | + password='openstack', |
564 | + tenant='admin') |
565 | + |
566 | + # Authenticate admin with heat endpoint |
567 | + self.heat = u.authenticate_heat_admin(self.keystone) |
568 | + |
569 | + def _image_create(self): |
570 | + """Create an image to be used by the heat template, verify it exists""" |
571 | + u.log.debug('Creating glance image ({})...'.format(IMAGE_NAME)) |
572 | + |
573 | + # Create a new image |
574 | + image_new = u.create_cirros_image(self.glance, IMAGE_NAME) |
575 | + |
576 | + # Confirm image is created and has status of 'active' |
577 | + if not image_new: |
578 | + message = 'glance image create failed' |
579 | + amulet.raise_status(amulet.FAIL, msg=message) |
580 | + |
581 | + # Verify new image name |
582 | + images_list = list(self.glance.images.list()) |
583 | + if images_list[0].name != IMAGE_NAME: |
584 | + message = ('glance image create failed or unexpected ' |
585 | + 'image name {}'.format(images_list[0].name)) |
586 | + amulet.raise_status(amulet.FAIL, msg=message) |
587 | + |
588 | + def _keypair_create(self): |
589 | + """Create a keypair to be used by the heat template, |
590 | + or get a keypair if it exists.""" |
591 | + self.keypair = u.create_or_get_keypair(self.nova, |
592 | + keypair_name=KEYPAIR_NAME) |
593 | + if not self.keypair: |
594 | + msg = 'Failed to create or get keypair.' |
595 | + amulet.raise_status(amulet.FAIL, msg=msg) |
596 | + u.log.debug("Keypair: {} {}".format(self.keypair.id, |
597 | + self.keypair.fingerprint)) |
598 | + |
599 | + def _stack_create(self): |
600 | + """Create a heat stack from a basic heat template, verify its status""" |
601 | + u.log.debug('Creating heat stack...') |
602 | + |
603 | + t_url = u.file_to_url(TEMPLATE_REL_PATH) |
604 | + r_req = self.heat.http_client.raw_request |
605 | + u.log.debug('template url: {}'.format(t_url)) |
606 | + |
607 | + t_files, template = template_utils.get_template_contents(t_url, r_req) |
608 | + env_files, env = template_utils.process_environment_and_files( |
609 | + env_path=None) |
610 | + |
611 | + fields = { |
612 | + 'stack_name': STACK_NAME, |
613 | + 'timeout_mins': '15', |
614 | + 'disable_rollback': False, |
615 | + 'parameters': { |
616 | + 'admin_pass': 'Ubuntu', |
617 | + 'key_name': KEYPAIR_NAME, |
618 | + 'image': IMAGE_NAME |
619 | + }, |
620 | + 'template': template, |
621 | + 'files': dict(list(t_files.items()) + list(env_files.items())), |
622 | + 'environment': env |
623 | + } |
624 | + |
625 | + # Create the stack. |
626 | + try: |
627 | + _stack = self.heat.stacks.create(**fields) |
628 | + u.log.debug('Stack data: {}'.format(_stack)) |
629 | + _stack_id = _stack['stack']['id'] |
630 | + u.log.debug('Creating new stack, ID: {}'.format(_stack_id)) |
631 | + except Exception as e: |
632 | + # Generally, an api or cloud config error if this is hit. |
633 | + msg = 'Failed to create heat stack: {}'.format(e) |
634 | + amulet.raise_status(amulet.FAIL, msg=msg) |
635 | + |
636 | + # Confirm stack reaches COMPLETE status. |
637 | + # /!\ Heat stacks reach a COMPLETE status even when nova cannot |
638 | + # find resources (a valid hypervisor) to fit the instance, in |
639 | + # which case the heat stack self-deletes! Confirm anyway... |
640 | + ret = u.resource_reaches_status(self.heat.stacks, _stack_id, |
641 | + expected_stat="COMPLETE", |
642 | + msg="Stack status wait") |
643 | + _stacks = list(self.heat.stacks.list()) |
644 | + u.log.debug('All stacks: {}'.format(_stacks)) |
645 | + if not ret: |
646 | + msg = 'Heat stack failed to reach expected state.' |
647 | + amulet.raise_status(amulet.FAIL, msg=msg) |
648 | + |
649 | + # Confirm stack still exists. |
650 | + try: |
651 | + _stack = self.heat.stacks.get(STACK_NAME) |
652 | + except Exception as e: |
653 | + # Generally, a resource availability issue if this is hit. |
654 | + msg = 'Failed to get heat stack: {}'.format(e) |
655 | + amulet.raise_status(amulet.FAIL, msg=msg) |
656 | + |
657 | + # Confirm stack name. |
658 | + u.log.debug('Expected, actual stack name: {}, ' |
659 | + '{}'.format(STACK_NAME, _stack.stack_name)) |
660 | + if STACK_NAME != _stack.stack_name: |
661 | + msg = 'Stack name mismatch, {} != {}'.format(STACK_NAME, |
662 | + _stack.stack_name) |
663 | + amulet.raise_status(amulet.FAIL, msg=msg) |
664 | + |
665 | + def _stack_resource_compute(self): |
666 | + """Confirm that the stack has created a subsequent nova |
667 | + compute resource, and confirm its status.""" |
668 | + u.log.debug('Confirming heat stack resource status...') |
669 | + |
670 | + # Confirm existence of a heat-generated nova compute resource. |
671 | + _resource = self.heat.resources.get(STACK_NAME, RESOURCE_TYPE) |
672 | + _server_id = _resource.physical_resource_id |
673 | + if _server_id: |
674 | + u.log.debug('Heat template spawned nova instance, ' |
675 | + 'ID: {}'.format(_server_id)) |
676 | + else: |
677 | + msg = 'Stack failed to spawn a nova compute resource (instance).' |
678 | + amulet.raise_status(amulet.FAIL, msg=msg) |
679 | + |
680 | + # Confirm nova instance reaches ACTIVE status. |
681 | + ret = u.resource_reaches_status(self.nova.servers, _server_id, |
682 | + expected_stat="ACTIVE", |
683 | + msg="nova instance") |
684 | + if not ret: |
685 | + msg = 'Nova compute instance failed to reach expected state.' |
686 | + amulet.raise_status(amulet.FAIL, msg=msg) |
687 | + |
688 | + def _stack_delete(self): |
689 | + """Delete a heat stack, verify.""" |
690 | + u.log.debug('Deleting heat stack...') |
691 | + u.delete_resource(self.heat.stacks, STACK_NAME, msg="heat stack") |
692 | + |
693 | + def _image_delete(self): |
694 | + """Delete that image.""" |
695 | + u.log.debug('Deleting glance image...') |
696 | + image = self.nova.images.find(name=IMAGE_NAME) |
697 | + u.delete_resource(self.nova.images, image, msg="glance image") |
698 | + |
699 | + def _keypair_delete(self): |
700 | + """Delete that keypair.""" |
701 | + u.log.debug('Deleting keypair...') |
702 | + u.delete_resource(self.nova.keypairs, KEYPAIR_NAME, msg="nova keypair") |
703 | + |
704 | + def test_100_services(self): |
705 | + """Verify the expected services are running on the corresponding |
706 | + service units.""" |
707 | + service_names = { |
708 | + self.heat_sentry: ['heat-api', |
709 | + 'heat-api-cfn', |
710 | + 'heat-engine'], |
711 | + self.mysql_sentry: ['mysql'], |
712 | + self.rabbitmq_sentry: ['rabbitmq-server'], |
713 | + self.nova_compute_sentry: ['nova-compute', |
714 | + 'nova-network', |
715 | + 'nova-api'], |
716 | + self.keystone_sentry: ['keystone'], |
717 | + self.glance_sentry: ['glance-registry', 'glance-api'] |
718 | + } |
719 | + |
720 | + ret = u.validate_services_by_name(service_names) |
721 | + if ret: |
722 | + amulet.raise_status(amulet.FAIL, msg=ret) |
723 | + |
724 | + def test_110_service_catalog(self): |
725 | + """Verify that the service catalog endpoint data is valid.""" |
726 | + u.log.debug('Checking service catalog endpoint data...') |
727 | + endpoint_vol = {'adminURL': u.valid_url, |
728 | + 'region': 'RegionOne', |
729 | + 'publicURL': u.valid_url, |
730 | + 'internalURL': u.valid_url} |
731 | + endpoint_id = {'adminURL': u.valid_url, |
732 | + 'region': 'RegionOne', |
733 | + 'publicURL': u.valid_url, |
734 | + 'internalURL': u.valid_url} |
735 | + if self._get_openstack_release() >= self.precise_folsom: |
736 | + endpoint_vol['id'] = u.not_null |
737 | + endpoint_id['id'] = u.not_null |
738 | + expected = {'compute': [endpoint_vol], 'orchestration': [endpoint_vol], |
739 | + 'image': [endpoint_vol], 'identity': [endpoint_id]} |
740 | + |
741 | + if self._get_openstack_release() <= self.trusty_juno: |
742 | + # Before Kilo |
743 | + expected['s3'] = [endpoint_vol] |
744 | + expected['ec2'] = [endpoint_vol] |
745 | + |
746 | + actual = self.keystone.service_catalog.get_endpoints() |
747 | + ret = u.validate_svc_catalog_endpoint_data(expected, actual) |
748 | + if ret: |
749 | + amulet.raise_status(amulet.FAIL, msg=ret) |
750 | + |
751 | + def test_120_heat_endpoint(self): |
752 | + """Verify the heat api endpoint data.""" |
753 | + u.log.debug('Checking api endpoint data...') |
754 | + endpoints = self.keystone.endpoints.list() |
755 | + |
756 | + if self._get_openstack_release() <= self.trusty_juno: |
757 | + # Before Kilo |
758 | + admin_port = internal_port = public_port = '3333' |
759 | + else: |
760 | + # Kilo and later |
761 | + admin_port = internal_port = public_port = '8004' |
762 | + |
763 | + expected = {'id': u.not_null, |
764 | + 'region': 'RegionOne', |
765 | + 'adminurl': u.valid_url, |
766 | + 'internalurl': u.valid_url, |
767 | + 'publicurl': u.valid_url, |
768 | + 'service_id': u.not_null} |
769 | + |
770 | + ret = u.validate_endpoint_data(endpoints, admin_port, internal_port, |
771 | + public_port, expected) |
772 | + if ret: |
773 | + message = 'heat endpoint: {}'.format(ret) |
774 | + amulet.raise_status(amulet.FAIL, msg=message) |
775 | + |
776 | + def test_200_heat_mysql_shared_db_relation(self): |
777 | + """Verify the heat:mysql shared-db relation data""" |
778 | + u.log.debug('Checking heat:mysql shared-db relation data...') |
779 | + unit = self.heat_sentry |
780 | + relation = ['shared-db', 'mysql:shared-db'] |
781 | + expected = { |
782 | + 'private-address': u.valid_ip, |
783 | + 'heat_database': 'heat', |
784 | + 'heat_username': 'heat', |
785 | + 'heat_hostname': u.valid_ip |
786 | + } |
787 | + |
788 | + ret = u.validate_relation_data(unit, relation, expected) |
789 | + if ret: |
790 | + message = u.relation_error('heat:mysql shared-db', ret) |
791 | + amulet.raise_status(amulet.FAIL, msg=message) |
792 | + |
793 | + def test_201_mysql_heat_shared_db_relation(self): |
794 | + """Verify the mysql:heat shared-db relation data""" |
795 | + u.log.debug('Checking mysql:heat shared-db relation data...') |
796 | + unit = self.mysql_sentry |
797 | + relation = ['shared-db', 'heat:shared-db'] |
798 | + expected = { |
799 | + 'private-address': u.valid_ip, |
800 | + 'db_host': u.valid_ip, |
801 | + 'heat_allowed_units': 'heat/0', |
802 | + 'heat_password': u.not_null |
803 | + } |
804 | + |
805 | + ret = u.validate_relation_data(unit, relation, expected) |
806 | + if ret: |
807 | + message = u.relation_error('mysql:heat shared-db', ret) |
808 | + amulet.raise_status(amulet.FAIL, msg=message) |
809 | + |
810 | + def test_202_heat_keystone_identity_relation(self): |
811 | + """Verify the heat:keystone identity-service relation data""" |
812 | + u.log.debug('Checking heat:keystone identity-service relation data...') |
813 | + unit = self.heat_sentry |
814 | + relation = ['identity-service', 'keystone:identity-service'] |
815 | + expected = { |
816 | + 'heat_service': 'heat', |
817 | + 'heat_region': 'RegionOne', |
818 | + 'heat_public_url': u.valid_url, |
819 | + 'heat_admin_url': u.valid_url, |
820 | + 'heat_internal_url': u.valid_url, |
821 | + 'heat-cfn_service': 'heat-cfn', |
822 | + 'heat-cfn_region': 'RegionOne', |
823 | + 'heat-cfn_public_url': u.valid_url, |
824 | + 'heat-cfn_admin_url': u.valid_url, |
825 | + 'heat-cfn_internal_url': u.valid_url |
826 | + } |
827 | + ret = u.validate_relation_data(unit, relation, expected) |
828 | + if ret: |
829 | + message = u.relation_error('heat:keystone identity-service', ret) |
830 | + amulet.raise_status(amulet.FAIL, msg=message) |
831 | + |
832 | + def test_203_keystone_heat_identity_relation(self): |
833 | + """Verify the keystone:heat identity-service relation data""" |
834 | + u.log.debug('Checking keystone:heat identity-service relation data...') |
835 | + unit = self.keystone_sentry |
836 | + relation = ['identity-service', 'heat:identity-service'] |
837 | + expected = { |
838 | + 'service_protocol': 'http', |
839 | + 'service_tenant': 'services', |
840 | + 'admin_token': 'ubuntutesting', |
841 | + 'service_password': u.not_null, |
842 | + 'service_port': '5000', |
843 | + 'auth_port': '35357', |
844 | + 'auth_protocol': 'http', |
845 | + 'private-address': u.valid_ip, |
846 | + 'auth_host': u.valid_ip, |
847 | + 'service_username': 'heat-cfn_heat', |
848 | + 'service_tenant_id': u.not_null, |
849 | + 'service_host': u.valid_ip |
850 | + } |
851 | + ret = u.validate_relation_data(unit, relation, expected) |
852 | + if ret: |
853 | + message = u.relation_error('keystone:heat identity-service', ret) |
854 | + amulet.raise_status(amulet.FAIL, msg=message) |
855 | + |
856 | + def test_204_heat_rmq_amqp_relation(self): |
857 | + """Verify the heat:rabbitmq-server amqp relation data""" |
858 | + u.log.debug('Checking heat:rabbitmq-server amqp relation data...') |
859 | + unit = self.heat_sentry |
860 | + relation = ['amqp', 'rabbitmq-server:amqp'] |
861 | + expected = { |
862 | + 'username': u.not_null, |
863 | + 'private-address': u.valid_ip, |
864 | + 'vhost': 'openstack' |
865 | + } |
866 | + |
867 | + ret = u.validate_relation_data(unit, relation, expected) |
868 | + if ret: |
869 | + message = u.relation_error('heat:rabbitmq-server amqp', ret) |
870 | + amulet.raise_status(amulet.FAIL, msg=message) |
871 | + |
872 | + def test_205_rmq_heat_amqp_relation(self): |
873 | + """Verify the rabbitmq-server:heat amqp relation data""" |
874 | + u.log.debug('Checking rabbitmq-server:heat amqp relation data...') |
875 | + unit = self.rabbitmq_sentry |
876 | + relation = ['amqp', 'heat:amqp'] |
877 | + expected = { |
878 | + 'private-address': u.valid_ip, |
879 | + 'password': u.not_null, |
880 | + 'hostname': u.valid_ip, |
881 | + } |
882 | + |
883 | + ret = u.validate_relation_data(unit, relation, expected) |
884 | + if ret: |
885 | + message = u.relation_error('rabbitmq-server:heat amqp', ret) |
886 | + amulet.raise_status(amulet.FAIL, msg=message) |
887 | + |
888 | + def test_300_heat_config(self): |
889 | + """Verify the data in the heat config file.""" |
890 | + u.log.debug('Checking heat config file data...') |
891 | + unit = self.heat_sentry |
892 | + conf = '/etc/heat/heat.conf' |
893 | + |
894 | + ks_rel = self.keystone_sentry.relation('identity-service', |
895 | + 'heat:identity-service') |
896 | + rmq_rel = self.rabbitmq_sentry.relation('amqp', |
897 | + 'heat:amqp') |
898 | + mysql_rel = self.mysql_sentry.relation('shared-db', |
899 | + 'heat:shared-db') |
900 | + |
901 | + u.log.debug('keystone:heat relation: {}'.format(ks_rel)) |
902 | + u.log.debug('rabbitmq:heat relation: {}'.format(rmq_rel)) |
903 | + u.log.debug('mysql:heat relation: {}'.format(mysql_rel)) |
904 | + |
905 | + db_uri = "mysql://{}:{}@{}/{}".format('heat', |
906 | + mysql_rel['heat_password'], |
907 | + mysql_rel['db_host'], |
908 | + 'heat') |
909 | + |
910 | + auth_uri = '{}://{}:{}/v2.0'.format(ks_rel['service_protocol'], |
911 | + ks_rel['service_host'], |
912 | + ks_rel['service_port']) |
913 | + |
914 | + expected = { |
915 | + 'DEFAULT': { |
916 | + 'use_syslog': 'False', |
917 | + 'debug': 'False', |
918 | + 'verbose': 'False', |
919 | + 'log_dir': '/var/log/heat', |
920 | + 'instance_driver': 'heat.engine.nova', |
921 | + 'plugin_dirs': '/usr/lib64/heat,/usr/lib/heat', |
922 | + 'environment_dir': '/etc/heat/environment.d', |
923 | + 'deferred_auth_method': 'password', |
924 | + 'host': 'heat', |
925 | + 'rabbit_userid': 'heat', |
926 | + 'rabbit_virtual_host': 'openstack', |
927 | + 'rabbit_password': rmq_rel['password'], |
928 | + 'rabbit_host': rmq_rel['hostname'] |
929 | + }, |
930 | + 'keystone_authtoken': { |
931 | + 'auth_uri': auth_uri, |
932 | + 'auth_host': ks_rel['service_host'], |
933 | + 'auth_port': ks_rel['auth_port'], |
934 | + 'auth_protocol': ks_rel['auth_protocol'], |
935 | + 'admin_tenant_name': 'services', |
936 | + 'admin_user': 'heat-cfn_heat', |
937 | + 'admin_password': ks_rel['service_password'], |
938 | + 'signing_dir': '/var/cache/heat' |
939 | + }, |
940 | + 'database': { |
941 | + 'connection': db_uri |
942 | + }, |
943 | + 'heat_api': { |
944 | + 'bind_port': '7994' |
945 | + }, |
946 | + 'heat_api_cfn': { |
947 | + 'bind_port': '7990' |
948 | + }, |
949 | + 'paste_deploy': { |
950 | + 'api_paste_config': '/etc/heat/api-paste.ini' |
951 | + }, |
952 | + } |
953 | + |
954 | + for section, pairs in expected.iteritems(): |
955 | + ret = u.validate_config_data(unit, conf, section, pairs) |
956 | + if ret: |
957 | + message = "heat config error: {}".format(ret) |
958 | + amulet.raise_status(amulet.FAIL, msg=message) |
959 | + |
960 | + def test_400_heat_resource_types_list(self): |
961 | + """Check default heat resource list behavior, also confirm |
962 | + heat functionality.""" |
963 | + u.log.debug('Checking default heat resource list...') |
964 | + try: |
965 | + types = list(self.heat.resource_types.list()) |
966 | + if isinstance(types, list): |
967 | + u.log.debug('Resource type list check is ok.') |
968 | + else: |
969 | + msg = 'Resource type list is not a list!' |
970 | + u.log.error('{}'.format(msg)) |
971 | + raise Exception(msg) |
972 | + if len(types) > 0: |
973 | + u.log.debug('Resource type list is populated ' |
974 | + '({}, ok).'.format(len(types))) |
975 | + else: |
976 | + msg = 'Resource type list length is zero!' |
977 | + u.log.error(msg) |
978 | + raise Exception(msg) |
979 | + except: |
980 | + msg = 'Resource type list failed.' |
981 | + u.log.error(msg) |
982 | + raise |
983 | + |
984 | + def test_402_heat_stack_list(self): |
985 | + """Check default heat stack list behavior, also confirm |
986 | + heat functionality.""" |
987 | + u.log.debug('Checking default heat stack list...') |
988 | + try: |
989 | + stacks = list(self.heat.stacks.list()) |
990 | + if isinstance(stacks, list): |
991 | + u.log.debug("Stack list check is ok.") |
992 | + else: |
993 | + msg = 'Stack list returned something other than a list.' |
994 | + u.log.error(msg) |
995 | + raise Exception(msg) |
996 | + except: |
997 | + msg = 'Heat stack list failed.' |
998 | + u.log.error(msg) |
999 | + raise |
1000 | + |
1001 | + def test_410_heat_stack_create_delete(self): |
1002 | + """Create a heat stack from template, confirm that a corresponding |
1003 | + nova compute resource is spawned, delete stack.""" |
1004 | + self._image_create() |
1005 | + self._keypair_create() |
1006 | + self._stack_create() |
1007 | + self._stack_resource_compute() |
1008 | + self._stack_delete() |
1009 | + self._image_delete() |
1010 | + self._keypair_delete() |
1011 | + |
1012 | + def test_900_heat_restart_on_config_change(self): |
1013 | + """Verify that the specified services are restarted when the config |
1014 | + is changed.""" |
1015 | + sentry = self.heat_sentry |
1016 | + juju_service = 'heat' |
1017 | + |
1018 | + # Expected default and alternate values |
1019 | + set_default = {'use-syslog': 'False'} |
1020 | + set_alternate = {'use-syslog': 'True'} |
1021 | + |
1022 | + # Config file affected by juju set config change |
1023 | + conf_file = '/etc/heat/heat.conf' |
1024 | + |
1025 | + # Services which are expected to restart upon config change |
1026 | + services = ['heat-api', |
1027 | + 'heat-api-cfn', |
1028 | + 'heat-engine'] |
1029 | + |
1030 | + # Make config change, check for service restarts |
1031 | + u.log.debug('Making config change on {}...'.format(juju_service)) |
1032 | + self.d.configure(juju_service, set_alternate) |
1033 | + |
1034 | + sleep_time = 30 |
1035 | + for s in services: |
1036 | + u.log.debug("Checking that service restarted: {}".format(s)) |
1037 | + if not u.service_restarted(sentry, s, |
1038 | + conf_file, sleep_time=sleep_time): |
1039 | + self.d.configure(juju_service, set_default) |
1040 | + msg = "service {} didn't restart after config change".format(s) |
1041 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1042 | + sleep_time = 0 |
1043 | + |
1044 | + self.d.configure(juju_service, set_default) |
1045 | |
1046 | === added directory 'tests/charmhelpers' |
1047 | === added file 'tests/charmhelpers/__init__.py' |
1048 | --- tests/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000 |
1049 | +++ tests/charmhelpers/__init__.py 2015-06-11 15:38:49 +0000 |
1050 | @@ -0,0 +1,38 @@ |
1051 | +# Copyright 2014-2015 Canonical Limited. |
1052 | +# |
1053 | +# This file is part of charm-helpers. |
1054 | +# |
1055 | +# charm-helpers is free software: you can redistribute it and/or modify |
1056 | +# it under the terms of the GNU Lesser General Public License version 3 as |
1057 | +# published by the Free Software Foundation. |
1058 | +# |
1059 | +# charm-helpers is distributed in the hope that it will be useful, |
1060 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1061 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1062 | +# GNU Lesser General Public License for more details. |
1063 | +# |
1064 | +# You should have received a copy of the GNU Lesser General Public License |
1065 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1066 | + |
1067 | +# Bootstrap charm-helpers, installing its dependencies if necessary using |
1068 | +# only standard libraries. |
1069 | +import subprocess |
1070 | +import sys |
1071 | + |
1072 | +try: |
1073 | + import six # flake8: noqa |
1074 | +except ImportError: |
1075 | + if sys.version_info.major == 2: |
1076 | + subprocess.check_call(['apt-get', 'install', '-y', 'python-six']) |
1077 | + else: |
1078 | + subprocess.check_call(['apt-get', 'install', '-y', 'python3-six']) |
1079 | + import six # flake8: noqa |
1080 | + |
1081 | +try: |
1082 | + import yaml # flake8: noqa |
1083 | +except ImportError: |
1084 | + if sys.version_info.major == 2: |
1085 | + subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml']) |
1086 | + else: |
1087 | + subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml']) |
1088 | + import yaml # flake8: noqa |
1089 | |
1090 | === added directory 'tests/charmhelpers/contrib' |
1091 | === added file 'tests/charmhelpers/contrib/__init__.py' |
1092 | --- tests/charmhelpers/contrib/__init__.py 1970-01-01 00:00:00 +0000 |
1093 | +++ tests/charmhelpers/contrib/__init__.py 2015-06-11 15:38:49 +0000 |
1094 | @@ -0,0 +1,15 @@ |
1095 | +# Copyright 2014-2015 Canonical Limited. |
1096 | +# |
1097 | +# This file is part of charm-helpers. |
1098 | +# |
1099 | +# charm-helpers is free software: you can redistribute it and/or modify |
1100 | +# it under the terms of the GNU Lesser General Public License version 3 as |
1101 | +# published by the Free Software Foundation. |
1102 | +# |
1103 | +# charm-helpers is distributed in the hope that it will be useful, |
1104 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1105 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1106 | +# GNU Lesser General Public License for more details. |
1107 | +# |
1108 | +# You should have received a copy of the GNU Lesser General Public License |
1109 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1110 | |
1111 | === added directory 'tests/charmhelpers/contrib/amulet' |
1112 | === added file 'tests/charmhelpers/contrib/amulet/__init__.py' |
1113 | --- tests/charmhelpers/contrib/amulet/__init__.py 1970-01-01 00:00:00 +0000 |
1114 | +++ tests/charmhelpers/contrib/amulet/__init__.py 2015-06-11 15:38:49 +0000 |
1115 | @@ -0,0 +1,15 @@ |
1116 | +# Copyright 2014-2015 Canonical Limited. |
1117 | +# |
1118 | +# This file is part of charm-helpers. |
1119 | +# |
1120 | +# charm-helpers is free software: you can redistribute it and/or modify |
1121 | +# it under the terms of the GNU Lesser General Public License version 3 as |
1122 | +# published by the Free Software Foundation. |
1123 | +# |
1124 | +# charm-helpers is distributed in the hope that it will be useful, |
1125 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1126 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1127 | +# GNU Lesser General Public License for more details. |
1128 | +# |
1129 | +# You should have received a copy of the GNU Lesser General Public License |
1130 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1131 | |
1132 | === added file 'tests/charmhelpers/contrib/amulet/deployment.py' |
1133 | --- tests/charmhelpers/contrib/amulet/deployment.py 1970-01-01 00:00:00 +0000 |
1134 | +++ tests/charmhelpers/contrib/amulet/deployment.py 2015-06-11 15:38:49 +0000 |
1135 | @@ -0,0 +1,93 @@ |
1136 | +# Copyright 2014-2015 Canonical Limited. |
1137 | +# |
1138 | +# This file is part of charm-helpers. |
1139 | +# |
1140 | +# charm-helpers is free software: you can redistribute it and/or modify |
1141 | +# it under the terms of the GNU Lesser General Public License version 3 as |
1142 | +# published by the Free Software Foundation. |
1143 | +# |
1144 | +# charm-helpers is distributed in the hope that it will be useful, |
1145 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1146 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1147 | +# GNU Lesser General Public License for more details. |
1148 | +# |
1149 | +# You should have received a copy of the GNU Lesser General Public License |
1150 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1151 | + |
1152 | +import amulet |
1153 | +import os |
1154 | +import six |
1155 | + |
1156 | + |
1157 | +class AmuletDeployment(object): |
1158 | + """Amulet deployment. |
1159 | + |
1160 | + This class provides generic Amulet deployment and test runner |
1161 | + methods. |
1162 | + """ |
1163 | + |
1164 | + def __init__(self, series=None): |
1165 | + """Initialize the deployment environment.""" |
1166 | + self.series = None |
1167 | + |
1168 | + if series: |
1169 | + self.series = series |
1170 | + self.d = amulet.Deployment(series=self.series) |
1171 | + else: |
1172 | + self.d = amulet.Deployment() |
1173 | + |
1174 | + def _add_services(self, this_service, other_services): |
1175 | + """Add services. |
1176 | + |
1177 | + Add services to the deployment where this_service is the local charm |
1178 | + that we're testing and other_services are the other services that |
1179 | + are being used in the local amulet tests. |
1180 | + """ |
1181 | + if this_service['name'] != os.path.basename(os.getcwd()): |
1182 | + s = this_service['name'] |
1183 | + msg = "The charm's root directory name needs to be {}".format(s) |
1184 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1185 | + |
1186 | + if 'units' not in this_service: |
1187 | + this_service['units'] = 1 |
1188 | + |
1189 | + self.d.add(this_service['name'], units=this_service['units']) |
1190 | + |
1191 | + for svc in other_services: |
1192 | + if 'location' in svc: |
1193 | + branch_location = svc['location'] |
1194 | + elif self.series: |
1195 | + branch_location = 'cs:{}/{}'.format(self.series, svc['name']) |
1196 | + else: |
1197 | + branch_location = None |
1198 | + |
1199 | + if 'units' not in svc: |
1200 | + svc['units'] = 1 |
1201 | + |
1202 | + self.d.add(svc['name'], charm=branch_location, units=svc['units']) |
1203 | + |
1204 | + def _add_relations(self, relations): |
1205 | + """Add all of the relations for the services.""" |
1206 | + for k, v in six.iteritems(relations): |
1207 | + self.d.relate(k, v) |
1208 | + |
1209 | + def _configure_services(self, configs): |
1210 | + """Configure all of the services.""" |
1211 | + for service, config in six.iteritems(configs): |
1212 | + self.d.configure(service, config) |
1213 | + |
1214 | + def _deploy(self): |
1215 | + """Deploy environment and wait for all hooks to finish executing.""" |
1216 | + try: |
1217 | + self.d.setup(timeout=900) |
1218 | + self.d.sentry.wait(timeout=900) |
1219 | + except amulet.helpers.TimeoutError: |
1220 | + amulet.raise_status(amulet.FAIL, msg="Deployment timed out") |
1221 | + except Exception: |
1222 | + raise |
1223 | + |
1224 | + def run_tests(self): |
1225 | + """Run all of the methods that are prefixed with 'test_'.""" |
1226 | + for test in dir(self): |
1227 | + if test.startswith('test_'): |
1228 | + getattr(self, test)() |
1229 | |
1230 | === added file 'tests/charmhelpers/contrib/amulet/utils.py' |
1231 | --- tests/charmhelpers/contrib/amulet/utils.py 1970-01-01 00:00:00 +0000 |
1232 | +++ tests/charmhelpers/contrib/amulet/utils.py 2015-06-11 15:38:49 +0000 |
1233 | @@ -0,0 +1,408 @@ |
1234 | +# Copyright 2014-2015 Canonical Limited. |
1235 | +# |
1236 | +# This file is part of charm-helpers. |
1237 | +# |
1238 | +# charm-helpers is free software: you can redistribute it and/or modify |
1239 | +# it under the terms of the GNU Lesser General Public License version 3 as |
1240 | +# published by the Free Software Foundation. |
1241 | +# |
1242 | +# charm-helpers is distributed in the hope that it will be useful, |
1243 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1244 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1245 | +# GNU Lesser General Public License for more details. |
1246 | +# |
1247 | +# You should have received a copy of the GNU Lesser General Public License |
1248 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1249 | + |
1250 | +import ConfigParser |
1251 | +import distro_info |
1252 | +import io |
1253 | +import logging |
1254 | +import os |
1255 | +import re |
1256 | +import six |
1257 | +import sys |
1258 | +import time |
1259 | +import urlparse |
1260 | + |
1261 | + |
1262 | +class AmuletUtils(object): |
1263 | + """Amulet utilities. |
1264 | + |
1265 | + This class provides common utility functions that are used by Amulet |
1266 | + tests. |
1267 | + """ |
1268 | + |
1269 | + def __init__(self, log_level=logging.ERROR): |
1270 | + self.log = self.get_logger(level=log_level) |
1271 | + self.ubuntu_releases = self.get_ubuntu_releases() |
1272 | + |
1273 | + def get_logger(self, name="amulet-logger", level=logging.DEBUG): |
1274 | + """Get a logger object that will log to stdout.""" |
1275 | + log = logging |
1276 | + logger = log.getLogger(name) |
1277 | + fmt = log.Formatter("%(asctime)s %(funcName)s " |
1278 | + "%(levelname)s: %(message)s") |
1279 | + |
1280 | + handler = log.StreamHandler(stream=sys.stdout) |
1281 | + handler.setLevel(level) |
1282 | + handler.setFormatter(fmt) |
1283 | + |
1284 | + logger.addHandler(handler) |
1285 | + logger.setLevel(level) |
1286 | + |
1287 | + return logger |
1288 | + |
1289 | + def valid_ip(self, ip): |
1290 | + if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip): |
1291 | + return True |
1292 | + else: |
1293 | + return False |
1294 | + |
1295 | + def valid_url(self, url): |
1296 | + p = re.compile( |
1297 | + r'^(?:http|ftp)s?://' |
1298 | + r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # noqa |
1299 | + r'localhost|' |
1300 | + r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' |
1301 | + r'(?::\d+)?' |
1302 | + r'(?:/?|[/?]\S+)$', |
1303 | + re.IGNORECASE) |
1304 | + if p.match(url): |
1305 | + return True |
1306 | + else: |
1307 | + return False |
1308 | + |
1309 | + def get_ubuntu_release_from_sentry(self, sentry_unit): |
1310 | + """Get Ubuntu release codename from sentry unit. |
1311 | + |
1312 | + :param sentry_unit: amulet sentry/service unit pointer |
1313 | + :returns: two strings - release codename, failure message |
1314 | + """ |
1315 | + msg = None |
1316 | + cmd = 'lsb_release -cs' |
1317 | + release, code = sentry_unit.run(cmd) |
1318 | + if code == 0: |
1319 | + self.log.debug('{} lsb_release: {}'.format( |
1320 | + sentry_unit.info['unit_name'], release)) |
1321 | + else: |
1322 | + msg = ('{} `{}` returned {} ' |
1323 | + '{}'.format(sentry_unit.info['unit_name'], |
1324 | + cmd, release, code)) |
1325 | + if release not in self.ubuntu_releases: |
1326 | + msg = ("Release ({}) not found in Ubuntu releases " |
1327 | + "({})".format(release, self.ubuntu_releases)) |
1328 | + return release, msg |
1329 | + |
1330 | + def validate_services(self, commands): |
1331 | + """Validate that lists of commands succeed on service units. Can be |
1332 | + used to verify system services are running on the corresponding |
1333 | + service units. |
1334 | + |
1335 | + :param commands: dict with sentry keys and arbitrary command list values |
1336 | + :returns: None if successful, Failure string message otherwise |
1337 | + """ |
1338 | + self.log.debug('Checking status of system services...') |
1339 | + |
1340 | + # /!\ DEPRECATION WARNING (beisner): |
1341 | + # New and existing tests should be rewritten to use |
1342 | + # validate_services_by_name() as it is aware of init systems. |
1343 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
1344 | + 'validate_services_by_name instead of validate_services ' |
1345 | + 'due to init system differences.') |
1346 | + |
1347 | + for k, v in six.iteritems(commands): |
1348 | + for cmd in v: |
1349 | + output, code = k.run(cmd) |
1350 | + self.log.debug('{} `{}` returned ' |
1351 | + '{}'.format(k.info['unit_name'], |
1352 | + cmd, code)) |
1353 | + if code != 0: |
1354 | + return "command `{}` returned {}".format(cmd, str(code)) |
1355 | + return None |
1356 | + |
1357 | + def validate_services_by_name(self, sentry_services): |
1358 | + """Validate system service status by service name, automatically |
1359 | + detecting init system based on Ubuntu release codename. |
1360 | + |
1361 | + :param sentry_services: dict with sentry keys and svc list values |
1362 | + :returns: None if successful, Failure string message otherwise |
1363 | + """ |
1364 | + self.log.debug('Checking status of system services...') |
1365 | + |
1366 | + # Point at which systemd became a thing |
1367 | + systemd_switch = self.ubuntu_releases.index('vivid') |
1368 | + |
1369 | + for sentry_unit, services_list in six.iteritems(sentry_services): |
1370 | + # Get lsb_release codename from unit |
1371 | + release, ret = self.get_ubuntu_release_from_sentry(sentry_unit) |
1372 | + if ret: |
1373 | + return ret |
1374 | + |
1375 | + for service_name in services_list: |
1376 | + if (self.ubuntu_releases.index(release) >= systemd_switch or |
1377 | + service_name == "rabbitmq-server"): |
1378 | + # init is systemd |
1379 | + cmd = 'sudo service {} status'.format(service_name) |
1380 | + elif self.ubuntu_releases.index(release) < systemd_switch: |
1381 | + # init is upstart |
1382 | + cmd = 'sudo status {}'.format(service_name) |
1383 | + |
1384 | + output, code = sentry_unit.run(cmd) |
1385 | + self.log.debug('{} `{}` returned ' |
1386 | + '{}'.format(sentry_unit.info['unit_name'], |
1387 | + cmd, code)) |
1388 | + if code != 0: |
1389 | + return "command `{}` returned {}".format(cmd, str(code)) |
1390 | + return None |
1391 | + |
1392 | + def _get_config(self, unit, filename): |
1393 | + """Get a ConfigParser object for parsing a unit's config file.""" |
1394 | + file_contents = unit.file_contents(filename) |
1395 | + |
1396 | + # NOTE(beisner): by default, ConfigParser does not handle options |
1397 | + # with no value, such as the flags used in the mysql my.cnf file. |
1398 | + # https://bugs.python.org/issue7005 |
1399 | + config = ConfigParser.ConfigParser(allow_no_value=True) |
1400 | + config.readfp(io.StringIO(file_contents)) |
1401 | + return config |
1402 | + |
1403 | + def validate_config_data(self, sentry_unit, config_file, section, |
1404 | + expected): |
1405 | + """Validate config file data. |
1406 | + |
1407 | + Verify that the specified section of the config file contains |
1408 | + the expected option key:value pairs. |
1409 | + """ |
1410 | + self.log.debug('Validating config file data ({} in {} on {})' |
1411 | + '...'.format(section, config_file, |
1412 | + sentry_unit.info['unit_name'])) |
1413 | + config = self._get_config(sentry_unit, config_file) |
1414 | + |
1415 | + if section != 'DEFAULT' and not config.has_section(section): |
1416 | + return "section [{}] does not exist".format(section) |
1417 | + |
1418 | + for k in expected.keys(): |
1419 | + if not config.has_option(section, k): |
1420 | + return "section [{}] is missing option {}".format(section, k) |
1421 | + if config.get(section, k) != expected[k]: |
1422 | + return "section [{}] {}:{} != expected {}:{}".format( |
1423 | + section, k, config.get(section, k), k, expected[k]) |
1424 | + return None |
1425 | + |
1426 | + def _validate_dict_data(self, expected, actual): |
1427 | + """Validate dictionary data. |
1428 | + |
1429 | + Compare expected dictionary data vs actual dictionary data. |
1430 | + The values in the 'expected' dictionary can be strings, bools, ints, |
1431 | + longs, or can be a function that evaluates a variable and returns a |
1432 | + bool. |
1433 | + """ |
1434 | + self.log.debug('actual: {}'.format(repr(actual))) |
1435 | + self.log.debug('expected: {}'.format(repr(expected))) |
1436 | + |
1437 | + for k, v in six.iteritems(expected): |
1438 | + if k in actual: |
1439 | + if (isinstance(v, six.string_types) or |
1440 | + isinstance(v, bool) or |
1441 | + isinstance(v, six.integer_types)): |
1442 | + if v != actual[k]: |
1443 | + return "{}:{}".format(k, actual[k]) |
1444 | + elif not v(actual[k]): |
1445 | + return "{}:{}".format(k, actual[k]) |
1446 | + else: |
1447 | + return "key '{}' does not exist".format(k) |
1448 | + return None |
1449 | + |
1450 | + def validate_relation_data(self, sentry_unit, relation, expected): |
1451 | + """Validate actual relation data based on expected relation data.""" |
1452 | + actual = sentry_unit.relation(relation[0], relation[1]) |
1453 | + return self._validate_dict_data(expected, actual) |
1454 | + |
1455 | + def _validate_list_data(self, expected, actual): |
1456 | + """Compare expected list vs actual list data.""" |
1457 | + for e in expected: |
1458 | + if e not in actual: |
1459 | + return "expected item {} not found in actual list".format(e) |
1460 | + return None |
1461 | + |
1462 | + def not_null(self, string): |
1463 | + if string is not None: |
1464 | + return True |
1465 | + else: |
1466 | + return False |
1467 | + |
1468 | + def _get_file_mtime(self, sentry_unit, filename): |
1469 | + """Get last modification time of file.""" |
1470 | + return sentry_unit.file_stat(filename)['mtime'] |
1471 | + |
1472 | + def _get_dir_mtime(self, sentry_unit, directory): |
1473 | + """Get last modification time of directory.""" |
1474 | + return sentry_unit.directory_stat(directory)['mtime'] |
1475 | + |
1476 | + def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False): |
1477 | + """Get process' start time. |
1478 | + |
1479 | + Determine start time of the process based on the last modification |
1480 | + time of the /proc/pid directory. If pgrep_full is True, the process |
1481 | + name is matched against the full command line. |
1482 | + """ |
1483 | + if pgrep_full: |
1484 | + cmd = 'pgrep -o -f {}'.format(service) |
1485 | + else: |
1486 | + cmd = 'pgrep -o {}'.format(service) |
1487 | + cmd = cmd + ' | grep -v pgrep || exit 0' |
1488 | + cmd_out = sentry_unit.run(cmd) |
1489 | + self.log.debug('CMDout: ' + str(cmd_out)) |
1490 | + if cmd_out[0]: |
1491 | + self.log.debug('Pid for %s %s' % (service, str(cmd_out[0]))) |
1492 | + proc_dir = '/proc/{}'.format(cmd_out[0].strip()) |
1493 | + return self._get_dir_mtime(sentry_unit, proc_dir) |
1494 | + |
1495 | + def service_restarted(self, sentry_unit, service, filename, |
1496 | + pgrep_full=False, sleep_time=20): |
1497 | + """Check if service was restarted. |
1498 | + |
1499 | + Compare a service's start time vs a file's last modification time |
1500 | + (such as a config file for that service) to determine if the service |
1501 | + has been restarted. |
1502 | + """ |
1503 | + time.sleep(sleep_time) |
1504 | + if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >= |
1505 | + self._get_file_mtime(sentry_unit, filename)): |
1506 | + return True |
1507 | + else: |
1508 | + return False |
1509 | + |
1510 | + def service_restarted_since(self, sentry_unit, mtime, service, |
1511 | + pgrep_full=False, sleep_time=20, |
1512 | + retry_count=2): |
1513 | +        """Check if service has been started after a given time. |
1514 | + |
1515 | + Args: |
1516 | + sentry_unit (sentry): The sentry unit to check for the service on |
1517 | + mtime (float): The epoch time to check against |
1518 | + service (string): service name to look for in process table |
1519 | + pgrep_full (boolean): Use full command line search mode with pgrep |
1520 | + sleep_time (int): Seconds to sleep before looking for process |
1521 | + retry_count (int): If service is not found, how many times to retry |
1522 | + |
1523 | + Returns: |
1524 | +          bool: True if service found and its start time is newer than mtime, |
1525 | + False if service is older than mtime or if service was |
1526 | + not found. |
1527 | + """ |
1528 | + self.log.debug('Checking %s restarted since %s' % (service, mtime)) |
1529 | + time.sleep(sleep_time) |
1530 | + proc_start_time = self._get_proc_start_time(sentry_unit, service, |
1531 | + pgrep_full) |
1532 | + while retry_count > 0 and not proc_start_time: |
1533 | + self.log.debug('No pid file found for service %s, will retry %i ' |
1534 | + 'more times' % (service, retry_count)) |
1535 | + time.sleep(30) |
1536 | + proc_start_time = self._get_proc_start_time(sentry_unit, service, |
1537 | + pgrep_full) |
1538 | + retry_count = retry_count - 1 |
1539 | + |
1540 | + if not proc_start_time: |
1541 | + self.log.warn('No proc start time found, assuming service did ' |
1542 | + 'not start') |
1543 | + return False |
1544 | + if proc_start_time >= mtime: |
1545 | +            self.log.debug('proc start time is newer than provided mtime ' |
1546 | + '(%s >= %s)' % (proc_start_time, mtime)) |
1547 | + return True |
1548 | + else: |
1549 | + self.log.warn('proc start time (%s) is older than provided mtime ' |
1550 | + '(%s), service did not restart' % (proc_start_time, |
1551 | + mtime)) |
1552 | + return False |
1553 | + |
1554 | + def config_updated_since(self, sentry_unit, filename, mtime, |
1555 | + sleep_time=20): |
1556 | + """Check if file was modified after a given time. |
1557 | + |
1558 | + Args: |
1559 | + sentry_unit (sentry): The sentry unit to check the file mtime on |
1560 | + filename (string): The file to check mtime of |
1561 | + mtime (float): The epoch time to check against |
1562 | + sleep_time (int): Seconds to sleep before looking for process |
1563 | + |
1564 | + Returns: |
1565 | + bool: True if file was modified more recently than mtime, False if |
1566 | +                file was modified before mtime. |
1567 | + """ |
1568 | + self.log.debug('Checking %s updated since %s' % (filename, mtime)) |
1569 | + time.sleep(sleep_time) |
1570 | + file_mtime = self._get_file_mtime(sentry_unit, filename) |
1571 | + if file_mtime >= mtime: |
1572 | + self.log.debug('File mtime is newer than provided mtime ' |
1573 | + '(%s >= %s)' % (file_mtime, mtime)) |
1574 | + return True |
1575 | + else: |
1576 | + self.log.warn('File mtime %s is older than provided mtime %s' |
1577 | + % (file_mtime, mtime)) |
1578 | + return False |
1579 | + |
1580 | + def validate_service_config_changed(self, sentry_unit, mtime, service, |
1581 | + filename, pgrep_full=False, |
1582 | + sleep_time=20, retry_count=2): |
1583 | + """Check service and file were updated after mtime |
1584 | + |
1585 | + Args: |
1586 | + sentry_unit (sentry): The sentry unit to check for the service on |
1587 | + mtime (float): The epoch time to check against |
1588 | + service (string): service name to look for in process table |
1589 | + filename (string): The file to check mtime of |
1590 | + pgrep_full (boolean): Use full command line search mode with pgrep |
1591 | + sleep_time (int): Seconds to sleep before looking for process |
1592 | + retry_count (int): If service is not found, how many times to retry |
1593 | + |
1594 | + Typical Usage: |
1595 | + u = OpenStackAmuletUtils(ERROR) |
1596 | + ... |
1597 | + mtime = u.get_sentry_time(self.cinder_sentry) |
1598 | + self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'}) |
1599 | + if not u.validate_service_config_changed(self.cinder_sentry, |
1600 | + mtime, |
1601 | + 'cinder-api', |
1602 | + '/etc/cinder/cinder.conf') |
1603 | + amulet.raise_status(amulet.FAIL, msg='update failed') |
1604 | + Returns: |
1605 | +          bool: True if both service and file were updated/restarted after |
1606 | + mtime, False if service is older than mtime or if service was |
1607 | + not found or if filename was modified before mtime. |
1608 | + """ |
1609 | + self.log.debug('Checking %s restarted since %s' % (service, mtime)) |
1610 | + time.sleep(sleep_time) |
1611 | + service_restart = self.service_restarted_since(sentry_unit, mtime, |
1612 | + service, |
1613 | + pgrep_full=pgrep_full, |
1614 | + sleep_time=0, |
1615 | + retry_count=retry_count) |
1616 | + config_update = self.config_updated_since(sentry_unit, filename, mtime, |
1617 | + sleep_time=0) |
1618 | + return service_restart and config_update |
1619 | + |
1620 | + def get_sentry_time(self, sentry_unit): |
1621 | + """Return current epoch time on a sentry""" |
1622 | + cmd = "date +'%s'" |
1623 | + return float(sentry_unit.run(cmd)[0]) |
1624 | + |
1625 | + def relation_error(self, name, data): |
1626 | + return 'unexpected relation data in {} - {}'.format(name, data) |
1627 | + |
1628 | + def endpoint_error(self, name, data): |
1629 | + return 'unexpected endpoint data in {} - {}'.format(name, data) |
1630 | + |
1631 | + def get_ubuntu_releases(self): |
1632 | + """Return a list of all Ubuntu releases in order of release.""" |
1633 | + _d = distro_info.UbuntuDistroInfo() |
1634 | + _release_list = _d.all |
1635 | + self.log.debug('Ubuntu release list: {}'.format(_release_list)) |
1636 | + return _release_list |
1637 | + |
1638 | + def file_to_url(self, file_rel_path): |
1639 | + """Convert a relative file path to a file URL.""" |
1640 | + _abs_path = os.path.abspath(file_rel_path) |
1641 | + return urlparse.urlparse(_abs_path, scheme='file').geturl() |
1642 | |
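The `_validate_dict_data` helper synced above accepts either literal values or callables as expected values, returning `None` on success or a short mismatch string. A minimal standalone sketch of that comparison logic (hypothetical function name, independent of the charm-helpers sync):

```python
# Sketch of the expected-vs-actual dict comparison used by
# AmuletUtils._validate_dict_data: literal values (strings, bools, ints)
# are compared directly, while callables act as predicates over the
# actual value. Returns None when everything matches.
def validate_dict_data(expected, actual):
    for k, v in expected.items():
        if k not in actual:
            return "key '{}' does not exist".format(k)
        if isinstance(v, (str, bool, int)):
            if v != actual[k]:
                return "{}:{}".format(k, actual[k])
        elif not v(actual[k]):
            return "{}:{}".format(k, actual[k])
    return None
```

This is why callers such as `validate_relation_data` can mix exact matches (`{'private-address': '10.0.0.1'}`) with predicates (`{'password': u.not_null}`) in one expected dict.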
1643 | === added directory 'tests/charmhelpers/contrib/openstack' |
1644 | === added file 'tests/charmhelpers/contrib/openstack/__init__.py' |
1645 | --- tests/charmhelpers/contrib/openstack/__init__.py 1970-01-01 00:00:00 +0000 |
1646 | +++ tests/charmhelpers/contrib/openstack/__init__.py 2015-06-11 15:38:49 +0000 |
1647 | @@ -0,0 +1,15 @@ |
1648 | +# Copyright 2014-2015 Canonical Limited. |
1649 | +# |
1650 | +# This file is part of charm-helpers. |
1651 | +# |
1652 | +# charm-helpers is free software: you can redistribute it and/or modify |
1653 | +# it under the terms of the GNU Lesser General Public License version 3 as |
1654 | +# published by the Free Software Foundation. |
1655 | +# |
1656 | +# charm-helpers is distributed in the hope that it will be useful, |
1657 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1658 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1659 | +# GNU Lesser General Public License for more details. |
1660 | +# |
1661 | +# You should have received a copy of the GNU Lesser General Public License |
1662 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1663 | |
1664 | === added directory 'tests/charmhelpers/contrib/openstack/amulet' |
1665 | === added file 'tests/charmhelpers/contrib/openstack/amulet/__init__.py' |
1666 | --- tests/charmhelpers/contrib/openstack/amulet/__init__.py 1970-01-01 00:00:00 +0000 |
1667 | +++ tests/charmhelpers/contrib/openstack/amulet/__init__.py 2015-06-11 15:38:49 +0000 |
1668 | @@ -0,0 +1,15 @@ |
1669 | +# Copyright 2014-2015 Canonical Limited. |
1670 | +# |
1671 | +# This file is part of charm-helpers. |
1672 | +# |
1673 | +# charm-helpers is free software: you can redistribute it and/or modify |
1674 | +# it under the terms of the GNU Lesser General Public License version 3 as |
1675 | +# published by the Free Software Foundation. |
1676 | +# |
1677 | +# charm-helpers is distributed in the hope that it will be useful, |
1678 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1679 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1680 | +# GNU Lesser General Public License for more details. |
1681 | +# |
1682 | +# You should have received a copy of the GNU Lesser General Public License |
1683 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1684 | |
1685 | === added file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py' |
1686 | --- tests/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000 |
1687 | +++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-11 15:38:49 +0000 |
1688 | @@ -0,0 +1,151 @@ |
1689 | +# Copyright 2014-2015 Canonical Limited. |
1690 | +# |
1691 | +# This file is part of charm-helpers. |
1692 | +# |
1693 | +# charm-helpers is free software: you can redistribute it and/or modify |
1694 | +# it under the terms of the GNU Lesser General Public License version 3 as |
1695 | +# published by the Free Software Foundation. |
1696 | +# |
1697 | +# charm-helpers is distributed in the hope that it will be useful, |
1698 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1699 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1700 | +# GNU Lesser General Public License for more details. |
1701 | +# |
1702 | +# You should have received a copy of the GNU Lesser General Public License |
1703 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1704 | + |
1705 | +import six |
1706 | +from collections import OrderedDict |
1707 | +from charmhelpers.contrib.amulet.deployment import ( |
1708 | + AmuletDeployment |
1709 | +) |
1710 | + |
1711 | + |
1712 | +class OpenStackAmuletDeployment(AmuletDeployment): |
1713 | + """OpenStack amulet deployment. |
1714 | + |
1715 | + This class inherits from AmuletDeployment and has additional support |
1716 | + that is specifically for use by OpenStack charms. |
1717 | + """ |
1718 | + |
1719 | + def __init__(self, series=None, openstack=None, source=None, stable=True): |
1720 | + """Initialize the deployment environment.""" |
1721 | + super(OpenStackAmuletDeployment, self).__init__(series) |
1722 | + self.openstack = openstack |
1723 | + self.source = source |
1724 | + self.stable = stable |
1725 | + # Note(coreycb): this needs to be changed when new next branches come |
1726 | + # out. |
1727 | + self.current_next = "trusty" |
1728 | + |
1729 | + def _determine_branch_locations(self, other_services): |
1730 | + """Determine the branch locations for the other services. |
1731 | + |
1732 | + Determine if the local branch being tested is derived from its |
1733 | +        stable or next (dev) branch, and based on this, use the corresponding |
1734 | + stable or next branches for the other_services.""" |
1735 | + base_charms = ['mysql', 'mongodb'] |
1736 | + |
1737 | + if self.series in ['precise', 'trusty']: |
1738 | + base_series = self.series |
1739 | + else: |
1740 | + base_series = self.current_next |
1741 | + |
1742 | + if self.stable: |
1743 | + for svc in other_services: |
1744 | + temp = 'lp:charms/{}/{}' |
1745 | + svc['location'] = temp.format(base_series, |
1746 | + svc['name']) |
1747 | + else: |
1748 | + for svc in other_services: |
1749 | + if svc['name'] in base_charms: |
1750 | + temp = 'lp:charms/{}/{}' |
1751 | + svc['location'] = temp.format(base_series, |
1752 | + svc['name']) |
1753 | + else: |
1754 | + temp = 'lp:~openstack-charmers/charms/{}/{}/next' |
1755 | + svc['location'] = temp.format(self.current_next, |
1756 | + svc['name']) |
1757 | + return other_services |
1758 | + |
1759 | + def _add_services(self, this_service, other_services): |
1760 | + """Add services to the deployment and set openstack-origin/source.""" |
1761 | + other_services = self._determine_branch_locations(other_services) |
1762 | + |
1763 | + super(OpenStackAmuletDeployment, self)._add_services(this_service, |
1764 | + other_services) |
1765 | + |
1766 | + services = other_services |
1767 | + services.append(this_service) |
1768 | + use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', |
1769 | + 'ceph-osd', 'ceph-radosgw'] |
1770 | + # Openstack subordinate charms do not expose an origin option as that |
1771 | +        # is controlled by the principal |
1772 | + ignore = ['neutron-openvswitch'] |
1773 | + |
1774 | + if self.openstack: |
1775 | + for svc in services: |
1776 | + if svc['name'] not in use_source + ignore: |
1777 | + config = {'openstack-origin': self.openstack} |
1778 | + self.d.configure(svc['name'], config) |
1779 | + |
1780 | + if self.source: |
1781 | + for svc in services: |
1782 | + if svc['name'] in use_source and svc['name'] not in ignore: |
1783 | + config = {'source': self.source} |
1784 | + self.d.configure(svc['name'], config) |
1785 | + |
1786 | + def _configure_services(self, configs): |
1787 | + """Configure all of the services.""" |
1788 | + for service, config in six.iteritems(configs): |
1789 | + self.d.configure(service, config) |
1790 | + |
1791 | + def _get_openstack_release(self): |
1792 | + """Get openstack release. |
1793 | + |
1794 | + Return an integer representing the enum value of the openstack |
1795 | + release. |
1796 | + """ |
1797 | + # Must be ordered by OpenStack release (not by Ubuntu release): |
1798 | + (self.precise_essex, self.precise_folsom, self.precise_grizzly, |
1799 | + self.precise_havana, self.precise_icehouse, |
1800 | + self.trusty_icehouse, self.trusty_juno, self.utopic_juno, |
1801 | + self.trusty_kilo, self.vivid_kilo, self.trusty_liberty, |
1802 | + self.wily_liberty) = range(12) |
1803 | + |
1804 | + releases = { |
1805 | + ('precise', None): self.precise_essex, |
1806 | + ('precise', 'cloud:precise-folsom'): self.precise_folsom, |
1807 | + ('precise', 'cloud:precise-grizzly'): self.precise_grizzly, |
1808 | + ('precise', 'cloud:precise-havana'): self.precise_havana, |
1809 | + ('precise', 'cloud:precise-icehouse'): self.precise_icehouse, |
1810 | + ('trusty', None): self.trusty_icehouse, |
1811 | + ('trusty', 'cloud:trusty-juno'): self.trusty_juno, |
1812 | + ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo, |
1813 | + ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty, |
1814 | + ('utopic', None): self.utopic_juno, |
1815 | + ('vivid', None): self.vivid_kilo, |
1816 | + ('wily', None): self.wily_liberty} |
1817 | + |
1818 | + return releases[(self.series, self.openstack)] |
1819 | + |
1820 | + def _get_openstack_release_string(self): |
1821 | + """Get openstack release string. |
1822 | + |
1823 | + Return a string representing the openstack release. |
1824 | + """ |
1825 | + releases = OrderedDict([ |
1826 | + ('precise', 'essex'), |
1827 | + ('quantal', 'folsom'), |
1828 | + ('raring', 'grizzly'), |
1829 | + ('saucy', 'havana'), |
1830 | + ('trusty', 'icehouse'), |
1831 | + ('utopic', 'juno'), |
1832 | + ('vivid', 'kilo'), |
1833 | + ('wily', 'liberty'), |
1834 | + ]) |
1835 | + if self.openstack: |
1836 | + os_origin = self.openstack.split(':')[1] |
1837 | + return os_origin.split('%s-' % self.series)[1].split('/')[0] |
1838 | + else: |
1839 | + return releases[self.series] |
1840 | |
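The `_get_openstack_release_string` method above derives the release name either from the series default or by parsing an explicit cloud-archive origin such as `cloud:trusty-kilo/updates`. A standalone sketch of that parsing (hypothetical names, mirroring the split logic in the synced helper):

```python
# Sketch of the release-string derivation in
# OpenStackAmuletDeployment._get_openstack_release_string: an explicit
# cloud-archive origin like 'cloud:trusty-kilo/updates' is reduced to
# 'kilo'; with no origin, the series maps to its default release.
RELEASES = {'precise': 'essex', 'trusty': 'icehouse',
            'utopic': 'juno', 'vivid': 'kilo', 'wily': 'liberty'}

def release_string(series, openstack_origin=None):
    if openstack_origin:
        # 'cloud:trusty-kilo/updates' -> 'trusty-kilo/updates'
        pocket = openstack_origin.split(':')[1]
        # 'trusty-kilo/updates' -> 'kilo/updates' -> 'kilo'
        return pocket.split('%s-' % series)[1].split('/')[0]
    return RELEASES[series]
```

So `release_string('trusty', 'cloud:trusty-kilo')` yields `'kilo'`, while `release_string('trusty')` falls back to `'icehouse'`.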
1841 | === added file 'tests/charmhelpers/contrib/openstack/amulet/utils.py' |
1842 | --- tests/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000 |
1843 | +++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-06-11 15:38:49 +0000 |
1844 | @@ -0,0 +1,413 @@ |
1845 | +# Copyright 2014-2015 Canonical Limited. |
1846 | +# |
1847 | +# This file is part of charm-helpers. |
1848 | +# |
1849 | +# charm-helpers is free software: you can redistribute it and/or modify |
1850 | +# it under the terms of the GNU Lesser General Public License version 3 as |
1851 | +# published by the Free Software Foundation. |
1852 | +# |
1853 | +# charm-helpers is distributed in the hope that it will be useful, |
1854 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1855 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1856 | +# GNU Lesser General Public License for more details. |
1857 | +# |
1858 | +# You should have received a copy of the GNU Lesser General Public License |
1859 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1860 | + |
1861 | +import logging |
1862 | +import os |
1863 | +import six |
1864 | +import time |
1865 | +import urllib |
1866 | + |
1867 | +import glanceclient.v1.client as glance_client |
1868 | +import heatclient.v1.client as heat_client |
1869 | +import keystoneclient.v2_0 as keystone_client |
1870 | +import novaclient.v1_1.client as nova_client |
1871 | + |
1872 | +from charmhelpers.contrib.amulet.utils import ( |
1873 | + AmuletUtils |
1874 | +) |
1875 | + |
1876 | +DEBUG = logging.DEBUG |
1877 | +ERROR = logging.ERROR |
1878 | + |
1879 | + |
1880 | +class OpenStackAmuletUtils(AmuletUtils): |
1881 | + """OpenStack amulet utilities. |
1882 | + |
1883 | + This class inherits from AmuletUtils and has additional support |
1884 | + that is specifically for use by OpenStack charm tests. |
1885 | + """ |
1886 | + |
1887 | + def __init__(self, log_level=ERROR): |
1888 | + """Initialize the deployment environment.""" |
1889 | + super(OpenStackAmuletUtils, self).__init__(log_level) |
1890 | + |
1891 | + def validate_endpoint_data(self, endpoints, admin_port, internal_port, |
1892 | + public_port, expected): |
1893 | + """Validate endpoint data. |
1894 | + |
1895 | + Validate actual endpoint data vs expected endpoint data. The ports |
1896 | + are used to find the matching endpoint. |
1897 | + """ |
1898 | + self.log.debug('Validating endpoint data...') |
1899 | + self.log.debug('actual: {}'.format(repr(endpoints))) |
1900 | + found = False |
1901 | + for ep in endpoints: |
1902 | + self.log.debug('endpoint: {}'.format(repr(ep))) |
1903 | + if (admin_port in ep.adminurl and |
1904 | + internal_port in ep.internalurl and |
1905 | + public_port in ep.publicurl): |
1906 | + found = True |
1907 | + actual = {'id': ep.id, |
1908 | + 'region': ep.region, |
1909 | + 'adminurl': ep.adminurl, |
1910 | + 'internalurl': ep.internalurl, |
1911 | + 'publicurl': ep.publicurl, |
1912 | + 'service_id': ep.service_id} |
1913 | + ret = self._validate_dict_data(expected, actual) |
1914 | + if ret: |
1915 | + return 'unexpected endpoint data - {}'.format(ret) |
1916 | + |
1917 | + if not found: |
1918 | + return 'endpoint not found' |
1919 | + |
1920 | + def validate_svc_catalog_endpoint_data(self, expected, actual): |
1921 | + """Validate service catalog endpoint data. |
1922 | + |
1923 | + Validate a list of actual service catalog endpoints vs a list of |
1924 | + expected service catalog endpoints. |
1925 | + """ |
1926 | + self.log.debug('Validating service catalog endpoint data...') |
1927 | + self.log.debug('actual: {}'.format(repr(actual))) |
1928 | + for k, v in six.iteritems(expected): |
1929 | + if k in actual: |
1930 | + ret = self._validate_dict_data(expected[k][0], actual[k][0]) |
1931 | + if ret: |
1932 | + return self.endpoint_error(k, ret) |
1933 | + else: |
1934 | + return "endpoint {} does not exist".format(k) |
1935 | + return ret |
1936 | + |
1937 | + def validate_tenant_data(self, expected, actual): |
1938 | + """Validate tenant data. |
1939 | + |
1940 | + Validate a list of actual tenant data vs list of expected tenant |
1941 | + data. |
1942 | + """ |
1943 | + self.log.debug('Validating tenant data...') |
1944 | + self.log.debug('actual: {}'.format(repr(actual))) |
1945 | + for e in expected: |
1946 | + found = False |
1947 | + for act in actual: |
1948 | + a = {'enabled': act.enabled, 'description': act.description, |
1949 | + 'name': act.name, 'id': act.id} |
1950 | + if e['name'] == a['name']: |
1951 | + found = True |
1952 | + ret = self._validate_dict_data(e, a) |
1953 | + if ret: |
1954 | + return "unexpected tenant data - {}".format(ret) |
1955 | + if not found: |
1956 | + return "tenant {} does not exist".format(e['name']) |
1957 | + return ret |
1958 | + |
1959 | + def validate_role_data(self, expected, actual): |
1960 | + """Validate role data. |
1961 | + |
1962 | + Validate a list of actual role data vs a list of expected role |
1963 | + data. |
1964 | + """ |
1965 | + self.log.debug('Validating role data...') |
1966 | + self.log.debug('actual: {}'.format(repr(actual))) |
1967 | + for e in expected: |
1968 | + found = False |
1969 | + for act in actual: |
1970 | + a = {'name': act.name, 'id': act.id} |
1971 | + if e['name'] == a['name']: |
1972 | + found = True |
1973 | + ret = self._validate_dict_data(e, a) |
1974 | + if ret: |
1975 | + return "unexpected role data - {}".format(ret) |
1976 | + if not found: |
1977 | + return "role {} does not exist".format(e['name']) |
1978 | + return ret |
1979 | + |
1980 | + def validate_user_data(self, expected, actual): |
1981 | + """Validate user data. |
1982 | + |
1983 | + Validate a list of actual user data vs a list of expected user |
1984 | + data. |
1985 | + """ |
1986 | + self.log.debug('Validating user data...') |
1987 | + self.log.debug('actual: {}'.format(repr(actual))) |
1988 | + for e in expected: |
1989 | + found = False |
1990 | + for act in actual: |
1991 | + a = {'enabled': act.enabled, 'name': act.name, |
1992 | + 'email': act.email, 'tenantId': act.tenantId, |
1993 | + 'id': act.id} |
1994 | + if e['name'] == a['name']: |
1995 | + found = True |
1996 | + ret = self._validate_dict_data(e, a) |
1997 | + if ret: |
1998 | + return "unexpected user data - {}".format(ret) |
1999 | + if not found: |
2000 | + return "user {} does not exist".format(e['name']) |
2001 | + return ret |
2002 | + |
2003 | + def validate_flavor_data(self, expected, actual): |
2004 | + """Validate flavor data. |
2005 | + |
2006 | + Validate a list of actual flavors vs a list of expected flavors. |
2007 | + """ |
2008 | + self.log.debug('Validating flavor data...') |
2009 | + self.log.debug('actual: {}'.format(repr(actual))) |
2010 | + act = [a.name for a in actual] |
2011 | + return self._validate_list_data(expected, act) |
2012 | + |
2013 | + def tenant_exists(self, keystone, tenant): |
2014 | + """Return True if tenant exists.""" |
2015 | + self.log.debug('Checking if tenant exists ({})...'.format(tenant)) |
2016 | + return tenant in [t.name for t in keystone.tenants.list()] |
2017 | + |
2018 | + def authenticate_keystone_admin(self, keystone_sentry, user, password, |
2019 | + tenant): |
2020 | + """Authenticates admin user with the keystone admin endpoint.""" |
2021 | + self.log.debug('Authenticating keystone admin...') |
2022 | + unit = keystone_sentry |
2023 | + service_ip = unit.relation('shared-db', |
2024 | + 'mysql:shared-db')['private-address'] |
2025 | + ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8')) |
2026 | + return keystone_client.Client(username=user, password=password, |
2027 | + tenant_name=tenant, auth_url=ep) |
2028 | + |
2029 | + def authenticate_keystone_user(self, keystone, user, password, tenant): |
2030 | + """Authenticates a regular user with the keystone public endpoint.""" |
2031 | + self.log.debug('Authenticating keystone user ({})...'.format(user)) |
2032 | + ep = keystone.service_catalog.url_for(service_type='identity', |
2033 | + endpoint_type='publicURL') |
2034 | + return keystone_client.Client(username=user, password=password, |
2035 | + tenant_name=tenant, auth_url=ep) |
2036 | + |
2037 | + def authenticate_glance_admin(self, keystone): |
2038 | + """Authenticates admin user with glance.""" |
2039 | + self.log.debug('Authenticating glance admin...') |
2040 | + ep = keystone.service_catalog.url_for(service_type='image', |
2041 | + endpoint_type='adminURL') |
2042 | + return glance_client.Client(ep, token=keystone.auth_token) |
2043 | + |
2044 | + def authenticate_heat_admin(self, keystone): |
2045 | + """Authenticates the admin user with heat.""" |
2046 | + self.log.debug('Authenticating heat admin...') |
2047 | + ep = keystone.service_catalog.url_for(service_type='orchestration', |
2048 | + endpoint_type='publicURL') |
2049 | + return heat_client.Client(endpoint=ep, token=keystone.auth_token) |
2050 | + |
2051 | + def authenticate_nova_user(self, keystone, user, password, tenant): |
2052 | + """Authenticates a regular user with nova-api.""" |
2053 | + self.log.debug('Authenticating nova user ({})...'.format(user)) |
2054 | + ep = keystone.service_catalog.url_for(service_type='identity', |
2055 | + endpoint_type='publicURL') |
2056 | + return nova_client.Client(username=user, api_key=password, |
2057 | + project_id=tenant, auth_url=ep) |
2058 | + |
2059 | + def create_cirros_image(self, glance, image_name): |
2060 | + """Download the latest cirros image and upload it to glance.""" |
2061 | + self.log.debug('Creating glance image ({})...'.format(image_name)) |
2062 | + http_proxy = os.getenv('AMULET_HTTP_PROXY') |
2063 | + self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy)) |
2064 | + if http_proxy: |
2065 | + proxies = {'http': http_proxy} |
2066 | + opener = urllib.FancyURLopener(proxies) |
2067 | + else: |
2068 | + opener = urllib.FancyURLopener() |
2069 | + |
2070 | + f = opener.open("http://download.cirros-cloud.net/version/released") |
2071 | + version = f.read().strip() |
2072 | + cirros_img = "cirros-{}-x86_64-disk.img".format(version) |
2073 | + local_path = os.path.join('tests', cirros_img) |
2074 | + |
2075 | + if not os.path.exists(local_path): |
2076 | + cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net", |
2077 | + version, cirros_img) |
2078 | + opener.retrieve(cirros_url, local_path) |
2079 | + f.close() |
2080 | + |
2081 | + with open(local_path) as f: |
2082 | + image = glance.images.create(name=image_name, is_public=True, |
2083 | + disk_format='qcow2', |
2084 | + container_format='bare', data=f) |
2085 | + count = 1 |
2086 | + status = image.status |
2087 | + while status != 'active' and count < 10: |
2088 | + time.sleep(3) |
2089 | + image = glance.images.get(image.id) |
2090 | + status = image.status |
2091 | + self.log.debug('image status: {}'.format(status)) |
2092 | + count += 1 |
2093 | + |
2094 | + if status != 'active': |
2095 | + self.log.error('image creation timed out') |
2096 | + return None |
2097 | + |
2098 | + return image |
2099 | + |
2100 | + def delete_image(self, glance, image): |
2101 | + """Delete the specified image.""" |
2102 | + |
2103 | + # /!\ DEPRECATION WARNING |
2104 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
2105 | + 'delete_resource instead of delete_image.') |
2106 | + self.log.debug('Deleting glance image ({})...'.format(image)) |
2107 | + num_before = len(list(glance.images.list())) |
2108 | + glance.images.delete(image) |
2109 | + |
2110 | + count = 1 |
2111 | + num_after = len(list(glance.images.list())) |
2112 | + while num_after != (num_before - 1) and count < 10: |
2113 | + time.sleep(3) |
2114 | + num_after = len(list(glance.images.list())) |
2115 | + self.log.debug('number of images: {}'.format(num_after)) |
2116 | + count += 1 |
2117 | + |
2118 | + if num_after != (num_before - 1): |
2119 | + self.log.error('image deletion timed out') |
2120 | + return False |
2121 | + |
2122 | + return True |
2123 | + |
2124 | + def create_instance(self, nova, image_name, instance_name, flavor): |
2125 | + """Create the specified instance.""" |
2126 | + self.log.debug('Creating instance ' |
2127 | + '({}|{}|{})'.format(instance_name, image_name, flavor)) |
2128 | + image = nova.images.find(name=image_name) |
2129 | + flavor = nova.flavors.find(name=flavor) |
2130 | + instance = nova.servers.create(name=instance_name, image=image, |
2131 | + flavor=flavor) |
2132 | + |
2133 | + count = 1 |
2134 | + status = instance.status |
2135 | + while status != 'ACTIVE' and count < 60: |
2136 | + time.sleep(3) |
2137 | + instance = nova.servers.get(instance.id) |
2138 | + status = instance.status |
2139 | + self.log.debug('instance status: {}'.format(status)) |
2140 | + count += 1 |
2141 | + |
2142 | + if status != 'ACTIVE': |
2143 | + self.log.error('instance creation timed out') |
2144 | + return None |
2145 | + |
2146 | + return instance |
2147 | + |
2148 | + def delete_instance(self, nova, instance): |
2149 | + """Delete the specified instance.""" |
2150 | + |
2151 | + # /!\ DEPRECATION WARNING |
2152 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
2153 | + 'delete_resource instead of delete_instance.') |
2154 | + self.log.debug('Deleting instance ({})...'.format(instance)) |
2155 | + num_before = len(list(nova.servers.list())) |
2156 | + nova.servers.delete(instance) |
2157 | + |
2158 | + count = 1 |
2159 | + num_after = len(list(nova.servers.list())) |
2160 | + while num_after != (num_before - 1) and count < 10: |
2161 | + time.sleep(3) |
2162 | + num_after = len(list(nova.servers.list())) |
2163 | + self.log.debug('number of instances: {}'.format(num_after)) |
2164 | + count += 1 |
2165 | + |
2166 | + if num_after != (num_before - 1): |
2167 | + self.log.error('instance deletion timed out') |
2168 | + return False |
2169 | + |
2170 | + return True |
2171 | + |
2172 | + def create_or_get_keypair(self, nova, keypair_name="testkey"): |
2173 | + """Create a new keypair, or return pointer if it already exists.""" |
2174 | + try: |
2175 | + _keypair = nova.keypairs.get(keypair_name) |
2176 | + self.log.debug('Keypair ({}) already exists, ' |
2177 | + 'using it.'.format(keypair_name)) |
2178 | + return _keypair |
2179 | +        except Exception: |
2180 | + self.log.debug('Keypair ({}) does not exist, ' |
2181 | + 'creating it.'.format(keypair_name)) |
2182 | + |
2183 | + _keypair = nova.keypairs.create(name=keypair_name) |
2184 | + return _keypair |
2185 | + |
2186 | + def delete_resource(self, resource, resource_id, |
2187 | + msg="resource", max_wait=120): |
2188 | + """Delete one openstack resource, such as one instance, keypair, |
2189 | + image, volume, stack, etc., and confirm deletion within max wait time. |
2190 | + |
2191 | + :param resource: pointer to os resource type, ex:glance_client.images |
2192 | + :param resource_id: unique name or id for the openstack resource |
2193 | + :param msg: text to identify purpose in logging |
2194 | + :param max_wait: maximum wait time in seconds |
2195 | + :returns: True if successful, otherwise False |
2196 | + """ |
2197 | + num_before = len(list(resource.list())) |
2198 | + resource.delete(resource_id) |
2199 | + |
2200 | + tries = 0 |
2201 | + num_after = len(list(resource.list())) |
2202 | + while num_after != (num_before - 1) and tries < (max_wait / 4): |
2203 | + self.log.debug('{} delete check: ' |
2204 | + '{} [{}:{}] {}'.format(msg, tries, |
2205 | + num_before, |
2206 | + num_after, |
2207 | + resource_id)) |
2208 | + time.sleep(4) |
2209 | + num_after = len(list(resource.list())) |
2210 | + tries += 1 |
2211 | + |
2212 | + self.log.debug('{}: expected, actual count = {}, ' |
2213 | + '{}'.format(msg, num_before - 1, num_after)) |
2214 | + |
2215 | + if num_after == (num_before - 1): |
2216 | + return True |
2217 | + else: |
2218 | + self.log.error('{} delete timed out'.format(msg)) |
2219 | + return False |
2220 | + |
2221 | + def resource_reaches_status(self, resource, resource_id, |
2222 | + expected_stat='available', |
2223 | + msg='resource', max_wait=120): |
2224 | + """Wait for an openstack resource's status to reach an
2225 | + expected status within a specified time. Useful to confirm that |
2226 | + nova instances, cinder vols, snapshots, glance images, heat stacks |
2227 | + and other resources eventually reach the expected status. |
2228 | + |
2229 | + :param resource: pointer to os resource type, ex: heat_client.stacks |
2230 | + :param resource_id: unique id for the openstack resource |
2231 | + :param expected_stat: status to expect resource to reach |
2232 | + :param msg: text to identify purpose in logging |
2233 | + :param max_wait: maximum wait time in seconds |
2234 | + :returns: True if successful, False if status is not reached |
2235 | + """ |
2236 | + |
2237 | + tries = 0 |
2238 | + resource_stat = resource.get(resource_id).status |
2239 | + while resource_stat != expected_stat and tries < (max_wait / 4): |
2240 | + self.log.debug('{} status check: ' |
2241 | + '{} [{}:{}] {}'.format(msg, tries, |
2242 | + resource_stat, |
2243 | + expected_stat, |
2244 | + resource_id)) |
2245 | + time.sleep(4) |
2246 | + resource_stat = resource.get(resource_id).status |
2247 | + tries += 1 |
2248 | + |
2249 | + self.log.debug('{}: expected, actual status = {}, '
2250 | + '{}'.format(msg, expected_stat, resource_stat))
2251 | + |
2252 | + if resource_stat == expected_stat: |
2253 | + return True |
2254 | + else: |
2255 | + self.log.debug('{} never reached expected status: ' |
2256 | + '{}'.format(resource_id, expected_stat)) |
2257 | + return False |
2258 | |
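The `delete_resource` and `resource_reaches_status` helpers above both rely on the same poll-until-condition-or-timeout pattern. The sketch below isolates that pattern in a minimal, self-contained form; the `FakeResourceManager` stub is an assumption standing in for a real client manager such as `heat_client.stacks`, and the `interval` parameter is added here only so the example runs quickly — it is not part of the charm-helpers API.

```python
import time


class FakeResourceManager(object):
    """Hypothetical stand-in for an OpenStack client manager
    (e.g. heat_client.stacks); reports 'available' after two polls."""
    def __init__(self):
        self._polls = 0

    def get(self, resource_id):
        self._polls += 1
        status = 'available' if self._polls >= 2 else 'creating'
        # Return a tiny object exposing only the .status attribute,
        # mirroring what the real client objects provide.
        return type('Resource', (object,), {'status': status})()


def resource_reaches_status(resource, resource_id,
                            expected_stat='available',
                            max_wait=120, interval=0):
    """Poll the resource's status until it matches expected_stat,
    or give up after roughly max_wait seconds (max_wait / 4 polls)."""
    tries = 0
    resource_stat = resource.get(resource_id).status
    while resource_stat != expected_stat and tries < (max_wait / 4):
        time.sleep(interval)
        resource_stat = resource.get(resource_id).status
        tries += 1
    return resource_stat == expected_stat


print(resource_reaches_status(FakeResourceManager(), 'stack-1'))
```

In the real helper the poll interval is a fixed 4 seconds, which is why the try budget is `max_wait / 4`.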
2259 | === added directory 'tests/files' |
2260 | === added file 'tests/files/hot_hello_world.yaml' |
2261 | --- tests/files/hot_hello_world.yaml 1970-01-01 00:00:00 +0000 |
2262 | +++ tests/files/hot_hello_world.yaml 2015-06-11 15:38:49 +0000 |
2263 | @@ -0,0 +1,66 @@ |
2264 | +# |
2265 | +# This is a hello world HOT template just defining a single compute |
2266 | +# server. |
2267 | +# |
2268 | +heat_template_version: 2013-05-23 |
2269 | + |
2270 | +description: > |
2271 | + Hello world HOT template that just defines a single server. |
2272 | + Contains just base features to verify base HOT support. |
2273 | + |
2274 | +parameters: |
2275 | + key_name: |
2276 | + type: string |
2277 | + description: Name of an existing key pair to use for the server |
2278 | + constraints: |
2279 | + - custom_constraint: nova.keypair |
2280 | + flavor: |
2281 | + type: string |
2282 | + description: Flavor for the server to be created |
2283 | + default: m1.tiny |
2284 | + constraints: |
2285 | + - custom_constraint: nova.flavor |
2286 | + image: |
2287 | + type: string |
2288 | + description: Image ID or image name to use for the server |
2289 | + constraints: |
2290 | + - custom_constraint: glance.image |
2291 | + admin_pass: |
2292 | + type: string |
2293 | + description: Admin password |
2294 | + hidden: true |
2295 | + constraints: |
2296 | + - length: { min: 6, max: 8 } |
2297 | + description: Password length must be between 6 and 8 characters |
2298 | + - allowed_pattern: "[a-zA-Z0-9]+" |
2299 | + description: Password must consist of characters and numbers only |
2300 | + - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*" |
2301 | + description: Password must start with an uppercase character |
2302 | + db_port: |
2303 | + type: number |
2304 | + description: Database port number |
2305 | + default: 50000 |
2306 | + constraints: |
2307 | + - range: { min: 40000, max: 60000 } |
2308 | + description: Port number must be between 40000 and 60000 |
2309 | + |
2310 | +resources: |
2311 | + server: |
2312 | + type: OS::Nova::Server |
2313 | + properties: |
2314 | + key_name: { get_param: key_name } |
2315 | + image: { get_param: image } |
2316 | + flavor: { get_param: flavor } |
2317 | + admin_pass: { get_param: admin_pass } |
2318 | + user_data: |
2319 | + str_replace: |
2320 | + template: | |
2321 | + #!/bin/bash |
2322 | + echo db_port |
2323 | + params: |
2324 | + db_port: { get_param: db_port } |
2325 | + |
2326 | +outputs: |
2327 | + server_networks: |
2328 | + description: The networks of the deployed server |
2329 | + value: { get_attr: [server, networks] } |
2330 | |
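The `admin_pass` and `db_port` parameters above carry explicit constraints (length 6-8, alphanumeric, leading uppercase; port in 40000-60000). For reference, those constraints can be mirrored client-side before submitting the stack; `validate_params` below is a hypothetical helper written for this note, not part of the test suite or of Heat itself.

```python
import re


def validate_params(admin_pass, db_port):
    """Mirror the parameter constraints declared in hot_hello_world.yaml."""
    if not 6 <= len(admin_pass) <= 8:
        return False  # length: { min: 6, max: 8 }
    if not re.match(r'^[a-zA-Z0-9]+$', admin_pass):
        return False  # characters and numbers only
    if not re.match(r'^[A-Z]+[a-zA-Z0-9]*$', admin_pass):
        return False  # must start with an uppercase character
    if not 40000 <= db_port <= 60000:
        return False  # range: { min: 40000, max: 60000 }
    return True


print(validate_params('Passw0rd', 50000))  # satisfies every constraint
print(validate_params('secret', 50000))    # rejected: lowercase start
```

Heat enforces the same checks server-side at stack-create time, so this is only a convenience for failing fast in a test harness.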
2331 | === added file 'tests/tests.yaml' |
2332 | --- tests/tests.yaml 1970-01-01 00:00:00 +0000 |
2333 | +++ tests/tests.yaml 2015-06-11 15:38:49 +0000 |
2334 | @@ -0,0 +1,15 @@ |
2335 | +bootstrap: true |
2336 | +reset: true |
2337 | +virtualenv: true |
2338 | +makefile: |
2339 | + - lint |
2340 | + - unit_test |
2341 | +sources: |
2342 | + - ppa:juju/stable |
2343 | +packages: |
2344 | + - amulet |
2345 | + - python-amulet |
2346 | + - python-distro-info |
2347 | + - python-glanceclient |
2348 | + - python-keystoneclient |
2349 | + - python-novaclient |
charm_unit_test #4862 heat-next for 1chb1n mp258105
UNIT OK: passed
Build: http://10.245.162.77:8080/job/charm_unit_test/4862/