Merge lp:~1chb1n/charms/trusty/heat/next-amulet-init into lp:~openstack-charmers-archive/charms/trusty/heat/next
Status: Merged
Merged at revision: 44
Proposed branch: lp:~1chb1n/charms/trusty/heat/next-amulet-init
Merge into: lp:~openstack-charmers-archive/charms/trusty/heat/next
Diff against target: 2349 lines (+2079/-59), 24 files modified:
  Makefile (+16/-12)
  charm-helpers-tests.yaml (+5/-0)
  hooks/charmhelpers/contrib/hahelpers/cluster.py (+12/-3)
  hooks/charmhelpers/contrib/openstack/ip.py (+49/-44)
  tests/00-setup (+11/-0)
  tests/014-basic-precise-icehouse (+11/-0)
  tests/015-basic-trusty-icehouse (+9/-0)
  tests/016-basic-trusty-juno (+11/-0)
  tests/017-basic-trusty-kilo (+11/-0)
  tests/018-basic-utopic-juno (+9/-0)
  tests/019-basic-vivid-kilo (+9/-0)
  tests/README (+76/-0)
  tests/basic_deployment.py (+606/-0)
  tests/charmhelpers/__init__.py (+38/-0)
  tests/charmhelpers/contrib/__init__.py (+15/-0)
  tests/charmhelpers/contrib/amulet/__init__.py (+15/-0)
  tests/charmhelpers/contrib/amulet/deployment.py (+93/-0)
  tests/charmhelpers/contrib/amulet/utils.py (+408/-0)
  tests/charmhelpers/contrib/openstack/__init__.py (+15/-0)
  tests/charmhelpers/contrib/openstack/amulet/__init__.py (+15/-0)
  tests/charmhelpers/contrib/openstack/amulet/deployment.py (+151/-0)
  tests/charmhelpers/contrib/openstack/amulet/utils.py (+413/-0)
  tests/files/hot_hello_world.yaml (+66/-0)
  tests/tests.yaml (+15/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/heat/next-amulet-init
Related bugs:

Reviewer | Review Type | Date Requested | Status
---|---|---|---
Corey Bryant (community) | | | Approve
OpenStack Charmers | | | Pending

Review via email: mp+258105@code.launchpad.net
Commit message
Description of the change
Add basic amulet tests; sync tests/charmhelpers; sync charmhelpers.
Depends on this charm-helpers mp also landing: https:/
Tracking bug: https:/
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5183 heat-next for 1chb1n mp258105
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4546 heat-next for 1chb1n mp258105
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #4864 heat-next for 1chb1n mp258105
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5185 heat-next for 1chb1n mp258105
LINT OK: passed
Ryan Beisner (1chb1n) wrote:
Silly rabbit, the service has to be different. Amulet passed locally; the uosci bot will report back with a full run.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4548 heat-next for 1chb1n mp258105
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4550 heat-next for 1chb1n mp258105
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Corey Bryant (corey.bryant) wrote:
Hello! Some inline comments below.
Ryan Beisner (1chb1n) wrote:
Thanks for the detailed review, much appreciated! Acks, further info, and questions are also commented inline...
Corey Bryant (corey.bryant) wrote:
No problem, responses to responses below.
Ryan Beisner (1chb1n) wrote:
Reply in-line. TA!
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #4972 heat-next for 1chb1n mp258105
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5339 heat-next for 1chb1n mp258105
LINT OK: passed
Ryan Beisner (1chb1n) wrote:
Ready for review, release to the bot.
Corey Bryant (corey.bryant) wrote:
Approved, but waiting for the charm-helpers changes to land before this lands. And waiting on tests. Thanks!
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5343 heat-next for 1chb1n mp258105
LINT OK: passed
Ryan Beisner (1chb1n) wrote:
@Corey:
Amulet passed all targets, but there is a bzr bot comment issue with the uosci amulet commentator (addressing that separately).
Pasting amulet success output here: http://
Preview Diff
=== modified file 'Makefile'
--- Makefile	2014-12-15 09:16:40 +0000
+++ Makefile	2015-06-11 15:38:49 +0000
@@ -2,13 +2,17 @@
 PYTHON := /usr/bin/env python
 
 lint:
-	@echo -n "Running flake8 tests: "
-	@flake8 --exclude hooks/charmhelpers hooks
-	@flake8 unit_tests
-	@echo "OK"
-	@echo -n "Running charm proof: "
+	@echo Lint inspections and charm proof...
+	@flake8 --exclude hooks/charmhelpers hooks tests unit_tests
 	@charm proof
-	@echo "OK"
+
+test:
+	@# Bundletester expects unit tests here.
+	@$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
+
+functional_test:
+	@echo Starting all functional, lint and unit tests...
+	@juju test -v -p AMULET_HTTP_PROXY --timeout 2700
 
 bin/charm_helpers_sync.py:
 	@mkdir -p bin
@@ -16,9 +20,9 @@
 	> bin/charm_helpers_sync.py
 
 sync: bin/charm_helpers_sync.py
-	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
+	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
+	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
 
-unit_test:
-	@$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
-
-all: unit_test lint
+publish: lint unit_test
+	bzr push lp:charms/heat
+	bzr push lp:charms/trusty/heat
=== renamed file 'charm-helpers.yaml' => 'charm-helpers-hooks.yaml'
=== added file 'charm-helpers-tests.yaml'
--- charm-helpers-tests.yaml	1970-01-01 00:00:00 +0000
+++ charm-helpers-tests.yaml	2015-06-11 15:38:49 +0000
@@ -0,0 +1,5 @@
+branch: lp:charm-helpers
+destination: tests/charmhelpers
+include:
+    - contrib.amulet
+    - contrib.openstack.amulet
=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py	2015-06-04 08:45:25 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py	2015-06-11 15:38:49 +0000
@@ -64,6 +64,10 @@
     pass
 
 
+class CRMDCNotFound(Exception):
+    pass
+
+
 def is_elected_leader(resource):
     """
     Returns True if the charm executing this is the elected cluster leader.
@@ -116,8 +120,9 @@
         status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
         if not isinstance(status, six.text_type):
             status = six.text_type(status, "utf-8")
-    except subprocess.CalledProcessError:
-        return False
+    except subprocess.CalledProcessError as ex:
+        raise CRMDCNotFound(str(ex))
+
     current_dc = ''
     for line in status.split('\n'):
         if line.startswith('Current DC'):
@@ -125,10 +130,14 @@
             current_dc = line.split(':')[1].split()[0]
     if current_dc == get_unit_hostname():
         return True
+    elif current_dc == 'NONE':
+        raise CRMDCNotFound('Current DC: NONE')
+
     return False
 
 
-@retry_on_exception(5, base_delay=2, exc_type=CRMResourceNotFound)
+@retry_on_exception(5, base_delay=2,
+                    exc_type=(CRMResourceNotFound, CRMDCNotFound))
 def is_crm_leader(resource, retry=False):
     """
     Returns True if the charm calling this is the elected corosync leader,
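The cluster.py hunks above widen `retry_on_exception` so that a transient `CRMDCNotFound` (e.g. "Current DC: NONE" while corosync is still electing) triggers a retry instead of an immediate failure. A minimal sketch of how such a decorator can accept a tuple of exception classes follows; it is a hypothetical stand-in for the real helper in `charmhelpers.core.decorators`, and the doubling backoff here is illustrative (the real helper's policy may differ):

```python
import time


class CRMDCNotFound(Exception):
    """Raised while the corosync Designated Controller is not yet known."""
    pass


def retry_on_exception(num_retries, base_delay=0, exc_type=Exception):
    """Retry the wrapped callable when it raises exc_type.

    exc_type may be a single exception class or a tuple of classes,
    which is why the decorator in the diff can grow from
    CRMResourceNotFound to (CRMResourceNotFound, CRMDCNotFound).
    """
    def _decorator(f):
        def _wrapped(*args, **kwargs):
            retries = num_retries
            delay = base_delay
            while True:
                try:
                    return f(*args, **kwargs)
                except exc_type:
                    if retries <= 0:
                        raise  # out of attempts: propagate the error
                    retries -= 1
                    time.sleep(delay)
                    delay *= 2  # simple exponential backoff
        return _wrapped
    return _decorator


calls = []


@retry_on_exception(5, base_delay=0, exc_type=(CRMDCNotFound,))
def flaky_is_leader():
    """Fails twice, then succeeds, mimicking a slow DC election."""
    calls.append(1)
    if len(calls) < 3:
        raise CRMDCNotFound('Current DC: NONE')
    return True
```

With `base_delay=2` as in the diff, the waits between attempts grow geometrically, giving a slow corosync election time to finish before the charm gives up.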
=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
--- hooks/charmhelpers/contrib/openstack/ip.py	2015-02-24 11:04:31 +0000
+++ hooks/charmhelpers/contrib/openstack/ip.py	2015-06-11 15:38:49 +0000
@@ -17,6 +17,7 @@
 from charmhelpers.core.hookenv import (
     config,
     unit_get,
+    service_name,
 )
 from charmhelpers.contrib.network.ip import (
     get_address_in_network,
@@ -26,8 +27,6 @@
 )
 from charmhelpers.contrib.hahelpers.cluster import is_clustered
 
-from functools import partial
-
 PUBLIC = 'public'
 INTERNAL = 'int'
 ADMIN = 'admin'
@@ -35,15 +34,18 @@
 ADDRESS_MAP = {
     PUBLIC: {
         'config': 'os-public-network',
-        'fallback': 'public-address'
+        'fallback': 'public-address',
+        'override': 'os-public-hostname',
     },
     INTERNAL: {
         'config': 'os-internal-network',
-        'fallback': 'private-address'
+        'fallback': 'private-address',
+        'override': 'os-internal-hostname',
     },
     ADMIN: {
         'config': 'os-admin-network',
-        'fallback': 'private-address'
+        'fallback': 'private-address',
+        'override': 'os-admin-hostname',
     }
 }
 
@@ -57,15 +59,50 @@
     :param endpoint_type: str endpoint type to resolve.
     :param returns: str base URL for services on the current service unit.
     """
-    scheme = 'http'
-    if 'https' in configs.complete_contexts():
-        scheme = 'https'
+    scheme = _get_scheme(configs)
+
     address = resolve_address(endpoint_type)
     if is_ipv6(address):
         address = "[{}]".format(address)
+
     return '%s://%s' % (scheme, address)
 
 
+def _get_scheme(configs):
+    """Returns the scheme to use for the url (either http or https)
+    depending upon whether https is in the configs value.
+
+    :param configs: OSTemplateRenderer config templating object to inspect
+                    for a complete https context.
+    :returns: either 'http' or 'https' depending on whether https is
+              configured within the configs context.
+    """
+    scheme = 'http'
+    if configs and 'https' in configs.complete_contexts():
+        scheme = 'https'
+    return scheme
+
+
+def _get_address_override(endpoint_type=PUBLIC):
+    """Returns any address overrides that the user has defined based on the
+    endpoint type.
+
+    Note: this function allows for the service name to be inserted into the
+    address if the user specifies {service_name}.somehost.org.
+
+    :param endpoint_type: the type of endpoint to retrieve the override
+                          value for.
+    :returns: any endpoint address or hostname that the user has overridden
+              or None if an override is not present.
+    """
+    override_key = ADDRESS_MAP[endpoint_type]['override']
+    addr_override = config(override_key)
+    if not addr_override:
+        return None
+    else:
+        return addr_override.format(service_name=service_name())
+
+
 def resolve_address(endpoint_type=PUBLIC):
     """Return unit address depending on net config.
 
@@ -77,7 +114,10 @@
 
     :param endpoint_type: Network endpoing type
     """
-    resolved_address = None
+    resolved_address = _get_address_override(endpoint_type)
+    if resolved_address:
+        return resolved_address
+
     vips = config('vip')
     if vips:
         vips = vips.split()
@@ -109,38 +149,3 @@
         "clustered=%s)" % (net_type, clustered))
 
     return resolved_address
-
-
-def endpoint_url(configs, url_template, port, endpoint_type=PUBLIC,
-                 override=None):
-    """Returns the correct endpoint URL to advertise to Keystone.
-
-    This method provides the correct endpoint URL which should be advertised to
-    the keystone charm for endpoint creation. This method allows for the url to
-    be overridden to force a keystone endpoint to have specific URL for any of
-    the defined scopes (admin, internal, public).
-
-    :param configs: OSTemplateRenderer config templating object to inspect
-                    for a complete https context.
-    :param url_template: str format string for creating the url template. Only
-                         two values will be passed - the scheme+hostname
-                         returned by the canonical_url and the port.
-    :param endpoint_type: str endpoint type to resolve.
-    :param override: str the name of the config option which overrides the
-                     endpoint URL defined by the charm itself. None will
-                     disable any overrides (default).
-    """
-    if override:
-        # Return any user-defined overrides for the keystone endpoint URL.
-        user_value = config(override)
-        if user_value:
-            return user_value.strip()
-
-    return url_template % (canonical_url(configs, endpoint_type), port)
-
-
-public_endpoint = partial(endpoint_url, endpoint_type=PUBLIC)
-
-internal_endpoint = partial(endpoint_url, endpoint_type=INTERNAL)
-
-admin_endpoint = partial(endpoint_url, endpoint_type=ADMIN)
=== added directory 'tests'
=== added file 'tests/00-setup'
--- tests/00-setup	1970-01-01 00:00:00 +0000
+++ tests/00-setup	2015-06-11 15:38:49 +0000
@@ -0,0 +1,11 @@
+#!/bin/bash
+
+set -ex
+
+sudo add-apt-repository --yes ppa:juju/stable
+sudo apt-get update --yes
+sudo apt-get install --yes python-amulet \
+                           python-distro-info \
+                           python-glanceclient \
+                           python-keystoneclient \
+                           python-novaclient
=== added file 'tests/014-basic-precise-icehouse'
--- tests/014-basic-precise-icehouse	1970-01-01 00:00:00 +0000
+++ tests/014-basic-precise-icehouse	2015-06-11 15:38:49 +0000
@@ -0,0 +1,11 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic heat deployment on precise-icehouse."""
+
+from basic_deployment import HeatBasicDeployment
+
+if __name__ == '__main__':
+    deployment = HeatBasicDeployment(series='precise',
+                                     openstack='cloud:precise-icehouse',
+                                     source='cloud:precise-updates/icehouse')
+    deployment.run_tests()

=== added file 'tests/015-basic-trusty-icehouse'
--- tests/015-basic-trusty-icehouse	1970-01-01 00:00:00 +0000
+++ tests/015-basic-trusty-icehouse	2015-06-11 15:38:49 +0000
@@ -0,0 +1,9 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic heat deployment on trusty-icehouse."""
+
+from basic_deployment import HeatBasicDeployment
+
+if __name__ == '__main__':
+    deployment = HeatBasicDeployment(series='trusty')
+    deployment.run_tests()

=== added file 'tests/016-basic-trusty-juno'
--- tests/016-basic-trusty-juno	1970-01-01 00:00:00 +0000
+++ tests/016-basic-trusty-juno	2015-06-11 15:38:49 +0000
@@ -0,0 +1,11 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic heat deployment on trusty-juno."""
+
+from basic_deployment import HeatBasicDeployment
+
+if __name__ == '__main__':
+    deployment = HeatBasicDeployment(series='trusty',
+                                     openstack='cloud:trusty-juno',
+                                     source='cloud:trusty-updates/juno')
+    deployment.run_tests()

=== added file 'tests/017-basic-trusty-kilo'
--- tests/017-basic-trusty-kilo	1970-01-01 00:00:00 +0000
+++ tests/017-basic-trusty-kilo	2015-06-11 15:38:49 +0000
@@ -0,0 +1,11 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic heat deployment on trusty-kilo."""
+
+from basic_deployment import HeatBasicDeployment
+
+if __name__ == '__main__':
+    deployment = HeatBasicDeployment(series='trusty',
+                                     openstack='cloud:trusty-kilo',
+                                     source='cloud:trusty-updates/kilo')
+    deployment.run_tests()

=== added file 'tests/018-basic-utopic-juno'
--- tests/018-basic-utopic-juno	1970-01-01 00:00:00 +0000
+++ tests/018-basic-utopic-juno	2015-06-11 15:38:49 +0000
@@ -0,0 +1,9 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic heat deployment on utopic-juno."""
+
+from basic_deployment import HeatBasicDeployment
+
+if __name__ == '__main__':
+    deployment = HeatBasicDeployment(series='utopic')
+    deployment.run_tests()

=== added file 'tests/019-basic-vivid-kilo'
--- tests/019-basic-vivid-kilo	1970-01-01 00:00:00 +0000
+++ tests/019-basic-vivid-kilo	2015-06-11 15:38:49 +0000
@@ -0,0 +1,9 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic heat deployment on vivid-kilo."""
+
+from basic_deployment import HeatBasicDeployment
+
+if __name__ == '__main__':
+    deployment = HeatBasicDeployment(series='vivid')
+    deployment.run_tests()
=== added file 'tests/README'
--- tests/README	1970-01-01 00:00:00 +0000
+++ tests/README	2015-06-11 15:38:49 +0000
@@ -0,0 +1,76 @@
+This directory provides Amulet tests that focus on verification of heat
+deployments.
+
+test_* methods are called in lexical sort order.
+
+Test name convention to ensure desired test order:
+    1xx service and endpoint checks
+    2xx relation checks
+    3xx config checks
+    4xx functional checks
+    9xx restarts and other final checks
+
+Common uses of heat relations in deployments:
+    - [ heat, mysql ]
+    - [ heat, keystone ]
+    - [ heat, rabbitmq-server ]
+
+More detailed relations of heat service in a common deployment:
+    relations:
+        amqp:
+        - rabbitmq-server
+        identity-service:
+        - keystone
+        shared-db:
+        - mysql
+
+In order to run tests, you'll need charm-tools installed (in addition to
+juju, of course):
+    sudo add-apt-repository ppa:juju/stable
+    sudo apt-get update
+    sudo apt-get install charm-tools
+
+If you use a web proxy server to access the web, you'll need to set the
+AMULET_HTTP_PROXY environment variable to the http URL of the proxy server.
+
+The following examples demonstrate different ways that tests can be executed.
+All examples are run from the charm's root directory.
+
+* To run all tests (starting with 00-setup):
+
+    make test
+
+* To run a specific test module (or modules):
+
+    juju test -v -p AMULET_HTTP_PROXY 15-basic-trusty-icehouse
+
+* To run a specific test module (or modules), and keep the environment
+  deployed after a failure:
+
+    juju test --set-e -v -p AMULET_HTTP_PROXY 15-basic-trusty-icehouse
+
+* To re-run a test module against an already deployed environment (one
+  that was deployed by a previous call to 'juju test --set-e'):
+
+    ./tests/15-basic-trusty-icehouse
+
+For debugging and test development purposes, all code should be idempotent.
+In other words, the code should have the ability to be re-run without changing
+the results beyond the initial run. This enables editing and re-running of a
+test module against an already deployed environment, as described above.
+
+Manual debugging tips:
+
+* Set the following env vars before using the OpenStack CLI as admin:
+    export OS_AUTH_URL=http://`juju-deployer -f keystone 2>&1 | tail -n 1`:5000/v2.0
+    export OS_TENANT_NAME=admin
+    export OS_USERNAME=admin
+    export OS_PASSWORD=openstack
+    export OS_REGION_NAME=RegionOne
+
+* Set the following env vars before using the OpenStack CLI as demoUser:
+    export OS_AUTH_URL=http://`juju-deployer -f keystone 2>&1 | tail -n 1`:5000/v2.0
+    export OS_TENANT_NAME=demoTenant
+    export OS_USERNAME=demoUser
+    export OS_PASSWORD=password
+    export OS_REGION_NAME=RegionOne
=== added file 'tests/basic_deployment.py'
--- tests/basic_deployment.py	1970-01-01 00:00:00 +0000
+++ tests/basic_deployment.py	2015-06-11 15:38:49 +0000
@@ -0,0 +1,606 @@
+#!/usr/bin/python
+
+"""
+Basic heat functional test.
+"""
+import amulet
+import time
+from heatclient.common import template_utils
+
+from charmhelpers.contrib.openstack.amulet.deployment import (
+    OpenStackAmuletDeployment
+)
+
+from charmhelpers.contrib.openstack.amulet.utils import (
+    OpenStackAmuletUtils,
+    DEBUG,
+    #ERROR
+)
+
+# Use DEBUG to turn on debug logging
+u = OpenStackAmuletUtils(DEBUG)
+
+# Resource and name constants
+IMAGE_NAME = 'cirros-image-1'
+KEYPAIR_NAME = 'testkey'
+STACK_NAME = 'hello_world'
+RESOURCE_TYPE = 'server'
+TEMPLATE_REL_PATH = 'tests/files/hot_hello_world.yaml'
+
+
+class HeatBasicDeployment(OpenStackAmuletDeployment):
+    """Amulet tests on a basic heat deployment."""
+
+    def __init__(self, series=None, openstack=None, source=None, git=False,
+                 stable=False):
+        """Deploy the entire test environment."""
+        super(HeatBasicDeployment, self).__init__(series, openstack,
+                                                  source, stable)
+        self.git = git
+        self._add_services()
+        self._add_relations()
+        self._configure_services()
+        self._deploy()
+        self._initialize_tests()
+
+    def _add_services(self):
+        """Add services
+
+           Add the services that we're testing, where heat is local,
+           and the rest of the service are from lp branches that are
+           compatible with the local charm (e.g. stable or next).
+           """
+        this_service = {'name': 'heat'}
+        other_services = [{'name': 'keystone'},
+                          {'name': 'rabbitmq-server'},
+                          {'name': 'mysql'},
+                          {'name': 'glance'},
+                          {'name': 'nova-cloud-controller'},
+                          {'name': 'nova-compute'}]
+        super(HeatBasicDeployment, self)._add_services(this_service,
+                                                       other_services)
+
+    def _add_relations(self):
+        """Add all of the relations for the services."""
+
+        relations = {
+            'heat:amqp': 'rabbitmq-server:amqp',
+            'heat:identity-service': 'keystone:identity-service',
+            'heat:shared-db': 'mysql:shared-db',
+            'nova-compute:image-service': 'glance:image-service',
+            'nova-compute:shared-db': 'mysql:shared-db',
+            'nova-compute:amqp': 'rabbitmq-server:amqp',
+            'nova-cloud-controller:shared-db': 'mysql:shared-db',
+            'nova-cloud-controller:identity-service':
+                'keystone:identity-service',
+            'nova-cloud-controller:amqp': 'rabbitmq-server:amqp',
+            'nova-cloud-controller:cloud-compute':
+                'nova-compute:cloud-compute',
+            'nova-cloud-controller:image-service': 'glance:image-service',
+            'keystone:shared-db': 'mysql:shared-db',
+            'glance:identity-service': 'keystone:identity-service',
+            'glance:shared-db': 'mysql:shared-db',
+            'glance:amqp': 'rabbitmq-server:amqp'
+        }
+        super(HeatBasicDeployment, self)._add_relations(relations)
+
+    def _configure_services(self):
+        """Configure all of the services."""
+        nova_config = {'config-flags': 'auto_assign_floating_ip=False',
+                       'enable-live-migration': 'False'}
+        keystone_config = {'admin-password': 'openstack',
+                           'admin-token': 'ubuntutesting'}
+        configs = {'nova-compute': nova_config, 'keystone': keystone_config}
+        super(HeatBasicDeployment, self)._configure_services(configs)
+
+    def _initialize_tests(self):
+        """Perform final initialization before tests get run."""
+        # Access the sentries for inspecting service units
+        self.heat_sentry = self.d.sentry.unit['heat/0']
+        self.mysql_sentry = self.d.sentry.unit['mysql/0']
+        self.keystone_sentry = self.d.sentry.unit['keystone/0']
+        self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
+        self.nova_compute_sentry = self.d.sentry.unit['nova-compute/0']
+        self.glance_sentry = self.d.sentry.unit['glance/0']
+        u.log.debug('openstack release val: {}'.format(
+            self._get_openstack_release()))
+        u.log.debug('openstack release str: {}'.format(
+            self._get_openstack_release_string()))
+
+        # Let things settle a bit before moving forward
+        time.sleep(30)
+
+        # Authenticate admin with keystone
+        self.keystone = u.authenticate_keystone_admin(self.keystone_sentry,
+                                                      user='admin',
+                                                      password='openstack',
+                                                      tenant='admin')
+
+        # Authenticate admin with glance endpoint
+        self.glance = u.authenticate_glance_admin(self.keystone)
+
+        # Authenticate admin with nova endpoint
+        self.nova = u.authenticate_nova_user(self.keystone,
+                                             user='admin',
+                                             password='openstack',
+                                             tenant='admin')
+
+        # Authenticate admin with heat endpoint
+        self.heat = u.authenticate_heat_admin(self.keystone)
+
+    def _image_create(self):
+        """Create an image to be used by the heat template, verify it exists"""
+        u.log.debug('Creating glance image ({})...'.format(IMAGE_NAME))
+
+        # Create a new image
+        image_new = u.create_cirros_image(self.glance, IMAGE_NAME)
+
+        # Confirm image is created and has status of 'active'
+        if not image_new:
+            message = 'glance image create failed'
+            amulet.raise_status(amulet.FAIL, msg=message)
+
+        # Verify new image name
+        images_list = list(self.glance.images.list())
+        if images_list[0].name != IMAGE_NAME:
+            message = ('glance image create failed or unexpected '
+                       'image name {}'.format(images_list[0].name))
+            amulet.raise_status(amulet.FAIL, msg=message)
+
+    def _keypair_create(self):
+        """Create a keypair to be used by the heat template,
+           or get a keypair if it exists."""
+        self.keypair = u.create_or_get_keypair(self.nova,
+                                               keypair_name=KEYPAIR_NAME)
+        if not self.keypair:
+            msg = 'Failed to create or get keypair.'
+            amulet.raise_status(amulet.FAIL, msg=msg)
+        u.log.debug("Keypair: {} {}".format(self.keypair.id,
+                                            self.keypair.fingerprint))
+
+    def _stack_create(self):
+        """Create a heat stack from a basic heat template, verify its status"""
+        u.log.debug('Creating heat stack...')
+
+        t_url = u.file_to_url(TEMPLATE_REL_PATH)
+        r_req = self.heat.http_client.raw_request
+        u.log.debug('template url: {}'.format(t_url))
+
+        t_files, template = template_utils.get_template_contents(t_url, r_req)
+        env_files, env = template_utils.process_environment_and_files(
+            env_path=None)
+
+        fields = {
+            'stack_name': STACK_NAME,
+            'timeout_mins': '15',
+            'disable_rollback': False,
+            'parameters': {
+                'admin_pass': 'Ubuntu',
617 | 179 | 'key_name': KEYPAIR_NAME, | ||
618 | 180 | 'image': IMAGE_NAME | ||
619 | 181 | }, | ||
620 | 182 | 'template': template, | ||
621 | 183 | 'files': dict(list(t_files.items()) + list(env_files.items())), | ||
622 | 184 | 'environment': env | ||
623 | 185 | } | ||
624 | 186 | |||
625 | 187 | # Create the stack. | ||
626 | 188 | try: | ||
627 | 189 | _stack = self.heat.stacks.create(**fields) | ||
628 | 190 | u.log.debug('Stack data: {}'.format(_stack)) | ||
629 | 191 | _stack_id = _stack['stack']['id'] | ||
630 | 192 | u.log.debug('Creating new stack, ID: {}'.format(_stack_id)) | ||
631 | 193 | except Exception as e: | ||
632 | 194 | # Generally, an api or cloud config error if this is hit. | ||
633 | 195 | msg = 'Failed to create heat stack: {}'.format(e) | ||
634 | 196 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
635 | 197 | |||
636 | 198 | # Confirm stack reaches COMPLETE status. | ||
637 | 199 | # /!\ Heat stacks reach a COMPLETE status even when nova cannot | ||
638 | 200 | # find resources (a valid hypervisor) to fit the instance, in | ||
639 | 201 | # which case the heat stack self-deletes! Confirm anyway... | ||
640 | 202 | ret = u.resource_reaches_status(self.heat.stacks, _stack_id, | ||
641 | 203 | expected_stat="COMPLETE", | ||
642 | 204 | msg="Stack status wait") | ||
643 | 205 | _stacks = list(self.heat.stacks.list()) | ||
644 | 206 | u.log.debug('All stacks: {}'.format(_stacks)) | ||
645 | 207 | if not ret: | ||
646 | 208 | msg = 'Heat stack failed to reach expected state.' | ||
647 | 209 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
648 | 210 | |||
649 | 211 | # Confirm stack still exists. | ||
650 | 212 | try: | ||
651 | 213 | _stack = self.heat.stacks.get(STACK_NAME) | ||
652 | 214 | except Exception as e: | ||
653 | 215 | # Generally, a resource availability issue if this is hit. | ||
654 | 216 | msg = 'Failed to get heat stack: {}'.format(e) | ||
655 | 217 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
656 | 218 | |||
657 | 219 | # Confirm stack name. | ||
658 | 220 | u.log.debug('Expected, actual stack name: {}, ' | ||
659 | 221 | '{}'.format(STACK_NAME, _stack.stack_name)) | ||
660 | 222 | if STACK_NAME != _stack.stack_name: | ||
661 | 223 | msg = 'Stack name mismatch, {} != {}'.format(STACK_NAME, | ||
662 | 224 | _stack.stack_name) | ||
663 | 225 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
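A note for reviewers: `_stack_create` leans on `u.resource_reaches_status` to wait out the stack build. A minimal polling helper in that spirit (the function name matches, but the defaults, timings, and manager interface here are illustrative assumptions, not the charm-helpers API) could look like:

```python
import time


def resource_reaches_status(resource_mgr, resource_id,
                            expected_stat='COMPLETE',
                            max_wait=120, interval=10):
    """Poll a resource until its status matches expected_stat.

    resource_mgr is any OpenStack-style manager exposing .get(id)
    whose result carries a .status attribute (e.g. heat.stacks or
    nova.servers). Returns True on a match, False on timeout.
    """
    waited = 0
    while waited <= max_wait:
        if resource_mgr.get(resource_id).status == expected_stat:
            return True
        time.sleep(interval)
        waited += interval
    return False
```

Returning False instead of raising lets the caller log diagnostics (such as the full stack list above) before failing the test.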
664 | 226 | |||
665 | 227 | def _stack_resource_compute(self): | ||
666 | 228 | """Confirm that the stack has created a subsequent nova | ||
667 | 229 | compute resource, and confirm its status.""" | ||
668 | 230 | u.log.debug('Confirming heat stack resource status...') | ||
669 | 231 | |||
670 | 232 | # Confirm existence of a heat-generated nova compute resource. | ||
671 | 233 | _resource = self.heat.resources.get(STACK_NAME, RESOURCE_TYPE) | ||
672 | 234 | _server_id = _resource.physical_resource_id | ||
673 | 235 | if _server_id: | ||
674 | 236 | u.log.debug('Heat template spawned nova instance, ' | ||
675 | 237 | 'ID: {}'.format(_server_id)) | ||
676 | 238 | else: | ||
677 | 239 | msg = 'Stack failed to spawn a nova compute resource (instance).' | ||
678 | 240 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
679 | 241 | |||
680 | 242 | # Confirm nova instance reaches ACTIVE status. | ||
681 | 243 | ret = u.resource_reaches_status(self.nova.servers, _server_id, | ||
682 | 244 | expected_stat="ACTIVE", | ||
683 | 245 | msg="nova instance") | ||
684 | 246 | if not ret: | ||
685 | 247 | msg = 'Nova compute instance failed to reach expected state.' | ||
686 | 248 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
687 | 249 | |||
688 | 250 | def _stack_delete(self): | ||
689 | 251 | """Delete a heat stack, verify.""" | ||
690 | 252 | u.log.debug('Deleting heat stack...') | ||
691 | 253 | u.delete_resource(self.heat.stacks, STACK_NAME, msg="heat stack") | ||
692 | 254 | |||
693 | 255 | def _image_delete(self): | ||
694 | 256 | """Delete the glance image created for the test.""" | ||
695 | 257 | u.log.debug('Deleting glance image...') | ||
696 | 258 | image = self.nova.images.find(name=IMAGE_NAME) | ||
697 | 259 | u.delete_resource(self.nova.images, image, msg="glance image") | ||
698 | 260 | |||
699 | 261 | def _keypair_delete(self): | ||
700 | 262 | """Delete the keypair created for the test.""" | ||
701 | 263 | u.log.debug('Deleting keypair...') | ||
702 | 264 | u.delete_resource(self.nova.keypairs, KEYPAIR_NAME, msg="nova keypair") | ||
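The three teardown helpers above all delegate to `u.delete_resource`, which deletes and then waits for the resource to disappear. A hypothetical sketch of that pattern (the real charm-helpers signature and timings may differ) relies on the OpenStack clients raising once a lookup targets a deleted resource:

```python
import time


def delete_resource(resource_mgr, resource_id, msg='resource',
                    max_wait=60, interval=5):
    """Delete a resource, then poll until lookups start failing,
    which OpenStack clients signal by raising (e.g. NotFound).
    Returns True once the resource is gone, False on timeout.
    """
    resource_mgr.delete(resource_id)
    waited = 0
    while waited <= max_wait:
        try:
            resource_mgr.get(resource_id)
        except Exception:
            return True  # lookup failed: deletion completed
        time.sleep(interval)
        waited += interval
    return False
```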
703 | 265 | |||
704 | 266 | def test_100_services(self): | ||
705 | 267 | """Verify the expected services are running on the corresponding | ||
706 | 268 | service units.""" | ||
707 | 269 | service_names = { | ||
708 | 270 | self.heat_sentry: ['heat-api', | ||
709 | 271 | 'heat-api-cfn', | ||
710 | 272 | 'heat-engine'], | ||
711 | 273 | self.mysql_sentry: ['mysql'], | ||
712 | 274 | self.rabbitmq_sentry: ['rabbitmq-server'], | ||
713 | 275 | self.nova_compute_sentry: ['nova-compute', | ||
714 | 276 | 'nova-network', | ||
715 | 277 | 'nova-api'], | ||
716 | 278 | self.keystone_sentry: ['keystone'], | ||
717 | 279 | self.glance_sentry: ['glance-registry', 'glance-api'] | ||
718 | 280 | } | ||
719 | 281 | |||
720 | 282 | ret = u.validate_services_by_name(service_names) | ||
721 | 283 | if ret: | ||
722 | 284 | amulet.raise_status(amulet.FAIL, msg=ret) | ||
723 | 285 | |||
724 | 286 | def test_110_service_catalog(self): | ||
725 | 287 | """Verify that the service catalog endpoint data is valid.""" | ||
726 | 288 | u.log.debug('Checking service catalog endpoint data...') | ||
727 | 289 | endpoint_vol = {'adminURL': u.valid_url, | ||
728 | 290 | 'region': 'RegionOne', | ||
729 | 291 | 'publicURL': u.valid_url, | ||
730 | 292 | 'internalURL': u.valid_url} | ||
731 | 293 | endpoint_id = {'adminURL': u.valid_url, | ||
732 | 294 | 'region': 'RegionOne', | ||
733 | 295 | 'publicURL': u.valid_url, | ||
734 | 296 | 'internalURL': u.valid_url} | ||
735 | 297 | if self._get_openstack_release() >= self.precise_folsom: | ||
736 | 298 | endpoint_vol['id'] = u.not_null | ||
737 | 299 | endpoint_id['id'] = u.not_null | ||
738 | 300 | expected = {'compute': [endpoint_vol], 'orchestration': [endpoint_vol], | ||
739 | 301 | 'image': [endpoint_vol], 'identity': [endpoint_id]} | ||
740 | 302 | |||
741 | 303 | if self._get_openstack_release() <= self.trusty_juno: | ||
742 | 304 | # Before Kilo | ||
743 | 305 | expected['s3'] = [endpoint_vol] | ||
744 | 306 | expected['ec2'] = [endpoint_vol] | ||
745 | 307 | |||
746 | 308 | actual = self.keystone.service_catalog.get_endpoints() | ||
747 | 309 | ret = u.validate_svc_catalog_endpoint_data(expected, actual) | ||
748 | 310 | if ret: | ||
749 | 311 | amulet.raise_status(amulet.FAIL, msg=ret) | ||
750 | 312 | |||
751 | 313 | def test_120_heat_endpoint(self): | ||
752 | 314 | """Verify the heat api endpoint data.""" | ||
753 | 315 | u.log.debug('Checking api endpoint data...') | ||
754 | 316 | endpoints = self.keystone.endpoints.list() | ||
755 | 317 | |||
756 | 318 | if self._get_openstack_release() <= self.trusty_juno: | ||
757 | 319 | # Before Kilo | ||
758 | 320 | admin_port = internal_port = public_port = '3333' | ||
759 | 321 | else: | ||
760 | 322 | # Kilo and later | ||
761 | 323 | admin_port = internal_port = public_port = '8004' | ||
762 | 324 | |||
763 | 325 | expected = {'id': u.not_null, | ||
764 | 326 | 'region': 'RegionOne', | ||
765 | 327 | 'adminurl': u.valid_url, | ||
766 | 328 | 'internalurl': u.valid_url, | ||
767 | 329 | 'publicurl': u.valid_url, | ||
768 | 330 | 'service_id': u.not_null} | ||
769 | 331 | |||
770 | 332 | ret = u.validate_endpoint_data(endpoints, admin_port, internal_port, | ||
771 | 333 | public_port, expected) | ||
772 | 334 | if ret: | ||
773 | 335 | message = 'heat endpoint: {}'.format(ret) | ||
774 | 336 | amulet.raise_status(amulet.FAIL, msg=message) | ||
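The endpoint and catalog tests above validate fields against `u.valid_url`, `u.valid_ip`, and `u.not_null`. Illustrative stand-ins for the first two (the actual charm-helpers patterns may be stricter) show the kind of matching involved:

```python
import re

# Hypothetical approximations of u.valid_url / u.valid_ip.
valid_url = re.compile(r'^https?://[^\s/]+(/\S*)?$')
valid_ip = re.compile(r'^\d{1,3}(\.\d{1,3}){3}$')


def check_endpoint(ep):
    """Validate one keystone endpoint dict of the shape asserted
    in the tests above (adminURL/publicURL/internalURL keys)."""
    return all(valid_url.match(ep[k])
               for k in ('adminURL', 'publicURL', 'internalURL'))
```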
775 | 337 | |||
776 | 338 | def test_200_heat_mysql_shared_db_relation(self): | ||
777 | 339 | """Verify the heat:mysql shared-db relation data""" | ||
778 | 340 | u.log.debug('Checking heat:mysql shared-db relation data...') | ||
779 | 341 | unit = self.heat_sentry | ||
780 | 342 | relation = ['shared-db', 'mysql:shared-db'] | ||
781 | 343 | expected = { | ||
782 | 344 | 'private-address': u.valid_ip, | ||
783 | 345 | 'heat_database': 'heat', | ||
784 | 346 | 'heat_username': 'heat', | ||
785 | 347 | 'heat_hostname': u.valid_ip | ||
786 | 348 | } | ||
787 | 349 | |||
788 | 350 | ret = u.validate_relation_data(unit, relation, expected) | ||
789 | 351 | if ret: | ||
790 | 352 | message = u.relation_error('heat:mysql shared-db', ret) | ||
791 | 353 | amulet.raise_status(amulet.FAIL, msg=message) | ||
792 | 354 | |||
793 | 355 | def test_201_mysql_heat_shared_db_relation(self): | ||
794 | 356 | """Verify the mysql:heat shared-db relation data""" | ||
795 | 357 | u.log.debug('Checking mysql:heat shared-db relation data...') | ||
796 | 358 | unit = self.mysql_sentry | ||
797 | 359 | relation = ['shared-db', 'heat:shared-db'] | ||
798 | 360 | expected = { | ||
799 | 361 | 'private-address': u.valid_ip, | ||
800 | 362 | 'db_host': u.valid_ip, | ||
801 | 363 | 'heat_allowed_units': 'heat/0', | ||
802 | 364 | 'heat_password': u.not_null | ||
803 | 365 | } | ||
804 | 366 | |||
805 | 367 | ret = u.validate_relation_data(unit, relation, expected) | ||
806 | 368 | if ret: | ||
807 | 369 | message = u.relation_error('mysql:heat shared-db', ret) | ||
808 | 370 | amulet.raise_status(amulet.FAIL, msg=message) | ||
809 | 371 | |||
810 | 372 | def test_202_heat_keystone_identity_relation(self): | ||
811 | 373 | """Verify the heat:keystone identity-service relation data""" | ||
812 | 374 | u.log.debug('Checking heat:keystone identity-service relation data...') | ||
813 | 375 | unit = self.heat_sentry | ||
814 | 376 | relation = ['identity-service', 'keystone:identity-service'] | ||
815 | 377 | expected = { | ||
816 | 378 | 'heat_service': 'heat', | ||
817 | 379 | 'heat_region': 'RegionOne', | ||
818 | 380 | 'heat_public_url': u.valid_url, | ||
819 | 381 | 'heat_admin_url': u.valid_url, | ||
820 | 382 | 'heat_internal_url': u.valid_url, | ||
821 | 383 | 'heat-cfn_service': 'heat-cfn', | ||
822 | 384 | 'heat-cfn_region': 'RegionOne', | ||
823 | 385 | 'heat-cfn_public_url': u.valid_url, | ||
824 | 386 | 'heat-cfn_admin_url': u.valid_url, | ||
825 | 387 | 'heat-cfn_internal_url': u.valid_url | ||
826 | 388 | } | ||
827 | 389 | ret = u.validate_relation_data(unit, relation, expected) | ||
828 | 390 | if ret: | ||
829 | 391 | message = u.relation_error('heat:keystone identity-service', ret) | ||
830 | 392 | amulet.raise_status(amulet.FAIL, msg=message) | ||
831 | 393 | |||
832 | 394 | def test_203_keystone_heat_identity_relation(self): | ||
833 | 395 | """Verify the keystone:heat identity-service relation data""" | ||
834 | 396 | u.log.debug('Checking keystone:heat identity-service relation data...') | ||
835 | 397 | unit = self.keystone_sentry | ||
836 | 398 | relation = ['identity-service', 'heat:identity-service'] | ||
837 | 399 | expected = { | ||
838 | 400 | 'service_protocol': 'http', | ||
839 | 401 | 'service_tenant': 'services', | ||
840 | 402 | 'admin_token': 'ubuntutesting', | ||
841 | 403 | 'service_password': u.not_null, | ||
842 | 404 | 'service_port': '5000', | ||
843 | 405 | 'auth_port': '35357', | ||
844 | 406 | 'auth_protocol': 'http', | ||
845 | 407 | 'private-address': u.valid_ip, | ||
846 | 408 | 'auth_host': u.valid_ip, | ||
847 | 409 | 'service_username': 'heat-cfn_heat', | ||
848 | 410 | 'service_tenant_id': u.not_null, | ||
849 | 411 | 'service_host': u.valid_ip | ||
850 | 412 | } | ||
851 | 413 | ret = u.validate_relation_data(unit, relation, expected) | ||
852 | 414 | if ret: | ||
853 | 415 | message = u.relation_error('keystone:heat identity-service', ret) | ||
854 | 416 | amulet.raise_status(amulet.FAIL, msg=message) | ||
855 | 417 | |||
856 | 418 | def test_204_heat_rmq_amqp_relation(self): | ||
857 | 419 | """Verify the heat:rabbitmq-server amqp relation data""" | ||
858 | 420 | u.log.debug('Checking heat:rabbitmq-server amqp relation data...') | ||
859 | 421 | unit = self.heat_sentry | ||
860 | 422 | relation = ['amqp', 'rabbitmq-server:amqp'] | ||
861 | 423 | expected = { | ||
862 | 424 | 'username': u.not_null, | ||
863 | 425 | 'private-address': u.valid_ip, | ||
864 | 426 | 'vhost': 'openstack' | ||
865 | 427 | } | ||
866 | 428 | |||
867 | 429 | ret = u.validate_relation_data(unit, relation, expected) | ||
868 | 430 | if ret: | ||
869 | 431 | message = u.relation_error('heat:rabbitmq-server amqp', ret) | ||
870 | 432 | amulet.raise_status(amulet.FAIL, msg=message) | ||
871 | 433 | |||
872 | 434 | def test_205_rmq_heat_amqp_relation(self): | ||
873 | 435 | """Verify the rabbitmq-server:heat amqp relation data""" | ||
874 | 436 | u.log.debug('Checking rabbitmq-server:heat amqp relation data...') | ||
875 | 437 | unit = self.rabbitmq_sentry | ||
876 | 438 | relation = ['amqp', 'heat:amqp'] | ||
877 | 439 | expected = { | ||
878 | 440 | 'private-address': u.valid_ip, | ||
879 | 441 | 'password': u.not_null, | ||
880 | 442 | 'hostname': u.valid_ip, | ||
881 | 443 | } | ||
882 | 444 | |||
883 | 445 | ret = u.validate_relation_data(unit, relation, expected) | ||
884 | 446 | if ret: | ||
885 | 447 | message = u.relation_error('rabbitmq-server:heat amqp', ret) | ||
886 | 448 | amulet.raise_status(amulet.FAIL, msg=message) | ||
887 | 449 | |||
888 | 450 | def test_300_heat_config(self): | ||
889 | 451 | """Verify the data in the heat config file.""" | ||
890 | 452 | u.log.debug('Checking heat config file data...') | ||
891 | 453 | unit = self.heat_sentry | ||
892 | 454 | conf = '/etc/heat/heat.conf' | ||
893 | 455 | |||
894 | 456 | ks_rel = self.keystone_sentry.relation('identity-service', | ||
895 | 457 | 'heat:identity-service') | ||
896 | 458 | rmq_rel = self.rabbitmq_sentry.relation('amqp', | ||
897 | 459 | 'heat:amqp') | ||
898 | 460 | mysql_rel = self.mysql_sentry.relation('shared-db', | ||
899 | 461 | 'heat:shared-db') | ||
900 | 462 | |||
901 | 463 | u.log.debug('keystone:heat relation: {}'.format(ks_rel)) | ||
902 | 464 | u.log.debug('rabbitmq:heat relation: {}'.format(rmq_rel)) | ||
903 | 465 | u.log.debug('mysql:heat relation: {}'.format(mysql_rel)) | ||
904 | 466 | |||
905 | 467 | db_uri = "mysql://{}:{}@{}/{}".format('heat', | ||
906 | 468 | mysql_rel['heat_password'], | ||
907 | 469 | mysql_rel['db_host'], | ||
908 | 470 | 'heat') | ||
909 | 471 | |||
910 | 472 | auth_uri = '{}://{}:{}/v2.0'.format(ks_rel['service_protocol'], | ||
911 | 473 | ks_rel['service_host'], | ||
912 | 474 | ks_rel['service_port']) | ||
913 | 475 | |||
914 | 476 | expected = { | ||
915 | 477 | 'DEFAULT': { | ||
916 | 478 | 'use_syslog': 'False', | ||
917 | 479 | 'debug': 'False', | ||
918 | 480 | 'verbose': 'False', | ||
919 | 481 | 'log_dir': '/var/log/heat', | ||
920 | 482 | 'instance_driver': 'heat.engine.nova', | ||
921 | 483 | 'plugin_dirs': '/usr/lib64/heat,/usr/lib/heat', | ||
922 | 484 | 'environment_dir': '/etc/heat/environment.d', | ||
923 | 485 | 'deferred_auth_method': 'password', | ||
924 | 486 | 'host': 'heat', | ||
925 | 487 | 'rabbit_userid': 'heat', | ||
926 | 488 | 'rabbit_virtual_host': 'openstack', | ||
927 | 489 | 'rabbit_password': rmq_rel['password'], | ||
928 | 490 | 'rabbit_host': rmq_rel['hostname'] | ||
929 | 491 | }, | ||
930 | 492 | 'keystone_authtoken': { | ||
931 | 493 | 'auth_uri': auth_uri, | ||
932 | 494 | 'auth_host': ks_rel['service_host'], | ||
933 | 495 | 'auth_port': ks_rel['auth_port'], | ||
934 | 496 | 'auth_protocol': ks_rel['auth_protocol'], | ||
935 | 497 | 'admin_tenant_name': 'services', | ||
936 | 498 | 'admin_user': 'heat-cfn_heat', | ||
937 | 499 | 'admin_password': ks_rel['service_password'], | ||
938 | 500 | 'signing_dir': '/var/cache/heat' | ||
939 | 501 | }, | ||
940 | 502 | 'database': { | ||
941 | 503 | 'connection': db_uri | ||
942 | 504 | }, | ||
943 | 505 | 'heat_api': { | ||
944 | 506 | 'bind_port': '7994' | ||
945 | 507 | }, | ||
946 | 508 | 'heat_api_cfn': { | ||
947 | 509 | 'bind_port': '7990' | ||
948 | 510 | }, | ||
949 | 511 | 'paste_deploy': { | ||
950 | 512 | 'api_paste_config': '/etc/heat/api-paste.ini' | ||
951 | 513 | }, | ||
952 | 514 | } | ||
953 | 515 | |||
954 | 516 | for section, pairs in expected.iteritems(): | ||
955 | 517 | ret = u.validate_config_data(unit, conf, section, pairs) | ||
956 | 518 | if ret: | ||
957 | 519 | message = "heat config error: {}".format(ret) | ||
958 | 520 | amulet.raise_status(amulet.FAIL, msg=message) | ||
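The loop above hands each expected section to `u.validate_config_data`, which returns an error string on mismatch and a falsy value on success. A self-contained sketch with the same contract (a stand-in written against the stdlib parser, not the charm-helpers implementation, which reads the file from the remote unit) might be:

```python
import configparser


def validate_config_data(file_contents, section, expected):
    """Return an error string if any expected option is missing or
    mismatched in the given INI section, else None."""
    config = configparser.ConfigParser()
    config.read_string(file_contents)
    # 'DEFAULT' is special-cased by configparser and never appears
    # in sections(), so only check named sections for existence.
    if section != 'DEFAULT' and not config.has_section(section):
        return 'missing section [{}]'.format(section)
    for key, value in expected.items():
        if not config.has_option(section, key):
            return '[{}] missing option: {}'.format(section, key)
        if config.get(section, key) != value:
            return '[{}] {}={}, expected {}'.format(
                section, key, config.get(section, key), value)
    return None
```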
959 | 521 | |||
960 | 522 | def test_400_heat_resource_types_list(self): | ||
961 | 523 | """Check default heat resource list behavior, also confirm | ||
962 | 524 | heat functionality.""" | ||
963 | 525 | u.log.debug('Checking default heat resource list...') | ||
964 | 526 | try: | ||
965 | 527 | types = list(self.heat.resource_types.list()) | ||
966 | 528 | if type(types) is list: | ||
967 | 529 | u.log.debug('Resource type list check is ok.') | ||
968 | 530 | else: | ||
969 | 531 | msg = 'Resource type list is not a list!' | ||
970 | 532 | u.log.error('{}'.format(msg)) | ||
971 | 533 | raise Exception(msg) | ||
972 | 534 | if len(types) > 0: | ||
973 | 535 | u.log.debug('Resource type list is populated ' | ||
974 | 536 | '({}, ok).'.format(len(types))) | ||
975 | 537 | else: | ||
976 | 538 | msg = 'Resource type list length is zero!' | ||
977 | 539 | u.log.error(msg) | ||
978 | 540 | raise Exception(msg) | ||
979 | 541 | except Exception: | ||
980 | 542 | msg = 'Resource type list failed.' | ||
981 | 543 | u.log.error(msg) | ||
982 | 544 | raise | ||
983 | 545 | |||
984 | 546 | def test_402_heat_stack_list(self): | ||
985 | 547 | """Check default heat stack list behavior, also confirm | ||
986 | 548 | heat functionality.""" | ||
987 | 549 | u.log.debug('Checking default heat stack list...') | ||
988 | 550 | try: | ||
989 | 551 | stacks = list(self.heat.stacks.list()) | ||
990 | 552 | if type(stacks) is list: | ||
991 | 553 | u.log.debug("Stack list check is ok.") | ||
992 | 554 | else: | ||
993 | 555 | msg = 'Stack list returned something other than a list.' | ||
994 | 556 | u.log.error(msg) | ||
995 | 557 | raise Exception(msg) | ||
996 | 558 | except Exception: | ||
997 | 559 | msg = 'Heat stack list failed.' | ||
998 | 560 | u.log.error(msg) | ||
999 | 561 | raise | ||
1000 | 562 | |||
1001 | 563 | def test_410_heat_stack_create_delete(self): | ||
1002 | 564 | """Create a heat stack from template, confirm that a corresponding | ||
1003 | 565 | nova compute resource is spawned, delete stack.""" | ||
1004 | 566 | self._image_create() | ||
1005 | 567 | self._keypair_create() | ||
1006 | 568 | self._stack_create() | ||
1007 | 569 | self._stack_resource_compute() | ||
1008 | 570 | self._stack_delete() | ||
1009 | 571 | self._image_delete() | ||
1010 | 572 | self._keypair_delete() | ||
1011 | 573 | |||
1012 | 574 | def test_900_heat_restart_on_config_change(self): | ||
1013 | 575 | """Verify that the specified services are restarted when the config | ||
1014 | 576 | is changed.""" | ||
1015 | 577 | sentry = self.heat_sentry | ||
1016 | 578 | juju_service = 'heat' | ||
1017 | 579 | |||
1018 | 580 | # Expected default and alternate values | ||
1019 | 581 | set_default = {'use-syslog': 'False'} | ||
1020 | 582 | set_alternate = {'use-syslog': 'True'} | ||
1021 | 583 | |||
1022 | 584 | # Config file affected by juju set config change | ||
1023 | 585 | conf_file = '/etc/heat/heat.conf' | ||
1024 | 586 | |||
1025 | 587 | # Services which are expected to restart upon config change | ||
1026 | 588 | services = ['heat-api', | ||
1027 | 589 | 'heat-api-cfn', | ||
1028 | 590 | 'heat-engine'] | ||
1029 | 591 | |||
1030 | 592 | # Make config change, check for service restarts | ||
1031 | 593 | u.log.debug('Making config change on {}...'.format(juju_service)) | ||
1032 | 594 | self.d.configure(juju_service, set_alternate) | ||
1033 | 595 | |||
1034 | 596 | sleep_time = 30 | ||
1035 | 597 | for s in services: | ||
1036 | 598 | u.log.debug("Checking that service restarted: {}".format(s)) | ||
1037 | 599 | if not u.service_restarted(sentry, s, | ||
1038 | 600 | conf_file, sleep_time=sleep_time): | ||
1039 | 601 | self.d.configure(juju_service, set_default) | ||
1040 | 602 | msg = "service {} didn't restart after config change".format(s) | ||
1041 | 603 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1042 | 604 | sleep_time = 0 | ||
1043 | 605 | |||
1044 | 606 | self.d.configure(juju_service, set_default) | ||
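The restart check in `u.service_restarted` above boils down to comparing when the service's process started against when the config file changed. A minimal sketch of that comparison (an assumption about the mechanism, not the charm-helpers code, which gathers both timestamps from the sentry unit):

```python
import os


def restarted_since_config_change(proc_start_time, conf_file):
    """A service counts as restarted on a config change if its
    process start time is no earlier than the config file's mtime."""
    return proc_start_time >= os.path.getmtime(conf_file)
```

Resetting `sleep_time` to 0 after the first service in the loop above is a small optimization: the initial 30-second settle covers all three daemons, since `service heat-... restart` bounces them together.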
1045 | 0 | 607 | ||
1046 | === added directory 'tests/charmhelpers' | |||
1047 | === added file 'tests/charmhelpers/__init__.py' | |||
1048 | --- tests/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000 | |||
1049 | +++ tests/charmhelpers/__init__.py 2015-06-11 15:38:49 +0000 | |||
1050 | @@ -0,0 +1,38 @@ | |||
1051 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
1052 | 2 | # | ||
1053 | 3 | # This file is part of charm-helpers. | ||
1054 | 4 | # | ||
1055 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
1056 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
1057 | 7 | # published by the Free Software Foundation. | ||
1058 | 8 | # | ||
1059 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
1060 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
1061 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
1062 | 12 | # GNU Lesser General Public License for more details. | ||
1063 | 13 | # | ||
1064 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
1065 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
1066 | 16 | |||
1067 | 17 | # Bootstrap charm-helpers, installing its dependencies if necessary using | ||
1068 | 18 | # only standard libraries. | ||
1069 | 19 | import subprocess | ||
1070 | 20 | import sys | ||
1071 | 21 | |||
1072 | 22 | try: | ||
1073 | 23 | import six # flake8: noqa | ||
1074 | 24 | except ImportError: | ||
1075 | 25 | if sys.version_info.major == 2: | ||
1076 | 26 | subprocess.check_call(['apt-get', 'install', '-y', 'python-six']) | ||
1077 | 27 | else: | ||
1078 | 28 | subprocess.check_call(['apt-get', 'install', '-y', 'python3-six']) | ||
1079 | 29 | import six # flake8: noqa | ||
1080 | 30 | |||
1081 | 31 | try: | ||
1082 | 32 | import yaml # flake8: noqa | ||
1083 | 33 | except ImportError: | ||
1084 | 34 | if sys.version_info.major == 2: | ||
1085 | 35 | subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml']) | ||
1086 | 36 | else: | ||
1087 | 37 | subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml']) | ||
1088 | 38 | import yaml # flake8: noqa | ||
1089 | 0 | 39 | ||
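The bootstrap pattern in `tests/charmhelpers/__init__.py` above (import, fall back to `apt-get install`, re-import) generalizes naturally. A hedged sketch with a hypothetical helper name; the package names are the caller's assumption and the apt call only fires when the import genuinely fails:

```python
import importlib
import subprocess
import sys


def ensure_module(module, py2_pkg, py3_pkg):
    """Import a module, apt-installing its distro package first if
    the import fails, then retrying the import."""
    try:
        return importlib.import_module(module)
    except ImportError:
        pkg = py2_pkg if sys.version_info[0] == 2 else py3_pkg
        subprocess.check_call(['apt-get', 'install', '-y', pkg])
        return importlib.import_module(module)
```

Keeping this file dependent only on the standard library is the point: it must run before any third-party packages are guaranteed to exist on the test runner.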
1090 | === added directory 'tests/charmhelpers/contrib' | |||
1091 | === added file 'tests/charmhelpers/contrib/__init__.py' | |||
1092 | --- tests/charmhelpers/contrib/__init__.py 1970-01-01 00:00:00 +0000 | |||
1093 | +++ tests/charmhelpers/contrib/__init__.py 2015-06-11 15:38:49 +0000 | |||
1094 | @@ -0,0 +1,15 @@ | |||
1095 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
1096 | 2 | # | ||
1097 | 3 | # This file is part of charm-helpers. | ||
1098 | 4 | # | ||
1099 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
1100 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
1101 | 7 | # published by the Free Software Foundation. | ||
1102 | 8 | # | ||
1103 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
1104 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
1105 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
1106 | 12 | # GNU Lesser General Public License for more details. | ||
1107 | 13 | # | ||
1108 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
1109 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
1110 | 0 | 16 | ||
1111 | === added directory 'tests/charmhelpers/contrib/amulet' | |||
1112 | === added file 'tests/charmhelpers/contrib/amulet/__init__.py' | |||
1113 | --- tests/charmhelpers/contrib/amulet/__init__.py 1970-01-01 00:00:00 +0000 | |||
1114 | +++ tests/charmhelpers/contrib/amulet/__init__.py 2015-06-11 15:38:49 +0000 | |||
1115 | @@ -0,0 +1,15 @@ | |||
1116 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
1117 | 2 | # | ||
1118 | 3 | # This file is part of charm-helpers. | ||
1119 | 4 | # | ||
1120 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
1121 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
1122 | 7 | # published by the Free Software Foundation. | ||
1123 | 8 | # | ||
1124 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
1125 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
1126 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
1127 | 12 | # GNU Lesser General Public License for more details. | ||
1128 | 13 | # | ||
1129 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
1130 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
1131 | 0 | 16 | ||
1132 | === added file 'tests/charmhelpers/contrib/amulet/deployment.py' | |||
1133 | --- tests/charmhelpers/contrib/amulet/deployment.py 1970-01-01 00:00:00 +0000 | |||
1134 | +++ tests/charmhelpers/contrib/amulet/deployment.py 2015-06-11 15:38:49 +0000 | |||
1135 | @@ -0,0 +1,93 @@ | |||
1136 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
1137 | 2 | # | ||
1138 | 3 | # This file is part of charm-helpers. | ||
1139 | 4 | # | ||
1140 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
1141 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
1142 | 7 | # published by the Free Software Foundation. | ||
1143 | 8 | # | ||
1144 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
1145 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
1146 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
1147 | 12 | # GNU Lesser General Public License for more details. | ||
1148 | 13 | # | ||
1149 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
1150 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
1151 | 16 | |||
1152 | 17 | import amulet | ||
1153 | 18 | import os | ||
1154 | 19 | import six | ||
1155 | 20 | |||
1156 | 21 | |||
1157 | 22 | class AmuletDeployment(object): | ||
1158 | 23 | """Amulet deployment. | ||
1159 | 24 | |||
1160 | 25 | This class provides generic Amulet deployment and test runner | ||
1161 | 26 | methods. | ||
1162 | 27 | """ | ||
1163 | 28 | |||
1164 | 29 | def __init__(self, series=None): | ||
1165 | 30 | """Initialize the deployment environment.""" | ||
1166 | 31 | self.series = None | ||
1167 | 32 | |||
1168 | 33 | if series: | ||
1169 | 34 | self.series = series | ||
1170 | 35 | self.d = amulet.Deployment(series=self.series) | ||
1171 | 36 | else: | ||
1172 | 37 | self.d = amulet.Deployment() | ||
1173 | 38 | |||
1174 | 39 | def _add_services(self, this_service, other_services): | ||
1175 | 40 | """Add services. | ||
1176 | 41 | |||
1177 | 42 | Add services to the deployment where this_service is the local charm | ||
1178 | 43 | that we're testing and other_services are the other services that | ||
1179 | 44 | are being used in the local amulet tests. | ||
1180 | 45 | """ | ||
1181 | 46 | if this_service['name'] != os.path.basename(os.getcwd()): | ||
1182 | 47 | s = this_service['name'] | ||
1183 | 48 | msg = "The charm's root directory name needs to be {}".format(s) | ||
1184 | 49 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1185 | 50 | |||
1186 | 51 | if 'units' not in this_service: | ||
1187 | 52 | this_service['units'] = 1 | ||
1188 | 53 | |||
1189 | 54 | self.d.add(this_service['name'], units=this_service['units']) | ||
1190 | 55 | |||
1191 | 56 | for svc in other_services: | ||
1192 | 57 | if 'location' in svc: | ||
1193 | 58 | branch_location = svc['location'] | ||
1194 | 59 | elif self.series: | ||
1196 | 60 | branch_location = 'cs:{}/{}'.format(self.series, svc['name']) | ||
1196 | 61 | else: | ||
1197 | 62 | branch_location = None | ||
1198 | 63 | |||
1199 | 64 | if 'units' not in svc: | ||
1200 | 65 | svc['units'] = 1 | ||
1201 | 66 | |||
1202 | 67 | self.d.add(svc['name'], charm=branch_location, units=svc['units']) | ||
1203 | 68 | |||
1204 | 69 | def _add_relations(self, relations): | ||
1205 | 70 | """Add all of the relations for the services.""" | ||
1206 | 71 | for k, v in six.iteritems(relations): | ||
1207 | 72 | self.d.relate(k, v) | ||
1208 | 73 | |||
1209 | 74 | def _configure_services(self, configs): | ||
1210 | 75 | """Configure all of the services.""" | ||
1211 | 76 | for service, config in six.iteritems(configs): | ||
1212 | 77 | self.d.configure(service, config) | ||
1213 | 78 | |||
1214 | 79 | def _deploy(self): | ||
1215 | 80 | """Deploy environment and wait for all hooks to finish executing.""" | ||
1216 | 81 | try: | ||
1217 | 82 | self.d.setup(timeout=900) | ||
1218 | 83 | self.d.sentry.wait(timeout=900) | ||
1219 | 84 | except amulet.helpers.TimeoutError: | ||
1220 | 85 | amulet.raise_status(amulet.FAIL, msg="Deployment timed out") | ||
1221 | 86 | except Exception: | ||
1222 | 87 | raise | ||
1223 | 88 | |||
1224 | 89 | def run_tests(self): | ||
1225 | 90 | """Run all of the methods that are prefixed with 'test_'.""" | ||
1226 | 91 | for test in dir(self): | ||
1227 | 92 | if test.startswith('test_'): | ||
1228 | 93 | getattr(self, test)() | ||
1229 | 0 | 94 | ||
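The `run_tests` discovery loop at the end of `AmuletDeployment` can be exercised in isolation; this minimal re-creation (illustrative class names) shows the mechanism:

```python
class MiniRunner(object):
    """Invoke every method whose name starts with 'test_', in
    dir() order, mirroring AmuletDeployment.run_tests."""

    def run_tests(self):
        ran = []
        for name in dir(self):
            if name.startswith('test_'):
                getattr(self, name)()
                ran.append(name)
        return ran


class Demo(MiniRunner):
    def test_100_first(self):
        self.first = True

    def test_200_second(self):
        self.second = True

    def helper(self):  # not discovered: lacks the 'test_' prefix
        raise AssertionError('never called')
```

Because `dir()` returns names sorted lexicographically, the numeric prefixes used in `basic_deployment.py` (`test_100_...` through `test_900_...`) double as the execution order.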
1230 | === added file 'tests/charmhelpers/contrib/amulet/utils.py' | |||
1231 | --- tests/charmhelpers/contrib/amulet/utils.py 1970-01-01 00:00:00 +0000 | |||
1232 | +++ tests/charmhelpers/contrib/amulet/utils.py 2015-06-11 15:38:49 +0000 | |||
1233 | @@ -0,0 +1,408 @@ | |||
1234 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
1235 | 2 | # | ||
1236 | 3 | # This file is part of charm-helpers. | ||
1237 | 4 | # | ||
1238 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
1239 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
1240 | 7 | # published by the Free Software Foundation. | ||
1241 | 8 | # | ||
1242 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
1243 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
1244 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
1245 | 12 | # GNU Lesser General Public License for more details. | ||
1246 | 13 | # | ||
1247 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
1248 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
1249 | 16 | |||
1250 | 17 | import ConfigParser | ||
1251 | 18 | import distro_info | ||
1252 | 19 | import io | ||
1253 | 20 | import logging | ||
1254 | 21 | import os | ||
1255 | 22 | import re | ||
1256 | 23 | import six | ||
1257 | 24 | import sys | ||
1258 | 25 | import time | ||
1259 | 26 | import urlparse | ||
1260 | 27 | |||
1261 | 28 | |||
1262 | 29 | class AmuletUtils(object): | ||
1263 | 30 | """Amulet utilities. | ||
1264 | 31 | |||
1265 | 32 | This class provides common utility functions that are used by Amulet | ||
1266 | 33 | tests. | ||
1267 | 34 | """ | ||
1268 | 35 | |||
1269 | 36 | def __init__(self, log_level=logging.ERROR): | ||
1270 | 37 | self.log = self.get_logger(level=log_level) | ||
1271 | 38 | self.ubuntu_releases = self.get_ubuntu_releases() | ||
1272 | 39 | |||
1273 | 40 | def get_logger(self, name="amulet-logger", level=logging.DEBUG): | ||
1274 | 41 | """Get a logger object that will log to stdout.""" | ||
1275 | 42 | log = logging | ||
1276 | 43 | logger = log.getLogger(name) | ||
1277 | 44 | fmt = log.Formatter("%(asctime)s %(funcName)s " | ||
1278 | 45 | "%(levelname)s: %(message)s") | ||
1279 | 46 | |||
1280 | 47 | handler = log.StreamHandler(stream=sys.stdout) | ||
1281 | 48 | handler.setLevel(level) | ||
1282 | 49 | handler.setFormatter(fmt) | ||
1283 | 50 | |||
1284 | 51 | logger.addHandler(handler) | ||
1285 | 52 | logger.setLevel(level) | ||
1286 | 53 | |||
1287 | 54 | return logger | ||
1288 | 55 | |||
1289 | 56 | def valid_ip(self, ip): | ||
1290 | 57 | if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip): | ||
1291 | 58 | return True | ||
1292 | 59 | else: | ||
1293 | 60 | return False | ||
1294 | 61 | |||
1295 | 62 | def valid_url(self, url): | ||
1296 | 63 | p = re.compile( | ||
1297 | 64 | r'^(?:http|ftp)s?://' | ||
1298 | 65 | r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # noqa | ||
1299 | 66 | r'localhost|' | ||
1300 | 67 | r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' | ||
1301 | 68 | r'(?::\d+)?' | ||
1302 | 69 | r'(?:/?|[/?]\S+)$', | ||
1303 | 70 | re.IGNORECASE) | ||
1304 | 71 | if p.match(url): | ||
1305 | 72 | return True | ||
1306 | 73 | else: | ||
1307 | 74 | return False | ||
1308 | 75 | |||
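A quick standalone sketch of the `valid_ip` check above (names reused for illustration; note the regex only validates the dotted-quad shape, not octet ranges):

```python
import re

def valid_ip(ip):
    # Shape check only: four groups of 1-3 digits. "999.1.1.1" also passes.
    return bool(re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip))

print(valid_ip("192.168.0.1"))  # True
print(valid_ip("not-an-ip"))    # False
```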
1309 | 76 | def get_ubuntu_release_from_sentry(self, sentry_unit): | ||
1310 | 77 | """Get Ubuntu release codename from sentry unit. | ||
1311 | 78 | |||
1312 | 79 | :param sentry_unit: amulet sentry/service unit pointer | ||
1313 | 80 | :returns: list of strings - release codename, failure message | ||
1314 | 81 | """ | ||
1315 | 82 | msg = None | ||
1316 | 83 | cmd = 'lsb_release -cs' | ||
1317 | 84 | release, code = sentry_unit.run(cmd) | ||
1318 | 85 | if code == 0: | ||
1319 | 86 | self.log.debug('{} lsb_release: {}'.format( | ||
1320 | 87 | sentry_unit.info['unit_name'], release)) | ||
1321 | 88 | else: | ||
1322 | 89 | msg = ('{} `{}` returned {} ' | ||
1323 | 90 | '{}'.format(sentry_unit.info['unit_name'], | ||
1324 | 91 | cmd, release, code)) | ||
1325 | 92 | if release not in self.ubuntu_releases: | ||
1326 | 93 | msg = ("Release ({}) not found in Ubuntu releases " | ||
1327 | 94 | "({})".format(release, self.ubuntu_releases)) | ||
1328 | 95 | return release, msg | ||
1329 | 96 | |||
1330 | 97 | def validate_services(self, commands): | ||
1331 | 98 | """Validate that lists of commands succeed on service units. Can be | ||
1332 | 99 | used to verify system services are running on the corresponding | ||
1333 | 100 | service units. | ||
1334 | 101 | |||
1335 | 102 | :param commands: dict with sentry keys and arbitrary command list values | ||
1336 | 103 | :returns: None if successful, Failure string message otherwise | ||
1337 | 104 | """ | ||
1338 | 105 | self.log.debug('Checking status of system services...') | ||
1339 | 106 | |||
1340 | 107 | # /!\ DEPRECATION WARNING (beisner): | ||
1341 | 108 | # New and existing tests should be rewritten to use | ||
1342 | 109 | # validate_services_by_name() as it is aware of init systems. | ||
1343 | 110 | self.log.warn('/!\\ DEPRECATION WARNING: use ' | ||
1344 | 111 | 'validate_services_by_name instead of validate_services ' | ||
1345 | 112 | 'due to init system differences.') | ||
1346 | 113 | |||
1347 | 114 | for k, v in six.iteritems(commands): | ||
1348 | 115 | for cmd in v: | ||
1349 | 116 | output, code = k.run(cmd) | ||
1350 | 117 | self.log.debug('{} `{}` returned ' | ||
1351 | 118 | '{}'.format(k.info['unit_name'], | ||
1352 | 119 | cmd, code)) | ||
1353 | 120 | if code != 0: | ||
1354 | 121 | return "command `{}` returned {}".format(cmd, str(code)) | ||
1355 | 122 | return None | ||
1356 | 123 | |||
1357 | 124 | def validate_services_by_name(self, sentry_services): | ||
1358 | 125 | """Validate system service status by service name, automatically | ||
1359 | 126 | detecting init system based on Ubuntu release codename. | ||
1360 | 127 | |||
1361 | 128 | :param sentry_services: dict with sentry keys and svc list values | ||
1362 | 129 | :returns: None if successful, Failure string message otherwise | ||
1363 | 130 | """ | ||
1364 | 131 | self.log.debug('Checking status of system services...') | ||
1365 | 132 | |||
1366 | 133 | # Point at which systemd became a thing | ||
1367 | 134 | systemd_switch = self.ubuntu_releases.index('vivid') | ||
1368 | 135 | |||
1369 | 136 | for sentry_unit, services_list in six.iteritems(sentry_services): | ||
1370 | 137 | # Get lsb_release codename from unit | ||
1371 | 138 | release, ret = self.get_ubuntu_release_from_sentry(sentry_unit) | ||
1372 | 139 | if ret: | ||
1373 | 140 | return ret | ||
1374 | 141 | |||
1375 | 142 | for service_name in services_list: | ||
1376 | 143 | if (self.ubuntu_releases.index(release) >= systemd_switch or | ||
1377 | 144 | service_name == "rabbitmq-server"): | ||
1378 | 145 | # init is systemd (rabbitmq-server also uses `service`) | ||
1379 | 146 | cmd = 'sudo service {} status'.format(service_name) | ||
1380 | 147 | elif self.ubuntu_releases.index(release) < systemd_switch: | ||
1381 | 148 | # init is upstart | ||
1382 | 149 | cmd = 'sudo status {}'.format(service_name) | ||
1383 | 150 | |||
1384 | 151 | output, code = sentry_unit.run(cmd) | ||
1385 | 152 | self.log.debug('{} `{}` returned ' | ||
1386 | 153 | '{}'.format(sentry_unit.info['unit_name'], | ||
1387 | 154 | cmd, code)) | ||
1388 | 155 | if code != 0: | ||
1389 | 156 | return "command `{}` returned {}".format(cmd, str(code)) | ||
1390 | 157 | return None | ||
1391 | 158 | |||
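The init-system selection in `validate_services_by_name` can be sketched on its own (the release list here is a trimmed stand-in for what `distro_info` returns):

```python
ubuntu_releases = ['precise', 'trusty', 'utopic', 'vivid', 'wily']
systemd_switch = ubuntu_releases.index('vivid')  # systemd from vivid onward

def status_cmd(release, service_name):
    # rabbitmq-server is special-cased because plain `service ... status`
    # works for it on every release; otherwise upstart vs systemd is
    # decided by where the release falls relative to vivid.
    if (ubuntu_releases.index(release) >= systemd_switch or
            service_name == 'rabbitmq-server'):
        return 'sudo service {} status'.format(service_name)
    return 'sudo status {}'.format(service_name)

print(status_cmd('trusty', 'heat-api'))  # sudo status heat-api
print(status_cmd('vivid', 'heat-api'))   # sudo service heat-api status
```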
1392 | 159 | def _get_config(self, unit, filename): | ||
1393 | 160 | """Get a ConfigParser object for parsing a unit's config file.""" | ||
1394 | 161 | file_contents = unit.file_contents(filename) | ||
1395 | 162 | |||
1396 | 163 | # NOTE(beisner): by default, ConfigParser does not handle options | ||
1397 | 164 | # with no value, such as the flags used in the mysql my.cnf file. | ||
1398 | 165 | # https://bugs.python.org/issue7005 | ||
1399 | 166 | config = ConfigParser.ConfigParser(allow_no_value=True) | ||
1400 | 167 | config.readfp(io.StringIO(file_contents)) | ||
1401 | 168 | return config | ||
1402 | 169 | |||
1403 | 170 | def validate_config_data(self, sentry_unit, config_file, section, | ||
1404 | 171 | expected): | ||
1405 | 172 | """Validate config file data. | ||
1406 | 173 | |||
1407 | 174 | Verify that the specified section of the config file contains | ||
1408 | 175 | the expected option key:value pairs. | ||
1409 | 176 | """ | ||
1410 | 177 | self.log.debug('Validating config file data ({} in {} on {})' | ||
1411 | 178 | '...'.format(section, config_file, | ||
1412 | 179 | sentry_unit.info['unit_name'])) | ||
1413 | 180 | config = self._get_config(sentry_unit, config_file) | ||
1414 | 181 | |||
1415 | 182 | if section != 'DEFAULT' and not config.has_section(section): | ||
1416 | 183 | return "section [{}] does not exist".format(section) | ||
1417 | 184 | |||
1418 | 185 | for k in expected.keys(): | ||
1419 | 186 | if not config.has_option(section, k): | ||
1420 | 187 | return "section [{}] is missing option {}".format(section, k) | ||
1421 | 188 | if config.get(section, k) != expected[k]: | ||
1422 | 189 | return "section [{}] {}:{} != expected {}:{}".format( | ||
1423 | 190 | section, k, config.get(section, k), k, expected[k]) | ||
1424 | 191 | return None | ||
1425 | 192 | |||
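The `_get_config`/`validate_config_data` pair above boils down to parsing the unit's file contents and comparing options. A minimal Python 3 sketch (`configparser` in place of the Python 2 `ConfigParser` module, with the same `allow_no_value` workaround for valueless my.cnf-style flags):

```python
import configparser
import io

file_contents = u"[DEFAULT]\nverbose = True\nskip-name-resolve\n"

# allow_no_value lets options without values (e.g. mysql my.cnf flags) parse.
config = configparser.ConfigParser(allow_no_value=True)
config.read_file(io.StringIO(file_contents))

expected = {'verbose': 'True'}
for k, v in expected.items():
    assert config.has_option('DEFAULT', k)
    assert config.get('DEFAULT', k) == v
print('config validated')
```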
1426 | 193 | def _validate_dict_data(self, expected, actual): | ||
1427 | 194 | """Validate dictionary data. | ||
1428 | 195 | |||
1429 | 196 | Compare expected dictionary data vs actual dictionary data. | ||
1430 | 197 | The values in the 'expected' dictionary can be strings, bools, ints, | ||
1431 | 198 | longs, or can be a function that evaluates a variable and returns a | ||
1432 | 199 | bool. | ||
1433 | 200 | """ | ||
1434 | 201 | self.log.debug('actual: {}'.format(repr(actual))) | ||
1435 | 202 | self.log.debug('expected: {}'.format(repr(expected))) | ||
1436 | 203 | |||
1437 | 204 | for k, v in six.iteritems(expected): | ||
1438 | 205 | if k in actual: | ||
1439 | 206 | if (isinstance(v, six.string_types) or | ||
1440 | 207 | isinstance(v, bool) or | ||
1441 | 208 | isinstance(v, six.integer_types)): | ||
1442 | 209 | if v != actual[k]: | ||
1443 | 210 | return "{}:{}".format(k, actual[k]) | ||
1444 | 211 | elif not v(actual[k]): | ||
1445 | 212 | return "{}:{}".format(k, actual[k]) | ||
1446 | 213 | else: | ||
1447 | 214 | return "key '{}' does not exist".format(k) | ||
1448 | 215 | return None | ||
1449 | 216 | |||
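`_validate_dict_data` accepts either literal expected values or predicate callables (such as `u.not_null`); a Python 3 rendition, with the six-based type checks collapsed to built-ins:

```python
def validate_dict_data(expected, actual):
    # Literal values are compared directly; callables are treated as
    # predicates and must return a truthy result for the actual value.
    for k, v in expected.items():
        if k not in actual:
            return "key '{}' does not exist".format(k)
        if isinstance(v, (str, bool, int)):
            if v != actual[k]:
                return "{}:{}".format(k, actual[k])
        elif not v(actual[k]):
            return "{}:{}".format(k, actual[k])
    return None

actual = {'private-address': '10.5.0.10', 'port': '8004'}
expected = {'private-address': lambda s: s is not None, 'port': '8004'}
print(validate_dict_data(expected, actual))  # None, i.e. everything matched
```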
1450 | 217 | def validate_relation_data(self, sentry_unit, relation, expected): | ||
1451 | 218 | """Validate actual relation data based on expected relation data.""" | ||
1452 | 219 | actual = sentry_unit.relation(relation[0], relation[1]) | ||
1453 | 220 | return self._validate_dict_data(expected, actual) | ||
1454 | 221 | |||
1455 | 222 | def _validate_list_data(self, expected, actual): | ||
1456 | 223 | """Compare expected list vs actual list data.""" | ||
1457 | 224 | for e in expected: | ||
1458 | 225 | if e not in actual: | ||
1459 | 226 | return "expected item {} not found in actual list".format(e) | ||
1460 | 227 | return None | ||
1461 | 228 | |||
1462 | 229 | def not_null(self, string): | ||
1463 | 230 | if string is not None: | ||
1464 | 231 | return True | ||
1465 | 232 | else: | ||
1466 | 233 | return False | ||
1467 | 234 | |||
1468 | 235 | def _get_file_mtime(self, sentry_unit, filename): | ||
1469 | 236 | """Get last modification time of file.""" | ||
1470 | 237 | return sentry_unit.file_stat(filename)['mtime'] | ||
1471 | 238 | |||
1472 | 239 | def _get_dir_mtime(self, sentry_unit, directory): | ||
1473 | 240 | """Get last modification time of directory.""" | ||
1474 | 241 | return sentry_unit.directory_stat(directory)['mtime'] | ||
1475 | 242 | |||
1476 | 243 | def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False): | ||
1477 | 244 | """Get process' start time. | ||
1478 | 245 | |||
1479 | 246 | Determine start time of the process based on the last modification | ||
1480 | 247 | time of the /proc/pid directory. If pgrep_full is True, the process | ||
1481 | 248 | name is matched against the full command line. | ||
1482 | 249 | """ | ||
1483 | 250 | if pgrep_full: | ||
1484 | 251 | cmd = 'pgrep -o -f {}'.format(service) | ||
1485 | 252 | else: | ||
1486 | 253 | cmd = 'pgrep -o {}'.format(service) | ||
1487 | 254 | cmd = cmd + ' | grep -v pgrep || exit 0' | ||
1488 | 255 | cmd_out = sentry_unit.run(cmd) | ||
1489 | 256 | self.log.debug('CMDout: ' + str(cmd_out)) | ||
1490 | 257 | if cmd_out[0]: | ||
1491 | 258 | self.log.debug('Pid for %s %s' % (service, str(cmd_out[0]))) | ||
1492 | 259 | proc_dir = '/proc/{}'.format(cmd_out[0].strip()) | ||
1493 | 260 | return self._get_dir_mtime(sentry_unit, proc_dir) | ||
1494 | 261 | |||
1495 | 262 | def service_restarted(self, sentry_unit, service, filename, | ||
1496 | 263 | pgrep_full=False, sleep_time=20): | ||
1497 | 264 | """Check if service was restarted. | ||
1498 | 265 | |||
1499 | 266 | Compare a service's start time vs a file's last modification time | ||
1500 | 267 | (such as a config file for that service) to determine if the service | ||
1501 | 268 | has been restarted. | ||
1502 | 269 | """ | ||
1503 | 270 | time.sleep(sleep_time) | ||
1504 | 271 | if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >= | ||
1505 | 272 | self._get_file_mtime(sentry_unit, filename)): | ||
1506 | 273 | return True | ||
1507 | 274 | else: | ||
1508 | 275 | return False | ||
1509 | 276 | |||
1510 | 277 | def service_restarted_since(self, sentry_unit, mtime, service, | ||
1511 | 278 | pgrep_full=False, sleep_time=20, | ||
1512 | 279 | retry_count=2): | ||
1513 | 280 | """Check if service has been started after a given time. | ||
1514 | 281 | |||
1515 | 282 | Args: | ||
1516 | 283 | sentry_unit (sentry): The sentry unit to check for the service on | ||
1517 | 284 | mtime (float): The epoch time to check against | ||
1518 | 285 | service (string): service name to look for in process table | ||
1519 | 286 | pgrep_full (boolean): Use full command line search mode with pgrep | ||
1520 | 287 | sleep_time (int): Seconds to sleep before looking for process | ||
1521 | 288 | retry_count (int): If service is not found, how many times to retry | ||
1522 | 289 | |||
1523 | 290 | Returns: | ||
1524 | 291 | bool: True if service found and its start time is newer than mtime, | ||
1525 | 292 | False if service is older than mtime or if service was | ||
1526 | 293 | not found. | ||
1527 | 294 | """ | ||
1528 | 295 | self.log.debug('Checking %s restarted since %s' % (service, mtime)) | ||
1529 | 296 | time.sleep(sleep_time) | ||
1530 | 297 | proc_start_time = self._get_proc_start_time(sentry_unit, service, | ||
1531 | 298 | pgrep_full) | ||
1532 | 299 | while retry_count > 0 and not proc_start_time: | ||
1533 | 300 | self.log.debug('No pid file found for service %s, will retry %i ' | ||
1534 | 301 | 'more times' % (service, retry_count)) | ||
1535 | 302 | time.sleep(30) | ||
1536 | 303 | proc_start_time = self._get_proc_start_time(sentry_unit, service, | ||
1537 | 304 | pgrep_full) | ||
1538 | 305 | retry_count = retry_count - 1 | ||
1539 | 306 | |||
1540 | 307 | if not proc_start_time: | ||
1541 | 308 | self.log.warn('No proc start time found, assuming service did ' | ||
1542 | 309 | 'not start') | ||
1543 | 310 | return False | ||
1544 | 311 | if proc_start_time >= mtime: | ||
1545 | 312 | self.log.debug('proc start time is newer than provided mtime ' | ||
1546 | 313 | '(%s >= %s)' % (proc_start_time, mtime)) | ||
1547 | 314 | return True | ||
1548 | 315 | else: | ||
1549 | 316 | self.log.warn('proc start time (%s) is older than provided mtime ' | ||
1550 | 317 | '(%s), service did not restart' % (proc_start_time, | ||
1551 | 318 | mtime)) | ||
1552 | 319 | return False | ||
1553 | 320 | |||
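The retry loop inside `service_restarted_since` is a generic poll-with-retries pattern; a compact sketch of the same shape (delay shortened for illustration):

```python
import time

def wait_for(probe, retry_count=2, delay=0.01):
    # Call the probe once, then retry with a sleep while it returns a
    # falsy result and retries remain - same control flow as the helper.
    result = probe()
    while retry_count > 0 and not result:
        time.sleep(delay)
        result = probe()
        retry_count -= 1
    return result

attempts = iter([None, None, 12345])  # pid found on the third try
print(wait_for(lambda: next(attempts)))  # 12345
```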
1554 | 321 | def config_updated_since(self, sentry_unit, filename, mtime, | ||
1555 | 322 | sleep_time=20): | ||
1556 | 323 | """Check if file was modified after a given time. | ||
1557 | 324 | |||
1558 | 325 | Args: | ||
1559 | 326 | sentry_unit (sentry): The sentry unit to check the file mtime on | ||
1560 | 327 | filename (string): The file to check mtime of | ||
1561 | 328 | mtime (float): The epoch time to check against | ||
1562 | 329 | sleep_time (int): Seconds to sleep before looking for process | ||
1563 | 330 | |||
1564 | 331 | Returns: | ||
1565 | 332 | bool: True if file was modified more recently than mtime, False if | ||
1566 | 333 | file was modified before mtime. | ||
1567 | 334 | """ | ||
1568 | 335 | self.log.debug('Checking %s updated since %s' % (filename, mtime)) | ||
1569 | 336 | time.sleep(sleep_time) | ||
1570 | 337 | file_mtime = self._get_file_mtime(sentry_unit, filename) | ||
1571 | 338 | if file_mtime >= mtime: | ||
1572 | 339 | self.log.debug('File mtime is newer than provided mtime ' | ||
1573 | 340 | '(%s >= %s)' % (file_mtime, mtime)) | ||
1574 | 341 | return True | ||
1575 | 342 | else: | ||
1576 | 343 | self.log.warn('File mtime %s is older than provided mtime %s' | ||
1577 | 344 | % (file_mtime, mtime)) | ||
1578 | 345 | return False | ||
1579 | 346 | |||
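`config_updated_since` reduces to an mtime comparison against a reference epoch time; the same check run locally (using `os.stat` where the helper uses `file_stat` over the sentry):

```python
import os
import tempfile
import time

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

mtime_reference = time.time() - 5  # pretend the config change happened after this
os.utime(path, None)               # touch: file modified "now"

file_mtime = os.stat(path).st_mtime
updated = file_mtime >= mtime_reference
print(updated)  # True: file modified after the reference time
os.unlink(path)
```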
1580 | 347 | def validate_service_config_changed(self, sentry_unit, mtime, service, | ||
1581 | 348 | filename, pgrep_full=False, | ||
1582 | 349 | sleep_time=20, retry_count=2): | ||
1583 | 350 | """Check service and file were updated after mtime | ||
1584 | 351 | |||
1585 | 352 | Args: | ||
1586 | 353 | sentry_unit (sentry): The sentry unit to check for the service on | ||
1587 | 354 | mtime (float): The epoch time to check against | ||
1588 | 355 | service (string): service name to look for in process table | ||
1589 | 356 | filename (string): The file to check mtime of | ||
1590 | 357 | pgrep_full (boolean): Use full command line search mode with pgrep | ||
1591 | 358 | sleep_time (int): Seconds to sleep before looking for process | ||
1592 | 359 | retry_count (int): If service is not found, how many times to retry | ||
1593 | 360 | |||
1594 | 361 | Typical Usage: | ||
1595 | 362 | u = OpenStackAmuletUtils(ERROR) | ||
1596 | 363 | ... | ||
1597 | 364 | mtime = u.get_sentry_time(self.cinder_sentry) | ||
1598 | 365 | self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'}) | ||
1599 | 366 | if not u.validate_service_config_changed(self.cinder_sentry, | ||
1600 | 367 | mtime, | ||
1601 | 368 | 'cinder-api', | ||
1602 | 369 | '/etc/cinder/cinder.conf'): | ||
1603 | 370 | amulet.raise_status(amulet.FAIL, msg='update failed') | ||
1604 | 371 | Returns: | ||
1605 | 372 | bool: True if both service and file were updated/restarted after | ||
1606 | 373 | mtime, False if service is older than mtime or if service was | ||
1607 | 374 | not found or if filename was modified before mtime. | ||
1608 | 375 | """ | ||
1609 | 376 | self.log.debug('Checking %s restarted since %s' % (service, mtime)) | ||
1610 | 377 | time.sleep(sleep_time) | ||
1611 | 378 | service_restart = self.service_restarted_since(sentry_unit, mtime, | ||
1612 | 379 | service, | ||
1613 | 380 | pgrep_full=pgrep_full, | ||
1614 | 381 | sleep_time=0, | ||
1615 | 382 | retry_count=retry_count) | ||
1616 | 383 | config_update = self.config_updated_since(sentry_unit, filename, mtime, | ||
1617 | 384 | sleep_time=0) | ||
1618 | 385 | return service_restart and config_update | ||
1619 | 386 | |||
1620 | 387 | def get_sentry_time(self, sentry_unit): | ||
1621 | 388 | """Return current epoch time on a sentry""" | ||
1622 | 389 | cmd = "date +'%s'" | ||
1623 | 390 | return float(sentry_unit.run(cmd)[0]) | ||
1624 | 391 | |||
1625 | 392 | def relation_error(self, name, data): | ||
1626 | 393 | return 'unexpected relation data in {} - {}'.format(name, data) | ||
1627 | 394 | |||
1628 | 395 | def endpoint_error(self, name, data): | ||
1629 | 396 | return 'unexpected endpoint data in {} - {}'.format(name, data) | ||
1630 | 397 | |||
1631 | 398 | def get_ubuntu_releases(self): | ||
1632 | 399 | """Return a list of all Ubuntu releases in order of release.""" | ||
1633 | 400 | _d = distro_info.UbuntuDistroInfo() | ||
1634 | 401 | _release_list = _d.all | ||
1635 | 402 | self.log.debug('Ubuntu release list: {}'.format(_release_list)) | ||
1636 | 403 | return _release_list | ||
1637 | 404 | |||
1638 | 405 | def file_to_url(self, file_rel_path): | ||
1639 | 406 | """Convert a relative file path to a file URL.""" | ||
1640 | 407 | _abs_path = os.path.abspath(file_rel_path) | ||
1641 | 408 | return urlparse.urlparse(_abs_path, scheme='file').geturl() | ||
1642 | 0 | 409 | ||
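`file_to_url` at the end of the file above converts a relative path (such as the bundled HOT template) into a file URL; the Python 3 equivalent of the call it makes:

```python
import os
from urllib.parse import urlparse

def file_to_url(file_rel_path):
    # Same approach as the helper: absolutize the path, then let urlparse
    # attach the file scheme before rebuilding the URL.
    return urlparse(os.path.abspath(file_rel_path), scheme='file').geturl()

url = file_to_url('tests/files/hot_hello_world.yaml')
print(url)  # e.g. file:/.../tests/files/hot_hello_world.yaml
```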
1643 | === added directory 'tests/charmhelpers/contrib/openstack' | |||
1644 | === added file 'tests/charmhelpers/contrib/openstack/__init__.py' | |||
1645 | --- tests/charmhelpers/contrib/openstack/__init__.py 1970-01-01 00:00:00 +0000 | |||
1646 | +++ tests/charmhelpers/contrib/openstack/__init__.py 2015-06-11 15:38:49 +0000 | |||
1647 | @@ -0,0 +1,15 @@ | |||
1648 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
1649 | 2 | # | ||
1650 | 3 | # This file is part of charm-helpers. | ||
1651 | 4 | # | ||
1652 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
1653 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
1654 | 7 | # published by the Free Software Foundation. | ||
1655 | 8 | # | ||
1656 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
1657 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
1658 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
1659 | 12 | # GNU Lesser General Public License for more details. | ||
1660 | 13 | # | ||
1661 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
1662 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
1663 | 0 | 16 | ||
1664 | === added directory 'tests/charmhelpers/contrib/openstack/amulet' | |||
1665 | === added file 'tests/charmhelpers/contrib/openstack/amulet/__init__.py' | |||
1666 | --- tests/charmhelpers/contrib/openstack/amulet/__init__.py 1970-01-01 00:00:00 +0000 | |||
1667 | +++ tests/charmhelpers/contrib/openstack/amulet/__init__.py 2015-06-11 15:38:49 +0000 | |||
1668 | @@ -0,0 +1,15 @@ | |||
1669 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
1670 | 2 | # | ||
1671 | 3 | # This file is part of charm-helpers. | ||
1672 | 4 | # | ||
1673 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
1674 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
1675 | 7 | # published by the Free Software Foundation. | ||
1676 | 8 | # | ||
1677 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
1678 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
1679 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
1680 | 12 | # GNU Lesser General Public License for more details. | ||
1681 | 13 | # | ||
1682 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
1683 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
1684 | 0 | 16 | ||
1685 | === added file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py' | |||
1686 | --- tests/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000 | |||
1687 | +++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-11 15:38:49 +0000 | |||
1688 | @@ -0,0 +1,151 @@ | |||
1689 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
1690 | 2 | # | ||
1691 | 3 | # This file is part of charm-helpers. | ||
1692 | 4 | # | ||
1693 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
1694 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
1695 | 7 | # published by the Free Software Foundation. | ||
1696 | 8 | # | ||
1697 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
1698 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
1699 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
1700 | 12 | # GNU Lesser General Public License for more details. | ||
1701 | 13 | # | ||
1702 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
1703 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
1704 | 16 | |||
1705 | 17 | import six | ||
1706 | 18 | from collections import OrderedDict | ||
1707 | 19 | from charmhelpers.contrib.amulet.deployment import ( | ||
1708 | 20 | AmuletDeployment | ||
1709 | 21 | ) | ||
1710 | 22 | |||
1711 | 23 | |||
1712 | 24 | class OpenStackAmuletDeployment(AmuletDeployment): | ||
1713 | 25 | """OpenStack amulet deployment. | ||
1714 | 26 | |||
1715 | 27 | This class inherits from AmuletDeployment and has additional support | ||
1716 | 28 | that is specifically for use by OpenStack charms. | ||
1717 | 29 | """ | ||
1718 | 30 | |||
1719 | 31 | def __init__(self, series=None, openstack=None, source=None, stable=True): | ||
1720 | 32 | """Initialize the deployment environment.""" | ||
1721 | 33 | super(OpenStackAmuletDeployment, self).__init__(series) | ||
1722 | 34 | self.openstack = openstack | ||
1723 | 35 | self.source = source | ||
1724 | 36 | self.stable = stable | ||
1725 | 37 | # Note(coreycb): this needs to be changed when new next branches come | ||
1726 | 38 | # out. | ||
1727 | 39 | self.current_next = "trusty" | ||
1728 | 40 | |||
1729 | 41 | def _determine_branch_locations(self, other_services): | ||
1730 | 42 | """Determine the branch locations for the other services. | ||
1731 | 43 | |||
1732 | 44 | Determine if the local branch being tested is derived from its | ||
1733 | 45 | stable or next (dev) branch, and based on this, use the corresponding | ||
1734 | 46 | stable or next branches for the other_services.""" | ||
1735 | 47 | base_charms = ['mysql', 'mongodb'] | ||
1736 | 48 | |||
1737 | 49 | if self.series in ['precise', 'trusty']: | ||
1738 | 50 | base_series = self.series | ||
1739 | 51 | else: | ||
1740 | 52 | base_series = self.current_next | ||
1741 | 53 | |||
1742 | 54 | if self.stable: | ||
1743 | 55 | for svc in other_services: | ||
1744 | 56 | temp = 'lp:charms/{}/{}' | ||
1745 | 57 | svc['location'] = temp.format(base_series, | ||
1746 | 58 | svc['name']) | ||
1747 | 59 | else: | ||
1748 | 60 | for svc in other_services: | ||
1749 | 61 | if svc['name'] in base_charms: | ||
1750 | 62 | temp = 'lp:charms/{}/{}' | ||
1751 | 63 | svc['location'] = temp.format(base_series, | ||
1752 | 64 | svc['name']) | ||
1753 | 65 | else: | ||
1754 | 66 | temp = 'lp:~openstack-charmers/charms/{}/{}/next' | ||
1755 | 67 | svc['location'] = temp.format(self.current_next, | ||
1756 | 68 | svc['name']) | ||
1757 | 69 | return other_services | ||
1758 | 70 | |||
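The branch selection in `_determine_branch_locations` can be condensed into one function for illustration (same rules: stable charms and base charms come from `lp:charms/<series>/<name>`, everything else from the openstack-charmers /next branches):

```python
def branch_location(name, series, stable, current_next='trusty',
                    base_charms=('mysql', 'mongodb')):
    # Fall back to the current "next" series when testing on a series
    # that has no stable charm branches yet.
    base_series = series if series in ('precise', 'trusty') else current_next
    if stable or name in base_charms:
        return 'lp:charms/{}/{}'.format(base_series, name)
    return 'lp:~openstack-charmers/charms/{}/{}/next'.format(current_next, name)

print(branch_location('mysql', 'trusty', stable=False))
# lp:charms/trusty/mysql
print(branch_location('keystone', 'trusty', stable=False))
# lp:~openstack-charmers/charms/trusty/keystone/next
```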
1759 | 71 | def _add_services(self, this_service, other_services): | ||
1760 | 72 | """Add services to the deployment and set openstack-origin/source.""" | ||
1761 | 73 | other_services = self._determine_branch_locations(other_services) | ||
1762 | 74 | |||
1763 | 75 | super(OpenStackAmuletDeployment, self)._add_services(this_service, | ||
1764 | 76 | other_services) | ||
1765 | 77 | |||
1766 | 78 | services = other_services | ||
1767 | 79 | services.append(this_service) | ||
1768 | 80 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', | ||
1769 | 81 | 'ceph-osd', 'ceph-radosgw'] | ||
1770 | 82 | # OpenStack subordinate charms do not expose an origin option as that | ||
1771 | 83 | # is controlled by the principal charm | ||
1772 | 84 | ignore = ['neutron-openvswitch'] | ||
1773 | 85 | |||
1774 | 86 | if self.openstack: | ||
1775 | 87 | for svc in services: | ||
1776 | 88 | if svc['name'] not in use_source + ignore: | ||
1777 | 89 | config = {'openstack-origin': self.openstack} | ||
1778 | 90 | self.d.configure(svc['name'], config) | ||
1779 | 91 | |||
1780 | 92 | if self.source: | ||
1781 | 93 | for svc in services: | ||
1782 | 94 | if svc['name'] in use_source and svc['name'] not in ignore: | ||
1783 | 95 | config = {'source': self.source} | ||
1784 | 96 | self.d.configure(svc['name'], config) | ||
1785 | 97 | |||
1786 | 98 | def _configure_services(self, configs): | ||
1787 | 99 | """Configure all of the services.""" | ||
1788 | 100 | for service, config in six.iteritems(configs): | ||
1789 | 101 | self.d.configure(service, config) | ||
1790 | 102 | |||
1791 | 103 | def _get_openstack_release(self): | ||
1792 | 104 | """Get openstack release. | ||
1793 | 105 | |||
1794 | 106 | Return an integer representing the enum value of the openstack | ||
1795 | 107 | release. | ||
1796 | 108 | """ | ||
1797 | 109 | # Must be ordered by OpenStack release (not by Ubuntu release): | ||
1798 | 110 | (self.precise_essex, self.precise_folsom, self.precise_grizzly, | ||
1799 | 111 | self.precise_havana, self.precise_icehouse, | ||
1800 | 112 | self.trusty_icehouse, self.trusty_juno, self.utopic_juno, | ||
1801 | 113 | self.trusty_kilo, self.vivid_kilo, self.trusty_liberty, | ||
1802 | 114 | self.wily_liberty) = range(12) | ||
1803 | 115 | |||
1804 | 116 | releases = { | ||
1805 | 117 | ('precise', None): self.precise_essex, | ||
1806 | 118 | ('precise', 'cloud:precise-folsom'): self.precise_folsom, | ||
1807 | 119 | ('precise', 'cloud:precise-grizzly'): self.precise_grizzly, | ||
1808 | 120 | ('precise', 'cloud:precise-havana'): self.precise_havana, | ||
1809 | 121 | ('precise', 'cloud:precise-icehouse'): self.precise_icehouse, | ||
1810 | 122 | ('trusty', None): self.trusty_icehouse, | ||
1811 | 123 | ('trusty', 'cloud:trusty-juno'): self.trusty_juno, | ||
1812 | 124 | ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo, | ||
1813 | 125 | ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty, | ||
1814 | 126 | ('utopic', None): self.utopic_juno, | ||
1815 | 127 | ('vivid', None): self.vivid_kilo, | ||
1816 | 128 | ('wily', None): self.wily_liberty} | ||
1817 | 129 | |||
1818 | 130 | return releases[(self.series, self.openstack)] | ||
1819 | 131 | |||
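The enum in `_get_openstack_release` is keyed by (series, openstack-origin) pairs and ordered by OpenStack release, so numeric comparisons work across Ubuntu series; a reduced stand-in:

```python
# Reduced stand-in: ordered by OpenStack release, not by Ubuntu series.
(precise_icehouse, trusty_icehouse, trusty_juno, utopic_juno,
 trusty_kilo, vivid_kilo) = range(6)

releases = {
    ('precise', 'cloud:precise-icehouse'): precise_icehouse,
    ('trusty', None): trusty_icehouse,
    ('trusty', 'cloud:trusty-juno'): trusty_juno,
    ('utopic', None): utopic_juno,
    ('trusty', 'cloud:trusty-kilo'): trusty_kilo,
    ('vivid', None): vivid_kilo,
}

# A kilo deployment on trusty sorts after a juno deployment on utopic.
print(releases[('trusty', 'cloud:trusty-kilo')] > releases[('utopic', None)])  # True
```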
1820 | 132 | def _get_openstack_release_string(self): | ||
1821 | 133 | """Get openstack release string. | ||
1822 | 134 | |||
1823 | 135 | Return a string representing the openstack release. | ||
1824 | 136 | """ | ||
1825 | 137 | releases = OrderedDict([ | ||
1826 | 138 | ('precise', 'essex'), | ||
1827 | 139 | ('quantal', 'folsom'), | ||
1828 | 140 | ('raring', 'grizzly'), | ||
1829 | 141 | ('saucy', 'havana'), | ||
1830 | 142 | ('trusty', 'icehouse'), | ||
1831 | 143 | ('utopic', 'juno'), | ||
1832 | 144 | ('vivid', 'kilo'), | ||
1833 | 145 | ('wily', 'liberty'), | ||
1834 | 146 | ]) | ||
1835 | 147 | if self.openstack: | ||
1836 | 148 | os_origin = self.openstack.split(':')[1] | ||
1837 | 149 | return os_origin.split('%s-' % self.series)[1].split('/')[0] | ||
1838 | 150 | else: | ||
1839 | 151 | return releases[self.series] | ||
1840 | 0 | 152 | ||
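The string parsing in `_get_openstack_release_string` above derives the release name either from the cloud-archive pocket or from the series default; a self-contained version of the same logic:

```python
def openstack_release_string(series, openstack_origin=None):
    # 'cloud:trusty-kilo' -> 'kilo'; pocket suffixes like '/updates'
    # are stripped by the final split. Otherwise use the series default.
    defaults = {'precise': 'essex', 'trusty': 'icehouse',
                'utopic': 'juno', 'vivid': 'kilo', 'wily': 'liberty'}
    if openstack_origin:
        pocket = openstack_origin.split(':')[1]          # 'trusty-kilo'
        return pocket.split('%s-' % series)[1].split('/')[0]
    return defaults[series]

print(openstack_release_string('trusty', 'cloud:trusty-kilo'))  # kilo
print(openstack_release_string('trusty'))                       # icehouse
```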
1841 | === added file 'tests/charmhelpers/contrib/openstack/amulet/utils.py' | |||
1842 | --- tests/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000 | |||
1843 | +++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-06-11 15:38:49 +0000 | |||
1844 | @@ -0,0 +1,413 @@ | |||
1845 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
1846 | 2 | # | ||
1847 | 3 | # This file is part of charm-helpers. | ||
1848 | 4 | # | ||
1849 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
1850 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
1851 | 7 | # published by the Free Software Foundation. | ||
1852 | 8 | # | ||
1853 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
1854 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
1855 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
1856 | 12 | # GNU Lesser General Public License for more details. | ||
1857 | 13 | # | ||
1858 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
1859 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
1860 | 16 | |||
1861 | 17 | import logging | ||
1862 | 18 | import os | ||
1863 | 19 | import six | ||
1864 | 20 | import time | ||
1865 | 21 | import urllib | ||
1866 | 22 | |||
1867 | 23 | import glanceclient.v1.client as glance_client | ||
1868 | 24 | import heatclient.v1.client as heat_client | ||
1869 | 25 | import keystoneclient.v2_0 as keystone_client | ||
1870 | 26 | import novaclient.v1_1.client as nova_client | ||
1871 | 27 | |||
1872 | 28 | from charmhelpers.contrib.amulet.utils import ( | ||
1873 | 29 | AmuletUtils | ||
1874 | 30 | ) | ||
1875 | 31 | |||
1876 | 32 | DEBUG = logging.DEBUG | ||
1877 | 33 | ERROR = logging.ERROR | ||
1878 | 34 | |||
1879 | 35 | |||
1880 | 36 | class OpenStackAmuletUtils(AmuletUtils): | ||
1881 | 37 | """OpenStack amulet utilities. | ||
1882 | 38 | |||
1883 | 39 | This class inherits from AmuletUtils and has additional support | ||
1884 | 40 | that is specifically for use by OpenStack charm tests. | ||
1885 | 41 | """ | ||
1886 | 42 | |||
1887 | 43 | def __init__(self, log_level=ERROR): | ||
1888 | 44 | """Initialize the deployment environment.""" | ||
1889 | 45 | super(OpenStackAmuletUtils, self).__init__(log_level) | ||
1890 | 46 | |||
1891 | 47 | def validate_endpoint_data(self, endpoints, admin_port, internal_port, | ||
1892 | 48 | public_port, expected): | ||
1893 | 49 | """Validate endpoint data. | ||
1894 | 50 | |||
1895 | 51 | Validate actual endpoint data vs expected endpoint data. The ports | ||
1896 | 52 | are used to find the matching endpoint. | ||
1897 | 53 | """ | ||
1898 | 54 | self.log.debug('Validating endpoint data...') | ||
1899 | 55 | self.log.debug('actual: {}'.format(repr(endpoints))) | ||
1900 | 56 | found = False | ||
1901 | 57 | for ep in endpoints: | ||
1902 | 58 | self.log.debug('endpoint: {}'.format(repr(ep))) | ||
1903 | 59 | if (admin_port in ep.adminurl and | ||
1904 | 60 | internal_port in ep.internalurl and | ||
1905 | 61 | public_port in ep.publicurl): | ||
1906 | 62 | found = True | ||
1907 | 63 | actual = {'id': ep.id, | ||
1908 | 64 | 'region': ep.region, | ||
1909 | 65 | 'adminurl': ep.adminurl, | ||
1910 | 66 | 'internalurl': ep.internalurl, | ||
1911 | 67 | 'publicurl': ep.publicurl, | ||
1912 | 68 | 'service_id': ep.service_id} | ||
1913 | 69 | ret = self._validate_dict_data(expected, actual) | ||
1914 | 70 | if ret: | ||
1915 | 71 | return 'unexpected endpoint data - {}'.format(ret) | ||
1916 | 72 | |||
1917 | 73 | if not found: | ||
1918 | 74 | return 'endpoint not found' | ||
1919 | 75 | |||
1920 | 76 | def validate_svc_catalog_endpoint_data(self, expected, actual): | ||
1921 | 77 | """Validate service catalog endpoint data. | ||
1922 | 78 | |||
1923 | 79 | Validate a list of actual service catalog endpoints vs a list of | ||
1924 | 80 | expected service catalog endpoints. | ||
1925 | 81 | """ | ||
1926 | 82 | self.log.debug('Validating service catalog endpoint data...') | ||
1927 | 83 | self.log.debug('actual: {}'.format(repr(actual))) | ||
1928 | 84 | for k, v in six.iteritems(expected): | ||
1929 | 85 | if k in actual: | ||
1930 | 86 | ret = self._validate_dict_data(expected[k][0], actual[k][0]) | ||
1931 | 87 | if ret: | ||
1932 | 88 | return self.endpoint_error(k, ret) | ||
1933 | 89 | else: | ||
1934 | 90 | return "endpoint {} does not exist".format(k) | ||
1935 | 91 | return ret | ||
1936 | 92 | |||
1937 | 93 | def validate_tenant_data(self, expected, actual): | ||
1938 | 94 | """Validate tenant data. | ||
1939 | 95 | |||
1940 | 96 | Validate a list of actual tenant data vs list of expected tenant | ||
1941 | 97 | data. | ||
1942 | 98 | """ | ||
1943 | 99 | self.log.debug('Validating tenant data...') | ||
1944 | 100 | self.log.debug('actual: {}'.format(repr(actual))) | ||
1945 | 101 | for e in expected: | ||
1946 | 102 | found = False | ||
1947 | 103 | for act in actual: | ||
1948 | 104 | a = {'enabled': act.enabled, 'description': act.description, | ||
1949 | 105 | 'name': act.name, 'id': act.id} | ||
1950 | 106 | if e['name'] == a['name']: | ||
1951 | 107 | found = True | ||
1952 | 108 | ret = self._validate_dict_data(e, a) | ||
1953 | 109 | if ret: | ||
1954 | 110 | return "unexpected tenant data - {}".format(ret) | ||
1955 | 111 | if not found: | ||
1956 | 112 | return "tenant {} does not exist".format(e['name']) | ||
1957 | 113 | return ret | ||
1958 | 114 | |||
1959 | 115 | def validate_role_data(self, expected, actual): | ||
1960 | 116 | """Validate role data. | ||
1961 | 117 | |||
1962 | 118 | Validate a list of actual role data vs a list of expected role | ||
1963 | 119 | data. | ||
1964 | 120 | """ | ||
1965 | 121 | self.log.debug('Validating role data...') | ||
1966 | 122 | self.log.debug('actual: {}'.format(repr(actual))) | ||
1967 | 123 | for e in expected: | ||
1968 | 124 | found = False | ||
1969 | 125 | for act in actual: | ||
1970 | 126 | a = {'name': act.name, 'id': act.id} | ||
1971 | 127 | if e['name'] == a['name']: | ||
1972 | 128 | found = True | ||
1973 | 129 | ret = self._validate_dict_data(e, a) | ||
1974 | 130 | if ret: | ||
1975 | 131 | return "unexpected role data - {}".format(ret) | ||
1976 | 132 | if not found: | ||
1977 | 133 | return "role {} does not exist".format(e['name']) | ||
1978 | 134 | return ret | ||
1979 | 135 | |||
1980 | 136 | def validate_user_data(self, expected, actual): | ||
1981 | 137 | """Validate user data. | ||
1982 | 138 | |||
1983 | 139 | Validate a list of actual user data vs a list of expected user | ||
1984 | 140 | data. | ||
1985 | 141 | """ | ||
1986 | 142 | self.log.debug('Validating user data...') | ||
1987 | 143 | self.log.debug('actual: {}'.format(repr(actual))) | ||
1988 | 144 | for e in expected: | ||
1989 | 145 | found = False | ||
1990 | 146 | for act in actual: | ||
1991 | 147 | a = {'enabled': act.enabled, 'name': act.name, | ||
1992 | 148 | 'email': act.email, 'tenantId': act.tenantId, | ||
1993 | 149 | 'id': act.id} | ||
1994 | 150 | if e['name'] == a['name']: | ||
1995 | 151 | found = True | ||
1996 | 152 | ret = self._validate_dict_data(e, a) | ||
1997 | 153 | if ret: | ||
1998 | 154 | return "unexpected user data - {}".format(ret) | ||
1999 | 155 | if not found: | ||
2000 | 156 | return "user {} does not exist".format(e['name']) | ||
2001 | 157 | return ret | ||
2002 | 158 | |||
2003 | 159 | def validate_flavor_data(self, expected, actual): | ||
2004 | 160 | """Validate flavor data. | ||
2005 | 161 | |||
2006 | 162 | Validate a list of actual flavors vs a list of expected flavors. | ||
2007 | 163 | """ | ||
2008 | 164 | self.log.debug('Validating flavor data...') | ||
2009 | 165 | self.log.debug('actual: {}'.format(repr(actual))) | ||
2010 | 166 | act = [a.name for a in actual] | ||
2011 | 167 | return self._validate_list_data(expected, act) | ||
2012 | 168 | |||
2013 | 169 | def tenant_exists(self, keystone, tenant): | ||
2014 | 170 | """Return True if tenant exists.""" | ||
2015 | 171 | self.log.debug('Checking if tenant exists ({})...'.format(tenant)) | ||
2016 | 172 | return tenant in [t.name for t in keystone.tenants.list()] | ||
2017 | 173 | |||
2018 | 174 | def authenticate_keystone_admin(self, keystone_sentry, user, password, | ||
2019 | 175 | tenant): | ||
2020 | 176 | """Authenticates admin user with the keystone admin endpoint.""" | ||
2021 | 177 | self.log.debug('Authenticating keystone admin...') | ||
2022 | 178 | unit = keystone_sentry | ||
2023 | 179 | service_ip = unit.relation('shared-db', | ||
2024 | 180 | 'mysql:shared-db')['private-address'] | ||
2025 | 181 | ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8')) | ||
2026 | 182 | return keystone_client.Client(username=user, password=password, | ||
2027 | 183 | tenant_name=tenant, auth_url=ep) | ||
2028 | 184 | |||
2029 | 185 | def authenticate_keystone_user(self, keystone, user, password, tenant): | ||
2030 | 186 | """Authenticates a regular user with the keystone public endpoint.""" | ||
2031 | 187 | self.log.debug('Authenticating keystone user ({})...'.format(user)) | ||
2032 | 188 | ep = keystone.service_catalog.url_for(service_type='identity', | ||
2033 | 189 | endpoint_type='publicURL') | ||
2034 | 190 | return keystone_client.Client(username=user, password=password, | ||
2035 | 191 | tenant_name=tenant, auth_url=ep) | ||
2036 | 192 | |||
2037 | 193 | def authenticate_glance_admin(self, keystone): | ||
2038 | 194 | """Authenticates admin user with glance.""" | ||
2039 | 195 | self.log.debug('Authenticating glance admin...') | ||
2040 | 196 | ep = keystone.service_catalog.url_for(service_type='image', | ||
2041 | 197 | endpoint_type='adminURL') | ||
2042 | 198 | return glance_client.Client(ep, token=keystone.auth_token) | ||
2043 | 199 | |||
2044 | 200 | def authenticate_heat_admin(self, keystone): | ||
2045 | 201 | """Authenticates the admin user with heat.""" | ||
2046 | 202 | self.log.debug('Authenticating heat admin...') | ||
2047 | 203 | ep = keystone.service_catalog.url_for(service_type='orchestration', | ||
2048 | 204 | endpoint_type='publicURL') | ||
2049 | 205 | return heat_client.Client(endpoint=ep, token=keystone.auth_token) | ||
2050 | 206 | |||
2051 | 207 | def authenticate_nova_user(self, keystone, user, password, tenant): | ||
2052 | 208 | """Authenticates a regular user with nova-api.""" | ||
2053 | 209 | self.log.debug('Authenticating nova user ({})...'.format(user)) | ||
2054 | 210 | ep = keystone.service_catalog.url_for(service_type='identity', | ||
2055 | 211 | endpoint_type='publicURL') | ||
2056 | 212 | return nova_client.Client(username=user, api_key=password, | ||
2057 | 213 | project_id=tenant, auth_url=ep) | ||
2058 | 214 | |||
2059 | 215 | def create_cirros_image(self, glance, image_name): | ||
2060 | 216 | """Download the latest cirros image and upload it to glance.""" | ||
2061 | 217 | self.log.debug('Creating glance image ({})...'.format(image_name)) | ||
2062 | 218 | http_proxy = os.getenv('AMULET_HTTP_PROXY') | ||
2063 | 219 | self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy)) | ||
2064 | 220 | if http_proxy: | ||
2065 | 221 | proxies = {'http': http_proxy} | ||
2066 | 222 | opener = urllib.FancyURLopener(proxies) | ||
2067 | 223 | else: | ||
2068 | 224 | opener = urllib.FancyURLopener() | ||
2069 | 225 | |||
2070 | 226 | f = opener.open("http://download.cirros-cloud.net/version/released") | ||
2071 | 227 | version = f.read().strip() | ||
2072 | 228 | cirros_img = "cirros-{}-x86_64-disk.img".format(version) | ||
2073 | 229 | local_path = os.path.join('tests', cirros_img) | ||
2074 | 230 | |||
2075 | 231 | if not os.path.exists(local_path): | ||
2076 | 232 | cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net", | ||
2077 | 233 | version, cirros_img) | ||
2078 | 234 | opener.retrieve(cirros_url, local_path) | ||
2079 | 235 | f.close() | ||
2080 | 236 | |||
2081 | 237 | with open(local_path) as f: | ||
2082 | 238 | image = glance.images.create(name=image_name, is_public=True, | ||
2083 | 239 | disk_format='qcow2', | ||
2084 | 240 | container_format='bare', data=f) | ||
2085 | 241 | count = 1 | ||
2086 | 242 | status = image.status | ||
2087 | 243 | while status != 'active' and count < 10: | ||
2088 | 244 | time.sleep(3) | ||
2089 | 245 | image = glance.images.get(image.id) | ||
2090 | 246 | status = image.status | ||
2091 | 247 | self.log.debug('image status: {}'.format(status)) | ||
2092 | 248 | count += 1 | ||
2093 | 249 | |||
2094 | 250 | if status != 'active': | ||
2095 | 251 | self.log.error('image creation timed out') | ||
2096 | 252 | return None | ||
2097 | 253 | |||
2098 | 254 | return image | ||
2099 | 255 | |||
2100 | 256 | def delete_image(self, glance, image): | ||
2101 | 257 | """Delete the specified image.""" | ||
2102 | 258 | |||
2103 | 259 | # /!\ DEPRECATION WARNING | ||
2104 | 260 | self.log.warn('/!\\ DEPRECATION WARNING: use ' | ||
2105 | 261 | 'delete_resource instead of delete_image.') | ||
2106 | 262 | self.log.debug('Deleting glance image ({})...'.format(image)) | ||
2107 | 263 | num_before = len(list(glance.images.list())) | ||
2108 | 264 | glance.images.delete(image) | ||
2109 | 265 | |||
2110 | 266 | count = 1 | ||
2111 | 267 | num_after = len(list(glance.images.list())) | ||
2112 | 268 | while num_after != (num_before - 1) and count < 10: | ||
2113 | 269 | time.sleep(3) | ||
2114 | 270 | num_after = len(list(glance.images.list())) | ||
2115 | 271 | self.log.debug('number of images: {}'.format(num_after)) | ||
2116 | 272 | count += 1 | ||
2117 | 273 | |||
2118 | 274 | if num_after != (num_before - 1): | ||
2119 | 275 | self.log.error('image deletion timed out') | ||
2120 | 276 | return False | ||
2121 | 277 | |||
2122 | 278 | return True | ||
2123 | 279 | |||
2124 | 280 | def create_instance(self, nova, image_name, instance_name, flavor): | ||
2125 | 281 | """Create the specified instance.""" | ||
2126 | 282 | self.log.debug('Creating instance ' | ||
2127 | 283 | '({}|{}|{})'.format(instance_name, image_name, flavor)) | ||
2128 | 284 | image = nova.images.find(name=image_name) | ||
2129 | 285 | flavor = nova.flavors.find(name=flavor) | ||
2130 | 286 | instance = nova.servers.create(name=instance_name, image=image, | ||
2131 | 287 | flavor=flavor) | ||
2132 | 288 | |||
2133 | 289 | count = 1 | ||
2134 | 290 | status = instance.status | ||
2135 | 291 | while status != 'ACTIVE' and count < 60: | ||
2136 | 292 | time.sleep(3) | ||
2137 | 293 | instance = nova.servers.get(instance.id) | ||
2138 | 294 | status = instance.status | ||
2139 | 295 | self.log.debug('instance status: {}'.format(status)) | ||
2140 | 296 | count += 1 | ||
2141 | 297 | |||
2142 | 298 | if status != 'ACTIVE': | ||
2143 | 299 | self.log.error('instance creation timed out') | ||
2144 | 300 | return None | ||
2145 | 301 | |||
2146 | 302 | return instance | ||
2147 | 303 | |||
2148 | 304 | def delete_instance(self, nova, instance): | ||
2149 | 305 | """Delete the specified instance.""" | ||
2150 | 306 | |||
2151 | 307 | # /!\ DEPRECATION WARNING | ||
2152 | 308 | self.log.warn('/!\\ DEPRECATION WARNING: use ' | ||
2153 | 309 | 'delete_resource instead of delete_instance.') | ||
2154 | 310 | self.log.debug('Deleting instance ({})...'.format(instance)) | ||
2155 | 311 | num_before = len(list(nova.servers.list())) | ||
2156 | 312 | nova.servers.delete(instance) | ||
2157 | 313 | |||
2158 | 314 | count = 1 | ||
2159 | 315 | num_after = len(list(nova.servers.list())) | ||
2160 | 316 | while num_after != (num_before - 1) and count < 10: | ||
2161 | 317 | time.sleep(3) | ||
2162 | 318 | num_after = len(list(nova.servers.list())) | ||
2163 | 319 | self.log.debug('number of instances: {}'.format(num_after)) | ||
2164 | 320 | count += 1 | ||
2165 | 321 | |||
2166 | 322 | if num_after != (num_before - 1): | ||
2167 | 323 | self.log.error('instance deletion timed out') | ||
2168 | 324 | return False | ||
2169 | 325 | |||
2170 | 326 | return True | ||
2171 | 327 | |||
2172 | 328 | def create_or_get_keypair(self, nova, keypair_name="testkey"): | ||
2173 | 329 | """Create a new keypair, or return pointer if it already exists.""" | ||
2174 | 330 | try: | ||
2175 | 331 | _keypair = nova.keypairs.get(keypair_name) | ||
2176 | 332 | self.log.debug('Keypair ({}) already exists, ' | ||
2177 | 333 | 'using it.'.format(keypair_name)) | ||
2178 | 334 | return _keypair | ||
2179 | 335 | except: | ||
2180 | 336 | self.log.debug('Keypair ({}) does not exist, ' | ||
2181 | 337 | 'creating it.'.format(keypair_name)) | ||
2182 | 338 | |||
2183 | 339 | _keypair = nova.keypairs.create(name=keypair_name) | ||
2184 | 340 | return _keypair | ||
2185 | 341 | |||
2186 | 342 | def delete_resource(self, resource, resource_id, | ||
2187 | 343 | msg="resource", max_wait=120): | ||
2188 | 344 | """Delete one openstack resource, such as one instance, keypair, | ||
2189 | 345 | image, volume, stack, etc., and confirm deletion within max wait time. | ||
2190 | 346 | |||
2191 | 347 | :param resource: pointer to os resource type, ex:glance_client.images | ||
2192 | 348 | :param resource_id: unique name or id for the openstack resource | ||
2193 | 349 | :param msg: text to identify purpose in logging | ||
2194 | 350 | :param max_wait: maximum wait time in seconds | ||
2195 | 351 | :returns: True if successful, otherwise False | ||
2196 | 352 | """ | ||
2197 | 353 | num_before = len(list(resource.list())) | ||
2198 | 354 | resource.delete(resource_id) | ||
2199 | 355 | |||
2200 | 356 | tries = 0 | ||
2201 | 357 | num_after = len(list(resource.list())) | ||
2202 | 358 | while num_after != (num_before - 1) and tries < (max_wait / 4): | ||
2203 | 359 | self.log.debug('{} delete check: ' | ||
2204 | 360 | '{} [{}:{}] {}'.format(msg, tries, | ||
2205 | 361 | num_before, | ||
2206 | 362 | num_after, | ||
2207 | 363 | resource_id)) | ||
2208 | 364 | time.sleep(4) | ||
2209 | 365 | num_after = len(list(resource.list())) | ||
2210 | 366 | tries += 1 | ||
2211 | 367 | |||
2212 | 368 | self.log.debug('{}: expected, actual count = {}, ' | ||
2213 | 369 | '{}'.format(msg, num_before - 1, num_after)) | ||
2214 | 370 | |||
2215 | 371 | if num_after == (num_before - 1): | ||
2216 | 372 | return True | ||
2217 | 373 | else: | ||
2218 | 374 | self.log.error('{} delete timed out'.format(msg)) | ||
2219 | 375 | return False | ||
2220 | 376 | |||
2221 | 377 | def resource_reaches_status(self, resource, resource_id, | ||
2222 | 378 | expected_stat='available', | ||
2223 | 379 | msg='resource', max_wait=120): | ||
2224 | 380 | """Wait for an openstack resources status to reach an | ||
2225 | 381 | expected status within a specified time. Useful to confirm that | ||
2226 | 382 | nova instances, cinder vols, snapshots, glance images, heat stacks | ||
2227 | 383 | and other resources eventually reach the expected status. | ||
2228 | 384 | |||
2229 | 385 | :param resource: pointer to os resource type, ex: heat_client.stacks | ||
2230 | 386 | :param resource_id: unique id for the openstack resource | ||
2231 | 387 | :param expected_stat: status to expect resource to reach | ||
2232 | 388 | :param msg: text to identify purpose in logging | ||
2233 | 389 | :param max_wait: maximum wait time in seconds | ||
2234 | 390 | :returns: True if successful, False if status is not reached | ||
2235 | 391 | """ | ||
2236 | 392 | |||
2237 | 393 | tries = 0 | ||
2238 | 394 | resource_stat = resource.get(resource_id).status | ||
2239 | 395 | while resource_stat != expected_stat and tries < (max_wait / 4): | ||
2240 | 396 | self.log.debug('{} status check: ' | ||
2241 | 397 | '{} [{}:{}] {}'.format(msg, tries, | ||
2242 | 398 | resource_stat, | ||
2243 | 399 | expected_stat, | ||
2244 | 400 | resource_id)) | ||
2245 | 401 | time.sleep(4) | ||
2246 | 402 | resource_stat = resource.get(resource_id).status | ||
2247 | 403 | tries += 1 | ||
2248 | 404 | |||
2249 | 405 | self.log.debug('{}: expected, actual status = {}, ' | ||
2250 | 406 | '{}'.format(msg, resource_stat, expected_stat)) | ||
2251 | 407 | |||
2252 | 408 | if resource_stat == expected_stat: | ||
2253 | 409 | return True | ||
2254 | 410 | else: | ||
2255 | 411 | self.log.debug('{} never reached expected status: ' | ||
2256 | 412 | '{}'.format(resource_id, expected_stat)) | ||
2257 | 413 | return False | ||
2258 | 0 | 414 | ||
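`delete_resource` and `resource_reaches_status` both follow the same poll-sleep-recheck pattern: sample the state, sleep a fixed interval, and give up after roughly `max_wait` seconds. A minimal generic form of that loop (an illustrative sketch, not part of the synced helpers):

```python
import time


def wait_until(predicate, max_wait=120, interval=4, log=print):
    """Re-check `predicate` every `interval` seconds until it holds or
    roughly `max_wait` seconds elapse; the same shape as the loops in
    delete_resource and resource_reaches_status above."""
    tries = 0
    while tries < max_wait / interval:
        if predicate():
            return True
        log('check {}: condition not yet met'.format(tries))
        time.sleep(interval)
        tries += 1
    # One final sample so a condition that became true during the last
    # sleep is still reported as success.
    return predicate()


# Example with a fake resource that becomes ready on the third poll:
calls = {'n': 0}

def becomes_active():
    calls['n'] += 1
    return calls['n'] >= 3

print(wait_until(becomes_active, max_wait=1, interval=0.01))  # True
```

The concrete helpers add the before/after resource counts (for deletes) and the status string comparison (for readiness) on top of this skeleton.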
2259 | === added directory 'tests/files' | |||
2260 | === added file 'tests/files/hot_hello_world.yaml' | |||
2261 | --- tests/files/hot_hello_world.yaml 1970-01-01 00:00:00 +0000 | |||
2262 | +++ tests/files/hot_hello_world.yaml 2015-06-11 15:38:49 +0000 | |||
2263 | @@ -0,0 +1,66 @@ | |||
2264 | 1 | # | ||
2265 | 2 | # This is a hello world HOT template just defining a single compute | ||
2266 | 3 | # server. | ||
2267 | 4 | # | ||
2268 | 5 | heat_template_version: 2013-05-23 | ||
2269 | 6 | |||
2270 | 7 | description: > | ||
2271 | 8 | Hello world HOT template that just defines a single server. | ||
2272 | 9 | Contains just base features to verify base HOT support. | ||
2273 | 10 | |||
2274 | 11 | parameters: | ||
2275 | 12 | key_name: | ||
2276 | 13 | type: string | ||
2277 | 14 | description: Name of an existing key pair to use for the server | ||
2278 | 15 | constraints: | ||
2279 | 16 | - custom_constraint: nova.keypair | ||
2280 | 17 | flavor: | ||
2281 | 18 | type: string | ||
2282 | 19 | description: Flavor for the server to be created | ||
2283 | 20 | default: m1.tiny | ||
2284 | 21 | constraints: | ||
2285 | 22 | - custom_constraint: nova.flavor | ||
2286 | 23 | image: | ||
2287 | 24 | type: string | ||
2288 | 25 | description: Image ID or image name to use for the server | ||
2289 | 26 | constraints: | ||
2290 | 27 | - custom_constraint: glance.image | ||
2291 | 28 | admin_pass: | ||
2292 | 29 | type: string | ||
2293 | 30 | description: Admin password | ||
2294 | 31 | hidden: true | ||
2295 | 32 | constraints: | ||
2296 | 33 | - length: { min: 6, max: 8 } | ||
2297 | 34 | description: Password length must be between 6 and 8 characters | ||
2298 | 35 | - allowed_pattern: "[a-zA-Z0-9]+" | ||
2299 | 36 | description: Password must consist of characters and numbers only | ||
2300 | 37 | - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*" | ||
2301 | 38 | description: Password must start with an uppercase character | ||
2302 | 39 | db_port: | ||
2303 | 40 | type: number | ||
2304 | 41 | description: Database port number | ||
2305 | 42 | default: 50000 | ||
2306 | 43 | constraints: | ||
2307 | 44 | - range: { min: 40000, max: 60000 } | ||
2308 | 45 | description: Port number must be between 40000 and 60000 | ||
2309 | 46 | |||
2310 | 47 | resources: | ||
2311 | 48 | server: | ||
2312 | 49 | type: OS::Nova::Server | ||
2313 | 50 | properties: | ||
2314 | 51 | key_name: { get_param: key_name } | ||
2315 | 52 | image: { get_param: image } | ||
2316 | 53 | flavor: { get_param: flavor } | ||
2317 | 54 | admin_pass: { get_param: admin_pass } | ||
2318 | 55 | user_data: | ||
2319 | 56 | str_replace: | ||
2320 | 57 | template: | | ||
2321 | 58 | #!/bin/bash | ||
2322 | 59 | echo db_port | ||
2323 | 60 | params: | ||
2324 | 61 | db_port: { get_param: db_port } | ||
2325 | 62 | |||
2326 | 63 | outputs: | ||
2327 | 64 | server_networks: | ||
2328 | 65 | description: The networks of the deployed server | ||
2329 | 66 | value: { get_attr: [server, networks] } | ||
2330 | 0 | 67 | ||
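The `admin_pass` parameter above carries three stacked constraints (length 6-8, alphanumeric only, leading uppercase letter), and the amulet test has to supply a value satisfying all of them before Heat will accept the stack. A sketch checking a candidate parameter set locally; `valid_admin_pass`, the image name, and the keypair name are hypothetical, and the `stacks.create` call is shown only as a comment:

```python
import re


def valid_admin_pass(password):
    """Re-implement the template's admin_pass constraints:
    length 6-8, [a-zA-Z0-9]+ only, starting with an uppercase letter."""
    return (6 <= len(password) <= 8 and
            re.fullmatch(r'[a-zA-Z0-9]+', password) is not None and
            re.fullmatch(r'[A-Z]+[a-zA-Z0-9]*', password) is not None)


params = {
    'key_name': 'testkey',      # keypair created via create_or_get_keypair
    'image': 'cirros-image-1',  # assumed glance image name
    'flavor': 'm1.tiny',
    'admin_pass': 'Abc0123',
}
assert valid_admin_pass(params['admin_pass'])

# With a client from authenticate_heat_admin() above, the stack would be
# launched roughly like this (not executed here):
#   heat.stacks.create(stack_name='hello_world',
#                      template=open('tests/files/hot_hello_world.yaml').read(),
#                      parameters=params)
```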
2331 | === added file 'tests/tests.yaml' | |||
2332 | --- tests/tests.yaml 1970-01-01 00:00:00 +0000 | |||
2333 | +++ tests/tests.yaml 2015-06-11 15:38:49 +0000 | |||
2334 | @@ -0,0 +1,15 @@ | |||
2335 | 1 | bootstrap: true | ||
2336 | 2 | reset: true | ||
2337 | 3 | virtualenv: true | ||
2338 | 4 | makefile: | ||
2339 | 5 | - lint | ||
2340 | 6 | - unit_test | ||
2341 | 7 | sources: | ||
2342 | 8 | - ppa:juju/stable | ||
2343 | 9 | packages: | ||
2344 | 10 | - amulet | ||
2345 | 11 | - python-amulet | ||
2346 | 12 | - python-distro-info | ||
2347 | 13 | - python-glanceclient | ||
2348 | 14 | - python-keystoneclient | ||
2349 | 15 | - python-novaclient |
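The numbered scripts added under `tests/` encode a series and release pair in their filenames (e.g. `015-basic-trusty-icehouse`), so a test runner can pick the subset matching the node's Ubuntu series. A sketch of that selection, using the filenames from this merge; the loop itself is illustrative and not part of the charm tooling:

```shell
# Select the tests/ scripts whose filename matches a given series.
tests="014-basic-precise-icehouse 015-basic-trusty-icehouse \
016-basic-trusty-juno 017-basic-trusty-kilo \
018-basic-utopic-juno 019-basic-vivid-kilo"
series=trusty
selected=""
for t in $tests; do
    case "$t" in
        *-"$series"-*) selected="$selected $t" ;;
    esac
done
echo "$selected"
```

For the trusty series this selects the icehouse, juno, and kilo scripts and skips the precise, utopic, and vivid ones.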
charm_unit_test #4862 heat-next for 1chb1n mp258105
UNIT OK: passed
Build: http://10.245.162.77:8080/job/charm_unit_test/4862/