Merge lp:~1chb1n/charms/trusty/ceph-radosgw/next-amulet-update into lp:~openstack-charmers-archive/charms/trusty/ceph-radosgw/next
Status: Merged
Merged at revision: 40
Proposed branch: lp:~1chb1n/charms/trusty/ceph-radosgw/next-amulet-update
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph-radosgw/next
Diff against target: 2225 lines (+1213/-196), 18 files modified:
  Makefile (+7/-7)
  hooks/charmhelpers/contrib/hahelpers/cluster.py (+24/-5)
  hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+6/-2)
  hooks/charmhelpers/contrib/openstack/amulet/utils.py (+122/-3)
  hooks/charmhelpers/contrib/openstack/context.py (+1/-1)
  hooks/charmhelpers/contrib/openstack/neutron.py (+6/-4)
  hooks/charmhelpers/contrib/openstack/utils.py (+21/-8)
  hooks/charmhelpers/contrib/python/packages.py (+2/-0)
  hooks/charmhelpers/core/hookenv.py (+92/-36)
  hooks/charmhelpers/core/host.py (+24/-6)
  hooks/charmhelpers/core/services/base.py (+12/-9)
  metadata.yaml (+4/-1)
  tests/00-setup (+6/-2)
  tests/basic_deployment.py (+246/-47)
  tests/charmhelpers/contrib/amulet/utils.py (+219/-9)
  tests/charmhelpers/contrib/openstack/amulet/deployment.py (+42/-5)
  tests/charmhelpers/contrib/openstack/amulet/utils.py (+361/-51)
  tests/tests.yaml (+18/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/ceph-radosgw/next-amulet-update
Related bugs: none
Reviewer: Corey Bryant (community), Approve
Review via email: mp+262599@code.launchpad.net
Commit message
amulet tests - update test coverage, enable vivid, prep for wily, add basic functional checks
sync tests/charmhelpers
sync hooks/charmhelpers
Description of the change
amulet tests - update test coverage, enable vivid, prep for wily, add basic functional checks
sync tests/charmhelpers
sync hooks/charmhelpers
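For context on the "basic functional checks": rather than only inspecting relation data and config files, the updated tests/basic_deployment.py exercises the deployed gateway through its Swift-compatible API. A minimal sketch of that pattern follows; the authenticate_swift_user helper lives in the synced charm-helpers and is not shown in the preview diff, so its body here is an assumption modeled on the sibling keystone/glance helpers visible below, and check_swift_api is an illustrative name rather than the actual test method.

    import swiftclient

    def authenticate_swift_user(keystone, user, password, tenant):
        # Resolve the public identity endpoint from the keystone client's
        # service catalog, then open a v2-auth connection against the
        # radosgw's Swift-compatible endpoint.
        ep = keystone.service_catalog.url_for(service_type='identity',
                                              endpoint_type='publicURL')
        return swiftclient.Connection(authurl=ep, user=user, key=password,
                                      tenant_name=tenant, auth_version='2.0')

    def check_swift_api(swift):
        # A healthy radosgw answers an account GET with standard Swift
        # headers and, on a fresh deployment, an empty container list.
        headers, containers = swift.get_account()
        assert 'content-type' in headers
        assert containers == []

The credentials passed to the helper come from the ceph-radosgw:identity-service relation data, as seen in _initialize_tests() in the diff below.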
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5171 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4750 ceph-radosgw-next for 1chb1n mp262599
AMULET OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5542 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5174 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4753 ceph-radosgw-next for 1chb1n mp262599
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote:
FYI, undercloud issue caused test failure for #4753.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4780 ceph-radosgw-next for 1chb1n mp262599
AMULET OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5607 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5239 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4783 ceph-radosgw-next for 1chb1n mp262599
AMULET OK: passed
Build: http://
Ryan Beisner (1chb1n) wrote:
Flipped back to WIP while the tests/charmhelpers sync is still in progress. Everything else here is ready for review and input.
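(For reference, the tests/charmhelpers payload mentioned above is produced by the Makefile's sync target, which runs bin/charm_helpers_sync.py against a control file. The control file itself is not part of this diff; a typical charm-helpers-tests.yaml for an OpenStack charm of this vintage is sketched below, and the exact include list is an assumption.)

    branch: lp:charm-helpers
    destination: tests/charmhelpers
    include:
      - contrib.amulet
      - contrib.openstack.amulet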
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5674 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5306 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4857 ceph-radosgw-next for 1chb1n mp262599
AMULET OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5682 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5314 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed
48. By Ryan Beisner
Update publish target in makefile; update 00-setup and tests.yaml for dependencies.
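The new tests/tests.yaml (+18/-0) is not shown in the preview diff; for context, a bundletester control file of this era declares test-time apt dependencies and make targets roughly as sketched below. The exact contents are an assumption, with the package list mirroring the tests/00-setup change in the diff.

    bootstrap: true
    reset: true
    virtualenv: true
    makefile:
      - lint
      - test
    sources:
      - ppa:juju/stable
    packages:
      - amulet
      - python-cinderclient
      - python-distro-info
      - python-glanceclient
      - python-heatclient
      - python-keystoneclient
      - python-novaclient
      - python-swiftclient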
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5686 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5318 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4869 ceph-radosgw-next for 1chb1n mp262599
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
49. By Ryan Beisner
fix 00-setup
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5690 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5322 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed
50. By Ryan Beisner
update test
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5695 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5327 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed
Corey Bryant (corey.bryant) wrote:
Looks good. I'll approve once the corresponding charm-helpers (c-h) change lands and these amulet tests pass.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4873 ceph-radosgw-next for 1chb1n mp262599
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4878 ceph-radosgw-next for 1chb1n mp262599
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
Timeout occurred (2700s), printing juju status.
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4883 ceph-radosgw-next for 1chb1n mp262599
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote:
A test rig issue is causing bootstrap failures; will re-test when that's resolved.
51. By Ryan Beisner
update tags for consistency with other openstack charms
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5700 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5332 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4888 ceph-radosgw-next for 1chb1n mp262599
AMULET OK: passed
Build: http://
Corey Bryant (corey.bryant): Approve
Preview Diff
1 | === modified file 'Makefile' |
2 | --- Makefile 2015-04-16 21:32:01 +0000 |
3 | +++ Makefile 2015-07-01 14:47:24 +0000 |
4 | @@ -2,17 +2,17 @@ |
5 | PYTHON := /usr/bin/env python |
6 | |
7 | lint: |
8 | - @flake8 --exclude hooks/charmhelpers hooks tests unit_tests |
9 | + @flake8 --exclude hooks/charmhelpers,tests/charmhelpers \ |
10 | + hooks tests unit_tests |
11 | @charm proof |
12 | |
13 | -unit_test: |
14 | +test: |
15 | + @# Bundletester expects unit tests here. |
16 | + @echo Starting unit tests... |
17 | @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests |
18 | |
19 | -test: |
20 | +functional_test: |
21 | @echo Starting Amulet tests... |
22 | - # coreycb note: The -v should only be temporary until Amulet sends |
23 | - # raise_status() messages to stderr: |
24 | - # https://bugs.launchpad.net/amulet/+bug/1320357 |
25 | @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700 |
26 | |
27 | bin/charm_helpers_sync.py: |
28 | @@ -24,6 +24,6 @@ |
29 | @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml |
30 | @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml |
31 | |
32 | -publish: lint |
33 | +publish: lint test |
34 | bzr push lp:charms/ceph-radosgw |
35 | bzr push lp:charms/trusty/ceph-radosgw |
36 | |
37 | === modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py' |
38 | --- hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-06-04 23:06:40 +0000 |
39 | +++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-07-01 14:47:24 +0000 |
40 | @@ -44,6 +44,7 @@ |
41 | ERROR, |
42 | WARNING, |
43 | unit_get, |
44 | + is_leader as juju_is_leader |
45 | ) |
46 | from charmhelpers.core.decorators import ( |
47 | retry_on_exception, |
48 | @@ -63,17 +64,30 @@ |
49 | pass |
50 | |
51 | |
52 | +class CRMDCNotFound(Exception): |
53 | + pass |
54 | + |
55 | + |
56 | def is_elected_leader(resource): |
57 | """ |
58 | Returns True if the charm executing this is the elected cluster leader. |
59 | |
60 | It relies on two mechanisms to determine leadership: |
61 | - 1. If the charm is part of a corosync cluster, call corosync to |
62 | + 1. If juju is sufficiently new and leadership election is supported, |
63 | + the is_leader command will be used. |
64 | + 2. If the charm is part of a corosync cluster, call corosync to |
65 | determine leadership. |
66 | - 2. If the charm is not part of a corosync cluster, the leader is |
67 | + 3. If the charm is not part of a corosync cluster, the leader is |
68 | determined as being "the alive unit with the lowest unit numer". In |
69 | other words, the oldest surviving unit. |
70 | """ |
71 | + try: |
72 | + return juju_is_leader() |
73 | + except NotImplementedError: |
74 | + log('Juju leadership election feature not enabled' |
75 | + ', using fallback support', |
76 | + level=WARNING) |
77 | + |
78 | if is_clustered(): |
79 | if not is_crm_leader(resource): |
80 | log('Deferring action to CRM leader.', level=INFO) |
81 | @@ -106,8 +120,9 @@ |
82 | status = subprocess.check_output(cmd, stderr=subprocess.STDOUT) |
83 | if not isinstance(status, six.text_type): |
84 | status = six.text_type(status, "utf-8") |
85 | - except subprocess.CalledProcessError: |
86 | - return False |
87 | + except subprocess.CalledProcessError as ex: |
88 | + raise CRMDCNotFound(str(ex)) |
89 | + |
90 | current_dc = '' |
91 | for line in status.split('\n'): |
92 | if line.startswith('Current DC'): |
93 | @@ -115,10 +130,14 @@ |
94 | current_dc = line.split(':')[1].split()[0] |
95 | if current_dc == get_unit_hostname(): |
96 | return True |
97 | + elif current_dc == 'NONE': |
98 | + raise CRMDCNotFound('Current DC: NONE') |
99 | + |
100 | return False |
101 | |
102 | |
103 | -@retry_on_exception(5, base_delay=2, exc_type=CRMResourceNotFound) |
104 | +@retry_on_exception(5, base_delay=2, |
105 | + exc_type=(CRMResourceNotFound, CRMDCNotFound)) |
106 | def is_crm_leader(resource, retry=False): |
107 | """ |
108 | Returns True if the charm calling this is the elected corosync leader, |
109 | |
110 | === modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py' |
111 | --- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-04 23:06:40 +0000 |
112 | +++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-01 14:47:24 +0000 |
113 | @@ -110,7 +110,8 @@ |
114 | (self.precise_essex, self.precise_folsom, self.precise_grizzly, |
115 | self.precise_havana, self.precise_icehouse, |
116 | self.trusty_icehouse, self.trusty_juno, self.utopic_juno, |
117 | - self.trusty_kilo, self.vivid_kilo) = range(10) |
118 | + self.trusty_kilo, self.vivid_kilo, self.trusty_liberty, |
119 | + self.wily_liberty) = range(12) |
120 | |
121 | releases = { |
122 | ('precise', None): self.precise_essex, |
123 | @@ -121,8 +122,10 @@ |
124 | ('trusty', None): self.trusty_icehouse, |
125 | ('trusty', 'cloud:trusty-juno'): self.trusty_juno, |
126 | ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo, |
127 | + ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty, |
128 | ('utopic', None): self.utopic_juno, |
129 | - ('vivid', None): self.vivid_kilo} |
130 | + ('vivid', None): self.vivid_kilo, |
131 | + ('wily', None): self.wily_liberty} |
132 | return releases[(self.series, self.openstack)] |
133 | |
134 | def _get_openstack_release_string(self): |
135 | @@ -138,6 +141,7 @@ |
136 | ('trusty', 'icehouse'), |
137 | ('utopic', 'juno'), |
138 | ('vivid', 'kilo'), |
139 | + ('wily', 'liberty'), |
140 | ]) |
141 | if self.openstack: |
142 | os_origin = self.openstack.split(':')[1] |
143 | |
144 | === modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py' |
145 | --- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-01-26 11:53:19 +0000 |
146 | +++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-01 14:47:24 +0000 |
147 | @@ -16,15 +16,15 @@ |
148 | |
149 | import logging |
150 | import os |
151 | +import six |
152 | import time |
153 | import urllib |
154 | |
155 | import glanceclient.v1.client as glance_client |
156 | +import heatclient.v1.client as heat_client |
157 | import keystoneclient.v2_0 as keystone_client |
158 | import novaclient.v1_1.client as nova_client |
159 | |
160 | -import six |
161 | - |
162 | from charmhelpers.contrib.amulet.utils import ( |
163 | AmuletUtils |
164 | ) |
165 | @@ -37,7 +37,7 @@ |
166 | """OpenStack amulet utilities. |
167 | |
168 | This class inherits from AmuletUtils and has additional support |
169 | - that is specifically for use by OpenStack charms. |
170 | + that is specifically for use by OpenStack charm tests. |
171 | """ |
172 | |
173 | def __init__(self, log_level=ERROR): |
174 | @@ -51,6 +51,8 @@ |
175 | Validate actual endpoint data vs expected endpoint data. The ports |
176 | are used to find the matching endpoint. |
177 | """ |
178 | + self.log.debug('Validating endpoint data...') |
179 | + self.log.debug('actual: {}'.format(repr(endpoints))) |
180 | found = False |
181 | for ep in endpoints: |
182 | self.log.debug('endpoint: {}'.format(repr(ep))) |
183 | @@ -77,6 +79,7 @@ |
184 | Validate a list of actual service catalog endpoints vs a list of |
185 | expected service catalog endpoints. |
186 | """ |
187 | + self.log.debug('Validating service catalog endpoint data...') |
188 | self.log.debug('actual: {}'.format(repr(actual))) |
189 | for k, v in six.iteritems(expected): |
190 | if k in actual: |
191 | @@ -93,6 +96,7 @@ |
192 | Validate a list of actual tenant data vs list of expected tenant |
193 | data. |
194 | """ |
195 | + self.log.debug('Validating tenant data...') |
196 | self.log.debug('actual: {}'.format(repr(actual))) |
197 | for e in expected: |
198 | found = False |
199 | @@ -114,6 +118,7 @@ |
200 | Validate a list of actual role data vs a list of expected role |
201 | data. |
202 | """ |
203 | + self.log.debug('Validating role data...') |
204 | self.log.debug('actual: {}'.format(repr(actual))) |
205 | for e in expected: |
206 | found = False |
207 | @@ -134,6 +139,7 @@ |
208 | Validate a list of actual user data vs a list of expected user |
209 | data. |
210 | """ |
211 | + self.log.debug('Validating user data...') |
212 | self.log.debug('actual: {}'.format(repr(actual))) |
213 | for e in expected: |
214 | found = False |
215 | @@ -155,17 +161,20 @@ |
216 | |
217 | Validate a list of actual flavors vs a list of expected flavors. |
218 | """ |
219 | + self.log.debug('Validating flavor data...') |
220 | self.log.debug('actual: {}'.format(repr(actual))) |
221 | act = [a.name for a in actual] |
222 | return self._validate_list_data(expected, act) |
223 | |
224 | def tenant_exists(self, keystone, tenant): |
225 | """Return True if tenant exists.""" |
226 | + self.log.debug('Checking if tenant exists ({})...'.format(tenant)) |
227 | return tenant in [t.name for t in keystone.tenants.list()] |
228 | |
229 | def authenticate_keystone_admin(self, keystone_sentry, user, password, |
230 | tenant): |
231 | """Authenticates admin user with the keystone admin endpoint.""" |
232 | + self.log.debug('Authenticating keystone admin...') |
233 | unit = keystone_sentry |
234 | service_ip = unit.relation('shared-db', |
235 | 'mysql:shared-db')['private-address'] |
236 | @@ -175,6 +184,7 @@ |
237 | |
238 | def authenticate_keystone_user(self, keystone, user, password, tenant): |
239 | """Authenticates a regular user with the keystone public endpoint.""" |
240 | + self.log.debug('Authenticating keystone user ({})...'.format(user)) |
241 | ep = keystone.service_catalog.url_for(service_type='identity', |
242 | endpoint_type='publicURL') |
243 | return keystone_client.Client(username=user, password=password, |
244 | @@ -182,12 +192,21 @@ |
245 | |
246 | def authenticate_glance_admin(self, keystone): |
247 | """Authenticates admin user with glance.""" |
248 | + self.log.debug('Authenticating glance admin...') |
249 | ep = keystone.service_catalog.url_for(service_type='image', |
250 | endpoint_type='adminURL') |
251 | return glance_client.Client(ep, token=keystone.auth_token) |
252 | |
253 | + def authenticate_heat_admin(self, keystone): |
254 | + """Authenticates the admin user with heat.""" |
255 | + self.log.debug('Authenticating heat admin...') |
256 | + ep = keystone.service_catalog.url_for(service_type='orchestration', |
257 | + endpoint_type='publicURL') |
258 | + return heat_client.Client(endpoint=ep, token=keystone.auth_token) |
259 | + |
260 | def authenticate_nova_user(self, keystone, user, password, tenant): |
261 | """Authenticates a regular user with nova-api.""" |
262 | + self.log.debug('Authenticating nova user ({})...'.format(user)) |
263 | ep = keystone.service_catalog.url_for(service_type='identity', |
264 | endpoint_type='publicURL') |
265 | return nova_client.Client(username=user, api_key=password, |
266 | @@ -195,6 +214,7 @@ |
267 | |
268 | def create_cirros_image(self, glance, image_name): |
269 | """Download the latest cirros image and upload it to glance.""" |
270 | + self.log.debug('Creating glance image ({})...'.format(image_name)) |
271 | http_proxy = os.getenv('AMULET_HTTP_PROXY') |
272 | self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy)) |
273 | if http_proxy: |
274 | @@ -235,6 +255,11 @@ |
275 | |
276 | def delete_image(self, glance, image): |
277 | """Delete the specified image.""" |
278 | + |
279 | + # /!\ DEPRECATION WARNING |
280 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
281 | + 'delete_resource instead of delete_image.') |
282 | + self.log.debug('Deleting glance image ({})...'.format(image)) |
283 | num_before = len(list(glance.images.list())) |
284 | glance.images.delete(image) |
285 | |
286 | @@ -254,6 +279,8 @@ |
287 | |
288 | def create_instance(self, nova, image_name, instance_name, flavor): |
289 | """Create the specified instance.""" |
290 | + self.log.debug('Creating instance ' |
291 | + '({}|{}|{})'.format(instance_name, image_name, flavor)) |
292 | image = nova.images.find(name=image_name) |
293 | flavor = nova.flavors.find(name=flavor) |
294 | instance = nova.servers.create(name=instance_name, image=image, |
295 | @@ -276,6 +303,11 @@ |
296 | |
297 | def delete_instance(self, nova, instance): |
298 | """Delete the specified instance.""" |
299 | + |
300 | + # /!\ DEPRECATION WARNING |
301 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
302 | + 'delete_resource instead of delete_instance.') |
303 | + self.log.debug('Deleting instance ({})...'.format(instance)) |
304 | num_before = len(list(nova.servers.list())) |
305 | nova.servers.delete(instance) |
306 | |
307 | @@ -292,3 +324,90 @@ |
308 | return False |
309 | |
310 | return True |
311 | + |
312 | + def create_or_get_keypair(self, nova, keypair_name="testkey"): |
313 | + """Create a new keypair, or return pointer if it already exists.""" |
314 | + try: |
315 | + _keypair = nova.keypairs.get(keypair_name) |
316 | + self.log.debug('Keypair ({}) already exists, ' |
317 | + 'using it.'.format(keypair_name)) |
318 | + return _keypair |
319 | + except: |
320 | + self.log.debug('Keypair ({}) does not exist, ' |
321 | + 'creating it.'.format(keypair_name)) |
322 | + |
323 | + _keypair = nova.keypairs.create(name=keypair_name) |
324 | + return _keypair |
325 | + |
326 | + def delete_resource(self, resource, resource_id, |
327 | + msg="resource", max_wait=120): |
328 | + """Delete one openstack resource, such as one instance, keypair, |
329 | + image, volume, stack, etc., and confirm deletion within max wait time. |
330 | + |
331 | + :param resource: pointer to os resource type, ex:glance_client.images |
332 | + :param resource_id: unique name or id for the openstack resource |
333 | + :param msg: text to identify purpose in logging |
334 | + :param max_wait: maximum wait time in seconds |
335 | + :returns: True if successful, otherwise False |
336 | + """ |
337 | + num_before = len(list(resource.list())) |
338 | + resource.delete(resource_id) |
339 | + |
340 | + tries = 0 |
341 | + num_after = len(list(resource.list())) |
342 | + while num_after != (num_before - 1) and tries < (max_wait / 4): |
343 | + self.log.debug('{} delete check: ' |
344 | + '{} [{}:{}] {}'.format(msg, tries, |
345 | + num_before, |
346 | + num_after, |
347 | + resource_id)) |
348 | + time.sleep(4) |
349 | + num_after = len(list(resource.list())) |
350 | + tries += 1 |
351 | + |
352 | + self.log.debug('{}: expected, actual count = {}, ' |
353 | + '{}'.format(msg, num_before - 1, num_after)) |
354 | + |
355 | + if num_after == (num_before - 1): |
356 | + return True |
357 | + else: |
358 | + self.log.error('{} delete timed out'.format(msg)) |
359 | + return False |
360 | + |
361 | + def resource_reaches_status(self, resource, resource_id, |
362 | + expected_stat='available', |
363 | + msg='resource', max_wait=120): |
364 | + """Wait for an openstack resources status to reach an |
365 | + expected status within a specified time. Useful to confirm that |
366 | + nova instances, cinder vols, snapshots, glance images, heat stacks |
367 | + and other resources eventually reach the expected status. |
368 | + |
369 | + :param resource: pointer to os resource type, ex: heat_client.stacks |
370 | + :param resource_id: unique id for the openstack resource |
371 | + :param expected_stat: status to expect resource to reach |
372 | + :param msg: text to identify purpose in logging |
373 | + :param max_wait: maximum wait time in seconds |
374 | + :returns: True if successful, False if status is not reached |
375 | + """ |
376 | + |
377 | + tries = 0 |
378 | + resource_stat = resource.get(resource_id).status |
379 | + while resource_stat != expected_stat and tries < (max_wait / 4): |
380 | + self.log.debug('{} status check: ' |
381 | + '{} [{}:{}] {}'.format(msg, tries, |
382 | + resource_stat, |
383 | + expected_stat, |
384 | + resource_id)) |
385 | + time.sleep(4) |
386 | + resource_stat = resource.get(resource_id).status |
387 | + tries += 1 |
388 | + |
389 | + self.log.debug('{}: expected, actual status = {}, ' |
390 | + '{}'.format(msg, resource_stat, expected_stat)) |
391 | + |
392 | + if resource_stat == expected_stat: |
393 | + return True |
394 | + else: |
395 | + self.log.debug('{} never reached expected status: ' |
396 | + '{}'.format(resource_id, expected_stat)) |
397 | + return False |
398 | |
399 | === modified file 'hooks/charmhelpers/contrib/openstack/context.py' |
400 | --- hooks/charmhelpers/contrib/openstack/context.py 2015-04-16 21:32:59 +0000 |
401 | +++ hooks/charmhelpers/contrib/openstack/context.py 2015-07-01 14:47:24 +0000 |
402 | @@ -240,7 +240,7 @@ |
403 | if self.relation_prefix: |
404 | password_setting = self.relation_prefix + '_password' |
405 | |
406 | - for rid in relation_ids('shared-db'): |
407 | + for rid in relation_ids(self.interfaces[0]): |
408 | for unit in related_units(rid): |
409 | rdata = relation_get(rid=rid, unit=unit) |
410 | host = rdata.get('db_host') |
411 | |
412 | === modified file 'hooks/charmhelpers/contrib/openstack/neutron.py' |
413 | --- hooks/charmhelpers/contrib/openstack/neutron.py 2015-06-04 23:06:40 +0000 |
414 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2015-07-01 14:47:24 +0000 |
415 | @@ -172,14 +172,16 @@ |
416 | 'services': ['calico-felix', |
417 | 'bird', |
418 | 'neutron-dhcp-agent', |
419 | - 'nova-api-metadata'], |
420 | + 'nova-api-metadata', |
421 | + 'etcd'], |
422 | 'packages': [[headers_package()] + determine_dkms_package(), |
423 | ['calico-compute', |
424 | 'bird', |
425 | 'neutron-dhcp-agent', |
426 | - 'nova-api-metadata']], |
427 | - 'server_packages': ['neutron-server', 'calico-control'], |
428 | - 'server_services': ['neutron-server'] |
429 | + 'nova-api-metadata', |
430 | + 'etcd']], |
431 | + 'server_packages': ['neutron-server', 'calico-control', 'etcd'], |
432 | + 'server_services': ['neutron-server', 'etcd'] |
433 | }, |
434 | 'vsp': { |
435 | 'config': '/etc/neutron/plugins/nuage/nuage_plugin.ini', |
436 | |
437 | === modified file 'hooks/charmhelpers/contrib/openstack/utils.py' |
438 | --- hooks/charmhelpers/contrib/openstack/utils.py 2015-06-04 23:06:40 +0000 |
439 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2015-07-01 14:47:24 +0000 |
440 | @@ -79,6 +79,7 @@ |
441 | ('trusty', 'icehouse'), |
442 | ('utopic', 'juno'), |
443 | ('vivid', 'kilo'), |
444 | + ('wily', 'liberty'), |
445 | ]) |
446 | |
447 | |
448 | @@ -91,6 +92,7 @@ |
449 | ('2014.1', 'icehouse'), |
450 | ('2014.2', 'juno'), |
451 | ('2015.1', 'kilo'), |
452 | + ('2015.2', 'liberty'), |
453 | ]) |
454 | |
455 | # The ugly duckling |
456 | @@ -113,6 +115,7 @@ |
457 | ('2.2.0', 'juno'), |
458 | ('2.2.1', 'kilo'), |
459 | ('2.2.2', 'kilo'), |
460 | + ('2.3.0', 'liberty'), |
461 | ]) |
462 | |
463 | DEFAULT_LOOPBACK_SIZE = '5G' |
464 | @@ -321,6 +324,9 @@ |
465 | 'kilo': 'trusty-updates/kilo', |
466 | 'kilo/updates': 'trusty-updates/kilo', |
467 | 'kilo/proposed': 'trusty-proposed/kilo', |
468 | + 'liberty': 'trusty-updates/liberty', |
469 | + 'liberty/updates': 'trusty-updates/liberty', |
470 | + 'liberty/proposed': 'trusty-proposed/liberty', |
471 | } |
472 | |
473 | try: |
474 | @@ -549,6 +555,11 @@ |
475 | |
476 | pip_create_virtualenv(os.path.join(parent_dir, 'venv')) |
477 | |
478 | + # Upgrade setuptools from default virtualenv version. The default version |
479 | + # in trusty breaks update.py in global requirements master branch. |
480 | + pip_install('setuptools', upgrade=True, proxy=http_proxy, |
481 | + venv=os.path.join(parent_dir, 'venv')) |
482 | + |
483 | for p in projects['repositories']: |
484 | repo = p['repository'] |
485 | branch = p['branch'] |
486 | @@ -610,24 +621,24 @@ |
487 | else: |
488 | repo_dir = dest_dir |
489 | |
490 | + venv = os.path.join(parent_dir, 'venv') |
491 | + |
492 | if update_requirements: |
493 | if not requirements_dir: |
494 | error_out('requirements repo must be cloned before ' |
495 | 'updating from global requirements.') |
496 | - _git_update_requirements(repo_dir, requirements_dir) |
497 | + _git_update_requirements(venv, repo_dir, requirements_dir) |
498 | |
499 | juju_log('Installing git repo from dir: {}'.format(repo_dir)) |
500 | if http_proxy: |
501 | - pip_install(repo_dir, proxy=http_proxy, |
502 | - venv=os.path.join(parent_dir, 'venv')) |
503 | + pip_install(repo_dir, proxy=http_proxy, venv=venv) |
504 | else: |
505 | - pip_install(repo_dir, |
506 | - venv=os.path.join(parent_dir, 'venv')) |
507 | + pip_install(repo_dir, venv=venv) |
508 | |
509 | return repo_dir |
510 | |
511 | |
512 | -def _git_update_requirements(package_dir, reqs_dir): |
513 | +def _git_update_requirements(venv, package_dir, reqs_dir): |
514 | """ |
515 | Update from global requirements. |
516 | |
517 | @@ -636,12 +647,14 @@ |
518 | """ |
519 | orig_dir = os.getcwd() |
520 | os.chdir(reqs_dir) |
521 | - cmd = ['python', 'update.py', package_dir] |
522 | + python = os.path.join(venv, 'bin/python') |
523 | + cmd = [python, 'update.py', package_dir] |
524 | try: |
525 | subprocess.check_call(cmd) |
526 | except subprocess.CalledProcessError: |
527 | package = os.path.basename(package_dir) |
528 | - error_out("Error updating {} from global-requirements.txt".format(package)) |
529 | + error_out("Error updating {} from " |
530 | + "global-requirements.txt".format(package)) |
531 | os.chdir(orig_dir) |
532 | |
533 | |
534 | |
535 | === modified file 'hooks/charmhelpers/contrib/python/packages.py' |
536 | --- hooks/charmhelpers/contrib/python/packages.py 2015-06-04 23:06:40 +0000 |
537 | +++ hooks/charmhelpers/contrib/python/packages.py 2015-07-01 14:47:24 +0000 |
538 | @@ -36,6 +36,8 @@ |
539 | def parse_options(given, available): |
540 | """Given a set of options, check if available""" |
541 | for key, value in sorted(given.items()): |
542 | + if not value: |
543 | + continue |
544 | if key in available: |
545 | yield "--{0}={1}".format(key, value) |
546 | |
547 | |
548 | === modified file 'hooks/charmhelpers/core/hookenv.py' |
549 | --- hooks/charmhelpers/core/hookenv.py 2015-06-04 23:06:40 +0000 |
550 | +++ hooks/charmhelpers/core/hookenv.py 2015-07-01 14:47:24 +0000 |
551 | @@ -21,7 +21,9 @@ |
552 | # Charm Helpers Developers <juju@lists.ubuntu.com> |
553 | |
554 | from __future__ import print_function |
555 | +from distutils.version import LooseVersion |
556 | from functools import wraps |
557 | +import glob |
558 | import os |
559 | import json |
560 | import yaml |
561 | @@ -242,29 +244,7 @@ |
562 | self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) |
563 | if os.path.exists(self.path): |
564 | self.load_previous() |
565 | - |
566 | - def __getitem__(self, key): |
567 | - """For regular dict lookups, check the current juju config first, |
568 | - then the previous (saved) copy. This ensures that user-saved values |
569 | - will be returned by a dict lookup. |
570 | - |
571 | - """ |
572 | - try: |
573 | - return dict.__getitem__(self, key) |
574 | - except KeyError: |
575 | - return (self._prev_dict or {})[key] |
576 | - |
577 | - def get(self, key, default=None): |
578 | - try: |
579 | - return self[key] |
580 | - except KeyError: |
581 | - return default |
582 | - |
583 | - def keys(self): |
584 | - prev_keys = [] |
585 | - if self._prev_dict is not None: |
586 | - prev_keys = self._prev_dict.keys() |
587 | - return list(set(prev_keys + list(dict.keys(self)))) |
588 | + atexit(self._implicit_save) |
589 | |
590 | def load_previous(self, path=None): |
591 | """Load previous copy of config from disk. |
592 | @@ -283,6 +263,9 @@ |
593 | self.path = path or self.path |
594 | with open(self.path) as f: |
595 | self._prev_dict = json.load(f) |
596 | + for k, v in self._prev_dict.items(): |
597 | + if k not in self: |
598 | + self[k] = v |
599 | |
600 | def changed(self, key): |
601 | """Return True if the current value for this key is different from |
602 | @@ -314,13 +297,13 @@ |
603 | instance. |
604 | |
605 | """ |
606 | - if self._prev_dict: |
607 | - for k, v in six.iteritems(self._prev_dict): |
608 | - if k not in self: |
609 | - self[k] = v |
610 | with open(self.path, 'w') as f: |
611 | json.dump(self, f) |
612 | |
613 | + def _implicit_save(self): |
614 | + if self.implicit_save: |
615 | + self.save() |
616 | + |
617 | |
618 | @cached |
619 | def config(scope=None): |
620 | @@ -587,10 +570,14 @@ |
621 | hooks.execute(sys.argv) |
622 | """ |
623 | |
624 | - def __init__(self, config_save=True): |
625 | + def __init__(self, config_save=None): |
626 | super(Hooks, self).__init__() |
627 | self._hooks = {} |
628 | - self._config_save = config_save |
629 | + |
630 | + # For unknown reasons, we allow the Hooks constructor to override |
631 | + # config().implicit_save. |
632 | + if config_save is not None: |
633 | + config().implicit_save = config_save |
634 | |
635 | def register(self, name, function): |
636 | """Register a hook""" |
637 | @@ -598,13 +585,16 @@ |
638 | |
639 | def execute(self, args): |
640 | """Execute a registered hook based on args[0]""" |
641 | + _run_atstart() |
642 | hook_name = os.path.basename(args[0]) |
643 | if hook_name in self._hooks: |
644 | - self._hooks[hook_name]() |
645 | - if self._config_save: |
646 | - cfg = config() |
647 | - if cfg.implicit_save: |
648 | - cfg.save() |
649 | + try: |
650 | + self._hooks[hook_name]() |
651 | + except SystemExit as x: |
652 | + if x.code is None or x.code == 0: |
653 | + _run_atexit() |
654 | + raise |
655 | + _run_atexit() |
656 | else: |
657 | raise UnregisteredHookError(hook_name) |
658 | |
659 | @@ -732,13 +722,79 @@ |
660 | @translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
661 | def leader_set(settings=None, **kwargs): |
662 | """Juju leader set value(s)""" |
663 | - log("Juju leader-set '%s'" % (settings), level=DEBUG) |
664 | + # Don't log secrets. |
665 | + # log("Juju leader-set '%s'" % (settings), level=DEBUG) |
666 | cmd = ['leader-set'] |
667 | settings = settings or {} |
668 | settings.update(kwargs) |
669 | - for k, v in settings.iteritems(): |
670 | + for k, v in settings.items(): |
671 | if v is None: |
672 | cmd.append('{}='.format(k)) |
673 | else: |
674 | cmd.append('{}={}'.format(k, v)) |
675 | subprocess.check_call(cmd) |
676 | + |
677 | + |
678 | +@cached |
679 | +def juju_version(): |
680 | + """Full version string (eg. '1.23.3.1-trusty-amd64')""" |
681 | + # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1 |
682 | + jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0] |
683 | + return subprocess.check_output([jujud, 'version'], |
684 | + universal_newlines=True).strip() |
685 | + |
686 | + |
687 | +@cached |
688 | +def has_juju_version(minimum_version): |
689 | + """Return True if the Juju version is at least the provided version""" |
690 | + return LooseVersion(juju_version()) >= LooseVersion(minimum_version) |
691 | + |
692 | + |
693 | +_atexit = [] |
694 | +_atstart = [] |
695 | + |
696 | + |
697 | +def atstart(callback, *args, **kwargs): |
698 | + '''Schedule a callback to run before the main hook. |
699 | + |
700 | + Callbacks are run in the order they were added. |
701 | + |
702 | + This is useful for modules and classes to perform initialization |
703 | + and inject behavior. In particular: |
704 | + - Run common code before all of your hooks, such as logging |
705 | + the hook name or interesting relation data. |
706 | + - Defer object or module initialization that requires a hook |
707 | + context until we know there actually is a hook context, |
708 | + making testing easier. |
709 | + - Rather than requiring charm authors to include boilerplate to |
710 | + invoke your helper's behavior, have it run automatically if |
711 | + your object is instantiated or module imported. |
712 | + |
713 | + This is not at all useful after your hook framework as been launched. |
714 | + ''' |
715 | + global _atstart |
716 | + _atstart.append((callback, args, kwargs)) |
717 | + |
718 | + |
719 | +def atexit(callback, *args, **kwargs): |
720 | + '''Schedule a callback to run on successful hook completion. |
721 | + |
722 | + Callbacks are run in the reverse order that they were added.''' |
723 | + _atexit.append((callback, args, kwargs)) |
724 | + |
725 | + |
726 | +def _run_atstart(): |
727 | + '''Hook frameworks must invoke this before running the main hook body.''' |
728 | + global _atstart |
729 | + for callback, args, kwargs in _atstart: |
730 | + callback(*args, **kwargs) |
731 | + del _atstart[:] |
732 | + |
733 | + |
734 | +def _run_atexit(): |
735 | + '''Hook frameworks must invoke this after the main hook body has |
736 | + successfully completed. Do not invoke it if the hook fails.''' |
737 | + global _atexit |
738 | + for callback, args, kwargs in reversed(_atexit): |
739 | + callback(*args, **kwargs) |
740 | + del _atexit[:] |
741 | |
742 | === modified file 'hooks/charmhelpers/core/host.py' |
743 | --- hooks/charmhelpers/core/host.py 2015-06-04 23:06:40 +0000 |
744 | +++ hooks/charmhelpers/core/host.py 2015-07-01 14:47:24 +0000 |
745 | @@ -24,6 +24,7 @@ |
746 | import os |
747 | import re |
748 | import pwd |
749 | +import glob |
750 | import grp |
751 | import random |
752 | import string |
753 | @@ -269,6 +270,21 @@ |
754 | return None |
755 | |
756 | |
757 | +def path_hash(path): |
758 | + """ |
759 | + Generate a hash checksum of all files matching 'path'. Standard wildcards |
760 | + like '*' and '?' are supported, see documentation for the 'glob' module for |
761 | + more information. |
762 | + |
763 | + :return: dict: A { filename: hash } dictionary for all matched files. |
764 | + Empty if none found. |
765 | + """ |
766 | + return { |
767 | + filename: file_hash(filename) |
768 | + for filename in glob.iglob(path) |
769 | + } |
770 | + |
771 | + |
772 | def check_hash(path, checksum, hash_type='md5'): |
773 | """ |
774 | Validate a file using a cryptographic checksum. |
775 | @@ -296,23 +312,25 @@ |
776 | |
777 | @restart_on_change({ |
778 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] |
779 | + '/etc/apache/sites-enabled/*': [ 'apache2' ] |
780 | }) |
781 | - def ceph_client_changed(): |
782 | + def config_changed(): |
783 | pass # your code here |
784 | |
785 | In this example, the cinder-api and cinder-volume services |
786 | would be restarted if /etc/ceph/ceph.conf is changed by the |
787 | - ceph_client_changed function. |
788 | + ceph_client_changed function. The apache2 service would be |
789 | + restarted if any file matching the pattern got changed, created |
790 | + or removed. Standard wildcards are supported, see documentation |
791 | + for the 'glob' module for more information. |
792 | """ |
793 | def wrap(f): |
794 | def wrapped_f(*args, **kwargs): |
795 | - checksums = {} |
796 | - for path in restart_map: |
797 | - checksums[path] = file_hash(path) |
798 | + checksums = {path: path_hash(path) for path in restart_map} |
799 | f(*args, **kwargs) |
800 | restarts = [] |
801 | for path in restart_map: |
802 | - if checksums[path] != file_hash(path): |
803 | + if path_hash(path) != checksums[path]: |
804 | restarts += restart_map[path] |
805 | services_list = list(OrderedDict.fromkeys(restarts)) |
806 | if not stopstart: |
807 | |
808 | === modified file 'hooks/charmhelpers/core/services/base.py' |
809 | --- hooks/charmhelpers/core/services/base.py 2015-06-04 23:06:40 +0000 |
810 | +++ hooks/charmhelpers/core/services/base.py 2015-07-01 14:47:24 +0000 |
811 | @@ -128,15 +128,18 @@ |
812 | """ |
813 | Handle the current hook by doing The Right Thing with the registered services. |
814 | """ |
815 | - hook_name = hookenv.hook_name() |
816 | - if hook_name == 'stop': |
817 | - self.stop_services() |
818 | - else: |
819 | - self.reconfigure_services() |
820 | - self.provide_data() |
821 | - cfg = hookenv.config() |
822 | - if cfg.implicit_save: |
823 | - cfg.save() |
824 | + hookenv._run_atstart() |
825 | + try: |
826 | + hook_name = hookenv.hook_name() |
827 | + if hook_name == 'stop': |
828 | + self.stop_services() |
829 | + else: |
830 | + self.reconfigure_services() |
831 | + self.provide_data() |
832 | + except SystemExit as x: |
833 | + if x.code is None or x.code == 0: |
834 | + hookenv._run_atexit() |
835 | + hookenv._run_atexit() |
836 | |
837 | def provide_data(self): |
838 | """ |
839 | |
840 | === modified file 'metadata.yaml' |
841 | --- metadata.yaml 2014-09-19 11:00:18 +0000 |
842 | +++ metadata.yaml 2015-07-01 14:47:24 +0000 |
843 | @@ -7,7 +7,10 @@ |
844 | . |
845 | This charm provides the RADOS HTTP gateway supporting S3 and Swift protocols |
846 | for object storage. |
847 | -categories: |
848 | +tags: |
849 | + - openstack |
850 | + - storage |
851 | + - file-servers |
852 | - misc |
853 | requires: |
854 | mon: |
855 | |
856 | === modified file 'tests/00-setup' |
857 | --- tests/00-setup 2014-09-29 01:57:43 +0000 |
858 | +++ tests/00-setup 2015-07-01 14:47:24 +0000 |
859 | @@ -5,6 +5,10 @@ |
860 | sudo add-apt-repository --yes ppa:juju/stable |
861 | sudo apt-get update --yes |
862 | sudo apt-get install --yes python-amulet \ |
863 | + python-cinderclient \ |
864 | + python-distro-info \ |
865 | + python-glanceclient \ |
866 | + python-heatclient \ |
867 | python-keystoneclient \ |
868 | - python-glanceclient \ |
869 | - python-novaclient |
870 | + python-novaclient \ |
871 | + python-swiftclient |
872 | |
873 | === modified file 'tests/017-basic-trusty-kilo' (properties changed: -x to +x) |
874 | === modified file 'tests/019-basic-vivid-kilo' (properties changed: -x to +x) |
875 | === modified file 'tests/basic_deployment.py' |
876 | --- tests/basic_deployment.py 2015-04-16 21:31:30 +0000 |
877 | +++ tests/basic_deployment.py 2015-07-01 14:47:24 +0000 |
878 | @@ -1,13 +1,14 @@ |
879 | #!/usr/bin/python |
880 | |
881 | import amulet |
882 | +import time |
883 | from charmhelpers.contrib.openstack.amulet.deployment import ( |
884 | OpenStackAmuletDeployment |
885 | ) |
886 | -from charmhelpers.contrib.openstack.amulet.utils import ( # noqa |
887 | +from charmhelpers.contrib.openstack.amulet.utils import ( |
888 | OpenStackAmuletUtils, |
889 | DEBUG, |
890 | - ERROR |
891 | + #ERROR |
892 | ) |
893 | |
894 | # Use DEBUG to turn on debug logging |
895 | @@ -35,9 +36,12 @@ |
896 | compatible with the local charm (e.g. stable or next). |
897 | """ |
898 | this_service = {'name': 'ceph-radosgw'} |
899 | - other_services = [{'name': 'ceph', 'units': 3}, {'name': 'mysql'}, |
900 | - {'name': 'keystone'}, {'name': 'rabbitmq-server'}, |
901 | - {'name': 'nova-compute'}, {'name': 'glance'}, |
902 | + other_services = [{'name': 'ceph', 'units': 3}, |
903 | + {'name': 'mysql'}, |
904 | + {'name': 'keystone'}, |
905 | + {'name': 'rabbitmq-server'}, |
906 | + {'name': 'nova-compute'}, |
907 | + {'name': 'glance'}, |
908 | {'name': 'cinder'}] |
909 | super(CephRadosGwBasicDeployment, self)._add_services(this_service, |
910 | other_services) |
911 | @@ -92,13 +96,20 @@ |
912 | self.mysql_sentry = self.d.sentry.unit['mysql/0'] |
913 | self.keystone_sentry = self.d.sentry.unit['keystone/0'] |
914 | self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0'] |
915 | - self.nova_compute_sentry = self.d.sentry.unit['nova-compute/0'] |
916 | + self.nova_sentry = self.d.sentry.unit['nova-compute/0'] |
917 | self.glance_sentry = self.d.sentry.unit['glance/0'] |
918 | self.cinder_sentry = self.d.sentry.unit['cinder/0'] |
919 | self.ceph0_sentry = self.d.sentry.unit['ceph/0'] |
920 | self.ceph1_sentry = self.d.sentry.unit['ceph/1'] |
921 | self.ceph2_sentry = self.d.sentry.unit['ceph/2'] |
922 | self.ceph_radosgw_sentry = self.d.sentry.unit['ceph-radosgw/0'] |
923 | + u.log.debug('openstack release val: {}'.format( |
924 | + self._get_openstack_release())) |
925 | + u.log.debug('openstack release str: {}'.format( |
926 | + self._get_openstack_release_string())) |
927 | + |
928 | + # Let things settle a bit original moving forward |
929 | + time.sleep(30) |
930 | |
931 | # Authenticate admin with keystone |
932 | self.keystone = u.authenticate_keystone_admin(self.keystone_sentry, |
933 | @@ -135,39 +146,77 @@ |
934 | 'password', |
935 | self.demo_tenant) |
936 | |
937 | - def _ceph_osd_id(self, index): |
938 | - """Produce a shell command that will return a ceph-osd id.""" |
939 | - return "`initctl list | grep 'ceph-osd ' | awk 'NR=={} {{ print $2 }}' | grep -o '[0-9]*'`".format(index + 1) # noqa |
940 | - |
941 | - def test_services(self): |
942 | + # Authenticate radosgw user using swift api |
943 | + ks_obj_rel = self.keystone_sentry.relation( |
944 | + 'identity-service', |
945 | + 'ceph-radosgw:identity-service') |
946 | + self.swift = u.authenticate_swift_user( |
947 | + self.keystone, |
948 | + user=ks_obj_rel['service_username'], |
949 | + password=ks_obj_rel['service_password'], |
950 | + tenant=ks_obj_rel['service_tenant']) |
951 | + |
952 | + def test_100_ceph_processes(self): |
953 | + """Verify that the expected service processes are running |
954 | + on each ceph unit.""" |
955 | + |
956 | + # Process name and quantity of processes to expect on each unit |
957 | + ceph_processes = { |
958 | + 'ceph-mon': 1, |
959 | + 'ceph-osd': 2 |
960 | + } |
961 | + |
962 | + # Units with process names and PID quantities expected |
963 | + expected_processes = { |
964 | + self.ceph_radosgw_sentry: {'radosgw': 1}, |
965 | + self.ceph0_sentry: ceph_processes, |
966 | + self.ceph1_sentry: ceph_processes, |
967 | + self.ceph2_sentry: ceph_processes |
968 | + } |
969 | + |
970 | + actual_pids = u.get_unit_process_ids(expected_processes) |
971 | + ret = u.validate_unit_process_ids(expected_processes, actual_pids) |
972 | + if ret: |
973 | + amulet.raise_status(amulet.FAIL, msg=ret) |
974 | + |
975 | + def test_102_services(self): |
976 | """Verify the expected services are running on the service units.""" |
977 | - ceph_services = ['status ceph-mon-all', |
978 | - 'status ceph-mon id=`hostname`'] |
979 | - commands = { |
980 | - self.mysql_sentry: ['status mysql'], |
981 | - self.rabbitmq_sentry: ['sudo service rabbitmq-server status'], |
982 | - self.nova_compute_sentry: ['status nova-compute'], |
983 | - self.keystone_sentry: ['status keystone'], |
984 | - self.glance_sentry: ['status glance-registry', |
985 | - 'status glance-api'], |
986 | - self.cinder_sentry: ['status cinder-api', |
987 | - 'status cinder-scheduler', |
988 | - 'status cinder-volume'], |
989 | - self.ceph_radosgw_sentry: ['status radosgw-all'] |
990 | + |
991 | + services = { |
992 | + self.mysql_sentry: ['mysql'], |
993 | + self.rabbitmq_sentry: ['rabbitmq-server'], |
994 | + self.nova_sentry: ['nova-compute'], |
995 | + self.keystone_sentry: ['keystone'], |
996 | + self.glance_sentry: ['glance-registry', |
997 | + 'glance-api'], |
998 | + self.cinder_sentry: ['cinder-api', |
999 | + 'cinder-scheduler', |
1000 | + 'cinder-volume'], |
1001 | } |
1002 | - ceph_osd0 = 'status ceph-osd id={}'.format(self._ceph_osd_id(0)) |
1003 | - ceph_osd1 = 'status ceph-osd id={}'.format(self._ceph_osd_id(1)) |
1004 | - ceph_services.extend([ceph_osd0, ceph_osd1, 'status ceph-osd-all']) |
1005 | - commands[self.ceph0_sentry] = ceph_services |
1006 | - commands[self.ceph1_sentry] = ceph_services |
1007 | - commands[self.ceph2_sentry] = ceph_services |
1008 | - |
1009 | - ret = u.validate_services(commands) |
1010 | + |
1011 | + if self._get_openstack_release() < self.vivid_kilo: |
1012 | + # For upstart systems only. Ceph services under systemd |
1013 | + # are checked by process name instead. |
1014 | + ceph_services = [ |
1015 | + 'ceph-mon-all', |
1016 | + 'ceph-mon id=`hostname`', |
1017 | + 'ceph-osd-all', |
1018 | + 'ceph-osd id={}'.format(u.get_ceph_osd_id_cmd(0)), |
1019 | + 'ceph-osd id={}'.format(u.get_ceph_osd_id_cmd(1)) |
1020 | + ] |
1021 | + services[self.ceph0_sentry] = ceph_services |
1022 | + services[self.ceph1_sentry] = ceph_services |
1023 | + services[self.ceph2_sentry] = ceph_services |
1024 | + services[self.ceph_radosgw_sentry] = ['radosgw-all'] |
1025 | + |
1026 | + ret = u.validate_services_by_name(services) |
1027 | if ret: |
1028 | amulet.raise_status(amulet.FAIL, msg=ret) |
1029 | |
1030 | - def test_ceph_radosgw_ceph_relation(self): |
1031 | + def test_200_ceph_radosgw_ceph_relation(self): |
1032 | """Verify the ceph-radosgw to ceph relation data.""" |
1033 | + u.log.debug('Checking ceph-radosgw:mon to ceph:radosgw ' |
1034 | + 'relation data...') |
1035 | unit = self.ceph_radosgw_sentry |
1036 | relation = ['mon', 'ceph:radosgw'] |
1037 | expected = { |
1038 | @@ -179,8 +228,9 @@ |
1039 | message = u.relation_error('ceph-radosgw to ceph', ret) |
1040 | amulet.raise_status(amulet.FAIL, msg=message) |
1041 | |
1042 | - def test_ceph0_ceph_radosgw_relation(self): |
1043 | + def test_201_ceph0_ceph_radosgw_relation(self): |
1044 | """Verify the ceph0 to ceph-radosgw relation data.""" |
1045 | + u.log.debug('Checking ceph0:radosgw radosgw:mon relation data...') |
1046 | unit = self.ceph0_sentry |
1047 | relation = ['radosgw', 'ceph-radosgw:mon'] |
1048 | expected = { |
1049 | @@ -196,8 +246,9 @@ |
1050 | message = u.relation_error('ceph0 to ceph-radosgw', ret) |
1051 | amulet.raise_status(amulet.FAIL, msg=message) |
1052 | |
1053 | - def test_ceph1_ceph_radosgw_relation(self): |
1054 | + def test_202_ceph1_ceph_radosgw_relation(self): |
1055 | """Verify the ceph1 to ceph-radosgw relation data.""" |
1056 | + u.log.debug('Checking ceph1:radosgw ceph-radosgw:mon relation data...') |
1057 | unit = self.ceph1_sentry |
1058 | relation = ['radosgw', 'ceph-radosgw:mon'] |
1059 | expected = { |
1060 | @@ -213,8 +264,9 @@ |
1061 | message = u.relation_error('ceph1 to ceph-radosgw', ret) |
1062 | amulet.raise_status(amulet.FAIL, msg=message) |
1063 | |
1064 | - def test_ceph2_ceph_radosgw_relation(self): |
1065 | + def test_203_ceph2_ceph_radosgw_relation(self): |
1066 | """Verify the ceph2 to ceph-radosgw relation data.""" |
1067 | + u.log.debug('Checking ceph2:radosgw ceph-radosgw:mon relation data...') |
1068 | unit = self.ceph2_sentry |
1069 | relation = ['radosgw', 'ceph-radosgw:mon'] |
1070 | expected = { |
1071 | @@ -230,8 +282,10 @@ |
1072 | message = u.relation_error('ceph2 to ceph-radosgw', ret) |
1073 | amulet.raise_status(amulet.FAIL, msg=message) |
1074 | |
1075 | - def test_ceph_radosgw_keystone_relation(self): |
1076 | + def test_204_ceph_radosgw_keystone_relation(self): |
1077 | """Verify the ceph-radosgw to keystone relation data.""" |
1078 | + u.log.debug('Checking ceph-radosgw to keystone id service ' |
1079 | + 'relation data...') |
1080 | unit = self.ceph_radosgw_sentry |
1081 | relation = ['identity-service', 'keystone:identity-service'] |
1082 | expected = { |
1083 | @@ -249,8 +303,10 @@ |
1084 | message = u.relation_error('ceph-radosgw to keystone', ret) |
1085 | amulet.raise_status(amulet.FAIL, msg=message) |
1086 | |
1087 | - def test_keystone_ceph_radosgw_relation(self): |
1088 | + def test_205_keystone_ceph_radosgw_relation(self): |
1089 | """Verify the keystone to ceph-radosgw relation data.""" |
1090 | + u.log.debug('Checking keystone to ceph-radosgw id service ' |
1091 | + 'relation data...') |
1092 | unit = self.keystone_sentry |
1093 | relation = ['identity-service', 'ceph-radosgw:identity-service'] |
1094 | expected = { |
1095 | @@ -273,8 +329,9 @@ |
1096 | message = u.relation_error('keystone to ceph-radosgw', ret) |
1097 | amulet.raise_status(amulet.FAIL, msg=message) |
1098 | |
1099 | - def test_ceph_config(self): |
1100 | + def test_300_ceph_radosgw_config(self): |
1101 | """Verify the data in the ceph config file.""" |
1102 | + u.log.debug('Checking ceph config file data...') |
1103 | unit = self.ceph_radosgw_sentry |
1104 | conf = '/etc/ceph/ceph.conf' |
1105 | keystone_sentry = self.keystone_sentry |
1106 | @@ -309,11 +366,153 @@ |
1107 | message = "ceph config error: {}".format(ret) |
1108 | amulet.raise_status(amulet.FAIL, msg=message) |
1109 | |
1110 | - def test_restart_on_config_change(self): |
1111 | - """Verify the specified services are restarted on config change.""" |
1112 | - # NOTE(coreycb): Test not implemented but should it be? ceph-radosgw |
1113 | - # svcs aren't restarted by charm after config change |
1114 | - # Should they be restarted? |
1115 | - if self._get_openstack_release() >= self.precise_essex: |
1116 | - u.log.error("Test not implemented") |
1117 | - return |
1118 | + def test_302_cinder_rbd_config(self): |
1119 | + """Verify the cinder config file data regarding ceph.""" |
1120 | + u.log.debug('Checking cinder (rbd) config file data...') |
1121 | + unit = self.cinder_sentry |
1122 | + conf = '/etc/cinder/cinder.conf' |
1123 | + expected = { |
1124 | + 'DEFAULT': { |
1125 | + 'volume_driver': 'cinder.volume.drivers.rbd.RBDDriver' |
1126 | + } |
1127 | + } |
1128 | + for section, pairs in expected.iteritems(): |
1129 | + ret = u.validate_config_data(unit, conf, section, pairs) |
1130 | + if ret: |
1131 | + message = "cinder (rbd) config error: {}".format(ret) |
1132 | + amulet.raise_status(amulet.FAIL, msg=message) |
1133 | + |
1134 | + def test_304_glance_rbd_config(self): |
1135 | + """Verify the glance config file data regarding ceph.""" |
1136 | + u.log.debug('Checking glance (rbd) config file data...') |
1137 | + unit = self.glance_sentry |
1138 | + conf = '/etc/glance/glance-api.conf' |
1139 | + config = { |
1140 | + 'default_store': 'rbd', |
1141 | + 'rbd_store_ceph_conf': '/etc/ceph/ceph.conf', |
1142 | + 'rbd_store_user': 'glance', |
1143 | + 'rbd_store_pool': 'glance', |
1144 | + 'rbd_store_chunk_size': '8' |
1145 | + } |
1146 | + |
1147 | + if self._get_openstack_release() >= self.trusty_kilo: |
1148 | + # Kilo or later |
1149 | + config['stores'] = ('glance.store.filesystem.Store,' |
1150 | + 'glance.store.http.Store,' |
1151 | + 'glance.store.rbd.Store') |
1152 | + section = 'glance_store' |
1153 | + else: |
1154 | + # Juno or earlier |
1155 | + section = 'DEFAULT' |
1156 | + |
1157 | + expected = {section: config} |
1158 | + for section, pairs in expected.iteritems(): |
1159 | + ret = u.validate_config_data(unit, conf, section, pairs) |
1160 | + if ret: |
1161 | + message = "glance (rbd) config error: {}".format(ret) |
1162 | + amulet.raise_status(amulet.FAIL, msg=message) |
1163 | + |
1164 | + def test_306_nova_rbd_config(self): |
1165 | + """Verify the nova config file data regarding ceph.""" |
1166 | + u.log.debug('Checking nova (rbd) config file data...') |
1167 | + unit = self.nova_sentry |
1168 | + conf = '/etc/nova/nova.conf' |
1169 | + expected = { |
1170 | + 'libvirt': { |
1171 | + 'rbd_pool': 'nova', |
1172 | + 'rbd_user': 'nova-compute', |
1173 | + 'rbd_secret_uuid': u.not_null |
1174 | + } |
1175 | + } |
1176 | + for section, pairs in expected.iteritems(): |
1177 | + ret = u.validate_config_data(unit, conf, section, pairs) |
1178 | + if ret: |
1179 | + message = "nova (rbd) config error: {}".format(ret) |
1180 | + amulet.raise_status(amulet.FAIL, msg=message) |
1181 | + |
1182 | + def test_400_ceph_check_osd_pools(self): |
1183 | + """Check osd pools on all ceph units, expect them to be |
1184 | + identical, and expect specific pools to be present.""" |
1185 | + u.log.debug('Checking pools on ceph units...') |
1186 | + |
1187 | + expected_pools = self.get_ceph_expected_pools(radosgw=True) |
1188 | + results = [] |
1189 | + sentries = [ |
1190 | + self.ceph_radosgw_sentry, |
1191 | + self.ceph0_sentry, |
1192 | + self.ceph1_sentry, |
1193 | + self.ceph2_sentry |
1194 | + ] |
1195 | + |
1196 | + # Check for presence of expected pools on each unit |
1197 | + u.log.debug('Expected pools: {}'.format(expected_pools)) |
1198 | + for sentry_unit in sentries: |
1199 | + pools = u.get_ceph_pools(sentry_unit) |
1200 | + results.append(pools) |
1201 | + |
1202 | + for expected_pool in expected_pools: |
1203 | + if expected_pool not in pools: |
1204 | + msg = ('{} does not have pool: ' |
1205 | + '{}'.format(sentry_unit.info['unit_name'], |
1206 | + expected_pool)) |
1207 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1208 | + u.log.debug('{} has (at least) the expected ' |
1209 | + 'pools.'.format(sentry_unit.info['unit_name'])) |
1210 | + |
1211 | + # Check that all units returned the same pool name:id data |
1212 | + ret = u.validate_list_of_identical_dicts(results) |
1213 | + if ret: |
1214 | + u.log.debug('Pool list results: {}'.format(results)) |
1215 | + msg = ('{}; Pool list results are not identical on all ' |
1216 | + 'ceph units.'.format(ret)) |
1217 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1218 | + else: |
1219 | + u.log.debug('Pool list on all ceph units produced the ' |
1220 | + 'same results (OK).') |
1221 | + |
1222 | + def test_402_swift_api_connection(self): |
1223 | + """Simple api call to confirm basic service functionality""" |
1224 | + u.log.debug('Checking basic radosgw functionality via swift api...') |
1225 | + headers, containers = self.swift.get_account() |
1226 | + assert('content-type' in headers.keys()) |
1227 | + assert(containers == []) |
1228 | + |
1229 | + def test_498_radosgw_cmds_exit_zero(self): |
1230 | + """Check basic functionality of radosgw cli commands against |
1231 | + the ceph_radosgw unit.""" |
1232 | + sentry_units = [self.ceph_radosgw_sentry] |
1233 | + commands = [ |
1234 | + 'sudo radosgw-admin regions list', |
1235 | + 'sudo radosgw-admin bucket list', |
1236 | + 'sudo radosgw-admin zone list', |
1237 | + 'sudo radosgw-admin metadata list', |
1238 | + 'sudo radosgw-admin gc list' |
1239 | + ] |
1240 | + ret = u.check_commands_on_units(commands, sentry_units) |
1241 | + if ret: |
1242 | + amulet.raise_status(amulet.FAIL, msg=ret) |
1243 | + |
1244 | + def test_499_ceph_cmds_exit_zero(self): |
1245 | + """Check basic functionality of ceph cli commands against |
1246 | + all ceph units.""" |
1247 | + sentry_units = [ |
1248 | + self.ceph_radosgw_sentry, |
1249 | + self.ceph0_sentry, |
1250 | + self.ceph1_sentry, |
1251 | + self.ceph2_sentry |
1252 | + ] |
1253 | + commands = [ |
1254 | + 'sudo ceph health', |
1255 | + 'sudo ceph mds stat', |
1256 | + 'sudo ceph pg stat', |
1257 | + 'sudo ceph osd stat', |
1258 | + 'sudo ceph mon stat', |
1259 | + ] |
1260 | + ret = u.check_commands_on_units(commands, sentry_units) |
1261 | + if ret: |
1262 | + amulet.raise_status(amulet.FAIL, msg=ret) |
1263 | + |
1264 | + # Note(beisner): need to add basic object store functional checks. |
1265 | + |
1266 | + # FYI: No restart check as ceph services do not restart |
1267 | + # when charm config changes, unless monitor count increases. |
1268 | |
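# Illustrative sketch, following up the Note above: one possible shape
# for a basic object-store functional check. The test number, container
# name, and payload are assumptions; it reuses the swiftclient
# connection established for test_402.
def test_403_swift_object_create_delete(self):
    """Create and delete a swift object via radosgw, confirming
    round-trip data integrity (illustrative sketch)."""
    u.log.debug('Checking swift object create/delete...')
    container = 'demo-container'  # hypothetical container name
    payload = 'pew pew'           # arbitrary test data

    # Write an object, read it back, and compare contents
    self.swift.put_container(container)
    self.swift.put_object(container, 'demo-object', contents=payload,
                          content_type='text/plain')
    _, body = self.swift.get_object(container, 'demo-object')
    if body != payload:
        amulet.raise_status(amulet.FAIL, msg='swift object data mismatch')

    # Clean up
    self.swift.delete_object(container, 'demo-object')
    self.swift.delete_container(container)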
1269 | === modified file 'tests/charmhelpers/contrib/amulet/utils.py' |
1270 | --- tests/charmhelpers/contrib/amulet/utils.py 2015-06-04 23:06:40 +0000 |
1271 | +++ tests/charmhelpers/contrib/amulet/utils.py 2015-07-01 14:47:24 +0000 |
1272 | @@ -14,14 +14,17 @@ |
1273 | # You should have received a copy of the GNU Lesser General Public License |
1274 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1275 | |
1276 | +import amulet |
1277 | import ConfigParser |
1278 | +import distro_info |
1279 | import io |
1280 | import logging |
1281 | +import os |
1282 | import re |
1283 | +import six |
1284 | import sys |
1285 | import time |
1286 | - |
1287 | -import six |
1288 | +import urlparse |
1289 | |
1290 | |
1291 | class AmuletUtils(object): |
1292 | @@ -33,6 +36,7 @@ |
1293 | |
1294 | def __init__(self, log_level=logging.ERROR): |
1295 | self.log = self.get_logger(level=log_level) |
1296 | + self.ubuntu_releases = self.get_ubuntu_releases() |
1297 | |
1298 | def get_logger(self, name="amulet-logger", level=logging.DEBUG): |
1299 | """Get a logger object that will log to stdout.""" |
1300 | @@ -70,12 +74,44 @@ |
1301 | else: |
1302 | return False |
1303 | |
1304 | + def get_ubuntu_release_from_sentry(self, sentry_unit): |
1305 | + """Get Ubuntu release codename from sentry unit. |
1306 | + |
1307 | + :param sentry_unit: amulet sentry/service unit pointer |
1308 | + :returns: tuple - release codename string, failure message (None if OK) |
1309 | + """ |
1310 | + msg = None |
1311 | + cmd = 'lsb_release -cs' |
1312 | + release, code = sentry_unit.run(cmd) |
1313 | + if code == 0: |
1314 | + self.log.debug('{} lsb_release: {}'.format( |
1315 | + sentry_unit.info['unit_name'], release)) |
1316 | + else: |
1317 | + msg = ('{} `{}` returned {} ' |
1318 | + '{}'.format(sentry_unit.info['unit_name'], |
1319 | + cmd, release, code)) |
1320 | + if release not in self.ubuntu_releases: |
1321 | + msg = ("Release ({}) not found in Ubuntu releases " |
1322 | + "({})".format(release, self.ubuntu_releases)) |
1323 | + return release, msg |
1324 | + |
1325 | def validate_services(self, commands): |
1326 | - """Validate services. |
1327 | - |
1328 | - Verify the specified services are running on the corresponding |
1329 | + """Validate that lists of commands succeed on service units. Can be |
1330 | + used to verify system services are running on the corresponding |
1331 | service units. |
1332 | - """ |
1333 | + |
1334 | + :param commands: dict with sentry keys and arbitrary command list vals |
1335 | + :returns: None if successful, Failure string message otherwise |
1336 | + """ |
1337 | + self.log.debug('Checking status of system services...') |
1338 | + |
1339 | + # /!\ DEPRECATION WARNING (beisner): |
1340 | + # New and existing tests should be rewritten to use |
1341 | + # validate_services_by_name() as it is aware of init systems. |
1342 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
1343 | + 'validate_services_by_name instead of validate_services ' |
1344 | + 'due to init system differences.') |
1345 | + |
1346 | for k, v in six.iteritems(commands): |
1347 | for cmd in v: |
1348 | output, code = k.run(cmd) |
1349 | @@ -86,6 +122,41 @@ |
1350 | return "command `{}` returned {}".format(cmd, str(code)) |
1351 | return None |
1352 | |
1353 | + def validate_services_by_name(self, sentry_services): |
1354 | + """Validate system service status by service name, automatically |
1355 | + detecting init system based on Ubuntu release codename. |
1356 | + |
1357 | + :param sentry_services: dict with sentry keys and svc list values |
1358 | + :returns: None if successful, Failure string message otherwise |
1359 | + """ |
1360 | + self.log.debug('Checking status of system services...') |
1361 | + |
1362 | + # Point at which systemd became a thing |
1363 | + systemd_switch = self.ubuntu_releases.index('vivid') |
1364 | + |
1365 | + for sentry_unit, services_list in six.iteritems(sentry_services): |
1366 | + # Get lsb_release codename from unit |
1367 | + release, ret = self.get_ubuntu_release_from_sentry(sentry_unit) |
1368 | + if ret: |
1369 | + return ret |
1370 | + |
1371 | + for service_name in services_list: |
1372 | + if (self.ubuntu_releases.index(release) >= systemd_switch or |
1373 | + service_name == "rabbitmq-server"): |
1374 | + # init is systemd (rabbitmq-server also uses `service` syntax) |
1375 | + cmd = 'sudo service {} status'.format(service_name) |
1376 | + elif self.ubuntu_releases.index(release) < systemd_switch: |
1377 | + # init is upstart |
1378 | + cmd = 'sudo status {}'.format(service_name) |
1379 | + |
1380 | + output, code = sentry_unit.run(cmd) |
1381 | + self.log.debug('{} `{}` returned ' |
1382 | + '{}'.format(sentry_unit.info['unit_name'], |
1383 | + cmd, code)) |
1384 | + if code != 0: |
1385 | + return "command `{}` returned {}".format(cmd, str(code)) |
1386 | + return None |
1387 | + |
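# Illustrative caller-side sketch (unit and service names are
# assumptions): map each sentry unit to the system services expected
# on it, then fail the test on any non-zero status.
services = {
    self.ceph_radosgw_sentry: ['radosgw'],
    self.ceph0_sentry: ['ceph-mon-all', 'ceph-osd-all'],
}
ret = u.validate_services_by_name(services)
if ret:
    amulet.raise_status(amulet.FAIL, msg=ret)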
1388 | def _get_config(self, unit, filename): |
1389 | """Get a ConfigParser object for parsing a unit's config file.""" |
1390 | file_contents = unit.file_contents(filename) |
1391 | @@ -103,7 +174,15 @@ |
1392 | |
1393 | Verify that the specified section of the config file contains |
1394 | the expected option key:value pairs. |
1395 | + |
1396 | + Compare expected dictionary data vs actual dictionary data. |
1397 | + The values in the 'expected' dictionary can be strings, bools, ints, |
1398 | + longs, or can be a function that evaluates a variable and returns a |
1399 | + bool. |
1400 | """ |
1401 | + self.log.debug('Validating config file data ({} in {} on {})' |
1402 | + '...'.format(section, config_file, |
1403 | + sentry_unit.info['unit_name'])) |
1404 | config = self._get_config(sentry_unit, config_file) |
1405 | |
1406 | if section != 'DEFAULT' and not config.has_section(section): |
1407 | @@ -112,9 +191,20 @@ |
1408 | for k in expected.keys(): |
1409 | if not config.has_option(section, k): |
1410 | return "section [{}] is missing option {}".format(section, k) |
1411 | - if config.get(section, k) != expected[k]: |
1412 | + |
1413 | + actual = config.get(section, k) |
1414 | + v = expected[k] |
1415 | + if (isinstance(v, six.string_types) or |
1416 | + isinstance(v, bool) or |
1417 | + isinstance(v, six.integer_types)): |
1418 | + # handle explicit values |
1419 | + if actual != v: |
1420 | + return "section [{}] {}:{} != expected {}:{}".format( |
1421 | + section, k, actual, k, expected[k]) |
1422 | + # handle function pointers, such as not_null or valid_ip |
1423 | + elif not v(actual): |
1424 | return "section [{}] {}:{} != expected {}:{}".format( |
1425 | - section, k, config.get(section, k), k, expected[k]) |
1426 | + section, k, actual, k, expected[k]) |
1427 | return None |
1428 | |
1429 | def _validate_dict_data(self, expected, actual): |
1430 | @@ -122,7 +212,7 @@ |
1431 | |
1432 | Compare expected dictionary data vs actual dictionary data. |
1433 | The values in the 'expected' dictionary can be strings, bools, ints, |
1434 | - longs, or can be a function that evaluate a variable and returns a |
1435 | + longs, or can be a function that evaluates a variable and returns a |
1436 | bool. |
1437 | """ |
1438 | self.log.debug('actual: {}'.format(repr(actual))) |
1439 | @@ -133,8 +223,10 @@ |
1440 | if (isinstance(v, six.string_types) or |
1441 | isinstance(v, bool) or |
1442 | isinstance(v, six.integer_types)): |
1443 | + # handle explicit values |
1444 | if v != actual[k]: |
1445 | return "{}:{}".format(k, actual[k]) |
1446 | + # handle function pointers, such as not_null or valid_ip |
1447 | elif not v(actual[k]): |
1448 | return "{}:{}".format(k, actual[k]) |
1449 | else: |
1450 | @@ -321,3 +413,121 @@ |
1451 | |
1452 | def endpoint_error(self, name, data): |
1453 | return 'unexpected endpoint data in {} - {}'.format(name, data) |
1454 | + |
1455 | + def get_ubuntu_releases(self): |
1456 | + """Return a list of all Ubuntu releases in order of release.""" |
1457 | + _d = distro_info.UbuntuDistroInfo() |
1458 | + _release_list = _d.all |
1459 | + self.log.debug('Ubuntu release list: {}'.format(_release_list)) |
1460 | + return _release_list |
1461 | + |
1462 | + def file_to_url(self, file_rel_path): |
1463 | + """Convert a relative file path to a file URL.""" |
1464 | + _abs_path = os.path.abspath(file_rel_path) |
1465 | + return urlparse.urlparse(_abs_path, scheme='file').geturl() |
1466 | + |
1467 | + def check_commands_on_units(self, commands, sentry_units): |
1468 | + """Check that all commands in a list exit zero on all |
1469 | + sentry units in a list. |
1470 | + |
1471 | + :param commands: list of bash commands |
1472 | + :param sentry_units: list of sentry unit pointers |
1473 | + :returns: None if successful; Failure message otherwise |
1474 | + """ |
1475 | + self.log.debug('Checking exit codes for {} commands on {} ' |
1476 | + 'sentry units...'.format(len(commands), |
1477 | + len(sentry_units))) |
1478 | + for sentry_unit in sentry_units: |
1479 | + for cmd in commands: |
1480 | + output, code = sentry_unit.run(cmd) |
1481 | + if code == 0: |
1482 | + self.log.debug('{} `{}` returned {} ' |
1483 | + '(OK)'.format(sentry_unit.info['unit_name'], |
1484 | + cmd, code)) |
1485 | + else: |
1486 | + return ('{} `{}` returned {} ' |
1487 | + '{}'.format(sentry_unit.info['unit_name'], |
1488 | + cmd, code, output)) |
1489 | + return None |
1490 | + |
1491 | + def get_process_id_list(self, sentry_unit, process_name): |
1492 | + """Get a list of process ID(s) from a single sentry juju unit |
1493 | + for a single process name. |
1494 | + |
1495 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
1496 | + :param process_name: Process name |
1497 | + :returns: List of process IDs |
1498 | + """ |
1499 | + cmd = 'pidof {}'.format(process_name) |
1500 | + output, code = sentry_unit.run(cmd) |
1501 | + if code != 0: |
1502 | + msg = ('{} `{}` returned {} ' |
1503 | + '{}'.format(sentry_unit.info['unit_name'], |
1504 | + cmd, code, output)) |
1505 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1506 | + return str(output).split() |
1507 | + |
1508 | + def get_unit_process_ids(self, unit_processes): |
1509 | + """Construct a dict containing unit sentries, process names, and |
1510 | + process IDs.""" |
1511 | + pid_dict = {} |
1512 | + for sentry_unit, process_list in unit_processes.iteritems(): |
1513 | + pid_dict[sentry_unit] = {} |
1514 | + for process in process_list: |
1515 | + pids = self.get_process_id_list(sentry_unit, process) |
1516 | + pid_dict[sentry_unit].update({process: pids}) |
1517 | + return pid_dict |
1518 | + |
1519 | + def validate_unit_process_ids(self, expected, actual): |
1520 | + """Validate process id quantities for services on units.""" |
1521 | + self.log.debug('Checking units for running processes...') |
1522 | + self.log.debug('Expected PIDs: {}'.format(expected)) |
1523 | + self.log.debug('Actual PIDs: {}'.format(actual)) |
1524 | + |
1525 | + if len(actual) != len(expected): |
1526 | + return ('Unit count mismatch. expected, actual: {}, ' |
1527 | + '{} '.format(len(expected), len(actual))) |
1528 | + |
1529 | + for (e_sentry, e_proc_names) in expected.iteritems(): |
1530 | + e_sentry_name = e_sentry.info['unit_name'] |
1531 | + if e_sentry in actual.keys(): |
1532 | + a_proc_names = actual[e_sentry] |
1533 | + else: |
1534 | + return ('Expected sentry ({}) not found in actual dict ' |
1535 | + 'data: {}'.format(e_sentry_name, e_sentry)) |
1536 | + |
1537 | + if len(e_proc_names.keys()) != len(a_proc_names.keys()): |
1538 | + return ('Process name count mismatch. expected, actual: {}, ' |
1539 | + '{}'.format(len(e_proc_names), len(a_proc_names))) |
1540 | + |
1541 | + for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \ |
1542 | + zip(e_proc_names.items(), a_proc_names.items()): |
1543 | + if e_proc_name != a_proc_name: |
1544 | + return ('Process name mismatch. expected, actual: {}, ' |
1545 | + '{}'.format(e_proc_name, a_proc_name)) |
1546 | + |
1547 | + a_pids_length = len(a_pids) |
1548 | + if e_pids_length != a_pids_length: |
1549 | + return ('PID count mismatch. {} ({}) expected, actual: ' |
1550 | + '{}, {} ({})'.format(e_sentry_name, e_proc_name, |
1551 | + e_pids_length, a_pids_length, |
1552 | + a_pids)) |
1553 | + else: |
1554 | + self.log.debug('PID check OK: {} {} {}: ' |
1555 | + '{}'.format(e_sentry_name, e_proc_name, |
1556 | + e_pids_length, a_pids)) |
1557 | + return None |
1558 | + |
1559 | + def validate_list_of_identical_dicts(self, list_of_dicts): |
1560 | + """Check that all dicts within a list are identical.""" |
1561 | + hashes = [] |
1562 | + for _dict in list_of_dicts: |
1563 | + hashes.append(hash(frozenset(_dict.items()))) |
1564 | + |
1565 | + self.log.debug('Hashes: {}'.format(hashes)) |
1566 | + if len(set(hashes)) == 1: |
1567 | + self.log.debug('Dicts within list are identical') |
1568 | + else: |
1569 | + return 'Dicts within list are not identical' |
1570 | + |
1571 | + return None |
1572 | |
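# Illustrative sketch (process names and expected counts are
# assumptions): the intended flow for the new PID helpers -- build an
# expected map, snapshot actual PIDs with get_unit_process_ids (it
# iterates only the process names), then compare counts.
expected_procs = {
    self.ceph0_sentry: {'ceph-mon': 1, 'ceph-osd': 2},
}
actual_pids = u.get_unit_process_ids(expected_procs)
ret = u.validate_unit_process_ids(expected_procs, actual_pids)
if ret:
    amulet.raise_status(amulet.FAIL, msg=ret)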
1573 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py' |
1574 | --- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-04 23:06:40 +0000 |
1575 | +++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-01 14:47:24 +0000 |
1576 | @@ -79,9 +79,9 @@ |
1577 | services.append(this_service) |
1578 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', |
1579 | 'ceph-osd', 'ceph-radosgw'] |
1580 | - # Openstack subordinate charms do not expose an origin option as that |
1581 | - # is controlled by the principle |
1582 | - ignore = ['neutron-openvswitch'] |
1583 | + # Most OpenStack subordinate charms do not expose an origin option |
1584 | + # as that is controlled by the principal. |
1585 | + ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch'] |
1586 | |
1587 | if self.openstack: |
1588 | for svc in services: |
1589 | @@ -110,7 +110,8 @@ |
1590 | (self.precise_essex, self.precise_folsom, self.precise_grizzly, |
1591 | self.precise_havana, self.precise_icehouse, |
1592 | self.trusty_icehouse, self.trusty_juno, self.utopic_juno, |
1593 | - self.trusty_kilo, self.vivid_kilo) = range(10) |
1594 | + self.trusty_kilo, self.vivid_kilo, self.trusty_liberty, |
1595 | + self.wily_liberty) = range(12) |
1596 | |
1597 | releases = { |
1598 | ('precise', None): self.precise_essex, |
1599 | @@ -121,8 +122,10 @@ |
1600 | ('trusty', None): self.trusty_icehouse, |
1601 | ('trusty', 'cloud:trusty-juno'): self.trusty_juno, |
1602 | ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo, |
1603 | + ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty, |
1604 | ('utopic', None): self.utopic_juno, |
1605 | - ('vivid', None): self.vivid_kilo} |
1606 | + ('vivid', None): self.vivid_kilo, |
1607 | + ('wily', None): self.wily_liberty} |
1608 | return releases[(self.series, self.openstack)] |
1609 | |
1610 | def _get_openstack_release_string(self): |
1611 | @@ -138,9 +141,43 @@ |
1612 | ('trusty', 'icehouse'), |
1613 | ('utopic', 'juno'), |
1614 | ('vivid', 'kilo'), |
1615 | + ('wily', 'liberty'), |
1616 | ]) |
1617 | if self.openstack: |
1618 | os_origin = self.openstack.split(':')[1] |
1619 | return os_origin.split('%s-' % self.series)[1].split('/')[0] |
1620 | else: |
1621 | return releases[self.series] |
1622 | + |
1623 | + def get_ceph_expected_pools(self, radosgw=False): |
1624 | + """Return a list of expected ceph pools in a ceph + cinder + glance |
1625 | + test scenario, based on OpenStack release and whether ceph radosgw |
1626 | + is flagged as present or not.""" |
1627 | + |
1628 | + if self._get_openstack_release() >= self.trusty_kilo: |
1629 | + # Kilo or later |
1630 | + pools = [ |
1631 | + 'rbd', |
1632 | + 'cinder', |
1633 | + 'glance' |
1634 | + ] |
1635 | + else: |
1636 | + # Juno or earlier |
1637 | + pools = [ |
1638 | + 'data', |
1639 | + 'metadata', |
1640 | + 'rbd', |
1641 | + 'cinder', |
1642 | + 'glance' |
1643 | + ] |
1644 | + |
1645 | + if radosgw: |
1646 | + pools.extend([ |
1647 | + '.rgw.root', |
1648 | + '.rgw.control', |
1649 | + '.rgw', |
1650 | + '.rgw.gc', |
1651 | + '.users.uid' |
1652 | + ]) |
1653 | + |
1654 | + return pools |
1655 | |
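# Illustrative: on a trusty-kilo deployment with radosgw present, the
# helper above yields ['rbd', 'cinder', 'glance'] plus the '.rgw*' and
# '.users.uid' pools; tests then compare that list against
# `ceph osd lspools` output from each unit, e.g.:
expected_pools = self.get_ceph_expected_pools(radosgw=True)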
1656 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py' |
1657 | --- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-01-26 11:53:19 +0000 |
1658 | +++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-01 14:47:24 +0000 |
1659 | @@ -14,16 +14,20 @@ |
1660 | # You should have received a copy of the GNU Lesser General Public License |
1661 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1662 | |
1663 | +import amulet |
1664 | +import json |
1665 | import logging |
1666 | import os |
1667 | +import six |
1668 | import time |
1669 | import urllib |
1670 | |
1671 | +import cinderclient.v1.client as cinder_client |
1672 | import glanceclient.v1.client as glance_client |
1673 | +import heatclient.v1.client as heat_client |
1674 | import keystoneclient.v2_0 as keystone_client |
1675 | import novaclient.v1_1.client as nova_client |
1676 | - |
1677 | -import six |
1678 | +import swiftclient |
1679 | |
1680 | from charmhelpers.contrib.amulet.utils import ( |
1681 | AmuletUtils |
1682 | @@ -37,7 +41,7 @@ |
1683 | """OpenStack amulet utilities. |
1684 | |
1685 | This class inherits from AmuletUtils and has additional support |
1686 | - that is specifically for use by OpenStack charms. |
1687 | + that is specifically for use by OpenStack charm tests. |
1688 | """ |
1689 | |
1690 | def __init__(self, log_level=ERROR): |
1691 | @@ -51,6 +55,8 @@ |
1692 | Validate actual endpoint data vs expected endpoint data. The ports |
1693 | are used to find the matching endpoint. |
1694 | """ |
1695 | + self.log.debug('Validating endpoint data...') |
1696 | + self.log.debug('actual: {}'.format(repr(endpoints))) |
1697 | found = False |
1698 | for ep in endpoints: |
1699 | self.log.debug('endpoint: {}'.format(repr(ep))) |
1700 | @@ -77,6 +83,7 @@ |
1701 | Validate a list of actual service catalog endpoints vs a list of |
1702 | expected service catalog endpoints. |
1703 | """ |
1704 | + self.log.debug('Validating service catalog endpoint data...') |
1705 | self.log.debug('actual: {}'.format(repr(actual))) |
1706 | for k, v in six.iteritems(expected): |
1707 | if k in actual: |
1708 | @@ -93,6 +100,7 @@ |
1709 | Validate a list of actual tenant data vs list of expected tenant |
1710 | data. |
1711 | """ |
1712 | + self.log.debug('Validating tenant data...') |
1713 | self.log.debug('actual: {}'.format(repr(actual))) |
1714 | for e in expected: |
1715 | found = False |
1716 | @@ -114,6 +122,7 @@ |
1717 | Validate a list of actual role data vs a list of expected role |
1718 | data. |
1719 | """ |
1720 | + self.log.debug('Validating role data...') |
1721 | self.log.debug('actual: {}'.format(repr(actual))) |
1722 | for e in expected: |
1723 | found = False |
1724 | @@ -134,6 +143,7 @@ |
1725 | Validate a list of actual user data vs a list of expected user |
1726 | data. |
1727 | """ |
1728 | + self.log.debug('Validating user data...') |
1729 | self.log.debug('actual: {}'.format(repr(actual))) |
1730 | for e in expected: |
1731 | found = False |
1732 | @@ -155,17 +165,30 @@ |
1733 | |
1734 | Validate a list of actual flavors vs a list of expected flavors. |
1735 | """ |
1736 | + self.log.debug('Validating flavor data...') |
1737 | self.log.debug('actual: {}'.format(repr(actual))) |
1738 | act = [a.name for a in actual] |
1739 | return self._validate_list_data(expected, act) |
1740 | |
1741 | def tenant_exists(self, keystone, tenant): |
1742 | """Return True if tenant exists.""" |
1743 | + self.log.debug('Checking if tenant exists ({})...'.format(tenant)) |
1744 | return tenant in [t.name for t in keystone.tenants.list()] |
1745 | |
1746 | + def authenticate_cinder_admin(self, keystone_sentry, username, |
1747 | + password, tenant): |
1748 | + """Authenticates admin user with cinder.""" |
1749 | + # NOTE(beisner): cinder python client doesn't accept tokens. |
1750 | + service_ip = \ |
1751 | + keystone_sentry.relation('shared-db', |
1752 | + 'mysql:shared-db')['private-address'] |
1753 | + ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8')) |
1754 | + return cinder_client.Client(username, password, tenant, ept) |
1755 | + |
1756 | def authenticate_keystone_admin(self, keystone_sentry, user, password, |
1757 | tenant): |
1758 | """Authenticates admin user with the keystone admin endpoint.""" |
1759 | + self.log.debug('Authenticating keystone admin...') |
1760 | unit = keystone_sentry |
1761 | service_ip = unit.relation('shared-db', |
1762 | 'mysql:shared-db')['private-address'] |
1763 | @@ -175,6 +198,7 @@ |
1764 | |
1765 | def authenticate_keystone_user(self, keystone, user, password, tenant): |
1766 | """Authenticates a regular user with the keystone public endpoint.""" |
1767 | + self.log.debug('Authenticating keystone user ({})...'.format(user)) |
1768 | ep = keystone.service_catalog.url_for(service_type='identity', |
1769 | endpoint_type='publicURL') |
1770 | return keystone_client.Client(username=user, password=password, |
1771 | @@ -182,19 +206,49 @@ |
1772 | |
1773 | def authenticate_glance_admin(self, keystone): |
1774 | """Authenticates admin user with glance.""" |
1775 | + self.log.debug('Authenticating glance admin...') |
1776 | ep = keystone.service_catalog.url_for(service_type='image', |
1777 | endpoint_type='adminURL') |
1778 | return glance_client.Client(ep, token=keystone.auth_token) |
1779 | |
1780 | + def authenticate_heat_admin(self, keystone): |
1781 | + """Authenticates the admin user with heat.""" |
1782 | + self.log.debug('Authenticating heat admin...') |
1783 | + ep = keystone.service_catalog.url_for(service_type='orchestration', |
1784 | + endpoint_type='publicURL') |
1785 | + return heat_client.Client(endpoint=ep, token=keystone.auth_token) |
1786 | + |
1787 | def authenticate_nova_user(self, keystone, user, password, tenant): |
1788 | """Authenticates a regular user with nova-api.""" |
1789 | + self.log.debug('Authenticating nova user ({})...'.format(user)) |
1790 | ep = keystone.service_catalog.url_for(service_type='identity', |
1791 | endpoint_type='publicURL') |
1792 | return nova_client.Client(username=user, api_key=password, |
1793 | project_id=tenant, auth_url=ep) |
1794 | |
1795 | + def authenticate_swift_user(self, keystone, user, password, tenant): |
1796 | + """Authenticates a regular user with swift api.""" |
1797 | + self.log.debug('Authenticating swift user ({})...'.format(user)) |
1798 | + ep = keystone.service_catalog.url_for(service_type='identity', |
1799 | + endpoint_type='publicURL') |
1800 | + return swiftclient.Connection(authurl=ep, |
1801 | + user=user, |
1802 | + key=password, |
1803 | + tenant_name=tenant, |
1804 | + auth_version='2.0') |
1805 | + |
1806 | def create_cirros_image(self, glance, image_name): |
1807 | - """Download the latest cirros image and upload it to glance.""" |
1808 | + """Download the latest cirros image and upload it to glance, |
1809 | + validate and return a resource pointer. |
1810 | + |
1811 | + :param glance: pointer to authenticated glance connection |
1812 | + :param image_name: display name for new image |
1813 | + :returns: glance image pointer |
1814 | + """ |
1815 | + self.log.debug('Creating glance cirros image ' |
1816 | + '({})...'.format(image_name)) |
1817 | + |
1818 | + # Download cirros image |
1819 | http_proxy = os.getenv('AMULET_HTTP_PROXY') |
1820 | self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy)) |
1821 | if http_proxy: |
1822 | @@ -203,57 +257,67 @@ |
1823 | else: |
1824 | opener = urllib.FancyURLopener() |
1825 | |
1826 | - f = opener.open("http://download.cirros-cloud.net/version/released") |
1827 | + f = opener.open('http://download.cirros-cloud.net/version/released') |
1828 | version = f.read().strip() |
1829 | - cirros_img = "cirros-{}-x86_64-disk.img".format(version) |
1830 | + cirros_img = 'cirros-{}-x86_64-disk.img'.format(version) |
1831 | local_path = os.path.join('tests', cirros_img) |
1832 | |
1833 | if not os.path.exists(local_path): |
1834 | - cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net", |
1835 | + cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net', |
1836 | version, cirros_img) |
1837 | opener.retrieve(cirros_url, local_path) |
1838 | f.close() |
1839 | |
1840 | + # Create glance image |
1841 | with open(local_path) as f: |
1842 | image = glance.images.create(name=image_name, is_public=True, |
1843 | disk_format='qcow2', |
1844 | container_format='bare', data=f) |
1845 | - count = 1 |
1846 | - status = image.status |
1847 | - while status != 'active' and count < 10: |
1848 | - time.sleep(3) |
1849 | - image = glance.images.get(image.id) |
1850 | - status = image.status |
1851 | - self.log.debug('image status: {}'.format(status)) |
1852 | - count += 1 |
1853 | - |
1854 | - if status != 'active': |
1855 | - self.log.error('image creation timed out') |
1856 | - return None |
1857 | + |
1858 | + # Wait for image to reach active status |
1859 | + img_id = image.id |
1860 | + ret = self.resource_reaches_status(glance.images, img_id, |
1861 | + expected_stat='active', |
1862 | + msg='Image status wait') |
1863 | + if not ret: |
1864 | + msg = 'Glance image failed to reach expected state.' |
1865 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1866 | + |
1867 | + # Re-validate new image |
1868 | + self.log.debug('Validating image attributes...') |
1869 | + val_img_name = glance.images.get(img_id).name |
1870 | + val_img_stat = glance.images.get(img_id).status |
1871 | + val_img_pub = glance.images.get(img_id).is_public |
1872 | + val_img_cfmt = glance.images.get(img_id).container_format |
1873 | + val_img_dfmt = glance.images.get(img_id).disk_format |
1874 | + msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} ' |
1875 | + 'container fmt:{} disk fmt:{}'.format( |
1876 | + val_img_name, val_img_pub, img_id, |
1877 | + val_img_stat, val_img_cfmt, val_img_dfmt)) |
1878 | + |
1879 | + if val_img_name == image_name and val_img_stat == 'active' \ |
1880 | + and val_img_pub is True and val_img_cfmt == 'bare' \ |
1881 | + and val_img_dfmt == 'qcow2': |
1882 | + self.log.debug(msg_attr) |
1883 | + else: |
1884 | + msg = ('Image validation failed, {}'.format(msg_attr)) |
1885 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1886 | |
1887 | return image |
1888 | |
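# Illustrative usage (glance connection assumed to be authenticated):
image = u.create_cirros_image(glance, 'cirros-image-1')
# ... exercise the image, then remove it with the generic helper:
u.delete_resource(glance.images, image.id, msg='glance image')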
1889 | def delete_image(self, glance, image): |
1890 | """Delete the specified image.""" |
1891 | - num_before = len(list(glance.images.list())) |
1892 | - glance.images.delete(image) |
1893 | - |
1894 | - count = 1 |
1895 | - num_after = len(list(glance.images.list())) |
1896 | - while num_after != (num_before - 1) and count < 10: |
1897 | - time.sleep(3) |
1898 | - num_after = len(list(glance.images.list())) |
1899 | - self.log.debug('number of images: {}'.format(num_after)) |
1900 | - count += 1 |
1901 | - |
1902 | - if num_after != (num_before - 1): |
1903 | - self.log.error('image deletion timed out') |
1904 | - return False |
1905 | - |
1906 | - return True |
1907 | + |
1908 | + # /!\ DEPRECATION WARNING |
1909 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
1910 | + 'delete_resource instead of delete_image.') |
1911 | + self.log.debug('Deleting glance image ({})...'.format(image)) |
1912 | + return self.delete_resource(glance.images, image, msg='glance image') |
1913 | |
1914 | def create_instance(self, nova, image_name, instance_name, flavor): |
1915 | """Create the specified instance.""" |
1916 | + self.log.debug('Creating instance ' |
1917 | + '({}|{}|{})'.format(instance_name, image_name, flavor)) |
1918 | image = nova.images.find(name=image_name) |
1919 | flavor = nova.flavors.find(name=flavor) |
1920 | instance = nova.servers.create(name=instance_name, image=image, |
1921 | @@ -276,19 +340,265 @@ |
1922 | |
1923 | def delete_instance(self, nova, instance): |
1924 | """Delete the specified instance.""" |
1925 | - num_before = len(list(nova.servers.list())) |
1926 | - nova.servers.delete(instance) |
1927 | - |
1928 | - count = 1 |
1929 | - num_after = len(list(nova.servers.list())) |
1930 | - while num_after != (num_before - 1) and count < 10: |
1931 | - time.sleep(3) |
1932 | - num_after = len(list(nova.servers.list())) |
1933 | - self.log.debug('number of instances: {}'.format(num_after)) |
1934 | - count += 1 |
1935 | - |
1936 | - if num_after != (num_before - 1): |
1937 | - self.log.error('instance deletion timed out') |
1938 | - return False |
1939 | - |
1940 | - return True |
1941 | + |
1942 | + # /!\ DEPRECATION WARNING |
1943 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
1944 | + 'delete_resource instead of delete_instance.') |
1945 | + self.log.debug('Deleting instance ({})...'.format(instance)) |
1946 | + return self.delete_resource(nova.servers, instance, |
1947 | + msg='nova instance') |
1948 | + |
1949 | + def create_or_get_keypair(self, nova, keypair_name="testkey"): |
1950 | + """Create a new keypair, or return pointer if it already exists.""" |
1951 | + try: |
1952 | + _keypair = nova.keypairs.get(keypair_name) |
1953 | + self.log.debug('Keypair ({}) already exists, ' |
1954 | + 'using it.'.format(keypair_name)) |
1955 | + return _keypair |
1956 | + except Exception: |
1957 | + self.log.debug('Keypair ({}) does not exist, ' |
1958 | + 'creating it.'.format(keypair_name)) |
1959 | + |
1960 | + _keypair = nova.keypairs.create(name=keypair_name) |
1961 | + return _keypair |
1962 | + |
1963 | + def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1, |
1964 | + img_id=None, src_vol_id=None, snap_id=None): |
1965 | + """Create cinder volume, optionally from a glance image, OR |
1966 | + optionally as a clone of an existing volume, OR optionally |
1967 | + from a snapshot. Wait for the new volume status to reach |
1968 | + the expected status, validate and return a resource pointer. |
1969 | + |
1970 | + :param vol_name: cinder volume display name |
1971 | + :param vol_size: size in gigabytes |
1972 | + :param img_id: optional glance image id |
1973 | + :param src_vol_id: optional source volume id to clone |
1974 | + :param snap_id: optional snapshot id to use |
1975 | + :returns: cinder volume pointer |
1976 | + """ |
1977 | + # Handle parameter input and avoid impossible combinations |
1978 | + if img_id and not src_vol_id and not snap_id: |
1979 | + # Create volume from image |
1980 | + self.log.debug('Creating cinder volume from glance image...') |
1981 | + bootable = 'true' |
1982 | + elif src_vol_id and not img_id and not snap_id: |
1983 | + # Clone an existing volume |
1984 | + self.log.debug('Cloning cinder volume...') |
1985 | + bootable = cinder.volumes.get(src_vol_id).bootable |
1986 | + elif snap_id and not src_vol_id and not img_id: |
1987 | + # Create volume from snapshot |
1988 | + self.log.debug('Creating cinder volume from snapshot...') |
1989 | + snap = cinder.volume_snapshots.find(id=snap_id) |
1990 | + vol_size = snap.size |
1991 | + snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id |
1992 | + bootable = cinder.volumes.get(snap_vol_id).bootable |
1993 | + elif not img_id and not src_vol_id and not snap_id: |
1994 | + # Create volume |
1995 | + self.log.debug('Creating cinder volume...') |
1996 | + bootable = 'false' |
1997 | + else: |
1998 | + # Impossible combination of parameters |
1999 | + msg = ('Invalid method use - name:{} size:{} img_id:{} ' |
2000 | + 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size, |
2001 | + img_id, src_vol_id, |
2002 | + snap_id)) |
2003 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2004 | + |
2005 | + # Create new volume |
2006 | + try: |
2007 | + vol_new = cinder.volumes.create(display_name=vol_name, |
2008 | + imageRef=img_id, |
2009 | + size=vol_size, |
2010 | + source_volid=src_vol_id, |
2011 | + snapshot_id=snap_id) |
2012 | + vol_id = vol_new.id |
2013 | + except Exception as e: |
2014 | + msg = 'Failed to create volume: {}'.format(e) |
2015 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2016 | + |
2017 | + # Wait for volume to reach available status |
2018 | + ret = self.resource_reaches_status(cinder.volumes, vol_id, |
2019 | + expected_stat="available", |
2020 | + msg="Volume status wait") |
2021 | + if not ret: |
2022 | + msg = 'Cinder volume failed to reach expected state.' |
2023 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2024 | + |
2025 | + # Re-validate new volume |
2026 | + self.log.debug('Validating volume attributes...') |
2027 | + val_vol_name = cinder.volumes.get(vol_id).display_name |
2028 | + val_vol_boot = cinder.volumes.get(vol_id).bootable |
2029 | + val_vol_stat = cinder.volumes.get(vol_id).status |
2030 | + val_vol_size = cinder.volumes.get(vol_id).size |
2031 | + msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:' |
2032 | + '{} size:{}'.format(val_vol_name, vol_id, |
2033 | + val_vol_stat, val_vol_boot, |
2034 | + val_vol_size)) |
2035 | + |
2036 | + if val_vol_boot == bootable and val_vol_stat == 'available' \ |
2037 | + and val_vol_name == vol_name and val_vol_size == vol_size: |
2038 | + self.log.debug(msg_attr) |
2039 | + else: |
2040 | + msg = ('Volume validation failed, {}'.format(msg_attr)) |
2041 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2042 | + |
2043 | + return vol_new |
2044 | + |
2045 | + def delete_resource(self, resource, resource_id, |
2046 | + msg="resource", max_wait=120): |
2047 | + """Delete one openstack resource, such as one instance, keypair, |
2048 | + image, volume, stack, etc., and confirm deletion within max wait time. |
2049 | + |
2050 | + :param resource: pointer to os resource type, ex:glance_client.images |
2051 | + :param resource_id: unique name or id for the openstack resource |
2052 | + :param msg: text to identify purpose in logging |
2053 | + :param max_wait: maximum wait time in seconds |
2054 | + :returns: True if successful, otherwise False |
2055 | + """ |
2056 | + self.log.debug('Deleting OpenStack resource ' |
2057 | + '{} ({})'.format(resource_id, msg)) |
2058 | + num_before = len(list(resource.list())) |
2059 | + resource.delete(resource_id) |
2060 | + |
2061 | + tries = 0 |
2062 | + num_after = len(list(resource.list())) |
2063 | + while num_after != (num_before - 1) and tries < (max_wait / 4): |
2064 | + self.log.debug('{} delete check: ' |
2065 | + '{} [{}:{}] {}'.format(msg, tries, |
2066 | + num_before, |
2067 | + num_after, |
2068 | + resource_id)) |
2069 | + time.sleep(4) |
2070 | + num_after = len(list(resource.list())) |
2071 | + tries += 1 |
2072 | + |
2073 | + self.log.debug('{}: expected, actual count = {}, ' |
2074 | + '{}'.format(msg, num_before - 1, num_after)) |
2075 | + |
2076 | + if num_after == (num_before - 1): |
2077 | + return True |
2078 | + else: |
2079 | + self.log.error('{} delete timed out'.format(msg)) |
2080 | + return False |
2081 | + |
2082 | + def resource_reaches_status(self, resource, resource_id, |
2083 | + expected_stat='available', |
2084 | + msg='resource', max_wait=120): |
2085 | + """Wait for an openstack resources status to reach an |
2086 | + expected status within a specified time. Useful to confirm that |
2087 | + nova instances, cinder vols, snapshots, glance images, heat stacks |
2088 | + and other resources eventually reach the expected status. |
2089 | + |
2090 | + :param resource: pointer to os resource type, ex: heat_client.stacks |
2091 | + :param resource_id: unique id for the openstack resource |
2092 | + :param expected_stat: status to expect resource to reach |
2093 | + :param msg: text to identify purpose in logging |
2094 | + :param max_wait: maximum wait time in seconds |
2095 | + :returns: True if successful, False if status is not reached |
2096 | + """ |
2097 | + |
2098 | + tries = 0 |
2099 | + resource_stat = resource.get(resource_id).status |
2100 | + while resource_stat != expected_stat and tries < (max_wait / 4): |
2101 | + self.log.debug('{} status check: ' |
2102 | + '{} [{}:{}] {}'.format(msg, tries, |
2103 | + resource_stat, |
2104 | + expected_stat, |
2105 | + resource_id)) |
2106 | + time.sleep(4) |
2107 | + resource_stat = resource.get(resource_id).status |
2108 | + tries += 1 |
2109 | + |
2110 | + self.log.debug('{}: expected, actual status = {}, ' |
2111 | + '{}'.format(msg, expected_stat, resource_stat)) |
2112 | + |
2113 | + if resource_stat == expected_stat: |
2114 | + return True |
2115 | + else: |
2116 | + self.log.debug('{} never reached expected status: ' |
2117 | + '{}'.format(resource_id, expected_stat)) |
2118 | + return False |
2119 | + |
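# Illustrative sketch (the cinder client and `vol` are assumed to
# exist): the two polling helpers above are designed to pair up --
# wait for a resource to settle, then delete it and confirm removal
# within max_wait seconds.
if not self.resource_reaches_status(cinder.volumes, vol.id,
                                    expected_stat='available',
                                    msg='volume status wait'):
    amulet.raise_status(amulet.FAIL, msg='volume never became available')
if not self.delete_resource(cinder.volumes, vol.id, msg='cinder volume'):
    amulet.raise_status(amulet.FAIL, msg='cinder volume delete failed')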
2120 | + def get_ceph_osd_id_cmd(self, index): |
2121 | + """Produce a shell command that will return a ceph-osd id.""" |
2122 | + return ("`initctl list | grep 'ceph-osd ' | " |
2123 | + "awk 'NR=={} {{ print $2 }}' | " |
2124 | + "grep -o '[0-9]*'`".format(index + 1)) |
2125 | + |
2126 | + def get_ceph_pools(self, sentry_unit): |
2127 | + """Return a dict of ceph pools from a single ceph unit, with |
2128 | + pool name as keys, pool id as vals.""" |
2129 | + pools = {} |
2130 | + cmd = 'sudo ceph osd lspools' |
2131 | + output, code = sentry_unit.run(cmd) |
2132 | + if code != 0: |
2133 | + msg = ('{} `{}` returned {} ' |
2134 | + '{}'.format(sentry_unit.info['unit_name'], |
2135 | + cmd, code, output)) |
2136 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2137 | + |
2138 | + # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance, |
2139 | + for pool in str(output).split(','): |
2140 | + pool_id_name = pool.split(' ') |
2141 | + if len(pool_id_name) == 2: |
2142 | + pool_id = pool_id_name[0] |
2143 | + pool_name = pool_id_name[1] |
2144 | + pools[pool_name] = int(pool_id) |
2145 | + |
2146 | + self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'], |
2147 | + pools)) |
2148 | + return pools |
2149 | + |
2150 | + def get_ceph_df(self, sentry_unit): |
2151 | + """Return dict of ceph df json output, including ceph pool state. |
2152 | + |
2153 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
2154 | + :returns: Dict of ceph df output |
2155 | + """ |
2156 | + cmd = 'sudo ceph df --format=json' |
2157 | + output, code = sentry_unit.run(cmd) |
2158 | + if code != 0: |
2159 | + msg = ('{} `{}` returned {} ' |
2160 | + '{}'.format(sentry_unit.info['unit_name'], |
2161 | + cmd, code, output)) |
2162 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2163 | + return json.loads(output) |
2164 | + |
2165 | + def get_ceph_pool_sample(self, sentry_unit, pool_id=0): |
2166 | + """Take a sample of attributes of a ceph pool, returning ceph |
2167 | + pool name, object count and disk space used for the specified |
2168 | + pool ID number. |
2169 | + |
2170 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
2171 | + :param pool_id: Ceph pool ID |
2172 | + :returns: List of pool name, object count, kb disk space used |
2173 | + """ |
2174 | + df = self.get_ceph_df(sentry_unit) |
2175 | + pool_name = df['pools'][pool_id]['name'] |
2176 | + obj_count = df['pools'][pool_id]['stats']['objects'] |
2177 | + kb_used = df['pools'][pool_id]['stats']['kb_used'] |
2178 | + self.log.debug('Ceph {} pool (ID {}): {} objects, ' |
2179 | + '{} kb used'.format(pool_name, pool_id, |
2180 | + obj_count, kb_used)) |
2181 | + return pool_name, obj_count, kb_used |
2182 | + |
2183 | + def validate_ceph_pool_samples(self, samples, sample_type="resource pool"): |
2184 | + """Validate ceph pool samples taken over time, such as pool |
2185 | + object counts or pool kb used, before adding, after adding, and |
2186 | + after deleting items which affect those pool attributes. The |
2187 | + 2nd element is expected to be greater than the 1st; 3rd is expected |
2188 | + to be less than the 2nd. |
2189 | + |
2190 | + :param samples: List containing 3 data samples |
2191 | + :param sample_type: String for logging and usage context |
2192 | + :returns: None if successful, Failure message otherwise |
2193 | + """ |
2194 | + original, created, deleted = range(3) |
2195 | + if samples[created] <= samples[original] or \ |
2196 | + samples[deleted] >= samples[created]: |
2197 | + return ('Ceph {} samples ({}) ' |
2198 | + 'unexpected.'.format(sample_type, samples)) |
2199 | + else: |
2200 | + self.log.debug('Ceph {} samples (OK): ' |
2201 | + '{}'.format(sample_type, samples)) |
2202 | + return None |
2203 | |
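# Illustrative sketch (pool id and the created resource are
# assumptions): take three object-count samples -- before creating a
# pool-backed resource, after creating it, and after deleting it --
# then validate the expected rise and fall.
samples = []
samples.append(self.get_ceph_pool_sample(sentry_unit, pool_id=2)[1])
# ... create a glance image or cinder volume backed by the pool ...
samples.append(self.get_ceph_pool_sample(sentry_unit, pool_id=2)[1])
# ... delete it ...
samples.append(self.get_ceph_pool_sample(sentry_unit, pool_id=2)[1])
ret = self.validate_ceph_pool_samples(samples, 'pool object count')
if ret:
    amulet.raise_status(amulet.FAIL, msg=ret)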
2204 | === added file 'tests/tests.yaml' |
2205 | --- tests/tests.yaml 1970-01-01 00:00:00 +0000 |
2206 | +++ tests/tests.yaml 2015-07-01 14:47:24 +0000 |
2207 | @@ -0,0 +1,18 @@ |
2208 | +bootstrap: true |
2209 | +reset: true |
2210 | +virtualenv: true |
2211 | +makefile: |
2212 | + - lint |
2213 | + - test |
2214 | +sources: |
2215 | + - ppa:juju/stable |
2216 | +packages: |
2217 | + - amulet |
2218 | + - python-amulet |
2219 | + - python-cinderclient |
2220 | + - python-distro-info |
2221 | + - python-glanceclient |
2222 | + - python-heatclient |
2223 | + - python-keystoneclient |
2224 | + - python-novaclient |
2225 | + - python-swiftclient |
charm_lint_check #5539 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed
Build: http://10.245.162.77:8080/job/charm_lint_check/5539/