Merge lp:~1chb1n/charms/trusty/ceilometer-agent/next-amulet-add-initial into lp:~openstack-charmers-archive/charms/trusty/ceilometer-agent/next
Status: Merged
Merged at revision: 56
Proposed branch: lp:~1chb1n/charms/trusty/ceilometer-agent/next-amulet-add-initial
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceilometer-agent/next
Diff against target:
4014 lines (+2939/-192), 41 files modified:

Makefile (+15/-4)
charm-helpers-tests.yaml (+5/-0)
hooks/ceilometer_hooks.py (+0/-4)
hooks/ceilometer_utils.py (+1/-1)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+47/-3)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+6/-2)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+122/-3)
hooks/charmhelpers/contrib/openstack/context.py (+1/-1)
hooks/charmhelpers/contrib/openstack/ip.py (+49/-44)
hooks/charmhelpers/contrib/openstack/neutron.py (+16/-9)
hooks/charmhelpers/contrib/openstack/utils.py (+82/-22)
hooks/charmhelpers/contrib/python/packages.py (+30/-5)
hooks/charmhelpers/core/hookenv.py (+231/-38)
hooks/charmhelpers/core/host.py (+25/-7)
hooks/charmhelpers/core/services/base.py (+43/-19)
hooks/charmhelpers/fetch/__init__.py (+1/-1)
hooks/charmhelpers/fetch/giturl.py (+7/-5)
metadata.yaml (+3/-2)
templates/ceilometer.conf.~1~ (+0/-21)
tests/00-setup (+15/-0)
tests/014-basic-precise-icehouse (+11/-0)
tests/015-basic-trusty-icehouse (+9/-0)
tests/016-basic-trusty-juno (+11/-0)
tests/017-basic-trusty-kilo (+11/-0)
tests/018-basic-utopic-juno (+9/-0)
tests/019-basic-vivid-kilo (+9/-0)
tests/020-basic-trusty-liberty (+11/-0)
tests/021-basic-wily-liberty (+9/-0)
tests/README (+62/-0)
tests/basic_deployment.py (+567/-0)
tests/charmhelpers/__init__.py (+38/-0)
tests/charmhelpers/contrib/__init__.py (+15/-0)
tests/charmhelpers/contrib/amulet/__init__.py (+15/-0)
tests/charmhelpers/contrib/amulet/deployment.py (+93/-0)
tests/charmhelpers/contrib/amulet/utils.py (+533/-0)
tests/charmhelpers/contrib/openstack/__init__.py (+15/-0)
tests/charmhelpers/contrib/openstack/amulet/__init__.py (+15/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+183/-0)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+604/-0)
tests/tests.yaml (+19/-0)
unit_tests/test_ceilometer_utils.py (+1/-1)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/ceilometer-agent/next-amulet-add-initial
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Corey Bryant | | | Pending
Liam Young | | | Pending
Review via email: mp+263040@code.launchpad.net
Commit message
Description of the change
Carry over amulet tests from the ceilometer charm as a baseline; add subordinate relation, service catalog, endpoint and nova ceilometer config checks.
Fix lint in unit test (unused import).
Resolve grizzly-override assumptions re: bug 1469241.
Remove accidental templates/ceilometer.conf.~1~ (left over from rev 23, circa 2013).
Update tags for consistency with other os-charms.
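For context on the grizzly-override removal: as the hooks/ceilometer_hooks.py hunk in the diff shows, the install hook previously rewrote a 'distro' origin to the precise-grizzly cloud archive when deployed on precise. A minimal sketch of the logic this branch removes (the function name is illustrative, not part of the charm):

```python
def effective_origin(distrib_codename, configured_origin):
    """Sketch of the override removed by this branch (re: bug 1469241).

    Before this merge, openstack-origin='distro' on a precise unit was
    silently rewritten to the grizzly cloud archive; afterwards the
    configured origin is passed to configure_installation_source()
    unchanged on every series.
    """
    if distrib_codename == 'precise' and configured_origin == 'distro':
        return 'cloud:precise-grizzly'
    return configured_origin
```

With the override gone, the charm's earliest supported release default also moves from 'grizzly' to 'icehouse', matching the hooks/ceilometer_utils.py hunk.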
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5252 ceilometer-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4802 ceilometer-
AMULET OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5255 ceilometer-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5623 ceilometer-
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4806 ceilometer-
AMULET OK: passed
Build: http://
Ryan Beisner (1chb1n) wrote:
Flipped back to WIP re: tests/charmhelpers work in progress. Other things here are clear for review and input.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5677 ceilometer-
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5309 ceilometer-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5679 ceilometer-
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5311 ceilometer-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4862 ceilometer-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
65. By Ryan Beisner

remove accidental templates/ceilometer.conf.~1~ from rev23 circa 2013
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5705 ceilometer-
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5337 ceilometer-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4893 ceilometer-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
66. By Ryan Beisner

update test
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #5340 ceilometer-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #5708 ceilometer-
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #4899 ceilometer-
AMULET OK: passed
Build: http://
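Among the helpers pulled in by the charm-helpers sync in the diff below is a generic delete_resource that supersedes the now-deprecated delete_image and delete_instance. A simplified, self-contained sketch of its polling pattern (the interval and sleep parameters are added here for testability; the real helper hardcodes a 4-second time.sleep):

```python
import time


def delete_resource(resource, resource_id, msg="resource", max_wait=120,
                    interval=4, sleep=time.sleep):
    """Delete one OpenStack resource via its client collection (e.g.
    glance.images, nova.servers) and poll until the collection's list
    count drops by one, or until roughly max_wait seconds elapse.

    Returns True if the resource disappeared in time, else False.
    """
    num_before = len(list(resource.list()))
    resource.delete(resource_id)

    tries = 0
    num_after = len(list(resource.list()))
    while num_after != num_before - 1 and tries < max_wait / interval:
        sleep(interval)  # real helper: time.sleep(4)
        num_after = len(list(resource.list()))
        tries += 1

    return num_after == num_before - 1
```

The companion resource_reaches_status helper in the same sync follows the identical poll-and-compare loop, but checks resource.get(resource_id).status against an expected status instead of a list count.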
Preview Diff
1 | === modified file 'Makefile' |
2 | --- Makefile 2014-12-15 09:20:47 +0000 |
3 | +++ Makefile 2015-07-01 21:31:48 +0000 |
4 | @@ -2,16 +2,27 @@ |
5 | PYTHON := /usr/bin/env python |
6 | |
7 | lint: |
8 | - @flake8 --exclude hooks/charmhelpers hooks |
9 | + @flake8 --exclude hooks/charmhelpers,tests/charmhelpers \ |
10 | + hooks tests unit_tests |
11 | @charm proof |
12 | |
13 | +test: |
14 | + @# Bundletester expects unit tests here. |
15 | + @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests |
16 | + |
17 | +functional_test: |
18 | + @echo Starting Amulet tests... |
19 | + @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700 |
20 | + |
21 | bin/charm_helpers_sync.py: |
22 | @mkdir -p bin |
23 | @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \ |
24 | > bin/charm_helpers_sync.py |
25 | |
26 | sync: bin/charm_helpers_sync.py |
27 | - @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml |
28 | + @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml |
29 | + @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml |
30 | |
31 | -unit_test: |
32 | - @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests |
33 | +publish: lint test |
34 | + @bzr push lp:charms/ceilometer-agent |
35 | + @bzr push lp:charms/trusty/ceilometer-agent |
36 | |
37 | === renamed file 'charm-helpers.yaml' => 'charm-helpers-hooks.yaml' |
38 | === added file 'charm-helpers-tests.yaml' |
39 | --- charm-helpers-tests.yaml 1970-01-01 00:00:00 +0000 |
40 | +++ charm-helpers-tests.yaml 2015-07-01 21:31:48 +0000 |
41 | @@ -0,0 +1,5 @@ |
42 | +branch: lp:charm-helpers |
43 | +destination: tests/charmhelpers |
44 | +include: |
45 | + - contrib.amulet |
46 | + - contrib.openstack.amulet |
47 | |
48 | === modified file 'hooks/ceilometer_hooks.py' |
49 | --- hooks/ceilometer_hooks.py 2015-04-14 16:06:52 +0000 |
50 | +++ hooks/ceilometer_hooks.py 2015-07-01 21:31:48 +0000 |
51 | @@ -15,7 +15,6 @@ |
52 | ) |
53 | from charmhelpers.core.host import ( |
54 | restart_on_change, |
55 | - lsb_release, |
56 | ) |
57 | from charmhelpers.contrib.openstack.utils import ( |
58 | configure_installation_source, |
59 | @@ -38,9 +37,6 @@ |
60 | @hooks.hook() |
61 | def install(): |
62 | origin = config('openstack-origin') |
63 | - if (lsb_release()['DISTRIB_CODENAME'] == 'precise' |
64 | - and origin == 'distro'): |
65 | - origin = 'cloud:precise-grizzly' |
66 | configure_installation_source(origin) |
67 | apt_update(fatal=True) |
68 | apt_install( |
69 | |
70 | === modified file 'hooks/ceilometer_utils.py' |
71 | --- hooks/ceilometer_utils.py 2014-11-18 05:51:05 +0000 |
72 | +++ hooks/ceilometer_utils.py 2015-07-01 21:31:48 +0000 |
73 | @@ -66,7 +66,7 @@ |
74 | # just default to earliest supported release. configs dont get touched |
75 | # till post-install, anyway. |
76 | release = get_os_codename_package('ceilometer-common', fatal=False) \ |
77 | - or 'grizzly' |
78 | + or 'icehouse' |
79 | configs = templating.OSConfigRenderer(templates_dir=TEMPLATES, |
80 | openstack_release=release) |
81 | |
82 | |
83 | === modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py' |
84 | --- hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-02-26 11:10:15 +0000 |
85 | +++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-07-01 21:31:48 +0000 |
86 | @@ -44,6 +44,7 @@ |
87 | ERROR, |
88 | WARNING, |
89 | unit_get, |
90 | + is_leader as juju_is_leader |
91 | ) |
92 | from charmhelpers.core.decorators import ( |
93 | retry_on_exception, |
94 | @@ -52,6 +53,8 @@ |
95 | bool_from_string, |
96 | ) |
97 | |
98 | +DC_RESOURCE_NAME = 'DC' |
99 | + |
100 | |
101 | class HAIncompleteConfig(Exception): |
102 | pass |
103 | @@ -61,17 +64,30 @@ |
104 | pass |
105 | |
106 | |
107 | +class CRMDCNotFound(Exception): |
108 | + pass |
109 | + |
110 | + |
111 | def is_elected_leader(resource): |
112 | """ |
113 | Returns True if the charm executing this is the elected cluster leader. |
114 | |
115 | It relies on two mechanisms to determine leadership: |
116 | - 1. If the charm is part of a corosync cluster, call corosync to |
117 | + 1. If juju is sufficiently new and leadership election is supported, |
118 | + the is_leader command will be used. |
119 | + 2. If the charm is part of a corosync cluster, call corosync to |
120 | determine leadership. |
121 | - 2. If the charm is not part of a corosync cluster, the leader is |
122 | + 3. If the charm is not part of a corosync cluster, the leader is |
123 | determined as being "the alive unit with the lowest unit numer". In |
124 | other words, the oldest surviving unit. |
125 | """ |
126 | + try: |
127 | + return juju_is_leader() |
128 | + except NotImplementedError: |
129 | + log('Juju leadership election feature not enabled' |
130 | + ', using fallback support', |
131 | + level=WARNING) |
132 | + |
133 | if is_clustered(): |
134 | if not is_crm_leader(resource): |
135 | log('Deferring action to CRM leader.', level=INFO) |
136 | @@ -95,7 +111,33 @@ |
137 | return False |
138 | |
139 | |
140 | -@retry_on_exception(5, base_delay=2, exc_type=CRMResourceNotFound) |
141 | +def is_crm_dc(): |
142 | + """ |
143 | + Determine leadership by querying the pacemaker Designated Controller |
144 | + """ |
145 | + cmd = ['crm', 'status'] |
146 | + try: |
147 | + status = subprocess.check_output(cmd, stderr=subprocess.STDOUT) |
148 | + if not isinstance(status, six.text_type): |
149 | + status = six.text_type(status, "utf-8") |
150 | + except subprocess.CalledProcessError as ex: |
151 | + raise CRMDCNotFound(str(ex)) |
152 | + |
153 | + current_dc = '' |
154 | + for line in status.split('\n'): |
155 | + if line.startswith('Current DC'): |
156 | + # Current DC: juju-lytrusty-machine-2 (168108163) - partition with quorum |
157 | + current_dc = line.split(':')[1].split()[0] |
158 | + if current_dc == get_unit_hostname(): |
159 | + return True |
160 | + elif current_dc == 'NONE': |
161 | + raise CRMDCNotFound('Current DC: NONE') |
162 | + |
163 | + return False |
164 | + |
165 | + |
166 | +@retry_on_exception(5, base_delay=2, |
167 | + exc_type=(CRMResourceNotFound, CRMDCNotFound)) |
168 | def is_crm_leader(resource, retry=False): |
169 | """ |
170 | Returns True if the charm calling this is the elected corosync leader, |
171 | @@ -104,6 +146,8 @@ |
172 | We allow this operation to be retried to avoid the possibility of getting a |
173 | false negative. See LP #1396246 for more info. |
174 | """ |
175 | + if resource == DC_RESOURCE_NAME: |
176 | + return is_crm_dc() |
177 | cmd = ['crm', 'resource', 'show', resource] |
178 | try: |
179 | status = subprocess.check_output(cmd, stderr=subprocess.STDOUT) |
180 | |
181 | === modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py' |
182 | --- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-04-23 14:55:47 +0000 |
183 | +++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-01 21:31:48 +0000 |
184 | @@ -110,7 +110,8 @@ |
185 | (self.precise_essex, self.precise_folsom, self.precise_grizzly, |
186 | self.precise_havana, self.precise_icehouse, |
187 | self.trusty_icehouse, self.trusty_juno, self.utopic_juno, |
188 | - self.trusty_kilo, self.vivid_kilo) = range(10) |
189 | + self.trusty_kilo, self.vivid_kilo, self.trusty_liberty, |
190 | + self.wily_liberty) = range(12) |
191 | |
192 | releases = { |
193 | ('precise', None): self.precise_essex, |
194 | @@ -121,8 +122,10 @@ |
195 | ('trusty', None): self.trusty_icehouse, |
196 | ('trusty', 'cloud:trusty-juno'): self.trusty_juno, |
197 | ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo, |
198 | + ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty, |
199 | ('utopic', None): self.utopic_juno, |
200 | - ('vivid', None): self.vivid_kilo} |
201 | + ('vivid', None): self.vivid_kilo, |
202 | + ('wily', None): self.wily_liberty} |
203 | return releases[(self.series, self.openstack)] |
204 | |
205 | def _get_openstack_release_string(self): |
206 | @@ -138,6 +141,7 @@ |
207 | ('trusty', 'icehouse'), |
208 | ('utopic', 'juno'), |
209 | ('vivid', 'kilo'), |
210 | + ('wily', 'liberty'), |
211 | ]) |
212 | if self.openstack: |
213 | os_origin = self.openstack.split(':')[1] |
214 | |
215 | === modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py' |
216 | --- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-01-26 10:55:38 +0000 |
217 | +++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-01 21:31:48 +0000 |
218 | @@ -16,15 +16,15 @@ |
219 | |
220 | import logging |
221 | import os |
222 | +import six |
223 | import time |
224 | import urllib |
225 | |
226 | import glanceclient.v1.client as glance_client |
227 | +import heatclient.v1.client as heat_client |
228 | import keystoneclient.v2_0 as keystone_client |
229 | import novaclient.v1_1.client as nova_client |
230 | |
231 | -import six |
232 | - |
233 | from charmhelpers.contrib.amulet.utils import ( |
234 | AmuletUtils |
235 | ) |
236 | @@ -37,7 +37,7 @@ |
237 | """OpenStack amulet utilities. |
238 | |
239 | This class inherits from AmuletUtils and has additional support |
240 | - that is specifically for use by OpenStack charms. |
241 | + that is specifically for use by OpenStack charm tests. |
242 | """ |
243 | |
244 | def __init__(self, log_level=ERROR): |
245 | @@ -51,6 +51,8 @@ |
246 | Validate actual endpoint data vs expected endpoint data. The ports |
247 | are used to find the matching endpoint. |
248 | """ |
249 | + self.log.debug('Validating endpoint data...') |
250 | + self.log.debug('actual: {}'.format(repr(endpoints))) |
251 | found = False |
252 | for ep in endpoints: |
253 | self.log.debug('endpoint: {}'.format(repr(ep))) |
254 | @@ -77,6 +79,7 @@ |
255 | Validate a list of actual service catalog endpoints vs a list of |
256 | expected service catalog endpoints. |
257 | """ |
258 | + self.log.debug('Validating service catalog endpoint data...') |
259 | self.log.debug('actual: {}'.format(repr(actual))) |
260 | for k, v in six.iteritems(expected): |
261 | if k in actual: |
262 | @@ -93,6 +96,7 @@ |
263 | Validate a list of actual tenant data vs list of expected tenant |
264 | data. |
265 | """ |
266 | + self.log.debug('Validating tenant data...') |
267 | self.log.debug('actual: {}'.format(repr(actual))) |
268 | for e in expected: |
269 | found = False |
270 | @@ -114,6 +118,7 @@ |
271 | Validate a list of actual role data vs a list of expected role |
272 | data. |
273 | """ |
274 | + self.log.debug('Validating role data...') |
275 | self.log.debug('actual: {}'.format(repr(actual))) |
276 | for e in expected: |
277 | found = False |
278 | @@ -134,6 +139,7 @@ |
279 | Validate a list of actual user data vs a list of expected user |
280 | data. |
281 | """ |
282 | + self.log.debug('Validating user data...') |
283 | self.log.debug('actual: {}'.format(repr(actual))) |
284 | for e in expected: |
285 | found = False |
286 | @@ -155,17 +161,20 @@ |
287 | |
288 | Validate a list of actual flavors vs a list of expected flavors. |
289 | """ |
290 | + self.log.debug('Validating flavor data...') |
291 | self.log.debug('actual: {}'.format(repr(actual))) |
292 | act = [a.name for a in actual] |
293 | return self._validate_list_data(expected, act) |
294 | |
295 | def tenant_exists(self, keystone, tenant): |
296 | """Return True if tenant exists.""" |
297 | + self.log.debug('Checking if tenant exists ({})...'.format(tenant)) |
298 | return tenant in [t.name for t in keystone.tenants.list()] |
299 | |
300 | def authenticate_keystone_admin(self, keystone_sentry, user, password, |
301 | tenant): |
302 | """Authenticates admin user with the keystone admin endpoint.""" |
303 | + self.log.debug('Authenticating keystone admin...') |
304 | unit = keystone_sentry |
305 | service_ip = unit.relation('shared-db', |
306 | 'mysql:shared-db')['private-address'] |
307 | @@ -175,6 +184,7 @@ |
308 | |
309 | def authenticate_keystone_user(self, keystone, user, password, tenant): |
310 | """Authenticates a regular user with the keystone public endpoint.""" |
311 | + self.log.debug('Authenticating keystone user ({})...'.format(user)) |
312 | ep = keystone.service_catalog.url_for(service_type='identity', |
313 | endpoint_type='publicURL') |
314 | return keystone_client.Client(username=user, password=password, |
315 | @@ -182,12 +192,21 @@ |
316 | |
317 | def authenticate_glance_admin(self, keystone): |
318 | """Authenticates admin user with glance.""" |
319 | + self.log.debug('Authenticating glance admin...') |
320 | ep = keystone.service_catalog.url_for(service_type='image', |
321 | endpoint_type='adminURL') |
322 | return glance_client.Client(ep, token=keystone.auth_token) |
323 | |
324 | + def authenticate_heat_admin(self, keystone): |
325 | + """Authenticates the admin user with heat.""" |
326 | + self.log.debug('Authenticating heat admin...') |
327 | + ep = keystone.service_catalog.url_for(service_type='orchestration', |
328 | + endpoint_type='publicURL') |
329 | + return heat_client.Client(endpoint=ep, token=keystone.auth_token) |
330 | + |
331 | def authenticate_nova_user(self, keystone, user, password, tenant): |
332 | """Authenticates a regular user with nova-api.""" |
333 | + self.log.debug('Authenticating nova user ({})...'.format(user)) |
334 | ep = keystone.service_catalog.url_for(service_type='identity', |
335 | endpoint_type='publicURL') |
336 | return nova_client.Client(username=user, api_key=password, |
337 | @@ -195,6 +214,7 @@ |
338 | |
339 | def create_cirros_image(self, glance, image_name): |
340 | """Download the latest cirros image and upload it to glance.""" |
341 | + self.log.debug('Creating glance image ({})...'.format(image_name)) |
342 | http_proxy = os.getenv('AMULET_HTTP_PROXY') |
343 | self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy)) |
344 | if http_proxy: |
345 | @@ -235,6 +255,11 @@ |
346 | |
347 | def delete_image(self, glance, image): |
348 | """Delete the specified image.""" |
349 | + |
350 | + # /!\ DEPRECATION WARNING |
351 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
352 | + 'delete_resource instead of delete_image.') |
353 | + self.log.debug('Deleting glance image ({})...'.format(image)) |
354 | num_before = len(list(glance.images.list())) |
355 | glance.images.delete(image) |
356 | |
357 | @@ -254,6 +279,8 @@ |
358 | |
359 | def create_instance(self, nova, image_name, instance_name, flavor): |
360 | """Create the specified instance.""" |
361 | + self.log.debug('Creating instance ' |
362 | + '({}|{}|{})'.format(instance_name, image_name, flavor)) |
363 | image = nova.images.find(name=image_name) |
364 | flavor = nova.flavors.find(name=flavor) |
365 | instance = nova.servers.create(name=instance_name, image=image, |
366 | @@ -276,6 +303,11 @@ |
367 | |
368 | def delete_instance(self, nova, instance): |
369 | """Delete the specified instance.""" |
370 | + |
371 | + # /!\ DEPRECATION WARNING |
372 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
373 | + 'delete_resource instead of delete_instance.') |
374 | + self.log.debug('Deleting instance ({})...'.format(instance)) |
375 | num_before = len(list(nova.servers.list())) |
376 | nova.servers.delete(instance) |
377 | |
378 | @@ -292,3 +324,90 @@ |
379 | return False |
380 | |
381 | return True |
382 | + |
383 | + def create_or_get_keypair(self, nova, keypair_name="testkey"): |
384 | + """Create a new keypair, or return pointer if it already exists.""" |
385 | + try: |
386 | + _keypair = nova.keypairs.get(keypair_name) |
387 | + self.log.debug('Keypair ({}) already exists, ' |
388 | + 'using it.'.format(keypair_name)) |
389 | + return _keypair |
390 | + except: |
391 | + self.log.debug('Keypair ({}) does not exist, ' |
392 | + 'creating it.'.format(keypair_name)) |
393 | + |
394 | + _keypair = nova.keypairs.create(name=keypair_name) |
395 | + return _keypair |
396 | + |
397 | + def delete_resource(self, resource, resource_id, |
398 | + msg="resource", max_wait=120): |
399 | + """Delete one openstack resource, such as one instance, keypair, |
400 | + image, volume, stack, etc., and confirm deletion within max wait time. |
401 | + |
402 | + :param resource: pointer to os resource type, ex:glance_client.images |
403 | + :param resource_id: unique name or id for the openstack resource |
404 | + :param msg: text to identify purpose in logging |
405 | + :param max_wait: maximum wait time in seconds |
406 | + :returns: True if successful, otherwise False |
407 | + """ |
408 | + num_before = len(list(resource.list())) |
409 | + resource.delete(resource_id) |
410 | + |
411 | + tries = 0 |
412 | + num_after = len(list(resource.list())) |
413 | + while num_after != (num_before - 1) and tries < (max_wait / 4): |
414 | + self.log.debug('{} delete check: ' |
415 | + '{} [{}:{}] {}'.format(msg, tries, |
416 | + num_before, |
417 | + num_after, |
418 | + resource_id)) |
419 | + time.sleep(4) |
420 | + num_after = len(list(resource.list())) |
421 | + tries += 1 |
422 | + |
423 | + self.log.debug('{}: expected, actual count = {}, ' |
424 | + '{}'.format(msg, num_before - 1, num_after)) |
425 | + |
426 | + if num_after == (num_before - 1): |
427 | + return True |
428 | + else: |
429 | + self.log.error('{} delete timed out'.format(msg)) |
430 | + return False |
431 | + |
432 | + def resource_reaches_status(self, resource, resource_id, |
433 | + expected_stat='available', |
434 | + msg='resource', max_wait=120): |
435 | + """Wait for an openstack resources status to reach an |
436 | + expected status within a specified time. Useful to confirm that |
437 | + nova instances, cinder vols, snapshots, glance images, heat stacks |
438 | + and other resources eventually reach the expected status. |
439 | + |
440 | + :param resource: pointer to os resource type, ex: heat_client.stacks |
441 | + :param resource_id: unique id for the openstack resource |
442 | + :param expected_stat: status to expect resource to reach |
443 | + :param msg: text to identify purpose in logging |
444 | + :param max_wait: maximum wait time in seconds |
445 | + :returns: True if successful, False if status is not reached |
446 | + """ |
447 | + |
448 | + tries = 0 |
449 | + resource_stat = resource.get(resource_id).status |
450 | + while resource_stat != expected_stat and tries < (max_wait / 4): |
451 | + self.log.debug('{} status check: ' |
452 | + '{} [{}:{}] {}'.format(msg, tries, |
453 | + resource_stat, |
454 | + expected_stat, |
455 | + resource_id)) |
456 | + time.sleep(4) |
457 | + resource_stat = resource.get(resource_id).status |
458 | + tries += 1 |
459 | + |
460 | + self.log.debug('{}: expected, actual status = {}, ' |
461 | + '{}'.format(msg, resource_stat, expected_stat)) |
462 | + |
463 | + if resource_stat == expected_stat: |
464 | + return True |
465 | + else: |
466 | + self.log.debug('{} never reached expected status: ' |
467 | + '{}'.format(resource_id, expected_stat)) |
468 | + return False |
469 | |
470 | === modified file 'hooks/charmhelpers/contrib/openstack/context.py' |
471 | --- hooks/charmhelpers/contrib/openstack/context.py 2015-04-16 21:32:27 +0000 |
472 | +++ hooks/charmhelpers/contrib/openstack/context.py 2015-07-01 21:31:48 +0000 |
473 | @@ -240,7 +240,7 @@ |
474 | if self.relation_prefix: |
475 | password_setting = self.relation_prefix + '_password' |
476 | |
477 | - for rid in relation_ids('shared-db'): |
478 | + for rid in relation_ids(self.interfaces[0]): |
479 | for unit in related_units(rid): |
480 | rdata = relation_get(rid=rid, unit=unit) |
481 | host = rdata.get('db_host') |
482 | |
483 | === modified file 'hooks/charmhelpers/contrib/openstack/ip.py' |
484 | --- hooks/charmhelpers/contrib/openstack/ip.py 2015-02-26 11:10:15 +0000 |
485 | +++ hooks/charmhelpers/contrib/openstack/ip.py 2015-07-01 21:31:48 +0000 |
486 | @@ -17,6 +17,7 @@ |
487 | from charmhelpers.core.hookenv import ( |
488 | config, |
489 | unit_get, |
490 | + service_name, |
491 | ) |
492 | from charmhelpers.contrib.network.ip import ( |
493 | get_address_in_network, |
494 | @@ -26,8 +27,6 @@ |
495 | ) |
496 | from charmhelpers.contrib.hahelpers.cluster import is_clustered |
497 | |
498 | -from functools import partial |
499 | - |
500 | PUBLIC = 'public' |
501 | INTERNAL = 'int' |
502 | ADMIN = 'admin' |
503 | @@ -35,15 +34,18 @@ |
504 | ADDRESS_MAP = { |
505 | PUBLIC: { |
506 | 'config': 'os-public-network', |
507 | - 'fallback': 'public-address' |
508 | + 'fallback': 'public-address', |
509 | + 'override': 'os-public-hostname', |
510 | }, |
511 | INTERNAL: { |
512 | 'config': 'os-internal-network', |
513 | - 'fallback': 'private-address' |
514 | + 'fallback': 'private-address', |
515 | + 'override': 'os-internal-hostname', |
516 | }, |
517 | ADMIN: { |
518 | 'config': 'os-admin-network', |
519 | - 'fallback': 'private-address' |
520 | + 'fallback': 'private-address', |
521 | + 'override': 'os-admin-hostname', |
522 | } |
523 | } |
524 | |
525 | @@ -57,15 +59,50 @@ |
526 | :param endpoint_type: str endpoint type to resolve. |
527 | :param returns: str base URL for services on the current service unit. |
528 | """ |
529 | - scheme = 'http' |
530 | - if 'https' in configs.complete_contexts(): |
531 | - scheme = 'https' |
532 | + scheme = _get_scheme(configs) |
533 | + |
534 | address = resolve_address(endpoint_type) |
535 | if is_ipv6(address): |
536 | address = "[{}]".format(address) |
537 | + |
538 | return '%s://%s' % (scheme, address) |
539 | |
540 | |
541 | +def _get_scheme(configs): |
542 | + """Returns the scheme to use for the url (either http or https) |
543 | + depending upon whether https is in the configs value. |
544 | + |
545 | + :param configs: OSTemplateRenderer config templating object to inspect |
546 | + for a complete https context. |
547 | + :returns: either 'http' or 'https' depending on whether https is |
548 | + configured within the configs context. |
549 | + """ |
550 | + scheme = 'http' |
551 | + if configs and 'https' in configs.complete_contexts(): |
552 | + scheme = 'https' |
553 | + return scheme |
554 | + |
555 | + |
556 | +def _get_address_override(endpoint_type=PUBLIC): |
557 | + """Returns any address overrides that the user has defined based on the |
558 | + endpoint type. |
559 | + |
560 | + Note: this function allows for the service name to be inserted into the |
561 | + address if the user specifies {service_name}.somehost.org. |
562 | + |
563 | + :param endpoint_type: the type of endpoint to retrieve the override |
564 | + value for. |
565 | + :returns: any endpoint address or hostname that the user has overridden |
566 | + or None if an override is not present. |
567 | + """ |
568 | + override_key = ADDRESS_MAP[endpoint_type]['override'] |
569 | + addr_override = config(override_key) |
570 | + if not addr_override: |
571 | + return None |
572 | + else: |
573 | + return addr_override.format(service_name=service_name()) |
574 | + |
575 | + |
576 | def resolve_address(endpoint_type=PUBLIC): |
577 | """Return unit address depending on net config. |
578 | |
579 | @@ -77,7 +114,10 @@ |
580 | |
581 | :param endpoint_type: Network endpoing type |
582 | """ |
583 | - resolved_address = None |
584 | + resolved_address = _get_address_override(endpoint_type) |
585 | + if resolved_address: |
586 | + return resolved_address |
587 | + |
588 | vips = config('vip') |
589 | if vips: |
590 | vips = vips.split() |
591 | @@ -109,38 +149,3 @@ |
592 | "clustered=%s)" % (net_type, clustered)) |
593 | |
594 | return resolved_address |
595 | - |
596 | - |
597 | -def endpoint_url(configs, url_template, port, endpoint_type=PUBLIC, |
598 | - override=None): |
599 | - """Returns the correct endpoint URL to advertise to Keystone. |
600 | - |
601 | - This method provides the correct endpoint URL which should be advertised to |
602 | - the keystone charm for endpoint creation. This method allows for the url to |
603 | - be overridden to force a keystone endpoint to have specific URL for any of |
604 | - the defined scopes (admin, internal, public). |
605 | - |
606 | - :param configs: OSTemplateRenderer config templating object to inspect |
607 | - for a complete https context. |
608 | - :param url_template: str format string for creating the url template. Only |
609 | - two values will be passed - the scheme+hostname |
610 | - returned by the canonical_url and the port. |
611 | - :param endpoint_type: str endpoint type to resolve. |
612 | - :param override: str the name of the config option which overrides the |
613 | - endpoint URL defined by the charm itself. None will |
614 | - disable any overrides (default). |
615 | - """ |
616 | - if override: |
617 | - # Return any user-defined overrides for the keystone endpoint URL. |
618 | - user_value = config(override) |
619 | - if user_value: |
620 | - return user_value.strip() |
621 | - |
622 | - return url_template % (canonical_url(configs, endpoint_type), port) |
623 | - |
624 | - |
625 | -public_endpoint = partial(endpoint_url, endpoint_type=PUBLIC) |
626 | - |
627 | -internal_endpoint = partial(endpoint_url, endpoint_type=INTERNAL) |
628 | - |
629 | -admin_endpoint = partial(endpoint_url, endpoint_type=ADMIN) |
630 | |
631 | === modified file 'hooks/charmhelpers/contrib/openstack/neutron.py' |
632 | --- hooks/charmhelpers/contrib/openstack/neutron.py 2015-04-16 10:29:22 +0000 |
633 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2015-07-01 21:31:48 +0000 |
634 | @@ -172,14 +172,16 @@ |
635 | 'services': ['calico-felix', |
636 | 'bird', |
637 | 'neutron-dhcp-agent', |
638 | - 'nova-api-metadata'], |
639 | + 'nova-api-metadata', |
640 | + 'etcd'], |
641 | 'packages': [[headers_package()] + determine_dkms_package(), |
642 | ['calico-compute', |
643 | 'bird', |
644 | 'neutron-dhcp-agent', |
645 | - 'nova-api-metadata']], |
646 | - 'server_packages': ['neutron-server', 'calico-control'], |
647 | - 'server_services': ['neutron-server'] |
648 | + 'nova-api-metadata', |
649 | + 'etcd']], |
650 | + 'server_packages': ['neutron-server', 'calico-control', 'etcd'], |
651 | + 'server_services': ['neutron-server', 'etcd'] |
652 | }, |
653 | 'vsp': { |
654 | 'config': '/etc/neutron/plugins/nuage/nuage_plugin.ini', |
655 | @@ -256,11 +258,14 @@ |
656 | def parse_mappings(mappings): |
657 | parsed = {} |
658 | if mappings: |
659 | - mappings = mappings.split(' ') |
660 | + mappings = mappings.split() |
661 | for m in mappings: |
662 | p = m.partition(':') |
663 | - if p[1] == ':': |
664 | - parsed[p[0].strip()] = p[2].strip() |
665 | + key = p[0].strip() |
666 | + if p[1]: |
667 | + parsed[key] = p[2].strip() |
668 | + else: |
669 | + parsed[key] = '' |
670 | |
671 | return parsed |
672 | |
673 | @@ -283,13 +288,13 @@ |
674 | Returns dict of the form {bridge:port}. |
675 | """ |
676 | _mappings = parse_mappings(mappings) |
677 | - if not _mappings: |
678 | + if not _mappings or list(_mappings.values()) == ['']: |
679 | if not mappings: |
680 | return {} |
681 | |
682 | # For backwards-compatibility we need to support port-only provided in |
683 | # config. |
684 | - _mappings = {default_bridge: mappings.split(' ')[0]} |
685 | + _mappings = {default_bridge: mappings.split()[0]} |
686 | |
687 | bridges = _mappings.keys() |
688 | ports = _mappings.values() |
689 | @@ -309,6 +314,8 @@ |
690 | |
691 | Mappings must be a space-delimited list of provider:start:end mappings. |
692 | |
693 | + The start:end range is optional and may be omitted. |
694 | + |
695 | Returns dict of the form {provider: (start, end)}. |
696 | """ |
697 | _mappings = parse_mappings(mappings) |
698 | |
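The parse_mappings changes above switch from split(' ') to split(), tolerating repeated whitespace, and now record keys that arrive without a value as empty strings so parse_bridge_mappings can detect the legacy port-only format. A standalone sketch of the updated behavior (not the charm-helpers import itself):

```python
def parse_mappings(mappings):
    """Parse space-delimited key:value mappings into a dict.

    Keys given without a value map to the empty string,
    mirroring the updated helper.
    """
    parsed = {}
    if mappings:
        # split() with no argument collapses runs of whitespace,
        # unlike split(' ') which yields empty tokens.
        for m in mappings.split():
            key, sep, value = m.partition(':')
            parsed[key.strip()] = value.strip() if sep else ''
    return parsed
```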
699 | === modified file 'hooks/charmhelpers/contrib/openstack/utils.py' |
700 | --- hooks/charmhelpers/contrib/openstack/utils.py 2015-04-16 21:32:27 +0000 |
701 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2015-07-01 21:31:48 +0000 |
702 | @@ -53,9 +53,13 @@ |
703 | get_ipv6_addr |
704 | ) |
705 | |
706 | +from charmhelpers.contrib.python.packages import ( |
707 | + pip_create_virtualenv, |
708 | + pip_install, |
709 | +) |
710 | + |
711 | from charmhelpers.core.host import lsb_release, mounts, umount |
712 | from charmhelpers.fetch import apt_install, apt_cache, install_remote |
713 | -from charmhelpers.contrib.python.packages import pip_install |
714 | from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk |
715 | from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device |
716 | |
717 | @@ -75,6 +79,7 @@ |
718 | ('trusty', 'icehouse'), |
719 | ('utopic', 'juno'), |
720 | ('vivid', 'kilo'), |
721 | + ('wily', 'liberty'), |
722 | ]) |
723 | |
724 | |
725 | @@ -87,6 +92,7 @@ |
726 | ('2014.1', 'icehouse'), |
727 | ('2014.2', 'juno'), |
728 | ('2015.1', 'kilo'), |
729 | + ('2015.2', 'liberty'), |
730 | ]) |
731 | |
732 | # The ugly duckling |
733 | @@ -109,6 +115,7 @@ |
734 | ('2.2.0', 'juno'), |
735 | ('2.2.1', 'kilo'), |
736 | ('2.2.2', 'kilo'), |
737 | + ('2.3.0', 'liberty'), |
738 | ]) |
739 | |
740 | DEFAULT_LOOPBACK_SIZE = '5G' |
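The release tables being extended above are OrderedDicts, so insertion order doubles as chronological order for adjacent-release lookups elsewhere in utils.py. A hypothetical illustration of that idea (the function name here is illustrative, not part of the helper):

```python
from collections import OrderedDict

UBUNTU_OPENSTACK_RELEASE = OrderedDict([
    ('trusty', 'icehouse'),
    ('utopic', 'juno'),
    ('vivid', 'kilo'),
    ('wily', 'liberty'),
])


def next_openstack_release(series):
    """Return the OpenStack codename paired with the series after
    the given one, or None at the end of the table (illustrative)."""
    names = list(UBUNTU_OPENSTACK_RELEASE)
    idx = names.index(series)
    if idx + 1 < len(names):
        return UBUNTU_OPENSTACK_RELEASE[names[idx + 1]]
    return None
```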
741 | @@ -317,6 +324,9 @@ |
742 | 'kilo': 'trusty-updates/kilo', |
743 | 'kilo/updates': 'trusty-updates/kilo', |
744 | 'kilo/proposed': 'trusty-proposed/kilo', |
745 | + 'liberty': 'trusty-updates/liberty', |
746 | + 'liberty/updates': 'trusty-updates/liberty', |
747 | + 'liberty/proposed': 'trusty-proposed/liberty', |
748 | } |
749 | |
750 | try: |
751 | @@ -497,7 +507,17 @@ |
752 | requirements_dir = None |
753 | |
754 | |
755 | -def git_clone_and_install(projects_yaml, core_project): |
756 | +def _git_yaml_load(projects_yaml): |
757 | + """ |
758 | + Load the specified yaml into a dictionary. |
759 | + """ |
760 | + if not projects_yaml: |
761 | + return None |
762 | + |
763 | + return yaml.load(projects_yaml) |
764 | + |
765 | + |
766 | +def git_clone_and_install(projects_yaml, core_project, depth=1): |
767 | """ |
768 | Clone/install all specified OpenStack repositories. |
769 | |
770 | @@ -510,23 +530,22 @@ |
771 | repository: 'git://git.openstack.org/openstack/requirements.git', |
772 | branch: 'stable/icehouse'} |
773 | directory: /mnt/openstack-git |
774 | - http_proxy: http://squid.internal:3128 |
775 | - https_proxy: https://squid.internal:3128 |
776 | + http_proxy: squid-proxy-url |
777 | + https_proxy: squid-proxy-url |
778 | |
779 | The directory, http_proxy, and https_proxy keys are optional. |
780 | """ |
781 | global requirements_dir |
782 | parent_dir = '/mnt/openstack-git' |
783 | - |
784 | - if not projects_yaml: |
785 | - return |
786 | - |
787 | - projects = yaml.load(projects_yaml) |
788 | + http_proxy = None |
789 | + |
790 | + projects = _git_yaml_load(projects_yaml) |
791 | _git_validate_projects_yaml(projects, core_project) |
792 | |
793 | old_environ = dict(os.environ) |
794 | |
795 | if 'http_proxy' in projects.keys(): |
796 | + http_proxy = projects['http_proxy'] |
797 | os.environ['http_proxy'] = projects['http_proxy'] |
798 | if 'https_proxy' in projects.keys(): |
799 | os.environ['https_proxy'] = projects['https_proxy'] |
800 | @@ -534,15 +553,24 @@ |
801 | if 'directory' in projects.keys(): |
802 | parent_dir = projects['directory'] |
803 | |
804 | + pip_create_virtualenv(os.path.join(parent_dir, 'venv')) |
805 | + |
806 | + # Upgrade setuptools from default virtualenv version. The default version |
807 | + # in trusty breaks update.py in global requirements master branch. |
808 | + pip_install('setuptools', upgrade=True, proxy=http_proxy, |
809 | + venv=os.path.join(parent_dir, 'venv')) |
810 | + |
811 | for p in projects['repositories']: |
812 | repo = p['repository'] |
813 | branch = p['branch'] |
814 | if p['name'] == 'requirements': |
815 | - repo_dir = _git_clone_and_install_single(repo, branch, parent_dir, |
816 | + repo_dir = _git_clone_and_install_single(repo, branch, depth, |
817 | + parent_dir, http_proxy, |
818 | update_requirements=False) |
819 | requirements_dir = repo_dir |
820 | else: |
821 | - repo_dir = _git_clone_and_install_single(repo, branch, parent_dir, |
822 | + repo_dir = _git_clone_and_install_single(repo, branch, depth, |
823 | + parent_dir, http_proxy, |
824 | update_requirements=True) |
825 | |
826 | os.environ = old_environ |
827 | @@ -574,7 +602,8 @@ |
828 | error_out('openstack-origin-git key \'{}\' is missing'.format(key)) |
829 | |
830 | |
831 | -def _git_clone_and_install_single(repo, branch, parent_dir, update_requirements): |
832 | +def _git_clone_and_install_single(repo, branch, depth, parent_dir, http_proxy, |
833 | + update_requirements): |
834 | """ |
835 | Clone and install a single git repository. |
836 | """ |
837 | @@ -587,23 +616,29 @@ |
838 | |
839 | if not os.path.exists(dest_dir): |
840 | juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch)) |
841 | - repo_dir = install_remote(repo, dest=parent_dir, branch=branch) |
842 | + repo_dir = install_remote(repo, dest=parent_dir, branch=branch, |
843 | + depth=depth) |
844 | else: |
845 | repo_dir = dest_dir |
846 | |
847 | + venv = os.path.join(parent_dir, 'venv') |
848 | + |
849 | if update_requirements: |
850 | if not requirements_dir: |
851 | error_out('requirements repo must be cloned before ' |
852 | 'updating from global requirements.') |
853 | - _git_update_requirements(repo_dir, requirements_dir) |
854 | + _git_update_requirements(venv, repo_dir, requirements_dir) |
855 | |
856 | juju_log('Installing git repo from dir: {}'.format(repo_dir)) |
857 | - pip_install(repo_dir) |
858 | + if http_proxy: |
859 | + pip_install(repo_dir, proxy=http_proxy, venv=venv) |
860 | + else: |
861 | + pip_install(repo_dir, venv=venv) |
862 | |
863 | return repo_dir |
864 | |
865 | |
866 | -def _git_update_requirements(package_dir, reqs_dir): |
867 | +def _git_update_requirements(venv, package_dir, reqs_dir): |
868 | """ |
869 | Update from global requirements. |
870 | |
871 | @@ -612,25 +647,38 @@ |
872 | """ |
873 | orig_dir = os.getcwd() |
874 | os.chdir(reqs_dir) |
875 | - cmd = ['python', 'update.py', package_dir] |
876 | + python = os.path.join(venv, 'bin/python') |
877 | + cmd = [python, 'update.py', package_dir] |
878 | try: |
879 | subprocess.check_call(cmd) |
880 | except subprocess.CalledProcessError: |
881 | package = os.path.basename(package_dir) |
882 | - error_out("Error updating {} from global-requirements.txt".format(package)) |
883 | + error_out("Error updating {} from " |
884 | + "global-requirements.txt".format(package)) |
885 | os.chdir(orig_dir) |
886 | |
887 | |
888 | +def git_pip_venv_dir(projects_yaml): |
889 | + """ |
890 | + Return the pip virtualenv path. |
891 | + """ |
892 | + parent_dir = '/mnt/openstack-git' |
893 | + |
894 | + projects = _git_yaml_load(projects_yaml) |
895 | + |
896 | + if 'directory' in projects.keys(): |
897 | + parent_dir = projects['directory'] |
898 | + |
899 | + return os.path.join(parent_dir, 'venv') |
900 | + |
901 | + |
902 | def git_src_dir(projects_yaml, project): |
903 | """ |
904 | Return the directory where the specified project's source is located. |
905 | """ |
906 | parent_dir = '/mnt/openstack-git' |
907 | |
908 | - if not projects_yaml: |
909 | - return |
910 | - |
911 | - projects = yaml.load(projects_yaml) |
912 | + projects = _git_yaml_load(projects_yaml) |
913 | |
914 | if 'directory' in projects.keys(): |
915 | parent_dir = projects['directory'] |
916 | @@ -640,3 +688,15 @@ |
917 | return os.path.join(parent_dir, os.path.basename(p['repository'])) |
918 | |
919 | return None |
920 | + |
921 | + |
922 | +def git_yaml_value(projects_yaml, key): |
923 | + """ |
924 | + Return the value in projects_yaml for the specified key. |
925 | + """ |
926 | + projects = _git_yaml_load(projects_yaml) |
927 | + |
928 | + if key in projects.keys(): |
929 | + return projects[key] |
930 | + |
931 | + return None |
932 | |
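git_pip_venv_dir repeats the parent-directory convention used throughout the git helpers: default to /mnt/openstack-git unless the YAML supplies a directory key. A simplified sketch operating on an already-parsed dict (the real helper first parses the projects_yaml string via _git_yaml_load):

```python
import os


def git_pip_venv_dir(projects):
    """Return the virtualenv path for a parsed openstack-origin-git
    dict: an optional 'directory' key overrides the default parent.
    (Sketch only: the real helper takes the raw YAML string.)
    """
    parent_dir = projects.get('directory', '/mnt/openstack-git')
    return os.path.join(parent_dir, 'venv')
```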
933 | === modified file 'hooks/charmhelpers/contrib/python/packages.py' |
934 | --- hooks/charmhelpers/contrib/python/packages.py 2015-02-26 11:10:15 +0000 |
935 | +++ hooks/charmhelpers/contrib/python/packages.py 2015-07-01 21:31:48 +0000 |
936 | @@ -17,8 +17,11 @@ |
937 | # You should have received a copy of the GNU Lesser General Public License |
938 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
939 | |
940 | +import os |
941 | +import subprocess |
942 | + |
943 | from charmhelpers.fetch import apt_install, apt_update |
944 | -from charmhelpers.core.hookenv import log |
945 | +from charmhelpers.core.hookenv import charm_dir, log |
946 | |
947 | try: |
948 | from pip import main as pip_execute |
949 | @@ -33,6 +36,8 @@ |
950 | def parse_options(given, available): |
951 | """Given a set of options, check if available""" |
952 | for key, value in sorted(given.items()): |
953 | + if not value: |
954 | + continue |
955 | if key in available: |
956 | yield "--{0}={1}".format(key, value) |
957 | |
958 | @@ -51,11 +56,15 @@ |
959 | pip_execute(command) |
960 | |
961 | |
962 | -def pip_install(package, fatal=False, upgrade=False, **options): |
963 | +def pip_install(package, fatal=False, upgrade=False, venv=None, **options): |
964 | """Install a python package""" |
965 | - command = ["install"] |
966 | + if venv: |
967 | + venv_python = os.path.join(venv, 'bin/pip') |
968 | + command = [venv_python, "install"] |
969 | + else: |
970 | + command = ["install"] |
971 | |
972 | - available_options = ('proxy', 'src', 'log', "index-url", ) |
973 | + available_options = ('proxy', 'src', 'log', 'index-url', ) |
974 | for option in parse_options(options, available_options): |
975 | command.append(option) |
976 | |
977 | @@ -69,7 +78,10 @@ |
978 | |
979 | log("Installing {} package with options: {}".format(package, |
980 | command)) |
981 | - pip_execute(command) |
982 | + if venv: |
983 | + subprocess.check_call(command) |
984 | + else: |
985 | + pip_execute(command) |
986 | |
987 | |
988 | def pip_uninstall(package, **options): |
989 | @@ -94,3 +106,16 @@ |
990 | """Returns the list of current python installed packages |
991 | """ |
992 | return pip_execute(["list"]) |
993 | + |
994 | + |
995 | +def pip_create_virtualenv(path=None): |
996 | + """Create an isolated Python environment.""" |
997 | + apt_install('python-virtualenv') |
998 | + |
999 | + if path: |
1000 | + venv_path = path |
1001 | + else: |
1002 | + venv_path = os.path.join(charm_dir(), 'venv') |
1003 | + |
1004 | + if not os.path.exists(venv_path): |
1005 | + subprocess.check_call(['virtualenv', venv_path]) |
1006 | |
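The parse_options tweak above skips falsy values, so calls such as pip_install(pkg, proxy=None) no longer emit a literal --proxy=None flag. A standalone sketch:

```python
def parse_options(given, available):
    """Yield --key=value flags for recognized, non-empty options."""
    for key, value in sorted(given.items()):
        if not value:
            continue  # drop None/empty values instead of "--proxy=None"
        if key in available:
            yield "--{0}={1}".format(key, value)
```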
1007 | === modified file 'hooks/charmhelpers/core/hookenv.py' |
1008 | --- hooks/charmhelpers/core/hookenv.py 2015-04-16 10:29:22 +0000 |
1009 | +++ hooks/charmhelpers/core/hookenv.py 2015-07-01 21:31:48 +0000 |
1010 | @@ -21,12 +21,16 @@ |
1011 | # Charm Helpers Developers <juju@lists.ubuntu.com> |
1012 | |
1013 | from __future__ import print_function |
1014 | +from distutils.version import LooseVersion |
1015 | +from functools import wraps |
1016 | +import glob |
1017 | import os |
1018 | import json |
1019 | import yaml |
1020 | import subprocess |
1021 | import sys |
1022 | import errno |
1023 | +import tempfile |
1024 | from subprocess import CalledProcessError |
1025 | |
1026 | import six |
1027 | @@ -58,15 +62,17 @@ |
1028 | |
1029 | will cache the result of unit_get + 'test' for future calls. |
1030 | """ |
1031 | + @wraps(func) |
1032 | def wrapper(*args, **kwargs): |
1033 | global cache |
1034 | key = str((func, args, kwargs)) |
1035 | try: |
1036 | return cache[key] |
1037 | except KeyError: |
1038 | - res = func(*args, **kwargs) |
1039 | - cache[key] = res |
1040 | - return res |
1041 | + pass # Drop out of the exception handler scope. |
1042 | + res = func(*args, **kwargs) |
1043 | + cache[key] = res |
1044 | + return res |
1045 | return wrapper |
1046 | |
1047 | |
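Wrapping the memoizer with functools.wraps preserves the decorated function's name and docstring, and hoisting the func() call out of the except block keeps exceptions raised by func itself out of the KeyError handler. A minimal sketch of the same decorator shape:

```python
from functools import wraps

cache = {}


def cached(func):
    """Memoize func on its stringified call signature."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        try:
            return cache[key]
        except KeyError:
            pass  # leave the handler before calling func
        res = func(*args, **kwargs)
        cache[key] = res
        return res
    return wrapper


@cached
def lookup(name):
    """Example target; the body runs once per distinct argument."""
    lookup.calls += 1
    return name.upper()

lookup.calls = 0
```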
1048 | @@ -178,7 +184,7 @@ |
1049 | |
1050 | def remote_unit(): |
1051 | """The remote unit for the current relation hook""" |
1052 | - return os.environ['JUJU_REMOTE_UNIT'] |
1053 | + return os.environ.get('JUJU_REMOTE_UNIT', None) |
1054 | |
1055 | |
1056 | def service_name(): |
1057 | @@ -238,23 +244,7 @@ |
1058 | self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) |
1059 | if os.path.exists(self.path): |
1060 | self.load_previous() |
1061 | - |
1062 | - def __getitem__(self, key): |
1063 | - """For regular dict lookups, check the current juju config first, |
1064 | - then the previous (saved) copy. This ensures that user-saved values |
1065 | - will be returned by a dict lookup. |
1066 | - |
1067 | - """ |
1068 | - try: |
1069 | - return dict.__getitem__(self, key) |
1070 | - except KeyError: |
1071 | - return (self._prev_dict or {})[key] |
1072 | - |
1073 | - def keys(self): |
1074 | - prev_keys = [] |
1075 | - if self._prev_dict is not None: |
1076 | - prev_keys = self._prev_dict.keys() |
1077 | - return list(set(prev_keys + list(dict.keys(self)))) |
1078 | + atexit(self._implicit_save) |
1079 | |
1080 | def load_previous(self, path=None): |
1081 | """Load previous copy of config from disk. |
1082 | @@ -273,6 +263,9 @@ |
1083 | self.path = path or self.path |
1084 | with open(self.path) as f: |
1085 | self._prev_dict = json.load(f) |
1086 | + for k, v in self._prev_dict.items(): |
1087 | + if k not in self: |
1088 | + self[k] = v |
1089 | |
1090 | def changed(self, key): |
1091 | """Return True if the current value for this key is different from |
1092 | @@ -304,13 +297,13 @@ |
1093 | instance. |
1094 | |
1095 | """ |
1096 | - if self._prev_dict: |
1097 | - for k, v in six.iteritems(self._prev_dict): |
1098 | - if k not in self: |
1099 | - self[k] = v |
1100 | with open(self.path, 'w') as f: |
1101 | json.dump(self, f) |
1102 | |
1103 | + def _implicit_save(self): |
1104 | + if self.implicit_save: |
1105 | + self.save() |
1106 | + |
1107 | |
1108 | @cached |
1109 | def config(scope=None): |
1110 | @@ -353,18 +346,49 @@ |
1111 | """Set relation information for the current unit""" |
1112 | relation_settings = relation_settings if relation_settings else {} |
1113 | relation_cmd_line = ['relation-set'] |
1114 | + accepts_file = "--file" in subprocess.check_output( |
1115 | + relation_cmd_line + ["--help"], universal_newlines=True) |
1116 | if relation_id is not None: |
1117 | relation_cmd_line.extend(('-r', relation_id)) |
1118 | - for k, v in (list(relation_settings.items()) + list(kwargs.items())): |
1119 | - if v is None: |
1120 | - relation_cmd_line.append('{}='.format(k)) |
1121 | - else: |
1122 | - relation_cmd_line.append('{}={}'.format(k, v)) |
1123 | - subprocess.check_call(relation_cmd_line) |
1124 | + settings = relation_settings.copy() |
1125 | + settings.update(kwargs) |
1126 | + for key, value in settings.items(): |
1127 | + # Force value to be a string: it always should, but some call |
1128 | + # sites pass in things like dicts or numbers. |
1129 | + if value is not None: |
1130 | + settings[key] = "{}".format(value) |
1131 | + if accepts_file: |
1132 | + # --file was introduced in Juju 1.23.2. Use it by default if |
1133 | + # available, since otherwise we'll break if the relation data is |
1134 | + # too big. Ideally we should tell relation-set to read the data from |
1135 | + # stdin, but that feature is broken in 1.23.2: Bug #1454678. |
1136 | + with tempfile.NamedTemporaryFile(delete=False) as settings_file: |
1137 | + settings_file.write(yaml.safe_dump(settings).encode("utf-8")) |
1138 | + subprocess.check_call( |
1139 | + relation_cmd_line + ["--file", settings_file.name]) |
1140 | + os.remove(settings_file.name) |
1141 | + else: |
1142 | + for key, value in settings.items(): |
1143 | + if value is None: |
1144 | + relation_cmd_line.append('{}='.format(key)) |
1145 | + else: |
1146 | + relation_cmd_line.append('{}={}'.format(key, value)) |
1147 | + subprocess.check_call(relation_cmd_line) |
1148 | # Flush cache of any relation-gets for local unit |
1149 | flush(local_unit()) |
1150 | |
1151 | |
1152 | +def relation_clear(r_id=None): |
1153 | + ''' Clears any relation data already set on relation r_id ''' |
1154 | + settings = relation_get(rid=r_id, |
1155 | + unit=local_unit()) |
1156 | + for setting in settings: |
1157 | + if setting not in ['public-address', 'private-address']: |
1158 | + settings[setting] = None |
1159 | + relation_set(relation_id=r_id, |
1160 | + **settings) |
1161 | + |
1162 | + |
1163 | @cached |
1164 | def relation_ids(reltype=None): |
1165 | """A list of relation_ids""" |
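relation_set now coerces values to strings and prefers the --file mode (Juju 1.23.2+) so large relation data cannot overflow the command line. A side-effect-free sketch of just the argument handling (build_relation_args is a hypothetical name; the real helper shells out to relation-set):

```python
def build_relation_args(settings, accepts_file):
    """Return what relation-set would receive.

    With --file support the settings travel as YAML in a temp file
    (represented here by returning the normalized dict); otherwise
    each setting becomes a key=value token, None meaning "unset".
    """
    normalized = {
        k: "{}".format(v) if v is not None else None
        for k, v in settings.items()
    }
    if accepts_file:
        return normalized  # would be yaml.safe_dump()'d to a file
    return [
        '{}={}'.format(k, v) if v is not None else '{}='.format(k)
        for k, v in sorted(normalized.items())
    ]
```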
1166 | @@ -509,6 +533,11 @@ |
1167 | return None |
1168 | |
1169 | |
1170 | +def unit_public_ip(): |
1171 | + """Get this unit's public IP address""" |
1172 | + return unit_get('public-address') |
1173 | + |
1174 | + |
1175 | def unit_private_ip(): |
1176 | """Get this unit's private IP address""" |
1177 | return unit_get('private-address') |
1178 | @@ -541,10 +570,14 @@ |
1179 | hooks.execute(sys.argv) |
1180 | """ |
1181 | |
1182 | - def __init__(self, config_save=True): |
1183 | + def __init__(self, config_save=None): |
1184 | super(Hooks, self).__init__() |
1185 | self._hooks = {} |
1186 | - self._config_save = config_save |
1187 | + |
1188 | + # For unknown reasons, we allow the Hooks constructor to override |
1189 | + # config().implicit_save. |
1190 | + if config_save is not None: |
1191 | + config().implicit_save = config_save |
1192 | |
1193 | def register(self, name, function): |
1194 | """Register a hook""" |
1195 | @@ -552,13 +585,16 @@ |
1196 | |
1197 | def execute(self, args): |
1198 | """Execute a registered hook based on args[0]""" |
1199 | + _run_atstart() |
1200 | hook_name = os.path.basename(args[0]) |
1201 | if hook_name in self._hooks: |
1202 | - self._hooks[hook_name]() |
1203 | - if self._config_save: |
1204 | - cfg = config() |
1205 | - if cfg.implicit_save: |
1206 | - cfg.save() |
1207 | + try: |
1208 | + self._hooks[hook_name]() |
1209 | + except SystemExit as x: |
1210 | + if x.code is None or x.code == 0: |
1211 | + _run_atexit() |
1212 | + raise |
1213 | + _run_atexit() |
1214 | else: |
1215 | raise UnregisteredHookError(hook_name) |
1216 | |
1217 | @@ -605,3 +641,160 @@ |
1218 | |
1219 | The results set by action_set are preserved.""" |
1220 | subprocess.check_call(['action-fail', message]) |
1221 | + |
1222 | + |
1223 | +def status_set(workload_state, message): |
1224 | + """Set the workload state with a message |
1225 | + |
1226 | + Use status-set to set the workload state with a message which is visible |
1227 | + to the user via juju status. If the status-set command is not found then |
1228 | + assume this is juju < 1.23 and juju-log the message instead. 
1229 | + |
1230 | + workload_state -- valid juju workload state. |
1231 | + message -- status update message |
1232 | + """ |
1233 | + valid_states = ['maintenance', 'blocked', 'waiting', 'active'] |
1234 | + if workload_state not in valid_states: |
1235 | + raise ValueError( |
1236 | + '{!r} is not a valid workload state'.format(workload_state) |
1237 | + ) |
1238 | + cmd = ['status-set', workload_state, message] |
1239 | + try: |
1240 | + ret = subprocess.call(cmd) |
1241 | + if ret == 0: |
1242 | + return |
1243 | + except OSError as e: |
1244 | + if e.errno != errno.ENOENT: |
1245 | + raise |
1246 | + log_message = 'status-set failed: {} {}'.format(workload_state, |
1247 | + message) |
1248 | + log(log_message, level='INFO') |
1249 | + |
1250 | + |
1251 | +def status_get(): |
1252 | + """Retrieve the previously set juju workload state |
1253 | + |
1254 | + If the status-set command is not found then assume this is juju < 1.23 and |
1255 | + return 'unknown' |
1256 | + """ |
1257 | + cmd = ['status-get'] |
1258 | + try: |
1259 | + raw_status = subprocess.check_output(cmd, universal_newlines=True) |
1260 | + status = raw_status.rstrip() |
1261 | + return status |
1262 | + except OSError as e: |
1263 | + if e.errno == errno.ENOENT: |
1264 | + return 'unknown' |
1265 | + else: |
1266 | + raise |
1267 | + |
1268 | + |
1269 | +def translate_exc(from_exc, to_exc): |
1270 | + def inner_translate_exc1(f): |
1271 | + def inner_translate_exc2(*args, **kwargs): |
1272 | + try: |
1273 | + return f(*args, **kwargs) |
1274 | + except from_exc: |
1275 | + raise to_exc |
1276 | + |
1277 | + return inner_translate_exc2 |
1278 | + |
1279 | + return inner_translate_exc1 |
1280 | + |
1281 | + |
1282 | +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
1283 | +def is_leader(): |
1284 | + """Does the current unit hold the juju leadership |
1285 | + |
1286 | + Uses juju to determine whether the current unit is the leader of its peers |
1287 | + """ |
1288 | + cmd = ['is-leader', '--format=json'] |
1289 | + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) |
1290 | + |
1291 | + |
1292 | +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
1293 | +def leader_get(attribute=None): |
1294 | + """Juju leader get value(s)""" |
1295 | + cmd = ['leader-get', '--format=json'] + [attribute or '-'] |
1296 | + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) |
1297 | + |
1298 | + |
1299 | +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
1300 | +def leader_set(settings=None, **kwargs): |
1301 | + """Juju leader set value(s)""" |
1302 | + # Don't log secrets. |
1303 | + # log("Juju leader-set '%s'" % (settings), level=DEBUG) |
1304 | + cmd = ['leader-set'] |
1305 | + settings = settings or {} |
1306 | + settings.update(kwargs) |
1307 | + for k, v in settings.items(): |
1308 | + if v is None: |
1309 | + cmd.append('{}='.format(k)) |
1310 | + else: |
1311 | + cmd.append('{}={}'.format(k, v)) |
1312 | + subprocess.check_call(cmd) |
1313 | + |
1314 | + |
1315 | +@cached |
1316 | +def juju_version(): |
1317 | + """Full version string (eg. '1.23.3.1-trusty-amd64')""" |
1318 | + # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1 |
1319 | + jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0] |
1320 | + return subprocess.check_output([jujud, 'version'], |
1321 | + universal_newlines=True).strip() |
1322 | + |
1323 | + |
1324 | +@cached |
1325 | +def has_juju_version(minimum_version): |
1326 | + """Return True if the Juju version is at least the provided version""" |
1327 | + return LooseVersion(juju_version()) >= LooseVersion(minimum_version) |
1328 | + |
1329 | + |
1330 | +_atexit = [] |
1331 | +_atstart = [] |
1332 | + |
1333 | + |
1334 | +def atstart(callback, *args, **kwargs): |
1335 | + '''Schedule a callback to run before the main hook. |
1336 | + |
1337 | + Callbacks are run in the order they were added. |
1338 | + |
1339 | + This is useful for modules and classes to perform initialization |
1340 | + and inject behavior. In particular: |
1341 | + - Run common code before all of your hooks, such as logging |
1342 | + the hook name or interesting relation data. |
1343 | + - Defer object or module initialization that requires a hook |
1344 | + context until we know there actually is a hook context, |
1345 | + making testing easier. |
1346 | + - Rather than requiring charm authors to include boilerplate to |
1347 | + invoke your helper's behavior, have it run automatically if |
1348 | + your object is instantiated or module imported. |
1349 | + |
1350 | + This is not at all useful after your hook framework has been launched. 
1351 | + ''' |
1352 | + global _atstart |
1353 | + _atstart.append((callback, args, kwargs)) |
1354 | + |
1355 | + |
1356 | +def atexit(callback, *args, **kwargs): |
1357 | + '''Schedule a callback to run on successful hook completion. |
1358 | + |
1359 | + Callbacks are run in the reverse order that they were added.''' |
1360 | + _atexit.append((callback, args, kwargs)) |
1361 | + |
1362 | + |
1363 | +def _run_atstart(): |
1364 | + '''Hook frameworks must invoke this before running the main hook body.''' |
1365 | + global _atstart |
1366 | + for callback, args, kwargs in _atstart: |
1367 | + callback(*args, **kwargs) |
1368 | + del _atstart[:] |
1369 | + |
1370 | + |
1371 | +def _run_atexit(): |
1372 | + '''Hook frameworks must invoke this after the main hook body has |
1373 | + successfully completed. Do not invoke it if the hook fails.''' |
1374 | + global _atexit |
1375 | + for callback, args, kwargs in reversed(_atexit): |
1376 | + callback(*args, **kwargs) |
1377 | + del _atexit[:] |
1378 | |
1379 | === modified file 'hooks/charmhelpers/core/host.py' |
1380 | --- hooks/charmhelpers/core/host.py 2015-04-16 10:29:22 +0000 |
1381 | +++ hooks/charmhelpers/core/host.py 2015-07-01 21:31:48 +0000 |
1382 | @@ -24,6 +24,7 @@ |
1383 | import os |
1384 | import re |
1385 | import pwd |
1386 | +import glob |
1387 | import grp |
1388 | import random |
1389 | import string |
1390 | @@ -90,7 +91,7 @@ |
1391 | ['service', service_name, 'status'], |
1392 | stderr=subprocess.STDOUT).decode('UTF-8') |
1393 | except subprocess.CalledProcessError as e: |
1394 | - return 'unrecognized service' not in e.output |
1395 | + return b'unrecognized service' not in e.output |
1396 | else: |
1397 | return True |
1398 | |
1399 | @@ -269,6 +270,21 @@ |
1400 | return None |
1401 | |
1402 | |
1403 | +def path_hash(path): |
1404 | + """ |
1405 | + Generate a hash checksum of all files matching 'path'. Standard wildcards |
1406 | + like '*' and '?' are supported, see documentation for the 'glob' module for |
1407 | + more information. |
1408 | + |
1409 | + :return: dict: A { filename: hash } dictionary for all matched files. |
1410 | + Empty if none found. |
1411 | + """ |
1412 | + return { |
1413 | + filename: file_hash(filename) |
1414 | + for filename in glob.iglob(path) |
1415 | + } |
1416 | + |
1417 | + |
1418 | def check_hash(path, checksum, hash_type='md5'): |
1419 | """ |
1420 | Validate a file using a cryptographic checksum. |
1421 | @@ -296,23 +312,25 @@ |
1422 | |
1423 | @restart_on_change({ |
1424 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ], 
1425 | + '/etc/apache/sites-enabled/*': [ 'apache2' ] |
1426 | }) |
1427 | - def ceph_client_changed(): |
1428 | + def config_changed(): |
1429 | pass # your code here |
1430 | |
1431 | In this example, the cinder-api and cinder-volume services |
1432 | would be restarted if /etc/ceph/ceph.conf is changed by the |
1433 | - ceph_client_changed function. |
1434 | + ceph_client_changed function. The apache2 service would be |
1435 | + restarted if any file matching the pattern got changed, created |
1436 | + or removed. Standard wildcards are supported, see documentation |
1437 | + for the 'glob' module for more information. |
1438 | """ |
1439 | def wrap(f): |
1440 | def wrapped_f(*args, **kwargs): |
1441 | - checksums = {} |
1442 | - for path in restart_map: |
1443 | - checksums[path] = file_hash(path) |
1444 | + checksums = {path: path_hash(path) for path in restart_map} |
1445 | f(*args, **kwargs) |
1446 | restarts = [] |
1447 | for path in restart_map: |
1448 | - if checksums[path] != file_hash(path): |
1449 | + if path_hash(path) != checksums[path]: |
1450 | restarts += restart_map[path] |
1451 | services_list = list(OrderedDict.fromkeys(restarts)) |
1452 | if not stopstart: |
1453 | |
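path_hash generalizes restart_on_change from single files to glob patterns: the snapshot is a {filename: hash} dict, so creating or removing a matching file changes the snapshot just as surely as editing one. A self-contained sketch using md5, the helper's default:

```python
import glob
import hashlib
import os
import tempfile


def file_hash(path):
    """md5 of one file's contents (sketch of the core helper)."""
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()


def path_hash(path):
    """Map every file matching the glob pattern to its hash."""
    return {name: file_hash(name) for name in glob.iglob(path)}


# A new file under the pattern changes the snapshot, which is what
# lets restart_on_change notice created or removed files.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'a.conf'), 'w') as f:
    f.write('one')
before = path_hash(os.path.join(tmpdir, '*.conf'))
with open(os.path.join(tmpdir, 'b.conf'), 'w') as f:
    f.write('two')
after = path_hash(os.path.join(tmpdir, '*.conf'))
```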
1454 | === modified file 'hooks/charmhelpers/core/services/base.py' |
1455 | --- hooks/charmhelpers/core/services/base.py 2015-01-26 10:55:38 +0000 |
1456 | +++ hooks/charmhelpers/core/services/base.py 2015-07-01 21:31:48 +0000 |
1457 | @@ -15,9 +15,9 @@ |
1458 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1459 | |
1460 | import os |
1461 | -import re |
1462 | import json |
1463 | -from collections import Iterable |
1464 | +from inspect import getargspec |
1465 | +from collections import Iterable, OrderedDict |
1466 | |
1467 | from charmhelpers.core import host |
1468 | from charmhelpers.core import hookenv |
1469 | @@ -119,7 +119,7 @@ |
1470 | """ |
1471 | self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json') |
1472 | self._ready = None |
1473 | - self.services = {} |
1474 | + self.services = OrderedDict() |
1475 | for service in services or []: |
1476 | service_name = service['service'] |
1477 | self.services[service_name] = service |
1478 | @@ -128,15 +128,18 @@ |
1479 | """ |
1480 | Handle the current hook by doing The Right Thing with the registered services. |
1481 | """ |
1482 | - hook_name = hookenv.hook_name() |
1483 | - if hook_name == 'stop': |
1484 | - self.stop_services() |
1485 | - else: |
1486 | - self.provide_data() |
1487 | - self.reconfigure_services() |
1488 | - cfg = hookenv.config() |
1489 | - if cfg.implicit_save: |
1490 | - cfg.save() |
1491 | + hookenv._run_atstart() |
1492 | + try: |
1493 | + hook_name = hookenv.hook_name() |
1494 | + if hook_name == 'stop': |
1495 | + self.stop_services() |
1496 | + else: |
1497 | + self.reconfigure_services() |
1498 | + self.provide_data() |
1499 | + except SystemExit as x: |
1500 | + if x.code is None or x.code == 0: |
1501 | + hookenv._run_atexit() |
1502 | + hookenv._run_atexit() |
1503 | |
1504 | def provide_data(self): |
1505 | """ |
1506 | @@ -145,15 +148,36 @@ |
1507 | A provider must have a `name` attribute, which indicates which relation |
1508 | to set data on, and a `provide_data()` method, which returns a dict of |
1509 | data to set. |
1510 | + |
1511 | + The `provide_data()` method can optionally accept two parameters: |
1512 | + |
1513 | + * ``remote_service`` The name of the remote service that the data will |
1514 | + be provided to. The `provide_data()` method will be called once |
1515 | + for each connected service (not unit). This allows the method to |
1516 | + tailor its data to the given service. |
1517 | + * ``service_ready`` Whether or not the service definition had all of |
1518 | + its requirements met, and thus the ``data_ready`` callbacks run. |
1519 | + |
1520 | + Note that the ``provided_data`` methods are now called **after** the |
1521 | + ``data_ready`` callbacks are run. This gives the ``data_ready`` callbacks |
1522 | + a chance to generate any data necessary for the providing to the remote |
1523 | + services. |
1524 | """ |
1525 | - hook_name = hookenv.hook_name() |
1526 | - for service in self.services.values(): |
1527 | + for service_name, service in self.services.items(): |
1528 | + service_ready = self.is_ready(service_name) |
1529 | for provider in service.get('provided_data', []): |
1530 | - if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name): |
1531 | - data = provider.provide_data() |
1532 | - _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data |
1533 | - if _ready: |
1534 | - hookenv.relation_set(None, data) |
1535 | + for relid in hookenv.relation_ids(provider.name): |
1536 | + units = hookenv.related_units(relid) |
1537 | + if not units: |
1538 | + continue |
1539 | + remote_service = units[0].split('/')[0] |
1540 | + argspec = getargspec(provider.provide_data) |
1541 | + if len(argspec.args) > 1: |
1542 | + data = provider.provide_data(remote_service, service_ready) |
1543 | + else: |
1544 | + data = provider.provide_data() |
1545 | + if data: |
1546 | + hookenv.relation_set(relid, data) |
1547 | |
1548 | def reconfigure_services(self, *service_names): |
1549 | """ |
1550 | |
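The hunk above dispatches on the provider's signature: `getargspec` is used to decide whether `provide_data()` accepts the newer `(remote_service, service_ready)` arguments, keeping old providers working unchanged. A minimal sketch of that pattern (the provider classes here are illustrative, not real charm-helpers classes; charm-helpers uses the Python 2-era `inspect.getargspec`, for which `getfullargspec` is the modern equivalent):

```python
# Sketch of the optional-signature dispatch added to provide_data().
from inspect import getfullargspec

class LegacyProvider(object):
    """Old-style provider: provide_data() takes no extra arguments."""
    name = 'db'

    def provide_data(self):
        return {'host': '10.0.0.1'}

class ServiceAwareProvider(object):
    """New-style provider: tailors its data per remote service."""
    name = 'db'

    def provide_data(self, remote_service, service_ready):
        return {'host': '10.0.0.1',
                'client': remote_service,
                'ready': service_ready}

def call_provide_data(provider, remote_service, service_ready):
    # More than just 'self' in the argument list means the provider
    # accepts the newer (remote_service, service_ready) form.
    argspec = getfullargspec(provider.provide_data)
    if len(argspec.args) > 1:
        return provider.provide_data(remote_service, service_ready)
    return provider.provide_data()

print(call_provide_data(LegacyProvider(), 'wordpress', True))
print(call_provide_data(ServiceAwareProvider(), 'wordpress', True))
```

This lets the manager call every provider the same way while new providers can tailor data to each connected service, as described in the docstring above.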
1551 | === modified file 'hooks/charmhelpers/fetch/__init__.py' |
1552 | --- hooks/charmhelpers/fetch/__init__.py 2015-01-26 10:55:38 +0000 |
1553 | +++ hooks/charmhelpers/fetch/__init__.py 2015-07-01 21:31:48 +0000 |
1554 | @@ -158,7 +158,7 @@ |
1555 | |
1556 | def apt_cache(in_memory=True): |
1557 | """Build and return an apt cache""" |
1558 | - import apt_pkg |
1559 | + from apt import apt_pkg |
1560 | apt_pkg.init() |
1561 | if in_memory: |
1562 | apt_pkg.config.set("Dir::Cache::pkgcache", "") |
1563 | |
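The `apt_cache()` change swaps a bare `import apt_pkg` for `from apt import apt_pkg`, binding the module through the `python-apt` package rather than the low-level binding directly. Since `apt` only exists on Debian/Ubuntu systems, a guarded-import sketch (hypothetical helper name, for illustration only) shows how such code can degrade gracefully elsewhere:

```python
# Guarded import of python-apt's apt_pkg binding; on non-Debian systems
# the import fails and apt_pkg stays None.
try:
    from apt import apt_pkg
except ImportError:
    apt_pkg = None

def apt_cache_available():
    """Report whether an apt cache could be built on this host."""
    return apt_pkg is not None

print(apt_cache_available())
```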
1564 | === modified file 'hooks/charmhelpers/fetch/giturl.py' |
1565 | --- hooks/charmhelpers/fetch/giturl.py 2015-02-26 11:10:15 +0000 |
1566 | +++ hooks/charmhelpers/fetch/giturl.py 2015-07-01 21:31:48 +0000 |
1567 | @@ -45,14 +45,16 @@ |
1568 | else: |
1569 | return True |
1570 | |
1571 | - def clone(self, source, dest, branch): |
1572 | + def clone(self, source, dest, branch, depth=None): |
1573 | if not self.can_handle(source): |
1574 | raise UnhandledSource("Cannot handle {}".format(source)) |
1575 | |
1576 | - repo = Repo.clone_from(source, dest) |
1577 | - repo.git.checkout(branch) |
1578 | + if depth: |
1579 | + Repo.clone_from(source, dest, branch=branch, depth=depth) |
1580 | + else: |
1581 | + Repo.clone_from(source, dest, branch=branch) |
1582 | |
1583 | - def install(self, source, branch="master", dest=None): |
1584 | + def install(self, source, branch="master", dest=None, depth=None): |
1585 | url_parts = self.parse_url(source) |
1586 | branch_name = url_parts.path.strip("/").split("/")[-1] |
1587 | if dest: |
1588 | @@ -63,7 +65,7 @@ |
1589 | if not os.path.exists(dest_dir): |
1590 | mkdir(dest_dir, perms=0o755) |
1591 | try: |
1592 | - self.clone(source, dest_dir, branch) |
1593 | + self.clone(source, dest_dir, branch, depth) |
1594 | except GitCommandError as e: |
1595 | raise UnhandledSource(e.message) |
1596 | except OSError as e: |
1597 | |
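The `giturl.py` hunk threads an optional `depth` through to GitPython's `Repo.clone_from()`, requesting a shallow clone only when the caller asks for one. A sketch of that forwarding, with a stub standing in for `Repo.clone_from` so it runs without git or network access (the `clone_from` parameter is an assumption of this sketch, not part of the real handler):

```python
# Forward an optional shallow-clone depth to a clone backend, mirroring
# the GitUrlFetchHandler.clone() change above.
def clone(source, dest, branch, depth=None, clone_from=None):
    kwargs = {'branch': branch}
    if depth:
        # Only request a shallow clone when the caller asked for one;
        # omitting 'depth' keeps the default full-history clone.
        kwargs['depth'] = depth
    clone_from(source, dest, **kwargs)

calls = []
def fake_clone_from(source, dest, **kwargs):
    calls.append((source, dest, kwargs))

clone('https://example.org/repo.git', '/tmp/repo', 'master',
      clone_from=fake_clone_from)
clone('https://example.org/repo.git', '/tmp/repo', 'master', depth=1,
      clone_from=fake_clone_from)
print(calls[0][2])  # {'branch': 'master'}
print(calls[1][2])  # {'branch': 'master', 'depth': 1}
```

Shallow clones cut download time for the charm deploy-from-source path, which only needs the tip of a branch.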
1598 | === modified file 'metadata.yaml' |
1599 | --- metadata.yaml 2014-11-18 05:51:05 +0000 |
1600 | +++ metadata.yaml 2015-07-01 21:31:48 +0000 |
1601 | @@ -11,9 +11,10 @@ |
1602 | . |
1603 | This charm should be used in conjunction with the ceilometer and nova charm to collect |
1604 | Openstack measures. |
1605 | -categories: |
1606 | - - miscellaneous |
1607 | +tags: |
1608 | - openstack |
1609 | + - telemetry |
1610 | + - misc |
1611 | provides: |
1612 | nrpe-external-master: |
1613 | interface: nrpe-external-master |
1614 | |
1615 | === removed file 'templates/ceilometer.conf.~1~' |
1616 | --- templates/ceilometer.conf.~1~ 2013-10-18 15:58:17 +0000 |
1617 | +++ templates/ceilometer.conf.~1~ 1970-01-01 00:00:00 +0000 |
1618 | @@ -1,21 +0,0 @@ |
1619 | -[DEFAULT] |
1620 | -debug = {{ debug }} |
1621 | -verbose = {{ verbose }} |
1622 | - |
1623 | -metering_secret = {{ metering_secret }} |
1624 | - |
1625 | -rabbit_host = {{ rabbitmq_host }} |
1626 | -rabbit_userid = {{ rabbitmq_user }} |
1627 | -rabbit_password = {{ rabbitmq_password }} |
1628 | -rabbit_virtual_host = {{ rabbitmq_virtual_host }} |
1629 | - |
1630 | -os_auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }}/v2.0 |
1631 | -os_tenant_name = {{ admin_tenant_name }} |
1632 | -os_username = {{ admin_user }} |
1633 | -os_password = {{ admin_password }} |
1634 | - |
1635 | -logdir = /var/log/ceilometer |
1636 | - |
1637 | -host = {{ ceilometer_host }} |
1638 | - |
1639 | -# from socket import gethostname as get_host_name |
1640 | \ No newline at end of file |
1641 | |
1642 | === added directory 'tests' |
1643 | === added file 'tests/00-setup' |
1644 | --- tests/00-setup 1970-01-01 00:00:00 +0000 |
1645 | +++ tests/00-setup 2015-07-01 21:31:48 +0000 |
1646 | @@ -0,0 +1,15 @@ |
1647 | +#!/bin/bash |
1648 | + |
1649 | +set -ex |
1650 | + |
1651 | +sudo add-apt-repository --yes ppa:juju/stable |
1652 | +sudo apt-get update --yes |
1653 | +sudo apt-get install --yes python-amulet \ |
1654 | + python-ceilometerclient \ |
1655 | + python-cinderclient \ |
1656 | + python-distro-info \ |
1657 | + python-glanceclient \ |
1658 | + python-heatclient \ |
1659 | + python-keystoneclient \ |
1660 | + python-novaclient \ |
1661 | + python-swiftclient |
1662 | |
1663 | === added file 'tests/014-basic-precise-icehouse' |
1664 | --- tests/014-basic-precise-icehouse 1970-01-01 00:00:00 +0000 |
1665 | +++ tests/014-basic-precise-icehouse 2015-07-01 21:31:48 +0000 |
1666 | @@ -0,0 +1,11 @@ |
1667 | +#!/usr/bin/python |
1668 | + |
1669 | +"""Amulet tests on a basic ceilometer-agent deployment on precise-icehouse.""" |
1670 | + |
1671 | +from basic_deployment import CeiloAgentBasicDeployment |
1672 | + |
1673 | +if __name__ == '__main__': |
1674 | + deployment = CeiloAgentBasicDeployment(series='precise', |
1675 | + openstack='cloud:precise-icehouse', |
1676 | + source='cloud:precise-updates/icehouse') |
1677 | + deployment.run_tests() |
1678 | |
1679 | === added file 'tests/015-basic-trusty-icehouse' |
1680 | --- tests/015-basic-trusty-icehouse 1970-01-01 00:00:00 +0000 |
1681 | +++ tests/015-basic-trusty-icehouse 2015-07-01 21:31:48 +0000 |
1682 | @@ -0,0 +1,9 @@ |
1683 | +#!/usr/bin/python |
1684 | + |
1685 | +"""Amulet tests on a basic ceilometer-agent deployment on trusty-icehouse.""" |
1686 | + |
1687 | +from basic_deployment import CeiloAgentBasicDeployment |
1688 | + |
1689 | +if __name__ == '__main__': |
1690 | + deployment = CeiloAgentBasicDeployment(series='trusty') |
1691 | + deployment.run_tests() |
1692 | |
1693 | === added file 'tests/016-basic-trusty-juno' |
1694 | --- tests/016-basic-trusty-juno 1970-01-01 00:00:00 +0000 |
1695 | +++ tests/016-basic-trusty-juno 2015-07-01 21:31:48 +0000 |
1696 | @@ -0,0 +1,11 @@ |
1697 | +#!/usr/bin/python |
1698 | + |
1699 | +"""Amulet tests on a basic ceilometer-agent deployment on trusty-juno.""" |
1700 | + |
1701 | +from basic_deployment import CeiloAgentBasicDeployment |
1702 | + |
1703 | +if __name__ == '__main__': |
1704 | + deployment = CeiloAgentBasicDeployment(series='trusty', |
1705 | + openstack='cloud:trusty-juno', |
1706 | + source='cloud:trusty-updates/juno') |
1707 | + deployment.run_tests() |
1708 | |
1709 | === added file 'tests/017-basic-trusty-kilo' |
1710 | --- tests/017-basic-trusty-kilo 1970-01-01 00:00:00 +0000 |
1711 | +++ tests/017-basic-trusty-kilo 2015-07-01 21:31:48 +0000 |
1712 | @@ -0,0 +1,11 @@ |
1713 | +#!/usr/bin/python |
1714 | + |
1715 | +"""Amulet tests on a basic ceilometer-agent deployment on trusty-kilo.""" |
1716 | + |
1717 | +from basic_deployment import CeiloAgentBasicDeployment |
1718 | + |
1719 | +if __name__ == '__main__': |
1720 | + deployment = CeiloAgentBasicDeployment(series='trusty', |
1721 | + openstack='cloud:trusty-kilo', |
1722 | + source='cloud:trusty-updates/kilo') |
1723 | + deployment.run_tests() |
1724 | |
1725 | === added file 'tests/018-basic-utopic-juno' |
1726 | --- tests/018-basic-utopic-juno 1970-01-01 00:00:00 +0000 |
1727 | +++ tests/018-basic-utopic-juno 2015-07-01 21:31:48 +0000 |
1728 | @@ -0,0 +1,9 @@ |
1729 | +#!/usr/bin/python |
1730 | + |
1731 | +"""Amulet tests on a basic ceilometer-agent deployment on utopic-juno.""" |
1732 | + |
1733 | +from basic_deployment import CeiloAgentBasicDeployment |
1734 | + |
1735 | +if __name__ == '__main__': |
1736 | + deployment = CeiloAgentBasicDeployment(series='utopic') |
1737 | + deployment.run_tests() |
1738 | |
1739 | === added file 'tests/019-basic-vivid-kilo' |
1740 | --- tests/019-basic-vivid-kilo 1970-01-01 00:00:00 +0000 |
1741 | +++ tests/019-basic-vivid-kilo 2015-07-01 21:31:48 +0000 |
1742 | @@ -0,0 +1,9 @@ |
1743 | +#!/usr/bin/python |
1744 | + |
1745 | +"""Amulet tests on a basic ceilometer-agent deployment on vivid-kilo.""" |
1746 | + |
1747 | +from basic_deployment import CeiloAgentBasicDeployment |
1748 | + |
1749 | +if __name__ == '__main__': |
1750 | + deployment = CeiloAgentBasicDeployment(series='vivid') |
1751 | + deployment.run_tests() |
1752 | |
1753 | === added file 'tests/020-basic-trusty-liberty' |
1754 | --- tests/020-basic-trusty-liberty 1970-01-01 00:00:00 +0000 |
1755 | +++ tests/020-basic-trusty-liberty 2015-07-01 21:31:48 +0000 |
1756 | @@ -0,0 +1,11 @@ |
1757 | +#!/usr/bin/python |
1758 | + |
1759 | +"""Amulet tests on a basic ceilometer-agent deployment on trusty-liberty.""" |
1760 | + |
1761 | +from basic_deployment import CeiloAgentBasicDeployment |
1762 | + |
1763 | +if __name__ == '__main__': |
1764 | + deployment = CeiloAgentBasicDeployment(series='trusty', |
1765 | + openstack='cloud:trusty-liberty', |
1766 | + source='cloud:trusty-updates/liberty') |
1767 | + deployment.run_tests() |
1768 | |
1769 | === added file 'tests/021-basic-wily-liberty' |
1770 | --- tests/021-basic-wily-liberty 1970-01-01 00:00:00 +0000 |
1771 | +++ tests/021-basic-wily-liberty 2015-07-01 21:31:48 +0000 |
1772 | @@ -0,0 +1,9 @@ |
1773 | +#!/usr/bin/python |
1774 | + |
1775 | +"""Amulet tests on a basic ceilometer-agent deployment on wily-liberty.""" |
1776 | + |
1777 | +from basic_deployment import CeiloAgentBasicDeployment |
1778 | + |
1779 | +if __name__ == '__main__': |
1780 | + deployment = CeiloAgentBasicDeployment(series='wily') |
1781 | + deployment.run_tests() |
1782 | |
1783 | === added file 'tests/README' |
1784 | --- tests/README 1970-01-01 00:00:00 +0000 |
1785 | +++ tests/README 2015-07-01 21:31:48 +0000 |
1786 | @@ -0,0 +1,62 @@ |
1787 | +This directory provides Amulet tests that focus on verification of |
1788 | +ceilometer-agent deployments. |
1789 | + |
1790 | +test_* methods are called in lexical sort order. |
1791 | + |
1792 | +Test name convention to ensure desired test order: |
1793 | + 1xx service and endpoint checks |
1794 | + 2xx relation checks |
1795 | + 3xx config checks |
1796 | + 4xx functional checks |
1797 | + 9xx restarts and other final checks |
1798 | + |
1799 | +In order to run tests, you'll need charm-tools installed (in addition to |
1800 | +juju, of course): |
1801 | + sudo add-apt-repository ppa:juju/stable |
1802 | + sudo apt-get update |
1803 | + sudo apt-get install charm-tools |
1804 | + |
1805 | +If you use a web proxy server to access the web, you'll need to set the |
1806 | +AMULET_HTTP_PROXY environment variable to the http URL of the proxy server. |
1807 | + |
1808 | +The following examples demonstrate different ways that tests can be executed. |
1809 | +All examples are run from the charm's root directory. |
1810 | + |
1811 | + * To run all tests (starting with 00-setup): |
1812 | + |
1813 | + make test |
1814 | + |
1815 | + * To run a specific test module (or modules): |
1816 | + |
1817 | + juju test -v -p AMULET_HTTP_PROXY 15-basic-trusty-icehouse |
1818 | + |
1819 | + * To run a specific test module (or modules), and keep the environment |
1820 | + deployed after a failure: |
1821 | + |
1822 | + juju test --set-e -v -p AMULET_HTTP_PROXY 15-basic-trusty-icehouse |
1823 | + |
1824 | + * To re-run a test module against an already deployed environment (one |
1825 | + that was deployed by a previous call to 'juju test --set-e'): |
1826 | + |
1827 | + ./tests/15-basic-trusty-icehouse |
1828 | + |
1829 | +For debugging and test development purposes, all code should be idempotent. |
1830 | +In other words, the code should have the ability to be re-run without changing |
1831 | +the results beyond the initial run. This enables editing and re-running of a |
1832 | +test module against an already deployed environment, as described above. |
1833 | + |
1834 | +Manual debugging tips: |
1835 | + |
1836 | + * Set the following env vars before using the OpenStack CLI as admin: |
1837 | + export OS_AUTH_URL=http://`juju-deployer -f keystone 2>&1 | tail -n 1`:5000/v2.0 |
1838 | + export OS_TENANT_NAME=admin |
1839 | + export OS_USERNAME=admin |
1840 | + export OS_PASSWORD=openstack |
1841 | + export OS_REGION_NAME=RegionOne |
1842 | + |
1843 | + * Set the following env vars before using the OpenStack CLI as demoUser: |
1844 | + export OS_AUTH_URL=http://`juju-deployer -f keystone 2>&1 | tail -n 1`:5000/v2.0 |
1845 | + export OS_TENANT_NAME=demoTenant |
1846 | + export OS_USERNAME=demoUser |
1847 | + export OS_PASSWORD=password |
1848 | + export OS_REGION_NAME=RegionOne |
1849 | |
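The README's numeric prefixes work because `test_*` methods are called in lexical sort order, so 1xx service checks always run before 2xx relation checks, and so on. A quick illustration:

```python
# Lexical sorting of method names is what enforces the README's
# 1xx -> 2xx -> 3xx test ordering convention.
names = ['test_302_nova_ceilometer_config',
         'test_100_services',
         'test_205_ceilometer_to_mongodb_relation',
         'test_110_service_catalog']
print(sorted(names))
```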
1850 | === added file 'tests/basic_deployment.py' |
1851 | --- tests/basic_deployment.py 1970-01-01 00:00:00 +0000 |
1852 | +++ tests/basic_deployment.py 2015-07-01 21:31:48 +0000 |
1853 | @@ -0,0 +1,567 @@ |
1854 | +#!/usr/bin/python |
1855 | + |
1856 | +""" |
1857 | +Basic ceilometer-agent functional tests. |
1858 | +""" |
1859 | +import amulet |
1860 | +import time |
1861 | +from ceilometerclient.v2 import client as ceilclient |
1862 | + |
1863 | +from charmhelpers.contrib.openstack.amulet.deployment import ( |
1864 | + OpenStackAmuletDeployment |
1865 | +) |
1866 | + |
1867 | +from charmhelpers.contrib.openstack.amulet.utils import ( |
1868 | + OpenStackAmuletUtils, |
1869 | + DEBUG, |
1870 | + #ERROR |
1871 | +) |
1872 | + |
1873 | +# Use DEBUG to turn on debug logging |
1874 | +u = OpenStackAmuletUtils(DEBUG) |
1875 | + |
1876 | + |
1877 | +class CeiloAgentBasicDeployment(OpenStackAmuletDeployment): |
1878 | + """Amulet tests on a basic ceilometer-agent deployment.""" |
1879 | + |
1880 | + def __init__(self, series, openstack=None, source=None, stable=False): |
1881 | + """Deploy the entire test environment.""" |
1882 | + super(CeiloAgentBasicDeployment, self).__init__(series, openstack, |
1883 | + source, stable) |
1884 | + self._add_services() |
1885 | + self._add_relations() |
1886 | + self._configure_services() |
1887 | + self._deploy() |
1888 | + self._initialize_tests() |
1889 | + |
1890 | + def _add_services(self): |
1891 | + """Add services |
1892 | + |
1893 | + Add the services that we're testing, where ceilometer is local, |
1894 | +    and the rest of the services are from lp branches that are |
1895 | + compatible with the local charm (e.g. stable or next). |
1896 | + """ |
1897 | + # Note: ceilometer-agent becomes a subordinate of nova-compute |
1898 | + this_service = {'name': 'ceilometer-agent'} |
1899 | + other_services = [{'name': 'mysql'}, |
1900 | + {'name': 'rabbitmq-server'}, |
1901 | + {'name': 'keystone'}, |
1902 | + {'name': 'mongodb'}, |
1903 | + {'name': 'ceilometer'}, |
1904 | + {'name': 'nova-compute'}] |
1905 | + super(CeiloAgentBasicDeployment, self)._add_services(this_service, |
1906 | + other_services) |
1907 | + |
1908 | + def _add_relations(self): |
1909 | + """Add all of the relations for the services.""" |
1910 | + relations = { |
1911 | + 'ceilometer:shared-db': 'mongodb:database', |
1912 | + 'ceilometer:amqp': 'rabbitmq-server:amqp', |
1913 | + 'ceilometer:identity-service': 'keystone:identity-service', |
1914 | + 'ceilometer:identity-notifications': 'keystone:' |
1915 | + 'identity-notifications', |
1916 | + 'keystone:shared-db': 'mysql:shared-db', |
1917 | + 'ceilometer:ceilometer-service': 'ceilometer-agent:' |
1918 | + 'ceilometer-service', |
1919 | + 'nova-compute:nova-ceilometer': 'ceilometer-agent:nova-ceilometer', |
1920 | + 'nova-compute:shared-db': 'mysql:shared-db', |
1921 | + 'nova-compute:amqp': 'rabbitmq-server:amqp' |
1922 | + } |
1923 | + super(CeiloAgentBasicDeployment, self)._add_relations(relations) |
1924 | + |
1925 | + def _configure_services(self): |
1926 | + """Configure all of the services.""" |
1927 | + keystone_config = {'admin-password': 'openstack', |
1928 | + 'admin-token': 'ubuntutesting'} |
1929 | + configs = {'keystone': keystone_config} |
1930 | + super(CeiloAgentBasicDeployment, self)._configure_services(configs) |
1931 | + |
1932 | + def _get_token(self): |
1933 | + return self.keystone.service_catalog.catalog['token']['id'] |
1934 | + |
1935 | + def _initialize_tests(self): |
1936 | + """Perform final initialization before tests get run.""" |
1937 | + # Access the sentries for inspecting service units |
1938 | + self.ceil_agent_sentry = self.d.sentry.unit['ceilometer-agent/0'] |
1939 | + self.ceil_sentry = self.d.sentry.unit['ceilometer/0'] |
1940 | + self.mysql_sentry = self.d.sentry.unit['mysql/0'] |
1941 | + self.keystone_sentry = self.d.sentry.unit['keystone/0'] |
1942 | + self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0'] |
1943 | + self.mongodb_sentry = self.d.sentry.unit['mongodb/0'] |
1944 | + self.nova_sentry = self.d.sentry.unit['nova-compute/0'] |
1945 | + u.log.debug('openstack release val: {}'.format( |
1946 | + self._get_openstack_release())) |
1947 | + u.log.debug('openstack release str: {}'.format( |
1948 | + self._get_openstack_release_string())) |
1949 | + |
1950 | + # Let things settle a bit before moving forward |
1951 | + time.sleep(30) |
1952 | + |
1953 | + # Authenticate admin with keystone endpoint |
1954 | + self.keystone = u.authenticate_keystone_admin(self.keystone_sentry, |
1955 | + user='admin', |
1956 | + password='openstack', |
1957 | + tenant='admin') |
1958 | + |
1959 | + # Authenticate admin with ceilometer endpoint |
1960 | + ep = self.keystone.service_catalog.url_for(service_type='metering', |
1961 | + endpoint_type='publicURL') |
1962 | + self.ceil = ceilclient.Client(endpoint=ep, token=self._get_token) |
1963 | + |
1964 | + def test_100_services(self): |
1965 | + """Verify the expected services are running on the corresponding |
1966 | + service units.""" |
1967 | + ceilometer_svcs = [ |
1968 | + 'ceilometer-collector', |
1969 | + 'ceilometer-api', |
1970 | + 'ceilometer-alarm-evaluator', |
1971 | + 'ceilometer-alarm-notifier', |
1972 | + 'ceilometer-agent-notification', |
1973 | + ] |
1974 | + service_names = { |
1975 | + self.ceil_sentry: ceilometer_svcs, |
1976 | + self.mysql_sentry: ['mysql'], |
1977 | + self.keystone_sentry: ['keystone'], |
1978 | + self.rabbitmq_sentry: ['rabbitmq-server'], |
1979 | + self.mongodb_sentry: ['mongodb'], |
1980 | + } |
1981 | + |
1982 | + ret = u.validate_services_by_name(service_names) |
1983 | + if ret: |
1984 | + amulet.raise_status(amulet.FAIL, msg=ret) |
1985 | + |
1986 | + def test_110_service_catalog(self): |
1987 | + """Verify that the service catalog endpoint data is valid.""" |
1988 | + endpoint_check = { |
1989 | + 'adminURL': u.valid_url, |
1990 | + 'id': u.not_null, |
1991 | + 'region': 'RegionOne', |
1992 | + 'publicURL': u.valid_url, |
1993 | + 'internalURL': u.valid_url |
1994 | + } |
1995 | + expected = { |
1996 | + 'metering': [endpoint_check], |
1997 | + 'identity': [endpoint_check] |
1998 | + } |
1999 | + actual = self.keystone.service_catalog.get_endpoints() |
2000 | + |
2001 | + ret = u.validate_svc_catalog_endpoint_data(expected, actual) |
2002 | + if ret: |
2003 | + amulet.raise_status(amulet.FAIL, msg=ret) |
2004 | + |
2005 | + def test_112_keystone_api_endpoint(self): |
2006 | +    """Verify the keystone api endpoint data.""" |
2007 | + endpoints = self.keystone.endpoints.list() |
2008 | + u.log.debug(endpoints) |
2009 | + internal_port = public_port = '5000' |
2010 | + admin_port = '35357' |
2011 | + expected = {'id': u.not_null, |
2012 | + 'region': 'RegionOne', |
2013 | + 'adminurl': u.valid_url, |
2014 | + 'internalurl': u.valid_url, |
2015 | + 'publicurl': u.valid_url, |
2016 | + 'service_id': u.not_null} |
2017 | + |
2018 | + ret = u.validate_endpoint_data(endpoints, admin_port, internal_port, |
2019 | + public_port, expected) |
2020 | + if ret: |
2021 | + message = 'Keystone endpoint: {}'.format(ret) |
2022 | + amulet.raise_status(amulet.FAIL, msg=message) |
2023 | + |
2024 | + def test_114_ceilometer_api_endpoint(self): |
2025 | + """Verify the ceilometer api endpoint data.""" |
2026 | + endpoints = self.keystone.endpoints.list() |
2027 | + u.log.debug(endpoints) |
2028 | + admin_port = internal_port = public_port = '8777' |
2029 | + expected = {'id': u.not_null, |
2030 | + 'region': 'RegionOne', |
2031 | + 'adminurl': u.valid_url, |
2032 | + 'internalurl': u.valid_url, |
2033 | + 'publicurl': u.valid_url, |
2034 | + 'service_id': u.not_null} |
2035 | + |
2036 | + ret = u.validate_endpoint_data(endpoints, admin_port, internal_port, |
2037 | + public_port, expected) |
2038 | + if ret: |
2039 | + message = 'Ceilometer endpoint: {}'.format(ret) |
2040 | + amulet.raise_status(amulet.FAIL, msg=message) |
2041 | + |
2042 | + def test_200_ceilometer_identity_relation(self): |
2043 | + """Verify the ceilometer to keystone identity-service relation data""" |
2044 | +        u.log.debug('Checking ceilometer:keystone identity relation data...') |
2045 | + unit = self.ceil_sentry |
2046 | + relation = ['identity-service', 'keystone:identity-service'] |
2047 | + ceil_ip = unit.relation('identity-service', |
2048 | + 'keystone:identity-service')['private-address'] |
2049 | + ceil_endpoint = "http://%s:8777" % (ceil_ip) |
2050 | + |
2051 | + expected = { |
2052 | + 'admin_url': ceil_endpoint, |
2053 | + 'internal_url': ceil_endpoint, |
2054 | + 'private-address': ceil_ip, |
2055 | + 'public_url': ceil_endpoint, |
2056 | + 'region': 'RegionOne', |
2057 | + 'requested_roles': 'ResellerAdmin', |
2058 | + 'service': 'ceilometer', |
2059 | + } |
2060 | + |
2061 | + ret = u.validate_relation_data(unit, relation, expected) |
2062 | + if ret: |
2063 | + message = u.relation_error('ceilometer identity-service', ret) |
2064 | + amulet.raise_status(amulet.FAIL, msg=message) |
2065 | + |
2066 | + def test_201_keystone_ceilometer_identity_relation(self): |
2067 | + """Verify the keystone to ceilometer identity-service relation data""" |
2068 | + u.log.debug('Checking keystone:ceilometer identity relation data...') |
2069 | + unit = self.keystone_sentry |
2070 | + relation = ['identity-service', 'ceilometer:identity-service'] |
2071 | + id_relation = unit.relation('identity-service', |
2072 | + 'ceilometer:identity-service') |
2073 | + id_ip = id_relation['private-address'] |
2074 | + expected = { |
2075 | + 'admin_token': 'ubuntutesting', |
2076 | + 'auth_host': id_ip, |
2077 | + 'auth_port': "35357", |
2078 | + 'auth_protocol': 'http', |
2079 | + 'private-address': id_ip, |
2080 | + 'service_host': id_ip, |
2081 | + 'service_password': u.not_null, |
2082 | + 'service_port': "5000", |
2083 | + 'service_protocol': 'http', |
2084 | + 'service_tenant': 'services', |
2085 | + 'service_tenant_id': u.not_null, |
2086 | + 'service_username': 'ceilometer', |
2087 | + } |
2088 | + ret = u.validate_relation_data(unit, relation, expected) |
2089 | + if ret: |
2090 | + message = u.relation_error('keystone identity-service', ret) |
2091 | + amulet.raise_status(amulet.FAIL, msg=message) |
2092 | + |
2093 | + def test_202_keystone_ceilometer_identity_notes_relation(self): |
2094 | + """Verify ceilometer to keystone identity-notifications relation""" |
2095 | + u.log.debug('Checking keystone:ceilometer ' |
2096 | + 'identity-notifications relation data...') |
2097 | + unit = self.keystone_sentry |
2098 | + relation = ['identity-service', 'ceilometer:identity-notifications'] |
2099 | + expected = { |
2100 | + 'ceilometer-endpoint-changed': u.not_null, |
2101 | + } |
2102 | + ret = u.validate_relation_data(unit, relation, expected) |
2103 | + if ret: |
2104 | + message = u.relation_error('keystone identity-notifications', ret) |
2105 | + amulet.raise_status(amulet.FAIL, msg=message) |
2106 | + |
2107 | + def test_203_ceilometer_amqp_relation(self): |
2108 | + """Verify the ceilometer to rabbitmq-server amqp relation data""" |
2109 | + u.log.debug('Checking ceilometer:rabbitmq amqp relation data...') |
2110 | + unit = self.ceil_sentry |
2111 | + relation = ['amqp', 'rabbitmq-server:amqp'] |
2112 | + expected = { |
2113 | + 'username': 'ceilometer', |
2114 | + 'private-address': u.valid_ip, |
2115 | + 'vhost': 'openstack' |
2116 | + } |
2117 | + |
2118 | + ret = u.validate_relation_data(unit, relation, expected) |
2119 | + if ret: |
2120 | + message = u.relation_error('ceilometer amqp', ret) |
2121 | + amulet.raise_status(amulet.FAIL, msg=message) |
2122 | + |
2123 | + def test_204_amqp_ceilometer_relation(self): |
2124 | + """Verify the rabbitmq-server to ceilometer amqp relation data""" |
2125 | + u.log.debug('Checking rabbitmq:ceilometer amqp relation data...') |
2126 | + unit = self.rabbitmq_sentry |
2127 | + relation = ['amqp', 'ceilometer:amqp'] |
2128 | + expected = { |
2129 | + 'hostname': u.valid_ip, |
2130 | + 'private-address': u.valid_ip, |
2131 | + 'password': u.not_null, |
2132 | + } |
2133 | + |
2134 | + ret = u.validate_relation_data(unit, relation, expected) |
2135 | + if ret: |
2136 | + message = u.relation_error('rabbitmq amqp', ret) |
2137 | + amulet.raise_status(amulet.FAIL, msg=message) |
2138 | + |
2139 | + def test_205_ceilometer_to_mongodb_relation(self): |
2140 | + """Verify the ceilometer to mongodb relation data""" |
2141 | + u.log.debug('Checking ceilometer:mongodb relation data...') |
2142 | + unit = self.ceil_sentry |
2143 | + relation = ['shared-db', 'mongodb:database'] |
2144 | + expected = { |
2145 | + 'ceilometer_database': 'ceilometer', |
2146 | + 'private-address': u.valid_ip, |
2147 | + } |
2148 | + |
2149 | + ret = u.validate_relation_data(unit, relation, expected) |
2150 | + if ret: |
2151 | + message = u.relation_error('ceilometer shared-db', ret) |
2152 | + amulet.raise_status(amulet.FAIL, msg=message) |
2153 | + |
2154 | + def test_206_mongodb_to_ceilometer_relation(self): |
2155 | + """Verify the mongodb to ceilometer relation data""" |
2156 | + u.log.debug('Checking mongodb:ceilometer relation data...') |
2157 | + unit = self.mongodb_sentry |
2158 | + relation = ['database', 'ceilometer:shared-db'] |
2159 | + expected = { |
2160 | + 'hostname': u.valid_ip, |
2161 | + 'port': '27017', |
2162 | + 'private-address': u.valid_ip, |
2163 | + 'type': 'database', |
2164 | + } |
2165 | + |
2166 | + if self._get_openstack_release() == self.precise_icehouse: |
2167 | + expected['replset'] = 'myset' |
2168 | + |
2169 | + ret = u.validate_relation_data(unit, relation, expected) |
2170 | + if ret: |
2171 | + message = u.relation_error('mongodb database', ret) |
2172 | + amulet.raise_status(amulet.FAIL, msg=message) |
2173 | + |
2174 | + def test_207_ceilometer_ceilometer_agent_relation(self): |
2175 | + """Verify the ceilometer to ceilometer-agent relation data""" |
2176 | + u.log.debug('Checking ceilometer:ceilometer-agent relation data...') |
2177 | + unit = self.ceil_sentry |
2178 | + relation = ['ceilometer-service', |
2179 | + 'ceilometer-agent:ceilometer-service'] |
2180 | + expected = { |
2181 | + 'rabbitmq_user': 'ceilometer', |
2182 | + 'verbose': 'False', |
2183 | + 'rabbitmq_host': u.valid_ip, |
2184 | + 'service_ports': "{'ceilometer_api': [8777, 8767]}", |
2185 | + 'use_syslog': 'False', |
2186 | + 'metering_secret': u.not_null, |
2187 | + 'rabbitmq_virtual_host': 'openstack', |
2188 | + 'db_port': '27017', |
2189 | + 'private-address': u.valid_ip, |
2190 | + 'db_name': 'ceilometer', |
2191 | + 'db_host': u.valid_ip, |
2192 | + 'debug': 'False', |
2193 | + 'rabbitmq_password': u.not_null, |
2194 | + 'port': '8767' |
2195 | + } |
2196 | + |
2197 | + ret = u.validate_relation_data(unit, relation, expected) |
2198 | + if ret: |
2199 | + message = u.relation_error('ceilometer-service', ret) |
2200 | + amulet.raise_status(amulet.FAIL, msg=message) |
2201 | + |
2202 | + def test_208_ceilometer_agent_ceilometer_relation(self): |
2203 | + """Verify the ceilometer-agent to ceilometer relation data""" |
2204 | + u.log.debug('Checking ceilometer-agent:ceilometer relation data...') |
2205 | + unit = self.ceil_agent_sentry |
2206 | + relation = ['ceilometer-service', 'ceilometer:ceilometer-service'] |
2207 | + expected = {'private-address': u.valid_ip} |
2208 | + |
2209 | + ret = u.validate_relation_data(unit, relation, expected) |
2210 | + if ret: |
2211 | + message = u.relation_error('ceilometer-service', ret) |
2212 | + amulet.raise_status(amulet.FAIL, msg=message) |
2213 | + |
2214 | + def test_209_nova_compute_ceilometer_agent_relation(self): |
2215 | +        """Verify the nova-compute to ceilometer-agent relation data""" |
2216 | +        u.log.debug('Checking nova-compute:ceilometer-agent relation data...') |
2217 | + unit = self.nova_sentry |
2218 | + relation = ['nova-ceilometer', 'ceilometer-agent:nova-ceilometer'] |
2219 | + expected = {'private-address': u.valid_ip} |
2220 | + |
2221 | + ret = u.validate_relation_data(unit, relation, expected) |
2222 | + if ret: |
2223 | +            message = u.relation_error('nova-ceilometer', ret) |
2224 | + amulet.raise_status(amulet.FAIL, msg=message) |
2225 | + |
2226 | + def test_210_ceilometer_agent_nova_compute_relation(self): |
2227 | +        """Verify the ceilometer-agent to nova-compute relation data""" |
2228 | +        u.log.debug('Checking ceilometer-agent:nova-compute relation data...') |
2229 | + unit = self.ceil_agent_sentry |
2230 | + relation = ['nova-ceilometer', 'nova-compute:nova-ceilometer'] |
2231 | + sub = ('{"nova": {"/etc/nova/nova.conf": {"sections": {"DEFAULT": ' |
2232 | + '[["instance_usage_audit", "True"], ' |
2233 | + '["instance_usage_audit_period", "hour"], ' |
2234 | + '["notify_on_state_change", "vm_and_task_state"], ' |
2235 | + '["notification_driver", "ceilometer.compute.nova_notifier"], ' |
2236 | + '["notification_driver", ' |
2237 | + '"nova.openstack.common.notifier.rpc_notifier"]]}}}}') |
2238 | + expected = { |
2239 | + 'subordinate_configuration': sub, |
2240 | + 'private-address': u.valid_ip |
2241 | + } |
2242 | + |
2243 | + ret = u.validate_relation_data(unit, relation, expected) |
2244 | + if ret: |
2245 | +            message = u.relation_error('nova-ceilometer', ret) |
2246 | + amulet.raise_status(amulet.FAIL, msg=message) |
2247 | + |
2248 | + def test_300_ceilometer_config(self): |
2249 | + """Verify the data in the ceilometer config file.""" |
2250 | + u.log.debug('Checking ceilometer config file data...') |
2251 | + unit = self.ceil_sentry |
2252 | + rmq_rel = self.rabbitmq_sentry.relation('amqp', |
2253 | + 'ceilometer:amqp') |
2254 | + ks_rel = self.keystone_sentry.relation('identity-service', |
2255 | + 'ceilometer:identity-service') |
2256 | + auth_uri = '%s://%s:%s/' % (ks_rel['service_protocol'], |
2257 | + ks_rel['service_host'], |
2258 | + ks_rel['service_port']) |
2259 | + db_relation = self.mongodb_sentry.relation('database', |
2260 | + 'ceilometer:shared-db') |
2261 | + db_conn = 'mongodb://%s:%s/ceilometer' % (db_relation['hostname'], |
2262 | + db_relation['port']) |
2263 | + conf = '/etc/ceilometer/ceilometer.conf' |
2264 | + expected = { |
2265 | + 'DEFAULT': { |
2266 | + 'verbose': 'False', |
2267 | + 'debug': 'False', |
2268 | + 'use_syslog': 'False', |
2269 | + 'rabbit_userid': 'ceilometer', |
2270 | + 'rabbit_virtual_host': 'openstack', |
2271 | + 'rabbit_password': rmq_rel['password'], |
2272 | + 'rabbit_host': rmq_rel['hostname'], |
2273 | + }, |
2274 | + 'api': { |
2275 | + 'port': '8767', |
2276 | + }, |
2277 | + 'service_credentials': { |
2278 | + 'os_auth_url': auth_uri + 'v2.0', |
2279 | + 'os_tenant_name': 'services', |
2280 | + 'os_username': 'ceilometer', |
2281 | + 'os_password': ks_rel['service_password'], |
2282 | + }, |
2283 | + 'database': { |
2284 | + 'connection': db_conn, |
2285 | + }, |
2286 | + 'keystone_authtoken': { |
2287 | + 'auth_uri': auth_uri, |
2288 | + 'auth_host': ks_rel['auth_host'], |
2289 | + 'auth_port': ks_rel['auth_port'], |
2290 | + 'auth_protocol': ks_rel['auth_protocol'], |
2291 | + 'admin_tenant_name': 'services', |
2292 | + 'admin_user': 'ceilometer', |
2293 | + 'admin_password': ks_rel['service_password'], |
2294 | + }, |
2295 | + } |
2296 | + |
2297 | + for section, pairs in expected.iteritems(): |
2298 | + ret = u.validate_config_data(unit, conf, section, pairs) |
2299 | + if ret: |
2300 | + message = "ceilometer config error: {}".format(ret) |
2301 | + amulet.raise_status(amulet.FAIL, msg=message) |
2302 | + |
2303 | + def test_301_nova_config(self): |
2304 | + """Verify data in the nova compute nova config file""" |
2305 | + u.log.debug('Checking nova compute config file...') |
2306 | + unit = self.nova_sentry |
2307 | + conf = '/etc/nova/nova.conf' |
2308 | + expected = { |
2309 | + 'DEFAULT': { |
2310 | + 'verbose': 'False', |
2311 | + 'debug': 'False', |
2312 | + 'use_syslog': 'False', |
2313 | + 'my_ip': u.valid_ip, |
2314 | + 'dhcpbridge_flagfile': '/etc/nova/nova.conf', |
2315 | + 'dhcpbridge': '/usr/bin/nova-dhcpbridge', |
2316 | + 'logdir': '/var/log/nova', |
2317 | + 'state_path': '/var/lib/nova', |
2318 | + 'api_paste_config': '/etc/nova/api-paste.ini', |
2319 | + 'enabled_apis': 'ec2,osapi_compute,metadata', |
2320 | + 'auth_strategy': 'keystone', |
2321 | + 'instance_usage_audit': 'True', |
2322 | + 'instance_usage_audit_period': 'hour', |
2323 | + 'notify_on_state_change': 'vm_and_task_state', |
2324 | + } |
2325 | + } |
2326 | + |
2327 | + # NOTE(beisner): notification_driver is not checked like the |
2328 | + # others, as configparser does not support duplicate config |
2329 | +        # options, and dicts can't have duplicate keys.
2330 | + for section, pairs in expected.iteritems(): |
2331 | + ret = u.validate_config_data(unit, conf, section, pairs) |
2332 | + if ret: |
2333 | + message = "ceilometer config error: {}".format(ret) |
2334 | + amulet.raise_status(amulet.FAIL, msg=message) |
2335 | + |
2336 | + # Check notification_driver existence via simple grep cmd |
2337 | + lines = [('notification_driver = ' |
2338 | + 'ceilometer.compute.nova_notifier'), |
2339 | + ('notification_driver = ' |
2340 | + 'nova.openstack.common.notifier.rpc_notifier')] |
2341 | + |
2342 | + sentry_units = [unit] |
2343 | + cmds = [] |
2344 | + for line in lines: |
2345 | + cmds.append('grep "{}" {}'.format(line, conf)) |
2346 | + |
2347 | + ret = u.check_commands_on_units(cmds, sentry_units) |
2348 | + if ret: |
2349 | + amulet.raise_status(amulet.FAIL, msg=ret) |
2350 | + |
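The duplicate-option limitation noted above can be reproduced outside of amulet. This standalone sketch uses a hypothetical nova.conf fragment and the Python 3 `configparser` module (the test itself uses the Python 2 `ConfigParser`) to contrast parsing with the grep-style scan the test falls back to:

```python
# Sketch of why test_301 greps for notification_driver instead of
# using the config parser: a parser keeps at most one value per key,
# while nova.conf may legitimately repeat the option.
import configparser

CONF_TEXT = """\
[DEFAULT]
notification_driver = ceilometer.compute.nova_notifier
notification_driver = nova.openstack.common.notifier.rpc_notifier
"""

# strict=False mimics the permissive Python 2 parser: the duplicate
# option is silently collapsed, so only one driver survives parsing.
parser = configparser.ConfigParser(strict=False)
parser.read_string(CONF_TEXT)
parsed_drivers = [parser.get('DEFAULT', 'notification_driver')]

# The grep-style line scan sees every occurrence.
grepped_drivers = [line.split('=', 1)[1].strip()
                   for line in CONF_TEXT.splitlines()
                   if line.startswith('notification_driver')]

assert len(parsed_drivers) == 1
assert len(grepped_drivers) == 2
```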
2351 | + def test_302_nova_ceilometer_config(self): |
2352 | + """Verify data in the ceilometer config file on the |
2353 | + nova-compute (ceilometer-agent) unit.""" |
2354 | + u.log.debug('Checking nova ceilometer config file...') |
2355 | + unit = self.nova_sentry |
2356 | + conf = '/etc/ceilometer/ceilometer.conf' |
2357 | + expected = { |
2358 | + 'DEFAULT': { |
2359 | + 'logdir': '/var/log/ceilometer' |
2360 | + }, |
2361 | + 'database': { |
2362 | + 'backend': 'sqlalchemy', |
2363 | + 'connection': 'sqlite:////var/lib/ceilometer/$sqlite_db' |
2364 | + } |
2365 | + } |
2366 | + |
2367 | + for section, pairs in expected.iteritems(): |
2368 | + ret = u.validate_config_data(unit, conf, section, pairs) |
2369 | + if ret: |
2370 | + message = "ceilometer config error: {}".format(ret) |
2371 | + amulet.raise_status(amulet.FAIL, msg=message) |
2372 | + |
2373 | + def test_400_api_connection(self): |
2374 | + """Simple api calls to check service is up and responding""" |
2375 | + u.log.debug('Checking api functionality...') |
2376 | + assert(self.ceil.samples.list() == []) |
2377 | + assert(self.ceil.meters.list() == []) |
2378 | + |
2379 | + # NOTE(beisner): need to add more functional tests |
2380 | + |
2381 | + def test_900_restart_on_config_change(self): |
2382 | + """Verify that the specified services are restarted when the config |
2383 | + is changed. |
2384 | + """ |
2385 | + sentry = self.ceil_sentry |
2386 | + juju_service = 'ceilometer' |
2387 | + |
2388 | + # Expected default and alternate values |
2389 | + set_default = {'debug': 'False'} |
2390 | + set_alternate = {'debug': 'True'} |
2391 | + |
2392 | + # Config file affected by juju set config change |
2393 | + conf_file = '/etc/ceilometer/ceilometer.conf' |
2394 | + |
2395 | + # Services which are expected to restart upon config change |
2396 | + services = [ |
2397 | + 'ceilometer-agent-central', |
2398 | + 'ceilometer-collector', |
2399 | + 'ceilometer-api', |
2400 | + 'ceilometer-alarm-evaluator', |
2401 | + 'ceilometer-alarm-notifier', |
2402 | + 'ceilometer-agent-notification', |
2403 | + ] |
2404 | + |
2405 | + # Make config change, check for service restarts |
2406 | + u.log.debug('Making config change on {}...'.format(juju_service)) |
2407 | + self.d.configure(juju_service, set_alternate) |
2408 | + |
2409 | + sleep_time = 40 |
2410 | + for s in services: |
2411 | + u.log.debug("Checking that service restarted: {}".format(s)) |
2412 | + if not u.service_restarted(sentry, s, |
2413 | + conf_file, sleep_time=sleep_time, |
2414 | + pgrep_full=True): |
2415 | + self.d.configure(juju_service, set_default) |
2416 | + msg = "service {} didn't restart after config change".format(s) |
2417 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2418 | + sleep_time = 0 |
2419 | + |
2420 | + self.d.configure(juju_service, set_default) |
2421 | |
2422 | === added directory 'tests/charmhelpers' |
2423 | === added file 'tests/charmhelpers/__init__.py' |
2424 | --- tests/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000 |
2425 | +++ tests/charmhelpers/__init__.py 2015-07-01 21:31:48 +0000 |
2426 | @@ -0,0 +1,38 @@ |
2427 | +# Copyright 2014-2015 Canonical Limited. |
2428 | +# |
2429 | +# This file is part of charm-helpers. |
2430 | +# |
2431 | +# charm-helpers is free software: you can redistribute it and/or modify |
2432 | +# it under the terms of the GNU Lesser General Public License version 3 as |
2433 | +# published by the Free Software Foundation. |
2434 | +# |
2435 | +# charm-helpers is distributed in the hope that it will be useful, |
2436 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
2437 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
2438 | +# GNU Lesser General Public License for more details. |
2439 | +# |
2440 | +# You should have received a copy of the GNU Lesser General Public License |
2441 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
2442 | + |
2443 | +# Bootstrap charm-helpers, installing its dependencies if necessary using |
2444 | +# only standard libraries. |
2445 | +import subprocess |
2446 | +import sys |
2447 | + |
2448 | +try: |
2449 | + import six # flake8: noqa |
2450 | +except ImportError: |
2451 | + if sys.version_info.major == 2: |
2452 | + subprocess.check_call(['apt-get', 'install', '-y', 'python-six']) |
2453 | + else: |
2454 | + subprocess.check_call(['apt-get', 'install', '-y', 'python3-six']) |
2455 | + import six # flake8: noqa |
2456 | + |
2457 | +try: |
2458 | + import yaml # flake8: noqa |
2459 | +except ImportError: |
2460 | + if sys.version_info.major == 2: |
2461 | + subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml']) |
2462 | + else: |
2463 | + subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml']) |
2464 | + import yaml # flake8: noqa |
2465 | |
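The try/except blocks above are a common import-or-install bootstrap pattern. A minimal generalization, with a hypothetical `ensure_module()` helper standing in for the repeated inline blocks (the real file shells out to apt-get for python-six and python-yaml):

```python
# Generalized sketch of the import-or-install bootstrap above.
# ensure_module() is a hypothetical helper; the charm-helpers file
# repeats the same try/except inline for six and yaml.
import importlib
import subprocess
import sys


def ensure_module(name, py2_pkg, py3_pkg):
    """Import name, apt-installing the matching distro package on failure."""
    try:
        return importlib.import_module(name)
    except ImportError:
        pkg = py2_pkg if sys.version_info.major == 2 else py3_pkg
        subprocess.check_call(['apt-get', 'install', '-y', pkg])
        return importlib.import_module(name)


# A stdlib module imports cleanly, so no install is attempted.
json = ensure_module('json', 'python-json', 'python3-json')
```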
2466 | === added directory 'tests/charmhelpers/contrib' |
2467 | === added file 'tests/charmhelpers/contrib/__init__.py' |
2468 | --- tests/charmhelpers/contrib/__init__.py 1970-01-01 00:00:00 +0000 |
2469 | +++ tests/charmhelpers/contrib/__init__.py 2015-07-01 21:31:48 +0000 |
2470 | @@ -0,0 +1,15 @@ |
2471 | +# Copyright 2014-2015 Canonical Limited. |
2472 | +# |
2473 | +# This file is part of charm-helpers. |
2474 | +# |
2475 | +# charm-helpers is free software: you can redistribute it and/or modify |
2476 | +# it under the terms of the GNU Lesser General Public License version 3 as |
2477 | +# published by the Free Software Foundation. |
2478 | +# |
2479 | +# charm-helpers is distributed in the hope that it will be useful, |
2480 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
2481 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
2482 | +# GNU Lesser General Public License for more details. |
2483 | +# |
2484 | +# You should have received a copy of the GNU Lesser General Public License |
2485 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
2486 | |
2487 | === added directory 'tests/charmhelpers/contrib/amulet' |
2488 | === added file 'tests/charmhelpers/contrib/amulet/__init__.py' |
2489 | --- tests/charmhelpers/contrib/amulet/__init__.py 1970-01-01 00:00:00 +0000 |
2490 | +++ tests/charmhelpers/contrib/amulet/__init__.py 2015-07-01 21:31:48 +0000 |
2491 | @@ -0,0 +1,15 @@ |
2492 | +# Copyright 2014-2015 Canonical Limited. |
2493 | +# |
2494 | +# This file is part of charm-helpers. |
2495 | +# |
2496 | +# charm-helpers is free software: you can redistribute it and/or modify |
2497 | +# it under the terms of the GNU Lesser General Public License version 3 as |
2498 | +# published by the Free Software Foundation. |
2499 | +# |
2500 | +# charm-helpers is distributed in the hope that it will be useful, |
2501 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
2502 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
2503 | +# GNU Lesser General Public License for more details. |
2504 | +# |
2505 | +# You should have received a copy of the GNU Lesser General Public License |
2506 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
2507 | |
2508 | === added file 'tests/charmhelpers/contrib/amulet/deployment.py' |
2509 | --- tests/charmhelpers/contrib/amulet/deployment.py 1970-01-01 00:00:00 +0000 |
2510 | +++ tests/charmhelpers/contrib/amulet/deployment.py 2015-07-01 21:31:48 +0000 |
2511 | @@ -0,0 +1,93 @@ |
2512 | +# Copyright 2014-2015 Canonical Limited. |
2513 | +# |
2514 | +# This file is part of charm-helpers. |
2515 | +# |
2516 | +# charm-helpers is free software: you can redistribute it and/or modify |
2517 | +# it under the terms of the GNU Lesser General Public License version 3 as |
2518 | +# published by the Free Software Foundation. |
2519 | +# |
2520 | +# charm-helpers is distributed in the hope that it will be useful, |
2521 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
2522 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
2523 | +# GNU Lesser General Public License for more details. |
2524 | +# |
2525 | +# You should have received a copy of the GNU Lesser General Public License |
2526 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
2527 | + |
2528 | +import amulet |
2529 | +import os |
2530 | +import six |
2531 | + |
2532 | + |
2533 | +class AmuletDeployment(object): |
2534 | + """Amulet deployment. |
2535 | + |
2536 | + This class provides generic Amulet deployment and test runner |
2537 | + methods. |
2538 | + """ |
2539 | + |
2540 | + def __init__(self, series=None): |
2541 | + """Initialize the deployment environment.""" |
2542 | + self.series = None |
2543 | + |
2544 | + if series: |
2545 | + self.series = series |
2546 | + self.d = amulet.Deployment(series=self.series) |
2547 | + else: |
2548 | + self.d = amulet.Deployment() |
2549 | + |
2550 | + def _add_services(self, this_service, other_services): |
2551 | + """Add services. |
2552 | + |
2553 | + Add services to the deployment where this_service is the local charm |
2554 | + that we're testing and other_services are the other services that |
2555 | + are being used in the local amulet tests. |
2556 | + """ |
2557 | + if this_service['name'] != os.path.basename(os.getcwd()): |
2558 | + s = this_service['name'] |
2559 | + msg = "The charm's root directory name needs to be {}".format(s) |
2560 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2561 | + |
2562 | + if 'units' not in this_service: |
2563 | + this_service['units'] = 1 |
2564 | + |
2565 | + self.d.add(this_service['name'], units=this_service['units']) |
2566 | + |
2567 | + for svc in other_services: |
2568 | + if 'location' in svc: |
2569 | + branch_location = svc['location'] |
2570 | + elif self.series: |
2571 | +                branch_location = 'cs:{}/{}'.format(self.series, svc['name'])
2572 | + else: |
2573 | + branch_location = None |
2574 | + |
2575 | + if 'units' not in svc: |
2576 | + svc['units'] = 1 |
2577 | + |
2578 | + self.d.add(svc['name'], charm=branch_location, units=svc['units']) |
2579 | + |
2580 | + def _add_relations(self, relations): |
2581 | + """Add all of the relations for the services.""" |
2582 | + for k, v in six.iteritems(relations): |
2583 | + self.d.relate(k, v) |
2584 | + |
2585 | + def _configure_services(self, configs): |
2586 | + """Configure all of the services.""" |
2587 | + for service, config in six.iteritems(configs): |
2588 | + self.d.configure(service, config) |
2589 | + |
2590 | + def _deploy(self): |
2591 | + """Deploy environment and wait for all hooks to finish executing.""" |
2592 | + try: |
2593 | + self.d.setup(timeout=900) |
2594 | + self.d.sentry.wait(timeout=900) |
2595 | + except amulet.helpers.TimeoutError: |
2596 | + amulet.raise_status(amulet.FAIL, msg="Deployment timed out") |
2597 | + except Exception: |
2598 | + raise |
2599 | + |
2600 | + def run_tests(self): |
2601 | + """Run all of the methods that are prefixed with 'test_'.""" |
2602 | + for test in dir(self): |
2603 | + if test.startswith('test_'): |
2604 | + getattr(self, test)() |
2605 | |
2606 | === added file 'tests/charmhelpers/contrib/amulet/utils.py' |
2607 | --- tests/charmhelpers/contrib/amulet/utils.py 1970-01-01 00:00:00 +0000 |
2608 | +++ tests/charmhelpers/contrib/amulet/utils.py 2015-07-01 21:31:48 +0000 |
2609 | @@ -0,0 +1,533 @@ |
2610 | +# Copyright 2014-2015 Canonical Limited. |
2611 | +# |
2612 | +# This file is part of charm-helpers. |
2613 | +# |
2614 | +# charm-helpers is free software: you can redistribute it and/or modify |
2615 | +# it under the terms of the GNU Lesser General Public License version 3 as |
2616 | +# published by the Free Software Foundation. |
2617 | +# |
2618 | +# charm-helpers is distributed in the hope that it will be useful, |
2619 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
2620 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
2621 | +# GNU Lesser General Public License for more details. |
2622 | +# |
2623 | +# You should have received a copy of the GNU Lesser General Public License |
2624 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
2625 | + |
2626 | +import amulet |
2627 | +import ConfigParser |
2628 | +import distro_info |
2629 | +import io |
2630 | +import logging |
2631 | +import os |
2632 | +import re |
2633 | +import six |
2634 | +import sys |
2635 | +import time |
2636 | +import urlparse |
2637 | + |
2638 | + |
2639 | +class AmuletUtils(object): |
2640 | + """Amulet utilities. |
2641 | + |
2642 | + This class provides common utility functions that are used by Amulet |
2643 | + tests. |
2644 | + """ |
2645 | + |
2646 | + def __init__(self, log_level=logging.ERROR): |
2647 | + self.log = self.get_logger(level=log_level) |
2648 | + self.ubuntu_releases = self.get_ubuntu_releases() |
2649 | + |
2650 | + def get_logger(self, name="amulet-logger", level=logging.DEBUG): |
2651 | + """Get a logger object that will log to stdout.""" |
2652 | + log = logging |
2653 | + logger = log.getLogger(name) |
2654 | + fmt = log.Formatter("%(asctime)s %(funcName)s " |
2655 | + "%(levelname)s: %(message)s") |
2656 | + |
2657 | + handler = log.StreamHandler(stream=sys.stdout) |
2658 | + handler.setLevel(level) |
2659 | + handler.setFormatter(fmt) |
2660 | + |
2661 | + logger.addHandler(handler) |
2662 | + logger.setLevel(level) |
2663 | + |
2664 | + return logger |
2665 | + |
2666 | + def valid_ip(self, ip): |
2667 | + if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip): |
2668 | + return True |
2669 | + else: |
2670 | + return False |
2671 | + |
2672 | + def valid_url(self, url): |
2673 | + p = re.compile( |
2674 | + r'^(?:http|ftp)s?://' |
2675 | + r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # noqa |
2676 | + r'localhost|' |
2677 | + r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' |
2678 | + r'(?::\d+)?' |
2679 | + r'(?:/?|[/?]\S+)$', |
2680 | + re.IGNORECASE) |
2681 | + if p.match(url): |
2682 | + return True |
2683 | + else: |
2684 | + return False |
2685 | + |
2686 | + def get_ubuntu_release_from_sentry(self, sentry_unit): |
2687 | + """Get Ubuntu release codename from sentry unit. |
2688 | + |
2689 | + :param sentry_unit: amulet sentry/service unit pointer |
2690 | + :returns: list of strings - release codename, failure message |
2691 | + """ |
2692 | + msg = None |
2693 | + cmd = 'lsb_release -cs' |
2694 | + release, code = sentry_unit.run(cmd) |
2695 | + if code == 0: |
2696 | + self.log.debug('{} lsb_release: {}'.format( |
2697 | + sentry_unit.info['unit_name'], release)) |
2698 | + else: |
2699 | + msg = ('{} `{}` returned {} ' |
2700 | + '{}'.format(sentry_unit.info['unit_name'], |
2701 | + cmd, release, code)) |
2702 | + if release not in self.ubuntu_releases: |
2703 | + msg = ("Release ({}) not found in Ubuntu releases " |
2704 | + "({})".format(release, self.ubuntu_releases)) |
2705 | + return release, msg |
2706 | + |
2707 | + def validate_services(self, commands): |
2708 | + """Validate that lists of commands succeed on service units. Can be |
2709 | + used to verify system services are running on the corresponding |
2710 | + service units. |
2711 | + |
2712 | + :param commands: dict with sentry keys and arbitrary command list vals |
2713 | + :returns: None if successful, Failure string message otherwise |
2714 | + """ |
2715 | + self.log.debug('Checking status of system services...') |
2716 | + |
2717 | + # /!\ DEPRECATION WARNING (beisner): |
2718 | + # New and existing tests should be rewritten to use |
2719 | + # validate_services_by_name() as it is aware of init systems. |
2720 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
2721 | + 'validate_services_by_name instead of validate_services ' |
2722 | + 'due to init system differences.') |
2723 | + |
2724 | + for k, v in six.iteritems(commands): |
2725 | + for cmd in v: |
2726 | + output, code = k.run(cmd) |
2727 | + self.log.debug('{} `{}` returned ' |
2728 | + '{}'.format(k.info['unit_name'], |
2729 | + cmd, code)) |
2730 | + if code != 0: |
2731 | + return "command `{}` returned {}".format(cmd, str(code)) |
2732 | + return None |
2733 | + |
2734 | + def validate_services_by_name(self, sentry_services): |
2735 | + """Validate system service status by service name, automatically |
2736 | + detecting init system based on Ubuntu release codename. |
2737 | + |
2738 | + :param sentry_services: dict with sentry keys and svc list values |
2739 | + :returns: None if successful, Failure string message otherwise |
2740 | + """ |
2741 | + self.log.debug('Checking status of system services...') |
2742 | + |
2743 | + # Point at which systemd became a thing |
2744 | + systemd_switch = self.ubuntu_releases.index('vivid') |
2745 | + |
2746 | + for sentry_unit, services_list in six.iteritems(sentry_services): |
2747 | + # Get lsb_release codename from unit |
2748 | + release, ret = self.get_ubuntu_release_from_sentry(sentry_unit) |
2749 | + if ret: |
2750 | + return ret |
2751 | + |
2752 | + for service_name in services_list: |
2753 | + if (self.ubuntu_releases.index(release) >= systemd_switch or |
2754 | + service_name == "rabbitmq-server"): |
2755 | + # init is systemd |
2756 | + cmd = 'sudo service {} status'.format(service_name) |
2757 | + elif self.ubuntu_releases.index(release) < systemd_switch: |
2758 | + # init is upstart |
2759 | + cmd = 'sudo status {}'.format(service_name) |
2760 | + |
2761 | + output, code = sentry_unit.run(cmd) |
2762 | + self.log.debug('{} `{}` returned ' |
2763 | + '{}'.format(sentry_unit.info['unit_name'], |
2764 | + cmd, code)) |
2765 | + if code != 0: |
2766 | + return "command `{}` returned {}".format(cmd, str(code)) |
2767 | + return None |
2768 | + |
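The release-index switch above (upstart before vivid, systemd from vivid on) can be sketched standalone; the release list here is a hypothetical subset of what `distro_info` returns:

```python
# Standalone sketch of the init-system detection used in
# validate_services_by_name: releases at or after 'vivid' get the
# systemd-style status command, earlier ones the upstart form.
UBUNTU_RELEASES = ['precise', 'trusty', 'utopic', 'vivid', 'wily']
SYSTEMD_SWITCH = UBUNTU_RELEASES.index('vivid')


def status_cmd(release, service):
    """Return the service-status command appropriate for a release."""
    if UBUNTU_RELEASES.index(release) >= SYSTEMD_SWITCH:
        return 'sudo service {} status'.format(service)  # systemd
    return 'sudo status {}'.format(service)              # upstart


assert status_cmd('trusty', 'ceilometer-api') == 'sudo status ceilometer-api'
assert status_cmd('vivid', 'ceilometer-api') == \
    'sudo service ceilometer-api status'
```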
2769 | + def _get_config(self, unit, filename): |
2770 | + """Get a ConfigParser object for parsing a unit's config file.""" |
2771 | + file_contents = unit.file_contents(filename) |
2772 | + |
2773 | + # NOTE(beisner): by default, ConfigParser does not handle options |
2774 | + # with no value, such as the flags used in the mysql my.cnf file. |
2775 | + # https://bugs.python.org/issue7005 |
2776 | + config = ConfigParser.ConfigParser(allow_no_value=True) |
2777 | + config.readfp(io.StringIO(file_contents)) |
2778 | + return config |
2779 | + |
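The allow_no_value note above is easy to demonstrate: options like mysql's `skip-external-locking` have no `= value` part and trip the default parser, while `allow_no_value=True` maps them to None. A minimal illustration with a hypothetical my.cnf fragment (Python 3 `configparser` shown):

```python
# Demonstrates the allow_no_value behaviour _get_config relies on
# when parsing flag-style options such as those in mysql's my.cnf.
import configparser

MY_CNF = """\
[mysqld]
skip-external-locking
user = mysql
"""

parser = configparser.ConfigParser(allow_no_value=True)
parser.read_string(MY_CNF)

# Valueless flags parse to None; normal options keep their value.
assert parser.get('mysqld', 'skip-external-locking') is None
assert parser.get('mysqld', 'user') == 'mysql'
```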
2780 | + def validate_config_data(self, sentry_unit, config_file, section, |
2781 | + expected): |
2782 | + """Validate config file data. |
2783 | + |
2784 | + Verify that the specified section of the config file contains |
2785 | + the expected option key:value pairs. |
2786 | + |
2787 | + Compare expected dictionary data vs actual dictionary data. |
2788 | + The values in the 'expected' dictionary can be strings, bools, ints, |
2789 | + longs, or can be a function that evaluates a variable and returns a |
2790 | + bool. |
2791 | + """ |
2792 | + self.log.debug('Validating config file data ({} in {} on {})' |
2793 | + '...'.format(section, config_file, |
2794 | + sentry_unit.info['unit_name'])) |
2795 | + config = self._get_config(sentry_unit, config_file) |
2796 | + |
2797 | + if section != 'DEFAULT' and not config.has_section(section): |
2798 | + return "section [{}] does not exist".format(section) |
2799 | + |
2800 | + for k in expected.keys(): |
2801 | + if not config.has_option(section, k): |
2802 | + return "section [{}] is missing option {}".format(section, k) |
2803 | + |
2804 | + actual = config.get(section, k) |
2805 | + v = expected[k] |
2806 | + if (isinstance(v, six.string_types) or |
2807 | + isinstance(v, bool) or |
2808 | + isinstance(v, six.integer_types)): |
2809 | + # handle explicit values |
2810 | + if actual != v: |
2811 | + return "section [{}] {}:{} != expected {}:{}".format( |
2812 | + section, k, actual, k, expected[k]) |
2813 | + # handle function pointers, such as not_null or valid_ip |
2814 | + elif not v(actual): |
2815 | + return "section [{}] {}:{} != expected {}:{}".format( |
2816 | + section, k, actual, k, expected[k]) |
2817 | + return None |
2818 | + |
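The mixed expected-value convention the docstring describes (literal strings/bools/ints compared directly, callables such as `valid_ip` or `not_null` treated as predicates) can be sketched standalone; `check()` here is a simplified, hypothetical reduction of the comparison loop:

```python
# Sketch of validate_config_data's expected-value convention:
# plain values are compared literally, callables act as predicates.
import re


def valid_ip(value):
    """Loose dotted-quad check, mirroring AmuletUtils.valid_ip."""
    return bool(re.match(r"^\d{1,3}(\.\d{1,3}){3}$", value))


def check(expected, actual):
    """Return None on success, or a 'key:actual' mismatch string."""
    for key, want in expected.items():
        got = actual[key]
        if callable(want):
            if not want(got):
                return '{}:{}'.format(key, got)
        elif got != want:
            return '{}:{}'.format(key, got)
    return None


actual = {'my_ip': '10.0.0.5', 'debug': 'False'}
assert check({'my_ip': valid_ip, 'debug': 'False'}, actual) is None
assert check({'debug': 'True'}, actual) == 'debug:False'
```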
2819 | + def _validate_dict_data(self, expected, actual): |
2820 | + """Validate dictionary data. |
2821 | + |
2822 | + Compare expected dictionary data vs actual dictionary data. |
2823 | + The values in the 'expected' dictionary can be strings, bools, ints, |
2824 | + longs, or can be a function that evaluates a variable and returns a |
2825 | + bool. |
2826 | + """ |
2827 | + self.log.debug('actual: {}'.format(repr(actual))) |
2828 | + self.log.debug('expected: {}'.format(repr(expected))) |
2829 | + |
2830 | + for k, v in six.iteritems(expected): |
2831 | + if k in actual: |
2832 | + if (isinstance(v, six.string_types) or |
2833 | + isinstance(v, bool) or |
2834 | + isinstance(v, six.integer_types)): |
2835 | + # handle explicit values |
2836 | + if v != actual[k]: |
2837 | + return "{}:{}".format(k, actual[k]) |
2838 | + # handle function pointers, such as not_null or valid_ip |
2839 | + elif not v(actual[k]): |
2840 | + return "{}:{}".format(k, actual[k]) |
2841 | + else: |
2842 | + return "key '{}' does not exist".format(k) |
2843 | + return None |
2844 | + |
2845 | + def validate_relation_data(self, sentry_unit, relation, expected): |
2846 | + """Validate actual relation data based on expected relation data.""" |
2847 | + actual = sentry_unit.relation(relation[0], relation[1]) |
2848 | + return self._validate_dict_data(expected, actual) |
2849 | + |
2850 | + def _validate_list_data(self, expected, actual): |
2851 | + """Compare expected list vs actual list data.""" |
2852 | + for e in expected: |
2853 | + if e not in actual: |
2854 | + return "expected item {} not found in actual list".format(e) |
2855 | + return None |
2856 | + |
2857 | + def not_null(self, string): |
2858 | + if string is not None: |
2859 | + return True |
2860 | + else: |
2861 | + return False |
2862 | + |
2863 | + def _get_file_mtime(self, sentry_unit, filename): |
2864 | + """Get last modification time of file.""" |
2865 | + return sentry_unit.file_stat(filename)['mtime'] |
2866 | + |
2867 | + def _get_dir_mtime(self, sentry_unit, directory): |
2868 | + """Get last modification time of directory.""" |
2869 | + return sentry_unit.directory_stat(directory)['mtime'] |
2870 | + |
2871 | + def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False): |
2872 | + """Get process' start time. |
2873 | + |
2874 | + Determine start time of the process based on the last modification |
2875 | + time of the /proc/pid directory. If pgrep_full is True, the process |
2876 | + name is matched against the full command line. |
2877 | + """ |
2878 | + if pgrep_full: |
2879 | + cmd = 'pgrep -o -f {}'.format(service) |
2880 | + else: |
2881 | + cmd = 'pgrep -o {}'.format(service) |
2882 | + cmd = cmd + ' | grep -v pgrep || exit 0' |
2883 | + cmd_out = sentry_unit.run(cmd) |
2884 | + self.log.debug('CMDout: ' + str(cmd_out)) |
2885 | + if cmd_out[0]: |
2886 | + self.log.debug('Pid for %s %s' % (service, str(cmd_out[0]))) |
2887 | + proc_dir = '/proc/{}'.format(cmd_out[0].strip()) |
2888 | + return self._get_dir_mtime(sentry_unit, proc_dir) |
2889 | + |
2890 | + def service_restarted(self, sentry_unit, service, filename, |
2891 | + pgrep_full=False, sleep_time=20): |
2892 | + """Check if service was restarted. |
2893 | + |
2894 | + Compare a service's start time vs a file's last modification time |
2895 | + (such as a config file for that service) to determine if the service |
2896 | + has been restarted. |
2897 | + """ |
2898 | + time.sleep(sleep_time) |
2899 | + if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >= |
2900 | + self._get_file_mtime(sentry_unit, filename)): |
2901 | + return True |
2902 | + else: |
2903 | + return False |
2904 | + |
2905 | + def service_restarted_since(self, sentry_unit, mtime, service, |
2906 | + pgrep_full=False, sleep_time=20, |
2907 | + retry_count=2): |
2908 | +        """Check if service was started after a given time.
2909 | + |
2910 | + Args: |
2911 | + sentry_unit (sentry): The sentry unit to check for the service on |
2912 | + mtime (float): The epoch time to check against |
2913 | + service (string): service name to look for in process table |
2914 | + pgrep_full (boolean): Use full command line search mode with pgrep |
2915 | + sleep_time (int): Seconds to sleep before looking for process |
2916 | + retry_count (int): If service is not found, how many times to retry |
2917 | + |
2918 | + Returns: |
2919 | +          bool: True if service was found and its start time is newer than
2920 | + False if service is older than mtime or if service was |
2921 | + not found. |
2922 | + """ |
2923 | + self.log.debug('Checking %s restarted since %s' % (service, mtime)) |
2924 | + time.sleep(sleep_time) |
2925 | + proc_start_time = self._get_proc_start_time(sentry_unit, service, |
2926 | + pgrep_full) |
2927 | + while retry_count > 0 and not proc_start_time: |
2928 | + self.log.debug('No pid file found for service %s, will retry %i ' |
2929 | + 'more times' % (service, retry_count)) |
2930 | + time.sleep(30) |
2931 | + proc_start_time = self._get_proc_start_time(sentry_unit, service, |
2932 | + pgrep_full) |
2933 | + retry_count = retry_count - 1 |
2934 | + |
2935 | + if not proc_start_time: |
2936 | + self.log.warn('No proc start time found, assuming service did ' |
2937 | + 'not start') |
2938 | + return False |
2939 | + if proc_start_time >= mtime: |
2940 | +            self.log.debug('proc start time is newer than provided mtime '
2941 | + '(%s >= %s)' % (proc_start_time, mtime)) |
2942 | + return True |
2943 | + else: |
2944 | + self.log.warn('proc start time (%s) is older than provided mtime ' |
2945 | + '(%s), service did not restart' % (proc_start_time, |
2946 | + mtime)) |
2947 | + return False |
2948 | + |
2949 | + def config_updated_since(self, sentry_unit, filename, mtime, |
2950 | + sleep_time=20): |
2951 | + """Check if file was modified after a given time. |
2952 | + |
2953 | + Args: |
2954 | + sentry_unit (sentry): The sentry unit to check the file mtime on |
2955 | + filename (string): The file to check mtime of |
2956 | + mtime (float): The epoch time to check against |
2957 | + sleep_time (int): Seconds to sleep before looking for process |
2958 | + |
2959 | + Returns: |
2960 | + bool: True if file was modified more recently than mtime, False if |
2961 | +                file was modified before mtime.
2962 | + """ |
2963 | + self.log.debug('Checking %s updated since %s' % (filename, mtime)) |
2964 | + time.sleep(sleep_time) |
2965 | + file_mtime = self._get_file_mtime(sentry_unit, filename) |
2966 | + if file_mtime >= mtime: |
2967 | + self.log.debug('File mtime is newer than provided mtime ' |
2968 | + '(%s >= %s)' % (file_mtime, mtime)) |
2969 | + return True |
2970 | + else: |
2971 | + self.log.warn('File mtime %s is older than provided mtime %s' |
2972 | + % (file_mtime, mtime)) |
2973 | + return False |
2974 | + |
2975 | + def validate_service_config_changed(self, sentry_unit, mtime, service, |
2976 | + filename, pgrep_full=False, |
2977 | + sleep_time=20, retry_count=2): |
2978 | + """Check service and file were updated after mtime |
2979 | + |
2980 | + Args: |
2981 | + sentry_unit (sentry): The sentry unit to check for the service on |
2982 | + mtime (float): The epoch time to check against |
2983 | + service (string): service name to look for in process table |
2984 | + filename (string): The file to check mtime of |
2985 | + pgrep_full (boolean): Use full command line search mode with pgrep |
2986 | + sleep_time (int): Seconds to sleep before looking for process |
2987 | + retry_count (int): If service is not found, how many times to retry |
2988 | + |
2989 | + Typical Usage: |
2990 | + u = OpenStackAmuletUtils(ERROR) |
2991 | + ... |
2992 | + mtime = u.get_sentry_time(self.cinder_sentry) |
2993 | + self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'}) |
2994 | + if not u.validate_service_config_changed(self.cinder_sentry, |
2995 | + mtime, |
2996 | + 'cinder-api', |
2997 | + '/etc/cinder/cinder.conf') |
2998 | + amulet.raise_status(amulet.FAIL, msg='update failed') |
2999 | + Returns: |
3000 | +          bool: True if both service and file were updated/restarted after
3001 | + mtime, False if service is older than mtime or if service was |
3002 | + not found or if filename was modified before mtime. |
3003 | + """ |
3004 | + self.log.debug('Checking %s restarted since %s' % (service, mtime)) |
3005 | + time.sleep(sleep_time) |
3006 | + service_restart = self.service_restarted_since(sentry_unit, mtime, |
3007 | + service, |
3008 | + pgrep_full=pgrep_full, |
3009 | + sleep_time=0, |
3010 | + retry_count=retry_count) |
3011 | + config_update = self.config_updated_since(sentry_unit, filename, mtime, |
3012 | + sleep_time=0) |
3013 | + return service_restart and config_update |
3014 | + |
3015 | + def get_sentry_time(self, sentry_unit): |
3016 | + """Return current epoch time on a sentry""" |
3017 | + cmd = "date +'%s'" |
3018 | + return float(sentry_unit.run(cmd)[0]) |
3019 | + |
3020 | + def relation_error(self, name, data): |
3021 | + return 'unexpected relation data in {} - {}'.format(name, data) |
3022 | + |
3023 | + def endpoint_error(self, name, data): |
3024 | + return 'unexpected endpoint data in {} - {}'.format(name, data) |
3025 | + |
3026 | + def get_ubuntu_releases(self): |
3027 | + """Return a list of all Ubuntu releases in order of release.""" |
3028 | + _d = distro_info.UbuntuDistroInfo() |
3029 | + _release_list = _d.all |
3030 | + self.log.debug('Ubuntu release list: {}'.format(_release_list)) |
3031 | + return _release_list |
3032 | + |
3033 | + def file_to_url(self, file_rel_path): |
3034 | + """Convert a relative file path to a file URL.""" |
3035 | + _abs_path = os.path.abspath(file_rel_path) |
3036 | + return urlparse.urlparse(_abs_path, scheme='file').geturl() |
3037 | + |
3038 | + def check_commands_on_units(self, commands, sentry_units): |
3039 | + """Check that all commands in a list exit zero on all |
3040 | + sentry units in a list. |
3041 | + |
3042 | + :param commands: list of bash commands |
3043 | + :param sentry_units: list of sentry unit pointers |
3044 | + :returns: None if successful; Failure message otherwise |
3045 | + """ |
3046 | + self.log.debug('Checking exit codes for {} commands on {} ' |
3047 | + 'sentry units...'.format(len(commands), |
3048 | + len(sentry_units))) |
3049 | + for sentry_unit in sentry_units: |
3050 | + for cmd in commands: |
3051 | + output, code = sentry_unit.run(cmd) |
3052 | + if code == 0: |
3053 | + self.log.debug('{} `{}` returned {} ' |
3054 | + '(OK)'.format(sentry_unit.info['unit_name'], |
3055 | + cmd, code)) |
3056 | + else: |
3057 | + return ('{} `{}` returned {} ' |
3058 | + '{}'.format(sentry_unit.info['unit_name'], |
3059 | + cmd, code, output)) |
3060 | + return None |
3061 | + |
3062 | + def get_process_id_list(self, sentry_unit, process_name): |
3063 | + """Get a list of process ID(s) from a single sentry juju unit |
3064 | + for a single process name. |
3065 | + |
3066 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
3067 | + :param process_name: Process name |
3068 | + :returns: List of process IDs |
3069 | + """ |
3070 | + cmd = 'pidof {}'.format(process_name) |
3071 | + output, code = sentry_unit.run(cmd) |
3072 | + if code != 0: |
3073 | + msg = ('{} `{}` returned {} ' |
3074 | + '{}'.format(sentry_unit.info['unit_name'], |
3075 | + cmd, code, output)) |
3076 | + amulet.raise_status(amulet.FAIL, msg=msg) |
3077 | + return str(output).split() |
3078 | + |
3079 | + def get_unit_process_ids(self, unit_processes): |
3080 | + """Construct a dict containing unit sentries, process names, and |
3081 | + process IDs.""" |
3082 | + pid_dict = {} |
3083 | + for sentry_unit, process_list in unit_processes.iteritems(): |
3084 | + pid_dict[sentry_unit] = {} |
3085 | + for process in process_list: |
3086 | + pids = self.get_process_id_list(sentry_unit, process) |
3087 | + pid_dict[sentry_unit].update({process: pids}) |
3088 | + return pid_dict |
3089 | + |
3090 | + def validate_unit_process_ids(self, expected, actual): |
3091 | + """Validate process id quantities for services on units.""" |
3092 | + self.log.debug('Checking units for running processes...') |
3093 | + self.log.debug('Expected PIDs: {}'.format(expected)) |
3094 | + self.log.debug('Actual PIDs: {}'.format(actual)) |
3095 | + |
3096 | + if len(actual) != len(expected): |
3097 | + return ('Unit count mismatch. expected, actual: {}, ' |
3098 | + '{} '.format(len(expected), len(actual))) |
3099 | + |
3100 | + for (e_sentry, e_proc_names) in expected.iteritems(): |
3101 | + e_sentry_name = e_sentry.info['unit_name'] |
3102 | + if e_sentry in actual.keys(): |
3103 | + a_proc_names = actual[e_sentry] |
3104 | + else: |
3105 | + return ('Expected sentry ({}) not found in actual dict data.' |
3106 | + '{}'.format(e_sentry_name, e_sentry)) |
3107 | + |
3108 | + if len(e_proc_names.keys()) != len(a_proc_names.keys()): |
3109 | + return ('Process name count mismatch. expected, actual: {}, ' |
3110 | + '{}'.format(len(e_proc_names), len(a_proc_names))) |
3111 | + |
3112 | + for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \ |
3113 | + zip(e_proc_names.items(), a_proc_names.items()): |
3114 | + if e_proc_name != a_proc_name: |
3115 | + return ('Process name mismatch. expected, actual: {}, ' |
3116 | + '{}'.format(e_proc_name, a_proc_name)) |
3117 | + |
3118 | + a_pids_length = len(a_pids) |
3119 | + if e_pids_length != a_pids_length: |
3120 | + return ('PID count mismatch. {} ({}) expected, actual: ' |
3121 | + '{}, {} ({})'.format(e_sentry_name, e_proc_name, |
3122 | + e_pids_length, a_pids_length, |
3123 | + a_pids)) |
3124 | + else: |
3125 | + self.log.debug('PID check OK: {} {} {}: ' |
3126 | + '{}'.format(e_sentry_name, e_proc_name, |
3127 | + e_pids_length, a_pids)) |
3128 | + return None |
3129 | + |
3130 | + def validate_list_of_identical_dicts(self, list_of_dicts): |
3131 | + """Check that all dicts within a list are identical.""" |
3132 | + hashes = [] |
3133 | + for _dict in list_of_dicts: |
3134 | + hashes.append(hash(frozenset(_dict.items()))) |
3135 | + |
3136 | + self.log.debug('Hashes: {}'.format(hashes)) |
3137 | + if len(set(hashes)) == 1: |
3138 | + self.log.debug('Dicts within list are identical') |
3139 | + else: |
3140 | + return 'Dicts within list are not identical' |
3141 | + |
3142 | + return None |
3143 | |
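The process-validation helpers above compare an expected dict of `{sentry: {process_name: pid_count}}` against an actual dict of `{sentry: {process_name: [pids]}}` built by `get_unit_process_ids`. A minimal standalone sketch of that comparison (plain strings stand in for sentry unit objects; this is an illustration, not the charm-helpers code itself):

```python
# Hypothetical shapes consumed by validate_unit_process_ids():
# expected maps unit -> {process: count}; actual maps unit -> {process: [pids]}.
expected = {'ceilometer/0': {'ceilometer-agent-compute': 1, 'nova-compute': 2}}
actual = {'ceilometer/0': {'ceilometer-agent-compute': ['1201'],
                           'nova-compute': ['1310', '1311']}}

def validate_pids(expected, actual):
    """Return None on success, or a failure message string -- the same
    return convention the helpers above use."""
    if len(actual) != len(expected):
        return 'Unit count mismatch'
    for unit, procs in expected.items():
        for name, want in procs.items():
            got = len(actual[unit][name])
            if got != want:
                return 'PID count mismatch: {} {}: {} != {}'.format(
                    unit, name, want, got)
    return None

print(validate_pids(expected, actual))  # → None
```

The None-on-success convention lets callers chain checks and `amulet.raise_status` only when a message comes back.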
3144 | === added directory 'tests/charmhelpers/contrib/openstack' |
3145 | === added file 'tests/charmhelpers/contrib/openstack/__init__.py' |
3146 | --- tests/charmhelpers/contrib/openstack/__init__.py 1970-01-01 00:00:00 +0000 |
3147 | +++ tests/charmhelpers/contrib/openstack/__init__.py 2015-07-01 21:31:48 +0000 |
3148 | @@ -0,0 +1,15 @@ |
3149 | +# Copyright 2014-2015 Canonical Limited. |
3150 | +# |
3151 | +# This file is part of charm-helpers. |
3152 | +# |
3153 | +# charm-helpers is free software: you can redistribute it and/or modify |
3154 | +# it under the terms of the GNU Lesser General Public License version 3 as |
3155 | +# published by the Free Software Foundation. |
3156 | +# |
3157 | +# charm-helpers is distributed in the hope that it will be useful, |
3158 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
3159 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
3160 | +# GNU Lesser General Public License for more details. |
3161 | +# |
3162 | +# You should have received a copy of the GNU Lesser General Public License |
3163 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
3164 | |
3165 | === added directory 'tests/charmhelpers/contrib/openstack/amulet' |
3166 | === added file 'tests/charmhelpers/contrib/openstack/amulet/__init__.py' |
3167 | --- tests/charmhelpers/contrib/openstack/amulet/__init__.py 1970-01-01 00:00:00 +0000 |
3168 | +++ tests/charmhelpers/contrib/openstack/amulet/__init__.py 2015-07-01 21:31:48 +0000 |
3169 | @@ -0,0 +1,15 @@ |
3170 | +# Copyright 2014-2015 Canonical Limited. |
3171 | +# |
3172 | +# This file is part of charm-helpers. |
3173 | +# |
3174 | +# charm-helpers is free software: you can redistribute it and/or modify |
3175 | +# it under the terms of the GNU Lesser General Public License version 3 as |
3176 | +# published by the Free Software Foundation. |
3177 | +# |
3178 | +# charm-helpers is distributed in the hope that it will be useful, |
3179 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
3180 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
3181 | +# GNU Lesser General Public License for more details. |
3182 | +# |
3183 | +# You should have received a copy of the GNU Lesser General Public License |
3184 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
3185 | |
3186 | === added file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py' |
3187 | --- tests/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000 |
3188 | +++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-01 21:31:48 +0000 |
3189 | @@ -0,0 +1,183 @@ |
3190 | +# Copyright 2014-2015 Canonical Limited. |
3191 | +# |
3192 | +# This file is part of charm-helpers. |
3193 | +# |
3194 | +# charm-helpers is free software: you can redistribute it and/or modify |
3195 | +# it under the terms of the GNU Lesser General Public License version 3 as |
3196 | +# published by the Free Software Foundation. |
3197 | +# |
3198 | +# charm-helpers is distributed in the hope that it will be useful, |
3199 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
3200 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
3201 | +# GNU Lesser General Public License for more details. |
3202 | +# |
3203 | +# You should have received a copy of the GNU Lesser General Public License |
3204 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
3205 | + |
3206 | +import six |
3207 | +from collections import OrderedDict |
3208 | +from charmhelpers.contrib.amulet.deployment import ( |
3209 | + AmuletDeployment |
3210 | +) |
3211 | + |
3212 | + |
3213 | +class OpenStackAmuletDeployment(AmuletDeployment): |
3214 | + """OpenStack amulet deployment. |
3215 | + |
3216 | + This class inherits from AmuletDeployment and has additional support |
3217 | + that is specifically for use by OpenStack charms. |
3218 | + """ |
3219 | + |
3220 | + def __init__(self, series=None, openstack=None, source=None, stable=True): |
3221 | + """Initialize the deployment environment.""" |
3222 | + super(OpenStackAmuletDeployment, self).__init__(series) |
3223 | + self.openstack = openstack |
3224 | + self.source = source |
3225 | + self.stable = stable |
3226 | + # Note(coreycb): this needs to be changed when new next branches come |
3227 | + # out. |
3228 | + self.current_next = "trusty" |
3229 | + |
3230 | + def _determine_branch_locations(self, other_services): |
3231 | + """Determine the branch locations for the other services. |
3232 | + |
3233 | + Determine if the local branch being tested is derived from its |
3234 | + stable or next (dev) branch, and based on this, use the corresponding |
3235 | + stable or next branches for the other_services.""" |
3236 | + base_charms = ['mysql', 'mongodb'] |
3237 | + |
3238 | + if self.series in ['precise', 'trusty']: |
3239 | + base_series = self.series |
3240 | + else: |
3241 | + base_series = self.current_next |
3242 | + |
3243 | + if self.stable: |
3244 | + for svc in other_services: |
3245 | + temp = 'lp:charms/{}/{}' |
3246 | + svc['location'] = temp.format(base_series, |
3247 | + svc['name']) |
3248 | + else: |
3249 | + for svc in other_services: |
3250 | + if svc['name'] in base_charms: |
3251 | + temp = 'lp:charms/{}/{}' |
3252 | + svc['location'] = temp.format(base_series, |
3253 | + svc['name']) |
3254 | + else: |
3255 | + temp = 'lp:~openstack-charmers/charms/{}/{}/next' |
3256 | + svc['location'] = temp.format(self.current_next, |
3257 | + svc['name']) |
3258 | + return other_services |
3259 | + |
3260 | + def _add_services(self, this_service, other_services): |
3261 | + """Add services to the deployment and set openstack-origin/source.""" |
3262 | + other_services = self._determine_branch_locations(other_services) |
3263 | + |
3264 | + super(OpenStackAmuletDeployment, self)._add_services(this_service, |
3265 | + other_services) |
3266 | + |
3267 | + services = other_services |
3268 | + services.append(this_service) |
3269 | + use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', |
3270 | + 'ceph-osd', 'ceph-radosgw'] |
3271 | + # Most OpenStack subordinate charms do not expose an origin option |
3272 | + # as that is controlled by the principal charm. |
3273 | + ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch'] |
3274 | + |
3275 | + if self.openstack: |
3276 | + for svc in services: |
3277 | + if svc['name'] not in use_source + ignore: |
3278 | + config = {'openstack-origin': self.openstack} |
3279 | + self.d.configure(svc['name'], config) |
3280 | + |
3281 | + if self.source: |
3282 | + for svc in services: |
3283 | + if svc['name'] in use_source and svc['name'] not in ignore: |
3284 | + config = {'source': self.source} |
3285 | + self.d.configure(svc['name'], config) |
3286 | + |
3287 | + def _configure_services(self, configs): |
3288 | + """Configure all of the services.""" |
3289 | + for service, config in six.iteritems(configs): |
3290 | + self.d.configure(service, config) |
3291 | + |
3292 | + def _get_openstack_release(self): |
3293 | + """Get openstack release. |
3294 | + |
3295 | + Return an integer representing the enum value of the openstack |
3296 | + release. |
3297 | + """ |
3298 | + # Must be ordered by OpenStack release (not by Ubuntu release): |
3299 | + (self.precise_essex, self.precise_folsom, self.precise_grizzly, |
3300 | + self.precise_havana, self.precise_icehouse, |
3301 | + self.trusty_icehouse, self.trusty_juno, self.utopic_juno, |
3302 | + self.trusty_kilo, self.vivid_kilo, self.trusty_liberty, |
3303 | + self.wily_liberty) = range(12) |
3304 | + |
3305 | + releases = { |
3306 | + ('precise', None): self.precise_essex, |
3307 | + ('precise', 'cloud:precise-folsom'): self.precise_folsom, |
3308 | + ('precise', 'cloud:precise-grizzly'): self.precise_grizzly, |
3309 | + ('precise', 'cloud:precise-havana'): self.precise_havana, |
3310 | + ('precise', 'cloud:precise-icehouse'): self.precise_icehouse, |
3311 | + ('trusty', None): self.trusty_icehouse, |
3312 | + ('trusty', 'cloud:trusty-juno'): self.trusty_juno, |
3313 | + ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo, |
3314 | + ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty, |
3315 | + ('utopic', None): self.utopic_juno, |
3316 | + ('vivid', None): self.vivid_kilo, |
3317 | + ('wily', None): self.wily_liberty} |
3318 | + return releases[(self.series, self.openstack)] |
3319 | + |
3320 | + def _get_openstack_release_string(self): |
3321 | + """Get openstack release string. |
3322 | + |
3323 | + Return a string representing the openstack release. |
3324 | + """ |
3325 | + releases = OrderedDict([ |
3326 | + ('precise', 'essex'), |
3327 | + ('quantal', 'folsom'), |
3328 | + ('raring', 'grizzly'), |
3329 | + ('saucy', 'havana'), |
3330 | + ('trusty', 'icehouse'), |
3331 | + ('utopic', 'juno'), |
3332 | + ('vivid', 'kilo'), |
3333 | + ('wily', 'liberty'), |
3334 | + ]) |
3335 | + if self.openstack: |
3336 | + os_origin = self.openstack.split(':')[1] |
3337 | + return os_origin.split('%s-' % self.series)[1].split('/')[0] |
3338 | + else: |
3339 | + return releases[self.series] |
3340 | + |
3341 | + def get_ceph_expected_pools(self, radosgw=False): |
3342 | + """Return a list of expected ceph pools in a ceph + cinder + glance |
3343 | + test scenario, based on OpenStack release and whether ceph radosgw |
3344 | + is flagged as present or not.""" |
3345 | + |
3346 | + if self._get_openstack_release() >= self.trusty_kilo: |
3347 | + # Kilo or later |
3348 | + pools = [ |
3349 | + 'rbd', |
3350 | + 'cinder', |
3351 | + 'glance' |
3352 | + ] |
3353 | + else: |
3354 | + # Juno or earlier |
3355 | + pools = [ |
3356 | + 'data', |
3357 | + 'metadata', |
3358 | + 'rbd', |
3359 | + 'cinder', |
3360 | + 'glance' |
3361 | + ] |
3362 | + |
3363 | + if radosgw: |
3364 | + pools.extend([ |
3365 | + '.rgw.root', |
3366 | + '.rgw.control', |
3367 | + '.rgw', |
3368 | + '.rgw.gc', |
3369 | + '.users.uid' |
3370 | + ]) |
3371 | + |
3372 | + return pools |
3373 | |
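`_get_openstack_release()` builds an ordered enum by tuple-unpacking `range()`, so tests can gate release-dependent behaviour with plain integer comparisons such as `>= self.trusty_kilo` (as `get_ceph_expected_pools` does above). A standalone sketch of the pattern, with a trimmed, hypothetical subset of the release table:

```python
# Unpacking range() yields integers in OpenStack release order, so ">="
# comparisons select newer-release behaviour. (Subset for illustration.)
(precise_essex, trusty_icehouse, trusty_juno, trusty_kilo) = range(4)

releases = {
    ('precise', None): precise_essex,
    ('trusty', None): trusty_icehouse,
    ('trusty', 'cloud:trusty-juno'): trusty_juno,
    ('trusty', 'cloud:trusty-kilo'): trusty_kilo,
}

def os_release(series, openstack=None):
    # Same (series, origin) -> enum lookup _get_openstack_release() performs.
    return releases[(series, openstack)]

# Juno-or-earlier ceph deployments carry the legacy 'data'/'metadata' pools:
if os_release('trusty', 'cloud:trusty-juno') >= trusty_kilo:
    pools = ['rbd', 'cinder', 'glance']
else:
    pools = ['data', 'metadata', 'rbd', 'cinder', 'glance']
print(pools)  # → ['data', 'metadata', 'rbd', 'cinder', 'glance']
```

Because the enum is ordered by OpenStack release rather than Ubuntu series, `trusty_kilo` correctly sorts after `utopic_juno` in the full table.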
3374 | === added file 'tests/charmhelpers/contrib/openstack/amulet/utils.py' |
3375 | --- tests/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000 |
3376 | +++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-01 21:31:48 +0000 |
3377 | @@ -0,0 +1,604 @@ |
3378 | +# Copyright 2014-2015 Canonical Limited. |
3379 | +# |
3380 | +# This file is part of charm-helpers. |
3381 | +# |
3382 | +# charm-helpers is free software: you can redistribute it and/or modify |
3383 | +# it under the terms of the GNU Lesser General Public License version 3 as |
3384 | +# published by the Free Software Foundation. |
3385 | +# |
3386 | +# charm-helpers is distributed in the hope that it will be useful, |
3387 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
3388 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
3389 | +# GNU Lesser General Public License for more details. |
3390 | +# |
3391 | +# You should have received a copy of the GNU Lesser General Public License |
3392 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
3393 | + |
3394 | +import amulet |
3395 | +import json |
3396 | +import logging |
3397 | +import os |
3398 | +import six |
3399 | +import time |
3400 | +import urllib |
3401 | + |
3402 | +import cinderclient.v1.client as cinder_client |
3403 | +import glanceclient.v1.client as glance_client |
3404 | +import heatclient.v1.client as heat_client |
3405 | +import keystoneclient.v2_0 as keystone_client |
3406 | +import novaclient.v1_1.client as nova_client |
3407 | +import swiftclient |
3408 | + |
3409 | +from charmhelpers.contrib.amulet.utils import ( |
3410 | + AmuletUtils |
3411 | +) |
3412 | + |
3413 | +DEBUG = logging.DEBUG |
3414 | +ERROR = logging.ERROR |
3415 | + |
3416 | + |
3417 | +class OpenStackAmuletUtils(AmuletUtils): |
3418 | + """OpenStack amulet utilities. |
3419 | + |
3420 | + This class inherits from AmuletUtils and has additional support |
3421 | + that is specifically for use by OpenStack charm tests. |
3422 | + """ |
3423 | + |
3424 | + def __init__(self, log_level=ERROR): |
3425 | + """Initialize the deployment environment.""" |
3426 | + super(OpenStackAmuletUtils, self).__init__(log_level) |
3427 | + |
3428 | + def validate_endpoint_data(self, endpoints, admin_port, internal_port, |
3429 | + public_port, expected): |
3430 | + """Validate endpoint data. |
3431 | + |
3432 | + Validate actual endpoint data vs expected endpoint data. The ports |
3433 | + are used to find the matching endpoint. |
3434 | + """ |
3435 | + self.log.debug('Validating endpoint data...') |
3436 | + self.log.debug('actual: {}'.format(repr(endpoints))) |
3437 | + found = False |
3438 | + for ep in endpoints: |
3439 | + self.log.debug('endpoint: {}'.format(repr(ep))) |
3440 | + if (admin_port in ep.adminurl and |
3441 | + internal_port in ep.internalurl and |
3442 | + public_port in ep.publicurl): |
3443 | + found = True |
3444 | + actual = {'id': ep.id, |
3445 | + 'region': ep.region, |
3446 | + 'adminurl': ep.adminurl, |
3447 | + 'internalurl': ep.internalurl, |
3448 | + 'publicurl': ep.publicurl, |
3449 | + 'service_id': ep.service_id} |
3450 | + ret = self._validate_dict_data(expected, actual) |
3451 | + if ret: |
3452 | + return 'unexpected endpoint data - {}'.format(ret) |
3453 | + |
3454 | + if not found: |
3455 | + return 'endpoint not found' |
3456 | + |
3457 | + def validate_svc_catalog_endpoint_data(self, expected, actual): |
3458 | + """Validate service catalog endpoint data. |
3459 | + |
3460 | + Validate a list of actual service catalog endpoints vs a list of |
3461 | + expected service catalog endpoints. |
3462 | + """ |
3463 | + self.log.debug('Validating service catalog endpoint data...') |
3464 | + self.log.debug('actual: {}'.format(repr(actual))) |
3465 | + for k, v in six.iteritems(expected): |
3466 | + if k in actual: |
3467 | + ret = self._validate_dict_data(expected[k][0], actual[k][0]) |
3468 | + if ret: |
3469 | + return self.endpoint_error(k, ret) |
3470 | + else: |
3471 | + return "endpoint {} does not exist".format(k) |
3472 | + return ret |
3473 | + |
3474 | + def validate_tenant_data(self, expected, actual): |
3475 | + """Validate tenant data. |
3476 | + |
3477 | + Validate a list of actual tenant data vs list of expected tenant |
3478 | + data. |
3479 | + """ |
3480 | + self.log.debug('Validating tenant data...') |
3481 | + self.log.debug('actual: {}'.format(repr(actual))) |
3482 | + for e in expected: |
3483 | + found = False |
3484 | + for act in actual: |
3485 | + a = {'enabled': act.enabled, 'description': act.description, |
3486 | + 'name': act.name, 'id': act.id} |
3487 | + if e['name'] == a['name']: |
3488 | + found = True |
3489 | + ret = self._validate_dict_data(e, a) |
3490 | + if ret: |
3491 | + return "unexpected tenant data - {}".format(ret) |
3492 | + if not found: |
3493 | + return "tenant {} does not exist".format(e['name']) |
3494 | + return ret |
3495 | + |
3496 | + def validate_role_data(self, expected, actual): |
3497 | + """Validate role data. |
3498 | + |
3499 | + Validate a list of actual role data vs a list of expected role |
3500 | + data. |
3501 | + """ |
3502 | + self.log.debug('Validating role data...') |
3503 | + self.log.debug('actual: {}'.format(repr(actual))) |
3504 | + for e in expected: |
3505 | + found = False |
3506 | + for act in actual: |
3507 | + a = {'name': act.name, 'id': act.id} |
3508 | + if e['name'] == a['name']: |
3509 | + found = True |
3510 | + ret = self._validate_dict_data(e, a) |
3511 | + if ret: |
3512 | + return "unexpected role data - {}".format(ret) |
3513 | + if not found: |
3514 | + return "role {} does not exist".format(e['name']) |
3515 | + return ret |
3516 | + |
3517 | + def validate_user_data(self, expected, actual): |
3518 | + """Validate user data. |
3519 | + |
3520 | + Validate a list of actual user data vs a list of expected user |
3521 | + data. |
3522 | + """ |
3523 | + self.log.debug('Validating user data...') |
3524 | + self.log.debug('actual: {}'.format(repr(actual))) |
3525 | + for e in expected: |
3526 | + found = False |
3527 | + for act in actual: |
3528 | + a = {'enabled': act.enabled, 'name': act.name, |
3529 | + 'email': act.email, 'tenantId': act.tenantId, |
3530 | + 'id': act.id} |
3531 | + if e['name'] == a['name']: |
3532 | + found = True |
3533 | + ret = self._validate_dict_data(e, a) |
3534 | + if ret: |
3535 | + return "unexpected user data - {}".format(ret) |
3536 | + if not found: |
3537 | + return "user {} does not exist".format(e['name']) |
3538 | + return ret |
3539 | + |
3540 | + def validate_flavor_data(self, expected, actual): |
3541 | + """Validate flavor data. |
3542 | + |
3543 | + Validate a list of actual flavors vs a list of expected flavors. |
3544 | + """ |
3545 | + self.log.debug('Validating flavor data...') |
3546 | + self.log.debug('actual: {}'.format(repr(actual))) |
3547 | + act = [a.name for a in actual] |
3548 | + return self._validate_list_data(expected, act) |
3549 | + |
3550 | + def tenant_exists(self, keystone, tenant): |
3551 | + """Return True if tenant exists.""" |
3552 | + self.log.debug('Checking if tenant exists ({})...'.format(tenant)) |
3553 | + return tenant in [t.name for t in keystone.tenants.list()] |
3554 | + |
3555 | + def authenticate_cinder_admin(self, keystone_sentry, username, |
3556 | + password, tenant): |
3557 | + """Authenticates admin user with cinder.""" |
3558 | + # NOTE(beisner): cinder python client doesn't accept tokens. |
3559 | + service_ip = \ |
3560 | + keystone_sentry.relation('shared-db', |
3561 | + 'mysql:shared-db')['private-address'] |
3562 | + ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8')) |
3563 | + return cinder_client.Client(username, password, tenant, ept) |
3564 | + |
3565 | + def authenticate_keystone_admin(self, keystone_sentry, user, password, |
3566 | + tenant): |
3567 | + """Authenticates admin user with the keystone admin endpoint.""" |
3568 | + self.log.debug('Authenticating keystone admin...') |
3569 | + unit = keystone_sentry |
3570 | + service_ip = unit.relation('shared-db', |
3571 | + 'mysql:shared-db')['private-address'] |
3572 | + ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8')) |
3573 | + return keystone_client.Client(username=user, password=password, |
3574 | + tenant_name=tenant, auth_url=ep) |
3575 | + |
3576 | + def authenticate_keystone_user(self, keystone, user, password, tenant): |
3577 | + """Authenticates a regular user with the keystone public endpoint.""" |
3578 | + self.log.debug('Authenticating keystone user ({})...'.format(user)) |
3579 | + ep = keystone.service_catalog.url_for(service_type='identity', |
3580 | + endpoint_type='publicURL') |
3581 | + return keystone_client.Client(username=user, password=password, |
3582 | + tenant_name=tenant, auth_url=ep) |
3583 | + |
3584 | + def authenticate_glance_admin(self, keystone): |
3585 | + """Authenticates admin user with glance.""" |
3586 | + self.log.debug('Authenticating glance admin...') |
3587 | + ep = keystone.service_catalog.url_for(service_type='image', |
3588 | + endpoint_type='adminURL') |
3589 | + return glance_client.Client(ep, token=keystone.auth_token) |
3590 | + |
3591 | + def authenticate_heat_admin(self, keystone): |
3592 | + """Authenticates the admin user with heat.""" |
3593 | + self.log.debug('Authenticating heat admin...') |
3594 | + ep = keystone.service_catalog.url_for(service_type='orchestration', |
3595 | + endpoint_type='publicURL') |
3596 | + return heat_client.Client(endpoint=ep, token=keystone.auth_token) |
3597 | + |
3598 | + def authenticate_nova_user(self, keystone, user, password, tenant): |
3599 | + """Authenticates a regular user with nova-api.""" |
3600 | + self.log.debug('Authenticating nova user ({})...'.format(user)) |
3601 | + ep = keystone.service_catalog.url_for(service_type='identity', |
3602 | + endpoint_type='publicURL') |
3603 | + return nova_client.Client(username=user, api_key=password, |
3604 | + project_id=tenant, auth_url=ep) |
3605 | + |
3606 | + def authenticate_swift_user(self, keystone, user, password, tenant): |
3607 | + """Authenticates a regular user with swift api.""" |
3608 | + self.log.debug('Authenticating swift user ({})...'.format(user)) |
3609 | + ep = keystone.service_catalog.url_for(service_type='identity', |
3610 | + endpoint_type='publicURL') |
3611 | + return swiftclient.Connection(authurl=ep, |
3612 | + user=user, |
3613 | + key=password, |
3614 | + tenant_name=tenant, |
3615 | + auth_version='2.0') |
3616 | + |
3617 | + def create_cirros_image(self, glance, image_name): |
3618 | + """Download the latest cirros image and upload it to glance, |
3619 | + validate and return a resource pointer. |
3620 | + |
3621 | + :param glance: pointer to authenticated glance connection |
3622 | + :param image_name: display name for new image |
3623 | + :returns: glance image pointer |
3624 | + """ |
3625 | + self.log.debug('Creating glance cirros image ' |
3626 | + '({})...'.format(image_name)) |
3627 | + |
3628 | + # Download cirros image |
3629 | + http_proxy = os.getenv('AMULET_HTTP_PROXY') |
3630 | + self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy)) |
3631 | + if http_proxy: |
3632 | + proxies = {'http': http_proxy} |
3633 | + opener = urllib.FancyURLopener(proxies) |
3634 | + else: |
3635 | + opener = urllib.FancyURLopener() |
3636 | + |
3637 | + f = opener.open('http://download.cirros-cloud.net/version/released') |
3638 | + version = f.read().strip() |
3639 | + cirros_img = 'cirros-{}-x86_64-disk.img'.format(version) |
3640 | + local_path = os.path.join('tests', cirros_img) |
3641 | + |
3642 | + if not os.path.exists(local_path): |
3643 | + cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net', |
3644 | + version, cirros_img) |
3645 | + opener.retrieve(cirros_url, local_path) |
3646 | + f.close() |
3647 | + |
3648 | + # Create glance image |
3649 | + with open(local_path) as f: |
3650 | + image = glance.images.create(name=image_name, is_public=True, |
3651 | + disk_format='qcow2', |
3652 | + container_format='bare', data=f) |
3653 | + |
3654 | + # Wait for image to reach active status |
3655 | + img_id = image.id |
3656 | + ret = self.resource_reaches_status(glance.images, img_id, |
3657 | + expected_stat='active', |
3658 | + msg='Image status wait') |
3659 | + if not ret: |
3660 | + msg = 'Glance image failed to reach expected state.' |
3661 | + amulet.raise_status(amulet.FAIL, msg=msg) |
3662 | + |
3663 | + # Re-validate new image |
3664 | + self.log.debug('Validating image attributes...') |
3665 | + val_img_name = glance.images.get(img_id).name |
3666 | + val_img_stat = glance.images.get(img_id).status |
3667 | + val_img_pub = glance.images.get(img_id).is_public |
3668 | + val_img_cfmt = glance.images.get(img_id).container_format |
3669 | + val_img_dfmt = glance.images.get(img_id).disk_format |
3670 | + msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} ' |
3671 | + 'container fmt:{} disk fmt:{}'.format( |
3672 | + val_img_name, val_img_pub, img_id, |
3673 | + val_img_stat, val_img_cfmt, val_img_dfmt)) |
3674 | + |
3675 | + if val_img_name == image_name and val_img_stat == 'active' \ |
3676 | + and val_img_pub is True and val_img_cfmt == 'bare' \ |
3677 | + and val_img_dfmt == 'qcow2': |
3678 | + self.log.debug(msg_attr) |
3679 | + else: |
3680 | + msg = ('Image validation failed, {}'.format(msg_attr)) |
3681 | + amulet.raise_status(amulet.FAIL, msg=msg) |
3682 | + |
3683 | + return image |
3684 | + |
3685 | + def delete_image(self, glance, image): |
3686 | + """Delete the specified image.""" |
3687 | + |
3688 | + # /!\ DEPRECATION WARNING |
3689 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
3690 | + 'delete_resource instead of delete_image.') |
3691 | + self.log.debug('Deleting glance image ({})...'.format(image)) |
3692 | + return self.delete_resource(glance.images, image, msg='glance image') |
3693 | + |
3694 | + def create_instance(self, nova, image_name, instance_name, flavor): |
3695 | + """Create the specified instance.""" |
3696 | + self.log.debug('Creating instance ' |
3697 | + '({}|{}|{})'.format(instance_name, image_name, flavor)) |
3698 | + image = nova.images.find(name=image_name) |
3699 | + flavor = nova.flavors.find(name=flavor) |
3700 | + instance = nova.servers.create(name=instance_name, image=image, |
3701 | + flavor=flavor) |
3702 | + |
3703 | + count = 1 |
3704 | + status = instance.status |
3705 | + while status != 'ACTIVE' and count < 60: |
3706 | + time.sleep(3) |
3707 | + instance = nova.servers.get(instance.id) |
3708 | + status = instance.status |
3709 | + self.log.debug('instance status: {}'.format(status)) |
3710 | + count += 1 |
3711 | + |
3712 | + if status != 'ACTIVE': |
3713 | + self.log.error('instance creation timed out') |
3714 | + return None |
3715 | + |
3716 | + return instance |
3717 | + |
3718 | + def delete_instance(self, nova, instance): |
3719 | + """Delete the specified instance.""" |
3720 | + |
3721 | + # /!\ DEPRECATION WARNING |
3722 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
3723 | + 'delete_resource instead of delete_instance.') |
3724 | + self.log.debug('Deleting instance ({})...'.format(instance)) |
3725 | + return self.delete_resource(nova.servers, instance, |
3726 | + msg='nova instance') |
3727 | + |
3728 | + def create_or_get_keypair(self, nova, keypair_name="testkey"): |
3729 | + """Create a new keypair, or return pointer if it already exists.""" |
3730 | + try: |
3731 | + _keypair = nova.keypairs.get(keypair_name) |
3732 | + self.log.debug('Keypair ({}) already exists, ' |
3733 | + 'using it.'.format(keypair_name)) |
3734 | + return _keypair |
3735 | + except Exception: |
3736 | + self.log.debug('Keypair ({}) does not exist, ' |
3737 | + 'creating it.'.format(keypair_name)) |
3738 | + |
3739 | + _keypair = nova.keypairs.create(name=keypair_name) |
3740 | + return _keypair |
3741 | + |
3742 | + def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1, |
3743 | + img_id=None, src_vol_id=None, snap_id=None): |
3744 | + """Create cinder volume, optionally from a glance image, OR |
3745 | + optionally as a clone of an existing volume, OR optionally |
3746 | + from a snapshot. Wait for the new volume status to reach |
3747 | + the expected status, validate and return a resource pointer. |
3748 | + |
3749 | + :param vol_name: cinder volume display name |
3750 | + :param vol_size: size in gigabytes |
3751 | + :param img_id: optional glance image id |
3752 | + :param src_vol_id: optional source volume id to clone |
3753 | + :param snap_id: optional snapshot id to use |
3754 | + :returns: cinder volume pointer |
3755 | + """ |
3756 | + # Handle parameter input and avoid impossible combinations |
3757 | + if img_id and not src_vol_id and not snap_id: |
3758 | + # Create volume from image |
3759 | + self.log.debug('Creating cinder volume from glance image...') |
3760 | + bootable = 'true' |
3761 | + elif src_vol_id and not img_id and not snap_id: |
3762 | + # Clone an existing volume |
3763 | + self.log.debug('Cloning cinder volume...') |
3764 | + bootable = cinder.volumes.get(src_vol_id).bootable |
3765 | + elif snap_id and not src_vol_id and not img_id: |
3766 | + # Create volume from snapshot |
3767 | + self.log.debug('Creating cinder volume from snapshot...') |
3768 | + snap = cinder.volume_snapshots.find(id=snap_id) |
3769 | + vol_size = snap.size |
3770 | + snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id |
3771 | + bootable = cinder.volumes.get(snap_vol_id).bootable |
3772 | + elif not img_id and not src_vol_id and not snap_id: |
3773 | + # Create volume |
3774 | + self.log.debug('Creating cinder volume...') |
3775 | + bootable = 'false' |
3776 | + else: |
3777 | + # Impossible combination of parameters |
3778 | + msg = ('Invalid method use - name:{} size:{} img_id:{} ' |
3779 | + 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size, |
3780 | + img_id, src_vol_id, |
3781 | + snap_id)) |
3782 | + amulet.raise_status(amulet.FAIL, msg=msg) |
3783 | + |
3784 | + # Create new volume |
3785 | + try: |
3786 | + vol_new = cinder.volumes.create(display_name=vol_name, |
3787 | + imageRef=img_id, |
3788 | + size=vol_size, |
3789 | + source_volid=src_vol_id, |
3790 | + snapshot_id=snap_id) |
3791 | + vol_id = vol_new.id |
3792 | + except Exception as e: |
3793 | + msg = 'Failed to create volume: {}'.format(e) |
3794 | + amulet.raise_status(amulet.FAIL, msg=msg) |
3795 | + |
3796 | + # Wait for volume to reach available status |
3797 | + ret = self.resource_reaches_status(cinder.volumes, vol_id, |
3798 | + expected_stat="available", |
3799 | + msg="Volume status wait") |
3800 | + if not ret: |
3801 | + msg = 'Cinder volume failed to reach expected state.' |
3802 | + amulet.raise_status(amulet.FAIL, msg=msg) |
3803 | + |
3804 | + # Re-validate new volume |
3805 | + self.log.debug('Validating volume attributes...') |
3806 | + val_vol_name = cinder.volumes.get(vol_id).display_name |
3807 | + val_vol_boot = cinder.volumes.get(vol_id).bootable |
3808 | + val_vol_stat = cinder.volumes.get(vol_id).status |
3809 | + val_vol_size = cinder.volumes.get(vol_id).size |
3810 | + msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:' |
3811 | + '{} size:{}'.format(val_vol_name, vol_id, |
3812 | + val_vol_stat, val_vol_boot, |
3813 | + val_vol_size)) |
3814 | + |
3815 | + if val_vol_boot == bootable and val_vol_stat == 'available' \ |
3816 | + and val_vol_name == vol_name and val_vol_size == vol_size: |
3817 | + self.log.debug(msg_attr) |
3818 | + else: |
3819 | + msg = ('Volume validation failed, {}'.format(msg_attr)) |
3820 | + amulet.raise_status(amulet.FAIL, msg=msg) |
3821 | + |
3822 | + return vol_new |
3823 | + |
3824 | + def delete_resource(self, resource, resource_id, |
3825 | + msg="resource", max_wait=120): |
3826 | + """Delete one openstack resource, such as one instance, keypair, |
3827 | + image, volume, stack, etc., and confirm deletion within max wait time. |
3828 | + |
3829 | + :param resource: pointer to os resource type, ex:glance_client.images |
3830 | + :param resource_id: unique name or id for the openstack resource |
3831 | + :param msg: text to identify purpose in logging |
3832 | + :param max_wait: maximum wait time in seconds |
3833 | + :returns: True if successful, otherwise False |
3834 | + """ |
3835 | + self.log.debug('Deleting OpenStack resource ' |
3836 | + '{} ({})'.format(resource_id, msg)) |
3837 | + num_before = len(list(resource.list())) |
3838 | + resource.delete(resource_id) |
3839 | + |
3840 | + tries = 0 |
3841 | + num_after = len(list(resource.list())) |
3842 | + while num_after != (num_before - 1) and tries < (max_wait / 4): |
3843 | + self.log.debug('{} delete check: ' |
3844 | + '{} [{}:{}] {}'.format(msg, tries, |
3845 | + num_before, |
3846 | + num_after, |
3847 | + resource_id)) |
3848 | + time.sleep(4) |
3849 | + num_after = len(list(resource.list())) |
3850 | + tries += 1 |
3851 | + |
3852 | + self.log.debug('{}: expected, actual count = {}, ' |
3853 | + '{}'.format(msg, num_before - 1, num_after)) |
3854 | + |
3855 | + if num_after == (num_before - 1): |
3856 | + return True |
3857 | + else: |
3858 | + self.log.error('{} delete timed out'.format(msg)) |
3859 | + return False |
3860 | + |
3861 | + def resource_reaches_status(self, resource, resource_id, |
3862 | + expected_stat='available', |
3863 | + msg='resource', max_wait=120): |
3864 | + """Wait for an openstack resources status to reach an |
3865 | + expected status within a specified time. Useful to confirm that |
3866 | + nova instances, cinder vols, snapshots, glance images, heat stacks |
3867 | + and other resources eventually reach the expected status. |
3868 | + |
3869 | + :param resource: pointer to os resource type, ex: heat_client.stacks |
3870 | + :param resource_id: unique id for the openstack resource |
3871 | + :param expected_stat: status to expect resource to reach |
3872 | + :param msg: text to identify purpose in logging |
3873 | + :param max_wait: maximum wait time in seconds |
3874 | + :returns: True if successful, False if status is not reached |
3875 | + """ |
3876 | + |
3877 | + tries = 0 |
3878 | + resource_stat = resource.get(resource_id).status |
3879 | + while resource_stat != expected_stat and tries < (max_wait / 4): |
3880 | + self.log.debug('{} status check: ' |
3881 | + '{} [{}:{}] {}'.format(msg, tries, |
3882 | + resource_stat, |
3883 | + expected_stat, |
3884 | + resource_id)) |
3885 | + time.sleep(4) |
3886 | + resource_stat = resource.get(resource_id).status |
3887 | + tries += 1 |
3888 | + |
3889 | + self.log.debug('{}: expected, actual status = {}, ' |
3890 | + '{}'.format(msg, expected_stat, resource_stat)) |
3891 | + |
3892 | + if resource_stat == expected_stat: |
3893 | + return True |
3894 | + else: |
3895 | + self.log.debug('{} never reached expected status: ' |
3896 | + '{}'.format(resource_id, expected_stat)) |
3897 | + return False |
3898 | + |
3899 | + def get_ceph_osd_id_cmd(self, index): |
3900 | + """Produce a shell command that will return a ceph-osd id.""" |
3901 | + return ("`initctl list | grep 'ceph-osd ' | " |
3902 | + "awk 'NR=={} {{ print $2 }}' | " |
3903 | + "grep -o '[0-9]*'`".format(index + 1)) |
3904 | + |
3905 | + def get_ceph_pools(self, sentry_unit): |
3906 | + """Return a dict of ceph pools from a single ceph unit, with |
3907 | + pool name as keys, pool id as vals.""" |
3908 | + pools = {} |
3909 | + cmd = 'sudo ceph osd lspools' |
3910 | + output, code = sentry_unit.run(cmd) |
3911 | + if code != 0: |
3912 | + msg = ('{} `{}` returned {} ' |
3913 | + '{}'.format(sentry_unit.info['unit_name'], |
3914 | + cmd, code, output)) |
3915 | + amulet.raise_status(amulet.FAIL, msg=msg) |
3916 | + |
3917 | + # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance, |
3918 | + for pool in str(output).split(','): |
3919 | + pool_id_name = pool.split(' ') |
3920 | + if len(pool_id_name) == 2: |
3921 | + pool_id = pool_id_name[0] |
3922 | + pool_name = pool_id_name[1] |
3923 | + pools[pool_name] = int(pool_id) |
3924 | + |
3925 | + self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'], |
3926 | + pools)) |
3927 | + return pools |
3928 | + |
3929 | + def get_ceph_df(self, sentry_unit): |
3930 | + """Return dict of ceph df json output, including ceph pool state. |
3931 | + |
3932 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
3933 | + :returns: Dict of ceph df output |
3934 | + """ |
3935 | + cmd = 'sudo ceph df --format=json' |
3936 | + output, code = sentry_unit.run(cmd) |
3937 | + if code != 0: |
3938 | + msg = ('{} `{}` returned {} ' |
3939 | + '{}'.format(sentry_unit.info['unit_name'], |
3940 | + cmd, code, output)) |
3941 | + amulet.raise_status(amulet.FAIL, msg=msg) |
3942 | + return json.loads(output) |
3943 | + |
3944 | + def get_ceph_pool_sample(self, sentry_unit, pool_id=0): |
3945 | + """Take a sample of attributes of a ceph pool, returning ceph |
3946 | + pool name, object count and disk space used for the specified |
3947 | + pool ID number. |
3948 | + |
3949 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
3950 | + :param pool_id: Ceph pool ID |
3951 | + :returns: List of pool name, object count, kb disk space used |
3952 | + """ |
3953 | + df = self.get_ceph_df(sentry_unit) |
3954 | + pool_name = df['pools'][pool_id]['name'] |
3955 | + obj_count = df['pools'][pool_id]['stats']['objects'] |
3956 | + kb_used = df['pools'][pool_id]['stats']['kb_used'] |
3957 | + self.log.debug('Ceph {} pool (ID {}): {} objects, ' |
3958 | + '{} kb used'.format(pool_name, pool_id, |
3959 | + obj_count, kb_used)) |
3960 | + return pool_name, obj_count, kb_used |
3961 | + |
3962 | + def validate_ceph_pool_samples(self, samples, sample_type="resource pool"): |
3963 | + """Validate ceph pool samples taken over time, such as pool |
3964 | + object counts or pool kb used, before adding, after adding, and |
3965 | + after deleting items which affect those pool attributes. The |
3966 | + 2nd element is expected to be greater than the 1st; 3rd is expected |
3967 | + to be less than the 2nd. |
3968 | + |
3969 | + :param samples: List containing 3 data samples |
3970 | + :param sample_type: String for logging and usage context |
3971 | + :returns: None if successful, Failure message otherwise |
3972 | + """ |
3973 | + original, created, deleted = range(3) |
3974 | + if samples[created] <= samples[original] or \ |
3975 | + samples[deleted] >= samples[created]: |
3976 | + return ('Ceph {} samples ({}) ' |
3977 | + 'unexpected.'.format(sample_type, samples)) |
3978 | + else: |
3979 | + self.log.debug('Ceph {} samples (OK): ' |
3980 | + '{}'.format(sample_type, samples)) |
3981 | + return None |
3982 | |
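The comma-separated output of `ceph osd lspools` parsed by `get_ceph_pools()` above can be exercised standalone. A minimal sketch of the same parsing logic, using the illustrative sample string noted in the diff's comment (`parse_lspools` is a hypothetical helper, not part of charm-helpers):

```python
def parse_lspools(output):
    """Parse `ceph osd lspools` output such as
    '0 data,1 metadata,2 rbd,3 cinder,4 glance,' into a dict
    mapping pool names to integer pool ids."""
    pools = {}
    for pool in str(output).split(','):
        # Each entry is '<id> <name>'; a trailing comma yields an
        # empty entry, which the length check skips.
        pool_id_name = pool.split(' ')
        if len(pool_id_name) == 2:
            pools[pool_id_name[1]] = int(pool_id_name[0])
    return pools

print(parse_lspools('0 data,1 metadata,2 rbd,3 cinder,4 glance,'))
```

The length check makes the parser tolerant of the trailing comma that `lspools` emits after the last pool.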
3983 | === added file 'tests/tests.yaml' |
3984 | --- tests/tests.yaml 1970-01-01 00:00:00 +0000 |
3985 | +++ tests/tests.yaml 2015-07-01 21:31:48 +0000 |
3986 | @@ -0,0 +1,19 @@ |
3987 | +bootstrap: true |
3988 | +reset: true |
3989 | +virtualenv: true |
3990 | +makefile: |
3991 | + - lint |
3992 | + - test |
3993 | +sources: |
3994 | + - ppa:juju/stable |
3995 | +packages: |
3996 | + - amulet |
3997 | + - python-amulet |
3998 | + - python-ceilometerclient |
3999 | + - python-cinderclient |
4000 | + - python-distro-info |
4001 | + - python-glanceclient |
4002 | + - python-heatclient |
4003 | + - python-keystoneclient |
4004 | + - python-novaclient |
4005 | + - python-swiftclient |
4006 | |
4007 | === modified file 'unit_tests/test_ceilometer_utils.py' |
4008 | --- unit_tests/test_ceilometer_utils.py 2014-04-01 16:53:45 +0000 |
4009 | +++ unit_tests/test_ceilometer_utils.py 2015-07-01 21:31:48 +0000 |
4010 | @@ -1,4 +1,4 @@ |
4011 | -from mock import patch, call, MagicMock |
4012 | +from mock import call, MagicMock |
4013 | |
4014 | import ceilometer_utils as utils |
4015 |
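Both `delete_resource` and `resource_reaches_status` added in this diff implement the same poll-and-timeout loop: check a condition, sleep a fixed 4-second interval, and give up once `max_wait` seconds' worth of tries have elapsed. A minimal standalone sketch of that pattern (`wait_for` and `FakeResource` are illustrative names, not part of charm-helpers):

```python
import time


def wait_for(check, max_wait=120, interval=4):
    """Poll check() every `interval` seconds until it returns True,
    giving up after roughly max_wait seconds. Returns the final
    check() result, mirroring the tries < (max_wait / interval)
    bound used by the amulet helpers."""
    tries = 0
    while not check() and tries < (max_wait / interval):
        time.sleep(interval)
        tries += 1
    return check()


class FakeResource(object):
    """Stand-in for an OpenStack resource that becomes 'available'
    after a few status polls."""
    def __init__(self):
        self._polls = 0

    @property
    def status(self):
        self._polls += 1
        return 'available' if self._polls >= 3 else 'creating'


res = FakeResource()
print(wait_for(lambda: res.status == 'available',
               max_wait=1, interval=0.01))  # prints True
```

In the charm-helpers versions the condition is a fresh `resource.get(resource_id).status` lookup (or a `resource.list()` count for deletes) rather than a callable, but the control flow is the same.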
charm_lint_check #5620 ceilometer-agent-next for 1chb1n mp263040
LINT OK: passed
Build: http://10.245.162.77:8080/job/charm_lint_check/5620/