Merge lp:~1chb1n/charms/trusty/ceilometer/next-amulet-kilo into lp:~openstack-charmers-archive/charms/trusty/ceilometer/next

Proposed by Ryan Beisner on 2015-06-12
Status: Merged
Merged at revision: 86
Proposed branch: lp:~1chb1n/charms/trusty/ceilometer/next-amulet-kilo
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceilometer/next
Diff against target: 1246 lines (+519/-95)
15 files modified
Makefile (+2/-1)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+12/-3)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+6/-2)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+122/-3)
hooks/charmhelpers/contrib/openstack/context.py (+1/-1)
hooks/charmhelpers/contrib/openstack/neutron.py (+6/-4)
hooks/charmhelpers/contrib/openstack/utils.py (+8/-1)
hooks/charmhelpers/core/host.py (+24/-6)
metadata.yaml (+1/-1)
tests/00-setup (+2/-3)
tests/README (+30/-0)
tests/basic_deployment.py (+86/-59)
tests/charmhelpers/contrib/amulet/utils.py (+91/-6)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+6/-2)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+122/-3)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/ceilometer/next-amulet-kilo
Reviewer: Corey Bryant (review requested 2015-06-12; approved 2015-06-16)
Review via email: mp+261850@code.launchpad.net

Commit Message

Update amulet tests for Kilo; Prep for Wily, Liberty; sync tests/charmhelpers; sync hooks/charmhelpers.

Description of the Change

Update amulet tests for Kilo; Prep for Wily, Liberty; sync tests/charmhelpers; sync hooks/charmhelpers.


charm_lint_check #5349 ceilometer-next for 1chb1n mp261850
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5349/

charm_unit_test #4981 ceilometer-next for 1chb1n mp261850
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/4981/

89. By Ryan Beisner on 2015-06-12

update Makefile

90. By Ryan Beisner on 2015-06-12

update categories to tags in metadata.yaml

charm_unit_test #4982 ceilometer-next for 1chb1n mp261850
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/4982/

charm_lint_check #5350 ceilometer-next for 1chb1n mp261850
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5350/

charm_amulet_test #4591 ceilometer-next for 1chb1n mp261850
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11702550/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4591/

charm_amulet_test #4592 ceilometer-next for 1chb1n mp261850
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11702555/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4592/

91. By Ryan Beisner on 2015-06-12

fix dependency package name typo

charm_lint_check #5351 ceilometer-next for 1chb1n mp261850
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5351/

charm_unit_test #4983 ceilometer-next for 1chb1n mp261850
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/4983/

charm_amulet_test #4594 ceilometer-next for 1chb1n mp261850
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4594/

review: Approve

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2015-04-16 21:31:59 +0000
3+++ Makefile 2015-06-12 17:20:17 +0000
4@@ -2,7 +2,8 @@
5 PYTHON := /usr/bin/env python
6
7 lint:
8- @flake8 --exclude hooks/charmhelpers hooks unit_tests tests
9+ @flake8 --exclude hooks/charmhelpers,tests/charmhelpers \
10+ hooks unit_tests tests
11 @charm proof
12
13 unit_test:
14
15=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
16--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-06-11 12:57:33 +0000
17+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-06-12 17:20:17 +0000
18@@ -64,6 +64,10 @@
19 pass
20
21
22+class CRMDCNotFound(Exception):
23+ pass
24+
25+
26 def is_elected_leader(resource):
27 """
28 Returns True if the charm executing this is the elected cluster leader.
29@@ -116,8 +120,9 @@
30 status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
31 if not isinstance(status, six.text_type):
32 status = six.text_type(status, "utf-8")
33- except subprocess.CalledProcessError:
34- return False
35+ except subprocess.CalledProcessError as ex:
36+ raise CRMDCNotFound(str(ex))
37+
38 current_dc = ''
39 for line in status.split('\n'):
40 if line.startswith('Current DC'):
41@@ -125,10 +130,14 @@
42 current_dc = line.split(':')[1].split()[0]
43 if current_dc == get_unit_hostname():
44 return True
45+ elif current_dc == 'NONE':
46+ raise CRMDCNotFound('Current DC: NONE')
47+
48 return False
49
50
51-@retry_on_exception(5, base_delay=2, exc_type=CRMResourceNotFound)
52+@retry_on_exception(5, base_delay=2,
53+ exc_type=(CRMResourceNotFound, CRMDCNotFound))
54 def is_crm_leader(resource, retry=False):
55 """
56 Returns True if the charm calling this is the elected corosync leader,
57
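The cluster.py hunk above widens the retry decorator to catch both CRMResourceNotFound and the new CRMDCNotFound by passing a tuple as `exc_type`. The underlying pattern, a decorator that retries on a tuple of exception types with exponential backoff, can be sketched roughly as follows; the real `retry_on_exception` lives in charm-helpers, and this is an illustrative re-implementation, not the charm-helpers source:

```python
import time


def retry_on_exception(num_retries, base_delay=2, exc_type=Exception):
    """Retry the wrapped function when it raises exc_type.

    exc_type may be a single exception class or a tuple of classes,
    which is what lets one decorator cover both CRMResourceNotFound
    and CRMDCNotFound at the same time.
    """
    def wrap(f):
        def wrapped_f(*args, **kwargs):
            for attempt in range(num_retries + 1):
                try:
                    return f(*args, **kwargs)
                except exc_type:
                    if attempt == num_retries:
                        raise
                    # Exponential backoff: base_delay, 2*base_delay, ...
                    time.sleep(base_delay * (2 ** attempt))
        return wrapped_f
    return wrap
```

Because `except` accepts a tuple natively, no code change is needed in the decorator itself when a second exception class is added, only in the decoration site, which is exactly what the diff does.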
58=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
59--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-05-11 07:26:16 +0000
60+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-12 17:20:17 +0000
61@@ -110,7 +110,8 @@
62 (self.precise_essex, self.precise_folsom, self.precise_grizzly,
63 self.precise_havana, self.precise_icehouse,
64 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
65- self.trusty_kilo, self.vivid_kilo) = range(10)
66+ self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
67+ self.wily_liberty) = range(12)
68
69 releases = {
70 ('precise', None): self.precise_essex,
71@@ -121,8 +122,10 @@
72 ('trusty', None): self.trusty_icehouse,
73 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
74 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
75+ ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
76 ('utopic', None): self.utopic_juno,
77- ('vivid', None): self.vivid_kilo}
78+ ('vivid', None): self.vivid_kilo,
79+ ('wily', None): self.wily_liberty}
80 return releases[(self.series, self.openstack)]
81
82 def _get_openstack_release_string(self):
83@@ -138,6 +141,7 @@
84 ('trusty', 'icehouse'),
85 ('utopic', 'juno'),
86 ('vivid', 'kilo'),
87+ ('wily', 'liberty'),
88 ])
89 if self.openstack:
90 os_origin = self.openstack.split(':')[1]
91
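The deployment helper above maps (series, openstack-origin) pairs to release identifiers, and the new entries extend that table to trusty-liberty and wily. A minimal sketch of the lookup logic, using a subset of the table rather than the full charm-helpers mapping, and with the origin-parsing behaviour assumed from the visible `split(':')[1]` line:

```python
from collections import OrderedDict

# Ubuntu series -> default OpenStack release, oldest first (subset)
UBUNTU_OPENSTACK_RELEASE = OrderedDict([
    ('trusty', 'icehouse'),
    ('utopic', 'juno'),
    ('vivid', 'kilo'),
    ('wily', 'liberty'),
])


def release_string(series, openstack_origin=None):
    """A cloud-archive origin such as 'cloud:trusty-liberty'
    overrides the series default; otherwise the series maps
    directly to its shipped release."""
    if openstack_origin:
        # 'cloud:trusty-liberty' -> 'trusty-liberty' -> 'liberty'
        return openstack_origin.split(':')[1].split('-')[1]
    return UBUNTU_OPENSTACK_RELEASE[series]
```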
92=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
93--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-01-26 09:48:14 +0000
94+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-06-12 17:20:17 +0000
95@@ -16,15 +16,15 @@
96
97 import logging
98 import os
99+import six
100 import time
101 import urllib
102
103 import glanceclient.v1.client as glance_client
104+import heatclient.v1.client as heat_client
105 import keystoneclient.v2_0 as keystone_client
106 import novaclient.v1_1.client as nova_client
107
108-import six
109-
110 from charmhelpers.contrib.amulet.utils import (
111 AmuletUtils
112 )
113@@ -37,7 +37,7 @@
114 """OpenStack amulet utilities.
115
116 This class inherits from AmuletUtils and has additional support
117- that is specifically for use by OpenStack charms.
118+ that is specifically for use by OpenStack charm tests.
119 """
120
121 def __init__(self, log_level=ERROR):
122@@ -51,6 +51,8 @@
123 Validate actual endpoint data vs expected endpoint data. The ports
124 are used to find the matching endpoint.
125 """
126+ self.log.debug('Validating endpoint data...')
127+ self.log.debug('actual: {}'.format(repr(endpoints)))
128 found = False
129 for ep in endpoints:
130 self.log.debug('endpoint: {}'.format(repr(ep)))
131@@ -77,6 +79,7 @@
132 Validate a list of actual service catalog endpoints vs a list of
133 expected service catalog endpoints.
134 """
135+ self.log.debug('Validating service catalog endpoint data...')
136 self.log.debug('actual: {}'.format(repr(actual)))
137 for k, v in six.iteritems(expected):
138 if k in actual:
139@@ -93,6 +96,7 @@
140 Validate a list of actual tenant data vs list of expected tenant
141 data.
142 """
143+ self.log.debug('Validating tenant data...')
144 self.log.debug('actual: {}'.format(repr(actual)))
145 for e in expected:
146 found = False
147@@ -114,6 +118,7 @@
148 Validate a list of actual role data vs a list of expected role
149 data.
150 """
151+ self.log.debug('Validating role data...')
152 self.log.debug('actual: {}'.format(repr(actual)))
153 for e in expected:
154 found = False
155@@ -134,6 +139,7 @@
156 Validate a list of actual user data vs a list of expected user
157 data.
158 """
159+ self.log.debug('Validating user data...')
160 self.log.debug('actual: {}'.format(repr(actual)))
161 for e in expected:
162 found = False
163@@ -155,17 +161,20 @@
164
165 Validate a list of actual flavors vs a list of expected flavors.
166 """
167+ self.log.debug('Validating flavor data...')
168 self.log.debug('actual: {}'.format(repr(actual)))
169 act = [a.name for a in actual]
170 return self._validate_list_data(expected, act)
171
172 def tenant_exists(self, keystone, tenant):
173 """Return True if tenant exists."""
174+ self.log.debug('Checking if tenant exists ({})...'.format(tenant))
175 return tenant in [t.name for t in keystone.tenants.list()]
176
177 def authenticate_keystone_admin(self, keystone_sentry, user, password,
178 tenant):
179 """Authenticates admin user with the keystone admin endpoint."""
180+ self.log.debug('Authenticating keystone admin...')
181 unit = keystone_sentry
182 service_ip = unit.relation('shared-db',
183 'mysql:shared-db')['private-address']
184@@ -175,6 +184,7 @@
185
186 def authenticate_keystone_user(self, keystone, user, password, tenant):
187 """Authenticates a regular user with the keystone public endpoint."""
188+ self.log.debug('Authenticating keystone user ({})...'.format(user))
189 ep = keystone.service_catalog.url_for(service_type='identity',
190 endpoint_type='publicURL')
191 return keystone_client.Client(username=user, password=password,
192@@ -182,12 +192,21 @@
193
194 def authenticate_glance_admin(self, keystone):
195 """Authenticates admin user with glance."""
196+ self.log.debug('Authenticating glance admin...')
197 ep = keystone.service_catalog.url_for(service_type='image',
198 endpoint_type='adminURL')
199 return glance_client.Client(ep, token=keystone.auth_token)
200
201+ def authenticate_heat_admin(self, keystone):
202+ """Authenticates the admin user with heat."""
203+ self.log.debug('Authenticating heat admin...')
204+ ep = keystone.service_catalog.url_for(service_type='orchestration',
205+ endpoint_type='publicURL')
206+ return heat_client.Client(endpoint=ep, token=keystone.auth_token)
207+
208 def authenticate_nova_user(self, keystone, user, password, tenant):
209 """Authenticates a regular user with nova-api."""
210+ self.log.debug('Authenticating nova user ({})...'.format(user))
211 ep = keystone.service_catalog.url_for(service_type='identity',
212 endpoint_type='publicURL')
213 return nova_client.Client(username=user, api_key=password,
214@@ -195,6 +214,7 @@
215
216 def create_cirros_image(self, glance, image_name):
217 """Download the latest cirros image and upload it to glance."""
218+ self.log.debug('Creating glance image ({})...'.format(image_name))
219 http_proxy = os.getenv('AMULET_HTTP_PROXY')
220 self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
221 if http_proxy:
222@@ -235,6 +255,11 @@
223
224 def delete_image(self, glance, image):
225 """Delete the specified image."""
226+
227+ # /!\ DEPRECATION WARNING
228+ self.log.warn('/!\\ DEPRECATION WARNING: use '
229+ 'delete_resource instead of delete_image.')
230+ self.log.debug('Deleting glance image ({})...'.format(image))
231 num_before = len(list(glance.images.list()))
232 glance.images.delete(image)
233
234@@ -254,6 +279,8 @@
235
236 def create_instance(self, nova, image_name, instance_name, flavor):
237 """Create the specified instance."""
238+ self.log.debug('Creating instance '
239+ '({}|{}|{})'.format(instance_name, image_name, flavor))
240 image = nova.images.find(name=image_name)
241 flavor = nova.flavors.find(name=flavor)
242 instance = nova.servers.create(name=instance_name, image=image,
243@@ -276,6 +303,11 @@
244
245 def delete_instance(self, nova, instance):
246 """Delete the specified instance."""
247+
248+ # /!\ DEPRECATION WARNING
249+ self.log.warn('/!\\ DEPRECATION WARNING: use '
250+ 'delete_resource instead of delete_instance.')
251+ self.log.debug('Deleting instance ({})...'.format(instance))
252 num_before = len(list(nova.servers.list()))
253 nova.servers.delete(instance)
254
255@@ -292,3 +324,90 @@
256 return False
257
258 return True
259+
260+ def create_or_get_keypair(self, nova, keypair_name="testkey"):
261+ """Create a new keypair, or return pointer if it already exists."""
262+ try:
263+ _keypair = nova.keypairs.get(keypair_name)
264+ self.log.debug('Keypair ({}) already exists, '
265+ 'using it.'.format(keypair_name))
266+ return _keypair
267+ except:
268+ self.log.debug('Keypair ({}) does not exist, '
269+ 'creating it.'.format(keypair_name))
270+
271+ _keypair = nova.keypairs.create(name=keypair_name)
272+ return _keypair
273+
274+ def delete_resource(self, resource, resource_id,
275+ msg="resource", max_wait=120):
276+ """Delete one openstack resource, such as one instance, keypair,
277+ image, volume, stack, etc., and confirm deletion within max wait time.
278+
279+ :param resource: pointer to os resource type, ex:glance_client.images
280+ :param resource_id: unique name or id for the openstack resource
281+ :param msg: text to identify purpose in logging
282+ :param max_wait: maximum wait time in seconds
283+ :returns: True if successful, otherwise False
284+ """
285+ num_before = len(list(resource.list()))
286+ resource.delete(resource_id)
287+
288+ tries = 0
289+ num_after = len(list(resource.list()))
290+ while num_after != (num_before - 1) and tries < (max_wait / 4):
291+ self.log.debug('{} delete check: '
292+ '{} [{}:{}] {}'.format(msg, tries,
293+ num_before,
294+ num_after,
295+ resource_id))
296+ time.sleep(4)
297+ num_after = len(list(resource.list()))
298+ tries += 1
299+
300+ self.log.debug('{}: expected, actual count = {}, '
301+ '{}'.format(msg, num_before - 1, num_after))
302+
303+ if num_after == (num_before - 1):
304+ return True
305+ else:
306+ self.log.error('{} delete timed out'.format(msg))
307+ return False
308+
309+ def resource_reaches_status(self, resource, resource_id,
310+ expected_stat='available',
311+ msg='resource', max_wait=120):
312+ """Wait for an openstack resources status to reach an
313+ expected status within a specified time. Useful to confirm that
314+ nova instances, cinder vols, snapshots, glance images, heat stacks
315+ and other resources eventually reach the expected status.
316+
317+ :param resource: pointer to os resource type, ex: heat_client.stacks
318+ :param resource_id: unique id for the openstack resource
319+ :param expected_stat: status to expect resource to reach
320+ :param msg: text to identify purpose in logging
321+ :param max_wait: maximum wait time in seconds
322+ :returns: True if successful, False if status is not reached
323+ """
324+
325+ tries = 0
326+ resource_stat = resource.get(resource_id).status
327+ while resource_stat != expected_stat and tries < (max_wait / 4):
328+ self.log.debug('{} status check: '
329+ '{} [{}:{}] {}'.format(msg, tries,
330+ resource_stat,
331+ expected_stat,
332+ resource_id))
333+ time.sleep(4)
334+ resource_stat = resource.get(resource_id).status
335+ tries += 1
336+
337+ self.log.debug('{}: expected, actual status = {}, '
338+ '{}'.format(msg, resource_stat, expected_stat))
339+
340+ if resource_stat == expected_stat:
341+ return True
342+ else:
343+ self.log.debug('{} never reached expected status: '
344+ '{}'.format(resource_id, expected_stat))
345+ return False
346
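The new `delete_resource` and `resource_reaches_status` helpers added above share one poll-with-timeout loop. Stripped of logging, the deletion check amounts to roughly the following; the fake-client shape (`list()`/`delete()`) mirrors the python client objects the helper is handed, but the code here is an illustrative reduction, not the charm-helpers implementation:

```python
import time


def delete_resource(resource, resource_id, max_wait=120, interval=4):
    """Delete one resource, then poll until the list count drops by
    one or max_wait seconds elapse. Returns True on confirmed delete."""
    num_before = len(list(resource.list()))
    resource.delete(resource_id)

    for _ in range(max_wait // interval):
        if len(list(resource.list())) == num_before - 1:
            return True
        time.sleep(interval)
    # One last look after the final sleep
    return len(list(resource.list())) == num_before - 1
```

Counting list length before and after, rather than searching for the id, is what lets one helper cover instances, images, keypairs and stacks alike, at the cost of assuming nothing else creates or deletes resources concurrently.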
347=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
348--- hooks/charmhelpers/contrib/openstack/context.py 2015-05-11 07:26:16 +0000
349+++ hooks/charmhelpers/contrib/openstack/context.py 2015-06-12 17:20:17 +0000
350@@ -240,7 +240,7 @@
351 if self.relation_prefix:
352 password_setting = self.relation_prefix + '_password'
353
354- for rid in relation_ids('shared-db'):
355+ for rid in relation_ids(self.interfaces[0]):
356 for unit in related_units(rid):
357 rdata = relation_get(rid=rid, unit=unit)
358 host = rdata.get('db_host')
359
360=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
361--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-06-11 12:57:33 +0000
362+++ hooks/charmhelpers/contrib/openstack/neutron.py 2015-06-12 17:20:17 +0000
363@@ -172,14 +172,16 @@
364 'services': ['calico-felix',
365 'bird',
366 'neutron-dhcp-agent',
367- 'nova-api-metadata'],
368+ 'nova-api-metadata',
369+ 'etcd'],
370 'packages': [[headers_package()] + determine_dkms_package(),
371 ['calico-compute',
372 'bird',
373 'neutron-dhcp-agent',
374- 'nova-api-metadata']],
375- 'server_packages': ['neutron-server', 'calico-control'],
376- 'server_services': ['neutron-server']
377+ 'nova-api-metadata',
378+ 'etcd']],
379+ 'server_packages': ['neutron-server', 'calico-control', 'etcd'],
380+ 'server_services': ['neutron-server', 'etcd']
381 },
382 'vsp': {
383 'config': '/etc/neutron/plugins/nuage/nuage_plugin.ini',
384
385=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
386--- hooks/charmhelpers/contrib/openstack/utils.py 2015-06-11 12:57:33 +0000
387+++ hooks/charmhelpers/contrib/openstack/utils.py 2015-06-12 17:20:17 +0000
388@@ -79,6 +79,7 @@
389 ('trusty', 'icehouse'),
390 ('utopic', 'juno'),
391 ('vivid', 'kilo'),
392+ ('wily', 'liberty'),
393 ])
394
395
396@@ -91,6 +92,7 @@
397 ('2014.1', 'icehouse'),
398 ('2014.2', 'juno'),
399 ('2015.1', 'kilo'),
400+ ('2015.2', 'liberty'),
401 ])
402
403 # The ugly duckling
404@@ -113,6 +115,7 @@
405 ('2.2.0', 'juno'),
406 ('2.2.1', 'kilo'),
407 ('2.2.2', 'kilo'),
408+ ('2.3.0', 'liberty'),
409 ])
410
411 DEFAULT_LOOPBACK_SIZE = '5G'
412@@ -321,6 +324,9 @@
413 'kilo': 'trusty-updates/kilo',
414 'kilo/updates': 'trusty-updates/kilo',
415 'kilo/proposed': 'trusty-proposed/kilo',
416+ 'liberty': 'trusty-updates/liberty',
417+ 'liberty/updates': 'trusty-updates/liberty',
418+ 'liberty/proposed': 'trusty-proposed/liberty',
419 }
420
421 try:
422@@ -641,7 +647,8 @@
423 subprocess.check_call(cmd)
424 except subprocess.CalledProcessError:
425 package = os.path.basename(package_dir)
426- error_out("Error updating {} from global-requirements.txt".format(package))
427+ error_out("Error updating {} from "
428+ "global-requirements.txt".format(package))
429 os.chdir(orig_dir)
430
431
432
433=== modified file 'hooks/charmhelpers/core/host.py'
434--- hooks/charmhelpers/core/host.py 2015-06-11 12:57:33 +0000
435+++ hooks/charmhelpers/core/host.py 2015-06-12 17:20:17 +0000
436@@ -24,6 +24,7 @@
437 import os
438 import re
439 import pwd
440+import glob
441 import grp
442 import random
443 import string
444@@ -269,6 +270,21 @@
445 return None
446
447
448+def path_hash(path):
449+ """
450+ Generate a hash checksum of all files matching 'path'. Standard wildcards
451+ like '*' and '?' are supported, see documentation for the 'glob' module for
452+ more information.
453+
454+ :return: dict: A { filename: hash } dictionary for all matched files.
455+ Empty if none found.
456+ """
457+ return {
458+ filename: file_hash(filename)
459+ for filename in glob.iglob(path)
460+ }
461+
462+
463 def check_hash(path, checksum, hash_type='md5'):
464 """
465 Validate a file using a cryptographic checksum.
466@@ -296,23 +312,25 @@
467
468 @restart_on_change({
469 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
470+ '/etc/apache/sites-enabled/*': [ 'apache2' ]
471 })
472- def ceph_client_changed():
473+ def config_changed():
474 pass # your code here
475
476 In this example, the cinder-api and cinder-volume services
477 would be restarted if /etc/ceph/ceph.conf is changed by the
478- ceph_client_changed function.
479+ ceph_client_changed function. The apache2 service would be
480+ restarted if any file matching the pattern got changed, created
481+ or removed. Standard wildcards are supported, see documentation
482+ for the 'glob' module for more information.
483 """
484 def wrap(f):
485 def wrapped_f(*args, **kwargs):
486- checksums = {}
487- for path in restart_map:
488- checksums[path] = file_hash(path)
489+ checksums = {path: path_hash(path) for path in restart_map}
490 f(*args, **kwargs)
491 restarts = []
492 for path in restart_map:
493- if checksums[path] != file_hash(path):
494+ if path_hash(path) != checksums[path]:
495 restarts += restart_map[path]
496 services_list = list(OrderedDict.fromkeys(restarts))
497 if not stopstart:
498
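The host.py change teaches `restart_on_change` to accept glob patterns by hashing every matched file before and after the wrapped call, so a file appearing or disappearing also triggers a restart. The essential mechanics in a standalone, illustrative form (using hashlib directly in place of charm-helpers' `file_hash`, and taking the restart callable as a parameter for demonstration):

```python
import glob
import hashlib


def path_hash(path):
    """Return {filename: md5 hex digest} for every file matching the
    glob pattern. An empty dict (no matches) still compares correctly:
    created or removed files change the dict's keys."""
    return {
        filename: hashlib.md5(open(filename, 'rb').read()).hexdigest()
        for filename in glob.iglob(path)
    }


def restart_on_change(restart_map, restart_func):
    """Decorator factory: snapshot hashes, run the wrapped function,
    then restart the services mapped to any path whose hashes changed."""
    def wrap(f):
        def wrapped_f(*args, **kwargs):
            checksums = {path: path_hash(path) for path in restart_map}
            result = f(*args, **kwargs)
            for path, services in restart_map.items():
                if path_hash(path) != checksums[path]:
                    for svc in services:
                        restart_func(svc)
            return result
        return wrapped_f
    return wrap
```

A plain filename is itself a valid glob pattern matching only that file, which is why the diff can replace the old single-file hashing with `path_hash` without changing behaviour for existing callers.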
499=== modified file 'metadata.yaml'
500--- metadata.yaml 2015-01-09 16:27:03 +0000
501+++ metadata.yaml 2015-06-12 17:20:17 +0000
502@@ -8,7 +8,7 @@
503 framework should be easily expandable to collect for other needs. To that
504 effect, Ceilometer should be able to share collected data with a variety
505 of consumers.
506-categories:
507+tags:
508 - miscellaneous
509 - openstack
510 provides:
511
512=== modified file 'tests/00-setup'
513--- tests/00-setup 2015-02-17 14:09:16 +0000
514+++ tests/00-setup 2015-06-12 17:20:17 +0000
515@@ -5,7 +5,6 @@
516 sudo add-apt-repository --yes ppa:juju/stable
517 sudo apt-get update --yes
518 sudo apt-get install --yes python-amulet \
519- python-neutronclient \
520+ python-distro-info \
521 python-keystoneclient \
522- python-novaclient \
523- python-glanceclient
524+ python-ceilometerclient
525
526=== modified file 'tests/019-basic-vivid-kilo' (properties changed: -x to +x)
527=== modified file 'tests/README'
528--- tests/README 2015-02-17 14:09:16 +0000
529+++ tests/README 2015-06-12 17:20:17 +0000
530@@ -1,12 +1,42 @@
531 This directory provides Amulet tests that focus on verification of
532 ceilometer deployments.
533
534+test_* methods are called in lexical sort order.
535+
536+Test name convention to ensure desired test order:
537+ 1xx service and endpoint checks
538+ 2xx relation checks
539+ 3xx config checks
540+ 4xx functional checks
541+ 9xx restarts and other final checks
542+
543 In order to run tests, you'll need charm-tools installed (in addition to
544 juju, of course):
545 sudo add-apt-repository ppa:juju/stable
546 sudo apt-get update
547 sudo apt-get install charm-tools
548
549+Common uses of ceilometer relations in bundles:
550+ - - ceilometer
551+ - keystone:identity-service
552+ - [ ceilometer, rabbitmq-server ]
553+ - [ ceilometer, mongodb ]
554+ - [ ceilometer-agent, nova-compute ]
555+ - [ ceilometer-agent, ceilometer ]
556+
557+More detailed relations of ceilometer in a common deployment:
558+ relations:
559+ amqp:
560+ - rabbitmq-server
561+ ceilometer-service:
562+ - ceilometer-agent
563+ cluster:
564+ - ceilometer
565+ identity-service:
566+ - keystone
567+ shared-db:
568+ - mongodb
569+
570 If you use a web proxy server to access the web, you'll need to set the
571 AMULET_HTTP_PROXY environment variable to the http URL of the proxy server.
572
573
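The README convention documented above relies on test_* methods being called in lexical sort order, so the numeric prefix pins the sequence: cheap service checks run first and the disruptive restart test runs last. A quick illustration of why the 1xx/2xx/9xx scheme sorts as intended (the names are taken from this branch's basic_deployment.py):

```python
# Method names as they appear on the amulet test class
names = [
    'test_900_restart_on_config_change',
    'test_100_services',
    'test_300_ceilometer_config',
    'test_200_ceilometer_identity_relation',
    'test_400_api_connection',
]

# Lexical sort, which unittest-style discovery applies to method
# names, puts the service checks first and the restart test last.
ordered = sorted(names)
```

Zero-padding to three digits matters: without it, 'test_900' would sort before 'test_1000' if a fourth digit were ever needed, so the scheme implicitly caps each tier at three digits.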
574=== modified file 'tests/basic_deployment.py'
575--- tests/basic_deployment.py 2015-06-04 15:59:29 +0000
576+++ tests/basic_deployment.py 2015-06-12 17:20:17 +0000
577@@ -10,8 +10,8 @@
578
579 from charmhelpers.contrib.openstack.amulet.utils import (
580 OpenStackAmuletUtils,
581- DEBUG, # flake8: noqa
582- ERROR
583+ DEBUG,
584+ #ERROR
585 )
586
587 # Use DEBUG to turn on debug logging
588@@ -82,6 +82,10 @@
589 self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
590 self.mongodb_sentry = self.d.sentry.unit['mongodb/0']
591 self.compute_sentry = self.d.sentry.unit['nova-compute/0']
592+ u.log.debug('openstack release val: {}'.format(
593+ self._get_openstack_release()))
594+ u.log.debug('openstack release str: {}'.format(
595+ self._get_openstack_release_string()))
596
597 # Let things settle a bit before moving forward
598 time.sleep(30)
599@@ -97,34 +101,32 @@
600 endpoint_type='publicURL')
601 self.ceil = ceilclient.Client(endpoint=ep, token=self._get_token)
602
603- def test_api_connection(self):
604- """Simple api call to check service is up and responding"""
605- assert(self.ceil.meters.list() == [])
606-
607- def test_services(self):
608+ def test_100_services(self):
609 """Verify the expected services are running on the corresponding
610 service units."""
611- ceilometer_cmds = [
612- 'status ceilometer-agent-central',
613- 'status ceilometer-collector',
614- 'status ceilometer-api',
615- 'status ceilometer-alarm-evaluator',
616- 'status ceilometer-alarm-notifier',
617- 'status ceilometer-agent-notification',
618+ ceilometer_svcs = [
619+ 'ceilometer-agent-central',
620+ 'ceilometer-collector',
621+ 'ceilometer-api',
622+ 'ceilometer-alarm-evaluator',
623+ 'ceilometer-alarm-notifier',
624+ 'ceilometer-agent-notification',
625 ]
626- commands = {
627- self.ceil_sentry: ceilometer_cmds,
628- self.mysql_sentry: ['status mysql'],
629- self.keystone_sentry: ['status keystone'],
630- self.rabbitmq_sentry: ['service rabbitmq-server status'],
631- self.mongodb_sentry: ['status mongodb'],
632+ service_names = {
633+ self.ceil_sentry: ceilometer_svcs,
634+ self.mysql_sentry: ['mysql'],
635+ self.keystone_sentry: ['keystone'],
636+ self.rabbitmq_sentry: ['rabbitmq-server'],
637+ self.mongodb_sentry: ['mongodb'],
638 }
639- ret = u.validate_services(commands)
640+
641+ ret = u.validate_services_by_name(service_names)
642 if ret:
643 amulet.raise_status(amulet.FAIL, msg=ret)
644
645- def test_ceilometer_identity_relation(self):
646+ def test_200_ceilometer_identity_relation(self):
647 """Verify the ceilometer to keystone identity-service relation data"""
648+ u.log.debug('Checking service catalog endpoint data...')
649 unit = self.ceil_sentry
650 relation = ['identity-service', 'keystone:identity-service']
651 ceil_ip = unit.relation('identity-service',
652@@ -146,8 +148,9 @@
653 message = u.relation_error('ceilometer identity-service', ret)
654 amulet.raise_status(amulet.FAIL, msg=message)
655
656- def test_keystone_ceilometer_identity_relation(self):
657+ def test_201_keystone_ceilometer_identity_relation(self):
658 """Verify the keystone to ceilometer identity-service relation data"""
659+ u.log.debug('Checking keystone:ceilometer identity relation data...')
660 unit = self.keystone_sentry
661 relation = ['identity-service', 'ceilometer:identity-service']
662 id_relation = unit.relation('identity-service',
663@@ -172,8 +175,10 @@
664 message = u.relation_error('keystone identity-service', ret)
665 amulet.raise_status(amulet.FAIL, msg=message)
666
667- def test_keystone_ceilometer_identity_notes_relation(self):
668+ def test_202_keystone_ceilometer_identity_notes_relation(self):
669 """Verify ceilometer to keystone identity-notifications relation"""
670+ u.log.debug('Checking keystone:ceilometer '
671+ 'identity-notifications relation data...')
672 unit = self.keystone_sentry
673 relation = ['identity-service', 'ceilometer:identity-notifications']
674 expected = {
675@@ -184,8 +189,9 @@
676 message = u.relation_error('keystone identity-notifications', ret)
677 amulet.raise_status(amulet.FAIL, msg=message)
678
679- def test_ceilometer_amqp_relation(self):
680+ def test_203_ceilometer_amqp_relation(self):
681 """Verify the ceilometer to rabbitmq-server amqp relation data"""
682+ u.log.debug('Checking ceilometer:rabbitmq amqp relation data...')
683 unit = self.ceil_sentry
684 relation = ['amqp', 'rabbitmq-server:amqp']
685 expected = {
686@@ -199,8 +205,9 @@
687 message = u.relation_error('ceilometer amqp', ret)
688 amulet.raise_status(amulet.FAIL, msg=message)
689
690- def test_amqp_ceilometer_relation(self):
691+ def test_204_amqp_ceilometer_relation(self):
692 """Verify the rabbitmq-server to ceilometer amqp relation data"""
693+ u.log.debug('Checking rabbitmq:ceilometer amqp relation data...')
694 unit = self.rabbitmq_sentry
695 relation = ['amqp', 'ceilometer:amqp']
696 expected = {
697@@ -214,8 +221,9 @@
698 message = u.relation_error('rabbitmq amqp', ret)
699 amulet.raise_status(amulet.FAIL, msg=message)
700
701- def test_ceilometer_to_mongodb_relation(self):
702+ def test_205_ceilometer_to_mongodb_relation(self):
703 """Verify the ceilometer to mongodb relation data"""
704+ u.log.debug('Checking ceilometer:mongodb relation data...')
705 unit = self.ceil_sentry
706 relation = ['shared-db', 'mongodb:database']
707 expected = {
708@@ -228,8 +236,9 @@
709 message = u.relation_error('ceilometer shared-db', ret)
710 amulet.raise_status(amulet.FAIL, msg=message)
711
712- def test_mongodb_to_ceilometer_relation(self):
713+ def test_206_mongodb_to_ceilometer_relation(self):
714 """Verify the mongodb to ceilometer relation data"""
715+ u.log.debug('Checking mongodb:ceilometer relation data...')
716 unit = self.mongodb_sentry
717 relation = ['database', 'ceilometer:shared-db']
718 expected = {
719@@ -247,38 +256,9 @@
720 message = u.relation_error('mongodb database', ret)
721 amulet.raise_status(amulet.FAIL, msg=message)
722
723- def test_z_restart_on_config_change(self):
724- """Verify that the specified services are restarted when the config
725- is changed.
726-
727- Note(coreycb): The method name with the _z_ is a little odd
728- but it forces the test to run last. It just makes things
729- easier because restarting services requires re-authorization.
730- """
731- conf = '/etc/ceilometer/ceilometer.conf'
732- self.d.configure('ceilometer', {'debug': 'True'})
733- services = [
734- 'ceilometer-agent-central',
735- 'ceilometer-collector',
736- 'ceilometer-api',
737- 'ceilometer-alarm-evaluator',
738- 'ceilometer-alarm-notifier',
739- 'ceilometer-agent-notification',
740- ]
741-
742- time = 20
743- for s in services:
744- if not u.service_restarted(self.ceil_sentry, s, conf,
745- pgrep_full=True, sleep_time=time):
746- self.d.configure('ceilometer', {'debug': 'False'})
747- msg = "service {} didn't restart after config change".format(s)
748- amulet.raise_status(amulet.FAIL, msg=msg)
749- time = 0
750-
751- self.d.configure('ceilometer', {'debug': 'False'})
752-
753- def test_ceilometer_config(self):
754+ def test_300_ceilometer_config(self):
755 """Verify the data in the ceilometer config file."""
756+ u.log.debug('Checking ceilometer config file data...')
757 unit = self.ceil_sentry
758 rabbitmq_relation = self.rabbitmq_sentry.relation('amqp',
759 'ceilometer:amqp')
760@@ -330,3 +310,50 @@
761 if ret:
762 message = "ceilometer config error: {}".format(ret)
763 amulet.raise_status(amulet.FAIL, msg=message)
764+
765+ def test_400_api_connection(self):
766+ """Simple api calls to check service is up and responding"""
767+ u.log.debug('Checking api functionality...')
768+ assert(self.ceil.samples.list() == [])
769+ assert(self.ceil.meters.list() == [])
770+
771+ def test_900_restart_on_config_change(self):
772+ """Verify that the specified services are restarted when the config
773+ is changed.
774+ """
775+ sentry = self.ceil_sentry
776+ juju_service = 'ceilometer'
777+
778+ # Expected default and alternate values
779+ set_default = {'debug': 'False'}
780+ set_alternate = {'debug': 'True'}
781+
782+ # Config file affected by juju set config change
783+ conf_file = '/etc/ceilometer/ceilometer.conf'
784+
785+ # Services which are expected to restart upon config change
786+ services = [
787+ 'ceilometer-agent-central',
788+ 'ceilometer-collector',
789+ 'ceilometer-api',
790+ 'ceilometer-alarm-evaluator',
791+ 'ceilometer-alarm-notifier',
792+ 'ceilometer-agent-notification',
793+ ]
794+
795+ # Make config change, check for service restarts
796+ u.log.debug('Making config change on {}...'.format(juju_service))
797+ self.d.configure(juju_service, set_alternate)
798+
799+ sleep_time = 40
800+ for s in services:
801+ u.log.debug("Checking that service restarted: {}".format(s))
802+ if not u.service_restarted(sentry, s,
803+ conf_file, sleep_time=sleep_time,
804+ pgrep_full=True):
805+ self.d.configure(juju_service, set_default)
806+ msg = "service {} didn't restart after config change".format(s)
807+ amulet.raise_status(amulet.FAIL, msg=msg)
808+ sleep_time = 0
809+
810+ self.d.configure(juju_service, set_default)
811
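For review context, the restart-check loop in test_900 uses a decaying settle time: only the first service check waits the full window, and later checks drop sleep_time to 0 because the settle period has already elapsed. A standalone sketch of that pattern (`service_restarted` here is a trivial stand-in for the charmhelpers utility of the same name, and the settle default is 0 only so the demo runs instantly):

```python
import time

def service_restarted(sentry, service, conf_file, sleep_time=0,
                      pgrep_full=True):
    """Stand-in for u.service_restarted: the real helper compares process
    start times against the config file mtime after settling."""
    time.sleep(sleep_time)
    return True  # pretend every service restarted cleanly

def check_restarts(sentry, services, conf_file, settle=0):
    """Check each service, waiting `settle` seconds before the first
    check only; return the list of services that failed to restart."""
    sleep_time = settle
    failed = []
    for s in services:
        if not service_restarted(sentry, s, conf_file,
                                 sleep_time=sleep_time, pgrep_full=True):
            failed.append(s)
        sleep_time = 0  # subsequent checks need no additional settle time
    return failed

print(check_restarts(None, ['ceilometer-api', 'ceilometer-collector'],
                     '/etc/ceilometer/ceilometer.conf'))  # []
```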
812=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
813--- tests/charmhelpers/contrib/amulet/utils.py 2015-04-23 14:55:32 +0000
814+++ tests/charmhelpers/contrib/amulet/utils.py 2015-06-12 17:20:17 +0000
815@@ -15,13 +15,15 @@
816 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
817
818 import ConfigParser
819+import distro_info
820 import io
821 import logging
822+import os
823 import re
824+import six
825 import sys
826 import time
827-
828-import six
829+import urlparse
830
831
832 class AmuletUtils(object):
833@@ -33,6 +35,7 @@
834
835 def __init__(self, log_level=logging.ERROR):
836 self.log = self.get_logger(level=log_level)
837+ self.ubuntu_releases = self.get_ubuntu_releases()
838
839 def get_logger(self, name="amulet-logger", level=logging.DEBUG):
840 """Get a logger object that will log to stdout."""
841@@ -70,12 +73,44 @@
842 else:
843 return False
844
845+ def get_ubuntu_release_from_sentry(self, sentry_unit):
846+ """Get Ubuntu release codename from sentry unit.
847+
848+ :param sentry_unit: amulet sentry/service unit pointer
849+ :returns: two strings - release codename, failure message (or None)
850+ """
851+ msg = None
852+ cmd = 'lsb_release -cs'
853+ release, code = sentry_unit.run(cmd)
854+ if code == 0:
855+ self.log.debug('{} lsb_release: {}'.format(
856+ sentry_unit.info['unit_name'], release))
857+ else:
858+ msg = ('{} `{}` returned {} '
859+ '{}'.format(sentry_unit.info['unit_name'],
860+ cmd, release, code))
861+ if release not in self.ubuntu_releases:
862+ msg = ("Release ({}) not found in Ubuntu releases "
863+ "({})".format(release, self.ubuntu_releases))
864+ return release, msg
865+
866 def validate_services(self, commands):
867- """Validate services.
868-
869- Verify the specified services are running on the corresponding
870+ """Validate that lists of commands succeed on service units. Can be
871+ used to verify system services are running on the corresponding
872 service units.
873- """
874+
875+ :param commands: dict with sentry keys and arbitrary command list vals
876+ :returns: None if successful, Failure string message otherwise
877+ """
878+ self.log.debug('Checking status of system services...')
879+
880+ # /!\ DEPRECATION WARNING (beisner):
881+ # New and existing tests should be rewritten to use
882+ # validate_services_by_name() as it is aware of init systems.
883+ self.log.warn('/!\\ DEPRECATION WARNING: use '
884+ 'validate_services_by_name instead of validate_services '
885+ 'due to init system differences.')
886+
887 for k, v in six.iteritems(commands):
888 for cmd in v:
889 output, code = k.run(cmd)
890@@ -86,6 +121,41 @@
891 return "command `{}` returned {}".format(cmd, str(code))
892 return None
893
894+ def validate_services_by_name(self, sentry_services):
895+ """Validate system service status by service name, automatically
896+ detecting init system based on Ubuntu release codename.
897+
898+ :param sentry_services: dict with sentry keys and svc list values
899+ :returns: None if successful, Failure string message otherwise
900+ """
901+ self.log.debug('Checking status of system services...')
902+
903+ # Point at which systemd became a thing
904+ systemd_switch = self.ubuntu_releases.index('vivid')
905+
906+ for sentry_unit, services_list in six.iteritems(sentry_services):
907+ # Get lsb_release codename from unit
908+ release, ret = self.get_ubuntu_release_from_sentry(sentry_unit)
909+ if ret:
910+ return ret
911+
912+ for service_name in services_list:
913+ if (self.ubuntu_releases.index(release) >= systemd_switch or
914+ service_name == "rabbitmq-server"):
915+ # init is systemd
916+ cmd = 'sudo service {} status'.format(service_name)
917+ elif self.ubuntu_releases.index(release) < systemd_switch:
918+ # init is upstart
919+ cmd = 'sudo status {}'.format(service_name)
920+
921+ output, code = sentry_unit.run(cmd)
922+ self.log.debug('{} `{}` returned '
923+ '{}'.format(sentry_unit.info['unit_name'],
924+ cmd, code))
925+ if code != 0:
926+ return "command `{}` returned {}".format(cmd, str(code))
927+ return None
928+
929 def _get_config(self, unit, filename):
930 """Get a ConfigParser object for parsing a unit's config file."""
931 file_contents = unit.file_contents(filename)
932@@ -104,6 +174,9 @@
933 Verify that the specified section of the config file contains
934 the expected option key:value pairs.
935 """
936+ self.log.debug('Validating config file data ({} in {} on {})'
937+ '...'.format(section, config_file,
938+ sentry_unit.info['unit_name']))
939 config = self._get_config(sentry_unit, config_file)
940
941 if section != 'DEFAULT' and not config.has_section(section):
942@@ -321,3 +394,15 @@
943
944 def endpoint_error(self, name, data):
945 return 'unexpected endpoint data in {} - {}'.format(name, data)
946+
947+ def get_ubuntu_releases(self):
948+ """Return a list of all Ubuntu releases in order of release."""
949+ _d = distro_info.UbuntuDistroInfo()
950+ _release_list = _d.all
951+ self.log.debug('Ubuntu release list: {}'.format(_release_list))
952+ return _release_list
953+
954+ def file_to_url(self, file_rel_path):
955+ """Convert a relative file path to a file URL."""
956+ _abs_path = os.path.abspath(file_rel_path)
957+ return urlparse.urlparse(_abs_path, scheme='file').geturl()
958
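The init-system branch in the new validate_services_by_name() hinges on the position of a release codename relative to vivid, the first systemd release (with rabbitmq-server special-cased). A minimal standalone sketch of that selection logic, with a hard-coded release list for illustration (the real code derives it from distro_info):

```python
# Illustrative subset of the Ubuntu release sequence; the helper builds
# the full list via distro_info.UbuntuDistroInfo().all.
UBUNTU_RELEASES = ['precise', 'trusty', 'utopic', 'vivid', 'wily']
SYSTEMD_SWITCH = UBUNTU_RELEASES.index('vivid')  # first systemd release

def status_cmd(release, service_name):
    """Return the status command appropriate to the unit's init system."""
    if (UBUNTU_RELEASES.index(release) >= SYSTEMD_SWITCH or
            service_name == 'rabbitmq-server'):
        # systemd (or rabbitmq-server, which ships a sysv/service wrapper)
        return 'sudo service {} status'.format(service_name)
    # upstart
    return 'sudo status {}'.format(service_name)

print(status_cmd('trusty', 'ceilometer-api'))  # sudo status ceilometer-api
print(status_cmd('vivid', 'ceilometer-api'))   # sudo service ceilometer-api status
```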
959=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
960--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-05-11 07:26:16 +0000
961+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-12 17:20:17 +0000
962@@ -110,7 +110,8 @@
963 (self.precise_essex, self.precise_folsom, self.precise_grizzly,
964 self.precise_havana, self.precise_icehouse,
965 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
966- self.trusty_kilo, self.vivid_kilo) = range(10)
967+ self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
968+ self.wily_liberty) = range(12)
969
970 releases = {
971 ('precise', None): self.precise_essex,
972@@ -121,8 +122,10 @@
973 ('trusty', None): self.trusty_icehouse,
974 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
975 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
976+ ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
977 ('utopic', None): self.utopic_juno,
978- ('vivid', None): self.vivid_kilo}
979+ ('vivid', None): self.vivid_kilo,
980+ ('wily', None): self.wily_liberty}
981 return releases[(self.series, self.openstack)]
982
983 def _get_openstack_release_string(self):
984@@ -138,6 +141,7 @@
985 ('trusty', 'icehouse'),
986 ('utopic', 'juno'),
987 ('vivid', 'kilo'),
988+ ('wily', 'liberty'),
989 ])
990 if self.openstack:
991 os_origin = self.openstack.split(':')[1]
992
993=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
994--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-02-17 14:09:16 +0000
995+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-06-12 17:20:17 +0000
996@@ -16,15 +16,15 @@
997
998 import logging
999 import os
1000+import six
1001 import time
1002 import urllib
1003
1004 import glanceclient.v1.client as glance_client
1005+import heatclient.v1.client as heat_client
1006 import keystoneclient.v2_0 as keystone_client
1007 import novaclient.v1_1.client as nova_client
1008
1009-import six
1010-
1011 from charmhelpers.contrib.amulet.utils import (
1012 AmuletUtils
1013 )
1014@@ -37,7 +37,7 @@
1015 """OpenStack amulet utilities.
1016
1017 This class inherits from AmuletUtils and has additional support
1018- that is specifically for use by OpenStack charms.
1019+ that is specifically for use by OpenStack charm tests.
1020 """
1021
1022 def __init__(self, log_level=ERROR):
1023@@ -51,6 +51,8 @@
1024 Validate actual endpoint data vs expected endpoint data. The ports
1025 are used to find the matching endpoint.
1026 """
1027+ self.log.debug('Validating endpoint data...')
1028+ self.log.debug('actual: {}'.format(repr(endpoints)))
1029 found = False
1030 for ep in endpoints:
1031 self.log.debug('endpoint: {}'.format(repr(ep)))
1032@@ -77,6 +79,7 @@
1033 Validate a list of actual service catalog endpoints vs a list of
1034 expected service catalog endpoints.
1035 """
1036+ self.log.debug('Validating service catalog endpoint data...')
1037 self.log.debug('actual: {}'.format(repr(actual)))
1038 for k, v in six.iteritems(expected):
1039 if k in actual:
1040@@ -93,6 +96,7 @@
1041 Validate a list of actual tenant data vs list of expected tenant
1042 data.
1043 """
1044+ self.log.debug('Validating tenant data...')
1045 self.log.debug('actual: {}'.format(repr(actual)))
1046 for e in expected:
1047 found = False
1048@@ -114,6 +118,7 @@
1049 Validate a list of actual role data vs a list of expected role
1050 data.
1051 """
1052+ self.log.debug('Validating role data...')
1053 self.log.debug('actual: {}'.format(repr(actual)))
1054 for e in expected:
1055 found = False
1056@@ -134,6 +139,7 @@
1057 Validate a list of actual user data vs a list of expected user
1058 data.
1059 """
1060+ self.log.debug('Validating user data...')
1061 self.log.debug('actual: {}'.format(repr(actual)))
1062 for e in expected:
1063 found = False
1064@@ -155,17 +161,20 @@
1065
1066 Validate a list of actual flavors vs a list of expected flavors.
1067 """
1068+ self.log.debug('Validating flavor data...')
1069 self.log.debug('actual: {}'.format(repr(actual)))
1070 act = [a.name for a in actual]
1071 return self._validate_list_data(expected, act)
1072
1073 def tenant_exists(self, keystone, tenant):
1074 """Return True if tenant exists."""
1075+ self.log.debug('Checking if tenant exists ({})...'.format(tenant))
1076 return tenant in [t.name for t in keystone.tenants.list()]
1077
1078 def authenticate_keystone_admin(self, keystone_sentry, user, password,
1079 tenant):
1080 """Authenticates admin user with the keystone admin endpoint."""
1081+ self.log.debug('Authenticating keystone admin...')
1082 unit = keystone_sentry
1083 service_ip = unit.relation('shared-db',
1084 'mysql:shared-db')['private-address']
1085@@ -175,6 +184,7 @@
1086
1087 def authenticate_keystone_user(self, keystone, user, password, tenant):
1088 """Authenticates a regular user with the keystone public endpoint."""
1089+ self.log.debug('Authenticating keystone user ({})...'.format(user))
1090 ep = keystone.service_catalog.url_for(service_type='identity',
1091 endpoint_type='publicURL')
1092 return keystone_client.Client(username=user, password=password,
1093@@ -182,12 +192,21 @@
1094
1095 def authenticate_glance_admin(self, keystone):
1096 """Authenticates admin user with glance."""
1097+ self.log.debug('Authenticating glance admin...')
1098 ep = keystone.service_catalog.url_for(service_type='image',
1099 endpoint_type='adminURL')
1100 return glance_client.Client(ep, token=keystone.auth_token)
1101
1102+ def authenticate_heat_admin(self, keystone):
1103+ """Authenticates the admin user with heat."""
1104+ self.log.debug('Authenticating heat admin...')
1105+ ep = keystone.service_catalog.url_for(service_type='orchestration',
1106+ endpoint_type='publicURL')
1107+ return heat_client.Client(endpoint=ep, token=keystone.auth_token)
1108+
1109 def authenticate_nova_user(self, keystone, user, password, tenant):
1110 """Authenticates a regular user with nova-api."""
1111+ self.log.debug('Authenticating nova user ({})...'.format(user))
1112 ep = keystone.service_catalog.url_for(service_type='identity',
1113 endpoint_type='publicURL')
1114 return nova_client.Client(username=user, api_key=password,
1115@@ -195,6 +214,7 @@
1116
1117 def create_cirros_image(self, glance, image_name):
1118 """Download the latest cirros image and upload it to glance."""
1119+ self.log.debug('Creating glance image ({})...'.format(image_name))
1120 http_proxy = os.getenv('AMULET_HTTP_PROXY')
1121 self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
1122 if http_proxy:
1123@@ -235,6 +255,11 @@
1124
1125 def delete_image(self, glance, image):
1126 """Delete the specified image."""
1127+
1128+ # /!\ DEPRECATION WARNING
1129+ self.log.warn('/!\\ DEPRECATION WARNING: use '
1130+ 'delete_resource instead of delete_image.')
1131+ self.log.debug('Deleting glance image ({})...'.format(image))
1132 num_before = len(list(glance.images.list()))
1133 glance.images.delete(image)
1134
1135@@ -254,6 +279,8 @@
1136
1137 def create_instance(self, nova, image_name, instance_name, flavor):
1138 """Create the specified instance."""
1139+ self.log.debug('Creating instance '
1140+ '({}|{}|{})'.format(instance_name, image_name, flavor))
1141 image = nova.images.find(name=image_name)
1142 flavor = nova.flavors.find(name=flavor)
1143 instance = nova.servers.create(name=instance_name, image=image,
1144@@ -276,6 +303,11 @@
1145
1146 def delete_instance(self, nova, instance):
1147 """Delete the specified instance."""
1148+
1149+ # /!\ DEPRECATION WARNING
1150+ self.log.warn('/!\\ DEPRECATION WARNING: use '
1151+ 'delete_resource instead of delete_instance.')
1152+ self.log.debug('Deleting instance ({})...'.format(instance))
1153 num_before = len(list(nova.servers.list()))
1154 nova.servers.delete(instance)
1155
1156@@ -292,3 +324,90 @@
1157 return False
1158
1159 return True
1160+
1161+ def create_or_get_keypair(self, nova, keypair_name="testkey"):
1162+ """Create a new keypair, or return pointer if it already exists."""
1163+ try:
1164+ _keypair = nova.keypairs.get(keypair_name)
1165+ self.log.debug('Keypair ({}) already exists, '
1166+ 'using it.'.format(keypair_name))
1167+ return _keypair
1168+ except Exception:
1169+ self.log.debug('Keypair ({}) does not exist, '
1170+ 'creating it.'.format(keypair_name))
1171+
1172+ _keypair = nova.keypairs.create(name=keypair_name)
1173+ return _keypair
1174+
1175+ def delete_resource(self, resource, resource_id,
1176+ msg="resource", max_wait=120):
1177+ """Delete one openstack resource, such as one instance, keypair,
1178+ image, volume, stack, etc., and confirm deletion within max wait time.
1179+
1180+ :param resource: pointer to os resource type, ex:glance_client.images
1181+ :param resource_id: unique name or id for the openstack resource
1182+ :param msg: text to identify purpose in logging
1183+ :param max_wait: maximum wait time in seconds
1184+ :returns: True if successful, otherwise False
1185+ """
1186+ num_before = len(list(resource.list()))
1187+ resource.delete(resource_id)
1188+
1189+ tries = 0
1190+ num_after = len(list(resource.list()))
1191+ while num_after != (num_before - 1) and tries < (max_wait / 4):
1192+ self.log.debug('{} delete check: '
1193+ '{} [{}:{}] {}'.format(msg, tries,
1194+ num_before,
1195+ num_after,
1196+ resource_id))
1197+ time.sleep(4)
1198+ num_after = len(list(resource.list()))
1199+ tries += 1
1200+
1201+ self.log.debug('{}: expected, actual count = {}, '
1202+ '{}'.format(msg, num_before - 1, num_after))
1203+
1204+ if num_after == (num_before - 1):
1205+ return True
1206+ else:
1207+ self.log.error('{} delete timed out'.format(msg))
1208+ return False
1209+
1210+ def resource_reaches_status(self, resource, resource_id,
1211+ expected_stat='available',
1212+ msg='resource', max_wait=120):
1213+ """Wait for an openstack resources status to reach an
1214+ expected status within a specified time. Useful to confirm that
1215+ nova instances, cinder vols, snapshots, glance images, heat stacks
1216+ and other resources eventually reach the expected status.
1217+
1218+ :param resource: pointer to os resource type, ex: heat_client.stacks
1219+ :param resource_id: unique id for the openstack resource
1220+ :param expected_stat: status to expect resource to reach
1221+ :param msg: text to identify purpose in logging
1222+ :param max_wait: maximum wait time in seconds
1223+ :returns: True if successful, False if status is not reached
1224+ """
1225+
1226+ tries = 0
1227+ resource_stat = resource.get(resource_id).status
1228+ while resource_stat != expected_stat and tries < (max_wait / 4):
1229+ self.log.debug('{} status check: '
1230+ '{} [{}:{}] {}'.format(msg, tries,
1231+ resource_stat,
1232+ expected_stat,
1233+ resource_id))
1234+ time.sleep(4)
1235+ resource_stat = resource.get(resource_id).status
1236+ tries += 1
1237+
1238+ self.log.debug('{}: expected, actual status = {}, '
1239+ '{}'.format(msg, expected_stat, resource_stat))
1240+
1241+ if resource_stat == expected_stat:
1242+ return True
1243+ else:
1244+ self.log.debug('{} never reached expected status: '
1245+ '{}'.format(resource_id, expected_stat))
1246+ return False
