Merge lp:~1chb1n/charms/trusty/glance/next-amulet-update into lp:~openstack-charmers-archive/charms/trusty/glance/trunk

Proposed by Ryan Beisner on 2015-06-30
Status: Superseded
Proposed branch: lp:~1chb1n/charms/trusty/glance/next-amulet-update
Merge into: lp:~openstack-charmers-archive/charms/trusty/glance/trunk
Diff against target: 3606 lines (+1746/-542) (has conflicts)
28 files modified
Makefile (+10/-15)
README.md (+77/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+46/-2)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+6/-2)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+122/-3)
hooks/charmhelpers/contrib/openstack/context.py (+1/-1)
hooks/charmhelpers/contrib/openstack/neutron.py (+16/-9)
hooks/charmhelpers/contrib/openstack/utils.py (+82/-22)
hooks/charmhelpers/contrib/python/packages.py (+30/-5)
hooks/charmhelpers/core/hookenv.py (+231/-38)
hooks/charmhelpers/core/host.py (+25/-7)
hooks/charmhelpers/core/services/base.py (+43/-19)
hooks/charmhelpers/fetch/__init__.py (+1/-1)
hooks/charmhelpers/fetch/giturl.py (+7/-5)
hooks/glance_relations.py (+3/-3)
hooks/glance_utils.py (+50/-4)
metadata.yaml (+1/-1)
tests/00-setup (+5/-1)
tests/020-basic-trusty-liberty (+11/-0)
tests/021-basic-wily-liberty (+9/-0)
tests/README (+9/-0)
tests/basic_deployment.py (+288/-326)
tests/charmhelpers/contrib/amulet/utils.py (+228/-10)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+41/-5)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+358/-51)
tests/tests.yaml (+18/-0)
unit_tests/test_glance_relations.py (+11/-5)
unit_tests/test_glance_utils.py (+17/-7)
Text conflict in README.md
Text conflict in hooks/charmhelpers/contrib/hahelpers/cluster.py
Text conflict in tests/basic_deployment.py
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/glance/next-amulet-update
Reviewer: Corey Bryant — Date requested: 2015-06-30 — Status: Pending
Review via email: mp+263411@code.launchpad.net

This proposal has been superseded by a proposal from 2015-06-30.

Description of the change

Update Amulet tests for Kilo and prepare for Wily. Sync hooks/charmhelpers and tests/charmhelpers.
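Among the utilities pulled in by the tests/charmhelpers sync are generic `delete_resource` and `resource_reaches_status` helpers (see the diff against tests/charmhelpers/contrib/openstack/amulet/utils.py below). A minimal self-contained sketch of the delete-and-poll pattern they use, with a stub manager standing in for a real OpenStack client (glance images, nova servers, etc.):

```python
import time


def delete_resource(resource, resource_id, msg="resource", max_wait=120):
    """Delete one resource and confirm deletion within max_wait seconds.

    Simplified sketch of the synced helper: `resource` is any manager
    object exposing list() and delete(), e.g. glance.images.
    """
    num_before = len(list(resource.list()))
    resource.delete(resource_id)

    tries = 0
    num_after = len(list(resource.list()))
    # Poll every 4 seconds until the count drops by one or we time out.
    while num_after != (num_before - 1) and tries < (max_wait / 4):
        time.sleep(4)
        num_after = len(list(resource.list()))
        tries += 1

    return num_after == (num_before - 1)


class FakeManager(object):
    """Hypothetical stand-in for an OpenStack resource manager."""

    def __init__(self, items):
        self._items = list(items)

    def list(self):
        return list(self._items)

    def delete(self, resource_id):
        self._items.remove(resource_id)


if __name__ == '__main__':
    mgr = FakeManager(['image-1', 'image-2'])
    print(delete_resource(mgr, 'image-1', msg='image'))  # True
```

Counting list lengths rather than looking up the deleted ID keeps the helper agnostic to whether a given client raises NotFound during teardown.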

124. By Ryan Beisner on 2015-06-30

update tests

125. By Ryan Beisner on 2015-07-01

update tags for consistency with other openstack charms

126. By Ryan Beisner on 2015-07-02

update tests for vivid-kilo

Unmerged revisions

126. By Ryan Beisner on 2015-07-02

update tests for vivid-kilo

125. By Ryan Beisner on 2015-07-01

update tags for consistency with other openstack charms

124. By Ryan Beisner on 2015-06-30

update tests

123. By Ryan Beisner on 2015-06-26

sync tests/charmhelpers

122. By Ryan Beisner on 2015-06-26

sync hooks/charmhelpers

121. By Liam Young on 2015-06-26

[corey.bryant, r=gnuoy] charmhelper sync

120. By Billy Olsen on 2015-06-22

[corey.bryant,r=billy-olsen] Fix global requirements for git-deploy.

119. By Corey Bryant on 2015-06-10

[billy-olsen,r=corey.bryant] Provide support for user-specified public endpoint hostname.

118. By James Page on 2015-06-09

Add support for leader-election

117. By James Page on 2015-06-03

Fixup glance-api template sections for Kilo release.

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2015-04-16 21:32:02 +0000
3+++ Makefile 2015-06-30 20:18:04 +0000
4@@ -2,16 +2,18 @@
5 PYTHON := /usr/bin/env python
6
7 lint:
8- @echo "Running flake8 tests: "
9- @flake8 --exclude hooks/charmhelpers actions hooks unit_tests tests
10- @echo "OK"
11- @echo "Running charm proof: "
12+ @flake8 --exclude hooks/charmhelpers,tests/charmhelpers \
13+ actions hooks unit_tests tests
14 @charm proof
15- @echo "OK"
16
17-unit_test:
18+test:
19+ @# Bundletester expects unit tests here.
20 @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
21
22+functional_test:
23+ @echo Starting Amulet tests...
24+ @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700
25+
26 bin/charm_helpers_sync.py:
27 @mkdir -p bin
28 @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
29@@ -21,15 +23,8 @@
30 @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
31 @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
32
33-test:
34- @echo Starting Amulet tests...
35- # /!\ Note: The -v should only be temporary until Amulet sends
36- # raise_status() messages to stderr:
37- # https://bugs.launchpad.net/amulet/+bug/1320357
38- @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700
39-
40-publish: lint unit_test
41+publish: lint test
42 bzr push lp:charms/glance
43 bzr push lp:charms/trusty/glance
44
45-all: unit_test lint
46+all: test lint
47
48=== modified file 'README.md'
49--- README.md 2015-04-30 15:23:58 +0000
50+++ README.md 2015-06-30 20:18:04 +0000
51@@ -86,6 +86,7 @@
52
53 The minimum openstack-origin-git config required to deploy from source is:
54
55+<<<<<<< TREE
56 openstack-origin-git: include-file://glance-juno.yaml
57
58 glance-juno.yaml
59@@ -97,6 +98,18 @@
60 - {name: glance,
61 repository: 'git://github.com/openstack/glance',
62 branch: stable/juno}
63+=======
64+ openstack-origin-git: include-file://glance-juno.yaml
65+
66+ glance-juno.yaml
67+ repositories:
68+ - {name: requirements,
69+ repository: 'git://github.com/openstack/requirements',
70+ branch: stable/juno}
71+ - {name: glance,
72+ repository: 'git://github.com/openstack/glance',
73+ branch: stable/juno}
74+>>>>>>> MERGE-SOURCE
75
76 Note that there are only two 'name' values the charm knows about: 'requirements'
77 and 'glance'. These repositories must correspond to these 'name' values.
78@@ -106,6 +119,7 @@
79
80 The following is a full list of current tip repos (may not be up-to-date):
81
82+<<<<<<< TREE
83 openstack-origin-git: include-file://glance-master.yaml
84
85 glance-master.yaml
86@@ -168,6 +182,69 @@
87 - {name: glance,
88 repository: 'git://github.com/openstack/glance',
89 branch: master}
90+=======
91+ openstack-origin-git: include-file://glance-master.yaml
92+
93+ glance-master.yaml
94+ repositories:
95+ - {name: requirements,
96+ repository: 'git://github.com/openstack/requirements',
97+ branch: master}
98+ - {name: oslo-concurrency,
99+ repository: 'git://github.com/openstack/oslo.concurrency',
100+ branch: master}
101+ - {name: oslo-config,
102+ repository: 'git://github.com/openstack/oslo.config',
103+ branch: master}
104+ - {name: oslo-db,
105+ repository: 'git://github.com/openstack/oslo.db',
106+ branch: master}
107+ - {name: oslo-i18n,
108+ repository: 'git://github.com/openstack/oslo.i18n',
109+ branch: master}
110+ - {name: oslo-messaging,
111+ repository: 'git://github.com/openstack/oslo.messaging',
112+ branch: master}
113+ - {name: oslo-serialization,
114+ repository: 'git://github.com/openstack/oslo.serialization',
115+ branch: master}
116+ - {name: oslo-utils,
117+ repository: 'git://github.com/openstack/oslo.utils',
118+ branch: master}
119+ - {name: oslo-vmware,
120+ repository: 'git://github.com/openstack/oslo.vmware',
121+ branch: master}
122+ - {name: osprofiler,
123+ repository: 'git://github.com/stackforge/osprofiler',
124+ branch: master}
125+ - {name: pbr,
126+ repository: 'git://github.com/openstack-dev/pbr',
127+ branch: master}
128+ - {name: python-keystoneclient,
129+ repository: 'git://github.com/openstack/python-keystoneclient',
130+ branch: master}
131+ - {name: python-swiftclient,
132+ repository: 'git://github.com/openstack/python-swiftclient',
133+ branch: master}
134+ - {name: sqlalchemy-migrate,
135+ repository: 'git://github.com/stackforge/sqlalchemy-migrate',
136+ branch: master}
137+ - {name: stevedore,
138+ repository: 'git://github.com/openstack/stevedore',
139+ branch: master}
140+ - {name: wsme,
141+ repository: 'git://github.com/stackforge/wsme',
142+ branch: master}
143+ - {name: keystonemiddleware,
144+ repository: 'git://github.com/openstack/keystonemiddleware',
145+ branch: master}
146+ - {name: glance-store,
147+ repository: 'git://github.com/openstack/glance_store',
148+ branch: master}
149+ - {name: glance,
150+ repository: 'git://github.com/openstack/glance',
151+ branch: master}
152+>>>>>>> MERGE-SOURCE
153
154 Contact Information
155 -------------------
156
157=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
158--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-06-18 23:26:31 +0000
159+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-06-30 20:18:04 +0000
160@@ -44,6 +44,7 @@
161 ERROR,
162 WARNING,
163 unit_get,
164+ is_leader as juju_is_leader
165 )
166 from charmhelpers.core.decorators import (
167 retry_on_exception,
168@@ -63,17 +64,30 @@
169 pass
170
171
172+class CRMDCNotFound(Exception):
173+ pass
174+
175+
176 def is_elected_leader(resource):
177 """
178 Returns True if the charm executing this is the elected cluster leader.
179
180 It relies on two mechanisms to determine leadership:
181- 1. If the charm is part of a corosync cluster, call corosync to
182+ 1. If juju is sufficiently new and leadership election is supported,
183+ the is_leader command will be used.
184+ 2. If the charm is part of a corosync cluster, call corosync to
185 determine leadership.
186- 2. If the charm is not part of a corosync cluster, the leader is
187+ 3. If the charm is not part of a corosync cluster, the leader is
188 determined as being "the alive unit with the lowest unit numer". In
189 other words, the oldest surviving unit.
190 """
191+ try:
192+ return juju_is_leader()
193+ except NotImplementedError:
194+ log('Juju leadership election feature not enabled'
195+ ', using fallback support',
196+ level=WARNING)
197+
198 if is_clustered():
199 if not is_crm_leader(resource):
200 log('Deferring action to CRM leader.', level=INFO)
201@@ -97,6 +111,7 @@
202 return False
203
204
205+<<<<<<< TREE
206 def is_crm_dc():
207 """
208 Determine leadership by querying the pacemaker Designated Controller
209@@ -119,6 +134,35 @@
210
211
212 @retry_on_exception(5, base_delay=2, exc_type=CRMResourceNotFound)
213+=======
214+def is_crm_dc():
215+ """
216+ Determine leadership by querying the pacemaker Designated Controller
217+ """
218+ cmd = ['crm', 'status']
219+ try:
220+ status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
221+ if not isinstance(status, six.text_type):
222+ status = six.text_type(status, "utf-8")
223+ except subprocess.CalledProcessError as ex:
224+ raise CRMDCNotFound(str(ex))
225+
226+ current_dc = ''
227+ for line in status.split('\n'):
228+ if line.startswith('Current DC'):
229+ # Current DC: juju-lytrusty-machine-2 (168108163) - partition with quorum
230+ current_dc = line.split(':')[1].split()[0]
231+ if current_dc == get_unit_hostname():
232+ return True
233+ elif current_dc == 'NONE':
234+ raise CRMDCNotFound('Current DC: NONE')
235+
236+ return False
237+
238+
239+@retry_on_exception(5, base_delay=2,
240+ exc_type=(CRMResourceNotFound, CRMDCNotFound))
241+>>>>>>> MERGE-SOURCE
242 def is_crm_leader(resource, retry=False):
243 """
244 Returns True if the charm calling this is the elected corosync leader,
245
246=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
247--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-04-23 14:52:07 +0000
248+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-30 20:18:04 +0000
249@@ -110,7 +110,8 @@
250 (self.precise_essex, self.precise_folsom, self.precise_grizzly,
251 self.precise_havana, self.precise_icehouse,
252 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
253- self.trusty_kilo, self.vivid_kilo) = range(10)
254+ self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
255+ self.wily_liberty) = range(12)
256
257 releases = {
258 ('precise', None): self.precise_essex,
259@@ -121,8 +122,10 @@
260 ('trusty', None): self.trusty_icehouse,
261 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
262 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
263+ ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
264 ('utopic', None): self.utopic_juno,
265- ('vivid', None): self.vivid_kilo}
266+ ('vivid', None): self.vivid_kilo,
267+ ('wily', None): self.wily_liberty}
268 return releases[(self.series, self.openstack)]
269
270 def _get_openstack_release_string(self):
271@@ -138,6 +141,7 @@
272 ('trusty', 'icehouse'),
273 ('utopic', 'juno'),
274 ('vivid', 'kilo'),
275+ ('wily', 'liberty'),
276 ])
277 if self.openstack:
278 os_origin = self.openstack.split(':')[1]
279
280=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
281--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-03-20 17:15:02 +0000
282+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-06-30 20:18:04 +0000
283@@ -16,15 +16,15 @@
284
285 import logging
286 import os
287+import six
288 import time
289 import urllib
290
291 import glanceclient.v1.client as glance_client
292+import heatclient.v1.client as heat_client
293 import keystoneclient.v2_0 as keystone_client
294 import novaclient.v1_1.client as nova_client
295
296-import six
297-
298 from charmhelpers.contrib.amulet.utils import (
299 AmuletUtils
300 )
301@@ -37,7 +37,7 @@
302 """OpenStack amulet utilities.
303
304 This class inherits from AmuletUtils and has additional support
305- that is specifically for use by OpenStack charms.
306+ that is specifically for use by OpenStack charm tests.
307 """
308
309 def __init__(self, log_level=ERROR):
310@@ -51,6 +51,8 @@
311 Validate actual endpoint data vs expected endpoint data. The ports
312 are used to find the matching endpoint.
313 """
314+ self.log.debug('Validating endpoint data...')
315+ self.log.debug('actual: {}'.format(repr(endpoints)))
316 found = False
317 for ep in endpoints:
318 self.log.debug('endpoint: {}'.format(repr(ep)))
319@@ -77,6 +79,7 @@
320 Validate a list of actual service catalog endpoints vs a list of
321 expected service catalog endpoints.
322 """
323+ self.log.debug('Validating service catalog endpoint data...')
324 self.log.debug('actual: {}'.format(repr(actual)))
325 for k, v in six.iteritems(expected):
326 if k in actual:
327@@ -93,6 +96,7 @@
328 Validate a list of actual tenant data vs list of expected tenant
329 data.
330 """
331+ self.log.debug('Validating tenant data...')
332 self.log.debug('actual: {}'.format(repr(actual)))
333 for e in expected:
334 found = False
335@@ -114,6 +118,7 @@
336 Validate a list of actual role data vs a list of expected role
337 data.
338 """
339+ self.log.debug('Validating role data...')
340 self.log.debug('actual: {}'.format(repr(actual)))
341 for e in expected:
342 found = False
343@@ -134,6 +139,7 @@
344 Validate a list of actual user data vs a list of expected user
345 data.
346 """
347+ self.log.debug('Validating user data...')
348 self.log.debug('actual: {}'.format(repr(actual)))
349 for e in expected:
350 found = False
351@@ -155,17 +161,20 @@
352
353 Validate a list of actual flavors vs a list of expected flavors.
354 """
355+ self.log.debug('Validating flavor data...')
356 self.log.debug('actual: {}'.format(repr(actual)))
357 act = [a.name for a in actual]
358 return self._validate_list_data(expected, act)
359
360 def tenant_exists(self, keystone, tenant):
361 """Return True if tenant exists."""
362+ self.log.debug('Checking if tenant exists ({})...'.format(tenant))
363 return tenant in [t.name for t in keystone.tenants.list()]
364
365 def authenticate_keystone_admin(self, keystone_sentry, user, password,
366 tenant):
367 """Authenticates admin user with the keystone admin endpoint."""
368+ self.log.debug('Authenticating keystone admin...')
369 unit = keystone_sentry
370 service_ip = unit.relation('shared-db',
371 'mysql:shared-db')['private-address']
372@@ -175,6 +184,7 @@
373
374 def authenticate_keystone_user(self, keystone, user, password, tenant):
375 """Authenticates a regular user with the keystone public endpoint."""
376+ self.log.debug('Authenticating keystone user ({})...'.format(user))
377 ep = keystone.service_catalog.url_for(service_type='identity',
378 endpoint_type='publicURL')
379 return keystone_client.Client(username=user, password=password,
380@@ -182,12 +192,21 @@
381
382 def authenticate_glance_admin(self, keystone):
383 """Authenticates admin user with glance."""
384+ self.log.debug('Authenticating glance admin...')
385 ep = keystone.service_catalog.url_for(service_type='image',
386 endpoint_type='adminURL')
387 return glance_client.Client(ep, token=keystone.auth_token)
388
389+ def authenticate_heat_admin(self, keystone):
390+ """Authenticates the admin user with heat."""
391+ self.log.debug('Authenticating heat admin...')
392+ ep = keystone.service_catalog.url_for(service_type='orchestration',
393+ endpoint_type='publicURL')
394+ return heat_client.Client(endpoint=ep, token=keystone.auth_token)
395+
396 def authenticate_nova_user(self, keystone, user, password, tenant):
397 """Authenticates a regular user with nova-api."""
398+ self.log.debug('Authenticating nova user ({})...'.format(user))
399 ep = keystone.service_catalog.url_for(service_type='identity',
400 endpoint_type='publicURL')
401 return nova_client.Client(username=user, api_key=password,
402@@ -195,6 +214,7 @@
403
404 def create_cirros_image(self, glance, image_name):
405 """Download the latest cirros image and upload it to glance."""
406+ self.log.debug('Creating glance image ({})...'.format(image_name))
407 http_proxy = os.getenv('AMULET_HTTP_PROXY')
408 self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
409 if http_proxy:
410@@ -235,6 +255,11 @@
411
412 def delete_image(self, glance, image):
413 """Delete the specified image."""
414+
415+ # /!\ DEPRECATION WARNING
416+ self.log.warn('/!\\ DEPRECATION WARNING: use '
417+ 'delete_resource instead of delete_image.')
418+ self.log.debug('Deleting glance image ({})...'.format(image))
419 num_before = len(list(glance.images.list()))
420 glance.images.delete(image)
421
422@@ -254,6 +279,8 @@
423
424 def create_instance(self, nova, image_name, instance_name, flavor):
425 """Create the specified instance."""
426+ self.log.debug('Creating instance '
427+ '({}|{}|{})'.format(instance_name, image_name, flavor))
428 image = nova.images.find(name=image_name)
429 flavor = nova.flavors.find(name=flavor)
430 instance = nova.servers.create(name=instance_name, image=image,
431@@ -276,6 +303,11 @@
432
433 def delete_instance(self, nova, instance):
434 """Delete the specified instance."""
435+
436+ # /!\ DEPRECATION WARNING
437+ self.log.warn('/!\\ DEPRECATION WARNING: use '
438+ 'delete_resource instead of delete_instance.')
439+ self.log.debug('Deleting instance ({})...'.format(instance))
440 num_before = len(list(nova.servers.list()))
441 nova.servers.delete(instance)
442
443@@ -292,3 +324,90 @@
444 return False
445
446 return True
447+
448+ def create_or_get_keypair(self, nova, keypair_name="testkey"):
449+ """Create a new keypair, or return pointer if it already exists."""
450+ try:
451+ _keypair = nova.keypairs.get(keypair_name)
452+ self.log.debug('Keypair ({}) already exists, '
453+ 'using it.'.format(keypair_name))
454+ return _keypair
455+ except:
456+ self.log.debug('Keypair ({}) does not exist, '
457+ 'creating it.'.format(keypair_name))
458+
459+ _keypair = nova.keypairs.create(name=keypair_name)
460+ return _keypair
461+
462+ def delete_resource(self, resource, resource_id,
463+ msg="resource", max_wait=120):
464+ """Delete one openstack resource, such as one instance, keypair,
465+ image, volume, stack, etc., and confirm deletion within max wait time.
466+
467+ :param resource: pointer to os resource type, ex:glance_client.images
468+ :param resource_id: unique name or id for the openstack resource
469+ :param msg: text to identify purpose in logging
470+ :param max_wait: maximum wait time in seconds
471+ :returns: True if successful, otherwise False
472+ """
473+ num_before = len(list(resource.list()))
474+ resource.delete(resource_id)
475+
476+ tries = 0
477+ num_after = len(list(resource.list()))
478+ while num_after != (num_before - 1) and tries < (max_wait / 4):
479+ self.log.debug('{} delete check: '
480+ '{} [{}:{}] {}'.format(msg, tries,
481+ num_before,
482+ num_after,
483+ resource_id))
484+ time.sleep(4)
485+ num_after = len(list(resource.list()))
486+ tries += 1
487+
488+ self.log.debug('{}: expected, actual count = {}, '
489+ '{}'.format(msg, num_before - 1, num_after))
490+
491+ if num_after == (num_before - 1):
492+ return True
493+ else:
494+ self.log.error('{} delete timed out'.format(msg))
495+ return False
496+
497+ def resource_reaches_status(self, resource, resource_id,
498+ expected_stat='available',
499+ msg='resource', max_wait=120):
500+ """Wait for an openstack resources status to reach an
501+ expected status within a specified time. Useful to confirm that
502+ nova instances, cinder vols, snapshots, glance images, heat stacks
503+ and other resources eventually reach the expected status.
504+
505+ :param resource: pointer to os resource type, ex: heat_client.stacks
506+ :param resource_id: unique id for the openstack resource
507+ :param expected_stat: status to expect resource to reach
508+ :param msg: text to identify purpose in logging
509+ :param max_wait: maximum wait time in seconds
510+ :returns: True if successful, False if status is not reached
511+ """
512+
513+ tries = 0
514+ resource_stat = resource.get(resource_id).status
515+ while resource_stat != expected_stat and tries < (max_wait / 4):
516+ self.log.debug('{} status check: '
517+ '{} [{}:{}] {}'.format(msg, tries,
518+ resource_stat,
519+ expected_stat,
520+ resource_id))
521+ time.sleep(4)
522+ resource_stat = resource.get(resource_id).status
523+ tries += 1
524+
525+ self.log.debug('{}: expected, actual status = {}, '
526+ '{}'.format(msg, resource_stat, expected_stat))
527+
528+ if resource_stat == expected_stat:
529+ return True
530+ else:
531+ self.log.debug('{} never reached expected status: '
532+ '{}'.format(resource_id, expected_stat))
533+ return False
534
535=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
536--- hooks/charmhelpers/contrib/openstack/context.py 2015-04-16 21:33:32 +0000
537+++ hooks/charmhelpers/contrib/openstack/context.py 2015-06-30 20:18:04 +0000
538@@ -240,7 +240,7 @@
539 if self.relation_prefix:
540 password_setting = self.relation_prefix + '_password'
541
542- for rid in relation_ids('shared-db'):
543+ for rid in relation_ids(self.interfaces[0]):
544 for unit in related_units(rid):
545 rdata = relation_get(rid=rid, unit=unit)
546 host = rdata.get('db_host')
547
548=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
549--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-04-16 19:53:49 +0000
550+++ hooks/charmhelpers/contrib/openstack/neutron.py 2015-06-30 20:18:04 +0000
551@@ -172,14 +172,16 @@
552 'services': ['calico-felix',
553 'bird',
554 'neutron-dhcp-agent',
555- 'nova-api-metadata'],
556+ 'nova-api-metadata',
557+ 'etcd'],
558 'packages': [[headers_package()] + determine_dkms_package(),
559 ['calico-compute',
560 'bird',
561 'neutron-dhcp-agent',
562- 'nova-api-metadata']],
563- 'server_packages': ['neutron-server', 'calico-control'],
564- 'server_services': ['neutron-server']
565+ 'nova-api-metadata',
566+ 'etcd']],
567+ 'server_packages': ['neutron-server', 'calico-control', 'etcd'],
568+ 'server_services': ['neutron-server', 'etcd']
569 },
570 'vsp': {
571 'config': '/etc/neutron/plugins/nuage/nuage_plugin.ini',
572@@ -256,11 +258,14 @@
573 def parse_mappings(mappings):
574 parsed = {}
575 if mappings:
576- mappings = mappings.split(' ')
577+ mappings = mappings.split()
578 for m in mappings:
579 p = m.partition(':')
580- if p[1] == ':':
581- parsed[p[0].strip()] = p[2].strip()
582+ key = p[0].strip()
583+ if p[1]:
584+ parsed[key] = p[2].strip()
585+ else:
586+ parsed[key] = ''
587
588 return parsed
589
590@@ -283,13 +288,13 @@
591 Returns dict of the form {bridge:port}.
592 """
593 _mappings = parse_mappings(mappings)
594- if not _mappings:
595+ if not _mappings or list(_mappings.values()) == ['']:
596 if not mappings:
597 return {}
598
599 # For backwards-compatibility we need to support port-only provided in
600 # config.
601- _mappings = {default_bridge: mappings.split(' ')[0]}
602+ _mappings = {default_bridge: mappings.split()[0]}
603
604 bridges = _mappings.keys()
605 ports = _mappings.values()
606@@ -309,6 +314,8 @@
607
608 Mappings must be a space-delimited list of provider:start:end mappings.
609
610+ The start:end range is optional and may be omitted.
611+
612 Returns dict of the form {provider: (start, end)}.
613 """
614 _mappings = parse_mappings(mappings)
615
616=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
617--- hooks/charmhelpers/contrib/openstack/utils.py 2015-04-16 19:53:49 +0000
618+++ hooks/charmhelpers/contrib/openstack/utils.py 2015-06-30 20:18:04 +0000
619@@ -53,9 +53,13 @@
620 get_ipv6_addr
621 )
622
623+from charmhelpers.contrib.python.packages import (
624+ pip_create_virtualenv,
625+ pip_install,
626+)
627+
628 from charmhelpers.core.host import lsb_release, mounts, umount
629 from charmhelpers.fetch import apt_install, apt_cache, install_remote
630-from charmhelpers.contrib.python.packages import pip_install
631 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
632 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
633
634@@ -75,6 +79,7 @@
635 ('trusty', 'icehouse'),
636 ('utopic', 'juno'),
637 ('vivid', 'kilo'),
638+ ('wily', 'liberty'),
639 ])
640
641
642@@ -87,6 +92,7 @@
643 ('2014.1', 'icehouse'),
644 ('2014.2', 'juno'),
645 ('2015.1', 'kilo'),
646+ ('2015.2', 'liberty'),
647 ])
648
649 # The ugly duckling
650@@ -109,6 +115,7 @@
651 ('2.2.0', 'juno'),
652 ('2.2.1', 'kilo'),
653 ('2.2.2', 'kilo'),
654+ ('2.3.0', 'liberty'),
655 ])
656
657 DEFAULT_LOOPBACK_SIZE = '5G'
658@@ -317,6 +324,9 @@
659 'kilo': 'trusty-updates/kilo',
660 'kilo/updates': 'trusty-updates/kilo',
661 'kilo/proposed': 'trusty-proposed/kilo',
662+ 'liberty': 'trusty-updates/liberty',
663+ 'liberty/updates': 'trusty-updates/liberty',
664+ 'liberty/proposed': 'trusty-proposed/liberty',
665 }
666
667 try:
668@@ -497,7 +507,17 @@
669 requirements_dir = None
670
671
672-def git_clone_and_install(projects_yaml, core_project):
673+def _git_yaml_load(projects_yaml):
674+ """
675+ Load the specified yaml into a dictionary.
676+ """
677+ if not projects_yaml:
678+ return None
679+
680+ return yaml.load(projects_yaml)
681+
682+
683+def git_clone_and_install(projects_yaml, core_project, depth=1):
684 """
685 Clone/install all specified OpenStack repositories.
686
687@@ -510,23 +530,22 @@
688 repository: 'git://git.openstack.org/openstack/requirements.git',
689 branch: 'stable/icehouse'}
690 directory: /mnt/openstack-git
691- http_proxy: http://squid.internal:3128
692- https_proxy: https://squid.internal:3128
693+ http_proxy: squid-proxy-url
694+ https_proxy: squid-proxy-url
695
696 The directory, http_proxy, and https_proxy keys are optional.
697 """
698 global requirements_dir
699 parent_dir = '/mnt/openstack-git'
700-
701- if not projects_yaml:
702- return
703-
704- projects = yaml.load(projects_yaml)
705+ http_proxy = None
706+
707+ projects = _git_yaml_load(projects_yaml)
708 _git_validate_projects_yaml(projects, core_project)
709
710 old_environ = dict(os.environ)
711
712 if 'http_proxy' in projects.keys():
713+ http_proxy = projects['http_proxy']
714 os.environ['http_proxy'] = projects['http_proxy']
715 if 'https_proxy' in projects.keys():
716 os.environ['https_proxy'] = projects['https_proxy']
717@@ -534,15 +553,24 @@
718 if 'directory' in projects.keys():
719 parent_dir = projects['directory']
720
721+ pip_create_virtualenv(os.path.join(parent_dir, 'venv'))
722+
723+ # Upgrade setuptools from default virtualenv version. The default version
724+ # in trusty breaks update.py in global requirements master branch.
725+ pip_install('setuptools', upgrade=True, proxy=http_proxy,
726+ venv=os.path.join(parent_dir, 'venv'))
727+
728 for p in projects['repositories']:
729 repo = p['repository']
730 branch = p['branch']
731 if p['name'] == 'requirements':
732- repo_dir = _git_clone_and_install_single(repo, branch, parent_dir,
733+ repo_dir = _git_clone_and_install_single(repo, branch, depth,
734+ parent_dir, http_proxy,
735 update_requirements=False)
736 requirements_dir = repo_dir
737 else:
738- repo_dir = _git_clone_and_install_single(repo, branch, parent_dir,
739+ repo_dir = _git_clone_and_install_single(repo, branch, depth,
740+ parent_dir, http_proxy,
741 update_requirements=True)
742
743 os.environ = old_environ
744@@ -574,7 +602,8 @@
745 error_out('openstack-origin-git key \'{}\' is missing'.format(key))
746
747
748-def _git_clone_and_install_single(repo, branch, parent_dir, update_requirements):
749+def _git_clone_and_install_single(repo, branch, depth, parent_dir, http_proxy,
750+ update_requirements):
751 """
752 Clone and install a single git repository.
753 """
754@@ -587,23 +616,29 @@
755
756 if not os.path.exists(dest_dir):
757 juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
758- repo_dir = install_remote(repo, dest=parent_dir, branch=branch)
759+ repo_dir = install_remote(repo, dest=parent_dir, branch=branch,
760+ depth=depth)
761 else:
762 repo_dir = dest_dir
763
764+ venv = os.path.join(parent_dir, 'venv')
765+
766 if update_requirements:
767 if not requirements_dir:
768 error_out('requirements repo must be cloned before '
769 'updating from global requirements.')
770- _git_update_requirements(repo_dir, requirements_dir)
771+ _git_update_requirements(venv, repo_dir, requirements_dir)
772
773 juju_log('Installing git repo from dir: {}'.format(repo_dir))
774- pip_install(repo_dir)
775+ if http_proxy:
776+ pip_install(repo_dir, proxy=http_proxy, venv=venv)
777+ else:
778+ pip_install(repo_dir, venv=venv)
779
780 return repo_dir
781
782
783-def _git_update_requirements(package_dir, reqs_dir):
784+def _git_update_requirements(venv, package_dir, reqs_dir):
785 """
786 Update from global requirements.
787
788@@ -612,25 +647,38 @@
789 """
790 orig_dir = os.getcwd()
791 os.chdir(reqs_dir)
792- cmd = ['python', 'update.py', package_dir]
793+ python = os.path.join(venv, 'bin/python')
794+ cmd = [python, 'update.py', package_dir]
795 try:
796 subprocess.check_call(cmd)
797 except subprocess.CalledProcessError:
798 package = os.path.basename(package_dir)
799- error_out("Error updating {} from global-requirements.txt".format(package))
800+ error_out("Error updating {} from "
801+ "global-requirements.txt".format(package))
802 os.chdir(orig_dir)
803
804
805+def git_pip_venv_dir(projects_yaml):
806+ """
807+ Return the pip virtualenv path.
808+ """
809+ parent_dir = '/mnt/openstack-git'
810+
811+ projects = _git_yaml_load(projects_yaml)
812+
813+ if 'directory' in projects.keys():
814+ parent_dir = projects['directory']
815+
816+ return os.path.join(parent_dir, 'venv')
817+
818+
819 def git_src_dir(projects_yaml, project):
820 """
821 Return the directory where the specified project's source is located.
822 """
823 parent_dir = '/mnt/openstack-git'
824
825- if not projects_yaml:
826- return
827-
828- projects = yaml.load(projects_yaml)
829+ projects = _git_yaml_load(projects_yaml)
830
831 if 'directory' in projects.keys():
832 parent_dir = projects['directory']
833@@ -640,3 +688,15 @@
834 return os.path.join(parent_dir, os.path.basename(p['repository']))
835
836 return None
837+
838+
839+def git_yaml_value(projects_yaml, key):
840+ """
841+ Return the value in projects_yaml for the specified key.
842+ """
843+ projects = _git_yaml_load(projects_yaml)
844+
845+ if key in projects.keys():
846+ return projects[key]
847+
848+ return None
849
850=== modified file 'hooks/charmhelpers/contrib/python/packages.py'
851--- hooks/charmhelpers/contrib/python/packages.py 2015-03-20 17:15:02 +0000
852+++ hooks/charmhelpers/contrib/python/packages.py 2015-06-30 20:18:04 +0000
853@@ -17,8 +17,11 @@
854 # You should have received a copy of the GNU Lesser General Public License
855 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
856
857+import os
858+import subprocess
859+
860 from charmhelpers.fetch import apt_install, apt_update
861-from charmhelpers.core.hookenv import log
862+from charmhelpers.core.hookenv import charm_dir, log
863
864 try:
865 from pip import main as pip_execute
866@@ -33,6 +36,8 @@
867 def parse_options(given, available):
868 """Given a set of options, check if available"""
869 for key, value in sorted(given.items()):
870+ if not value:
871+ continue
872 if key in available:
873 yield "--{0}={1}".format(key, value)
874
875@@ -51,11 +56,15 @@
876 pip_execute(command)
877
878
879-def pip_install(package, fatal=False, upgrade=False, **options):
880+def pip_install(package, fatal=False, upgrade=False, venv=None, **options):
881 """Install a python package"""
882- command = ["install"]
883+ if venv:
884+ venv_python = os.path.join(venv, 'bin/pip')
885+ command = [venv_python, "install"]
886+ else:
887+ command = ["install"]
888
889- available_options = ('proxy', 'src', 'log', "index-url", )
890+ available_options = ('proxy', 'src', 'log', 'index-url', )
891 for option in parse_options(options, available_options):
892 command.append(option)
893
894@@ -69,7 +78,10 @@
895
896 log("Installing {} package with options: {}".format(package,
897 command))
898- pip_execute(command)
899+ if venv:
900+ subprocess.check_call(command)
901+ else:
902+ pip_execute(command)
903
904
905 def pip_uninstall(package, **options):
906@@ -94,3 +106,16 @@
907 """Returns the list of current python installed packages
908 """
909 return pip_execute(["list"])
910+
911+
912+def pip_create_virtualenv(path=None):
913+ """Create an isolated Python environment."""
914+ apt_install('python-virtualenv')
915+
916+ if path:
917+ venv_path = path
918+ else:
919+ venv_path = os.path.join(charm_dir(), 'venv')
920+
921+ if not os.path.exists(venv_path):
922+ subprocess.check_call(['virtualenv', venv_path])
923
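The `pip_install()` change above routes installs through a virtualenv's own `pip` binary (via `subprocess`) when a `venv` path is supplied, while the module-level `pip_execute` path is kept for the non-venv case. A minimal sketch of that command-construction logic, assuming illustrative names (this is not the charmhelpers code itself, and it only builds the command, it installs nothing):

```python
import os

def build_pip_command(package, venv=None, **options):
    """Sketch of pip_install()'s venv-aware dispatch: with a venv, call that
    venv's pip binary; without one, hand a bare 'install' to pip_execute."""
    if venv:
        command = [os.path.join(venv, 'bin/pip'), 'install']
    else:
        command = ['install']  # consumed in-process by pip_execute()
    for key in ('proxy', 'src', 'log', 'index-url'):
        value = options.get(key)
        if value:  # empty/None options are skipped, as in parse_options()
            command.append('--{0}={1}'.format(key, value))
    command.append(package)
    return command

print(build_pip_command('mysql-python',
                        venv='/mnt/openstack-git/venv',
                        proxy='http://proxy:3128'))
# ['/mnt/openstack-git/venv/bin/pip', 'install',
#  '--proxy=http://proxy:3128', 'mysql-python']
```

This mirrors how `git_post_install()` later in this diff installs `mysql-python` into the deploy-from-source virtualenv.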
924=== modified file 'hooks/charmhelpers/core/hookenv.py'
925--- hooks/charmhelpers/core/hookenv.py 2015-04-16 19:53:49 +0000
926+++ hooks/charmhelpers/core/hookenv.py 2015-06-30 20:18:04 +0000
927@@ -21,12 +21,16 @@
928 # Charm Helpers Developers <juju@lists.ubuntu.com>
929
930 from __future__ import print_function
931+from distutils.version import LooseVersion
932+from functools import wraps
933+import glob
934 import os
935 import json
936 import yaml
937 import subprocess
938 import sys
939 import errno
940+import tempfile
941 from subprocess import CalledProcessError
942
943 import six
944@@ -58,15 +62,17 @@
945
946 will cache the result of unit_get + 'test' for future calls.
947 """
948+ @wraps(func)
949 def wrapper(*args, **kwargs):
950 global cache
951 key = str((func, args, kwargs))
952 try:
953 return cache[key]
954 except KeyError:
955- res = func(*args, **kwargs)
956- cache[key] = res
957- return res
958+ pass # Drop out of the exception handler scope.
959+ res = func(*args, **kwargs)
960+ cache[key] = res
961+ return res
962 return wrapper
963
964
965@@ -178,7 +184,7 @@
966
967 def remote_unit():
968 """The remote unit for the current relation hook"""
969- return os.environ['JUJU_REMOTE_UNIT']
970+ return os.environ.get('JUJU_REMOTE_UNIT', None)
971
972
973 def service_name():
974@@ -238,23 +244,7 @@
975 self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
976 if os.path.exists(self.path):
977 self.load_previous()
978-
979- def __getitem__(self, key):
980- """For regular dict lookups, check the current juju config first,
981- then the previous (saved) copy. This ensures that user-saved values
982- will be returned by a dict lookup.
983-
984- """
985- try:
986- return dict.__getitem__(self, key)
987- except KeyError:
988- return (self._prev_dict or {})[key]
989-
990- def keys(self):
991- prev_keys = []
992- if self._prev_dict is not None:
993- prev_keys = self._prev_dict.keys()
994- return list(set(prev_keys + list(dict.keys(self))))
995+ atexit(self._implicit_save)
996
997 def load_previous(self, path=None):
998 """Load previous copy of config from disk.
999@@ -273,6 +263,9 @@
1000 self.path = path or self.path
1001 with open(self.path) as f:
1002 self._prev_dict = json.load(f)
1003+ for k, v in self._prev_dict.items():
1004+ if k not in self:
1005+ self[k] = v
1006
1007 def changed(self, key):
1008 """Return True if the current value for this key is different from
1009@@ -304,13 +297,13 @@
1010 instance.
1011
1012 """
1013- if self._prev_dict:
1014- for k, v in six.iteritems(self._prev_dict):
1015- if k not in self:
1016- self[k] = v
1017 with open(self.path, 'w') as f:
1018 json.dump(self, f)
1019
1020+ def _implicit_save(self):
1021+ if self.implicit_save:
1022+ self.save()
1023+
1024
1025 @cached
1026 def config(scope=None):
1027@@ -353,18 +346,49 @@
1028 """Set relation information for the current unit"""
1029 relation_settings = relation_settings if relation_settings else {}
1030 relation_cmd_line = ['relation-set']
1031+ accepts_file = "--file" in subprocess.check_output(
1032+ relation_cmd_line + ["--help"], universal_newlines=True)
1033 if relation_id is not None:
1034 relation_cmd_line.extend(('-r', relation_id))
1035- for k, v in (list(relation_settings.items()) + list(kwargs.items())):
1036- if v is None:
1037- relation_cmd_line.append('{}='.format(k))
1038- else:
1039- relation_cmd_line.append('{}={}'.format(k, v))
1040- subprocess.check_call(relation_cmd_line)
1041+ settings = relation_settings.copy()
1042+ settings.update(kwargs)
1043+ for key, value in settings.items():
1044+ # Force value to be a string: it always should, but some call
1045+ # sites pass in things like dicts or numbers.
1046+ if value is not None:
1047+ settings[key] = "{}".format(value)
1048+ if accepts_file:
1049+ # --file was introduced in Juju 1.23.2. Use it by default if
1050+ # available, since otherwise we'll break if the relation data is
1051+ # too big. Ideally we should tell relation-set to read the data from
1052+ # stdin, but that feature is broken in 1.23.2: Bug #1454678.
1053+ with tempfile.NamedTemporaryFile(delete=False) as settings_file:
1054+ settings_file.write(yaml.safe_dump(settings).encode("utf-8"))
1055+ subprocess.check_call(
1056+ relation_cmd_line + ["--file", settings_file.name])
1057+ os.remove(settings_file.name)
1058+ else:
1059+ for key, value in settings.items():
1060+ if value is None:
1061+ relation_cmd_line.append('{}='.format(key))
1062+ else:
1063+ relation_cmd_line.append('{}={}'.format(key, value))
1064+ subprocess.check_call(relation_cmd_line)
1065 # Flush cache of any relation-gets for local unit
1066 flush(local_unit())
1067
1068
1069+def relation_clear(r_id=None):
1070+ ''' Clears any relation data already set on relation r_id '''
1071+ settings = relation_get(rid=r_id,
1072+ unit=local_unit())
1073+ for setting in settings:
1074+ if setting not in ['public-address', 'private-address']:
1075+ settings[setting] = None
1076+ relation_set(relation_id=r_id,
1077+ **settings)
1078+
1079+
1080 @cached
1081 def relation_ids(reltype=None):
1082 """A list of relation_ids"""
1083@@ -509,6 +533,11 @@
1084 return None
1085
1086
1087+def unit_public_ip():
1088+ """Get this unit's public IP address"""
1089+ return unit_get('public-address')
1090+
1091+
1092 def unit_private_ip():
1093 """Get this unit's private IP address"""
1094 return unit_get('private-address')
1095@@ -541,10 +570,14 @@
1096 hooks.execute(sys.argv)
1097 """
1098
1099- def __init__(self, config_save=True):
1100+ def __init__(self, config_save=None):
1101 super(Hooks, self).__init__()
1102 self._hooks = {}
1103- self._config_save = config_save
1104+
1105+ # For unknown reasons, we allow the Hooks constructor to override
1106+ # config().implicit_save.
1107+ if config_save is not None:
1108+ config().implicit_save = config_save
1109
1110 def register(self, name, function):
1111 """Register a hook"""
1112@@ -552,13 +585,16 @@
1113
1114 def execute(self, args):
1115 """Execute a registered hook based on args[0]"""
1116+ _run_atstart()
1117 hook_name = os.path.basename(args[0])
1118 if hook_name in self._hooks:
1119- self._hooks[hook_name]()
1120- if self._config_save:
1121- cfg = config()
1122- if cfg.implicit_save:
1123- cfg.save()
1124+ try:
1125+ self._hooks[hook_name]()
1126+ except SystemExit as x:
1127+ if x.code is None or x.code == 0:
1128+ _run_atexit()
1129+ raise
1130+ _run_atexit()
1131 else:
1132 raise UnregisteredHookError(hook_name)
1133
1134@@ -605,3 +641,160 @@
1135
1136 The results set by action_set are preserved."""
1137 subprocess.check_call(['action-fail', message])
1138+
1139+
1140+def status_set(workload_state, message):
1141+ """Set the workload state with a message
1142+
1143+ Use status-set to set the workload state with a message which is visible
1144+ to the user via juju status. If the status-set command is not found then
1145+ assume this is juju < 1.23 and juju-log the message unstead.
1146+
1147+ workload_state -- valid juju workload state.
1148+ message -- status update message
1149+ """
1150+ valid_states = ['maintenance', 'blocked', 'waiting', 'active']
1151+ if workload_state not in valid_states:
1152+ raise ValueError(
1153+ '{!r} is not a valid workload state'.format(workload_state)
1154+ )
1155+ cmd = ['status-set', workload_state, message]
1156+ try:
1157+ ret = subprocess.call(cmd)
1158+ if ret == 0:
1159+ return
1160+ except OSError as e:
1161+ if e.errno != errno.ENOENT:
1162+ raise
1163+ log_message = 'status-set failed: {} {}'.format(workload_state,
1164+ message)
1165+ log(log_message, level='INFO')
1166+
1167+
1168+def status_get():
1169+ """Retrieve the previously set juju workload state
1170+
1171+ If the status-set command is not found then assume this is juju < 1.23 and
1172+ return 'unknown'
1173+ """
1174+ cmd = ['status-get']
1175+ try:
1176+ raw_status = subprocess.check_output(cmd, universal_newlines=True)
1177+ status = raw_status.rstrip()
1178+ return status
1179+ except OSError as e:
1180+ if e.errno == errno.ENOENT:
1181+ return 'unknown'
1182+ else:
1183+ raise
1184+
1185+
1186+def translate_exc(from_exc, to_exc):
1187+ def inner_translate_exc1(f):
1188+ def inner_translate_exc2(*args, **kwargs):
1189+ try:
1190+ return f(*args, **kwargs)
1191+ except from_exc:
1192+ raise to_exc
1193+
1194+ return inner_translate_exc2
1195+
1196+ return inner_translate_exc1
1197+
1198+
1199+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1200+def is_leader():
1201+ """Does the current unit hold the juju leadership
1202+
1203+ Uses juju to determine whether the current unit is the leader of its peers
1204+ """
1205+ cmd = ['is-leader', '--format=json']
1206+ return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
1207+
1208+
1209+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1210+def leader_get(attribute=None):
1211+ """Juju leader get value(s)"""
1212+ cmd = ['leader-get', '--format=json'] + [attribute or '-']
1213+ return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
1214+
1215+
1216+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1217+def leader_set(settings=None, **kwargs):
1218+ """Juju leader set value(s)"""
1219+ # Don't log secrets.
1220+ # log("Juju leader-set '%s'" % (settings), level=DEBUG)
1221+ cmd = ['leader-set']
1222+ settings = settings or {}
1223+ settings.update(kwargs)
1224+ for k, v in settings.items():
1225+ if v is None:
1226+ cmd.append('{}='.format(k))
1227+ else:
1228+ cmd.append('{}={}'.format(k, v))
1229+ subprocess.check_call(cmd)
1230+
1231+
1232+@cached
1233+def juju_version():
1234+ """Full version string (eg. '1.23.3.1-trusty-amd64')"""
1235+ # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1
1236+ jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0]
1237+ return subprocess.check_output([jujud, 'version'],
1238+ universal_newlines=True).strip()
1239+
1240+
1241+@cached
1242+def has_juju_version(minimum_version):
1243+ """Return True if the Juju version is at least the provided version"""
1244+ return LooseVersion(juju_version()) >= LooseVersion(minimum_version)
1245+
1246+
1247+_atexit = []
1248+_atstart = []
1249+
1250+
1251+def atstart(callback, *args, **kwargs):
1252+ '''Schedule a callback to run before the main hook.
1253+
1254+ Callbacks are run in the order they were added.
1255+
1256+ This is useful for modules and classes to perform initialization
1257+ and inject behavior. In particular:
1258+ - Run common code before all of your hooks, such as logging
1259+ the hook name or interesting relation data.
1260+ - Defer object or module initialization that requires a hook
1261+ context until we know there actually is a hook context,
1262+ making testing easier.
1263+ - Rather than requiring charm authors to include boilerplate to
1264+ invoke your helper's behavior, have it run automatically if
1265+ your object is instantiated or module imported.
1266+
1267+ This is not at all useful after your hook framework has been launched.
1268+ '''
1269+ global _atstart
1270+ _atstart.append((callback, args, kwargs))
1271+
1272+
1273+def atexit(callback, *args, **kwargs):
1274+ '''Schedule a callback to run on successful hook completion.
1275+
1276+ Callbacks are run in the reverse order that they were added.'''
1277+ _atexit.append((callback, args, kwargs))
1278+
1279+
1280+def _run_atstart():
1281+ '''Hook frameworks must invoke this before running the main hook body.'''
1282+ global _atstart
1283+ for callback, args, kwargs in _atstart:
1284+ callback(*args, **kwargs)
1285+ del _atstart[:]
1286+
1287+
1288+def _run_atexit():
1289+ '''Hook frameworks must invoke this after the main hook body has
1290+ successfully completed. Do not invoke it if the hook fails.'''
1291+ global _atexit
1292+ for callback, args, kwargs in reversed(_atexit):
1293+ callback(*args, **kwargs)
1294+ del _atexit[:]
1295
1296=== modified file 'hooks/charmhelpers/core/host.py'
1297--- hooks/charmhelpers/core/host.py 2015-03-20 17:15:02 +0000
1298+++ hooks/charmhelpers/core/host.py 2015-06-30 20:18:04 +0000
1299@@ -24,6 +24,7 @@
1300 import os
1301 import re
1302 import pwd
1303+import glob
1304 import grp
1305 import random
1306 import string
1307@@ -90,7 +91,7 @@
1308 ['service', service_name, 'status'],
1309 stderr=subprocess.STDOUT).decode('UTF-8')
1310 except subprocess.CalledProcessError as e:
1311- return 'unrecognized service' not in e.output
1312+ return b'unrecognized service' not in e.output
1313 else:
1314 return True
1315
1316@@ -269,6 +270,21 @@
1317 return None
1318
1319
1320+def path_hash(path):
1321+ """
1322+ Generate a hash checksum of all files matching 'path'. Standard wildcards
1323+ like '*' and '?' are supported, see documentation for the 'glob' module for
1324+ more information.
1325+
1326+ :return: dict: A { filename: hash } dictionary for all matched files.
1327+ Empty if none found.
1328+ """
1329+ return {
1330+ filename: file_hash(filename)
1331+ for filename in glob.iglob(path)
1332+ }
1333+
1334+
1335 def check_hash(path, checksum, hash_type='md5'):
1336 """
1337 Validate a file using a cryptographic checksum.
1338@@ -296,23 +312,25 @@
1339
1340 @restart_on_change({
1341 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
1342+ '/etc/apache/sites-enabled/*': [ 'apache2' ]
1343 })
1344- def ceph_client_changed():
1345+ def config_changed():
1346 pass # your code here
1347
1348 In this example, the cinder-api and cinder-volume services
1349 would be restarted if /etc/ceph/ceph.conf is changed by the
1350- ceph_client_changed function.
1351+ ceph_client_changed function. The apache2 service would be
1352+ restarted if any file matching the pattern got changed, created
1353+ or removed. Standard wildcards are supported, see documentation
1354+ for the 'glob' module for more information.
1355 """
1356 def wrap(f):
1357 def wrapped_f(*args, **kwargs):
1358- checksums = {}
1359- for path in restart_map:
1360- checksums[path] = file_hash(path)
1361+ checksums = {path: path_hash(path) for path in restart_map}
1362 f(*args, **kwargs)
1363 restarts = []
1364 for path in restart_map:
1365- if checksums[path] != file_hash(path):
1366+ if path_hash(path) != checksums[path]:
1367 restarts += restart_map[path]
1368 services_list = list(OrderedDict.fromkeys(restarts))
1369 if not stopstart:
1370
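The host.py hunk above adds `path_hash()` so `restart_on_change` can watch glob patterns like `/etc/apache/sites-enabled/*`: it snapshots a `{filename: hash}` mapping before and after the wrapped function, and a changed, created, or removed file alters the mapping and triggers a restart. A hedged sketch of that mechanism using md5 directly (names are illustrative, not the charmhelpers implementation):

```python
import glob
import hashlib
import os
import tempfile

def path_hash(path):
    """Return {filename: md5 hexdigest} for every file matching 'path'."""
    result = {}
    for filename in glob.iglob(path):
        with open(filename, 'rb') as f:
            result[filename] = hashlib.md5(f.read()).hexdigest()
    return result

workdir = tempfile.mkdtemp()
conf = os.path.join(workdir, 'a.conf')
with open(conf, 'w') as f:
    f.write('x=1\n')

pattern = os.path.join(workdir, '*.conf')
before = path_hash(pattern)
with open(conf, 'w') as f:
    f.write('x=2\n')            # simulate a config rewrite by a hook
after = path_hash(pattern)

print(after != before)  # True: the mapped services would be restarted
```

Because the comparison is on the whole mapping, a file appearing or vanishing under the pattern is detected just like an edit, which the old single-file `file_hash` check could not do.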
1371=== modified file 'hooks/charmhelpers/core/services/base.py'
1372--- hooks/charmhelpers/core/services/base.py 2015-03-20 17:15:02 +0000
1373+++ hooks/charmhelpers/core/services/base.py 2015-06-30 20:18:04 +0000
1374@@ -15,9 +15,9 @@
1375 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1376
1377 import os
1378-import re
1379 import json
1380-from collections import Iterable
1381+from inspect import getargspec
1382+from collections import Iterable, OrderedDict
1383
1384 from charmhelpers.core import host
1385 from charmhelpers.core import hookenv
1386@@ -119,7 +119,7 @@
1387 """
1388 self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
1389 self._ready = None
1390- self.services = {}
1391+ self.services = OrderedDict()
1392 for service in services or []:
1393 service_name = service['service']
1394 self.services[service_name] = service
1395@@ -128,15 +128,18 @@
1396 """
1397 Handle the current hook by doing The Right Thing with the registered services.
1398 """
1399- hook_name = hookenv.hook_name()
1400- if hook_name == 'stop':
1401- self.stop_services()
1402- else:
1403- self.provide_data()
1404- self.reconfigure_services()
1405- cfg = hookenv.config()
1406- if cfg.implicit_save:
1407- cfg.save()
1408+ hookenv._run_atstart()
1409+ try:
1410+ hook_name = hookenv.hook_name()
1411+ if hook_name == 'stop':
1412+ self.stop_services()
1413+ else:
1414+ self.reconfigure_services()
1415+ self.provide_data()
1416+ except SystemExit as x:
1417+ if x.code is None or x.code == 0:
1418+ hookenv._run_atexit()
1419+ hookenv._run_atexit()
1420
1421 def provide_data(self):
1422 """
1423@@ -145,15 +148,36 @@
1424 A provider must have a `name` attribute, which indicates which relation
1425 to set data on, and a `provide_data()` method, which returns a dict of
1426 data to set.
1427+
1428+ The `provide_data()` method can optionally accept two parameters:
1429+
1430+ * ``remote_service`` The name of the remote service that the data will
1431+ be provided to. The `provide_data()` method will be called once
1432+ for each connected service (not unit). This allows the method to
1433+ tailor its data to the given service.
1434+ * ``service_ready`` Whether or not the service definition had all of
1435+ its requirements met, and thus the ``data_ready`` callbacks run.
1436+
1437+ Note that the ``provided_data`` methods are now called **after** the
1438+ ``data_ready`` callbacks are run. This gives the ``data_ready`` callbacks
1439+ a chance to generate any data necessary to provide to the remote
1440+ services.
1441 """
1442- hook_name = hookenv.hook_name()
1443- for service in self.services.values():
1444+ for service_name, service in self.services.items():
1445+ service_ready = self.is_ready(service_name)
1446 for provider in service.get('provided_data', []):
1447- if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name):
1448- data = provider.provide_data()
1449- _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data
1450- if _ready:
1451- hookenv.relation_set(None, data)
1452+ for relid in hookenv.relation_ids(provider.name):
1453+ units = hookenv.related_units(relid)
1454+ if not units:
1455+ continue
1456+ remote_service = units[0].split('/')[0]
1457+ argspec = getargspec(provider.provide_data)
1458+ if len(argspec.args) > 1:
1459+ data = provider.provide_data(remote_service, service_ready)
1460+ else:
1461+ data = provider.provide_data()
1462+ if data:
1463+ hookenv.relation_set(relid, data)
1464
1465 def reconfigure_services(self, *service_names):
1466 """
1467
1468=== modified file 'hooks/charmhelpers/fetch/__init__.py'
1469--- hooks/charmhelpers/fetch/__init__.py 2015-03-20 17:15:02 +0000
1470+++ hooks/charmhelpers/fetch/__init__.py 2015-06-30 20:18:04 +0000
1471@@ -158,7 +158,7 @@
1472
1473 def apt_cache(in_memory=True):
1474 """Build and return an apt cache"""
1475- import apt_pkg
1476+ from apt import apt_pkg
1477 apt_pkg.init()
1478 if in_memory:
1479 apt_pkg.config.set("Dir::Cache::pkgcache", "")
1480
1481=== modified file 'hooks/charmhelpers/fetch/giturl.py'
1482--- hooks/charmhelpers/fetch/giturl.py 2015-03-20 17:15:02 +0000
1483+++ hooks/charmhelpers/fetch/giturl.py 2015-06-30 20:18:04 +0000
1484@@ -45,14 +45,16 @@
1485 else:
1486 return True
1487
1488- def clone(self, source, dest, branch):
1489+ def clone(self, source, dest, branch, depth=None):
1490 if not self.can_handle(source):
1491 raise UnhandledSource("Cannot handle {}".format(source))
1492
1493- repo = Repo.clone_from(source, dest)
1494- repo.git.checkout(branch)
1495+ if depth:
1496+ Repo.clone_from(source, dest, branch=branch, depth=depth)
1497+ else:
1498+ Repo.clone_from(source, dest, branch=branch)
1499
1500- def install(self, source, branch="master", dest=None):
1501+ def install(self, source, branch="master", dest=None, depth=None):
1502 url_parts = self.parse_url(source)
1503 branch_name = url_parts.path.strip("/").split("/")[-1]
1504 if dest:
1505@@ -63,7 +65,7 @@
1506 if not os.path.exists(dest_dir):
1507 mkdir(dest_dir, perms=0o755)
1508 try:
1509- self.clone(source, dest_dir, branch)
1510+ self.clone(source, dest_dir, branch, depth)
1511 except GitCommandError as e:
1512 raise UnhandledSource(e.message)
1513 except OSError as e:
1514
1515=== modified file 'hooks/glance_relations.py'
1516--- hooks/glance_relations.py 2015-05-01 11:37:27 +0000
1517+++ hooks/glance_relations.py 2015-06-30 20:18:04 +0000
1518@@ -53,7 +53,7 @@
1519 filter_installed_packages
1520 )
1521 from charmhelpers.contrib.hahelpers.cluster import (
1522- eligible_leader,
1523+ is_elected_leader,
1524 get_hacluster_config
1525 )
1526 from charmhelpers.contrib.openstack.utils import (
1527@@ -160,7 +160,7 @@
1528 if rel != "essex":
1529 CONFIGS.write(GLANCE_API_CONF)
1530
1531- if eligible_leader(CLUSTER_RES):
1532+ if is_elected_leader(CLUSTER_RES):
1533 # Bugs 1353135 & 1187508. Dbs can appear to be ready before the units
1534 # acl entry has been added. So, if the db supports passing a list of
1535 # permitted units then check if we're in the list.
1536@@ -194,7 +194,7 @@
1537 if rel != "essex":
1538 CONFIGS.write(GLANCE_API_CONF)
1539
1540- if eligible_leader(CLUSTER_RES):
1541+ if is_elected_leader(CLUSTER_RES):
1542 if rel == "essex":
1543 status = call(['glance-manage', 'db_version'])
1544 if status != 0:
1545
1546=== modified file 'hooks/glance_utils.py'
1547--- hooks/glance_utils.py 2015-04-17 12:05:48 +0000
1548+++ hooks/glance_utils.py 2015-06-30 20:18:04 +0000
1549@@ -14,6 +14,10 @@
1550 apt_install,
1551 add_source)
1552
1553+from charmhelpers.contrib.python.packages import (
1554+ pip_install,
1555+)
1556+
1557 from charmhelpers.core.hookenv import (
1558 charm_dir,
1559 config,
1560@@ -38,7 +42,7 @@
1561 context,)
1562
1563 from charmhelpers.contrib.hahelpers.cluster import (
1564- eligible_leader,
1565+ is_elected_leader,
1566 )
1567
1568 from charmhelpers.contrib.openstack.alternatives import install_alternative
1569@@ -47,12 +51,18 @@
1570 git_install_requested,
1571 git_clone_and_install,
1572 git_src_dir,
1573+ git_yaml_value,
1574+ git_pip_venv_dir,
1575 configure_installation_source,
1576 os_release,
1577 )
1578
1579 from charmhelpers.core.templating import render
1580
1581+from charmhelpers.core.decorators import (
1582+ retry_on_exception,
1583+)
1584+
1585 CLUSTER_RES = "grp_glance_vips"
1586
1587 PACKAGES = [
1588@@ -60,8 +70,12 @@
1589 "python-psycopg2", "python-keystone", "python-six", "uuid", "haproxy", ]
1590
1591 BASE_GIT_PACKAGES = [
1592+ 'libffi-dev',
1593+ 'libmysqlclient-dev',
1594 'libxml2-dev',
1595 'libxslt1-dev',
1596+ 'libssl-dev',
1597+ 'libyaml-dev',
1598 'python-dev',
1599 'python-pip',
1600 'python-setuptools',
1601@@ -209,6 +223,9 @@
1602 return configs
1603
1604
1605+# NOTE(jamespage): Retry deals with sync issues during one-shot HA deploys.
1606+# mysql might be restarting or suchlike.
1607+@retry_on_exception(5, base_delay=3, exc_type=subprocess.CalledProcessError)
1608 def determine_packages():
1609 packages = [] + PACKAGES
1610
1611@@ -256,7 +273,7 @@
1612 configs.write_all()
1613
1614 [service_stop(s) for s in services()]
1615- if eligible_leader(CLUSTER_RES):
1616+ if is_elected_leader(CLUSTER_RES):
1617 migrate_database()
1618 [service_start(s) for s in services()]
1619
1620@@ -340,6 +357,14 @@
1621
1622 def git_post_install(projects_yaml):
1623 """Perform glance post-install setup."""
1624+ http_proxy = git_yaml_value(projects_yaml, 'http_proxy')
1625+ if http_proxy:
1626+ pip_install('mysql-python', proxy=http_proxy,
1627+ venv=git_pip_venv_dir(projects_yaml))
1628+ else:
1629+ pip_install('mysql-python',
1630+ venv=git_pip_venv_dir(projects_yaml))
1631+
1632 src_etc = os.path.join(git_src_dir(projects_yaml, 'glance'), 'etc')
1633 configs = {
1634 'src': src_etc,
1635@@ -350,13 +375,34 @@
1636 shutil.rmtree(configs['dest'])
1637 shutil.copytree(configs['src'], configs['dest'])
1638
1639+ symlinks = [
1640+ # NOTE(coreycb): Need to find better solution than bin symlinks.
1641+ {'src': os.path.join(git_pip_venv_dir(projects_yaml),
1642+ 'bin/glance-manage'),
1643+ 'link': '/usr/local/bin/glance-manage'},
1644+ # NOTE(coreycb): This is ugly but couldn't find pypi package that
1645+ # installs rbd.py and rados.py.
1646+ {'src': '/usr/lib/python2.7/dist-packages/rbd.py',
1647+ 'link': os.path.join(git_pip_venv_dir(projects_yaml),
1648+ 'lib/python2.7/site-packages/rbd.py')},
1649+ {'src': '/usr/lib/python2.7/dist-packages/rados.py',
1650+ 'link': os.path.join(git_pip_venv_dir(projects_yaml),
1651+ 'lib/python2.7/site-packages/rados.py')},
1652+ ]
1653+
1654+ for s in symlinks:
1655+ if os.path.lexists(s['link']):
1656+ os.remove(s['link'])
1657+ os.symlink(s['src'], s['link'])
1658+
1659+ bin_dir = os.path.join(git_pip_venv_dir(projects_yaml), 'bin')
1660 glance_api_context = {
1661 'service_description': 'Glance API server',
1662 'service_name': 'Glance',
1663 'user_name': 'glance',
1664 'start_dir': '/var/lib/glance',
1665 'process_name': 'glance-api',
1666- 'executable_name': '/usr/local/bin/glance-api',
1667+ 'executable_name': os.path.join(bin_dir, 'glance-api'),
1668 'config_files': ['/etc/glance/glance-api.conf'],
1669 'log_file': '/var/log/glance/api.log',
1670 }
1671@@ -367,7 +413,7 @@
1672 'user_name': 'glance',
1673 'start_dir': '/var/lib/glance',
1674 'process_name': 'glance-registry',
1675- 'executable_name': '/usr/local/bin/glance-registry',
1676+ 'executable_name': os.path.join(bin_dir, 'glance-registry'),
1677 'config_files': ['/etc/glance/glance-registry.conf'],
1678 'log_file': '/var/log/glance/registry.log',
1679 }
1680
1681=== modified file 'metadata.yaml'
1682--- metadata.yaml 2014-10-30 03:30:35 +0000
1683+++ metadata.yaml 2015-06-30 20:18:04 +0000
1684@@ -6,7 +6,7 @@
1685 (Parallax) and an image delivery service (Teller). These services are used
1686 in conjunction by Nova to deliver images from object stores, such as
1687 OpenStack's Swift service, to Nova's compute nodes.
1688-categories:
1689+tags:
1690 - miscellaneous
1691 provides:
1692 nrpe-external-master:
1693
1694=== modified file 'tests/00-setup'
1695--- tests/00-setup 2014-10-08 20:18:38 +0000
1696+++ tests/00-setup 2015-06-30 20:18:04 +0000
1697@@ -5,6 +5,10 @@
1698 sudo add-apt-repository --yes ppa:juju/stable
1699 sudo apt-get update --yes
1700 sudo apt-get install --yes python-amulet \
1701+ python-cinderclient \
1702+ python-distro-info \
1703+ python-glanceclient \
1704+ python-heatclient \
1705 python-keystoneclient \
1706- python-glanceclient \
1707 python-novaclient
1708+ python-swiftclient
1709
1710=== modified file 'tests/017-basic-trusty-kilo' (properties changed: -x to +x)
1711=== modified file 'tests/019-basic-vivid-kilo' (properties changed: -x to +x)
1712=== added file 'tests/020-basic-trusty-liberty'
1713--- tests/020-basic-trusty-liberty 1970-01-01 00:00:00 +0000
1714+++ tests/020-basic-trusty-liberty 2015-06-30 20:18:04 +0000
1715@@ -0,0 +1,11 @@
1716+#!/usr/bin/python
1717+
1718+"""Amulet tests on a basic glance deployment on trusty-liberty."""
1719+
1720+from basic_deployment import GlanceBasicDeployment
1721+
1722+if __name__ == '__main__':
1723+ deployment = GlanceBasicDeployment(series='trusty',
1724+ openstack='cloud:trusty-liberty',
1725+ source='cloud:trusty-updates/liberty')
1726+ deployment.run_tests()
1727
1728=== added file 'tests/021-basic-wily-liberty'
1729--- tests/021-basic-wily-liberty 1970-01-01 00:00:00 +0000
1730+++ tests/021-basic-wily-liberty 2015-06-30 20:18:04 +0000
1731@@ -0,0 +1,9 @@
1732+#!/usr/bin/python
1733+
1734+"""Amulet tests on a basic glance deployment on wily-liberty."""
1735+
1736+from basic_deployment import GlanceBasicDeployment
1737+
1738+if __name__ == '__main__':
1739+ deployment = GlanceBasicDeployment(series='wily')
1740+ deployment.run_tests()
1741
1742=== modified file 'tests/README'
1743--- tests/README 2014-10-08 20:18:38 +0000
1744+++ tests/README 2015-06-30 20:18:04 +0000
1745@@ -1,6 +1,15 @@
1746 This directory provides Amulet tests that focus on verification of Glance
1747 deployments.
1748
1749+test_* methods are called in lexical sort order.
1750+
1751+Test name convention to ensure desired test order:
1752+ 1xx service and endpoint checks
1753+ 2xx relation checks
1754+ 3xx config checks
1755+ 4xx functional checks
1756+ 9xx restarts and other final checks
1757+
1758 In order to run tests, you'll need charm-tools installed (in addition to
1759 juju, of course):
1760 sudo add-apt-repository ppa:juju/stable
1761
1762=== modified file 'tests/basic_deployment.py'
1763--- tests/basic_deployment.py 2015-04-24 10:07:08 +0000
1764+++ tests/basic_deployment.py 2015-06-30 20:18:04 +0000
1765@@ -2,6 +2,7 @@
1766
1767 import amulet
1768 import os
1769+import time
1770 import yaml
1771
1772 from charmhelpers.contrib.openstack.amulet.deployment import (
1773@@ -10,25 +11,30 @@
1774
1775 from charmhelpers.contrib.openstack.amulet.utils import (
1776 OpenStackAmuletUtils,
1777- DEBUG, # flake8: noqa
1778- ERROR
1779+ DEBUG,
1780+ # ERROR
1781 )
1782
1783 # Use DEBUG to turn on debug logging
1784 u = OpenStackAmuletUtils(DEBUG)
1785
1786+
1787 class GlanceBasicDeployment(OpenStackAmuletDeployment):
1788- '''Amulet tests on a basic file-backed glance deployment. Verify relations,
1789- service status, endpoint service catalog, create and delete new image.'''
1790-
1791-# TO-DO(beisner):
1792-# * Add tests with different storage back ends
1793-# * Resolve Essex->Havana juju set charm bug
1794+ """Amulet tests on a basic file-backed glance deployment. Verify
1795+ relations, service status, endpoint service catalog, create and
1796+ delete new image."""
1797
1798 def __init__(self, series=None, openstack=None, source=None, git=False,
1799+<<<<<<< TREE
1800 stable=True):
1801 '''Deploy the entire test environment.'''
1802 super(GlanceBasicDeployment, self).__init__(series, openstack, source, stable)
1803+=======
1804+ stable=False):
1805+ """Deploy the entire test environment."""
1806+ super(GlanceBasicDeployment, self).__init__(series, openstack,
1807+ source, stable)
1808+>>>>>>> MERGE-SOURCE
1809 self.git = git
1810 self._add_services()
1811 self._add_relations()
1812@@ -37,20 +43,21 @@
1813 self._initialize_tests()
1814
1815 def _add_services(self):
1816- '''Add services
1817+ """Add services
1818
1819 Add the services that we're testing, where glance is local,
1820 and the rest of the service are from lp branches that are
1821 compatible with the local charm (e.g. stable or next).
1822- '''
1823+ """
1824 this_service = {'name': 'glance'}
1825- other_services = [{'name': 'mysql'}, {'name': 'rabbitmq-server'},
1826+ other_services = [{'name': 'mysql'},
1827+ {'name': 'rabbitmq-server'},
1828 {'name': 'keystone'}]
1829 super(GlanceBasicDeployment, self)._add_services(this_service,
1830 other_services)
1831
1832 def _add_relations(self):
1833- '''Add relations for the services.'''
1834+ """Add relations for the services."""
1835 relations = {'glance:identity-service': 'keystone:identity-service',
1836 'glance:shared-db': 'mysql:shared-db',
1837 'keystone:shared-db': 'mysql:shared-db',
1838@@ -58,7 +65,7 @@
1839 super(GlanceBasicDeployment, self)._add_relations(relations)
1840
1841 def _configure_services(self):
1842- '''Configure all of the services.'''
1843+ """Configure all of the services."""
1844 glance_config = {}
1845 if self.git:
1846 branch = 'stable/' + self._get_openstack_release_string()
1847@@ -66,17 +73,18 @@
1848 openstack_origin_git = {
1849 'repositories': [
1850 {'name': 'requirements',
1851- 'repository': 'git://git.openstack.org/openstack/requirements',
1852+ 'repository': 'git://github.com/openstack/requirements',
1853 'branch': branch},
1854 {'name': 'glance',
1855- 'repository': 'git://git.openstack.org/openstack/glance',
1856+ 'repository': 'git://github.com/openstack/glance',
1857 'branch': branch},
1858 ],
1859 'directory': '/mnt/openstack-git',
1860 'http_proxy': amulet_http_proxy,
1861 'https_proxy': amulet_http_proxy,
1862 }
1863- glance_config['openstack-origin-git'] = yaml.dump(openstack_origin_git)
1864+ glance_config['openstack-origin-git'] = \
1865+ yaml.dump(openstack_origin_git)
1866
1867 keystone_config = {'admin-password': 'openstack',
1868 'admin-token': 'ubuntutesting'}
1869@@ -87,12 +95,19 @@
1870 super(GlanceBasicDeployment, self)._configure_services(configs)
1871
1872 def _initialize_tests(self):
1873- '''Perform final initialization before tests get run.'''
1874+ """Perform final initialization before tests get run."""
1875 # Access the sentries for inspecting service units
1876 self.mysql_sentry = self.d.sentry.unit['mysql/0']
1877 self.glance_sentry = self.d.sentry.unit['glance/0']
1878 self.keystone_sentry = self.d.sentry.unit['keystone/0']
1879 self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
1880+ u.log.debug('openstack release val: {}'.format(
1881+ self._get_openstack_release()))
1882+ u.log.debug('openstack release str: {}'.format(
1883+ self._get_openstack_release_string()))
1884+
1885+ # Let things settle a bit before moving forward
1886+ time.sleep(30)
1887
1888 # Authenticate admin with keystone
1889 self.keystone = u.authenticate_keystone_admin(self.keystone_sentry,
1890@@ -103,46 +118,99 @@
1891 # Authenticate admin with glance endpoint
1892 self.glance = u.authenticate_glance_admin(self.keystone)
1893
1894- u.log.debug('openstack release: {}'.format(self._get_openstack_release()))
1895-
1896- def test_services(self):
1897- '''Verify that the expected services are running on the
1898- corresponding service units.'''
1899- commands = {
1900- self.mysql_sentry: ['status mysql'],
1901- self.keystone_sentry: ['status keystone'],
1902- self.glance_sentry: ['status glance-api', 'status glance-registry'],
1903- self.rabbitmq_sentry: ['sudo service rabbitmq-server status']
1904+ def test_100_services(self):
1905+ """Verify that the expected services are running on the
1906+ corresponding service units."""
1907+ services = {
1908+ self.mysql_sentry: ['mysql'],
1909+ self.keystone_sentry: ['keystone'],
1910+ self.glance_sentry: ['glance-api', 'glance-registry'],
1911+ self.rabbitmq_sentry: ['rabbitmq-server']
1912 }
1913- u.log.debug('commands: {}'.format(commands))
1914- ret = u.validate_services(commands)
1915+
1916+ ret = u.validate_services_by_name(services)
1917 if ret:
1918 amulet.raise_status(amulet.FAIL, msg=ret)
1919
1920- def test_service_catalog(self):
1921- '''Verify that the service catalog endpoint data'''
1922- endpoint_vol = {'adminURL': u.valid_url,
1923- 'region': 'RegionOne',
1924- 'publicURL': u.valid_url,
1925- 'internalURL': u.valid_url}
1926- endpoint_id = {'adminURL': u.valid_url,
1927- 'region': 'RegionOne',
1928- 'publicURL': u.valid_url,
1929- 'internalURL': u.valid_url}
1930- if self._get_openstack_release() >= self.trusty_icehouse:
1931- endpoint_vol['id'] = u.not_null
1932- endpoint_id['id'] = u.not_null
1933-
1934- expected = {'image': [endpoint_id],
1935- 'identity': [endpoint_id]}
1936+ def test_102_service_catalog(self):
1937+ """Verify that the service catalog endpoint data is valid."""
1938+ u.log.debug('Checking keystone service catalog...')
1939+ endpoint_check = {
1940+ 'adminURL': u.valid_url,
1941+ 'id': u.not_null,
1942+ 'region': 'RegionOne',
1943+ 'publicURL': u.valid_url,
1944+ 'internalURL': u.valid_url
1945+ }
1946+ expected = {
1947+ 'image': [endpoint_check],
1948+ 'identity': [endpoint_check]
1949+ }
1950 actual = self.keystone.service_catalog.get_endpoints()
1951
1952 ret = u.validate_svc_catalog_endpoint_data(expected, actual)
1953 if ret:
1954 amulet.raise_status(amulet.FAIL, msg=ret)
1955
1956- def test_mysql_glance_db_relation(self):
1957- '''Verify the mysql:glance shared-db relation data'''
1958+ def test_104_glance_endpoint(self):
1959+ """Verify the glance endpoint data."""
1960+ u.log.debug('Checking glance api endpoint data...')
1961+ endpoints = self.keystone.endpoints.list()
1962+ admin_port = internal_port = public_port = '9292'
1963+ expected = {'id': u.not_null,
1964+ 'region': 'RegionOne',
1965+ 'adminurl': u.valid_url,
1966+ 'internalurl': u.valid_url,
1967+ 'publicurl': u.valid_url,
1968+ 'service_id': u.not_null}
1969+ ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
1970+ public_port, expected)
1971+
1972+ if ret:
1973+ amulet.raise_status(amulet.FAIL,
1974+ msg='glance endpoint: {}'.format(ret))
1975+
1976+ def test_106_keystone_endpoint(self):
1977+ """Verify the keystone endpoint data."""
1978+ u.log.debug('Checking keystone api endpoint data...')
1979+ endpoints = self.keystone.endpoints.list()
1980+ admin_port = '35357'
1981+ internal_port = public_port = '5000'
1982+ expected = {'id': u.not_null,
1983+ 'region': 'RegionOne',
1984+ 'adminurl': u.valid_url,
1985+ 'internalurl': u.valid_url,
1986+ 'publicurl': u.valid_url,
1987+ 'service_id': u.not_null}
1988+ ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
1989+ public_port, expected)
1990+ if ret:
1991+ amulet.raise_status(amulet.FAIL,
1992+ msg='keystone endpoint: {}'.format(ret))
1993+
1994+ def test_110_users(self):
1995+ """Verify expected users."""
1996+ u.log.debug('Checking keystone users...')
1997+ user0 = {'name': 'admin',
1998+ 'enabled': True,
1999+ 'tenantId': u.not_null,
2000+ 'id': u.not_null,
2001+ 'email': 'juju@localhost'}
2002+ user1 = {'name': 'glance',
2003+ 'enabled': True,
2004+ 'tenantId': u.not_null,
2005+ 'id': u.not_null,
2006+ 'email': 'juju@localhost'}
2007+ expected = [user0, user1]
2008+ actual = self.keystone.users.list()
2009+
2010+ ret = u.validate_user_data(expected, actual)
2011+ if ret:
2012+ amulet.raise_status(amulet.FAIL, msg=ret)
2013+
2014+ def test_200_mysql_glance_db_relation(self):
2015+ """Verify the mysql:glance shared-db relation data"""
2016+ u.log.debug('Checking mysql to glance shared-db relation data...')
2017 unit = self.mysql_sentry
2018 relation = ['shared-db', 'glance:shared-db']
2019 expected = {
2020@@ -154,8 +222,9 @@
2021 message = u.relation_error('mysql shared-db', ret)
2022 amulet.raise_status(amulet.FAIL, msg=message)
2023
2024- def test_glance_mysql_db_relation(self):
2025- '''Verify the glance:mysql shared-db relation data'''
2026+ def test_201_glance_mysql_db_relation(self):
2027+ """Verify the glance:mysql shared-db relation data"""
2028+ u.log.debug('Checking glance to mysql shared-db relation data...')
2029 unit = self.glance_sentry
2030 relation = ['shared-db', 'mysql:shared-db']
2031 expected = {
2032@@ -169,8 +238,9 @@
2033 message = u.relation_error('glance shared-db', ret)
2034 amulet.raise_status(amulet.FAIL, msg=message)
2035
2036- def test_keystone_glance_id_relation(self):
2037- '''Verify the keystone:glance identity-service relation data'''
2038+ def test_202_keystone_glance_id_relation(self):
2039+ """Verify the keystone:glance identity-service relation data"""
2040+ u.log.debug('Checking keystone to glance id relation data...')
2041 unit = self.keystone_sentry
2042 relation = ['identity-service',
2043 'glance:identity-service']
2044@@ -193,8 +263,9 @@
2045 message = u.relation_error('keystone identity-service', ret)
2046 amulet.raise_status(amulet.FAIL, msg=message)
2047
2048- def test_glance_keystone_id_relation(self):
2049- '''Verify the glance:keystone identity-service relation data'''
2050+ def test_203_glance_keystone_id_relation(self):
2051+ """Verify the glance:keystone identity-service relation data"""
2052+ u.log.debug('Checking glance to keystone relation data...')
2053 unit = self.glance_sentry
2054 relation = ['identity-service',
2055 'keystone:identity-service']
2056@@ -211,8 +282,9 @@
2057 message = u.relation_error('glance identity-service', ret)
2058 amulet.raise_status(amulet.FAIL, msg=message)
2059
2060- def test_rabbitmq_glance_amqp_relation(self):
2061- '''Verify the rabbitmq-server:glance amqp relation data'''
2062+ def test_204_rabbitmq_glance_amqp_relation(self):
2063+ """Verify the rabbitmq-server:glance amqp relation data"""
2064+ u.log.debug('Checking rmq to glance amqp relation data...')
2065 unit = self.rabbitmq_sentry
2066 relation = ['amqp', 'glance:amqp']
2067 expected = {
2068@@ -225,8 +297,9 @@
2069 message = u.relation_error('rabbitmq amqp', ret)
2070 amulet.raise_status(amulet.FAIL, msg=message)
2071
2072- def test_glance_rabbitmq_amqp_relation(self):
2073- '''Verify the glance:rabbitmq-server amqp relation data'''
2074+ def test_205_glance_rabbitmq_amqp_relation(self):
2075+ """Verify the glance:rabbitmq-server amqp relation data"""
2076+ u.log.debug('Checking glance to rmq amqp relation data...')
2077 unit = self.glance_sentry
2078 relation = ['amqp', 'rabbitmq-server:amqp']
2079 expected = {
2080@@ -239,291 +312,180 @@
2081 message = u.relation_error('glance amqp', ret)
2082 amulet.raise_status(amulet.FAIL, msg=message)
2083
2084- def test_image_create_delete(self):
2085- '''Create new cirros image in glance, verify, then delete it'''
2086-
2087- # Create a new image
2088- image_name = 'cirros-image-1'
2089- image_new = u.create_cirros_image(self.glance, image_name)
2090-
2091- # Confirm image is created and has status of 'active'
2092- if not image_new:
2093- message = 'glance image create failed'
2094- amulet.raise_status(amulet.FAIL, msg=message)
2095-
2096- # Verify new image name
2097- images_list = list(self.glance.images.list())
2098- if images_list[0].name != image_name:
2099- message = 'glance image create failed or unexpected image name {}'.format(images_list[0].name)
2100- amulet.raise_status(amulet.FAIL, msg=message)
2101-
2102- # Delete the new image
2103- u.log.debug('image count before delete: {}'.format(len(list(self.glance.images.list()))))
2104- u.delete_image(self.glance, image_new)
2105- u.log.debug('image count after delete: {}'.format(len(list(self.glance.images.list()))))
2106-
2107- def test_glance_api_default_config(self):
2108- '''Verify default section configs in glance-api.conf and
2109- compare some of the parameters to relation data.'''
2110+ def test_300_glance_api_default_config(self):
2111+ """Verify default section configs in glance-api.conf and
2112+ compare some of the parameters to relation data."""
2113+ u.log.debug('Checking glance api config file...')
2114 unit = self.glance_sentry
2115+ unit_ks = self.keystone_sentry
2116 rel_gl_mq = unit.relation('amqp', 'rabbitmq-server:amqp')
2117- conf = '/etc/glance/glance-api.conf'
2118- expected = {'use_syslog': 'False',
2119- 'default_store': 'file',
2120- 'filesystem_store_datadir': '/var/lib/glance/images/',
2121- 'rabbit_userid': rel_gl_mq['username'],
2122- 'log_file': '/var/log/glance/api.log',
2123- 'debug': 'False',
2124- 'verbose': 'False'}
2125- section = 'DEFAULT'
2126-
2127- if self._get_openstack_release() <= self.precise_havana:
2128- # Defaults were different before icehouse
2129- expected['debug'] = 'True'
2130- expected['verbose'] = 'True'
2131-
2132- ret = u.validate_config_data(unit, conf, section, expected)
2133- if ret:
2134- message = "glance-api default config error: {}".format(ret)
2135- amulet.raise_status(amulet.FAIL, msg=message)
2136-
2137- def test_glance_api_auth_config(self):
2138- '''Verify authtoken section config in glance-api.conf using
2139- glance/keystone relation data.'''
2140- unit_gl = self.glance_sentry
2141- unit_ks = self.keystone_sentry
2142- rel_gl_mq = unit_gl.relation('amqp', 'rabbitmq-server:amqp')
2143- rel_ks_gl = unit_ks.relation('identity-service', 'glance:identity-service')
2144- conf = '/etc/glance/glance-api.conf'
2145- section = 'keystone_authtoken'
2146-
2147- if self._get_openstack_release() > self.precise_havana:
2148- # No auth config exists in this file before icehouse
2149- expected = {'admin_user': 'glance',
2150- 'admin_password': rel_ks_gl['service_password']}
2151-
2152- ret = u.validate_config_data(unit_gl, conf, section, expected)
2153+ rel_ks_gl = unit_ks.relation('identity-service',
2154+ 'glance:identity-service')
2155+ rel_my_gl = self.mysql_sentry.relation('shared-db', 'glance:shared-db')
2156+ db_uri = "mysql://{}:{}@{}/{}".format('glance', rel_my_gl['password'],
2157+ rel_my_gl['db_host'], 'glance')
2158+ conf = '/etc/glance/glance-api.conf'
2159+ expected = {
2160+ 'DEFAULT': {
2161+ 'debug': 'False',
2162+ 'verbose': 'False',
2163+ 'use_syslog': 'False',
2164+ 'log_file': '/var/log/glance/api.log',
2165+ 'default_store': 'file',
2166+ 'filesystem_store_datadir': '/var/lib/glance/images/',
2167+ 'rabbit_userid': rel_gl_mq['username'],
2168+ 'bind_host': '0.0.0.0',
2169+ 'bind_port': '9282',
2170+ 'registry_host': '0.0.0.0',
2171+ 'registry_port': '9191',
2172+ 'registry_client_protocol': 'http',
2173+ 'delayed_delete': 'False',
2174+ 'scrub_time': '43200',
2175+            'notification_driver': 'rabbit',
2177+            'scrubber_datadir': '/var/lib/glance/scrubber',
2178+            'image_cache_dir': '/var/lib/glance/image-cache/',
2179+            'db_enforce_mysql_charset': 'False',
2181+ 'rabbit_virtual_host': 'openstack',
2182+ 'rabbit_password': u.not_null,
2183+ 'rabbit_host': u.valid_ip
2184+ },
2185+ 'keystone_authtoken': {
2186+ 'admin_user': 'glance',
2187+ 'admin_password': rel_ks_gl['service_password'],
2188+ 'auth_uri': u.valid_url,
2189+ 'auth_host': u.valid_ip,
2190+ 'auth_port': '35357',
2191+ 'auth_protocol': 'http',
2192+ },
2193+ 'database': {
2194+ 'connection': db_uri,
2195+ 'sql_idle_timeout': '3600'
2196+ }
2197+ }
2198+
2199+ for section, pairs in expected.iteritems():
2200+ ret = u.validate_config_data(unit, conf, section, pairs)
2201 if ret:
2202- message = "glance-api auth config error: {}".format(ret)
2203+ message = "glance api config error: {}".format(ret)
2204 amulet.raise_status(amulet.FAIL, msg=message)
2205
2206- def test_glance_api_paste_auth_config(self):
2207- '''Verify authtoken section config in glance-api-paste.ini using
2208- glance/keystone relation data.'''
2209- unit_gl = self.glance_sentry
2210- unit_ks = self.keystone_sentry
2211- rel_gl_mq = unit_gl.relation('amqp', 'rabbitmq-server:amqp')
2212- rel_ks_gl = unit_ks.relation('identity-service', 'glance:identity-service')
2213+ def _get_filter_factory_expected_dict(self):
2214+ """Return expected authtoken filter factory dict for OS release"""
2215+ if self._get_openstack_release() < self.vivid_kilo:
2216+ # Juno and earlier
2217+ val = 'keystoneclient.middleware.auth_token:filter_factory'
2218+ else:
2219+ # Kilo and later
2220+            val = 'keystonemiddleware.auth_token:filter_factory'
2221+
2222+ return {'filter:authtoken': {'paste.filter_factory': val}}
2223+
2224+ def test_304_glance_api_paste_auth_config(self):
2225+ """Verify authtoken section config in glance-api-paste.ini using
2226+ glance/keystone relation data."""
2227+ u.log.debug('Checking glance api paste config file...')
2228+ unit = self.glance_sentry
2229 conf = '/etc/glance/glance-api-paste.ini'
2230- section = 'filter:authtoken'
2231-
2232- if self._get_openstack_release() <= self.precise_havana:
2233- # No auth config exists in this file after havana
2234- expected = {'admin_user': 'glance',
2235- 'admin_password': rel_ks_gl['service_password']}
2236-
2237- ret = u.validate_config_data(unit_gl, conf, section, expected)
2238+ expected = self._get_filter_factory_expected_dict()
2239+
2240+ for section, pairs in expected.iteritems():
2241+ ret = u.validate_config_data(unit, conf, section, pairs)
2242 if ret:
2243- message = "glance-api-paste auth config error: {}".format(ret)
2244+ message = "glance api paste config error: {}".format(ret)
2245 amulet.raise_status(amulet.FAIL, msg=message)
2246
2247- def test_glance_registry_paste_auth_config(self):
2248- '''Verify authtoken section config in glance-registry-paste.ini using
2249- glance/keystone relation data.'''
2250- unit_gl = self.glance_sentry
2251- unit_ks = self.keystone_sentry
2252- rel_gl_mq = unit_gl.relation('amqp', 'rabbitmq-server:amqp')
2253- rel_ks_gl = unit_ks.relation('identity-service', 'glance:identity-service')
2254+ def test_306_glance_registry_paste_auth_config(self):
2255+ """Verify authtoken section config in glance-registry-paste.ini using
2256+ glance/keystone relation data."""
2257+ u.log.debug('Checking glance registry paste config file...')
2258+ unit = self.glance_sentry
2259 conf = '/etc/glance/glance-registry-paste.ini'
2260- section = 'filter:authtoken'
2261-
2262- if self._get_openstack_release() <= self.precise_havana:
2263- # No auth config exists in this file after havana
2264- expected = {'admin_user': 'glance',
2265- 'admin_password': rel_ks_gl['service_password']}
2266-
2267- ret = u.validate_config_data(unit_gl, conf, section, expected)
2268+ expected = self._get_filter_factory_expected_dict()
2269+
2270+ for section, pairs in expected.iteritems():
2271+ ret = u.validate_config_data(unit, conf, section, pairs)
2272 if ret:
2273- message = "glance-registry-paste auth config error: {}".format(ret)
2274+ message = "glance registry paste config error: {}".format(ret)
2275 amulet.raise_status(amulet.FAIL, msg=message)
2276
2277- def test_glance_registry_default_config(self):
2278- '''Verify default section configs in glance-registry.conf'''
2279+ def test_308_glance_registry_default_config(self):
2280+ """Verify configs in glance-registry.conf"""
2281+ u.log.debug('Checking glance registry config file...')
2282 unit = self.glance_sentry
2283- conf = '/etc/glance/glance-registry.conf'
2284- expected = {'use_syslog': 'False',
2285- 'log_file': '/var/log/glance/registry.log',
2286- 'debug': 'False',
2287- 'verbose': 'False'}
2288- section = 'DEFAULT'
2289-
2290- if self._get_openstack_release() <= self.precise_havana:
2291- # Defaults were different before icehouse
2292- expected['debug'] = 'True'
2293- expected['verbose'] = 'True'
2294-
2295- ret = u.validate_config_data(unit, conf, section, expected)
2296- if ret:
2297- message = "glance-registry default config error: {}".format(ret)
2298- amulet.raise_status(amulet.FAIL, msg=message)
2299-
2300- def test_glance_registry_auth_config(self):
2301- '''Verify authtoken section config in glance-registry.conf
2302- using glance/keystone relation data.'''
2303- unit_gl = self.glance_sentry
2304 unit_ks = self.keystone_sentry
2305- rel_gl_mq = unit_gl.relation('amqp', 'rabbitmq-server:amqp')
2306- rel_ks_gl = unit_ks.relation('identity-service', 'glance:identity-service')
2307+ rel_ks_gl = unit_ks.relation('identity-service',
2308+ 'glance:identity-service')
2309+ rel_my_gl = self.mysql_sentry.relation('shared-db', 'glance:shared-db')
2310+ db_uri = "mysql://{}:{}@{}/{}".format('glance', rel_my_gl['password'],
2311+ rel_my_gl['db_host'], 'glance')
2312 conf = '/etc/glance/glance-registry.conf'
2313- section = 'keystone_authtoken'
2314-
2315- if self._get_openstack_release() > self.precise_havana:
2316- # No auth config exists in this file before icehouse
2317- expected = {'admin_user': 'glance',
2318- 'admin_password': rel_ks_gl['service_password']}
2319-
2320- ret = u.validate_config_data(unit_gl, conf, section, expected)
2321+
2322+ expected = {
2323+ 'DEFAULT': {
2324+ 'use_syslog': 'False',
2325+ 'log_file': '/var/log/glance/registry.log',
2326+ 'debug': 'False',
2327+ 'verbose': 'False',
2328+ 'bind_host': '0.0.0.0',
2329+ 'bind_port': '9191'
2330+ },
2331+ 'database': {
2332+ 'connection': db_uri,
2333+ 'sql_idle_timeout': '3600'
2334+ },
2335+ 'keystone_authtoken': {
2336+ 'admin_user': 'glance',
2337+ 'admin_password': rel_ks_gl['service_password'],
2338+ 'auth_uri': u.valid_url,
2339+ 'auth_host': u.valid_ip,
2340+ 'auth_port': '35357',
2341+ 'auth_protocol': 'http',
2342+ },
2343+ }
2344+
2345+ for section, pairs in expected.iteritems():
2346+ ret = u.validate_config_data(unit, conf, section, pairs)
2347 if ret:
2348- message = "glance-registry keystone_authtoken config error: {}".format(ret)
2349+            message = "glance registry config error: {}".format(ret)
2350 amulet.raise_status(amulet.FAIL, msg=message)
2351
2352- def test_glance_api_database_config(self):
2353- '''Verify database config in glance-api.conf and
2354- compare with a db uri constructed from relation data.'''
2355- unit = self.glance_sentry
2356- conf = '/etc/glance/glance-api.conf'
2357- relation = self.mysql_sentry.relation('shared-db', 'glance:shared-db')
2358- db_uri = "mysql://{}:{}@{}/{}".format('glance', relation['password'],
2359- relation['db_host'], 'glance')
2360- expected = {'connection': db_uri, 'sql_idle_timeout': '3600'}
2361- section = 'database'
2362-
2363- if self._get_openstack_release() <= self.precise_havana:
2364- # Section and directive for this config changed in icehouse
2365- expected = {'sql_connection': db_uri, 'sql_idle_timeout': '3600'}
2366- section = 'DEFAULT'
2367-
2368- ret = u.validate_config_data(unit, conf, section, expected)
2369- if ret:
2370- message = "glance db config error: {}".format(ret)
2371- amulet.raise_status(amulet.FAIL, msg=message)
2372-
2373- def test_glance_registry_database_config(self):
2374- '''Verify database config in glance-registry.conf and
2375- compare with a db uri constructed from relation data.'''
2376- unit = self.glance_sentry
2377- conf = '/etc/glance/glance-registry.conf'
2378- relation = self.mysql_sentry.relation('shared-db', 'glance:shared-db')
2379- db_uri = "mysql://{}:{}@{}/{}".format('glance', relation['password'],
2380- relation['db_host'], 'glance')
2381- expected = {'connection': db_uri, 'sql_idle_timeout': '3600'}
2382- section = 'database'
2383-
2384- if self._get_openstack_release() <= self.precise_havana:
2385- # Section and directive for this config changed in icehouse
2386- expected = {'sql_connection': db_uri, 'sql_idle_timeout': '3600'}
2387- section = 'DEFAULT'
2388-
2389- ret = u.validate_config_data(unit, conf, section, expected)
2390- if ret:
2391- message = "glance db config error: {}".format(ret)
2392- amulet.raise_status(amulet.FAIL, msg=message)
2393-
2394- def test_glance_endpoint(self):
2395- '''Verify the glance endpoint data.'''
2396- endpoints = self.keystone.endpoints.list()
2397- admin_port = internal_port = public_port = '9292'
2398- expected = {'id': u.not_null,
2399- 'region': 'RegionOne',
2400- 'adminurl': u.valid_url,
2401- 'internalurl': u.valid_url,
2402- 'publicurl': u.valid_url,
2403- 'service_id': u.not_null}
2404- ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
2405- public_port, expected)
2406-
2407- if ret:
2408- amulet.raise_status(amulet.FAIL,
2409- msg='glance endpoint: {}'.format(ret))
2410-
2411- def test_keystone_endpoint(self):
2412- '''Verify the keystone endpoint data.'''
2413- endpoints = self.keystone.endpoints.list()
2414- admin_port = '35357'
2415- internal_port = public_port = '5000'
2416- expected = {'id': u.not_null,
2417- 'region': 'RegionOne',
2418- 'adminurl': u.valid_url,
2419- 'internalurl': u.valid_url,
2420- 'publicurl': u.valid_url,
2421- 'service_id': u.not_null}
2422- ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
2423- public_port, expected)
2424- if ret:
2425- amulet.raise_status(amulet.FAIL,
2426- msg='keystone endpoint: {}'.format(ret))
2427-
2428- def _change_config(self):
2429- if self._get_openstack_release() > self.precise_havana:
2430- self.d.configure('glance', {'debug': 'True'})
2431- else:
2432- self.d.configure('glance', {'debug': 'False'})
2433-
2434- def _restore_config(self):
2435- if self._get_openstack_release() > self.precise_havana:
2436- self.d.configure('glance', {'debug': 'False'})
2437- else:
2438- self.d.configure('glance', {'debug': 'True'})
2439-
2440- def test_z_glance_restart_on_config_change(self):
2441- '''Verify that glance is restarted when the config is changed.
2442-
2443- Note(coreycb): The method name with the _z_ is a little odd
2444- but it forces the test to run last. It just makes things
2445- easier because restarting services requires re-authorization.
2446- '''
2447- if self._get_openstack_release() <= self.precise_havana:
2448- # /!\ NOTE(beisner): Glance charm before Icehouse doesn't respond
2449- # to attempted config changes via juju / juju set.
2450- # https://bugs.launchpad.net/charms/+source/glance/+bug/1340307
2451- u.log.error('NOTE(beisner): skipping glance restart on config ' +
2452- 'change check due to bug 1340307.')
2453- return
2454-
2455- # Make config change to trigger a service restart
2456- self._change_config()
2457-
2458- if not u.service_restarted(self.glance_sentry, 'glance-api',
2459- '/etc/glance/glance-api.conf'):
2460- self._restore_config()
2461- message = "glance service didn't restart after config change"
2462- amulet.raise_status(amulet.FAIL, msg=message)
2463-
2464- if not u.service_restarted(self.glance_sentry, 'glance-registry',
2465- '/etc/glance/glance-registry.conf',
2466- sleep_time=0):
2467- self._restore_config()
2468- message = "glance service didn't restart after config change"
2469- amulet.raise_status(amulet.FAIL, msg=message)
2470-
2471- # Return to original config
2472- self._restore_config()
2473-
2474- def test_users(self):
2475- '''Verify expected users.'''
2476- user0 = {'name': 'glance',
2477- 'enabled': True,
2478- 'tenantId': u.not_null,
2479- 'id': u.not_null,
2480- 'email': 'juju@localhost'}
2481- user1 = {'name': 'admin',
2482- 'enabled': True,
2483- 'tenantId': u.not_null,
2484- 'id': u.not_null,
2485- 'email': 'juju@localhost'}
2486- expected = [user0, user1]
2487- actual = self.keystone.users.list()
2488-
2489- ret = u.validate_user_data(expected, actual)
2490- if ret:
2491- amulet.raise_status(amulet.FAIL, msg=ret)
2492+ def test_410_glance_image_create_delete(self):
2493+ """Create new cirros image in glance, verify, then delete it."""
2494+ u.log.debug('Creating, checking and deleting glance image...')
2495+ img_new = u.create_cirros_image(self.glance, "cirros-image-1")
2496+ img_id = img_new.id
2497+ u.delete_resource(self.glance.images, img_id, msg="glance image")
2498+
2499+ def test_900_glance_restart_on_config_change(self):
2500+ """Verify that the specified services are restarted when the config
2501+ is changed."""
2502+ sentry = self.glance_sentry
2503+ juju_service = 'glance'
2504+
2505+ # Expected default and alternate values
2506+ set_default = {'use-syslog': 'False'}
2507+ set_alternate = {'use-syslog': 'True'}
2508+
2509+ # Config file affected by juju set config change
2510+ conf_file = '/etc/glance/glance-api.conf'
2511+
2512+ # Services which are expected to restart upon config change
2513+ services = ['glance-api', 'glance-registry']
2514+
2515+ # Make config change, check for service restarts
2516+ u.log.debug('Making config change on {}...'.format(juju_service))
2517+ self.d.configure(juju_service, set_alternate)
2518+
2519+ sleep_time = 30
2520+ for s in services:
2521+ u.log.debug("Checking that service restarted: {}".format(s))
2522+ if not u.service_restarted(sentry, s,
2523+ conf_file, sleep_time=sleep_time):
2524+ self.d.configure(juju_service, set_default)
2525+ msg = "service {} didn't restart after config change".format(s)
2526+ amulet.raise_status(amulet.FAIL, msg=msg)
2527+ sleep_time = 0
2528+
2529+ self.d.configure(juju_service, set_default)
2530
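The new `test_900_glance_restart_on_config_change` reduces to: change a config value via `juju set`, then verify each service's process started at or after the moment the config file was last written, restoring the default either way. A minimal, amulet-free sketch of that comparison (function names here are illustrative, not part of the charm helpers):

```python
def service_restarted_since(proc_start_time, conf_mtime):
    """A service counts as restarted when its process start time is
    at or after the config file's last-modified time."""
    return proc_start_time >= conf_mtime


def check_restarts(conf_mtime, proc_start_times):
    """Return the first service name that failed to restart, else None.

    proc_start_times maps service name -> process start timestamp.
    """
    for name, started in proc_start_times.items():
        if not service_restarted_since(started, conf_mtime):
            return name
    return None
```

This mirrors the test's flow: on the first failing service it bails out (after restoring config), and a None result means every listed service restarted.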
2531=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
2532--- tests/charmhelpers/contrib/amulet/utils.py 2015-04-23 14:52:07 +0000
2533+++ tests/charmhelpers/contrib/amulet/utils.py 2015-06-30 20:18:04 +0000
2534@@ -15,13 +15,15 @@
2535 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2536
2537 import ConfigParser
2538+import distro_info
2539 import io
2540 import logging
2541+import os
2542 import re
2543+import six
2544 import sys
2545 import time
2546-
2547-import six
2548+import urlparse
2549
2550
2551 class AmuletUtils(object):
2552@@ -33,6 +35,7 @@
2553
2554 def __init__(self, log_level=logging.ERROR):
2555 self.log = self.get_logger(level=log_level)
2556+ self.ubuntu_releases = self.get_ubuntu_releases()
2557
2558 def get_logger(self, name="amulet-logger", level=logging.DEBUG):
2559 """Get a logger object that will log to stdout."""
2560@@ -70,12 +73,44 @@
2561 else:
2562 return False
2563
2564+ def get_ubuntu_release_from_sentry(self, sentry_unit):
2565+ """Get Ubuntu release codename from sentry unit.
2566+
2567+ :param sentry_unit: amulet sentry/service unit pointer
2568+ :returns: list of strings - release codename, failure message
2569+ """
2570+ msg = None
2571+ cmd = 'lsb_release -cs'
2572+ release, code = sentry_unit.run(cmd)
2573+ if code == 0:
2574+ self.log.debug('{} lsb_release: {}'.format(
2575+ sentry_unit.info['unit_name'], release))
2576+ else:
2577+ msg = ('{} `{}` returned {} '
2578+ '{}'.format(sentry_unit.info['unit_name'],
2579+ cmd, release, code))
2580+ if release not in self.ubuntu_releases:
2581+ msg = ("Release ({}) not found in Ubuntu releases "
2582+ "({})".format(release, self.ubuntu_releases))
2583+ return release, msg
2584+
2585 def validate_services(self, commands):
2586- """Validate services.
2587-
2588- Verify the specified services are running on the corresponding
2589+ """Validate that lists of commands succeed on service units. Can be
2590+ used to verify system services are running on the corresponding
2591 service units.
2592- """
2593+
2594+ :param commands: dict with sentry keys and arbitrary command list vals
2595+ :returns: None if successful, Failure string message otherwise
2596+ """
2597+ self.log.debug('Checking status of system services...')
2598+
2599+ # /!\ DEPRECATION WARNING (beisner):
2600+ # New and existing tests should be rewritten to use
2601+ # validate_services_by_name() as it is aware of init systems.
2602+ self.log.warn('/!\\ DEPRECATION WARNING: use '
2603+ 'validate_services_by_name instead of validate_services '
2604+ 'due to init system differences.')
2605+
2606 for k, v in six.iteritems(commands):
2607 for cmd in v:
2608 output, code = k.run(cmd)
2609@@ -86,6 +121,41 @@
2610 return "command `{}` returned {}".format(cmd, str(code))
2611 return None
2612
2613+ def validate_services_by_name(self, sentry_services):
2614+ """Validate system service status by service name, automatically
2615+ detecting init system based on Ubuntu release codename.
2616+
2617+ :param sentry_services: dict with sentry keys and svc list values
2618+ :returns: None if successful, Failure string message otherwise
2619+ """
2620+ self.log.debug('Checking status of system services...')
2621+
2622+ # Point at which systemd became a thing
2623+ systemd_switch = self.ubuntu_releases.index('vivid')
2624+
2625+ for sentry_unit, services_list in six.iteritems(sentry_services):
2626+ # Get lsb_release codename from unit
2627+ release, ret = self.get_ubuntu_release_from_sentry(sentry_unit)
2628+ if ret:
2629+ return ret
2630+
2631+ for service_name in services_list:
2632+ if (self.ubuntu_releases.index(release) >= systemd_switch or
2633+ service_name == "rabbitmq-server"):
2634+                    # init is systemd; rabbitmq-server uses `service` on all releases
2635+ cmd = 'sudo service {} status'.format(service_name)
2636+ elif self.ubuntu_releases.index(release) < systemd_switch:
2637+ # init is upstart
2638+ cmd = 'sudo status {}'.format(service_name)
2639+
2640+ output, code = sentry_unit.run(cmd)
2641+ self.log.debug('{} `{}` returned '
2642+ '{}'.format(sentry_unit.info['unit_name'],
2643+ cmd, code))
2644+ if code != 0:
2645+ return "command `{}` returned {}".format(cmd, str(code))
2646+ return None
2647+
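The release-index comparison in validate_services_by_name() can be sketched standalone; the release list below is a hand-maintained stand-in (assumption) for the distro_info data the class actually loads:

```python
# Minimal sketch of the init-system command selection above; the
# hard-coded release list is an assumption standing in for distro_info.
UBUNTU_RELEASES = ['precise', 'quantal', 'raring', 'saucy', 'trusty',
                   'utopic', 'vivid', 'wily']


def status_command(release, service_name):
    """Pick the service-status command for a given Ubuntu release."""
    systemd_switch = UBUNTU_RELEASES.index('vivid')  # first systemd release
    if UBUNTU_RELEASES.index(release) >= systemd_switch:
        return 'sudo service {} status'.format(service_name)   # systemd
    return 'sudo status {}'.format(service_name)               # upstart


print(status_command('trusty', 'glance-api'))  # sudo status glance-api
print(status_command('wily', 'glance-api'))    # sudo service glance-api status
```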
2648 def _get_config(self, unit, filename):
2649 """Get a ConfigParser object for parsing a unit's config file."""
2650 file_contents = unit.file_contents(filename)
2651@@ -104,6 +174,9 @@
2652 Verify that the specified section of the config file contains
2653 the expected option key:value pairs.
2654 """
2655+ self.log.debug('Validating config file data ({} in {} on {})'
2656+ '...'.format(section, config_file,
2657+ sentry_unit.info['unit_name']))
2658 config = self._get_config(sentry_unit, config_file)
2659
2660 if section != 'DEFAULT' and not config.has_section(section):
2661@@ -112,10 +185,23 @@
2662 for k in expected.keys():
2663 if not config.has_option(section, k):
2664 return "section [{}] is missing option {}".format(section, k)
2665- if config.get(section, k) != expected[k]:
2666- return "section [{}] {}:{} != expected {}:{}".format(
2667- section, k, config.get(section, k), k, expected[k])
2668- return None
2669+
2670+ actual = config.get(section, k)
2671+ v = expected[k]
2672+ if (isinstance(v, six.string_types) or
2673+ isinstance(v, bool) or
2674+ isinstance(v, six.integer_types)):
2675+ # handle explicit values
2676+ if actual != v:
2677+ return "section [{}] {}:{} != expected {}:{}".format(
2678+ section, k, actual, k, expected[k])
2679+ else:
2680+                # handle not_null, valid_ip boolean comparison methods, etc.
2681+                # (check every key; do not return early on a passing check)
2682+                if not v(actual):
2683+                    return "section [{}] {}:{} != expected {}:{}".format(
2684+                        section, k, actual, k, expected[k])
2686
2687 def _validate_dict_data(self, expected, actual):
2688 """Validate dictionary data.
2689@@ -321,3 +407,135 @@
2690
2691 def endpoint_error(self, name, data):
2692 return 'unexpected endpoint data in {} - {}'.format(name, data)
2693+
2694+ def get_ubuntu_releases(self):
2695+ """Return a list of all Ubuntu releases in order of release."""
2696+ _d = distro_info.UbuntuDistroInfo()
2697+ _release_list = _d.all
2698+ self.log.debug('Ubuntu release list: {}'.format(_release_list))
2699+ return _release_list
2700+
2701+ def file_to_url(self, file_rel_path):
2702+ """Convert a relative file path to a file URL."""
2703+ _abs_path = os.path.abspath(file_rel_path)
2704+ return urlparse.urlparse(_abs_path, scheme='file').geturl()
2705+
2706+ def check_commands_on_units(self, commands, sentry_units):
2707+ """Check that all commands in a list exit zero on all
2708+ sentry units in a list.
2709+
2710+ :param commands: list of bash commands
2711+ :param sentry_units: list of sentry unit pointers
2712+ :returns: None if successful; Failure message otherwise
2713+ """
2714+ self.log.debug('Checking exit codes for {} commands on {} '
2715+ 'sentry units...'.format(len(commands),
2716+ len(sentry_units)))
2717+ for sentry_unit in sentry_units:
2718+ for cmd in commands:
2719+ output, code = sentry_unit.run(cmd)
2720+ if code == 0:
2721+ msg = ('{} `{}` returned {} '
2722+ '(OK)'.format(sentry_unit.info['unit_name'],
2723+ cmd, code))
2724+ self.log.debug(msg)
2725+ else:
2726+ msg = ('{} `{}` returned {} '
2727+ '{}'.format(sentry_unit.info['unit_name'],
2728+ cmd, code, output))
2729+ return msg
2730+ return None
2731+
2732+ def get_process_id_list(self, sentry_unit, process_name):
2733+ """Get a list of process ID(s) from a single sentry juju unit
2734+ for a single process name.
2735+
2736+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
2737+ :param process_name: Process name
2738+ :returns: List of process IDs
2739+ """
2740+ cmd = 'pidof {}'.format(process_name)
2741+ output, code = sentry_unit.run(cmd)
2742+ if code != 0:
2743+ msg = ('{} `{}` returned {} '
2744+ '{}'.format(sentry_unit.info['unit_name'],
2745+ cmd, code, output))
2746+ raise RuntimeError(msg)
2747+ return str(output).split()
2748+
2749+ def get_unit_process_ids(self, unit_processes):
2750+ """Construct a dict containing unit sentries, process names, and
2751+ process IDs."""
2752+ pid_dict = {}
2753+        for sentry_unit, process_list in six.iteritems(unit_processes):
2754+ pid_dict[sentry_unit] = {}
2755+ for process in process_list:
2756+ pids = self.get_process_id_list(sentry_unit, process)
2757+ pid_dict[sentry_unit].update({process: pids})
2758+ return pid_dict
2759+
2760+ def validate_unit_process_ids(self, expected, actual):
2761+ """Validate process id quantities for services on units."""
2762+ self.log.debug('Checking units for running processes...')
2763+ self.log.debug('Expected PIDs: {}'.format(expected))
2764+ self.log.debug('Actual PIDs: {}'.format(actual))
2765+
2766+ if len(actual) != len(expected):
2767+ msg = ('Unit count mismatch. expected, actual: {}, '
2768+ '{} '.format(len(expected), len(actual)))
2769+ return msg
2770+
2771+        for (e_sentry, e_proc_names) in six.iteritems(expected):
2772+ e_sentry_name = e_sentry.info['unit_name']
2773+ if e_sentry in actual.keys():
2774+ a_proc_names = actual[e_sentry]
2775+ else:
2776+                msg = ('Expected sentry ({}) not found in actual dict data: '
2777+                       '{}'.format(e_sentry_name, e_sentry))
2778+ return msg
2779+
2780+ if len(e_proc_names.keys()) != len(a_proc_names.keys()):
2781+                msg = ('Process name count mismatch. expected, actual: {}, '
2782+                       '{}'.format(len(e_proc_names), len(a_proc_names)))
2783+ return msg
2784+
2785+ for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \
2786+ zip(e_proc_names.items(), a_proc_names.items()):
2787+ if e_proc_name != a_proc_name:
2788+ msg = ('Process name mismatch. expected, actual: {}, '
2789+ '{}'.format(e_proc_name, a_proc_name))
2790+ return msg
2791+
2792+ a_pids_length = len(a_pids)
2793+ if e_pids_length != a_pids_length:
2794+ msg = ('PID count mismatch. {} ({}) expected, actual: {}, '
2795+ '{} ({})'.format(e_sentry_name,
2796+ e_proc_name,
2797+ e_pids_length,
2798+ a_pids_length,
2799+ a_pids))
2800+ return msg
2801+ else:
2802+ msg = ('PID check OK: {} {} {}: '
2803+ '{}'.format(e_sentry_name,
2804+ e_proc_name,
2805+ e_pids_length,
2806+ a_pids))
2807+ self.log.debug(msg)
2808+ return None
2809+
2810+ def validate_list_of_identical_dicts(self, list_of_dicts):
2811+ """Check that all dicts within a list are identical."""
2812+ hashes = []
2813+ for _dict in list_of_dicts:
2814+ hashes.append(hash(frozenset(_dict.items())))
2815+
2816+ self.log.debug('Hashes: {}'.format(hashes))
2817+ if len(set(hashes)) == 1:
2818+ msg = 'Dicts within list are identical'
2819+ self.log.debug(msg)
2820+ else:
2821+ msg = 'Dicts within list are not identical'
2822+ return msg
2823+
2824+ return None
2825
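The frozenset-hash comparison used by validate_list_of_identical_dicts() can be reduced to a small standalone sketch; note that it only works when every dict value is hashable:

```python
def dicts_identical(list_of_dicts):
    """Return True when all dicts in the list are identical, using the
    same hash(frozenset(...)) trick as validate_list_of_identical_dicts.
    Caveat: all dict values must be hashable (no nested lists/dicts)."""
    hashes = [hash(frozenset(d.items())) for d in list_of_dicts]
    return len(set(hashes)) == 1


print(dicts_identical([{'a': 1, 'b': 2}, {'b': 2, 'a': 1}]))  # True
print(dicts_identical([{'a': 1}, {'a': 2}]))                  # False
```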
2826=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
2827--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-04-23 14:52:07 +0000
2828+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-30 20:18:04 +0000
2829@@ -79,9 +79,9 @@
2830 services.append(this_service)
2831 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
2832 'ceph-osd', 'ceph-radosgw']
2833- # Openstack subordinate charms do not expose an origin option as that
2834- # is controlled by the principle
2835- ignore = ['neutron-openvswitch']
2836+ # Most OpenStack subordinate charms do not expose an origin option
2837+        # as that is controlled by the principal.
2838+ ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']
2839
2840 if self.openstack:
2841 for svc in services:
2842@@ -110,7 +110,8 @@
2843 (self.precise_essex, self.precise_folsom, self.precise_grizzly,
2844 self.precise_havana, self.precise_icehouse,
2845 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
2846- self.trusty_kilo, self.vivid_kilo) = range(10)
2847+ self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
2848+ self.wily_liberty) = range(12)
2849
2850 releases = {
2851 ('precise', None): self.precise_essex,
2852@@ -121,8 +122,10 @@
2853 ('trusty', None): self.trusty_icehouse,
2854 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
2855 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
2856+ ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
2857 ('utopic', None): self.utopic_juno,
2858- ('vivid', None): self.vivid_kilo}
2859+ ('vivid', None): self.vivid_kilo,
2860+ ('wily', None): self.wily_liberty}
2861 return releases[(self.series, self.openstack)]
2862
2863 def _get_openstack_release_string(self):
2864@@ -138,9 +141,42 @@
2865 ('trusty', 'icehouse'),
2866 ('utopic', 'juno'),
2867 ('vivid', 'kilo'),
2868+ ('wily', 'liberty'),
2869 ])
2870 if self.openstack:
2871 os_origin = self.openstack.split(':')[1]
2872 return os_origin.split('%s-' % self.series)[1].split('/')[0]
2873 else:
2874 return releases[self.series]
2875+
2876+ def get_ceph_expected_pools(self, radosgw=False):
2877+ """Return a list of expected ceph pools based on Ubuntu-OpenStack
2878+ release and whether ceph radosgw is flagged as present or not."""
2879+
2880+ if self._get_openstack_release() >= self.trusty_kilo:
2881+ # Kilo or later
2882+ pools = [
2883+ 'rbd',
2884+ 'cinder',
2885+ 'glance'
2886+ ]
2887+ else:
2888+ # Juno or earlier
2889+ pools = [
2890+ 'data',
2891+ 'metadata',
2892+ 'rbd',
2893+ 'cinder',
2894+ 'glance'
2895+ ]
2896+
2897+ if radosgw:
2898+ pools.extend([
2899+ '.rgw.root',
2900+ '.rgw.control',
2901+ '.rgw',
2902+ '.rgw.gc',
2903+ '.users.uid'
2904+ ])
2905+
2906+ return pools
2907
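The pool expectations in get_ceph_expected_pools() follow from the default data/metadata pools disappearing in the Kilo-era Ceph packages; a standalone sketch, where the boolean flag stands in for the `_get_openstack_release()` comparison:

```python
def expected_ceph_pools(kilo_or_later, radosgw=False):
    # Standalone sketch of get_ceph_expected_pools(); kilo_or_later
    # stands in (assumption) for the release-index comparison.
    if kilo_or_later:
        pools = ['rbd', 'cinder', 'glance']
    else:
        pools = ['data', 'metadata', 'rbd', 'cinder', 'glance']
    if radosgw:
        pools.extend(['.rgw.root', '.rgw.control', '.rgw',
                      '.rgw.gc', '.users.uid'])
    return pools


print(expected_ceph_pools(True))   # ['rbd', 'cinder', 'glance']
print(len(expected_ceph_pools(False, radosgw=True)))  # 10
```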
2908=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
2909--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-03-20 17:15:02 +0000
2910+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-06-30 20:18:04 +0000
2911@@ -14,16 +14,19 @@
2912 # You should have received a copy of the GNU Lesser General Public License
2913 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2914
2915+import json
2916 import logging
2917 import os
2918+import six
2919 import time
2920 import urllib
2921
2922+import cinderclient.v1.client as cinder_client
2923 import glanceclient.v1.client as glance_client
2924+import heatclient.v1.client as heat_client
2925 import keystoneclient.v2_0 as keystone_client
2926 import novaclient.v1_1.client as nova_client
2927-
2928-import six
2929+import swiftclient
2930
2931 from charmhelpers.contrib.amulet.utils import (
2932 AmuletUtils
2933@@ -37,7 +40,7 @@
2934 """OpenStack amulet utilities.
2935
2936 This class inherits from AmuletUtils and has additional support
2937- that is specifically for use by OpenStack charms.
2938+ that is specifically for use by OpenStack charm tests.
2939 """
2940
2941 def __init__(self, log_level=ERROR):
2942@@ -51,6 +54,8 @@
2943 Validate actual endpoint data vs expected endpoint data. The ports
2944 are used to find the matching endpoint.
2945 """
2946+ self.log.debug('Validating endpoint data...')
2947+ self.log.debug('actual: {}'.format(repr(endpoints)))
2948 found = False
2949 for ep in endpoints:
2950 self.log.debug('endpoint: {}'.format(repr(ep)))
2951@@ -77,6 +82,7 @@
2952 Validate a list of actual service catalog endpoints vs a list of
2953 expected service catalog endpoints.
2954 """
2955+ self.log.debug('Validating service catalog endpoint data...')
2956 self.log.debug('actual: {}'.format(repr(actual)))
2957 for k, v in six.iteritems(expected):
2958 if k in actual:
2959@@ -93,6 +99,7 @@
2960 Validate a list of actual tenant data vs list of expected tenant
2961 data.
2962 """
2963+ self.log.debug('Validating tenant data...')
2964 self.log.debug('actual: {}'.format(repr(actual)))
2965 for e in expected:
2966 found = False
2967@@ -114,6 +121,7 @@
2968 Validate a list of actual role data vs a list of expected role
2969 data.
2970 """
2971+ self.log.debug('Validating role data...')
2972 self.log.debug('actual: {}'.format(repr(actual)))
2973 for e in expected:
2974 found = False
2975@@ -134,6 +142,7 @@
2976 Validate a list of actual user data vs a list of expected user
2977 data.
2978 """
2979+ self.log.debug('Validating user data...')
2980 self.log.debug('actual: {}'.format(repr(actual)))
2981 for e in expected:
2982 found = False
2983@@ -155,17 +164,29 @@
2984
2985 Validate a list of actual flavors vs a list of expected flavors.
2986 """
2987+ self.log.debug('Validating flavor data...')
2988 self.log.debug('actual: {}'.format(repr(actual)))
2989 act = [a.name for a in actual]
2990 return self._validate_list_data(expected, act)
2991
2992 def tenant_exists(self, keystone, tenant):
2993 """Return True if tenant exists."""
2994+ self.log.debug('Checking if tenant exists ({})...'.format(tenant))
2995 return tenant in [t.name for t in keystone.tenants.list()]
2996
2997+ def authenticate_cinder_admin(self, keystone_sentry, username,
2998+ password, tenant):
2999+ """Authenticates admin user with cinder."""
3000+ service_ip = \
3001+ keystone_sentry.relation('shared-db',
3002+ 'mysql:shared-db')['private-address']
3003+ ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8'))
3004+ return cinder_client.Client(username, password, tenant, ept)
3005+
3006 def authenticate_keystone_admin(self, keystone_sentry, user, password,
3007 tenant):
3008 """Authenticates admin user with the keystone admin endpoint."""
3009+ self.log.debug('Authenticating keystone admin...')
3010 unit = keystone_sentry
3011 service_ip = unit.relation('shared-db',
3012 'mysql:shared-db')['private-address']
3013@@ -175,6 +196,7 @@
3014
3015 def authenticate_keystone_user(self, keystone, user, password, tenant):
3016 """Authenticates a regular user with the keystone public endpoint."""
3017+ self.log.debug('Authenticating keystone user ({})...'.format(user))
3018 ep = keystone.service_catalog.url_for(service_type='identity',
3019 endpoint_type='publicURL')
3020 return keystone_client.Client(username=user, password=password,
3021@@ -182,19 +204,49 @@
3022
3023 def authenticate_glance_admin(self, keystone):
3024 """Authenticates admin user with glance."""
3025+ self.log.debug('Authenticating glance admin...')
3026 ep = keystone.service_catalog.url_for(service_type='image',
3027 endpoint_type='adminURL')
3028 return glance_client.Client(ep, token=keystone.auth_token)
3029
3030+ def authenticate_heat_admin(self, keystone):
3031+ """Authenticates the admin user with heat."""
3032+ self.log.debug('Authenticating heat admin...')
3033+ ep = keystone.service_catalog.url_for(service_type='orchestration',
3034+ endpoint_type='publicURL')
3035+ return heat_client.Client(endpoint=ep, token=keystone.auth_token)
3036+
3037 def authenticate_nova_user(self, keystone, user, password, tenant):
3038 """Authenticates a regular user with nova-api."""
3039+ self.log.debug('Authenticating nova user ({})...'.format(user))
3040 ep = keystone.service_catalog.url_for(service_type='identity',
3041 endpoint_type='publicURL')
3042 return nova_client.Client(username=user, api_key=password,
3043 project_id=tenant, auth_url=ep)
3044
3045+ def authenticate_swift_user(self, keystone, user, password, tenant):
3046+ """Authenticates a regular user with swift api."""
3047+ self.log.debug('Authenticating swift user ({})...'.format(user))
3048+ ep = keystone.service_catalog.url_for(service_type='identity',
3049+ endpoint_type='publicURL')
3050+ return swiftclient.Connection(authurl=ep,
3051+ user=user,
3052+ key=password,
3053+ tenant_name=tenant,
3054+ auth_version='2.0')
3055+
3056 def create_cirros_image(self, glance, image_name):
3057- """Download the latest cirros image and upload it to glance."""
3058+ """Download the latest cirros image and upload it to glance,
3059+ validate and return a resource pointer.
3060+
3061+ :param glance: pointer to authenticated glance connection
3062+ :param image_name: display name for new image
3063+ :returns: glance image pointer
3064+ """
3065+ self.log.debug('Creating glance cirros image '
3066+ '({})...'.format(image_name))
3067+
3068+ # Download cirros image
3069 http_proxy = os.getenv('AMULET_HTTP_PROXY')
3070 self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
3071 if http_proxy:
3072@@ -203,57 +255,67 @@
3073 else:
3074 opener = urllib.FancyURLopener()
3075
3076- f = opener.open("http://download.cirros-cloud.net/version/released")
3077+ f = opener.open('http://download.cirros-cloud.net/version/released')
3078 version = f.read().strip()
3079- cirros_img = "cirros-{}-x86_64-disk.img".format(version)
3080+ cirros_img = 'cirros-{}-x86_64-disk.img'.format(version)
3081 local_path = os.path.join('tests', cirros_img)
3082
3083 if not os.path.exists(local_path):
3084- cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
3085+ cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net',
3086 version, cirros_img)
3087 opener.retrieve(cirros_url, local_path)
3088 f.close()
3089
3090+ # Create glance image
3091 with open(local_path) as f:
3092 image = glance.images.create(name=image_name, is_public=True,
3093 disk_format='qcow2',
3094 container_format='bare', data=f)
3095- count = 1
3096- status = image.status
3097- while status != 'active' and count < 10:
3098- time.sleep(3)
3099- image = glance.images.get(image.id)
3100- status = image.status
3101- self.log.debug('image status: {}'.format(status))
3102- count += 1
3103-
3104- if status != 'active':
3105- self.log.error('image creation timed out')
3106- return None
3107+
3108+ # Wait for image to reach active status
3109+ img_id = image.id
3110+ ret = self.resource_reaches_status(glance.images, img_id,
3111+ expected_stat='active',
3112+ msg='Image status wait')
3113+ if not ret:
3114+ msg = 'Glance image failed to reach expected state.'
3115+ raise RuntimeError(msg)
3116+
3117+ # Re-validate new image
3118+ self.log.debug('Validating image attributes...')
3119+ val_img_name = glance.images.get(img_id).name
3120+ val_img_stat = glance.images.get(img_id).status
3121+ val_img_pub = glance.images.get(img_id).is_public
3122+ val_img_cfmt = glance.images.get(img_id).container_format
3123+ val_img_dfmt = glance.images.get(img_id).disk_format
3124+ msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} '
3125+ 'container fmt:{} disk fmt:{}'.format(
3126+ val_img_name, val_img_pub, img_id,
3127+ val_img_stat, val_img_cfmt, val_img_dfmt))
3128+
3129+ if val_img_name == image_name and val_img_stat == 'active' \
3130+ and val_img_pub is True and val_img_cfmt == 'bare' \
3131+ and val_img_dfmt == 'qcow2':
3132+ self.log.debug(msg_attr)
3133+ else:
3134+            msg = ('Image validation failed, {}'.format(msg_attr))
3135+ raise RuntimeError(msg)
3136
3137 return image
3138
3139 def delete_image(self, glance, image):
3140 """Delete the specified image."""
3141- num_before = len(list(glance.images.list()))
3142- glance.images.delete(image)
3143-
3144- count = 1
3145- num_after = len(list(glance.images.list()))
3146- while num_after != (num_before - 1) and count < 10:
3147- time.sleep(3)
3148- num_after = len(list(glance.images.list()))
3149- self.log.debug('number of images: {}'.format(num_after))
3150- count += 1
3151-
3152- if num_after != (num_before - 1):
3153- self.log.error('image deletion timed out')
3154- return False
3155-
3156- return True
3157+
3158+ # /!\ DEPRECATION WARNING
3159+ self.log.warn('/!\\ DEPRECATION WARNING: use '
3160+ 'delete_resource instead of delete_image.')
3161+ self.log.debug('Deleting glance image ({})...'.format(image))
3162+ return self.delete_resource(glance.images, image, msg='glance image')
3163
3164 def create_instance(self, nova, image_name, instance_name, flavor):
3165 """Create the specified instance."""
3166+ self.log.debug('Creating instance '
3167+ '({}|{}|{})'.format(instance_name, image_name, flavor))
3168 image = nova.images.find(name=image_name)
3169 flavor = nova.flavors.find(name=flavor)
3170 instance = nova.servers.create(name=instance_name, image=image,
3171@@ -276,19 +338,264 @@
3172
3173 def delete_instance(self, nova, instance):
3174 """Delete the specified instance."""
3175- num_before = len(list(nova.servers.list()))
3176- nova.servers.delete(instance)
3177-
3178- count = 1
3179- num_after = len(list(nova.servers.list()))
3180- while num_after != (num_before - 1) and count < 10:
3181- time.sleep(3)
3182- num_after = len(list(nova.servers.list()))
3183- self.log.debug('number of instances: {}'.format(num_after))
3184- count += 1
3185-
3186- if num_after != (num_before - 1):
3187- self.log.error('instance deletion timed out')
3188- return False
3189-
3190- return True
3191+
3192+ # /!\ DEPRECATION WARNING
3193+ self.log.warn('/!\\ DEPRECATION WARNING: use '
3194+ 'delete_resource instead of delete_instance.')
3195+ self.log.debug('Deleting instance ({})...'.format(instance))
3196+ return self.delete_resource(nova.servers, instance,
3197+ msg='nova instance')
3198+
3199+ def create_or_get_keypair(self, nova, keypair_name="testkey"):
3200+ """Create a new keypair, or return pointer if it already exists."""
3201+ try:
3202+ _keypair = nova.keypairs.get(keypair_name)
3203+ self.log.debug('Keypair ({}) already exists, '
3204+ 'using it.'.format(keypair_name))
3205+ return _keypair
3206+        except Exception:
3207+ self.log.debug('Keypair ({}) does not exist, '
3208+ 'creating it.'.format(keypair_name))
3209+
3210+ _keypair = nova.keypairs.create(name=keypair_name)
3211+ return _keypair
3212+
3213+ def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
3214+ img_id=None, src_vol_id=None, snap_id=None):
3215+ """Create cinder volume, optionally from a glance image, or
3216+ optionally as a clone of an existing volume, or optionally
3217+ from a snapshot. Wait for the new volume status to reach
3218+ the expected status, validate and return a resource pointer.
3219+
3220+ :param vol_name: cinder volume display name
3221+ :param vol_size: size in gigabytes
3222+ :param img_id: optional glance image id
3223+ :param src_vol_id: optional source volume id to clone
3224+ :param snap_id: optional snapshot id to use
3225+ :returns: cinder volume pointer
3226+ """
3227+ # Handle parameter input
3228+ if img_id and not src_vol_id and not snap_id:
3229+ self.log.debug('Creating cinder volume from glance image '
3230+ '({})...'.format(img_id))
3231+ bootable = 'true'
3232+ elif src_vol_id and not img_id and not snap_id:
3233+ self.log.debug('Cloning cinder volume...')
3234+ bootable = cinder.volumes.get(src_vol_id).bootable
3235+ elif snap_id and not src_vol_id and not img_id:
3236+ self.log.debug('Creating cinder volume from snapshot...')
3237+ snap = cinder.volume_snapshots.find(id=snap_id)
3238+ vol_size = snap.size
3239+ snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
3240+ bootable = cinder.volumes.get(snap_vol_id).bootable
3241+ elif not img_id and not src_vol_id and not snap_id:
3242+ self.log.debug('Creating cinder volume...')
3243+ bootable = 'false'
3244+ else:
3245+ msg = ('Invalid method use - name:{} size:{} img_id:{} '
3246+ 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
3247+ img_id, src_vol_id,
3248+ snap_id))
3249+ raise RuntimeError(msg)
3250+
3251+ # Create new volume
3252+ try:
3253+ vol_new = cinder.volumes.create(display_name=vol_name,
3254+ imageRef=img_id,
3255+ size=vol_size,
3256+ source_volid=src_vol_id,
3257+ snapshot_id=snap_id)
3258+ vol_id = vol_new.id
3259+ except Exception as e:
3260+ msg = 'Failed to create volume: {}'.format(e)
3261+ raise RuntimeError(msg)
3262+
3263+ # Wait for volume to reach available status
3264+ ret = self.resource_reaches_status(cinder.volumes, vol_id,
3265+ expected_stat="available",
3266+ msg="Volume status wait")
3267+ if not ret:
3268+ msg = 'Cinder volume failed to reach expected state.'
3269+ raise RuntimeError(msg)
3270+
3271+ # Re-validate new volume
3272+ self.log.debug('Validating volume attributes...')
3273+ val_vol_name = cinder.volumes.get(vol_id).display_name
3274+ val_vol_boot = cinder.volumes.get(vol_id).bootable
3275+ val_vol_stat = cinder.volumes.get(vol_id).status
3276+ val_vol_size = cinder.volumes.get(vol_id).size
3277+ msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
3278+ '{} size:{}'.format(val_vol_name, vol_id,
3279+ val_vol_stat, val_vol_boot,
3280+ val_vol_size))
3281+
3282+ if val_vol_boot == bootable and val_vol_stat == 'available' \
3283+ and val_vol_name == vol_name and val_vol_size == vol_size:
3284+ self.log.debug(msg_attr)
3285+ else:
3286+ msg = ('Volume validation failed, {}'.format(msg_attr))
3287+ raise RuntimeError(msg)
3288+
3289+ return vol_new
3290+
3291+ def delete_resource(self, resource, resource_id,
3292+ msg="resource", max_wait=120):
3293+ """Delete one openstack resource, such as one instance, keypair,
3294+ image, volume, stack, etc., and confirm deletion within max wait time.
3295+
3296+ :param resource: pointer to os resource type, ex:glance_client.images
3297+ :param resource_id: unique name or id for the openstack resource
3298+ :param msg: text to identify purpose in logging
3299+ :param max_wait: maximum wait time in seconds
3300+ :returns: True if successful, otherwise False
3301+ """
3302+ self.log.debug('Deleting OpenStack resource '
3303+ '{} ({})'.format(resource_id, msg))
3304+ num_before = len(list(resource.list()))
3305+ resource.delete(resource_id)
3306+
3307+ tries = 0
3308+ num_after = len(list(resource.list()))
3309+ while num_after != (num_before - 1) and tries < (max_wait / 4):
3310+ self.log.debug('{} delete check: '
3311+ '{} [{}:{}] {}'.format(msg, tries,
3312+ num_before,
3313+ num_after,
3314+ resource_id))
3315+ time.sleep(4)
3316+ num_after = len(list(resource.list()))
3317+ tries += 1
3318+
3319+ self.log.debug('{}: expected, actual count = {}, '
3320+ '{}'.format(msg, num_before - 1, num_after))
3321+
3322+ if num_after == (num_before - 1):
3323+ return True
3324+ else:
3325+ self.log.error('{} delete timed out'.format(msg))
3326+ return False
3327+
3328+ def resource_reaches_status(self, resource, resource_id,
3329+ expected_stat='available',
3330+ msg='resource', max_wait=120):
3331+ """Wait for an openstack resources status to reach an
3332+ expected status within a specified time. Useful to confirm that
3333+ nova instances, cinder vols, snapshots, glance images, heat stacks
3334+ and other resources eventually reach the expected status.
3335+
3336+ :param resource: pointer to os resource type, ex: heat_client.stacks
3337+ :param resource_id: unique id for the openstack resource
3338+ :param expected_stat: status to expect resource to reach
3339+ :param msg: text to identify purpose in logging
3340+ :param max_wait: maximum wait time in seconds
3341+ :returns: True if successful, False if status is not reached
3342+ """
3343+
3344+ tries = 0
3345+ resource_stat = resource.get(resource_id).status
3346+ while resource_stat != expected_stat and tries < (max_wait / 4):
3347+ self.log.debug('{} status check: '
3348+ '{} [{}:{}] {}'.format(msg, tries,
3349+ resource_stat,
3350+ expected_stat,
3351+ resource_id))
3352+ time.sleep(4)
3353+ resource_stat = resource.get(resource_id).status
3354+ tries += 1
3355+
3356+        self.log.debug('{}: expected, actual status = {}, '
3357+                       '{}'.format(msg, expected_stat, resource_stat))
3358+
3359+ if resource_stat == expected_stat:
3360+ return True
3361+ else:
3362+ self.log.debug('{} never reached expected status: '
3363+ '{}'.format(resource_id, expected_stat))
3364+ return False
3365+
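resource_reaches_status() and delete_resource() share the same poll-with-timeout shape; a generic sketch, where get_status is any zero-arg callable standing in for `resource.get(id).status`:

```python
import time


def wait_for_status(get_status, expected, max_wait=120, interval=4):
    """Poll until get_status() returns `expected`, re-checking every
    `interval` seconds for at most roughly max_wait seconds, mirroring
    the loop in resource_reaches_status()."""
    tries = 0
    status = get_status()
    while status != expected and tries < (max_wait / interval):
        time.sleep(interval)
        status = get_status()
        tries += 1
    return status == expected


# Example with a fake status source that becomes 'active' on the 3rd poll:
states = iter(['building', 'building', 'active'])
print(wait_for_status(lambda: next(states), 'active',
                      max_wait=1, interval=0.01))  # True
```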
3366+ def get_ceph_osd_id_cmd(self, index):
3367+ """Produce a shell command that will return a ceph-osd id."""
3368+ cmd = ("`initctl list | grep 'ceph-osd ' | awk 'NR=={} {{ print $2 }}'"
3369+ " | grep -o '[0-9]*'`".format(index + 1))
3370+ return cmd
3371+
3372+ def get_ceph_pools(self, sentry_unit):
3373+ """Return a dict of ceph pools from a single ceph unit, with
3374+ pool name as keys, pool id as vals."""
3375+ pools = {}
3376+ cmd = 'sudo ceph osd lspools'
3377+ output, code = sentry_unit.run(cmd)
3378+ if code != 0:
3379+ msg = ('{} `{}` returned {} '
3380+ '{}'.format(sentry_unit.info['unit_name'],
3381+ cmd, code, output))
3382+ raise RuntimeError(msg)
3383+
3384+ # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
3385+ for pool in str(output).split(','):
3386+ pool_id_name = pool.split(' ')
3387+ if len(pool_id_name) == 2:
3388+ pool_id = pool_id_name[0]
3389+ pool_name = pool_id_name[1]
3390+ pools[pool_name] = int(pool_id)
3391+
3392+ self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
3393+ pools))
3394+ return pools
3395+
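The `ceph osd lspools` parsing above can be exercised in isolation; the sample string below is taken from the example output in the get_ceph_pools() comment:

```python
def parse_lspools(output):
    """Parse `ceph osd lspools` output into {pool_name: pool_id},
    mirroring the split logic in get_ceph_pools()."""
    pools = {}
    for pool in str(output).split(','):
        fields = pool.split(' ')
        if len(fields) == 2:
            pool_id, pool_name = fields
            pools[pool_name] = int(pool_id)
    return pools


print(parse_lspools('0 data,1 metadata,2 rbd,3 cinder,4 glance,'))
```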
3396+ def get_ceph_df(self, sentry_unit):
3397+ """Return dict of ceph df json output, including ceph pool state.
3398+
3399+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
3400+ :returns: Dict of ceph df output
3401+ """
3402+ cmd = 'sudo ceph df --format=json'
3403+ output, code = sentry_unit.run(cmd)
3404+ if code != 0:
3405+ msg = ('{} `{}` returned {} '
3406+ '{}'.format(sentry_unit.info['unit_name'],
3407+ cmd, code, output))
3408+ raise RuntimeError(msg)
3409+ return json.loads(output)
3410+
3411+ def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
3412+ """Take a sample of attributes of a ceph pool, returning ceph
3413+ pool name, object count and disk space used for the specified
3414+ pool ID number.
3415+
3416+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
3417+ :param pool_id: Ceph pool ID
3418+ :returns: List of pool name, object count, kb disk space used
3419+ """
3420+ df = self.get_ceph_df(sentry_unit)
3421+ pool_name = df['pools'][pool_id]['name']
3422+ obj_count = df['pools'][pool_id]['stats']['objects']
3423+ kb_used = df['pools'][pool_id]['stats']['kb_used']
3424+ self.log.debug('Ceph {} pool (ID {}): {} objects, '
3425+ '{} kb used'.format(pool_name,
3426+ pool_id,
3427+ obj_count,
3428+ kb_used))
3429+ return pool_name, obj_count, kb_used
3430+
3431+ def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
3432+ """Validate ceph pool samples taken over time, such as pool
3433+ object counts or pool kb used, before adding, after adding, and
3434+ after deleting items which affect those pool attributes. The
3435+ 2nd element is expected to be greater than the 1st; 3rd is expected
3436+ to be less than the 2nd.
3437+
3438+ :param samples: List containing 3 data samples
3439+ :param sample_type: String for logging and usage context
3440+ :returns: None if successful, Failure message otherwise
3441+ """
3442+ original, created, deleted = range(3)
3443+ if samples[created] <= samples[original] or \
3444+ samples[deleted] >= samples[created]:
3445+ msg = ('Ceph {} samples ({}) '
3446+ 'unexpected.'.format(sample_type, samples))
3447+ return msg
3448+ else:
3449+ self.log.debug('Ceph {} samples (OK): '
3450+ '{}'.format(sample_type, samples))
3451+ return None
3452
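The three-sample grow-then-shrink check in validate_ceph_pool_samples() reduces to a pure function, sketched here for illustration:

```python
def pool_samples_ok(samples):
    """True when three samples (before-add, after-add, after-delete)
    grow then shrink, as validate_ceph_pool_samples() expects."""
    original, created, deleted = samples
    return created > original and deleted < created


print(pool_samples_ok([100, 140, 110]))  # True  (grew, then shrank)
print(pool_samples_ok([100, 100, 110]))  # False (never grew)
```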
3453=== added file 'tests/tests.yaml'
3454--- tests/tests.yaml 1970-01-01 00:00:00 +0000
3455+++ tests/tests.yaml 2015-06-30 20:18:04 +0000
3456@@ -0,0 +1,18 @@
3457+bootstrap: true
3458+reset: true
3459+virtualenv: true
3460+makefile:
3461+ - lint
3462+ - test
3463+sources:
3464+ - ppa:juju/stable
3465+packages:
3466+ - amulet
3467+ - python-amulet
3468+ - python-cinderclient
3469+ - python-distro-info
3470+ - python-glanceclient
3471+ - python-heatclient
3472+ - python-keystoneclient
3473+ - python-novaclient
3474+ - python-swiftclient
3475
3476=== modified file 'unit_tests/test_glance_relations.py'
3477--- unit_tests/test_glance_relations.py 2015-06-15 17:39:54 +0000
3478+++ unit_tests/test_glance_relations.py 2015-06-30 20:18:04 +0000
3479@@ -40,12 +40,13 @@
3480 'restart_on_change',
3481 'service_reload',
3482 'service_stop',
3483+ 'service_restart',
3484 # charmhelpers.contrib.openstack.utils
3485 'configure_installation_source',
3486 'os_release',
3487 'openstack_upgrade_available',
3488 # charmhelpers.contrib.hahelpers.cluster_utils
3489- 'eligible_leader',
3490+ 'is_elected_leader',
3491 # glance_utils
3492 'restart_map',
3493 'register_configs',
3494@@ -129,10 +130,14 @@
3495 self.apt_update.assert_called_with(fatal=True)
3496 self.apt_install.assert_called_with(['haproxy', 'python-setuptools',
3497 'python-six', 'uuid',
3498- 'python-mysqldb', 'python-pip',
3499- 'apache2', 'libxslt1-dev',
3500- 'python-psycopg2', 'zlib1g-dev',
3501- 'python-dev', 'libxml2-dev'],
3502+ 'python-mysqldb',
3503+ 'libmysqlclient-dev',
3504+ 'libssl-dev', 'libffi-dev',
3505+ 'apache2', 'python-pip',
3506+ 'libxslt1-dev', 'libyaml-dev',
3507+ 'python-psycopg2',
3508+ 'zlib1g-dev', 'python-dev',
3509+ 'libxml2-dev'],
3510 fatal=True)
3511 self.git_install.assert_called_with(projects_yaml)
3512
3513@@ -430,6 +435,7 @@
3514 self.assertEquals([call('/etc/glance/glance-api.conf'),
3515 call(self.ceph_config_file())],
3516 configs.write.call_args_list)
3517+ self.service_restart.assert_called_with('glance-api')
3518
3519 @patch.object(relations, 'CONFIGS')
3520 def test_ceph_broken(self, configs):
3521
3522=== modified file 'unit_tests/test_glance_utils.py'
3523--- unit_tests/test_glance_utils.py 2015-04-17 12:05:48 +0000
3524+++ unit_tests/test_glance_utils.py 2015-06-30 20:18:04 +0000
3525@@ -16,13 +16,14 @@
3526 'relation_ids',
3527 'get_os_codename_install_source',
3528 'configure_installation_source',
3529- 'eligible_leader',
3530+ 'is_elected_leader',
3531 'templating',
3532 'apt_update',
3533 'apt_upgrade',
3534 'apt_install',
3535 'mkdir',
3536 'os_release',
3537+ 'pip_install',
3538 'service_start',
3539 'service_stop',
3540 'service_name',
3541@@ -152,7 +153,7 @@
3542 git_requested.return_value = True
3543 self.config.side_effect = None
3544 self.config.return_value = 'cloud:precise-havana'
3545- self.eligible_leader.return_value = True
3546+ self.is_elected_leader.return_value = True
3547 self.get_os_codename_install_source.return_value = 'havana'
3548 configs = MagicMock()
3549 utils.do_openstack_upgrade(configs)
3550@@ -170,7 +171,7 @@
3551 git_requested.return_value = True
3552 self.config.side_effect = None
3553 self.config.return_value = 'cloud:precise-havana'
3554- self.eligible_leader.return_value = False
3555+ self.is_elected_leader.return_value = False
3556 self.get_os_codename_install_source.return_value = 'havana'
3557 configs = MagicMock()
3558 utils.do_openstack_upgrade(configs)
3559@@ -236,26 +237,35 @@
3560 @patch.object(utils, 'git_src_dir')
3561 @patch.object(utils, 'service_restart')
3562 @patch.object(utils, 'render')
3563+ @patch.object(utils, 'git_pip_venv_dir')
3564 @patch('os.path.join')
3565 @patch('os.path.exists')
3566+ @patch('os.symlink')
3567 @patch('shutil.copytree')
3568 @patch('shutil.rmtree')
3569- def test_git_post_install(self, rmtree, copytree, exists, join, render,
3570- service_restart, git_src_dir):
3571+ @patch('subprocess.check_call')
3572+ def test_git_post_install(self, check_call, rmtree, copytree, symlink,
3573+ exists, join, venv, render, service_restart,
3574+ git_src_dir):
3575 projects_yaml = openstack_origin_git
3576 join.return_value = 'joined-string'
3577+ venv.return_value = '/mnt/openstack-git/venv'
3578 utils.git_post_install(projects_yaml)
3579 expected = [
3580 call('joined-string', '/etc/glance'),
3581 ]
3582 copytree.assert_has_calls(expected)
3583+ expected = [
3584+ call('joined-string', '/usr/local/bin/glance-manage'),
3585+ ]
3586+ symlink.assert_has_calls(expected, any_order=True)
3587 glance_api_context = {
3588 'service_description': 'Glance API server',
3589 'service_name': 'Glance',
3590 'user_name': 'glance',
3591 'start_dir': '/var/lib/glance',
3592 'process_name': 'glance-api',
3593- 'executable_name': '/usr/local/bin/glance-api',
3594+ 'executable_name': 'joined-string',
3595 'config_files': ['/etc/glance/glance-api.conf'],
3596 'log_file': '/var/log/glance/api.log',
3597 }
3598@@ -265,7 +275,7 @@
3599 'user_name': 'glance',
3600 'start_dir': '/var/lib/glance',
3601 'process_name': 'glance-registry',
3602- 'executable_name': '/usr/local/bin/glance-registry',
3603+ 'executable_name': 'joined-string',
3604 'config_files': ['/etc/glance/glance-registry.conf'],
3605 'log_file': '/var/log/glance/registry.log',
3606 }
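The reworked test_git_post_install patches `os.symlink` and verifies it with `assert_has_calls(..., any_order=True)`, so the expected symlink is checked without pinning the order of other calls. A minimal sketch of that mock idiom (standalone toy code, using `unittest.mock` here where the charm tests use the standalone `mock` package):

```python
import os
from unittest.mock import patch, call

def install(src, dst_dir):
    """Toy stand-in for git_post_install: symlink a script into place."""
    os.symlink(src, dst_dir + '/glance-manage')

with patch('os.symlink') as symlink:
    install('/mnt/openstack-git/venv/bin/glance-manage', '/usr/local/bin')
    # Assert the expected call happened, ignoring any other symlink calls.
    symlink.assert_has_calls(
        [call('/mnt/openstack-git/venv/bin/glance-manage',
              '/usr/local/bin/glance-manage')],
        any_order=True)
```

`any_order=True` makes the assertion robust when the code under test creates several links; without it, the listed calls must appear as a contiguous ordered subsequence of the mock's call history.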
