Merge lp:~1chb1n/charms/trusty/ceph-radosgw/next-amulet-update into lp:~openstack-charmers-archive/charms/trusty/ceph-radosgw/next

Proposed by Ryan Beisner
Status: Merged
Merged at revision: 40
Proposed branch: lp:~1chb1n/charms/trusty/ceph-radosgw/next-amulet-update
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph-radosgw/next
Diff against target: 2225 lines (+1213/-196)
18 files modified
Makefile (+7/-7)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+24/-5)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+6/-2)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+122/-3)
hooks/charmhelpers/contrib/openstack/context.py (+1/-1)
hooks/charmhelpers/contrib/openstack/neutron.py (+6/-4)
hooks/charmhelpers/contrib/openstack/utils.py (+21/-8)
hooks/charmhelpers/contrib/python/packages.py (+2/-0)
hooks/charmhelpers/core/hookenv.py (+92/-36)
hooks/charmhelpers/core/host.py (+24/-6)
hooks/charmhelpers/core/services/base.py (+12/-9)
metadata.yaml (+4/-1)
tests/00-setup (+6/-2)
tests/basic_deployment.py (+246/-47)
tests/charmhelpers/contrib/amulet/utils.py (+219/-9)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+42/-5)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+361/-51)
tests/tests.yaml (+18/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/ceph-radosgw/next-amulet-update
Reviewer: Corey Bryant (community), status: Approve
Review via email: mp+262599@code.launchpad.net

Commit message

amulet tests - update test coverage, enable vivid, prep for wily, add basic functional checks
sync tests/charmhelpers
sync hooks/charmhelpers

Description of the change

amulet tests - update test coverage, enable vivid, prep for wily, add basic functional checks
sync tests/charmhelpers
sync hooks/charmhelpers

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #5539 ceph-radosgw-next for 1chb1n mp262599
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5539/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #5171 ceph-radosgw-next for 1chb1n mp262599
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5171/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4750 ceph-radosgw-next for 1chb1n mp262599
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4750/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #5542 ceph-radosgw-next for 1chb1n mp262599
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5542/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #5174 ceph-radosgw-next for 1chb1n mp262599
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5174/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4753 ceph-radosgw-next for 1chb1n mp262599
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11765277/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4753/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

FYI, an undercloud issue caused the test failure in #4753.

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4780 ceph-radosgw-next for 1chb1n mp262599
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4780/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #5607 ceph-radosgw-next for 1chb1n mp262599
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5607/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #5239 ceph-radosgw-next for 1chb1n mp262599
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5239/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4783 ceph-radosgw-next for 1chb1n mp262599
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4783/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

Flipped back to Work in Progress while the tests/charmhelpers work is still underway. Everything else here is ready for review and input.

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #5674 ceph-radosgw-next for 1chb1n mp262599
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5674/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #5306 ceph-radosgw-next for 1chb1n mp262599
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5306/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4857 ceph-radosgw-next for 1chb1n mp262599
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4857/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #5682 ceph-radosgw-next for 1chb1n mp262599
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5682/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #5314 ceph-radosgw-next for 1chb1n mp262599
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5314/

48. By Ryan Beisner

Update publish target in makefile; update 00-setup and tests.yaml for dependencies.

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #5686 ceph-radosgw-next for 1chb1n mp262599
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5686/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #5318 ceph-radosgw-next for 1chb1n mp262599
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5318/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4869 ceph-radosgw-next for 1chb1n mp262599
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11794890/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4869/

49. By Ryan Beisner

fix 00-setup

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #5690 ceph-radosgw-next for 1chb1n mp262599
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5690/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #5322 ceph-radosgw-next for 1chb1n mp262599
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5322/

50. By Ryan Beisner

update test

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #5695 ceph-radosgw-next for 1chb1n mp262599
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5695/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #5327 ceph-radosgw-next for 1chb1n mp262599
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5327/

Revision history for this message
Corey Bryant (corey.bryant) wrote :

Looks good. I'll approve once the corresponding c-h lands and these amulet tests are successful.

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4873 ceph-radosgw-next for 1chb1n mp262599
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11795761/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4873/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4878 ceph-radosgw-next for 1chb1n mp262599
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
Timeout occurred (2700s), printing juju status...environment: osci-sv18
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11796228/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4878/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4883 ceph-radosgw-next for 1chb1n mp262599
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11797188/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4883/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

A test rig issue is causing bootstrap failures; will re-test when that's resolved.

51. By Ryan Beisner

update tags for consistency with other openstack charms

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #5700 ceph-radosgw-next for 1chb1n mp262599
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5700/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #5332 ceph-radosgw-next for 1chb1n mp262599
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/5332/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4888 ceph-radosgw-next for 1chb1n mp262599
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4888/

Revision history for this message
Corey Bryant (corey.bryant) :
review: Approve

Preview Diff

=== modified file 'Makefile'
--- Makefile	2015-04-16 21:32:01 +0000
+++ Makefile	2015-07-01 14:47:24 +0000
@@ -2,17 +2,17 @@
 PYTHON := /usr/bin/env python
 
 lint:
-	@flake8 --exclude hooks/charmhelpers hooks tests unit_tests
+	@flake8 --exclude hooks/charmhelpers,tests/charmhelpers \
+        hooks tests unit_tests
 	@charm proof
 
-unit_test:
+test:
+	@# Bundletester expects unit tests here.
+	@echo Starting unit tests...
 	@$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
 
-test:
+functional_test:
 	@echo Starting Amulet tests...
-	# coreycb note: The -v should only be temporary until Amulet sends
-	# raise_status() messages to stderr:
-	# https://bugs.launchpad.net/amulet/+bug/1320357
 	@juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700
 
 bin/charm_helpers_sync.py:
@@ -24,6 +24,6 @@
 	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
 	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
 
-publish: lint
+publish: lint test
 	bzr push lp:charms/ceph-radosgw
 	bzr push lp:charms/trusty/ceph-radosgw
 
=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py	2015-06-04 23:06:40 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py	2015-07-01 14:47:24 +0000
@@ -44,6 +44,7 @@
     ERROR,
     WARNING,
     unit_get,
+    is_leader as juju_is_leader
 )
 from charmhelpers.core.decorators import (
     retry_on_exception,
@@ -63,17 +64,30 @@
     pass
 
 
+class CRMDCNotFound(Exception):
+    pass
+
+
 def is_elected_leader(resource):
     """
     Returns True if the charm executing this is the elected cluster leader.
 
     It relies on two mechanisms to determine leadership:
-    1. If the charm is part of a corosync cluster, call corosync to
+    1. If juju is sufficiently new and leadership election is supported,
+       the is_leader command will be used.
+    2. If the charm is part of a corosync cluster, call corosync to
        determine leadership.
-    2. If the charm is not part of a corosync cluster, the leader is
+    3. If the charm is not part of a corosync cluster, the leader is
        determined as being "the alive unit with the lowest unit numer". In
        other words, the oldest surviving unit.
     """
+    try:
+        return juju_is_leader()
+    except NotImplementedError:
+        log('Juju leadership election feature not enabled'
+            ', using fallback support',
+            level=WARNING)
+
     if is_clustered():
         if not is_crm_leader(resource):
             log('Deferring action to CRM leader.', level=INFO)
@@ -106,8 +120,9 @@
         status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
         if not isinstance(status, six.text_type):
             status = six.text_type(status, "utf-8")
-    except subprocess.CalledProcessError:
-        return False
+    except subprocess.CalledProcessError as ex:
+        raise CRMDCNotFound(str(ex))
+
     current_dc = ''
     for line in status.split('\n'):
         if line.startswith('Current DC'):
@@ -115,10 +130,14 @@
             current_dc = line.split(':')[1].split()[0]
     if current_dc == get_unit_hostname():
         return True
+    elif current_dc == 'NONE':
+        raise CRMDCNotFound('Current DC: NONE')
+
     return False
 
 
-@retry_on_exception(5, base_delay=2, exc_type=CRMResourceNotFound)
+@retry_on_exception(5, base_delay=2,
+                    exc_type=(CRMResourceNotFound, CRMDCNotFound))
 def is_crm_leader(resource, retry=False):
     """
     Returns True if the charm calling this is the elected corosync leader,
 
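The practical effect of the cluster.py change: leadership checks now prefer Juju's native leader election and only fall back to the corosync/oldest-unit logic on older Juju. A minimal sketch of a consumer, assuming this synced charmhelpers is on the path (the resource name and hook function are hypothetical):

    from charmhelpers.contrib.hahelpers.cluster import is_elected_leader

    def ha_relation_changed():
        # On Juju >= 1.23 this resolves via the native 'is-leader' tool;
        # on older Juju it raises NotImplementedError internally and falls
        # back to corosync CRM status, then lowest-unit-number logic.
        if not is_elected_leader('res_rgw_vip'):  # hypothetical resource
            return
        # ... leader-only work here ...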
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py	2015-06-04 23:06:40 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py	2015-07-01 14:47:24 +0000
@@ -110,7 +110,8 @@
         (self.precise_essex, self.precise_folsom, self.precise_grizzly,
          self.precise_havana, self.precise_icehouse,
          self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
-         self.trusty_kilo, self.vivid_kilo) = range(10)
+         self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
+         self.wily_liberty) = range(12)
 
         releases = {
             ('precise', None): self.precise_essex,
@@ -121,8 +122,10 @@
             ('trusty', None): self.trusty_icehouse,
             ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
             ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
+            ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
             ('utopic', None): self.utopic_juno,
-            ('vivid', None): self.vivid_kilo}
+            ('vivid', None): self.vivid_kilo,
+            ('wily', None): self.wily_liberty}
         return releases[(self.series, self.openstack)]
 
     def _get_openstack_release_string(self):
@@ -138,6 +141,7 @@
             ('trusty', 'icehouse'),
             ('utopic', 'juno'),
             ('vivid', 'kilo'),
+            ('wily', 'liberty'),
         ])
         if self.openstack:
             os_origin = self.openstack.split(':')[1]
 
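Because the release constants come from an ordered range, amulet tests can gate behaviour with simple comparisons against the new trusty_liberty/wily_liberty values. A sketch under the assumption of an OpenStackAmuletDeployment subclass (the class and method names are illustrative):

    from charmhelpers.contrib.openstack.amulet.deployment import (
        OpenStackAmuletDeployment
    )

    class ExampleDeployment(OpenStackAmuletDeployment):
        def release_gate(self):
            # Constants are ordered oldest to newest, so </>= work.
            if self._get_openstack_release() < self.trusty_liberty:
                return 'kilo-or-older code path'
            return 'liberty-or-newer code path'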
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
--- hooks/charmhelpers/contrib/openstack/amulet/utils.py	2015-01-26 11:53:19 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py	2015-07-01 14:47:24 +0000
@@ -16,15 +16,15 @@
 
 import logging
 import os
+import six
 import time
 import urllib
 
 import glanceclient.v1.client as glance_client
+import heatclient.v1.client as heat_client
 import keystoneclient.v2_0 as keystone_client
 import novaclient.v1_1.client as nova_client
 
-import six
-
 from charmhelpers.contrib.amulet.utils import (
     AmuletUtils
 )
@@ -37,7 +37,7 @@
     """OpenStack amulet utilities.
 
     This class inherits from AmuletUtils and has additional support
-    that is specifically for use by OpenStack charms.
+    that is specifically for use by OpenStack charm tests.
     """
 
     def __init__(self, log_level=ERROR):
@@ -51,6 +51,8 @@
         Validate actual endpoint data vs expected endpoint data. The ports
         are used to find the matching endpoint.
         """
+        self.log.debug('Validating endpoint data...')
+        self.log.debug('actual: {}'.format(repr(endpoints)))
         found = False
         for ep in endpoints:
             self.log.debug('endpoint: {}'.format(repr(ep)))
@@ -77,6 +79,7 @@
         Validate a list of actual service catalog endpoints vs a list of
         expected service catalog endpoints.
         """
+        self.log.debug('Validating service catalog endpoint data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         for k, v in six.iteritems(expected):
             if k in actual:
@@ -93,6 +96,7 @@
         Validate a list of actual tenant data vs list of expected tenant
         data.
         """
+        self.log.debug('Validating tenant data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         for e in expected:
             found = False
@@ -114,6 +118,7 @@
         Validate a list of actual role data vs a list of expected role
         data.
         """
+        self.log.debug('Validating role data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         for e in expected:
             found = False
@@ -134,6 +139,7 @@
         Validate a list of actual user data vs a list of expected user
         data.
         """
+        self.log.debug('Validating user data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         for e in expected:
             found = False
@@ -155,17 +161,20 @@
 
         Validate a list of actual flavors vs a list of expected flavors.
         """
+        self.log.debug('Validating flavor data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         act = [a.name for a in actual]
         return self._validate_list_data(expected, act)
 
     def tenant_exists(self, keystone, tenant):
         """Return True if tenant exists."""
+        self.log.debug('Checking if tenant exists ({})...'.format(tenant))
         return tenant in [t.name for t in keystone.tenants.list()]
 
     def authenticate_keystone_admin(self, keystone_sentry, user, password,
                                     tenant):
         """Authenticates admin user with the keystone admin endpoint."""
+        self.log.debug('Authenticating keystone admin...')
         unit = keystone_sentry
         service_ip = unit.relation('shared-db',
                                    'mysql:shared-db')['private-address']
@@ -175,6 +184,7 @@
 
     def authenticate_keystone_user(self, keystone, user, password, tenant):
         """Authenticates a regular user with the keystone public endpoint."""
+        self.log.debug('Authenticating keystone user ({})...'.format(user))
         ep = keystone.service_catalog.url_for(service_type='identity',
                                               endpoint_type='publicURL')
         return keystone_client.Client(username=user, password=password,
@@ -182,12 +192,21 @@
 
     def authenticate_glance_admin(self, keystone):
         """Authenticates admin user with glance."""
+        self.log.debug('Authenticating glance admin...')
         ep = keystone.service_catalog.url_for(service_type='image',
                                               endpoint_type='adminURL')
         return glance_client.Client(ep, token=keystone.auth_token)
 
+    def authenticate_heat_admin(self, keystone):
+        """Authenticates the admin user with heat."""
+        self.log.debug('Authenticating heat admin...')
+        ep = keystone.service_catalog.url_for(service_type='orchestration',
+                                              endpoint_type='publicURL')
+        return heat_client.Client(endpoint=ep, token=keystone.auth_token)
+
     def authenticate_nova_user(self, keystone, user, password, tenant):
         """Authenticates a regular user with nova-api."""
+        self.log.debug('Authenticating nova user ({})...'.format(user))
         ep = keystone.service_catalog.url_for(service_type='identity',
                                               endpoint_type='publicURL')
         return nova_client.Client(username=user, api_key=password,
@@ -195,6 +214,7 @@
 
     def create_cirros_image(self, glance, image_name):
         """Download the latest cirros image and upload it to glance."""
+        self.log.debug('Creating glance image ({})...'.format(image_name))
         http_proxy = os.getenv('AMULET_HTTP_PROXY')
         self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
         if http_proxy:
@@ -235,6 +255,11 @@
 
     def delete_image(self, glance, image):
         """Delete the specified image."""
+
+        # /!\ DEPRECATION WARNING
+        self.log.warn('/!\\ DEPRECATION WARNING: use '
+                      'delete_resource instead of delete_image.')
+        self.log.debug('Deleting glance image ({})...'.format(image))
         num_before = len(list(glance.images.list()))
         glance.images.delete(image)
 
@@ -254,6 +279,8 @@
 
     def create_instance(self, nova, image_name, instance_name, flavor):
         """Create the specified instance."""
+        self.log.debug('Creating instance '
+                       '({}|{}|{})'.format(instance_name, image_name, flavor))
         image = nova.images.find(name=image_name)
         flavor = nova.flavors.find(name=flavor)
         instance = nova.servers.create(name=instance_name, image=image,
@@ -276,6 +303,11 @@
 
     def delete_instance(self, nova, instance):
         """Delete the specified instance."""
+
+        # /!\ DEPRECATION WARNING
+        self.log.warn('/!\\ DEPRECATION WARNING: use '
+                      'delete_resource instead of delete_instance.')
+        self.log.debug('Deleting instance ({})...'.format(instance))
         num_before = len(list(nova.servers.list()))
         nova.servers.delete(instance)
 
@@ -292,3 +324,90 @@
             return False
 
         return True
+
+    def create_or_get_keypair(self, nova, keypair_name="testkey"):
+        """Create a new keypair, or return pointer if it already exists."""
+        try:
+            _keypair = nova.keypairs.get(keypair_name)
+            self.log.debug('Keypair ({}) already exists, '
+                           'using it.'.format(keypair_name))
+            return _keypair
+        except:
+            self.log.debug('Keypair ({}) does not exist, '
+                           'creating it.'.format(keypair_name))
+
+        _keypair = nova.keypairs.create(name=keypair_name)
+        return _keypair
+
+    def delete_resource(self, resource, resource_id,
+                        msg="resource", max_wait=120):
+        """Delete one openstack resource, such as one instance, keypair,
+        image, volume, stack, etc., and confirm deletion within max wait time.
+
+        :param resource: pointer to os resource type, ex:glance_client.images
+        :param resource_id: unique name or id for the openstack resource
+        :param msg: text to identify purpose in logging
+        :param max_wait: maximum wait time in seconds
+        :returns: True if successful, otherwise False
+        """
+        num_before = len(list(resource.list()))
+        resource.delete(resource_id)
+
+        tries = 0
+        num_after = len(list(resource.list()))
+        while num_after != (num_before - 1) and tries < (max_wait / 4):
+            self.log.debug('{} delete check: '
+                           '{} [{}:{}] {}'.format(msg, tries,
+                                                  num_before,
+                                                  num_after,
+                                                  resource_id))
+            time.sleep(4)
+            num_after = len(list(resource.list()))
+            tries += 1
+
+        self.log.debug('{}: expected, actual count = {}, '
+                       '{}'.format(msg, num_before - 1, num_after))
+
+        if num_after == (num_before - 1):
+            return True
+        else:
+            self.log.error('{} delete timed out'.format(msg))
+            return False
+
+    def resource_reaches_status(self, resource, resource_id,
+                                expected_stat='available',
+                                msg='resource', max_wait=120):
+        """Wait for an openstack resources status to reach an
+           expected status within a specified time. Useful to confirm that
+           nova instances, cinder vols, snapshots, glance images, heat stacks
+           and other resources eventually reach the expected status.
+
+        :param resource: pointer to os resource type, ex: heat_client.stacks
+        :param resource_id: unique id for the openstack resource
+        :param expected_stat: status to expect resource to reach
+        :param msg: text to identify purpose in logging
+        :param max_wait: maximum wait time in seconds
+        :returns: True if successful, False if status is not reached
+        """
+
+        tries = 0
+        resource_stat = resource.get(resource_id).status
+        while resource_stat != expected_stat and tries < (max_wait / 4):
+            self.log.debug('{} status check: '
+                           '{} [{}:{}] {}'.format(msg, tries,
+                                                  resource_stat,
+                                                  expected_stat,
+                                                  resource_id))
+            time.sleep(4)
+            resource_stat = resource.get(resource_id).status
+            tries += 1
+
+        self.log.debug('{}: expected, actual status = {}, '
+                       '{}'.format(msg, resource_stat, expected_stat))
+
+        if resource_stat == expected_stat:
+            return True
+        else:
+            self.log.debug('{} never reached expected status: '
+                           '{}'.format(resource_id, expected_stat))
+            return False
 
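The new delete_resource and resource_reaches_status helpers generalize the deprecated delete_image/delete_instance calls by polling every 4 seconds up to max_wait. A usage sketch, assuming u is an OpenStackAmuletUtils instance and nova is an authenticated client as in tests/basic_deployment.py (the image and flavor names are illustrative):

    # Wait for a nova instance to go ACTIVE, then clean it up.
    instance = u.create_instance(nova, 'cirros-image', 'amulet-vm', 'm1.tiny')
    assert u.resource_reaches_status(nova.servers, instance.id,
                                     expected_stat='ACTIVE',
                                     msg='nova instance', max_wait=120)

    # Deletion is confirmed by comparing list counts before and after,
    # with the same 4-second polling loop.
    assert u.delete_resource(nova.servers, instance, msg='nova instance')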
=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py	2015-04-16 21:32:59 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py	2015-07-01 14:47:24 +0000
@@ -240,7 +240,7 @@
         if self.relation_prefix:
             password_setting = self.relation_prefix + '_password'
 
-        for rid in relation_ids('shared-db'):
+        for rid in relation_ids(self.interfaces[0]):
             for unit in related_units(rid):
                 rdata = relation_get(rid=rid, unit=unit)
                 host = rdata.get('db_host')
 
=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py	2015-06-04 23:06:40 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py	2015-07-01 14:47:24 +0000
@@ -172,14 +172,16 @@
             'services': ['calico-felix',
                          'bird',
                          'neutron-dhcp-agent',
-                         'nova-api-metadata'],
+                         'nova-api-metadata',
+                         'etcd'],
             'packages': [[headers_package()] + determine_dkms_package(),
                          ['calico-compute',
                           'bird',
                           'neutron-dhcp-agent',
-                          'nova-api-metadata']],
-            'server_packages': ['neutron-server', 'calico-control'],
-            'server_services': ['neutron-server']
+                          'nova-api-metadata',
+                          'etcd']],
+            'server_packages': ['neutron-server', 'calico-control', 'etcd'],
+            'server_services': ['neutron-server', 'etcd']
         },
         'vsp': {
             'config': '/etc/neutron/plugins/nuage/nuage_plugin.ini',
 
=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py	2015-06-04 23:06:40 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py	2015-07-01 14:47:24 +0000
@@ -79,6 +79,7 @@
     ('trusty', 'icehouse'),
     ('utopic', 'juno'),
     ('vivid', 'kilo'),
+    ('wily', 'liberty'),
 ])
 
 
@@ -91,6 +92,7 @@
     ('2014.1', 'icehouse'),
     ('2014.2', 'juno'),
     ('2015.1', 'kilo'),
+    ('2015.2', 'liberty'),
 ])
 
 # The ugly duckling
@@ -113,6 +115,7 @@
     ('2.2.0', 'juno'),
     ('2.2.1', 'kilo'),
     ('2.2.2', 'kilo'),
+    ('2.3.0', 'liberty'),
 ])
 
 DEFAULT_LOOPBACK_SIZE = '5G'
@@ -321,6 +324,9 @@
         'kilo': 'trusty-updates/kilo',
         'kilo/updates': 'trusty-updates/kilo',
         'kilo/proposed': 'trusty-proposed/kilo',
+        'liberty': 'trusty-updates/liberty',
+        'liberty/updates': 'trusty-updates/liberty',
+        'liberty/proposed': 'trusty-proposed/liberty',
     }
 
     try:
@@ -549,6 +555,11 @@
 
     pip_create_virtualenv(os.path.join(parent_dir, 'venv'))
 
+    # Upgrade setuptools from default virtualenv version. The default version
+    # in trusty breaks update.py in global requirements master branch.
+    pip_install('setuptools', upgrade=True, proxy=http_proxy,
+                venv=os.path.join(parent_dir, 'venv'))
+
     for p in projects['repositories']:
         repo = p['repository']
         branch = p['branch']
@@ -610,24 +621,24 @@
     else:
         repo_dir = dest_dir
 
+    venv = os.path.join(parent_dir, 'venv')
+
     if update_requirements:
         if not requirements_dir:
             error_out('requirements repo must be cloned before '
                       'updating from global requirements.')
-        _git_update_requirements(repo_dir, requirements_dir)
+        _git_update_requirements(venv, repo_dir, requirements_dir)
 
     juju_log('Installing git repo from dir: {}'.format(repo_dir))
     if http_proxy:
-        pip_install(repo_dir, proxy=http_proxy,
-                    venv=os.path.join(parent_dir, 'venv'))
+        pip_install(repo_dir, proxy=http_proxy, venv=venv)
     else:
-        pip_install(repo_dir,
-                    venv=os.path.join(parent_dir, 'venv'))
+        pip_install(repo_dir, venv=venv)
 
     return repo_dir
 
 
-def _git_update_requirements(package_dir, reqs_dir):
+def _git_update_requirements(venv, package_dir, reqs_dir):
     """
     Update from global requirements.
 
@@ -636,12 +647,14 @@
     """
     orig_dir = os.getcwd()
     os.chdir(reqs_dir)
-    cmd = ['python', 'update.py', package_dir]
+    python = os.path.join(venv, 'bin/python')
+    cmd = [python, 'update.py', package_dir]
    try:
         subprocess.check_call(cmd)
     except subprocess.CalledProcessError:
         package = os.path.basename(package_dir)
-        error_out("Error updating {} from global-requirements.txt".format(package))
+        error_out("Error updating {} from "
+                  "global-requirements.txt".format(package))
     os.chdir(orig_dir)
 
 
=== modified file 'hooks/charmhelpers/contrib/python/packages.py'
--- hooks/charmhelpers/contrib/python/packages.py	2015-06-04 23:06:40 +0000
+++ hooks/charmhelpers/contrib/python/packages.py	2015-07-01 14:47:24 +0000
@@ -36,6 +36,8 @@
 def parse_options(given, available):
     """Given a set of options, check if available"""
     for key, value in sorted(given.items()):
+        if not value:
+            continue
         if key in available:
             yield "--{0}={1}".format(key, value)
 
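The two-line guard in parse_options means falsy option values are now skipped rather than rendered literally, so a call such as pip_install(pkg, proxy=None) no longer emits a bogus --proxy=None flag. A quick sketch of the behaviour:

    from charmhelpers.contrib.python.packages import parse_options

    list(parse_options({'proxy': None}, ('proxy',)))
    # []  (previously: ['--proxy=None'])
    list(parse_options({'proxy': 'http://squid:3128'}, ('proxy',)))
    # ['--proxy=http://squid:3128']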
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py	2015-06-04 23:06:40 +0000
+++ hooks/charmhelpers/core/hookenv.py	2015-07-01 14:47:24 +0000
@@ -21,7 +21,9 @@
 # Charm Helpers Developers <juju@lists.ubuntu.com>
 
 from __future__ import print_function
+from distutils.version import LooseVersion
 from functools import wraps
+import glob
 import os
 import json
 import yaml
@@ -242,29 +244,7 @@
         self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
         if os.path.exists(self.path):
             self.load_previous()
-
-    def __getitem__(self, key):
-        """For regular dict lookups, check the current juju config first,
-        then the previous (saved) copy. This ensures that user-saved values
-        will be returned by a dict lookup.
-
-        """
-        try:
-            return dict.__getitem__(self, key)
-        except KeyError:
-            return (self._prev_dict or {})[key]
-
-    def get(self, key, default=None):
-        try:
-            return self[key]
-        except KeyError:
-            return default
-
-    def keys(self):
-        prev_keys = []
-        if self._prev_dict is not None:
-            prev_keys = self._prev_dict.keys()
-        return list(set(prev_keys + list(dict.keys(self))))
+        atexit(self._implicit_save)
 
     def load_previous(self, path=None):
         """Load previous copy of config from disk.
@@ -283,6 +263,9 @@
         self.path = path or self.path
         with open(self.path) as f:
             self._prev_dict = json.load(f)
+        for k, v in self._prev_dict.items():
+            if k not in self:
+                self[k] = v
 
     def changed(self, key):
         """Return True if the current value for this key is different from
@@ -314,13 +297,13 @@
         instance.
 
         """
-        if self._prev_dict:
-            for k, v in six.iteritems(self._prev_dict):
-                if k not in self:
-                    self[k] = v
         with open(self.path, 'w') as f:
             json.dump(self, f)
 
+    def _implicit_save(self):
+        if self.implicit_save:
+            self.save()
+
 
 @cached
 def config(scope=None):
@@ -587,10 +570,14 @@
         hooks.execute(sys.argv)
     """
 
-    def __init__(self, config_save=True):
+    def __init__(self, config_save=None):
         super(Hooks, self).__init__()
         self._hooks = {}
-        self._config_save = config_save
+
+        # For unknown reasons, we allow the Hooks constructor to override
+        # config().implicit_save.
+        if config_save is not None:
+            config().implicit_save = config_save
 
     def register(self, name, function):
         """Register a hook"""
@@ -598,13 +585,16 @@
 
     def execute(self, args):
         """Execute a registered hook based on args[0]"""
+        _run_atstart()
         hook_name = os.path.basename(args[0])
         if hook_name in self._hooks:
-            self._hooks[hook_name]()
-            if self._config_save:
-                cfg = config()
-                if cfg.implicit_save:
-                    cfg.save()
+            try:
+                self._hooks[hook_name]()
+            except SystemExit as x:
+                if x.code is None or x.code == 0:
+                    _run_atexit()
+                raise
+            _run_atexit()
         else:
             raise UnregisteredHookError(hook_name)
 
@@ -732,13 +722,79 @@
 @translate_exc(from_exc=OSError, to_exc=NotImplementedError)
 def leader_set(settings=None, **kwargs):
     """Juju leader set value(s)"""
-    log("Juju leader-set '%s'" % (settings), level=DEBUG)
+    # Don't log secrets.
+    # log("Juju leader-set '%s'" % (settings), level=DEBUG)
     cmd = ['leader-set']
     settings = settings or {}
     settings.update(kwargs)
-    for k, v in settings.iteritems():
+    for k, v in settings.items():
         if v is None:
             cmd.append('{}='.format(k))
         else:
             cmd.append('{}={}'.format(k, v))
     subprocess.check_call(cmd)
+
+
+@cached
+def juju_version():
+    """Full version string (eg. '1.23.3.1-trusty-amd64')"""
+    # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1
+    jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0]
+    return subprocess.check_output([jujud, 'version'],
+                                   universal_newlines=True).strip()
+
+
+@cached
+def has_juju_version(minimum_version):
+    """Return True if the Juju version is at least the provided version"""
+    return LooseVersion(juju_version()) >= LooseVersion(minimum_version)
+
+
+_atexit = []
+_atstart = []
+
+
+def atstart(callback, *args, **kwargs):
+    '''Schedule a callback to run before the main hook.
+
+    Callbacks are run in the order they were added.
+
+    This is useful for modules and classes to perform initialization
+    and inject behavior. In particular:
+        - Run common code before all of your hooks, such as logging
+          the hook name or interesting relation data.
+        - Defer object or module initialization that requires a hook
+          context until we know there actually is a hook context,
+          making testing easier.
+        - Rather than requiring charm authors to include boilerplate to
+          invoke your helper's behavior, have it run automatically if
+          your object is instantiated or module imported.
+
+    This is not at all useful after your hook framework as been launched.
+    '''
+    global _atstart
+    _atstart.append((callback, args, kwargs))
+
+
+def atexit(callback, *args, **kwargs):
+    '''Schedule a callback to run on successful hook completion.
+
+    Callbacks are run in the reverse order that they were added.'''
+    _atexit.append((callback, args, kwargs))
+
+
+def _run_atstart():
+    '''Hook frameworks must invoke this before running the main hook body.'''
+    global _atstart
+    for callback, args, kwargs in _atstart:
+        callback(*args, **kwargs)
+    del _atstart[:]
+
+
+def _run_atexit():
+    '''Hook frameworks must invoke this after the main hook body has
+    successfully completed. Do not invoke it if the hook fails.'''
+    global _atexit
+    for callback, args, kwargs in reversed(_atexit):
+        callback(*args, **kwargs)
+    del _atexit[:]
 
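The hookenv change replaces the inline implicit-save logic with a small lifecycle: atstart callbacks run before the hook body, and atexit callbacks run only after a clean exit (Config now registers its own _implicit_save this way). A minimal sketch of a charm using the synced helpers:

    from charmhelpers.core import hookenv

    hookenv.atstart(hookenv.log, 'hook starting')  # runs before the body
    hookenv.atexit(hookenv.log, 'hook done')       # runs only on success

    hooks = hookenv.Hooks()

    @hooks.hook('config-changed')
    def config_changed():
        pass  # config() is persisted via the atexit-registered save

    # hooks.execute(sys.argv) runs _run_atstart(), the hook body, then
    # _run_atexit(); a hook that fails skips the atexit callbacks.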
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py	2015-06-04 23:06:40 +0000
+++ hooks/charmhelpers/core/host.py	2015-07-01 14:47:24 +0000
@@ -24,6 +24,7 @@
 import os
 import re
 import pwd
+import glob
 import grp
 import random
 import string
@@ -269,6 +270,21 @@
     return None
 
 
+def path_hash(path):
+    """
+    Generate a hash checksum of all files matching 'path'. Standard wildcards
+    like '*' and '?' are supported, see documentation for the 'glob' module for
+    more information.
+
+    :return: dict: A { filename: hash } dictionary for all matched files.
+        Empty if none found.
+    """
+    return {
+        filename: file_hash(filename)
+        for filename in glob.iglob(path)
+    }
+
+
 def check_hash(path, checksum, hash_type='md5'):
     """
     Validate a file using a cryptographic checksum.
@@ -296,23 +312,25 @@
 
         @restart_on_change({
             '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
+            '/etc/apache/sites-enabled/*': [ 'apache2' ]
         })
-        def ceph_client_changed():
+        def config_changed():
             pass  # your code here
 
         In this example, the cinder-api and cinder-volume services
         would be restarted if /etc/ceph/ceph.conf is changed by the
-        ceph_client_changed function.
+        ceph_client_changed function. The apache2 service would be
+        restarted if any file matching the pattern got changed, created
+        or removed. Standard wildcards are supported, see documentation
+        for the 'glob' module for more information.
     """
     def wrap(f):
         def wrapped_f(*args, **kwargs):
-            checksums = {}
-            for path in restart_map:
-                checksums[path] = file_hash(path)
+            checksums = {path: path_hash(path) for path in restart_map}
             f(*args, **kwargs)
             restarts = []
             for path in restart_map:
-                if checksums[path] != file_hash(path):
+                if path_hash(path) != checksums[path]:
                     restarts += restart_map[path]
             services_list = list(OrderedDict.fromkeys(restarts))
             if not stopstart:
 
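path_hash makes restart_on_change wildcard-aware: each map key is globbed and hashed into a {filename: hash} dict, so file creation and removal also count as changes. A sketch with hypothetical paths and service names (the decorator itself is the real charmhelpers API synced above):

    from charmhelpers.core.host import restart_on_change

    @restart_on_change({
        '/etc/ceph/ceph.conf': ['radosgw'],
        '/etc/haproxy/*.cfg': ['haproxy'],  # new/removed matches count too
    })
    def config_changed():
        pass  # render config files here; changed hashes trigger restarts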
=== modified file 'hooks/charmhelpers/core/services/base.py'
--- hooks/charmhelpers/core/services/base.py	2015-06-04 23:06:40 +0000
+++ hooks/charmhelpers/core/services/base.py	2015-07-01 14:47:24 +0000
@@ -128,15 +128,18 @@
         """
         Handle the current hook by doing The Right Thing with the registered services.
         """
-        hook_name = hookenv.hook_name()
-        if hook_name == 'stop':
-            self.stop_services()
-        else:
-            self.reconfigure_services()
-            self.provide_data()
-        cfg = hookenv.config()
-        if cfg.implicit_save:
-            cfg.save()
+        hookenv._run_atstart()
+        try:
+            hook_name = hookenv.hook_name()
+            if hook_name == 'stop':
+                self.stop_services()
+            else:
+                self.reconfigure_services()
+                self.provide_data()
+        except SystemExit as x:
+            if x.code is None or x.code == 0:
+                hookenv._run_atexit()
+        hookenv._run_atexit()
 
     def provide_data(self):
         """
 
=== modified file 'metadata.yaml'
--- metadata.yaml	2014-09-19 11:00:18 +0000
+++ metadata.yaml	2015-07-01 14:47:24 +0000
@@ -7,7 +7,10 @@
   .
   This charm provides the RADOS HTTP gateway supporting S3 and Swift protocols
   for object storage.
-categories:
+tags:
+  - openstack
+  - storage
+  - file-servers
   - misc
 requires:
   mon:
 
=== modified file 'tests/00-setup'
--- tests/00-setup	2014-09-29 01:57:43 +0000
+++ tests/00-setup	2015-07-01 14:47:24 +0000
@@ -5,6 +5,10 @@
 sudo add-apt-repository --yes ppa:juju/stable
 sudo apt-get update --yes
 sudo apt-get install --yes python-amulet \
+                           python-cinderclient \
+                           python-distro-info \
+                           python-glanceclient \
+                           python-heatclient \
                            python-keystoneclient \
-                           python-glanceclient \
-                           python-novaclient
+                           python-novaclient \
+                           python-swiftclient
 
=== modified file 'tests/017-basic-trusty-kilo' (properties changed: -x to +x)
=== modified file 'tests/019-basic-vivid-kilo' (properties changed: -x to +x)
=== modified file 'tests/basic_deployment.py'
--- tests/basic_deployment.py 2015-04-16 21:31:30 +0000
+++ tests/basic_deployment.py 2015-07-01 14:47:24 +0000
@@ -1,13 +1,14 @@
1#!/usr/bin/python1#!/usr/bin/python
22
3import amulet3import amulet
4import time
4from charmhelpers.contrib.openstack.amulet.deployment import (5from charmhelpers.contrib.openstack.amulet.deployment import (
5 OpenStackAmuletDeployment6 OpenStackAmuletDeployment
6)7)
7from charmhelpers.contrib.openstack.amulet.utils import ( # noqa8from charmhelpers.contrib.openstack.amulet.utils import (
8 OpenStackAmuletUtils,9 OpenStackAmuletUtils,
9 DEBUG,10 DEBUG,
10 ERROR11 #ERROR
11)12)
1213
13# Use DEBUG to turn on debug logging14# Use DEBUG to turn on debug logging
@@ -35,9 +36,12 @@
35 compatible with the local charm (e.g. stable or next).36 compatible with the local charm (e.g. stable or next).
36 """37 """
37 this_service = {'name': 'ceph-radosgw'}38 this_service = {'name': 'ceph-radosgw'}
38 other_services = [{'name': 'ceph', 'units': 3}, {'name': 'mysql'},39 other_services = [{'name': 'ceph', 'units': 3},
39 {'name': 'keystone'}, {'name': 'rabbitmq-server'},40 {'name': 'mysql'},
40 {'name': 'nova-compute'}, {'name': 'glance'},41 {'name': 'keystone'},
42 {'name': 'rabbitmq-server'},
43 {'name': 'nova-compute'},
44 {'name': 'glance'},
41 {'name': 'cinder'}]45 {'name': 'cinder'}]
42 super(CephRadosGwBasicDeployment, self)._add_services(this_service,46 super(CephRadosGwBasicDeployment, self)._add_services(this_service,
43 other_services)47 other_services)
@@ -92,13 +96,20 @@
92 self.mysql_sentry = self.d.sentry.unit['mysql/0']96 self.mysql_sentry = self.d.sentry.unit['mysql/0']
93 self.keystone_sentry = self.d.sentry.unit['keystone/0']97 self.keystone_sentry = self.d.sentry.unit['keystone/0']
94 self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0']98 self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
95 self.nova_compute_sentry = self.d.sentry.unit['nova-compute/0']99 self.nova_sentry = self.d.sentry.unit['nova-compute/0']
96 self.glance_sentry = self.d.sentry.unit['glance/0']100 self.glance_sentry = self.d.sentry.unit['glance/0']
97 self.cinder_sentry = self.d.sentry.unit['cinder/0']101 self.cinder_sentry = self.d.sentry.unit['cinder/0']
98 self.ceph0_sentry = self.d.sentry.unit['ceph/0']102 self.ceph0_sentry = self.d.sentry.unit['ceph/0']
99 self.ceph1_sentry = self.d.sentry.unit['ceph/1']103 self.ceph1_sentry = self.d.sentry.unit['ceph/1']
100 self.ceph2_sentry = self.d.sentry.unit['ceph/2']104 self.ceph2_sentry = self.d.sentry.unit['ceph/2']
101 self.ceph_radosgw_sentry = self.d.sentry.unit['ceph-radosgw/0']105 self.ceph_radosgw_sentry = self.d.sentry.unit['ceph-radosgw/0']
106 u.log.debug('openstack release val: {}'.format(
107 self._get_openstack_release()))
108 u.log.debug('openstack release str: {}'.format(
109 self._get_openstack_release_string()))
110
111 # Let things settle a bit original moving forward
112 time.sleep(30)
102113
103 # Authenticate admin with keystone114 # Authenticate admin with keystone
104 self.keystone = u.authenticate_keystone_admin(self.keystone_sentry,115 self.keystone = u.authenticate_keystone_admin(self.keystone_sentry,
@@ -135,39 +146,77 @@
135 'password',146 'password',
136 self.demo_tenant)147 self.demo_tenant)
137148
138 def _ceph_osd_id(self, index):149 # Authenticate radosgw user using swift api
139 """Produce a shell command that will return a ceph-osd id."""150 ks_obj_rel = self.keystone_sentry.relation(
140 return "`initctl list | grep 'ceph-osd ' | awk 'NR=={} {{ print $2 }}' | grep -o '[0-9]*'`".format(index + 1) # noqa151 'identity-service',
141152 'ceph-radosgw:identity-service')
142 def test_services(self):153 self.swift = u.authenticate_swift_user(
154 self.keystone,
155 user=ks_obj_rel['service_username'],
156 password=ks_obj_rel['service_password'],
157 tenant=ks_obj_rel['service_tenant'])
158
159 def test_100_ceph_processes(self):
160 """Verify that the expected service processes are running
161 on each ceph unit."""
162
163 # Process name and quantity of processes to expect on each unit
164 ceph_processes = {
165 'ceph-mon': 1,
166 'ceph-osd': 2
167 }
168
169 # Units with process names and PID quantities expected
170 expected_processes = {
171 self.ceph_radosgw_sentry: {'radosgw': 1},
172 self.ceph0_sentry: ceph_processes,
173 self.ceph1_sentry: ceph_processes,
174 self.ceph2_sentry: ceph_processes
175 }
176
177 actual_pids = u.get_unit_process_ids(expected_processes)
178 ret = u.validate_unit_process_ids(expected_processes, actual_pids)
179 if ret:
180 amulet.raise_status(amulet.FAIL, msg=ret)
181
182 def test_102_services(self):
143 """Verify the expected services are running on the service units."""183 """Verify the expected services are running on the service units."""
144 ceph_services = ['status ceph-mon-all',184
145 'status ceph-mon id=`hostname`']185 services = {
146 commands = {186 self.mysql_sentry: ['mysql'],
147 self.mysql_sentry: ['status mysql'],187 self.rabbitmq_sentry: ['rabbitmq-server'],
148 self.rabbitmq_sentry: ['sudo service rabbitmq-server status'],188 self.nova_sentry: ['nova-compute'],
149 self.nova_compute_sentry: ['status nova-compute'],189 self.keystone_sentry: ['keystone'],
150 self.keystone_sentry: ['status keystone'],190 self.glance_sentry: ['glance-registry',
151 self.glance_sentry: ['status glance-registry',191 'glance-api'],
152 'status glance-api'],192 self.cinder_sentry: ['cinder-api',
153 self.cinder_sentry: ['status cinder-api',193 'cinder-scheduler',
154 'status cinder-scheduler',194 'cinder-volume'],
155 'status cinder-volume'],
156 self.ceph_radosgw_sentry: ['status radosgw-all']
157 }195 }
158 ceph_osd0 = 'status ceph-osd id={}'.format(self._ceph_osd_id(0))196
159 ceph_osd1 = 'status ceph-osd id={}'.format(self._ceph_osd_id(1))197 if self._get_openstack_release() < self.vivid_kilo:
160 ceph_services.extend([ceph_osd0, ceph_osd1, 'status ceph-osd-all'])198 # For upstart systems only. Ceph services under systemd
161 commands[self.ceph0_sentry] = ceph_services199 # are checked by process name instead.
162 commands[self.ceph1_sentry] = ceph_services200 ceph_services = [
163 commands[self.ceph2_sentry] = ceph_services201 'ceph-mon-all',
164202 'ceph-mon id=`hostname`',
165 ret = u.validate_services(commands)203 'ceph-osd-all',
204 'ceph-osd id={}'.format(u.get_ceph_osd_id_cmd(0)),
205 'ceph-osd id={}'.format(u.get_ceph_osd_id_cmd(1))
206 ]
207 services[self.ceph0_sentry] = ceph_services
208 services[self.ceph1_sentry] = ceph_services
209 services[self.ceph2_sentry] = ceph_services
210 services[self.ceph_radosgw_sentry] = ['radosgw-all']
211
212 ret = u.validate_services_by_name(services)
166 if ret:213 if ret:
167 amulet.raise_status(amulet.FAIL, msg=ret)214 amulet.raise_status(amulet.FAIL, msg=ret)
168215
169 def test_ceph_radosgw_ceph_relation(self):216 def test_200_ceph_radosgw_ceph_relation(self):
170 """Verify the ceph-radosgw to ceph relation data."""217 """Verify the ceph-radosgw to ceph relation data."""
218 u.log.debug('Checking ceph-radosgw:mon to ceph:radosgw '
219 'relation data...')
171 unit = self.ceph_radosgw_sentry220 unit = self.ceph_radosgw_sentry
172 relation = ['mon', 'ceph:radosgw']221 relation = ['mon', 'ceph:radosgw']
173 expected = {222 expected = {
@@ -179,8 +228,9 @@
179 message = u.relation_error('ceph-radosgw to ceph', ret)228 message = u.relation_error('ceph-radosgw to ceph', ret)
180 amulet.raise_status(amulet.FAIL, msg=message)229 amulet.raise_status(amulet.FAIL, msg=message)
181230
182 def test_ceph0_ceph_radosgw_relation(self):231 def test_201_ceph0_ceph_radosgw_relation(self):
183 """Verify the ceph0 to ceph-radosgw relation data."""232 """Verify the ceph0 to ceph-radosgw relation data."""
233 u.log.debug('Checking ceph0:radosgw radosgw:mon relation data...')
184 unit = self.ceph0_sentry234 unit = self.ceph0_sentry
185 relation = ['radosgw', 'ceph-radosgw:mon']235 relation = ['radosgw', 'ceph-radosgw:mon']
186 expected = {236 expected = {
@@ -196,8 +246,9 @@
             message = u.relation_error('ceph0 to ceph-radosgw', ret)
             amulet.raise_status(amulet.FAIL, msg=message)

-    def test_ceph1_ceph_radosgw_relation(self):
+    def test_202_ceph1_ceph_radosgw_relation(self):
         """Verify the ceph1 to ceph-radosgw relation data."""
+        u.log.debug('Checking ceph1:radosgw ceph-radosgw:mon relation data...')
         unit = self.ceph1_sentry
         relation = ['radosgw', 'ceph-radosgw:mon']
         expected = {
@@ -213,8 +264,9 @@
             message = u.relation_error('ceph1 to ceph-radosgw', ret)
             amulet.raise_status(amulet.FAIL, msg=message)

-    def test_ceph2_ceph_radosgw_relation(self):
+    def test_203_ceph2_ceph_radosgw_relation(self):
         """Verify the ceph2 to ceph-radosgw relation data."""
+        u.log.debug('Checking ceph2:radosgw ceph-radosgw:mon relation data...')
         unit = self.ceph2_sentry
         relation = ['radosgw', 'ceph-radosgw:mon']
         expected = {
@@ -230,8 +282,10 @@
             message = u.relation_error('ceph2 to ceph-radosgw', ret)
             amulet.raise_status(amulet.FAIL, msg=message)

-    def test_ceph_radosgw_keystone_relation(self):
+    def test_204_ceph_radosgw_keystone_relation(self):
         """Verify the ceph-radosgw to keystone relation data."""
+        u.log.debug('Checking ceph-radosgw to keystone id service '
+                    'relation data...')
         unit = self.ceph_radosgw_sentry
         relation = ['identity-service', 'keystone:identity-service']
         expected = {
@@ -249,8 +303,10 @@
             message = u.relation_error('ceph-radosgw to keystone', ret)
             amulet.raise_status(amulet.FAIL, msg=message)

-    def test_keystone_ceph_radosgw_relation(self):
+    def test_205_keystone_ceph_radosgw_relation(self):
         """Verify the keystone to ceph-radosgw relation data."""
+        u.log.debug('Checking keystone to ceph-radosgw id service '
+                    'relation data...')
         unit = self.keystone_sentry
         relation = ['identity-service', 'ceph-radosgw:identity-service']
         expected = {
@@ -273,8 +329,9 @@
             message = u.relation_error('keystone to ceph-radosgw', ret)
             amulet.raise_status(amulet.FAIL, msg=message)

-    def test_ceph_config(self):
+    def test_300_ceph_radosgw_config(self):
         """Verify the data in the ceph config file."""
+        u.log.debug('Checking ceph config file data...')
         unit = self.ceph_radosgw_sentry
         conf = '/etc/ceph/ceph.conf'
         keystone_sentry = self.keystone_sentry
@@ -309,11 +366,153 @@
309 message = "ceph config error: {}".format(ret)366 message = "ceph config error: {}".format(ret)
310 amulet.raise_status(amulet.FAIL, msg=message)367 amulet.raise_status(amulet.FAIL, msg=message)
311368
312 def test_restart_on_config_change(self):369 def test_302_cinder_rbd_config(self):
313 """Verify the specified services are restarted on config change."""370 """Verify the cinder config file data regarding ceph."""
314 # NOTE(coreycb): Test not implemented but should it be? ceph-radosgw371 u.log.debug('Checking cinder (rbd) config file data...')
315 # svcs aren't restarted by charm after config change372 unit = self.cinder_sentry
316 # Should they be restarted?373 conf = '/etc/cinder/cinder.conf'
317 if self._get_openstack_release() >= self.precise_essex:374 expected = {
318 u.log.error("Test not implemented")375 'DEFAULT': {
319 return376 'volume_driver': 'cinder.volume.drivers.rbd.RBDDriver'
377 }
378 }
379 for section, pairs in expected.iteritems():
380 ret = u.validate_config_data(unit, conf, section, pairs)
381 if ret:
382 message = "cinder (rbd) config error: {}".format(ret)
383 amulet.raise_status(amulet.FAIL, msg=message)
384
385 def test_304_glance_rbd_config(self):
386 """Verify the glance config file data regarding ceph."""
387 u.log.debug('Checking glance (rbd) config file data...')
388 unit = self.glance_sentry
389 conf = '/etc/glance/glance-api.conf'
390 config = {
391 'default_store': 'rbd',
392 'rbd_store_ceph_conf': '/etc/ceph/ceph.conf',
393 'rbd_store_user': 'glance',
394 'rbd_store_pool': 'glance',
395 'rbd_store_chunk_size': '8'
396 }
397
398 if self._get_openstack_release() >= self.trusty_kilo:
399 # Kilo or later
400 config['stores'] = ('glance.store.filesystem.Store,'
401 'glance.store.http.Store,'
402 'glance.store.rbd.Store')
403 section = 'glance_store'
404 else:
405 # Juno or earlier
406 section = 'DEFAULT'
407
408 expected = {section: config}
409 for section, pairs in expected.iteritems():
410 ret = u.validate_config_data(unit, conf, section, pairs)
411 if ret:
412 message = "glance (rbd) config error: {}".format(ret)
413 amulet.raise_status(amulet.FAIL, msg=message)
414
415 def test_306_nova_rbd_config(self):
416 """Verify the nova config file data regarding ceph."""
417 u.log.debug('Checking nova (rbd) config file data...')
418 unit = self.nova_sentry
419 conf = '/etc/nova/nova.conf'
420 expected = {
421 'libvirt': {
422 'rbd_pool': 'nova',
423 'rbd_user': 'nova-compute',
424 'rbd_secret_uuid': u.not_null
425 }
426 }
427 for section, pairs in expected.iteritems():
428 ret = u.validate_config_data(unit, conf, section, pairs)
429 if ret:
430 message = "nova (rbd) config error: {}".format(ret)
431 amulet.raise_status(amulet.FAIL, msg=message)
432
433 def test_400_ceph_check_osd_pools(self):
434 """Check osd pools on all ceph units, expect them to be
435 identical, and expect specific pools to be present."""
436 u.log.debug('Checking pools on ceph units...')
437
438 expected_pools = self.get_ceph_expected_pools(radosgw=True)
439 results = []
440 sentries = [
441 self.ceph_radosgw_sentry,
442 self.ceph0_sentry,
443 self.ceph1_sentry,
444 self.ceph2_sentry
445 ]
446
447 # Check for presence of expected pools on each unit
448 u.log.debug('Expected pools: {}'.format(expected_pools))
449 for sentry_unit in sentries:
450 pools = u.get_ceph_pools(sentry_unit)
451 results.append(pools)
452
453 for expected_pool in expected_pools:
454 if expected_pool not in pools:
455 msg = ('{} does not have pool: '
456 '{}'.format(sentry_unit.info['unit_name'],
457 expected_pool))
458 amulet.raise_status(amulet.FAIL, msg=msg)
459 u.log.debug('{} has (at least) the expected '
460 'pools.'.format(sentry_unit.info['unit_name']))
461
462 # Check that all units returned the same pool name:id data
463 ret = u.validate_list_of_identical_dicts(results)
464 if ret:
465 u.log.debug('Pool list results: {}'.format(results))
466 msg = ('{}; Pool list results are not identical on all '
467 'ceph units.'.format(ret))
468 amulet.raise_status(amulet.FAIL, msg=msg)
469 else:
470 u.log.debug('Pool list on all ceph units produced the '
471 'same results (OK).')
472
473 def test_402_swift_api_connection(self):
474 """Simple api call to confirm basic service functionality"""
475 u.log.debug('Checking basic radosgw functionality via swift api...')
476 headers, containers = self.swift.get_account()
477 assert('content-type' in headers.keys())
478 assert(containers == [])
479
480 def test_498_radosgw_cmds_exit_zero(self):
481 """Check basic functionality of radosgw cli commands against
482 the ceph_radosgw unit."""
483 sentry_units = [self.ceph_radosgw_sentry]
484 commands = [
485 'sudo radosgw-admin regions list',
486 'sudo radosgw-admin bucket list',
487 'sudo radosgw-admin zone list',
488 'sudo radosgw-admin metadata list',
489 'sudo radosgw-admin gc list'
490 ]
491 ret = u.check_commands_on_units(commands, sentry_units)
492 if ret:
493 amulet.raise_status(amulet.FAIL, msg=ret)
494
495 def test_499_ceph_cmds_exit_zero(self):
496 """Check basic functionality of ceph cli commands against
497 all ceph units."""
498 sentry_units = [
499 self.ceph_radosgw_sentry,
500 self.ceph0_sentry,
501 self.ceph1_sentry,
502 self.ceph2_sentry
503 ]
504 commands = [
505 'sudo ceph health',
506 'sudo ceph mds stat',
507 'sudo ceph pg stat',
508 'sudo ceph osd stat',
509 'sudo ceph mon stat',
510 ]
511 ret = u.check_commands_on_units(commands, sentry_units)
512 if ret:
513 amulet.raise_status(amulet.FAIL, msg=ret)
514
515 # Note(beisner): need to add basic object store functional checks.
516
517 # FYI: No restart check as ceph services do not restart
518 # when charm config changes, unless monitor count increases.
320519
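Review note: the trailing comment about object store checks could eventually be satisfied by a small create/read/delete cycle through the same swift connection test_402 already uses. A minimal sketch only; the test number, container name and payload are illustrative, not part of this proposal:

    def test_403_swift_object_cycle(self):
        """Sketch: create, read back and delete one object via radosgw."""
        container = 'amulet-demo'
        payload = 'ceph-radosgw amulet test payload'

        # Create a container and object through the radosgw swift API
        self.swift.put_container(container)
        self.swift.put_object(container, 'demo-object', contents=payload,
                              content_type='text/plain')

        # Read the object back and verify the round trip
        _, body = self.swift.get_object(container, 'demo-object')
        assert(body == payload)

        # Clean up
        self.swift.delete_object(container, 'demo-object')
        self.swift.delete_container(container)
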
=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
--- tests/charmhelpers/contrib/amulet/utils.py 2015-06-04 23:06:40 +0000
+++ tests/charmhelpers/contrib/amulet/utils.py 2015-07-01 14:47:24 +0000
@@ -14,14 +14,17 @@
 # You should have received a copy of the GNU Lesser General Public License
 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.

+import amulet
 import ConfigParser
+import distro_info
 import io
 import logging
+import os
 import re
+import six
 import sys
 import time
-
-import six
+import urlparse


 class AmuletUtils(object):
@@ -33,6 +36,7 @@

     def __init__(self, log_level=logging.ERROR):
         self.log = self.get_logger(level=log_level)
+        self.ubuntu_releases = self.get_ubuntu_releases()

     def get_logger(self, name="amulet-logger", level=logging.DEBUG):
         """Get a logger object that will log to stdout."""
@@ -70,12 +74,44 @@
         else:
             return False

+    def get_ubuntu_release_from_sentry(self, sentry_unit):
+        """Get Ubuntu release codename from sentry unit.
+
+        :param sentry_unit: amulet sentry/service unit pointer
+        :returns: list of strings - release codename, failure message
+        """
+        msg = None
+        cmd = 'lsb_release -cs'
+        release, code = sentry_unit.run(cmd)
+        if code == 0:
+            self.log.debug('{} lsb_release: {}'.format(
+                sentry_unit.info['unit_name'], release))
+        else:
+            msg = ('{} `{}` returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, release, code))
+        if release not in self.ubuntu_releases:
+            msg = ("Release ({}) not found in Ubuntu releases "
+                   "({})".format(release, self.ubuntu_releases))
+        return release, msg
+
     def validate_services(self, commands):
-        """Validate services.
-
-        Verify the specified services are running on the corresponding
+        """Validate that lists of commands succeed on service units. Can be
+        used to verify system services are running on the corresponding
         service units.
-        """
+
+        :param commands: dict with sentry keys and arbitrary command list vals
+        :returns: None if successful, Failure string message otherwise
+        """
+        self.log.debug('Checking status of system services...')
+
+        # /!\ DEPRECATION WARNING (beisner):
+        # New and existing tests should be rewritten to use
+        # validate_services_by_name() as it is aware of init systems.
+        self.log.warn('/!\\ DEPRECATION WARNING: use '
+                      'validate_services_by_name instead of validate_services '
+                      'due to init system differences.')
+
         for k, v in six.iteritems(commands):
             for cmd in v:
                 output, code = k.run(cmd)
@@ -86,6 +122,41 @@
86 return "command `{}` returned {}".format(cmd, str(code))122 return "command `{}` returned {}".format(cmd, str(code))
87 return None123 return None
88124
125 def validate_services_by_name(self, sentry_services):
126 """Validate system service status by service name, automatically
127 detecting init system based on Ubuntu release codename.
128
129 :param sentry_services: dict with sentry keys and svc list values
130 :returns: None if successful, Failure string message otherwise
131 """
132 self.log.debug('Checking status of system services...')
133
134 # Point at which systemd became a thing
135 systemd_switch = self.ubuntu_releases.index('vivid')
136
137 for sentry_unit, services_list in six.iteritems(sentry_services):
138 # Get lsb_release codename from unit
139 release, ret = self.get_ubuntu_release_from_sentry(sentry_unit)
140 if ret:
141 return ret
142
143 for service_name in services_list:
144 if (self.ubuntu_releases.index(release) >= systemd_switch or
145 service_name == "rabbitmq-server"):
146 # init is systemd
147 cmd = 'sudo service {} status'.format(service_name)
148 elif self.ubuntu_releases.index(release) < systemd_switch:
149 # init is upstart
150 cmd = 'sudo status {}'.format(service_name)
151
152 output, code = sentry_unit.run(cmd)
153 self.log.debug('{} `{}` returned '
154 '{}'.format(sentry_unit.info['unit_name'],
155 cmd, code))
156 if code != 0:
157 return "command `{}` returned {}".format(cmd, str(code))
158 return None
159
89 def _get_config(self, unit, filename):160 def _get_config(self, unit, filename):
90 """Get a ConfigParser object for parsing a unit's config file."""161 """Get a ConfigParser object for parsing a unit's config file."""
91 file_contents = unit.file_contents(filename)162 file_contents = unit.file_contents(filename)
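Review note: for callers, the new helper keeps the familiar sentry-keyed dict shape but takes bare service names and picks the init-appropriate status command per unit. A minimal usage sketch; the sentry attribute names are illustrative:

    # On upstart releases (< vivid) this runs `sudo status mysql`; on
    # systemd releases, and for rabbitmq-server on any release, it runs
    # `sudo service <name> status`.
    services = {
        self.mysql_sentry: ['mysql'],
        self.rabbitmq_sentry: ['rabbitmq-server'],
    }
    ret = u.validate_services_by_name(services)
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)
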
@@ -103,7 +174,15 @@

         Verify that the specified section of the config file contains
         the expected option key:value pairs.
+
+        Compare expected dictionary data vs actual dictionary data.
+        The values in the 'expected' dictionary can be strings, bools, ints,
+        longs, or can be a function that evaluates a variable and returns a
+        bool.
         """
+        self.log.debug('Validating config file data ({} in {} on {})'
+                       '...'.format(section, config_file,
+                                    sentry_unit.info['unit_name']))
         config = self._get_config(sentry_unit, config_file)

         if section != 'DEFAULT' and not config.has_section(section):
@@ -112,9 +191,20 @@
         for k in expected.keys():
             if not config.has_option(section, k):
                 return "section [{}] is missing option {}".format(section, k)
-            if config.get(section, k) != expected[k]:
+
+            actual = config.get(section, k)
+            v = expected[k]
+            if (isinstance(v, six.string_types) or
+                    isinstance(v, bool) or
+                    isinstance(v, six.integer_types)):
+                # handle explicit values
+                if actual != v:
+                    return "section [{}] {}:{} != expected {}:{}".format(
+                        section, k, actual, k, expected[k])
+            # handle function pointers, such as not_null or valid_ip
+            elif not v(actual):
                 return "section [{}] {}:{} != expected {}:{}".format(
-                    section, k, config.get(section, k), k, expected[k])
+                    section, k, actual, k, expected[k])
         return None

     def _validate_dict_data(self, expected, actual):
@@ -122,7 +212,7 @@

         Compare expected dictionary data vs actual dictionary data.
         The values in the 'expected' dictionary can be strings, bools, ints,
-        longs, or can be a function that evaluate a variable and returns a
+        longs, or can be a function that evaluates a variable and returns a
         bool.
         """
         self.log.debug('actual: {}'.format(repr(actual)))
@@ -133,8 +223,10 @@
             if (isinstance(v, six.string_types) or
                     isinstance(v, bool) or
                     isinstance(v, six.integer_types)):
+                # handle explicit values
                 if v != actual[k]:
                     return "{}:{}".format(k, actual[k])
+                # handle function pointers, such as not_null or valid_ip
                 elif not v(actual[k]):
                     return "{}:{}".format(k, actual[k])
             else:
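Review note: both validators accept a callable as an expected value, so a test can assert on the shape of a value rather than an exact string. A short sketch using helpers already referenced elsewhere in this diff (u.not_null, u.valid_ip); the relation and keys are illustrative:

    expected = {
        'private-address': u.valid_ip,   # validated by function, not literal
        'auth': u.not_null,              # any non-None value passes
        'use_syslog': 'false',           # compared literally
    }
    ret = u.validate_relation_data(unit, relation, expected)
    if ret:
        message = u.relation_error('my-service to peer', ret)
        amulet.raise_status(amulet.FAIL, msg=message)
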
@@ -321,3 +413,121 @@

     def endpoint_error(self, name, data):
         return 'unexpected endpoint data in {} - {}'.format(name, data)
+
+    def get_ubuntu_releases(self):
+        """Return a list of all Ubuntu releases in order of release."""
+        _d = distro_info.UbuntuDistroInfo()
+        _release_list = _d.all
+        self.log.debug('Ubuntu release list: {}'.format(_release_list))
+        return _release_list
+
+    def file_to_url(self, file_rel_path):
+        """Convert a relative file path to a file URL."""
+        _abs_path = os.path.abspath(file_rel_path)
+        return urlparse.urlparse(_abs_path, scheme='file').geturl()
+
+    def check_commands_on_units(self, commands, sentry_units):
+        """Check that all commands in a list exit zero on all
+        sentry units in a list.
+
+        :param commands: list of bash commands
+        :param sentry_units: list of sentry unit pointers
+        :returns: None if successful; Failure message otherwise
+        """
+        self.log.debug('Checking exit codes for {} commands on {} '
+                       'sentry units...'.format(len(commands),
+                                                len(sentry_units)))
+        for sentry_unit in sentry_units:
+            for cmd in commands:
+                output, code = sentry_unit.run(cmd)
+                if code == 0:
+                    self.log.debug('{} `{}` returned {} '
+                                   '(OK)'.format(sentry_unit.info['unit_name'],
+                                                 cmd, code))
+                else:
+                    return ('{} `{}` returned {} '
+                            '{}'.format(sentry_unit.info['unit_name'],
+                                        cmd, code, output))
+        return None
+
+    def get_process_id_list(self, sentry_unit, process_name):
+        """Get a list of process ID(s) from a single sentry juju unit
+        for a single process name.
+
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
+        :param process_name: Process name
+        :returns: List of process IDs
+        """
+        cmd = 'pidof {}'.format(process_name)
+        output, code = sentry_unit.run(cmd)
+        if code != 0:
+            msg = ('{} `{}` returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, code, output))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+        return str(output).split()
+
+    def get_unit_process_ids(self, unit_processes):
+        """Construct a dict containing unit sentries, process names, and
+        process IDs."""
+        pid_dict = {}
+        for sentry_unit, process_list in unit_processes.iteritems():
+            pid_dict[sentry_unit] = {}
+            for process in process_list:
+                pids = self.get_process_id_list(sentry_unit, process)
+                pid_dict[sentry_unit].update({process: pids})
+        return pid_dict
+
+    def validate_unit_process_ids(self, expected, actual):
+        """Validate process id quantities for services on units."""
+        self.log.debug('Checking units for running processes...')
+        self.log.debug('Expected PIDs: {}'.format(expected))
+        self.log.debug('Actual PIDs: {}'.format(actual))
+
+        if len(actual) != len(expected):
+            return ('Unit count mismatch. expected, actual: {}, '
+                    '{} '.format(len(expected), len(actual)))
+
+        for (e_sentry, e_proc_names) in expected.iteritems():
+            e_sentry_name = e_sentry.info['unit_name']
+            if e_sentry in actual.keys():
+                a_proc_names = actual[e_sentry]
+            else:
+                return ('Expected sentry ({}) not found in actual dict data.'
+                        '{}'.format(e_sentry_name, e_sentry))
+
+            if len(e_proc_names.keys()) != len(a_proc_names.keys()):
+                return ('Process name count mismatch. expected, actual: {}, '
+                        '{}'.format(len(expected), len(actual)))
+
+            for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \
+                    zip(e_proc_names.items(), a_proc_names.items()):
+                if e_proc_name != a_proc_name:
+                    return ('Process name mismatch. expected, actual: {}, '
+                            '{}'.format(e_proc_name, a_proc_name))
+
+                a_pids_length = len(a_pids)
+                if e_pids_length != a_pids_length:
+                    return ('PID count mismatch. {} ({}) expected, actual: '
+                            '{}, {} ({})'.format(e_sentry_name, e_proc_name,
+                                                 e_pids_length, a_pids_length,
+                                                 a_pids))
+                else:
+                    self.log.debug('PID check OK: {} {} {}: '
+                                   '{}'.format(e_sentry_name, e_proc_name,
+                                               e_pids_length, a_pids))
+        return None
+
+    def validate_list_of_identical_dicts(self, list_of_dicts):
+        """Check that all dicts within a list are identical."""
+        hashes = []
+        for _dict in list_of_dicts:
+            hashes.append(hash(frozenset(_dict.items())))
+
+        self.log.debug('Hashes: {}'.format(hashes))
+        if len(set(hashes)) == 1:
+            self.log.debug('Dicts within list are identical')
+        else:
+            return 'Dicts within list are not identical'
+
+        return None

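Review note: the three PID helpers are designed to be used together. The dict that encodes expected per-process PID counts can be passed straight to get_unit_process_ids (it only iterates the process names), and its output is then compared by validate_unit_process_ids. A sketch; the sentries and counts are illustrative:

    expected_processes = {
        self.ceph_radosgw_sentry: {'radosgw': 1},
        self.ceph0_sentry: {'ceph-mon': 1, 'ceph-osd': 2},
    }
    actual_pids = u.get_unit_process_ids(expected_processes)
    ret = u.validate_unit_process_ids(expected_processes, actual_pids)
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)
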
=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-04 23:06:40 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-01 14:47:24 +0000
@@ -79,9 +79,9 @@
         services.append(this_service)
         use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
                       'ceph-osd', 'ceph-radosgw']
-        # Openstack subordinate charms do not expose an origin option as that
-        # is controlled by the principle
-        ignore = ['neutron-openvswitch']
+        # Most OpenStack subordinate charms do not expose an origin option
+        # as that is controlled by the principal.
+        ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']

         if self.openstack:
             for svc in services:
@@ -110,7 +110,8 @@
         (self.precise_essex, self.precise_folsom, self.precise_grizzly,
          self.precise_havana, self.precise_icehouse,
          self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
-         self.trusty_kilo, self.vivid_kilo) = range(10)
+         self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
+         self.wily_liberty) = range(12)

         releases = {
             ('precise', None): self.precise_essex,
@@ -121,8 +122,10 @@
             ('trusty', None): self.trusty_icehouse,
             ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
             ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
+            ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
             ('utopic', None): self.utopic_juno,
-            ('vivid', None): self.vivid_kilo}
+            ('vivid', None): self.vivid_kilo,
+            ('wily', None): self.wily_liberty}
         return releases[(self.series, self.openstack)]

     def _get_openstack_release_string(self):
@@ -138,9 +141,43 @@
             ('trusty', 'icehouse'),
             ('utopic', 'juno'),
             ('vivid', 'kilo'),
+            ('wily', 'liberty'),
         ])
         if self.openstack:
             os_origin = self.openstack.split(':')[1]
             return os_origin.split('%s-' % self.series)[1].split('/')[0]
         else:
             return releases[self.series]
+
+    def get_ceph_expected_pools(self, radosgw=False):
+        """Return a list of expected ceph pools in a ceph + cinder + glance
+        test scenario, based on OpenStack release and whether ceph radosgw
+        is flagged as present or not."""
+
+        if self._get_openstack_release() >= self.trusty_kilo:
+            # Kilo or later
+            pools = [
+                'rbd',
+                'cinder',
+                'glance'
+            ]
+        else:
+            # Juno or earlier
+            pools = [
+                'data',
+                'metadata',
+                'rbd',
+                'cinder',
+                'glance'
+            ]
+
+        if radosgw:
+            pools.extend([
+                '.rgw.root',
+                '.rgw.control',
+                '.rgw',
+                '.rgw.gc',
+                '.users.uid'
+            ])
+
+        return pools

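Review note: get_ceph_expected_pools pairs with get_ceph_pools from the openstack amulet utils sync below, so a deployment-side test can diff the two directly. A sketch, mirroring what test_400 does unit-by-unit:

    expected_pools = self.get_ceph_expected_pools(radosgw=True)
    actual_pools = u.get_ceph_pools(self.ceph0_sentry)
    missing = [p for p in expected_pools if p not in actual_pools]
    if missing:
        amulet.raise_status(amulet.FAIL,
                            msg='missing pools: {}'.format(missing))
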
=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-01-26 11:53:19 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-01 14:47:24 +0000
@@ -14,16 +14,20 @@
 # You should have received a copy of the GNU Lesser General Public License
 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.

+import amulet
+import json
 import logging
 import os
+import six
 import time
 import urllib

+import cinderclient.v1.client as cinder_client
 import glanceclient.v1.client as glance_client
+import heatclient.v1.client as heat_client
 import keystoneclient.v2_0 as keystone_client
 import novaclient.v1_1.client as nova_client
-
-import six
+import swiftclient

 from charmhelpers.contrib.amulet.utils import (
     AmuletUtils
@@ -37,7 +41,7 @@
37 """OpenStack amulet utilities.41 """OpenStack amulet utilities.
3842
39 This class inherits from AmuletUtils and has additional support43 This class inherits from AmuletUtils and has additional support
40 that is specifically for use by OpenStack charms.44 that is specifically for use by OpenStack charm tests.
41 """45 """
4246
43 def __init__(self, log_level=ERROR):47 def __init__(self, log_level=ERROR):
@@ -51,6 +55,8 @@
         Validate actual endpoint data vs expected endpoint data. The ports
         are used to find the matching endpoint.
         """
+        self.log.debug('Validating endpoint data...')
+        self.log.debug('actual: {}'.format(repr(endpoints)))
         found = False
         for ep in endpoints:
             self.log.debug('endpoint: {}'.format(repr(ep)))
@@ -77,6 +83,7 @@
         Validate a list of actual service catalog endpoints vs a list of
         expected service catalog endpoints.
         """
+        self.log.debug('Validating service catalog endpoint data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         for k, v in six.iteritems(expected):
             if k in actual:
@@ -93,6 +100,7 @@
         Validate a list of actual tenant data vs list of expected tenant
         data.
         """
+        self.log.debug('Validating tenant data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         for e in expected:
             found = False
@@ -114,6 +122,7 @@
         Validate a list of actual role data vs a list of expected role
         data.
         """
+        self.log.debug('Validating role data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         for e in expected:
             found = False
@@ -134,6 +143,7 @@
         Validate a list of actual user data vs a list of expected user
         data.
         """
+        self.log.debug('Validating user data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         for e in expected:
             found = False
@@ -155,17 +165,30 @@

         Validate a list of actual flavors vs a list of expected flavors.
         """
+        self.log.debug('Validating flavor data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         act = [a.name for a in actual]
         return self._validate_list_data(expected, act)

     def tenant_exists(self, keystone, tenant):
         """Return True if tenant exists."""
+        self.log.debug('Checking if tenant exists ({})...'.format(tenant))
         return tenant in [t.name for t in keystone.tenants.list()]

+    def authenticate_cinder_admin(self, keystone_sentry, username,
+                                  password, tenant):
+        """Authenticates admin user with cinder."""
+        # NOTE(beisner): cinder python client doesn't accept tokens.
+        service_ip = \
+            keystone_sentry.relation('shared-db',
+                                     'mysql:shared-db')['private-address']
+        ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8'))
+        return cinder_client.Client(username, password, tenant, ept)
+
     def authenticate_keystone_admin(self, keystone_sentry, user, password,
                                     tenant):
         """Authenticates admin user with the keystone admin endpoint."""
+        self.log.debug('Authenticating keystone admin...')
         unit = keystone_sentry
         service_ip = unit.relation('shared-db',
                                    'mysql:shared-db')['private-address']
@@ -175,6 +198,7 @@

     def authenticate_keystone_user(self, keystone, user, password, tenant):
         """Authenticates a regular user with the keystone public endpoint."""
+        self.log.debug('Authenticating keystone user ({})...'.format(user))
         ep = keystone.service_catalog.url_for(service_type='identity',
                                               endpoint_type='publicURL')
         return keystone_client.Client(username=user, password=password,
@@ -182,19 +206,49 @@

     def authenticate_glance_admin(self, keystone):
         """Authenticates admin user with glance."""
+        self.log.debug('Authenticating glance admin...')
         ep = keystone.service_catalog.url_for(service_type='image',
                                               endpoint_type='adminURL')
         return glance_client.Client(ep, token=keystone.auth_token)

+    def authenticate_heat_admin(self, keystone):
+        """Authenticates the admin user with heat."""
+        self.log.debug('Authenticating heat admin...')
+        ep = keystone.service_catalog.url_for(service_type='orchestration',
+                                              endpoint_type='publicURL')
+        return heat_client.Client(endpoint=ep, token=keystone.auth_token)
+
     def authenticate_nova_user(self, keystone, user, password, tenant):
         """Authenticates a regular user with nova-api."""
+        self.log.debug('Authenticating nova user ({})...'.format(user))
         ep = keystone.service_catalog.url_for(service_type='identity',
                                               endpoint_type='publicURL')
         return nova_client.Client(username=user, api_key=password,
                                   project_id=tenant, auth_url=ep)

+    def authenticate_swift_user(self, keystone, user, password, tenant):
+        """Authenticates a regular user with swift api."""
+        self.log.debug('Authenticating swift user ({})...'.format(user))
+        ep = keystone.service_catalog.url_for(service_type='identity',
+                                              endpoint_type='publicURL')
+        return swiftclient.Connection(authurl=ep,
+                                      user=user,
+                                      key=password,
+                                      tenant_name=tenant,
+                                      auth_version='2.0')
+
     def create_cirros_image(self, glance, image_name):
-        """Download the latest cirros image and upload it to glance."""
+        """Download the latest cirros image and upload it to glance,
+        validate and return a resource pointer.
+
+        :param glance: pointer to authenticated glance connection
+        :param image_name: display name for new image
+        :returns: glance image pointer
+        """
+        self.log.debug('Creating glance cirros image '
+                       '({})...'.format(image_name))
+
+        # Download cirros image
         http_proxy = os.getenv('AMULET_HTTP_PROXY')
         self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
         if http_proxy:
@@ -203,57 +257,67 @@
         else:
             opener = urllib.FancyURLopener()

-        f = opener.open("http://download.cirros-cloud.net/version/released")
+        f = opener.open('http://download.cirros-cloud.net/version/released')
         version = f.read().strip()
-        cirros_img = "cirros-{}-x86_64-disk.img".format(version)
+        cirros_img = 'cirros-{}-x86_64-disk.img'.format(version)
         local_path = os.path.join('tests', cirros_img)

         if not os.path.exists(local_path):
-            cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
+            cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net',
                                                   version, cirros_img)
             opener.retrieve(cirros_url, local_path)
         f.close()

+        # Create glance image
         with open(local_path) as f:
             image = glance.images.create(name=image_name, is_public=True,
                                          disk_format='qcow2',
                                          container_format='bare', data=f)
-        count = 1
-        status = image.status
-        while status != 'active' and count < 10:
-            time.sleep(3)
-            image = glance.images.get(image.id)
-            status = image.status
-            self.log.debug('image status: {}'.format(status))
-            count += 1
-
-        if status != 'active':
-            self.log.error('image creation timed out')
-            return None
+
+        # Wait for image to reach active status
+        img_id = image.id
+        ret = self.resource_reaches_status(glance.images, img_id,
+                                           expected_stat='active',
+                                           msg='Image status wait')
+        if not ret:
+            msg = 'Glance image failed to reach expected state.'
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Re-validate new image
+        self.log.debug('Validating image attributes...')
+        val_img_name = glance.images.get(img_id).name
+        val_img_stat = glance.images.get(img_id).status
+        val_img_pub = glance.images.get(img_id).is_public
+        val_img_cfmt = glance.images.get(img_id).container_format
+        val_img_dfmt = glance.images.get(img_id).disk_format
+        msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} '
+                    'container fmt:{} disk fmt:{}'.format(
+                        val_img_name, val_img_pub, img_id,
+                        val_img_stat, val_img_cfmt, val_img_dfmt))
+
+        if val_img_name == image_name and val_img_stat == 'active' \
+                and val_img_pub is True and val_img_cfmt == 'bare' \
+                and val_img_dfmt == 'qcow2':
+            self.log.debug(msg_attr)
+        else:
+            msg = ('Image validation failed, {}'.format(msg_attr))
+            amulet.raise_status(amulet.FAIL, msg=msg)

         return image

     def delete_image(self, glance, image):
         """Delete the specified image."""
-        num_before = len(list(glance.images.list()))
-        glance.images.delete(image)
-
-        count = 1
-        num_after = len(list(glance.images.list()))
-        while num_after != (num_before - 1) and count < 10:
-            time.sleep(3)
-            num_after = len(list(glance.images.list()))
-            self.log.debug('number of images: {}'.format(num_after))
-            count += 1
-
-        if num_after != (num_before - 1):
-            self.log.error('image deletion timed out')
-            return False
-
-        return True
+
+        # /!\ DEPRECATION WARNING
+        self.log.warn('/!\\ DEPRECATION WARNING: use '
+                      'delete_resource instead of delete_image.')
+        self.log.debug('Deleting glance image ({})...'.format(image))
+        return self.delete_resource(glance.images, image, msg='glance image')

     def create_instance(self, nova, image_name, instance_name, flavor):
         """Create the specified instance."""
+        self.log.debug('Creating instance '
+                       '({}|{}|{})'.format(instance_name, image_name, flavor))
         image = nova.images.find(name=image_name)
         flavor = nova.flavors.find(name=flavor)
         instance = nova.servers.create(name=instance_name, image=image,
@@ -276,19 +340,265 @@

     def delete_instance(self, nova, instance):
         """Delete the specified instance."""
-        num_before = len(list(nova.servers.list()))
-        nova.servers.delete(instance)
-
-        count = 1
-        num_after = len(list(nova.servers.list()))
-        while num_after != (num_before - 1) and count < 10:
-            time.sleep(3)
-            num_after = len(list(nova.servers.list()))
-            self.log.debug('number of instances: {}'.format(num_after))
-            count += 1
-
-        if num_after != (num_before - 1):
-            self.log.error('instance deletion timed out')
-            return False
-
-        return True
+
+        # /!\ DEPRECATION WARNING
+        self.log.warn('/!\\ DEPRECATION WARNING: use '
+                      'delete_resource instead of delete_instance.')
+        self.log.debug('Deleting instance ({})...'.format(instance))
+        return self.delete_resource(nova.servers, instance,
+                                    msg='nova instance')
+
+    def create_or_get_keypair(self, nova, keypair_name="testkey"):
+        """Create a new keypair, or return pointer if it already exists."""
+        try:
+            _keypair = nova.keypairs.get(keypair_name)
+            self.log.debug('Keypair ({}) already exists, '
+                           'using it.'.format(keypair_name))
+            return _keypair
+        except:
+            self.log.debug('Keypair ({}) does not exist, '
+                           'creating it.'.format(keypair_name))
+
+        _keypair = nova.keypairs.create(name=keypair_name)
+        return _keypair
+
+    def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
+                             img_id=None, src_vol_id=None, snap_id=None):
+        """Create cinder volume, optionally from a glance image, OR
+        optionally as a clone of an existing volume, OR optionally
+        from a snapshot. Wait for the new volume status to reach
+        the expected status, validate and return a resource pointer.
+
+        :param vol_name: cinder volume display name
+        :param vol_size: size in gigabytes
+        :param img_id: optional glance image id
+        :param src_vol_id: optional source volume id to clone
+        :param snap_id: optional snapshot id to use
+        :returns: cinder volume pointer
+        """
+        # Handle parameter input and avoid impossible combinations
+        if img_id and not src_vol_id and not snap_id:
+            # Create volume from image
+            self.log.debug('Creating cinder volume from glance image...')
+            bootable = 'true'
+        elif src_vol_id and not img_id and not snap_id:
+            # Clone an existing volume
+            self.log.debug('Cloning cinder volume...')
+            bootable = cinder.volumes.get(src_vol_id).bootable
+        elif snap_id and not src_vol_id and not img_id:
+            # Create volume from snapshot
+            self.log.debug('Creating cinder volume from snapshot...')
+            snap = cinder.volume_snapshots.find(id=snap_id)
+            vol_size = snap.size
+            snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
+            bootable = cinder.volumes.get(snap_vol_id).bootable
+        elif not img_id and not src_vol_id and not snap_id:
+            # Create volume
+            self.log.debug('Creating cinder volume...')
+            bootable = 'false'
+        else:
+            # Impossible combination of parameters
+            msg = ('Invalid method use - name:{} size:{} img_id:{} '
+                   'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
+                                                     img_id, src_vol_id,
+                                                     snap_id))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Create new volume
+        try:
+            vol_new = cinder.volumes.create(display_name=vol_name,
+                                            imageRef=img_id,
+                                            size=vol_size,
+                                            source_volid=src_vol_id,
+                                            snapshot_id=snap_id)
+            vol_id = vol_new.id
+        except Exception as e:
+            msg = 'Failed to create volume: {}'.format(e)
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Wait for volume to reach available status
+        ret = self.resource_reaches_status(cinder.volumes, vol_id,
+                                           expected_stat="available",
+                                           msg="Volume status wait")
+        if not ret:
+            msg = 'Cinder volume failed to reach expected state.'
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Re-validate new volume
+        self.log.debug('Validating volume attributes...')
+        val_vol_name = cinder.volumes.get(vol_id).display_name
+        val_vol_boot = cinder.volumes.get(vol_id).bootable
+        val_vol_stat = cinder.volumes.get(vol_id).status
+        val_vol_size = cinder.volumes.get(vol_id).size
+        msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
+                    '{} size:{}'.format(val_vol_name, vol_id,
+                                        val_vol_stat, val_vol_boot,
+                                        val_vol_size))
+
+        if val_vol_boot == bootable and val_vol_stat == 'available' \
+                and val_vol_name == vol_name and val_vol_size == vol_size:
+            self.log.debug(msg_attr)
+        else:
+            msg = ('Volume validation failed, {}'.format(msg_attr))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        return vol_new
+
+    def delete_resource(self, resource, resource_id,
+                        msg="resource", max_wait=120):
+        """Delete one openstack resource, such as one instance, keypair,
+        image, volume, stack, etc., and confirm deletion within max wait time.
+
+        :param resource: pointer to os resource type, ex:glance_client.images
+        :param resource_id: unique name or id for the openstack resource
+        :param msg: text to identify purpose in logging
+        :param max_wait: maximum wait time in seconds
+        :returns: True if successful, otherwise False
+        """
+        self.log.debug('Deleting OpenStack resource '
+                       '{} ({})'.format(resource_id, msg))
+        num_before = len(list(resource.list()))
+        resource.delete(resource_id)
+
+        tries = 0
+        num_after = len(list(resource.list()))
+        while num_after != (num_before - 1) and tries < (max_wait / 4):
+            self.log.debug('{} delete check: '
+                           '{} [{}:{}] {}'.format(msg, tries,
+                                                  num_before,
+                                                  num_after,
+                                                  resource_id))
+            time.sleep(4)
+            num_after = len(list(resource.list()))
+            tries += 1
+
+        self.log.debug('{}: expected, actual count = {}, '
+                       '{}'.format(msg, num_before - 1, num_after))
+
+        if num_after == (num_before - 1):
+            return True
+        else:
+            self.log.error('{} delete timed out'.format(msg))
+            return False
+
+    def resource_reaches_status(self, resource, resource_id,
+                                expected_stat='available',
+                                msg='resource', max_wait=120):
+        """Wait for an openstack resources status to reach an
+        expected status within a specified time. Useful to confirm that
+        nova instances, cinder vols, snapshots, glance images, heat stacks
+        and other resources eventually reach the expected status.
+
+        :param resource: pointer to os resource type, ex: heat_client.stacks
+        :param resource_id: unique id for the openstack resource
+        :param expected_stat: status to expect resource to reach
+        :param msg: text to identify purpose in logging
+        :param max_wait: maximum wait time in seconds
+        :returns: True if successful, False if status is not reached
+        """
+
+        tries = 0
+        resource_stat = resource.get(resource_id).status
+        while resource_stat != expected_stat and tries < (max_wait / 4):
+            self.log.debug('{} status check: '
+                           '{} [{}:{}] {}'.format(msg, tries,
+                                                  resource_stat,
+                                                  expected_stat,
+                                                  resource_id))
+            time.sleep(4)
+            resource_stat = resource.get(resource_id).status
+            tries += 1
+
+        self.log.debug('{}: expected, actual status = {}, '
+                       '{}'.format(msg, resource_stat, expected_stat))
+
+        if resource_stat == expected_stat:
+            return True
+        else:
+            self.log.debug('{} never reached expected status: '
+                           '{}'.format(resource_id, expected_stat))
+            return False
+
+    def get_ceph_osd_id_cmd(self, index):
+        """Produce a shell command that will return a ceph-osd id."""
+        return ("`initctl list | grep 'ceph-osd ' | "
+                "awk 'NR=={} {{ print $2 }}' | "
+                "grep -o '[0-9]*'`".format(index + 1))
+
+    def get_ceph_pools(self, sentry_unit):
+        """Return a dict of ceph pools from a single ceph unit, with
+        pool name as keys, pool id as vals."""
+        pools = {}
+        cmd = 'sudo ceph osd lspools'
+        output, code = sentry_unit.run(cmd)
+        if code != 0:
+            msg = ('{} `{}` returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, code, output))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
+        for pool in str(output).split(','):
+            pool_id_name = pool.split(' ')
+            if len(pool_id_name) == 2:
+                pool_id = pool_id_name[0]
+                pool_name = pool_id_name[1]
+                pools[pool_name] = int(pool_id)
+
+        self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
+                                                pools))
+        return pools
+
+    def get_ceph_df(self, sentry_unit):
+        """Return dict of ceph df json output, including ceph pool state.
+
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
+        :returns: Dict of ceph df output
+        """
+        cmd = 'sudo ceph df --format=json'
+        output, code = sentry_unit.run(cmd)
+        if code != 0:
+            msg = ('{} `{}` returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, code, output))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+        return json.loads(output)
+
+    def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
+        """Take a sample of attributes of a ceph pool, returning ceph
+        pool name, object count and disk space used for the specified
+        pool ID number.
+
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
+        :param pool_id: Ceph pool ID
+        :returns: List of pool name, object count, kb disk space used
+        """
+        df = self.get_ceph_df(sentry_unit)
+        pool_name = df['pools'][pool_id]['name']
+        obj_count = df['pools'][pool_id]['stats']['objects']
+        kb_used = df['pools'][pool_id]['stats']['kb_used']
+        self.log.debug('Ceph {} pool (ID {}): {} objects, '
+                       '{} kb used'.format(pool_name, pool_id,
+                                           obj_count, kb_used))
+        return pool_name, obj_count, kb_used
+
+    def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
+        """Validate ceph pool samples taken over time, such as pool
+        object counts or pool kb used, before adding, after adding, and
+        after deleting items which affect those pool attributes. The
+        2nd element is expected to be greater than the 1st; 3rd is expected
+        to be less than the 2nd.
+
+        :param samples: List containing 3 data samples
+        :param sample_type: String for logging and usage context
+        :returns: None if successful, Failure message otherwise
+        """
+        original, created, deleted = range(3)
+        if samples[created] <= samples[original] or \
+                samples[deleted] >= samples[created]:
+            return ('Ceph {} samples ({}) '
+                    'unexpected.'.format(sample_type, samples))
+        else:
+            self.log.debug('Ceph {} samples (OK): '
+                           '{}'.format(sample_type, samples))
+            return None

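Review note: validate_ceph_pool_samples expects exactly three samples bracketing a create/delete cycle, so the intended call pattern is worth spelling out. A sketch; the cinder pool id and client variable are illustrative, with cinder coming from authenticate_cinder_admin above:

    sentry = self.ceph0_sentry
    obj_samples = []

    # Sample 1: pool object count before the operation
    _, obj_count, _ = u.get_ceph_pool_sample(sentry, pool_id=3)
    obj_samples.append(obj_count)

    # Sample 2: after creating a cinder volume backed by the pool
    vol = u.create_cinder_volume(cinder)
    _, obj_count, _ = u.get_ceph_pool_sample(sentry, pool_id=3)
    obj_samples.append(obj_count)

    # Sample 3: after deleting the volume again
    u.delete_resource(cinder.volumes, vol.id, msg='cinder volume')
    _, obj_count, _ = u.get_ceph_pool_sample(sentry, pool_id=3)
    obj_samples.append(obj_count)

    ret = u.validate_ceph_pool_samples(obj_samples, 'cinder pool object count')
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)
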
=== added file 'tests/tests.yaml'
--- tests/tests.yaml 1970-01-01 00:00:00 +0000
+++ tests/tests.yaml 2015-07-01 14:47:24 +0000
@@ -0,0 +1,18 @@
+bootstrap: true
+reset: true
+virtualenv: true
+makefile:
+    - lint
+    - test
+sources:
+    - ppa:juju/stable
+packages:
+    - amulet
+    - python-amulet
+    - python-cinderclient
+    - python-distro-info
+    - python-glanceclient
+    - python-heatclient
+    - python-keystoneclient
+    - python-novaclient
+    - python-swiftclient

Subscribers

People subscribed via source and target branches