Merge lp:~1chb1n/charms/trusty/ceph-radosgw/next-amulet-update into lp:~openstack-charmers-archive/charms/trusty/ceph-radosgw/next
Status: Merged
Merged at revision: 40
Proposed branch: lp:~1chb1n/charms/trusty/ceph-radosgw/next-amulet-update
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph-radosgw/next
Diff against target: 2225 lines (+1213/-196), 18 files modified

- Makefile (+7/-7)
- hooks/charmhelpers/contrib/hahelpers/cluster.py (+24/-5)
- hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+6/-2)
- hooks/charmhelpers/contrib/openstack/amulet/utils.py (+122/-3)
- hooks/charmhelpers/contrib/openstack/context.py (+1/-1)
- hooks/charmhelpers/contrib/openstack/neutron.py (+6/-4)
- hooks/charmhelpers/contrib/openstack/utils.py (+21/-8)
- hooks/charmhelpers/contrib/python/packages.py (+2/-0)
- hooks/charmhelpers/core/hookenv.py (+92/-36)
- hooks/charmhelpers/core/host.py (+24/-6)
- hooks/charmhelpers/core/services/base.py (+12/-9)
- metadata.yaml (+4/-1)
- tests/00-setup (+6/-2)
- tests/basic_deployment.py (+246/-47)
- tests/charmhelpers/contrib/amulet/utils.py (+219/-9)
- tests/charmhelpers/contrib/openstack/amulet/deployment.py (+42/-5)
- tests/charmhelpers/contrib/openstack/amulet/utils.py (+361/-51)
- tests/tests.yaml (+18/-0)

To merge this branch: bzr merge lp:~1chb1n/charms/trusty/ceph-radosgw/next-amulet-update
Related bugs:

| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Corey Bryant (community) | Approve | | |

Review via email: mp+262599@code.launchpad.net
Commit message
amulet tests - update test coverage, enable vivid, prep for wily, add basic functional checks
sync tests/charmhelpers
sync hooks/charmhelpers
Description of the change
amulet tests - update test coverage, enable vivid, prep for wily, add basic functional checks
sync tests/charmhelpers
sync hooks/charmhelpers
uosci-testing-bot (uosci-testing-bot) wrote:

charm_unit_test #5171 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed

uosci-testing-bot (uosci-testing-bot) wrote:

charm_amulet_test #4750 ceph-radosgw-next for 1chb1n mp262599
AMULET OK: passed
Build: http://

uosci-testing-bot (uosci-testing-bot) wrote:

charm_lint_check #5542 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed

uosci-testing-bot (uosci-testing-bot) wrote:

charm_unit_test #5174 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed

uosci-testing-bot (uosci-testing-bot) wrote:

charm_amulet_test #4753 ceph-radosgw-next for 1chb1n mp262599
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote:

FYI, an undercloud issue caused the test failure in #4753.

uosci-testing-bot (uosci-testing-bot) wrote:

charm_amulet_test #4780 ceph-radosgw-next for 1chb1n mp262599
AMULET OK: passed
Build: http://

uosci-testing-bot (uosci-testing-bot) wrote:

charm_lint_check #5607 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed

uosci-testing-bot (uosci-testing-bot) wrote:

charm_unit_test #5239 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed

uosci-testing-bot (uosci-testing-bot) wrote:

charm_amulet_test #4783 ceph-radosgw-next for 1chb1n mp262599
AMULET OK: passed
Build: http://

Ryan Beisner (1chb1n) wrote:

Flipped back to WIP while the tests/charmhelpers work is in progress. Everything else here is ready for review and input.
uosci-testing-bot (uosci-testing-bot) wrote:

charm_lint_check #5674 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed

uosci-testing-bot (uosci-testing-bot) wrote:

charm_unit_test #5306 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed

uosci-testing-bot (uosci-testing-bot) wrote:

charm_amulet_test #4857 ceph-radosgw-next for 1chb1n mp262599
AMULET OK: passed
Build: http://

uosci-testing-bot (uosci-testing-bot) wrote:

charm_lint_check #5682 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed

uosci-testing-bot (uosci-testing-bot) wrote:

charm_unit_test #5314 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed

48. By Ryan Beisner

Update publish target in makefile; update 00-setup and tests.yaml for dependencies.
uosci-testing-bot (uosci-testing-bot) wrote:

charm_lint_check #5686 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed

uosci-testing-bot (uosci-testing-bot) wrote:

charm_unit_test #5318 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed

uosci-testing-bot (uosci-testing-bot) wrote:

charm_amulet_test #4869 ceph-radosgw-next for 1chb1n mp262599
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://

49. By Ryan Beisner

fix 00-setup

uosci-testing-bot (uosci-testing-bot) wrote:

charm_lint_check #5690 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed

uosci-testing-bot (uosci-testing-bot) wrote:

charm_unit_test #5322 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed

50. By Ryan Beisner

update test
uosci-testing-bot (uosci-testing-bot) wrote:

charm_lint_check #5695 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed

uosci-testing-bot (uosci-testing-bot) wrote:

charm_unit_test #5327 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed

Corey Bryant (corey.bryant) wrote:

Looks good. I'll approve once the corresponding charm-helpers change lands and these amulet tests are successful.

uosci-testing-bot (uosci-testing-bot) wrote:

charm_amulet_test #4873 ceph-radosgw-next for 1chb1n mp262599
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://

uosci-testing-bot (uosci-testing-bot) wrote:

charm_amulet_test #4878 ceph-radosgw-next for 1chb1n mp262599
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
Timeout occurred (2700s), printing juju status.
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://

uosci-testing-bot (uosci-testing-bot) wrote:

charm_amulet_test #4883 ceph-radosgw-next for 1chb1n mp262599
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://

Ryan Beisner (1chb1n) wrote:

A test rig issue is causing failures in bootstrapping; will re-test when that's resolved.

51. By Ryan Beisner

update tags for consistency with other openstack charms

uosci-testing-bot (uosci-testing-bot) wrote:

charm_lint_check #5700 ceph-radosgw-next for 1chb1n mp262599
LINT OK: passed

uosci-testing-bot (uosci-testing-bot) wrote:

charm_unit_test #5332 ceph-radosgw-next for 1chb1n mp262599
UNIT OK: passed

uosci-testing-bot (uosci-testing-bot) wrote:

charm_amulet_test #4888 ceph-radosgw-next for 1chb1n mp262599
AMULET OK: passed
Build: http://

Corey Bryant (corey.bryant):
Preview Diff
=== modified file 'Makefile'
--- Makefile 2015-04-16 21:32:01 +0000
+++ Makefile 2015-07-01 14:47:24 +0000
@@ -2,17 +2,17 @@
 PYTHON := /usr/bin/env python
 
 lint:
-    @flake8 --exclude hooks/charmhelpers hooks tests unit_tests
+    @flake8 --exclude hooks/charmhelpers,tests/charmhelpers \
+        hooks tests unit_tests
     @charm proof
 
-unit_test:
+test:
+    @# Bundletester expects unit tests here.
+    @echo Starting unit tests...
     @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
 
-test:
+functional_test:
     @echo Starting Amulet tests...
-    # coreycb note: The -v should only be temporary until Amulet sends
-    # raise_status() messages to stderr:
-    #     https://bugs.launchpad.net/amulet/+bug/1320357
     @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700
 
 bin/charm_helpers_sync.py:
@@ -24,6 +24,6 @@
     @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
     @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
 
-publish: lint
+publish: lint test
     bzr push lp:charms/ceph-radosgw
     bzr push lp:charms/trusty/ceph-radosgw
 
=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-06-04 23:06:40 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-07-01 14:47:24 +0000
@@ -44,6 +44,7 @@
     ERROR,
     WARNING,
     unit_get,
+    is_leader as juju_is_leader
 )
 from charmhelpers.core.decorators import (
     retry_on_exception,
@@ -63,17 +64,30 @@
     pass
 
 
+class CRMDCNotFound(Exception):
+    pass
+
+
 def is_elected_leader(resource):
     """
     Returns True if the charm executing this is the elected cluster leader.
 
    It relies on two mechanisms to determine leadership:
-        1. If the charm is part of a corosync cluster, call corosync to
+        1. If juju is sufficiently new and leadership election is supported,
+           the is_leader command will be used.
+        2. If the charm is part of a corosync cluster, call corosync to
            determine leadership.
-        2. If the charm is not part of a corosync cluster, the leader is
+        3. If the charm is not part of a corosync cluster, the leader is
            determined as being "the alive unit with the lowest unit numer". In
            other words, the oldest surviving unit.
     """
+    try:
+        return juju_is_leader()
+    except NotImplementedError:
+        log('Juju leadership election feature not enabled'
+            ', using fallback support',
+            level=WARNING)
+
     if is_clustered():
         if not is_crm_leader(resource):
             log('Deferring action to CRM leader.', level=INFO)
@@ -106,8 +120,9 @@
         status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
         if not isinstance(status, six.text_type):
             status = six.text_type(status, "utf-8")
-    except subprocess.CalledProcessError:
-        return False
+    except subprocess.CalledProcessError as ex:
+        raise CRMDCNotFound(str(ex))
+
     current_dc = ''
     for line in status.split('\n'):
         if line.startswith('Current DC'):
@@ -115,10 +130,14 @@
             current_dc = line.split(':')[1].split()[0]
     if current_dc == get_unit_hostname():
         return True
+    elif current_dc == 'NONE':
+        raise CRMDCNotFound('Current DC: NONE')
+
     return False
 
 
-@retry_on_exception(5, base_delay=2, exc_type=CRMResourceNotFound)
+@retry_on_exception(5, base_delay=2,
+                    exc_type=(CRMResourceNotFound, CRMDCNotFound))
 def is_crm_leader(resource, retry=False):
     """
     Returns True if the charm calling this is the elected corosync leader,
 
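The `is_elected_leader` change above follows a try-the-new-API-then-fall-back pattern: prefer juju's native leadership election and drop back to the legacy mechanisms only when the feature raises `NotImplementedError`. A minimal standalone sketch of that pattern (the `juju_is_leader` and `lowest_unit_is_leader` stubs here are stand-ins for the real charmhelpers calls, not the actual implementations):

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger(__name__)


def juju_is_leader():
    # Stand-in for charmhelpers' is_leader(); on older juju versions
    # without leadership election this raises NotImplementedError.
    raise NotImplementedError


def lowest_unit_is_leader():
    # Stand-in for the legacy fallback ("oldest surviving unit wins").
    return True


def is_elected_leader():
    # Prefer native leadership election; fall back when unsupported.
    try:
        return juju_is_leader()
    except NotImplementedError:
        log.warning('Juju leadership election not enabled, '
                    'using fallback support')
    return lowest_unit_is_leader()
```

On a juju new enough to support leadership, the `try` branch returns immediately and the corosync/lowest-unit logic is never consulted.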
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-04 23:06:40 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-01 14:47:24 +0000
@@ -110,7 +110,8 @@
         (self.precise_essex, self.precise_folsom, self.precise_grizzly,
          self.precise_havana, self.precise_icehouse,
          self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
-         self.trusty_kilo, self.vivid_kilo) = range(10)
+         self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
+         self.wily_liberty) = range(12)
 
         releases = {
             ('precise', None): self.precise_essex,
@@ -121,8 +122,10 @@
             ('trusty', None): self.trusty_icehouse,
             ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
             ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
+            ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
             ('utopic', None): self.utopic_juno,
-            ('vivid', None): self.vivid_kilo}
+            ('vivid', None): self.vivid_kilo,
+            ('wily', None): self.wily_liberty}
         return releases[(self.series, self.openstack)]
 
     def _get_openstack_release_string(self):
@@ -138,6 +141,7 @@
             ('trusty', 'icehouse'),
             ('utopic', 'juno'),
             ('vivid', 'kilo'),
+            ('wily', 'liberty'),
         ])
         if self.openstack:
             os_origin = self.openstack.split(':')[1]
 
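The release enumeration works by mapping each (series, openstack-origin) pair to an ordinal, so tests can compare releases with `<`/`>=`. A hedged, abbreviated sketch of the same lookup pattern (only four of the releases shown; names mirror the diff but the helper function is illustrative):

```python
# Ordinals let callers write comparisons like release >= trusty_liberty.
(trusty_kilo, vivid_kilo, trusty_liberty, wily_liberty) = range(4)

releases = {
    ('trusty', 'cloud:trusty-kilo'): trusty_kilo,
    ('trusty', 'cloud:trusty-liberty'): trusty_liberty,
    ('vivid', None): vivid_kilo,
    ('wily', None): wily_liberty,
}


def get_release(series, openstack=None):
    # An unsupported combination raises KeyError, surfacing the
    # misconfiguration early instead of running a bogus test matrix.
    return releases[(series, openstack)]
```

This is why adding trusty-liberty and wily required extending `range(10)` to `range(12)`: every new name in the tuple must have a matching index.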
144 | === modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py' | |||
145 | --- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-01-26 11:53:19 +0000 | |||
146 | +++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-01 14:47:24 +0000 | |||
147 | @@ -16,15 +16,15 @@ | |||
148 | 16 | 16 | ||
149 | 17 | import logging | 17 | import logging |
150 | 18 | import os | 18 | import os |
151 | 19 | import six | ||
152 | 19 | import time | 20 | import time |
153 | 20 | import urllib | 21 | import urllib |
154 | 21 | 22 | ||
155 | 22 | import glanceclient.v1.client as glance_client | 23 | import glanceclient.v1.client as glance_client |
156 | 24 | import heatclient.v1.client as heat_client | ||
157 | 23 | import keystoneclient.v2_0 as keystone_client | 25 | import keystoneclient.v2_0 as keystone_client |
158 | 24 | import novaclient.v1_1.client as nova_client | 26 | import novaclient.v1_1.client as nova_client |
159 | 25 | 27 | ||
160 | 26 | import six | ||
161 | 27 | |||
162 | 28 | from charmhelpers.contrib.amulet.utils import ( | 28 | from charmhelpers.contrib.amulet.utils import ( |
163 | 29 | AmuletUtils | 29 | AmuletUtils |
164 | 30 | ) | 30 | ) |
165 | @@ -37,7 +37,7 @@ | |||
166 | 37 | """OpenStack amulet utilities. | 37 | """OpenStack amulet utilities. |
167 | 38 | 38 | ||
168 | 39 | This class inherits from AmuletUtils and has additional support | 39 | This class inherits from AmuletUtils and has additional support |
170 | 40 | that is specifically for use by OpenStack charms. | 40 | that is specifically for use by OpenStack charm tests. |
171 | 41 | """ | 41 | """ |
172 | 42 | 42 | ||
173 | 43 | def __init__(self, log_level=ERROR): | 43 | def __init__(self, log_level=ERROR): |
174 | @@ -51,6 +51,8 @@ | |||
175 | 51 | Validate actual endpoint data vs expected endpoint data. The ports | 51 | Validate actual endpoint data vs expected endpoint data. The ports |
176 | 52 | are used to find the matching endpoint. | 52 | are used to find the matching endpoint. |
177 | 53 | """ | 53 | """ |
178 | 54 | self.log.debug('Validating endpoint data...') | ||
179 | 55 | self.log.debug('actual: {}'.format(repr(endpoints))) | ||
180 | 54 | found = False | 56 | found = False |
181 | 55 | for ep in endpoints: | 57 | for ep in endpoints: |
182 | 56 | self.log.debug('endpoint: {}'.format(repr(ep))) | 58 | self.log.debug('endpoint: {}'.format(repr(ep))) |
183 | @@ -77,6 +79,7 @@ | |||
184 | 77 | Validate a list of actual service catalog endpoints vs a list of | 79 | Validate a list of actual service catalog endpoints vs a list of |
185 | 78 | expected service catalog endpoints. | 80 | expected service catalog endpoints. |
186 | 79 | """ | 81 | """ |
187 | 82 | self.log.debug('Validating service catalog endpoint data...') | ||
188 | 80 | self.log.debug('actual: {}'.format(repr(actual))) | 83 | self.log.debug('actual: {}'.format(repr(actual))) |
189 | 81 | for k, v in six.iteritems(expected): | 84 | for k, v in six.iteritems(expected): |
190 | 82 | if k in actual: | 85 | if k in actual: |
191 | @@ -93,6 +96,7 @@ | |||
192 | 93 | Validate a list of actual tenant data vs list of expected tenant | 96 | Validate a list of actual tenant data vs list of expected tenant |
193 | 94 | data. | 97 | data. |
194 | 95 | """ | 98 | """ |
195 | 99 | self.log.debug('Validating tenant data...') | ||
196 | 96 | self.log.debug('actual: {}'.format(repr(actual))) | 100 | self.log.debug('actual: {}'.format(repr(actual))) |
197 | 97 | for e in expected: | 101 | for e in expected: |
198 | 98 | found = False | 102 | found = False |
199 | @@ -114,6 +118,7 @@ | |||
200 | 114 | Validate a list of actual role data vs a list of expected role | 118 | Validate a list of actual role data vs a list of expected role |
201 | 115 | data. | 119 | data. |
202 | 116 | """ | 120 | """ |
203 | 121 | self.log.debug('Validating role data...') | ||
204 | 117 | self.log.debug('actual: {}'.format(repr(actual))) | 122 | self.log.debug('actual: {}'.format(repr(actual))) |
205 | 118 | for e in expected: | 123 | for e in expected: |
206 | 119 | found = False | 124 | found = False |
207 | @@ -134,6 +139,7 @@ | |||
208 | 134 | Validate a list of actual user data vs a list of expected user | 139 | Validate a list of actual user data vs a list of expected user |
209 | 135 | data. | 140 | data. |
210 | 136 | """ | 141 | """ |
211 | 142 | self.log.debug('Validating user data...') | ||
212 | 137 | self.log.debug('actual: {}'.format(repr(actual))) | 143 | self.log.debug('actual: {}'.format(repr(actual))) |
213 | 138 | for e in expected: | 144 | for e in expected: |
214 | 139 | found = False | 145 | found = False |
215 | @@ -155,17 +161,20 @@ | |||
216 | 155 | 161 | ||
217 | 156 | Validate a list of actual flavors vs a list of expected flavors. | 162 | Validate a list of actual flavors vs a list of expected flavors. |
218 | 157 | """ | 163 | """ |
219 | 164 | self.log.debug('Validating flavor data...') | ||
220 | 158 | self.log.debug('actual: {}'.format(repr(actual))) | 165 | self.log.debug('actual: {}'.format(repr(actual))) |
221 | 159 | act = [a.name for a in actual] | 166 | act = [a.name for a in actual] |
222 | 160 | return self._validate_list_data(expected, act) | 167 | return self._validate_list_data(expected, act) |
223 | 161 | 168 | ||
224 | 162 | def tenant_exists(self, keystone, tenant): | 169 | def tenant_exists(self, keystone, tenant): |
225 | 163 | """Return True if tenant exists.""" | 170 | """Return True if tenant exists.""" |
226 | 171 | self.log.debug('Checking if tenant exists ({})...'.format(tenant)) | ||
227 | 164 | return tenant in [t.name for t in keystone.tenants.list()] | 172 | return tenant in [t.name for t in keystone.tenants.list()] |
228 | 165 | 173 | ||
229 | 166 | def authenticate_keystone_admin(self, keystone_sentry, user, password, | 174 | def authenticate_keystone_admin(self, keystone_sentry, user, password, |
230 | 167 | tenant): | 175 | tenant): |
231 | 168 | """Authenticates admin user with the keystone admin endpoint.""" | 176 | """Authenticates admin user with the keystone admin endpoint.""" |
232 | 177 | self.log.debug('Authenticating keystone admin...') | ||
233 | 169 | unit = keystone_sentry | 178 | unit = keystone_sentry |
234 | 170 | service_ip = unit.relation('shared-db', | 179 | service_ip = unit.relation('shared-db', |
235 | 171 | 'mysql:shared-db')['private-address'] | 180 | 'mysql:shared-db')['private-address'] |
236 | @@ -175,6 +184,7 @@ | |||
237 | 175 | 184 | ||
238 | 176 | def authenticate_keystone_user(self, keystone, user, password, tenant): | 185 | def authenticate_keystone_user(self, keystone, user, password, tenant): |
239 | 177 | """Authenticates a regular user with the keystone public endpoint.""" | 186 | """Authenticates a regular user with the keystone public endpoint.""" |
240 | 187 | self.log.debug('Authenticating keystone user ({})...'.format(user)) | ||
241 | 178 | ep = keystone.service_catalog.url_for(service_type='identity', | 188 | ep = keystone.service_catalog.url_for(service_type='identity', |
242 | 179 | endpoint_type='publicURL') | 189 | endpoint_type='publicURL') |
243 | 180 | return keystone_client.Client(username=user, password=password, | 190 | return keystone_client.Client(username=user, password=password, |
244 | @@ -182,12 +192,21 @@ | |||
245 | 182 | 192 | ||
246 | 183 | def authenticate_glance_admin(self, keystone): | 193 | def authenticate_glance_admin(self, keystone): |
247 | 184 | """Authenticates admin user with glance.""" | 194 | """Authenticates admin user with glance.""" |
248 | 195 | self.log.debug('Authenticating glance admin...') | ||
249 | 185 | ep = keystone.service_catalog.url_for(service_type='image', | 196 | ep = keystone.service_catalog.url_for(service_type='image', |
250 | 186 | endpoint_type='adminURL') | 197 | endpoint_type='adminURL') |
251 | 187 | return glance_client.Client(ep, token=keystone.auth_token) | 198 | return glance_client.Client(ep, token=keystone.auth_token) |
252 | 188 | 199 | ||
253 | 200 | def authenticate_heat_admin(self, keystone): | ||
254 | 201 | """Authenticates the admin user with heat.""" | ||
255 | 202 | self.log.debug('Authenticating heat admin...') | ||
256 | 203 | ep = keystone.service_catalog.url_for(service_type='orchestration', | ||
257 | 204 | endpoint_type='publicURL') | ||
258 | 205 | return heat_client.Client(endpoint=ep, token=keystone.auth_token) | ||
259 | 206 | |||
260 | 189 | def authenticate_nova_user(self, keystone, user, password, tenant): | 207 | def authenticate_nova_user(self, keystone, user, password, tenant): |
261 | 190 | """Authenticates a regular user with nova-api.""" | 208 | """Authenticates a regular user with nova-api.""" |
262 | 209 | self.log.debug('Authenticating nova user ({})...'.format(user)) | ||
263 | 191 | ep = keystone.service_catalog.url_for(service_type='identity', | 210 | ep = keystone.service_catalog.url_for(service_type='identity', |
264 | 192 | endpoint_type='publicURL') | 211 | endpoint_type='publicURL') |
265 | 193 | return nova_client.Client(username=user, api_key=password, | 212 | return nova_client.Client(username=user, api_key=password, |
266 | @@ -195,6 +214,7 @@ | |||
267 | 195 | 214 | ||
268 | 196 | def create_cirros_image(self, glance, image_name): | 215 | def create_cirros_image(self, glance, image_name): |
269 | 197 | """Download the latest cirros image and upload it to glance.""" | 216 | """Download the latest cirros image and upload it to glance.""" |
270 | 217 | self.log.debug('Creating glance image ({})...'.format(image_name)) | ||
271 | 198 | http_proxy = os.getenv('AMULET_HTTP_PROXY') | 218 | http_proxy = os.getenv('AMULET_HTTP_PROXY') |
272 | 199 | self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy)) | 219 | self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy)) |
273 | 200 | if http_proxy: | 220 | if http_proxy: |
274 | @@ -235,6 +255,11 @@ | |||
275 | 235 | 255 | ||
276 | 236 | def delete_image(self, glance, image): | 256 | def delete_image(self, glance, image): |
277 | 237 | """Delete the specified image.""" | 257 | """Delete the specified image.""" |
278 | 258 | |||
279 | 259 | # /!\ DEPRECATION WARNING | ||
280 | 260 | self.log.warn('/!\\ DEPRECATION WARNING: use ' | ||
281 | 261 | 'delete_resource instead of delete_image.') | ||
282 | 262 | self.log.debug('Deleting glance image ({})...'.format(image)) | ||
283 | 238 | num_before = len(list(glance.images.list())) | 263 | num_before = len(list(glance.images.list())) |
284 | 239 | glance.images.delete(image) | 264 | glance.images.delete(image) |
285 | 240 | 265 | ||
286 | @@ -254,6 +279,8 @@ | |||
287 | 254 | 279 | ||
288 | 255 | def create_instance(self, nova, image_name, instance_name, flavor): | 280 | def create_instance(self, nova, image_name, instance_name, flavor): |
289 | 256 | """Create the specified instance.""" | 281 | """Create the specified instance.""" |
290 | 282 | self.log.debug('Creating instance ' | ||
291 | 283 | '({}|{}|{})'.format(instance_name, image_name, flavor)) | ||
292 | 257 | image = nova.images.find(name=image_name) | 284 | image = nova.images.find(name=image_name) |
293 | 258 | flavor = nova.flavors.find(name=flavor) | 285 | flavor = nova.flavors.find(name=flavor) |
294 | 259 | instance = nova.servers.create(name=instance_name, image=image, | 286 | instance = nova.servers.create(name=instance_name, image=image, |
295 | @@ -276,6 +303,11 @@ | |||
296 | 276 | 303 | ||
297 | 277 | def delete_instance(self, nova, instance): | 304 | def delete_instance(self, nova, instance): |
298 | 278 | """Delete the specified instance.""" | 305 | """Delete the specified instance.""" |
299 | 306 | |||
300 | 307 | # /!\ DEPRECATION WARNING | ||
301 | 308 | self.log.warn('/!\\ DEPRECATION WARNING: use ' | ||
302 | 309 | 'delete_resource instead of delete_instance.') | ||
303 | 310 | self.log.debug('Deleting instance ({})...'.format(instance)) | ||
304 | 279 | num_before = len(list(nova.servers.list())) | 311 | num_before = len(list(nova.servers.list())) |
305 | 280 | nova.servers.delete(instance) | 312 | nova.servers.delete(instance) |
306 | 281 | 313 | ||
307 | @@ -292,3 +324,90 @@ | |||
308 | 292 | return False | 324 | return False |
309 | 293 | 325 | ||
310 | 294 | return True | 326 | return True |
311 | 327 | |||
312 | 328 | def create_or_get_keypair(self, nova, keypair_name="testkey"): | ||
313 | 329 | """Create a new keypair, or return pointer if it already exists.""" | ||
314 | 330 | try: | ||
315 | 331 | _keypair = nova.keypairs.get(keypair_name) | ||
316 | 332 | self.log.debug('Keypair ({}) already exists, ' | ||
317 | 333 | 'using it.'.format(keypair_name)) | ||
318 | 334 | return _keypair | ||
319 | 335 | except: | ||
320 | 336 | self.log.debug('Keypair ({}) does not exist, ' | ||
321 | 337 | 'creating it.'.format(keypair_name)) | ||
322 | 338 | |||
323 | 339 | _keypair = nova.keypairs.create(name=keypair_name) | ||
324 | 340 | return _keypair | ||
325 | 341 | |||
326 | 342 | def delete_resource(self, resource, resource_id, | ||
327 | 343 | msg="resource", max_wait=120): | ||
328 | 344 | """Delete one openstack resource, such as one instance, keypair, | ||
329 | 345 | image, volume, stack, etc., and confirm deletion within max wait time. | ||
330 | 346 | |||
331 | 347 | :param resource: pointer to os resource type, ex:glance_client.images | ||
332 | 348 | :param resource_id: unique name or id for the openstack resource | ||
333 | 349 | :param msg: text to identify purpose in logging | ||
334 | 350 | :param max_wait: maximum wait time in seconds | ||
335 | 351 | :returns: True if successful, otherwise False | ||
336 | 352 | """ | ||
337 | 353 | num_before = len(list(resource.list())) | ||
338 | 354 | resource.delete(resource_id) | ||
339 | 355 | |||
340 | 356 | tries = 0 | ||
341 | 357 | num_after = len(list(resource.list())) | ||
342 | 358 | while num_after != (num_before - 1) and tries < (max_wait / 4): | ||
343 | 359 | self.log.debug('{} delete check: ' | ||
344 | 360 | '{} [{}:{}] {}'.format(msg, tries, | ||
345 | 361 | num_before, | ||
346 | 362 | num_after, | ||
347 | 363 | resource_id)) | ||
348 | 364 | time.sleep(4) | ||
349 | 365 | num_after = len(list(resource.list())) | ||
350 | 366 | tries += 1 | ||
351 | 367 | |||
352 | 368 | self.log.debug('{}: expected, actual count = {}, ' | ||
353 | 369 | '{}'.format(msg, num_before - 1, num_after)) | ||
354 | 370 | |||
355 | 371 | if num_after == (num_before - 1): | ||
356 | 372 | return True | ||
357 | 373 | else: | ||
358 | 374 | self.log.error('{} delete timed out'.format(msg)) | ||
359 | 375 | return False | ||
360 | 376 | |||
361 | 377 | def resource_reaches_status(self, resource, resource_id, | ||
362 | 378 | expected_stat='available', | ||
363 | 379 | msg='resource', max_wait=120): | ||
364 | 380 | """Wait for an openstack resources status to reach an | ||
365 | 381 | expected status within a specified time. Useful to confirm that | ||
366 | 382 | nova instances, cinder vols, snapshots, glance images, heat stacks | ||
367 | 383 | and other resources eventually reach the expected status. | ||
368 | 384 | |||
369 | 385 | :param resource: pointer to os resource type, ex: heat_client.stacks | ||
370 | 386 | :param resource_id: unique id for the openstack resource | ||
371 | 387 | :param expected_stat: status to expect resource to reach | ||
372 | 388 | :param msg: text to identify purpose in logging | ||
373 | 389 | :param max_wait: maximum wait time in seconds | ||
374 | 390 | :returns: True if successful, False if status is not reached | ||
375 | 391 | """ | ||
376 | 392 | |||
377 | 393 | tries = 0 | ||
378 | 394 | resource_stat = resource.get(resource_id).status | ||
379 | 395 | while resource_stat != expected_stat and tries < (max_wait / 4): | ||
380 | 396 | self.log.debug('{} status check: ' | ||
381 | 397 | '{} [{}:{}] {}'.format(msg, tries, | ||
382 | 398 | resource_stat, | ||
383 | 399 | expected_stat, | ||
384 | 400 | resource_id)) | ||
385 | 401 | time.sleep(4) | ||
386 | 402 | resource_stat = resource.get(resource_id).status | ||
387 | 403 | tries += 1 | ||
388 | 404 | |||
389 | 405 | self.log.debug('{}: expected, actual status = {}, ' | ||
390 | 406 | '{}'.format(msg, resource_stat, expected_stat)) | ||
391 | 407 | |||
392 | 408 | if resource_stat == expected_stat: | ||
393 | 409 | return True | ||
394 | 410 | else: | ||
395 | 411 | self.log.debug('{} never reached expected status: ' | ||
396 | 412 | '{}'.format(resource_id, expected_stat)) | ||
397 | 413 | return False | ||
398 | 295 | 414 | ||
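The new `resource_reaches_status` helper above is a simple poll-and-sleep loop. A minimal, self-contained sketch of the same pattern follows; the `interval` parameter and the `FakeResource` stub are illustrative additions (the helper in the diff hard-codes a 4-second sleep and talks to real OpenStack clients such as `cinder_client.volumes`):

```python
import time


def resource_reaches_status(resource, resource_id,
                            expected_stat='available',
                            max_wait=120, interval=4):
    """Poll resource.get(resource_id).status until it equals
    expected_stat, giving up after roughly max_wait seconds."""
    tries = 0
    status = resource.get(resource_id).status
    while status != expected_stat and tries < (max_wait / interval):
        time.sleep(interval)
        status = resource.get(resource_id).status
        tries += 1
    return status == expected_stat


class FakeResource(object):
    """Hypothetical stand-in for an OpenStack resource manager.

    Returns each queued status once, then repeats the last one,
    mimicking e.g. a cinder volume going creating -> available."""

    def __init__(self, statuses):
        self._statuses = list(statuses)

    def get(self, resource_id):
        status = (self._statuses.pop(0) if len(self._statuses) > 1
                  else self._statuses[0])
        return type('Obj', (object,), {'status': status})()
```

The companion `delete_resource` helper in the same sync uses the identical loop shape, but compares resource counts before and after instead of a status string.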
399 | === modified file 'hooks/charmhelpers/contrib/openstack/context.py' | |||
400 | --- hooks/charmhelpers/contrib/openstack/context.py 2015-04-16 21:32:59 +0000 | |||
401 | +++ hooks/charmhelpers/contrib/openstack/context.py 2015-07-01 14:47:24 +0000 | |||
402 | @@ -240,7 +240,7 @@ | |||
403 | 240 | if self.relation_prefix: | 240 | if self.relation_prefix: |
404 | 241 | password_setting = self.relation_prefix + '_password' | 241 | password_setting = self.relation_prefix + '_password' |
405 | 242 | 242 | ||
407 | 243 | for rid in relation_ids('shared-db'): | 243 | for rid in relation_ids(self.interfaces[0]): |
408 | 244 | for unit in related_units(rid): | 244 | for unit in related_units(rid): |
409 | 245 | rdata = relation_get(rid=rid, unit=unit) | 245 | rdata = relation_get(rid=rid, unit=unit) |
410 | 246 | host = rdata.get('db_host') | 246 | host = rdata.get('db_host') |
411 | 247 | 247 | ||
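The one-line context.py change swaps the hard-coded `'shared-db'` relation name for `self.interfaces[0]`, so subclasses declaring a different interface (e.g. a postgresql variant) scan the right relation. A sketch of the loop with the hook tools passed in for testability; the function name and the stub data are illustrative, not charm-helpers API:

```python
def db_hosts(interfaces, relation_ids, related_units, relation_get):
    """Collect db_host values over whichever interface the context
    declares, rather than a hard-coded 'shared-db'."""
    hosts = []
    for rid in relation_ids(interfaces[0]):
        for unit in related_units(rid):
            rdata = relation_get(rid=rid, unit=unit)
            if rdata.get('db_host'):
                hosts.append(rdata['db_host'])
    return hosts
```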
412 | === modified file 'hooks/charmhelpers/contrib/openstack/neutron.py' | |||
413 | --- hooks/charmhelpers/contrib/openstack/neutron.py 2015-06-04 23:06:40 +0000 | |||
414 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2015-07-01 14:47:24 +0000 | |||
415 | @@ -172,14 +172,16 @@ | |||
416 | 172 | 'services': ['calico-felix', | 172 | 'services': ['calico-felix', |
417 | 173 | 'bird', | 173 | 'bird', |
418 | 174 | 'neutron-dhcp-agent', | 174 | 'neutron-dhcp-agent', |
420 | 175 | 'nova-api-metadata'], | 175 | 'nova-api-metadata', |
421 | 176 | 'etcd'], | ||
422 | 176 | 'packages': [[headers_package()] + determine_dkms_package(), | 177 | 'packages': [[headers_package()] + determine_dkms_package(), |
423 | 177 | ['calico-compute', | 178 | ['calico-compute', |
424 | 178 | 'bird', | 179 | 'bird', |
425 | 179 | 'neutron-dhcp-agent', | 180 | 'neutron-dhcp-agent', |
429 | 180 | 'nova-api-metadata']], | 181 | 'nova-api-metadata', |
430 | 181 | 'server_packages': ['neutron-server', 'calico-control'], | 182 | 'etcd']], |
431 | 182 | 'server_services': ['neutron-server'] | 183 | 'server_packages': ['neutron-server', 'calico-control', 'etcd'], |
432 | 184 | 'server_services': ['neutron-server', 'etcd'] | ||
433 | 183 | }, | 185 | }, |
434 | 184 | 'vsp': { | 186 | 'vsp': { |
435 | 185 | 'config': '/etc/neutron/plugins/nuage/nuage_plugin.ini', | 187 | 'config': '/etc/neutron/plugins/nuage/nuage_plugin.ini', |
436 | 186 | 188 | ||
437 | === modified file 'hooks/charmhelpers/contrib/openstack/utils.py' | |||
438 | --- hooks/charmhelpers/contrib/openstack/utils.py 2015-06-04 23:06:40 +0000 | |||
439 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2015-07-01 14:47:24 +0000 | |||
440 | @@ -79,6 +79,7 @@ | |||
441 | 79 | ('trusty', 'icehouse'), | 79 | ('trusty', 'icehouse'), |
442 | 80 | ('utopic', 'juno'), | 80 | ('utopic', 'juno'), |
443 | 81 | ('vivid', 'kilo'), | 81 | ('vivid', 'kilo'), |
444 | 82 | ('wily', 'liberty'), | ||
445 | 82 | ]) | 83 | ]) |
446 | 83 | 84 | ||
447 | 84 | 85 | ||
448 | @@ -91,6 +92,7 @@ | |||
449 | 91 | ('2014.1', 'icehouse'), | 92 | ('2014.1', 'icehouse'), |
450 | 92 | ('2014.2', 'juno'), | 93 | ('2014.2', 'juno'), |
451 | 93 | ('2015.1', 'kilo'), | 94 | ('2015.1', 'kilo'), |
452 | 95 | ('2015.2', 'liberty'), | ||
453 | 94 | ]) | 96 | ]) |
454 | 95 | 97 | ||
455 | 96 | # The ugly duckling | 98 | # The ugly duckling |
456 | @@ -113,6 +115,7 @@ | |||
457 | 113 | ('2.2.0', 'juno'), | 115 | ('2.2.0', 'juno'), |
458 | 114 | ('2.2.1', 'kilo'), | 116 | ('2.2.1', 'kilo'), |
459 | 115 | ('2.2.2', 'kilo'), | 117 | ('2.2.2', 'kilo'), |
460 | 118 | ('2.3.0', 'liberty'), | ||
461 | 116 | ]) | 119 | ]) |
462 | 117 | 120 | ||
463 | 118 | DEFAULT_LOOPBACK_SIZE = '5G' | 121 | DEFAULT_LOOPBACK_SIZE = '5G' |
464 | @@ -321,6 +324,9 @@ | |||
465 | 321 | 'kilo': 'trusty-updates/kilo', | 324 | 'kilo': 'trusty-updates/kilo', |
466 | 322 | 'kilo/updates': 'trusty-updates/kilo', | 325 | 'kilo/updates': 'trusty-updates/kilo', |
467 | 323 | 'kilo/proposed': 'trusty-proposed/kilo', | 326 | 'kilo/proposed': 'trusty-proposed/kilo', |
468 | 327 | 'liberty': 'trusty-updates/liberty', | ||
469 | 328 | 'liberty/updates': 'trusty-updates/liberty', | ||
470 | 329 | 'liberty/proposed': 'trusty-proposed/liberty', | ||
471 | 324 | } | 330 | } |
472 | 325 | 331 | ||
473 | 326 | try: | 332 | try: |
474 | @@ -549,6 +555,11 @@ | |||
475 | 549 | 555 | ||
476 | 550 | pip_create_virtualenv(os.path.join(parent_dir, 'venv')) | 556 | pip_create_virtualenv(os.path.join(parent_dir, 'venv')) |
477 | 551 | 557 | ||
478 | 558 | # Upgrade setuptools from default virtualenv version. The default version | ||
479 | 559 | # in trusty breaks update.py in global requirements master branch. | ||
480 | 560 | pip_install('setuptools', upgrade=True, proxy=http_proxy, | ||
481 | 561 | venv=os.path.join(parent_dir, 'venv')) | ||
482 | 562 | |||
483 | 552 | for p in projects['repositories']: | 563 | for p in projects['repositories']: |
484 | 553 | repo = p['repository'] | 564 | repo = p['repository'] |
485 | 554 | branch = p['branch'] | 565 | branch = p['branch'] |
486 | @@ -610,24 +621,24 @@ | |||
487 | 610 | else: | 621 | else: |
488 | 611 | repo_dir = dest_dir | 622 | repo_dir = dest_dir |
489 | 612 | 623 | ||
490 | 624 | venv = os.path.join(parent_dir, 'venv') | ||
491 | 625 | |||
492 | 613 | if update_requirements: | 626 | if update_requirements: |
493 | 614 | if not requirements_dir: | 627 | if not requirements_dir: |
494 | 615 | error_out('requirements repo must be cloned before ' | 628 | error_out('requirements repo must be cloned before ' |
495 | 616 | 'updating from global requirements.') | 629 | 'updating from global requirements.') |
497 | 617 | _git_update_requirements(repo_dir, requirements_dir) | 630 | _git_update_requirements(venv, repo_dir, requirements_dir) |
498 | 618 | 631 | ||
499 | 619 | juju_log('Installing git repo from dir: {}'.format(repo_dir)) | 632 | juju_log('Installing git repo from dir: {}'.format(repo_dir)) |
500 | 620 | if http_proxy: | 633 | if http_proxy: |
503 | 621 | pip_install(repo_dir, proxy=http_proxy, | 634 | pip_install(repo_dir, proxy=http_proxy, venv=venv) |
502 | 622 | venv=os.path.join(parent_dir, 'venv')) | ||
504 | 623 | else: | 635 | else: |
507 | 624 | pip_install(repo_dir, | 636 | pip_install(repo_dir, venv=venv) |
506 | 625 | venv=os.path.join(parent_dir, 'venv')) | ||
508 | 626 | 637 | ||
509 | 627 | return repo_dir | 638 | return repo_dir |
510 | 628 | 639 | ||
511 | 629 | 640 | ||
513 | 630 | def _git_update_requirements(package_dir, reqs_dir): | 641 | def _git_update_requirements(venv, package_dir, reqs_dir): |
514 | 631 | """ | 642 | """ |
515 | 632 | Update from global requirements. | 643 | Update from global requirements. |
516 | 633 | 644 | ||
517 | @@ -636,12 +647,14 @@ | |||
518 | 636 | """ | 647 | """ |
519 | 637 | orig_dir = os.getcwd() | 648 | orig_dir = os.getcwd() |
520 | 638 | os.chdir(reqs_dir) | 649 | os.chdir(reqs_dir) |
522 | 639 | cmd = ['python', 'update.py', package_dir] | 650 | python = os.path.join(venv, 'bin/python') |
523 | 651 | cmd = [python, 'update.py', package_dir] | ||
524 | 640 | try: | 652 | try: |
525 | 641 | subprocess.check_call(cmd) | 653 | subprocess.check_call(cmd) |
526 | 642 | except subprocess.CalledProcessError: | 654 | except subprocess.CalledProcessError: |
527 | 643 | package = os.path.basename(package_dir) | 655 | package = os.path.basename(package_dir) |
529 | 644 | error_out("Error updating {} from global-requirements.txt".format(package)) | 656 | error_out("Error updating {} from " |
530 | 657 | "global-requirements.txt".format(package)) | ||
531 | 645 | os.chdir(orig_dir) | 658 | os.chdir(orig_dir) |
532 | 646 | 659 | ||
533 | 647 | 660 | ||
534 | 648 | 661 | ||
535 | === modified file 'hooks/charmhelpers/contrib/python/packages.py' | |||
536 | --- hooks/charmhelpers/contrib/python/packages.py 2015-06-04 23:06:40 +0000 | |||
537 | +++ hooks/charmhelpers/contrib/python/packages.py 2015-07-01 14:47:24 +0000 | |||
538 | @@ -36,6 +36,8 @@ | |||
539 | 36 | def parse_options(given, available): | 36 | def parse_options(given, available): |
540 | 37 | """Given a set of options, check if available""" | 37 | """Given a set of options, check if available""" |
541 | 38 | for key, value in sorted(given.items()): | 38 | for key, value in sorted(given.items()): |
542 | 39 | if not value: | ||
543 | 40 | continue | ||
544 | 39 | if key in available: | 41 | if key in available: |
545 | 40 | yield "--{0}={1}".format(key, value) | 42 | yield "--{0}={1}".format(key, value) |
546 | 41 | 43 | ||
547 | 42 | 44 | ||
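The two-line guard added to `parse_options` skips keys whose value is falsy, so a call like `pip_install(pkg, proxy=None)` no longer emits a bogus `--proxy=None` option. The full (tiny) generator, as it reads after the change:

```python
def parse_options(given, available):
    """Given a dict of options, yield '--key=value' strings for the
    keys that are both available and have a truthy value."""
    for key, value in sorted(given.items()):
        if not value:
            continue  # skip options explicitly set to None/False/''
        if key in available:
            yield "--{0}={1}".format(key, value)
```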
548 | === modified file 'hooks/charmhelpers/core/hookenv.py' | |||
549 | --- hooks/charmhelpers/core/hookenv.py 2015-06-04 23:06:40 +0000 | |||
550 | +++ hooks/charmhelpers/core/hookenv.py 2015-07-01 14:47:24 +0000 | |||
551 | @@ -21,7 +21,9 @@ | |||
552 | 21 | # Charm Helpers Developers <juju@lists.ubuntu.com> | 21 | # Charm Helpers Developers <juju@lists.ubuntu.com> |
553 | 22 | 22 | ||
554 | 23 | from __future__ import print_function | 23 | from __future__ import print_function |
555 | 24 | from distutils.version import LooseVersion | ||
556 | 24 | from functools import wraps | 25 | from functools import wraps |
557 | 26 | import glob | ||
558 | 25 | import os | 27 | import os |
559 | 26 | import json | 28 | import json |
560 | 27 | import yaml | 29 | import yaml |
561 | @@ -242,29 +244,7 @@ | |||
562 | 242 | self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) | 244 | self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) |
563 | 243 | if os.path.exists(self.path): | 245 | if os.path.exists(self.path): |
564 | 244 | self.load_previous() | 246 | self.load_previous() |
588 | 245 | 247 | atexit(self._implicit_save) | |
566 | 246 | def __getitem__(self, key): | ||
567 | 247 | """For regular dict lookups, check the current juju config first, | ||
568 | 248 | then the previous (saved) copy. This ensures that user-saved values | ||
569 | 249 | will be returned by a dict lookup. | ||
570 | 250 | |||
571 | 251 | """ | ||
572 | 252 | try: | ||
573 | 253 | return dict.__getitem__(self, key) | ||
574 | 254 | except KeyError: | ||
575 | 255 | return (self._prev_dict or {})[key] | ||
576 | 256 | |||
577 | 257 | def get(self, key, default=None): | ||
578 | 258 | try: | ||
579 | 259 | return self[key] | ||
580 | 260 | except KeyError: | ||
581 | 261 | return default | ||
582 | 262 | |||
583 | 263 | def keys(self): | ||
584 | 264 | prev_keys = [] | ||
585 | 265 | if self._prev_dict is not None: | ||
586 | 266 | prev_keys = self._prev_dict.keys() | ||
587 | 267 | return list(set(prev_keys + list(dict.keys(self)))) | ||
589 | 268 | 248 | ||
590 | 269 | def load_previous(self, path=None): | 249 | def load_previous(self, path=None): |
591 | 270 | """Load previous copy of config from disk. | 250 | """Load previous copy of config from disk. |
592 | @@ -283,6 +263,9 @@ | |||
593 | 283 | self.path = path or self.path | 263 | self.path = path or self.path |
594 | 284 | with open(self.path) as f: | 264 | with open(self.path) as f: |
595 | 285 | self._prev_dict = json.load(f) | 265 | self._prev_dict = json.load(f) |
596 | 266 | for k, v in self._prev_dict.items(): | ||
597 | 267 | if k not in self: | ||
598 | 268 | self[k] = v | ||
599 | 286 | 269 | ||
600 | 287 | def changed(self, key): | 270 | def changed(self, key): |
601 | 288 | """Return True if the current value for this key is different from | 271 | """Return True if the current value for this key is different from |
602 | @@ -314,13 +297,13 @@ | |||
603 | 314 | instance. | 297 | instance. |
604 | 315 | 298 | ||
605 | 316 | """ | 299 | """ |
606 | 317 | if self._prev_dict: | ||
607 | 318 | for k, v in six.iteritems(self._prev_dict): | ||
608 | 319 | if k not in self: | ||
609 | 320 | self[k] = v | ||
610 | 321 | with open(self.path, 'w') as f: | 300 | with open(self.path, 'w') as f: |
611 | 322 | json.dump(self, f) | 301 | json.dump(self, f) |
612 | 323 | 302 | ||
613 | 303 | def _implicit_save(self): | ||
614 | 304 | if self.implicit_save: | ||
615 | 305 | self.save() | ||
616 | 306 | |||
617 | 324 | 307 | ||
618 | 325 | @cached | 308 | @cached |
619 | 326 | def config(scope=None): | 309 | def config(scope=None): |
620 | @@ -587,10 +570,14 @@ | |||
621 | 587 | hooks.execute(sys.argv) | 570 | hooks.execute(sys.argv) |
622 | 588 | """ | 571 | """ |
623 | 589 | 572 | ||
625 | 590 | def __init__(self, config_save=True): | 573 | def __init__(self, config_save=None): |
626 | 591 | super(Hooks, self).__init__() | 574 | super(Hooks, self).__init__() |
627 | 592 | self._hooks = {} | 575 | self._hooks = {} |
629 | 593 | self._config_save = config_save | 576 | |
630 | 577 | # For unknown reasons, we allow the Hooks constructor to override | ||
631 | 578 | # config().implicit_save. | ||
632 | 579 | if config_save is not None: | ||
633 | 580 | config().implicit_save = config_save | ||
634 | 594 | 581 | ||
635 | 595 | def register(self, name, function): | 582 | def register(self, name, function): |
636 | 596 | """Register a hook""" | 583 | """Register a hook""" |
637 | @@ -598,13 +585,16 @@ | |||
638 | 598 | 585 | ||
639 | 599 | def execute(self, args): | 586 | def execute(self, args): |
640 | 600 | """Execute a registered hook based on args[0]""" | 587 | """Execute a registered hook based on args[0]""" |
641 | 588 | _run_atstart() | ||
642 | 601 | hook_name = os.path.basename(args[0]) | 589 | hook_name = os.path.basename(args[0]) |
643 | 602 | if hook_name in self._hooks: | 590 | if hook_name in self._hooks: |
649 | 603 | self._hooks[hook_name]() | 591 | try: |
650 | 604 | if self._config_save: | 592 | self._hooks[hook_name]() |
651 | 605 | cfg = config() | 593 | except SystemExit as x: |
652 | 606 | if cfg.implicit_save: | 594 | if x.code is None or x.code == 0: |
653 | 607 | cfg.save() | 595 | _run_atexit() |
654 | 596 | raise | ||
655 | 597 | _run_atexit() | ||
656 | 608 | else: | 598 | else: |
657 | 609 | raise UnregisteredHookError(hook_name) | 599 | raise UnregisteredHookError(hook_name) |
658 | 610 | 600 | ||
659 | @@ -732,13 +722,79 @@ | |||
660 | 732 | @translate_exc(from_exc=OSError, to_exc=NotImplementedError) | 722 | @translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
661 | 733 | def leader_set(settings=None, **kwargs): | 723 | def leader_set(settings=None, **kwargs): |
662 | 734 | """Juju leader set value(s)""" | 724 | """Juju leader set value(s)""" |
664 | 735 | log("Juju leader-set '%s'" % (settings), level=DEBUG) | 725 | # Don't log secrets. |
665 | 726 | # log("Juju leader-set '%s'" % (settings), level=DEBUG) | ||
666 | 736 | cmd = ['leader-set'] | 727 | cmd = ['leader-set'] |
667 | 737 | settings = settings or {} | 728 | settings = settings or {} |
668 | 738 | settings.update(kwargs) | 729 | settings.update(kwargs) |
670 | 739 | for k, v in settings.iteritems(): | 730 | for k, v in settings.items(): |
671 | 740 | if v is None: | 731 | if v is None: |
672 | 741 | cmd.append('{}='.format(k)) | 732 | cmd.append('{}='.format(k)) |
673 | 742 | else: | 733 | else: |
674 | 743 | cmd.append('{}={}'.format(k, v)) | 734 | cmd.append('{}={}'.format(k, v)) |
675 | 744 | subprocess.check_call(cmd) | 735 | subprocess.check_call(cmd) |
676 | 736 | |||
677 | 737 | |||
678 | 738 | @cached | ||
679 | 739 | def juju_version(): | ||
680 | 740 | """Full version string (eg. '1.23.3.1-trusty-amd64')""" | ||
681 | 741 | # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1 | ||
682 | 742 | jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0] | ||
683 | 743 | return subprocess.check_output([jujud, 'version'], | ||
684 | 744 | universal_newlines=True).strip() | ||
685 | 745 | |||
686 | 746 | |||
687 | 747 | @cached | ||
688 | 748 | def has_juju_version(minimum_version): | ||
689 | 749 | """Return True if the Juju version is at least the provided version""" | ||
690 | 750 | return LooseVersion(juju_version()) >= LooseVersion(minimum_version) | ||
691 | 751 | |||
692 | 752 | |||
693 | 753 | _atexit = [] | ||
694 | 754 | _atstart = [] | ||
695 | 755 | |||
696 | 756 | |||
697 | 757 | def atstart(callback, *args, **kwargs): | ||
698 | 758 | '''Schedule a callback to run before the main hook. | ||
699 | 759 | |||
700 | 760 | Callbacks are run in the order they were added. | ||
701 | 761 | |||
702 | 762 | This is useful for modules and classes to perform initialization | ||
703 | 763 | and inject behavior. In particular: | ||
704 | 764 | - Run common code before all of your hooks, such as logging | ||
705 | 765 | the hook name or interesting relation data. | ||
706 | 766 | - Defer object or module initialization that requires a hook | ||
707 | 767 | context until we know there actually is a hook context, | ||
708 | 768 | making testing easier. | ||
709 | 769 | - Rather than requiring charm authors to include boilerplate to | ||
710 | 770 | invoke your helper's behavior, have it run automatically if | ||
711 | 771 | your object is instantiated or module imported. | ||
712 | 772 | |||
713 | 773 | This is not at all useful after your hook framework has been launched. ||
714 | 774 | ''' | ||
715 | 775 | global _atstart | ||
716 | 776 | _atstart.append((callback, args, kwargs)) | ||
717 | 777 | |||
718 | 778 | |||
719 | 779 | def atexit(callback, *args, **kwargs): | ||
720 | 780 | '''Schedule a callback to run on successful hook completion. | ||
721 | 781 | |||
722 | 782 | Callbacks are run in the reverse order that they were added.''' | ||
723 | 783 | _atexit.append((callback, args, kwargs)) | ||
724 | 784 | |||
725 | 785 | |||
726 | 786 | def _run_atstart(): | ||
727 | 787 | '''Hook frameworks must invoke this before running the main hook body.''' | ||
728 | 788 | global _atstart | ||
729 | 789 | for callback, args, kwargs in _atstart: | ||
730 | 790 | callback(*args, **kwargs) | ||
731 | 791 | del _atstart[:] | ||
732 | 792 | |||
733 | 793 | |||
734 | 794 | def _run_atexit(): | ||
735 | 795 | '''Hook frameworks must invoke this after the main hook body has | ||
736 | 796 | successfully completed. Do not invoke it if the hook fails.''' | ||
737 | 797 | global _atexit | ||
738 | 798 | for callback, args, kwargs in reversed(_atexit): | ||
739 | 799 | callback(*args, **kwargs) | ||
740 | 800 | del _atexit[:] | ||
741 | 745 | 801 | ||
742 | === modified file 'hooks/charmhelpers/core/host.py' | |||
743 | --- hooks/charmhelpers/core/host.py 2015-06-04 23:06:40 +0000 | |||
744 | +++ hooks/charmhelpers/core/host.py 2015-07-01 14:47:24 +0000 | |||
745 | @@ -24,6 +24,7 @@ | |||
746 | 24 | import os | 24 | import os |
747 | 25 | import re | 25 | import re |
748 | 26 | import pwd | 26 | import pwd |
749 | 27 | import glob | ||
750 | 27 | import grp | 28 | import grp |
751 | 28 | import random | 29 | import random |
752 | 29 | import string | 30 | import string |
753 | @@ -269,6 +270,21 @@ | |||
754 | 269 | return None | 270 | return None |
755 | 270 | 271 | ||
756 | 271 | 272 | ||
757 | 273 | def path_hash(path): | ||
758 | 274 | """ | ||
759 | 275 | Generate a hash checksum of all files matching 'path'. Standard wildcards | ||
760 | 276 | like '*' and '?' are supported, see documentation for the 'glob' module for | ||
761 | 277 | more information. | ||
762 | 278 | |||
763 | 279 | :return: dict: A { filename: hash } dictionary for all matched files. | ||
764 | 280 | Empty if none found. | ||
765 | 281 | """ | ||
766 | 282 | return { | ||
767 | 283 | filename: file_hash(filename) | ||
768 | 284 | for filename in glob.iglob(path) | ||
769 | 285 | } | ||
770 | 286 | |||
771 | 287 | |||
772 | 272 | def check_hash(path, checksum, hash_type='md5'): | 288 | def check_hash(path, checksum, hash_type='md5'): |
773 | 273 | """ | 289 | """ |
774 | 274 | Validate a file using a cryptographic checksum. | 290 | Validate a file using a cryptographic checksum. |
775 | @@ -296,23 +312,25 @@ | |||
776 | 296 | 312 | ||
777 | 297 | @restart_on_change({ | 313 | @restart_on_change({ |
778 | 298 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] | 314 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] |
779 | 315 | '/etc/apache/sites-enabled/*': [ 'apache2' ] | ||
780 | 299 | }) | 316 | }) |
782 | 300 | def ceph_client_changed(): | 317 | def config_changed(): |
783 | 301 | pass # your code here | 318 | pass # your code here |
784 | 302 | 319 | ||
785 | 303 | In this example, the cinder-api and cinder-volume services | 320 | In this example, the cinder-api and cinder-volume services |
786 | 304 | would be restarted if /etc/ceph/ceph.conf is changed by the | 321 | would be restarted if /etc/ceph/ceph.conf is changed by the |
788 | 305 | ceph_client_changed function. | 322 | ceph_client_changed function. The apache2 service would be |
789 | 323 | restarted if any file matching the pattern got changed, created | ||
790 | 324 | or removed. Standard wildcards are supported, see documentation | ||
791 | 325 | for the 'glob' module for more information. | ||
792 | 306 | """ | 326 | """ |
793 | 307 | def wrap(f): | 327 | def wrap(f): |
794 | 308 | def wrapped_f(*args, **kwargs): | 328 | def wrapped_f(*args, **kwargs): |
798 | 309 | checksums = {} | 329 | checksums = {path: path_hash(path) for path in restart_map} |
796 | 310 | for path in restart_map: | ||
797 | 311 | checksums[path] = file_hash(path) | ||
799 | 312 | f(*args, **kwargs) | 330 | f(*args, **kwargs) |
800 | 313 | restarts = [] | 331 | restarts = [] |
801 | 314 | for path in restart_map: | 332 | for path in restart_map: |
803 | 315 | if checksums[path] != file_hash(path): | 333 | if path_hash(path) != checksums[path]: |
804 | 316 | restarts += restart_map[path] | 334 | restarts += restart_map[path] |
805 | 317 | services_list = list(OrderedDict.fromkeys(restarts)) | 335 | services_list = list(OrderedDict.fromkeys(restarts)) |
806 | 318 | if not stopstart: | 336 | if not stopstart: |
807 | 319 | 337 | ||
808 | === modified file 'hooks/charmhelpers/core/services/base.py' | |||
809 | --- hooks/charmhelpers/core/services/base.py 2015-06-04 23:06:40 +0000 | |||
810 | +++ hooks/charmhelpers/core/services/base.py 2015-07-01 14:47:24 +0000 | |||
811 | @@ -128,15 +128,18 @@ | |||
812 | 128 | """ | 128 | """ |
813 | 129 | Handle the current hook by doing The Right Thing with the registered services. | 129 | Handle the current hook by doing The Right Thing with the registered services. |
814 | 130 | """ | 130 | """ |
824 | 131 | hook_name = hookenv.hook_name() | 131 | hookenv._run_atstart() |
825 | 132 | if hook_name == 'stop': | 132 | try: |
826 | 133 | self.stop_services() | 133 | hook_name = hookenv.hook_name() |
827 | 134 | else: | 134 | if hook_name == 'stop': |
828 | 135 | self.reconfigure_services() | 135 | self.stop_services() |
829 | 136 | self.provide_data() | 136 | else: |
830 | 137 | cfg = hookenv.config() | 137 | self.reconfigure_services() |
831 | 138 | if cfg.implicit_save: | 138 | self.provide_data() |
832 | 139 | cfg.save() | 139 | except SystemExit as x: |
833 | 140 | if x.code is None or x.code == 0: | ||
834 | 141 | hookenv._run_atexit() | ||
835 | 142 | hookenv._run_atexit() | ||
836 | 140 | 143 | ||
837 | 141 | def provide_data(self): | 144 | def provide_data(self): |
838 | 142 | """ | 145 | """ |
839 | 143 | 146 | ||
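Both `ServiceManager.manage` here and `Hooks.execute` in hookenv now share a subtle rule: atexit callbacks run on success, and a `sys.exit(0)` (which raises `SystemExit` with code `0` or `None`) counts as success, while a nonzero exit does not. A sketch of just that control flow, with the hook body and atexit runner passed in as plain callables:

```python
def run_with_atexit(body, run_atexit):
    """Run body, then run_atexit -- but only on success.

    A SystemExit with code 0 or None is treated as a clean exit
    (atexit still runs, then the exception is re-raised so the
    process exits); any other exception skips atexit entirely."""
    try:
        body()
    except SystemExit as x:
        if x.code is None or x.code == 0:
            run_atexit()
        raise
    run_atexit()
```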
840 | === modified file 'metadata.yaml' | |||
841 | --- metadata.yaml 2014-09-19 11:00:18 +0000 | |||
842 | +++ metadata.yaml 2015-07-01 14:47:24 +0000 | |||
843 | @@ -7,7 +7,10 @@ | |||
844 | 7 | . | 7 | . |
845 | 8 | This charm provides the RADOS HTTP gateway supporting S3 and Swift protocols | 8 | This charm provides the RADOS HTTP gateway supporting S3 and Swift protocols |
846 | 9 | for object storage. | 9 | for object storage. |
848 | 10 | categories: | 10 | tags: |
849 | 11 | - openstack | ||
850 | 12 | - storage | ||
851 | 13 | - file-servers | ||
852 | 11 | - misc | 14 | - misc |
853 | 12 | requires: | 15 | requires: |
854 | 13 | mon: | 16 | mon: |
855 | 14 | 17 | ||
856 | === modified file 'tests/00-setup' | |||
857 | --- tests/00-setup 2014-09-29 01:57:43 +0000 | |||
858 | +++ tests/00-setup 2015-07-01 14:47:24 +0000 | |||
859 | @@ -5,6 +5,10 @@ | |||
860 | 5 | sudo add-apt-repository --yes ppa:juju/stable | 5 | sudo add-apt-repository --yes ppa:juju/stable |
861 | 6 | sudo apt-get update --yes | 6 | sudo apt-get update --yes |
862 | 7 | sudo apt-get install --yes python-amulet \ | 7 | sudo apt-get install --yes python-amulet \ |
863 | 8 | python-cinderclient \ | ||
864 | 9 | python-distro-info \ | ||
865 | 10 | python-glanceclient \ | ||
866 | 11 | python-heatclient \ | ||
867 | 8 | python-keystoneclient \ | 12 | python-keystoneclient \ |
870 | 9 | python-glanceclient \ | 13 | python-novaclient \ |
871 | 10 | python-novaclient | 14 | python-swiftclient |
872 | 11 | 15 | ||
873 | === modified file 'tests/017-basic-trusty-kilo' (properties changed: -x to +x) | |||
874 | === modified file 'tests/019-basic-vivid-kilo' (properties changed: -x to +x) | |||
875 | === modified file 'tests/basic_deployment.py' | |||
876 | --- tests/basic_deployment.py 2015-04-16 21:31:30 +0000 | |||
877 | +++ tests/basic_deployment.py 2015-07-01 14:47:24 +0000 | |||
878 | @@ -1,13 +1,14 @@ | |||
879 | 1 | #!/usr/bin/python | 1 | #!/usr/bin/python |
880 | 2 | 2 | ||
881 | 3 | import amulet | 3 | import amulet |
882 | 4 | import time | ||
883 | 4 | from charmhelpers.contrib.openstack.amulet.deployment import ( | 5 | from charmhelpers.contrib.openstack.amulet.deployment import ( |
884 | 5 | OpenStackAmuletDeployment | 6 | OpenStackAmuletDeployment |
885 | 6 | ) | 7 | ) |
887 | 7 | from charmhelpers.contrib.openstack.amulet.utils import ( # noqa | 8 | from charmhelpers.contrib.openstack.amulet.utils import ( |
888 | 8 | OpenStackAmuletUtils, | 9 | OpenStackAmuletUtils, |
889 | 9 | DEBUG, | 10 | DEBUG, |
891 | 10 | ERROR | 11 | #ERROR |
892 | 11 | ) | 12 | ) |
893 | 12 | 13 | ||
894 | 13 | # Use DEBUG to turn on debug logging | 14 | # Use DEBUG to turn on debug logging |
895 | @@ -35,9 +36,12 @@ | |||
896 | 35 | compatible with the local charm (e.g. stable or next). | 36 | compatible with the local charm (e.g. stable or next). |
897 | 36 | """ | 37 | """ |
898 | 37 | this_service = {'name': 'ceph-radosgw'} | 38 | this_service = {'name': 'ceph-radosgw'} |
902 | 38 | other_services = [{'name': 'ceph', 'units': 3}, {'name': 'mysql'}, | 39 | other_services = [{'name': 'ceph', 'units': 3}, |
903 | 39 | {'name': 'keystone'}, {'name': 'rabbitmq-server'}, | 40 | {'name': 'mysql'}, |
904 | 40 | {'name': 'nova-compute'}, {'name': 'glance'}, | 41 | {'name': 'keystone'}, |
905 | 42 | {'name': 'rabbitmq-server'}, | ||
906 | 43 | {'name': 'nova-compute'}, | ||
907 | 44 | {'name': 'glance'}, | ||
908 | 41 | {'name': 'cinder'}] | 45 | {'name': 'cinder'}] |
909 | 42 | super(CephRadosGwBasicDeployment, self)._add_services(this_service, | 46 | super(CephRadosGwBasicDeployment, self)._add_services(this_service, |
910 | 43 | other_services) | 47 | other_services) |
911 | @@ -92,13 +96,20 @@ | |||
912 | 92 | self.mysql_sentry = self.d.sentry.unit['mysql/0'] | 96 | self.mysql_sentry = self.d.sentry.unit['mysql/0'] |
913 | 93 | self.keystone_sentry = self.d.sentry.unit['keystone/0'] | 97 | self.keystone_sentry = self.d.sentry.unit['keystone/0'] |
914 | 94 | self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0'] | 98 | self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0'] |
916 | 95 | self.nova_compute_sentry = self.d.sentry.unit['nova-compute/0'] | 99 | self.nova_sentry = self.d.sentry.unit['nova-compute/0'] |
917 | 96 | self.glance_sentry = self.d.sentry.unit['glance/0'] | 100 | self.glance_sentry = self.d.sentry.unit['glance/0'] |
918 | 97 | self.cinder_sentry = self.d.sentry.unit['cinder/0'] | 101 | self.cinder_sentry = self.d.sentry.unit['cinder/0'] |
919 | 98 | self.ceph0_sentry = self.d.sentry.unit['ceph/0'] | 102 | self.ceph0_sentry = self.d.sentry.unit['ceph/0'] |
920 | 99 | self.ceph1_sentry = self.d.sentry.unit['ceph/1'] | 103 | self.ceph1_sentry = self.d.sentry.unit['ceph/1'] |
921 | 100 | self.ceph2_sentry = self.d.sentry.unit['ceph/2'] | 104 | self.ceph2_sentry = self.d.sentry.unit['ceph/2'] |
922 | 101 | self.ceph_radosgw_sentry = self.d.sentry.unit['ceph-radosgw/0'] | 105 | self.ceph_radosgw_sentry = self.d.sentry.unit['ceph-radosgw/0'] |
923 | 106 | u.log.debug('openstack release val: {}'.format( | ||
924 | 107 | self._get_openstack_release())) | ||
925 | 108 | u.log.debug('openstack release str: {}'.format( | ||
926 | 109 | self._get_openstack_release_string())) | ||
927 | 110 | |||
928 | 111 | # Let things settle a bit before moving forward ||
929 | 112 | time.sleep(30) | ||
930 | 102 | 113 | ||
931 | 103 | # Authenticate admin with keystone | 114 | # Authenticate admin with keystone |
932 | 104 | self.keystone = u.authenticate_keystone_admin(self.keystone_sentry, | 115 | self.keystone = u.authenticate_keystone_admin(self.keystone_sentry, |
933 | @@ -135,39 +146,77 @@ | |||
934 | 135 | 'password', | 146 | 'password', |
935 | 136 | self.demo_tenant) | 147 | self.demo_tenant) |
936 | 137 | 148 | ||
942 | 138 | def _ceph_osd_id(self, index): | 149 | # Authenticate radosgw user using swift api |
943 | 139 | """Produce a shell command that will return a ceph-osd id.""" | 150 | ks_obj_rel = self.keystone_sentry.relation( |
944 | 140 | return "`initctl list | grep 'ceph-osd ' | awk 'NR=={} {{ print $2 }}' | grep -o '[0-9]*'`".format(index + 1) # noqa | 151 | 'identity-service', |
945 | 141 | 152 | 'ceph-radosgw:identity-service') | |
946 | 142 | def test_services(self): | 153 | self.swift = u.authenticate_swift_user( |
947 | 154 | self.keystone, | ||
948 | 155 | user=ks_obj_rel['service_username'], | ||
949 | 156 | password=ks_obj_rel['service_password'], | ||
950 | 157 | tenant=ks_obj_rel['service_tenant']) | ||
951 | 158 | |||
952 | 159 | def test_100_ceph_processes(self): | ||
953 | 160 | """Verify that the expected service processes are running | ||
954 | 161 | on each ceph unit.""" | ||
955 | 162 | |||
956 | 163 | # Process name and quantity of processes to expect on each unit | ||
957 | 164 | ceph_processes = { | ||
958 | 165 | 'ceph-mon': 1, | ||
959 | 166 | 'ceph-osd': 2 | ||
960 | 167 | } | ||
961 | 168 | |||
962 | 169 | # Units with process names and PID quantities expected | ||
963 | 170 | expected_processes = { | ||
964 | 171 | self.ceph_radosgw_sentry: {'radosgw': 1}, | ||
965 | 172 | self.ceph0_sentry: ceph_processes, | ||
966 | 173 | self.ceph1_sentry: ceph_processes, | ||
967 | 174 | self.ceph2_sentry: ceph_processes | ||
968 | 175 | } | ||
969 | 176 | |||
970 | 177 | actual_pids = u.get_unit_process_ids(expected_processes) | ||
971 | 178 | ret = u.validate_unit_process_ids(expected_processes, actual_pids) | ||
972 | 179 | if ret: | ||
973 | 180 | amulet.raise_status(amulet.FAIL, msg=ret) | ||
974 | 181 | |||
975 | 182 | def test_102_services(self): | ||
976 | 143 | """Verify the expected services are running on the service units.""" | 183 | """Verify the expected services are running on the service units.""" |
990 | 144 | ceph_services = ['status ceph-mon-all', | 184 | |
991 | 145 | 'status ceph-mon id=`hostname`'] | 185 | services = { |
992 | 146 | commands = { | 186 | self.mysql_sentry: ['mysql'], |
993 | 147 | self.mysql_sentry: ['status mysql'], | 187 | self.rabbitmq_sentry: ['rabbitmq-server'], |
994 | 148 | self.rabbitmq_sentry: ['sudo service rabbitmq-server status'], | 188 | self.nova_sentry: ['nova-compute'], |
995 | 149 | self.nova_compute_sentry: ['status nova-compute'], | 189 | self.keystone_sentry: ['keystone'], |
996 | 150 | self.keystone_sentry: ['status keystone'], | 190 | self.glance_sentry: ['glance-registry', |
997 | 151 | self.glance_sentry: ['status glance-registry', | 191 | 'glance-api'], |
998 | 152 | 'status glance-api'], | 192 | self.cinder_sentry: ['cinder-api', |
999 | 153 | self.cinder_sentry: ['status cinder-api', | 193 | 'cinder-scheduler', |
1000 | 154 | 'status cinder-scheduler', | 194 | 'cinder-volume'], |
988 | 155 | 'status cinder-volume'], | ||
989 | 156 | self.ceph_radosgw_sentry: ['status radosgw-all'] | ||
1001 | 157 | } | 195 | } |
1010 | 158 | ceph_osd0 = 'status ceph-osd id={}'.format(self._ceph_osd_id(0)) | 196 | |
1011 | 159 | ceph_osd1 = 'status ceph-osd id={}'.format(self._ceph_osd_id(1)) | 197 | if self._get_openstack_release() < self.vivid_kilo: |
1012 | 160 | ceph_services.extend([ceph_osd0, ceph_osd1, 'status ceph-osd-all']) | 198 | # For upstart systems only. Ceph services under systemd |
1013 | 161 | commands[self.ceph0_sentry] = ceph_services | 199 | # are checked by process name instead. |
1014 | 162 | commands[self.ceph1_sentry] = ceph_services | 200 | ceph_services = [ |
1015 | 163 | commands[self.ceph2_sentry] = ceph_services | 201 | 'ceph-mon-all', |
1016 | 164 | 202 | 'ceph-mon id=`hostname`', | |
1017 | 165 | ret = u.validate_services(commands) | 203 | 'ceph-osd-all', |
1018 | 204 | 'ceph-osd id={}'.format(u.get_ceph_osd_id_cmd(0)), | ||
1019 | 205 | 'ceph-osd id={}'.format(u.get_ceph_osd_id_cmd(1)) | ||
1020 | 206 | ] | ||
1021 | 207 | services[self.ceph0_sentry] = ceph_services | ||
1022 | 208 | services[self.ceph1_sentry] = ceph_services | ||
1023 | 209 | services[self.ceph2_sentry] = ceph_services | ||
1024 | 210 | services[self.ceph_radosgw_sentry] = ['radosgw-all'] | ||
1025 | 211 | |||
1026 | 212 | ret = u.validate_services_by_name(services) | ||
1027 | 166 | if ret: | 213 | if ret: |
1028 | 167 | amulet.raise_status(amulet.FAIL, msg=ret) | 214 | amulet.raise_status(amulet.FAIL, msg=ret) |
1029 | 168 | 215 | ||
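Reviewer note: the `vivid_kilo` gate in test_102 exists because upstart and systemd expose service status through different commands. The selection logic used by `validate_services_by_name` reduces to the following sketch (the release list is abridged here, and the real helper also special-cases rabbitmq-server, which shipped its own init script):

```python
# Abridged, ordered list of Ubuntu release codenames
UBUNTU_RELEASES = ['precise', 'quantal', 'raring', 'saucy',
                   'trusty', 'utopic', 'vivid', 'wily']


def status_cmd(release, service):
    """Return the service-status command appropriate for a release.

    Releases before vivid use upstart; vivid and later use systemd.
    """
    systemd_switch = UBUNTU_RELEASES.index('vivid')
    if UBUNTU_RELEASES.index(release) >= systemd_switch:
        # systemd: query via the service wrapper
        return 'sudo service {} status'.format(service)
    # upstart: query via the status command
    return 'sudo status {}'.format(service)
```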
1031 | 169 | def test_ceph_radosgw_ceph_relation(self): | 216 | def test_200_ceph_radosgw_ceph_relation(self): |
1032 | 170 | """Verify the ceph-radosgw to ceph relation data.""" | 217 | """Verify the ceph-radosgw to ceph relation data.""" |
1033 | 218 | u.log.debug('Checking ceph-radosgw:mon to ceph:radosgw ' | ||
1034 | 219 | 'relation data...') | ||
1035 | 171 | unit = self.ceph_radosgw_sentry | 220 | unit = self.ceph_radosgw_sentry |
1036 | 172 | relation = ['mon', 'ceph:radosgw'] | 221 | relation = ['mon', 'ceph:radosgw'] |
1037 | 173 | expected = { | 222 | expected = { |
1038 | @@ -179,8 +228,9 @@ | |||
1039 | 179 | message = u.relation_error('ceph-radosgw to ceph', ret) | 228 | message = u.relation_error('ceph-radosgw to ceph', ret) |
1040 | 180 | amulet.raise_status(amulet.FAIL, msg=message) | 229 | amulet.raise_status(amulet.FAIL, msg=message) |
1041 | 181 | 230 | ||
1043 | 182 | def test_ceph0_ceph_radosgw_relation(self): | 231 | def test_201_ceph0_ceph_radosgw_relation(self): |
1044 | 183 | """Verify the ceph0 to ceph-radosgw relation data.""" | 232 | """Verify the ceph0 to ceph-radosgw relation data.""" |
1045 | 233 | u.log.debug('Checking ceph0:radosgw radosgw:mon relation data...') | ||
1046 | 184 | unit = self.ceph0_sentry | 234 | unit = self.ceph0_sentry |
1047 | 185 | relation = ['radosgw', 'ceph-radosgw:mon'] | 235 | relation = ['radosgw', 'ceph-radosgw:mon'] |
1048 | 186 | expected = { | 236 | expected = { |
1049 | @@ -196,8 +246,9 @@ | |||
1050 | 196 | message = u.relation_error('ceph0 to ceph-radosgw', ret) | 246 | message = u.relation_error('ceph0 to ceph-radosgw', ret) |
1051 | 197 | amulet.raise_status(amulet.FAIL, msg=message) | 247 | amulet.raise_status(amulet.FAIL, msg=message) |
1052 | 198 | 248 | ||
1054 | 199 | def test_ceph1_ceph_radosgw_relation(self): | 249 | def test_202_ceph1_ceph_radosgw_relation(self): |
1055 | 200 | """Verify the ceph1 to ceph-radosgw relation data.""" | 250 | """Verify the ceph1 to ceph-radosgw relation data.""" |
1056 | 251 | u.log.debug('Checking ceph1:radosgw ceph-radosgw:mon relation data...') | ||
1057 | 201 | unit = self.ceph1_sentry | 252 | unit = self.ceph1_sentry |
1058 | 202 | relation = ['radosgw', 'ceph-radosgw:mon'] | 253 | relation = ['radosgw', 'ceph-radosgw:mon'] |
1059 | 203 | expected = { | 254 | expected = { |
1060 | @@ -213,8 +264,9 @@ | |||
1061 | 213 | message = u.relation_error('ceph1 to ceph-radosgw', ret) | 264 | message = u.relation_error('ceph1 to ceph-radosgw', ret) |
1062 | 214 | amulet.raise_status(amulet.FAIL, msg=message) | 265 | amulet.raise_status(amulet.FAIL, msg=message) |
1063 | 215 | 266 | ||
1065 | 216 | def test_ceph2_ceph_radosgw_relation(self): | 267 | def test_203_ceph2_ceph_radosgw_relation(self): |
1066 | 217 | """Verify the ceph2 to ceph-radosgw relation data.""" | 268 | """Verify the ceph2 to ceph-radosgw relation data.""" |
1067 | 269 | u.log.debug('Checking ceph2:radosgw ceph-radosgw:mon relation data...') | ||
1068 | 218 | unit = self.ceph2_sentry | 270 | unit = self.ceph2_sentry |
1069 | 219 | relation = ['radosgw', 'ceph-radosgw:mon'] | 271 | relation = ['radosgw', 'ceph-radosgw:mon'] |
1070 | 220 | expected = { | 272 | expected = { |
1071 | @@ -230,8 +282,10 @@ | |||
1072 | 230 | message = u.relation_error('ceph2 to ceph-radosgw', ret) | 282 | message = u.relation_error('ceph2 to ceph-radosgw', ret) |
1073 | 231 | amulet.raise_status(amulet.FAIL, msg=message) | 283 | amulet.raise_status(amulet.FAIL, msg=message) |
1074 | 232 | 284 | ||
1076 | 233 | def test_ceph_radosgw_keystone_relation(self): | 285 | def test_204_ceph_radosgw_keystone_relation(self): |
1077 | 234 | """Verify the ceph-radosgw to keystone relation data.""" | 286 | """Verify the ceph-radosgw to keystone relation data.""" |
1078 | 287 | u.log.debug('Checking ceph-radosgw to keystone id service ' | ||
1079 | 288 | 'relation data...') | ||
1080 | 235 | unit = self.ceph_radosgw_sentry | 289 | unit = self.ceph_radosgw_sentry |
1081 | 236 | relation = ['identity-service', 'keystone:identity-service'] | 290 | relation = ['identity-service', 'keystone:identity-service'] |
1082 | 237 | expected = { | 291 | expected = { |
1083 | @@ -249,8 +303,10 @@ | |||
1084 | 249 | message = u.relation_error('ceph-radosgw to keystone', ret) | 303 | message = u.relation_error('ceph-radosgw to keystone', ret) |
1085 | 250 | amulet.raise_status(amulet.FAIL, msg=message) | 304 | amulet.raise_status(amulet.FAIL, msg=message) |
1086 | 251 | 305 | ||
1088 | 252 | def test_keystone_ceph_radosgw_relation(self): | 306 | def test_205_keystone_ceph_radosgw_relation(self): |
1089 | 253 | """Verify the keystone to ceph-radosgw relation data.""" | 307 | """Verify the keystone to ceph-radosgw relation data.""" |
1090 | 308 | u.log.debug('Checking keystone to ceph-radosgw id service ' | ||
1091 | 309 | 'relation data...') | ||
1092 | 254 | unit = self.keystone_sentry | 310 | unit = self.keystone_sentry |
1093 | 255 | relation = ['identity-service', 'ceph-radosgw:identity-service'] | 311 | relation = ['identity-service', 'ceph-radosgw:identity-service'] |
1094 | 256 | expected = { | 312 | expected = { |
1095 | @@ -273,8 +329,9 @@ | |||
1096 | 273 | message = u.relation_error('keystone to ceph-radosgw', ret) | 329 | message = u.relation_error('keystone to ceph-radosgw', ret) |
1097 | 274 | amulet.raise_status(amulet.FAIL, msg=message) | 330 | amulet.raise_status(amulet.FAIL, msg=message) |
1098 | 275 | 331 | ||
1100 | 276 | def test_ceph_config(self): | 332 | def test_300_ceph_radosgw_config(self): |
1101 | 277 | """Verify the data in the ceph config file.""" | 333 | """Verify the data in the ceph config file.""" |
1102 | 334 | u.log.debug('Checking ceph config file data...') | ||
1103 | 278 | unit = self.ceph_radosgw_sentry | 335 | unit = self.ceph_radosgw_sentry |
1104 | 279 | conf = '/etc/ceph/ceph.conf' | 336 | conf = '/etc/ceph/ceph.conf' |
1105 | 280 | keystone_sentry = self.keystone_sentry | 337 | keystone_sentry = self.keystone_sentry |
1106 | @@ -309,11 +366,153 @@ | |||
1107 | 309 | message = "ceph config error: {}".format(ret) | 366 | message = "ceph config error: {}".format(ret) |
1108 | 310 | amulet.raise_status(amulet.FAIL, msg=message) | 367 | amulet.raise_status(amulet.FAIL, msg=message) |
1109 | 311 | 368 | ||
1118 | 312 | def test_restart_on_config_change(self): | 369 | def test_302_cinder_rbd_config(self): |
1119 | 313 | """Verify the specified services are restarted on config change.""" | 370 | """Verify the cinder config file data regarding ceph.""" |
1120 | 314 | # NOTE(coreycb): Test not implemented but should it be? ceph-radosgw | 371 | u.log.debug('Checking cinder (rbd) config file data...') |
1121 | 315 | # svcs aren't restarted by charm after config change | 372 | unit = self.cinder_sentry |
1122 | 316 | # Should they be restarted? | 373 | conf = '/etc/cinder/cinder.conf' |
1123 | 317 | if self._get_openstack_release() >= self.precise_essex: | 374 | expected = { |
1124 | 318 | u.log.error("Test not implemented") | 375 | 'DEFAULT': { |
1125 | 319 | return | 376 | 'volume_driver': 'cinder.volume.drivers.rbd.RBDDriver' |
1126 | 377 | } | ||
1127 | 378 | } | ||
1128 | 379 | for section, pairs in expected.iteritems(): | ||
1129 | 380 | ret = u.validate_config_data(unit, conf, section, pairs) | ||
1130 | 381 | if ret: | ||
1131 | 382 | message = "cinder (rbd) config error: {}".format(ret) | ||
1132 | 383 | amulet.raise_status(amulet.FAIL, msg=message) | ||
1133 | 384 | |||
1134 | 385 | def test_304_glance_rbd_config(self): | ||
1135 | 386 | """Verify the glance config file data regarding ceph.""" | ||
1136 | 387 | u.log.debug('Checking glance (rbd) config file data...') | ||
1137 | 388 | unit = self.glance_sentry | ||
1138 | 389 | conf = '/etc/glance/glance-api.conf' | ||
1139 | 390 | config = { | ||
1140 | 391 | 'default_store': 'rbd', | ||
1141 | 392 | 'rbd_store_ceph_conf': '/etc/ceph/ceph.conf', | ||
1142 | 393 | 'rbd_store_user': 'glance', | ||
1143 | 394 | 'rbd_store_pool': 'glance', | ||
1144 | 395 | 'rbd_store_chunk_size': '8' | ||
1145 | 396 | } | ||
1146 | 397 | |||
1147 | 398 | if self._get_openstack_release() >= self.trusty_kilo: | ||
1148 | 399 | # Kilo or later | ||
1149 | 400 | config['stores'] = ('glance.store.filesystem.Store,' | ||
1150 | 401 | 'glance.store.http.Store,' | ||
1151 | 402 | 'glance.store.rbd.Store') | ||
1152 | 403 | section = 'glance_store' | ||
1153 | 404 | else: | ||
1154 | 405 | # Juno or earlier | ||
1155 | 406 | section = 'DEFAULT' | ||
1156 | 407 | |||
1157 | 408 | expected = {section: config} | ||
1158 | 409 | for section, pairs in expected.iteritems(): | ||
1159 | 410 | ret = u.validate_config_data(unit, conf, section, pairs) | ||
1160 | 411 | if ret: | ||
1161 | 412 | message = "glance (rbd) config error: {}".format(ret) | ||
1162 | 413 | amulet.raise_status(amulet.FAIL, msg=message) | ||
1163 | 414 | |||
1164 | 415 | def test_306_nova_rbd_config(self): | ||
1165 | 416 | """Verify the nova config file data regarding ceph.""" | ||
1166 | 417 | u.log.debug('Checking nova (rbd) config file data...') | ||
1167 | 418 | unit = self.nova_sentry | ||
1168 | 419 | conf = '/etc/nova/nova.conf' | ||
1169 | 420 | expected = { | ||
1170 | 421 | 'libvirt': { | ||
1171 | 422 | 'rbd_pool': 'nova', | ||
1172 | 423 | 'rbd_user': 'nova-compute', | ||
1173 | 424 | 'rbd_secret_uuid': u.not_null | ||
1174 | 425 | } | ||
1175 | 426 | } | ||
1176 | 427 | for section, pairs in expected.iteritems(): | ||
1177 | 428 | ret = u.validate_config_data(unit, conf, section, pairs) | ||
1178 | 429 | if ret: | ||
1179 | 430 | message = "nova (rbd) config error: {}".format(ret) | ||
1180 | 431 | amulet.raise_status(amulet.FAIL, msg=message) | ||
1181 | 432 | |||
1182 | 433 | def test_400_ceph_check_osd_pools(self): | ||
1183 | 434 | """Check osd pools on all ceph units, expect them to be | ||
1184 | 435 | identical, and expect specific pools to be present.""" | ||
1185 | 436 | u.log.debug('Checking pools on ceph units...') | ||
1186 | 437 | |||
1187 | 438 | expected_pools = self.get_ceph_expected_pools(radosgw=True) | ||
1188 | 439 | results = [] | ||
1189 | 440 | sentries = [ | ||
1190 | 441 | self.ceph_radosgw_sentry, | ||
1191 | 442 | self.ceph0_sentry, | ||
1192 | 443 | self.ceph1_sentry, | ||
1193 | 444 | self.ceph2_sentry | ||
1194 | 445 | ] | ||
1195 | 446 | |||
1196 | 447 | # Check for presence of expected pools on each unit | ||
1197 | 448 | u.log.debug('Expected pools: {}'.format(expected_pools)) | ||
1198 | 449 | for sentry_unit in sentries: | ||
1199 | 450 | pools = u.get_ceph_pools(sentry_unit) | ||
1200 | 451 | results.append(pools) | ||
1201 | 452 | |||
1202 | 453 | for expected_pool in expected_pools: | ||
1203 | 454 | if expected_pool not in pools: | ||
1204 | 455 | msg = ('{} does not have pool: ' | ||
1205 | 456 | '{}'.format(sentry_unit.info['unit_name'], | ||
1206 | 457 | expected_pool)) | ||
1207 | 458 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1208 | 459 | u.log.debug('{} has (at least) the expected ' | ||
1209 | 460 | 'pools.'.format(sentry_unit.info['unit_name'])) | ||
1210 | 461 | |||
1211 | 462 | # Check that all units returned the same pool name:id data | ||
1212 | 463 | ret = u.validate_list_of_identical_dicts(results) | ||
1213 | 464 | if ret: | ||
1214 | 465 | u.log.debug('Pool list results: {}'.format(results)) | ||
1215 | 466 | msg = ('{}; Pool list results are not identical on all ' | ||
1216 | 467 | 'ceph units.'.format(ret)) | ||
1217 | 468 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1218 | 469 | else: | ||
1219 | 470 | u.log.debug('Pool list on all ceph units produced the ' | ||
1220 | 471 | 'same results (OK).') | ||
1221 | 472 | |||
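Reviewer note: the cross-unit comparison in test_400 works by hashing each unit's pool name:id map — two dicts with the same items hash identically when frozen. A minimal sketch of that check (mirroring `validate_list_of_identical_dicts` in the tests/charmhelpers sync, but returning a bool for brevity):

```python
def all_dicts_identical(dicts):
    """Return True when every dict in the list has identical items.

    Values must be hashable (pool name -> id ints are).
    """
    hashes = {hash(frozenset(d.items())) for d in dicts}
    # One distinct hash (or an empty list) means no disagreement
    return len(hashes) <= 1
```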
1222 | 473 | def test_402_swift_api_connection(self): | ||
1223 | 474 | """Simple api call to confirm basic service functionality""" | ||
1224 | 475 | u.log.debug('Checking basic radosgw functionality via swift api...') | ||
1225 | 476 | headers, containers = self.swift.get_account() | ||
1226 | 477 | assert('content-type' in headers.keys()) | ||
1227 | 478 | assert(containers == []) | ||
1228 | 479 | |||
1229 | 480 | def test_498_radosgw_cmds_exit_zero(self): | ||
1230 | 481 | """Check basic functionality of radosgw cli commands against | ||
1231 | 482 | the ceph_radosgw unit.""" | ||
1232 | 483 | sentry_units = [self.ceph_radosgw_sentry] | ||
1233 | 484 | commands = [ | ||
1234 | 485 | 'sudo radosgw-admin regions list', | ||
1235 | 486 | 'sudo radosgw-admin bucket list', | ||
1236 | 487 | 'sudo radosgw-admin zone list', | ||
1237 | 488 | 'sudo radosgw-admin metadata list', | ||
1238 | 489 | 'sudo radosgw-admin gc list' | ||
1239 | 490 | ] | ||
1240 | 491 | ret = u.check_commands_on_units(commands, sentry_units) | ||
1241 | 492 | if ret: | ||
1242 | 493 | amulet.raise_status(amulet.FAIL, msg=ret) | ||
1243 | 494 | |||
1244 | 495 | def test_499_ceph_cmds_exit_zero(self): | ||
1245 | 496 | """Check basic functionality of ceph cli commands against | ||
1246 | 497 | all ceph units.""" | ||
1247 | 498 | sentry_units = [ | ||
1248 | 499 | self.ceph_radosgw_sentry, | ||
1249 | 500 | self.ceph0_sentry, | ||
1250 | 501 | self.ceph1_sentry, | ||
1251 | 502 | self.ceph2_sentry | ||
1252 | 503 | ] | ||
1253 | 504 | commands = [ | ||
1254 | 505 | 'sudo ceph health', | ||
1255 | 506 | 'sudo ceph mds stat', | ||
1256 | 507 | 'sudo ceph pg stat', | ||
1257 | 508 | 'sudo ceph osd stat', | ||
1258 | 509 | 'sudo ceph mon stat', | ||
1259 | 510 | ] | ||
1260 | 511 | ret = u.check_commands_on_units(commands, sentry_units) | ||
1261 | 512 | if ret: | ||
1262 | 513 | amulet.raise_status(amulet.FAIL, msg=ret) | ||
1263 | 514 | |||
1264 | 515 | # Note(beisner): need to add basic object store functional checks. | ||
1265 | 516 | |||
1266 | 517 | # FYI: No restart check as ceph services do not restart | ||
1267 | 518 | # when charm config changes, unless monitor count increases. | ||
1268 | 320 | 519 | ||
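Reviewer note: the `validate_config_data` update in the tests/charmhelpers sync lets an expected config value be either a literal or a callable predicate (such as `u.not_null`), so tests like test_306 can assert "present and non-empty" without pinning an exact value like `rbd_secret_uuid`. The comparison logic, sketched standalone with an assumed `not_null` predicate:

```python
def check_expected(actual, expected):
    """Compare an actual config value against a literal or a predicate.

    `expected` may be a str/bool/int (compared for equality) or a
    callable taking the actual value and returning a bool.
    """
    if isinstance(expected, (str, bool, int)):
        # Explicit value: require an exact match
        return actual == expected
    # Function pointer, e.g. not_null or valid_ip
    return bool(expected(actual))


def not_null(value):
    """Predicate: the value exists and is non-empty."""
    return value is not None and value != ''
```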
1269 | === modified file 'tests/charmhelpers/contrib/amulet/utils.py' | |||
1270 | --- tests/charmhelpers/contrib/amulet/utils.py 2015-06-04 23:06:40 +0000 | |||
1271 | +++ tests/charmhelpers/contrib/amulet/utils.py 2015-07-01 14:47:24 +0000 | |||
1272 | @@ -14,14 +14,17 @@ | |||
1273 | 14 | # You should have received a copy of the GNU Lesser General Public License | 14 | # You should have received a copy of the GNU Lesser General Public License |
1274 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1275 | 16 | 16 | ||
1276 | 17 | import amulet | ||
1277 | 17 | import ConfigParser | 18 | import ConfigParser |
1278 | 19 | import distro_info | ||
1279 | 18 | import io | 20 | import io |
1280 | 19 | import logging | 21 | import logging |
1281 | 22 | import os | ||
1282 | 20 | import re | 23 | import re |
1283 | 24 | import six | ||
1284 | 21 | import sys | 25 | import sys |
1285 | 22 | import time | 26 | import time |
1288 | 23 | 27 | import urlparse | |
1287 | 24 | import six | ||
1289 | 25 | 28 | ||
1290 | 26 | 29 | ||
1291 | 27 | class AmuletUtils(object): | 30 | class AmuletUtils(object): |
1292 | @@ -33,6 +36,7 @@ | |||
1293 | 33 | 36 | ||
1294 | 34 | def __init__(self, log_level=logging.ERROR): | 37 | def __init__(self, log_level=logging.ERROR): |
1295 | 35 | self.log = self.get_logger(level=log_level) | 38 | self.log = self.get_logger(level=log_level) |
1296 | 39 | self.ubuntu_releases = self.get_ubuntu_releases() | ||
1297 | 36 | 40 | ||
1298 | 37 | def get_logger(self, name="amulet-logger", level=logging.DEBUG): | 41 | def get_logger(self, name="amulet-logger", level=logging.DEBUG): |
1299 | 38 | """Get a logger object that will log to stdout.""" | 42 | """Get a logger object that will log to stdout.""" |
1300 | @@ -70,12 +74,44 @@ | |||
1301 | 70 | else: | 74 | else: |
1302 | 71 | return False | 75 | return False |
1303 | 72 | 76 | ||
1304 | 77 | def get_ubuntu_release_from_sentry(self, sentry_unit): | ||
1305 | 78 | """Get Ubuntu release codename from sentry unit. | ||
1306 | 79 | |||
1307 | 80 | :param sentry_unit: amulet sentry/service unit pointer | ||
1308 | 81 | :returns: list of strings - release codename, failure message | ||
1309 | 82 | """ | ||
1310 | 83 | msg = None | ||
1311 | 84 | cmd = 'lsb_release -cs' | ||
1312 | 85 | release, code = sentry_unit.run(cmd) | ||
1313 | 86 | if code == 0: | ||
1314 | 87 | self.log.debug('{} lsb_release: {}'.format( | ||
1315 | 88 | sentry_unit.info['unit_name'], release)) | ||
1316 | 89 | else: | ||
1317 | 90 | msg = ('{} `{}` returned {} ' | ||
1318 | 91 | '{}'.format(sentry_unit.info['unit_name'], | ||
1319 | 92 | cmd, release, code)) | ||
1320 | 93 | if release not in self.ubuntu_releases: | ||
1321 | 94 | msg = ("Release ({}) not found in Ubuntu releases " | ||
1322 | 95 | "({})".format(release, self.ubuntu_releases)) | ||
1323 | 96 | return release, msg | ||
1324 | 97 | |||
1325 | 73 | def validate_services(self, commands): | 98 | def validate_services(self, commands): |
1329 | 74 | """Validate services. | 99 | """Validate that lists of commands succeed on service units. Can be |
1330 | 75 | 100 | used to verify system services are running on the corresponding | |
1328 | 76 | Verify the specified services are running on the corresponding | ||
1331 | 77 | service units. | 101 | service units. |
1333 | 78 | """ | 102 | |
1334 | 103 | :param commands: dict with sentry keys and arbitrary command list vals | ||
1335 | 104 | :returns: None if successful, Failure string message otherwise | ||
1336 | 105 | """ | ||
1337 | 106 | self.log.debug('Checking status of system services...') | ||
1338 | 107 | |||
1339 | 108 | # /!\ DEPRECATION WARNING (beisner): | ||
1340 | 109 | # New and existing tests should be rewritten to use | ||
1341 | 110 | # validate_services_by_name() as it is aware of init systems. | ||
1342 | 111 | self.log.warn('/!\\ DEPRECATION WARNING: use ' | ||
1343 | 112 | 'validate_services_by_name instead of validate_services ' | ||
1344 | 113 | 'due to init system differences.') | ||
1345 | 114 | |||
1346 | 79 | for k, v in six.iteritems(commands): | 115 | for k, v in six.iteritems(commands): |
1347 | 80 | for cmd in v: | 116 | for cmd in v: |
1348 | 81 | output, code = k.run(cmd) | 117 | output, code = k.run(cmd) |
1349 | @@ -86,6 +122,41 @@ | |||
1350 | 86 | return "command `{}` returned {}".format(cmd, str(code)) | 122 | return "command `{}` returned {}".format(cmd, str(code)) |
1351 | 87 | return None | 123 | return None |
1352 | 88 | 124 | ||
1353 | 125 | def validate_services_by_name(self, sentry_services): | ||
1354 | 126 | """Validate system service status by service name, automatically | ||
1355 | 127 | detecting init system based on Ubuntu release codename. | ||
1356 | 128 | |||
1357 | 129 | :param sentry_services: dict with sentry keys and svc list values | ||
1358 | 130 | :returns: None if successful, Failure string message otherwise | ||
1359 | 131 | """ | ||
1360 | 132 | self.log.debug('Checking status of system services...') | ||
1361 | 133 | |||
1362 | 134 | # Point at which systemd became a thing | ||
1363 | 135 | systemd_switch = self.ubuntu_releases.index('vivid') | ||
1364 | 136 | |||
1365 | 137 | for sentry_unit, services_list in six.iteritems(sentry_services): | ||
1366 | 138 | # Get lsb_release codename from unit | ||
1367 | 139 | release, ret = self.get_ubuntu_release_from_sentry(sentry_unit) | ||
1368 | 140 | if ret: | ||
1369 | 141 | return ret | ||
1370 | 142 | |||
1371 | 143 | for service_name in services_list: | ||
1372 | 144 | if (self.ubuntu_releases.index(release) >= systemd_switch or | ||
1373 | 145 | service_name == "rabbitmq-server"): | ||
1374 | 146 | # init is systemd | ||
1375 | 147 | cmd = 'sudo service {} status'.format(service_name) | ||
1376 | 148 | elif self.ubuntu_releases.index(release) < systemd_switch: | ||
1377 | 149 | # init is upstart | ||
1378 | 150 | cmd = 'sudo status {}'.format(service_name) | ||
1379 | 151 | |||
1380 | 152 | output, code = sentry_unit.run(cmd) | ||
1381 | 153 | self.log.debug('{} `{}` returned ' | ||
1382 | 154 | '{}'.format(sentry_unit.info['unit_name'], | ||
1383 | 155 | cmd, code)) | ||
1384 | 156 | if code != 0: | ||
1385 | 157 | return "command `{}` returned {}".format(cmd, str(code)) | ||
1386 | 158 | return None | ||
1387 | 159 | |||
1388 | 89 | def _get_config(self, unit, filename): | 160 | def _get_config(self, unit, filename): |
1389 | 90 | """Get a ConfigParser object for parsing a unit's config file.""" | 161 | """Get a ConfigParser object for parsing a unit's config file.""" |
1390 | 91 | file_contents = unit.file_contents(filename) | 162 | file_contents = unit.file_contents(filename) |
1391 | @@ -103,7 +174,15 @@ | |||
1392 | 103 | 174 | ||
1393 | 104 | Verify that the specified section of the config file contains | 175 | Verify that the specified section of the config file contains |
1394 | 105 | the expected option key:value pairs. | 176 | the expected option key:value pairs. |
1395 | 177 | |||
1396 | 178 | Compare expected dictionary data vs actual dictionary data. | ||
1397 | 179 | The values in the 'expected' dictionary can be strings, bools, ints, | ||
1398 | 180 | longs, or can be a function that evaluates a variable and returns a | ||
1399 | 181 | bool. | ||
1400 | 106 | """ | 182 | """ |
1401 | 183 | self.log.debug('Validating config file data ({} in {} on {})' | ||
1402 | 184 | '...'.format(section, config_file, | ||
1403 | 185 | sentry_unit.info['unit_name'])) | ||
1404 | 107 | config = self._get_config(sentry_unit, config_file) | 186 | config = self._get_config(sentry_unit, config_file) |
1405 | 108 | 187 | ||
1406 | 109 | if section != 'DEFAULT' and not config.has_section(section): | 188 | if section != 'DEFAULT' and not config.has_section(section): |
1407 | @@ -112,9 +191,20 @@ | |||
1408 | 112 | for k in expected.keys(): | 191 | for k in expected.keys(): |
1409 | 113 | if not config.has_option(section, k): | 192 | if not config.has_option(section, k): |
1410 | 114 | return "section [{}] is missing option {}".format(section, k) | 193 | return "section [{}] is missing option {}".format(section, k) |
1412 | 115 | if config.get(section, k) != expected[k]: | 194 | |
1413 | 195 | actual = config.get(section, k) | ||
1414 | 196 | v = expected[k] | ||
1415 | 197 | if (isinstance(v, six.string_types) or | ||
1416 | 198 | isinstance(v, bool) or | ||
1417 | 199 | isinstance(v, six.integer_types)): | ||
1418 | 200 | # handle explicit values | ||
1419 | 201 | if actual != v: | ||
1420 | 202 | return "section [{}] {}:{} != expected {}:{}".format( | ||
1421 | 203 | section, k, actual, k, expected[k]) | ||
1422 | 204 | # handle function pointers, such as not_null or valid_ip | ||
1423 | 205 | elif not v(actual): | ||
1424 | 116 | return "section [{}] {}:{} != expected {}:{}".format( | 206 | return "section [{}] {}:{} != expected {}:{}".format( |
1426 | 117 | section, k, config.get(section, k), k, expected[k]) | 207 | section, k, actual, k, expected[k]) |
1427 | 118 | return None | 208 | return None |
1428 | 119 | 209 | ||
1429 | 120 | def _validate_dict_data(self, expected, actual): | 210 | def _validate_dict_data(self, expected, actual): |
1430 | @@ -122,7 +212,7 @@ | |||
1431 | 122 | 212 | ||
1432 | 123 | Compare expected dictionary data vs actual dictionary data. | 213 | Compare expected dictionary data vs actual dictionary data. |
1433 | 124 | The values in the 'expected' dictionary can be strings, bools, ints, | 214 | The values in the 'expected' dictionary can be strings, bools, ints, |
1435 | 125 | longs, or can be a function that evaluate a variable and returns a | 215 | longs, or can be a function that evaluates a variable and returns a |
1436 | 126 | bool. | 216 | bool. |
1437 | 127 | """ | 217 | """ |
1438 | 128 | self.log.debug('actual: {}'.format(repr(actual))) | 218 | self.log.debug('actual: {}'.format(repr(actual))) |
1439 | @@ -133,8 +223,10 @@ | |||
1440 | 133 | if (isinstance(v, six.string_types) or | 223 | if (isinstance(v, six.string_types) or |
1441 | 134 | isinstance(v, bool) or | 224 | isinstance(v, bool) or |
1442 | 135 | isinstance(v, six.integer_types)): | 225 | isinstance(v, six.integer_types)): |
1443 | 226 | # handle explicit values | ||
1444 | 136 | if v != actual[k]: | 227 | if v != actual[k]: |
1445 | 137 | return "{}:{}".format(k, actual[k]) | 228 | return "{}:{}".format(k, actual[k]) |
1446 | 229 | # handle function pointers, such as not_null or valid_ip | ||
1447 | 138 | elif not v(actual[k]): | 230 | elif not v(actual[k]): |
1448 | 139 | return "{}:{}".format(k, actual[k]) | 231 | return "{}:{}".format(k, actual[k]) |
1449 | 140 | else: | 232 | else: |
1450 | @@ -321,3 +413,121 @@ | |||
1451 | 321 | 413 | ||
1452 | 322 | def endpoint_error(self, name, data): | 414 | def endpoint_error(self, name, data): |
1453 | 323 | return 'unexpected endpoint data in {} - {}'.format(name, data) | 415 | return 'unexpected endpoint data in {} - {}'.format(name, data) |
1454 | 416 | |||
1455 | 417 | def get_ubuntu_releases(self): | ||
1456 | 418 | """Return a list of all Ubuntu releases in order of release.""" | ||
1457 | 419 | _d = distro_info.UbuntuDistroInfo() | ||
1458 | 420 | _release_list = _d.all | ||
1459 | 421 | self.log.debug('Ubuntu release list: {}'.format(_release_list)) | ||
1460 | 422 | return _release_list | ||
1461 | 423 | |||
1462 | 424 | def file_to_url(self, file_rel_path): | ||
1463 | 425 | """Convert a relative file path to a file URL.""" | ||
1464 | 426 | _abs_path = os.path.abspath(file_rel_path) | ||
1465 | 427 | return urlparse.urlparse(_abs_path, scheme='file').geturl() | ||
1466 | 428 | |||
1467 | 429 | def check_commands_on_units(self, commands, sentry_units): | ||
1468 | 430 | """Check that all commands in a list exit zero on all | ||
1469 | 431 | sentry units in a list. | ||
1470 | 432 | |||
1471 | 433 | :param commands: list of bash commands | ||
1472 | 434 | :param sentry_units: list of sentry unit pointers | ||
1473 | 435 | :returns: None if successful; Failure message otherwise | ||
1474 | 436 | """ | ||
1475 | 437 | self.log.debug('Checking exit codes for {} commands on {} ' | ||
1476 | 438 | 'sentry units...'.format(len(commands), | ||
1477 | 439 | len(sentry_units))) | ||
1478 | 440 | for sentry_unit in sentry_units: | ||
1479 | 441 | for cmd in commands: | ||
1480 | 442 | output, code = sentry_unit.run(cmd) | ||
1481 | 443 | if code == 0: | ||
1482 | 444 | self.log.debug('{} `{}` returned {} ' | ||
1483 | 445 | '(OK)'.format(sentry_unit.info['unit_name'], | ||
1484 | 446 | cmd, code)) | ||
1485 | 447 | else: | ||
1486 | 448 | return ('{} `{}` returned {} ' | ||
1487 | 449 | '{}'.format(sentry_unit.info['unit_name'], | ||
1488 | 450 | cmd, code, output)) | ||
1489 | 451 | return None | ||
1490 | 452 | |||
1491 | 453 | def get_process_id_list(self, sentry_unit, process_name): | ||
1492 | 454 | """Get a list of process ID(s) from a single sentry juju unit | ||
1493 | 455 | for a single process name. | ||
1494 | 456 | |||
1495 | 457 | :param sentry_unit: Pointer to amulet sentry instance (juju unit) | ||
1496 | 458 | :param process_name: Process name | ||
1497 | 459 | :returns: List of process IDs | ||
1498 | 460 | """ | ||
1499 | 461 | cmd = 'pidof {}'.format(process_name) | ||
1500 | 462 | output, code = sentry_unit.run(cmd) | ||
1501 | 463 | if code != 0: | ||
1502 | 464 | msg = ('{} `{}` returned {} ' | ||
1503 | 465 | '{}'.format(sentry_unit.info['unit_name'], | ||
1504 | 466 | cmd, code, output)) | ||
1505 | 467 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1506 | 468 | return str(output).split() | ||
1507 | 469 | |||
1508 | 470 | def get_unit_process_ids(self, unit_processes): | ||
1509 | 471 | """Construct a dict containing unit sentries, process names, and | ||
1510 | 472 | process IDs.""" | ||
1511 | 473 | pid_dict = {} | ||
1512 | 474 | for sentry_unit, process_list in unit_processes.iteritems(): | ||
1513 | 475 | pid_dict[sentry_unit] = {} | ||
1514 | 476 | for process in process_list: | ||
1515 | 477 | pids = self.get_process_id_list(sentry_unit, process) | ||
1516 | 478 | pid_dict[sentry_unit].update({process: pids}) | ||
1517 | 479 | return pid_dict | ||
1518 | 480 | |||
1519 | 481 | def validate_unit_process_ids(self, expected, actual): | ||
1520 | 482 | """Validate process id quantities for services on units.""" | ||
1521 | 483 | self.log.debug('Checking units for running processes...') | ||
1522 | 484 | self.log.debug('Expected PIDs: {}'.format(expected)) | ||
1523 | 485 | self.log.debug('Actual PIDs: {}'.format(actual)) | ||
1524 | 486 | |||
1525 | 487 | if len(actual) != len(expected): | ||
1526 | 488 | return ('Unit count mismatch. expected, actual: {}, ' | ||
1527 | 489 | '{} '.format(len(expected), len(actual))) | ||
1528 | 490 | |||
1529 | 491 | for (e_sentry, e_proc_names) in expected.iteritems(): | ||
1530 | 492 | e_sentry_name = e_sentry.info['unit_name'] | ||
1531 | 493 | if e_sentry in actual.keys(): | ||
1532 | 494 | a_proc_names = actual[e_sentry] | ||
1533 | 495 | else: | ||
1534 | 496 | return ('Expected sentry ({}) not found in actual dict data.' | ||
1535 | 497 | '{}'.format(e_sentry_name, e_sentry)) | ||
1536 | 498 | |||
1537 | 499 | if len(e_proc_names.keys()) != len(a_proc_names.keys()): | ||
1538 | 500 | return ('Process name count mismatch. expected, actual: {}, ' | ||
1539 | 501 | '{}'.format(len(expected), len(actual))) | ||
1540 | 502 | |||
1541 | 503 | for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \ | ||
1542 | 504 | zip(e_proc_names.items(), a_proc_names.items()): | ||
1543 | 505 | if e_proc_name != a_proc_name: | ||
1544 | 506 | return ('Process name mismatch. expected, actual: {}, ' | ||
1545 | 507 | '{}'.format(e_proc_name, a_proc_name)) | ||
1546 | 508 | |||
1547 | 509 | a_pids_length = len(a_pids) | ||
1548 | 510 | if e_pids_length != a_pids_length: | ||
1549 | 511 | return ('PID count mismatch. {} ({}) expected, actual: ' | ||
1550 | 512 | '{}, {} ({})'.format(e_sentry_name, e_proc_name, | ||
1551 | 513 | e_pids_length, a_pids_length, | ||
1552 | 514 | a_pids)) | ||
1553 | 515 | else: | ||
1554 | 516 | self.log.debug('PID check OK: {} {} {}: ' | ||
1555 | 517 | '{}'.format(e_sentry_name, e_proc_name, | ||
1556 | 518 | e_pids_length, a_pids)) | ||
1557 | 519 | return None | ||
1558 | 520 | |||
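The expected-vs-actual comparison above can be exercised without a deployed environment. A minimal standalone sketch (plain strings stand in for sentry objects; the names are illustrative, not part of charmhelpers):

```python
# Sketch of the comparison performed by validate_unit_process_ids:
# expected maps unit -> {process name: expected PID count}, actual maps
# unit -> {process name: list of observed PIDs}.
def check_pid_counts(expected, actual):
    """Return None if every process has its expected PID count, else an error."""
    if set(expected) != set(actual):
        return 'Unit mismatch: {} vs {}'.format(sorted(expected), sorted(actual))
    for unit, procs in expected.items():
        for proc, want in procs.items():
            have = len(actual[unit].get(proc, []))
            if want != have:
                return 'PID count mismatch on {}/{}: {} != {}'.format(
                    unit, proc, want, have)
    return None

expected = {'ceph-radosgw/0': {'radosgw': 1, 'haproxy': 2}}
actual = {'ceph-radosgw/0': {'radosgw': [1201], 'haproxy': [877, 878]}}
assert check_pid_counts(expected, actual) is None
```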
1559 | 521 | def validate_list_of_identical_dicts(self, list_of_dicts): | ||
1560 | 522 | """Check that all dicts within a list are identical.""" | ||
1561 | 523 | hashes = [] | ||
1562 | 524 | for _dict in list_of_dicts: | ||
1563 | 525 | hashes.append(hash(frozenset(_dict.items()))) | ||
1564 | 526 | |||
1565 | 527 | self.log.debug('Hashes: {}'.format(hashes)) | ||
1566 | 528 | if len(set(hashes)) == 1: | ||
1567 | 529 | self.log.debug('Dicts within list are identical') | ||
1568 | 530 | else: | ||
1569 | 531 | return 'Dicts within list are not identical' | ||
1570 | 532 | |||
1571 | 533 | return None | ||
1572 | 324 | 534 | ||
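The frozenset-hash technique used by validate_list_of_identical_dicts can be condensed to a one-liner; note it only works when all dict values are hashable. A minimal sketch:

```python
# Hashing frozenset(d.items()) collapses equal dicts to a single hash
# value, so one distinct hash means every dict in the list is identical.
def all_identical(list_of_dicts):
    return len({hash(frozenset(d.items())) for d in list_of_dicts}) <= 1

configs = [{'auth_port': '35357', 'use_ssl': 'no'},
           {'auth_port': '35357', 'use_ssl': 'no'}]
assert all_identical(configs)
assert not all_identical(configs + [{'auth_port': '5000', 'use_ssl': 'no'}])
```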
1573 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py' | |||
1574 | --- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-04 23:06:40 +0000 | |||
1575 | +++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-01 14:47:24 +0000 | |||
1576 | @@ -79,9 +79,9 @@ | |||
1577 | 79 | services.append(this_service) | 79 | services.append(this_service) |
1578 | 80 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', | 80 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', |
1579 | 81 | 'ceph-osd', 'ceph-radosgw'] | 81 | 'ceph-osd', 'ceph-radosgw'] |
1583 | 82 | # Openstack subordinate charms do not expose an origin option as that | 82 | # Most OpenStack subordinate charms do not expose an origin option |
1584 | 83 | # is controlled by the principle | 83 | # as that is controlled by the principal.
1585 | 84 | ignore = ['neutron-openvswitch'] | 84 | ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch'] |
1586 | 85 | 85 | ||
1587 | 86 | if self.openstack: | 86 | if self.openstack: |
1588 | 87 | for svc in services: | 87 | for svc in services: |
1589 | @@ -110,7 +110,8 @@ | |||
1590 | 110 | (self.precise_essex, self.precise_folsom, self.precise_grizzly, | 110 | (self.precise_essex, self.precise_folsom, self.precise_grizzly, |
1591 | 111 | self.precise_havana, self.precise_icehouse, | 111 | self.precise_havana, self.precise_icehouse, |
1592 | 112 | self.trusty_icehouse, self.trusty_juno, self.utopic_juno, | 112 | self.trusty_icehouse, self.trusty_juno, self.utopic_juno, |
1594 | 113 | self.trusty_kilo, self.vivid_kilo) = range(10) | 113 | self.trusty_kilo, self.vivid_kilo, self.trusty_liberty, |
1595 | 114 | self.wily_liberty) = range(12) | ||
1596 | 114 | 115 | ||
1597 | 115 | releases = { | 116 | releases = { |
1598 | 116 | ('precise', None): self.precise_essex, | 117 | ('precise', None): self.precise_essex, |
1599 | @@ -121,8 +122,10 @@ | |||
1600 | 121 | ('trusty', None): self.trusty_icehouse, | 122 | ('trusty', None): self.trusty_icehouse, |
1601 | 122 | ('trusty', 'cloud:trusty-juno'): self.trusty_juno, | 123 | ('trusty', 'cloud:trusty-juno'): self.trusty_juno, |
1602 | 123 | ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo, | 124 | ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo, |
1603 | 125 | ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty, | ||
1604 | 124 | ('utopic', None): self.utopic_juno, | 126 | ('utopic', None): self.utopic_juno, |
1606 | 125 | ('vivid', None): self.vivid_kilo} | 127 | ('vivid', None): self.vivid_kilo, |
1607 | 128 | ('wily', None): self.wily_liberty} | ||
1608 | 126 | return releases[(self.series, self.openstack)] | 129 | return releases[(self.series, self.openstack)] |
1609 | 127 | 130 | ||
1610 | 128 | def _get_openstack_release_string(self): | 131 | def _get_openstack_release_string(self): |
1611 | @@ -138,9 +141,43 @@ | |||
1612 | 138 | ('trusty', 'icehouse'), | 141 | ('trusty', 'icehouse'), |
1613 | 139 | ('utopic', 'juno'), | 142 | ('utopic', 'juno'), |
1614 | 140 | ('vivid', 'kilo'), | 143 | ('vivid', 'kilo'), |
1615 | 144 | ('wily', 'liberty'), | ||
1616 | 141 | ]) | 145 | ]) |
1617 | 142 | if self.openstack: | 146 | if self.openstack: |
1618 | 143 | os_origin = self.openstack.split(':')[1] | 147 | os_origin = self.openstack.split(':')[1] |
1619 | 144 | return os_origin.split('%s-' % self.series)[1].split('/')[0] | 148 | return os_origin.split('%s-' % self.series)[1].split('/')[0] |
1620 | 145 | else: | 149 | else: |
1621 | 146 | return releases[self.series] | 150 | return releases[self.series] |
1622 | 151 | |||
1623 | 152 | def get_ceph_expected_pools(self, radosgw=False): | ||
1624 | 153 | """Return a list of expected ceph pools in a ceph + cinder + glance | ||
1625 | 154 | test scenario, based on OpenStack release and whether ceph radosgw | ||
1626 | 155 | is flagged as present or not.""" | ||
1627 | 156 | |||
1628 | 157 | if self._get_openstack_release() >= self.trusty_kilo: | ||
1629 | 158 | # Kilo or later | ||
1630 | 159 | pools = [ | ||
1631 | 160 | 'rbd', | ||
1632 | 161 | 'cinder', | ||
1633 | 162 | 'glance' | ||
1634 | 163 | ] | ||
1635 | 164 | else: | ||
1636 | 165 | # Juno or earlier | ||
1637 | 166 | pools = [ | ||
1638 | 167 | 'data', | ||
1639 | 168 | 'metadata', | ||
1640 | 169 | 'rbd', | ||
1641 | 170 | 'cinder', | ||
1642 | 171 | 'glance' | ||
1643 | 172 | ] | ||
1644 | 173 | |||
1645 | 174 | if radosgw: | ||
1646 | 175 | pools.extend([ | ||
1647 | 176 | '.rgw.root', | ||
1648 | 177 | '.rgw.control', | ||
1649 | 178 | '.rgw', | ||
1650 | 179 | '.rgw.gc', | ||
1651 | 180 | '.users.uid' | ||
1652 | 181 | ]) | ||
1653 | 182 | |||
1654 | 183 | return pools | ||
1655 | 147 | 184 | ||
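The release gate in get_ceph_expected_pools reduces to list composition: before Kilo, ceph created the legacy 'data' and 'metadata' pools by default, and radosgw adds its own set. A standalone sketch (a boolean stands in for the release comparison):

```python
# Illustrative reimplementation of the pool-list logic; not part of
# charmhelpers itself.
def expected_pools(kilo_or_later, radosgw=False):
    pools = ['rbd', 'cinder', 'glance']
    if not kilo_or_later:
        # Juno and earlier ship default 'data' and 'metadata' pools
        pools = ['data', 'metadata'] + pools
    if radosgw:
        pools += ['.rgw.root', '.rgw.control', '.rgw', '.rgw.gc', '.users.uid']
    return pools

assert expected_pools(True) == ['rbd', 'cinder', 'glance']
assert len(expected_pools(False, radosgw=True)) == 10
```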
1656 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py' | |||
1657 | --- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-01-26 11:53:19 +0000 | |||
1658 | +++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-01 14:47:24 +0000 | |||
1659 | @@ -14,16 +14,20 @@ | |||
1660 | 14 | # You should have received a copy of the GNU Lesser General Public License | 14 | # You should have received a copy of the GNU Lesser General Public License |
1661 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1662 | 16 | 16 | ||
1663 | 17 | import amulet | ||
1664 | 18 | import json | ||
1665 | 17 | import logging | 19 | import logging |
1666 | 18 | import os | 20 | import os |
1667 | 21 | import six | ||
1668 | 19 | import time | 22 | import time |
1669 | 20 | import urllib | 23 | import urllib |
1670 | 21 | 24 | ||
1671 | 25 | import cinderclient.v1.client as cinder_client | ||
1672 | 22 | import glanceclient.v1.client as glance_client | 26 | import glanceclient.v1.client as glance_client |
1673 | 27 | import heatclient.v1.client as heat_client | ||
1674 | 23 | import keystoneclient.v2_0 as keystone_client | 28 | import keystoneclient.v2_0 as keystone_client |
1675 | 24 | import novaclient.v1_1.client as nova_client | 29 | import novaclient.v1_1.client as nova_client |
1678 | 25 | 30 | import swiftclient | |
1677 | 26 | import six | ||
1679 | 27 | 31 | ||
1680 | 28 | from charmhelpers.contrib.amulet.utils import ( | 32 | from charmhelpers.contrib.amulet.utils import ( |
1681 | 29 | AmuletUtils | 33 | AmuletUtils |
1682 | @@ -37,7 +41,7 @@ | |||
1683 | 37 | """OpenStack amulet utilities. | 41 | """OpenStack amulet utilities. |
1684 | 38 | 42 | ||
1685 | 39 | This class inherits from AmuletUtils and has additional support | 43 | This class inherits from AmuletUtils and has additional support |
1687 | 40 | that is specifically for use by OpenStack charms. | 44 | that is specifically for use by OpenStack charm tests. |
1688 | 41 | """ | 45 | """ |
1689 | 42 | 46 | ||
1690 | 43 | def __init__(self, log_level=ERROR): | 47 | def __init__(self, log_level=ERROR): |
1691 | @@ -51,6 +55,8 @@ | |||
1692 | 51 | Validate actual endpoint data vs expected endpoint data. The ports | 55 | Validate actual endpoint data vs expected endpoint data. The ports |
1693 | 52 | are used to find the matching endpoint. | 56 | are used to find the matching endpoint. |
1694 | 53 | """ | 57 | """ |
1695 | 58 | self.log.debug('Validating endpoint data...') | ||
1696 | 59 | self.log.debug('actual: {}'.format(repr(endpoints))) | ||
1697 | 54 | found = False | 60 | found = False |
1698 | 55 | for ep in endpoints: | 61 | for ep in endpoints: |
1699 | 56 | self.log.debug('endpoint: {}'.format(repr(ep))) | 62 | self.log.debug('endpoint: {}'.format(repr(ep))) |
1700 | @@ -77,6 +83,7 @@ | |||
1701 | 77 | Validate a list of actual service catalog endpoints vs a list of | 83 | Validate a list of actual service catalog endpoints vs a list of |
1702 | 78 | expected service catalog endpoints. | 84 | expected service catalog endpoints. |
1703 | 79 | """ | 85 | """ |
1704 | 86 | self.log.debug('Validating service catalog endpoint data...') | ||
1705 | 80 | self.log.debug('actual: {}'.format(repr(actual))) | 87 | self.log.debug('actual: {}'.format(repr(actual))) |
1706 | 81 | for k, v in six.iteritems(expected): | 88 | for k, v in six.iteritems(expected): |
1707 | 82 | if k in actual: | 89 | if k in actual: |
1708 | @@ -93,6 +100,7 @@ | |||
1709 | 93 | Validate a list of actual tenant data vs list of expected tenant | 100 | Validate a list of actual tenant data vs list of expected tenant |
1710 | 94 | data. | 101 | data. |
1711 | 95 | """ | 102 | """ |
1712 | 103 | self.log.debug('Validating tenant data...') | ||
1713 | 96 | self.log.debug('actual: {}'.format(repr(actual))) | 104 | self.log.debug('actual: {}'.format(repr(actual))) |
1714 | 97 | for e in expected: | 105 | for e in expected: |
1715 | 98 | found = False | 106 | found = False |
1716 | @@ -114,6 +122,7 @@ | |||
1717 | 114 | Validate a list of actual role data vs a list of expected role | 122 | Validate a list of actual role data vs a list of expected role |
1718 | 115 | data. | 123 | data. |
1719 | 116 | """ | 124 | """ |
1720 | 125 | self.log.debug('Validating role data...') | ||
1721 | 117 | self.log.debug('actual: {}'.format(repr(actual))) | 126 | self.log.debug('actual: {}'.format(repr(actual))) |
1722 | 118 | for e in expected: | 127 | for e in expected: |
1723 | 119 | found = False | 128 | found = False |
1724 | @@ -134,6 +143,7 @@ | |||
1725 | 134 | Validate a list of actual user data vs a list of expected user | 143 | Validate a list of actual user data vs a list of expected user |
1726 | 135 | data. | 144 | data. |
1727 | 136 | """ | 145 | """ |
1728 | 146 | self.log.debug('Validating user data...') | ||
1729 | 137 | self.log.debug('actual: {}'.format(repr(actual))) | 147 | self.log.debug('actual: {}'.format(repr(actual))) |
1730 | 138 | for e in expected: | 148 | for e in expected: |
1731 | 139 | found = False | 149 | found = False |
1732 | @@ -155,17 +165,30 @@ | |||
1733 | 155 | 165 | ||
1734 | 156 | Validate a list of actual flavors vs a list of expected flavors. | 166 | Validate a list of actual flavors vs a list of expected flavors. |
1735 | 157 | """ | 167 | """ |
1736 | 168 | self.log.debug('Validating flavor data...') | ||
1737 | 158 | self.log.debug('actual: {}'.format(repr(actual))) | 169 | self.log.debug('actual: {}'.format(repr(actual))) |
1738 | 159 | act = [a.name for a in actual] | 170 | act = [a.name for a in actual] |
1739 | 160 | return self._validate_list_data(expected, act) | 171 | return self._validate_list_data(expected, act) |
1740 | 161 | 172 | ||
1741 | 162 | def tenant_exists(self, keystone, tenant): | 173 | def tenant_exists(self, keystone, tenant): |
1742 | 163 | """Return True if tenant exists.""" | 174 | """Return True if tenant exists.""" |
1743 | 175 | self.log.debug('Checking if tenant exists ({})...'.format(tenant)) | ||
1744 | 164 | return tenant in [t.name for t in keystone.tenants.list()] | 176 | return tenant in [t.name for t in keystone.tenants.list()] |
1745 | 165 | 177 | ||
1746 | 178 | def authenticate_cinder_admin(self, keystone_sentry, username, | ||
1747 | 179 | password, tenant): | ||
1748 | 180 | """Authenticates admin user with cinder.""" | ||
1749 | 181 | # NOTE(beisner): cinder python client doesn't accept tokens. | ||
1750 | 182 | service_ip = \ | ||
1751 | 183 | keystone_sentry.relation('shared-db', | ||
1752 | 184 | 'mysql:shared-db')['private-address'] | ||
1753 | 185 | ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8')) | ||
1754 | 186 | return cinder_client.Client(username, password, tenant, ept) | ||
1755 | 187 | |||
1756 | 166 | def authenticate_keystone_admin(self, keystone_sentry, user, password, | 188 | def authenticate_keystone_admin(self, keystone_sentry, user, password, |
1757 | 167 | tenant): | 189 | tenant): |
1758 | 168 | """Authenticates admin user with the keystone admin endpoint.""" | 190 | """Authenticates admin user with the keystone admin endpoint.""" |
1759 | 191 | self.log.debug('Authenticating keystone admin...') | ||
1760 | 169 | unit = keystone_sentry | 192 | unit = keystone_sentry |
1761 | 170 | service_ip = unit.relation('shared-db', | 193 | service_ip = unit.relation('shared-db', |
1762 | 171 | 'mysql:shared-db')['private-address'] | 194 | 'mysql:shared-db')['private-address'] |
1763 | @@ -175,6 +198,7 @@ | |||
1764 | 175 | 198 | ||
1765 | 176 | def authenticate_keystone_user(self, keystone, user, password, tenant): | 199 | def authenticate_keystone_user(self, keystone, user, password, tenant): |
1766 | 177 | """Authenticates a regular user with the keystone public endpoint.""" | 200 | """Authenticates a regular user with the keystone public endpoint.""" |
1767 | 201 | self.log.debug('Authenticating keystone user ({})...'.format(user)) | ||
1768 | 178 | ep = keystone.service_catalog.url_for(service_type='identity', | 202 | ep = keystone.service_catalog.url_for(service_type='identity', |
1769 | 179 | endpoint_type='publicURL') | 203 | endpoint_type='publicURL') |
1770 | 180 | return keystone_client.Client(username=user, password=password, | 204 | return keystone_client.Client(username=user, password=password, |
1771 | @@ -182,19 +206,49 @@ | |||
1772 | 182 | 206 | ||
1773 | 183 | def authenticate_glance_admin(self, keystone): | 207 | def authenticate_glance_admin(self, keystone): |
1774 | 184 | """Authenticates admin user with glance.""" | 208 | """Authenticates admin user with glance.""" |
1775 | 209 | self.log.debug('Authenticating glance admin...') | ||
1776 | 185 | ep = keystone.service_catalog.url_for(service_type='image', | 210 | ep = keystone.service_catalog.url_for(service_type='image', |
1777 | 186 | endpoint_type='adminURL') | 211 | endpoint_type='adminURL') |
1778 | 187 | return glance_client.Client(ep, token=keystone.auth_token) | 212 | return glance_client.Client(ep, token=keystone.auth_token) |
1779 | 188 | 213 | ||
1780 | 214 | def authenticate_heat_admin(self, keystone): | ||
1781 | 215 | """Authenticates the admin user with heat.""" | ||
1782 | 216 | self.log.debug('Authenticating heat admin...') | ||
1783 | 217 | ep = keystone.service_catalog.url_for(service_type='orchestration', | ||
1784 | 218 | endpoint_type='publicURL') | ||
1785 | 219 | return heat_client.Client(endpoint=ep, token=keystone.auth_token) | ||
1786 | 220 | |||
1787 | 189 | def authenticate_nova_user(self, keystone, user, password, tenant): | 221 | def authenticate_nova_user(self, keystone, user, password, tenant): |
1788 | 190 | """Authenticates a regular user with nova-api.""" | 222 | """Authenticates a regular user with nova-api.""" |
1789 | 223 | self.log.debug('Authenticating nova user ({})...'.format(user)) | ||
1790 | 191 | ep = keystone.service_catalog.url_for(service_type='identity', | 224 | ep = keystone.service_catalog.url_for(service_type='identity', |
1791 | 192 | endpoint_type='publicURL') | 225 | endpoint_type='publicURL') |
1792 | 193 | return nova_client.Client(username=user, api_key=password, | 226 | return nova_client.Client(username=user, api_key=password, |
1793 | 194 | project_id=tenant, auth_url=ep) | 227 | project_id=tenant, auth_url=ep) |
1794 | 195 | 228 | ||
1795 | 229 | def authenticate_swift_user(self, keystone, user, password, tenant): | ||
1796 | 230 | """Authenticates a regular user with swift api.""" | ||
1797 | 231 | self.log.debug('Authenticating swift user ({})...'.format(user)) | ||
1798 | 232 | ep = keystone.service_catalog.url_for(service_type='identity', | ||
1799 | 233 | endpoint_type='publicURL') | ||
1800 | 234 | return swiftclient.Connection(authurl=ep, | ||
1801 | 235 | user=user, | ||
1802 | 236 | key=password, | ||
1803 | 237 | tenant_name=tenant, | ||
1804 | 238 | auth_version='2.0') | ||
1805 | 239 | |||
1806 | 196 | def create_cirros_image(self, glance, image_name): | 240 | def create_cirros_image(self, glance, image_name): |
1808 | 197 | """Download the latest cirros image and upload it to glance.""" | 241 | """Download the latest cirros image and upload it to glance, |
1809 | 242 | validate and return a resource pointer. | ||
1810 | 243 | |||
1811 | 244 | :param glance: pointer to authenticated glance connection | ||
1812 | 245 | :param image_name: display name for new image | ||
1813 | 246 | :returns: glance image pointer | ||
1814 | 247 | """ | ||
1815 | 248 | self.log.debug('Creating glance cirros image ' | ||
1816 | 249 | '({})...'.format(image_name)) | ||
1817 | 250 | |||
1818 | 251 | # Download cirros image | ||
1819 | 198 | http_proxy = os.getenv('AMULET_HTTP_PROXY') | 252 | http_proxy = os.getenv('AMULET_HTTP_PROXY') |
1820 | 199 | self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy)) | 253 | self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy)) |
1821 | 200 | if http_proxy: | 254 | if http_proxy: |
1822 | @@ -203,57 +257,67 @@ | |||
1823 | 203 | else: | 257 | else: |
1824 | 204 | opener = urllib.FancyURLopener() | 258 | opener = urllib.FancyURLopener() |
1825 | 205 | 259 | ||
1827 | 206 | f = opener.open("http://download.cirros-cloud.net/version/released") | 260 | f = opener.open('http://download.cirros-cloud.net/version/released') |
1828 | 207 | version = f.read().strip() | 261 | version = f.read().strip() |
1830 | 208 | cirros_img = "cirros-{}-x86_64-disk.img".format(version) | 262 | cirros_img = 'cirros-{}-x86_64-disk.img'.format(version) |
1831 | 209 | local_path = os.path.join('tests', cirros_img) | 263 | local_path = os.path.join('tests', cirros_img) |
1832 | 210 | 264 | ||
1833 | 211 | if not os.path.exists(local_path): | 265 | if not os.path.exists(local_path): |
1835 | 212 | cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net", | 266 | cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net', |
1836 | 213 | version, cirros_img) | 267 | version, cirros_img) |
1837 | 214 | opener.retrieve(cirros_url, local_path) | 268 | opener.retrieve(cirros_url, local_path) |
1838 | 215 | f.close() | 269 | f.close() |
1839 | 216 | 270 | ||
1840 | 271 | # Create glance image | ||
1841 | 217 | with open(local_path) as f: | 272 | with open(local_path) as f: |
1842 | 218 | image = glance.images.create(name=image_name, is_public=True, | 273 | image = glance.images.create(name=image_name, is_public=True, |
1843 | 219 | disk_format='qcow2', | 274 | disk_format='qcow2', |
1844 | 220 | container_format='bare', data=f) | 275 | container_format='bare', data=f) |
1857 | 221 | count = 1 | 276 | |
1858 | 222 | status = image.status | 277 | # Wait for image to reach active status |
1859 | 223 | while status != 'active' and count < 10: | 278 | img_id = image.id |
1860 | 224 | time.sleep(3) | 279 | ret = self.resource_reaches_status(glance.images, img_id, |
1861 | 225 | image = glance.images.get(image.id) | 280 | expected_stat='active', |
1862 | 226 | status = image.status | 281 | msg='Image status wait') |
1863 | 227 | self.log.debug('image status: {}'.format(status)) | 282 | if not ret: |
1864 | 228 | count += 1 | 283 | msg = 'Glance image failed to reach expected state.' |
1865 | 229 | 284 | amulet.raise_status(amulet.FAIL, msg=msg) | |
1866 | 230 | if status != 'active': | 285 | |
1867 | 231 | self.log.error('image creation timed out') | 286 | # Re-validate new image |
1868 | 232 | return None | 287 | self.log.debug('Validating image attributes...') |
1869 | 288 | val_img_name = glance.images.get(img_id).name | ||
1870 | 289 | val_img_stat = glance.images.get(img_id).status | ||
1871 | 290 | val_img_pub = glance.images.get(img_id).is_public | ||
1872 | 291 | val_img_cfmt = glance.images.get(img_id).container_format | ||
1873 | 292 | val_img_dfmt = glance.images.get(img_id).disk_format | ||
1874 | 293 | msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} ' | ||
1875 | 294 | 'container fmt:{} disk fmt:{}'.format( | ||
1876 | 295 | val_img_name, val_img_pub, img_id, | ||
1877 | 296 | val_img_stat, val_img_cfmt, val_img_dfmt)) | ||
1878 | 297 | |||
1879 | 298 | if val_img_name == image_name and val_img_stat == 'active' \ | ||
1880 | 299 | and val_img_pub is True and val_img_cfmt == 'bare' \ | ||
1881 | 300 | and val_img_dfmt == 'qcow2': | ||
1882 | 301 | self.log.debug(msg_attr) | ||
1883 | 302 | else: | ||
1884 | 303 | msg = ('Image validation failed, {}'.format(msg_attr)) | ||
1885 | 304 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1886 | 233 | 305 | ||
1887 | 234 | return image | 306 | return image |
1888 | 235 | 307 | ||
1889 | 236 | def delete_image(self, glance, image): | 308 | def delete_image(self, glance, image): |
1890 | 237 | """Delete the specified image.""" | 309 | """Delete the specified image.""" |
1907 | 238 | num_before = len(list(glance.images.list())) | 310 | |
1908 | 239 | glance.images.delete(image) | 311 | # /!\ DEPRECATION WARNING |
1909 | 240 | 312 | self.log.warn('/!\\ DEPRECATION WARNING: use ' | |
1910 | 241 | count = 1 | 313 | 'delete_resource instead of delete_image.') |
1911 | 242 | num_after = len(list(glance.images.list())) | 314 | self.log.debug('Deleting glance image ({})...'.format(image)) |
1912 | 243 | while num_after != (num_before - 1) and count < 10: | 315 | return self.delete_resource(glance.images, image, msg='glance image') |
1897 | 244 | time.sleep(3) | ||
1898 | 245 | num_after = len(list(glance.images.list())) | ||
1899 | 246 | self.log.debug('number of images: {}'.format(num_after)) | ||
1900 | 247 | count += 1 | ||
1901 | 248 | |||
1902 | 249 | if num_after != (num_before - 1): | ||
1903 | 250 | self.log.error('image deletion timed out') | ||
1904 | 251 | return False | ||
1905 | 252 | |||
1906 | 253 | return True | ||
1913 | 254 | 316 | ||
1914 | 255 | def create_instance(self, nova, image_name, instance_name, flavor): | 317 | def create_instance(self, nova, image_name, instance_name, flavor): |
1915 | 256 | """Create the specified instance.""" | 318 | """Create the specified instance.""" |
1916 | 319 | self.log.debug('Creating instance ' | ||
1917 | 320 | '({}|{}|{})'.format(instance_name, image_name, flavor)) | ||
1918 | 257 | image = nova.images.find(name=image_name) | 321 | image = nova.images.find(name=image_name) |
1919 | 258 | flavor = nova.flavors.find(name=flavor) | 322 | flavor = nova.flavors.find(name=flavor) |
1920 | 259 | instance = nova.servers.create(name=instance_name, image=image, | 323 | instance = nova.servers.create(name=instance_name, image=image, |
1921 | @@ -276,19 +340,265 @@ | |||
1922 | 276 | 340 | ||
1923 | 277 | def delete_instance(self, nova, instance): | 341 | def delete_instance(self, nova, instance): |
1924 | 278 | """Delete the specified instance.""" | 342 | """Delete the specified instance.""" |
1941 | 279 | num_before = len(list(nova.servers.list())) | 343 | |
1942 | 280 | nova.servers.delete(instance) | 344 | # /!\ DEPRECATION WARNING |
1943 | 281 | 345 | self.log.warn('/!\\ DEPRECATION WARNING: use ' | |
1944 | 282 | count = 1 | 346 | 'delete_resource instead of delete_instance.') |
1945 | 283 | num_after = len(list(nova.servers.list())) | 347 | self.log.debug('Deleting instance ({})...'.format(instance)) |
1946 | 284 | while num_after != (num_before - 1) and count < 10: | 348 | return self.delete_resource(nova.servers, instance, |
1947 | 285 | time.sleep(3) | 349 | msg='nova instance') |
1948 | 286 | num_after = len(list(nova.servers.list())) | 350 | |
1949 | 287 | self.log.debug('number of instances: {}'.format(num_after)) | 351 | def create_or_get_keypair(self, nova, keypair_name="testkey"): |
1950 | 288 | count += 1 | 352 | """Create a new keypair, or return pointer if it already exists.""" |
1951 | 289 | 353 | try: | |
1952 | 290 | if num_after != (num_before - 1): | 354 | _keypair = nova.keypairs.get(keypair_name) |
1953 | 291 | self.log.error('instance deletion timed out') | 355 | self.log.debug('Keypair ({}) already exists, ' |
1954 | 292 | return False | 356 | 'using it.'.format(keypair_name)) |
1955 | 293 | 357 | return _keypair | |
1956 | 294 | return True | 358 | except Exception: | |
1957 | 359 | self.log.debug('Keypair ({}) does not exist, ' | ||
1958 | 360 | 'creating it.'.format(keypair_name)) | ||
1959 | 361 | |||
1960 | 362 | _keypair = nova.keypairs.create(name=keypair_name) | ||
1961 | 363 | return _keypair | ||
1962 | 364 | |||
1963 | 365 | def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1, | ||
1964 | 366 | img_id=None, src_vol_id=None, snap_id=None): | ||
1965 | 367 | """Create cinder volume, optionally from a glance image, OR | ||
1966 | 368 | optionally as a clone of an existing volume, OR optionally | ||
1967 | 369 | from a snapshot. Wait for the new volume status to reach | ||
1968 | 370 | the expected status, validate and return a resource pointer. | ||
1969 | 371 | |||
1970 | 372 | :param vol_name: cinder volume display name | ||
1971 | 373 | :param vol_size: size in gigabytes | ||
1972 | 374 | :param img_id: optional glance image id | ||
1973 | 375 | :param src_vol_id: optional source volume id to clone | ||
1974 | 376 | :param snap_id: optional snapshot id to use | ||
1975 | 377 | :returns: cinder volume pointer | ||
1976 | 378 | """ | ||
1977 | 379 | # Handle parameter input and avoid impossible combinations | ||
1978 | 380 | if img_id and not src_vol_id and not snap_id: | ||
1979 | 381 | # Create volume from image | ||
1980 | 382 | self.log.debug('Creating cinder volume from glance image...') | ||
1981 | 383 | bootable = 'true' | ||
1982 | 384 | elif src_vol_id and not img_id and not snap_id: | ||
1983 | 385 | # Clone an existing volume | ||
1984 | 386 | self.log.debug('Cloning cinder volume...') | ||
1985 | 387 | bootable = cinder.volumes.get(src_vol_id).bootable | ||
1986 | 388 | elif snap_id and not src_vol_id and not img_id: | ||
1987 | 389 | # Create volume from snapshot | ||
1988 | 390 | self.log.debug('Creating cinder volume from snapshot...') | ||
1989 | 391 | snap = cinder.volume_snapshots.find(id=snap_id) | ||
1990 | 392 | vol_size = snap.size | ||
1991 | 393 | snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id | ||
1992 | 394 | bootable = cinder.volumes.get(snap_vol_id).bootable | ||
1993 | 395 | elif not img_id and not src_vol_id and not snap_id: | ||
1994 | 396 | # Create volume | ||
1995 | 397 | self.log.debug('Creating cinder volume...') | ||
1996 | 398 | bootable = 'false' | ||
1997 | 399 | else: | ||
1998 | 400 | # Impossible combination of parameters | ||
1999 | 401 | msg = ('Invalid method use - name:{} size:{} img_id:{} ' | ||
2000 | 402 | 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size, | ||
2001 | 403 | img_id, src_vol_id, | ||
2002 | 404 | snap_id)) | ||
2003 | 405 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
2004 | 406 | |||
2005 | 407 | # Create new volume | ||
2006 | 408 | try: | ||
2007 | 409 | vol_new = cinder.volumes.create(display_name=vol_name, | ||
2008 | 410 | imageRef=img_id, | ||
2009 | 411 | size=vol_size, | ||
2010 | 412 | source_volid=src_vol_id, | ||
2011 | 413 | snapshot_id=snap_id) | ||
2012 | 414 | vol_id = vol_new.id | ||
2013 | 415 | except Exception as e: | ||
2014 | 416 | msg = 'Failed to create volume: {}'.format(e) | ||
2015 | 417 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
2016 | 418 | |||
2017 | 419 | # Wait for volume to reach available status | ||
2018 | 420 | ret = self.resource_reaches_status(cinder.volumes, vol_id, | ||
2019 | 421 | expected_stat="available", | ||
2020 | 422 | msg="Volume status wait") | ||
2021 | 423 | if not ret: | ||
2022 | 424 | msg = 'Cinder volume failed to reach expected state.' | ||
2023 | 425 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
2024 | 426 | |||
2025 | 427 | # Re-validate new volume | ||
2026 | 428 | self.log.debug('Validating volume attributes...') | ||
2027 | 429 | val_vol_name = cinder.volumes.get(vol_id).display_name | ||
2028 | 430 | val_vol_boot = cinder.volumes.get(vol_id).bootable | ||
2029 | 431 | val_vol_stat = cinder.volumes.get(vol_id).status | ||
2030 | 432 | val_vol_size = cinder.volumes.get(vol_id).size | ||
2031 | 433 | msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:' | ||
2032 | 434 | '{} size:{}'.format(val_vol_name, vol_id, | ||
2033 | 435 | val_vol_stat, val_vol_boot, | ||
2034 | 436 | val_vol_size)) | ||
2035 | 437 | |||
2036 | 438 | if val_vol_boot == bootable and val_vol_stat == 'available' \ | ||
2037 | 439 | and val_vol_name == vol_name and val_vol_size == vol_size: | ||
2038 | 440 | self.log.debug(msg_attr) | ||
2039 | 441 | else: | ||
2040 | 442 | msg = ('Volume validation failed, {}'.format(msg_attr)) | ||
2041 | 443 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
2042 | 444 | |||
2043 | 445 | return vol_new | ||
2044 | 446 | |||
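The parameter handling at the top of create_cinder_volume enforces that at most one volume source (image, source volume, snapshot) is supplied. That guard can be sketched on its own (function and names are illustrative, not part of charmhelpers):

```python
# Mutually-exclusive source selection, mirroring the img_id/src_vol_id/
# snap_id branches in create_cinder_volume.
def volume_source(img_id=None, src_vol_id=None, snap_id=None):
    """Return which single source was given, 'blank' if none, else raise."""
    given = [name for name, val in [('image', img_id),
                                    ('volume', src_vol_id),
                                    ('snapshot', snap_id)] if val]
    if len(given) > 1:
        raise ValueError('Invalid combination of sources: {}'.format(given))
    return given[0] if given else 'blank'

assert volume_source() == 'blank'
assert volume_source(img_id='ab12') == 'image'
```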
2045 | 447 | def delete_resource(self, resource, resource_id, | ||
2046 | 448 | msg="resource", max_wait=120): | ||
2047 | 449 | """Delete one openstack resource, such as one instance, keypair, | ||
2048 | 450 | image, volume, stack, etc., and confirm deletion within max wait time. | ||
2049 | 451 | |||
2050 | 452 | :param resource: pointer to os resource type, ex:glance_client.images | ||
2051 | 453 | :param resource_id: unique name or id for the openstack resource | ||
2052 | 454 | :param msg: text to identify purpose in logging | ||
2053 | 455 | :param max_wait: maximum wait time in seconds | ||
2054 | 456 | :returns: True if successful, otherwise False | ||
2055 | 457 | """ | ||
2056 | 458 | self.log.debug('Deleting OpenStack resource ' | ||
2057 | 459 | '{} ({})'.format(resource_id, msg)) | ||
2058 | 460 | num_before = len(list(resource.list())) | ||
2059 | 461 | resource.delete(resource_id) | ||
2060 | 462 | |||
2061 | 463 | tries = 0 | ||
2062 | 464 | num_after = len(list(resource.list())) | ||
2063 | 465 | while num_after != (num_before - 1) and tries < (max_wait / 4): | ||
2064 | 466 | self.log.debug('{} delete check: ' | ||
2065 | 467 | '{} [{}:{}] {}'.format(msg, tries, | ||
2066 | 468 | num_before, | ||
2067 | 469 | num_after, | ||
2068 | 470 | resource_id)) | ||
2069 | 471 | time.sleep(4) | ||
2070 | 472 | num_after = len(list(resource.list())) | ||
2071 | 473 | tries += 1 | ||
2072 | 474 | |||
2073 | 475 | self.log.debug('{}: expected, actual count = {}, ' | ||
2074 | 476 | '{}'.format(msg, num_before - 1, num_after)) | ||
2075 | 477 | |||
2076 | 478 | if num_after == (num_before - 1): | ||
2077 | 479 | return True | ||
2078 | 480 | else: | ||
2079 | 481 | self.log.error('{} delete timed out'.format(msg)) | ||
2080 | 482 | return False | ||
2081 | 483 | |||
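The delete-then-poll pattern in delete_resource can be exercised without a live cloud by stubbing the resource manager. A minimal sketch (FakeResource and delete_and_confirm are hypothetical, not part of charmhelpers):

```python
import time

# Stub resource manager: list() and delete() mimic the subset of the
# OpenStack client interface that the polling loop relies on.
class FakeResource(object):
    def __init__(self, ids):
        self._ids = set(ids)

    def list(self):
        return list(self._ids)

    def delete(self, resource_id):
        self._ids.discard(resource_id)

def delete_and_confirm(resource, resource_id, max_wait=8, interval=1):
    """Delete a resource and poll list() until the count drops by one."""
    num_before = len(resource.list())
    resource.delete(resource_id)
    waited = 0
    while len(resource.list()) != num_before - 1 and waited < max_wait:
        time.sleep(interval)
        waited += interval
    return len(resource.list()) == num_before - 1

pool = FakeResource(['vol-1', 'vol-2'])
assert delete_and_confirm(pool, 'vol-1')
assert pool.list() == ['vol-2']
```

Counting list lengths rather than probing the deleted id directly matches the original's approach, which works uniformly across resource types that lack a cheap existence check.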
2082 | 484 | def resource_reaches_status(self, resource, resource_id, | ||
2083 | 485 | expected_stat='available', | ||
2084 | 486 | msg='resource', max_wait=120): | ||
2085 | 487 | """Wait for an openstack resource's status to reach an | ||
2086 | 488 | expected status within a specified time. Useful to confirm that | ||
2087 | 489 | nova instances, cinder vols, snapshots, glance images, heat stacks | ||
2088 | 490 | and other resources eventually reach the expected status. | ||
2089 | 491 | |||
2090 | 492 | :param resource: pointer to os resource type, ex: heat_client.stacks | ||
2091 | 493 | :param resource_id: unique id for the openstack resource | ||
2092 | 494 | :param expected_stat: status to expect resource to reach | ||
2093 | 495 | :param msg: text to identify purpose in logging | ||
2094 | 496 | :param max_wait: maximum wait time in seconds | ||
2095 | 497 | :returns: True if successful, False if status is not reached | ||
2096 | 498 | """ | ||
2097 | 499 | |||
2098 | 500 | tries = 0 | ||
2099 | 501 | resource_stat = resource.get(resource_id).status | ||
2100 | 502 | while resource_stat != expected_stat and tries < (max_wait / 4): | ||
2101 | 503 | self.log.debug('{} status check: ' | ||
2102 | 504 | '{} [{}:{}] {}'.format(msg, tries, | ||
2103 | 505 | resource_stat, | ||
2104 | 506 | expected_stat, | ||
2105 | 507 | resource_id)) | ||
2106 | 508 | time.sleep(4) | ||
2107 | 509 | resource_stat = resource.get(resource_id).status | ||
2108 | 510 | tries += 1 | ||
2109 | 511 | |||
2110 | 512 | self.log.debug('{}: expected, actual status = {}, ' | ||
2111 | 513 | '{}'.format(msg, resource_stat, expected_stat)) | ||
2112 | 514 | |||
2113 | 515 | if resource_stat == expected_stat: | ||
2114 | 516 | return True | ||
2115 | 517 | else: | ||
2116 | 518 | self.log.debug('{} never reached expected status: ' | ||
2117 | 519 | '{}'.format(resource_id, expected_stat)) | ||
2118 | 520 | return False | ||
2119 | 521 | |||
    def get_ceph_osd_id_cmd(self, index):
        """Produce a shell command that will return a ceph-osd id."""
        return ("`initctl list | grep 'ceph-osd ' | "
                "awk 'NR=={} {{ print $2 }}' | "
                "grep -o '[0-9]*'`".format(index + 1))
    def get_ceph_pools(self, sentry_unit):
        """Return a dict of ceph pools from a single ceph unit, with
        pool name as keys, pool id as vals."""
        pools = {}
        cmd = 'sudo ceph osd lspools'
        output, code = sentry_unit.run(cmd)
        if code != 0:
            msg = ('{} `{}` returned {} '
                   '{}'.format(sentry_unit.info['unit_name'],
                               cmd, code, output))
            amulet.raise_status(amulet.FAIL, msg=msg)

        # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
        for pool in str(output).split(','):
            pool_id_name = pool.split(' ')
            if len(pool_id_name) == 2:
                pool_id = pool_id_name[0]
                pool_name = pool_id_name[1]
                pools[pool_name] = int(pool_id)

        self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
                                                pools))
        return pools
    def get_ceph_df(self, sentry_unit):
        """Return dict of ceph df json output, including ceph pool state.

        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
        :returns: Dict of ceph df output
        """
        cmd = 'sudo ceph df --format=json'
        output, code = sentry_unit.run(cmd)
        if code != 0:
            msg = ('{} `{}` returned {} '
                   '{}'.format(sentry_unit.info['unit_name'],
                               cmd, code, output))
            amulet.raise_status(amulet.FAIL, msg=msg)
        return json.loads(output)
    def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
        """Take a sample of attributes of a ceph pool, returning ceph
        pool name, object count and disk space used for the specified
        pool ID number.

        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
        :param pool_id: Ceph pool ID
        :returns: Tuple of pool name, object count, kb disk space used
        """
        df = self.get_ceph_df(sentry_unit)
        pool_name = df['pools'][pool_id]['name']
        obj_count = df['pools'][pool_id]['stats']['objects']
        kb_used = df['pools'][pool_id]['stats']['kb_used']
        self.log.debug('Ceph {} pool (ID {}): {} objects, '
                       '{} kb used'.format(pool_name, pool_id,
                                           obj_count, kb_used))
        return pool_name, obj_count, kb_used
    def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
        """Validate ceph pool samples taken over time, such as pool
        object counts or pool kb used, before adding, after adding, and
        after deleting items which affect those pool attributes.  The
        2nd element is expected to be greater than the 1st; 3rd is expected
        to be less than the 2nd.

        :param samples: List containing 3 data samples
        :param sample_type: String for logging and usage context
        :returns: None if successful, failure message otherwise
        """
        original, created, deleted = range(3)
        if samples[created] <= samples[original] or \
                samples[deleted] >= samples[created]:
            return ('Ceph {} samples ({}) '
                    'unexpected.'.format(sample_type, samples))
        else:
            self.log.debug('Ceph {} samples (OK): '
                           '{}'.format(sample_type, samples))
            return None
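The ceph helpers above combine into a before/during/after pattern: sample a pool, create a resource, sample again, delete it, sample a third time, and confirm the middle sample grew while the final one shrank. A minimal standalone sketch of that validation logic (a re-derivation for illustration, not the synced charm-helpers code; no Juju units involved):

```python
def validate_samples(samples, sample_type="resource pool"):
    """Mirror the validate_ceph_pool_samples contract: three samples
    taken before create, after create, and after delete.  Returns None
    on success, or a failure message string otherwise."""
    original, created, deleted = samples
    if created <= original or deleted >= created:
        return ('Ceph {} samples ({}) '
                'unexpected.'.format(sample_type, samples))
    return None


# Pool object counts: grew after create, shrank after delete -> OK.
assert validate_samples([10, 12, 10]) is None
# Nothing changed -> failure message returned.
assert validate_samples([10, 10, 10]) is not None
# Delete had no effect -> failure message returned.
assert validate_samples([10, 12, 12]) is not None
```

The same shape works for either object counts or kb_used samples from `get_ceph_pool_sample`.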
=== added file 'tests/tests.yaml'
--- tests/tests.yaml	1970-01-01 00:00:00 +0000
+++ tests/tests.yaml	2015-07-01 14:47:24 +0000
@@ -0,0 +1,18 @@
bootstrap: true
reset: true
virtualenv: true
makefile:
  - lint
  - test
sources:
  - ppa:juju/stable
packages:
  - amulet
  - python-amulet
  - python-cinderclient
  - python-distro-info
  - python-glanceclient
  - python-heatclient
  - python-keystoneclient
  - python-novaclient
  - python-swiftclient
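Both `resource_removed` and `resource_reaches_status` in the synced utils follow the same shape: poll every 4 seconds until the condition holds or `max_wait` seconds of tries are exhausted. A standalone sketch of that pattern (`FakeResource` is a hypothetical stand-in for an OpenStack client collection such as `heat_client.stacks`; `interval` is parameterized here only so the example runs fast):

```python
import time


def reaches_status(resource, resource_id, expected_stat='available',
                   max_wait=120, interval=4):
    """Poll resource status until it matches expected_stat or times out."""
    tries = 0
    stat = resource.get(resource_id).status
    while stat != expected_stat and tries < (max_wait / interval):
        time.sleep(interval)
        stat = resource.get(resource_id).status
        tries += 1
    return stat == expected_stat


class FakeResource(object):
    """Hypothetical client stub: reports 'building' for the first two
    polls, then 'available'."""
    def __init__(self):
        self.polls = 0

    def get(self, resource_id):
        self.polls += 1
        status = 'available' if self.polls > 2 else 'building'
        return type('Obj', (object,), {'status': status})()


# Status flips to 'available' on the third poll, within the wait budget.
assert reaches_status(FakeResource(), 'vol-1', max_wait=1, interval=0.01)
# A zero wait budget means only the initial check happens, so this fails.
assert not reaches_status(FakeResource(), 'vol-1', max_wait=0, interval=0.01)
```

`resource_removed` is the same loop with `len(list(resource.list()))` standing in for the status check.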
charm_lint_check #5539 ceph-radosgw-next for 1chb1n mp262599
    LINT OK: passed
    Build: http://10.245.162.77:8080/job/charm_lint_check/5539/