Merge lp:~1chb1n/charms/trusty/neutron-gateway/amulet-update-1508 into lp:~openstack-charmers-archive/charms/trusty/neutron-gateway/next
Status: Superseded
Proposed branch: lp:~1chb1n/charms/trusty/neutron-gateway/amulet-update-1508
Merge into: lp:~openstack-charmers-archive/charms/trusty/neutron-gateway/next
Diff against target: 2164 lines (+1242/-341), 11 files modified
- Makefile (+1/-1)
- tests/00-setup (+7/-3)
- tests/020-basic-trusty-liberty (+11/-0)
- tests/021-basic-wily-liberty (+9/-0)
- tests/README (+10/-0)
- tests/basic_deployment.py (+514/-264)
- tests/charmhelpers/contrib/amulet/deployment.py (+4/-2)
- tests/charmhelpers/contrib/amulet/utils.py (+284/-62)
- tests/charmhelpers/contrib/openstack/amulet/deployment.py (+23/-9)
- tests/charmhelpers/contrib/openstack/amulet/utils.py (+359/-0)
- tests/tests.yaml (+20/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/neutron-gateway/amulet-update-1508
Related bugs: (none)
Reviewer: OpenStack Charmers, status: Pending
Review via email: mp+271198@code.launchpad.net
This proposal has been superseded by a proposal from 2015-09-22.
Commit message
Description of the change
Update tests for Vivid-Kilo + Trusty-Liberty enablement; sync tests/charmhelpers.
This proposal is dependent on charm-helpers landing code from:
https:/
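The renamed tests in this branch follow the convention added to tests/README (1xx service and endpoint checks through 9xx restarts and final checks), which works because Amulet, like unittest, runs `test_*` methods in lexical sort order. A minimal sketch of that ordering (method names below are a small illustrative subset, not the full suite):

```python
# test_* methods run in lexical sort order, so numeric prefixes
# (1xx/2xx/3xx/.../9xx) control execution order without any explicit
# ordering machinery. Illustrative subset of the renamed tests:
methods = [
    'test_900_restart_on_config_change',                   # 9xx: final checks
    'test_300_neutron_config',                             # 3xx: config checks
    'test_200_neutron_gateway_mysql_shared_db_relation',   # 2xx: relation checks
    'test_100_services',                                   # 1xx: service checks
]

run_order = sorted(methods)
print(run_order[0])   # service checks run first
print(run_order[-1])  # restart check runs last
```

This is why the branch can keep each test idempotent yet still guarantee, for example, that the config-change restart test (which forces re-authentication) runs after everything else, replacing the old `test_z_*` naming trick.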
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9280 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10119 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9282 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6455 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6457 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10120 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9283 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6458 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10184 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9342 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6472 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10187 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9345 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10189 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9347 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6475 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6477 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10193 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9351 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6482 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10253 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9408 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6494 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote:
FYI - ignore previous amulet test results here.
Test-writing was compounded by amulet bug @ https:/
Will resubmit after confirming that is no longer an issue.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10433 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9623 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6576 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Unmerged revisions
143. By Ryan Beisner: rebase with next
144. By Ryan Beisner: disable wily test target, to be re-enabled separately after validation
145. By Ryan Beisner: rebase with next
146. By Ryan Beisner: rebase from next (re: amulet git test fails)
147. By Ryan Beisner: rebase from next (re: unit test fails re: git fix)
Preview Diff
1 | === modified file 'Makefile' |
2 | --- Makefile 2015-06-25 17:58:14 +0000 |
3 | +++ Makefile 2015-09-21 20:38:30 +0000 |
4 | @@ -24,6 +24,6 @@ |
5 | @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml |
6 | @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml |
7 | |
8 | -publish: lint unit_test |
9 | +publish: lint test |
10 | bzr push lp:charms/neutron-gateway |
11 | bzr push lp:charms/trusty/neutron-gateway |
12 | |
13 | === modified file 'tests/00-setup' |
14 | --- tests/00-setup 2015-07-10 13:34:03 +0000 |
15 | +++ tests/00-setup 2015-09-21 20:38:30 +0000 |
16 | @@ -4,9 +4,13 @@ |
17 | |
18 | sudo add-apt-repository --yes ppa:juju/stable |
19 | sudo apt-get update --yes |
20 | -sudo apt-get install --yes python-amulet \ |
21 | +sudo apt-get install --yes amulet \ |
22 | + python-cinderclient \ |
23 | python-distro-info \ |
24 | + python-glanceclient \ |
25 | + python-heatclient \ |
26 | + python-keystoneclient \ |
27 | python-neutronclient \ |
28 | - python-keystoneclient \ |
29 | python-novaclient \ |
30 | - python-glanceclient |
31 | + python-pika \ |
32 | + python-swiftclient |
33 | |
34 | === modified file 'tests/019-basic-vivid-kilo' (properties changed: -x to +x) |
35 | === added file 'tests/020-basic-trusty-liberty' |
36 | --- tests/020-basic-trusty-liberty 1970-01-01 00:00:00 +0000 |
37 | +++ tests/020-basic-trusty-liberty 2015-09-21 20:38:30 +0000 |
38 | @@ -0,0 +1,11 @@ |
39 | +#!/usr/bin/python |
40 | + |
41 | +"""Amulet tests on a basic quantum-gateway deployment on trusty-liberty.""" |
42 | + |
43 | +from basic_deployment import NeutronGatewayBasicDeployment |
44 | + |
45 | +if __name__ == '__main__': |
46 | + deployment = NeutronGatewayBasicDeployment(series='trusty', |
47 | + openstack='cloud:trusty-liberty', |
48 | + source='cloud:trusty-updates/liberty') |
49 | + deployment.run_tests() |
50 | |
51 | === added file 'tests/021-basic-wily-liberty' |
52 | --- tests/021-basic-wily-liberty 1970-01-01 00:00:00 +0000 |
53 | +++ tests/021-basic-wily-liberty 2015-09-21 20:38:30 +0000 |
54 | @@ -0,0 +1,9 @@ |
55 | +#!/usr/bin/python |
56 | + |
57 | +"""Amulet tests on a basic quantum-gateway deployment on wily-liberty.""" |
58 | + |
59 | +from basic_deployment import NeutronGatewayBasicDeployment |
60 | + |
61 | +if __name__ == '__main__': |
62 | + deployment = NeutronGatewayBasicDeployment(series='wily') |
63 | + deployment.run_tests() |
64 | |
65 | === modified file 'tests/README' |
66 | --- tests/README 2014-10-07 21:19:29 +0000 |
67 | +++ tests/README 2015-09-21 20:38:30 +0000 |
68 | @@ -1,6 +1,16 @@ |
69 | This directory provides Amulet tests that focus on verification of |
70 | quantum-gateway deployments. |
71 | |
72 | +test_* methods are called in lexical sort order, although each individual test |
73 | +should be idempotent, and expected to pass regardless of run order. |
74 | + |
75 | +Test name convention to ensure desired test order: |
76 | + 1xx service and endpoint checks |
77 | + 2xx relation checks |
78 | + 3xx config checks |
79 | + 4xx functional checks |
80 | + 9xx restarts and other final checks |
81 | + |
82 | In order to run tests, you'll need charm-tools installed (in addition to |
83 | juju, of course): |
84 | sudo add-apt-repository ppa:juju/stable |
85 | |
86 | === modified file 'tests/basic_deployment.py' |
87 | --- tests/basic_deployment.py 2015-09-14 14:30:47 +0000 |
88 | +++ tests/basic_deployment.py 2015-09-21 20:38:30 +0000 |
89 | @@ -13,8 +13,8 @@ |
90 | |
91 | from charmhelpers.contrib.openstack.amulet.utils import ( |
92 | OpenStackAmuletUtils, |
93 | - DEBUG, # flake8: noqa |
94 | - ERROR |
95 | + DEBUG, |
96 | + #ERROR |
97 | ) |
98 | |
99 | # Use DEBUG to turn on debug logging |
100 | @@ -45,30 +45,31 @@ |
101 | """ |
102 | this_service = {'name': 'neutron-gateway'} |
103 | other_services = [{'name': 'mysql'}, |
104 | - {'name': 'rabbitmq-server'}, {'name': 'keystone'}, |
105 | - {'name': 'nova-cloud-controller'}] |
106 | - if self._get_openstack_release() >= self.trusty_kilo: |
107 | - other_services.append({'name': 'neutron-api'}) |
108 | - super(NeutronGatewayBasicDeployment, self)._add_services(this_service, |
109 | - other_services) |
110 | + {'name': 'rabbitmq-server'}, |
111 | + {'name': 'keystone'}, |
112 | + {'name': 'nova-cloud-controller'}, |
113 | + {'name': 'neutron-api'}] |
114 | + |
115 | + super(NeutronGatewayBasicDeployment, self)._add_services( |
116 | + this_service, other_services) |
117 | |
118 | def _add_relations(self): |
119 | """Add all of the relations for the services.""" |
120 | relations = { |
121 | - 'keystone:shared-db': 'mysql:shared-db', |
122 | - 'neutron-gateway:shared-db': 'mysql:shared-db', |
123 | - 'neutron-gateway:amqp': 'rabbitmq-server:amqp', |
124 | - 'nova-cloud-controller:quantum-network-service': \ |
125 | - 'neutron-gateway:quantum-network-service', |
126 | - 'nova-cloud-controller:shared-db': 'mysql:shared-db', |
127 | - 'nova-cloud-controller:identity-service': 'keystone:identity-service', |
128 | - 'nova-cloud-controller:amqp': 'rabbitmq-server:amqp' |
129 | + 'keystone:shared-db': 'mysql:shared-db', |
130 | + 'neutron-gateway:shared-db': 'mysql:shared-db', |
131 | + 'neutron-gateway:amqp': 'rabbitmq-server:amqp', |
132 | + 'nova-cloud-controller:quantum-network-service': |
133 | + 'neutron-gateway:quantum-network-service', |
134 | + 'nova-cloud-controller:shared-db': 'mysql:shared-db', |
135 | + 'nova-cloud-controller:identity-service': 'keystone:' |
136 | + 'identity-service', |
137 | + 'nova-cloud-controller:amqp': 'rabbitmq-server:amqp', |
138 | + 'neutron-api:shared-db': 'mysql:shared-db', |
139 | + 'neutron-api:amqp': 'rabbitmq-server:amqp', |
140 | + 'neutron-api:neutron-api': 'nova-cloud-controller:neutron-api', |
141 | + 'neutron-api:identity-service': 'keystone:identity-service' |
142 | } |
143 | - if self._get_openstack_release() >= self.trusty_kilo: |
144 | - relations['neutron-api:shared-db'] = 'mysql:shared-db' |
145 | - relations['neutron-api:amqp'] = 'rabbitmq-server:amqp' |
146 | - relations['neutron-api:neutron-api'] = 'nova-cloud-controller:neutron-api' |
147 | - relations['neutron-api:identity-service'] = 'keystone:identity-service' |
148 | super(NeutronGatewayBasicDeployment, self)._add_relations(relations) |
149 | |
150 | def _configure_services(self): |
151 | @@ -83,16 +84,16 @@ |
152 | openstack_origin_git = { |
153 | 'repositories': [ |
154 | {'name': 'requirements', |
155 | - 'repository': 'git://github.com/openstack/requirements', |
156 | + 'repository': 'git://github.com/openstack/requirements', # noqa |
157 | 'branch': branch}, |
158 | {'name': 'neutron-fwaas', |
159 | - 'repository': 'git://github.com/openstack/neutron-fwaas', |
160 | + 'repository': 'git://github.com/openstack/neutron-fwaas', # noqa |
161 | 'branch': branch}, |
162 | {'name': 'neutron-lbaas', |
163 | - 'repository': 'git://github.com/openstack/neutron-lbaas', |
164 | + 'repository': 'git://github.com/openstack/neutron-lbaas', # noqa |
165 | 'branch': branch}, |
166 | {'name': 'neutron-vpnaas', |
167 | - 'repository': 'git://github.com/openstack/neutron-vpnaas', |
168 | + 'repository': 'git://github.com/openstack/neutron-vpnaas', # noqa |
169 | 'branch': branch}, |
170 | {'name': 'neutron', |
171 | 'repository': 'git://github.com/openstack/neutron', |
172 | @@ -122,7 +123,10 @@ |
173 | 'http_proxy': amulet_http_proxy, |
174 | 'https_proxy': amulet_http_proxy, |
175 | } |
176 | - neutron_gateway_config['openstack-origin-git'] = yaml.dump(openstack_origin_git) |
177 | + |
178 | + neutron_gateway_config['openstack-origin-git'] = \ |
179 | + yaml.dump(openstack_origin_git) |
180 | + |
181 | keystone_config = {'admin-password': 'openstack', |
182 | 'admin-token': 'ubuntutesting'} |
183 | nova_cc_config = {'network-manager': 'Quantum', |
184 | @@ -137,9 +141,10 @@ |
185 | # Access the sentries for inspecting service units |
186 | self.mysql_sentry = self.d.sentry.unit['mysql/0'] |
187 | self.keystone_sentry = self.d.sentry.unit['keystone/0'] |
188 | - self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0'] |
189 | + self.rmq_sentry = self.d.sentry.unit['rabbitmq-server/0'] |
190 | self.nova_cc_sentry = self.d.sentry.unit['nova-cloud-controller/0'] |
191 | self.neutron_gateway_sentry = self.d.sentry.unit['neutron-gateway/0'] |
192 | + self.neutron_api_sentry = self.d.sentry.unit['neutron-api/0'] |
193 | |
194 | # Let things settle a bit before moving forward |
195 | time.sleep(30) |
196 | @@ -150,7 +155,6 @@ |
197 | password='openstack', |
198 | tenant='admin') |
199 | |
200 | - |
201 | # Authenticate admin with neutron |
202 | ep = self.keystone.service_catalog.url_for(service_type='identity', |
203 | endpoint_type='publicURL') |
204 | @@ -160,40 +164,121 @@ |
205 | tenant_name='admin', |
206 | region_name='RegionOne') |
207 | |
208 | - def test_services(self): |
209 | + def test_100_services(self): |
210 | """Verify the expected services are running on the corresponding |
211 | service units.""" |
212 | - neutron_services = ['status neutron-dhcp-agent', |
213 | - 'status neutron-lbaas-agent', |
214 | - 'status neutron-metadata-agent', |
215 | - 'status neutron-metering-agent', |
216 | - 'status neutron-ovs-cleanup', |
217 | - 'status neutron-plugin-openvswitch-agent'] |
218 | + neutron_services = ['neutron-dhcp-agent', |
219 | + 'neutron-lbaas-agent', |
220 | + 'neutron-metadata-agent', |
221 | + 'neutron-metering-agent', |
222 | + 'neutron-ovs-cleanup', |
223 | + 'neutron-plugin-openvswitch-agent'] |
224 | |
225 | if self._get_openstack_release() <= self.trusty_juno: |
226 | - neutron_services.append('status neutron-vpn-agent') |
227 | + neutron_services.append('neutron-vpn-agent') |
228 | |
229 | - nova_cc_services = ['status nova-api-ec2', |
230 | - 'status nova-api-os-compute', |
231 | - 'status nova-objectstore', |
232 | - 'status nova-cert', |
233 | - 'status nova-scheduler'] |
234 | - if self._get_openstack_release() >= self.precise_grizzly: |
235 | - nova_cc_services.append('status nova-conductor') |
236 | + nova_cc_services = ['nova-api-ec2', |
237 | + 'nova-api-os-compute', |
238 | + 'nova-objectstore', |
239 | + 'nova-cert', |
240 | + 'nova-scheduler', |
241 | + 'nova-conductor'] |
242 | |
243 | commands = { |
244 | - self.mysql_sentry: ['status mysql'], |
245 | - self.keystone_sentry: ['status keystone'], |
246 | + self.mysql_sentry: ['mysql'], |
247 | + self.keystone_sentry: ['keystone'], |
248 | self.nova_cc_sentry: nova_cc_services, |
249 | self.neutron_gateway_sentry: neutron_services |
250 | } |
251 | |
252 | - ret = u.validate_services(commands) |
253 | - if ret: |
254 | - amulet.raise_status(amulet.FAIL, msg=ret) |
255 | - |
256 | - def test_neutron_gateway_shared_db_relation(self): |
257 | + ret = u.validate_services_by_name(commands) |
258 | + if ret: |
259 | + amulet.raise_status(amulet.FAIL, msg=ret) |
260 | + |
261 | + def test_102_service_catalog(self): |
262 | + """Verify that the service catalog endpoint data is valid.""" |
263 | + u.log.debug('Checking keystone service catalog...') |
264 | + endpoint_check = { |
265 | + 'adminURL': u.valid_url, |
266 | + 'id': u.not_null, |
267 | + 'region': 'RegionOne', |
268 | + 'publicURL': u.valid_url, |
269 | + 'internalURL': u.valid_url |
270 | + } |
271 | + expected = { |
272 | + 'network': [endpoint_check], |
273 | + 'compute': [endpoint_check], |
274 | + 'identity': [endpoint_check] |
275 | + } |
276 | + actual = self.keystone.service_catalog.get_endpoints() |
277 | + |
278 | + ret = u.validate_svc_catalog_endpoint_data(expected, actual) |
279 | + if ret: |
280 | + amulet.raise_status(amulet.FAIL, msg=ret) |
281 | + |
282 | + def test_104_network_endpoint(self): |
283 | + """Verify the neutron network endpoint data.""" |
284 | + u.log.debug('Checking neutron network api endpoint data...') |
285 | + endpoints = self.keystone.endpoints.list() |
286 | + admin_port = internal_port = public_port = '9696' |
287 | + expected = { |
288 | + 'id': u.not_null, |
289 | + 'region': 'RegionOne', |
290 | + 'adminurl': u.valid_url, |
291 | + 'internalurl': u.valid_url, |
292 | + 'publicurl': u.valid_url, |
293 | + 'service_id': u.not_null |
294 | + } |
295 | + ret = u.validate_endpoint_data(endpoints, admin_port, internal_port, |
296 | + public_port, expected) |
297 | + |
298 | + if ret: |
299 | + amulet.raise_status(amulet.FAIL, |
300 | + msg='glance endpoint: {}'.format(ret)) |
301 | + |
302 | + def test_110_users(self): |
303 | + """Verify expected users.""" |
304 | + u.log.debug('Checking keystone users...') |
305 | + expected = [ |
306 | + {'name': 'admin', |
307 | + 'enabled': True, |
308 | + 'tenantId': u.not_null, |
309 | + 'id': u.not_null, |
310 | + 'email': 'juju@localhost'}, |
311 | + {'name': 'quantum', |
312 | + 'enabled': True, |
313 | + 'tenantId': u.not_null, |
314 | + 'id': u.not_null, |
315 | + 'email': 'juju@localhost'} |
316 | + ] |
317 | + |
318 | + if self._get_openstack_release() >= self.trusty_kilo: |
319 | + # Kilo or later |
320 | + expected.append({ |
321 | + 'name': 'nova', |
322 | + 'enabled': True, |
323 | + 'tenantId': u.not_null, |
324 | + 'id': u.not_null, |
325 | + 'email': 'juju@localhost' |
326 | + }) |
327 | + else: |
328 | + # Juno and earlier |
329 | + expected.append({ |
330 | + 'name': 's3_ec2_nova', |
331 | + 'enabled': True, |
332 | + 'tenantId': u.not_null, |
333 | + 'id': u.not_null, |
334 | + 'email': 'juju@localhost' |
335 | + }) |
336 | + |
337 | + actual = self.keystone.users.list() |
338 | + ret = u.validate_user_data(expected, actual) |
339 | + if ret: |
340 | + amulet.raise_status(amulet.FAIL, msg=ret) |
341 | + |
342 | + def test_200_neutron_gateway_mysql_shared_db_relation(self): |
343 | """Verify the neutron-gateway to mysql shared-db relation data""" |
344 | + u.log.debug('Checking neutron-gateway:mysql db relation data...') |
345 | unit = self.neutron_gateway_sentry |
346 | relation = ['shared-db', 'mysql:shared-db'] |
347 | expected = { |
348 | @@ -208,8 +293,9 @@ |
349 | message = u.relation_error('neutron-gateway shared-db', ret) |
350 | amulet.raise_status(amulet.FAIL, msg=message) |
351 | |
352 | - def test_mysql_shared_db_relation(self): |
353 | + def test_201_mysql_neutron_gateway_shared_db_relation(self): |
354 | """Verify the mysql to neutron-gateway shared-db relation data""" |
355 | + u.log.debug('Checking mysql:neutron-gateway db relation data...') |
356 | unit = self.mysql_sentry |
357 | relation = ['shared-db', 'neutron-gateway:shared-db'] |
358 | expected = { |
359 | @@ -223,8 +309,9 @@ |
360 | message = u.relation_error('mysql shared-db', ret) |
361 | amulet.raise_status(amulet.FAIL, msg=message) |
362 | |
363 | - def test_neutron_gateway_amqp_relation(self): |
364 | + def test_202_neutron_gateway_rabbitmq_amqp_relation(self): |
365 | """Verify the neutron-gateway to rabbitmq-server amqp relation data""" |
366 | + u.log.debug('Checking neutron-gateway:rmq amqp relation data...') |
367 | unit = self.neutron_gateway_sentry |
368 | relation = ['amqp', 'rabbitmq-server:amqp'] |
369 | expected = { |
370 | @@ -238,9 +325,10 @@ |
371 | message = u.relation_error('neutron-gateway amqp', ret) |
372 | amulet.raise_status(amulet.FAIL, msg=message) |
373 | |
374 | - def test_rabbitmq_amqp_relation(self): |
375 | + def test_203_rabbitmq_neutron_gateway_amqp_relation(self): |
376 | """Verify the rabbitmq-server to neutron-gateway amqp relation data""" |
377 | - unit = self.rabbitmq_sentry |
378 | + u.log.debug('Checking rmq:neutron-gateway amqp relation data...') |
379 | + unit = self.rmq_sentry |
380 | relation = ['amqp', 'neutron-gateway:amqp'] |
381 | expected = { |
382 | 'private-address': u.valid_ip, |
383 | @@ -253,9 +341,11 @@ |
384 | message = u.relation_error('rabbitmq amqp', ret) |
385 | amulet.raise_status(amulet.FAIL, msg=message) |
386 | |
387 | - def test_neutron_gateway_network_service_relation(self): |
388 | + def test_204_neutron_gateway_network_service_relation(self): |
389 | """Verify the neutron-gateway to nova-cc quantum-network-service |
390 | relation data""" |
391 | + u.log.debug('Checking neutron-gateway:nova-cc net svc ' |
392 | + 'relation data...') |
393 | unit = self.neutron_gateway_sentry |
394 | relation = ['quantum-network-service', |
395 | 'nova-cloud-controller:quantum-network-service'] |
396 | @@ -268,9 +358,11 @@ |
397 | message = u.relation_error('neutron-gateway network-service', ret) |
398 | amulet.raise_status(amulet.FAIL, msg=message) |
399 | |
400 | - def test_nova_cc_network_service_relation(self): |
401 | + def test_205_nova_cc_network_service_relation(self): |
402 | """Verify the nova-cc to neutron-gateway quantum-network-service |
403 | relation data""" |
404 | + u.log.debug('Checking nova-cc:neutron-gateway net svc ' |
405 | + 'relation data...') |
406 | unit = self.nova_cc_sentry |
407 | relation = ['quantum-network-service', |
408 | 'neutron-gateway:quantum-network-service'] |
409 | @@ -289,56 +381,178 @@ |
410 | 'keystone_host': u.valid_ip, |
411 | 'quantum_plugin': 'ovs', |
412 | 'auth_host': u.valid_ip, |
413 | - 'service_username': 'quantum_s3_ec2_nova', |
414 | 'service_tenant_name': 'services' |
415 | } |
416 | + |
417 | if self._get_openstack_release() >= self.trusty_kilo: |
418 | + # Kilo or later |
419 | expected['service_username'] = 'nova' |
420 | + else: |
421 | + # Juno or earlier |
422 | + expected['service_username'] = 's3_ec2_nova' |
423 | |
424 | ret = u.validate_relation_data(unit, relation, expected) |
425 | if ret: |
426 | message = u.relation_error('nova-cc network-service', ret) |
427 | amulet.raise_status(amulet.FAIL, msg=message) |
428 | |
429 | - def test_z_restart_on_config_change(self): |
430 | - """Verify that the specified services are restarted when the config |
431 | - is changed. |
432 | - |
433 | - Note(coreycb): The method name with the _z_ is a little odd |
434 | - but it forces the test to run last. It just makes things |
435 | - easier because restarting services requires re-authorization. |
436 | - """ |
437 | - conf = '/etc/neutron/neutron.conf' |
438 | - |
439 | - services = ['neutron-dhcp-agent', |
440 | - 'neutron-lbaas-agent', |
441 | - 'neutron-metadata-agent', |
442 | - 'neutron-metering-agent', |
443 | - 'neutron-openvswitch-agent'] |
444 | - |
445 | - if self._get_openstack_release() <= self.trusty_juno: |
446 | - services.append('neutron-vpn-agent') |
447 | - |
448 | - u.log.debug("Making config change on neutron-gateway...") |
449 | - self.d.configure('neutron-gateway', {'debug': 'True'}) |
450 | - |
451 | - time = 60 |
452 | - for s in services: |
453 | - u.log.debug("Checking that service restarted: {}".format(s)) |
454 | - if not u.service_restarted(self.neutron_gateway_sentry, s, conf, |
455 | - pgrep_full=True, sleep_time=time): |
456 | - self.d.configure('neutron-gateway', {'debug': 'False'}) |
457 | - msg = "service {} didn't restart after config change".format(s) |
458 | - amulet.raise_status(amulet.FAIL, msg=msg) |
459 | - time = 0 |
460 | - |
461 | - self.d.configure('neutron-gateway', {'debug': 'False'}) |
462 | - |
463 | - def test_neutron_config(self): |
464 | + def test_206_neutron_api_shared_db_relation(self): |
465 | + """Verify the neutron-api to mysql shared-db relation data""" |
466 | + u.log.debug('Checking neutron-api:mysql db relation data...') |
467 | + unit = self.neutron_api_sentry |
468 | + relation = ['shared-db', 'mysql:shared-db'] |
469 | + expected = { |
470 | + 'private-address': u.valid_ip, |
471 | + 'database': 'neutron', |
472 | + 'username': 'neutron', |
473 | + 'hostname': u.valid_ip |
474 | + } |
475 | + |
476 | + ret = u.validate_relation_data(unit, relation, expected) |
477 | + if ret: |
478 | + message = u.relation_error('neutron-api shared-db', ret) |
479 | + amulet.raise_status(amulet.FAIL, msg=message) |
480 | + |
481 | + def test_207_shared_db_neutron_api_relation(self): |
482 | + """Verify the mysql to neutron-api shared-db relation data""" |
483 | + u.log.debug('Checking mysql:neutron-api db relation data...') |
484 | + unit = self.mysql_sentry |
485 | + relation = ['shared-db', 'neutron-api:shared-db'] |
486 | + expected = { |
487 | + 'db_host': u.valid_ip, |
488 | + 'private-address': u.valid_ip, |
489 | + 'password': u.not_null |
490 | + } |
491 | + |
492 | + if self._get_openstack_release() == self.precise_icehouse: |
493 | + # Precise |
494 | + expected['allowed_units'] = 'nova-cloud-controller/0 neutron-api/0' |
495 | + else: |
496 | + # Not Precise |
497 | + expected['allowed_units'] = 'neutron-api/0' |
498 | + |
499 | + ret = u.validate_relation_data(unit, relation, expected) |
500 | + if ret: |
501 | + message = u.relation_error('mysql shared-db', ret) |
502 | + amulet.raise_status(amulet.FAIL, msg=message) |
503 | + |
504 | + def test_208_neutron_api_amqp_relation(self): |
505 | + """Verify the neutron-api to rabbitmq-server amqp relation data""" |
506 | + u.log.debug('Checking neutron-api:amqp relation data...') |
507 | + unit = self.neutron_api_sentry |
508 | + relation = ['amqp', 'rabbitmq-server:amqp'] |
509 | + expected = { |
510 | + 'username': 'neutron', |
511 | + 'private-address': u.valid_ip, |
512 | + 'vhost': 'openstack' |
513 | + } |
514 | + |
515 | + ret = u.validate_relation_data(unit, relation, expected) |
516 | + if ret: |
517 | + message = u.relation_error('neutron-api amqp', ret) |
518 | + amulet.raise_status(amulet.FAIL, msg=message) |
519 | + |
520 | + def test_209_amqp_neutron_api_relation(self): |
521 | + """Verify the rabbitmq-server to neutron-api amqp relation data""" |
522 | + u.log.debug('Checking amqp:neutron-api relation data...') |
523 | + unit = self.rmq_sentry |
524 | + relation = ['amqp', 'neutron-api:amqp'] |
525 | + expected = { |
526 | + 'hostname': u.valid_ip, |
527 | + 'private-address': u.valid_ip, |
528 | + 'password': u.not_null |
529 | + } |
530 | + |
531 | + ret = u.validate_relation_data(unit, relation, expected) |
532 | + if ret: |
533 | + message = u.relation_error('rabbitmq amqp', ret) |
534 | + amulet.raise_status(amulet.FAIL, msg=message) |
535 | + |
536 | + def test_210_neutron_api_keystone_identity_relation(self): |
537 | + """Verify the neutron-api to keystone identity-service relation data""" |
538 | + u.log.debug('Checking neutron-api:keystone id relation data...') |
539 | + unit = self.neutron_api_sentry |
540 | + relation = ['identity-service', 'keystone:identity-service'] |
541 | + api_ip = unit.relation('identity-service', |
542 | + 'keystone:identity-service')['private-address'] |
543 | + api_endpoint = 'http://{}:9696'.format(api_ip) |
544 | + expected = { |
545 | + 'private-address': u.valid_ip, |
546 | + 'quantum_region': 'RegionOne', |
547 | + 'quantum_service': 'quantum', |
548 | + 'quantum_admin_url': api_endpoint, |
549 | + 'quantum_internal_url': api_endpoint, |
550 | + 'quantum_public_url': api_endpoint, |
551 | + } |
552 | + |
553 | + ret = u.validate_relation_data(unit, relation, expected) |
554 | + if ret: |
555 | + message = u.relation_error('neutron-api identity-service', ret) |
556 | + amulet.raise_status(amulet.FAIL, msg=message) |
557 | + |
558 | + def test_211_keystone_neutron_api_identity_relation(self): |
559 | + """Verify the keystone to neutron-api identity-service relation data""" |
560 | + u.log.debug('Checking keystone:neutron-api id relation data...') |
561 | + unit = self.keystone_sentry |
562 | + relation = ['identity-service', 'neutron-api:identity-service'] |
563 | + rel_ks_id = unit.relation('identity-service', |
564 | + 'neutron-api:identity-service') |
565 | + id_ip = rel_ks_id['private-address'] |
566 | + expected = { |
567 | + 'admin_token': 'ubuntutesting', |
568 | + 'auth_host': id_ip, |
569 | + 'auth_port': "35357", |
570 | + 'auth_protocol': 'http', |
571 | + 'private-address': id_ip, |
572 | + 'service_host': id_ip, |
573 | + } |
574 | + ret = u.validate_relation_data(unit, relation, expected) |
575 | + if ret: |
576 | + message = u.relation_error('neutron-api identity-service', ret) |
577 | + amulet.raise_status(amulet.FAIL, msg=message) |
578 | + |
579 | + def test_212_neutron_api_novacc_relation(self): |
580 | + """Verify the neutron-api to nova-cloud-controller relation data""" |
581 | + u.log.debug('Checking neutron-api:novacc relation data...') |
582 | + unit = self.neutron_api_sentry |
583 | + relation = ['neutron-api', 'nova-cloud-controller:neutron-api'] |
584 | + api_ip = unit.relation('identity-service', |
585 | + 'keystone:identity-service')['private-address'] |
586 | + api_endpoint = 'http://{}:9696'.format(api_ip) |
587 | + expected = { |
588 | + 'private-address': api_ip, |
589 | + 'neutron-plugin': 'ovs', |
590 | + 'neutron-security-groups': "no", |
591 | + 'neutron-url': api_endpoint, |
592 | + } |
593 | + ret = u.validate_relation_data(unit, relation, expected) |
594 | + if ret: |
595 | + message = u.relation_error('neutron-api neutron-api', ret) |
596 | + amulet.raise_status(amulet.FAIL, msg=message) |
597 | + |
598 | + def test_213_novacc_neutron_api_relation(self): |
599 | + """Verify the nova-cloud-controller to neutron-api relation data""" |
600 | + u.log.debug('Checking novacc:neutron-api relation data...') |
601 | + unit = self.nova_cc_sentry |
602 | + relation = ['neutron-api', 'neutron-api:neutron-api'] |
603 | + cc_ip = unit.relation('neutron-api', |
604 | + 'neutron-api:neutron-api')['private-address'] |
605 | + cc_endpoint = 'http://{}:8774/v2'.format(cc_ip) |
606 | + expected = { |
607 | + 'private-address': cc_ip, |
608 | + 'nova_url': cc_endpoint, |
609 | + } |
610 | + ret = u.validate_relation_data(unit, relation, expected) |
611 | + if ret: |
612 | + message = u.relation_error('nova-cc neutron-api', ret) |
613 | + amulet.raise_status(amulet.FAIL, msg=message) |
614 | + |
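The relation checks above all follow the same shape: build an `expected` dict and hand it to `u.validate_relation_data`, where values may be literals or matcher callables such as `u.not_null`. A minimal local sketch of that comparison logic (hypothetical `validate_expected` and `not_null` stand-ins, not the actual charm-helpers implementation) looks like:

```python
def validate_expected(actual, expected):
    # Return None when every expected key matches, else an error string.
    # Values may be literals or callables (predicate matchers), mirroring
    # how the tests pass matchers such as u.not_null.
    for key, want in expected.items():
        if key not in actual:
            return 'missing key: {}'.format(key)
        got = actual[key]
        if callable(want):
            if not want(got):
                return 'predicate failed for {}: {!r}'.format(key, got)
        elif got != want:
            return '{}: expected {!r}, got {!r}'.format(key, want, got)
    return None


def not_null(value):
    # Stand-in for the charm-helpers not_null matcher.
    return value is not None


rel = {'private-address': '10.0.0.5', 'neutron-plugin': 'ovs'}
assert validate_expected(rel, {'neutron-plugin': 'ovs',
                               'private-address': not_null}) is None
assert validate_expected(rel, {'neutron-url': 'x'}) == 'missing key: neutron-url'
```

A non-None return is then wrapped in `u.relation_error(...)` and raised as an amulet failure, as each test above does.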
615 | + def test_300_neutron_config(self): |
616 | """Verify the data in the neutron config file.""" |
617 | + u.log.debug('Checking neutron gateway config file data...') |
618 | unit = self.neutron_gateway_sentry |
619 | - rabbitmq_relation = self.rabbitmq_sentry.relation('amqp', |
620 | - 'neutron-gateway:amqp') |
621 | + rmq_ng_rel = self.rmq_sentry.relation( |
622 | + 'amqp', 'neutron-gateway:amqp') |
623 | |
624 | conf = '/etc/neutron/neutron.conf' |
625 | expected = { |
626 | @@ -350,35 +564,34 @@ |
627 | 'notification_driver': 'neutron.openstack.common.notifier.' |
628 | 'list_notifier', |
629 | 'list_notifier_drivers': 'neutron.openstack.common.' |
630 | - 'notifier.rabbit_notifier' |
631 | + 'notifier.rabbit_notifier', |
632 | }, |
633 | 'agent': { |
634 | 'root_helper': 'sudo /usr/bin/neutron-rootwrap ' |
635 | '/etc/neutron/rootwrap.conf' |
636 | } |
637 | } |
638 | + |
639 | if self._get_openstack_release() >= self.trusty_kilo: |
640 | - oslo_concurrency = { |
641 | - 'oslo_concurrency': { |
642 | - 'lock_path':'/var/lock/neutron' |
643 | - } |
644 | - } |
645 | - oslo_messaging_rabbit = { |
646 | - 'oslo_messaging_rabbit': { |
647 | - 'rabbit_userid': 'neutron', |
648 | - 'rabbit_virtual_host': 'openstack', |
649 | - 'rabbit_password': rabbitmq_relation['password'], |
650 | - 'rabbit_host': rabbitmq_relation['hostname'], |
651 | - } |
652 | - } |
653 | - expected.update(oslo_concurrency) |
654 | - expected.update(oslo_messaging_rabbit) |
655 | + # Kilo or later |
656 | + expected['oslo_messaging_rabbit'] = { |
657 | + 'rabbit_userid': 'neutron', |
658 | + 'rabbit_virtual_host': 'openstack', |
659 | + 'rabbit_password': rmq_ng_rel['password'], |
660 | + 'rabbit_host': rmq_ng_rel['hostname'], |
661 | + } |
662 | + expected['oslo_concurrency'] = { |
663 | + 'lock_path': '/var/lock/neutron' |
664 | + } |
665 | else: |
666 | - expected['DEFAULT']['lock_path'] = '/var/lock/neutron' |
667 | - expected['DEFAULT']['rabbit_userid'] = 'neutron' |
668 | - expected['DEFAULT']['rabbit_virtual_host'] = 'openstack' |
669 | - expected['DEFAULT']['rabbit_password'] = rabbitmq_relation['password'] |
670 | - expected['DEFAULT']['rabbit_host'] = rabbitmq_relation['hostname'] |
671 | + # Juno or earlier |
672 | + expected['DEFAULT'].update({ |
673 | + 'rabbit_userid': 'neutron', |
674 | + 'rabbit_virtual_host': 'openstack', |
675 | + 'rabbit_password': rmq_ng_rel['password'], |
676 | + 'rabbit_host': rmq_ng_rel['hostname'], |
677 | + 'lock_path': '/var/lock/neutron', |
678 | + }) |
679 | |
680 | for section, pairs in expected.iteritems(): |
681 | ret = u.validate_config_data(unit, conf, section, pairs) |
682 | @@ -386,15 +599,17 @@ |
683 | message = "neutron config error: {}".format(ret) |
684 | amulet.raise_status(amulet.FAIL, msg=message) |
685 | |
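Each config test compares an on-unit INI file against an expected dict, section by section, via `u.validate_config_data`. A rough local equivalent of that check (reading from a string instead of fetching the file from the remote unit; Python 3 `configparser` assumed) is:

```python
import configparser
import io


def validate_config_data(ini_text, section, expected):
    # Check expected key/value pairs in one section of an INI document.
    # Returns None on success, or a description of the first mismatch.
    parser = configparser.ConfigParser()
    parser.read_file(io.StringIO(ini_text))
    for key, want in expected.items():
        if not parser.has_option(section, key):
            return 'section [{}] missing option {}'.format(section, key)
        got = parser.get(section, key)
        if got != want:
            return '{}: expected {!r}, got {!r}'.format(key, want, got)
    return None


conf = "[DEFAULT]\nverbose = True\nlock_path = /var/lock/neutron\n"
assert validate_config_data(conf, 'DEFAULT',
                            {'lock_path': '/var/lock/neutron'}) is None
```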
686 | - def test_ml2_config(self): |
687 | + def test_301_neutron_ml2_config(self): |
688 | """Verify the data in the ml2 config file. This is only available |
689 | since icehouse.""" |
690 | + u.log.debug('Checking neutron gateway ml2 config file data...') |
691 | if self._get_openstack_release() < self.precise_icehouse: |
692 | return |
693 | |
694 | unit = self.neutron_gateway_sentry |
695 | conf = '/etc/neutron/plugins/ml2/ml2_conf.ini' |
696 | - neutron_gateway_relation = unit.relation('shared-db', 'mysql:shared-db') |
697 | + ng_db_rel = unit.relation('shared-db', 'mysql:shared-db') |
698 | + |
699 | expected = { |
700 | 'ml2': { |
701 | 'type_drivers': 'gre,vxlan,vlan,flat', |
702 | @@ -409,7 +624,7 @@ |
703 | }, |
704 | 'ovs': { |
705 | 'enable_tunneling': 'True', |
706 | - 'local_ip': neutron_gateway_relation['private-address'] |
707 | + 'local_ip': ng_db_rel['private-address'] |
708 | }, |
709 | 'agent': { |
710 | 'tunnel_types': 'gre', |
711 | @@ -427,8 +642,9 @@ |
712 | message = "ml2 config error: {}".format(ret) |
713 | amulet.raise_status(amulet.FAIL, msg=message) |
714 | |
715 | - def test_dhcp_agent_config(self): |
716 | + def test_302_neutron_dhcp_agent_config(self): |
717 | """Verify the data in the dhcp agent config file.""" |
718 | + u.log.debug('Checking neutron gateway dhcp agent config file data...') |
719 | unit = self.neutron_gateway_sentry |
720 | conf = '/etc/neutron/dhcp_agent.ini' |
721 | expected = { |
722 | @@ -440,44 +656,45 @@ |
723 | '/etc/neutron/rootwrap.conf', |
724 | 'ovs_use_veth': 'True' |
725 | } |
726 | + section = 'DEFAULT' |
727 | |
728 | - ret = u.validate_config_data(unit, conf, 'DEFAULT', expected) |
729 | + ret = u.validate_config_data(unit, conf, section, expected) |
730 | if ret: |
731 | message = "dhcp agent config error: {}".format(ret) |
732 | amulet.raise_status(amulet.FAIL, msg=message) |
733 | |
734 | - def test_fwaas_driver_config(self): |
735 | + def test_303_neutron_fwaas_driver_config(self): |
736 | """Verify the data in the fwaas driver config file. This is only |
737 | available since havana.""" |
738 | - if self._get_openstack_release() < self.precise_havana: |
739 | - return |
740 | - |
741 | + u.log.debug('Checking neutron gateway fwaas config file data...') |
742 | unit = self.neutron_gateway_sentry |
743 | conf = '/etc/neutron/fwaas_driver.ini' |
744 | + expected = { |
745 | + 'enabled': 'True' |
746 | + } |
747 | + section = 'fwaas' |
748 | + |
749 | if self._get_openstack_release() >= self.trusty_kilo: |
750 | - expected = { |
751 | - 'driver': 'neutron_fwaas.services.firewall.drivers.' |
752 | - 'linux.iptables_fwaas.IptablesFwaasDriver', |
753 | - 'enabled': 'True' |
754 | - } |
755 | + # Kilo or later |
756 | + expected['driver'] = ('neutron_fwaas.services.firewall.drivers.' |
757 | + 'linux.iptables_fwaas.IptablesFwaasDriver') |
758 | else: |
759 | - expected = { |
760 | - 'driver': 'neutron.services.firewall.drivers.' |
761 | - 'linux.iptables_fwaas.IptablesFwaasDriver', |
762 | - 'enabled': 'True' |
763 | - } |
764 | + # Juno or earlier |
765 | + expected['driver'] = ('neutron.services.firewall.drivers.linux.' |
766 | + 'iptables_fwaas.IptablesFwaasDriver') |
767 | |
768 | - ret = u.validate_config_data(unit, conf, 'fwaas', expected) |
769 | + ret = u.validate_config_data(unit, conf, section, expected) |
770 | if ret: |
771 | message = "fwaas driver config error: {}".format(ret) |
772 | amulet.raise_status(amulet.FAIL, msg=message) |
773 | |
774 | - def test_l3_agent_config(self): |
775 | + def test_304_neutron_l3_agent_config(self): |
776 | """Verify the data in the l3 agent config file.""" |
777 | + u.log.debug('Checking neutron gateway l3 agent config file data...') |
778 | unit = self.neutron_gateway_sentry |
779 | - nova_cc_relation = self.nova_cc_sentry.relation(\ |
780 | - 'quantum-network-service', |
781 | - 'neutron-gateway:quantum-network-service') |
782 | + ncc_ng_rel = self.nova_cc_sentry.relation( |
783 | + 'quantum-network-service', |
784 | + 'neutron-gateway:quantum-network-service') |
785 | ep = self.keystone.service_catalog.url_for(service_type='identity', |
786 | endpoint_type='publicURL') |
787 | |
788 | @@ -488,24 +705,30 @@ |
789 | 'auth_url': ep, |
790 | 'auth_region': 'RegionOne', |
791 | 'admin_tenant_name': 'services', |
792 | - 'admin_user': 'quantum_s3_ec2_nova', |
793 | - 'admin_password': nova_cc_relation['service_password'], |
794 | + 'admin_password': ncc_ng_rel['service_password'], |
795 | 'root_helper': 'sudo /usr/bin/neutron-rootwrap ' |
796 | '/etc/neutron/rootwrap.conf', |
797 | 'ovs_use_veth': 'True', |
798 | 'handle_internal_only_routers': 'True' |
799 | } |
800 | + section = 'DEFAULT' |
801 | + |
802 | if self._get_openstack_release() >= self.trusty_kilo: |
803 | + # Kilo or later |
804 | expected['admin_user'] = 'nova' |
805 | + else: |
806 | + # Juno or earlier |
807 | + expected['admin_user'] = 's3_ec2_nova' |
808 | |
809 | - ret = u.validate_config_data(unit, conf, 'DEFAULT', expected) |
810 | + ret = u.validate_config_data(unit, conf, section, expected) |
811 | if ret: |
812 | message = "l3 agent config error: {}".format(ret) |
813 | amulet.raise_status(amulet.FAIL, msg=message) |
814 | |
815 | - def test_lbaas_agent_config(self): |
816 | + def test_305_neutron_lbaas_agent_config(self): |
817 | """Verify the data in the lbaas agent config file. This is only |
818 | available since havana.""" |
819 | + u.log.debug('Checking neutron gateway lbaas config file data...') |
820 | if self._get_openstack_release() < self.precise_havana: |
821 | return |
822 | |
823 | @@ -513,21 +736,27 @@ |
824 | conf = '/etc/neutron/lbaas_agent.ini' |
825 | expected = { |
826 | 'DEFAULT': { |
827 | - 'periodic_interval': '10', |
828 | 'interface_driver': 'neutron.agent.linux.interface.' |
829 | 'OVSInterfaceDriver', |
830 | + 'periodic_interval': '10', |
831 | 'ovs_use_veth': 'False', |
832 | - 'device_driver': 'neutron.services.loadbalancer.drivers.' |
833 | - 'haproxy.namespace_driver.HaproxyNSDriver' |
834 | }, |
835 | 'haproxy': { |
836 | 'loadbalancer_state_path': '$state_path/lbaas', |
837 | 'user_group': 'nogroup' |
838 | } |
839 | } |
840 | + |
841 | if self._get_openstack_release() >= self.trusty_kilo: |
842 | - expected['DEFAULT']['device_driver'] = ('neutron_lbaas.services.' + |
843 | - 'loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver') |
844 | + # Kilo or later |
845 | + expected['DEFAULT']['device_driver'] = \ |
846 | + ('neutron_lbaas.services.loadbalancer.drivers.haproxy.' |
847 | + 'namespace_driver.HaproxyNSDriver') |
848 | + else: |
849 | + # Juno or earlier |
850 | + expected['DEFAULT']['device_driver'] = \ |
851 | + ('neutron.services.loadbalancer.drivers.haproxy.' |
852 | + 'namespace_driver.HaproxyNSDriver') |
853 | |
854 | for section, pairs in expected.iteritems(): |
855 | ret = u.validate_config_data(unit, conf, section, pairs) |
856 | @@ -535,46 +764,51 @@ |
857 | message = "lbaas agent config error: {}".format(ret) |
858 | amulet.raise_status(amulet.FAIL, msg=message) |
859 | |
860 | - def test_metadata_agent_config(self): |
861 | + def test_306_neutron_metadata_agent_config(self): |
862 | """Verify the data in the metadata agent config file.""" |
863 | + u.log.debug('Checking neutron gateway metadata agent ' |
864 | + 'config file data...') |
865 | unit = self.neutron_gateway_sentry |
866 | ep = self.keystone.service_catalog.url_for(service_type='identity', |
867 | endpoint_type='publicURL') |
868 | - neutron_gateway_relation = unit.relation('shared-db', 'mysql:shared-db') |
869 | - nova_cc_relation = self.nova_cc_sentry.relation(\ |
870 | - 'quantum-network-service', |
871 | - 'neutron-gateway:quantum-network-service') |
872 | + ng_db_rel = unit.relation('shared-db', |
873 | + 'mysql:shared-db') |
874 | + nova_cc_relation = self.nova_cc_sentry.relation( |
875 | + 'quantum-network-service', |
876 | + 'neutron-gateway:quantum-network-service') |
877 | |
878 | conf = '/etc/neutron/metadata_agent.ini' |
879 | expected = { |
880 | 'auth_url': ep, |
881 | 'auth_region': 'RegionOne', |
882 | 'admin_tenant_name': 'services', |
883 | - 'admin_user': 'quantum_s3_ec2_nova', |
884 | 'admin_password': nova_cc_relation['service_password'], |
885 | 'root_helper': 'sudo neutron-rootwrap ' |
886 | - '/etc/neutron/rootwrap.conf', |
887 | + '/etc/neutron/rootwrap.conf', |
888 | 'state_path': '/var/lib/neutron', |
889 | - 'nova_metadata_ip': neutron_gateway_relation['private-address'], |
890 | - 'nova_metadata_port': '8775' |
891 | + 'nova_metadata_ip': ng_db_rel['private-address'], |
892 | + 'nova_metadata_port': '8775', |
893 | + 'cache_url': 'memory://?default_ttl=5' |
894 | } |
895 | + section = 'DEFAULT' |
896 | + |
897 | if self._get_openstack_release() >= self.trusty_kilo: |
898 | + # Kilo or later |
899 | expected['admin_user'] = 'nova' |
900 | - |
901 | - if self._get_openstack_release() >= self.precise_icehouse: |
902 | - expected['cache_url'] = 'memory://?default_ttl=5' |
903 | - |
904 | - ret = u.validate_config_data(unit, conf, 'DEFAULT', expected) |
905 | + else: |
906 | + # Juno or earlier |
907 | + expected['admin_user'] = 's3_ec2_nova' |
908 | + |
909 | + ret = u.validate_config_data(unit, conf, section, expected) |
910 | if ret: |
911 | message = "metadata agent config error: {}".format(ret) |
912 | amulet.raise_status(amulet.FAIL, msg=message) |
913 | |
914 | - def test_metering_agent_config(self): |
915 | + def test_307_neutron_metering_agent_config(self): |
916 | """Verify the data in the metering agent config file. This is only |
917 | available since havana.""" |
918 | - if self._get_openstack_release() < self.precise_havana: |
919 | - return |
920 | - |
921 | + u.log.debug('Checking neutron gateway metering agent ' |
922 | + 'config file data...') |
923 | unit = self.neutron_gateway_sentry |
924 | conf = '/etc/neutron/metering_agent.ini' |
925 | expected = { |
926 | @@ -586,26 +820,24 @@ |
927 | 'OVSInterfaceDriver', |
928 | 'use_namespaces': 'True' |
929 | } |
930 | + section = 'DEFAULT' |
931 | |
932 | - ret = u.validate_config_data(unit, conf, 'DEFAULT', expected) |
933 | + ret = u.validate_config_data(unit, conf, section, expected) |
934 | if ret: |
935 | message = "metering agent config error: {}".format(ret) |
936 | + amulet.raise_status(amulet.FAIL, msg=message) |
937 | |
938 | - def test_nova_config(self): |
939 | + def test_308_neutron_nova_config(self): |
940 | """Verify the data in the nova config file.""" |
941 | + u.log.debug('Checking neutron gateway nova config file data...') |
942 | unit = self.neutron_gateway_sentry |
943 | conf = '/etc/nova/nova.conf' |
944 | - mysql_relation = self.mysql_sentry.relation('shared-db', |
945 | - 'neutron-gateway:shared-db') |
946 | - db_uri = "mysql://{}:{}@{}/{}".format('nova', |
947 | - mysql_relation['password'], |
948 | - mysql_relation['db_host'], |
949 | - 'nova') |
950 | - rabbitmq_relation = self.rabbitmq_sentry.relation('amqp', |
951 | - 'neutron-gateway:amqp') |
952 | - nova_cc_relation = self.nova_cc_sentry.relation(\ |
953 | - 'quantum-network-service', |
954 | - 'neutron-gateway:quantum-network-service') |
955 | + |
956 | + rabbitmq_relation = self.rmq_sentry.relation( |
957 | + 'amqp', 'neutron-gateway:amqp') |
958 | + nova_cc_relation = self.nova_cc_sentry.relation( |
959 | + 'quantum-network-service', |
960 | + 'neutron-gateway:quantum-network-service') |
961 | ep = self.keystone.service_catalog.url_for(service_type='identity', |
962 | endpoint_type='publicURL') |
963 | |
964 | @@ -622,49 +854,44 @@ |
965 | 'network_api_class': 'nova.network.neutronv2.api.API', |
966 | } |
967 | } |
968 | + |
969 | if self._get_openstack_release() >= self.trusty_kilo: |
970 | - neutron = { |
971 | - 'neutron': { |
972 | - 'auth_strategy': 'keystone', |
973 | - 'url': nova_cc_relation['quantum_url'], |
974 | - 'admin_tenant_name': 'services', |
975 | - 'admin_username': 'nova', |
976 | - 'admin_password': nova_cc_relation['service_password'], |
977 | - 'admin_auth_url': ep, |
978 | - 'service_metadata_proxy': 'True', |
979 | - } |
980 | - } |
981 | - oslo_concurrency = { |
982 | - 'oslo_concurrency': { |
983 | - 'lock_path':'/var/lock/nova' |
984 | - } |
985 | - } |
986 | - oslo_messaging_rabbit = { |
987 | - 'oslo_messaging_rabbit': { |
988 | - 'rabbit_userid': 'neutron', |
989 | - 'rabbit_virtual_host': 'openstack', |
990 | - 'rabbit_password': rabbitmq_relation['password'], |
991 | - 'rabbit_host': rabbitmq_relation['hostname'], |
992 | - } |
993 | - } |
994 | - expected.update(neutron) |
995 | - expected.update(oslo_concurrency) |
996 | - expected.update(oslo_messaging_rabbit) |
997 | + # Kilo or later |
998 | + expected['oslo_messaging_rabbit'] = { |
999 | + 'rabbit_userid': 'neutron', |
1000 | + 'rabbit_virtual_host': 'openstack', |
1001 | + 'rabbit_password': rabbitmq_relation['password'], |
1002 | + 'rabbit_host': rabbitmq_relation['hostname'], |
1003 | + } |
1004 | + expected['oslo_concurrency'] = { |
1005 | + 'lock_path': '/var/lock/nova' |
1006 | + } |
1007 | + expected['neutron'] = { |
1008 | + 'auth_strategy': 'keystone', |
1009 | + 'url': nova_cc_relation['quantum_url'], |
1010 | + 'admin_tenant_name': 'services', |
1011 | + 'admin_username': 'nova', |
1012 | + 'admin_password': nova_cc_relation['service_password'], |
1013 | + 'admin_auth_url': ep, |
1014 | + 'service_metadata_proxy': 'True', |
1015 | + 'metadata_proxy_shared_secret': u.not_null |
1016 | + } |
1017 | else: |
1018 | - d = 'DEFAULT' |
1019 | - expected[d]['lock_path'] = '/var/lock/nova' |
1020 | - expected[d]['rabbit_userid'] = 'neutron' |
1021 | - expected[d]['rabbit_virtual_host'] = 'openstack' |
1022 | - expected[d]['rabbit_password'] = rabbitmq_relation['password'] |
1023 | - expected[d]['rabbit_host'] = rabbitmq_relation['hostname'] |
1024 | - expected[d]['service_neutron_metadata_proxy'] = 'True' |
1025 | - expected[d]['neutron_auth_strategy'] = 'keystone' |
1026 | - expected[d]['neutron_url'] = nova_cc_relation['quantum_url'] |
1027 | - expected[d]['neutron_admin_tenant_name'] = 'services' |
1028 | - expected[d]['neutron_admin_username'] = 'quantum_s3_ec2_nova' |
1029 | - expected[d]['neutron_admin_password'] = \ |
1030 | - nova_cc_relation['service_password'] |
1031 | - expected[d]['neutron_admin_auth_url'] = ep |
1032 | + # Juno or earlier |
1033 | + expected['DEFAULT'].update({ |
1034 | + 'rabbit_userid': 'neutron', |
1035 | + 'rabbit_virtual_host': 'openstack', |
1036 | + 'rabbit_password': rabbitmq_relation['password'], |
1037 | + 'rabbit_host': rabbitmq_relation['hostname'], |
1038 | + 'lock_path': '/var/lock/nova', |
1039 | + 'neutron_auth_strategy': 'keystone', |
1040 | + 'neutron_url': nova_cc_relation['quantum_url'], |
1041 | + 'neutron_admin_tenant_name': 'services', |
1042 | + 'neutron_admin_username': 's3_ec2_nova', |
1043 | + 'neutron_admin_password': nova_cc_relation['service_password'], |
1044 | + 'neutron_admin_auth_url': ep, |
1045 | + 'service_neutron_metadata_proxy': 'True', |
1046 | + }) |
1047 | |
1048 | for section, pairs in expected.iteritems(): |
1049 | ret = u.validate_config_data(unit, conf, section, pairs) |
1050 | @@ -672,56 +899,30 @@ |
1051 | message = "nova config error: {}".format(ret) |
1052 | amulet.raise_status(amulet.FAIL, msg=message) |
1053 | |
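A pattern repeated across these config tests is gating the expected data on `_get_openstack_release()` comparisons: Kilo moved the rabbit and lock-path options out of `DEFAULT` into the `oslo_*` sections. The gist of that gating, with a hypothetical ordered release list standing in for the deployment class attributes:

```python
# Ordered oldest to newest; index comparison mirrors how
# self._get_openstack_release() >= self.trusty_kilo gates expectations.
RELEASES = ['precise_icehouse', 'trusty_juno', 'trusty_kilo',
            'vivid_kilo', 'trusty_liberty', 'wily_liberty']


def rabbit_section(release):
    # Kilo and later read rabbit settings from [oslo_messaging_rabbit];
    # Juno and earlier keep them in [DEFAULT].
    if RELEASES.index(release) >= RELEASES.index('trusty_kilo'):
        return 'oslo_messaging_rabbit'
    return 'DEFAULT'


assert rabbit_section('trusty_juno') == 'DEFAULT'
assert rabbit_section('trusty_liberty') == 'oslo_messaging_rabbit'
```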
1054 | - def test_ovs_neutron_plugin_config(self): |
1055 | - """Verify the data in the ovs neutron plugin config file. The ovs |
1056 | - plugin is not used by default since icehouse.""" |
1057 | - if self._get_openstack_release() >= self.precise_icehouse: |
1058 | - return |
1059 | - |
1060 | - unit = self.neutron_gateway_sentry |
1061 | - neutron_gateway_relation = unit.relation('shared-db', 'mysql:shared-db') |
1062 | - |
1063 | - conf = '/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini' |
1064 | - expected = { |
1065 | - 'ovs': { |
1066 | - 'local_ip': neutron_gateway_relation['private-address'], |
1067 | - 'tenant_network_type': 'gre', |
1068 | - 'enable_tunneling': 'True', |
1069 | - 'tunnel_id_ranges': '1:1000' |
1070 | - }, |
1071 | - 'agent': { |
1072 | - 'polling_interval': '10', |
1073 | - 'root_helper': 'sudo /usr/bin/neutron-rootwrap ' |
1074 | - '/etc/neutron/rootwrap.conf' |
1075 | - } |
1076 | - } |
1077 | - |
1078 | - for section, pairs in expected.iteritems(): |
1079 | - ret = u.validate_config_data(unit, conf, section, pairs) |
1080 | - if ret: |
1081 | - message = "ovs neutron plugin config error: {}".format(ret) |
1082 | - amulet.raise_status(amulet.FAIL, msg=message) |
1083 | - |
1084 | - def test_vpn_agent_config(self): |
1085 | + def test_309_neutron_vpn_agent_config(self): |
1086 | """Verify the data in the vpn agent config file. This isn't available |
1087 | prior to havana.""" |
1088 | - if self._get_openstack_release() < self.precise_havana: |
1089 | - return |
1090 | - |
1091 | + u.log.debug('Checking neutron gateway vpn agent config file data...') |
1092 | unit = self.neutron_gateway_sentry |
1093 | conf = '/etc/neutron/vpn_agent.ini' |
1094 | expected = { |
1095 | - 'vpnagent': { |
1096 | + 'ipsec': { |
1097 | + 'ipsec_status_check_interval': '60' |
1098 | + } |
1099 | + } |
1100 | + |
1101 | + if self._get_openstack_release() >= self.trusty_kilo: |
1102 | + # Kilo or later |
1103 | + expected['vpnagent'] = { |
1104 | + 'vpn_device_driver': 'neutron_vpnaas.services.vpn.' |
1105 | + 'device_drivers.ipsec.OpenSwanDriver' |
1106 | + } |
1107 | + else: |
1108 | + # Juno or earlier |
1109 | + expected['vpnagent'] = { |
1110 | 'vpn_device_driver': 'neutron.services.vpn.device_drivers.' |
1111 | 'ipsec.OpenSwanDriver' |
1112 | - }, |
1113 | - 'ipsec': { |
1114 | - 'ipsec_status_check_interval': '60' |
1115 | } |
1116 | - } |
1117 | - if self._get_openstack_release() >= self.trusty_kilo: |
1118 | - expected['vpnagent']['vpn_device_driver'] = ('neutron_vpnaas.' + |
1119 | - 'services.vpn.device_drivers.ipsec.OpenSwanDriver') |
1120 | |
1121 | for section, pairs in expected.iteritems(): |
1122 | ret = u.validate_config_data(unit, conf, section, pairs) |
1123 | @@ -729,8 +930,9 @@ |
1124 | message = "vpn agent config error: {}".format(ret) |
1125 | amulet.raise_status(amulet.FAIL, msg=message) |
1126 | |
1127 | - def test_create_network(self): |
1128 | + def test_400_create_network(self): |
1129 | """Create a network, verify that it exists, and then delete it.""" |
1130 | + u.log.debug('Creating neutron network...') |
1131 | self.neutron.format = 'json' |
1132 | net_name = 'ext_net' |
1133 | |
1134 | @@ -743,7 +945,7 @@ |
1135 | |
1136 | # Create a network and verify that it exists |
1137 | network = {'name': net_name} |
1138 | - self.neutron.create_network({'network':network}) |
1139 | + self.neutron.create_network({'network': network}) |
1140 | |
1141 | networks = self.neutron.list_networks(name=net_name) |
1142 | net_len = len(networks['networks']) |
1143 | @@ -751,9 +953,57 @@ |
1144 | msg = "Expected 1 network, found {}".format(net_len) |
1145 | amulet.raise_status(amulet.FAIL, msg=msg) |
1146 | |
1147 | + u.log.debug('Confirming new neutron network...') |
1148 | network = networks['networks'][0] |
1149 | if network['name'] != net_name: |
1150 | amulet.raise_status(amulet.FAIL, msg="network ext_net not found") |
1151 | |
1152 | # Cleanup |
1153 | + u.log.debug('Deleting neutron network...') |
1154 | self.neutron.delete_network(network['id']) |
1155 | + |
1156 | + def test_900_restart_on_config_change(self): |
1157 | + """Verify that the specified services are restarted when the |
1158 | + config is changed.""" |
1159 | + |
1160 | + sentry = self.neutron_gateway_sentry |
1161 | + juju_service = 'neutron-gateway' |
1162 | + |
1163 | + # Expected default and alternate values |
1164 | + set_default = {'debug': 'False'} |
1165 | + set_alternate = {'debug': 'True'} |
1166 | + |
1167 | + # Services which are expected to restart upon config change, |
1168 | + # and corresponding config files affected by the change |
1169 | + conf_file = '/etc/neutron/neutron.conf' |
1170 | + services = { |
1171 | + 'neutron-dhcp-agent': conf_file, |
1172 | + 'neutron-lbaas-agent': conf_file, |
1173 | + 'neutron-metadata-agent': conf_file, |
1174 | + 'neutron-metering-agent': conf_file, |
1175 | + 'neutron-openvswitch-agent': conf_file, |
1176 | + } |
1177 | + |
1178 | + if self._get_openstack_release() <= self.trusty_juno: |
1179 | + services.update({'neutron-vpn-agent': conf_file}) |
1180 | + |
1181 | + # Make config change, check for svc restart, conf file mod time change |
1182 | + u.log.debug('Making config change on {}...'.format(juju_service)) |
1183 | + mtime = u.get_sentry_time(sentry) |
1184 | + self.d.configure(juju_service, set_alternate) |
1185 | + |
1187 | + for s, conf_file in services.iteritems(): |
1188 | + u.log.debug("Checking that service restarted: {}".format(s)) |
1189 | + if not u.validate_service_config_changed(sentry, mtime, s, |
1190 | + conf_file): |
1193 | + self.d.configure(juju_service, set_default) |
1194 | + msg = "service {} didn't restart after config change".format(s) |
1195 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1196 | + |
1199 | + |
1200 | + self.d.configure(juju_service, set_default) |
1201 | |
1202 | === modified file 'tests/charmhelpers/contrib/amulet/deployment.py' |
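test_900 captures a before-change timestamp, flips a charm option, and then asserts that each service process started after that timestamp (via `u.validate_service_config_changed`). The comparison itself is simple; a self-contained sketch using a file mtime to stand in for a process start time:

```python
import os
import tempfile
import time


def restarted_since(proc_start_time, mtime):
    # A service counts as restarted when its process start time is at or
    # after the recorded before-change timestamp.
    return proc_start_time >= mtime


before = time.time() - 1          # pretend the config change was 1s ago
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name                 # file mtime stands in for /proc/<pid>
start = os.path.getmtime(path)
assert restarted_since(start, before)
os.unlink(path)
```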
1203 | --- tests/charmhelpers/contrib/amulet/deployment.py 2015-01-23 11:08:26 +0000 |
1204 | +++ tests/charmhelpers/contrib/amulet/deployment.py 2015-09-21 20:38:30 +0000 |
1205 | @@ -51,7 +51,8 @@ |
1206 | if 'units' not in this_service: |
1207 | this_service['units'] = 1 |
1208 | |
1209 | - self.d.add(this_service['name'], units=this_service['units']) |
1210 | + self.d.add(this_service['name'], units=this_service['units'], |
1211 | + constraints=this_service.get('constraints')) |
1212 | |
1213 | for svc in other_services: |
1214 | if 'location' in svc: |
1215 | @@ -64,7 +65,8 @@ |
1216 | if 'units' not in svc: |
1217 | svc['units'] = 1 |
1218 | |
1219 | - self.d.add(svc['name'], charm=branch_location, units=svc['units']) |
1220 | + self.d.add(svc['name'], charm=branch_location, units=svc['units'], |
1221 | + constraints=svc.get('constraints')) |
1222 | |
1223 | def _add_relations(self, relations): |
1224 | """Add all of the relations for the services.""" |
1225 | |
1226 | === modified file 'tests/charmhelpers/contrib/amulet/utils.py' |
1227 | --- tests/charmhelpers/contrib/amulet/utils.py 2015-08-18 21:16:23 +0000 |
1228 | +++ tests/charmhelpers/contrib/amulet/utils.py 2015-09-21 20:38:30 +0000 |
1229 | @@ -19,9 +19,11 @@ |
1230 | import logging |
1231 | import os |
1232 | import re |
1233 | +import socket |
1234 | import subprocess |
1235 | import sys |
1236 | import time |
1237 | +import uuid |
1238 | |
1239 | import amulet |
1240 | import distro_info |
1241 | @@ -114,7 +116,7 @@ |
1242 | # /!\ DEPRECATION WARNING (beisner): |
1243 | # New and existing tests should be rewritten to use |
1244 | # validate_services_by_name() as it is aware of init systems. |
1245 | - self.log.warn('/!\\ DEPRECATION WARNING: use ' |
1246 | + self.log.warn('DEPRECATION WARNING: use ' |
1247 | 'validate_services_by_name instead of validate_services ' |
1248 | 'due to init system differences.') |
1249 | |
1250 | @@ -269,33 +271,52 @@ |
1251 | """Get last modification time of directory.""" |
1252 | return sentry_unit.directory_stat(directory)['mtime'] |
1253 | |
1254 | - def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False): |
1255 | - """Get process' start time. |
1256 | - |
1257 | - Determine start time of the process based on the last modification |
1258 | - time of the /proc/pid directory. If pgrep_full is True, the process |
1259 | - name is matched against the full command line. |
1260 | - """ |
1261 | - if pgrep_full: |
1262 | - cmd = 'pgrep -o -f {}'.format(service) |
1263 | - else: |
1264 | - cmd = 'pgrep -o {}'.format(service) |
1265 | - cmd = cmd + ' | grep -v pgrep || exit 0' |
1266 | - cmd_out = sentry_unit.run(cmd) |
1267 | - self.log.debug('CMDout: ' + str(cmd_out)) |
1268 | - if cmd_out[0]: |
1269 | - self.log.debug('Pid for %s %s' % (service, str(cmd_out[0]))) |
1270 | - proc_dir = '/proc/{}'.format(cmd_out[0].strip()) |
1271 | - return self._get_dir_mtime(sentry_unit, proc_dir) |
1272 | + def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None): |
1273 | + """Get start time of a process based on the last modification time |
1274 | + of the /proc/pid directory. |
1275 | + |
1276 | + :sentry_unit: The sentry unit to check for the service on |
1277 | + :service: service name to look for in process table |
1278 | + :pgrep_full: [Deprecated] Use full command line search mode with pgrep |
1279 | + :returns: epoch time of service process start |
1283 | + """ |
1284 | + if pgrep_full is not None: |
1285 | + # /!\ DEPRECATION WARNING (beisner): |
1286 | + # No longer implemented, as pidof is now used instead of pgrep. |
1287 | + # https://bugs.launchpad.net/charm-helpers/+bug/1474030 |
1288 | + self.log.warn('DEPRECATION WARNING: pgrep_full bool is no ' |
1289 | + 'longer implemented re: lp 1474030.') |
1290 | + |
1291 | + pid_list = self.get_process_id_list(sentry_unit, service) |
1292 | + pid = pid_list[0] |
1293 | + proc_dir = '/proc/{}'.format(pid) |
1294 | + self.log.debug('Pid for {} on {}: {}'.format( |
1295 | + service, sentry_unit.info['unit_name'], pid)) |
1296 | + |
1297 | + return self._get_dir_mtime(sentry_unit, proc_dir) |
1298 | |
1299 | def service_restarted(self, sentry_unit, service, filename, |
1300 | - pgrep_full=False, sleep_time=20): |
1301 | + pgrep_full=None, sleep_time=20): |
1302 | """Check if service was restarted. |
1303 | |
1304 | Compare a service's start time vs a file's last modification time |
1305 | (such as a config file for that service) to determine if the service |
1306 | has been restarted. |
1307 | """ |
1308 | + # /!\ DEPRECATION WARNING (beisner): |
1309 | + # This method is prone to races in that no before-time is known. |
1310 | + # Use validate_service_config_changed instead. |
1311 | + |
1312 | + # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now |
1313 | + # used instead of pgrep. pgrep_full is still passed through to ensure |
1314 | + # deprecation WARNS. lp1474030 |
1315 | + self.log.warn('DEPRECATION WARNING: use ' |
1316 | + 'validate_service_config_changed instead of ' |
1317 | + 'service_restarted due to known races.') |
1318 | + |
1319 | time.sleep(sleep_time) |
1320 | if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >= |
1321 | self._get_file_mtime(sentry_unit, filename)): |
1322 | @@ -304,78 +325,122 @@ |
1323 | return False |
1324 | |
1325 | def service_restarted_since(self, sentry_unit, mtime, service, |
1326 | - pgrep_full=False, sleep_time=20, |
1327 | - retry_count=2): |
1328 | + pgrep_full=None, sleep_time=20, |
1329 | + retry_count=30, retry_sleep_time=10): |
1330 | """Check if service was started after a given time. |
1331 | |
1332 | Args: |
1333 | sentry_unit (sentry): The sentry unit to check for the service on |
1334 | mtime (float): The epoch time to check against |
1335 | service (string): service name to look for in process table |
1336 | - pgrep_full (boolean): Use full command line search mode with pgrep |
1337 | - sleep_time (int): Seconds to sleep before looking for process |
1338 | - retry_count (int): If service is not found, how many times to retry |
1339 | + pgrep_full: [Deprecated] Use full command line search mode with pgrep |
1340 | + sleep_time (int): Initial sleep time (s) before looking for file |
1341 | + retry_sleep_time (int): Time (s) to sleep between retries |
1342 | + retry_count (int): If file is not found, how many times to retry |
1343 | |
1344 | Returns: |
1345 | bool: True if service found and its start time it newer than mtime, |
1346 | False if service is older than mtime or if service was |
1347 | not found. |
1348 | """ |
1349 | - self.log.debug('Checking %s restarted since %s' % (service, mtime)) |
1350 | + # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now |
1351 | + # used instead of pgrep. pgrep_full is still passed through to ensure |
1352 | + # deprecation WARNS. lp1474030 |
1353 | + |
1354 | + unit_name = sentry_unit.info['unit_name'] |
1355 | + self.log.debug('Checking that %s service restarted since %s on ' |
1356 | + '%s' % (service, mtime, unit_name)) |
1357 | time.sleep(sleep_time) |
1358 | - proc_start_time = self._get_proc_start_time(sentry_unit, service, |
1359 | - pgrep_full) |
1360 | - while retry_count > 0 and not proc_start_time: |
1361 | - self.log.debug('No pid file found for service %s, will retry %i ' |
1362 | - 'more times' % (service, retry_count)) |
1363 | - time.sleep(30) |
1364 | - proc_start_time = self._get_proc_start_time(sentry_unit, service, |
1365 | - pgrep_full) |
1366 | - retry_count = retry_count - 1 |
1367 | + proc_start_time = None |
1368 | + tries = 0 |
1369 | + while tries <= retry_count and not proc_start_time: |
1370 | + try: |
1371 | + proc_start_time = self._get_proc_start_time(sentry_unit, |
1372 | + service, |
1373 | + pgrep_full) |
1374 | + self.log.debug('Attempt {} to get {} proc start time on {} ' |
1375 | + 'OK'.format(tries, service, unit_name)) |
1376 | + except IOError as e: |
1377 | + # NOTE(beisner) - race avoidance, proc may not exist yet. |
1378 | + # https://bugs.launchpad.net/charm-helpers/+bug/1474030 |
1379 | + self.log.debug('Attempt {} to get {} proc start time on {} ' |
1380 | + 'failed\n{}'.format(tries, service, |
1381 | + unit_name, e)) |
1382 | + time.sleep(retry_sleep_time) |
1383 | + tries += 1 |
1384 | |
1385 | if not proc_start_time: |
1386 | self.log.warn('No proc start time found, assuming service did ' |
1387 | 'not start') |
1388 | return False |
1389 | if proc_start_time >= mtime: |
1390 | - self.log.debug('proc start time is newer than provided mtime' |
1391 | - '(%s >= %s)' % (proc_start_time, mtime)) |
1392 | + self.log.debug('Proc start time is newer than provided mtime ' |
1393 | + '(%s >= %s) on %s (OK)' % (proc_start_time, |
1394 | + mtime, unit_name)) |
1395 | return True |
1396 | else: |
1397 | - self.log.warn('proc start time (%s) is older than provided mtime ' |
1398 | - '(%s), service did not restart' % (proc_start_time, |
1399 | - mtime)) |
1400 | + self.log.warn('Proc start time (%s) is older than provided mtime ' |
1401 | + '(%s) on %s, service did not ' |
1402 | + 'restart' % (proc_start_time, mtime, unit_name)) |
1403 | return False |
1404 | |
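The retry logic added above follows one pattern throughout: an initial sleep, then a bounded number of attempts with a fixed pause between them, treating `IOError` as "not ready yet". A standalone sketch of that pattern (the name `retry_get` and its callable argument are illustrative, not part of the charm-helpers API):

```python
import time

def retry_get(getter, retry_count=30, retry_sleep_time=10, sleep_time=20):
    """Call getter() until it returns a truthy value or retries run out.

    Mirrors the pattern above: one initial sleep, then up to
    retry_count + 1 attempts spaced retry_sleep_time seconds apart.
    Returns the value from getter(), or None if every attempt failed.
    """
    time.sleep(sleep_time)
    result = None
    tries = 0
    while tries <= retry_count and not result:
        try:
            result = getter()
        except IOError:
            # Race avoidance: the resource may not exist yet.
            time.sleep(retry_sleep_time)
        tries += 1
    return result
```

With the sleeps zeroed out this is easy to exercise against a flaky callable that succeeds on its third attempt.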
1405 | def config_updated_since(self, sentry_unit, filename, mtime, |
1406 | - sleep_time=20): |
1407 | + sleep_time=20, retry_count=30, |
1408 | + retry_sleep_time=10): |
1409 | """Check if file was modified after a given time. |
1410 | |
1411 | Args: |
1412 | sentry_unit (sentry): The sentry unit to check the file mtime on |
1413 | filename (string): The file to check mtime of |
1414 | mtime (float): The epoch time to check against |
1415 | - sleep_time (int): Seconds to sleep before looking for process |
1416 | + sleep_time (int): Initial sleep time (s) before looking for file |
1417 | + retry_sleep_time (int): Time (s) to sleep between retries |
1418 | + retry_count (int): If file is not found, how many times to retry |
1419 | |
1420 | Returns: |
1421 | bool: True if file was modified more recently than mtime, False if |
1422 | - file was modified before mtime, |
1423 | + file was modified before mtime, or if file not found. |
1424 | """ |
1425 | - self.log.debug('Checking %s updated since %s' % (filename, mtime)) |
1426 | + unit_name = sentry_unit.info['unit_name'] |
1427 | + self.log.debug('Checking that %s updated since %s on ' |
1428 | + '%s' % (filename, mtime, unit_name)) |
1429 | time.sleep(sleep_time) |
1430 | - file_mtime = self._get_file_mtime(sentry_unit, filename) |
1431 | + file_mtime = None |
1432 | + tries = 0 |
1433 | + while tries <= retry_count and not file_mtime: |
1434 | + try: |
1435 | + file_mtime = self._get_file_mtime(sentry_unit, filename) |
1436 | + self.log.debug('Attempt {} to get {} file mtime on {} ' |
1437 | + 'OK'.format(tries, filename, unit_name)) |
1438 | + except IOError as e: |
1439 | + # NOTE(beisner) - race avoidance, file may not exist yet. |
1440 | + # https://bugs.launchpad.net/charm-helpers/+bug/1474030 |
1441 | + self.log.debug('Attempt {} to get {} file mtime on {} ' |
1442 | + 'failed\n{}'.format(tries, filename, |
1443 | + unit_name, e)) |
1444 | + time.sleep(retry_sleep_time) |
1445 | + tries += 1 |
1446 | + |
1447 | + if not file_mtime: |
1448 | + self.log.warn('Could not determine file mtime, assuming ' |
1449 | + 'file does not exist') |
1450 | + return False |
1451 | + |
1452 | if file_mtime >= mtime: |
1453 | self.log.debug('File mtime is newer than provided mtime ' |
1454 | - '(%s >= %s)' % (file_mtime, mtime)) |
1455 | + '(%s >= %s) on %s (OK)' % (file_mtime, |
1456 | + mtime, unit_name)) |
1457 | return True |
1458 | else: |
1459 | - self.log.warn('File mtime %s is older than provided mtime %s' |
1460 | - % (file_mtime, mtime)) |
1461 | + self.log.warn('File mtime is older than provided mtime ' |
1462 | + '(%s < %s) on %s' % (file_mtime, |
1463 | + mtime, unit_name)) |
1464 | return False |
1465 | |
1466 | def validate_service_config_changed(self, sentry_unit, mtime, service, |
1467 | - filename, pgrep_full=False, |
1468 | - sleep_time=20, retry_count=2): |
1469 | + filename, pgrep_full=None, |
1470 | + sleep_time=20, retry_count=30, |
1471 | + retry_sleep_time=10): |
1472 | """Check service and file were updated after mtime |
1473 | |
1474 | Args: |
1475 | @@ -383,9 +448,10 @@ |
1476 | mtime (float): The epoch time to check against |
1477 | service (string): service name to look for in process table |
1478 | filename (string): The file to check mtime of |
1479 | - pgrep_full (boolean): Use full command line search mode with pgrep |
1480 | - sleep_time (int): Seconds to sleep before looking for process |
1481 | + pgrep_full: [Deprecated] Use full command line search mode with pgrep |
1482 | + sleep_time (int): Initial sleep in seconds to pass to test helpers |
1483 | retry_count (int): If service is not found, how many times to retry |
1484 | + retry_sleep_time (int): Time in seconds to wait between retries |
1485 | |
1486 | Typical Usage: |
1487 | u = OpenStackAmuletUtils(ERROR) |
1488 | @@ -402,15 +468,27 @@ |
1489 | mtime, False if service is older than mtime or if service was |
1490 | not found or if filename was modified before mtime. |
1491 | """ |
1492 | - self.log.debug('Checking %s restarted since %s' % (service, mtime)) |
1493 | - time.sleep(sleep_time) |
1494 | - service_restart = self.service_restarted_since(sentry_unit, mtime, |
1495 | - service, |
1496 | - pgrep_full=pgrep_full, |
1497 | - sleep_time=0, |
1498 | - retry_count=retry_count) |
1499 | - config_update = self.config_updated_since(sentry_unit, filename, mtime, |
1500 | - sleep_time=0) |
1501 | + |
1502 | + # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now |
1503 | + # used instead of pgrep. pgrep_full is still passed through to ensure |
1504 | + # deprecation WARNS. lp1474030 |
1505 | + |
1506 | + service_restart = self.service_restarted_since( |
1507 | + sentry_unit, mtime, |
1508 | + service, |
1509 | + pgrep_full=pgrep_full, |
1510 | + sleep_time=sleep_time, |
1511 | + retry_count=retry_count, |
1512 | + retry_sleep_time=retry_sleep_time) |
1513 | + |
1514 | + config_update = self.config_updated_since( |
1515 | + sentry_unit, |
1516 | + filename, |
1517 | + mtime, |
1518 | + sleep_time=sleep_time, |
1519 | + retry_count=retry_count, |
1520 | + retry_sleep_time=retry_sleep_time) |
1521 | + |
1522 | return service_restart and config_update |
1523 | |
1524 | def get_sentry_time(self, sentry_unit): |
1525 | @@ -428,7 +506,6 @@ |
1526 | """Return a list of all Ubuntu releases in order of release.""" |
1527 | _d = distro_info.UbuntuDistroInfo() |
1528 | _release_list = _d.all |
1529 | - self.log.debug('Ubuntu release list: {}'.format(_release_list)) |
1530 | return _release_list |
1531 | |
1532 | def file_to_url(self, file_rel_path): |
1533 | @@ -568,6 +645,142 @@ |
1534 | |
1535 | return None |
1536 | |
1537 | + def validate_sectionless_conf(self, file_contents, expected): |
1538 | + """A crude conf parser. Useful to inspect configuration files which |
1539 | + do not have section headers (as configparser would require), |
1540 | + such as openstack-dashboard or rabbitmq confs.""" |
1541 | + for line in file_contents.split('\n'): |
1542 | + if '=' in line: |
1543 | + args = line.split('=') |
1544 | + if len(args) <= 1: |
1545 | + continue |
1546 | + key = args[0].strip() |
1547 | + value = args[1].strip() |
1548 | + if key in expected.keys(): |
1549 | + if expected[key] != value: |
1550 | + msg = ('Config mismatch. Expected, actual: {}, ' |
1551 | + '{}'.format(expected[key], value)) |
1552 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1553 | + |
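The sectionless-conf check above splits each `key = value` line on `=`. A minimal standalone sketch of the same idea (`parse_sectionless_conf` is a hypothetical name, not part of the charm-helpers API), splitting on the first `=` only so values that themselves contain `=` are preserved:

```python
def parse_sectionless_conf(file_contents):
    """Parse 'key = value' lines from a conf file with no section headers.

    Returns a dict of stripped keys to stripped values; lines without
    an '=' (comments, blanks) are ignored.
    """
    conf = {}
    for line in file_contents.split('\n'):
        if '=' in line:
            # maxsplit=1 keeps any '=' inside the value intact
            key, value = line.split('=', 1)
            conf[key.strip()] = value.strip()
    return conf
```

A caller can then compare the resulting dict against an `expected` dict, much as `validate_sectionless_conf` does.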
1554 | + def get_unit_hostnames(self, units): |
1555 | + """Return a dict of juju unit names to hostnames.""" |
1556 | + host_names = {} |
1557 | + for unit in units: |
1558 | + host_names[unit.info['unit_name']] = \ |
1559 | + str(unit.file_contents('/etc/hostname').strip()) |
1560 | + self.log.debug('Unit host names: {}'.format(host_names)) |
1561 | + return host_names |
1562 | + |
1563 | + def run_cmd_unit(self, sentry_unit, cmd): |
1564 | + """Run a command on a unit, return the output and exit code.""" |
1565 | + output, code = sentry_unit.run(cmd) |
1566 | + if code == 0: |
1567 | + self.log.debug('{} `{}` command returned {} ' |
1568 | + '(OK)'.format(sentry_unit.info['unit_name'], |
1569 | + cmd, code)) |
1570 | + else: |
1571 | + msg = ('{} `{}` command returned {} ' |
1572 | + '{}'.format(sentry_unit.info['unit_name'], |
1573 | + cmd, code, output)) |
1574 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1575 | + return str(output), code |
1576 | + |
1577 | + def file_exists_on_unit(self, sentry_unit, file_name): |
1578 | + """Check if a file exists on a unit.""" |
1579 | + try: |
1580 | + sentry_unit.file_stat(file_name) |
1581 | + return True |
1582 | + except IOError: |
1583 | + return False |
1584 | + except Exception as e: |
1585 | + msg = 'Error checking file {}: {}'.format(file_name, e) |
1586 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1587 | + |
1588 | + def file_contents_safe(self, sentry_unit, file_name, |
1589 | + max_wait=60, fatal=False): |
1590 | + """Get file contents from a sentry unit. Wrap amulet file_contents |
1591 | + with retry logic to address races where a file checks as existing, |
1592 | + but no longer exists by the time file_contents is called. |
1593 | + Return None if file not found. Optionally raise if fatal is True.""" |
1594 | + unit_name = sentry_unit.info['unit_name'] |
1595 | + file_contents = False |
1596 | + tries = 0 |
1597 | + while not file_contents and tries < (max_wait / 4): |
1598 | + try: |
1599 | + file_contents = sentry_unit.file_contents(file_name) |
1600 | + except IOError: |
1601 | + self.log.debug('Attempt {} to open file {} from {} ' |
1602 | + 'failed'.format(tries, file_name, |
1603 | + unit_name)) |
1604 | + time.sleep(4) |
1605 | + tries += 1 |
1606 | + |
1607 | + if file_contents: |
1608 | + return file_contents |
1609 | + elif not fatal: |
1610 | + return None |
1611 | + elif fatal: |
1612 | + msg = 'Failed to get file contents from unit.' |
1613 | + amulet.raise_status(amulet.FAIL, msg) |
1614 | + |
1615 | + def port_knock_tcp(self, host="localhost", port=22, timeout=15): |
1616 | + """Open a TCP socket to check for a listening sevice on a host. |
1617 | + |
1618 | + :param host: host name or IP address, default to localhost |
1619 | + :param port: TCP port number, default to 22 |
1620 | + :param timeout: Connect timeout, default to 15 seconds |
1621 | + :returns: True if successful, False if connect failed |
1622 | + """ |
1623 | + |
1624 | + # Resolve host name if possible |
1625 | + try: |
1626 | + connect_host = socket.gethostbyname(host) |
1627 | + host_human = "{} ({})".format(connect_host, host) |
1628 | + except socket.error as e: |
1629 | + self.log.warn('Unable to resolve address: ' |
1630 | + '{} ({}) Trying anyway!'.format(host, e)) |
1631 | + connect_host = host |
1632 | + host_human = connect_host |
1633 | + |
1634 | + # Attempt socket connection |
1635 | + try: |
1636 | + knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) |
1637 | + knock.settimeout(timeout) |
1638 | + knock.connect((connect_host, port)) |
1639 | + knock.close() |
1640 | + self.log.debug('Socket connect OK for host ' |
1641 | + '{} on port {}.'.format(host_human, port)) |
1642 | + return True |
1643 | + except socket.error as e: |
1644 | + self.log.debug('Socket connect FAIL for' |
1645 | + ' {} port {} ({})'.format(host_human, port, e)) |
1646 | + return False |
1647 | + |
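`port_knock_tcp` boils down to a plain TCP connect with a timeout, returning a boolean rather than raising. A self-contained sketch of that core (resolution and logging omitted), testable against a local listener:

```python
import socket

def port_knock_tcp(host='localhost', port=22, timeout=15):
    """Return True if a TCP connect to host:port succeeds, else False."""
    try:
        knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        knock.settimeout(timeout)
        knock.connect((host, port))
        knock.close()
        return True
    except socket.error:
        # Covers refused connections, timeouts, and unreachable hosts
        return False
```

Binding a throwaway listener on an ephemeral port gives a quick positive and negative check without touching real services.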
1648 | + def port_knock_units(self, sentry_units, port=22, |
1649 | + timeout=15, expect_success=True): |
1650 | + """Open a TCP socket to check for a listening sevice on each |
1651 | + listed juju unit. |
1652 | + |
1653 | + :param sentry_units: list of sentry unit pointers |
1654 | + :param port: TCP port number, default to 22 |
1655 | + :param timeout: Connect timeout, default to 15 seconds |
1656 | + :expect_success: True by default, set False to invert logic |
1657 | + :returns: None if successful, Failure message otherwise |
1658 | + """ |
1659 | + for unit in sentry_units: |
1660 | + host = unit.info['public-address'] |
1661 | + connected = self.port_knock_tcp(host, port, timeout) |
1662 | + if not connected and expect_success: |
1663 | + return 'Socket connect failed.' |
1664 | + elif connected and not expect_success: |
1665 | + return 'Socket connected unexpectedly.' |
1666 | + |
1667 | + def get_uuid_epoch_stamp(self): |
1668 | + """Returns a stamp string based on uuid4 and epoch time. Useful in |
1669 | + generating test messages which need to be unique-ish.""" |
1670 | + return '[{}-{}]'.format(uuid.uuid4(), time.time()) |
1671 | + |
1672 | +# amulet juju action helpers: |
1673 | def run_action(self, unit_sentry, action, |
1674 | _check_output=subprocess.check_output): |
1675 | """Run the named action on a given unit sentry. |
1676 | @@ -594,3 +807,12 @@ |
1677 | output = _check_output(command, universal_newlines=True) |
1678 | data = json.loads(output) |
1679 | return data.get(u"status") == "completed" |
1680 | + |
1681 | + def status_get(self, unit): |
1682 | + """Return the current service status of this unit.""" |
1683 | + raw_status, return_code = unit.run( |
1684 | + "status-get --format=json --include-data") |
1685 | + if return_code != 0: |
1686 | + return ("unknown", "") |
1687 | + status = json.loads(raw_status) |
1688 | + return (status["status"], status["message"]) |
1689 | |
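The `status_get` helper added above shells out to `status-get --format=json --include-data` and falls back to `("unknown", "")` on a non-zero exit code. The parsing half can be exercised in isolation; `parse_status` here is a hypothetical extraction of that logic, not a charm-helpers function:

```python
import json

def parse_status(raw_status, return_code):
    """Parse status-get JSON output into a (status, message) tuple.

    Mirrors the helper above: a non-zero exit code yields
    ("unknown", "") rather than raising.
    """
    if return_code != 0:
        return ("unknown", "")
    status = json.loads(raw_status)
    return (status["status"], status["message"])
```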
1690 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py' |
1691 | --- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-08-18 21:16:23 +0000 |
1692 | +++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-21 20:38:30 +0000 |
1693 | @@ -44,20 +44,31 @@ |
1694 | Determine if the local branch being tested is derived from its |
1695 | stable or next (dev) branch, and based on this, use the corresponding |
1696 | stable or next branches for the other_services.""" |
1697 | + |
1698 | + # Charms outside the lp:~openstack-charmers namespace |
1699 | base_charms = ['mysql', 'mongodb', 'nrpe'] |
1700 | |
1701 | + # Force these charms to current series even when using an older series. |
1702 | + # i.e. use trusty/nrpe even when series is precise, as the P charm |
1703 | + # does not possess the necessary external master config and hooks. |
1704 | + force_series_current = ['nrpe'] |
1705 | + |
1706 | if self.series in ['precise', 'trusty']: |
1707 | base_series = self.series |
1708 | else: |
1709 | base_series = self.current_next |
1710 | |
1711 | - if self.stable: |
1712 | - for svc in other_services: |
1713 | + for svc in other_services: |
1714 | + if svc['name'] in force_series_current: |
1715 | + base_series = self.current_next |
1716 | + # If a location has been explicitly set, use it |
1717 | + if svc.get('location'): |
1718 | + continue |
1719 | + if self.stable: |
1720 | temp = 'lp:charms/{}/{}' |
1721 | svc['location'] = temp.format(base_series, |
1722 | svc['name']) |
1723 | - else: |
1724 | - for svc in other_services: |
1725 | + else: |
1726 | if svc['name'] in base_charms: |
1727 | temp = 'lp:charms/{}/{}' |
1728 | svc['location'] = temp.format(base_series, |
1729 | @@ -66,6 +77,7 @@ |
1730 | temp = 'lp:~openstack-charmers/charms/{}/{}/next' |
1731 | svc['location'] = temp.format(self.current_next, |
1732 | svc['name']) |
1733 | + |
1734 | return other_services |
1735 | |
1736 | def _add_services(self, this_service, other_services): |
1737 | @@ -77,21 +89,23 @@ |
1738 | |
1739 | services = other_services |
1740 | services.append(this_service) |
1741 | + |
1742 | + # Charms which should use the source config option |
1743 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', |
1744 | 'ceph-osd', 'ceph-radosgw'] |
1745 | - # Most OpenStack subordinate charms do not expose an origin option |
1746 | - # as that is controlled by the principle. |
1747 | - ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe'] |
1748 | + |
1749 | + # Charms which cannot use openstack-origin, i.e. many subordinates |
1750 | + no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe'] |
1751 | |
1752 | if self.openstack: |
1753 | for svc in services: |
1754 | - if svc['name'] not in use_source + ignore: |
1755 | + if svc['name'] not in use_source + no_origin: |
1756 | config = {'openstack-origin': self.openstack} |
1757 | self.d.configure(svc['name'], config) |
1758 | |
1759 | if self.source: |
1760 | for svc in services: |
1761 | - if svc['name'] in use_source and svc['name'] not in ignore: |
1762 | + if svc['name'] in use_source and svc['name'] not in no_origin: |
1763 | config = {'source': self.source} |
1764 | self.d.configure(svc['name'], config) |
1765 | |
1766 | |
1767 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py' |
1768 | --- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-16 20:18:08 +0000 |
1769 | +++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-21 20:38:30 +0000 |
1770 | @@ -27,6 +27,7 @@ |
1771 | import heatclient.v1.client as heat_client |
1772 | import keystoneclient.v2_0 as keystone_client |
1773 | import novaclient.v1_1.client as nova_client |
1774 | +import pika |
1775 | import swiftclient |
1776 | |
1777 | from charmhelpers.contrib.amulet.utils import ( |
1778 | @@ -602,3 +603,361 @@ |
1779 | self.log.debug('Ceph {} samples (OK): ' |
1780 | '{}'.format(sample_type, samples)) |
1781 | return None |
1782 | + |
1783 | +# rabbitmq/amqp specific helpers: |
1784 | + def add_rmq_test_user(self, sentry_units, |
1785 | + username="testuser1", password="changeme"): |
1786 | + """Add a test user via the first rmq juju unit, check connection as |
1787 | + the new user against all sentry units. |
1788 | + |
1789 | + :param sentry_units: list of sentry unit pointers |
1790 | + :param username: amqp user name, default to testuser1 |
1791 | + :param password: amqp user password |
1792 | + :returns: None if successful. Raise on error. |
1793 | + """ |
1794 | + self.log.debug('Adding rmq user ({})...'.format(username)) |
1795 | + |
1796 | + # Check that user does not already exist |
1797 | + cmd_user_list = 'rabbitmqctl list_users' |
1798 | + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list) |
1799 | + if username in output: |
1800 | + self.log.warning('User ({}) already exists, returning ' |
1801 | + 'gracefully.'.format(username)) |
1802 | + return |
1803 | + |
1804 | + perms = '".*" ".*" ".*"' |
1805 | + cmds = ['rabbitmqctl add_user {} {}'.format(username, password), |
1806 | + 'rabbitmqctl set_permissions {} {}'.format(username, perms)] |
1807 | + |
1808 | + # Add user via first unit |
1809 | + for cmd in cmds: |
1810 | + output, _ = self.run_cmd_unit(sentry_units[0], cmd) |
1811 | + |
1812 | + # Check connection against the other sentry_units |
1813 | + self.log.debug('Checking user connect against units...') |
1814 | + for sentry_unit in sentry_units: |
1815 | + connection = self.connect_amqp_by_unit(sentry_unit, ssl=False, |
1816 | + username=username, |
1817 | + password=password) |
1818 | + connection.close() |
1819 | + |
1820 | + def delete_rmq_test_user(self, sentry_units, username="testuser1"): |
1821 | + """Delete a rabbitmq user via the first rmq juju unit. |
1822 | + |
1823 | + :param sentry_units: list of sentry unit pointers |
1824 | + :param username: amqp user name, default to testuser1 |
1825 | + :param password: amqp user password |
1826 | + :returns: None if successful or no such user. |
1827 | + """ |
1828 | + self.log.debug('Deleting rmq user ({})...'.format(username)) |
1829 | + |
1830 | + # Check that the user exists |
1831 | + cmd_user_list = 'rabbitmqctl list_users' |
1832 | + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list) |
1833 | + |
1834 | + if username not in output: |
1835 | + self.log.warning('User ({}) does not exist, returning ' |
1836 | + 'gracefully.'.format(username)) |
1837 | + return |
1838 | + |
1839 | + # Delete the user |
1840 | + cmd_user_del = 'rabbitmqctl delete_user {}'.format(username) |
1841 | + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del) |
1842 | + |
1843 | + def get_rmq_cluster_status(self, sentry_unit): |
1844 | + """Execute rabbitmq cluster status command on a unit and return |
1845 | + the full output. |
1846 | + |
1847 | + :param unit: sentry unit |
1848 | + :returns: String containing console output of cluster status command |
1849 | + """ |
1850 | + cmd = 'rabbitmqctl cluster_status' |
1851 | + output, _ = self.run_cmd_unit(sentry_unit, cmd) |
1852 | + self.log.debug('{} cluster_status:\n{}'.format( |
1853 | + sentry_unit.info['unit_name'], output)) |
1854 | + return str(output) |
1855 | + |
1856 | + def get_rmq_cluster_running_nodes(self, sentry_unit): |
1857 | + """Parse rabbitmqctl cluster_status output string, return list of |
1858 | + running rabbitmq cluster nodes. |
1859 | + |
1860 | + :param unit: sentry unit |
1861 | + :returns: List containing node names of running nodes |
1862 | + """ |
1863 | + # NOTE(beisner): rabbitmqctl cluster_status output is not |
1864 | + # json-parsable, do string chop foo, then json.loads that. |
1865 | + str_stat = self.get_rmq_cluster_status(sentry_unit) |
1866 | + if 'running_nodes' in str_stat: |
1867 | + pos_start = str_stat.find("{running_nodes,") + 15 |
1868 | + pos_end = str_stat.find("]},", pos_start) + 1 |
1869 | + str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"') |
1870 | + run_nodes = json.loads(str_run_nodes) |
1871 | + return run_nodes |
1872 | + else: |
1873 | + return [] |
1874 | + |
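Because `rabbitmqctl cluster_status` emits Erlang terms rather than JSON, the helper above cuts the `{running_nodes,[...]}` slice out of the raw output and swaps its single quotes for double quotes so `json.loads` can read the node list. A standalone sketch of that parsing step (the sample output below is abbreviated and illustrative):

```python
import json

def parse_running_nodes(cluster_status):
    """Extract running node names from rabbitmqctl cluster_status output."""
    if 'running_nodes' not in cluster_status:
        return []
    # Chop out the bracketed list following "{running_nodes," ...
    pos_start = cluster_status.find('{running_nodes,') + len('{running_nodes,')
    pos_end = cluster_status.find(']},', pos_start) + 1
    # ... then quote it into valid JSON and parse.
    str_run_nodes = cluster_status[pos_start:pos_end].replace("'", '"')
    return json.loads(str_run_nodes)
```

This kind of string chopping is fragile by nature; it works here because the `running_nodes` tuple always precedes a `]},` terminator in the status output.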
1875 | + def validate_rmq_cluster_running_nodes(self, sentry_units): |
1876 | + """Check that all rmq unit hostnames are represented in the |
1877 | + cluster_status output of all units. |
1878 | + |
1879 | + :param host_names: dict of juju unit names to host names |
1880 | + :param units: list of sentry unit pointers (all rmq units) |
1881 | + :returns: None if successful, otherwise return error message |
1882 | + """ |
1883 | + host_names = self.get_unit_hostnames(sentry_units) |
1884 | + errors = [] |
1885 | + |
1886 | + # Query every unit for cluster_status running nodes |
1887 | + for query_unit in sentry_units: |
1888 | + query_unit_name = query_unit.info['unit_name'] |
1889 | + running_nodes = self.get_rmq_cluster_running_nodes(query_unit) |
1890 | + |
1891 | + # Confirm that every unit is represented in the queried unit's |
1892 | + # cluster_status running nodes output. |
1893 | + for validate_unit in sentry_units: |
1894 | + val_host_name = host_names[validate_unit.info['unit_name']] |
1895 | + val_node_name = 'rabbit@{}'.format(val_host_name) |
1896 | + |
1897 | + if val_node_name not in running_nodes: |
1898 | + errors.append('Cluster member check failed on {}: {} not ' |
1899 | + 'in {}\n'.format(query_unit_name, |
1900 | + val_node_name, |
1901 | + running_nodes)) |
1902 | + if errors: |
1903 | + return ''.join(errors) |
1904 | + |
1905 | + def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None): |
1906 | + """Check a single juju rmq unit for ssl and port in the config file.""" |
1907 | + host = sentry_unit.info['public-address'] |
1908 | + unit_name = sentry_unit.info['unit_name'] |
1909 | + |
1910 | + conf_file = '/etc/rabbitmq/rabbitmq.config' |
1911 | + conf_contents = str(self.file_contents_safe(sentry_unit, |
1912 | + conf_file, max_wait=16)) |
1913 | + # Checks |
1914 | + conf_ssl = 'ssl' in conf_contents |
1915 | + conf_port = str(port) in conf_contents |
1916 | + |
1917 | + # Port explicitly checked in config |
1918 | + if port and conf_port and conf_ssl: |
1919 | + self.log.debug('SSL is enabled @{}:{} ' |
1920 | + '({})'.format(host, port, unit_name)) |
1921 | + return True |
1922 | + elif port and not conf_port and conf_ssl: |
1923 | + self.log.debug('SSL is enabled @{} but not on port {} ' |
1924 | + '({})'.format(host, port, unit_name)) |
1925 | + return False |
1926 | + # Port not checked (useful when checking that ssl is disabled) |
1927 | + elif not port and conf_ssl: |
1928 | + self.log.debug('SSL is enabled @{}:{} ' |
1929 | + '({})'.format(host, port, unit_name)) |
1930 | + return True |
1931 | + elif not port and not conf_ssl: |
1932 | + self.log.debug('SSL not enabled @{}:{} ' |
1933 | + '({})'.format(host, port, unit_name)) |
1934 | + return False |
1935 | + else: |
1936 | + msg = ('Unknown condition when checking SSL status @{}:{} ' |
1937 | + '({})'.format(host, port, unit_name)) |
1938 | + amulet.raise_status(amulet.FAIL, msg) |
1939 | + |
1940 | + def validate_rmq_ssl_enabled_units(self, sentry_units, port=None): |
1941 | + """Check that ssl is enabled on rmq juju sentry units. |
1942 | + |
1943 | + :param sentry_units: list of all rmq sentry units |
1944 | + :param port: optional ssl port override to validate |
1945 | + :returns: None if successful, otherwise return error message |
1946 | + """ |
1947 | + for sentry_unit in sentry_units: |
1948 | + if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port): |
1949 | + return ('Unexpected condition: ssl is disabled on unit ' |
1950 | + '({})'.format(sentry_unit.info['unit_name'])) |
1951 | + return None |
1952 | + |
1953 | + def validate_rmq_ssl_disabled_units(self, sentry_units): |
1954 | + """Check that ssl is enabled on listed rmq juju sentry units. |
1955 | + |
1956 | + :param sentry_units: list of all rmq sentry units |
1957 | + :returns: None if successful, otherwise return error message |
1958 | + """ |
1959 | + for sentry_unit in sentry_units: |
1960 | + if self.rmq_ssl_is_enabled_on_unit(sentry_unit): |
1961 | + return ('Unexpected condition: ssl is enabled on unit ' |
1962 | + '({})'.format(sentry_unit.info['unit_name'])) |
1963 | + return None |
1964 | + |
1965 | + def configure_rmq_ssl_on(self, sentry_units, deployment, |
1966 | + port=None, max_wait=60): |
1967 | + """Turn ssl charm config option on, with optional non-default |
1968 | + ssl port specification. Confirm that it is enabled on every |
1969 | + unit. |
1970 | + |
1971 | + :param sentry_units: list of sentry units |
1972 | + :param deployment: amulet deployment object pointer |
1973 | + :param port: amqp port, use defaults if None |
1974 | + :param max_wait: maximum time to wait in seconds to confirm |
1975 | + :returns: None if successful. Raise on error. |
1976 | + """ |
1977 | + self.log.debug('Setting ssl charm config option: on') |
1978 | + |
1979 | + # Enable RMQ SSL |
1980 | + config = {'ssl': 'on'} |
1981 | + if port: |
1982 | + config['ssl_port'] = port |
1983 | + |
1984 | + deployment.configure('rabbitmq-server', config) |
1985 | + |
1986 | + # Confirm |
1987 | + tries = 0 |
1988 | + ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port) |
1989 | + while ret and tries < (max_wait / 4): |
1990 | + time.sleep(4) |
1991 | + self.log.debug('Attempt {}: {}'.format(tries, ret)) |
1992 | + ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port) |
1993 | + tries += 1 |
1994 | + |
1995 | + if ret: |
1996 | + amulet.raise_status(amulet.FAIL, ret) |
1997 | + |
1998 | + def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60): |
1999 | + """Turn ssl charm config option off, confirm that it is disabled |
2000 | + on every unit. |
2001 | + |
2002 | + :param sentry_units: list of sentry units |
2003 | + :param deployment: amulet deployment object pointer |
2004 | + :param max_wait: maximum time to wait in seconds to confirm |
2005 | + :returns: None if successful. Raise on error. |
2006 | + """ |
2007 | + self.log.debug('Setting ssl charm config option: off') |
2008 | + |
2009 | + # Disable RMQ SSL |
2010 | + config = {'ssl': 'off'} |
2011 | + deployment.configure('rabbitmq-server', config) |
2012 | + |
2013 | + # Confirm |
2014 | + tries = 0 |
2015 | + ret = self.validate_rmq_ssl_disabled_units(sentry_units) |
2016 | + while ret and tries < (max_wait / 4): |
2017 | + time.sleep(4) |
2018 | + self.log.debug('Attempt {}: {}'.format(tries, ret)) |
2019 | + ret = self.validate_rmq_ssl_disabled_units(sentry_units) |
2020 | + tries += 1 |
2021 | + |
2022 | + if ret: |
2023 | + amulet.raise_status(amulet.FAIL, ret) |
2024 | + |
2025 | + def connect_amqp_by_unit(self, sentry_unit, ssl=False, |
2026 | + port=None, fatal=True, |
2027 | + username="testuser1", password="changeme"): |
2028 | + """Establish and return a pika amqp connection to the rabbitmq service |
2029 | + running on a rmq juju unit. |
2030 | + |
2031 | + :param sentry_unit: sentry unit pointer |
2032 | + :param ssl: boolean, default to False |
2033 | + :param port: amqp port, use defaults if None |
2034 | + :param fatal: boolean, default to True (raises on connect error) |
2035 | + :param username: amqp user name, default to testuser1 |
2036 | + :param password: amqp user password |
2037 | + :returns: pika amqp connection pointer or None if failed and non-fatal |
2038 | + """ |
2039 | + host = sentry_unit.info['public-address'] |
2040 | + unit_name = sentry_unit.info['unit_name'] |
2041 | + |
2042 | + # Default port logic if port is not specified |
2043 | + if ssl and not port: |
2044 | + port = 5671 |
2045 | + elif not ssl and not port: |
2046 | + port = 5672 |
2047 | + |
2048 | + self.log.debug('Connecting to amqp on {}:{} ({}) as ' |
2049 | + '{}...'.format(host, port, unit_name, username)) |
2050 | + |
2051 | + try: |
2052 | + credentials = pika.PlainCredentials(username, password) |
2053 | + parameters = pika.ConnectionParameters(host=host, port=port, |
2054 | + credentials=credentials, |
2055 | + ssl=ssl, |
2056 | + connection_attempts=3, |
2057 | + retry_delay=5, |
2058 | + socket_timeout=1) |
2059 | + connection = pika.BlockingConnection(parameters) |
2060 | + assert connection.server_properties['product'] == 'RabbitMQ' |
2061 | + self.log.debug('Connect OK') |
2062 | + return connection |
2063 | + except Exception as e: |
2064 | + msg = ('amqp connection failed to {}:{} as ' |
2065 | + '{} ({})'.format(host, port, username, str(e))) |
2066 | + if fatal: |
2067 | + amulet.raise_status(amulet.FAIL, msg) |
2068 | + else: |
2069 | + self.log.warn(msg) |
2070 | + return None |
2071 | + |
2072 | + def publish_amqp_message_by_unit(self, sentry_unit, message, |
2073 | + queue="test", ssl=False, |
2074 | + username="testuser1", |
2075 | + password="changeme", |
2076 | + port=None): |
2077 | + """Publish an amqp message to a rmq juju unit. |
2078 | + |
2079 | + :param sentry_unit: sentry unit pointer |
2080 | + :param message: amqp message string |
2081 | + :param queue: message queue, default to test |
2082 | + :param username: amqp user name, default to testuser1 |
2083 | + :param password: amqp user password |
2084 | + :param ssl: boolean, default to False |
2085 | + :param port: amqp port, use defaults if None |
2086 | + :returns: None. Raises exception if publish failed. |
2087 | + """ |
2088 | + self.log.debug('Publishing message to {} queue:\n{}'.format(queue, |
2089 | + message)) |
2090 | + connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl, |
2091 | + port=port, |
2092 | + username=username, |
2093 | + password=password) |
2094 | + |
2095 | + # NOTE(beisner): extra debug here re: pika hang potential: |
2096 | + # https://github.com/pika/pika/issues/297 |
2097 | + # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw |
2098 | + self.log.debug('Defining channel...') |
2099 | + channel = connection.channel() |
2100 | + self.log.debug('Declaring queue...') |
2101 | + channel.queue_declare(queue=queue, auto_delete=False, durable=True) |
2102 | + self.log.debug('Publishing message...') |
2103 | + channel.basic_publish(exchange='', routing_key=queue, body=message) |
2104 | + self.log.debug('Closing channel...') |
2105 | + channel.close() |
2106 | + self.log.debug('Closing connection...') |
2107 | + connection.close() |
2108 | + |
2109 | + def get_amqp_message_by_unit(self, sentry_unit, queue="test", |
2110 | + username="testuser1", |
2111 | + password="changeme", |
2112 | + ssl=False, port=None): |
2113 | + """Get an amqp message from a rmq juju unit. |
2114 | + |
2115 | + :param sentry_unit: sentry unit pointer |
2116 | + :param queue: message queue, default to test |
2117 | + :param username: amqp user name, default to testuser1 |
2118 | + :param password: amqp user password |
2119 | + :param ssl: boolean, default to False |
2120 | + :param port: amqp port, use defaults if None |
2121 | + :returns: amqp message body as string. Raise if get fails. |
2122 | + """ |
2123 | + connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl, |
2124 | + port=port, |
2125 | + username=username, |
2126 | + password=password) |
2127 | + channel = connection.channel() |
2128 | + method_frame, _, body = channel.basic_get(queue) |
2129 | + |
2130 | + if method_frame: |
2131 | + self.log.debug('Retrieved message from {} queue:\n{}'.format(queue, |
2132 | + body)) |
2133 | + channel.basic_ack(method_frame.delivery_tag) |
2134 | + channel.close() |
2135 | + connection.close() |
2136 | + return body |
2137 | + else: |
2138 | + msg = 'No message retrieved.' |
2139 | + amulet.raise_status(amulet.FAIL, msg) |
2140 | |
2141 | === added file 'tests/tests.yaml' |
2142 | --- tests/tests.yaml 1970-01-01 00:00:00 +0000 |
2143 | +++ tests/tests.yaml 2015-09-21 20:38:30 +0000 |
2144 | @@ -0,0 +1,20 @@ |
2145 | +bootstrap: true |
2146 | +reset: true |
2147 | +virtualenv: true |
2148 | +makefile: |
2149 | + - lint |
2150 | + - test |
2151 | +sources: |
2152 | + - ppa:juju/stable |
2153 | +packages: |
2154 | + - amulet |
2155 | + - python-amulet |
2156 | + - python-cinderclient |
2157 | + - python-distro-info |
2158 | + - python-glanceclient |
2159 | + - python-heatclient |
2160 | + - python-keystoneclient |
2161 | + - python-neutronclient |
2162 | + - python-novaclient |
2163 | + - python-pika |
2164 | + - python-swiftclient |
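The `publish_amqp_message_by_unit` / `get_amqp_message_by_unit` pair added above performs a standard pika round trip against a deployed RabbitMQ unit: declare the queue, publish, then get and ack. As a minimal, broker-free sketch of that same contract (the `FakeChannel` class and `round_trip` function are hypothetical illustrations, not part of the charm or of pika):

```python
from collections import deque


class FakeChannel:
    """In-memory stand-in for a pika channel; illustration only."""

    def __init__(self):
        self._queues = {}

    def queue_declare(self, queue, auto_delete=False, durable=True):
        # Idempotently create the named queue, as the helper does.
        self._queues.setdefault(queue, deque())

    def basic_publish(self, exchange, routing_key, body):
        # With the default ('') exchange, routing_key names the queue.
        self._queues[routing_key].append(body)

    def basic_get(self, queue):
        # Mimic pika's (method_frame, header_frame, body) 3-tuple,
        # with method_frame None when the queue is empty.
        if self._queues.get(queue):
            return ('method-frame', 'header-frame',
                    self._queues[queue].popleft())
        return (None, None, None)

    def basic_ack(self, delivery_tag=None):
        pass  # nothing to acknowledge in the in-memory stand-in


def round_trip(channel, message, queue='test'):
    """Mirror the helpers' logic: declare, publish, get, ack."""
    channel.queue_declare(queue=queue, auto_delete=False, durable=True)
    channel.basic_publish(exchange='', routing_key=queue, body=message)
    method_frame, _, body = channel.basic_get(queue)
    if method_frame:
        channel.basic_ack(method_frame)
        return body
    raise RuntimeError('No message retrieved.')
```

Against a real unit the helpers additionally open and close the connection per call and raise an amulet failure (rather than an exception) when no message comes back, but the queue semantics are the same.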
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10116 neutron-gateway-next for 1chb1n mp271198
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.
Full lint test output: http://paste.ubuntu.com/12423661/
Build: http://10.245.162.77:8080/job/charm_lint_check/10116/