Merge lp:~1chb1n/charms/trusty/neutron-gateway/amulet-update-1508 into lp:~openstack-charmers-archive/charms/trusty/neutron-gateway/next
Status: Superseded
Proposed branch: lp:~1chb1n/charms/trusty/neutron-gateway/amulet-update-1508
Merge into: lp:~openstack-charmers-archive/charms/trusty/neutron-gateway/next
Diff against target: 2164 lines (+1242/-341), 11 files modified:
  Makefile (+1/-1)
  tests/00-setup (+7/-3)
  tests/020-basic-trusty-liberty (+11/-0)
  tests/021-basic-wily-liberty (+9/-0)
  tests/README (+10/-0)
  tests/basic_deployment.py (+514/-264)
  tests/charmhelpers/contrib/amulet/deployment.py (+4/-2)
  tests/charmhelpers/contrib/amulet/utils.py (+284/-62)
  tests/charmhelpers/contrib/openstack/amulet/deployment.py (+23/-9)
  tests/charmhelpers/contrib/openstack/amulet/utils.py (+359/-0)
  tests/tests.yaml (+20/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/neutron-gateway/amulet-update-1508
Related bugs:
Reviewer: OpenStack Charmers (review requested, pending)
Review via email:
This proposal has been superseded by a proposal from 2015-09-22.
Commit message
Description of the change
Update tests for Vivid-Kilo + Trusty-Liberty enablement; sync tests/charmhelpers.
This proposal depends on charm-helpers landing code from:
https:/
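The tests/charmhelpers sync mentioned above is driven by a YAML config consumed by `charm_helpers_sync.py` (invoked from the Makefile, per the diff below). As a rough sketch of the shape such a config takes; the branch and include list here are illustrative assumptions matching the `tests/charmhelpers` paths touched in this diff, not copied from this branch:

```yaml
# Hypothetical charm-helpers-tests.yaml (illustrative only).
branch: lp:charm-helpers
destination: tests/charmhelpers
include:
  - contrib.amulet
  - contrib.openstack.amulet
```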
uosci-testing-bot (uosci-testing-bot) wrote:
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9280 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10119 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9282 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6455 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6457 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10120 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9283 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6458 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10184 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9342 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6472 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10187 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9345 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10189 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9347 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6475 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6477 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10193 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9351 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6482 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10253 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9408 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6494 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote:
FYI - ignore previous amulet test results here.
Test-writing was compounded by amulet bug @ https:/
Will resubmit after confirming that is no longer an issue.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #10433 neutron-
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #9623 neutron-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6576 neutron-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Unmerged revisions

143. By Ryan Beisner: rebase with next
144. By Ryan Beisner: disable wily test target, to be re-enabled separately after validation
145. By Ryan Beisner: rebase with next
146. By Ryan Beisner: rebase from next (re: amulet git test fails)
147. By Ryan Beisner: rebase from next (re: unit test fails re: git fix)
Preview Diff
=== modified file 'Makefile'
--- Makefile	2015-06-25 17:58:14 +0000
+++ Makefile	2015-09-21 20:38:30 +0000
@@ -24,6 +24,6 @@
 	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
 	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
 
-publish: lint unit_test
+publish: lint test
 	bzr push lp:charms/neutron-gateway
 	bzr push lp:charms/trusty/neutron-gateway
 
=== modified file 'tests/00-setup'
--- tests/00-setup	2015-07-10 13:34:03 +0000
+++ tests/00-setup	2015-09-21 20:38:30 +0000
@@ -4,9 +4,13 @@
 
 sudo add-apt-repository --yes ppa:juju/stable
 sudo apt-get update --yes
-sudo apt-get install --yes python-amulet \
+sudo apt-get install --yes amulet \
+    python-cinderclient \
     python-distro-info \
+    python-glanceclient \
+    python-heatclient \
+    python-keystoneclient \
     python-neutronclient \
-    python-keystoneclient \
     python-novaclient \
-    python-glanceclient
+    python-pika \
+    python-swiftclient
 
=== modified file 'tests/019-basic-vivid-kilo' (properties changed: -x to +x)
=== added file 'tests/020-basic-trusty-liberty'
--- tests/020-basic-trusty-liberty	1970-01-01 00:00:00 +0000
+++ tests/020-basic-trusty-liberty	2015-09-21 20:38:30 +0000
@@ -0,0 +1,11 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic quantum-gateway deployment on trusty-liberty."""
+
+from basic_deployment import NeutronGatewayBasicDeployment
+
+if __name__ == '__main__':
+    deployment = NeutronGatewayBasicDeployment(series='trusty',
+                                               openstack='cloud:trusty-liberty',
+                                               source='cloud:trusty-updates/liberty')
+    deployment.run_tests()
=== added file 'tests/021-basic-wily-liberty'
--- tests/021-basic-wily-liberty	1970-01-01 00:00:00 +0000
+++ tests/021-basic-wily-liberty	2015-09-21 20:38:30 +0000
@@ -0,0 +1,9 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic quantum-gateway deployment on wily-liberty."""
+
+from basic_deployment import NeutronGatewayBasicDeployment
+
+if __name__ == '__main__':
+    deployment = NeutronGatewayBasicDeployment(series='wily')
+    deployment.run_tests()
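The two new entry points above follow one pattern: a series, plus an optional `openstack`/`source` pair when the release comes from the Ubuntu Cloud Archive rather than the distro. A small sketch of that mapping; the helper name `uca_target` is hypothetical, but the string formats follow the `cloud:trusty-liberty` / `cloud:trusty-updates/liberty` values used in the added test file:

```python
def uca_target(series, release=None):
    """Return origin kwargs for a NeutronGatewayBasicDeployment-style test.

    With no release, the series' native OpenStack is used (e.g. wily ships
    liberty natively, so tests/021 passes only series='wily'). Otherwise
    the openstack origin is 'cloud:<series>-<release>' and the matching
    source pocket is 'cloud:<series>-updates/<release>'.
    """
    if release is None:
        return {}
    return {
        'openstack': 'cloud:{}-{}'.format(series, release),
        'source': 'cloud:{}-updates/{}'.format(series, release),
    }

# Mirrors tests/020-basic-trusty-liberty:
kwargs = uca_target('trusty', 'liberty')
```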
=== modified file 'tests/README'
--- tests/README	2014-10-07 21:19:29 +0000
+++ tests/README	2015-09-21 20:38:30 +0000
@@ -1,6 +1,16 @@
 This directory provides Amulet tests that focus on verification of
 quantum-gateway deployments.
 
+test_* methods are called in lexical sort order, although each individual test
+should be idempotent, and expected to pass regardless of run order.
+
+Test name convention to ensure desired test order:
+  1xx service and endpoint checks
+  2xx relation checks
+  3xx config checks
+  4xx functional checks
+  9xx restarts and other final checks
+
 In order to run tests, you'll need charm-tools installed (in addition to
 juju, of course):
     sudo add-apt-repository ppa:juju/stable
 
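The naming convention added to the README works because `test_*` methods are invoked in lexical sort order; a minimal illustration (the collection logic here is a simplification of what amulet/unittest-style runners do, not the charm's actual runner):

```python
class FakeDeployment(object):
    """Stub with test methods named per the 1xx/2xx/9xx convention."""
    def test_100_services(self):
        return 'services'

    def test_200_relations(self):
        return 'relations'

    def test_900_restart(self):
        return 'restart'

def run_order(obj):
    # Lexical sort of attribute names puts 1xx before 2xx before 9xx,
    # which is why numeric prefixes enforce the desired run order.
    return [name for name in sorted(dir(obj)) if name.startswith('test_')]

order = run_order(FakeDeployment())
```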
=== modified file 'tests/basic_deployment.py'
--- tests/basic_deployment.py	2015-09-14 14:30:47 +0000
+++ tests/basic_deployment.py	2015-09-21 20:38:30 +0000
@@ -13,8 +13,8 @@
 
 from charmhelpers.contrib.openstack.amulet.utils import (
     OpenStackAmuletUtils,
-    DEBUG, # flake8: noqa
-    ERROR
+    DEBUG,
+    #ERROR
 )
 
 # Use DEBUG to turn on debug logging
@@ -45,30 +45,31 @@
         """
         this_service = {'name': 'neutron-gateway'}
         other_services = [{'name': 'mysql'},
-                          {'name': 'rabbitmq-server'}, {'name': 'keystone'},
-                          {'name': 'nova-cloud-controller'}]
-        if self._get_openstack_release() >= self.trusty_kilo:
-            other_services.append({'name': 'neutron-api'})
-        super(NeutronGatewayBasicDeployment, self)._add_services(this_service,
-                                                                 other_services)
+                          {'name': 'rabbitmq-server'},
+                          {'name': 'keystone'},
+                          {'name': 'nova-cloud-controller'},
+                          {'name': 'neutron-api'}]
+
+        super(NeutronGatewayBasicDeployment, self)._add_services(
+            this_service, other_services)
 
     def _add_relations(self):
         """Add all of the relations for the services."""
         relations = {
             'keystone:shared-db': 'mysql:shared-db',
             'neutron-gateway:shared-db': 'mysql:shared-db',
             'neutron-gateway:amqp': 'rabbitmq-server:amqp',
-            'nova-cloud-controller:quantum-network-service': \
+            'nova-cloud-controller:quantum-network-service':
                 'neutron-gateway:quantum-network-service',
             'nova-cloud-controller:shared-db': 'mysql:shared-db',
-            'nova-cloud-controller:identity-service': 'keystone:identity-service',
-            'nova-cloud-controller:amqp': 'rabbitmq-server:amqp'
+            'nova-cloud-controller:identity-service': 'keystone:'
+                                                      'identity-service',
+            'nova-cloud-controller:amqp': 'rabbitmq-server:amqp',
+            'neutron-api:shared-db': 'mysql:shared-db',
+            'neutron-api:amqp': 'rabbitmq-server:amqp',
+            'neutron-api:neutron-api': 'nova-cloud-controller:neutron-api',
+            'neutron-api:identity-service': 'keystone:identity-service'
         }
-        if self._get_openstack_release() >= self.trusty_kilo:
-            relations['neutron-api:shared-db'] = 'mysql:shared-db'
-            relations['neutron-api:amqp'] = 'rabbitmq-server:amqp'
-            relations['neutron-api:neutron-api'] = 'nova-cloud-controller:neutron-api'
-            relations['neutron-api:identity-service'] = 'keystone:identity-service'
         super(NeutronGatewayBasicDeployment, self)._add_relations(relations)
 
     def _configure_services(self):
@@ -83,16 +84,16 @@
         openstack_origin_git = {
             'repositories': [
                 {'name': 'requirements',
-                 'repository': 'git://github.com/openstack/requirements',
+                 'repository': 'git://github.com/openstack/requirements',  # noqa
                  'branch': branch},
                 {'name': 'neutron-fwaas',
-                 'repository': 'git://github.com/openstack/neutron-fwaas',
+                 'repository': 'git://github.com/openstack/neutron-fwaas',  # noqa
                  'branch': branch},
                 {'name': 'neutron-lbaas',
-                 'repository': 'git://github.com/openstack/neutron-lbaas',
+                 'repository': 'git://github.com/openstack/neutron-lbaas',  # noqa
                  'branch': branch},
                 {'name': 'neutron-vpnaas',
-                 'repository': 'git://github.com/openstack/neutron-vpnaas',
+                 'repository': 'git://github.com/openstack/neutron-vpnaas',  # noqa
                  'branch': branch},
                 {'name': 'neutron',
                  'repository': 'git://github.com/openstack/neutron',
@@ -122,7 +123,10 @@
             'http_proxy': amulet_http_proxy,
             'https_proxy': amulet_http_proxy,
         }
-        neutron_gateway_config['openstack-origin-git'] = yaml.dump(openstack_origin_git)
+
+        neutron_gateway_config['openstack-origin-git'] = \
+            yaml.dump(openstack_origin_git)
+
         keystone_config = {'admin-password': 'openstack',
                            'admin-token': 'ubuntutesting'}
         nova_cc_config = {'network-manager': 'Quantum',
@@ -137,9 +141,10 @@
         # Access the sentries for inspecting service units
         self.mysql_sentry = self.d.sentry.unit['mysql/0']
         self.keystone_sentry = self.d.sentry.unit['keystone/0']
-        self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
+        self.rmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
         self.nova_cc_sentry = self.d.sentry.unit['nova-cloud-controller/0']
         self.neutron_gateway_sentry = self.d.sentry.unit['neutron-gateway/0']
+        self.neutron_api_sentry = self.d.sentry.unit['neutron-api/0']
 
         # Let things settle a bit before moving forward
         time.sleep(30)
@@ -150,7 +155,6 @@
                                                       password='openstack',
                                                       tenant='admin')
 
-
         # Authenticate admin with neutron
         ep = self.keystone.service_catalog.url_for(service_type='identity',
                                                    endpoint_type='publicURL')
@@ -160,40 +164,121 @@
                                                tenant_name='admin',
                                                region_name='RegionOne')
 
-    def test_services(self):
+    def test_100_services(self):
         """Verify the expected services are running on the corresponding
         service units."""
-        neutron_services = ['status neutron-dhcp-agent',
-                            'status neutron-lbaas-agent',
-                            'status neutron-metadata-agent',
-                            'status neutron-metering-agent',
-                            'status neutron-ovs-cleanup',
-                            'status neutron-plugin-openvswitch-agent']
+        neutron_services = ['neutron-dhcp-agent',
+                            'neutron-lbaas-agent',
+                            'neutron-metadata-agent',
+                            'neutron-metering-agent',
+                            'neutron-ovs-cleanup',
+                            'neutron-plugin-openvswitch-agent']
 
         if self._get_openstack_release() <= self.trusty_juno:
-            neutron_services.append('status neutron-vpn-agent')
+            neutron_services.append('neutron-vpn-agent')
 
-        nova_cc_services = ['status nova-api-ec2',
-                            'status nova-api-os-compute',
-                            'status nova-objectstore',
-                            'status nova-cert',
-                            'status nova-scheduler']
-        if self._get_openstack_release() >= self.precise_grizzly:
-            nova_cc_services.append('status nova-conductor')
+        nova_cc_services = ['nova-api-ec2',
+                            'nova-api-os-compute',
+                            'nova-objectstore',
+                            'nova-cert',
+                            'nova-scheduler',
+                            'nova-conductor']
 
         commands = {
-            self.mysql_sentry: ['status mysql'],
-            self.keystone_sentry: ['status keystone'],
+            self.mysql_sentry: ['mysql'],
+            self.keystone_sentry: ['keystone'],
             self.nova_cc_sentry: nova_cc_services,
             self.neutron_gateway_sentry: neutron_services
         }
 
-        ret = u.validate_services(commands)
+        ret = u.validate_services_by_name(commands)
         if ret:
             amulet.raise_status(amulet.FAIL, msg=ret)
 
-    def test_neutron_gateway_shared_db_relation(self):
+    def test_102_service_catalog(self):
+        """Verify that the service catalog endpoint data is valid."""
+        u.log.debug('Checking keystone service catalog...')
+        endpoint_check = {
+            'adminURL': u.valid_url,
+            'id': u.not_null,
+            'region': 'RegionOne',
+            'publicURL': u.valid_url,
+            'internalURL': u.valid_url
+        }
+        expected = {
+            'network': [endpoint_check],
+            'compute': [endpoint_check],
+            'identity': [endpoint_check]
+        }
+        actual = self.keystone.service_catalog.get_endpoints()
+
+        ret = u.validate_svc_catalog_endpoint_data(expected, actual)
+        if ret:
+            amulet.raise_status(amulet.FAIL, msg=ret)
+
+    def test_104_network_endpoint(self):
+        """Verify the neutron network endpoint data."""
+        u.log.debug('Checking neutron network api endpoint data...')
+        endpoints = self.keystone.endpoints.list()
+        admin_port = internal_port = public_port = '9696'
+        expected = {
+            'id': u.not_null,
+            'region': 'RegionOne',
+            'adminurl': u.valid_url,
+            'internalurl': u.valid_url,
+            'publicurl': u.valid_url,
+            'service_id': u.not_null
+        }
+        ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
+                                       public_port, expected)
+
+        if ret:
+            amulet.raise_status(amulet.FAIL,
+                                msg='glance endpoint: {}'.format(ret))
+
+    def test_110_users(self):
+        """Verify expected users."""
+        u.log.debug('Checking keystone users...')
+        expected = [
+            {'name': 'admin',
+             'enabled': True,
+             'tenantId': u.not_null,
+             'id': u.not_null,
+             'email': 'juju@localhost'},
+            {'name': 'quantum',
+             'enabled': True,
+             'tenantId': u.not_null,
+             'id': u.not_null,
+             'email': 'juju@localhost'}
+        ]
+
+        if self._get_openstack_release() >= self.trusty_kilo:
+            # Kilo or later
+            expected.append({
+                'name': 'nova',
+                'enabled': True,
+                'tenantId': u.not_null,
+                'id': u.not_null,
+                'email': 'juju@localhost'
+            })
+        else:
+            # Juno and earlier
+            expected.append({
+                'name': 's3_ec2_nova',
+                'enabled': True,
+                'tenantId': u.not_null,
+                'id': u.not_null,
+                'email': 'juju@localhost'
+            })
+
+        actual = self.keystone.users.list()
+        ret = u.validate_user_data(expected, actual)
+        if ret:
+            amulet.raise_status(amulet.FAIL, msg=ret)
+
+    def test_200_neutron_gateway_mysql_shared_db_relation(self):
         """Verify the neutron-gateway to mysql shared-db relation data"""
+        u.log.debug('Checking neutron-gateway:mysql db relation data...')
         unit = self.neutron_gateway_sentry
         relation = ['shared-db', 'mysql:shared-db']
         expected = {
@@ -208,8 +293,9 @@
             message = u.relation_error('neutron-gateway shared-db', ret)
             amulet.raise_status(amulet.FAIL, msg=message)
 
-    def test_mysql_shared_db_relation(self):
+    def test_201_mysql_neutron_gateway_shared_db_relation(self):
         """Verify the mysql to neutron-gateway shared-db relation data"""
+        u.log.debug('Checking mysql:neutron-gateway db relation data...')
         unit = self.mysql_sentry
         relation = ['shared-db', 'neutron-gateway:shared-db']
         expected = {
@@ -223,8 +309,9 @@
             message = u.relation_error('mysql shared-db', ret)
             amulet.raise_status(amulet.FAIL, msg=message)
 
-    def test_neutron_gateway_amqp_relation(self):
+    def test_202_neutron_gateway_rabbitmq_amqp_relation(self):
         """Verify the neutron-gateway to rabbitmq-server amqp relation data"""
+        u.log.debug('Checking neutron-gateway:rmq amqp relation data...')
         unit = self.neutron_gateway_sentry
         relation = ['amqp', 'rabbitmq-server:amqp']
         expected = {
@@ -238,9 +325,10 @@
             message = u.relation_error('neutron-gateway amqp', ret)
             amulet.raise_status(amulet.FAIL, msg=message)
 
-    def test_rabbitmq_amqp_relation(self):
+    def test_203_rabbitmq_neutron_gateway_amqp_relation(self):
         """Verify the rabbitmq-server to neutron-gateway amqp relation data"""
-        unit = self.rabbitmq_sentry
+        u.log.debug('Checking rmq:neutron-gateway amqp relation data...')
+        unit = self.rmq_sentry
         relation = ['amqp', 'neutron-gateway:amqp']
         expected = {
             'private-address': u.valid_ip,
@@ -253,9 +341,11 @@
             message = u.relation_error('rabbitmq amqp', ret)
             amulet.raise_status(amulet.FAIL, msg=message)
 
-    def test_neutron_gateway_network_service_relation(self):
+    def test_204_neutron_gateway_network_service_relation(self):
         """Verify the neutron-gateway to nova-cc quantum-network-service
         relation data"""
+        u.log.debug('Checking neutron-gateway:nova-cc net svc '
+                    'relation data...')
         unit = self.neutron_gateway_sentry
         relation = ['quantum-network-service',
                     'nova-cloud-controller:quantum-network-service']
@@ -268,9 +358,11 @@
             message = u.relation_error('neutron-gateway network-service', ret)
             amulet.raise_status(amulet.FAIL, msg=message)
 
-    def test_nova_cc_network_service_relation(self):
+    def test_205_nova_cc_network_service_relation(self):
         """Verify the nova-cc to neutron-gateway quantum-network-service
         relation data"""
+        u.log.debug('Checking nova-cc:neutron-gateway net svc '
+                    'relation data...')
         unit = self.nova_cc_sentry
         relation = ['quantum-network-service',
                     'neutron-gateway:quantum-network-service']
@@ -289,56 +381,178 @@
             'keystone_host': u.valid_ip,
             'quantum_plugin': 'ovs',
             'auth_host': u.valid_ip,
-            'service_username': 'quantum_s3_ec2_nova',
             'service_tenant_name': 'services'
         }
+
         if self._get_openstack_release() >= self.trusty_kilo:
+            # Kilo or later
             expected['service_username'] = 'nova'
+        else:
+            # Juno or earlier
+            expected['service_username'] = 's3_ec2_nova'
 
         ret = u.validate_relation_data(unit, relation, expected)
         if ret:
             message = u.relation_error('nova-cc network-service', ret)
             amulet.raise_status(amulet.FAIL, msg=message)
 
-    def test_z_restart_on_config_change(self):
-        """Verify that the specified services are restarted when the config
-        is changed.
-
-        Note(coreycb): The method name with the _z_ is a little odd
-        but it forces the test to run last. It just makes things
-        easier because restarting services requires re-authorization.
-        """
-        conf = '/etc/neutron/neutron.conf'
-
-        services = ['neutron-dhcp-agent',
-                    'neutron-lbaas-agent',
-                    'neutron-metadata-agent',
-                    'neutron-metering-agent',
-                    'neutron-openvswitch-agent']
+    def test_206_neutron_api_shared_db_relation(self):
+        """Verify the neutron-api to mysql shared-db relation data"""
+        u.log.debug('Checking neutron-api:mysql db relation data...')
+        unit = self.neutron_api_sentry
+        relation = ['shared-db', 'mysql:shared-db']
+        expected = {
+            'private-address': u.valid_ip,
+            'database': 'neutron',
+            'username': 'neutron',
+            'hostname': u.valid_ip
+        }
+
+        ret = u.validate_relation_data(unit, relation, expected)
+        if ret:
+            message = u.relation_error('neutron-api shared-db', ret)
479 | 318 | 414 | amulet.raise_status(amulet.FAIL, msg=message) | |
480 | 319 | if self._get_openstack_release() <= self.trusty_juno: | 415 | |
481 | 320 | services.append('neutron-vpn-agent') | 416 | def test_207_shared_db_neutron_api_relation(self): |
482 | 321 | 417 | """Verify the mysql to neutron-api shared-db relation data""" | |
483 | 322 | u.log.debug("Making config change on neutron-gateway...") | 418 | u.log.debug('Checking mysql:neutron-api db relation data...') |
484 | 323 | self.d.configure('neutron-gateway', {'debug': 'True'}) | 419 | unit = self.mysql_sentry |
485 | 324 | 420 | relation = ['shared-db', 'neutron-api:shared-db'] | |
486 | 325 | time = 60 | 421 | expected = { |
487 | 326 | for s in services: | 422 | 'db_host': u.valid_ip, |
488 | 327 | u.log.debug("Checking that service restarted: {}".format(s)) | 423 | 'private-address': u.valid_ip, |
489 | 328 | if not u.service_restarted(self.neutron_gateway_sentry, s, conf, | 424 | 'password': u.not_null |
490 | 329 | pgrep_full=True, sleep_time=time): | 425 | } |
491 | 330 | self.d.configure('neutron-gateway', {'debug': 'False'}) | 426 | |
492 | 331 | msg = "service {} didn't restart after config change".format(s) | 427 | if self._get_openstack_release() == self.precise_icehouse: |
493 | 332 | amulet.raise_status(amulet.FAIL, msg=msg) | 428 | # Precise |
494 | 333 | time = 0 | 429 | expected['allowed_units'] = 'nova-cloud-controller/0 neutron-api/0' |
495 | 334 | 430 | else: | |
496 | 335 | self.d.configure('neutron-gateway', {'debug': 'False'}) | 431 | # Not Precise |
497 | 336 | 432 | expected['allowed_units'] = 'neutron-api/0' | |
498 | 337 | def test_neutron_config(self): | 433 | |
499 | 434 | ret = u.validate_relation_data(unit, relation, expected) | ||
500 | 435 | if ret: | ||
501 | 436 | message = u.relation_error('mysql shared-db', ret) | ||
502 | 437 | amulet.raise_status(amulet.FAIL, msg=message) | ||
503 | 438 | |||
504 | 439 | def test_208_neutron_api_amqp_relation(self): | ||
505 | 440 | """Verify the neutron-api to rabbitmq-server amqp relation data""" | ||
506 | 441 | u.log.debug('Checking neutron-api:amqp relation data...') | ||
507 | 442 | unit = self.neutron_api_sentry | ||
508 | 443 | relation = ['amqp', 'rabbitmq-server:amqp'] | ||
509 | 444 | expected = { | ||
510 | 445 | 'username': 'neutron', | ||
511 | 446 | 'private-address': u.valid_ip, | ||
512 | 447 | 'vhost': 'openstack' | ||
513 | 448 | } | ||
514 | 449 | |||
515 | 450 | ret = u.validate_relation_data(unit, relation, expected) | ||
516 | 451 | if ret: | ||
517 | 452 | message = u.relation_error('neutron-api amqp', ret) | ||
518 | 453 | amulet.raise_status(amulet.FAIL, msg=message) | ||
519 | 454 | |||
520 | 455 | def test_209_amqp_neutron_api_relation(self): | ||
521 | 456 | """Verify the rabbitmq-server to neutron-api amqp relation data""" | ||
522 | 457 | u.log.debug('Checking amqp:neutron-api relation data...') | ||
523 | 458 | unit = self.rmq_sentry | ||
524 | 459 | relation = ['amqp', 'neutron-api:amqp'] | ||
525 | 460 | expected = { | ||
526 | 461 | 'hostname': u.valid_ip, | ||
527 | 462 | 'private-address': u.valid_ip, | ||
528 | 463 | 'password': u.not_null | ||
529 | 464 | } | ||
530 | 465 | |||
531 | 466 | ret = u.validate_relation_data(unit, relation, expected) | ||
532 | 467 | if ret: | ||
533 | 468 | message = u.relation_error('rabbitmq amqp', ret) | ||
534 | 469 | amulet.raise_status(amulet.FAIL, msg=message) | ||
535 | 470 | |||
536 | 471 | def test_210_neutron_api_keystone_identity_relation(self): | ||
537 | 472 | """Verify the neutron-api to keystone identity-service relation data""" | ||
538 | 473 | u.log.debug('Checking neutron-api:keystone id relation data...') | ||
539 | 474 | unit = self.neutron_api_sentry | ||
540 | 475 | relation = ['identity-service', 'keystone:identity-service'] | ||
541 | 476 | api_ip = unit.relation('identity-service', | ||
542 | 477 | 'keystone:identity-service')['private-address'] | ||
543 | 478 | api_endpoint = 'http://{}:9696'.format(api_ip) | ||
544 | 479 | expected = { | ||
545 | 480 | 'private-address': u.valid_ip, | ||
546 | 481 | 'quantum_region': 'RegionOne', | ||
547 | 482 | 'quantum_service': 'quantum', | ||
548 | 483 | 'quantum_admin_url': api_endpoint, | ||
549 | 484 | 'quantum_internal_url': api_endpoint, | ||
550 | 485 | 'quantum_public_url': api_endpoint, | ||
551 | 486 | } | ||
552 | 487 | |||
553 | 488 | ret = u.validate_relation_data(unit, relation, expected) | ||
554 | 489 | if ret: | ||
555 | 490 | message = u.relation_error('neutron-api identity-service', ret) | ||
556 | 491 | amulet.raise_status(amulet.FAIL, msg=message) | ||
557 | 492 | |||
558 | 493 | def test_211_keystone_neutron_api_identity_relation(self): | ||
559 | 494 | """Verify the keystone to neutron-api identity-service relation data""" | ||
560 | 495 | u.log.debug('Checking keystone:neutron-api id relation data...') | ||
561 | 496 | unit = self.keystone_sentry | ||
562 | 497 | relation = ['identity-service', 'neutron-api:identity-service'] | ||
563 | 498 | rel_ks_id = unit.relation('identity-service', | ||
564 | 499 | 'neutron-api:identity-service') | ||
565 | 500 | id_ip = rel_ks_id['private-address'] | ||
566 | 501 | expected = { | ||
567 | 502 | 'admin_token': 'ubuntutesting', | ||
568 | 503 | 'auth_host': id_ip, | ||
569 | 504 | 'auth_port': "35357", | ||
570 | 505 | 'auth_protocol': 'http', | ||
571 | 506 | 'private-address': id_ip, | ||
572 | 507 | 'service_host': id_ip, | ||
573 | 508 | } | ||
574 | 509 | ret = u.validate_relation_data(unit, relation, expected) | ||
575 | 510 | if ret: | ||
576 | 511 | message = u.relation_error('neutron-api identity-service', ret) | ||
577 | 512 | amulet.raise_status(amulet.FAIL, msg=message) | ||
578 | 513 | |||
579 | 514 | def test_212_neutron_api_novacc_relation(self): | ||
580 | 515 | """Verify the neutron-api to nova-cloud-controller relation data""" | ||
581 | 516 | u.log.debug('Checking neutron-api:novacc relation data...') | ||
582 | 517 | unit = self.neutron_api_sentry | ||
583 | 518 | relation = ['neutron-api', 'nova-cloud-controller:neutron-api'] | ||
584 | 519 | api_ip = unit.relation('identity-service', | ||
585 | 520 | 'keystone:identity-service')['private-address'] | ||
586 | 521 | api_endpoint = 'http://{}:9696'.format(api_ip) | ||
587 | 522 | expected = { | ||
588 | 523 | 'private-address': api_ip, | ||
589 | 524 | 'neutron-plugin': 'ovs', | ||
590 | 525 | 'neutron-security-groups': "no", | ||
591 | 526 | 'neutron-url': api_endpoint, | ||
592 | 527 | } | ||
593 | 528 | ret = u.validate_relation_data(unit, relation, expected) | ||
594 | 529 | if ret: | ||
595 | 530 | message = u.relation_error('neutron-api neutron-api', ret) | ||
596 | 531 | amulet.raise_status(amulet.FAIL, msg=message) | ||
597 | 532 | |||
598 | 533 | def test_213_novacc_neutron_api_relation(self): | ||
599 | 534 | """Verify the nova-cloud-controller to neutron-api relation data""" | ||
600 | 535 | u.log.debug('Checking novacc:neutron-api relation data...') | ||
601 | 536 | unit = self.nova_cc_sentry | ||
602 | 537 | relation = ['neutron-api', 'neutron-api:neutron-api'] | ||
603 | 538 | cc_ip = unit.relation('neutron-api', | ||
604 | 539 | 'neutron-api:neutron-api')['private-address'] | ||
605 | 540 | cc_endpoint = 'http://{}:8774/v2'.format(cc_ip) | ||
606 | 541 | expected = { | ||
607 | 542 | 'private-address': cc_ip, | ||
608 | 543 | 'nova_url': cc_endpoint, | ||
609 | 544 | } | ||
610 | 545 | ret = u.validate_relation_data(unit, relation, expected) | ||
611 | 546 | if ret: | ||
612 | 547 | message = u.relation_error('nova-cc neutron-api', ret) | ||
613 | 548 | amulet.raise_status(amulet.FAIL, msg=message) | ||
614 | 549 | |||
615 | 550 | def test_300_neutron_config(self): | ||
616 | 338 | """Verify the data in the neutron config file.""" | 551 | """Verify the data in the neutron config file.""" |
617 | 552 | u.log.debug('Checking neutron gateway config file data...') | ||
618 | 339 | unit = self.neutron_gateway_sentry | 553 | unit = self.neutron_gateway_sentry |
621 | 340 | rabbitmq_relation = self.rabbitmq_sentry.relation('amqp', | 554 | rmq_ng_rel = self.rmq_sentry.relation( |
622 | 341 | 'neutron-gateway:amqp') | 555 | 'amqp', 'neutron-gateway:amqp') |
623 | 342 | 556 | ||
624 | 343 | conf = '/etc/neutron/neutron.conf' | 557 | conf = '/etc/neutron/neutron.conf' |
625 | 344 | expected = { | 558 | expected = { |
626 | @@ -350,35 +564,34 @@ | |||
627 | 350 | 'notification_driver': 'neutron.openstack.common.notifier.' | 564 | 'notification_driver': 'neutron.openstack.common.notifier.' |
628 | 351 | 'list_notifier', | 565 | 'list_notifier', |
629 | 352 | 'list_notifier_drivers': 'neutron.openstack.common.' | 566 | 'list_notifier_drivers': 'neutron.openstack.common.' |
631 | 353 | 'notifier.rabbit_notifier' | 567 | 'notifier.rabbit_notifier', |
632 | 354 | }, | 568 | }, |
633 | 355 | 'agent': { | 569 | 'agent': { |
634 | 356 | 'root_helper': 'sudo /usr/bin/neutron-rootwrap ' | 570 | 'root_helper': 'sudo /usr/bin/neutron-rootwrap ' |
635 | 357 | '/etc/neutron/rootwrap.conf' | 571 | '/etc/neutron/rootwrap.conf' |
636 | 358 | } | 572 | } |
637 | 359 | } | 573 | } |
638 | 574 | |||
639 | 360 | if self._get_openstack_release() >= self.trusty_kilo: | 575 | if self._get_openstack_release() >= self.trusty_kilo: |
655 | 361 | oslo_concurrency = { | 576 | # Kilo or later |
656 | 362 | 'oslo_concurrency': { | 577 | expected['oslo_messaging_rabbit'] = { |
657 | 363 | 'lock_path':'/var/lock/neutron' | 578 | 'rabbit_userid': 'neutron', |
658 | 364 | } | 579 | 'rabbit_virtual_host': 'openstack', |
659 | 365 | } | 580 | 'rabbit_password': rmq_ng_rel['password'], |
660 | 366 | oslo_messaging_rabbit = { | 581 | 'rabbit_host': rmq_ng_rel['hostname'], |
661 | 367 | 'oslo_messaging_rabbit': { | 582 | } |
662 | 368 | 'rabbit_userid': 'neutron', | 583 | expected['oslo_concurrency'] = { |
663 | 369 | 'rabbit_virtual_host': 'openstack', | 584 | 'lock_path': '/var/lock/neutron' |
664 | 370 | 'rabbit_password': rabbitmq_relation['password'], | 585 | } |
650 | 371 | 'rabbit_host': rabbitmq_relation['hostname'], | ||
651 | 372 | } | ||
652 | 373 | } | ||
653 | 374 | expected.update(oslo_concurrency) | ||
654 | 375 | expected.update(oslo_messaging_rabbit) | ||
665 | 376 | else: | 586 | else: |
671 | 377 | expected['DEFAULT']['lock_path'] = '/var/lock/neutron' | 587 | # Juno or earlier |
672 | 378 | expected['DEFAULT']['rabbit_userid'] = 'neutron' | 588 | expected['DEFAULT'].update({ |
673 | 379 | expected['DEFAULT']['rabbit_virtual_host'] = 'openstack' | 589 | 'rabbit_userid': 'neutron', |
674 | 380 | expected['DEFAULT']['rabbit_password'] = rabbitmq_relation['password'] | 590 | 'rabbit_virtual_host': 'openstack', |
675 | 381 | expected['DEFAULT']['rabbit_host'] = rabbitmq_relation['hostname'] | 591 | 'rabbit_password': rmq_ng_rel['password'], |
676 | 592 | 'rabbit_host': rmq_ng_rel['hostname'], | ||
677 | 593 | 'lock_path': '/var/lock/neutron', | ||
678 | 594 | }) | ||
679 | 382 | 595 | ||
680 | 383 | for section, pairs in expected.iteritems(): | 596 | for section, pairs in expected.iteritems(): |
681 | 384 | ret = u.validate_config_data(unit, conf, section, pairs) | 597 | ret = u.validate_config_data(unit, conf, section, pairs) |
682 | @@ -386,15 +599,17 @@ | |||
683 | 386 | message = "neutron config error: {}".format(ret) | 599 | message = "neutron config error: {}".format(ret) |
684 | 387 | amulet.raise_status(amulet.FAIL, msg=message) | 600 | amulet.raise_status(amulet.FAIL, msg=message) |
685 | 388 | 601 | ||
687 | 389 | def test_ml2_config(self): | 602 | def test_301_neutron_ml2_config(self): |
688 | 390 | """Verify the data in the ml2 config file. This is only available | 603 | """Verify the data in the ml2 config file. This is only available |
689 | 391 | since icehouse.""" | 604 | since icehouse.""" |
690 | 605 | u.log.debug('Checking neutron gateway ml2 config file data...') | ||
691 | 392 | if self._get_openstack_release() < self.precise_icehouse: | 606 | if self._get_openstack_release() < self.precise_icehouse: |
692 | 393 | return | 607 | return |
693 | 394 | 608 | ||
694 | 395 | unit = self.neutron_gateway_sentry | 609 | unit = self.neutron_gateway_sentry |
695 | 396 | conf = '/etc/neutron/plugins/ml2/ml2_conf.ini' | 610 | conf = '/etc/neutron/plugins/ml2/ml2_conf.ini' |
697 | 397 | neutron_gateway_relation = unit.relation('shared-db', 'mysql:shared-db') | 611 | ng_db_rel = unit.relation('shared-db', 'mysql:shared-db') |
698 | 612 | |||
699 | 398 | expected = { | 613 | expected = { |
700 | 399 | 'ml2': { | 614 | 'ml2': { |
701 | 400 | 'type_drivers': 'gre,vxlan,vlan,flat', | 615 | 'type_drivers': 'gre,vxlan,vlan,flat', |
702 | @@ -409,7 +624,7 @@ | |||
703 | 409 | }, | 624 | }, |
704 | 410 | 'ovs': { | 625 | 'ovs': { |
705 | 411 | 'enable_tunneling': 'True', | 626 | 'enable_tunneling': 'True', |
707 | 412 | 'local_ip': neutron_gateway_relation['private-address'] | 627 | 'local_ip': ng_db_rel['private-address'] |
708 | 413 | }, | 628 | }, |
709 | 414 | 'agent': { | 629 | 'agent': { |
710 | 415 | 'tunnel_types': 'gre', | 630 | 'tunnel_types': 'gre', |
711 | @@ -427,8 +642,9 @@ | |||
712 | 427 | message = "ml2 config error: {}".format(ret) | 642 | message = "ml2 config error: {}".format(ret) |
713 | 428 | amulet.raise_status(amulet.FAIL, msg=message) | 643 | amulet.raise_status(amulet.FAIL, msg=message) |
714 | 429 | 644 | ||
716 | 430 | def test_dhcp_agent_config(self): | 645 | def test_302_neutron_dhcp_agent_config(self): |
717 | 431 | """Verify the data in the dhcp agent config file.""" | 646 | """Verify the data in the dhcp agent config file.""" |
718 | 647 | u.log.debug('Checking neutron gateway dhcp agent config file data...') | ||
719 | 432 | unit = self.neutron_gateway_sentry | 648 | unit = self.neutron_gateway_sentry |
720 | 433 | conf = '/etc/neutron/dhcp_agent.ini' | 649 | conf = '/etc/neutron/dhcp_agent.ini' |
721 | 434 | expected = { | 650 | expected = { |
722 | @@ -440,44 +656,45 @@ | |||
723 | 440 | '/etc/neutron/rootwrap.conf', | 656 | '/etc/neutron/rootwrap.conf', |
724 | 441 | 'ovs_use_veth': 'True' | 657 | 'ovs_use_veth': 'True' |
725 | 442 | } | 658 | } |
726 | 659 | section = 'DEFAULT' | ||
727 | 443 | 660 | ||
729 | 444 | ret = u.validate_config_data(unit, conf, 'DEFAULT', expected) | 661 | ret = u.validate_config_data(unit, conf, section, expected) |
730 | 445 | if ret: | 662 | if ret: |
731 | 446 | message = "dhcp agent config error: {}".format(ret) | 663 | message = "dhcp agent config error: {}".format(ret) |
732 | 447 | amulet.raise_status(amulet.FAIL, msg=message) | 664 | amulet.raise_status(amulet.FAIL, msg=message) |
733 | 448 | 665 | ||
735 | 449 | def test_fwaas_driver_config(self): | 666 | def test_303_neutron_fwaas_driver_config(self): |
736 | 450 | """Verify the data in the fwaas driver config file. This is only | 667 | """Verify the data in the fwaas driver config file. This is only |
737 | 451 | available since havana.""" | 668 | available since havana.""" |
741 | 452 | if self._get_openstack_release() < self.precise_havana: | 669 | u.log.debug('Checking neutron gateway fwaas config file data...') |
739 | 453 | return | ||
740 | 454 | |||
742 | 455 | unit = self.neutron_gateway_sentry | 670 | unit = self.neutron_gateway_sentry |
743 | 456 | conf = '/etc/neutron/fwaas_driver.ini' | 671 | conf = '/etc/neutron/fwaas_driver.ini' |
744 | 672 | expected = { | ||
745 | 673 | 'enabled': 'True' | ||
746 | 674 | } | ||
747 | 675 | section = 'fwaas' | ||
748 | 676 | |||
749 | 457 | if self._get_openstack_release() >= self.trusty_kilo: | 677 | if self._get_openstack_release() >= self.trusty_kilo: |
755 | 458 | expected = { | 678 | # Kilo or later |
756 | 459 | 'driver': 'neutron_fwaas.services.firewall.drivers.' | 679 | expected['driver'] = ('neutron_fwaas.services.firewall.drivers.' |
757 | 460 | 'linux.iptables_fwaas.IptablesFwaasDriver', | 680 | 'linux.iptables_fwaas.IptablesFwaasDriver') |
753 | 461 | 'enabled': 'True' | ||
754 | 462 | } | ||
758 | 463 | else: | 681 | else: |
764 | 464 | expected = { | 682 | # Juno or earlier |
765 | 465 | 'driver': 'neutron.services.firewall.drivers.' | 683 | expected['driver'] = ('neutron.services.firewall.drivers.linux.' |
766 | 466 | 'linux.iptables_fwaas.IptablesFwaasDriver', | 684 | 'iptables_fwaas.IptablesFwaasDriver') |
762 | 467 | 'enabled': 'True' | ||
763 | 468 | } | ||
767 | 469 | 685 | ||
769 | 470 | ret = u.validate_config_data(unit, conf, 'fwaas', expected) | 686 | ret = u.validate_config_data(unit, conf, section, expected) |
770 | 471 | if ret: | 687 | if ret: |
771 | 472 | message = "fwaas driver config error: {}".format(ret) | 688 | message = "fwaas driver config error: {}".format(ret) |
772 | 473 | amulet.raise_status(amulet.FAIL, msg=message) | 689 | amulet.raise_status(amulet.FAIL, msg=message) |
773 | 474 | 690 | ||
775 | 475 | def test_l3_agent_config(self): | 691 | def test_304_neutron_l3_agent_config(self): |
776 | 476 | """Verify the data in the l3 agent config file.""" | 692 | """Verify the data in the l3 agent config file.""" |
777 | 693 | u.log.debug('Checking neutron gateway l3 agent config file data...') | ||
778 | 477 | unit = self.neutron_gateway_sentry | 694 | unit = self.neutron_gateway_sentry |
782 | 478 | nova_cc_relation = self.nova_cc_sentry.relation(\ | 695 | ncc_ng_rel = self.nova_cc_sentry.relation( |
783 | 479 | 'quantum-network-service', | 696 | 'quantum-network-service', |
784 | 480 | 'neutron-gateway:quantum-network-service') | 697 | 'neutron-gateway:quantum-network-service') |
785 | 481 | ep = self.keystone.service_catalog.url_for(service_type='identity', | 698 | ep = self.keystone.service_catalog.url_for(service_type='identity', |
786 | 482 | endpoint_type='publicURL') | 699 | endpoint_type='publicURL') |
787 | 483 | 700 | ||
788 | @@ -488,24 +705,30 @@ | |||
789 | 488 | 'auth_url': ep, | 705 | 'auth_url': ep, |
790 | 489 | 'auth_region': 'RegionOne', | 706 | 'auth_region': 'RegionOne', |
791 | 490 | 'admin_tenant_name': 'services', | 707 | 'admin_tenant_name': 'services', |
794 | 491 | 'admin_user': 'quantum_s3_ec2_nova', | 708 | 'admin_password': ncc_ng_rel['service_password'], |
793 | 492 | 'admin_password': nova_cc_relation['service_password'], | ||
795 | 493 | 'root_helper': 'sudo /usr/bin/neutron-rootwrap ' | 709 | 'root_helper': 'sudo /usr/bin/neutron-rootwrap ' |
796 | 494 | '/etc/neutron/rootwrap.conf', | 710 | '/etc/neutron/rootwrap.conf', |
797 | 495 | 'ovs_use_veth': 'True', | 711 | 'ovs_use_veth': 'True', |
798 | 496 | 'handle_internal_only_routers': 'True' | 712 | 'handle_internal_only_routers': 'True' |
799 | 497 | } | 713 | } |
800 | 714 | section = 'DEFAULT' | ||
801 | 715 | |||
802 | 498 | if self._get_openstack_release() >= self.trusty_kilo: | 716 | if self._get_openstack_release() >= self.trusty_kilo: |
803 | 717 | # Kilo or later | ||
804 | 499 | expected['admin_user'] = 'nova' | 718 | expected['admin_user'] = 'nova' |
805 | 719 | else: | ||
806 | 720 | # Juno or earlier | ||
807 | 721 | expected['admin_user'] = 's3_ec2_nova' | ||
808 | 500 | 722 | ||
810 | 501 | ret = u.validate_config_data(unit, conf, 'DEFAULT', expected) | 723 | ret = u.validate_config_data(unit, conf, section, expected) |
811 | 502 | if ret: | 724 | if ret: |
812 | 503 | message = "l3 agent config error: {}".format(ret) | 725 | message = "l3 agent config error: {}".format(ret) |
813 | 504 | amulet.raise_status(amulet.FAIL, msg=message) | 726 | amulet.raise_status(amulet.FAIL, msg=message) |
814 | 505 | 727 | ||
816 | 506 | def test_lbaas_agent_config(self): | 728 | def test_305_neutron_lbaas_agent_config(self): |
817 | 507 | """Verify the data in the lbaas agent config file. This is only | 729 | """Verify the data in the lbaas agent config file. This is only |
818 | 508 | available since havana.""" | 730 | available since havana.""" |
819 | 731 | u.log.debug('Checking neutron gateway lbaas config file data...') | ||
820 | 509 | if self._get_openstack_release() < self.precise_havana: | 732 | if self._get_openstack_release() < self.precise_havana: |
821 | 510 | return | 733 | return |
822 | 511 | 734 | ||
823 | @@ -513,21 +736,27 @@ | |||
824 | 513 | conf = '/etc/neutron/lbaas_agent.ini' | 736 | conf = '/etc/neutron/lbaas_agent.ini' |
825 | 514 | expected = { | 737 | expected = { |
826 | 515 | 'DEFAULT': { | 738 | 'DEFAULT': { |
827 | 516 | 'periodic_interval': '10', | ||
828 | 517 | 'interface_driver': 'neutron.agent.linux.interface.' | 739 | 'interface_driver': 'neutron.agent.linux.interface.' |
829 | 518 | 'OVSInterfaceDriver', | 740 | 'OVSInterfaceDriver', |
830 | 741 | 'periodic_interval': '10', | ||
831 | 519 | 'ovs_use_veth': 'False', | 742 | 'ovs_use_veth': 'False', |
832 | 520 | 'device_driver': 'neutron.services.loadbalancer.drivers.' | ||
833 | 521 | 'haproxy.namespace_driver.HaproxyNSDriver' | ||
834 | 522 | }, | 743 | }, |
835 | 523 | 'haproxy': { | 744 | 'haproxy': { |
836 | 524 | 'loadbalancer_state_path': '$state_path/lbaas', | 745 | 'loadbalancer_state_path': '$state_path/lbaas', |
837 | 525 | 'user_group': 'nogroup' | 746 | 'user_group': 'nogroup' |
838 | 526 | } | 747 | } |
839 | 527 | } | 748 | } |
840 | 749 | |||
841 | 528 | if self._get_openstack_release() >= self.trusty_kilo: | 750 | if self._get_openstack_release() >= self.trusty_kilo: |
844 | 529 | expected['DEFAULT']['device_driver'] = ('neutron_lbaas.services.' + | 751 | # Kilo or later |
845 | 530 | 'loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver') | 752 | expected['DEFAULT']['device_driver'] = \ |
846 | 753 | ('neutron_lbaas.services.loadbalancer.drivers.haproxy.' | ||
847 | 754 | 'namespace_driver.HaproxyNSDriver') | ||
848 | 755 | else: | ||
849 | 756 | # Juno or earlier | ||
850 | 757 | expected['DEFAULT']['device_driver'] = \ | ||
851 | 758 | ('neutron.services.loadbalancer.drivers.haproxy.' | ||
852 | 759 | 'namespace_driver.HaproxyNSDriver') | ||
853 | 531 | 760 | ||
854 | 532 | for section, pairs in expected.iteritems(): | 761 | for section, pairs in expected.iteritems(): |
855 | 533 | ret = u.validate_config_data(unit, conf, section, pairs) | 762 | ret = u.validate_config_data(unit, conf, section, pairs) |
856 | @@ -535,46 +764,51 @@ | |||
857 | 535 | message = "lbaas agent config error: {}".format(ret) | 764 | message = "lbaas agent config error: {}".format(ret) |
858 | 536 | amulet.raise_status(amulet.FAIL, msg=message) | 765 | amulet.raise_status(amulet.FAIL, msg=message) |
859 | 537 | 766 | ||
861 | 538 | def test_metadata_agent_config(self): | 767 | def test_306_neutron_metadata_agent_config(self): |
862 | 539 | """Verify the data in the metadata agent config file.""" | 768 | """Verify the data in the metadata agent config file.""" |
863 | 769 | u.log.debug('Checking neutron gateway metadata agent ' | ||
864 | 770 | 'config file data...') | ||
865 | 540 | unit = self.neutron_gateway_sentry | 771 | unit = self.neutron_gateway_sentry |
866 | 541 | ep = self.keystone.service_catalog.url_for(service_type='identity', | 772 | ep = self.keystone.service_catalog.url_for(service_type='identity', |
867 | 542 | endpoint_type='publicURL') | 773 | endpoint_type='publicURL') |
872 | 543 | neutron_gateway_relation = unit.relation('shared-db', 'mysql:shared-db') | 774 | ng_db_rel = unit.relation('shared-db', |
873 | 544 | nova_cc_relation = self.nova_cc_sentry.relation(\ | 775 | 'mysql:shared-db') |
874 | 545 | 'quantum-network-service', | 776 | nova_cc_relation = self.nova_cc_sentry.relation( |
875 | 546 | 'neutron-gateway:quantum-network-service') | 777 | 'quantum-network-service', |
876 | 778 | 'neutron-gateway:quantum-network-service') | ||
877 | 547 | 779 | ||
878 | 548 | conf = '/etc/neutron/metadata_agent.ini' | 780 | conf = '/etc/neutron/metadata_agent.ini' |
879 | 549 | expected = { | 781 | expected = { |
880 | 550 | 'auth_url': ep, | 782 | 'auth_url': ep, |
881 | 551 | 'auth_region': 'RegionOne', | 783 | 'auth_region': 'RegionOne', |
882 | 552 | 'admin_tenant_name': 'services', | 784 | 'admin_tenant_name': 'services', |
883 | 553 | 'admin_user': 'quantum_s3_ec2_nova', | ||
884 | 554 | 'admin_password': nova_cc_relation['service_password'], | 785 | 'admin_password': nova_cc_relation['service_password'], |
885 | 555 | 'root_helper': 'sudo neutron-rootwrap ' | 786 | 'root_helper': 'sudo neutron-rootwrap ' |
887 | 556 | '/etc/neutron/rootwrap.conf', | 787 | '/etc/neutron/rootwrap.conf', |
888 | 557 | 'state_path': '/var/lib/neutron', | 788 | 'state_path': '/var/lib/neutron', |
891 | 558 | 'nova_metadata_ip': neutron_gateway_relation['private-address'], | 789 | 'nova_metadata_ip': ng_db_rel['private-address'], |
892 | 559 | 'nova_metadata_port': '8775' | 790 | 'nova_metadata_port': '8775', |
893 | 791 | 'cache_url': 'memory://?default_ttl=5' | ||
894 | 560 | } | 792 | } |
895 | 793 | section = 'DEFAULT' | ||
896 | 794 | |||
897 | 561 | if self._get_openstack_release() >= self.trusty_kilo: | 795 | if self._get_openstack_release() >= self.trusty_kilo: |
898 | 796 | # Kilo or later | ||
899 | 562 | expected['admin_user'] = 'nova' | 797 | expected['admin_user'] = 'nova' |
905 | 563 | 798 | else: | |
906 | 564 | if self._get_openstack_release() >= self.precise_icehouse: | 799 | # Juno or earlier |
907 | 565 | expected['cache_url'] = 'memory://?default_ttl=5' | 800 | expected['admin_user'] = 's3_ec2_nova' |
908 | 566 | 801 | ||
909 | 567 | ret = u.validate_config_data(unit, conf, 'DEFAULT', expected) | 802 | ret = u.validate_config_data(unit, conf, section, expected) |
910 | 568 | if ret: | 803 | if ret: |
911 | 569 | message = "metadata agent config error: {}".format(ret) | 804 | message = "metadata agent config error: {}".format(ret) |
912 | 570 | amulet.raise_status(amulet.FAIL, msg=message) | 805 | amulet.raise_status(amulet.FAIL, msg=message) |
913 | 571 | 806 | ||
915 | 572 | def test_metering_agent_config(self): | 807 | def test_307_neutron_metering_agent_config(self): |
916 | 573 | """Verify the data in the metering agent config file. This is only | 808 | """Verify the data in the metering agent config file. This is only |
917 | 574 | available since havana.""" | 809 | available since havana.""" |
921 | 575 | if self._get_openstack_release() < self.precise_havana: | 810 | u.log.debug('Checking neutron gateway metering agent ' |
922 | 576 | return | 811 | 'config file data...') |
920 | 577 | |||
923 | 578 | unit = self.neutron_gateway_sentry | 812 | unit = self.neutron_gateway_sentry |
924 | 579 | conf = '/etc/neutron/metering_agent.ini' | 813 | conf = '/etc/neutron/metering_agent.ini' |
925 | 580 | expected = { | 814 | expected = { |
926 | @@ -586,26 +820,24 @@ | |||
927 | 586 | 'OVSInterfaceDriver', | 820 | 'OVSInterfaceDriver', |
928 | 587 | 'use_namespaces': 'True' | 821 | 'use_namespaces': 'True' |
929 | 588 | } | 822 | } |
930 | 823 | section = 'DEFAULT' | ||
931 | 589 | 824 | ||
933 | 590 | ret = u.validate_config_data(unit, conf, 'DEFAULT', expected) | 825 | ret = u.validate_config_data(unit, conf, section, expected) |
934 | 591 | if ret: | 826 | if ret: |
935 | 592 | message = "metering agent config error: {}".format(ret) | 827 | message = "metering agent config error: {}".format(ret) |
936 | 828 | amulet.raise_status(amulet.FAIL, msg=message) | ||
937 | 593 | 829 | ||
939 | 594 | def test_nova_config(self): | 830 | def test_308_neutron_nova_config(self): |
940 | 595 | """Verify the data in the nova config file.""" | 831 | """Verify the data in the nova config file.""" |
941 | 832 | u.log.debug('Checking neutron gateway nova config file data...') | ||
942 | 596 | unit = self.neutron_gateway_sentry | 833 | unit = self.neutron_gateway_sentry |
943 | 597 | conf = '/etc/nova/nova.conf' | 834 | conf = '/etc/nova/nova.conf' |
955 | 598 | mysql_relation = self.mysql_sentry.relation('shared-db', | 835 | |
956 | 599 | 'neutron-gateway:shared-db') | 836 | rabbitmq_relation = self.rmq_sentry.relation( |
957 | 600 | db_uri = "mysql://{}:{}@{}/{}".format('nova', | 837 | 'amqp', 'neutron-gateway:amqp') |
958 | 601 | mysql_relation['password'], | 838 | nova_cc_relation = self.nova_cc_sentry.relation( |
959 | 602 | mysql_relation['db_host'], | 839 | 'quantum-network-service', |
960 | 603 | 'nova') | 840 | 'neutron-gateway:quantum-network-service') |
950 | 604 | rabbitmq_relation = self.rabbitmq_sentry.relation('amqp', | ||
951 | 605 | 'neutron-gateway:amqp') | ||
952 | 606 | nova_cc_relation = self.nova_cc_sentry.relation(\ | ||
953 | 607 | 'quantum-network-service', | ||
954 | 608 | 'neutron-gateway:quantum-network-service') | ||
961 | 609 | ep = self.keystone.service_catalog.url_for(service_type='identity', | 841 | ep = self.keystone.service_catalog.url_for(service_type='identity', |
962 | 610 | endpoint_type='publicURL') | 842 | endpoint_type='publicURL') |
963 | 611 | 843 | ||
964 | @@ -622,49 +854,44 @@ | |||
965 | 622 | 'network_api_class': 'nova.network.neutronv2.api.API', | 854 | 'network_api_class': 'nova.network.neutronv2.api.API', |
966 | 623 | } | 855 | } |
967 | 624 | } | 856 | } |
968 | 857 | |||
969 | 625 | if self._get_openstack_release() >= self.trusty_kilo: | 858 | if self._get_openstack_release() >= self.trusty_kilo: |
997 | 626 | neutron = { | 859 | # Kilo or later |
998 | 627 | 'neutron': { | 860 | expected['oslo_messaging_rabbit'] = { |
999 | 628 | 'auth_strategy': 'keystone', | 861 | 'rabbit_userid': 'neutron', |
1000 | 629 | 'url': nova_cc_relation['quantum_url'], | 862 | 'rabbit_virtual_host': 'openstack', |
1001 | 630 | 'admin_tenant_name': 'services', | 863 | 'rabbit_password': rabbitmq_relation['password'], |
1002 | 631 | 'admin_username': 'nova', | 864 | 'rabbit_host': rabbitmq_relation['hostname'], |
1003 | 632 | 'admin_password': nova_cc_relation['service_password'], | 865 | } |
1004 | 633 | 'admin_auth_url': ep, | 866 | expected['oslo_concurrency'] = { |
1005 | 634 | 'service_metadata_proxy': 'True', | 867 | 'lock_path': '/var/lock/nova' |
1006 | 635 | } | 868 | } |
1007 | 636 | } | 869 | expected['neutron'] = { |
1008 | 637 | oslo_concurrency = { | 870 | 'auth_strategy': 'keystone', |
1009 | 638 | 'oslo_concurrency': { | 871 | 'url': nova_cc_relation['quantum_url'], |
1010 | 639 | 'lock_path':'/var/lock/nova' | 872 | 'admin_tenant_name': 'services', |
1011 | 640 | } | 873 | 'admin_username': 'nova', |
1012 | 641 | } | 874 | 'admin_password': nova_cc_relation['service_password'], |
1013 | 642 | oslo_messaging_rabbit = { | 875 | 'admin_auth_url': ep, |
1014 | 643 | 'oslo_messaging_rabbit': { | 876 | 'service_metadata_proxy': 'True', |
1015 | 644 | 'rabbit_userid': 'neutron', | 877 | 'metadata_proxy_shared_secret': u.not_null |
1016 | 645 | 'rabbit_virtual_host': 'openstack', | 878 | } |
990 | 646 | 'rabbit_password': rabbitmq_relation['password'], | ||
991 | 647 | 'rabbit_host': rabbitmq_relation['hostname'], | ||
992 | 648 | } | ||
993 | 649 | } | ||
994 | 650 | expected.update(neutron) | ||
995 | 651 | expected.update(oslo_concurrency) | ||
996 | 652 | expected.update(oslo_messaging_rabbit) | ||
1017 | 653 | else: | 879 | else: |
1032 | 654 | d = 'DEFAULT' | 880 | # Juno or earlier |
1033 | 655 | expected[d]['lock_path'] = '/var/lock/nova' | 881 | expected['DEFAULT'].update({ |
1034 | 656 | expected[d]['rabbit_userid'] = 'neutron' | 882 | 'rabbit_userid': 'neutron', |
1035 | 657 | expected[d]['rabbit_virtual_host'] = 'openstack' | 883 | 'rabbit_virtual_host': 'openstack', |
1036 | 658 | expected[d]['rabbit_password'] = rabbitmq_relation['password'] | 884 | 'rabbit_password': rabbitmq_relation['password'], |
1037 | 659 | expected[d]['rabbit_host'] = rabbitmq_relation['hostname'] | 885 | 'rabbit_host': rabbitmq_relation['hostname'], |
1038 | 660 | expected[d]['service_neutron_metadata_proxy'] = 'True' | 886 | 'lock_path': '/var/lock/nova', |
1039 | 661 | expected[d]['neutron_auth_strategy'] = 'keystone' | 887 | 'neutron_auth_strategy': 'keystone', |
1040 | 662 | expected[d]['neutron_url'] = nova_cc_relation['quantum_url'] | 888 | 'neutron_url': nova_cc_relation['quantum_url'], |
1041 | 663 | expected[d]['neutron_admin_tenant_name'] = 'services' | 889 | 'neutron_admin_tenant_name': 'services', |
1042 | 664 | expected[d]['neutron_admin_username'] = 'quantum_s3_ec2_nova' | 890 | 'neutron_admin_username': 's3_ec2_nova', |
1043 | 665 | expected[d]['neutron_admin_password'] = \ | 891 | 'neutron_admin_password': nova_cc_relation['service_password'], |
1044 | 666 | nova_cc_relation['service_password'] | 892 | 'neutron_admin_auth_url': ep, |
1045 | 667 | expected[d]['neutron_admin_auth_url'] = ep | 893 | 'service_neutron_metadata_proxy': 'True', |
1046 | 894 | }) | ||
1047 | 668 | 895 | ||
1048 | 669 | for section, pairs in expected.iteritems(): | 896 | for section, pairs in expected.iteritems(): |
1049 | 670 | ret = u.validate_config_data(unit, conf, section, pairs) | 897 | ret = u.validate_config_data(unit, conf, section, pairs) |
1050 | @@ -672,56 +899,30 @@ | |||
1051 | 672 | message = "nova config error: {}".format(ret) | 899 | message = "nova config error: {}".format(ret) |
1052 | 673 | amulet.raise_status(amulet.FAIL, msg=message) | 900 | amulet.raise_status(amulet.FAIL, msg=message) |
1053 | 674 | 901 | ||
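Editor's note: the hunk above replaces the old per-section temp dicts (`neutron`, `oslo_concurrency`, `oslo_messaging_rabbit` merged via `expected.update(...)`) with direct assignment into `expected`. A standalone sketch of that release-gated pattern (section keys mirror the diff; the values shown are trimmed and partly hypothetical):

```python
def expected_nova_sections(kilo_or_later, rabbitmq_relation):
    """Build the release-dependent `expected` config sections, in the
    shape the refactored test uses (values here are illustrative)."""
    expected = {'DEFAULT': {'verbose': 'False'}}
    if kilo_or_later:
        # Kilo or later: rabbit and lock settings live in dedicated sections
        expected['oslo_messaging_rabbit'] = {
            'rabbit_userid': 'neutron',
            'rabbit_virtual_host': 'openstack',
            'rabbit_password': rabbitmq_relation['password'],
            'rabbit_host': rabbitmq_relation['hostname'],
        }
        expected['oslo_concurrency'] = {'lock_path': '/var/lock/nova'}
    else:
        # Juno or earlier: everything is folded into DEFAULT
        expected['DEFAULT'].update({
            'rabbit_userid': 'neutron',
            'rabbit_virtual_host': 'openstack',
            'rabbit_password': rabbitmq_relation['password'],
            'rabbit_host': rabbitmq_relation['hostname'],
            'lock_path': '/var/lock/nova',
        })
    return expected
```

Assigning sections directly reads better than building three throwaway dicts and merging them, which is the point of this hunk.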
1085 | 675 | def test_ovs_neutron_plugin_config(self): | 902 | def test_309_neutron_vpn_agent_config(self): |
1055 | 676 | """Verify the data in the ovs neutron plugin config file. The ovs | ||
1056 | 677 | plugin is not used by default since icehouse.""" | ||
1057 | 678 | if self._get_openstack_release() >= self.precise_icehouse: | ||
1058 | 679 | return | ||
1059 | 680 | |||
1060 | 681 | unit = self.neutron_gateway_sentry | ||
1061 | 682 | neutron_gateway_relation = unit.relation('shared-db', 'mysql:shared-db') | ||
1062 | 683 | |||
1063 | 684 | conf = '/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini' | ||
1064 | 685 | expected = { | ||
1065 | 686 | 'ovs': { | ||
1066 | 687 | 'local_ip': neutron_gateway_relation['private-address'], | ||
1067 | 688 | 'tenant_network_type': 'gre', | ||
1068 | 689 | 'enable_tunneling': 'True', | ||
1069 | 690 | 'tunnel_id_ranges': '1:1000' | ||
1070 | 691 | }, | ||
1071 | 692 | 'agent': { | ||
1072 | 693 | 'polling_interval': '10', | ||
1073 | 694 | 'root_helper': 'sudo /usr/bin/neutron-rootwrap ' | ||
1074 | 695 | '/etc/neutron/rootwrap.conf' | ||
1075 | 696 | } | ||
1076 | 697 | } | ||
1077 | 698 | |||
1078 | 699 | for section, pairs in expected.iteritems(): | ||
1079 | 700 | ret = u.validate_config_data(unit, conf, section, pairs) | ||
1080 | 701 | if ret: | ||
1081 | 702 | message = "ovs neutron plugin config error: {}".format(ret) | ||
1082 | 703 | amulet.raise_status(amulet.FAIL, msg=message) | ||
1083 | 704 | |||
1084 | 705 | def test_vpn_agent_config(self): | ||
1086 | 706 | """Verify the data in the vpn agent config file. This isn't available | 903 | """Verify the data in the vpn agent config file. This isn't available |
1087 | 707 | prior to havana.""" | 904 | prior to havana.""" |
1091 | 708 | if self._get_openstack_release() < self.precise_havana: | 905 | u.log.debug('Checking neutron gateway vpn agent config file data...') |
1089 | 709 | return | ||
1090 | 710 | |||
1092 | 711 | unit = self.neutron_gateway_sentry | 906 | unit = self.neutron_gateway_sentry |
1093 | 712 | conf = '/etc/neutron/vpn_agent.ini' | 907 | conf = '/etc/neutron/vpn_agent.ini' |
1094 | 713 | expected = { | 908 | expected = { |
1096 | 714 | 'vpnagent': { | 909 | 'ipsec': { |
1097 | 910 | 'ipsec_status_check_interval': '60' | ||
1098 | 911 | } | ||
1099 | 912 | } | ||
1100 | 913 | |||
1101 | 914 | if self._get_openstack_release() >= self.trusty_kilo: | ||
1102 | 915 | # Kilo or later | ||
1103 | 916 | expected['vpnagent'] = { | ||
1104 | 917 | 'vpn_device_driver': 'neutron_vpnaas.services.vpn.' | ||
1105 | 918 | 'device_drivers.ipsec.OpenSwanDriver' | ||
1106 | 919 | } | ||
1107 | 920 | else: | ||
1108 | 921 | # Juno or earlier | ||
1109 | 922 | expected['vpnagent'] = { | ||
1110 | 715 | 'vpn_device_driver': 'neutron.services.vpn.device_drivers.' | 923 | 'vpn_device_driver': 'neutron.services.vpn.device_drivers.' |
1111 | 716 | 'ipsec.OpenSwanDriver' | 924 | 'ipsec.OpenSwanDriver' |
1112 | 717 | }, | ||
1113 | 718 | 'ipsec': { | ||
1114 | 719 | 'ipsec_status_check_interval': '60' | ||
1115 | 720 | } | 925 | } |
1116 | 721 | } | ||
1117 | 722 | if self._get_openstack_release() >= self.trusty_kilo: | ||
1118 | 723 | expected['vpnagent']['vpn_device_driver'] = ('neutron_vpnaas.' + | ||
1119 | 724 | 'services.vpn.device_drivers.ipsec.OpenSwanDriver') | ||
1120 | 725 | 926 | ||
1121 | 726 | for section, pairs in expected.iteritems(): | 927 | for section, pairs in expected.iteritems(): |
1122 | 727 | ret = u.validate_config_data(unit, conf, section, pairs) | 928 | ret = u.validate_config_data(unit, conf, section, pairs) |
1123 | @@ -729,8 +930,9 @@ | |||
1124 | 729 | message = "vpn agent config error: {}".format(ret) | 930 | message = "vpn agent config error: {}".format(ret) |
1125 | 730 | amulet.raise_status(amulet.FAIL, msg=message) | 931 | amulet.raise_status(amulet.FAIL, msg=message) |
1126 | 731 | 932 | ||
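Editor's note: both config tests above end with the same `u.validate_config_data(unit, conf, section, pairs)` loop. A self-contained sketch of that helper's contract (return `None` on match, an error string otherwise), using stdlib `configparser` instead of a sentry unit; note the real charmhelpers version also accepts callable validators such as `u.not_null`, which this sketch omits:

```python
from configparser import ConfigParser

def validate_config_data(conf_text, section, pairs):
    """Check that every expected key/value is present in a section of
    an ini-style config; mirrors the charmhelpers helper's contract
    but compares literal string values only."""
    parser = ConfigParser()
    parser.read_string(conf_text)
    for key, expected in pairs.items():
        if not parser.has_option(section, key):
            return 'section [{}] is missing key {}'.format(section, key)
        actual = parser.get(section, key)
        if actual != expected:
            return '{}: expected {}, got {}'.format(key, expected, actual)
    return None
```

The test then raises `amulet.FAIL` whenever the returned value is non-None, as seen in the hunks above.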
1128 | 732 | def test_create_network(self): | 933 | def test_400_create_network(self): |
1129 | 733 | """Create a network, verify that it exists, and then delete it.""" | 934 | """Create a network, verify that it exists, and then delete it.""" |
1130 | 935 | u.log.debug('Creating neutron network...') | ||
1131 | 734 | self.neutron.format = 'json' | 936 | self.neutron.format = 'json' |
1132 | 735 | net_name = 'ext_net' | 937 | net_name = 'ext_net' |
1133 | 736 | 938 | ||
1134 | @@ -743,7 +945,7 @@ | |||
1135 | 743 | 945 | ||
1136 | 744 | # Create a network and verify that it exists | 946 | # Create a network and verify that it exists |
1137 | 745 | network = {'name': net_name} | 947 | network = {'name': net_name} |
1139 | 746 | self.neutron.create_network({'network':network}) | 948 | self.neutron.create_network({'network': network}) |
1140 | 747 | 949 | ||
1141 | 748 | networks = self.neutron.list_networks(name=net_name) | 950 | networks = self.neutron.list_networks(name=net_name) |
1142 | 749 | net_len = len(networks['networks']) | 951 | net_len = len(networks['networks']) |
1143 | @@ -751,9 +953,57 @@ | |||
1144 | 751 | msg = "Expected 1 network, found {}".format(net_len) | 953 | msg = "Expected 1 network, found {}".format(net_len) |
1145 | 752 | amulet.raise_status(amulet.FAIL, msg=msg) | 954 | amulet.raise_status(amulet.FAIL, msg=msg) |
1146 | 753 | 955 | ||
1147 | 956 | u.log.debug('Confirming new neutron network...') | ||
1148 | 754 | network = networks['networks'][0] | 957 | network = networks['networks'][0] |
1149 | 755 | if network['name'] != net_name: | 958 | if network['name'] != net_name: |
1150 | 756 | amulet.raise_status(amulet.FAIL, msg="network ext_net not found") | 959 | amulet.raise_status(amulet.FAIL, msg="network ext_net not found") |
1151 | 757 | 960 | ||
1152 | 758 | #Cleanup | 961 | #Cleanup |
1153 | 962 | u.log.debug('Deleting neutron network...') | ||
1154 | 759 | self.neutron.delete_network(network['id']) | 963 | self.neutron.delete_network(network['id']) |
1155 | 964 | |||
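Editor's note: `test_400_create_network` follows a create → verify → delete flow against the neutron API. A runnable sketch of that flow using a fake client (the `FakeNeutronClient` class is a stand-in written for illustration, not the real `neutronclient`; only the method names and payload shapes match the test above):

```python
class FakeNeutronClient:
    """Minimal stand-in for the neutron client used by the test."""
    def __init__(self):
        self._nets = {}
        self._next_id = 0

    def create_network(self, body):
        self._next_id += 1
        net = dict(body['network'], id=str(self._next_id))
        self._nets[net['id']] = net
        return {'network': net}

    def list_networks(self, name=None):
        nets = [n for n in self._nets.values()
                if name is None or n['name'] == name]
        return {'networks': nets}

    def delete_network(self, net_id):
        del self._nets[net_id]

def create_verify_delete(client, net_name='ext_net'):
    """Same three steps as test_400: create, confirm exactly one match
    by name, then clean up."""
    client.create_network({'network': {'name': net_name}})
    networks = client.list_networks(name=net_name)
    net_len = len(networks['networks'])
    assert net_len == 1, 'Expected 1 network, found {}'.format(net_len)
    network = networks['networks'][0]
    assert network['name'] == net_name, 'network ext_net not found'
    client.delete_network(network['id'])
    return client.list_networks(name=net_name)['networks']
```

Cleaning up the network at the end keeps the test re-runnable against the same deployment.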
1156 | 965 | def test_900_restart_on_config_change(self): | ||
1157 | 966 | """Verify that the specified services are restarted when the | ||
1158 | 967 | config is changed.""" | ||
1159 | 968 | |||
1160 | 969 | sentry = self.neutron_gateway_sentry | ||
1161 | 970 | juju_service = 'neutron-gateway' | ||
1162 | 971 | |||
1163 | 972 | # Expected default and alternate values | ||
1164 | 973 | set_default = {'debug': 'False'} | ||
1165 | 974 | set_alternate = {'debug': 'True'} | ||
1166 | 975 | |||
1167 | 976 | # Services which are expected to restart upon config change, | ||
1168 | 977 | # and corresponding config files affected by the change | ||
1169 | 978 | conf_file = '/etc/neutron/neutron.conf' | ||
1170 | 979 | services = { | ||
1171 | 980 | 'neutron-dhcp-agent': conf_file, | ||
1172 | 981 | 'neutron-lbaas-agent': conf_file, | ||
1173 | 982 | 'neutron-metadata-agent': conf_file, | ||
1174 | 983 | 'neutron-metering-agent': conf_file, | ||
1175 | 984 | 'neutron-openvswitch-agent': conf_file, | ||
1176 | 985 | } | ||
1177 | 986 | |||
1178 | 987 | if self._get_openstack_release() <= self.trusty_juno: | ||
1179 | 988 | services.update({'neutron-vpn-agent': conf_file}) | ||
1180 | 989 | |||
1181 | 990 | # Make config change, check for svc restart, conf file mod time change | ||
1182 | 991 | u.log.debug('Making config change on {}...'.format(juju_service)) | ||
1183 | 992 | mtime = u.get_sentry_time(sentry) | ||
1184 | 993 | self.d.configure(juju_service, set_alternate) | ||
1185 | 994 | |||
1186 | 995 | # sleep_time = 90 | ||
1187 | 996 | for s, conf_file in services.iteritems(): | ||
1188 | 997 | u.log.debug("Checking that service restarted: {}".format(s)) | ||
1189 | 998 | if not u.validate_service_config_changed(sentry, mtime, s, | ||
1190 | 999 | conf_file): | ||
1191 | 1000 | # conf_file, | ||
1192 | 1001 | # sleep_time=sleep_time): | ||
1193 | 1002 | self.d.configure(juju_service, set_default) | ||
1194 | 1003 | msg = "service {} didn't restart after config change".format(s) | ||
1195 | 1004 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1196 | 1005 | |||
1197 | 1006 | # Only do initial sleep on first service check | ||
1198 | 1007 | # sleep_time = 0 | ||
1199 | 1008 | |||
1200 | 1009 | self.d.configure(juju_service, set_default) | ||
1201 | 760 | 1010 | ||
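Editor's note: the new `test_900_restart_on_config_change` drives a config change, checks each service, and restores the default config both on failure and at the end. A sketch of that driver with the amulet objects replaced by injectable stand-ins (`FakeDeployer` and the `validator` callable are illustrative, not amulet API); the `try/finally` here restores defaults in both outcomes, matching the test's behavior:

```python
class FakeDeployer:
    """Records configure() calls; stands in for amulet's self.d."""
    def __init__(self):
        self.calls = []

    def configure(self, service, config):
        self.calls.append((service, dict(config)))

def run_restart_test(deployer, validator, services, juju_service,
                     set_default, set_alternate, mtime):
    """Apply the alternate config, confirm every service restarted and
    its conf file changed after mtime, then restore defaults.
    Returns an error message, or None if all services pass."""
    deployer.configure(juju_service, set_alternate)
    try:
        for svc, conf_file in sorted(services.items()):
            if not validator(mtime, svc, conf_file):
                return ("service {} didn't restart after config "
                        "change".format(svc))
        return None
    finally:
        # Restore defaults whether the checks passed or not
        deployer.configure(juju_service, set_default)
```

In the real test the `validator` role is played by `u.validate_service_config_changed`, and `mtime` comes from `u.get_sentry_time(sentry)` taken before the change.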
1202 | === modified file 'tests/charmhelpers/contrib/amulet/deployment.py' | |||
1203 | --- tests/charmhelpers/contrib/amulet/deployment.py 2015-01-23 11:08:26 +0000 | |||
1204 | +++ tests/charmhelpers/contrib/amulet/deployment.py 2015-09-21 20:38:30 +0000 | |||
1205 | @@ -51,7 +51,8 @@ | |||
1206 | 51 | if 'units' not in this_service: | 51 | if 'units' not in this_service: |
1207 | 52 | this_service['units'] = 1 | 52 | this_service['units'] = 1 |
1208 | 53 | 53 | ||
1210 | 54 | self.d.add(this_service['name'], units=this_service['units']) | 54 | self.d.add(this_service['name'], units=this_service['units'], |
1211 | 55 | constraints=this_service.get('constraints')) | ||
1212 | 55 | 56 | ||
1213 | 56 | for svc in other_services: | 57 | for svc in other_services: |
1214 | 57 | if 'location' in svc: | 58 | if 'location' in svc: |
1215 | @@ -64,7 +65,8 @@ | |||
1216 | 64 | if 'units' not in svc: | 65 | if 'units' not in svc: |
1217 | 65 | svc['units'] = 1 | 66 | svc['units'] = 1 |
1218 | 66 | 67 | ||
1220 | 67 | self.d.add(svc['name'], charm=branch_location, units=svc['units']) | 68 | self.d.add(svc['name'], charm=branch_location, units=svc['units'], |
1221 | 69 | constraints=svc.get('constraints')) | ||
1222 | 68 | 70 | ||
1223 | 69 | def _add_relations(self, relations): | 71 | def _add_relations(self, relations): |
1224 | 70 | """Add all of the relations for the services.""" | 72 | """Add all of the relations for the services.""" |
1225 | 71 | 73 | ||
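Editor's note: the two `deployment.py` hunks above thread an optional `constraints` key from each service dict into `self.d.add()`. Using `dict.get()` means services that don't declare constraints pass `None` through unchanged. A tiny sketch of the kwargs shape:

```python
def add_service_args(svc):
    """Kwargs for deployment.add(): units defaults to 1, constraints
    to None when the service dict does not declare them."""
    return {
        'units': svc.get('units', 1),
        'constraints': svc.get('constraints'),
    }
```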
1226 | === modified file 'tests/charmhelpers/contrib/amulet/utils.py' | |||
1227 | --- tests/charmhelpers/contrib/amulet/utils.py 2015-08-18 21:16:23 +0000 | |||
1228 | +++ tests/charmhelpers/contrib/amulet/utils.py 2015-09-21 20:38:30 +0000 | |||
1229 | @@ -19,9 +19,11 @@ | |||
1230 | 19 | import logging | 19 | import logging |
1231 | 20 | import os | 20 | import os |
1232 | 21 | import re | 21 | import re |
1233 | 22 | import socket | ||
1234 | 22 | import subprocess | 23 | import subprocess |
1235 | 23 | import sys | 24 | import sys |
1236 | 24 | import time | 25 | import time |
1237 | 26 | import uuid | ||
1238 | 25 | 27 | ||
1239 | 26 | import amulet | 28 | import amulet |
1240 | 27 | import distro_info | 29 | import distro_info |
1241 | @@ -114,7 +116,7 @@ | |||
1242 | 114 | # /!\ DEPRECATION WARNING (beisner): | 116 | # /!\ DEPRECATION WARNING (beisner): |
1243 | 115 | # New and existing tests should be rewritten to use | 117 | # New and existing tests should be rewritten to use |
1244 | 116 | # validate_services_by_name() as it is aware of init systems. | 118 | # validate_services_by_name() as it is aware of init systems. |
1246 | 117 | self.log.warn('/!\\ DEPRECATION WARNING: use ' | 119 | self.log.warn('DEPRECATION WARNING: use ' |
1247 | 118 | 'validate_services_by_name instead of validate_services ' | 120 | 'validate_services_by_name instead of validate_services ' |
1248 | 119 | 'due to init system differences.') | 121 | 'due to init system differences.') |
1249 | 120 | 122 | ||
1250 | @@ -269,33 +271,52 @@ | |||
1251 | 269 | """Get last modification time of directory.""" | 271 | """Get last modification time of directory.""" |
1252 | 270 | return sentry_unit.directory_stat(directory)['mtime'] | 272 | return sentry_unit.directory_stat(directory)['mtime'] |
1253 | 271 | 273 | ||
1272 | 272 | def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False): | 274 | def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None): |
1273 | 273 | """Get process' start time. | 275 | """Get start time of a process based on the last modification time |
1274 | 274 | 276 | of the /proc/pid directory. | |
1275 | 275 | Determine start time of the process based on the last modification | 277 | |
1276 | 276 | time of the /proc/pid directory. If pgrep_full is True, the process | 278 | :sentry_unit: The sentry unit to check for the service on |
1277 | 277 | name is matched against the full command line. | 279 | :service: service name to look for in process table |
1278 | 278 | """ | 280 | :pgrep_full: [Deprecated] Use full command line search mode with pgrep |
1279 | 279 | if pgrep_full: | 281 | :returns: epoch time of service process start |
1280 | 280 | cmd = 'pgrep -o -f {}'.format(service) | 282 | :param commands: list of bash commands |
1281 | 281 | else: | 283 | :param sentry_units: list of sentry unit pointers |
1282 | 282 | cmd = 'pgrep -o {}'.format(service) | 284 | :returns: None if successful; Failure message otherwise |
1283 | 283 | cmd = cmd + ' | grep -v pgrep || exit 0' | 285 | """ |
1284 | 284 | cmd_out = sentry_unit.run(cmd) | 286 | if pgrep_full is not None: |
1285 | 285 | self.log.debug('CMDout: ' + str(cmd_out)) | 287 | # /!\ DEPRECATION WARNING (beisner): |
1286 | 286 | if cmd_out[0]: | 288 | # No longer implemented, as pidof is now used instead of pgrep. |
1287 | 287 | self.log.debug('Pid for %s %s' % (service, str(cmd_out[0]))) | 289 | # https://bugs.launchpad.net/charm-helpers/+bug/1474030 |
1288 | 288 | proc_dir = '/proc/{}'.format(cmd_out[0].strip()) | 290 | self.log.warn('DEPRECATION WARNING: pgrep_full bool is no ' |
1289 | 289 | return self._get_dir_mtime(sentry_unit, proc_dir) | 291 | 'longer implemented re: lp 1474030.') |
1290 | 292 | |||
1291 | 293 | pid_list = self.get_process_id_list(sentry_unit, service) | ||
1292 | 294 | pid = pid_list[0] | ||
1293 | 295 | proc_dir = '/proc/{}'.format(pid) | ||
1294 | 296 | self.log.debug('Pid for {} on {}: {}'.format( | ||
1295 | 297 | service, sentry_unit.info['unit_name'], pid)) | ||
1296 | 298 | |||
1297 | 299 | return self._get_dir_mtime(sentry_unit, proc_dir) | ||
1298 | 290 | 300 | ||
1299 | 291 | def service_restarted(self, sentry_unit, service, filename, | 301 | def service_restarted(self, sentry_unit, service, filename, |
1301 | 292 | pgrep_full=False, sleep_time=20): | 302 | pgrep_full=None, sleep_time=20): |
1302 | 293 | """Check if service was restarted. | 303 | """Check if service was restarted. |
1303 | 294 | 304 | ||
1304 | 295 | Compare a service's start time vs a file's last modification time | 305 | Compare a service's start time vs a file's last modification time |
1305 | 296 | (such as a config file for that service) to determine if the service | 306 | (such as a config file for that service) to determine if the service |
1306 | 297 | has been restarted. | 307 | has been restarted. |
1307 | 298 | """ | 308 | """ |
1308 | 309 | # /!\ DEPRECATION WARNING (beisner): | ||
1309 | 310 | # This method is prone to races in that no before-time is known. | ||
1310 | 311 | # Use validate_service_config_changed instead. | ||
1311 | 312 | |||
1312 | 313 | # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now | ||
1313 | 314 | # used instead of pgrep. pgrep_full is still passed through to ensure | ||
1314 | 315 | # deprecation WARNS. lp1474030 | ||
1315 | 316 | self.log.warn('DEPRECATION WARNING: use ' | ||
1316 | 317 | 'validate_service_config_changed instead of ' | ||
1317 | 318 | 'service_restarted due to known races.') | ||
1318 | 319 | |||
1319 | 299 | time.sleep(sleep_time) | 320 | time.sleep(sleep_time) |
1320 | 300 | if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >= | 321 | if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >= |
1321 | 301 | self._get_file_mtime(sentry_unit, filename)): | 322 | self._get_file_mtime(sentry_unit, filename)): |
1322 | @@ -304,78 +325,122 @@ | |||
1323 | 304 | return False | 325 | return False |
1324 | 305 | 326 | ||
1325 | 306 | def service_restarted_since(self, sentry_unit, mtime, service, | 327 | def service_restarted_since(self, sentry_unit, mtime, service, |
1328 | 307 | pgrep_full=False, sleep_time=20, | 328 | pgrep_full=None, sleep_time=20, |
1329 | 308 | retry_count=2): | 329 | retry_count=30, retry_sleep_time=10): |
1330 | 309 | """Check if service was been started after a given time. | 330 | """Check if service was been started after a given time. |
1331 | 310 | 331 | ||
1332 | 311 | Args: | 332 | Args: |
1333 | 312 | sentry_unit (sentry): The sentry unit to check for the service on | 333 | sentry_unit (sentry): The sentry unit to check for the service on |
1334 | 313 | mtime (float): The epoch time to check against | 334 | mtime (float): The epoch time to check against |
1335 | 314 | service (string): service name to look for in process table | 335 | service (string): service name to look for in process table |
1339 | 315 | pgrep_full (boolean): Use full command line search mode with pgrep | 336 | pgrep_full: [Deprecated] Use full command line search mode with pgrep |
1340 | 316 | sleep_time (int): Seconds to sleep before looking for process | 337 | sleep_time (int): Initial sleep time (s) before looking for file |
1341 | 317 | retry_count (int): If service is not found, how many times to retry | 338 | retry_sleep_time (int): Time (s) to sleep between retries |
1342 | 339 | retry_count (int): If file is not found, how many times to retry | ||
1343 | 318 | 340 | ||
1344 | 319 | Returns: | 341 | Returns: |
1345 | 320 | bool: True if service found and its start time it newer than mtime, | 342 | bool: True if service found and its start time it newer than mtime, |
1346 | 321 | False if service is older than mtime or if service was | 343 | False if service is older than mtime or if service was |
1347 | 322 | not found. | 344 | not found. |
1348 | 323 | """ | 345 | """ |
1350 | 324 | self.log.debug('Checking %s restarted since %s' % (service, mtime)) | 346 | # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now |
1351 | 347 | # used instead of pgrep. pgrep_full is still passed through to ensure | ||
1352 | 348 | # deprecation WARNS. lp1474030 | ||
1353 | 349 | |||
1354 | 350 | unit_name = sentry_unit.info['unit_name'] | ||
1355 | 351 | self.log.debug('Checking that %s service restarted since %s on ' | ||
1356 | 352 | '%s' % (service, mtime, unit_name)) | ||
1357 | 325 | time.sleep(sleep_time) | 353 | time.sleep(sleep_time) |
1367 | 326 | proc_start_time = self._get_proc_start_time(sentry_unit, service, | 354 | proc_start_time = None |
1368 | 327 | pgrep_full) | 355 | tries = 0 |
1369 | 328 | while retry_count > 0 and not proc_start_time: | 356 | while tries <= retry_count and not proc_start_time: |
1370 | 329 | self.log.debug('No pid file found for service %s, will retry %i ' | 357 | try: |
1371 | 330 | 'more times' % (service, retry_count)) | 358 | proc_start_time = self._get_proc_start_time(sentry_unit, |
1372 | 331 | time.sleep(30) | 359 | service, |
1373 | 332 | proc_start_time = self._get_proc_start_time(sentry_unit, service, | 360 | pgrep_full) |
1374 | 333 | pgrep_full) | 361 | self.log.debug('Attempt {} to get {} proc start time on {} ' |
1375 | 334 | retry_count = retry_count - 1 | 362 | 'OK'.format(tries, service, unit_name)) |
1376 | 363 | except IOError as e: | ||
1377 | 364 | # NOTE(beisner) - race avoidance, proc may not exist yet. | ||
1378 | 365 | # https://bugs.launchpad.net/charm-helpers/+bug/1474030 | ||
1379 | 366 | self.log.debug('Attempt {} to get {} proc start time on {} ' | ||
1380 | 367 | 'failed\n{}'.format(tries, service, | ||
1381 | 368 | unit_name, e)) | ||
1382 | 369 | time.sleep(retry_sleep_time) | ||
1383 | 370 | tries += 1 | ||
1384 | 335 | 371 | ||
1385 | 336 | if not proc_start_time: | 372 | if not proc_start_time: |
1386 | 337 | self.log.warn('No proc start time found, assuming service did ' | 373 | self.log.warn('No proc start time found, assuming service did ' |
1387 | 338 | 'not start') | 374 | 'not start') |
1388 | 339 | return False | 375 | return False |
1389 | 340 | if proc_start_time >= mtime: | 376 | if proc_start_time >= mtime: |
1392 | 341 | self.log.debug('proc start time is newer than provided mtime' | 377 | self.log.debug('Proc start time is newer than provided mtime' |
1393 | 342 | '(%s >= %s)' % (proc_start_time, mtime)) | 378 | '(%s >= %s) on %s (OK)' % (proc_start_time, |
1394 | 379 | mtime, unit_name)) | ||
1395 | 343 | return True | 380 | return True |
1396 | 344 | else: | 381 | else: |
1400 | 345 | self.log.warn('proc start time (%s) is older than provided mtime ' | 382 | self.log.warn('Proc start time (%s) is older than provided mtime ' |
1401 | 346 | '(%s), service did not restart' % (proc_start_time, | 383 | '(%s) on %s, service did not ' |
1402 | 347 | mtime)) | 384 | 'restart' % (proc_start_time, mtime, unit_name)) |
1403 | 348 | return False | 385 | return False |
1404 | 349 | 386 | ||
1405 | 350 | def config_updated_since(self, sentry_unit, filename, mtime, | 387 | def config_updated_since(self, sentry_unit, filename, mtime, |
1407 | 351 | sleep_time=20): | 388 | sleep_time=20, retry_count=30, |
1408 | 389 | retry_sleep_time=10): | ||
1409 | 352 | """Check if file was modified after a given time. | 390 | """Check if file was modified after a given time. |
1410 | 353 | 391 | ||
1411 | 354 | Args: | 392 | Args: |
1412 | 355 | sentry_unit (sentry): The sentry unit to check the file mtime on | 393 | sentry_unit (sentry): The sentry unit to check the file mtime on |
1413 | 356 | filename (string): The file to check mtime of | 394 | filename (string): The file to check mtime of |
1414 | 357 | mtime (float): The epoch time to check against | 395 | mtime (float): The epoch time to check against |
1416 | 358 | sleep_time (int): Seconds to sleep before looking for process | 396 | sleep_time (int): Initial sleep time (s) before looking for file |
1417 | 397 | retry_sleep_time (int): Time (s) to sleep between retries | ||
1418 | 398 | retry_count (int): If file is not found, how many times to retry | ||
1419 | 359 | 399 | ||
1420 | 360 | Returns: | 400 | Returns: |
1421 | 361 | bool: True if file was modified more recently than mtime, False if | 401 | bool: True if file was modified more recently than mtime, False if |
1423 | 362 | file was modified before mtime, | 402 | file was modified before mtime, or if file not found. |
1424 | 363 | """ | 403 | """ |
1426 | 364 | self.log.debug('Checking %s updated since %s' % (filename, mtime)) | 404 | unit_name = sentry_unit.info['unit_name'] |
1427 | 405 | self.log.debug('Checking that %s updated since %s on ' | ||
1428 | 406 | '%s' % (filename, mtime, unit_name)) | ||
1429 | 365 | time.sleep(sleep_time) | 407 | time.sleep(sleep_time) |
1431 | 366 | file_mtime = self._get_file_mtime(sentry_unit, filename) | 408 | file_mtime = None |
1432 | 409 | tries = 0 | ||
1433 | 410 | while tries <= retry_count and not file_mtime: | ||
1434 | 411 | try: | ||
1435 | 412 | file_mtime = self._get_file_mtime(sentry_unit, filename) | ||
1436 | 413 | self.log.debug('Attempt {} to get {} file mtime on {} ' | ||
1437 | 414 | 'OK'.format(tries, filename, unit_name)) | ||
1438 | 415 | except IOError as e: | ||
1439 | 416 | # NOTE(beisner) - race avoidance, file may not exist yet. | ||
1440 | 417 | # https://bugs.launchpad.net/charm-helpers/+bug/1474030 | ||
1441 | 418 | self.log.debug('Attempt {} to get {} file mtime on {} ' | ||
1442 | 419 | 'failed\n{}'.format(tries, filename, | ||
1443 | 420 | unit_name, e)) | ||
1444 | 421 | time.sleep(retry_sleep_time) | ||
1445 | 422 | tries += 1 | ||
1446 | 423 | |||
1447 | 424 | if not file_mtime: | ||
1448 | 425 | self.log.warn('Could not determine file mtime, assuming ' | ||
1449 | 426 | 'file does not exist') | ||
1450 | 427 | return False | ||
1451 | 428 | |||
1452 | 367 | if file_mtime >= mtime: | 429 | if file_mtime >= mtime: |
1453 | 368 | self.log.debug('File mtime is newer than provided mtime ' | 430 | self.log.debug('File mtime is newer than provided mtime ' |
1455 | 369 | '(%s >= %s)' % (file_mtime, mtime)) | 431 | '(%s >= %s) on %s (OK)' % (file_mtime, |
1456 | 432 | mtime, unit_name)) | ||
1457 | 370 | return True | 433 | return True |
1458 | 371 | else: | 434 | else: |
1461 | 372 | self.log.warn('File mtime %s is older than provided mtime %s' | 435 | self.log.warn('File mtime is older than provided mtime' |
1462 | 373 | % (file_mtime, mtime)) | 436 | '(%s < on %s) on %s' % (file_mtime, |
1463 | 437 | mtime, unit_name)) | ||
1464 | 374 | return False | 438 | return False |
1465 | 375 | 439 | ||
1466 | 376 | def validate_service_config_changed(self, sentry_unit, mtime, service, | 440 | def validate_service_config_changed(self, sentry_unit, mtime, service, |
1469 | 377 | filename, pgrep_full=False, | 441 | filename, pgrep_full=None, |
1470 | 378 | sleep_time=20, retry_count=2): | 442 | sleep_time=20, retry_count=30, |
1471 | 443 | retry_sleep_time=10): | ||
1472 | 379 | """Check service and file were updated after mtime | 444 | """Check service and file were updated after mtime |
1473 | 380 | 445 | ||
1474 | 381 | Args: | 446 | Args: |
1475 | @@ -383,9 +448,10 @@ | |||
1476 | 383 | mtime (float): The epoch time to check against | 448 | mtime (float): The epoch time to check against |
1477 | 384 | service (string): service name to look for in process table | 449 | service (string): service name to look for in process table |
1478 | 385 | filename (string): The file to check mtime of | 450 | filename (string): The file to check mtime of |
1481 | 386 | pgrep_full (boolean): Use full command line search mode with pgrep | 451 | pgrep_full: [Deprecated] Use full command line search mode with pgrep |
1482 | 387 | sleep_time (int): Seconds to sleep before looking for process | 452 | sleep_time (int): Initial sleep in seconds to pass to test helpers |
1483 | 388 | retry_count (int): If service is not found, how many times to retry | 453 | retry_count (int): If service is not found, how many times to retry |
1484 | 454 | retry_sleep_time (int): Time in seconds to wait between retries | ||
1485 | 389 | 455 | ||
1486 | 390 | Typical Usage: | 456 | Typical Usage: |
1487 | 391 | u = OpenStackAmuletUtils(ERROR) | 457 | u = OpenStackAmuletUtils(ERROR) |
1488 | @@ -402,15 +468,27 @@ | |||
1489 | 402 | mtime, False if service is older than mtime or if service was | 468 | mtime, False if service is older than mtime or if service was |
1490 | 403 | not found or if filename was modified before mtime. | 469 | not found or if filename was modified before mtime. |
1491 | 404 | """ | 470 | """ |
1501 | 405 | self.log.debug('Checking %s restarted since %s' % (service, mtime)) | 471 | |
1502 | 406 | time.sleep(sleep_time) | 472 | # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now |
1503 | 407 | service_restart = self.service_restarted_since(sentry_unit, mtime, | 473 | # used instead of pgrep. pgrep_full is still passed through to ensure |
1504 | 408 | service, | 474 | # deprecation WARNS. lp1474030 |
1505 | 409 | pgrep_full=pgrep_full, | 475 | |
1506 | 410 | sleep_time=0, | 476 | service_restart = self.service_restarted_since( |
1507 | 411 | retry_count=retry_count) | 477 | sentry_unit, mtime, |
1508 | 412 | config_update = self.config_updated_since(sentry_unit, filename, mtime, | 478 | service, |
1509 | 413 | sleep_time=0) | 479 | pgrep_full=pgrep_full, |
1510 | 480 | sleep_time=sleep_time, | ||
1511 | 481 | retry_count=retry_count, | ||
1512 | 482 | retry_sleep_time=retry_sleep_time) | ||
1513 | 483 | |||
1514 | 484 | config_update = self.config_updated_since( | ||
1515 | 485 | sentry_unit, | ||
1516 | 486 | filename, | ||
1517 | 487 | mtime, | ||
1518 | 488 | sleep_time=sleep_time, | ||
1519 | 489 | retry_count=retry_count, | ||
1520 | 490 | retry_sleep_time=retry_sleep_time) | ||
1521 | 491 | |||
1522 | 414 | return service_restart and config_update | 492 | return service_restart and config_update |
1523 | 415 | 493 | ||
1524 | 416 | def get_sentry_time(self, sentry_unit): | 494 | def get_sentry_time(self, sentry_unit): |
1525 | @@ -428,7 +506,6 @@ | |||
1526 | 428 | """Return a list of all Ubuntu releases in order of release.""" | 506 | """Return a list of all Ubuntu releases in order of release.""" |
1527 | 429 | _d = distro_info.UbuntuDistroInfo() | 507 | _d = distro_info.UbuntuDistroInfo() |
1528 | 430 | _release_list = _d.all | 508 | _release_list = _d.all |
1529 | 431 | self.log.debug('Ubuntu release list: {}'.format(_release_list)) | ||
1530 | 432 | return _release_list | 509 | return _release_list |
1531 | 433 | 510 | ||
1532 | 434 | def file_to_url(self, file_rel_path): | 511 | def file_to_url(self, file_rel_path): |
1533 | @@ -568,6 +645,142 @@ | |||
1534 | 568 | 645 | ||
1535 | 569 | return None | 646 | return None |
1536 | 570 | 647 | ||
1537 | 648 | def validate_sectionless_conf(self, file_contents, expected): | ||
1538 | 649 | """A crude conf parser. Useful to inspect configuration files which | ||
1539 | 650 | do not have section headers (as would be necessary in order to use | ||
1540 | 651 | the configparser). Such as openstack-dashboard or rabbitmq confs.""" | ||
1541 | 652 | for line in file_contents.split('\n'): | ||
1542 | 653 | if '=' in line: | ||
1543 | 654 | args = line.split('=') | ||
1544 | 655 | if len(args) <= 1: | ||
1545 | 656 | continue | ||
1546 | 657 | key = args[0].strip() | ||
1547 | 658 | value = args[1].strip() | ||
1548 | 659 | if key in expected.keys(): | ||
1549 | 660 | if expected[key] != value: | ||
1550 | 661 | msg = ('Config mismatch. Expected, actual: {}, ' | ||
1551 | 662 | '{}'.format(expected[key], value)) | ||
1552 | 663 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1553 | 664 | |||
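The crude parser added above can be exercised on its own; a minimal standalone sketch of the same key=value logic (the sample contents and keys here are illustrative, not taken from any charm):

```python
# Standalone sketch of the key=value parsing used by
# validate_sectionless_conf above; sample data is illustrative.
def parse_sectionless_conf(file_contents):
    """Return a dict of key/value pairs from a conf file with no
    section headers (e.g. openstack-dashboard or rabbitmq confs)."""
    parsed = {}
    for line in file_contents.split('\n'):
        if '=' in line:
            args = line.split('=')
            if len(args) <= 1:
                continue
            parsed[args[0].strip()] = args[1].strip()
    return parsed

sample = ('listeners.tcp.default = 5672\n'
          '# a comment line without an assignment\n'
          'loopback_users = none\n')
conf = parse_sectionless_conf(sample)
assert conf == {'listeners.tcp.default': '5672', 'loopback_users': 'none'}
```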
1554 | 665 | def get_unit_hostnames(self, units): | ||
1555 | 666 | """Return a dict of juju unit names to hostnames.""" | ||
1556 | 667 | host_names = {} | ||
1557 | 668 | for unit in units: | ||
1558 | 669 | host_names[unit.info['unit_name']] = \ | ||
1559 | 670 | str(unit.file_contents('/etc/hostname').strip()) | ||
1560 | 671 | self.log.debug('Unit host names: {}'.format(host_names)) | ||
1561 | 672 | return host_names | ||
1562 | 673 | |||
1563 | 674 | def run_cmd_unit(self, sentry_unit, cmd): | ||
1564 | 675 | """Run a command on a unit, return the output and exit code.""" | ||
1565 | 676 | output, code = sentry_unit.run(cmd) | ||
1566 | 677 | if code == 0: | ||
1567 | 678 | self.log.debug('{} `{}` command returned {} ' | ||
1568 | 679 | '(OK)'.format(sentry_unit.info['unit_name'], | ||
1569 | 680 | cmd, code)) | ||
1570 | 681 | else: | ||
1571 | 682 | msg = ('{} `{}` command returned {} ' | ||
1572 | 683 | '{}'.format(sentry_unit.info['unit_name'], | ||
1573 | 684 | cmd, code, output)) | ||
1574 | 685 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1575 | 686 | return str(output), code | ||
1576 | 687 | |||
1577 | 688 | def file_exists_on_unit(self, sentry_unit, file_name): | ||
1578 | 689 | """Check if a file exists on a unit.""" | ||
1579 | 690 | try: | ||
1580 | 691 | sentry_unit.file_stat(file_name) | ||
1581 | 692 | return True | ||
1582 | 693 | except IOError: | ||
1583 | 694 | return False | ||
1584 | 695 | except Exception as e: | ||
1585 | 696 | msg = 'Error checking file {}: {}'.format(file_name, e) | ||
1586 | 697 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1587 | 698 | |||
1588 | 699 | def file_contents_safe(self, sentry_unit, file_name, | ||
1589 | 700 | max_wait=60, fatal=False): | ||
1590 | 701 | """Get file contents from a sentry unit. Wrap amulet file_contents | ||
1591 | 702 | with retry logic to address races where a file checks as existing, | ||
1592 | 703 | but no longer exists by the time file_contents is called. | ||
1593 | 704 | Return None if file not found. Optionally raise if fatal is True.""" | ||
1594 | 705 | unit_name = sentry_unit.info['unit_name'] | ||
1595 | 706 | file_contents = False | ||
1596 | 707 | tries = 0 | ||
1597 | 708 | while not file_contents and tries < (max_wait / 4): | ||
1598 | 709 | try: | ||
1599 | 710 | file_contents = sentry_unit.file_contents(file_name) | ||
1600 | 711 | except IOError: | ||
1601 | 712 | self.log.debug('Attempt {} to open file {} from {} ' | ||
1602 | 713 | 'failed'.format(tries, file_name, | ||
1603 | 714 | unit_name)) | ||
1604 | 715 | time.sleep(4) | ||
1605 | 716 | tries += 1 | ||
1606 | 717 | |||
1607 | 718 | if file_contents: | ||
1608 | 719 | return file_contents | ||
1609 | 720 | elif not fatal: | ||
1610 | 721 | return None | ||
1611 | 722 | elif fatal: | ||
1612 | 723 | msg = 'Failed to get file contents from unit.' | ||
1613 | 724 | amulet.raise_status(amulet.FAIL, msg) | ||
1614 | 725 | |||
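file_contents_safe, and the configure_rmq_ssl_* helpers later in this sync, share the same poll-and-retry shape (try, sleep 4s, give up after max_wait). A generic sketch of that pattern; `wait_until` is a hypothetical name, not part of charm-helpers:

```python
import time

def wait_until(predicate, max_wait=60, interval=4):
    """Poll predicate() until it returns a truthy value, or until
    roughly max_wait seconds of sleeping have elapsed."""
    tries = 0
    result = predicate()
    while not result and tries < (max_wait / interval):
        time.sleep(interval)
        tries += 1
        result = predicate()
    return result

# Demo: a check that only succeeds on its third call.
attempts = {'n': 0}
def flaky():
    attempts['n'] += 1
    return attempts['n'] >= 3

assert wait_until(flaky, max_wait=1, interval=0.01) is True
assert attempts['n'] == 3
```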
1615 | 726 | def port_knock_tcp(self, host="localhost", port=22, timeout=15): | ||
1616 | 727 | """Open a TCP socket to check for a listening service on a host. | ||
1617 | 728 | |||
1618 | 729 | :param host: host name or IP address, default to localhost | ||
1619 | 730 | :param port: TCP port number, default to 22 | ||
1620 | 731 | :param timeout: Connect timeout, default to 15 seconds | ||
1621 | 732 | :returns: True if successful, False if connect failed | ||
1622 | 733 | """ | ||
1623 | 734 | |||
1624 | 735 | # Resolve host name if possible | ||
1625 | 736 | try: | ||
1626 | 737 | connect_host = socket.gethostbyname(host) | ||
1627 | 738 | host_human = "{} ({})".format(connect_host, host) | ||
1628 | 739 | except socket.error as e: | ||
1629 | 740 | self.log.warn('Unable to resolve address: ' | ||
1630 | 741 | '{} ({}) Trying anyway!'.format(host, e)) | ||
1631 | 742 | connect_host = host | ||
1632 | 743 | host_human = connect_host | ||
1633 | 744 | |||
1634 | 745 | # Attempt socket connection | ||
1635 | 746 | try: | ||
1636 | 747 | knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) | ||
1637 | 748 | knock.settimeout(timeout) | ||
1638 | 749 | knock.connect((connect_host, port)) | ||
1639 | 750 | knock.close() | ||
1640 | 751 | self.log.debug('Socket connect OK for host ' | ||
1641 | 752 | '{} on port {}.'.format(host_human, port)) | ||
1642 | 753 | return True | ||
1643 | 754 | except socket.error as e: | ||
1644 | 755 | self.log.debug('Socket connect FAIL for' | ||
1645 | 756 | ' {} port {} ({})'.format(host_human, port, e)) | ||
1646 | 757 | return False | ||
1647 | 758 | |||
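Outside of the AmuletUtils class, the socket check above reduces to the following; the host and port in the demo are illustrative (port 1 on loopback is assumed to be closed):

```python
import socket

def port_knock_tcp(host='localhost', port=22, timeout=15):
    """Return True if a TCP connect to host:port succeeds within
    timeout seconds, False otherwise (mirrors the helper above)."""
    try:
        knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        knock.settimeout(timeout)
        knock.connect((host, port))
        knock.close()
        return True
    except socket.error:
        return False

# Port 1/tcp is almost never bound, so this should report False quickly.
print(port_knock_tcp('127.0.0.1', port=1, timeout=2))
```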
1648 | 759 | def port_knock_units(self, sentry_units, port=22, | ||
1649 | 760 | timeout=15, expect_success=True): | ||
1650 | 761 | """Open a TCP socket to check for a listening service on each | ||
1651 | 762 | listed juju unit. | ||
1652 | 763 | |||
1653 | 764 | :param sentry_units: list of sentry unit pointers | ||
1654 | 765 | :param port: TCP port number, default to 22 | ||
1655 | 766 | :param timeout: Connect timeout, default to 15 seconds | ||
1656 | 767 | :param expect_success: True by default, set False to invert logic | ||
1657 | 768 | :returns: None if successful, Failure message otherwise | ||
1658 | 769 | """ | ||
1659 | 770 | for unit in sentry_units: | ||
1660 | 771 | host = unit.info['public-address'] | ||
1661 | 772 | connected = self.port_knock_tcp(host, port, timeout) | ||
1662 | 773 | if not connected and expect_success: | ||
1663 | 774 | return 'Socket connect failed.' | ||
1664 | 775 | elif connected and not expect_success: | ||
1665 | 776 | return 'Socket connected unexpectedly.' | ||
1666 | 777 | |||
1667 | 778 | def get_uuid_epoch_stamp(self): | ||
1668 | 779 | """Returns a stamp string based on uuid4 and epoch time. Useful in | ||
1669 | 780 | generating test messages which need to be unique-ish.""" | ||
1670 | 781 | return '[{}-{}]'.format(uuid.uuid4(), time.time()) | ||
1671 | 782 | |||
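get_uuid_epoch_stamp above exists so that amqp test messages published and retrieved by the new rmq helpers are unique-ish; standalone it behaves like this:

```python
import time
import uuid

def get_uuid_epoch_stamp():
    """Stamp string from uuid4 plus epoch time, as in the helper above."""
    return '[{}-{}]'.format(uuid.uuid4(), time.time())

stamp = get_uuid_epoch_stamp()
assert stamp.startswith('[') and stamp.endswith(']')
# uuid4 makes a collision between two stamps vanishingly unlikely.
assert get_uuid_epoch_stamp() != stamp
```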
1672 | 783 | # amulet juju action helpers: | ||
1673 | 571 | def run_action(self, unit_sentry, action, | 784 | def run_action(self, unit_sentry, action, |
1674 | 572 | _check_output=subprocess.check_output): | 785 | _check_output=subprocess.check_output): |
1675 | 573 | """Run the named action on a given unit sentry. | 786 | """Run the named action on a given unit sentry. |
1676 | @@ -594,3 +807,12 @@ | |||
1677 | 594 | output = _check_output(command, universal_newlines=True) | 807 | output = _check_output(command, universal_newlines=True) |
1678 | 595 | data = json.loads(output) | 808 | data = json.loads(output) |
1679 | 596 | return data.get(u"status") == "completed" | 809 | return data.get(u"status") == "completed" |
1680 | 810 | |||
1681 | 811 | def status_get(self, unit): | ||
1682 | 812 | """Return the current service status of this unit.""" | ||
1683 | 813 | raw_status, return_code = unit.run( | ||
1684 | 814 | "status-get --format=json --include-data") | ||
1685 | 815 | if return_code != 0: | ||
1686 | 816 | return ("unknown", "") | ||
1687 | 817 | status = json.loads(raw_status) | ||
1688 | 818 | return (status["status"], status["message"]) | ||
1689 | 597 | 819 | ||
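status-get emits plain JSON, so the parsing in status_get above can be checked against canned output; the payload below is illustrative of a Juju workload status, not captured from a real unit:

```python
import json

def parse_status(raw_status, return_code):
    """Mirror status_get(): return (status, message), or ('unknown', '')
    when the status-get command itself failed."""
    if return_code != 0:
        return ("unknown", "")
    status = json.loads(raw_status)
    return (status["status"], status["message"])

raw = '{"status": "active", "message": "Unit is ready", "status-data": {}}'
assert parse_status(raw, 0) == ("active", "Unit is ready")
assert parse_status("", 1) == ("unknown", "")
```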
1690 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py' | |||
1691 | --- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-08-18 21:16:23 +0000 | |||
1692 | +++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-21 20:38:30 +0000 | |||
1693 | @@ -44,20 +44,31 @@ | |||
1694 | 44 | Determine if the local branch being tested is derived from its | 44 | Determine if the local branch being tested is derived from its |
1695 | 45 | stable or next (dev) branch, and based on this, use the corresponding | 45 | stable or next (dev) branch, and based on this, use the corresponding |
1696 | 46 | stable or next branches for the other_services.""" | 46 | stable or next branches for the other_services.""" |
1697 | 47 | |||
1698 | 48 | # Charms outside the lp:~openstack-charmers namespace | ||
1699 | 47 | base_charms = ['mysql', 'mongodb', 'nrpe'] | 49 | base_charms = ['mysql', 'mongodb', 'nrpe'] |
1700 | 48 | 50 | ||
1701 | 51 | # Force these charms to current series even when using an older series. | ||
1702 | 52 | # ie. Use trusty/nrpe even when series is precise, as the P charm | ||
1703 | 53 | # does not possess the necessary external master config and hooks. | ||
1704 | 54 | force_series_current = ['nrpe'] | ||
1705 | 55 | |||
1706 | 49 | if self.series in ['precise', 'trusty']: | 56 | if self.series in ['precise', 'trusty']: |
1707 | 50 | base_series = self.series | 57 | base_series = self.series |
1708 | 51 | else: | 58 | else: |
1709 | 52 | base_series = self.current_next | 59 | base_series = self.current_next |
1710 | 53 | 60 | ||
1713 | 54 | if self.stable: | 61 | for svc in other_services: |
1714 | 55 | for svc in other_services: | 62 | if svc['name'] in force_series_current: |
1715 | 63 | base_series = self.current_next | ||
1716 | 64 | # If a location has been explicitly set, use it | ||
1717 | 65 | if svc.get('location'): | ||
1718 | 66 | continue | ||
1719 | 67 | if self.stable: | ||
1720 | 56 | temp = 'lp:charms/{}/{}' | 68 | temp = 'lp:charms/{}/{}' |
1721 | 57 | svc['location'] = temp.format(base_series, | 69 | svc['location'] = temp.format(base_series, |
1722 | 58 | svc['name']) | 70 | svc['name']) |
1725 | 59 | else: | 71 | else: |
1724 | 60 | for svc in other_services: | ||
1726 | 61 | if svc['name'] in base_charms: | 72 | if svc['name'] in base_charms: |
1727 | 62 | temp = 'lp:charms/{}/{}' | 73 | temp = 'lp:charms/{}/{}' |
1728 | 63 | svc['location'] = temp.format(base_series, | 74 | svc['location'] = temp.format(base_series, |
1729 | @@ -66,6 +77,7 @@ | |||
1730 | 66 | temp = 'lp:~openstack-charmers/charms/{}/{}/next' | 77 | temp = 'lp:~openstack-charmers/charms/{}/{}/next' |
1731 | 67 | svc['location'] = temp.format(self.current_next, | 78 | svc['location'] = temp.format(self.current_next, |
1732 | 68 | svc['name']) | 79 | svc['name']) |
1733 | 80 | |||
1734 | 69 | return other_services | 81 | return other_services |
1735 | 70 | 82 | ||
1736 | 71 | def _add_services(self, this_service, other_services): | 83 | def _add_services(self, this_service, other_services): |
1737 | @@ -77,21 +89,23 @@ | |||
1738 | 77 | 89 | ||
1739 | 78 | services = other_services | 90 | services = other_services |
1740 | 79 | services.append(this_service) | 91 | services.append(this_service) |
1741 | 92 | |||
1742 | 93 | # Charms which should use the source config option | ||
1743 | 80 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', | 94 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', |
1744 | 81 | 'ceph-osd', 'ceph-radosgw'] | 95 | 'ceph-osd', 'ceph-radosgw'] |
1748 | 82 | # Most OpenStack subordinate charms do not expose an origin option | 96 | |
1749 | 83 | # as that is controlled by the principle. | 97 | # Charms which can not use openstack-origin, ie. many subordinates |
1750 | 84 | ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe'] | 98 | no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe'] |
1751 | 85 | 99 | ||
1752 | 86 | if self.openstack: | 100 | if self.openstack: |
1753 | 87 | for svc in services: | 101 | for svc in services: |
1755 | 88 | if svc['name'] not in use_source + ignore: | 102 | if svc['name'] not in use_source + no_origin: |
1756 | 89 | config = {'openstack-origin': self.openstack} | 103 | config = {'openstack-origin': self.openstack} |
1757 | 90 | self.d.configure(svc['name'], config) | 104 | self.d.configure(svc['name'], config) |
1758 | 91 | 105 | ||
1759 | 92 | if self.source: | 106 | if self.source: |
1760 | 93 | for svc in services: | 107 | for svc in services: |
1762 | 94 | if svc['name'] in use_source and svc['name'] not in ignore: | 108 | if svc['name'] in use_source and svc['name'] not in no_origin: |
1763 | 95 | config = {'source': self.source} | 109 | config = {'source': self.source} |
1764 | 96 | self.d.configure(svc['name'], config) | 110 | self.d.configure(svc['name'], config) |
1765 | 97 | 111 | ||
1766 | 98 | 112 | ||
1767 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py' | |||
1768 | --- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-16 20:18:08 +0000 | |||
1769 | +++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-21 20:38:30 +0000 | |||
1770 | @@ -27,6 +27,7 @@ | |||
1771 | 27 | import heatclient.v1.client as heat_client | 27 | import heatclient.v1.client as heat_client |
1772 | 28 | import keystoneclient.v2_0 as keystone_client | 28 | import keystoneclient.v2_0 as keystone_client |
1773 | 29 | import novaclient.v1_1.client as nova_client | 29 | import novaclient.v1_1.client as nova_client |
1774 | 30 | import pika | ||
1775 | 30 | import swiftclient | 31 | import swiftclient |
1776 | 31 | 32 | ||
1777 | 32 | from charmhelpers.contrib.amulet.utils import ( | 33 | from charmhelpers.contrib.amulet.utils import ( |
1778 | @@ -602,3 +603,361 @@ | |||
1779 | 602 | self.log.debug('Ceph {} samples (OK): ' | 603 | self.log.debug('Ceph {} samples (OK): ' |
1780 | 603 | '{}'.format(sample_type, samples)) | 604 | '{}'.format(sample_type, samples)) |
1781 | 604 | return None | 605 | return None |
1782 | 606 | |||
1783 | 607 | # rabbitmq/amqp specific helpers: | ||
1784 | 608 | def add_rmq_test_user(self, sentry_units, | ||
1785 | 609 | username="testuser1", password="changeme"): | ||
1786 | 610 | """Add a test user via the first rmq juju unit, check connection as | ||
1787 | 611 | the new user against all sentry units. | ||
1788 | 612 | |||
1789 | 613 | :param sentry_units: list of sentry unit pointers | ||
1790 | 614 | :param username: amqp user name, default to testuser1 | ||
1791 | 615 | :param password: amqp user password | ||
1792 | 616 | :returns: None if successful. Raise on error. | ||
1793 | 617 | """ | ||
1794 | 618 | self.log.debug('Adding rmq user ({})...'.format(username)) | ||
1795 | 619 | |||
1796 | 620 | # Check that user does not already exist | ||
1797 | 621 | cmd_user_list = 'rabbitmqctl list_users' | ||
1798 | 622 | output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list) | ||
1799 | 623 | if username in output: | ||
1800 | 624 | self.log.warning('User ({}) already exists, returning ' | ||
1801 | 625 | 'gracefully.'.format(username)) | ||
1802 | 626 | return | ||
1803 | 627 | |||
1804 | 628 | perms = '".*" ".*" ".*"' | ||
1805 | 629 | cmds = ['rabbitmqctl add_user {} {}'.format(username, password), | ||
1806 | 630 | 'rabbitmqctl set_permissions {} {}'.format(username, perms)] | ||
1807 | 631 | |||
1808 | 632 | # Add user via first unit | ||
1809 | 633 | for cmd in cmds: | ||
1810 | 634 | output, _ = self.run_cmd_unit(sentry_units[0], cmd) | ||
1811 | 635 | |||
1812 | 636 | # Check connection against the other sentry_units | ||
1813 | 637 | self.log.debug('Checking user connect against units...') | ||
1814 | 638 | for sentry_unit in sentry_units: | ||
1815 | 639 | connection = self.connect_amqp_by_unit(sentry_unit, ssl=False, | ||
1816 | 640 | username=username, | ||
1817 | 641 | password=password) | ||
1818 | 642 | connection.close() | ||
1819 | 643 | |||
1820 | 644 | def delete_rmq_test_user(self, sentry_units, username="testuser1"): | ||
1821 | 645 | """Delete a rabbitmq user via the first rmq juju unit. | ||
1822 | 646 | |||
1823 | 647 | :param sentry_units: list of sentry unit pointers | ||
1824 | 648 | :param username: amqp user name, default to testuser1 | ||
1825 | 649 | :param password: amqp user password | ||
1826 | 650 | :returns: None if successful or no such user. | ||
1827 | 651 | """ | ||
1828 | 652 | self.log.debug('Deleting rmq user ({})...'.format(username)) | ||
1829 | 653 | |||
1830 | 654 | # Check that the user exists | ||
1831 | 655 | cmd_user_list = 'rabbitmqctl list_users' | ||
1832 | 656 | output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list) | ||
1833 | 657 | |||
1834 | 658 | if username not in output: | ||
1835 | 659 | self.log.warning('User ({}) does not exist, returning ' | ||
1836 | 660 | 'gracefully.'.format(username)) | ||
1837 | 661 | return | ||
1838 | 662 | |||
1839 | 663 | # Delete the user | ||
1840 | 664 | cmd_user_del = 'rabbitmqctl delete_user {}'.format(username) | ||
1841 | 665 | output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del) | ||
1842 | 666 | |||
1843 | 667 | def get_rmq_cluster_status(self, sentry_unit): | ||
1844 | 668 | """Execute rabbitmq cluster status command on a unit and return | ||
1845 | 669 | the full output. | ||
1846 | 670 | |||
1847 | 671 | :param unit: sentry unit | ||
1848 | 672 | :returns: String containing console output of cluster status command | ||
1849 | 673 | """ | ||
1850 | 674 | cmd = 'rabbitmqctl cluster_status' | ||
1851 | 675 | output, _ = self.run_cmd_unit(sentry_unit, cmd) | ||
1852 | 676 | self.log.debug('{} cluster_status:\n{}'.format( | ||
1853 | 677 | sentry_unit.info['unit_name'], output)) | ||
1854 | 678 | return str(output) | ||
1855 | 679 | |||
1856 | 680 | def get_rmq_cluster_running_nodes(self, sentry_unit): | ||
1857 | 681 | """Parse rabbitmqctl cluster_status output string, return list of | ||
1858 | 682 | running rabbitmq cluster nodes. | ||
1859 | 683 | |||
1860 | 684 | :param unit: sentry unit | ||
1861 | 685 | :returns: List containing node names of running nodes | ||
1862 | 686 | """ | ||
1863 | 687 | # NOTE(beisner): rabbitmqctl cluster_status output is not | ||
1864 | 688 | # json-parsable, do string chop foo, then json.loads that. | ||
1865 | 689 | str_stat = self.get_rmq_cluster_status(sentry_unit) | ||
1866 | 690 | if 'running_nodes' in str_stat: | ||
1867 | 691 | pos_start = str_stat.find("{running_nodes,") + 15 | ||
1868 | 692 | pos_end = str_stat.find("]},", pos_start) + 1 | ||
1869 | 693 | str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"') | ||
1870 | 694 | run_nodes = json.loads(str_run_nodes) | ||
1871 | 695 | return run_nodes | ||
1872 | 696 | else: | ||
1873 | 697 | return [] | ||
1874 | 698 | |||
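Because rabbitmqctl cluster_status output is Erlang terms rather than JSON, the helper above string-chops the running_nodes tuple and json.loads the remainder. A standalone sketch against illustrative console output (the node names are examples):

```python
import json

def running_nodes_from_status(str_stat):
    """String-chop the non-json-parsable rabbitmqctl cluster_status
    output and json.loads the running_nodes list, as the helper does."""
    if 'running_nodes' in str_stat:
        pos_start = str_stat.find("{running_nodes,") + 15
        pos_end = str_stat.find("]},", pos_start) + 1
        str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
        return json.loads(str_run_nodes)
    return []

# Illustrative cluster_status console output; node names are examples.
sample = ("Cluster status of node 'rabbit@host1' ...\n"
          "[{nodes,[{disc,['rabbit@host1','rabbit@host2']}]},\n"
          " {running_nodes,['rabbit@host1','rabbit@host2']},\n"
          " {partitions,[]}]\n")
assert running_nodes_from_status(sample) == ['rabbit@host1', 'rabbit@host2']
```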
1875 | 699 | def validate_rmq_cluster_running_nodes(self, sentry_units): | ||
1876 | 700 | """Check that all rmq unit hostnames are represented in the | ||
1877 | 701 | cluster_status output of all units. | ||
1878 | 702 | |||
1879 | 703 | :param sentry_units: list of sentry unit pointers | ||
1880 | 704 | (all rmq units) | ||
1881 | 705 | :returns: None if successful, otherwise return error message | ||
1882 | 706 | """ | ||
1883 | 707 | host_names = self.get_unit_hostnames(sentry_units) | ||
1884 | 708 | errors = [] | ||
1885 | 709 | |||
1886 | 710 | # Query every unit for cluster_status running nodes | ||
1887 | 711 | for query_unit in sentry_units: | ||
1888 | 712 | query_unit_name = query_unit.info['unit_name'] | ||
1889 | 713 | running_nodes = self.get_rmq_cluster_running_nodes(query_unit) | ||
1890 | 714 | |||
1891 | 715 | # Confirm that every unit is represented in the queried unit's | ||
1892 | 716 | # cluster_status running nodes output. | ||
1893 | 717 | for validate_unit in sentry_units: | ||
1894 | 718 | val_host_name = host_names[validate_unit.info['unit_name']] | ||
1895 | 719 | val_node_name = 'rabbit@{}'.format(val_host_name) | ||
1896 | 720 | |||
1897 | 721 | if val_node_name not in running_nodes: | ||
1898 | 722 | errors.append('Cluster member check failed on {}: {} not ' | ||
1899 | 723 | 'in {}\n'.format(query_unit_name, | ||
1900 | 724 | val_node_name, | ||
1901 | 725 | running_nodes)) | ||
1902 | 726 | if errors: | ||
1903 | 727 | return ''.join(errors) | ||
1904 | 728 | |||
1905 | 729 | def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None): | ||
1906 | 730 | """Check a single juju rmq unit for ssl and port in the config file.""" | ||
1907 | 731 | host = sentry_unit.info['public-address'] | ||
1908 | 732 | unit_name = sentry_unit.info['unit_name'] | ||
1909 | 733 | |||
1910 | 734 | conf_file = '/etc/rabbitmq/rabbitmq.config' | ||
1911 | 735 | conf_contents = str(self.file_contents_safe(sentry_unit, | ||
1912 | 736 | conf_file, max_wait=16)) | ||
1913 | 737 | # Checks | ||
1914 | 738 | conf_ssl = 'ssl' in conf_contents | ||
1915 | 739 | conf_port = str(port) in conf_contents | ||
1916 | 740 | |||
1917 | 741 | # Port explicitly checked in config | ||
1918 | 742 | if port and conf_port and conf_ssl: | ||
1919 | 743 | self.log.debug('SSL is enabled @{}:{} ' | ||
1920 | 744 | '({})'.format(host, port, unit_name)) | ||
1921 | 745 | return True | ||
1922 | 746 | elif port and not conf_port and conf_ssl: | ||
1923 | 747 | self.log.debug('SSL is enabled @{} but not on port {} ' | ||
1924 | 748 | '({})'.format(host, port, unit_name)) | ||
1925 | 749 | return False | ||
1926 | 750 | # Port not checked (useful when checking that ssl is disabled) | ||
1927 | 751 | elif not port and conf_ssl: | ||
1928 | 752 | self.log.debug('SSL is enabled @{}:{} ' | ||
1929 | 753 | '({})'.format(host, port, unit_name)) | ||
1930 | 754 | return True | ||
1931 | 755 | elif not port and not conf_ssl: | ||
1932 | 756 | self.log.debug('SSL not enabled @{}:{} ' | ||
1933 | 757 | '({})'.format(host, port, unit_name)) | ||
1934 | 758 | return False | ||
1935 | 759 | else: | ||
1936 | 760 | msg = ('Unknown condition when checking SSL status @{}:{} ' | ||
1937 | 761 | '({})'.format(host, port, unit_name)) | ||
1938 | 762 | amulet.raise_status(amulet.FAIL, msg) | ||
1939 | 763 | |||
1940 | 764 | def validate_rmq_ssl_enabled_units(self, sentry_units, port=None): | ||
1941 | 765 | """Check that ssl is enabled on rmq juju sentry units. | ||
1942 | 766 | |||
1943 | 767 | :param sentry_units: list of all rmq sentry units | ||
1944 | 768 | :param port: optional ssl port override to validate | ||
1945 | 769 | :returns: None if successful, otherwise return error message | ||
1946 | 770 | """ | ||
1947 | 771 | for sentry_unit in sentry_units: | ||
1948 | 772 | if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port): | ||
1949 | 773 | return ('Unexpected condition: ssl is disabled on unit ' | ||
1950 | 774 | '({})'.format(sentry_unit.info['unit_name'])) | ||
1951 | 775 | return None | ||
1952 | 776 | |||
1953 | 777 | def validate_rmq_ssl_disabled_units(self, sentry_units): | ||
1954 | 778 | """Check that ssl is disabled on listed rmq juju sentry units. | ||
1955 | 779 | |||
1956 | 780 | :param sentry_units: list of all rmq sentry units | ||
1957 | 781 | :returns: None if successful, otherwise return error message | ||
1958 | 782 | """ | ||
1959 | 783 | for sentry_unit in sentry_units: | ||
1960 | 784 | if self.rmq_ssl_is_enabled_on_unit(sentry_unit): | ||
1961 | 785 | return ('Unexpected condition: ssl is enabled on unit ' | ||
1962 | 786 | '({})'.format(sentry_unit.info['unit_name'])) | ||
1963 | 787 | return None | ||
1964 | 788 | |||
1965 | 789 | def configure_rmq_ssl_on(self, sentry_units, deployment, | ||
1966 | 790 | port=None, max_wait=60): | ||
1967 | 791 | """Turn ssl charm config option on, with optional non-default | ||
1968 | 792 | ssl port specification. Confirm that it is enabled on every | ||
1969 | 793 | unit. | ||
1970 | 794 | |||
1971 | 795 | :param sentry_units: list of sentry units | ||
1972 | 796 | :param deployment: amulet deployment object pointer | ||
1973 | 797 | :param port: amqp port, use defaults if None | ||
1974 | 798 | :param max_wait: maximum time to wait in seconds to confirm | ||
1975 | 799 | :returns: None if successful. Raise on error. | ||
1976 | 800 | """ | ||
1977 | 801 | self.log.debug('Setting ssl charm config option: on') | ||
1978 | 802 | |||
1979 | 803 | # Enable RMQ SSL | ||
1980 | 804 | config = {'ssl': 'on'} | ||
1981 | 805 | if port: | ||
1982 | 806 | config['ssl_port'] = port | ||
1983 | 807 | |||
1984 | 808 | deployment.configure('rabbitmq-server', config) | ||
1985 | 809 | |||
1986 | 810 | # Confirm | ||
1987 | 811 | tries = 0 | ||
1988 | 812 | ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port) | ||
1989 | 813 | while ret and tries < (max_wait / 4): | ||
1990 | 814 | time.sleep(4) | ||
1991 | 815 | self.log.debug('Attempt {}: {}'.format(tries, ret)) | ||
1992 | 816 | ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port) | ||
1993 | 817 | tries += 1 | ||
1994 | 818 | |||
1995 | 819 | if ret: | ||
1996 | 820 | amulet.raise_status(amulet.FAIL, ret) | ||
1997 | 821 | |||
1998 | 822 | def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60): | ||
1999 | 823 | """Turn ssl charm config option off, confirm that it is disabled | ||
2000 | 824 | on every unit. | ||
2001 | 825 | |||
2002 | 826 | :param sentry_units: list of sentry units | ||
2003 | 827 | :param deployment: amulet deployment object pointer | ||
2004 | 828 | :param max_wait: maximum time to wait in seconds to confirm | ||
2005 | 829 | :returns: None if successful. Raise on error. | ||
2006 | 830 | """ | ||
2007 | 831 | self.log.debug('Setting ssl charm config option: off') | ||
2008 | 832 | |||
2009 | 833 | # Disable RMQ SSL | ||
2010 | 834 | config = {'ssl': 'off'} | ||
2011 | 835 | deployment.configure('rabbitmq-server', config) | ||
2012 | 836 | |||
2013 | 837 | # Confirm | ||
2014 | 838 | tries = 0 | ||
2015 | 839 | ret = self.validate_rmq_ssl_disabled_units(sentry_units) | ||
2016 | 840 | while ret and tries < (max_wait / 4): | ||
2017 | 841 | time.sleep(4) | ||
2018 | 842 | self.log.debug('Attempt {}: {}'.format(tries, ret)) | ||
2019 | 843 | ret = self.validate_rmq_ssl_disabled_units(sentry_units) | ||
2020 | 844 | tries += 1 | ||
2021 | 845 | |||
2022 | 846 | if ret: | ||
2023 | 847 | amulet.raise_status(amulet.FAIL, ret) | ||
2024 | 848 | |||
2025 | 849 | def connect_amqp_by_unit(self, sentry_unit, ssl=False, | ||
2026 | 850 | port=None, fatal=True, | ||
2027 | 851 | username="testuser1", password="changeme"): | ||
2028 | 852 | """Establish and return a pika amqp connection to the rabbitmq service | ||
2029 | 853 | running on a rmq juju unit. | ||
2030 | 854 | |||
2031 | 855 | :param sentry_unit: sentry unit pointer | ||
2032 | 856 | :param ssl: boolean, default to False | ||
2033 | 857 | :param port: amqp port, use defaults if None | ||
2034 | 858 | :param fatal: boolean, default to True (raises on connect error) | ||
2035 | 859 | :param username: amqp user name, default to testuser1 | ||
2036 | 860 | :param password: amqp user password | ||
2037 | 861 | :returns: pika amqp connection pointer or None if failed and non-fatal | ||
2038 | 862 | """ | ||
2039 | 863 | host = sentry_unit.info['public-address'] | ||
2040 | 864 | unit_name = sentry_unit.info['unit_name'] | ||
2041 | 865 | |||
2042 | 866 | # Default port logic if port is not specified | ||
2043 | 867 | if ssl and not port: | ||
2044 | 868 | port = 5671 | ||
2045 | 869 | elif not ssl and not port: | ||
2046 | 870 | port = 5672 | ||
2047 | 871 | |||
2048 | 872 | self.log.debug('Connecting to amqp on {}:{} ({}) as ' | ||
2049 | 873 | '{}...'.format(host, port, unit_name, username)) | ||
2050 | 874 | |||
2051 | 875 | try: | ||
2052 | 876 | credentials = pika.PlainCredentials(username, password) | ||
2053 | 877 | parameters = pika.ConnectionParameters(host=host, port=port, | ||
2054 | 878 | credentials=credentials, | ||
2055 | 879 | ssl=ssl, | ||
2056 | 880 | connection_attempts=3, | ||
2057 | 881 | retry_delay=5, | ||
2058 | 882 | socket_timeout=1) | ||
2059 | 883 | connection = pika.BlockingConnection(parameters) | ||
2060 | 884 | assert connection.server_properties['product'] == 'RabbitMQ' | ||
2061 | 885 | self.log.debug('Connect OK') | ||
2062 | 886 | return connection | ||
2063 | 887 | except Exception as e: | ||
2064 | 888 | msg = ('amqp connection failed to {}:{} as ' | ||
2065 | 889 | '{} ({})'.format(host, port, username, str(e))) | ||
2066 | 890 | if fatal: | ||
2067 | 891 | amulet.raise_status(amulet.FAIL, msg) | ||
2068 | 892 | else: | ||
2069 | 893 | self.log.warn(msg) | ||
2070 | 894 | return None | ||
2071 | 895 | |||
2072 | 896 | def publish_amqp_message_by_unit(self, sentry_unit, message, | ||
2073 | 897 | queue="test", ssl=False, | ||
2074 | 898 | username="testuser1", | ||
2075 | 899 | password="changeme", | ||
2076 | 900 | port=None): | ||
2077 | 901 | """Publish an amqp message to a rmq juju unit. | ||
2078 | 902 | |||
2079 | 903 | :param sentry_unit: sentry unit pointer | ||
2080 | 904 | :param message: amqp message string | ||
2081 | 905 | :param queue: message queue, default to test | ||
2082 | 906 | :param username: amqp user name, default to testuser1 | ||
2083 | 907 | :param password: amqp user password | ||
2084 | 908 | :param ssl: boolean, default to False | ||
2085 | 909 | :param port: amqp port, use defaults if None | ||
2086 | 910 | :returns: None. Raises exception if publish failed. | ||
2087 | 911 | """ | ||
2088 | 912 | self.log.debug('Publishing message to {} queue:\n{}'.format(queue, | ||
2089 | 913 | message)) | ||
2090 | 914 | connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl, | ||
2091 | 915 | port=port, | ||
2092 | 916 | username=username, | ||
2093 | 917 | password=password) | ||
2094 | 918 | |||
2095 | 919 | # NOTE(beisner): extra debug here re: pika hang potential: | ||
2096 | 920 | # https://github.com/pika/pika/issues/297 | ||
2097 | 921 | # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw | ||
2098 | 922 | self.log.debug('Defining channel...') | ||
2099 | 923 | channel = connection.channel() | ||
2100 | 924 | self.log.debug('Declaring queue...') | ||
2101 | 925 | channel.queue_declare(queue=queue, auto_delete=False, durable=True) | ||
2102 | 926 | self.log.debug('Publishing message...') | ||
2103 | 927 | channel.basic_publish(exchange='', routing_key=queue, body=message) | ||
2104 | 928 | self.log.debug('Closing channel...') | ||
2105 | 929 | channel.close() | ||
2106 | 930 | self.log.debug('Closing connection...') | ||
2107 | 931 | connection.close() | ||
2108 | 932 | |||
2109 | 933 | def get_amqp_message_by_unit(self, sentry_unit, queue="test", | ||
2110 | 934 | username="testuser1", | ||
2111 | 935 | password="changeme", | ||
2112 | 936 | ssl=False, port=None): | ||
2113 | 937 | """Get an amqp message from a rmq juju unit. | ||
2114 | 938 | |||
2115 | 939 | :param sentry_unit: sentry unit pointer | ||
2116 | 940 | :param queue: message queue, default to test | ||
2117 | 941 | :param username: amqp user name, default to testuser1 | ||
2118 | 942 | :param password: amqp user password | ||
2119 | 943 | :param ssl: boolean, default to False | ||
2120 | 944 | :param port: amqp port, use defaults if None | ||
2121 | 945 | :returns: amqp message body as string. Raise if get fails. | ||
2122 | 946 | """ | ||
2123 | 947 | connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl, | ||
2124 | 948 | port=port, | ||
2125 | 949 | username=username, | ||
2126 | 950 | password=password) | ||
2127 | 951 | channel = connection.channel() | ||
2128 | 952 | method_frame, _, body = channel.basic_get(queue) | ||
2129 | 953 | |||
2130 | 954 | if method_frame: | ||
2131 | 955 | self.log.debug('Retrieved message from {} queue:\n{}'.format(queue, | ||
2132 | 956 | body)) | ||
2133 | 957 | channel.basic_ack(method_frame.delivery_tag) | ||
2134 | 958 | channel.close() | ||
2135 | 959 | connection.close() | ||
2136 | 960 | return body | ||
2137 | 961 | else: | ||
2138 | 962 | msg = 'No message retrieved.' | ||
2139 | 963 | amulet.raise_status(amulet.FAIL, msg) | ||
2140 | 605 | 964 | ||
2141 | === added file 'tests/tests.yaml' | |||
2142 | --- tests/tests.yaml 1970-01-01 00:00:00 +0000 | |||
2143 | +++ tests/tests.yaml 2015-09-21 20:38:30 +0000 | |||
2144 | @@ -0,0 +1,20 @@ | |||
2145 | 1 | bootstrap: true | ||
2146 | 2 | reset: true | ||
2147 | 3 | virtualenv: true | ||
2148 | 4 | makefile: | ||
2149 | 5 | - lint | ||
2150 | 6 | - test | ||
2151 | 7 | sources: | ||
2152 | 8 | - ppa:juju/stable | ||
2153 | 9 | packages: | ||
2154 | 10 | - amulet | ||
2155 | 11 | - python-amulet | ||
2156 | 12 | - python-cinderclient | ||
2157 | 13 | - python-distro-info | ||
2158 | 14 | - python-glanceclient | ||
2159 | 15 | - python-heatclient | ||
2160 | 16 | - python-keystoneclient | ||
2161 | 17 | - python-neutronclient | ||
2162 | 18 | - python-novaclient | ||
2163 | 19 | - python-pika | ||
2164 | 20 | - python-swiftclient |
charm_lint_check #10116 neutron-gateway-next for 1chb1n mp271198
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.
Full lint test output: http://paste.ubuntu.com/12423661/
Build: http://10.245.162.77:8080/job/charm_lint_check/10116/