Merge lp:~1chb1n/charms/trusty/neutron-gateway/amulet-update-1508 into lp:~openstack-charmers-archive/charms/trusty/neutron-gateway/next

Proposed by Ryan Beisner
Status: Superseded
Proposed branch: lp:~1chb1n/charms/trusty/neutron-gateway/amulet-update-1508
Merge into: lp:~openstack-charmers-archive/charms/trusty/neutron-gateway/next
Diff against target: 2164 lines (+1242/-341)
11 files modified
Makefile (+1/-1)
tests/00-setup (+7/-3)
tests/020-basic-trusty-liberty (+11/-0)
tests/021-basic-wily-liberty (+9/-0)
tests/README (+10/-0)
tests/basic_deployment.py (+514/-264)
tests/charmhelpers/contrib/amulet/deployment.py (+4/-2)
tests/charmhelpers/contrib/amulet/utils.py (+284/-62)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+23/-9)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+359/-0)
tests/tests.yaml (+20/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/neutron-gateway/amulet-update-1508
Reviewer: OpenStack Charmers (status: Pending)
Review via email: mp+271198@code.launchpad.net

This proposal has been superseded by a proposal from 2015-09-22.

Description of the change

Update amulet tests to enable Vivid-Kilo and Trusty-Liberty; sync tests/charmhelpers.

This proposal is dependent on charm-helpers landing code from:
 https://code.launchpad.net/~1chb1n/charm-helpers/amulet-file-race/+merge/271417
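
For context, each numbered test target added in this branch is a thin wrapper that instantiates the shared deployment class from tests/basic_deployment.py with the series and package origin for the release under test. A minimal sketch, mirroring the added tests/020-basic-trusty-liberty target in the diff below:

    #!/usr/bin/python
    # Exercise the shared Amulet suite on a trusty unit against the
    # trusty-liberty Ubuntu Cloud Archive pockets.
    from basic_deployment import NeutronGatewayBasicDeployment

    if __name__ == '__main__':
        deployment = NeutronGatewayBasicDeployment(
            series='trusty',
            openstack='cloud:trusty-liberty',
            source='cloud:trusty-updates/liberty')
        deployment.run_tests()

In CI these targets are driven through the charm Makefile's functional_test target, which is why failures surface as 'make: *** [functional_test] Error 1' in the amulet results below.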

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10116 neutron-gateway-next for 1chb1n mp271198
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12423661/
Build: http://10.245.162.77:8080/job/charm_lint_check/10116/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9280 neutron-gateway-next for 1chb1n mp271198
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9280/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10119 neutron-gateway-next for 1chb1n mp271198
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10119/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9282 neutron-gateway-next for 1chb1n mp271198
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9282/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6455 neutron-gateway-next for 1chb1n mp271198
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12423795/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6455/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6457 neutron-gateway-next for 1chb1n mp271198
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12423843/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6457/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10120 neutron-gateway-next for 1chb1n mp271198
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10120/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9283 neutron-gateway-next for 1chb1n mp271198
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9283/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6458 neutron-gateway-next for 1chb1n mp271198
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12424182/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6458/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10184 neutron-gateway-next for 1chb1n mp271198
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10184/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9342 neutron-gateway-next for 1chb1n mp271198
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9342/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6472 neutron-gateway-next for 1chb1n mp271198
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12434297/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6472/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10187 neutron-gateway-next for 1chb1n mp271198
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10187/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9345 neutron-gateway-next for 1chb1n mp271198
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9345/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10189 neutron-gateway-next for 1chb1n mp271198
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10189/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9347 neutron-gateway-next for 1chb1n mp271198
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9347/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6475 neutron-gateway-next for 1chb1n mp271198
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12437543/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6475/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6477 neutron-gateway-next for 1chb1n mp271198
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12437593/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6477/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10193 neutron-gateway-next for 1chb1n mp271198
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10193/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9351 neutron-gateway-next for 1chb1n mp271198
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9351/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6482 neutron-gateway-next for 1chb1n mp271198
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12439988/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6482/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10253 neutron-gateway-next for 1chb1n mp271198
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10253/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9408 neutron-gateway-next for 1chb1n mp271198
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9408/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6494 neutron-gateway-next for 1chb1n mp271198
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12449713/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6494/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

FYI - ignore previous amulet test results here.

Test writing was complicated by an amulet bug: https://github.com/juju/amulet/issues/94.

Will resubmit after confirming that is no longer an issue.

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10433 neutron-gateway-next for 1chb1n mp271198
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10433/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9623 neutron-gateway-next for 1chb1n mp271198
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9623/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6576 neutron-gateway-next for 1chb1n mp271198
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12521680/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6576/

143. By Ryan Beisner

rebase with next

144. By Ryan Beisner

disable wily test target, to be re-enabled separately after validation

145. By Ryan Beisner

rebase with next

146. By Ryan Beisner

rebase from next (re: amulet git test fails)

147. By Ryan Beisner

rebase from next (re: unit test fails re: git fix)

Unmerged revisions

Preview Diff

=== modified file 'Makefile'
--- Makefile 2015-06-25 17:58:14 +0000
+++ Makefile 2015-09-21 20:38:30 +0000
@@ -24,6 +24,6 @@
 	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
 	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
 
-publish: lint unit_test
+publish: lint test
 	bzr push lp:charms/neutron-gateway
 	bzr push lp:charms/trusty/neutron-gateway
 
=== modified file 'tests/00-setup'
--- tests/00-setup 2015-07-10 13:34:03 +0000
+++ tests/00-setup 2015-09-21 20:38:30 +0000
@@ -4,9 +4,13 @@
 
 sudo add-apt-repository --yes ppa:juju/stable
 sudo apt-get update --yes
-sudo apt-get install --yes python-amulet \
+sudo apt-get install --yes amulet \
+    python-cinderclient \
     python-distro-info \
+    python-glanceclient \
+    python-heatclient \
+    python-keystoneclient \
     python-neutronclient \
-    python-keystoneclient \
     python-novaclient \
-    python-glanceclient
+    python-pika \
+    python-swiftclient
 
=== modified file 'tests/019-basic-vivid-kilo' (properties changed: -x to +x)
=== added file 'tests/020-basic-trusty-liberty'
--- tests/020-basic-trusty-liberty 1970-01-01 00:00:00 +0000
+++ tests/020-basic-trusty-liberty 2015-09-21 20:38:30 +0000
@@ -0,0 +1,11 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic quantum-gateway deployment on trusty-liberty."""
+
+from basic_deployment import NeutronGatewayBasicDeployment
+
+if __name__ == '__main__':
+    deployment = NeutronGatewayBasicDeployment(series='trusty',
+                                               openstack='cloud:trusty-liberty',
+                                               source='cloud:trusty-updates/liberty')
+    deployment.run_tests()
=== added file 'tests/021-basic-wily-liberty'
--- tests/021-basic-wily-liberty 1970-01-01 00:00:00 +0000
+++ tests/021-basic-wily-liberty 2015-09-21 20:38:30 +0000
@@ -0,0 +1,9 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic quantum-gateway deployment on wily-liberty."""
+
+from basic_deployment import NeutronGatewayBasicDeployment
+
+if __name__ == '__main__':
+    deployment = NeutronGatewayBasicDeployment(series='wily')
+    deployment.run_tests()
=== modified file 'tests/README'
--- tests/README 2014-10-07 21:19:29 +0000
+++ tests/README 2015-09-21 20:38:30 +0000
@@ -1,6 +1,16 @@
1This directory provides Amulet tests that focus on verification of1This directory provides Amulet tests that focus on verification of
2quantum-gateway deployments.2quantum-gateway deployments.
33
4test_* methods are called in lexical sort order, although each individual test
5should be idempotent, and expected to pass regardless of run order.
6
7Test name convention to ensure desired test order:
8 1xx service and endpoint checks
9 2xx relation checks
10 3xx config checks
11 4xx functional checks
12 9xx restarts and other final checks
13
4In order to run tests, you'll need charm-tools installed (in addition to14In order to run tests, you'll need charm-tools installed (in addition to
5juju, of course):15juju, of course):
6 sudo add-apt-repository ppa:juju/stable16 sudo add-apt-repository ppa:juju/stable
717
=== modified file 'tests/basic_deployment.py'
--- tests/basic_deployment.py 2015-09-14 14:30:47 +0000
+++ tests/basic_deployment.py 2015-09-21 20:38:30 +0000
@@ -13,8 +13,8 @@
 
 from charmhelpers.contrib.openstack.amulet.utils import (
     OpenStackAmuletUtils,
-    DEBUG,  # flake8: noqa
-    ERROR
+    DEBUG,
+    # ERROR
 )
 
 # Use DEBUG to turn on debug logging
@@ -45,30 +45,31 @@
45 """45 """
46 this_service = {'name': 'neutron-gateway'}46 this_service = {'name': 'neutron-gateway'}
47 other_services = [{'name': 'mysql'},47 other_services = [{'name': 'mysql'},
48 {'name': 'rabbitmq-server'}, {'name': 'keystone'},48 {'name': 'rabbitmq-server'},
49 {'name': 'nova-cloud-controller'}]49 {'name': 'keystone'},
50 if self._get_openstack_release() >= self.trusty_kilo:50 {'name': 'nova-cloud-controller'},
51 other_services.append({'name': 'neutron-api'})51 {'name': 'neutron-api'}]
52 super(NeutronGatewayBasicDeployment, self)._add_services(this_service,52
53 other_services)53 super(NeutronGatewayBasicDeployment, self)._add_services(
54 this_service, other_services)
5455
55 def _add_relations(self):56 def _add_relations(self):
56 """Add all of the relations for the services."""57 """Add all of the relations for the services."""
57 relations = {58 relations = {
58 'keystone:shared-db': 'mysql:shared-db',59 'keystone:shared-db': 'mysql:shared-db',
59 'neutron-gateway:shared-db': 'mysql:shared-db',60 'neutron-gateway:shared-db': 'mysql:shared-db',
60 'neutron-gateway:amqp': 'rabbitmq-server:amqp',61 'neutron-gateway:amqp': 'rabbitmq-server:amqp',
61 'nova-cloud-controller:quantum-network-service': \62 'nova-cloud-controller:quantum-network-service':
62 'neutron-gateway:quantum-network-service',63 'neutron-gateway:quantum-network-service',
63 'nova-cloud-controller:shared-db': 'mysql:shared-db',64 'nova-cloud-controller:shared-db': 'mysql:shared-db',
64 'nova-cloud-controller:identity-service': 'keystone:identity-service',65 'nova-cloud-controller:identity-service': 'keystone:'
65 'nova-cloud-controller:amqp': 'rabbitmq-server:amqp'66 'identity-service',
67 'nova-cloud-controller:amqp': 'rabbitmq-server:amqp',
68 'neutron-api:shared-db': 'mysql:shared-db',
69 'neutron-api:amqp': 'rabbitmq-server:amqp',
70 'neutron-api:neutron-api': 'nova-cloud-controller:neutron-api',
71 'neutron-api:identity-service': 'keystone:identity-service'
66 }72 }
67 if self._get_openstack_release() >= self.trusty_kilo:
68 relations['neutron-api:shared-db'] = 'mysql:shared-db'
69 relations['neutron-api:amqp'] = 'rabbitmq-server:amqp'
70 relations['neutron-api:neutron-api'] = 'nova-cloud-controller:neutron-api'
71 relations['neutron-api:identity-service'] = 'keystone:identity-service'
72 super(NeutronGatewayBasicDeployment, self)._add_relations(relations)73 super(NeutronGatewayBasicDeployment, self)._add_relations(relations)
7374
74 def _configure_services(self):75 def _configure_services(self):
@@ -83,16 +84,16 @@
             openstack_origin_git = {
                 'repositories': [
                     {'name': 'requirements',
-                     'repository': 'git://github.com/openstack/requirements',
+                     'repository': 'git://github.com/openstack/requirements',  # noqa
                      'branch': branch},
                     {'name': 'neutron-fwaas',
-                     'repository': 'git://github.com/openstack/neutron-fwaas',
+                     'repository': 'git://github.com/openstack/neutron-fwaas',  # noqa
                      'branch': branch},
                     {'name': 'neutron-lbaas',
-                     'repository': 'git://github.com/openstack/neutron-lbaas',
+                     'repository': 'git://github.com/openstack/neutron-lbaas',  # noqa
                      'branch': branch},
                     {'name': 'neutron-vpnaas',
-                     'repository': 'git://github.com/openstack/neutron-vpnaas',
+                     'repository': 'git://github.com/openstack/neutron-vpnaas',  # noqa
                      'branch': branch},
                     {'name': 'neutron',
                      'repository': 'git://github.com/openstack/neutron',
@@ -122,7 +123,10 @@
                 'http_proxy': amulet_http_proxy,
                 'https_proxy': amulet_http_proxy,
             }
-            neutron_gateway_config['openstack-origin-git'] = yaml.dump(openstack_origin_git)
+
+            neutron_gateway_config['openstack-origin-git'] = \
+                yaml.dump(openstack_origin_git)
+
         keystone_config = {'admin-password': 'openstack',
                            'admin-token': 'ubuntutesting'}
         nova_cc_config = {'network-manager': 'Quantum',
@@ -137,9 +141,10 @@
         # Access the sentries for inspecting service units
         self.mysql_sentry = self.d.sentry.unit['mysql/0']
         self.keystone_sentry = self.d.sentry.unit['keystone/0']
-        self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
+        self.rmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
         self.nova_cc_sentry = self.d.sentry.unit['nova-cloud-controller/0']
         self.neutron_gateway_sentry = self.d.sentry.unit['neutron-gateway/0']
+        self.neutron_api_sentry = self.d.sentry.unit['neutron-api/0']
 
         # Let things settle a bit before moving forward
         time.sleep(30)
@@ -150,7 +155,6 @@
                                                       password='openstack',
                                                       tenant='admin')
 
-
         # Authenticate admin with neutron
         ep = self.keystone.service_catalog.url_for(service_type='identity',
                                                    endpoint_type='publicURL')
@@ -160,40 +164,121 @@
160 tenant_name='admin',164 tenant_name='admin',
161 region_name='RegionOne')165 region_name='RegionOne')
162166
163 def test_services(self):167 def test_100_services(self):
164 """Verify the expected services are running on the corresponding168 """Verify the expected services are running on the corresponding
165 service units."""169 service units."""
166 neutron_services = ['status neutron-dhcp-agent',170 neutron_services = ['neutron-dhcp-agent',
167 'status neutron-lbaas-agent',171 'neutron-lbaas-agent',
168 'status neutron-metadata-agent',172 'neutron-metadata-agent',
169 'status neutron-metering-agent',173 'neutron-metering-agent',
170 'status neutron-ovs-cleanup',174 'neutron-ovs-cleanup',
171 'status neutron-plugin-openvswitch-agent']175 'neutron-plugin-openvswitch-agent']
172176
173 if self._get_openstack_release() <= self.trusty_juno:177 if self._get_openstack_release() <= self.trusty_juno:
174 neutron_services.append('status neutron-vpn-agent')178 neutron_services.append('neutron-vpn-agent')
175179
176 nova_cc_services = ['status nova-api-ec2',180 nova_cc_services = ['nova-api-ec2',
177 'status nova-api-os-compute',181 'nova-api-os-compute',
178 'status nova-objectstore',182 'nova-objectstore',
179 'status nova-cert',183 'nova-cert',
180 'status nova-scheduler']184 'nova-scheduler',
181 if self._get_openstack_release() >= self.precise_grizzly:185 'nova-conductor']
182 nova_cc_services.append('status nova-conductor')
183186
184 commands = {187 commands = {
185 self.mysql_sentry: ['status mysql'],188 self.mysql_sentry: ['mysql'],
186 self.keystone_sentry: ['status keystone'],189 self.keystone_sentry: ['keystone'],
187 self.nova_cc_sentry: nova_cc_services,190 self.nova_cc_sentry: nova_cc_services,
188 self.neutron_gateway_sentry: neutron_services191 self.neutron_gateway_sentry: neutron_services
189 }192 }
190193
191 ret = u.validate_services(commands)194 ret = u.validate_services_by_name(commands)
192 if ret:195 if ret:
193 amulet.raise_status(amulet.FAIL, msg=ret)196 amulet.raise_status(amulet.FAIL, msg=ret)
194197
195 def test_neutron_gateway_shared_db_relation(self):198 def test_102_service_catalog(self):
199 """Verify that the service catalog endpoint data is valid."""
200 u.log.debug('Checking keystone service catalog...')
201 endpoint_check = {
202 'adminURL': u.valid_url,
203 'id': u.not_null,
204 'region': 'RegionOne',
205 'publicURL': u.valid_url,
206 'internalURL': u.valid_url
207 }
208 expected = {
209 'network': [endpoint_check],
210 'compute': [endpoint_check],
211 'identity': [endpoint_check]
212 }
213 actual = self.keystone.service_catalog.get_endpoints()
214
215 ret = u.validate_svc_catalog_endpoint_data(expected, actual)
216 if ret:
217 amulet.raise_status(amulet.FAIL, msg=ret)
218
219 def test_104_network_endpoint(self):
220 """Verify the neutron network endpoint data."""
221 u.log.debug('Checking neutron network api endpoint data...')
222 endpoints = self.keystone.endpoints.list()
223 admin_port = internal_port = public_port = '9696'
224 expected = {
225 'id': u.not_null,
226 'region': 'RegionOne',
227 'adminurl': u.valid_url,
228 'internalurl': u.valid_url,
229 'publicurl': u.valid_url,
230 'service_id': u.not_null
231 }
232 ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
233 public_port, expected)
234
235 if ret:
236 amulet.raise_status(amulet.FAIL,
237 msg='glance endpoint: {}'.format(ret))
238
239 def test_110_users(self):
240 """Verify expected users."""
241 u.log.debug('Checking keystone users...')
242 expected = [
243 {'name': 'admin',
244 'enabled': True,
245 'tenantId': u.not_null,
246 'id': u.not_null,
247 'email': 'juju@localhost'},
248 {'name': 'quantum',
249 'enabled': True,
250 'tenantId': u.not_null,
251 'id': u.not_null,
252 'email': 'juju@localhost'}
253 ]
254
255 if self._get_openstack_release() >= self.trusty_kilo:
256 # Kilo or later
257 expected.append({
258 'name': 'nova',
259 'enabled': True,
260 'tenantId': u.not_null,
261 'id': u.not_null,
262 'email': 'juju@localhost'
263 })
264 else:
265 # Juno and earlier
266 expected.append({
267 'name': 's3_ec2_nova',
268 'enabled': True,
269 'tenantId': u.not_null,
270 'id': u.not_null,
271 'email': 'juju@localhost'
272 })
273
274 actual = self.keystone.users.list()
275 ret = u.validate_user_data(expected, actual)
276 if ret:
277 amulet.raise_status(amulet.FAIL, msg=ret)
278
279 def test_200_neutron_gateway_mysql_shared_db_relation(self):
196 """Verify the neutron-gateway to mysql shared-db relation data"""280 """Verify the neutron-gateway to mysql shared-db relation data"""
281 u.log.debug('Checking neutron-gateway:mysql db relation data...')
197 unit = self.neutron_gateway_sentry282 unit = self.neutron_gateway_sentry
198 relation = ['shared-db', 'mysql:shared-db']283 relation = ['shared-db', 'mysql:shared-db']
199 expected = {284 expected = {
@@ -208,8 +293,9 @@
208 message = u.relation_error('neutron-gateway shared-db', ret)293 message = u.relation_error('neutron-gateway shared-db', ret)
209 amulet.raise_status(amulet.FAIL, msg=message)294 amulet.raise_status(amulet.FAIL, msg=message)
210295
211 def test_mysql_shared_db_relation(self):296 def test_201_mysql_neutron_gateway_shared_db_relation(self):
212 """Verify the mysql to neutron-gateway shared-db relation data"""297 """Verify the mysql to neutron-gateway shared-db relation data"""
298 u.log.debug('Checking mysql:neutron-gateway db relation data...')
213 unit = self.mysql_sentry299 unit = self.mysql_sentry
214 relation = ['shared-db', 'neutron-gateway:shared-db']300 relation = ['shared-db', 'neutron-gateway:shared-db']
215 expected = {301 expected = {
@@ -223,8 +309,9 @@
223 message = u.relation_error('mysql shared-db', ret)309 message = u.relation_error('mysql shared-db', ret)
224 amulet.raise_status(amulet.FAIL, msg=message)310 amulet.raise_status(amulet.FAIL, msg=message)
225311
226 def test_neutron_gateway_amqp_relation(self):312 def test_202_neutron_gateway_rabbitmq_amqp_relation(self):
227 """Verify the neutron-gateway to rabbitmq-server amqp relation data"""313 """Verify the neutron-gateway to rabbitmq-server amqp relation data"""
314 u.log.debug('Checking neutron-gateway:rmq amqp relation data...')
228 unit = self.neutron_gateway_sentry315 unit = self.neutron_gateway_sentry
229 relation = ['amqp', 'rabbitmq-server:amqp']316 relation = ['amqp', 'rabbitmq-server:amqp']
230 expected = {317 expected = {
@@ -238,9 +325,10 @@
238 message = u.relation_error('neutron-gateway amqp', ret)325 message = u.relation_error('neutron-gateway amqp', ret)
239 amulet.raise_status(amulet.FAIL, msg=message)326 amulet.raise_status(amulet.FAIL, msg=message)
240327
241 def test_rabbitmq_amqp_relation(self):328 def test_203_rabbitmq_neutron_gateway_amqp_relation(self):
242 """Verify the rabbitmq-server to neutron-gateway amqp relation data"""329 """Verify the rabbitmq-server to neutron-gateway amqp relation data"""
243 unit = self.rabbitmq_sentry330 u.log.debug('Checking rmq:neutron-gateway amqp relation data...')
331 unit = self.rmq_sentry
244 relation = ['amqp', 'neutron-gateway:amqp']332 relation = ['amqp', 'neutron-gateway:amqp']
245 expected = {333 expected = {
246 'private-address': u.valid_ip,334 'private-address': u.valid_ip,
@@ -253,9 +341,11 @@
253 message = u.relation_error('rabbitmq amqp', ret)341 message = u.relation_error('rabbitmq amqp', ret)
254 amulet.raise_status(amulet.FAIL, msg=message)342 amulet.raise_status(amulet.FAIL, msg=message)
255343
256 def test_neutron_gateway_network_service_relation(self):344 def test_204_neutron_gateway_network_service_relation(self):
257 """Verify the neutron-gateway to nova-cc quantum-network-service345 """Verify the neutron-gateway to nova-cc quantum-network-service
258 relation data"""346 relation data"""
347 u.log.debug('Checking neutron-gateway:nova-cc net svc '
348 'relation data...')
259 unit = self.neutron_gateway_sentry349 unit = self.neutron_gateway_sentry
260 relation = ['quantum-network-service',350 relation = ['quantum-network-service',
261 'nova-cloud-controller:quantum-network-service']351 'nova-cloud-controller:quantum-network-service']
@@ -268,9 +358,11 @@
268 message = u.relation_error('neutron-gateway network-service', ret)358 message = u.relation_error('neutron-gateway network-service', ret)
269 amulet.raise_status(amulet.FAIL, msg=message)359 amulet.raise_status(amulet.FAIL, msg=message)
270360
271 def test_nova_cc_network_service_relation(self):361 def test_205_nova_cc_network_service_relation(self):
272 """Verify the nova-cc to neutron-gateway quantum-network-service362 """Verify the nova-cc to neutron-gateway quantum-network-service
273 relation data"""363 relation data"""
364 u.log.debug('Checking nova-cc:neutron-gateway net svc '
365 'relation data...')
274 unit = self.nova_cc_sentry366 unit = self.nova_cc_sentry
275 relation = ['quantum-network-service',367 relation = ['quantum-network-service',
276 'neutron-gateway:quantum-network-service']368 'neutron-gateway:quantum-network-service']
@@ -289,56 +381,178 @@
289 'keystone_host': u.valid_ip,381 'keystone_host': u.valid_ip,
290 'quantum_plugin': 'ovs',382 'quantum_plugin': 'ovs',
291 'auth_host': u.valid_ip,383 'auth_host': u.valid_ip,
292 'service_username': 'quantum_s3_ec2_nova',
293 'service_tenant_name': 'services'384 'service_tenant_name': 'services'
294 }385 }
386
295 if self._get_openstack_release() >= self.trusty_kilo:387 if self._get_openstack_release() >= self.trusty_kilo:
388 # Kilo or later
296 expected['service_username'] = 'nova'389 expected['service_username'] = 'nova'
390 else:
391 # Juno or earlier
392 expected['service_username'] = 's3_ec2_nova'
297393
298 ret = u.validate_relation_data(unit, relation, expected)394 ret = u.validate_relation_data(unit, relation, expected)
299 if ret:395 if ret:
300 message = u.relation_error('nova-cc network-service', ret)396 message = u.relation_error('nova-cc network-service', ret)
301 amulet.raise_status(amulet.FAIL, msg=message)397 amulet.raise_status(amulet.FAIL, msg=message)
302398
303 def test_z_restart_on_config_change(self):399 def test_206_neutron_api_shared_db_relation(self):
304 """Verify that the specified services are restarted when the config400 """Verify the neutron-api to mysql shared-db relation data"""
305 is changed.401 u.log.debug('Checking neutron-api:mysql db relation data...')
306402 unit = self.neutron_api_sentry
307 Note(coreycb): The method name with the _z_ is a little odd403 relation = ['shared-db', 'mysql:shared-db']
308 but it forces the test to run last. It just makes things404 expected = {
309 easier because restarting services requires re-authorization.405 'private-address': u.valid_ip,
310 """406 'database': 'neutron',
311 conf = '/etc/neutron/neutron.conf'407 'username': 'neutron',
312408 'hostname': u.valid_ip
313 services = ['neutron-dhcp-agent',409 }
314 'neutron-lbaas-agent',410
315 'neutron-metadata-agent',411 ret = u.validate_relation_data(unit, relation, expected)
316 'neutron-metering-agent',412 if ret:
317 'neutron-openvswitch-agent']413 message = u.relation_error('neutron-api shared-db', ret)
318414 amulet.raise_status(amulet.FAIL, msg=message)
319 if self._get_openstack_release() <= self.trusty_juno:415
320 services.append('neutron-vpn-agent')416 def test_207_shared_db_neutron_api_relation(self):
321417 """Verify the mysql to neutron-api shared-db relation data"""
322 u.log.debug("Making config change on neutron-gateway...")418 u.log.debug('Checking mysql:neutron-api db relation data...')
323 self.d.configure('neutron-gateway', {'debug': 'True'})419 unit = self.mysql_sentry
324420 relation = ['shared-db', 'neutron-api:shared-db']
325 time = 60421 expected = {
326 for s in services:422 'db_host': u.valid_ip,
327 u.log.debug("Checking that service restarted: {}".format(s))423 'private-address': u.valid_ip,
328 if not u.service_restarted(self.neutron_gateway_sentry, s, conf,424 'password': u.not_null
329 pgrep_full=True, sleep_time=time):425 }
330 self.d.configure('neutron-gateway', {'debug': 'False'})426
331 msg = "service {} didn't restart after config change".format(s)427 if self._get_openstack_release() == self.precise_icehouse:
332 amulet.raise_status(amulet.FAIL, msg=msg)428 # Precise
333 time = 0429 expected['allowed_units'] = 'nova-cloud-controller/0 neutron-api/0'
334430 else:
335 self.d.configure('neutron-gateway', {'debug': 'False'})431 # Not Precise
336432 expected['allowed_units'] = 'neutron-api/0'
337 def test_neutron_config(self):433
434 ret = u.validate_relation_data(unit, relation, expected)
435 if ret:
436 message = u.relation_error('mysql shared-db', ret)
437 amulet.raise_status(amulet.FAIL, msg=message)
438
439 def test_208_neutron_api_amqp_relation(self):
440 """Verify the neutron-api to rabbitmq-server amqp relation data"""
441 u.log.debug('Checking neutron-api:amqp relation data...')
442 unit = self.neutron_api_sentry
443 relation = ['amqp', 'rabbitmq-server:amqp']
444 expected = {
445 'username': 'neutron',
446 'private-address': u.valid_ip,
447 'vhost': 'openstack'
448 }
449
450 ret = u.validate_relation_data(unit, relation, expected)
451 if ret:
452 message = u.relation_error('neutron-api amqp', ret)
453 amulet.raise_status(amulet.FAIL, msg=message)
454
455 def test_209_amqp_neutron_api_relation(self):
456 """Verify the rabbitmq-server to neutron-api amqp relation data"""
457 u.log.debug('Checking amqp:neutron-api relation data...')
458 unit = self.rmq_sentry
459 relation = ['amqp', 'neutron-api:amqp']
460 expected = {
461 'hostname': u.valid_ip,
462 'private-address': u.valid_ip,
463 'password': u.not_null
464 }
465
466 ret = u.validate_relation_data(unit, relation, expected)
467 if ret:
468 message = u.relation_error('rabbitmq amqp', ret)
469 amulet.raise_status(amulet.FAIL, msg=message)
470
471 def test_210_neutron_api_keystone_identity_relation(self):
472 """Verify the neutron-api to keystone identity-service relation data"""
473 u.log.debug('Checking neutron-api:keystone id relation data...')
474 unit = self.neutron_api_sentry
475 relation = ['identity-service', 'keystone:identity-service']
476 api_ip = unit.relation('identity-service',
477 'keystone:identity-service')['private-address']
478 api_endpoint = 'http://{}:9696'.format(api_ip)
479 expected = {
480 'private-address': u.valid_ip,
481 'quantum_region': 'RegionOne',
482 'quantum_service': 'quantum',
483 'quantum_admin_url': api_endpoint,
484 'quantum_internal_url': api_endpoint,
485 'quantum_public_url': api_endpoint,
486 }
487
488 ret = u.validate_relation_data(unit, relation, expected)
489 if ret:
490 message = u.relation_error('neutron-api identity-service', ret)
491 amulet.raise_status(amulet.FAIL, msg=message)
492
493 def test_211_keystone_neutron_api_identity_relation(self):
494 """Verify the keystone to neutron-api identity-service relation data"""
495 u.log.debug('Checking keystone:neutron-api id relation data...')
496 unit = self.keystone_sentry
497 relation = ['identity-service', 'neutron-api:identity-service']
498 rel_ks_id = unit.relation('identity-service',
499 'neutron-api:identity-service')
500 id_ip = rel_ks_id['private-address']
501 expected = {
502 'admin_token': 'ubuntutesting',
503 'auth_host': id_ip,
504 'auth_port': "35357",
505 'auth_protocol': 'http',
506 'private-address': id_ip,
507 'service_host': id_ip,
508 }
509 ret = u.validate_relation_data(unit, relation, expected)
510 if ret:
511 message = u.relation_error('neutron-api identity-service', ret)
512 amulet.raise_status(amulet.FAIL, msg=message)
513
514 def test_212_neutron_api_novacc_relation(self):
515 """Verify the neutron-api to nova-cloud-controller relation data"""
516 u.log.debug('Checking neutron-api:novacc relation data...')
517 unit = self.neutron_api_sentry
518 relation = ['neutron-api', 'nova-cloud-controller:neutron-api']
519 api_ip = unit.relation('identity-service',
520 'keystone:identity-service')['private-address']
521 api_endpoint = 'http://{}:9696'.format(api_ip)
522 expected = {
523 'private-address': api_ip,
524 'neutron-plugin': 'ovs',
525 'neutron-security-groups': "no",
526 'neutron-url': api_endpoint,
527 }
528 ret = u.validate_relation_data(unit, relation, expected)
529 if ret:
530 message = u.relation_error('neutron-api neutron-api', ret)
531 amulet.raise_status(amulet.FAIL, msg=message)
532
533 def test_213_novacc_neutron_api_relation(self):
534 """Verify the nova-cloud-controller to neutron-api relation data"""
535 u.log.debug('Checking novacc:neutron-api relation data...')
536 unit = self.nova_cc_sentry
537 relation = ['neutron-api', 'neutron-api:neutron-api']
538 cc_ip = unit.relation('neutron-api',
539 'neutron-api:neutron-api')['private-address']
540 cc_endpoint = 'http://{}:8774/v2'.format(cc_ip)
541 expected = {
542 'private-address': cc_ip,
543 'nova_url': cc_endpoint,
544 }
545 ret = u.validate_relation_data(unit, relation, expected)
546 if ret:
547 message = u.relation_error('nova-cc neutron-api', ret)
548 amulet.raise_status(amulet.FAIL, msg=message)
549
550 def test_300_neutron_config(self):
338 """Verify the data in the neutron config file."""551 """Verify the data in the neutron config file."""
552 u.log.debug('Checking neutron gateway config file data...')
339 unit = self.neutron_gateway_sentry553 unit = self.neutron_gateway_sentry
340 rabbitmq_relation = self.rabbitmq_sentry.relation('amqp',554 rmq_ng_rel = self.rmq_sentry.relation(
341 'neutron-gateway:amqp')555 'amqp', 'neutron-gateway:amqp')
342556
343 conf = '/etc/neutron/neutron.conf'557 conf = '/etc/neutron/neutron.conf'
344 expected = {558 expected = {
@@ -350,35 +564,34 @@
350 'notification_driver': 'neutron.openstack.common.notifier.'564 'notification_driver': 'neutron.openstack.common.notifier.'
351 'list_notifier',565 'list_notifier',
352 'list_notifier_drivers': 'neutron.openstack.common.'566 'list_notifier_drivers': 'neutron.openstack.common.'
353 'notifier.rabbit_notifier'567 'notifier.rabbit_notifier',
354 },568 },
355 'agent': {569 'agent': {
356 'root_helper': 'sudo /usr/bin/neutron-rootwrap '570 'root_helper': 'sudo /usr/bin/neutron-rootwrap '
357 '/etc/neutron/rootwrap.conf'571 '/etc/neutron/rootwrap.conf'
358 }572 }
359 }573 }
574
360 if self._get_openstack_release() >= self.trusty_kilo:575 if self._get_openstack_release() >= self.trusty_kilo:
361 oslo_concurrency = {576 # Kilo or later
362 'oslo_concurrency': {577 expected['oslo_messaging_rabbit'] = {
363 'lock_path':'/var/lock/neutron'578 'rabbit_userid': 'neutron',
364 }579 'rabbit_virtual_host': 'openstack',
365 }580 'rabbit_password': rmq_ng_rel['password'],
366 oslo_messaging_rabbit = {581 'rabbit_host': rmq_ng_rel['hostname'],
367 'oslo_messaging_rabbit': {582 }
368 'rabbit_userid': 'neutron',583 expected['oslo_concurrency'] = {
369 'rabbit_virtual_host': 'openstack',584 'lock_path': '/var/lock/neutron'
370 'rabbit_password': rabbitmq_relation['password'],585 }
371 'rabbit_host': rabbitmq_relation['hostname'],
372 }
373 }
374 expected.update(oslo_concurrency)
375 expected.update(oslo_messaging_rabbit)
376 else:586 else:
377 expected['DEFAULT']['lock_path'] = '/var/lock/neutron'587 # Juno or earlier
378 expected['DEFAULT']['rabbit_userid'] = 'neutron'588 expected['DEFAULT'].update({
379 expected['DEFAULT']['rabbit_virtual_host'] = 'openstack'589 'rabbit_userid': 'neutron',
380 expected['DEFAULT']['rabbit_password'] = rabbitmq_relation['password']590 'rabbit_virtual_host': 'openstack',
381 expected['DEFAULT']['rabbit_host'] = rabbitmq_relation['hostname']591 'rabbit_password': rmq_ng_rel['password'],
592 'rabbit_host': rmq_ng_rel['hostname'],
593 'lock_path': '/var/lock/neutron',
594 })
382595
383 for section, pairs in expected.iteritems():596 for section, pairs in expected.iteritems():
384 ret = u.validate_config_data(unit, conf, section, pairs)597 ret = u.validate_config_data(unit, conf, section, pairs)
@@ -386,15 +599,17 @@
386 message = "neutron config error: {}".format(ret)599 message = "neutron config error: {}".format(ret)
387 amulet.raise_status(amulet.FAIL, msg=message)600 amulet.raise_status(amulet.FAIL, msg=message)
388601
389 def test_ml2_config(self):602 def test_301_neutron_ml2_config(self):
390 """Verify the data in the ml2 config file. This is only available603 """Verify the data in the ml2 config file. This is only available
391 since icehouse."""604 since icehouse."""
605 u.log.debug('Checking neutron gateway ml2 config file data...')
392 if self._get_openstack_release() < self.precise_icehouse:606 if self._get_openstack_release() < self.precise_icehouse:
393 return607 return
394608
395 unit = self.neutron_gateway_sentry609 unit = self.neutron_gateway_sentry
396 conf = '/etc/neutron/plugins/ml2/ml2_conf.ini'610 conf = '/etc/neutron/plugins/ml2/ml2_conf.ini'
397 neutron_gateway_relation = unit.relation('shared-db', 'mysql:shared-db')611 ng_db_rel = unit.relation('shared-db', 'mysql:shared-db')
612
398 expected = {613 expected = {
399 'ml2': {614 'ml2': {
400 'type_drivers': 'gre,vxlan,vlan,flat',615 'type_drivers': 'gre,vxlan,vlan,flat',
@@ -409,7 +624,7 @@
409 },624 },
410 'ovs': {625 'ovs': {
411 'enable_tunneling': 'True',626 'enable_tunneling': 'True',
412 'local_ip': neutron_gateway_relation['private-address']627 'local_ip': ng_db_rel['private-address']
413 },628 },
414 'agent': {629 'agent': {
415 'tunnel_types': 'gre',630 'tunnel_types': 'gre',
@@ -427,8 +642,9 @@
427 message = "ml2 config error: {}".format(ret)642 message = "ml2 config error: {}".format(ret)
428 amulet.raise_status(amulet.FAIL, msg=message)643 amulet.raise_status(amulet.FAIL, msg=message)
429644
430 def test_dhcp_agent_config(self):645 def test_302_neutron_dhcp_agent_config(self):
431 """Verify the data in the dhcp agent config file."""646 """Verify the data in the dhcp agent config file."""
647 u.log.debug('Checking neutron gateway dhcp agent config file data...')
432 unit = self.neutron_gateway_sentry648 unit = self.neutron_gateway_sentry
433 conf = '/etc/neutron/dhcp_agent.ini'649 conf = '/etc/neutron/dhcp_agent.ini'
434 expected = {650 expected = {
@@ -440,44 +656,45 @@
440 '/etc/neutron/rootwrap.conf',656 '/etc/neutron/rootwrap.conf',
441 'ovs_use_veth': 'True'657 'ovs_use_veth': 'True'
442 }658 }
659 section = 'DEFAULT'
443660
444 ret = u.validate_config_data(unit, conf, 'DEFAULT', expected)661 ret = u.validate_config_data(unit, conf, section, expected)
445 if ret:662 if ret:
446 message = "dhcp agent config error: {}".format(ret)663 message = "dhcp agent config error: {}".format(ret)
447 amulet.raise_status(amulet.FAIL, msg=message)664 amulet.raise_status(amulet.FAIL, msg=message)
448665
449 def test_fwaas_driver_config(self):666 def test_303_neutron_fwaas_driver_config(self):
450 """Verify the data in the fwaas driver config file. This is only667 """Verify the data in the fwaas driver config file. This is only
451 available since havana."""668 available since havana."""
452 if self._get_openstack_release() < self.precise_havana:669 u.log.debug('Checking neutron gateway fwaas config file data...')
453 return
454
455 unit = self.neutron_gateway_sentry670 unit = self.neutron_gateway_sentry
456 conf = '/etc/neutron/fwaas_driver.ini'671 conf = '/etc/neutron/fwaas_driver.ini'
672 expected = {
673 'enabled': 'True'
674 }
675 section = 'fwaas'
676
457 if self._get_openstack_release() >= self.trusty_kilo:677 if self._get_openstack_release() >= self.trusty_kilo:
458 expected = {678 # Kilo or later
459 'driver': 'neutron_fwaas.services.firewall.drivers.'679 expected['driver'] = ('neutron_fwaas.services.firewall.drivers.'
460 'linux.iptables_fwaas.IptablesFwaasDriver',680 'linux.iptables_fwaas.IptablesFwaasDriver')
461 'enabled': 'True'
462 }
463 else:681 else:
464 expected = {682 # Juno or earlier
465 'driver': 'neutron.services.firewall.drivers.'683 expected['driver'] = ('neutron.services.firewall.drivers.linux.'
466 'linux.iptables_fwaas.IptablesFwaasDriver',684 'iptables_fwaas.IptablesFwaasDriver')
467 'enabled': 'True'
468 }
469685
470 ret = u.validate_config_data(unit, conf, 'fwaas', expected)686 ret = u.validate_config_data(unit, conf, section, expected)
471 if ret:687 if ret:
472 message = "fwaas driver config error: {}".format(ret)688 message = "fwaas driver config error: {}".format(ret)
473 amulet.raise_status(amulet.FAIL, msg=message)689 amulet.raise_status(amulet.FAIL, msg=message)
474690
475 def test_l3_agent_config(self):691 def test_304_neutron_l3_agent_config(self):
476 """Verify the data in the l3 agent config file."""692 """Verify the data in the l3 agent config file."""
693 u.log.debug('Checking neutron gateway l3 agent config file data...')
477 unit = self.neutron_gateway_sentry694 unit = self.neutron_gateway_sentry
478 nova_cc_relation = self.nova_cc_sentry.relation(\695 ncc_ng_rel = self.nova_cc_sentry.relation(
479 'quantum-network-service',696 'quantum-network-service',
480 'neutron-gateway:quantum-network-service')697 'neutron-gateway:quantum-network-service')
481 ep = self.keystone.service_catalog.url_for(service_type='identity',698 ep = self.keystone.service_catalog.url_for(service_type='identity',
482 endpoint_type='publicURL')699 endpoint_type='publicURL')
483700
@@ -488,24 +705,30 @@
488 'auth_url': ep,705 'auth_url': ep,
489 'auth_region': 'RegionOne',706 'auth_region': 'RegionOne',
490 'admin_tenant_name': 'services',707 'admin_tenant_name': 'services',
491 'admin_user': 'quantum_s3_ec2_nova',708 'admin_password': ncc_ng_rel['service_password'],
492 'admin_password': nova_cc_relation['service_password'],
493 'root_helper': 'sudo /usr/bin/neutron-rootwrap '709 'root_helper': 'sudo /usr/bin/neutron-rootwrap '
494 '/etc/neutron/rootwrap.conf',710 '/etc/neutron/rootwrap.conf',
495 'ovs_use_veth': 'True',711 'ovs_use_veth': 'True',
496 'handle_internal_only_routers': 'True'712 'handle_internal_only_routers': 'True'
497 }713 }
714 section = 'DEFAULT'
715
498 if self._get_openstack_release() >= self.trusty_kilo:716 if self._get_openstack_release() >= self.trusty_kilo:
717 # Kilo or later
499 expected['admin_user'] = 'nova'718 expected['admin_user'] = 'nova'
719 else:
720 # Juno or earlier
721 expected['admin_user'] = 's3_ec2_nova'
500722
501 ret = u.validate_config_data(unit, conf, 'DEFAULT', expected)723 ret = u.validate_config_data(unit, conf, section, expected)
502 if ret:724 if ret:
503 message = "l3 agent config error: {}".format(ret)725 message = "l3 agent config error: {}".format(ret)
504 amulet.raise_status(amulet.FAIL, msg=message)726 amulet.raise_status(amulet.FAIL, msg=message)
505727
506 def test_lbaas_agent_config(self):728 def test_305_neutron_lbaas_agent_config(self):
507 """Verify the data in the lbaas agent config file. This is only729 """Verify the data in the lbaas agent config file. This is only
508 available since havana."""730 available since havana."""
731 u.log.debug('Checking neutron gateway lbaas config file data...')
509 if self._get_openstack_release() < self.precise_havana:732 if self._get_openstack_release() < self.precise_havana:
510 return733 return
511734
@@ -513,21 +736,27 @@
513 conf = '/etc/neutron/lbaas_agent.ini'736 conf = '/etc/neutron/lbaas_agent.ini'
514 expected = {737 expected = {
515 'DEFAULT': {738 'DEFAULT': {
516 'periodic_interval': '10',
517 'interface_driver': 'neutron.agent.linux.interface.'739 'interface_driver': 'neutron.agent.linux.interface.'
518 'OVSInterfaceDriver',740 'OVSInterfaceDriver',
741 'periodic_interval': '10',
519 'ovs_use_veth': 'False',742 'ovs_use_veth': 'False',
520 'device_driver': 'neutron.services.loadbalancer.drivers.'
521 'haproxy.namespace_driver.HaproxyNSDriver'
522 },743 },
523 'haproxy': {744 'haproxy': {
524 'loadbalancer_state_path': '$state_path/lbaas',745 'loadbalancer_state_path': '$state_path/lbaas',
525 'user_group': 'nogroup'746 'user_group': 'nogroup'
526 }747 }
527 }748 }
749
528 if self._get_openstack_release() >= self.trusty_kilo:750 if self._get_openstack_release() >= self.trusty_kilo:
529 expected['DEFAULT']['device_driver'] = ('neutron_lbaas.services.' +751 # Kilo or later
530 'loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver')752 expected['DEFAULT']['device_driver'] = \
753 ('neutron_lbaas.services.loadbalancer.drivers.haproxy.'
754 'namespace_driver.HaproxyNSDriver')
755 else:
756 # Juno or earlier
757 expected['DEFAULT']['device_driver'] = \
758 ('neutron.services.loadbalancer.drivers.haproxy.'
759 'namespace_driver.HaproxyNSDriver')
531760
532 for section, pairs in expected.iteritems():761 for section, pairs in expected.iteritems():
533 ret = u.validate_config_data(unit, conf, section, pairs)762 ret = u.validate_config_data(unit, conf, section, pairs)
@@ -535,46 +764,51 @@
535 message = "lbaas agent config error: {}".format(ret)764 message = "lbaas agent config error: {}".format(ret)
536 amulet.raise_status(amulet.FAIL, msg=message)765 amulet.raise_status(amulet.FAIL, msg=message)
537766
538 def test_metadata_agent_config(self):767 def test_306_neutron_metadata_agent_config(self):
539 """Verify the data in the metadata agent config file."""768 """Verify the data in the metadata agent config file."""
769 u.log.debug('Checking neutron gateway metadata agent '
770 'config file data...')
540 unit = self.neutron_gateway_sentry771 unit = self.neutron_gateway_sentry
541 ep = self.keystone.service_catalog.url_for(service_type='identity',772 ep = self.keystone.service_catalog.url_for(service_type='identity',
542 endpoint_type='publicURL')773 endpoint_type='publicURL')
543 neutron_gateway_relation = unit.relation('shared-db', 'mysql:shared-db')774 ng_db_rel = unit.relation('shared-db',
544 nova_cc_relation = self.nova_cc_sentry.relation(\775 'mysql:shared-db')
545 'quantum-network-service',776 nova_cc_relation = self.nova_cc_sentry.relation(
546 'neutron-gateway:quantum-network-service')777 'quantum-network-service',
778 'neutron-gateway:quantum-network-service')
547779
548 conf = '/etc/neutron/metadata_agent.ini'780 conf = '/etc/neutron/metadata_agent.ini'
549 expected = {781 expected = {
550 'auth_url': ep,782 'auth_url': ep,
551 'auth_region': 'RegionOne',783 'auth_region': 'RegionOne',
552 'admin_tenant_name': 'services',784 'admin_tenant_name': 'services',
553 'admin_user': 'quantum_s3_ec2_nova',
554 'admin_password': nova_cc_relation['service_password'],785 'admin_password': nova_cc_relation['service_password'],
555 'root_helper': 'sudo neutron-rootwrap '786 'root_helper': 'sudo neutron-rootwrap '
556 '/etc/neutron/rootwrap.conf',787 '/etc/neutron/rootwrap.conf',
557 'state_path': '/var/lib/neutron',788 'state_path': '/var/lib/neutron',
558 'nova_metadata_ip': neutron_gateway_relation['private-address'],789 'nova_metadata_ip': ng_db_rel['private-address'],
559 'nova_metadata_port': '8775'790 'nova_metadata_port': '8775',
791 'cache_url': 'memory://?default_ttl=5'
560 }792 }
793 section = 'DEFAULT'
794
561 if self._get_openstack_release() >= self.trusty_kilo:795 if self._get_openstack_release() >= self.trusty_kilo:
796 # Kilo or later
562 expected['admin_user'] = 'nova'797 expected['admin_user'] = 'nova'
563798 else:
564 if self._get_openstack_release() >= self.precise_icehouse:799 # Juno or earlier
565 expected['cache_url'] = 'memory://?default_ttl=5'800 expected['admin_user'] = 's3_ec2_nova'
566801
567 ret = u.validate_config_data(unit, conf, 'DEFAULT', expected)802 ret = u.validate_config_data(unit, conf, section, expected)
568 if ret:803 if ret:
569 message = "metadata agent config error: {}".format(ret)804 message = "metadata agent config error: {}".format(ret)
570 amulet.raise_status(amulet.FAIL, msg=message)805 amulet.raise_status(amulet.FAIL, msg=message)
571806
572 def test_metering_agent_config(self):807 def test_307_neutron_metering_agent_config(self):
573 """Verify the data in the metering agent config file. This is only808 """Verify the data in the metering agent config file. This is only
574 available since havana."""809 available since havana."""
575 if self._get_openstack_release() < self.precise_havana:810 u.log.debug('Checking neutron gateway metering agent '
576 return811 'config file data...')
577
578 unit = self.neutron_gateway_sentry812 unit = self.neutron_gateway_sentry
579 conf = '/etc/neutron/metering_agent.ini'813 conf = '/etc/neutron/metering_agent.ini'
580 expected = {814 expected = {
@@ -586,26 +820,24 @@
586 'OVSInterfaceDriver',820 'OVSInterfaceDriver',
587 'use_namespaces': 'True'821 'use_namespaces': 'True'
588 }822 }
823 section = 'DEFAULT'
589824
590 ret = u.validate_config_data(unit, conf, 'DEFAULT', expected)825 ret = u.validate_config_data(unit, conf, section, expected)
591 if ret:826 if ret:
592 message = "metering agent config error: {}".format(ret)827 message = "metering agent config error: {}".format(ret)
828 amulet.raise_status(amulet.FAIL, msg=message)
593829
594 def test_nova_config(self):830 def test_308_neutron_nova_config(self):
595 """Verify the data in the nova config file."""831 """Verify the data in the nova config file."""
832 u.log.debug('Checking neutron gateway nova config file data...')
596 unit = self.neutron_gateway_sentry833 unit = self.neutron_gateway_sentry
597 conf = '/etc/nova/nova.conf'834 conf = '/etc/nova/nova.conf'
598 mysql_relation = self.mysql_sentry.relation('shared-db',835
599 'neutron-gateway:shared-db')836 rabbitmq_relation = self.rmq_sentry.relation(
600 db_uri = "mysql://{}:{}@{}/{}".format('nova',837 'amqp', 'neutron-gateway:amqp')
601 mysql_relation['password'],838 nova_cc_relation = self.nova_cc_sentry.relation(
602 mysql_relation['db_host'],839 'quantum-network-service',
603 'nova')840 'neutron-gateway:quantum-network-service')
604 rabbitmq_relation = self.rabbitmq_sentry.relation('amqp',
605 'neutron-gateway:amqp')
606 nova_cc_relation = self.nova_cc_sentry.relation(\
607 'quantum-network-service',
608 'neutron-gateway:quantum-network-service')
609 ep = self.keystone.service_catalog.url_for(service_type='identity',841 ep = self.keystone.service_catalog.url_for(service_type='identity',
610 endpoint_type='publicURL')842 endpoint_type='publicURL')
611843
@@ -622,49 +854,44 @@
622 'network_api_class': 'nova.network.neutronv2.api.API',854 'network_api_class': 'nova.network.neutronv2.api.API',
623 }855 }
624 }856 }
857
625 if self._get_openstack_release() >= self.trusty_kilo:858 if self._get_openstack_release() >= self.trusty_kilo:
626 neutron = {859 # Kilo or later
627 'neutron': {860 expected['oslo_messaging_rabbit'] = {
628 'auth_strategy': 'keystone',861 'rabbit_userid': 'neutron',
629 'url': nova_cc_relation['quantum_url'],862 'rabbit_virtual_host': 'openstack',
630 'admin_tenant_name': 'services',863 'rabbit_password': rabbitmq_relation['password'],
631 'admin_username': 'nova',864 'rabbit_host': rabbitmq_relation['hostname'],
632 'admin_password': nova_cc_relation['service_password'],865 }
633 'admin_auth_url': ep,866 expected['oslo_concurrency'] = {
634 'service_metadata_proxy': 'True',867 'lock_path': '/var/lock/nova'
635 }868 }
636 }869 expected['neutron'] = {
637 oslo_concurrency = {870 'auth_strategy': 'keystone',
638 'oslo_concurrency': {871 'url': nova_cc_relation['quantum_url'],
639 'lock_path':'/var/lock/nova'872 'admin_tenant_name': 'services',
640 }873 'admin_username': 'nova',
641 }874 'admin_password': nova_cc_relation['service_password'],
642 oslo_messaging_rabbit = {875 'admin_auth_url': ep,
643 'oslo_messaging_rabbit': {876 'service_metadata_proxy': 'True',
644 'rabbit_userid': 'neutron',877 'metadata_proxy_shared_secret': u.not_null
645 'rabbit_virtual_host': 'openstack',878 }
646 'rabbit_password': rabbitmq_relation['password'],
647 'rabbit_host': rabbitmq_relation['hostname'],
648 }
649 }
650 expected.update(neutron)
651 expected.update(oslo_concurrency)
652 expected.update(oslo_messaging_rabbit)
653 else:879 else:
654 d = 'DEFAULT'880 # Juno or earlier
655 expected[d]['lock_path'] = '/var/lock/nova'881 expected['DEFAULT'].update({
656 expected[d]['rabbit_userid'] = 'neutron'882 'rabbit_userid': 'neutron',
657 expected[d]['rabbit_virtual_host'] = 'openstack'883 'rabbit_virtual_host': 'openstack',
658 expected[d]['rabbit_password'] = rabbitmq_relation['password']884 'rabbit_password': rabbitmq_relation['password'],
659 expected[d]['rabbit_host'] = rabbitmq_relation['hostname']885 'rabbit_host': rabbitmq_relation['hostname'],
660 expected[d]['service_neutron_metadata_proxy'] = 'True'886 'lock_path': '/var/lock/nova',
661 expected[d]['neutron_auth_strategy'] = 'keystone'887 'neutron_auth_strategy': 'keystone',
662 expected[d]['neutron_url'] = nova_cc_relation['quantum_url']888 'neutron_url': nova_cc_relation['quantum_url'],
663 expected[d]['neutron_admin_tenant_name'] = 'services'889 'neutron_admin_tenant_name': 'services',
664 expected[d]['neutron_admin_username'] = 'quantum_s3_ec2_nova'890 'neutron_admin_username': 's3_ec2_nova',
665 expected[d]['neutron_admin_password'] = \891 'neutron_admin_password': nova_cc_relation['service_password'],
666 nova_cc_relation['service_password']892 'neutron_admin_auth_url': ep,
667 expected[d]['neutron_admin_auth_url'] = ep893 'service_neutron_metadata_proxy': 'True',
894 })
668895
669 for section, pairs in expected.iteritems():896 for section, pairs in expected.iteritems():
670 ret = u.validate_config_data(unit, conf, section, pairs)897 ret = u.validate_config_data(unit, conf, section, pairs)
@@ -672,56 +899,30 @@
672 message = "nova config error: {}".format(ret)899 message = "nova config error: {}".format(ret)
673 amulet.raise_status(amulet.FAIL, msg=message)900 amulet.raise_status(amulet.FAIL, msg=message)
674901
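
For readers unfamiliar with the helper, u.validate_config_data compares an expected {section: {key: value}} mapping against the config file on the unit. A rough standalone approximation with the stdlib config parser is shown below; it is not the charm-helpers implementation, and the callable branch (for sentinels such as u.not_null) is an illustrative assumption.

    # Rough approximation of a section/key config check (illustrative only).
    import ConfigParser  # 'configparser' on Python 3

    def check_expected(conf_path, expected):
        parser = ConfigParser.RawConfigParser()  # raw: no '%' interpolation
        parser.read(conf_path)
        errors = []
        for section, pairs in expected.items():
            for key, want in pairs.items():
                if not parser.has_option(section, key):
                    errors.append('{}:{} missing'.format(section, key))
                    continue
                actual = parser.get(section, key)
                if callable(want):
                    # e.g. a sentinel such as u.not_null (assumed callable here)
                    if not want(actual):
                        errors.append('{}:{} failed check'.format(section, key))
                elif actual != want:
                    errors.append('{}:{} is {}, expected {}'.format(
                        section, key, actual, want))
        return errors  # empty list means the file matched expectations
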
675 def test_ovs_neutron_plugin_config(self):902 def test_309_neutron_vpn_agent_config(self):
676 """Verify the data in the ovs neutron plugin config file. The ovs
677 plugin is not used by default since icehouse."""
678 if self._get_openstack_release() >= self.precise_icehouse:
679 return
680
681 unit = self.neutron_gateway_sentry
682 neutron_gateway_relation = unit.relation('shared-db', 'mysql:shared-db')
683
684 conf = '/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini'
685 expected = {
686 'ovs': {
687 'local_ip': neutron_gateway_relation['private-address'],
688 'tenant_network_type': 'gre',
689 'enable_tunneling': 'True',
690 'tunnel_id_ranges': '1:1000'
691 },
692 'agent': {
693 'polling_interval': '10',
694 'root_helper': 'sudo /usr/bin/neutron-rootwrap '
695 '/etc/neutron/rootwrap.conf'
696 }
697 }
698
699 for section, pairs in expected.iteritems():
700 ret = u.validate_config_data(unit, conf, section, pairs)
701 if ret:
702 message = "ovs neutron plugin config error: {}".format(ret)
703 amulet.raise_status(amulet.FAIL, msg=message)
704
705 def test_vpn_agent_config(self):
706 """Verify the data in the vpn agent config file. This isn't available903 """Verify the data in the vpn agent config file. This isn't available
707 prior to havana."""904 prior to havana."""
708 if self._get_openstack_release() < self.precise_havana:905 u.log.debug('Checking neutron gateway vpn agent config file data...')
709 return
710
711 unit = self.neutron_gateway_sentry906 unit = self.neutron_gateway_sentry
712 conf = '/etc/neutron/vpn_agent.ini'907 conf = '/etc/neutron/vpn_agent.ini'
713 expected = {908 expected = {
714 'vpnagent': {909 'ipsec': {
910 'ipsec_status_check_interval': '60'
911 }
912 }
913
914 if self._get_openstack_release() >= self.trusty_kilo:
915 # Kilo or later
916 expected['vpnagent'] = {
917 'vpn_device_driver': 'neutron_vpnaas.services.vpn.'
918 'device_drivers.ipsec.OpenSwanDriver'
919 }
920 else:
921 # Juno or earlier
922 expected['vpnagent'] = {
715 'vpn_device_driver': 'neutron.services.vpn.device_drivers.'923 'vpn_device_driver': 'neutron.services.vpn.device_drivers.'
716 'ipsec.OpenSwanDriver'924 'ipsec.OpenSwanDriver'
717 },
718 'ipsec': {
719 'ipsec_status_check_interval': '60'
720 }925 }
721 }
722 if self._get_openstack_release() >= self.trusty_kilo:
723 expected['vpnagent']['vpn_device_driver'] = ('neutron_vpnaas.' +
724 'services.vpn.device_drivers.ipsec.OpenSwanDriver')
725926
726 for section, pairs in expected.iteritems():927 for section, pairs in expected.iteritems():
727 ret = u.validate_config_data(unit, conf, section, pairs)928 ret = u.validate_config_data(unit, conf, section, pairs)
@@ -729,8 +930,9 @@
729 message = "vpn agent config error: {}".format(ret)930 message = "vpn agent config error: {}".format(ret)
730 amulet.raise_status(amulet.FAIL, msg=message)931 amulet.raise_status(amulet.FAIL, msg=message)
731932
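
The Kilo/Juno branching above (and in the nova config test) is effectively a comparison of positions in an ordered release list, which is what the _get_openstack_release() comparison boils down to. A minimal standalone illustration follows; the hard-coded release list and the 'current' value are assumptions for the example only.

    # Minimal illustration of release-gated expectations (list is an example).
    RELEASES = ['icehouse', 'juno', 'kilo', 'liberty']

    def release_at_least(current, minimum):
        return RELEASES.index(current) >= RELEASES.index(minimum)

    current = 'kilo'  # in the tests this comes from the deployment under test
    if release_at_least(current, 'kilo'):
        driver = ('neutron_vpnaas.services.vpn.device_drivers.'
                  'ipsec.OpenSwanDriver')
    else:
        driver = 'neutron.services.vpn.device_drivers.ipsec.OpenSwanDriver'
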
732 def test_create_network(self):933 def test_400_create_network(self):
733 """Create a network, verify that it exists, and then delete it."""934 """Create a network, verify that it exists, and then delete it."""
935 u.log.debug('Creating neutron network...')
734 self.neutron.format = 'json'936 self.neutron.format = 'json'
735 net_name = 'ext_net'937 net_name = 'ext_net'
736938
@@ -743,7 +945,7 @@
743945
744 # Create a network and verify that it exists946 # Create a network and verify that it exists
745 network = {'name': net_name}947 network = {'name': net_name}
746 self.neutron.create_network({'network':network})948 self.neutron.create_network({'network': network})
747949
748 networks = self.neutron.list_networks(name=net_name)950 networks = self.neutron.list_networks(name=net_name)
749 net_len = len(networks['networks'])951 net_len = len(networks['networks'])
@@ -751,9 +953,57 @@
751 msg = "Expected 1 network, found {}".format(net_len)953 msg = "Expected 1 network, found {}".format(net_len)
752 amulet.raise_status(amulet.FAIL, msg=msg)954 amulet.raise_status(amulet.FAIL, msg=msg)
753955
956 u.log.debug('Confirming new neutron network...')
754 network = networks['networks'][0]957 network = networks['networks'][0]
755 if network['name'] != net_name:958 if network['name'] != net_name:
756 amulet.raise_status(amulet.FAIL, msg="network ext_net not found")959 amulet.raise_status(amulet.FAIL, msg="network ext_net not found")
757960
758 #Cleanup961 #Cleanup
962 u.log.debug('Deleting neutron network...')
759 self.neutron.delete_network(network['id'])963 self.neutron.delete_network(network['id'])
964
965 def test_900_restart_on_config_change(self):
966 """Verify that the specified services are restarted when the
967 config is changed."""
968
969 sentry = self.neutron_gateway_sentry
970 juju_service = 'neutron-gateway'
971
972 # Expected default and alternate values
973 set_default = {'debug': 'False'}
974 set_alternate = {'debug': 'True'}
975
976 # Services which are expected to restart upon config change,
977 # and corresponding config files affected by the change
978 conf_file = '/etc/neutron/neutron.conf'
979 services = {
980 'neutron-dhcp-agent': conf_file,
981 'neutron-lbaas-agent': conf_file,
982 'neutron-metadata-agent': conf_file,
983 'neutron-metering-agent': conf_file,
984 'neutron-openvswitch-agent': conf_file,
985 }
986
987 if self._get_openstack_release() <= self.trusty_juno:
988 services.update({'neutron-vpn-agent': conf_file})
989
990 # Make config change, check for svc restart, conf file mod time change
991 u.log.debug('Making config change on {}...'.format(juju_service))
992 mtime = u.get_sentry_time(sentry)
993 self.d.configure(juju_service, set_alternate)
994
995# sleep_time = 90
996 for s, conf_file in services.iteritems():
997 u.log.debug("Checking that service restarted: {}".format(s))
998 if not u.validate_service_config_changed(sentry, mtime, s,
999 conf_file):
1000# conf_file,
1001# sleep_time=sleep_time):
1002 self.d.configure(juju_service, set_default)
1003 msg = "service {} didn't restart after config change".format(s)
1004 amulet.raise_status(amulet.FAIL, msg=msg)
1005
1006 # Only do initial sleep on first service check
1007# sleep_time = 0
1008
1009 self.d.configure(juju_service, set_default)
7601010
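
Condensed, test_900 above does three things: snapshot the unit's clock, flip a charm config option, then require that each service restarted and the config file was rewritten after that snapshot, reverting the option either way. A stripped-down sketch using the same helpers follows; the service, option and file names are placeholders.

    # Stripped-down restart-on-config-change pattern (names are placeholders).
    mtime = u.get_sentry_time(sentry)              # reference epoch on the unit
    self.d.configure('neutron-gateway', {'debug': 'True'})
    try:
        if not u.validate_service_config_changed(
                sentry, mtime, 'neutron-dhcp-agent',
                '/etc/neutron/neutron.conf'):
            amulet.raise_status(amulet.FAIL,
                                msg='service did not restart on config change')
    finally:
        self.d.configure('neutron-gateway', {'debug': 'False'})
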
=== modified file 'tests/charmhelpers/contrib/amulet/deployment.py'
--- tests/charmhelpers/contrib/amulet/deployment.py 2015-01-23 11:08:26 +0000
+++ tests/charmhelpers/contrib/amulet/deployment.py 2015-09-21 20:38:30 +0000
@@ -51,7 +51,8 @@
51 if 'units' not in this_service:51 if 'units' not in this_service:
52 this_service['units'] = 152 this_service['units'] = 1
5353
54 self.d.add(this_service['name'], units=this_service['units'])54 self.d.add(this_service['name'], units=this_service['units'],
55 constraints=this_service.get('constraints'))
5556
56 for svc in other_services:57 for svc in other_services:
57 if 'location' in svc:58 if 'location' in svc:
@@ -64,7 +65,8 @@
64 if 'units' not in svc:65 if 'units' not in svc:
65 svc['units'] = 166 svc['units'] = 1
6667
67 self.d.add(svc['name'], charm=branch_location, units=svc['units'])68 self.d.add(svc['name'], charm=branch_location, units=svc['units'],
69 constraints=svc.get('constraints'))
6870
69 def _add_relations(self, relations):71 def _add_relations(self, relations):
70 """Add all of the relations for the services."""72 """Add all of the relations for the services."""
7173
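
With the constraints passthrough added above, a service entry may now carry optional machine constraints. A hypothetical pair of service definitions showing the shape follows; the constraint values are examples only.

    # Hypothetical service definitions using the new optional 'constraints' key.
    this_service = {'name': 'neutron-gateway'}
    other_services = [
        {'name': 'rabbitmq-server'},                      # no constraints
        {'name': 'mysql', 'constraints': {'mem': '4G'}},  # example constraint
    ]
    # svc.get('constraints') yields None when the key is absent, so existing
    # service definitions continue to work unchanged.
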
=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
--- tests/charmhelpers/contrib/amulet/utils.py 2015-08-18 21:16:23 +0000
+++ tests/charmhelpers/contrib/amulet/utils.py 2015-09-21 20:38:30 +0000
@@ -19,9 +19,11 @@
19import logging19import logging
20import os20import os
21import re21import re
22import socket
22import subprocess23import subprocess
23import sys24import sys
24import time25import time
26import uuid
2527
26import amulet28import amulet
27import distro_info29import distro_info
@@ -114,7 +116,7 @@
114 # /!\ DEPRECATION WARNING (beisner):116 # /!\ DEPRECATION WARNING (beisner):
115 # New and existing tests should be rewritten to use117 # New and existing tests should be rewritten to use
116 # validate_services_by_name() as it is aware of init systems.118 # validate_services_by_name() as it is aware of init systems.
117 self.log.warn('/!\\ DEPRECATION WARNING: use '119 self.log.warn('DEPRECATION WARNING: use '
118 'validate_services_by_name instead of validate_services '120 'validate_services_by_name instead of validate_services '
119 'due to init system differences.')121 'due to init system differences.')
120122
@@ -269,33 +271,52 @@
269 """Get last modification time of directory."""271 """Get last modification time of directory."""
270 return sentry_unit.directory_stat(directory)['mtime']272 return sentry_unit.directory_stat(directory)['mtime']
271273
272 def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False):274 def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None):
273 """Get process' start time.275 """Get start time of a process based on the last modification time
274276 of the /proc/pid directory.
275 Determine start time of the process based on the last modification277
276 time of the /proc/pid directory. If pgrep_full is True, the process278 :sentry_unit: The sentry unit to check for the service on
277 name is matched against the full command line.279 :service: service name to look for in process table
278 """280 :pgrep_full: [Deprecated] Use full command line search mode with pgrep
279 if pgrep_full:281 :returns: epoch time of service process start
280 cmd = 'pgrep -o -f {}'.format(service)282 :param commands: list of bash commands
281 else:283 :param sentry_units: list of sentry unit pointers
282 cmd = 'pgrep -o {}'.format(service)284 :returns: None if successful; Failure message otherwise
283 cmd = cmd + ' | grep -v pgrep || exit 0'285 """
284 cmd_out = sentry_unit.run(cmd)286 if pgrep_full is not None:
285 self.log.debug('CMDout: ' + str(cmd_out))287 # /!\ DEPRECATION WARNING (beisner):
286 if cmd_out[0]:288 # No longer implemented, as pidof is now used instead of pgrep.
287 self.log.debug('Pid for %s %s' % (service, str(cmd_out[0])))289 # https://bugs.launchpad.net/charm-helpers/+bug/1474030
288 proc_dir = '/proc/{}'.format(cmd_out[0].strip())290 self.log.warn('DEPRECATION WARNING: pgrep_full bool is no '
289 return self._get_dir_mtime(sentry_unit, proc_dir)291 'longer implemented re: lp 1474030.')
292
293 pid_list = self.get_process_id_list(sentry_unit, service)
294 pid = pid_list[0]
295 proc_dir = '/proc/{}'.format(pid)
296 self.log.debug('Pid for {} on {}: {}'.format(
297 service, sentry_unit.info['unit_name'], pid))
298
299 return self._get_dir_mtime(sentry_unit, proc_dir)
290300
291 def service_restarted(self, sentry_unit, service, filename,301 def service_restarted(self, sentry_unit, service, filename,
292 pgrep_full=False, sleep_time=20):302 pgrep_full=None, sleep_time=20):
293 """Check if service was restarted.303 """Check if service was restarted.
294304
295 Compare a service's start time vs a file's last modification time305 Compare a service's start time vs a file's last modification time
296 (such as a config file for that service) to determine if the service306 (such as a config file for that service) to determine if the service
297 has been restarted.307 has been restarted.
298 """308 """
309 # /!\ DEPRECATION WARNING (beisner):
310 # This method is prone to races in that no before-time is known.
311 # Use validate_service_config_changed instead.
312
313 # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
314 # used instead of pgrep. pgrep_full is still passed through to ensure
315 # deprecation WARNS. lp1474030
316 self.log.warn('DEPRECATION WARNING: use '
317 'validate_service_config_changed instead of '
318 'service_restarted due to known races.')
319
299 time.sleep(sleep_time)320 time.sleep(sleep_time)
300 if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=321 if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
301 self._get_file_mtime(sentry_unit, filename)):322 self._get_file_mtime(sentry_unit, filename)):
@@ -304,78 +325,122 @@
304 return False325 return False
305326
306 def service_restarted_since(self, sentry_unit, mtime, service,327 def service_restarted_since(self, sentry_unit, mtime, service,
307 pgrep_full=False, sleep_time=20,328 pgrep_full=None, sleep_time=20,
308 retry_count=2):329 retry_count=30, retry_sleep_time=10):
309 """Check if service was been started after a given time.330 """Check if service was been started after a given time.
310331
311 Args:332 Args:
312 sentry_unit (sentry): The sentry unit to check for the service on333 sentry_unit (sentry): The sentry unit to check for the service on
313 mtime (float): The epoch time to check against334 mtime (float): The epoch time to check against
314 service (string): service name to look for in process table335 service (string): service name to look for in process table
315 pgrep_full (boolean): Use full command line search mode with pgrep336 pgrep_full: [Deprecated] Use full command line search mode with pgrep
316 sleep_time (int): Seconds to sleep before looking for process337 sleep_time (int): Initial sleep time (s) before looking for file
317 retry_count (int): If service is not found, how many times to retry338 retry_sleep_time (int): Time (s) to sleep between retries
339 retry_count (int): If file is not found, how many times to retry
318340
319 Returns:341 Returns:
320 bool: True if service found and its start time is newer than mtime,342 bool: True if service found and its start time is newer than mtime,
321 False if service is older than mtime or if service was343 False if service is older than mtime or if service was
322 not found.344 not found.
323 """345 """
324 self.log.debug('Checking %s restarted since %s' % (service, mtime))346 # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
347 # used instead of pgrep. pgrep_full is still passed through to ensure
348 # deprecation WARNS. lp1474030
349
350 unit_name = sentry_unit.info['unit_name']
351 self.log.debug('Checking that %s service restarted since %s on '
352 '%s' % (service, mtime, unit_name))
325 time.sleep(sleep_time)353 time.sleep(sleep_time)
326 proc_start_time = self._get_proc_start_time(sentry_unit, service,354 proc_start_time = None
327 pgrep_full)355 tries = 0
328 while retry_count > 0 and not proc_start_time:356 while tries <= retry_count and not proc_start_time:
329 self.log.debug('No pid file found for service %s, will retry %i '357 try:
330 'more times' % (service, retry_count))358 proc_start_time = self._get_proc_start_time(sentry_unit,
331 time.sleep(30)359 service,
332 proc_start_time = self._get_proc_start_time(sentry_unit, service,360 pgrep_full)
333 pgrep_full)361 self.log.debug('Attempt {} to get {} proc start time on {} '
334 retry_count = retry_count - 1362 'OK'.format(tries, service, unit_name))
363 except IOError as e:
364 # NOTE(beisner) - race avoidance, proc may not exist yet.
365 # https://bugs.launchpad.net/charm-helpers/+bug/1474030
366 self.log.debug('Attempt {} to get {} proc start time on {} '
367 'failed\n{}'.format(tries, service,
368 unit_name, e))
369 time.sleep(retry_sleep_time)
370 tries += 1
335371
336 if not proc_start_time:372 if not proc_start_time:
337 self.log.warn('No proc start time found, assuming service did '373 self.log.warn('No proc start time found, assuming service did '
338 'not start')374 'not start')
339 return False375 return False
340 if proc_start_time >= mtime:376 if proc_start_time >= mtime:
341 self.log.debug('proc start time is newer than provided mtime'377 self.log.debug('Proc start time is newer than provided mtime'
342 '(%s >= %s)' % (proc_start_time, mtime))378 '(%s >= %s) on %s (OK)' % (proc_start_time,
379 mtime, unit_name))
343 return True380 return True
344 else:381 else:
345 self.log.warn('proc start time (%s) is older than provided mtime '382 self.log.warn('Proc start time (%s) is older than provided mtime '
346 '(%s), service did not restart' % (proc_start_time,383 '(%s) on %s, service did not '
347 mtime))384 'restart' % (proc_start_time, mtime, unit_name))
348 return False385 return False
349386
350 def config_updated_since(self, sentry_unit, filename, mtime,387 def config_updated_since(self, sentry_unit, filename, mtime,
351 sleep_time=20):388 sleep_time=20, retry_count=30,
389 retry_sleep_time=10):
352 """Check if file was modified after a given time.390 """Check if file was modified after a given time.
353391
354 Args:392 Args:
355 sentry_unit (sentry): The sentry unit to check the file mtime on393 sentry_unit (sentry): The sentry unit to check the file mtime on
356 filename (string): The file to check mtime of394 filename (string): The file to check mtime of
357 mtime (float): The epoch time to check against395 mtime (float): The epoch time to check against
358 sleep_time (int): Seconds to sleep before looking for process396 sleep_time (int): Initial sleep time (s) before looking for file
397 retry_sleep_time (int): Time (s) to sleep between retries
398 retry_count (int): If file is not found, how many times to retry
359399
360 Returns:400 Returns:
361 bool: True if file was modified more recently than mtime, False if401 bool: True if file was modified more recently than mtime, False if
362 file was modified before mtime,402 file was modified before mtime, or if file not found.
363 """403 """
364 self.log.debug('Checking %s updated since %s' % (filename, mtime))404 unit_name = sentry_unit.info['unit_name']
405 self.log.debug('Checking that %s updated since %s on '
406 '%s' % (filename, mtime, unit_name))
365 time.sleep(sleep_time)407 time.sleep(sleep_time)
366 file_mtime = self._get_file_mtime(sentry_unit, filename)408 file_mtime = None
409 tries = 0
410 while tries <= retry_count and not file_mtime:
411 try:
412 file_mtime = self._get_file_mtime(sentry_unit, filename)
413 self.log.debug('Attempt {} to get {} file mtime on {} '
414 'OK'.format(tries, filename, unit_name))
415 except IOError as e:
416 # NOTE(beisner) - race avoidance, file may not exist yet.
417 # https://bugs.launchpad.net/charm-helpers/+bug/1474030
418 self.log.debug('Attempt {} to get {} file mtime on {} '
419 'failed\n{}'.format(tries, filename,
420 unit_name, e))
421 time.sleep(retry_sleep_time)
422 tries += 1
423
424 if not file_mtime:
425 self.log.warn('Could not determine file mtime, assuming '
426 'file does not exist')
427 return False
428
367 if file_mtime >= mtime:429 if file_mtime >= mtime:
368 self.log.debug('File mtime is newer than provided mtime '430 self.log.debug('File mtime is newer than provided mtime '
369 '(%s >= %s)' % (file_mtime, mtime))431 '(%s >= %s) on %s (OK)' % (file_mtime,
432 mtime, unit_name))
370 return True433 return True
371 else:434 else:
372 self.log.warn('File mtime %s is older than provided mtime %s'435 self.log.warn('File mtime is older than provided mtime'
373 % (file_mtime, mtime))436 '(%s < %s) on %s' % (file_mtime,
437 mtime, unit_name))
374 return False438 return False
375439
376 def validate_service_config_changed(self, sentry_unit, mtime, service,440 def validate_service_config_changed(self, sentry_unit, mtime, service,
377 filename, pgrep_full=False,441 filename, pgrep_full=None,
378 sleep_time=20, retry_count=2):442 sleep_time=20, retry_count=30,
443 retry_sleep_time=10):
379 """Check service and file were updated after mtime444 """Check service and file were updated after mtime
380445
381 Args:446 Args:
@@ -383,9 +448,10 @@
383 mtime (float): The epoch time to check against448 mtime (float): The epoch time to check against
384 service (string): service name to look for in process table449 service (string): service name to look for in process table
385 filename (string): The file to check mtime of450 filename (string): The file to check mtime of
386 pgrep_full (boolean): Use full command line search mode with pgrep451 pgrep_full: [Deprecated] Use full command line search mode with pgrep
387 sleep_time (int): Seconds to sleep before looking for process452 sleep_time (int): Initial sleep in seconds to pass to test helpers
388 retry_count (int): If service is not found, how many times to retry453 retry_count (int): If service is not found, how many times to retry
454 retry_sleep_time (int): Time in seconds to wait between retries
389455
390 Typical Usage:456 Typical Usage:
391 u = OpenStackAmuletUtils(ERROR)457 u = OpenStackAmuletUtils(ERROR)
@@ -402,15 +468,27 @@
402 mtime, False if service is older than mtime or if service was468 mtime, False if service is older than mtime or if service was
403 not found or if filename was modified before mtime.469 not found or if filename was modified before mtime.
404 """470 """
405 self.log.debug('Checking %s restarted since %s' % (service, mtime))471
406 time.sleep(sleep_time)472 # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
407 service_restart = self.service_restarted_since(sentry_unit, mtime,473 # used instead of pgrep. pgrep_full is still passed through to ensure
408 service,474 # deprecation WARNS. lp1474030
409 pgrep_full=pgrep_full,475
410 sleep_time=0,476 service_restart = self.service_restarted_since(
411 retry_count=retry_count)477 sentry_unit, mtime,
412 config_update = self.config_updated_since(sentry_unit, filename, mtime,478 service,
413 sleep_time=0)479 pgrep_full=pgrep_full,
480 sleep_time=sleep_time,
481 retry_count=retry_count,
482 retry_sleep_time=retry_sleep_time)
483
484 config_update = self.config_updated_since(
485 sentry_unit,
486 filename,
487 mtime,
488 sleep_time=sleep_time,
489 retry_count=retry_count,
490 retry_sleep_time=retry_sleep_time)
491
414 return service_restart and config_update492 return service_restart and config_update
415493
416 def get_sentry_time(self, sentry_unit):494 def get_sentry_time(self, sentry_unit):
@@ -428,7 +506,6 @@
428 """Return a list of all Ubuntu releases in order of release."""506 """Return a list of all Ubuntu releases in order of release."""
429 _d = distro_info.UbuntuDistroInfo()507 _d = distro_info.UbuntuDistroInfo()
430 _release_list = _d.all508 _release_list = _d.all
431 self.log.debug('Ubuntu release list: {}'.format(_release_list))
432 return _release_list509 return _release_list
433510
434 def file_to_url(self, file_rel_path):511 def file_to_url(self, file_rel_path):
@@ -568,6 +645,142 @@
568645
569 return None646 return None
570647
648 def validate_sectionless_conf(self, file_contents, expected):
649 """A crude conf parser. Useful to inspect configuration files which
650 do not have section headers (as would be necessary in order to use
651 the configparser). Such as openstack-dashboard or rabbitmq confs."""
652 for line in file_contents.split('\n'):
653 if '=' in line:
654 args = line.split('=')
655 if len(args) <= 1:
656 continue
657 key = args[0].strip()
658 value = args[1].strip()
659 if key in expected.keys():
660 if expected[key] != value:
661 msg = ('Config mismatch. Expected, actual: {}, '
662 '{}'.format(expected[key], value))
663 amulet.raise_status(amulet.FAIL, msg=msg)
664
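
A hypothetical use of the crude sectionless parser above, against inline sample content; the keys and values here are invented for illustration.

    # Hypothetical usage of validate_sectionless_conf (sample data invented).
    sample = ('RABBITMQ_NODENAME=rabbit@host1\n'
              'RABBITMQ_USE_LONGNAME=true\n')
    u.validate_sectionless_conf(sample, {'RABBITMQ_USE_LONGNAME': 'true'})
    # Note: only the first '=' per line is honoured, so values containing '='
    # will be truncated by this parser.
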
665 def get_unit_hostnames(self, units):
666 """Return a dict of juju unit names to hostnames."""
667 host_names = {}
668 for unit in units:
669 host_names[unit.info['unit_name']] = \
670 str(unit.file_contents('/etc/hostname').strip())
671 self.log.debug('Unit host names: {}'.format(host_names))
672 return host_names
673
674 def run_cmd_unit(self, sentry_unit, cmd):
675 """Run a command on a unit, return the output and exit code."""
676 output, code = sentry_unit.run(cmd)
677 if code == 0:
678 self.log.debug('{} `{}` command returned {} '
679 '(OK)'.format(sentry_unit.info['unit_name'],
680 cmd, code))
681 else:
682 msg = ('{} `{}` command returned {} '
683 '{}'.format(sentry_unit.info['unit_name'],
684 cmd, code, output))
685 amulet.raise_status(amulet.FAIL, msg=msg)
686 return str(output), code
687
688 def file_exists_on_unit(self, sentry_unit, file_name):
689 """Check if a file exists on a unit."""
690 try:
691 sentry_unit.file_stat(file_name)
692 return True
693 except IOError:
694 return False
695 except Exception as e:
696 msg = 'Error checking file {}: {}'.format(file_name, e)
697 amulet.raise_status(amulet.FAIL, msg=msg)
698
699 def file_contents_safe(self, sentry_unit, file_name,
700 max_wait=60, fatal=False):
701 """Get file contents from a sentry unit. Wrap amulet file_contents
702 with retry logic to address races where a file checks as existing,
703 but no longer exists by the time file_contents is called.
704 Return None if file not found. Optionally raise if fatal is True."""
705 unit_name = sentry_unit.info['unit_name']
706 file_contents = False
707 tries = 0
708 while not file_contents and tries < (max_wait / 4):
709 try:
710 file_contents = sentry_unit.file_contents(file_name)
711 except IOError:
712 self.log.debug('Attempt {} to open file {} from {} '
713 'failed'.format(tries, file_name,
714 unit_name))
715 time.sleep(4)
716 tries += 1
717
718 if file_contents:
719 return file_contents
720 elif not fatal:
721 return None
722 elif fatal:
723 msg = 'Failed to get file contents from unit.'
724 amulet.raise_status(amulet.FAIL, msg)
725
726 def port_knock_tcp(self, host="localhost", port=22, timeout=15):
727 """Open a TCP socket to check for a listening sevice on a host.
728
729 :param host: host name or IP address, default to localhost
730 :param port: TCP port number, default to 22
731 :param timeout: Connect timeout, default to 15 seconds
732 :returns: True if successful, False if connect failed
733 """
734
735 # Resolve host name if possible
736 try:
737 connect_host = socket.gethostbyname(host)
738 host_human = "{} ({})".format(connect_host, host)
739 except socket.error as e:
740 self.log.warn('Unable to resolve address: '
741 '{} ({}) Trying anyway!'.format(host, e))
742 connect_host = host
743 host_human = connect_host
744
745 # Attempt socket connection
746 try:
747 knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
748 knock.settimeout(timeout)
749 knock.connect((connect_host, port))
750 knock.close()
751 self.log.debug('Socket connect OK for host '
752 '{} on port {}.'.format(host_human, port))
753 return True
754 except socket.error as e:
755 self.log.debug('Socket connect FAIL for'
756 ' {} port {} ({})'.format(host_human, port, e))
757 return False
758
759 def port_knock_units(self, sentry_units, port=22,
760 timeout=15, expect_success=True):
761 """Open a TCP socket to check for a listening sevice on each
762 listed juju unit.
763
764 :param sentry_units: list of sentry unit pointers
765 :param port: TCP port number, default to 22
766 :param timeout: Connect timeout, default to 15 seconds
767 :expect_success: True by default, set False to invert logic
768 :returns: None if successful, Failure message otherwise
769 """
770 for unit in sentry_units:
771 host = unit.info['public-address']
772 connected = self.port_knock_tcp(host, port, timeout)
773 if not connected and expect_success:
774 return 'Socket connect failed.'
775 elif connected and not expect_success:
776 return 'Socket connected unexpectedly.'
777
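
A hypothetical usage of the port knock helpers, asserting that amqp is listening on every rabbitmq unit while an arbitrary unused port is not; the sentry attribute names and the ports are placeholders.

    # Hypothetical usage of the port knock helpers above.
    rmq_units = [self.rmq0_sentry, self.rmq1_sentry]   # placeholder sentries
    ret = u.port_knock_units(rmq_units, port=5672)
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)
    ret = u.port_knock_units(rmq_units, port=59672, expect_success=False)
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)
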
778 def get_uuid_epoch_stamp(self):
779 """Returns a stamp string based on uuid4 and epoch time. Useful in
780 generating test messages which need to be unique-ish."""
781 return '[{}-{}]'.format(uuid.uuid4(), time.time())
782
783# amulet juju action helpers:
571 def run_action(self, unit_sentry, action,784 def run_action(self, unit_sentry, action,
572 _check_output=subprocess.check_output):785 _check_output=subprocess.check_output):
573 """Run the named action on a given unit sentry.786 """Run the named action on a given unit sentry.
@@ -594,3 +807,12 @@
594 output = _check_output(command, universal_newlines=True)807 output = _check_output(command, universal_newlines=True)
595 data = json.loads(output)808 data = json.loads(output)
596 return data.get(u"status") == "completed"809 return data.get(u"status") == "completed"
810
811 def status_get(self, unit):
812 """Return the current service status of this unit."""
813 raw_status, return_code = unit.run(
814 "status-get --format=json --include-data")
815 if return_code != 0:
816 return ("unknown", "")
817 status = json.loads(raw_status)
818 return (status["status"], status["message"])
597819
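
status_get above returns a (status, message) tuple, falling back to ('unknown', '') where the unit cannot report workload status. A hypothetical polling loop built on it might look like the following; the retry counts are invented.

    # Hypothetical wait-for-active loop built on status_get (timings invented).
    import time

    for attempt in range(30):
        status, message = u.status_get(self.neutron_gateway_sentry)
        if status == 'active':
            break
        time.sleep(10)
    else:
        amulet.raise_status(
            amulet.FAIL,
            msg='unit never reached active: {} {}'.format(status, message))
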
=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-08-18 21:16:23 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-21 20:38:30 +0000
@@ -44,20 +44,31 @@
44 Determine if the local branch being tested is derived from its44 Determine if the local branch being tested is derived from its
45 stable or next (dev) branch, and based on this, use the corresonding45 stable or next (dev) branch, and based on this, use the corresonding
46 stable or next branches for the other_services."""46 stable or next branches for the other_services."""
47
48 # Charms outside the lp:~openstack-charmers namespace
47 base_charms = ['mysql', 'mongodb', 'nrpe']49 base_charms = ['mysql', 'mongodb', 'nrpe']
4850
51 # Force these charms to current series even when using an older series.
52 # ie. Use trusty/nrpe even when series is precise, as the P charm
53 # does not possess the necessary external master config and hooks.
54 force_series_current = ['nrpe']
55
49 if self.series in ['precise', 'trusty']:56 if self.series in ['precise', 'trusty']:
50 base_series = self.series57 base_series = self.series
51 else:58 else:
52 base_series = self.current_next59 base_series = self.current_next
5360
54 if self.stable:61 for svc in other_services:
55 for svc in other_services:62 if svc['name'] in force_series_current:
63 base_series = self.current_next
64 # If a location has been explicitly set, use it
65 if svc.get('location'):
66 continue
67 if self.stable:
56 temp = 'lp:charms/{}/{}'68 temp = 'lp:charms/{}/{}'
57 svc['location'] = temp.format(base_series,69 svc['location'] = temp.format(base_series,
58 svc['name'])70 svc['name'])
59 else:71 else:
60 for svc in other_services:
61 if svc['name'] in base_charms:72 if svc['name'] in base_charms:
62 temp = 'lp:charms/{}/{}'73 temp = 'lp:charms/{}/{}'
63 svc['location'] = temp.format(base_series,74 svc['location'] = temp.format(base_series,
@@ -66,6 +77,7 @@
66 temp = 'lp:~openstack-charmers/charms/{}/{}/next'77 temp = 'lp:~openstack-charmers/charms/{}/{}/next'
67 svc['location'] = temp.format(self.current_next,78 svc['location'] = temp.format(self.current_next,
68 svc['name'])79 svc['name'])
80
69 return other_services81 return other_services
7082
71 def _add_services(self, this_service, other_services):83 def _add_services(self, this_service, other_services):
@@ -77,21 +89,23 @@
7789
78 services = other_services90 services = other_services
79 services.append(this_service)91 services.append(this_service)
92
93 # Charms which should use the source config option
80 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',94 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
81 'ceph-osd', 'ceph-radosgw']95 'ceph-osd', 'ceph-radosgw']
82 # Most OpenStack subordinate charms do not expose an origin option96
83 # as that is controlled by the principle.97 # Charms which can not use openstack-origin, ie. many subordinates
84 ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']98 no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
8599
86 if self.openstack:100 if self.openstack:
87 for svc in services:101 for svc in services:
88 if svc['name'] not in use_source + ignore:102 if svc['name'] not in use_source + no_origin:
89 config = {'openstack-origin': self.openstack}103 config = {'openstack-origin': self.openstack}
90 self.d.configure(svc['name'], config)104 self.d.configure(svc['name'], config)
91105
92 if self.source:106 if self.source:
93 for svc in services:107 for svc in services:
94 if svc['name'] in use_source and svc['name'] not in ignore:108 if svc['name'] in use_source and svc['name'] not in no_origin:
95 config = {'source': self.source}109 config = {'source': self.source}
96 self.d.configure(svc['name'], config)110 self.d.configure(svc['name'], config)
97111
98112
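
Reduced to plain Python, the origin/source selection above picks at most one config key per charm. The sketch below uses the charm name lists from the hunk; the helper function itself is illustrative, not charm-helpers code.

    # Sketch of the origin/source decision above (helper is illustrative).
    use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
                  'ceph-osd', 'ceph-radosgw']
    no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']

    def origin_config(name, openstack=None, source=None):
        if openstack and name not in use_source + no_origin:
            return {'openstack-origin': openstack}
        if source and name in use_source and name not in no_origin:
            return {'source': source}
        return {}  # charm keeps its default package origin
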
=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-16 20:18:08 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-21 20:38:30 +0000
@@ -27,6 +27,7 @@
27import heatclient.v1.client as heat_client27import heatclient.v1.client as heat_client
28import keystoneclient.v2_0 as keystone_client28import keystoneclient.v2_0 as keystone_client
29import novaclient.v1_1.client as nova_client29import novaclient.v1_1.client as nova_client
30import pika
30import swiftclient31import swiftclient
3132
32from charmhelpers.contrib.amulet.utils import (33from charmhelpers.contrib.amulet.utils import (
@@ -602,3 +603,361 @@
602 self.log.debug('Ceph {} samples (OK): '603 self.log.debug('Ceph {} samples (OK): '
603 '{}'.format(sample_type, samples))604 '{}'.format(sample_type, samples))
604 return None605 return None
606
607# rabbitmq/amqp specific helpers:
608 def add_rmq_test_user(self, sentry_units,
609 username="testuser1", password="changeme"):
610 """Add a test user via the first rmq juju unit, check connection as
611 the new user against all sentry units.
612
613 :param sentry_units: list of sentry unit pointers
614 :param username: amqp user name, default to testuser1
615 :param password: amqp user password
616 :returns: None if successful. Raise on error.
617 """
618 self.log.debug('Adding rmq user ({})...'.format(username))
619
620 # Check that user does not already exist
621 cmd_user_list = 'rabbitmqctl list_users'
622 output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
623 if username in output:
624 self.log.warning('User ({}) already exists, returning '
625 'gracefully.'.format(username))
626 return
627
628 perms = '".*" ".*" ".*"'
629 cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
630 'rabbitmqctl set_permissions {} {}'.format(username, perms)]
631
632 # Add user via first unit
633 for cmd in cmds:
634 output, _ = self.run_cmd_unit(sentry_units[0], cmd)
635
636 # Check connection against the other sentry_units
637 self.log.debug('Checking user connect against units...')
638 for sentry_unit in sentry_units:
639 connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
640 username=username,
641 password=password)
642 connection.close()
643
644 def delete_rmq_test_user(self, sentry_units, username="testuser1"):
645 """Delete a rabbitmq user via the first rmq juju unit.
646
647 :param sentry_units: list of sentry unit pointers
648 :param username: amqp user name, default to testuser1
649 :param password: amqp user password
650 :returns: None if successful or no such user.
651 """
652 self.log.debug('Deleting rmq user ({})...'.format(username))
653
654 # Check that the user exists
655 cmd_user_list = 'rabbitmqctl list_users'
656 output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
657
658 if username not in output:
659 self.log.warning('User ({}) does not exist, returning '
660 'gracefully.'.format(username))
661 return
662
663 # Delete the user
664 cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
665 output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
666
667 def get_rmq_cluster_status(self, sentry_unit):
668 """Execute rabbitmq cluster status command on a unit and return
669 the full output.
670
671 :param unit: sentry unit
672 :returns: String containing console output of cluster status command
673 """
674 cmd = 'rabbitmqctl cluster_status'
675 output, _ = self.run_cmd_unit(sentry_unit, cmd)
676 self.log.debug('{} cluster_status:\n{}'.format(
677 sentry_unit.info['unit_name'], output))
678 return str(output)
679
680 def get_rmq_cluster_running_nodes(self, sentry_unit):
681 """Parse rabbitmqctl cluster_status output string, return list of
682 running rabbitmq cluster nodes.
683
684 :param unit: sentry unit
685 :returns: List containing node names of running nodes
686 """
687 # NOTE(beisner): rabbitmqctl cluster_status output is not
688 # json-parsable, do string chop foo, then json.loads that.
689 str_stat = self.get_rmq_cluster_status(sentry_unit)
690 if 'running_nodes' in str_stat:
691 pos_start = str_stat.find("{running_nodes,") + 15
692 pos_end = str_stat.find("]},", pos_start) + 1
693 str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
694 run_nodes = json.loads(str_run_nodes)
695 return run_nodes
696 else:
697 return []
698
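
The string-chop parsing above leans on the Erlang term layout of rabbitmqctl cluster_status output. With an invented sample of that output, the same extraction can be exercised directly:

    # Exercise the string-chop extraction above on an invented sample.
    import json

    sample = ("Cluster status of node 'rabbit@host1' ...\n"
              "[{nodes,[{disc,['rabbit@host1','rabbit@host2']}]},\n"
              " {running_nodes,['rabbit@host2','rabbit@host1']},\n"
              " {partitions,[]}]\n")

    pos_start = sample.find("{running_nodes,") + 15
    pos_end = sample.find("]},", pos_start) + 1
    running = json.loads(sample[pos_start:pos_end].replace("'", '"'))
    # running == ['rabbit@host2', 'rabbit@host1']
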
699 def validate_rmq_cluster_running_nodes(self, sentry_units):
700 """Check that all rmq unit hostnames are represented in the
701 cluster_status output of all units.
702
703 :param host_names: dict of juju unit names to host names
704 :param units: list of sentry unit pointers (all rmq units)
705 :returns: None if successful, otherwise return error message
706 """
707 host_names = self.get_unit_hostnames(sentry_units)
708 errors = []
709
710 # Query every unit for cluster_status running nodes
711 for query_unit in sentry_units:
712 query_unit_name = query_unit.info['unit_name']
713 running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
714
715 # Confirm that every unit is represented in the queried unit's
716 # cluster_status running nodes output.
717 for validate_unit in sentry_units:
718 val_host_name = host_names[validate_unit.info['unit_name']]
719 val_node_name = 'rabbit@{}'.format(val_host_name)
720
721 if val_node_name not in running_nodes:
722 errors.append('Cluster member check failed on {}: {} not '
723 'in {}\n'.format(query_unit_name,
724 val_node_name,
725 running_nodes))
726 if errors:
727 return ''.join(errors)
728
729 def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
730 """Check a single juju rmq unit for ssl and port in the config file."""
731 host = sentry_unit.info['public-address']
732 unit_name = sentry_unit.info['unit_name']
733
734 conf_file = '/etc/rabbitmq/rabbitmq.config'
735 conf_contents = str(self.file_contents_safe(sentry_unit,
736 conf_file, max_wait=16))
737 # Checks
738 conf_ssl = 'ssl' in conf_contents
739 conf_port = str(port) in conf_contents
740
741 # Port explicitly checked in config
742 if port and conf_port and conf_ssl:
743 self.log.debug('SSL is enabled @{}:{} '
744 '({})'.format(host, port, unit_name))
745 return True
746 elif port and not conf_port and conf_ssl:
747 self.log.debug('SSL is enabled @{} but not on port {} '
748 '({})'.format(host, port, unit_name))
749 return False
750 # Port not checked (useful when checking that ssl is disabled)
751 elif not port and conf_ssl:
752 self.log.debug('SSL is enabled @{}:{} '
753 '({})'.format(host, port, unit_name))
754 return True
755 elif not port and not conf_ssl:
756 self.log.debug('SSL not enabled @{}:{} '
757 '({})'.format(host, port, unit_name))
758 return False
759 else:
760 msg = ('Unknown condition when checking SSL status @{}:{} '
761 '({})'.format(host, port, unit_name))
762 amulet.raise_status(amulet.FAIL, msg)
763
764 def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
765 """Check that ssl is enabled on rmq juju sentry units.
766
767 :param sentry_units: list of all rmq sentry units
768 :param port: optional ssl port override to validate
769 :returns: None if successful, otherwise return error message
770 """
771 for sentry_unit in sentry_units:
772 if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
773 return ('Unexpected condition: ssl is disabled on unit '
774 '({})'.format(sentry_unit.info['unit_name']))
775 return None
776
777 def validate_rmq_ssl_disabled_units(self, sentry_units):
778 """Check that ssl is enabled on listed rmq juju sentry units.
779
780 :param sentry_units: list of all rmq sentry units
781 :returns: True if successful. Raise on error.
782 """
783 for sentry_unit in sentry_units:
784 if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
785 return ('Unexpected condition: ssl is enabled on unit '
786 '({})'.format(sentry_unit.info['unit_name']))
787 return None
788
789 def configure_rmq_ssl_on(self, sentry_units, deployment,
790 port=None, max_wait=60):
791 """Turn ssl charm config option on, with optional non-default
792 ssl port specification. Confirm that it is enabled on every
793 unit.
794
795 :param sentry_units: list of sentry units
796 :param deployment: amulet deployment object pointer
797 :param port: amqp port, use defaults if None
798 :param max_wait: maximum time to wait in seconds to confirm
799 :returns: None if successful. Raise on error.
800 """
801 self.log.debug('Setting ssl charm config option: on')
802
803 # Enable RMQ SSL
804 config = {'ssl': 'on'}
805 if port:
806 config['ssl_port'] = port
807
808 deployment.configure('rabbitmq-server', config)
809
810 # Confirm
811 tries = 0
812 ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
813 while ret and tries < (max_wait / 4):
814 time.sleep(4)
815 self.log.debug('Attempt {}: {}'.format(tries, ret))
816 ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
817 tries += 1
818
819 if ret:
820 amulet.raise_status(amulet.FAIL, ret)
821
822 def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
823 """Turn ssl charm config option off, confirm that it is disabled
824 on every unit.
825
826 :param sentry_units: list of sentry units
827 :param deployment: amulet deployment object pointer
828 :param max_wait: maximum time to wait in seconds to confirm
829 :returns: None if successful. Raise on error.
830 """
831 self.log.debug('Setting ssl charm config option: off')
832
833 # Disable RMQ SSL
834 config = {'ssl': 'off'}
835 deployment.configure('rabbitmq-server', config)
836
837 # Confirm
838 tries = 0
839 ret = self.validate_rmq_ssl_disabled_units(sentry_units)
840 while ret and tries < (max_wait / 4):
841 time.sleep(4)
842 self.log.debug('Attempt {}: {}'.format(tries, ret))
843 ret = self.validate_rmq_ssl_disabled_units(sentry_units)
844 tries += 1
845
846 if ret:
847 amulet.raise_status(amulet.FAIL, ret)
848
849 def connect_amqp_by_unit(self, sentry_unit, ssl=False,
850 port=None, fatal=True,
851 username="testuser1", password="changeme"):
852 """Establish and return a pika amqp connection to the rabbitmq service
853 running on a rmq juju unit.
854
855 :param sentry_unit: sentry unit pointer
856 :param ssl: boolean, default to False
857 :param port: amqp port, use defaults if None
858 :param fatal: boolean, default to True (raises on connect error)
859 :param username: amqp user name, default to testuser1
860 :param password: amqp user password
861 :returns: pika amqp connection pointer or None if failed and non-fatal
862 """
863 host = sentry_unit.info['public-address']
864 unit_name = sentry_unit.info['unit_name']
865
866 # Default port logic if port is not specified
867 if ssl and not port:
868 port = 5671
869 elif not ssl and not port:
870 port = 5672
871
872 self.log.debug('Connecting to amqp on {}:{} ({}) as '
873 '{}...'.format(host, port, unit_name, username))
874
875 try:
876 credentials = pika.PlainCredentials(username, password)
877 parameters = pika.ConnectionParameters(host=host, port=port,
878 credentials=credentials,
879 ssl=ssl,
880 connection_attempts=3,
881 retry_delay=5,
882 socket_timeout=1)
883 connection = pika.BlockingConnection(parameters)
884 assert connection.server_properties['product'] == 'RabbitMQ'
885 self.log.debug('Connect OK')
886 return connection
887 except Exception as e:
888 msg = ('amqp connection failed to {}:{} as '
889 '{} ({})'.format(host, port, username, str(e)))
890 if fatal:
891 amulet.raise_status(amulet.FAIL, msg)
892 else:
893 self.log.warn(msg)
894 return None
895
896 def publish_amqp_message_by_unit(self, sentry_unit, message,
897 queue="test", ssl=False,
898 username="testuser1",
899 password="changeme",
900 port=None):
901 """Publish an amqp message to a rmq juju unit.
902
903 :param sentry_unit: sentry unit pointer
904 :param message: amqp message string
905 :param queue: message queue, default to test
906 :param username: amqp user name, default to testuser1
907 :param password: amqp user password
908 :param ssl: boolean, default to False
909 :param port: amqp port, use defaults if None
910 :returns: None. Raises exception if publish failed.
911 """
912 self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
913 message))
914 connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
915 port=port,
916 username=username,
917 password=password)
918
919 # NOTE(beisner): extra debug here re: pika hang potential:
920 # https://github.com/pika/pika/issues/297
921 # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
922 self.log.debug('Defining channel...')
923 channel = connection.channel()
924 self.log.debug('Declaring queue...')
925 channel.queue_declare(queue=queue, auto_delete=False, durable=True)
926 self.log.debug('Publishing message...')
927 channel.basic_publish(exchange='', routing_key=queue, body=message)
928 self.log.debug('Closing channel...')
929 channel.close()
930 self.log.debug('Closing connection...')
931 connection.close()
932
933 def get_amqp_message_by_unit(self, sentry_unit, queue="test",
934 username="testuser1",
935 password="changeme",
936 ssl=False, port=None):
937 """Get an amqp message from a rmq juju unit.
938
939 :param sentry_unit: sentry unit pointer
940 :param queue: message queue, default to test
941 :param username: amqp user name, default to testuser1
942 :param password: amqp user password
943 :param ssl: boolean, default to False
944 :param port: amqp port, use defaults if None
945 :returns: amqp message body as string. Raise if get fails.
946 """
947 connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
948 port=port,
949 username=username,
950 password=password)
951 channel = connection.channel()
952 method_frame, _, body = channel.basic_get(queue)
953
954 if method_frame:
955 self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
956 body))
957 channel.basic_ack(method_frame.delivery_tag)
958 channel.close()
959 connection.close()
960 return body
961 else:
962 msg = 'No message retrieved.'
963 amulet.raise_status(amulet.FAIL, msg)
605964
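
Taken together, the amqp helpers above support a publish/consume round trip keyed on a unique stamp. A hypothetical end-to-end check might read as follows; the sentry attribute names are placeholders.

    # Hypothetical amqp round-trip check using the helpers above.
    units = [self.rmq0_sentry, self.rmq1_sentry]     # placeholder sentries
    u.add_rmq_test_user(units)

    message = 'Test message {}'.format(u.get_uuid_epoch_stamp())
    u.publish_amqp_message_by_unit(units[0], message)
    received = u.get_amqp_message_by_unit(units[1])
    if received != message:
        amulet.raise_status(amulet.FAIL,
                            msg='amqp round trip failed: {}'.format(received))

    u.delete_rmq_test_user(units)
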
=== added file 'tests/tests.yaml'
--- tests/tests.yaml 1970-01-01 00:00:00 +0000
+++ tests/tests.yaml 2015-09-21 20:38:30 +0000
@@ -0,0 +1,20 @@
1bootstrap: true
2reset: true
3virtualenv: true
4makefile:
5 - lint
6 - test
7sources:
8 - ppa:juju/stable
9packages:
10 - amulet
11 - python-amulet
12 - python-cinderclient
13 - python-distro-info
14 - python-glanceclient
15 - python-heatclient
16 - python-keystoneclient
17 - python-neutronclient
18 - python-novaclient
19 - python-pika
20 - python-swiftclient
