Merge lp:~1chb1n/charms/trusty/heat/next-amulet-init into lp:~openstack-charmers-archive/charms/trusty/heat/next

Proposed by Ryan Beisner
Status: Merged
Merged at revision: 44
Proposed branch: lp:~1chb1n/charms/trusty/heat/next-amulet-init
Merge into: lp:~openstack-charmers-archive/charms/trusty/heat/next
Diff against target: 2349 lines (+2079/-59)
24 files modified
Makefile (+16/-12)
charm-helpers-tests.yaml (+5/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+12/-3)
hooks/charmhelpers/contrib/openstack/ip.py (+49/-44)
tests/00-setup (+11/-0)
tests/014-basic-precise-icehouse (+11/-0)
tests/015-basic-trusty-icehouse (+9/-0)
tests/016-basic-trusty-juno (+11/-0)
tests/017-basic-trusty-kilo (+11/-0)
tests/018-basic-utopic-juno (+9/-0)
tests/019-basic-vivid-kilo (+9/-0)
tests/README (+76/-0)
tests/basic_deployment.py (+606/-0)
tests/charmhelpers/__init__.py (+38/-0)
tests/charmhelpers/contrib/__init__.py (+15/-0)
tests/charmhelpers/contrib/amulet/__init__.py (+15/-0)
tests/charmhelpers/contrib/amulet/deployment.py (+93/-0)
tests/charmhelpers/contrib/amulet/utils.py (+408/-0)
tests/charmhelpers/contrib/openstack/__init__.py (+15/-0)
tests/charmhelpers/contrib/openstack/amulet/__init__.py (+15/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+151/-0)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+413/-0)
tests/files/hot_hello_world.yaml (+66/-0)
tests/tests.yaml (+15/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/heat/next-amulet-init
Reviewer Review Type Date Requested Status
Corey Bryant (community) Approve
OpenStack Charmers Pending
Review via email: mp+258105@code.launchpad.net

Description of the change

Add basic amulet tests; sync tests/charmhelpers; sync charmhelpers.

Depends on this charm-helpers mp also landing: https://code.launchpad.net/~1chb1n/charm-helpers/heat-amulet/+merge/260723

Tracking bug: https://bugs.launchpad.net/charm-helpers/+bug/1461535

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #4862 heat-next for 1chb1n mp258105
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/4862/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #5183 heat-next for 1chb1n mp258105
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5183/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4546 heat-next for 1chb1n mp258105
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11676517/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4546/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #4864 heat-next for 1chb1n mp258105
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/4864/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #5185 heat-next for 1chb1n mp258105
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5185/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

Silly rabbit, the service has to be different. Amulet passed locally; the uosci bot will report back with a full run.

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4548 heat-next for 1chb1n mp258105
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11679253/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4548/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4550 heat-next for 1chb1n mp258105
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/11689507/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4550/

Revision history for this message
Corey Bryant (corey.bryant) wrote :

Hello! Some inline comments below.

review: Needs Fixing
Revision history for this message
Ryan Beisner (1chb1n) wrote :

Thanks for the detailed review, appreciate it! Acks, further info and questions also commented inline...

Revision history for this message
Corey Bryant (corey.bryant) wrote :

No problem, responses to responses below.

Revision history for this message
Ryan Beisner (1chb1n) wrote :

Reply in-line. TA!

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #4972 heat-next for 1chb1n mp258105
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/4972/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #5339 heat-next for 1chb1n mp258105
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5339/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

Ready for review, release to the bot.

Revision history for this message
Corey Bryant (corey.bryant) wrote :

Approved, but waiting for the charm-helpers changes to land before this lands, and waiting on tests. Thanks!

review: Approve
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #5343 heat-next for 1chb1n mp258105
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/5343/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

@Corey:

Amulet passed all targets, but there is a bzr bot comment issue with the uosci amulet commentator (addressing that separately).

Pasting amulet success output here: http://paste.ubuntu.com/11698173/

Preview Diff

=== modified file 'Makefile'
--- Makefile 2014-12-15 09:16:40 +0000
+++ Makefile 2015-06-11 15:38:49 +0000
@@ -2,13 +2,17 @@
 PYTHON := /usr/bin/env python
 
 lint:
-	@echo -n "Running flake8 tests: "
-	@flake8 --exclude hooks/charmhelpers hooks
-	@flake8 unit_tests
-	@echo "OK"
-	@echo -n "Running charm proof: "
+	@echo Lint inspections and charm proof...
+	@flake8 --exclude hooks/charmhelpers hooks tests unit_tests
 	@charm proof
-	@echo "OK"
+
+test:
+	@# Bundletester expects unit tests here.
+	@$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
+
+functional_test:
+	@echo Starting all functional, lint and unit tests...
+	@juju test -v -p AMULET_HTTP_PROXY --timeout 2700
 
 bin/charm_helpers_sync.py:
 	@mkdir -p bin
@@ -16,9 +20,9 @@
 	> bin/charm_helpers_sync.py
 
 sync: bin/charm_helpers_sync.py
-	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
+	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
+	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
 
-unit_test:
-	@$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
-
-all: unit_test lint
+publish: lint unit_test
+	bzr push lp:charms/heat
+	bzr push lp:charms/trusty/heat
 
=== renamed file 'charm-helpers.yaml' => 'charm-helpers-hooks.yaml'
=== added file 'charm-helpers-tests.yaml'
--- charm-helpers-tests.yaml 1970-01-01 00:00:00 +0000
+++ charm-helpers-tests.yaml 2015-06-11 15:38:49 +0000
@@ -0,0 +1,5 @@
+branch: lp:charm-helpers
+destination: tests/charmhelpers
+include:
+    - contrib.amulet
+    - contrib.openstack.amulet
=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-06-04 08:45:25 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-06-11 15:38:49 +0000
@@ -64,6 +64,10 @@
     pass
 
 
+class CRMDCNotFound(Exception):
+    pass
+
+
 def is_elected_leader(resource):
     """
     Returns True if the charm executing this is the elected cluster leader.
@@ -116,8 +120,9 @@
         status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
         if not isinstance(status, six.text_type):
             status = six.text_type(status, "utf-8")
-    except subprocess.CalledProcessError:
-        return False
+    except subprocess.CalledProcessError as ex:
+        raise CRMDCNotFound(str(ex))
+
     current_dc = ''
     for line in status.split('\n'):
         if line.startswith('Current DC'):
@@ -125,10 +130,14 @@
             current_dc = line.split(':')[1].split()[0]
     if current_dc == get_unit_hostname():
         return True
+    elif current_dc == 'NONE':
+        raise CRMDCNotFound('Current DC: NONE')
+
     return False
 
 
-@retry_on_exception(5, base_delay=2, exc_type=CRMResourceNotFound)
+@retry_on_exception(5, base_delay=2,
+                    exc_type=(CRMResourceNotFound, CRMDCNotFound))
 def is_crm_leader(resource, retry=False):
     """
     Returns True if the charm calling this is the elected corosync leader,
 
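The cluster.py change above makes `is_crm_leader()` raise `CRMDCNotFound` while the corosync DC election is still in progress, so the `retry_on_exception` decorator keeps re-querying instead of returning a premature False. A minimal sketch of that retry behavior (the decorator here is a simplified stand-in, not the charmhelpers source; the attempt-counting function is illustrative only):

```python
import time


class CRMDCNotFound(Exception):
    pass


def retry_on_exception(num_retries, base_delay=0, exc_type=Exception):
    """Simplified stand-in for charmhelpers' retry_on_exception:
    re-invoke the wrapped function while it raises one of exc_type,
    up to num_retries times."""
    def _decorator(f):
        def _wrapped(*args, **kwargs):
            retries = num_retries
            while True:
                try:
                    return f(*args, **kwargs)
                except exc_type:
                    if retries <= 0:
                        raise
                    retries -= 1
                    time.sleep(base_delay)
        return _wrapped
    return _decorator


attempts = []


@retry_on_exception(5, base_delay=0, exc_type=(CRMDCNotFound,))
def is_crm_leader():
    # Simulate 'crm status' reporting 'Current DC: NONE' until the
    # third query, as on a cluster still electing its DC.
    attempts.append(1)
    if len(attempts) < 3:
        raise CRMDCNotFound('Current DC: NONE')
    return True


print(is_crm_leader())  # retried twice, then True
```

Passing a tuple to `exc_type` is what lets the one decorator cover both `CRMResourceNotFound` and the new `CRMDCNotFound`.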
=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
--- hooks/charmhelpers/contrib/openstack/ip.py 2015-02-24 11:04:31 +0000
+++ hooks/charmhelpers/contrib/openstack/ip.py 2015-06-11 15:38:49 +0000
@@ -17,6 +17,7 @@
 from charmhelpers.core.hookenv import (
     config,
     unit_get,
+    service_name,
 )
 from charmhelpers.contrib.network.ip import (
     get_address_in_network,
@@ -26,8 +27,6 @@
 )
 from charmhelpers.contrib.hahelpers.cluster import is_clustered
 
-from functools import partial
-
 PUBLIC = 'public'
 INTERNAL = 'int'
 ADMIN = 'admin'
@@ -35,15 +34,18 @@
 ADDRESS_MAP = {
     PUBLIC: {
         'config': 'os-public-network',
-        'fallback': 'public-address'
+        'fallback': 'public-address',
+        'override': 'os-public-hostname',
     },
     INTERNAL: {
         'config': 'os-internal-network',
-        'fallback': 'private-address'
+        'fallback': 'private-address',
+        'override': 'os-internal-hostname',
     },
     ADMIN: {
         'config': 'os-admin-network',
-        'fallback': 'private-address'
+        'fallback': 'private-address',
+        'override': 'os-admin-hostname',
     }
 }
 
@@ -57,15 +59,50 @@
     :param endpoint_type: str endpoint type to resolve.
     :param returns: str base URL for services on the current service unit.
     """
-    scheme = 'http'
-    if 'https' in configs.complete_contexts():
-        scheme = 'https'
+    scheme = _get_scheme(configs)
+
     address = resolve_address(endpoint_type)
     if is_ipv6(address):
         address = "[{}]".format(address)
+
     return '%s://%s' % (scheme, address)
 
 
+def _get_scheme(configs):
+    """Returns the scheme to use for the url (either http or https)
+    depending upon whether https is in the configs value.
+
+    :param configs: OSTemplateRenderer config templating object to inspect
+                    for a complete https context.
+    :returns: either 'http' or 'https' depending on whether https is
+              configured within the configs context.
+    """
+    scheme = 'http'
+    if configs and 'https' in configs.complete_contexts():
+        scheme = 'https'
+    return scheme
+
+
+def _get_address_override(endpoint_type=PUBLIC):
+    """Returns any address overrides that the user has defined based on the
+    endpoint type.
+
+    Note: this function allows for the service name to be inserted into the
+    address if the user specifies {service_name}.somehost.org.
+
+    :param endpoint_type: the type of endpoint to retrieve the override
+                          value for.
+    :returns: any endpoint address or hostname that the user has overridden
+              or None if an override is not present.
+    """
+    override_key = ADDRESS_MAP[endpoint_type]['override']
+    addr_override = config(override_key)
+    if not addr_override:
+        return None
+    else:
+        return addr_override.format(service_name=service_name())
+
+
 def resolve_address(endpoint_type=PUBLIC):
     """Return unit address depending on net config.
 
@@ -77,7 +114,10 @@
 
     :param endpoint_type: Network endpoing type
     """
-    resolved_address = None
+    resolved_address = _get_address_override(endpoint_type)
+    if resolved_address:
+        return resolved_address
+
     vips = config('vip')
     if vips:
         vips = vips.split()
@@ -109,38 +149,3 @@
                 "clustered=%s)" % (net_type, clustered))
 
     return resolved_address
-
-
-def endpoint_url(configs, url_template, port, endpoint_type=PUBLIC,
-                 override=None):
-    """Returns the correct endpoint URL to advertise to Keystone.
-
-    This method provides the correct endpoint URL which should be advertised to
-    the keystone charm for endpoint creation. This method allows for the url to
-    be overridden to force a keystone endpoint to have specific URL for any of
-    the defined scopes (admin, internal, public).
-
-    :param configs: OSTemplateRenderer config templating object to inspect
-                    for a complete https context.
-    :param url_template: str format string for creating the url template. Only
-                         two values will be passed - the scheme+hostname
-                         returned by the canonical_url and the port.
-    :param endpoint_type: str endpoint type to resolve.
-    :param override: str the name of the config option which overrides the
-                     endpoint URL defined by the charm itself. None will
-                     disable any overrides (default).
-    """
-    if override:
-        # Return any user-defined overrides for the keystone endpoint URL.
-        user_value = config(override)
-        if user_value:
-            return user_value.strip()
-
-    return url_template % (canonical_url(configs, endpoint_type), port)
-
-
-public_endpoint = partial(endpoint_url, endpoint_type=PUBLIC)
-
-internal_endpoint = partial(endpoint_url, endpoint_type=INTERNAL)
-
-admin_endpoint = partial(endpoint_url, endpoint_type=ADMIN)
 
=== added directory 'tests'
=== added file 'tests/00-setup'
--- tests/00-setup 1970-01-01 00:00:00 +0000
+++ tests/00-setup 2015-06-11 15:38:49 +0000
@@ -0,0 +1,11 @@
+#!/bin/bash
+
+set -ex
+
+sudo add-apt-repository --yes ppa:juju/stable
+sudo apt-get update --yes
+sudo apt-get install --yes python-amulet \
+                           python-distro-info \
+                           python-glanceclient \
+                           python-keystoneclient \
+                           python-novaclient
=== added file 'tests/014-basic-precise-icehouse'
--- tests/014-basic-precise-icehouse 1970-01-01 00:00:00 +0000
+++ tests/014-basic-precise-icehouse 2015-06-11 15:38:49 +0000
@@ -0,0 +1,11 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic heat deployment on precise-icehouse."""
+
+from basic_deployment import HeatBasicDeployment
+
+if __name__ == '__main__':
+    deployment = HeatBasicDeployment(series='precise',
+                                     openstack='cloud:precise-icehouse',
+                                     source='cloud:precise-updates/icehouse')
+    deployment.run_tests()
=== added file 'tests/015-basic-trusty-icehouse'
--- tests/015-basic-trusty-icehouse 1970-01-01 00:00:00 +0000
+++ tests/015-basic-trusty-icehouse 2015-06-11 15:38:49 +0000
@@ -0,0 +1,9 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic heat deployment on trusty-icehouse."""
+
+from basic_deployment import HeatBasicDeployment
+
+if __name__ == '__main__':
+    deployment = HeatBasicDeployment(series='trusty')
+    deployment.run_tests()
=== added file 'tests/016-basic-trusty-juno'
--- tests/016-basic-trusty-juno 1970-01-01 00:00:00 +0000
+++ tests/016-basic-trusty-juno 2015-06-11 15:38:49 +0000
@@ -0,0 +1,11 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic heat deployment on trusty-juno."""
+
+from basic_deployment import HeatBasicDeployment
+
+if __name__ == '__main__':
+    deployment = HeatBasicDeployment(series='trusty',
+                                     openstack='cloud:trusty-juno',
+                                     source='cloud:trusty-updates/juno')
+    deployment.run_tests()
=== added file 'tests/017-basic-trusty-kilo'
--- tests/017-basic-trusty-kilo 1970-01-01 00:00:00 +0000
+++ tests/017-basic-trusty-kilo 2015-06-11 15:38:49 +0000
@@ -0,0 +1,11 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic heat deployment on trusty-kilo."""
+
+from basic_deployment import HeatBasicDeployment
+
+if __name__ == '__main__':
+    deployment = HeatBasicDeployment(series='trusty',
+                                     openstack='cloud:trusty-kilo',
+                                     source='cloud:trusty-updates/kilo')
+    deployment.run_tests()
=== added file 'tests/018-basic-utopic-juno'
--- tests/018-basic-utopic-juno 1970-01-01 00:00:00 +0000
+++ tests/018-basic-utopic-juno 2015-06-11 15:38:49 +0000
@@ -0,0 +1,9 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic heat deployment on utopic-juno."""
+
+from basic_deployment import HeatBasicDeployment
+
+if __name__ == '__main__':
+    deployment = HeatBasicDeployment(series='utopic')
+    deployment.run_tests()
=== added file 'tests/019-basic-vivid-kilo'
--- tests/019-basic-vivid-kilo 1970-01-01 00:00:00 +0000
+++ tests/019-basic-vivid-kilo 2015-06-11 15:38:49 +0000
@@ -0,0 +1,9 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic heat deployment on vivid-kilo."""
+
+from basic_deployment import HeatBasicDeployment
+
+if __name__ == '__main__':
+    deployment = HeatBasicDeployment(series='vivid')
+    deployment.run_tests()
=== added file 'tests/README'
--- tests/README 1970-01-01 00:00:00 +0000
+++ tests/README 2015-06-11 15:38:49 +0000
@@ -0,0 +1,76 @@
+This directory provides Amulet tests that focus on verification of heat
+deployments.
+
+test_* methods are called in lexical sort order.
+
+Test name convention to ensure desired test order:
+    1xx service and endpoint checks
+    2xx relation checks
+    3xx config checks
+    4xx functional checks
+    9xx restarts and other final checks
+
+Common uses of heat relations in deployments:
+    - [ heat, mysql ]
+    - [ heat, keystone ]
+    - [ heat, rabbitmq-server ]
+
+More detailed relations of heat service in a common deployment:
+    relations:
+        amqp:
+        - rabbitmq-server
+        identity-service:
+        - keystone
+        shared-db:
+        - mysql
+
+In order to run tests, you'll need charm-tools installed (in addition to
+juju, of course):
+    sudo add-apt-repository ppa:juju/stable
+    sudo apt-get update
+    sudo apt-get install charm-tools
+
+If you use a web proxy server to access the web, you'll need to set the
+AMULET_HTTP_PROXY environment variable to the http URL of the proxy server.
+
+The following examples demonstrate different ways that tests can be executed.
+All examples are run from the charm's root directory.
+
+  * To run all tests (starting with 00-setup):
+
+      make test
+
+  * To run a specific test module (or modules):
+
+      juju test -v -p AMULET_HTTP_PROXY 15-basic-trusty-icehouse
+
+  * To run a specific test module (or modules), and keep the environment
+    deployed after a failure:
+
+      juju test --set-e -v -p AMULET_HTTP_PROXY 15-basic-trusty-icehouse
+
+  * To re-run a test module against an already deployed environment (one
+    that was deployed by a previous call to 'juju test --set-e'):
+
+      ./tests/15-basic-trusty-icehouse
+
+For debugging and test development purposes, all code should be idempotent.
+In other words, the code should have the ability to be re-run without changing
+the results beyond the initial run. This enables editing and re-running of a
+test module against an already deployed environment, as described above.
+
+Manual debugging tips:
+
+  * Set the following env vars before using the OpenStack CLI as admin:
+      export OS_AUTH_URL=http://`juju-deployer -f keystone 2>&1 | tail -n 1`:5000/v2.0
+      export OS_TENANT_NAME=admin
+      export OS_USERNAME=admin
+      export OS_PASSWORD=openstack
+      export OS_REGION_NAME=RegionOne
+
+  * Set the following env vars before using the OpenStack CLI as demoUser:
+      export OS_AUTH_URL=http://`juju-deployer -f keystone 2>&1 | tail -n 1`:5000/v2.0
+      export OS_TENANT_NAME=demoTenant
+      export OS_USERNAME=demoUser
+      export OS_PASSWORD=password
+      export OS_REGION_NAME=RegionOne
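The README's numeric-prefix convention works because test runners collect `test_*` methods in lexical sort order, so `test_1xx` service checks always run before `test_2xx` relation checks and so on. A tiny sketch (class and method names are illustrative, not from the charm):

```python
class FakeTests(object):
    """Method names follow the README's 1xx/2xx/.../9xx convention."""

    def test_100_services(self):
        pass

    def test_110_service_catalog(self):
        pass

    def test_200_heat_mysql_shared_db_relation(self):
        pass

    def test_900_restart_on_config_change(self):
        pass


# Runners that discover test_* methods sort them lexically, which is why
# the numeric prefixes control execution order.
ordered = sorted(m for m in dir(FakeTests) if m.startswith('test_'))
print(ordered)
```

Lexical (string) ordering is what matters here, which is also why each prefix band keeps the same number of digits.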
=== added file 'tests/basic_deployment.py'
--- tests/basic_deployment.py 1970-01-01 00:00:00 +0000
+++ tests/basic_deployment.py 2015-06-11 15:38:49 +0000
@@ -0,0 +1,606 @@
+#!/usr/bin/python
+
+"""
+Basic heat functional test.
+"""
+import amulet
+import time
+from heatclient.common import template_utils
+
+from charmhelpers.contrib.openstack.amulet.deployment import (
+    OpenStackAmuletDeployment
+)
+
+from charmhelpers.contrib.openstack.amulet.utils import (
+    OpenStackAmuletUtils,
+    DEBUG,
+    # ERROR
+)
+
+# Use DEBUG to turn on debug logging
+u = OpenStackAmuletUtils(DEBUG)
+
+# Resource and name constants
+IMAGE_NAME = 'cirros-image-1'
+KEYPAIR_NAME = 'testkey'
+STACK_NAME = 'hello_world'
+RESOURCE_TYPE = 'server'
+TEMPLATE_REL_PATH = 'tests/files/hot_hello_world.yaml'
+
+
+class HeatBasicDeployment(OpenStackAmuletDeployment):
+    """Amulet tests on a basic heat deployment."""
+
+    def __init__(self, series=None, openstack=None, source=None, git=False,
+                 stable=False):
+        """Deploy the entire test environment."""
+        super(HeatBasicDeployment, self).__init__(series, openstack,
+                                                  source, stable)
+        self.git = git
+        self._add_services()
+        self._add_relations()
+        self._configure_services()
+        self._deploy()
+        self._initialize_tests()
+
+    def _add_services(self):
+        """Add services
+
+           Add the services that we're testing, where heat is local,
+           and the rest of the service are from lp branches that are
+           compatible with the local charm (e.g. stable or next).
+           """
+        this_service = {'name': 'heat'}
+        other_services = [{'name': 'keystone'},
+                          {'name': 'rabbitmq-server'},
+                          {'name': 'mysql'},
+                          {'name': 'glance'},
+                          {'name': 'nova-cloud-controller'},
+                          {'name': 'nova-compute'}]
+        super(HeatBasicDeployment, self)._add_services(this_service,
+                                                       other_services)
+
+    def _add_relations(self):
+        """Add all of the relations for the services."""
+
+        relations = {
+            'heat:amqp': 'rabbitmq-server:amqp',
+            'heat:identity-service': 'keystone:identity-service',
+            'heat:shared-db': 'mysql:shared-db',
+            'nova-compute:image-service': 'glance:image-service',
+            'nova-compute:shared-db': 'mysql:shared-db',
+            'nova-compute:amqp': 'rabbitmq-server:amqp',
+            'nova-cloud-controller:shared-db': 'mysql:shared-db',
+            'nova-cloud-controller:identity-service':
+                'keystone:identity-service',
+            'nova-cloud-controller:amqp': 'rabbitmq-server:amqp',
+            'nova-cloud-controller:cloud-compute':
+                'nova-compute:cloud-compute',
+            'nova-cloud-controller:image-service': 'glance:image-service',
+            'keystone:shared-db': 'mysql:shared-db',
+            'glance:identity-service': 'keystone:identity-service',
+            'glance:shared-db': 'mysql:shared-db',
+            'glance:amqp': 'rabbitmq-server:amqp'
+        }
+        super(HeatBasicDeployment, self)._add_relations(relations)
+
+    def _configure_services(self):
+        """Configure all of the services."""
+        nova_config = {'config-flags': 'auto_assign_floating_ip=False',
+                       'enable-live-migration': 'False'}
+        keystone_config = {'admin-password': 'openstack',
+                           'admin-token': 'ubuntutesting'}
+        configs = {'nova-compute': nova_config, 'keystone': keystone_config}
+        super(HeatBasicDeployment, self)._configure_services(configs)
+
+    def _initialize_tests(self):
+        """Perform final initialization before tests get run."""
+        # Access the sentries for inspecting service units
+        self.heat_sentry = self.d.sentry.unit['heat/0']
+        self.mysql_sentry = self.d.sentry.unit['mysql/0']
+        self.keystone_sentry = self.d.sentry.unit['keystone/0']
+        self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
+        self.nova_compute_sentry = self.d.sentry.unit['nova-compute/0']
+        self.glance_sentry = self.d.sentry.unit['glance/0']
+        u.log.debug('openstack release val: {}'.format(
+            self._get_openstack_release()))
+        u.log.debug('openstack release str: {}'.format(
+            self._get_openstack_release_string()))
+
+        # Let things settle a bit before moving forward
+        time.sleep(30)
+
+        # Authenticate admin with keystone
+        self.keystone = u.authenticate_keystone_admin(self.keystone_sentry,
+                                                      user='admin',
+                                                      password='openstack',
+                                                      tenant='admin')
+
+        # Authenticate admin with glance endpoint
+        self.glance = u.authenticate_glance_admin(self.keystone)
+
+        # Authenticate admin with nova endpoint
+        self.nova = u.authenticate_nova_user(self.keystone,
+                                             user='admin',
+                                             password='openstack',
+                                             tenant='admin')
+
+        # Authenticate admin with heat endpoint
+        self.heat = u.authenticate_heat_admin(self.keystone)
+
+    def _image_create(self):
+        """Create an image to be used by the heat template, verify it exists"""
+        u.log.debug('Creating glance image ({})...'.format(IMAGE_NAME))
+
+        # Create a new image
+        image_new = u.create_cirros_image(self.glance, IMAGE_NAME)
+
+        # Confirm image is created and has status of 'active'
+        if not image_new:
+            message = 'glance image create failed'
+            amulet.raise_status(amulet.FAIL, msg=message)
+
+        # Verify new image name
+        images_list = list(self.glance.images.list())
+        if images_list[0].name != IMAGE_NAME:
+            message = ('glance image create failed or unexpected '
+                       'image name {}'.format(images_list[0].name))
+            amulet.raise_status(amulet.FAIL, msg=message)
+
+    def _keypair_create(self):
+        """Create a keypair to be used by the heat template,
+           or get a keypair if it exists."""
+        self.keypair = u.create_or_get_keypair(self.nova,
+                                               keypair_name=KEYPAIR_NAME)
+        if not self.keypair:
+            msg = 'Failed to create or get keypair.'
+            amulet.raise_status(amulet.FAIL, msg=msg)
+        u.log.debug("Keypair: {} {}".format(self.keypair.id,
+                    self.keypair.fingerprint))
+
+    def _stack_create(self):
+        """Create a heat stack from a basic heat template, verify its status"""
+        u.log.debug('Creating heat stack...')
+
+        t_url = u.file_to_url(TEMPLATE_REL_PATH)
+        r_req = self.heat.http_client.raw_request
+        u.log.debug('template url: {}'.format(t_url))
+
+        t_files, template = template_utils.get_template_contents(t_url, r_req)
+        env_files, env = template_utils.process_environment_and_files(
+            env_path=None)
+
+        fields = {
+            'stack_name': STACK_NAME,
+            'timeout_mins': '15',
+            'disable_rollback': False,
+            'parameters': {
+                'admin_pass': 'Ubuntu',
+                'key_name': KEYPAIR_NAME,
+                'image': IMAGE_NAME
+            },
+            'template': template,
+            'files': dict(list(t_files.items()) + list(env_files.items())),
+            'environment': env
+        }
+
+        # Create the stack.
+        try:
+            _stack = self.heat.stacks.create(**fields)
+            u.log.debug('Stack data: {}'.format(_stack))
+            _stack_id = _stack['stack']['id']
+            u.log.debug('Creating new stack, ID: {}'.format(_stack_id))
+        except Exception as e:
+            # Generally, an api or cloud config error if this is hit.
+            msg = 'Failed to create heat stack: {}'.format(e)
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Confirm stack reaches COMPLETE status.
+        # /!\ Heat stacks reach a COMPLETE status even when nova cannot
+        # find resources (a valid hypervisor) to fit the instance, in
+        # which case the heat stack self-deletes!  Confirm anyway...
+        ret = u.resource_reaches_status(self.heat.stacks, _stack_id,
+                                        expected_stat="COMPLETE",
+                                        msg="Stack status wait")
+        _stacks = list(self.heat.stacks.list())
+        u.log.debug('All stacks: {}'.format(_stacks))
+        if not ret:
+            msg = 'Heat stack failed to reach expected state.'
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Confirm stack still exists.
+        try:
+            _stack = self.heat.stacks.get(STACK_NAME)
+        except Exception as e:
+            # Generally, a resource availability issue if this is hit.
+            msg = 'Failed to get heat stack: {}'.format(e)
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Confirm stack name.
+        u.log.debug('Expected, actual stack name: {}, '
+                    '{}'.format(STACK_NAME, _stack.stack_name))
+        if STACK_NAME != _stack.stack_name:
+            msg = 'Stack name mismatch, {} != {}'.format(STACK_NAME,
+                                                         _stack.stack_name)
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+    def _stack_resource_compute(self):
+        """Confirm that the stack has created a subsequent nova
+           compute resource, and confirm its status."""
+        u.log.debug('Confirming heat stack resource status...')
+
+        # Confirm existence of a heat-generated nova compute resource.
+        _resource = self.heat.resources.get(STACK_NAME, RESOURCE_TYPE)
+        _server_id = _resource.physical_resource_id
+        if _server_id:
+            u.log.debug('Heat template spawned nova instance, '
+                        'ID: {}'.format(_server_id))
+        else:
+            msg = 'Stack failed to spawn a nova compute resource (instance).'
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Confirm nova instance reaches ACTIVE status.
+        ret = u.resource_reaches_status(self.nova.servers, _server_id,
+                                        expected_stat="ACTIVE",
+                                        msg="nova instance")
+        if not ret:
+            msg = 'Nova compute instance failed to reach expected state.'
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+    def _stack_delete(self):
+        """Delete a heat stack, verify."""
+        u.log.debug('Deleting heat stack...')
+        u.delete_resource(self.heat.stacks, STACK_NAME, msg="heat stack")
+
+    def _image_delete(self):
+        """Delete that image."""
+        u.log.debug('Deleting glance image...')
+        image = self.nova.images.find(name=IMAGE_NAME)
+        u.delete_resource(self.nova.images, image, msg="glance image")
+
+    def _keypair_delete(self):
+        """Delete that keypair."""
+        u.log.debug('Deleting keypair...')
+        u.delete_resource(self.nova.keypairs, KEYPAIR_NAME, msg="nova keypair")
+
+    def test_100_services(self):
+        """Verify the expected services are running on the corresponding
+           service units."""
+        service_names = {
+            self.heat_sentry: ['heat-api',
+                               'heat-api-cfn',
+                               'heat-engine'],
+            self.mysql_sentry: ['mysql'],
+            self.rabbitmq_sentry: ['rabbitmq-server'],
+            self.nova_compute_sentry: ['nova-compute',
+                                       'nova-network',
+                                       'nova-api'],
+            self.keystone_sentry: ['keystone'],
+            self.glance_sentry: ['glance-registry', 'glance-api']
+        }
+
+        ret = u.validate_services_by_name(service_names)
+        if ret:
+            amulet.raise_status(amulet.FAIL, msg=ret)
+
+    def test_110_service_catalog(self):
+        """Verify that the service catalog endpoint data is valid."""
+        u.log.debug('Checking service catalog endpoint data...')
+        endpoint_vol = {'adminURL': u.valid_url,
+                        'region': 'RegionOne',
+                        'publicURL': u.valid_url,
+                        'internalURL': u.valid_url}
+        endpoint_id = {'adminURL': u.valid_url,
+                       'region': 'RegionOne',
+                       'publicURL': u.valid_url,
+                       'internalURL': u.valid_url}
+        if self._get_openstack_release() >= self.precise_folsom:
+            endpoint_vol['id'] = u.not_null
+            endpoint_id['id'] = u.not_null
+        expected = {'compute': [endpoint_vol], 'orchestration': [endpoint_vol],
+                    'image': [endpoint_vol], 'identity': [endpoint_id]}
+
+        if self._get_openstack_release() <= self.trusty_juno:
+            # Before Kilo
+            expected['s3'] = [endpoint_vol]
+            expected['ec2'] = [endpoint_vol]
+
+        actual = self.keystone.service_catalog.get_endpoints()
+        ret = u.validate_svc_catalog_endpoint_data(expected, actual)
+        if ret:
+            amulet.raise_status(amulet.FAIL, msg=ret)
+
+    def test_120_heat_endpoint(self):
+        """Verify the heat api endpoint data."""
+        u.log.debug('Checking api endpoint data...')
+        endpoints = self.keystone.endpoints.list()
+
+        if self._get_openstack_release() <= self.trusty_juno:
+            # Before Kilo
+            admin_port = internal_port = public_port = '3333'
+        else:
+            # Kilo and later
+            admin_port = internal_port = public_port = '8004'
+
+        expected = {'id': u.not_null,
+                    'region': 'RegionOne',
+                    'adminurl': u.valid_url,
+                    'internalurl': u.valid_url,
+                    'publicurl': u.valid_url,
+                    'service_id': u.not_null}
+
+        ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
+                                       public_port, expected)
+        if ret:
+            message = 'heat endpoint: {}'.format(ret)
+            amulet.raise_status(amulet.FAIL, msg=message)
+
+    def test_200_heat_mysql_shared_db_relation(self):
+        """Verify the heat:mysql shared-db relation data"""
+        u.log.debug('Checking heat:mysql shared-db relation data...')
+        unit = self.heat_sentry
+        relation = ['shared-db', 'mysql:shared-db']
+        expected = {
+            'private-address': u.valid_ip,
+            'heat_database': 'heat',
+            'heat_username': 'heat',
+            'heat_hostname': u.valid_ip
+        }
+
+        ret = u.validate_relation_data(unit, relation, expected)
+        if ret:
+            message = u.relation_error('heat:mysql shared-db', ret)
+            amulet.raise_status(amulet.FAIL, msg=message)
+
+    def test_201_mysql_heat_shared_db_relation(self):
356 """Verify the mysql:heat shared-db relation data"""
357 u.log.debug('Checking mysql:heat shared-db relation data...')
358 unit = self.mysql_sentry
359 relation = ['shared-db', 'heat:shared-db']
360 expected = {
361 'private-address': u.valid_ip,
362 'db_host': u.valid_ip,
363 'heat_allowed_units': 'heat/0',
364 'heat_password': u.not_null
365 }
366
367 ret = u.validate_relation_data(unit, relation, expected)
368 if ret:
369 message = u.relation_error('mysql:heat shared-db', ret)
370 amulet.raise_status(amulet.FAIL, msg=message)
371
372 def test_202_heat_keystone_identity_relation(self):
373 """Verify the heat:keystone identity-service relation data"""
374 u.log.debug('Checking heat:keystone identity-service relation data...')
375 unit = self.heat_sentry
376 relation = ['identity-service', 'keystone:identity-service']
377 expected = {
378 'heat_service': 'heat',
379 'heat_region': 'RegionOne',
380 'heat_public_url': u.valid_url,
381 'heat_admin_url': u.valid_url,
382 'heat_internal_url': u.valid_url,
383 'heat-cfn_service': 'heat-cfn',
384 'heat-cfn_region': 'RegionOne',
385 'heat-cfn_public_url': u.valid_url,
386 'heat-cfn_admin_url': u.valid_url,
387 'heat-cfn_internal_url': u.valid_url
388 }
389 ret = u.validate_relation_data(unit, relation, expected)
390 if ret:
391 message = u.relation_error('heat:keystone identity-service', ret)
392 amulet.raise_status(amulet.FAIL, msg=message)
393
394 def test_203_keystone_heat_identity_relation(self):
395 """Verify the keystone:heat identity-service relation data"""
396 u.log.debug('Checking keystone:heat identity-service relation data...')
397 unit = self.keystone_sentry
398 relation = ['identity-service', 'heat:identity-service']
399 expected = {
400 'service_protocol': 'http',
401 'service_tenant': 'services',
402 'admin_token': 'ubuntutesting',
403 'service_password': u.not_null,
404 'service_port': '5000',
405 'auth_port': '35357',
406 'auth_protocol': 'http',
407 'private-address': u.valid_ip,
408 'auth_host': u.valid_ip,
409 'service_username': 'heat-cfn_heat',
410 'service_tenant_id': u.not_null,
411 'service_host': u.valid_ip
412 }
413 ret = u.validate_relation_data(unit, relation, expected)
414 if ret:
415 message = u.relation_error('keystone:heat identity-service', ret)
416 amulet.raise_status(amulet.FAIL, msg=message)
417
418 def test_204_heat_rmq_amqp_relation(self):
419 """Verify the heat:rabbitmq-server amqp relation data"""
420 u.log.debug('Checking heat:rabbitmq-server amqp relation data...')
421 unit = self.heat_sentry
422 relation = ['amqp', 'rabbitmq-server:amqp']
423 expected = {
424 'username': u.not_null,
425 'private-address': u.valid_ip,
426 'vhost': 'openstack'
427 }
428
429 ret = u.validate_relation_data(unit, relation, expected)
430 if ret:
431 message = u.relation_error('heat:rabbitmq-server amqp', ret)
432 amulet.raise_status(amulet.FAIL, msg=message)
433
434 def test_205_rmq_heat_amqp_relation(self):
435 """Verify the rabbitmq-server:heat amqp relation data"""
436 u.log.debug('Checking rabbitmq-server:heat amqp relation data...')
437 unit = self.rabbitmq_sentry
438 relation = ['amqp', 'heat:amqp']
439 expected = {
440 'private-address': u.valid_ip,
441 'password': u.not_null,
442 'hostname': u.valid_ip,
443 }
444
445 ret = u.validate_relation_data(unit, relation, expected)
446 if ret:
447 message = u.relation_error('rabbitmq-server:heat amqp', ret)
448 amulet.raise_status(amulet.FAIL, msg=message)
449
450 def test_300_heat_config(self):
451 """Verify the data in the heat config file."""
452 u.log.debug('Checking heat config file data...')
453 unit = self.heat_sentry
454 conf = '/etc/heat/heat.conf'
455
456 ks_rel = self.keystone_sentry.relation('identity-service',
457 'heat:identity-service')
458 rmq_rel = self.rabbitmq_sentry.relation('amqp',
459 'heat:amqp')
460 mysql_rel = self.mysql_sentry.relation('shared-db',
461 'heat:shared-db')
462
463 u.log.debug('keystone:heat relation: {}'.format(ks_rel))
464 u.log.debug('rabbitmq:heat relation: {}'.format(rmq_rel))
465 u.log.debug('mysql:heat relation: {}'.format(mysql_rel))
466
467 db_uri = "mysql://{}:{}@{}/{}".format('heat',
468 mysql_rel['heat_password'],
469 mysql_rel['db_host'],
470 'heat')
471
472 auth_uri = '{}://{}:{}/v2.0'.format(ks_rel['service_protocol'],
473 ks_rel['service_host'],
474 ks_rel['service_port'])
475
476 expected = {
477 'DEFAULT': {
478 'use_syslog': 'False',
479 'debug': 'False',
480 'verbose': 'False',
481 'log_dir': '/var/log/heat',
482 'instance_driver': 'heat.engine.nova',
483 'plugin_dirs': '/usr/lib64/heat,/usr/lib/heat',
484 'environment_dir': '/etc/heat/environment.d',
485 'deferred_auth_method': 'password',
486 'host': 'heat',
487 'rabbit_userid': 'heat',
488 'rabbit_virtual_host': 'openstack',
489 'rabbit_password': rmq_rel['password'],
490 'rabbit_host': rmq_rel['hostname']
491 },
492 'keystone_authtoken': {
493 'auth_uri': auth_uri,
494 'auth_host': ks_rel['service_host'],
495 'auth_port': ks_rel['auth_port'],
496 'auth_protocol': ks_rel['auth_protocol'],
497 'admin_tenant_name': 'services',
498 'admin_user': 'heat-cfn_heat',
499 'admin_password': ks_rel['service_password'],
500 'signing_dir': '/var/cache/heat'
501 },
502 'database': {
503 'connection': db_uri
504 },
505 'heat_api': {
506 'bind_port': '7994'
507 },
508 'heat_api_cfn': {
509 'bind_port': '7990'
510 },
511 'paste_deploy': {
512 'api_paste_config': '/etc/heat/api-paste.ini'
513 },
514 }
515
516 for section, pairs in expected.iteritems():
517 ret = u.validate_config_data(unit, conf, section, pairs)
518 if ret:
519 message = "heat config error: {}".format(ret)
520 amulet.raise_status(amulet.FAIL, msg=message)
521
522 def test_400_heat_resource_types_list(self):
523         """Check default heat resource list behavior; also confirm
524 heat functionality."""
525         u.log.debug('Checking default heat resource list...')
526 try:
527 types = list(self.heat.resource_types.list())
528 if type(types) is list:
529 u.log.debug('Resource type list check is ok.')
530 else:
531 msg = 'Resource type list is not a list!'
532 u.log.error('{}'.format(msg))
533                 raise Exception(msg)
534 if len(types) > 0:
535 u.log.debug('Resource type list is populated '
536 '({}, ok).'.format(len(types)))
537 else:
538 msg = 'Resource type list length is zero!'
539 u.log.error(msg)
540                 raise Exception(msg)
541         except Exception:
542 msg = 'Resource type list failed.'
543 u.log.error(msg)
544 raise
545
546 def test_402_heat_stack_list(self):
547         """Check default heat stack list behavior; also confirm
548 heat functionality."""
549 u.log.debug('Checking default heat stack list...')
550 try:
551 stacks = list(self.heat.stacks.list())
552 if type(stacks) is list:
553 u.log.debug("Stack list check is ok.")
554 else:
555 msg = 'Stack list returned something other than a list.'
556 u.log.error(msg)
557                 raise Exception(msg)
558         except Exception:
559 msg = 'Heat stack list failed.'
560 u.log.error(msg)
561 raise
562
563 def test_410_heat_stack_create_delete(self):
564 """Create a heat stack from template, confirm that a corresponding
565 nova compute resource is spawned, delete stack."""
566 self._image_create()
567 self._keypair_create()
568 self._stack_create()
569 self._stack_resource_compute()
570 self._stack_delete()
571 self._image_delete()
572 self._keypair_delete()
573
574 def test_900_heat_restart_on_config_change(self):
575 """Verify that the specified services are restarted when the config
576 is changed."""
577 sentry = self.heat_sentry
578 juju_service = 'heat'
579
580 # Expected default and alternate values
581 set_default = {'use-syslog': 'False'}
582 set_alternate = {'use-syslog': 'True'}
583
584 # Config file affected by juju set config change
585 conf_file = '/etc/heat/heat.conf'
586
587 # Services which are expected to restart upon config change
588 services = ['heat-api',
589 'heat-api-cfn',
590 'heat-engine']
591
592 # Make config change, check for service restarts
593 u.log.debug('Making config change on {}...'.format(juju_service))
594 self.d.configure(juju_service, set_alternate)
595
596 sleep_time = 30
597 for s in services:
598 u.log.debug("Checking that service restarted: {}".format(s))
599 if not u.service_restarted(sentry, s,
600 conf_file, sleep_time=sleep_time):
601 self.d.configure(juju_service, set_default)
602 msg = "service {} didn't restart after config change".format(s)
603 amulet.raise_status(amulet.FAIL, msg=msg)
604 sleep_time = 0
605
606 self.d.configure(juju_service, set_default)
0607
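The config-change test above (test_900) relies on charm-helpers' service_restarted(), which treats a service as restarted when its process start time is at or after the config file's mtime. A minimal standalone sketch of that comparison (the function and timestamp values here are illustrative, not part of the charm):

```python
def service_restarted(proc_start_time, conf_mtime):
    """True when the process started at or after the config file's
    last write, i.e. the daemon was restarted and saw the change."""
    return proc_start_time >= conf_mtime

# A daemon restarted 5s after the config write passes the check;
# a process older than the config file fails it.
assert service_restarted(proc_start_time=1005.0, conf_mtime=1000.0)
assert not service_restarted(proc_start_time=995.0, conf_mtime=1000.0)
```

The real helper sleeps first (sleep_time=30 on the first service above) to give hooks time to rewrite the config and bounce the daemons before comparing timestamps.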
=== added directory 'tests/charmhelpers'
=== added file 'tests/charmhelpers/__init__.py'
--- tests/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/__init__.py 2015-06-11 15:38:49 +0000
@@ -0,0 +1,38 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17# Bootstrap charm-helpers, installing its dependencies if necessary using
18# only standard libraries.
19import subprocess
20import sys
21
22try:
23 import six # flake8: noqa
24except ImportError:
25 if sys.version_info.major == 2:
26 subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
27 else:
28 subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
29 import six # flake8: noqa
30
31try:
32 import yaml # flake8: noqa
33except ImportError:
34 if sys.version_info.major == 2:
35 subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
36 else:
37 subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
38 import yaml # flake8: noqa
039
=== added directory 'tests/charmhelpers/contrib'
=== added file 'tests/charmhelpers/contrib/__init__.py'
--- tests/charmhelpers/contrib/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/__init__.py 2015-06-11 15:38:49 +0000
@@ -0,0 +1,15 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
016
=== added directory 'tests/charmhelpers/contrib/amulet'
=== added file 'tests/charmhelpers/contrib/amulet/__init__.py'
--- tests/charmhelpers/contrib/amulet/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/amulet/__init__.py 2015-06-11 15:38:49 +0000
@@ -0,0 +1,15 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
016
=== added file 'tests/charmhelpers/contrib/amulet/deployment.py'
--- tests/charmhelpers/contrib/amulet/deployment.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/amulet/deployment.py 2015-06-11 15:38:49 +0000
@@ -0,0 +1,93 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17import amulet
18import os
19import six
20
21
22class AmuletDeployment(object):
23 """Amulet deployment.
24
25 This class provides generic Amulet deployment and test runner
26 methods.
27 """
28
29 def __init__(self, series=None):
30 """Initialize the deployment environment."""
31 self.series = None
32
33 if series:
34 self.series = series
35 self.d = amulet.Deployment(series=self.series)
36 else:
37 self.d = amulet.Deployment()
38
39 def _add_services(self, this_service, other_services):
40 """Add services.
41
42 Add services to the deployment where this_service is the local charm
43 that we're testing and other_services are the other services that
44 are being used in the local amulet tests.
45 """
46 if this_service['name'] != os.path.basename(os.getcwd()):
47 s = this_service['name']
48 msg = "The charm's root directory name needs to be {}".format(s)
49 amulet.raise_status(amulet.FAIL, msg=msg)
50
51 if 'units' not in this_service:
52 this_service['units'] = 1
53
54 self.d.add(this_service['name'], units=this_service['units'])
55
56 for svc in other_services:
57 if 'location' in svc:
58 branch_location = svc['location']
59 elif self.series:
60                 branch_location = 'cs:{}/{}'.format(self.series, svc['name'])
61 else:
62 branch_location = None
63
64 if 'units' not in svc:
65 svc['units'] = 1
66
67 self.d.add(svc['name'], charm=branch_location, units=svc['units'])
68
69 def _add_relations(self, relations):
70 """Add all of the relations for the services."""
71 for k, v in six.iteritems(relations):
72 self.d.relate(k, v)
73
74 def _configure_services(self, configs):
75 """Configure all of the services."""
76 for service, config in six.iteritems(configs):
77 self.d.configure(service, config)
78
79 def _deploy(self):
80 """Deploy environment and wait for all hooks to finish executing."""
81 try:
82 self.d.setup(timeout=900)
83 self.d.sentry.wait(timeout=900)
84 except amulet.helpers.TimeoutError:
85 amulet.raise_status(amulet.FAIL, msg="Deployment timed out")
86 except Exception:
87 raise
88
89 def run_tests(self):
90 """Run all of the methods that are prefixed with 'test_'."""
91 for test in dir(self):
92 if test.startswith('test_'):
93 getattr(self, test)()
094
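run_tests() above discovers tests by reflection: every attribute whose name starts with 'test_' is called, in the alphabetical order dir() yields. A self-contained illustration of that collection rule (the Runner class is hypothetical, not part of charm-helpers):

```python
class Runner(object):
    """Hypothetical stand-in for an AmuletDeployment subclass."""

    def __init__(self):
        self.calls = []

    def test_100_first(self):
        self.calls.append('test_100_first')

    def test_200_second(self):
        self.calls.append('test_200_second')

    def helper(self):
        # Not collected: name lacks the 'test_' prefix.
        self.calls.append('helper')

    def run_tests(self):
        # Same loop as AmuletDeployment.run_tests().
        for name in dir(self):
            if name.startswith('test_'):
                getattr(self, name)()

r = Runner()
r.run_tests()
# dir() returns names sorted, so the numeric prefixes fix run order.
assert r.calls == ['test_100_first', 'test_200_second']
```

This is also why the charm's test methods carry numeric prefixes (test_100_, test_200_, ...): dir() sorting makes the numbers double as execution order.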
=== added file 'tests/charmhelpers/contrib/amulet/utils.py'
--- tests/charmhelpers/contrib/amulet/utils.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/amulet/utils.py 2015-06-11 15:38:49 +0000
@@ -0,0 +1,408 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17import ConfigParser
18import distro_info
19import io
20import logging
21import os
22import re
23import six
24import sys
25import time
26import urlparse
27
28
29class AmuletUtils(object):
30 """Amulet utilities.
31
32 This class provides common utility functions that are used by Amulet
33 tests.
34 """
35
36 def __init__(self, log_level=logging.ERROR):
37 self.log = self.get_logger(level=log_level)
38 self.ubuntu_releases = self.get_ubuntu_releases()
39
40 def get_logger(self, name="amulet-logger", level=logging.DEBUG):
41 """Get a logger object that will log to stdout."""
42 log = logging
43 logger = log.getLogger(name)
44 fmt = log.Formatter("%(asctime)s %(funcName)s "
45 "%(levelname)s: %(message)s")
46
47 handler = log.StreamHandler(stream=sys.stdout)
48 handler.setLevel(level)
49 handler.setFormatter(fmt)
50
51 logger.addHandler(handler)
52 logger.setLevel(level)
53
54 return logger
55
56 def valid_ip(self, ip):
57 if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip):
58 return True
59 else:
60 return False
61
62 def valid_url(self, url):
63 p = re.compile(
64 r'^(?:http|ftp)s?://'
65 r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # noqa
66 r'localhost|'
67 r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
68 r'(?::\d+)?'
69 r'(?:/?|[/?]\S+)$',
70 re.IGNORECASE)
71 if p.match(url):
72 return True
73 else:
74 return False
75
76 def get_ubuntu_release_from_sentry(self, sentry_unit):
77 """Get Ubuntu release codename from sentry unit.
78
79 :param sentry_unit: amulet sentry/service unit pointer
80 :returns: list of strings - release codename, failure message
81 """
82 msg = None
83 cmd = 'lsb_release -cs'
84 release, code = sentry_unit.run(cmd)
85 if code == 0:
86 self.log.debug('{} lsb_release: {}'.format(
87 sentry_unit.info['unit_name'], release))
88 else:
89 msg = ('{} `{}` returned {} '
90 '{}'.format(sentry_unit.info['unit_name'],
91 cmd, release, code))
92 if release not in self.ubuntu_releases:
93 msg = ("Release ({}) not found in Ubuntu releases "
94 "({})".format(release, self.ubuntu_releases))
95 return release, msg
96
97 def validate_services(self, commands):
98 """Validate that lists of commands succeed on service units. Can be
99 used to verify system services are running on the corresponding
100 service units.
101
102         :param commands: dict with sentry keys and arbitrary command list values
103 :returns: None if successful, Failure string message otherwise
104 """
105 self.log.debug('Checking status of system services...')
106
107 # /!\ DEPRECATION WARNING (beisner):
108 # New and existing tests should be rewritten to use
109 # validate_services_by_name() as it is aware of init systems.
110 self.log.warn('/!\\ DEPRECATION WARNING: use '
111 'validate_services_by_name instead of validate_services '
112 'due to init system differences.')
113
114 for k, v in six.iteritems(commands):
115 for cmd in v:
116 output, code = k.run(cmd)
117 self.log.debug('{} `{}` returned '
118 '{}'.format(k.info['unit_name'],
119 cmd, code))
120 if code != 0:
121 return "command `{}` returned {}".format(cmd, str(code))
122 return None
123
124 def validate_services_by_name(self, sentry_services):
125 """Validate system service status by service name, automatically
126 detecting init system based on Ubuntu release codename.
127
128         :param sentry_services: dict with sentry keys and svc list values
129 :returns: None if successful, Failure string message otherwise
130 """
131 self.log.debug('Checking status of system services...')
132
133 # Point at which systemd became a thing
134 systemd_switch = self.ubuntu_releases.index('vivid')
135
136 for sentry_unit, services_list in six.iteritems(sentry_services):
137 # Get lsb_release codename from unit
138 release, ret = self.get_ubuntu_release_from_sentry(sentry_unit)
139 if ret:
140 return ret
141
142 for service_name in services_list:
143 if (self.ubuntu_releases.index(release) >= systemd_switch or
144 service_name == "rabbitmq-server"):
145 # init is systemd
146 cmd = 'sudo service {} status'.format(service_name)
147 elif self.ubuntu_releases.index(release) < systemd_switch:
148 # init is upstart
149 cmd = 'sudo status {}'.format(service_name)
150
151 output, code = sentry_unit.run(cmd)
152 self.log.debug('{} `{}` returned '
153 '{}'.format(sentry_unit.info['unit_name'],
154 cmd, code))
155 if code != 0:
156 return "command `{}` returned {}".format(cmd, str(code))
157 return None
158
159 def _get_config(self, unit, filename):
160 """Get a ConfigParser object for parsing a unit's config file."""
161 file_contents = unit.file_contents(filename)
162
163 # NOTE(beisner): by default, ConfigParser does not handle options
164 # with no value, such as the flags used in the mysql my.cnf file.
165 # https://bugs.python.org/issue7005
166 config = ConfigParser.ConfigParser(allow_no_value=True)
167 config.readfp(io.StringIO(file_contents))
168 return config
169
170 def validate_config_data(self, sentry_unit, config_file, section,
171 expected):
172 """Validate config file data.
173
174 Verify that the specified section of the config file contains
175 the expected option key:value pairs.
176 """
177 self.log.debug('Validating config file data ({} in {} on {})'
178 '...'.format(section, config_file,
179 sentry_unit.info['unit_name']))
180 config = self._get_config(sentry_unit, config_file)
181
182 if section != 'DEFAULT' and not config.has_section(section):
183 return "section [{}] does not exist".format(section)
184
185 for k in expected.keys():
186 if not config.has_option(section, k):
187 return "section [{}] is missing option {}".format(section, k)
188 if config.get(section, k) != expected[k]:
189 return "section [{}] {}:{} != expected {}:{}".format(
190 section, k, config.get(section, k), k, expected[k])
191 return None
192
193 def _validate_dict_data(self, expected, actual):
194 """Validate dictionary data.
195
196 Compare expected dictionary data vs actual dictionary data.
197 The values in the 'expected' dictionary can be strings, bools, ints,
198         longs, or can be a function that evaluates a variable and returns a
199 bool.
200 """
201 self.log.debug('actual: {}'.format(repr(actual)))
202 self.log.debug('expected: {}'.format(repr(expected)))
203
204 for k, v in six.iteritems(expected):
205 if k in actual:
206 if (isinstance(v, six.string_types) or
207 isinstance(v, bool) or
208 isinstance(v, six.integer_types)):
209 if v != actual[k]:
210 return "{}:{}".format(k, actual[k])
211 elif not v(actual[k]):
212 return "{}:{}".format(k, actual[k])
213 else:
214 return "key '{}' does not exist".format(k)
215 return None
216
217 def validate_relation_data(self, sentry_unit, relation, expected):
218 """Validate actual relation data based on expected relation data."""
219 actual = sentry_unit.relation(relation[0], relation[1])
220 return self._validate_dict_data(expected, actual)
221
222 def _validate_list_data(self, expected, actual):
223 """Compare expected list vs actual list data."""
224 for e in expected:
225 if e not in actual:
226 return "expected item {} not found in actual list".format(e)
227 return None
228
229 def not_null(self, string):
230 if string is not None:
231 return True
232 else:
233 return False
234
235 def _get_file_mtime(self, sentry_unit, filename):
236 """Get last modification time of file."""
237 return sentry_unit.file_stat(filename)['mtime']
238
239 def _get_dir_mtime(self, sentry_unit, directory):
240 """Get last modification time of directory."""
241 return sentry_unit.directory_stat(directory)['mtime']
242
243 def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False):
244 """Get process' start time.
245
246 Determine start time of the process based on the last modification
247 time of the /proc/pid directory. If pgrep_full is True, the process
248 name is matched against the full command line.
249 """
250 if pgrep_full:
251 cmd = 'pgrep -o -f {}'.format(service)
252 else:
253 cmd = 'pgrep -o {}'.format(service)
254 cmd = cmd + ' | grep -v pgrep || exit 0'
255 cmd_out = sentry_unit.run(cmd)
256 self.log.debug('CMDout: ' + str(cmd_out))
257 if cmd_out[0]:
258 self.log.debug('Pid for %s %s' % (service, str(cmd_out[0])))
259 proc_dir = '/proc/{}'.format(cmd_out[0].strip())
260 return self._get_dir_mtime(sentry_unit, proc_dir)
261
262 def service_restarted(self, sentry_unit, service, filename,
263 pgrep_full=False, sleep_time=20):
264 """Check if service was restarted.
265
266 Compare a service's start time vs a file's last modification time
267 (such as a config file for that service) to determine if the service
268 has been restarted.
269 """
270 time.sleep(sleep_time)
271 if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
272 self._get_file_mtime(sentry_unit, filename)):
273 return True
274 else:
275 return False
276
277 def service_restarted_since(self, sentry_unit, mtime, service,
278 pgrep_full=False, sleep_time=20,
279 retry_count=2):
280         """Check if service was started after a given time.
281
282 Args:
283 sentry_unit (sentry): The sentry unit to check for the service on
284 mtime (float): The epoch time to check against
285 service (string): service name to look for in process table
286 pgrep_full (boolean): Use full command line search mode with pgrep
287 sleep_time (int): Seconds to sleep before looking for process
288 retry_count (int): If service is not found, how many times to retry
289
290 Returns:
291             bool: True if service found and its start time is newer than mtime,
292 False if service is older than mtime or if service was
293 not found.
294 """
295 self.log.debug('Checking %s restarted since %s' % (service, mtime))
296 time.sleep(sleep_time)
297 proc_start_time = self._get_proc_start_time(sentry_unit, service,
298 pgrep_full)
299 while retry_count > 0 and not proc_start_time:
300 self.log.debug('No pid file found for service %s, will retry %i '
301 'more times' % (service, retry_count))
302 time.sleep(30)
303 proc_start_time = self._get_proc_start_time(sentry_unit, service,
304 pgrep_full)
305 retry_count = retry_count - 1
306
307 if not proc_start_time:
308 self.log.warn('No proc start time found, assuming service did '
309 'not start')
310 return False
311 if proc_start_time >= mtime:
312             self.log.debug('proc start time is newer than provided mtime '
313 '(%s >= %s)' % (proc_start_time, mtime))
314 return True
315 else:
316 self.log.warn('proc start time (%s) is older than provided mtime '
317 '(%s), service did not restart' % (proc_start_time,
318 mtime))
319 return False
320
321 def config_updated_since(self, sentry_unit, filename, mtime,
322 sleep_time=20):
323 """Check if file was modified after a given time.
324
325 Args:
326 sentry_unit (sentry): The sentry unit to check the file mtime on
327 filename (string): The file to check mtime of
328 mtime (float): The epoch time to check against
329 sleep_time (int): Seconds to sleep before looking for process
330
331 Returns:
332 bool: True if file was modified more recently than mtime, False if
333                   file was modified before mtime.
334 """
335 self.log.debug('Checking %s updated since %s' % (filename, mtime))
336 time.sleep(sleep_time)
337 file_mtime = self._get_file_mtime(sentry_unit, filename)
338 if file_mtime >= mtime:
339 self.log.debug('File mtime is newer than provided mtime '
340 '(%s >= %s)' % (file_mtime, mtime))
341 return True
342 else:
343 self.log.warn('File mtime %s is older than provided mtime %s'
344 % (file_mtime, mtime))
345 return False
346
347 def validate_service_config_changed(self, sentry_unit, mtime, service,
348 filename, pgrep_full=False,
349 sleep_time=20, retry_count=2):
350 """Check service and file were updated after mtime
351
352 Args:
353 sentry_unit (sentry): The sentry unit to check for the service on
354 mtime (float): The epoch time to check against
355 service (string): service name to look for in process table
356 filename (string): The file to check mtime of
357 pgrep_full (boolean): Use full command line search mode with pgrep
358 sleep_time (int): Seconds to sleep before looking for process
359 retry_count (int): If service is not found, how many times to retry
360
361 Typical Usage:
362 u = OpenStackAmuletUtils(ERROR)
363 ...
364 mtime = u.get_sentry_time(self.cinder_sentry)
365 self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'})
366 if not u.validate_service_config_changed(self.cinder_sentry,
367 mtime,
368 'cinder-api',
369                                                      '/etc/cinder/cinder.conf'):
370 amulet.raise_status(amulet.FAIL, msg='update failed')
371 Returns:
372             bool: True if both service and file were updated/restarted after
373 mtime, False if service is older than mtime or if service was
374 not found or if filename was modified before mtime.
375 """
376 self.log.debug('Checking %s restarted since %s' % (service, mtime))
377 time.sleep(sleep_time)
378 service_restart = self.service_restarted_since(sentry_unit, mtime,
379 service,
380 pgrep_full=pgrep_full,
381 sleep_time=0,
382 retry_count=retry_count)
383 config_update = self.config_updated_since(sentry_unit, filename, mtime,
384 sleep_time=0)
385 return service_restart and config_update
386
387 def get_sentry_time(self, sentry_unit):
388 """Return current epoch time on a sentry"""
389 cmd = "date +'%s'"
390 return float(sentry_unit.run(cmd)[0])
391
392 def relation_error(self, name, data):
393 return 'unexpected relation data in {} - {}'.format(name, data)
394
395 def endpoint_error(self, name, data):
396 return 'unexpected endpoint data in {} - {}'.format(name, data)
397
398 def get_ubuntu_releases(self):
399 """Return a list of all Ubuntu releases in order of release."""
400 _d = distro_info.UbuntuDistroInfo()
401 _release_list = _d.all
402 self.log.debug('Ubuntu release list: {}'.format(_release_list))
403 return _release_list
404
405 def file_to_url(self, file_rel_path):
406 """Convert a relative file path to a file URL."""
407 _abs_path = os.path.abspath(file_rel_path)
408 return urlparse.urlparse(_abs_path, scheme='file').geturl()
0409
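`file_to_url` is used by the amulet tests to hand local template files (such as `tests/files/hot_hello_world.yaml`) to clients that expect URLs. A Python 3 sketch of the same conversion (the synced helper targets Python 2's `urlparse` module; `urllib.parse` is its Python 3 equivalent):

```python
import os
import urllib.parse

def file_to_url(file_rel_path):
    """Convert a relative file path to a file URL (Python 3 port of the helper)."""
    abs_path = os.path.abspath(file_rel_path)
    # Parsing with a default scheme of 'file' and re-serializing yields a file URL.
    return urllib.parse.urlparse(abs_path, scheme='file').geturl()

url = file_to_url('tests/files/hot_hello_world.yaml')
print(url)
```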
=== added directory 'tests/charmhelpers/contrib/openstack'
=== added file 'tests/charmhelpers/contrib/openstack/__init__.py'
--- tests/charmhelpers/contrib/openstack/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/openstack/__init__.py 2015-06-11 15:38:49 +0000
@@ -0,0 +1,15 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
016
=== added directory 'tests/charmhelpers/contrib/openstack/amulet'
=== added file 'tests/charmhelpers/contrib/openstack/amulet/__init__.py'
--- tests/charmhelpers/contrib/openstack/amulet/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/__init__.py 2015-06-11 15:38:49 +0000
@@ -0,0 +1,15 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
016
=== added file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-11 15:38:49 +0000
@@ -0,0 +1,151 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17import six
18from collections import OrderedDict
19from charmhelpers.contrib.amulet.deployment import (
20 AmuletDeployment
21)
22
23
24class OpenStackAmuletDeployment(AmuletDeployment):
25 """OpenStack amulet deployment.
26
27 This class inherits from AmuletDeployment and has additional support
28 that is specifically for use by OpenStack charms.
29 """
30
31 def __init__(self, series=None, openstack=None, source=None, stable=True):
32 """Initialize the deployment environment."""
33 super(OpenStackAmuletDeployment, self).__init__(series)
34 self.openstack = openstack
35 self.source = source
36 self.stable = stable
37 # Note(coreycb): this needs to be changed when new next branches come
38 # out.
39 self.current_next = "trusty"
40
41 def _determine_branch_locations(self, other_services):
42 """Determine the branch locations for the other services.
43
44 Determine if the local branch being tested is derived from its
45 stable or next (dev) branch, and based on this, use the corresponding
46 stable or next branches for the other_services."""
47 base_charms = ['mysql', 'mongodb']
48
49 if self.series in ['precise', 'trusty']:
50 base_series = self.series
51 else:
52 base_series = self.current_next
53
54 if self.stable:
55 for svc in other_services:
56 temp = 'lp:charms/{}/{}'
57 svc['location'] = temp.format(base_series,
58 svc['name'])
59 else:
60 for svc in other_services:
61 if svc['name'] in base_charms:
62 temp = 'lp:charms/{}/{}'
63 svc['location'] = temp.format(base_series,
64 svc['name'])
65 else:
66 temp = 'lp:~openstack-charmers/charms/{}/{}/next'
67 svc['location'] = temp.format(self.current_next,
68 svc['name'])
69 return other_services
70
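The branch-selection logic in `_determine_branch_locations` can be sketched standalone; the `base_charms` and `current_next` defaults below mirror the values hard-coded in the method above:

```python
def determine_branch_locations(series, stable, other_services,
                               base_charms=('mysql', 'mongodb'),
                               current_next='trusty'):
    # Series without released charm branches fall back to the current 'next' series.
    base_series = series if series in ('precise', 'trusty') else current_next
    for svc in other_services:
        if stable or svc['name'] in base_charms:
            # Stable runs (and base charms always) use the released branches.
            svc['location'] = 'lp:charms/{}/{}'.format(base_series, svc['name'])
        else:
            # Dev runs point OpenStack charms at their 'next' branches.
            svc['location'] = ('lp:~openstack-charmers/charms/{}/{}/next'
                               .format(current_next, svc['name']))
    return other_services

svcs = determine_branch_locations(
    'vivid', False, [{'name': 'mysql'}, {'name': 'keystone'}])
print(svcs)
```

So a dev (`stable=False`) run on vivid still pulls mysql from `lp:charms/trusty/mysql` but keystone from its charmers next branch.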
71 def _add_services(self, this_service, other_services):
72 """Add services to the deployment and set openstack-origin/source."""
73 other_services = self._determine_branch_locations(other_services)
74
75 super(OpenStackAmuletDeployment, self)._add_services(this_service,
76 other_services)
77
78 services = other_services
79 services.append(this_service)
80 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
81 'ceph-osd', 'ceph-radosgw']
82 # OpenStack subordinate charms do not expose an origin option, as that
83 # is controlled by the principal charm.
84 ignore = ['neutron-openvswitch']
85
86 if self.openstack:
87 for svc in services:
88 if svc['name'] not in use_source + ignore:
89 config = {'openstack-origin': self.openstack}
90 self.d.configure(svc['name'], config)
91
92 if self.source:
93 for svc in services:
94 if svc['name'] in use_source and svc['name'] not in ignore:
95 config = {'source': self.source}
96 self.d.configure(svc['name'], config)
97
98 def _configure_services(self, configs):
99 """Configure all of the services."""
100 for service, config in six.iteritems(configs):
101 self.d.configure(service, config)
102
103 def _get_openstack_release(self):
104 """Get openstack release.
105
106 Return an integer representing the enum value of the openstack
107 release.
108 """
109 # Must be ordered by OpenStack release (not by Ubuntu release):
110 (self.precise_essex, self.precise_folsom, self.precise_grizzly,
111 self.precise_havana, self.precise_icehouse,
112 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
113 self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
114 self.wily_liberty) = range(12)
115
116 releases = {
117 ('precise', None): self.precise_essex,
118 ('precise', 'cloud:precise-folsom'): self.precise_folsom,
119 ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
120 ('precise', 'cloud:precise-havana'): self.precise_havana,
121 ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
122 ('trusty', None): self.trusty_icehouse,
123 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
124 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
125 ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
126 ('utopic', None): self.utopic_juno,
127 ('vivid', None): self.vivid_kilo,
128 ('wily', None): self.wily_liberty}
129
130 return releases[(self.series, self.openstack)]
131
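Because the enum values above are assigned in OpenStack release order rather than Ubuntu series order, tests can gate version-specific behaviour with plain integer comparisons. A minimal illustration of the same trick:

```python
# Ascending integers assigned in OpenStack release order, as in
# _get_openstack_release(): note trusty_kilo sorts *after* utopic_juno
# even though trusty predates utopic as an Ubuntu series.
(precise_essex, precise_folsom, precise_grizzly, precise_havana,
 precise_icehouse, trusty_icehouse, trusty_juno, utopic_juno,
 trusty_kilo, vivid_kilo, trusty_liberty, wily_liberty) = range(12)

release = trusty_kilo
if release >= trusty_kilo:
    # e.g. only run a Kilo-or-newer assertion here
    print("kilo or newer")
```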
132 def _get_openstack_release_string(self):
133 """Get openstack release string.
134
135 Return a string representing the openstack release.
136 """
137 releases = OrderedDict([
138 ('precise', 'essex'),
139 ('quantal', 'folsom'),
140 ('raring', 'grizzly'),
141 ('saucy', 'havana'),
142 ('trusty', 'icehouse'),
143 ('utopic', 'juno'),
144 ('vivid', 'kilo'),
145 ('wily', 'liberty'),
146 ])
147 if self.openstack:
148 os_origin = self.openstack.split(':')[1]
149 return os_origin.split('%s-' % self.series)[1].split('/')[0]
150 else:
151 return releases[self.series]
0152
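The string manipulation at the end of `_get_openstack_release_string` extracts the release name from a cloud-archive origin value; traced by hand with an example `openstack-origin` (the `/updates` pocket suffix is an illustrative assumption):

```python
series = 'trusty'
openstack = 'cloud:trusty-kilo/updates'  # hypothetical openstack-origin value

os_origin = openstack.split(':')[1]                         # 'trusty-kilo/updates'
release = os_origin.split('%s-' % series)[1].split('/')[0]  # 'kilo'
print(release)
```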
=== added file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
--- tests/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-06-11 15:38:49 +0000
@@ -0,0 +1,413 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17import logging
18import os
19import six
20import time
21import urllib
22
23import glanceclient.v1.client as glance_client
24import heatclient.v1.client as heat_client
25import keystoneclient.v2_0 as keystone_client
26import novaclient.v1_1.client as nova_client
27
28from charmhelpers.contrib.amulet.utils import (
29 AmuletUtils
30)
31
32DEBUG = logging.DEBUG
33ERROR = logging.ERROR
34
35
36class OpenStackAmuletUtils(AmuletUtils):
37 """OpenStack amulet utilities.
38
39 This class inherits from AmuletUtils and has additional support
40 that is specifically for use by OpenStack charm tests.
41 """
42
43 def __init__(self, log_level=ERROR):
44 """Initialize the deployment environment."""
45 super(OpenStackAmuletUtils, self).__init__(log_level)
46
47 def validate_endpoint_data(self, endpoints, admin_port, internal_port,
48 public_port, expected):
49 """Validate endpoint data.
50
51 Validate actual endpoint data vs expected endpoint data. The ports
52 are used to find the matching endpoint.
53 """
54 self.log.debug('Validating endpoint data...')
55 self.log.debug('actual: {}'.format(repr(endpoints)))
56 found = False
57 for ep in endpoints:
58 self.log.debug('endpoint: {}'.format(repr(ep)))
59 if (admin_port in ep.adminurl and
60 internal_port in ep.internalurl and
61 public_port in ep.publicurl):
62 found = True
63 actual = {'id': ep.id,
64 'region': ep.region,
65 'adminurl': ep.adminurl,
66 'internalurl': ep.internalurl,
67 'publicurl': ep.publicurl,
68 'service_id': ep.service_id}
69 ret = self._validate_dict_data(expected, actual)
70 if ret:
71 return 'unexpected endpoint data - {}'.format(ret)
72
73 if not found:
74 return 'endpoint not found'
75
76 def validate_svc_catalog_endpoint_data(self, expected, actual):
77 """Validate service catalog endpoint data.
78
79 Validate a list of actual service catalog endpoints vs a list of
80 expected service catalog endpoints.
81 """
82 self.log.debug('Validating service catalog endpoint data...')
83 self.log.debug('actual: {}'.format(repr(actual)))
84 for k, v in six.iteritems(expected):
85 if k in actual:
86 ret = self._validate_dict_data(expected[k][0], actual[k][0])
87 if ret:
88 return self.endpoint_error(k, ret)
89 else:
90 return "endpoint {} does not exist".format(k)
91 return ret
92
93 def validate_tenant_data(self, expected, actual):
94 """Validate tenant data.
95
96 Validate a list of actual tenant data vs list of expected tenant
97 data.
98 """
99 self.log.debug('Validating tenant data...')
100 self.log.debug('actual: {}'.format(repr(actual)))
101 for e in expected:
102 found = False
103 for act in actual:
104 a = {'enabled': act.enabled, 'description': act.description,
105 'name': act.name, 'id': act.id}
106 if e['name'] == a['name']:
107 found = True
108 ret = self._validate_dict_data(e, a)
109 if ret:
110 return "unexpected tenant data - {}".format(ret)
111 if not found:
112 return "tenant {} does not exist".format(e['name'])
113 return ret
114
115 def validate_role_data(self, expected, actual):
116 """Validate role data.
117
118 Validate a list of actual role data vs a list of expected role
119 data.
120 """
121 self.log.debug('Validating role data...')
122 self.log.debug('actual: {}'.format(repr(actual)))
123 for e in expected:
124 found = False
125 for act in actual:
126 a = {'name': act.name, 'id': act.id}
127 if e['name'] == a['name']:
128 found = True
129 ret = self._validate_dict_data(e, a)
130 if ret:
131 return "unexpected role data - {}".format(ret)
132 if not found:
133 return "role {} does not exist".format(e['name'])
134 return ret
135
136 def validate_user_data(self, expected, actual):
137 """Validate user data.
138
139 Validate a list of actual user data vs a list of expected user
140 data.
141 """
142 self.log.debug('Validating user data...')
143 self.log.debug('actual: {}'.format(repr(actual)))
144 for e in expected:
145 found = False
146 for act in actual:
147 a = {'enabled': act.enabled, 'name': act.name,
148 'email': act.email, 'tenantId': act.tenantId,
149 'id': act.id}
150 if e['name'] == a['name']:
151 found = True
152 ret = self._validate_dict_data(e, a)
153 if ret:
154 return "unexpected user data - {}".format(ret)
155 if not found:
156 return "user {} does not exist".format(e['name'])
157 return ret
158
159 def validate_flavor_data(self, expected, actual):
160 """Validate flavor data.
161
162 Validate a list of actual flavors vs a list of expected flavors.
163 """
164 self.log.debug('Validating flavor data...')
165 self.log.debug('actual: {}'.format(repr(actual)))
166 act = [a.name for a in actual]
167 return self._validate_list_data(expected, act)
168
169 def tenant_exists(self, keystone, tenant):
170 """Return True if tenant exists."""
171 self.log.debug('Checking if tenant exists ({})...'.format(tenant))
172 return tenant in [t.name for t in keystone.tenants.list()]
173
174 def authenticate_keystone_admin(self, keystone_sentry, user, password,
175 tenant):
176 """Authenticates admin user with the keystone admin endpoint."""
177 self.log.debug('Authenticating keystone admin...')
178 unit = keystone_sentry
179 service_ip = unit.relation('shared-db',
180 'mysql:shared-db')['private-address']
181 ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
182 return keystone_client.Client(username=user, password=password,
183 tenant_name=tenant, auth_url=ep)
184
185 def authenticate_keystone_user(self, keystone, user, password, tenant):
186 """Authenticates a regular user with the keystone public endpoint."""
187 self.log.debug('Authenticating keystone user ({})...'.format(user))
188 ep = keystone.service_catalog.url_for(service_type='identity',
189 endpoint_type='publicURL')
190 return keystone_client.Client(username=user, password=password,
191 tenant_name=tenant, auth_url=ep)
192
193 def authenticate_glance_admin(self, keystone):
194 """Authenticates admin user with glance."""
195 self.log.debug('Authenticating glance admin...')
196 ep = keystone.service_catalog.url_for(service_type='image',
197 endpoint_type='adminURL')
198 return glance_client.Client(ep, token=keystone.auth_token)
199
200 def authenticate_heat_admin(self, keystone):
201 """Authenticates the admin user with heat."""
202 self.log.debug('Authenticating heat admin...')
203 ep = keystone.service_catalog.url_for(service_type='orchestration',
204 endpoint_type='publicURL')
205 return heat_client.Client(endpoint=ep, token=keystone.auth_token)
206
207 def authenticate_nova_user(self, keystone, user, password, tenant):
208 """Authenticates a regular user with nova-api."""
209 self.log.debug('Authenticating nova user ({})...'.format(user))
210 ep = keystone.service_catalog.url_for(service_type='identity',
211 endpoint_type='publicURL')
212 return nova_client.Client(username=user, api_key=password,
213 project_id=tenant, auth_url=ep)
214
215 def create_cirros_image(self, glance, image_name):
216 """Download the latest cirros image and upload it to glance."""
217 self.log.debug('Creating glance image ({})...'.format(image_name))
218 http_proxy = os.getenv('AMULET_HTTP_PROXY')
219 self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
220 if http_proxy:
221 proxies = {'http': http_proxy}
222 opener = urllib.FancyURLopener(proxies)
223 else:
224 opener = urllib.FancyURLopener()
225
226 f = opener.open("http://download.cirros-cloud.net/version/released")
227 version = f.read().strip()
228 cirros_img = "cirros-{}-x86_64-disk.img".format(version)
229 local_path = os.path.join('tests', cirros_img)
230
231 if not os.path.exists(local_path):
232 cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
233 version, cirros_img)
234 opener.retrieve(cirros_url, local_path)
235 f.close()
236
237 with open(local_path) as f:
238 image = glance.images.create(name=image_name, is_public=True,
239 disk_format='qcow2',
240 container_format='bare', data=f)
241 count = 1
242 status = image.status
243 while status != 'active' and count < 10:
244 time.sleep(3)
245 image = glance.images.get(image.id)
246 status = image.status
247 self.log.debug('image status: {}'.format(status))
248 count += 1
249
250 if status != 'active':
251 self.log.error('image creation timed out')
252 return None
253
254 return image
255
256 def delete_image(self, glance, image):
257 """Delete the specified image."""
258
259 # /!\ DEPRECATION WARNING
260 self.log.warn('/!\\ DEPRECATION WARNING: use '
261 'delete_resource instead of delete_image.')
262 self.log.debug('Deleting glance image ({})...'.format(image))
263 num_before = len(list(glance.images.list()))
264 glance.images.delete(image)
265
266 count = 1
267 num_after = len(list(glance.images.list()))
268 while num_after != (num_before - 1) and count < 10:
269 time.sleep(3)
270 num_after = len(list(glance.images.list()))
271 self.log.debug('number of images: {}'.format(num_after))
272 count += 1
273
274 if num_after != (num_before - 1):
275 self.log.error('image deletion timed out')
276 return False
277
278 return True
279
280 def create_instance(self, nova, image_name, instance_name, flavor):
281 """Create the specified instance."""
282 self.log.debug('Creating instance '
283 '({}|{}|{})'.format(instance_name, image_name, flavor))
284 image = nova.images.find(name=image_name)
285 flavor = nova.flavors.find(name=flavor)
286 instance = nova.servers.create(name=instance_name, image=image,
287 flavor=flavor)
288
289 count = 1
290 status = instance.status
291 while status != 'ACTIVE' and count < 60:
292 time.sleep(3)
293 instance = nova.servers.get(instance.id)
294 status = instance.status
295 self.log.debug('instance status: {}'.format(status))
296 count += 1
297
298 if status != 'ACTIVE':
299 self.log.error('instance creation timed out')
300 return None
301
302 return instance
303
304 def delete_instance(self, nova, instance):
305 """Delete the specified instance."""
306
307 # /!\ DEPRECATION WARNING
308 self.log.warn('/!\\ DEPRECATION WARNING: use '
309 'delete_resource instead of delete_instance.')
310 self.log.debug('Deleting instance ({})...'.format(instance))
311 num_before = len(list(nova.servers.list()))
312 nova.servers.delete(instance)
313
314 count = 1
315 num_after = len(list(nova.servers.list()))
316 while num_after != (num_before - 1) and count < 10:
317 time.sleep(3)
318 num_after = len(list(nova.servers.list()))
319 self.log.debug('number of instances: {}'.format(num_after))
320 count += 1
321
322 if num_after != (num_before - 1):
323 self.log.error('instance deletion timed out')
324 return False
325
326 return True
327
328 def create_or_get_keypair(self, nova, keypair_name="testkey"):
329 """Create a new keypair, or return pointer if it already exists."""
330 try:
331 _keypair = nova.keypairs.get(keypair_name)
332 self.log.debug('Keypair ({}) already exists, '
333 'using it.'.format(keypair_name))
334 return _keypair
335 except:
336 self.log.debug('Keypair ({}) does not exist, '
337 'creating it.'.format(keypair_name))
338
339 _keypair = nova.keypairs.create(name=keypair_name)
340 return _keypair
341
342 def delete_resource(self, resource, resource_id,
343 msg="resource", max_wait=120):
344 """Delete one openstack resource, such as one instance, keypair,
345 image, volume, stack, etc., and confirm deletion within max wait time.
346
347 :param resource: pointer to os resource type, ex:glance_client.images
348 :param resource_id: unique name or id for the openstack resource
349 :param msg: text to identify purpose in logging
350 :param max_wait: maximum wait time in seconds
351 :returns: True if successful, otherwise False
352 """
353 num_before = len(list(resource.list()))
354 resource.delete(resource_id)
355
356 tries = 0
357 num_after = len(list(resource.list()))
358 while num_after != (num_before - 1) and tries < (max_wait / 4):
359 self.log.debug('{} delete check: '
360 '{} [{}:{}] {}'.format(msg, tries,
361 num_before,
362 num_after,
363 resource_id))
364 time.sleep(4)
365 num_after = len(list(resource.list()))
366 tries += 1
367
368 self.log.debug('{}: expected, actual count = {}, '
369 '{}'.format(msg, num_before - 1, num_after))
370
371 if num_after == (num_before - 1):
372 return True
373 else:
374 self.log.error('{} delete timed out'.format(msg))
375 return False
376
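`delete_resource` confirms deletion indirectly by polling the collection's count until it drops by one. The core loop, generalized into a count-polling helper (note this check can race if other tests create or delete resources in the same collection concurrently):

```python
import time

def wait_for_count(list_fn, expected, max_tries=30, interval=4):
    """Poll list_fn() until it returns `expected` items, or give up."""
    tries = 0
    num = len(list(list_fn()))
    while num != expected and tries < max_tries:
        time.sleep(interval)
        num = len(list(list_fn()))
        tries += 1
    return num == expected

# Fake resource list whose deletion 'completes' on the second poll.
state = {'polls': 0}
def fake_list():
    state['polls'] += 1
    return [1, 2] if state['polls'] >= 2 else [1, 2, 3]

print(wait_for_count(fake_list, expected=2, interval=0))
```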
377 def resource_reaches_status(self, resource, resource_id,
378 expected_stat='available',
379 msg='resource', max_wait=120):
380 """Wait for an openstack resources status to reach an
381 expected status within a specified time. Useful to confirm that
382 nova instances, cinder vols, snapshots, glance images, heat stacks
383 and other resources eventually reach the expected status.
384
385 :param resource: pointer to os resource type, ex: heat_client.stacks
386 :param resource_id: unique id for the openstack resource
387 :param expected_stat: status to expect resource to reach
388 :param msg: text to identify purpose in logging
389 :param max_wait: maximum wait time in seconds
390 :returns: True if successful, False if status is not reached
391 """
392
393 tries = 0
394 resource_stat = resource.get(resource_id).status
395 while resource_stat != expected_stat and tries < (max_wait / 4):
396 self.log.debug('{} status check: '
397 '{} [{}:{}] {}'.format(msg, tries,
398 resource_stat,
399 expected_stat,
400 resource_id))
401 time.sleep(4)
402 resource_stat = resource.get(resource_id).status
403 tries += 1
404
405 self.log.debug('{}: expected, actual status = {}, '
406 '{}'.format(msg, expected_stat, resource_stat))
407
408 if resource_stat == expected_stat:
409 return True
410 else:
411 self.log.debug('{} never reached expected status: '
412 '{}'.format(resource_id, expected_stat))
413 return False
0414
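`resource_reaches_status` uses the same poll-with-timeout shape, keyed on the resource's status field rather than a count. A self-contained sketch with a fake status source standing in for `resource.get(resource_id).status`:

```python
import time

def reaches_status(get_status, expected, max_tries=30, interval=4):
    """Poll get_status() until it returns `expected`, or give up."""
    status = get_status()
    tries = 0
    while status != expected and tries < max_tries:
        time.sleep(interval)
        status = get_status()
        tries += 1
    return status == expected

# Fake a heat stack that becomes ready on the third poll.
statuses = iter(['CREATE_IN_PROGRESS', 'CREATE_IN_PROGRESS', 'CREATE_COMPLETE'])
print(reaches_status(lambda: next(statuses), 'CREATE_COMPLETE', interval=0))
```

The basic_deployment tests can use this pattern to wait on nova instances (`ACTIVE`), glance images (`active`), and heat stacks alike.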
=== added directory 'tests/files'
=== added file 'tests/files/hot_hello_world.yaml'
--- tests/files/hot_hello_world.yaml 1970-01-01 00:00:00 +0000
+++ tests/files/hot_hello_world.yaml 2015-06-11 15:38:49 +0000
@@ -0,0 +1,66 @@
1#
2# This is a hello world HOT template just defining a single compute
3# server.
4#
5heat_template_version: 2013-05-23
6
7description: >
8 Hello world HOT template that just defines a single server.
9 Contains just base features to verify base HOT support.
10
11parameters:
12 key_name:
13 type: string
14 description: Name of an existing key pair to use for the server
15 constraints:
16 - custom_constraint: nova.keypair
17 flavor:
18 type: string
19 description: Flavor for the server to be created
20 default: m1.tiny
21 constraints:
22 - custom_constraint: nova.flavor
23 image:
24 type: string
25 description: Image ID or image name to use for the server
26 constraints:
27 - custom_constraint: glance.image
28 admin_pass:
29 type: string
30 description: Admin password
31 hidden: true
32 constraints:
33 - length: { min: 6, max: 8 }
34 description: Password length must be between 6 and 8 characters
35 - allowed_pattern: "[a-zA-Z0-9]+"
36 description: Password must consist of characters and numbers only
37 - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
38 description: Password must start with an uppercase character
39 db_port:
40 type: number
41 description: Database port number
42 default: 50000
43 constraints:
44 - range: { min: 40000, max: 60000 }
45 description: Port number must be between 40000 and 60000
46
47resources:
48 server:
49 type: OS::Nova::Server
50 properties:
51 key_name: { get_param: key_name }
52 image: { get_param: image }
53 flavor: { get_param: flavor }
54 admin_pass: { get_param: admin_pass }
55 user_data:
56 str_replace:
57 template: |
58 #!/bin/bash
59 echo db_port
60 params:
61 db_port: { get_param: db_port }
62
63outputs:
64 server_networks:
65 description: The networks of the deployed server
66 value: { get_attr: [server, networks] }
067
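The template's `user_data` relies on Heat's `str_replace` intrinsic, which performs literal substring substitution of each `params` key into the template text. Its effect on the script above can be reproduced in plain Python:

```python
# What str_replace does to the user_data above: each params key is
# replaced, as a literal substring, by its value.
template = "#!/bin/bash\necho db_port\n"
params = {"db_port": 50000}

rendered = template
for key, value in params.items():
    rendered = rendered.replace(key, str(value))

print(rendered)
```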
=== added file 'tests/tests.yaml'
--- tests/tests.yaml 1970-01-01 00:00:00 +0000
+++ tests/tests.yaml 2015-06-11 15:38:49 +0000
@@ -0,0 +1,15 @@
1bootstrap: true
2reset: true
3virtualenv: true
4makefile:
5 - lint
6 - unit_test
7sources:
8 - ppa:juju/stable
9packages:
10 - amulet
11 - python-amulet
12 - python-distro-info
13 - python-glanceclient
14 - python-keystoneclient
15 - python-novaclient
