Merge lp:~james-page/charms/trusty/nova-cloud-controller/service-guard into lp:~openstack-charmers-archive/charms/trusty/nova-cloud-controller/trunk

Proposed by James Page
Status: Superseded
Proposed branch: lp:~james-page/charms/trusty/nova-cloud-controller/service-guard
Merge into: lp:~openstack-charmers-archive/charms/trusty/nova-cloud-controller/trunk
Diff against target: 4341 lines (+2840/-302) (has conflicts)
45 files modified
.bzrignore (+2/-0)
Makefile (+17/-5)
README.txt (+5/-0)
charm-helpers-hooks.yaml (+11/-0)
charm-helpers-tests.yaml (+5/-0)
charm-helpers.yaml (+0/-10)
config.yaml (+54/-11)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+3/-2)
hooks/charmhelpers/contrib/network/ip.py (+156/-0)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+55/-0)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+209/-0)
hooks/charmhelpers/contrib/openstack/context.py (+95/-22)
hooks/charmhelpers/contrib/openstack/ip.py (+75/-0)
hooks/charmhelpers/contrib/openstack/neutron.py (+14/-0)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+6/-1)
hooks/charmhelpers/contrib/openstack/templating.py (+22/-23)
hooks/charmhelpers/contrib/openstack/utils.py (+11/-3)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+1/-1)
hooks/charmhelpers/contrib/storage/linux/utils.py (+1/-0)
hooks/charmhelpers/core/fstab.py (+116/-0)
hooks/charmhelpers/core/hookenv.py (+5/-4)
hooks/charmhelpers/core/host.py (+32/-12)
hooks/charmhelpers/fetch/__init__.py (+33/-16)
hooks/charmhelpers/fetch/bzrurl.py (+2/-1)
hooks/nova_cc_context.py (+32/-1)
hooks/nova_cc_hooks.py (+211/-57)
hooks/nova_cc_utils.py (+218/-103)
metadata.yaml (+2/-0)
revision (+1/-1)
tests/00-setup (+10/-0)
tests/10-basic-precise-essex (+10/-0)
tests/11-basic-precise-folsom (+18/-0)
tests/12-basic-precise-grizzly (+12/-0)
tests/13-basic-precise-havana (+12/-0)
tests/14-basic-precise-icehouse (+12/-0)
tests/15-basic-trusty-icehouse (+10/-0)
tests/README (+47/-0)
tests/basic_deployment.py (+520/-0)
tests/charmhelpers/contrib/amulet/deployment.py (+58/-0)
tests/charmhelpers/contrib/amulet/utils.py (+157/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+55/-0)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+209/-0)
unit_tests/test_nova_cc_hooks.py (+146/-12)
unit_tests/test_nova_cc_utils.py (+167/-14)
unit_tests/test_utils.py (+3/-3)
Text conflict in config.yaml
To merge this branch: bzr merge lp:~james-page/charms/trusty/nova-cloud-controller/service-guard
Reviewer: OpenStack Charmers
Review status: Pending
Review via email: mp+228669@code.launchpad.net

This proposal has been superseded by a proposal from 2014-07-29.

Description of the change

Add support for a service-guard configuration option which keeps services disabled until the required relations have been completely formed.
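As a rough illustration of the behaviour described above (and of the "core relations" listed in the config.yaml hunk further down), the guard logic might be sketched as follows. `guard_satisfied` and `CORE_RELATION_SETS` are hypothetical names for this sketch, not the actual helpers in hooks/nova_cc_utils.py:

```python
# Hypothetical sketch of the service-guard check; the real hook code in
# hooks/nova_cc_utils.py differs in detail. 'joined_relations' stands in
# for the set of relations Juju reports as joined and complete.

CORE_RELATION_SETS = [
    # Any one of these sets being fully joined satisfies the guard,
    # per the relation list in the config.yaml description.
    {'shared-db', 'amqp', 'identity-service'},
    {'pgsql-nova-db', 'pgsql-neutron-db', 'amqp', 'identity-service'},
]


def guard_satisfied(joined_relations, service_guard=True):
    """Return True if services may be started."""
    if not service_guard:
        # Guard disabled (the default): preserve the old behaviour of
        # services running from install onwards.
        return True
    joined = set(joined_relations)
    return any(required <= joined for required in CORE_RELATION_SETS)


if __name__ == '__main__':
    print(guard_satisfied({'amqp', 'identity-service'}))               # False
    print(guard_satisfied({'shared-db', 'amqp', 'identity-service'}))  # True
```

With `service-guard: false` the check short-circuits to True, matching the default described in the config option below.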

95. By James Page

Don't add neutron stuff if related to neutron-api charm

96. By James Page

Fixup unit tests

97. By James Page

Tidy lint

Unmerged revisions

Preview Diff

1=== added file '.bzrignore'
2--- .bzrignore 1970-01-01 00:00:00 +0000
3+++ .bzrignore 2014-07-29 13:07:23 +0000
4@@ -0,0 +1,2 @@
5+bin
6+.coverage
7
8=== modified file 'Makefile'
9--- Makefile 2014-05-21 10:14:28 +0000
10+++ Makefile 2014-07-29 13:07:23 +0000
11@@ -2,16 +2,28 @@
12 PYTHON := /usr/bin/env python
13
14 lint:
15- @flake8 --exclude hooks/charmhelpers hooks unit_tests
16+ @flake8 --exclude hooks/charmhelpers hooks unit_tests tests
17 @charm proof
18
19+unit_test:
20+ @echo Starting unit tests...
21+ @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
22+
23+bin/charm_helpers_sync.py:
24+ @mkdir -p bin
25+ @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
26+ > bin/charm_helpers_sync.py
27 test:
28- @echo Starting tests...
29- @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
30+ @echo Starting Amulet tests...
31+ # coreycb note: The -v should only be temporary until Amulet sends
32+ # raise_status() messages to stderr:
33+ # https://bugs.launchpad.net/amulet/+bug/1320357
34+ @juju test -v -p AMULET_HTTP_PROXY
35
36 sync:
37- @charm-helper-sync -c charm-helpers.yaml
38+ @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
39+ @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
40
41-publish: lint test
42+publish: lint unit_test
43 bzr push lp:charms/nova-cloud-controller
44 bzr push lp:charms/trusty/nova-cloud-controller
45
46=== modified file 'README.txt'
47--- README.txt 2014-03-25 09:11:04 +0000
48+++ README.txt 2014-07-29 13:07:23 +0000
49@@ -4,6 +4,11 @@
50
51 Cloud controller node for Openstack nova. Contains nova-schedule, nova-api, nova-network and nova-objectstore.
52
53+The neutron-api interface can be used join this charm with an external neutron-api server. If this is done
54+then this charm will shutdown its neutron-api service and the external charm will be registered as the
55+neutron-api endpoint in keystone. It will also use the quantum-security-groups setting which is passed to
56+it by the api service rather than its own quantum-security-groups setting.
57+
58 ******************************************************
59 Special considerations to be deployed using Postgresql
60 ******************************************************
61
62=== added file 'charm-helpers-hooks.yaml'
63--- charm-helpers-hooks.yaml 1970-01-01 00:00:00 +0000
64+++ charm-helpers-hooks.yaml 2014-07-29 13:07:23 +0000
65@@ -0,0 +1,11 @@
66+branch: lp:charm-helpers
67+destination: hooks/charmhelpers
68+include:
69+ - core
70+ - fetch
71+ - contrib.openstack|inc=*
72+ - contrib.storage
73+ - contrib.hahelpers:
74+ - apache
75+ - payload.execd
76+ - contrib.network.ip
77
78=== added file 'charm-helpers-tests.yaml'
79--- charm-helpers-tests.yaml 1970-01-01 00:00:00 +0000
80+++ charm-helpers-tests.yaml 2014-07-29 13:07:23 +0000
81@@ -0,0 +1,5 @@
82+branch: lp:charm-helpers
83+destination: tests/charmhelpers
84+include:
85+ - contrib.amulet
86+ - contrib.openstack.amulet
87
88=== removed file 'charm-helpers.yaml'
89--- charm-helpers.yaml 2014-04-02 17:04:22 +0000
90+++ charm-helpers.yaml 1970-01-01 00:00:00 +0000
91@@ -1,10 +0,0 @@
92-branch: lp:charm-helpers
93-destination: hooks/charmhelpers
94-include:
95- - core
96- - fetch
97- - contrib.openstack|inc=*
98- - contrib.storage
99- - contrib.hahelpers:
100- - apache
101- - payload.execd
102
103=== modified file 'config.yaml'
104--- config.yaml 2014-06-17 10:01:21 +0000
105+++ config.yaml 2014-07-29 13:07:23 +0000
106@@ -97,15 +97,11 @@
107 # HA configuration settings
108 vip:
109 type: string
110- description: "Virtual IP to use to front API services in ha configuration"
111- vip_iface:
112- type: string
113- default: eth0
114- description: "Network Interface where to place the Virtual IP"
115- vip_cidr:
116- type: int
117- default: 24
118- description: "Netmask that will be used for the Virtual IP"
119+ description: |
120+ Virtual IP(s) to use to front API services in HA configuration.
121+ .
122+ If multiple networks are being used, a VIP should be provided for each
123+ network, separated by spaces.
124 ha-bindiface:
125 type: string
126 default: eth0
127@@ -163,5 +159,52 @@
128 nvp-l3-uuid:
129 type: string
130 description: |
131- This is uuid of the default NVP/NSX L3 Gateway Service.
132- # end of NVP/NSX configuration
133+<<<<<<< TREE
134+ This is uuid of the default NVP/NSX L3 Gateway Service.
135+ # end of NVP/NSX configuration
136+=======
137+ This is uuid of the default NVP/NSX L3 Gateway Service.
138+ # end of NVP/NSX configuration
139+ # Network configuration options
140+ # by default all access is over 'private-address'
141+ os-admin-network:
142+ type: string
143+ description: |
144+ The IP address and netmask of the OpenStack Admin network (e.g.,
145+ 192.168.0.0/24)
146+ .
147+ This network will be used for admin endpoints.
148+ os-internal-network:
149+ type: string
150+ description: |
151+ The IP address and netmask of the OpenStack Internal network (e.g.,
152+ 192.168.0.0/24)
153+ .
154+ This network will be used for internal endpoints.
155+ os-public-network:
156+ type: string
157+ description: |
158+ The IP address and netmask of the OpenStack Public network (e.g.,
159+ 192.168.0.0/24)
160+ .
161+ This network will be used for public endpoints.
162+ service-guard:
163+ type: boolean
164+ default: false
165+ description: |
166+ Ensure required relations are made and complete before allowing services
167+ to be started
168+ .
169+ By default, services may be up and accepting API request from install
170+ onwards.
171+ .
172+ Enabling this flag ensures that services will not be started until the
173+ minimum 'core relations' have been made between this charm and other
174+ charms.
175+ .
176+ For this charm the following relations must be made:
177+ .
178+ * shared-db or (pgsql-nova-db, pgsql-neutron-db)
179+ * amqp
180+ * identity-service
181+>>>>>>> MERGE-SOURCE
182
183=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
184--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-02-17 12:10:27 +0000
185+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-07-29 13:07:23 +0000
186@@ -146,12 +146,12 @@
187 Obtains all relevant configuration from charm configuration required
188 for initiating a relation to hacluster:
189
190- ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
191+ ha-bindiface, ha-mcastport, vip
192
193 returns: dict: A dict containing settings keyed by setting name.
194 raises: HAIncompleteConfig if settings are missing.
195 '''
196- settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
197+ settings = ['ha-bindiface', 'ha-mcastport', 'vip']
198 conf = {}
199 for setting in settings:
200 conf[setting] = config_get(setting)
201@@ -170,6 +170,7 @@
202
203 :configs : OSTemplateRenderer: A config tempating object to inspect for
204 a complete https context.
205+
206 :vip_setting: str: Setting in charm config that specifies
207 VIP address.
208 '''
209
210=== added directory 'hooks/charmhelpers/contrib/network'
211=== added file 'hooks/charmhelpers/contrib/network/__init__.py'
212=== added file 'hooks/charmhelpers/contrib/network/ip.py'
213--- hooks/charmhelpers/contrib/network/ip.py 1970-01-01 00:00:00 +0000
214+++ hooks/charmhelpers/contrib/network/ip.py 2014-07-29 13:07:23 +0000
215@@ -0,0 +1,156 @@
216+import sys
217+
218+from functools import partial
219+
220+from charmhelpers.fetch import apt_install
221+from charmhelpers.core.hookenv import (
222+ ERROR, log,
223+)
224+
225+try:
226+ import netifaces
227+except ImportError:
228+ apt_install('python-netifaces')
229+ import netifaces
230+
231+try:
232+ import netaddr
233+except ImportError:
234+ apt_install('python-netaddr')
235+ import netaddr
236+
237+
238+def _validate_cidr(network):
239+ try:
240+ netaddr.IPNetwork(network)
241+ except (netaddr.core.AddrFormatError, ValueError):
242+ raise ValueError("Network (%s) is not in CIDR presentation format" %
243+ network)
244+
245+
246+def get_address_in_network(network, fallback=None, fatal=False):
247+ """
248+ Get an IPv4 or IPv6 address within the network from the host.
249+
250+ :param network (str): CIDR presentation format. For example,
251+ '192.168.1.0/24'.
252+ :param fallback (str): If no address is found, return fallback.
253+ :param fatal (boolean): If no address is found, fallback is not
254+ set and fatal is True then exit(1).
255+
256+ """
257+
258+ def not_found_error_out():
259+ log("No IP address found in network: %s" % network,
260+ level=ERROR)
261+ sys.exit(1)
262+
263+ if network is None:
264+ if fallback is not None:
265+ return fallback
266+ else:
267+ if fatal:
268+ not_found_error_out()
269+
270+ _validate_cidr(network)
271+ network = netaddr.IPNetwork(network)
272+ for iface in netifaces.interfaces():
273+ addresses = netifaces.ifaddresses(iface)
274+ if network.version == 4 and netifaces.AF_INET in addresses:
275+ addr = addresses[netifaces.AF_INET][0]['addr']
276+ netmask = addresses[netifaces.AF_INET][0]['netmask']
277+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
278+ if cidr in network:
279+ return str(cidr.ip)
280+ if network.version == 6 and netifaces.AF_INET6 in addresses:
281+ for addr in addresses[netifaces.AF_INET6]:
282+ if not addr['addr'].startswith('fe80'):
283+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
284+ addr['netmask']))
285+ if cidr in network:
286+ return str(cidr.ip)
287+
288+ if fallback is not None:
289+ return fallback
290+
291+ if fatal:
292+ not_found_error_out()
293+
294+ return None
295+
296+
297+def is_ipv6(address):
298+ '''Determine whether provided address is IPv6 or not'''
299+ try:
300+ address = netaddr.IPAddress(address)
301+ except netaddr.AddrFormatError:
302+ # probably a hostname - so not an address at all!
303+ return False
304+ else:
305+ return address.version == 6
306+
307+
308+def is_address_in_network(network, address):
309+ """
310+ Determine whether the provided address is within a network range.
311+
312+ :param network (str): CIDR presentation format. For example,
313+ '192.168.1.0/24'.
314+ :param address: An individual IPv4 or IPv6 address without a net
315+ mask or subnet prefix. For example, '192.168.1.1'.
316+ :returns boolean: Flag indicating whether address is in network.
317+ """
318+ try:
319+ network = netaddr.IPNetwork(network)
320+ except (netaddr.core.AddrFormatError, ValueError):
321+ raise ValueError("Network (%s) is not in CIDR presentation format" %
322+ network)
323+ try:
324+ address = netaddr.IPAddress(address)
325+ except (netaddr.core.AddrFormatError, ValueError):
326+ raise ValueError("Address (%s) is not in correct presentation format" %
327+ address)
328+ if address in network:
329+ return True
330+ else:
331+ return False
332+
333+
334+def _get_for_address(address, key):
335+ """Retrieve an attribute of or the physical interface that
336+ the IP address provided could be bound to.
337+
338+ :param address (str): An individual IPv4 or IPv6 address without a net
339+ mask or subnet prefix. For example, '192.168.1.1'.
340+ :param key: 'iface' for the physical interface name or an attribute
341+ of the configured interface, for example 'netmask'.
342+ :returns str: Requested attribute or None if address is not bindable.
343+ """
344+ address = netaddr.IPAddress(address)
345+ for iface in netifaces.interfaces():
346+ addresses = netifaces.ifaddresses(iface)
347+ if address.version == 4 and netifaces.AF_INET in addresses:
348+ addr = addresses[netifaces.AF_INET][0]['addr']
349+ netmask = addresses[netifaces.AF_INET][0]['netmask']
350+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
351+ if address in cidr:
352+ if key == 'iface':
353+ return iface
354+ else:
355+ return addresses[netifaces.AF_INET][0][key]
356+ if address.version == 6 and netifaces.AF_INET6 in addresses:
357+ for addr in addresses[netifaces.AF_INET6]:
358+ if not addr['addr'].startswith('fe80'):
359+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
360+ addr['netmask']))
361+ if address in cidr:
362+ if key == 'iface':
363+ return iface
364+ else:
365+ return addr[key]
366+ return None
367+
368+
369+get_iface_for_address = partial(_get_for_address, key='iface')
370+
371+get_netmask_for_address = partial(_get_for_address, key='netmask')
372
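The heart of `get_address_in_network` and `is_address_in_network` above is a CIDR-membership test. The helper uses netaddr and netifaces; the same check can be sketched with Python 3's standard-library ipaddress module (`address_in_network` is an illustrative name, not the charmhelpers API):

```python
import ipaddress


def address_in_network(network, address):
    """Return True if 'address' falls within the CIDR 'network'.

    Mirrors the netaddr-based membership check in the diff above;
    ip_network() raises ValueError for input that is not valid CIDR
    notation, much like _validate_cidr() does.
    """
    net = ipaddress.ip_network(network, strict=False)
    return ipaddress.ip_address(address) in net


print(address_in_network('192.168.1.0/24', '192.168.1.1'))  # True
print(address_in_network('192.168.1.0/24', '10.0.0.1'))     # False
```

The interface-scanning part of the real helper (walking netifaces and skipping fe80:: link-local addresses for IPv6) is omitted here; only the membership test is shown.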
373=== added directory 'hooks/charmhelpers/contrib/openstack/amulet'
374=== added file 'hooks/charmhelpers/contrib/openstack/amulet/__init__.py'
375=== added file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
376--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000
377+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-07-29 13:07:23 +0000
378@@ -0,0 +1,55 @@
379+from charmhelpers.contrib.amulet.deployment import (
380+ AmuletDeployment
381+)
382+
383+
384+class OpenStackAmuletDeployment(AmuletDeployment):
385+ """This class inherits from AmuletDeployment and has additional support
386+ that is specifically for use by OpenStack charms."""
387+
388+ def __init__(self, series=None, openstack=None, source=None):
389+ """Initialize the deployment environment."""
390+ super(OpenStackAmuletDeployment, self).__init__(series)
391+ self.openstack = openstack
392+ self.source = source
393+
394+ def _add_services(self, this_service, other_services):
395+ """Add services to the deployment and set openstack-origin."""
396+ super(OpenStackAmuletDeployment, self)._add_services(this_service,
397+ other_services)
398+ name = 0
399+ services = other_services
400+ services.append(this_service)
401+ use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph']
402+
403+ if self.openstack:
404+ for svc in services:
405+ if svc[name] not in use_source:
406+ config = {'openstack-origin': self.openstack}
407+ self.d.configure(svc[name], config)
408+
409+ if self.source:
410+ for svc in services:
411+ if svc[name] in use_source:
412+ config = {'source': self.source}
413+ self.d.configure(svc[name], config)
414+
415+ def _configure_services(self, configs):
416+ """Configure all of the services."""
417+ for service, config in configs.iteritems():
418+ self.d.configure(service, config)
419+
420+ def _get_openstack_release(self):
421+ """Return an integer representing the enum value of the openstack
422+ release."""
423+ self.precise_essex, self.precise_folsom, self.precise_grizzly, \
424+ self.precise_havana, self.precise_icehouse, \
425+ self.trusty_icehouse = range(6)
426+ releases = {
427+ ('precise', None): self.precise_essex,
428+ ('precise', 'cloud:precise-folsom'): self.precise_folsom,
429+ ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
430+ ('precise', 'cloud:precise-havana'): self.precise_havana,
431+ ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
432+ ('trusty', None): self.trusty_icehouse}
433+ return releases[(self.series, self.openstack)]
434
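The (series, openstack-origin) to release-index mapping in `_get_openstack_release` above can be tried standalone; the names below mirror the attributes the class sets but the function itself is illustrative:

```python
# Standalone illustration of the release-enum lookup used by
# OpenStackAmuletDeployment._get_openstack_release in the diff above.
(PRECISE_ESSEX, PRECISE_FOLSOM, PRECISE_GRIZZLY,
 PRECISE_HAVANA, PRECISE_ICEHOUSE, TRUSTY_ICEHOUSE) = range(6)

RELEASES = {
    ('precise', None): PRECISE_ESSEX,
    ('precise', 'cloud:precise-folsom'): PRECISE_FOLSOM,
    ('precise', 'cloud:precise-grizzly'): PRECISE_GRIZZLY,
    ('precise', 'cloud:precise-havana'): PRECISE_HAVANA,
    ('precise', 'cloud:precise-icehouse'): PRECISE_ICEHOUSE,
    ('trusty', None): TRUSTY_ICEHOUSE,
}


def openstack_release(series, origin=None):
    """Map a (series, openstack-origin) pair to its release index.

    Raises KeyError for unknown combinations, as the helper would.
    """
    return RELEASES[(series, origin)]
```

Returning plain integers lets tests write ordered comparisons such as `release >= PRECISE_ICEHOUSE` when gating behaviour on the OpenStack version under test.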
435=== added file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
436--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000
437+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-07-29 13:07:23 +0000
438@@ -0,0 +1,209 @@
439+import logging
440+import os
441+import time
442+import urllib
443+
444+import glanceclient.v1.client as glance_client
445+import keystoneclient.v2_0 as keystone_client
446+import novaclient.v1_1.client as nova_client
447+
448+from charmhelpers.contrib.amulet.utils import (
449+ AmuletUtils
450+)
451+
452+DEBUG = logging.DEBUG
453+ERROR = logging.ERROR
454+
455+
456+class OpenStackAmuletUtils(AmuletUtils):
457+ """This class inherits from AmuletUtils and has additional support
458+ that is specifically for use by OpenStack charms."""
459+
460+ def __init__(self, log_level=ERROR):
461+ """Initialize the deployment environment."""
462+ super(OpenStackAmuletUtils, self).__init__(log_level)
463+
464+ def validate_endpoint_data(self, endpoints, admin_port, internal_port,
465+ public_port, expected):
466+ """Validate actual endpoint data vs expected endpoint data. The ports
467+ are used to find the matching endpoint."""
468+ found = False
469+ for ep in endpoints:
470+ self.log.debug('endpoint: {}'.format(repr(ep)))
471+ if admin_port in ep.adminurl and internal_port in ep.internalurl \
472+ and public_port in ep.publicurl:
473+ found = True
474+ actual = {'id': ep.id,
475+ 'region': ep.region,
476+ 'adminurl': ep.adminurl,
477+ 'internalurl': ep.internalurl,
478+ 'publicurl': ep.publicurl,
479+ 'service_id': ep.service_id}
480+ ret = self._validate_dict_data(expected, actual)
481+ if ret:
482+ return 'unexpected endpoint data - {}'.format(ret)
483+
484+ if not found:
485+ return 'endpoint not found'
486+
487+ def validate_svc_catalog_endpoint_data(self, expected, actual):
488+ """Validate a list of actual service catalog endpoints vs a list of
489+ expected service catalog endpoints."""
490+ self.log.debug('actual: {}'.format(repr(actual)))
491+ for k, v in expected.iteritems():
492+ if k in actual:
493+ ret = self._validate_dict_data(expected[k][0], actual[k][0])
494+ if ret:
495+ return self.endpoint_error(k, ret)
496+ else:
497+ return "endpoint {} does not exist".format(k)
498+ return ret
499+
500+ def validate_tenant_data(self, expected, actual):
501+ """Validate a list of actual tenant data vs list of expected tenant
502+ data."""
503+ self.log.debug('actual: {}'.format(repr(actual)))
504+ for e in expected:
505+ found = False
506+ for act in actual:
507+ a = {'enabled': act.enabled, 'description': act.description,
508+ 'name': act.name, 'id': act.id}
509+ if e['name'] == a['name']:
510+ found = True
511+ ret = self._validate_dict_data(e, a)
512+ if ret:
513+ return "unexpected tenant data - {}".format(ret)
514+ if not found:
515+ return "tenant {} does not exist".format(e['name'])
516+ return ret
517+
518+ def validate_role_data(self, expected, actual):
519+ """Validate a list of actual role data vs a list of expected role
520+ data."""
521+ self.log.debug('actual: {}'.format(repr(actual)))
522+ for e in expected:
523+ found = False
524+ for act in actual:
525+ a = {'name': act.name, 'id': act.id}
526+ if e['name'] == a['name']:
527+ found = True
528+ ret = self._validate_dict_data(e, a)
529+ if ret:
530+ return "unexpected role data - {}".format(ret)
531+ if not found:
532+ return "role {} does not exist".format(e['name'])
533+ return ret
534+
535+ def validate_user_data(self, expected, actual):
536+ """Validate a list of actual user data vs a list of expected user
537+ data."""
538+ self.log.debug('actual: {}'.format(repr(actual)))
539+ for e in expected:
540+ found = False
541+ for act in actual:
542+ a = {'enabled': act.enabled, 'name': act.name,
543+ 'email': act.email, 'tenantId': act.tenantId,
544+ 'id': act.id}
545+ if e['name'] == a['name']:
546+ found = True
547+ ret = self._validate_dict_data(e, a)
548+ if ret:
549+ return "unexpected user data - {}".format(ret)
550+ if not found:
551+ return "user {} does not exist".format(e['name'])
552+ return ret
553+
554+ def validate_flavor_data(self, expected, actual):
555+ """Validate a list of actual flavors vs a list of expected flavors."""
556+ self.log.debug('actual: {}'.format(repr(actual)))
557+ act = [a.name for a in actual]
558+ return self._validate_list_data(expected, act)
559+
560+ def tenant_exists(self, keystone, tenant):
561+ """Return True if tenant exists"""
562+ return tenant in [t.name for t in keystone.tenants.list()]
563+
564+ def authenticate_keystone_admin(self, keystone_sentry, user, password,
565+ tenant):
566+ """Authenticates admin user with the keystone admin endpoint."""
567+ service_ip = \
568+ keystone_sentry.relation('shared-db',
569+ 'mysql:shared-db')['private-address']
570+ ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
571+ return keystone_client.Client(username=user, password=password,
572+ tenant_name=tenant, auth_url=ep)
573+
574+ def authenticate_keystone_user(self, keystone, user, password, tenant):
575+ """Authenticates a regular user with the keystone public endpoint."""
576+ ep = keystone.service_catalog.url_for(service_type='identity',
577+ endpoint_type='publicURL')
578+ return keystone_client.Client(username=user, password=password,
579+ tenant_name=tenant, auth_url=ep)
580+
581+ def authenticate_glance_admin(self, keystone):
582+ """Authenticates admin user with glance."""
583+ ep = keystone.service_catalog.url_for(service_type='image',
584+ endpoint_type='adminURL')
585+ return glance_client.Client(ep, token=keystone.auth_token)
586+
587+ def authenticate_nova_user(self, keystone, user, password, tenant):
588+ """Authenticates a regular user with nova-api."""
589+ ep = keystone.service_catalog.url_for(service_type='identity',
590+ endpoint_type='publicURL')
591+ return nova_client.Client(username=user, api_key=password,
592+ project_id=tenant, auth_url=ep)
593+
594+ def create_cirros_image(self, glance, image_name):
595+ """Download the latest cirros image and upload it to glance."""
596+ http_proxy = os.getenv('AMULET_HTTP_PROXY')
597+ self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
598+ if http_proxy:
599+ proxies = {'http': http_proxy}
600+ opener = urllib.FancyURLopener(proxies)
601+ else:
602+ opener = urllib.FancyURLopener()
603+
604+ f = opener.open("http://download.cirros-cloud.net/version/released")
605+ version = f.read().strip()
606+ cirros_img = "tests/cirros-{}-x86_64-disk.img".format(version)
607+
608+ if not os.path.exists(cirros_img):
609+ cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
610+ version, cirros_img)
611+ opener.retrieve(cirros_url, cirros_img)
612+ f.close()
613+
614+ with open(cirros_img) as f:
615+ image = glance.images.create(name=image_name, is_public=True,
616+ disk_format='qcow2',
617+ container_format='bare', data=f)
618+ return image
619+
620+ def delete_image(self, glance, image):
621+ """Delete the specified image."""
622+ glance.images.delete(image)
623+
624+ def create_instance(self, nova, image_name, instance_name, flavor):
625+ """Create the specified instance."""
626+ image = nova.images.find(name=image_name)
627+ flavor = nova.flavors.find(name=flavor)
628+ instance = nova.servers.create(name=instance_name, image=image,
629+ flavor=flavor)
630+
631+ count = 1
632+ status = instance.status
633+ while status != 'ACTIVE' and count < 60:
634+ time.sleep(3)
635+ instance = nova.servers.get(instance.id)
636+ status = instance.status
637+ self.log.debug('instance status: {}'.format(status))
638+ count += 1
639+
640+ if status == 'BUILD':
641+ return None
642+
643+ return instance
644+
645+ def delete_instance(self, nova, instance):
646+ """Delete the specified instance."""
647+ nova.servers.delete(instance)
648
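`create_instance` above waits for the server to go ACTIVE by polling up to 60 times at 3-second intervals. The pattern, extracted into a hypothetical standalone helper:

```python
import time


def wait_for_status(get_status, target='ACTIVE', attempts=60, delay=3):
    """Poll get_status() until it returns 'target' or attempts run out.

    Mirrors the loop in create_instance above: one initial check, then
    up to (attempts - 1) further checks separated by 'delay' seconds.
    Returns the last status observed, whether or not it matched.
    """
    status = get_status()
    count = 1
    while status != target and count < attempts:
        time.sleep(delay)
        status = get_status()
        count += 1
    return status


# Simulated polling: the third check reports ACTIVE (delay=0 for speed).
statuses = iter(['BUILD', 'BUILD', 'ACTIVE'])
print(wait_for_status(lambda: next(statuses), delay=0))  # ACTIVE
```

Note that the original returns None only when the final status is still BUILD, so a server that ends up in a state such as ERROR is returned as-is; callers need to inspect `instance.status` themselves.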
649=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
650--- hooks/charmhelpers/contrib/openstack/context.py 2014-05-19 11:38:09 +0000
651+++ hooks/charmhelpers/contrib/openstack/context.py 2014-07-29 13:07:23 +0000
652@@ -21,9 +21,11 @@
653 relation_get,
654 relation_ids,
655 related_units,
656+ relation_set,
657 unit_get,
658 unit_private_ip,
659 ERROR,
660+ INFO
661 )
662
663 from charmhelpers.contrib.hahelpers.cluster import (
664@@ -42,6 +44,8 @@
665 neutron_plugin_attribute,
666 )
667
668+from charmhelpers.contrib.network.ip import get_address_in_network
669+
670 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
671
672
673@@ -134,8 +138,26 @@
674 'Missing required charm config options. '
675 '(database name and user)')
676 raise OSContextError
677+
678 ctxt = {}
679
680+ # NOTE(jamespage) if mysql charm provides a network upon which
681+ # access to the database should be made, reconfigure relation
682+ # with the service units local address and defer execution
683+ access_network = relation_get('access-network')
684+ if access_network is not None:
685+ if self.relation_prefix is not None:
686+ hostname_key = "{}_hostname".format(self.relation_prefix)
687+ else:
688+ hostname_key = "hostname"
689+ access_hostname = get_address_in_network(access_network,
690+ unit_get('private-address'))
691+ set_hostname = relation_get(attribute=hostname_key,
692+ unit=local_unit())
693+ if set_hostname != access_hostname:
694+ relation_set(relation_settings={hostname_key: access_hostname})
695+ return ctxt # Defer any further hook execution for now....
696+
697 password_setting = 'password'
698 if self.relation_prefix:
699 password_setting = self.relation_prefix + '_password'
700@@ -243,23 +265,31 @@
701
702
703 class AMQPContext(OSContextGenerator):
704- interfaces = ['amqp']
705
706- def __init__(self, ssl_dir=None):
707+ def __init__(self, ssl_dir=None, rel_name='amqp', relation_prefix=None):
708 self.ssl_dir = ssl_dir
709+ self.rel_name = rel_name
710+ self.relation_prefix = relation_prefix
711+ self.interfaces = [rel_name]
712
713 def __call__(self):
714 log('Generating template context for amqp')
715 conf = config()
716+ user_setting = 'rabbit-user'
717+ vhost_setting = 'rabbit-vhost'
718+ if self.relation_prefix:
719+ user_setting = self.relation_prefix + '-rabbit-user'
720+ vhost_setting = self.relation_prefix + '-rabbit-vhost'
721+
722 try:
723- username = conf['rabbit-user']
724- vhost = conf['rabbit-vhost']
725+ username = conf[user_setting]
726+ vhost = conf[vhost_setting]
727 except KeyError as e:
728 log('Could not generate shared_db context. '
729 'Missing required charm config options: %s.' % e)
730 raise OSContextError
731 ctxt = {}
732- for rid in relation_ids('amqp'):
733+ for rid in relation_ids(self.rel_name):
734 ha_vip_only = False
735 for unit in related_units(rid):
736 if relation_get('clustered', rid=rid, unit=unit):
737@@ -332,10 +362,12 @@
738 use_syslog = str(config('use-syslog')).lower()
739 for rid in relation_ids('ceph'):
740 for unit in related_units(rid):
741- mon_hosts.append(relation_get('private-address', rid=rid,
742- unit=unit))
743 auth = relation_get('auth', rid=rid, unit=unit)
744 key = relation_get('key', rid=rid, unit=unit)
745+ ceph_addr = \
746+ relation_get('ceph-public-address', rid=rid, unit=unit) or \
747+ relation_get('private-address', rid=rid, unit=unit)
748+ mon_hosts.append(ceph_addr)
749
750 ctxt = {
751 'mon_hosts': ' '.join(mon_hosts),
752@@ -369,7 +401,9 @@
753
754 cluster_hosts = {}
755 l_unit = local_unit().replace('/', '-')
756- cluster_hosts[l_unit] = unit_get('private-address')
757+ cluster_hosts[l_unit] = \
758+ get_address_in_network(config('os-internal-network'),
759+ unit_get('private-address'))
760
761 for rid in relation_ids('cluster'):
762 for unit in related_units(rid):
763@@ -418,12 +452,13 @@
764 """
765 Generates a context for an apache vhost configuration that configures
766 HTTPS reverse proxying for one or many endpoints. Generated context
767- looks something like:
768- {
769- 'namespace': 'cinder',
770- 'private_address': 'iscsi.mycinderhost.com',
771- 'endpoints': [(8776, 8766), (8777, 8767)]
772- }
773+ looks something like::
774+
775+ {
776+ 'namespace': 'cinder',
777+ 'private_address': 'iscsi.mycinderhost.com',
778+ 'endpoints': [(8776, 8766), (8777, 8767)]
779+ }
780
781 The endpoints list consists of a tuples mapping external ports
782 to internal ports.
783@@ -541,6 +576,26 @@
784
785 return nvp_ctxt
786
787+ def n1kv_ctxt(self):
788+ driver = neutron_plugin_attribute(self.plugin, 'driver',
789+ self.network_manager)
790+ n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
791+ self.network_manager)
792+ n1kv_ctxt = {
793+ 'core_plugin': driver,
794+ 'neutron_plugin': 'n1kv',
795+ 'neutron_security_groups': self.neutron_security_groups,
796+ 'local_ip': unit_private_ip(),
797+ 'config': n1kv_config,
798+ 'vsm_ip': config('n1kv-vsm-ip'),
799+ 'vsm_username': config('n1kv-vsm-username'),
800+ 'vsm_password': config('n1kv-vsm-password'),
801+ 'restrict_policy_profiles': config(
802+ 'n1kv_restrict_policy_profiles'),
803+ }
804+
805+ return n1kv_ctxt
806+
807 def neutron_ctxt(self):
808 if https():
809 proto = 'https'
810@@ -572,6 +627,8 @@
811 ctxt.update(self.ovs_ctxt())
812 elif self.plugin in ['nvp', 'nsx']:
813 ctxt.update(self.nvp_ctxt())
814+ elif self.plugin == 'n1kv':
815+ ctxt.update(self.n1kv_ctxt())
816
817 alchemy_flags = config('neutron-alchemy-flags')
818 if alchemy_flags:
819@@ -611,7 +668,7 @@
820 The subordinate interface allows subordinates to export their
821 configuration requirements to the principle for multiple config
822 files and multiple serivces. Ie, a subordinate that has interfaces
823- to both glance and nova may export to following yaml blob as json:
824+ to both glance and nova may export to following yaml blob as json::
825
826 glance:
827 /etc/glance/glance-api.conf:
828@@ -630,7 +687,8 @@
829
830 It is then up to the principle charms to subscribe this context to
831 the service+config file it is interestd in. Configuration data will
832- be available in the template context, in glance's case, as:
833+ be available in the template context, in glance's case, as::
834+
835 ctxt = {
836 ... other context ...
837 'subordinate_config': {
838@@ -657,7 +715,7 @@
839 self.interface = interface
840
841 def __call__(self):
842- ctxt = {}
843+ ctxt = {'sections': {}}
844 for rid in relation_ids(self.interface):
845 for unit in related_units(rid):
846 sub_config = relation_get('subordinate_configuration',
847@@ -683,11 +741,26 @@
848
849 sub_config = sub_config[self.config_file]
850 for k, v in sub_config.iteritems():
851- ctxt[k] = v
852-
853- if not ctxt:
854- ctxt['sections'] = {}
855-
856+ if k == 'sections':
857+ for section, config_dict in v.iteritems():
858+ log("adding section '%s'" % (section))
859+ ctxt[k][section] = config_dict
860+ else:
861+ ctxt[k] = v
862+
863+ log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
864+
865+ return ctxt
866+
867+
868+class LogLevelContext(OSContextGenerator):
869+
870+ def __call__(self):
871+ ctxt = {}
872+ ctxt['debug'] = \
873+ False if config('debug') is None else config('debug')
874+ ctxt['verbose'] = \
875+ False if config('verbose') is None else config('verbose')
876 return ctxt
877
878
879
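The revised `SubordinateConfigContext.__call__` above seeds the context with an empty `'sections'` dict and merges each subordinate's exported `'sections'` into it, instead of letting the last unit's data overwrite earlier units'. A minimal standalone sketch of that merge (Python 3 dict methods in place of `iteritems`; the helper name is illustrative, not part of charm-helpers):

```python
def merge_subordinate_config(sub_configs):
    """Merge exported subordinate configs, folding their 'sections'
    dicts together instead of letting the last unit win."""
    ctxt = {'sections': {}}
    for sub_config in sub_configs:
        for k, v in sub_config.items():
            if k == 'sections':
                # merge each section individually so every
                # subordinate's sections survive
                for section, config_dict in v.items():
                    ctxt['sections'][section] = config_dict
            else:
                ctxt[k] = v
    return ctxt
```

With this shape, two subordinates can each contribute their own section to the same rendered config file.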
880=== added file 'hooks/charmhelpers/contrib/openstack/ip.py'
881--- hooks/charmhelpers/contrib/openstack/ip.py 1970-01-01 00:00:00 +0000
882+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-07-29 13:07:23 +0000
883@@ -0,0 +1,75 @@
884+from charmhelpers.core.hookenv import (
885+ config,
886+ unit_get,
887+)
888+
889+from charmhelpers.contrib.network.ip import (
890+ get_address_in_network,
891+ is_address_in_network,
892+ is_ipv6,
893+)
894+
895+from charmhelpers.contrib.hahelpers.cluster import is_clustered
896+
897+PUBLIC = 'public'
898+INTERNAL = 'int'
899+ADMIN = 'admin'
900+
901+_address_map = {
902+ PUBLIC: {
903+ 'config': 'os-public-network',
904+ 'fallback': 'public-address'
905+ },
906+ INTERNAL: {
907+ 'config': 'os-internal-network',
908+ 'fallback': 'private-address'
909+ },
910+ ADMIN: {
911+ 'config': 'os-admin-network',
912+ 'fallback': 'private-address'
913+ }
914+}
915+
916+
917+def canonical_url(configs, endpoint_type=PUBLIC):
918+ '''
919+ Returns the correct HTTP URL to this host given the state of HTTPS
920+ configuration, hacluster and charm configuration.
921+
922+ :configs OSTemplateRenderer: A config templating object to inspect for
923+ a complete https context.
924+ :endpoint_type str: The endpoint type to resolve.
925+
926+ :returns str: Base URL for services on the current service unit.
927+ '''
928+ scheme = 'http'
929+ if 'https' in configs.complete_contexts():
930+ scheme = 'https'
931+ address = resolve_address(endpoint_type)
932+ if is_ipv6(address):
933+ address = "[{}]".format(address)
934+ return '%s://%s' % (scheme, address)
935+
936+
937+def resolve_address(endpoint_type=PUBLIC):
938+ resolved_address = None
939+ if is_clustered():
940+ if config(_address_map[endpoint_type]['config']) is None:
941+ # Assume vip is simple and pass back directly
942+ resolved_address = config('vip')
943+ else:
944+ for vip in config('vip').split():
945+ if is_address_in_network(
946+ config(_address_map[endpoint_type]['config']),
947+ vip):
948+ resolved_address = vip
949+ else:
950+ resolved_address = get_address_in_network(
951+ config(_address_map[endpoint_type]['config']),
952+ unit_get(_address_map[endpoint_type]['fallback'])
953+ )
954+ if resolved_address is None:
955+ raise ValueError('Unable to resolve a suitable IP address'
956+ ' based on charm state and configuration')
957+ else:
958+ return resolved_address
959
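The new `resolve_address` helper prefers, when the unit is clustered, a VIP that falls inside the network configured for the endpoint type, and otherwise falls back to a unit address on that network. A simplified sketch of that selection order, using the stdlib `ipaddress` module in place of the charm-helpers network helpers (function name and exact fallback behaviour are illustrative, not the charm API):

```python
import ipaddress


def pick_address(clustered, vips, network, fallback):
    """Return the VIP inside `network` when clustered, the first VIP
    when no endpoint network is configured, else the fallback address."""
    if clustered:
        if network is None:
            # Assume vip is simple and pass back directly
            return vips[0]
        net = ipaddress.ip_network(network)
        for vip in vips:
            if ipaddress.ip_address(vip) in net:
                return vip
        raise ValueError('Unable to resolve a suitable IP address'
                         ' based on charm state and configuration')
    return fallback
```

As in `canonical_url` above, an IPv6 result would additionally be wrapped in brackets before being embedded in a URL.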
960=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
961--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-05-19 11:38:09 +0000
962+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-07-29 13:07:23 +0000
963@@ -128,6 +128,20 @@
964 'server_packages': ['neutron-server',
965 'neutron-plugin-vmware'],
966 'server_services': ['neutron-server']
967+ },
968+ 'n1kv': {
969+ 'config': '/etc/neutron/plugins/cisco/cisco_plugins.ini',
970+ 'driver': 'neutron.plugins.cisco.network_plugin.PluginV2',
971+ 'contexts': [
972+ context.SharedDBContext(user=config('neutron-database-user'),
973+ database=config('neutron-database'),
974+ relation_prefix='neutron',
975+ ssl_dir=NEUTRON_CONF_DIR)],
976+ 'services': [],
977+ 'packages': [['neutron-plugin-cisco']],
978+ 'server_packages': ['neutron-server',
979+ 'neutron-plugin-cisco'],
980+ 'server_services': ['neutron-server']
981 }
982 }
983 if release >= 'icehouse':
984
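Each plugin entry added above (including the new `n1kv` block) carries the same attribute keys (`config`, `driver`, `packages`, `server_packages`, ...), which is what lets `neutron_plugin_attribute` look values up generically. A toy sketch of that lookup pattern, with the plugin table abbreviated to two entries whose values are illustrative rather than copied from the charm:

```python
# abbreviated, hypothetical plugin table
_plugins = {
    'n1kv': {
        'config': '/etc/neutron/plugins/cisco/cisco_plugins.ini',
        'server_packages': ['neutron-server', 'neutron-plugin-cisco'],
    },
    'ovs': {
        'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
        'server_packages': ['neutron-server', 'neutron-plugin-ml2'],
    },
}


def plugin_attribute(plugin, attr):
    """Look up a single attribute for a named plugin."""
    entry = _plugins.get(plugin)
    if entry is None:
        raise ValueError('Unknown plugin: %s' % plugin)
    # missing attributes resolve to None rather than raising
    return entry.get(attr)
```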
985=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
986--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-02-27 09:26:38 +0000
987+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-07-29 13:07:23 +0000
988@@ -27,7 +27,12 @@
989
990 {% if units -%}
991 {% for service, ports in service_ports.iteritems() -%}
992-listen {{ service }} 0.0.0.0:{{ ports[0] }}
993+listen {{ service }}_ipv4 0.0.0.0:{{ ports[0] }}
994+ balance roundrobin
995+ {% for unit, address in units.iteritems() -%}
996+ server {{ unit }} {{ address }}:{{ ports[1] }} check
997+ {% endfor %}
998+listen {{ service }}_ipv6 :::{{ ports[0] }}
999 balance roundrobin
1000 {% for unit, address in units.iteritems() -%}
1001 server {{ unit }} {{ address }}:{{ ports[1] }} check
1002
1003=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
1004--- hooks/charmhelpers/contrib/openstack/templating.py 2014-02-24 19:31:57 +0000
1005+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-07-29 13:07:23 +0000
1006@@ -30,17 +30,17 @@
1007 loading dir.
1008
1009 A charm may also ship a templates dir with this module
1010- and it will be appended to the bottom of the search list, eg:
1011- hooks/charmhelpers/contrib/openstack/templates.
1012-
1013- :param templates_dir: str: Base template directory containing release
1014- sub-directories.
1015- :param os_release : str: OpenStack release codename to construct template
1016- loader.
1017-
1018- :returns : jinja2.ChoiceLoader constructed with a list of
1019- jinja2.FilesystemLoaders, ordered in descending
1020- order by OpenStack release.
1021+ and it will be appended to the bottom of the search list, eg::
1022+
1023+ hooks/charmhelpers/contrib/openstack/templates
1024+
1025+ :param templates_dir (str): Base template directory containing release
1026+ sub-directories.
1027+ :param os_release (str): OpenStack release codename to construct template
1028+ loader.
1029+ :returns: jinja2.ChoiceLoader constructed with a list of
1030+ jinja2.FilesystemLoaders, ordered in descending
1031+ order by OpenStack release.
1032 """
1033 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1034 for rel in OPENSTACK_CODENAMES.itervalues()]
1035@@ -111,7 +111,8 @@
1036 and ease the burden of managing config templates across multiple OpenStack
1037 releases.
1038
1039- Basic usage:
1040+ Basic usage::
1041+
1042 # import some common context generates from charmhelpers
1043 from charmhelpers.contrib.openstack import context
1044
1045@@ -131,21 +132,19 @@
1046 # write out all registered configs
1047 configs.write_all()
1048
1049- Details:
1050+ **OpenStack Releases and template loading**
1051
1052- OpenStack Releases and template loading
1053- ---------------------------------------
1054 When the object is instantiated, it is associated with a specific OS
1055 release. This dictates how the template loader will be constructed.
1056
1057 The constructed loader attempts to load the template from several places
1058 in the following order:
1059- - from the most recent OS release-specific template dir (if one exists)
1060- - the base templates_dir
1061- - a template directory shipped in the charm with this helper file.
1062-
1063-
1064- For the example above, '/tmp/templates' contains the following structure:
1065+ - from the most recent OS release-specific template dir (if one exists)
1066+ - the base templates_dir
1067+ - a template directory shipped in the charm with this helper file.
1068+
1069+ For the example above, '/tmp/templates' contains the following structure::
1070+
1071 /tmp/templates/nova.conf
1072 /tmp/templates/api-paste.ini
1073 /tmp/templates/grizzly/api-paste.ini
1074@@ -169,8 +168,8 @@
1075 $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
1076 us to ship common templates (haproxy, apache) with the helpers.
1077
1078- Context generators
1079- ---------------------------------------
1080+ **Context generators**
1081+
1082 Context generators are used to generate template contexts during hook
1083 execution. Doing so may require inspecting service relations, charm
1084 config, etc. When registered, a config file is associated with a list
1085
1086=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
1087--- hooks/charmhelpers/contrib/openstack/utils.py 2014-05-19 11:38:09 +0000
1088+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-07-29 13:07:23 +0000
1089@@ -3,7 +3,6 @@
1090 # Common python helper functions used for OpenStack charms.
1091 from collections import OrderedDict
1092
1093-import apt_pkg as apt
1094 import subprocess
1095 import os
1096 import socket
1097@@ -41,7 +40,8 @@
1098 ('quantal', 'folsom'),
1099 ('raring', 'grizzly'),
1100 ('saucy', 'havana'),
1101- ('trusty', 'icehouse')
1102+ ('trusty', 'icehouse'),
1103+ ('utopic', 'juno'),
1104 ])
1105
1106
1107@@ -52,6 +52,7 @@
1108 ('2013.1', 'grizzly'),
1109 ('2013.2', 'havana'),
1110 ('2014.1', 'icehouse'),
1111+ ('2014.2', 'juno'),
1112 ])
1113
1114 # The ugly duckling
1115@@ -83,6 +84,8 @@
1116 '''Derive OpenStack release codename from a given installation source.'''
1117 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
1118 rel = ''
1119+ if src is None:
1120+ return rel
1121 if src in ['distro', 'distro-proposed']:
1122 try:
1123 rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
1124@@ -130,6 +133,7 @@
1125
1126 def get_os_codename_package(package, fatal=True):
1127 '''Derive OpenStack release codename from an installed package.'''
1128+ import apt_pkg as apt
1129 apt.init()
1130
1131 # Tell apt to build an in-memory cache to prevent race conditions (if
1132@@ -187,7 +191,7 @@
1133 for version, cname in vers_map.iteritems():
1134 if cname == codename:
1135 return version
1136- #e = "Could not determine OpenStack version for package: %s" % pkg
1137+ # e = "Could not determine OpenStack version for package: %s" % pkg
1138 # error_out(e)
1139
1140
1141@@ -273,6 +277,9 @@
1142 'icehouse': 'precise-updates/icehouse',
1143 'icehouse/updates': 'precise-updates/icehouse',
1144 'icehouse/proposed': 'precise-proposed/icehouse',
1145+ 'juno': 'trusty-updates/juno',
1146+ 'juno/updates': 'trusty-updates/juno',
1147+ 'juno/proposed': 'trusty-proposed/juno',
1148 }
1149
1150 try:
1151@@ -320,6 +327,7 @@
1152
1153 """
1154
1155+ import apt_pkg as apt
1156 src = config('openstack-origin')
1157 cur_vers = get_os_version_package(package)
1158 available_vers = get_os_version_install_source(src)
1159
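With the Juno rows added above, release resolution remains a pair of ordered maps: Ubuntu series to OpenStack codename, and OpenStack version string to codename. A condensed sketch of the two lookups (maps truncated to recent releases; the helper name is illustrative):

```python
from collections import OrderedDict

# truncated versions of the maps maintained in openstack/utils.py
UBUNTU_OPENSTACK_RELEASE = OrderedDict([
    ('saucy', 'havana'),
    ('trusty', 'icehouse'),
    ('utopic', 'juno'),
])

OPENSTACK_CODENAMES = OrderedDict([
    ('2013.2', 'havana'),
    ('2014.1', 'icehouse'),
    ('2014.2', 'juno'),
])


def codename_for(ubuntu_release=None, os_version=None):
    """Resolve an OpenStack codename from either key."""
    if ubuntu_release is not None:
        return UBUNTU_OPENSTACK_RELEASE[ubuntu_release]
    return OPENSTACK_CODENAMES[os_version]
```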
1160=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
1161--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-03-27 11:02:24 +0000
1162+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-29 13:07:23 +0000
1163@@ -303,7 +303,7 @@
1164 blk_device, fstype, system_services=[]):
1165 """
1166 NOTE: This function must only be called from a single service unit for
1167- the same rbd_img otherwise data loss will occur.
1168+ the same rbd_img otherwise data loss will occur.
1169
1170 Ensures given pool and RBD image exists, is mapped to a block device,
1171 and the device is formatted and mounted at the given mount_point.
1172
1173=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
1174--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-05-19 11:38:09 +0000
1175+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-07-29 13:07:23 +0000
1176@@ -37,6 +37,7 @@
1177 check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
1178 'bs=512', 'count=100', 'seek=%s' % (gpt_end)])
1179
1180+
1181 def is_device_mounted(device):
1182 '''Given a device path, return True if that device is mounted, and False
1183 if it isn't.
1184
1185=== added file 'hooks/charmhelpers/core/fstab.py'
1186--- hooks/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000
1187+++ hooks/charmhelpers/core/fstab.py 2014-07-29 13:07:23 +0000
1188@@ -0,0 +1,116 @@
1189+#!/usr/bin/env python
1190+# -*- coding: utf-8 -*-
1191+
1192+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
1193+
1194+import os
1195+
1196+
1197+class Fstab(file):
1198+ """This class extends file in order to implement a file reader/writer
1199+ for file `/etc/fstab`
1200+ """
1201+
1202+ class Entry(object):
1203+ """Entry class represents a non-comment line on the `/etc/fstab` file
1204+ """
1205+ def __init__(self, device, mountpoint, filesystem,
1206+ options, d=0, p=0):
1207+ self.device = device
1208+ self.mountpoint = mountpoint
1209+ self.filesystem = filesystem
1210+
1211+ if not options:
1212+ options = "defaults"
1213+
1214+ self.options = options
1215+ self.d = d
1216+ self.p = p
1217+
1218+ def __eq__(self, o):
1219+ return str(self) == str(o)
1220+
1221+ def __str__(self):
1222+ return "{} {} {} {} {} {}".format(self.device,
1223+ self.mountpoint,
1224+ self.filesystem,
1225+ self.options,
1226+ self.d,
1227+ self.p)
1228+
1229+ DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
1230+
1231+ def __init__(self, path=None):
1232+ if path:
1233+ self._path = path
1234+ else:
1235+ self._path = self.DEFAULT_PATH
1236+ file.__init__(self, self._path, 'r+')
1237+
1238+ def _hydrate_entry(self, line):
1239+ # NOTE: use split with no arguments to split on any
1240+ # whitespace including tabs
1241+ return Fstab.Entry(*filter(
1242+ lambda x: x not in ('', None),
1243+ line.strip("\n").split()))
1244+
1245+ @property
1246+ def entries(self):
1247+ self.seek(0)
1248+ for line in self.readlines():
1249+ try:
1250+ if not line.startswith("#"):
1251+ yield self._hydrate_entry(line)
1252+ except ValueError:
1253+ pass
1254+
1255+ def get_entry_by_attr(self, attr, value):
1256+ for entry in self.entries:
1257+ e_attr = getattr(entry, attr)
1258+ if e_attr == value:
1259+ return entry
1260+ return None
1261+
1262+ def add_entry(self, entry):
1263+ if self.get_entry_by_attr('device', entry.device):
1264+ return False
1265+
1266+ self.write(str(entry) + '\n')
1267+ self.truncate()
1268+ return entry
1269+
1270+ def remove_entry(self, entry):
1271+ self.seek(0)
1272+
1273+ lines = self.readlines()
1274+
1275+ found = False
1276+ for index, line in enumerate(lines):
1277+ if not line.startswith("#"):
1278+ if self._hydrate_entry(line) == entry:
1279+ found = True
1280+ break
1281+
1282+ if not found:
1283+ return False
1284+
1285+ lines.remove(line)
1286+
1287+ self.seek(0)
1288+ self.write(''.join(lines))
1289+ self.truncate()
1290+ return True
1291+
1292+ @classmethod
1293+ def remove_by_mountpoint(cls, mountpoint, path=None):
1294+ fstab = cls(path=path)
1295+ entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
1296+ if entry:
1297+ return fstab.remove_entry(entry)
1298+ return False
1299+
1300+ @classmethod
1301+ def add(cls, device, mountpoint, filesystem, options=None, path=None):
1302+ return cls(path=path).add_entry(Fstab.Entry(device,
1303+ mountpoint, filesystem,
1304+ options=options))
1305
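The `Fstab` helper above treats every non-comment line of `/etc/fstab` as six whitespace-separated fields and compares entries by their canonical string form. A Python 3 re-sketch of the entry model and line parsing (the shipped class subclasses the Python 2 `file` type, so this is an approximation, not the charm-helpers API):

```python
class FstabEntry:
    """One non-comment /etc/fstab line: six whitespace-separated fields."""

    def __init__(self, device, mountpoint, filesystem,
                 options=None, d=0, p=0):
        self.device = device
        self.mountpoint = mountpoint
        self.filesystem = filesystem
        self.options = options or "defaults"
        self.d = d  # dump flag
        self.p = p  # fsck pass number

    def __eq__(self, other):
        # entries compare equal by canonical string form
        return str(self) == str(other)

    def __str__(self):
        return "{} {} {} {} {} {}".format(
            self.device, self.mountpoint, self.filesystem,
            self.options, self.d, self.p)


def parse_line(line):
    """Split an fstab line on any whitespace (spaces or tabs),
    dropping empty fields."""
    return FstabEntry(*[f for f in line.strip().split() if f])
```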
1306=== modified file 'hooks/charmhelpers/core/hookenv.py'
1307--- hooks/charmhelpers/core/hookenv.py 2014-05-19 11:38:09 +0000
1308+++ hooks/charmhelpers/core/hookenv.py 2014-07-29 13:07:23 +0000
1309@@ -25,7 +25,7 @@
1310 def cached(func):
1311 """Cache return values for multiple executions of func + args
1312
1313- For example:
1314+ For example::
1315
1316 @cached
1317 def unit_get(attribute):
1318@@ -445,18 +445,19 @@
1319 class Hooks(object):
1320 """A convenient handler for hook functions.
1321
1322- Example:
1323+ Example::
1324+
1325 hooks = Hooks()
1326
1327 # register a hook, taking its name from the function name
1328 @hooks.hook()
1329 def install():
1330- ...
1331+ pass # your code here
1332
1333 # register a hook, providing a custom hook name
1334 @hooks.hook("config-changed")
1335 def config_changed():
1336- ...
1337+ pass # your code here
1338
1339 if __name__ == "__main__":
1340 # execute a hook based on the name the program is called by
1341
1342=== modified file 'hooks/charmhelpers/core/host.py'
1343--- hooks/charmhelpers/core/host.py 2014-05-19 11:38:09 +0000
1344+++ hooks/charmhelpers/core/host.py 2014-07-29 13:07:23 +0000
1345@@ -12,11 +12,11 @@
1346 import string
1347 import subprocess
1348 import hashlib
1349-import apt_pkg
1350
1351 from collections import OrderedDict
1352
1353 from hookenv import log
1354+from fstab import Fstab
1355
1356
1357 def service_start(service_name):
1358@@ -35,7 +35,8 @@
1359
1360
1361 def service_reload(service_name, restart_on_failure=False):
1362- """Reload a system service, optionally falling back to restart if reload fails"""
1363+ """Reload a system service, optionally falling back to restart if
1364+ reload fails"""
1365 service_result = service('reload', service_name)
1366 if not service_result and restart_on_failure:
1367 service_result = service('restart', service_name)
1368@@ -144,7 +145,19 @@
1369 target.write(content)
1370
1371
1372-def mount(device, mountpoint, options=None, persist=False):
1373+def fstab_remove(mp):
1374+ """Remove the given mountpoint entry from /etc/fstab
1375+ """
1376+ return Fstab.remove_by_mountpoint(mp)
1377+
1378+
1379+def fstab_add(dev, mp, fs, options=None):
1380+ """Adds the given device entry to the /etc/fstab file
1381+ """
1382+ return Fstab.add(dev, mp, fs, options=options)
1383+
1384+
1385+def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
1386 """Mount a filesystem at a particular mountpoint"""
1387 cmd_args = ['mount']
1388 if options is not None:
1389@@ -155,9 +168,9 @@
1390 except subprocess.CalledProcessError, e:
1391 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
1392 return False
1393+
1394 if persist:
1395- # TODO: update fstab
1396- pass
1397+ return fstab_add(device, mountpoint, filesystem, options=options)
1398 return True
1399
1400
1401@@ -169,9 +182,9 @@
1402 except subprocess.CalledProcessError, e:
1403 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
1404 return False
1405+
1406 if persist:
1407- # TODO: update fstab
1408- pass
1409+ return fstab_remove(mountpoint)
1410 return True
1411
1412
1413@@ -198,13 +211,13 @@
1414 def restart_on_change(restart_map, stopstart=False):
1415 """Restart services based on configuration files changing
1416
1417- This function is used a decorator, for example
1418+ This function is used a decorator, for example::
1419
1420 @restart_on_change({
1421 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
1422 })
1423 def ceph_client_changed():
1424- ...
1425+ pass # your code here
1426
1427 In this example, the cinder-api and cinder-volume services
1428 would be restarted if /etc/ceph/ceph.conf is changed by the
1429@@ -300,12 +313,19 @@
1430
1431 def cmp_pkgrevno(package, revno, pkgcache=None):
1432 '''Compare supplied revno with the revno of the installed package
1433- 1 => Installed revno is greater than supplied arg
1434- 0 => Installed revno is the same as supplied arg
1435- -1 => Installed revno is less than supplied arg
1436+
1437+ * 1 => Installed revno is greater than supplied arg
1438+ * 0 => Installed revno is the same as supplied arg
1439+ * -1 => Installed revno is less than supplied arg
1440+
1441 '''
1442+ import apt_pkg
1443 if not pkgcache:
1444 apt_pkg.init()
1445+ # Force Apt to build its cache in memory. That way we avoid race
1446+ # conditions with other applications building the cache in the same
1447+ # place.
1448+ apt_pkg.config.set("Dir::Cache::pkgcache", "")
1449 pkgcache = apt_pkg.Cache()
1450 pkg = pkgcache[package]
1451 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
1452
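`cmp_pkgrevno` above delegates to `apt_pkg.version_compare` and documents a 1/0/-1 result. A simplified stand-in comparing plain dotted versions shows the intended contract (real Debian version ordering also handles epochs, tildes and non-numeric components, which this deliberately ignores):

```python
def cmp_version(installed, supplied):
    """Return 1/0/-1 as installed is greater/equal/less than supplied,
    comparing dotted numeric components only."""
    a = [int(x) for x in installed.split('.')]
    b = [int(x) for x in supplied.split('.')]
    # pad the shorter version with zeros, so 1.2 == 1.2.0
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return (a > b) - (a < b)
```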
1453=== modified file 'hooks/charmhelpers/fetch/__init__.py'
1454--- hooks/charmhelpers/fetch/__init__.py 2014-05-19 11:38:09 +0000
1455+++ hooks/charmhelpers/fetch/__init__.py 2014-07-29 13:07:23 +0000
1456@@ -13,7 +13,6 @@
1457 config,
1458 log,
1459 )
1460-import apt_pkg
1461 import os
1462
1463
1464@@ -56,6 +55,15 @@
1465 'icehouse/proposed': 'precise-proposed/icehouse',
1466 'precise-icehouse/proposed': 'precise-proposed/icehouse',
1467 'precise-proposed/icehouse': 'precise-proposed/icehouse',
1468+ # Juno
1469+ 'juno': 'trusty-updates/juno',
1470+ 'trusty-juno': 'trusty-updates/juno',
1471+ 'trusty-juno/updates': 'trusty-updates/juno',
1472+ 'trusty-updates/juno': 'trusty-updates/juno',
1473+ 'juno/proposed': 'trusty-proposed/juno',
1475+ 'trusty-juno/proposed': 'trusty-proposed/juno',
1476+ 'trusty-proposed/juno': 'trusty-proposed/juno',
1477 }
1478
1479 # The order of this list is very important. Handlers should be listed in from
1480@@ -108,6 +116,7 @@
1481
1482 def filter_installed_packages(packages):
1483 """Returns a list of packages that require installation"""
1484+ import apt_pkg
1485 apt_pkg.init()
1486
1487 # Tell apt to build an in-memory cache to prevent race conditions (if
1488@@ -226,31 +235,39 @@
1489 sources_var='install_sources',
1490 keys_var='install_keys'):
1491 """
1492- Configure multiple sources from charm configuration
1493+ Configure multiple sources from charm configuration.
1494+
1495+ The lists are encoded as yaml fragments in the configuration.
1496+ The fragment needs to be included as a string.
1497
1498 Example config:
1499- install_sources:
1500+ install_sources: |
1501 - "ppa:foo"
1502 - "http://example.com/repo precise main"
1503- install_keys:
1504+ install_keys: |
1505 - null
1506 - "a1b2c3d4"
1507
1508 Note that 'null' (a.k.a. None) should not be quoted.
1509 """
1510- sources = safe_load(config(sources_var))
1511- keys = config(keys_var)
1512- if keys is not None:
1513- keys = safe_load(keys)
1514- if isinstance(sources, basestring) and (
1515- keys is None or isinstance(keys, basestring)):
1516- add_source(sources, keys)
1517+ sources = safe_load((config(sources_var) or '').strip()) or []
1518+ keys = safe_load((config(keys_var) or '').strip()) or None
1519+
1520+ if isinstance(sources, basestring):
1521+ sources = [sources]
1522+
1523+ if keys is None:
1524+ for source in sources:
1525+ add_source(source, None)
1526 else:
1527- if not len(sources) == len(keys):
1528- msg = 'Install sources and keys lists are different lengths'
1529- raise SourceConfigError(msg)
1530- for src_num in range(len(sources)):
1531- add_source(sources[src_num], keys[src_num])
1532+ if isinstance(keys, basestring):
1533+ keys = [keys]
1534+
1535+ if len(sources) != len(keys):
1536+ raise SourceConfigError(
1537+ 'Install sources and keys lists are different lengths')
1538+ for source, key in zip(sources, keys):
1539+ add_source(source, key)
1540 if update:
1541 apt_update(fatal=True)
1542
1543
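The rewritten `configure_sources` accepts either a single string or a YAML list for each option and pairs every source with a key (or with `None` when no keys are configured). That normalization can be isolated as a small pure function (a sketch with plain values, no YAML loading or charm config involved; raises `ValueError` where the charm raises `SourceConfigError`):

```python
def pair_sources_and_keys(sources, keys):
    """Normalize scalar-or-list inputs and zip sources with keys."""
    if isinstance(sources, str):
        sources = [sources]
    if keys is None:
        # no keys configured: every source is added unsigned
        return [(s, None) for s in sources]
    if isinstance(keys, str):
        keys = [keys]
    if len(sources) != len(keys):
        raise ValueError(
            'Install sources and keys lists are different lengths')
    return list(zip(sources, keys))
```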
1544=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
1545--- hooks/charmhelpers/fetch/bzrurl.py 2013-11-06 03:48:26 +0000
1546+++ hooks/charmhelpers/fetch/bzrurl.py 2014-07-29 13:07:23 +0000
1547@@ -39,7 +39,8 @@
1548 def install(self, source):
1549 url_parts = self.parse_url(source)
1550 branch_name = url_parts.path.strip("/").split("/")[-1]
1551- dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
1552+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
1553+ branch_name)
1554 if not os.path.exists(dest_dir):
1555 mkdir(dest_dir, perms=0755)
1556 try:
1557
1558=== added symlink 'hooks/neutron-api-relation-broken'
1559=== target is u'nova_cc_hooks.py'
1560=== added symlink 'hooks/neutron-api-relation-changed'
1561=== target is u'nova_cc_hooks.py'
1562=== added symlink 'hooks/neutron-api-relation-departed'
1563=== target is u'nova_cc_hooks.py'
1564=== added symlink 'hooks/neutron-api-relation-joined'
1565=== target is u'nova_cc_hooks.py'
1566=== modified file 'hooks/nova_cc_context.py'
1567--- hooks/nova_cc_context.py 2014-06-17 10:01:21 +0000
1568+++ hooks/nova_cc_context.py 2014-07-29 13:07:23 +0000
1569@@ -1,7 +1,7 @@
1570
1571 from charmhelpers.core.hookenv import (
1572 config, relation_ids, relation_set, log, ERROR,
1573- unit_get)
1574+ unit_get, related_units, relation_get)
1575
1576 from charmhelpers.fetch import apt_install, filter_installed_packages
1577 from charmhelpers.contrib.openstack import context, neutron, utils
1578@@ -14,6 +14,17 @@
1579 )
1580
1581
1582+def context_complete(ctxt):
1583+ _missing = []
1584+ for k, v in ctxt.iteritems():
1585+ if v is None or v == '':
1586+ _missing.append(k)
1587+ if _missing:
1588+ log('Missing required data: %s' % ' '.join(_missing), level='INFO')
1589+ return False
1590+ return True
1591+
1592+
1593 class ApacheSSLContext(context.ApacheSSLContext):
1594
1595 interfaces = ['https']
1596@@ -27,6 +38,26 @@
1597 return super(ApacheSSLContext, self).__call__()
1598
1599
1600+class NeutronAPIContext(context.OSContextGenerator):
1601+
1602+ def __call__(self):
1603+ log('Generating template context from neutron api relation')
1604+ ctxt = {}
1605+ for rid in relation_ids('neutron-api'):
1606+ for unit in related_units(rid):
1607+ rdata = relation_get(rid=rid, unit=unit)
1608+ ctxt = {
1609+ 'neutron_url': rdata.get('neutron-url'),
1610+ 'neutron_plugin': rdata.get('neutron-plugin'),
1611+ 'neutron_security_groups':
1612+ rdata.get('neutron-security-groups'),
1613+ 'network_manager': 'neutron',
1614+ }
1615+ if context_complete(ctxt):
1616+ return ctxt
1617+ return {}
1618+
1619+
1620 class VolumeServiceContext(context.OSContextGenerator):
1621 interfaces = []
1622
1623
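`context_complete` above only accepts a context when every value is non-empty, which is why `NeutronAPIContext` returns `{}` until all the relation data has arrived. A standalone sketch of that gate (using `print` in place of the charm `log` call):

```python
def context_complete(ctxt):
    """Return True only when no value in the context is None or empty."""
    missing = [k for k, v in ctxt.items() if v is None or v == '']
    if missing:
        # the charm logs this at INFO level via hookenv.log
        print('Missing required data: %s' % ' '.join(missing))
        return False
    return True
```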
1624=== modified file 'hooks/nova_cc_hooks.py'
1625--- hooks/nova_cc_hooks.py 2014-04-11 16:41:42 +0000
1626+++ hooks/nova_cc_hooks.py 2014-07-29 13:07:23 +0000
1627@@ -19,12 +19,15 @@
1628 relation_get,
1629 relation_ids,
1630 relation_set,
1631+ related_units,
1632 open_port,
1633 unit_get,
1634 )
1635
1636 from charmhelpers.core.host import (
1637- restart_on_change
1638+ restart_on_change,
1639+ service_running,
1640+ service_stop,
1641 )
1642
1643 from charmhelpers.fetch import (
1644@@ -41,6 +44,10 @@
1645 neutron_plugin_attribute,
1646 )
1647
1648+from nova_cc_context import (
1649+ NeutronAPIContext
1650+)
1651+
1652 from nova_cc_utils import (
1653 api_port,
1654 auth_token_config,
1655@@ -54,8 +61,8 @@
1656 save_script_rc,
1657 ssh_compute_add,
1658 ssh_compute_remove,
1659- ssh_known_hosts_b64,
1660- ssh_authorized_keys_b64,
1661+ ssh_known_hosts_lines,
1662+ ssh_authorized_keys_lines,
1663 register_configs,
1664 restart_map,
1665 volume_service,
1666@@ -63,11 +70,12 @@
1667 NOVA_CONF,
1668 QUANTUM_CONF,
1669 NEUTRON_CONF,
1670- QUANTUM_API_PASTE
1671+ QUANTUM_API_PASTE,
1672+ service_guard,
1673+ guard_map,
1674 )
1675
1676 from charmhelpers.contrib.hahelpers.cluster import (
1677- canonical_url,
1678 eligible_leader,
1679 get_hacluster_config,
1680 is_leader,
1681@@ -75,6 +83,16 @@
1682
1683 from charmhelpers.payload.execd import execd_preinstall
1684
1685+from charmhelpers.contrib.openstack.ip import (
1686+ canonical_url,
1687+ PUBLIC, INTERNAL, ADMIN
1688+)
1689+
1690+from charmhelpers.contrib.network.ip import (
1691+ get_iface_for_address,
1692+ get_netmask_for_address
1693+)
1694+
1695 hooks = Hooks()
1696 CONFIGS = register_configs()
1697
1698@@ -96,6 +114,8 @@
1699
1700
1701 @hooks.hook('config-changed')
1702+@service_guard(guard_map(), CONFIGS,
1703+ active=config('service-guard'))
1704 @restart_on_change(restart_map(), stopstart=True)
1705 def config_changed():
1706 global CONFIGS
1707@@ -104,6 +124,8 @@
1708 save_script_rc()
1709 configure_https()
1710 CONFIGS.write_all()
1711+ for r_id in relation_ids('identity-service'):
1712+ identity_joined(rid=r_id)
1713
1714
1715 @hooks.hook('amqp-relation-joined')
1716@@ -114,16 +136,19 @@
1717
1718 @hooks.hook('amqp-relation-changed')
1719 @hooks.hook('amqp-relation-departed')
1720+@service_guard(guard_map(), CONFIGS,
1721+ active=config('service-guard'))
1722 @restart_on_change(restart_map())
1723 def amqp_changed():
1724 if 'amqp' not in CONFIGS.complete_contexts():
1725 log('amqp relation incomplete. Peer not ready?')
1726 return
1727 CONFIGS.write(NOVA_CONF)
1728- if network_manager() == 'quantum':
1729- CONFIGS.write(QUANTUM_CONF)
1730- if network_manager() == 'neutron':
1731- CONFIGS.write(NEUTRON_CONF)
1732+ if not is_relation_made('neutron-api'):
1733+ if network_manager() == 'quantum':
1734+ CONFIGS.write(QUANTUM_CONF)
1735+ if network_manager() == 'neutron':
1736+ CONFIGS.write(NEUTRON_CONF)
1737
1738
1739 @hooks.hook('shared-db-relation-joined')
1740@@ -171,6 +196,8 @@
1741
1742
1743 @hooks.hook('shared-db-relation-changed')
1744+@service_guard(guard_map(), CONFIGS,
1745+ active=config('service-guard'))
1746 @restart_on_change(restart_map())
1747 def db_changed():
1748 if 'shared-db' not in CONFIGS.complete_contexts():
1749@@ -186,6 +213,8 @@
1750
1751
1752 @hooks.hook('pgsql-nova-db-relation-changed')
1753+@service_guard(guard_map(), CONFIGS,
1754+ active=config('service-guard'))
1755 @restart_on_change(restart_map())
1756 def postgresql_nova_db_changed():
1757 if 'pgsql-nova-db' not in CONFIGS.complete_contexts():
1758@@ -201,6 +230,8 @@
1759
1760
1761 @hooks.hook('pgsql-neutron-db-relation-changed')
1762+@service_guard(guard_map(), CONFIGS,
1763+ active=config('service-guard'))
1764 @restart_on_change(restart_map())
1765 def postgresql_neutron_db_changed():
1766 if network_manager() in ['neutron', 'quantum']:
1767@@ -210,6 +241,8 @@
1768
1769
1770 @hooks.hook('image-service-relation-changed')
1771+@service_guard(guard_map(), CONFIGS,
1772+ active=config('service-guard'))
1773 @restart_on_change(restart_map())
1774 def image_service_changed():
1775 if 'image-service' not in CONFIGS.complete_contexts():
1776@@ -223,11 +256,17 @@
1777 def identity_joined(rid=None):
1778 if not eligible_leader(CLUSTER_RES):
1779 return
1780- base_url = canonical_url(CONFIGS)
1781- relation_set(relation_id=rid, **determine_endpoints(base_url))
1782+ public_url = canonical_url(CONFIGS, PUBLIC)
1783+ internal_url = canonical_url(CONFIGS, INTERNAL)
1784+ admin_url = canonical_url(CONFIGS, ADMIN)
1785+ relation_set(relation_id=rid, **determine_endpoints(public_url,
1786+ internal_url,
1787+ admin_url))
1788
1789
1790 @hooks.hook('identity-service-relation-changed')
1791+@service_guard(guard_map(), CONFIGS,
1792+ active=config('service-guard'))
1793 @restart_on_change(restart_map())
1794 def identity_changed():
1795 if 'identity-service' not in CONFIGS.complete_contexts():
1796@@ -235,20 +274,24 @@
1797 return
1798 CONFIGS.write('/etc/nova/api-paste.ini')
1799 CONFIGS.write(NOVA_CONF)
1800- if network_manager() == 'quantum':
1801- CONFIGS.write(QUANTUM_API_PASTE)
1802- CONFIGS.write(QUANTUM_CONF)
1803- save_novarc()
1804- if network_manager() == 'neutron':
1805- CONFIGS.write(NEUTRON_CONF)
1806+ if not is_relation_made('neutron-api'):
1807+ if network_manager() == 'quantum':
1808+ CONFIGS.write(QUANTUM_API_PASTE)
1809+ CONFIGS.write(QUANTUM_CONF)
1810+ save_novarc()
1811+ if network_manager() == 'neutron':
1812+ CONFIGS.write(NEUTRON_CONF)
1813 [compute_joined(rid) for rid in relation_ids('cloud-compute')]
1814 [quantum_joined(rid) for rid in relation_ids('quantum-network-service')]
1815 [nova_vmware_relation_joined(rid) for rid in relation_ids('nova-vmware')]
1816+ [neutron_api_relation_joined(rid) for rid in relation_ids('neutron-api')]
1817 configure_https()
1818
1819
1820 @hooks.hook('nova-volume-service-relation-joined',
1821 'cinder-volume-service-relation-joined')
1822+@service_guard(guard_map(), CONFIGS,
1823+ active=config('service-guard'))
1824 @restart_on_change(restart_map())
1825 def volume_joined():
1826 CONFIGS.write(NOVA_CONF)
1827@@ -293,6 +336,33 @@
1828 out.write('export OS_REGION_NAME=%s\n' % config('region'))
1829
1830
1831+def neutron_settings():
1832+ neutron_settings = {}
1833+ if is_relation_made('neutron-api', 'neutron-plugin'):
1834+ neutron_api_info = NeutronAPIContext()()
1835+ neutron_settings.update({
1836+ # XXX: Rename these relations settings?
1837+ 'quantum_plugin': neutron_api_info['neutron_plugin'],
1838+ 'region': config('region'),
1839+ 'quantum_security_groups':
1840+ neutron_api_info['neutron_security_groups'],
1841+ 'quantum_url': neutron_api_info['neutron_url'],
1842+ })
1843+ else:
1844+ neutron_settings.update({
1845+            # XXX: Rename these relation settings?
1846+ 'quantum_plugin': neutron_plugin(),
1847+ 'region': config('region'),
1848+ 'quantum_security_groups': config('quantum-security-groups'),
1849+ 'quantum_url': "{}:{}".format(canonical_url(CONFIGS, INTERNAL),
1850+ str(api_port('neutron-server'))),
1851+ })
1852+ neutron_url = urlparse(neutron_settings['quantum_url'])
1853+ neutron_settings['quantum_host'] = neutron_url.hostname
1854+ neutron_settings['quantum_port'] = neutron_url.port
1855+ return neutron_settings
1856+
1857+
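The `neutron_settings()` helper above derives the `quantum_host` and `quantum_port` relation settings from the `quantum_url` endpoint with `urlparse`. A minimal standalone sketch of that extraction (the charm itself is Python 2 and imports from the `urlparse` module; the Python 3 equivalent is shown here):

```python
# Hypothetical helper, not the charm's code: parse host and port out of
# an endpoint URL the way neutron_settings() does.
from urllib.parse import urlparse  # Python 2: `from urlparse import urlparse`


def split_neutron_url(quantum_url):
    """Return (hostname, port) parsed from an endpoint URL string."""
    parsed = urlparse(quantum_url)
    return parsed.hostname, parsed.port


host, port = split_neutron_url("http://10.0.0.10:9696")
# host is the bare address, port the integer port
```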
1858 def keystone_compute_settings():
1859 ks_auth_config = _auth_config()
1860 rel_settings = {}
1861@@ -300,20 +370,10 @@
1862 if network_manager() in ['quantum', 'neutron']:
1863 if ks_auth_config:
1864 rel_settings.update(ks_auth_config)
1865-
1866- rel_settings.update({
1867- # XXX: Rename these relations settings?
1868- 'quantum_plugin': neutron_plugin(),
1869- 'region': config('region'),
1870- 'quantum_security_groups': config('quantum-security-groups'),
1871- 'quantum_url': (canonical_url(CONFIGS) + ':' +
1872- str(api_port('neutron-server'))),
1873- })
1874-
1875+ rel_settings.update(neutron_settings())
1876 ks_ca = keystone_ca_cert_b64()
1877 if ks_auth_config and ks_ca:
1878 rel_settings['ca_cert'] = ks_ca
1879-
1880 return rel_settings
1881
1882
1883@@ -328,7 +388,6 @@
1884 # this may not even be needed.
1885 'ec2_host': unit_get('private-address'),
1886 }
1887-
1888 # update relation setting if we're attempting to restart remote
1889 # services
1890 if remote_restart:
1891@@ -339,21 +398,63 @@
1892
1893
1894 @hooks.hook('cloud-compute-relation-changed')
1895-def compute_changed():
1896- migration_auth = relation_get('migration_auth_type')
1897- if migration_auth == 'ssh':
1898- key = relation_get('ssh_public_key')
1899+def compute_changed(rid=None, unit=None):
1900+ rel_settings = relation_get(rid=rid, unit=unit)
1901+ if 'migration_auth_type' not in rel_settings:
1902+ return
1903+ if rel_settings['migration_auth_type'] == 'ssh':
1904+ key = rel_settings.get('ssh_public_key')
1905 if not key:
1906 log('SSH migration set but peer did not publish key.')
1907 return
1908- ssh_compute_add(key)
1909- relation_set(known_hosts=ssh_known_hosts_b64(),
1910- authorized_keys=ssh_authorized_keys_b64())
1911- if relation_get('nova_ssh_public_key'):
1912- key = relation_get('nova_ssh_public_key')
1913- ssh_compute_add(key, user='nova')
1914- relation_set(nova_known_hosts=ssh_known_hosts_b64(user='nova'),
1915- nova_authorized_keys=ssh_authorized_keys_b64(user='nova'))
1916+ ssh_compute_add(key, rid=rid, unit=unit)
1917+ index = 0
1918+ for line in ssh_known_hosts_lines(unit=unit):
1919+ relation_set(
1920+ relation_id=rid,
1921+ relation_settings={
1922+ 'known_hosts_{}'.format(index): line})
1923+ index += 1
1924+ relation_set(relation_id=rid, known_hosts_max_index=index)
1925+ index = 0
1926+ for line in ssh_authorized_keys_lines(unit=unit):
1927+ relation_set(
1928+ relation_id=rid,
1929+ relation_settings={
1930+ 'authorized_keys_{}'.format(index): line})
1931+ index += 1
1932+ relation_set(relation_id=rid, authorized_keys_max_index=index)
1933+ if 'nova_ssh_public_key' not in rel_settings:
1934+ return
1935+ if rel_settings['nova_ssh_public_key']:
1936+ ssh_compute_add(rel_settings['nova_ssh_public_key'],
1937+ rid=rid, unit=unit, user='nova')
1938+ index = 0
1939+ for line in ssh_known_hosts_lines(unit=unit, user='nova'):
1940+ relation_set(
1941+ relation_id=rid,
1942+ relation_settings={
1943+ '{}_known_hosts_{}'.format(
1944+ 'nova',
1945+ index): line})
1946+ index += 1
1947+ relation_set(
1948+ relation_id=rid,
1949+ relation_settings={
1950+ '{}_known_hosts_max_index'.format('nova'): index})
1951+ index = 0
1952+ for line in ssh_authorized_keys_lines(unit=unit, user='nova'):
1953+ relation_set(
1954+ relation_id=rid,
1955+ relation_settings={
1956+ '{}_authorized_keys_{}'.format(
1957+ 'nova',
1958+ index): line})
1959+ index += 1
1960+ relation_set(
1961+ relation_id=rid,
1962+ relation_settings={
1963+ '{}_authorized_keys_max_index'.format('nova'): index})
1964
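The rewritten `compute_changed()` above replaces the old base64-blob approach with indexed relation settings: juju relation data is flat key/value strings, so each line of a known_hosts or authorized_keys file is published as `known_hosts_0..N` together with a `known_hosts_max_index` marker, which the compute side uses to reassemble the file. A self-contained sketch of that scheme (hypothetical helper names, not the charm's API):

```python
# Model of the indexed-settings pattern used by compute_changed().
def publish_lines(lines, prefix):
    """Flatten a list of lines into {prefix_0: .., prefix_max_index: n}."""
    settings = {'{}_{}'.format(prefix, i): line
                for i, line in enumerate(lines)}
    settings['{}_max_index'.format(prefix)] = len(lines)
    return settings


def reassemble_lines(settings, prefix):
    """Rebuild the ordered list of lines from indexed relation settings."""
    count = settings['{}_max_index'.format(prefix)]
    return [settings['{}_{}'.format(prefix, i)] for i in range(count)]


hosts = ['|1|abc= ssh-rsa AAAA...', '|1|def= ssh-rsa BBBB...']
rel = publish_lines(hosts, 'known_hosts')
```

This avoids single oversized relation values at the cost of one `relation_set` call per line, which is why the hook tracks and publishes the running index as it goes.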
1965
1966 @hooks.hook('cloud-compute-relation-departed')
1967@@ -367,15 +468,7 @@
1968 if not eligible_leader(CLUSTER_RES):
1969 return
1970
1971- url = canonical_url(CONFIGS) + ':9696'
1972- # XXX: Can we rename to neutron_*?
1973- rel_settings = {
1974- 'quantum_host': urlparse(url).hostname,
1975- 'quantum_url': url,
1976- 'quantum_port': 9696,
1977- 'quantum_plugin': neutron_plugin(),
1978- 'region': config('region')
1979- }
1980+ rel_settings = neutron_settings()
1981
1982 # inform quantum about local keystone auth config
1983 ks_auth_config = _auth_config()
1984@@ -385,12 +478,13 @@
1985 ks_ca = keystone_ca_cert_b64()
1986 if ks_auth_config and ks_ca:
1987 rel_settings['ca_cert'] = ks_ca
1988-
1989 relation_set(relation_id=rid, **rel_settings)
1990
1991
1992 @hooks.hook('cluster-relation-changed',
1993 'cluster-relation-departed')
1994+@service_guard(guard_map(), CONFIGS,
1995+ active=config('service-guard'))
1996 @restart_on_change(restart_map(), stopstart=True)
1997 def cluster_changed():
1998 CONFIGS.write_all()
1999@@ -400,15 +494,28 @@
2000 def ha_joined():
2001 config = get_hacluster_config()
2002 resources = {
2003- 'res_nova_vip': 'ocf:heartbeat:IPaddr2',
2004 'res_nova_haproxy': 'lsb:haproxy',
2005 }
2006- vip_params = 'params ip="%s" cidr_netmask="%s" nic="%s"' % \
2007- (config['vip'], config['vip_cidr'], config['vip_iface'])
2008 resource_params = {
2009- 'res_nova_vip': vip_params,
2010 'res_nova_haproxy': 'op monitor interval="5s"'
2011 }
2012+ vip_group = []
2013+ for vip in config['vip'].split():
2014+ iface = get_iface_for_address(vip)
2015+ if iface is not None:
2016+ vip_key = 'res_nova_{}_vip'.format(iface)
2017+ resources[vip_key] = 'ocf:heartbeat:IPaddr2'
2018+ resource_params[vip_key] = (
2019+ 'params ip="{vip}" cidr_netmask="{netmask}"'
2020+ ' nic="{iface}"'.format(vip=vip,
2021+ iface=iface,
2022+ netmask=get_netmask_for_address(vip))
2023+ )
2024+ vip_group.append(vip_key)
2025+
2026+ if len(vip_group) > 1:
2027+ relation_set(groups={'grp_nova_vips': ' '.join(vip_group)})
2028+
2029 init_services = {
2030 'res_nova_haproxy': 'haproxy'
2031 }
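The `ha_joined()` change above moves from a single hard-coded VIP resource to one `ocf:heartbeat:IPaddr2` resource per configured VIP, keyed by whichever local interface can carry that address, with the resources collected into a pacemaker group when there is more than one. A hedged standalone sketch of that mapping (`iface_lookup`/`netmask_lookup` stand in for charmhelpers' `get_iface_for_address`/`get_netmask_for_address`):

```python
# Illustrative only: build per-interface VIP resources as ha_joined() does.
def build_vip_resources(vips, iface_lookup, netmask_lookup):
    resources, params, group = {}, {}, []
    for vip in vips.split():
        iface = iface_lookup(vip)
        if iface is None:
            continue  # no local interface routes this address; skip it
        key = 'res_nova_{}_vip'.format(iface)
        resources[key] = 'ocf:heartbeat:IPaddr2'
        params[key] = ('params ip="{}" cidr_netmask="{}" nic="{}"'
                       .format(vip, netmask_lookup(vip), iface))
        group.append(key)
    return resources, params, group


res, par, grp = build_vip_resources(
    '10.0.0.100 192.168.1.100',
    lambda vip: 'eth0' if vip.startswith('10.') else 'eth1',
    lambda vip: '24')
```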
2032@@ -447,6 +554,8 @@
2033 'pgsql-nova-db-relation-broken',
2034 'pgsql-neutron-db-relation-broken',
2035 'quantum-network-service-relation-broken')
2036+@service_guard(guard_map(), CONFIGS,
2037+ active=config('service-guard'))
2038 def relation_broken():
2039 CONFIGS.write_all()
2040
2041@@ -480,13 +589,15 @@
2042 rel_settings.update({
2043 'quantum_plugin': neutron_plugin(),
2044 'quantum_security_groups': config('quantum-security-groups'),
2045- 'quantum_url': (canonical_url(CONFIGS) + ':' +
2046- str(api_port('neutron-server')))})
2047+ 'quantum_url': "{}:{}".format(canonical_url(CONFIGS, INTERNAL),
2048+ str(api_port('neutron-server')))})
2049
2050 relation_set(relation_id=rid, **rel_settings)
2051
2052
2053 @hooks.hook('nova-vmware-relation-changed')
2054+@service_guard(guard_map(), CONFIGS,
2055+ active=config('service-guard'))
2056 @restart_on_change(restart_map())
2057 def nova_vmware_relation_changed():
2058 CONFIGS.write('/etc/nova/nova.conf')
2059@@ -498,6 +609,49 @@
2060 amqp_joined(relation_id=r_id)
2061 for r_id in relation_ids('identity-service'):
2062 identity_joined(rid=r_id)
2063+ for r_id in relation_ids('cloud-compute'):
2064+ for unit in related_units(r_id):
2065+ compute_changed(r_id, unit)
2066+
2067+
2068+@hooks.hook('neutron-api-relation-joined')
2069+def neutron_api_relation_joined(rid=None):
2070+ with open('/etc/init/neutron-server.override', 'wb') as out:
2071+ out.write('manual\n')
2072+ if os.path.isfile(NEUTRON_CONF):
2073+ os.rename(NEUTRON_CONF, NEUTRON_CONF + '_unused')
2074+ if service_running('neutron-server'):
2075+ service_stop('neutron-server')
2076+ for id_rid in relation_ids('identity-service'):
2077+ identity_joined(rid=id_rid)
2078+ nova_url = canonical_url(CONFIGS, INTERNAL) + ":8774/v2"
2079+ relation_set(relation_id=rid, nova_url=nova_url)
2080+
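`neutron_api_relation_joined()` above disables the locally managed neutron-server by writing an upstart override: a file `/etc/init/<job>.override` containing `manual` prevents upstart from auto-starting that job, and `relation_broken` removes it again to re-enable the service. A small sketch of the technique, demonstrated against a temporary directory rather than the real `/etc/init` (the hook itself opens the file in `'wb'` mode under Python 2; `'w'` is used here):

```python
# Hypothetical helper illustrating the upstart "manual" override trick.
import os
import tempfile


def disable_upstart_job(job, init_dir='/etc/init'):
    """Write a 'manual' override so upstart will not auto-start the job."""
    override = os.path.join(init_dir, '{}.override'.format(job))
    with open(override, 'w') as out:
        out.write('manual\n')
    return override


tmp = tempfile.mkdtemp()
path = disable_upstart_job('neutron-server', init_dir=tmp)
```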
2081+
2082+@hooks.hook('neutron-api-relation-changed')
2083+@service_guard(guard_map(), CONFIGS,
2084+ active=config('service-guard'))
2085+@restart_on_change(restart_map())
2086+def neutron_api_relation_changed():
2087+ CONFIGS.write(NOVA_CONF)
2088+ for rid in relation_ids('cloud-compute'):
2089+ compute_joined(rid=rid)
2090+ for rid in relation_ids('quantum-network-service'):
2091+ quantum_joined(rid=rid)
2092+
2093+
2094+@hooks.hook('neutron-api-relation-broken')
2095+@service_guard(guard_map(), CONFIGS,
2096+ active=config('service-guard'))
2097+@restart_on_change(restart_map())
2098+def neutron_api_relation_broken():
2099+ if os.path.isfile('/etc/init/neutron-server.override'):
2100+ os.remove('/etc/init/neutron-server.override')
2101+ CONFIGS.write_all()
2102+ for rid in relation_ids('cloud-compute'):
2103+ compute_joined(rid=rid)
2104+ for rid in relation_ids('quantum-network-service'):
2105+ quantum_joined(rid=rid)
2106
2107
2108 def main():
2109
2110=== modified file 'hooks/nova_cc_utils.py'
2111--- hooks/nova_cc_utils.py 2014-05-21 10:03:01 +0000
2112+++ hooks/nova_cc_utils.py 2014-07-29 13:07:23 +0000
2113@@ -33,20 +33,22 @@
2114 relation_get,
2115 relation_ids,
2116 remote_unit,
2117+ is_relation_made,
2118 INFO,
2119 ERROR,
2120 )
2121
2122 from charmhelpers.core.host import (
2123- service_start
2124+ service_start,
2125+ service_stop,
2126+ service_running
2127 )
2128
2129-
2130 import nova_cc_context
2131
2132 TEMPLATES = 'templates/'
2133
2134-CLUSTER_RES = 'res_nova_vip'
2135+CLUSTER_RES = 'grp_nova_vips'
2136
2137 # removed from original: charm-helper-sh
2138 BASE_PACKAGES = [
2139@@ -106,8 +108,7 @@
2140 context.SyslogContext(),
2141 nova_cc_context.HAProxyContext(),
2142 nova_cc_context.IdentityServiceContext(),
2143- nova_cc_context.VolumeServiceContext(),
2144- nova_cc_context.NeutronCCContext()],
2145+ nova_cc_context.VolumeServiceContext()],
2146 }),
2147 (NOVA_API_PASTE, {
2148 'services': [s for s in BASE_SERVICES if 'api' in s],
2149@@ -188,39 +189,47 @@
2150
2151 net_manager = network_manager()
2152
2153- # pop out irrelevant resources from the OrderedDict (easier than adding
2154- # them late)
2155- if net_manager != 'quantum':
2156- [resource_map.pop(k) for k in list(resource_map.iterkeys())
2157- if 'quantum' in k]
2158- if net_manager != 'neutron':
2159- [resource_map.pop(k) for k in list(resource_map.iterkeys())
2160- if 'neutron' in k]
2161-
2162 if os.path.exists('/etc/apache2/conf-available'):
2163 resource_map.pop(APACHE_CONF)
2164 else:
2165 resource_map.pop(APACHE_24_CONF)
2166
2167- # add neutron plugin requirements. nova-c-c only needs the neutron-server
2168- # associated with configs, not the plugin agent.
2169- if net_manager in ['quantum', 'neutron']:
2170- plugin = neutron_plugin()
2171- if plugin:
2172- conf = neutron_plugin_attribute(plugin, 'config', net_manager)
2173- ctxts = (neutron_plugin_attribute(plugin, 'contexts', net_manager)
2174- or [])
2175- services = neutron_plugin_attribute(plugin, 'server_services',
2176- net_manager)
2177- resource_map[conf] = {}
2178- resource_map[conf]['services'] = services
2179- resource_map[conf]['contexts'] = ctxts
2180- resource_map[conf]['contexts'].append(
2181- nova_cc_context.NeutronCCContext())
2182+ if is_relation_made('neutron-api'):
2183+ [resource_map.pop(k) for k in list(resource_map.iterkeys())
2184+ if 'quantum' in k or 'neutron' in k]
2185+ resource_map[NOVA_CONF]['contexts'].append(
2186+ nova_cc_context.NeutronAPIContext())
2187+ else:
2188+ resource_map[NOVA_CONF]['contexts'].append(
2189+ nova_cc_context.NeutronCCContext())
2190+ # pop out irrelevant resources from the OrderedDict (easier than adding
2191+ # them late)
2192+ if net_manager != 'quantum':
2193+ [resource_map.pop(k) for k in list(resource_map.iterkeys())
2194+ if 'quantum' in k]
2195+ if net_manager != 'neutron':
2196+ [resource_map.pop(k) for k in list(resource_map.iterkeys())
2197+ if 'neutron' in k]
2198+ # add neutron plugin requirements. nova-c-c only needs the
2199+ # neutron-server associated with configs, not the plugin agent.
2200+ if net_manager in ['quantum', 'neutron']:
2201+ plugin = neutron_plugin()
2202+ if plugin:
2203+ conf = neutron_plugin_attribute(plugin, 'config', net_manager)
2204+ ctxts = (neutron_plugin_attribute(plugin, 'contexts',
2205+ net_manager)
2206+ or [])
2207+ services = neutron_plugin_attribute(plugin, 'server_services',
2208+ net_manager)
2209+ resource_map[conf] = {}
2210+ resource_map[conf]['services'] = services
2211+ resource_map[conf]['contexts'] = ctxts
2212+ resource_map[conf]['contexts'].append(
2213+ nova_cc_context.NeutronCCContext())
2214
2215- # update for postgres
2216- resource_map[conf]['contexts'].append(
2217- nova_cc_context.NeutronPostgresqlDBContext())
2218+ # update for postgres
2219+ resource_map[conf]['contexts'].append(
2220+ nova_cc_context.NeutronPostgresqlDBContext())
2221
2222 # nova-conductor for releases >= G.
2223 if os_release('nova-common') not in ['essex', 'folsom']:
2224@@ -235,6 +244,7 @@
2225 for s in vmware_ctxt['services']:
2226 if s not in resource_map[NOVA_CONF]['services']:
2227 resource_map[NOVA_CONF]['services'].append(s)
2228+
2229 return resource_map
2230
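The `get_resource_map` change above makes the pruning conditional on the new relation: when `neutron-api` is made, every quantum/neutron config file is popped from the map (a separate charm now owns them) and only `NOVA_CONF` gains the `NeutronAPIContext`; otherwise the original per-network-manager pruning runs. A standalone illustration of the pruning step (the diff's Python 2 code iterates `iterkeys()`; `list(...)` is the portable spelling):

```python
# Toy resource map, not the charm's real OrderedDict of templates.
from collections import OrderedDict

resource_map = OrderedDict([
    ('/etc/nova/nova.conf', {'services': ['nova-api']}),
    ('/etc/neutron/neutron.conf', {'services': ['neutron-server']}),
    ('/etc/quantum/quantum.conf', {'services': ['quantum-server']}),
])

neutron_api_related = True  # stand-in for is_relation_made('neutron-api')
if neutron_api_related:
    # Snapshot the keys first: popping while iterating a dict view fails.
    for k in list(resource_map):
        if 'quantum' in k or 'neutron' in k:
            resource_map.pop(k)
```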
2231
2232@@ -509,8 +519,11 @@
2233 return b64encode(_in.read())
2234
2235
2236-def ssh_directory_for_unit(user=None):
2237- remote_service = remote_unit().split('/')[0]
2238+def ssh_directory_for_unit(unit=None, user=None):
2239+ if unit:
2240+ remote_service = unit.split('/')[0]
2241+ else:
2242+ remote_service = remote_unit().split('/')[0]
2243 if user:
2244 remote_service = "{}_{}".format(remote_service, user)
2245 _dir = os.path.join(NOVA_SSH_DIR, remote_service)
2246@@ -524,29 +537,29 @@
2247 return _dir
2248
2249
2250-def known_hosts(user=None):
2251- return os.path.join(ssh_directory_for_unit(user), 'known_hosts')
2252-
2253-
2254-def authorized_keys(user=None):
2255- return os.path.join(ssh_directory_for_unit(user), 'authorized_keys')
2256-
2257-
2258-def ssh_known_host_key(host, user=None):
2259- cmd = ['ssh-keygen', '-f', known_hosts(user), '-H', '-F', host]
2260+def known_hosts(unit=None, user=None):
2261+ return os.path.join(ssh_directory_for_unit(unit, user), 'known_hosts')
2262+
2263+
2264+def authorized_keys(unit=None, user=None):
2265+ return os.path.join(ssh_directory_for_unit(unit, user), 'authorized_keys')
2266+
2267+
2268+def ssh_known_host_key(host, unit=None, user=None):
2269+ cmd = ['ssh-keygen', '-f', known_hosts(unit, user), '-H', '-F', host]
2270 try:
2271 return subprocess.check_output(cmd).strip()
2272 except subprocess.CalledProcessError:
2273 return None
2274
2275
2276-def remove_known_host(host, user=None):
2277+def remove_known_host(host, unit=None, user=None):
2278 log('Removing SSH known host entry for compute host at %s' % host)
2279- cmd = ['ssh-keygen', '-f', known_hosts(user), '-R', host]
2280+ cmd = ['ssh-keygen', '-f', known_hosts(unit, user), '-R', host]
2281 subprocess.check_call(cmd)
2282
2283
2284-def add_known_host(host, user=None):
2285+def add_known_host(host, unit=None, user=None):
2286 '''Add variations of host to a known hosts file.'''
2287 cmd = ['ssh-keyscan', '-H', '-t', 'rsa', host]
2288 try:
2289@@ -555,34 +568,37 @@
2290 log('Could not obtain SSH host key from %s' % host, level=ERROR)
2291 raise e
2292
2293- current_key = ssh_known_host_key(host, user)
2294+ current_key = ssh_known_host_key(host, unit, user)
2295 if current_key:
2296 if remote_key == current_key:
2297 log('Known host key for compute host %s up to date.' % host)
2298 return
2299 else:
2300- remove_known_host(host, user)
2301+ remove_known_host(host, unit, user)
2302
2303 log('Adding SSH host key to known hosts for compute node at %s.' % host)
2304- with open(known_hosts(user), 'a') as out:
2305+ with open(known_hosts(unit, user), 'a') as out:
2306 out.write(remote_key + '\n')
2307
2308
2309-def ssh_authorized_key_exists(public_key, user=None):
2310- with open(authorized_keys(user)) as keys:
2311+def ssh_authorized_key_exists(public_key, unit=None, user=None):
2312+ with open(authorized_keys(unit, user)) as keys:
2313 return (' %s ' % public_key) in keys.read()
2314
2315
2316-def add_authorized_key(public_key, user=None):
2317- with open(authorized_keys(user), 'a') as keys:
2318+def add_authorized_key(public_key, unit=None, user=None):
2319+ with open(authorized_keys(unit, user), 'a') as keys:
2320 keys.write(public_key + '\n')
2321
2322
2323-def ssh_compute_add(public_key, user=None):
2324+def ssh_compute_add(public_key, rid=None, unit=None, user=None):
2325 # If remote compute node hands us a hostname, ensure we have a
2326 # known hosts entry for its IP, hostname and FQDN.
2327- private_address = relation_get('private-address')
2328+ private_address = relation_get(rid=rid, unit=unit,
2329+ attribute='private-address')
2330 hosts = [private_address]
2331+ if relation_get('hostname'):
2332+ hosts.append(relation_get('hostname'))
2333
2334 if not is_ip(private_address):
2335 hosts.append(get_host_ip(private_address))
2336@@ -593,31 +609,41 @@
2337 hosts.append(hn.split('.')[0])
2338
2339 for host in list(set(hosts)):
2340- if not ssh_known_host_key(host, user):
2341- add_known_host(host, user)
2342+ if not ssh_known_host_key(host, unit, user):
2343+ add_known_host(host, unit, user)
2344
2345- if not ssh_authorized_key_exists(public_key, user):
2346+ if not ssh_authorized_key_exists(public_key, unit, user):
2347 log('Saving SSH authorized key for compute host at %s.' %
2348 private_address)
2349- add_authorized_key(public_key, user)
2350-
2351-
2352-def ssh_known_hosts_b64(user=None):
2353- with open(known_hosts(user)) as hosts:
2354- return b64encode(hosts.read())
2355-
2356-
2357-def ssh_authorized_keys_b64(user=None):
2358- with open(authorized_keys(user)) as keys:
2359- return b64encode(keys.read())
2360-
2361-
2362-def ssh_compute_remove(public_key, user=None):
2363- if not (os.path.isfile(authorized_keys(user)) or
2364- os.path.isfile(known_hosts(user))):
2365+ add_authorized_key(public_key, unit, user)
2366+
2367+
2368+def ssh_known_hosts_lines(unit=None, user=None):
2369+ known_hosts_list = []
2370+
2371+ with open(known_hosts(unit, user)) as hosts:
2372+ for hosts_line in hosts:
2373+ if hosts_line.rstrip():
2374+ known_hosts_list.append(hosts_line.rstrip())
2375+    return known_hosts_list
2376+
2377+
2378+def ssh_authorized_keys_lines(unit=None, user=None):
2379+ authorized_keys_list = []
2380+
2381+ with open(authorized_keys(unit, user)) as keys:
2382+ for authkey_line in keys:
2383+ if authkey_line.rstrip():
2384+ authorized_keys_list.append(authkey_line.rstrip())
2385+    return authorized_keys_list
2386+
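Both `ssh_known_hosts_lines()` and `ssh_authorized_keys_lines()` above reduce to the same idiom: read a file and keep only the non-blank lines, right-stripped, so each one can be published as an individual relation setting. The core of it, written over any iterable of lines (an open file object works):

```python
# Minimal sketch of the filtering both helpers perform.
def non_blank_lines(lines):
    """Return right-stripped lines, skipping blank/whitespace-only ones."""
    return [line.rstrip() for line in lines if line.rstrip()]
```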
2387+
2388+def ssh_compute_remove(public_key, unit=None, user=None):
2389+ if not (os.path.isfile(authorized_keys(unit, user)) or
2390+ os.path.isfile(known_hosts(unit, user))):
2391 return
2392
2393- with open(authorized_keys(user)) as _keys:
2394+ with open(authorized_keys(unit, user)) as _keys:
2395 keys = [k.strip() for k in _keys.readlines()]
2396
2397 if public_key not in keys:
2398@@ -625,67 +651,101 @@
2399
2400 [keys.remove(key) for key in keys if key == public_key]
2401
2402- with open(authorized_keys(user), 'w') as _keys:
2403+ with open(authorized_keys(unit, user), 'w') as _keys:
2404 keys = '\n'.join(keys)
2405 if not keys.endswith('\n'):
2406 keys += '\n'
2407 _keys.write(keys)
2408
2409
2410-def determine_endpoints(url):
2411+def determine_endpoints(public_url, internal_url, admin_url):
2412 '''Generates a dictionary containing all relevant endpoints to be
2413 passed to keystone as relation settings.'''
2414 region = config('region')
2415 os_rel = os_release('nova-common')
2416
2417 if os_rel >= 'grizzly':
2418- nova_url = ('%s:%s/v2/$(tenant_id)s' %
2419- (url, api_port('nova-api-os-compute')))
2420+ nova_public_url = ('%s:%s/v2/$(tenant_id)s' %
2421+ (public_url, api_port('nova-api-os-compute')))
2422+ nova_internal_url = ('%s:%s/v2/$(tenant_id)s' %
2423+ (internal_url, api_port('nova-api-os-compute')))
2424+ nova_admin_url = ('%s:%s/v2/$(tenant_id)s' %
2425+ (admin_url, api_port('nova-api-os-compute')))
2426 else:
2427- nova_url = ('%s:%s/v1.1/$(tenant_id)s' %
2428- (url, api_port('nova-api-os-compute')))
2429- ec2_url = '%s:%s/services/Cloud' % (url, api_port('nova-api-ec2'))
2430- nova_volume_url = ('%s:%s/v1/$(tenant_id)s' %
2431- (url, api_port('nova-api-os-compute')))
2432- neutron_url = '%s:%s' % (url, api_port('neutron-server'))
2433- s3_url = '%s:%s' % (url, api_port('nova-objectstore'))
2434+ nova_public_url = ('%s:%s/v1.1/$(tenant_id)s' %
2435+ (public_url, api_port('nova-api-os-compute')))
2436+ nova_internal_url = ('%s:%s/v1.1/$(tenant_id)s' %
2437+ (internal_url, api_port('nova-api-os-compute')))
2438+ nova_admin_url = ('%s:%s/v1.1/$(tenant_id)s' %
2439+ (admin_url, api_port('nova-api-os-compute')))
2440+
2441+ ec2_public_url = '%s:%s/services/Cloud' % (
2442+ public_url, api_port('nova-api-ec2'))
2443+ ec2_internal_url = '%s:%s/services/Cloud' % (
2444+ internal_url, api_port('nova-api-ec2'))
2445+ ec2_admin_url = '%s:%s/services/Cloud' % (admin_url,
2446+ api_port('nova-api-ec2'))
2447+
2448+ nova_volume_public_url = ('%s:%s/v1/$(tenant_id)s' %
2449+ (public_url, api_port('nova-api-os-compute')))
2450+ nova_volume_internal_url = ('%s:%s/v1/$(tenant_id)s' %
2451+ (internal_url,
2452+ api_port('nova-api-os-compute')))
2453+ nova_volume_admin_url = ('%s:%s/v1/$(tenant_id)s' %
2454+ (admin_url, api_port('nova-api-os-compute')))
2455+
2456+ neutron_public_url = '%s:%s' % (public_url, api_port('neutron-server'))
2457+ neutron_internal_url = '%s:%s' % (internal_url, api_port('neutron-server'))
2458+ neutron_admin_url = '%s:%s' % (admin_url, api_port('neutron-server'))
2459+
2460+ s3_public_url = '%s:%s' % (public_url, api_port('nova-objectstore'))
2461+ s3_internal_url = '%s:%s' % (internal_url, api_port('nova-objectstore'))
2462+ s3_admin_url = '%s:%s' % (admin_url, api_port('nova-objectstore'))
2463
2464 # the base endpoints
2465 endpoints = {
2466 'nova_service': 'nova',
2467 'nova_region': region,
2468- 'nova_public_url': nova_url,
2469- 'nova_admin_url': nova_url,
2470- 'nova_internal_url': nova_url,
2471+ 'nova_public_url': nova_public_url,
2472+ 'nova_admin_url': nova_admin_url,
2473+ 'nova_internal_url': nova_internal_url,
2474 'ec2_service': 'ec2',
2475 'ec2_region': region,
2476- 'ec2_public_url': ec2_url,
2477- 'ec2_admin_url': ec2_url,
2478- 'ec2_internal_url': ec2_url,
2479+ 'ec2_public_url': ec2_public_url,
2480+ 'ec2_admin_url': ec2_admin_url,
2481+ 'ec2_internal_url': ec2_internal_url,
2482 's3_service': 's3',
2483 's3_region': region,
2484- 's3_public_url': s3_url,
2485- 's3_admin_url': s3_url,
2486- 's3_internal_url': s3_url,
2487+ 's3_public_url': s3_public_url,
2488+ 's3_admin_url': s3_admin_url,
2489+ 's3_internal_url': s3_internal_url,
2490 }
2491
2492 if relation_ids('nova-volume-service'):
2493 endpoints.update({
2494 'nova-volume_service': 'nova-volume',
2495 'nova-volume_region': region,
2496- 'nova-volume_public_url': nova_volume_url,
2497- 'nova-volume_admin_url': nova_volume_url,
2498- 'nova-volume_internal_url': nova_volume_url,
2499+ 'nova-volume_public_url': nova_volume_public_url,
2500+ 'nova-volume_admin_url': nova_volume_admin_url,
2501+ 'nova-volume_internal_url': nova_volume_internal_url,
2502 })
2503
2504 # XXX: Keep these relations named quantum_*??
2505- if network_manager() in ['quantum', 'neutron']:
2506+ if is_relation_made('neutron-api'):
2507+ endpoints.update({
2508+ 'quantum_service': None,
2509+ 'quantum_region': None,
2510+ 'quantum_public_url': None,
2511+ 'quantum_admin_url': None,
2512+ 'quantum_internal_url': None,
2513+ })
2514+ elif network_manager() in ['quantum', 'neutron']:
2515 endpoints.update({
2516 'quantum_service': 'quantum',
2517 'quantum_region': region,
2518- 'quantum_public_url': neutron_url,
2519- 'quantum_admin_url': neutron_url,
2520- 'quantum_internal_url': neutron_url,
2521+ 'quantum_public_url': neutron_public_url,
2522+ 'quantum_admin_url': neutron_admin_url,
2523+ 'quantum_internal_url': neutron_internal_url,
2524 })
2525
2526 return endpoints
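The reworked `determine_endpoints()` above is the bulk of this hunk: every service now gets distinct public/internal/admin URLs instead of one canonical URL reused three times. The repeated pattern can be sketched as a small helper (hypothetical, not the charm's code) that builds the three `<name>_<scope>_url` keys for one service:

```python
# Illustrative generalisation of the public/internal/admin URL triplets.
def service_endpoints(name, public, internal, admin, port, path=''):
    """Return {name_public_url: .., name_internal_url: .., name_admin_url: ..}."""
    bases = {'public': public, 'internal': internal, 'admin': admin}
    return {'{}_{}_url'.format(name, scope): '{}:{}{}'.format(base, port, path)
            for scope, base in bases.items()}


eps = service_endpoints('ec2', 'http://pub', 'http://int', 'http://adm',
                        8773, '/services/Cloud')
```

The charm spells each triplet out longhand instead, which keeps the diff mechanical and easy to review against the old single-URL version.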
2527@@ -695,3 +755,58 @@
2528 # quantum-plugin config setting can be safely overriden
2529 # as we only supported OVS in G/neutron
2530 return config('neutron-plugin') or config('quantum-plugin')
2531+
2532+
2533+def guard_map():
2534+    '''Map of services to the relation interfaces that must be complete
2535+    before each service is allowed to start.'''
2536+ gmap = {}
2537+ nova_services = deepcopy(BASE_SERVICES)
2538+ if os_release('nova-common') not in ['essex', 'folsom']:
2539+ nova_services.append('nova-conductor')
2540+
2541+ nova_interfaces = ['identity-service', 'amqp']
2542+ if relation_ids('pgsql-nova-db'):
2543+ nova_interfaces.append('pgsql-nova-db')
2544+ else:
2545+ nova_interfaces.append('shared-db')
2546+
2547+ for svc in nova_services:
2548+ gmap[svc] = nova_interfaces
2549+
2550+ net_manager = network_manager()
2551+ if net_manager in ['neutron', 'quantum']:
2552+ neutron_interfaces = ['identity-service', 'amqp']
2553+ if relation_ids('pgsql-neutron-db'):
2554+ neutron_interfaces.append('pgsql-neutron-db')
2555+ else:
2556+ neutron_interfaces.append('shared-db')
2557+ if network_manager() == 'quantum':
2558+ gmap['quantum-server'] = neutron_interfaces
2559+ else:
2560+ gmap['neutron-server'] = neutron_interfaces
2561+
2562+ return gmap
2563+
2564+
2565+def service_guard(guard_map, contexts, active=False):
2566+ '''Inhibit services in guard_map from running unless
2567+ required interfaces are found complete in contexts.'''
2568+ def wrap(f):
2569+ def wrapped_f(*args):
2570+ if active is True:
2571+ incomplete_services = []
2572+ for svc in guard_map:
2573+ for interface in guard_map[svc]:
2574+ if interface not in contexts.complete_contexts():
2575+ incomplete_services.append(svc)
2576+ f(*args)
2577+ for svc in incomplete_services:
2578+ if service_running(svc):
2579+ log('Service {} has unfulfilled '
2580+ 'interface requirements, stopping.'.format(svc))
2581+ service_stop(svc)
2582+ else:
2583+ f(*args)
2584+ return wrapped_f
2585+ return wrap
2586
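The `service_guard` decorator above is the centrepiece of this branch: when active, it runs the wrapped hook and then stops any service whose required interfaces are not yet complete in the charm's configured contexts. A minimal runnable model of that behaviour, with the juju service/context machinery replaced by fakes:

```python
# Standalone model of service_guard; `running` and the lists stand in
# for real juju services and CONFIGS.complete_contexts().
running = {'nova-api': True, 'nova-conductor': True}


def service_stop(svc):
    running[svc] = False


def service_guard(guard_map, complete_contexts, active=False):
    """Stop services whose required interfaces are incomplete."""
    def wrap(f):
        def wrapped_f(*args):
            if not active:
                return f(*args)
            incomplete = [svc for svc, ifaces in guard_map.items()
                          if any(i not in complete_contexts for i in ifaces)]
            f(*args)  # run the hook first, exactly as the charm does
            for svc in incomplete:
                if running.get(svc):
                    service_stop(svc)
        return wrapped_f
    return wrap


@service_guard({'nova-api': ['amqp', 'shared-db'],
                'nova-conductor': ['amqp']},
               complete_contexts=['amqp'], active=True)
def config_changed():
    pass


config_changed()
# nova-api is stopped (shared-db missing); nova-conductor keeps running
```

Note the guard only ever stops services; starting them back up is left to the `restart_on_change` decorator it is stacked with on each hook.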
2587=== modified file 'metadata.yaml'
2588--- metadata.yaml 2014-03-31 11:56:09 +0000
2589+++ metadata.yaml 2014-07-29 13:07:23 +0000
2590@@ -30,6 +30,8 @@
2591 interface: nova-volume
2592 quantum-network-service:
2593 interface: quantum
2594+ neutron-api:
2595+ interface: neutron-api
2596 ha:
2597 interface: hacluster
2598 scope: container
2599
2600=== modified file 'revision'
2601--- revision 2014-04-16 08:25:14 +0000
2602+++ revision 2014-07-29 13:07:23 +0000
2603@@ -1,1 +1,1 @@
2604-315
2605+500
2606
2607=== added directory 'tests'
2608=== added file 'tests/00-setup'
2609--- tests/00-setup 1970-01-01 00:00:00 +0000
2610+++ tests/00-setup 2014-07-29 13:07:23 +0000
2611@@ -0,0 +1,10 @@
2612+#!/bin/bash
2613+
2614+set -ex
2615+
2616+sudo add-apt-repository --yes ppa:juju/stable
2617+sudo apt-get update --yes
2618+sudo apt-get install --yes python-amulet
2619+sudo apt-get install --yes python-glanceclient
2620+sudo apt-get install --yes python-keystoneclient
2621+sudo apt-get install --yes python-novaclient
2622
2623=== added file 'tests/10-basic-precise-essex'
2624--- tests/10-basic-precise-essex 1970-01-01 00:00:00 +0000
2625+++ tests/10-basic-precise-essex 2014-07-29 13:07:23 +0000
2626@@ -0,0 +1,10 @@
2627+#!/usr/bin/python
2628+
2629+"""Amulet tests on a basic nova cloud controller deployment on
2630+ precise-essex."""
2631+
2632+from basic_deployment import NovaCCBasicDeployment
2633+
2634+if __name__ == '__main__':
2635+ deployment = NovaCCBasicDeployment(series='precise')
2636+ deployment.run_tests()
2637
2638=== added file 'tests/11-basic-precise-folsom'
2639--- tests/11-basic-precise-folsom 1970-01-01 00:00:00 +0000
2640+++ tests/11-basic-precise-folsom 2014-07-29 13:07:23 +0000
2641@@ -0,0 +1,18 @@
2642+#!/usr/bin/python
2643+
2644+"""Amulet tests on a basic nova cloud controller deployment on
2645+ precise-folsom."""
2646+
2647+import amulet
2648+from basic_deployment import NovaCCBasicDeployment
2649+
2650+if __name__ == '__main__':
2651+ # NOTE(coreycb): Skipping failing test until resolved. 'nova-manage db sync'
2652+ # fails in shared-db-relation-changed (only fails on folsom)
2653+ message = "Skipping failing test until resolved"
2654+ amulet.raise_status(amulet.SKIP, msg=message)
2655+
2656+ deployment = NovaCCBasicDeployment(series='precise',
2657+ openstack='cloud:precise-folsom',
2658+ source='cloud:precise-updates/folsom')
2659+ deployment.run_tests()
2660
2661=== added file 'tests/12-basic-precise-grizzly'
2662--- tests/12-basic-precise-grizzly 1970-01-01 00:00:00 +0000
2663+++ tests/12-basic-precise-grizzly 2014-07-29 13:07:23 +0000
2664@@ -0,0 +1,12 @@
2665+#!/usr/bin/python
2666+
2667+"""Amulet tests on a basic nova cloud controller deployment on
2668+ precise-grizzly."""
2669+
2670+from basic_deployment import NovaCCBasicDeployment
2671+
2672+if __name__ == '__main__':
2673+ deployment = NovaCCBasicDeployment(series='precise',
2674+ openstack='cloud:precise-grizzly',
2675+ source='cloud:precise-updates/grizzly')
2676+ deployment.run_tests()
2677
2678=== added file 'tests/13-basic-precise-havana'
2679--- tests/13-basic-precise-havana 1970-01-01 00:00:00 +0000
2680+++ tests/13-basic-precise-havana 2014-07-29 13:07:23 +0000
2681@@ -0,0 +1,12 @@
2682+#!/usr/bin/python
2683+
2684+"""Amulet tests on a basic nova cloud controller deployment on
2685+ precise-havana."""
2686+
2687+from basic_deployment import NovaCCBasicDeployment
2688+
2689+if __name__ == '__main__':
2690+ deployment = NovaCCBasicDeployment(series='precise',
2691+ openstack='cloud:precise-havana',
2692+ source='cloud:precise-updates/havana')
2693+ deployment.run_tests()
2694
2695=== added file 'tests/14-basic-precise-icehouse'
2696--- tests/14-basic-precise-icehouse 1970-01-01 00:00:00 +0000
2697+++ tests/14-basic-precise-icehouse 2014-07-29 13:07:23 +0000
2698@@ -0,0 +1,12 @@
2699+#!/usr/bin/python
2700+
2701+"""Amulet tests on a basic nova cloud controller deployment on
2702+ precise-icehouse."""
2703+
2704+from basic_deployment import NovaCCBasicDeployment
2705+
2706+if __name__ == '__main__':
2707+ deployment = NovaCCBasicDeployment(series='precise',
2708+ openstack='cloud:precise-icehouse',
2709+ source='cloud:precise-updates/icehouse')
2710+ deployment.run_tests()
2711
2712=== added file 'tests/15-basic-trusty-icehouse'
2713--- tests/15-basic-trusty-icehouse 1970-01-01 00:00:00 +0000
2714+++ tests/15-basic-trusty-icehouse 2014-07-29 13:07:23 +0000
2715@@ -0,0 +1,10 @@
2716+#!/usr/bin/python
2717+
2718+"""Amulet tests on a basic nova cloud controller deployment on
2719+ trusty-icehouse."""
2720+
2721+from basic_deployment import NovaCCBasicDeployment
2722+
2723+if __name__ == '__main__':
2724+ deployment = NovaCCBasicDeployment(series='trusty')
2725+ deployment.run_tests()
2726
2727=== added file 'tests/README'
2728--- tests/README 1970-01-01 00:00:00 +0000
2729+++ tests/README 2014-07-29 13:07:23 +0000
2730@@ -0,0 +1,47 @@
2731+This directory provides Amulet tests that focus on verification of Nova Cloud
2732+Controller deployments.
2733+
2734+If you use a web proxy server to access the web, you'll need to set the
2735+AMULET_HTTP_PROXY environment variable to the http URL of the proxy server.
2736+
2737+The following examples demonstrate different ways that tests can be executed.
2738+All examples are run from the charm's root directory.
2739+
2740+ * To run all tests (starting with 00-setup):
2741+
2742+ make test
2743+
2744+ * To run a specific test module (or modules):
2745+
2746+ juju test -v -p AMULET_HTTP_PROXY 15-basic-trusty-icehouse
2747+
2748+ * To run a specific test module (or modules), and keep the environment
2749+ deployed after a failure:
2750+
2751+ juju test --set-e -v -p AMULET_HTTP_PROXY 15-basic-trusty-icehouse
2752+
2753+ * To re-run a test module against an already deployed environment (one
2754+ that was deployed by a previous call to 'juju test --set-e'):
2755+
2756+ ./tests/15-basic-trusty-icehouse
2757+
2758+For debugging and test development purposes, all code should be idempotent.
2759+In other words, the code should have the ability to be re-run without changing
2760+the results beyond the initial run. This enables editing and re-running of a
2761+test module against an already deployed environment, as described above.
2762+
2763+Manual debugging tips:
2764+
2765+ * Set the following env vars before using the OpenStack CLI as admin:
2766+ export OS_AUTH_URL=http://`juju-deployer -f keystone 2>&1 | tail -n 1`:5000/v2.0
2767+ export OS_TENANT_NAME=admin
2768+ export OS_USERNAME=admin
2769+ export OS_PASSWORD=openstack
2770+ export OS_REGION_NAME=RegionOne
2771+
2772+ * Set the following env vars before using the OpenStack CLI as demoUser:
2773+ export OS_AUTH_URL=http://`juju-deployer -f keystone 2>&1 | tail -n 1`:5000/v2.0
2774+ export OS_TENANT_NAME=demoTenant
2775+ export OS_USERNAME=demoUser
2776+ export OS_PASSWORD=password
2777+ export OS_REGION_NAME=RegionOne
2778
2779=== added file 'tests/basic_deployment.py'
2780--- tests/basic_deployment.py 1970-01-01 00:00:00 +0000
2781+++ tests/basic_deployment.py 2014-07-29 13:07:23 +0000
2782@@ -0,0 +1,520 @@
2783+#!/usr/bin/python
2784+
2785+import amulet
2786+
2787+from charmhelpers.contrib.openstack.amulet.deployment import (
2788+ OpenStackAmuletDeployment
2789+)
2790+
2791+from charmhelpers.contrib.openstack.amulet.utils import (
2792+ OpenStackAmuletUtils,
2793+ DEBUG, # flake8: noqa
2794+ ERROR
2795+)
2796+
2797+# Use DEBUG to turn on debug logging
2798+u = OpenStackAmuletUtils(ERROR)
2799+
2800+
2801+class NovaCCBasicDeployment(OpenStackAmuletDeployment):
2802+ """Amulet tests on a basic nova cloud controller deployment."""
2803+
2804+ def __init__(self, series=None, openstack=None, source=None):
2805+ """Deploy the entire test environment."""
2806+ super(NovaCCBasicDeployment, self).__init__(series, openstack, source)
2807+ self._add_services()
2808+ self._add_relations()
2809+ self._configure_services()
2810+ self._deploy()
2811+ self._initialize_tests()
2812+
2813+ def _add_services(self):
2814+ """Add the service that we're testing, including the number of units,
2815+ where nova-cloud-controller is local, and the other charms are from
2816+ the charm store."""
2817+ this_service = ('nova-cloud-controller', 1)
2818+ other_services = [('mysql', 1), ('rabbitmq-server', 1),
2819+ ('nova-compute', 2), ('keystone', 1), ('glance', 1)]
2820+ super(NovaCCBasicDeployment, self)._add_services(this_service,
2821+ other_services)
2822+
2823+ def _add_relations(self):
2824+ """Add all of the relations for the services."""
2825+ relations = {
2826+ 'nova-cloud-controller:shared-db': 'mysql:shared-db',
2827+ 'nova-cloud-controller:identity-service': 'keystone:identity-service',
2828+ 'nova-cloud-controller:amqp': 'rabbitmq-server:amqp',
2829+ 'nova-cloud-controller:cloud-compute': 'nova-compute:cloud-compute',
2830+ 'nova-cloud-controller:image-service': 'glance:image-service',
2831+ 'nova-compute:image-service': 'glance:image-service',
2832+ 'nova-compute:shared-db': 'mysql:shared-db',
2833+ 'nova-compute:amqp': 'rabbitmq-server:amqp',
2834+ 'keystone:shared-db': 'mysql:shared-db',
2835+ 'glance:identity-service': 'keystone:identity-service',
2836+ 'glance:shared-db': 'mysql:shared-db',
2837+ 'glance:amqp': 'rabbitmq-server:amqp'
2838+ }
2839+ super(NovaCCBasicDeployment, self)._add_relations(relations)
2840+
2841+ def _configure_services(self):
2842+ """Configure all of the services."""
2843+ keystone_config = {'admin-password': 'openstack',
2844+ 'admin-token': 'ubuntutesting'}
2845+ configs = {'keystone': keystone_config}
2846+ super(NovaCCBasicDeployment, self)._configure_services(configs)
2847+
2848+ def _initialize_tests(self):
2849+ """Perform final initialization before tests get run."""
2850+ # Access the sentries for inspecting service units
2851+ self.mysql_sentry = self.d.sentry.unit['mysql/0']
2852+ self.keystone_sentry = self.d.sentry.unit['keystone/0']
2853+ self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
2854+ self.nova_cc_sentry = self.d.sentry.unit['nova-cloud-controller/0']
2855+ self.nova_compute_sentry = self.d.sentry.unit['nova-compute/0']
2856+ self.glance_sentry = self.d.sentry.unit['glance/0']
2857+
2858+ # Authenticate admin with keystone
2859+ self.keystone = u.authenticate_keystone_admin(self.keystone_sentry,
2860+ user='admin',
2861+ password='openstack',
2862+ tenant='admin')
2863+
2864+ # Authenticate admin with glance endpoint
2865+ self.glance = u.authenticate_glance_admin(self.keystone)
2866+
2867+ # Create a demo tenant/role/user
2868+ self.demo_tenant = 'demoTenant'
2869+ self.demo_role = 'demoRole'
2870+ self.demo_user = 'demoUser'
2871+ if not u.tenant_exists(self.keystone, self.demo_tenant):
2872+ tenant = self.keystone.tenants.create(tenant_name=self.demo_tenant,
2873+ description='demo tenant',
2874+ enabled=True)
2875+ self.keystone.roles.create(name=self.demo_role)
2876+ self.keystone.users.create(name=self.demo_user,
2877+ password='password',
2878+ tenant_id=tenant.id,
2879+ email='demo@demo.com')
2880+
2881+ # Authenticate demo user with keystone
2882+ self.keystone_demo = \
2883+ u.authenticate_keystone_user(self.keystone, user=self.demo_user,
2884+ password='password',
2885+ tenant=self.demo_tenant)
2886+
2887+ # Authenticate demo user with nova-api
2888+ self.nova_demo = u.authenticate_nova_user(self.keystone,
2889+ user=self.demo_user,
2890+ password='password',
2891+ tenant=self.demo_tenant)
2892+
2893+ def test_services(self):
2894+ """Verify the expected services are running on the corresponding
2895+ service units."""
2896+ commands = {
2897+ self.mysql_sentry: ['status mysql'],
2898+ self.rabbitmq_sentry: ['sudo service rabbitmq-server status'],
2899+ self.nova_cc_sentry: ['status nova-api-ec2',
2900+ 'status nova-api-os-compute',
2901+ 'status nova-objectstore',
2902+ 'status nova-cert',
2903+ 'status nova-scheduler'],
2904+ self.nova_compute_sentry: ['status nova-compute',
2905+ 'status nova-network',
2906+ 'status nova-api'],
2907+ self.keystone_sentry: ['status keystone'],
2908+ self.glance_sentry: ['status glance-registry', 'status glance-api']
2909+ }
2910+ if self._get_openstack_release() >= self.precise_grizzly:
2911+ commands[self.nova_cc_sentry].append('status nova-conductor')
2912+
2913+ ret = u.validate_services(commands)
2914+ if ret:
2915+ amulet.raise_status(amulet.FAIL, msg=ret)
2916+
2917+ def test_service_catalog(self):
2918+ """Verify that the service catalog endpoint data is valid."""
2919+ endpoint_vol = {'adminURL': u.valid_url,
2920+ 'region': 'RegionOne',
2921+ 'publicURL': u.valid_url,
2922+ 'internalURL': u.valid_url}
2923+ endpoint_id = {'adminURL': u.valid_url,
2924+ 'region': 'RegionOne',
2925+ 'publicURL': u.valid_url,
2926+ 'internalURL': u.valid_url}
2927+ if self._get_openstack_release() >= self.precise_folsom:
2928+ endpoint_vol['id'] = u.not_null
2929+ endpoint_id['id'] = u.not_null
2930+ expected = {'s3': [endpoint_vol], 'compute': [endpoint_vol],
2931+ 'ec2': [endpoint_vol], 'identity': [endpoint_id]}
2932+ actual = self.keystone_demo.service_catalog.get_endpoints()
2933+
2934+ ret = u.validate_svc_catalog_endpoint_data(expected, actual)
2935+ if ret:
2936+ amulet.raise_status(amulet.FAIL, msg=ret)
2937+
2938+ def test_openstack_compute_api_endpoint(self):
2939+ """Verify the openstack compute api (osapi) endpoint data."""
2940+ endpoints = self.keystone.endpoints.list()
2941+ admin_port = internal_port = public_port = '8774'
2942+ expected = {'id': u.not_null,
2943+ 'region': 'RegionOne',
2944+ 'adminurl': u.valid_url,
2945+ 'internalurl': u.valid_url,
2946+ 'publicurl': u.valid_url,
2947+ 'service_id': u.not_null}
2948+
2949+ ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
2950+ public_port, expected)
2951+ if ret:
2952+ message = 'osapi endpoint: {}'.format(ret)
2953+ amulet.raise_status(amulet.FAIL, msg=message)
2954+
2955+ def test_ec2_api_endpoint(self):
2956+ """Verify the EC2 api endpoint data."""
2957+ endpoints = self.keystone.endpoints.list()
2958+ admin_port = internal_port = public_port = '8773'
2959+ expected = {'id': u.not_null,
2960+ 'region': 'RegionOne',
2961+ 'adminurl': u.valid_url,
2962+ 'internalurl': u.valid_url,
2963+ 'publicurl': u.valid_url,
2964+ 'service_id': u.not_null}
2965+
2966+ ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
2967+ public_port, expected)
2968+ if ret:
2969+ message = 'EC2 endpoint: {}'.format(ret)
2970+ amulet.raise_status(amulet.FAIL, msg=message)
2971+
2972+ def test_s3_api_endpoint(self):
2973+ """Verify the S3 api endpoint data."""
2974+ endpoints = self.keystone.endpoints.list()
2975+ admin_port = internal_port = public_port = '3333'
2976+ expected = {'id': u.not_null,
2977+ 'region': 'RegionOne',
2978+ 'adminurl': u.valid_url,
2979+ 'internalurl': u.valid_url,
2980+ 'publicurl': u.valid_url,
2981+ 'service_id': u.not_null}
2982+
2983+ ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
2984+ public_port, expected)
2985+ if ret:
2986+ message = 'S3 endpoint: {}'.format(ret)
2987+ amulet.raise_status(amulet.FAIL, msg=message)
2988+
2989+ def test_nova_cc_shared_db_relation(self):
2990+ """Verify the nova-cc to mysql shared-db relation data"""
2991+ unit = self.nova_cc_sentry
2992+ relation = ['shared-db', 'mysql:shared-db']
2993+ expected = {
2994+ 'private-address': u.valid_ip,
2995+ 'nova_database': 'nova',
2996+ 'nova_username': 'nova',
2997+ 'nova_hostname': u.valid_ip
2998+ }
2999+
3000+ ret = u.validate_relation_data(unit, relation, expected)
3001+ if ret:
3002+ message = u.relation_error('nova-cc shared-db', ret)
3003+ amulet.raise_status(amulet.FAIL, msg=message)
3004+
3005+ def test_mysql_shared_db_relation(self):
3006+ """Verify the mysql to nova-cc shared-db relation data"""
3007+ unit = self.mysql_sentry
3008+ relation = ['shared-db', 'nova-cloud-controller:shared-db']
3009+ expected = {
3010+ 'private-address': u.valid_ip,
3011+ 'nova_password': u.not_null,
3012+ 'db_host': u.valid_ip
3013+ }
3014+
3015+ ret = u.validate_relation_data(unit, relation, expected)
3016+ if ret:
3017+ message = u.relation_error('mysql shared-db', ret)
3018+ amulet.raise_status(amulet.FAIL, msg=message)
3019+
3020+ def test_nova_cc_identity_service_relation(self):
3021+ """Verify the nova-cc to keystone identity-service relation data"""
3022+ unit = self.nova_cc_sentry
3023+ relation = ['identity-service', 'keystone:identity-service']
3024+ expected = {
3025+ 'nova_internal_url': u.valid_url,
3026+ 'nova_public_url': u.valid_url,
3027+ 's3_public_url': u.valid_url,
3028+ 's3_service': 's3',
3029+ 'ec2_admin_url': u.valid_url,
3030+ 'ec2_internal_url': u.valid_url,
3031+ 'nova_service': 'nova',
3032+ 's3_region': 'RegionOne',
3033+ 'private-address': u.valid_ip,
3034+ 'nova_region': 'RegionOne',
3035+ 'ec2_public_url': u.valid_url,
3036+ 'ec2_region': 'RegionOne',
3037+ 's3_internal_url': u.valid_url,
3038+ 's3_admin_url': u.valid_url,
3039+ 'nova_admin_url': u.valid_url,
3040+ 'ec2_service': 'ec2'
3041+ }
3042+
3043+ ret = u.validate_relation_data(unit, relation, expected)
3044+ if ret:
3045+ message = u.relation_error('nova-cc identity-service', ret)
3046+ amulet.raise_status(amulet.FAIL, msg=message)
3047+
3048+ def test_keystone_identity_service_relation(self):
3049+ """Verify the keystone to nova-cc identity-service relation data"""
3050+ unit = self.keystone_sentry
3051+ relation = ['identity-service',
3052+ 'nova-cloud-controller:identity-service']
3053+ expected = {
3054+ 'service_protocol': 'http',
3055+ 'service_tenant': 'services',
3056+ 'admin_token': 'ubuntutesting',
3057+ 'service_password': u.not_null,
3058+ 'service_port': '5000',
3059+ 'auth_port': '35357',
3060+ 'auth_protocol': 'http',
3061+ 'private-address': u.valid_ip,
3062+ 'https_keystone': 'False',
3063+ 'auth_host': u.valid_ip,
3064+ 'service_username': 's3_ec2_nova',
3065+ 'service_tenant_id': u.not_null,
3066+ 'service_host': u.valid_ip
3067+ }
3068+
3069+ ret = u.validate_relation_data(unit, relation, expected)
3070+ if ret:
3071+ message = u.relation_error('keystone identity-service', ret)
3072+ amulet.raise_status(amulet.FAIL, msg=message)
3073+
3074+ def test_nova_cc_amqp_relation(self):
3075+ """Verify the nova-cc to rabbitmq-server amqp relation data"""
3076+ unit = self.nova_cc_sentry
3077+ relation = ['amqp', 'rabbitmq-server:amqp']
3078+ expected = {
3079+ 'username': 'nova',
3080+ 'private-address': u.valid_ip,
3081+ 'vhost': 'openstack'
3082+ }
3083+
3084+ ret = u.validate_relation_data(unit, relation, expected)
3085+ if ret:
3086+ message = u.relation_error('nova-cc amqp', ret)
3087+ amulet.raise_status(amulet.FAIL, msg=message)
3088+
3089+ def test_rabbitmq_amqp_relation(self):
3090+ """Verify the rabbitmq-server to nova-cc amqp relation data"""
3091+ unit = self.rabbitmq_sentry
3092+ relation = ['amqp', 'nova-cloud-controller:amqp']
3093+ expected = {
3094+ 'private-address': u.valid_ip,
3095+ 'password': u.not_null,
3096+ 'hostname': u.valid_ip
3097+ }
3098+
3099+ ret = u.validate_relation_data(unit, relation, expected)
3100+ if ret:
3101+ message = u.relation_error('rabbitmq amqp', ret)
3102+ amulet.raise_status(amulet.FAIL, msg=message)
3103+
3104+ def test_nova_cc_cloud_compute_relation(self):
3105+ """Verify the nova-cc to nova-compute cloud-compute relation data"""
3106+ unit = self.nova_cc_sentry
3107+ relation = ['cloud-compute', 'nova-compute:cloud-compute']
3108+ expected = {
3109+ 'volume_service': 'cinder',
3110+ 'network_manager': 'flatdhcpmanager',
3111+ 'ec2_host': u.valid_ip,
3112+ 'private-address': u.valid_ip,
3113+ 'restart_trigger': u.not_null
3114+ }
3115+ if self._get_openstack_release() == self.precise_essex:
3116+ expected['volume_service'] = 'nova-volume'
3117+
3118+ ret = u.validate_relation_data(unit, relation, expected)
3119+ if ret:
3120+ message = u.relation_error('nova-cc cloud-compute', ret)
3121+ amulet.raise_status(amulet.FAIL, msg=message)
3122+
3123+ def test_nova_cloud_compute_relation(self):
3124+ """Verify the nova-compute to nova-cc cloud-compute relation data"""
3125+ unit = self.nova_compute_sentry
3126+ relation = ['cloud-compute', 'nova-cloud-controller:cloud-compute']
3127+ expected = {
3128+ 'private-address': u.valid_ip,
3129+ }
3130+
3131+ ret = u.validate_relation_data(unit, relation, expected)
3132+ if ret:
3133+ message = u.relation_error('nova-compute cloud-compute', ret)
3134+ amulet.raise_status(amulet.FAIL, msg=message)
3135+
3136+ def test_nova_cc_image_service_relation(self):
3137+ """Verify the nova-cc to glance image-service relation data"""
3138+ unit = self.nova_cc_sentry
3139+ relation = ['image-service', 'glance:image-service']
3140+ expected = {
3141+ 'private-address': u.valid_ip,
3142+ }
3143+
3144+ ret = u.validate_relation_data(unit, relation, expected)
3145+ if ret:
3146+ message = u.relation_error('nova-cc image-service', ret)
3147+ amulet.raise_status(amulet.FAIL, msg=message)
3148+
3149+ def test_glance_image_service_relation(self):
3150+ """Verify the glance to nova-cc image-service relation data"""
3151+ unit = self.glance_sentry
3152+ relation = ['image-service', 'nova-cloud-controller:image-service']
3153+ expected = {
3154+ 'private-address': u.valid_ip,
3155+ 'glance-api-server': u.valid_url
3156+ }
3157+
3158+ ret = u.validate_relation_data(unit, relation, expected)
3159+ if ret:
3160+ message = u.relation_error('glance image-service', ret)
3161+ amulet.raise_status(amulet.FAIL, msg=message)
3162+
3163+ def test_restart_on_config_change(self):
3164+ """Verify that the specified services are restarted when the config
3165+ is changed."""
3166+ # NOTE(coreycb): Skipping failing test on essex until resolved.
3167+ # config-flags don't take effect on essex.
3168+ if self._get_openstack_release() == self.precise_essex:
3169+ u.log.error("Skipping failing test until resolved")
3170+ return
3171+
3172+ services = ['nova-api-ec2', 'nova-api-os-compute', 'nova-objectstore',
3173+ 'nova-cert', 'nova-scheduler', 'nova-conductor']
3174+ self.d.configure('nova-cloud-controller',
3175+ {'config-flags': 'quota_cores=20,quota_instances=40,quota_ram=102400'})
3176+ pgrep_full = True
3177+
3178+ time = 20
3179+ conf = '/etc/nova/nova.conf'
3180+ for s in services:
3181+ if not u.service_restarted(self.nova_cc_sentry, s, conf,
3182+ pgrep_full=pgrep_full, sleep_time=time):
3183+ msg = "service {} didn't restart after config change".format(s)
3184+ amulet.raise_status(amulet.FAIL, msg=msg)
3185+ time = 0
3186+
3187+ def test_nova_default_config(self):
3188+ """Verify the data in the nova config file's default section."""
3189+ # NOTE(coreycb): Currently no way to test on essex because config file
3190+ # has no section headers.
3191+ if self._get_openstack_release() == self.precise_essex:
3192+ return
3193+
3194+ unit = self.nova_cc_sentry
3195+ conf = '/etc/nova/nova.conf'
3196+ rabbitmq_relation = self.rabbitmq_sentry.relation('amqp',
3197+ 'nova-cloud-controller:amqp')
3198+ glance_relation = self.glance_sentry.relation('image-service',
3199+ 'nova-cloud-controller:image-service')
3200+ mysql_relation = self.mysql_sentry.relation('shared-db',
3201+ 'nova-cloud-controller:shared-db')
3202+ db_uri = "mysql://{}:{}@{}/{}".format('nova',
3203+ mysql_relation['nova_password'],
3204+ mysql_relation['db_host'],
3205+ 'nova')
3206+ keystone_ep = self.keystone_demo.service_catalog.url_for(
3207+ service_type='identity',
3208+ endpoint_type='publicURL')
3209+ keystone_ec2 = "{}/ec2tokens".format(keystone_ep)
3210+
3211+ expected = {'dhcpbridge_flagfile': '/etc/nova/nova.conf',
3212+ 'dhcpbridge': '/usr/bin/nova-dhcpbridge',
3213+ 'logdir': '/var/log/nova',
3214+ 'state_path': '/var/lib/nova',
3215+ 'lock_path': '/var/lock/nova',
3216+ 'force_dhcp_release': 'True',
3217+ 'iscsi_helper': 'tgtadm',
3218+ 'libvirt_use_virtio_for_bridges': 'True',
3219+ 'connection_type': 'libvirt',
3220+ 'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf',
3221+ 'verbose': 'True',
3222+ 'ec2_private_dns_show_ip': 'True',
3223+ 'api_paste_config': '/etc/nova/api-paste.ini',
3224+ 'volumes_path': '/var/lib/nova/volumes',
3225+ 'enabled_apis': 'ec2,osapi_compute,metadata',
3226+ 'auth_strategy': 'keystone',
3227+ 'compute_driver': 'libvirt.LibvirtDriver',
3228+ 'keystone_ec2_url': keystone_ec2,
3229+ 'sql_connection': db_uri,
3230+ 'rabbit_userid': 'nova',
3231+ 'rabbit_virtual_host': 'openstack',
3232+ 'rabbit_password': rabbitmq_relation['password'],
3233+ 'rabbit_host': rabbitmq_relation['hostname'],
3234+ 'glance_api_servers': glance_relation['glance-api-server'],
3235+ 'network_manager': 'nova.network.manager.FlatDHCPManager',
3236+ 's3_listen_port': '3333',
3237+ 'osapi_compute_listen_port': '8774',
3238+ 'ec2_listen_port': '8773'}
3239+
3240+ ret = u.validate_config_data(unit, conf, 'DEFAULT', expected)
3241+ if ret:
3242+ message = "nova config error: {}".format(ret)
3243+ amulet.raise_status(amulet.FAIL, msg=message)
3244+
3245+
3246+ def test_nova_keystone_authtoken_config(self):
3247+ """Verify the data in the nova config file's keystone_authtoken
3248+ section. This data only exists since icehouse."""
3249+ if self._get_openstack_release() < self.precise_icehouse:
3250+ return
3251+
3252+ unit = self.nova_cc_sentry
3253+ conf = '/etc/nova/nova.conf'
3254+ keystone_relation = self.keystone_sentry.relation('identity-service',
3255+ 'nova-cloud-controller:identity-service')
3256+ keystone_uri = "http://{}:{}/".format(keystone_relation['service_host'],
3257+ keystone_relation['service_port'])
3258+ expected = {'auth_uri': keystone_uri,
3259+ 'auth_host': keystone_relation['service_host'],
3260+ 'auth_port': keystone_relation['auth_port'],
3261+ 'auth_protocol': keystone_relation['auth_protocol'],
3262+ 'admin_tenant_name': keystone_relation['service_tenant'],
3263+ 'admin_user': keystone_relation['service_username'],
3264+ 'admin_password': keystone_relation['service_password']}
3265+
3266+ ret = u.validate_config_data(unit, conf, 'keystone_authtoken', expected)
3267+ if ret:
3268+ message = "nova config error: {}".format(ret)
3269+ amulet.raise_status(amulet.FAIL, msg=message)
3270+
3271+ def test_image_instance_create(self):
3272+ """Create an image/instance, verify they exist, and delete them."""
3273+ # NOTE(coreycb): Skipping failing test on essex until resolved. essex
3274+ # nova API calls are getting "Malformed request url (HTTP
3275+ # 400)".
3276+ if self._get_openstack_release() == self.precise_essex:
3277+ u.log.error("Skipping failing test until resolved")
3278+ return
3279+
3280+ image = u.create_cirros_image(self.glance, "cirros-image")
3281+ if not image:
3282+ amulet.raise_status(amulet.FAIL, msg="Image create failed")
3283+
3284+ instance = u.create_instance(self.nova_demo, "cirros-image", "cirros",
3285+ "m1.tiny")
3286+ if not instance:
3287+ amulet.raise_status(amulet.FAIL, msg="Instance create failed")
3288+
3289+ found = False
3290+ for instance in self.nova_demo.servers.list():
3291+ if instance.name == 'cirros':
3292+ found = True
3293+ if instance.status != 'ACTIVE':
3294+ msg = "cirros instance is not active"
3295+ amulet.raise_status(amulet.FAIL, msg=msg)
3296+
3297+ if not found:
3298+ message = "nova cirros instance does not exist"
3299+ amulet.raise_status(amulet.FAIL, msg=message)
3300+
3301+ u.delete_image(self.glance, image)
3302+ u.delete_instance(self.nova_demo, instance)
3303
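The test_services pattern above (a mapping of sentry units to lists of status commands, handed to u.validate_services) reduces to a simple nested loop. The sketch below illustrates that loop in isolation, assuming a hypothetical `run_cmd` callable standing in for a sentry unit's `run()` method:

```python
def validate_services(commands, run_cmd):
    """Run every status command for every unit; return an error string on
    the first non-zero exit code, or None if all services check out.
    run_cmd is a hypothetical stand-in for a sentry unit's run()."""
    for unit, cmds in commands.items():
        for cmd in cmds:
            output, code = run_cmd(unit, cmd)
            if code != 0:
                return "command `{}` returned {}".format(cmd, code)
    return None


# Fake runner for illustration: pretend nova-cert is down on one unit.
def fake_run(unit, cmd):
    if cmd == 'status nova-cert':
        return ('stop/waiting', 1)
    return ('start/running', 0)


commands = {'nova-cloud-controller/0': ['status nova-scheduler',
                                        'status nova-cert']}
print(validate_services(commands, fake_run))
# command `status nova-cert` returned 1
```

A non-None result is what the tests feed to amulet.raise_status as the failure message.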
3304=== added directory 'tests/charmhelpers'
3305=== added file 'tests/charmhelpers/__init__.py'
3306=== added directory 'tests/charmhelpers/contrib'
3307=== added file 'tests/charmhelpers/contrib/__init__.py'
3308=== added directory 'tests/charmhelpers/contrib/amulet'
3309=== added file 'tests/charmhelpers/contrib/amulet/__init__.py'
3310=== added file 'tests/charmhelpers/contrib/amulet/deployment.py'
3311--- tests/charmhelpers/contrib/amulet/deployment.py 1970-01-01 00:00:00 +0000
3312+++ tests/charmhelpers/contrib/amulet/deployment.py 2014-07-29 13:07:23 +0000
3313@@ -0,0 +1,58 @@
3314+import amulet
3315+
3316+
3317+class AmuletDeployment(object):
3318+ """This class provides generic Amulet deployment and test runner
3319+ methods."""
3320+
3321+ def __init__(self, series=None):
3322+ """Initialize the deployment environment."""
3323+ self.series = None
3324+
3325+ if series:
3326+ self.series = series
3327+ self.d = amulet.Deployment(series=self.series)
3328+ else:
3329+ self.d = amulet.Deployment()
3330+
3331+ def _add_services(self, this_service, other_services):
3332+ """Add services to the deployment where this_service is the local charm
3333+ that we're focused on testing and other_services are the other
3334+ charms that come from the charm store."""
3335+ name, units = range(2)
3336+ self.this_service = this_service[name]
3337+ self.d.add(this_service[name], units=this_service[units])
3338+
3339+ for svc in other_services:
3340+ if self.series:
3341+ self.d.add(svc[name],
3342+ charm='cs:{}/{}'.format(self.series, svc[name]),
3343+ units=svc[units])
3344+ else:
3345+ self.d.add(svc[name], units=svc[units])
3346+
3347+ def _add_relations(self, relations):
3348+ """Add all of the relations for the services."""
3349+ for k, v in relations.iteritems():
3350+ self.d.relate(k, v)
3351+
3352+ def _configure_services(self, configs):
3353+ """Configure all of the services."""
3354+ for service, config in configs.iteritems():
3355+ self.d.configure(service, config)
3356+
3357+ def _deploy(self):
3358+ """Deploy environment and wait for all hooks to finish executing."""
3359+ try:
3360+ self.d.setup()
3361+ self.d.sentry.wait()
3362+ except amulet.helpers.TimeoutError:
3363+ amulet.raise_status(amulet.FAIL, msg="Deployment timed out")
3364+ except:
3365+ raise
3366+
3367+ def run_tests(self):
3368+ """Run all of the methods that are prefixed with 'test_'."""
3369+ for test in dir(self):
3370+ if test.startswith('test_'):
3371+ getattr(self, test)()
3372
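The run_tests discovery scheme above (iterating dir(self) for names prefixed `test_`) can be illustrated outside Amulet with a minimal sketch; the Harness class and its methods here are hypothetical:

```python
class Harness(object):
    """Hypothetical stand-in for AmuletDeployment's test runner."""

    def __init__(self):
        self.ran = []

    def test_alpha(self):
        self.ran.append('alpha')

    def test_beta(self):
        self.ran.append('beta')

    def helper(self):  # no 'test_' prefix, so never auto-run
        self.ran.append('helper')

    def run_tests(self):
        # Same discovery scheme as AmuletDeployment.run_tests: every
        # attribute whose name starts with 'test_' is invoked.
        for name in dir(self):
            if name.startswith('test_'):
                getattr(self, name)()


h = Harness()
h.run_tests()
print(h.ran)  # dir() returns sorted names, so tests run alphabetically
```

One consequence worth noting: because dir() sorts names, test methods run in alphabetical order, not declaration order.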
3373=== added file 'tests/charmhelpers/contrib/amulet/utils.py'
3374--- tests/charmhelpers/contrib/amulet/utils.py 1970-01-01 00:00:00 +0000
3375+++ tests/charmhelpers/contrib/amulet/utils.py 2014-07-29 13:07:23 +0000
3376@@ -0,0 +1,157 @@
3377+import ConfigParser
3378+import io
3379+import logging
3380+import re
3381+import sys
3382+from time import sleep
3383+
3384+
3385+class AmuletUtils(object):
3386+ """This class provides common utility functions that are used by Amulet
3387+ tests."""
3388+
3389+ def __init__(self, log_level=logging.ERROR):
3390+ self.log = self.get_logger(level=log_level)
3391+
3392+ def get_logger(self, name="amulet-logger", level=logging.DEBUG):
3393+ """Get a logger object that will log to stdout."""
3394+ log = logging
3395+ logger = log.getLogger(name)
3396+ fmt = \
3397+ log.Formatter("%(asctime)s %(funcName)s %(levelname)s: %(message)s")
3398+
3399+ handler = log.StreamHandler(stream=sys.stdout)
3400+ handler.setLevel(level)
3401+ handler.setFormatter(fmt)
3402+
3403+ logger.addHandler(handler)
3404+ logger.setLevel(level)
3405+
3406+ return logger
3407+
3408+ def valid_ip(self, ip):
3409+ if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip):
3410+ return True
3411+ else:
3412+ return False
3413+
3414+ def valid_url(self, url):
3415+ p = re.compile(
3416+ r'^(?:http|ftp)s?://'
3417+ r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # flake8: noqa
3418+ r'localhost|'
3419+ r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
3420+ r'(?::\d+)?'
3421+ r'(?:/?|[/?]\S+)$',
3422+ re.IGNORECASE)
3423+ if p.match(url):
3424+ return True
3425+ else:
3426+ return False
3427+
3428+ def validate_services(self, commands):
3429+ """Verify the specified services are running on the corresponding
3430+ service units."""
3431+ for k, v in commands.iteritems():
3432+ for cmd in v:
3433+ output, code = k.run(cmd)
3434+ if code != 0:
3435+ return "command `{}` returned {}".format(cmd, str(code))
3436+ return None
3437+
3438+ def _get_config(self, unit, filename):
3439+ """Get a ConfigParser object for parsing a unit's config file."""
3440+ file_contents = unit.file_contents(filename)
3441+ config = ConfigParser.ConfigParser()
3442+ config.readfp(io.StringIO(file_contents))
3443+ return config
3444+
3445+ def validate_config_data(self, sentry_unit, config_file, section, expected):
3446+ """Verify that the specified section of the config file contains
3447+ the expected option key:value pairs."""
3448+ config = self._get_config(sentry_unit, config_file)
3449+
3450+ if section != 'DEFAULT' and not config.has_section(section):
3451+ return "section [{}] does not exist".format(section)
3452+
3453+ for k in expected.keys():
3454+ if not config.has_option(section, k):
3455+ return "section [{}] is missing option {}".format(section, k)
3456+ if config.get(section, k) != expected[k]:
3457+ return "section [{}] {}:{} != expected {}:{}".format(section,
3458+ k, config.get(section, k), k, expected[k])
3459+ return None
3460+
3461+ def _validate_dict_data(self, expected, actual):
3462+ """Compare expected dictionary data vs actual dictionary data.
3463+ The values in the 'expected' dictionary can be strings, bools, ints,
3464+ longs, or can be a function that evaluates a variable and returns a
3465+ bool."""
3466+ for k, v in expected.iteritems():
3467+ if k in actual:
3468+ if isinstance(v, basestring) or \
3469+ isinstance(v, bool) or \
3470+ isinstance(v, (int, long)):
3471+ if v != actual[k]:
3472+ return "{}:{}".format(k, actual[k])
3473+ elif not v(actual[k]):
3474+ return "{}:{}".format(k, actual[k])
3475+ else:
3476+ return "key '{}' does not exist".format(k)
3477+ return None
3478+
3479+ def validate_relation_data(self, sentry_unit, relation, expected):
3480+ """Validate actual relation data based on expected relation data."""
3481+ actual = sentry_unit.relation(relation[0], relation[1])
3482+ self.log.debug('actual: {}'.format(repr(actual)))
3483+ return self._validate_dict_data(expected, actual)
3484+
3485+ def _validate_list_data(self, expected, actual):
3486+ """Compare expected list vs actual list data."""
3487+ for e in expected:
3488+ if e not in actual:
3489+ return "expected item {} not found in actual list".format(e)
3490+ return None
3491+
3492+ def not_null(self, string):
3493+ if string is not None:
3494+ return True
3495+ else:
3496+ return False
3497+
3498+ def _get_file_mtime(self, sentry_unit, filename):
3499+ """Get last modification time of file."""
3500+ return sentry_unit.file_stat(filename)['mtime']
3501+
3502+ def _get_dir_mtime(self, sentry_unit, directory):
3503+ """Get last modification time of directory."""
3504+ return sentry_unit.directory_stat(directory)['mtime']
3505+
3506+ def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False):
3507+ """Determine start time of the process based on the last modification
3508+ time of the /proc/pid directory. If pgrep_full is True, the process
3509+ name is matched against the full command line."""
3510+ if pgrep_full:
3511+ cmd = 'pgrep -o -f {}'.format(service)
3512+ else:
3513+ cmd = 'pgrep -o {}'.format(service)
3514+ proc_dir = '/proc/{}'.format(sentry_unit.run(cmd)[0].strip())
3515+ return self._get_dir_mtime(sentry_unit, proc_dir)
3516+
3517+ def service_restarted(self, sentry_unit, service, filename,
3518+ pgrep_full=False, sleep_time=20):
3519+ """Compare a service's start time vs a file's last modification time
3520+ (such as a config file for that service) to determine if the service
3521+ has been restarted."""
3522+ sleep(sleep_time)
3523+ if self._get_proc_start_time(sentry_unit, service, pgrep_full) >= \
3524+ self._get_file_mtime(sentry_unit, filename):
3525+ return True
3526+ else:
3527+ return False
3528+
3529+ def relation_error(self, name, data):
3530+ return 'unexpected relation data in {} - {}'.format(name, data)
3531+
3532+ def endpoint_error(self, name, data):
3533+ return 'unexpected endpoint data in {} - {}'.format(name, data)
3534
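The _validate_dict_data helper above mixes two kinds of expected values: plain literals compared for equality, and callables used as predicates (which is how u.valid_ip and u.not_null plug into the relation checks). A Python 3 sketch of that idea (the original is Python 2, hence its basestring/long handling):

```python
def validate_dict_data(expected, actual):
    """Sketch of dict validation where expected values may be literals
    (compared for equality) or callables (treated as predicates).
    Returns an error string on the first mismatch, or None."""
    for k, v in expected.items():
        if k not in actual:
            return "key '{}' does not exist".format(k)
        if isinstance(v, (str, bool, int)):
            if v != actual[k]:
                return "{}:{}".format(k, actual[k])
        elif not v(actual[k]):
            return "{}:{}".format(k, actual[k])
    return None


def not_null(value):
    return value is not None


# Example relation data, as a sentry's relation() call might return it.
actual = {'private-address': '10.0.0.7', 'password': 'secret'}
expected = {'private-address': not_null, 'password': 'secret'}
print(validate_dict_data(expected, actual))  # None: everything matched
```

Passing functions as expected values is what lets one expected dict cover both fixed strings ('vhost': 'openstack') and values that vary per deployment (IPs, generated passwords).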
3535=== added directory 'tests/charmhelpers/contrib/openstack'
3536=== added file 'tests/charmhelpers/contrib/openstack/__init__.py'
3537=== added directory 'tests/charmhelpers/contrib/openstack/amulet'
3538=== added file 'tests/charmhelpers/contrib/openstack/amulet/__init__.py'
3539=== added file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
3540--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000
3541+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2014-07-29 13:07:23 +0000
3542@@ -0,0 +1,55 @@
3543+from charmhelpers.contrib.amulet.deployment import (
3544+ AmuletDeployment
3545+)
3546+
3547+
3548+class OpenStackAmuletDeployment(AmuletDeployment):
3549+ """This class inherits from AmuletDeployment and has additional support
3550+ that is specifically for use by OpenStack charms."""
3551+
3552+ def __init__(self, series=None, openstack=None, source=None):
3553+ """Initialize the deployment environment."""
3554+ super(OpenStackAmuletDeployment, self).__init__(series)
3555+ self.openstack = openstack
3556+ self.source = source
3557+
3558+ def _add_services(self, this_service, other_services):
3559+ """Add services to the deployment and set openstack-origin."""
3560+ super(OpenStackAmuletDeployment, self)._add_services(this_service,
3561+ other_services)
3562+ name = 0
3563+ services = other_services
3564+ services.append(this_service)
3565+ use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph']
3566+
3567+ if self.openstack:
3568+ for svc in services:
3569+ if svc[name] not in use_source:
3570+ config = {'openstack-origin': self.openstack}
3571+ self.d.configure(svc[name], config)
3572+
3573+ if self.source:
3574+ for svc in services:
3575+ if svc[name] in use_source:
3576+ config = {'source': self.source}
3577+ self.d.configure(svc[name], config)
3578+
3579+ def _configure_services(self, configs):
3580+ """Configure all of the services."""
3581+ for service, config in configs.iteritems():
3582+ self.d.configure(service, config)
3583+
3584+ def _get_openstack_release(self):
3585+ """Return an integer representing the enum value of the openstack
3586+ release."""
3587+ self.precise_essex, self.precise_folsom, self.precise_grizzly, \
3588+ self.precise_havana, self.precise_icehouse, \
3589+ self.trusty_icehouse = range(6)
3590+ releases = {
3591+ ('precise', None): self.precise_essex,
3592+ ('precise', 'cloud:precise-folsom'): self.precise_folsom,
3593+ ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
3594+ ('precise', 'cloud:precise-havana'): self.precise_havana,
3595+ ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
3596+ ('trusty', None): self.trusty_icehouse}
3597+ return releases[(self.series, self.openstack)]
3598
3599=== added file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
3600--- tests/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000
3601+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2014-07-29 13:07:23 +0000
3602@@ -0,0 +1,209 @@
3603+import logging
3604+import os
3605+import time
3606+import urllib
3607+
3608+import glanceclient.v1.client as glance_client
3609+import keystoneclient.v2_0 as keystone_client
3610+import novaclient.v1_1.client as nova_client
3611+
3612+from charmhelpers.contrib.amulet.utils import (
3613+ AmuletUtils
3614+)
3615+
3616+DEBUG = logging.DEBUG
3617+ERROR = logging.ERROR
3618+
3619+
3620+class OpenStackAmuletUtils(AmuletUtils):
3621+ """This class inherits from AmuletUtils and has additional support
3622+ that is specifically for use by OpenStack charms."""
3623+
3624+ def __init__(self, log_level=ERROR):
3625+ """Initialize the deployment environment."""
3626+ super(OpenStackAmuletUtils, self).__init__(log_level)
3627+
3628+ def validate_endpoint_data(self, endpoints, admin_port, internal_port,
3629+ public_port, expected):
3630+ """Validate actual endpoint data vs expected endpoint data. The ports
3631+ are used to find the matching endpoint."""
3632+ found = False
3633+ for ep in endpoints:
3634+ self.log.debug('endpoint: {}'.format(repr(ep)))
3635+ if admin_port in ep.adminurl and internal_port in ep.internalurl \
3636+ and public_port in ep.publicurl:
3637+ found = True
3638+ actual = {'id': ep.id,
3639+ 'region': ep.region,
3640+ 'adminurl': ep.adminurl,
3641+ 'internalurl': ep.internalurl,
3642+ 'publicurl': ep.publicurl,
3643+ 'service_id': ep.service_id}
3644+ ret = self._validate_dict_data(expected, actual)
3645+ if ret:
3646+ return 'unexpected endpoint data - {}'.format(ret)
3647+
3648+ if not found:
3649+ return 'endpoint not found'
3650+
3651+ def validate_svc_catalog_endpoint_data(self, expected, actual):
3652+ """Validate a list of actual service catalog endpoints vs a list of
3653+ expected service catalog endpoints."""
3654+ self.log.debug('actual: {}'.format(repr(actual)))
3655+ for k, v in expected.iteritems():
3656+ if k in actual:
3657+ ret = self._validate_dict_data(expected[k][0], actual[k][0])
3658+ if ret:
3659+ return self.endpoint_error(k, ret)
3660+ else:
3661+ return "endpoint {} does not exist".format(k)
3662+ return ret
3663+
3664+ def validate_tenant_data(self, expected, actual):
3665+ """Validate a list of actual tenant data vs list of expected tenant
3666+ data."""
3667+ self.log.debug('actual: {}'.format(repr(actual)))
3668+ for e in expected:
3669+ found = False
3670+ for act in actual:
3671+ a = {'enabled': act.enabled, 'description': act.description,
3672+ 'name': act.name, 'id': act.id}
3673+ if e['name'] == a['name']:
3674+ found = True
3675+ ret = self._validate_dict_data(e, a)
3676+ if ret:
3677+ return "unexpected tenant data - {}".format(ret)
3678+ if not found:
3679+ return "tenant {} does not exist".format(e['name'])
3680+ return ret
3681+
3682+ def validate_role_data(self, expected, actual):
3683+ """Validate a list of actual role data vs a list of expected role
3684+ data."""
3685+ self.log.debug('actual: {}'.format(repr(actual)))
3686+ for e in expected:
3687+ found = False
3688+ for act in actual:
3689+ a = {'name': act.name, 'id': act.id}
3690+ if e['name'] == a['name']:
3691+ found = True
3692+ ret = self._validate_dict_data(e, a)
3693+ if ret:
3694+ return "unexpected role data - {}".format(ret)
3695+ if not found:
3696+ return "role {} does not exist".format(e['name'])
3697+ return ret
3698+
3699+ def validate_user_data(self, expected, actual):
3700+ """Validate a list of actual user data vs a list of expected user
3701+ data."""
3702+ self.log.debug('actual: {}'.format(repr(actual)))
3703+ for e in expected:
3704+ found = False
3705+ for act in actual:
3706+ a = {'enabled': act.enabled, 'name': act.name,
3707+ 'email': act.email, 'tenantId': act.tenantId,
3708+ 'id': act.id}
3709+ if e['name'] == a['name']:
3710+ found = True
3711+ ret = self._validate_dict_data(e, a)
3712+ if ret:
3713+ return "unexpected user data - {}".format(ret)
3714+ if not found:
3715+ return "user {} does not exist".format(e['name'])
3716+ return ret
3717+
3718+ def validate_flavor_data(self, expected, actual):
3719+ """Validate a list of actual flavors vs a list of expected flavors."""
3720+ self.log.debug('actual: {}'.format(repr(actual)))
3721+ act = [a.name for a in actual]
3722+ return self._validate_list_data(expected, act)
3723+
3724+ def tenant_exists(self, keystone, tenant):
3725+ """Return True if tenant exists"""
3726+ return tenant in [t.name for t in keystone.tenants.list()]
3727+
3728+ def authenticate_keystone_admin(self, keystone_sentry, user, password,
3729+ tenant):
3730+ """Authenticates admin user with the keystone admin endpoint."""
3731+ service_ip = \
3732+ keystone_sentry.relation('shared-db',
3733+ 'mysql:shared-db')['private-address']
3734+ ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
3735+ return keystone_client.Client(username=user, password=password,
3736+ tenant_name=tenant, auth_url=ep)
3737+
3738+ def authenticate_keystone_user(self, keystone, user, password, tenant):
3739+ """Authenticates a regular user with the keystone public endpoint."""
3740+ ep = keystone.service_catalog.url_for(service_type='identity',
3741+ endpoint_type='publicURL')
3742+ return keystone_client.Client(username=user, password=password,
3743+ tenant_name=tenant, auth_url=ep)
3744+
3745+ def authenticate_glance_admin(self, keystone):
3746+ """Authenticates admin user with glance."""
3747+ ep = keystone.service_catalog.url_for(service_type='image',
3748+ endpoint_type='adminURL')
3749+ return glance_client.Client(ep, token=keystone.auth_token)
3750+
3751+ def authenticate_nova_user(self, keystone, user, password, tenant):
3752+ """Authenticates a regular user with nova-api."""
3753+ ep = keystone.service_catalog.url_for(service_type='identity',
3754+ endpoint_type='publicURL')
3755+ return nova_client.Client(username=user, api_key=password,
3756+ project_id=tenant, auth_url=ep)
3757+
3758+ def create_cirros_image(self, glance, image_name):
3759+ """Download the latest cirros image and upload it to glance."""
3760+ http_proxy = os.getenv('AMULET_HTTP_PROXY')
3761+ self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
3762+ if http_proxy:
3763+ proxies = {'http': http_proxy}
3764+ opener = urllib.FancyURLopener(proxies)
3765+ else:
3766+ opener = urllib.FancyURLopener()
3767+
3768+ f = opener.open("http://download.cirros-cloud.net/version/released")
3769+ version = f.read().strip()
3770+ cirros_img = "tests/cirros-{}-x86_64-disk.img".format(version)
3771+
3772+ if not os.path.exists(cirros_img):
3773+ cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
3774+ version, cirros_img)
3775+ opener.retrieve(cirros_url, cirros_img)
3776+ f.close()
3777+
3778+ with open(cirros_img) as f:
3779+ image = glance.images.create(name=image_name, is_public=True,
3780+ disk_format='qcow2',
3781+ container_format='bare', data=f)
3782+ return image
3783+
3784+ def delete_image(self, glance, image):
3785+ """Delete the specified image."""
3786+ glance.images.delete(image)
3787+
3788+ def create_instance(self, nova, image_name, instance_name, flavor):
3789+ """Create the specified instance."""
3790+ image = nova.images.find(name=image_name)
3791+ flavor = nova.flavors.find(name=flavor)
3792+ instance = nova.servers.create(name=instance_name, image=image,
3793+ flavor=flavor)
3794+
3795+ count = 1
3796+ status = instance.status
3797+ while status != 'ACTIVE' and count < 60:
3798+ time.sleep(3)
3799+ instance = nova.servers.get(instance.id)
3800+ status = instance.status
3801+ self.log.debug('instance status: {}'.format(status))
3802+ count += 1
3803+
3804+ if status == 'BUILD':
3805+ return None
3806+
3807+ return instance
3808+
3809+ def delete_instance(self, nova, instance):
3810+ """Delete the specified instance."""
3811+ nova.servers.delete(instance)
3812
3813=== modified file 'unit_tests/test_nova_cc_hooks.py'
3814--- unit_tests/test_nova_cc_hooks.py 2014-05-21 10:03:01 +0000
3815+++ unit_tests/test_nova_cc_hooks.py 2014-07-29 13:07:23 +0000
3816@@ -1,6 +1,6 @@
3817-from mock import MagicMock, patch
3818-from test_utils import CharmTestCase
3819-
3820+from mock import MagicMock, patch, call
3821+from test_utils import CharmTestCase, patch_open
3822+import os
3823 with patch('charmhelpers.core.hookenv.config') as config:
3824 config.return_value = 'neutron'
3825 import nova_cc_utils as utils
3826@@ -11,7 +11,11 @@
3827 utils.register_configs = MagicMock()
3828 utils.restart_map = MagicMock()
3829
3830-import nova_cc_hooks as hooks
3831+with patch('nova_cc_utils.guard_map') as gmap:
3832+ with patch('charmhelpers.core.hookenv.config') as config:
3833+ config.return_value = False
3834+ gmap.return_value = {}
3835+ import nova_cc_hooks as hooks
3836
3837 utils.register_configs = _reg
3838 utils.restart_map = _map
3839@@ -35,9 +39,11 @@
3840 'relation_set',
3841 'relation_ids',
3842 'ssh_compute_add',
3843- 'ssh_known_hosts_b64',
3844- 'ssh_authorized_keys_b64',
3845+ 'ssh_known_hosts_lines',
3846+ 'ssh_authorized_keys_lines',
3847 'save_script_rc',
3848+ 'service_running',
3849+ 'service_stop',
3850 'execd_preinstall',
3851 'network_manager',
3852 'volume_service',
3853@@ -98,15 +104,64 @@
3854 self.test_relation.set({
3855 'migration_auth_type': 'ssh', 'ssh_public_key': 'fookey',
3856 'private-address': '10.0.0.1'})
3857- self.ssh_known_hosts_b64.return_value = 'hosts'
3858- self.ssh_authorized_keys_b64.return_value = 'keys'
3859- hooks.compute_changed()
3860- self.ssh_compute_add.assert_called_with('fookey')
3861- self.relation_set.assert_called_with(known_hosts='hosts',
3862- authorized_keys='keys')
3863+ self.ssh_known_hosts_lines.return_value = [
3864+ 'k_h_0', 'k_h_1', 'k_h_2']
3865+ self.ssh_authorized_keys_lines.return_value = [
3866+ 'auth_0', 'auth_1', 'auth_2']
3867+ hooks.compute_changed()
3868+ self.ssh_compute_add.assert_called_with('fookey', rid=None, unit=None)
3869+ expected_relations = [
3870+ call(relation_settings={'authorized_keys_0': 'auth_0'},
3871+ relation_id=None),
3872+ call(relation_settings={'authorized_keys_1': 'auth_1'},
3873+ relation_id=None),
3874+ call(relation_settings={'authorized_keys_2': 'auth_2'},
3875+ relation_id=None),
3876+ call(relation_settings={'known_hosts_0': 'k_h_0'},
3877+ relation_id=None),
3878+ call(relation_settings={'known_hosts_1': 'k_h_1'},
3879+ relation_id=None),
3880+ call(relation_settings={'known_hosts_2': 'k_h_2'},
3881+ relation_id=None),
3882+ call(authorized_keys_max_index=3, relation_id=None),
3883+ call(known_hosts_max_index=3, relation_id=None)]
3884+ self.assertEquals(sorted(self.relation_set.call_args_list),
3885+ sorted(expected_relations))
3886+
3887+ def test_compute_changed_nova_public_key(self):
3888+ self.test_relation.set({
3889+ 'migration_auth_type': 'sasl', 'nova_ssh_public_key': 'fookey',
3890+ 'private-address': '10.0.0.1'})
3891+ self.ssh_known_hosts_lines.return_value = [
3892+ 'k_h_0', 'k_h_1', 'k_h_2']
3893+ self.ssh_authorized_keys_lines.return_value = [
3894+ 'auth_0', 'auth_1', 'auth_2']
3895+ hooks.compute_changed()
3896+ self.ssh_compute_add.assert_called_with('fookey', user='nova',
3897+ rid=None, unit=None)
3898+ expected_relations = [
3899+ call(relation_settings={'nova_authorized_keys_0': 'auth_0'},
3900+ relation_id=None),
3901+ call(relation_settings={'nova_authorized_keys_1': 'auth_1'},
3902+ relation_id=None),
3903+ call(relation_settings={'nova_authorized_keys_2': 'auth_2'},
3904+ relation_id=None),
3905+ call(relation_settings={'nova_known_hosts_0': 'k_h_0'},
3906+ relation_id=None),
3907+ call(relation_settings={'nova_known_hosts_1': 'k_h_1'},
3908+ relation_id=None),
3909+ call(relation_settings={'nova_known_hosts_2': 'k_h_2'},
3910+ relation_id=None),
3911+ call(relation_settings={'nova_known_hosts_max_index': 3},
3912+ relation_id=None),
3913+ call(relation_settings={'nova_authorized_keys_max_index': 3},
3914+ relation_id=None)]
3915+ self.assertEquals(sorted(self.relation_set.call_args_list),
3916+ sorted(expected_relations))
3917
3918 @patch.object(hooks, '_auth_config')
3919 def test_compute_joined_neutron(self, auth_config):
3920+ self.is_relation_made.return_value = False
3921 self.network_manager.return_value = 'neutron'
3922 self.eligible_leader = True
3923 self.keystone_ca_cert_b64.return_value = 'foocert64'
3924@@ -122,6 +177,8 @@
3925 relation_id=None,
3926 quantum_url='http://nova-cc-host1:9696',
3927 ca_cert='foocert64',
3928+ quantum_port=9696,
3929+ quantum_host='nova-cc-host1',
3930 quantum_security_groups='no',
3931 region='RegionOne',
3932 volume_service='cinder',
3933@@ -129,6 +186,40 @@
3934 quantum_plugin='nvp',
3935 network_manager='neutron', **FAKE_KS_AUTH_CFG)
3936
3937+ @patch.object(hooks, 'NeutronAPIContext')
3938+ @patch.object(hooks, '_auth_config')
3939+ def test_compute_joined_neutron_api_rel(self, auth_config, napi):
3940+ def mock_NeutronAPIContext():
3941+ return {
3942+ 'neutron_plugin': 'bob',
3943+ 'neutron_security_groups': 'yes',
3944+ 'neutron_url': 'http://nova-cc-host1:9696',
3945+ }
3946+ napi.return_value = mock_NeutronAPIContext
3947+ self.is_relation_made.return_value = True
3948+ self.network_manager.return_value = 'neutron'
3949+ self.eligible_leader = True
3950+ self.keystone_ca_cert_b64.return_value = 'foocert64'
3951+ self.volume_service.return_value = 'cinder'
3952+ self.unit_get.return_value = 'nova-cc-host1'
3953+ self.canonical_url.return_value = 'http://nova-cc-host1'
3954+ self.api_port.return_value = '9696'
3955+ self.neutron_plugin.return_value = 'nvp'
3956+ auth_config.return_value = FAKE_KS_AUTH_CFG
3957+ hooks.compute_joined()
3958+ self.relation_set.assert_called_with(
3959+ relation_id=None,
3960+ quantum_url='http://nova-cc-host1:9696',
3961+ ca_cert='foocert64',
3962+ quantum_port=9696,
3963+ quantum_host='nova-cc-host1',
3964+ quantum_security_groups='yes',
3965+ region='RegionOne',
3966+ volume_service='cinder',
3967+ ec2_host='nova-cc-host1',
3968+ quantum_plugin='bob',
3969+ network_manager='neutron', **FAKE_KS_AUTH_CFG)
3970+
3971 @patch.object(hooks, '_auth_config')
3972 def test_nova_vmware_joined(self, auth_config):
3973 auth_config.return_value = FAKE_KS_AUTH_CFG
3974@@ -231,3 +322,46 @@
3975 self._postgresql_db_test(configs)
3976 self.assertTrue(configs.write_all.called)
3977 self.migrate_database.assert_called_with()
3978+
3979+ @patch.object(os, 'rename')
3980+ @patch.object(os.path, 'isfile')
3981+ @patch.object(hooks, 'CONFIGS')
3982+ def test_neutron_api_relation_joined(self, configs, isfile, rename):
3983+ neutron_conf = '/etc/neutron/neutron.conf'
3984+ nova_url = 'http://novaurl:8774/v2'
3985+ isfile.return_value = True
3986+ self.service_running.return_value = True
3987+ _identity_joined = self.patch('identity_joined')
3988+ self.relation_ids.side_effect = ['relid']
3989+ self.canonical_url.return_value = 'http://novaurl'
3990+ with patch_open() as (_open, _file):
3991+ hooks.neutron_api_relation_joined()
3992+ self.service_stop.assert_called_with('neutron-server')
3993+ rename.assert_called_with(neutron_conf, neutron_conf + '_unused')
3994+ self.assertTrue(_identity_joined.called)
3995+ self.relation_set.assert_called_with(relation_id=None,
3996+ nova_url=nova_url)
3997+
3998+ @patch.object(hooks, 'CONFIGS')
3999+ def test_neutron_api_relation_changed(self, configs):
4000+ self.relation_ids.return_value = ['relid']
4001+ _compute_joined = self.patch('compute_joined')
4002+ _quantum_joined = self.patch('quantum_joined')
4003+ hooks.neutron_api_relation_changed()
4004+ self.assertTrue(configs.write.called_with('/etc/nova/nova.conf'))
4005+ self.assertTrue(_compute_joined.called)
4006+ self.assertTrue(_quantum_joined.called)
4007+
4008+ @patch.object(os, 'remove')
4009+ @patch.object(os.path, 'isfile')
4010+ @patch.object(hooks, 'CONFIGS')
4011+ def test_neutron_api_relation_broken(self, configs, isfile, remove):
4012+ isfile.return_value = True
4013+ self.relation_ids.return_value = ['relid']
4014+ _compute_joined = self.patch('compute_joined')
4015+ _quantum_joined = self.patch('quantum_joined')
4016+ hooks.neutron_api_relation_broken()
4017+ remove.assert_called_with('/etc/init/neutron-server.override')
4018+ self.assertTrue(configs.write_all.called)
4019+ self.assertTrue(_compute_joined.called)
4020+ self.assertTrue(_quantum_joined.called)
4021
4022=== modified file 'unit_tests/test_nova_cc_utils.py'
4023--- unit_tests/test_nova_cc_utils.py 2014-05-02 10:06:23 +0000
4024+++ unit_tests/test_nova_cc_utils.py 2014-07-29 13:07:23 +0000
4025@@ -22,6 +22,7 @@
4026 'eligible_leader',
4027 'enable_policy_rcd',
4028 'get_os_codename_install_source',
4029+ 'is_relation_made',
4030 'log',
4031 'ml2_migration',
4032 'network_manager',
4033@@ -34,7 +35,9 @@
4034 'remote_unit',
4035 '_save_script_rc',
4036 'service_start',
4037- 'services'
4038+ 'services',
4039+ 'service_running',
4040+ 'service_stop'
4041 ]
4042
4043 SCRIPTRC_ENV_VARS = {
4044@@ -151,6 +154,7 @@
4045
4046 @patch('charmhelpers.contrib.openstack.context.SubordinateConfigContext')
4047 def test_resource_map_quantum(self, subcontext):
4048+ self.is_relation_made.return_value = False
4049 self._resource_map(network_manager='quantum')
4050 _map = utils.resource_map()
4051 confs = [
4052@@ -162,6 +166,7 @@
4053
4054 @patch('charmhelpers.contrib.openstack.context.SubordinateConfigContext')
4055 def test_resource_map_neutron(self, subcontext):
4056+ self.is_relation_made.return_value = False
4057 self._resource_map(network_manager='neutron')
4058 _map = utils.resource_map()
4059 confs = [
4060@@ -170,6 +175,17 @@
4061 [self.assertIn(q_conf, _map.keys()) for q_conf in confs]
4062
4063 @patch('charmhelpers.contrib.openstack.context.SubordinateConfigContext')
4064+ def test_resource_map_neutron_api_rel(self, subcontext):
4065+ self.is_relation_made.return_value = True
4066+ self._resource_map(network_manager='neutron')
4067+ _map = utils.resource_map()
4068+ confs = [
4069+ '/etc/neutron/neutron.conf',
4070+ ]
4071+ for q_conf in confs:
4072+ self.assertFalse(q_conf in _map.keys())
4073+
4074+ @patch('charmhelpers.contrib.openstack.context.SubordinateConfigContext')
4075 def test_resource_map_vmware(self, subcontext):
4076 fake_context = MagicMock()
4077 fake_context.return_value = {
4078@@ -201,6 +217,7 @@
4079 @patch('os.path.exists')
4080 @patch('charmhelpers.contrib.openstack.context.SubordinateConfigContext')
4081 def test_restart_map_api_before_frontends(self, subcontext, _exists):
4082+ self.is_relation_made.return_value = False
4083 _exists.return_value = False
4084 self._resource_map(network_manager='neutron')
4085 _map = utils.restart_map()
4086@@ -226,6 +243,7 @@
4087
4088 @patch('charmhelpers.contrib.openstack.context.SubordinateConfigContext')
4089 def test_determine_packages_neutron(self, subcontext):
4090+ self.is_relation_made.return_value = False
4091 self._resource_map(network_manager='neutron')
4092 pkgs = utils.determine_packages()
4093 self.assertIn('neutron-server', pkgs)
4094@@ -321,8 +339,8 @@
4095 check_output.return_value = 'fookey'
4096 host_key.return_value = 'fookey_old'
4097 with patch_open() as (_open, _file):
4098- utils.add_known_host('foohost')
4099- rm.assert_called_with('foohost', None)
4100+ utils.add_known_host('foohost', None, None)
4101+ rm.assert_called_with('foohost', None, None)
4102
4103 @patch.object(utils, 'known_hosts')
4104 @patch.object(utils, 'remove_known_host')
4105@@ -355,19 +373,19 @@
4106 def test_known_hosts(self, ssh_dir):
4107 ssh_dir.return_value = '/tmp/foo'
4108 self.assertEquals(utils.known_hosts(), '/tmp/foo/known_hosts')
4109- ssh_dir.assert_called_with(None)
4110+ ssh_dir.assert_called_with(None, None)
4111 self.assertEquals(utils.known_hosts('bar'), '/tmp/foo/known_hosts')
4112- ssh_dir.assert_called_with('bar')
4113+ ssh_dir.assert_called_with('bar', None)
4114
4115 @patch.object(utils, 'ssh_directory_for_unit')
4116 def test_authorized_keys(self, ssh_dir):
4117 ssh_dir.return_value = '/tmp/foo'
4118 self.assertEquals(utils.authorized_keys(), '/tmp/foo/authorized_keys')
4119- ssh_dir.assert_called_with(None)
4120+ ssh_dir.assert_called_with(None, None)
4121 self.assertEquals(
4122 utils.authorized_keys('bar'),
4123 '/tmp/foo/authorized_keys')
4124- ssh_dir.assert_called_with('bar')
4125+ ssh_dir.assert_called_with('bar', None)
4126
4127 @patch.object(utils, 'known_hosts')
4128 @patch('subprocess.check_call')
4129@@ -421,11 +439,15 @@
4130 self.os_release.return_value = 'folsom'
4131
4132 def test_determine_endpoints_base(self):
4133+ self.is_relation_made.return_value = False
4134 self.relation_ids.return_value = []
4135 self.assertEquals(
4136- BASE_ENDPOINTS, utils.determine_endpoints('http://foohost.com'))
4137+ BASE_ENDPOINTS, utils.determine_endpoints('http://foohost.com',
4138+ 'http://foohost.com',
4139+ 'http://foohost.com'))
4140
4141 def test_determine_endpoints_nova_volume(self):
4142+ self.is_relation_made.return_value = False
4143 self.relation_ids.return_value = ['nova-volume-service/0']
4144 endpoints = deepcopy(BASE_ENDPOINTS)
4145 endpoints.update({
4146@@ -438,9 +460,12 @@
4147 'nova-volume_region': 'RegionOne',
4148 'nova-volume_service': 'nova-volume'})
4149 self.assertEquals(
4150- endpoints, utils.determine_endpoints('http://foohost.com'))
4151+ endpoints, utils.determine_endpoints('http://foohost.com',
4152+ 'http://foohost.com',
4153+ 'http://foohost.com'))
4154
4155 def test_determine_endpoints_quantum_neutron(self):
4156+ self.is_relation_made.return_value = False
4157 self.relation_ids.return_value = []
4158 self.network_manager.return_value = 'quantum'
4159 endpoints = deepcopy(BASE_ENDPOINTS)
4160@@ -451,7 +476,25 @@
4161 'quantum_region': 'RegionOne',
4162 'quantum_service': 'quantum'})
4163 self.assertEquals(
4164- endpoints, utils.determine_endpoints('http://foohost.com'))
4165+ endpoints, utils.determine_endpoints('http://foohost.com',
4166+ 'http://foohost.com',
4167+ 'http://foohost.com'))
4168+
4169+ def test_determine_endpoints_neutron_api_rel(self):
4170+ self.is_relation_made.return_value = True
4171+ self.relation_ids.return_value = []
4172+ self.network_manager.return_value = 'quantum'
4173+ endpoints = deepcopy(BASE_ENDPOINTS)
4174+ endpoints.update({
4175+ 'quantum_admin_url': None,
4176+ 'quantum_internal_url': None,
4177+ 'quantum_public_url': None,
4178+ 'quantum_region': None,
4179+ 'quantum_service': None})
4180+ self.assertEquals(
4181+ endpoints, utils.determine_endpoints('http://foohost.com',
4182+ 'http://foohost.com',
4183+ 'http://foohost.com'))
4184
4185 @patch.object(utils, 'known_hosts')
4186 @patch('subprocess.check_output')
4187@@ -461,9 +504,9 @@
4188 _check_output.assert_called_with(
4189 ['ssh-keygen', '-f', '/foo/known_hosts',
4190 '-H', '-F', 'test'])
4191- _known_hosts.assert_called_with(None)
4192+ _known_hosts.assert_called_with(None, None)
4193 utils.ssh_known_host_key('test', 'bar')
4194- _known_hosts.assert_called_with('bar')
4195+ _known_hosts.assert_called_with('bar', None)
4196
4197 @patch.object(utils, 'known_hosts')
4198 @patch('subprocess.check_call')
4199@@ -473,9 +516,9 @@
4200 _check_call.assert_called_with(
4201 ['ssh-keygen', '-f', '/foo/known_hosts',
4202 '-R', 'test'])
4203- _known_hosts.assert_called_with(None)
4204+ _known_hosts.assert_called_with(None, None)
4205 utils.remove_known_host('test', 'bar')
4206- _known_hosts.assert_called_with('bar')
4207+ _known_hosts.assert_called_with('bar', None)
4208
4209 @patch('subprocess.check_output')
4210 def test_migrate_database(self, check_output):
4211@@ -555,3 +598,113 @@
4212 utils.do_openstack_upgrade()
4213 expected = [call('cloud:precise-icehouse')]
4214 self.assertEquals(_do_openstack_upgrade.call_args_list, expected)
4215+
4216+ def test_guard_map_nova(self):
4217+ self.relation_ids.return_value = []
4218+ self.os_release.return_value = 'havana'
4219+ self.assertEqual(
4220+ {'nova-api-ec2': ['identity-service', 'amqp', 'shared-db'],
4221+ 'nova-api-os-compute': ['identity-service', 'amqp', 'shared-db'],
4222+ 'nova-cert': ['identity-service', 'amqp', 'shared-db'],
4223+ 'nova-conductor': ['identity-service', 'amqp', 'shared-db'],
4224+ 'nova-objectstore': ['identity-service', 'amqp', 'shared-db'],
4225+ 'nova-scheduler': ['identity-service', 'amqp', 'shared-db']},
4226+ utils.guard_map()
4227+ )
4228+ self.os_release.return_value = 'essex'
4229+ self.assertEqual(
4230+ {'nova-api-ec2': ['identity-service', 'amqp', 'shared-db'],
4231+ 'nova-api-os-compute': ['identity-service', 'amqp', 'shared-db'],
4232+ 'nova-cert': ['identity-service', 'amqp', 'shared-db'],
4233+ 'nova-objectstore': ['identity-service', 'amqp', 'shared-db'],
4234+ 'nova-scheduler': ['identity-service', 'amqp', 'shared-db']},
4235+ utils.guard_map()
4236+ )
4237+
4238+ def test_guard_map_neutron(self):
4239+ self.relation_ids.return_value = []
4240+ self.network_manager.return_value = 'neutron'
4241+ self.os_release.return_value = 'icehouse'
4242+ self.assertEqual(
4243+ {'neutron-server': ['identity-service', 'amqp', 'shared-db'],
4244+ 'nova-api-ec2': ['identity-service', 'amqp', 'shared-db'],
4245+ 'nova-api-os-compute': ['identity-service', 'amqp', 'shared-db'],
4246+ 'nova-cert': ['identity-service', 'amqp', 'shared-db'],
4247+ 'nova-conductor': ['identity-service', 'amqp', 'shared-db'],
4248+ 'nova-objectstore': ['identity-service', 'amqp', 'shared-db'],
4249+ 'nova-scheduler': ['identity-service', 'amqp', 'shared-db'], },
4250+ utils.guard_map()
4251+ )
4252+ self.network_manager.return_value = 'quantum'
4253+ self.os_release.return_value = 'grizzly'
4254+ self.assertEqual(
4255+ {'quantum-server': ['identity-service', 'amqp', 'shared-db'],
4256+ 'nova-api-ec2': ['identity-service', 'amqp', 'shared-db'],
4257+ 'nova-api-os-compute': ['identity-service', 'amqp', 'shared-db'],
4258+ 'nova-cert': ['identity-service', 'amqp', 'shared-db'],
4259+ 'nova-conductor': ['identity-service', 'amqp', 'shared-db'],
4260+ 'nova-objectstore': ['identity-service', 'amqp', 'shared-db'],
4261+ 'nova-scheduler': ['identity-service', 'amqp', 'shared-db'], },
4262+ utils.guard_map()
4263+ )
4264+
4265+ def test_guard_map_pgsql(self):
4266+ self.relation_ids.return_value = ['pgsql:1']
4267+ self.network_manager.return_value = 'neutron'
4268+ self.os_release.return_value = 'icehouse'
4269+ self.assertEqual(
4270+ {'neutron-server': ['identity-service', 'amqp',
4271+ 'pgsql-neutron-db'],
4272+ 'nova-api-ec2': ['identity-service', 'amqp', 'pgsql-nova-db'],
4273+ 'nova-api-os-compute': ['identity-service', 'amqp',
4274+ 'pgsql-nova-db'],
4275+ 'nova-cert': ['identity-service', 'amqp', 'pgsql-nova-db'],
4276+ 'nova-conductor': ['identity-service', 'amqp', 'pgsql-nova-db'],
4277+ 'nova-objectstore': ['identity-service', 'amqp',
4278+ 'pgsql-nova-db'],
4279+ 'nova-scheduler': ['identity-service', 'amqp',
4280+ 'pgsql-nova-db'], },
4281+ utils.guard_map()
4282+ )
4283+
4284+ def test_service_guard_inactive(self):
4285+ '''Ensure that if disabled, service guards nothing'''
4286+ contexts = MagicMock()
4287+
4288+ @utils.service_guard({'test': ['interfacea', 'interfaceb']},
4289+ contexts, False)
4290+ def dummy_func():
4291+ pass
4292+ dummy_func()
4293+ self.assertFalse(self.service_running.called)
4294+ self.assertFalse(contexts.complete_contexts.called)
4295+
4296+ def test_service_guard_active_guard(self):
4297+ '''Ensure services with incomplete interfaces are stopped'''
4298+ contexts = MagicMock()
4299+ contexts.complete_contexts.return_value = ['interfacea']
4300+ self.service_running.return_value = True
4301+
4302+ @utils.service_guard({'test': ['interfacea', 'interfaceb']},
4303+ contexts, True)
4304+ def dummy_func():
4305+ pass
4306+ dummy_func()
4307+ self.service_running.assert_called_with('test')
4308+ self.service_stop.assert_called_with('test')
4309+ self.assertTrue(contexts.complete_contexts.called)
4310+
4311+ def test_service_guard_active_release(self):
4312+ '''Ensure services with complete interfaces are not stopped'''
4313+ contexts = MagicMock()
4314+ contexts.complete_contexts.return_value = ['interfacea',
4315+ 'interfaceb']
4316+
4317+ @utils.service_guard({'test': ['interfacea', 'interfaceb']},
4318+ contexts, True)
4319+ def dummy_func():
4320+ pass
4321+ dummy_func()
4322+ self.assertFalse(self.service_running.called)
4323+ self.assertFalse(self.service_stop.called)
4324+ self.assertTrue(contexts.complete_contexts.called)
4325
4326=== modified file 'unit_tests/test_utils.py'
4327--- unit_tests/test_utils.py 2013-11-08 05:41:39 +0000
4328+++ unit_tests/test_utils.py 2014-07-29 13:07:23 +0000
4329@@ -82,9 +82,9 @@
4330 return self.config
4331
4332 def set(self, attr, value):
4333- if attr not in self.config:
4334- raise KeyError
4335- self.config[attr] = value
4336+ if attr not in self.config:
4337+ raise KeyError
4338+ self.config[attr] = value
4339
4340
4341 class TestRelation(object):
