Merge lp:~tribaal/charms/trusty/swift-storage/fix-1350049-guess-root into lp:~openstack-charmers-archive/charms/trusty/swift-storage/trunk

Proposed by Chris Glass
Status: Superseded
Proposed branch: lp:~tribaal/charms/trusty/swift-storage/fix-1350049-guess-root
Merge into: lp:~openstack-charmers-archive/charms/trusty/swift-storage/trunk
Diff against target: 3154 lines (+2214/-118)
45 files modified
Makefile (+13/-4)
charm-helpers-hooks.yaml (+12/-0)
charm-helpers-tests.yaml (+5/-0)
charm-helpers.yaml (+0/-11)
config.yaml (+14/-1)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+12/-2)
hooks/charmhelpers/contrib/network/ip.py (+156/-0)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+61/-0)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+275/-0)
hooks/charmhelpers/contrib/openstack/context.py (+95/-22)
hooks/charmhelpers/contrib/openstack/ip.py (+75/-0)
hooks/charmhelpers/contrib/openstack/neutron.py (+14/-0)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+6/-1)
hooks/charmhelpers/contrib/openstack/templating.py (+22/-23)
hooks/charmhelpers/contrib/openstack/utils.py (+11/-3)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+1/-1)
hooks/charmhelpers/contrib/storage/linux/utils.py (+4/-0)
hooks/charmhelpers/core/fstab.py (+116/-0)
hooks/charmhelpers/core/hookenv.py (+5/-4)
hooks/charmhelpers/core/host.py (+32/-12)
hooks/charmhelpers/fetch/__init__.py (+33/-16)
hooks/charmhelpers/fetch/bzrurl.py (+2/-1)
hooks/misc_utils.py (+2/-1)
hooks/swift_storage_context.py (+5/-0)
hooks/swift_storage_utils.py (+20/-6)
templates/account-server.conf (+1/-1)
templates/container-server.conf (+1/-1)
templates/object-server.conf (+2/-1)
tests/00-setup (+11/-0)
tests/10-basic-precise-essex (+9/-0)
tests/11-basic-precise-folsom (+11/-0)
tests/12-basic-precise-grizzly (+11/-0)
tests/13-basic-precise-havana (+11/-0)
tests/14-basic-precise-icehouse (+11/-0)
tests/15-basic-trusty-icehouse (+9/-0)
tests/README (+52/-0)
tests/basic_deployment.py (+450/-0)
tests/charmhelpers/contrib/amulet/deployment.py (+71/-0)
tests/charmhelpers/contrib/amulet/utils.py (+176/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+61/-0)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+275/-0)
unit_tests/test_swift_storage_context.py (+8/-1)
unit_tests/test_swift_storage_relations.py (+1/-0)
unit_tests/test_swift_storage_utils.py (+46/-3)
unit_tests/test_utils.py (+6/-3)
To merge this branch: bzr merge lp:~tribaal/charms/trusty/swift-storage/fix-1350049-guess-root
Reviewer: Liam Young (status: Pending)
Review via email: mp+228997@code.launchpad.net

This proposal has been superseded by a proposal from 2014-07-31.

Description of the change

This branch fixes the swift-storage charm's "guess" config option so that it ignores disks with mounted partitions instead of relying on a hard-coded blacklist.

The linked bug has more details about the error condition; in short, the root device is not always /dev/sda1, so the blacklist-based "detection" fails. Testing for mounted partitions is more robust.
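
To make the change concrete, here is the core of the new check, condensed from the preview diff below into a standalone sketch (the is_block_device body reproduces the existing stat-based charm-helpers test; this mirrors the Python 2 helpers rather than replacing them):

    import os
    import re
    import stat
    from subprocess import check_output

    def is_block_device(path):
        # Existing charm-helpers test: True if path is a block special file.
        return stat.S_ISBLK(os.stat(path).st_mode)

    def is_device_mounted(device):
        # A trailing digit means the path already names a partition
        # (e.g. /dev/sda1), so match it directly; for a whole disk
        # (e.g. /dev/sda), match any of its numbered partitions in
        # the output of mount(8).
        is_partition = bool(re.search(r".*[0-9]+\b", device))
        out = check_output(['mount'])
        if is_partition:
            return bool(re.search(device + r"\b", out))
        return bool(re.search(device + r"[0-9]+\b", out))

    def _is_storage_ready(device):
        # A device is usable as storage only if it is a real block
        # device with nothing mounted on it; no name blacklist needed.
        return is_block_device(device) and not is_device_mounted(device)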

This depends on a charm-helpers branch landing first: https://code.launchpad.net/~tribaal/charm-helpers/is-device-mounted-work-with-partitions/+merge/228987

Once that branch lands, I shall update charm-helpers in this charm and include the resulting diff in this MP.

Unmerged revisions

38. By Chris Glass

Updated charmhelpers

37. By Chris Glass

The "guess" option for the charm's block device config option now is a little smarter
about which disks to use: it uses any disks with no mounted partitions as storage.
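
A condensed sketch of the resulting device selection, reusing the helpers and imports from the sketch in the description above (the /proc/partitions parsing follows the diff; column 3 of each row is the device name):

    def find_block_devices():
        # Match candidate disks by name pattern only, then filter on
        # actual state instead of the old hard-coded blacklist
        # ('sda', 'vda', 'cciss/c0d0').
        found = []
        incl = ['sd[a-z]', 'vd[a-z]', r'cciss\/c[0-9]d[0-9]']
        with open('/proc/partitions') as proc:
            # Skip the "major minor #blocks name" header and blank line.
            partitions = [p.split() for p in proc.readlines()[2:]]
        for partition in [p[3] for p in partitions if p]:
            for inc in incl:
                if re.match(r'^(%s)$' % inc, partition):
                    found.append(os.path.join('/dev', partition))
        return [f for f in found if _is_storage_ready(f)]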

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2014-05-21 10:08:22 +0000
3+++ Makefile 2014-07-31 10:21:42 +0000
4@@ -3,15 +3,24 @@
5
6 lint:
7 @flake8 --exclude hooks/charmhelpers hooks
8- @flake8 --exclude hooks/charmhelpers unit_tests
9+ @flake8 --exclude hooks/charmhelpers unit_tests tests
10 @charm proof
11
12+unit_test:
13+ @echo Starting unit tests...
14+ @echo Please make sure the following deb files are installed: python-netaddr python-netifaces
15+ @$(PYTHON) /usr/bin/nosetests -v --nologcapture --with-coverage unit_tests
16+
17 test:
18- @echo Starting tests...
19- @$(PYTHON) /usr/bin/nosetests -v --nologcapture --with-coverage unit_tests
20+ @echo Starting Amulet tests...
21+ # coreycb note: The -v should only be temporary until Amulet sends
22+ # raise_status() messages to stderr:
23+ # https://bugs.launchpad.net/amulet/+bug/1320357
24+ @juju test -v -p AMULET_HTTP_PROXY
25
26 sync:
27- @charm-helper-sync -c charm-helpers.yaml
28+ @charm-helper-sync -c charm-helpers-hooks.yaml
29+ @charm-helper-sync -c charm-helpers-tests.yaml
30
31 publish: lint test
32 bzr push lp:charms/swift-storage
33
34=== added file 'charm-helpers-hooks.yaml'
35--- charm-helpers-hooks.yaml 1970-01-01 00:00:00 +0000
36+++ charm-helpers-hooks.yaml 2014-07-31 10:21:42 +0000
37@@ -0,0 +1,12 @@
38+branch: lp:charm-helpers
39+destination: hooks/charmhelpers
40+include:
41+ - core
42+ - contrib.openstack|inc=*
43+ - contrib.storage
44+ - fetch
45+ - contrib.hahelpers:
46+ - apache
47+ - cluster
48+ - payload.execd
49+ - contrib.network.ip
50
51=== added file 'charm-helpers-tests.yaml'
52--- charm-helpers-tests.yaml 1970-01-01 00:00:00 +0000
53+++ charm-helpers-tests.yaml 2014-07-31 10:21:42 +0000
54@@ -0,0 +1,5 @@
55+branch: lp:charm-helpers
56+destination: tests/charmhelpers
57+include:
58+ - contrib.amulet
59+ - contrib.openstack.amulet
60
61=== removed file 'charm-helpers.yaml'
62--- charm-helpers.yaml 2014-03-25 17:05:07 +0000
63+++ charm-helpers.yaml 1970-01-01 00:00:00 +0000
64@@ -1,11 +0,0 @@
65-branch: lp:charm-helpers
66-destination: hooks/charmhelpers
67-include:
68- - core
69- - contrib.openstack|inc=*
70- - contrib.storage
71- - fetch
72- - contrib.hahelpers:
73- - apache
74- - cluster
75- - payload.execd
76
77=== modified file 'config.yaml'
78--- config.yaml 2012-12-19 23:09:13 +0000
79+++ config.yaml 2014-07-31 10:21:42 +0000
80@@ -49,4 +49,17 @@
81 default: 6002
82 type: int
83 description: Listening port of the swift-account-server.
84-
85+ worker-multiplier:
86+ default: 1
87+ type: int
88+ description: |
89+ The CPU multiplier to use when configuring worker processes for the
90+ account, container and object server processes.
91+ object-server-threads-per-disk:
92+ default: 4
93+ type: int
94+ description: |
95+ Size of the per-disk thread pool used for performing disk I/O. 0 means
96+ to not use a per-disk thread pool. It is recommended to keep this value
97+ small, as large values can result in high read latencies due to large
98+ queue depths. A good starting point is 4 threads per disk.
99
100=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
101--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-02-24 17:52:34 +0000
102+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-07-31 10:21:42 +0000
103@@ -62,6 +62,15 @@
104 return peers
105
106
107+def peer_ips(peer_relation='cluster', addr_key='private-address'):
108+ '''Return a dict of peers and their private-address'''
109+ peers = {}
110+ for r_id in relation_ids(peer_relation):
111+ for unit in relation_list(r_id):
112+ peers[unit] = relation_get(addr_key, rid=r_id, unit=unit)
113+ return peers
114+
115+
116 def oldest_peer(peers):
117 local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
118 for peer in peers:
119@@ -146,12 +155,12 @@
120 Obtains all relevant configuration from charm configuration required
121 for initiating a relation to hacluster:
122
123- ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
124+ ha-bindiface, ha-mcastport, vip
125
126 returns: dict: A dict containing settings keyed by setting name.
127 raises: HAIncompleteConfig if settings are missing.
128 '''
129- settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
130+ settings = ['ha-bindiface', 'ha-mcastport', 'vip']
131 conf = {}
132 for setting in settings:
133 conf[setting] = config_get(setting)
134@@ -170,6 +179,7 @@
135
136 :configs : OSTemplateRenderer: A config tempating object to inspect for
137 a complete https context.
138+
139 :vip_setting: str: Setting in charm config that specifies
140 VIP address.
141 '''
142
143=== added directory 'hooks/charmhelpers/contrib/network'
144=== added file 'hooks/charmhelpers/contrib/network/__init__.py'
145=== added file 'hooks/charmhelpers/contrib/network/ip.py'
146--- hooks/charmhelpers/contrib/network/ip.py 1970-01-01 00:00:00 +0000
147+++ hooks/charmhelpers/contrib/network/ip.py 2014-07-31 10:21:42 +0000
148@@ -0,0 +1,156 @@
149+import sys
150+
151+from functools import partial
152+
153+from charmhelpers.fetch import apt_install
154+from charmhelpers.core.hookenv import (
155+ ERROR, log,
156+)
157+
158+try:
159+ import netifaces
160+except ImportError:
161+ apt_install('python-netifaces')
162+ import netifaces
163+
164+try:
165+ import netaddr
166+except ImportError:
167+ apt_install('python-netaddr')
168+ import netaddr
169+
170+
171+def _validate_cidr(network):
172+ try:
173+ netaddr.IPNetwork(network)
174+ except (netaddr.core.AddrFormatError, ValueError):
175+ raise ValueError("Network (%s) is not in CIDR presentation format" %
176+ network)
177+
178+
179+def get_address_in_network(network, fallback=None, fatal=False):
180+ """
181+ Get an IPv4 or IPv6 address within the network from the host.
182+
183+ :param network (str): CIDR presentation format. For example,
184+ '192.168.1.0/24'.
185+ :param fallback (str): If no address is found, return fallback.
186+ :param fatal (boolean): If no address is found, fallback is not
187+ set and fatal is True then exit(1).
188+
189+ """
190+
191+ def not_found_error_out():
192+ log("No IP address found in network: %s" % network,
193+ level=ERROR)
194+ sys.exit(1)
195+
196+ if network is None:
197+ if fallback is not None:
198+ return fallback
199+ else:
200+ if fatal:
201+ not_found_error_out()
202+
203+ _validate_cidr(network)
204+ network = netaddr.IPNetwork(network)
205+ for iface in netifaces.interfaces():
206+ addresses = netifaces.ifaddresses(iface)
207+ if network.version == 4 and netifaces.AF_INET in addresses:
208+ addr = addresses[netifaces.AF_INET][0]['addr']
209+ netmask = addresses[netifaces.AF_INET][0]['netmask']
210+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
211+ if cidr in network:
212+ return str(cidr.ip)
213+ if network.version == 6 and netifaces.AF_INET6 in addresses:
214+ for addr in addresses[netifaces.AF_INET6]:
215+ if not addr['addr'].startswith('fe80'):
216+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
217+ addr['netmask']))
218+ if cidr in network:
219+ return str(cidr.ip)
220+
221+ if fallback is not None:
222+ return fallback
223+
224+ if fatal:
225+ not_found_error_out()
226+
227+ return None
228+
229+
230+def is_ipv6(address):
231+ '''Determine whether provided address is IPv6 or not'''
232+ try:
233+ address = netaddr.IPAddress(address)
234+ except netaddr.AddrFormatError:
235+ # probably a hostname - so not an address at all!
236+ return False
237+ else:
238+ return address.version == 6
239+
240+
241+def is_address_in_network(network, address):
242+ """
243+ Determine whether the provided address is within a network range.
244+
245+ :param network (str): CIDR presentation format. For example,
246+ '192.168.1.0/24'.
247+ :param address: An individual IPv4 or IPv6 address without a net
248+ mask or subnet prefix. For example, '192.168.1.1'.
249+ :returns boolean: Flag indicating whether address is in network.
250+ """
251+ try:
252+ network = netaddr.IPNetwork(network)
253+ except (netaddr.core.AddrFormatError, ValueError):
254+ raise ValueError("Network (%s) is not in CIDR presentation format" %
255+ network)
256+ try:
257+ address = netaddr.IPAddress(address)
258+ except (netaddr.core.AddrFormatError, ValueError):
259+ raise ValueError("Address (%s) is not in correct presentation format" %
260+ address)
261+ if address in network:
262+ return True
263+ else:
264+ return False
265+
266+
267+def _get_for_address(address, key):
268+ """Retrieve an attribute of or the physical interface that
269+ the IP address provided could be bound to.
270+
271+ :param address (str): An individual IPv4 or IPv6 address without a net
272+ mask or subnet prefix. For example, '192.168.1.1'.
273+ :param key: 'iface' for the physical interface name or an attribute
274+ of the configured interface, for example 'netmask'.
275+ :returns str: Requested attribute or None if address is not bindable.
276+ """
277+ address = netaddr.IPAddress(address)
278+ for iface in netifaces.interfaces():
279+ addresses = netifaces.ifaddresses(iface)
280+ if address.version == 4 and netifaces.AF_INET in addresses:
281+ addr = addresses[netifaces.AF_INET][0]['addr']
282+ netmask = addresses[netifaces.AF_INET][0]['netmask']
283+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
284+ if address in cidr:
285+ if key == 'iface':
286+ return iface
287+ else:
288+ return addresses[netifaces.AF_INET][0][key]
289+ if address.version == 6 and netifaces.AF_INET6 in addresses:
290+ for addr in addresses[netifaces.AF_INET6]:
291+ if not addr['addr'].startswith('fe80'):
292+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
293+ addr['netmask']))
294+ if address in cidr:
295+ if key == 'iface':
296+ return iface
297+ else:
298+ return addr[key]
299+ return None
300+
301+
302+get_iface_for_address = partial(_get_for_address, key='iface')
303+
304+get_netmask_for_address = partial(_get_for_address, key='netmask')
305
306=== added directory 'hooks/charmhelpers/contrib/openstack/amulet'
307=== added file 'hooks/charmhelpers/contrib/openstack/amulet/__init__.py'
308=== added file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
309--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000
310+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-07-31 10:21:42 +0000
311@@ -0,0 +1,61 @@
312+from charmhelpers.contrib.amulet.deployment import (
313+ AmuletDeployment
314+)
315+
316+
317+class OpenStackAmuletDeployment(AmuletDeployment):
318+ """OpenStack amulet deployment.
319+
320+ This class inherits from AmuletDeployment and has additional support
321+ that is specifically for use by OpenStack charms.
322+ """
323+
324+ def __init__(self, series=None, openstack=None, source=None):
325+ """Initialize the deployment environment."""
326+ super(OpenStackAmuletDeployment, self).__init__(series)
327+ self.openstack = openstack
328+ self.source = source
329+
330+ def _add_services(self, this_service, other_services):
331+ """Add services to the deployment and set openstack-origin."""
332+ super(OpenStackAmuletDeployment, self)._add_services(this_service,
333+ other_services)
334+ name = 0
335+ services = other_services
336+ services.append(this_service)
337+ use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph']
338+
339+ if self.openstack:
340+ for svc in services:
341+ if svc[name] not in use_source:
342+ config = {'openstack-origin': self.openstack}
343+ self.d.configure(svc[name], config)
344+
345+ if self.source:
346+ for svc in services:
347+ if svc[name] in use_source:
348+ config = {'source': self.source}
349+ self.d.configure(svc[name], config)
350+
351+ def _configure_services(self, configs):
352+ """Configure all of the services."""
353+ for service, config in configs.iteritems():
354+ self.d.configure(service, config)
355+
356+ def _get_openstack_release(self):
357+ """Get openstack release.
358+
359+ Return an integer representing the enum value of the openstack
360+ release.
361+ """
362+ (self.precise_essex, self.precise_folsom, self.precise_grizzly,
363+ self.precise_havana, self.precise_icehouse,
364+ self.trusty_icehouse) = range(6)
365+ releases = {
366+ ('precise', None): self.precise_essex,
367+ ('precise', 'cloud:precise-folsom'): self.precise_folsom,
368+ ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
369+ ('precise', 'cloud:precise-havana'): self.precise_havana,
370+ ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
371+ ('trusty', None): self.trusty_icehouse}
372+ return releases[(self.series, self.openstack)]
373
374=== added file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
375--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000
376+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-07-31 10:21:42 +0000
377@@ -0,0 +1,275 @@
378+import logging
379+import os
380+import time
381+import urllib
382+
383+import glanceclient.v1.client as glance_client
384+import keystoneclient.v2_0 as keystone_client
385+import novaclient.v1_1.client as nova_client
386+
387+from charmhelpers.contrib.amulet.utils import (
388+ AmuletUtils
389+)
390+
391+DEBUG = logging.DEBUG
392+ERROR = logging.ERROR
393+
394+
395+class OpenStackAmuletUtils(AmuletUtils):
396+ """OpenStack amulet utilities.
397+
398+ This class inherits from AmuletUtils and has additional support
399+ that is specifically for use by OpenStack charms.
400+ """
401+
402+ def __init__(self, log_level=ERROR):
403+ """Initialize the deployment environment."""
404+ super(OpenStackAmuletUtils, self).__init__(log_level)
405+
406+ def validate_endpoint_data(self, endpoints, admin_port, internal_port,
407+ public_port, expected):
408+ """Validate endpoint data.
409+
410+ Validate actual endpoint data vs expected endpoint data. The ports
411+ are used to find the matching endpoint.
412+ """
413+ found = False
414+ for ep in endpoints:
415+ self.log.debug('endpoint: {}'.format(repr(ep)))
416+ if (admin_port in ep.adminurl and
417+ internal_port in ep.internalurl and
418+ public_port in ep.publicurl):
419+ found = True
420+ actual = {'id': ep.id,
421+ 'region': ep.region,
422+ 'adminurl': ep.adminurl,
423+ 'internalurl': ep.internalurl,
424+ 'publicurl': ep.publicurl,
425+ 'service_id': ep.service_id}
426+ ret = self._validate_dict_data(expected, actual)
427+ if ret:
428+ return 'unexpected endpoint data - {}'.format(ret)
429+
430+ if not found:
431+ return 'endpoint not found'
432+
433+ def validate_svc_catalog_endpoint_data(self, expected, actual):
434+ """Validate service catalog endpoint data.
435+
436+ Validate a list of actual service catalog endpoints vs a list of
437+ expected service catalog endpoints.
438+ """
439+ self.log.debug('actual: {}'.format(repr(actual)))
440+ for k, v in expected.iteritems():
441+ if k in actual:
442+ ret = self._validate_dict_data(expected[k][0], actual[k][0])
443+ if ret:
444+ return self.endpoint_error(k, ret)
445+ else:
446+ return "endpoint {} does not exist".format(k)
447+ return ret
448+
449+ def validate_tenant_data(self, expected, actual):
450+ """Validate tenant data.
451+
452+ Validate a list of actual tenant data vs list of expected tenant
453+ data.
454+ """
455+ self.log.debug('actual: {}'.format(repr(actual)))
456+ for e in expected:
457+ found = False
458+ for act in actual:
459+ a = {'enabled': act.enabled, 'description': act.description,
460+ 'name': act.name, 'id': act.id}
461+ if e['name'] == a['name']:
462+ found = True
463+ ret = self._validate_dict_data(e, a)
464+ if ret:
465+ return "unexpected tenant data - {}".format(ret)
466+ if not found:
467+ return "tenant {} does not exist".format(e['name'])
468+ return ret
469+
470+ def validate_role_data(self, expected, actual):
471+ """Validate role data.
472+
473+ Validate a list of actual role data vs a list of expected role
474+ data.
475+ """
476+ self.log.debug('actual: {}'.format(repr(actual)))
477+ for e in expected:
478+ found = False
479+ for act in actual:
480+ a = {'name': act.name, 'id': act.id}
481+ if e['name'] == a['name']:
482+ found = True
483+ ret = self._validate_dict_data(e, a)
484+ if ret:
485+ return "unexpected role data - {}".format(ret)
486+ if not found:
487+ return "role {} does not exist".format(e['name'])
488+ return ret
489+
490+ def validate_user_data(self, expected, actual):
491+ """Validate user data.
492+
493+ Validate a list of actual user data vs a list of expected user
494+ data.
495+ """
496+ self.log.debug('actual: {}'.format(repr(actual)))
497+ for e in expected:
498+ found = False
499+ for act in actual:
500+ a = {'enabled': act.enabled, 'name': act.name,
501+ 'email': act.email, 'tenantId': act.tenantId,
502+ 'id': act.id}
503+ if e['name'] == a['name']:
504+ found = True
505+ ret = self._validate_dict_data(e, a)
506+ if ret:
507+ return "unexpected user data - {}".format(ret)
508+ if not found:
509+ return "user {} does not exist".format(e['name'])
510+ return ret
511+
512+ def validate_flavor_data(self, expected, actual):
513+ """Validate flavor data.
514+
515+ Validate a list of actual flavors vs a list of expected flavors.
516+ """
517+ self.log.debug('actual: {}'.format(repr(actual)))
518+ act = [a.name for a in actual]
519+ return self._validate_list_data(expected, act)
520+
521+ def tenant_exists(self, keystone, tenant):
522+ """Return True if tenant exists."""
523+ return tenant in [t.name for t in keystone.tenants.list()]
524+
525+ def authenticate_keystone_admin(self, keystone_sentry, user, password,
526+ tenant):
527+ """Authenticates admin user with the keystone admin endpoint."""
528+ unit = keystone_sentry
529+ service_ip = unit.relation('shared-db',
530+ 'mysql:shared-db')['private-address']
531+ ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
532+ return keystone_client.Client(username=user, password=password,
533+ tenant_name=tenant, auth_url=ep)
534+
535+ def authenticate_keystone_user(self, keystone, user, password, tenant):
536+ """Authenticates a regular user with the keystone public endpoint."""
537+ ep = keystone.service_catalog.url_for(service_type='identity',
538+ endpoint_type='publicURL')
539+ return keystone_client.Client(username=user, password=password,
540+ tenant_name=tenant, auth_url=ep)
541+
542+ def authenticate_glance_admin(self, keystone):
543+ """Authenticates admin user with glance."""
544+ ep = keystone.service_catalog.url_for(service_type='image',
545+ endpoint_type='adminURL')
546+ return glance_client.Client(ep, token=keystone.auth_token)
547+
548+ def authenticate_nova_user(self, keystone, user, password, tenant):
549+ """Authenticates a regular user with nova-api."""
550+ ep = keystone.service_catalog.url_for(service_type='identity',
551+ endpoint_type='publicURL')
552+ return nova_client.Client(username=user, api_key=password,
553+ project_id=tenant, auth_url=ep)
554+
555+ def create_cirros_image(self, glance, image_name):
556+ """Download the latest cirros image and upload it to glance."""
557+ http_proxy = os.getenv('AMULET_HTTP_PROXY')
558+ self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
559+ if http_proxy:
560+ proxies = {'http': http_proxy}
561+ opener = urllib.FancyURLopener(proxies)
562+ else:
563+ opener = urllib.FancyURLopener()
564+
565+ f = opener.open("http://download.cirros-cloud.net/version/released")
566+ version = f.read().strip()
567+ cirros_img = "tests/cirros-{}-x86_64-disk.img".format(version)
568+
569+ if not os.path.exists(cirros_img):
570+ cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
571+ version, cirros_img)
572+ opener.retrieve(cirros_url, cirros_img)
573+ f.close()
574+
575+ with open(cirros_img) as f:
576+ image = glance.images.create(name=image_name, is_public=True,
577+ disk_format='qcow2',
578+ container_format='bare', data=f)
579+ count = 1
580+ status = image.status
581+ while status != 'active' and count < 10:
582+ time.sleep(3)
583+ image = glance.images.get(image.id)
584+ status = image.status
585+ self.log.debug('image status: {}'.format(status))
586+ count += 1
587+
588+ if status != 'active':
589+ self.log.error('image creation timed out')
590+ return None
591+
592+ return image
593+
594+ def delete_image(self, glance, image):
595+ """Delete the specified image."""
596+ num_before = len(list(glance.images.list()))
597+ glance.images.delete(image)
598+
599+ count = 1
600+ num_after = len(list(glance.images.list()))
601+ while num_after != (num_before - 1) and count < 10:
602+ time.sleep(3)
603+ num_after = len(list(glance.images.list()))
604+ self.log.debug('number of images: {}'.format(num_after))
605+ count += 1
606+
607+ if num_after != (num_before - 1):
608+ self.log.error('image deletion timed out')
609+ return False
610+
611+ return True
612+
613+ def create_instance(self, nova, image_name, instance_name, flavor):
614+ """Create the specified instance."""
615+ image = nova.images.find(name=image_name)
616+ flavor = nova.flavors.find(name=flavor)
617+ instance = nova.servers.create(name=instance_name, image=image,
618+ flavor=flavor)
619+
620+ count = 1
621+ status = instance.status
622+ while status != 'ACTIVE' and count < 60:
623+ time.sleep(3)
624+ instance = nova.servers.get(instance.id)
625+ status = instance.status
626+ self.log.debug('instance status: {}'.format(status))
627+ count += 1
628+
629+ if status != 'ACTIVE':
630+ self.log.error('instance creation timed out')
631+ return None
632+
633+ return instance
634+
635+ def delete_instance(self, nova, instance):
636+ """Delete the specified instance."""
637+ num_before = len(list(nova.servers.list()))
638+ nova.servers.delete(instance)
639+
640+ count = 1
641+ num_after = len(list(nova.servers.list()))
642+ while num_after != (num_before - 1) and count < 10:
643+ time.sleep(3)
644+ num_after = len(list(nova.servers.list()))
645+ self.log.debug('number of instances: {}'.format(num_after))
646+ count += 1
647+
648+ if num_after != (num_before - 1):
649+ self.log.error('instance deletion timed out')
650+ return False
651+
652+ return True
653
654=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
655--- hooks/charmhelpers/contrib/openstack/context.py 2014-05-19 11:41:35 +0000
656+++ hooks/charmhelpers/contrib/openstack/context.py 2014-07-31 10:21:42 +0000
657@@ -21,9 +21,11 @@
658 relation_get,
659 relation_ids,
660 related_units,
661+ relation_set,
662 unit_get,
663 unit_private_ip,
664 ERROR,
665+ INFO
666 )
667
668 from charmhelpers.contrib.hahelpers.cluster import (
669@@ -42,6 +44,8 @@
670 neutron_plugin_attribute,
671 )
672
673+from charmhelpers.contrib.network.ip import get_address_in_network
674+
675 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
676
677
678@@ -134,8 +138,26 @@
679 'Missing required charm config options. '
680 '(database name and user)')
681 raise OSContextError
682+
683 ctxt = {}
684
685+ # NOTE(jamespage) if mysql charm provides a network upon which
686+ # access to the database should be made, reconfigure relation
687+ # with the service units local address and defer execution
688+ access_network = relation_get('access-network')
689+ if access_network is not None:
690+ if self.relation_prefix is not None:
691+ hostname_key = "{}_hostname".format(self.relation_prefix)
692+ else:
693+ hostname_key = "hostname"
694+ access_hostname = get_address_in_network(access_network,
695+ unit_get('private-address'))
696+ set_hostname = relation_get(attribute=hostname_key,
697+ unit=local_unit())
698+ if set_hostname != access_hostname:
699+ relation_set(relation_settings={hostname_key: access_hostname})
700+ return ctxt # Defer any further hook execution for now....
701+
702 password_setting = 'password'
703 if self.relation_prefix:
704 password_setting = self.relation_prefix + '_password'
705@@ -243,23 +265,31 @@
706
707
708 class AMQPContext(OSContextGenerator):
709- interfaces = ['amqp']
710
711- def __init__(self, ssl_dir=None):
712+ def __init__(self, ssl_dir=None, rel_name='amqp', relation_prefix=None):
713 self.ssl_dir = ssl_dir
714+ self.rel_name = rel_name
715+ self.relation_prefix = relation_prefix
716+ self.interfaces = [rel_name]
717
718 def __call__(self):
719 log('Generating template context for amqp')
720 conf = config()
721+ user_setting = 'rabbit-user'
722+ vhost_setting = 'rabbit-vhost'
723+ if self.relation_prefix:
724+ user_setting = self.relation_prefix + '-rabbit-user'
725+ vhost_setting = self.relation_prefix + '-rabbit-vhost'
726+
727 try:
728- username = conf['rabbit-user']
729- vhost = conf['rabbit-vhost']
730+ username = conf[user_setting]
731+ vhost = conf[vhost_setting]
732 except KeyError as e:
733 log('Could not generate shared_db context. '
734 'Missing required charm config options: %s.' % e)
735 raise OSContextError
736 ctxt = {}
737- for rid in relation_ids('amqp'):
738+ for rid in relation_ids(self.rel_name):
739 ha_vip_only = False
740 for unit in related_units(rid):
741 if relation_get('clustered', rid=rid, unit=unit):
742@@ -332,10 +362,12 @@
743 use_syslog = str(config('use-syslog')).lower()
744 for rid in relation_ids('ceph'):
745 for unit in related_units(rid):
746- mon_hosts.append(relation_get('private-address', rid=rid,
747- unit=unit))
748 auth = relation_get('auth', rid=rid, unit=unit)
749 key = relation_get('key', rid=rid, unit=unit)
750+ ceph_addr = \
751+ relation_get('ceph-public-address', rid=rid, unit=unit) or \
752+ relation_get('private-address', rid=rid, unit=unit)
753+ mon_hosts.append(ceph_addr)
754
755 ctxt = {
756 'mon_hosts': ' '.join(mon_hosts),
757@@ -369,7 +401,9 @@
758
759 cluster_hosts = {}
760 l_unit = local_unit().replace('/', '-')
761- cluster_hosts[l_unit] = unit_get('private-address')
762+ cluster_hosts[l_unit] = \
763+ get_address_in_network(config('os-internal-network'),
764+ unit_get('private-address'))
765
766 for rid in relation_ids('cluster'):
767 for unit in related_units(rid):
768@@ -418,12 +452,13 @@
769 """
770 Generates a context for an apache vhost configuration that configures
771 HTTPS reverse proxying for one or many endpoints. Generated context
772- looks something like:
773- {
774- 'namespace': 'cinder',
775- 'private_address': 'iscsi.mycinderhost.com',
776- 'endpoints': [(8776, 8766), (8777, 8767)]
777- }
778+ looks something like::
779+
780+ {
781+ 'namespace': 'cinder',
782+ 'private_address': 'iscsi.mycinderhost.com',
783+ 'endpoints': [(8776, 8766), (8777, 8767)]
784+ }
785
786 The endpoints list consists of a tuples mapping external ports
787 to internal ports.
788@@ -541,6 +576,26 @@
789
790 return nvp_ctxt
791
792+ def n1kv_ctxt(self):
793+ driver = neutron_plugin_attribute(self.plugin, 'driver',
794+ self.network_manager)
795+ n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
796+ self.network_manager)
797+ n1kv_ctxt = {
798+ 'core_plugin': driver,
799+ 'neutron_plugin': 'n1kv',
800+ 'neutron_security_groups': self.neutron_security_groups,
801+ 'local_ip': unit_private_ip(),
802+ 'config': n1kv_config,
803+ 'vsm_ip': config('n1kv-vsm-ip'),
804+ 'vsm_username': config('n1kv-vsm-username'),
805+ 'vsm_password': config('n1kv-vsm-password'),
806+ 'restrict_policy_profiles': config(
807+ 'n1kv_restrict_policy_profiles'),
808+ }
809+
810+ return n1kv_ctxt
811+
812 def neutron_ctxt(self):
813 if https():
814 proto = 'https'
815@@ -572,6 +627,8 @@
816 ctxt.update(self.ovs_ctxt())
817 elif self.plugin in ['nvp', 'nsx']:
818 ctxt.update(self.nvp_ctxt())
819+ elif self.plugin == 'n1kv':
820+ ctxt.update(self.n1kv_ctxt())
821
822 alchemy_flags = config('neutron-alchemy-flags')
823 if alchemy_flags:
824@@ -611,7 +668,7 @@
825 The subordinate interface allows subordinates to export their
826 configuration requirements to the principle for multiple config
827 files and multiple serivces. Ie, a subordinate that has interfaces
828- to both glance and nova may export to following yaml blob as json:
829+ to both glance and nova may export to following yaml blob as json::
830
831 glance:
832 /etc/glance/glance-api.conf:
833@@ -630,7 +687,8 @@
834
835 It is then up to the principle charms to subscribe this context to
836 the service+config file it is interestd in. Configuration data will
837- be available in the template context, in glance's case, as:
838+ be available in the template context, in glance's case, as::
839+
840 ctxt = {
841 ... other context ...
842 'subordinate_config': {
843@@ -657,7 +715,7 @@
844 self.interface = interface
845
846 def __call__(self):
847- ctxt = {}
848+ ctxt = {'sections': {}}
849 for rid in relation_ids(self.interface):
850 for unit in related_units(rid):
851 sub_config = relation_get('subordinate_configuration',
852@@ -683,11 +741,26 @@
853
854 sub_config = sub_config[self.config_file]
855 for k, v in sub_config.iteritems():
856- ctxt[k] = v
857-
858- if not ctxt:
859- ctxt['sections'] = {}
860-
861+ if k == 'sections':
862+ for section, config_dict in v.iteritems():
863+ log("adding section '%s'" % (section))
864+ ctxt[k][section] = config_dict
865+ else:
866+ ctxt[k] = v
867+
868+ log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
869+
870+ return ctxt
871+
872+
873+class LogLevelContext(OSContextGenerator):
874+
875+ def __call__(self):
876+ ctxt = {}
877+ ctxt['debug'] = \
878+ False if config('debug') is None else config('debug')
879+ ctxt['verbose'] = \
880+ False if config('verbose') is None else config('verbose')
881 return ctxt
882
883
884
885=== added file 'hooks/charmhelpers/contrib/openstack/ip.py'
886--- hooks/charmhelpers/contrib/openstack/ip.py 1970-01-01 00:00:00 +0000
887+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-07-31 10:21:42 +0000
888@@ -0,0 +1,75 @@
889+from charmhelpers.core.hookenv import (
890+ config,
891+ unit_get,
892+)
893+
894+from charmhelpers.contrib.network.ip import (
895+ get_address_in_network,
896+ is_address_in_network,
897+ is_ipv6,
898+)
899+
900+from charmhelpers.contrib.hahelpers.cluster import is_clustered
901+
902+PUBLIC = 'public'
903+INTERNAL = 'int'
904+ADMIN = 'admin'
905+
906+_address_map = {
907+ PUBLIC: {
908+ 'config': 'os-public-network',
909+ 'fallback': 'public-address'
910+ },
911+ INTERNAL: {
912+ 'config': 'os-internal-network',
913+ 'fallback': 'private-address'
914+ },
915+ ADMIN: {
916+ 'config': 'os-admin-network',
917+ 'fallback': 'private-address'
918+ }
919+}
920+
921+
922+def canonical_url(configs, endpoint_type=PUBLIC):
923+ '''
924+ Returns the correct HTTP URL to this host given the state of HTTPS
925+ configuration, hacluster and charm configuration.
926+
927+ :configs OSTemplateRenderer: A config tempating object to inspect for
928+ a complete https context.
929+ :endpoint_type str: The endpoint type to resolve.
930+
931+ :returns str: Base URL for services on the current service unit.
932+ '''
933+ scheme = 'http'
934+ if 'https' in configs.complete_contexts():
935+ scheme = 'https'
936+ address = resolve_address(endpoint_type)
937+ if is_ipv6(address):
938+ address = "[{}]".format(address)
939+ return '%s://%s' % (scheme, address)
940+
941+
942+def resolve_address(endpoint_type=PUBLIC):
943+ resolved_address = None
944+ if is_clustered():
945+ if config(_address_map[endpoint_type]['config']) is None:
946+ # Assume vip is simple and pass back directly
947+ resolved_address = config('vip')
948+ else:
949+ for vip in config('vip').split():
950+ if is_address_in_network(
951+ config(_address_map[endpoint_type]['config']),
952+ vip):
953+ resolved_address = vip
954+ else:
955+ resolved_address = get_address_in_network(
956+ config(_address_map[endpoint_type]['config']),
957+ unit_get(_address_map[endpoint_type]['fallback'])
958+ )
959+ if resolved_address is None:
960+ raise ValueError('Unable to resolve a suitable IP address'
961+ ' based on charm state and configuration')
962+ else:
963+ return resolved_address
964
965=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
966--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-05-19 11:41:35 +0000
967+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-07-31 10:21:42 +0000
968@@ -128,6 +128,20 @@
969 'server_packages': ['neutron-server',
970 'neutron-plugin-vmware'],
971 'server_services': ['neutron-server']
972+ },
973+ 'n1kv': {
974+ 'config': '/etc/neutron/plugins/cisco/cisco_plugins.ini',
975+ 'driver': 'neutron.plugins.cisco.network_plugin.PluginV2',
976+ 'contexts': [
977+ context.SharedDBContext(user=config('neutron-database-user'),
978+ database=config('neutron-database'),
979+ relation_prefix='neutron',
980+ ssl_dir=NEUTRON_CONF_DIR)],
981+ 'services': [],
982+ 'packages': [['neutron-plugin-cisco']],
983+ 'server_packages': ['neutron-server',
984+ 'neutron-plugin-cisco'],
985+ 'server_services': ['neutron-server']
986 }
987 }
988 if release >= 'icehouse':
989
990=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
991--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-02-24 17:52:34 +0000
992+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-07-31 10:21:42 +0000
993@@ -27,7 +27,12 @@
994
995 {% if units -%}
996 {% for service, ports in service_ports.iteritems() -%}
997-listen {{ service }} 0.0.0.0:{{ ports[0] }}
998+listen {{ service }}_ipv4 0.0.0.0:{{ ports[0] }}
999+ balance roundrobin
1000+ {% for unit, address in units.iteritems() -%}
1001+ server {{ unit }} {{ address }}:{{ ports[1] }} check
1002+ {% endfor %}
1003+listen {{ service }}_ipv6 :::{{ ports[0] }}
1004 balance roundrobin
1005 {% for unit, address in units.iteritems() -%}
1006 server {{ unit }} {{ address }}:{{ ports[1] }} check
1007
1008=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
1009--- hooks/charmhelpers/contrib/openstack/templating.py 2013-09-23 19:01:06 +0000
1010+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-07-31 10:21:42 +0000
1011@@ -30,17 +30,17 @@
1012 loading dir.
1013
1014 A charm may also ship a templates dir with this module
1015- and it will be appended to the bottom of the search list, eg:
1016- hooks/charmhelpers/contrib/openstack/templates.
1017-
1018- :param templates_dir: str: Base template directory containing release
1019- sub-directories.
1020- :param os_release : str: OpenStack release codename to construct template
1021- loader.
1022-
1023- :returns : jinja2.ChoiceLoader constructed with a list of
1024- jinja2.FilesystemLoaders, ordered in descending
1025- order by OpenStack release.
1026+ and it will be appended to the bottom of the search list, eg::
1027+
1028+ hooks/charmhelpers/contrib/openstack/templates
1029+
1030+ :param templates_dir (str): Base template directory containing release
1031+ sub-directories.
1032+ :param os_release (str): OpenStack release codename to construct template
1033+ loader.
1034+ :returns: jinja2.ChoiceLoader constructed with a list of
1035+ jinja2.FilesystemLoaders, ordered in descending
1036+ order by OpenStack release.
1037 """
1038 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1039 for rel in OPENSTACK_CODENAMES.itervalues()]
1040@@ -111,7 +111,8 @@
1041 and ease the burden of managing config templates across multiple OpenStack
1042 releases.
1043
1044- Basic usage:
1045+ Basic usage::
1046+
1047 # import some common context generates from charmhelpers
1048 from charmhelpers.contrib.openstack import context
1049
1050@@ -131,21 +132,19 @@
1051 # write out all registered configs
1052 configs.write_all()
1053
1054- Details:
1055+ **OpenStack Releases and template loading**
1056
1057- OpenStack Releases and template loading
1058- ---------------------------------------
1059 When the object is instantiated, it is associated with a specific OS
1060 release. This dictates how the template loader will be constructed.
1061
1062 The constructed loader attempts to load the template from several places
1063 in the following order:
1064- - from the most recent OS release-specific template dir (if one exists)
1065- - the base templates_dir
1066- - a template directory shipped in the charm with this helper file.
1067-
1068-
1069- For the example above, '/tmp/templates' contains the following structure:
1070+ - from the most recent OS release-specific template dir (if one exists)
1071+ - the base templates_dir
1072+ - a template directory shipped in the charm with this helper file.
1073+
1074+ For the example above, '/tmp/templates' contains the following structure::
1075+
1076 /tmp/templates/nova.conf
1077 /tmp/templates/api-paste.ini
1078 /tmp/templates/grizzly/api-paste.ini
1079@@ -169,8 +168,8 @@
1080 $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
1081 us to ship common templates (haproxy, apache) with the helpers.
1082
1083- Context generators
1084- ---------------------------------------
1085+ **Context generators**
1086+
1087 Context generators are used to generate template contexts during hook
1088 execution. Doing so may require inspecting service relations, charm
1089 config, etc. When registered, a config file is associated with a list
1090
1091=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
1092--- hooks/charmhelpers/contrib/openstack/utils.py 2014-05-19 11:41:35 +0000
1093+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-07-31 10:21:42 +0000
1094@@ -3,7 +3,6 @@
1095 # Common python helper functions used for OpenStack charms.
1096 from collections import OrderedDict
1097
1098-import apt_pkg as apt
1099 import subprocess
1100 import os
1101 import socket
1102@@ -41,7 +40,8 @@
1103 ('quantal', 'folsom'),
1104 ('raring', 'grizzly'),
1105 ('saucy', 'havana'),
1106- ('trusty', 'icehouse')
1107+ ('trusty', 'icehouse'),
1108+ ('utopic', 'juno'),
1109 ])
1110
1111
1112@@ -52,6 +52,7 @@
1113 ('2013.1', 'grizzly'),
1114 ('2013.2', 'havana'),
1115 ('2014.1', 'icehouse'),
1116+ ('2014.2', 'juno'),
1117 ])
1118
1119 # The ugly duckling
1120@@ -83,6 +84,8 @@
1121 '''Derive OpenStack release codename from a given installation source.'''
1122 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
1123 rel = ''
1124+ if src is None:
1125+ return rel
1126 if src in ['distro', 'distro-proposed']:
1127 try:
1128 rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
1129@@ -130,6 +133,7 @@
1130
1131 def get_os_codename_package(package, fatal=True):
1132 '''Derive OpenStack release codename from an installed package.'''
1133+ import apt_pkg as apt
1134 apt.init()
1135
1136 # Tell apt to build an in-memory cache to prevent race conditions (if
1137@@ -187,7 +191,7 @@
1138 for version, cname in vers_map.iteritems():
1139 if cname == codename:
1140 return version
1141- #e = "Could not determine OpenStack version for package: %s" % pkg
1142+ # e = "Could not determine OpenStack version for package: %s" % pkg
1143 # error_out(e)
1144
1145
1146@@ -273,6 +277,9 @@
1147 'icehouse': 'precise-updates/icehouse',
1148 'icehouse/updates': 'precise-updates/icehouse',
1149 'icehouse/proposed': 'precise-proposed/icehouse',
1150+ 'juno': 'trusty-updates/juno',
1151+ 'juno/updates': 'trusty-updates/juno',
1152+ 'juno/proposed': 'trusty-proposed/juno',
1153 }
1154
1155 try:
1156@@ -320,6 +327,7 @@
1157
1158 """
1159
1160+ import apt_pkg as apt
1161 src = config('openstack-origin')
1162 cur_vers = get_os_version_package(package)
1163 available_vers = get_os_version_install_source(src)
1164
1165=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
1166--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-02-24 17:52:34 +0000
1167+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-31 10:21:42 +0000
1168@@ -303,7 +303,7 @@
1169 blk_device, fstype, system_services=[]):
1170 """
1171 NOTE: This function must only be called from a single service unit for
1172- the same rbd_img otherwise data loss will occur.
1173+ the same rbd_img otherwise data loss will occur.
1174
1175 Ensures given pool and RBD image exists, is mapped to a block device,
1176 and the device is formatted and mounted at the given mount_point.
1177
1178=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
1179--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-05-19 11:41:35 +0000
1180+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-07-31 10:21:42 +0000
1181@@ -37,6 +37,7 @@
1182 check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
1183 'bs=512', 'count=100', 'seek=%s' % (gpt_end)])
1184
1185+
1186 def is_device_mounted(device):
1187 '''Given a device path, return True if that device is mounted, and False
1188 if it isn't.
1189@@ -45,5 +46,8 @@
1190 :returns: boolean: True if the path represents a mounted device, False if
1191 it doesn't.
1192 '''
1193+ is_partition = bool(re.search(r".*[0-9]+\b", device))
1194 out = check_output(['mount'])
1195+ if is_partition:
1196+ return bool(re.search(device + r"\b", out))
1197 return bool(re.search(device + r"[0-9]+\b", out))
1198
1199=== added file 'hooks/charmhelpers/core/fstab.py'
1200--- hooks/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000
1201+++ hooks/charmhelpers/core/fstab.py 2014-07-31 10:21:42 +0000
1202@@ -0,0 +1,116 @@
1203+#!/usr/bin/env python
1204+# -*- coding: utf-8 -*-
1205+
1206+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
1207+
1208+import os
1209+
1210+
1211+class Fstab(file):
1212+ """This class extends file in order to implement a file reader/writer
1213+ for file `/etc/fstab`
1214+ """
1215+
1216+ class Entry(object):
1217+ """Entry class represents a non-comment line on the `/etc/fstab` file
1218+ """
1219+ def __init__(self, device, mountpoint, filesystem,
1220+ options, d=0, p=0):
1221+ self.device = device
1222+ self.mountpoint = mountpoint
1223+ self.filesystem = filesystem
1224+
1225+ if not options:
1226+ options = "defaults"
1227+
1228+ self.options = options
1229+ self.d = d
1230+ self.p = p
1231+
1232+ def __eq__(self, o):
1233+ return str(self) == str(o)
1234+
1235+ def __str__(self):
1236+ return "{} {} {} {} {} {}".format(self.device,
1237+ self.mountpoint,
1238+ self.filesystem,
1239+ self.options,
1240+ self.d,
1241+ self.p)
1242+
1243+ DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
1244+
1245+ def __init__(self, path=None):
1246+ if path:
1247+ self._path = path
1248+ else:
1249+ self._path = self.DEFAULT_PATH
1250+ file.__init__(self, self._path, 'r+')
1251+
1252+ def _hydrate_entry(self, line):
1253+ # NOTE: use split with no arguments to split on any
1254+ # whitespace including tabs
1255+ return Fstab.Entry(*filter(
1256+ lambda x: x not in ('', None),
1257+ line.strip("\n").split()))
1258+
1259+ @property
1260+ def entries(self):
1261+ self.seek(0)
1262+ for line in self.readlines():
1263+ try:
1264+ if not line.startswith("#"):
1265+ yield self._hydrate_entry(line)
1266+ except ValueError:
1267+ pass
1268+
1269+ def get_entry_by_attr(self, attr, value):
1270+ for entry in self.entries:
1271+ e_attr = getattr(entry, attr)
1272+ if e_attr == value:
1273+ return entry
1274+ return None
1275+
1276+ def add_entry(self, entry):
1277+ if self.get_entry_by_attr('device', entry.device):
1278+ return False
1279+
1280+ self.write(str(entry) + '\n')
1281+ self.truncate()
1282+ return entry
1283+
1284+ def remove_entry(self, entry):
1285+ self.seek(0)
1286+
1287+ lines = self.readlines()
1288+
1289+ found = False
1290+ for index, line in enumerate(lines):
1291+ if not line.startswith("#"):
1292+ if self._hydrate_entry(line) == entry:
1293+ found = True
1294+ break
1295+
1296+ if not found:
1297+ return False
1298+
1299+ lines.remove(line)
1300+
1301+ self.seek(0)
1302+ self.write(''.join(lines))
1303+ self.truncate()
1304+ return True
1305+
1306+ @classmethod
1307+ def remove_by_mountpoint(cls, mountpoint, path=None):
1308+ fstab = cls(path=path)
1309+ entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
1310+ if entry:
1311+ return fstab.remove_entry(entry)
1312+ return False
1313+
1314+ @classmethod
1315+ def add(cls, device, mountpoint, filesystem, options=None, path=None):
1316+ return cls(path=path).add_entry(Fstab.Entry(device,
1317+ mountpoint, filesystem,
1318+ options=options))
1319
1320=== modified file 'hooks/charmhelpers/core/hookenv.py'
1321--- hooks/charmhelpers/core/hookenv.py 2014-05-19 11:41:35 +0000
1322+++ hooks/charmhelpers/core/hookenv.py 2014-07-31 10:21:42 +0000
1323@@ -25,7 +25,7 @@
1324 def cached(func):
1325 """Cache return values for multiple executions of func + args
1326
1327- For example:
1328+ For example::
1329
1330 @cached
1331 def unit_get(attribute):
1332@@ -445,18 +445,19 @@
1333 class Hooks(object):
1334 """A convenient handler for hook functions.
1335
1336- Example:
1337+ Example::
1338+
1339 hooks = Hooks()
1340
1341 # register a hook, taking its name from the function name
1342 @hooks.hook()
1343 def install():
1344- ...
1345+ pass # your code here
1346
1347 # register a hook, providing a custom hook name
1348 @hooks.hook("config-changed")
1349 def config_changed():
1350- ...
1351+ pass # your code here
1352
1353 if __name__ == "__main__":
1354 # execute a hook based on the name the program is called by
1355
1356=== modified file 'hooks/charmhelpers/core/host.py'
1357--- hooks/charmhelpers/core/host.py 2014-05-19 11:41:35 +0000
1358+++ hooks/charmhelpers/core/host.py 2014-07-31 10:21:42 +0000
1359@@ -12,11 +12,11 @@
1360 import string
1361 import subprocess
1362 import hashlib
1363-import apt_pkg
1364
1365 from collections import OrderedDict
1366
1367 from hookenv import log
1368+from fstab import Fstab
1369
1370
1371 def service_start(service_name):
1372@@ -35,7 +35,8 @@
1373
1374
1375 def service_reload(service_name, restart_on_failure=False):
1376- """Reload a system service, optionally falling back to restart if reload fails"""
1377+ """Reload a system service, optionally falling back to restart if
1378+ reload fails"""
1379 service_result = service('reload', service_name)
1380 if not service_result and restart_on_failure:
1381 service_result = service('restart', service_name)
1382@@ -144,7 +145,19 @@
1383 target.write(content)
1384
1385
1386-def mount(device, mountpoint, options=None, persist=False):
1387+def fstab_remove(mp):
1388+ """Remove the given mountpoint entry from /etc/fstab
1389+ """
1390+ return Fstab.remove_by_mountpoint(mp)
1391+
1392+
1393+def fstab_add(dev, mp, fs, options=None):
1394+ """Adds the given device entry to the /etc/fstab file
1395+ """
1396+ return Fstab.add(dev, mp, fs, options=options)
1397+
1398+
1399+def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
1400 """Mount a filesystem at a particular mountpoint"""
1401 cmd_args = ['mount']
1402 if options is not None:
1403@@ -155,9 +168,9 @@
1404 except subprocess.CalledProcessError, e:
1405 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
1406 return False
1407+
1408 if persist:
1409- # TODO: update fstab
1410- pass
1411+ return fstab_add(device, mountpoint, filesystem, options=options)
1412 return True
1413
1414
1415@@ -169,9 +182,9 @@
1416 except subprocess.CalledProcessError, e:
1417 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
1418 return False
1419+
1420 if persist:
1421- # TODO: update fstab
1422- pass
1423+ return fstab_remove(mountpoint)
1424 return True
1425
1426
1427@@ -198,13 +211,13 @@
1428 def restart_on_change(restart_map, stopstart=False):
1429 """Restart services based on configuration files changing
1430
1431- This function is used a decorator, for example
1432+ This function is used a decorator, for example::
1433
1434 @restart_on_change({
1435 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
1436 })
1437 def ceph_client_changed():
1438- ...
1439+ pass # your code here
1440
1441 In this example, the cinder-api and cinder-volume services
1442 would be restarted if /etc/ceph/ceph.conf is changed by the
1443@@ -300,12 +313,19 @@
1444
1445 def cmp_pkgrevno(package, revno, pkgcache=None):
1446 '''Compare supplied revno with the revno of the installed package
1447- 1 => Installed revno is greater than supplied arg
1448- 0 => Installed revno is the same as supplied arg
1449- -1 => Installed revno is less than supplied arg
1450+
1451+ * 1 => Installed revno is greater than supplied arg
1452+ * 0 => Installed revno is the same as supplied arg
1453+ * -1 => Installed revno is less than supplied arg
1454+
1455 '''
1456+ import apt_pkg
1457 if not pkgcache:
1458 apt_pkg.init()
1459+ # Force Apt to build its cache in memory. That way we avoid race
1460+ # conditions with other applications building the cache in the same
1461+ # place.
1462+ apt_pkg.config.set("Dir::Cache::pkgcache", "")
1463 pkgcache = apt_pkg.Cache()
1464 pkg = pkgcache[package]
1465 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
1466
1467=== modified file 'hooks/charmhelpers/fetch/__init__.py'
1468--- hooks/charmhelpers/fetch/__init__.py 2014-05-19 11:41:35 +0000
1469+++ hooks/charmhelpers/fetch/__init__.py 2014-07-31 10:21:42 +0000
1470@@ -13,7 +13,6 @@
1471 config,
1472 log,
1473 )
1474-import apt_pkg
1475 import os
1476
1477
1478@@ -56,6 +55,15 @@
1479 'icehouse/proposed': 'precise-proposed/icehouse',
1480 'precise-icehouse/proposed': 'precise-proposed/icehouse',
1481 'precise-proposed/icehouse': 'precise-proposed/icehouse',
1482+ # Juno
1483+ 'juno': 'trusty-updates/juno',
1484+ 'trusty-juno': 'trusty-updates/juno',
1485+ 'trusty-juno/updates': 'trusty-updates/juno',
1486+ 'trusty-updates/juno': 'trusty-updates/juno',
1487+ 'juno/proposed': 'trusty-proposed/juno',
1488+ 'juno/proposed': 'trusty-proposed/juno',
1489+ 'trusty-juno/proposed': 'trusty-proposed/juno',
1490+ 'trusty-proposed/juno': 'trusty-proposed/juno',
1491 }
1492
1493 # The order of this list is very important. Handlers should be listed in from
1494@@ -108,6 +116,7 @@
1495
1496 def filter_installed_packages(packages):
1497 """Returns a list of packages that require installation"""
1498+ import apt_pkg
1499 apt_pkg.init()
1500
1501 # Tell apt to build an in-memory cache to prevent race conditions (if
1502@@ -226,31 +235,39 @@
1503 sources_var='install_sources',
1504 keys_var='install_keys'):
1505 """
1506- Configure multiple sources from charm configuration
1507+ Configure multiple sources from charm configuration.
1508+
1509+ The lists are encoded as yaml fragments in the configuration.
1510+ The frament needs to be included as a string.
1511
1512 Example config:
1513- install_sources:
1514+ install_sources: |
1515 - "ppa:foo"
1516 - "http://example.com/repo precise main"
1517- install_keys:
1518+ install_keys: |
1519 - null
1520 - "a1b2c3d4"
1521
1522 Note that 'null' (a.k.a. None) should not be quoted.
1523 """
1524- sources = safe_load(config(sources_var))
1525- keys = config(keys_var)
1526- if keys is not None:
1527- keys = safe_load(keys)
1528- if isinstance(sources, basestring) and (
1529- keys is None or isinstance(keys, basestring)):
1530- add_source(sources, keys)
1531+ sources = safe_load((config(sources_var) or '').strip()) or []
1532+ keys = safe_load((config(keys_var) or '').strip()) or None
1533+
1534+ if isinstance(sources, basestring):
1535+ sources = [sources]
1536+
1537+ if keys is None:
1538+ for source in sources:
1539+ add_source(source, None)
1540 else:
1541- if not len(sources) == len(keys):
1542- msg = 'Install sources and keys lists are different lengths'
1543- raise SourceConfigError(msg)
1544- for src_num in range(len(sources)):
1545- add_source(sources[src_num], keys[src_num])
1546+ if isinstance(keys, basestring):
1547+ keys = [keys]
1548+
1549+ if len(sources) != len(keys):
1550+ raise SourceConfigError(
1551+ 'Install sources and keys lists are different lengths')
1552+ for source, key in zip(sources, keys):
1553+ add_source(source, key)
1554 if update:
1555 apt_update(fatal=True)
1556
1557
1558=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
1559--- hooks/charmhelpers/fetch/bzrurl.py 2013-12-04 09:51:46 +0000
1560+++ hooks/charmhelpers/fetch/bzrurl.py 2014-07-31 10:21:42 +0000
1561@@ -39,7 +39,8 @@
1562 def install(self, source):
1563 url_parts = self.parse_url(source)
1564 branch_name = url_parts.path.strip("/").split("/")[-1]
1565- dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
1566+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
1567+ branch_name)
1568 if not os.path.exists(dest_dir):
1569 mkdir(dest_dir, perms=0755)
1570 try:
1571
1572=== modified file 'hooks/misc_utils.py'
1573--- hooks/misc_utils.py 2013-07-19 19:52:45 +0000
1574+++ hooks/misc_utils.py 2014-07-31 10:21:42 +0000
1575@@ -56,7 +56,8 @@
1576
1577 if not is_block_device(bdev):
1578 log('Failed to locate valid block device at %s' % bdev, level=ERROR)
1579- raise
1580+ # ignore missing block devices
1581+ return
1582
1583 return bdev
1584
1585
1586=== modified file 'hooks/swift_storage_context.py'
1587--- hooks/swift_storage_context.py 2013-08-16 20:38:32 +0000
1588+++ hooks/swift_storage_context.py 2014-07-31 10:21:42 +0000
1589@@ -61,10 +61,15 @@
1590 interfaces = []
1591
1592 def __call__(self):
1593+ import psutil
1594+ multiplier = int(config('worker-multiplier')) or 1
1595 ctxt = {
1596 'local_ip': unit_private_ip(),
1597 'account_server_port': config('account-server-port'),
1598 'container_server_port': config('container-server-port'),
1599 'object_server_port': config('object-server-port'),
1600+ 'workers': str(psutil.NUM_CPUS * multiplier),
1601+ 'object_server_threads_per_disk': config(
1602+ 'object-server-threads-per-disk'),
1603 }
1604 return ctxt
1605
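The worker count above is derived from the unit's CPU count; a minimal sketch,
assuming the pre-2.0 psutil API (NUM_CPUS; newer psutil releases replaced it
with psutil.cpu_count()):

    # Sketch of the workers value rendered into the server configs.
    import psutil

    multiplier = 1  # int(config('worker-multiplier')) or 1 in the hook
    workers = str(psutil.NUM_CPUS * multiplier)  # e.g. '4' on 4 cores
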
1606=== modified file 'hooks/swift_storage_utils.py'
1607--- hooks/swift_storage_utils.py 2014-04-07 14:50:34 +0000
1608+++ hooks/swift_storage_utils.py 2014-07-31 10:21:42 +0000
1609@@ -33,6 +33,7 @@
1610
1611 from charmhelpers.contrib.storage.linux.utils import (
1612 is_block_device,
1613+ is_device_mounted,
1614 )
1615
1616 from charmhelpers.contrib.openstack.utils import (
1617@@ -48,7 +49,7 @@
1618
1619 PACKAGES = [
1620 'swift', 'swift-account', 'swift-container', 'swift-object',
1621- 'xfsprogs', 'gdisk', 'lvm2', 'python-jinja2',
1622+ 'xfsprogs', 'gdisk', 'lvm2', 'python-jinja2', 'python-psutil',
1623 ]
1624
1625 TEMPLATES = 'templates/'
1626@@ -135,10 +136,17 @@
1627 (ACCOUNT_SVCS + CONTAINER_SVCS + OBJECT_SVCS)]
1628
1629
1630+def _is_storage_ready(partition):
1631+ """
1632+ A small helper to determine if a given device is suitable to be used as
1633+ a storage device.
1634+ """
1635+ return is_block_device(partition) and not is_device_mounted(partition)
1636+
1637+
1638 def find_block_devices():
1639 found = []
1640 incl = ['sd[a-z]', 'vd[a-z]', 'cciss\/c[0-9]d[0-9]']
1641- blacklist = ['sda', 'vda', 'cciss/c0d0']
1642
1643 with open('/proc/partitions') as proc:
1644 print proc
1645@@ -146,9 +154,9 @@
1646 for partition in [p[3] for p in partitions if p]:
1647 for inc in incl:
1648 _re = re.compile(r'^(%s)$' % inc)
1649- if _re.match(partition) and partition not in blacklist:
1650+ if _re.match(partition):
1651 found.append(os.path.join('/dev', partition))
1652- return [f for f in found if is_block_device(f)]
1653+ return [f for f in found if _is_storage_ready(f)]
1654
1655
1656 def determine_block_devices():
1657@@ -164,7 +172,12 @@
1658 else:
1659 bdevs = block_device.split(' ')
1660
1661- return [ensure_block_device(bd) for bd in bdevs]
1662+ # attempt to ensure block devices, but filter out missing devs
1663+ _none = ['None', 'none', None]
1664+ valid_bdevs = \
1665+ [x for x in map(ensure_block_device, bdevs) if x not in _none]
1666+ log('Valid ensured block devices: %s' % valid_bdevs)
1667+ return valid_bdevs
1668
1669
1670 def mkfs_xfs(bdev):
1671@@ -181,7 +194,8 @@
1672 _dev = os.path.basename(dev)
1673 _mp = os.path.join('/srv', 'node', _dev)
1674 mkdir(_mp, owner='swift', group='swift')
1675- mount(dev, '/srv/node/%s' % _dev, persist=True)
1676+ mount(dev, '/srv/node/%s' % _dev, persist=True,
1677+ filesystem="xfs")
1678 check_call(['chown', '-R', 'swift:swift', '/srv/node/'])
1679 check_call(['chmod', '-R', '0750', '/srv/node/'])
1680
1681
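A self-contained sketch of the new guessing behaviour, with is_block_device()
and is_device_mounted() replaced by a stubbed set of mounted devices (only the
regex scan over /proc/partitions matches the real helper):

    import os
    import re

    INCL = [r'sd[a-z]', r'vd[a-z]', r'cciss\/c[0-9]d[0-9]']

    def find_block_devices(partitions_text, mounted):
        found = []
        rows = [line.split() for line in partitions_text.splitlines()]
        for name in [r[3] for r in rows if len(r) == 4 and r[3] != 'name']:
            if any(re.match(r'^(%s)$' % inc, name) for inc in INCL):
                found.append(os.path.join('/dev', name))
        # Stands in for the is_block_device()/is_device_mounted() checks.
        return [f for f in found if f not in mounted]

    sample = ("8 0 117220824 sda\n"
              "8 1 117219800 sda1\n"
              "8 16 119454720 sdb\n")
    assert find_block_devices(sample, mounted={'/dev/sda'}) == ['/dev/sdb']
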
1682=== modified file 'templates/account-server.conf'
1683--- templates/account-server.conf 2013-09-30 13:29:32 +0000
1684+++ templates/account-server.conf 2014-07-31 10:21:42 +0000
1685@@ -1,7 +1,7 @@
1686 [DEFAULT]
1687 bind_ip = 0.0.0.0
1688 bind_port = {{ account_server_port }}
1689-workers = 2
1690+workers = {{ workers }}
1691
1692 [pipeline:main]
1693 pipeline = recon account-server
1694
1695=== modified file 'templates/container-server.conf'
1696--- templates/container-server.conf 2014-04-02 12:49:56 +0000
1697+++ templates/container-server.conf 2014-07-31 10:21:42 +0000
1698@@ -1,7 +1,7 @@
1699 [DEFAULT]
1700 bind_ip = 0.0.0.0
1701 bind_port = {{ container_server_port }}
1702-workers = 2
1703+workers = {{ workers }}
1704
1705 [pipeline:main]
1706 pipeline = recon container-server
1707
1708=== modified file 'templates/object-server.conf'
1709--- templates/object-server.conf 2013-09-30 13:29:32 +0000
1710+++ templates/object-server.conf 2014-07-31 10:21:42 +0000
1711@@ -1,7 +1,7 @@
1712 [DEFAULT]
1713 bind_ip = 0.0.0.0
1714 bind_port = {{ object_server_port }}
1715-workers = 2
1716+workers = {{ workers }}
1717
1718 [pipeline:main]
1719 pipeline = recon object-server
1720@@ -12,6 +12,7 @@
1721
1722 [app:object-server]
1723 use = egg:swift#object
1724+threads_per_disk = {{ object_server_threads_per_disk }}
1725
1726 [object-replicator]
1727
1728
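For reference, the new context keys land in these templates through
straightforward jinja2 substitution; a minimal sketch (the charm renders via
its templating helpers rather than directly like this):

    from jinja2 import Template

    ctxt = {'workers': '4', 'object_server_threads_per_disk': '4'}
    snippet = ("workers = {{ workers }}\n"
               "threads_per_disk = {{ object_server_threads_per_disk }}")
    print(Template(snippet).render(**ctxt))
    # workers = 4
    # threads_per_disk = 4
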
1729=== added directory 'tests'
1730=== added file 'tests/00-setup'
1731--- tests/00-setup 1970-01-01 00:00:00 +0000
1732+++ tests/00-setup 2014-07-31 10:21:42 +0000
1733@@ -0,0 +1,11 @@
1734+#!/bin/bash
1735+
1736+set -ex
1737+
1738+sudo add-apt-repository --yes ppa:juju/stable
1739+sudo apt-get update --yes
1740+sudo apt-get install --yes python-amulet
1741+sudo apt-get install --yes python-swiftclient
1742+sudo apt-get install --yes python-glanceclient
1743+sudo apt-get install --yes python-keystoneclient
1744+sudo apt-get install --yes python-novaclient
1745
1746=== added file 'tests/10-basic-precise-essex'
1747--- tests/10-basic-precise-essex 1970-01-01 00:00:00 +0000
1748+++ tests/10-basic-precise-essex 2014-07-31 10:21:42 +0000
1749@@ -0,0 +1,9 @@
1750+#!/usr/bin/python
1751+
1752+"""Amulet tests on a basic swift-storage deployment on precise-essex."""
1753+
1754+from basic_deployment import SwiftStorageBasicDeployment
1755+
1756+if __name__ == '__main__':
1757+ deployment = SwiftStorageBasicDeployment(series='precise')
1758+ deployment.run_tests()
1759
1760=== added file 'tests/11-basic-precise-folsom'
1761--- tests/11-basic-precise-folsom 1970-01-01 00:00:00 +0000
1762+++ tests/11-basic-precise-folsom 2014-07-31 10:21:42 +0000
1763@@ -0,0 +1,11 @@
1764+#!/usr/bin/python
1765+
1766+"""Amulet tests on a basic swift-storage deployment on precise-folsom."""
1767+
1768+from basic_deployment import SwiftStorageBasicDeployment
1769+
1770+if __name__ == '__main__':
1771+ deployment = SwiftStorageBasicDeployment(series='precise',
1772+ openstack='cloud:precise-folsom',
1773+ source='cloud:precise-updates/folsom')
1774+ deployment.run_tests()
1775
1776=== added file 'tests/12-basic-precise-grizzly'
1777--- tests/12-basic-precise-grizzly 1970-01-01 00:00:00 +0000
1778+++ tests/12-basic-precise-grizzly 2014-07-31 10:21:42 +0000
1779@@ -0,0 +1,11 @@
1780+#!/usr/bin/python
1781+
1782+"""Amulet tests on a basic swift-storage deployment on precise-grizzly."""
1783+
1784+from basic_deployment import SwiftStorageBasicDeployment
1785+
1786+if __name__ == '__main__':
1787+ deployment = SwiftStorageBasicDeployment(series='precise',
1788+ openstack='cloud:precise-grizzly',
1789+ source='cloud:precise-updates/grizzly')
1790+ deployment.run_tests()
1791
1792=== added file 'tests/13-basic-precise-havana'
1793--- tests/13-basic-precise-havana 1970-01-01 00:00:00 +0000
1794+++ tests/13-basic-precise-havana 2014-07-31 10:21:42 +0000
1795@@ -0,0 +1,11 @@
1796+#!/usr/bin/python
1797+
1798+"""Amulet tests on a basic swift-storage deployment on precise-havana."""
1799+
1800+from basic_deployment import SwiftStorageBasicDeployment
1801+
1802+if __name__ == '__main__':
1803+ deployment = SwiftStorageBasicDeployment(series='precise',
1804+ openstack='cloud:precise-havana',
1805+ source='cloud:precise-updates/havana')
1806+ deployment.run_tests()
1807
1808=== added file 'tests/14-basic-precise-icehouse'
1809--- tests/14-basic-precise-icehouse 1970-01-01 00:00:00 +0000
1810+++ tests/14-basic-precise-icehouse 2014-07-31 10:21:42 +0000
1811@@ -0,0 +1,11 @@
1812+#!/usr/bin/python
1813+
1814+"""Amulet tests on a basic swift-storage deployment on precise-icehouse."""
1815+
1816+from basic_deployment import SwiftStorageBasicDeployment
1817+
1818+if __name__ == '__main__':
1819+ deployment = SwiftStorageBasicDeployment(series='precise',
1820+ openstack='cloud:precise-icehouse',
1821+ source='cloud:precise-updates/icehouse')
1822+ deployment.run_tests()
1823
1824=== added file 'tests/15-basic-trusty-icehouse'
1825--- tests/15-basic-trusty-icehouse 1970-01-01 00:00:00 +0000
1826+++ tests/15-basic-trusty-icehouse 2014-07-31 10:21:42 +0000
1827@@ -0,0 +1,9 @@
1828+#!/usr/bin/python
1829+
1830+"""Amulet tests on a basic swift-storage deployment on trusty-icehouse."""
1831+
1832+from basic_deployment import SwiftStorageBasicDeployment
1833+
1834+if __name__ == '__main__':
1835+ deployment = SwiftStorageBasicDeployment(series='trusty')
1836+ deployment.run_tests()
1837
1838=== added file 'tests/README'
1839--- tests/README 1970-01-01 00:00:00 +0000
1840+++ tests/README 2014-07-31 10:21:42 +0000
1841@@ -0,0 +1,52 @@
1842+This directory provides Amulet tests that focus on verification of swift-storage
1843+deployments.
1844+
1845+If you use a web proxy server to access the web, you'll need to set the
1846+AMULET_HTTP_PROXY environment variable to the http URL of the proxy server.
1847+
1848+The following examples demonstrate different ways that tests can be executed.
1849+All examples are run from the charm's root directory.
1850+
1851+ * To run all tests (starting with 00-setup):
1852+
1853+ make test
1854+
1855+ * To run a specific test module (or modules):
1856+
1857+ juju test -v -p AMULET_HTTP_PROXY 15-basic-trusty-icehouse
1858+
1859+ * To run a specific test module (or modules), and keep the environment
1860+ deployed after a failure:
1861+
1862+ juju test --set-e -v -p AMULET_HTTP_PROXY 15-basic-trusty-icehouse
1863+
1864+ * To re-run a test module against an already deployed environment (one
1865+ that was deployed by a previous call to 'juju test --set-e'):
1866+
1867+ ./tests/15-basic-trusty-icehouse
1868+
1869+For debugging and test development purposes, all code should be idempotent.
1870+In other words, the code should have the ability to be re-run without changing
1871+the results beyond the initial run. This enables editing and re-running of a
1872+test module against an already deployed environment, as described above.
1873+
1874+Manual debugging tips:
1875+
1876+ * Set the following env vars before using the OpenStack CLI as admin:
1877+ export OS_AUTH_URL=http://`juju-deployer -f keystone 2>&1 | tail -n 1`:5000/v2.0
1878+ export OS_TENANT_NAME=admin
1879+ export OS_USERNAME=admin
1880+ export OS_PASSWORD=openstack
1881+ export OS_REGION_NAME=RegionOne
1882+
1883+ * Set the following env vars before using the OpenStack CLI as demoUser:
1884+ export OS_AUTH_URL=http://`juju-deployer -f keystone 2>&1 | tail -n 1`:5000/v2.0
1885+ export OS_TENANT_NAME=demoTenant
1886+ export OS_USERNAME=demoUser
1887+ export OS_PASSWORD=password
1888+ export OS_REGION_NAME=RegionOne
1889+
1890+ * Sample swift command:
1891+ swift -A $OS_AUTH_URL --os-tenant-name services --os-username swift \
1892+ --os-password password list
1893+ (where tenant/user names and password are in swift-proxy's nova.conf file)
1894
1895=== added file 'tests/basic_deployment.py'
1896--- tests/basic_deployment.py 1970-01-01 00:00:00 +0000
1897+++ tests/basic_deployment.py 2014-07-31 10:21:42 +0000
1898@@ -0,0 +1,450 @@
1899+#!/usr/bin/python
1900+
1901+import amulet
1902+import swiftclient
1903+
1904+from charmhelpers.contrib.openstack.amulet.deployment import (
1905+ OpenStackAmuletDeployment
1906+)
1907+
1908+from charmhelpers.contrib.openstack.amulet.utils import (
1909+ OpenStackAmuletUtils,
1910+ DEBUG, # flake8: noqa
1911+ ERROR
1912+)
1913+
1914+# Use DEBUG to turn on debug logging
1915+u = OpenStackAmuletUtils(ERROR)
1916+
1917+
1918+class SwiftStorageBasicDeployment(OpenStackAmuletDeployment):
1919+ """Amulet tests on a basic swift-storage deployment."""
1920+
1921+ def __init__(self, series, openstack=None, source=None):
1922+ """Deploy the entire test environment."""
1923+ super(SwiftStorageBasicDeployment, self).__init__(series, openstack,
1924+ source)
1925+ self._add_services()
1926+ self._add_relations()
1927+ self._configure_services()
1928+ self._deploy()
1929+ self._initialize_tests()
1930+
1931+ def _add_services(self):
1932+ """Add the service that we're testing, including the number of units,
1933+ where swift-storage is local, and the other charms are from
1934+ the charm store."""
1935+ this_service = ('swift-storage', 1)
1936+ other_services = [('mysql', 1),
1937+ ('keystone', 1), ('glance', 1), ('swift-proxy', 1)]
1938+ super(SwiftStorageBasicDeployment, self)._add_services(this_service,
1939+ other_services)
1940+
1941+ def _add_relations(self):
1942+ """Add all of the relations for the services."""
1943+ relations = {
1944+ 'keystone:shared-db': 'mysql:shared-db',
1945+ 'swift-proxy:identity-service': 'keystone:identity-service',
1946+ 'swift-storage:swift-storage': 'swift-proxy:swift-storage',
1947+ 'glance:identity-service': 'keystone:identity-service',
1948+ 'glance:shared-db': 'mysql:shared-db',
1949+ 'glance:object-store': 'swift-proxy:object-store'
1950+ }
1951+ super(SwiftStorageBasicDeployment, self)._add_relations(relations)
1952+
1953+ def _configure_services(self):
1954+ """Configure all of the services."""
1955+ keystone_config = {'admin-password': 'openstack',
1956+ 'admin-token': 'ubuntutesting'}
1957+ swift_proxy_config = {'zone-assignment': 'manual',
1958+ 'replicas': '1',
1959+ 'swift-hash': 'fdfef9d4-8b06-11e2-8ac0-531c923c8fae',
1960+ 'use-https': 'no'}
1961+ swift_storage_config = {'zone': '1',
1962+ 'block-device': 'vdb',
1963+ 'overwrite': 'true'}
1964+ configs = {'keystone': keystone_config,
1965+ 'swift-proxy': swift_proxy_config,
1966+ 'swift-storage': swift_storage_config}
1967+ super(SwiftStorageBasicDeployment, self)._configure_services(configs)
1968+
1969+ def _initialize_tests(self):
1970+ """Perform final initialization before tests get run."""
1971+ # Access the sentries for inspecting service units
1972+ self.mysql_sentry = self.d.sentry.unit['mysql/0']
1973+ self.keystone_sentry = self.d.sentry.unit['keystone/0']
1974+ self.glance_sentry = self.d.sentry.unit['glance/0']
1975+ self.swift_proxy_sentry = self.d.sentry.unit['swift-proxy/0']
1976+ self.swift_storage_sentry = self.d.sentry.unit['swift-storage/0']
1977+
1978+ # Authenticate admin with keystone
1979+ self.keystone = u.authenticate_keystone_admin(self.keystone_sentry,
1980+ user='admin',
1981+ password='openstack',
1982+ tenant='admin')
1983+
1984+ # Authenticate admin with glance endpoint
1985+ self.glance = u.authenticate_glance_admin(self.keystone)
1986+
1987+ # Authenticate swift user
1988+ keystone_relation = self.keystone_sentry.relation('identity-service',
1989+ 'swift-proxy:identity-service')
1990+ ep = self.keystone.service_catalog.url_for(service_type='identity',
1991+ endpoint_type='publicURL')
1992+ self.swift = swiftclient.Connection(authurl=ep,
1993+ user=keystone_relation['service_username'],
1994+ key=keystone_relation['service_password'],
1995+ tenant_name=keystone_relation['service_tenant'],
1996+ auth_version='2.0')
1997+
1998+ # Create a demo tenant/role/user
1999+ self.demo_tenant = 'demoTenant'
2000+ self.demo_role = 'demoRole'
2001+ self.demo_user = 'demoUser'
2002+ if not u.tenant_exists(self.keystone, self.demo_tenant):
2003+ tenant = self.keystone.tenants.create(tenant_name=self.demo_tenant,
2004+ description='demo tenant',
2005+ enabled=True)
2006+ self.keystone.roles.create(name=self.demo_role)
2007+ self.keystone.users.create(name=self.demo_user,
2008+ password='password',
2009+ tenant_id=tenant.id,
2010+ email='demo@demo.com')
2011+
2012+ # Authenticate demo user with keystone
2013+ self.keystone_demo = \
2014+ u.authenticate_keystone_user(self.keystone, user=self.demo_user,
2015+ password='password',
2016+ tenant=self.demo_tenant)
2017+
2018+ def test_services(self):
2019+ """Verify the expected services are running on the corresponding
2020+ service units."""
2021+ swift_storage_services = ['status swift-account',
2022+ 'status swift-account-auditor',
2023+ 'status swift-account-reaper',
2024+ 'status swift-account-replicator',
2025+ 'status swift-container',
2026+ 'status swift-container-auditor',
2027+ 'status swift-container-replicator',
2028+ 'status swift-container-updater',
2029+ 'status swift-object',
2030+ 'status swift-object-auditor',
2031+ 'status swift-object-replicator',
2032+ 'status swift-object-updater']
2033+ if self._get_openstack_release() >= self.precise_icehouse:
2034+ swift_storage_services.append('status swift-container-sync')
2035+ commands = {
2036+ self.mysql_sentry: ['status mysql'],
2037+ self.keystone_sentry: ['status keystone'],
2038+ self.glance_sentry: ['status glance-registry', 'status glance-api'],
2039+ self.swift_proxy_sentry: ['status swift-proxy'],
2040+ self.swift_storage_sentry: swift_storage_services
2041+ }
2042+
2043+ ret = u.validate_services(commands)
2044+ if ret:
2045+ amulet.raise_status(amulet.FAIL, msg=ret)
2046+
2047+ def test_users(self):
2048+ """Verify all existing roles."""
2049+ user1 = {'name': 'demoUser',
2050+ 'enabled': True,
2051+ 'tenantId': u.not_null,
2052+ 'id': u.not_null,
2053+ 'email': 'demo@demo.com'}
2054+ user2 = {'name': 'admin',
2055+ 'enabled': True,
2056+ 'tenantId': u.not_null,
2057+ 'id': u.not_null,
2058+ 'email': 'juju@localhost'}
2059+ user3 = {'name': 'glance',
2060+ 'enabled': True,
2061+ 'tenantId': u.not_null,
2062+ 'id': u.not_null,
2063+ 'email': u'juju@localhost'}
2064+ user4 = {'name': 'swift',
2065+ 'enabled': True,
2066+ 'tenantId': u.not_null,
2067+ 'id': u.not_null,
2068+ 'email': u'juju@localhost'}
2069+ expected = [user1, user2, user3, user4]
2070+ actual = self.keystone.users.list()
2071+
2072+ ret = u.validate_user_data(expected, actual)
2073+ if ret:
2074+ amulet.raise_status(amulet.FAIL, msg=ret)
2075+
2076+ def test_service_catalog(self):
2077+ """Verify that the service catalog endpoint data is valid."""
2078+ endpoint_vol = {'adminURL': u.valid_url,
2079+ 'region': 'RegionOne',
2080+ 'publicURL': u.valid_url,
2081+ 'internalURL': u.valid_url}
2082+ endpoint_id = {'adminURL': u.valid_url,
2083+ 'region': 'RegionOne',
2084+ 'publicURL': u.valid_url,
2085+ 'internalURL': u.valid_url}
2086+ if self._get_openstack_release() >= self.precise_folsom:
2087+ endpoint_vol['id'] = u.not_null
2088+ endpoint_id['id'] = u.not_null
2089+ expected = {'image': [endpoint_id], 'object-store': [endpoint_id],
2090+ 'identity': [endpoint_id]}
2091+ actual = self.keystone_demo.service_catalog.get_endpoints()
2092+
2093+ ret = u.validate_svc_catalog_endpoint_data(expected, actual)
2094+ if ret:
2095+ amulet.raise_status(amulet.FAIL, msg=ret)
2096+
2097+ def test_openstack_object_store_endpoint(self):
2098+ """Verify the swift object-store endpoint data."""
2099+ endpoints = self.keystone.endpoints.list()
2100+ admin_port = internal_port = public_port = '8080'
2101+ expected = {'id': u.not_null,
2102+ 'region': 'RegionOne',
2103+ 'adminurl': u.valid_url,
2104+ 'internalurl': u.valid_url,
2105+ 'publicurl': u.valid_url,
2106+ 'service_id': u.not_null}
2107+
2108+ ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
2109+ public_port, expected)
2110+ if ret:
2111+ message = 'object-store endpoint: {}'.format(ret)
2112+ amulet.raise_status(amulet.FAIL, msg=message)
2113+
2114+ def test_swift_storage_swift_storage_relation(self):
2115+ """Verify the swift-storage to swift-proxy swift-storage relation
2116+ data."""
2117+ unit = self.swift_storage_sentry
2118+ relation = ['swift-storage', 'swift-proxy:swift-storage']
2119+ expected = {
2120+ 'account_port': '6002',
2121+ 'zone': '1',
2122+ 'object_port': '6000',
2123+ 'container_port': '6001',
2124+ 'private-address': u.valid_ip,
2125+ 'device': 'vdb'
2126+ }
2127+
2128+ ret = u.validate_relation_data(unit, relation, expected)
2129+ if ret:
2130+ message = u.relation_error('swift-storage swift-storage', ret)
2131+ amulet.raise_status(amulet.FAIL, msg=message)
2132+
2133+ def test_swift_proxy_swift_storage_relation(self):
2134+ """Verify the swift-proxy to swift-storage swift-storage relation
2135+ data."""
2136+ unit = self.swift_proxy_sentry
2137+ relation = ['swift-storage', 'swift-storage:swift-storage']
2138+ expected = {
2139+ 'private-address': u.valid_ip,
2140+ 'trigger': u.not_null,
2141+ 'rings_url': u.valid_url,
2142+ 'swift_hash': u.not_null
2143+ }
2144+
2145+ ret = u.validate_relation_data(unit, relation, expected)
2146+ if ret:
2147+ message = u.relation_error('swift-proxy swift-storage', ret)
2148+ amulet.raise_status(amulet.FAIL, msg=message)
2149+
2150+ def test_restart_on_config_change(self):
2151+ """Verify that the specified services are restarted when the config
2152+ is changed."""
2153+ # NOTE(coreycb): Skipping failing test until resolved. This test
2154+ # fails because the config file's last mod time is
2155+ # slightly after the process' last mod time.
2156+ if self._get_openstack_release() >= self.precise_essex:
2157+ u.log.error("Skipping failing test until resolved")
2158+ return
2159+
2160+ services = {'swift-account-server': 'account-server.conf',
2161+ 'swift-account-auditor': 'account-server.conf',
2162+ 'swift-account-reaper': 'account-server.conf',
2163+ 'swift-account-replicator': 'account-server.conf',
2164+ 'swift-container-server': 'container-server.conf',
2165+ 'swift-container-auditor': 'container-server.conf',
2166+ 'swift-container-replicator': 'container-server.conf',
2167+ 'swift-container-updater': 'container-server.conf',
2168+ 'swift-object-server': 'object-server.conf',
2169+ 'swift-object-auditor': 'object-server.conf',
2170+ 'swift-object-replicator': 'object-server.conf',
2171+ 'swift-object-updater': 'object-server.conf'}
2172+ if self._get_openstack_release() >= self.precise_icehouse:
2173+ services['swift-container-sync'] = 'container-server.conf'
2174+
2175+ self.d.configure('swift-storage',
2176+ {'object-server-threads-per-disk': '2'})
2177+
2178+ time = 20
2179+ for s, conf in services.iteritems():
2180+ config = '/etc/swift/{}'.format(conf)
2181+ if not u.service_restarted(self.swift_storage_sentry, s, config,
2182+ pgrep_full=True, sleep_time=time):
2183+ msg = "service {} didn't restart after config change".format(s)
2184+ amulet.raise_status(amulet.FAIL, msg=msg)
2185+ time = 0
2186+
2187+ self.d.configure('swift-storage',
2188+ {'object-server-threads-per-disk': '4'})
2189+
2190+ def test_swift_config(self):
2191+ """Verify the data in the swift-hash section of the swift config
2192+ file."""
2193+ unit = self.swift_storage_sentry
2194+ conf = '/etc/swift/swift.conf'
2195+ swift_proxy_relation = self.swift_proxy_sentry.relation('swift-storage',
2196+ 'swift-storage:swift-storage')
2197+ expected = {
2198+ 'swift_hash_path_suffix': swift_proxy_relation['swift_hash']
2199+ }
2200+
2201+ ret = u.validate_config_data(unit, conf, 'swift-hash', expected)
2202+ if ret:
2203+ message = "swift config error: {}".format(ret)
2204+ amulet.raise_status(amulet.FAIL, msg=message)
2205+
2206+ def test_account_server_config(self):
2207+ """Verify the data in the account server config file."""
2208+ unit = self.swift_storage_sentry
2209+ conf = '/etc/swift/account-server.conf'
2210+ expected = {
2211+ 'DEFAULT': {
2212+ 'bind_ip': '0.0.0.0',
2213+ 'bind_port': '6002',
2214+ 'workers': '1'
2215+ },
2216+ 'pipeline:main': {
2217+ 'pipeline': 'recon account-server'
2218+ },
2219+ 'filter:recon': {
2220+ 'use': 'egg:swift#recon',
2221+ 'recon_cache_path': '/var/cache/swift'
2222+ },
2223+ 'app:account-server': {
2224+ 'use': 'egg:swift#account'
2225+ }
2226+ }
2227+
2228+ for section, pairs in expected.iteritems():
2229+ ret = u.validate_config_data(unit, conf, section, pairs)
2230+ if ret:
2231+ message = "account server config error: {}".format(ret)
2232+ amulet.raise_status(amulet.FAIL, msg=message)
2233+
2234+ def test_container_server_config(self):
2235+ """Verify the data in the container server config file."""
2236+ unit = self.swift_storage_sentry
2237+ conf = '/etc/swift/container-server.conf'
2238+ expected = {
2239+ 'DEFAULT': {
2240+ 'bind_ip': '0.0.0.0',
2241+ 'bind_port': '6001',
2242+ 'workers': '1'
2243+ },
2244+ 'pipeline:main': {
2245+ 'pipeline': 'recon container-server'
2246+ },
2247+ 'filter:recon': {
2248+ 'use': 'egg:swift#recon',
2249+ 'recon_cache_path': '/var/cache/swift'
2250+ },
2251+ 'app:container-server': {
2252+ 'use': 'egg:swift#container',
2253+ 'allow_versions': 'true'
2254+ }
2255+ }
2256+
2257+ for section, pairs in expected.iteritems():
2258+ ret = u.validate_config_data(unit, conf, section, pairs)
2259+ if ret:
2260+ message = "container server config error: {}".format(ret)
2261+ amulet.raise_status(amulet.FAIL, msg=message)
2262+
2263+ def test_object_server_config(self):
2264+ """Verify the data in the object server config file."""
2265+ unit = self.swift_storage_sentry
2266+ conf = '/etc/swift/object-server.conf'
2267+ expected = {
2268+ 'DEFAULT': {
2269+ 'bind_ip': '0.0.0.0',
2270+ 'bind_port': '6000',
2271+ 'workers': '1'
2272+ },
2273+ 'pipeline:main': {
2274+ 'pipeline': 'recon object-server'
2275+ },
2276+ 'filter:recon': {
2277+ 'use': 'egg:swift#recon',
2278+ 'recon_cache_path': '/var/cache/swift'
2279+ },
2280+ 'app:object-server': {
2281+ 'use': 'egg:swift#object',
2282+ 'threads_per_disk': '4'
2283+ }
2284+ }
2285+
2286+ for section, pairs in expected.iteritems():
2287+ ret = u.validate_config_data(unit, conf, section, pairs)
2288+ if ret:
2289+ message = "object server config error: {}".format(ret)
2290+ amulet.raise_status(amulet.FAIL, msg=message)
2291+
2292+ def test_image_create(self):
2293+ """Create an instance in glance, which is backed by swift, and validate
2294+ that some of the metadata for the image match in glance and swift."""
2295+ # NOTE(coreycb): Skipping failing test on folsom until resolved. On
2296+ # folsom only, uploading an image to glance gets 400 Bad
2297+ # Request - Error uploading image: (error): [Errno 111]
2298+ # ECONNREFUSED (HTTP 400)
2299+ if self._get_openstack_release() == self.precise_folsom:
2300+ u.log.error("Skipping failing test until resolved")
2301+ return
2302+
2303+ # Create glance image
2304+ image = u.create_cirros_image(self.glance, "cirros-image")
2305+ if not image:
2306+ amulet.raise_status(amulet.FAIL, msg="Image create failed")
2307+
2308+ # Validate that cirros image exists in glance and get its checksum/size
2309+ images = list(self.glance.images.list())
2310+ if len(images) != 1:
2311+ msg = "Expected 1 glance image, found {}".format(len(images))
2312+ amulet.raise_status(amulet.FAIL, msg=msg)
2313+
2314+ if images[0].name != 'cirros-image':
2315+ message = "cirros image does not exist"
2316+ amulet.raise_status(amulet.FAIL, msg=message)
2317+
2318+ glance_image_md5 = image.checksum
2319+ glance_image_size = image.size
2320+
2321+ # Validate that swift object's checksum/size match that from glance
2322+ headers, containers = self.swift.get_account()
2323+ if len(containers) != 1:
2324+ msg = "Expected 1 swift container, found {}".format(len(containers))
2325+ amulet.raise_status(amulet.FAIL, msg=msg)
2326+
2327+ container_name = containers[0].get('name')
2328+
2329+ headers, objects = self.swift.get_container(container_name)
2330+ if len(objects) != 1:
2331+ msg = "Expected 1 swift object, found {}".format(len(objects))
2332+ amulet.raise_status(amulet.FAIL, msg=msg)
2333+
2334+ swift_object_size = objects[0].get('bytes')
2335+ swift_object_md5 = objects[0].get('hash')
2336+
2337+ if glance_image_size != swift_object_size:
2338+ msg = "Glance image size {} != swift object size {}".format( \
2339+ glance_image_size, swift_object_size)
2340+ amulet.raise_status(amulet.FAIL, msg=msg)
2341+
2342+ if glance_image_md5 != swift_object_md5:
2343+ msg = "Glance image hash {} != swift object hash {}".format( \
2344+ glance_image_md5, swift_object_md5)
2345+ amulet.raise_status(amulet.FAIL, msg=msg)
2346+
2347+ # Cleanup
2348+ u.delete_image(self.glance, image)
2349
2350=== added directory 'tests/charmhelpers'
2351=== added file 'tests/charmhelpers/__init__.py'
2352=== added directory 'tests/charmhelpers/contrib'
2353=== added file 'tests/charmhelpers/contrib/__init__.py'
2354=== added directory 'tests/charmhelpers/contrib/amulet'
2355=== added file 'tests/charmhelpers/contrib/amulet/__init__.py'
2356=== added file 'tests/charmhelpers/contrib/amulet/deployment.py'
2357--- tests/charmhelpers/contrib/amulet/deployment.py 1970-01-01 00:00:00 +0000
2358+++ tests/charmhelpers/contrib/amulet/deployment.py 2014-07-31 10:21:42 +0000
2359@@ -0,0 +1,71 @@
2360+import amulet
2361+
2362+import os
2363+
2364+
2365+class AmuletDeployment(object):
2366+ """Amulet deployment.
2367+
2368+ This class provides generic Amulet deployment and test runner
2369+ methods.
2370+ """
2371+
2372+ def __init__(self, series=None):
2373+ """Initialize the deployment environment."""
2374+ self.series = None
2375+
2376+ if series:
2377+ self.series = series
2378+ self.d = amulet.Deployment(series=self.series)
2379+ else:
2380+ self.d = amulet.Deployment()
2381+
2382+ def _add_services(self, this_service, other_services):
2383+ """Add services.
2384+
2385+ Add services to the deployment where this_service is the local charm
2386+ that we're focused on testing and other_services are the other
2387+ charms that come from the charm store.
2388+ """
2389+ name, units = range(2)
2390+
2391+ if this_service[name] != os.path.basename(os.getcwd()):
2392+ s = this_service[name]
2393+ msg = "The charm's root directory name needs to be {}".format(s)
2394+ amulet.raise_status(amulet.FAIL, msg=msg)
2395+
2396+ self.d.add(this_service[name], units=this_service[units])
2397+
2398+ for svc in other_services:
2399+ if self.series:
2400+ self.d.add(svc[name],
2401+ charm='cs:{}/{}'.format(self.series, svc[name]),
2402+ units=svc[units])
2403+ else:
2404+ self.d.add(svc[name], units=svc[units])
2405+
2406+ def _add_relations(self, relations):
2407+ """Add all of the relations for the services."""
2408+ for k, v in relations.iteritems():
2409+ self.d.relate(k, v)
2410+
2411+ def _configure_services(self, configs):
2412+ """Configure all of the services."""
2413+ for service, config in configs.iteritems():
2414+ self.d.configure(service, config)
2415+
2416+ def _deploy(self):
2417+ """Deploy environment and wait for all hooks to finish executing."""
2418+ try:
2419+ self.d.setup()
2420+ self.d.sentry.wait(timeout=900)
2421+ except amulet.helpers.TimeoutError:
2422+ amulet.raise_status(amulet.FAIL, msg="Deployment timed out")
2423+ except Exception:
2424+ raise
2425+
2426+ def run_tests(self):
2427+ """Run all of the methods that are prefixed with 'test_'."""
2428+ for test in dir(self):
2429+ if test.startswith('test_'):
2430+ getattr(self, test)()
2431
2432=== added file 'tests/charmhelpers/contrib/amulet/utils.py'
2433--- tests/charmhelpers/contrib/amulet/utils.py 1970-01-01 00:00:00 +0000
2434+++ tests/charmhelpers/contrib/amulet/utils.py 2014-07-31 10:21:42 +0000
2435@@ -0,0 +1,176 @@
2436+import ConfigParser
2437+import io
2438+import logging
2439+import re
2440+import sys
2441+import time
2442+
2443+
2444+class AmuletUtils(object):
2445+ """Amulet utilities.
2446+
2447+ This class provides common utility functions that are used by Amulet
2448+ tests.
2449+ """
2450+
2451+ def __init__(self, log_level=logging.ERROR):
2452+ self.log = self.get_logger(level=log_level)
2453+
2454+ def get_logger(self, name="amulet-logger", level=logging.DEBUG):
2455+ """Get a logger object that will log to stdout."""
2456+ log = logging
2457+ logger = log.getLogger(name)
2458+ fmt = log.Formatter("%(asctime)s %(funcName)s "
2459+ "%(levelname)s: %(message)s")
2460+
2461+ handler = log.StreamHandler(stream=sys.stdout)
2462+ handler.setLevel(level)
2463+ handler.setFormatter(fmt)
2464+
2465+ logger.addHandler(handler)
2466+ logger.setLevel(level)
2467+
2468+ return logger
2469+
2470+ def valid_ip(self, ip):
2471+ if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip):
2472+ return True
2473+ else:
2474+ return False
2475+
2476+ def valid_url(self, url):
2477+ p = re.compile(
2478+ r'^(?:http|ftp)s?://'
2479+ r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # noqa
2480+ r'localhost|'
2481+ r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
2482+ r'(?::\d+)?'
2483+ r'(?:/?|[/?]\S+)$',
2484+ re.IGNORECASE)
2485+ if p.match(url):
2486+ return True
2487+ else:
2488+ return False
2489+
2490+ def validate_services(self, commands):
2491+ """Validate services.
2492+
2493+ Verify the specified services are running on the corresponding
2494+ service units.
2495+ """
2496+ for k, v in commands.iteritems():
2497+ for cmd in v:
2498+ output, code = k.run(cmd)
2499+ if code != 0:
2500+ return "command `{}` returned {}".format(cmd, str(code))
2501+ return None
2502+
2503+ def _get_config(self, unit, filename):
2504+ """Get a ConfigParser object for parsing a unit's config file."""
2505+ file_contents = unit.file_contents(filename)
2506+ config = ConfigParser.ConfigParser()
2507+ config.readfp(io.StringIO(file_contents))
2508+ return config
2509+
2510+ def validate_config_data(self, sentry_unit, config_file, section,
2511+ expected):
2512+ """Validate config file data.
2513+
2514+ Verify that the specified section of the config file contains
2515+ the expected option key:value pairs.
2516+ """
2517+ config = self._get_config(sentry_unit, config_file)
2518+
2519+ if section != 'DEFAULT' and not config.has_section(section):
2520+ return "section [{}] does not exist".format(section)
2521+
2522+ for k in expected.keys():
2523+ if not config.has_option(section, k):
2524+ return "section [{}] is missing option {}".format(section, k)
2525+ if config.get(section, k) != expected[k]:
2526+ return "section [{}] {}:{} != expected {}:{}".format(
2527+ section, k, config.get(section, k), k, expected[k])
2528+ return None
2529+
2530+ def _validate_dict_data(self, expected, actual):
2531+ """Validate dictionary data.
2532+
2533+ Compare expected dictionary data vs actual dictionary data.
2534+ The values in the 'expected' dictionary can be strings, bools, ints,
2535+ longs, or can be a function that evaluates a variable and returns a
2536+ bool.
2537+ """
2538+ for k, v in expected.iteritems():
2539+ if k in actual:
2540+ if (isinstance(v, basestring) or
2541+ isinstance(v, bool) or
2542+ isinstance(v, (int, long))):
2543+ if v != actual[k]:
2544+ return "{}:{}".format(k, actual[k])
2545+ elif not v(actual[k]):
2546+ return "{}:{}".format(k, actual[k])
2547+ else:
2548+ return "key '{}' does not exist".format(k)
2549+ return None
2550+
2551+ def validate_relation_data(self, sentry_unit, relation, expected):
2552+ """Validate actual relation data based on expected relation data."""
2553+ actual = sentry_unit.relation(relation[0], relation[1])
2554+ self.log.debug('actual: {}'.format(repr(actual)))
2555+ return self._validate_dict_data(expected, actual)
2556+
2557+ def _validate_list_data(self, expected, actual):
2558+ """Compare expected list vs actual list data."""
2559+ for e in expected:
2560+ if e not in actual:
2561+ return "expected item {} not found in actual list".format(e)
2562+ return None
2563+
2564+ def not_null(self, string):
2565+ if string is not None:
2566+ return True
2567+ else:
2568+ return False
2569+
2570+ def _get_file_mtime(self, sentry_unit, filename):
2571+ """Get last modification time of file."""
2572+ return sentry_unit.file_stat(filename)['mtime']
2573+
2574+ def _get_dir_mtime(self, sentry_unit, directory):
2575+ """Get last modification time of directory."""
2576+ return sentry_unit.directory_stat(directory)['mtime']
2577+
2578+ def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False):
2579+ """Get process' start time.
2580+
2581+ Determine start time of the process based on the last modification
2582+ time of the /proc/pid directory. If pgrep_full is True, the process
2583+ name is matched against the full command line.
2584+ """
2585+ if pgrep_full:
2586+ cmd = 'pgrep -o -f {}'.format(service)
2587+ else:
2588+ cmd = 'pgrep -o {}'.format(service)
2589+ proc_dir = '/proc/{}'.format(sentry_unit.run(cmd)[0].strip())
2590+ return self._get_dir_mtime(sentry_unit, proc_dir)
2591+
2592+ def service_restarted(self, sentry_unit, service, filename,
2593+ pgrep_full=False, sleep_time=20):
2594+ """Check if service was restarted.
2595+
2596+ Compare a service's start time vs a file's last modification time
2597+ (such as a config file for that service) to determine if the service
2598+ has been restarted.
2599+ """
2600+ time.sleep(sleep_time)
2601+ if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
2602+ self._get_file_mtime(sentry_unit, filename)):
2603+ return True
2604+ else:
2605+ return False
2606+
2607+ def relation_error(self, name, data):
2608+ return 'unexpected relation data in {} - {}'.format(name, data)
2609+
2610+ def endpoint_error(self, name, data):
2611+ return 'unexpected endpoint data in {} - {}'.format(name, data)
2612
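The restart check above reduces to a single timestamp comparison; a
self-contained sketch with stubbed stat values in place of the real /proc and
file lookups:

    # A service counts as restarted when the oldest matching process
    # started at or after the config file was last written.
    proc_start_mtime = 1406800000   # mtime of /proc/<oldest matching pid>
    config_file_mtime = 1406799990  # mtime of the rendered config file
    assert proc_start_mtime >= config_file_mtime  # restarted
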
2613=== added directory 'tests/charmhelpers/contrib/openstack'
2614=== added file 'tests/charmhelpers/contrib/openstack/__init__.py'
2615=== added directory 'tests/charmhelpers/contrib/openstack/amulet'
2616=== added file 'tests/charmhelpers/contrib/openstack/amulet/__init__.py'
2617=== added file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
2618--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000
2619+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2014-07-31 10:21:42 +0000
2620@@ -0,0 +1,61 @@
2621+from charmhelpers.contrib.amulet.deployment import (
2622+ AmuletDeployment
2623+)
2624+
2625+
2626+class OpenStackAmuletDeployment(AmuletDeployment):
2627+ """OpenStack amulet deployment.
2628+
2629+ This class inherits from AmuletDeployment and has additional support
2630+ that is specifically for use by OpenStack charms.
2631+ """
2632+
2633+ def __init__(self, series=None, openstack=None, source=None):
2634+ """Initialize the deployment environment."""
2635+ super(OpenStackAmuletDeployment, self).__init__(series)
2636+ self.openstack = openstack
2637+ self.source = source
2638+
2639+ def _add_services(self, this_service, other_services):
2640+ """Add services to the deployment and set openstack-origin."""
2641+ super(OpenStackAmuletDeployment, self)._add_services(this_service,
2642+ other_services)
2643+ name = 0
2644+ services = other_services
2645+ services.append(this_service)
2646+ use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph']
2647+
2648+ if self.openstack:
2649+ for svc in services:
2650+ if svc[name] not in use_source:
2651+ config = {'openstack-origin': self.openstack}
2652+ self.d.configure(svc[name], config)
2653+
2654+ if self.source:
2655+ for svc in services:
2656+ if svc[name] in use_source:
2657+ config = {'source': self.source}
2658+ self.d.configure(svc[name], config)
2659+
2660+ def _configure_services(self, configs):
2661+ """Configure all of the services."""
2662+ for service, config in configs.iteritems():
2663+ self.d.configure(service, config)
2664+
2665+ def _get_openstack_release(self):
2666+ """Get openstack release.
2667+
2668+ Return an integer representing the enum value of the openstack
2669+ release.
2670+ """
2671+ (self.precise_essex, self.precise_folsom, self.precise_grizzly,
2672+ self.precise_havana, self.precise_icehouse,
2673+ self.trusty_icehouse) = range(6)
2674+ releases = {
2675+ ('precise', None): self.precise_essex,
2676+ ('precise', 'cloud:precise-folsom'): self.precise_folsom,
2677+ ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
2678+ ('precise', 'cloud:precise-havana'): self.precise_havana,
2679+ ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
2680+ ('trusty', None): self.trusty_icehouse}
2681+ return releases[(self.series, self.openstack)]
2682
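Since the release enum is just range(6) keyed on (series, openstack) pairs,
tests can gate on simple integer comparisons; a standalone sketch of the
pattern:

    # Standalone sketch of the release-gating used throughout the tests.
    (precise_essex, precise_folsom, precise_grizzly,
     precise_havana, precise_icehouse, trusty_icehouse) = range(6)

    releases = {
        ('precise', None): precise_essex,
        ('precise', 'cloud:precise-icehouse'): precise_icehouse,
        ('trusty', None): trusty_icehouse,
    }
    release = releases[('trusty', None)]
    if release >= precise_icehouse:
        print('expect swift-container-sync to be running')
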
2683=== added file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
2684--- tests/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000
2685+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2014-07-31 10:21:42 +0000
2686@@ -0,0 +1,275 @@
2687+import logging
2688+import os
2689+import time
2690+import urllib
2691+
2692+import glanceclient.v1.client as glance_client
2693+import keystoneclient.v2_0 as keystone_client
2694+import novaclient.v1_1.client as nova_client
2695+
2696+from charmhelpers.contrib.amulet.utils import (
2697+ AmuletUtils
2698+)
2699+
2700+DEBUG = logging.DEBUG
2701+ERROR = logging.ERROR
2702+
2703+
2704+class OpenStackAmuletUtils(AmuletUtils):
2705+ """OpenStack amulet utilities.
2706+
2707+ This class inherits from AmuletUtils and has additional support
2708+ that is specifically for use by OpenStack charms.
2709+ """
2710+
2711+ def __init__(self, log_level=ERROR):
2712+ """Initialize the deployment environment."""
2713+ super(OpenStackAmuletUtils, self).__init__(log_level)
2714+
2715+ def validate_endpoint_data(self, endpoints, admin_port, internal_port,
2716+ public_port, expected):
2717+ """Validate endpoint data.
2718+
2719+ Validate actual endpoint data vs expected endpoint data. The ports
2720+ are used to find the matching endpoint.
2721+ """
2722+ found = False
2723+ for ep in endpoints:
2724+ self.log.debug('endpoint: {}'.format(repr(ep)))
2725+ if (admin_port in ep.adminurl and
2726+ internal_port in ep.internalurl and
2727+ public_port in ep.publicurl):
2728+ found = True
2729+ actual = {'id': ep.id,
2730+ 'region': ep.region,
2731+ 'adminurl': ep.adminurl,
2732+ 'internalurl': ep.internalurl,
2733+ 'publicurl': ep.publicurl,
2734+ 'service_id': ep.service_id}
2735+ ret = self._validate_dict_data(expected, actual)
2736+ if ret:
2737+ return 'unexpected endpoint data - {}'.format(ret)
2738+
2739+ if not found:
2740+ return 'endpoint not found'
2741+
2742+ def validate_svc_catalog_endpoint_data(self, expected, actual):
2743+ """Validate service catalog endpoint data.
2744+
2745+ Validate a list of actual service catalog endpoints vs a list of
2746+ expected service catalog endpoints.
2747+ """
2748+ self.log.debug('actual: {}'.format(repr(actual)))
2749+ for k, v in expected.iteritems():
2750+ if k in actual:
2751+ ret = self._validate_dict_data(expected[k][0], actual[k][0])
2752+ if ret:
2753+ return self.endpoint_error(k, ret)
2754+ else:
2755+ return "endpoint {} does not exist".format(k)
2756+ return ret
2757+
2758+ def validate_tenant_data(self, expected, actual):
2759+ """Validate tenant data.
2760+
2761+ Validate a list of actual tenant data vs list of expected tenant
2762+ data.
2763+ """
2764+ self.log.debug('actual: {}'.format(repr(actual)))
2765+ for e in expected:
2766+ found = False
2767+ for act in actual:
2768+ a = {'enabled': act.enabled, 'description': act.description,
2769+ 'name': act.name, 'id': act.id}
2770+ if e['name'] == a['name']:
2771+ found = True
2772+ ret = self._validate_dict_data(e, a)
2773+ if ret:
2774+ return "unexpected tenant data - {}".format(ret)
2775+ if not found:
2776+ return "tenant {} does not exist".format(e['name'])
2777+ return ret
2778+
2779+ def validate_role_data(self, expected, actual):
2780+ """Validate role data.
2781+
2782+ Validate a list of actual role data vs a list of expected role
2783+ data.
2784+ """
2785+ self.log.debug('actual: {}'.format(repr(actual)))
2786+ for e in expected:
2787+ found = False
2788+ for act in actual:
2789+ a = {'name': act.name, 'id': act.id}
2790+ if e['name'] == a['name']:
2791+ found = True
2792+ ret = self._validate_dict_data(e, a)
2793+ if ret:
2794+ return "unexpected role data - {}".format(ret)
2795+ if not found:
2796+ return "role {} does not exist".format(e['name'])
2797+ return ret
2798+
2799+ def validate_user_data(self, expected, actual):
2800+ """Validate user data.
2801+
2802+ Validate a list of actual user data vs a list of expected user
2803+ data.
2804+ """
2805+ self.log.debug('actual: {}'.format(repr(actual)))
2806+ for e in expected:
2807+ found = False
2808+ for act in actual:
2809+ a = {'enabled': act.enabled, 'name': act.name,
2810+ 'email': act.email, 'tenantId': act.tenantId,
2811+ 'id': act.id}
2812+ if e['name'] == a['name']:
2813+ found = True
2814+ ret = self._validate_dict_data(e, a)
2815+ if ret:
2816+ return "unexpected user data - {}".format(ret)
2817+ if not found:
2818+ return "user {} does not exist".format(e['name'])
2819+ return ret
2820+
2821+ def validate_flavor_data(self, expected, actual):
2822+ """Validate flavor data.
2823+
2824+ Validate a list of actual flavors vs a list of expected flavors.
2825+ """
2826+ self.log.debug('actual: {}'.format(repr(actual)))
2827+ act = [a.name for a in actual]
2828+ return self._validate_list_data(expected, act)
2829+
2830+ def tenant_exists(self, keystone, tenant):
2831+ """Return True if tenant exists."""
2832+ return tenant in [t.name for t in keystone.tenants.list()]
2833+
2834+ def authenticate_keystone_admin(self, keystone_sentry, user, password,
2835+ tenant):
2836+ """Authenticates admin user with the keystone admin endpoint."""
2837+ unit = keystone_sentry
2838+ service_ip = unit.relation('shared-db',
2839+ 'mysql:shared-db')['private-address']
2840+ ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
2841+ return keystone_client.Client(username=user, password=password,
2842+ tenant_name=tenant, auth_url=ep)
2843+
2844+ def authenticate_keystone_user(self, keystone, user, password, tenant):
2845+ """Authenticates a regular user with the keystone public endpoint."""
2846+ ep = keystone.service_catalog.url_for(service_type='identity',
2847+ endpoint_type='publicURL')
2848+ return keystone_client.Client(username=user, password=password,
2849+ tenant_name=tenant, auth_url=ep)
2850+
2851+ def authenticate_glance_admin(self, keystone):
2852+ """Authenticates admin user with glance."""
2853+ ep = keystone.service_catalog.url_for(service_type='image',
2854+ endpoint_type='adminURL')
2855+ return glance_client.Client(ep, token=keystone.auth_token)
2856+
2857+ def authenticate_nova_user(self, keystone, user, password, tenant):
2858+ """Authenticates a regular user with nova-api."""
2859+ ep = keystone.service_catalog.url_for(service_type='identity',
2860+ endpoint_type='publicURL')
2861+ return nova_client.Client(username=user, api_key=password,
2862+ project_id=tenant, auth_url=ep)
2863+
2864+ def create_cirros_image(self, glance, image_name):
2865+ """Download the latest cirros image and upload it to glance."""
2866+ http_proxy = os.getenv('AMULET_HTTP_PROXY')
2867+ self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
2868+ if http_proxy:
2869+ proxies = {'http': http_proxy}
2870+ opener = urllib.FancyURLopener(proxies)
2871+ else:
2872+ opener = urllib.FancyURLopener()
2873+
2874+ f = opener.open("http://download.cirros-cloud.net/version/released")
2875+ version = f.read().strip()
2876+ cirros_img = "tests/cirros-{}-x86_64-disk.img".format(version)
2877+
2878+ if not os.path.exists(cirros_img):
2879+ cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
2880+ version, cirros_img)
2881+ opener.retrieve(cirros_url, cirros_img)
2882+ f.close()
2883+
2884+ with open(cirros_img) as f:
2885+ image = glance.images.create(name=image_name, is_public=True,
2886+ disk_format='qcow2',
2887+ container_format='bare', data=f)
2888+ count = 1
2889+ status = image.status
2890+ while status != 'active' and count < 10:
2891+ time.sleep(3)
2892+ image = glance.images.get(image.id)
2893+ status = image.status
2894+ self.log.debug('image status: {}'.format(status))
2895+ count += 1
2896+
2897+ if status != 'active':
2898+ self.log.error('image creation timed out')
2899+ return None
2900+
2901+ return image
2902+
2903+ def delete_image(self, glance, image):
2904+ """Delete the specified image."""
2905+ num_before = len(list(glance.images.list()))
2906+ glance.images.delete(image)
2907+
2908+ count = 1
2909+ num_after = len(list(glance.images.list()))
2910+ while num_after != (num_before - 1) and count < 10:
2911+ time.sleep(3)
2912+ num_after = len(list(glance.images.list()))
2913+ self.log.debug('number of images: {}'.format(num_after))
2914+ count += 1
2915+
2916+ if num_after != (num_before - 1):
2917+ self.log.error('image deletion timed out')
2918+ return False
2919+
2920+ return True
2921+
2922+ def create_instance(self, nova, image_name, instance_name, flavor):
2923+ """Create the specified instance."""
2924+ image = nova.images.find(name=image_name)
2925+ flavor = nova.flavors.find(name=flavor)
2926+ instance = nova.servers.create(name=instance_name, image=image,
2927+ flavor=flavor)
2928+
2929+ count = 1
2930+ status = instance.status
2931+ while status != 'ACTIVE' and count < 60:
2932+ time.sleep(3)
2933+ instance = nova.servers.get(instance.id)
2934+ status = instance.status
2935+ self.log.debug('instance status: {}'.format(status))
2936+ count += 1
2937+
2938+ if status != 'ACTIVE':
2939+ self.log.error('instance creation timed out')
2940+ return None
2941+
2942+ return instance
2943+
2944+ def delete_instance(self, nova, instance):
2945+ """Delete the specified instance."""
2946+ num_before = len(list(nova.servers.list()))
2947+ nova.servers.delete(instance)
2948+
2949+ count = 1
2950+ num_after = len(list(nova.servers.list()))
2951+ while num_after != (num_before - 1) and count < 10:
2952+ time.sleep(3)
2953+ num_after = len(list(nova.servers.list()))
2954+ self.log.debug('number of instances: {}'.format(num_after))
2955+ count += 1
2956+
2957+ if num_after != (num_before - 1):
2958+ self.log.error('instance deletion timed out')
2959+ return False
2960+
2961+ return True
2962
2963=== modified file 'unit_tests/test_swift_storage_context.py'
2964--- unit_tests/test_swift_storage_context.py 2013-09-27 16:33:06 +0000
2965+++ unit_tests/test_swift_storage_context.py 2014-07-31 10:21:42 +0000
2966@@ -15,6 +15,7 @@
2967
2968
2969 class SwiftStorageContextTests(CharmTestCase):
2970+
2971 def setUp(self):
2972 super(SwiftStorageContextTests, self).setUp(swift_context, TO_PATCH)
2973 self.config.side_effect = self.test_config.get
2974@@ -56,16 +57,22 @@
2975 _file.write.assert_called_with('RSYNC_ENABLE=true\n')
2976
2977 def test_swift_storage_server_context(self):
2978+ import psutil
2979 self.unit_private_ip.return_value = '10.0.0.5'
2980 self.test_config.set('account-server-port', '500')
2981 self.test_config.set('object-server-port', '501')
2982 self.test_config.set('container-server-port', '502')
2983+ self.test_config.set('object-server-threads-per-disk', '3')
2984+ self.test_config.set('worker-multiplier', '3')
2985+ num_workers = psutil.NUM_CPUS * 3
2986 ctxt = swift_context.SwiftStorageServerContext()
2987 result = ctxt()
2988 ex = {
2989 'container_server_port': '502',
2990 'object_server_port': '501',
2991 'account_server_port': '500',
2992- 'local_ip': '10.0.0.5'
2993+ 'local_ip': '10.0.0.5',
2994+ 'object_server_threads_per_disk': '3',
2995+ 'workers': str(num_workers),
2996 }
2997 self.assertEquals(ex, result)
2998
2999=== modified file 'unit_tests/test_swift_storage_relations.py'
3000--- unit_tests/test_swift_storage_relations.py 2013-09-27 16:33:06 +0000
3001+++ unit_tests/test_swift_storage_relations.py 2014-07-31 10:21:42 +0000
3002@@ -40,6 +40,7 @@
3003
3004
3005 class SwiftStorageRelationsTests(CharmTestCase):
3006+
3007 def setUp(self):
3008 super(SwiftStorageRelationsTests, self).setUp(hooks,
3009 TO_PATCH)
3010
3011=== modified file 'unit_tests/test_swift_storage_utils.py'
3012--- unit_tests/test_swift_storage_utils.py 2014-03-20 13:50:49 +0000
3013+++ unit_tests/test_swift_storage_utils.py 2014-07-31 10:21:42 +0000
3014@@ -17,6 +17,7 @@
3015 'ensure_block_device',
3016 'clean_storage',
3017 'is_block_device',
3018+ 'is_device_mounted',
3019 'get_os_codename_package',
3020 'get_os_codename_install_source',
3021 'unit_private_ip',
3022@@ -62,7 +63,16 @@
3023 }
3024
3025
3026+REAL_WORLD_PARTITIONS = """
3027+major minor #blocks name
3028+
3029+ 8 0 117220824 sda
3030+ 8 1 117219800 sda1
3031+ 8 16 119454720 sdb
3032+"""
3033+
3034 class SwiftStorageUtilsTests(CharmTestCase):
3035+
3036 def setUp(self):
3037 super(SwiftStorageUtilsTests, self).setUp(swift_utils, TO_PATCH)
3038 self.config.side_effect = self.test_config.get
3039@@ -92,7 +102,7 @@
3040 wgets = []
3041 for s in ['account', 'object', 'container']:
3042 _c = call(['wget', '%s/%s.ring.gz' % (url, s),
3043- '-O', '/etc/swift/%s.ring.gz' % s])
3044+ '-O', '/etc/swift/%s.ring.gz' % s])
3045 wgets.append(_c)
3046 self.assertEquals(wgets, self.check_call.call_args_list)
3047
3048@@ -101,12 +111,17 @@
3049 self.assertEquals(swift_utils.determine_block_devices(), None)
3050
3051 def _fake_ensure(self, bdev):
3052- return bdev.split('|').pop(0)
3053+ # /dev/vdz is a missing dev
3054+ if '/dev/vdz' in bdev:
3055+ return None
3056+ else:
3057+ return bdev.split('|').pop(0)
3058
3059 @patch.object(swift_utils, 'ensure_block_device')
3060 def test_determine_block_device_single_dev(self, _ensure):
3061 _ensure.side_effect = self._fake_ensure
3062- self.test_config.set('block-device', '/dev/vdb')
3063+ bdevs = '/dev/vdb'
3064+ self.test_config.set('block-device', bdevs)
3065 result = swift_utils.determine_block_devices()
3066 self.assertEquals(['/dev/vdb'], result)
3067
3068@@ -119,6 +134,15 @@
3069 ex = ['/dev/vdb', '/dev/vdc', '/tmp/swift.img']
3070 self.assertEquals(ex, result)
3071
3072+ @patch.object(swift_utils, 'ensure_block_device')
3073+ def test_determine_block_device_with_missing(self, _ensure):
3074+ _ensure.side_effect = self._fake_ensure
3075+ bdevs = '/dev/vdb /srv/swift.img|20G /dev/vdz'
3076+ self.test_config.set('block-device', bdevs)
3077+ result = swift_utils.determine_block_devices()
3078+ ex = ['/dev/vdb', '/srv/swift.img']
3079+ self.assertEqual(ex, result)
3080+
3081 @patch.object(swift_utils, 'find_block_devices')
3082 @patch.object(swift_utils, 'ensure_block_device')
3083 def test_determine_block_device_guess_dev(self, _ensure, _find):
3084@@ -156,8 +180,15 @@
3085 group='swift')
3086 self.mount.assert_called('/dev/vdb', '/srv/node/vdb', persist=True)
3087
3088+ def _fake_is_device_mounted(self, device):
3089+ if device in ["/dev/sda", "/dev/vda", "/dev/cciss/c0d0"]:
3090+ return True
3091+ else:
3092+ return False
3093+
3094 def test_find_block_devices(self):
3095 self.is_block_device.return_value = True
3096+ self.is_device_mounted.side_effect = self._fake_is_device_mounted
3097 with patch_open() as (_open, _file):
3098 _file.read.return_value = PROC_PARTITIONS
3099 _file.readlines = MagicMock()
3100@@ -166,6 +197,18 @@
3101 ex = ['/dev/sdb', '/dev/vdb', '/dev/cciss/c1d0']
3102 self.assertEquals(ex, result)
3103
3104+ def test_find_block_devices_real_world(self):
3105+ self.is_block_device.return_value = True
3106+ side_effect = lambda x: x in ["/dev/sda", "/dev/sda1"]
3107+ self.is_device_mounted.side_effect = side_effect
3108+ with patch_open() as (_open, _file):
3109+ _file.read.return_value = REAL_WORLD_PARTITIONS
3110+ _file.readlines = MagicMock()
3111+ _file.readlines.return_value = REAL_WORLD_PARTITIONS.split('\n')
3112+ result = swift_utils.find_block_devices()
3113+ expected = ["/dev/sdb"]
3114+ self.assertEquals(expected, result)
3115+
3116 def test_save_script_rc(self):
3117 self.unit_private_ip.return_value = '10.0.0.1'
3118 swift_utils.save_script_rc()
3119
3120=== modified file 'unit_tests/test_utils.py'
3121--- unit_tests/test_utils.py 2013-07-19 20:44:37 +0000
3122+++ unit_tests/test_utils.py 2014-07-31 10:21:42 +0000
3123@@ -45,6 +45,7 @@
3124
3125
3126 class CharmTestCase(unittest.TestCase):
3127+
3128 def setUp(self, obj, patches):
3129 super(CharmTestCase, self).setUp()
3130 self.patches = patches
3131@@ -65,6 +66,7 @@
3132
3133
3134 class TestConfig(object):
3135+
3136 def __init__(self):
3137 self.config = get_default_config()
3138
3139@@ -80,12 +82,13 @@
3140 return self.config
3141
3142 def set(self, attr, value):
3143- if attr not in self.config:
3144- raise KeyError
3145- self.config[attr] = value
3146+ if attr not in self.config:
3147+ raise KeyError
3148+ self.config[attr] = value
3149
3150
3151 class TestRelation(object):
3152+
3153 def __init__(self, relation_data={}):
3154 self.relation_data = relation_data
3155
