Merge lp:~openstack-charmers/charms/precise/cinder/icehouse into lp:~openstack-charmers-archive/charms/precise/cinder/trunk

Proposed by James Page
Status: Merged
Merged at revision: 33
Proposed branch: lp:~openstack-charmers/charms/precise/cinder/icehouse
Merge into: lp:~openstack-charmers-archive/charms/precise/cinder/trunk
Diff against target: 2205 lines (+1053/-348)
27 files modified
Makefile (+1/-2)
config.yaml (+144/-124)
hooks/charmhelpers/contrib/hahelpers/apache.py (+9/-8)
hooks/charmhelpers/contrib/openstack/context.py (+106/-25)
hooks/charmhelpers/contrib/openstack/neutron.py (+14/-4)
hooks/charmhelpers/contrib/openstack/utils.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+12/-3)
hooks/cinder_contexts.py (+34/-1)
hooks/cinder_hooks.py (+68/-13)
hooks/cinder_utils.py (+106/-56)
metadata.yaml (+5/-0)
revision (+1/-1)
setup.cfg (+1/-2)
templates/cinder.conf (+19/-5)
templates/grizzly/cinder.conf (+57/-0)
templates/havana/api-paste.ini (+3/-0)
templates/icehouse/api-paste.ini (+55/-0)
templates/icehouse/cinder.conf (+68/-0)
templates/parts/backends (+18/-0)
templates/parts/database (+3/-0)
templates/parts/rabbitmq (+21/-0)
templates/parts/section-database (+4/-0)
unit_tests/test_cinder_contexts.py (+42/-3)
unit_tests/test_cinder_hooks.py (+98/-41)
unit_tests/test_cinder_utils.py (+158/-57)
unit_tests/test_cluster_hooks.py (+2/-3)
unit_tests/test_utils.py (+3/-0)
To merge this branch: bzr merge lp:~openstack-charmers/charms/precise/cinder/icehouse
Reviewer: OpenStack Charmers (status: Pending)
Review via email: mp+215223@code.launchpad.net

Description of the change

Update for OpenStack Icehouse

Support for whitelisting block devices via the block-device config option.

Support for multi-backend storage through subordinate interfaces.
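
For reference, a minimal sketch of the new whitelist parsing, modelled on _parse_block_device in hooks/cinder_utils.py from this branch (the DEFAULT_LOOPBACK_SIZE value here is an assumption; the charm defines its own default):

# Sketch of the block-device whitelist handling added in hooks/cinder_utils.py.
# Bare names are rooted under /dev; other absolute paths map to loopback files,
# optionally sized with a '|<size>' suffix; 'None'/'none' disables local storage.
DEFAULT_LOOPBACK_SIZE = '5G'  # assumption: the real default lives in cinder_utils.py

def parse_block_device(block_device):
    """Return (device, size); size is 0 for real block devices."""
    if block_device in ('None', 'none', None):
        return (None, 0)
    if block_device.startswith('/dev/'):
        return (block_device, 0)
    if block_device.startswith('/'):
        parts = block_device.split('|')
        if len(parts) == 2:
            return (parts[0], parts[1])
        return (block_device, DEFAULT_LOOPBACK_SIZE)
    return ('/dev/{}'.format(block_device), 0)

# e.g. block-device="sdb /dev/sdc /srv/loop0|10G" yields:
# [('/dev/sdb', 0), ('/dev/sdc', 0), ('/srv/loop0', '10G')]
print([parse_block_device(d) for d in 'sdb /dev/sdc /srv/loop0|10G'.split()])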

33. By James Page

[james-page,hazmat,yolanda.robla,r=james-page,t=*]

Support for Icehouse on 12.04 and 14.04
Support for Active/Active and SSL RabbitMQ
Support for SSL MySQL
Support for SSL endpoints
Support for PostgreSQL
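
For reference, a minimal sketch of how the multi-backend support is wired, modelled on StorageBackendContext in hooks/cinder_contexts.py: each subordinate unit on the storage-backend relation publishes backend_name, and the collected names are rendered as enabled_backends in cinder.conf via templates/parts/backends. The relation helpers are passed in here as stand-ins for the charmhelpers hookenv functions:

# Sketch of the storage-backend context; relation_ids/related_units/relation_get
# stand in for the charmhelpers.core.hookenv equivalents used by the charm.
def storage_backend_context(relation_ids, related_units, relation_get):
    backends = []
    for rid in relation_ids('storage-backend'):
        for unit in related_units(rid):
            name = relation_get('backend_name', unit, rid)
            if name:
                backends.append(name)
    # rendered by templates/parts/backends as: enabled_backends = <comma list>
    return {'backends': ','.join(backends)} if backends else {}

# e.g. with stubbed helpers (hypothetical data):
print(storage_backend_context(
    lambda rel: ['storage-backend:0'],
    lambda rid: ['cinder-ceph/0', 'cinder-vmware/0'],
    lambda key, unit, rid: unit.split('/')[0]))
# -> {'backends': 'cinder-ceph,cinder-vmware'}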

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2013-10-19 16:58:37 +0000
3+++ Makefile 2014-04-16 08:36:27 +0000
4@@ -2,8 +2,7 @@
5 PYTHON := /usr/bin/env python
6
7 lint:
8- @flake8 --exclude hooks/charmhelpers hooks
9- @flake8 --exclude hooks/charmhelpers unit_tests
10+ @flake8 --exclude hooks/charmhelpers hooks unit_tests
11 @charm proof
12
13 test:
14
15=== modified file 'config.yaml'
16--- config.yaml 2014-03-25 18:44:23 +0000
17+++ config.yaml 2014-04-16 08:36:27 +0000
18@@ -1,125 +1,145 @@
19 options:
20- openstack-origin:
21- default: distro
22- type: string
23- description: |
24- Repository from which to install. May be one of the following:
25- distro (default), ppa:somecustom/ppa, a deb url sources entry,
26- or a supported Cloud Archive release pocket.
27-
28- Supported Cloud Archive sources include: cloud:precise-folsom,
29- cloud:precise-folsom/updates, cloud:precise-folsom/staging,
30- cloud:precise-folsom/proposed.
31-
32- When deploying to Precise, the default distro option will use
33- the cloud:precise-folsom/updates repository instead, since Cinder
34- was not available in the Ubuntu archive for Precise and is only
35- available via the Ubuntu Cloud Archive.
36- enabled-services:
37- default: all
38- type: string
39- description: |
40- If splitting cinder services between units, define which services
41- to install and configure.
42- block-device:
43- default: sdb
44- type: string
45- description: |
46- The *available* block device on which to create LVM volume group.
47- May also be set to None for deployments that will not need local
48- storage (eg, Ceph/RBD-backed volumes).
49- ceph-osd-replication-count:
50- default: 2
51- type: int
52- description: |
53- This value dictates the number of replicas ceph must make of any
54- object it stores withing the cinder rbd pool. Of course, this only
55- applies if using Ceph as a backend store. Note that once the cinder
56- rbd pool has been created, changing this value will not have any
57- effect (although it can be changed in ceph by manually configuring
58- your ceph cluster).
59- volume-group:
60- default: cinder-volumes
61- type: string
62- description: Name of volume group to create and store Cinder volumes.
63- overwrite:
64- default: "false"
65- type: string
66- description: |
67- If true, charm will attempt to overwrite block devices containin
68- previous filesystems or LVM, assuming it is not in use.
69- database-user:
70- default: cinder
71- type: string
72- description: Username to request database access.
73- database:
74- default: cinder
75- type: string
76- description: Database to request access.
77- rabbit-user:
78- default: cinder
79- type: string
80- description: Username to request access on rabbitmq-server.
81- rabbit-vhost:
82- default: openstack
83- type: string
84- description: RabbitMQ virtual host to request access on rabbitmq-server.
85- api-listening-port:
86- default: 8776
87- type: int
88- description: OpenStack Volume API listening port.
89- region:
90- default: RegionOne
91- type: string
92- description: OpenStack Region
93- glance-api-version:
94- default: 1
95- type: int
96- description: |
97- Newer storage drivers may require the v2 Glance API to perform certain
98- actions e.g. the RBD driver requires requires this to support COW
99- cloning of images. This option will default to v1 for backwards
100- compatibility older glance services.
101- use-syslog:
102- type: boolean
103- default: False
104- description: |
105- If set to True, supporting services will log to syslog.
106- # HA configuration settings
107- vip:
108- type: string
109- description: "Virtual IP to use to front cinder API in ha configuration"
110- vip_iface:
111- type: string
112- default: eth0
113- description: "Network Interface where to place the Virtual IP"
114- vip_cidr:
115- type: int
116- default: 24
117- description: "Netmask that will be used for the Virtual IP"
118- ha-bindiface:
119- type: string
120- default: eth0
121- description: |
122- Default network interface on which HA cluster will bind to communication
123- with the other members of the HA Cluster.
124- ha-mcastport:
125- type: int
126- default: 5401
127- description: |
128- Default multicast port number that will be used to communicate between
129- HA Cluster nodes.
130- # Per-service HTTPS configuration.
131- ssl_cert:
132- type: string
133- description: |
134- SSL certificate to install and use for API ports. Setting this value
135- and ssl_key will enable reverse proxying, point Glance's entry in the
136- Keystone catalog to use https, and override any certficiate and key
137- issued by Keystone (if it is configured to do so).
138- ssl_key:
139- type: string
140- description: SSL key to use with certificate specified as ssl_cert.
141- config-flags:
142- type: string
143- description: Comma separated list of key=value config flags to be set in cinder.conf.
144+ openstack-origin:
145+ default: distro
146+ type: string
147+ description: |
148+ Repository from which to install. May be one of the following:
149+ distro (default), ppa:somecustom/ppa, a deb url sources entry,
150+ or a supported Cloud Archive release pocket.
151+
152+ Supported Cloud Archive sources include: cloud:precise-folsom,
153+ cloud:precise-folsom/updates, cloud:precise-folsom/staging,
154+ cloud:precise-folsom/proposed.
155+
156+ When deploying to Precise, the default distro option will use
157+ the cloud:precise-folsom/updates repository instead, since Cinder
158+ was not available in the Ubuntu archive for Precise and is only
159+ available via the Ubuntu Cloud Archive.
160+ enabled-services:
161+ default: all
162+ type: string
163+ description: |
164+ If splitting cinder services between units, define which services
165+ to install and configure.
166+ block-device:
167+ default: sdb
168+ type: string
169+ description: |
170+ The block devices on which to create LVM volume group.
171+
172+ May also be set to None for deployments that will not need local
173+ storage (eg, Ceph/RBD-backed volumes).
174+
175+ This can also be a space delimited list of block devices to attempt
176+ to use in the cinder LVM volume group - each block device detected
177+ will be added to the available physical volumes in the volume group.
178+ ceph-osd-replication-count:
179+ default: 2
180+ type: int
181+ description: |
182+ This value dictates the number of replicas ceph must make of any
183+ object it stores withing the cinder rbd pool. Of course, this only
184+ applies if using Ceph as a backend store. Note that once the cinder
185+ rbd pool has been created, changing this value will not have any
186+ effect (although it can be changed in ceph by manually configuring
187+ your ceph cluster).
188+ volume-group:
189+ default: cinder-volumes
190+ type: string
191+ description: Name of volume group to create and store Cinder volumes.
192+ overwrite:
193+ default: "false"
194+ type: string
195+ description: |
196+ If true, charm will attempt to overwrite block devices containin
197+ previous filesystems or LVM, assuming it is not in use.
198+ database-user:
199+ default: cinder
200+ type: string
201+ description: Username to request database access.
202+ database:
203+ default: cinder
204+ type: string
205+ description: Database to request access.
206+ rabbit-user:
207+ default: cinder
208+ type: string
209+ description: Username to request access on rabbitmq-server.
210+ rabbit-vhost:
211+ default: openstack
212+ type: string
213+ description: RabbitMQ virtual host to request access on rabbitmq-server.
214+ api-listening-port:
215+ default: 8776
216+ type: int
217+ description: OpenStack Volume API listening port.
218+ region:
219+ default: RegionOne
220+ type: string
221+ description: OpenStack Region
222+ glance-api-version:
223+ default: 1
224+ type: int
225+ description: |
226+ Newer storage drivers may require the v2 Glance API to perform certain
227+ actions e.g. the RBD driver requires requires this to support COW
228+ cloning of images. This option will default to v1 for backwards
229+ compatibility older glance services.
230+ use-syslog:
231+ type: boolean
232+ default: False
233+ description: |
234+ By default, all services will log into their corresponding log files.
235+ Setting this to True will force all services to log to the syslog.
236+ debug:
237+ default: False
238+ type: boolean
239+ description: Enable debug logging
240+ verbose:
241+ default: False
242+ type: boolean
243+ description: Enable verbose logging
244+ # HA configuration settings
245+ vip:
246+ type: string
247+ description: "Virtual IP to use to front cinder API in ha configuration"
248+ vip_iface:
249+ type: string
250+ default: eth0
251+ description: "Network Interface where to place the Virtual IP"
252+ vip_cidr:
253+ type: int
254+ default: 24
255+ description: "Netmask that will be used for the Virtual IP"
256+ ha-bindiface:
257+ type: string
258+ default: eth0
259+ description: |
260+ Default network interface on which HA cluster will bind to communication
261+ with the other members of the HA Cluster.
262+ ha-mcastport:
263+ type: int
264+ default: 5401
265+ description: |
266+ Default multicast port number that will be used to communicate between
267+ HA Cluster nodes.
268+ # Per-service HTTPS configuration.
269+ ssl_cert:
270+ type: string
271+ description: |
272+ SSL certificate to install and use for API ports. Setting this value
273+ and ssl_key will enable reverse proxying, point Glance's entry in the
274+ Keystone catalog to use https, and override any certficiate and key
275+ issued by Keystone (if it is configured to do so).
276+ ssl_key:
277+ type: string
278+ description: SSL key to use with certificate specified as ssl_cert.
279+ ssl_ca:
280+ type: string
281+ description: |
282+ SSL CA to use with the certificate and key provided - this is only
283+ required if you are providing a privately signed ssl_cert and ssl_key.
284+ config-flags:
285+ type: string
286+ description: Comma separated list of key=value config flags to be set in cinder.conf.
287+
288
289=== added symlink 'hooks/amqp-relation-departed'
290=== target is u'cinder_hooks.py'
291=== modified file 'hooks/charmhelpers/contrib/hahelpers/apache.py'
292--- hooks/charmhelpers/contrib/hahelpers/apache.py 2013-10-17 21:48:08 +0000
293+++ hooks/charmhelpers/contrib/hahelpers/apache.py 2014-04-16 08:36:27 +0000
294@@ -39,14 +39,15 @@
295
296
297 def get_ca_cert():
298- ca_cert = None
299- log("Inspecting identity-service relations for CA SSL certificate.",
300- level=INFO)
301- for r_id in relation_ids('identity-service'):
302- for unit in relation_list(r_id):
303- if not ca_cert:
304- ca_cert = relation_get('ca_cert',
305- rid=r_id, unit=unit)
306+ ca_cert = config_get('ssl_ca')
307+ if ca_cert is None:
308+ log("Inspecting identity-service relations for CA SSL certificate.",
309+ level=INFO)
310+ for r_id in relation_ids('identity-service'):
311+ for unit in relation_list(r_id):
312+ if ca_cert is None:
313+ ca_cert = relation_get('ca_cert',
314+ rid=r_id, unit=unit)
315 return ca_cert
316
317
318
319=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
320--- hooks/charmhelpers/contrib/openstack/context.py 2014-03-25 12:26:41 +0000
321+++ hooks/charmhelpers/contrib/openstack/context.py 2014-04-16 08:36:27 +0000
322@@ -1,5 +1,6 @@
323 import json
324 import os
325+import time
326
327 from base64 import b64decode
328
329@@ -113,7 +114,8 @@
330 class SharedDBContext(OSContextGenerator):
331 interfaces = ['shared-db']
332
333- def __init__(self, database=None, user=None, relation_prefix=None):
334+ def __init__(self,
335+ database=None, user=None, relation_prefix=None, ssl_dir=None):
336 '''
337 Allows inspecting relation for settings prefixed with relation_prefix.
338 This is useful for parsing access for multiple databases returned via
339@@ -122,6 +124,7 @@
340 self.relation_prefix = relation_prefix
341 self.database = database
342 self.user = user
343+ self.ssl_dir = ssl_dir
344
345 def __call__(self):
346 self.database = self.database or config('database')
347@@ -139,17 +142,72 @@
348
349 for rid in relation_ids('shared-db'):
350 for unit in related_units(rid):
351- passwd = relation_get(password_setting, rid=rid, unit=unit)
352+ rdata = relation_get(rid=rid, unit=unit)
353 ctxt = {
354- 'database_host': relation_get('db_host', rid=rid,
355- unit=unit),
356+ 'database_host': rdata.get('db_host'),
357 'database': self.database,
358 'database_user': self.user,
359- 'database_password': passwd,
360- }
361- if context_complete(ctxt):
362- return ctxt
363- return {}
364+ 'database_password': rdata.get(password_setting),
365+ 'database_type': 'mysql'
366+ }
367+ if context_complete(ctxt):
368+ db_ssl(rdata, ctxt, self.ssl_dir)
369+ return ctxt
370+ return {}
371+
372+
373+class PostgresqlDBContext(OSContextGenerator):
374+ interfaces = ['pgsql-db']
375+
376+ def __init__(self, database=None):
377+ self.database = database
378+
379+ def __call__(self):
380+ self.database = self.database or config('database')
381+ if self.database is None:
382+ log('Could not generate postgresql_db context. '
383+ 'Missing required charm config options. '
384+ '(database name)')
385+ raise OSContextError
386+ ctxt = {}
387+
388+ for rid in relation_ids(self.interfaces[0]):
389+ for unit in related_units(rid):
390+ ctxt = {
391+ 'database_host': relation_get('host', rid=rid, unit=unit),
392+ 'database': self.database,
393+ 'database_user': relation_get('user', rid=rid, unit=unit),
394+ 'database_password': relation_get('password', rid=rid, unit=unit),
395+ 'database_type': 'postgresql',
396+ }
397+ if context_complete(ctxt):
398+ return ctxt
399+ return {}
400+
401+
402+def db_ssl(rdata, ctxt, ssl_dir):
403+ if 'ssl_ca' in rdata and ssl_dir:
404+ ca_path = os.path.join(ssl_dir, 'db-client.ca')
405+ with open(ca_path, 'w') as fh:
406+ fh.write(b64decode(rdata['ssl_ca']))
407+ ctxt['database_ssl_ca'] = ca_path
408+ elif 'ssl_ca' in rdata:
409+ log("Charm not setup for ssl support but ssl ca found")
410+ return ctxt
411+ if 'ssl_cert' in rdata:
412+ cert_path = os.path.join(
413+ ssl_dir, 'db-client.cert')
414+ if not os.path.exists(cert_path):
415+ log("Waiting 1m for ssl client cert validity")
416+ time.sleep(60)
417+ with open(cert_path, 'w') as fh:
418+ fh.write(b64decode(rdata['ssl_cert']))
419+ ctxt['database_ssl_cert'] = cert_path
420+ key_path = os.path.join(ssl_dir, 'db-client.key')
421+ with open(key_path, 'w') as fh:
422+ fh.write(b64decode(rdata['ssl_key']))
423+ ctxt['database_ssl_key'] = key_path
424+ return ctxt
425
426
427 class IdentityServiceContext(OSContextGenerator):
428@@ -161,24 +219,25 @@
429
430 for rid in relation_ids('identity-service'):
431 for unit in related_units(rid):
432+ rdata = relation_get(rid=rid, unit=unit)
433 ctxt = {
434- 'service_port': relation_get('service_port', rid=rid,
435- unit=unit),
436- 'service_host': relation_get('service_host', rid=rid,
437- unit=unit),
438- 'auth_host': relation_get('auth_host', rid=rid, unit=unit),
439- 'auth_port': relation_get('auth_port', rid=rid, unit=unit),
440- 'admin_tenant_name': relation_get('service_tenant',
441- rid=rid, unit=unit),
442- 'admin_user': relation_get('service_username', rid=rid,
443- unit=unit),
444- 'admin_password': relation_get('service_password', rid=rid,
445- unit=unit),
446- # XXX: Hard-coded http.
447- 'service_protocol': 'http',
448- 'auth_protocol': 'http',
449+ 'service_port': rdata.get('service_port'),
450+ 'service_host': rdata.get('service_host'),
451+ 'auth_host': rdata.get('auth_host'),
452+ 'auth_port': rdata.get('auth_port'),
453+ 'admin_tenant_name': rdata.get('service_tenant'),
454+ 'admin_user': rdata.get('service_username'),
455+ 'admin_password': rdata.get('service_password'),
456+ 'service_protocol':
457+ rdata.get('service_protocol') or 'http',
458+ 'auth_protocol':
459+ rdata.get('auth_protocol') or 'http',
460 }
461 if context_complete(ctxt):
462+ # NOTE(jamespage) this is required for >= icehouse
463+ # so a missing value just indicates keystone needs
464+ # upgrading
465+ ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
466 return ctxt
467 return {}
468
469@@ -186,6 +245,9 @@
470 class AMQPContext(OSContextGenerator):
471 interfaces = ['amqp']
472
473+ def __init__(self, ssl_dir=None):
474+ self.ssl_dir = ssl_dir
475+
476 def __call__(self):
477 log('Generating template context for amqp')
478 conf = config()
479@@ -196,7 +258,6 @@
480 log('Could not generate shared_db context. '
481 'Missing required charm config options: %s.' % e)
482 raise OSContextError
483-
484 ctxt = {}
485 for rid in relation_ids('amqp'):
486 ha_vip_only = False
487@@ -214,6 +275,14 @@
488 unit=unit),
489 'rabbitmq_virtual_host': vhost,
490 })
491+
492+ ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
493+ if ssl_port:
494+ ctxt['rabbit_ssl_port'] = ssl_port
495+ ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
496+ if ssl_ca:
497+ ctxt['rabbit_ssl_ca'] = ssl_ca
498+
499 if relation_get('ha_queues', rid=rid, unit=unit) is not None:
500 ctxt['rabbitmq_ha_queues'] = True
501
502@@ -221,6 +290,16 @@
503 rid=rid, unit=unit) is not None
504
505 if context_complete(ctxt):
506+ if 'rabbit_ssl_ca' in ctxt:
507+ if not self.ssl_dir:
508+ log(("Charm not setup for ssl support "
509+ "but ssl ca found"))
510+ break
511+ ca_path = os.path.join(
512+ self.ssl_dir, 'rabbit-client-ca.pem')
513+ with open(ca_path, 'w') as fh:
514+ fh.write(b64decode(ctxt['rabbit_ssl_ca']))
515+ ctxt['rabbit_ssl_ca'] = ca_path
516 # Sufficient information found = break out!
517 break
518 # Used for active/active rabbitmq >= grizzly
519@@ -391,6 +470,8 @@
520 'private_address': unit_get('private-address'),
521 'endpoints': []
522 }
523+ if is_clustered():
524+ ctxt['private_address'] = config('vip')
525 for api_port in self.external_ports:
526 ext_port = determine_apache_port(api_port)
527 int_port = determine_api_port(api_port)
528
529=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
530--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-03-25 12:26:41 +0000
531+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-04-16 08:36:27 +0000
532@@ -17,6 +17,8 @@
533 kver = check_output(['uname', '-r']).strip()
534 return 'linux-headers-%s' % kver
535
536+QUANTUM_CONF_DIR = '/etc/quantum'
537+
538
539 def kernel_version():
540 """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
541@@ -35,6 +37,8 @@
542
543
544 # legacy
545+
546+
547 def quantum_plugins():
548 from charmhelpers.contrib.openstack import context
549 return {
550@@ -46,7 +50,8 @@
551 'contexts': [
552 context.SharedDBContext(user=config('neutron-database-user'),
553 database=config('neutron-database'),
554- relation_prefix='neutron')],
555+ relation_prefix='neutron',
556+ ssl_dir=QUANTUM_CONF_DIR)],
557 'services': ['quantum-plugin-openvswitch-agent'],
558 'packages': [[headers_package()] + determine_dkms_package(),
559 ['quantum-plugin-openvswitch-agent']],
560@@ -61,7 +66,8 @@
561 'contexts': [
562 context.SharedDBContext(user=config('neutron-database-user'),
563 database=config('neutron-database'),
564- relation_prefix='neutron')],
565+ relation_prefix='neutron',
566+ ssl_dir=QUANTUM_CONF_DIR)],
567 'services': [],
568 'packages': [],
569 'server_packages': ['quantum-server',
570@@ -70,6 +76,8 @@
571 }
572 }
573
574+NEUTRON_CONF_DIR = '/etc/neutron'
575+
576
577 def neutron_plugins():
578 from charmhelpers.contrib.openstack import context
579@@ -83,7 +91,8 @@
580 'contexts': [
581 context.SharedDBContext(user=config('neutron-database-user'),
582 database=config('neutron-database'),
583- relation_prefix='neutron')],
584+ relation_prefix='neutron',
585+ ssl_dir=NEUTRON_CONF_DIR)],
586 'services': ['neutron-plugin-openvswitch-agent'],
587 'packages': [[headers_package()] + determine_dkms_package(),
588 ['neutron-plugin-openvswitch-agent']],
589@@ -98,7 +107,8 @@
590 'contexts': [
591 context.SharedDBContext(user=config('neutron-database-user'),
592 database=config('neutron-database'),
593- relation_prefix='neutron')],
594+ relation_prefix='neutron',
595+ ssl_dir=NEUTRON_CONF_DIR)],
596 'services': [],
597 'packages': [],
598 'server_packages': ['neutron-server',
599
600=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
601--- hooks/charmhelpers/contrib/openstack/utils.py 2014-03-25 12:26:41 +0000
602+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-04-16 08:36:27 +0000
603@@ -65,6 +65,7 @@
604 ('1.10.0', 'havana'),
605 ('1.9.1', 'havana'),
606 ('1.9.0', 'havana'),
607+ ('1.13.1', 'icehouse'),
608 ('1.13.0', 'icehouse'),
609 ('1.12.0', 'icehouse'),
610 ('1.11.0', 'icehouse'),
611
612=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
613--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-03-25 12:26:41 +0000
614+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-04-16 08:36:27 +0000
615@@ -2,7 +2,9 @@
616 from stat import S_ISBLK
617
618 from subprocess import (
619- check_call
620+ check_call,
621+ check_output,
622+ call
623 )
624
625
626@@ -22,5 +24,12 @@
627
628 :param block_device: str: Full path of block device to clean.
629 '''
630- check_call(['sgdisk', '--zap-all', '--clear',
631- '--mbrtogpt', block_device])
632+ # sometimes sgdisk exits non-zero; this is OK, dd will clean up
633+ call(['sgdisk', '--zap-all', '--mbrtogpt',
634+ '--clear', block_device])
635+ dev_end = check_output(['blockdev', '--getsz', block_device])
636+ gpt_end = int(dev_end.split()[0]) - 100
637+ check_call(['dd', 'if=/dev/zero', 'of=%s'%(block_device),
638+ 'bs=1M', 'count=1'])
639+ check_call(['dd', 'if=/dev/zero', 'of=%s'%(block_device),
640+ 'bs=512', 'count=100', 'seek=%s'%(gpt_end)])
641
642=== modified file 'hooks/cinder_contexts.py'
643--- hooks/cinder_contexts.py 2014-03-13 15:50:49 +0000
644+++ hooks/cinder_contexts.py 2014-04-16 08:36:27 +0000
645@@ -2,6 +2,8 @@
646 config,
647 relation_ids,
648 service_name,
649+ related_units,
650+ relation_get,
651 )
652
653 from charmhelpers.contrib.openstack.context import (
654@@ -9,6 +11,10 @@
655 ApacheSSLContext as SSLContext,
656 )
657
658+from charmhelpers.contrib.openstack.utils import (
659+ get_os_codename_install_source
660+)
661+
662 from charmhelpers.contrib.hahelpers.cluster import (
663 determine_apache_port,
664 determine_api_port,
665@@ -35,8 +41,13 @@
666 if not relation_ids('ceph'):
667 return {}
668 service = service_name()
669+ if get_os_codename_install_source(config('openstack-origin')) \
670+ >= "icehouse":
671+ volume_driver = 'cinder.volume.drivers.rbd.RBDDriver'
672+ else:
673+ volume_driver = 'cinder.volume.driver.RBDDriver'
674 return {
675- 'volume_driver': 'cinder.volume.driver.RBDDriver',
676+ 'volume_driver': volume_driver,
677 # ensure_ceph_pool() creates pool based on service name.
678 'rbd_pool': service,
679 'rbd_user': service,
680@@ -74,3 +85,25 @@
681 if not service_enabled('cinder-api'):
682 return {}
683 return super(ApacheSSLContext, self).__call__()
684+
685+
686+class StorageBackendContext(OSContextGenerator):
687+ interfaces = ['storage-backend']
688+
689+ def __call__(self):
690+ backends = []
691+ for rid in relation_ids('storage-backend'):
692+ for unit in related_units(rid):
693+ backend_name = relation_get('backend_name',
694+ unit, rid)
695+ if backend_name:
696+ backends.append(backend_name)
697+ if len(backends) > 0:
698+ return {'backends': ",".join(backends)}
699+ else:
700+ return {}
701+
702+
703+class LoggingConfigContext(OSContextGenerator):
704+ def __call__(self):
705+ return {'debug': config('debug'), 'verbose': config('verbose')}
706
707=== modified file 'hooks/cinder_hooks.py'
708--- hooks/cinder_hooks.py 2014-03-13 15:50:49 +0000
709+++ hooks/cinder_hooks.py 2014-04-16 08:36:27 +0000
710@@ -2,18 +2,17 @@
711
712 import os
713 import sys
714+import uuid
715
716 from subprocess import check_call
717
718 from cinder_utils import (
719- clean_storage,
720 determine_packages,
721 do_openstack_upgrade,
722- ensure_block_device,
723 ensure_ceph_pool,
724 juju_log,
725 migrate_database,
726- prepare_lvm_storage,
727+ configure_lvm_storage,
728 register_configs,
729 restart_map,
730 service_enabled,
731@@ -28,11 +27,14 @@
732 Hooks,
733 UnregisteredHookError,
734 config,
735+ is_relation_made,
736 relation_get,
737 relation_ids,
738 relation_set,
739 service_name,
740 unit_get,
741+ log,
742+ ERROR
743 )
744
745 from charmhelpers.fetch import apt_install, apt_update
746@@ -69,32 +71,56 @@
747 apt_update()
748 apt_install(determine_packages(), fatal=True)
749
750- if (service_enabled('volume') and
751- conf['block-device'] not in [None, 'None', 'none']):
752- bdev = ensure_block_device(conf['block-device'])
753- juju_log('Located valid block device: %s' % bdev)
754- if conf['overwrite'] in ['true', 'True', True]:
755- juju_log('Ensuring block device is clean: %s' % bdev)
756- clean_storage(bdev)
757- prepare_lvm_storage(bdev, conf['volume-group'])
758-
759
760 @hooks.hook('config-changed')
761 @restart_on_change(restart_map(), stopstart=True)
762 def config_changed():
763+ conf = config()
764+ if (service_enabled('volume') and
765+ conf['block-device'] not in [None, 'None', 'none']):
766+ block_devices = conf['block-device'].split()
767+ configure_lvm_storage(block_devices,
768+ conf['volume-group'],
769+ conf['overwrite'] in ['true', 'True', True])
770+
771 if openstack_upgrade_available('cinder-common'):
772 do_openstack_upgrade(configs=CONFIGS)
773+ # NOTE(jamespage) tell any storage-backends we just upgraded
774+ for rid in relation_ids('storage-backend'):
775+ relation_set(relation_id=rid,
776+ upgrade_nonce=uuid.uuid4())
777+
778 CONFIGS.write_all()
779 configure_https()
780
781
782 @hooks.hook('shared-db-relation-joined')
783 def db_joined():
784+ if is_relation_made('pgsql-db'):
785+ # error, postgresql is used
786+ e = ('Attempting to associate a mysql database when there is already '
787+ 'associated a postgresql one')
788+ log(e, level=ERROR)
789+ raise Exception(e)
790+
791 conf = config()
792 relation_set(database=conf['database'], username=conf['database-user'],
793 hostname=unit_get('private-address'))
794
795
796+@hooks.hook('pgsql-db-relation-joined')
797+def pgsql_db_joined():
798+ if is_relation_made('shared-db'):
799+ # raise error
800+ e = ('Attempting to associate a postgresql database when there is'
801+ ' already associated a mysql one')
802+ log(e, level=ERROR)
803+ raise Exception(e)
804+
805+ conf = config()
806+ relation_set(database=conf['database'])
807+
808+
809 @hooks.hook('shared-db-relation-changed')
810 @restart_on_change(restart_map())
811 def db_changed():
812@@ -107,6 +133,18 @@
813 migrate_database()
814
815
816+@hooks.hook('pgsql-db-relation-changed')
817+@restart_on_change(restart_map())
818+def pgsql_db_changed():
819+ if 'pgsql-db' not in CONFIGS.complete_contexts():
820+ juju_log('pgsql-db relation incomplete. Peer not ready?')
821+ return
822+ CONFIGS.write(CINDER_CONF)
823+ if eligible_leader(CLUSTER_RES):
824+ juju_log('Cluster leader, performing db sync')
825+ migrate_database()
826+
827+
828 @hooks.hook('amqp-relation-joined')
829 def amqp_joined(relation_id=None):
830 conf = config()
831@@ -123,6 +161,15 @@
832 CONFIGS.write(CINDER_CONF)
833
834
835+@hooks.hook('amqp-relation-departed')
836+@restart_on_change(restart_map())
837+def amqp_departed():
838+ if 'amqp' not in CONFIGS.complete_contexts():
839+ juju_log('amqp relation incomplete. Peer not ready?')
840+ return
841+ CONFIGS.write(CINDER_CONF)
842+
843+
844 @hooks.hook('identity-service-relation-joined')
845 def identity_joined(rid=None):
846 if not eligible_leader(CLUSTER_RES):
847@@ -241,7 +288,8 @@
848 'ceph-relation-broken',
849 'identity-service-relation-broken',
850 'image-service-relation-broken',
851- 'shared-db-relation-broken')
852+ 'shared-db-relation-broken',
853+ 'pgsql-db-relation-broken')
854 @restart_on_change(restart_map(), stopstart=True)
855 def relation_broken():
856 CONFIGS.write_all()
857@@ -271,6 +319,13 @@
858 amqp_joined(relation_id=rel_id)
859
860
861+@hooks.hook('storage-backend-relation-changed')
862+@hooks.hook('storage-backend-relation-broken')
863+@restart_on_change(restart_map())
864+def storage_backend():
865+ CONFIGS.write(CINDER_CONF)
866+
867+
868 if __name__ == '__main__':
869 try:
870 hooks.execute(sys.argv)
871
872=== modified file 'hooks/cinder_utils.py'
873--- hooks/cinder_utils.py 2014-02-18 12:25:01 +0000
874+++ hooks/cinder_utils.py 2014-04-16 08:36:27 +0000
875@@ -12,13 +12,16 @@
876 )
877
878 from charmhelpers.fetch import (
879- apt_install,
880+ apt_upgrade,
881 apt_update,
882+ apt_install
883 )
884
885 from charmhelpers.core.host import (
886 mounts,
887 umount,
888+ service_stop,
889+ service_start,
890 mkdir
891 )
892
893@@ -43,6 +46,7 @@
894 deactivate_lvm_volume_group,
895 is_lvm_physical_volume,
896 remove_lvm_physical_volume,
897+ list_lvm_volume_group
898 )
899
900 from charmhelpers.contrib.storage.linux.loopback import (
901@@ -70,6 +74,7 @@
902 'python-jinja2',
903 'python-keystoneclient',
904 'python-mysqldb',
905+ 'python-psycopg2',
906 'qemu-utils',
907 ]
908
909@@ -86,8 +91,9 @@
910 class CinderCharmError(Exception):
911 pass
912
913-CINDER_CONF = '/etc/cinder/cinder.conf'
914-CINDER_API_CONF = '/etc/cinder/api-paste.ini'
915+CINDER_CONF_DIR = "/etc/cinder"
916+CINDER_CONF = '%s/cinder.conf' % CINDER_CONF_DIR
917+CINDER_API_CONF = '%s/api-paste.ini' % CINDER_CONF_DIR
918 CEPH_CONF = '/etc/ceph/ceph.conf'
919 CHARM_CEPH_CONF = '/var/lib/charm/{}/ceph.conf'
920
921@@ -98,6 +104,7 @@
922
923 TEMPLATES = 'templates/'
924
925+
926 def ceph_config_file():
927 return CHARM_CEPH_CONF.format(service_name())
928
929@@ -105,14 +112,22 @@
930 # with file in restart_on_changes()'s service map.
931 CONFIG_FILES = OrderedDict([
932 (CINDER_CONF, {
933- 'hook_contexts': [context.SharedDBContext(),
934- context.AMQPContext(),
935+ 'hook_contexts': [context.SharedDBContext(ssl_dir=CINDER_CONF_DIR),
936+ context.PostgresqlDBContext(),
937+ context.AMQPContext(ssl_dir=CINDER_CONF_DIR),
938 context.ImageServiceContext(),
939 context.OSConfigFlagContext(),
940 context.SyslogContext(),
941 cinder_contexts.CephContext(),
942 cinder_contexts.HAProxyContext(),
943- cinder_contexts.ImageServiceContext()],
944+ cinder_contexts.ImageServiceContext(),
945+ context.SubordinateConfigContext(
946+ interface='storage-backend',
947+ service='cinder',
948+ config_file=CINDER_CONF),
949+ cinder_contexts.StorageBackendContext(),
950+ cinder_contexts.LoggingConfigContext(),
951+ context.IdentityServiceContext()],
952 'services': ['cinder-api', 'cinder-volume',
953 'cinder-scheduler', 'haproxy']
954 }),
955@@ -239,31 +254,69 @@
956 return OrderedDict(_map)
957
958
959-def prepare_lvm_storage(block_device, volume_group):
960- '''Ensures block_device is initialized as a LVM PV
961- and creates volume_group.
962- Assumes block device is clean and will raise if storage is already
963- initialized as a PV.
964-
965- :param block_device: str: Full path to block device to be prepared.
966- :param volume_group: str: Name of volume group to be created with
967- block_device as backing PV.
968-
969- :returns: None or raises CinderCharmError if storage is unclean.
970- '''
971- e = None
972- if is_lvm_physical_volume(block_device):
973- juju_log('ERROR: Could not prepare LVM storage: %s is already '
974- 'initialized as LVM physical device.' % block_device)
975- raise CinderCharmError
976-
977- try:
978- create_lvm_physical_volume(block_device)
979- create_lvm_volume_group(volume_group, block_device)
980- except Exception as e:
981- juju_log('Could not prepare LVM storage on %s.' % block_device)
982- juju_log(e)
983- raise CinderCharmError
984+def services():
985+ ''' Returns a list of services associate with this charm '''
986+ _services = []
987+ for v in restart_map().values():
988+ _services = _services + v
989+ return list(set(_services))
990+
991+
992+def extend_lvm_volume_group(volume_group, block_device):
993+ '''
994+ Extend and LVM volume group onto a given block device.
995+
996+ Assumes block device has already been initialized as an LVM PV.
997+
998+ :param volume_group: str: Name of volume group to create.
999+ :block_device: str: Full path of PV-initialized block device.
1000+ '''
1001+ subprocess.check_call(['vgextend', volume_group, block_device])
1002+
1003+
1004+def configure_lvm_storage(block_devices, volume_group, overwrite=False):
1005+ ''' Configure LVM storage on the list of block devices provided
1006+
1007+ :param block_devices: list: List of whitelisted block devices to detect
1008+ and use if found
1009+ :param overwrite: bool: Scrub any existing block data if block device is
1010+ not already in-use
1011+ '''
1012+ devices = []
1013+ for block_device in block_devices:
1014+ (block_device, size) = _parse_block_device(block_device)
1015+ if size == 0 and is_block_device(block_device):
1016+ devices.append(block_device)
1017+ elif size > 0:
1018+ devices.append(ensure_loopback_device(block_device, size))
1019+
1020+ # NOTE(jamespage)
1021+ # might need todo an initial one-time scrub on install if need be
1022+ vg_found = False
1023+ new_devices = []
1024+ for device in devices:
1025+ if (not is_lvm_physical_volume(device) or
1026+ (is_lvm_physical_volume(device) and
1027+ list_lvm_volume_group(device) != volume_group)):
1028+ # Existing LVM but not part of required VG or new device
1029+ if overwrite is True:
1030+ clean_storage(device)
1031+ new_devices.append(device)
1032+ create_lvm_physical_volume(device)
1033+ elif (is_lvm_physical_volume(device) and
1034+ list_lvm_volume_group(device) == volume_group):
1035+ # Mark vg as found
1036+ vg_found = True
1037+
1038+ if vg_found is False and len(new_devices) > 0:
1039+ # Create new volume group from first device
1040+ create_lvm_volume_group(volume_group, new_devices[0])
1041+ new_devices.remove(new_devices[0])
1042+
1043+ if len(new_devices) > 0:
1044+ # Extend the volume group as required
1045+ for new_device in new_devices:
1046+ extend_lvm_volume_group(volume_group, new_device)
1047
1048
1049 def clean_storage(block_device):
1050@@ -284,25 +337,24 @@
1051 if is_lvm_physical_volume(block_device):
1052 deactivate_lvm_volume_group(block_device)
1053 remove_lvm_physical_volume(block_device)
1054- else:
1055- zap_disk(block_device)
1056-
1057-
1058-def ensure_block_device(block_device):
1059- '''Confirm block_device, create as loopback if necessary.
1060-
1061- :param block_device: str: Full path of block device to ensure.
1062-
1063- :returns: str: Full path of ensured block device.
1064+
1065+ zap_disk(block_device)
1066+
1067+
1068+def _parse_block_device(block_device):
1069+ ''' Parse a block device string and return either the full path
1070+ to the block device, or the path to a loopback device and its size
1071+
1072+ :param: block_device: str: Block device as provided in configuration
1073+
1074+ :returns: (str, int): Full path to block device and 0 OR
1075+ Full path to loopback device and required size
1076 '''
1077 _none = ['None', 'none', None]
1078- if (block_device in _none):
1079- juju_log('prepare_storage(): Missing required input: '
1080- 'block_device=%s.' % block_device)
1081- raise CinderCharmError
1082-
1083+ if block_device in _none:
1084+ return (None, 0)
1085 if block_device.startswith('/dev/'):
1086- bdev = block_device
1087+ return (block_device, 0)
1088 elif block_device.startswith('/'):
1089 _bd = block_device.split('|')
1090 if len(_bd) == 2:
1091@@ -310,15 +362,9 @@
1092 else:
1093 bdev = block_device
1094 size = DEFAULT_LOOPBACK_SIZE
1095- bdev = ensure_loopback_device(bdev, size)
1096+ return (bdev, size)
1097 else:
1098- bdev = '/dev/%s' % block_device
1099-
1100- if not is_block_device(bdev):
1101- juju_log('Failed to locate valid block device at %s' % bdev)
1102- raise CinderCharmError
1103-
1104- return bdev
1105+ return ('/dev/{}'.format(block_device), 0)
1106
1107
1108 def migrate_database():
1109@@ -364,11 +410,15 @@
1110 '--option', 'Dpkg::Options::=--force-confdef',
1111 ]
1112 apt_update()
1113- apt_install(packages=determine_packages(), options=dpkg_opts, fatal=True)
1114+ apt_upgrade(options=dpkg_opts, fatal=True, dist=True)
1115+ apt_install(determine_packages(), fatal=True)
1116
1117 # set CONFIGS to load templates from new release and regenerate config
1118 configs.set_release(openstack_release=new_os_rel)
1119 configs.write_all()
1120
1121+ # Stop/start services and migrate DB if leader
1122+ [service_stop(s) for s in services()]
1123 if eligible_leader(CLUSTER_RES):
1124 migrate_database()
1125+ [service_start(s) for s in services()]
1126
1127=== added symlink 'hooks/pgsql-db-relation-broken'
1128=== target is u'cinder_hooks.py'
1129=== added symlink 'hooks/pgsql-db-relation-changed'
1130=== target is u'cinder_hooks.py'
1131=== added symlink 'hooks/pgsql-db-relation-joined'
1132=== target is u'cinder_hooks.py'
1133=== added symlink 'hooks/storage-backend-relation-broken'
1134=== target is u'cinder_hooks.py'
1135=== added symlink 'hooks/storage-backend-relation-changed'
1136=== target is u'cinder_hooks.py'
1137=== added symlink 'hooks/storage-backend-relation-departed'
1138=== target is u'cinder_hooks.py'
1139=== added symlink 'hooks/storage-backend-relation-joined'
1140=== target is u'cinder_hooks.py'
1141=== modified file 'metadata.yaml'
1142--- metadata.yaml 2013-10-17 21:48:08 +0000
1143+++ metadata.yaml 2014-04-16 08:36:27 +0000
1144@@ -11,6 +11,8 @@
1145 requires:
1146 shared-db:
1147 interface: mysql-shared
1148+ pgsql-db:
1149+ interface: pgsql
1150 amqp:
1151 interface: rabbitmq
1152 identity-service:
1153@@ -22,6 +24,9 @@
1154 ha:
1155 interface: hacluster
1156 scope: container
1157+ storage-backend:
1158+ interface: cinder-backend
1159+ scope: container
1160 peers:
1161 cluster:
1162 interface: cinder-ha
1163
1164=== modified file 'revision'
1165--- revision 2014-03-25 18:44:23 +0000
1166+++ revision 2014-04-16 08:36:27 +0000
1167@@ -1,1 +1,1 @@
1168-135
1169+136
1170
1171=== modified file 'setup.cfg'
1172--- setup.cfg 2013-10-19 16:59:05 +0000
1173+++ setup.cfg 2014-04-16 08:36:27 +0000
1174@@ -1,6 +1,5 @@
1175 [nosetests]
1176-verbosity=1
1177+verbosity=2
1178 with-coverage=1
1179 cover-erase=1
1180 cover-package=hooks
1181-
1182
1183=== modified file 'templates/cinder.conf'
1184--- templates/cinder.conf 2014-02-03 10:44:24 +0000
1185+++ templates/cinder.conf 2014-04-16 08:36:27 +0000
1186@@ -9,37 +9,50 @@
1187 iscsi_helper = tgtadm
1188 volume_name_template = volume-%s
1189 volume_group = cinder-volumes
1190-verbose = True
1191+verbose = {{ verbose }}
1192+debug = {{ debug }}
1193 use_syslog = {{ use_syslog }}
1194 auth_strategy = keystone
1195 state_path = /var/lib/cinder
1196 lock_path = /var/lock/cinder
1197 volumes_dir = /var/lib/cinder/volumes
1198-{% if database_host -%}
1199-sql_connection = mysql://{{ database_user }}:{{ database_password }}@{{ database_host }}/{{ database }}
1200-{% endif -%}
1201-{% if rabbitmq_host -%}
1202+
1203+{% include "parts/database" %}
1204+
1205+{% if rabbitmq_host %}
1206 notification_driver = cinder.openstack.common.notifier.rabbit_notifier
1207 control_exchange = cinder
1208+{% if rabbit_ssl_port %}
1209+rabbit_use_ssl=True
1210+rabbit_port={{ rabbit_ssl_port }}
1211+{% if rabbit_ssl_ca %}
1212+kombu_ssl_ca_certs={{rabbit_ssl_ca}}
1213+{% endif %}
1214+{% endif %}
1215 rabbit_host = {{ rabbitmq_host }}
1216 rabbit_userid = {{ rabbitmq_user }}
1217 rabbit_password = {{ rabbitmq_password }}
1218 rabbit_virtual_host = {{ rabbitmq_virtual_host }}
1219 {% endif -%}
1220+
1221 {% if volume_driver -%}
1222 volume_driver = {{ volume_driver }}
1223 {% endif -%}
1224+
1225 {% if rbd_pool -%}
1226 rbd_pool = {{ rbd_pool }}
1227 host = {{ host }}
1228 rbd_user = {{ rbd_user }}
1229 {% endif -%}
1230+
1231 {% if osapi_volume_listen_port -%}
1232 osapi_volume_listen_port = {{ osapi_volume_listen_port }}
1233 {% endif -%}
1234+
1235 {% if glance_api_servers -%}
1236 glance_api_servers = {{ glance_api_servers }}
1237 {% endif -%}
1238+
1239 {% if glance_api_version -%}
1240 glance_api_version = {{ glance_api_version }}
1241 {% endif -%}
1242@@ -50,3 +63,4 @@
1243 {% endfor -%}
1244 {% endif -%}
1245
1246+{% include "parts/backends" %}
1247
1248=== added file 'templates/grizzly/cinder.conf'
1249--- templates/grizzly/cinder.conf 1970-01-01 00:00:00 +0000
1250+++ templates/grizzly/cinder.conf 2014-04-16 08:36:27 +0000
1251@@ -0,0 +1,57 @@
1252+###############################################################################
1253+# [ WARNING ]
1254+# cinder configuration file maintained by Juju
1255+# local changes may be overwritten.
1256+###############################################################################
1257+[DEFAULT]
1258+rootwrap_config = /etc/cinder/rootwrap.conf
1259+api_paste_confg = /etc/cinder/api-paste.ini
1260+iscsi_helper = tgtadm
1261+volume_name_template = volume-%s
1262+volume_group = cinder-volumes
1263+verbose = {{ verbose }}
1264+debug = {{ debug }}
1265+use_syslog = {{ use_syslog }}
1266+auth_strategy = keystone
1267+state_path = /var/lib/cinder
1268+lock_path = /var/lock/cinder
1269+volumes_dir = /var/lib/cinder/volumes
1270+
1271+{% include "parts/database" %}
1272+
1273+{% include "parts/rabbitmq" %}
1274+
1275+{% if rabbitmq_host or rabbitmq_hosts -%}
1276+notification_driver = cinder.openstack.common.notifier.rabbit_notifier
1277+control_exchange = cinder
1278+{% endif -%}
1279+
1280+{% if volume_driver -%}
1281+volume_driver = {{ volume_driver }}
1282+{% endif -%}
1283+
1284+{% if rbd_pool -%}
1285+rbd_pool = {{ rbd_pool }}
1286+host = {{ host }}
1287+rbd_user = {{ rbd_user }}
1288+{% endif -%}
1289+
1290+{% if osapi_volume_listen_port -%}
1291+osapi_volume_listen_port = {{ osapi_volume_listen_port }}
1292+{% endif -%}
1293+
1294+{% if glance_api_servers -%}
1295+glance_api_servers = {{ glance_api_servers }}
1296+{% endif -%}
1297+
1298+{% if glance_api_version -%}
1299+glance_api_version = {{ glance_api_version }}
1300+{% endif -%}
1301+
1302+{% if user_config_flags -%}
1303+{% for key, value in user_config_flags.iteritems() -%}
1304+{{ key }} = {{ value }}
1305+{% endfor -%}
1306+{% endif -%}
1307+
1308+{% include "parts/backends" %}
1309
1310=== modified file 'templates/havana/api-paste.ini'
1311--- templates/havana/api-paste.ini 2013-10-17 21:48:08 +0000
1312+++ templates/havana/api-paste.ini 2014-04-16 08:36:27 +0000
1313@@ -58,6 +58,9 @@
1314 [filter:authtoken]
1315 paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
1316 {% if service_host -%}
1317+service_protocol = {{ service_protocol }}
1318+service_host = {{ service_host }}
1319+service_port = {{ service_port }}
1320 auth_host = {{ auth_host }}
1321 auth_port = {{ auth_port }}
1322 auth_protocol = {{ auth_protocol }}
1323
1324=== added directory 'templates/icehouse'
1325=== added file 'templates/icehouse/api-paste.ini'
1326--- templates/icehouse/api-paste.ini 1970-01-01 00:00:00 +0000
1327+++ templates/icehouse/api-paste.ini 2014-04-16 08:36:27 +0000
1328@@ -0,0 +1,55 @@
1329+#############
1330+# OpenStack #
1331+#############
1332+
1333+[composite:osapi_volume]
1334+use = call:cinder.api:root_app_factory
1335+/: apiversions
1336+/v1: openstack_volume_api_v1
1337+/v2: openstack_volume_api_v2
1338+
1339+[composite:openstack_volume_api_v1]
1340+use = call:cinder.api.middleware.auth:pipeline_factory
1341+noauth = request_id faultwrap sizelimit noauth apiv1
1342+keystone = request_id faultwrap sizelimit authtoken keystonecontext apiv1
1343+keystone_nolimit = request_id faultwrap sizelimit authtoken keystonecontext apiv1
1344+
1345+[composite:openstack_volume_api_v2]
1346+use = call:cinder.api.middleware.auth:pipeline_factory
1347+noauth = request_id faultwrap sizelimit noauth apiv2
1348+keystone = request_id faultwrap sizelimit authtoken keystonecontext apiv2
1349+keystone_nolimit = request_id faultwrap sizelimit authtoken keystonecontext apiv2
1350+
1351+[filter:request_id]
1352+paste.filter_factory = cinder.openstack.common.middleware.request_id:RequestIdMiddleware.factory
1353+
1354+[filter:faultwrap]
1355+paste.filter_factory = cinder.api.middleware.fault:FaultWrapper.factory
1356+
1357+[filter:noauth]
1358+paste.filter_factory = cinder.api.middleware.auth:NoAuthMiddleware.factory
1359+
1360+[filter:sizelimit]
1361+paste.filter_factory = cinder.api.middleware.sizelimit:RequestBodySizeLimiter.factory
1362+
1363+[app:apiv1]
1364+paste.app_factory = cinder.api.v1.router:APIRouter.factory
1365+
1366+[app:apiv2]
1367+paste.app_factory = cinder.api.v2.router:APIRouter.factory
1368+
1369+[pipeline:apiversions]
1370+pipeline = faultwrap osvolumeversionapp
1371+
1372+[app:osvolumeversionapp]
1373+paste.app_factory = cinder.api.versions:Versions.factory
1374+
1375+##########
1376+# Shared #
1377+##########
1378+
1379+[filter:keystonecontext]
1380+paste.filter_factory = cinder.api.middleware.auth:CinderKeystoneContext.factory
1381+
1382+[filter:authtoken]
1383+paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
1384
1385=== added file 'templates/icehouse/cinder.conf'
1386--- templates/icehouse/cinder.conf 1970-01-01 00:00:00 +0000
1387+++ templates/icehouse/cinder.conf 2014-04-16 08:36:27 +0000
1388@@ -0,0 +1,68 @@
1389+###############################################################################
1390+# [ WARNING ]
1391+# cinder configuration file maintained by Juju
1392+# local changes may be overwritten.
1393+###############################################################################
1394+[DEFAULT]
1395+rootwrap_config = /etc/cinder/rootwrap.conf
1396+api_paste_confg = /etc/cinder/api-paste.ini
1397+iscsi_helper = tgtadm
1398+volume_name_template = volume-%s
1399+volume_group = cinder-volumes
1400+verbose = {{ verbose }}
1401+debug = {{ debug }}
1402+use_syslog = {{ use_syslog }}
1403+auth_strategy = keystone
1404+state_path = /var/lib/cinder
1405+lock_path = /var/lock/cinder
1406+volumes_dir = /var/lib/cinder/volumes
1407+
1408+{% include "parts/rabbitmq" %}
1409+
1410+{% if rabbitmq_host or rabbitmq_hosts -%}
1411+notification_driver = cinder.openstack.common.notifier.rpc_notifier
1412+control_exchange = cinder
1413+{% endif -%}
1414+
1415+{% if volume_driver -%}
1416+volume_driver = {{ volume_driver }}
1417+{% endif -%}
1418+
1419+{% if rbd_pool -%}
1420+rbd_pool = {{ rbd_pool }}
1421+host = {{ host }}
1422+rbd_user = {{ rbd_user }}
1423+{% endif -%}
1424+
1425+{% if osapi_volume_listen_port -%}
1426+osapi_volume_listen_port = {{ osapi_volume_listen_port }}
1427+{% endif -%}
1428+
1429+{% if glance_api_servers -%}
1430+glance_api_servers = {{ glance_api_servers }}
1431+{% endif -%}
1432+
1433+{% if glance_api_version -%}
1434+glance_api_version = {{ glance_api_version }}
1435+{% endif -%}
1436+
1437+{% if user_config_flags -%}
1438+{% for key, value in user_config_flags.iteritems() -%}
1439+{{ key }} = {{ value }}
1440+{% endfor -%}
1441+{% endif -%}
1442+
1443+{% include "parts/backends" %}
1444+
1445+{% if auth_host -%}
1446+[keystone_authtoken]
1447+auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }}/
1448+auth_host = {{ auth_host }}
1449+auth_port = {{ auth_port }}
1450+auth_protocol = {{ auth_protocol }}
1451+admin_tenant_name = {{ admin_tenant_name }}
1452+admin_user = {{ admin_user }}
1453+admin_password = {{ admin_password }}
1454+{% endif -%}
1455+
1456+{% include "parts/section-database" %}
1457
1458=== added directory 'templates/parts'
1459=== added file 'templates/parts/backends'
1460--- templates/parts/backends 1970-01-01 00:00:00 +0000
1461+++ templates/parts/backends 2014-04-16 08:36:27 +0000
1462@@ -0,0 +1,18 @@
1463+{% if sections and 'DEFAULT' in sections -%}
1464+{% for key, value in sections['DEFAULT'] -%}
1465+{{ key }} = {{ value }}
1466+{% endfor -%}
1467+{% endif -%}
1468+
1469+{% if backends -%}
1470+enabled_backends = {{ backends }}
1471+{% endif -%}
1472+
1473+{% for section in sections -%}
1474+{% if section != 'DEFAULT' -%}
1475+[{{ section }}]
1476+{% for key, value in sections[section] -%}
1477+{{ key }} = {{ value }}
1478+{% endfor -%}
1479+{% endif -%}
1480+{% endfor -%}
1481\ No newline at end of file
1482
1483=== added file 'templates/parts/database'
1484--- templates/parts/database 1970-01-01 00:00:00 +0000
1485+++ templates/parts/database 2014-04-16 08:36:27 +0000
1486@@ -0,0 +1,3 @@
1487+{% if database_host -%}
1488+sql_connection = {{ database_type }}://{{ database_user }}:{{ database_password }}@{{ database_host }}/{{ database }}{% if database_ssl_ca %}?ssl_ca={{ database_ssl_ca }}{% if database_ssl_cert %}&ssl_cert={{ database_ssl_cert }}&ssl_key={{ database_ssl_key }}{% endif %}{% endif %}
1489+{% endif -%}
1490
1491=== added file 'templates/parts/rabbitmq'
1492--- templates/parts/rabbitmq 1970-01-01 00:00:00 +0000
1493+++ templates/parts/rabbitmq 2014-04-16 08:36:27 +0000
1494@@ -0,0 +1,21 @@
1495+{% if rabbitmq_host or rabbitmq_hosts -%}
1496+rabbit_userid = {{ rabbitmq_user }}
1497+rabbit_virtual_host = {{ rabbitmq_virtual_host }}
1498+rabbit_password = {{ rabbitmq_password }}
1499+{% if rabbitmq_hosts -%}
1500+rabbit_hosts = {{ rabbitmq_hosts }}
1501+{% if rabbitmq_ha_queues -%}
1502+rabbit_ha_queues = True
1503+rabbit_durable_queues = False
1504+{% endif -%}
1505+{% else -%}
1506+rabbit_host = {{ rabbitmq_host }}
1507+{% endif -%}
1508+{% if rabbit_ssl_port -%}
1509+rabbit_use_ssl = True
1510+rabbit_port = {{ rabbit_ssl_port }}
1511+{% if rabbit_ssl_ca -%}
1512+kombu_ssl_ca_certs = {{ rabbit_ssl_ca }}
1513+{% endif -%}
1514+{% endif -%}
1515+{% endif -%}
1516\ No newline at end of file
1517
1518=== added file 'templates/parts/section-database'
1519--- templates/parts/section-database 1970-01-01 00:00:00 +0000
1520+++ templates/parts/section-database 2014-04-16 08:36:27 +0000
1521@@ -0,0 +1,4 @@
1522+{% if database_host -%}
1523+[database]
1524+connection = {{ database_type }}://{{ database_user }}:{{ database_password }}@{{ database_host }}/{{ database }}{% if database_ssl_ca %}?ssl_ca={{ database_ssl_ca }}{% if database_ssl_cert %}&ssl_cert={{ database_ssl_cert }}&ssl_key={{ database_ssl_key }}{% endif %}{% endif %}
1525+{% endif -%}
1526
1527=== modified file 'unit_tests/test_cinder_contexts.py'
1528--- unit_tests/test_cinder_contexts.py 2014-03-13 15:50:49 +0000
1529+++ unit_tests/test_cinder_contexts.py 2014-04-16 08:36:27 +0000
1530@@ -16,10 +16,14 @@
1531 'service_name',
1532 'determine_apache_port',
1533 'determine_api_port',
1534+ 'get_os_codename_install_source',
1535+ 'related_units',
1536+ 'relation_get'
1537 ]
1538
1539
1540 class TestCinderContext(CharmTestCase):
1541+
1542 def setUp(self):
1543 super(TestCinderContext, self).setUp(contexts, TO_PATCH)
1544
1545@@ -45,6 +49,7 @@
1546
1547 def test_ceph_related(self):
1548 self.relation_ids.return_value = ['ceph:0']
1549+ self.get_os_codename_install_source.return_value = 'havana'
1550 service = 'mycinder'
1551 self.service_name.return_value = service
1552 self.assertEquals(
1553@@ -54,11 +59,43 @@
1554 'rbd_user': service,
1555 'host': service})
1556
1557+ def test_ceph_related_icehouse(self):
1558+ self.relation_ids.return_value = ['ceph:0']
1559+ self.get_os_codename_install_source.return_value = 'icehouse'
1560+ service = 'mycinder'
1561+ self.service_name.return_value = service
1562+ self.assertEquals(
1563+ contexts.CephContext()(),
1564+ {'volume_driver': 'cinder.volume.drivers.rbd.RBDDriver',
1565+ 'rbd_pool': service,
1566+ 'rbd_user': service,
1567+ 'host': service})
1568+
1569 @patch.object(utils, 'service_enabled')
1570 def test_apache_ssl_context_service_disabled(self, service_enabled):
1571 service_enabled.return_value = False
1572 self.assertEquals(contexts.ApacheSSLContext()(), {})
1573
1574+ def test_storage_backend_no_backends(self):
1575+ self.relation_ids.return_value = []
1576+ self.assertEquals(contexts.StorageBackendContext()(), {})
1577+
1578+ def test_storage_backend_single_backend(self):
1579+ self.relation_ids.return_value = ['cinder-ceph:0']
1580+ self.related_units.return_value = ['cinder-ceph/0']
1581+ self.relation_get.return_value = 'cinder-ceph'
1582+ self.assertEquals(contexts.StorageBackendContext()(),
1583+ {'backends': 'cinder-ceph'})
1584+
1585+ def test_storage_backend_multi_backend(self):
1586+ self.relation_ids.return_value = ['cinder-ceph:0', 'cinder-vmware:0']
1587+ self.related_units.side_effect = [['cinder-ceph/0'],
1588+ ['cinder-vmware/0']]
1589+ self.relation_get.side_effect = ['cinder-ceph', 'cinder-vmware']
1590+ self.assertEquals(contexts.StorageBackendContext()(),
1591+ {'backends': 'cinder-ceph,cinder-vmware'})
1592+
1593+ @patch('charmhelpers.contrib.openstack.context.is_clustered')
1594 @patch('charmhelpers.contrib.openstack.context.determine_apache_port')
1595 @patch('charmhelpers.contrib.openstack.context.determine_api_port')
1596 @patch('charmhelpers.contrib.openstack.context.unit_get')
1597@@ -67,15 +104,17 @@
1598 def test_apache_ssl_context_service_enabled(self, service_enabled,
1599 mock_https, mock_unit_get,
1600 mock_determine_api_port,
1601- mock_determine_apache_port):
1602+ mock_determine_apache_port,
1603+ mock_is_clustered):
1604 mock_https.return_value = True
1605 mock_unit_get.return_value = '1.2.3.4'
1606 mock_determine_api_port.return_value = '12'
1607 mock_determine_apache_port.return_value = '34'
1608+ mock_is_clustered.return_value = False
1609
1610 ctxt = contexts.ApacheSSLContext()
1611- with patch.object(ctxt, 'enable_modules') as mock_enable_modules:
1612- with patch.object(ctxt, 'configure_cert') as mock_configure_cert:
1613+ with patch.object(ctxt, 'enable_modules'):
1614+ with patch.object(ctxt, 'configure_cert'):
1615 service_enabled.return_value = False
1616 self.assertEquals(ctxt(), {})
1617 self.assertFalse(mock_https.called)
1618
1619=== modified file 'unit_tests/test_cinder_hooks.py'
1620--- unit_tests/test_cinder_hooks.py 2014-02-18 12:25:57 +0000
1621+++ unit_tests/test_cinder_hooks.py 2014-04-16 08:36:27 +0000
1622@@ -24,16 +24,15 @@
1623 TO_PATCH = [
1624 'check_call',
1625 # cinder_utils
1626- 'clean_storage',
1627+ 'configure_lvm_storage',
1628 'determine_packages',
1629 'do_openstack_upgrade',
1630- 'ensure_block_device',
1631 'ensure_ceph_keyring',
1632 'ensure_ceph_pool',
1633 'juju_log',
1634+ 'log',
1635 'lsb_release',
1636 'migrate_database',
1637- 'prepare_lvm_storage',
1638 'register_configs',
1639 'restart_map',
1640 'service_enabled',
1641@@ -43,6 +42,7 @@
1642 'ceph_config_file',
1643 # charmhelpers.core.hookenv
1644 'config',
1645+ 'is_relation_made',
1646 'relation_get',
1647 'relation_ids',
1648 'relation_set',
1649@@ -64,6 +64,7 @@
1650
1651
1652 class TestInstallHook(CharmTestCase):
1653+
1654 def setUp(self):
1655 super(TestInstallHook, self).setUp(hooks, TO_PATCH)
1656 self.config.side_effect = self.test_config.get_all
1657@@ -81,46 +82,9 @@
1658 hooks.hooks.execute(['hooks/install'])
1659 self.apt_install.assert_called_with(['foo', 'bar', 'baz'], fatal=True)
1660
1661- def test_storage_prepared(self):
1662- 'It prepares local storage if volume service enabled'
1663- self.test_config.set('block-device', 'vdb')
1664- self.test_config.set('volume-group', 'cinder')
1665- self.test_config.set('overwrite', 'true')
1666- self.service_enabled.return_value = True
1667- self.ensure_block_device.return_value = '/dev/vdb'
1668- hooks.hooks.execute(['hooks/install'])
1669- self.ensure_block_device.assert_called_with('vdb')
1670- self.prepare_lvm_storage.assert_called_with('/dev/vdb', 'cinder')
1671-
1672- def test_storage_not_prepared(self):
1673- 'It does not prepare storage when not necessary'
1674- self.service_enabled.return_value = False
1675- hooks.hooks.execute(['hooks/install'])
1676- self.assertFalse(self.ensure_block_device.called)
1677- self.service_enabled.return_value = True
1678- for none in ['None', 'none', None]:
1679- self.test_config.set('block-device', none)
1680- hooks.hooks.execute(['hooks/install'])
1681- self.assertFalse(self.ensure_block_device.called)
1682-
1683- def test_storage_is_cleaned(self):
1684- 'It cleans storage when configured to do so'
1685- self.ensure_block_device.return_value = '/dev/foo'
1686- for true in ['True', 'true', True]:
1687- self.test_config.set('overwrite', true)
1688- hooks.hooks.execute(['hooks/install'])
1689- self.clean_storage.assert_called_with('/dev/foo')
1690-
1691- def test_storage_is_not_cleaned(self):
1692- 'It does not clean storage when not configured to'
1693- self.ensure_block_device.return_value = '/dev/foo'
1694- for true in ['False', 'false', False]:
1695- self.test_config.set('overwrite', true)
1696- hooks.hooks.execute(['hooks/install'])
1697- self.assertFalse(self.clean_storage.called)
1698-
1699
1700 class TestChangedHooks(CharmTestCase):
1701+
1702 def setUp(self):
1703 super(TestChangedHooks, self).setUp(hooks, TO_PATCH)
1704 self.config.side_effect = self.test_config.get_all
1705@@ -144,6 +108,24 @@
1706 hooks.hooks.execute(['hooks/config-changed'])
1707 self.assertTrue(self.CONFIGS.write_all.called)
1708 self.assertTrue(conf_https.called)
1709+ self.configure_lvm_storage.assert_called_with(['sdb'],
1710+ 'cinder-volumes',
1711+ False)
1712+
1713+ @patch.object(hooks, 'configure_https')
1714+ def test_config_changed_block_devices(self, conf_https):
1715+ 'It configures LVM storage with multiple block devices'
1716+ self.openstack_upgrade_available.return_value = False
1717+ self.test_config.set('block-device', 'sdb /dev/sdc sde')
1718+ self.test_config.set('volume-group', 'cinder-new')
1719+ self.test_config.set('overwrite', 'True')
1720+ hooks.hooks.execute(['hooks/config-changed'])
1721+ self.assertTrue(self.CONFIGS.write_all.called)
1722+ self.assertTrue(conf_https.called)
1723+ self.configure_lvm_storage.assert_called_with(
1724+ ['sdb', '/dev/sdc', 'sde'],
1725+ 'cinder-new',
1726+ True)
1727
1728 @patch.object(hooks, 'configure_https')
1729 def test_config_changed_upgrade_available(self, conf_https):
1730@@ -159,12 +141,25 @@
1731 self.CONFIGS.write.assert_called_with('/etc/cinder/cinder.conf')
1732 self.assertTrue(self.migrate_database.called)
1733
1734+ def test_pgsql_db_changed(self):
1735+ 'It writes out cinder.conf on db changed'
1736+ self.CONFIGS.complete_contexts.return_value = ['pgsql-db']
1737+ hooks.hooks.execute(['hooks/pgsql-db-relation-changed'])
1738+ self.CONFIGS.write.assert_called_with('/etc/cinder/cinder.conf')
1739+ self.assertTrue(self.migrate_database.called)
1740+
1741 def test_db_changed_relation_incomplete(self):
1742 'It does not write out cinder.conf with incomplete shared-db rel'
1743 hooks.hooks.execute(['hooks/shared-db-relation-changed'])
1744 self.assertFalse(self.CONFIGS.write.called)
1745 self.assertFalse(self.migrate_database.called)
1746
1747+ def test_pgsql_db_changed_relation_incomplete(self):
1748+ 'It does not write out cinder.conf with incomplete pgsql-db rel'
1749+ hooks.hooks.execute(['hooks/pgsql-db-relation-changed'])
1750+ self.assertFalse(self.CONFIGS.write.called)
1751+ self.assertFalse(self.migrate_database.called)
1752+
1753 def test_db_changed_not_leader(self):
1754 'It does not migrate database when not leader'
1755 self.eligible_leader.return_value = False
1756@@ -173,6 +168,14 @@
1757 self.CONFIGS.write.assert_called_with('/etc/cinder/cinder.conf')
1758 self.assertFalse(self.migrate_database.called)
1759
1760+ def test_pgsql_db_changed_not_leader(self):
1761+ 'It does not migrate database when not leader'
1762+ self.eligible_leader.return_value = False
1763+ self.CONFIGS.complete_contexts.return_value = ['pgsql-db']
1764+ hooks.hooks.execute(['hooks/pgsql-db-relation-changed'])
1765+ self.CONFIGS.write.assert_called_with('/etc/cinder/cinder.conf')
1766+ self.assertFalse(self.migrate_database.called)
1767+
1768 def test_amqp_changed(self):
1769 'It writes out cinder.conf on amqp changed with complete relation'
1770 self.CONFIGS.complete_contexts.return_value = ['amqp']
1771@@ -228,8 +231,17 @@
1772 hooks.hooks.execute(['hooks/image-service-relation-broken'])
1773 self.assertTrue(self.CONFIGS.write_all.called)
1774
1775+ def test_storage_backend_changed(self):
1776+ hooks.hooks.execute(['hooks/storage-backend-relation-changed'])
1777+ self.CONFIGS.write.assert_called_with(utils.CINDER_CONF)
1778+
1779+ def test_storage_backend_broken(self):
1780+ hooks.hooks.execute(['hooks/storage-backend-relation-broken'])
1781+ self.CONFIGS.write.assert_called_with(utils.CINDER_CONF)
1782+
1783
1784 class TestJoinedHooks(CharmTestCase):
1785+
1786 def setUp(self):
1787 super(TestJoinedHooks, self).setUp(hooks, TO_PATCH)
1788 self.config.side_effect = self.test_config.get_all
1789@@ -237,11 +249,38 @@
1790 def test_db_joined(self):
1791 'It properly requests access to a shared-db service'
1792 self.unit_get.return_value = 'cindernode1'
1793+ self.is_relation_made.return_value = False
1794 hooks.hooks.execute(['hooks/shared-db-relation-joined'])
1795 expected = {'username': 'cinder',
1796 'hostname': 'cindernode1', 'database': 'cinder'}
1797 self.relation_set.assert_called_with(**expected)
1798
1799+ def test_db_joined_with_postgresql(self):
1800+ self.is_relation_made.return_value = True
1801+
1802+ with self.assertRaises(Exception) as context:
1803+ hooks.hooks.execute(['hooks/shared-db-relation-joined'])
1804+ self.assertEqual(context.exception.message,
1805+ 'Attempting to associate a mysql database when there '
1806+ 'is already associated a postgresql one')
1807+
1808+ def test_postgresql_db_joined(self):
1809+ 'It properly requests access to a postgresql-db service'
1810+ self.unit_get.return_value = 'cindernode1'
1811+ self.is_relation_made.return_value = False
1812+ hooks.hooks.execute(['hooks/pgsql-db-relation-joined'])
1813+ expected = {'database': 'cinder'}
1814+ self.relation_set.assert_called_with(**expected)
1815+
1816+ def test_postgresql_joined_with_db(self):
1817+ self.is_relation_made.return_value = True
1818+
1819+ with self.assertRaises(Exception) as context:
1820+ hooks.hooks.execute(['hooks/pgsql-db-relation-joined'])
1821+ self.assertEqual(context.exception.message,
1822+ 'Attempting to associate a postgresql database when'
1823+ ' there is already associated a mysql one')
1824+
1825 def test_amqp_joined(self):
1826 'It properly requests access to an amqp service'
1827 hooks.hooks.execute(['hooks/amqp-relation-joined'])
1828@@ -329,3 +368,21 @@
1829 self.ensure_ceph_keyring.return_value = True
1830 hooks.hooks.execute(['hooks/ceph-relation-changed'])
1831 self.assertFalse(self.ensure_ceph_pool.called)
1832+
1833+
1834+class TestDepartedHooks(CharmTestCase):
1835+
1836+ def setUp(self):
1837+ super(TestDepartedHooks, self).setUp(hooks, TO_PATCH)
1838+ self.config.side_effect = self.test_config.get_all
1839+
1840+ def test_amqp_departed(self):
1841+ self.CONFIGS.complete_contexts.return_value = ['amqp']
1842+ hooks.hooks.execute(['hooks/amqp-relation-departed'])
1843+ self.CONFIGS.write.assert_called_with('/etc/cinder/cinder.conf')
1844+
1845+ def test_amqp_departed_incomplete(self):
1846+ self.CONFIGS.complete_contexts.return_value = []
1847+ hooks.hooks.execute(['hooks/amqp-relation-departed'])
1848+ self.assertFalse(self.CONFIGS.write.called)
1849+ self.assertTrue(self.juju_log.called)
1850
1851=== modified file 'unit_tests/test_cinder_utils.py'
1852--- unit_tests/test_cinder_utils.py 2014-02-12 10:55:14 +0000
1853+++ unit_tests/test_cinder_utils.py 2014-04-16 08:36:27 +0000
1854@@ -26,6 +26,7 @@
1855 'create_lvm_volume_group',
1856 'deactivate_lvm_volume_group',
1857 'is_lvm_physical_volume',
1858+ 'list_lvm_volume_group',
1859 'relation_ids',
1860 'remove_lvm_physical_volume',
1861 'ensure_loopback_device',
1862@@ -39,7 +40,10 @@
1863 'install_alternative',
1864 # fetch
1865 'apt_update',
1866+ 'apt_upgrade',
1867 'apt_install',
1868+ 'service_stop',
1869+ 'service_start',
1870 # cinder
1871 'ceph_config_file'
1872 ]
1873@@ -49,8 +53,14 @@
1874 ['/mnt', '/dev/vdb']
1875 ]
1876
1877+DPKG_OPTIONS = [
1878+ '--option', 'Dpkg::Options::=--force-confnew',
1879+ '--option', 'Dpkg::Options::=--force-confdef',
1880+]
1881+
1882
1883 class TestCinderUtils(CharmTestCase):
1884+
1885 def setUp(self):
1886 super(TestCinderUtils, self).setUp(cinder_utils, TO_PATCH)
1887 self.config.side_effect = self.test_config.get_all
1888@@ -109,6 +119,11 @@
1889 sorted(common + cinder_utils.API_PACKAGES +
1890 cinder_utils.SCHEDULER_PACKAGES))
1891
1892+ def test_services(self):
1893+ self.assertEquals(cinder_utils.services(),
1894+ ['haproxy', 'apache2', 'cinder-api',
1895+ 'cinder-volume', 'cinder-scheduler'])
1896+
1897 def test_creates_restart_map_all_enabled(self):
1898 'It creates correct restart map when all services enabled'
1899 ex_map = OrderedDict([
1900@@ -157,36 +172,6 @@
1901 ])
1902 self.assertEquals(cinder_utils.restart_map(), ex_map)
1903
1904- def test_ensure_block_device_bad_config(self):
1905- "It doesn't prepare storage with bad config"
1906- for none in ['None', 'none', None]:
1907- self.assertRaises(cinder_utils.CinderCharmError,
1908- cinder_utils.ensure_block_device,
1909- block_device=none)
1910-
1911- def test_ensure_block_device_loopback(self):
1912- 'It ensures loopback device when checking block device'
1913- cinder_utils.ensure_block_device('/tmp/cinder.img')
1914- ex_size = cinder_utils.DEFAULT_LOOPBACK_SIZE
1915- self.ensure_loopback_device.assert_called_with('/tmp/cinder.img',
1916- ex_size)
1917-
1918- cinder_utils.ensure_block_device('/tmp/cinder-2.img|15G')
1919- self.ensure_loopback_device.assert_called_with('/tmp/cinder-2.img',
1920- '15G')
1921-
1922- def test_ensure_standard_block_device(self):
1923- 'It looks for storage at both relative and full device path'
1924- for dev in ['vdb', '/dev/vdb']:
1925- cinder_utils.ensure_block_device(dev)
1926- self.is_block_device.assert_called_with('/dev/vdb')
1927-
1928- def test_ensure_nonexistent_block_device(self):
1929- 'It will not ensure a non-existant block device'
1930- self.is_block_device.return_value = False
1931- self.assertRaises(cinder_utils.CinderCharmError,
1932- cinder_utils.ensure_block_device, 'foo')
1933-
1934 def test_clean_storage_unmount(self):
1935 'It unmounts block device when cleaning storage'
1936 self.is_lvm_physical_volume.return_value = False
1937@@ -202,6 +187,7 @@
1938 cinder_utils.clean_storage('/dev/vdb')
1939 self.remove_lvm_physical_volume.assert_called_with('/dev/vdb')
1940 self.deactivate_lvm_volume_group.assert_called_with('/dev/vdb')
1941+ self.zap_disk.assert_called_with('/dev/vdb')
1942
1943 def test_clean_storage_zap_disk(self):
1944 'It removes traces of LVM when cleaning storage'
1945@@ -210,31 +196,136 @@
1946 cinder_utils.clean_storage('/dev/vdb')
1947 self.zap_disk.assert_called_with('/dev/vdb')
1948
1949- def test_prepare_lvm_storage_not_clean(self):
1950- 'It errors when prepping non-clean LVM storage'
1951- self.is_lvm_physical_volume.return_value = True
1952- self.assertRaises(cinder_utils.CinderCharmError,
1953- cinder_utils.prepare_lvm_storage,
1954- block_device='/dev/foobar',
1955- volume_group='bar-vg')
1956-
1957- def test_prepare_lvm_storage_clean(self):
1958- self.is_lvm_physical_volume.return_value = False
1959- cinder_utils.prepare_lvm_storage(block_device='/dev/foobar',
1960- volume_group='bar-vg')
1961- self.create_lvm_physical_volume.assert_called_with('/dev/foobar')
1962- self.create_lvm_volume_group.assert_called_with('bar-vg',
1963- '/dev/foobar')
1964-
1965- def test_prepare_lvm_storage_error(self):
1966- self.is_lvm_physical_volume.return_value = False
1967- self.create_lvm_physical_volume.side_effect = Exception()
1968- # NOTE(jamespage) ensure general Exceptions mapped
1969- # to CinderCharmError's
1970- self.assertRaises(cinder_utils.CinderCharmError,
1971- cinder_utils.prepare_lvm_storage,
1972- block_device='/dev/foobar',
1973- volume_group='bar-vg')
1974+ def test_parse_block_device(self):
1975+ self.assertEquals(cinder_utils._parse_block_device(None),
1976+ (None, 0))
1977+ self.assertEquals(cinder_utils._parse_block_device('vdc'),
1978+ ('/dev/vdc', 0))
1979+ self.assertEquals(cinder_utils._parse_block_device('/dev/vdc'),
1980+ ('/dev/vdc', 0))
1981+ self.assertEquals(cinder_utils._parse_block_device('/dev/vdc'),
1982+ ('/dev/vdc', 0))
1983+ self.assertEquals(cinder_utils._parse_block_device('/mnt/loop0|10'),
1984+ ('/mnt/loop0', '10'))
1985+ self.assertEquals(cinder_utils._parse_block_device('/mnt/loop0'),
1986+ ('/mnt/loop0', cinder_utils.DEFAULT_LOOPBACK_SIZE))
1987+
1988+ @patch.object(cinder_utils, 'clean_storage')
1989+ @patch.object(cinder_utils, 'extend_lvm_volume_group')
1990+ def test_configure_lvm_storage(self, extend_lvm, clean_storage):
1991+ devices = ['/dev/vdb', '/dev/vdc']
1992+ self.is_lvm_physical_volume.return_value = False
1993+ cinder_utils.configure_lvm_storage(devices, 'test', True)
1994+ clean_storage.assert_has_calls(
1995+ [call('/dev/vdb'),
1996+ call('/dev/vdc')]
1997+ )
1998+ self.create_lvm_physical_volume.assert_has_calls(
1999+ [call('/dev/vdb'),
2000+ call('/dev/vdc')]
2001+ )
2002+ self.create_lvm_volume_group.assert_called_with('test', '/dev/vdb')
2003+ extend_lvm.assert_called_with('test', '/dev/vdc')
2004+
2005+ @patch.object(cinder_utils, 'clean_storage')
2006+ @patch.object(cinder_utils, 'extend_lvm_volume_group')
2007+ def test_configure_lvm_storage_loopback(self, extend_lvm, clean_storage):
2008+ devices = ['/mnt/loop0|10']
2009+ self.ensure_loopback_device.return_value = '/dev/loop0'
2010+ self.is_lvm_physical_volume.return_value = False
2011+ cinder_utils.configure_lvm_storage(devices, 'test', True)
2012+ clean_storage.assert_called_with('/dev/loop0')
2013+ self.ensure_loopback_device.assert_called_with('/mnt/loop0', '10')
2014+ self.create_lvm_physical_volume.assert_called_with('/dev/loop0')
2015+ self.create_lvm_volume_group.assert_called_with('test', '/dev/loop0')
2016+ self.assertFalse(extend_lvm.called)
2017+
2018+ @patch.object(cinder_utils, 'clean_storage')
2019+ @patch.object(cinder_utils, 'extend_lvm_volume_group')
2020+ def test_configure_lvm_storage_existing_vg(self, extend_lvm,
2021+ clean_storage):
2022+ def pv_lookup(device):
2023+ devices = {
2024+ '/dev/vdb': True,
2025+ '/dev/vdc': False
2026+ }
2027+ return devices[device]
2028+
2029+ def vg_lookup(device):
2030+ devices = {
2031+ '/dev/vdb': 'test',
2032+ '/dev/vdc': None
2033+ }
2034+ return devices[device]
2035+ devices = ['/dev/vdb', '/dev/vdc']
2036+ self.is_lvm_physical_volume.side_effect = pv_lookup
2037+ self.list_lvm_volume_group.side_effect = vg_lookup
2038+ cinder_utils.configure_lvm_storage(devices, 'test', True)
2039+ clean_storage.assert_has_calls(
2040+ [call('/dev/vdc')]
2041+ )
2042+ self.create_lvm_physical_volume.assert_has_calls(
2043+ [call('/dev/vdc')]
2044+ )
2045+ extend_lvm.assert_called_with('test', '/dev/vdc')
2046+ self.assertFalse(self.create_lvm_volume_group.called)
2047+
2048+ @patch.object(cinder_utils, 'clean_storage')
2049+ @patch.object(cinder_utils, 'extend_lvm_volume_group')
2050+ def test_configure_lvm_storage_different_vg(self, extend_lvm,
2051+ clean_storage):
2052+ def pv_lookup(device):
2053+ devices = {
2054+ '/dev/vdb': True,
2055+ '/dev/vdc': True
2056+ }
2057+ return devices[device]
2058+
2059+ def vg_lookup(device):
2060+ devices = {
2061+ '/dev/vdb': 'test',
2062+ '/dev/vdc': 'another'
2063+ }
2064+ return devices[device]
2065+ devices = ['/dev/vdb', '/dev/vdc']
2066+ self.is_lvm_physical_volume.side_effect = pv_lookup
2067+ self.list_lvm_volume_group.side_effect = vg_lookup
2068+ cinder_utils.configure_lvm_storage(devices, 'test', True)
2069+ clean_storage.assert_called_with('/dev/vdc')
2070+ self.create_lvm_physical_volume.assert_called_with('/dev/vdc')
2071+ extend_lvm.assert_called_with('test', '/dev/vdc')
2072+ self.assertFalse(self.create_lvm_volume_group.called)
2073+
2074+ @patch.object(cinder_utils, 'clean_storage')
2075+ @patch.object(cinder_utils, 'extend_lvm_volume_group')
2076+ def test_configure_lvm_storage_different_vg_ignore(self, extend_lvm,
2077+ clean_storage):
2078+ def pv_lookup(device):
2079+ devices = {
2080+ '/dev/vdb': True,
2081+ '/dev/vdc': True
2082+ }
2083+ return devices[device]
2084+
2085+ def vg_lookup(device):
2086+ devices = {
2087+ '/dev/vdb': 'test',
2088+ '/dev/vdc': 'another'
2089+ }
2090+ return devices[device]
2091+ devices = ['/dev/vdb', '/dev/vdc']
2092+ self.is_lvm_physical_volume.side_effect = pv_lookup
2093+ self.list_lvm_volume_group.side_effect = vg_lookup
2094+ cinder_utils.configure_lvm_storage(devices, 'test', False)
2095+ self.assertFalse(clean_storage.called)
2096+ self.assertFalse(self.create_lvm_physical_volume.called)
2097+ self.assertFalse(extend_lvm.called)
2098+ self.assertFalse(self.create_lvm_volume_group.called)
2099+
2100+ @patch('subprocess.check_call')
2101+ def test_extend_lvm_volume_group(self, _call):
2102+ cinder_utils.extend_lvm_volume_group('test', '/dev/sdb')
2103+ _call.assert_called_with(['vgextend', 'test', '/dev/sdb'])
2104
2105 def test_migrate_database(self):
2106 'It migrates database with cinder-manage'
2107@@ -322,30 +413,40 @@
2108 out.write('env CEPH_ARGS="--id %s"\n' % service)
2109 """
2110
2111+ @patch.object(cinder_utils, 'services')
2112 @patch.object(cinder_utils, 'migrate_database')
2113 @patch.object(cinder_utils, 'determine_packages')
2114- def test_openstack_upgrade_leader(self, pkgs, migrate):
2115+ def test_openstack_upgrade_leader(self, pkgs, migrate, services):
2116 pkgs.return_value = ['mypackage']
2117 self.config.side_effect = None
2118 self.config.return_value = 'cloud:precise-havana'
2119+ services.return_value = ['cinder-api', 'cinder-volume']
2120 self.eligible_leader.return_value = True
2121 self.get_os_codename_install_source.return_value = 'havana'
2122 configs = MagicMock()
2123 cinder_utils.do_openstack_upgrade(configs)
2124 self.assertTrue(configs.write_all.called)
2125+ self.apt_upgrade.assert_called_with(options=DPKG_OPTIONS,
2126+ fatal=True, dist=True)
2127+ self.apt_install.assert_called_with(['mypackage'], fatal=True)
2128 configs.set_release.assert_called_with(openstack_release='havana')
2129 self.assertTrue(migrate.called)
2130
2131+ @patch.object(cinder_utils, 'services')
2132 @patch.object(cinder_utils, 'migrate_database')
2133 @patch.object(cinder_utils, 'determine_packages')
2134- def test_openstack_upgrade_not_leader(self, pkgs, migrate):
2135+ def test_openstack_upgrade_not_leader(self, pkgs, migrate, services):
2136 pkgs.return_value = ['mypackage']
2137 self.config.side_effect = None
2138 self.config.return_value = 'cloud:precise-havana'
2139+ services.return_value = ['cinder-api', 'cinder-volume']
2140 self.eligible_leader.return_value = False
2141 self.get_os_codename_install_source.return_value = 'havana'
2142 configs = MagicMock()
2143 cinder_utils.do_openstack_upgrade(configs)
2144 self.assertTrue(configs.write_all.called)
2145+ self.apt_upgrade.assert_called_with(options=DPKG_OPTIONS,
2146+ fatal=True, dist=True)
2147+ self.apt_install.assert_called_with(['mypackage'], fatal=True)
2148 configs.set_release.assert_called_with(openstack_release='havana')
2149 self.assertFalse(migrate.called)
2150
2151=== modified file 'unit_tests/test_cluster_hooks.py'
2152--- unit_tests/test_cluster_hooks.py 2014-03-13 15:50:49 +0000
2153+++ unit_tests/test_cluster_hooks.py 2014-04-16 08:36:27 +0000
2154@@ -24,15 +24,13 @@
2155
2156 TO_PATCH = [
2157 # cinder_utils
2158- 'clean_storage',
2159 'determine_packages',
2160- 'ensure_block_device',
2161 'ensure_ceph_keyring',
2162 'ensure_ceph_pool',
2163 'juju_log',
2164 'lsb_release',
2165 'migrate_database',
2166- 'prepare_lvm_storage',
2167+ 'configure_lvm_storage',
2168 'register_configs',
2169 'service_enabled',
2170 'set_ceph_env_variables',
2171@@ -58,6 +56,7 @@
2172
2173
2174 class TestClusterHooks(CharmTestCase):
2175+
2176 def setUp(self):
2177 super(TestClusterHooks, self).setUp(hooks, TO_PATCH)
2178 self.config.side_effect = self.test_config.get_all
2179
2180=== modified file 'unit_tests/test_utils.py'
2181--- unit_tests/test_utils.py 2014-01-15 15:30:21 +0000
2182+++ unit_tests/test_utils.py 2014-04-16 08:36:27 +0000
2183@@ -55,6 +55,7 @@
2184
2185
2186 class CharmTestCase(unittest.TestCase):
2187+
2188 def setUp(self, obj, patches):
2189 super(CharmTestCase, self).setUp()
2190 self.patches = patches
2191@@ -75,6 +76,7 @@
2192
2193
2194 class TestConfig(object):
2195+
2196 def __init__(self):
2197 self.config = get_default_config()
2198
2199@@ -94,6 +96,7 @@
2200
2201
2202 class TestRelation(object):
2203+
2204 def __init__(self, relation_data={}):
2205 self.relation_data = relation_data
2206
