Merge lp:~chad.smith/charms/precise/block-storage-broker/bsb-use-charmhelpers-to-set-openstack-origin into lp:charms/block-storage-broker

Proposed by Chad Smith
Status: Work in progress
Proposed branch: lp:~chad.smith/charms/precise/block-storage-broker/bsb-use-charmhelpers-to-set-openstack-origin
Merge into: lp:charms/block-storage-broker
Diff against target: 3846 lines (+3261/-146)
23 files modified
Makefile (+10/-7)
charm-helpers.yaml (+2/-0)
config.yaml (+15/-0)
hooks/charmhelpers/contrib/openstack/alternatives.py (+17/-0)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+61/-0)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+275/-0)
hooks/charmhelpers/contrib/openstack/context.py (+789/-0)
hooks/charmhelpers/contrib/openstack/ip.py (+79/-0)
hooks/charmhelpers/contrib/openstack/neutron.py (+201/-0)
hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
hooks/charmhelpers/contrib/openstack/templating.py (+279/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+459/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+387/-0)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+62/-0)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+88/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+53/-0)
hooks/charmhelpers/core/hookenv.py (+129/-6)
hooks/charmhelpers/core/host.py (+81/-12)
hooks/charmhelpers/fetch/__init__.py (+191/-74)
hooks/charmhelpers/fetch/archiveurl.py (+56/-1)
hooks/charmhelpers/fetch/bzrurl.py (+2/-1)
hooks/hooks.py (+4/-4)
hooks/test_hooks.py (+19/-41)
To merge this branch: bzr merge lp:~chad.smith/charms/precise/block-storage-broker/bsb-use-charmhelpers-to-set-openstack-origin
Reviewer Review Type Date Requested Status
David Britton (community) Needs Fixing
Review via email: mp+231594@code.launchpad.net

Description of the change

This branch avoids making a static call to charmhelpers' fetch.add_source("cloud-archive:havana") to add cloud archives, in favor of the more flexible approach used by the OpenStack charms, since the former breaks on newer distribution series (such as trusty).
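
As a quick illustrative sketch of the difference (not the literal hooks.py change, which is in the preview diff below; the package list here is hypothetical):

    from charmhelpers.core.hookenv import config
    from charmhelpers.fetch import add_source, apt_install, apt_update
    from charmhelpers.contrib.openstack.utils import (
        configure_installation_source)


    def install_old():
        # Old approach: a hard-coded cloud-archive pocket. This assumes the
        # unit runs precise; on trusty the havana packages live in the main
        # archive and this pocket does not exist.
        add_source("cloud-archive:havana")
        apt_update()
        apt_install(["python-novaclient"])  # hypothetical package list


    def install_new():
        # New approach: resolve the user-configurable openstack-origin value
        # ("distro", "ppa:...", a deb sources entry, or a
        # "cloud:<series>-<release>" pocket) for whatever series the unit
        # runs on.
        configure_installation_source(config("openstack-origin"))
        apt_update()
        apt_install(["python-novaclient"])  # hypothetical package list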

This branch is quite sizeable because it pulls in the charmhelpers.contrib.openstack module. The changes to block-storage-broker are as follows:
  1. sync new charmhelpers dependencies
      - a new Makefile sync target to simplify charmhelpers updates
      - charm-helpers.yaml updated to declare the new charmhelpers dependencies contrib.openstack and contrib.storage
      - new files synced under charmhelpers (not authored in this branch)

  2. config.yaml gains a new openstack-origin parameter that defaults to the distribution's supported archive but lets a user specify a custom cloud archive repository if needed

  3. hooks/hooks.py drops the static fetch.add_source("cloud:havana") call in favor of charmhelpers.contrib.openstack.utils.configure_installation_source()

  4. fix unit tests to validate the new installation-source handling (a minimal sketch follows this list)
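
For item 4, here is a minimal sketch of the shape of the updated tests, assuming the plain mock library replaces mocker (names such as hooks.install are hypothetical; the real assertions live in hooks/test_hooks.py):

    import unittest

    import mock

    import hooks


    class TestInstallHook(unittest.TestCase):

        @mock.patch("hooks.apt_install")
        @mock.patch("hooks.apt_update")
        @mock.patch("hooks.configure_installation_source")
        def test_install_configures_installation_source(
                self, configure_source, apt_update, apt_install):
            # The hook should pass the configured origin straight through
            # to charmhelpers instead of hard-coding a cloud-archive pocket.
            with mock.patch("hooks.config",
                            return_value="cloud:precise-havana"):
                hooks.install()
            configure_source.assert_called_once_with("cloud:precise-havana")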

The relevant changes, excluding the charmhelpers.contrib directory sync, are linked here for quick reference:
http://pastebin.ubuntu.com/8099398/

David Britton (dpb) wrote :

Hi Chad -- Thanks for this MP!

I don't see any reason why this would be controversial. Please clear the merge conflict with trunk and I'll review and commit this straightaway.

review: Needs Fixing

Unmerged revisions

62. By Chad Smith

merge block-storage-broker trunk, resolve conflicts, and fix unit tests to avoid mocker use

61. By Chad Smith

correct yaml indent in config.yaml

60. By Chad Smith

update unit tests to validate use of charmhelpers config_installation_source

59. By Chad Smith

update charmhelpers sync functionality

58. By Chad Smith

add openstack-origin to config.yaml options and use charmhelpers configure_installation_source to pull appropriate deb packages for a given ubuntu series

57. By Chad Smith

sync added contrib.(storage|openstack) files

56. By Chad Smith

add contrib.openstack and its dependency contrib.storage to the charm-helpers.yaml file

55. By Chad Smith

sync existing charmhelpers dependencies

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2014-03-21 17:05:09 +0000
3+++ Makefile 2014-09-10 21:17:48 +0000
4@@ -1,4 +1,6 @@
5 .PHONY: test lint clean
6+PYTHON := /usr/bin/env python
7+
8 CHARM_DIR=`pwd`
9
10 clean:
11@@ -10,10 +12,11 @@
12 lint:
13 @flake8 --exclude hooks/charmhelpers hooks
14
15-update-charm-helpers:
16- # Pull latest charm-helpers branch and sync the components based on our
17- $ charm-helpers.yaml
18- rm -rf charm-helpers
19- bzr co lp:charm-helpers
20- ./charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py -c charm-helpers.yaml
21- rm -rf charm-helpers
22+bin/charm_helpers_sync.py:
23+ @mkdir -p bin
24+ @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
25+ > bin/charm_helpers_sync.py
26+
27+# Update charmhelpers dependencies within our charm
28+sync: bin/charm_helpers_sync.py
29+ $(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
30
31=== modified file 'charm-helpers.yaml'
32--- charm-helpers.yaml 2014-02-04 17:36:03 +0000
33+++ charm-helpers.yaml 2014-09-10 21:17:48 +0000
34@@ -5,3 +5,5 @@
35 include:
36 - core
37 - fetch
38+ - contrib.openstack
39+ - contrib.storage # for openstack dependencies
40
41=== modified file 'config.yaml'
42--- config.yaml 2014-07-15 22:58:26 +0000
43+++ config.yaml 2014-09-10 21:17:48 +0000
44@@ -29,3 +29,18 @@
45 type: int
46 description: The volume size in GB if the relation does not specify
47 default: 5
48+ openstack-origin:
49+ default: distro
50+ type: string
51+ description: |
52+ Repository from which to install. May be one of the following:
53+ distro (default), ppa:somecustom/ppa, a deb url sources entry,
54+ or a supported Cloud Archive release pocket.
55+
56+ Supported Cloud Archive sources include: cloud:precise-folsom,
57+ cloud:precise-folsom/updates, cloud:precise-folsom/staging,
58+ cloud:precise-folsom/proposed.
59+
60+ Note that updating this setting to a source that is known to
61+ provide a later version of OpenStack will trigger a software
62+ upgrade.
63
64=== added directory 'hooks/charmhelpers/contrib'
65=== added file 'hooks/charmhelpers/contrib/__init__.py'
66=== added directory 'hooks/charmhelpers/contrib/openstack'
67=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
68=== added file 'hooks/charmhelpers/contrib/openstack/alternatives.py'
69--- hooks/charmhelpers/contrib/openstack/alternatives.py 1970-01-01 00:00:00 +0000
70+++ hooks/charmhelpers/contrib/openstack/alternatives.py 2014-09-10 21:17:48 +0000
71@@ -0,0 +1,17 @@
72+''' Helper for managing alternatives for file conflict resolution '''
73+
74+import subprocess
75+import shutil
76+import os
77+
78+
79+def install_alternative(name, target, source, priority=50):
80+ ''' Install alternative configuration '''
81+ if (os.path.exists(target) and not os.path.islink(target)):
82+ # Move existing file/directory away before installing
83+ shutil.move(target, '{}.bak'.format(target))
84+ cmd = [
85+ 'update-alternatives', '--force', '--install',
86+ target, name, source, str(priority)
87+ ]
88+ subprocess.check_call(cmd)
89
90=== added directory 'hooks/charmhelpers/contrib/openstack/amulet'
91=== added file 'hooks/charmhelpers/contrib/openstack/amulet/__init__.py'
92=== added file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
93--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000
94+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-09-10 21:17:48 +0000
95@@ -0,0 +1,61 @@
96+from charmhelpers.contrib.amulet.deployment import (
97+ AmuletDeployment
98+)
99+
100+
101+class OpenStackAmuletDeployment(AmuletDeployment):
102+ """OpenStack amulet deployment.
103+
104+ This class inherits from AmuletDeployment and has additional support
105+ that is specifically for use by OpenStack charms.
106+ """
107+
108+ def __init__(self, series=None, openstack=None, source=None):
109+ """Initialize the deployment environment."""
110+ super(OpenStackAmuletDeployment, self).__init__(series)
111+ self.openstack = openstack
112+ self.source = source
113+
114+ def _add_services(self, this_service, other_services):
115+ """Add services to the deployment and set openstack-origin."""
116+ super(OpenStackAmuletDeployment, self)._add_services(this_service,
117+ other_services)
118+ name = 0
119+ services = other_services
120+ services.append(this_service)
121+ use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph']
122+
123+ if self.openstack:
124+ for svc in services:
125+ if svc[name] not in use_source:
126+ config = {'openstack-origin': self.openstack}
127+ self.d.configure(svc[name], config)
128+
129+ if self.source:
130+ for svc in services:
131+ if svc[name] in use_source:
132+ config = {'source': self.source}
133+ self.d.configure(svc[name], config)
134+
135+ def _configure_services(self, configs):
136+ """Configure all of the services."""
137+ for service, config in configs.iteritems():
138+ self.d.configure(service, config)
139+
140+ def _get_openstack_release(self):
141+ """Get openstack release.
142+
143+ Return an integer representing the enum value of the openstack
144+ release.
145+ """
146+ (self.precise_essex, self.precise_folsom, self.precise_grizzly,
147+ self.precise_havana, self.precise_icehouse,
148+ self.trusty_icehouse) = range(6)
149+ releases = {
150+ ('precise', None): self.precise_essex,
151+ ('precise', 'cloud:precise-folsom'): self.precise_folsom,
152+ ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
153+ ('precise', 'cloud:precise-havana'): self.precise_havana,
154+ ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
155+ ('trusty', None): self.trusty_icehouse}
156+ return releases[(self.series, self.openstack)]
157
158=== added file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
159--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000
160+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-09-10 21:17:48 +0000
161@@ -0,0 +1,275 @@
162+import logging
163+import os
164+import time
165+import urllib
166+
167+import glanceclient.v1.client as glance_client
168+import keystoneclient.v2_0 as keystone_client
169+import novaclient.v1_1.client as nova_client
170+
171+from charmhelpers.contrib.amulet.utils import (
172+ AmuletUtils
173+)
174+
175+DEBUG = logging.DEBUG
176+ERROR = logging.ERROR
177+
178+
179+class OpenStackAmuletUtils(AmuletUtils):
180+ """OpenStack amulet utilities.
181+
182+ This class inherits from AmuletUtils and has additional support
183+ that is specifically for use by OpenStack charms.
184+ """
185+
186+ def __init__(self, log_level=ERROR):
187+ """Initialize the deployment environment."""
188+ super(OpenStackAmuletUtils, self).__init__(log_level)
189+
190+ def validate_endpoint_data(self, endpoints, admin_port, internal_port,
191+ public_port, expected):
192+ """Validate endpoint data.
193+
194+ Validate actual endpoint data vs expected endpoint data. The ports
195+ are used to find the matching endpoint.
196+ """
197+ found = False
198+ for ep in endpoints:
199+ self.log.debug('endpoint: {}'.format(repr(ep)))
200+ if (admin_port in ep.adminurl and
201+ internal_port in ep.internalurl and
202+ public_port in ep.publicurl):
203+ found = True
204+ actual = {'id': ep.id,
205+ 'region': ep.region,
206+ 'adminurl': ep.adminurl,
207+ 'internalurl': ep.internalurl,
208+ 'publicurl': ep.publicurl,
209+ 'service_id': ep.service_id}
210+ ret = self._validate_dict_data(expected, actual)
211+ if ret:
212+ return 'unexpected endpoint data - {}'.format(ret)
213+
214+ if not found:
215+ return 'endpoint not found'
216+
217+ def validate_svc_catalog_endpoint_data(self, expected, actual):
218+ """Validate service catalog endpoint data.
219+
220+ Validate a list of actual service catalog endpoints vs a list of
221+ expected service catalog endpoints.
222+ """
223+ self.log.debug('actual: {}'.format(repr(actual)))
224+ for k, v in expected.iteritems():
225+ if k in actual:
226+ ret = self._validate_dict_data(expected[k][0], actual[k][0])
227+ if ret:
228+ return self.endpoint_error(k, ret)
229+ else:
230+ return "endpoint {} does not exist".format(k)
231+ return ret
232+
233+ def validate_tenant_data(self, expected, actual):
234+ """Validate tenant data.
235+
236+ Validate a list of actual tenant data vs list of expected tenant
237+ data.
238+ """
239+ self.log.debug('actual: {}'.format(repr(actual)))
240+ for e in expected:
241+ found = False
242+ for act in actual:
243+ a = {'enabled': act.enabled, 'description': act.description,
244+ 'name': act.name, 'id': act.id}
245+ if e['name'] == a['name']:
246+ found = True
247+ ret = self._validate_dict_data(e, a)
248+ if ret:
249+ return "unexpected tenant data - {}".format(ret)
250+ if not found:
251+ return "tenant {} does not exist".format(e['name'])
252+ return ret
253+
254+ def validate_role_data(self, expected, actual):
255+ """Validate role data.
256+
257+ Validate a list of actual role data vs a list of expected role
258+ data.
259+ """
260+ self.log.debug('actual: {}'.format(repr(actual)))
261+ for e in expected:
262+ found = False
263+ for act in actual:
264+ a = {'name': act.name, 'id': act.id}
265+ if e['name'] == a['name']:
266+ found = True
267+ ret = self._validate_dict_data(e, a)
268+ if ret:
269+ return "unexpected role data - {}".format(ret)
270+ if not found:
271+ return "role {} does not exist".format(e['name'])
272+ return ret
273+
274+ def validate_user_data(self, expected, actual):
275+ """Validate user data.
276+
277+ Validate a list of actual user data vs a list of expected user
278+ data.
279+ """
280+ self.log.debug('actual: {}'.format(repr(actual)))
281+ for e in expected:
282+ found = False
283+ for act in actual:
284+ a = {'enabled': act.enabled, 'name': act.name,
285+ 'email': act.email, 'tenantId': act.tenantId,
286+ 'id': act.id}
287+ if e['name'] == a['name']:
288+ found = True
289+ ret = self._validate_dict_data(e, a)
290+ if ret:
291+ return "unexpected user data - {}".format(ret)
292+ if not found:
293+ return "user {} does not exist".format(e['name'])
294+ return ret
295+
296+ def validate_flavor_data(self, expected, actual):
297+ """Validate flavor data.
298+
299+ Validate a list of actual flavors vs a list of expected flavors.
300+ """
301+ self.log.debug('actual: {}'.format(repr(actual)))
302+ act = [a.name for a in actual]
303+ return self._validate_list_data(expected, act)
304+
305+ def tenant_exists(self, keystone, tenant):
306+ """Return True if tenant exists."""
307+ return tenant in [t.name for t in keystone.tenants.list()]
308+
309+ def authenticate_keystone_admin(self, keystone_sentry, user, password,
310+ tenant):
311+ """Authenticates admin user with the keystone admin endpoint."""
312+ unit = keystone_sentry
313+ service_ip = unit.relation('shared-db',
314+ 'mysql:shared-db')['private-address']
315+ ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
316+ return keystone_client.Client(username=user, password=password,
317+ tenant_name=tenant, auth_url=ep)
318+
319+ def authenticate_keystone_user(self, keystone, user, password, tenant):
320+ """Authenticates a regular user with the keystone public endpoint."""
321+ ep = keystone.service_catalog.url_for(service_type='identity',
322+ endpoint_type='publicURL')
323+ return keystone_client.Client(username=user, password=password,
324+ tenant_name=tenant, auth_url=ep)
325+
326+ def authenticate_glance_admin(self, keystone):
327+ """Authenticates admin user with glance."""
328+ ep = keystone.service_catalog.url_for(service_type='image',
329+ endpoint_type='adminURL')
330+ return glance_client.Client(ep, token=keystone.auth_token)
331+
332+ def authenticate_nova_user(self, keystone, user, password, tenant):
333+ """Authenticates a regular user with nova-api."""
334+ ep = keystone.service_catalog.url_for(service_type='identity',
335+ endpoint_type='publicURL')
336+ return nova_client.Client(username=user, api_key=password,
337+ project_id=tenant, auth_url=ep)
338+
339+ def create_cirros_image(self, glance, image_name):
340+ """Download the latest cirros image and upload it to glance."""
341+ http_proxy = os.getenv('AMULET_HTTP_PROXY')
342+ self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
343+ if http_proxy:
344+ proxies = {'http': http_proxy}
345+ opener = urllib.FancyURLopener(proxies)
346+ else:
347+ opener = urllib.FancyURLopener()
348+
349+ f = opener.open("http://download.cirros-cloud.net/version/released")
350+ version = f.read().strip()
351+ cirros_img = "tests/cirros-{}-x86_64-disk.img".format(version)
352+
353+ if not os.path.exists(cirros_img):
354+ cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
355+ version, cirros_img)
356+ opener.retrieve(cirros_url, cirros_img)
357+ f.close()
358+
359+ with open(cirros_img) as f:
360+ image = glance.images.create(name=image_name, is_public=True,
361+ disk_format='qcow2',
362+ container_format='bare', data=f)
363+ count = 1
364+ status = image.status
365+ while status != 'active' and count < 10:
366+ time.sleep(3)
367+ image = glance.images.get(image.id)
368+ status = image.status
369+ self.log.debug('image status: {}'.format(status))
370+ count += 1
371+
372+ if status != 'active':
373+ self.log.error('image creation timed out')
374+ return None
375+
376+ return image
377+
378+ def delete_image(self, glance, image):
379+ """Delete the specified image."""
380+ num_before = len(list(glance.images.list()))
381+ glance.images.delete(image)
382+
383+ count = 1
384+ num_after = len(list(glance.images.list()))
385+ while num_after != (num_before - 1) and count < 10:
386+ time.sleep(3)
387+ num_after = len(list(glance.images.list()))
388+ self.log.debug('number of images: {}'.format(num_after))
389+ count += 1
390+
391+ if num_after != (num_before - 1):
392+ self.log.error('image deletion timed out')
393+ return False
394+
395+ return True
396+
397+ def create_instance(self, nova, image_name, instance_name, flavor):
398+ """Create the specified instance."""
399+ image = nova.images.find(name=image_name)
400+ flavor = nova.flavors.find(name=flavor)
401+ instance = nova.servers.create(name=instance_name, image=image,
402+ flavor=flavor)
403+
404+ count = 1
405+ status = instance.status
406+ while status != 'ACTIVE' and count < 60:
407+ time.sleep(3)
408+ instance = nova.servers.get(instance.id)
409+ status = instance.status
410+ self.log.debug('instance status: {}'.format(status))
411+ count += 1
412+
413+ if status != 'ACTIVE':
414+ self.log.error('instance creation timed out')
415+ return None
416+
417+ return instance
418+
419+ def delete_instance(self, nova, instance):
420+ """Delete the specified instance."""
421+ num_before = len(list(nova.servers.list()))
422+ nova.servers.delete(instance)
423+
424+ count = 1
425+ num_after = len(list(nova.servers.list()))
426+ while num_after != (num_before - 1) and count < 10:
427+ time.sleep(3)
428+ num_after = len(list(nova.servers.list()))
429+ self.log.debug('number of instances: {}'.format(num_after))
430+ count += 1
431+
432+ if num_after != (num_before - 1):
433+ self.log.error('instance deletion timed out')
434+ return False
435+
436+ return True
437
438=== added file 'hooks/charmhelpers/contrib/openstack/context.py'
439--- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000
440+++ hooks/charmhelpers/contrib/openstack/context.py 2014-09-10 21:17:48 +0000
441@@ -0,0 +1,789 @@
442+import json
443+import os
444+import time
445+
446+from base64 import b64decode
447+
448+from subprocess import (
449+ check_call
450+)
451+
452+
453+from charmhelpers.fetch import (
454+ apt_install,
455+ filter_installed_packages,
456+)
457+
458+from charmhelpers.core.hookenv import (
459+ config,
460+ local_unit,
461+ log,
462+ relation_get,
463+ relation_ids,
464+ related_units,
465+ relation_set,
466+ unit_get,
467+ unit_private_ip,
468+ ERROR,
469+ INFO
470+)
471+
472+from charmhelpers.contrib.hahelpers.cluster import (
473+ determine_apache_port,
474+ determine_api_port,
475+ https,
476+ is_clustered
477+)
478+
479+from charmhelpers.contrib.hahelpers.apache import (
480+ get_cert,
481+ get_ca_cert,
482+)
483+
484+from charmhelpers.contrib.openstack.neutron import (
485+ neutron_plugin_attribute,
486+)
487+
488+from charmhelpers.contrib.network.ip import (
489+ get_address_in_network,
490+ get_ipv6_addr,
491+)
492+
493+CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
494+
495+
496+class OSContextError(Exception):
497+ pass
498+
499+
500+def ensure_packages(packages):
501+ '''Install but do not upgrade required plugin packages'''
502+ required = filter_installed_packages(packages)
503+ if required:
504+ apt_install(required, fatal=True)
505+
506+
507+def context_complete(ctxt):
508+ _missing = []
509+ for k, v in ctxt.iteritems():
510+ if v is None or v == '':
511+ _missing.append(k)
512+ if _missing:
513+ log('Missing required data: %s' % ' '.join(_missing), level='INFO')
514+ return False
515+ return True
516+
517+
518+def config_flags_parser(config_flags):
519+ if config_flags.find('==') >= 0:
520+ log("config_flags is not in expected format (key=value)",
521+ level=ERROR)
522+ raise OSContextError
523+ # strip the following from each value.
524+ post_strippers = ' ,'
525+ # we strip any leading/trailing '=' or ' ' from the string then
526+ # split on '='.
527+ split = config_flags.strip(' =').split('=')
528+ limit = len(split)
529+ flags = {}
530+ for i in xrange(0, limit - 1):
531+ current = split[i]
532+ next = split[i + 1]
533+ vindex = next.rfind(',')
534+ if (i == limit - 2) or (vindex < 0):
535+ value = next
536+ else:
537+ value = next[:vindex]
538+
539+ if i == 0:
540+ key = current
541+ else:
542+ # if this not the first entry, expect an embedded key.
543+ index = current.rfind(',')
544+ if index < 0:
545+ log("invalid config value(s) at index %s" % (i),
546+ level=ERROR)
547+ raise OSContextError
548+ key = current[index + 1:]
549+
550+ # Add to collection.
551+ flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
552+ return flags
553+
554+
555+class OSContextGenerator(object):
556+ interfaces = []
557+
558+ def __call__(self):
559+ raise NotImplementedError
560+
561+
562+class SharedDBContext(OSContextGenerator):
563+ interfaces = ['shared-db']
564+
565+ def __init__(self,
566+ database=None, user=None, relation_prefix=None, ssl_dir=None):
567+ '''
568+ Allows inspecting relation for settings prefixed with relation_prefix.
569+ This is useful for parsing access for multiple databases returned via
570+ the shared-db interface (eg, nova_password, quantum_password)
571+ '''
572+ self.relation_prefix = relation_prefix
573+ self.database = database
574+ self.user = user
575+ self.ssl_dir = ssl_dir
576+
577+ def __call__(self):
578+ self.database = self.database or config('database')
579+ self.user = self.user or config('database-user')
580+ if None in [self.database, self.user]:
581+ log('Could not generate shared_db context. '
582+ 'Missing required charm config options. '
583+ '(database name and user)')
584+ raise OSContextError
585+
586+ ctxt = {}
587+
588+ # NOTE(jamespage) if mysql charm provides a network upon which
589+ # access to the database should be made, reconfigure relation
590+ # with the service units local address and defer execution
591+ access_network = relation_get('access-network')
592+ if access_network is not None:
593+ if self.relation_prefix is not None:
594+ hostname_key = "{}_hostname".format(self.relation_prefix)
595+ else:
596+ hostname_key = "hostname"
597+ access_hostname = get_address_in_network(access_network,
598+ unit_get('private-address'))
599+ set_hostname = relation_get(attribute=hostname_key,
600+ unit=local_unit())
601+ if set_hostname != access_hostname:
602+ relation_set(relation_settings={hostname_key: access_hostname})
603+ return ctxt # Defer any further hook execution for now....
604+
605+ password_setting = 'password'
606+ if self.relation_prefix:
607+ password_setting = self.relation_prefix + '_password'
608+
609+ for rid in relation_ids('shared-db'):
610+ for unit in related_units(rid):
611+ rdata = relation_get(rid=rid, unit=unit)
612+ ctxt = {
613+ 'database_host': rdata.get('db_host'),
614+ 'database': self.database,
615+ 'database_user': self.user,
616+ 'database_password': rdata.get(password_setting),
617+ 'database_type': 'mysql'
618+ }
619+ if context_complete(ctxt):
620+ db_ssl(rdata, ctxt, self.ssl_dir)
621+ return ctxt
622+ return {}
623+
624+
625+class PostgresqlDBContext(OSContextGenerator):
626+ interfaces = ['pgsql-db']
627+
628+ def __init__(self, database=None):
629+ self.database = database
630+
631+ def __call__(self):
632+ self.database = self.database or config('database')
633+ if self.database is None:
634+ log('Could not generate postgresql_db context. '
635+ 'Missing required charm config options. '
636+ '(database name)')
637+ raise OSContextError
638+ ctxt = {}
639+
640+ for rid in relation_ids(self.interfaces[0]):
641+ for unit in related_units(rid):
642+ ctxt = {
643+ 'database_host': relation_get('host', rid=rid, unit=unit),
644+ 'database': self.database,
645+ 'database_user': relation_get('user', rid=rid, unit=unit),
646+ 'database_password': relation_get('password', rid=rid, unit=unit),
647+ 'database_type': 'postgresql',
648+ }
649+ if context_complete(ctxt):
650+ return ctxt
651+ return {}
652+
653+
654+def db_ssl(rdata, ctxt, ssl_dir):
655+ if 'ssl_ca' in rdata and ssl_dir:
656+ ca_path = os.path.join(ssl_dir, 'db-client.ca')
657+ with open(ca_path, 'w') as fh:
658+ fh.write(b64decode(rdata['ssl_ca']))
659+ ctxt['database_ssl_ca'] = ca_path
660+ elif 'ssl_ca' in rdata:
661+ log("Charm not setup for ssl support but ssl ca found")
662+ return ctxt
663+ if 'ssl_cert' in rdata:
664+ cert_path = os.path.join(
665+ ssl_dir, 'db-client.cert')
666+ if not os.path.exists(cert_path):
667+ log("Waiting 1m for ssl client cert validity")
668+ time.sleep(60)
669+ with open(cert_path, 'w') as fh:
670+ fh.write(b64decode(rdata['ssl_cert']))
671+ ctxt['database_ssl_cert'] = cert_path
672+ key_path = os.path.join(ssl_dir, 'db-client.key')
673+ with open(key_path, 'w') as fh:
674+ fh.write(b64decode(rdata['ssl_key']))
675+ ctxt['database_ssl_key'] = key_path
676+ return ctxt
677+
678+
679+class IdentityServiceContext(OSContextGenerator):
680+ interfaces = ['identity-service']
681+
682+ def __call__(self):
683+ log('Generating template context for identity-service')
684+ ctxt = {}
685+
686+ for rid in relation_ids('identity-service'):
687+ for unit in related_units(rid):
688+ rdata = relation_get(rid=rid, unit=unit)
689+ ctxt = {
690+ 'service_port': rdata.get('service_port'),
691+ 'service_host': rdata.get('service_host'),
692+ 'auth_host': rdata.get('auth_host'),
693+ 'auth_port': rdata.get('auth_port'),
694+ 'admin_tenant_name': rdata.get('service_tenant'),
695+ 'admin_user': rdata.get('service_username'),
696+ 'admin_password': rdata.get('service_password'),
697+ 'service_protocol':
698+ rdata.get('service_protocol') or 'http',
699+ 'auth_protocol':
700+ rdata.get('auth_protocol') or 'http',
701+ }
702+ if context_complete(ctxt):
703+ # NOTE(jamespage) this is required for >= icehouse
704+ # so a missing value just indicates keystone needs
705+ # upgrading
706+ ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
707+ return ctxt
708+ return {}
709+
710+
711+class AMQPContext(OSContextGenerator):
712+
713+ def __init__(self, ssl_dir=None, rel_name='amqp', relation_prefix=None):
714+ self.ssl_dir = ssl_dir
715+ self.rel_name = rel_name
716+ self.relation_prefix = relation_prefix
717+ self.interfaces = [rel_name]
718+
719+ def __call__(self):
720+ log('Generating template context for amqp')
721+ conf = config()
722+ user_setting = 'rabbit-user'
723+ vhost_setting = 'rabbit-vhost'
724+ if self.relation_prefix:
725+ user_setting = self.relation_prefix + '-rabbit-user'
726+ vhost_setting = self.relation_prefix + '-rabbit-vhost'
727+
728+ try:
729+ username = conf[user_setting]
730+ vhost = conf[vhost_setting]
731+ except KeyError as e:
732+ log('Could not generate shared_db context. '
733+ 'Missing required charm config options: %s.' % e)
734+ raise OSContextError
735+ ctxt = {}
736+ for rid in relation_ids(self.rel_name):
737+ ha_vip_only = False
738+ for unit in related_units(rid):
739+ if relation_get('clustered', rid=rid, unit=unit):
740+ ctxt['clustered'] = True
741+ ctxt['rabbitmq_host'] = relation_get('vip', rid=rid,
742+ unit=unit)
743+ else:
744+ ctxt['rabbitmq_host'] = relation_get('private-address',
745+ rid=rid, unit=unit)
746+ ctxt.update({
747+ 'rabbitmq_user': username,
748+ 'rabbitmq_password': relation_get('password', rid=rid,
749+ unit=unit),
750+ 'rabbitmq_virtual_host': vhost,
751+ })
752+
753+ ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
754+ if ssl_port:
755+ ctxt['rabbit_ssl_port'] = ssl_port
756+ ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
757+ if ssl_ca:
758+ ctxt['rabbit_ssl_ca'] = ssl_ca
759+
760+ if relation_get('ha_queues', rid=rid, unit=unit) is not None:
761+ ctxt['rabbitmq_ha_queues'] = True
762+
763+ ha_vip_only = relation_get('ha-vip-only',
764+ rid=rid, unit=unit) is not None
765+
766+ if context_complete(ctxt):
767+ if 'rabbit_ssl_ca' in ctxt:
768+ if not self.ssl_dir:
769+ log(("Charm not setup for ssl support "
770+ "but ssl ca found"))
771+ break
772+ ca_path = os.path.join(
773+ self.ssl_dir, 'rabbit-client-ca.pem')
774+ with open(ca_path, 'w') as fh:
775+ fh.write(b64decode(ctxt['rabbit_ssl_ca']))
776+ ctxt['rabbit_ssl_ca'] = ca_path
777+ # Sufficient information found = break out!
778+ break
779+ # Used for active/active rabbitmq >= grizzly
780+ if ('clustered' not in ctxt or ha_vip_only) \
781+ and len(related_units(rid)) > 1:
782+ rabbitmq_hosts = []
783+ for unit in related_units(rid):
784+ rabbitmq_hosts.append(relation_get('private-address',
785+ rid=rid, unit=unit))
786+ ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
787+ if not context_complete(ctxt):
788+ return {}
789+ else:
790+ return ctxt
791+
792+
793+class CephContext(OSContextGenerator):
794+ interfaces = ['ceph']
795+
796+ def __call__(self):
797+ '''This generates context for /etc/ceph/ceph.conf templates'''
798+ if not relation_ids('ceph'):
799+ return {}
800+
801+ log('Generating template context for ceph')
802+
803+ mon_hosts = []
804+ auth = None
805+ key = None
806+ use_syslog = str(config('use-syslog')).lower()
807+ for rid in relation_ids('ceph'):
808+ for unit in related_units(rid):
809+ auth = relation_get('auth', rid=rid, unit=unit)
810+ key = relation_get('key', rid=rid, unit=unit)
811+ ceph_addr = \
812+ relation_get('ceph-public-address', rid=rid, unit=unit) or \
813+ relation_get('private-address', rid=rid, unit=unit)
814+ mon_hosts.append(ceph_addr)
815+
816+ ctxt = {
817+ 'mon_hosts': ' '.join(mon_hosts),
818+ 'auth': auth,
819+ 'key': key,
820+ 'use_syslog': use_syslog
821+ }
822+
823+ if not os.path.isdir('/etc/ceph'):
824+ os.mkdir('/etc/ceph')
825+
826+ if not context_complete(ctxt):
827+ return {}
828+
829+ ensure_packages(['ceph-common'])
830+
831+ return ctxt
832+
833+
834+class HAProxyContext(OSContextGenerator):
835+ interfaces = ['cluster']
836+
837+ def __call__(self):
838+ '''
839+ Builds half a context for the haproxy template, which describes
840+ all peers to be included in the cluster. Each charm needs to include
841+ its own context generator that describes the port mapping.
842+ '''
843+ if not relation_ids('cluster'):
844+ return {}
845+
846+ cluster_hosts = {}
847+ l_unit = local_unit().replace('/', '-')
848+ if config('prefer-ipv6'):
849+ addr = get_ipv6_addr()
850+ else:
851+ addr = unit_get('private-address')
852+ cluster_hosts[l_unit] = get_address_in_network(config('os-internal-network'),
853+ addr)
854+
855+ for rid in relation_ids('cluster'):
856+ for unit in related_units(rid):
857+ _unit = unit.replace('/', '-')
858+ addr = relation_get('private-address', rid=rid, unit=unit)
859+ cluster_hosts[_unit] = addr
860+
861+ ctxt = {
862+ 'units': cluster_hosts,
863+ }
864+
865+ if config('prefer-ipv6'):
866+ ctxt['local_host'] = 'ip6-localhost'
867+ ctxt['haproxy_host'] = '::'
868+ ctxt['stat_port'] = ':::8888'
869+ else:
870+ ctxt['local_host'] = '127.0.0.1'
871+ ctxt['haproxy_host'] = '0.0.0.0'
872+ ctxt['stat_port'] = ':8888'
873+
874+ if len(cluster_hosts.keys()) > 1:
875+ # Enable haproxy when we have enough peers.
876+ log('Ensuring haproxy enabled in /etc/default/haproxy.')
877+ with open('/etc/default/haproxy', 'w') as out:
878+ out.write('ENABLED=1\n')
879+ return ctxt
880+ log('HAProxy context is incomplete, this unit has no peers.')
881+ return {}
882+
883+
884+class ImageServiceContext(OSContextGenerator):
885+ interfaces = ['image-service']
886+
887+ def __call__(self):
888+ '''
889+ Obtains the glance API server from the image-service relation. Useful
890+ in nova and cinder (currently).
891+ '''
892+ log('Generating template context for image-service.')
893+ rids = relation_ids('image-service')
894+ if not rids:
895+ return {}
896+ for rid in rids:
897+ for unit in related_units(rid):
898+ api_server = relation_get('glance-api-server',
899+ rid=rid, unit=unit)
900+ if api_server:
901+ return {'glance_api_servers': api_server}
902+ log('ImageService context is incomplete. '
903+ 'Missing required relation data.')
904+ return {}
905+
906+
907+class ApacheSSLContext(OSContextGenerator):
908+
909+ """
910+ Generates a context for an apache vhost configuration that configures
911+ HTTPS reverse proxying for one or many endpoints. Generated context
912+ looks something like::
913+
914+ {
915+ 'namespace': 'cinder',
916+ 'private_address': 'iscsi.mycinderhost.com',
917+ 'endpoints': [(8776, 8766), (8777, 8767)]
918+ }
919+
920+ The endpoints list consists of tuples mapping external ports
921+ to internal ports.
922+ """
923+ interfaces = ['https']
924+
925+ # charms should inherit this context and set external ports
926+ # and service namespace accordingly.
927+ external_ports = []
928+ service_namespace = None
929+
930+ def enable_modules(self):
931+ cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
932+ check_call(cmd)
933+
934+ def configure_cert(self):
935+ if not os.path.isdir('/etc/apache2/ssl'):
936+ os.mkdir('/etc/apache2/ssl')
937+ ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
938+ if not os.path.isdir(ssl_dir):
939+ os.mkdir(ssl_dir)
940+ cert, key = get_cert()
941+ with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out:
942+ cert_out.write(b64decode(cert))
943+ with open(os.path.join(ssl_dir, 'key'), 'w') as key_out:
944+ key_out.write(b64decode(key))
945+ ca_cert = get_ca_cert()
946+ if ca_cert:
947+ with open(CA_CERT_PATH, 'w') as ca_out:
948+ ca_out.write(b64decode(ca_cert))
949+ check_call(['update-ca-certificates'])
950+
951+ def __call__(self):
952+ if isinstance(self.external_ports, basestring):
953+ self.external_ports = [self.external_ports]
954+ if (not self.external_ports or not https()):
955+ return {}
956+
957+ self.configure_cert()
958+ self.enable_modules()
959+
960+ ctxt = {
961+ 'namespace': self.service_namespace,
962+ 'private_address': unit_get('private-address'),
963+ 'endpoints': []
964+ }
965+ if is_clustered():
966+ ctxt['private_address'] = config('vip')
967+ for api_port in self.external_ports:
968+ ext_port = determine_apache_port(api_port)
969+ int_port = determine_api_port(api_port)
970+ portmap = (int(ext_port), int(int_port))
971+ ctxt['endpoints'].append(portmap)
972+ return ctxt
973+
974+
975+class NeutronContext(OSContextGenerator):
976+ interfaces = []
977+
978+ @property
979+ def plugin(self):
980+ return None
981+
982+ @property
983+ def network_manager(self):
984+ return None
985+
986+ @property
987+ def packages(self):
988+ return neutron_plugin_attribute(
989+ self.plugin, 'packages', self.network_manager)
990+
991+ @property
992+ def neutron_security_groups(self):
993+ return None
994+
995+ def _ensure_packages(self):
996+ [ensure_packages(pkgs) for pkgs in self.packages]
997+
998+ def _save_flag_file(self):
999+ if self.network_manager == 'quantum':
1000+ _file = '/etc/nova/quantum_plugin.conf'
1001+ else:
1002+ _file = '/etc/nova/neutron_plugin.conf'
1003+ with open(_file, 'wb') as out:
1004+ out.write(self.plugin + '\n')
1005+
1006+ def ovs_ctxt(self):
1007+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1008+ self.network_manager)
1009+ config = neutron_plugin_attribute(self.plugin, 'config',
1010+ self.network_manager)
1011+ ovs_ctxt = {
1012+ 'core_plugin': driver,
1013+ 'neutron_plugin': 'ovs',
1014+ 'neutron_security_groups': self.neutron_security_groups,
1015+ 'local_ip': unit_private_ip(),
1016+ 'config': config
1017+ }
1018+
1019+ return ovs_ctxt
1020+
1021+ def nvp_ctxt(self):
1022+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1023+ self.network_manager)
1024+ config = neutron_plugin_attribute(self.plugin, 'config',
1025+ self.network_manager)
1026+ nvp_ctxt = {
1027+ 'core_plugin': driver,
1028+ 'neutron_plugin': 'nvp',
1029+ 'neutron_security_groups': self.neutron_security_groups,
1030+ 'local_ip': unit_private_ip(),
1031+ 'config': config
1032+ }
1033+
1034+ return nvp_ctxt
1035+
1036+ def n1kv_ctxt(self):
1037+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1038+ self.network_manager)
1039+ n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
1040+ self.network_manager)
1041+ n1kv_ctxt = {
1042+ 'core_plugin': driver,
1043+ 'neutron_plugin': 'n1kv',
1044+ 'neutron_security_groups': self.neutron_security_groups,
1045+ 'local_ip': unit_private_ip(),
1046+ 'config': n1kv_config,
1047+ 'vsm_ip': config('n1kv-vsm-ip'),
1048+ 'vsm_username': config('n1kv-vsm-username'),
1049+ 'vsm_password': config('n1kv-vsm-password'),
1050+ 'restrict_policy_profiles': config(
1051+ 'n1kv_restrict_policy_profiles'),
1052+ }
1053+
1054+ return n1kv_ctxt
1055+
1056+ def neutron_ctxt(self):
1057+ if https():
1058+ proto = 'https'
1059+ else:
1060+ proto = 'http'
1061+ if is_clustered():
1062+ host = config('vip')
1063+ else:
1064+ host = unit_get('private-address')
1065+ url = '%s://%s:%s' % (proto, host, '9696')
1066+ ctxt = {
1067+ 'network_manager': self.network_manager,
1068+ 'neutron_url': url,
1069+ }
1070+ return ctxt
1071+
1072+ def __call__(self):
1073+ self._ensure_packages()
1074+
1075+ if self.network_manager not in ['quantum', 'neutron']:
1076+ return {}
1077+
1078+ if not self.plugin:
1079+ return {}
1080+
1081+ ctxt = self.neutron_ctxt()
1082+
1083+ if self.plugin == 'ovs':
1084+ ctxt.update(self.ovs_ctxt())
1085+ elif self.plugin in ['nvp', 'nsx']:
1086+ ctxt.update(self.nvp_ctxt())
1087+ elif self.plugin == 'n1kv':
1088+ ctxt.update(self.n1kv_ctxt())
1089+
1090+ alchemy_flags = config('neutron-alchemy-flags')
1091+ if alchemy_flags:
1092+ flags = config_flags_parser(alchemy_flags)
1093+ ctxt['neutron_alchemy_flags'] = flags
1094+
1095+ self._save_flag_file()
1096+ return ctxt
1097+
1098+
1099+class OSConfigFlagContext(OSContextGenerator):
1100+
1101+ """
1102+ Responsible for adding user-defined config-flags in charm config to a
1103+ template context.
1104+
1105+ NOTE: the value of config-flags may be a comma-separated list of
1106+ key=value pairs and some Openstack config files support
1107+ comma-separated lists as values.
1108+ """
1109+
1110+ def __call__(self):
1111+ config_flags = config('config-flags')
1112+ if not config_flags:
1113+ return {}
1114+
1115+ flags = config_flags_parser(config_flags)
1116+ return {'user_config_flags': flags}
1117+
1118+
1119+class SubordinateConfigContext(OSContextGenerator):
1120+
1121+ """
1122+ Responsible for inspecting relations to subordinates that
1123+ may be exporting required config via a json blob.
1124+
1125+ The subordinate interface allows subordinates to export their
1126+ configuration requirements to the principal for multiple config
1127+ files and multiple services. I.e., a subordinate that has interfaces
1128+ to both glance and nova may export the following yaml blob as json::
1129+
1130+ glance:
1131+ /etc/glance/glance-api.conf:
1132+ sections:
1133+ DEFAULT:
1134+ - [key1, value1]
1135+ /etc/glance/glance-registry.conf:
1136+ MYSECTION:
1137+ - [key2, value2]
1138+ nova:
1139+ /etc/nova/nova.conf:
1140+ sections:
1141+ DEFAULT:
1142+ - [key3, value3]
1143+
1144+
1145+ It is then up to the principal charms to subscribe this context to
1146+ the service+config file it is interested in. Configuration data will
1147+ be available in the template context, in glance's case, as::
1148+
1149+ ctxt = {
1150+ ... other context ...
1151+ 'subordinate_config': {
1152+ 'DEFAULT': {
1153+ 'key1': 'value1',
1154+ },
1155+ 'MYSECTION': {
1156+ 'key2': 'value2',
1157+ },
1158+ }
1159+ }
1160+
1161+ """
1162+
1163+ def __init__(self, service, config_file, interface):
1164+ """
1165+ :param service : Service name key to query in any subordinate
1166+ data found
1167+ :param config_file : Service's config file to query sections
1168+ :param interface : Subordinate interface to inspect
1169+ """
1170+ self.service = service
1171+ self.config_file = config_file
1172+ self.interface = interface
1173+
1174+ def __call__(self):
1175+ ctxt = {'sections': {}}
1176+ for rid in relation_ids(self.interface):
1177+ for unit in related_units(rid):
1178+ sub_config = relation_get('subordinate_configuration',
1179+ rid=rid, unit=unit)
1180+ if sub_config and sub_config != '':
1181+ try:
1182+ sub_config = json.loads(sub_config)
1183+ except:
1184+ log('Could not parse JSON from subordinate_config '
1185+ 'setting from %s' % rid, level=ERROR)
1186+ continue
1187+
1188+ if self.service not in sub_config:
1189+ log('Found subordinate_config on %s but it contained'
1190+ 'nothing for %s service' % (rid, self.service))
1191+ continue
1192+
1193+ sub_config = sub_config[self.service]
1194+ if self.config_file not in sub_config:
1195+ log('Found subordinate_config on %s but it contained'
1196+ 'nothing for %s' % (rid, self.config_file))
1197+ continue
1198+
1199+ sub_config = sub_config[self.config_file]
1200+ for k, v in sub_config.iteritems():
1201+ if k == 'sections':
1202+ for section, config_dict in v.iteritems():
1203+ log("adding section '%s'" % (section))
1204+ ctxt[k][section] = config_dict
1205+ else:
1206+ ctxt[k] = v
1207+
1208+ log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
1209+
1210+ return ctxt
1211+
1212+
1213+class LogLevelContext(OSContextGenerator):
1214+
1215+ def __call__(self):
1216+ ctxt = {}
1217+ ctxt['debug'] = \
1218+ False if config('debug') is None else config('debug')
1219+ ctxt['verbose'] = \
1220+ False if config('verbose') is None else config('verbose')
1221+ return ctxt
1222+
1223+
1224+class SyslogContext(OSContextGenerator):
1225+
1226+ def __call__(self):
1227+ ctxt = {
1228+ 'use_syslog': config('use-syslog')
1229+ }
1230+ return ctxt
1231
1232=== added file 'hooks/charmhelpers/contrib/openstack/ip.py'
1233--- hooks/charmhelpers/contrib/openstack/ip.py 1970-01-01 00:00:00 +0000
1234+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-09-10 21:17:48 +0000
1235@@ -0,0 +1,79 @@
1236+from charmhelpers.core.hookenv import (
1237+ config,
1238+ unit_get,
1239+)
1240+
1241+from charmhelpers.contrib.network.ip import (
1242+ get_address_in_network,
1243+ is_address_in_network,
1244+ is_ipv6,
1245+ get_ipv6_addr,
1246+)
1247+
1248+from charmhelpers.contrib.hahelpers.cluster import is_clustered
1249+
1250+PUBLIC = 'public'
1251+INTERNAL = 'int'
1252+ADMIN = 'admin'
1253+
1254+_address_map = {
1255+ PUBLIC: {
1256+ 'config': 'os-public-network',
1257+ 'fallback': 'public-address'
1258+ },
1259+ INTERNAL: {
1260+ 'config': 'os-internal-network',
1261+ 'fallback': 'private-address'
1262+ },
1263+ ADMIN: {
1264+ 'config': 'os-admin-network',
1265+ 'fallback': 'private-address'
1266+ }
1267+}
1268+
1269+
1270+def canonical_url(configs, endpoint_type=PUBLIC):
1271+ '''
1272+ Returns the correct HTTP URL to this host given the state of HTTPS
1273+ configuration, hacluster and charm configuration.
1274+
1275+ :configs OSTemplateRenderer: A config tempating object to inspect for
1276+ a complete https context.
1277+ :endpoint_type str: The endpoint type to resolve.
1278+
1279+ :returns str: Base URL for services on the current service unit.
1280+ '''
1281+ scheme = 'http'
1282+ if 'https' in configs.complete_contexts():
1283+ scheme = 'https'
1284+ address = resolve_address(endpoint_type)
1285+ if is_ipv6(address):
1286+ address = "[{}]".format(address)
1287+ return '%s://%s' % (scheme, address)
1288+
1289+
1290+def resolve_address(endpoint_type=PUBLIC):
1291+ resolved_address = None
1292+ if is_clustered():
1293+ if config(_address_map[endpoint_type]['config']) is None:
1294+ # Assume vip is simple and pass back directly
1295+ resolved_address = config('vip')
1296+ else:
1297+ for vip in config('vip').split():
1298+ if is_address_in_network(
1299+ config(_address_map[endpoint_type]['config']),
1300+ vip):
1301+ resolved_address = vip
1302+ else:
1303+ if config('prefer-ipv6'):
1304+ fallback_addr = get_ipv6_addr()
1305+ else:
1306+ fallback_addr = unit_get(_address_map[endpoint_type]['fallback'])
1307+ resolved_address = get_address_in_network(
1308+ config(_address_map[endpoint_type]['config']), fallback_addr)
1309+
1310+ if resolved_address is None:
1311+ raise ValueError('Unable to resolve a suitable IP address'
1312+ ' based on charm state and configuration')
1313+ else:
1314+ return resolved_address
1315
1316=== added file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1317--- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000
1318+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-09-10 21:17:48 +0000
1319@@ -0,0 +1,201 @@
1320+# Various utilities for dealing with Neutron and the renaming from Quantum.
1321+
1322+from subprocess import check_output
1323+
1324+from charmhelpers.core.hookenv import (
1325+ config,
1326+ log,
1327+ ERROR,
1328+)
1329+
1330+from charmhelpers.contrib.openstack.utils import os_release
1331+
1332+
1333+def headers_package():
1334+ """Ensures correct linux-headers for running kernel are installed,
1335+ for building DKMS package"""
1336+ kver = check_output(['uname', '-r']).strip()
1337+ return 'linux-headers-%s' % kver
1338+
1339+QUANTUM_CONF_DIR = '/etc/quantum'
1340+
1341+
1342+def kernel_version():
1343+ """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
1344+ kver = check_output(['uname', '-r']).strip()
1345+ kver = kver.split('.')
1346+ return (int(kver[0]), int(kver[1]))
1347+
1348+
1349+def determine_dkms_package():
1350+ """ Determine which DKMS package should be used based on kernel version """
1351+ # NOTE: 3.13 kernels have support for GRE and VXLAN native
1352+ if kernel_version() >= (3, 13):
1353+ return []
1354+ else:
1355+ return ['openvswitch-datapath-dkms']
1356+
1357+
1358+# legacy
1359+
1360+
1361+def quantum_plugins():
1362+ from charmhelpers.contrib.openstack import context
1363+ return {
1364+ 'ovs': {
1365+ 'config': '/etc/quantum/plugins/openvswitch/'
1366+ 'ovs_quantum_plugin.ini',
1367+ 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.'
1368+ 'OVSQuantumPluginV2',
1369+ 'contexts': [
1370+ context.SharedDBContext(user=config('neutron-database-user'),
1371+ database=config('neutron-database'),
1372+ relation_prefix='neutron',
1373+ ssl_dir=QUANTUM_CONF_DIR)],
1374+ 'services': ['quantum-plugin-openvswitch-agent'],
1375+ 'packages': [[headers_package()] + determine_dkms_package(),
1376+ ['quantum-plugin-openvswitch-agent']],
1377+ 'server_packages': ['quantum-server',
1378+ 'quantum-plugin-openvswitch'],
1379+ 'server_services': ['quantum-server']
1380+ },
1381+ 'nvp': {
1382+ 'config': '/etc/quantum/plugins/nicira/nvp.ini',
1383+ 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.'
1384+ 'QuantumPlugin.NvpPluginV2',
1385+ 'contexts': [
1386+ context.SharedDBContext(user=config('neutron-database-user'),
1387+ database=config('neutron-database'),
1388+ relation_prefix='neutron',
1389+ ssl_dir=QUANTUM_CONF_DIR)],
1390+ 'services': [],
1391+ 'packages': [],
1392+ 'server_packages': ['quantum-server',
1393+ 'quantum-plugin-nicira'],
1394+ 'server_services': ['quantum-server']
1395+ }
1396+ }
1397+
1398+NEUTRON_CONF_DIR = '/etc/neutron'
1399+
1400+
1401+def neutron_plugins():
1402+ from charmhelpers.contrib.openstack import context
1403+ release = os_release('nova-common')
1404+ plugins = {
1405+ 'ovs': {
1406+ 'config': '/etc/neutron/plugins/openvswitch/'
1407+ 'ovs_neutron_plugin.ini',
1408+ 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.'
1409+ 'OVSNeutronPluginV2',
1410+ 'contexts': [
1411+ context.SharedDBContext(user=config('neutron-database-user'),
1412+ database=config('neutron-database'),
1413+ relation_prefix='neutron',
1414+ ssl_dir=NEUTRON_CONF_DIR)],
1415+ 'services': ['neutron-plugin-openvswitch-agent'],
1416+ 'packages': [[headers_package()] + determine_dkms_package(),
1417+ ['neutron-plugin-openvswitch-agent']],
1418+ 'server_packages': ['neutron-server',
1419+ 'neutron-plugin-openvswitch'],
1420+ 'server_services': ['neutron-server']
1421+ },
1422+ 'nvp': {
1423+ 'config': '/etc/neutron/plugins/nicira/nvp.ini',
1424+ 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.'
1425+ 'NeutronPlugin.NvpPluginV2',
1426+ 'contexts': [
1427+ context.SharedDBContext(user=config('neutron-database-user'),
1428+ database=config('neutron-database'),
1429+ relation_prefix='neutron',
1430+ ssl_dir=NEUTRON_CONF_DIR)],
1431+ 'services': [],
1432+ 'packages': [],
1433+ 'server_packages': ['neutron-server',
1434+ 'neutron-plugin-nicira'],
1435+ 'server_services': ['neutron-server']
1436+ },
1437+ 'nsx': {
1438+ 'config': '/etc/neutron/plugins/vmware/nsx.ini',
1439+ 'driver': 'vmware',
1440+ 'contexts': [
1441+ context.SharedDBContext(user=config('neutron-database-user'),
1442+ database=config('neutron-database'),
1443+ relation_prefix='neutron',
1444+ ssl_dir=NEUTRON_CONF_DIR)],
1445+ 'services': [],
1446+ 'packages': [],
1447+ 'server_packages': ['neutron-server',
1448+ 'neutron-plugin-vmware'],
1449+ 'server_services': ['neutron-server']
1450+ },
1451+ 'n1kv': {
1452+ 'config': '/etc/neutron/plugins/cisco/cisco_plugins.ini',
1453+ 'driver': 'neutron.plugins.cisco.network_plugin.PluginV2',
1454+ 'contexts': [
1455+ context.SharedDBContext(user=config('neutron-database-user'),
1456+ database=config('neutron-database'),
1457+ relation_prefix='neutron',
1458+ ssl_dir=NEUTRON_CONF_DIR)],
1459+ 'services': [],
1460+ 'packages': [['neutron-plugin-cisco']],
1461+ 'server_packages': ['neutron-server',
1462+ 'neutron-plugin-cisco'],
1463+ 'server_services': ['neutron-server']
1464+ }
1465+ }
1466+ if release >= 'icehouse':
1467+ # NOTE: patch in ml2 plugin for icehouse onwards
1468+ plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini'
1469+ plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin'
1470+ plugins['ovs']['server_packages'] = ['neutron-server',
1471+ 'neutron-plugin-ml2']
1472+ # NOTE: patch in vmware renames nvp->nsx for icehouse onwards
1473+ plugins['nvp'] = plugins['nsx']
1474+ return plugins
1475+
1476+
1477+def neutron_plugin_attribute(plugin, attr, net_manager=None):
1478+ manager = net_manager or network_manager()
1479+ if manager == 'quantum':
1480+ plugins = quantum_plugins()
1481+ elif manager == 'neutron':
1482+ plugins = neutron_plugins()
1483+ else:
1484+ log('Error: Network manager does not support plugins.')
1485+ raise Exception
1486+
1487+ try:
1488+ _plugin = plugins[plugin]
1489+ except KeyError:
1490+ log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
1491+ raise Exception
1492+
1493+ try:
1494+ return _plugin[attr]
1495+ except KeyError:
1496+ return None
1497+
1498+
1499+def network_manager():
1500+ '''
1501+ Deals with the renaming of Quantum to Neutron in H and any situations
1502+ that require compatibility (eg, deploying H with network-manager=quantum,
1503+ upgrading from G).
1504+ '''
1505+ release = os_release('nova-common')
1506+ manager = config('network-manager').lower()
1507+
1508+ if manager not in ['quantum', 'neutron']:
1509+ return manager
1510+
1511+ if release in ['essex']:
1512+ # E does not support neutron
1513+ log('Neutron networking not supported in Essex.', level=ERROR)
1514+ raise Exception
1515+ elif release in ['folsom', 'grizzly']:
1516+ # neutron is named quantum in F and G
1517+ return 'quantum'
1518+ else:
1519+ # ensure accurate naming for all releases post-H
1520+ return 'neutron'
1521
1522=== added directory 'hooks/charmhelpers/contrib/openstack/templates'
1523=== added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py'
1524--- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000
1525+++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2014-09-10 21:17:48 +0000
1526@@ -0,0 +1,2 @@
1527+# dummy __init__.py to fool syncer into thinking this is a syncable python
1528+# module
1529
1530=== added file 'hooks/charmhelpers/contrib/openstack/templating.py'
1531--- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000
1532+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-09-10 21:17:48 +0000
1533@@ -0,0 +1,279 @@
1534+import os
1535+
1536+from charmhelpers.fetch import apt_install
1537+
1538+from charmhelpers.core.hookenv import (
1539+ log,
1540+ ERROR,
1541+ INFO
1542+)
1543+
1544+from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1545+
1546+try:
1547+ from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
1548+except ImportError:
1549+ # python-jinja2 may not be installed yet, or we're running unittests.
1550+ FileSystemLoader = ChoiceLoader = Environment = exceptions = None
1551+
1552+
1553+class OSConfigException(Exception):
1554+ pass
1555+
1556+
1557+def get_loader(templates_dir, os_release):
1558+ """
1559+ Create a jinja2.ChoiceLoader containing template dirs up to
1560+ and including os_release. If a release's template directory
1561+ is missing under templates_dir, it will be omitted from the loader.
1562+ templates_dir is added to the bottom of the search list as a base
1563+ loading dir.
1564+
1565+ A charm may also ship a templates dir with this module
1566+ and it will be appended to the bottom of the search list, eg::
1567+
1568+ hooks/charmhelpers/contrib/openstack/templates
1569+
1570+ :param templates_dir (str): Base template directory containing release
1571+ sub-directories.
1572+ :param os_release (str): OpenStack release codename to construct template
1573+ loader.
1574+ :returns: jinja2.ChoiceLoader constructed with a list of
1575+ jinja2.FilesystemLoaders, ordered in descending
1576+ order by OpenStack release.
1577+ """
1578+ tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1579+ for rel in OPENSTACK_CODENAMES.itervalues()]
1580+
1581+ if not os.path.isdir(templates_dir):
1582+ log('Templates directory not found @ %s.' % templates_dir,
1583+ level=ERROR)
1584+ raise OSConfigException
1585+
1586+ # the bottom contains templates_dir and possibly a common templates dir
1587+ # shipped with the helper.
1588+ loaders = [FileSystemLoader(templates_dir)]
1589+ helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
1590+ if os.path.isdir(helper_templates):
1591+ loaders.append(FileSystemLoader(helper_templates))
1592+
1593+ for rel, tmpl_dir in tmpl_dirs:
1594+ if os.path.isdir(tmpl_dir):
1595+ loaders.insert(0, FileSystemLoader(tmpl_dir))
1596+ if rel == os_release:
1597+ break
1598+ log('Creating choice loader with dirs: %s' %
1599+ [l.searchpath for l in loaders], level=INFO)
1600+ return ChoiceLoader(loaders)
1601+
1602+
1603+class OSConfigTemplate(object):
1604+ """
1605+ Associates a config file template with a list of context generators.
1606+ Responsible for constructing a template context based on those generators.
1607+ """
1608+ def __init__(self, config_file, contexts):
1609+ self.config_file = config_file
1610+
1611+ if hasattr(contexts, '__call__'):
1612+ self.contexts = [contexts]
1613+ else:
1614+ self.contexts = contexts
1615+
1616+ self._complete_contexts = []
1617+
1618+ def context(self):
1619+ ctxt = {}
1620+ for context in self.contexts:
1621+ _ctxt = context()
1622+ if _ctxt:
1623+ ctxt.update(_ctxt)
1624+ # track interfaces for every complete context.
1625+ [self._complete_contexts.append(interface)
1626+ for interface in context.interfaces
1627+ if interface not in self._complete_contexts]
1628+ return ctxt
1629+
1630+ def complete_contexts(self):
1631+ '''
1632+ Return a list of interfaces that have satisfied contexts.
1633+ '''
1634+ if self._complete_contexts:
1635+ return self._complete_contexts
1636+ self.context()
1637+ return self._complete_contexts
1638+
1639+
1640+class OSConfigRenderer(object):
1641+ """
1642+ This class provides a common templating system to be used by OpenStack
1643+ charms. It is intended to help charms share common code and templates,
1644+ and ease the burden of managing config templates across multiple OpenStack
1645+ releases.
1646+
1647+ Basic usage::
1648+
1649+ # import some common context generators from charmhelpers
1650+ from charmhelpers.contrib.openstack import context
1651+
1652+ # Create a renderer object for a specific OS release.
1653+ configs = OSConfigRenderer(templates_dir='/tmp/templates',
1654+ openstack_release='folsom')
1655+ # register some config files with context generators.
1656+ configs.register(config_file='/etc/nova/nova.conf',
1657+ contexts=[context.SharedDBContext(),
1658+ context.AMQPContext()])
1659+ configs.register(config_file='/etc/nova/api-paste.ini',
1660+ contexts=[context.IdentityServiceContext()])
1661+ configs.register(config_file='/etc/haproxy/haproxy.conf',
1662+ contexts=[context.HAProxyContext()])
1663+ # write out a single config
1664+ configs.write('/etc/nova/nova.conf')
1665+ # write out all registered configs
1666+ configs.write_all()
1667+
1668+ **OpenStack Releases and template loading**
1669+
1670+ When the object is instantiated, it is associated with a specific OS
1671+ release. This dictates how the template loader will be constructed.
1672+
1673+ The constructed loader attempts to load the template from several places
1674+ in the following order:
1675+ - from the most recent OS release-specific template dir (if one exists)
1676+ - the base templates_dir
1677+ - a template directory shipped in the charm with this helper file.
1678+
1679+ For the example above, '/tmp/templates' contains the following structure::
1680+
1681+ /tmp/templates/nova.conf
1682+ /tmp/templates/api-paste.ini
1683+ /tmp/templates/grizzly/api-paste.ini
1684+ /tmp/templates/havana/api-paste.ini
1685+
1686+ Since it was registered with the grizzly release, it first searches
1687+ the grizzly directory for nova.conf, then the templates dir.
1688+
1689+ When writing api-paste.ini, it will find the template in the grizzly
1690+ directory.
1691+
1692+ If the object were created with folsom, it would fall back to the
1693+ base templates dir for its api-paste.ini template.
1694+
1695+ This system should help manage changes in config files through
1696+ OpenStack releases, allowing charms to fall back to the most recently
1697+ updated config template for a given release.
1698+
1699+ The haproxy.conf, since it is not shipped in the templates dir, will
1700+ be loaded from the module directory's template directory, eg
1701+ $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
1702+ us to ship common templates (haproxy, apache) with the helpers.
1703+
1704+ **Context generators**
1705+
1706+ Context generators are used to generate template contexts during hook
1707+ execution. Doing so may require inspecting service relations, charm
1708+ config, etc. When registered, a config file is associated with a list
1709+ of generators. When a template is rendered and written, all context
1710+ generators are called in a chain to generate the context dictionary
1711+ passed to the jinja2 template. See context.py for more info.
1712+ """
1713+ def __init__(self, templates_dir, openstack_release):
1714+ if not os.path.isdir(templates_dir):
1715+ log('Could not locate templates dir %s' % templates_dir,
1716+ level=ERROR)
1717+ raise OSConfigException
1718+
1719+ self.templates_dir = templates_dir
1720+ self.openstack_release = openstack_release
1721+ self.templates = {}
1722+ self._tmpl_env = None
1723+
1724+ if None in [Environment, ChoiceLoader, FileSystemLoader]:
1725+ # if this code is running, the object was created before the
1726+ # install hook. jinja2 shouldn't get touched until the module is
1727+ # reloaded on the next hook execution, with jinja2 properly imported.
1728+ apt_install('python-jinja2')
1729+
1730+ def register(self, config_file, contexts):
1731+ """
1732+ Register a config file with a list of context generators to be called
1733+ during rendering.
1734+ """
1735+ self.templates[config_file] = OSConfigTemplate(config_file=config_file,
1736+ contexts=contexts)
1737+ log('Registered config file: %s' % config_file, level=INFO)
1738+
1739+ def _get_tmpl_env(self):
1740+ if not self._tmpl_env:
1741+ loader = get_loader(self.templates_dir, self.openstack_release)
1742+ self._tmpl_env = Environment(loader=loader)
1743+
1744+ def _get_template(self, template):
1745+ self._get_tmpl_env()
1746+ template = self._tmpl_env.get_template(template)
1747+ log('Loaded template from %s' % template.filename, level=INFO)
1748+ return template
1749+
1750+ def render(self, config_file):
1751+ if config_file not in self.templates:
1752+ log('Config not registered: %s' % config_file, level=ERROR)
1753+ raise OSConfigException
1754+ ctxt = self.templates[config_file].context()
1755+
1756+ _tmpl = os.path.basename(config_file)
1757+ try:
1758+ template = self._get_template(_tmpl)
1759+ except exceptions.TemplateNotFound:
1760+ # if no template is found with basename, try looking for it
1761+ # using a munged full path, eg:
1762+ # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
1763+ _tmpl = '_'.join(config_file.split('/')[1:])
1764+ try:
1765+ template = self._get_template(_tmpl)
1766+ except exceptions.TemplateNotFound as e:
1767+ log('Could not load template from %s using %s or %s.' %
1768+ (self.templates_dir, os.path.basename(config_file), _tmpl),
1769+ level=ERROR)
1770+ raise e
1771+
1772+ log('Rendering from template: %s' % _tmpl, level=INFO)
1773+ return template.render(ctxt)
1774+
1775+ def write(self, config_file):
1776+ """
1777+ Write a single config file, raises if config file is not registered.
1778+ """
1779+ if config_file not in self.templates:
1780+ log('Config not registered: %s' % config_file, level=ERROR)
1781+ raise OSConfigException
1782+
1783+ _out = self.render(config_file)
1784+
1785+ with open(config_file, 'wb') as out:
1786+ out.write(_out)
1787+
1788+ log('Wrote template %s.' % config_file, level=INFO)
1789+
1790+ def write_all(self):
1791+ """
1792+ Write out all registered config files.
1793+ """
1794+ [self.write(k) for k in self.templates.iterkeys()]
1795+
1796+ def set_release(self, openstack_release):
1797+ """
1798+ Resets the template environment and generates a new template loader
1799+ based on the new OpenStack release.
1800+ """
1801+ self._tmpl_env = None
1802+ self.openstack_release = openstack_release
1803+ self._get_tmpl_env()
1804+
1805+ def complete_contexts(self):
1806+ '''
1807+ Returns a list of context interfaces that yield a complete context.
1808+ '''
1809+ interfaces = []
1810+ [interfaces.extend(i.complete_contexts())
1811+ for i in self.templates.itervalues()]
1812+ return interfaces
1813
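A minimal sketch (not part of this branch) of driving the renderer above with a hand-rolled context generator. Context callables need an `interfaces` attribute, since OSConfigTemplate.context() iterates it; the class and paths below are purely illustrative:

    from charmhelpers.contrib.openstack.templating import OSConfigRenderer

    class DebugContext(object):
        interfaces = []  # no relation interfaces required by this context

        def __call__(self):
            return {'debug': True}

    # assumes a 'templates/' dir exists inside the charm
    configs = OSConfigRenderer(templates_dir='templates/',
                               openstack_release='havana')
    configs.register('/etc/nova/nova.conf', [DebugContext()])
    configs.write('/etc/nova/nova.conf')
    # searches the most recent release dir (templates/havana/) first,
    # then templates/, then the helper-shipped dir, per get_loader().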
1814=== added file 'hooks/charmhelpers/contrib/openstack/utils.py'
1815--- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
1816+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-09-10 21:17:48 +0000
1817@@ -0,0 +1,459 @@
1818+#!/usr/bin/python
1819+
1820+# Common python helper functions used for OpenStack charms.
1821+from collections import OrderedDict
1822+
1823+import subprocess
1824+import os
1825+import socket
1826+import sys
1827+
1828+from charmhelpers.core.hookenv import (
1829+ config,
1830+ log as juju_log,
1831+ charm_dir,
1832+ ERROR,
1833+ INFO
1834+)
1835+
1836+from charmhelpers.contrib.storage.linux.lvm import (
1837+ deactivate_lvm_volume_group,
1838+ is_lvm_physical_volume,
1839+ remove_lvm_physical_volume,
1840+)
1841+
1842+from charmhelpers.core.host import lsb_release, mounts, umount
1843+from charmhelpers.fetch import apt_install, apt_cache
1844+from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
1845+from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
1846+
1847+CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
1848+CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
1849+
1850+DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
1851+ 'restricted main multiverse universe')
1852+
1853+
1854+UBUNTU_OPENSTACK_RELEASE = OrderedDict([
1855+ ('oneiric', 'diablo'),
1856+ ('precise', 'essex'),
1857+ ('quantal', 'folsom'),
1858+ ('raring', 'grizzly'),
1859+ ('saucy', 'havana'),
1860+ ('trusty', 'icehouse'),
1861+ ('utopic', 'juno'),
1862+])
1863+
1864+
1865+OPENSTACK_CODENAMES = OrderedDict([
1866+ ('2011.2', 'diablo'),
1867+ ('2012.1', 'essex'),
1868+ ('2012.2', 'folsom'),
1869+ ('2013.1', 'grizzly'),
1870+ ('2013.2', 'havana'),
1871+ ('2014.1', 'icehouse'),
1872+ ('2014.2', 'juno'),
1873+])
1874+
1875+# The ugly duckling
1876+SWIFT_CODENAMES = OrderedDict([
1877+ ('1.4.3', 'diablo'),
1878+ ('1.4.8', 'essex'),
1879+ ('1.7.4', 'folsom'),
1880+ ('1.8.0', 'grizzly'),
1881+ ('1.7.7', 'grizzly'),
1882+ ('1.7.6', 'grizzly'),
1883+ ('1.10.0', 'havana'),
1884+ ('1.9.1', 'havana'),
1885+ ('1.9.0', 'havana'),
1886+ ('1.13.1', 'icehouse'),
1887+ ('1.13.0', 'icehouse'),
1888+ ('1.12.0', 'icehouse'),
1889+ ('1.11.0', 'icehouse'),
1890+ ('2.0.0', 'juno'),
1891+])
1892+
1893+DEFAULT_LOOPBACK_SIZE = '5G'
1894+
1895+
1896+def error_out(msg):
1897+ juju_log("FATAL ERROR: %s" % msg, level='ERROR')
1898+ sys.exit(1)
1899+
1900+
1901+def get_os_codename_install_source(src):
1902+ '''Derive OpenStack release codename from a given installation source.'''
1903+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
1904+ rel = ''
1905+ if src is None:
1906+ return rel
1907+ if src in ['distro', 'distro-proposed']:
1908+ try:
1909+ rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
1910+ except KeyError:
1911+ e = 'Could not derive openstack release for '\
1912+ 'this Ubuntu release: %s' % ubuntu_rel
1913+ error_out(e)
1914+ return rel
1915+
1916+ if src.startswith('cloud:'):
1917+ ca_rel = src.split(':')[1]
1918+ ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
1919+ return ca_rel
1920+
1921+ # Best guess match based on deb string provided
1922+ if src.startswith('deb') or src.startswith('ppa'):
1923+ for k, v in OPENSTACK_CODENAMES.iteritems():
1924+ if v in src:
1925+ return v
1926+
1927+
1928+def get_os_version_install_source(src):
1929+ codename = get_os_codename_install_source(src)
1930+ return get_os_version_codename(codename)
1931+
1932+
1933+def get_os_codename_version(vers):
1934+ '''Determine OpenStack codename from version number.'''
1935+ try:
1936+ return OPENSTACK_CODENAMES[vers]
1937+ except KeyError:
1938+ e = 'Could not determine OpenStack codename for version %s' % vers
1939+ error_out(e)
1940+
1941+
1942+def get_os_version_codename(codename):
1943+ '''Determine OpenStack version number from codename.'''
1944+ for k, v in OPENSTACK_CODENAMES.iteritems():
1945+ if v == codename:
1946+ return k
1947+ e = 'Could not derive OpenStack version for '\
1948+ 'codename: %s' % codename
1949+ error_out(e)
1950+
1951+
1952+def get_os_codename_package(package, fatal=True):
1953+ '''Derive OpenStack release codename from an installed package.'''
1954+ import apt_pkg as apt
1955+
1956+ cache = apt_cache()
1957+
1958+ try:
1959+ pkg = cache[package]
1960+ except:
1961+ if not fatal:
1962+ return None
1963+ # the package is unknown to the current apt cache.
1964+ e = 'Could not determine version of package with no installation '\
1965+ 'candidate: %s' % package
1966+ error_out(e)
1967+
1968+ if not pkg.current_ver:
1969+ if not fatal:
1970+ return None
1971+ # package is known, but no version is currently installed.
1972+ e = 'Could not determine version of uninstalled package: %s' % package
1973+ error_out(e)
1974+
1975+ vers = apt.upstream_version(pkg.current_ver.ver_str)
1976+
1977+ try:
1978+ if 'swift' in pkg.name:
1979+ swift_vers = vers[:5]
1980+ if swift_vers not in SWIFT_CODENAMES:
1981+ # Deal with 1.10.0 upward
1982+ swift_vers = vers[:6]
1983+ return SWIFT_CODENAMES[swift_vers]
1984+ else:
1985+ vers = vers[:6]
1986+ return OPENSTACK_CODENAMES[vers]
1987+ except KeyError:
1988+ e = 'Could not determine OpenStack codename for version %s' % vers
1989+ error_out(e)
1990+
1991+
1992+def get_os_version_package(pkg, fatal=True):
1993+ '''Derive OpenStack version number from an installed package.'''
1994+ codename = get_os_codename_package(pkg, fatal=fatal)
1995+
1996+ if not codename:
1997+ return None
1998+
1999+ if 'swift' in pkg:
2000+ vers_map = SWIFT_CODENAMES
2001+ else:
2002+ vers_map = OPENSTACK_CODENAMES
2003+
2004+ for version, cname in vers_map.iteritems():
2005+ if cname == codename:
2006+ return version
2007+ # e = "Could not determine OpenStack version for package: %s" % pkg
2008+ # error_out(e)
2009+
2010+
2011+os_rel = None
2012+
2013+
2014+def os_release(package, base='essex'):
2015+ '''
2016+ Returns OpenStack release codename from a cached global.
2017+ If the codename cannot be determined from either an installed package or
2018+ the installation source, the earliest release supported by the charm should
2019+ be returned.
2020+ '''
2021+ global os_rel
2022+ if os_rel:
2023+ return os_rel
2024+ os_rel = (get_os_codename_package(package, fatal=False) or
2025+ get_os_codename_install_source(config('openstack-origin')) or
2026+ base)
2027+ return os_rel
2028+
2029+
2030+def import_key(keyid):
2031+ cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
2032+ "--recv-keys %s" % keyid
2033+ try:
2034+ subprocess.check_call(cmd.split(' '))
2035+ except subprocess.CalledProcessError:
2036+ error_out("Error importing repo key %s" % keyid)
2037+
2038+
2039+def configure_installation_source(rel):
2040+ '''Configure apt installation source.'''
2041+ if rel == 'distro':
2042+ return
2043+ elif rel == 'distro-proposed':
2044+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
2045+ with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
2046+ f.write(DISTRO_PROPOSED % ubuntu_rel)
2047+ elif rel[:4] == "ppa:":
2048+ src = rel
2049+ subprocess.check_call(["add-apt-repository", "-y", src])
2050+ elif rel[:3] == "deb":
2051+ l = len(rel.split('|'))
2052+ if l == 2:
2053+ src, key = rel.split('|')
2054+ juju_log("Importing PPA key from keyserver for %s" % src)
2055+ import_key(key)
2056+ elif l == 1:
2057+ src = rel
2058+ with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
2059+ f.write(src)
2060+ elif rel[:6] == 'cloud:':
2061+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
2062+ rel = rel.split(':')[1]
2063+ u_rel = rel.split('-')[0]
2064+ ca_rel = rel.split('-')[1]
2065+
2066+ if u_rel != ubuntu_rel:
2067+ e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
2068+ 'version (%s)' % (ca_rel, ubuntu_rel)
2069+ error_out(e)
2070+
2071+ if 'staging' in ca_rel:
2072+ # staging is just a regular PPA.
2073+ os_rel = ca_rel.split('/')[0]
2074+ ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
2075+ cmd = 'add-apt-repository -y %s' % ppa
2076+ subprocess.check_call(cmd.split(' '))
2077+ return
2078+
2079+ # map charm config options to actual archive pockets.
2080+ pockets = {
2081+ 'folsom': 'precise-updates/folsom',
2082+ 'folsom/updates': 'precise-updates/folsom',
2083+ 'folsom/proposed': 'precise-proposed/folsom',
2084+ 'grizzly': 'precise-updates/grizzly',
2085+ 'grizzly/updates': 'precise-updates/grizzly',
2086+ 'grizzly/proposed': 'precise-proposed/grizzly',
2087+ 'havana': 'precise-updates/havana',
2088+ 'havana/updates': 'precise-updates/havana',
2089+ 'havana/proposed': 'precise-proposed/havana',
2090+ 'icehouse': 'precise-updates/icehouse',
2091+ 'icehouse/updates': 'precise-updates/icehouse',
2092+ 'icehouse/proposed': 'precise-proposed/icehouse',
2093+ 'juno': 'trusty-updates/juno',
2094+ 'juno/updates': 'trusty-updates/juno',
2095+ 'juno/proposed': 'trusty-proposed/juno',
2096+ }
2097+
2098+ try:
2099+ pocket = pockets[ca_rel]
2100+ except KeyError:
2101+ e = 'Invalid Cloud Archive release specified: %s' % rel
2102+ error_out(e)
2103+
2104+ src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
2105+ apt_install('ubuntu-cloud-keyring', fatal=True)
2106+
2107+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
2108+ f.write(src)
2109+ else:
2110+ error_out("Invalid openstack-release specified: %s" % rel)
2111+
2112+
2113+def save_script_rc(script_path="scripts/scriptrc", **env_vars):
2114+ """
2115+ Write an rc file in the charm-delivered directory containing
2116+ exported environment variables provided by env_vars. Any charm scripts run
2117+ outside the juju hook environment can source this scriptrc to obtain
2118+ updated config information necessary to perform health checks or
2119+ service changes.
2120+ """
2121+ juju_rc_path = "%s/%s" % (charm_dir(), script_path)
2122+ if not os.path.exists(os.path.dirname(juju_rc_path)):
2123+ os.mkdir(os.path.dirname(juju_rc_path))
2124+ with open(juju_rc_path, 'wb') as rc_script:
2125+ rc_script.write(
2126+ "#!/bin/bash\n")
2127+ [rc_script.write('export %s=%s\n' % (u, p))
2128+ for u, p in env_vars.iteritems() if u != "script_path"]
2129+
2130+
2131+def openstack_upgrade_available(package):
2132+ """
2133+ Determines if an OpenStack upgrade is available from installation
2134+ source, based on version of installed package.
2135+
2136+ :param package: str: Name of installed package.
2137+
2138+ :returns: bool: True if the configured installation source offers
2139+ a newer version of the package.
2140+
2141+ """
2142+
2143+ import apt_pkg as apt
2144+ src = config('openstack-origin')
2145+ cur_vers = get_os_version_package(package)
2146+ available_vers = get_os_version_install_source(src)
2147+ apt.init()
2148+ return apt.version_compare(available_vers, cur_vers) == 1
2149+
2150+
2151+def ensure_block_device(block_device):
2152+ '''
2153+ Confirm block_device, create as loopback if necessary.
2154+
2155+ :param block_device: str: Full path of block device to ensure.
2156+
2157+ :returns: str: Full path of ensured block device.
2158+ '''
2159+ _none = ['None', 'none', None]
2160+ if (block_device in _none):
2161+ error_out('ensure_block_device(): Missing required input: '
2162+ 'block_device=%s.' % block_device)
2163+
2164+ if block_device.startswith('/dev/'):
2165+ bdev = block_device
2166+ elif block_device.startswith('/'):
2167+ _bd = block_device.split('|')
2168+ if len(_bd) == 2:
2169+ bdev, size = _bd
2170+ else:
2171+ bdev = block_device
2172+ size = DEFAULT_LOOPBACK_SIZE
2173+ bdev = ensure_loopback_device(bdev, size)
2174+ else:
2175+ bdev = '/dev/%s' % block_device
2176+
2177+ if not is_block_device(bdev):
2178+ error_out('Failed to locate valid block device at %s'
2179+ % bdev)
2180+
2181+ return bdev
2182+
2183+
2184+def clean_storage(block_device):
2185+ '''
2186+ Ensures a block device is clean. That is:
2187+ - unmounted
2188+ - any lvm volume groups are deactivated
2189+ - any lvm physical device signatures removed
2190+ - partition table wiped
2191+
2192+ :param block_device: str: Full path to block device to clean.
2193+ '''
2194+ for mp, d in mounts():
2195+ if d == block_device:
2196+ juju_log('clean_storage(): %s is mounted @ %s, unmounting.' %
2197+ (d, mp), level=INFO)
2198+ umount(mp, persist=True)
2199+
2200+ if is_lvm_physical_volume(block_device):
2201+ deactivate_lvm_volume_group(block_device)
2202+ remove_lvm_physical_volume(block_device)
2203+ else:
2204+ zap_disk(block_device)
2205+
2206+
2207+def is_ip(address):
2208+ """
2209+ Returns True if address is a valid IPv4 address.
2210+ """
2211+ try:
2212+ # Test to see if already an IPv4 address
2213+ socket.inet_aton(address)
2214+ return True
2215+ except socket.error:
2216+ return False
2217+
2218+
2219+def ns_query(address):
2220+ try:
2221+ import dns.resolver
2222+ except ImportError:
2223+ apt_install('python-dnspython')
2224+ import dns.resolver
2225+
2226+ if isinstance(address, dns.name.Name):
2227+ rtype = 'PTR'
2228+ elif isinstance(address, basestring):
2229+ rtype = 'A'
2230+ else:
2231+ return None
2232+
2233+ answers = dns.resolver.query(address, rtype)
2234+ if answers:
2235+ return str(answers[0])
2236+ return None
2237+
2238+
2239+def get_host_ip(hostname):
2240+ """
2241+ Resolves the IP for a given hostname, or returns
2242+ the input if it is already an IP.
2243+ """
2244+ if is_ip(hostname):
2245+ return hostname
2246+
2247+ return ns_query(hostname)
2248+
2249+
2250+def get_hostname(address, fqdn=True):
2251+ """
2252+ Resolves hostname for given IP, or returns the input
2253+ if it is already a hostname.
2254+ """
2255+ if is_ip(address):
2256+ try:
2257+ import dns.reversename
2258+ except ImportError:
2259+ apt_install('python-dnspython')
2260+ import dns.reversename
2261+
2262+ rev = dns.reversename.from_address(address)
2263+ result = ns_query(rev)
2264+ if not result:
2265+ return None
2266+ else:
2267+ result = address
2268+
2269+ if fqdn:
2270+ # strip trailing .
2271+ if result.endswith('.'):
2272+ return result[:-1]
2273+ else:
2274+ return result
2275+ else:
2276+ return result.split('.')[0]
2277
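As a rough sketch of how a hook might consume the helpers above (the origin value is illustrative and only valid on a precise unit; in practice it comes from the new openstack-origin config option):

    from charmhelpers.core.hookenv import config
    from charmhelpers.contrib.openstack.utils import (
        configure_installation_source,
        get_os_codename_install_source,
    )
    from charmhelpers.fetch import apt_update

    origin = config('openstack-origin')    # e.g. 'cloud:precise-havana'
    configure_installation_source(origin)  # writes cloud-archive.list and
                                           # installs ubuntu-cloud-keyring
    apt_update(fatal=True)
    # get_os_codename_install_source(origin) would report 'havana' here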
2278=== added directory 'hooks/charmhelpers/contrib/storage'
2279=== added file 'hooks/charmhelpers/contrib/storage/__init__.py'
2280=== added directory 'hooks/charmhelpers/contrib/storage/linux'
2281=== added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py'
2282=== added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
2283--- hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000
2284+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-09-10 21:17:48 +0000
2285@@ -0,0 +1,387 @@
2286+#
2287+# Copyright 2012 Canonical Ltd.
2288+#
2289+# This file is sourced from lp:openstack-charm-helpers
2290+#
2291+# Authors:
2292+# James Page <james.page@ubuntu.com>
2293+# Adam Gandelman <adamg@ubuntu.com>
2294+#
2295+
2296+import os
2297+import shutil
2298+import json
2299+import time
2300+
2301+from subprocess import (
2302+ check_call,
2303+ check_output,
2304+ CalledProcessError
2305+)
2306+
2307+from charmhelpers.core.hookenv import (
2308+ relation_get,
2309+ relation_ids,
2310+ related_units,
2311+ log,
2312+ INFO,
2313+ WARNING,
2314+ ERROR
2315+)
2316+
2317+from charmhelpers.core.host import (
2318+ mount,
2319+ mounts,
2320+ service_start,
2321+ service_stop,
2322+ service_running,
2323+ umount,
2324+)
2325+
2326+from charmhelpers.fetch import (
2327+ apt_install,
2328+)
2329+
2330+KEYRING = '/etc/ceph/ceph.client.{}.keyring'
2331+KEYFILE = '/etc/ceph/ceph.client.{}.key'
2332+
2333+CEPH_CONF = """[global]
2334+ auth supported = {auth}
2335+ keyring = {keyring}
2336+ mon host = {mon_hosts}
2337+ log to syslog = {use_syslog}
2338+ err to syslog = {use_syslog}
2339+ clog to syslog = {use_syslog}
2340+"""
2341+
2342+
2343+def install():
2344+ ''' Basic Ceph client installation '''
2345+ ceph_dir = "/etc/ceph"
2346+ if not os.path.exists(ceph_dir):
2347+ os.mkdir(ceph_dir)
2348+ apt_install('ceph-common', fatal=True)
2349+
2350+
2351+def rbd_exists(service, pool, rbd_img):
2352+ ''' Check to see if a RADOS block device exists '''
2353+ try:
2354+ out = check_output(['rbd', 'list', '--id', service,
2355+ '--pool', pool])
2356+ except CalledProcessError:
2357+ return False
2358+ else:
2359+ return rbd_img in out
2360+
2361+
2362+def create_rbd_image(service, pool, image, sizemb):
2363+ ''' Create a new RADOS block device '''
2364+ cmd = [
2365+ 'rbd',
2366+ 'create',
2367+ image,
2368+ '--size',
2369+ str(sizemb),
2370+ '--id',
2371+ service,
2372+ '--pool',
2373+ pool
2374+ ]
2375+ check_call(cmd)
2376+
2377+
2378+def pool_exists(service, name):
2379+ ''' Check to see if a RADOS pool already exists '''
2380+ try:
2381+ out = check_output(['rados', '--id', service, 'lspools'])
2382+ except CalledProcessError:
2383+ return False
2384+ else:
2385+ return name in out
2386+
2387+
2388+def get_osds(service):
2389+ '''
2390+ Return a list of all Ceph Object Storage Daemons
2391+ currently in the cluster
2392+ '''
2393+ version = ceph_version()
2394+ if version and version >= '0.56':
2395+ return json.loads(check_output(['ceph', '--id', service,
2396+ 'osd', 'ls', '--format=json']))
2397+ else:
2398+ return None
2399+
2400+
2401+def create_pool(service, name, replicas=2):
2402+ ''' Create a new RADOS pool '''
2403+ if pool_exists(service, name):
2404+ log("Ceph pool {} already exists, skipping creation".format(name),
2405+ level=WARNING)
2406+ return
2407+ # Calculate the number of placement groups based
2408+ # on upstream recommended best practices.
2409+ osds = get_osds(service)
2410+ if osds:
2411+ pgnum = (len(osds) * 100 / replicas)
2412+ else:
2413+ # NOTE(james-page): Default to 200 for older ceph versions
2414+ # which don't support OSD query from cli
2415+ pgnum = 200
2416+ cmd = [
2417+ 'ceph', '--id', service,
2418+ 'osd', 'pool', 'create',
2419+ name, str(pgnum)
2420+ ]
2421+ check_call(cmd)
2422+ cmd = [
2423+ 'ceph', '--id', service,
2424+ 'osd', 'pool', 'set', name,
2425+ 'size', str(replicas)
2426+ ]
2427+ check_call(cmd)
2428+
2429+
2430+def delete_pool(service, name):
2431+ ''' Delete a RADOS pool from ceph '''
2432+ cmd = [
2433+ 'ceph', '--id', service,
2434+ 'osd', 'pool', 'delete',
2435+ name, '--yes-i-really-really-mean-it'
2436+ ]
2437+ check_call(cmd)
2438+
2439+
2440+def _keyfile_path(service):
2441+ return KEYFILE.format(service)
2442+
2443+
2444+def _keyring_path(service):
2445+ return KEYRING.format(service)
2446+
2447+
2448+def create_keyring(service, key):
2449+ ''' Create a new Ceph keyring containing the given key '''
2450+ keyring = _keyring_path(service)
2451+ if os.path.exists(keyring):
2452+ log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
2453+ return
2454+ cmd = [
2455+ 'ceph-authtool',
2456+ keyring,
2457+ '--create-keyring',
2458+ '--name=client.{}'.format(service),
2459+ '--add-key={}'.format(key)
2460+ ]
2461+ check_call(cmd)
2462+ log('ceph: Created new ring at %s.' % keyring, level=INFO)
2463+
2464+
2465+def create_key_file(service, key):
2466+ ''' Create a file containing key '''
2467+ keyfile = _keyfile_path(service)
2468+ if os.path.exists(keyfile):
2469+ log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
2470+ return
2471+ with open(keyfile, 'w') as fd:
2472+ fd.write(key)
2473+ log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
2474+
2475+
2476+def get_ceph_nodes():
2477+ ''' Query named relation 'ceph' to determine current nodes '''
2478+ hosts = []
2479+ for r_id in relation_ids('ceph'):
2480+ for unit in related_units(r_id):
2481+ hosts.append(relation_get('private-address', unit=unit, rid=r_id))
2482+ return hosts
2483+
2484+
2485+def configure(service, key, auth, use_syslog):
2486+ ''' Perform basic configuration of Ceph '''
2487+ create_keyring(service, key)
2488+ create_key_file(service, key)
2489+ hosts = get_ceph_nodes()
2490+ with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
2491+ ceph_conf.write(CEPH_CONF.format(auth=auth,
2492+ keyring=_keyring_path(service),
2493+ mon_hosts=",".join(map(str, hosts)),
2494+ use_syslog=use_syslog))
2495+ modprobe('rbd')
2496+
2497+
2498+def image_mapped(name):
2499+ ''' Determine whether a RADOS block device is mapped locally '''
2500+ try:
2501+ out = check_output(['rbd', 'showmapped'])
2502+ except CalledProcessError:
2503+ return False
2504+ else:
2505+ return name in out
2506+
2507+
2508+def map_block_storage(service, pool, image):
2509+ ''' Map a RADOS block device for local use '''
2510+ cmd = [
2511+ 'rbd',
2512+ 'map',
2513+ '{}/{}'.format(pool, image),
2514+ '--user',
2515+ service,
2516+ '--secret',
2517+ _keyfile_path(service),
2518+ ]
2519+ check_call(cmd)
2520+
2521+
2522+def filesystem_mounted(fs):
2523+ ''' Determine whether a filesystem is already mounted '''
2524+ return fs in [f for f, m in mounts()]
2525+
2526+
2527+def make_filesystem(blk_device, fstype='ext4', timeout=10):
2528+ ''' Make a new filesystem on the specified block device '''
2529+ count = 0
2530+ e_noent = os.errno.ENOENT
2531+ while not os.path.exists(blk_device):
2532+ if count >= timeout:
2533+ log('ceph: gave up waiting on block device %s' % blk_device,
2534+ level=ERROR)
2535+ raise IOError(e_noent, os.strerror(e_noent), blk_device)
2536+ log('ceph: waiting for block device %s to appear' % blk_device,
2537+ level=INFO)
2538+ count += 1
2539+ time.sleep(1)
2540+ else:
2541+ log('ceph: Formatting block device %s as filesystem %s.' %
2542+ (blk_device, fstype), level=INFO)
2543+ check_call(['mkfs', '-t', fstype, blk_device])
2544+
2545+
2546+def place_data_on_block_device(blk_device, data_src_dst):
2547+ ''' Migrate data in data_src_dst to blk_device and then remount '''
2548+ # mount block device into /mnt
2549+ mount(blk_device, '/mnt')
2550+ # copy data to /mnt
2551+ copy_files(data_src_dst, '/mnt')
2552+ # umount block device
2553+ umount('/mnt')
2554+ # Grab user/group ID's from original source
2555+ _dir = os.stat(data_src_dst)
2556+ uid = _dir.st_uid
2557+ gid = _dir.st_gid
2558+ # re-mount where the data should originally be
2559+ # TODO: persist is currently a NO-OP in core.host
2560+ mount(blk_device, data_src_dst, persist=True)
2561+ # ensure original ownership of new mount.
2562+ os.chown(data_src_dst, uid, gid)
2563+
2564+
2565+# TODO: re-use
2566+def modprobe(module):
2567+ ''' Load a kernel module and configure for auto-load on reboot '''
2568+ log('ceph: Loading kernel module', level=INFO)
2569+ cmd = ['modprobe', module]
2570+ check_call(cmd)
2571+ with open('/etc/modules', 'r+') as modules:
2572+ if module not in modules.read():
2573+ modules.write(module)
2574+
2575+
2576+def copy_files(src, dst, symlinks=False, ignore=None):
2577+ ''' Copy files from src to dst '''
2578+ for item in os.listdir(src):
2579+ s = os.path.join(src, item)
2580+ d = os.path.join(dst, item)
2581+ if os.path.isdir(s):
2582+ shutil.copytree(s, d, symlinks, ignore)
2583+ else:
2584+ shutil.copy2(s, d)
2585+
2586+
2587+def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
2588+ blk_device, fstype, system_services=[]):
2589+ """
2590+ NOTE: This function must only be called from a single service unit for
2591+ the same rbd_img otherwise data loss will occur.
2592+
2593+ Ensures given pool and RBD image exists, is mapped to a block device,
2594+ and the device is formatted and mounted at the given mount_point.
2595+
2596+ If formatting a device for the first time, data existing at mount_point
2597+ will be migrated to the RBD device before being re-mounted.
2598+
2599+ All services listed in system_services will be stopped prior to data
2600+ migration and restarted when complete.
2601+ """
2602+ # Ensure pool, RBD image, RBD mappings are in place.
2603+ if not pool_exists(service, pool):
2604+ log('ceph: Creating new pool {}.'.format(pool))
2605+ create_pool(service, pool)
2606+
2607+ if not rbd_exists(service, pool, rbd_img):
2608+ log('ceph: Creating RBD image ({}).'.format(rbd_img))
2609+ create_rbd_image(service, pool, rbd_img, sizemb)
2610+
2611+ if not image_mapped(rbd_img):
2612+ log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
2613+ map_block_storage(service, pool, rbd_img)
2614+
2615+ # make file system
2616+ # TODO: What happens if for whatever reason this is run again and
2617+ # the data is already in the rbd device and/or is mounted??
2618+ # When it is mounted already, it will fail to make the fs
2619+ # XXX: This is really sketchy! Need to at least add an fstab entry
2620+ # otherwise this hook will blow away existing data if its executed
2621+ # after a reboot.
2622+ if not filesystem_mounted(mount_point):
2623+ make_filesystem(blk_device, fstype)
2624+
2625+ for svc in system_services:
2626+ if service_running(svc):
2627+ log('ceph: Stopping services {} prior to migrating data.'
2628+ .format(svc))
2629+ service_stop(svc)
2630+
2631+ place_data_on_block_device(blk_device, mount_point)
2632+
2633+ for svc in system_services:
2634+ log('ceph: Starting service {} after migrating data.'
2635+ .format(svc))
2636+ service_start(svc)
2637+
2638+
2639+def ensure_ceph_keyring(service, user=None, group=None):
2640+ '''
2641+ Ensures a ceph keyring is created for a named service
2642+ and optionally ensures user and group ownership.
2643+
2644+ Returns False if no ceph key is available in relation state.
2645+ '''
2646+ key = None
2647+ for rid in relation_ids('ceph'):
2648+ for unit in related_units(rid):
2649+ key = relation_get('key', rid=rid, unit=unit)
2650+ if key:
2651+ break
2652+ if not key:
2653+ return False
2654+ create_keyring(service=service, key=key)
2655+ keyring = _keyring_path(service)
2656+ if user and group:
2657+ check_call(['chown', '%s.%s' % (user, group), keyring])
2658+ return True
2659+
2660+
2661+def ceph_version():
2662+ ''' Retrieve the local version of ceph '''
2663+ if os.path.exists('/usr/bin/ceph'):
2664+ cmd = ['ceph', '-v']
2665+ output = check_output(cmd)
2666+ output = output.split()
2667+ if len(output) > 3:
2668+ return output[2]
2669+ else:
2670+ return None
2671+ else:
2672+ return None
2673
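Roughly how a charm's ceph-relation-changed hook could consume the helpers above; every name here ('mycharm', the pool, the device path) is a placeholder, not something this branch configures:

    from charmhelpers.contrib.storage.linux import ceph

    if ceph.ensure_ceph_keyring(service='mycharm', user='root', group='root'):
        ceph.ensure_ceph_storage(service='mycharm', pool='mycharm',
                                 rbd_img='data', sizemb=10240,
                                 mount_point='/srv/data',
                                 blk_device='/dev/rbd/mycharm/data',
                                 fstype='ext4',
                                 system_services=['mycharm-daemon'])
    # ensure_ceph_keyring() returns False until the 'ceph' relation
    # publishes a key, so the hook is safe to run early.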
2674=== added file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
2675--- hooks/charmhelpers/contrib/storage/linux/loopback.py 1970-01-01 00:00:00 +0000
2676+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-09-10 21:17:48 +0000
2677@@ -0,0 +1,62 @@
2678+
2679+import os
2680+import re
2681+
2682+from subprocess import (
2683+ check_call,
2684+ check_output,
2685+)
2686+
2687+
2688+##################################################
2689+# loopback device helpers.
2690+##################################################
2691+def loopback_devices():
2692+ '''
2693+ Parse through 'losetup -a' output to determine currently mapped
2694+ loopback devices. Output is expected to look like:
2695+
2696+ /dev/loop0: [0807]:961814 (/tmp/my.img)
2697+
2698+ :returns: dict: a dict mapping {loopback_dev: backing_file}
2699+ '''
2700+ loopbacks = {}
2701+ cmd = ['losetup', '-a']
2702+ devs = [d.strip().split(' ') for d in
2703+ check_output(cmd).splitlines() if d != '']
2704+ for dev, _, f in devs:
2705+ loopbacks[dev.replace(':', '')] = re.search('\((\S+)\)', f).groups()[0]
2706+ return loopbacks
2707+
2708+
2709+def create_loopback(file_path):
2710+ '''
2711+ Create a loopback device for a given backing file.
2712+
2713+ :returns: str: Full path to new loopback device (eg, /dev/loop0)
2714+ '''
2715+ file_path = os.path.abspath(file_path)
2716+ check_call(['losetup', '--find', file_path])
2717+ for d, f in loopback_devices().iteritems():
2718+ if f == file_path:
2719+ return d
2720+
2721+
2722+def ensure_loopback_device(path, size):
2723+ '''
2724+ Ensure a loopback device exists for a given backing file path and size.
2725+ If a loopback device is not already mapped to the file, one is created.
2726+
2727+ TODO: Confirm size of found loopback device.
2728+
2729+ :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
2730+ '''
2731+ for d, f in loopback_devices().iteritems():
2732+ if f == path:
2733+ return d
2734+
2735+ if not os.path.exists(path):
2736+ cmd = ['truncate', '--size', size, path]
2737+ check_call(cmd)
2738+
2739+ return create_loopback(path)
2740
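Usage is idempotent per backing file; a quick sketch with a placeholder path:

    from charmhelpers.contrib.storage.linux.loopback import (
        ensure_loopback_device,
    )

    dev = ensure_loopback_device('/var/lib/mycharm/store.img', '1G')
    # dev is e.g. '/dev/loop0'; calling again with the same path returns
    # the existing mapping instead of creating a new device.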
2741=== added file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
2742--- hooks/charmhelpers/contrib/storage/linux/lvm.py 1970-01-01 00:00:00 +0000
2743+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-09-10 21:17:48 +0000
2744@@ -0,0 +1,88 @@
2745+from subprocess import (
2746+ CalledProcessError,
2747+ check_call,
2748+ check_output,
2749+ Popen,
2750+ PIPE,
2751+)
2752+
2753+
2754+##################################################
2755+# LVM helpers.
2756+##################################################
2757+def deactivate_lvm_volume_group(block_device):
2758+ '''
2759+ Deactivate any volume group associated with an LVM physical volume.
2760+
2761+ :param block_device: str: Full path to LVM physical volume
2762+ '''
2763+ vg = list_lvm_volume_group(block_device)
2764+ if vg:
2765+ cmd = ['vgchange', '-an', vg]
2766+ check_call(cmd)
2767+
2768+
2769+def is_lvm_physical_volume(block_device):
2770+ '''
2771+ Determine whether a block device is initialized as an LVM PV.
2772+
2773+ :param block_device: str: Full path of block device to inspect.
2774+
2775+ :returns: boolean: True if block device is a PV, False if not.
2776+ '''
2777+ try:
2778+ check_output(['pvdisplay', block_device])
2779+ return True
2780+ except CalledProcessError:
2781+ return False
2782+
2783+
2784+def remove_lvm_physical_volume(block_device):
2785+ '''
2786+ Remove LVM PV signatures from a given block device.
2787+
2788+ :param block_device: str: Full path of block device to scrub.
2789+ '''
2790+ p = Popen(['pvremove', '-ff', block_device],
2791+ stdin=PIPE)
2792+ p.communicate(input='y\n')
2793+
2794+
2795+def list_lvm_volume_group(block_device):
2796+ '''
2797+ List LVM volume group associated with a given block device.
2798+
2799+ Assumes block device is a valid LVM PV.
2800+
2801+ :param block_device: str: Full path of block device to inspect.
2802+
2803+ :returns: str: Name of volume group associated with block device or None
2804+ '''
2805+ vg = None
2806+ pvd = check_output(['pvdisplay', block_device]).splitlines()
2807+ for l in pvd:
2808+ if l.strip().startswith('VG Name'):
2809+ vg = ' '.join(l.strip().split()[2:])
2810+ return vg
2811+
2812+
2813+def create_lvm_physical_volume(block_device):
2814+ '''
2815+ Initialize a block device as an LVM physical volume.
2816+
2817+ :param block_device: str: Full path of block device to initialize.
2818+
2819+ '''
2820+ check_call(['pvcreate', block_device])
2821+
2822+
2823+def create_lvm_volume_group(volume_group, block_device):
2824+ '''
2825+ Create an LVM volume group backed by a given block device.
2826+
2827+ Assumes block device has already been initialized as an LVM PV.
2828+
2829+ :param volume_group: str: Name of volume group to create.
2830+ :param block_device: str: Full path of PV-initialized block device.
2831+ '''
2832+ check_call(['vgcreate', volume_group, block_device])
2833
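A sketch of the typical create path these helpers support (the device and volume group names are placeholders):

    from charmhelpers.contrib.storage.linux import lvm

    dev = '/dev/vdb'
    if not lvm.is_lvm_physical_volume(dev):
        lvm.create_lvm_physical_volume(dev)
        lvm.create_lvm_volume_group('mycharm-vg', dev)
    vg = lvm.list_lvm_volume_group(dev)  # -> 'mycharm-vg'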
2834=== added file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
2835--- hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000
2836+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-09-10 21:17:48 +0000
2837@@ -0,0 +1,53 @@
2838+import os
2839+import re
2840+from stat import S_ISBLK
2841+
2842+from subprocess import (
2843+ check_call,
2844+ check_output,
2845+ call
2846+)
2847+
2848+
2849+def is_block_device(path):
2850+ '''
2851+ Confirm device at path is a valid block device node.
2852+
2853+ :returns: boolean: True if path is a block device, False if not.
2854+ '''
2855+ if not os.path.exists(path):
2856+ return False
2857+ return S_ISBLK(os.stat(path).st_mode)
2858+
2859+
2860+def zap_disk(block_device):
2861+ '''
2862+ Clear a block device of its partition table. Relies on sgdisk, which is
2863+ installed as part of the 'gdisk' package in Ubuntu.
2864+
2865+ :param block_device: str: Full path of block device to clean.
2866+ '''
2867+ # sometimes sgdisk exits non-zero; this is OK, dd will clean up
2868+ call(['sgdisk', '--zap-all', '--mbrtogpt',
2869+ '--clear', block_device])
2870+ dev_end = check_output(['blockdev', '--getsz', block_device])
2871+ gpt_end = int(dev_end.split()[0]) - 100
2872+ check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
2873+ 'bs=1M', 'count=1'])
2874+ check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
2875+ 'bs=512', 'count=100', 'seek=%s' % (gpt_end)])
2876+
2877+
2878+def is_device_mounted(device):
2879+ '''Given a device path, return True if that device is mounted, and False
2880+ if it isn't.
2881+
2882+ :param device: str: Full path of the device to check.
2883+ :returns: boolean: True if the path represents a mounted device, False if
2884+ it doesn't.
2885+ '''
2886+ is_partition = bool(re.search(r".*[0-9]+\b", device))
2887+ out = check_output(['mount'])
2888+ if is_partition:
2889+ return bool(re.search(device + r"\b", out))
2890+ return bool(re.search(device + r"[0-9]+\b", out))
2891
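An illustrative guard before wiping a device with zap_disk(); the device path is a placeholder and the operation is destructive:

    from charmhelpers.contrib.storage.linux.utils import (
        is_block_device,
        is_device_mounted,
        zap_disk,
    )

    dev = '/dev/vdb'
    if is_block_device(dev) and not is_device_mounted(dev):
        zap_disk(dev)  # destroys the partition table; irreversible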
2892=== modified file 'hooks/charmhelpers/core/hookenv.py'
2893--- hooks/charmhelpers/core/hookenv.py 2014-01-28 00:01:57 +0000
2894+++ hooks/charmhelpers/core/hookenv.py 2014-09-10 21:17:48 +0000
2895@@ -25,7 +25,7 @@
2896 def cached(func):
2897 """Cache return values for multiple executions of func + args
2898
2899- For example:
2900+ For example::
2901
2902 @cached
2903 def unit_get(attribute):
2904@@ -155,6 +155,121 @@
2905 return os.path.basename(sys.argv[0])
2906
2907
2908+class Config(dict):
2909+ """A dictionary representation of the charm's config.yaml, with some
2910+ extra features:
2911+
2912+ - See which values in the dictionary have changed since the previous hook.
2913+ - For values that have changed, see what the previous value was.
2914+ - Store arbitrary data for use in a later hook.
2915+
2916+ NOTE: Do not instantiate this object directly - instead call
2917+ ``hookenv.config()``, which will return an instance of :class:`Config`.
2918+
2919+ Example usage::
2920+
2921+ >>> # inside a hook
2922+ >>> from charmhelpers.core import hookenv
2923+ >>> config = hookenv.config()
2924+ >>> config['foo']
2925+ 'bar'
2926+ >>> # store a new key/value for later use
2927+ >>> config['mykey'] = 'myval'
2928+
2929+
2930+ >>> # user runs `juju set mycharm foo=baz`
2931+ >>> # now we're inside subsequent config-changed hook
2932+ >>> config = hookenv.config()
2933+ >>> config['foo']
2934+ 'baz'
2935+ >>> # test to see if this val has changed since last hook
2936+ >>> config.changed('foo')
2937+ True
2938+ >>> # what was the previous value?
2939+ >>> config.previous('foo')
2940+ 'bar'
2941+ >>> # keys/values that we add are preserved across hooks
2942+ >>> config['mykey']
2943+ 'myval'
2944+
2945+ """
2946+ CONFIG_FILE_NAME = '.juju-persistent-config'
2947+
2948+ def __init__(self, *args, **kw):
2949+ super(Config, self).__init__(*args, **kw)
2950+ self.implicit_save = True
2951+ self._prev_dict = None
2952+ self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
2953+ if os.path.exists(self.path):
2954+ self.load_previous()
2955+
2956+ def __getitem__(self, key):
2957+ """For regular dict lookups, check the current juju config first,
2958+ then the previous (saved) copy. This ensures that user-saved values
2959+ will be returned by a dict lookup.
2960+
2961+ """
2962+ try:
2963+ return dict.__getitem__(self, key)
2964+ except KeyError:
2965+ return (self._prev_dict or {})[key]
2966+
2967+ def load_previous(self, path=None):
2968+ """Load previous copy of config from disk.
2969+
2970+ In normal usage you don't need to call this method directly - it
2971+ is called automatically at object initialization.
2972+
2973+ :param path:
2974+
2975+ File path from which to load the previous config. If `None`,
2976+ config is loaded from the default location. If `path` is
2977+ specified, subsequent `save()` calls will write to the same
2978+ path.
2979+
2980+ """
2981+ self.path = path or self.path
2982+ with open(self.path) as f:
2983+ self._prev_dict = json.load(f)
2984+
2985+ def changed(self, key):
2986+ """Return True if the current value for this key is different from
2987+ the previous value.
2988+
2989+ """
2990+ if self._prev_dict is None:
2991+ return True
2992+ return self.previous(key) != self.get(key)
2993+
2994+ def previous(self, key):
2995+ """Return previous value for this key, or None if there
2996+ is no previous value.
2997+
2998+ """
2999+ if self._prev_dict:
3000+ return self._prev_dict.get(key)
3001+ return None
3002+
3003+ def save(self):
3004+ """Save this config to disk.
3005+
3006+ If the charm is using the :mod:`Services Framework <services.base>`
3007+ or :meth:`@hook <Hooks.hook>` decorator, this
3008+ is called automatically at the end of successful hook execution.
3009+ Otherwise, it should be called directly by user code.
3010+
3011+ To disable automatic saves, set ``implicit_save=False`` on this
3012+ instance.
3013+
3014+ """
3015+ if self._prev_dict:
3016+ for k, v in self._prev_dict.iteritems():
3017+ if k not in self:
3018+ self[k] = v
3019+ with open(self.path, 'w') as f:
3020+ json.dump(self, f)
3021+
3022+
3023 @cached
3024 def config(scope=None):
3025 """Juju charm configuration"""
3026@@ -163,7 +278,10 @@
3027 config_cmd_line.append(scope)
3028 config_cmd_line.append('--format=json')
3029 try:
3030- return json.loads(subprocess.check_output(config_cmd_line))
3031+ config_data = json.loads(subprocess.check_output(config_cmd_line))
3032+ if scope is not None:
3033+ return config_data
3034+ return Config(config_data)
3035 except ValueError:
3036 return None
3037
3038@@ -188,8 +306,9 @@
3039 raise
3040
3041
3042-def relation_set(relation_id=None, relation_settings={}, **kwargs):
3043+def relation_set(relation_id=None, relation_settings=None, **kwargs):
3044 """Set relation information for the current unit"""
3045+ relation_settings = relation_settings if relation_settings else {}
3046 relation_cmd_line = ['relation-set']
3047 if relation_id is not None:
3048 relation_cmd_line.extend(('-r', relation_id))
3049@@ -348,18 +467,19 @@
3050 class Hooks(object):
3051 """A convenient handler for hook functions.
3052
3053- Example:
3054+ Example::
3055+
3056 hooks = Hooks()
3057
3058 # register a hook, taking its name from the function name
3059 @hooks.hook()
3060 def install():
3061- ...
3062+ pass # your code here
3063
3064 # register a hook, providing a custom hook name
3065 @hooks.hook("config-changed")
3066 def config_changed():
3067- ...
3068+ pass # your code here
3069
3070 if __name__ == "__main__":
3071 # execute a hook based on the name the program is called by
3072@@ -379,6 +499,9 @@
3073 hook_name = os.path.basename(args[0])
3074 if hook_name in self._hooks:
3075 self._hooks[hook_name]()
3076+ cfg = config()
3077+ if cfg.implicit_save:
3078+ cfg.save()
3079 else:
3080 raise UnregisteredHookError(hook_name)
3081
3082
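With the new Config class wired into Hooks.execute() above, previous values persist between hook runs automatically. A sketch of reacting to a changed openstack-origin option inside config-changed:

    from charmhelpers.core import hookenv

    cfg = hookenv.config()
    if cfg.changed('openstack-origin'):
        hookenv.log('openstack-origin: %s -> %s'
                    % (cfg.previous('openstack-origin'),
                       cfg['openstack-origin']))
    # cfg.save() runs implicitly after the hook body when using the
    # Hooks framework, so no explicit persistence call is needed.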
3083=== modified file 'hooks/charmhelpers/core/host.py'
3084--- hooks/charmhelpers/core/host.py 2014-01-28 00:01:57 +0000
3085+++ hooks/charmhelpers/core/host.py 2014-09-10 21:17:48 +0000
3086@@ -12,10 +12,13 @@
3087 import string
3088 import subprocess
3089 import hashlib
3090+import shutil
3091+from contextlib import contextmanager
3092
3093 from collections import OrderedDict
3094
3095 from hookenv import log
3096+from fstab import Fstab
3097
3098
3099 def service_start(service_name):
3100@@ -34,7 +37,8 @@
3101
3102
3103 def service_reload(service_name, restart_on_failure=False):
3104- """Reload a system service, optionally falling back to restart if reload fails"""
3105+ """Reload a system service, optionally falling back to restart if
3106+ reload fails"""
3107 service_result = service('reload', service_name)
3108 if not service_result and restart_on_failure:
3109 service_result = service('restart', service_name)
3110@@ -50,7 +54,7 @@
3111 def service_running(service):
3112 """Determine whether a system service is running"""
3113 try:
3114- output = subprocess.check_output(['service', service, 'status'])
3115+ output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
3116 except subprocess.CalledProcessError:
3117 return False
3118 else:
3119@@ -60,6 +64,16 @@
3120 return False
3121
3122
3123+def service_available(service_name):
3124+ """Determine whether a system service is available"""
3125+ try:
3126+ subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
3127+ except subprocess.CalledProcessError:
3128+ return False
3129+ else:
3130+ return True
3131+
3132+
3133 def adduser(username, password=None, shell='/bin/bash', system_user=False):
3134 """Add a user to the system"""
3135 try:
3136@@ -143,7 +157,19 @@
3137 target.write(content)
3138
3139
3140-def mount(device, mountpoint, options=None, persist=False):
3141+def fstab_remove(mp):
3142+ """Remove the given mountpoint entry from /etc/fstab
3143+ """
3144+ return Fstab.remove_by_mountpoint(mp)
3145+
3146+
3147+def fstab_add(dev, mp, fs, options=None):
3148+ """Adds the given device entry to the /etc/fstab file
3149+ """
3150+ return Fstab.add(dev, mp, fs, options=options)
3151+
3152+
3153+def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
3154 """Mount a filesystem at a particular mountpoint"""
3155 cmd_args = ['mount']
3156 if options is not None:
3157@@ -154,9 +180,9 @@
3158 except subprocess.CalledProcessError, e:
3159 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
3160 return False
3161+
3162 if persist:
3163- # TODO: update fstab
3164- pass
3165+ return fstab_add(device, mountpoint, filesystem, options=options)
3166 return True
3167
3168
3169@@ -168,9 +194,9 @@
3170 except subprocess.CalledProcessError, e:
3171 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
3172 return False
3173+
3174 if persist:
3175- # TODO: update fstab
3176- pass
3177+ return fstab_remove(mountpoint)
3178 return True
3179
3180
3181@@ -194,16 +220,16 @@
3182 return None
3183
3184
3185-def restart_on_change(restart_map):
3186+def restart_on_change(restart_map, stopstart=False):
3187 """Restart services based on configuration files changing
3188
3189- This function is used a decorator, for example
3190+ This function is used as a decorator, for example::
3191
3192 @restart_on_change({
3193 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
3194 })
3195 def ceph_client_changed():
3196- ...
3197+ pass # your code here
3198
3199 In this example, the cinder-api and cinder-volume services
3200 would be restarted if /etc/ceph/ceph.conf is changed by the
3201@@ -219,8 +245,14 @@
3202 for path in restart_map:
3203 if checksums[path] != file_hash(path):
3204 restarts += restart_map[path]
3205- for service_name in list(OrderedDict.fromkeys(restarts)):
3206- service('restart', service_name)
3207+ services_list = list(OrderedDict.fromkeys(restarts))
3208+ if not stopstart:
3209+ for service_name in services_list:
3210+ service('restart', service_name)
3211+ else:
3212+ for action in ['stop', 'start']:
3213+ for service_name in services_list:
3214+ service(action, service_name)
3215 return wrapped_f
3216 return wrap
3217
3218@@ -289,3 +321,40 @@
3219 if 'link/ether' in words:
3220 hwaddr = words[words.index('link/ether') + 1]
3221 return hwaddr
3222+
3223+
3224+def cmp_pkgrevno(package, revno, pkgcache=None):
3225+ '''Compare supplied revno with the revno of the installed package
3226+
3227+ * 1 => Installed revno is greater than supplied arg
3228+ * 0 => Installed revno is the same as supplied arg
3229+ * -1 => Installed revno is less than supplied arg
3230+
3231+ '''
3232+ import apt_pkg
3233+ from charmhelpers.fetch import apt_cache
3234+ if not pkgcache:
3235+ pkgcache = apt_cache()
3236+ pkg = pkgcache[package]
3237+ return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
3238+
3239+
3240+@contextmanager
3241+def chdir(d):
3242+ cur = os.getcwd()
3243+ try:
3244+ yield os.chdir(d)
3245+ finally:
3246+ os.chdir(cur)
3247+
3248+
3249+def chownr(path, owner, group):
3250+ uid = pwd.getpwnam(owner).pw_uid
3251+ gid = grp.getgrnam(group).gr_gid
3252+
3253+ for root, dirs, files in os.walk(path):
3254+ for name in dirs + files:
3255+ full = os.path.join(root, name)
3256+ broken_symlink = os.path.lexists(full) and not os.path.exists(full)
3257+ if not broken_symlink:
3258+ os.chown(full, uid, gid)
3259
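The new cmp_pkgrevno() helper makes version gates straightforward; a sketch, assuming the named package is installed (a missing package raises KeyError from the apt cache):

    from charmhelpers.core.host import cmp_pkgrevno

    if cmp_pkgrevno('ceph', '0.56') >= 0:
        pass  # installed ceph is 0.56 or newer; use the newer CLI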
3260=== modified file 'hooks/charmhelpers/fetch/__init__.py'
3261--- hooks/charmhelpers/fetch/__init__.py 2014-01-28 00:01:57 +0000
3262+++ hooks/charmhelpers/fetch/__init__.py 2014-09-10 21:17:48 +0000
3263@@ -1,4 +1,6 @@
3264 import importlib
3265+from tempfile import NamedTemporaryFile
3266+import time
3267 from yaml import safe_load
3268 from charmhelpers.core.host import (
3269 lsb_release
3270@@ -12,9 +14,9 @@
3271 config,
3272 log,
3273 )
3274-import apt_pkg
3275 import os
3276
3277+
3278 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
3279 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
3280 """
3281@@ -54,13 +56,68 @@
3282 'icehouse/proposed': 'precise-proposed/icehouse',
3283 'precise-icehouse/proposed': 'precise-proposed/icehouse',
3284 'precise-proposed/icehouse': 'precise-proposed/icehouse',
3285+ # Juno
3286+ 'juno': 'trusty-updates/juno',
3287+ 'trusty-juno': 'trusty-updates/juno',
3288+ 'trusty-juno/updates': 'trusty-updates/juno',
3289+ 'trusty-updates/juno': 'trusty-updates/juno',
3290+ 'juno/proposed': 'trusty-proposed/juno',
3291+ 'juno/updates': 'trusty-updates/juno',
3292+ 'trusty-juno/proposed': 'trusty-proposed/juno',
3293+ 'trusty-proposed/juno': 'trusty-proposed/juno',
3294 }
3295
3296+# The order of this list is very important. Handlers should be listed from
3297+# least- to most-specific URL matching.
3298+FETCH_HANDLERS = (
3299+ 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
3300+ 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
3301+)
3302+
3303+APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
3304+APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
3305+APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
3306+
3307+
3308+class SourceConfigError(Exception):
3309+ pass
3310+
3311+
3312+class UnhandledSource(Exception):
3313+ pass
3314+
3315+
3316+class AptLockError(Exception):
3317+ pass
3318+
3319+
3320+class BaseFetchHandler(object):
3321+
3322+ """Base class for FetchHandler implementations in fetch plugins"""
3323+
3324+ def can_handle(self, source):
3325+ """Returns True if the source can be handled. Otherwise returns
3326+ a string explaining why it cannot"""
3327+ return "Wrong source type"
3328+
3329+ def install(self, source):
3330+ """Try to download and unpack the source. Return the path to the
3331+ unpacked files or raise UnhandledSource."""
3332+ raise UnhandledSource("Wrong source type {}".format(source))
3333+
3334+ def parse_url(self, url):
3335+ return urlparse(url)
3336+
3337+ def base_url(self, url):
3338+ """Return url without querystring or fragment"""
3339+ parts = list(self.parse_url(url))
3340+ parts[4:] = ['' for i in parts[4:]]
3341+ return urlunparse(parts)
3342+
3343
3344 def filter_installed_packages(packages):
3345 """Returns a list of packages that require installation"""
3346- apt_pkg.init()
3347- cache = apt_pkg.Cache()
3348+ cache = apt_cache()
3349 _pkgs = []
3350 for package in packages:
3351 try:
3352@@ -73,6 +130,16 @@
3353 return _pkgs
3354
3355
3356+def apt_cache(in_memory=True):
3357+ """Build and return an apt cache"""
3358+ import apt_pkg
3359+ apt_pkg.init()
3360+ if in_memory:
3361+ apt_pkg.config.set("Dir::Cache::pkgcache", "")
3362+ apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
3363+ return apt_pkg.Cache()
3364+
3365+
3366 def apt_install(packages, options=None, fatal=False):
3367 """Install one or more packages"""
3368 if options is None:
3369@@ -87,23 +154,28 @@
3370 cmd.extend(packages)
3371 log("Installing {} with options: {}".format(packages,
3372 options))
3373- env = os.environ.copy()
3374- if 'DEBIAN_FRONTEND' not in env:
3375- env['DEBIAN_FRONTEND'] = 'noninteractive'
3376-
3377- if fatal:
3378- subprocess.check_call(cmd, env=env)
3379+ _run_apt_command(cmd, fatal)
3380+
3381+
3382+def apt_upgrade(options=None, fatal=False, dist=False):
3383+ """Upgrade all packages"""
3384+ if options is None:
3385+ options = ['--option=Dpkg::Options::=--force-confold']
3386+
3387+ cmd = ['apt-get', '--assume-yes']
3388+ cmd.extend(options)
3389+ if dist:
3390+ cmd.append('dist-upgrade')
3391 else:
3392- subprocess.call(cmd, env=env)
3393+ cmd.append('upgrade')
3394+ log("Upgrading with options: {}".format(options))
3395+ _run_apt_command(cmd, fatal)
3396
3397
3398 def apt_update(fatal=False):
3399 """Update local apt cache"""
3400 cmd = ['apt-get', 'update']
3401- if fatal:
3402- subprocess.check_call(cmd)
3403- else:
3404- subprocess.call(cmd)
3405+ _run_apt_command(cmd, fatal)
3406
3407
3408 def apt_purge(packages, fatal=False):
3409@@ -114,10 +186,7 @@
3410 else:
3411 cmd.extend(packages)
3412 log("Purging {}".format(packages))
3413- if fatal:
3414- subprocess.check_call(cmd)
3415- else:
3416- subprocess.call(cmd)
3417+ _run_apt_command(cmd, fatal)
3418
3419
3420 def apt_hold(packages, fatal=False):
3421@@ -128,6 +197,7 @@
3422 else:
3423 cmd.extend(packages)
3424 log("Holding {}".format(packages))
3425+
3426 if fatal:
3427 subprocess.check_call(cmd)
3428 else:
3429@@ -135,8 +205,33 @@
3430
3431
3432 def add_source(source, key=None):
3433+ """Add a package source to this system.
3434+
3435+ @param source: a URL or sources.list entry, as supported by
3436+ add-apt-repository(1). Examples:
3437+ ppa:charmers/example
3438+ deb https://stub:key@private.example.com/ubuntu trusty main
3439+
3440+ In addition:
3441+ 'proposed:' may be used to enable the standard 'proposed'
3442+ pocket for the release.
3443+ 'cloud:' may be used to activate official cloud archive pockets,
3444+ such as 'cloud:icehouse'
3445+
3446+ @param key: A key to be added to the system's APT keyring and used
3447+ to verify the signatures on packages. Ideally, this should be an
3448+ ASCII format GPG public key including the block headers. A GPG key
3449+ id may also be used, but be aware that only insecure protocols are
3450+ available to retrieve the actual public key from a public keyserver,
3451+ placing your Juju environment at risk. PPA and cloud archive keys
3452+ are securely added automatically, so should not be provided.
3453+ """
3454+ if source is None:
3455+ log('Source is not present. Skipping')
3456+ return
3457+
3458 if (source.startswith('ppa:') or
3459- source.startswith('http:') or
3460+ source.startswith('http') or
3461 source.startswith('deb ') or
3462 source.startswith('cloud-archive:')):
3463 subprocess.check_call(['add-apt-repository', '--yes', source])
3464@@ -155,57 +250,66 @@
3465 release = lsb_release()['DISTRIB_CODENAME']
3466 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
3467 apt.write(PROPOSED_POCKET.format(release))
3468+ else:
3469+ raise SourceConfigError("Unknown source: {!r}".format(source))
3470+
3471 if key:
3472- subprocess.check_call(['apt-key', 'import', key])
3473-
3474-
3475-class SourceConfigError(Exception):
3476- pass
3477+ if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
3478+ with NamedTemporaryFile() as key_file:
3479+ key_file.write(key)
3480+ key_file.flush()
3481+ key_file.seek(0)
3482+ subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file)
3483+ else:
3484+ # Note that hkp: is in no way a secure protocol. Using a
3485+ # GPG key id is pointless from a security POV unless you
3486+ # absolutely trust your network and DNS.
3487+ subprocess.check_call(['apt-key', 'adv', '--keyserver',
3488+ 'hkp://keyserver.ubuntu.com:80', '--recv',
3489+ key])
3490
3491
3492 def configure_sources(update=False,
3493 sources_var='install_sources',
3494 keys_var='install_keys'):
3495 """
3496- Configure multiple sources from charm configuration
3497+ Configure multiple sources from charm configuration.
3498+
3499+ The lists are encoded as YAML fragments in the configuration.
3500+ The fragment needs to be included as a string. Sources and their
3501+ corresponding keys are of the types supported by add_source().
3502
3503 Example config:
3504- install_sources:
3505+ install_sources: |
3506 - "ppa:foo"
3507 - "http://example.com/repo precise main"
3508- install_keys:
3509+ install_keys: |
3510 - null
3511 - "a1b2c3d4"
3512
3513 Note that 'null' (a.k.a. None) should not be quoted.
3514 """
3515- sources = safe_load(config(sources_var))
3516- keys = config(keys_var)
3517- if keys is not None:
3518- keys = safe_load(keys)
3519- if isinstance(sources, basestring) and (
3520- keys is None or isinstance(keys, basestring)):
3521- add_source(sources, keys)
3522+ sources = safe_load((config(sources_var) or '').strip()) or []
3523+ keys = safe_load((config(keys_var) or '').strip()) or None
3524+
3525+ if isinstance(sources, basestring):
3526+ sources = [sources]
3527+
3528+ if keys is None:
3529+ for source in sources:
3530+ add_source(source, None)
3531 else:
3532- if not len(sources) == len(keys):
3533- msg = 'Install sources and keys lists are different lengths'
3534- raise SourceConfigError(msg)
3535- for src_num in range(len(sources)):
3536- add_source(sources[src_num], keys[src_num])
3537+ if isinstance(keys, basestring):
3538+ keys = [keys]
3539+
3540+ if len(sources) != len(keys):
3541+ raise SourceConfigError(
3542+ 'Install sources and keys lists are different lengths')
3543+ for source, key in zip(sources, keys):
3544+ add_source(source, key)
3545 if update:
3546 apt_update(fatal=True)
3547
3548-# The order of this list is very important. Handlers should be listed in from
3549-# least- to most-specific URL matching.
3550-FETCH_HANDLERS = (
3551- 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
3552- 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
3553-)
3554-
3555-
3556-class UnhandledSource(Exception):
3557- pass
3558-
3559
3560 def install_remote(source):
3561 """
3562@@ -236,30 +340,6 @@
3563 return install_remote(source)
3564
3565
3566-class BaseFetchHandler(object):
3567-
3568- """Base class for FetchHandler implementations in fetch plugins"""
3569-
3570- def can_handle(self, source):
3571- """Returns True if the source can be handled. Otherwise returns
3572- a string explaining why it cannot"""
3573- return "Wrong source type"
3574-
3575- def install(self, source):
3576- """Try to download and unpack the source. Return the path to the
3577- unpacked files or raise UnhandledSource."""
3578- raise UnhandledSource("Wrong source type {}".format(source))
3579-
3580- def parse_url(self, url):
3581- return urlparse(url)
3582-
3583- def base_url(self, url):
3584- """Return url without querystring or fragment"""
3585- parts = list(self.parse_url(url))
3586- parts[4:] = ['' for i in parts[4:]]
3587- return urlunparse(parts)
3588-
3589-
3590 def plugins(fetch_handlers=None):
3591 if not fetch_handlers:
3592 fetch_handlers = FETCH_HANDLERS
3593@@ -277,3 +357,40 @@
3594 log("FetchHandler {} not found, skipping plugin".format(
3595 handler_name))
3596 return plugin_list
3597+
3598+
3599+def _run_apt_command(cmd, fatal=False):
3600+ """
3601+ Run an APT command, checking the exit status and retrying if the fatal
3602+ flag is set to True.
3603+
3604+ :param: cmd: str: The apt command to run.
3605+ :param: fatal: bool: Whether the command's exit status should be
3606+ checked, retrying if the dpkg lock cannot be acquired.
3607+ """
3608+ env = os.environ.copy()
3609+
3610+ if 'DEBIAN_FRONTEND' not in env:
3611+ env['DEBIAN_FRONTEND'] = 'noninteractive'
3612+
3613+ if fatal:
3614+ retry_count = 0
3615+ result = None
3616+
3617+ # If the command is considered "fatal", we need to retry if the apt
3618+ # lock was not acquired.
3619+
3620+ while result is None or result == APT_NO_LOCK:
3621+ try:
3622+ result = subprocess.check_call(cmd, env=env)
3623+ except subprocess.CalledProcessError as e:
3624+ retry_count = retry_count + 1
3625+ if retry_count > APT_NO_LOCK_RETRY_COUNT:
3626+ raise
3627+ result = e.returncode
3628+ log("Couldn't acquire DPKG lock. Will retry in {} seconds."
3629+ "".format(APT_NO_LOCK_RETRY_DELAY))
3630+ time.sleep(APT_NO_LOCK_RETRY_DELAY)
3631+
3632+ else:
3633+ subprocess.call(cmd, env=env)
3634
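
As a quick orientation for review: the reworked fetch helpers above are
config-driven. A minimal usage sketch, assuming a charm whose config.yaml
defines the default install_sources and install_keys options (all names and
values below are illustrative, not part of this branch):

    from charmhelpers.fetch import add_source, configure_sources

    # Enable an official cloud archive pocket; its signing key is
    # added automatically, so no key argument is needed.
    add_source("cloud:icehouse")

    # Read install_sources/install_keys from the charm config, pair
    # each source with its key via add_source(), then run
    # apt-get update. fatal=True makes apt failures raise, retrying
    # while the dpkg lock is held (see _run_apt_command above).
    configure_sources(update=True)
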
3635=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
3636--- hooks/charmhelpers/fetch/archiveurl.py 2014-01-28 00:01:57 +0000
3637+++ hooks/charmhelpers/fetch/archiveurl.py 2014-09-10 21:17:48 +0000
3638@@ -1,5 +1,9 @@
3639 import os
3640 import urllib2
3641+from urllib import urlretrieve
3642+import urlparse
3643+import hashlib
3644+
3645 from charmhelpers.fetch import (
3646 BaseFetchHandler,
3647 UnhandledSource
3648@@ -10,7 +14,17 @@
3649 )
3650 from charmhelpers.core.host import mkdir
3651
3652-
3653+"""
3654+This class is a plugin for charmhelpers.fetch.install_remote.
3655+
3656+It grabs, validates and installs remote archives fetched over "http", "https", "ftp" or "file" protocols. The contents of the archive are installed in $CHARM_DIR/fetched/.
3657+
3658+Example usage:
3659+install_remote("https://example.com/some/archive.tar.gz")
3660+# Installs the contents of archive.tar.gz in $CHARM_DIR/fetched/.
3661+
3662+See charmhelpers.payload.archive.get_archive_handler for supported archive types.
3663+"""
3664 class ArchiveUrlFetchHandler(BaseFetchHandler):
3665 """Handler for archives via generic URLs"""
3666 def can_handle(self, source):
3667@@ -24,6 +38,19 @@
3668 def download(self, source, dest):
3669 # propagate all exceptions
3670 # URLError, OSError, etc.
3671+ proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
3672+ if proto in ('http', 'https'):
3673+ auth, barehost = urllib2.splituser(netloc)
3674+ if auth is not None:
3675+ source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
3676+ username, password = urllib2.splitpasswd(auth)
3677+ passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
3678+ # Realm is set to None in add_password to force the username and
3679+ # password to be used regardless of the realm
3680+ passman.add_password(None, source, username, password)
3681+ authhandler = urllib2.HTTPBasicAuthHandler(passman)
3682+ opener = urllib2.build_opener(authhandler)
3683+ urllib2.install_opener(opener)
3684 response = urllib2.urlopen(source)
3685 try:
3686 with open(dest, 'w') as dest_file:
3687@@ -46,3 +73,31 @@
3688 except OSError as e:
3689 raise UnhandledSource(e.strerror)
3690 return extract(dld_file)
3691+
3692+ # Mandatory file validation via SHA1 or MD5 hashing.
3693+ def download_and_validate(self, url, hashsum, validate="sha1"):
3694+ if validate == 'sha1' and len(hashsum) != 40:
3695+ raise ValueError("HashSum must be 40 characters when using sha1"
3696+ " validation")
3697+ if validate == 'md5' and len(hashsum) != 32:
3698+ raise ValueError("HashSum must be 32 characters when using md5"
3699+ " validation")
3700+ tempfile, headers = urlretrieve(url)
3701+ self.validate_file(tempfile, hashsum, validate)
3702+ return tempfile
3703+
3704+ # Check that the file's hash matches the expected hash; raises
3705+ def validate_file(self, source, hashsum, vmethod='sha1'):
3706+ if vmethod != 'sha1' and vmethod != 'md5':
3707+ raise ValueError("Validation Method not supported")
3708+
3709+ if vmethod == 'md5':
3710+ m = hashlib.md5()
3711+ if vmethod == 'sha1':
3712+ m = hashlib.sha1()
3713+ with open(source) as f:
3714+ for line in f:
3715+ m.update(line)
3716+ if hashsum != m.hexdigest():
3717+ msg = "Hash Mismatch on {} expected {} got {}"
3718+ raise ValueError(msg.format(source, hashsum, m.hexdigest()))
3719
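
The new download_and_validate()/validate_file() helpers make checksum
enforcement explicit. A short sketch of the intended call pattern (the URL
and digest below are placeholders, not values from this branch):

    from charmhelpers.fetch.archiveurl import ArchiveUrlFetchHandler

    handler = ArchiveUrlFetchHandler()
    # Downloads to a temporary file, then raises ValueError unless the
    # file's SHA1 digest matches the 40-character value supplied.
    path = handler.download_and_validate(
        "https://example.com/some/archive.tar.gz",
        "da39a3ee5e6b4b0d3255bfef95601890afd80709",  # placeholder digest
        validate="sha1")
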
3720=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
3721--- hooks/charmhelpers/fetch/bzrurl.py 2014-01-28 00:01:57 +0000
3722+++ hooks/charmhelpers/fetch/bzrurl.py 2014-09-10 21:17:48 +0000
3723@@ -39,7 +39,8 @@
3724 def install(self, source):
3725 url_parts = self.parse_url(source)
3726 branch_name = url_parts.path.strip("/").split("/")[-1]
3727- dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
3728+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3729+ branch_name)
3730 if not os.path.exists(dest_dir):
3731 mkdir(dest_dir, perms=0755)
3732 try:
3733
3734=== modified file 'hooks/hooks.py'
3735--- hooks/hooks.py 2014-08-22 07:52:20 +0000
3736+++ hooks/hooks.py 2014-09-10 21:17:48 +0000
3737@@ -1,6 +1,7 @@
3738 #!/usr/bin/env python
3739 # vim: et ai ts=4 sw=4:
3740
3741+from charmhelpers.contrib.openstack.utils import configure_installation_source
3742 from charmhelpers import fetch
3743 from charmhelpers.core import hookenv
3744 from charmhelpers.core.hookenv import ERROR, INFO
3745@@ -8,7 +9,7 @@
3746 import json
3747 import os
3748 import sys
3749-from util import StorageServiceUtil, generate_volume_label, get_running_series
3750+from util import StorageServiceUtil, generate_volume_label
3751
3752 hooks = hookenv.Hooks()
3753
3754@@ -84,13 +85,12 @@
3755 if apt_install is None: # for testing purposes
3756 apt_install = fetch.apt_install
3757 if add_source is None: # for testing purposes
3758- add_source = fetch.add_source
3759+ add_source = configure_installation_source
3760
3761 provider = hookenv.config("provider")
3762 if provider == "nova":
3763+ add_source(hookenv.config('openstack-origin'))
3764 required_packages = ["python-novaclient"]
3765- if int(get_running_series()['release'].split(".")[0]) < 14:
3766- add_source("cloud-archive:havana")
3767 elif provider == "ec2":
3768 required_packages = ["python-boto"]
3769 fetch.apt_update(fatal=True)
3770
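
This hunk is the core behaviour change of the branch: rather than
hard-coding "cloud-archive:havana", the nova provider path now hands the
openstack-origin config value to charmhelpers. A sketch of the equivalent
calls (the origin value is an example, not the charm's default):

    from charmhelpers import fetch
    from charmhelpers.contrib.openstack.utils import (
        configure_installation_source)

    # "distro" leaves the archive untouched; a "cloud:" value such as
    # this one enables the matching Ubuntu Cloud Archive pocket.
    configure_installation_source("cloud:precise-havana")
    fetch.apt_update(fatal=True)
    fetch.apt_install(["python-novaclient"], fatal=True)
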
3771=== modified file 'hooks/test_hooks.py'
3772--- hooks/test_hooks.py 2014-09-09 16:24:46 +0000
3773+++ hooks/test_hooks.py 2014-09-10 21:17:48 +0000
3774@@ -16,7 +16,8 @@
3775 {"key": "myusername", "tenant": "myusername_project",
3776 "secret": "password", "region": "region1", "provider": "nova",
3777 "endpoint": "https://keystone_url:443/v2.0/",
3778- "default_volume_size": 11})
3779+ "default_volume_size": 11,
3780+ "openstack-origin": "cloud:precise-folsom/staging"})
3781
3782 def test_wb_persist_data_creates_persist_file_if_it_doesnt_exist(self):
3783 """
3784@@ -182,46 +183,23 @@
3785 self.mocker.replay()
3786 hooks.config_changed()
3787
3788- def test_install_installs_novaclient_and_no_cloud_archive_on_trusty(self):
3789- """
3790- On trusty, 14.04, and later, L{install} will not call
3791- C{fetch.add_source} to add a cloud repository but it will install the
3792- install the C{python-novaclient} package.
3793- """
3794- get_running_series = self.mocker.replace(hooks.get_running_series)
3795- get_running_series()
3796- self.mocker.result({'release': '14.04'}) # Trusty series
3797- add_source = self.mocker.replace(fetch.add_source)
3798- add_source("cloud-archive:havana")
3799- self.mocker.count(0) # Test we never called add_source
3800- apt_update = self.mocker.replace(fetch.apt_update)
3801- apt_update(fatal=True)
3802- self.mocker.replay()
3803-
3804- def apt_install(packages, fatal):
3805- self.assertEqual(["python-novaclient"], packages)
3806- self.assertTrue(fatal)
3807-
3808- hooks.install(apt_install=apt_install, add_source=add_source)
3809-
3810- def test_precise_install_adds_apt_source_and_installs_novaclient(self):
3811- """
3812- L{install} will call C{fetch.add_source} to add a cloud repository and
3813- install the C{python-novaclient} package.
3814- """
3815- get_running_series = self.mocker.replace(hooks.get_running_series)
3816- get_running_series()
3817- self.mocker.result({'release': '12.04'}) # precise
3818- apt_update = self.mocker.replace(fetch.apt_update)
3819- apt_update(fatal=True)
3820- self.mocker.replay()
3821-
3822- def add_source(source):
3823- self.assertEqual("cloud-archive:havana", source)
3824-
3825- def apt_install(packages, fatal):
3826- self.assertEqual(["python-novaclient"], packages)
3827- self.assertTrue(fatal)
3828+ def test_install_installs_novaclient_from_openstack_origin_config(self):
3829+ """
3830+ When C{provider} is nova, L{install} will call the charmhelpers
3831+ C{configure_installation_source} to add the appropriate cloud archive
3832+ for the configured C{openstack-origin}. The C{python-novaclient}
3833+ package will then be installed.
3834+ """
3835+ apt_update = self.mocker.replace(fetch.apt_update)
3836+ apt_update(fatal=True)
3837+ self.mocker.replay()
3838+
3839+ def apt_install(packages, fatal):
3840+ self.assertEqual(["python-novaclient"], packages)
3841+ self.assertTrue(fatal)
3842+
3843+ def add_source(origin):
3844+ self.assertEqual("cloud:precise-folsom/staging", origin)
3845
3846 hooks.install(apt_install=apt_install, add_source=add_source)
3847
