Merge lp:~chad.smith/charms/precise/block-storage-broker/bsb-use-charmhelpers-to-set-openstack-origin into lp:charms/block-storage-broker

Proposed by Chad Smith
Status: Work in progress
Proposed branch: lp:~chad.smith/charms/precise/block-storage-broker/bsb-use-charmhelpers-to-set-openstack-origin
Merge into: lp:charms/block-storage-broker
Diff against target: 3846 lines (+3261/-146)
23 files modified
Makefile (+10/-7)
charm-helpers.yaml (+2/-0)
config.yaml (+15/-0)
hooks/charmhelpers/contrib/openstack/alternatives.py (+17/-0)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+61/-0)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+275/-0)
hooks/charmhelpers/contrib/openstack/context.py (+789/-0)
hooks/charmhelpers/contrib/openstack/ip.py (+79/-0)
hooks/charmhelpers/contrib/openstack/neutron.py (+201/-0)
hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
hooks/charmhelpers/contrib/openstack/templating.py (+279/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+459/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+387/-0)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+62/-0)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+88/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+53/-0)
hooks/charmhelpers/core/hookenv.py (+129/-6)
hooks/charmhelpers/core/host.py (+81/-12)
hooks/charmhelpers/fetch/__init__.py (+191/-74)
hooks/charmhelpers/fetch/archiveurl.py (+56/-1)
hooks/charmhelpers/fetch/bzrurl.py (+2/-1)
hooks/hooks.py (+4/-4)
hooks/test_hooks.py (+19/-41)
To merge this branch: bzr merge lp:~chad.smith/charms/precise/block-storage-broker/bsb-use-charmhelpers-to-set-openstack-origin
Reviewer Review Type Date Requested Status
David Britton (community) Needs Fixing
Review via email: mp+231594@code.launchpad.net

Description of the change

This branch replaces the static call to charmhelpers' fetch.add_source("cloud-archive:havana") with the more flexible approach used by the OpenStack charms, since the static call breaks on newer distribution series (such as trusty).

 This branch is quite sizeable because it pulls in the charmhelpers.contrib.openstack module. The changes to the block-storage-broker are as follows:
  1. sync new charmhelpers dependencies
      - a minor Makefile sync target, added to simplify charmhelpers updates
      - charm-helpers.yaml entries for the new charmhelpers dependencies contrib.openstack and contrib.storage
      - new files synced under charmhelpers (not authored in this branch)

  2. config.yaml gains a new openstack-origin parameter. It defaults the cloud archive repository to the supported distro default, but allows a user to set a custom cloud archive repository if needed.

  3. hooks/hooks.py drops the call to fetch.add_source("cloud:havana") in favor of charmhelpers.contrib.openstack.utils.configure_installation_source()

  4. fix unit tests accordingly

The relevant changes, excluding the charmhelpers.contrib directory sync, are linked here for quick reference:
http://pastebin.ubuntu.com/8099398/
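As background for item 3, here is a simplified sketch of how configure_installation_source dispatches on the openstack-origin value. This is an illustration only, not the synced helper itself: the real helper shells out to add-apt-repository and writes cloud-archive sources lists, and the classify_installation_source name is invented here.

```python
def classify_installation_source(origin):
    """Classify an openstack-origin value the way the charmhelpers
    configure_installation_source helper dispatches on it (sketch).

    Returns a label for the kind of action the real helper performs.
    """
    if origin == 'distro':
        return 'noop'            # use the stock Ubuntu archive as-is
    if origin.startswith('ppa:'):
        return 'add-ppa'         # add-apt-repository ppa:...
    if origin.startswith('deb '):
        return 'add-deb-line'    # write a sources.list entry
    if origin.startswith('cloud:'):
        return 'cloud-archive'   # enable the named Cloud Archive pocket
    raise ValueError('Invalid installation source: %s' % origin)
```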

Revision history for this message
David Britton (dpb) wrote :

Hi Chad -- Thanks for this MP!

I don't see any reason why this would be controversial, please clear the merge conflict with trunk and I'll review and commit this straightaway.

review: Needs Fixing
62. By Chad Smith

merge block-storage-broker trunk resolve conflicts and fix unit tests to avoid mocker use

Unmerged revisions

62. By Chad Smith

merge block-storage-broker trunk resolve conflicts and fix unit tests to avoid mocker use

61. By Chad Smith

correct yaml indent in config.yaml

60. By Chad Smith

update unit tests to validate use of charmhelpers config_installation_source

59. By Chad Smith

update charmhelpers sync functionality

58. By Chad Smith

add openstack-origin to config.yaml options and use charmhelpers configure_installation_source to pull appropriate deb packages for a given ubuntu series

57. By Chad Smith

sync added contrib.(storage|openstack) files

56. By Chad Smith

add contrib.openstack and its dependency contrib.storage to charm-helpers.yaml file

55. By Chad Smith

sync existing charmhelpers dependencies

Preview Diff

=== modified file 'Makefile'
--- Makefile 2014-03-21 17:05:09 +0000
+++ Makefile 2014-09-10 21:17:48 +0000
@@ -1,4 +1,6 @@
 .PHONY: test lint clean
+PYTHON := /usr/bin/env python
+
 CHARM_DIR=`pwd`
 
 clean:
@@ -10,10 +12,11 @@
 lint:
 	@flake8 --exclude hooks/charmhelpers hooks
 
-update-charm-helpers:
-	# Pull latest charm-helpers branch and sync the components based on our
-	# charm-helpers.yaml
-	rm -rf charm-helpers
-	bzr co lp:charm-helpers
-	./charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py -c charm-helpers.yaml
-	rm -rf charm-helpers
+bin/charm_helpers_sync.py:
+	@mkdir -p bin
+	@bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
+	> bin/charm_helpers_sync.py
+
+# Update charmhelpers dependencies within our charm
+sync: bin/charm_helpers_sync.py
+	$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
 
=== modified file 'charm-helpers.yaml'
--- charm-helpers.yaml 2014-02-04 17:36:03 +0000
+++ charm-helpers.yaml 2014-09-10 21:17:48 +0000
@@ -5,3 +5,5 @@
 include:
   - core
   - fetch
+  - contrib.openstack
+  - contrib.storage  # for openstack dependencies
 
=== modified file 'config.yaml'
--- config.yaml 2014-07-15 22:58:26 +0000
+++ config.yaml 2014-09-10 21:17:48 +0000
@@ -29,3 +29,18 @@
     type: int
     description: The volume size in GB if the relation does not specify
     default: 5
+  openstack-origin:
+    default: distro
+    type: string
+    description: |
+      Repository from which to install. May be one of the following:
+      distro (default), ppa:somecustom/ppa, a deb url sources entry,
+      or a supported Cloud Archive release pocket.
+
+      Supported Cloud Archive sources include: cloud:precise-folsom,
+      cloud:precise-folsom/updates, cloud:precise-folsom/staging,
+      cloud:precise-folsom/proposed.
+
+      Note that updating this setting to a source that is known to
+      provide a later version of OpenStack will trigger a software
+      upgrade.
 
=== added directory 'hooks/charmhelpers/contrib'
=== added file 'hooks/charmhelpers/contrib/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/openstack'
=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
=== added file 'hooks/charmhelpers/contrib/openstack/alternatives.py'
--- hooks/charmhelpers/contrib/openstack/alternatives.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/alternatives.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,17 @@
1''' Helper for managing alternatives for file conflict resolution '''
2
3import subprocess
4import shutil
5import os
6
7
8def install_alternative(name, target, source, priority=50):
9 ''' Install alternative configuration '''
10 if (os.path.exists(target) and not os.path.islink(target)):
11 # Move existing file/directory away before installing
12 shutil.move(target, '{}.bak'.format(target))
13 cmd = [
14 'update-alternatives', '--force', '--install',
15 target, name, source, str(priority)
16 ]
17 subprocess.check_call(cmd)
018
=== added directory 'hooks/charmhelpers/contrib/openstack/amulet'
=== added file 'hooks/charmhelpers/contrib/openstack/amulet/__init__.py'
=== added file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,61 @@
1from charmhelpers.contrib.amulet.deployment import (
2 AmuletDeployment
3)
4
5
6class OpenStackAmuletDeployment(AmuletDeployment):
7 """OpenStack amulet deployment.
8
9 This class inherits from AmuletDeployment and has additional support
10 that is specifically for use by OpenStack charms.
11 """
12
13 def __init__(self, series=None, openstack=None, source=None):
14 """Initialize the deployment environment."""
15 super(OpenStackAmuletDeployment, self).__init__(series)
16 self.openstack = openstack
17 self.source = source
18
19 def _add_services(self, this_service, other_services):
20 """Add services to the deployment and set openstack-origin."""
21 super(OpenStackAmuletDeployment, self)._add_services(this_service,
22 other_services)
23 name = 0
24 services = other_services
25 services.append(this_service)
26 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph']
27
28 if self.openstack:
29 for svc in services:
30 if svc[name] not in use_source:
31 config = {'openstack-origin': self.openstack}
32 self.d.configure(svc[name], config)
33
34 if self.source:
35 for svc in services:
36 if svc[name] in use_source:
37 config = {'source': self.source}
38 self.d.configure(svc[name], config)
39
40 def _configure_services(self, configs):
41 """Configure all of the services."""
42 for service, config in configs.iteritems():
43 self.d.configure(service, config)
44
45 def _get_openstack_release(self):
46 """Get openstack release.
47
48 Return an integer representing the enum value of the openstack
49 release.
50 """
51 (self.precise_essex, self.precise_folsom, self.precise_grizzly,
52 self.precise_havana, self.precise_icehouse,
53 self.trusty_icehouse) = range(6)
54 releases = {
55 ('precise', None): self.precise_essex,
56 ('precise', 'cloud:precise-folsom'): self.precise_folsom,
57 ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
58 ('precise', 'cloud:precise-havana'): self.precise_havana,
59 ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
60 ('trusty', None): self.trusty_icehouse}
61 return releases[(self.series, self.openstack)]
062
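The (series, openstack-origin) pair to release mapping in _get_openstack_release above can be exercised standalone; this sketch simply lifts the dictionary out of the class (the module-level constant names are invented here, but the mapping and enum values mirror the synced helper):

```python
# Enum values as assigned by _get_openstack_release via range(6).
(PRECISE_ESSEX, PRECISE_FOLSOM, PRECISE_GRIZZLY,
 PRECISE_HAVANA, PRECISE_ICEHOUSE, TRUSTY_ICEHOUSE) = range(6)

RELEASES = {
    ('precise', None): PRECISE_ESSEX,
    ('precise', 'cloud:precise-folsom'): PRECISE_FOLSOM,
    ('precise', 'cloud:precise-grizzly'): PRECISE_GRIZZLY,
    ('precise', 'cloud:precise-havana'): PRECISE_HAVANA,
    ('precise', 'cloud:precise-icehouse'): PRECISE_ICEHOUSE,
    ('trusty', None): TRUSTY_ICEHOUSE,
}


def get_openstack_release(series, openstack=None):
    """Return the release enum for a (series, openstack-origin) pair."""
    return RELEASES[(series, openstack)]
```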
=== added file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,275 @@
1import logging
2import os
3import time
4import urllib
5
6import glanceclient.v1.client as glance_client
7import keystoneclient.v2_0 as keystone_client
8import novaclient.v1_1.client as nova_client
9
10from charmhelpers.contrib.amulet.utils import (
11 AmuletUtils
12)
13
14DEBUG = logging.DEBUG
15ERROR = logging.ERROR
16
17
18class OpenStackAmuletUtils(AmuletUtils):
19 """OpenStack amulet utilities.
20
21 This class inherits from AmuletUtils and has additional support
22 that is specifically for use by OpenStack charms.
23 """
24
25 def __init__(self, log_level=ERROR):
26 """Initialize the deployment environment."""
27 super(OpenStackAmuletUtils, self).__init__(log_level)
28
29 def validate_endpoint_data(self, endpoints, admin_port, internal_port,
30 public_port, expected):
31 """Validate endpoint data.
32
33 Validate actual endpoint data vs expected endpoint data. The ports
34 are used to find the matching endpoint.
35 """
36 found = False
37 for ep in endpoints:
38 self.log.debug('endpoint: {}'.format(repr(ep)))
39 if (admin_port in ep.adminurl and
40 internal_port in ep.internalurl and
41 public_port in ep.publicurl):
42 found = True
43 actual = {'id': ep.id,
44 'region': ep.region,
45 'adminurl': ep.adminurl,
46 'internalurl': ep.internalurl,
47 'publicurl': ep.publicurl,
48 'service_id': ep.service_id}
49 ret = self._validate_dict_data(expected, actual)
50 if ret:
51 return 'unexpected endpoint data - {}'.format(ret)
52
53 if not found:
54 return 'endpoint not found'
55
56 def validate_svc_catalog_endpoint_data(self, expected, actual):
57 """Validate service catalog endpoint data.
58
59 Validate a list of actual service catalog endpoints vs a list of
60 expected service catalog endpoints.
61 """
62 self.log.debug('actual: {}'.format(repr(actual)))
63 for k, v in expected.iteritems():
64 if k in actual:
65 ret = self._validate_dict_data(expected[k][0], actual[k][0])
66 if ret:
67 return self.endpoint_error(k, ret)
68 else:
69 return "endpoint {} does not exist".format(k)
70 return ret
71
72 def validate_tenant_data(self, expected, actual):
73 """Validate tenant data.
74
75 Validate a list of actual tenant data vs list of expected tenant
76 data.
77 """
78 self.log.debug('actual: {}'.format(repr(actual)))
79 for e in expected:
80 found = False
81 for act in actual:
82 a = {'enabled': act.enabled, 'description': act.description,
83 'name': act.name, 'id': act.id}
84 if e['name'] == a['name']:
85 found = True
86 ret = self._validate_dict_data(e, a)
87 if ret:
88 return "unexpected tenant data - {}".format(ret)
89 if not found:
90 return "tenant {} does not exist".format(e['name'])
91 return ret
92
93 def validate_role_data(self, expected, actual):
94 """Validate role data.
95
96 Validate a list of actual role data vs a list of expected role
97 data.
98 """
99 self.log.debug('actual: {}'.format(repr(actual)))
100 for e in expected:
101 found = False
102 for act in actual:
103 a = {'name': act.name, 'id': act.id}
104 if e['name'] == a['name']:
105 found = True
106 ret = self._validate_dict_data(e, a)
107 if ret:
108 return "unexpected role data - {}".format(ret)
109 if not found:
110 return "role {} does not exist".format(e['name'])
111 return ret
112
113 def validate_user_data(self, expected, actual):
114 """Validate user data.
115
116 Validate a list of actual user data vs a list of expected user
117 data.
118 """
119 self.log.debug('actual: {}'.format(repr(actual)))
120 for e in expected:
121 found = False
122 for act in actual:
123 a = {'enabled': act.enabled, 'name': act.name,
124 'email': act.email, 'tenantId': act.tenantId,
125 'id': act.id}
126 if e['name'] == a['name']:
127 found = True
128 ret = self._validate_dict_data(e, a)
129 if ret:
130 return "unexpected user data - {}".format(ret)
131 if not found:
132 return "user {} does not exist".format(e['name'])
133 return ret
134
135 def validate_flavor_data(self, expected, actual):
136 """Validate flavor data.
137
138 Validate a list of actual flavors vs a list of expected flavors.
139 """
140 self.log.debug('actual: {}'.format(repr(actual)))
141 act = [a.name for a in actual]
142 return self._validate_list_data(expected, act)
143
144 def tenant_exists(self, keystone, tenant):
145 """Return True if tenant exists."""
146 return tenant in [t.name for t in keystone.tenants.list()]
147
148 def authenticate_keystone_admin(self, keystone_sentry, user, password,
149 tenant):
150 """Authenticates admin user with the keystone admin endpoint."""
151 unit = keystone_sentry
152 service_ip = unit.relation('shared-db',
153 'mysql:shared-db')['private-address']
154 ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
155 return keystone_client.Client(username=user, password=password,
156 tenant_name=tenant, auth_url=ep)
157
158 def authenticate_keystone_user(self, keystone, user, password, tenant):
159 """Authenticates a regular user with the keystone public endpoint."""
160 ep = keystone.service_catalog.url_for(service_type='identity',
161 endpoint_type='publicURL')
162 return keystone_client.Client(username=user, password=password,
163 tenant_name=tenant, auth_url=ep)
164
165 def authenticate_glance_admin(self, keystone):
166 """Authenticates admin user with glance."""
167 ep = keystone.service_catalog.url_for(service_type='image',
168 endpoint_type='adminURL')
169 return glance_client.Client(ep, token=keystone.auth_token)
170
171 def authenticate_nova_user(self, keystone, user, password, tenant):
172 """Authenticates a regular user with nova-api."""
173 ep = keystone.service_catalog.url_for(service_type='identity',
174 endpoint_type='publicURL')
175 return nova_client.Client(username=user, api_key=password,
176 project_id=tenant, auth_url=ep)
177
178 def create_cirros_image(self, glance, image_name):
179 """Download the latest cirros image and upload it to glance."""
180 http_proxy = os.getenv('AMULET_HTTP_PROXY')
181 self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
182 if http_proxy:
183 proxies = {'http': http_proxy}
184 opener = urllib.FancyURLopener(proxies)
185 else:
186 opener = urllib.FancyURLopener()
187
188 f = opener.open("http://download.cirros-cloud.net/version/released")
189 version = f.read().strip()
190 cirros_img = "tests/cirros-{}-x86_64-disk.img".format(version)
191
192 if not os.path.exists(cirros_img):
193 cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
194 version, cirros_img)
195 opener.retrieve(cirros_url, cirros_img)
196 f.close()
197
198 with open(cirros_img) as f:
199 image = glance.images.create(name=image_name, is_public=True,
200 disk_format='qcow2',
201 container_format='bare', data=f)
202 count = 1
203 status = image.status
204 while status != 'active' and count < 10:
205 time.sleep(3)
206 image = glance.images.get(image.id)
207 status = image.status
208 self.log.debug('image status: {}'.format(status))
209 count += 1
210
211 if status != 'active':
212 self.log.error('image creation timed out')
213 return None
214
215 return image
216
217 def delete_image(self, glance, image):
218 """Delete the specified image."""
219 num_before = len(list(glance.images.list()))
220 glance.images.delete(image)
221
222 count = 1
223 num_after = len(list(glance.images.list()))
224 while num_after != (num_before - 1) and count < 10:
225 time.sleep(3)
226 num_after = len(list(glance.images.list()))
227 self.log.debug('number of images: {}'.format(num_after))
228 count += 1
229
230 if num_after != (num_before - 1):
231 self.log.error('image deletion timed out')
232 return False
233
234 return True
235
236 def create_instance(self, nova, image_name, instance_name, flavor):
237 """Create the specified instance."""
238 image = nova.images.find(name=image_name)
239 flavor = nova.flavors.find(name=flavor)
240 instance = nova.servers.create(name=instance_name, image=image,
241 flavor=flavor)
242
243 count = 1
244 status = instance.status
245 while status != 'ACTIVE' and count < 60:
246 time.sleep(3)
247 instance = nova.servers.get(instance.id)
248 status = instance.status
249 self.log.debug('instance status: {}'.format(status))
250 count += 1
251
252 if status != 'ACTIVE':
253 self.log.error('instance creation timed out')
254 return None
255
256 return instance
257
258 def delete_instance(self, nova, instance):
259 """Delete the specified instance."""
260 num_before = len(list(nova.servers.list()))
261 nova.servers.delete(instance)
262
263 count = 1
264 num_after = len(list(nova.servers.list()))
265 while num_after != (num_before - 1) and count < 10:
266 time.sleep(3)
267 num_after = len(list(nova.servers.list()))
268 self.log.debug('number of instances: {}'.format(num_after))
269 count += 1
270
271 if num_after != (num_before - 1):
272 self.log.error('instance deletion timed out')
273 return False
274
275 return True
0276
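Several helpers above (create_cirros_image, delete_image, create_instance, delete_instance) share the same poll-until-done loop: re-check a condition a bounded number of times with a sleep between attempts. The pattern, extracted as a generic sketch (poll_until is not part of charmhelpers; the injectable sleep argument exists only so this sketch can be tested without delays):

```python
import time


def poll_until(check, attempts=10, delay=3, sleep=time.sleep):
    """Re-run check() until it returns truthy or attempts run out.

    Mirrors the retry loops in the amulet utils above.  Returns True
    on success, False if the condition never held (a timeout).
    """
    for _ in range(attempts):
        if check():
            return True
        sleep(delay)
    return False
```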
=== added file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,789 @@
1import json
2import os
3import time
4
5from base64 import b64decode
6
7from subprocess import (
8 check_call
9)
10
11
12from charmhelpers.fetch import (
13 apt_install,
14 filter_installed_packages,
15)
16
17from charmhelpers.core.hookenv import (
18 config,
19 local_unit,
20 log,
21 relation_get,
22 relation_ids,
23 related_units,
24 relation_set,
25 unit_get,
26 unit_private_ip,
27 ERROR,
28 INFO
29)
30
31from charmhelpers.contrib.hahelpers.cluster import (
32 determine_apache_port,
33 determine_api_port,
34 https,
35 is_clustered
36)
37
38from charmhelpers.contrib.hahelpers.apache import (
39 get_cert,
40 get_ca_cert,
41)
42
43from charmhelpers.contrib.openstack.neutron import (
44 neutron_plugin_attribute,
45)
46
47from charmhelpers.contrib.network.ip import (
48 get_address_in_network,
49 get_ipv6_addr,
50)
51
52CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
53
54
55class OSContextError(Exception):
56 pass
57
58
59def ensure_packages(packages):
60 '''Install but do not upgrade required plugin packages'''
61 required = filter_installed_packages(packages)
62 if required:
63 apt_install(required, fatal=True)
64
65
66def context_complete(ctxt):
67 _missing = []
68 for k, v in ctxt.iteritems():
69 if v is None or v == '':
70 _missing.append(k)
71 if _missing:
72 log('Missing required data: %s' % ' '.join(_missing), level='INFO')
73 return False
74 return True
75
76
77def config_flags_parser(config_flags):
78 if config_flags.find('==') >= 0:
79 log("config_flags is not in expected format (key=value)",
80 level=ERROR)
81 raise OSContextError
82 # strip the following from each value.
83 post_strippers = ' ,'
84 # we strip any leading/trailing '=' or ' ' from the string then
85 # split on '='.
86 split = config_flags.strip(' =').split('=')
87 limit = len(split)
88 flags = {}
89 for i in xrange(0, limit - 1):
90 current = split[i]
91 next = split[i + 1]
92 vindex = next.rfind(',')
93 if (i == limit - 2) or (vindex < 0):
94 value = next
95 else:
96 value = next[:vindex]
97
98 if i == 0:
99 key = current
100 else:
101 # if this not the first entry, expect an embedded key.
102 index = current.rfind(',')
103 if index < 0:
104 log("invalid config value(s) at index %s" % (i),
105 level=ERROR)
106 raise OSContextError
107 key = current[index + 1:]
108
109 # Add to collection.
110 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
111 return flags
112
113
114class OSContextGenerator(object):
115 interfaces = []
116
117 def __call__(self):
118 raise NotImplementedError
119
120
121class SharedDBContext(OSContextGenerator):
122 interfaces = ['shared-db']
123
124 def __init__(self,
125 database=None, user=None, relation_prefix=None, ssl_dir=None):
126 '''
127 Allows inspecting relation for settings prefixed with relation_prefix.
128 This is useful for parsing access for multiple databases returned via
129 the shared-db interface (eg, nova_password, quantum_password)
130 '''
131 self.relation_prefix = relation_prefix
132 self.database = database
133 self.user = user
134 self.ssl_dir = ssl_dir
135
136 def __call__(self):
137 self.database = self.database or config('database')
138 self.user = self.user or config('database-user')
139 if None in [self.database, self.user]:
140 log('Could not generate shared_db context. '
141 'Missing required charm config options. '
142 '(database name and user)')
143 raise OSContextError
144
145 ctxt = {}
146
147 # NOTE(jamespage) if mysql charm provides a network upon which
148 # access to the database should be made, reconfigure relation
149 # with the service units local address and defer execution
150 access_network = relation_get('access-network')
151 if access_network is not None:
152 if self.relation_prefix is not None:
153 hostname_key = "{}_hostname".format(self.relation_prefix)
154 else:
155 hostname_key = "hostname"
156 access_hostname = get_address_in_network(access_network,
157 unit_get('private-address'))
158 set_hostname = relation_get(attribute=hostname_key,
159 unit=local_unit())
160 if set_hostname != access_hostname:
161 relation_set(relation_settings={hostname_key: access_hostname})
162 return ctxt # Defer any further hook execution for now....
163
164 password_setting = 'password'
165 if self.relation_prefix:
166 password_setting = self.relation_prefix + '_password'
167
168 for rid in relation_ids('shared-db'):
169 for unit in related_units(rid):
170 rdata = relation_get(rid=rid, unit=unit)
171 ctxt = {
172 'database_host': rdata.get('db_host'),
173 'database': self.database,
174 'database_user': self.user,
175 'database_password': rdata.get(password_setting),
176 'database_type': 'mysql'
177 }
178 if context_complete(ctxt):
179 db_ssl(rdata, ctxt, self.ssl_dir)
180 return ctxt
181 return {}
182
183
184class PostgresqlDBContext(OSContextGenerator):
185 interfaces = ['pgsql-db']
186
187 def __init__(self, database=None):
188 self.database = database
189
190 def __call__(self):
191 self.database = self.database or config('database')
192 if self.database is None:
193 log('Could not generate postgresql_db context. '
194 'Missing required charm config options. '
195 '(database name)')
196 raise OSContextError
197 ctxt = {}
198
199 for rid in relation_ids(self.interfaces[0]):
200 for unit in related_units(rid):
201 ctxt = {
202 'database_host': relation_get('host', rid=rid, unit=unit),
203 'database': self.database,
204 'database_user': relation_get('user', rid=rid, unit=unit),
205 'database_password': relation_get('password', rid=rid, unit=unit),
206 'database_type': 'postgresql',
207 }
208 if context_complete(ctxt):
209 return ctxt
210 return {}
211
212
213def db_ssl(rdata, ctxt, ssl_dir):
214 if 'ssl_ca' in rdata and ssl_dir:
215 ca_path = os.path.join(ssl_dir, 'db-client.ca')
216 with open(ca_path, 'w') as fh:
217 fh.write(b64decode(rdata['ssl_ca']))
218 ctxt['database_ssl_ca'] = ca_path
219 elif 'ssl_ca' in rdata:
220 log("Charm not setup for ssl support but ssl ca found")
221 return ctxt
222 if 'ssl_cert' in rdata:
223 cert_path = os.path.join(
224 ssl_dir, 'db-client.cert')
225 if not os.path.exists(cert_path):
226 log("Waiting 1m for ssl client cert validity")
227 time.sleep(60)
228 with open(cert_path, 'w') as fh:
229 fh.write(b64decode(rdata['ssl_cert']))
230 ctxt['database_ssl_cert'] = cert_path
231 key_path = os.path.join(ssl_dir, 'db-client.key')
232 with open(key_path, 'w') as fh:
233 fh.write(b64decode(rdata['ssl_key']))
234 ctxt['database_ssl_key'] = key_path
235 return ctxt
236
237
238class IdentityServiceContext(OSContextGenerator):
239 interfaces = ['identity-service']
240
241 def __call__(self):
242 log('Generating template context for identity-service')
243 ctxt = {}
244
245 for rid in relation_ids('identity-service'):
246 for unit in related_units(rid):
247 rdata = relation_get(rid=rid, unit=unit)
248 ctxt = {
249 'service_port': rdata.get('service_port'),
250 'service_host': rdata.get('service_host'),
251 'auth_host': rdata.get('auth_host'),
252 'auth_port': rdata.get('auth_port'),
253 'admin_tenant_name': rdata.get('service_tenant'),
254 'admin_user': rdata.get('service_username'),
255 'admin_password': rdata.get('service_password'),
256 'service_protocol':
257 rdata.get('service_protocol') or 'http',
258 'auth_protocol':
259 rdata.get('auth_protocol') or 'http',
260 }
261 if context_complete(ctxt):
262 # NOTE(jamespage) this is required for >= icehouse
263 # so a missing value just indicates keystone needs
264 # upgrading
265 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
266 return ctxt
267 return {}
268
269
270class AMQPContext(OSContextGenerator):
271
272 def __init__(self, ssl_dir=None, rel_name='amqp', relation_prefix=None):
273 self.ssl_dir = ssl_dir
274 self.rel_name = rel_name
275 self.relation_prefix = relation_prefix
276 self.interfaces = [rel_name]
277
278 def __call__(self):
279 log('Generating template context for amqp')
280 conf = config()
281 user_setting = 'rabbit-user'
282 vhost_setting = 'rabbit-vhost'
283 if self.relation_prefix:
284 user_setting = self.relation_prefix + '-rabbit-user'
285 vhost_setting = self.relation_prefix + '-rabbit-vhost'
286
287 try:
288 username = conf[user_setting]
289 vhost = conf[vhost_setting]
290 except KeyError as e:
291 log('Could not generate shared_db context. '
292 'Missing required charm config options: %s.' % e)
293 raise OSContextError
294 ctxt = {}
295 for rid in relation_ids(self.rel_name):
296 ha_vip_only = False
297 for unit in related_units(rid):
298 if relation_get('clustered', rid=rid, unit=unit):
299 ctxt['clustered'] = True
300 ctxt['rabbitmq_host'] = relation_get('vip', rid=rid,
301 unit=unit)
302 else:
303 ctxt['rabbitmq_host'] = relation_get('private-address',
304 rid=rid, unit=unit)
305 ctxt.update({
306 'rabbitmq_user': username,
307 'rabbitmq_password': relation_get('password', rid=rid,
308 unit=unit),
309 'rabbitmq_virtual_host': vhost,
310 })
311
312 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
313 if ssl_port:
314 ctxt['rabbit_ssl_port'] = ssl_port
315 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
316 if ssl_ca:
317 ctxt['rabbit_ssl_ca'] = ssl_ca
318
319 if relation_get('ha_queues', rid=rid, unit=unit) is not None:
320 ctxt['rabbitmq_ha_queues'] = True
321
322 ha_vip_only = relation_get('ha-vip-only',
323 rid=rid, unit=unit) is not None
324
325 if context_complete(ctxt):
326 if 'rabbit_ssl_ca' in ctxt:
327 if not self.ssl_dir:
328 log(("Charm not setup for ssl support "
329 "but ssl ca found"))
330 break
331 ca_path = os.path.join(
332 self.ssl_dir, 'rabbit-client-ca.pem')
333 with open(ca_path, 'w') as fh:
334 fh.write(b64decode(ctxt['rabbit_ssl_ca']))
335 ctxt['rabbit_ssl_ca'] = ca_path
336 # Sufficient information found = break out!
337 break
338 # Used for active/active rabbitmq >= grizzly
339 if ('clustered' not in ctxt or ha_vip_only) \
340 and len(related_units(rid)) > 1:
341 rabbitmq_hosts = []
342 for unit in related_units(rid):
343 rabbitmq_hosts.append(relation_get('private-address',
344 rid=rid, unit=unit))
345 ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
346 if not context_complete(ctxt):
347 return {}
348 else:
349 return ctxt
350
351
352class CephContext(OSContextGenerator):
353 interfaces = ['ceph']
354
355 def __call__(self):
356 '''This generates context for /etc/ceph/ceph.conf templates'''
357 if not relation_ids('ceph'):
358 return {}
359
360 log('Generating template context for ceph')
361
362 mon_hosts = []
363 auth = None
364 key = None
365 use_syslog = str(config('use-syslog')).lower()
366 for rid in relation_ids('ceph'):
367 for unit in related_units(rid):
368 auth = relation_get('auth', rid=rid, unit=unit)
369 key = relation_get('key', rid=rid, unit=unit)
370 ceph_addr = \
371 relation_get('ceph-public-address', rid=rid, unit=unit) or \
372 relation_get('private-address', rid=rid, unit=unit)
373 mon_hosts.append(ceph_addr)
374
375 ctxt = {
376 'mon_hosts': ' '.join(mon_hosts),
377 'auth': auth,
378 'key': key,
379 'use_syslog': use_syslog
380 }
381
382 if not os.path.isdir('/etc/ceph'):
383 os.mkdir('/etc/ceph')
384
385 if not context_complete(ctxt):
386 return {}
387
388 ensure_packages(['ceph-common'])
389
390 return ctxt
391
392
393class HAProxyContext(OSContextGenerator):
394 interfaces = ['cluster']
395
396 def __call__(self):
397 '''
398 Builds half a context for the haproxy template, which describes
399 all peers to be included in the cluster. Each charm needs to include
400 its own context generator that describes the port mapping.
401 '''
402 if not relation_ids('cluster'):
403 return {}
404
405 cluster_hosts = {}
406 l_unit = local_unit().replace('/', '-')
407 if config('prefer-ipv6'):
408 addr = get_ipv6_addr()
409 else:
410 addr = unit_get('private-address')
411 cluster_hosts[l_unit] = get_address_in_network(config('os-internal-network'),
412 addr)
413
414 for rid in relation_ids('cluster'):
415 for unit in related_units(rid):
416 _unit = unit.replace('/', '-')
417 addr = relation_get('private-address', rid=rid, unit=unit)
418 cluster_hosts[_unit] = addr
419
420 ctxt = {
421 'units': cluster_hosts,
422 }
423
424 if config('prefer-ipv6'):
425 ctxt['local_host'] = 'ip6-localhost'
426 ctxt['haproxy_host'] = '::'
427 ctxt['stat_port'] = ':::8888'
428 else:
429 ctxt['local_host'] = '127.0.0.1'
430 ctxt['haproxy_host'] = '0.0.0.0'
431 ctxt['stat_port'] = ':8888'
432
433 if len(cluster_hosts.keys()) > 1:
434 # Enable haproxy when we have enough peers.
435 log('Ensuring haproxy enabled in /etc/default/haproxy.')
436 with open('/etc/default/haproxy', 'w') as out:
437 out.write('ENABLED=1\n')
438 return ctxt
439 log('HAProxy context is incomplete, this unit has no peers.')
440 return {}
441
442
443class ImageServiceContext(OSContextGenerator):
444 interfaces = ['image-service']
445
446 def __call__(self):
447 '''
448 Obtains the glance API server from the image-service relation. Useful
449 in nova and cinder (currently).
450 '''
451 log('Generating template context for image-service.')
452 rids = relation_ids('image-service')
453 if not rids:
454 return {}
455 for rid in rids:
456 for unit in related_units(rid):
457 api_server = relation_get('glance-api-server',
458 rid=rid, unit=unit)
459 if api_server:
460 return {'glance_api_servers': api_server}
461 log('ImageService context is incomplete. '
462 'Missing required relation data.')
463 return {}
464
465
466class ApacheSSLContext(OSContextGenerator):
467
468 """
469 Generates a context for an apache vhost configuration that configures
470 HTTPS reverse proxying for one or many endpoints. Generated context
471 looks something like::
472
473 {
474 'namespace': 'cinder',
475 'private_address': 'iscsi.mycinderhost.com',
476 'endpoints': [(8776, 8766), (8777, 8767)]
477 }
478
479 The endpoints list consists of tuples mapping external ports
480 to internal ports.
481 """
482 interfaces = ['https']
483
484 # charms should inherit this context and set external ports
485 # and service namespace accordingly.
486 external_ports = []
487 service_namespace = None
488
489 def enable_modules(self):
490 cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
491 check_call(cmd)
492
493 def configure_cert(self):
494 if not os.path.isdir('/etc/apache2/ssl'):
495 os.mkdir('/etc/apache2/ssl')
496 ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
497 if not os.path.isdir(ssl_dir):
498 os.mkdir(ssl_dir)
499 cert, key = get_cert()
500 with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out:
501 cert_out.write(b64decode(cert))
502 with open(os.path.join(ssl_dir, 'key'), 'w') as key_out:
503 key_out.write(b64decode(key))
504 ca_cert = get_ca_cert()
505 if ca_cert:
506 with open(CA_CERT_PATH, 'w') as ca_out:
507 ca_out.write(b64decode(ca_cert))
508 check_call(['update-ca-certificates'])
509
510 def __call__(self):
511 if isinstance(self.external_ports, basestring):
512 self.external_ports = [self.external_ports]
513 if (not self.external_ports or not https()):
514 return {}
515
516 self.configure_cert()
517 self.enable_modules()
518
519 ctxt = {
520 'namespace': self.service_namespace,
521 'private_address': unit_get('private-address'),
522 'endpoints': []
523 }
524 if is_clustered():
525 ctxt['private_address'] = config('vip')
526 for api_port in self.external_ports:
527 ext_port = determine_apache_port(api_port)
528 int_port = determine_api_port(api_port)
529 portmap = (int(ext_port), int(int_port))
530 ctxt['endpoints'].append(portmap)
531 return ctxt
532
533
534class NeutronContext(OSContextGenerator):
535 interfaces = []
536
537 @property
538 def plugin(self):
539 return None
540
541 @property
542 def network_manager(self):
543 return None
544
545 @property
546 def packages(self):
547 return neutron_plugin_attribute(
548 self.plugin, 'packages', self.network_manager)
549
550 @property
551 def neutron_security_groups(self):
552 return None
553
554 def _ensure_packages(self):
555 [ensure_packages(pkgs) for pkgs in self.packages]
556
557 def _save_flag_file(self):
558 if self.network_manager == 'quantum':
559 _file = '/etc/nova/quantum_plugin.conf'
560 else:
561 _file = '/etc/nova/neutron_plugin.conf'
562 with open(_file, 'wb') as out:
563 out.write(self.plugin + '\n')
564
565 def ovs_ctxt(self):
566 driver = neutron_plugin_attribute(self.plugin, 'driver',
567 self.network_manager)
568 config = neutron_plugin_attribute(self.plugin, 'config',
569 self.network_manager)
570 ovs_ctxt = {
571 'core_plugin': driver,
572 'neutron_plugin': 'ovs',
573 'neutron_security_groups': self.neutron_security_groups,
574 'local_ip': unit_private_ip(),
575 'config': config
576 }
577
578 return ovs_ctxt
579
580 def nvp_ctxt(self):
581 driver = neutron_plugin_attribute(self.plugin, 'driver',
582 self.network_manager)
583 config = neutron_plugin_attribute(self.plugin, 'config',
584 self.network_manager)
585 nvp_ctxt = {
586 'core_plugin': driver,
587 'neutron_plugin': 'nvp',
588 'neutron_security_groups': self.neutron_security_groups,
589 'local_ip': unit_private_ip(),
590 'config': config
591 }
592
593 return nvp_ctxt
594
595 def n1kv_ctxt(self):
596 driver = neutron_plugin_attribute(self.plugin, 'driver',
597 self.network_manager)
598 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
599 self.network_manager)
600 n1kv_ctxt = {
601 'core_plugin': driver,
602 'neutron_plugin': 'n1kv',
603 'neutron_security_groups': self.neutron_security_groups,
604 'local_ip': unit_private_ip(),
605 'config': n1kv_config,
606 'vsm_ip': config('n1kv-vsm-ip'),
607 'vsm_username': config('n1kv-vsm-username'),
608 'vsm_password': config('n1kv-vsm-password'),
609 'restrict_policy_profiles': config(
610 'n1kv_restrict_policy_profiles'),
611 }
612
613 return n1kv_ctxt
614
615 def neutron_ctxt(self):
616 if https():
617 proto = 'https'
618 else:
619 proto = 'http'
620 if is_clustered():
621 host = config('vip')
622 else:
623 host = unit_get('private-address')
624 url = '%s://%s:%s' % (proto, host, '9696')
625 ctxt = {
626 'network_manager': self.network_manager,
627 'neutron_url': url,
628 }
629 return ctxt
630
631 def __call__(self):
632 self._ensure_packages()
633
634 if self.network_manager not in ['quantum', 'neutron']:
635 return {}
636
637 if not self.plugin:
638 return {}
639
640 ctxt = self.neutron_ctxt()
641
642 if self.plugin == 'ovs':
643 ctxt.update(self.ovs_ctxt())
644 elif self.plugin in ['nvp', 'nsx']:
645 ctxt.update(self.nvp_ctxt())
646 elif self.plugin == 'n1kv':
647 ctxt.update(self.n1kv_ctxt())
648
649 alchemy_flags = config('neutron-alchemy-flags')
650 if alchemy_flags:
651 flags = config_flags_parser(alchemy_flags)
652 ctxt['neutron_alchemy_flags'] = flags
653
654 self._save_flag_file()
655 return ctxt
656
657
658class OSConfigFlagContext(OSContextGenerator):
659
660 """
661 Responsible for adding user-defined config-flags in charm config to a
662 template context.
663
664 NOTE: the value of config-flags may be a comma-separated list of
665 key=value pairs and some Openstack config files support
666 comma-separated lists as values.
667 """
668
669 def __call__(self):
670 config_flags = config('config-flags')
671 if not config_flags:
672 return {}
673
674 flags = config_flags_parser(config_flags)
675 return {'user_config_flags': flags}
676
677
678class SubordinateConfigContext(OSContextGenerator):
679
680 """
681 Responsible for inspecting relations to subordinates that
682 may be exporting required config via a json blob.
683
684 The subordinate interface allows subordinates to export their
685 configuration requirements to the principal for multiple config
686 files and multiple services. I.e., a subordinate that has interfaces
687 to both glance and nova may export the following yaml blob as json::
688
689 glance:
690 /etc/glance/glance-api.conf:
691 sections:
692 DEFAULT:
693 - [key1, value1]
694 /etc/glance/glance-registry.conf:
695 MYSECTION:
696 - [key2, value2]
697 nova:
698 /etc/nova/nova.conf:
699 sections:
700 DEFAULT:
701 - [key3, value3]
702
703
704 It is then up to the principal charms to subscribe this context to
705 the service+config file it is interested in. Configuration data will
706 be available in the template context, in glance's case, as::
707
708 ctxt = {
709 ... other context ...
710 'subordinate_config': {
711 'DEFAULT': {
712 'key1': 'value1',
713 },
714 'MYSECTION': {
715 'key2': 'value2',
716 },
717 }
718 }
719
720 """
721
722 def __init__(self, service, config_file, interface):
723 """
724 :param service : Service name key to query in any subordinate
725 data found
726 :param config_file : Service's config file to query sections
727 :param interface : Subordinate interface to inspect
728 """
729 self.service = service
730 self.config_file = config_file
731 self.interface = interface
732
733 def __call__(self):
734 ctxt = {'sections': {}}
735 for rid in relation_ids(self.interface):
736 for unit in related_units(rid):
737 sub_config = relation_get('subordinate_configuration',
738 rid=rid, unit=unit)
739 if sub_config and sub_config != '':
740 try:
741 sub_config = json.loads(sub_config)
742 except:
743 log('Could not parse JSON from subordinate_config '
744 'setting from %s' % rid, level=ERROR)
745 continue
746
747 if self.service not in sub_config:
748 log('Found subordinate_config on %s but it contained '
749 'nothing for %s service' % (rid, self.service))
750 continue
751
752 sub_config = sub_config[self.service]
753 if self.config_file not in sub_config:
754 log('Found subordinate_config on %s but it contained '
755 'nothing for %s' % (rid, self.config_file))
756 continue
757
758 sub_config = sub_config[self.config_file]
759 for k, v in sub_config.iteritems():
760 if k == 'sections':
761 for section, config_dict in v.iteritems():
762 log("adding section '%s'" % (section))
763 ctxt[k][section] = config_dict
764 else:
765 ctxt[k] = v
766
767 log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
768
769 return ctxt
770
771
772class LogLevelContext(OSContextGenerator):
773
774 def __call__(self):
775 ctxt = {}
776 ctxt['debug'] = \
777 False if config('debug') is None else config('debug')
778 ctxt['verbose'] = \
779 False if config('verbose') is None else config('verbose')
780 return ctxt
781
782
783class SyslogContext(OSContextGenerator):
784
785 def __call__(self):
786 ctxt = {
787 'use_syslog': config('use-syslog')
788 }
789 return ctxt
0790
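The SubordinateConfigContext docstring above describes how a subordinate's `subordinate_configuration` JSON blob is reduced to a per-section context. A standalone sketch of that reduction (hypothetical helper name and sample data, independent of charmhelpers; it also folds the `[key, value]` pairs into dicts, matching the resulting-context example in the docstring):

```python
import json

def subordinate_sections(blob, service, config_file):
    """Reduce a subordinate_configuration JSON blob to the
    {'sections': {...}} shape described in the docstring above."""
    sub_config = json.loads(blob)
    sections = {}
    file_cfg = sub_config.get(service, {}).get(config_file, {})
    for section, pairs in file_cfg.get('sections', {}).items():
        # each entry is a [key, value] pair, per the yaml example
        sections[section] = dict(pairs)
    return {'sections': sections}

blob = json.dumps({
    'glance': {
        '/etc/glance/glance-api.conf': {
            'sections': {'DEFAULT': [['key1', 'value1']]}
        }
    }
})
print(subordinate_sections(blob, 'glance', '/etc/glance/glance-api.conf'))
```

Unlike the real `__call__`, this sketch skips relation iteration and the error logging; it only shows the data transformation.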
=== added file 'hooks/charmhelpers/contrib/openstack/ip.py'
--- hooks/charmhelpers/contrib/openstack/ip.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,79 @@
1from charmhelpers.core.hookenv import (
2 config,
3 unit_get,
4)
5
6from charmhelpers.contrib.network.ip import (
7 get_address_in_network,
8 is_address_in_network,
9 is_ipv6,
10 get_ipv6_addr,
11)
12
13from charmhelpers.contrib.hahelpers.cluster import is_clustered
14
15PUBLIC = 'public'
16INTERNAL = 'int'
17ADMIN = 'admin'
18
19_address_map = {
20 PUBLIC: {
21 'config': 'os-public-network',
22 'fallback': 'public-address'
23 },
24 INTERNAL: {
25 'config': 'os-internal-network',
26 'fallback': 'private-address'
27 },
28 ADMIN: {
29 'config': 'os-admin-network',
30 'fallback': 'private-address'
31 }
32}
33
34
35def canonical_url(configs, endpoint_type=PUBLIC):
36 '''
37 Returns the correct HTTP URL to this host given the state of HTTPS
38 configuration, hacluster and charm configuration.
39
40 :configs OSTemplateRenderer: A config templating object to inspect for
41 a complete https context.
42 :endpoint_type str: The endpoint type to resolve.
43
44 :returns str: Base URL for services on the current service unit.
45 '''
46 scheme = 'http'
47 if 'https' in configs.complete_contexts():
48 scheme = 'https'
49 address = resolve_address(endpoint_type)
50 if is_ipv6(address):
51 address = "[{}]".format(address)
52 return '%s://%s' % (scheme, address)
53
54
55def resolve_address(endpoint_type=PUBLIC):
56 resolved_address = None
57 if is_clustered():
58 if config(_address_map[endpoint_type]['config']) is None:
59 # Assume vip is simple and pass back directly
60 resolved_address = config('vip')
61 else:
62 for vip in config('vip').split():
63 if is_address_in_network(
64 config(_address_map[endpoint_type]['config']),
65 vip):
66 resolved_address = vip
67 else:
68 if config('prefer-ipv6'):
69 fallback_addr = get_ipv6_addr()
70 else:
71 fallback_addr = unit_get(_address_map[endpoint_type]['fallback'])
72 resolved_address = get_address_in_network(
73 config(_address_map[endpoint_type]['config']), fallback_addr)
74
75 if resolved_address is None:
76 raise ValueError('Unable to resolve a suitable IP address'
77 ' based on charm state and configuration')
78 else:
79 return resolved_address
080
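`canonical_url()` above picks a scheme from the https context and brackets IPv6 literals before building the base URL. A minimal standalone sketch of that formatting step (a naive `':' in address` test stands in here for charmhelpers' `is_ipv6()`, and the address is passed in rather than resolved):

```python
def canonical_url(address, https=False):
    """Build a base URL for a unit address, bracketing IPv6
    literals the way canonical_url() above does via is_ipv6()."""
    scheme = 'https' if https else 'http'
    if ':' in address:  # naive IPv6 check; the real code uses is_ipv6()
        address = '[{}]'.format(address)
    return '{}://{}'.format(scheme, address)

print(canonical_url('10.0.0.1'))             # http://10.0.0.1
print(canonical_url('fe80::1', https=True))  # https://[fe80::1]
```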
=== added file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,201 @@
1# Various utilities for dealing with Neutron and the renaming from Quantum.
2
3from subprocess import check_output
4
5from charmhelpers.core.hookenv import (
6 config,
7 log,
8 ERROR,
9)
10
11from charmhelpers.contrib.openstack.utils import os_release
12
13
14def headers_package():
15 """Returns the linux-headers package matching the running kernel,
16 needed for building the DKMS package"""
17 kver = check_output(['uname', '-r']).strip()
18 return 'linux-headers-%s' % kver
19
20QUANTUM_CONF_DIR = '/etc/quantum'
21
22
23def kernel_version():
24 """ Retrieve the current kernel version as a (major, minor) tuple, e.g. (3, 13) """
25 kver = check_output(['uname', '-r']).strip()
26 kver = kver.split('.')
27 return (int(kver[0]), int(kver[1]))
28
29
30def determine_dkms_package():
31 """ Determine which DKMS package should be used based on kernel version """
32 # NOTE: 3.13 kernels have support for GRE and VXLAN native
33 if kernel_version() >= (3, 13):
34 return []
35 else:
36 return ['openvswitch-datapath-dkms']
37
38
39# legacy
40
41
42def quantum_plugins():
43 from charmhelpers.contrib.openstack import context
44 return {
45 'ovs': {
46 'config': '/etc/quantum/plugins/openvswitch/'
47 'ovs_quantum_plugin.ini',
48 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.'
49 'OVSQuantumPluginV2',
50 'contexts': [
51 context.SharedDBContext(user=config('neutron-database-user'),
52 database=config('neutron-database'),
53 relation_prefix='neutron',
54 ssl_dir=QUANTUM_CONF_DIR)],
55 'services': ['quantum-plugin-openvswitch-agent'],
56 'packages': [[headers_package()] + determine_dkms_package(),
57 ['quantum-plugin-openvswitch-agent']],
58 'server_packages': ['quantum-server',
59 'quantum-plugin-openvswitch'],
60 'server_services': ['quantum-server']
61 },
62 'nvp': {
63 'config': '/etc/quantum/plugins/nicira/nvp.ini',
64 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.'
65 'QuantumPlugin.NvpPluginV2',
66 'contexts': [
67 context.SharedDBContext(user=config('neutron-database-user'),
68 database=config('neutron-database'),
69 relation_prefix='neutron',
70 ssl_dir=QUANTUM_CONF_DIR)],
71 'services': [],
72 'packages': [],
73 'server_packages': ['quantum-server',
74 'quantum-plugin-nicira'],
75 'server_services': ['quantum-server']
76 }
77 }
78
79NEUTRON_CONF_DIR = '/etc/neutron'
80
81
82def neutron_plugins():
83 from charmhelpers.contrib.openstack import context
84 release = os_release('nova-common')
85 plugins = {
86 'ovs': {
87 'config': '/etc/neutron/plugins/openvswitch/'
88 'ovs_neutron_plugin.ini',
89 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.'
90 'OVSNeutronPluginV2',
91 'contexts': [
92 context.SharedDBContext(user=config('neutron-database-user'),
93 database=config('neutron-database'),
94 relation_prefix='neutron',
95 ssl_dir=NEUTRON_CONF_DIR)],
96 'services': ['neutron-plugin-openvswitch-agent'],
97 'packages': [[headers_package()] + determine_dkms_package(),
98 ['neutron-plugin-openvswitch-agent']],
99 'server_packages': ['neutron-server',
100 'neutron-plugin-openvswitch'],
101 'server_services': ['neutron-server']
102 },
103 'nvp': {
104 'config': '/etc/neutron/plugins/nicira/nvp.ini',
105 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.'
106 'NeutronPlugin.NvpPluginV2',
107 'contexts': [
108 context.SharedDBContext(user=config('neutron-database-user'),
109 database=config('neutron-database'),
110 relation_prefix='neutron',
111 ssl_dir=NEUTRON_CONF_DIR)],
112 'services': [],
113 'packages': [],
114 'server_packages': ['neutron-server',
115 'neutron-plugin-nicira'],
116 'server_services': ['neutron-server']
117 },
118 'nsx': {
119 'config': '/etc/neutron/plugins/vmware/nsx.ini',
120 'driver': 'vmware',
121 'contexts': [
122 context.SharedDBContext(user=config('neutron-database-user'),
123 database=config('neutron-database'),
124 relation_prefix='neutron',
125 ssl_dir=NEUTRON_CONF_DIR)],
126 'services': [],
127 'packages': [],
128 'server_packages': ['neutron-server',
129 'neutron-plugin-vmware'],
130 'server_services': ['neutron-server']
131 },
132 'n1kv': {
133 'config': '/etc/neutron/plugins/cisco/cisco_plugins.ini',
134 'driver': 'neutron.plugins.cisco.network_plugin.PluginV2',
135 'contexts': [
136 context.SharedDBContext(user=config('neutron-database-user'),
137 database=config('neutron-database'),
138 relation_prefix='neutron',
139 ssl_dir=NEUTRON_CONF_DIR)],
140 'services': [],
141 'packages': [['neutron-plugin-cisco']],
142 'server_packages': ['neutron-server',
143 'neutron-plugin-cisco'],
144 'server_services': ['neutron-server']
145 }
146 }
147 if release >= 'icehouse':
148 # NOTE: patch in ml2 plugin for icehouse onwards
149 plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini'
150 plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin'
151 plugins['ovs']['server_packages'] = ['neutron-server',
152 'neutron-plugin-ml2']
153 # NOTE: patch in vmware renames nvp->nsx for icehouse onwards
154 plugins['nvp'] = plugins['nsx']
155 return plugins
156
157
158def neutron_plugin_attribute(plugin, attr, net_manager=None):
159 manager = net_manager or network_manager()
160 if manager == 'quantum':
161 plugins = quantum_plugins()
162 elif manager == 'neutron':
163 plugins = neutron_plugins()
164 else:
165 log('Error: Network manager does not support plugins.')
166 raise Exception
167
168 try:
169 _plugin = plugins[plugin]
170 except KeyError:
171 log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
172 raise Exception
173
174 try:
175 return _plugin[attr]
176 except KeyError:
177 return None
178
179
180def network_manager():
181 '''
182 Deals with the renaming of Quantum to Neutron in H and any situations
183 that require compatibility (eg, deploying H with network-manager=quantum,
184 upgrading from G).
185 '''
186 release = os_release('nova-common')
187 manager = config('network-manager').lower()
188
189 if manager not in ['quantum', 'neutron']:
190 return manager
191
192 if release in ['essex']:
193 # E does not support neutron
194 log('Neutron networking not supported in Essex.', level=ERROR)
195 raise Exception
196 elif release in ['folsom', 'grizzly']:
197 # neutron is named quantum in F and G
198 return 'quantum'
199 else:
200 # ensure accurate naming for all releases post-H
201 return 'neutron'
0202
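The `kernel_version()`/`determine_dkms_package()` pair above gates the DKMS package on a (major, minor) tuple comparison. Sketched standalone, with the `uname -r` output passed in as an argument rather than shelled out:

```python
def kernel_version(kver):
    """Parse `uname -r` output into a (major, minor) tuple."""
    parts = kver.strip().split('.')
    return (int(parts[0]), int(parts[1]))

def determine_dkms_package(kver):
    # 3.13+ kernels have native GRE and VXLAN support, so no DKMS build
    if kernel_version(kver) >= (3, 13):
        return []
    return ['openvswitch-datapath-dkms']

print(determine_dkms_package('3.2.0-60-generic'))   # ['openvswitch-datapath-dkms']
print(determine_dkms_package('3.13.0-35-generic'))  # []
```

Tuple comparison does the right thing here: (3, 2) < (3, 13), which a naive string compare of "3.2" vs "3.13" would get wrong.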
=== added directory 'hooks/charmhelpers/contrib/openstack/templates'
=== added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py'
--- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,2 @@
1# dummy __init__.py to fool syncer into thinking this is a syncable python
2# module
03
=== added file 'hooks/charmhelpers/contrib/openstack/templating.py'
--- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,279 @@
1import os
2
3from charmhelpers.fetch import apt_install
4
5from charmhelpers.core.hookenv import (
6 log,
7 ERROR,
8 INFO
9)
10
11from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
12
13try:
14 from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
15except ImportError:
16 # python-jinja2 may not be installed yet, or we're running unittests.
17 FileSystemLoader = ChoiceLoader = Environment = exceptions = None
18
19
20class OSConfigException(Exception):
21 pass
22
23
24def get_loader(templates_dir, os_release):
25 """
26 Create a jinja2.ChoiceLoader containing template dirs up to
27 and including os_release. If a release's template directory
28 is missing under templates_dir, it will be omitted from the loader.
29 templates_dir is added to the bottom of the search list as a base
30 loading dir.
31
32 A charm may also ship a templates dir with this module
33 and it will be appended to the bottom of the search list, eg::
34
35 hooks/charmhelpers/contrib/openstack/templates
36
37 :param templates_dir (str): Base template directory containing release
38 sub-directories.
39 :param os_release (str): OpenStack release codename to construct template
40 loader.
41 :returns: jinja2.ChoiceLoader constructed with a list of
42 jinja2.FilesystemLoaders, ordered in descending
43 order by OpenStack release.
44 """
45 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
46 for rel in OPENSTACK_CODENAMES.itervalues()]
47
48 if not os.path.isdir(templates_dir):
49 log('Templates directory not found @ %s.' % templates_dir,
50 level=ERROR)
51 raise OSConfigException
52
53 # the bottom contains templates_dir and possibly a common templates dir
54 # shipped with the helper.
55 loaders = [FileSystemLoader(templates_dir)]
56 helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
57 if os.path.isdir(helper_templates):
58 loaders.append(FileSystemLoader(helper_templates))
59
60 for rel, tmpl_dir in tmpl_dirs:
61 if os.path.isdir(tmpl_dir):
62 loaders.insert(0, FileSystemLoader(tmpl_dir))
63 if rel == os_release:
64 break
65 log('Creating choice loader with dirs: %s' %
66 [l.searchpath for l in loaders], level=INFO)
67 return ChoiceLoader(loaders)
68
69
70class OSConfigTemplate(object):
71 """
72 Associates a config file template with a list of context generators.
73 Responsible for constructing a template context based on those generators.
74 """
75 def __init__(self, config_file, contexts):
76 self.config_file = config_file
77
78 if hasattr(contexts, '__call__'):
79 self.contexts = [contexts]
80 else:
81 self.contexts = contexts
82
83 self._complete_contexts = []
84
85 def context(self):
86 ctxt = {}
87 for context in self.contexts:
88 _ctxt = context()
89 if _ctxt:
90 ctxt.update(_ctxt)
91 # track interfaces for every complete context.
92 [self._complete_contexts.append(interface)
93 for interface in context.interfaces
94 if interface not in self._complete_contexts]
95 return ctxt
96
97 def complete_contexts(self):
98 '''
99 Return a list of interfaces that have satisfied contexts.
100 '''
101 if self._complete_contexts:
102 return self._complete_contexts
103 self.context()
104 return self._complete_contexts
105
106
107class OSConfigRenderer(object):
108 """
109 This class provides a common templating system to be used by OpenStack
110 charms. It is intended to help charms share common code and templates,
111 and ease the burden of managing config templates across multiple OpenStack
112 releases.
113
114 Basic usage::
115
116 # import some common context generators from charmhelpers
117 from charmhelpers.contrib.openstack import context
118
119 # Create a renderer object for a specific OS release.
120 configs = OSConfigRenderer(templates_dir='/tmp/templates',
121 openstack_release='folsom')
122 # register some config files with context generators.
123 configs.register(config_file='/etc/nova/nova.conf',
124 contexts=[context.SharedDBContext(),
125 context.AMQPContext()])
126 configs.register(config_file='/etc/nova/api-paste.ini',
127 contexts=[context.IdentityServiceContext()])
128 configs.register(config_file='/etc/haproxy/haproxy.conf',
129 contexts=[context.HAProxyContext()])
130 # write out a single config
131 configs.write('/etc/nova/nova.conf')
132 # write out all registered configs
133 configs.write_all()
134
135 **OpenStack Releases and template loading**
136
137 When the object is instantiated, it is associated with a specific OS
138 release. This dictates how the template loader will be constructed.
139
140 The constructed loader attempts to load the template from several places
141 in the following order:
142 - from the most recent OS release-specific template dir (if one exists)
143 - the base templates_dir
144 - a template directory shipped in the charm with this helper file.
145
146 For the example above, '/tmp/templates' contains the following structure::
147
148 /tmp/templates/nova.conf
149 /tmp/templates/api-paste.ini
150 /tmp/templates/grizzly/api-paste.ini
151 /tmp/templates/havana/api-paste.ini
152
153 Since it was registered with the grizzly release, it first searches
154 the grizzly directory for nova.conf, then the templates dir.
155
156 When writing api-paste.ini, it will find the template in the grizzly
157 directory.
158
159 If the object were created with folsom, it would fall back to the
160 base templates dir for its api-paste.ini template.
161
162 This system should help manage changes in config files through
163 openstack releases, allowing charms to fall back to the most recently
164 updated config template for a given release
165
166 The haproxy.conf, since it is not shipped in the templates dir, will
167 be loaded from the module directory's template directory, eg
168 $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
169 us to ship common templates (haproxy, apache) with the helpers.
170
171 **Context generators**
172
173 Context generators are used to generate template contexts during hook
174 execution. Doing so may require inspecting service relations, charm
175 config, etc. When registered, a config file is associated with a list
176 of generators. When a template is rendered and written, all context
177 generators are called in a chain to generate the context dictionary
178 passed to the jinja2 template. See context.py for more info.
179 """
180 def __init__(self, templates_dir, openstack_release):
181 if not os.path.isdir(templates_dir):
182 log('Could not locate templates dir %s' % templates_dir,
183 level=ERROR)
184 raise OSConfigException
185
186 self.templates_dir = templates_dir
187 self.openstack_release = openstack_release
188 self.templates = {}
189 self._tmpl_env = None
190
191 if None in [Environment, ChoiceLoader, FileSystemLoader]:
192 # if this code is running, the object is created pre-install hook.
193 # jinja2 shouldn't get touched until the module is reloaded on next
194 # hook execution, with proper jinja2 bits successfully imported.
195 apt_install('python-jinja2')
196
197 def register(self, config_file, contexts):
198 """
199 Register a config file with a list of context generators to be called
200 during rendering.
201 """
202 self.templates[config_file] = OSConfigTemplate(config_file=config_file,
203 contexts=contexts)
204 log('Registered config file: %s' % config_file, level=INFO)
205
206 def _get_tmpl_env(self):
207 if not self._tmpl_env:
208 loader = get_loader(self.templates_dir, self.openstack_release)
209 self._tmpl_env = Environment(loader=loader)
210
211 def _get_template(self, template):
212 self._get_tmpl_env()
213 template = self._tmpl_env.get_template(template)
214 log('Loaded template from %s' % template.filename, level=INFO)
215 return template
216
217 def render(self, config_file):
218 if config_file not in self.templates:
219 log('Config not registered: %s' % config_file, level=ERROR)
220 raise OSConfigException
221 ctxt = self.templates[config_file].context()
222
223 _tmpl = os.path.basename(config_file)
224 try:
225 template = self._get_template(_tmpl)
226 except exceptions.TemplateNotFound:
227 # if no template is found with basename, try looking for it
228 # using a munged full path, eg:
229 # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
230 _tmpl = '_'.join(config_file.split('/')[1:])
231 try:
232 template = self._get_template(_tmpl)
233 except exceptions.TemplateNotFound as e:
234 log('Could not load template from %s by %s or %s.' %
235 (self.templates_dir, os.path.basename(config_file), _tmpl),
236 level=ERROR)
237 raise e
238
239 log('Rendering from template: %s' % _tmpl, level=INFO)
240 return template.render(ctxt)
241
242 def write(self, config_file):
243 """
244 Write a single config file, raises if config file is not registered.
245 """
246 if config_file not in self.templates:
247 log('Config not registered: %s' % config_file, level=ERROR)
248 raise OSConfigException
249
250 _out = self.render(config_file)
251
252 with open(config_file, 'wb') as out:
253 out.write(_out)
254
255 log('Wrote template %s.' % config_file, level=INFO)
256
257 def write_all(self):
258 """
259 Write out all registered config files.
260 """
261 [self.write(k) for k in self.templates.iterkeys()]
262
263 def set_release(self, openstack_release):
264 """
265 Resets the template environment and generates a new template loader
266 based on the new openstack release.
267 """
268 self._tmpl_env = None
269 self.openstack_release = openstack_release
270 self._get_tmpl_env()
271
272 def complete_contexts(self):
273 '''
274 Returns a list of context interfaces that yield a complete context.
275 '''
276 interfaces = []
277 [interfaces.extend(i.complete_contexts())
278 for i in self.templates.itervalues()]
279 return interfaces
0280
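`get_loader()` above builds its search path newest-release-first, with the base `templates_dir` last, stopping once the renderer's release is reached. That ordering can be sketched without jinja2 (abbreviated codename list; the real code also skips directories that don't exist and appends the helper's own templates dir):

```python
import os

# abbreviated subset of OPENSTACK_CODENAMES, oldest to newest
CODENAMES = ['essex', 'folsom', 'grizzly', 'havana', 'icehouse']

def loader_search_path(templates_dir, os_release):
    """Mirror get_loader()'s ordering: each release dir up to and
    including os_release is pushed to the front, base dir stays last."""
    path = [templates_dir]
    for rel in CODENAMES:
        path.insert(0, os.path.join(templates_dir, rel))
        if rel == os_release:
            break
    return path

print(loader_search_path('/tmp/templates', 'grizzly'))
```

For the grizzly example in the OSConfigRenderer docstring this yields the grizzly dir first, then folsom, essex, and finally the base templates dir, which is why a template missing from the release dir falls back to the base copy.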
=== added file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,459 @@
1#!/usr/bin/python
2
3# Common python helper functions used for OpenStack charms.
4from collections import OrderedDict
5
6import subprocess
7import os
8import socket
9import sys
10
11from charmhelpers.core.hookenv import (
12 config,
13 log as juju_log,
14 charm_dir,
15 ERROR,
16 INFO
17)
18
19from charmhelpers.contrib.storage.linux.lvm import (
20 deactivate_lvm_volume_group,
21 is_lvm_physical_volume,
22 remove_lvm_physical_volume,
23)
24
25from charmhelpers.core.host import lsb_release, mounts, umount
26from charmhelpers.fetch import apt_install, apt_cache
27from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
28from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
29
30CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
31CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
32
33DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
34 'restricted main multiverse universe')
35
36
37UBUNTU_OPENSTACK_RELEASE = OrderedDict([
38 ('oneiric', 'diablo'),
39 ('precise', 'essex'),
40 ('quantal', 'folsom'),
41 ('raring', 'grizzly'),
42 ('saucy', 'havana'),
43 ('trusty', 'icehouse'),
44 ('utopic', 'juno'),
45])
46
47
48OPENSTACK_CODENAMES = OrderedDict([
49 ('2011.2', 'diablo'),
50 ('2012.1', 'essex'),
51 ('2012.2', 'folsom'),
52 ('2013.1', 'grizzly'),
53 ('2013.2', 'havana'),
54 ('2014.1', 'icehouse'),
55 ('2014.2', 'juno'),
56])
57
58# The ugly duckling
59SWIFT_CODENAMES = OrderedDict([
60 ('1.4.3', 'diablo'),
61 ('1.4.8', 'essex'),
62 ('1.7.4', 'folsom'),
63 ('1.8.0', 'grizzly'),
64 ('1.7.7', 'grizzly'),
65 ('1.7.6', 'grizzly'),
66 ('1.10.0', 'havana'),
67 ('1.9.1', 'havana'),
68 ('1.9.0', 'havana'),
69 ('1.13.1', 'icehouse'),
70 ('1.13.0', 'icehouse'),
71 ('1.12.0', 'icehouse'),
72 ('1.11.0', 'icehouse'),
73 ('2.0.0', 'juno'),
74])
75
76DEFAULT_LOOPBACK_SIZE = '5G'
77
78
79def error_out(msg):
80 juju_log("FATAL ERROR: %s" % msg, level='ERROR')
81 sys.exit(1)
82
83
84def get_os_codename_install_source(src):
85 '''Derive OpenStack release codename from a given installation source.'''
86 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
87 rel = ''
88 if src is None:
89 return rel
90 if src in ['distro', 'distro-proposed']:
91 try:
92 rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
93 except KeyError:
94 e = 'Could not derive openstack release for '\
95 'this Ubuntu release: %s' % ubuntu_rel
96 error_out(e)
97 return rel
98
99 if src.startswith('cloud:'):
100 ca_rel = src.split(':')[1]
101 ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
102 return ca_rel
103
104 # Best guess match based on deb string provided
105 if src.startswith('deb') or src.startswith('ppa'):
106 for k, v in OPENSTACK_CODENAMES.iteritems():
107 if v in src:
108 return v
109
110
111def get_os_version_install_source(src):
112 codename = get_os_codename_install_source(src)
113 return get_os_version_codename(codename)
114
115
116def get_os_codename_version(vers):
117 '''Determine OpenStack codename from version number.'''
118 try:
119 return OPENSTACK_CODENAMES[vers]
120 except KeyError:
121 e = 'Could not determine OpenStack codename for version %s' % vers
122 error_out(e)
123
124
125def get_os_version_codename(codename):
126 '''Determine OpenStack version number from codename.'''
127 for k, v in OPENSTACK_CODENAMES.iteritems():
128 if v == codename:
129 return k
130 e = 'Could not derive OpenStack version for '\
131 'codename: %s' % codename
132 error_out(e)
133
134
135def get_os_codename_package(package, fatal=True):
136 '''Derive OpenStack release codename from an installed package.'''
137 import apt_pkg as apt
138
139 cache = apt_cache()
140
141 try:
142 pkg = cache[package]
143    except KeyError:
144 if not fatal:
145 return None
146 # the package is unknown to the current apt cache.
147 e = 'Could not determine version of package with no installation '\
148 'candidate: %s' % package
149 error_out(e)
150
151 if not pkg.current_ver:
152 if not fatal:
153 return None
154 # package is known, but no version is currently installed.
155 e = 'Could not determine version of uninstalled package: %s' % package
156 error_out(e)
157
158 vers = apt.upstream_version(pkg.current_ver.ver_str)
159
160 try:
161 if 'swift' in pkg.name:
162 swift_vers = vers[:5]
163 if swift_vers not in SWIFT_CODENAMES:
164 # Deal with 1.10.0 upward
165 swift_vers = vers[:6]
166 return SWIFT_CODENAMES[swift_vers]
167 else:
168 vers = vers[:6]
169 return OPENSTACK_CODENAMES[vers]
170 except KeyError:
171 e = 'Could not determine OpenStack codename for version %s' % vers
172 error_out(e)
173
174
175def get_os_version_package(pkg, fatal=True):
176 '''Derive OpenStack version number from an installed package.'''
177 codename = get_os_codename_package(pkg, fatal=fatal)
178
179 if not codename:
180 return None
181
182 if 'swift' in pkg:
183 vers_map = SWIFT_CODENAMES
184 else:
185 vers_map = OPENSTACK_CODENAMES
186
187 for version, cname in vers_map.iteritems():
188 if cname == codename:
189 return version
190 # e = "Could not determine OpenStack version for package: %s" % pkg
191 # error_out(e)
192
193
194os_rel = None
195
196
197def os_release(package, base='essex'):
198 '''
199 Returns OpenStack release codename from a cached global.
200    If the codename cannot be determined from either an installed package or
201 the installation source, the earliest release supported by the charm should
202 be returned.
203 '''
204 global os_rel
205 if os_rel:
206 return os_rel
207 os_rel = (get_os_codename_package(package, fatal=False) or
208 get_os_codename_install_source(config('openstack-origin')) or
209 base)
210 return os_rel
211
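The caching and fallback behaviour of `os_release()` can be sketched in isolation; `lookups` below stands in for the real probes (installed package, then the configured `openstack-origin`), which are not reproduced here:

```python
_os_rel = None  # module-level cache, mirroring `os_rel` in utils.py


def os_release_sketch(lookups, base='essex'):
    """Sketch of os_release(): cache the first truthy lookup result,
    falling back to the earliest release the charm supports."""
    global _os_rel
    if _os_rel:
        return _os_rel
    _os_rel = next((rel for rel in lookups if rel), base)
    return _os_rel
```

Once a release has been derived, later calls return the cached value even if the probes would now answer differently.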
212
213def import_key(keyid):
214 cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
215 "--recv-keys %s" % keyid
216 try:
217 subprocess.check_call(cmd.split(' '))
218 except subprocess.CalledProcessError:
219 error_out("Error importing repo key %s" % keyid)
220
221
222def configure_installation_source(rel):
223 '''Configure apt installation source.'''
224 if rel == 'distro':
225 return
226 elif rel == 'distro-proposed':
227 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
228 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
229 f.write(DISTRO_PROPOSED % ubuntu_rel)
230 elif rel[:4] == "ppa:":
231 src = rel
232 subprocess.check_call(["add-apt-repository", "-y", src])
233 elif rel[:3] == "deb":
234 l = len(rel.split('|'))
235 if l == 2:
236 src, key = rel.split('|')
237 juju_log("Importing PPA key from keyserver for %s" % src)
238 import_key(key)
239 elif l == 1:
240 src = rel
241 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
242 f.write(src)
243 elif rel[:6] == 'cloud:':
244 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
245 rel = rel.split(':')[1]
246 u_rel = rel.split('-')[0]
247 ca_rel = rel.split('-')[1]
248
249 if u_rel != ubuntu_rel:
250 e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
251 'version (%s)' % (ca_rel, ubuntu_rel)
252 error_out(e)
253
254 if 'staging' in ca_rel:
255 # staging is just a regular PPA.
256 os_rel = ca_rel.split('/')[0]
257 ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
258 cmd = 'add-apt-repository -y %s' % ppa
259 subprocess.check_call(cmd.split(' '))
260 return
261
262 # map charm config options to actual archive pockets.
263 pockets = {
264 'folsom': 'precise-updates/folsom',
265 'folsom/updates': 'precise-updates/folsom',
266 'folsom/proposed': 'precise-proposed/folsom',
267 'grizzly': 'precise-updates/grizzly',
268 'grizzly/updates': 'precise-updates/grizzly',
269 'grizzly/proposed': 'precise-proposed/grizzly',
270 'havana': 'precise-updates/havana',
271 'havana/updates': 'precise-updates/havana',
272 'havana/proposed': 'precise-proposed/havana',
273 'icehouse': 'precise-updates/icehouse',
274 'icehouse/updates': 'precise-updates/icehouse',
275 'icehouse/proposed': 'precise-proposed/icehouse',
276 'juno': 'trusty-updates/juno',
277 'juno/updates': 'trusty-updates/juno',
278 'juno/proposed': 'trusty-proposed/juno',
279 }
280
281 try:
282 pocket = pockets[ca_rel]
283 except KeyError:
284 e = 'Invalid Cloud Archive release specified: %s' % rel
285 error_out(e)
286
287 src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
288 apt_install('ubuntu-cloud-keyring', fatal=True)
289
290 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
291 f.write(src)
292 else:
293 error_out("Invalid openstack-release specified: %s" % rel)
294
295
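The pocket map in `configure_installation_source()` turns a charm-level release name into an apt source line. A minimal sketch with a subset of the map (the URL matches the `CLOUD_ARCHIVE_URL` constant defined elsewhere in utils.py, and the helper name is ours):

```python
CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"

# Subset of the pocket map from configure_installation_source().
POCKETS = {
    'havana': 'precise-updates/havana',
    'havana/proposed': 'precise-proposed/havana',
    'juno': 'trusty-updates/juno',
}


def cloud_archive_line(ca_rel):
    """Sketch: map a cloud-archive release name to an apt source line."""
    pocket = POCKETS[ca_rel]
    return "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
```

An unknown release raises KeyError, which the real helper converts into `error_out()`.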
296def save_script_rc(script_path="scripts/scriptrc", **env_vars):
297 """
298 Write an rc file in the charm-delivered directory containing
299 exported environment variables provided by env_vars. Any charm scripts run
300 outside the juju hook environment can source this scriptrc to obtain
301 updated config information necessary to perform health checks or
302 service changes.
303 """
304 juju_rc_path = "%s/%s" % (charm_dir(), script_path)
305 if not os.path.exists(os.path.dirname(juju_rc_path)):
306 os.mkdir(os.path.dirname(juju_rc_path))
307 with open(juju_rc_path, 'wb') as rc_script:
308 rc_script.write(
309 "#!/bin/bash\n")
310 [rc_script.write('export %s=%s\n' % (u, p))
311 for u, p in env_vars.iteritems() if u != "script_path"]
312
313
314def openstack_upgrade_available(package):
315 """
316 Determines if an OpenStack upgrade is available from installation
317 source, based on version of installed package.
318
319 :param package: str: Name of installed package.
320
321 :returns: bool: : Returns True if configured installation source offers
322 a newer version of package.
323
324 """
325
326 import apt_pkg as apt
327 src = config('openstack-origin')
328 cur_vers = get_os_version_package(package)
329 available_vers = get_os_version_install_source(src)
330 apt.init()
331 return apt.version_compare(available_vers, cur_vers) == 1
332
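`openstack_upgrade_available()` reports an upgrade only when the source's version compares strictly greater than the installed one. A sketch of that comparison without apt (the real helper defers to `apt_pkg.version_compare()`; for plain `YYYY.N` OpenStack version strings a numeric tuple comparison behaves the same, though it does not handle epochs or `~` suffixes):

```python
def upgrade_available(current, available):
    """Sketch of the comparison in openstack_upgrade_available()
    for plain dotted-numeric version strings."""
    def as_tuple(vers):
        return tuple(int(part) for part in vers.split('.'))
    return as_tuple(available) > as_tuple(current)
```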
333
334def ensure_block_device(block_device):
335 '''
336 Confirm block_device, create as loopback if necessary.
337
338 :param block_device: str: Full path of block device to ensure.
339
340 :returns: str: Full path of ensured block device.
341 '''
342 _none = ['None', 'none', None]
343 if (block_device in _none):
344 error_out('prepare_storage(): Missing required input: '
345 'block_device=%s.' % block_device, level=ERROR)
346
347 if block_device.startswith('/dev/'):
348 bdev = block_device
349 elif block_device.startswith('/'):
350 _bd = block_device.split('|')
351 if len(_bd) == 2:
352 bdev, size = _bd
353 else:
354 bdev = block_device
355 size = DEFAULT_LOOPBACK_SIZE
356 bdev = ensure_loopback_device(bdev, size)
357 else:
358 bdev = '/dev/%s' % block_device
359
360 if not is_block_device(bdev):
361 error_out('Failed to locate valid block device at %s' % bdev,
362 level=ERROR)
363
364 return bdev
365
366
367def clean_storage(block_device):
368 '''
369 Ensures a block device is clean. That is:
370 - unmounted
371 - any lvm volume groups are deactivated
372 - any lvm physical device signatures removed
373 - partition table wiped
374
375 :param block_device: str: Full path to block device to clean.
376 '''
377 for mp, d in mounts():
378 if d == block_device:
379 juju_log('clean_storage(): %s is mounted @ %s, unmounting.' %
380 (d, mp), level=INFO)
381 umount(mp, persist=True)
382
383 if is_lvm_physical_volume(block_device):
384 deactivate_lvm_volume_group(block_device)
385 remove_lvm_physical_volume(block_device)
386 else:
387 zap_disk(block_device)
388
389
390def is_ip(address):
391 """
392 Returns True if address is a valid IP address.
393 """
394 try:
395 # Test to see if already an IPv4 address
396 socket.inet_aton(address)
397 return True
398 except socket.error:
399 return False
400
401
402def ns_query(address):
403 try:
404 import dns.resolver
405 except ImportError:
406 apt_install('python-dnspython')
407 import dns.resolver
408
409 if isinstance(address, dns.name.Name):
410 rtype = 'PTR'
411 elif isinstance(address, basestring):
412 rtype = 'A'
413 else:
414 return None
415
416 answers = dns.resolver.query(address, rtype)
417 if answers:
418 return str(answers[0])
419 return None
420
421
422def get_host_ip(hostname):
423 """
424 Resolves the IP for a given hostname, or returns
425 the input if it is already an IP.
426 """
427 if is_ip(hostname):
428 return hostname
429
430 return ns_query(hostname)
431
432
433def get_hostname(address, fqdn=True):
434 """
435 Resolves hostname for given IP, or returns the input
436 if it is already a hostname.
437 """
438 if is_ip(address):
439 try:
440 import dns.reversename
441 except ImportError:
442 apt_install('python-dnspython')
443 import dns.reversename
444
445 rev = dns.reversename.from_address(address)
446 result = ns_query(rev)
447 if not result:
448 return None
449 else:
450 result = address
451
452 if fqdn:
453 # strip trailing .
454 if result.endswith('.'):
455 return result[:-1]
456 else:
457 return result
458 else:
459 return result.split('.')[0]
0460
=== added directory 'hooks/charmhelpers/contrib/storage'
=== added file 'hooks/charmhelpers/contrib/storage/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/storage/linux'
=== added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py'
=== added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,387 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11import os
12import shutil
13import json
14import time
15
16from subprocess import (
17 check_call,
18 check_output,
19 CalledProcessError
20)
21
22from charmhelpers.core.hookenv import (
23 relation_get,
24 relation_ids,
25 related_units,
26 log,
27 INFO,
28 WARNING,
29 ERROR
30)
31
32from charmhelpers.core.host import (
33 mount,
34 mounts,
35 service_start,
36 service_stop,
37 service_running,
38 umount,
39)
40
41from charmhelpers.fetch import (
42 apt_install,
43)
44
45KEYRING = '/etc/ceph/ceph.client.{}.keyring'
46KEYFILE = '/etc/ceph/ceph.client.{}.key'
47
48CEPH_CONF = """[global]
49 auth supported = {auth}
50 keyring = {keyring}
51 mon host = {mon_hosts}
52 log to syslog = {use_syslog}
53 err to syslog = {use_syslog}
54 clog to syslog = {use_syslog}
55"""
56
57
58def install():
59 ''' Basic Ceph client installation '''
60 ceph_dir = "/etc/ceph"
61 if not os.path.exists(ceph_dir):
62 os.mkdir(ceph_dir)
63 apt_install('ceph-common', fatal=True)
64
65
66def rbd_exists(service, pool, rbd_img):
67 ''' Check to see if a RADOS block device exists '''
68 try:
69 out = check_output(['rbd', 'list', '--id', service,
70 '--pool', pool])
71 except CalledProcessError:
72 return False
73 else:
74 return rbd_img in out
75
76
77def create_rbd_image(service, pool, image, sizemb):
78 ''' Create a new RADOS block device '''
79 cmd = [
80 'rbd',
81 'create',
82 image,
83 '--size',
84 str(sizemb),
85 '--id',
86 service,
87 '--pool',
88 pool
89 ]
90 check_call(cmd)
91
92
93def pool_exists(service, name):
94 ''' Check to see if a RADOS pool already exists '''
95 try:
96 out = check_output(['rados', '--id', service, 'lspools'])
97 except CalledProcessError:
98 return False
99 else:
100 return name in out
101
102
103def get_osds(service):
104 '''
105 Return a list of all Ceph Object Storage Daemons
106 currently in the cluster
107 '''
108 version = ceph_version()
109 if version and version >= '0.56':
110 return json.loads(check_output(['ceph', '--id', service,
111 'osd', 'ls', '--format=json']))
112 else:
113 return None
114
115
116def create_pool(service, name, replicas=2):
117 ''' Create a new RADOS pool '''
118 if pool_exists(service, name):
119 log("Ceph pool {} already exists, skipping creation".format(name),
120 level=WARNING)
121 return
122 # Calculate the number of placement groups based
123 # on upstream recommended best practices.
124 osds = get_osds(service)
125 if osds:
126 pgnum = (len(osds) * 100 / replicas)
127 else:
128 # NOTE(james-page): Default to 200 for older ceph versions
129 # which don't support OSD query from cli
130 pgnum = 200
131 cmd = [
132 'ceph', '--id', service,
133 'osd', 'pool', 'create',
134 name, str(pgnum)
135 ]
136 check_call(cmd)
137 cmd = [
138 'ceph', '--id', service,
139 'osd', 'pool', 'set', name,
140 'size', str(replicas)
141 ]
142 check_call(cmd)
143
144
145def delete_pool(service, name):
146 ''' Delete a RADOS pool from ceph '''
147 cmd = [
148 'ceph', '--id', service,
149 'osd', 'pool', 'delete',
150 name, '--yes-i-really-really-mean-it'
151 ]
152 check_call(cmd)
153
154
155def _keyfile_path(service):
156 return KEYFILE.format(service)
157
158
159def _keyring_path(service):
160 return KEYRING.format(service)
161
162
163def create_keyring(service, key):
164 ''' Create a new Ceph keyring containing key'''
165 keyring = _keyring_path(service)
166 if os.path.exists(keyring):
167 log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
168 return
169 cmd = [
170 'ceph-authtool',
171 keyring,
172 '--create-keyring',
173 '--name=client.{}'.format(service),
174 '--add-key={}'.format(key)
175 ]
176 check_call(cmd)
177    log('ceph: Created new keyring at %s.' % keyring, level=INFO)
178
179
180def create_key_file(service, key):
181 ''' Create a file containing key '''
182 keyfile = _keyfile_path(service)
183 if os.path.exists(keyfile):
184 log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
185 return
186 with open(keyfile, 'w') as fd:
187 fd.write(key)
188 log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
189
190
191def get_ceph_nodes():
192    ''' Query named relation 'ceph' to determine current nodes '''
193 hosts = []
194 for r_id in relation_ids('ceph'):
195 for unit in related_units(r_id):
196 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
197 return hosts
198
199
200def configure(service, key, auth, use_syslog):
201 ''' Perform basic configuration of Ceph '''
202 create_keyring(service, key)
203 create_key_file(service, key)
204 hosts = get_ceph_nodes()
205 with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
206 ceph_conf.write(CEPH_CONF.format(auth=auth,
207 keyring=_keyring_path(service),
208 mon_hosts=",".join(map(str, hosts)),
209 use_syslog=use_syslog))
210 modprobe('rbd')
211
212
213def image_mapped(name):
214 ''' Determine whether a RADOS block device is mapped locally '''
215 try:
216 out = check_output(['rbd', 'showmapped'])
217 except CalledProcessError:
218 return False
219 else:
220 return name in out
221
222
223def map_block_storage(service, pool, image):
224 ''' Map a RADOS block device for local use '''
225 cmd = [
226 'rbd',
227 'map',
228 '{}/{}'.format(pool, image),
229 '--user',
230 service,
231 '--secret',
232 _keyfile_path(service),
233 ]
234 check_call(cmd)
235
236
237def filesystem_mounted(fs):
238    ''' Determine whether a filesystem is already mounted '''
239 return fs in [f for f, m in mounts()]
240
241
242def make_filesystem(blk_device, fstype='ext4', timeout=10):
243 ''' Make a new filesystem on the specified block device '''
244 count = 0
245 e_noent = os.errno.ENOENT
246 while not os.path.exists(blk_device):
247 if count >= timeout:
248 log('ceph: gave up waiting on block device %s' % blk_device,
249 level=ERROR)
250 raise IOError(e_noent, os.strerror(e_noent), blk_device)
251 log('ceph: waiting for block device %s to appear' % blk_device,
252 level=INFO)
253 count += 1
254 time.sleep(1)
255 else:
256 log('ceph: Formatting block device %s as filesystem %s.' %
257 (blk_device, fstype), level=INFO)
258 check_call(['mkfs', '-t', fstype, blk_device])
259
260
261def place_data_on_block_device(blk_device, data_src_dst):
262 ''' Migrate data in data_src_dst to blk_device and then remount '''
263 # mount block device into /mnt
264 mount(blk_device, '/mnt')
265 # copy data to /mnt
266 copy_files(data_src_dst, '/mnt')
267 # umount block device
268 umount('/mnt')
269 # Grab user/group ID's from original source
270 _dir = os.stat(data_src_dst)
271 uid = _dir.st_uid
272 gid = _dir.st_gid
273 # re-mount where the data should originally be
274 # TODO: persist is currently a NO-OP in core.host
275 mount(blk_device, data_src_dst, persist=True)
276 # ensure original ownership of new mount.
277 os.chown(data_src_dst, uid, gid)
278
279
280# TODO: re-use
281def modprobe(module):
282 ''' Load a kernel module and configure for auto-load on reboot '''
283 log('ceph: Loading kernel module', level=INFO)
284 cmd = ['modprobe', module]
285 check_call(cmd)
286 with open('/etc/modules', 'r+') as modules:
287 if module not in modules.read():
288 modules.write(module)
289
290
291def copy_files(src, dst, symlinks=False, ignore=None):
292 ''' Copy files from src to dst '''
293 for item in os.listdir(src):
294 s = os.path.join(src, item)
295 d = os.path.join(dst, item)
296 if os.path.isdir(s):
297 shutil.copytree(s, d, symlinks, ignore)
298 else:
299 shutil.copy2(s, d)
300
301
302def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
303 blk_device, fstype, system_services=[]):
304 """
305 NOTE: This function must only be called from a single service unit for
306 the same rbd_img otherwise data loss will occur.
307
308 Ensures given pool and RBD image exists, is mapped to a block device,
309 and the device is formatted and mounted at the given mount_point.
310
311 If formatting a device for the first time, data existing at mount_point
312 will be migrated to the RBD device before being re-mounted.
313
314 All services listed in system_services will be stopped prior to data
315 migration and restarted when complete.
316 """
317 # Ensure pool, RBD image, RBD mappings are in place.
318 if not pool_exists(service, pool):
319 log('ceph: Creating new pool {}.'.format(pool))
320 create_pool(service, pool)
321
322 if not rbd_exists(service, pool, rbd_img):
323 log('ceph: Creating RBD image ({}).'.format(rbd_img))
324 create_rbd_image(service, pool, rbd_img, sizemb)
325
326 if not image_mapped(rbd_img):
327 log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
328 map_block_storage(service, pool, rbd_img)
329
330 # make file system
331 # TODO: What happens if for whatever reason this is run again and
332 # the data is already in the rbd device and/or is mounted??
333 # When it is mounted already, it will fail to make the fs
334 # XXX: This is really sketchy! Need to at least add an fstab entry
335 # otherwise this hook will blow away existing data if its executed
336 # after a reboot.
337 if not filesystem_mounted(mount_point):
338 make_filesystem(blk_device, fstype)
339
340 for svc in system_services:
341 if service_running(svc):
342 log('ceph: Stopping services {} prior to migrating data.'
343 .format(svc))
344 service_stop(svc)
345
346 place_data_on_block_device(blk_device, mount_point)
347
348 for svc in system_services:
349 log('ceph: Starting service {} after migrating data.'
350 .format(svc))
351 service_start(svc)
352
353
354def ensure_ceph_keyring(service, user=None, group=None):
355 '''
356 Ensures a ceph keyring is created for a named service
357 and optionally ensures user and group ownership.
358
359 Returns False if no ceph key is available in relation state.
360 '''
361 key = None
362 for rid in relation_ids('ceph'):
363 for unit in related_units(rid):
364 key = relation_get('key', rid=rid, unit=unit)
365 if key:
366 break
367 if not key:
368 return False
369 create_keyring(service=service, key=key)
370 keyring = _keyring_path(service)
371 if user and group:
372 check_call(['chown', '%s.%s' % (user, group), keyring])
373 return True
374
375
376def ceph_version():
377 ''' Retrieve the local version of ceph '''
378 if os.path.exists('/usr/bin/ceph'):
379 cmd = ['ceph', '-v']
380 output = check_output(cmd)
381 output = output.split()
382 if len(output) > 3:
383 return output[2]
384 else:
385 return None
386 else:
387 return None
0388
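`configure()` above renders `CEPH_CONF` with the keyring path and the monitor addresses gathered from the `ceph` relation. A trimmed, standalone sketch of that rendering (service name and monitor addresses are made-up examples):

```python
# Trimmed copy of the CEPH_CONF template from ceph.py.
CEPH_CONF = """[global]
 auth supported = {auth}
 keyring = {keyring}
 mon host = {mon_hosts}
"""


def render_ceph_conf(auth, keyring, hosts):
    """Sketch of what configure() writes to /etc/ceph/ceph.conf."""
    return CEPH_CONF.format(auth=auth, keyring=keyring,
                            mon_hosts=",".join(map(str, hosts)))


conf = render_ceph_conf('cephx', '/etc/ceph/ceph.client.cinder.keyring',
                        ['10.0.0.1', '10.0.0.2'])
```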
=== added file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
--- hooks/charmhelpers/contrib/storage/linux/loopback.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,62 @@
1
2import os
3import re
4
5from subprocess import (
6 check_call,
7 check_output,
8)
9
10
11##################################################
12# loopback device helpers.
13##################################################
14def loopback_devices():
15 '''
16 Parse through 'losetup -a' output to determine currently mapped
17 loopback devices. Output is expected to look like:
18
19 /dev/loop0: [0807]:961814 (/tmp/my.img)
20
21 :returns: dict: a dict mapping {loopback_dev: backing_file}
22 '''
23 loopbacks = {}
24 cmd = ['losetup', '-a']
25 devs = [d.strip().split(' ') for d in
26 check_output(cmd).splitlines() if d != '']
27 for dev, _, f in devs:
28 loopbacks[dev.replace(':', '')] = re.search('\((\S+)\)', f).groups()[0]
29 return loopbacks
30
31
32def create_loopback(file_path):
33 '''
34 Create a loopback device for a given backing file.
35
36 :returns: str: Full path to new loopback device (eg, /dev/loop0)
37 '''
38 file_path = os.path.abspath(file_path)
39 check_call(['losetup', '--find', file_path])
40 for d, f in loopback_devices().iteritems():
41 if f == file_path:
42 return d
43
44
45def ensure_loopback_device(path, size):
46 '''
47 Ensure a loopback device exists for a given backing file path and size.
48    If a loopback device is not already mapped to the file, a new one will be created.
49
50 TODO: Confirm size of found loopback device.
51
52 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
53 '''
54 for d, f in loopback_devices().iteritems():
55 if f == path:
56 return d
57
58 if not os.path.exists(path):
59 cmd = ['truncate', '--size', size, path]
60 check_call(cmd)
61
62 return create_loopback(path)
063
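The `losetup -a` parsing in `loopback_devices()` can be exercised against canned output, which makes the expected line shape explicit (the sample line follows the format documented in the docstring above):

```python
import re

SAMPLE = "/dev/loop0: [0807]:961814 (/tmp/my.img)\n"


def parse_losetup(output):
    """Sketch of loopback_devices()'s parsing, on canned `losetup -a` text."""
    loopbacks = {}
    for line in output.splitlines():
        if not line:
            continue
        # Three space-separated fields: device, inode info, (backing file)
        dev, _, backing = line.strip().split(' ')
        loopbacks[dev.replace(':', '')] = re.search(r'\((\S+)\)', backing).groups()[0]
    return loopbacks
```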
=== added file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
--- hooks/charmhelpers/contrib/storage/linux/lvm.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,88 @@
1from subprocess import (
2 CalledProcessError,
3 check_call,
4 check_output,
5 Popen,
6 PIPE,
7)
8
9
10##################################################
11# LVM helpers.
12##################################################
13def deactivate_lvm_volume_group(block_device):
14 '''
15    Deactivate any volume group associated with an LVM physical volume.
16
17 :param block_device: str: Full path to LVM physical volume
18 '''
19 vg = list_lvm_volume_group(block_device)
20 if vg:
21 cmd = ['vgchange', '-an', vg]
22 check_call(cmd)
23
24
25def is_lvm_physical_volume(block_device):
26 '''
27 Determine whether a block device is initialized as an LVM PV.
28
29 :param block_device: str: Full path of block device to inspect.
30
31 :returns: boolean: True if block device is a PV, False if not.
32 '''
33 try:
34 check_output(['pvdisplay', block_device])
35 return True
36 except CalledProcessError:
37 return False
38
39
40def remove_lvm_physical_volume(block_device):
41 '''
42 Remove LVM PV signatures from a given block device.
43
44 :param block_device: str: Full path of block device to scrub.
45 '''
46 p = Popen(['pvremove', '-ff', block_device],
47 stdin=PIPE)
48 p.communicate(input='y\n')
49
50
51def list_lvm_volume_group(block_device):
52 '''
53 List LVM volume group associated with a given block device.
54
55 Assumes block device is a valid LVM PV.
56
57 :param block_device: str: Full path of block device to inspect.
58
59 :returns: str: Name of volume group associated with block device or None
60 '''
61 vg = None
62 pvd = check_output(['pvdisplay', block_device]).splitlines()
63 for l in pvd:
64 if l.strip().startswith('VG Name'):
65 vg = ' '.join(l.strip().split()[2:])
66 return vg
67
68
69def create_lvm_physical_volume(block_device):
70 '''
71 Initialize a block device as an LVM physical volume.
72
73 :param block_device: str: Full path of block device to initialize.
74
75 '''
76 check_call(['pvcreate', block_device])
77
78
79def create_lvm_volume_group(volume_group, block_device):
80 '''
81 Create an LVM volume group backed by a given block device.
82
83 Assumes block device has already been initialized as an LVM PV.
84
85 :param volume_group: str: Name of volume group to create.
86 :block_device: str: Full path of PV-initialized block device.
87 '''
88 check_call(['vgcreate', volume_group, block_device])
089
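`list_lvm_volume_group()` scrapes the `VG Name` field out of `pvdisplay` output. A sketch of that scrape on canned output (the PV and VG names are made up):

```python
SAMPLE_PVD = """  --- Physical volume ---
  PV Name               /dev/vdb
  VG Name               cinder-volumes
"""


def vg_from_pvdisplay(output):
    """Sketch of list_lvm_volume_group()'s parsing of pvdisplay output."""
    for line in output.splitlines():
        if line.strip().startswith('VG Name'):
            # Everything after the two-word field label is the VG name.
            return ' '.join(line.strip().split()[2:])
    return None
```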
=== added file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
--- hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,53 @@
1import os
2import re
3from stat import S_ISBLK
4
5from subprocess import (
6 check_call,
7 check_output,
8 call
9)
10
11
12def is_block_device(path):
13 '''
14 Confirm device at path is a valid block device node.
15
16 :returns: boolean: True if path is a block device, False if not.
17 '''
18 if not os.path.exists(path):
19 return False
20 return S_ISBLK(os.stat(path).st_mode)
21
22
23def zap_disk(block_device):
24 '''
25    Clear a block device of its partition table. Relies on sgdisk, which is
26    installed as part of the 'gdisk' package in Ubuntu.
27
28 :param block_device: str: Full path of block device to clean.
29 '''
30 # sometimes sgdisk exits non-zero; this is OK, dd will clean up
31 call(['sgdisk', '--zap-all', '--mbrtogpt',
32 '--clear', block_device])
33 dev_end = check_output(['blockdev', '--getsz', block_device])
34 gpt_end = int(dev_end.split()[0]) - 100
35 check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
36 'bs=1M', 'count=1'])
37 check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
38 'bs=512', 'count=100', 'seek=%s' % (gpt_end)])
39
40
41def is_device_mounted(device):
42 '''Given a device path, return True if that device is mounted, and False
43 if it isn't.
44
45 :param device: str: Full path of the device to check.
46 :returns: boolean: True if the path represents a mounted device, False if
47 it doesn't.
48 '''
49 is_partition = bool(re.search(r".*[0-9]+\b", device))
50 out = check_output(['mount'])
51 if is_partition:
52 return bool(re.search(device + r"\b", out))
53 return bool(re.search(device + r"[0-9]+\b", out))
054
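The partition/whole-disk distinction in `is_device_mounted()` is worth spelling out: a device ending in digits is matched literally against `mount` output, while a whole disk matches if any of its numbered partitions is mounted. A sketch against canned `mount` output (device names are examples):

```python
import re

MOUNT_OUTPUT = ("/dev/sda1 on / type ext4 (rw)\n"
                "/dev/sdb1 on /srv type ext4 (rw)\n")


def is_device_mounted_sketch(device, mount_output):
    """Sketch of is_device_mounted(): whole disks match any partition."""
    is_partition = bool(re.search(r".*[0-9]+\b", device))
    if is_partition:
        return bool(re.search(device + r"\b", mount_output))
    return bool(re.search(device + r"[0-9]+\b", mount_output))
```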
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2014-01-28 00:01:57 +0000
+++ hooks/charmhelpers/core/hookenv.py 2014-09-10 21:17:48 +0000
@@ -25,7 +25,7 @@
 def cached(func):
     """Cache return values for multiple executions of func + args
 
-    For example:
+    For example::
 
         @cached
         def unit_get(attribute):
@@ -155,6 +155,121 @@
     return os.path.basename(sys.argv[0])
 
 
158class Config(dict):
159 """A dictionary representation of the charm's config.yaml, with some
160 extra features:
161
162 - See which values in the dictionary have changed since the previous hook.
163 - For values that have changed, see what the previous value was.
164 - Store arbitrary data for use in a later hook.
165
166 NOTE: Do not instantiate this object directly - instead call
167 ``hookenv.config()``, which will return an instance of :class:`Config`.
168
169 Example usage::
170
171 >>> # inside a hook
172 >>> from charmhelpers.core import hookenv
173 >>> config = hookenv.config()
174 >>> config['foo']
175 'bar'
176 >>> # store a new key/value for later use
177 >>> config['mykey'] = 'myval'
178
179
180 >>> # user runs `juju set mycharm foo=baz`
181 >>> # now we're inside subsequent config-changed hook
182 >>> config = hookenv.config()
183 >>> config['foo']
184 'baz'
185 >>> # test to see if this val has changed since last hook
186 >>> config.changed('foo')
187 True
188 >>> # what was the previous value?
189 >>> config.previous('foo')
190 'bar'
191 >>> # keys/values that we add are preserved across hooks
192 >>> config['mykey']
193 'myval'
194
195 """
196 CONFIG_FILE_NAME = '.juju-persistent-config'
197
198 def __init__(self, *args, **kw):
199 super(Config, self).__init__(*args, **kw)
200 self.implicit_save = True
201 self._prev_dict = None
202 self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
203 if os.path.exists(self.path):
204 self.load_previous()
205
206 def __getitem__(self, key):
207 """For regular dict lookups, check the current juju config first,
208 then the previous (saved) copy. This ensures that user-saved values
209 will be returned by a dict lookup.
210
211 """
212 try:
213 return dict.__getitem__(self, key)
214 except KeyError:
215 return (self._prev_dict or {})[key]
216
217 def load_previous(self, path=None):
218 """Load previous copy of config from disk.
219
220 In normal usage you don't need to call this method directly - it
221 is called automatically at object initialization.
222
223 :param path:
224
225 File path from which to load the previous config. If `None`,
226 config is loaded from the default location. If `path` is
227 specified, subsequent `save()` calls will write to the same
228 path.
229
230 """
231 self.path = path or self.path
232 with open(self.path) as f:
233 self._prev_dict = json.load(f)
234
235 def changed(self, key):
236 """Return True if the current value for this key is different from
237 the previous value.
238
239 """
240 if self._prev_dict is None:
241 return True
242 return self.previous(key) != self.get(key)
243
244 def previous(self, key):
245 """Return previous value for this key, or None if there
246 is no previous value.
247
248 """
249 if self._prev_dict:
250 return self._prev_dict.get(key)
251 return None
252
253 def save(self):
254 """Save this config to disk.
255
256 If the charm is using the :mod:`Services Framework <services.base>`
257 or :meth:'@hook <Hooks.hook>' decorator, this
258 is called automatically at the end of successful hook execution.
259 Otherwise, it should be called directly by user code.
260
261 To disable automatic saves, set ``implicit_save=False`` on this
262 instance.
263
264 """
265 if self._prev_dict:
266 for k, v in self._prev_dict.iteritems():
267 if k not in self:
268 self[k] = v
269 with open(self.path, 'w') as f:
270 json.dump(self, f)
271
272
 @cached
 def config(scope=None):
     """Juju charm configuration"""
@@ -163,7 +278,10 @@
         config_cmd_line.append(scope)
     config_cmd_line.append('--format=json')
     try:
-        return json.loads(subprocess.check_output(config_cmd_line))
+        config_data = json.loads(subprocess.check_output(config_cmd_line))
+        if scope is not None:
+            return config_data
+        return Config(config_data)
     except ValueError:
         return None
 
@@ -188,8 +306,9 @@
188 raise306 raise
189307
190308
191def relation_set(relation_id=None, relation_settings={}, **kwargs):309def relation_set(relation_id=None, relation_settings=None, **kwargs):
192 """Set relation information for the current unit"""310 """Set relation information for the current unit"""
311 relation_settings = relation_settings if relation_settings else {}
193 relation_cmd_line = ['relation-set']312 relation_cmd_line = ['relation-set']
194 if relation_id is not None:313 if relation_id is not None:
195 relation_cmd_line.extend(('-r', relation_id))314 relation_cmd_line.extend(('-r', relation_id))
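The `relation_settings=None` change above removes a classic Python pitfall: a mutable default argument is created once and shared across calls. A minimal demonstration of the hazard and the fix (sketch only, no charm code):

```python
def bad_set(settings={}):
    # The single default dict is reused on every call, so writes leak
    # from one invocation into the next.
    settings['leak'] = True
    return settings


def good_set(settings=None):
    # A fresh dict per call, as in the corrected relation_set() above.
    settings = settings if settings else {}
    settings['leak'] = True
    return settings
```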
@@ -348,18 +467,19 @@
348class Hooks(object):467class Hooks(object):
349 """A convenient handler for hook functions.468 """A convenient handler for hook functions.
350469
351 Example:470 Example::
471
352 hooks = Hooks()472 hooks = Hooks()
353473
354 # register a hook, taking its name from the function name474 # register a hook, taking its name from the function name
355 @hooks.hook()475 @hooks.hook()
356 def install():476 def install():
357 ...477 pass # your code here
358478
359 # register a hook, providing a custom hook name479 # register a hook, providing a custom hook name
360 @hooks.hook("config-changed")480 @hooks.hook("config-changed")
361 def config_changed():481 def config_changed():
362 ...482 pass # your code here
363483
364 if __name__ == "__main__":484 if __name__ == "__main__":
365 # execute a hook based on the name the program is called by485 # execute a hook based on the name the program is called by
@@ -379,6 +499,9 @@
379 hook_name = os.path.basename(args[0])499 hook_name = os.path.basename(args[0])
380 if hook_name in self._hooks:500 if hook_name in self._hooks:
381 self._hooks[hook_name]()501 self._hooks[hook_name]()
502 cfg = config()
503 if cfg.implicit_save:
504 cfg.save()
382 else:505 else:
383 raise UnregisteredHookError(hook_name)506 raise UnregisteredHookError(hook_name)
384507
385508
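The `Hooks` class above is a name-to-function registry dispatched on the script name Juju invokes; the new lines also save config implicitly after a successful hook. The registry pattern itself can be sketched without Juju (`MiniHooks` is a hypothetical name):

```python
class MiniHooks(object):
    """Sketch of the Hooks registry above: map hook names to callables
    and dispatch on the name the hook was invoked as."""

    def __init__(self):
        self._hooks = {}

    def hook(self, *hook_names):
        def wrapper(decorated):
            # Register under the explicit names, or the function's own
            # name when none are given (as @hooks.hook() does above).
            for name in hook_names or (decorated.__name__,):
                self._hooks[name] = decorated
            return decorated
        return wrapper

    def execute(self, hook_name):
        if hook_name not in self._hooks:
            raise KeyError(hook_name)  # stands in for UnregisteredHookError
        return self._hooks[hook_name]()
```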
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2014-01-28 00:01:57 +0000
+++ hooks/charmhelpers/core/host.py 2014-09-10 21:17:48 +0000
@@ -12,10 +12,13 @@
12import string12import string
13import subprocess13import subprocess
14import hashlib14import hashlib
15import shutil
16from contextlib import contextmanager
1517
16from collections import OrderedDict18from collections import OrderedDict
1719
18from hookenv import log20from hookenv import log
21from fstab import Fstab
1922
2023
21def service_start(service_name):24def service_start(service_name):
@@ -34,7 +37,8 @@
3437
3538
36def service_reload(service_name, restart_on_failure=False):39def service_reload(service_name, restart_on_failure=False):
37 """Reload a system service, optionally falling back to restart if reload fails"""40 """Reload a system service, optionally falling back to restart if
41 reload fails"""
38 service_result = service('reload', service_name)42 service_result = service('reload', service_name)
39 if not service_result and restart_on_failure:43 if not service_result and restart_on_failure:
40 service_result = service('restart', service_name)44 service_result = service('restart', service_name)
@@ -50,7 +54,7 @@
50def service_running(service):54def service_running(service):
51 """Determine whether a system service is running"""55 """Determine whether a system service is running"""
52 try:56 try:
53 output = subprocess.check_output(['service', service, 'status'])57 output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
54 except subprocess.CalledProcessError:58 except subprocess.CalledProcessError:
55 return False59 return False
56 else:60 else:
@@ -60,6 +64,16 @@
60 return False64 return False
6165
6266
67def service_available(service_name):
68 """Determine whether a system service is available"""
69 try:
70 subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
71 except subprocess.CalledProcessError:
72 return False
73 else:
74 return True
75
76
63def adduser(username, password=None, shell='/bin/bash', system_user=False):77def adduser(username, password=None, shell='/bin/bash', system_user=False):
64 """Add a user to the system"""78 """Add a user to the system"""
65 try:79 try:
@@ -143,7 +157,19 @@
143 target.write(content)157 target.write(content)
144158
145159
146def mount(device, mountpoint, options=None, persist=False):160def fstab_remove(mp):
161 """Remove the given mountpoint entry from /etc/fstab
162 """
163 return Fstab.remove_by_mountpoint(mp)
164
165
166def fstab_add(dev, mp, fs, options=None):
167 """Adds the given device entry to the /etc/fstab file
168 """
169 return Fstab.add(dev, mp, fs, options=options)
170
171
172def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
147 """Mount a filesystem at a particular mountpoint"""173 """Mount a filesystem at a particular mountpoint"""
148 cmd_args = ['mount']174 cmd_args = ['mount']
149 if options is not None:175 if options is not None:
@@ -154,9 +180,9 @@
154 except subprocess.CalledProcessError, e:180 except subprocess.CalledProcessError, e:
155 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))181 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
156 return False182 return False
183
157 if persist:184 if persist:
158 # TODO: update fstab185 return fstab_add(device, mountpoint, filesystem, options=options)
159 pass
160 return True186 return True
161187
162188
@@ -168,9 +194,9 @@
168 except subprocess.CalledProcessError, e:194 except subprocess.CalledProcessError, e:
169 log('Error unmounting {}\n{}'.format(mountpoint, e.output))195 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
170 return False196 return False
197
171 if persist:198 if persist:
172 # TODO: update fstab199 return fstab_remove(mountpoint)
173 pass
174 return True200 return True
175201
176202
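The `persist=True` paths above now delegate to an `Fstab` class (not shown in this diff) instead of the old TODO. The add/remove bookkeeping it implies can be sketched against a plain file; the functions below are hypothetical stand-ins, not the real `Fstab` API:

```python
def fstab_add(path, device, mountpoint, filesystem, options=None):
    """Append a mount entry; sketch of the fstab_add() helper above."""
    line = '{} {} {} {} 0 0\n'.format(device, mountpoint, filesystem,
                                      options or 'defaults')
    with open(path, 'a') as f:
        f.write(line)
    return True


def fstab_remove(path, mountpoint):
    """Drop every entry for a mountpoint; sketch of fstab_remove().
    Returns True if anything was removed."""
    with open(path) as f:
        lines = f.readlines()
    # Field 1 of an fstab line is the mountpoint; keep non-matching lines.
    kept = [l for l in lines if l.split()[1:2] != [mountpoint]]
    with open(path, 'w') as f:
        f.writelines(kept)
    return len(kept) != len(lines)
```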
@@ -194,16 +220,16 @@
194 return None220 return None
195221
196222
197def restart_on_change(restart_map):223def restart_on_change(restart_map, stopstart=False):
198 """Restart services based on configuration files changing224 """Restart services based on configuration files changing
199225
200 This function is used as a decorator, for example226 This function is used as a decorator, for example::
201227
202 @restart_on_change({228 @restart_on_change({
203 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]229 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
204 })230 })
205 def ceph_client_changed():231 def ceph_client_changed():
206 ...232 pass # your code here
207233
208 In this example, the cinder-api and cinder-volume services234 In this example, the cinder-api and cinder-volume services
209 would be restarted if /etc/ceph/ceph.conf is changed by the235 would be restarted if /etc/ceph/ceph.conf is changed by the
@@ -219,8 +245,14 @@
219 for path in restart_map:245 for path in restart_map:
220 if checksums[path] != file_hash(path):246 if checksums[path] != file_hash(path):
221 restarts += restart_map[path]247 restarts += restart_map[path]
222 for service_name in list(OrderedDict.fromkeys(restarts)):248 services_list = list(OrderedDict.fromkeys(restarts))
223 service('restart', service_name)249 if not stopstart:
250 for service_name in services_list:
251 service('restart', service_name)
252 else:
253 for action in ['stop', 'start']:
254 for service_name in services_list:
255 service(action, service_name)
224 return wrapped_f256 return wrapped_f
225 return wrap257 return wrap
226258
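The `stopstart` flag above changes the restart strategy from one `restart` per service to a full stop-all-then-start-all cycle. The decorator's hash-before/hash-after logic can be sketched with the service calls recorded instead of executed (the `restarted` list is an illustrative addition for inspection):

```python
import hashlib


def file_hash(path):
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()


def restart_on_change(restart_map, restarted, stopstart=False):
    """Sketch of the decorator above: snapshot file hashes, run the
    wrapped hook, then act on services whose files changed."""
    def wrap(f):
        def wrapped_f(*args):
            checksums = {path: file_hash(path) for path in restart_map}
            f(*args)
            restarts = []
            for path in restart_map:
                if checksums[path] != file_hash(path):
                    restarts += restart_map[path]
            # De-duplicate while preserving order, as OrderedDict.fromkeys
            # does in the helper above.
            services = list(dict.fromkeys(restarts))
            if not stopstart:
                restarted.extend(('restart', s) for s in services)
            else:
                for action in ('stop', 'start'):
                    restarted.extend((action, s) for s in services)
        return wrapped_f
    return wrap
```

Note the ordering consequence of `stopstart=True`: every affected service is fully stopped before any is started, which matters when services share resources such as sockets or ports.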
@@ -289,3 +321,40 @@
289 if 'link/ether' in words:321 if 'link/ether' in words:
290 hwaddr = words[words.index('link/ether') + 1]322 hwaddr = words[words.index('link/ether') + 1]
291 return hwaddr323 return hwaddr
324
325
326def cmp_pkgrevno(package, revno, pkgcache=None):
327 '''Compare supplied revno with the revno of the installed package
328
329 * 1 => Installed revno is greater than supplied arg
330 * 0 => Installed revno is the same as supplied arg
331 * -1 => Installed revno is less than supplied arg
332
333 '''
334 import apt_pkg
335 from charmhelpers.fetch import apt_cache
336 if not pkgcache:
337 pkgcache = apt_cache()
338 pkg = pkgcache[package]
339 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
340
341
342@contextmanager
343def chdir(d):
344 cur = os.getcwd()
345 try:
346 yield os.chdir(d)
347 finally:
348 os.chdir(cur)
349
350
351def chownr(path, owner, group):
352 uid = pwd.getpwnam(owner).pw_uid
353 gid = grp.getgrnam(group).gr_gid
354
355 for root, dirs, files in os.walk(path):
356 for name in dirs + files:
357 full = os.path.join(root, name)
358 broken_symlink = os.path.lexists(full) and not os.path.exists(full)
359 if not broken_symlink:
360 os.chown(full, uid, gid)
292361
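The `chdir()` context manager added above guarantees the working directory is restored even when the body raises, because the restore sits in a `finally` block. A quick check of that behaviour, using the same shape as the helper:

```python
import os
from contextlib import contextmanager


@contextmanager
def chdir(d):
    # Same shape as the helper above: switch, yield, always switch back.
    cur = os.getcwd()
    try:
        yield os.chdir(d)
    finally:
        os.chdir(cur)
```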
=== modified file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 2014-01-28 00:01:57 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2014-09-10 21:17:48 +0000
@@ -1,4 +1,6 @@
1import importlib1import importlib
2from tempfile import NamedTemporaryFile
3import time
2from yaml import safe_load4from yaml import safe_load
3from charmhelpers.core.host import (5from charmhelpers.core.host import (
4 lsb_release6 lsb_release
@@ -12,9 +14,9 @@
12 config,14 config,
13 log,15 log,
14)16)
15import apt_pkg
16import os17import os
1718
19
18CLOUD_ARCHIVE = """# Ubuntu Cloud Archive20CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
19deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main21deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
20"""22"""
@@ -54,13 +56,68 @@
54 'icehouse/proposed': 'precise-proposed/icehouse',56 'icehouse/proposed': 'precise-proposed/icehouse',
55 'precise-icehouse/proposed': 'precise-proposed/icehouse',57 'precise-icehouse/proposed': 'precise-proposed/icehouse',
56 'precise-proposed/icehouse': 'precise-proposed/icehouse',58 'precise-proposed/icehouse': 'precise-proposed/icehouse',
59 # Juno
60 'juno': 'trusty-updates/juno',
61 'trusty-juno': 'trusty-updates/juno',
62 'trusty-juno/updates': 'trusty-updates/juno',
63 'trusty-updates/juno': 'trusty-updates/juno',
 64 'juno/proposed': 'trusty-proposed/juno',
66 'trusty-juno/proposed': 'trusty-proposed/juno',
67 'trusty-proposed/juno': 'trusty-proposed/juno',
57}68}
5869
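The pocket table above is what lets `add_source("cloud:...")` accept several aliases (`juno`, `trusty-juno`, `juno/proposed`, ...) for the same archive pocket. The lookup-and-validate step can be sketched as below; the table is an abbreviated copy and the exact error message is an assumption, since the matching branch of `add_source()` is elided from this diff:

```python
CLOUD_ARCHIVE_POCKETS = {
    # Abbreviated copy of the alias table above.
    'icehouse': 'precise-updates/icehouse',
    'precise-icehouse': 'precise-updates/icehouse',
    'juno': 'trusty-updates/juno',
    'trusty-juno': 'trusty-updates/juno',
    'juno/proposed': 'trusty-proposed/juno',
}


class SourceConfigError(Exception):
    pass


def resolve_cloud_pocket(source):
    """Map a 'cloud:' source string to its real archive pocket,
    rejecting unknown aliases."""
    pocket = source.split(':', 1)[1]
    if pocket not in CLOUD_ARCHIVE_POCKETS:
        raise SourceConfigError(
            'Unsupported cloud: source option {}'.format(pocket))
    return CLOUD_ARCHIVE_POCKETS[pocket]
```

This is why the branch description prefers `openstack-origin` over a hard-coded `cloud-archive:havana`: the alias table resolves a release name to the correct pocket for whatever series is running.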
 70# The order of this list is very important. Handlers should be listed from
71# least- to most-specific URL matching.
72FETCH_HANDLERS = (
73 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
74 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
75)
76
77APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
78APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
79APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
80
81
82class SourceConfigError(Exception):
83 pass
84
85
86class UnhandledSource(Exception):
87 pass
88
89
90class AptLockError(Exception):
91 pass
92
93
94class BaseFetchHandler(object):
95
96 """Base class for FetchHandler implementations in fetch plugins"""
97
98 def can_handle(self, source):
99 """Returns True if the source can be handled. Otherwise returns
100 a string explaining why it cannot"""
101 return "Wrong source type"
102
103 def install(self, source):
104 """Try to download and unpack the source. Return the path to the
105 unpacked files or raise UnhandledSource."""
106 raise UnhandledSource("Wrong source type {}".format(source))
107
108 def parse_url(self, url):
109 return urlparse(url)
110
111 def base_url(self, url):
112 """Return url without querystring or fragment"""
113 parts = list(self.parse_url(url))
114 parts[4:] = ['' for i in parts[4:]]
115 return urlunparse(parts)
116
59117
60def filter_installed_packages(packages):118def filter_installed_packages(packages):
61 """Returns a list of packages that require installation"""119 """Returns a list of packages that require installation"""
62 apt_pkg.init()120 cache = apt_cache()
63 cache = apt_pkg.Cache()
64 _pkgs = []121 _pkgs = []
65 for package in packages:122 for package in packages:
66 try:123 try:
@@ -73,6 +130,16 @@
73 return _pkgs130 return _pkgs
74131
75132
133def apt_cache(in_memory=True):
134 """Build and return an apt cache"""
135 import apt_pkg
136 apt_pkg.init()
137 if in_memory:
138 apt_pkg.config.set("Dir::Cache::pkgcache", "")
139 apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
140 return apt_pkg.Cache()
141
142
76def apt_install(packages, options=None, fatal=False):143def apt_install(packages, options=None, fatal=False):
77 """Install one or more packages"""144 """Install one or more packages"""
78 if options is None:145 if options is None:
@@ -87,23 +154,28 @@
87 cmd.extend(packages)154 cmd.extend(packages)
88 log("Installing {} with options: {}".format(packages,155 log("Installing {} with options: {}".format(packages,
89 options))156 options))
90 env = os.environ.copy()157 _run_apt_command(cmd, fatal)
91 if 'DEBIAN_FRONTEND' not in env:158
92 env['DEBIAN_FRONTEND'] = 'noninteractive'159
93160def apt_upgrade(options=None, fatal=False, dist=False):
94 if fatal:161 """Upgrade all packages"""
95 subprocess.check_call(cmd, env=env)162 if options is None:
163 options = ['--option=Dpkg::Options::=--force-confold']
164
165 cmd = ['apt-get', '--assume-yes']
166 cmd.extend(options)
167 if dist:
168 cmd.append('dist-upgrade')
96 else:169 else:
97 subprocess.call(cmd, env=env)170 cmd.append('upgrade')
171 log("Upgrading with options: {}".format(options))
172 _run_apt_command(cmd, fatal)
98173
99174
100def apt_update(fatal=False):175def apt_update(fatal=False):
101 """Update local apt cache"""176 """Update local apt cache"""
102 cmd = ['apt-get', 'update']177 cmd = ['apt-get', 'update']
103 if fatal:178 _run_apt_command(cmd, fatal)
104 subprocess.check_call(cmd)
105 else:
106 subprocess.call(cmd)
107179
108180
109def apt_purge(packages, fatal=False):181def apt_purge(packages, fatal=False):
@@ -114,10 +186,7 @@
114 else:186 else:
115 cmd.extend(packages)187 cmd.extend(packages)
116 log("Purging {}".format(packages))188 log("Purging {}".format(packages))
117 if fatal:189 _run_apt_command(cmd, fatal)
118 subprocess.check_call(cmd)
119 else:
120 subprocess.call(cmd)
121190
122191
123def apt_hold(packages, fatal=False):192def apt_hold(packages, fatal=False):
@@ -128,6 +197,7 @@
128 else:197 else:
129 cmd.extend(packages)198 cmd.extend(packages)
130 log("Holding {}".format(packages))199 log("Holding {}".format(packages))
200
131 if fatal:201 if fatal:
132 subprocess.check_call(cmd)202 subprocess.check_call(cmd)
133 else:203 else:
@@ -135,8 +205,33 @@
135205
136206
137def add_source(source, key=None):207def add_source(source, key=None):
208 """Add a package source to this system.
209
210 @param source: a URL or sources.list entry, as supported by
211 add-apt-repository(1). Examples:
212 ppa:charmers/example
213 deb https://stub:key@private.example.com/ubuntu trusty main
214
215 In addition:
216 'proposed:' may be used to enable the standard 'proposed'
217 pocket for the release.
218 'cloud:' may be used to activate official cloud archive pockets,
219 such as 'cloud:icehouse'
220
221 @param key: A key to be added to the system's APT keyring and used
222 to verify the signatures on packages. Ideally, this should be an
223 ASCII format GPG public key including the block headers. A GPG key
224 id may also be used, but be aware that only insecure protocols are
225 available to retrieve the actual public key from a public keyserver
226 placing your Juju environment at risk. ppa and cloud archive keys
227 are securely added automatically, so should not be provided.
228 """
229 if source is None:
230 log('Source is not present. Skipping')
231 return
232
138 if (source.startswith('ppa:') or233 if (source.startswith('ppa:') or
139 source.startswith('http:') or234 source.startswith('http') or
140 source.startswith('deb ') or235 source.startswith('deb ') or
141 source.startswith('cloud-archive:')):236 source.startswith('cloud-archive:')):
142 subprocess.check_call(['add-apt-repository', '--yes', source])237 subprocess.check_call(['add-apt-repository', '--yes', source])
@@ -155,57 +250,66 @@
155 release = lsb_release()['DISTRIB_CODENAME']250 release = lsb_release()['DISTRIB_CODENAME']
156 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:251 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
157 apt.write(PROPOSED_POCKET.format(release))252 apt.write(PROPOSED_POCKET.format(release))
253 else:
254 raise SourceConfigError("Unknown source: {!r}".format(source))
255
158 if key:256 if key:
159 subprocess.check_call(['apt-key', 'import', key])257 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
160258 with NamedTemporaryFile() as key_file:
161259 key_file.write(key)
162class SourceConfigError(Exception):260 key_file.flush()
163 pass261 key_file.seek(0)
262 subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file)
263 else:
264 # Note that hkp: is in no way a secure protocol. Using a
265 # GPG key id is pointless from a security POV unless you
266 # absolutely trust your network and DNS.
267 subprocess.check_call(['apt-key', 'adv', '--keyserver',
268 'hkp://keyserver.ubuntu.com:80', '--recv',
269 key])
164270
165271
166def configure_sources(update=False,272def configure_sources(update=False,
167 sources_var='install_sources',273 sources_var='install_sources',
168 keys_var='install_keys'):274 keys_var='install_keys'):
169 """275 """
170 Configure multiple sources from charm configuration276 Configure multiple sources from charm configuration.
277
278 The lists are encoded as yaml fragments in the configuration.
279 The fragment needs to be included as a string. Sources and their
280 corresponding keys are of the types supported by add_source().
171281
172 Example config:282 Example config:
173 install_sources:283 install_sources: |
174 - "ppa:foo"284 - "ppa:foo"
175 - "http://example.com/repo precise main"285 - "http://example.com/repo precise main"
176 install_keys:286 install_keys: |
177 - null287 - null
178 - "a1b2c3d4"288 - "a1b2c3d4"
179289
180 Note that 'null' (a.k.a. None) should not be quoted.290 Note that 'null' (a.k.a. None) should not be quoted.
181 """291 """
182 sources = safe_load(config(sources_var))292 sources = safe_load((config(sources_var) or '').strip()) or []
183 keys = config(keys_var)293 keys = safe_load((config(keys_var) or '').strip()) or None
184 if keys is not None:294
185 keys = safe_load(keys)295 if isinstance(sources, basestring):
186 if isinstance(sources, basestring) and (296 sources = [sources]
187 keys is None or isinstance(keys, basestring)):297
188 add_source(sources, keys)298 if keys is None:
299 for source in sources:
300 add_source(source, None)
189 else:301 else:
190 if not len(sources) == len(keys):302 if isinstance(keys, basestring):
191 msg = 'Install sources and keys lists are different lengths'303 keys = [keys]
192 raise SourceConfigError(msg)304
193 for src_num in range(len(sources)):305 if len(sources) != len(keys):
194 add_source(sources[src_num], keys[src_num])306 raise SourceConfigError(
307 'Install sources and keys lists are different lengths')
308 for source, key in zip(sources, keys):
309 add_source(source, key)
195 if update:310 if update:
196 apt_update(fatal=True)311 apt_update(fatal=True)
197312
198# The order of this list is very important. Handlers should be listed in from
199# least- to most-specific URL matching.
200FETCH_HANDLERS = (
201 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
202 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
203)
204
205
206class UnhandledSource(Exception):
207 pass
208
209313
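The rewritten `configure_sources()` above normalises its YAML-decoded inputs: a bare string becomes a one-element list, a missing key list pairs every source with `None`, and mismatched lengths raise. That normalisation can be sketched on its own (a Python 3 sketch, so `str` stands in for the original's `basestring`):

```python
def normalize_sources(sources, keys):
    """Sketch of the configure_sources() pairing logic above: accept a
    single string or a list for each, and zip sources with keys."""
    if isinstance(sources, str):
        sources = [sources]
    if keys is None:
        return [(source, None) for source in sources]
    if isinstance(keys, str):
        keys = [keys]
    if len(sources) != len(keys):
        raise ValueError(
            'Install sources and keys lists are different lengths')
    return list(zip(sources, keys))
```

Each resulting pair is then fed to `add_source(source, key)`, followed by a single `apt_update(fatal=True)` when requested.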
210def install_remote(source):314def install_remote(source):
211 """315 """
@@ -236,30 +340,6 @@
236 return install_remote(source)340 return install_remote(source)
237341
238342
239class BaseFetchHandler(object):
240
241 """Base class for FetchHandler implementations in fetch plugins"""
242
243 def can_handle(self, source):
244 """Returns True if the source can be handled. Otherwise returns
245 a string explaining why it cannot"""
246 return "Wrong source type"
247
248 def install(self, source):
249 """Try to download and unpack the source. Return the path to the
250 unpacked files or raise UnhandledSource."""
251 raise UnhandledSource("Wrong source type {}".format(source))
252
253 def parse_url(self, url):
254 return urlparse(url)
255
256 def base_url(self, url):
257 """Return url without querystring or fragment"""
258 parts = list(self.parse_url(url))
259 parts[4:] = ['' for i in parts[4:]]
260 return urlunparse(parts)
261
262
263def plugins(fetch_handlers=None):343def plugins(fetch_handlers=None):
264 if not fetch_handlers:344 if not fetch_handlers:
265 fetch_handlers = FETCH_HANDLERS345 fetch_handlers = FETCH_HANDLERS
@@ -277,3 +357,40 @@
277 log("FetchHandler {} not found, skipping plugin".format(357 log("FetchHandler {} not found, skipping plugin".format(
278 handler_name))358 handler_name))
279 return plugin_list359 return plugin_list
360
361
362def _run_apt_command(cmd, fatal=False):
363 """
364 Run an APT command, checking output and retrying if the fatal flag is set
365 to True.
366
367 :param: cmd: list: The apt command to run, as an argv list.
368 :param: fatal: bool: Whether the command's output should be checked and
369 retried.
370 """
371 env = os.environ.copy()
372
373 if 'DEBIAN_FRONTEND' not in env:
374 env['DEBIAN_FRONTEND'] = 'noninteractive'
375
376 if fatal:
377 retry_count = 0
378 result = None
379
380 # If the command is considered "fatal", we need to retry if the apt
381 # lock was not acquired.
382
383 while result is None or result == APT_NO_LOCK:
384 try:
385 result = subprocess.check_call(cmd, env=env)
386 except subprocess.CalledProcessError, e:
387 retry_count = retry_count + 1
388 if retry_count > APT_NO_LOCK_RETRY_COUNT:
389 raise
390 result = e.returncode
391 log("Couldn't acquire DPKG lock. Will retry in {} seconds."
392 "".format(APT_NO_LOCK_RETRY_DELAY))
393 time.sleep(APT_NO_LOCK_RETRY_DELAY)
394
395 else:
396 subprocess.call(cmd, env=env)
280397
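The `_run_apt_command()` retry loop above keeps re-running a fatal apt command while it exits with status 100 (apt's "couldn't acquire lock"), up to `APT_NO_LOCK_RETRY_COUNT` attempts. The loop can be sketched with an injectable runner so no real apt is needed; `CalledProcessError` here is a local stand-in for the subprocess exception:

```python
import time

APT_NO_LOCK = 100          # apt's return code for "couldn't acquire lock"
APT_NO_LOCK_RETRY_DELAY = 0  # the real helper waits 10 seconds
APT_NO_LOCK_RETRY_COUNT = 30


class CalledProcessError(Exception):
    """Stand-in for subprocess.CalledProcessError in this sketch."""
    def __init__(self, returncode):
        self.returncode = returncode


def run_with_lock_retry(run):
    """Sketch of _run_apt_command's fatal path: retry while the command
    fails with the apt lock status, re-raising after too many tries."""
    retry_count = 0
    result = None
    while result is None or result == APT_NO_LOCK:
        try:
            result = run()
        except CalledProcessError as e:
            retry_count += 1
            if retry_count > APT_NO_LOCK_RETRY_COUNT:
                raise
            result = e.returncode
            time.sleep(APT_NO_LOCK_RETRY_DELAY)
    return result
```

One subtlety carried over from the original: a failure with any return code other than `APT_NO_LOCK` ends the loop rather than raising, because only the lock status keeps the `while` condition true.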
=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 2014-01-28 00:01:57 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 2014-09-10 21:17:48 +0000
@@ -1,5 +1,9 @@
1import os1import os
2import urllib22import urllib2
3from urllib import urlretrieve
4import urlparse
5import hashlib
6
3from charmhelpers.fetch import (7from charmhelpers.fetch import (
4 BaseFetchHandler,8 BaseFetchHandler,
5 UnhandledSource9 UnhandledSource
@@ -10,7 +14,17 @@
10)14)
11from charmhelpers.core.host import mkdir15from charmhelpers.core.host import mkdir
1216
1317"""
18This class is a plugin for charmhelpers.fetch.install_remote.
19
20It grabs, validates and installs remote archives fetched over "http", "https", "ftp" or "file" protocols. The contents of the archive are installed in $CHARM_DIR/fetched/.
21
22Example usage:
23install_remote("https://example.com/some/archive.tar.gz")
24# Installs the contents of archive.tar.gz in $CHARM_DIR/fetched/.
25
26See charmhelpers.fetch.archiveurl.get_archivehandler for supported archive types.
27"""
14class ArchiveUrlFetchHandler(BaseFetchHandler):28class ArchiveUrlFetchHandler(BaseFetchHandler):
15 """Handler for archives via generic URLs"""29 """Handler for archives via generic URLs"""
16 def can_handle(self, source):30 def can_handle(self, source):
@@ -24,6 +38,19 @@
24 def download(self, source, dest):38 def download(self, source, dest):
 25 # propagate all exceptions39 # propagate all exceptions
26 # URLError, OSError, etc40 # URLError, OSError, etc
41 proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
42 if proto in ('http', 'https'):
43 auth, barehost = urllib2.splituser(netloc)
44 if auth is not None:
45 source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
46 username, password = urllib2.splitpasswd(auth)
47 passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
48 # Realm is set to None in add_password to force the username and password
49 # to be used whatever the realm
50 passman.add_password(None, source, username, password)
51 authhandler = urllib2.HTTPBasicAuthHandler(passman)
52 opener = urllib2.build_opener(authhandler)
53 urllib2.install_opener(opener)
27 response = urllib2.urlopen(source)54 response = urllib2.urlopen(source)
28 try:55 try:
29 with open(dest, 'w') as dest_file:56 with open(dest, 'w') as dest_file:
@@ -46,3 +73,31 @@
46 except OSError as e:73 except OSError as e:
47 raise UnhandledSource(e.strerror)74 raise UnhandledSource(e.strerror)
48 return extract(dld_file)75 return extract(dld_file)
76
77 # Mandatory file validation via Sha1 or MD5 hashing.
78 def download_and_validate(self, url, hashsum, validate="sha1"):
79 if validate == 'sha1' and len(hashsum) != 40:
80 raise ValueError("HashSum must be = 40 characters when using sha1"
81 " validation")
82 if validate == 'md5' and len(hashsum) != 32:
83 raise ValueError("HashSum must be = 32 characters when using md5"
84 " validation")
85 tempfile, headers = urlretrieve(url)
86 self.validate_file(tempfile, hashsum, validate)
87 return tempfile
88
89 # Predicate method that returns status of hash matching expected hash.
90 def validate_file(self, source, hashsum, vmethod='sha1'):
91 if vmethod != 'sha1' and vmethod != 'md5':
92 raise ValueError("Validation Method not supported")
93
94 if vmethod == 'md5':
95 m = hashlib.md5()
96 if vmethod == 'sha1':
97 m = hashlib.sha1()
98 with open(source) as f:
99 for line in f:
100 m.update(line)
101 if hashsum != m.hexdigest():
102 msg = "Hash Mismatch on {} expected {} got {}"
103 raise ValueError(msg.format(source, hashsum, m.hexdigest()))
49104
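The `validate_file()` method above compares a file's digest against an expected SHA1 or MD5 hex string. A self-contained sketch of the same check is below; it reads fixed-size binary chunks rather than iterating text lines as the original does, which is a deliberate variation (line iteration in text mode can corrupt digests of binary archives on some platforms):

```python
import hashlib


def validate_file(path, hashsum, vmethod='sha1'):
    """Sketch of the validator above: stream the file through the
    chosen digest and compare against the expected hex digest."""
    if vmethod not in ('sha1', 'md5'):
        raise ValueError('Validation Method not supported')
    m = hashlib.new(vmethod)
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            m.update(chunk)
    if hashsum != m.hexdigest():
        raise ValueError('Hash Mismatch on {} expected {} got {}'.format(
            path, hashsum, m.hexdigest()))
```

The length pre-checks in `download_and_validate()` above (40 hex characters for SHA1, 32 for MD5) catch obviously malformed digests before any download happens.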
=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 2014-01-28 00:01:57 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 2014-09-10 21:17:48 +0000
@@ -39,7 +39,8 @@
39 def install(self, source):39 def install(self, source):
40 url_parts = self.parse_url(source)40 url_parts = self.parse_url(source)
41 branch_name = url_parts.path.strip("/").split("/")[-1]41 branch_name = url_parts.path.strip("/").split("/")[-1]
42 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)42 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
43 branch_name)
43 if not os.path.exists(dest_dir):44 if not os.path.exists(dest_dir):
44 mkdir(dest_dir, perms=0755)45 mkdir(dest_dir, perms=0755)
45 try:46 try:
4647
=== modified file 'hooks/hooks.py'
--- hooks/hooks.py 2014-08-22 07:52:20 +0000
+++ hooks/hooks.py 2014-09-10 21:17:48 +0000
@@ -1,6 +1,7 @@
1#!/usr/bin/env python1#!/usr/bin/env python
2# vim: et ai ts=4 sw=4:2# vim: et ai ts=4 sw=4:
33
4from charmhelpers.contrib.openstack.utils import configure_installation_source
4from charmhelpers import fetch5from charmhelpers import fetch
5from charmhelpers.core import hookenv6from charmhelpers.core import hookenv
6from charmhelpers.core.hookenv import ERROR, INFO7from charmhelpers.core.hookenv import ERROR, INFO
@@ -8,7 +9,7 @@
8import json9import json
9import os10import os
10import sys11import sys
11from util import StorageServiceUtil, generate_volume_label, get_running_series12from util import StorageServiceUtil, generate_volume_label
1213
13hooks = hookenv.Hooks()14hooks = hookenv.Hooks()
1415
@@ -84,13 +85,12 @@
84 if apt_install is None: # for testing purposes85 if apt_install is None: # for testing purposes
85 apt_install = fetch.apt_install86 apt_install = fetch.apt_install
86 if add_source is None: # for testing purposes87 if add_source is None: # for testing purposes
87 add_source = fetch.add_source88 add_source = configure_installation_source
8889
89 provider = hookenv.config("provider")90 provider = hookenv.config("provider")
90 if provider == "nova":91 if provider == "nova":
92 add_source(hookenv.config('openstack-origin'))
91 required_packages = ["python-novaclient"]93 required_packages = ["python-novaclient"]
92 if int(get_running_series()['release'].split(".")[0]) < 14:
93 add_source("cloud-archive:havana")
94 elif provider == "ec2":94 elif provider == "ec2":
95 required_packages = ["python-boto"]95 required_packages = ["python-boto"]
96 fetch.apt_update(fatal=True)96 fetch.apt_update(fatal=True)
9797
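The hook change above swaps the hard-coded `cloud-archive:havana` for `configure_installation_source(openstack-origin)`, injected through the same `add_source` test seam the install hook already had. The dependency-injection shape can be sketched as follows; names and the final `apt_install` call are assumed from context, since parts of `install()` are elided in this diff:

```python
def install(config, apt_install, add_source, apt_update):
    """Sketch of the install hook above: collaborators are passed in so
    tests can substitute fakes instead of touching apt."""
    provider = config['provider']
    if provider == 'nova':
        # With the charmhelpers approach, the configured origin
        # (e.g. 'cloud:precise-folsom/staging') picks the archive,
        # instead of a static 'cloud-archive:havana'.
        add_source(config['openstack-origin'])
        required_packages = ['python-novaclient']
    elif provider == 'ec2':
        required_packages = ['python-boto']
    else:
        raise ValueError('Unknown provider: {!r}'.format(provider))
    apt_update(fatal=True)
    apt_install(required_packages, fatal=True)
```

This mirrors why the updated unit test simply asserts that `add_source` receives the configured `openstack-origin` value, with no series-specific branching left to cover.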
=== modified file 'hooks/test_hooks.py'
--- hooks/test_hooks.py 2014-09-09 16:24:46 +0000
+++ hooks/test_hooks.py 2014-09-10 21:17:48 +0000
@@ -16,7 +16,8 @@
16 {"key": "myusername", "tenant": "myusername_project",16 {"key": "myusername", "tenant": "myusername_project",
17 "secret": "password", "region": "region1", "provider": "nova",17 "secret": "password", "region": "region1", "provider": "nova",
18 "endpoint": "https://keystone_url:443/v2.0/",18 "endpoint": "https://keystone_url:443/v2.0/",
19 "default_volume_size": 11})19 "default_volume_size": 11,
20 "openstack-origin": "cloud:precise-folsom/staging"})
2021
21 def test_wb_persist_data_creates_persist_file_if_it_doesnt_exist(self):22 def test_wb_persist_data_creates_persist_file_if_it_doesnt_exist(self):
22 """23 """
@@ -182,46 +183,23 @@
182 self.mocker.replay()183 self.mocker.replay()
183 hooks.config_changed()184 hooks.config_changed()
184185
185 def test_install_installs_novaclient_and_no_cloud_archive_on_trusty(self):186 def test_install_installs_novaclient_from_openstack_origin_config(self):
186 """187 """
187 On trusty, 14.04, and later, L{install} will not call188 When C{provider} is nova, L{install} will call the charmhelper's
188 C{fetch.add_source} to add a cloud repository but it will install the189 C{configure_installation_source} to add the appropriate cloud archive
189 install the C{python-novaclient} package.190 for the configured C{openstack-origin}. The C{python-novaclient}
190 """191 package will then be installed.
191 get_running_series = self.mocker.replace(hooks.get_running_series)192 """
192 get_running_series()193 apt_update = self.mocker.replace(fetch.apt_update)
193 self.mocker.result({'release': '14.04'}) # Trusty series194 apt_update(fatal=True)
194 add_source = self.mocker.replace(fetch.add_source)195 self.mocker.replay()
195 add_source("cloud-archive:havana")196
196 self.mocker.count(0) # Test we never called add_source197 def apt_install(packages, fatal):
197 apt_update = self.mocker.replace(fetch.apt_update)198 self.assertEqual(["python-novaclient"], packages)
198 apt_update(fatal=True)199 self.assertTrue(fatal)
199 self.mocker.replay()200
200201 def add_source(origin):
201 def apt_install(packages, fatal):202 self.assertEqual("cloud:precise-folsom/staging", origin)
202 self.assertEqual(["python-novaclient"], packages)
203 self.assertTrue(fatal)
204
205 hooks.install(apt_install=apt_install, add_source=add_source)
206
207 def test_precise_install_adds_apt_source_and_installs_novaclient(self):
208 """
209 L{install} will call C{fetch.add_source} to add a cloud repository and
210 install the C{python-novaclient} package.
211 """
212 get_running_series = self.mocker.replace(hooks.get_running_series)
213 get_running_series()
214 self.mocker.result({'release': '12.04'}) # precise
215 apt_update = self.mocker.replace(fetch.apt_update)
216 apt_update(fatal=True)
217 self.mocker.replay()
218
219 def add_source(source):
220 self.assertEqual("cloud-archive:havana", source)
221
222 def apt_install(packages, fatal):
223 self.assertEqual(["python-novaclient"], packages)
224 self.assertTrue(fatal)
225203
226 hooks.install(apt_install=apt_install, add_source=add_source)204 hooks.install(apt_install=apt_install, add_source=add_source)
227205
