Merge lp:~chad.smith/charms/precise/block-storage-broker/bsb-use-charmhelpers-to-set-openstack-origin into lp:charms/block-storage-broker
Status: Work in progress
Proposed branch: lp:~chad.smith/charms/precise/block-storage-broker/bsb-use-charmhelpers-to-set-openstack-origin
Merge into: lp:charms/block-storage-broker
Diff against target: 3846 lines (+3261/-146), 23 files modified:
- Makefile (+10/-7)
- charm-helpers.yaml (+2/-0)
- config.yaml (+15/-0)
- hooks/charmhelpers/contrib/openstack/alternatives.py (+17/-0)
- hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+61/-0)
- hooks/charmhelpers/contrib/openstack/amulet/utils.py (+275/-0)
- hooks/charmhelpers/contrib/openstack/context.py (+789/-0)
- hooks/charmhelpers/contrib/openstack/ip.py (+79/-0)
- hooks/charmhelpers/contrib/openstack/neutron.py (+201/-0)
- hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
- hooks/charmhelpers/contrib/openstack/templating.py (+279/-0)
- hooks/charmhelpers/contrib/openstack/utils.py (+459/-0)
- hooks/charmhelpers/contrib/storage/linux/ceph.py (+387/-0)
- hooks/charmhelpers/contrib/storage/linux/loopback.py (+62/-0)
- hooks/charmhelpers/contrib/storage/linux/lvm.py (+88/-0)
- hooks/charmhelpers/contrib/storage/linux/utils.py (+53/-0)
- hooks/charmhelpers/core/hookenv.py (+129/-6)
- hooks/charmhelpers/core/host.py (+81/-12)
- hooks/charmhelpers/fetch/__init__.py (+191/-74)
- hooks/charmhelpers/fetch/archiveurl.py (+56/-1)
- hooks/charmhelpers/fetch/bzrurl.py (+2/-1)
- hooks/hooks.py (+4/-4)
- hooks/test_hooks.py (+19/-41)
To merge this branch: bzr merge lp:~chad.smith/charms/precise/block-storage-broker/bsb-use-charmhelpers-to-set-openstack-origin
Related bugs: none
Reviewer: David Britton (community), status: Needs Fixing
Review via email: mp+231594@code.launchpad.net
Commit message
Description of the change
This branch avoids making a static call to charmhelpers' fetch.add_
This branch is quite sizeable because of pulling in the charmhelpers.
1. sync new charmhelpers dependencies
- new Makefile sync target added to simplify charmhelpers updates
- charm-helpers.yaml updated to define the new charmhelpers dependencies contrib.openstack and contrib.storage
- new files sync'd under charmhelpers (not authored in this branch)
2. config.yaml has a new openstack-origin parameter that defaults the cloud archive repository to the supported distro default, but allows a user to set a custom cloud archive repository if needed
3. hooks/hooks.py drops use of fetch.add_
4. fix unit tests
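The openstack-origin values named in step 2 follow the conventions charmhelpers uses for installation sources. A minimal illustrative sketch of how such values can be classified (hypothetical helper written for this description; the charm itself delegates this work to charmhelpers):

```python
def classify_origin(origin):
    """Classify an openstack-origin value into the kinds of apt sources
    an installation-source handler distinguishes.

    Hypothetical helper for illustration only; not the charmhelpers
    implementation.
    """
    if origin in (None, '', 'distro'):
        return 'distro'         # stock Ubuntu archive, no extra source
    if origin.startswith('ppa:'):
        return 'ppa'            # a Launchpad PPA
    if origin.startswith('cloud:'):
        return 'cloud-archive'  # an Ubuntu Cloud Archive pocket
    if origin.startswith('deb '):
        return 'deb-line'       # a raw sources.list entry
    raise ValueError('unknown installation source: %r' % (origin,))


print(classify_origin('distro'))                # distro
print(classify_origin('cloud:precise-folsom'))  # cloud-archive
```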
The relevant changes that exclude the charmhelpers.
http://
Unmerged revisions
- 62. By Chad Smith: merge block-storage-broker trunk, resolve conflicts and fix unit tests to avoid mocker use
- 61. By Chad Smith: correct yaml indent in config.yaml
- 60. By Chad Smith: update unit tests to validate use of charmhelpers configure_installation_source
- 59. By Chad Smith: update charmhelpers sync functionality
- 58. By Chad Smith: add openstack-origin to config.yaml options and use charmhelpers configure_installation_source to pull appropriate deb packages for a given ubuntu series
- 57. By Chad Smith: sync added contrib.(storage|openstack) files
- 56. By Chad Smith: add contrib.openstack and its dependency contrib.storage to charm-helpers.yaml file
- 55. By Chad Smith: sync existing charmhelpers dependencies
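The new openstack-origin option's description notes that pointing it at a source known to provide a later OpenStack release triggers a software upgrade. The comparison behind such a check can be sketched as follows (illustrative only; the real charm relies on charmhelpers' openstack utilities for codename comparison):

```python
# Ordered OpenStack releases relevant to precise/trusty at the time of
# this branch (a subset, for illustration).
RELEASES = ['essex', 'folsom', 'grizzly', 'havana', 'icehouse']


def upgrade_available(current, proposed):
    """Return True if the proposed source's release is later than the
    currently installed one.

    Hypothetical helper; charmhelpers performs the real comparison.
    """
    return RELEASES.index(proposed) > RELEASES.index(current)


print(upgrade_available('essex', 'folsom'))   # True
print(upgrade_available('havana', 'havana'))  # False
```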
Preview Diff
=== modified file 'Makefile'
--- Makefile 2014-03-21 17:05:09 +0000
+++ Makefile 2014-09-10 21:17:48 +0000
@@ -1,4 +1,6 @@
 .PHONY: test lint clean
+PYTHON := /usr/bin/env python
+
 CHARM_DIR=`pwd`
 
 clean:
@@ -10,10 +12,11 @@
 lint:
 	@flake8 --exclude hooks/charmhelpers hooks
 
-update-charm-helpers:
-	# Pull latest charm-helpers branch and sync the components based on our
-	# charm-helpers.yaml
-	rm -rf charm-helpers
-	bzr co lp:charm-helpers
-	./charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py -c charm-helpers.yaml
-	rm -rf charm-helpers
+bin/charm_helpers_sync.py:
+	@mkdir -p bin
+	@bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
+		> bin/charm_helpers_sync.py
+
+# Update charmhelpers dependencies within our charm
+sync: bin/charm_helpers_sync.py
+	$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
 
=== modified file 'charm-helpers.yaml'
--- charm-helpers.yaml 2014-02-04 17:36:03 +0000
+++ charm-helpers.yaml 2014-09-10 21:17:48 +0000
@@ -5,3 +5,5 @@
 include:
     - core
     - fetch
+    - contrib.openstack
+    - contrib.storage  # for openstack dependencies
 
=== modified file 'config.yaml'
--- config.yaml 2014-07-15 22:58:26 +0000
+++ config.yaml 2014-09-10 21:17:48 +0000
@@ -29,3 +29,18 @@
     type: int
     description: The volume size in GB if the relation does not specify
     default: 5
+  openstack-origin:
+    default: distro
+    type: string
+    description: |
+      Repository from which to install. May be one of the following:
+      distro (default), ppa:somecustom/ppa, a deb url sources entry,
+      or a supported Cloud Archive release pocket.
+
+      Supported Cloud Archive sources include: cloud:precise-folsom,
+      cloud:precise-folsom/updates, cloud:precise-folsom/staging,
+      cloud:precise-folsom/proposed.
+
+      Note that updating this setting to a source that is known to
+      provide a later version of OpenStack will trigger a software
+      upgrade.
 
=== added directory 'hooks/charmhelpers/contrib'
=== added file 'hooks/charmhelpers/contrib/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/openstack'
=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
=== added file 'hooks/charmhelpers/contrib/openstack/alternatives.py'
--- hooks/charmhelpers/contrib/openstack/alternatives.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/alternatives.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,17 @@
+''' Helper for managing alternatives for file conflict resolution '''
+
+import subprocess
+import shutil
+import os
+
+
+def install_alternative(name, target, source, priority=50):
+    ''' Install alternative configuration '''
+    if (os.path.exists(target) and not os.path.islink(target)):
+        # Move existing file/directory away before installing
+        shutil.move(target, '{}.bak'.format(target))
+    cmd = [
+        'update-alternatives', '--force', '--install',
+        target, name, source, str(priority)
+    ]
+    subprocess.check_call(cmd)
=== added directory 'hooks/charmhelpers/contrib/openstack/amulet'
=== added file 'hooks/charmhelpers/contrib/openstack/amulet/__init__.py'
=== added file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,61 @@
+from charmhelpers.contrib.amulet.deployment import (
+    AmuletDeployment
+)
+
+
+class OpenStackAmuletDeployment(AmuletDeployment):
+    """OpenStack amulet deployment.
+
+    This class inherits from AmuletDeployment and has additional support
+    that is specifically for use by OpenStack charms.
+    """
+
+    def __init__(self, series=None, openstack=None, source=None):
+        """Initialize the deployment environment."""
+        super(OpenStackAmuletDeployment, self).__init__(series)
+        self.openstack = openstack
+        self.source = source
+
+    def _add_services(self, this_service, other_services):
+        """Add services to the deployment and set openstack-origin."""
+        super(OpenStackAmuletDeployment, self)._add_services(this_service,
+                                                             other_services)
+        name = 0
+        services = other_services
+        services.append(this_service)
+        use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph']
+
+        if self.openstack:
+            for svc in services:
+                if svc[name] not in use_source:
+                    config = {'openstack-origin': self.openstack}
+                    self.d.configure(svc[name], config)
+
+        if self.source:
+            for svc in services:
+                if svc[name] in use_source:
+                    config = {'source': self.source}
+                    self.d.configure(svc[name], config)
+
+    def _configure_services(self, configs):
+        """Configure all of the services."""
+        for service, config in configs.iteritems():
+            self.d.configure(service, config)
+
+    def _get_openstack_release(self):
+        """Get openstack release.
+
+        Return an integer representing the enum value of the openstack
+        release.
+        """
+        (self.precise_essex, self.precise_folsom, self.precise_grizzly,
+         self.precise_havana, self.precise_icehouse,
+         self.trusty_icehouse) = range(6)
+        releases = {
+            ('precise', None): self.precise_essex,
+            ('precise', 'cloud:precise-folsom'): self.precise_folsom,
+            ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
+            ('precise', 'cloud:precise-havana'): self.precise_havana,
+            ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
+            ('trusty', None): self.trusty_icehouse}
+        return releases[(self.series, self.openstack)]
=== added file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-09-10 21:17:48 +0000
@@ -0,0 +1,275 @@
+import logging
+import os
+import time
+import urllib
+
+import glanceclient.v1.client as glance_client
+import keystoneclient.v2_0 as keystone_client
+import novaclient.v1_1.client as nova_client
+
+from charmhelpers.contrib.amulet.utils import (
+    AmuletUtils
+)
+
+DEBUG = logging.DEBUG
+ERROR = logging.ERROR
+
+
+class OpenStackAmuletUtils(AmuletUtils):
+    """OpenStack amulet utilities.
+
+    This class inherits from AmuletUtils and has additional support
+    that is specifically for use by OpenStack charms.
+    """
+
+    def __init__(self, log_level=ERROR):
+        """Initialize the deployment environment."""
+        super(OpenStackAmuletUtils, self).__init__(log_level)
+
+    def validate_endpoint_data(self, endpoints, admin_port, internal_port,
+                               public_port, expected):
+        """Validate endpoint data.
+
+        Validate actual endpoint data vs expected endpoint data. The ports
+        are used to find the matching endpoint.
+        """
+        found = False
+        for ep in endpoints:
+            self.log.debug('endpoint: {}'.format(repr(ep)))
+            if (admin_port in ep.adminurl and
+                    internal_port in ep.internalurl and
+                    public_port in ep.publicurl):
+                found = True
+                actual = {'id': ep.id,
+                          'region': ep.region,
+                          'adminurl': ep.adminurl,
+                          'internalurl': ep.internalurl,
+                          'publicurl': ep.publicurl,
+                          'service_id': ep.service_id}
+                ret = self._validate_dict_data(expected, actual)
+                if ret:
+                    return 'unexpected endpoint data - {}'.format(ret)
+
+        if not found:
+            return 'endpoint not found'
+
+    def validate_svc_catalog_endpoint_data(self, expected, actual):
+        """Validate service catalog endpoint data.
+
+        Validate a list of actual service catalog endpoints vs a list of
+        expected service catalog endpoints.
+        """
+        self.log.debug('actual: {}'.format(repr(actual)))
+        for k, v in expected.iteritems():
+            if k in actual:
+                ret = self._validate_dict_data(expected[k][0], actual[k][0])
+                if ret:
+                    return self.endpoint_error(k, ret)
+            else:
+                return "endpoint {} does not exist".format(k)
+        return ret
+
+    def validate_tenant_data(self, expected, actual):
+        """Validate tenant data.
+
+        Validate a list of actual tenant data vs list of expected tenant
+        data.
+        """
+        self.log.debug('actual: {}'.format(repr(actual)))
+        for e in expected:
+            found = False
+            for act in actual:
+                a = {'enabled': act.enabled, 'description': act.description,
+                     'name': act.name, 'id': act.id}
+                if e['name'] == a['name']:
+                    found = True
+                    ret = self._validate_dict_data(e, a)
+                    if ret:
+                        return "unexpected tenant data - {}".format(ret)
+            if not found:
+                return "tenant {} does not exist".format(e['name'])
+        return ret
+
+    def validate_role_data(self, expected, actual):
+        """Validate role data.
+
+        Validate a list of actual role data vs a list of expected role
+        data.
+        """
+        self.log.debug('actual: {}'.format(repr(actual)))
+        for e in expected:
+            found = False
+            for act in actual:
+                a = {'name': act.name, 'id': act.id}
+                if e['name'] == a['name']:
+                    found = True
+                    ret = self._validate_dict_data(e, a)
+                    if ret:
+                        return "unexpected role data - {}".format(ret)
+            if not found:
+                return "role {} does not exist".format(e['name'])
+        return ret
+
+    def validate_user_data(self, expected, actual):
+        """Validate user data.
+
+        Validate a list of actual user data vs a list of expected user
+        data.
+        """
+        self.log.debug('actual: {}'.format(repr(actual)))
+        for e in expected:
+            found = False
+            for act in actual:
+                a = {'enabled': act.enabled, 'name': act.name,
+                     'email': act.email, 'tenantId': act.tenantId,
+                     'id': act.id}
+                if e['name'] == a['name']:
+                    found = True
+                    ret = self._validate_dict_data(e, a)
+                    if ret:
+                        return "unexpected user data - {}".format(ret)
+            if not found:
+                return "user {} does not exist".format(e['name'])
+        return ret
+
+    def validate_flavor_data(self, expected, actual):
+        """Validate flavor data.
+
+        Validate a list of actual flavors vs a list of expected flavors.
+        """
+        self.log.debug('actual: {}'.format(repr(actual)))
+        act = [a.name for a in actual]
+        return self._validate_list_data(expected, act)
+
+    def tenant_exists(self, keystone, tenant):
+        """Return True if tenant exists."""
+        return tenant in [t.name for t in keystone.tenants.list()]
+
+    def authenticate_keystone_admin(self, keystone_sentry, user, password,
+                                    tenant):
+        """Authenticates admin user with the keystone admin endpoint."""
+        unit = keystone_sentry
+        service_ip = unit.relation('shared-db',
+                                   'mysql:shared-db')['private-address']
+        ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
+        return keystone_client.Client(username=user, password=password,
+                                      tenant_name=tenant, auth_url=ep)
+
+    def authenticate_keystone_user(self, keystone, user, password, tenant):
+        """Authenticates a regular user with the keystone public endpoint."""
+        ep = keystone.service_catalog.url_for(service_type='identity',
+                                              endpoint_type='publicURL')
+        return keystone_client.Client(username=user, password=password,
+                                      tenant_name=tenant, auth_url=ep)
+
+    def authenticate_glance_admin(self, keystone):
+        """Authenticates admin user with glance."""
+        ep = keystone.service_catalog.url_for(service_type='image',
+                                              endpoint_type='adminURL')
+        return glance_client.Client(ep, token=keystone.auth_token)
+
+    def authenticate_nova_user(self, keystone, user, password, tenant):
+        """Authenticates a regular user with nova-api."""
+        ep = keystone.service_catalog.url_for(service_type='identity',
+                                              endpoint_type='publicURL')
+        return nova_client.Client(username=user, api_key=password,
+                                  project_id=tenant, auth_url=ep)
+
+    def create_cirros_image(self, glance, image_name):
+        """Download the latest cirros image and upload it to glance."""
+        http_proxy = os.getenv('AMULET_HTTP_PROXY')
+        self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
+        if http_proxy:
+            proxies = {'http': http_proxy}
+            opener = urllib.FancyURLopener(proxies)
+        else:
+            opener = urllib.FancyURLopener()
+
+        f = opener.open("http://download.cirros-cloud.net/version/released")
+        version = f.read().strip()
+        cirros_img = "tests/cirros-{}-x86_64-disk.img".format(version)
+
+        if not os.path.exists(cirros_img):
+            cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
+                                                  version, cirros_img)
+            opener.retrieve(cirros_url, cirros_img)
+        f.close()
+
+        with open(cirros_img) as f:
+            image = glance.images.create(name=image_name, is_public=True,
+                                         disk_format='qcow2',
+                                         container_format='bare', data=f)
+        count = 1
+        status = image.status
+        while status != 'active' and count < 10:
+            time.sleep(3)
+            image = glance.images.get(image.id)
+            status = image.status
+            self.log.debug('image status: {}'.format(status))
+            count += 1
+
+        if status != 'active':
+            self.log.error('image creation timed out')
+            return None
+
+        return image
+
+    def delete_image(self, glance, image):
+        """Delete the specified image."""
+        num_before = len(list(glance.images.list()))
+        glance.images.delete(image)
+
+        count = 1
+        num_after = len(list(glance.images.list()))
+        while num_after != (num_before - 1) and count < 10:
+            time.sleep(3)
+            num_after = len(list(glance.images.list()))
+            self.log.debug('number of images: {}'.format(num_after))
+            count += 1
+
+        if num_after != (num_before - 1):
+            self.log.error('image deletion timed out')
+            return False
+
+        return True
+
+    def create_instance(self, nova, image_name, instance_name, flavor):
+        """Create the specified instance."""
+        image = nova.images.find(name=image_name)
+        flavor = nova.flavors.find(name=flavor)
+        instance = nova.servers.create(name=instance_name, image=image,
+                                       flavor=flavor)
+
+        count = 1
+        status = instance.status
+        while status != 'ACTIVE' and count < 60:
+            time.sleep(3)
+            instance = nova.servers.get(instance.id)
+            status = instance.status
+            self.log.debug('instance status: {}'.format(status))
+            count += 1
+
+        if status != 'ACTIVE':
+            self.log.error('instance creation timed out')
+            return None
+
+        return instance
+
+    def delete_instance(self, nova, instance):
+        """Delete the specified instance."""
+        num_before = len(list(nova.servers.list()))
+        nova.servers.delete(instance)
+
+        count = 1
+        num_after = len(list(nova.servers.list()))
+        while num_after != (num_before - 1) and count < 10:
+            time.sleep(3)
+            num_after = len(list(nova.servers.list()))
+            self.log.debug('number of instances: {}'.format(num_after))
+            count += 1
+
+        if num_after != (num_before - 1):
+            self.log.error('instance deletion timed out')
+            return False
+
+        return True
438 | === added file 'hooks/charmhelpers/contrib/openstack/context.py' | |||
439 | --- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000 | |||
440 | +++ hooks/charmhelpers/contrib/openstack/context.py 2014-09-10 21:17:48 +0000 | |||
441 | @@ -0,0 +1,789 @@ | |||
442 | 1 | import json | ||
443 | 2 | import os | ||
444 | 3 | import time | ||
445 | 4 | |||
446 | 5 | from base64 import b64decode | ||
447 | 6 | |||
448 | 7 | from subprocess import ( | ||
449 | 8 | check_call | ||
450 | 9 | ) | ||
451 | 10 | |||
452 | 11 | |||
453 | 12 | from charmhelpers.fetch import ( | ||
454 | 13 | apt_install, | ||
455 | 14 | filter_installed_packages, | ||
456 | 15 | ) | ||
457 | 16 | |||
458 | 17 | from charmhelpers.core.hookenv import ( | ||
459 | 18 | config, | ||
460 | 19 | local_unit, | ||
461 | 20 | log, | ||
462 | 21 | relation_get, | ||
463 | 22 | relation_ids, | ||
464 | 23 | related_units, | ||
465 | 24 | relation_set, | ||
466 | 25 | unit_get, | ||
467 | 26 | unit_private_ip, | ||
468 | 27 | ERROR, | ||
469 | 28 | INFO | ||
470 | 29 | ) | ||
471 | 30 | |||
472 | 31 | from charmhelpers.contrib.hahelpers.cluster import ( | ||
473 | 32 | determine_apache_port, | ||
474 | 33 | determine_api_port, | ||
475 | 34 | https, | ||
476 | 35 | is_clustered | ||
477 | 36 | ) | ||
478 | 37 | |||
479 | 38 | from charmhelpers.contrib.hahelpers.apache import ( | ||
480 | 39 | get_cert, | ||
481 | 40 | get_ca_cert, | ||
482 | 41 | ) | ||
483 | 42 | |||
484 | 43 | from charmhelpers.contrib.openstack.neutron import ( | ||
485 | 44 | neutron_plugin_attribute, | ||
486 | 45 | ) | ||
487 | 46 | |||
488 | 47 | from charmhelpers.contrib.network.ip import ( | ||
489 | 48 | get_address_in_network, | ||
490 | 49 | get_ipv6_addr, | ||
491 | 50 | ) | ||
492 | 51 | |||
493 | 52 | CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' | ||
494 | 53 | |||
495 | 54 | |||
496 | 55 | class OSContextError(Exception): | ||
497 | 56 | pass | ||
498 | 57 | |||
499 | 58 | |||
500 | 59 | def ensure_packages(packages): | ||
501 | 60 | '''Install but do not upgrade required plugin packages''' | ||
502 | 61 | required = filter_installed_packages(packages) | ||
503 | 62 | if required: | ||
504 | 63 | apt_install(required, fatal=True) | ||
505 | 64 | |||
506 | 65 | |||
507 | 66 | def context_complete(ctxt): | ||
508 | 67 | _missing = [] | ||
509 | 68 | for k, v in ctxt.iteritems(): | ||
510 | 69 | if v is None or v == '': | ||
511 | 70 | _missing.append(k) | ||
512 | 71 | if _missing: | ||
513 | 72 | log('Missing required data: %s' % ' '.join(_missing), level='INFO') | ||
514 | 73 | return False | ||
515 | 74 | return True | ||
516 | 75 | |||
517 | 76 | |||
518 | 77 | def config_flags_parser(config_flags): | ||
519 | 78 | if config_flags.find('==') >= 0: | ||
520 | 79 | log("config_flags is not in expected format (key=value)", | ||
521 | 80 | level=ERROR) | ||
522 | 81 | raise OSContextError | ||
523 | 82 | # strip the following from each value. | ||
524 | 83 | post_strippers = ' ,' | ||
525 | 84 | # we strip any leading/trailing '=' or ' ' from the string then | ||
526 | 85 | # split on '='. | ||
527 | 86 | split = config_flags.strip(' =').split('=') | ||
528 | 87 | limit = len(split) | ||
529 | 88 | flags = {} | ||
530 | 89 | for i in xrange(0, limit - 1): | ||
531 | 90 | current = split[i] | ||
532 | 91 | next = split[i + 1] | ||
533 | 92 | vindex = next.rfind(',') | ||
534 | 93 | if (i == limit - 2) or (vindex < 0): | ||
535 | 94 | value = next | ||
536 | 95 | else: | ||
537 | 96 | value = next[:vindex] | ||
538 | 97 | |||
539 | 98 | if i == 0: | ||
540 | 99 | key = current | ||
541 | 100 | else: | ||
542 | 101 | # if this not the first entry, expect an embedded key. | ||
543 | 102 | index = current.rfind(',') | ||
544 | 103 | if index < 0: | ||
545 | 104 | log("invalid config value(s) at index %s" % (i), | ||
546 | 105 | level=ERROR) | ||
547 | 106 | raise OSContextError | ||
548 | 107 | key = current[index + 1:] | ||
549 | 108 | |||
550 | 109 | # Add to collection. | ||
551 | 110 | flags[key.strip(post_strippers)] = value.rstrip(post_strippers) | ||
552 | 111 | return flags | ||
553 | 112 | |||
554 | 113 | |||
555 | 114 | class OSContextGenerator(object): | ||
556 | 115 | interfaces = [] | ||
557 | 116 | |||
558 | 117 | def __call__(self): | ||
559 | 118 | raise NotImplementedError | ||
560 | 119 | |||
561 | 120 | |||
562 | 121 | class SharedDBContext(OSContextGenerator): | ||
563 | 122 | interfaces = ['shared-db'] | ||
564 | 123 | |||
565 | 124 | def __init__(self, | ||
566 | 125 | database=None, user=None, relation_prefix=None, ssl_dir=None): | ||
567 | 126 | ''' | ||
568 | 127 | Allows inspecting relation for settings prefixed with relation_prefix. | ||
569 | 128 | This is useful for parsing access for multiple databases returned via | ||
570 | 129 | the shared-db interface (eg, nova_password, quantum_password) | ||
571 | 130 | ''' | ||
572 | 131 | self.relation_prefix = relation_prefix | ||
573 | 132 | self.database = database | ||
574 | 133 | self.user = user | ||
575 | 134 | self.ssl_dir = ssl_dir | ||
576 | 135 | |||
577 | 136 | def __call__(self): | ||
578 | 137 | self.database = self.database or config('database') | ||
579 | 138 | self.user = self.user or config('database-user') | ||
580 | 139 | if None in [self.database, self.user]: | ||
581 | 140 | log('Could not generate shared_db context. ' | ||
582 | 141 | 'Missing required charm config options. ' | ||
583 | 142 | '(database name and user)') | ||
584 | 143 | raise OSContextError | ||
585 | 144 | |||
586 | 145 | ctxt = {} | ||
587 | 146 | |||
588 | 147 | # NOTE(jamespage) if mysql charm provides a network upon which | ||
589 | 148 | # access to the database should be made, reconfigure relation | ||
590 | 149 | # with the service units local address and defer execution | ||
591 | 150 | access_network = relation_get('access-network') | ||
592 | 151 | if access_network is not None: | ||
593 | 152 | if self.relation_prefix is not None: | ||
594 | 153 | hostname_key = "{}_hostname".format(self.relation_prefix) | ||
595 | 154 | else: | ||
596 | 155 | hostname_key = "hostname" | ||
597 | 156 | access_hostname = get_address_in_network(access_network, | ||
598 | 157 | unit_get('private-address')) | ||
599 | 158 | set_hostname = relation_get(attribute=hostname_key, | ||
600 | 159 | unit=local_unit()) | ||
601 | 160 | if set_hostname != access_hostname: | ||
602 | 161 | relation_set(relation_settings={hostname_key: access_hostname}) | ||
603 | 162 | return ctxt # Defer any further hook execution for now.... | ||
604 | 163 | |||
605 | 164 | password_setting = 'password' | ||
606 | 165 | if self.relation_prefix: | ||
607 | 166 | password_setting = self.relation_prefix + '_password' | ||
608 | 167 | |||
609 | 168 | for rid in relation_ids('shared-db'): | ||
610 | 169 | for unit in related_units(rid): | ||
611 | 170 | rdata = relation_get(rid=rid, unit=unit) | ||
612 | 171 | ctxt = { | ||
613 | 172 | 'database_host': rdata.get('db_host'), | ||
614 | 173 | 'database': self.database, | ||
615 | 174 | 'database_user': self.user, | ||
616 | 175 | 'database_password': rdata.get(password_setting), | ||
617 | 176 | 'database_type': 'mysql' | ||
618 | 177 | } | ||
619 | 178 | if context_complete(ctxt): | ||
620 | 179 | db_ssl(rdata, ctxt, self.ssl_dir) | ||
621 | 180 | return ctxt | ||
622 | 181 | return {} | ||
623 | 182 | |||
624 | 183 | |||
625 | 184 | class PostgresqlDBContext(OSContextGenerator): | ||
626 | 185 | interfaces = ['pgsql-db'] | ||
627 | 186 | |||
628 | 187 | def __init__(self, database=None): | ||
629 | 188 | self.database = database | ||
630 | 189 | |||
631 | 190 | def __call__(self): | ||
632 | 191 | self.database = self.database or config('database') | ||
633 | 192 | if self.database is None: | ||
634 | 193 | log('Could not generate postgresql_db context. ' | ||
635 | 194 | 'Missing required charm config options. ' | ||
636 | 195 | '(database name)') | ||
637 | 196 | raise OSContextError | ||
638 | 197 | ctxt = {} | ||
639 | 198 | |||
640 | 199 | for rid in relation_ids(self.interfaces[0]): | ||
641 | 200 | for unit in related_units(rid): | ||
642 | 201 | ctxt = { | ||
643 | 202 | 'database_host': relation_get('host', rid=rid, unit=unit), | ||
644 | 203 | 'database': self.database, | ||
645 | 204 | 'database_user': relation_get('user', rid=rid, unit=unit), | ||
646 | 205 | 'database_password': relation_get('password', rid=rid, unit=unit), | ||
647 | 206 | 'database_type': 'postgresql', | ||
648 | 207 | } | ||
649 | 208 | if context_complete(ctxt): | ||
650 | 209 | return ctxt | ||
651 | 210 | return {} | ||
652 | 211 | |||
653 | 212 | |||
654 | 213 | def db_ssl(rdata, ctxt, ssl_dir): | ||
655 | 214 | if 'ssl_ca' in rdata and ssl_dir: | ||
656 | 215 | ca_path = os.path.join(ssl_dir, 'db-client.ca') | ||
657 | 216 | with open(ca_path, 'w') as fh: | ||
658 | 217 | fh.write(b64decode(rdata['ssl_ca'])) | ||
659 | 218 | ctxt['database_ssl_ca'] = ca_path | ||
660 | 219 | elif 'ssl_ca' in rdata: | ||
661 | 220 | log("Charm not set up for ssl support but ssl ca found") | ||
662 | 221 | return ctxt | ||
663 | 222 | if 'ssl_cert' in rdata: | ||
664 | 223 | cert_path = os.path.join( | ||
665 | 224 | ssl_dir, 'db-client.cert') | ||
666 | 225 | if not os.path.exists(cert_path): | ||
667 | 226 | log("Waiting 1m for ssl client cert validity") | ||
668 | 227 | time.sleep(60) | ||
669 | 228 | with open(cert_path, 'w') as fh: | ||
670 | 229 | fh.write(b64decode(rdata['ssl_cert'])) | ||
671 | 230 | ctxt['database_ssl_cert'] = cert_path | ||
672 | 231 | key_path = os.path.join(ssl_dir, 'db-client.key') | ||
673 | 232 | with open(key_path, 'w') as fh: | ||
674 | 233 | fh.write(b64decode(rdata['ssl_key'])) | ||
675 | 234 | ctxt['database_ssl_key'] = key_path | ||
676 | 235 | return ctxt | ||
677 | 236 | |||
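The db_ssl() helper above follows one pattern throughout: SSL material arrives base64-encoded in relation data, gets decoded and written under ssl_dir, and only the resulting file path is exposed to templates via the context dict. A minimal standalone sketch of that pattern (the helper name and sample payload are hypothetical):

```python
# Sketch of the decode-and-record pattern used by db_ssl(): relation
# data holds base64 payloads; we write them to disk and put the path
# (not the payload) into the template context.
import os
import tempfile
from base64 import b64encode, b64decode

def write_ssl_material(rdata, ctxt, ssl_dir, key, filename, ctxt_key):
    """Decode rdata[key] into ssl_dir/filename and record the path."""
    path = os.path.join(ssl_dir, filename)
    with open(path, 'w') as fh:
        fh.write(b64decode(rdata[key]).decode('utf-8'))
    ctxt[ctxt_key] = path
    return ctxt

ssl_dir = tempfile.mkdtemp()
rdata = {'ssl_ca': b64encode(b'---BEGIN CERT---').decode('ascii')}
ctxt = write_ssl_material(rdata, {}, ssl_dir, 'ssl_ca',
                          'db-client.ca', 'database_ssl_ca')
```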
678 | 237 | |||
679 | 238 | class IdentityServiceContext(OSContextGenerator): | ||
680 | 239 | interfaces = ['identity-service'] | ||
681 | 240 | |||
682 | 241 | def __call__(self): | ||
683 | 242 | log('Generating template context for identity-service') | ||
684 | 243 | ctxt = {} | ||
685 | 244 | |||
686 | 245 | for rid in relation_ids('identity-service'): | ||
687 | 246 | for unit in related_units(rid): | ||
688 | 247 | rdata = relation_get(rid=rid, unit=unit) | ||
689 | 248 | ctxt = { | ||
690 | 249 | 'service_port': rdata.get('service_port'), | ||
691 | 250 | 'service_host': rdata.get('service_host'), | ||
692 | 251 | 'auth_host': rdata.get('auth_host'), | ||
693 | 252 | 'auth_port': rdata.get('auth_port'), | ||
694 | 253 | 'admin_tenant_name': rdata.get('service_tenant'), | ||
695 | 254 | 'admin_user': rdata.get('service_username'), | ||
696 | 255 | 'admin_password': rdata.get('service_password'), | ||
697 | 256 | 'service_protocol': | ||
698 | 257 | rdata.get('service_protocol') or 'http', | ||
699 | 258 | 'auth_protocol': | ||
700 | 259 | rdata.get('auth_protocol') or 'http', | ||
701 | 260 | } | ||
702 | 261 | if context_complete(ctxt): | ||
703 | 262 | # NOTE(jamespage) this is required for >= icehouse | ||
704 | 263 | # so a missing value just indicates keystone needs | ||
705 | 264 | # upgrading | ||
706 | 265 | ctxt['admin_tenant_id'] = rdata.get('service_tenant_id') | ||
707 | 266 | return ctxt | ||
708 | 267 | return {} | ||
709 | 268 | |||
710 | 269 | |||
711 | 270 | class AMQPContext(OSContextGenerator): | ||
712 | 271 | |||
713 | 272 | def __init__(self, ssl_dir=None, rel_name='amqp', relation_prefix=None): | ||
714 | 273 | self.ssl_dir = ssl_dir | ||
715 | 274 | self.rel_name = rel_name | ||
716 | 275 | self.relation_prefix = relation_prefix | ||
717 | 276 | self.interfaces = [rel_name] | ||
718 | 277 | |||
719 | 278 | def __call__(self): | ||
720 | 279 | log('Generating template context for amqp') | ||
721 | 280 | conf = config() | ||
722 | 281 | user_setting = 'rabbit-user' | ||
723 | 282 | vhost_setting = 'rabbit-vhost' | ||
724 | 283 | if self.relation_prefix: | ||
725 | 284 | user_setting = self.relation_prefix + '-rabbit-user' | ||
726 | 285 | vhost_setting = self.relation_prefix + '-rabbit-vhost' | ||
727 | 286 | |||
728 | 287 | try: | ||
729 | 288 | username = conf[user_setting] | ||
730 | 289 | vhost = conf[vhost_setting] | ||
731 | 290 | except KeyError as e: | ||
732 | 291 | log('Could not generate amqp context. ' | ||
733 | 292 | 'Missing required charm config options: %s.' % e) | ||
734 | 293 | raise OSContextError | ||
735 | 294 | ctxt = {} | ||
736 | 295 | for rid in relation_ids(self.rel_name): | ||
737 | 296 | ha_vip_only = False | ||
738 | 297 | for unit in related_units(rid): | ||
739 | 298 | if relation_get('clustered', rid=rid, unit=unit): | ||
740 | 299 | ctxt['clustered'] = True | ||
741 | 300 | ctxt['rabbitmq_host'] = relation_get('vip', rid=rid, | ||
742 | 301 | unit=unit) | ||
743 | 302 | else: | ||
744 | 303 | ctxt['rabbitmq_host'] = relation_get('private-address', | ||
745 | 304 | rid=rid, unit=unit) | ||
746 | 305 | ctxt.update({ | ||
747 | 306 | 'rabbitmq_user': username, | ||
748 | 307 | 'rabbitmq_password': relation_get('password', rid=rid, | ||
749 | 308 | unit=unit), | ||
750 | 309 | 'rabbitmq_virtual_host': vhost, | ||
751 | 310 | }) | ||
752 | 311 | |||
753 | 312 | ssl_port = relation_get('ssl_port', rid=rid, unit=unit) | ||
754 | 313 | if ssl_port: | ||
755 | 314 | ctxt['rabbit_ssl_port'] = ssl_port | ||
756 | 315 | ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit) | ||
757 | 316 | if ssl_ca: | ||
758 | 317 | ctxt['rabbit_ssl_ca'] = ssl_ca | ||
759 | 318 | |||
760 | 319 | if relation_get('ha_queues', rid=rid, unit=unit) is not None: | ||
761 | 320 | ctxt['rabbitmq_ha_queues'] = True | ||
762 | 321 | |||
763 | 322 | ha_vip_only = relation_get('ha-vip-only', | ||
764 | 323 | rid=rid, unit=unit) is not None | ||
765 | 324 | |||
766 | 325 | if context_complete(ctxt): | ||
767 | 326 | if 'rabbit_ssl_ca' in ctxt: | ||
768 | 327 | if not self.ssl_dir: | ||
769 | 328 | log(("Charm not set up for ssl support " | ||
770 | 329 | "but ssl ca found")) | ||
771 | 330 | break | ||
772 | 331 | ca_path = os.path.join( | ||
773 | 332 | self.ssl_dir, 'rabbit-client-ca.pem') | ||
774 | 333 | with open(ca_path, 'w') as fh: | ||
775 | 334 | fh.write(b64decode(ctxt['rabbit_ssl_ca'])) | ||
776 | 335 | ctxt['rabbit_ssl_ca'] = ca_path | ||
777 | 336 | # Sufficient information found = break out! | ||
778 | 337 | break | ||
779 | 338 | # Used for active/active rabbitmq >= grizzly | ||
780 | 339 | if ('clustered' not in ctxt or ha_vip_only) \ | ||
781 | 340 | and len(related_units(rid)) > 1: | ||
782 | 341 | rabbitmq_hosts = [] | ||
783 | 342 | for unit in related_units(rid): | ||
784 | 343 | rabbitmq_hosts.append(relation_get('private-address', | ||
785 | 344 | rid=rid, unit=unit)) | ||
786 | 345 | ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts) | ||
787 | 346 | if not context_complete(ctxt): | ||
788 | 347 | return {} | ||
789 | 348 | else: | ||
790 | 349 | return ctxt | ||
791 | 350 | |||
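The tail of AMQPContext.__call__ above handles active/active RabbitMQ: when no cluster VIP is advertised ('clustered' absent, or ha-vip-only set) and more than one broker unit is related, the per-unit addresses are collapsed into a single comma-separated rabbitmq_hosts string. A standalone sketch with made-up unit addresses:

```python
# Sketch of the rabbitmq_hosts assembly for active/active RabbitMQ
# (>= grizzly): without a clustered VIP, every related unit's private
# address is joined into one comma-separated string for the client to
# iterate over. Unit names and addresses are illustrative.
related = {'rabbitmq-server/0': '10.0.0.11',
           'rabbitmq-server/1': '10.0.0.12'}

ctxt = {'rabbitmq_host': related['rabbitmq-server/0']}
if 'clustered' not in ctxt and len(related) > 1:
    ctxt['rabbitmq_hosts'] = ','.join(related[u] for u in sorted(related))
```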
792 | 351 | |||
793 | 352 | class CephContext(OSContextGenerator): | ||
794 | 353 | interfaces = ['ceph'] | ||
795 | 354 | |||
796 | 355 | def __call__(self): | ||
797 | 356 | '''This generates context for /etc/ceph/ceph.conf templates''' | ||
798 | 357 | if not relation_ids('ceph'): | ||
799 | 358 | return {} | ||
800 | 359 | |||
801 | 360 | log('Generating template context for ceph') | ||
802 | 361 | |||
803 | 362 | mon_hosts = [] | ||
804 | 363 | auth = None | ||
805 | 364 | key = None | ||
806 | 365 | use_syslog = str(config('use-syslog')).lower() | ||
807 | 366 | for rid in relation_ids('ceph'): | ||
808 | 367 | for unit in related_units(rid): | ||
809 | 368 | auth = relation_get('auth', rid=rid, unit=unit) | ||
810 | 369 | key = relation_get('key', rid=rid, unit=unit) | ||
811 | 370 | ceph_addr = \ | ||
812 | 371 | relation_get('ceph-public-address', rid=rid, unit=unit) or \ | ||
813 | 372 | relation_get('private-address', rid=rid, unit=unit) | ||
814 | 373 | mon_hosts.append(ceph_addr) | ||
815 | 374 | |||
816 | 375 | ctxt = { | ||
817 | 376 | 'mon_hosts': ' '.join(mon_hosts), | ||
818 | 377 | 'auth': auth, | ||
819 | 378 | 'key': key, | ||
820 | 379 | 'use_syslog': use_syslog | ||
821 | 380 | } | ||
822 | 381 | |||
823 | 382 | if not os.path.isdir('/etc/ceph'): | ||
824 | 383 | os.mkdir('/etc/ceph') | ||
825 | 384 | |||
826 | 385 | if not context_complete(ctxt): | ||
827 | 386 | return {} | ||
828 | 387 | |||
829 | 388 | ensure_packages(['ceph-common']) | ||
830 | 389 | |||
831 | 390 | return ctxt | ||
832 | 391 | |||
833 | 392 | |||
834 | 393 | class HAProxyContext(OSContextGenerator): | ||
835 | 394 | interfaces = ['cluster'] | ||
836 | 395 | |||
837 | 396 | def __call__(self): | ||
838 | 397 | ''' | ||
839 | 398 | Builds half a context for the haproxy template, which describes | ||
840 | 399 | all peers to be included in the cluster. Each charm needs to include | ||
841 | 400 | its own context generator that describes the port mapping. | ||
842 | 401 | ''' | ||
843 | 402 | if not relation_ids('cluster'): | ||
844 | 403 | return {} | ||
845 | 404 | |||
846 | 405 | cluster_hosts = {} | ||
847 | 406 | l_unit = local_unit().replace('/', '-') | ||
848 | 407 | if config('prefer-ipv6'): | ||
849 | 408 | addr = get_ipv6_addr() | ||
850 | 409 | else: | ||
851 | 410 | addr = unit_get('private-address') | ||
852 | 411 | cluster_hosts[l_unit] = get_address_in_network(config('os-internal-network'), | ||
853 | 412 | addr) | ||
854 | 413 | |||
855 | 414 | for rid in relation_ids('cluster'): | ||
856 | 415 | for unit in related_units(rid): | ||
857 | 416 | _unit = unit.replace('/', '-') | ||
858 | 417 | addr = relation_get('private-address', rid=rid, unit=unit) | ||
859 | 418 | cluster_hosts[_unit] = addr | ||
860 | 419 | |||
861 | 420 | ctxt = { | ||
862 | 421 | 'units': cluster_hosts, | ||
863 | 422 | } | ||
864 | 423 | |||
865 | 424 | if config('prefer-ipv6'): | ||
866 | 425 | ctxt['local_host'] = 'ip6-localhost' | ||
867 | 426 | ctxt['haproxy_host'] = '::' | ||
868 | 427 | ctxt['stat_port'] = ':::8888' | ||
869 | 428 | else: | ||
870 | 429 | ctxt['local_host'] = '127.0.0.1' | ||
871 | 430 | ctxt['haproxy_host'] = '0.0.0.0' | ||
872 | 431 | ctxt['stat_port'] = ':8888' | ||
873 | 432 | |||
874 | 433 | if len(cluster_hosts.keys()) > 1: | ||
875 | 434 | # Enable haproxy when we have enough peers. | ||
876 | 435 | log('Ensuring haproxy enabled in /etc/default/haproxy.') | ||
877 | 436 | with open('/etc/default/haproxy', 'w') as out: | ||
878 | 437 | out.write('ENABLED=1\n') | ||
879 | 438 | return ctxt | ||
880 | 439 | log('HAProxy context is incomplete, this unit has no peers.') | ||
881 | 440 | return {} | ||
882 | 441 | |||
883 | 442 | |||
884 | 443 | class ImageServiceContext(OSContextGenerator): | ||
885 | 444 | interfaces = ['image-service'] | ||
886 | 445 | |||
887 | 446 | def __call__(self): | ||
888 | 447 | ''' | ||
889 | 448 | Obtains the glance API server from the image-service relation. Useful | ||
890 | 449 | in nova and cinder (currently). | ||
891 | 450 | ''' | ||
892 | 451 | log('Generating template context for image-service.') | ||
893 | 452 | rids = relation_ids('image-service') | ||
894 | 453 | if not rids: | ||
895 | 454 | return {} | ||
896 | 455 | for rid in rids: | ||
897 | 456 | for unit in related_units(rid): | ||
898 | 457 | api_server = relation_get('glance-api-server', | ||
899 | 458 | rid=rid, unit=unit) | ||
900 | 459 | if api_server: | ||
901 | 460 | return {'glance_api_servers': api_server} | ||
902 | 461 | log('ImageService context is incomplete. ' | ||
903 | 462 | 'Missing required relation data.') | ||
904 | 463 | return {} | ||
905 | 464 | |||
906 | 465 | |||
907 | 466 | class ApacheSSLContext(OSContextGenerator): | ||
908 | 467 | |||
909 | 468 | """ | ||
910 | 469 | Generates a context for an apache vhost configuration that configures | ||
911 | 470 | HTTPS reverse proxying for one or many endpoints. Generated context | ||
912 | 471 | looks something like:: | ||
913 | 472 | |||
914 | 473 | { | ||
915 | 474 | 'namespace': 'cinder', | ||
916 | 475 | 'private_address': 'iscsi.mycinderhost.com', | ||
917 | 476 | 'endpoints': [(8776, 8766), (8777, 8767)] | ||
918 | 477 | } | ||
919 | 478 | |||
920 | 479 | The endpoints list consists of tuples mapping external ports | ||
921 | 480 | to internal ports. | ||
922 | 481 | """ | ||
923 | 482 | interfaces = ['https'] | ||
924 | 483 | |||
925 | 484 | # charms should inherit this context and set external ports | ||
926 | 485 | # and service namespace accordingly. | ||
927 | 486 | external_ports = [] | ||
928 | 487 | service_namespace = None | ||
929 | 488 | |||
930 | 489 | def enable_modules(self): | ||
931 | 490 | cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http'] | ||
932 | 491 | check_call(cmd) | ||
933 | 492 | |||
934 | 493 | def configure_cert(self): | ||
935 | 494 | if not os.path.isdir('/etc/apache2/ssl'): | ||
936 | 495 | os.mkdir('/etc/apache2/ssl') | ||
937 | 496 | ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace) | ||
938 | 497 | if not os.path.isdir(ssl_dir): | ||
939 | 498 | os.mkdir(ssl_dir) | ||
940 | 499 | cert, key = get_cert() | ||
941 | 500 | with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out: | ||
942 | 501 | cert_out.write(b64decode(cert)) | ||
943 | 502 | with open(os.path.join(ssl_dir, 'key'), 'w') as key_out: | ||
944 | 503 | key_out.write(b64decode(key)) | ||
945 | 504 | ca_cert = get_ca_cert() | ||
946 | 505 | if ca_cert: | ||
947 | 506 | with open(CA_CERT_PATH, 'w') as ca_out: | ||
948 | 507 | ca_out.write(b64decode(ca_cert)) | ||
949 | 508 | check_call(['update-ca-certificates']) | ||
950 | 509 | |||
951 | 510 | def __call__(self): | ||
952 | 511 | if isinstance(self.external_ports, basestring): | ||
953 | 512 | self.external_ports = [self.external_ports] | ||
954 | 513 | if (not self.external_ports or not https()): | ||
955 | 514 | return {} | ||
956 | 515 | |||
957 | 516 | self.configure_cert() | ||
958 | 517 | self.enable_modules() | ||
959 | 518 | |||
960 | 519 | ctxt = { | ||
961 | 520 | 'namespace': self.service_namespace, | ||
962 | 521 | 'private_address': unit_get('private-address'), | ||
963 | 522 | 'endpoints': [] | ||
964 | 523 | } | ||
965 | 524 | if is_clustered(): | ||
966 | 525 | ctxt['private_address'] = config('vip') | ||
967 | 526 | for api_port in self.external_ports: | ||
968 | 527 | ext_port = determine_apache_port(api_port) | ||
969 | 528 | int_port = determine_api_port(api_port) | ||
970 | 529 | portmap = (int(ext_port), int(int_port)) | ||
971 | 530 | ctxt['endpoints'].append(portmap) | ||
972 | 531 | return ctxt | ||
973 | 532 | |||
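The endpoint mapping ApacheSSLContext builds can be sketched in isolation. In the real code determine_apache_port/determine_api_port derive the offsets from HA and https state; the fixed -10 shift below is only a stand-in that reproduces the mapping shown in the class docstring:

```python
# Sketch of ApacheSSLContext's endpoint list: each external API port
# is paired with the internal port the backing service binds. The -10
# offset here is hypothetical; the charmhelpers cluster helpers compute
# the real values.
external_ports = [8776, 8777]
endpoints = []
for api_port in external_ports:
    ext_port = api_port        # port Apache terminates TLS on
    int_port = api_port - 10   # hypothetical backend port
    endpoints.append((int(ext_port), int(int_port)))
```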
974 | 533 | |||
975 | 534 | class NeutronContext(OSContextGenerator): | ||
976 | 535 | interfaces = [] | ||
977 | 536 | |||
978 | 537 | @property | ||
979 | 538 | def plugin(self): | ||
980 | 539 | return None | ||
981 | 540 | |||
982 | 541 | @property | ||
983 | 542 | def network_manager(self): | ||
984 | 543 | return None | ||
985 | 544 | |||
986 | 545 | @property | ||
987 | 546 | def packages(self): | ||
988 | 547 | return neutron_plugin_attribute( | ||
989 | 548 | self.plugin, 'packages', self.network_manager) | ||
990 | 549 | |||
991 | 550 | @property | ||
992 | 551 | def neutron_security_groups(self): | ||
993 | 552 | return None | ||
994 | 553 | |||
995 | 554 | def _ensure_packages(self): | ||
996 | 555 | [ensure_packages(pkgs) for pkgs in self.packages] | ||
997 | 556 | |||
998 | 557 | def _save_flag_file(self): | ||
999 | 558 | if self.network_manager == 'quantum': | ||
1000 | 559 | _file = '/etc/nova/quantum_plugin.conf' | ||
1001 | 560 | else: | ||
1002 | 561 | _file = '/etc/nova/neutron_plugin.conf' | ||
1003 | 562 | with open(_file, 'wb') as out: | ||
1004 | 563 | out.write(self.plugin + '\n') | ||
1005 | 564 | |||
1006 | 565 | def ovs_ctxt(self): | ||
1007 | 566 | driver = neutron_plugin_attribute(self.plugin, 'driver', | ||
1008 | 567 | self.network_manager) | ||
1009 | 568 | config = neutron_plugin_attribute(self.plugin, 'config', | ||
1010 | 569 | self.network_manager) | ||
1011 | 570 | ovs_ctxt = { | ||
1012 | 571 | 'core_plugin': driver, | ||
1013 | 572 | 'neutron_plugin': 'ovs', | ||
1014 | 573 | 'neutron_security_groups': self.neutron_security_groups, | ||
1015 | 574 | 'local_ip': unit_private_ip(), | ||
1016 | 575 | 'config': config | ||
1017 | 576 | } | ||
1018 | 577 | |||
1019 | 578 | return ovs_ctxt | ||
1020 | 579 | |||
1021 | 580 | def nvp_ctxt(self): | ||
1022 | 581 | driver = neutron_plugin_attribute(self.plugin, 'driver', | ||
1023 | 582 | self.network_manager) | ||
1024 | 583 | config = neutron_plugin_attribute(self.plugin, 'config', | ||
1025 | 584 | self.network_manager) | ||
1026 | 585 | nvp_ctxt = { | ||
1027 | 586 | 'core_plugin': driver, | ||
1028 | 587 | 'neutron_plugin': 'nvp', | ||
1029 | 588 | 'neutron_security_groups': self.neutron_security_groups, | ||
1030 | 589 | 'local_ip': unit_private_ip(), | ||
1031 | 590 | 'config': config | ||
1032 | 591 | } | ||
1033 | 592 | |||
1034 | 593 | return nvp_ctxt | ||
1035 | 594 | |||
1036 | 595 | def n1kv_ctxt(self): | ||
1037 | 596 | driver = neutron_plugin_attribute(self.plugin, 'driver', | ||
1038 | 597 | self.network_manager) | ||
1039 | 598 | n1kv_config = neutron_plugin_attribute(self.plugin, 'config', | ||
1040 | 599 | self.network_manager) | ||
1041 | 600 | n1kv_ctxt = { | ||
1042 | 601 | 'core_plugin': driver, | ||
1043 | 602 | 'neutron_plugin': 'n1kv', | ||
1044 | 603 | 'neutron_security_groups': self.neutron_security_groups, | ||
1045 | 604 | 'local_ip': unit_private_ip(), | ||
1046 | 605 | 'config': n1kv_config, | ||
1047 | 606 | 'vsm_ip': config('n1kv-vsm-ip'), | ||
1048 | 607 | 'vsm_username': config('n1kv-vsm-username'), | ||
1049 | 608 | 'vsm_password': config('n1kv-vsm-password'), | ||
1050 | 609 | 'restrict_policy_profiles': config( | ||
1051 | 610 | 'n1kv_restrict_policy_profiles'), | ||
1052 | 611 | } | ||
1053 | 612 | |||
1054 | 613 | return n1kv_ctxt | ||
1055 | 614 | |||
1056 | 615 | def neutron_ctxt(self): | ||
1057 | 616 | if https(): | ||
1058 | 617 | proto = 'https' | ||
1059 | 618 | else: | ||
1060 | 619 | proto = 'http' | ||
1061 | 620 | if is_clustered(): | ||
1062 | 621 | host = config('vip') | ||
1063 | 622 | else: | ||
1064 | 623 | host = unit_get('private-address') | ||
1065 | 624 | url = '%s://%s:%s' % (proto, host, '9696') | ||
1066 | 625 | ctxt = { | ||
1067 | 626 | 'network_manager': self.network_manager, | ||
1068 | 627 | 'neutron_url': url, | ||
1069 | 628 | } | ||
1070 | 629 | return ctxt | ||
1071 | 630 | |||
1072 | 631 | def __call__(self): | ||
1073 | 632 | self._ensure_packages() | ||
1074 | 633 | |||
1075 | 634 | if self.network_manager not in ['quantum', 'neutron']: | ||
1076 | 635 | return {} | ||
1077 | 636 | |||
1078 | 637 | if not self.plugin: | ||
1079 | 638 | return {} | ||
1080 | 639 | |||
1081 | 640 | ctxt = self.neutron_ctxt() | ||
1082 | 641 | |||
1083 | 642 | if self.plugin == 'ovs': | ||
1084 | 643 | ctxt.update(self.ovs_ctxt()) | ||
1085 | 644 | elif self.plugin in ['nvp', 'nsx']: | ||
1086 | 645 | ctxt.update(self.nvp_ctxt()) | ||
1087 | 646 | elif self.plugin == 'n1kv': | ||
1088 | 647 | ctxt.update(self.n1kv_ctxt()) | ||
1089 | 648 | |||
1090 | 649 | alchemy_flags = config('neutron-alchemy-flags') | ||
1091 | 650 | if alchemy_flags: | ||
1092 | 651 | flags = config_flags_parser(alchemy_flags) | ||
1093 | 652 | ctxt['neutron_alchemy_flags'] = flags | ||
1094 | 653 | |||
1095 | 654 | self._save_flag_file() | ||
1096 | 655 | return ctxt | ||
1097 | 656 | |||
1098 | 657 | |||
1099 | 658 | class OSConfigFlagContext(OSContextGenerator): | ||
1100 | 659 | |||
1101 | 660 | """ | ||
1102 | 661 | Responsible for adding user-defined config-flags in charm config to a | ||
1103 | 662 | template context. | ||
1104 | 663 | |||
1105 | 664 | NOTE: the value of config-flags may be a comma-separated list of | ||
1106 | 665 | key=value pairs and some Openstack config files support | ||
1107 | 666 | comma-separated lists as values. | ||
1108 | 667 | """ | ||
1109 | 668 | |||
1110 | 669 | def __call__(self): | ||
1111 | 670 | config_flags = config('config-flags') | ||
1112 | 671 | if not config_flags: | ||
1113 | 672 | return {} | ||
1114 | 673 | |||
1115 | 674 | flags = config_flags_parser(config_flags) | ||
1116 | 675 | return {'user_config_flags': flags} | ||
1117 | 676 | |||
1118 | 677 | |||
1119 | 678 | class SubordinateConfigContext(OSContextGenerator): | ||
1120 | 679 | |||
1121 | 680 | """ | ||
1122 | 681 | Responsible for inspecting relations to subordinates that | ||
1123 | 682 | may be exporting required config via a json blob. | ||
1124 | 683 | |||
1125 | 684 | The subordinate interface allows subordinates to export their | ||
1126 | 685 | configuration requirements to the principal for multiple config | ||
1127 | 686 | files and multiple services. I.e., a subordinate that has interfaces | ||
1128 | 687 | to both glance and nova may export the following yaml blob as json:: | ||
1129 | 688 | |||
1130 | 689 | glance: | ||
1131 | 690 | /etc/glance/glance-api.conf: | ||
1132 | 691 | sections: | ||
1133 | 692 | DEFAULT: | ||
1134 | 693 | - [key1, value1] | ||
1135 | 694 | /etc/glance/glance-registry.conf: | ||
1136 | 695 | MYSECTION: | ||
1137 | 696 | - [key2, value2] | ||
1138 | 697 | nova: | ||
1139 | 698 | /etc/nova/nova.conf: | ||
1140 | 699 | sections: | ||
1141 | 700 | DEFAULT: | ||
1142 | 701 | - [key3, value3] | ||
1143 | 702 | |||
1144 | 703 | |||
1145 | 704 | It is then up to the principal charms to subscribe this context to | ||
1146 | 705 | the service+config file it is interested in. Configuration data will | ||
1147 | 706 | be available in the template context, in glance's case, as:: | ||
1148 | 707 | |||
1149 | 708 | ctxt = { | ||
1150 | 709 | ... other context ... | ||
1151 | 710 | 'subordinate_config': { | ||
1152 | 711 | 'DEFAULT': { | ||
1153 | 712 | 'key1': 'value1', | ||
1154 | 713 | }, | ||
1155 | 714 | 'MYSECTION': { | ||
1156 | 715 | 'key2': 'value2', | ||
1157 | 716 | }, | ||
1158 | 717 | } | ||
1159 | 718 | } | ||
1160 | 719 | |||
1161 | 720 | """ | ||
1162 | 721 | |||
1163 | 722 | def __init__(self, service, config_file, interface): | ||
1164 | 723 | """ | ||
1165 | 724 | :param service : Service name key to query in any subordinate | ||
1166 | 725 | data found | ||
1167 | 726 | :param config_file : Service's config file to query sections | ||
1168 | 727 | :param interface : Subordinate interface to inspect | ||
1169 | 728 | """ | ||
1170 | 729 | self.service = service | ||
1171 | 730 | self.config_file = config_file | ||
1172 | 731 | self.interface = interface | ||
1173 | 732 | |||
1174 | 733 | def __call__(self): | ||
1175 | 734 | ctxt = {'sections': {}} | ||
1176 | 735 | for rid in relation_ids(self.interface): | ||
1177 | 736 | for unit in related_units(rid): | ||
1178 | 737 | sub_config = relation_get('subordinate_configuration', | ||
1179 | 738 | rid=rid, unit=unit) | ||
1180 | 739 | if sub_config and sub_config != '': | ||
1181 | 740 | try: | ||
1182 | 741 | sub_config = json.loads(sub_config) | ||
1183 | 742 | except: | ||
1184 | 743 | log('Could not parse JSON from subordinate_config ' | ||
1185 | 744 | 'setting from %s' % rid, level=ERROR) | ||
1186 | 745 | continue | ||
1187 | 746 | |||
1188 | 747 | if self.service not in sub_config: | ||
1189 | 748 | log('Found subordinate_config on %s but it contained ' | ||
1190 | 749 | 'nothing for %s service' % (rid, self.service)) | ||
1191 | 750 | continue | ||
1192 | 751 | |||
1193 | 752 | sub_config = sub_config[self.service] | ||
1194 | 753 | if self.config_file not in sub_config: | ||
1195 | 754 | log('Found subordinate_config on %s but it contained ' | ||
1196 | 755 | 'nothing for %s' % (rid, self.config_file)) | ||
1197 | 756 | continue | ||
1198 | 757 | |||
1199 | 758 | sub_config = sub_config[self.config_file] | ||
1200 | 759 | for k, v in sub_config.iteritems(): | ||
1201 | 760 | if k == 'sections': | ||
1202 | 761 | for section, config_dict in v.iteritems(): | ||
1203 | 762 | log("adding section '%s'" % (section)) | ||
1204 | 763 | ctxt[k][section] = config_dict | ||
1205 | 764 | else: | ||
1206 | 765 | ctxt[k] = v | ||
1207 | 766 | |||
1208 | 767 | log("%d section(s) found" % (len(ctxt['sections'])), level=INFO) | ||
1209 | 768 | |||
1210 | 769 | return ctxt | ||
1211 | 770 | |||
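The merge SubordinateConfigContext.__call__ performs can be shown standalone: the subordinate's JSON blob is keyed by service, then by config file, and each 'sections' entry is folded into the context. The blob below mirrors the glance example from the class docstring:

```python
# Standalone sketch of the subordinate_configuration merge: a JSON
# blob keyed by service and config file, whose 'sections' entries are
# folded into ctxt['sections'].
import json

blob = json.dumps({
    'glance': {
        '/etc/glance/glance-api.conf': {
            'sections': {'DEFAULT': [['key1', 'value1']]}
        }
    }
})

ctxt = {'sections': {}}
sub_config = json.loads(blob)['glance']['/etc/glance/glance-api.conf']
for k, v in sub_config.items():
    if k == 'sections':
        for section, entries in v.items():
            ctxt[k][section] = entries
    else:
        ctxt[k] = v
```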
1212 | 771 | |||
1213 | 772 | class LogLevelContext(OSContextGenerator): | ||
1214 | 773 | |||
1215 | 774 | def __call__(self): | ||
1216 | 775 | ctxt = {} | ||
1217 | 776 | ctxt['debug'] = \ | ||
1218 | 777 | False if config('debug') is None else config('debug') | ||
1219 | 778 | ctxt['verbose'] = \ | ||
1220 | 779 | False if config('verbose') is None else config('verbose') | ||
1221 | 780 | return ctxt | ||
1222 | 781 | |||
1223 | 782 | |||
1224 | 783 | class SyslogContext(OSContextGenerator): | ||
1225 | 784 | |||
1226 | 785 | def __call__(self): | ||
1227 | 786 | ctxt = { | ||
1228 | 787 | 'use_syslog': config('use-syslog') | ||
1229 | 788 | } | ||
1230 | 789 | return ctxt | ||
1231 | 0 | 790 | ||
1232 | === added file 'hooks/charmhelpers/contrib/openstack/ip.py' | |||
1233 | --- hooks/charmhelpers/contrib/openstack/ip.py 1970-01-01 00:00:00 +0000 | |||
1234 | +++ hooks/charmhelpers/contrib/openstack/ip.py 2014-09-10 21:17:48 +0000 | |||
1235 | @@ -0,0 +1,79 @@ | |||
1236 | 1 | from charmhelpers.core.hookenv import ( | ||
1237 | 2 | config, | ||
1238 | 3 | unit_get, | ||
1239 | 4 | ) | ||
1240 | 5 | |||
1241 | 6 | from charmhelpers.contrib.network.ip import ( | ||
1242 | 7 | get_address_in_network, | ||
1243 | 8 | is_address_in_network, | ||
1244 | 9 | is_ipv6, | ||
1245 | 10 | get_ipv6_addr, | ||
1246 | 11 | ) | ||
1247 | 12 | |||
1248 | 13 | from charmhelpers.contrib.hahelpers.cluster import is_clustered | ||
1249 | 14 | |||
1250 | 15 | PUBLIC = 'public' | ||
1251 | 16 | INTERNAL = 'int' | ||
1252 | 17 | ADMIN = 'admin' | ||
1253 | 18 | |||
1254 | 19 | _address_map = { | ||
1255 | 20 | PUBLIC: { | ||
1256 | 21 | 'config': 'os-public-network', | ||
1257 | 22 | 'fallback': 'public-address' | ||
1258 | 23 | }, | ||
1259 | 24 | INTERNAL: { | ||
1260 | 25 | 'config': 'os-internal-network', | ||
1261 | 26 | 'fallback': 'private-address' | ||
1262 | 27 | }, | ||
1263 | 28 | ADMIN: { | ||
1264 | 29 | 'config': 'os-admin-network', | ||
1265 | 30 | 'fallback': 'private-address' | ||
1266 | 31 | } | ||
1267 | 32 | } | ||
1268 | 33 | |||
1269 | 34 | |||
1270 | 35 | def canonical_url(configs, endpoint_type=PUBLIC): | ||
1271 | 36 | ''' | ||
1272 | 37 | Returns the correct HTTP URL to this host given the state of HTTPS | ||
1273 | 38 | configuration, hacluster and charm configuration. | ||
1274 | 39 | |||
1275 | 40 | :configs OSTemplateRenderer: A config templating object to inspect for | ||
1276 | 41 | a complete https context. | ||
1277 | 42 | :endpoint_type str: The endpoint type to resolve. | ||
1278 | 43 | |||
1279 | 44 | :returns str: Base URL for services on the current service unit. | ||
1280 | 45 | ''' | ||
1281 | 46 | scheme = 'http' | ||
1282 | 47 | if 'https' in configs.complete_contexts(): | ||
1283 | 48 | scheme = 'https' | ||
1284 | 49 | address = resolve_address(endpoint_type) | ||
1285 | 50 | if is_ipv6(address): | ||
1286 | 51 | address = "[{}]".format(address) | ||
1287 | 52 | return '%s://%s' % (scheme, address) | ||
1288 | 53 | |||
1289 | 54 | |||
1290 | 55 | def resolve_address(endpoint_type=PUBLIC): | ||
1291 | 56 | resolved_address = None | ||
1292 | 57 | if is_clustered(): | ||
1293 | 58 | if config(_address_map[endpoint_type]['config']) is None: | ||
1294 | 59 | # Assume vip is simple and pass back directly | ||
1295 | 60 | resolved_address = config('vip') | ||
1296 | 61 | else: | ||
1297 | 62 | for vip in config('vip').split(): | ||
1298 | 63 | if is_address_in_network( | ||
1299 | 64 | config(_address_map[endpoint_type]['config']), | ||
1300 | 65 | vip): | ||
1301 | 66 | resolved_address = vip | ||
1302 | 67 | else: | ||
1303 | 68 | if config('prefer-ipv6'): | ||
1304 | 69 | fallback_addr = get_ipv6_addr() | ||
1305 | 70 | else: | ||
1306 | 71 | fallback_addr = unit_get(_address_map[endpoint_type]['fallback']) | ||
1307 | 72 | resolved_address = get_address_in_network( | ||
1308 | 73 | config(_address_map[endpoint_type]['config']), fallback_addr) | ||
1309 | 74 | |||
1310 | 75 | if resolved_address is None: | ||
1311 | 76 | raise ValueError('Unable to resolve a suitable IP address' | ||
1312 | 77 | ' based on charm state and configuration') | ||
1313 | 78 | else: | ||
1314 | 79 | return resolved_address | ||
1315 | 0 | 80 | ||
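The scheme/address assembly in canonical_url() above has one subtlety worth illustrating: IPv6 literals must be bracketed before being embedded in a URL. The real code uses charmhelpers' is_ipv6(); the stdlib check below is a stand-in for illustration:

```python
# Sketch of canonical_url()'s address handling: bracket IPv6 literals,
# pass IPv4 addresses and hostnames through unchanged.
import ipaddress

def to_url(scheme, address):
    try:
        if ipaddress.ip_address(address).version == 6:
            address = "[{}]".format(address)
    except ValueError:
        pass  # hostnames pass through unchanged
    return '%s://%s' % (scheme, address)
```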
1316 | === added file 'hooks/charmhelpers/contrib/openstack/neutron.py' | |||
1317 | --- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000 | |||
1318 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-09-10 21:17:48 +0000 | |||
1319 | @@ -0,0 +1,201 @@ | |||
1320 | 1 | # Various utilities for dealing with Neutron and the renaming from Quantum. | ||
1321 | 2 | |||
1322 | 3 | from subprocess import check_output | ||
1323 | 4 | |||
1324 | 5 | from charmhelpers.core.hookenv import ( | ||
1325 | 6 | config, | ||
1326 | 7 | log, | ||
1327 | 8 | ERROR, | ||
1328 | 9 | ) | ||
1329 | 10 | |||
1330 | 11 | from charmhelpers.contrib.openstack.utils import os_release | ||
1331 | 12 | |||
1332 | 13 | |||
1333 | 14 | def headers_package(): | ||
1334 | 15 | """Returns the linux-headers package matching the running kernel, | ||
1335 | 16 | needed for building DKMS packages""" | ||
1336 | 17 | kver = check_output(['uname', '-r']).strip() | ||
1337 | 18 | return 'linux-headers-%s' % kver | ||
1338 | 19 | |||
1339 | 20 | QUANTUM_CONF_DIR = '/etc/quantum' | ||
1340 | 21 | |||
1341 | 22 | |||
1342 | 23 | def kernel_version(): | ||
1343 | 24 | """ Retrieve the running kernel version as a (major, minor) tuple, e.g. (3, 13) """ | ||
1344 | 25 | kver = check_output(['uname', '-r']).strip() | ||
1345 | 26 | kver = kver.split('.') | ||
1346 | 27 | return (int(kver[0]), int(kver[1])) | ||
1347 | 28 | |||
1348 | 29 | |||
1349 | 30 | def determine_dkms_package(): | ||
1350 | 31 | """ Determine which DKMS package should be used based on kernel version """ | ||
1351 | 32 | # NOTE: 3.13 kernels have support for GRE and VXLAN native | ||
1352 | 33 | if kernel_version() >= (3, 13): | ||
1353 | 34 | return [] | ||
1354 | 35 | else: | ||
1355 | 36 | return ['openvswitch-datapath-dkms'] | ||
1356 | 37 | |||
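The version gate behind determine_dkms_package() reduces the 'uname -r' string to a (major, minor) tuple and compares it against (3, 13), the first kernel with native GRE/VXLAN support, so openvswitch-datapath-dkms is only pulled on older kernels. A self-contained sketch (the helper name is hypothetical):

```python
# Sketch of the kernel-version comparison: split 'uname -r' output on
# dots, take the first two components, and compare numerically against
# (3, 13).
def needs_dkms(uname_r):
    major, minor = uname_r.split('.')[:2]
    return (int(major), int(minor)) < (3, 13)
```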
1357 | 38 | |||
1358 | 39 | # legacy | ||
1359 | 40 | |||
1360 | 41 | |||
1361 | 42 | def quantum_plugins(): | ||
1362 | 43 | from charmhelpers.contrib.openstack import context | ||
1363 | 44 | return { | ||
1364 | 45 | 'ovs': { | ||
1365 | 46 | 'config': '/etc/quantum/plugins/openvswitch/' | ||
1366 | 47 | 'ovs_quantum_plugin.ini', | ||
1367 | 48 | 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.' | ||
1368 | 49 | 'OVSQuantumPluginV2', | ||
1369 | 50 | 'contexts': [ | ||
1370 | 51 | context.SharedDBContext(user=config('neutron-database-user'), | ||
1371 | 52 | database=config('neutron-database'), | ||
1372 | 53 | relation_prefix='neutron', | ||
1373 | 54 | ssl_dir=QUANTUM_CONF_DIR)], | ||
1374 | 55 | 'services': ['quantum-plugin-openvswitch-agent'], | ||
1375 | 56 | 'packages': [[headers_package()] + determine_dkms_package(), | ||
1376 | 57 | ['quantum-plugin-openvswitch-agent']], | ||
1377 | 58 | 'server_packages': ['quantum-server', | ||
1378 | 59 | 'quantum-plugin-openvswitch'], | ||
1379 | 60 | 'server_services': ['quantum-server'] | ||
1380 | 61 | }, | ||
1381 | 62 | 'nvp': { | ||
1382 | 63 | 'config': '/etc/quantum/plugins/nicira/nvp.ini', | ||
1383 | 64 | 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.' | ||
1384 | 65 | 'QuantumPlugin.NvpPluginV2', | ||
1385 | 66 | 'contexts': [ | ||
1386 | 67 | context.SharedDBContext(user=config('neutron-database-user'), | ||
1387 | 68 | database=config('neutron-database'), | ||
1388 | 69 | relation_prefix='neutron', | ||
1389 | 70 | ssl_dir=QUANTUM_CONF_DIR)], | ||
1390 | 71 | 'services': [], | ||
1391 | 72 | 'packages': [], | ||
1392 | 73 | 'server_packages': ['quantum-server', | ||
1393 | 74 | 'quantum-plugin-nicira'], | ||
1394 | 75 | 'server_services': ['quantum-server'] | ||
1395 | 76 | } | ||
1396 | 77 | } | ||
1397 | 78 | |||
1398 | 79 | NEUTRON_CONF_DIR = '/etc/neutron' | ||
1399 | 80 | |||
1400 | 81 | |||
1401 | 82 | def neutron_plugins(): | ||
1402 | 83 | from charmhelpers.contrib.openstack import context | ||
1403 | 84 | release = os_release('nova-common') | ||
1404 | 85 | plugins = { | ||
1405 | 86 | 'ovs': { | ||
1406 | 87 | 'config': '/etc/neutron/plugins/openvswitch/' | ||
1407 | 88 | 'ovs_neutron_plugin.ini', | ||
1408 | 89 | 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.' | ||
1409 | 90 | 'OVSNeutronPluginV2', | ||
1410 | 91 | 'contexts': [ | ||
1411 | 92 | context.SharedDBContext(user=config('neutron-database-user'), | ||
1412 | 93 | database=config('neutron-database'), | ||
1413 | 94 | relation_prefix='neutron', | ||
1414 | 95 | ssl_dir=NEUTRON_CONF_DIR)], | ||
1415 | 96 | 'services': ['neutron-plugin-openvswitch-agent'], | ||
1416 | 97 | 'packages': [[headers_package()] + determine_dkms_package(), | ||
1417 | 98 | ['neutron-plugin-openvswitch-agent']], | ||
1418 | 99 | 'server_packages': ['neutron-server', | ||
1419 | 100 | 'neutron-plugin-openvswitch'], | ||
1420 | 101 | 'server_services': ['neutron-server'] | ||
1421 | 102 | }, | ||
1422 | 103 | 'nvp': { | ||
1423 | 104 | 'config': '/etc/neutron/plugins/nicira/nvp.ini', | ||
1424 | 105 | 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.' | ||
1425 | 106 | 'NeutronPlugin.NvpPluginV2', | ||
1426 | 107 | 'contexts': [ | ||
1427 | 108 | context.SharedDBContext(user=config('neutron-database-user'), | ||
1428 | 109 | database=config('neutron-database'), | ||
1429 | 110 | relation_prefix='neutron', | ||
1430 | 111 | ssl_dir=NEUTRON_CONF_DIR)], | ||
1431 | 112 | 'services': [], | ||
1432 | 113 | 'packages': [], | ||
1433 | 114 | 'server_packages': ['neutron-server', | ||
1434 | 115 | 'neutron-plugin-nicira'], | ||
1435 | 116 | 'server_services': ['neutron-server'] | ||
1436 | 117 | }, | ||
1437 | 118 | 'nsx': { | ||
1438 | 119 | 'config': '/etc/neutron/plugins/vmware/nsx.ini', | ||
1439 | 120 | 'driver': 'vmware', | ||
1440 | 121 | 'contexts': [ | ||
1441 | 122 | context.SharedDBContext(user=config('neutron-database-user'), | ||
1442 | 123 | database=config('neutron-database'), | ||
1443 | 124 | relation_prefix='neutron', | ||
1444 | 125 | ssl_dir=NEUTRON_CONF_DIR)], | ||
1445 | 126 | 'services': [], | ||
1446 | 127 | 'packages': [], | ||
1447 | 128 | 'server_packages': ['neutron-server', | ||
1448 | 129 | 'neutron-plugin-vmware'], | ||
1449 | 130 | 'server_services': ['neutron-server'] | ||
1450 | 131 | }, | ||
1451 | 132 | 'n1kv': { | ||
1452 | 133 | 'config': '/etc/neutron/plugins/cisco/cisco_plugins.ini', | ||
1453 | 134 | 'driver': 'neutron.plugins.cisco.network_plugin.PluginV2', | ||
1454 | 135 | 'contexts': [ | ||
1455 | 136 | context.SharedDBContext(user=config('neutron-database-user'), | ||
1456 | 137 | database=config('neutron-database'), | ||
1457 | 138 | relation_prefix='neutron', | ||
1458 | 139 | ssl_dir=NEUTRON_CONF_DIR)], | ||
1459 | 140 | 'services': [], | ||
1460 | 141 | 'packages': [['neutron-plugin-cisco']], | ||
1461 | 142 | 'server_packages': ['neutron-server', | ||
1462 | 143 | 'neutron-plugin-cisco'], | ||
1463 | 144 | 'server_services': ['neutron-server'] | ||
1464 | 145 | } | ||
1465 | 146 | } | ||
1466 | 147 | if release >= 'icehouse': | ||
1467 | 148 | # NOTE: patch in ml2 plugin for icehouse onwards | ||
1468 | 149 | plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini' | ||
1469 | 150 | plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin' | ||
1470 | 151 | plugins['ovs']['server_packages'] = ['neutron-server', | ||
1471 | 152 | 'neutron-plugin-ml2'] | ||
1472 | 153 | # NOTE: patch in vmware renames nvp->nsx for icehouse onwards | ||
1473 | 154 | plugins['nvp'] = plugins['nsx'] | ||
1474 | 155 | return plugins | ||
1475 | 156 | |||
1476 | 157 | |||
1477 | 158 | def neutron_plugin_attribute(plugin, attr, net_manager=None): | ||
1478 | 159 | manager = net_manager or network_manager() | ||
1479 | 160 | if manager == 'quantum': | ||
1480 | 161 | plugins = quantum_plugins() | ||
1481 | 162 | elif manager == 'neutron': | ||
1482 | 163 | plugins = neutron_plugins() | ||
1483 | 164 | else: | ||
1484 | 165 | log('Error: Network manager does not support plugins.') | ||
1485 | 166 | raise Exception | ||
1486 | 167 | |||
1487 | 168 | try: | ||
1488 | 169 | _plugin = plugins[plugin] | ||
1489 | 170 | except KeyError: | ||
1490 | 171 | log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR) | ||
1491 | 172 | raise Exception | ||
1492 | 173 | |||
1493 | 174 | try: | ||
1494 | 175 | return _plugin[attr] | ||
1495 | 176 | except KeyError: | ||
1496 | 177 | return None | ||
1497 | 178 | |||
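The lookup in `neutron_plugin_attribute` above treats an unknown plugin as an error but a missing attribute as `None`. A minimal standalone sketch of that behaviour, using a trimmed-down plugin map (the real map comes from `quantum_plugins()`/`neutron_plugins()`):

```python
# Sketch of the attribute lookup: unknown plugin -> error, missing attr -> None.
PLUGINS = {
    'ovs': {
        'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
        'server_packages': ['neutron-server', 'neutron-plugin-ml2'],
    },
}

def plugin_attribute(plugin, attr):
    """Return the requested plugin attribute, or None if it is unset."""
    try:
        _plugin = PLUGINS[plugin]
    except KeyError:
        raise Exception('Unrecognised plugin: %s' % plugin)
    # A missing attribute is not fatal; callers treat None as "unset".
    return _plugin.get(attr)

print(plugin_attribute('ovs', 'config'))
```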
1498 | 179 | |||
1499 | 180 | def network_manager(): | ||
1500 | 181 | ''' | ||
1501 | 182 | Deals with the renaming of Quantum to Neutron in H and any situations | ||
1502 | 183 | that require compatibility (eg, deploying H with network-manager=quantum, | ||
1503 | 184 | upgrading from G). | ||
1504 | 185 | ''' | ||
1505 | 186 | release = os_release('nova-common') | ||
1506 | 187 | manager = config('network-manager').lower() | ||
1507 | 188 | |||
1508 | 189 | if manager not in ['quantum', 'neutron']: | ||
1509 | 190 | return manager | ||
1510 | 191 | |||
1511 | 192 | if release in ['essex']: | ||
1512 | 193 | # E does not support neutron | ||
1513 | 194 | log('Neutron networking not supported in Essex.', level=ERROR) | ||
1514 | 195 | raise Exception | ||
1515 | 196 | elif release in ['folsom', 'grizzly']: | ||
1516 | 197 | # neutron is named quantum in F and G | ||
1517 | 198 | return 'quantum' | ||
1518 | 199 | else: | ||
1519 | 200 | # ensure accurate naming for all releases post-H | ||
1520 | 201 | return 'neutron' | ||
1521 | 0 | 202 | ||
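The quantum-to-neutron renaming rule in `network_manager()` can be summarised as: non-SDN managers pass through untouched, Essex has no neutron at all, Folsom/Grizzly use the old "quantum" name, and everything later uses "neutron". A hedged sketch with release names passed in directly instead of the `os_release()`/`config()` lookups:

```python
# Sketch of the network_manager() naming rule; release strings stand in for
# the charm's os_release()/config() calls.
def effective_network_manager(manager, release):
    if manager not in ('quantum', 'neutron'):
        return manager            # e.g. flatdhcpmanager passes through
    if release == 'essex':
        raise Exception('Neutron networking not supported in Essex.')
    if release in ('folsom', 'grizzly'):
        return 'quantum'          # pre-rename packaging
    return 'neutron'              # havana and later

print(effective_network_manager('neutron', 'grizzly'))
```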
1522 | === added directory 'hooks/charmhelpers/contrib/openstack/templates' | |||
1523 | === added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py' | |||
1524 | --- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000 | |||
1525 | +++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2014-09-10 21:17:48 +0000 | |||
1526 | @@ -0,0 +1,2 @@ | |||
1527 | 1 | # dummy __init__.py to fool syncer into thinking this is a syncable python | ||
1528 | 2 | # module | ||
1529 | 0 | 3 | ||
1530 | === added file 'hooks/charmhelpers/contrib/openstack/templating.py' | |||
1531 | --- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000 | |||
1532 | +++ hooks/charmhelpers/contrib/openstack/templating.py 2014-09-10 21:17:48 +0000 | |||
1533 | @@ -0,0 +1,279 @@ | |||
1534 | 1 | import os | ||
1535 | 2 | |||
1536 | 3 | from charmhelpers.fetch import apt_install | ||
1537 | 4 | |||
1538 | 5 | from charmhelpers.core.hookenv import ( | ||
1539 | 6 | log, | ||
1540 | 7 | ERROR, | ||
1541 | 8 | INFO | ||
1542 | 9 | ) | ||
1543 | 10 | |||
1544 | 11 | from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES | ||
1545 | 12 | |||
1546 | 13 | try: | ||
1547 | 14 | from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions | ||
1548 | 15 | except ImportError: | ||
1549 | 16 | # python-jinja2 may not be installed yet, or we're running unittests. | ||
1550 | 17 | FileSystemLoader = ChoiceLoader = Environment = exceptions = None | ||
1551 | 18 | |||
1552 | 19 | |||
1553 | 20 | class OSConfigException(Exception): | ||
1554 | 21 | pass | ||
1555 | 22 | |||
1556 | 23 | |||
1557 | 24 | def get_loader(templates_dir, os_release): | ||
1558 | 25 | """ | ||
1559 | 26 | Create a jinja2.ChoiceLoader containing template dirs up to | ||
1560 | 27 | and including os_release. If a release's template directory | ||
1561 | 28 | is missing under templates_dir, it is omitted from the loader. | ||
1562 | 29 | templates_dir is added to the bottom of the search list as a base | ||
1563 | 30 | loading dir. | ||
1564 | 31 | |||
1565 | 32 | A charm may also ship a templates dir with this module | ||
1566 | 33 | and it will be appended to the bottom of the search list, eg:: | ||
1567 | 34 | |||
1568 | 35 | hooks/charmhelpers/contrib/openstack/templates | ||
1569 | 36 | |||
1570 | 37 | :param templates_dir (str): Base template directory containing release | ||
1571 | 38 | sub-directories. | ||
1572 | 39 | :param os_release (str): OpenStack release codename to construct template | ||
1573 | 40 | loader. | ||
1574 | 41 | :returns: jinja2.ChoiceLoader constructed with a list of | ||
1575 | 42 | jinja2.FilesystemLoaders, ordered in descending | ||
1576 | 43 | order by OpenStack release. | ||
1577 | 44 | """ | ||
1578 | 45 | tmpl_dirs = [(rel, os.path.join(templates_dir, rel)) | ||
1579 | 46 | for rel in OPENSTACK_CODENAMES.itervalues()] | ||
1580 | 47 | |||
1581 | 48 | if not os.path.isdir(templates_dir): | ||
1582 | 49 | log('Templates directory not found @ %s.' % templates_dir, | ||
1583 | 50 | level=ERROR) | ||
1584 | 51 | raise OSConfigException | ||
1585 | 52 | |||
1586 | 53 | # the bottom contains templates_dir and possibly a common templates dir | ||
1587 | 54 | # shipped with the helper. | ||
1588 | 55 | loaders = [FileSystemLoader(templates_dir)] | ||
1589 | 56 | helper_templates = os.path.join(os.path.dirname(__file__), 'templates') | ||
1590 | 57 | if os.path.isdir(helper_templates): | ||
1591 | 58 | loaders.append(FileSystemLoader(helper_templates)) | ||
1592 | 59 | |||
1593 | 60 | for rel, tmpl_dir in tmpl_dirs: | ||
1594 | 61 | if os.path.isdir(tmpl_dir): | ||
1595 | 62 | loaders.insert(0, FileSystemLoader(tmpl_dir)) | ||
1596 | 63 | if rel == os_release: | ||
1597 | 64 | break | ||
1598 | 65 | log('Creating choice loader with dirs: %s' % | ||
1599 | 66 | [l.searchpath for l in loaders], level=INFO) | ||
1600 | 67 | return ChoiceLoader(loaders) | ||
1601 | 68 | |||
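The ordering `get_loader()` builds is easy to miss: release directories that exist are pushed to the front, newest first, stopping at the target release, with the base `templates_dir` always at the bottom. A sketch with directory existence faked by a set instead of `os.path.isdir`:

```python
# Sketch of get_loader()'s search-path ordering; "existing" fakes isdir().
from collections import OrderedDict

CODENAMES = OrderedDict([('2012.1', 'essex'), ('2012.2', 'folsom'),
                         ('2013.1', 'grizzly'), ('2013.2', 'havana')])

def search_dirs(templates_dir, os_release, existing):
    dirs = [templates_dir]                      # base dir: lowest priority
    for rel in CODENAMES.values():
        path = '%s/%s' % (templates_dir, rel)
        if path in existing:
            dirs.insert(0, path)                # newer release wins
        if rel == os_release:
            break                               # never look past the target
    return dirs

print(search_dirs('/tmp/templates', 'grizzly',
                  {'/tmp/templates/folsom', '/tmp/templates/grizzly'}))
```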
1602 | 69 | |||
1603 | 70 | class OSConfigTemplate(object): | ||
1604 | 71 | """ | ||
1605 | 72 | Associates a config file template with a list of context generators. | ||
1606 | 73 | Responsible for constructing a template context based on those generators. | ||
1607 | 74 | """ | ||
1608 | 75 | def __init__(self, config_file, contexts): | ||
1609 | 76 | self.config_file = config_file | ||
1610 | 77 | |||
1611 | 78 | if hasattr(contexts, '__call__'): | ||
1612 | 79 | self.contexts = [contexts] | ||
1613 | 80 | else: | ||
1614 | 81 | self.contexts = contexts | ||
1615 | 82 | |||
1616 | 83 | self._complete_contexts = [] | ||
1617 | 84 | |||
1618 | 85 | def context(self): | ||
1619 | 86 | ctxt = {} | ||
1620 | 87 | for context in self.contexts: | ||
1621 | 88 | _ctxt = context() | ||
1622 | 89 | if _ctxt: | ||
1623 | 90 | ctxt.update(_ctxt) | ||
1624 | 91 | # track interfaces for every complete context. | ||
1625 | 92 | [self._complete_contexts.append(interface) | ||
1626 | 93 | for interface in context.interfaces | ||
1627 | 94 | if interface not in self._complete_contexts] | ||
1628 | 95 | return ctxt | ||
1629 | 96 | |||
1630 | 97 | def complete_contexts(self): | ||
1631 | 98 | ''' | ||
1632 | 99 | Return a list of interfaces that have satisfied contexts. | ||
1633 | 100 | ''' | ||
1634 | 101 | if self._complete_contexts: | ||
1635 | 102 | return self._complete_contexts | ||
1636 | 103 | self.context() | ||
1637 | 104 | return self._complete_contexts | ||
1638 | 105 | |||
1639 | 106 | |||
1640 | 107 | class OSConfigRenderer(object): | ||
1641 | 108 | """ | ||
1642 | 109 | This class provides a common templating system to be used by OpenStack | ||
1643 | 110 | charms. It is intended to help charms share common code and templates, | ||
1644 | 111 | and ease the burden of managing config templates across multiple OpenStack | ||
1645 | 112 | releases. | ||
1646 | 113 | |||
1647 | 114 | Basic usage:: | ||
1648 | 115 | |||
1649 | 116 | # import some common context generators from charmhelpers | ||
1650 | 117 | from charmhelpers.contrib.openstack import context | ||
1651 | 118 | |||
1652 | 119 | # Create a renderer object for a specific OS release. | ||
1653 | 120 | configs = OSConfigRenderer(templates_dir='/tmp/templates', | ||
1654 | 121 | openstack_release='folsom') | ||
1655 | 122 | # register some config files with context generators. | ||
1656 | 123 | configs.register(config_file='/etc/nova/nova.conf', | ||
1657 | 124 | contexts=[context.SharedDBContext(), | ||
1658 | 125 | context.AMQPContext()]) | ||
1659 | 126 | configs.register(config_file='/etc/nova/api-paste.ini', | ||
1660 | 127 | contexts=[context.IdentityServiceContext()]) | ||
1661 | 128 | configs.register(config_file='/etc/haproxy/haproxy.conf', | ||
1662 | 129 | contexts=[context.HAProxyContext()]) | ||
1663 | 130 | # write out a single config | ||
1664 | 131 | configs.write('/etc/nova/nova.conf') | ||
1665 | 132 | # write out all registered configs | ||
1666 | 133 | configs.write_all() | ||
1667 | 134 | |||
1668 | 135 | **OpenStack Releases and template loading** | ||
1669 | 136 | |||
1670 | 137 | When the object is instantiated, it is associated with a specific OS | ||
1671 | 138 | release. This dictates how the template loader will be constructed. | ||
1672 | 139 | |||
1673 | 140 | The constructed loader attempts to load the template from several places | ||
1674 | 141 | in the following order: | ||
1675 | 142 | - from the most recent OS release-specific template dir (if one exists) | ||
1676 | 143 | - the base templates_dir | ||
1677 | 144 | - a template directory shipped in the charm with this helper file. | ||
1678 | 145 | |||
1679 | 146 | For the example above, '/tmp/templates' contains the following structure:: | ||
1680 | 147 | |||
1681 | 148 | /tmp/templates/nova.conf | ||
1682 | 149 | /tmp/templates/api-paste.ini | ||
1683 | 150 | /tmp/templates/grizzly/api-paste.ini | ||
1684 | 151 | /tmp/templates/havana/api-paste.ini | ||
1685 | 152 | |||
1686 | 153 | Since it was registered with the grizzly release, it first searches | ||
1687 | 154 | the grizzly directory for nova.conf, then the templates dir. | ||
1688 | 155 | |||
1689 | 156 | When writing api-paste.ini, it will find the template in the grizzly | ||
1690 | 157 | directory. | ||
1691 | 158 | |||
1692 | 159 | If the object were created with folsom, it would fall back to the | ||
1693 | 160 | base templates dir for its api-paste.ini template. | ||
1694 | 161 | |||
1695 | 162 | This system should help manage changes in config files through | ||
1696 | 163 | openstack releases, allowing charms to fall back to the most recently | ||
1697 | 164 | updated config template for a given release. | ||
1698 | 165 | |||
1699 | 166 | The haproxy.conf, since it is not shipped in the templates dir, will | ||
1700 | 167 | be loaded from the module directory's template directory, eg | ||
1701 | 168 | $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows | ||
1702 | 169 | us to ship common templates (haproxy, apache) with the helpers. | ||
1703 | 170 | |||
1704 | 171 | **Context generators** | ||
1705 | 172 | |||
1706 | 173 | Context generators are used to generate template contexts during hook | ||
1707 | 174 | execution. Doing so may require inspecting service relations, charm | ||
1708 | 175 | config, etc. When registered, a config file is associated with a list | ||
1709 | 176 | of generators. When a template is rendered and written, all context | ||
1710 | 177 | generators are called in a chain to generate the context dictionary | ||
1711 | 178 | passed to the jinja2 template. See context.py for more info. | ||
1712 | 179 | """ | ||
1713 | 180 | def __init__(self, templates_dir, openstack_release): | ||
1714 | 181 | if not os.path.isdir(templates_dir): | ||
1715 | 182 | log('Could not locate templates dir %s' % templates_dir, | ||
1716 | 183 | level=ERROR) | ||
1717 | 184 | raise OSConfigException | ||
1718 | 185 | |||
1719 | 186 | self.templates_dir = templates_dir | ||
1720 | 187 | self.openstack_release = openstack_release | ||
1721 | 188 | self.templates = {} | ||
1722 | 189 | self._tmpl_env = None | ||
1723 | 190 | |||
1724 | 191 | if None in [Environment, ChoiceLoader, FileSystemLoader]: | ||
1725 | 192 | # if this code is running, the object is created pre-install hook. | ||
1726 | 193 | # jinja2 shouldn't get touched until the module is reloaded on next | ||
1727 | 194 | # hook execution, with proper jinja2 bits successfully imported. | ||
1728 | 195 | apt_install('python-jinja2') | ||
1729 | 196 | |||
1730 | 197 | def register(self, config_file, contexts): | ||
1731 | 198 | """ | ||
1732 | 199 | Register a config file with a list of context generators to be called | ||
1733 | 200 | during rendering. | ||
1734 | 201 | """ | ||
1735 | 202 | self.templates[config_file] = OSConfigTemplate(config_file=config_file, | ||
1736 | 203 | contexts=contexts) | ||
1737 | 204 | log('Registered config file: %s' % config_file, level=INFO) | ||
1738 | 205 | |||
1739 | 206 | def _get_tmpl_env(self): | ||
1740 | 207 | if not self._tmpl_env: | ||
1741 | 208 | loader = get_loader(self.templates_dir, self.openstack_release) | ||
1742 | 209 | self._tmpl_env = Environment(loader=loader) | ||
1743 | 210 | |||
1744 | 211 | def _get_template(self, template): | ||
1745 | 212 | self._get_tmpl_env() | ||
1746 | 213 | template = self._tmpl_env.get_template(template) | ||
1747 | 214 | log('Loaded template from %s' % template.filename, level=INFO) | ||
1748 | 215 | return template | ||
1749 | 216 | |||
1750 | 217 | def render(self, config_file): | ||
1751 | 218 | if config_file not in self.templates: | ||
1752 | 219 | log('Config not registered: %s' % config_file, level=ERROR) | ||
1753 | 220 | raise OSConfigException | ||
1754 | 221 | ctxt = self.templates[config_file].context() | ||
1755 | 222 | |||
1756 | 223 | _tmpl = os.path.basename(config_file) | ||
1757 | 224 | try: | ||
1758 | 225 | template = self._get_template(_tmpl) | ||
1759 | 226 | except exceptions.TemplateNotFound: | ||
1760 | 227 | # if no template is found with basename, try looking for it | ||
1761 | 228 | # using a munged full path, eg: | ||
1762 | 229 | # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf | ||
1763 | 230 | _tmpl = '_'.join(config_file.split('/')[1:]) | ||
1764 | 231 | try: | ||
1765 | 232 | template = self._get_template(_tmpl) | ||
1766 | 233 | except exceptions.TemplateNotFound as e: | ||
1767 | 234 | log('Could not load template from %s by %s or %s.' % | ||
1768 | 235 | (self.templates_dir, os.path.basename(config_file), _tmpl), | ||
1769 | 236 | level=ERROR) | ||
1770 | 237 | raise e | ||
1771 | 238 | |||
1772 | 239 | log('Rendering from template: %s' % _tmpl, level=INFO) | ||
1773 | 240 | return template.render(ctxt) | ||
1774 | 241 | |||
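The fallback in `render()` above first tries the config file's basename, then a "munged" name derived from the full path. The munge is just a path-component join:

```python
# The munged-name transformation used by render()'s second lookup attempt.
def munged_template_name(config_file):
    # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
    return '_'.join(config_file.split('/')[1:])

print(munged_template_name('/etc/apache2/apache2.conf'))
```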
1775 | 242 | def write(self, config_file): | ||
1776 | 243 | """ | ||
1777 | 244 | Write a single config file, raises if config file is not registered. | ||
1778 | 245 | """ | ||
1779 | 246 | if config_file not in self.templates: | ||
1780 | 247 | log('Config not registered: %s' % config_file, level=ERROR) | ||
1781 | 248 | raise OSConfigException | ||
1782 | 249 | |||
1783 | 250 | _out = self.render(config_file) | ||
1784 | 251 | |||
1785 | 252 | with open(config_file, 'wb') as out: | ||
1786 | 253 | out.write(_out) | ||
1787 | 254 | |||
1788 | 255 | log('Wrote template %s.' % config_file, level=INFO) | ||
1789 | 256 | |||
1790 | 257 | def write_all(self): | ||
1791 | 258 | """ | ||
1792 | 259 | Write out all registered config files. | ||
1793 | 260 | """ | ||
1794 | 261 | [self.write(k) for k in self.templates.iterkeys()] | ||
1795 | 262 | |||
1796 | 263 | def set_release(self, openstack_release): | ||
1797 | 264 | """ | ||
1798 | 265 | Resets the template environment and generates a new template loader | ||
1799 | 266 | based on the new OpenStack release. | ||
1800 | 267 | """ | ||
1801 | 268 | self._tmpl_env = None | ||
1802 | 269 | self.openstack_release = openstack_release | ||
1803 | 270 | self._get_tmpl_env() | ||
1804 | 271 | |||
1805 | 272 | def complete_contexts(self): | ||
1806 | 273 | ''' | ||
1807 | 274 | Returns a list of context interfaces that yield a complete context. | ||
1808 | 275 | ''' | ||
1809 | 276 | interfaces = [] | ||
1810 | 277 | [interfaces.extend(i.complete_contexts()) | ||
1811 | 278 | for i in self.templates.itervalues()] | ||
1812 | 279 | return interfaces | ||
1813 | 0 | 280 | ||
1814 | === added file 'hooks/charmhelpers/contrib/openstack/utils.py' | |||
1815 | --- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000 | |||
1816 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2014-09-10 21:17:48 +0000 | |||
1817 | @@ -0,0 +1,459 @@ | |||
1818 | 1 | #!/usr/bin/python | ||
1819 | 2 | |||
1820 | 3 | # Common python helper functions used for OpenStack charms. | ||
1821 | 4 | from collections import OrderedDict | ||
1822 | 5 | |||
1823 | 6 | import subprocess | ||
1824 | 7 | import os | ||
1825 | 8 | import socket | ||
1826 | 9 | import sys | ||
1827 | 10 | |||
1828 | 11 | from charmhelpers.core.hookenv import ( | ||
1829 | 12 | config, | ||
1830 | 13 | log as juju_log, | ||
1831 | 14 | charm_dir, | ||
1832 | 15 | ERROR, | ||
1833 | 16 | INFO | ||
1834 | 17 | ) | ||
1835 | 18 | |||
1836 | 19 | from charmhelpers.contrib.storage.linux.lvm import ( | ||
1837 | 20 | deactivate_lvm_volume_group, | ||
1838 | 21 | is_lvm_physical_volume, | ||
1839 | 22 | remove_lvm_physical_volume, | ||
1840 | 23 | ) | ||
1841 | 24 | |||
1842 | 25 | from charmhelpers.core.host import lsb_release, mounts, umount | ||
1843 | 26 | from charmhelpers.fetch import apt_install, apt_cache | ||
1844 | 27 | from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk | ||
1845 | 28 | from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device | ||
1846 | 29 | |||
1847 | 30 | CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" | ||
1848 | 31 | CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' | ||
1849 | 32 | |||
1850 | 33 | DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed ' | ||
1851 | 34 | 'restricted main multiverse universe') | ||
1852 | 35 | |||
1853 | 36 | |||
1854 | 37 | UBUNTU_OPENSTACK_RELEASE = OrderedDict([ | ||
1855 | 38 | ('oneiric', 'diablo'), | ||
1856 | 39 | ('precise', 'essex'), | ||
1857 | 40 | ('quantal', 'folsom'), | ||
1858 | 41 | ('raring', 'grizzly'), | ||
1859 | 42 | ('saucy', 'havana'), | ||
1860 | 43 | ('trusty', 'icehouse'), | ||
1861 | 44 | ('utopic', 'juno'), | ||
1862 | 45 | ]) | ||
1863 | 46 | |||
1864 | 47 | |||
1865 | 48 | OPENSTACK_CODENAMES = OrderedDict([ | ||
1866 | 49 | ('2011.2', 'diablo'), | ||
1867 | 50 | ('2012.1', 'essex'), | ||
1868 | 51 | ('2012.2', 'folsom'), | ||
1869 | 52 | ('2013.1', 'grizzly'), | ||
1870 | 53 | ('2013.2', 'havana'), | ||
1871 | 54 | ('2014.1', 'icehouse'), | ||
1872 | 55 | ('2014.2', 'juno'), | ||
1873 | 56 | ]) | ||
1874 | 57 | |||
1875 | 58 | # The ugly duckling | ||
1876 | 59 | SWIFT_CODENAMES = OrderedDict([ | ||
1877 | 60 | ('1.4.3', 'diablo'), | ||
1878 | 61 | ('1.4.8', 'essex'), | ||
1879 | 62 | ('1.7.4', 'folsom'), | ||
1880 | 63 | ('1.8.0', 'grizzly'), | ||
1881 | 64 | ('1.7.7', 'grizzly'), | ||
1882 | 65 | ('1.7.6', 'grizzly'), | ||
1883 | 66 | ('1.10.0', 'havana'), | ||
1884 | 67 | ('1.9.1', 'havana'), | ||
1885 | 68 | ('1.9.0', 'havana'), | ||
1886 | 69 | ('1.13.1', 'icehouse'), | ||
1887 | 70 | ('1.13.0', 'icehouse'), | ||
1888 | 71 | ('1.12.0', 'icehouse'), | ||
1889 | 72 | ('1.11.0', 'icehouse'), | ||
1890 | 73 | ('2.0.0', 'juno'), | ||
1891 | 74 | ]) | ||
1892 | 75 | |||
1893 | 76 | DEFAULT_LOOPBACK_SIZE = '5G' | ||
1894 | 77 | |||
1895 | 78 | |||
1896 | 79 | def error_out(msg): | ||
1897 | 80 | juju_log("FATAL ERROR: %s" % msg, level='ERROR') | ||
1898 | 81 | sys.exit(1) | ||
1899 | 82 | |||
1900 | 83 | |||
1901 | 84 | def get_os_codename_install_source(src): | ||
1902 | 85 | '''Derive OpenStack release codename from a given installation source.''' | ||
1903 | 86 | ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] | ||
1904 | 87 | rel = '' | ||
1905 | 88 | if src is None: | ||
1906 | 89 | return rel | ||
1907 | 90 | if src in ['distro', 'distro-proposed']: | ||
1908 | 91 | try: | ||
1909 | 92 | rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel] | ||
1910 | 93 | except KeyError: | ||
1911 | 94 | e = 'Could not derive openstack release for '\ | ||
1912 | 95 | 'this Ubuntu release: %s' % ubuntu_rel | ||
1913 | 96 | error_out(e) | ||
1914 | 97 | return rel | ||
1915 | 98 | |||
1916 | 99 | if src.startswith('cloud:'): | ||
1917 | 100 | ca_rel = src.split(':')[1] | ||
1918 | 101 | ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0] | ||
1919 | 102 | return ca_rel | ||
1920 | 103 | |||
1921 | 104 | # Best guess match based on deb string provided | ||
1922 | 105 | if src.startswith('deb') or src.startswith('ppa'): | ||
1923 | 106 | for k, v in OPENSTACK_CODENAMES.iteritems(): | ||
1924 | 107 | if v in src: | ||
1925 | 108 | return v | ||
1926 | 109 | |||
1927 | 110 | |||
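The `cloud:` branch of `get_os_codename_install_source` extracts the codename by stripping the Ubuntu series prefix and any pocket suffix. A sketch, with the series passed as a parameter instead of coming from `lsb_release()`:

```python
# Sketch of the "cloud:<series>-<codename>[/pocket]" parsing; ubuntu_rel is
# an assumption standing in for lsb_release()['DISTRIB_CODENAME'].
def cloud_archive_codename(src, ubuntu_rel='precise'):
    ca_rel = src.split(':')[1]
    # strip the leading "<series>-" and any "/pocket" suffix
    return ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]

print(cloud_archive_codename('cloud:precise-havana/updates'))
```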
1928 | 111 | def get_os_version_install_source(src): | ||
1929 | 112 | codename = get_os_codename_install_source(src) | ||
1930 | 113 | return get_os_version_codename(codename) | ||
1931 | 114 | |||
1932 | 115 | |||
1933 | 116 | def get_os_codename_version(vers): | ||
1934 | 117 | '''Determine OpenStack codename from version number.''' | ||
1935 | 118 | try: | ||
1936 | 119 | return OPENSTACK_CODENAMES[vers] | ||
1937 | 120 | except KeyError: | ||
1938 | 121 | e = 'Could not determine OpenStack codename for version %s' % vers | ||
1939 | 122 | error_out(e) | ||
1940 | 123 | |||
1941 | 124 | |||
1942 | 125 | def get_os_version_codename(codename): | ||
1943 | 126 | '''Determine OpenStack version number from codename.''' | ||
1944 | 127 | for k, v in OPENSTACK_CODENAMES.iteritems(): | ||
1945 | 128 | if v == codename: | ||
1946 | 129 | return k | ||
1947 | 130 | e = 'Could not derive OpenStack version for '\ | ||
1948 | 131 | 'codename: %s' % codename | ||
1949 | 132 | error_out(e) | ||
1950 | 133 | |||
1951 | 134 | |||
1952 | 135 | def get_os_codename_package(package, fatal=True): | ||
1953 | 136 | '''Derive OpenStack release codename from an installed package.''' | ||
1954 | 137 | import apt_pkg as apt | ||
1955 | 138 | |||
1956 | 139 | cache = apt_cache() | ||
1957 | 140 | |||
1958 | 141 | try: | ||
1959 | 142 | pkg = cache[package] | ||
1960 | 143 | except: | ||
1961 | 144 | if not fatal: | ||
1962 | 145 | return None | ||
1963 | 146 | # the package is unknown to the current apt cache. | ||
1964 | 147 | e = 'Could not determine version of package with no installation '\ | ||
1965 | 148 | 'candidate: %s' % package | ||
1966 | 149 | error_out(e) | ||
1967 | 150 | |||
1968 | 151 | if not pkg.current_ver: | ||
1969 | 152 | if not fatal: | ||
1970 | 153 | return None | ||
1971 | 154 | # package is known, but no version is currently installed. | ||
1972 | 155 | e = 'Could not determine version of uninstalled package: %s' % package | ||
1973 | 156 | error_out(e) | ||
1974 | 157 | |||
1975 | 158 | vers = apt.upstream_version(pkg.current_ver.ver_str) | ||
1976 | 159 | |||
1977 | 160 | try: | ||
1978 | 161 | if 'swift' in pkg.name: | ||
1979 | 162 | swift_vers = vers[:5] | ||
1980 | 163 | if swift_vers not in SWIFT_CODENAMES: | ||
1981 | 164 | # Deal with 1.10.0 upward | ||
1982 | 165 | swift_vers = vers[:6] | ||
1983 | 166 | return SWIFT_CODENAMES[swift_vers] | ||
1984 | 167 | else: | ||
1985 | 168 | vers = vers[:6] | ||
1986 | 169 | return OPENSTACK_CODENAMES[vers] | ||
1987 | 170 | except KeyError: | ||
1988 | 171 | e = 'Could not determine OpenStack codename for version %s' % vers | ||
1989 | 172 | error_out(e) | ||
1990 | 173 | |||
1991 | 174 | |||
1992 | 175 | def get_os_version_package(pkg, fatal=True): | ||
1993 | 176 | '''Derive OpenStack version number from an installed package.''' | ||
1994 | 177 | codename = get_os_codename_package(pkg, fatal=fatal) | ||
1995 | 178 | |||
1996 | 179 | if not codename: | ||
1997 | 180 | return None | ||
1998 | 181 | |||
1999 | 182 | if 'swift' in pkg: | ||
2000 | 183 | vers_map = SWIFT_CODENAMES | ||
2001 | 184 | else: | ||
2002 | 185 | vers_map = OPENSTACK_CODENAMES | ||
2003 | 186 | |||
2004 | 187 | for version, cname in vers_map.iteritems(): | ||
2005 | 188 | if cname == codename: | ||
2006 | 189 | return version | ||
2007 | 190 | # e = "Could not determine OpenStack version for package: %s" % pkg | ||
2008 | 191 | # error_out(e) | ||
2009 | 192 | |||
2010 | 193 | |||
2011 | 194 | os_rel = None | ||
2012 | 195 | |||
2013 | 196 | |||
2014 | 197 | def os_release(package, base='essex'): | ||
2015 | 198 | ''' | ||
2016 | 199 | Returns OpenStack release codename from a cached global. | ||
2017 | 200 | If the codename cannot be determined from either an installed package or | ||
2018 | 201 | the installation source, the earliest release supported by the charm should | ||
2019 | 202 | be returned. | ||
2020 | 203 | ''' | ||
2021 | 204 | global os_rel | ||
2022 | 205 | if os_rel: | ||
2023 | 206 | return os_rel | ||
2024 | 207 | os_rel = (get_os_codename_package(package, fatal=False) or | ||
2025 | 208 | get_os_codename_install_source(config('openstack-origin')) or | ||
2026 | 209 | base) | ||
2027 | 210 | return os_rel | ||
2028 | 211 | |||
2029 | 212 | |||
2030 | 213 | def import_key(keyid): | ||
2031 | 214 | cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \ | ||
2032 | 215 | "--recv-keys %s" % keyid | ||
2033 | 216 | try: | ||
2034 | 217 | subprocess.check_call(cmd.split(' ')) | ||
2035 | 218 | except subprocess.CalledProcessError: | ||
2036 | 219 | error_out("Error importing repo key %s" % keyid) | ||
2037 | 220 | |||
2038 | 221 | |||
2039 | 222 | def configure_installation_source(rel): | ||
2040 | 223 | '''Configure apt installation source.''' | ||
2041 | 224 | if rel == 'distro': | ||
2042 | 225 | return | ||
2043 | 226 | elif rel == 'distro-proposed': | ||
2044 | 227 | ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] | ||
2045 | 228 | with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: | ||
2046 | 229 | f.write(DISTRO_PROPOSED % ubuntu_rel) | ||
2047 | 230 | elif rel[:4] == "ppa:": | ||
2048 | 231 | src = rel | ||
2049 | 232 | subprocess.check_call(["add-apt-repository", "-y", src]) | ||
2050 | 233 | elif rel[:3] == "deb": | ||
2051 | 234 | l = len(rel.split('|')) | ||
2052 | 235 | if l == 2: | ||
2053 | 236 | src, key = rel.split('|') | ||
2054 | 237 | juju_log("Importing PPA key from keyserver for %s" % src) | ||
2055 | 238 | import_key(key) | ||
2056 | 239 | elif l == 1: | ||
2057 | 240 | src = rel | ||
2058 | 241 | with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: | ||
2059 | 242 | f.write(src) | ||
2060 | 243 | elif rel[:6] == 'cloud:': | ||
2061 | 244 | ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] | ||
2062 | 245 | rel = rel.split(':')[1] | ||
2063 | 246 | u_rel = rel.split('-')[0] | ||
2064 | 247 | ca_rel = rel.split('-')[1] | ||
2065 | 248 | |||
2066 | 249 | if u_rel != ubuntu_rel: | ||
2067 | 250 | e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\ | ||
2068 | 251 | 'version (%s)' % (ca_rel, ubuntu_rel) | ||
2069 | 252 | error_out(e) | ||
2070 | 253 | |||
2071 | 254 | if 'staging' in ca_rel: | ||
2072 | 255 | # staging is just a regular PPA. | ||
2073 | 256 | os_rel = ca_rel.split('/')[0] | ||
2074 | 257 | ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel | ||
2075 | 258 | cmd = 'add-apt-repository -y %s' % ppa | ||
2076 | 259 | subprocess.check_call(cmd.split(' ')) | ||
2077 | 260 | return | ||
2078 | 261 | |||
2079 | 262 | # map charm config options to actual archive pockets. | ||
2080 | 263 | pockets = { | ||
2081 | 264 | 'folsom': 'precise-updates/folsom', | ||
2082 | 265 | 'folsom/updates': 'precise-updates/folsom', | ||
2083 | 266 | 'folsom/proposed': 'precise-proposed/folsom', | ||
2084 | 267 | 'grizzly': 'precise-updates/grizzly', | ||
2085 | 268 | 'grizzly/updates': 'precise-updates/grizzly', | ||
2086 | 269 | 'grizzly/proposed': 'precise-proposed/grizzly', | ||
2087 | 270 | 'havana': 'precise-updates/havana', | ||
2088 | 271 | 'havana/updates': 'precise-updates/havana', | ||
2089 | 272 | 'havana/proposed': 'precise-proposed/havana', | ||
2090 | 273 | 'icehouse': 'precise-updates/icehouse', | ||
2091 | 274 | 'icehouse/updates': 'precise-updates/icehouse', | ||
2092 | 275 | 'icehouse/proposed': 'precise-proposed/icehouse', | ||
2093 | 276 | 'juno': 'trusty-updates/juno', | ||
2094 | 277 | 'juno/updates': 'trusty-updates/juno', | ||
2095 | 278 | 'juno/proposed': 'trusty-proposed/juno', | ||
2096 | 279 | } | ||
2097 | 280 | |||
2098 | 281 | try: | ||
2099 | 282 | pocket = pockets[ca_rel] | ||
2100 | 283 | except KeyError: | ||
2101 | 284 | e = 'Invalid Cloud Archive release specified: %s' % rel | ||
2102 | 285 | error_out(e) | ||
2103 | 286 | |||
2104 | 287 | src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket) | ||
2105 | 288 | apt_install('ubuntu-cloud-keyring', fatal=True) | ||
2106 | 289 | |||
2107 | 290 | with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f: | ||
2108 | 291 | f.write(src) | ||
2109 | 292 | else: | ||
2110 | 293 | error_out("Invalid openstack-release specified: %s" % rel) | ||
2111 | 294 | |||
2112 | 295 | |||
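The `cloud:` branch of the source-configuration logic above splits the config value into an Ubuntu codename and a Cloud Archive release, validates the codename, and looks the release up in the `pockets` map. A minimal standalone sketch of that parse (the archive URL and the trimmed-down `POCKETS` dict are assumptions of this sketch, not the charm's exact constants):

```python
# Hypothetical standalone version of the 'cloud:' parsing shown above.
# POCKETS is a trimmed copy of the charm's config-value -> pocket mapping.
POCKETS = {
    'icehouse': 'precise-updates/icehouse',
    'icehouse/updates': 'precise-updates/icehouse',
    'icehouse/proposed': 'precise-proposed/icehouse',
}

# Assumed value of CLOUD_ARCHIVE_URL for illustration.
CLOUD_ARCHIVE_URL = 'http://ubuntu-cloud.archive.canonical.com/ubuntu'


def parse_cloud_source(rel, ubuntu_codename):
    """Resolve 'cloud:<ubuntu>-<release>' into an apt source line."""
    if not rel.startswith('cloud:'):
        raise ValueError('not a cloud archive source: %s' % rel)
    spec = rel.split(':', 1)[1]          # e.g. 'precise-icehouse/updates'
    u_rel, ca_rel = spec.split('-', 1)   # ubuntu codename, CA release
    if u_rel != ubuntu_codename:
        raise ValueError('cannot install pocket %s on %s'
                         % (ca_rel, ubuntu_codename))
    return 'deb %s %s main' % (CLOUD_ARCHIVE_URL, POCKETS[ca_rel])
```

The real hook additionally special-cases `staging` pockets (plain PPAs) and calls `error_out()` instead of raising.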
2113 | 296 | def save_script_rc(script_path="scripts/scriptrc", **env_vars): | ||
2114 | 297 | """ | ||
2115 | 298 | Write an rc file in the charm-delivered directory containing | ||
2116 | 299 | exported environment variables provided by env_vars. Any charm scripts run | ||
2117 | 300 | outside the juju hook environment can source this scriptrc to obtain | ||
2118 | 301 | updated config information necessary to perform health checks or | ||
2119 | 302 | service changes. | ||
2120 | 303 | """ | ||
2121 | 304 | juju_rc_path = "%s/%s" % (charm_dir(), script_path) | ||
2122 | 305 | if not os.path.exists(os.path.dirname(juju_rc_path)): | ||
2123 | 306 | os.mkdir(os.path.dirname(juju_rc_path)) | ||
2124 | 307 | with open(juju_rc_path, 'wb') as rc_script: | ||
2125 | 308 | rc_script.write( | ||
2126 | 309 | "#!/bin/bash\n") | ||
2127 | 310 | [rc_script.write('export %s=%s\n' % (u, p)) | ||
2128 | 311 | for u, p in env_vars.iteritems() if u != "script_path"] | ||
2129 | 312 | |||
2130 | 313 | |||
2131 | 314 | def openstack_upgrade_available(package): | ||
2132 | 315 | """ | ||
2133 | 316 | Determines if an OpenStack upgrade is available from installation | ||
2134 | 317 | source, based on version of installed package. | ||
2135 | 318 | |||
2136 | 319 | :param package: str: Name of installed package. | ||
2137 | 320 | |||
2138 | 321 | :returns: bool: True if configured installation source offers | ||
2139 | 322 | a newer version of package. | ||
2140 | 323 | |||
2141 | 324 | """ | ||
2142 | 325 | |||
2143 | 326 | import apt_pkg as apt | ||
2144 | 327 | src = config('openstack-origin') | ||
2145 | 328 | cur_vers = get_os_version_package(package) | ||
2146 | 329 | available_vers = get_os_version_install_source(src) | ||
2147 | 330 | apt.init() | ||
2148 | 331 | return apt.version_compare(available_vers, cur_vers) == 1 | ||
2149 | 332 | |||
2150 | 333 | |||
2151 | 334 | def ensure_block_device(block_device): | ||
2152 | 335 | ''' | ||
2153 | 336 | Confirm block_device, create as loopback if necessary. | ||
2154 | 337 | |||
2155 | 338 | :param block_device: str: Full path of block device to ensure. | ||
2156 | 339 | |||
2157 | 340 | :returns: str: Full path of ensured block device. | ||
2158 | 341 | ''' | ||
2159 | 342 | _none = ['None', 'none', None] | ||
2160 | 343 | if (block_device in _none): | ||
2161 | 344 | error_out('prepare_storage(): Missing required input: ' | ||
2162 | 345 | 'block_device=%s.' % block_device, level=ERROR) | ||
2163 | 346 | |||
2164 | 347 | if block_device.startswith('/dev/'): | ||
2165 | 348 | bdev = block_device | ||
2166 | 349 | elif block_device.startswith('/'): | ||
2167 | 350 | _bd = block_device.split('|') | ||
2168 | 351 | if len(_bd) == 2: | ||
2169 | 352 | bdev, size = _bd | ||
2170 | 353 | else: | ||
2171 | 354 | bdev = block_device | ||
2172 | 355 | size = DEFAULT_LOOPBACK_SIZE | ||
2173 | 356 | bdev = ensure_loopback_device(bdev, size) | ||
2174 | 357 | else: | ||
2175 | 358 | bdev = '/dev/%s' % block_device | ||
2176 | 359 | |||
2177 | 360 | if not is_block_device(bdev): | ||
2178 | 361 | error_out('Failed to locate valid block device at %s' % bdev, | ||
2179 | 362 | level=ERROR) | ||
2180 | 363 | |||
2181 | 364 | return bdev | ||
2182 | 365 | |||
2183 | 366 | |||
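`ensure_block_device()` above accepts three spellings of a device: a `/dev/...` node, an absolute backing-file path (optionally `path|size` for a loopback), or a bare device name. A small sketch of just that classification step, with a hypothetical helper name and an assumed default size constant:

```python
# Illustrative-only helper mirroring ensure_block_device()'s input handling;
# DEFAULT_LOOPBACK_SIZE here is an assumption, not the charm's constant.
DEFAULT_LOOPBACK_SIZE = '5G'


def parse_block_device(block_device):
    """Return (device_or_path, loopback_size); size is None for real devices."""
    if block_device.startswith('/dev/'):
        return block_device, None            # already a device node
    if block_device.startswith('/'):
        parts = block_device.split('|')      # optional '<path>|<size>' form
        if len(parts) == 2:
            return parts[0], parts[1]        # explicit loopback size
        return block_device, DEFAULT_LOOPBACK_SIZE
    return '/dev/%s' % block_device, None    # bare name like 'sdb'
```

The real function then hands loopback paths to `ensure_loopback_device()` and verifies the result with `is_block_device()`.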
2184 | 367 | def clean_storage(block_device): | ||
2185 | 368 | ''' | ||
2186 | 369 | Ensures a block device is clean. That is: | ||
2187 | 370 | - unmounted | ||
2188 | 371 | - any lvm volume groups are deactivated | ||
2189 | 372 | - any lvm physical device signatures removed | ||
2190 | 373 | - partition table wiped | ||
2191 | 374 | |||
2192 | 375 | :param block_device: str: Full path to block device to clean. | ||
2193 | 376 | ''' | ||
2194 | 377 | for mp, d in mounts(): | ||
2195 | 378 | if d == block_device: | ||
2196 | 379 | juju_log('clean_storage(): %s is mounted @ %s, unmounting.' % | ||
2197 | 380 | (d, mp), level=INFO) | ||
2198 | 381 | umount(mp, persist=True) | ||
2199 | 382 | |||
2200 | 383 | if is_lvm_physical_volume(block_device): | ||
2201 | 384 | deactivate_lvm_volume_group(block_device) | ||
2202 | 385 | remove_lvm_physical_volume(block_device) | ||
2203 | 386 | else: | ||
2204 | 387 | zap_disk(block_device) | ||
2205 | 388 | |||
2206 | 389 | |||
2207 | 390 | def is_ip(address): | ||
2208 | 391 | """ | ||
2209 | 392 | Returns True if address is a valid IP address. | ||
2210 | 393 | """ | ||
2211 | 394 | try: | ||
2212 | 395 | # Test to see if already an IPv4 address | ||
2213 | 396 | socket.inet_aton(address) | ||
2214 | 397 | return True | ||
2215 | 398 | except socket.error: | ||
2216 | 399 | return False | ||
2217 | 400 | |||
2218 | 401 | |||
2219 | 402 | def ns_query(address): | ||
2220 | 403 | try: | ||
2221 | 404 | import dns.resolver | ||
2222 | 405 | except ImportError: | ||
2223 | 406 | apt_install('python-dnspython') | ||
2224 | 407 | import dns.resolver | ||
2225 | 408 | |||
2226 | 409 | if isinstance(address, dns.name.Name): | ||
2227 | 410 | rtype = 'PTR' | ||
2228 | 411 | elif isinstance(address, basestring): | ||
2229 | 412 | rtype = 'A' | ||
2230 | 413 | else: | ||
2231 | 414 | return None | ||
2232 | 415 | |||
2233 | 416 | answers = dns.resolver.query(address, rtype) | ||
2234 | 417 | if answers: | ||
2235 | 418 | return str(answers[0]) | ||
2236 | 419 | return None | ||
2237 | 420 | |||
2238 | 421 | |||
2239 | 422 | def get_host_ip(hostname): | ||
2240 | 423 | """ | ||
2241 | 424 | Resolves the IP for a given hostname, or returns | ||
2242 | 425 | the input if it is already an IP. | ||
2243 | 426 | """ | ||
2244 | 427 | if is_ip(hostname): | ||
2245 | 428 | return hostname | ||
2246 | 429 | |||
2247 | 430 | return ns_query(hostname) | ||
2248 | 431 | |||
2249 | 432 | |||
2250 | 433 | def get_hostname(address, fqdn=True): | ||
2251 | 434 | """ | ||
2252 | 435 | Resolves hostname for given IP, or returns the input | ||
2253 | 436 | if it is already a hostname. | ||
2254 | 437 | """ | ||
2255 | 438 | if is_ip(address): | ||
2256 | 439 | try: | ||
2257 | 440 | import dns.reversename | ||
2258 | 441 | except ImportError: | ||
2259 | 442 | apt_install('python-dnspython') | ||
2260 | 443 | import dns.reversename | ||
2261 | 444 | |||
2262 | 445 | rev = dns.reversename.from_address(address) | ||
2263 | 446 | result = ns_query(rev) | ||
2264 | 447 | if not result: | ||
2265 | 448 | return None | ||
2266 | 449 | else: | ||
2267 | 450 | result = address | ||
2268 | 451 | |||
2269 | 452 | if fqdn: | ||
2270 | 453 | # strip trailing . | ||
2271 | 454 | if result.endswith('.'): | ||
2272 | 455 | return result[:-1] | ||
2273 | 456 | else: | ||
2274 | 457 | return result | ||
2275 | 458 | else: | ||
2276 | 459 | return result.split('.')[0] | ||
2277 | 0 | 460 | ||
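The `is_ip()` check above relies on `socket.inet_aton()`, which only accepts IPv4 dotted notation (and some shorthand forms such as `10.1`), so hostnames fall through to the DNS path. A self-contained restatement:

```python
import socket


def is_ipv4(address):
    """Mirror is_ip(): True only if inet_aton parses the string as IPv4."""
    try:
        socket.inet_aton(address)
        return True
    except socket.error:
        return False
```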
2278 | === added directory 'hooks/charmhelpers/contrib/storage' | |||
2279 | === added file 'hooks/charmhelpers/contrib/storage/__init__.py' | |||
2280 | === added directory 'hooks/charmhelpers/contrib/storage/linux' | |||
2281 | === added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py' | |||
2282 | === added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py' | |||
2283 | --- hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000 | |||
2284 | +++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-09-10 21:17:48 +0000 | |||
2285 | @@ -0,0 +1,387 @@ | |||
2286 | 1 | # | ||
2287 | 2 | # Copyright 2012 Canonical Ltd. | ||
2288 | 3 | # | ||
2289 | 4 | # This file is sourced from lp:openstack-charm-helpers | ||
2290 | 5 | # | ||
2291 | 6 | # Authors: | ||
2292 | 7 | # James Page <james.page@ubuntu.com> | ||
2293 | 8 | # Adam Gandelman <adamg@ubuntu.com> | ||
2294 | 9 | # | ||
2295 | 10 | |||
2296 | 11 | import os | ||
2297 | 12 | import shutil | ||
2298 | 13 | import json | ||
2299 | 14 | import time | ||
2300 | 15 | |||
2301 | 16 | from subprocess import ( | ||
2302 | 17 | check_call, | ||
2303 | 18 | check_output, | ||
2304 | 19 | CalledProcessError | ||
2305 | 20 | ) | ||
2306 | 21 | |||
2307 | 22 | from charmhelpers.core.hookenv import ( | ||
2308 | 23 | relation_get, | ||
2309 | 24 | relation_ids, | ||
2310 | 25 | related_units, | ||
2311 | 26 | log, | ||
2312 | 27 | INFO, | ||
2313 | 28 | WARNING, | ||
2314 | 29 | ERROR | ||
2315 | 30 | ) | ||
2316 | 31 | |||
2317 | 32 | from charmhelpers.core.host import ( | ||
2318 | 33 | mount, | ||
2319 | 34 | mounts, | ||
2320 | 35 | service_start, | ||
2321 | 36 | service_stop, | ||
2322 | 37 | service_running, | ||
2323 | 38 | umount, | ||
2324 | 39 | ) | ||
2325 | 40 | |||
2326 | 41 | from charmhelpers.fetch import ( | ||
2327 | 42 | apt_install, | ||
2328 | 43 | ) | ||
2329 | 44 | |||
2330 | 45 | KEYRING = '/etc/ceph/ceph.client.{}.keyring' | ||
2331 | 46 | KEYFILE = '/etc/ceph/ceph.client.{}.key' | ||
2332 | 47 | |||
2333 | 48 | CEPH_CONF = """[global] | ||
2334 | 49 | auth supported = {auth} | ||
2335 | 50 | keyring = {keyring} | ||
2336 | 51 | mon host = {mon_hosts} | ||
2337 | 52 | log to syslog = {use_syslog} | ||
2338 | 53 | err to syslog = {use_syslog} | ||
2339 | 54 | clog to syslog = {use_syslog} | ||
2340 | 55 | """ | ||
2341 | 56 | |||
2342 | 57 | |||
2343 | 58 | def install(): | ||
2344 | 59 | ''' Basic Ceph client installation ''' | ||
2345 | 60 | ceph_dir = "/etc/ceph" | ||
2346 | 61 | if not os.path.exists(ceph_dir): | ||
2347 | 62 | os.mkdir(ceph_dir) | ||
2348 | 63 | apt_install('ceph-common', fatal=True) | ||
2349 | 64 | |||
2350 | 65 | |||
2351 | 66 | def rbd_exists(service, pool, rbd_img): | ||
2352 | 67 | ''' Check to see if a RADOS block device exists ''' | ||
2353 | 68 | try: | ||
2354 | 69 | out = check_output(['rbd', 'list', '--id', service, | ||
2355 | 70 | '--pool', pool]) | ||
2356 | 71 | except CalledProcessError: | ||
2357 | 72 | return False | ||
2358 | 73 | else: | ||
2359 | 74 | return rbd_img in out | ||
2360 | 75 | |||
2361 | 76 | |||
2362 | 77 | def create_rbd_image(service, pool, image, sizemb): | ||
2363 | 78 | ''' Create a new RADOS block device ''' | ||
2364 | 79 | cmd = [ | ||
2365 | 80 | 'rbd', | ||
2366 | 81 | 'create', | ||
2367 | 82 | image, | ||
2368 | 83 | '--size', | ||
2369 | 84 | str(sizemb), | ||
2370 | 85 | '--id', | ||
2371 | 86 | service, | ||
2372 | 87 | '--pool', | ||
2373 | 88 | pool | ||
2374 | 89 | ] | ||
2375 | 90 | check_call(cmd) | ||
2376 | 91 | |||
2377 | 92 | |||
2378 | 93 | def pool_exists(service, name): | ||
2379 | 94 | ''' Check to see if a RADOS pool already exists ''' | ||
2380 | 95 | try: | ||
2381 | 96 | out = check_output(['rados', '--id', service, 'lspools']) | ||
2382 | 97 | except CalledProcessError: | ||
2383 | 98 | return False | ||
2384 | 99 | else: | ||
2385 | 100 | return name in out | ||
2386 | 101 | |||
2387 | 102 | |||
2388 | 103 | def get_osds(service): | ||
2389 | 104 | ''' | ||
2390 | 105 | Return a list of all Ceph Object Storage Daemons | ||
2391 | 106 | currently in the cluster | ||
2392 | 107 | ''' | ||
2393 | 108 | version = ceph_version() | ||
2394 | 109 | if version and version >= '0.56': | ||
2395 | 110 | return json.loads(check_output(['ceph', '--id', service, | ||
2396 | 111 | 'osd', 'ls', '--format=json'])) | ||
2397 | 112 | else: | ||
2398 | 113 | return None | ||
2399 | 114 | |||
2400 | 115 | |||
2401 | 116 | def create_pool(service, name, replicas=2): | ||
2402 | 117 | ''' Create a new RADOS pool ''' | ||
2403 | 118 | if pool_exists(service, name): | ||
2404 | 119 | log("Ceph pool {} already exists, skipping creation".format(name), | ||
2405 | 120 | level=WARNING) | ||
2406 | 121 | return | ||
2407 | 122 | # Calculate the number of placement groups based | ||
2408 | 123 | # on upstream recommended best practices. | ||
2409 | 124 | osds = get_osds(service) | ||
2410 | 125 | if osds: | ||
2411 | 126 | pgnum = (len(osds) * 100 / replicas) | ||
2412 | 127 | else: | ||
2413 | 128 | # NOTE(james-page): Default to 200 for older ceph versions | ||
2414 | 129 | # which don't support OSD query from cli | ||
2415 | 130 | pgnum = 200 | ||
2416 | 131 | cmd = [ | ||
2417 | 132 | 'ceph', '--id', service, | ||
2418 | 133 | 'osd', 'pool', 'create', | ||
2419 | 134 | name, str(pgnum) | ||
2420 | 135 | ] | ||
2421 | 136 | check_call(cmd) | ||
2422 | 137 | cmd = [ | ||
2423 | 138 | 'ceph', '--id', service, | ||
2424 | 139 | 'osd', 'pool', 'set', name, | ||
2425 | 140 | 'size', str(replicas) | ||
2426 | 141 | ] | ||
2427 | 142 | check_call(cmd) | ||
2428 | 143 | |||
2429 | 144 | |||
2430 | 145 | def delete_pool(service, name): | ||
2431 | 146 | ''' Delete a RADOS pool from ceph ''' | ||
2432 | 147 | cmd = [ | ||
2433 | 148 | 'ceph', '--id', service, | ||
2434 | 149 | 'osd', 'pool', 'delete', | ||
2435 | 150 | name, '--yes-i-really-really-mean-it' | ||
2436 | 151 | ] | ||
2437 | 152 | check_call(cmd) | ||
2438 | 153 | |||
2439 | 154 | |||
2440 | 155 | def _keyfile_path(service): | ||
2441 | 156 | return KEYFILE.format(service) | ||
2442 | 157 | |||
2443 | 158 | |||
2444 | 159 | def _keyring_path(service): | ||
2445 | 160 | return KEYRING.format(service) | ||
2446 | 161 | |||
2447 | 162 | |||
2448 | 163 | def create_keyring(service, key): | ||
2449 | 164 | ''' Create a new Ceph keyring containing key''' | ||
2450 | 165 | keyring = _keyring_path(service) | ||
2451 | 166 | if os.path.exists(keyring): | ||
2452 | 167 | log('ceph: Keyring exists at %s.' % keyring, level=WARNING) | ||
2453 | 168 | return | ||
2454 | 169 | cmd = [ | ||
2455 | 170 | 'ceph-authtool', | ||
2456 | 171 | keyring, | ||
2457 | 172 | '--create-keyring', | ||
2458 | 173 | '--name=client.{}'.format(service), | ||
2459 | 174 | '--add-key={}'.format(key) | ||
2460 | 175 | ] | ||
2461 | 176 | check_call(cmd) | ||
2462 | 177 | log('ceph: Created new ring at %s.' % keyring, level=INFO) | ||
2463 | 178 | |||
2464 | 179 | |||
2465 | 180 | def create_key_file(service, key): | ||
2466 | 181 | ''' Create a file containing key ''' | ||
2467 | 182 | keyfile = _keyfile_path(service) | ||
2468 | 183 | if os.path.exists(keyfile): | ||
2469 | 184 | log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING) | ||
2470 | 185 | return | ||
2471 | 186 | with open(keyfile, 'w') as fd: | ||
2472 | 187 | fd.write(key) | ||
2473 | 188 | log('ceph: Created new keyfile at %s.' % keyfile, level=INFO) | ||
2474 | 189 | |||
2475 | 190 | |||
2476 | 191 | def get_ceph_nodes(): | ||
2477 | 192 | ''' Query named relation 'ceph' to determine current nodes ''' | ||
2478 | 193 | hosts = [] | ||
2479 | 194 | for r_id in relation_ids('ceph'): | ||
2480 | 195 | for unit in related_units(r_id): | ||
2481 | 196 | hosts.append(relation_get('private-address', unit=unit, rid=r_id)) | ||
2482 | 197 | return hosts | ||
2483 | 198 | |||
2484 | 199 | |||
2485 | 200 | def configure(service, key, auth, use_syslog): | ||
2486 | 201 | ''' Perform basic configuration of Ceph ''' | ||
2487 | 202 | create_keyring(service, key) | ||
2488 | 203 | create_key_file(service, key) | ||
2489 | 204 | hosts = get_ceph_nodes() | ||
2490 | 205 | with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: | ||
2491 | 206 | ceph_conf.write(CEPH_CONF.format(auth=auth, | ||
2492 | 207 | keyring=_keyring_path(service), | ||
2493 | 208 | mon_hosts=",".join(map(str, hosts)), | ||
2494 | 209 | use_syslog=use_syslog)) | ||
2495 | 210 | modprobe('rbd') | ||
2496 | 211 | |||
2497 | 212 | |||
2498 | 213 | def image_mapped(name): | ||
2499 | 214 | ''' Determine whether a RADOS block device is mapped locally ''' | ||
2500 | 215 | try: | ||
2501 | 216 | out = check_output(['rbd', 'showmapped']) | ||
2502 | 217 | except CalledProcessError: | ||
2503 | 218 | return False | ||
2504 | 219 | else: | ||
2505 | 220 | return name in out | ||
2506 | 221 | |||
2507 | 222 | |||
2508 | 223 | def map_block_storage(service, pool, image): | ||
2509 | 224 | ''' Map a RADOS block device for local use ''' | ||
2510 | 225 | cmd = [ | ||
2511 | 226 | 'rbd', | ||
2512 | 227 | 'map', | ||
2513 | 228 | '{}/{}'.format(pool, image), | ||
2514 | 229 | '--user', | ||
2515 | 230 | service, | ||
2516 | 231 | '--secret', | ||
2517 | 232 | _keyfile_path(service), | ||
2518 | 233 | ] | ||
2519 | 234 | check_call(cmd) | ||
2520 | 235 | |||
2521 | 236 | |||
2522 | 237 | def filesystem_mounted(fs): | ||
2523 | 238 | ''' Determine whether a filesystem is already mounted ''' | ||
2524 | 239 | return fs in [f for f, m in mounts()] | ||
2525 | 240 | |||
2526 | 241 | |||
2527 | 242 | def make_filesystem(blk_device, fstype='ext4', timeout=10): | ||
2528 | 243 | ''' Make a new filesystem on the specified block device ''' | ||
2529 | 244 | count = 0 | ||
2530 | 245 | e_noent = os.errno.ENOENT | ||
2531 | 246 | while not os.path.exists(blk_device): | ||
2532 | 247 | if count >= timeout: | ||
2533 | 248 | log('ceph: gave up waiting on block device %s' % blk_device, | ||
2534 | 249 | level=ERROR) | ||
2535 | 250 | raise IOError(e_noent, os.strerror(e_noent), blk_device) | ||
2536 | 251 | log('ceph: waiting for block device %s to appear' % blk_device, | ||
2537 | 252 | level=INFO) | ||
2538 | 253 | count += 1 | ||
2539 | 254 | time.sleep(1) | ||
2540 | 255 | else: | ||
2541 | 256 | log('ceph: Formatting block device %s as filesystem %s.' % | ||
2542 | 257 | (blk_device, fstype), level=INFO) | ||
2543 | 258 | check_call(['mkfs', '-t', fstype, blk_device]) | ||
2544 | 259 | |||
2545 | 260 | |||
2546 | 261 | def place_data_on_block_device(blk_device, data_src_dst): | ||
2547 | 262 | ''' Migrate data in data_src_dst to blk_device and then remount ''' | ||
2548 | 263 | # mount block device into /mnt | ||
2549 | 264 | mount(blk_device, '/mnt') | ||
2550 | 265 | # copy data to /mnt | ||
2551 | 266 | copy_files(data_src_dst, '/mnt') | ||
2552 | 267 | # umount block device | ||
2553 | 268 | umount('/mnt') | ||
2554 | 269 | # Grab user/group ID's from original source | ||
2555 | 270 | _dir = os.stat(data_src_dst) | ||
2556 | 271 | uid = _dir.st_uid | ||
2557 | 272 | gid = _dir.st_gid | ||
2558 | 273 | # re-mount where the data should originally be | ||
2559 | 274 | # TODO: persist is currently a NO-OP in core.host | ||
2560 | 275 | mount(blk_device, data_src_dst, persist=True) | ||
2561 | 276 | # ensure original ownership of new mount. | ||
2562 | 277 | os.chown(data_src_dst, uid, gid) | ||
2563 | 278 | |||
2564 | 279 | |||
2565 | 280 | # TODO: re-use | ||
2566 | 281 | def modprobe(module): | ||
2567 | 282 | ''' Load a kernel module and configure for auto-load on reboot ''' | ||
2568 | 283 | log('ceph: Loading kernel module', level=INFO) | ||
2569 | 284 | cmd = ['modprobe', module] | ||
2570 | 285 | check_call(cmd) | ||
2571 | 286 | with open('/etc/modules', 'r+') as modules: | ||
2572 | 287 | if module not in modules.read(): | ||
2573 | 288 | modules.write(module) | ||
2574 | 289 | |||
2575 | 290 | |||
2576 | 291 | def copy_files(src, dst, symlinks=False, ignore=None): | ||
2577 | 292 | ''' Copy files from src to dst ''' | ||
2578 | 293 | for item in os.listdir(src): | ||
2579 | 294 | s = os.path.join(src, item) | ||
2580 | 295 | d = os.path.join(dst, item) | ||
2581 | 296 | if os.path.isdir(s): | ||
2582 | 297 | shutil.copytree(s, d, symlinks, ignore) | ||
2583 | 298 | else: | ||
2584 | 299 | shutil.copy2(s, d) | ||
2585 | 300 | |||
2586 | 301 | |||
2587 | 302 | def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, | ||
2588 | 303 | blk_device, fstype, system_services=[]): | ||
2589 | 304 | """ | ||
2590 | 305 | NOTE: This function must only be called from a single service unit for | ||
2591 | 306 | the same rbd_img otherwise data loss will occur. | ||
2592 | 307 | |||
2593 | 308 | Ensures given pool and RBD image exists, is mapped to a block device, | ||
2594 | 309 | and the device is formatted and mounted at the given mount_point. | ||
2595 | 310 | |||
2596 | 311 | If formatting a device for the first time, data existing at mount_point | ||
2597 | 312 | will be migrated to the RBD device before being re-mounted. | ||
2598 | 313 | |||
2599 | 314 | All services listed in system_services will be stopped prior to data | ||
2600 | 315 | migration and restarted when complete. | ||
2601 | 316 | """ | ||
2602 | 317 | # Ensure pool, RBD image, RBD mappings are in place. | ||
2603 | 318 | if not pool_exists(service, pool): | ||
2604 | 319 | log('ceph: Creating new pool {}.'.format(pool)) | ||
2605 | 320 | create_pool(service, pool) | ||
2606 | 321 | |||
2607 | 322 | if not rbd_exists(service, pool, rbd_img): | ||
2608 | 323 | log('ceph: Creating RBD image ({}).'.format(rbd_img)) | ||
2609 | 324 | create_rbd_image(service, pool, rbd_img, sizemb) | ||
2610 | 325 | |||
2611 | 326 | if not image_mapped(rbd_img): | ||
2612 | 327 | log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img)) | ||
2613 | 328 | map_block_storage(service, pool, rbd_img) | ||
2614 | 329 | |||
2615 | 330 | # make file system | ||
2616 | 331 | # TODO: What happens if for whatever reason this is run again and | ||
2617 | 332 | # the data is already in the rbd device and/or is mounted?? | ||
2618 | 333 | # When it is mounted already, it will fail to make the fs | ||
2619 | 334 | # XXX: This is really sketchy! Need to at least add an fstab entry | ||
2620 | 335 | # otherwise this hook will blow away existing data if its executed | ||
2621 | 336 | # after a reboot. | ||
2622 | 337 | if not filesystem_mounted(mount_point): | ||
2623 | 338 | make_filesystem(blk_device, fstype) | ||
2624 | 339 | |||
2625 | 340 | for svc in system_services: | ||
2626 | 341 | if service_running(svc): | ||
2627 | 342 | log('ceph: Stopping services {} prior to migrating data.' | ||
2628 | 343 | .format(svc)) | ||
2629 | 344 | service_stop(svc) | ||
2630 | 345 | |||
2631 | 346 | place_data_on_block_device(blk_device, mount_point) | ||
2632 | 347 | |||
2633 | 348 | for svc in system_services: | ||
2634 | 349 | log('ceph: Starting service {} after migrating data.' | ||
2635 | 350 | .format(svc)) | ||
2636 | 351 | service_start(svc) | ||
2637 | 352 | |||
2638 | 353 | |||
2639 | 354 | def ensure_ceph_keyring(service, user=None, group=None): | ||
2640 | 355 | ''' | ||
2641 | 356 | Ensures a ceph keyring is created for a named service | ||
2642 | 357 | and optionally ensures user and group ownership. | ||
2643 | 358 | |||
2644 | 359 | Returns False if no ceph key is available in relation state. | ||
2645 | 360 | ''' | ||
2646 | 361 | key = None | ||
2647 | 362 | for rid in relation_ids('ceph'): | ||
2648 | 363 | for unit in related_units(rid): | ||
2649 | 364 | key = relation_get('key', rid=rid, unit=unit) | ||
2650 | 365 | if key: | ||
2651 | 366 | break | ||
2652 | 367 | if not key: | ||
2653 | 368 | return False | ||
2654 | 369 | create_keyring(service=service, key=key) | ||
2655 | 370 | keyring = _keyring_path(service) | ||
2656 | 371 | if user and group: | ||
2657 | 372 | check_call(['chown', '%s.%s' % (user, group), keyring]) | ||
2658 | 373 | return True | ||
2659 | 374 | |||
2660 | 375 | |||
2661 | 376 | def ceph_version(): | ||
2662 | 377 | ''' Retrieve the local version of ceph ''' | ||
2663 | 378 | if os.path.exists('/usr/bin/ceph'): | ||
2664 | 379 | cmd = ['ceph', '-v'] | ||
2665 | 380 | output = check_output(cmd) | ||
2666 | 381 | output = output.split() | ||
2667 | 382 | if len(output) > 3: | ||
2668 | 383 | return output[2] | ||
2669 | 384 | else: | ||
2670 | 385 | return None | ||
2671 | 386 | else: | ||
2672 | 387 | return None | ||
2673 | 0 | 388 | ||
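`ceph_version()` at the end of this file splits the output of `ceph -v` (typically `ceph version 0.56.6 <hash>`) and returns the third token, but only when more than three tokens are present, so output lacking the trailing hash yields `None`. The parsing step can be sketched on its own (the function name is hypothetical):

```python
def parse_ceph_version(version_output):
    """Extract the version token from 'ceph -v' output, as ceph_version() does."""
    parts = version_output.split()
    if len(parts) > 3:      # requires 'ceph version <ver> <hash...>'
        return parts[2]
    return None             # too few tokens: hash missing or output unexpected
```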
2674 | === added file 'hooks/charmhelpers/contrib/storage/linux/loopback.py' | |||
2675 | --- hooks/charmhelpers/contrib/storage/linux/loopback.py 1970-01-01 00:00:00 +0000 | |||
2676 | +++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-09-10 21:17:48 +0000 | |||
2677 | @@ -0,0 +1,62 @@ | |||
2678 | 1 | |||
2679 | 2 | import os | ||
2680 | 3 | import re | ||
2681 | 4 | |||
2682 | 5 | from subprocess import ( | ||
2683 | 6 | check_call, | ||
2684 | 7 | check_output, | ||
2685 | 8 | ) | ||
2686 | 9 | |||
2687 | 10 | |||
2688 | 11 | ################################################## | ||
2689 | 12 | # loopback device helpers. | ||
2690 | 13 | ################################################## | ||
2691 | 14 | def loopback_devices(): | ||
2692 | 15 | ''' | ||
2693 | 16 | Parse through 'losetup -a' output to determine currently mapped | ||
2694 | 17 | loopback devices. Output is expected to look like: | ||
2695 | 18 | |||
2696 | 19 | /dev/loop0: [0807]:961814 (/tmp/my.img) | ||
2697 | 20 | |||
2698 | 21 | :returns: dict: a dict mapping {loopback_dev: backing_file} | ||
2699 | 22 | ''' | ||
2700 | 23 | loopbacks = {} | ||
2701 | 24 | cmd = ['losetup', '-a'] | ||
2702 | 25 | devs = [d.strip().split(' ') for d in | ||
2703 | 26 | check_output(cmd).splitlines() if d != ''] | ||
2704 | 27 | for dev, _, f in devs: | ||
2705 | 28 | loopbacks[dev.replace(':', '')] = re.search('\((\S+)\)', f).groups()[0] | ||
2706 | 29 | return loopbacks | ||
2707 | 30 | |||
2708 | 31 | |||
2709 | 32 | def create_loopback(file_path): | ||
2710 | 33 | ''' | ||
2711 | 34 | Create a loopback device for a given backing file. | ||
2712 | 35 | |||
2713 | 36 | :returns: str: Full path to new loopback device (eg, /dev/loop0) | ||
2714 | 37 | ''' | ||
2715 | 38 | file_path = os.path.abspath(file_path) | ||
2716 | 39 | check_call(['losetup', '--find', file_path]) | ||
2717 | 40 | for d, f in loopback_devices().iteritems(): | ||
2718 | 41 | if f == file_path: | ||
2719 | 42 | return d | ||
2720 | 43 | |||
2721 | 44 | |||
2722 | 45 | def ensure_loopback_device(path, size): | ||
2723 | 46 | ''' | ||
2724 | 47 | Ensure a loopback device exists for a given backing file path and size. | ||
2725 | 48 | If a loopback device is not mapped to the file, a new one will be created. | ||
2726 | 49 | |||
2727 | 50 | TODO: Confirm size of found loopback device. | ||
2728 | 51 | |||
2729 | 52 | :returns: str: Full path to the ensured loopback device (eg, /dev/loop0) | ||
2730 | 53 | ''' | ||
2731 | 54 | for d, f in loopback_devices().iteritems(): | ||
2732 | 55 | if f == path: | ||
2733 | 56 | return d | ||
2734 | 57 | |||
2735 | 58 | if not os.path.exists(path): | ||
2736 | 59 | cmd = ['truncate', '--size', size, path] | ||
2737 | 60 | check_call(cmd) | ||
2738 | 61 | |||
2739 | 62 | return create_loopback(path) | ||
2740 | 0 | 63 | ||
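`loopback_devices()` above expects each `losetup -a` line to have exactly three space-separated fields, with the backing file in parentheses. A standalone sketch of that parse against captured output (function name is my own):

```python
import re


def parse_losetup_output(output):
    """Parse 'losetup -a' output into {loop_device: backing_file}."""
    loopbacks = {}
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        # e.g. '/dev/loop0: [0807]:961814 (/tmp/my.img)'
        dev, _, backing = line.split(' ')
        loopbacks[dev.rstrip(':')] = re.search(r'\((\S+)\)', backing).group(1)
    return loopbacks
```

Note the format assumption: backing files whose paths contain spaces would break this three-field split, in the sketch and in the helper alike.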
2741 | === added file 'hooks/charmhelpers/contrib/storage/linux/lvm.py' | |||
2742 | --- hooks/charmhelpers/contrib/storage/linux/lvm.py 1970-01-01 00:00:00 +0000 | |||
2743 | +++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-09-10 21:17:48 +0000 | |||
2744 | @@ -0,0 +1,88 @@ | |||
2745 | 1 | from subprocess import ( | ||
2746 | 2 | CalledProcessError, | ||
2747 | 3 | check_call, | ||
2748 | 4 | check_output, | ||
2749 | 5 | Popen, | ||
2750 | 6 | PIPE, | ||
2751 | 7 | ) | ||
2752 | 8 | |||
2753 | 9 | |||
2754 | 10 | ################################################## | ||
2755 | 11 | # LVM helpers. | ||
2756 | 12 | ################################################## | ||
2757 | 13 | def deactivate_lvm_volume_group(block_device): | ||
2758 | 14 | ''' | ||
2759 | 15 | Deactivate any volume group associated with an LVM physical volume. | ||
2760 | 16 | |||
2761 | 17 | :param block_device: str: Full path to LVM physical volume | ||
2762 | 18 | ''' | ||
2763 | 19 | vg = list_lvm_volume_group(block_device) | ||
2764 | 20 | if vg: | ||
2765 | 21 | cmd = ['vgchange', '-an', vg] | ||
2766 | 22 | check_call(cmd) | ||
2767 | 23 | |||
2768 | 24 | |||
2769 | 25 | def is_lvm_physical_volume(block_device): | ||
2770 | 26 | ''' | ||
2771 | 27 | Determine whether a block device is initialized as an LVM PV. | ||
2772 | 28 | |||
2773 | 29 | :param block_device: str: Full path of block device to inspect. | ||
2774 | 30 | |||
2775 | 31 | :returns: boolean: True if block device is a PV, False if not. | ||
2776 | 32 | ''' | ||
2777 | 33 | try: | ||
2778 | 34 | check_output(['pvdisplay', block_device]) | ||
2779 | 35 | return True | ||
2780 | 36 | except CalledProcessError: | ||
2781 | 37 | return False | ||
2782 | 38 | |||
2783 | 39 | |||
2784 | 40 | def remove_lvm_physical_volume(block_device): | ||
2785 | 41 | ''' | ||
2786 | 42 | Remove LVM PV signatures from a given block device. | ||
2787 | 43 | |||
2788 | 44 | :param block_device: str: Full path of block device to scrub. | ||
2789 | 45 | ''' | ||
2790 | 46 | p = Popen(['pvremove', '-ff', block_device], | ||
2791 | 47 | stdin=PIPE) | ||
2792 | 48 | p.communicate(input='y\n') | ||
2793 | 49 | |||
2794 | 50 | |||
2795 | 51 | def list_lvm_volume_group(block_device): | ||
2796 | 52 | ''' | ||
2797 | 53 | List LVM volume group associated with a given block device. | ||
2798 | 54 | |||
2799 | 55 | Assumes block device is a valid LVM PV. | ||
2800 | 56 | |||
2801 | 57 | :param block_device: str: Full path of block device to inspect. | ||
2802 | 58 | |||
2803 | 59 | :returns: str: Name of volume group associated with block device or None | ||
2804 | 60 | ''' | ||
2805 | 61 | vg = None | ||
2806 | 62 | pvd = check_output(['pvdisplay', block_device]).splitlines() | ||
2807 | 63 | for l in pvd: | ||
2808 | 64 | if l.strip().startswith('VG Name'): | ||
2809 | 65 | vg = ' '.join(l.strip().split()[2:]) | ||
2810 | 66 | return vg | ||
2811 | 67 | |||
2812 | 68 | |||
2813 | 69 | def create_lvm_physical_volume(block_device): | ||
2814 | 70 | ''' | ||
2815 | 71 | Initialize a block device as an LVM physical volume. | ||
2816 | 72 | |||
2817 | 73 | :param block_device: str: Full path of block device to initialize. | ||
2818 | 74 | |||
2819 | 75 | ''' | ||
2820 | 76 | check_call(['pvcreate', block_device]) | ||
2821 | 77 | |||
2822 | 78 | |||
2823 | 79 | def create_lvm_volume_group(volume_group, block_device): | ||
2824 | 80 | ''' | ||
2825 | 81 | Create an LVM volume group backed by a given block device. | ||
2826 | 82 | |||
2827 | 83 | Assumes block device has already been initialized as an LVM PV. | ||
2828 | 84 | |||
2829 | 85 | :param volume_group: str: Name of volume group to create. | ||
2830 | 86 | :block_device: str: Full path of PV-initialized block device. | ||
2831 | 87 | ''' | ||
2832 | 88 | check_call(['vgcreate', volume_group, block_device]) | ||
2833 | 0 | 89 | ||
2834 | === added file 'hooks/charmhelpers/contrib/storage/linux/utils.py' | |||
2835 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000 | |||
2836 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-09-10 21:17:48 +0000 | |||
2837 | @@ -0,0 +1,53 @@ | |||
2838 | 1 | import os | ||
2839 | 2 | import re | ||
2840 | 3 | from stat import S_ISBLK | ||
2841 | 4 | |||
2842 | 5 | from subprocess import ( | ||
2843 | 6 | check_call, | ||
2844 | 7 | check_output, | ||
2845 | 8 | call | ||
2846 | 9 | ) | ||
2847 | 10 | |||
2848 | 11 | |||
2849 | 12 | def is_block_device(path): | ||
2850 | 13 | ''' | ||
2851 | 14 | Confirm device at path is a valid block device node. | ||
2852 | 15 | |||
2853 | 16 | :returns: boolean: True if path is a block device, False if not. | ||
2854 | 17 | ''' | ||
2855 | 18 | if not os.path.exists(path): | ||
2856 | 19 | return False | ||
2857 | 20 | return S_ISBLK(os.stat(path).st_mode) | ||
2858 | 21 | |||
2859 | 22 | |||
2860 | 23 | def zap_disk(block_device): | ||
2861 | 24 | ''' | ||
2862 | 25 | Clear a block device of partition table. Relies on sgdisk, which is | ||
2863 | 26 | installed as part of the 'gdisk' package in Ubuntu. | ||
2864 | 27 | |||
2865 | 28 | :param block_device: str: Full path of block device to clean. | ||
2866 | 29 | ''' | ||
2867 | 30 | # sometimes sgdisk exits non-zero; this is OK, dd will clean up | ||
2868 | 31 | call(['sgdisk', '--zap-all', '--mbrtogpt', | ||
2869 | 32 | '--clear', block_device]) | ||
2870 | 33 | dev_end = check_output(['blockdev', '--getsz', block_device]) | ||
2871 | 34 | gpt_end = int(dev_end.split()[0]) - 100 | ||
2872 | 35 | check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), | ||
2873 | 36 | 'bs=1M', 'count=1']) | ||
2874 | 37 | check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), | ||
2875 | 38 | 'bs=512', 'count=100', 'seek=%s' % (gpt_end)]) | ||
2876 | 39 | |||
2877 | 40 | |||
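The `zap_disk` helper above clears both GPT copies: `sgdisk --zap-all` plus a `dd` over the first megabyte, then a second `dd` seeked to just before the end of the device, since the backup GPT lives in the final sectors. A small sketch of the offset arithmetic (illustrative only, not the charm helper itself) — `blockdev --getsz` reports the size in 512-byte sectors, and the code wipes the last 100 of them:

```python
# Sketch of zap_disk's trailing-wipe arithmetic (assumed names, not
# charmhelpers API): `blockdev --getsz` yields 512-byte sectors, and
# the backup GPT sits in the last sectors of the device.
def backup_gpt_wipe_offset(total_sectors, trailing_sectors=100):
    """Return the sector offset at which zeroing should start so the
    final `trailing_sectors` sectors (backup GPT area) are cleared."""
    if total_sectors <= trailing_sectors:
        raise ValueError("device too small to hold a GPT")
    return total_sectors - trailing_sectors

# A 1 GiB disk has 2097152 sectors; the dd in the diff would use
# seek=2097052 with bs=512 count=100 to reach the end of the device.
print(backup_gpt_wipe_offset(2097152))  # 2097052
```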
2878 | 41 | def is_device_mounted(device): | ||
2879 | 42 | '''Given a device path, return True if that device is mounted, and False | ||
2880 | 43 | if it isn't. | ||
2881 | 44 | |||
2882 | 45 | :param device: str: Full path of the device to check. | ||
2883 | 46 | :returns: boolean: True if the path represents a mounted device, False if | ||
2884 | 47 | it doesn't. | ||
2885 | 48 | ''' | ||
2886 | 49 | is_partition = bool(re.search(r".*[0-9]+\b", device)) | ||
2887 | 50 | out = check_output(['mount']) | ||
2888 | 51 | if is_partition: | ||
2889 | 52 | return bool(re.search(device + r"\b", out)) | ||
2890 | 53 | return bool(re.search(device + r"[0-9]+\b", out)) | ||
2891 | 0 | 54 | ||
2892 | === modified file 'hooks/charmhelpers/core/hookenv.py' | |||
2893 | --- hooks/charmhelpers/core/hookenv.py 2014-01-28 00:01:57 +0000 | |||
2894 | +++ hooks/charmhelpers/core/hookenv.py 2014-09-10 21:17:48 +0000 | |||
2895 | @@ -25,7 +25,7 @@ | |||
2896 | 25 | def cached(func): | 25 | def cached(func): |
2897 | 26 | """Cache return values for multiple executions of func + args | 26 | """Cache return values for multiple executions of func + args |
2898 | 27 | 27 | ||
2900 | 28 | For example: | 28 | For example:: |
2901 | 29 | 29 | ||
2902 | 30 | @cached | 30 | @cached |
2903 | 31 | def unit_get(attribute): | 31 | def unit_get(attribute): |
2904 | @@ -155,6 +155,121 @@ | |||
2905 | 155 | return os.path.basename(sys.argv[0]) | 155 | return os.path.basename(sys.argv[0]) |
2906 | 156 | 156 | ||
2907 | 157 | 157 | ||
2908 | 158 | class Config(dict): | ||
2909 | 159 | """A dictionary representation of the charm's config.yaml, with some | ||
2910 | 160 | extra features: | ||
2911 | 161 | |||
2912 | 162 | - See which values in the dictionary have changed since the previous hook. | ||
2913 | 163 | - For values that have changed, see what the previous value was. | ||
2914 | 164 | - Store arbitrary data for use in a later hook. | ||
2915 | 165 | |||
2916 | 166 | NOTE: Do not instantiate this object directly - instead call | ||
2917 | 167 | ``hookenv.config()``, which will return an instance of :class:`Config`. | ||
2918 | 168 | |||
2919 | 169 | Example usage:: | ||
2920 | 170 | |||
2921 | 171 | >>> # inside a hook | ||
2922 | 172 | >>> from charmhelpers.core import hookenv | ||
2923 | 173 | >>> config = hookenv.config() | ||
2924 | 174 | >>> config['foo'] | ||
2925 | 175 | 'bar' | ||
2926 | 176 | >>> # store a new key/value for later use | ||
2927 | 177 | >>> config['mykey'] = 'myval' | ||
2928 | 178 | |||
2929 | 179 | |||
2930 | 180 | >>> # user runs `juju set mycharm foo=baz` | ||
2931 | 181 | >>> # now we're inside subsequent config-changed hook | ||
2932 | 182 | >>> config = hookenv.config() | ||
2933 | 183 | >>> config['foo'] | ||
2934 | 184 | 'baz' | ||
2935 | 185 | >>> # test to see if this val has changed since last hook | ||
2936 | 186 | >>> config.changed('foo') | ||
2937 | 187 | True | ||
2938 | 188 | >>> # what was the previous value? | ||
2939 | 189 | >>> config.previous('foo') | ||
2940 | 190 | 'bar' | ||
2941 | 191 | >>> # keys/values that we add are preserved across hooks | ||
2942 | 192 | >>> config['mykey'] | ||
2943 | 193 | 'myval' | ||
2944 | 194 | |||
2945 | 195 | """ | ||
2946 | 196 | CONFIG_FILE_NAME = '.juju-persistent-config' | ||
2947 | 197 | |||
2948 | 198 | def __init__(self, *args, **kw): | ||
2949 | 199 | super(Config, self).__init__(*args, **kw) | ||
2950 | 200 | self.implicit_save = True | ||
2951 | 201 | self._prev_dict = None | ||
2952 | 202 | self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) | ||
2953 | 203 | if os.path.exists(self.path): | ||
2954 | 204 | self.load_previous() | ||
2955 | 205 | |||
2956 | 206 | def __getitem__(self, key): | ||
2957 | 207 | """For regular dict lookups, check the current juju config first, | ||
2958 | 208 | then the previous (saved) copy. This ensures that user-saved values | ||
2959 | 209 | will be returned by a dict lookup. | ||
2960 | 210 | |||
2961 | 211 | """ | ||
2962 | 212 | try: | ||
2963 | 213 | return dict.__getitem__(self, key) | ||
2964 | 214 | except KeyError: | ||
2965 | 215 | return (self._prev_dict or {})[key] | ||
2966 | 216 | |||
2967 | 217 | def load_previous(self, path=None): | ||
2968 | 218 | """Load previous copy of config from disk. | ||
2969 | 219 | |||
2970 | 220 | In normal usage you don't need to call this method directly - it | ||
2971 | 221 | is called automatically at object initialization. | ||
2972 | 222 | |||
2973 | 223 | :param path: | ||
2974 | 224 | |||
2975 | 225 | File path from which to load the previous config. If `None`, | ||
2976 | 226 | config is loaded from the default location. If `path` is | ||
2977 | 227 | specified, subsequent `save()` calls will write to the same | ||
2978 | 228 | path. | ||
2979 | 229 | |||
2980 | 230 | """ | ||
2981 | 231 | self.path = path or self.path | ||
2982 | 232 | with open(self.path) as f: | ||
2983 | 233 | self._prev_dict = json.load(f) | ||
2984 | 234 | |||
2985 | 235 | def changed(self, key): | ||
2986 | 236 | """Return True if the current value for this key is different from | ||
2987 | 237 | the previous value. | ||
2988 | 238 | |||
2989 | 239 | """ | ||
2990 | 240 | if self._prev_dict is None: | ||
2991 | 241 | return True | ||
2992 | 242 | return self.previous(key) != self.get(key) | ||
2993 | 243 | |||
2994 | 244 | def previous(self, key): | ||
2995 | 245 | """Return previous value for this key, or None if there | ||
2996 | 246 | is no previous value. | ||
2997 | 247 | |||
2998 | 248 | """ | ||
2999 | 249 | if self._prev_dict: | ||
3000 | 250 | return self._prev_dict.get(key) | ||
3001 | 251 | return None | ||
3002 | 252 | |||
3003 | 253 | def save(self): | ||
3004 | 254 | """Save this config to disk. | ||
3005 | 255 | |||
3006 | 256 | If the charm is using the :mod:`Services Framework <services.base>` | ||
3007 | 257 | or :meth:'@hook <Hooks.hook>' decorator, this | ||
3008 | 258 | is called automatically at the end of successful hook execution. | ||
3009 | 259 | Otherwise, it should be called directly by user code. | ||
3010 | 260 | |||
3011 | 261 | To disable automatic saves, set ``implicit_save=False`` on this | ||
3012 | 262 | instance. | ||
3013 | 263 | |||
3014 | 264 | """ | ||
3015 | 265 | if self._prev_dict: | ||
3016 | 266 | for k, v in self._prev_dict.iteritems(): | ||
3017 | 267 | if k not in self: | ||
3018 | 268 | self[k] = v | ||
3019 | 269 | with open(self.path, 'w') as f: | ||
3020 | 270 | json.dump(self, f) | ||
3021 | 271 | |||
3022 | 272 | |||
3023 | 158 | @cached | 273 | @cached |
3024 | 159 | def config(scope=None): | 274 | def config(scope=None): |
3025 | 160 | """Juju charm configuration""" | 275 | """Juju charm configuration""" |
3026 | @@ -163,7 +278,10 @@ | |||
3027 | 163 | config_cmd_line.append(scope) | 278 | config_cmd_line.append(scope) |
3028 | 164 | config_cmd_line.append('--format=json') | 279 | config_cmd_line.append('--format=json') |
3029 | 165 | try: | 280 | try: |
3031 | 166 | return json.loads(subprocess.check_output(config_cmd_line)) | 281 | config_data = json.loads(subprocess.check_output(config_cmd_line)) |
3032 | 282 | if scope is not None: | ||
3033 | 283 | return config_data | ||
3034 | 284 | return Config(config_data) | ||
3035 | 167 | except ValueError: | 285 | except ValueError: |
3036 | 168 | return None | 286 | return None |
3037 | 169 | 287 | ||
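The `Config` class added above persists the merged config to `.juju-persistent-config` as JSON, so the next hook invocation can answer `changed()` and `previous()` queries. A toy re-implementation of just that round-trip (assumed names, not the charmhelpers class) to show the semantics:

```python
import json
import os
import tempfile

class MiniConfig(dict):
    """Toy sketch of the Config semantics above (not charmhelpers):
    compare the current dict against a previously saved JSON copy."""
    def __init__(self, data, path):
        super(MiniConfig, self).__init__(data)
        self.path = path
        self._prev = None
        if os.path.exists(path):
            with open(path) as f:
                self._prev = json.load(f)

    def changed(self, key):
        if self._prev is None:
            return True
        return self._prev.get(key) != self.get(key)

    def previous(self, key):
        return self._prev.get(key) if self._prev else None

    def save(self):
        # carry forward user-stored keys, as Config.save() does
        if self._prev:
            for k, v in self._prev.items():
                self.setdefault(k, v)
        with open(self.path, 'w') as f:
            json.dump(self, f)

path = os.path.join(tempfile.mkdtemp(), '.juju-persistent-config')
first = MiniConfig({'foo': 'bar'}, path)
first['mykey'] = 'myval'          # user-stored value survives save()
first.save()

second = MiniConfig({'foo': 'baz'}, path)   # `juju set` changed foo
print(second.changed('foo'), second.previous('foo'),
      second.previous('mykey'))
# True bar myval
```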
3038 | @@ -188,8 +306,9 @@ | |||
3039 | 188 | raise | 306 | raise |
3040 | 189 | 307 | ||
3041 | 190 | 308 | ||
3043 | 191 | def relation_set(relation_id=None, relation_settings={}, **kwargs): | 309 | def relation_set(relation_id=None, relation_settings=None, **kwargs): |
3044 | 192 | """Set relation information for the current unit""" | 310 | """Set relation information for the current unit""" |
3045 | 311 | relation_settings = relation_settings if relation_settings else {} | ||
3046 | 193 | relation_cmd_line = ['relation-set'] | 312 | relation_cmd_line = ['relation-set'] |
3047 | 194 | if relation_id is not None: | 313 | if relation_id is not None: |
3048 | 195 | relation_cmd_line.extend(('-r', relation_id)) | 314 | relation_cmd_line.extend(('-r', relation_id)) |
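The `relation_set` change above swaps the mutable default `relation_settings={}` for `None` plus an in-body fallback. The reason: a default dict is evaluated once at definition time and shared across calls, so mutations leak between invocations. A minimal demonstration of the pitfall and the fix:

```python
def buggy(settings={}):
    # the default dict is created once and reused for every call
    settings['touched'] = True
    return settings

def fixed(settings=None):
    # the fallback dict is created fresh on each call
    settings = settings if settings else {}
    settings['touched'] = True
    return settings

a, b = buggy(), buggy()
print(a is b)  # True: both calls mutated the same shared dict
c, d = fixed(), fixed()
print(c is d)  # False: each call got its own dict
```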
3049 | @@ -348,18 +467,19 @@ | |||
3050 | 348 | class Hooks(object): | 467 | class Hooks(object): |
3051 | 349 | """A convenient handler for hook functions. | 468 | """A convenient handler for hook functions. |
3052 | 350 | 469 | ||
3054 | 351 | Example: | 470 | Example:: |
3055 | 471 | |||
3056 | 352 | hooks = Hooks() | 472 | hooks = Hooks() |
3057 | 353 | 473 | ||
3058 | 354 | # register a hook, taking its name from the function name | 474 | # register a hook, taking its name from the function name |
3059 | 355 | @hooks.hook() | 475 | @hooks.hook() |
3060 | 356 | def install(): | 476 | def install(): |
3062 | 357 | ... | 477 | pass # your code here |
3063 | 358 | 478 | ||
3064 | 359 | # register a hook, providing a custom hook name | 479 | # register a hook, providing a custom hook name |
3065 | 360 | @hooks.hook("config-changed") | 480 | @hooks.hook("config-changed") |
3066 | 361 | def config_changed(): | 481 | def config_changed(): |
3068 | 362 | ... | 482 | pass # your code here |
3069 | 363 | 483 | ||
3070 | 364 | if __name__ == "__main__": | 484 | if __name__ == "__main__": |
3071 | 365 | # execute a hook based on the name the program is called by | 485 | # execute a hook based on the name the program is called by |
3072 | @@ -379,6 +499,9 @@ | |||
3073 | 379 | hook_name = os.path.basename(args[0]) | 499 | hook_name = os.path.basename(args[0]) |
3074 | 380 | if hook_name in self._hooks: | 500 | if hook_name in self._hooks: |
3075 | 381 | self._hooks[hook_name]() | 501 | self._hooks[hook_name]() |
3076 | 502 | cfg = config() | ||
3077 | 503 | if cfg.implicit_save: | ||
3078 | 504 | cfg.save() | ||
3079 | 382 | else: | 505 | else: |
3080 | 383 | raise UnregisteredHookError(hook_name) | 506 | raise UnregisteredHookError(hook_name) |
3081 | 384 | 507 | ||
3082 | 385 | 508 | ||
3083 | === modified file 'hooks/charmhelpers/core/host.py' | |||
3084 | --- hooks/charmhelpers/core/host.py 2014-01-28 00:01:57 +0000 | |||
3085 | +++ hooks/charmhelpers/core/host.py 2014-09-10 21:17:48 +0000 | |||
3086 | @@ -12,10 +12,13 @@ | |||
3087 | 12 | import string | 12 | import string |
3088 | 13 | import subprocess | 13 | import subprocess |
3089 | 14 | import hashlib | 14 | import hashlib |
3090 | 15 | import shutil | ||
3091 | 16 | from contextlib import contextmanager | ||
3092 | 15 | 17 | ||
3093 | 16 | from collections import OrderedDict | 18 | from collections import OrderedDict |
3094 | 17 | 19 | ||
3095 | 18 | from hookenv import log | 20 | from hookenv import log |
3096 | 21 | from fstab import Fstab | ||
3097 | 19 | 22 | ||
3098 | 20 | 23 | ||
3099 | 21 | def service_start(service_name): | 24 | def service_start(service_name): |
3100 | @@ -34,7 +37,8 @@ | |||
3101 | 34 | 37 | ||
3102 | 35 | 38 | ||
3103 | 36 | def service_reload(service_name, restart_on_failure=False): | 39 | def service_reload(service_name, restart_on_failure=False): |
3105 | 37 | """Reload a system service, optionally falling back to restart if reload fails""" | 40 | """Reload a system service, optionally falling back to restart if |
3106 | 41 | reload fails""" | ||
3107 | 38 | service_result = service('reload', service_name) | 42 | service_result = service('reload', service_name) |
3108 | 39 | if not service_result and restart_on_failure: | 43 | if not service_result and restart_on_failure: |
3109 | 40 | service_result = service('restart', service_name) | 44 | service_result = service('restart', service_name) |
3110 | @@ -50,7 +54,7 @@ | |||
3111 | 50 | def service_running(service): | 54 | def service_running(service): |
3112 | 51 | """Determine whether a system service is running""" | 55 | """Determine whether a system service is running""" |
3113 | 52 | try: | 56 | try: |
3115 | 53 | output = subprocess.check_output(['service', service, 'status']) | 57 | output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT) |
3116 | 54 | except subprocess.CalledProcessError: | 58 | except subprocess.CalledProcessError: |
3117 | 55 | return False | 59 | return False |
3118 | 56 | else: | 60 | else: |
3119 | @@ -60,6 +64,16 @@ | |||
3120 | 60 | return False | 64 | return False |
3121 | 61 | 65 | ||
3122 | 62 | 66 | ||
3123 | 67 | def service_available(service_name): | ||
3124 | 68 | """Determine whether a system service is available""" | ||
3125 | 69 | try: | ||
3126 | 70 | subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT) | ||
3127 | 71 | except subprocess.CalledProcessError: | ||
3128 | 72 | return False | ||
3129 | 73 | else: | ||
3130 | 74 | return True | ||
3131 | 75 | |||
3132 | 76 | |||
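Both `service_running` and the new `service_available` pass `stderr=subprocess.STDOUT`, so messages the `service` command prints to stderr (e.g. "unrecognized service") are captured by `check_output` rather than leaking into the hook log. A small demonstration of the merge, using a Python child process to stand in for `service` (illustrative only):

```python
import subprocess
import sys

# Stand-in for `service foo status`: writes a diagnostic to stderr
# and a status line to stdout.
script = ('import sys; '
          'sys.stderr.write("unrecognized service\\n"); '
          'print("ok")')

# With stderr merged into stdout, both streams land in the result,
# so a CalledProcessError (or the captured text) tells the whole story.
merged = subprocess.check_output(
    [sys.executable, '-c', script], stderr=subprocess.STDOUT)
print(b'unrecognized service' in merged, b'ok' in merged)  # True True
```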
3133 | 63 | def adduser(username, password=None, shell='/bin/bash', system_user=False): | 77 | def adduser(username, password=None, shell='/bin/bash', system_user=False): |
3134 | 64 | """Add a user to the system""" | 78 | """Add a user to the system""" |
3135 | 65 | try: | 79 | try: |
3136 | @@ -143,7 +157,19 @@ | |||
3137 | 143 | target.write(content) | 157 | target.write(content) |
3138 | 144 | 158 | ||
3139 | 145 | 159 | ||
3141 | 146 | def mount(device, mountpoint, options=None, persist=False): | 160 | def fstab_remove(mp): |
3142 | 161 | """Remove the given mountpoint entry from /etc/fstab | ||
3143 | 162 | """ | ||
3144 | 163 | return Fstab.remove_by_mountpoint(mp) | ||
3145 | 164 | |||
3146 | 165 | |||
3147 | 166 | def fstab_add(dev, mp, fs, options=None): | ||
3148 | 167 | """Adds the given device entry to the /etc/fstab file | ||
3149 | 168 | """ | ||
3150 | 169 | return Fstab.add(dev, mp, fs, options=options) | ||
3151 | 170 | |||
3152 | 171 | |||
3153 | 172 | def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"): | ||
3154 | 147 | """Mount a filesystem at a particular mountpoint""" | 173 | """Mount a filesystem at a particular mountpoint""" |
3155 | 148 | cmd_args = ['mount'] | 174 | cmd_args = ['mount'] |
3156 | 149 | if options is not None: | 175 | if options is not None: |
3157 | @@ -154,9 +180,9 @@ | |||
3158 | 154 | except subprocess.CalledProcessError, e: | 180 | except subprocess.CalledProcessError, e: |
3159 | 155 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) | 181 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) |
3160 | 156 | return False | 182 | return False |
3161 | 183 | |||
3162 | 157 | if persist: | 184 | if persist: |
3165 | 158 | # TODO: update fstab | 185 | return fstab_add(device, mountpoint, filesystem, options=options) |
3164 | 159 | pass | ||
3166 | 160 | return True | 186 | return True |
3167 | 161 | 187 | ||
3168 | 162 | 188 | ||
3169 | @@ -168,9 +194,9 @@ | |||
3170 | 168 | except subprocess.CalledProcessError, e: | 194 | except subprocess.CalledProcessError, e: |
3171 | 169 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) | 195 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) |
3172 | 170 | return False | 196 | return False |
3173 | 197 | |||
3174 | 171 | if persist: | 198 | if persist: |
3177 | 172 | # TODO: update fstab | 199 | return fstab_remove(mountpoint) |
3176 | 173 | pass | ||
3178 | 174 | return True | 200 | return True |
3179 | 175 | 201 | ||
3180 | 176 | 202 | ||
3181 | @@ -194,16 +220,16 @@ | |||
3182 | 194 | return None | 220 | return None |
3183 | 195 | 221 | ||
3184 | 196 | 222 | ||
3186 | 197 | def restart_on_change(restart_map): | 223 | def restart_on_change(restart_map, stopstart=False): |
3187 | 198 | """Restart services based on configuration files changing | 224 | """Restart services based on configuration files changing |
3188 | 199 | 225 | ||
3190 | 200 | This function is used as a decorator, for example | 226 | This function is used as a decorator, for example:: |
3191 | 201 | 227 | ||
3192 | 202 | @restart_on_change({ | 228 | @restart_on_change({ |
3193 | 203 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] | 229 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] |
3194 | 204 | }) | 230 | }) |
3195 | 205 | def ceph_client_changed(): | 231 | def ceph_client_changed(): |
3197 | 206 | ... | 232 | pass # your code here |
3198 | 207 | 233 | ||
3199 | 208 | In this example, the cinder-api and cinder-volume services | 234 | In this example, the cinder-api and cinder-volume services |
3200 | 209 | would be restarted if /etc/ceph/ceph.conf is changed by the | 235 | would be restarted if /etc/ceph/ceph.conf is changed by the |
3201 | @@ -219,8 +245,14 @@ | |||
3202 | 219 | for path in restart_map: | 245 | for path in restart_map: |
3203 | 220 | if checksums[path] != file_hash(path): | 246 | if checksums[path] != file_hash(path): |
3204 | 221 | restarts += restart_map[path] | 247 | restarts += restart_map[path] |
3207 | 222 | for service_name in list(OrderedDict.fromkeys(restarts)): | 248 | services_list = list(OrderedDict.fromkeys(restarts)) |
3208 | 223 | service('restart', service_name) | 249 | if not stopstart: |
3209 | 250 | for service_name in services_list: | ||
3210 | 251 | service('restart', service_name) | ||
3211 | 252 | else: | ||
3212 | 253 | for action in ['stop', 'start']: | ||
3213 | 254 | for service_name in services_list: | ||
3214 | 255 | service(action, service_name) | ||
3215 | 224 | return wrapped_f | 256 | return wrapped_f |
3216 | 225 | return wrap | 257 | return wrap |
3217 | 226 | 258 | ||
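The `restart_on_change` rewrite deduplicates the accumulated service names with `OrderedDict.fromkeys` (preserving first-seen order) and, when `stopstart=True`, runs a full `stop` pass before the `start` pass instead of per-service restarts. A sketch of both behaviours on plain data:

```python
from collections import OrderedDict

# Two changed config files both map to cinder-api; dedupe keeps order.
restarts = ['cinder-api', 'cinder-volume', 'cinder-api']
services_list = list(OrderedDict.fromkeys(restarts))
print(services_list)  # ['cinder-api', 'cinder-volume']

# stopstart=True: all stops first, then all starts, in service order.
actions = [(action, name)
           for action in ('stop', 'start')
           for name in services_list]
print(actions)
# [('stop', 'cinder-api'), ('stop', 'cinder-volume'),
#  ('start', 'cinder-api'), ('start', 'cinder-volume')]
```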
3218 | @@ -289,3 +321,40 @@ | |||
3219 | 289 | if 'link/ether' in words: | 321 | if 'link/ether' in words: |
3220 | 290 | hwaddr = words[words.index('link/ether') + 1] | 322 | hwaddr = words[words.index('link/ether') + 1] |
3221 | 291 | return hwaddr | 323 | return hwaddr |
3222 | 324 | |||
3223 | 325 | |||
3224 | 326 | def cmp_pkgrevno(package, revno, pkgcache=None): | ||
3225 | 327 | '''Compare supplied revno with the revno of the installed package | ||
3226 | 328 | |||
3227 | 329 | * 1 => Installed revno is greater than supplied arg | ||
3228 | 330 | * 0 => Installed revno is the same as supplied arg | ||
3229 | 331 | * -1 => Installed revno is less than supplied arg | ||
3230 | 332 | |||
3231 | 333 | ''' | ||
3232 | 334 | import apt_pkg | ||
3233 | 335 | from charmhelpers.fetch import apt_cache | ||
3234 | 336 | if not pkgcache: | ||
3235 | 337 | pkgcache = apt_cache() | ||
3236 | 338 | pkg = pkgcache[package] | ||
3237 | 339 | return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) | ||
3238 | 340 | |||
3239 | 341 | |||
3240 | 342 | @contextmanager | ||
3241 | 343 | def chdir(d): | ||
3242 | 344 | cur = os.getcwd() | ||
3243 | 345 | try: | ||
3244 | 346 | yield os.chdir(d) | ||
3245 | 347 | finally: | ||
3246 | 348 | os.chdir(cur) | ||
3247 | 349 | |||
3248 | 350 | |||
3249 | 351 | def chownr(path, owner, group): | ||
3250 | 352 | uid = pwd.getpwnam(owner).pw_uid | ||
3251 | 353 | gid = grp.getgrnam(group).gr_gid | ||
3252 | 354 | |||
3253 | 355 | for root, dirs, files in os.walk(path): | ||
3254 | 356 | for name in dirs + files: | ||
3255 | 357 | full = os.path.join(root, name) | ||
3256 | 358 | broken_symlink = os.path.lexists(full) and not os.path.exists(full) | ||
3257 | 359 | if not broken_symlink: | ||
3258 | 360 | os.chown(full, uid, gid) | ||
3259 | 292 | 361 | ||
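`chownr` above skips dangling symlinks via `os.path.lexists(full) and not os.path.exists(full)`: `lexists` sees the link itself while `exists` follows it, so the pair is true exactly when the link's target is gone (chowning such a path would raise). A self-contained check of that predicate:

```python
import os
import tempfile

# Build a dangling symlink in a scratch directory, then apply the
# same broken-symlink test chownr uses before calling os.chown.
d = tempfile.mkdtemp()
target = os.path.join(d, 'real')
link = os.path.join(d, 'dangling')
open(target, 'w').close()
os.symlink(target, link)
os.remove(target)                      # the link now points nowhere

broken = os.path.lexists(link) and not os.path.exists(link)
print(broken)  # True: lexists sees the link, exists follows it and fails
```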
3260 | === modified file 'hooks/charmhelpers/fetch/__init__.py' | |||
3261 | --- hooks/charmhelpers/fetch/__init__.py 2014-01-28 00:01:57 +0000 | |||
3262 | +++ hooks/charmhelpers/fetch/__init__.py 2014-09-10 21:17:48 +0000 | |||
3263 | @@ -1,4 +1,6 @@ | |||
3264 | 1 | import importlib | 1 | import importlib |
3265 | 2 | from tempfile import NamedTemporaryFile | ||
3266 | 3 | import time | ||
3267 | 2 | from yaml import safe_load | 4 | from yaml import safe_load |
3268 | 3 | from charmhelpers.core.host import ( | 5 | from charmhelpers.core.host import ( |
3269 | 4 | lsb_release | 6 | lsb_release |
3270 | @@ -12,9 +14,9 @@ | |||
3271 | 12 | config, | 14 | config, |
3272 | 13 | log, | 15 | log, |
3273 | 14 | ) | 16 | ) |
3274 | 15 | import apt_pkg | ||
3275 | 16 | import os | 17 | import os |
3276 | 17 | 18 | ||
3277 | 19 | |||
3278 | 18 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive | 20 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive |
3279 | 19 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main | 21 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
3280 | 20 | """ | 22 | """ |
3281 | @@ -54,13 +56,68 @@ | |||
3282 | 54 | 'icehouse/proposed': 'precise-proposed/icehouse', | 56 | 'icehouse/proposed': 'precise-proposed/icehouse', |
3283 | 55 | 'precise-icehouse/proposed': 'precise-proposed/icehouse', | 57 | 'precise-icehouse/proposed': 'precise-proposed/icehouse', |
3284 | 56 | 'precise-proposed/icehouse': 'precise-proposed/icehouse', | 58 | 'precise-proposed/icehouse': 'precise-proposed/icehouse', |
3285 | 59 | # Juno | ||
3286 | 60 | 'juno': 'trusty-updates/juno', | ||
3287 | 61 | 'trusty-juno': 'trusty-updates/juno', | ||
3288 | 62 | 'trusty-juno/updates': 'trusty-updates/juno', | ||
3289 | 63 | 'trusty-updates/juno': 'trusty-updates/juno', | ||
3290 | 64 | 'juno/proposed': 'trusty-proposed/juno', | ||
3292 | 66 | 'trusty-juno/proposed': 'trusty-proposed/juno', | ||
3293 | 67 | 'trusty-proposed/juno': 'trusty-proposed/juno', | ||
3294 | 57 | } | 68 | } |
3295 | 58 | 69 | ||
3296 | 70 | # The order of this list is very important. Handlers should be listed in from | ||
3297 | 71 | # least- to most-specific URL matching. | ||
3298 | 72 | FETCH_HANDLERS = ( | ||
3299 | 73 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', | ||
3300 | 74 | 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', | ||
3301 | 75 | ) | ||
3302 | 76 | |||
3303 | 77 | APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. | ||
3304 | 78 | APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks. | ||
3305 | 79 | APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times. | ||
3306 | 80 | |||
3307 | 81 | |||
3308 | 82 | class SourceConfigError(Exception): | ||
3309 | 83 | pass | ||
3310 | 84 | |||
3311 | 85 | |||
3312 | 86 | class UnhandledSource(Exception): | ||
3313 | 87 | pass | ||
3314 | 88 | |||
3315 | 89 | |||
3316 | 90 | class AptLockError(Exception): | ||
3317 | 91 | pass | ||
3318 | 92 | |||
3319 | 93 | |||
3320 | 94 | class BaseFetchHandler(object): | ||
3321 | 95 | |||
3322 | 96 | """Base class for FetchHandler implementations in fetch plugins""" | ||
3323 | 97 | |||
3324 | 98 | def can_handle(self, source): | ||
3325 | 99 | """Returns True if the source can be handled. Otherwise returns | ||
3326 | 100 | a string explaining why it cannot""" | ||
3327 | 101 | return "Wrong source type" | ||
3328 | 102 | |||
3329 | 103 | def install(self, source): | ||
3330 | 104 | """Try to download and unpack the source. Return the path to the | ||
3331 | 105 | unpacked files or raise UnhandledSource.""" | ||
3332 | 106 | raise UnhandledSource("Wrong source type {}".format(source)) | ||
3333 | 107 | |||
3334 | 108 | def parse_url(self, url): | ||
3335 | 109 | return urlparse(url) | ||
3336 | 110 | |||
3337 | 111 | def base_url(self, url): | ||
3338 | 112 | """Return url without querystring or fragment""" | ||
3339 | 113 | parts = list(self.parse_url(url)) | ||
3340 | 114 | parts[4:] = ['' for i in parts[4:]] | ||
3341 | 115 | return urlunparse(parts) | ||
3342 | 116 | |||
3343 | 59 | 117 | ||
3344 | 60 | def filter_installed_packages(packages): | 118 | def filter_installed_packages(packages): |
3345 | 61 | """Returns a list of packages that require installation""" | 119 | """Returns a list of packages that require installation""" |
3348 | 62 | apt_pkg.init() | 120 | cache = apt_cache() |
3347 | 63 | cache = apt_pkg.Cache() | ||
3349 | 64 | _pkgs = [] | 121 | _pkgs = [] |
3350 | 65 | for package in packages: | 122 | for package in packages: |
3351 | 66 | try: | 123 | try: |
3352 | @@ -73,6 +130,16 @@ | |||
3353 | 73 | return _pkgs | 130 | return _pkgs |
3354 | 74 | 131 | ||
3355 | 75 | 132 | ||
3356 | 133 | def apt_cache(in_memory=True): | ||
3357 | 134 | """Build and return an apt cache""" | ||
3358 | 135 | import apt_pkg | ||
3359 | 136 | apt_pkg.init() | ||
3360 | 137 | if in_memory: | ||
3361 | 138 | apt_pkg.config.set("Dir::Cache::pkgcache", "") | ||
3362 | 139 | apt_pkg.config.set("Dir::Cache::srcpkgcache", "") | ||
3363 | 140 | return apt_pkg.Cache() | ||
3364 | 141 | |||
3365 | 142 | |||
3366 | 76 | def apt_install(packages, options=None, fatal=False): | 143 | def apt_install(packages, options=None, fatal=False): |
3367 | 77 | """Install one or more packages""" | 144 | """Install one or more packages""" |
3368 | 78 | if options is None: | 145 | if options is None: |
3369 | @@ -87,23 +154,28 @@ | |||
3370 | 87 | cmd.extend(packages) | 154 | cmd.extend(packages) |
3371 | 88 | log("Installing {} with options: {}".format(packages, | 155 | log("Installing {} with options: {}".format(packages, |
3372 | 89 | options)) | 156 | options)) |
3379 | 90 | env = os.environ.copy() | 157 | _run_apt_command(cmd, fatal) |
3380 | 91 | if 'DEBIAN_FRONTEND' not in env: | 158 | |
3381 | 92 | env['DEBIAN_FRONTEND'] = 'noninteractive' | 159 | |
3382 | 93 | 160 | def apt_upgrade(options=None, fatal=False, dist=False): | |
3383 | 94 | if fatal: | 161 | """Upgrade all packages""" |
3384 | 95 | subprocess.check_call(cmd, env=env) | 162 | if options is None: |
3385 | 163 | options = ['--option=Dpkg::Options::=--force-confold'] | ||
3386 | 164 | |||
3387 | 165 | cmd = ['apt-get', '--assume-yes'] | ||
3388 | 166 | cmd.extend(options) | ||
3389 | 167 | if dist: | ||
3390 | 168 | cmd.append('dist-upgrade') | ||
3391 | 96 | else: | 169 | else: |
3393 | 97 | subprocess.call(cmd, env=env) | 170 | cmd.append('upgrade') |
3394 | 171 | log("Upgrading with options: {}".format(options)) | ||
3395 | 172 | _run_apt_command(cmd, fatal) | ||
3396 | 98 | 173 | ||
3397 | 99 | 174 | ||
3398 | 100 | def apt_update(fatal=False): | 175 | def apt_update(fatal=False): |
3399 | 101 | """Update local apt cache""" | 176 | """Update local apt cache""" |
3400 | 102 | cmd = ['apt-get', 'update'] | 177 | cmd = ['apt-get', 'update'] |
3405 | 103 | if fatal: | 178 | _run_apt_command(cmd, fatal) |
3402 | 104 | subprocess.check_call(cmd) | ||
3403 | 105 | else: | ||
3404 | 106 | subprocess.call(cmd) | ||
3406 | 107 | 179 | ||
3407 | 108 | 180 | ||
3408 | 109 | def apt_purge(packages, fatal=False): | 181 | def apt_purge(packages, fatal=False): |
3409 | @@ -114,10 +186,7 @@ | |||
3410 | 114 | else: | 186 | else: |
3411 | 115 | cmd.extend(packages) | 187 | cmd.extend(packages) |
3412 | 116 | log("Purging {}".format(packages)) | 188 | log("Purging {}".format(packages)) |
3417 | 117 | if fatal: | 189 | _run_apt_command(cmd, fatal) |
3414 | 118 | subprocess.check_call(cmd) | ||
3415 | 119 | else: | ||
3416 | 120 | subprocess.call(cmd) | ||
3418 | 121 | 190 | ||
3419 | 122 | 191 | ||
3420 | 123 | def apt_hold(packages, fatal=False): | 192 | def apt_hold(packages, fatal=False): |
3421 | @@ -128,6 +197,7 @@ | |||
3422 | 128 | else: | 197 | else: |
3423 | 129 | cmd.extend(packages) | 198 | cmd.extend(packages) |
3424 | 130 | log("Holding {}".format(packages)) | 199 | log("Holding {}".format(packages)) |
3425 | 200 | |||
3426 | 131 | if fatal: | 201 | if fatal: |
3427 | 132 | subprocess.check_call(cmd) | 202 | subprocess.check_call(cmd) |
3428 | 133 | else: | 203 | else: |
3429 | @@ -135,8 +205,33 @@ | |||
3430 | 135 | 205 | ||
3431 | 136 | 206 | ||
3432 | 137 | def add_source(source, key=None): | 207 | def add_source(source, key=None): |
3433 | 208 | """Add a package source to this system. | ||
3434 | 209 | |||
3435 | 210 | @param source: a URL or sources.list entry, as supported by | ||
3436 | 211 | add-apt-repository(1). Examples: | ||
3437 | 212 | ppa:charmers/example | ||
3438 | 213 | deb https://stub:key@private.example.com/ubuntu trusty main | ||
3439 | 214 | |||
3440 | 215 | In addition: | ||
3441 | 216 | 'proposed:' may be used to enable the standard 'proposed' | ||
3442 | 217 | pocket for the release. | ||
3443 | 218 | 'cloud:' may be used to activate official cloud archive pockets, | ||
3444 | 219 | such as 'cloud:icehouse' | ||
3445 | 220 | |||
3446 | 221 | @param key: A key to be added to the system's APT keyring and used | ||
3447 | 222 | to verify the signatures on packages. Ideally, this should be an | ||
3448 | 223 | ASCII format GPG public key including the block headers. A GPG key | ||
3449 | 224 | id may also be used, but be aware that only insecure protocols are | ||
3450 | 225 | available to retrieve the actual public key from a public keyserver, | ||
3451 | 226 | placing your Juju environment at risk. PPA and cloud archive keys | ||
3452 | 227 | are securely added automatically, so should not be provided. | ||
3453 | 228 | """ | ||
3454 | 229 | if source is None: | ||
3455 | 230 | log('Source is not present. Skipping') | ||
3456 | 231 | return | ||
3457 | 232 | |||
3458 | 138 | if (source.startswith('ppa:') or | 233 | if (source.startswith('ppa:') or |
3460 | 139 | source.startswith('http:') or | 234 | source.startswith('http') or |
3461 | 140 | source.startswith('deb ') or | 235 | source.startswith('deb ') or |
3462 | 141 | source.startswith('cloud-archive:')): | 236 | source.startswith('cloud-archive:')): |
3463 | 142 | subprocess.check_call(['add-apt-repository', '--yes', source]) | 237 | subprocess.check_call(['add-apt-repository', '--yes', source]) |
3464 | @@ -155,57 +250,66 @@ | |||
3465 | 155 | release = lsb_release()['DISTRIB_CODENAME'] | 250 | release = lsb_release()['DISTRIB_CODENAME'] |
3466 | 156 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: | 251 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: |
3467 | 157 | apt.write(PROPOSED_POCKET.format(release)) | 252 | apt.write(PROPOSED_POCKET.format(release)) |
3468 | 253 | else: | ||
3469 | 254 | raise SourceConfigError("Unknown source: {!r}".format(source)) | ||
3470 | 255 | |||
3471 | 158 | if key: | 256 | if key: |
3477 | 159 | subprocess.check_call(['apt-key', 'import', key]) | 257 | if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key: |
3478 | 160 | 258 | with NamedTemporaryFile() as key_file: | |
3479 | 161 | 259 | key_file.write(key) | |
3480 | 162 | class SourceConfigError(Exception): | 260 | key_file.flush() |
3481 | 163 | pass | 261 | key_file.seek(0) |
3482 | 262 | subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file) | ||
3483 | 263 | else: | ||
3484 | 264 | # Note that hkp: is in no way a secure protocol. Using a | ||
3485 | 265 | # GPG key id is pointless from a security POV unless you | ||
3486 | 266 | # absolutely trust your network and DNS. | ||
3487 | 267 | subprocess.check_call(['apt-key', 'adv', '--keyserver', | ||
3488 | 268 | 'hkp://keyserver.ubuntu.com:80', '--recv', | ||
3489 | 269 | key]) | ||
3490 | 164 | 270 | ||
3491 | 165 | 271 | ||
3492 | 166 | def configure_sources(update=False, | 272 | def configure_sources(update=False, |
3493 | 167 | sources_var='install_sources', | 273 | sources_var='install_sources', |
3494 | 168 | keys_var='install_keys'): | 274 | keys_var='install_keys'): |
3495 | 169 | """ | 275 | """ |
3497 | 170 | Configure multiple sources from charm configuration | 276 | Configure multiple sources from charm configuration. |
3498 | 277 | |||
3499 | 278 | The lists are encoded as yaml fragments in the configuration. | ||
3500 | 279 | The fragment needs to be included as a string. Sources and their ||
3501 | 280 | corresponding keys are of the types supported by add_source(). | ||
3502 | 171 | 281 | ||
3503 | 172 | Example config: | 282 | Example config: |
3505 | 173 | install_sources: | 283 | install_sources: | |
3506 | 174 | - "ppa:foo" | 284 | - "ppa:foo" |
3507 | 175 | - "http://example.com/repo precise main" | 285 | - "http://example.com/repo precise main" |
3509 | 176 | install_keys: | 286 | install_keys: | |
3510 | 177 | - null | 287 | - null |
3511 | 178 | - "a1b2c3d4" | 288 | - "a1b2c3d4" |
3512 | 179 | 289 | ||
3513 | 180 | Note that 'null' (a.k.a. None) should not be quoted. | 290 | Note that 'null' (a.k.a. None) should not be quoted. |
3514 | 181 | """ | 291 | """ |
3522 | 182 | sources = safe_load(config(sources_var)) | 292 | sources = safe_load((config(sources_var) or '').strip()) or [] |
3523 | 183 | keys = config(keys_var) | 293 | keys = safe_load((config(keys_var) or '').strip()) or None |
3524 | 184 | if keys is not None: | 294 | |
3525 | 185 | keys = safe_load(keys) | 295 | if isinstance(sources, basestring): |
3526 | 186 | if isinstance(sources, basestring) and ( | 296 | sources = [sources] |
3527 | 187 | keys is None or isinstance(keys, basestring)): | 297 | |
3528 | 188 | add_source(sources, keys) | 298 | if keys is None: |
3529 | 299 | for source in sources: | ||
3530 | 300 | add_source(source, None) | ||
3531 | 189 | else: | 301 | else: |
3537 | 190 | if not len(sources) == len(keys): | 302 | if isinstance(keys, basestring): |
3538 | 191 | msg = 'Install sources and keys lists are different lengths' | 303 | keys = [keys] |
3539 | 192 | raise SourceConfigError(msg) | 304 | |
3540 | 193 | for src_num in range(len(sources)): | 305 | if len(sources) != len(keys): |
3541 | 194 | add_source(sources[src_num], keys[src_num]) | 306 | raise SourceConfigError( |
3542 | 307 | 'Install sources and keys lists are different lengths') | ||
3543 | 308 | for source, key in zip(sources, keys): | ||
3544 | 309 | add_source(source, key) | ||
3545 | 195 | if update: | 310 | if update: |
3546 | 196 | apt_update(fatal=True) | 311 | apt_update(fatal=True) |
3547 | 197 | 312 | ||
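The rewritten `configure_sources` above normalizes string-or-list values for sources and keys and then zips them together. A minimal Python 3 sketch of just that pairing logic (the function name is hypothetical; the real helper first pulls the raw values from Juju config via `yaml.safe_load`):

```python
def pair_sources_with_keys(sources, keys=None):
    """Pair each install source with its key, mirroring configure_sources.

    A bare string is treated as a one-element list; a missing keys value
    means every source is unsigned (its key is None).
    """
    if isinstance(sources, str):
        sources = [sources]
    if keys is None:
        return [(source, None) for source in sources]
    if isinstance(keys, str):
        keys = [keys]
    if len(sources) != len(keys):
        raise ValueError("Install sources and keys lists are different lengths")
    return list(zip(sources, keys))
```

Each resulting pair is then handed to `add_source(source, key)` in turn, which is why mismatched list lengths must be rejected up front.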
3548 | 198 | # The order of this list is very important. Handlers should be listed in from | ||
3549 | 199 | # least- to most-specific URL matching. | ||
3550 | 200 | FETCH_HANDLERS = ( | ||
3551 | 201 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', | ||
3552 | 202 | 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', | ||
3553 | 203 | ) | ||
3554 | 204 | |||
3555 | 205 | |||
3556 | 206 | class UnhandledSource(Exception): | ||
3557 | 207 | pass | ||
3558 | 208 | |||
3559 | 209 | 313 | ||
3560 | 210 | def install_remote(source): | 314 | def install_remote(source): |
3561 | 211 | """ | 315 | """ |
3562 | @@ -236,30 +340,6 @@ | |||
3563 | 236 | return install_remote(source) | 340 | return install_remote(source) |
3564 | 237 | 341 | ||
3565 | 238 | 342 | ||
3566 | 239 | class BaseFetchHandler(object): | ||
3567 | 240 | |||
3568 | 241 | """Base class for FetchHandler implementations in fetch plugins""" | ||
3569 | 242 | |||
3570 | 243 | def can_handle(self, source): | ||
3571 | 244 | """Returns True if the source can be handled. Otherwise returns | ||
3572 | 245 | a string explaining why it cannot""" | ||
3573 | 246 | return "Wrong source type" | ||
3574 | 247 | |||
3575 | 248 | def install(self, source): | ||
3576 | 249 | """Try to download and unpack the source. Return the path to the | ||
3577 | 250 | unpacked files or raise UnhandledSource.""" | ||
3578 | 251 | raise UnhandledSource("Wrong source type {}".format(source)) | ||
3579 | 252 | |||
3580 | 253 | def parse_url(self, url): | ||
3581 | 254 | return urlparse(url) | ||
3582 | 255 | |||
3583 | 256 | def base_url(self, url): | ||
3584 | 257 | """Return url without querystring or fragment""" | ||
3585 | 258 | parts = list(self.parse_url(url)) | ||
3586 | 259 | parts[4:] = ['' for i in parts[4:]] | ||
3587 | 260 | return urlunparse(parts) | ||
3588 | 261 | |||
3589 | 262 | |||
3590 | 263 | def plugins(fetch_handlers=None): | 343 | def plugins(fetch_handlers=None): |
3591 | 264 | if not fetch_handlers: | 344 | if not fetch_handlers: |
3592 | 265 | fetch_handlers = FETCH_HANDLERS | 345 | fetch_handlers = FETCH_HANDLERS |
3593 | @@ -277,3 +357,40 @@ | |||
3594 | 277 | log("FetchHandler {} not found, skipping plugin".format( | 357 | log("FetchHandler {} not found, skipping plugin".format( |
3595 | 278 | handler_name)) | 358 | handler_name)) |
3596 | 279 | return plugin_list | 359 | return plugin_list |
3597 | 360 | |||
3598 | 361 | |||
3599 | 362 | def _run_apt_command(cmd, fatal=False): | ||
3600 | 363 | """ | ||
3601 | 364 | Run an APT command, checking output and retrying if the fatal flag is set | ||
3602 | 365 | to True. | ||
3603 | 366 | |||
3604 | 367 | :param: cmd: str: The apt command to run. | ||
3605 | 368 | :param: fatal: bool: Whether the command's output should be checked and | ||
3606 | 369 | retried. | ||
3607 | 370 | """ | ||
3608 | 371 | env = os.environ.copy() | ||
3609 | 372 | |||
3610 | 373 | if 'DEBIAN_FRONTEND' not in env: | ||
3611 | 374 | env['DEBIAN_FRONTEND'] = 'noninteractive' | ||
3612 | 375 | |||
3613 | 376 | if fatal: | ||
3614 | 377 | retry_count = 0 | ||
3615 | 378 | result = None | ||
3616 | 379 | |||
3617 | 380 | # If the command is considered "fatal", we need to retry if the apt | ||
3618 | 381 | # lock was not acquired. | ||
3619 | 382 | |||
3620 | 383 | while result is None or result == APT_NO_LOCK: | ||
3621 | 384 | try: | ||
3622 | 385 | result = subprocess.check_call(cmd, env=env) | ||
3623 | 386 | except subprocess.CalledProcessError, e: | ||
3624 | 387 | retry_count = retry_count + 1 | ||
3625 | 388 | if retry_count > APT_NO_LOCK_RETRY_COUNT: | ||
3626 | 389 | raise | ||
3627 | 390 | result = e.returncode | ||
3628 | 391 | log("Couldn't acquire DPKG lock. Will retry in {} seconds." | ||
3629 | 392 | "".format(APT_NO_LOCK_RETRY_DELAY)) | ||
3630 | 393 | time.sleep(APT_NO_LOCK_RETRY_DELAY) | ||
3631 | 394 | |||
3632 | 395 | else: | ||
3633 | 396 | subprocess.call(cmd, env=env) | ||
3634 | 280 | 397 | ||
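`_run_apt_command` above retries while apt exits with the lock-contention code. The same pattern in isolation, as a Python 3 sketch (the constant values are assumptions; the real module defines `APT_NO_LOCK` and the retry limits elsewhere, and the injectable `runner`/`sleep` parameters are added here only to make the loop testable):

```python
import subprocess
import time

APT_NO_LOCK = 100              # assumed: apt's exit code when the dpkg lock is held
APT_NO_LOCK_RETRY_COUNT = 30   # assumed retry limit
APT_NO_LOCK_RETRY_DELAY = 10   # assumed delay in seconds

def run_with_lock_retry(cmd, runner=subprocess.check_call,
                        retries=APT_NO_LOCK_RETRY_COUNT,
                        delay=APT_NO_LOCK_RETRY_DELAY, sleep=time.sleep):
    """Run cmd, retrying while it fails with the apt lock-contention code."""
    for attempt in range(retries + 1):
        try:
            return runner(cmd)
        except subprocess.CalledProcessError as e:
            if e.returncode != APT_NO_LOCK or attempt == retries:
                raise  # a different failure, or retries exhausted
            sleep(delay)  # wait for the other package manager to finish
```

Any non-lock failure propagates immediately; only the "could not get lock" exit code is worth waiting out.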
3635 | === modified file 'hooks/charmhelpers/fetch/archiveurl.py' | |||
3636 | --- hooks/charmhelpers/fetch/archiveurl.py 2014-01-28 00:01:57 +0000 | |||
3637 | +++ hooks/charmhelpers/fetch/archiveurl.py 2014-09-10 21:17:48 +0000 | |||
3638 | @@ -1,5 +1,9 @@ | |||
3639 | 1 | import os | 1 | import os |
3640 | 2 | import urllib2 | 2 | import urllib2 |
3641 | 3 | from urllib import urlretrieve | ||
3642 | 4 | import urlparse | ||
3643 | 5 | import hashlib | ||
3644 | 6 | |||
3645 | 3 | from charmhelpers.fetch import ( | 7 | from charmhelpers.fetch import ( |
3646 | 4 | BaseFetchHandler, | 8 | BaseFetchHandler, |
3647 | 5 | UnhandledSource | 9 | UnhandledSource |
3648 | @@ -10,7 +14,17 @@ | |||
3649 | 10 | ) | 14 | ) |
3650 | 11 | from charmhelpers.core.host import mkdir | 15 | from charmhelpers.core.host import mkdir |
3651 | 12 | 16 | ||
3653 | 13 | 17 | """ | |
3654 | 18 | This class is a plugin for charmhelpers.fetch.install_remote. | ||
3655 | 19 | |||
3656 | 20 | It grabs, validates and installs remote archives fetched over "http", "https", "ftp" or "file" protocols. The contents of the archive are installed in $CHARM_DIR/fetched/. | ||
3657 | 21 | |||
3658 | 22 | Example usage: | ||
3659 | 23 | install_remote("https://example.com/some/archive.tar.gz") | ||
3660 | 24 | # Installs the contents of archive.tar.gz in $CHARM_DIR/fetched/. | ||
3661 | 25 | |||
3662 | 26 | See charmhelpers.fetch.archiveurl.get_archivehandler for supported archive types. | ||
3663 | 27 | """ | ||
3664 | 14 | class ArchiveUrlFetchHandler(BaseFetchHandler): | 28 | class ArchiveUrlFetchHandler(BaseFetchHandler): |
3665 | 15 | """Handler for archives via generic URLs""" | 29 | """Handler for archives via generic URLs""" |
3666 | 16 | def can_handle(self, source): | 30 | def can_handle(self, source): |
3667 | @@ -24,6 +38,19 @@ | |||
3668 | 24 | def download(self, source, dest): | 38 | def download(self, source, dest): |
3669 | 25 | # propagate all exceptions | 39 | # propagate all exceptions |
3670 | 26 | # URLError, OSError, etc | 40 | # URLError, OSError, etc |
3671 | 41 | proto, netloc, path, params, query, fragment = urlparse.urlparse(source) | ||
3672 | 42 | if proto in ('http', 'https'): | ||
3673 | 43 | auth, barehost = urllib2.splituser(netloc) | ||
3674 | 44 | if auth is not None: | ||
3675 | 45 | source = urlparse.urlunparse((proto, barehost, path, params, query, fragment)) | ||
3676 | 46 | username, password = urllib2.splitpasswd(auth) | ||
3677 | 47 | passman = urllib2.HTTPPasswordMgrWithDefaultRealm() | ||
3678 | 48 | # Realm is set to None in add_password to force the username and password | ||
3679 | 49 | # to be used whatever the realm | ||
3680 | 50 | passman.add_password(None, source, username, password) | ||
3681 | 51 | authhandler = urllib2.HTTPBasicAuthHandler(passman) | ||
3682 | 52 | opener = urllib2.build_opener(authhandler) | ||
3683 | 53 | urllib2.install_opener(opener) | ||
3684 | 27 | response = urllib2.urlopen(source) | 54 | response = urllib2.urlopen(source) |
3685 | 28 | try: | 55 | try: |
3686 | 29 | with open(dest, 'w') as dest_file: | 56 | with open(dest, 'w') as dest_file: |
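The new `download()` support above strips embedded `user:password` credentials out of http(s) URLs and feeds them to an `HTTPBasicAuthHandler`. A Python 3 sketch of the same idea (the merged code uses the Python 2 `urllib2.splituser`/`splitpasswd` helpers instead; `opener_for` is a hypothetical name):

```python
import urllib.request
from urllib.parse import urlsplit, urlunsplit

def split_credentials(url):
    """Return (bare_url, username, password); credentials may be None."""
    parts = urlsplit(url)
    if parts.username is None:
        return url, None, None
    netloc = parts.hostname
    if parts.port:
        netloc = "{}:{}".format(netloc, parts.port)
    bare = urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))
    return bare, parts.username, parts.password

def opener_for(url):
    """Build an opener that sends HTTP basic auth if the URL embeds credentials."""
    bare, user, password = split_credentials(url)
    if user is None:
        return bare, urllib.request.build_opener()
    manager = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    # realm=None: use these credentials whatever realm the server advertises
    manager.add_password(None, bare, user, password)
    handler = urllib.request.HTTPBasicAuthHandler(manager)
    return bare, urllib.request.build_opener(handler)
```

Passing `None` as the realm is the same trick the diff comments on: it forces the credentials to be sent regardless of what realm the server reports.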
3687 | @@ -46,3 +73,31 @@ | |||
3688 | 46 | except OSError as e: | 73 | except OSError as e: |
3689 | 47 | raise UnhandledSource(e.strerror) | 74 | raise UnhandledSource(e.strerror) |
3690 | 48 | return extract(dld_file) | 75 | return extract(dld_file) |
3691 | 76 | |||
3692 | 77 | # Mandatory file validation via Sha1 or MD5 hashing. | ||
3693 | 78 | def download_and_validate(self, url, hashsum, validate="sha1"): | ||
3694 | 79 | if validate == 'sha1' and len(hashsum) != 40: | ||
3695 | 80 | raise ValueError("HashSum must be = 40 characters when using sha1" | ||
3696 | 81 | " validation") | ||
3697 | 82 | if validate == 'md5' and len(hashsum) != 32: | ||
3698 | 83 | raise ValueError("HashSum must be = 32 characters when using md5" | ||
3699 | 84 | " validation") | ||
3700 | 85 | tempfile, headers = urlretrieve(url) | ||
3701 | 86 | self.validate_file(tempfile, hashsum, validate) | ||
3702 | 87 | return tempfile | ||
3703 | 88 | |||
3704 | 89 | # Predicate method that returns status of hash matching expected hash. | ||
3705 | 90 | def validate_file(self, source, hashsum, vmethod='sha1'): | ||
3706 | 91 | if vmethod != 'sha1' and vmethod != 'md5': | ||
3707 | 92 | raise ValueError("Validation Method not supported") | ||
3708 | 93 | |||
3709 | 94 | if vmethod == 'md5': | ||
3710 | 95 | m = hashlib.md5() | ||
3711 | 96 | if vmethod == 'sha1': | ||
3712 | 97 | m = hashlib.sha1() | ||
3713 | 98 | with open(source) as f: | ||
3714 | 99 | for line in f: | ||
3715 | 100 | m.update(line) | ||
3716 | 101 | if hashsum != m.hexdigest(): | ||
3717 | 102 | msg = "Hash Mismatch on {} expected {} got {}" | ||
3718 | 103 | raise ValueError(msg.format(source, hashsum, m.hexdigest())) | ||
3719 | 49 | 104 | ||
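`download_and_validate` above checks a fetched archive against a sha1 or md5 digest. A Python 3 sketch of the validation step; note it reads the file in binary chunks rather than iterating text lines, since treating a tarball as text can choke on non-UTF-8 bytes (a hazard the py2 version above sidesteps only because py2 `open` returns raw bytes):

```python
import hashlib

def validate_file(path, hashsum, vmethod="sha1"):
    """Raise ValueError unless the file's vmethod digest equals hashsum."""
    if vmethod not in ("sha1", "md5"):
        raise ValueError("Validation method not supported: {}".format(vmethod))
    digest = hashlib.new(vmethod)
    with open(path, "rb") as f:          # binary mode: archives are not text
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    if digest.hexdigest() != hashsum:
        raise ValueError("Hash mismatch on {}: expected {} got {}".format(
            path, hashsum, digest.hexdigest()))
```

Chunked reading keeps memory flat for large archives, and `hashlib.new(vmethod)` replaces the pair of `if vmethod == ...` branches in the merged code.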
3720 | === modified file 'hooks/charmhelpers/fetch/bzrurl.py' | |||
3721 | --- hooks/charmhelpers/fetch/bzrurl.py 2014-01-28 00:01:57 +0000 | |||
3722 | +++ hooks/charmhelpers/fetch/bzrurl.py 2014-09-10 21:17:48 +0000 | |||
3723 | @@ -39,7 +39,8 @@ | |||
3724 | 39 | def install(self, source): | 39 | def install(self, source): |
3725 | 40 | url_parts = self.parse_url(source) | 40 | url_parts = self.parse_url(source) |
3726 | 41 | branch_name = url_parts.path.strip("/").split("/")[-1] | 41 | branch_name = url_parts.path.strip("/").split("/")[-1] |
3728 | 42 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name) | 42 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", |
3729 | 43 | branch_name) | ||
3730 | 43 | if not os.path.exists(dest_dir): | 44 | if not os.path.exists(dest_dir): |
3731 | 44 | mkdir(dest_dir, perms=0755) | 45 | mkdir(dest_dir, perms=0755) |
3732 | 45 | try: | 46 | try: |
3733 | 46 | 47 | ||
3734 | === modified file 'hooks/hooks.py' | |||
3735 | --- hooks/hooks.py 2014-08-22 07:52:20 +0000 | |||
3736 | +++ hooks/hooks.py 2014-09-10 21:17:48 +0000 | |||
3737 | @@ -1,6 +1,7 @@ | |||
3738 | 1 | #!/usr/bin/env python | 1 | #!/usr/bin/env python |
3739 | 2 | # vim: et ai ts=4 sw=4: | 2 | # vim: et ai ts=4 sw=4: |
3740 | 3 | 3 | ||
3741 | 4 | from charmhelpers.contrib.openstack.utils import configure_installation_source | ||
3742 | 4 | from charmhelpers import fetch | 5 | from charmhelpers import fetch |
3743 | 5 | from charmhelpers.core import hookenv | 6 | from charmhelpers.core import hookenv |
3744 | 6 | from charmhelpers.core.hookenv import ERROR, INFO | 7 | from charmhelpers.core.hookenv import ERROR, INFO |
3745 | @@ -8,7 +9,7 @@ | |||
3746 | 8 | import json | 9 | import json |
3747 | 9 | import os | 10 | import os |
3748 | 10 | import sys | 11 | import sys |
3750 | 11 | from util import StorageServiceUtil, generate_volume_label, get_running_series | 12 | from util import StorageServiceUtil, generate_volume_label |
3751 | 12 | 13 | ||
3752 | 13 | hooks = hookenv.Hooks() | 14 | hooks = hookenv.Hooks() |
3753 | 14 | 15 | ||
3754 | @@ -84,13 +85,12 @@ | |||
3755 | 84 | if apt_install is None: # for testing purposes | 85 | if apt_install is None: # for testing purposes |
3756 | 85 | apt_install = fetch.apt_install | 86 | apt_install = fetch.apt_install |
3757 | 86 | if add_source is None: # for testing purposes | 87 | if add_source is None: # for testing purposes |
3759 | 87 | add_source = fetch.add_source | 88 | add_source = configure_installation_source |
3760 | 88 | 89 | ||
3761 | 89 | provider = hookenv.config("provider") | 90 | provider = hookenv.config("provider") |
3762 | 90 | if provider == "nova": | 91 | if provider == "nova": |
3763 | 92 | add_source(hookenv.config('openstack-origin')) | ||
3764 | 91 | required_packages = ["python-novaclient"] | 93 | required_packages = ["python-novaclient"] |
3765 | 92 | if int(get_running_series()['release'].split(".")[0]) < 14: | ||
3766 | 93 | add_source("cloud-archive:havana") | ||
3767 | 94 | elif provider == "ec2": | 94 | elif provider == "ec2": |
3768 | 95 | required_packages = ["python-boto"] | 95 | required_packages = ["python-boto"] |
3769 | 96 | fetch.apt_update(fatal=True) | 96 | fetch.apt_update(fatal=True) |
3770 | 97 | 97 | ||
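The hooks.py change above routes nova installs through `configure_installation_source` with the `openstack-origin` config value, while keeping `apt_install`/`add_source` injectable so the tests below can substitute fakes. A minimal sketch of that flow, stripped of the Juju plumbing (names and the plain-dict config are simplifications of the real hook):

```python
def install_hook(config, apt_update, apt_install, add_source):
    """Select packages per provider; nova first configures its apt origin."""
    provider = config.get("provider")
    if provider == "nova":
        # e.g. "cloud:precise-folsom/staging" from the openstack-origin option
        add_source(config.get("openstack-origin"))
        required_packages = ["python-novaclient"]
    elif provider == "ec2":
        required_packages = ["python-boto"]
    else:
        raise ValueError("Unknown provider: {!r}".format(provider))
    apt_update(fatal=True)
    apt_install(required_packages, fatal=True)
    return required_packages
```

This is the same dependency-injection pattern the charm's tests rely on: each collaborator arrives as a parameter, so a test can record calls instead of touching apt.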
3771 | === modified file 'hooks/test_hooks.py' | |||
3772 | --- hooks/test_hooks.py 2014-09-09 16:24:46 +0000 | |||
3773 | +++ hooks/test_hooks.py 2014-09-10 21:17:48 +0000 | |||
3774 | @@ -16,7 +16,8 @@ | |||
3775 | 16 | {"key": "myusername", "tenant": "myusername_project", | 16 | {"key": "myusername", "tenant": "myusername_project", |
3776 | 17 | "secret": "password", "region": "region1", "provider": "nova", | 17 | "secret": "password", "region": "region1", "provider": "nova", |
3777 | 18 | "endpoint": "https://keystone_url:443/v2.0/", | 18 | "endpoint": "https://keystone_url:443/v2.0/", |
3779 | 19 | "default_volume_size": 11}) | 19 | "default_volume_size": 11, |
3780 | 20 | "openstack-origin": "cloud:precise-folsom/staging"}) | ||
3781 | 20 | 21 | ||
3782 | 21 | def test_wb_persist_data_creates_persist_file_if_it_doesnt_exist(self): | 22 | def test_wb_persist_data_creates_persist_file_if_it_doesnt_exist(self): |
3783 | 22 | """ | 23 | """ |
3784 | @@ -182,46 +183,23 @@ | |||
3785 | 182 | self.mocker.replay() | 183 | self.mocker.replay() |
3786 | 183 | hooks.config_changed() | 184 | hooks.config_changed() |
3787 | 184 | 185 | ||
3828 | 185 | def test_install_installs_novaclient_and_no_cloud_archive_on_trusty(self): | 186 | def test_install_installs_novaclient_from_openstack_origin_config(self): |
3829 | 186 | """ | 187 | """ |
3830 | 187 | On trusty, 14.04, and later, L{install} will not call | 188 | When C{provider} is nova, L{install} will call the charmhelper's |
3831 | 188 | C{fetch.add_source} to add a cloud repository but it will install the | 189 | C{configure_installation_source} to add the appropriate cloud archive |
3832 | 189 | install the C{python-novaclient} package. | 190 | for the configured C{openstack-origin}. The C{python-novaclient} |
3833 | 190 | """ | 191 | package will then be installed. |
3834 | 191 | get_running_series = self.mocker.replace(hooks.get_running_series) | 192 | """ |
3835 | 192 | get_running_series() | 193 | apt_update = self.mocker.replace(fetch.apt_update) |
3836 | 193 | self.mocker.result({'release': '14.04'}) # Trusty series | 194 | apt_update(fatal=True) |
3837 | 194 | add_source = self.mocker.replace(fetch.add_source) | 195 | self.mocker.replay() |
3838 | 195 | add_source("cloud-archive:havana") | 196 | |
3839 | 196 | self.mocker.count(0) # Test we never called add_source | 197 | def apt_install(packages, fatal): |
3840 | 197 | apt_update = self.mocker.replace(fetch.apt_update) | 198 | self.assertEqual(["python-novaclient"], packages) |
3841 | 198 | apt_update(fatal=True) | 199 | self.assertTrue(fatal) |
3842 | 199 | self.mocker.replay() | 200 | |
3843 | 200 | 201 | def add_source(origin): | |
3844 | 201 | def apt_install(packages, fatal): | 202 | self.assertEqual("cloud:precise-folsom/staging", origin) |
3805 | 202 | self.assertEqual(["python-novaclient"], packages) | ||
3806 | 203 | self.assertTrue(fatal) | ||
3807 | 204 | |||
3808 | 205 | hooks.install(apt_install=apt_install, add_source=add_source) | ||
3809 | 206 | |||
3810 | 207 | def test_precise_install_adds_apt_source_and_installs_novaclient(self): | ||
3811 | 208 | """ | ||
3812 | 209 | L{install} will call C{fetch.add_source} to add a cloud repository and | ||
3813 | 210 | install the C{python-novaclient} package. | ||
3814 | 211 | """ | ||
3815 | 212 | get_running_series = self.mocker.replace(hooks.get_running_series) | ||
3816 | 213 | get_running_series() | ||
3817 | 214 | self.mocker.result({'release': '12.04'}) # precise | ||
3818 | 215 | apt_update = self.mocker.replace(fetch.apt_update) | ||
3819 | 216 | apt_update(fatal=True) | ||
3820 | 217 | self.mocker.replay() | ||
3821 | 218 | |||
3822 | 219 | def add_source(source): | ||
3823 | 220 | self.assertEqual("cloud-archive:havana", source) | ||
3824 | 221 | |||
3825 | 222 | def apt_install(packages, fatal): | ||
3826 | 223 | self.assertEqual(["python-novaclient"], packages) | ||
3827 | 224 | self.assertTrue(fatal) | ||
3845 | 225 | 203 | ||
3846 | 226 | hooks.install(apt_install=apt_install, add_source=add_source) | 204 | hooks.install(apt_install=apt_install, add_source=add_source) |
3847 | 227 | 205 |
Hi Chad -- Thanks for this MP!
I don't see any reason why this would be controversial; please clear the merge conflict with trunk and I'll review and commit it straightaway.