Merge lp:~tribaal/charms/precise/storage/add-ceph-storage-provider into lp:charms/storage
Status: Work in progress
Proposed branch: lp:~tribaal/charms/precise/storage/add-ceph-storage-provider
Merge into: lp:charms/storage
Diff against target: 2112 lines (+1834/-27), 19 files modified:
  .bzrignore (+1/-0)
  Makefile (+9/-9)
  charm-helpers.yaml (+1/-0)
  hooks/ceph_common.py (+12/-0)
  hooks/charmhelpers/contrib/storage/linux/ceph.py (+387/-0)
  hooks/charmhelpers/core/fstab.py (+116/-0)
  hooks/charmhelpers/core/hookenv.py (+111/-6)
  hooks/charmhelpers/core/host.py (+85/-12)
  hooks/charmhelpers/core/services/__init__.py (+2/-0)
  hooks/charmhelpers/core/services/base.py (+305/-0)
  hooks/charmhelpers/core/services/helpers.py (+125/-0)
  hooks/charmhelpers/core/templating.py (+51/-0)
  hooks/charmhelpers/fetch/__init__.py (+394/-0)
  hooks/charmhelpers/fetch/archiveurl.py (+63/-0)
  hooks/charmhelpers/fetch/bzrurl.py (+50/-0)
  hooks/storage-provider.d/ceph/ceph-relation-changed (+69/-0)
  hooks/storage-provider.d/ceph/config-changed (+6/-0)
  hooks/storage-provider.d/ceph/data-relation-changed (+45/-0)
  metadata.yaml (+2/-0)
To merge this branch: bzr merge lp:~tribaal/charms/precise/storage/add-ceph-storage-provider
Related bugs:
Reviewer: David Britton (community), status: Needs Information
Review via email: mp+232239@code.launchpad.net
Commit message
Description of the change
This branch adds basic support for a "ceph" storage provider, which creates and mounts a Ceph RBD image to use as storage.
It is currently rough (not 100% working), but I'd like to have eyes on the code before I go any further with it, since I don't understand exactly what's wrong with my approach.
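For reviewers skimming the diff, the ordering enforced by the `ensure_ceph_storage()` helper synced in from charmhelpers (pool, then RBD image, then block-device mapping, each step idempotent) can be sketched with the cluster calls stubbed out. The helper and step names come from the diff below; the standalone `state` dict standing in for the ceph cluster is purely illustrative:

```python
def ensure_storage_flow(service, pool, image, size_mb, state):
    """Model of ensure_ceph_storage()'s ordering: pool -> image -> mapping.

    The real helper shells out to `rados`/`rbd`; here `state` is a plain
    dict standing in for the cluster, so the control flow is runnable.
    Returns the list of steps that were actually needed this invocation.
    """
    steps = []
    # Step 1: create the pool only if it does not already exist.
    if pool not in state.setdefault("pools", set()):
        state["pools"].add(pool)
        steps.append("create_pool")
    # Step 2: create the RBD image inside the pool, again idempotently.
    images = state.setdefault("images", set())
    if (pool, image) not in images:
        images.add((pool, image))
        steps.append("create_rbd_image")
    # Step 3: map the image to a local block device if not yet mapped.
    mapped = state.setdefault("mapped", set())
    if image not in mapped:
        mapped.add(image)
        steps.append("map_block_storage")
    return steps
```

Running the flow twice against the same state shows the idempotence: the second invocation finds everything in place and performs no steps.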
- 44. By Chris Glass: Added ceph-relation-changed hook symlink.
- 45. By Chris Glass: Make the ceph-relation-changed hook noop if all the necessary information is not present (like the key).
- 46. By Chris Glass: added relation debug output
David Britton (dpb) wrote:
Hi Chris -- Thanks so much for doing this!
The general approach looks great. Can you implement the data-relation-
Before I run it through its paces, I wanted to make sure you include a sample deployment for working with ceph, at least here in the review, so we can compare apples to apples.
Also, I added a simple nit as a diff comment.
This change is quite exciting, glad so much of it is contained in charmhelpers. :)
Chris Glass (tribaal) wrote:
Yes, I started working on the unmount logic, but then realised this branch was already 2k lines, so I decided against putting even more code in.
I'll add it in here, no problem.
- 47. By Chris Glass: Added exit(0)!
Unmerged revisions
- 47. By Chris Glass: Added exit(0)!
- 46. By Chris Glass: added relation debug output
- 45. By Chris Glass: Make the ceph-relation-changed hook noop if all the necessary information is not present (like the key).
- 44. By Chris Glass: Added ceph-relation-changed hook symlink.
- 43. By Chris Glass: Added some debug logs...
- 42. By Chris Glass: Added logs to the data-relation-changed hook
- 41. By Chris Glass: Put the common ceph code in a proper python module.
- 40. By Chris Glass: Made hook executable. Duh.
- 39. By Chris Glass: Update charm-helpers along with the make file to sync them.
- 38. By Chris Glass: The hooks should work
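The "noop" behaviour added in revision 45 amounts to an early-exit guard at the top of the hook: return cleanly when required relation data has not arrived yet. A minimal sketch of that pattern follows; the function name and the `relation_data` dict are illustrative, not the charm's actual hook source:

```python
import sys


def ceph_relation_changed(relation_data):
    """Exit cleanly when required relation data is not yet present.

    Juju can fire relation-changed before the remote unit has set all of
    its keys, so a hook must tolerate partial data and simply return; it
    will be re-invoked when the relation data changes again.
    """
    key = relation_data.get("key")  # the ceph auth key, per the thread above
    if key is None:
        return 0  # noop: nothing to do until the key appears
    # A real hook would now write the keyring and map the RBD device.
    return 0


if __name__ == "__main__":
    # Hooks signal success/failure to Juju via their exit status.
    sys.exit(ceph_relation_changed({}))
```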
Preview Diff
=== modified file '.bzrignore'
--- .bzrignore	2014-01-31 23:49:37 +0000
+++ .bzrignore	2014-09-09 05:13:17 +0000
@@ -1,2 +1,3 @@
 _trial_temp
 charm-helpers
+bin/
=== modified file 'Makefile'
--- Makefile	2014-02-10 22:52:50 +0000
+++ Makefile	2014-09-09 05:13:17 +0000
@@ -1,4 +1,6 @@
+#!/usr/bin/make
 .PHONY: test lint clean
+PYTHON := /usr/bin/env python
 
 clean:
 	find . -name *.pyc | xargs rm
@@ -11,12 +13,10 @@
 	# flake any python scripts not named *.py
 	fgrep -r bin/python hooks/ | awk -F : '{print $$1}' | xargs -r -n1 flake8
 
-update-charm-helpers:
-	# Pull latest charm-helpers branch and sync the components based on our
-	# charm-helpers.yaml
-	rm -rf charm-helpers
-	bzr co lp:charm-helpers
-	./charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py -c charm-helpers.yaml
-	rm -rf charm-helpers
+bin/charm_helpers_sync.py:
+	@mkdir -p bin
+	@bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
+		> bin/charm_helpers_sync.py
+
+sync: bin/charm_helpers_sync.py
+	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
 
=== modified file 'charm-helpers.yaml'
--- charm-helpers.yaml	2014-01-31 23:49:37 +0000
+++ charm-helpers.yaml	2014-09-09 05:13:17 +0000
@@ -3,3 +3,4 @@
 include:
     - core
     - fetch
+    - contrib.storage.linux.ceph
 
=== added symlink 'hooks/ceph-relation-changed'
=== target is u'hooks'
=== added file 'hooks/ceph_common.py'
--- hooks/ceph_common.py	1970-01-01 00:00:00 +0000
+++ hooks/ceph_common.py	2014-09-09 05:13:17 +0000
@@ -0,0 +1,12 @@
+
+from charmhelpers.core.hookenv import (
+    service_name,
+)
+
+SERVICE_NAME = service_name()  # TODO: Make sure it's the unit - not "storage"
+TEMPORARY_MOUNT_POINT = "/mnt/temp-ceph/%s" % SERVICE_NAME
+POOL_NAME = SERVICE_NAME
+
+
+RDB_IMG = "storage"
+BLK_DEVICE = '/dev/rbd/%s/%s' % (POOL_NAME, RDB_IMG)
=== added directory 'hooks/charmhelpers/contrib'
=== added file 'hooks/charmhelpers/contrib/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/storage'
=== added file 'hooks/charmhelpers/contrib/storage/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/storage/linux'
=== added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py'
=== added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py	1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py	2014-09-09 05:13:17 +0000
@@ -0,0 +1,387 @@
+#
+# Copyright 2012 Canonical Ltd.
+#
+# This file is sourced from lp:openstack-charm-helpers
+#
+# Authors:
+#  James Page <james.page@ubuntu.com>
+#  Adam Gandelman <adamg@ubuntu.com>
+#
+
+import os
+import shutil
+import json
+import time
+
+from subprocess import (
+    check_call,
+    check_output,
+    CalledProcessError
+)
+
+from charmhelpers.core.hookenv import (
+    relation_get,
+    relation_ids,
+    related_units,
+    log,
+    INFO,
+    WARNING,
+    ERROR
+)
+
+from charmhelpers.core.host import (
+    mount,
+    mounts,
+    service_start,
+    service_stop,
+    service_running,
+    umount,
+)
+
+from charmhelpers.fetch import (
+    apt_install,
+)
+
+KEYRING = '/etc/ceph/ceph.client.{}.keyring'
+KEYFILE = '/etc/ceph/ceph.client.{}.key'
+
+CEPH_CONF = """[global]
+ auth supported = {auth}
+ keyring = {keyring}
+ mon host = {mon_hosts}
+ log to syslog = {use_syslog}
+ err to syslog = {use_syslog}
+ clog to syslog = {use_syslog}
+"""
+
+
+def install():
+    ''' Basic Ceph client installation '''
+    ceph_dir = "/etc/ceph"
+    if not os.path.exists(ceph_dir):
+        os.mkdir(ceph_dir)
+    apt_install('ceph-common', fatal=True)
+
+
+def rbd_exists(service, pool, rbd_img):
+    ''' Check to see if a RADOS block device exists '''
+    try:
+        out = check_output(['rbd', 'list', '--id', service,
+                            '--pool', pool])
+    except CalledProcessError:
+        return False
+    else:
+        return rbd_img in out
+
+
+def create_rbd_image(service, pool, image, sizemb):
+    ''' Create a new RADOS block device '''
+    cmd = [
+        'rbd',
+        'create',
+        image,
+        '--size',
+        str(sizemb),
+        '--id',
+        service,
+        '--pool',
+        pool
+    ]
+    check_call(cmd)
+
+
+def pool_exists(service, name):
+    ''' Check to see if a RADOS pool already exists '''
+    try:
+        out = check_output(['rados', '--id', service, 'lspools'])
+    except CalledProcessError:
+        return False
+    else:
+        return name in out
+
+
+def get_osds(service):
+    '''
+    Return a list of all Ceph Object Storage Daemons
+    currently in the cluster
+    '''
+    version = ceph_version()
+    if version and version >= '0.56':
+        return json.loads(check_output(['ceph', '--id', service,
+                                        'osd', 'ls', '--format=json']))
+    else:
+        return None
+
+
+def create_pool(service, name, replicas=2):
+    ''' Create a new RADOS pool '''
+    if pool_exists(service, name):
+        log("Ceph pool {} already exists, skipping creation".format(name),
+            level=WARNING)
+        return
+    # Calculate the number of placement groups based
+    # on upstream recommended best practices.
+    osds = get_osds(service)
+    if osds:
+        pgnum = (len(osds) * 100 / replicas)
+    else:
+        # NOTE(james-page): Default to 200 for older ceph versions
+        # which don't support OSD query from cli
+        pgnum = 200
+    cmd = [
+        'ceph', '--id', service,
+        'osd', 'pool', 'create',
+        name, str(pgnum)
+    ]
+    check_call(cmd)
+    cmd = [
+        'ceph', '--id', service,
+        'osd', 'pool', 'set', name,
+        'size', str(replicas)
+    ]
+    check_call(cmd)
+
+
+def delete_pool(service, name):
+    ''' Delete a RADOS pool from ceph '''
+    cmd = [
+        'ceph', '--id', service,
+        'osd', 'pool', 'delete',
+        name, '--yes-i-really-really-mean-it'
+    ]
+    check_call(cmd)
+
+
+def _keyfile_path(service):
+    return KEYFILE.format(service)
+
+
+def _keyring_path(service):
+    return KEYRING.format(service)
+
+
+def create_keyring(service, key):
+    ''' Create a new Ceph keyring containing key'''
+    keyring = _keyring_path(service)
+    if os.path.exists(keyring):
+        log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
+        return
+    cmd = [
+        'ceph-authtool',
+        keyring,
+        '--create-keyring',
+        '--name=client.{}'.format(service),
+        '--add-key={}'.format(key)
+    ]
+    check_call(cmd)
+    log('ceph: Created new ring at %s.' % keyring, level=INFO)
+
+
+def create_key_file(service, key):
+    ''' Create a file containing key '''
+    keyfile = _keyfile_path(service)
+    if os.path.exists(keyfile):
+        log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
+        return
+    with open(keyfile, 'w') as fd:
+        fd.write(key)
+    log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
+
+
+def get_ceph_nodes():
+    ''' Query named relation 'ceph' to detemine current nodes '''
+    hosts = []
+    for r_id in relation_ids('ceph'):
+        for unit in related_units(r_id):
+            hosts.append(relation_get('private-address', unit=unit, rid=r_id))
+    return hosts
+
+
+def configure(service, key, auth, use_syslog):
+    ''' Perform basic configuration of Ceph '''
+    create_keyring(service, key)
+    create_key_file(service, key)
+    hosts = get_ceph_nodes()
+    with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
+        ceph_conf.write(CEPH_CONF.format(auth=auth,
+                                         keyring=_keyring_path(service),
+                                         mon_hosts=",".join(map(str, hosts)),
+                                         use_syslog=use_syslog))
+    modprobe('rbd')
+
+
+def image_mapped(name):
+    ''' Determine whether a RADOS block device is mapped locally '''
+    try:
+        out = check_output(['rbd', 'showmapped'])
+    except CalledProcessError:
+        return False
+    else:
+        return name in out
+
+
+def map_block_storage(service, pool, image):
+    ''' Map a RADOS block device for local use '''
+    cmd = [
+        'rbd',
+        'map',
+        '{}/{}'.format(pool, image),
+        '--user',
+        service,
+        '--secret',
+        _keyfile_path(service),
+    ]
+    check_call(cmd)
+
+
+def filesystem_mounted(fs):
+    ''' Determine whether a filesytems is already mounted '''
+    return fs in [f for f, m in mounts()]
+
+
+def make_filesystem(blk_device, fstype='ext4', timeout=10):
+    ''' Make a new filesystem on the specified block device '''
+    count = 0
+    e_noent = os.errno.ENOENT
+    while not os.path.exists(blk_device):
+        if count >= timeout:
+            log('ceph: gave up waiting on block device %s' % blk_device,
+                level=ERROR)
+            raise IOError(e_noent, os.strerror(e_noent), blk_device)
+        log('ceph: waiting for block device %s to appear' % blk_device,
+            level=INFO)
+        count += 1
+        time.sleep(1)
+    else:
+        log('ceph: Formatting block device %s as filesystem %s.' %
+            (blk_device, fstype), level=INFO)
+        check_call(['mkfs', '-t', fstype, blk_device])
+
+
+def place_data_on_block_device(blk_device, data_src_dst):
+    ''' Migrate data in data_src_dst to blk_device and then remount '''
+    # mount block device into /mnt
+    mount(blk_device, '/mnt')
+    # copy data to /mnt
+    copy_files(data_src_dst, '/mnt')
+    # umount block device
+    umount('/mnt')
+    # Grab user/group ID's from original source
+    _dir = os.stat(data_src_dst)
+    uid = _dir.st_uid
+    gid = _dir.st_gid
+    # re-mount where the data should originally be
+    # TODO: persist is currently a NO-OP in core.host
+    mount(blk_device, data_src_dst, persist=True)
+    # ensure original ownership of new mount.
+    os.chown(data_src_dst, uid, gid)
+
+
+# TODO: re-use
+def modprobe(module):
+    ''' Load a kernel module and configure for auto-load on reboot '''
+    log('ceph: Loading kernel module', level=INFO)
+    cmd = ['modprobe', module]
+    check_call(cmd)
+    with open('/etc/modules', 'r+') as modules:
+        if module not in modules.read():
+            modules.write(module)
+
+
+def copy_files(src, dst, symlinks=False, ignore=None):
+    ''' Copy files from src to dst '''
+    for item in os.listdir(src):
+        s = os.path.join(src, item)
+        d = os.path.join(dst, item)
+        if os.path.isdir(s):
+            shutil.copytree(s, d, symlinks, ignore)
+        else:
+            shutil.copy2(s, d)
+
+
+def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
+                        blk_device, fstype, system_services=[]):
+    """
+    NOTE: This function must only be called from a single service unit for
+          the same rbd_img otherwise data loss will occur.
+
+    Ensures given pool and RBD image exists, is mapped to a block device,
+    and the device is formatted and mounted at the given mount_point.
+
+    If formatting a device for the first time, data existing at mount_point
+    will be migrated to the RBD device before being re-mounted.
+
+    All services listed in system_services will be stopped prior to data
+    migration and restarted when complete.
+    """
+    # Ensure pool, RBD image, RBD mappings are in place.
+    if not pool_exists(service, pool):
+        log('ceph: Creating new pool {}.'.format(pool))
+        create_pool(service, pool)
+
+    if not rbd_exists(service, pool, rbd_img):
+        log('ceph: Creating RBD image ({}).'.format(rbd_img))
+        create_rbd_image(service, pool, rbd_img, sizemb)
+
+    if not image_mapped(rbd_img):
+        log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
+        map_block_storage(service, pool, rbd_img)
+
+    # make file system
+    # TODO: What happens if for whatever reason this is run again and
+    # the data is already in the rbd device and/or is mounted??
+    # When it is mounted already, it will fail to make the fs
+    # XXX: This is really sketchy!  Need to at least add an fstab entry
+    #      otherwise this hook will blow away existing data if its executed
+    #      after a reboot.
+    if not filesystem_mounted(mount_point):
+        make_filesystem(blk_device, fstype)
+
+        for svc in system_services:
+            if service_running(svc):
+                log('ceph: Stopping services {} prior to migrating data.'
+                    .format(svc))
+                service_stop(svc)
+
+        place_data_on_block_device(blk_device, mount_point)
+
+        for svc in system_services:
+            log('ceph: Starting service {} after migrating data.'
+                .format(svc))
+            service_start(svc)
+
+
+def ensure_ceph_keyring(service, user=None, group=None):
+    '''
+    Ensures a ceph keyring is created for a named service
+    and optionally ensures user and group ownership.
+
+    Returns False if no ceph key is available in relation state.
+    '''
+    key = None
+    for rid in relation_ids('ceph'):
+        for unit in related_units(rid):
+            key = relation_get('key', rid=rid, unit=unit)
+            if key:
+                break
+    if not key:
+        return False
+    create_keyring(service=service, key=key)
+    keyring = _keyring_path(service)
+    if user and group:
+        check_call(['chown', '%s.%s' % (user, group), keyring])
+    return True
+
+
+def ceph_version():
+    ''' Retrieve the local version of ceph '''
+    if os.path.exists('/usr/bin/ceph'):
+        cmd = ['ceph', '-v']
+        output = check_output(cmd)
+        output = output.split()
+        if len(output) > 3:
+            return output[2]
+        else:
+            return None
+    else:
+        return None
=== added file 'hooks/charmhelpers/core/fstab.py'
--- hooks/charmhelpers/core/fstab.py	1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/fstab.py	2014-09-09 05:13:17 +0000
@@ -0,0 +1,116 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
+
+import os
+
+
+class Fstab(file):
+    """This class extends file in order to implement a file reader/writer
+    for file `/etc/fstab`
+    """
+
+    class Entry(object):
+        """Entry class represents a non-comment line on the `/etc/fstab` file
+        """
+        def __init__(self, device, mountpoint, filesystem,
+                     options, d=0, p=0):
+            self.device = device
+            self.mountpoint = mountpoint
+            self.filesystem = filesystem
+
+            if not options:
+                options = "defaults"
+
+            self.options = options
+            self.d = d
+            self.p = p
+
+        def __eq__(self, o):
+            return str(self) == str(o)
+
+        def __str__(self):
+            return "{} {} {} {} {} {}".format(self.device,
+                                              self.mountpoint,
+                                              self.filesystem,
+                                              self.options,
+                                              self.d,
+                                              self.p)
+
+    DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
+
+    def __init__(self, path=None):
+        if path:
+            self._path = path
+        else:
+            self._path = self.DEFAULT_PATH
+        file.__init__(self, self._path, 'r+')
+
+    def _hydrate_entry(self, line):
+        # NOTE: use split with no arguments to split on any
+        #       whitespace including tabs
+        return Fstab.Entry(*filter(
+            lambda x: x not in ('', None),
+            line.strip("\n").split()))
+
+    @property
+    def entries(self):
+        self.seek(0)
+        for line in self.readlines():
+            try:
+                if not line.startswith("#"):
+                    yield self._hydrate_entry(line)
+            except ValueError:
+                pass
+
+    def get_entry_by_attr(self, attr, value):
+        for entry in self.entries:
+            e_attr = getattr(entry, attr)
+            if e_attr == value:
+                return entry
+        return None
+
+    def add_entry(self, entry):
+        if self.get_entry_by_attr('device', entry.device):
+            return False
+
+        self.write(str(entry) + '\n')
+        self.truncate()
+        return entry
+
+    def remove_entry(self, entry):
+        self.seek(0)
+
+        lines = self.readlines()
+
+        found = False
+        for index, line in enumerate(lines):
+            if not line.startswith("#"):
+                if self._hydrate_entry(line) == entry:
+                    found = True
+                    break
+
+        if not found:
+            return False
+
+        lines.remove(line)
+
+        self.seek(0)
+        self.write(''.join(lines))
+        self.truncate()
+        return True
+
+    @classmethod
+    def remove_by_mountpoint(cls, mountpoint, path=None):
+        fstab = cls(path=path)
+        entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
+        if entry:
+            return fstab.remove_entry(entry)
+        return False
+
+    @classmethod
+    def add(cls, device, mountpoint, filesystem, options=None, path=None):
+        return cls(path=path).add_entry(Fstab.Entry(device,
+                                                    mountpoint, filesystem,
+                                                    options=options))
587 | === modified file 'hooks/charmhelpers/core/hookenv.py' | |||
588 | --- hooks/charmhelpers/core/hookenv.py 2014-01-14 21:55:40 +0000 | |||
589 | +++ hooks/charmhelpers/core/hookenv.py 2014-09-09 05:13:17 +0000 | |||
590 | @@ -8,6 +8,7 @@ | |||
591 | 8 | import json | 8 | import json |
592 | 9 | import yaml | 9 | import yaml |
593 | 10 | import subprocess | 10 | import subprocess |
594 | 11 | import sys | ||
595 | 11 | import UserDict | 12 | import UserDict |
596 | 12 | from subprocess import CalledProcessError | 13 | from subprocess import CalledProcessError |
597 | 13 | 14 | ||
598 | @@ -24,7 +25,7 @@ | |||
599 | 24 | def cached(func): | 25 | def cached(func): |
600 | 25 | """Cache return values for multiple executions of func + args | 26 | """Cache return values for multiple executions of func + args |
601 | 26 | 27 | ||
603 | 27 | For example: | 28 | For example:: |
604 | 28 | 29 | ||
605 | 29 | @cached | 30 | @cached |
606 | 30 | def unit_get(attribute): | 31 | def unit_get(attribute): |
607 | @@ -149,6 +150,105 @@ | |||
608 | 149 | return local_unit().split('/')[0] | 150 | return local_unit().split('/')[0] |
609 | 150 | 151 | ||
610 | 151 | 152 | ||
611 | 153 | def hook_name(): | ||
612 | 154 | """The name of the currently executing hook""" | ||
613 | 155 | return os.path.basename(sys.argv[0]) | ||
614 | 156 | |||
615 | 157 | |||
616 | 158 | class Config(dict): | ||
617 | 159 | """A Juju charm config dictionary that can write itself to | ||
618 | 160 | disk (as json) and track which values have changed since | ||
619 | 161 | the previous hook invocation. | ||
620 | 162 | |||
621 | 163 | Do not instantiate this object directly - instead call | ||
622 | 164 | ``hookenv.config()`` | ||
623 | 165 | |||
624 | 166 | Example usage:: | ||
625 | 167 | |||
626 | 168 | >>> # inside a hook | ||
627 | 169 | >>> from charmhelpers.core import hookenv | ||
628 | 170 | >>> config = hookenv.config() | ||
629 | 171 | >>> config['foo'] | ||
630 | 172 | 'bar' | ||
631 | 173 | >>> config['mykey'] = 'myval' | ||
632 | 174 | >>> config.save() | ||
633 | 175 | |||
634 | 176 | |||
635 | 177 | >>> # user runs `juju set mycharm foo=baz` | ||
636 | 178 | >>> # now we're inside subsequent config-changed hook | ||
637 | 179 | >>> config = hookenv.config() | ||
638 | 180 | >>> config['foo'] | ||
639 | 181 | 'baz' | ||
640 | 182 | >>> # test to see if this val has changed since last hook | ||
641 | 183 | >>> config.changed('foo') | ||
642 | 184 | True | ||
643 | 185 | >>> # what was the previous value? | ||
644 | 186 | >>> config.previous('foo') | ||
645 | 187 | 'bar' | ||
646 | 188 | >>> # keys/values that we add are preserved across hooks | ||
647 | 189 | >>> config['mykey'] | ||
648 | 190 | 'myval' | ||
649 | 191 | >>> # don't forget to save at the end of hook! | ||
650 | 192 | >>> config.save() | ||
651 | 193 | |||
652 | 194 | """ | ||
653 | 195 | CONFIG_FILE_NAME = '.juju-persistent-config' | ||
654 | 196 | |||
655 | 197 | def __init__(self, *args, **kw): | ||
656 | 198 | super(Config, self).__init__(*args, **kw) | ||
657 | 199 | self._prev_dict = None | ||
658 | 200 | self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) | ||
659 | 201 | if os.path.exists(self.path): | ||
660 | 202 | self.load_previous() | ||
661 | 203 | |||
662 | 204 | def load_previous(self, path=None): | ||
663 | 205 | """Load previous copy of config from disk so that current values | ||
664 | 206 | can be compared to previous values. | ||
665 | 207 | |||
666 | 208 | :param path: | ||
667 | 209 | |||
668 | 210 | File path from which to load the previous config. If `None`, | ||
669 | 211 | config is loaded from the default location. If `path` is | ||
670 | 212 | specified, subsequent `save()` calls will write to the same | ||
671 | 213 | path. | ||
672 | 214 | |||
673 | 215 | """ | ||
674 | 216 | self.path = path or self.path | ||
675 | 217 | with open(self.path) as f: | ||
676 | 218 | self._prev_dict = json.load(f) | ||
677 | 219 | |||
678 | 220 | def changed(self, key): | ||
679 | 221 | """Return true if the value for this key has changed since | ||
680 | 222 | the last save. | ||
681 | 223 | |||
682 | 224 | """ | ||
683 | 225 | if self._prev_dict is None: | ||
684 | 226 | return True | ||
685 | 227 | return self.previous(key) != self.get(key) | ||
686 | 228 | |||
687 | 229 | def previous(self, key): | ||
688 | 230 | """Return previous value for this key, or None if there | ||
689 | 231 | is no "previous" value. | ||
690 | 232 | |||
691 | 233 | """ | ||
692 | 234 | if self._prev_dict: | ||
693 | 235 | return self._prev_dict.get(key) | ||
694 | 236 | return None | ||
695 | 237 | |||
696 | 238 | def save(self): | ||
697 | 239 | """Save this config to disk. | ||
698 | 240 | |||
699 | 241 | Preserves items in _prev_dict that do not exist in self. | ||
700 | 242 | |||
701 | 243 | """ | ||
702 | 244 | if self._prev_dict: | ||
703 | 245 | for k, v in self._prev_dict.iteritems(): | ||
704 | 246 | if k not in self: | ||
705 | 247 | self[k] = v | ||
706 | 248 | with open(self.path, 'w') as f: | ||
707 | 249 | json.dump(self, f) | ||
708 | 250 | |||
709 | 251 | |||
710 | 152 | @cached | 252 | @cached |
711 | 153 | def config(scope=None): | 253 | def config(scope=None): |
712 | 154 | """Juju charm configuration""" | 254 | """Juju charm configuration""" |
713 | @@ -157,7 +257,10 @@ | |||
714 | 157 | config_cmd_line.append(scope) | 257 | config_cmd_line.append(scope) |
715 | 158 | config_cmd_line.append('--format=json') | 258 | config_cmd_line.append('--format=json') |
716 | 159 | try: | 259 | try: |
718 | 160 | return json.loads(subprocess.check_output(config_cmd_line)) | 260 | config_data = json.loads(subprocess.check_output(config_cmd_line)) |
719 | 261 | if scope is not None: | ||
720 | 262 | return config_data | ||
721 | 263 | return Config(config_data) | ||
722 | 161 | except ValueError: | 264 | except ValueError: |
723 | 162 | return None | 265 | return None |
724 | 163 | 266 | ||
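The new `Config` class above keeps a JSON snapshot of the previous configuration so hooks can ask `changed(key)` and `previous(key)`. A minimal standalone sketch of that previous/changed logic, runnable without a Juju environment (`MiniConfig` and the temp-file path are illustrative, not part of charm-helpers):

```python
import json
import os
import tempfile


class MiniConfig(dict):
    """Dict that can diff itself against a previously saved JSON copy."""

    def __init__(self, path, *args, **kwargs):
        super(MiniConfig, self).__init__(*args, **kwargs)
        self.path = path
        self._prev_dict = None
        if os.path.exists(path):
            with open(path) as f:
                self._prev_dict = json.load(f)

    def changed(self, key):
        # With no previous snapshot, every key counts as changed.
        if self._prev_dict is None:
            return True
        return self._prev_dict.get(key) != self.get(key)

    def save(self):
        # Keep keys that existed before but were dropped from the
        # current dict, mirroring Config.save() in the diff.
        if self._prev_dict:
            for k, v in self._prev_dict.items():
                self.setdefault(k, v)
        with open(self.path, 'w') as f:
            json.dump(self, f)


path = os.path.join(tempfile.mkdtemp(), 'config.json')
first = MiniConfig(path, {'port': 80, 'key': 'abc'})
first.save()

second = MiniConfig(path, {'port': 8080})
port_changed = second.changed('port')   # True: 80 -> 8080
key_changed = second.changed('key')     # True: 'abc' -> missing
second.save()                           # 'key' is preserved on disk
print(port_changed, key_changed)
```

Note that `config(scope=...)` still returns the raw value; only the scopeless call is wrapped in `Config`.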
725 | @@ -182,8 +285,9 @@ | |||
726 | 182 | raise | 285 | raise |
727 | 183 | 286 | ||
728 | 184 | 287 | ||
730 | 185 | def relation_set(relation_id=None, relation_settings={}, **kwargs): | 288 | def relation_set(relation_id=None, relation_settings=None, **kwargs): |
731 | 186 | """Set relation information for the current unit""" | 289 | """Set relation information for the current unit""" |
732 | 290 | relation_settings = relation_settings if relation_settings else {} | ||
733 | 187 | relation_cmd_line = ['relation-set'] | 291 | relation_cmd_line = ['relation-set'] |
734 | 188 | if relation_id is not None: | 292 | if relation_id is not None: |
735 | 189 | relation_cmd_line.extend(('-r', relation_id)) | 293 | relation_cmd_line.extend(('-r', relation_id)) |
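The `relation_settings={}` to `relation_settings=None` change fixes Python's shared-mutable-default pitfall: a default dict is created once at function definition, so mutations leak between calls. A small demonstration of the before/after shapes (`buggy`/`fixed` are illustrative names):

```python
def buggy(settings={}):
    # The same dict object is reused across calls, so state leaks.
    settings['seen'] = settings.get('seen', 0) + 1
    return settings


def fixed(settings=None):
    # A fresh dict per call, matching the relation_set() fix above.
    settings = settings if settings is not None else {}
    settings['seen'] = settings.get('seen', 0) + 1
    return settings


print(buggy()['seen'], buggy()['seen'])   # state accumulates: 1, then 2
print(fixed()['seen'], fixed()['seen'])   # each call starts clean: 1, 1
```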
736 | @@ -342,18 +446,19 @@ | |||
737 | 342 | class Hooks(object): | 446 | class Hooks(object): |
738 | 343 | """A convenient handler for hook functions. | 447 | """A convenient handler for hook functions. |
739 | 344 | 448 | ||
741 | 345 | Example: | 449 | Example:: |
742 | 450 | |||
743 | 346 | hooks = Hooks() | 451 | hooks = Hooks() |
744 | 347 | 452 | ||
745 | 348 | # register a hook, taking its name from the function name | 453 | # register a hook, taking its name from the function name |
746 | 349 | @hooks.hook() | 454 | @hooks.hook() |
747 | 350 | def install(): | 455 | def install(): |
749 | 351 | ... | 456 | pass # your code here |
750 | 352 | 457 | ||
751 | 353 | # register a hook, providing a custom hook name | 458 | # register a hook, providing a custom hook name |
752 | 354 | @hooks.hook("config-changed") | 459 | @hooks.hook("config-changed") |
753 | 355 | def config_changed(): | 460 | def config_changed(): |
755 | 356 | ... | 461 | pass # your code here |
756 | 357 | 462 | ||
757 | 358 | if __name__ == "__main__": | 463 | if __name__ == "__main__": |
758 | 359 | # execute a hook based on the name the program is called by | 464 | # execute a hook based on the name the program is called by |
759 | 360 | 465 | ||
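The `Hooks` docstring edits above make the example valid reST (`Example::`) and replace the bare `...` placeholders with parseable bodies. The registry pattern it documents can be sketched in a few lines; `MiniHooks` is an illustrative stand-in, not the charm-helpers class:

```python
import os


class MiniHooks(object):
    """Map hook names to functions, dispatching on the executable name."""

    def __init__(self):
        self._hooks = {}

    def hook(self, *hook_names):
        def wrapper(func):
            # Register under each explicit name, or fall back to the
            # function's own name (install, start, stop, ...).
            for name in hook_names or [func.__name__]:
                self._hooks[name] = func
            return func
        return wrapper

    def execute(self, args):
        hook_name = os.path.basename(args[0])
        if hook_name not in self._hooks:
            raise KeyError('No handler for hook: %s' % hook_name)
        return self._hooks[hook_name]()


hooks = MiniHooks()

@hooks.hook()
def install():
    return 'installed'

@hooks.hook('config-changed')
def config_changed():
    return 'reconfigured'

# Juju invokes the charm via symlinks named after the hook, so
# dispatching on argv[0] selects the right handler.
print(hooks.execute(['hooks/install']))
print(hooks.execute(['hooks/config-changed']))
```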
760 | === modified file 'hooks/charmhelpers/core/host.py' | |||
761 | --- hooks/charmhelpers/core/host.py 2014-01-14 21:55:40 +0000 | |||
762 | +++ hooks/charmhelpers/core/host.py 2014-09-09 05:13:17 +0000 | |||
763 | @@ -12,10 +12,13 @@ | |||
764 | 12 | import string | 12 | import string |
765 | 13 | import subprocess | 13 | import subprocess |
766 | 14 | import hashlib | 14 | import hashlib |
767 | 15 | import shutil | ||
768 | 16 | from contextlib import contextmanager | ||
769 | 15 | 17 | ||
770 | 16 | from collections import OrderedDict | 18 | from collections import OrderedDict |
771 | 17 | 19 | ||
772 | 18 | from hookenv import log | 20 | from hookenv import log |
773 | 21 | from fstab import Fstab | ||
774 | 19 | 22 | ||
775 | 20 | 23 | ||
776 | 21 | def service_start(service_name): | 24 | def service_start(service_name): |
777 | @@ -34,7 +37,8 @@ | |||
778 | 34 | 37 | ||
779 | 35 | 38 | ||
780 | 36 | def service_reload(service_name, restart_on_failure=False): | 39 | def service_reload(service_name, restart_on_failure=False): |
782 | 37 | """Reload a system service, optionally falling back to restart if reload fails""" | 40 | """Reload a system service, optionally falling back to restart if |
783 | 41 | reload fails""" | ||
784 | 38 | service_result = service('reload', service_name) | 42 | service_result = service('reload', service_name) |
785 | 39 | if not service_result and restart_on_failure: | 43 | if not service_result and restart_on_failure: |
786 | 40 | service_result = service('restart', service_name) | 44 | service_result = service('restart', service_name) |
787 | @@ -50,7 +54,7 @@ | |||
788 | 50 | def service_running(service): | 54 | def service_running(service): |
789 | 51 | """Determine whether a system service is running""" | 55 | """Determine whether a system service is running""" |
790 | 52 | try: | 56 | try: |
792 | 53 | output = subprocess.check_output(['service', service, 'status']) | 57 | output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT) |
793 | 54 | except subprocess.CalledProcessError: | 58 | except subprocess.CalledProcessError: |
794 | 55 | return False | 59 | return False |
795 | 56 | else: | 60 | else: |
796 | @@ -60,6 +64,16 @@ | |||
797 | 60 | return False | 64 | return False |
798 | 61 | 65 | ||
799 | 62 | 66 | ||
800 | 67 | def service_available(service_name): | ||
801 | 68 | """Determine whether a system service is available""" | ||
802 | 69 | try: | ||
803 | 70 | subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT) | ||
804 | 71 | except subprocess.CalledProcessError: | ||
805 | 72 | return False | ||
806 | 73 | else: | ||
807 | 74 | return True | ||
808 | 75 | |||
809 | 76 | |||
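Both `service_running` and the new `service_available` now pass `stderr=subprocess.STDOUT`, so diagnostics from `service <name> status` are captured instead of leaking into the hook's output, and a non-zero exit cleanly maps to `False`. A portable demonstration of the pattern, using a Python child process as a stand-in for the `service` command:

```python
import subprocess
import sys

# Stand-in for `service <name> status`: writes to stderr, exits 0.
cmd = [sys.executable, '-c',
       "import sys; sys.stderr.write('status: running\\n')"]

# With stderr=STDOUT the child's stderr is merged into the captured
# output, so it can be inspected (or silently discarded) by the caller.
output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
print(output.decode())


def probe(argv):
    # Mirrors service_available() above: non-zero exit means "no".
    try:
        subprocess.check_output(argv, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError:
        return False
    return True


print(probe(cmd))                                             # available
print(probe([sys.executable, '-c', 'raise SystemExit(1)']))   # not available
```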
810 | 63 | def adduser(username, password=None, shell='/bin/bash', system_user=False): | 77 | def adduser(username, password=None, shell='/bin/bash', system_user=False): |
811 | 64 | """Add a user to the system""" | 78 | """Add a user to the system""" |
812 | 65 | try: | 79 | try: |
813 | @@ -143,7 +157,19 @@ | |||
814 | 143 | target.write(content) | 157 | target.write(content) |
815 | 144 | 158 | ||
816 | 145 | 159 | ||
818 | 146 | def mount(device, mountpoint, options=None, persist=False): | 160 | def fstab_remove(mp): |
819 | 161 | """Remove the given mountpoint entry from /etc/fstab | ||
820 | 162 | """ | ||
821 | 163 | return Fstab.remove_by_mountpoint(mp) | ||
822 | 164 | |||
823 | 165 | |||
824 | 166 | def fstab_add(dev, mp, fs, options=None): | ||
825 | 167 | """Adds the given device entry to the /etc/fstab file | ||
826 | 168 | """ | ||
827 | 169 | return Fstab.add(dev, mp, fs, options=options) | ||
828 | 170 | |||
829 | 171 | |||
830 | 172 | def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"): | ||
831 | 147 | """Mount a filesystem at a particular mountpoint""" | 173 | """Mount a filesystem at a particular mountpoint""" |
832 | 148 | cmd_args = ['mount'] | 174 | cmd_args = ['mount'] |
833 | 149 | if options is not None: | 175 | if options is not None: |
834 | @@ -154,9 +180,9 @@ | |||
835 | 154 | except subprocess.CalledProcessError, e: | 180 | except subprocess.CalledProcessError, e: |
836 | 155 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) | 181 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) |
837 | 156 | return False | 182 | return False |
838 | 183 | |||
839 | 157 | if persist: | 184 | if persist: |
842 | 158 | # TODO: update fstab | 185 | return fstab_add(device, mountpoint, filesystem, options=options) |
841 | 159 | pass | ||
843 | 160 | return True | 186 | return True |
844 | 161 | 187 | ||
845 | 162 | 188 | ||
846 | @@ -168,9 +194,9 @@ | |||
847 | 168 | except subprocess.CalledProcessError, e: | 194 | except subprocess.CalledProcessError, e: |
848 | 169 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) | 195 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) |
849 | 170 | return False | 196 | return False |
850 | 197 | |||
851 | 171 | if persist: | 198 | if persist: |
854 | 172 | # TODO: update fstab | 199 | return fstab_remove(mountpoint) |
853 | 173 | pass | ||
855 | 174 | return True | 200 | return True |
856 | 175 | 201 | ||
857 | 176 | 202 | ||
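With these hunks, `mount(..., persist=True)` and `umount(..., persist=True)` finally honour their `persist` flag by delegating to the new `Fstab` helper (added in `hooks/charmhelpers/core/fstab.py`, outside this excerpt), and `mount` grows a `filesystem` parameter defaulting to `"ext3"`. The shape of the `/etc/fstab` entry such a call would persist can be sketched as follows; `fstab_entry` is an illustrative helper, not the real `Fstab` API:

```python
def fstab_entry(device, mountpoint, filesystem, options=None):
    # fstab(5) fields: device, mountpoint, fstype, options, dump, pass.
    # "defaults" is the conventional stand-in when no options are given.
    options = options or 'defaults'
    return '{0} {1} {2} {3} 0 0'.format(device, mountpoint,
                                        filesystem, options)


print(fstab_entry('/dev/rbd0', '/srv/data', 'ext4'))
print(fstab_entry('/dev/rbd0', '/srv/data', 'ext4', 'noatime'))
```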
858 | @@ -194,16 +220,16 @@ | |||
859 | 194 | return None | 220 | return None |
860 | 195 | 221 | ||
861 | 196 | 222 | ||
863 | 197 | def restart_on_change(restart_map): | 223 | def restart_on_change(restart_map, stopstart=False): |
864 | 198 | """Restart services based on configuration files changing | 224 | """Restart services based on configuration files changing |
865 | 199 | 225 | ||
867 | 200 | This function is used a decorator, for example | 226 | This function is used a decorator, for example:: |
868 | 201 | 227 | ||
869 | 202 | @restart_on_change({ | 228 | @restart_on_change({ |
870 | 203 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] | 229 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] |
871 | 204 | }) | 230 | }) |
872 | 205 | def ceph_client_changed(): | 231 | def ceph_client_changed(): |
874 | 206 | ... | 232 | pass # your code here |
875 | 207 | 233 | ||
876 | 208 | In this example, the cinder-api and cinder-volume services | 234 | In this example, the cinder-api and cinder-volume services |
877 | 209 | would be restarted if /etc/ceph/ceph.conf is changed by the | 235 | would be restarted if /etc/ceph/ceph.conf is changed by the |
878 | @@ -219,8 +245,14 @@ | |||
879 | 219 | for path in restart_map: | 245 | for path in restart_map: |
880 | 220 | if checksums[path] != file_hash(path): | 246 | if checksums[path] != file_hash(path): |
881 | 221 | restarts += restart_map[path] | 247 | restarts += restart_map[path] |
884 | 222 | for service_name in list(OrderedDict.fromkeys(restarts)): | 248 | services_list = list(OrderedDict.fromkeys(restarts)) |
885 | 223 | service('restart', service_name) | 249 | if not stopstart: |
886 | 250 | for service_name in services_list: | ||
887 | 251 | service('restart', service_name) | ||
888 | 252 | else: | ||
889 | 253 | for action in ['stop', 'start']: | ||
890 | 254 | for service_name in services_list: | ||
891 | 255 | service(action, service_name) | ||
892 | 224 | return wrapped_f | 256 | return wrapped_f |
893 | 225 | return wrap | 257 | return wrap |
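The `restart_on_change` hunk deduplicates the accumulated service list with `OrderedDict.fromkeys()` (first-seen order preserved) and, when `stopstart=True`, stops every service before starting any of them. A side-effect-free sketch of that control flow (`plan` is illustrative and returns the actions instead of running them):

```python
from collections import OrderedDict

# Services gathered from every changed file, possibly with repeats:
restarts = ['cinder-api', 'cinder-volume', 'cinder-api']

# fromkeys() de-duplicates while keeping first-seen order, so each
# service is bounced once, in a predictable sequence.
services_list = list(OrderedDict.fromkeys(restarts))
print(services_list)


def plan(services, stopstart=False):
    # Mirror the branches in the diff without touching real services.
    if not stopstart:
        return [('restart', s) for s in services]
    return [(action, s)
            for action in ('stop', 'start')
            for s in services]


print(plan(services_list))
print(plan(services_list, stopstart=True))
```

Note the stop/start ordering: all stops happen before any starts, which matters for services with shared resources.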
894 | 226 | 258 | ||
895 | @@ -289,3 +321,44 @@ | |||
896 | 289 | if 'link/ether' in words: | 321 | if 'link/ether' in words: |
897 | 290 | hwaddr = words[words.index('link/ether') + 1] | 322 | hwaddr = words[words.index('link/ether') + 1] |
898 | 291 | return hwaddr | 323 | return hwaddr |
899 | 324 | |||
900 | 325 | |||
901 | 326 | def cmp_pkgrevno(package, revno, pkgcache=None): | ||
902 | 327 | '''Compare supplied revno with the revno of the installed package | ||
903 | 328 | |||
904 | 329 | * 1 => Installed revno is greater than supplied arg | ||
905 | 330 | * 0 => Installed revno is the same as supplied arg | ||
906 | 331 | * -1 => Installed revno is less than supplied arg | ||
907 | 332 | |||
908 | 333 | ''' | ||
909 | 334 | import apt_pkg | ||
910 | 335 | if not pkgcache: | ||
911 | 336 | apt_pkg.init() | ||
912 | 337 | # Force Apt to build its cache in memory. That way we avoid race | ||
913 | 338 | # conditions with other applications building the cache in the same | ||
914 | 339 | # place. | ||
915 | 340 | apt_pkg.config.set("Dir::Cache::pkgcache", "") | ||
916 | 341 | pkgcache = apt_pkg.Cache() | ||
917 | 342 | pkg = pkgcache[package] | ||
918 | 343 | return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) | ||
919 | 344 | |||
920 | 345 | |||
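`cmp_pkgrevno` relies on `apt_pkg.version_compare`, which implements full Debian version semantics (epochs, `~` sorting before anything). To illustrate just its -1/0/1 contract without apt on the box, a naive dotted-numeric stand-in (this is a simplification, not the real comparison algorithm):

```python
def cmp_versions(installed, wanted):
    """Naive stand-in for apt_pkg.version_compare().

    Only compares dot-separated integers; real Debian ordering also
    handles epochs, '~', and alphanumeric components.
    """
    a = [int(x) for x in installed.split('.')]
    b = [int(x) for x in wanted.split('.')]
    return (a > b) - (a < b)


print(cmp_versions('0.67.4', '0.61'))   # installed is newer
print(cmp_versions('0.61', '0.61'))     # same
print(cmp_versions('0.48.2', '0.61'))   # installed is older
```

The in-memory cache trick (`Dir::Cache::pkgcache` set to `""`) in the diff is worth keeping in mind for any charm that shells out to apt concurrently.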
921 | 346 | @contextmanager | ||
922 | 347 | def chdir(d): | ||
923 | 348 | cur = os.getcwd() | ||
924 | 349 | try: | ||
925 | 350 | yield os.chdir(d) | ||
926 | 351 | finally: | ||
927 | 352 | os.chdir(cur) | ||
928 | 353 | |||
929 | 354 | |||
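The `chdir` context manager above guarantees the previous working directory is restored even if the body raises, via the `finally` clause. Usage looks like this (same shape as the helper in the diff, reproduced here so the snippet is self-contained):

```python
import os
import tempfile
from contextlib import contextmanager


@contextmanager
def chdir(d):
    # Restore the old cwd on exit, even if the body raises.
    cur = os.getcwd()
    try:
        yield os.chdir(d)
    finally:
        os.chdir(cur)


before = os.getcwd()
target = tempfile.mkdtemp()
with chdir(target):
    inside = os.getcwd()
print(os.getcwd() == before)   # cwd is back where it started
```

Minor nit a reviewer might raise: `yield os.chdir(d)` yields `None`, so `with chdir(d) as x` binds `x = None`; a plain `os.chdir(d)` followed by `yield` would read more clearly.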
930 | 355 | def chownr(path, owner, group): | ||
931 | 356 | uid = pwd.getpwnam(owner).pw_uid | ||
932 | 357 | gid = grp.getgrnam(group).gr_gid | ||
933 | 358 | |||
934 | 359 | for root, dirs, files in os.walk(path): | ||
935 | 360 | for name in dirs + files: | ||
936 | 361 | full = os.path.join(root, name) | ||
937 | 362 | broken_symlink = os.path.lexists(full) and not os.path.exists(full) | ||
938 | 363 | if not broken_symlink: | ||
939 | 364 | os.chown(full, uid, gid) | ||
940 | 292 | 365 | ||
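`chownr` guards against dangling symlinks with `os.path.lexists(full) and not os.path.exists(full)`: `exists()` follows the link (False when the target is gone) while `lexists()` inspects the link itself. Chowning through a dangling link would raise, so those entries are skipped. A POSIX demonstration of the distinction:

```python
import os
import tempfile

d = tempfile.mkdtemp()
link = os.path.join(d, 'dangling')
os.symlink(os.path.join(d, 'no-such-target'), link)

# lexists() sees the link; exists() tries to follow it and fails.
broken = os.path.lexists(link) and not os.path.exists(link)
print(broken)
```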
941 | === added directory 'hooks/charmhelpers/core/services' | |||
942 | === added file 'hooks/charmhelpers/core/services/__init__.py' | |||
943 | --- hooks/charmhelpers/core/services/__init__.py 1970-01-01 00:00:00 +0000 | |||
944 | +++ hooks/charmhelpers/core/services/__init__.py 2014-09-09 05:13:17 +0000 | |||
945 | @@ -0,0 +1,2 @@ | |||
946 | 1 | from .base import * | ||
947 | 2 | from .helpers import * | ||
948 | 0 | 3 | ||
949 | === added file 'hooks/charmhelpers/core/services/base.py' | |||
950 | --- hooks/charmhelpers/core/services/base.py 1970-01-01 00:00:00 +0000 | |||
951 | +++ hooks/charmhelpers/core/services/base.py 2014-09-09 05:13:17 +0000 | |||
952 | @@ -0,0 +1,305 @@ | |||
953 | 1 | import os | ||
954 | 2 | import re | ||
955 | 3 | import json | ||
956 | 4 | from collections import Iterable | ||
957 | 5 | |||
958 | 6 | from charmhelpers.core import host | ||
959 | 7 | from charmhelpers.core import hookenv | ||
960 | 8 | |||
961 | 9 | |||
962 | 10 | __all__ = ['ServiceManager', 'ManagerCallback', | ||
963 | 11 | 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports', | ||
964 | 12 | 'service_restart', 'service_stop'] | ||
965 | 13 | |||
966 | 14 | |||
967 | 15 | class ServiceManager(object): | ||
968 | 16 | def __init__(self, services=None): | ||
969 | 17 | """ | ||
970 | 18 | Register a list of services, given their definitions. | ||
971 | 19 | |||
972 | 20 | Traditional charm authoring is focused on implementing hooks. That is, | ||
973 | 21 | the charm author is thinking in terms of "What hook am I handling; what | ||
974 | 22 | does this hook need to do?" However, in most cases, the real question | ||
975 | 23 | should be "Do I have the information I need to configure and start this | ||
976 | 24 | piece of software and, if so, what are the steps for doing so?" The | ||
977 | 25 | ServiceManager framework tries to bring the focus to the data and the | ||
978 | 26 | setup tasks, in the most declarative way possible. | ||
979 | 27 | |||
980 | 28 | Service definitions are dicts in the following formats (all keys except | ||
981 | 29 | 'service' are optional):: | ||
982 | 30 | |||
983 | 31 | { | ||
984 | 32 | "service": <service name>, | ||
985 | 33 | "required_data": <list of required data contexts>, | ||
986 | 34 | "data_ready": <one or more callbacks>, | ||
987 | 35 | "data_lost": <one or more callbacks>, | ||
988 | 36 | "start": <one or more callbacks>, | ||
989 | 37 | "stop": <one or more callbacks>, | ||
990 | 38 | "ports": <list of ports to manage>, | ||
991 | 39 | } | ||
992 | 40 | |||
993 | 41 | The 'required_data' list should contain dicts of required data (or | ||
994 | 42 | dependency managers that act like dicts and know how to collect the data). | ||
995 | 43 | Only when all items in the 'required_data' list are populated are the list | ||
996 | 44 | of 'data_ready' and 'start' callbacks executed. See `is_ready()` for more | ||
997 | 45 | information. | ||
998 | 46 | |||
999 | 47 | The 'data_ready' value should be either a single callback, or a list of | ||
1000 | 48 | callbacks, to be called when all items in 'required_data' pass `is_ready()`. | ||
1001 | 49 | Each callback will be called with the service name as the only parameter. | ||
1002 | 50 | After all of the 'data_ready' callbacks are called, the 'start' callbacks | ||
1003 | 51 | are fired. | ||
1004 | 52 | |||
1005 | 53 | The 'data_lost' value should be either a single callback, or a list of | ||
1006 | 54 | callbacks, to be called when a 'required_data' item no longer passes | ||
1007 | 55 | `is_ready()`. Each callback will be called with the service name as the | ||
1008 | 56 | only parameter. After all of the 'data_lost' callbacks are called, | ||
1009 | 57 | the 'stop' callbacks are fired. | ||
1010 | 58 | |||
1011 | 59 | The 'start' value should be either a single callback, or a list of | ||
1012 | 60 | callbacks, to be called when starting the service, after the 'data_ready' | ||
1013 | 61 | callbacks are complete. Each callback will be called with the service | ||
1014 | 62 | name as the only parameter. This defaults to | ||
1015 | 63 | `[host.service_start, services.open_ports]`. | ||
1016 | 64 | |||
1017 | 65 | The 'stop' value should be either a single callback, or a list of | ||
1018 | 66 | callbacks, to be called when stopping the service. If the service is | ||
1019 | 67 | being stopped because it no longer has all of its 'required_data', this | ||
1020 | 68 | will be called after all of the 'data_lost' callbacks are complete. | ||
1021 | 69 | Each callback will be called with the service name as the only parameter. | ||
1022 | 70 | This defaults to `[services.close_ports, host.service_stop]`. | ||
1023 | 71 | |||
1024 | 72 | The 'ports' value should be a list of ports to manage. The default | ||
1025 | 73 | 'start' handler will open the ports after the service is started, | ||
1026 | 74 | and the default 'stop' handler will close the ports prior to stopping | ||
1027 | 75 | the service. | ||
1028 | 76 | |||
1029 | 77 | |||
1030 | 78 | Examples: | ||
1031 | 79 | |||
1032 | 80 | The following registers an Upstart service called bingod that depends on | ||
1033 | 81 | a mongodb relation and which runs a custom `db_migrate` function prior to | ||
1034 | 82 | restarting the service, and a Runit service called spadesd:: | ||
1035 | 83 | |||
1036 | 84 | manager = services.ServiceManager([ | ||
1037 | 85 | { | ||
1038 | 86 | 'service': 'bingod', | ||
1039 | 87 | 'ports': [80, 443], | ||
1040 | 88 | 'required_data': [MongoRelation(), config(), {'my': 'data'}], | ||
1041 | 89 | 'data_ready': [ | ||
1042 | 90 | services.template(source='bingod.conf'), | ||
1043 | 91 | services.template(source='bingod.ini', | ||
1044 | 92 | target='/etc/bingod.ini', | ||
1045 | 93 | owner='bingo', perms=0400), | ||
1046 | 94 | ], | ||
1047 | 95 | }, | ||
1048 | 96 | { | ||
1049 | 97 | 'service': 'spadesd', | ||
1050 | 98 | 'data_ready': services.template(source='spadesd_run.j2', | ||
1051 | 99 | target='/etc/sv/spadesd/run', | ||
1052 | 100 | perms=0555), | ||
1053 | 101 | 'start': runit_start, | ||
1054 | 102 | 'stop': runit_stop, | ||
1055 | 103 | }, | ||
1056 | 104 | ]) | ||
1057 | 105 | manager.manage() | ||
1058 | 106 | """ | ||
1059 | 107 | self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json') | ||
1060 | 108 | self._ready = None | ||
1061 | 109 | self.services = {} | ||
1062 | 110 | for service in services or []: | ||
1063 | 111 | service_name = service['service'] | ||
1064 | 112 | self.services[service_name] = service | ||
1065 | 113 | |||
1066 | 114 | def manage(self): | ||
1067 | 115 | """ | ||
1068 | 116 | Handle the current hook by doing The Right Thing with the registered services. | ||
1069 | 117 | """ | ||
1070 | 118 | hook_name = hookenv.hook_name() | ||
1071 | 119 | if hook_name == 'stop': | ||
1072 | 120 | self.stop_services() | ||
1073 | 121 | else: | ||
1074 | 122 | self.provide_data() | ||
1075 | 123 | self.reconfigure_services() | ||
1076 | 124 | |||
1077 | 125 | def provide_data(self): | ||
1078 | 126 | hook_name = hookenv.hook_name() | ||
1079 | 127 | for service in self.services.values(): | ||
1080 | 128 | for provider in service.get('provided_data', []): | ||
1081 | 129 | if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name): | ||
1082 | 130 | data = provider.provide_data() | ||
1083 | 131 | if provider._is_ready(data): | ||
1084 | 132 | hookenv.relation_set(None, data) | ||
1085 | 133 | |||
1086 | 134 | def reconfigure_services(self, *service_names): | ||
1087 | 135 | """ | ||
1088 | 136 | Update all files for one or more registered services, and, | ||
1089 | 137 | if ready, optionally restart them. | ||
1090 | 138 | |||
1091 | 139 | If no service names are given, reconfigures all registered services. | ||
1092 | 140 | """ | ||
1093 | 141 | for service_name in service_names or self.services.keys(): | ||
1094 | 142 | if self.is_ready(service_name): | ||
1095 | 143 | self.fire_event('data_ready', service_name) | ||
1096 | 144 | self.fire_event('start', service_name, default=[ | ||
1097 | 145 | service_restart, | ||
1098 | 146 | manage_ports]) | ||
1099 | 147 | self.save_ready(service_name) | ||
1100 | 148 | else: | ||
1101 | 149 | if self.was_ready(service_name): | ||
1102 | 150 | self.fire_event('data_lost', service_name) | ||
1103 | 151 | self.fire_event('stop', service_name, default=[ | ||
1104 | 152 | manage_ports, | ||
1105 | 153 | service_stop]) | ||
1106 | 154 | self.save_lost(service_name) | ||
1107 | 155 | |||
1108 | 156 | def stop_services(self, *service_names): | ||
1109 | 157 | """ | ||
1110 | 158 | Stop one or more registered services, by name. | ||
1111 | 159 | |||
1112 | 160 | If no service names are given, stops all registered services. | ||
1113 | 161 | """ | ||
1114 | 162 | for service_name in service_names or self.services.keys(): | ||
1115 | 163 | self.fire_event('stop', service_name, default=[ | ||
1116 | 164 | manage_ports, | ||
1117 | 165 | service_stop]) | ||
1118 | 166 | |||
1119 | 167 | def get_service(self, service_name): | ||
1120 | 168 | """ | ||
1121 | 169 | Given the name of a registered service, return its service definition. | ||
1122 | 170 | """ | ||
1123 | 171 | service = self.services.get(service_name) | ||
1124 | 172 | if not service: | ||
1125 | 173 | raise KeyError('Service not registered: %s' % service_name) | ||
1126 | 174 | return service | ||
1127 | 175 | |||
1128 | 176 | def fire_event(self, event_name, service_name, default=None): | ||
1129 | 177 | """ | ||
1130 | 178 | Fire a data_ready, data_lost, start, or stop event on a given service. | ||
1131 | 179 | """ | ||
1132 | 180 | service = self.get_service(service_name) | ||
1133 | 181 | callbacks = service.get(event_name, default) | ||
1134 | 182 | if not callbacks: | ||
1135 | 183 | return | ||
1136 | 184 | if not isinstance(callbacks, Iterable): | ||
1137 | 185 | callbacks = [callbacks] | ||
1138 | 186 | for callback in callbacks: | ||
1139 | 187 | if isinstance(callback, ManagerCallback): | ||
1140 | 188 | callback(self, service_name, event_name) | ||
1141 | 189 | else: | ||
1142 | 190 | callback(service_name) | ||
1143 | 191 | |||
1144 | 192 | def is_ready(self, service_name): | ||
1145 | 193 | """ | ||
1146 | 194 | Determine if a registered service is ready, by checking its 'required_data'. | ||
1147 | 195 | |||
1148 | 196 | A 'required_data' item can be any mapping type, and is considered ready | ||
1149 | 197 | if `bool(item)` evaluates as True. | ||
1150 | 198 | """ | ||
1151 | 199 | service = self.get_service(service_name) | ||
1152 | 200 | reqs = service.get('required_data', []) | ||
1153 | 201 | return all(bool(req) for req in reqs) | ||
1154 | 202 | |||
1155 | 203 | def _load_ready_file(self): | ||
1156 | 204 | if self._ready is not None: | ||
1157 | 205 | return | ||
1158 | 206 | if os.path.exists(self._ready_file): | ||
1159 | 207 | with open(self._ready_file) as fp: | ||
1160 | 208 | self._ready = set(json.load(fp)) | ||
1161 | 209 | else: | ||
1162 | 210 | self._ready = set() | ||
1163 | 211 | |||
1164 | 212 | def _save_ready_file(self): | ||
1165 | 213 | if self._ready is None: | ||
1166 | 214 | return | ||
1167 | 215 | with open(self._ready_file, 'w') as fp: | ||
1168 | 216 | json.dump(list(self._ready), fp) | ||
1169 | 217 | |||
1170 | 218 | def save_ready(self, service_name): | ||
1171 | 219 | """ | ||
1172 | 220 | Save an indicator that the given service is now data_ready. | ||
1173 | 221 | """ | ||
1174 | 222 | self._load_ready_file() | ||
1175 | 223 | self._ready.add(service_name) | ||
1176 | 224 | self._save_ready_file() | ||
1177 | 225 | |||
1178 | 226 | def save_lost(self, service_name): | ||
1179 | 227 | """ | ||
1180 | 228 | Save an indicator that the given service is no longer data_ready. | ||
1181 | 229 | """ | ||
1182 | 230 | self._load_ready_file() | ||
1183 | 231 | self._ready.discard(service_name) | ||
1184 | 232 | self._save_ready_file() | ||
1185 | 233 | |||
1186 | 234 | def was_ready(self, service_name): | ||
1187 | 235 | """ | ||
1188 | 236 | Determine if the given service was previously data_ready. | ||
1189 | 237 | """ | ||
1190 | 238 | self._load_ready_file() | ||
1191 | 239 | return service_name in self._ready | ||
1192 | 240 | |||
1193 | 241 | |||
1194 | 242 | class ManagerCallback(object): | ||
1195 | 243 | """ | ||
1196 | 244 | Special case of a callback that takes the `ServiceManager` instance | ||
1197 | 245 | in addition to the service name. | ||
1198 | 246 | |||
1199 | 247 | Subclasses should implement `__call__` which should accept three parameters: | ||
1200 | 248 | |||
1201 | 249 | * `manager` The `ServiceManager` instance | ||
1202 | 250 | * `service_name` The name of the service it's being triggered for | ||
1203 | 251 | * `event_name` The name of the event that this callback is handling | ||
1204 | 252 | """ | ||
1205 | 253 | def __call__(self, manager, service_name, event_name): | ||
1206 | 254 | raise NotImplementedError() | ||
1207 | 255 | |||
1208 | 256 | |||
1209 | 257 | class PortManagerCallback(ManagerCallback): | ||
1210 | 258 | """ | ||
1211 | 259 | Callback class that will open or close ports, for use as either | ||
1212 | 260 | a start or stop action. | ||
1213 | 261 | """ | ||
1214 | 262 | def __call__(self, manager, service_name, event_name): | ||
1215 | 263 | service = manager.get_service(service_name) | ||
1216 | 264 | new_ports = service.get('ports', []) | ||
1217 | 265 | port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name)) | ||
1218 | 266 | if os.path.exists(port_file): | ||
1219 | 267 | with open(port_file) as fp: | ||
1220 | 268 | old_ports = fp.read().split(',') | ||
1221 | 269 | for old_port in old_ports: | ||
1222 | 270 | if bool(old_port): | ||
1223 | 271 | old_port = int(old_port) | ||
1224 | 272 | if old_port not in new_ports: | ||
1225 | 273 | hookenv.close_port(old_port) | ||
1226 | 274 | with open(port_file, 'w') as fp: | ||
1227 | 275 | fp.write(','.join(str(port) for port in new_ports)) | ||
1228 | 276 | for port in new_ports: | ||
1229 | 277 | if event_name == 'start': | ||
1230 | 278 | hookenv.open_port(port) | ||
1231 | 279 | elif event_name == 'stop': | ||
1232 | 280 | hookenv.close_port(port) | ||
1233 | 281 | |||
1234 | 282 | |||
1235 | 283 | def service_stop(service_name): | ||
1236 | 284 | """ | ||
1237 | 285 | Wrapper around host.service_stop to prevent spurious "unknown service" | ||
1238 | 286 | messages in the logs. | ||
1239 | 287 | """ | ||
1240 | 288 | if host.service_running(service_name): | ||
1241 | 289 | host.service_stop(service_name) | ||
1242 | 290 | |||
1243 | 291 | |||
1244 | 292 | def service_restart(service_name): | ||
1245 | 293 | """ | ||
1246 | 294 | Wrapper around host.service_restart to prevent spurious "unknown service" | ||
1247 | 295 | messages in the logs. | ||
1248 | 296 | """ | ||
1249 | 297 | if host.service_available(service_name): | ||
1250 | 298 | if host.service_running(service_name): | ||
1251 | 299 | host.service_restart(service_name) | ||
1252 | 300 | else: | ||
1253 | 301 | host.service_start(service_name) | ||
1254 | 302 | |||
1255 | 303 | |||
1256 | 304 | # Convenience aliases | ||
1257 | 305 | open_ports = close_ports = manage_ports = PortManagerCallback() | ||
1258 | 0 | 306 | ||
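The heart of `services/base.py` is the readiness check (`all(bool(req) for req in required_data)`) and the event dispatch in `fire_event`. A compressed sketch of those two mechanisms, runnable outside a hook environment (`MiniServiceManager` is illustrative, and it uses `callable()` where the real class checks `isinstance(callbacks, Iterable)`):

```python
class MiniServiceManager(object):
    """Readiness check and event dispatch, as in ServiceManager above."""

    def __init__(self, services):
        self.services = dict((s['service'], s) for s in services)

    def is_ready(self, name):
        # A required_data item is "ready" when it is truthy -- e.g. a
        # RelationContext that collected complete relation data.
        reqs = self.services[name].get('required_data', [])
        return all(bool(req) for req in reqs)

    def fire_event(self, event, name, default=None):
        callbacks = self.services[name].get(event, default) or []
        if callable(callbacks):
            callbacks = [callbacks]
        for cb in callbacks:
            cb(name)


log = []
manager = MiniServiceManager([{
    'service': 'bingod',
    'required_data': [{'db': 'ready'}, {'my': 'data'}],
    'data_ready': lambda name: log.append('configured %s' % name),
}])

print(manager.is_ready('bingod'))        # all contexts are truthy
manager.fire_event('data_ready', 'bingod')
print(log)

blocked = MiniServiceManager([{'service': 'spadesd',
                               'required_data': [{}]}])
print(blocked.is_ready('spadesd'))       # an empty context blocks start
```

The real class adds the `READY-SERVICES.json` bookkeeping so `data_lost`/`stop` only fire for services that were previously ready.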
1259 | === added file 'hooks/charmhelpers/core/services/helpers.py' | |||
1260 | --- hooks/charmhelpers/core/services/helpers.py 1970-01-01 00:00:00 +0000 | |||
1261 | +++ hooks/charmhelpers/core/services/helpers.py 2014-09-09 05:13:17 +0000 | |||
1262 | @@ -0,0 +1,125 @@ | |||
1263 | 1 | from charmhelpers.core import hookenv | ||
1264 | 2 | from charmhelpers.core import templating | ||
1265 | 3 | |||
1266 | 4 | from charmhelpers.core.services.base import ManagerCallback | ||
1267 | 5 | |||
1268 | 6 | |||
1269 | 7 | __all__ = ['RelationContext', 'TemplateCallback', | ||
1270 | 8 | 'render_template', 'template'] | ||
1271 | 9 | |||
1272 | 10 | |||
1273 | 11 | class RelationContext(dict): | ||
1274 | 12 | """ | ||
1275 | 13 | Base class for a context generator that gets relation data from juju. | ||
1276 | 14 | |||
1277 | 15 | Subclasses must provide the attributes `name`, which is the name of the | ||
1278 | 16 | interface of interest, `interface`, which is the type of the interface of | ||
1279 | 17 | interest, and `required_keys`, which is the set of keys required for the | ||
1280 | 18 | relation to be considered complete. The data for all interfaces matching | ||
1281 | 19 | the `name` attribute that are complete will used to populate the dictionary | ||
1282 | 20 | values (see `get_data`, below). | ||
1283 | 21 | |||
1284 | 22 | The generated context will be namespaced under the interface type, to prevent | ||
1285 | 23 | potential naming conflicts. | ||
1286 | 24 | """ | ||
1287 | 25 | name = None | ||
1288 | 26 | interface = None | ||
1289 | 27 | required_keys = [] | ||
1290 | 28 | |||
1291 | 29 | def __init__(self, *args, **kwargs): | ||
1292 | 30 | super(RelationContext, self).__init__(*args, **kwargs) | ||
1293 | 31 | self.get_data() | ||
1294 | 32 | |||
1295 | 33 | def __bool__(self): | ||
1296 | 34 | """ | ||
1297 | 35 | Returns True if all of the required_keys are available. | ||
1298 | 36 | """ | ||
1299 | 37 | return self.is_ready() | ||
1300 | 38 | |||
1301 | 39 | __nonzero__ = __bool__ | ||
1302 | 40 | |||
1303 | 41 | def __repr__(self): | ||
1304 | 42 | return super(RelationContext, self).__repr__() | ||
1305 | 43 | |||
1306 | 44 | def is_ready(self): | ||
1307 | 45 | """ | ||
1308 | 46 | Returns True if all of the `required_keys` are available from any units. | ||
1309 | 47 | """ | ||
1310 | 48 | ready = len(self.get(self.name, [])) > 0 | ||
1311 | 49 | if not ready: | ||
1312 | 50 | hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG) | ||
1313 | 51 | return ready | ||
1314 | 52 | |||
1315 | 53 | def _is_ready(self, unit_data): | ||
1316 | 54 | """ | ||
1317 | 55 | Helper method that tests a set of relation data and returns True if | ||
1318 | 56 | all of the `required_keys` are present. | ||
1319 | 57 | """ | ||
1320 | 58 | return set(unit_data.keys()).issuperset(set(self.required_keys)) | ||
1321 | 59 | |||
1322 | 60 | def get_data(self): | ||
1323 | 61 | """ | ||
1324 | 62 | Retrieve the relation data for each unit involved in a relation and, | ||
1325 | 63 | if complete, store it in a list under `self[self.name]`. This | ||
1326 | 64 | is automatically called when the RelationContext is instantiated. | ||
1327 | 65 | |||
1328 | 66 | The units are sorted lexographically first by the service ID, then by | ||
1329 | 67 | the unit ID. Thus, if an interface has two other services, 'db:1' | ||
1330 | 68 | and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1', | ||
1331 | 69 | and 'db:2' having one unit, 'mediawiki/0', all of which have a complete | ||
1332 | 70 | set of data, the relation data for the units will be stored in the | ||
1333 | 71 | order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'. | ||
1334 | 72 | |||
1335 | 73 | If you only care about a single unit on the relation, you can just | ||
1336 | 74 | access it as `{{ interface[0]['key'] }}`. However, if you can at all | ||
1337 | 75 | support multiple units on a relation, you should iterate over the list, | ||
1338 | 76 | like:: | ||
1339 | 77 | |||
1340 | 78 | {% for unit in interface -%} | ||
1341 | 79 | {{ unit['key'] }}{% if not loop.last %},{% endif %} | ||
1342 | 80 | {%- endfor %} | ||
1343 | 81 | |||
1344 | 82 | Note that since all sets of relation data from all related services and | ||
1345 | 83 | units are in a single list, if you need to know which service or unit a | ||
1346 | 84 | set of data came from, you'll need to extend this class to preserve | ||
1347 | 85 | that information. | ||
1348 | 86 | """ | ||
1349 | 87 | if not hookenv.relation_ids(self.name): | ||
1350 | 88 | return | ||
1351 | 89 | |||
1352 | 90 | ns = self.setdefault(self.name, []) | ||
1353 | 91 | for rid in sorted(hookenv.relation_ids(self.name)): | ||
1354 | 92 | for unit in sorted(hookenv.related_units(rid)): | ||
1355 | 93 | reldata = hookenv.relation_get(rid=rid, unit=unit) | ||
1356 | 94 | if self._is_ready(reldata): | ||
1357 | 95 | ns.append(reldata) | ||
1358 | 96 | |||
1359 | 97 | def provide_data(self): | ||
1360 | 98 | """ | ||
1361 | 99 | Return data to be relation_set for this interface. | ||
1362 | 100 | """ | ||
1363 | 101 | return {} | ||
1364 | 102 | |||
1365 | 103 | |||
1366 | 104 | class TemplateCallback(ManagerCallback): | ||
1367 | 105 | """ | ||
1368 | 106 | Callback class that will render a template, for use as a ready action. | ||
1369 | 107 | """ | ||
1370 | 108 | def __init__(self, source, target, owner='root', group='root', perms=0444): | ||
1371 | 109 | self.source = source | ||
1372 | 110 | self.target = target | ||
1373 | 111 | self.owner = owner | ||
1374 | 112 | self.group = group | ||
1375 | 113 | self.perms = perms | ||
1376 | 114 | |||
1377 | 115 | def __call__(self, manager, service_name, event_name): | ||
1378 | 116 | service = manager.get_service(service_name) | ||
1379 | 117 | context = {} | ||
1380 | 118 | for ctx in service.get('required_data', []): | ||
1381 | 119 | context.update(ctx) | ||
1382 | 120 | templating.render(self.source, self.target, context, | ||
1383 | 121 | self.owner, self.group, self.perms) | ||
1384 | 122 | |||
1385 | 123 | |||
1386 | 124 | # Convenience aliases for templates | ||
1387 | 125 | render_template = template = TemplateCallback | ||
1388 | 0 | 126 | ||
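The readiness test in `RelationContext._is_ready` above reduces to a superset check on the unit's relation keys. A minimal standalone sketch of that logic (plain Python, no charmhelpers imports; `required` and the sample unit data are made up for illustration):

```python
def is_ready(unit_data, required_keys):
    # A unit's relation data is "ready" once every required key is present.
    return set(unit_data.keys()).issuperset(set(required_keys))

# Hypothetical relation data for a database interface.
required = ['host', 'port', 'password']
complete = {'host': '10.0.0.1', 'port': '5432', 'password': 'secret'}
partial = {'host': '10.0.0.1'}

print(is_ready(complete, required))  # True
print(is_ready(partial, required))   # False
```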
1389 | === added file 'hooks/charmhelpers/core/templating.py' | |||
1390 | --- hooks/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000 | |||
1391 | +++ hooks/charmhelpers/core/templating.py 2014-09-09 05:13:17 +0000 | |||
1392 | @@ -0,0 +1,51 @@ | |||
1393 | 1 | import os | ||
1394 | 2 | |||
1395 | 3 | from charmhelpers.core import host | ||
1396 | 4 | from charmhelpers.core import hookenv | ||
1397 | 5 | |||
1398 | 6 | |||
1399 | 7 | def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None): | ||
1400 | 8 | """ | ||
1401 | 9 | Render a template. | ||
1402 | 10 | |||
1403 | 11 | The `source` path, if not absolute, is relative to the `templates_dir`. | ||
1404 | 12 | |||
1405 | 13 | The `target` path should be absolute. | ||
1406 | 14 | |||
1407 | 15 | The context should be a dict containing the values to be replaced in the | ||
1408 | 16 | template. | ||
1409 | 17 | |||
1410 | 18 | The `owner`, `group`, and `perms` options will be passed to `write_file`. | ||
1411 | 19 | |||
1412 | 20 | If omitted, `templates_dir` defaults to the `templates` folder in the charm. | ||
1413 | 21 | |||
1414 | 22 | Note: Using this requires python-jinja2; if it is not installed, calling | ||
1415 | 23 | this will attempt to use charmhelpers.fetch.apt_install to install it. | ||
1416 | 24 | """ | ||
1417 | 25 | try: | ||
1418 | 26 | from jinja2 import FileSystemLoader, Environment, exceptions | ||
1419 | 27 | except ImportError: | ||
1420 | 28 | try: | ||
1421 | 29 | from charmhelpers.fetch import apt_install | ||
1422 | 30 | except ImportError: | ||
1423 | 31 | hookenv.log('Could not import jinja2, and could not import ' | ||
1424 | 32 | 'charmhelpers.fetch to install it', | ||
1425 | 33 | level=hookenv.ERROR) | ||
1426 | 34 | raise | ||
1427 | 35 | apt_install('python-jinja2', fatal=True) | ||
1428 | 36 | from jinja2 import FileSystemLoader, Environment, exceptions | ||
1429 | 37 | |||
1430 | 38 | if templates_dir is None: | ||
1431 | 39 | templates_dir = os.path.join(hookenv.charm_dir(), 'templates') | ||
1432 | 40 | loader = Environment(loader=FileSystemLoader(templates_dir)) | ||
1433 | 41 | try: | ||
1434 | 42 | source = source | ||
1435 | 43 | template = loader.get_template(source) | ||
1436 | 44 | except exceptions.TemplateNotFound as e: | ||
1437 | 45 | hookenv.log('Could not load template %s from %s.' % | ||
1438 | 46 | (source, templates_dir), | ||
1439 | 47 | level=hookenv.ERROR) | ||
1440 | 48 | raise e | ||
1441 | 49 | content = template.render(context) | ||
1442 | 50 | host.mkdir(os.path.dirname(target)) | ||
1443 | 51 | host.write_file(target, content, owner, group, perms) | ||
1444 | 0 | 52 | ||
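`render` above fills a Jinja2 template from a context dict and writes the result out with the requested ownership and permissions. The context-to-text step can be sketched with the standard library's `string.Template` standing in for Jinja2 (Jinja2 itself uses `{{ ... }}` delimiters; this only illustrates the substitution step, not the real API):

```python
from string import Template

def render_sketch(source_text, context):
    # Stand-in for template.render(context): substitute context values
    # into the template text.
    return Template(source_text).substitute(context)

print(render_sketch('listen $host:$port', {'host': '0.0.0.0', 'port': 80}))
# listen 0.0.0.0:80
```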
1445 | === added directory 'hooks/charmhelpers/fetch' | |||
1446 | === added file 'hooks/charmhelpers/fetch/__init__.py' | |||
1447 | --- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000 | |||
1448 | +++ hooks/charmhelpers/fetch/__init__.py 2014-09-09 05:13:17 +0000 | |||
1449 | @@ -0,0 +1,394 @@ | |||
1450 | 1 | import importlib | ||
1451 | 2 | from tempfile import NamedTemporaryFile | ||
1452 | 3 | import time | ||
1453 | 4 | from yaml import safe_load | ||
1454 | 5 | from charmhelpers.core.host import ( | ||
1455 | 6 | lsb_release | ||
1456 | 7 | ) | ||
1457 | 8 | from urlparse import ( | ||
1458 | 9 | urlparse, | ||
1459 | 10 | urlunparse, | ||
1460 | 11 | ) | ||
1461 | 12 | import subprocess | ||
1462 | 13 | from charmhelpers.core.hookenv import ( | ||
1463 | 14 | config, | ||
1464 | 15 | log, | ||
1465 | 16 | ) | ||
1466 | 17 | import os | ||
1467 | 18 | |||
1468 | 19 | |||
1469 | 20 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive | ||
1470 | 21 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main | ||
1471 | 22 | """ | ||
1472 | 23 | PROPOSED_POCKET = """# Proposed | ||
1473 | 24 | deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted | ||
1474 | 25 | """ | ||
1475 | 26 | CLOUD_ARCHIVE_POCKETS = { | ||
1476 | 27 | # Folsom | ||
1477 | 28 | 'folsom': 'precise-updates/folsom', | ||
1478 | 29 | 'precise-folsom': 'precise-updates/folsom', | ||
1479 | 30 | 'precise-folsom/updates': 'precise-updates/folsom', | ||
1480 | 31 | 'precise-updates/folsom': 'precise-updates/folsom', | ||
1481 | 32 | 'folsom/proposed': 'precise-proposed/folsom', | ||
1482 | 33 | 'precise-folsom/proposed': 'precise-proposed/folsom', | ||
1483 | 34 | 'precise-proposed/folsom': 'precise-proposed/folsom', | ||
1484 | 35 | # Grizzly | ||
1485 | 36 | 'grizzly': 'precise-updates/grizzly', | ||
1486 | 37 | 'precise-grizzly': 'precise-updates/grizzly', | ||
1487 | 38 | 'precise-grizzly/updates': 'precise-updates/grizzly', | ||
1488 | 39 | 'precise-updates/grizzly': 'precise-updates/grizzly', | ||
1489 | 40 | 'grizzly/proposed': 'precise-proposed/grizzly', | ||
1490 | 41 | 'precise-grizzly/proposed': 'precise-proposed/grizzly', | ||
1491 | 42 | 'precise-proposed/grizzly': 'precise-proposed/grizzly', | ||
1492 | 43 | # Havana | ||
1493 | 44 | 'havana': 'precise-updates/havana', | ||
1494 | 45 | 'precise-havana': 'precise-updates/havana', | ||
1495 | 46 | 'precise-havana/updates': 'precise-updates/havana', | ||
1496 | 47 | 'precise-updates/havana': 'precise-updates/havana', | ||
1497 | 48 | 'havana/proposed': 'precise-proposed/havana', | ||
1498 | 49 | 'precise-havana/proposed': 'precise-proposed/havana', | ||
1499 | 50 | 'precise-proposed/havana': 'precise-proposed/havana', | ||
1500 | 51 | # Icehouse | ||
1501 | 52 | 'icehouse': 'precise-updates/icehouse', | ||
1502 | 53 | 'precise-icehouse': 'precise-updates/icehouse', | ||
1503 | 54 | 'precise-icehouse/updates': 'precise-updates/icehouse', | ||
1504 | 55 | 'precise-updates/icehouse': 'precise-updates/icehouse', | ||
1505 | 56 | 'icehouse/proposed': 'precise-proposed/icehouse', | ||
1506 | 57 | 'precise-icehouse/proposed': 'precise-proposed/icehouse', | ||
1507 | 58 | 'precise-proposed/icehouse': 'precise-proposed/icehouse', | ||
1508 | 59 | # Juno | ||
1509 | 60 | 'juno': 'trusty-updates/juno', | ||
1510 | 61 | 'trusty-juno': 'trusty-updates/juno', | ||
1511 | 62 | 'trusty-juno/updates': 'trusty-updates/juno', | ||
1512 | 63 | 'trusty-updates/juno': 'trusty-updates/juno', | ||
1513 | 64 | 'juno/proposed': 'trusty-proposed/juno', | ||
1515 | 66 | 'trusty-juno/proposed': 'trusty-proposed/juno', | ||
1516 | 67 | 'trusty-proposed/juno': 'trusty-proposed/juno', | ||
1517 | 68 | } | ||
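The `cloud:` branch of `add_source` below resolves the user-supplied pocket through this table before writing the sources.list entry. A standalone sketch of that lookup, with a trimmed copy of the table (`resolve_cloud_source` is an illustrative name, not part of charmhelpers):

```python
CLOUD_ARCHIVE_POCKETS = {
    'icehouse': 'precise-updates/icehouse',
    'icehouse/proposed': 'precise-proposed/icehouse',
}
CLOUD_ARCHIVE = ("# Ubuntu Cloud Archive\n"
                 "deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main\n")

def resolve_cloud_source(source):
    # 'cloud:icehouse' -> the sources.list stanza for the mapped pocket.
    pocket = source.split(':')[-1]
    if pocket not in CLOUD_ARCHIVE_POCKETS:
        raise ValueError('Unsupported cloud: source option %s' % pocket)
    return CLOUD_ARCHIVE.format(CLOUD_ARCHIVE_POCKETS[pocket])

print(resolve_cloud_source('cloud:icehouse'))
```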
1518 | 69 | |||
1519 | 70 | # The order of this list is very important. Handlers should be listed from | ||
1520 | 71 | # least- to most-specific URL matching. | ||
1521 | 72 | FETCH_HANDLERS = ( | ||
1522 | 73 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', | ||
1523 | 74 | 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', | ||
1524 | 75 | ) | ||
1525 | 76 | |||
1526 | 77 | APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. | ||
1527 | 78 | APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks. | ||
1528 | 79 | APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times. | ||
1529 | 80 | |||
1530 | 81 | |||
1531 | 82 | class SourceConfigError(Exception): | ||
1532 | 83 | pass | ||
1533 | 84 | |||
1534 | 85 | |||
1535 | 86 | class UnhandledSource(Exception): | ||
1536 | 87 | pass | ||
1537 | 88 | |||
1538 | 89 | |||
1539 | 90 | class AptLockError(Exception): | ||
1540 | 91 | pass | ||
1541 | 92 | |||
1542 | 93 | |||
1543 | 94 | class BaseFetchHandler(object): | ||
1544 | 95 | |||
1545 | 96 | """Base class for FetchHandler implementations in fetch plugins""" | ||
1546 | 97 | |||
1547 | 98 | def can_handle(self, source): | ||
1548 | 99 | """Returns True if the source can be handled. Otherwise returns | ||
1549 | 100 | a string explaining why it cannot""" | ||
1550 | 101 | return "Wrong source type" | ||
1551 | 102 | |||
1552 | 103 | def install(self, source): | ||
1553 | 104 | """Try to download and unpack the source. Return the path to the | ||
1554 | 105 | unpacked files or raise UnhandledSource.""" | ||
1555 | 106 | raise UnhandledSource("Wrong source type {}".format(source)) | ||
1556 | 107 | |||
1557 | 108 | def parse_url(self, url): | ||
1558 | 109 | return urlparse(url) | ||
1559 | 110 | |||
1560 | 111 | def base_url(self, url): | ||
1561 | 112 | """Return url without querystring or fragment""" | ||
1562 | 113 | parts = list(self.parse_url(url)) | ||
1563 | 114 | parts[4:] = ['' for i in parts[4:]] | ||
1564 | 115 | return urlunparse(parts) | ||
1565 | 116 | |||
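`base_url` above strips the query string and fragment by zeroing the trailing components of the parsed tuple. The same idea, runnable under Python 3's `urllib.parse` (the charm itself targets Python 2's `urlparse` module):

```python
from urllib.parse import urlparse, urlunparse

def base_url(url):
    # Drop query string and fragment; keep scheme, netloc, path, params.
    parts = list(urlparse(url))
    parts[4:] = ['' for _ in parts[4:]]
    return urlunparse(parts)

print(base_url('http://example.com/a.tgz?token=x#frag'))
# http://example.com/a.tgz
```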
1566 | 117 | |||
1567 | 118 | def filter_installed_packages(packages): | ||
1568 | 119 | """Returns a list of packages that require installation""" | ||
1569 | 120 | import apt_pkg | ||
1570 | 121 | apt_pkg.init() | ||
1571 | 122 | |||
1572 | 123 | # Tell apt to build an in-memory cache to prevent race conditions (if | ||
1573 | 124 | # another process is already building the cache). | ||
1574 | 125 | apt_pkg.config.set("Dir::Cache::pkgcache", "") | ||
1575 | 126 | apt_pkg.config.set("Dir::Cache::srcpkgcache", "") | ||
1576 | 127 | |||
1577 | 128 | cache = apt_pkg.Cache() | ||
1578 | 129 | _pkgs = [] | ||
1579 | 130 | for package in packages: | ||
1580 | 131 | try: | ||
1581 | 132 | p = cache[package] | ||
1582 | 133 | p.current_ver or _pkgs.append(package) | ||
1583 | 134 | except KeyError: | ||
1584 | 135 | log('Package {} has no installation candidate.'.format(package), | ||
1585 | 136 | level='WARNING') | ||
1586 | 137 | _pkgs.append(package) | ||
1587 | 138 | return _pkgs | ||
1588 | 139 | |||
1589 | 140 | |||
1590 | 141 | def apt_install(packages, options=None, fatal=False): | ||
1591 | 142 | """Install one or more packages""" | ||
1592 | 143 | if options is None: | ||
1593 | 144 | options = ['--option=Dpkg::Options::=--force-confold'] | ||
1594 | 145 | |||
1595 | 146 | cmd = ['apt-get', '--assume-yes'] | ||
1596 | 147 | cmd.extend(options) | ||
1597 | 148 | cmd.append('install') | ||
1598 | 149 | if isinstance(packages, basestring): | ||
1599 | 150 | cmd.append(packages) | ||
1600 | 151 | else: | ||
1601 | 152 | cmd.extend(packages) | ||
1602 | 153 | log("Installing {} with options: {}".format(packages, | ||
1603 | 154 | options)) | ||
1604 | 155 | _run_apt_command(cmd, fatal) | ||
1605 | 156 | |||
1606 | 157 | |||
1607 | 158 | def apt_upgrade(options=None, fatal=False, dist=False): | ||
1608 | 159 | """Upgrade all packages""" | ||
1609 | 160 | if options is None: | ||
1610 | 161 | options = ['--option=Dpkg::Options::=--force-confold'] | ||
1611 | 162 | |||
1612 | 163 | cmd = ['apt-get', '--assume-yes'] | ||
1613 | 164 | cmd.extend(options) | ||
1614 | 165 | if dist: | ||
1615 | 166 | cmd.append('dist-upgrade') | ||
1616 | 167 | else: | ||
1617 | 168 | cmd.append('upgrade') | ||
1618 | 169 | log("Upgrading with options: {}".format(options)) | ||
1619 | 170 | _run_apt_command(cmd, fatal) | ||
1620 | 171 | |||
1621 | 172 | |||
1622 | 173 | def apt_update(fatal=False): | ||
1623 | 174 | """Update local apt cache""" | ||
1624 | 175 | cmd = ['apt-get', 'update'] | ||
1625 | 176 | _run_apt_command(cmd, fatal) | ||
1626 | 177 | |||
1627 | 178 | |||
1628 | 179 | def apt_purge(packages, fatal=False): | ||
1629 | 180 | """Purge one or more packages""" | ||
1630 | 181 | cmd = ['apt-get', '--assume-yes', 'purge'] | ||
1631 | 182 | if isinstance(packages, basestring): | ||
1632 | 183 | cmd.append(packages) | ||
1633 | 184 | else: | ||
1634 | 185 | cmd.extend(packages) | ||
1635 | 186 | log("Purging {}".format(packages)) | ||
1636 | 187 | _run_apt_command(cmd, fatal) | ||
1637 | 188 | |||
1638 | 189 | |||
1639 | 190 | def apt_hold(packages, fatal=False): | ||
1640 | 191 | """Hold one or more packages""" | ||
1641 | 192 | cmd = ['apt-mark', 'hold'] | ||
1642 | 193 | if isinstance(packages, basestring): | ||
1643 | 194 | cmd.append(packages) | ||
1644 | 195 | else: | ||
1645 | 196 | cmd.extend(packages) | ||
1646 | 197 | log("Holding {}".format(packages)) | ||
1647 | 198 | |||
1648 | 199 | if fatal: | ||
1649 | 200 | subprocess.check_call(cmd) | ||
1650 | 201 | else: | ||
1651 | 202 | subprocess.call(cmd) | ||
1652 | 203 | |||
1653 | 204 | |||
1654 | 205 | def add_source(source, key=None): | ||
1655 | 206 | """Add a package source to this system. | ||
1656 | 207 | |||
1657 | 208 | @param source: a URL or sources.list entry, as supported by | ||
1658 | 209 | add-apt-repository(1). Examples: | ||
1659 | 210 | ppa:charmers/example | ||
1660 | 211 | deb https://stub:key@private.example.com/ubuntu trusty main | ||
1661 | 212 | |||
1662 | 213 | In addition: | ||
1663 | 214 | 'proposed:' may be used to enable the standard 'proposed' | ||
1664 | 215 | pocket for the release. | ||
1665 | 216 | 'cloud:' may be used to activate official cloud archive pockets, | ||
1666 | 217 | such as 'cloud:icehouse' | ||
1667 | 218 | |||
1668 | 219 | @param key: A key to be added to the system's APT keyring and used | ||
1669 | 220 | to verify the signatures on packages. Ideally, this should be an | ||
1670 | 221 | ASCII format GPG public key including the block headers. A GPG key | ||
1671 | 222 | id may also be used, but be aware that only insecure protocols are | ||
1672 | 223 | available to retrieve the actual public key from a public keyserver, | ||
1673 | 224 | placing your Juju environment at risk. ppa and cloud archive keys | ||
1674 | 225 | are securely added automatically, so should not be provided. | ||
1675 | 226 | """ | ||
1676 | 227 | if source is None: | ||
1677 | 228 | log('Source is not present. Skipping') | ||
1678 | 229 | return | ||
1679 | 230 | |||
1680 | 231 | if (source.startswith('ppa:') or | ||
1681 | 232 | source.startswith('http') or | ||
1682 | 233 | source.startswith('deb ') or | ||
1683 | 234 | source.startswith('cloud-archive:')): | ||
1684 | 235 | subprocess.check_call(['add-apt-repository', '--yes', source]) | ||
1685 | 236 | elif source.startswith('cloud:'): | ||
1686 | 237 | apt_install(filter_installed_packages(['ubuntu-cloud-keyring']), | ||
1687 | 238 | fatal=True) | ||
1688 | 239 | pocket = source.split(':')[-1] | ||
1689 | 240 | if pocket not in CLOUD_ARCHIVE_POCKETS: | ||
1690 | 241 | raise SourceConfigError( | ||
1691 | 242 | 'Unsupported cloud: source option %s' % | ||
1692 | 243 | pocket) | ||
1693 | 244 | actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket] | ||
1694 | 245 | with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: | ||
1695 | 246 | apt.write(CLOUD_ARCHIVE.format(actual_pocket)) | ||
1696 | 247 | elif source == 'proposed': | ||
1697 | 248 | release = lsb_release()['DISTRIB_CODENAME'] | ||
1698 | 249 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: | ||
1699 | 250 | apt.write(PROPOSED_POCKET.format(release)) | ||
1700 | 251 | else: | ||
1701 | 252 | raise SourceConfigError("Unknown source: {!r}".format(source)) | ||
1702 | 253 | |||
1703 | 254 | if key: | ||
1704 | 255 | if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key: | ||
1705 | 256 | with NamedTemporaryFile() as key_file: | ||
1706 | 257 | key_file.write(key) | ||
1707 | 258 | key_file.flush() | ||
1708 | 259 | key_file.seek(0) | ||
1709 | 260 | subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file) | ||
1710 | 261 | else: | ||
1711 | 262 | # Note that hkp: is in no way a secure protocol. Using a | ||
1712 | 263 | # GPG key id is pointless from a security POV unless you | ||
1713 | 264 | # absolutely trust your network and DNS. | ||
1714 | 265 | subprocess.check_call(['apt-key', 'adv', '--keyserver', | ||
1715 | 266 | 'hkp://keyserver.ubuntu.com:80', '--recv', | ||
1716 | 267 | key]) | ||
1717 | 268 | |||
1718 | 269 | |||
1719 | 270 | def configure_sources(update=False, | ||
1720 | 271 | sources_var='install_sources', | ||
1721 | 272 | keys_var='install_keys'): | ||
1722 | 273 | """ | ||
1723 | 274 | Configure multiple sources from charm configuration. | ||
1724 | 275 | |||
1725 | 276 | The lists are encoded as yaml fragments in the configuration. | ||
1726 | 277 | The fragment needs to be included as a string. Sources and their | ||
1727 | 278 | corresponding keys are of the types supported by add_source(). | ||
1728 | 279 | |||
1729 | 280 | Example config: | ||
1730 | 281 | install_sources: | | ||
1731 | 282 | - "ppa:foo" | ||
1732 | 283 | - "http://example.com/repo precise main" | ||
1733 | 284 | install_keys: | | ||
1734 | 285 | - null | ||
1735 | 286 | - "a1b2c3d4" | ||
1736 | 287 | |||
1737 | 288 | Note that 'null' (a.k.a. None) should not be quoted. | ||
1738 | 289 | """ | ||
1739 | 290 | sources = safe_load((config(sources_var) or '').strip()) or [] | ||
1740 | 291 | keys = safe_load((config(keys_var) or '').strip()) or None | ||
1741 | 292 | |||
1742 | 293 | if isinstance(sources, basestring): | ||
1743 | 294 | sources = [sources] | ||
1744 | 295 | |||
1745 | 296 | if keys is None: | ||
1746 | 297 | for source in sources: | ||
1747 | 298 | add_source(source, None) | ||
1748 | 299 | else: | ||
1749 | 300 | if isinstance(keys, basestring): | ||
1750 | 301 | keys = [keys] | ||
1751 | 302 | |||
1752 | 303 | if len(sources) != len(keys): | ||
1753 | 304 | raise SourceConfigError( | ||
1754 | 305 | 'Install sources and keys lists are different lengths') | ||
1755 | 306 | for source, key in zip(sources, keys): | ||
1756 | 307 | add_source(source, key) | ||
1757 | 308 | if update: | ||
1758 | 309 | apt_update(fatal=True) | ||
1759 | 310 | |||
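`configure_sources` pairs each configured source with its key and refuses mismatched list lengths. That pairing logic, sketched without the YAML or apt machinery (`pair_sources` is an illustrative name; the real function calls `add_source` per pair instead of returning them):

```python
def pair_sources(sources, keys):
    # Normalize single strings to lists, then pair sources with keys,
    # mirroring the length check in configure_sources.
    if isinstance(sources, str):
        sources = [sources]
    if keys is None:
        return [(s, None) for s in sources]
    if isinstance(keys, str):
        keys = [keys]
    if len(sources) != len(keys):
        raise ValueError('Install sources and keys lists are different lengths')
    return list(zip(sources, keys))

print(pair_sources(['ppa:foo', 'http://example.com/repo precise main'],
                   [None, 'a1b2c3d4']))
```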
1760 | 311 | |||
1761 | 312 | def install_remote(source): | ||
1762 | 313 | """ | ||
1763 | 314 | Install a file tree from a remote source | ||
1764 | 315 | |||
1765 | 316 | The specified source should be a url of the form: | ||
1766 | 317 | scheme://[host]/path[#[option=value][&...]] | ||
1767 | 318 | |||
1768 | 319 | Schemes supported are based on this module's submodules | ||
1769 | 320 | Options supported are submodule-specific""" | ||
1770 | 321 | # We ONLY check for True here because can_handle may return a string | ||
1771 | 322 | # explaining why it can't handle a given source. | ||
1772 | 323 | handlers = [h for h in plugins() if h.can_handle(source) is True] | ||
1773 | 324 | installed_to = None | ||
1774 | 325 | for handler in handlers: | ||
1775 | 326 | try: | ||
1776 | 327 | installed_to = handler.install(source) | ||
1777 | 328 | except UnhandledSource: | ||
1778 | 329 | pass | ||
1779 | 330 | if not installed_to: | ||
1780 | 331 | raise UnhandledSource("No handler found for source {}".format(source)) | ||
1781 | 332 | return installed_to | ||
1782 | 333 | |||
1783 | 334 | |||
1784 | 335 | def install_from_config(config_var_name): | ||
1785 | 336 | charm_config = config() | ||
1786 | 337 | source = charm_config[config_var_name] | ||
1787 | 338 | return install_remote(source) | ||
1788 | 339 | |||
1789 | 340 | |||
1790 | 341 | def plugins(fetch_handlers=None): | ||
1791 | 342 | if not fetch_handlers: | ||
1792 | 343 | fetch_handlers = FETCH_HANDLERS | ||
1793 | 344 | plugin_list = [] | ||
1794 | 345 | for handler_name in fetch_handlers: | ||
1795 | 346 | package, classname = handler_name.rsplit('.', 1) | ||
1796 | 347 | try: | ||
1797 | 348 | handler_class = getattr( | ||
1798 | 349 | importlib.import_module(package), | ||
1799 | 350 | classname) | ||
1800 | 351 | plugin_list.append(handler_class()) | ||
1801 | 352 | except (ImportError, AttributeError): | ||
1802 | 353 | # Skip missing plugins so that they can be omitted from | ||
1803 | 354 | # installation if desired | ||
1804 | 355 | log("FetchHandler {} not found, skipping plugin".format( | ||
1805 | 356 | handler_name)) | ||
1806 | 357 | return plugin_list | ||
1807 | 358 | |||
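`plugins` above turns each dotted handler name into an instance via `importlib`, skipping names that fail to import. The same mechanism, demonstrated with a standard-library class standing in for the fetch handlers:

```python
import importlib

def load_classes(dotted_names):
    # Resolve 'package.path.ClassName' strings to class objects,
    # skipping any that cannot be imported (as plugins() does).
    loaded = []
    for name in dotted_names:
        package, classname = name.rsplit('.', 1)
        try:
            loaded.append(getattr(importlib.import_module(package), classname))
        except (ImportError, AttributeError):
            pass
    return loaded

classes = load_classes(['collections.OrderedDict', 'no.such.Plugin'])
print(classes)
```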
1808 | 359 | |||
1809 | 360 | def _run_apt_command(cmd, fatal=False): | ||
1810 | 361 | """ | ||
1811 | 362 | Run an APT command, checking output and retrying if the fatal flag is set | ||
1812 | 363 | to True. | ||
1813 | 364 | |||
1814 | 365 | :param: cmd: str: The apt command to run. | ||
1815 | 366 | :param: fatal: bool: Whether the command's output should be checked and | ||
1816 | 367 | retried. | ||
1817 | 368 | """ | ||
1818 | 369 | env = os.environ.copy() | ||
1819 | 370 | |||
1820 | 371 | if 'DEBIAN_FRONTEND' not in env: | ||
1821 | 372 | env['DEBIAN_FRONTEND'] = 'noninteractive' | ||
1822 | 373 | |||
1823 | 374 | if fatal: | ||
1824 | 375 | retry_count = 0 | ||
1825 | 376 | result = None | ||
1826 | 377 | |||
1827 | 378 | # If the command is considered "fatal", we need to retry if the apt | ||
1828 | 379 | # lock was not acquired. | ||
1829 | 380 | |||
1830 | 381 | while result is None or result == APT_NO_LOCK: | ||
1831 | 382 | try: | ||
1832 | 383 | result = subprocess.check_call(cmd, env=env) | ||
1833 | 384 | except subprocess.CalledProcessError as e: | ||
1834 | 385 | retry_count = retry_count + 1 | ||
1835 | 386 | if retry_count > APT_NO_LOCK_RETRY_COUNT: | ||
1836 | 387 | raise | ||
1837 | 388 | result = e.returncode | ||
1838 | 389 | log("Couldn't acquire DPKG lock. Will retry in {} seconds." | ||
1839 | 390 | "".format(APT_NO_LOCK_RETRY_DELAY)) | ||
1840 | 391 | time.sleep(APT_NO_LOCK_RETRY_DELAY) | ||
1841 | 392 | |||
1842 | 393 | else: | ||
1843 | 394 | subprocess.call(cmd, env=env) | ||
1844 | 0 | 395 | ||
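The fatal path of `_run_apt_command` retries while the return code equals `APT_NO_LOCK` (100), giving up after `APT_NO_LOCK_RETRY_COUNT` attempts. The retry skeleton, with the subprocess call replaced by a hypothetical injected function and the sleep omitted:

```python
APT_NO_LOCK = 100
RETRY_COUNT = 3

def run_with_retry(attempt_fn):
    # attempt_fn returns 0 on success or APT_NO_LOCK while the lock is
    # held; it stands in for subprocess.check_call on the apt command.
    retries = 0
    result = None
    while result is None or result == APT_NO_LOCK:
        result = attempt_fn()
        if result == APT_NO_LOCK:
            retries += 1
            if retries > RETRY_COUNT:
                raise RuntimeError('could not acquire apt lock')
    return result

outcomes = iter([APT_NO_LOCK, APT_NO_LOCK, 0])
print(run_with_retry(lambda: next(outcomes)))  # 0
```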
1845 | === added file 'hooks/charmhelpers/fetch/archiveurl.py' | |||
1846 | --- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000 | |||
1847 | +++ hooks/charmhelpers/fetch/archiveurl.py 2014-09-09 05:13:17 +0000 | |||
1848 | @@ -0,0 +1,63 @@ | |||
1849 | 1 | import os | ||
1850 | 2 | import urllib2 | ||
1851 | 3 | import urlparse | ||
1852 | 4 | |||
1853 | 5 | from charmhelpers.fetch import ( | ||
1854 | 6 | BaseFetchHandler, | ||
1855 | 7 | UnhandledSource | ||
1856 | 8 | ) | ||
1857 | 9 | from charmhelpers.payload.archive import ( | ||
1858 | 10 | get_archive_handler, | ||
1859 | 11 | extract, | ||
1860 | 12 | ) | ||
1861 | 13 | from charmhelpers.core.host import mkdir | ||
1862 | 14 | |||
1863 | 15 | |||
1864 | 16 | class ArchiveUrlFetchHandler(BaseFetchHandler): | ||
1865 | 17 | """Handler for archives via generic URLs""" | ||
1866 | 18 | def can_handle(self, source): | ||
1867 | 19 | url_parts = self.parse_url(source) | ||
1868 | 20 | if url_parts.scheme not in ('http', 'https', 'ftp', 'file'): | ||
1869 | 21 | return "Wrong source type" | ||
1870 | 22 | if get_archive_handler(self.base_url(source)): | ||
1871 | 23 | return True | ||
1872 | 24 | return False | ||
1873 | 25 | |||
1874 | 26 | def download(self, source, dest): | ||
1875 | 27 | # propagate all exceptions | ||
1876 | 28 | # URLError, OSError, etc | ||
1877 | 29 | proto, netloc, path, params, query, fragment = urlparse.urlparse(source) | ||
1878 | 30 | if proto in ('http', 'https'): | ||
1879 | 31 | auth, barehost = urllib2.splituser(netloc) | ||
1880 | 32 | if auth is not None: | ||
1881 | 33 | source = urlparse.urlunparse((proto, barehost, path, params, query, fragment)) | ||
1882 | 34 | username, password = urllib2.splitpasswd(auth) | ||
1883 | 35 | passman = urllib2.HTTPPasswordMgrWithDefaultRealm() | ||
1884 | 36 | # Realm is set to None in add_password to force the username and password | ||
1885 | 37 | # to be used whatever the realm | ||
1886 | 38 | passman.add_password(None, source, username, password) | ||
1887 | 39 | authhandler = urllib2.HTTPBasicAuthHandler(passman) | ||
1888 | 40 | opener = urllib2.build_opener(authhandler) | ||
1889 | 41 | urllib2.install_opener(opener) | ||
1890 | 42 | response = urllib2.urlopen(source) | ||
1891 | 43 | try: | ||
1892 | 44 | with open(dest, 'w') as dest_file: | ||
1893 | 45 | dest_file.write(response.read()) | ||
1894 | 46 | except Exception as e: | ||
1895 | 47 | if os.path.isfile(dest): | ||
1896 | 48 | os.unlink(dest) | ||
1897 | 49 | raise e | ||
1898 | 50 | |||
1899 | 51 | def install(self, source): | ||
1900 | 52 | url_parts = self.parse_url(source) | ||
1901 | 53 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') | ||
1902 | 54 | if not os.path.exists(dest_dir): | ||
1903 | 55 | mkdir(dest_dir, perms=0755) | ||
1904 | 56 | dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) | ||
1905 | 57 | try: | ||
1906 | 58 | self.download(source, dld_file) | ||
1907 | 59 | except urllib2.URLError as e: | ||
1908 | 60 | raise UnhandledSource(e.reason) | ||
1909 | 61 | except OSError as e: | ||
1910 | 62 | raise UnhandledSource(e.strerror) | ||
1911 | 63 | return extract(dld_file) | ||
1912 | 0 | 64 | ||
1913 | === added file 'hooks/charmhelpers/fetch/bzrurl.py' | |||
1914 | --- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000 | |||
1915 | +++ hooks/charmhelpers/fetch/bzrurl.py 2014-09-09 05:13:17 +0000 | |||
1916 | @@ -0,0 +1,50 @@ | |||
1917 | 1 | import os | ||
1918 | 2 | from charmhelpers.fetch import ( | ||
1919 | 3 | BaseFetchHandler, | ||
1920 | 4 | UnhandledSource | ||
1921 | 5 | ) | ||
1922 | 6 | from charmhelpers.core.host import mkdir | ||
1923 | 7 | |||
1924 | 8 | try: | ||
1925 | 9 | from bzrlib.branch import Branch | ||
1926 | 10 | except ImportError: | ||
1927 | 11 | from charmhelpers.fetch import apt_install | ||
1928 | 12 | apt_install("python-bzrlib") | ||
1929 | 13 | from bzrlib.branch import Branch | ||
1930 | 14 | |||
1931 | 15 | |||
1932 | 16 | class BzrUrlFetchHandler(BaseFetchHandler): | ||
1933 | 17 | """Handler for bazaar branches via generic and lp URLs""" | ||
1934 | 18 | def can_handle(self, source): | ||
1935 | 19 | url_parts = self.parse_url(source) | ||
1936 | 20 | if url_parts.scheme not in ('bzr+ssh', 'lp'): | ||
1937 | 21 | return False | ||
1938 | 22 | else: | ||
1939 | 23 | return True | ||
1940 | 24 | |||
1941 | 25 | def branch(self, source, dest): | ||
1942 | 26 | url_parts = self.parse_url(source) | ||
1943 | 27 | # If we use lp:branchname scheme we need to load plugins | ||
1944 | 28 | if not self.can_handle(source): | ||
1945 | 29 | raise UnhandledSource("Cannot handle {}".format(source)) | ||
1946 | 30 | if url_parts.scheme == "lp": | ||
1947 | 31 | from bzrlib.plugin import load_plugins | ||
1948 | 32 | load_plugins() | ||
1949 | 33 | try: | ||
1950 | 34 | remote_branch = Branch.open(source) | ||
1951 | 35 | remote_branch.bzrdir.sprout(dest).open_branch() | ||
1952 | 36 | except Exception as e: | ||
1953 | 37 | raise e | ||
1954 | 38 | |||
1955 | 39 | def install(self, source): | ||
1956 | 40 | url_parts = self.parse_url(source) | ||
1957 | 41 | branch_name = url_parts.path.strip("/").split("/")[-1] | ||
1958 | 42 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", | ||
1959 | 43 | branch_name) | ||
1960 | 44 | if not os.path.exists(dest_dir): | ||
1961 | 45 | mkdir(dest_dir, perms=0755) | ||
1962 | 46 | try: | ||
1963 | 47 | self.branch(source, dest_dir) | ||
1964 | 48 | except OSError as e: | ||
1965 | 49 | raise UnhandledSource(e.strerror) | ||
1966 | 50 | return dest_dir | ||
1967 | 0 | 51 | ||
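`install` above derives the destination directory from the last path segment of the branch URL. That extraction, standalone (the try/except import keeps the sketch runnable on Python 3, while the charm itself uses Python 2's `urlparse`):

```python
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse  # Python 2, as used by the charm

def branch_name(url):
    # Last non-empty path component, used as the checkout directory name.
    return urlparse(url).path.strip('/').split('/')[-1]

print(branch_name('bzr+ssh://bazaar.launchpad.net/~user/project/trunk'))
# trunk
```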
1968 | === added directory 'hooks/storage-provider.d/ceph' | |||
1969 | === added file 'hooks/storage-provider.d/ceph/ceph-relation-changed' | |||
1970 | --- hooks/storage-provider.d/ceph/ceph-relation-changed 1970-01-01 00:00:00 +0000 | |||
1971 | +++ hooks/storage-provider.d/ceph/ceph-relation-changed 2014-09-09 05:13:17 +0000 | |||
1972 | @@ -0,0 +1,69 @@ | |||
1973 | 1 | #!/usr/bin/env python | ||
1974 | 2 | import sys | ||
1975 | 3 | |||
1976 | 4 | from ceph_common import ( | ||
1977 | 5 | SERVICE_NAME, | ||
1978 | 6 | POOL_NAME, | ||
1979 | 7 | TEMPORARY_MOUNT_POINT, | ||
1980 | 8 | RDB_IMG, | ||
1981 | 9 | BLK_DEVICE, | ||
1982 | 10 | ) | ||
1983 | 11 | |||
1984 | 12 | from charmhelpers.core.hookenv import ( | ||
1985 | 13 | relation_get, | ||
1986 | 14 | config, | ||
1987 | 15 | log, | ||
1988 | 16 | ) | ||
1989 | 17 | |||
1990 | 18 | from charmhelpers.contrib.storage.linux import ceph | ||
1991 | 19 | |||
1992 | 20 | """ | ||
1993 | 21 | This will mount a ceph image to a temporary location when the ceph relation | ||
1994 | 22 | is ready. | ||
1995 | 23 | |||
1996 | 24 | Later, when the data relation fires and a mountpoint is defined, the temporary | ||
1997 | 25 | mountpoint will be remounted. | ||
1998 | 26 | """ | ||
1999 | 27 | |||
2000 | 28 | |||
2001 | 29 | REPLICA_COUNT = 3 | ||
2002 | 30 | |||
2003 | 31 | |||
2004 | 32 | def main(): | ||
2005 | 33 | config_data = config() | ||
2006 | 34 | |||
2007 | 35 | # This shouldn't clash since the pool name should be a good enough | ||
2008 | 36 | # "namespace". | ||
2009 | 37 | |||
2010 | 38 | volume_size_gb = config_data.get("volume_size", None) # in GB. | ||
2011 | 39 | volume_size = volume_size_gb * 1024 | ||
2012 | 40 | |||
2013 | 41 | auth = relation_get("auth") | ||
2014 | 42 | key = relation_get("key") | ||
2015 | 43 | use_syslog = relation_get("use-syslog") | ||
2016 | 44 | |||
2017 | 45 | relation_data = [auth, key, use_syslog] | ||
2018 | 46 | |||
2019 | 47 | if None in relation_data: | ||
2020 | 48 | log("Not all relation data received: '%s'" % relation_data) | ||
2021 | 49 | sys.exit(0) | ||
2022 | 50 | |||
2023 | 51 | log("Configuring ceph client...") | ||
2024 | 52 | ceph.configure( | ||
2025 | 53 | service=SERVICE_NAME, key=key, auth=auth, use_syslog=use_syslog) | ||
2026 | 54 | |||
2027 | 55 | log("Mounting ceph image to temporary location '%s'" | ||
2028 | 56 | "" % TEMPORARY_MOUNT_POINT) | ||
2029 | 57 | ceph.ensure_ceph_storage( | ||
2030 | 58 | service=SERVICE_NAME, | ||
2031 | 59 | pool=POOL_NAME, | ||
2032 | 60 | rdb_img=RDB_IMG, | ||
2033 | 61 | sizemb=volume_size, | ||
2034 | 62 | fstype="ext4", | ||
2035 | 63 | mount_point=TEMPORARY_MOUNT_POINT, | ||
2036 | 64 | blk_device=BLK_DEVICE, | ||
2037 | 65 | rdb_pool_replicas=REPLICA_COUNT) | ||
2038 | 66 | |||
2039 | 67 | |||
2040 | 68 | if __name__ == "__main__": | ||
2041 | 69 | main() | ||
2042 | 0 | 70 | ||
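The hook exits quietly until all three relation settings have arrived, which is what makes it safe to run on every relation event. The guard, standalone (`check_relation` is an illustrative name and the relation values are made up):

```python
def check_relation(auth, key, use_syslog):
    # Any missing value means the ceph relation is not complete yet,
    # so the hook should no-op and wait for the next event.
    relation_data = [auth, key, use_syslog]
    return None not in relation_data

print(check_relation('cephx', 'secret-key', 'false'))  # True
print(check_relation('cephx', None, 'false'))          # False
```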
2043 | === added file 'hooks/storage-provider.d/ceph/config-changed' | |||
2044 | --- hooks/storage-provider.d/ceph/config-changed 1970-01-01 00:00:00 +0000 | |||
2045 | +++ hooks/storage-provider.d/ceph/config-changed 2014-09-09 05:13:17 +0000 | |||
2046 | @@ -0,0 +1,6 @@ | |||
2047 | 1 | #!/usr/bin/env python | ||
2048 | 2 | from charmhelpers.contrib.storage.linux.ceph import ( | ||
2049 | 3 | install, | ||
2050 | 4 | ) | ||
2051 | 5 | |||
2052 | 6 | install() | ||
2053 | 0 | 7 | ||
=== added file 'hooks/storage-provider.d/ceph/data-relation-changed'
--- hooks/storage-provider.d/ceph/data-relation-changed	1970-01-01 00:00:00 +0000
+++ hooks/storage-provider.d/ceph/data-relation-changed	2014-09-09 05:13:17 +0000
@@ -0,0 +1,45 @@
#!/usr/bin/env python
import sys
from ceph_common import (
    TEMPORARY_MOUNT_POINT,
    BLK_DEVICE,
)
from charmhelpers.core.hookenv import (
    relation_get,
    relation_set,
    log,
)

from charmhelpers.core.host import (
    mount,
    mounts,
    #umount,
)


def main():

    # Noop until mountpoint is set.
    mountpoint = relation_get("mountpoint")
    if mountpoint is None:
        log("No mountpoint received from the data relation. Noop.")
        sys.exit(0)  # No mountpoint requested yet - noop

    # If the mountpoint is set, let's remount the temporary folder.
    actual_mounts = mounts()
    log("Getting mounts: %s" % actual_mounts)
    mountpoints = [point[0] for point in actual_mounts]  # returns a list of 2-lists

    if TEMPORARY_MOUNT_POINT in mountpoints:
        log("Found ceph block device, remounting to '%s'" % mountpoint)
        # The ceph image is ready and mounted - let's remount it to the
        # desired mountpoint.
        mount(BLK_DEVICE, mountpoint)
        log("Mountpoint ready, notifying data relation.")
        relation_set("mountpoint", mountpoint)
    else:
        log("Temporary mountpoint not found (yet?). Noop.")
        sys.exit(0)


if __name__ == "__main__":
    main()
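The data-relation-changed hook above keys its remount decision off the list returned by charmhelpers' `mounts()`. A minimal sketch of that guard, with `mounts()` stubbed out — the return shape ([mountpoint, device] pairs) is inferred from the hook's own "list of 2-lists" comment, and the paths used here are hypothetical:

```python
# Sketch of the remount guard in data-relation-changed.
# Assumptions: mounts() returns a list of [mountpoint, device] pairs,
# and the mount paths below are invented for illustration.
TEMPORARY_MOUNT_POINT = "/mnt/ceph-temp"  # hypothetical value from ceph_common

def mounts():
    # Stub standing in for charmhelpers.core.host.mounts().
    return [["/", "/dev/sda1"], [TEMPORARY_MOUNT_POINT, "/dev/rbd1"]]

mountpoints = [point[0] for point in mounts()]
ready = TEMPORARY_MOUNT_POINT in mountpoints
print(ready)  # True once the ceph image is mounted at the temporary point
```

Until the temporary mount appears in that list, the hook exits quietly and waits to be re-triggered, which is why it is safe to run repeatedly.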
=== modified file 'metadata.yaml'
--- metadata.yaml	2014-02-10 16:58:17 +0000
+++ metadata.yaml	2014-09-09 05:13:17 +0000
@@ -18,3 +18,5 @@
     interface: mount
   block-storage:
     interface: volume-request
+  ceph:
+    interface: ceph-client
Chris, just on first glance it seems you are missing the ceph-relation-changed hook symlink in the hooks directory. Those hooks need to exist as symlinks to hooks/hooks, which acts as a proxy for calls to the underlying storage-provider.d/<provider_name>/your-hook-name. If the symlink doesn't exist at hooks/ceph-relation-changed, you should see a "no ceph-relation-changed hook" message in the juju logs.
I'll take a closer look at this with a live deployment.
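The fix the reviewer describes can be sketched as follows. This is a demonstration in a throwaway directory, not the real charm tree; the hooks/hooks dispatcher name and the proxying behaviour are taken from the review comment above:

```python
# Demonstration of the missing hook symlink, done in a temporary directory.
# Assumption (from the review comment): hooks/hooks is the shared dispatcher
# that proxies to storage-provider.d/<provider_name>/<hook-name>.
import os
import tempfile

charm_root = tempfile.mkdtemp()  # stand-in for the charm checkout
hooks_dir = os.path.join(charm_root, "hooks")
os.makedirs(hooks_dir)

# Stand-in for the shared dispatcher script.
open(os.path.join(hooks_dir, "hooks"), "w").close()

# The symlink juju will invoke when the ceph relation changes; it points
# at the dispatcher, which then dispatches by the name it was called as.
os.symlink("hooks", os.path.join(hooks_dir, "ceph-relation-changed"))
print(os.path.islink(os.path.join(hooks_dir, "ceph-relation-changed")))  # True
```

In the actual branch the symlink would be committed with `bzr add`, matching commit 44 in the revision list above.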