Merge lp:~tribaal/charms/precise/storage/add-ceph-storage-provider into lp:charms/storage

Proposed by Chris Glass
Status: Work in progress
Proposed branch: lp:~tribaal/charms/precise/storage/add-ceph-storage-provider
Merge into: lp:charms/storage
Diff against target: 2112 lines (+1834/-27)
19 files modified
.bzrignore (+1/-0)
Makefile (+9/-9)
charm-helpers.yaml (+1/-0)
hooks/ceph_common.py (+12/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+387/-0)
hooks/charmhelpers/core/fstab.py (+116/-0)
hooks/charmhelpers/core/hookenv.py (+111/-6)
hooks/charmhelpers/core/host.py (+85/-12)
hooks/charmhelpers/core/services/__init__.py (+2/-0)
hooks/charmhelpers/core/services/base.py (+305/-0)
hooks/charmhelpers/core/services/helpers.py (+125/-0)
hooks/charmhelpers/core/templating.py (+51/-0)
hooks/charmhelpers/fetch/__init__.py (+394/-0)
hooks/charmhelpers/fetch/archiveurl.py (+63/-0)
hooks/charmhelpers/fetch/bzrurl.py (+50/-0)
hooks/storage-provider.d/ceph/ceph-relation-changed (+69/-0)
hooks/storage-provider.d/ceph/config-changed (+6/-0)
hooks/storage-provider.d/ceph/data-relation-changed (+45/-0)
metadata.yaml (+2/-0)
To merge this branch: bzr merge lp:~tribaal/charms/precise/storage/add-ceph-storage-provider
Reviewer Review Type Date Requested Status
David Britton (community) Needs Information
Review via email: mp+232239@code.launchpad.net

Description of the change

This branch adds basic support for a "ceph" storage provider that maps and mounts a Ceph RBD image to use as storage.

It is currently rough (not yet working 100%), but I'd like to have eyes on the code before I go any further with it, since I don't understand exactly what's wrong with my approach.

Revision history for this message
Chad Smith (chad.smith) wrote :

Chris, just on first glance it seems you are missing the ceph-relation-changed hook symlink in the hooks directory. Each hook needs to be a symlink to hooks/hooks, which acts as a proxy that dispatches calls to the underlying storage-provider.d/<provider_name>/<your-hook-name>. If the symlink doesn't exist at hooks/ceph-relation-changed, you should see a "no ceph-relation-changed hook" message in the juju logs.
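For reference, a minimal sketch of the kind of dispatch the hooks/hooks proxy performs (this is illustrative, not the charm's actual code; the helper name and paths are assumptions):

```python
import os
import subprocess


def dispatch(hook_path, provider="ceph"):
    """Proxy a hook invocation (e.g. hooks/ceph-relation-changed, a
    symlink to hooks/hooks) to the per-provider implementation under
    storage-provider.d/<provider>/<hook-name>. Illustrative sketch only."""
    hook_name = os.path.basename(hook_path)  # e.g. "ceph-relation-changed"
    target = os.path.join(os.path.dirname(hook_path),
                          "storage-provider.d", provider, hook_name)
    if not os.path.exists(target):
        # No implementation for this provider/hook pair: nothing to do.
        return 0
    return subprocess.call([target])
```

The point being: juju only sees the symlinks in hooks/, so without hooks/ceph-relation-changed the proxy is never invoked for that event.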

I'll take a closer look at this with a live deployment.

44. By Chris Glass

Added ceph-relation-changed hook symlink.

45. By Chris Glass

Make the ceph-relation-changed hook noop if all the necessary information is
not present (like the key).

46. By Chris Glass

added relation debug output

Revision history for this message
David Britton (dpb) wrote :

Hi Chris -- Thanks so much for doing this!

The general approach looks great. Can you implement the data-relation-departed hook as well? I noticed a commented-out 'unmount' import in the data-relation-changed hook too; I thought maybe you were planning to do that when you stopped for a quick sanity check.

Before I put it through its paces, could you include a sample deployment for working with ceph, at least here in the review, so we can compare apples to apples?

I also added a small nit as a diff comment.

This change is quite exciting; I'm glad so much of it is contained in charmhelpers. :)

review: Needs Information
Revision history for this message
Chris Glass (tribaal) wrote :

Yes, I started working on the unmount logic, but then realised this branch was already 2k lines, so I decided against putting even more code in.

I'll add it in here, no problem.

47. By Chris Glass

Added exit(0)!

Unmerged revisions

47. By Chris Glass

Added exit(0)!

46. By Chris Glass

added relation debug output

45. By Chris Glass

Make the ceph-relation-changed hook noop if all the necessary information is
not present (like the key).

44. By Chris Glass

Added ceph-relation-changed hook symlink.

43. By Chris Glass

Added some debug logs...

42. By Chris Glass

Added logs to the data-relation-changed hook

41. By Chris Glass

Put the common ceph code in a proper python module.

40. By Chris Glass

Made hook executable. Duh.

39. By Chris Glass

Update charm-helpers along with the make file to sync them.

38. By Chris Glass

The hooks should work

Preview Diff

1=== modified file '.bzrignore'
2--- .bzrignore 2014-01-31 23:49:37 +0000
3+++ .bzrignore 2014-09-09 05:13:17 +0000
4@@ -1,2 +1,3 @@
5 _trial_temp
6 charm-helpers
7+bin/
8
9=== modified file 'Makefile'
10--- Makefile 2014-02-10 22:52:50 +0000
11+++ Makefile 2014-09-09 05:13:17 +0000
12@@ -1,4 +1,6 @@
13+#!/usr/bin/make
14 .PHONY: test lint clean
15+PYTHON := /usr/bin/env python
16
17 clean:
18 find . -name *.pyc | xargs rm
19@@ -11,12 +13,10 @@
20 # flake any python scripts not named *.py
21 fgrep -r bin/python hooks/ | awk -F : '{print $$1}' | xargs -r -n1 flake8
22
23-
24-update-charm-helpers:
25- # Pull latest charm-helpers branch and sync the components based on our
26- $ charm-helpers.yaml
27- rm -rf charm-helpers
28- bzr co lp:charm-helpers
29- ./charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py -c charm-helpers.yaml
30- rm -rf charm-helpers
31-
32+bin/charm_helpers_sync.py:
33+ @mkdir -p bin
34+ @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
35+ > bin/charm_helpers_sync.py
36+
37+sync: bin/charm_helpers_sync.py
38+ @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
39
40=== modified file 'charm-helpers.yaml'
41--- charm-helpers.yaml 2014-01-31 23:49:37 +0000
42+++ charm-helpers.yaml 2014-09-09 05:13:17 +0000
43@@ -3,3 +3,4 @@
44 include:
45 - core
46 - fetch
47+ - contrib.storage.linux.ceph
48
49=== added symlink 'hooks/ceph-relation-changed'
50=== target is u'hooks'
51=== added file 'hooks/ceph_common.py'
52--- hooks/ceph_common.py 1970-01-01 00:00:00 +0000
53+++ hooks/ceph_common.py 2014-09-09 05:13:17 +0000
54@@ -0,0 +1,12 @@
55+
56+from charmhelpers.core.hookenv import (
57+ service_name,
58+)
59+
60+SERVICE_NAME = service_name() # TODO: Make sure it's the unit - not "storage"
61+TEMPORARY_MOUNT_POINT = "/mnt/temp-ceph/%s" % SERVICE_NAME
62+POOL_NAME = SERVICE_NAME
63+
64+
65+RDB_IMG = "storage"
66+BLK_DEVICE = '/dev/rbd/%s/%s' % (POOL_NAME, RDB_IMG)
67
68=== added directory 'hooks/charmhelpers/contrib'
69=== added file 'hooks/charmhelpers/contrib/__init__.py'
70=== added directory 'hooks/charmhelpers/contrib/storage'
71=== added file 'hooks/charmhelpers/contrib/storage/__init__.py'
72=== added directory 'hooks/charmhelpers/contrib/storage/linux'
73=== added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py'
74=== added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
75--- hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000
76+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-09-09 05:13:17 +0000
77@@ -0,0 +1,387 @@
78+#
79+# Copyright 2012 Canonical Ltd.
80+#
81+# This file is sourced from lp:openstack-charm-helpers
82+#
83+# Authors:
84+# James Page <james.page@ubuntu.com>
85+# Adam Gandelman <adamg@ubuntu.com>
86+#
87+
88+import os
89+import shutil
90+import json
91+import time
92+
93+from subprocess import (
94+ check_call,
95+ check_output,
96+ CalledProcessError
97+)
98+
99+from charmhelpers.core.hookenv import (
100+ relation_get,
101+ relation_ids,
102+ related_units,
103+ log,
104+ INFO,
105+ WARNING,
106+ ERROR
107+)
108+
109+from charmhelpers.core.host import (
110+ mount,
111+ mounts,
112+ service_start,
113+ service_stop,
114+ service_running,
115+ umount,
116+)
117+
118+from charmhelpers.fetch import (
119+ apt_install,
120+)
121+
122+KEYRING = '/etc/ceph/ceph.client.{}.keyring'
123+KEYFILE = '/etc/ceph/ceph.client.{}.key'
124+
125+CEPH_CONF = """[global]
126+ auth supported = {auth}
127+ keyring = {keyring}
128+ mon host = {mon_hosts}
129+ log to syslog = {use_syslog}
130+ err to syslog = {use_syslog}
131+ clog to syslog = {use_syslog}
132+"""
133+
134+
135+def install():
136+ ''' Basic Ceph client installation '''
137+ ceph_dir = "/etc/ceph"
138+ if not os.path.exists(ceph_dir):
139+ os.mkdir(ceph_dir)
140+ apt_install('ceph-common', fatal=True)
141+
142+
143+def rbd_exists(service, pool, rbd_img):
144+ ''' Check to see if a RADOS block device exists '''
145+ try:
146+ out = check_output(['rbd', 'list', '--id', service,
147+ '--pool', pool])
148+ except CalledProcessError:
149+ return False
150+ else:
151+ return rbd_img in out
152+
153+
154+def create_rbd_image(service, pool, image, sizemb):
155+ ''' Create a new RADOS block device '''
156+ cmd = [
157+ 'rbd',
158+ 'create',
159+ image,
160+ '--size',
161+ str(sizemb),
162+ '--id',
163+ service,
164+ '--pool',
165+ pool
166+ ]
167+ check_call(cmd)
168+
169+
170+def pool_exists(service, name):
171+ ''' Check to see if a RADOS pool already exists '''
172+ try:
173+ out = check_output(['rados', '--id', service, 'lspools'])
174+ except CalledProcessError:
175+ return False
176+ else:
177+ return name in out
178+
179+
180+def get_osds(service):
181+ '''
182+ Return a list of all Ceph Object Storage Daemons
183+ currently in the cluster
184+ '''
185+ version = ceph_version()
186+ if version and version >= '0.56':
187+ return json.loads(check_output(['ceph', '--id', service,
188+ 'osd', 'ls', '--format=json']))
189+ else:
190+ return None
191+
192+
193+def create_pool(service, name, replicas=2):
194+ ''' Create a new RADOS pool '''
195+ if pool_exists(service, name):
196+ log("Ceph pool {} already exists, skipping creation".format(name),
197+ level=WARNING)
198+ return
199+ # Calculate the number of placement groups based
200+ # on upstream recommended best practices.
201+ osds = get_osds(service)
202+ if osds:
203+ pgnum = (len(osds) * 100 / replicas)
204+ else:
205+ # NOTE(james-page): Default to 200 for older ceph versions
206+ # which don't support OSD query from cli
207+ pgnum = 200
208+ cmd = [
209+ 'ceph', '--id', service,
210+ 'osd', 'pool', 'create',
211+ name, str(pgnum)
212+ ]
213+ check_call(cmd)
214+ cmd = [
215+ 'ceph', '--id', service,
216+ 'osd', 'pool', 'set', name,
217+ 'size', str(replicas)
218+ ]
219+ check_call(cmd)
220+
221+
222+def delete_pool(service, name):
223+ ''' Delete a RADOS pool from ceph '''
224+ cmd = [
225+ 'ceph', '--id', service,
226+ 'osd', 'pool', 'delete',
227+ name, '--yes-i-really-really-mean-it'
228+ ]
229+ check_call(cmd)
230+
231+
232+def _keyfile_path(service):
233+ return KEYFILE.format(service)
234+
235+
236+def _keyring_path(service):
237+ return KEYRING.format(service)
238+
239+
240+def create_keyring(service, key):
241+ ''' Create a new Ceph keyring containing key'''
242+ keyring = _keyring_path(service)
243+ if os.path.exists(keyring):
244+ log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
245+ return
246+ cmd = [
247+ 'ceph-authtool',
248+ keyring,
249+ '--create-keyring',
250+ '--name=client.{}'.format(service),
251+ '--add-key={}'.format(key)
252+ ]
253+ check_call(cmd)
254+ log('ceph: Created new ring at %s.' % keyring, level=INFO)
255+
256+
257+def create_key_file(service, key):
258+ ''' Create a file containing key '''
259+ keyfile = _keyfile_path(service)
260+ if os.path.exists(keyfile):
261+ log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
262+ return
263+ with open(keyfile, 'w') as fd:
264+ fd.write(key)
265+ log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
266+
267+
268+def get_ceph_nodes():
269+ ''' Query named relation 'ceph' to detemine current nodes '''
270+ hosts = []
271+ for r_id in relation_ids('ceph'):
272+ for unit in related_units(r_id):
273+ hosts.append(relation_get('private-address', unit=unit, rid=r_id))
274+ return hosts
275+
276+
277+def configure(service, key, auth, use_syslog):
278+ ''' Perform basic configuration of Ceph '''
279+ create_keyring(service, key)
280+ create_key_file(service, key)
281+ hosts = get_ceph_nodes()
282+ with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
283+ ceph_conf.write(CEPH_CONF.format(auth=auth,
284+ keyring=_keyring_path(service),
285+ mon_hosts=",".join(map(str, hosts)),
286+ use_syslog=use_syslog))
287+ modprobe('rbd')
288+
289+
290+def image_mapped(name):
291+ ''' Determine whether a RADOS block device is mapped locally '''
292+ try:
293+ out = check_output(['rbd', 'showmapped'])
294+ except CalledProcessError:
295+ return False
296+ else:
297+ return name in out
298+
299+
300+def map_block_storage(service, pool, image):
301+ ''' Map a RADOS block device for local use '''
302+ cmd = [
303+ 'rbd',
304+ 'map',
305+ '{}/{}'.format(pool, image),
306+ '--user',
307+ service,
308+ '--secret',
309+ _keyfile_path(service),
310+ ]
311+ check_call(cmd)
312+
313+
314+def filesystem_mounted(fs):
315+ ''' Determine whether a filesytems is already mounted '''
316+ return fs in [f for f, m in mounts()]
317+
318+
319+def make_filesystem(blk_device, fstype='ext4', timeout=10):
320+ ''' Make a new filesystem on the specified block device '''
321+ count = 0
322+ e_noent = os.errno.ENOENT
323+ while not os.path.exists(blk_device):
324+ if count >= timeout:
325+ log('ceph: gave up waiting on block device %s' % blk_device,
326+ level=ERROR)
327+ raise IOError(e_noent, os.strerror(e_noent), blk_device)
328+ log('ceph: waiting for block device %s to appear' % blk_device,
329+ level=INFO)
330+ count += 1
331+ time.sleep(1)
332+ else:
333+ log('ceph: Formatting block device %s as filesystem %s.' %
334+ (blk_device, fstype), level=INFO)
335+ check_call(['mkfs', '-t', fstype, blk_device])
336+
337+
338+def place_data_on_block_device(blk_device, data_src_dst):
339+ ''' Migrate data in data_src_dst to blk_device and then remount '''
340+ # mount block device into /mnt
341+ mount(blk_device, '/mnt')
342+ # copy data to /mnt
343+ copy_files(data_src_dst, '/mnt')
344+ # umount block device
345+ umount('/mnt')
346+ # Grab user/group ID's from original source
347+ _dir = os.stat(data_src_dst)
348+ uid = _dir.st_uid
349+ gid = _dir.st_gid
350+ # re-mount where the data should originally be
351+ # TODO: persist is currently a NO-OP in core.host
352+ mount(blk_device, data_src_dst, persist=True)
353+ # ensure original ownership of new mount.
354+ os.chown(data_src_dst, uid, gid)
355+
356+
357+# TODO: re-use
358+def modprobe(module):
359+ ''' Load a kernel module and configure for auto-load on reboot '''
360+ log('ceph: Loading kernel module', level=INFO)
361+ cmd = ['modprobe', module]
362+ check_call(cmd)
363+ with open('/etc/modules', 'r+') as modules:
364+ if module not in modules.read():
365+ modules.write(module)
366+
367+
368+def copy_files(src, dst, symlinks=False, ignore=None):
369+ ''' Copy files from src to dst '''
370+ for item in os.listdir(src):
371+ s = os.path.join(src, item)
372+ d = os.path.join(dst, item)
373+ if os.path.isdir(s):
374+ shutil.copytree(s, d, symlinks, ignore)
375+ else:
376+ shutil.copy2(s, d)
377+
378+
379+def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
380+ blk_device, fstype, system_services=[]):
381+ """
382+ NOTE: This function must only be called from a single service unit for
383+ the same rbd_img otherwise data loss will occur.
384+
385+ Ensures given pool and RBD image exists, is mapped to a block device,
386+ and the device is formatted and mounted at the given mount_point.
387+
388+ If formatting a device for the first time, data existing at mount_point
389+ will be migrated to the RBD device before being re-mounted.
390+
391+ All services listed in system_services will be stopped prior to data
392+ migration and restarted when complete.
393+ """
394+ # Ensure pool, RBD image, RBD mappings are in place.
395+ if not pool_exists(service, pool):
396+ log('ceph: Creating new pool {}.'.format(pool))
397+ create_pool(service, pool)
398+
399+ if not rbd_exists(service, pool, rbd_img):
400+ log('ceph: Creating RBD image ({}).'.format(rbd_img))
401+ create_rbd_image(service, pool, rbd_img, sizemb)
402+
403+ if not image_mapped(rbd_img):
404+ log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
405+ map_block_storage(service, pool, rbd_img)
406+
407+ # make file system
408+ # TODO: What happens if for whatever reason this is run again and
409+ # the data is already in the rbd device and/or is mounted??
410+ # When it is mounted already, it will fail to make the fs
411+ # XXX: This is really sketchy! Need to at least add an fstab entry
412+ # otherwise this hook will blow away existing data if its executed
413+ # after a reboot.
414+ if not filesystem_mounted(mount_point):
415+ make_filesystem(blk_device, fstype)
416+
417+ for svc in system_services:
418+ if service_running(svc):
419+ log('ceph: Stopping services {} prior to migrating data.'
420+ .format(svc))
421+ service_stop(svc)
422+
423+ place_data_on_block_device(blk_device, mount_point)
424+
425+ for svc in system_services:
426+ log('ceph: Starting service {} after migrating data.'
427+ .format(svc))
428+ service_start(svc)
429+
430+
431+def ensure_ceph_keyring(service, user=None, group=None):
432+ '''
433+ Ensures a ceph keyring is created for a named service
434+ and optionally ensures user and group ownership.
435+
436+ Returns False if no ceph key is available in relation state.
437+ '''
438+ key = None
439+ for rid in relation_ids('ceph'):
440+ for unit in related_units(rid):
441+ key = relation_get('key', rid=rid, unit=unit)
442+ if key:
443+ break
444+ if not key:
445+ return False
446+ create_keyring(service=service, key=key)
447+ keyring = _keyring_path(service)
448+ if user and group:
449+ check_call(['chown', '%s.%s' % (user, group), keyring])
450+ return True
451+
452+
453+def ceph_version():
454+ ''' Retrieve the local version of ceph '''
455+ if os.path.exists('/usr/bin/ceph'):
456+ cmd = ['ceph', '-v']
457+ output = check_output(cmd)
458+ output = output.split()
459+ if len(output) > 3:
460+ return output[2]
461+ else:
462+ return None
463+ else:
464+ return None
465
466=== added file 'hooks/charmhelpers/core/fstab.py'
467--- hooks/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000
468+++ hooks/charmhelpers/core/fstab.py 2014-09-09 05:13:17 +0000
469@@ -0,0 +1,116 @@
470+#!/usr/bin/env python
471+# -*- coding: utf-8 -*-
472+
473+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
474+
475+import os
476+
477+
478+class Fstab(file):
479+ """This class extends file in order to implement a file reader/writer
480+ for file `/etc/fstab`
481+ """
482+
483+ class Entry(object):
484+ """Entry class represents a non-comment line on the `/etc/fstab` file
485+ """
486+ def __init__(self, device, mountpoint, filesystem,
487+ options, d=0, p=0):
488+ self.device = device
489+ self.mountpoint = mountpoint
490+ self.filesystem = filesystem
491+
492+ if not options:
493+ options = "defaults"
494+
495+ self.options = options
496+ self.d = d
497+ self.p = p
498+
499+ def __eq__(self, o):
500+ return str(self) == str(o)
501+
502+ def __str__(self):
503+ return "{} {} {} {} {} {}".format(self.device,
504+ self.mountpoint,
505+ self.filesystem,
506+ self.options,
507+ self.d,
508+ self.p)
509+
510+ DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
511+
512+ def __init__(self, path=None):
513+ if path:
514+ self._path = path
515+ else:
516+ self._path = self.DEFAULT_PATH
517+ file.__init__(self, self._path, 'r+')
518+
519+ def _hydrate_entry(self, line):
520+ # NOTE: use split with no arguments to split on any
521+ # whitespace including tabs
522+ return Fstab.Entry(*filter(
523+ lambda x: x not in ('', None),
524+ line.strip("\n").split()))
525+
526+ @property
527+ def entries(self):
528+ self.seek(0)
529+ for line in self.readlines():
530+ try:
531+ if not line.startswith("#"):
532+ yield self._hydrate_entry(line)
533+ except ValueError:
534+ pass
535+
536+ def get_entry_by_attr(self, attr, value):
537+ for entry in self.entries:
538+ e_attr = getattr(entry, attr)
539+ if e_attr == value:
540+ return entry
541+ return None
542+
543+ def add_entry(self, entry):
544+ if self.get_entry_by_attr('device', entry.device):
545+ return False
546+
547+ self.write(str(entry) + '\n')
548+ self.truncate()
549+ return entry
550+
551+ def remove_entry(self, entry):
552+ self.seek(0)
553+
554+ lines = self.readlines()
555+
556+ found = False
557+ for index, line in enumerate(lines):
558+ if not line.startswith("#"):
559+ if self._hydrate_entry(line) == entry:
560+ found = True
561+ break
562+
563+ if not found:
564+ return False
565+
566+ lines.remove(line)
567+
568+ self.seek(0)
569+ self.write(''.join(lines))
570+ self.truncate()
571+ return True
572+
573+ @classmethod
574+ def remove_by_mountpoint(cls, mountpoint, path=None):
575+ fstab = cls(path=path)
576+ entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
577+ if entry:
578+ return fstab.remove_entry(entry)
579+ return False
580+
581+ @classmethod
582+ def add(cls, device, mountpoint, filesystem, options=None, path=None):
583+ return cls(path=path).add_entry(Fstab.Entry(device,
584+ mountpoint, filesystem,
585+ options=options))
586
587=== modified file 'hooks/charmhelpers/core/hookenv.py'
588--- hooks/charmhelpers/core/hookenv.py 2014-01-14 21:55:40 +0000
589+++ hooks/charmhelpers/core/hookenv.py 2014-09-09 05:13:17 +0000
590@@ -8,6 +8,7 @@
591 import json
592 import yaml
593 import subprocess
594+import sys
595 import UserDict
596 from subprocess import CalledProcessError
597
598@@ -24,7 +25,7 @@
599 def cached(func):
600 """Cache return values for multiple executions of func + args
601
602- For example:
603+ For example::
604
605 @cached
606 def unit_get(attribute):
607@@ -149,6 +150,105 @@
608 return local_unit().split('/')[0]
609
610
611+def hook_name():
612+ """The name of the currently executing hook"""
613+ return os.path.basename(sys.argv[0])
614+
615+
616+class Config(dict):
617+ """A Juju charm config dictionary that can write itself to
618+ disk (as json) and track which values have changed since
619+ the previous hook invocation.
620+
621+ Do not instantiate this object directly - instead call
622+ ``hookenv.config()``
623+
624+ Example usage::
625+
626+ >>> # inside a hook
627+ >>> from charmhelpers.core import hookenv
628+ >>> config = hookenv.config()
629+ >>> config['foo']
630+ 'bar'
631+ >>> config['mykey'] = 'myval'
632+ >>> config.save()
633+
634+
635+ >>> # user runs `juju set mycharm foo=baz`
636+ >>> # now we're inside subsequent config-changed hook
637+ >>> config = hookenv.config()
638+ >>> config['foo']
639+ 'baz'
640+ >>> # test to see if this val has changed since last hook
641+ >>> config.changed('foo')
642+ True
643+ >>> # what was the previous value?
644+ >>> config.previous('foo')
645+ 'bar'
646+ >>> # keys/values that we add are preserved across hooks
647+ >>> config['mykey']
648+ 'myval'
649+ >>> # don't forget to save at the end of hook!
650+ >>> config.save()
651+
652+ """
653+ CONFIG_FILE_NAME = '.juju-persistent-config'
654+
655+ def __init__(self, *args, **kw):
656+ super(Config, self).__init__(*args, **kw)
657+ self._prev_dict = None
658+ self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
659+ if os.path.exists(self.path):
660+ self.load_previous()
661+
662+ def load_previous(self, path=None):
663+ """Load previous copy of config from disk so that current values
664+ can be compared to previous values.
665+
666+ :param path:
667+
668+ File path from which to load the previous config. If `None`,
669+ config is loaded from the default location. If `path` is
670+ specified, subsequent `save()` calls will write to the same
671+ path.
672+
673+ """
674+ self.path = path or self.path
675+ with open(self.path) as f:
676+ self._prev_dict = json.load(f)
677+
678+ def changed(self, key):
679+ """Return true if the value for this key has changed since
680+ the last save.
681+
682+ """
683+ if self._prev_dict is None:
684+ return True
685+ return self.previous(key) != self.get(key)
686+
687+ def previous(self, key):
688+ """Return previous value for this key, or None if there
689+ is no "previous" value.
690+
691+ """
692+ if self._prev_dict:
693+ return self._prev_dict.get(key)
694+ return None
695+
696+ def save(self):
697+ """Save this config to disk.
698+
699+ Preserves items in _prev_dict that do not exist in self.
700+
701+ """
702+ if self._prev_dict:
703+ for k, v in self._prev_dict.iteritems():
704+ if k not in self:
705+ self[k] = v
706+ with open(self.path, 'w') as f:
707+ json.dump(self, f)
708+
709+
710 @cached
711 def config(scope=None):
712 """Juju charm configuration"""
713@@ -157,7 +257,10 @@
714 config_cmd_line.append(scope)
715 config_cmd_line.append('--format=json')
716 try:
717- return json.loads(subprocess.check_output(config_cmd_line))
718+ config_data = json.loads(subprocess.check_output(config_cmd_line))
719+ if scope is not None:
720+ return config_data
721+ return Config(config_data)
722 except ValueError:
723 return None
724
725@@ -182,8 +285,9 @@
726 raise
727
728
729-def relation_set(relation_id=None, relation_settings={}, **kwargs):
730+def relation_set(relation_id=None, relation_settings=None, **kwargs):
731 """Set relation information for the current unit"""
732+ relation_settings = relation_settings if relation_settings else {}
733 relation_cmd_line = ['relation-set']
734 if relation_id is not None:
735 relation_cmd_line.extend(('-r', relation_id))
736@@ -342,18 +446,19 @@
737 class Hooks(object):
738 """A convenient handler for hook functions.
739
740- Example:
741+ Example::
742+
743 hooks = Hooks()
744
745 # register a hook, taking its name from the function name
746 @hooks.hook()
747 def install():
748- ...
749+ pass # your code here
750
751 # register a hook, providing a custom hook name
752 @hooks.hook("config-changed")
753 def config_changed():
754- ...
755+ pass # your code here
756
757 if __name__ == "__main__":
758 # execute a hook based on the name the program is called by
759
760=== modified file 'hooks/charmhelpers/core/host.py'
761--- hooks/charmhelpers/core/host.py 2014-01-14 21:55:40 +0000
762+++ hooks/charmhelpers/core/host.py 2014-09-09 05:13:17 +0000
763@@ -12,10 +12,13 @@
764 import string
765 import subprocess
766 import hashlib
767+import shutil
768+from contextlib import contextmanager
769
770 from collections import OrderedDict
771
772 from hookenv import log
773+from fstab import Fstab
774
775
776 def service_start(service_name):
777@@ -34,7 +37,8 @@
778
779
780 def service_reload(service_name, restart_on_failure=False):
781- """Reload a system service, optionally falling back to restart if reload fails"""
782+ """Reload a system service, optionally falling back to restart if
783+ reload fails"""
784 service_result = service('reload', service_name)
785 if not service_result and restart_on_failure:
786 service_result = service('restart', service_name)
787@@ -50,7 +54,7 @@
788 def service_running(service):
789 """Determine whether a system service is running"""
790 try:
791- output = subprocess.check_output(['service', service, 'status'])
792+ output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
793 except subprocess.CalledProcessError:
794 return False
795 else:
796@@ -60,6 +64,16 @@
797 return False
798
799
800+def service_available(service_name):
801+ """Determine whether a system service is available"""
802+ try:
803+ subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
804+ except subprocess.CalledProcessError:
805+ return False
806+ else:
807+ return True
808+
809+
810 def adduser(username, password=None, shell='/bin/bash', system_user=False):
811 """Add a user to the system"""
812 try:
813@@ -143,7 +157,19 @@
814 target.write(content)
815
816
817-def mount(device, mountpoint, options=None, persist=False):
818+def fstab_remove(mp):
819+ """Remove the given mountpoint entry from /etc/fstab
820+ """
821+ return Fstab.remove_by_mountpoint(mp)
822+
823+
824+def fstab_add(dev, mp, fs, options=None):
825+ """Adds the given device entry to the /etc/fstab file
826+ """
827+ return Fstab.add(dev, mp, fs, options=options)
828+
829+
830+def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
831 """Mount a filesystem at a particular mountpoint"""
832 cmd_args = ['mount']
833 if options is not None:
834@@ -154,9 +180,9 @@
835 except subprocess.CalledProcessError, e:
836 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
837 return False
838+
839 if persist:
840- # TODO: update fstab
841- pass
842+ return fstab_add(device, mountpoint, filesystem, options=options)
843 return True
844
845
846@@ -168,9 +194,9 @@
847 except subprocess.CalledProcessError, e:
848 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
849 return False
850+
851 if persist:
852- # TODO: update fstab
853- pass
854+ return fstab_remove(mountpoint)
855 return True
856
857
858@@ -194,16 +220,16 @@
859 return None
860
861
862-def restart_on_change(restart_map):
863+def restart_on_change(restart_map, stopstart=False):
864 """Restart services based on configuration files changing
865
866- This function is used a decorator, for example
867+ This function is used a decorator, for example::
868
869 @restart_on_change({
870 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
871 })
872 def ceph_client_changed():
873- ...
874+ pass # your code here
875
876 In this example, the cinder-api and cinder-volume services
877 would be restarted if /etc/ceph/ceph.conf is changed by the
878@@ -219,8 +245,14 @@
879 for path in restart_map:
880 if checksums[path] != file_hash(path):
881 restarts += restart_map[path]
882- for service_name in list(OrderedDict.fromkeys(restarts)):
883- service('restart', service_name)
884+ services_list = list(OrderedDict.fromkeys(restarts))
885+ if not stopstart:
886+ for service_name in services_list:
887+ service('restart', service_name)
888+ else:
889+ for action in ['stop', 'start']:
890+ for service_name in services_list:
891+ service(action, service_name)
892 return wrapped_f
893 return wrap
894
895@@ -289,3 +321,44 @@
896 if 'link/ether' in words:
897 hwaddr = words[words.index('link/ether') + 1]
898 return hwaddr
899+
900+
901+def cmp_pkgrevno(package, revno, pkgcache=None):
902+ '''Compare supplied revno with the revno of the installed package
903+
904+ * 1 => Installed revno is greater than supplied arg
905+ * 0 => Installed revno is the same as supplied arg
906+ * -1 => Installed revno is less than supplied arg
907+
908+ '''
909+ import apt_pkg
910+ if not pkgcache:
911+ apt_pkg.init()
912+ # Force Apt to build its cache in memory. That way we avoid race
913+ # conditions with other applications building the cache in the same
914+ # place.
915+ apt_pkg.config.set("Dir::Cache::pkgcache", "")
916+ pkgcache = apt_pkg.Cache()
917+ pkg = pkgcache[package]
918+ return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
919+
920+
921+@contextmanager
922+def chdir(d):
923+ cur = os.getcwd()
924+ try:
925+ yield os.chdir(d)
926+ finally:
927+ os.chdir(cur)
928+
929+
930+def chownr(path, owner, group):
931+ uid = pwd.getpwnam(owner).pw_uid
932+ gid = grp.getgrnam(group).gr_gid
933+
934+ for root, dirs, files in os.walk(path):
935+ for name in dirs + files:
936+ full = os.path.join(root, name)
937+ broken_symlink = os.path.lexists(full) and not os.path.exists(full)
938+ if not broken_symlink:
939+ os.chown(full, uid, gid)
940
941=== added directory 'hooks/charmhelpers/core/services'
942=== added file 'hooks/charmhelpers/core/services/__init__.py'
943--- hooks/charmhelpers/core/services/__init__.py 1970-01-01 00:00:00 +0000
944+++ hooks/charmhelpers/core/services/__init__.py 2014-09-09 05:13:17 +0000
945@@ -0,0 +1,2 @@
946+from .base import *
947+from .helpers import *
948
949=== added file 'hooks/charmhelpers/core/services/base.py'
950--- hooks/charmhelpers/core/services/base.py 1970-01-01 00:00:00 +0000
951+++ hooks/charmhelpers/core/services/base.py 2014-09-09 05:13:17 +0000
952@@ -0,0 +1,305 @@
953+import os
954+import re
955+import json
956+from collections import Iterable
957+
958+from charmhelpers.core import host
959+from charmhelpers.core import hookenv
960+
961+
962+__all__ = ['ServiceManager', 'ManagerCallback',
963+ 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports',
964+ 'service_restart', 'service_stop']
965+
966+
967+class ServiceManager(object):
968+ def __init__(self, services=None):
969+ """
970+ Register a list of services, given their definitions.
971+
972+ Traditional charm authoring is focused on implementing hooks. That is,
973+ the charm author is thinking in terms of "What hook am I handling; what
974+ does this hook need to do?" However, in most cases, the real question
975+ should be "Do I have the information I need to configure and start this
976+ piece of software and, if so, what are the steps for doing so?" The
977+ ServiceManager framework tries to bring the focus to the data and the
978+ setup tasks, in the most declarative way possible.
979+
980+ Service definitions are dicts in the following formats (all keys except
981+ 'service' are optional)::
982+
983+ {
984+ "service": <service name>,
985+ "required_data": <list of required data contexts>,
986+ "data_ready": <one or more callbacks>,
987+ "data_lost": <one or more callbacks>,
988+ "start": <one or more callbacks>,
989+ "stop": <one or more callbacks>,
990+ "ports": <list of ports to manage>,
991+ }
992+
993+ The 'required_data' list should contain dicts of required data (or
994+ dependency managers that act like dicts and know how to collect the data).
995+        Only when all items in the 'required_data' list are populated are the
996+        'data_ready' and 'start' callbacks executed. See `is_ready()` for more
997+ information.
998+
999+ The 'data_ready' value should be either a single callback, or a list of
1000+ callbacks, to be called when all items in 'required_data' pass `is_ready()`.
1001+ Each callback will be called with the service name as the only parameter.
1002+ After all of the 'data_ready' callbacks are called, the 'start' callbacks
1003+ are fired.
1004+
1005+ The 'data_lost' value should be either a single callback, or a list of
1006+ callbacks, to be called when a 'required_data' item no longer passes
1007+ `is_ready()`. Each callback will be called with the service name as the
1008+ only parameter. After all of the 'data_lost' callbacks are called,
1009+ the 'stop' callbacks are fired.
1010+
1011+ The 'start' value should be either a single callback, or a list of
1012+ callbacks, to be called when starting the service, after the 'data_ready'
1013+ callbacks are complete. Each callback will be called with the service
1014+ name as the only parameter. This defaults to
1015+ `[host.service_start, services.open_ports]`.
1016+
1017+ The 'stop' value should be either a single callback, or a list of
1018+ callbacks, to be called when stopping the service. If the service is
1019+ being stopped because it no longer has all of its 'required_data', this
1020+ will be called after all of the 'data_lost' callbacks are complete.
1021+ Each callback will be called with the service name as the only parameter.
1022+ This defaults to `[services.close_ports, host.service_stop]`.
1023+
1024+ The 'ports' value should be a list of ports to manage. The default
1025+ 'start' handler will open the ports after the service is started,
1026+ and the default 'stop' handler will close the ports prior to stopping
1027+ the service.
1028+
1029+
1030+ Examples:
1031+
1032+ The following registers an Upstart service called bingod that depends on
1033+ a mongodb relation and which runs a custom `db_migrate` function prior to
1034+ restarting the service, and a Runit service called spadesd::
1035+
1036+ manager = services.ServiceManager([
1037+ {
1038+ 'service': 'bingod',
1039+ 'ports': [80, 443],
1040+ 'required_data': [MongoRelation(), config(), {'my': 'data'}],
1041+ 'data_ready': [
1042+ services.template(source='bingod.conf'),
1043+ services.template(source='bingod.ini',
1044+ target='/etc/bingod.ini',
1045+ owner='bingo', perms=0400),
1046+ ],
1047+ },
1048+ {
1049+ 'service': 'spadesd',
1050+ 'data_ready': services.template(source='spadesd_run.j2',
1051+ target='/etc/sv/spadesd/run',
1052+ perms=0555),
1053+ 'start': runit_start,
1054+ 'stop': runit_stop,
1055+ },
1056+ ])
1057+ manager.manage()
1058+ """
1059+ self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
1060+ self._ready = None
1061+ self.services = {}
1062+ for service in services or []:
1063+ service_name = service['service']
1064+ self.services[service_name] = service
1065+
1066+ def manage(self):
1067+ """
1068+ Handle the current hook by doing The Right Thing with the registered services.
1069+ """
1070+ hook_name = hookenv.hook_name()
1071+ if hook_name == 'stop':
1072+ self.stop_services()
1073+ else:
1074+ self.provide_data()
1075+ self.reconfigure_services()
1076+
1077+ def provide_data(self):
1078+ hook_name = hookenv.hook_name()
1079+ for service in self.services.values():
1080+ for provider in service.get('provided_data', []):
1081+ if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name):
1082+ data = provider.provide_data()
1083+ if provider._is_ready(data):
1084+ hookenv.relation_set(None, data)
1085+
1086+ def reconfigure_services(self, *service_names):
1087+ """
1088+ Update all files for one or more registered services, and,
1089+ if ready, optionally restart them.
1090+
1091+ If no service names are given, reconfigures all registered services.
1092+ """
1093+ for service_name in service_names or self.services.keys():
1094+ if self.is_ready(service_name):
1095+ self.fire_event('data_ready', service_name)
1096+ self.fire_event('start', service_name, default=[
1097+ service_restart,
1098+ manage_ports])
1099+ self.save_ready(service_name)
1100+ else:
1101+ if self.was_ready(service_name):
1102+ self.fire_event('data_lost', service_name)
1103+ self.fire_event('stop', service_name, default=[
1104+ manage_ports,
1105+ service_stop])
1106+ self.save_lost(service_name)
1107+
1108+ def stop_services(self, *service_names):
1109+ """
1110+ Stop one or more registered services, by name.
1111+
1112+ If no service names are given, stops all registered services.
1113+ """
1114+ for service_name in service_names or self.services.keys():
1115+ self.fire_event('stop', service_name, default=[
1116+ manage_ports,
1117+ service_stop])
1118+
1119+ def get_service(self, service_name):
1120+ """
1121+ Given the name of a registered service, return its service definition.
1122+ """
1123+ service = self.services.get(service_name)
1124+ if not service:
1125+ raise KeyError('Service not registered: %s' % service_name)
1126+ return service
1127+
1128+ def fire_event(self, event_name, service_name, default=None):
1129+ """
1130+ Fire a data_ready, data_lost, start, or stop event on a given service.
1131+ """
1132+ service = self.get_service(service_name)
1133+ callbacks = service.get(event_name, default)
1134+ if not callbacks:
1135+ return
1136+ if not isinstance(callbacks, Iterable):
1137+ callbacks = [callbacks]
1138+ for callback in callbacks:
1139+ if isinstance(callback, ManagerCallback):
1140+ callback(self, service_name, event_name)
1141+ else:
1142+ callback(service_name)
1143+
1144+ def is_ready(self, service_name):
1145+ """
1146+ Determine if a registered service is ready, by checking its 'required_data'.
1147+
1148+ A 'required_data' item can be any mapping type, and is considered ready
1149+ if `bool(item)` evaluates as True.
1150+ """
1151+ service = self.get_service(service_name)
1152+ reqs = service.get('required_data', [])
1153+ return all(bool(req) for req in reqs)
1154+
1155+ def _load_ready_file(self):
1156+ if self._ready is not None:
1157+ return
1158+ if os.path.exists(self._ready_file):
1159+ with open(self._ready_file) as fp:
1160+ self._ready = set(json.load(fp))
1161+ else:
1162+ self._ready = set()
1163+
1164+ def _save_ready_file(self):
1165+ if self._ready is None:
1166+ return
1167+ with open(self._ready_file, 'w') as fp:
1168+ json.dump(list(self._ready), fp)
1169+
1170+ def save_ready(self, service_name):
1171+ """
1172+ Save an indicator that the given service is now data_ready.
1173+ """
1174+ self._load_ready_file()
1175+ self._ready.add(service_name)
1176+ self._save_ready_file()
1177+
1178+ def save_lost(self, service_name):
1179+ """
1180+ Save an indicator that the given service is no longer data_ready.
1181+ """
1182+ self._load_ready_file()
1183+ self._ready.discard(service_name)
1184+ self._save_ready_file()
1185+
1186+ def was_ready(self, service_name):
1187+ """
1188+ Determine if the given service was previously data_ready.
1189+ """
1190+ self._load_ready_file()
1191+ return service_name in self._ready
1192+
1193+
1194+class ManagerCallback(object):
1195+ """
1196+ Special case of a callback that takes the `ServiceManager` instance
1197+ in addition to the service name.
1198+
1199+ Subclasses should implement `__call__` which should accept three parameters:
1200+
1201+ * `manager` The `ServiceManager` instance
1202+ * `service_name` The name of the service it's being triggered for
1203+ * `event_name` The name of the event that this callback is handling
1204+ """
1205+ def __call__(self, manager, service_name, event_name):
1206+ raise NotImplementedError()
1207+
1208+
1209+class PortManagerCallback(ManagerCallback):
1210+ """
1211+ Callback class that will open or close ports, for use as either
1212+ a start or stop action.
1213+ """
1214+ def __call__(self, manager, service_name, event_name):
1215+ service = manager.get_service(service_name)
1216+ new_ports = service.get('ports', [])
1217+ port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name))
1218+ if os.path.exists(port_file):
1219+ with open(port_file) as fp:
1220+ old_ports = fp.read().split(',')
1221+ for old_port in old_ports:
1222+ if bool(old_port):
1223+ old_port = int(old_port)
1224+ if old_port not in new_ports:
1225+ hookenv.close_port(old_port)
1226+ with open(port_file, 'w') as fp:
1227+ fp.write(','.join(str(port) for port in new_ports))
1228+ for port in new_ports:
1229+ if event_name == 'start':
1230+ hookenv.open_port(port)
1231+ elif event_name == 'stop':
1232+ hookenv.close_port(port)
1233+
1234+
1235+def service_stop(service_name):
1236+ """
1237+ Wrapper around host.service_stop to prevent spurious "unknown service"
1238+ messages in the logs.
1239+ """
1240+ if host.service_running(service_name):
1241+ host.service_stop(service_name)
1242+
1243+
1244+def service_restart(service_name):
1245+ """
1246+ Wrapper around host.service_restart to prevent spurious "unknown service"
1247+ messages in the logs.
1248+ """
1249+ if host.service_available(service_name):
1250+ if host.service_running(service_name):
1251+ host.service_restart(service_name)
1252+ else:
1253+ host.service_start(service_name)
1254+
1255+
1256+# Convenience aliases
1257+open_ports = close_ports = manage_ports = PortManagerCallback()
1258
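The readiness gating that `ServiceManager.reconfigure_services` performs above can be sketched standalone. The `is_ready` and `reconfigure` functions below mirror the logic (a service is started only when every `required_data` item is truthy) but are simplified stand-ins written for illustration, not charmhelpers itself:

```python
# Standalone sketch of the ServiceManager readiness check from base.py:
# a service is (re)started only when every 'required_data' item is truthy.
def is_ready(service):
    return all(bool(req) for req in service.get('required_data', []))

def reconfigure(services):
    started = []
    for name, service in services.items():
        if is_ready(service):
            started.append(name)  # stands in for fire_event('start', ...)
    return started

services = {
    'bingod': {'required_data': [{'db': 'host'}, {'my': 'data'}]},
    'spadesd': {'required_data': [{}]},  # empty context -> not ready
}
started = reconfigure(services)
```

An empty dict (an incomplete relation context) is falsy, so `spadesd` is held back until its data arrives, which is exactly how `RelationContext.__bool__` drives this check.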
1259=== added file 'hooks/charmhelpers/core/services/helpers.py'
1260--- hooks/charmhelpers/core/services/helpers.py 1970-01-01 00:00:00 +0000
1261+++ hooks/charmhelpers/core/services/helpers.py 2014-09-09 05:13:17 +0000
1262@@ -0,0 +1,125 @@
1263+from charmhelpers.core import hookenv
1264+from charmhelpers.core import templating
1265+
1266+from charmhelpers.core.services.base import ManagerCallback
1267+
1268+
1269+__all__ = ['RelationContext', 'TemplateCallback',
1270+ 'render_template', 'template']
1271+
1272+
1273+class RelationContext(dict):
1274+ """
1275+ Base class for a context generator that gets relation data from juju.
1276+
1277+ Subclasses must provide the attributes `name`, which is the name of the
1278+ interface of interest, `interface`, which is the type of the interface of
1279+ interest, and `required_keys`, which is the set of keys required for the
1280+ relation to be considered complete. The data for all interfaces matching
1281+    the `name` attribute that are complete will be used to populate the dictionary
1282+ values (see `get_data`, below).
1283+
1284+ The generated context will be namespaced under the interface type, to prevent
1285+ potential naming conflicts.
1286+ """
1287+ name = None
1288+ interface = None
1289+ required_keys = []
1290+
1291+ def __init__(self, *args, **kwargs):
1292+ super(RelationContext, self).__init__(*args, **kwargs)
1293+ self.get_data()
1294+
1295+ def __bool__(self):
1296+ """
1297+ Returns True if all of the required_keys are available.
1298+ """
1299+ return self.is_ready()
1300+
1301+ __nonzero__ = __bool__
1302+
1303+ def __repr__(self):
1304+ return super(RelationContext, self).__repr__()
1305+
1306+ def is_ready(self):
1307+ """
1308+ Returns True if all of the `required_keys` are available from any units.
1309+ """
1310+ ready = len(self.get(self.name, [])) > 0
1311+ if not ready:
1312+ hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG)
1313+ return ready
1314+
1315+ def _is_ready(self, unit_data):
1316+ """
1317+ Helper method that tests a set of relation data and returns True if
1318+ all of the `required_keys` are present.
1319+ """
1320+ return set(unit_data.keys()).issuperset(set(self.required_keys))
1321+
1322+ def get_data(self):
1323+ """
1324+ Retrieve the relation data for each unit involved in a relation and,
1325+ if complete, store it in a list under `self[self.name]`. This
1326+ is automatically called when the RelationContext is instantiated.
1327+
1328+        The units are sorted lexicographically first by the service ID, then by
1329+ the unit ID. Thus, if an interface has two other services, 'db:1'
1330+ and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1',
1331+ and 'db:2' having one unit, 'mediawiki/0', all of which have a complete
1332+ set of data, the relation data for the units will be stored in the
1333+ order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'.
1334+
1335+ If you only care about a single unit on the relation, you can just
1336+ access it as `{{ interface[0]['key'] }}`. However, if you can at all
1337+ support multiple units on a relation, you should iterate over the list,
1338+ like::
1339+
1340+ {% for unit in interface -%}
1341+ {{ unit['key'] }}{% if not loop.last %},{% endif %}
1342+ {%- endfor %}
1343+
1344+ Note that since all sets of relation data from all related services and
1345+ units are in a single list, if you need to know which service or unit a
1346+ set of data came from, you'll need to extend this class to preserve
1347+ that information.
1348+ """
1349+ if not hookenv.relation_ids(self.name):
1350+ return
1351+
1352+ ns = self.setdefault(self.name, [])
1353+ for rid in sorted(hookenv.relation_ids(self.name)):
1354+ for unit in sorted(hookenv.related_units(rid)):
1355+ reldata = hookenv.relation_get(rid=rid, unit=unit)
1356+ if self._is_ready(reldata):
1357+ ns.append(reldata)
1358+
1359+ def provide_data(self):
1360+ """
1361+ Return data to be relation_set for this interface.
1362+ """
1363+ return {}
1364+
1365+
1366+class TemplateCallback(ManagerCallback):
1367+ """
1368+ Callback class that will render a template, for use as a ready action.
1369+ """
1370+ def __init__(self, source, target, owner='root', group='root', perms=0444):
1371+ self.source = source
1372+ self.target = target
1373+ self.owner = owner
1374+ self.group = group
1375+ self.perms = perms
1376+
1377+ def __call__(self, manager, service_name, event_name):
1378+ service = manager.get_service(service_name)
1379+ context = {}
1380+ for ctx in service.get('required_data', []):
1381+ context.update(ctx)
1382+ templating.render(self.source, self.target, context,
1383+ self.owner, self.group, self.perms)
1384+
1385+
1386+# Convenience aliases for templates
1387+render_template = template = TemplateCallback
1388
1389=== added file 'hooks/charmhelpers/core/templating.py'
1390--- hooks/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000
1391+++ hooks/charmhelpers/core/templating.py 2014-09-09 05:13:17 +0000
1392@@ -0,0 +1,51 @@
1393+import os
1394+
1395+from charmhelpers.core import host
1396+from charmhelpers.core import hookenv
1397+
1398+
1399+def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
1400+ """
1401+ Render a template.
1402+
1403+ The `source` path, if not absolute, is relative to the `templates_dir`.
1404+
1405+ The `target` path should be absolute.
1406+
1407+ The context should be a dict containing the values to be replaced in the
1408+ template.
1409+
1410+ The `owner`, `group`, and `perms` options will be passed to `write_file`.
1411+
1412+ If omitted, `templates_dir` defaults to the `templates` folder in the charm.
1413+
1414+ Note: Using this requires python-jinja2; if it is not installed, calling
1415+ this will attempt to use charmhelpers.fetch.apt_install to install it.
1416+ """
1417+ try:
1418+ from jinja2 import FileSystemLoader, Environment, exceptions
1419+ except ImportError:
1420+ try:
1421+ from charmhelpers.fetch import apt_install
1422+ except ImportError:
1423+ hookenv.log('Could not import jinja2, and could not import '
1424+ 'charmhelpers.fetch to install it',
1425+ level=hookenv.ERROR)
1426+ raise
1427+ apt_install('python-jinja2', fatal=True)
1428+ from jinja2 import FileSystemLoader, Environment, exceptions
1429+
1430+ if templates_dir is None:
1431+ templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
1432+ loader = Environment(loader=FileSystemLoader(templates_dir))
1433+ try:
1435+        template = loader.get_template(source)
1436+ except exceptions.TemplateNotFound as e:
1437+ hookenv.log('Could not load template %s from %s.' %
1438+ (source, templates_dir),
1439+ level=hookenv.ERROR)
1440+ raise e
1441+ content = template.render(context)
1442+ host.mkdir(os.path.dirname(target))
1443+ host.write_file(target, content, owner, group, perms)
1444
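`render()` above depends on Jinja2. As a stdlib-only illustration of the same idea (substitute a context dict into a template body, then write the result out), here is a sketch using `string.Template`; the template text and context keys are invented for the example:

```python
from string import Template

# Stand-in for templating.render(): substitute a context dict into a
# template body. The real helper loads the source from the charm's
# templates/ directory via a Jinja2 FileSystemLoader and writes the
# rendered content with host.write_file().
source = Template("listen $port\nserver_name $host\n")
context = {'port': 8080, 'host': 'example.internal'}  # hypothetical values
content = source.substitute(context)
```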
1445=== added directory 'hooks/charmhelpers/fetch'
1446=== added file 'hooks/charmhelpers/fetch/__init__.py'
1447--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
1448+++ hooks/charmhelpers/fetch/__init__.py 2014-09-09 05:13:17 +0000
1449@@ -0,0 +1,394 @@
1450+import importlib
1451+from tempfile import NamedTemporaryFile
1452+import time
1453+from yaml import safe_load
1454+from charmhelpers.core.host import (
1455+ lsb_release
1456+)
1457+from urlparse import (
1458+ urlparse,
1459+ urlunparse,
1460+)
1461+import subprocess
1462+from charmhelpers.core.hookenv import (
1463+ config,
1464+ log,
1465+)
1466+import os
1467+
1468+
1469+CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
1470+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
1471+"""
1472+PROPOSED_POCKET = """# Proposed
1473+deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
1474+"""
1475+CLOUD_ARCHIVE_POCKETS = {
1476+ # Folsom
1477+ 'folsom': 'precise-updates/folsom',
1478+ 'precise-folsom': 'precise-updates/folsom',
1479+ 'precise-folsom/updates': 'precise-updates/folsom',
1480+ 'precise-updates/folsom': 'precise-updates/folsom',
1481+ 'folsom/proposed': 'precise-proposed/folsom',
1482+ 'precise-folsom/proposed': 'precise-proposed/folsom',
1483+ 'precise-proposed/folsom': 'precise-proposed/folsom',
1484+ # Grizzly
1485+ 'grizzly': 'precise-updates/grizzly',
1486+ 'precise-grizzly': 'precise-updates/grizzly',
1487+ 'precise-grizzly/updates': 'precise-updates/grizzly',
1488+ 'precise-updates/grizzly': 'precise-updates/grizzly',
1489+ 'grizzly/proposed': 'precise-proposed/grizzly',
1490+ 'precise-grizzly/proposed': 'precise-proposed/grizzly',
1491+ 'precise-proposed/grizzly': 'precise-proposed/grizzly',
1492+ # Havana
1493+ 'havana': 'precise-updates/havana',
1494+ 'precise-havana': 'precise-updates/havana',
1495+ 'precise-havana/updates': 'precise-updates/havana',
1496+ 'precise-updates/havana': 'precise-updates/havana',
1497+ 'havana/proposed': 'precise-proposed/havana',
1498+ 'precise-havana/proposed': 'precise-proposed/havana',
1499+ 'precise-proposed/havana': 'precise-proposed/havana',
1500+ # Icehouse
1501+ 'icehouse': 'precise-updates/icehouse',
1502+ 'precise-icehouse': 'precise-updates/icehouse',
1503+ 'precise-icehouse/updates': 'precise-updates/icehouse',
1504+ 'precise-updates/icehouse': 'precise-updates/icehouse',
1505+ 'icehouse/proposed': 'precise-proposed/icehouse',
1506+ 'precise-icehouse/proposed': 'precise-proposed/icehouse',
1507+ 'precise-proposed/icehouse': 'precise-proposed/icehouse',
1508+ # Juno
1509+ 'juno': 'trusty-updates/juno',
1510+ 'trusty-juno': 'trusty-updates/juno',
1511+ 'trusty-juno/updates': 'trusty-updates/juno',
1512+ 'trusty-updates/juno': 'trusty-updates/juno',
1513+    'juno/proposed': 'trusty-proposed/juno',
1515+ 'trusty-juno/proposed': 'trusty-proposed/juno',
1516+ 'trusty-proposed/juno': 'trusty-proposed/juno',
1517+}
1518+
1519+# The order of this list is very important. Handlers should be listed
1520+# from least- to most-specific URL matching.
1521+FETCH_HANDLERS = (
1522+ 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
1523+ 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
1524+)
1525+
1526+APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
1527+APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
1528+APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
1529+
1530+
1531+class SourceConfigError(Exception):
1532+ pass
1533+
1534+
1535+class UnhandledSource(Exception):
1536+ pass
1537+
1538+
1539+class AptLockError(Exception):
1540+ pass
1541+
1542+
1543+class BaseFetchHandler(object):
1544+
1545+ """Base class for FetchHandler implementations in fetch plugins"""
1546+
1547+ def can_handle(self, source):
1548+ """Returns True if the source can be handled. Otherwise returns
1549+ a string explaining why it cannot"""
1550+ return "Wrong source type"
1551+
1552+ def install(self, source):
1553+ """Try to download and unpack the source. Return the path to the
1554+ unpacked files or raise UnhandledSource."""
1555+ raise UnhandledSource("Wrong source type {}".format(source))
1556+
1557+ def parse_url(self, url):
1558+ return urlparse(url)
1559+
1560+ def base_url(self, url):
1561+ """Return url without querystring or fragment"""
1562+ parts = list(self.parse_url(url))
1563+ parts[4:] = ['' for i in parts[4:]]
1564+ return urlunparse(parts)
1565+
1566+
1567+def filter_installed_packages(packages):
1568+ """Returns a list of packages that require installation"""
1569+ import apt_pkg
1570+ apt_pkg.init()
1571+
1572+ # Tell apt to build an in-memory cache to prevent race conditions (if
1573+ # another process is already building the cache).
1574+ apt_pkg.config.set("Dir::Cache::pkgcache", "")
1575+ apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
1576+
1577+ cache = apt_pkg.Cache()
1578+ _pkgs = []
1579+ for package in packages:
1580+ try:
1581+ p = cache[package]
1582+ p.current_ver or _pkgs.append(package)
1583+ except KeyError:
1584+ log('Package {} has no installation candidate.'.format(package),
1585+ level='WARNING')
1586+ _pkgs.append(package)
1587+ return _pkgs
1588+
1589+
1590+def apt_install(packages, options=None, fatal=False):
1591+ """Install one or more packages"""
1592+ if options is None:
1593+ options = ['--option=Dpkg::Options::=--force-confold']
1594+
1595+ cmd = ['apt-get', '--assume-yes']
1596+ cmd.extend(options)
1597+ cmd.append('install')
1598+ if isinstance(packages, basestring):
1599+ cmd.append(packages)
1600+ else:
1601+ cmd.extend(packages)
1602+ log("Installing {} with options: {}".format(packages,
1603+ options))
1604+ _run_apt_command(cmd, fatal)
1605+
1606+
1607+def apt_upgrade(options=None, fatal=False, dist=False):
1608+ """Upgrade all packages"""
1609+ if options is None:
1610+ options = ['--option=Dpkg::Options::=--force-confold']
1611+
1612+ cmd = ['apt-get', '--assume-yes']
1613+ cmd.extend(options)
1614+ if dist:
1615+ cmd.append('dist-upgrade')
1616+ else:
1617+ cmd.append('upgrade')
1618+ log("Upgrading with options: {}".format(options))
1619+ _run_apt_command(cmd, fatal)
1620+
1621+
1622+def apt_update(fatal=False):
1623+ """Update local apt cache"""
1624+ cmd = ['apt-get', 'update']
1625+ _run_apt_command(cmd, fatal)
1626+
1627+
1628+def apt_purge(packages, fatal=False):
1629+ """Purge one or more packages"""
1630+ cmd = ['apt-get', '--assume-yes', 'purge']
1631+ if isinstance(packages, basestring):
1632+ cmd.append(packages)
1633+ else:
1634+ cmd.extend(packages)
1635+ log("Purging {}".format(packages))
1636+ _run_apt_command(cmd, fatal)
1637+
1638+
1639+def apt_hold(packages, fatal=False):
1640+ """Hold one or more packages"""
1641+ cmd = ['apt-mark', 'hold']
1642+ if isinstance(packages, basestring):
1643+ cmd.append(packages)
1644+ else:
1645+ cmd.extend(packages)
1646+ log("Holding {}".format(packages))
1647+
1648+ if fatal:
1649+ subprocess.check_call(cmd)
1650+ else:
1651+ subprocess.call(cmd)
1652+
1653+
1654+def add_source(source, key=None):
1655+ """Add a package source to this system.
1656+
1657+ @param source: a URL or sources.list entry, as supported by
1658+ add-apt-repository(1). Examples:
1659+ ppa:charmers/example
1660+ deb https://stub:key@private.example.com/ubuntu trusty main
1661+
1662+ In addition:
1663+ 'proposed:' may be used to enable the standard 'proposed'
1664+ pocket for the release.
1665+ 'cloud:' may be used to activate official cloud archive pockets,
1666+ such as 'cloud:icehouse'
1667+
1668+ @param key: A key to be added to the system's APT keyring and used
1669+ to verify the signatures on packages. Ideally, this should be an
1670+ ASCII format GPG public key including the block headers. A GPG key
1671+ id may also be used, but be aware that only insecure protocols are
1672+ available to retrieve the actual public key from a public keyserver
1673+ placing your Juju environment at risk. ppa and cloud archive keys
1674+    are securely added automatically, so should not be provided.
1675+ """
1676+ if source is None:
1677+ log('Source is not present. Skipping')
1678+ return
1679+
1680+ if (source.startswith('ppa:') or
1681+ source.startswith('http') or
1682+ source.startswith('deb ') or
1683+ source.startswith('cloud-archive:')):
1684+ subprocess.check_call(['add-apt-repository', '--yes', source])
1685+ elif source.startswith('cloud:'):
1686+ apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
1687+ fatal=True)
1688+ pocket = source.split(':')[-1]
1689+ if pocket not in CLOUD_ARCHIVE_POCKETS:
1690+ raise SourceConfigError(
1691+ 'Unsupported cloud: source option %s' %
1692+ pocket)
1693+ actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
1694+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
1695+ apt.write(CLOUD_ARCHIVE.format(actual_pocket))
1696+ elif source == 'proposed':
1697+ release = lsb_release()['DISTRIB_CODENAME']
1698+ with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
1699+ apt.write(PROPOSED_POCKET.format(release))
1700+ else:
1701+ raise SourceConfigError("Unknown source: {!r}".format(source))
1702+
1703+ if key:
1704+ if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
1705+ with NamedTemporaryFile() as key_file:
1706+ key_file.write(key)
1707+ key_file.flush()
1708+ key_file.seek(0)
1709+ subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file)
1710+ else:
1711+ # Note that hkp: is in no way a secure protocol. Using a
1712+ # GPG key id is pointless from a security POV unless you
1713+ # absolutely trust your network and DNS.
1714+ subprocess.check_call(['apt-key', 'adv', '--keyserver',
1715+ 'hkp://keyserver.ubuntu.com:80', '--recv',
1716+ key])
1717+
1718+
1719+def configure_sources(update=False,
1720+ sources_var='install_sources',
1721+ keys_var='install_keys'):
1722+ """
1723+ Configure multiple sources from charm configuration.
1724+
1725+ The lists are encoded as yaml fragments in the configuration.
1726+    The fragment needs to be included as a string. Sources and their
1727+ corresponding keys are of the types supported by add_source().
1728+
1729+ Example config:
1730+ install_sources: |
1731+ - "ppa:foo"
1732+ - "http://example.com/repo precise main"
1733+ install_keys: |
1734+ - null
1735+ - "a1b2c3d4"
1736+
1737+ Note that 'null' (a.k.a. None) should not be quoted.
1738+ """
1739+ sources = safe_load((config(sources_var) or '').strip()) or []
1740+ keys = safe_load((config(keys_var) or '').strip()) or None
1741+
1742+ if isinstance(sources, basestring):
1743+ sources = [sources]
1744+
1745+ if keys is None:
1746+ for source in sources:
1747+ add_source(source, None)
1748+ else:
1749+ if isinstance(keys, basestring):
1750+ keys = [keys]
1751+
1752+ if len(sources) != len(keys):
1753+ raise SourceConfigError(
1754+ 'Install sources and keys lists are different lengths')
1755+ for source, key in zip(sources, keys):
1756+ add_source(source, key)
1757+ if update:
1758+ apt_update(fatal=True)
1759+
1760+
1761+def install_remote(source):
1762+ """
1763+ Install a file tree from a remote source
1764+
1765+ The specified source should be a url of the form:
1766+ scheme://[host]/path[#[option=value][&...]]
1767+
1768+    Schemes supported are based on this module's submodules
1769+ Options supported are submodule-specific"""
1770+ # We ONLY check for True here because can_handle may return a string
1771+ # explaining why it can't handle a given source.
1772+ handlers = [h for h in plugins() if h.can_handle(source) is True]
1773+ installed_to = None
1774+ for handler in handlers:
1775+ try:
1776+ installed_to = handler.install(source)
1777+ except UnhandledSource:
1778+ pass
1779+ if not installed_to:
1780+ raise UnhandledSource("No handler found for source {}".format(source))
1781+ return installed_to
1782+
1783+
1784+def install_from_config(config_var_name):
1785+ charm_config = config()
1786+ source = charm_config[config_var_name]
1787+ return install_remote(source)
1788+
1789+
1790+def plugins(fetch_handlers=None):
1791+ if not fetch_handlers:
1792+ fetch_handlers = FETCH_HANDLERS
1793+ plugin_list = []
1794+ for handler_name in fetch_handlers:
1795+ package, classname = handler_name.rsplit('.', 1)
1796+ try:
1797+ handler_class = getattr(
1798+ importlib.import_module(package),
1799+ classname)
1800+ plugin_list.append(handler_class())
1801+ except (ImportError, AttributeError):
1802+            # Skip missing plugins so that they can be omitted from
1803+            # installation if desired
1804+ log("FetchHandler {} not found, skipping plugin".format(
1805+ handler_name))
1806+ return plugin_list
1807+
1808+
1809+def _run_apt_command(cmd, fatal=False):
1810+ """
1811+ Run an APT command, checking output and retrying if the fatal flag is set
1812+ to True.
1813+
1814+    :param: cmd: list: The apt command to run.
1815+ :param: fatal: bool: Whether the command's output should be checked and
1816+ retried.
1817+ """
1818+ env = os.environ.copy()
1819+
1820+ if 'DEBIAN_FRONTEND' not in env:
1821+ env['DEBIAN_FRONTEND'] = 'noninteractive'
1822+
1823+ if fatal:
1824+ retry_count = 0
1825+ result = None
1826+
1827+ # If the command is considered "fatal", we need to retry if the apt
1828+ # lock was not acquired.
1829+
1830+ while result is None or result == APT_NO_LOCK:
1831+ try:
1832+ result = subprocess.check_call(cmd, env=env)
1833+            except subprocess.CalledProcessError as e:
1834+ retry_count = retry_count + 1
1835+ if retry_count > APT_NO_LOCK_RETRY_COUNT:
1836+ raise
1837+ result = e.returncode
1838+ log("Couldn't acquire DPKG lock. Will retry in {} seconds."
1839+ "".format(APT_NO_LOCK_RETRY_DELAY))
1840+ time.sleep(APT_NO_LOCK_RETRY_DELAY)
1841+
1842+ else:
1843+ subprocess.call(cmd, env=env)
1844
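The fatal branch of `_run_apt_command` retries while apt exits with the "couldn't acquire lock" status. That loop can be exercised in isolation by substituting a stub for `subprocess.check_call`; the `fake_apt` callable and `run_fatal` wrapper below are invented for the example, and the retry sleep is omitted:

```python
APT_NO_LOCK = 100               # apt's "couldn't acquire lock" exit code
APT_NO_LOCK_RETRY_COUNT = 30

class CalledProcessError(Exception):  # stand-in for subprocess's exception
    def __init__(self, returncode):
        self.returncode = returncode

attempts = {'n': 0}

def fake_apt():
    # Fail with the lock error twice, then succeed.
    attempts['n'] += 1
    if attempts['n'] < 3:
        raise CalledProcessError(APT_NO_LOCK)
    return 0

def run_fatal(cmd):
    # Same retry shape as _run_apt_command's fatal branch.
    retry_count, result = 0, None
    while result is None or result == APT_NO_LOCK:
        try:
            result = cmd()
        except CalledProcessError as e:
            retry_count += 1
            if retry_count > APT_NO_LOCK_RETRY_COUNT:
                raise
            result = e.returncode  # lock held: loop and retry
    return result

result = run_fatal(fake_apt)
```

Note that a non-fatal call (`subprocess.call`) never retries: only the fatal path loops on `APT_NO_LOCK`.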
1845=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
1846--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
1847+++ hooks/charmhelpers/fetch/archiveurl.py 2014-09-09 05:13:17 +0000
1848@@ -0,0 +1,63 @@
1849+import os
1850+import urllib2
1851+import urlparse
1852+
1853+from charmhelpers.fetch import (
1854+ BaseFetchHandler,
1855+ UnhandledSource
1856+)
1857+from charmhelpers.payload.archive import (
1858+ get_archive_handler,
1859+ extract,
1860+)
1861+from charmhelpers.core.host import mkdir
1862+
1863+
1864+class ArchiveUrlFetchHandler(BaseFetchHandler):
1865+ """Handler for archives via generic URLs"""
1866+ def can_handle(self, source):
1867+ url_parts = self.parse_url(source)
1868+ if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
1869+ return "Wrong source type"
1870+ if get_archive_handler(self.base_url(source)):
1871+ return True
1872+ return False
1873+
1874+ def download(self, source, dest):
1875+        # propagate all exceptions
1876+ # URLError, OSError, etc
1877+ proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
1878+ if proto in ('http', 'https'):
1879+ auth, barehost = urllib2.splituser(netloc)
1880+ if auth is not None:
1881+ source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
1882+ username, password = urllib2.splitpasswd(auth)
1883+ passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
1884+                # Realm is set to None in add_password to force the username and password
1885+                # to be used regardless of the realm
1886+ passman.add_password(None, source, username, password)
1887+ authhandler = urllib2.HTTPBasicAuthHandler(passman)
1888+ opener = urllib2.build_opener(authhandler)
1889+ urllib2.install_opener(opener)
1890+ response = urllib2.urlopen(source)
1891+ try:
1892+ with open(dest, 'w') as dest_file:
1893+ dest_file.write(response.read())
1894+ except Exception as e:
1895+ if os.path.isfile(dest):
1896+ os.unlink(dest)
1897+ raise e
1898+
1899+ def install(self, source):
1900+ url_parts = self.parse_url(source)
1901+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
1902+ if not os.path.exists(dest_dir):
1903+ mkdir(dest_dir, perms=0755)
1904+ dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
1905+ try:
1906+ self.download(source, dld_file)
1907+ except urllib2.URLError as e:
1908+ raise UnhandledSource(e.reason)
1909+ except OSError as e:
1910+ raise UnhandledSource(e.strerror)
1911+ return extract(dld_file)
1912
1913=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
1914--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
1915+++ hooks/charmhelpers/fetch/bzrurl.py 2014-09-09 05:13:17 +0000
1916@@ -0,0 +1,50 @@
1917+import os
1918+from charmhelpers.fetch import (
1919+ BaseFetchHandler,
1920+ UnhandledSource
1921+)
1922+from charmhelpers.core.host import mkdir
1923+
1924+try:
1925+ from bzrlib.branch import Branch
1926+except ImportError:
1927+ from charmhelpers.fetch import apt_install
1928+ apt_install("python-bzrlib")
1929+ from bzrlib.branch import Branch
1930+
1931+
1932+class BzrUrlFetchHandler(BaseFetchHandler):
1933+ """Handler for bazaar branches via generic and lp URLs"""
1934+ def can_handle(self, source):
1935+ url_parts = self.parse_url(source)
1936+ if url_parts.scheme not in ('bzr+ssh', 'lp'):
1937+ return False
1938+ else:
1939+ return True
1940+
1941+ def branch(self, source, dest):
1942+ url_parts = self.parse_url(source)
1943+ # If we use lp:branchname scheme we need to load plugins
1944+ if not self.can_handle(source):
1945+ raise UnhandledSource("Cannot handle {}".format(source))
1946+ if url_parts.scheme == "lp":
1947+ from bzrlib.plugin import load_plugins
1948+ load_plugins()
1949+ try:
1950+ remote_branch = Branch.open(source)
1951+ remote_branch.bzrdir.sprout(dest).open_branch()
1952+ except Exception as e:
1953+ raise e
1954+
1955+ def install(self, source):
1956+ url_parts = self.parse_url(source)
1957+ branch_name = url_parts.path.strip("/").split("/")[-1]
1958+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
1959+ branch_name)
1960+ if not os.path.exists(dest_dir):
1961+ mkdir(dest_dir, perms=0755)
1962+ try:
1963+ self.branch(source, dest_dir)
1964+ except OSError as e:
1965+ raise UnhandledSource(e.strerror)
1966+ return dest_dir
1967
1968=== added directory 'hooks/storage-provider.d/ceph'
1969=== added file 'hooks/storage-provider.d/ceph/ceph-relation-changed'
1970--- hooks/storage-provider.d/ceph/ceph-relation-changed 1970-01-01 00:00:00 +0000
1971+++ hooks/storage-provider.d/ceph/ceph-relation-changed 2014-09-09 05:13:17 +0000
1972@@ -0,0 +1,69 @@
1973+#!/usr/bin/env python
1974+import sys
1975+
1976+from ceph_common import (
1977+ SERVICE_NAME,
1978+ POOL_NAME,
1979+ TEMPORARY_MOUNT_POINT,
1980+ RDB_IMG,
1981+ BLK_DEVICE,
1982+)
1983+
1984+from charmhelpers.core.hookenv import (
1985+ relation_get,
1986+ config,
1987+ log,
1988+)
1989+
1990+from charmhelpers.contrib.storage.linux import ceph
1991+
1992+"""
1993+This will mount a ceph image to a temporary location when the ceph relation
1994+is ready.
1995+
1996+Later, when the data relation fires and a mountpoint is defined, the image
1997+will be remounted at the requested mountpoint.
1998+"""
1999+
2000+
2001+REPLICA_COUNT = 3
2002+
2003+
2004+def main():
2005+ config_data = config()
2006+
2007+ # This shouldn't clash since the pool name should be a good enough
2008+ # "namespace".
2009+
2010+    volume_size_gb = config_data.get("volume_size")  # in GB; may be unset.
2011+    volume_size = volume_size_gb and int(volume_size_gb) * 1024  # GB -> MB.
2012+
2013+    auth = relation_get("auth")
2014+    key = relation_get("key")
2015+    use_syslog = relation_get("use-syslog")
2016+
2017+    required = [auth, key, use_syslog, volume_size]
2018+
2019+    if None in required:
2020+        log("Missing relation or config data: '%s'" % required)
2021+        sys.exit(0)
2022+
2023+ log("Configuring ceph client...")
2024+ ceph.configure(
2025+ service=SERVICE_NAME, key=key, auth=auth, use_syslog=use_syslog)
2026+
2027+ log("Mounting ceph image to temporary location '%s'"
2028+ "" % TEMPORARY_MOUNT_POINT)
2029+ ceph.ensure_ceph_storage(
2030+ service=SERVICE_NAME,
2031+ pool=POOL_NAME,
2032+ rdb_img=RDB_IMG,
2033+ sizemb=volume_size,
2034+ fstype="ext4",
2035+ mount_point=TEMPORARY_MOUNT_POINT,
2036+ blk_device=BLK_DEVICE,
2037+ rdb_pool_replicas=REPLICA_COUNT)
2038+
2039+
2040+if __name__ == "__main__":
2041+ main()
2042
2043=== added file 'hooks/storage-provider.d/ceph/config-changed'
2044--- hooks/storage-provider.d/ceph/config-changed 1970-01-01 00:00:00 +0000
2045+++ hooks/storage-provider.d/ceph/config-changed 2014-09-09 05:13:17 +0000
2046@@ -0,0 +1,6 @@
2047+#!/usr/bin/env python
2048+from charmhelpers.contrib.storage.linux.ceph import (
2049+ install,
2050+)
2051+
2052+install()
2053
2054=== added file 'hooks/storage-provider.d/ceph/data-relation-changed'
2055--- hooks/storage-provider.d/ceph/data-relation-changed 1970-01-01 00:00:00 +0000
2056+++ hooks/storage-provider.d/ceph/data-relation-changed 2014-09-09 05:13:17 +0000
2057@@ -0,0 +1,45 @@
2058+#!/usr/bin/env python
2059+import sys
2060+from ceph_common import (
2061+ TEMPORARY_MOUNT_POINT,
2062+ BLK_DEVICE,
2063+)
2064+from charmhelpers.core.hookenv import (
2065+ relation_get,
2066+ relation_set,
2067+ log,
2068+)
2069+
2070+from charmhelpers.core.host import (
2071+ mount,
2072+ mounts,
2073+ #umount,
2074+)
2075+
2076+
2077+def main():
2078+
2079+ # Noop until mountpoint is set.
2080+ mountpoint = relation_get("mountpoint")
2081+ if mountpoint is None:
2082+ log("No mountpoint received from the data relation. Noop.")
2083+ sys.exit(0) # No mountpoint requested yet - noop
2084+
2085+ # If the mountpoint is set, let's remount the temporary folder.
2086+ actual_mounts = mounts()
2087+ log("Getting mounts: %s" % actual_mounts)
2088+ mountpoints = [point[0] for point in actual_mounts] # returns a list of 2-lists
2089+
2090+ if TEMPORARY_MOUNT_POINT in mountpoints:
2091+ log("Found ceph block device, remounting to '%s'" % mountpoint)
2092+ # The ceph image is ready and mounted - let's remount it to the desired
2093+ # mountpoint.
2094+ mount(BLK_DEVICE, mountpoint)
2095+ log("Mountpoint ready, notifying data relation.")
2096+        relation_set(mountpoint=mountpoint)
2097+ else:
2098+ log("Temporary mountpoint not found (yet?). Noop.")
2099+ sys.exit(0)
2100+
2101+if __name__ == "__main__":
2102+ main()
2103
2104=== modified file 'metadata.yaml'
2105--- metadata.yaml 2014-02-10 16:58:17 +0000
2106+++ metadata.yaml 2014-09-09 05:13:17 +0000
2107@@ -18,3 +18,5 @@
2108 interface: mount
2109 block-storage:
2110 interface: volume-request
2111+ ceph:
2112+ interface: ceph-client
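
On the review note above about the missing `hooks/ceph-relation-changed` symlink: juju only dispatches a relation hook when an entry with that name exists in `hooks/`, and in this charm each such entry is a symlink to the `hooks/hooks` proxy. A minimal sketch of the fix, using a temporary directory as a stand-in for the real charm checkout (paths and the dispatcher name are assumed from the branch layout shown in this diff):

```python
import os
import tempfile

# Stand-in for the real charm checkout; in the branch this would be the
# existing working tree, not a temp dir.
charm_dir = tempfile.mkdtemp()
hooks_dir = os.path.join(charm_dir, "hooks")
os.makedirs(hooks_dir)

# Stand-in for the real hooks/hooks dispatcher script.
open(os.path.join(hooks_dir, "hooks"), "w").close()

# The actual fix: a relative symlink so juju finds the hook by name and
# the dispatcher proxies it to storage-provider.d/ceph/ceph-relation-changed.
link = os.path.join(hooks_dir, "ceph-relation-changed")
os.symlink("hooks", link)

print(os.readlink(link))  # -> hooks
```

In the branch itself this amounts to `ln -s hooks ceph-relation-changed` inside `hooks/`, followed by `bzr add` so the symlink is included in the merge.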
