Merge lp:~tribaal/charms/precise/storage/add-ceph-storage-provider into lp:charms/storage

Proposed by Chris Glass
Status: Work in progress
Proposed branch: lp:~tribaal/charms/precise/storage/add-ceph-storage-provider
Merge into: lp:charms/storage
Diff against target: 2112 lines (+1834/-27)
19 files modified
.bzrignore (+1/-0)
Makefile (+9/-9)
charm-helpers.yaml (+1/-0)
hooks/ceph_common.py (+12/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+387/-0)
hooks/charmhelpers/core/fstab.py (+116/-0)
hooks/charmhelpers/core/hookenv.py (+111/-6)
hooks/charmhelpers/core/host.py (+85/-12)
hooks/charmhelpers/core/services/__init__.py (+2/-0)
hooks/charmhelpers/core/services/base.py (+305/-0)
hooks/charmhelpers/core/services/helpers.py (+125/-0)
hooks/charmhelpers/core/templating.py (+51/-0)
hooks/charmhelpers/fetch/__init__.py (+394/-0)
hooks/charmhelpers/fetch/archiveurl.py (+63/-0)
hooks/charmhelpers/fetch/bzrurl.py (+50/-0)
hooks/storage-provider.d/ceph/ceph-relation-changed (+69/-0)
hooks/storage-provider.d/ceph/config-changed (+6/-0)
hooks/storage-provider.d/ceph/data-relation-changed (+45/-0)
metadata.yaml (+2/-0)
To merge this branch: bzr merge lp:~tribaal/charms/precise/storage/add-ceph-storage-provider
Reviewer Review Type Date Requested Status
David Britton (community) Needs Information
Review via email: mp+232239@code.launchpad.net

Description of the change

This branch adds basic support for a "ceph" storage provider that mounts a Ceph RBD image and uses it as storage.

It is currently rough (not working 100%), but I'd like eyes on the code before I go any further with it, since I don't understand exactly what's wrong with my approach.
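For context, here is a rough sketch (hypothetical, not part of this branch) of how the provider is meant to drive the Ceph helpers added in this diff. The names `ensure_ceph_storage`, `POOL_NAME`, `RDB_IMG` and `BLK_DEVICE` come from the diff itself (`ceph_common.py` and `charmhelpers/contrib/storage/linux/ceph.py`); the pool name, mount point and size shown here are made-up illustrative values.

```python
# Sketch only: mirrors ceph_common.py's constants and the call the
# data-relation-changed hook is expected to make.
POOL_NAME = "mystorage"  # ceph_common.py derives this from service_name()
RDB_IMG = "storage"
BLK_DEVICE = '/dev/rbd/%s/%s' % (POOL_NAME, RDB_IMG)


def provision(mount_point, sizemb=10240):
    # ensure_ceph_storage() (added under charmhelpers/contrib/storage/linux/
    # ceph.py in this diff) creates the pool and RBD image if needed, maps
    # the image to BLK_DEVICE, formats it, and mounts it at mount_point,
    # migrating any data already present there.
    from charmhelpers.contrib.storage.linux.ceph import ensure_ceph_storage
    ensure_ceph_storage(service=POOL_NAME, pool=POOL_NAME, rbd_img=RDB_IMG,
                        sizemb=sizemb, mount_point=mount_point,
                        blk_device=BLK_DEVICE, fstype='ext4')
```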

Revision history for this message
Chad Smith (chad.smith) wrote :

Chris, just on first glance it seems you are missing the ceph-relation-changed hook symlink in the hooks directory. Each of those hooks needs to be a link to hooks/hooks, which acts as a proxy that calls through to the underlying storage-provider.d/&lt;provider_name&gt;/your-hook-name. If the hook symlink doesn't exist at hooks/ceph-relation-changed you should see a "no ceph-relation-changed hook" message in the juju logs.
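The proxy arrangement described above could be sketched roughly like this (hypothetical code; the real dispatcher is the charm's hooks/hooks script, and `provider_hook_path`/`dispatch` are illustrative names):

```python
import os
import subprocess


def provider_hook_path(provider, hook_name, root="hooks/storage-provider.d"):
    # Each top-level hook symlink resolves to hooks/hooks, which proxies the
    # call through to storage-provider.d/<provider>/<hook-name>.
    return os.path.join(root, provider, hook_name)


def dispatch(provider, argv0):
    # Juju invokes the symlink (e.g. hooks/ceph-relation-changed); the proxy
    # picks the provider hook based on the name it was invoked under.
    hook_name = os.path.basename(argv0)
    target = provider_hook_path(provider, hook_name)
    if not os.path.exists(target):
        return 0  # provider doesn't implement this hook: nothing to do
    return subprocess.call([target])
```

Without the symlink in place, Juju never reaches the proxy at all, which is why the "no ceph-relation-changed hook" message appears in the logs.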

I'll peek more at this with a live deployment.

44. By Chris Glass

Added ceph-relation-changed hook symlink.

45. By Chris Glass

Make the ceph-relation-changed hook noop if all the necessary information is
not present (like the key).

46. By Chris Glass

added relation debug output

Revision history for this message
David Britton (dpb) wrote :

Hi Chris -- Thanks so much for doing this!

The general approach looks great. Can you implement the data-relation-departed hook as well? I noticed a commented-out 'unmount' import in the data-relation-changed hook; I thought maybe you were planning to do that when you stopped for a quick sanity check.

Before I run it through its paces, I wanted to make sure you include a sample deployment for working with ceph, at least here in the review, so we can compare apples to apples.

Also, I added a simple nit as a diff comment.

This change is quite exciting, glad so much of it is contained in charmhelpers. :)

review: Needs Information
Revision history for this message
Chris Glass (tribaal) wrote :

Yes, I started working on the unmount logic, but then realised this branch was already 2k lines, so I decided against putting even more code in it.

I'll add it in here, no problem.
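A minimal sketch of what such a data-relation-departed hook's unmount logic might look like (hypothetical, not in this branch; the helpers are passed in as parameters here purely for illustration, standing in for `charmhelpers.core.host.umount` and the `filesystem_mounted` check from the Ceph helpers):

```python
# Hypothetical data-relation-departed hook body, sketching the unmount
# logic discussed above. Return values follow hook exit-code convention.


def data_relation_departed(mount_point, umount, filesystem_mounted):
    # No-op when nothing is mounted, matching the defensive style of the
    # other hooks in this branch.
    if not filesystem_mounted(mount_point):
        return 0
    # persist=True also removes the /etc/fstab entry added at mount time.
    if not umount(mount_point, persist=True):
        return 1
    return 0
```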

47. By Chris Glass

Added exit(0)!

Unmerged revisions

47. By Chris Glass

Added exit(0)!

46. By Chris Glass

added relation debug output

45. By Chris Glass

Make the ceph-relation-changed hook noop if all the necessary information is
not present (like the key).

44. By Chris Glass

Added ceph-relation-changed hook symlink.

43. By Chris Glass

Added some debug logs...

42. By Chris Glass

Added logs to the data-relation-changed hook

41. By Chris Glass

Put the common ceph code in a proper python module.

40. By Chris Glass

Made hook executable. Duh.

39. By Chris Glass

Update charm-helpers along with the make file to sync them.

38. By Chris Glass

The hooks should work

Preview Diff

=== modified file '.bzrignore'
--- .bzrignore 2014-01-31 23:49:37 +0000
+++ .bzrignore 2014-09-09 05:13:17 +0000
@@ -1,2 +1,3 @@
 _trial_temp
 charm-helpers
+bin/
 
=== modified file 'Makefile'
--- Makefile 2014-02-10 22:52:50 +0000
+++ Makefile 2014-09-09 05:13:17 +0000
@@ -1,4 +1,6 @@
+#!/usr/bin/make
 .PHONY: test lint clean
+PYTHON := /usr/bin/env python
 
 clean:
 	find . -name *.pyc | xargs rm
@@ -11,12 +13,10 @@
 	# flake any python scripts not named *.py
 	fgrep -r bin/python hooks/ | awk -F : '{print $$1}' | xargs -r -n1 flake8
 
-
-update-charm-helpers:
-	# Pull latest charm-helpers branch and sync the components based on our
-	# charm-helpers.yaml
-	rm -rf charm-helpers
-	bzr co lp:charm-helpers
-	./charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py -c charm-helpers.yaml
-	rm -rf charm-helpers
-
+bin/charm_helpers_sync.py:
+	@mkdir -p bin
+	@bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
+	> bin/charm_helpers_sync.py
+
+sync: bin/charm_helpers_sync.py
+	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
 
=== modified file 'charm-helpers.yaml'
--- charm-helpers.yaml 2014-01-31 23:49:37 +0000
+++ charm-helpers.yaml 2014-09-09 05:13:17 +0000
@@ -3,3 +3,4 @@
 include:
   - core
   - fetch
+  - contrib.storage.linux.ceph
 
=== added symlink 'hooks/ceph-relation-changed'
=== target is u'hooks'
=== added file 'hooks/ceph_common.py'
--- hooks/ceph_common.py 1970-01-01 00:00:00 +0000
+++ hooks/ceph_common.py 2014-09-09 05:13:17 +0000
@@ -0,0 +1,12 @@
+
+from charmhelpers.core.hookenv import (
+    service_name,
+)
+
+SERVICE_NAME = service_name()  # TODO: Make sure it's the unit - not "storage"
+TEMPORARY_MOUNT_POINT = "/mnt/temp-ceph/%s" % SERVICE_NAME
+POOL_NAME = SERVICE_NAME
+
+
+RDB_IMG = "storage"
+BLK_DEVICE = '/dev/rbd/%s/%s' % (POOL_NAME, RDB_IMG)
=== added directory 'hooks/charmhelpers/contrib'
=== added file 'hooks/charmhelpers/contrib/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/storage'
=== added file 'hooks/charmhelpers/contrib/storage/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/storage/linux'
=== added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py'
=== added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-09-09 05:13:17 +0000
@@ -0,0 +1,387 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11import os
12import shutil
13import json
14import time
15
16from subprocess import (
17 check_call,
18 check_output,
19 CalledProcessError
20)
21
22from charmhelpers.core.hookenv import (
23 relation_get,
24 relation_ids,
25 related_units,
26 log,
27 INFO,
28 WARNING,
29 ERROR
30)
31
32from charmhelpers.core.host import (
33 mount,
34 mounts,
35 service_start,
36 service_stop,
37 service_running,
38 umount,
39)
40
41from charmhelpers.fetch import (
42 apt_install,
43)
44
45KEYRING = '/etc/ceph/ceph.client.{}.keyring'
46KEYFILE = '/etc/ceph/ceph.client.{}.key'
47
48CEPH_CONF = """[global]
49 auth supported = {auth}
50 keyring = {keyring}
51 mon host = {mon_hosts}
52 log to syslog = {use_syslog}
53 err to syslog = {use_syslog}
54 clog to syslog = {use_syslog}
55"""
56
57
58def install():
59 ''' Basic Ceph client installation '''
60 ceph_dir = "/etc/ceph"
61 if not os.path.exists(ceph_dir):
62 os.mkdir(ceph_dir)
63 apt_install('ceph-common', fatal=True)
64
65
66def rbd_exists(service, pool, rbd_img):
67 ''' Check to see if a RADOS block device exists '''
68 try:
69 out = check_output(['rbd', 'list', '--id', service,
70 '--pool', pool])
71 except CalledProcessError:
72 return False
73 else:
74 return rbd_img in out
75
76
77def create_rbd_image(service, pool, image, sizemb):
78 ''' Create a new RADOS block device '''
79 cmd = [
80 'rbd',
81 'create',
82 image,
83 '--size',
84 str(sizemb),
85 '--id',
86 service,
87 '--pool',
88 pool
89 ]
90 check_call(cmd)
91
92
93def pool_exists(service, name):
94 ''' Check to see if a RADOS pool already exists '''
95 try:
96 out = check_output(['rados', '--id', service, 'lspools'])
97 except CalledProcessError:
98 return False
99 else:
100 return name in out
101
102
103def get_osds(service):
104 '''
105 Return a list of all Ceph Object Storage Daemons
106 currently in the cluster
107 '''
108 version = ceph_version()
109 if version and version >= '0.56':
110 return json.loads(check_output(['ceph', '--id', service,
111 'osd', 'ls', '--format=json']))
112 else:
113 return None
114
115
116def create_pool(service, name, replicas=2):
117 ''' Create a new RADOS pool '''
118 if pool_exists(service, name):
119 log("Ceph pool {} already exists, skipping creation".format(name),
120 level=WARNING)
121 return
122 # Calculate the number of placement groups based
123 # on upstream recommended best practices.
124 osds = get_osds(service)
125 if osds:
126 pgnum = (len(osds) * 100 / replicas)
127 else:
128 # NOTE(james-page): Default to 200 for older ceph versions
129 # which don't support OSD query from cli
130 pgnum = 200
131 cmd = [
132 'ceph', '--id', service,
133 'osd', 'pool', 'create',
134 name, str(pgnum)
135 ]
136 check_call(cmd)
137 cmd = [
138 'ceph', '--id', service,
139 'osd', 'pool', 'set', name,
140 'size', str(replicas)
141 ]
142 check_call(cmd)
143
144
145def delete_pool(service, name):
146 ''' Delete a RADOS pool from ceph '''
147 cmd = [
148 'ceph', '--id', service,
149 'osd', 'pool', 'delete',
150 name, '--yes-i-really-really-mean-it'
151 ]
152 check_call(cmd)
153
154
155def _keyfile_path(service):
156 return KEYFILE.format(service)
157
158
159def _keyring_path(service):
160 return KEYRING.format(service)
161
162
163def create_keyring(service, key):
164 ''' Create a new Ceph keyring containing key'''
165 keyring = _keyring_path(service)
166 if os.path.exists(keyring):
167 log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
168 return
169 cmd = [
170 'ceph-authtool',
171 keyring,
172 '--create-keyring',
173 '--name=client.{}'.format(service),
174 '--add-key={}'.format(key)
175 ]
176 check_call(cmd)
177 log('ceph: Created new ring at %s.' % keyring, level=INFO)
178
179
180def create_key_file(service, key):
181 ''' Create a file containing key '''
182 keyfile = _keyfile_path(service)
183 if os.path.exists(keyfile):
184 log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
185 return
186 with open(keyfile, 'w') as fd:
187 fd.write(key)
188 log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
189
190
191def get_ceph_nodes():
192 ''' Query named relation 'ceph' to detemine current nodes '''
193 hosts = []
194 for r_id in relation_ids('ceph'):
195 for unit in related_units(r_id):
196 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
197 return hosts
198
199
200def configure(service, key, auth, use_syslog):
201 ''' Perform basic configuration of Ceph '''
202 create_keyring(service, key)
203 create_key_file(service, key)
204 hosts = get_ceph_nodes()
205 with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
206 ceph_conf.write(CEPH_CONF.format(auth=auth,
207 keyring=_keyring_path(service),
208 mon_hosts=",".join(map(str, hosts)),
209 use_syslog=use_syslog))
210 modprobe('rbd')
211
212
213def image_mapped(name):
214 ''' Determine whether a RADOS block device is mapped locally '''
215 try:
216 out = check_output(['rbd', 'showmapped'])
217 except CalledProcessError:
218 return False
219 else:
220 return name in out
221
222
223def map_block_storage(service, pool, image):
224 ''' Map a RADOS block device for local use '''
225 cmd = [
226 'rbd',
227 'map',
228 '{}/{}'.format(pool, image),
229 '--user',
230 service,
231 '--secret',
232 _keyfile_path(service),
233 ]
234 check_call(cmd)
235
236
237def filesystem_mounted(fs):
238 ''' Determine whether a filesytems is already mounted '''
239 return fs in [f for f, m in mounts()]
240
241
242def make_filesystem(blk_device, fstype='ext4', timeout=10):
243 ''' Make a new filesystem on the specified block device '''
244 count = 0
245 e_noent = os.errno.ENOENT
246 while not os.path.exists(blk_device):
247 if count >= timeout:
248 log('ceph: gave up waiting on block device %s' % blk_device,
249 level=ERROR)
250 raise IOError(e_noent, os.strerror(e_noent), blk_device)
251 log('ceph: waiting for block device %s to appear' % blk_device,
252 level=INFO)
253 count += 1
254 time.sleep(1)
255 else:
256 log('ceph: Formatting block device %s as filesystem %s.' %
257 (blk_device, fstype), level=INFO)
258 check_call(['mkfs', '-t', fstype, blk_device])
259
260
261def place_data_on_block_device(blk_device, data_src_dst):
262 ''' Migrate data in data_src_dst to blk_device and then remount '''
263 # mount block device into /mnt
264 mount(blk_device, '/mnt')
265 # copy data to /mnt
266 copy_files(data_src_dst, '/mnt')
267 # umount block device
268 umount('/mnt')
269 # Grab user/group ID's from original source
270 _dir = os.stat(data_src_dst)
271 uid = _dir.st_uid
272 gid = _dir.st_gid
273 # re-mount where the data should originally be
274 # TODO: persist is currently a NO-OP in core.host
275 mount(blk_device, data_src_dst, persist=True)
276 # ensure original ownership of new mount.
277 os.chown(data_src_dst, uid, gid)
278
279
280# TODO: re-use
281def modprobe(module):
282 ''' Load a kernel module and configure for auto-load on reboot '''
283 log('ceph: Loading kernel module', level=INFO)
284 cmd = ['modprobe', module]
285 check_call(cmd)
286 with open('/etc/modules', 'r+') as modules:
287 if module not in modules.read():
288 modules.write(module)
289
290
291def copy_files(src, dst, symlinks=False, ignore=None):
292 ''' Copy files from src to dst '''
293 for item in os.listdir(src):
294 s = os.path.join(src, item)
295 d = os.path.join(dst, item)
296 if os.path.isdir(s):
297 shutil.copytree(s, d, symlinks, ignore)
298 else:
299 shutil.copy2(s, d)
300
301
302def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
303 blk_device, fstype, system_services=[]):
304 """
305 NOTE: This function must only be called from a single service unit for
306 the same rbd_img otherwise data loss will occur.
307
308 Ensures given pool and RBD image exists, is mapped to a block device,
309 and the device is formatted and mounted at the given mount_point.
310
311 If formatting a device for the first time, data existing at mount_point
312 will be migrated to the RBD device before being re-mounted.
313
314 All services listed in system_services will be stopped prior to data
315 migration and restarted when complete.
316 """
317 # Ensure pool, RBD image, RBD mappings are in place.
318 if not pool_exists(service, pool):
319 log('ceph: Creating new pool {}.'.format(pool))
320 create_pool(service, pool)
321
322 if not rbd_exists(service, pool, rbd_img):
323 log('ceph: Creating RBD image ({}).'.format(rbd_img))
324 create_rbd_image(service, pool, rbd_img, sizemb)
325
326 if not image_mapped(rbd_img):
327 log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
328 map_block_storage(service, pool, rbd_img)
329
330 # make file system
331 # TODO: What happens if for whatever reason this is run again and
332 # the data is already in the rbd device and/or is mounted??
333 # When it is mounted already, it will fail to make the fs
334 # XXX: This is really sketchy! Need to at least add an fstab entry
335 # otherwise this hook will blow away existing data if its executed
336 # after a reboot.
337 if not filesystem_mounted(mount_point):
338 make_filesystem(blk_device, fstype)
339
340 for svc in system_services:
341 if service_running(svc):
342 log('ceph: Stopping services {} prior to migrating data.'
343 .format(svc))
344 service_stop(svc)
345
346 place_data_on_block_device(blk_device, mount_point)
347
348 for svc in system_services:
349 log('ceph: Starting service {} after migrating data.'
350 .format(svc))
351 service_start(svc)
352
353
354def ensure_ceph_keyring(service, user=None, group=None):
355 '''
356 Ensures a ceph keyring is created for a named service
357 and optionally ensures user and group ownership.
358
359 Returns False if no ceph key is available in relation state.
360 '''
361 key = None
362 for rid in relation_ids('ceph'):
363 for unit in related_units(rid):
364 key = relation_get('key', rid=rid, unit=unit)
365 if key:
366 break
367 if not key:
368 return False
369 create_keyring(service=service, key=key)
370 keyring = _keyring_path(service)
371 if user and group:
372 check_call(['chown', '%s.%s' % (user, group), keyring])
373 return True
374
375
376def ceph_version():
377 ''' Retrieve the local version of ceph '''
378 if os.path.exists('/usr/bin/ceph'):
379 cmd = ['ceph', '-v']
380 output = check_output(cmd)
381 output = output.split()
382 if len(output) > 3:
383 return output[2]
384 else:
385 return None
386 else:
387 return None
=== added file 'hooks/charmhelpers/core/fstab.py'
--- hooks/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/fstab.py 2014-09-09 05:13:17 +0000
@@ -0,0 +1,116 @@
1#!/usr/bin/env python
2# -*- coding: utf-8 -*-
3
4__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
5
6import os
7
8
9class Fstab(file):
10 """This class extends file in order to implement a file reader/writer
11 for file `/etc/fstab`
12 """
13
14 class Entry(object):
15 """Entry class represents a non-comment line on the `/etc/fstab` file
16 """
17 def __init__(self, device, mountpoint, filesystem,
18 options, d=0, p=0):
19 self.device = device
20 self.mountpoint = mountpoint
21 self.filesystem = filesystem
22
23 if not options:
24 options = "defaults"
25
26 self.options = options
27 self.d = d
28 self.p = p
29
30 def __eq__(self, o):
31 return str(self) == str(o)
32
33 def __str__(self):
34 return "{} {} {} {} {} {}".format(self.device,
35 self.mountpoint,
36 self.filesystem,
37 self.options,
38 self.d,
39 self.p)
40
41 DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
42
43 def __init__(self, path=None):
44 if path:
45 self._path = path
46 else:
47 self._path = self.DEFAULT_PATH
48 file.__init__(self, self._path, 'r+')
49
50 def _hydrate_entry(self, line):
51 # NOTE: use split with no arguments to split on any
52 # whitespace including tabs
53 return Fstab.Entry(*filter(
54 lambda x: x not in ('', None),
55 line.strip("\n").split()))
56
57 @property
58 def entries(self):
59 self.seek(0)
60 for line in self.readlines():
61 try:
62 if not line.startswith("#"):
63 yield self._hydrate_entry(line)
64 except ValueError:
65 pass
66
67 def get_entry_by_attr(self, attr, value):
68 for entry in self.entries:
69 e_attr = getattr(entry, attr)
70 if e_attr == value:
71 return entry
72 return None
73
74 def add_entry(self, entry):
75 if self.get_entry_by_attr('device', entry.device):
76 return False
77
78 self.write(str(entry) + '\n')
79 self.truncate()
80 return entry
81
82 def remove_entry(self, entry):
83 self.seek(0)
84
85 lines = self.readlines()
86
87 found = False
88 for index, line in enumerate(lines):
89 if not line.startswith("#"):
90 if self._hydrate_entry(line) == entry:
91 found = True
92 break
93
94 if not found:
95 return False
96
97 lines.remove(line)
98
99 self.seek(0)
100 self.write(''.join(lines))
101 self.truncate()
102 return True
103
104 @classmethod
105 def remove_by_mountpoint(cls, mountpoint, path=None):
106 fstab = cls(path=path)
107 entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
108 if entry:
109 return fstab.remove_entry(entry)
110 return False
111
112 @classmethod
113 def add(cls, device, mountpoint, filesystem, options=None, path=None):
114 return cls(path=path).add_entry(Fstab.Entry(device,
115 mountpoint, filesystem,
116 options=options))
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2014-01-14 21:55:40 +0000
+++ hooks/charmhelpers/core/hookenv.py 2014-09-09 05:13:17 +0000
@@ -8,6 +8,7 @@
 import json
 import yaml
 import subprocess
+import sys
 import UserDict
 from subprocess import CalledProcessError
 
@@ -24,7 +25,7 @@
 def cached(func):
     """Cache return values for multiple executions of func + args
 
-    For example:
+    For example::
 
     @cached
     def unit_get(attribute):
@@ -149,6 +150,105 @@
     return local_unit().split('/')[0]
 
 
+def hook_name():
+    """The name of the currently executing hook"""
+    return os.path.basename(sys.argv[0])
+
+
+class Config(dict):
+    """A Juju charm config dictionary that can write itself to
+    disk (as json) and track which values have changed since
+    the previous hook invocation.
+
+    Do not instantiate this object directly - instead call
+    ``hookenv.config()``
+
+    Example usage::
+
+        >>> # inside a hook
+        >>> from charmhelpers.core import hookenv
+        >>> config = hookenv.config()
+        >>> config['foo']
+        'bar'
+        >>> config['mykey'] = 'myval'
+        >>> config.save()
+
+
+        >>> # user runs `juju set mycharm foo=baz`
+        >>> # now we're inside subsequent config-changed hook
+        >>> config = hookenv.config()
+        >>> config['foo']
+        'baz'
+        >>> # test to see if this val has changed since last hook
+        >>> config.changed('foo')
+        True
+        >>> # what was the previous value?
+        >>> config.previous('foo')
+        'bar'
+        >>> # keys/values that we add are preserved across hooks
+        >>> config['mykey']
+        'myval'
+        >>> # don't forget to save at the end of hook!
+        >>> config.save()
+
+    """
+    CONFIG_FILE_NAME = '.juju-persistent-config'
+
+    def __init__(self, *args, **kw):
+        super(Config, self).__init__(*args, **kw)
+        self._prev_dict = None
+        self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
+        if os.path.exists(self.path):
+            self.load_previous()
+
+    def load_previous(self, path=None):
+        """Load previous copy of config from disk so that current values
+        can be compared to previous values.
+
+        :param path:
+
+            File path from which to load the previous config. If `None`,
+            config is loaded from the default location. If `path` is
+            specified, subsequent `save()` calls will write to the same
+            path.
+
+        """
+        self.path = path or self.path
+        with open(self.path) as f:
+            self._prev_dict = json.load(f)
+
+    def changed(self, key):
+        """Return true if the value for this key has changed since
+        the last save.
+
+        """
+        if self._prev_dict is None:
+            return True
+        return self.previous(key) != self.get(key)
+
+    def previous(self, key):
+        """Return previous value for this key, or None if there
+        is no "previous" value.
+
+        """
+        if self._prev_dict:
+            return self._prev_dict.get(key)
+        return None
+
+    def save(self):
+        """Save this config to disk.
+
+        Preserves items in _prev_dict that do not exist in self.
+
+        """
+        if self._prev_dict:
+            for k, v in self._prev_dict.iteritems():
+                if k not in self:
+                    self[k] = v
+        with open(self.path, 'w') as f:
+            json.dump(self, f)
+
+
 @cached
 def config(scope=None):
     """Juju charm configuration"""
@@ -157,7 +257,10 @@
     config_cmd_line.append(scope)
     config_cmd_line.append('--format=json')
     try:
-        return json.loads(subprocess.check_output(config_cmd_line))
+        config_data = json.loads(subprocess.check_output(config_cmd_line))
+        if scope is not None:
+            return config_data
+        return Config(config_data)
     except ValueError:
         return None
 
@@ -182,8 +285,9 @@
         raise
 
 
-def relation_set(relation_id=None, relation_settings={}, **kwargs):
+def relation_set(relation_id=None, relation_settings=None, **kwargs):
     """Set relation information for the current unit"""
+    relation_settings = relation_settings if relation_settings else {}
     relation_cmd_line = ['relation-set']
     if relation_id is not None:
         relation_cmd_line.extend(('-r', relation_id))
@@ -342,18 +446,19 @@
 class Hooks(object):
     """A convenient handler for hook functions.
 
-    Example:
+    Example::
+
     hooks = Hooks()
 
     # register a hook, taking its name from the function name
     @hooks.hook()
     def install():
-        ...
+        pass  # your code here
 
     # register a hook, providing a custom hook name
     @hooks.hook("config-changed")
     def config_changed():
-        ...
+        pass  # your code here
 
     if __name__ == "__main__":
         # execute a hook based on the name the program is called by
 
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2014-01-14 21:55:40 +0000
+++ hooks/charmhelpers/core/host.py 2014-09-09 05:13:17 +0000
@@ -12,10 +12,13 @@
 import string
 import subprocess
 import hashlib
+import shutil
+from contextlib import contextmanager
 
 from collections import OrderedDict
 
 from hookenv import log
+from fstab import Fstab
 
 
 def service_start(service_name):
@@ -34,7 +37,8 @@
 
 
 def service_reload(service_name, restart_on_failure=False):
-    """Reload a system service, optionally falling back to restart if reload fails"""
+    """Reload a system service, optionally falling back to restart if
+    reload fails"""
     service_result = service('reload', service_name)
     if not service_result and restart_on_failure:
         service_result = service('restart', service_name)
@@ -50,7 +54,7 @@
 def service_running(service):
     """Determine whether a system service is running"""
     try:
-        output = subprocess.check_output(['service', service, 'status'])
+        output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
     except subprocess.CalledProcessError:
         return False
     else:
@@ -60,6 +64,16 @@
         return False
 
 
+def service_available(service_name):
+    """Determine whether a system service is available"""
+    try:
+        subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
+    except subprocess.CalledProcessError:
+        return False
+    else:
+        return True
+
+
 def adduser(username, password=None, shell='/bin/bash', system_user=False):
     """Add a user to the system"""
     try:
@@ -143,7 +157,19 @@
         target.write(content)
 
 
-def mount(device, mountpoint, options=None, persist=False):
+def fstab_remove(mp):
+    """Remove the given mountpoint entry from /etc/fstab
+    """
+    return Fstab.remove_by_mountpoint(mp)
+
+
+def fstab_add(dev, mp, fs, options=None):
+    """Adds the given device entry to the /etc/fstab file
+    """
+    return Fstab.add(dev, mp, fs, options=options)
+
+
+def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
     """Mount a filesystem at a particular mountpoint"""
     cmd_args = ['mount']
     if options is not None:
@@ -154,9 +180,9 @@
     except subprocess.CalledProcessError, e:
         log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
         return False
+
     if persist:
-        # TODO: update fstab
-        pass
+        return fstab_add(device, mountpoint, filesystem, options=options)
     return True
 
 
@@ -168,9 +194,9 @@
     except subprocess.CalledProcessError, e:
         log('Error unmounting {}\n{}'.format(mountpoint, e.output))
         return False
+
     if persist:
-        # TODO: update fstab
-        pass
+        return fstab_remove(mountpoint)
     return True
 
 
@@ -194,16 +220,16 @@
     return None
 
 
-def restart_on_change(restart_map):
+def restart_on_change(restart_map, stopstart=False):
     """Restart services based on configuration files changing
 
-    This function is used a decorator, for example
+    This function is used a decorator, for example::
 
     @restart_on_change({
         '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
     })
     def ceph_client_changed():
-        ...
+        pass  # your code here
 
     In this example, the cinder-api and cinder-volume services
     would be restarted if /etc/ceph/ceph.conf is changed by the
@@ -219,8 +245,14 @@
             for path in restart_map:
                 if checksums[path] != file_hash(path):
                     restarts += restart_map[path]
-            for service_name in list(OrderedDict.fromkeys(restarts)):
-                service('restart', service_name)
+            services_list = list(OrderedDict.fromkeys(restarts))
+            if not stopstart:
+                for service_name in services_list:
+                    service('restart', service_name)
+            else:
+                for action in ['stop', 'start']:
+                    for service_name in services_list:
+                        service(action, service_name)
         return wrapped_f
     return wrap
 
@@ -289,3 +321,44 @@
289 if 'link/ether' in words:321 if 'link/ether' in words:
290 hwaddr = words[words.index('link/ether') + 1]322 hwaddr = words[words.index('link/ether') + 1]
291 return hwaddr323 return hwaddr
324
325
326def cmp_pkgrevno(package, revno, pkgcache=None):
327 '''Compare supplied revno with the revno of the installed package
328
329 * 1 => Installed revno is greater than supplied arg
330 * 0 => Installed revno is the same as supplied arg
331 * -1 => Installed revno is less than supplied arg
332
333 '''
334 import apt_pkg
335 if not pkgcache:
336 apt_pkg.init()
337 # Force Apt to build its cache in memory. That way we avoid race
338 # conditions with other applications building the cache in the same
339 # place.
340 apt_pkg.config.set("Dir::Cache::pkgcache", "")
341 pkgcache = apt_pkg.Cache()
342 pkg = pkgcache[package]
343 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
344
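`cmp_pkgrevno` relies on `apt_pkg`, which only exists on Debian-based systems. Its contract (1/0/-1) can be illustrated with a stand-in that compares plain dotted versions; note that real Debian version comparison also handles epochs, tildes and letters, so `compare_versions` below is a hypothetical, simplified sketch only:

```python
def compare_versions(installed, supplied):
    # Simplified stand-in for apt_pkg.version_compare: split on dots and
    # compare numerically. Real dpkg version ordering is more involved.
    ta = tuple(int(x) for x in installed.split('.'))
    tb = tuple(int(x) for x in supplied.split('.'))
    return (ta > tb) - (ta < tb)

print(compare_versions('0.67.1', '0.67'))  # -> 1  (installed is newer)
print(compare_versions('0.48', '0.67'))    # -> -1 (installed is older)
print(compare_versions('0.67', '0.67'))    # -> 0  (same revno)
```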
345
346@contextmanager
347def chdir(d):
348 cur = os.getcwd()
349 try:
350 yield os.chdir(d)
351 finally:
352 os.chdir(cur)
353
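The `chdir` helper above is a standard save/restore context manager; a quick self-contained check that the working directory is restored even after the `with` body runs:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def chdir(d):
    # Save the current working directory, switch to d, and always switch
    # back, even if the body raises.
    cur = os.getcwd()
    try:
        yield os.chdir(d)
    finally:
        os.chdir(cur)

before = os.getcwd()
target = tempfile.mkdtemp()
with chdir(target):
    inside = os.getcwd()   # resolved path of target while inside the block
os.rmdir(target)           # cwd is already restored at this point
```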
354
355def chownr(path, owner, group):
356 uid = pwd.getpwnam(owner).pw_uid
357 gid = grp.getgrnam(group).gr_gid
358
359 for root, dirs, files in os.walk(path):
360 for name in dirs + files:
361 full = os.path.join(root, name)
362 broken_symlink = os.path.lexists(full) and not os.path.exists(full)
363 if not broken_symlink:
364 os.chown(full, uid, gid)
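The stop/start ordering added to `restart_on_change` above can be exercised in isolation by recording actions instead of invoking `service()`. `plan_restarts` below is a hypothetical test harness mirroring the hunk, not part of the charm:

```python
from collections import OrderedDict

def plan_restarts(restarts, stopstart=False):
    # Mirror of the hunk above: dedupe while preserving order, then either
    # restart each service, or stop them all before starting them all.
    services_list = list(OrderedDict.fromkeys(restarts))
    actions = []
    if not stopstart:
        for name in services_list:
            actions.append(('restart', name))
    else:
        for action in ['stop', 'start']:
            for name in services_list:
                actions.append((action, name))
    return actions

print(plan_restarts(['a', 'b', 'a']))
# -> [('restart', 'a'), ('restart', 'b')]
print(plan_restarts(['a', 'b'], stopstart=True))
# -> [('stop', 'a'), ('stop', 'b'), ('start', 'a'), ('start', 'b')]
```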
=== added directory 'hooks/charmhelpers/core/services'
=== added file 'hooks/charmhelpers/core/services/__init__.py'
--- hooks/charmhelpers/core/services/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/services/__init__.py 2014-09-09 05:13:17 +0000
@@ -0,0 +1,2 @@
1from .base import *
2from .helpers import *
=== added file 'hooks/charmhelpers/core/services/base.py'
--- hooks/charmhelpers/core/services/base.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/services/base.py 2014-09-09 05:13:17 +0000
@@ -0,0 +1,305 @@
1import os
2import re
3import json
4from collections import Iterable
5
6from charmhelpers.core import host
7from charmhelpers.core import hookenv
8
9
10__all__ = ['ServiceManager', 'ManagerCallback',
11 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports',
12 'service_restart', 'service_stop']
13
14
15class ServiceManager(object):
16 def __init__(self, services=None):
17 """
18 Register a list of services, given their definitions.
19
20 Traditional charm authoring is focused on implementing hooks. That is,
21 the charm author is thinking in terms of "What hook am I handling; what
22 does this hook need to do?" However, in most cases, the real question
23 should be "Do I have the information I need to configure and start this
24 piece of software and, if so, what are the steps for doing so?" The
25 ServiceManager framework tries to bring the focus to the data and the
26 setup tasks, in the most declarative way possible.
27
28 Service definitions are dicts in the following formats (all keys except
29 'service' are optional)::
30
31 {
32 "service": <service name>,
33 "required_data": <list of required data contexts>,
34 "data_ready": <one or more callbacks>,
35 "data_lost": <one or more callbacks>,
36 "start": <one or more callbacks>,
37 "stop": <one or more callbacks>,
38 "ports": <list of ports to manage>,
39 }
40
41 The 'required_data' list should contain dicts of required data (or
42 dependency managers that act like dicts and know how to collect the data).
 43 Only when all items in the 'required_data' list are populated are the
 44 'data_ready' and 'start' callbacks executed. See `is_ready()` for more
45 information.
46
47 The 'data_ready' value should be either a single callback, or a list of
48 callbacks, to be called when all items in 'required_data' pass `is_ready()`.
49 Each callback will be called with the service name as the only parameter.
50 After all of the 'data_ready' callbacks are called, the 'start' callbacks
51 are fired.
52
53 The 'data_lost' value should be either a single callback, or a list of
54 callbacks, to be called when a 'required_data' item no longer passes
55 `is_ready()`. Each callback will be called with the service name as the
56 only parameter. After all of the 'data_lost' callbacks are called,
57 the 'stop' callbacks are fired.
58
59 The 'start' value should be either a single callback, or a list of
60 callbacks, to be called when starting the service, after the 'data_ready'
61 callbacks are complete. Each callback will be called with the service
62 name as the only parameter. This defaults to
63 `[host.service_start, services.open_ports]`.
64
65 The 'stop' value should be either a single callback, or a list of
66 callbacks, to be called when stopping the service. If the service is
67 being stopped because it no longer has all of its 'required_data', this
68 will be called after all of the 'data_lost' callbacks are complete.
69 Each callback will be called with the service name as the only parameter.
70 This defaults to `[services.close_ports, host.service_stop]`.
71
72 The 'ports' value should be a list of ports to manage. The default
73 'start' handler will open the ports after the service is started,
74 and the default 'stop' handler will close the ports prior to stopping
75 the service.
76
77
78 Examples:
79
80 The following registers an Upstart service called bingod that depends on
81 a mongodb relation and which runs a custom `db_migrate` function prior to
82 restarting the service, and a Runit service called spadesd::
83
84 manager = services.ServiceManager([
85 {
86 'service': 'bingod',
87 'ports': [80, 443],
88 'required_data': [MongoRelation(), config(), {'my': 'data'}],
89 'data_ready': [
90 services.template(source='bingod.conf'),
91 services.template(source='bingod.ini',
92 target='/etc/bingod.ini',
93 owner='bingo', perms=0400),
94 ],
95 },
96 {
97 'service': 'spadesd',
98 'data_ready': services.template(source='spadesd_run.j2',
99 target='/etc/sv/spadesd/run',
100 perms=0555),
101 'start': runit_start,
102 'stop': runit_stop,
103 },
104 ])
105 manager.manage()
106 """
107 self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
108 self._ready = None
109 self.services = {}
110 for service in services or []:
111 service_name = service['service']
112 self.services[service_name] = service
113
114 def manage(self):
115 """
116 Handle the current hook by doing The Right Thing with the registered services.
117 """
118 hook_name = hookenv.hook_name()
119 if hook_name == 'stop':
120 self.stop_services()
121 else:
122 self.provide_data()
123 self.reconfigure_services()
124
125 def provide_data(self):
126 hook_name = hookenv.hook_name()
127 for service in self.services.values():
128 for provider in service.get('provided_data', []):
129 if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name):
130 data = provider.provide_data()
131 if provider._is_ready(data):
132 hookenv.relation_set(None, data)
133
134 def reconfigure_services(self, *service_names):
135 """
136 Update all files for one or more registered services, and,
137 if ready, optionally restart them.
138
139 If no service names are given, reconfigures all registered services.
140 """
141 for service_name in service_names or self.services.keys():
142 if self.is_ready(service_name):
143 self.fire_event('data_ready', service_name)
144 self.fire_event('start', service_name, default=[
145 service_restart,
146 manage_ports])
147 self.save_ready(service_name)
148 else:
149 if self.was_ready(service_name):
150 self.fire_event('data_lost', service_name)
151 self.fire_event('stop', service_name, default=[
152 manage_ports,
153 service_stop])
154 self.save_lost(service_name)
155
156 def stop_services(self, *service_names):
157 """
158 Stop one or more registered services, by name.
159
160 If no service names are given, stops all registered services.
161 """
162 for service_name in service_names or self.services.keys():
163 self.fire_event('stop', service_name, default=[
164 manage_ports,
165 service_stop])
166
167 def get_service(self, service_name):
168 """
169 Given the name of a registered service, return its service definition.
170 """
171 service = self.services.get(service_name)
172 if not service:
173 raise KeyError('Service not registered: %s' % service_name)
174 return service
175
176 def fire_event(self, event_name, service_name, default=None):
177 """
178 Fire a data_ready, data_lost, start, or stop event on a given service.
179 """
180 service = self.get_service(service_name)
181 callbacks = service.get(event_name, default)
182 if not callbacks:
183 return
184 if not isinstance(callbacks, Iterable):
185 callbacks = [callbacks]
186 for callback in callbacks:
187 if isinstance(callback, ManagerCallback):
188 callback(self, service_name, event_name)
189 else:
190 callback(service_name)
191
192 def is_ready(self, service_name):
193 """
194 Determine if a registered service is ready, by checking its 'required_data'.
195
196 A 'required_data' item can be any mapping type, and is considered ready
197 if `bool(item)` evaluates as True.
198 """
199 service = self.get_service(service_name)
200 reqs = service.get('required_data', [])
201 return all(bool(req) for req in reqs)
202
203 def _load_ready_file(self):
204 if self._ready is not None:
205 return
206 if os.path.exists(self._ready_file):
207 with open(self._ready_file) as fp:
208 self._ready = set(json.load(fp))
209 else:
210 self._ready = set()
211
212 def _save_ready_file(self):
213 if self._ready is None:
214 return
215 with open(self._ready_file, 'w') as fp:
216 json.dump(list(self._ready), fp)
217
218 def save_ready(self, service_name):
219 """
220 Save an indicator that the given service is now data_ready.
221 """
222 self._load_ready_file()
223 self._ready.add(service_name)
224 self._save_ready_file()
225
226 def save_lost(self, service_name):
227 """
228 Save an indicator that the given service is no longer data_ready.
229 """
230 self._load_ready_file()
231 self._ready.discard(service_name)
232 self._save_ready_file()
233
234 def was_ready(self, service_name):
235 """
236 Determine if the given service was previously data_ready.
237 """
238 self._load_ready_file()
239 return service_name in self._ready
240
241
242class ManagerCallback(object):
243 """
244 Special case of a callback that takes the `ServiceManager` instance
245 in addition to the service name.
246
247 Subclasses should implement `__call__` which should accept three parameters:
248
249 * `manager` The `ServiceManager` instance
250 * `service_name` The name of the service it's being triggered for
251 * `event_name` The name of the event that this callback is handling
252 """
253 def __call__(self, manager, service_name, event_name):
254 raise NotImplementedError()
255
256
257class PortManagerCallback(ManagerCallback):
258 """
259 Callback class that will open or close ports, for use as either
260 a start or stop action.
261 """
262 def __call__(self, manager, service_name, event_name):
263 service = manager.get_service(service_name)
264 new_ports = service.get('ports', [])
265 port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name))
266 if os.path.exists(port_file):
267 with open(port_file) as fp:
268 old_ports = fp.read().split(',')
269 for old_port in old_ports:
270 if bool(old_port):
271 old_port = int(old_port)
272 if old_port not in new_ports:
273 hookenv.close_port(old_port)
274 with open(port_file, 'w') as fp:
275 fp.write(','.join(str(port) for port in new_ports))
276 for port in new_ports:
277 if event_name == 'start':
278 hookenv.open_port(port)
279 elif event_name == 'stop':
280 hookenv.close_port(port)
281
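`PortManagerCallback` persists the previously opened ports in a `.{service}.ports` file and closes any port that has dropped out of the service definition. The diffing step, lifted out as a sketch (`ports_to_close` is a hypothetical helper name):

```python
def ports_to_close(port_file_contents, new_ports):
    # The port file is a comma-separated list, possibly empty; any old
    # port missing from the new definition must be closed.
    old_ports = [int(p) for p in port_file_contents.split(',') if p]
    return [p for p in old_ports if p not in new_ports]

print(ports_to_close('80,8080', [80, 443]))  # -> [8080]
```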
282
283def service_stop(service_name):
284 """
285 Wrapper around host.service_stop to prevent spurious "unknown service"
286 messages in the logs.
287 """
288 if host.service_running(service_name):
289 host.service_stop(service_name)
290
291
292def service_restart(service_name):
293 """
294 Wrapper around host.service_restart to prevent spurious "unknown service"
295 messages in the logs.
296 """
297 if host.service_available(service_name):
298 if host.service_running(service_name):
299 host.service_restart(service_name)
300 else:
301 host.service_start(service_name)
302
303
304# Convenience aliases
305open_ports = close_ports = manage_ports = PortManagerCallback()
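The gatekeeping in `ServiceManager.is_ready` reduces to truthiness of each `'required_data'` item, as the docstring above describes. A minimal standalone sketch of that check:

```python
def is_ready(service):
    # Ready only when every 'required_data' item is truthy; an empty dict
    # (e.g. a RelationContext with no complete units yet) blocks the service.
    return all(bool(req) for req in service.get('required_data', []))

svc = {'service': 'bingod', 'required_data': [{'db': 'up'}, {}]}
print(is_ready(svc))   # -> False: the empty dict is falsy
svc['required_data'][1]['cfg'] = 'ok'
print(is_ready(svc))   # -> True
```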
=== added file 'hooks/charmhelpers/core/services/helpers.py'
--- hooks/charmhelpers/core/services/helpers.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/services/helpers.py 2014-09-09 05:13:17 +0000
@@ -0,0 +1,125 @@
1from charmhelpers.core import hookenv
2from charmhelpers.core import templating
3
4from charmhelpers.core.services.base import ManagerCallback
5
6
7__all__ = ['RelationContext', 'TemplateCallback',
8 'render_template', 'template']
9
10
11class RelationContext(dict):
12 """
13 Base class for a context generator that gets relation data from juju.
14
15 Subclasses must provide the attributes `name`, which is the name of the
16 interface of interest, `interface`, which is the type of the interface of
17 interest, and `required_keys`, which is the set of keys required for the
18 relation to be considered complete. The data for all interfaces matching
 19 the `name` attribute that are complete will be used to populate the dictionary
20 values (see `get_data`, below).
21
22 The generated context will be namespaced under the interface type, to prevent
23 potential naming conflicts.
24 """
25 name = None
26 interface = None
27 required_keys = []
28
29 def __init__(self, *args, **kwargs):
30 super(RelationContext, self).__init__(*args, **kwargs)
31 self.get_data()
32
33 def __bool__(self):
34 """
35 Returns True if all of the required_keys are available.
36 """
37 return self.is_ready()
38
39 __nonzero__ = __bool__
40
41 def __repr__(self):
42 return super(RelationContext, self).__repr__()
43
44 def is_ready(self):
45 """
46 Returns True if all of the `required_keys` are available from any units.
47 """
48 ready = len(self.get(self.name, [])) > 0
49 if not ready:
50 hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG)
51 return ready
52
53 def _is_ready(self, unit_data):
54 """
55 Helper method that tests a set of relation data and returns True if
56 all of the `required_keys` are present.
57 """
58 return set(unit_data.keys()).issuperset(set(self.required_keys))
59
60 def get_data(self):
61 """
62 Retrieve the relation data for each unit involved in a relation and,
63 if complete, store it in a list under `self[self.name]`. This
64 is automatically called when the RelationContext is instantiated.
65
 66 The units are sorted lexicographically first by the service ID, then by
67 the unit ID. Thus, if an interface has two other services, 'db:1'
68 and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1',
69 and 'db:2' having one unit, 'mediawiki/0', all of which have a complete
70 set of data, the relation data for the units will be stored in the
71 order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'.
72
73 If you only care about a single unit on the relation, you can just
74 access it as `{{ interface[0]['key'] }}`. However, if you can at all
75 support multiple units on a relation, you should iterate over the list,
76 like::
77
78 {% for unit in interface -%}
79 {{ unit['key'] }}{% if not loop.last %},{% endif %}
80 {%- endfor %}
81
82 Note that since all sets of relation data from all related services and
83 units are in a single list, if you need to know which service or unit a
84 set of data came from, you'll need to extend this class to preserve
85 that information.
86 """
87 if not hookenv.relation_ids(self.name):
88 return
89
90 ns = self.setdefault(self.name, [])
91 for rid in sorted(hookenv.relation_ids(self.name)):
92 for unit in sorted(hookenv.related_units(rid)):
93 reldata = hookenv.relation_get(rid=rid, unit=unit)
94 if self._is_ready(reldata):
95 ns.append(reldata)
96
97 def provide_data(self):
98 """
99 Return data to be relation_set for this interface.
100 """
101 return {}
102
103
104class TemplateCallback(ManagerCallback):
105 """
106 Callback class that will render a template, for use as a ready action.
107 """
108 def __init__(self, source, target, owner='root', group='root', perms=0444):
109 self.source = source
110 self.target = target
111 self.owner = owner
112 self.group = group
113 self.perms = perms
114
115 def __call__(self, manager, service_name, event_name):
116 service = manager.get_service(service_name)
117 context = {}
118 for ctx in service.get('required_data', []):
119 context.update(ctx)
120 templating.render(self.source, self.target, context,
121 self.owner, self.group, self.perms)
122
123
124# Convenience aliases for templates
125render_template = template = TemplateCallback
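A charm would subclass `RelationContext` with concrete `name`, `interface` and `required_keys` attributes; the completeness test at its core (`_is_ready`) is just a set-superset check, restated here as a free function (`unit_is_ready` is a hypothetical name):

```python
def unit_is_ready(unit_data, required_keys):
    # Mirrors RelationContext._is_ready: a unit's relation data is complete
    # when it carries at least all of the required keys.
    return set(unit_data.keys()).issuperset(set(required_keys))

print(unit_is_ready({'host': 'x', 'port': '5432'}, ['host', 'port']))  # -> True
print(unit_is_ready({'host': 'x'}, ['host', 'port']))                  # -> False
```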
=== added file 'hooks/charmhelpers/core/templating.py'
--- hooks/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/templating.py 2014-09-09 05:13:17 +0000
@@ -0,0 +1,51 @@
1import os
2
3from charmhelpers.core import host
4from charmhelpers.core import hookenv
5
6
7def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
8 """
9 Render a template.
10
11 The `source` path, if not absolute, is relative to the `templates_dir`.
12
13 The `target` path should be absolute.
14
15 The context should be a dict containing the values to be replaced in the
16 template.
17
18 The `owner`, `group`, and `perms` options will be passed to `write_file`.
19
20 If omitted, `templates_dir` defaults to the `templates` folder in the charm.
21
22 Note: Using this requires python-jinja2; if it is not installed, calling
23 this will attempt to use charmhelpers.fetch.apt_install to install it.
24 """
25 try:
26 from jinja2 import FileSystemLoader, Environment, exceptions
27 except ImportError:
28 try:
29 from charmhelpers.fetch import apt_install
30 except ImportError:
31 hookenv.log('Could not import jinja2, and could not import '
32 'charmhelpers.fetch to install it',
33 level=hookenv.ERROR)
34 raise
35 apt_install('python-jinja2', fatal=True)
36 from jinja2 import FileSystemLoader, Environment, exceptions
37
38 if templates_dir is None:
39 templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
40 loader = Environment(loader=FileSystemLoader(templates_dir))
41 try:
42 source = source
43 template = loader.get_template(source)
44 except exceptions.TemplateNotFound as e:
45 hookenv.log('Could not load template %s from %s.' %
46 (source, templates_dir),
47 level=hookenv.ERROR)
48 raise e
49 content = template.render(context)
50 host.mkdir(os.path.dirname(target))
51 host.write_file(target, content, owner, group, perms)
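`render()` depends on python-jinja2, which may not be installed where this is read. The data flow it implements (template text + context dict -> rendered content) can be mimicked with the stdlib for a quick sanity check, assuming a `$`-style placeholder instead of Jinja2 syntax:

```python
from string import Template

def render_sketch(source_text, context):
    # Stdlib stand-in for Environment.get_template(...).render(context);
    # only substitution is modelled, not loops/filters/file output.
    return Template(source_text).substitute(context)

print(render_sketch('mon host = $mon_hosts', {'mon_hosts': '10.0.0.1:6789'}))
# -> mon host = 10.0.0.1:6789
```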
=== added directory 'hooks/charmhelpers/fetch'
=== added file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2014-09-09 05:13:17 +0000
@@ -0,0 +1,394 @@
1import importlib
2from tempfile import NamedTemporaryFile
3import time
4from yaml import safe_load
5from charmhelpers.core.host import (
6 lsb_release
7)
8from urlparse import (
9 urlparse,
10 urlunparse,
11)
12import subprocess
13from charmhelpers.core.hookenv import (
14 config,
15 log,
16)
17import os
18
19
20CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
21deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
22"""
23PROPOSED_POCKET = """# Proposed
24deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
25"""
26CLOUD_ARCHIVE_POCKETS = {
27 # Folsom
28 'folsom': 'precise-updates/folsom',
29 'precise-folsom': 'precise-updates/folsom',
30 'precise-folsom/updates': 'precise-updates/folsom',
31 'precise-updates/folsom': 'precise-updates/folsom',
32 'folsom/proposed': 'precise-proposed/folsom',
33 'precise-folsom/proposed': 'precise-proposed/folsom',
34 'precise-proposed/folsom': 'precise-proposed/folsom',
35 # Grizzly
36 'grizzly': 'precise-updates/grizzly',
37 'precise-grizzly': 'precise-updates/grizzly',
38 'precise-grizzly/updates': 'precise-updates/grizzly',
39 'precise-updates/grizzly': 'precise-updates/grizzly',
40 'grizzly/proposed': 'precise-proposed/grizzly',
41 'precise-grizzly/proposed': 'precise-proposed/grizzly',
42 'precise-proposed/grizzly': 'precise-proposed/grizzly',
43 # Havana
44 'havana': 'precise-updates/havana',
45 'precise-havana': 'precise-updates/havana',
46 'precise-havana/updates': 'precise-updates/havana',
47 'precise-updates/havana': 'precise-updates/havana',
48 'havana/proposed': 'precise-proposed/havana',
49 'precise-havana/proposed': 'precise-proposed/havana',
50 'precise-proposed/havana': 'precise-proposed/havana',
51 # Icehouse
52 'icehouse': 'precise-updates/icehouse',
53 'precise-icehouse': 'precise-updates/icehouse',
54 'precise-icehouse/updates': 'precise-updates/icehouse',
55 'precise-updates/icehouse': 'precise-updates/icehouse',
56 'icehouse/proposed': 'precise-proposed/icehouse',
57 'precise-icehouse/proposed': 'precise-proposed/icehouse',
58 'precise-proposed/icehouse': 'precise-proposed/icehouse',
59 # Juno
60 'juno': 'trusty-updates/juno',
61 'trusty-juno': 'trusty-updates/juno',
62 'trusty-juno/updates': 'trusty-updates/juno',
63 'trusty-updates/juno': 'trusty-updates/juno',
 64 'juno/proposed': 'trusty-proposed/juno',
66 'trusty-juno/proposed': 'trusty-proposed/juno',
67 'trusty-proposed/juno': 'trusty-proposed/juno',
68}
69
 70# The order of this list is very important. Handlers should be listed
 71# from least- to most-specific URL matching.
72FETCH_HANDLERS = (
73 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
74 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
75)
76
77APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
78APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
79APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
80
81
82class SourceConfigError(Exception):
83 pass
84
85
86class UnhandledSource(Exception):
87 pass
88
89
90class AptLockError(Exception):
91 pass
92
93
94class BaseFetchHandler(object):
95
96 """Base class for FetchHandler implementations in fetch plugins"""
97
98 def can_handle(self, source):
99 """Returns True if the source can be handled. Otherwise returns
100 a string explaining why it cannot"""
101 return "Wrong source type"
102
103 def install(self, source):
104 """Try to download and unpack the source. Return the path to the
105 unpacked files or raise UnhandledSource."""
106 raise UnhandledSource("Wrong source type {}".format(source))
107
108 def parse_url(self, url):
109 return urlparse(url)
110
111 def base_url(self, url):
112 """Return url without querystring or fragment"""
113 parts = list(self.parse_url(url))
114 parts[4:] = ['' for i in parts[4:]]
115 return urlunparse(parts)
116
117
118def filter_installed_packages(packages):
119 """Returns a list of packages that require installation"""
120 import apt_pkg
121 apt_pkg.init()
122
123 # Tell apt to build an in-memory cache to prevent race conditions (if
124 # another process is already building the cache).
125 apt_pkg.config.set("Dir::Cache::pkgcache", "")
126 apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
127
128 cache = apt_pkg.Cache()
129 _pkgs = []
130 for package in packages:
131 try:
132 p = cache[package]
133 p.current_ver or _pkgs.append(package)
134 except KeyError:
135 log('Package {} has no installation candidate.'.format(package),
136 level='WARNING')
137 _pkgs.append(package)
138 return _pkgs
139
140
141def apt_install(packages, options=None, fatal=False):
142 """Install one or more packages"""
143 if options is None:
144 options = ['--option=Dpkg::Options::=--force-confold']
145
146 cmd = ['apt-get', '--assume-yes']
147 cmd.extend(options)
148 cmd.append('install')
149 if isinstance(packages, basestring):
150 cmd.append(packages)
151 else:
152 cmd.extend(packages)
153 log("Installing {} with options: {}".format(packages,
154 options))
155 _run_apt_command(cmd, fatal)
156
157
158def apt_upgrade(options=None, fatal=False, dist=False):
159 """Upgrade all packages"""
160 if options is None:
161 options = ['--option=Dpkg::Options::=--force-confold']
162
163 cmd = ['apt-get', '--assume-yes']
164 cmd.extend(options)
165 if dist:
166 cmd.append('dist-upgrade')
167 else:
168 cmd.append('upgrade')
169 log("Upgrading with options: {}".format(options))
170 _run_apt_command(cmd, fatal)
171
172
173def apt_update(fatal=False):
174 """Update local apt cache"""
175 cmd = ['apt-get', 'update']
176 _run_apt_command(cmd, fatal)
177
178
179def apt_purge(packages, fatal=False):
180 """Purge one or more packages"""
181 cmd = ['apt-get', '--assume-yes', 'purge']
182 if isinstance(packages, basestring):
183 cmd.append(packages)
184 else:
185 cmd.extend(packages)
186 log("Purging {}".format(packages))
187 _run_apt_command(cmd, fatal)
188
189
190def apt_hold(packages, fatal=False):
191 """Hold one or more packages"""
192 cmd = ['apt-mark', 'hold']
193 if isinstance(packages, basestring):
194 cmd.append(packages)
195 else:
196 cmd.extend(packages)
197 log("Holding {}".format(packages))
198
199 if fatal:
200 subprocess.check_call(cmd)
201 else:
202 subprocess.call(cmd)
203
204
205def add_source(source, key=None):
206 """Add a package source to this system.
207
208 @param source: a URL or sources.list entry, as supported by
209 add-apt-repository(1). Examples:
210 ppa:charmers/example
211 deb https://stub:key@private.example.com/ubuntu trusty main
212
213 In addition:
214 'proposed:' may be used to enable the standard 'proposed'
215 pocket for the release.
216 'cloud:' may be used to activate official cloud archive pockets,
217 such as 'cloud:icehouse'
218
219 @param key: A key to be added to the system's APT keyring and used
220 to verify the signatures on packages. Ideally, this should be an
221 ASCII format GPG public key including the block headers. A GPG key
222 id may also be used, but be aware that only insecure protocols are
223 available to retrieve the actual public key from a public keyserver,
224 placing your Juju environment at risk. PPA and cloud archive keys
225 are securely added automatically, so should not be provided.
226 """
227 if source is None:
228 log('Source is not present. Skipping')
229 return
230
231 if (source.startswith('ppa:') or
232 source.startswith('http') or
233 source.startswith('deb ') or
234 source.startswith('cloud-archive:')):
235 subprocess.check_call(['add-apt-repository', '--yes', source])
236 elif source.startswith('cloud:'):
237 apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
238 fatal=True)
239 pocket = source.split(':')[-1]
240 if pocket not in CLOUD_ARCHIVE_POCKETS:
241 raise SourceConfigError(
242 'Unsupported cloud: source option %s' %
243 pocket)
244 actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
245 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
246 apt.write(CLOUD_ARCHIVE.format(actual_pocket))
247 elif source == 'proposed':
248 release = lsb_release()['DISTRIB_CODENAME']
249 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
250 apt.write(PROPOSED_POCKET.format(release))
251 else:
252 raise SourceConfigError("Unknown source: {!r}".format(source))
253
254 if key:
255 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
256 with NamedTemporaryFile() as key_file:
257 key_file.write(key)
258 key_file.flush()
259 key_file.seek(0)
260 subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file)
261 else:
262 # Note that hkp: is in no way a secure protocol. Using a
263 # GPG key id is pointless from a security POV unless you
264 # absolutely trust your network and DNS.
265 subprocess.check_call(['apt-key', 'adv', '--keyserver',
266 'hkp://keyserver.ubuntu.com:80', '--recv',
267 key])
268
269
270def configure_sources(update=False,
271 sources_var='install_sources',
272 keys_var='install_keys'):
273 """
274 Configure multiple sources from charm configuration.
275
276 The lists are encoded as yaml fragments in the configuration.
277 The fragment needs to be included as a string. Sources and their
278 corresponding keys are of the types supported by add_source().
279
280 Example config:
281 install_sources: |
282 - "ppa:foo"
283 - "http://example.com/repo precise main"
284 install_keys: |
285 - null
286 - "a1b2c3d4"
287
288 Note that 'null' (a.k.a. None) should not be quoted.
289 """
290 sources = safe_load((config(sources_var) or '').strip()) or []
291 keys = safe_load((config(keys_var) or '').strip()) or None
292
293 if isinstance(sources, basestring):
294 sources = [sources]
295
296 if keys is None:
297 for source in sources:
298 add_source(source, None)
299 else:
300 if isinstance(keys, basestring):
301 keys = [keys]
302
303 if len(sources) != len(keys):
304 raise SourceConfigError(
305 'Install sources and keys lists are different lengths')
306 for source, key in zip(sources, keys):
307 add_source(source, key)
308 if update:
309 apt_update(fatal=True)
310
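`configure_sources` normalises scalar config values to lists and insists the sources and keys lists line up one-to-one. That pairing logic, lifted out as a sketch (`pair_sources` is a hypothetical name, and `str` replaces the Python 2 `basestring` for this Python 3 sketch):

```python
def pair_sources(sources, keys):
    # Mirror of configure_sources' normalisation: scalars become
    # one-element lists; with no keys, every source is unkeyed; otherwise
    # sources and keys must have equal lengths and are zipped together.
    if isinstance(sources, str):
        sources = [sources]
    if keys is None:
        return [(s, None) for s in sources]
    if isinstance(keys, str):
        keys = [keys]
    if len(sources) != len(keys):
        raise ValueError('Install sources and keys lists are different lengths')
    return list(zip(sources, keys))

print(pair_sources(['ppa:foo', 'http://example.com/repo precise main'],
                   [None, 'a1b2c3d4']))
# -> [('ppa:foo', None), ('http://example.com/repo precise main', 'a1b2c3d4')]
```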
311
312def install_remote(source):
313 """
314 Install a file tree from a remote source
315
316 The specified source should be a url of the form:
317 scheme://[host]/path[#[option=value][&...]]
318
319 Schemes supported are based on this module's submodules.
320 Options supported are submodule-specific"""
321 # We ONLY check for True here because can_handle may return a string
322 # explaining why it can't handle a given source.
323 handlers = [h for h in plugins() if h.can_handle(source) is True]
324 installed_to = None
325 for handler in handlers:
326 try:
327 installed_to = handler.install(source)
328 except UnhandledSource:
329 pass
330 if not installed_to:
331 raise UnhandledSource("No handler found for source {}".format(source))
332 return installed_to
333
334
335def install_from_config(config_var_name):
336 charm_config = config()
337 source = charm_config[config_var_name]
338 return install_remote(source)
339
340
341def plugins(fetch_handlers=None):
342 if not fetch_handlers:
343 fetch_handlers = FETCH_HANDLERS
344 plugin_list = []
345 for handler_name in fetch_handlers:
346 package, classname = handler_name.rsplit('.', 1)
347 try:
348 handler_class = getattr(
349 importlib.import_module(package),
350 classname)
351 plugin_list.append(handler_class())
352 except (ImportError, AttributeError):
353 # Skip missing plugins so that they can be omitted from
354 # installation if desired
355 log("FetchHandler {} not found, skipping plugin".format(
356 handler_name))
357 return plugin_list
358
359
360def _run_apt_command(cmd, fatal=False):
361 """
362 Run an APT command, retrying while the apt lock cannot be acquired
363 if the fatal flag is set to True.
364
365 :param cmd: str: The apt command to run.
366 :param fatal: bool: Whether the command's return code should be checked
367 and the command retried on apt lock contention.
368 """
369 env = os.environ.copy()
370
371 if 'DEBIAN_FRONTEND' not in env:
372 env['DEBIAN_FRONTEND'] = 'noninteractive'
373
374 if fatal:
375 retry_count = 0
376 result = None
377
378 # If the command is considered "fatal", we need to retry if the apt
379 # lock was not acquired.
380
381 while result is None or result == APT_NO_LOCK:
382 try:
383 result = subprocess.check_call(cmd, env=env)
384 except subprocess.CalledProcessError, e:
385 retry_count = retry_count + 1
386 if retry_count > APT_NO_LOCK_RETRY_COUNT:
387 raise
388 result = e.returncode
389 log("Couldn't acquire DPKG lock. Will retry in {} seconds."
390 "".format(APT_NO_LOCK_RETRY_DELAY))
391 time.sleep(APT_NO_LOCK_RETRY_DELAY)
392
393 else:
394 subprocess.call(cmd, env=env)
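The retry loop in `_run_apt_command` only ever loops when apt exits with the lock-held return code. Its generic shape, with an injected callable standing in for `subprocess.check_call` (names below are illustrative, and the sleep between retries is omitted):

```python
APT_NO_LOCK = 100            # apt's "couldn't acquire lock" return code
APT_NO_LOCK_RETRY_COUNT = 3  # smaller budget than the module's 30, for demo

def run_with_lock_retry(fn):
    # Keep retrying while the command reports the lock is held, up to a
    # retry budget; any other outcome ends the loop.
    retries = 0
    result = None
    while result is None or result == APT_NO_LOCK:
        try:
            result = fn()
        except RuntimeError:  # stands in for subprocess.CalledProcessError
            retries += 1
            if retries > APT_NO_LOCK_RETRY_COUNT:
                raise
            result = APT_NO_LOCK
    return result

attempts = {'n': 0}
def flaky():
    # Fails twice with a simulated lock error, then succeeds.
    attempts['n'] += 1
    if attempts['n'] < 3:
        raise RuntimeError('could not acquire dpkg lock')
    return 0

result = run_with_lock_retry(flaky)  # succeeds on the third attempt
```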
=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 2014-09-09 05:13:17 +0000
@@ -0,0 +1,63 @@
1import os
2import urllib2
3import urlparse
4
5from charmhelpers.fetch import (
6 BaseFetchHandler,
7 UnhandledSource
8)
9from charmhelpers.payload.archive import (
10 get_archive_handler,
11 extract,
12)
13from charmhelpers.core.host import mkdir
14
15
16class ArchiveUrlFetchHandler(BaseFetchHandler):
17 """Handler for archives via generic URLs"""
18 def can_handle(self, source):
19 url_parts = self.parse_url(source)
20 if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
21 return "Wrong source type"
22 if get_archive_handler(self.base_url(source)):
23 return True
24 return False
25
26 def download(self, source, dest):
 27 # propagate all exceptions
28 # URLError, OSError, etc
29 proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
30 if proto in ('http', 'https'):
31 auth, barehost = urllib2.splituser(netloc)
32 if auth is not None:
33 source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
34 username, password = urllib2.splitpasswd(auth)
35 passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
36 # Realm is set to None in add_password to force the username and password
37 # to be used whatever the realm
38 passman.add_password(None, source, username, password)
39 authhandler = urllib2.HTTPBasicAuthHandler(passman)
40 opener = urllib2.build_opener(authhandler)
41 urllib2.install_opener(opener)
42 response = urllib2.urlopen(source)
43 try:
44 with open(dest, 'w') as dest_file:
45 dest_file.write(response.read())
46 except Exception as e:
47 if os.path.isfile(dest):
48 os.unlink(dest)
49 raise e
50
51 def install(self, source):
52 url_parts = self.parse_url(source)
53 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
54 if not os.path.exists(dest_dir):
55 mkdir(dest_dir, perms=0755)
56 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
57 try:
58 self.download(source, dld_file)
59 except urllib2.URLError as e:
60 raise UnhandledSource(e.reason)
61 except OSError as e:
62 raise UnhandledSource(e.strerror)
63 return extract(dld_file)
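The credential handling in `download()` above splits `user:pass@host` out of the netloc before fetching. A small self-contained sketch of just that splitting step (the function name `split_auth` is made up for illustration; urllib2's `splituser`/`splitpasswd` do the equivalent job in Python 2):

```python
try:
    from urllib.parse import urlparse, urlunparse  # Python 3
except ImportError:
    from urlparse import urlparse, urlunparse      # Python 2


def split_auth(source):
    """Return (username, password, bare_url) for a URL that may embed
    credentials, e.g. https://user:secret@example.com/a.tgz."""
    parts = urlparse(source)
    if "@" not in parts.netloc:
        return None, None, source
    # rsplit guards against "@" appearing inside the password.
    auth, barehost = parts.netloc.rsplit("@", 1)
    username, _, password = auth.partition(":")
    bare = urlunparse((parts.scheme, barehost, parts.path,
                       parts.params, parts.query, parts.fragment))
    return username, password or None, bare


if __name__ == "__main__":
    print(split_auth("https://user:secret@example.com/a.tgz"))
```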
=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 2014-09-09 05:13:17 +0000
@@ -0,0 +1,50 @@
+import os
+from charmhelpers.fetch import (
+    BaseFetchHandler,
+    UnhandledSource
+)
+from charmhelpers.core.host import mkdir
+
+try:
+    from bzrlib.branch import Branch
+except ImportError:
+    from charmhelpers.fetch import apt_install
+    apt_install("python-bzrlib")
+    from bzrlib.branch import Branch
+
+
+class BzrUrlFetchHandler(BaseFetchHandler):
+    """Handler for bazaar branches via generic and lp URLs"""
+    def can_handle(self, source):
+        url_parts = self.parse_url(source)
+        return url_parts.scheme in ('bzr+ssh', 'lp')
+
+    def branch(self, source, dest):
+        if not self.can_handle(source):
+            raise UnhandledSource("Cannot handle {}".format(source))
+        url_parts = self.parse_url(source)
+        # If we use the lp:branchname scheme we need to load plugins.
+        if url_parts.scheme == "lp":
+            from bzrlib.plugin import load_plugins
+            load_plugins()
+        remote_branch = Branch.open(source)
+        remote_branch.bzrdir.sprout(dest).open_branch()
+
+    def install(self, source):
+        url_parts = self.parse_url(source)
+        branch_name = url_parts.path.strip("/").split("/")[-1]
+        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
+                                branch_name)
+        if not os.path.exists(dest_dir):
+            mkdir(dest_dir, perms=0755)
+        try:
+            self.branch(source, dest_dir)
+        except OSError as e:
+            raise UnhandledSource(e.strerror)
+        return dest_dir
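`install()` above derives the checkout directory from the last path segment of the branch URL. A sketch of just that derivation, runnable outside a charm; the `branch_dest` helper and the example `CHARM_DIR` value are illustrative, not part of charm-helpers:

```python
import os

try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2


def branch_dest(source, charm_dir):
    """Mirror the dest_dir computation from BzrUrlFetchHandler.install():
    CHARM_DIR/fetched/<last path segment of the branch URL>."""
    url_parts = urlparse(source)
    branch_name = url_parts.path.strip("/").split("/")[-1]
    return os.path.join(charm_dir, "fetched", branch_name)


if __name__ == "__main__":
    print(branch_dest("bzr+ssh://bazaar.launchpad.net/~user/storage/trunk",
                      "/var/lib/juju/charm"))
```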
=== added directory 'hooks/storage-provider.d/ceph'
=== added file 'hooks/storage-provider.d/ceph/ceph-relation-changed'
--- hooks/storage-provider.d/ceph/ceph-relation-changed 1970-01-01 00:00:00 +0000
+++ hooks/storage-provider.d/ceph/ceph-relation-changed 2014-09-09 05:13:17 +0000
@@ -0,0 +1,69 @@
+#!/usr/bin/env python
+"""Mount a ceph image to a temporary location when the ceph relation is
+ready.
+
+Later, when the data relation fires and a mountpoint is defined, the
+temporary mountpoint will be remounted.
+"""
+import sys
+
+from ceph_common import (
+    SERVICE_NAME,
+    POOL_NAME,
+    TEMPORARY_MOUNT_POINT,
+    RDB_IMG,
+    BLK_DEVICE,
+)
+
+from charmhelpers.core.hookenv import (
+    relation_get,
+    config,
+    log,
+)
+
+from charmhelpers.contrib.storage.linux import ceph
+
+
+REPLICA_COUNT = 3
+
+
+def main():
+    config_data = config()
+
+    volume_size_gb = config_data.get("volume_size")  # in GB.
+    if volume_size_gb is None:
+        log("No volume_size configured yet. Noop.")
+        sys.exit(0)
+    volume_size = volume_size_gb * 1024  # The ceph helper expects MB.
+
+    auth = relation_get("auth")
+    key = relation_get("key")
+    use_syslog = relation_get("use-syslog")
+
+    relation_data = [auth, key, use_syslog]
+
+    if None in relation_data:
+        log("Not all relation data received: '%s'" % relation_data)
+        sys.exit(0)
+
+    log("Configuring ceph client...")
+    ceph.configure(
+        service=SERVICE_NAME, key=key, auth=auth, use_syslog=use_syslog)
+
+    # The image name shouldn't clash with other services', since the pool
+    # name acts as a good enough "namespace".
+    log("Mounting ceph image to temporary location '%s'"
+        "" % TEMPORARY_MOUNT_POINT)
+    ceph.ensure_ceph_storage(
+        service=SERVICE_NAME,
+        pool=POOL_NAME,
+        rdb_img=RDB_IMG,
+        sizemb=volume_size,
+        fstype="ext4",
+        mount_point=TEMPORARY_MOUNT_POINT,
+        blk_device=BLK_DEVICE,
+        rdb_pool_replicas=REPLICA_COUNT)
+
+
+if __name__ == "__main__":
+    main()
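The hook above bails out unless every piece of relation data has arrived; relations fire several times as keys trickle in, so every hook run must tolerate partial data. A tiny sketch of that guard as a reusable check (the `missing_keys` helper is illustrative, not a charm-helpers API):

```python
def missing_keys(relation_data, required):
    """Return the required keys not yet present (or None) in relation_data.

    An empty result means the hook can proceed; otherwise it should log
    and exit 0, waiting for a later -changed invocation.
    """
    return [key for key in required
            if relation_data.get(key) is None]


if __name__ == "__main__":
    # Only "auth" has arrived so far; "key" and "use-syslog" are pending.
    print(missing_keys({"auth": "cephx"}, ["auth", "key", "use-syslog"]))
```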
=== added file 'hooks/storage-provider.d/ceph/config-changed'
--- hooks/storage-provider.d/ceph/config-changed 1970-01-01 00:00:00 +0000
+++ hooks/storage-provider.d/ceph/config-changed 2014-09-09 05:13:17 +0000
@@ -0,0 +1,6 @@
+#!/usr/bin/env python
+from charmhelpers.contrib.storage.linux.ceph import (
+    install,
+)
+
+install()
=== added file 'hooks/storage-provider.d/ceph/data-relation-changed'
--- hooks/storage-provider.d/ceph/data-relation-changed 1970-01-01 00:00:00 +0000
+++ hooks/storage-provider.d/ceph/data-relation-changed 2014-09-09 05:13:17 +0000
@@ -0,0 +1,45 @@
1#!/usr/bin/env python
2import sys
3from ceph_common import (
4 TEMPORARY_MOUNT_POINT,
5 BLK_DEVICE,
6)
7from charmhelpers.core.hookenv import (
8 relation_get,
9 relation_set,
10 log,
11)
12
13from charmhelpers.core.host import (
14 mount,
15 mounts,
16 #umount,
17)
18
19
20def main():
21
22 # Noop until mountpoint is set.
23 mountpoint = relation_get("mountpoint")
24 if mountpoint is None:
25 log("No mountpoint received from the data relation. Noop.")
26 sys.exit(0) # No mountpoint requested yet - noop
27
28 # If the mountpoint is set, let's remount the temporary folder.
29 actual_mounts = mounts()
30 log("Getting mounts: %s" % actual_mounts)
31 mountpoints = [point[0] for point in actual_mounts] # returns a list of 2-lists
32
33 if TEMPORARY_MOUNT_POINT in mountpoints:
34 log("Found ceph block device, remounting to '%s'" % mountpoint)
35 # The ceph image is ready and mounted - let's remount it to the desired
36 # mountpoint.
37 mount(BLK_DEVICE, mountpoint)
38 log("Mountpoint ready, notifying data relation.")
39 relation_set("mountpoint", mountpoint)
40 else:
41 log("Temporary mountpoint not found (yet?). Noop.")
42 sys.exit(0)
43
44if __name__ == "__main__":
45 main()
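`mounts()` in charm-helpers returns a list of `[mountpoint, device]` pairs, which the hook above scans for the temporary mountpoint. A sketch of that lookup against canned data (the helper name `device_for` and the sample paths are made up for illustration):

```python
def device_for(mount_list, mountpoint):
    """Return the device mounted at mountpoint, or None.

    mount_list has the shape charmhelpers.core.host.mounts() returns:
    a list of [mountpoint, device] pairs.
    """
    for point, device in mount_list:
        if point == mountpoint:
            return device
    return None


if __name__ == "__main__":
    canned = [["/", "/dev/sda1"], ["/mnt/ceph-tmp", "/dev/rbd0"]]
    print(device_for(canned, "/mnt/ceph-tmp"))
```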
=== modified file 'metadata.yaml'
--- metadata.yaml 2014-02-10 16:58:17 +0000
+++ metadata.yaml 2014-09-09 05:13:17 +0000
@@ -18,3 +18,5 @@
     interface: mount
   block-storage:
     interface: volume-request
+  ceph:
+    interface: ceph-client
