Merge lp:~james-page/charms/precise/ceph/charm-helpers into lp:~charmers/charms/precise/ceph/trunk

Proposed by James Page
Status: Merged
Merged at revision: 62
Proposed branch: lp:~james-page/charms/precise/ceph/charm-helpers
Merge into: lp:~charmers/charms/precise/ceph/trunk
Diff against target: 1983 lines (+1146/-437)
13 files modified
.pydevproject (+1/-3)
Makefile (+8/-0)
README.md (+9/-9)
charm-helpers-sync.yaml (+7/-0)
hooks/ceph.py (+126/-26)
hooks/charmhelpers/contrib/storage/linux/utils.py (+25/-0)
hooks/charmhelpers/core/hookenv.py (+334/-0)
hooks/charmhelpers/core/host.py (+273/-0)
hooks/charmhelpers/fetch/__init__.py (+152/-0)
hooks/charmhelpers/fetch/archiveurl.py (+43/-0)
hooks/hooks.py (+149/-233)
hooks/utils.py (+18/-164)
metadata.yaml (+1/-2)
To merge this branch: bzr merge lp:~james-page/charms/precise/ceph/charm-helpers
Reviewer Review Type Date Requested Status
Mark Mims (community) Approve
Review via email: mp+173245@code.launchpad.net

Description of the change

Refactoring to support use with charm-helpers

Significant rework to use charm-helpers in place of the charm's own
grab-bag utils.py.

Also fixes a couple of issues with newer versions of ceph which no longer automatically zap disks.
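The zap fix lands via the `zap_disk` helper this branch pulls in from charm-helpers, which shells out to `sgdisk` from the `gdisk` package. Because `sgdisk --zap-all` is destructive, this sketch only builds the command rather than running it:

```python
# Sketch of the zap step this branch adds via charm-helpers' zap_disk().
# sgdisk --zap-all destroys the GPT and MBR structures on the device, so
# the command is constructed here but deliberately not executed.
def zap_disk_cmd(block_device):
    """Return the sgdisk invocation used to clear a device's partition tables."""
    return ['sgdisk', '--zap-all', block_device]

print(zap_disk_cmd('/dev/vdb'))  # ['sgdisk', '--zap-all', '/dev/vdb']
```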

80. By James Page

Fixup dodgy disk detection

Revision history for this message
Mark Mims (mark-mims) wrote :

similarly to ceph-osd, two requests for the future:

- consider refactoring $CHARM_DIR/hooks/hooks.py into $CHARM_DIR/lib/ceph_tools with accompanying unit-type $CHARM_DIR/lib/ceph_tools/tests where possible

- please think up some decent integration tests and add them into $CHARM_DIR/tests

Thanks!

review: Approve
Revision history for this message
Mark Mims (mark-mims) wrote :

Added bug https://bugs.launchpad.net/charms/+source/ceph/+bug/1201173 to remove embedded charmhelpers code over time.

Preview Diff

=== modified file '.pydevproject'
--- .pydevproject 2012-10-18 08:24:36 +0000
+++ .pydevproject 2013-07-08 08:34:31 +0000
@@ -1,7 +1,5 @@
 <?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<?eclipse-pydev version="1.0"?>
-
-<pydev_project>
+<?eclipse-pydev version="1.0"?><pydev_project>
 <pydev_property name="org.python.pydev.PYTHON_PROJECT_VERSION">python 2.7</pydev_property>
 <pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property>
 <pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH">
=== added file 'Makefile'
--- Makefile 1970-01-01 00:00:00 +0000
+++ Makefile 2013-07-08 08:34:31 +0000
@@ -0,0 +1,8 @@
+#!/usr/bin/make
+
+lint:
+	@flake8 --exclude hooks/charmhelpers hooks
+	@charm proof
+
+sync:
+	@charm-helper-sync -c charm-helpers-sync.yaml
=== modified file 'README.md'
--- README.md 2012-12-17 10:22:51 +0000
+++ README.md 2013-07-08 08:34:31 +0000
@@ -15,28 +15,28 @@
     fsid:
         uuid specific to a ceph cluster used to ensure that different
         clusters don't get mixed up - use `uuid` to generate one.
 
     monitor-secret:
         a ceph generated key used by the daemons that manage to cluster
         to control security. You can use the ceph-authtool command to
         generate one:
 
             ceph-authtool /dev/stdout --name=mon. --gen-key
 
 These two pieces of configuration must NOT be changed post bootstrap; attempting
 todo this will cause a reconfiguration error and new service units will not join
 the existing ceph cluster.
 
 The charm also supports specification of the storage devices to use in the ceph
 cluster.
 
     osd-devices:
         A list of devices that the charm will attempt to detect, initialise and
         activate as ceph storage.
 
         This this can be a superset of the actual storage devices presented to
         each service unit and can be changed post ceph bootstrap using `juju set`.
 
 At a minimum you must provide a juju config file during initial deployment
 with the fsid and monitor-secret options (contents of cepy.yaml below):
 
@@ -44,7 +44,7 @@
     fsid: ecbb8960-0e21-11e2-b495-83a88f44db01
     monitor-secret: AQD1P2xQiKglDhAA4NGUF5j38Mhq56qwz+45wg==
     osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde
 
 Specifying the osd-devices to use is also a good idea.
 
 Boot things up by using:
@@ -62,7 +62,7 @@
  James Page <james.page@ubuntu.com>
 Report bugs at: http://bugs.launchpad.net/charms/+source/ceph/+filebug
 Location: http://jujucharms.com/charms/ceph
 
 Technical Bootnotes
 ===================
 
@@ -89,4 +89,4 @@
 implement it.
 
 See http://ceph.com/docs/master/dev/mon-bootstrap/ for more information on Ceph
 monitor cluster deployment strategies and pitfalls.
=== added file 'charm-helpers-sync.yaml'
--- charm-helpers-sync.yaml 1970-01-01 00:00:00 +0000
+++ charm-helpers-sync.yaml 2013-07-08 08:34:31 +0000
@@ -0,0 +1,7 @@
+branch: lp:charm-helpers
+destination: hooks/charmhelpers
+include:
+    - core
+    - fetch
+    - contrib.storage.linux:
+        - utils
=== modified file 'hooks/ceph.py'
--- hooks/ceph.py 2012-12-18 10:25:38 +0000
+++ hooks/ceph.py 2013-07-08 08:34:31 +0000
@@ -10,23 +10,36 @@
 import json
 import subprocess
 import time
-import utils
 import os
 import apt_pkg as apt
+from charmhelpers.core.host import (
+    mkdir,
+    service_restart,
+    log
+)
+from charmhelpers.contrib.storage.linux.utils import (
+    zap_disk,
+    is_block_device
+)
+from utils import (
+    get_unit_hostname
+)
 
 LEADER = 'leader'
 PEON = 'peon'
 QUORUM = [LEADER, PEON]
 
+PACKAGES = ['ceph', 'gdisk', 'ntp', 'btrfs-tools', 'python-ceph', 'xfsprogs']
+
 
 def is_quorum():
-    asok = "/var/run/ceph/ceph-mon.{}.asok".format(utils.get_unit_hostname())
+    asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname())
     cmd = [
         "ceph",
         "--admin-daemon",
         asok,
         "mon_status"
     ]
     if os.path.exists(asok):
         try:
             result = json.loads(subprocess.check_output(cmd))
@@ -44,13 +57,13 @@
 
 
 def is_leader():
-    asok = "/var/run/ceph/ceph-mon.{}.asok".format(utils.get_unit_hostname())
+    asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname())
     cmd = [
         "ceph",
         "--admin-daemon",
         asok,
         "mon_status"
     ]
     if os.path.exists(asok):
         try:
             result = json.loads(subprocess.check_output(cmd))
@@ -73,14 +86,14 @@
 
 
 def add_bootstrap_hint(peer):
-    asok = "/var/run/ceph/ceph-mon.{}.asok".format(utils.get_unit_hostname())
+    asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname())
     cmd = [
         "ceph",
         "--admin-daemon",
         asok,
         "add_bootstrap_peer_hint",
         peer
     ]
     if os.path.exists(asok):
         # Ignore any errors for this call
         subprocess.call(cmd)
@@ -89,7 +102,7 @@
     'xfs',
     'ext4',
     'btrfs'
-    ]
+]
 
 
 def is_osd_disk(dev):
@@ -99,7 +112,7 @@
         for line in info:
             if line.startswith(
                 'Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D'
-                ):
+            ):
                 return True
     except subprocess.CalledProcessError:
         pass
@@ -110,16 +123,11 @@
     cmd = [
         'udevadm', 'trigger',
         '--subsystem-match=block', '--action=add'
     ]
 
     subprocess.call(cmd)
 
 
-def zap_disk(dev):
-    cmd = ['sgdisk', '--zap-all', dev]
-    subprocess.check_call(cmd)
-
-
 _bootstrap_keyring = "/var/lib/ceph/bootstrap-osd/ceph.keyring"
 
 
@@ -140,7 +148,7 @@
         '--create-keyring',
         '--name=client.bootstrap-osd',
         '--add-key={}'.format(key)
-        ]
+    ]
     subprocess.check_call(cmd)
 
 # OSD caps taken from ceph-create-keys
@@ -148,10 +156,10 @@
     'mon': [
         'allow command osd create ...',
         'allow command osd crush set ...',
         r'allow command auth add * osd allow\ * mon allow\ rwx',
         'allow command mon getmap'
     ]
-    }
+}
 
 
 def get_osd_bootstrap_key():
@@ -169,14 +177,14 @@
         '--create-keyring',
         '--name=client.radosgw.gateway',
         '--add-key={}'.format(key)
-        ]
+    ]
     subprocess.check_call(cmd)
 
 # OSD caps taken from ceph-create-keys
 _radosgw_caps = {
     'mon': ['allow r'],
     'osd': ['allow rwx']
-    }
+}
 
 
 def get_radosgw_key():
@@ -186,7 +194,7 @@
 _default_caps = {
     'mon': ['allow r'],
     'osd': ['allow rwx']
-    }
+}
 
 
 def get_named_key(name, caps=None):
@@ -196,16 +204,16 @@
         '--name', 'mon.',
         '--keyring',
         '/var/lib/ceph/mon/ceph-{}/keyring'.format(
-            utils.get_unit_hostname()
+            get_unit_hostname()
         ),
         'auth', 'get-or-create', 'client.{}'.format(name),
     ]
     # Add capabilities
     for subsystem, subcaps in caps.iteritems():
         cmd.extend([
             subsystem,
             '; '.join(subcaps),
         ])
     output = subprocess.check_output(cmd).strip()  # IGNORE:E1103
     # get-or-create appears to have different output depending
     # on whether its 'get' or 'create'
@@ -221,6 +229,42 @@
     return key
 
 
+def bootstrap_monitor_cluster(secret):
+    hostname = get_unit_hostname()
+    path = '/var/lib/ceph/mon/ceph-{}'.format(hostname)
+    done = '{}/done'.format(path)
+    upstart = '{}/upstart'.format(path)
+    keyring = '/var/lib/ceph/tmp/{}.mon.keyring'.format(hostname)
+
+    if os.path.exists(done):
+        log('bootstrap_monitor_cluster: mon already initialized.')
+    else:
+        # Ceph >= 0.61.3 needs this for ceph-mon fs creation
+        mkdir('/var/run/ceph', perms=0755)
+        mkdir(path)
+        # end changes for Ceph >= 0.61.3
+        try:
+            subprocess.check_call(['ceph-authtool', keyring,
+                                   '--create-keyring', '--name=mon.',
+                                   '--add-key={}'.format(secret),
+                                   '--cap', 'mon', 'allow *'])
+
+            subprocess.check_call(['ceph-mon', '--mkfs',
+                                   '-i', hostname,
+                                   '--keyring', keyring])
+
+            with open(done, 'w'):
+                pass
+            with open(upstart, 'w'):
+                pass
+
+            service_restart('ceph-mon-all')
+        except:
+            raise
+        finally:
+            os.unlink(keyring)
+
+
 def get_ceph_version():
     apt.init()
     cache = apt.Cache()
@@ -233,3 +277,59 @@
 
 def version_compare(a, b):
     return apt.version_compare(a, b)
+
+
+def update_monfs():
+    hostname = get_unit_hostname()
+    monfs = '/var/lib/ceph/mon/ceph-{}'.format(hostname)
+    upstart = '{}/upstart'.format(monfs)
+    if os.path.exists(monfs) and not os.path.exists(upstart):
+        # Mark mon as managed by upstart so that
+        # it gets start correctly on reboots
+        with open(upstart, 'w'):
+            pass
+
+
+def osdize(dev, osd_format, osd_journal, reformat_osd=False):
+    if not os.path.exists(dev):
+        log('Path {} does not exist - bailing'.format(dev))
+        return
+
+    if not is_block_device(dev):
+        log('Path {} is not a block device - bailing'.format(dev))
+        return
+
+    if (is_osd_disk(dev) and not reformat_osd):
+        log('Looks like {} is already an OSD, skipping.'.format(dev))
+        return
+
+    if device_mounted(dev):
+        log('Looks like {} is in use, skipping.'.format(dev))
+        return
+
+    cmd = ['ceph-disk-prepare']
+    # Later versions of ceph support more options
+    if get_ceph_version() >= "0.48.3":
+        if osd_format:
+            cmd.append('--fs-type')
+            cmd.append(osd_format)
+        cmd.append(dev)
+        if osd_journal and os.path.exists(osd_journal):
+            cmd.append(osd_journal)
+    else:
+        # Just provide the device - no other options
+        # for older versions of ceph
+        cmd.append(dev)
+
+    if reformat_osd:
+        zap_disk(dev)
+
+    subprocess.check_call(cmd)
+
+
+def device_mounted(dev):
+    return subprocess.call(['grep', '-wqs', dev + '1', '/proc/mounts']) == 0
+
+
+def filesystem_mounted(fs):
+    return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0
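One thing to note in `osdize` above: `get_ceph_version() >= "0.48.3"` compares version strings lexically, which happens to work for the releases in play here but is not ordering-safe in general (this is presumably why the module also wraps `apt_pkg.version_compare` as `version_compare`). A minimal illustration of the pitfall, using a naive tuple parse rather than apt's real comparison algorithm:

```python
def version_tuple(v):
    # Naive dotted-version parse for illustration only; apt_pkg's
    # version_compare implements the full Debian comparison rules.
    return tuple(int(part) for part in v.split('.'))

# Lexically "0.10" sorts before "0.9", but as a version it is newer.
assert "0.10" < "0.9"
assert version_tuple("0.10") > version_tuple("0.9")
```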
=== added directory 'hooks/charmhelpers'
=== added file 'hooks/charmhelpers/__init__.py'
=== added directory 'hooks/charmhelpers/contrib'
=== added file 'hooks/charmhelpers/contrib/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/storage'
=== added file 'hooks/charmhelpers/contrib/storage/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/storage/linux'
=== added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py'
=== added file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
--- hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2013-07-08 08:34:31 +0000
@@ -0,0 +1,25 @@
+from os import stat
+from stat import S_ISBLK
+
+from subprocess import (
+    check_call
+)
+
+
+def is_block_device(path):
+    '''
+    Confirm device at path is a valid block device node.
+
+    :returns: boolean: True if path is a block device, False if not.
+    '''
+    return S_ISBLK(stat(path).st_mode)
+
+
+def zap_disk(block_device):
+    '''
+    Clear a block device of partition table. Relies on sgdisk, which is
+    installed as pat of the 'gdisk' package in Ubuntu.
+
+    :param block_device: str: Full path of block device to clean.
+    '''
+    check_call(['sgdisk', '--zap-all', block_device])
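`is_block_device` is a thin wrapper over `os.stat`: it inspects the inode's mode bits rather than touching the device. A quick standalone sketch of the same check (a regular file reports `False`; a `/dev/vdb`-style node would report `True`):

```python
import os
import stat
import tempfile


def is_block_device(path):
    # Same check as the helper above: S_ISBLK on the inode's mode bits.
    return stat.S_ISBLK(os.stat(path).st_mode)


# A plain temporary file is not a block device.
with tempfile.NamedTemporaryFile() as f:
    print(is_block_device(f.name))  # False
```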
=== added directory 'hooks/charmhelpers/core'
=== added file 'hooks/charmhelpers/core/__init__.py'
=== added file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/hookenv.py 2013-07-08 08:34:31 +0000
@@ -0,0 +1,334 @@
+"Interactions with the Juju environment"
+# Copyright 2013 Canonical Ltd.
+#
+# Authors:
+#  Charm Helpers Developers <juju@lists.ubuntu.com>
+
+import os
+import json
+import yaml
+import subprocess
+import UserDict
+
+CRITICAL = "CRITICAL"
+ERROR = "ERROR"
+WARNING = "WARNING"
+INFO = "INFO"
+DEBUG = "DEBUG"
+MARKER = object()
+
+cache = {}
+
+
+def cached(func):
+    ''' Cache return values for multiple executions of func + args
+
+    For example:
+
+        @cached
+        def unit_get(attribute):
+            pass
+
+        unit_get('test')
+
+    will cache the result of unit_get + 'test' for future calls.
+    '''
+    def wrapper(*args, **kwargs):
+        global cache
+        key = str((func, args, kwargs))
+        try:
+            return cache[key]
+        except KeyError:
+            res = func(*args, **kwargs)
+            cache[key] = res
+            return res
+    return wrapper
+
+
+def flush(key):
+    ''' Flushes any entries from function cache where the
+    key is found in the function+args '''
+    flush_list = []
+    for item in cache:
+        if key in item:
+            flush_list.append(item)
+    for item in flush_list:
+        del cache[item]
+
+
+def log(message, level=None):
+    "Write a message to the juju log"
+    command = ['juju-log']
+    if level:
+        command += ['-l', level]
+    command += [message]
+    subprocess.call(command)
+
+
+class Serializable(UserDict.IterableUserDict):
+    "Wrapper, an object that can be serialized to yaml or json"
+
+    def __init__(self, obj):
+        # wrap the object
+        UserDict.IterableUserDict.__init__(self)
+        self.data = obj
+
+    def __getattr__(self, attr):
+        # See if this object has attribute.
+        if attr in ("json", "yaml", "data"):
+            return self.__dict__[attr]
+        # Check for attribute in wrapped object.
+        got = getattr(self.data, attr, MARKER)
+        if got is not MARKER:
+            return got
+        # Proxy to the wrapped object via dict interface.
+        try:
+            return self.data[attr]
+        except KeyError:
+            raise AttributeError(attr)
+
+    def __getstate__(self):
+        # Pickle as a standard dictionary.
+        return self.data
+
+    def __setstate__(self, state):
+        # Unpickle into our wrapper.
+        self.data = state
+
+    def json(self):
+        "Serialize the object to json"
+        return json.dumps(self.data)
+
+    def yaml(self):
+        "Serialize the object to yaml"
+        return yaml.dump(self.data)
+
+
+def execution_environment():
+    """A convenient bundling of the current execution context"""
+    context = {}
+    context['conf'] = config()
+    if relation_id():
+        context['reltype'] = relation_type()
+        context['relid'] = relation_id()
+        context['rel'] = relation_get()
+    context['unit'] = local_unit()
+    context['rels'] = relations()
+    context['env'] = os.environ
+    return context
+
+
+def in_relation_hook():
+    "Determine whether we're running in a relation hook"
+    return 'JUJU_RELATION' in os.environ
+
+
+def relation_type():
+    "The scope for the current relation hook"
+    return os.environ.get('JUJU_RELATION', None)
+
+
+def relation_id():
+    "The relation ID for the current relation hook"
+    return os.environ.get('JUJU_RELATION_ID', None)
+
+
+def local_unit():
+    "Local unit ID"
+    return os.environ['JUJU_UNIT_NAME']
+
+
+def remote_unit():
+    "The remote unit for the current relation hook"
+    return os.environ['JUJU_REMOTE_UNIT']
+
+
+@cached
+def config(scope=None):
+    "Juju charm configuration"
+    config_cmd_line = ['config-get']
+    if scope is not None:
+        config_cmd_line.append(scope)
+    config_cmd_line.append('--format=json')
+    try:
+        return json.loads(subprocess.check_output(config_cmd_line))
+    except ValueError:
+        return None
+
+
+@cached
+def relation_get(attribute=None, unit=None, rid=None):
+    _args = ['relation-get', '--format=json']
+    if rid:
+        _args.append('-r')
+        _args.append(rid)
+    _args.append(attribute or '-')
+    if unit:
+        _args.append(unit)
+    try:
+        return json.loads(subprocess.check_output(_args))
+    except ValueError:
+        return None
+
+
+def relation_set(relation_id=None, relation_settings={}, **kwargs):
+    relation_cmd_line = ['relation-set']
+    if relation_id is not None:
+        relation_cmd_line.extend(('-r', relation_id))
+    for k, v in (relation_settings.items() + kwargs.items()):
+        if v is None:
+            relation_cmd_line.append('{}='.format(k))
+        else:
+            relation_cmd_line.append('{}={}'.format(k, v))
+    subprocess.check_call(relation_cmd_line)
+    # Flush cache of any relation-gets for local unit
+    flush(local_unit())
+
+
+@cached
+def relation_ids(reltype=None):
+    "A list of relation_ids"
+    reltype = reltype or relation_type()
+    relid_cmd_line = ['relation-ids', '--format=json']
+    if reltype is not None:
+        relid_cmd_line.append(reltype)
+        return json.loads(subprocess.check_output(relid_cmd_line))
+    return []
+
+
+@cached
+def related_units(relid=None):
+    "A list of related units"
+    relid = relid or relation_id()
+    units_cmd_line = ['relation-list', '--format=json']
+    if relid is not None:
+        units_cmd_line.extend(('-r', relid))
+    return json.loads(subprocess.check_output(units_cmd_line))
+
+
+@cached
+def relation_for_unit(unit=None, rid=None):
+    "Get the json represenation of a unit's relation"
+    unit = unit or remote_unit()
+    relation = relation_get(unit=unit, rid=rid)
+    for key in relation:
+        if key.endswith('-list'):
+            relation[key] = relation[key].split()
+    relation['__unit__'] = unit
+    return relation
+
+
+@cached
+def relations_for_id(relid=None):
+    "Get relations of a specific relation ID"
+    relation_data = []
+    relid = relid or relation_ids()
+    for unit in related_units(relid):
+        unit_data = relation_for_unit(unit, relid)
+        unit_data['__relid__'] = relid
+        relation_data.append(unit_data)
+    return relation_data
+
+
+@cached
+def relations_of_type(reltype=None):
+    "Get relations of a specific type"
+    relation_data = []
+    reltype = reltype or relation_type()
+    for relid in relation_ids(reltype):
+        for relation in relations_for_id(relid):
+            relation['__relid__'] = relid
+            relation_data.append(relation)
+    return relation_data
+
+
+@cached
+def relation_types():
+    "Get a list of relation types supported by this charm"
+    charmdir = os.environ.get('CHARM_DIR', '')
+    mdf = open(os.path.join(charmdir, 'metadata.yaml'))
+    md = yaml.safe_load(mdf)
+    rel_types = []
+    for key in ('provides', 'requires', 'peers'):
+        section = md.get(key)
+        if section:
+            rel_types.extend(section.keys())
+    mdf.close()
+    return rel_types
+
+
+@cached
+def relations():
+    rels = {}
+    for reltype in relation_types():
+        relids = {}
+        for relid in relation_ids(reltype):
+            units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
+            for unit in related_units(relid):
+                reldata = relation_get(unit=unit, rid=relid)
+                units[unit] = reldata
+            relids[relid] = units
+        rels[reltype] = relids
+    return rels
+
+
+def open_port(port, protocol="TCP"):
+    "Open a service network port"
+    _args = ['open-port']
+    _args.append('{}/{}'.format(port, protocol))
+    subprocess.check_call(_args)
+
+
+def close_port(port, protocol="TCP"):
+    "Close a service network port"
+    _args = ['close-port']
+    _args.append('{}/{}'.format(port, protocol))
+    subprocess.check_call(_args)
+
+
+@cached
+def unit_get(attribute):
+    _args = ['unit-get', '--format=json', attribute]
+    try:
+        return json.loads(subprocess.check_output(_args))
+    except ValueError:
+        return None
+
+
+def unit_private_ip():
+    return unit_get('private-address')
+
+
+class UnregisteredHookError(Exception):
+    pass
+
+
+class Hooks(object):
+    def __init__(self):
+        super(Hooks, self).__init__()
+        self._hooks = {}
+
+    def register(self, name, function):
+        self._hooks[name] = function
+
+    def execute(self, args):
+        hook_name = os.path.basename(args[0])
+        if hook_name in self._hooks:
+            self._hooks[hook_name]()
+        else:
+            raise UnregisteredHookError(hook_name)
+
+    def hook(self, *hook_names):
+        def wrapper(decorated):
+            for hook_name in hook_names:
+                self.register(hook_name, decorated)
+            else:
+                self.register(decorated.__name__, decorated)
+                if '_' in decorated.__name__:
+                    self.register(
+                        decorated.__name__.replace('_', '-'), decorated)
+            return decorated
+        return wrapper
+
+def charm_dir():
+    return os.environ.get('CHARM_DIR')
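The `cached`/`flush` pair above is what keeps hook execution cheap: repeated `config-get`/`relation-get` calls are memoized on `str((func, args, kwargs))`, and `relation_set` invalidates by substring match on the local unit name. A standalone reproduction of that logic (the Juju subprocess call is replaced with a recording stub) showing the behaviour:

```python
# Reproduction of hookenv's cached/flush pattern, minus the Juju CLI calls.
cache = {}


def cached(func):
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        try:
            return cache[key]
        except KeyError:
            res = func(*args, **kwargs)
            cache[key] = res
            return res
    return wrapper


def flush(key):
    # Drop every cache entry whose stringified (func, args, kwargs) key
    # contains `key` - this is how relation_set invalidates the local unit.
    for item in [k for k in cache if key in k]:
        del cache[item]


calls = []


@cached
def unit_get(attribute):
    calls.append(attribute)  # stands in for shelling out to unit-get
    return 'value-for-' + attribute


unit_get('private-address')
unit_get('private-address')   # served from the cache; no second call
print(len(calls))             # 1
flush('private-address')      # invalidate by substring
unit_get('private-address')   # cache miss again
print(len(calls))             # 2
```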
=== added file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/host.py 2013-07-08 08:34:31 +0000
@@ -0,0 +1,273 @@
+"""Tools for working with the host system"""
+# Copyright 2012 Canonical Ltd.
+#
+# Authors:
+#  Nick Moffitt <nick.moffitt@canonical.com>
+#  Matthew Wedgwood <matthew.wedgwood@canonical.com>
+
+import apt_pkg
+import os
+import pwd
+import grp
+import subprocess
+import hashlib
+
+from collections import OrderedDict
+
+from hookenv import log, execution_environment
+
+
+def service_start(service_name):
+    service('start', service_name)
+
+
+def service_stop(service_name):
+    service('stop', service_name)
+
+
+def service_restart(service_name):
+    service('restart', service_name)
+
+
+def service_reload(service_name, restart_on_failure=False):
+    if not service('reload', service_name) and restart_on_failure:
+        service('restart', service_name)
+
+
+def service(action, service_name):
+    cmd = ['service', service_name, action]
+    return subprocess.call(cmd) == 0
+
+
+def adduser(username, password=None, shell='/bin/bash', system_user=False):
+    """Add a user"""
+    try:
+        user_info = pwd.getpwnam(username)
+        log('user {0} already exists!'.format(username))
+    except KeyError:
+        log('creating user {0}'.format(username))
+        cmd = ['useradd']
+        if system_user or password is None:
+            cmd.append('--system')
+        else:
+            cmd.extend([
+                '--create-home',
+                '--shell', shell,
+                '--password', password,
+            ])
+        cmd.append(username)
+        subprocess.check_call(cmd)
+        user_info = pwd.getpwnam(username)
+    return user_info
+
+
+def add_user_to_group(username, group):
+    """Add a user to a group"""
+    cmd = [
+        'gpasswd', '-a',
+        username,
+        group
+    ]
+    log("Adding user {} to group {}".format(username, group))
+    subprocess.check_call(cmd)
+
+
+def rsync(from_path, to_path, flags='-r', options=None):
+    """Replicate the contents of a path"""
+    context = execution_environment()
+    options = options or ['--delete', '--executability']
+    cmd = ['/usr/bin/rsync', flags]
+    cmd.extend(options)
+    cmd.append(from_path.format(**context))
+    cmd.append(to_path.format(**context))
+    log(" ".join(cmd))
+    return subprocess.check_output(cmd).strip()
+
+
+def symlink(source, destination):
+    """Create a symbolic link"""
+    context = execution_environment()
+    log("Symlinking {} as {}".format(source, destination))
+    cmd = [
+        'ln',
+        '-sf',
+        source.format(**context),
+        destination.format(**context)
+    ]
+    subprocess.check_call(cmd)
+
+
+def mkdir(path, owner='root', group='root', perms=0555, force=False):
+    """Create a directory"""
+    context = execution_environment()
+    log("Making dir {} {}:{} {:o}".format(path, owner, group,
+                                          perms))
+    uid = pwd.getpwnam(owner.format(**context)).pw_uid
+    gid = grp.getgrnam(group.format(**context)).gr_gid
+    realpath = os.path.abspath(path)
+    if os.path.exists(realpath):
+        if force and not os.path.isdir(realpath):
+            log("Removing non-directory file {} prior to mkdir()".format(path))
+            os.unlink(realpath)
+    else:
+        os.makedirs(realpath, perms)
+    os.chown(realpath, uid, gid)
+
+
+def write_file(path, fmtstr, owner='root', group='root', perms=0444, **kwargs):
+    """Create or overwrite a file with the contents of a string"""
+    context = execution_environment()
+    context.update(kwargs)
+    log("Writing file {} {}:{} {:o}".format(path, owner, group,
+                                            perms))
+    uid = pwd.getpwnam(owner.format(**context)).pw_uid
+    gid = grp.getgrnam(group.format(**context)).gr_gid
+    with open(path.format(**context), 'w') as target:
+        os.fchown(target.fileno(), uid, gid)
+        os.fchmod(target.fileno(), perms)
+        target.write(fmtstr.format(**context))
+
+
+def render_template_file(source, destination, **kwargs):
+    """Create or overwrite a file using a template"""
+    log("Rendering template {} for {}".format(source,
+                                              destination))
+    context = execution_environment()
+    with open(source.format(**context), 'r') as template:
+        write_file(destination.format(**context), template.read(),
+                   **kwargs)
+
+
+def filter_installed_packages(packages):
+    """Returns a list of packages that require installation"""
+    apt_pkg.init()
+    cache = apt_pkg.Cache()
+    _pkgs = []
+    for package in packages:
+        try:
+            p = cache[package]
+            p.current_ver or _pkgs.append(package)
+        except KeyError:
+            log('Package {} has no installation candidate.'.format(package),
+                level='WARNING')
+            _pkgs.append(package)
+    return _pkgs
+
+
+def apt_install(packages, options=None, fatal=False):
+    """Install one or more packages"""
+    options = options or []
+    cmd = ['apt-get', '-y']
+    cmd.extend(options)
+    cmd.append('install')
+    if isinstance(packages, basestring):
+        cmd.append(packages)
+    else:
+        cmd.extend(packages)
+    log("Installing {} with options: {}".format(packages,
+                                                options))
+    if fatal:
+        subprocess.check_call(cmd)
+    else:
+        subprocess.call(cmd)
+
+
+def apt_update(fatal=False):
+    """Update local apt cache"""
+    cmd = ['apt-get', 'update']
+    if fatal:
+        subprocess.check_call(cmd)
+    else:
+        subprocess.call(cmd)
+
+
+def mount(device, mountpoint, options=None, persist=False):
+    '''Mount a filesystem'''
+    cmd_args = ['mount']
+    if options is not None:
+        cmd_args.extend(['-o', options])
+    cmd_args.extend([device, mountpoint])
+    try:
+        subprocess.check_output(cmd_args)
+    except subprocess.CalledProcessError, e:
+        log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
+        return False
+    if persist:
+        # TODO: update fstab
+        pass
+    return True
+
+
+def umount(mountpoint, persist=False):
+    '''Unmount a filesystem'''
+    cmd_args = ['umount', mountpoint]
+    try:
+        subprocess.check_output(cmd_args)
+    except subprocess.CalledProcessError, e:
+        log('Error unmounting {}\n{}'.format(mountpoint, e.output))
+        return False
+    if persist:
+        # TODO: update fstab
+        pass
+    return True
+
+
+def mounts():
+    '''List of all mounted volumes as [[mountpoint,device],[...]]'''
+    with open('/proc/mounts') as f:
+        # [['/mount/point','/dev/path'],[...]]
+        system_mounts = [m[1::-1] for m in [l.strip().split()
+                                            for l in f.readlines()]]
+    return system_mounts
+
+
+def file_hash(path):
+    ''' Generate a md5 hash of the contents of 'path' or None if not found '''
+    if os.path.exists(path):
+        h = hashlib.md5()
+        with open(path, 'r') as source:
+            h.update(source.read())  # IGNORE:E1101 - it does have update
+        return h.hexdigest()
+    else:
+        return None
+
+
+def restart_on_change(restart_map):
+    ''' Restart services based on configuration files changing
+
+    This function is used a decorator, for example
+
+        @restart_on_change({
+            '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
+        })
+        def ceph_client_changed():
+            ...
+
+    In this example, the cinder-api and cinder-volume services
+    would be restarted if /etc/ceph/ceph.conf is changed by the
+    ceph_client_changed function.
+    '''
+    def wrap(f):
+        def wrapped_f(*args):
+            checksums = {}
+            for path in restart_map:
+                checksums[path] = file_hash(path)
+            f(*args)
+            restarts = []
+            for path in restart_map:
+                if checksums[path] != file_hash(path):
+                    restarts += restart_map[path]
+            for service_name in list(OrderedDict.fromkeys(restarts)):
+                service('restart', service_name)
+        return wrapped_f
+    return wrap
+
+
+def lsb_release():
+    '''Return /etc/lsb-release in a dict'''
+    d = {}
+    with open('/etc/lsb-release', 'r') as lsb:
+        for l in lsb:
+            k, v = l.split('=')
+            d[k.strip()] = v.strip()
+    return d
=== added directory 'hooks/charmhelpers/fetch'
=== added file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2013-07-08 08:34:31 +0000
@@ -0,0 +1,152 @@
import importlib
from yaml import safe_load
from charmhelpers.core.host import (
    apt_install,
    apt_update,
    filter_installed_packages,
    lsb_release
)
from urlparse import (
    urlparse,
    urlunparse,
)
import subprocess
from charmhelpers.core.hookenv import (
    config,
    log,
)

CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
"""
PROPOSED_POCKET = """# Proposed
deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
"""


def add_source(source, key=None):
    if ((source.startswith('ppa:') or
         source.startswith('http:'))):
        subprocess.check_call(['add-apt-repository', source])
    elif source.startswith('cloud:'):
        apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
                    fatal=True)
        pocket = source.split(':')[-1]
        with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
            apt.write(CLOUD_ARCHIVE.format(pocket))
    elif source == 'proposed':
        release = lsb_release()['DISTRIB_CODENAME']
        with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
            apt.write(PROPOSED_POCKET.format(release))
    if key:
        subprocess.check_call(['apt-key', 'import', key])


class SourceConfigError(Exception):
    pass


def configure_sources(update=False,
                      sources_var='install_sources',
                      keys_var='install_keys'):
    """
    Configure multiple sources from charm configuration

    Example config:
        install_sources:
          - "ppa:foo"
          - "http://example.com/repo precise main"
        install_keys:
          - null
          - "a1b2c3d4"

    Note that 'null' (a.k.a. None) should not be quoted.
    """
    sources = safe_load(config(sources_var))
    keys = safe_load(config(keys_var))
    if isinstance(sources, basestring) and isinstance(keys, basestring):
        add_source(sources, keys)
    else:
        if not len(sources) == len(keys):
            msg = 'Install sources and keys lists are different lengths'
            raise SourceConfigError(msg)
        for src_num in range(len(sources)):
            add_source(sources[src_num], keys[src_num])
    if update:
        apt_update(fatal=True)

# The order of this list is very important. Handlers should be listed
# from least- to most-specific URL matching.
FETCH_HANDLERS = (
    'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
)


class UnhandledSource(Exception):
    pass


def install_remote(source):
    """
    Install a file tree from a remote source

    The specified source should be a url of the form:
        scheme://[host]/path[#[option=value][&...]]

    Schemes supported are based on this module's submodules.
    Options supported are submodule-specific."""
    # We ONLY check for True here because can_handle may return a string
    # explaining why it can't handle a given source.
    handlers = [h for h in plugins() if h.can_handle(source) is True]
    installed_to = None
    for handler in handlers:
        try:
            installed_to = handler.install(source)
        except UnhandledSource:
            pass
    if not installed_to:
        raise UnhandledSource("No handler found for source {}".format(source))
    return installed_to


def install_from_config(config_var_name):
    charm_config = config()
    source = charm_config[config_var_name]
    return install_remote(source)


class BaseFetchHandler(object):
    """Base class for FetchHandler implementations in fetch plugins"""
    def can_handle(self, source):
        """Returns True if the source can be handled. Otherwise returns
        a string explaining why it cannot"""
        return "Wrong source type"

    def install(self, source):
        """Try to download and unpack the source. Return the path to the
        unpacked files or raise UnhandledSource."""
        raise UnhandledSource("Wrong source type {}".format(source))

    def parse_url(self, url):
        return urlparse(url)

    def base_url(self, url):
        """Return url without querystring or fragment"""
        parts = list(self.parse_url(url))
        parts[4:] = ['' for i in parts[4:]]
        return urlunparse(parts)


def plugins(fetch_handlers=None):
    if not fetch_handlers:
        fetch_handlers = FETCH_HANDLERS
    plugin_list = []
    for handler_name in fetch_handlers:
        package, classname = handler_name.rsplit('.', 1)
        try:
            handler_class = getattr(importlib.import_module(package), classname)
            plugin_list.append(handler_class())
        except (ImportError, AttributeError):
            # Skip missing plugins so that they can be omitted from
            # installation if desired
            log("FetchHandler {} not found, skipping plugin".format(handler_name))
    return plugin_list
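The `plugins()` loader resolves each dotted handler path at import time and silently skips handlers whose module is absent. The same pattern can be seen in isolation (stdlib classes stand in for real fetch handlers here; the names are illustrative):

```python
import importlib

HANDLERS = (
    'json.JSONDecoder',     # resolvable: module 'json', class 'JSONDecoder'
    'nonexistent.Missing',  # unresolvable: skipped, mirroring plugins()
)


def load_plugins(handler_names):
    plugin_list = []
    for name in handler_names:
        # 'pkg.mod.Class' -> ('pkg.mod', 'Class')
        package, classname = name.rsplit('.', 1)
        try:
            cls = getattr(importlib.import_module(package), classname)
            plugin_list.append(cls())
        except (ImportError, AttributeError):
            # Missing plugins are skipped rather than fatal, so optional
            # handlers can simply be left out of the install
            pass
    return plugin_list


plugins = load_plugins(HANDLERS)
print([type(p).__name__ for p in plugins])  # ['JSONDecoder']
```

Because the tuple is scanned in order, listing handlers from least- to most-specific lets a later handler shadow an earlier one when both claim a URL.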
=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 2013-07-08 08:34:31 +0000
@@ -0,0 +1,43 @@
import os
import urllib2
from charmhelpers.fetch import (
    BaseFetchHandler,
    UnhandledSource
)
from charmhelpers.payload.archive import (
    get_archive_handler,
    extract,
)


class ArchiveUrlFetchHandler(BaseFetchHandler):
    """Handler for archives via generic URLs"""
    def can_handle(self, source):
        url_parts = self.parse_url(source)
        if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
            return "Wrong source type"
        if get_archive_handler(self.base_url(source)):
            return True
        return False

    def download(self, source, dest):
        # propagate all exceptions
        # URLError, OSError, etc
        response = urllib2.urlopen(source)
        with open(dest, 'w') as dest_file:
            dest_file.write(response.read())

    def install(self, source):
        url_parts = self.parse_url(source)
        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
        dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
        try:
            self.download(source, dld_file)
            return extract(dld_file)
        except urllib2.URLError as e:
            raise UnhandledSource(e.reason)
        except OSError as e:
            raise UnhandledSource(e.strerror)
        finally:
            if os.path.isfile(dld_file):
                os.unlink(dld_file)
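`can_handle` keys off the URL with its querystring and fragment stripped; `base_url` does that by blanking the trailing fields of the parse tuple. A standalone sketch of that trick (using Python 3's `urllib.parse` here for illustration, whereas the charm targets Python 2's `urlparse`):

```python
from urllib.parse import urlparse, urlunparse


def base_url(url):
    # urlparse yields (scheme, netloc, path, params, query, fragment);
    # blanking indices 4+ drops the query and fragment, keeping the rest.
    parts = list(urlparse(url))
    parts[4:] = ['' for _ in parts[4:]]
    return urlunparse(parts)


print(base_url('http://example.com/pkg.tar.gz?token=abc#frag'))
# http://example.com/pkg.tar.gz
```

This lets the archive-type check match on the file extension alone, even when the download URL carries auth tokens or anchors.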
=== modified file 'hooks/hooks.py'
--- hooks/hooks.py 2013-06-20 21:15:17 +0000
+++ hooks/hooks.py 2013-07-08 08:34:31 +0000
@@ -10,12 +10,35 @@
 
 import glob
 import os
-import subprocess
 import shutil
 import sys
 
 import ceph
-import utils
+from charmhelpers.core.hookenv import (
+    log, ERROR,
+    config,
+    relation_ids,
+    related_units,
+    relation_get,
+    relation_set,
+    remote_unit,
+    Hooks, UnregisteredHookError
+)
+from charmhelpers.core.host import (
+    apt_install,
+    apt_update,
+    filter_installed_packages,
+    service_restart,
+    umount
+)
+from charmhelpers.fetch import add_source
+
+from utils import (
+    render_template,
+    get_host_ip,
+)
+
+hooks = Hooks()
 
 
 def install_upstart_scripts():
@@ -25,328 +48,221 @@
         shutil.copy(x, '/etc/init/')
 
 
+@hooks.hook('install')
 def install():
-    utils.juju_log('INFO', 'Begin install hook.')
-    utils.configure_source()
-    utils.install('ceph', 'gdisk', 'ntp', 'btrfs-tools', 'python-ceph', 'xfsprogs')
+    log('Begin install hook.')
+    add_source(config('source'), config('key'))
+    apt_update(fatal=True)
+    apt_install(packages=ceph.PACKAGES, fatal=True)
     install_upstart_scripts()
-    utils.juju_log('INFO', 'End install hook.')
+    log('End install hook.')
 
 
 def emit_cephconf():
     cephcontext = {
-        'auth_supported': utils.config_get('auth-supported'),
+        'auth_supported': config('auth-supported'),
         'mon_hosts': ' '.join(get_mon_hosts()),
-        'fsid': utils.config_get('fsid'),
+        'fsid': config('fsid'),
         'version': ceph.get_ceph_version()
     }
 
     with open('/etc/ceph/ceph.conf', 'w') as cephconf:
-        cephconf.write(utils.render_template('ceph.conf', cephcontext))
+        cephconf.write(render_template('ceph.conf', cephcontext))
 
 JOURNAL_ZAPPED = '/var/lib/ceph/journal_zapped'
 
 
+@hooks.hook('config-changed')
 def config_changed():
-    utils.juju_log('INFO', 'Begin config-changed hook.')
+    log('Begin config-changed hook.')
 
-    utils.juju_log('INFO', 'Monitor hosts are ' + repr(get_mon_hosts()))
+    log('Monitor hosts are ' + repr(get_mon_hosts()))
 
     # Pre-flight checks
-    if not utils.config_get('fsid'):
-        utils.juju_log('CRITICAL', 'No fsid supplied, cannot proceed.')
+    if not config('fsid'):
+        log('No fsid supplied, cannot proceed.', level=ERROR)
         sys.exit(1)
-    if not utils.config_get('monitor-secret'):
-        utils.juju_log('CRITICAL',
-                       'No monitor-secret supplied, cannot proceed.')
-        sys.exit(1)
-    if utils.config_get('osd-format') not in ceph.DISK_FORMATS:
-        utils.juju_log('CRITICAL',
-                       'Invalid OSD disk format configuration specified')
+    if not config('monitor-secret'):
+        log('No monitor-secret supplied, cannot proceed.', level=ERROR)
+        sys.exit(1)
+    if config('osd-format') not in ceph.DISK_FORMATS:
+        log('Invalid OSD disk format configuration specified', level=ERROR)
         sys.exit(1)
 
     emit_cephconf()
 
-    e_mountpoint = utils.config_get('ephemeral-unmount')
-    if (e_mountpoint and
-            filesystem_mounted(e_mountpoint)):
-        subprocess.call(['umount', e_mountpoint])
+    e_mountpoint = config('ephemeral-unmount')
+    if e_mountpoint and ceph.filesystem_mounted(e_mountpoint):
+        umount(e_mountpoint)
 
-    osd_journal = utils.config_get('osd-journal')
-    if (osd_journal and
-            not os.path.exists(JOURNAL_ZAPPED) and
-            os.path.exists(osd_journal)):
+    osd_journal = config('osd-journal')
+    if (osd_journal and not os.path.exists(JOURNAL_ZAPPED)
+            and os.path.exists(osd_journal)):
         ceph.zap_disk(osd_journal)
         with open(JOURNAL_ZAPPED, 'w') as zapped:
             zapped.write('DONE')
 
-    for dev in utils.config_get('osd-devices').split(' '):
-        osdize(dev)
+    for dev in config('osd-devices').split(' '):
+        ceph.osdize(dev, config('osd-format'), config('osd-journal'),
+                    reformat_osd())
 
     # Support use of single node ceph
-    if (not ceph.is_bootstrapped() and
-            int(utils.config_get('monitor-count')) == 1):
-        bootstrap_monitor_cluster()
+    if (not ceph.is_bootstrapped() and int(config('monitor-count')) == 1):
+        ceph.bootstrap_monitor_cluster(config('monitor-secret'))
        ceph.wait_for_bootstrap()
 
     if ceph.is_bootstrapped():
         ceph.rescan_osd_devices()
 
-    utils.juju_log('INFO', 'End config-changed hook.')
+    log('End config-changed hook.')
 
 
 def get_mon_hosts():
     hosts = []
-    hosts.append('{}:6789'.format(utils.get_host_ip()))
+    hosts.append('{}:6789'.format(get_host_ip()))
 
-    for relid in utils.relation_ids('mon'):
-        for unit in utils.relation_list(relid):
+    for relid in relation_ids('mon'):
+        for unit in related_units(relid):
             hosts.append(
-                '{}:6789'.format(utils.get_host_ip(
-                    utils.relation_get('private-address',
-                                       unit, relid)))
-            )
+                '{}:6789'.format(get_host_ip(relation_get('private-address',
+                                                          unit, relid)))
+            )
 
     hosts.sort()
     return hosts
 
 
-def update_monfs():
-    hostname = utils.get_unit_hostname()
-    monfs = '/var/lib/ceph/mon/ceph-{}'.format(hostname)
-    upstart = '{}/upstart'.format(monfs)
-    if (os.path.exists(monfs) and
-            not os.path.exists(upstart)):
-        # Mark mon as managed by upstart so that
-        # it gets start correctly on reboots
-        with open(upstart, 'w'):
-            pass
-
-
-def bootstrap_monitor_cluster():
-    hostname = utils.get_unit_hostname()
-    path = '/var/lib/ceph/mon/ceph-{}'.format(hostname)
-    done = '{}/done'.format(path)
-    upstart = '{}/upstart'.format(path)
-    secret = utils.config_get('monitor-secret')
-    keyring = '/var/lib/ceph/tmp/{}.mon.keyring'.format(hostname)
-
-    if os.path.exists(done):
-        utils.juju_log('INFO',
-                       'bootstrap_monitor_cluster: mon already initialized.')
-    else:
-        # Ceph >= 0.61.3 needs this for ceph-mon fs creation
-        os.makedirs('/var/run/ceph', mode=0755)
-        os.makedirs(path)
-        # end changes for Ceph >= 0.61.3
-        try:
-            subprocess.check_call(['ceph-authtool', keyring,
-                                   '--create-keyring', '--name=mon.',
-                                   '--add-key={}'.format(secret),
-                                   '--cap', 'mon', 'allow *'])
-
-            subprocess.check_call(['ceph-mon', '--mkfs',
-                                   '-i', hostname,
-                                   '--keyring', keyring])
-
-            with open(done, 'w'):
-                pass
-            with open(upstart, 'w'):
-                pass
-
-            subprocess.check_call(['start', 'ceph-mon-all-starter'])
-        except:
-            raise
-        finally:
-            os.unlink(keyring)
-
-
 def reformat_osd():
-    if utils.config_get('osd-reformat'):
+    if config('osd-reformat'):
         return True
     else:
         return False
 
 
-def osdize(dev):
-    if not os.path.exists(dev):
-        utils.juju_log('INFO',
-                       'Path {} does not exist - bailing'.format(dev))
-        return
-
-    if (ceph.is_osd_disk(dev) and not
-            reformat_osd()):
-        utils.juju_log('INFO',
-                       'Looks like {} is already an OSD, skipping.'
-                       .format(dev))
-        return
-
-    if device_mounted(dev):
-        utils.juju_log('INFO',
-                       'Looks like {} is in use, skipping.'.format(dev))
-        return
-
-    cmd = ['ceph-disk-prepare']
-    # Later versions of ceph support more options
-    if ceph.get_ceph_version() >= "0.48.3":
-        osd_format = utils.config_get('osd-format')
-        if osd_format:
-            cmd.append('--fs-type')
-            cmd.append(osd_format)
-        cmd.append(dev)
-        osd_journal = utils.config_get('osd-journal')
-        if (osd_journal and
-                os.path.exists(osd_journal)):
-            cmd.append(osd_journal)
-    else:
-        # Just provide the device - no other options
-        # for older versions of ceph
-        cmd.append(dev)
-    subprocess.call(cmd)
-
-
-def device_mounted(dev):
-    return subprocess.call(['grep', '-wqs', dev + '1', '/proc/mounts']) == 0
-
-
-def filesystem_mounted(fs):
-    return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0
-
-
+@hooks.hook('mon-relation-departed',
+            'mon-relation-joined')
 def mon_relation():
-    utils.juju_log('INFO', 'Begin mon-relation hook.')
+    log('Begin mon-relation hook.')
     emit_cephconf()
 
-    moncount = int(utils.config_get('monitor-count'))
+    moncount = int(config('monitor-count'))
     if len(get_mon_hosts()) >= moncount:
-        bootstrap_monitor_cluster()
+        ceph.bootstrap_monitor_cluster(config('monitor-secret'))
         ceph.wait_for_bootstrap()
         ceph.rescan_osd_devices()
         notify_osds()
         notify_radosgws()
         notify_client()
     else:
-        utils.juju_log('INFO',
-                       'Not enough mons ({}), punting.'.format(
-                           len(get_mon_hosts())))
+        log('Not enough mons ({}), punting.'
+            .format(len(get_mon_hosts())))
 
-    utils.juju_log('INFO', 'End mon-relation hook.')
+    log('End mon-relation hook.')
 
 
 def notify_osds():
-    utils.juju_log('INFO', 'Begin notify_osds.')
+    log('Begin notify_osds.')
 
-    for relid in utils.relation_ids('osd'):
-        utils.relation_set(fsid=utils.config_get('fsid'),
-                           osd_bootstrap_key=ceph.get_osd_bootstrap_key(),
-                           auth=utils.config_get('auth-supported'),
-                           rid=relid)
+    for relid in relation_ids('osd'):
+        relation_set(relation_id=relid,
+                     fsid=config('fsid'),
+                     osd_bootstrap_key=ceph.get_osd_bootstrap_key(),
+                     auth=config('auth-supported'))
 
-    utils.juju_log('INFO', 'End notify_osds.')
+    log('End notify_osds.')
 
 
 def notify_radosgws():
-    utils.juju_log('INFO', 'Begin notify_radosgws.')
+    log('Begin notify_radosgws.')
 
-    for relid in utils.relation_ids('radosgw'):
-        utils.relation_set(radosgw_key=ceph.get_radosgw_key(),
-                           auth=utils.config_get('auth-supported'),
-                           rid=relid)
+    for relid in relation_ids('radosgw'):
+        relation_set(relation_id=relid,
+                     radosgw_key=ceph.get_radosgw_key(),
+                     auth=config('auth-supported'))
 
-    utils.juju_log('INFO', 'End notify_radosgws.')
+    log('End notify_radosgws.')
 
 
 def notify_client():
-    utils.juju_log('INFO', 'Begin notify_client.')
+    log('Begin notify_client.')
 
-    for relid in utils.relation_ids('client'):
-        units = utils.relation_list(relid)
+    for relid in relation_ids('client'):
+        units = related_units(relid)
         if len(units) > 0:
             service_name = units[0].split('/')[0]
-            utils.relation_set(key=ceph.get_named_key(service_name),
-                               auth=utils.config_get('auth-supported'),
-                               rid=relid)
+            relation_set(relation_id=relid,
+                         key=ceph.get_named_key(service_name),
+                         auth=config('auth-supported'))
 
-    utils.juju_log('INFO', 'End notify_client.')
+    log('End notify_client.')
 
 
+@hooks.hook('osd-relation-joined')
 def osd_relation():
-    utils.juju_log('INFO', 'Begin osd-relation hook.')
+    log('Begin osd-relation hook.')
 
     if ceph.is_quorum():
-        utils.juju_log('INFO',
-                       'mon cluster in quorum - providing fsid & keys')
-        utils.relation_set(fsid=utils.config_get('fsid'),
-                           osd_bootstrap_key=ceph.get_osd_bootstrap_key(),
-                           auth=utils.config_get('auth-supported'))
+        log('mon cluster in quorum - providing fsid & keys')
+        relation_set(fsid=config('fsid'),
+                     osd_bootstrap_key=ceph.get_osd_bootstrap_key(),
+                     auth=config('auth-supported'))
     else:
-        utils.juju_log('INFO',
-                       'mon cluster not in quorum - deferring fsid provision')
-
-    utils.juju_log('INFO', 'End osd-relation hook.')
+        log('mon cluster not in quorum - deferring fsid provision')
+
+    log('End osd-relation hook.')
 
 
+@hooks.hook('radosgw-relation-joined')
 def radosgw_relation():
-    utils.juju_log('INFO', 'Begin radosgw-relation hook.')
+    log('Begin radosgw-relation hook.')
 
-    utils.install('radosgw')  # Install radosgw for admin tools
-
+    # Install radosgw for admin tools
+    apt_install(packages=filter_installed_packages(['radosgw']))
     if ceph.is_quorum():
-        utils.juju_log('INFO',
-                       'mon cluster in quorum - \
-                        providing radosgw with keys')
-        utils.relation_set(radosgw_key=ceph.get_radosgw_key(),
-                           auth=utils.config_get('auth-supported'))
+        log('mon cluster in quorum - providing radosgw with keys')
+        relation_set(radosgw_key=ceph.get_radosgw_key(),
+                     auth=config('auth-supported'))
     else:
-        utils.juju_log('INFO',
-                       'mon cluster not in quorum - deferring key provision')
-
-    utils.juju_log('INFO', 'End radosgw-relation hook.')
+        log('mon cluster not in quorum - deferring key provision')
+
+    log('End radosgw-relation hook.')
 
 
+@hooks.hook('client-relation-joined')
 def client_relation():
-    utils.juju_log('INFO', 'Begin client-relation hook.')
+    log('Begin client-relation hook.')
 
     if ceph.is_quorum():
-        utils.juju_log('INFO',
-                       'mon cluster in quorum - \
-                        providing client with keys')
-        service_name = os.environ['JUJU_REMOTE_UNIT'].split('/')[0]
-        utils.relation_set(key=ceph.get_named_key(service_name),
-                           auth=utils.config_get('auth-supported'))
+        log('mon cluster in quorum - providing client with keys')
+        service_name = remote_unit().split('/')[0]
+        relation_set(key=ceph.get_named_key(service_name),
+                     auth=config('auth-supported'))
     else:
-        utils.juju_log('INFO',
-                       'mon cluster not in quorum - deferring key provision')
-
-    utils.juju_log('INFO', 'End client-relation hook.')
+        log('mon cluster not in quorum - deferring key provision')
+
+    log('End client-relation hook.')
 
 
+@hooks.hook('upgrade-charm')
 def upgrade_charm():
-    utils.juju_log('INFO', 'Begin upgrade-charm hook.')
+    log('Begin upgrade-charm hook.')
     emit_cephconf()
-    utils.install('xfsprogs')
+    apt_install(packages=filter_installed_packages(ceph.PACKAGES), fatal=True)
     install_upstart_scripts()
-    update_monfs()
-    utils.juju_log('INFO', 'End upgrade-charm hook.')
+    ceph.update_monfs()
+    log('End upgrade-charm hook.')
 
 
+@hooks.hook('start')
 def start():
     # In case we're being redeployed to the same machines, try
     # to make sure everything is running as soon as possible.
-    subprocess.call(['start', 'ceph-mon-all-starter'])
+    service_restart('ceph-mon-all')
     ceph.rescan_osd_devices()
 
 
-utils.do_hooks({
-    'config-changed': config_changed,
-    'install': install,
-    'mon-relation-departed': mon_relation,
-    'mon-relation-joined': mon_relation,
-    'osd-relation-joined': osd_relation,
-    'radosgw-relation-joined': radosgw_relation,
-    'client-relation-joined': client_relation,
-    'start': start,
-    'upgrade-charm': upgrade_charm,
-    })
-
-sys.exit(0)
+if __name__ == '__main__':
+    try:
+        hooks.execute(sys.argv)
+    except UnregisteredHookError as e:
+        log('Unknown hook {} - skipping.'.format(e))
 
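The `utils.do_hooks` dispatch table is replaced above by charm-helpers' `Hooks` registry: decorators map hook names to functions and `execute()` routes on the basename of `argv[0]`. A stripped-down sketch of that mechanism (illustrative only; the real class lives in `charmhelpers.core.hookenv`):

```python
import os


class UnregisteredHookError(Exception):
    pass


class Hooks(object):
    """Minimal sketch of the decorator-based hook registry."""
    def __init__(self):
        self._hooks = {}

    def hook(self, *hook_names):
        # One function may serve several hooks, as mon_relation does for
        # mon-relation-joined and mon-relation-departed
        def wrapper(decorated):
            for name in hook_names:
                self._hooks[name] = decorated
            return decorated
        return wrapper

    def execute(self, args):
        # Juju invokes the charm via a symlink named after the hook
        hook_name = os.path.basename(args[0])
        if hook_name not in self._hooks:
            raise UnregisteredHookError(hook_name)
        self._hooks[hook_name]()


hooks = Hooks()
calls = []


@hooks.hook('mon-relation-departed', 'mon-relation-joined')
def mon_relation():
    calls.append('mon_relation')


hooks.execute(['hooks/mon-relation-joined'])
print(calls)  # ['mon_relation']
```

An unknown hook name raises `UnregisteredHookError`, which the `__main__` block above catches and logs instead of exiting non-zero.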
=== modified file 'hooks/utils.py'
--- hooks/utils.py 2013-02-08 11:09:00 +0000
+++ hooks/utils.py 2013-07-08 08:34:31 +0000
@@ -7,97 +7,41 @@
 # Paul Collins <paul.collins@canonical.com>
 #
 
-import os
-import subprocess
 import socket
-import sys
 import re
-
-
-def do_hooks(hooks):
-    hook = os.path.basename(sys.argv[0])
-
-    try:
-        hook_func = hooks[hook]
-    except KeyError:
-        juju_log('INFO',
-                 "This charm doesn't know how to handle '{}'.".format(hook))
-    else:
-        hook_func()
-
-
-def install(*pkgs):
-    cmd = [
-        'apt-get',
-        '-y',
-        'install'
-    ]
-    for pkg in pkgs:
-        cmd.append(pkg)
-    subprocess.check_call(cmd)
+from charmhelpers.core.hookenv import (
+    unit_get,
+    cached
+)
+from charmhelpers.core.host import (
+    apt_install,
+    filter_installed_packages
+)
 
 TEMPLATES_DIR = 'templates'
 
 try:
     import jinja2
 except ImportError:
-    install('python-jinja2')
+    apt_install(filter_installed_packages(['python-jinja2']),
+                fatal=True)
     import jinja2
 
 try:
     import dns.resolver
 except ImportError:
-    install('python-dnspython')
+    apt_install(filter_installed_packages(['python-dnspython']),
+                fatal=True)
     import dns.resolver
 
 
 def render_template(template_name, context, template_dir=TEMPLATES_DIR):
     templates = jinja2.Environment(
-        loader=jinja2.FileSystemLoader(template_dir)
-    )
+        loader=jinja2.FileSystemLoader(template_dir))
     template = templates.get_template(template_name)
     return template.render(context)
 
 
-CLOUD_ARCHIVE = \
-""" # Ubuntu Cloud Archive
-deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
-"""
-
-
-def configure_source():
-    source = str(config_get('source'))
-    if not source:
-        return
-    if source.startswith('ppa:'):
-        cmd = [
-            'add-apt-repository',
-            source
-        ]
-        subprocess.check_call(cmd)
-    if source.startswith('cloud:'):
-        install('ubuntu-cloud-keyring')
-        pocket = source.split(':')[1]
-        with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
-            apt.write(CLOUD_ARCHIVE.format(pocket))
-    if source.startswith('http:'):
-        with open('/etc/apt/sources.list.d/ceph.list', 'w') as apt:
-            apt.write("deb " + source + "\n")
-        key = config_get('key')
-        if key:
-            cmd = [
-                'apt-key',
-                'adv', '--keyserver keyserver.ubuntu.com',
-                '--recv-keys', key
-            ]
-            subprocess.check_call(cmd)
-    cmd = [
-        'apt-get',
-        'update'
-    ]
-    subprocess.check_call(cmd)
-
-
 def enable_pocket(pocket):
     apt_sources = "/etc/apt/sources.list"
     with open(apt_sources, "r") as sources:
@@ -109,105 +53,15 @@
         else:
             sources.write(line)
 
-# Protocols
-TCP = 'TCP'
-UDP = 'UDP'
-
-
-def expose(port, protocol='TCP'):
-    cmd = [
-        'open-port',
-        '{}/{}'.format(port, protocol)
-    ]
-    subprocess.check_call(cmd)
-
-
-def juju_log(severity, message):
-    cmd = [
-        'juju-log',
-        '--log-level', severity,
-        message
-    ]
-    subprocess.check_call(cmd)
-
-
-def relation_ids(relation):
-    cmd = [
-        'relation-ids',
-        relation
-    ]
-    return subprocess.check_output(cmd).split()  # IGNORE:E1103
-
-
-def relation_list(rid):
-    cmd = [
-        'relation-list',
-        '-r', rid,
-    ]
-    return subprocess.check_output(cmd).split()  # IGNORE:E1103
-
-
-def relation_get(attribute, unit=None, rid=None):
-    cmd = [
-        'relation-get',
-    ]
-    if rid:
-        cmd.append('-r')
-        cmd.append(rid)
-    cmd.append(attribute)
-    if unit:
-        cmd.append(unit)
-    value = str(subprocess.check_output(cmd)).strip()
-    if value == "":
-        return None
-    else:
-        return value
-
-
-def relation_set(**kwargs):
-    cmd = [
-        'relation-set'
-    ]
-    args = []
-    for k, v in kwargs.items():
-        if k == 'rid':
-            cmd.append('-r')
-            cmd.append(v)
-        else:
-            args.append('{}={}'.format(k, v))
-    cmd += args
-    subprocess.check_call(cmd)
-
-
-def unit_get(attribute):
-    cmd = [
-        'unit-get',
-        attribute
-    ]
-    value = str(subprocess.check_output(cmd)).strip()
-    if value == "":
-        return None
-    else:
-        return value
-
-
-def config_get(attribute):
-    cmd = [
-        'config-get',
-        attribute
-    ]
-    value = str(subprocess.check_output(cmd)).strip()
-    if value == "":
-        return None
-    else:
-        return value
-
-
+
+@cached
 def get_unit_hostname():
     return socket.gethostname()
 
 
-def get_host_ip(hostname=unit_get('private-address')):
+@cached
+def get_host_ip(hostname=None):
+    hostname = hostname or unit_get('private-address')
     try:
         # Test to see if already an IPv4 address
         socket.inet_aton(hostname)
 
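`get_host_ip` first checks whether its argument already parses as a dotted-quad IPv4 address and only falls back to a lookup otherwise. That branch can be sketched standalone (the charm resolves hostnames with `dns.resolver`; `socket.gethostbyname` is used here only as a stdlib stand-in):

```python
import socket


def get_host_ip(hostname):
    try:
        # inet_aton succeeds for literal IPv4 addresses without any lookup
        socket.inet_aton(hostname)
        return hostname
    except socket.error:
        # Not a literal address: resolve it (stand-in for dns.resolver)
        return socket.gethostbyname(hostname)


print(get_host_ip('10.0.0.1'))  # 10.0.0.1
```

Passing a literal address through unchanged avoids a DNS round-trip for the common case where `unit-get private-address` already returns an IP.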
=== modified file 'metadata.yaml'
--- metadata.yaml 2013-04-22 19:49:09 +0000
+++ metadata.yaml 2013-07-08 08:34:31 +0000
@@ -1,7 +1,6 @@
 name: ceph
 summary: Highly scalable distributed storage
-maintainer: James Page <james.page@ubuntu.com>,
- Paul Collins <paul.collins@canonical.com>
+maintainer: James Page <james.page@ubuntu.com>
 description: |
   Ceph is a distributed storage and network file system designed to provide
   excellent performance, reliability, and scalability.
