Merge lp:~hopem/charms/trusty/ceph/ceph-broker into lp:~openstack-charmers-archive/charms/trusty/ceph/next
Status: Merged
Merged at revision: 87
Proposed branch: lp:~hopem/charms/trusty/ceph/ceph-broker
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph/next
Diff against target: 750 lines (+593/-17), 10 files modified:
  Makefile (+5/-1)
  charm-helpers-hooks.yaml (+1/-0)
  hooks/ceph_broker.py (+90/-0)
  hooks/charmhelpers/contrib/network/ip.py (+0/-2)
  hooks/charmhelpers/contrib/storage/linux/ceph.py (+388/-0)
  hooks/charmhelpers/core/services/__init__.py (+2/-2)
  hooks/charmhelpers/fetch/__init__.py (+1/-1)
  hooks/hooks.py (+32/-11)
  unit_tests/__init__.py (+2/-0)
  unit_tests/test_ceph_broker.py (+72/-0)
To merge this branch: bzr merge lp:~hopem/charms/trusty/ceph/ceph-broker
Related bugs: (none)
Reviewer: James Page - Status: Pending
Review via email: mp+242273@code.launchpad.net
This proposal supersedes a proposal from 2014-11-07.
Commit message
Description of the change
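This branch adds a versioned JSON broker API to the ceph charm: a client charm publishes a request on the 'client' relation and the lead mon unit executes it, with v1 currently supporting a 'create-pool' operation (see hooks/ceph_broker.py in the diff below). A minimal sketch of the request format the broker accepts - the pool name here is illustrative:

import json

# v1 broker request; 'api-version' is mandatory, and unknown ops are
# rejected with exit-code 1. The pool name 'cinder' is illustrative.
broker_req = json.dumps({
    'api-version': 1,
    'ops': [{'op': 'create-pool', 'name': 'cinder', 'replicas': 3}],
})

# The broker replies with a JSON-encoded dict, e.g.
#   {'exit-code': 0}                     on success
#   {'exit-code': 1, 'stderr': '...'}    on failure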
Ryan Beisner (1chb1n) wrote (posted in a previous version of this proposal):
UOSCI bot says:
charm_unit_test #766 ceph-next for hopem mp241083
UNIT OK: passed
UNIT Results (max last 5 lines):
Starting unit tests...
Name Stmts Miss Cover Missing
unit_
Ran 1 test in 0.004s
OK
Full unit test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote (posted in a previous version of this proposal):
UOSCI bot says:
charm_amulet_test #349 ceph-next for hopem mp241083
AMULET OK: passed
AMULET Results (max last 5 lines):
juju-
juju-
juju-
WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
juju-test INFO : Results: 3 passed, 0 failed, 0 errored
Full amulet test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote (posted in a previous version of this proposal):
UOSCI bot says:
charm_unit_test #807 ceph-next for hopem mp241083
UNIT OK: passed
UNIT Results (max last 5 lines):
yaml.serializer 85 85 0% 2-110
yaml.tokens 76 76 0% 2-103
TOTAL 6482 6239 4%
Ran 6 tests in 0.006s
OK
Full unit test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote (posted in a previous version of this proposal):
UOSCI bot says:
charm_lint_check #972 ceph-next for hopem mp241083
LINT OK: passed
LINT Results (max last 5 lines):
I: config.yaml: option key has no default value
I: config.yaml: option fsid has no default value
I: config.yaml: option osd-journal has no default value
I: config.yaml: option osd-reformat has no default value
I: config.yaml: option source has no default value
Full lint test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote (posted in a previous version of this proposal):
UOSCI bot says:
charm_amulet_test #353 ceph-next for hopem mp241083
AMULET OK: passed
AMULET Results (max last 5 lines):
juju-
juju-
juju-
WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
juju-test INFO : Results: 3 passed, 0 failed, 0 errored
Full amulet test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote (posted in a previous version of this proposal):
UOSCI bot says:
charm_unit_test #810 ceph-next for hopem mp241083
UNIT OK: passed
UNIT Results (max last 5 lines):
yaml.serializer 85 85 0% 2-110
yaml.tokens 76 76 0% 2-103
TOTAL 6482 6239 4%
Ran 6 tests in 0.006s
OK
Full unit test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote (posted in a previous version of this proposal):
UOSCI bot says:
charm_lint_check #975 ceph-next for hopem mp241083
LINT OK: passed
LINT Results (max last 5 lines):
I: config.yaml: option key has no default value
I: config.yaml: option fsid has no default value
I: config.yaml: option osd-journal has no default value
I: config.yaml: option osd-reformat has no default value
I: config.yaml: option source has no default value
Full lint test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote (posted in a previous version of this proposal):
UOSCI bot says:
charm_amulet_test #356 ceph-next for hopem mp241083
AMULET OK: passed
AMULET Results (max last 5 lines):
juju-
juju-
juju-
WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
juju-test INFO : Results: 3 passed, 0 failed, 0 errored
Full amulet test output: http://
Build: http://
James Page (james-page) wrote (posted in a previous version of this proposal):
Ed
This all looks OK - just a couple of nits on log levels.
Edward Hope-Morley (hopem) wrote (posted in a previous version of this proposal):
Fixed.
uosci-testing-bot (uosci-testing-bot) wrote:
UOSCI bot says:
charm_unit_test #968 ceph-next for hopem mp242273
UNIT OK: passed
UNIT Results (max last 5 lines):
yaml.serializer 85 85 0% 2-110
yaml.tokens 76 76 0% 2-103
TOTAL 6482 6239 4%
Ran 6 tests in 0.006s
OK
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
UOSCI bot says:
charm_lint_check #1134 ceph-next for hopem mp242273
LINT OK: passed
LINT Results (max last 5 lines):
I: config.yaml: option key has no default value
I: config.yaml: option fsid has no default value
I: config.yaml: option osd-journal has no default value
I: config.yaml: option osd-reformat has no default value
I: config.yaml: option source has no default value
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
UOSCI bot says:
charm_amulet_test #476 ceph-next for hopem mp242273
AMULET OK: passed
AMULET Results (max last 5 lines):
juju-
juju-
juju-
WARNING cannot delete security group "juju-osci-sv11-0". Used by another environment?
juju-test INFO : Results: 3 passed, 0 failed, 0 errored
Full amulet test output: http://
Build: http://
Preview Diff
=== modified file 'Makefile'
--- Makefile	2014-09-27 18:15:47 +0000
+++ Makefile	2014-11-19 21:34:52 +0000
@@ -2,9 +2,13 @@
 PYTHON := /usr/bin/env python
 
 lint:
-	@flake8 --exclude hooks/charmhelpers hooks tests
+	@flake8 --exclude hooks/charmhelpers hooks tests unit_tests
 	@charm proof
 
+unit_test:
+	@echo Starting unit tests...
+	@$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
+
 test:
 	@echo Starting Amulet tests...
 	# coreycb note: The -v should only be temporary until Amulet sends
 
=== modified file 'charm-helpers-hooks.yaml'
--- charm-helpers-hooks.yaml	2014-09-23 10:01:29 +0000
+++ charm-helpers-hooks.yaml	2014-11-19 21:34:52 +0000
@@ -5,6 +5,7 @@
     - fetch
     - contrib.storage.linux:
         - utils
+        - ceph
     - payload.execd
     - contrib.openstack.alternatives
     - contrib.network.ip
 
=== added file 'hooks/__init__.py'
=== added file 'hooks/ceph_broker.py'
--- hooks/ceph_broker.py	1970-01-01 00:00:00 +0000
+++ hooks/ceph_broker.py	2014-11-19 21:34:52 +0000
@@ -0,0 +1,90 @@
+#!/usr/bin/python
+#
+# Copyright 2014 Canonical Ltd.
+#
+import json
+
+from charmhelpers.core.hookenv import (
+    log,
+    DEBUG,
+    INFO,
+    ERROR,
+)
+from charmhelpers.contrib.storage.linux.ceph import (
+    create_pool,
+    pool_exists,
+)
+
+
+def decode_req_encode_rsp(f):
+    """Decorator to decode incoming requests and encode responses."""
+    def decode_inner(req):
+        return json.dumps(f(json.loads(req)))
+
+    return decode_inner
+
+
+@decode_req_encode_rsp
+def process_requests(reqs):
+    """Process Ceph broker request(s).
+
+    This is a versioned api. API version must be supplied by the client making
+    the request.
+    """
+    try:
+        version = reqs.get('api-version')
+        if version == 1:
+            return process_requests_v1(reqs['ops'])
+
+    except Exception as exc:
+        log(str(exc), level=ERROR)
+        msg = ("Unexpected error occurred while processing requests: %s" %
+               (reqs))
+        log(msg, level=ERROR)
+        return {'exit-code': 1, 'stderr': msg}
+
+    msg = ("Missing or invalid api version (%s)" % (version))
+    return {'exit-code': 1, 'stderr': msg}
+
+
+def process_requests_v1(reqs):
+    """Process v1 requests.
+
+    Takes a list of requests (dicts) and processes each one. If an error is
+    found, processing stops and the client is notified in the response.
+
+    Returns a response dict containing the exit code (non-zero if any
+    operation failed along with an explanation).
+    """
+    log("Processing %s ceph broker requests" % (len(reqs)), level=INFO)
+    for req in reqs:
+        op = req.get('op')
+        log("Processing op='%s'" % (op), level=DEBUG)
+        # Use admin client since we do not have other client key locations
+        # setup to use them for these operations.
+        svc = 'admin'
+        if op == "create-pool":
+            params = {'pool': req.get('name'),
+                      'replicas': req.get('replicas')}
+            if not all(params.iteritems()):
+                msg = ("Missing parameter(s): %s" %
+                       (' '.join([k for k in params.iterkeys()
+                                  if not params[k]])))
+                log(msg, level=ERROR)
+                return {'exit-code': 1, 'stderr': msg}
+
+            pool = params['pool']
+            replicas = params['replicas']
+            if not pool_exists(service=svc, name=pool):
+                log("Creating pool '%s' (replicas=%s)" % (pool, replicas),
+                    level=INFO)
+                create_pool(service=svc, name=pool, replicas=replicas)
+            else:
+                log("Pool '%s' already exists - skipping create" % (pool),
+                    level=DEBUG)
+        else:
+            msg = "Unknown operation '%s'" % (op)
+            log(msg, level=ERROR)
+            return {'exit-code': 1, 'stderr': msg}
+
+    return {'exit-code': 0}
 
=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
--- hooks/charmhelpers/contrib/network/ip.py	2014-10-21 07:25:29 +0000
+++ hooks/charmhelpers/contrib/network/ip.py	2014-11-19 21:34:52 +0000
@@ -8,7 +8,6 @@
 from charmhelpers.core.hookenv import unit_get
 from charmhelpers.fetch import apt_install
 from charmhelpers.core.hookenv import (
-    WARNING,
     ERROR,
     log
 )
@@ -175,7 +174,6 @@
     if is_ipv6(address):
         address = "[%s]" % address
     else:
-        log("Not a valid ipv6 address: %s" % address, level=WARNING)
         address = None
 
     return address
 
=== added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py	1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py	2014-11-19 21:34:52 +0000
@@ -0,0 +1,388 @@
+#
+# Copyright 2012 Canonical Ltd.
+#
+# This file is sourced from lp:openstack-charm-helpers
+#
+# Authors:
+#  James Page <james.page@ubuntu.com>
+#  Adam Gandelman <adamg@ubuntu.com>
+#
+
+import os
+import shutil
+import json
+import time
+
+from subprocess import (
+    check_call,
+    check_output,
+    CalledProcessError
+)
+
+from charmhelpers.core.hookenv import (
+    relation_get,
+    relation_ids,
+    related_units,
+    log,
+    INFO,
+    WARNING,
+    ERROR
+)
+
+from charmhelpers.core.host import (
+    mount,
+    mounts,
+    service_start,
+    service_stop,
+    service_running,
+    umount,
+)
+
+from charmhelpers.fetch import (
+    apt_install,
+)
+
+KEYRING = '/etc/ceph/ceph.client.{}.keyring'
+KEYFILE = '/etc/ceph/ceph.client.{}.key'
+
+CEPH_CONF = """[global]
+    auth supported = {auth}
+    keyring = {keyring}
+    mon host = {mon_hosts}
+    log to syslog = {use_syslog}
+    err to syslog = {use_syslog}
+    clog to syslog = {use_syslog}
+"""
+
+
+def install():
+    ''' Basic Ceph client installation '''
+    ceph_dir = "/etc/ceph"
+    if not os.path.exists(ceph_dir):
+        os.mkdir(ceph_dir)
+    apt_install('ceph-common', fatal=True)
+
+
+def rbd_exists(service, pool, rbd_img):
+    ''' Check to see if a RADOS block device exists '''
+    try:
+        out = check_output(['rbd', 'list', '--id', service,
+                            '--pool', pool])
+    except CalledProcessError:
+        return False
+    else:
+        return rbd_img in out
+
+
+def create_rbd_image(service, pool, image, sizemb):
+    ''' Create a new RADOS block device '''
+    cmd = [
+        'rbd',
+        'create',
+        image,
+        '--size',
+        str(sizemb),
+        '--id',
+        service,
+        '--pool',
+        pool
+    ]
+    check_call(cmd)
+
+
+def pool_exists(service, name):
+    ''' Check to see if a RADOS pool already exists '''
+    try:
+        out = check_output(['rados', '--id', service, 'lspools'])
+    except CalledProcessError:
+        return False
+    else:
+        return name in out
+
+
+def get_osds(service):
+    '''
+    Return a list of all Ceph Object Storage Daemons
+    currently in the cluster
+    '''
+    version = ceph_version()
+    if version and version >= '0.56':
+        return json.loads(check_output(['ceph', '--id', service,
+                                        'osd', 'ls', '--format=json']))
+    else:
+        return None
+
+
+def create_pool(service, name, replicas=3):
+    ''' Create a new RADOS pool '''
+    if pool_exists(service, name):
+        log("Ceph pool {} already exists, skipping creation".format(name),
+            level=WARNING)
+        return
+    # Calculate the number of placement groups based
+    # on upstream recommended best practices.
+    osds = get_osds(service)
+    if osds:
+        pgnum = (len(osds) * 100 / replicas)
+    else:
+        # NOTE(james-page): Default to 200 for older ceph versions
+        # which don't support OSD query from cli
+        pgnum = 200
+    cmd = [
+        'ceph', '--id', service,
+        'osd', 'pool', 'create',
+        name, str(pgnum)
+    ]
+    check_call(cmd)
+    cmd = [
+        'ceph', '--id', service,
+        'osd', 'pool', 'set', name,
+        'size', str(replicas)
+    ]
+    check_call(cmd)
+
+
+def delete_pool(service, name):
+    ''' Delete a RADOS pool from ceph '''
+    cmd = [
+        'ceph', '--id', service,
+        'osd', 'pool', 'delete',
+        name, '--yes-i-really-really-mean-it'
+    ]
+    check_call(cmd)
+
+
+def _keyfile_path(service):
+    return KEYFILE.format(service)
+
+
+def _keyring_path(service):
+    return KEYRING.format(service)
+
+
+def create_keyring(service, key):
+    ''' Create a new Ceph keyring containing key'''
+    keyring = _keyring_path(service)
+    if os.path.exists(keyring):
+        log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
+        return
+    cmd = [
+        'ceph-authtool',
+        keyring,
+        '--create-keyring',
+        '--name=client.{}'.format(service),
+        '--add-key={}'.format(key)
+    ]
+    check_call(cmd)
+    log('ceph: Created new ring at %s.' % keyring, level=INFO)
+
+
+def create_key_file(service, key):
+    ''' Create a file containing key '''
+    keyfile = _keyfile_path(service)
+    if os.path.exists(keyfile):
+        log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
+        return
+    with open(keyfile, 'w') as fd:
+        fd.write(key)
+    log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
+
+
+def get_ceph_nodes():
+    ''' Query named relation 'ceph' to detemine current nodes '''
+    hosts = []
+    for r_id in relation_ids('ceph'):
+        for unit in related_units(r_id):
+            hosts.append(relation_get('private-address', unit=unit, rid=r_id))
+    return hosts
+
+
+def configure(service, key, auth, use_syslog):
+    ''' Perform basic configuration of Ceph '''
+    create_keyring(service, key)
+    create_key_file(service, key)
+    hosts = get_ceph_nodes()
+    with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
+        ceph_conf.write(CEPH_CONF.format(auth=auth,
+                                         keyring=_keyring_path(service),
+                                         mon_hosts=",".join(map(str, hosts)),
+                                         use_syslog=use_syslog))
+    modprobe('rbd')
+
+
+def image_mapped(name):
+    ''' Determine whether a RADOS block device is mapped locally '''
+    try:
+        out = check_output(['rbd', 'showmapped'])
+    except CalledProcessError:
+        return False
+    else:
+        return name in out
+
+
+def map_block_storage(service, pool, image):
+    ''' Map a RADOS block device for local use '''
+    cmd = [
+        'rbd',
+        'map',
+        '{}/{}'.format(pool, image),
+        '--user',
+        service,
+        '--secret',
+        _keyfile_path(service),
+    ]
+    check_call(cmd)
+
+
+def filesystem_mounted(fs):
+    ''' Determine whether a filesytems is already mounted '''
+    return fs in [f for f, m in mounts()]
+
+
+def make_filesystem(blk_device, fstype='ext4', timeout=10):
+    ''' Make a new filesystem on the specified block device '''
+    count = 0
+    e_noent = os.errno.ENOENT
+    while not os.path.exists(blk_device):
+        if count >= timeout:
+            log('ceph: gave up waiting on block device %s' % blk_device,
+                level=ERROR)
+            raise IOError(e_noent, os.strerror(e_noent), blk_device)
+        log('ceph: waiting for block device %s to appear' % blk_device,
+            level=INFO)
+        count += 1
+        time.sleep(1)
+    else:
+        log('ceph: Formatting block device %s as filesystem %s.' %
+            (blk_device, fstype), level=INFO)
+        check_call(['mkfs', '-t', fstype, blk_device])
+
+
+def place_data_on_block_device(blk_device, data_src_dst):
+    ''' Migrate data in data_src_dst to blk_device and then remount '''
+    # mount block device into /mnt
+    mount(blk_device, '/mnt')
+    # copy data to /mnt
+    copy_files(data_src_dst, '/mnt')
+    # umount block device
+    umount('/mnt')
+    # Grab user/group ID's from original source
+    _dir = os.stat(data_src_dst)
+    uid = _dir.st_uid
+    gid = _dir.st_gid
+    # re-mount where the data should originally be
+    # TODO: persist is currently a NO-OP in core.host
+    mount(blk_device, data_src_dst, persist=True)
+    # ensure original ownership of new mount.
+    os.chown(data_src_dst, uid, gid)
+
+
+# TODO: re-use
+def modprobe(module):
+    ''' Load a kernel module and configure for auto-load on reboot '''
+    log('ceph: Loading kernel module', level=INFO)
+    cmd = ['modprobe', module]
+    check_call(cmd)
+    with open('/etc/modules', 'r+') as modules:
+        if module not in modules.read():
+            modules.write(module)
+
+
+def copy_files(src, dst, symlinks=False, ignore=None):
+    ''' Copy files from src to dst '''
+    for item in os.listdir(src):
+        s = os.path.join(src, item)
+        d = os.path.join(dst, item)
+        if os.path.isdir(s):
+            shutil.copytree(s, d, symlinks, ignore)
+        else:
+            shutil.copy2(s, d)
+
+
+def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
+                        blk_device, fstype, system_services=[],
+                        replicas=3):
+    """
+    NOTE: This function must only be called from a single service unit for
+    the same rbd_img otherwise data loss will occur.
+
+    Ensures given pool and RBD image exists, is mapped to a block device,
+    and the device is formatted and mounted at the given mount_point.
+
+    If formatting a device for the first time, data existing at mount_point
+    will be migrated to the RBD device before being re-mounted.
+
+    All services listed in system_services will be stopped prior to data
+    migration and restarted when complete.
+    """
+    # Ensure pool, RBD image, RBD mappings are in place.
+    if not pool_exists(service, pool):
+        log('ceph: Creating new pool {}.'.format(pool))
+        create_pool(service, pool, replicas=replicas)
+
+    if not rbd_exists(service, pool, rbd_img):
+        log('ceph: Creating RBD image ({}).'.format(rbd_img))
+        create_rbd_image(service, pool, rbd_img, sizemb)
+
+    if not image_mapped(rbd_img):
+        log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
+        map_block_storage(service, pool, rbd_img)
+
+    # make file system
+    # TODO: What happens if for whatever reason this is run again and
+    # the data is already in the rbd device and/or is mounted??
+    # When it is mounted already, it will fail to make the fs
+    # XXX: This is really sketchy! Need to at least add an fstab entry
+    # otherwise this hook will blow away existing data if its executed
+    # after a reboot.
+    if not filesystem_mounted(mount_point):
+        make_filesystem(blk_device, fstype)
+
+        for svc in system_services:
+            if service_running(svc):
+                log('ceph: Stopping services {} prior to migrating data.'
+                    .format(svc))
+                service_stop(svc)
+
+        place_data_on_block_device(blk_device, mount_point)
+
+        for svc in system_services:
+            log('ceph: Starting service {} after migrating data.'
+                .format(svc))
+            service_start(svc)
+
+
+def ensure_ceph_keyring(service, user=None, group=None):
+    '''
+    Ensures a ceph keyring is created for a named service
+    and optionally ensures user and group ownership.
+
+    Returns False if no ceph key is available in relation state.
+    '''
+    key = None
+    for rid in relation_ids('ceph'):
+        for unit in related_units(rid):
+            key = relation_get('key', rid=rid, unit=unit)
+            if key:
+                break
+    if not key:
+        return False
+    create_keyring(service=service, key=key)
+    keyring = _keyring_path(service)
+    if user and group:
+        check_call(['chown', '%s.%s' % (user, group), keyring])
+    return True
+
+
+def ceph_version():
+    ''' Retrieve the local version of ceph '''
+    if os.path.exists('/usr/bin/ceph'):
+        cmd = ['ceph', '-v']
+        output = check_output(cmd)
+        output = output.split()
+        if len(output) > 3:
+            return output[2]
+        else:
+            return None
+    else:
+        return None
 
=== modified file 'hooks/charmhelpers/core/services/__init__.py'
--- hooks/charmhelpers/core/services/__init__.py	2014-08-25 18:42:17 +0000
+++ hooks/charmhelpers/core/services/__init__.py	2014-11-19 21:34:52 +0000
@@ -1,2 +1,2 @@
-from .base import *
-from .helpers import *
+from .base import *  # NOQA
+from .helpers import *  # NOQA
 
=== modified file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py	2014-10-21 07:25:29 +0000
+++ hooks/charmhelpers/fetch/__init__.py	2014-11-19 21:34:52 +0000
@@ -256,7 +256,7 @@
     elif source == 'distro':
         pass
     else:
-        raise SourceConfigError("Unknown source: {!r}".format(source))
+        log("Unknown source: {!r}".format(source))
 
     if key:
         if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
 
=== added symlink 'hooks/client-relation-changed'
=== target is u'hooks.py'
=== modified file 'hooks/hooks.py'
--- hooks/hooks.py	2014-09-30 03:06:10 +0000
+++ hooks/hooks.py	2014-11-19 21:34:52 +0000
@@ -15,7 +15,9 @@
 
 import ceph
 from charmhelpers.core.hookenv import (
-    log, ERROR,
+    log,
+    DEBUG,
+    ERROR,
     config,
     relation_ids,
     related_units,
@@ -25,7 +27,6 @@
     Hooks, UnregisteredHookError,
     service_name
 )
-
 from charmhelpers.core.host import (
     service_restart,
     umount,
@@ -44,12 +45,14 @@
     get_ipv6_addr,
     format_ipv6_addr
 )
-
 from utils import (
     render_template,
     get_public_addr,
     assert_charm_supports_ipv6
 )
+from ceph_broker import (
+    process_requests
+)
 
 hooks = Hooks()
 
@@ -215,7 +218,7 @@
 
 def notify_client():
     for relid in relation_ids('client'):
-        client_relation(relid)
+        client_relation_joined(relid)
 
 
 def upgrade_keys():
@@ -266,28 +269,46 @@
 
 
 @hooks.hook('client-relation-joined')
-def client_relation(relid=None):
+def client_relation_joined(relid=None):
     if ceph.is_quorum():
         log('mon cluster in quorum - providing client with keys')
         service_name = None
         if relid is None:
-            service_name = remote_unit().split('/')[0]
+            units = [remote_unit()]
+            service_name = units[0].split('/')[0]
         else:
             units = related_units(relid)
            if len(units) > 0:
                service_name = units[0].split('/')[0]
+
         if service_name is not None:
-            data = {
-                'key': ceph.get_named_key(service_name),
-                'auth': config('auth-supported'),
-                'ceph-public-address': get_public_addr(),
-            }
+            data = {'key': ceph.get_named_key(service_name),
+                    'auth': config('auth-supported'),
+                    'ceph-public-address': get_public_addr()}
             relation_set(relation_id=relid,
                          relation_settings=data)
+
+            client_relation_changed(relid=relid)
     else:
        log('mon cluster not in quorum - deferring key provision')
 
 
+@hooks.hook('client-relation-changed')
+def client_relation_changed(relid=None):
+    """Process broker requests from ceph client relations."""
+    if ceph.is_quorum():
+        settings = relation_get(rid=relid)
+        if 'broker_req' in settings:
+            if not ceph.is_leader():
+                log("Not leader - ignoring broker request", level=DEBUG)
+            else:
+                rsp = process_requests(settings['broker_req'])
+                relation_set(relation_id=relid,
+                             relation_settings={'broker_rsp': rsp})
+    else:
+        log('mon cluster not in quorum', level=DEBUG)
+
+
 @hooks.hook('upgrade-charm')
 def upgrade_charm():
     emit_cephconf()
 
=== added directory 'unit_tests'
=== added file 'unit_tests/__init__.py'
--- unit_tests/__init__.py	1970-01-01 00:00:00 +0000
+++ unit_tests/__init__.py	2014-11-19 21:34:52 +0000
@@ -0,0 +1,2 @@
+import sys
+sys.path.append('hooks')
 
=== added file 'unit_tests/test_ceph_broker.py'
--- unit_tests/test_ceph_broker.py	1970-01-01 00:00:00 +0000
+++ unit_tests/test_ceph_broker.py	2014-11-19 21:34:52 +0000
@@ -0,0 +1,72 @@
+import json
+import mock
+import unittest
+
+import ceph_broker
+
+
+class CephBrokerTestCase(unittest.TestCase):
+
+    def setUp(self):
+        super(CephBrokerTestCase, self).setUp()
+
+    @mock.patch('ceph_broker.log')
+    def test_process_requests_noop(self, mock_log):
+        req = json.dumps({'api-version': 1, 'ops': []})
+        rc = ceph_broker.process_requests(req)
+        self.assertEqual(json.loads(rc), {'exit-code': 0})
+
+    @mock.patch('ceph_broker.log')
+    def test_process_requests_missing_api_version(self, mock_log):
+        req = json.dumps({'ops': []})
+        rc = ceph_broker.process_requests(req)
+        self.assertEqual(json.loads(rc), {'exit-code': 1,
+                                          'stderr':
+                                          ('Missing or invalid api version '
+                                           '(None)')})
+
+    @mock.patch('ceph_broker.log')
+    def test_process_requests_invalid_api_version(self, mock_log):
+        req = json.dumps({'api-version': 2, 'ops': []})
+        rc = ceph_broker.process_requests(req)
+        self.assertEqual(json.loads(rc),
+                         {'exit-code': 1,
+                          'stderr': 'Missing or invalid api version (2)'})
+
+    @mock.patch('ceph_broker.log')
+    def test_process_requests_invalid(self, mock_log):
+        reqs = json.dumps({'api-version': 1, 'ops': [{'op': 'invalid_op'}]})
+        rc = ceph_broker.process_requests(reqs)
+        self.assertEqual(json.loads(rc),
+                         {'exit-code': 1,
+                          'stderr': "Unknown operation 'invalid_op'"})
+
+    @mock.patch('ceph_broker.create_pool')
+    @mock.patch('ceph_broker.pool_exists')
+    @mock.patch('ceph_broker.log')
+    def test_process_requests_create_pool(self, mock_log, mock_pool_exists,
+                                          mock_create_pool):
+        mock_pool_exists.return_value = False
+        reqs = json.dumps({'api-version': 1,
+                           'ops': [{'op': 'create-pool', 'name':
+                                    'foo', 'replicas': 3}]})
+        rc = ceph_broker.process_requests(reqs)
+        mock_pool_exists.assert_called_with(service='admin', name='foo')
+        mock_create_pool.assert_called_with(service='admin', name='foo',
+                                            replicas=3)
+        self.assertEqual(json.loads(rc), {'exit-code': 0})
+
+    @mock.patch('ceph_broker.create_pool')
+    @mock.patch('ceph_broker.pool_exists')
+    @mock.patch('ceph_broker.log')
+    def test_process_requests_create_pool_exists(self, mock_log,
+                                                 mock_pool_exists,
+                                                 mock_create_pool):
+        mock_pool_exists.return_value = True
+        reqs = json.dumps({'api-version': 1,
+                           'ops': [{'op': 'create-pool', 'name': 'foo',
+                                    'replicas': 3}]})
+        rc = ceph_broker.process_requests(reqs)
+        mock_pool_exists.assert_called_with(service='admin', name='foo')
+        self.assertFalse(mock_create_pool.called)
+        self.assertEqual(json.loads(rc), {'exit-code': 0})
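For reference, a hedged sketch of how a consuming charm could drive this broker over the 'client' relation. The 'broker_req' and 'broker_rsp' settings keys come from client_relation_changed() in the diff above; the helper function names here are hypothetical:

import json

from charmhelpers.core.hookenv import relation_get, relation_set


def send_create_pool_request(relid, pool, replicas=3):
    """Publish a v1 broker request for the ceph-mon leader to process."""
    req = json.dumps({'api-version': 1,
                      'ops': [{'op': 'create-pool', 'name': pool,
                               'replicas': replicas}]})
    # client-relation-changed on the mon units picks up 'broker_req'.
    relation_set(relation_id=relid, relation_settings={'broker_req': req})


def broker_request_succeeded(relid, unit):
    """Return True once the given mon unit has published a successful response."""
    rsp = relation_get('broker_rsp', rid=relid, unit=unit)
    if not rsp:
        return False  # not processed yet (or that unit is not the leader)
    return json.loads(rsp).get('exit-code') == 0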
UOSCI bot says:
charm_lint_check #931 ceph-next for hopem mp241083
LINT OK: passed
LINT Results (max last 5 lines):
I: config.yaml: option key has no default value
I: config.yaml: option fsid has no default value
I: config.yaml: option osd-journal has no default value
I: config.yaml: option osd-reformat has no default value
I: config.yaml: option source has no default value
Full lint test output: http://paste.ubuntu.com/8868311/
Build: http://10.98.191.181:8080/job/charm_lint_check/931/