Merge lp:~axwalk/charms/trusty/ceph/trunk into lp:~openstack-charmers-archive/charms/trusty/ceph/next

Proposed by Andrew Wilkins
Status: Merged
Merged at revision: 124
Proposed branch: lp:~axwalk/charms/trusty/ceph/trunk
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph/next
Diff against target: 170 lines (+57/-15)
5 files modified
config.yaml (+2/-1)
hooks/ceph_hooks.py (+37/-9)
hooks/charmhelpers/core/hookenv.py (+2/-2)
hooks/charmhelpers/fetch/bzrurl.py (+7/-3)
metadata.yaml (+9/-0)
To merge this branch: bzr merge lp:~axwalk/charms/trusty/ceph/trunk
Reviewer                    Review Type    Date Requested    Status
James Page                                                   Needs Fixing
Chris Holcombe              community                        Approve
Review via email: mp+276727@code.launchpad.net

Description of the change

These changes add support for Juju's new (1.25.0+) storage feature, which enables you to allocate disks specifically to Ceph rather than relying on a block-device whitelist. This change requires updating charm-helpers.

We add an "osd-devices" block-type store with a minimum of 0 and no maximum. Volumes assigned to a unit will be added as OSDs once they are attached and their block devices are visible.

To use this, you would, for example, run:
   juju deploy cs:trusty/ceph -n 3 --storage osd-devices=5,50G

which will deploy 3 units of ceph, each with 5x50GiB volumes assigned.

After deployment, you can add volumes with, e.g.:
    juju storage add ceph/0 osd-devices=50G
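
For reference, the charm-side handling (see the get_devices() change in the
full diff below) boils down to mapping each attached storage instance to its
block device path; a minimal sketch:

    # Sketch only; mirrors the hooks/ceph_hooks.py changes in this MP.
    from charmhelpers.core.hookenv import config, storage_get, storage_list

    def get_devices():
        if config('osd-devices'):
            devices = config('osd-devices').split(' ')
        else:
            devices = []
        # Also include the block device path of every attached
        # 'osd-devices' storage instance.
        for storage_id in storage_list('osd-devices'):
            devices.append(storage_get('location', storage_id))
        return devices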

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #12264 ceph-next for axwalk mp276727
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/12264/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #13157 ceph-next for axwalk mp276727
    LINT FAIL: lint-test failed
    LINT FAIL: charm-proof failed

LINT Results (max last 2 lines):
make: *** [lint] Error 200
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/13110679/
Build: http://10.245.162.77:8080/job/charm_lint_check/13157/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

charm-tools isn't storage-aware yet, hence the lint fail.

https://github.com/juju/charm-tools/issues/41

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #7723 ceph-next for axwalk mp276727
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/13112876/
Build: http://10.245.162.77:8080/job/charm_amulet_test/7723/

Revision history for this message
Chris Holcombe (xfactor973) wrote :

Code looks great. I assume some/all of the charmhelpers changes are not from you. I appreciate you adding a check around json.loads. Great work!

review: Approve
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #7725 ceph-next for axwalk mp276727
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/7725/

Revision history for this message
Andrew Wilkins (axwalk) wrote :

Chris, sorry, Launchpad comments are being swallowed by an errant email filter, so I didn't see your comment until now. The only charmhelpers changes I personally authored were the storage hookenv functions (storage_get, storage_list).

Ryan, thanks. FYI, charm-tools has since been updated to lint storage metadata.

Revision history for this message
James Page (james-page) wrote :

We can't land this until charm-tools understands storage metadata attributes:

E: Unknown root metadata field (storage)

I did, however, take a run through the code; see inline comments.

The storage_get and storage_list helpers are not in line with other hookenv functions, in that they use "" instead of None for optional parameters; it would be nice to get that fixed as well.
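
A minimal sketch of the requested convention (the -s flag and JSON handling
reflect the charm-helpers implementation as I recall it; the hookenv.py hunk
below shows only the signature change):

    import json
    import subprocess

    def storage_get(attribute=None, storage_id=None):
        """Get storage attributes via the storage-get hook tool."""
        _args = ['storage-get', '--format=json']
        if storage_id:
            _args.extend(('-s', storage_id))
        if attribute:
            _args.append(attribute)
        try:
            return json.loads(subprocess.check_output(_args).decode('UTF-8'))
        except ValueError:
            return None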

review: Needs Fixing
Revision history for this message
James Page (james-page) wrote :

The current configuration approach has two storage options: osd-devices and osd-journal. It would be nice to see osd-journal handled in the same way; the journal is typically placed on a faster SSD to smooth out write spikes.

Revision history for this message
James Page (james-page) wrote :

I see storage support committed but not yet released in charm-tools:

https://github.com/juju/charm-tools/commit/6764324cef6017d41de7f0f8fd0252054d82a56e

Revision history for this message
James Page (james-page) wrote :

Using charm-tools from github:

E: storage.osd-devices.multiple.range: supported formats are: m (a fixed number), m-n (an explicit range), and m- (a minimum number)
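
For reference, the forms proof accepts, as used by the final metadata.yaml
in this MP (see the diff below):

    storage:
      osd-devices:
        type: block
        multiple:
          range: 0-     # "m-" form: at least zero, no maximum
      osd-journal:
        type: block
        multiple:
          range: 0-1    # "m-n" form: at most one journal device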

review: Needs Fixing
Revision history for this message
James Page (james-page) wrote :

Please set back to 'Needs Review' when ready - thanks!

On Fri, Nov 20, 2015 at 9:48 AM, James Page <email address hidden> wrote:

> The proposal to merge lp:~axwalk/charms/trusty/ceph/trunk into
> lp:~openstack-charmers/charms/trusty/ceph/next has been updated.
>
> Status: Needs review => Work in progress
>
> For more details, see:
> https://code.launchpad.net/~axwalk/charms/trusty/ceph/trunk/+merge/276727
> --
> You are reviewing the proposed merge of
> lp:~axwalk/charms/trusty/ceph/trunk into
> lp:~openstack-charmers/charms/trusty/ceph/next.
>

Revision history for this message
Andrew Wilkins (axwalk) wrote :

I've fixed the proof error (proof is stricter than necessary, but follows the docs), and added an optional osd-journal store. This is currently handled the same as the existing config: one-shot, at deployment time only. Storage charmers will need to update it to handle dynamic journal changes if that's ever a requirement.
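
The journal resolution ends up as below (sketch; the full version is in the
ceph_hooks.py hunk of the diff):

    from charmhelpers.core.hookenv import config, storage_get, storage_list

    def get_osd_journal():
        # Prefer an attached osd-journal storage instance over the
        # osd-journal config option.
        storage_ids = storage_list('osd-journal')
        if storage_ids:
            # metadata.yaml caps osd-journal at one instance (range: 0-1).
            return storage_get('location', storage_ids[0])
        return config('osd-journal')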

I've put up another MP to charm-helpers to fix the optional parameters inconsistency, and will update this when it's merged.

Revision history for this message
James Page (james-page) wrote :

I've merged the update to charm-helpers - please sync away!

On Mon, Nov 23, 2015 at 1:56 AM, Andrew Wilkins <
<email address hidden>> wrote:

> I've fixed the proof error (proof is stricter than necessary, but follows
> the docs), and added an optional osd-journal store. This is currently
> handled the same as the existing config: one-shot, at deployment time only.
> Storage charmers will need to update it to handle dynamic journal changes
> if that's ever a requirement.
>
> I've put up another MP to charm-helpers to fix the optional parameters
> inconsistency, and will update this when it's merged.
> --
> https://code.launchpad.net/~axwalk/charms/trusty/ceph/trunk/+merge/276727
> You are reviewing the proposed merge of
> lp:~axwalk/charms/trusty/ceph/trunk into
> lp:~openstack-charmers/charms/trusty/ceph/next.
>

Revision history for this message
Andrew Wilkins (axwalk) wrote :

Thanks, James. Updated. I've tested with the master branch of Juju, with
   --storage osd-devices=3,50G --storage osd-journal=ebs-ssd,10G

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #13283 ceph-next for axwalk mp276727
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/13283/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #14252 ceph-next for axwalk mp276727
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/14252/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8015 ceph-next for axwalk mp276727
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 124
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/13477671/
Build: http://10.245.162.77:8080/job/charm_amulet_test/8015/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8019 ceph-next for axwalk mp276727
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/8019/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8020 ceph-next for axwalk mp276727
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/13492604/
Build: http://10.245.162.77:8080/job/charm_amulet_test/8020/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8027 ceph-next for axwalk mp276727
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/8027/

Revision history for this message
Andrew Wilkins (axwalk) wrote :

Ping.

Revision history for this message
James Page (james-page) wrote :

Pong - LGTM - Merged.

Preview Diff

=== modified file 'config.yaml'
--- config.yaml 2015-07-10 14:14:18 +0000
+++ config.yaml 2015-11-23 09:31:27 +0000
@@ -40,7 +40,8 @@
       The devices to format and set up as osd volumes.
       .
       These devices are the range of devices that will be checked for and
-      used across all service units.
+      used across all service units, in addition to any volumes attached
+      via the --storage flag during deployment.
       .
       For ceph >= 0.56.6 these can also be directories instead of devices - the
       charm assumes anything not starting with /dev is a directory instead.
=== modified file 'hooks/ceph_hooks.py'
--- hooks/ceph_hooks.py 2015-10-30 02:15:38 +0000
+++ hooks/ceph_hooks.py 2015-11-23 09:31:27 +0000
@@ -29,6 +29,8 @@
     relations_of_type,
     status_set,
     local_unit,
+    storage_get,
+    storage_list
 )
 from charmhelpers.core.host import (
     service_restart,
@@ -145,7 +147,7 @@
     if e_mountpoint and ceph.filesystem_mounted(e_mountpoint):
         umount(e_mountpoint)
 
-    osd_journal = config('osd-journal')
+    osd_journal = get_osd_journal()
     if (osd_journal and not os.path.exists(JOURNAL_ZAPPED) and
             os.path.exists(osd_journal)):
         ceph.zap_disk(osd_journal)
@@ -158,16 +160,36 @@
         ceph.bootstrap_monitor_cluster(config('monitor-secret'))
         ceph.wait_for_bootstrap()
 
-    if ceph.is_bootstrapped():
-        for dev in get_devices():
-            ceph.osdize(dev, config('osd-format'), config('osd-journal'),
-                        reformat_osd(), config('ignore-device-errors'))
-        ceph.start_osds(get_devices())
+    storage_changed()
 
     if relations_of_type('nrpe-external-master'):
         update_nrpe_config()
 
 
+@hooks.hook('osd-devices-storage-attached', 'osd-devices-storage-detaching')
+def storage_changed():
+    if ceph.is_bootstrapped():
+        for dev in get_devices():
+            ceph.osdize(dev, config('osd-format'), get_osd_journal(),
+                        reformat_osd(), config('ignore-device-errors'))
+        ceph.start_osds(get_devices())
+
+
+def get_osd_journal():
+    '''
+    Returns the block device path to use for the OSD journal, if any.
+
+    If there is an osd-journal storage instance attached, it will be
+    used as the journal. Otherwise, the osd-journal configuration will
+    be returned.
+    '''
+    storage_ids = storage_list('osd-journal')
+    if storage_ids:
+        # There can be at most one osd-journal storage instance.
+        return storage_get('location', storage_ids[0])
+    return config('osd-journal')
+
+
 def get_mon_hosts():
     hosts = []
     addr = get_public_addr()
@@ -207,9 +229,15 @@
 
 def get_devices():
     if config('osd-devices'):
-        return config('osd-devices').split(' ')
+        devices = config('osd-devices').split(' ')
     else:
-        return []
+        devices = []
+    # List storage instances for the 'osd-devices'
+    # store declared for this charm too, and add
+    # their block device paths to the list.
+    storage_ids = storage_list('osd-devices')
+    devices.extend((storage_get('location', s) for s in storage_ids))
+    return devices
 
 
 @hooks.hook('mon-relation-joined')
@@ -231,7 +259,7 @@
         ceph.bootstrap_monitor_cluster(config('monitor-secret'))
         ceph.wait_for_bootstrap()
         for dev in get_devices():
-            ceph.osdize(dev, config('osd-format'), config('osd-journal'),
+            ceph.osdize(dev, config('osd-format'), get_osd_journal(),
                         reformat_osd(), config('ignore-device-errors'))
         ceph.start_osds(get_devices())
         notify_osds()
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2015-11-19 17:16:51 +0000
+++ hooks/charmhelpers/core/hookenv.py 2015-11-23 09:31:27 +0000
@@ -637,7 +637,7 @@
 
 
 @cached
-def storage_get(attribute="", storage_id=""):
+def storage_get(attribute=None, storage_id=None):
     """Get storage attributes"""
     _args = ['storage-get', '--format=json']
     if storage_id:
@@ -651,7 +651,7 @@
 
 
 @cached
-def storage_list(storage_name=""):
+def storage_list(storage_name=None):
     """List the storage IDs for the unit"""
     _args = ['storage-list', '--format=json']
     if storage_name:
=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 2015-01-26 09:46:20 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 2015-11-23 09:31:27 +0000
@@ -64,11 +64,15 @@
         except Exception as e:
             raise e
 
-    def install(self, source):
+    def install(self, source, dest=None):
         url_parts = self.parse_url(source)
         branch_name = url_parts.path.strip("/").split("/")[-1]
-        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
-                                branch_name)
+        if dest:
+            dest_dir = os.path.join(dest, branch_name)
+        else:
+            dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
+                                    branch_name)
+
         if not os.path.exists(dest_dir):
             mkdir(dest_dir, perms=0o755)
         try:
=== added symlink 'hooks/osd-devices-storage-attached'
=== target is u'ceph_hooks.py'
=== added symlink 'hooks/osd-devices-storage-detaching'
=== target is u'ceph_hooks.py'
=== modified file 'metadata.yaml'
--- metadata.yaml 2015-11-18 10:29:56 +0000
+++ metadata.yaml 2015-11-23 09:31:27 +0000
@@ -25,3 +25,12 @@
   nrpe-external-master:
     interface: nrpe-external-master
     scope: container
+storage:
+  osd-devices:
+    type: block
+    multiple:
+      range: 0-
+  osd-journal:
+    type: block
+    multiple:
+      range: 0-1
