Merge lp:~axwalk/charms/trusty/ceph/trunk into lp:~openstack-charmers-archive/charms/trusty/ceph/next
Status: Merged
Merged at revision: 124
Proposed branch: lp:~axwalk/charms/trusty/ceph/trunk
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph/next
Diff against target: 170 lines (+57/-15), 5 files modified:
  config.yaml (+2/-1), hooks/ceph_hooks.py (+37/-9),
  hooks/charmhelpers/core/hookenv.py (+2/-2),
  hooks/charmhelpers/fetch/bzrurl.py (+7/-3), metadata.yaml (+9/-0)
To merge this branch: bzr merge lp:~axwalk/charms/trusty/ceph/trunk
Related bugs: (none)
Reviews:
  James Page: Needs Fixing
  Chris Holcombe (community): Approve
Review via email: mp+276727@code.launchpad.net
Description of the change
These changes add support for Juju's new (1.25.0+) storage feature, which lets you allocate disks directly to Ceph rather than relying on a block-device whitelist. This change requires updating charm-helpers.
We add the "osd-devices" block-type storage, with a minimum of 0 and no maximum. Volumes assigned to a unit will be added as OSDs once they are attached and their block devices are visible.
To use this, you would, for example, do:
juju deploy cs:trusty/ceph -n 3 --storage osd-devices=5,50G
which will deploy 3 units of ceph, each with 5x50GiB volumes assigned.
After deployment you can add volumes with, e.g.
juju storage add ceph/0 osd-devices=50G
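For reference, the osd-devices store backing these commands is declared in metadata.yaml as a block-type store with range 0- (zero or more instances), as shown in the preview diff below:

storage:
  osd-devices:
    type: block
    multiple:
      range: 0-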
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #13157 ceph-next for axwalk mp276727
LINT FAIL: lint-test failed
LINT FAIL: charm-proof failed
LINT Results (max last 2 lines):
make: *** [lint] Error 200
ERROR:root:Make target returned non-zero.
Full lint test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote:
charm-tools isn't storage-aware yet, hence the lint fail.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #7723 ceph-next for axwalk mp276727
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Chris Holcombe (xfactor973) wrote:
Code looks great. I assume some/all of the charmhelpers changes are not from you. I appreciate you doing a check on json.loads. Great work!
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #7725 ceph-next for axwalk mp276727
AMULET OK: passed
Build: http://
Andrew Wilkins (axwalk) wrote:
Chris, sorry, Launchpad comments are being swallowed by an errant email filter, so I didn't see your comment till now. The only charmhelpers changes I personally authored were the storage hookenv functions (storage_get, storage_list).
Ryan, thanks. FYI, charm tools has been updated to do storage linting since then.
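For context, storage_get and storage_list are thin wrappers around Juju's storage-get and storage-list hook tools. A minimal usage sketch, mirroring the charm's get_devices() in the preview diff below (the function name here is illustrative):

from charmhelpers.core.hookenv import storage_get, storage_list

def attached_osd_devices():
    # storage_list('osd-devices') returns the IDs of the attached
    # instances of that store; storage_get('location', sid) returns
    # the block device path Juju provisioned for a given instance.
    return [storage_get('location', sid)
            for sid in storage_list('osd-devices')]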
James Page (james-page) wrote:
We can't land this until charm-tools understands storage metadata attributes:
E: Unknown root metadata field (storage)
I however did take a run through the code; see inline comments.
The storage_get and storage_list helpers are not in line with other hookenv functions in that they use "" instead of None for optional parameters - it would be nice to get that fixed as well.
James Page (james-page) wrote:
The current configuration approach has two storage options: osd-devices and osd-journal. It would be nice to see osd-journal handled in the same way - it is typically placed on a faster SSD to smooth out write spikes and the like.
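For example (sizes illustrative), a deployment pinning journals to a dedicated volume per unit would look like:

juju deploy cs:trusty/ceph -n 3 --storage osd-devices=5,50G --storage osd-journal=8G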
James Page (james-page) wrote:
I see storage support committed but not yet released in charm-tools:
https:/
James Page (james-page) wrote:
Using charm-tools from github:
E: storage.
James Page (james-page) wrote:
Please set back to 'Needs Review' when ready - thanks!
On Fri, Nov 20, 2015 at 9:48 AM, James Page <email address hidden> wrote:
> The proposal to merge lp:~axwalk/charms/trusty/ceph/trunk into
> lp:~openstack-charmers/charms/trusty/ceph/next has been updated.
>
> Status: Needs review => Work in progress
>
> For more details, see:
> https:/
> --
> You are reviewing the proposed merge of
> lp:~axwalk/charms/trusty/ceph/trunk into
> lp:~openstack-charmers/charms/trusty/ceph/next.
>
Andrew Wilkins (axwalk) wrote:
I've fixed the proof error (proof is stricter than necessary, but follows the docs), and added an optional osd-journal store. This is currently handled the same as the existing config: one-shot, at deployment time only. Storage charmers will need to update it to handle dynamic journal changes if that's ever a requirement.
I've put up another MP to charm-helpers to fix the optional parameters inconsistency, and will update this when it's merged.
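For reference, the osd-journal store added in this revision is declared in metadata.yaml with range 0-1, since the charm supports at most one journal device per unit (see the preview diff below):

  osd-journal:
    type: block
    multiple:
      range: 0-1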
James Page (james-page) wrote:
I've merged the update to charm-helpers - please sync away!
On Mon, Nov 23, 2015 at 1:56 AM, Andrew Wilkins <
<email address hidden>> wrote:
> I've fixed the proof error (proof is stricter than necessary, but follows
> the docs), and added an optional osd-journal store. This is currently
> handled the same as the existing config: one-shot, at deployment time only.
> Storage charmers will need to update it to handle dynamic journal changes
> if that's ever a requirement.
>
> I've put up another MP to charm-helpers to fix the optional parameters
> inconsistency, and will update this when it's merged.
> --
> https:/
> You are reviewing the proposed merge of
> lp:~axwalk/charms/trusty/ceph/trunk into
> lp:~openstack-charmers/charms/trusty/ceph/next.
>
Andrew Wilkins (axwalk) wrote:
Thanks, James. Updated. I've tested with the master branch of Juju, with
--storage osd-devices=3,50G --storage osd-journal=
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #13283 ceph-next for axwalk mp276727
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #14252 ceph-next for axwalk mp276727
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #8015 ceph-next for axwalk mp276727
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 124
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #8019 ceph-next for axwalk mp276727
AMULET OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #8020 ceph-next for axwalk mp276727
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #8027 ceph-next for axwalk mp276727
AMULET OK: passed
Build: http://
Andrew Wilkins (axwalk) wrote:
Ping.
James Page (james-page) wrote:
Pong - LGTM - Merged.
Preview Diff
=== modified file 'config.yaml'
--- config.yaml	2015-07-10 14:14:18 +0000
+++ config.yaml	2015-11-23 09:31:27 +0000
@@ -40,7 +40,8 @@
       The devices to format and set up as osd volumes.
       .
       These devices are the range of devices that will be checked for and
-      used across all service units.
+      used across all service units, in addition to any volumes attached
+      via the --storage flag during deployment.
       .
       For ceph >= 0.56.6 these can also be directories instead of devices - the
       charm assumes anything not starting with /dev is a directory instead.

=== modified file 'hooks/ceph_hooks.py'
--- hooks/ceph_hooks.py	2015-10-30 02:15:38 +0000
+++ hooks/ceph_hooks.py	2015-11-23 09:31:27 +0000
@@ -29,6 +29,8 @@
     relations_of_type,
     status_set,
     local_unit,
+    storage_get,
+    storage_list
 )
 from charmhelpers.core.host import (
     service_restart,
@@ -145,7 +147,7 @@
     if e_mountpoint and ceph.filesystem_mounted(e_mountpoint):
         umount(e_mountpoint)
 
-    osd_journal = config('osd-journal')
+    osd_journal = get_osd_journal()
     if (osd_journal and not os.path.exists(JOURNAL_ZAPPED) and
             os.path.exists(osd_journal)):
         ceph.zap_disk(osd_journal)
@@ -158,16 +160,36 @@
         ceph.bootstrap_monitor_cluster(config('monitor-secret'))
         ceph.wait_for_bootstrap()
 
-    if ceph.is_bootstrapped():
-        for dev in get_devices():
-            ceph.osdize(dev, config('osd-format'), config('osd-journal'),
-                        reformat_osd(), config('ignore-device-errors'))
-        ceph.start_osds(get_devices())
+    storage_changed()
 
     if relations_of_type('nrpe-external-master'):
         update_nrpe_config()
 
 
+@hooks.hook('osd-devices-storage-attached', 'osd-devices-storage-detaching')
+def storage_changed():
+    if ceph.is_bootstrapped():
+        for dev in get_devices():
+            ceph.osdize(dev, config('osd-format'), get_osd_journal(),
+                        reformat_osd(), config('ignore-device-errors'))
+        ceph.start_osds(get_devices())
+
+
+def get_osd_journal():
+    '''
+    Returns the block device path to use for the OSD journal, if any.
+
+    If there is an osd-journal storage instance attached, it will be
+    used as the journal. Otherwise, the osd-journal configuration will
+    be returned.
+    '''
+    storage_ids = storage_list('osd-journal')
+    if storage_ids:
+        # There can be at most one osd-journal storage instance.
+        return storage_get('location', storage_ids[0])
+    return config('osd-journal')
+
+
 def get_mon_hosts():
     hosts = []
     addr = get_public_addr()
@@ -207,9 +229,15 @@
 
 def get_devices():
     if config('osd-devices'):
-        return config('osd-devices').split(' ')
+        devices = config('osd-devices').split(' ')
     else:
-        return []
+        devices = []
+    # List storage instances for the 'osd-devices'
+    # store declared for this charm too, and add
+    # their block device paths to the list.
+    storage_ids = storage_list('osd-devices')
+    devices.extend((storage_get('location', s) for s in storage_ids))
+    return devices
 
 
 @hooks.hook('mon-relation-joined')
@@ -231,7 +259,7 @@
         ceph.bootstrap_monitor_cluster(config('monitor-secret'))
         ceph.wait_for_bootstrap()
         for dev in get_devices():
-            ceph.osdize(dev, config('osd-format'), config('osd-journal'),
+            ceph.osdize(dev, config('osd-format'), get_osd_journal(),
                         reformat_osd(), config('ignore-device-errors'))
         ceph.start_osds(get_devices())
         notify_osds()

=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py	2015-11-19 17:16:51 +0000
+++ hooks/charmhelpers/core/hookenv.py	2015-11-23 09:31:27 +0000
@@ -637,7 +637,7 @@
 
 
 @cached
-def storage_get(attribute="", storage_id=""):
+def storage_get(attribute=None, storage_id=None):
     """Get storage attributes"""
     _args = ['storage-get', '--format=json']
     if storage_id:
@@ -651,7 +651,7 @@
 
 
 @cached
-def storage_list(storage_name=""):
+def storage_list(storage_name=None):
     """List the storage IDs for the unit"""
     _args = ['storage-list', '--format=json']
     if storage_name:

=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py	2015-01-26 09:46:20 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py	2015-11-23 09:31:27 +0000
@@ -64,11 +64,15 @@
         except Exception as e:
             raise e
 
-    def install(self, source):
+    def install(self, source, dest=None):
         url_parts = self.parse_url(source)
         branch_name = url_parts.path.strip("/").split("/")[-1]
-        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
-                                branch_name)
+        if dest:
+            dest_dir = os.path.join(dest, branch_name)
+        else:
+            dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
+                                    branch_name)
+
         if not os.path.exists(dest_dir):
             mkdir(dest_dir, perms=0o755)
         try:

=== added symlink 'hooks/osd-devices-storage-attached'
=== target is u'ceph_hooks.py'
=== added symlink 'hooks/osd-devices-storage-detaching'
=== target is u'ceph_hooks.py'

=== modified file 'metadata.yaml'
--- metadata.yaml	2015-11-18 10:29:56 +0000
+++ metadata.yaml	2015-11-23 09:31:27 +0000
@@ -25,3 +25,12 @@
   nrpe-external-master:
     interface: nrpe-external-master
     scope: container
+storage:
+  osd-devices:
+    type: block
+    multiple:
+      range: 0-
+  osd-journal:
+    type: block
+    multiple:
+      range: 0-1