Merge lp:~axwalk/charms/trusty/ceph/trunk into lp:~openstack-charmers-archive/charms/trusty/ceph/next
Status: Merged
Merged at revision: 124
Proposed branch: lp:~axwalk/charms/trusty/ceph/trunk
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph/next
Diff against target: 170 lines (+57/-15), 5 files modified:
- config.yaml (+2/-1)
- hooks/ceph_hooks.py (+37/-9)
- hooks/charmhelpers/core/hookenv.py (+2/-2)
- hooks/charmhelpers/fetch/bzrurl.py (+7/-3)
- metadata.yaml (+9/-0)
To merge this branch: bzr merge lp:~axwalk/charms/trusty/ceph/trunk
Related bugs: none listed

| Reviewer | Review Type | Status |
|---|---|---|
| James Page | | Needs Fixing |
| Chris Holcombe | community | Approve |

Review via email:
Commit message
Description of the change
These changes add support for Juju's new (1.25.0+) storage feature, which lets you allocate disks specifically to Ceph rather than relying on a block-device whitelist. This change requires updating charm-helpers.
We add an "osd-devices" block-type store with a minimum count of 0 and no maximum. Volumes assigned to a unit will be added as OSDs once they are attached and their block devices are visible.
To use this, you would, for example, run:
juju deploy cs:trusty/ceph -n 3 --storage osd-devices=5,50G
which deploys 3 units of ceph, each with five 50GiB volumes assigned.
After deployment, you can add further volumes with, e.g.:
juju storage add ceph/0 osd-devices=50G
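For reference, this is the storage declaration the MP adds to metadata.yaml as finally merged (see the preview diff); `range: 0-` means any number of osd-devices volumes may be attached, while osd-journal is capped at one:

```yaml
storage:
  osd-devices:
    type: block
    multiple:
      range: 0-
  osd-journal:
    type: block
    multiple:
      range: 0-1
```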
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #13157 ceph-next for axwalk mp276727
LINT FAIL: lint-test failed
LINT FAIL: charm-proof failed
LINT Results (max last 2 lines):
make: *** [lint] Error 200
ERROR:root:Make target returned non-zero.
Full lint test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote:
charm-tools isn't storage-aware yet, hence the lint fail.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #7723 ceph-next for axwalk mp276727
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Chris Holcombe (xfactor973) wrote:
Code looks great. I assume some/all of the charmhelpers changes are not from you. I appreciate you doing a check on json.loads. Great work!
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #7725 ceph-next for axwalk mp276727
AMULET OK: passed
Build: http://
Andrew Wilkins (axwalk) wrote:
Chris, sorry; Launchpad comments were being swallowed by an errant email filter, so I didn't see your comment until now. The only charmhelpers changes I personally authored were the storage hookenv functions (storage_get, storage_list).
Ryan, thanks. FYI, charm tools has been updated to do storage linting since then.
James Page (james-page) wrote:
We can't land this until charm-tools understands storage metadata attributes:
E: Unknown root metadata field (storage)
I did, however, take a run through the code; see inline comments.
The storage_get and storage_list helpers are not in line with other hookenv functions in that they use "" instead of None for optional parameters; it would be nice to get that fixed as well.
James Page (james-page) wrote:
The current configuration approach has two storage options: osd-devices and osd-journal. It would be nice to see osd-journal handled in the same way; the journal is typically placed on a faster SSD to smooth out write spikes.
James Page (james-page) wrote:
I see storage support committed but not yet released in charm-tools:
https:/
James Page (james-page) wrote:
Using charm-tools from github:
E: storage.
James Page (james-page) wrote:
Please set back to 'Needs Review' when ready - thanks!
On Fri, Nov 20, 2015 at 9:48 AM, James Page <email address hidden> wrote:
> The proposal to merge lp:~axwalk/charms/trusty/ceph/trunk into
> lp:~openstack-charmers/charms/trusty/ceph/next has been updated.
>
> Status: Needs review => Work in progress
>
> For more details, see:
> https:/
> --
> You are reviewing the proposed merge of
> lp:~axwalk/charms/trusty/ceph/trunk into
> lp:~openstack-charmers/charms/trusty/ceph/next.
>
Andrew Wilkins (axwalk) wrote:
I've fixed the proof error (proof is stricter than necessary, but follows the docs), and added an optional osd-journal store. This is currently handled the same as the existing config: one-shot, at deployment time only. Storage charmers will need to update it to handle dynamic journal changes if that's ever a requirement.
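The journal-selection precedence described here (an attached osd-journal storage instance overrides the osd-journal config option) can be sketched as pure logic. The function and parameter names below are illustrative stand-ins, not the charm's actual helpers; the real get_osd_journal() in the diff queries storage_list/storage_get.

```python
def choose_osd_journal(attached_journals, configured_journal):
    """An attached osd-journal storage instance wins over the config
    option. metadata.yaml caps osd-journal at one instance (range: 0-1),
    so at most one attached journal device is expected."""
    if attached_journals:
        return attached_journals[0]
    return configured_journal
```

So with a storage instance attached, `choose_osd_journal(['/dev/vdb'], '/dev/sdb')` yields the attached '/dev/vdb'; with none attached, the configured device is used.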
I've put up another MP to charm-helpers to fix the optional parameters inconsistency, and will update this when it's merged.
James Page (james-page) wrote:
I've merged the update to charm-helpers - please sync away!
On Mon, Nov 23, 2015 at 1:56 AM, Andrew Wilkins <
<email address hidden>> wrote:
> I've fixed the proof error (proof is stricter than necessary, but follows
> the docs), and added an optional osd-journal store. This is currently
> handled the same as the existing config: one-shot, at deployment time only.
> Storage charmers will need to update it to handle dynamic journal changes
> if that's ever a requirement.
>
> I've put up another MP to charm-helpers to fix the optional parameters
> inconsistency, and will update this when it's merged.
> --
> https:/
> You are reviewing the proposed merge of
> lp:~axwalk/charms/trusty/ceph/trunk into
> lp:~openstack-charmers/charms/trusty/ceph/next.
>
Andrew Wilkins (axwalk) wrote:
Thanks, James. Updated. I've tested with the master branch of Juju, with
--storage osd-devices=3,50G --storage osd-journal=
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #13283 ceph-next for axwalk mp276727
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #14252 ceph-next for axwalk mp276727
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #8015 ceph-next for axwalk mp276727
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 124
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #8019 ceph-next for axwalk mp276727
AMULET OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #8020 ceph-next for axwalk mp276727
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #8027 ceph-next for axwalk mp276727
AMULET OK: passed
Build: http://
Andrew Wilkins (axwalk) wrote:
Ping.
James Page (james-page) wrote:
Pong - LGTM - Merged.
Preview Diff
```diff
=== modified file 'config.yaml'
--- config.yaml	2015-07-10 14:14:18 +0000
+++ config.yaml	2015-11-23 09:31:27 +0000
@@ -40,7 +40,8 @@
       The devices to format and set up as osd volumes.
       .
       These devices are the range of devices that will be checked for and
-      used across all service units.
+      used across all service units, in addition to any volumes attached
+      via the --storage flag during deployment.
       .
       For ceph >= 0.56.6 these can also be directories instead of devices - the
       charm assumes anything not starting with /dev is a directory instead.

=== modified file 'hooks/ceph_hooks.py'
--- hooks/ceph_hooks.py	2015-10-30 02:15:38 +0000
+++ hooks/ceph_hooks.py	2015-11-23 09:31:27 +0000
@@ -29,6 +29,8 @@
     relations_of_type,
     status_set,
     local_unit,
+    storage_get,
+    storage_list
 )
 from charmhelpers.core.host import (
     service_restart,
@@ -145,7 +147,7 @@
     if e_mountpoint and ceph.filesystem_mounted(e_mountpoint):
         umount(e_mountpoint)
 
-    osd_journal = config('osd-journal')
+    osd_journal = get_osd_journal()
     if (osd_journal and not os.path.exists(JOURNAL_ZAPPED) and
             os.path.exists(osd_journal)):
         ceph.zap_disk(osd_journal)
@@ -158,16 +160,36 @@
         ceph.bootstrap_monitor_cluster(config('monitor-secret'))
         ceph.wait_for_bootstrap()
 
-    if ceph.is_bootstrapped():
-        for dev in get_devices():
-            ceph.osdize(dev, config('osd-format'), config('osd-journal'),
-                        reformat_osd(), config('ignore-device-errors'))
-        ceph.start_osds(get_devices())
+    storage_changed()
 
     if relations_of_type('nrpe-external-master'):
         update_nrpe_config()
 
 
+@hooks.hook('osd-devices-storage-attached', 'osd-devices-storage-detaching')
+def storage_changed():
+    if ceph.is_bootstrapped():
+        for dev in get_devices():
+            ceph.osdize(dev, config('osd-format'), get_osd_journal(),
+                        reformat_osd(), config('ignore-device-errors'))
+        ceph.start_osds(get_devices())
+
+
+def get_osd_journal():
+    '''
+    Returns the block device path to use for the OSD journal, if any.
+
+    If there is an osd-journal storage instance attached, it will be
+    used as the journal. Otherwise, the osd-journal configuration will
+    be returned.
+    '''
+    storage_ids = storage_list('osd-journal')
+    if storage_ids:
+        # There can be at most one osd-journal storage instance.
+        return storage_get('location', storage_ids[0])
+    return config('osd-journal')
+
+
 def get_mon_hosts():
     hosts = []
     addr = get_public_addr()
@@ -207,9 +229,15 @@
 
 def get_devices():
     if config('osd-devices'):
-        return config('osd-devices').split(' ')
+        devices = config('osd-devices').split(' ')
     else:
-        return []
+        devices = []
+    # List storage instances for the 'osd-devices'
+    # store declared for this charm too, and add
+    # their block device paths to the list.
+    storage_ids = storage_list('osd-devices')
+    devices.extend((storage_get('location', s) for s in storage_ids))
+    return devices
 
 
 @hooks.hook('mon-relation-joined')
@@ -231,7 +259,7 @@
         ceph.bootstrap_monitor_cluster(config('monitor-secret'))
         ceph.wait_for_bootstrap()
         for dev in get_devices():
-            ceph.osdize(dev, config('osd-format'), config('osd-journal'),
+            ceph.osdize(dev, config('osd-format'), get_osd_journal(),
                         reformat_osd(), config('ignore-device-errors'))
         ceph.start_osds(get_devices())
         notify_osds()

=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py	2015-11-19 17:16:51 +0000
+++ hooks/charmhelpers/core/hookenv.py	2015-11-23 09:31:27 +0000
@@ -637,7 +637,7 @@
 
 
 @cached
-def storage_get(attribute="", storage_id=""):
+def storage_get(attribute=None, storage_id=None):
     """Get storage attributes"""
     _args = ['storage-get', '--format=json']
     if storage_id:
@@ -651,7 +651,7 @@
 
 
 @cached
-def storage_list(storage_name=""):
+def storage_list(storage_name=None):
     """List the storage IDs for the unit"""
     _args = ['storage-list', '--format=json']
     if storage_name:

=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py	2015-01-26 09:46:20 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py	2015-11-23 09:31:27 +0000
@@ -64,11 +64,15 @@
         except Exception as e:
             raise e
 
-    def install(self, source):
+    def install(self, source, dest=None):
         url_parts = self.parse_url(source)
         branch_name = url_parts.path.strip("/").split("/")[-1]
-        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
-                                branch_name)
+        if dest:
+            dest_dir = os.path.join(dest, branch_name)
+        else:
+            dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
+                                    branch_name)
+
         if not os.path.exists(dest_dir):
             mkdir(dest_dir, perms=0o755)
         try:

=== added symlink 'hooks/osd-devices-storage-attached'
=== target is u'ceph_hooks.py'
=== added symlink 'hooks/osd-devices-storage-detaching'
=== target is u'ceph_hooks.py'

=== modified file 'metadata.yaml'
--- metadata.yaml	2015-11-18 10:29:56 +0000
+++ metadata.yaml	2015-11-23 09:31:27 +0000
@@ -25,3 +25,12 @@
   nrpe-external-master:
     interface: nrpe-external-master
     scope: container
+storage:
+  osd-devices:
+    type: block
+    multiple:
+      range: 0-
+  osd-journal:
+    type: block
+    multiple:
+      range: 0-1
```
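The get_devices() change in the diff combines the configured whitelist with the locations of attached Juju storage instances. As a pure sketch of that merge (the names here are illustrative, not the charm's actual helpers, which call storage_list/storage_get):

```python
def combine_devices(osd_devices_config, storage_locations):
    """Devices from the space-separated osd-devices config option,
    plus the block device paths of attached osd-devices storage."""
    devices = osd_devices_config.split(' ') if osd_devices_config else []
    devices.extend(storage_locations)
    return devices
```

An empty config option no longer short-circuits to []; storage-attached devices are still picked up.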
charm_unit_test #12264 ceph-next for axwalk mp276727
UNIT OK: passed
Build: http://10.245.162.77:8080/job/charm_unit_test/12264/