Merge lp:~tribaal/charms/precise/storage/refactor-mount-volume into lp:charms/storage
Status: Work in progress
Proposed branch: lp:~tribaal/charms/precise/storage/refactor-mount-volume
Merge into: lp:charms/storage
Diff against target: 1077 lines (+447/-344), 9 files modified:
- .bzrignore (+1/-0)
- README.md (+47/-0)
- hooks/common_util.py (+20/-56)
- hooks/storage-provider.d/block-storage-broker/block-storage-relation-changed (+42/-12)
- hooks/storage-provider.d/block-storage-broker/test_block_storage_provider.py (+147/-0)
- hooks/storage-provider.d/nfs/data-relation-changed (+0/-6)
- hooks/storage-provider.d/nfs/hooks.py (+50/-0)
- hooks/storage-provider.d/nfs/test_nfs_provider.py (+98/-0)
- hooks/test_common_util.py (+42/-270)
To merge this branch: bzr merge lp:~tribaal/charms/precise/storage/refactor-mount-volume
Related bugs:
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| David Britton (community) | | | Needs Fixing |
| Cory Johns (community) | | | Needs Fixing |

Review via email: mp+232236@code.launchpad.net
Commit message
Description of the change
This branch refactors the switching logic in (what used to be) common_util.py's mount_volume() function.
I found it strange to have to switch on the provider name again in that common code, since the switch was already effectively done by the storage-provider logic.
Instead of creating a new "storage-only" hook, I decided to keep the current setup as is and simply split the mount_volume() function into two separate helpers: prepare_volume_mount() and call_volume_mount().
What used to be the "switched" code is now inline in each hook's body, so we don't need to switch again - it's clear where each piece of code lives and what it does.
Changing this required me to unwire some of the provider-specific tests. I moved them to the provider subfolders instead, since in my opinion it makes no sense to keep them at the common level. Right now they are not wired in; that needs more refactoring first, and the branch is becoming a monster, so I'd like eyes on the code now to at least work the rough edges out and validate the concept.
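A rough sketch of the resulting split (simplified: the real helpers read the mountpoint from the "data" relation via hookenv, whereas here it is passed as a parameter for illustration; the bodies below are illustrative, not the branch's exact code):

```python
import os
import subprocess
import sys


def prepare_volume_mount(mountpoint):
    # Common pre-mount sanity check: until the principal has requested a
    # mountpoint over the "data" relation, there is nothing to do yet.
    if not mountpoint:
        print("No mountpoint defined by relation. Cannot mount volume yet.")
        sys.exit(0)
    return mountpoint


def call_volume_mount(device_path, mountpoint, options="defaults",
                      fstype="ext3"):
    # Actually perform the mount, forwarding to the system's "mount"
    # command with whatever options/fstype the provider hook worked out.
    if not os.path.exists(mountpoint):
        os.makedirs(mountpoint)
    subprocess.check_call(
        ["mount", "-t", fstype, "-o", options, device_path, mountpoint])
```

Each provider hook (nfs, block-storage-broker) then computes its own device_path, options, and fstype inline, and only the shared pre-checks and the final mount call live in common code.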
David Britton (dpb) wrote:
Hi Chris -- This change looks wonderful; could you please fix the merge conflict? Also, can you wire the tests back up before pushing, or do you want to do that in another branch? Thanks again for doing all this, much needed.
Chris Glass (tribaal) wrote:
Merge conflict should be gone. Let's hope someone doesn't commit something on the pre-refactor codebase again :)
David Britton (dpb) wrote:
Hi Chris -- some more feedback:
[1] please link nfs-relation-
[2] You seem to be missing all the glue code in nfs/hooks.py to actually
run hooks, and you are missing an annotation for the hooks-env. Adding
these things, along with the necessary module imports, made a mount
actually work. Cool!
[3] could you pastebin the sample deployer.yaml file, or the juju
command steps you take, so I can make sure I'm testing apples to apples.
As you know, this charm is new, so there could be subtle bugs relating
to postgresql vs whatever you are using. I'd like to make sure I'm
doing a good job testing.
[4] please file a bug to re-wire the tests in the nfs/ dir. I think it
should be as simple as adding another command in the make 'test'
target. I don't see a reason we need to get overly fancy with things at
this point. If you think of a better idea, add it to the bug.
I'm going to mark this needs fixing, and set the MP back to WIP. Please
simply set back to 'needs review' when ready for another round.
Thanks for the change, again it looks like exactly the right direction,
just needs a bit of finishing (which I think you alluded to already, so
this probably isn't a surprise to you).
--
David Britton <email address hidden>
Chris Glass (tribaal) wrote:
Hi David,
Thanks a lot for the feedback - that's exactly what I was after (more eyes on the code after such a refactoring is awesome).
So, [1] and [2]: Duh. Fixed. Guess who forgot to commit changes? This guy.
For [3], I'm using my own in-development charm, and it's not 100% ready for prime time yet, but should work as another, simple storage client. Steps to use it:
juju bootstrap
juju deploy cs:~tribaal/
juju deploy (this branch)
juju set storage "provider=nfs" # Or any other provider, for that matter.
# Point to your NFS storage provider here.
juju add-relation storage ubuntu-mirror
Once the mountpoint is available and mounted, the mirror will create a bunch of folders first, and will start syncing after a while (this takes a *long* time since it will rsync around 2 TB, so I advise you not to wait for it to finish :) )
As for [4]: https:/
Chris Glass (tribaal) wrote:
Switching this to needs review again.
Cory Johns (johnsca) wrote:
These refactors look good and everything seems to work, but when running the moved tests (PYTHONPATH=
I suggest fixing the tests and changing the `make test` target to use the PYTHONPATH modification to run these tests (I'd also put the tests in a top-level unit_tests folder so they're more easily discoverable, but that's up to you, as long as the `make test` target can run them).
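As a sketch of that suggestion (provider_test_command is a made-up helper name, and the paths are taken from this branch's layout; the actual Makefile wiring is up to the author), the `make test` target would effectively need to run something like:

```python
import os
import sys


def provider_test_command(provider_dir, hooks_dir="hooks"):
    # The moved tests import common_util, which lives in hooks/, so
    # PYTHONPATH must include that directory when unittest discovery
    # runs against a provider subfolder.
    env = dict(os.environ, PYTHONPATH=hooks_dir)
    cmd = [sys.executable, "-m", "unittest", "discover",
           "-s", provider_dir, "-p", "test_*.py"]
    return cmd, env


cmd, env = provider_test_command("hooks/storage-provider.d/nfs")
```

Running the resulting command (e.g. via subprocess.check_call(cmd, env=env), or the equivalent shell line in the Makefile) would pick up the moved provider tests.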
Chris Glass (tribaal) wrote:
Thanks for the review, and indeed, I think tweaking PYTHONPATH is exactly one of the ways to go (although it's not really nice to have to tweak things this way - it's quite un-pythonic).
However, I would still like to address this in a later branch, since the diff is already quite large. The tests need rewriting, and I think it's better to submit those changes in a separate branch so they don't get drowned in the refactoring changes.
If you both insist, I can rewire things in this branch, but I suspect the diff will grow quite drastically.
David Britton (dpb) wrote:
Hi Chris -- I think the unit tests are fine as is; thanks for the explanation here and on IRC. We intend to fix those in a follow-on branch. I just ran some tests again, and it's failing on postgresql:
unit-postgresql-0: 2014-09-26 14:30:10 INFO juju-log data:1: Making dir /srv/data/
unit-postgresql-0: 2014-09-26 14:30:10 INFO data-relation-
unit-postgresql-0: 2014-09-26 14:30:10 ERROR juju.worker.uniter uniter.go:486 hook failed: exit status 1
I thought we had added creation of the requested mountpoint - did that get lost in the refactor? For reference, here is the bundle I'm using:
http://
I'm going to mark this needs fixing; I'd like to make sure at least our base tests pass with NFS before we put this in.
David Britton (dpb) wrote:
Pastebin for that error message, sorry about the formatting in here:
Cory Johns (johnsca) wrote:
David,
I'm not sure if that's an error with the charm code, which does ensure that the mount point exists (the line that's failing is inside charmhelpers.
It appears that error is an NFS permissions issue. This blog post http://
Cory Johns (johnsca) wrote:
Chris,
I understand wanting to keep the review to a manageable size, but I'm personally leery of giving my +1 review when you're removing coverage of previously tested code. The tests are there to give confidence that the code is continuing to work as expected after exactly this sort of refactor. So, to me, even if the tests aren't added back to the Makefile in this branch, they should at least be fixed so that they're passing if run manually. Then, the later branch can deal with how best to wire them back in so that they're picked up by `make test`.
But it does technically pass on `make test`, and David is a ~charmer, so if he's ok with delaying fixing the tests (once the postgres issue is resolved), I certainly won't fight about it. :)
David Britton (dpb) wrote:
On Fri, Sep 26, 2014 at 03:30:11PM -0000, Cory Johns wrote:
> Chris,
>
> I understand wanting to keep the review to a manageable size,
Cory, after speaking with Chris, I think he has an idea about how to
better wire in the tests, so I'm going to set this back to WIP for now,
just so it's clear.
Thanks for the review, and I'll look at the NFS issue more closely - it
could indeed just be permissions.
--
David Britton <email address hidden>
Chris Glass (tribaal) wrote:
Yes indeed, I think it can work better to change how the charm is structured instead - falling straight into the refactoring category :)
So to summarize the IRC discussion (for future reference):
I think I'll rename the storage-providers.d directory to just "storage-providers" or "providers" to make it a proper python module, so everything is one nice tidy tree and we don't need to muck around with PYTHONPATH.
I'll rewire as many of the tests as possible in this branch using this strategy - but the bulk of them will have to be rewritten. I'll do so on a "best effort" basis in this branch, and will put up another branch for review doing most of the test rewriting - nobody wants to decrease coverage, as others have rightfully pointed out.
Thanks a lot for the feedback - it's hard to get these big refactorings right without other eyes on the code. Thanks for your time and pointers, David and Cory :) I'll keep hammering at it some more :)
Unmerged revisions
- 44. By Chris Glass: Addressed first review comments.
- 43. By Chris Glass: Merging trunk. Rewrote the submitted fstab fixing code since the code changed a lot (https://code.launchpad.net/~moon127/charms/precise/storage/fstab-fix/+merge/232382).
- 42. By Chris Glass: Rewrote the equivalent of https://code.launchpad.net/~moon127/charms/precise/storage/fstab-fix/+merge/232382 since most of that code moved around.
- 41. By Chris Glass: Fixed inline comments - stray changes and spelling mistakes.
- 40. By Chris Glass: Removed the now useless mount_volume() function.
- 39. By Chris Glass: This refactors the provider-specific tests into the relevant provider subfolder so that the logic testing common_utils can focus on just the common parts. Right now the provider-specific tests are not rewired, since this will need a little more work and this branch is becoming quite big.
- 38. By Chris Glass: Added .coverage to .bzrignore.
- 37. By Chris Glass: Most of the refactoring is working - need to port and write tests.
Preview Diff
1 | === modified file '.bzrignore' |
2 | --- .bzrignore 2014-01-31 23:49:37 +0000 |
3 | +++ .bzrignore 2014-09-11 05:39:25 +0000 |
4 | @@ -1,2 +1,3 @@ |
5 | _trial_temp |
6 | charm-helpers |
7 | +.coverage |
8 | |
9 | === modified file 'README.md' |
10 | --- README.md 2014-05-16 14:40:47 +0000 |
11 | +++ README.md 2014-09-11 05:39:25 +0000 |
12 | @@ -6,6 +6,53 @@ |
13 | interface, and the charmed application will start writing data into the provided |
14 | storage. |
15 | |
16 | +General idea |
17 | +------------- |
18 | + |
19 | +The storage charm is a subordinate charm that you attach to a main charm in need |
20 | +of block storage space on one side (using a scope:container relation), and to |
21 | +a storage provider on the other side (using a normal relation). |
22 | + |
23 | +You need to explicitly configure what provider to use in the subordinate |
24 | +configuration, using the aptly named "provider" config variable. |
25 | + |
26 | +Example usage |
27 | +-------------- |
28 | + |
29 | +The following will try to give you an idea of typical usage, using NFS as an |
30 | +example: |
31 | + |
32 | +juju bootstrap |
33 | +juju deploy my-main-service |
34 | +juju deploy storage # this charm |
35 | +juju deploy some-charm-exposing-nfs |
36 | +juju set storage "provider=nfs" |
37 | +juju add-relation nfs storage |
38 | +juju add-relation storage my-main-service |
39 | + |
40 | +The charm in need of storage should send a "mountpoint=/where/I/need/space" |
41 | +message to the storage subordinate using a "set-relation" call: |
42 | + |
43 | +// data-relation-changed on my-main-service |
44 | +set-relation "mountpoint=/please/mount/here" |
45 | + |
46 | +Once the storage is ready, the subordinate will also set-relation, so you can |
47 | +test whether the mountpoint is ready by checking "get-relation" on the same hook |
48 | +Since the subordinate will set-relation, the "relation-changed" hook will fire |
49 | +again: |
50 | + |
51 | +// data-relation-changed on my-main-service |
52 | +ready = $(get-relation "mountpoint") |
53 | +if $ready == "/please/mount/here": |
54 | + do something with the mount point |
55 | +else: |
56 | + exit(0) # we'll get called later, when it's ready. |
57 | + |
58 | +On the storage-provider side of the relation, the relevant "relation changed" |
59 | +hook will be called once the mountpoint has been requested, and, once it is |
60 | +available, the provider will send the |
61 | +relation-set "mountpoint=/please/mount/here" message back to the main charm. |
62 | + |
63 | |
64 | Provider-Specific Features |
65 | -------------------------- |
66 | |
67 | === modified file 'hooks/common_util.py' |
68 | --- hooks/common_util.py 2014-08-27 11:01:14 +0000 |
69 | +++ hooks/common_util.py 2014-09-11 05:39:25 +0000 |
70 | @@ -123,7 +123,7 @@ |
71 | sys.exit(1) |
72 | |
73 | |
74 | -def _assert_block_device(device_path): |
75 | +def assert_block_device(device_path): |
76 | """C{device_path} is a valid block device or exit in error""" |
77 | isBlockDevice = False |
78 | for x in range(0, 10): |
79 | @@ -146,7 +146,7 @@ |
80 | sys.exit(1) |
81 | |
82 | |
83 | -def _set_label(device_path, label): |
84 | +def set_label(device_path, label): |
85 | """Set label if necessary on C{device_path}""" |
86 | if not os.path.exists(device_path): |
87 | log("ERROR: Cannot set label of %s. Path does not exist" % device_path) |
88 | @@ -177,7 +177,7 @@ |
89 | return subprocess.call(command, shell=True) == 0 |
90 | |
91 | |
92 | -def _format_device(device_path): |
93 | +def format_device(device_path): |
94 | """ |
95 | Create an ext4 partition, if needed, for C{device_path} and fsck that |
96 | partition. |
97 | @@ -211,21 +211,18 @@ |
98 | return None |
99 | |
100 | |
101 | -def mount_volume(device_path=None): |
102 | +def prepare_volume_mount(device_path=None): |
103 | """ |
104 | - Mount the attached volume, nfs or block-storage-broker type, to the |
105 | - principal requested C{mountpoint}. |
106 | - |
107 | - If the C{mountpoint} required relation data does not exist, log the |
108 | - issue and exit. Otherwise attempt to initialize and mount the blockstorage |
109 | - or nfs device. |
110 | + Code that is common to all storage providers and does basic sanity check |
111 | + about the mountpoint, for example, noop if the "mountpoint" was never |
112 | + requested from the "data" relation. |
113 | """ |
114 | # mountpoint defined by our related principal in the "data" relation |
115 | mountpoint = _get_from_relation("data", "mountpoint") |
116 | |
117 | if not mountpoint: |
118 | log("No mountpoint defined by relation. Cannot mount volume yet.") |
119 | - return |
120 | + sys.exit(0) |
121 | |
122 | _persist_data("mountpoint", mountpoint) # Saved for our departed hook |
123 | if is_mounted(mountpoint): |
124 | @@ -234,51 +231,18 @@ |
125 | for relid in hookenv.relation_ids("data"): |
126 | hookenv.relation_set( |
127 | relid, relation_settings={"mountpoint": mountpoint}) |
128 | - return |
129 | - |
130 | - options = "defaults" |
131 | - provider = get_provider() |
132 | - if provider == "nfs": |
133 | - relids = hookenv.relation_ids("nfs") |
134 | - for relid in relids: |
135 | - for unit in hookenv.related_units(relid): |
136 | - relation = hookenv.relation_get(unit=unit, rid=relid) |
137 | - # XXX Only handle 1 related nfs unit. No nfs cluster support |
138 | - nfs_server = relation.get("private-address", None) |
139 | - nfs_path = relation.get("mountpoint", None) |
140 | - fstype = relation.get("fstype", None) |
141 | - options = relation.get("options", "defaults") |
142 | - break |
143 | - |
144 | - if len(relids) == 0 or None in [nfs_server, nfs_path, fstype]: |
145 | - log("Missing required relation values. " |
146 | - "nfs-relation-changed hook needs to run first.") |
147 | - return |
148 | - |
149 | - device_path = "%s:%s" % (nfs_server, nfs_path) |
150 | - elif provider == "block-storage-broker": |
151 | - if device_path is None: |
152 | - log("Waiting for persistent nova device provided by " |
153 | - "block-storage-broker.") |
154 | - sys.exit(0) |
155 | - |
156 | - _assert_block_device(device_path) |
157 | - if os.path.exists("%s1" % device_path): |
158 | - # Then we are supporting upgrade-charm path for deprecated |
159 | - # postgresql volume-map config parameter. Postgresql used to |
160 | - # partition the volume into a single partition at 100% of the |
161 | - # volume size, but the storage charm doesn't do that partitioning |
162 | - device_path = "%s1" % device_path |
163 | - log("Mounting %s as the device path as the root device is already " |
164 | - "partitioned." % device_path) |
165 | - _format_device(device_path) |
166 | - _set_label(device_path, mountpoint) |
167 | - fstype = subprocess.check_output( |
168 | - "blkid -o value -s TYPE %s" % device_path, shell=True).strip() |
169 | - else: |
170 | - log("ERROR: Unknown provider %s. Cannot mount volume." % hookenv.ERROR) |
171 | - sys.exit(1) |
172 | - |
173 | + |
174 | + return mountpoint |
175 | + |
176 | + |
177 | +def call_volume_mount(device_path, options="defaults", fstype="ext3"): |
178 | + """ |
179 | + Actually perform the mounting of the requested volume, forwarding the call |
180 | + to the system's "mount" command (passing along the options and filesystem |
181 | + type specified). |
182 | + """ |
183 | + # mountpoint defined by our related principal in the "data" relation |
184 | + mountpoint = _get_from_relation("data", "mountpoint") |
185 | if not os.path.exists(mountpoint): |
186 | os.makedirs(mountpoint) |
187 | |
188 | |
189 | === modified file 'hooks/storage-provider.d/block-storage-broker/block-storage-relation-changed' |
190 | --- hooks/storage-provider.d/block-storage-broker/block-storage-relation-changed 2014-03-07 08:01:15 +0000 |
191 | +++ hooks/storage-provider.d/block-storage-broker/block-storage-relation-changed 2014-09-11 05:39:25 +0000 |
192 | @@ -1,22 +1,32 @@ |
193 | #!/usr/bin/python |
194 | |
195 | +import subprocess |
196 | import common_util |
197 | import sys |
198 | +import os |
199 | import yaml |
200 | from yaml.constructor import ConstructorError |
201 | -from charmhelpers.core import hookenv |
202 | +from charmhelpers.core.hookenv import ( |
203 | + log, |
204 | + config, |
205 | + relation_ids, |
206 | + relation_type, |
207 | + relation_set, |
208 | + related_units, |
209 | + WARNING, |
210 | +) |
211 | |
212 | |
213 | instance_id = common_util.get_instance_id() |
214 | availability_zone = common_util.get_availability_zone() |
215 | -config_data = hookenv.config() |
216 | +config_data = config() |
217 | volume_size = config_data.get("volume_size", None) |
218 | volume_map = config_data.get("volume_map", None) |
219 | volume_id = None |
220 | label = None |
221 | -relids = hookenv.relation_ids("data") |
222 | +relids = relation_ids("data") |
223 | for relid in relids: |
224 | - for unit in hookenv.related_units(relid): |
225 | + for unit in related_units(relid): |
226 | label = "%s %s volume" % (unit, availability_zone) |
227 | if volume_map is not None: |
228 | # Grab the volume_label for our instance if it exists so we can |
229 | @@ -26,26 +36,46 @@ |
230 | if volume_map: |
231 | volume_id = volume_map.get(unit, None) |
232 | except ConstructorError as e: |
233 | - hookenv.log( |
234 | - "invalid YAML in 'volume-map': {}".format(e), |
235 | - hookenv.WARNING) |
236 | + log("invalid YAML in 'volume-map': {}".format(e), WARNING) |
237 | |
238 | if label is None: |
239 | - hookenv.log( |
240 | - "No data-relation-changed hook fired. Awaiting data-relation before " |
241 | + log("No data-relation-changed hook fired. Awaiting data-relation before " |
242 | "updating block-storage-broker relation") |
243 | sys.exit(0) |
244 | |
245 | -relation_type = hookenv.relation_type() |
246 | +relation_type = relation_type() |
247 | if relation_type == "block-storage": |
248 | relation_settings = {"instance-id": instance_id, "size": volume_size, |
249 | "volume-label": label} |
250 | if volume_id is not None: |
251 | relation_settings["volume-id"] = volume_id |
252 | - hookenv.relation_set(relation_settings=relation_settings) |
253 | + relation_set(relation_settings=relation_settings) |
254 | |
255 | # Grab info from the block-storage relation regardless of whether we are in the |
256 | # data relation |
257 | device_path = common_util._get_from_relation( |
258 | "block-storage", "block-device-path") |
259 | -common_util.mount_volume(device_path) # Will wait for mountpoint |
260 | + |
261 | +# Perform common pre-mount tasks and checks. |
262 | +mountpoint = common_util.prepare_volume_mount(device_path) |
263 | + |
264 | +# Prepare mount options |
265 | +if device_path is None: |
266 | + log("Waiting for persistent nova device provided by block-storage-broker.") |
267 | + sys.exit(0) |
268 | + |
269 | +common_util.assert_block_device(device_path) |
270 | +if os.path.exists("%s1" % device_path): |
271 | + # Then we are supporting upgrade-charm path for deprecated |
272 | + # postgresql volume-map config parameter. Postgresql used to |
273 | + # partition the volume into a single partition at 100% of the |
274 | + # volume size, but the storage charm doesn't do that partitioning |
275 | + device_path = "%s1" % device_path |
276 | + log("Mounting %s as the device path as the root device is already " |
277 | + "partitioned." % device_path) |
278 | +common_util.format_device(device_path) |
279 | +common_util.set_label(device_path, mountpoint) |
280 | +fstype = subprocess.check_output( |
281 | + "blkid -o value -s TYPE %s" % device_path, shell=True).strip() |
282 | + |
283 | +common_util.call_volume_mount(device_path, fstype=fstype) |
284 | |
285 | === added file 'hooks/storage-provider.d/block-storage-broker/test_block_storage_provider.py' |
286 | --- hooks/storage-provider.d/block-storage-broker/test_block_storage_provider.py 1970-01-01 00:00:00 +0000 |
287 | +++ hooks/storage-provider.d/block-storage-broker/test_block_storage_provider.py 2014-09-11 05:39:25 +0000 |
288 | @@ -0,0 +1,147 @@ |
289 | +import mocker |
290 | +import os |
291 | +import subprocess |
292 | + |
293 | +import common_util as util |
294 | + |
295 | +class TestBlockStorageBroker(mocker.MockerTestCase): |
296 | + |
297 | + def test_mount_volume_bsb_provider_no_device_path_provided(self): |
298 | + """ |
299 | + L{mount_volume} will exit early if called before C{device_path} |
300 | + is specified by the block-storage-broker charm. |
301 | + """ |
302 | + util.hookenv._config = (("provider", "block-storage-broker"),) |
303 | + util.hookenv._incoming_relation_data = ( |
304 | + ("data:1", "mountpoint", "/mnt/this"),) |
305 | + |
306 | + self.assertEqual(util.get_provider(), "block-storage-broker") |
307 | + result = self.assertRaises(SystemExit, util.mount_volume) |
308 | + self.assertEqual(result.code, 0) |
309 | + message = ( |
310 | + "Waiting for persistent nova device provided by " |
311 | + "block-storage-broker.") |
312 | + self.assertTrue( |
313 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
314 | + |
315 | + def test_mount_volume_bsb_provider_success(self): |
316 | + """ |
317 | + L{mount_volume} reads principal relation data for the C{mountpoint}. |
318 | + When storage config is set to the C{nova} provider, L{mount_volume} |
319 | + will read the optional C{device_path} parameter provided by the nova |
320 | + provider's C{data-relation-changed} hook. |
321 | + The success of the mount will log a message and the storage charm will |
322 | + call relation-set C{mountpoint} to report the successful mount to the |
323 | + principal. |
324 | + """ |
325 | + util.hookenv._incoming_relation_data = ( |
326 | + ("data:1", "mountpoint", "/mnt/this"),) |
327 | + |
328 | + util.hookenv._config = (("provider", "block-storage-broker"),) |
329 | + |
330 | + ismount = self.mocker.replace(os.path.ismount) |
331 | + ismount("/mnt/this") |
332 | + self.mocker.result(False) # Mock not mounted |
333 | + |
334 | + assert_device = self.mocker.replace(util.assert_block_device) |
335 | + assert_device("/dev/vdx") |
336 | + exists = self.mocker.replace(os.path.exists) |
337 | + exists("/dev/vdx1") |
338 | + self.mocker.result(False) |
339 | + create_partition = self.mocker.replace( |
340 | + util.format_device) |
341 | + create_partition("/dev/vdx") |
342 | + |
343 | + set_label = self.mocker.replace(util.set_label) |
344 | + set_label("/dev/vdx", "/mnt/this") |
345 | + |
346 | + # Ensure blkid is called to discover fstype of nova device |
347 | + get_fstype = self.mocker.replace(subprocess.check_output) |
348 | + get_fstype("blkid -o value -s TYPE /dev/vdx", shell=True) |
349 | + self.mocker.result("ext4\n") |
350 | + |
351 | + exists("/mnt/this") |
352 | + self.mocker.result(True) |
353 | + |
354 | + mount = self.mocker.replace(subprocess.check_call) |
355 | + mount("mount -t ext4 /dev/vdx /mnt/this", shell=True) |
356 | + self.mocker.replay() |
357 | + |
358 | + self.assertEqual(util.get_provider(), "block-storage-broker") |
359 | + util.mount_volume("/dev/vdx") |
360 | + message = "Mount (/mnt/this) successful" |
361 | + self.assertTrue( |
362 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
363 | + |
364 | + # Wrote proper fstab mount info to mount on reboot |
365 | + fstab_content = "/dev/vdx /mnt/this ext4 default 0 0" |
366 | + with open(util.FSTAB_FILE) as input_file: |
367 | + self.assertEqual(input_file.read(), fstab_content) |
368 | + |
369 | + # Reported success to principal |
370 | + self.assertEqual( |
371 | + util.hookenv._outgoing_relation_data, |
372 | + (("mountpoint", "/mnt/this"),)) |
373 | + |
374 | + def test_mount_volume_bsb_provider_success_postgresql_charm_upgrade(self): |
375 | + """ |
376 | + Old versions of the postgresql charm would have created a single |
377 | + partition with max size from an external attached volume if the user |
378 | + had used postgresql charm's deprecated volume-map parameter. When the |
379 | + storage charm sees a single partition on the attached device, it will |
380 | + attempt to mount that partition instead of the root device. |
381 | + The success of the mount will log a message and the storage charm will |
382 | + call relation-set C{mountpoint} to report the successful mount to the |
383 | + principal. |
384 | + """ |
385 | + util.hookenv._incoming_relation_data = ( |
386 | + ("data:1", "mountpoint", "/mnt/this"),) |
387 | + |
388 | + util.hookenv._config = (("provider", "block-storage-broker"),) |
389 | + |
390 | + ismount = self.mocker.replace(os.path.ismount) |
391 | + ismount("/mnt/this") |
392 | + self.mocker.result(False) # Mock not mounted |
393 | + |
394 | + assert_device = self.mocker.replace(util.assert_block_device) |
395 | + assert_device("/dev/vdx") |
396 | + exists = self.mocker.replace(os.path.exists) |
397 | + exists("/dev/vdx1") |
398 | + self.mocker.result(True) |
399 | + create_partition = self.mocker.replace( |
400 | + util.format_device) |
401 | + create_partition("/dev/vdx1") |
402 | + |
403 | + set_label = self.mocker.replace(util.set_label) |
404 | + set_label("/dev/vdx1", "/mnt/this") |
405 | + |
406 | + # Ensure blkid is called to discover fstype of nova device |
407 | + get_fstype = self.mocker.replace(subprocess.check_output) |
408 | + get_fstype("blkid -o value -s TYPE /dev/vdx1", shell=True) |
409 | + self.mocker.result("ext4\n") |
410 | + |
411 | + exists("/mnt/this") |
412 | + self.mocker.result(True) |
413 | + |
414 | + mount = self.mocker.replace(subprocess.check_call) |
415 | + mount("mount -t ext4 /dev/vdx1 /mnt/this", shell=True) |
416 | + self.mocker.replay() |
417 | + |
418 | + self.assertEqual(util.get_provider(), "block-storage-broker") |
419 | + util.mount_volume("/dev/vdx") |
420 | + messages = ["Mount (/mnt/this) successful", |
421 | + "Mounting /dev/vdx1 as the device path as the root device " |
422 | + "is already partitioned."] |
423 | + for message in messages: |
424 | + self.assertTrue( |
425 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
426 | + |
427 | + # Wrote proper fstab mount info to mount on reboot |
428 | + fstab_content = "/dev/vdx1 /mnt/this ext4 default 0 0" |
429 | + with open(util.FSTAB_FILE) as input_file: |
430 | + self.assertEqual(input_file.read(), fstab_content) |
431 | + |
432 | + # Reported success to principal |
433 | + self.assertEqual( |
434 | + util.hookenv._outgoing_relation_data, |
435 | + (("mountpoint", "/mnt/this"),)) |
436 | |
437 | === modified file 'hooks/storage-provider.d/nfs/data-relation-changed' |
438 | --- hooks/storage-provider.d/nfs/data-relation-changed 2014-03-06 16:02:32 +0000 |
439 | +++ hooks/storage-provider.d/nfs/data-relation-changed 1970-01-01 00:00:00 +0000 |
440 | @@ -1,6 +0,0 @@ |
441 | -#!/usr/bin/python |
442 | - |
443 | -import common_util |
444 | - |
445 | -# Will NOOP until mountpoint is set by principal and relation setup with nfs |
446 | -common_util.mount_volume() |
447 | |
448 | === target is u'hooks.py' |
449 | === added file 'hooks/storage-provider.d/nfs/hooks.py' |
450 | --- hooks/storage-provider.d/nfs/hooks.py 1970-01-01 00:00:00 +0000 |
451 | +++ hooks/storage-provider.d/nfs/hooks.py 2014-09-11 05:39:25 +0000 |
452 | @@ -0,0 +1,50 @@ |
453 | +#!/usr/bin/python |
454 | +import sys |
455 | + |
456 | +from charmhelpers.core.hookenv import ( |
457 | + log, |
458 | + relation_ids, |
459 | + related_units, |
460 | + relation_get, |
461 | + Hooks, |
462 | + UnregisteredHookError, |
463 | +) |
464 | + |
465 | +import common_util |
466 | + |
467 | +hooks = Hooks() |
468 | + |
469 | + |
470 | +@hooks.hook("data-relation-changed") |
471 | +def data_relation_changed(): |
472 | + # Run the common pre-mount check and code. |
473 | + common_util.prepare_volume_mount() |
474 | + |
475 | + relids = relation_ids("nfs") |
476 | + for relid in relids: |
477 | + for unit in related_units(relid): |
478 | + relation = relation_get(unit=unit, rid=relid) |
479 | + |
480 | + # XXX Only handle 1 related nfs unit. No nfs cluster support (yet?) |
481 | + nfs_server = relation.get("private-address", None) |
482 | + nfs_path = relation.get("mountpoint", None) |
483 | + fstype = relation.get("fstype", None) |
484 | + options = relation.get("options", "defaults") |
485 | + break |
486 | + |
487 | + if len(relids) == 0 or None in [nfs_server, nfs_path, fstype]: |
488 | + log("Missing required relation values. " |
489 | + "nfs-relation-changed hook needs to run first.") |
490 | + return |
491 | + |
492 | + device_path = "%s:%s" % (nfs_server, nfs_path) |
493 | + |
494 | + # Actually perform the volume mount, calling the "mount" system command. |
495 | + common_util.call_volume_mount(device_path, options=options, fstype=fstype) |
496 | + |
497 | + |
498 | +if __name__ == '__main__': |
499 | + try: |
500 | + hooks.execute(sys.argv) |
501 | + except UnregisteredHookError as e: |
502 | + log('Unknown hook {} - skipping.'.format(e)) |
503 | |
504 | === modified symlink 'hooks/storage-provider.d/nfs/nfs-relation-changed' |
505 | === target changed u'data-relation-changed' => u'hooks.py' |
506 | === added file 'hooks/storage-provider.d/nfs/test_nfs_provider.py' |
507 | --- hooks/storage-provider.d/nfs/test_nfs_provider.py 1970-01-01 00:00:00 +0000 |
508 | +++ hooks/storage-provider.d/nfs/test_nfs_provider.py 2014-09-11 05:39:25 +0000 |
509 | @@ -0,0 +1,98 @@ |
510 | +import mocker |
511 | +import common_util as util |
512 | +import os |
513 | +import subprocess |
514 | +import json |
515 | + |
516 | + |
517 | +class TestNFSProvider(mocker.MockerTestCase): |
518 | + # TODO: This class is not yet run by "make test". Make it run! |
519 | + |
520 | + def test_mount_volume_nfs_provider_no_nfs_hook_relation_data(self): |
521 | + """ |
522 | +        L{mount_volume} will exit in error when configured with the C{nfs} |
523 | +        provider and the nfs relation data does not contain any of the |
524 | +        required values: C{private-address}, C{mountpoint} or C{fstype}. |
526 | + """ |
527 | + data = ( |
528 | + ("data:1", "mountpoint", "/mnt/this"), |
529 | + ("nfs:1", "private-address", "me.com"), |
530 | + ("nfs:1", "mountpoint", "/nfs/server/path"), |
531 | + ("nfs:1", "fstype", "nfs"),) |
532 | + util.hookenv._incoming_relation_data = data |
533 | + |
534 | + # Setup persisted data missing values from nfs-relation-changed |
535 | + for skip in range(1, 3): |
536 | + partial_data = () |
537 | + for index, item in enumerate(data): |
538 | + if index != skip: |
539 | + partial_data = partial_data + (item,) |
540 | + util.hookenv._incoming_relation_data = partial_data |
541 | + self.assertEqual( |
542 | + util.hookenv.relation_get("mountpoint", rid="data:1"), |
543 | + "/mnt/this") |
544 | + util.mount_volume() |
545 | + message = ( |
546 | + "Missing required relation values. " |
547 | + "nfs-relation-changed hook needs to run first.") |
548 | + self.assertTrue( |
549 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
550 | + |
551 | + def test_mount_volume_nfs_provider_success(self): |
552 | + """ |
553 | + L{mount_volume} will read incoming principal relation data for the |
554 | +        C{mountpoint}. When the storage config is set to the C{nfs} |
555 | +        provider, L{mount_volume} runs as a result of the |
556 | +        C{nfs-relation-changed} hook. The required nfs relation data values |
557 | +        are C{private-address}, C{mountpoint}, C{fstype} and C{options}. A |
558 | +        successful mount will log a message and the storage charm will call |
559 | +        juju relation-set C{mountpoint} to report success to the principal. |
560 | + """ |
561 | + util.hookenv._incoming_relation_data = ( |
562 | + ("data:1", "mountpoint", "/mnt/this"), |
563 | + ("nfs:1", "private-address", "me.com"), |
564 | + ("nfs:1", "options", "someopts"), |
565 | + ("nfs:1", "mountpoint", "/nfs/server/path"), |
566 | + ("nfs:1", "fstype", "nfs")) |
567 | + |
568 | + # Mock not mounted |
569 | + ismount = self.mocker.replace(os.path.ismount) |
570 | + ismount("/mnt/this") |
571 | + self.mocker.result(False) |
572 | + |
573 | + # mount_volume calls makedirs when path !exists |
574 | + exists = self.mocker.replace(os.path.exists) |
575 | + exists("/mnt/this") |
576 | + self.mocker.result(False) |
577 | + makedirs = self.mocker.replace(os.makedirs) |
578 | + makedirs("/mnt/this") |
579 | + |
580 | + # Called proper mount command based on nfs data |
581 | + command = "mount -t nfs -o someopts me.com:/nfs/server/path /mnt/this" |
582 | + mount_cmd = self.mocker.replace(subprocess.check_call) |
583 | + mount_cmd(command, shell=True) |
584 | + self.mocker.replay() |
585 | + |
586 | + self.assertEqual(util.get_provider(), "nfs") |
587 | + util.mount_volume() |
588 | + message = "Mount (/mnt/this) successful" |
589 | + self.assertTrue( |
590 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
591 | + |
592 | + # Wrote proper fstab mount info to mount on reboot |
593 | + fstab_content = "me.com:/nfs/server/path /mnt/this nfs someopts 0 0" |
594 | + with open(util.FSTAB_FILE) as input_file: |
595 | + self.assertEqual(input_file.read(), fstab_content) |
596 | + |
597 | + # Reported success to principal |
598 | + self.assertEqual( |
599 | + util.hookenv._outgoing_relation_data, |
600 | + (("mountpoint", "/mnt/this"),)) |
601 | +        # Preserved mountpoint for future departed hook runs |
602 | + persist_path = "%s/%s" % (self.charm_dir, util.PERSIST_DATA_FILE) |
603 | + with open(persist_path) as infile: |
604 | + data = json.load(infile) |
605 | + self.assertEqual(data["mountpoint"], "/mnt/this") |
606 | + |
607 | + |
608 | |
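The command and fstab strings asserted in the success test above follow a fixed shape. These hypothetical helpers (not part of the charm) reproduce that shape and can serve as a quick sanity check of the expected values:

```python
def nfs_mount_command(server, path, mountpoint,
                      options="defaults", fstype="nfs"):
    # Shape of the mount command the test expects check_call to receive.
    return "mount -t %s -o %s %s:%s %s" % (
        fstype, options, server, path, mountpoint)


def nfs_fstab_entry(server, path, mountpoint,
                    options="defaults", fstype="nfs"):
    # fstab fields: device, mountpoint, fstype, options, dump, pass.
    return "%s:%s %s %s %s 0 0" % (
        server, path, mountpoint, fstype, options)


print(nfs_mount_command("me.com", "/nfs/server/path", "/mnt/this",
                        options="someopts"))
# mount -t nfs -o someopts me.com:/nfs/server/path /mnt/this
```

Both strings match the values asserted in `test_mount_volume_nfs_provider_success`: the mount command passed to `subprocess.check_call` and the single fstab line written for remounting on reboot.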
609 | === modified file 'hooks/test_common_util.py' |
610 | --- hooks/test_common_util.py 2014-03-21 22:53:22 +0000 |
611 | +++ hooks/test_common_util.py 2014-09-11 05:39:25 +0000 |
612 | @@ -390,9 +390,9 @@ |
613 | self.assertTrue( |
614 | util.hookenv.is_logged(message), "Not logged- %s" % message) |
615 | |
616 | - def test_wb_assert_block_device_failure_path_does_not_exist(self): |
617 | + def test_assert_block_device_failure_path_does_not_exist(self): |
618 | """ |
619 | - L{_assert_block_device} will exit in error if the C{device_path} |
620 | + L{assert_block_device} will exit in error if the C{device_path} |
621 | provided does not exist. |
622 | """ |
623 | exists = self.mocker.replace(os.path.exists) |
624 | @@ -404,16 +404,16 @@ |
625 | self.mocker.count(10) |
626 | self.mocker.replay() |
627 | |
628 | - self.assertRaises(SystemExit, util._assert_block_device, "/dev/vdx") |
629 | + self.assertRaises(SystemExit, util.assert_block_device, "/dev/vdx") |
630 | messages = ["WARNING: /dev/vdx does not exist", |
631 | "ERROR: /dev/vdx is not a block device."] |
632 | for message in messages: |
633 | self.assertTrue( |
634 | util.hookenv.is_logged(message), "Not logged- %s" % message) |
635 | |
636 | - def test_wb_assert_block_device_failure_not_block_device(self): |
637 | + def test_assert_block_device_failure_not_block_device(self): |
638 | """ |
639 | - L{_assert_block_device} will try 10 times between sleeps to validate |
640 | + L{assert_block_device} will try 10 times between sleeps to validate |
641 | that C{device_path} is a valid block device before logging an error |
642 | and exiting. |
643 | """ |
644 | @@ -430,30 +430,30 @@ |
645 | self.mocker.count(10) |
646 | self.mocker.replay() |
647 | |
648 | - self.assertRaises(SystemExit, util._assert_block_device, "/dev/vdx") |
649 | + self.assertRaises(SystemExit, util.assert_block_device, "/dev/vdx") |
650 | message = "ERROR: /dev/vdx is not a block device." |
651 | self.assertTrue( |
652 | util.hookenv.is_logged(message), "Not logged- %s" % message) |
653 | |
654 | - def test_wb_set_label_exits_when_path_does_not_exist(self): |
655 | + def test_set_label_exits_when_path_does_not_exist(self): |
656 | """ |
657 | - L{_set_label} will log and exit when C{device_path} does not exist |
658 | + L{set_label} will log and exit when C{device_path} does not exist |
659 | """ |
660 | exists = self.mocker.replace(os.path.exists) |
661 | exists("/dev/not-there") |
662 | self.mocker.result(False) |
663 | self.mocker.replay() |
664 | self.assertRaises( |
665 | - SystemExit, util._set_label, "/dev/not-there", "/mnt/this") |
666 | + SystemExit, util.set_label, "/dev/not-there", "/mnt/this") |
667 | message = ( |
668 | "ERROR: Cannot set label of /dev/not-there. Path does not exist") |
669 | self.assertTrue( |
670 | util.hookenv.is_logged(message), "Not logged- %s" % message) |
671 | |
672 | - def test_wb_set_label_does_nothing_if_label_already_set_correctly(self): |
673 | + def test_set_label_does_nothing_if_label_already_set_correctly(self): |
674 | """ |
675 | - L{_set_label} will use e2label command to check if the label is |
676 | - set. If current label is equal to requested label. L{_set_label} will |
677 | + L{set_label} will use e2label command to check if the label is |
678 | + set. If current label is equal to requested label. L{set_label} will |
679 | do nothing. |
680 | """ |
681 | exists = self.mocker.replace(os.path.exists) |
682 | @@ -463,12 +463,12 @@ |
683 | e2label_check("e2label /dev/vdz", shell=True) |
684 | self.mocker.result("/mnt/this\n") |
685 | self.mocker.replay() |
686 | - util._set_label("/dev/vdz", "/mnt/this") |
687 | + util.set_label("/dev/vdz", "/mnt/this") |
688 | |
689 | - def test_wb_set_label_alters_existing_label_when_set_different(self): |
690 | + def test_set_label_alters_existing_label_when_set_different(self): |
691 | """ |
692 | - L{_set_label} will use e2label command to check if the label is |
693 | - set. If current label is different from requested label. L{_set_label} |
694 | + L{set_label} will use e2label command to check if the label is |
695 | + set. If current label is different from requested label. L{set_label} |
696 | will reset the filesystem label and log a message. |
697 | """ |
698 | exists = self.mocker.replace(os.path.exists) |
699 | @@ -480,17 +480,17 @@ |
700 | e2label_check = self.mocker.replace(subprocess.check_call) |
701 | e2label_check("e2label /dev/vdz /mnt/this", shell=True) |
702 | self.mocker.replay() |
703 | - util._set_label("/dev/vdz", "/mnt/this") |
704 | + util.set_label("/dev/vdz", "/mnt/this") |
705 | message = ( |
706 | "WARNING: /dev/vdz had label=other-label, overwriting with " |
707 | "label=/mnt/this") |
708 | self.assertTrue( |
709 | util.hookenv.is_logged(message), "Not logged- %s" % message) |
710 | |
711 | - def test_wb_set_label_alters_existing_label_when_unset(self): |
712 | + def test_set_label_alters_existing_label_when_unset(self): |
713 | """ |
714 | - L{_set_label} will use e2label command to check if the label is |
715 | - set. If label is unset, L{_set_label} to reset the filesystem label and |
716 | + L{set_label} will use e2label command to check if the label is |
717 | + set. If label is unset, L{set_label} to reset the filesystem label and |
718 | log a message. |
719 | """ |
720 | exists = self.mocker.replace(os.path.exists) |
721 | @@ -502,25 +502,25 @@ |
722 | e2label_check = self.mocker.replace(subprocess.check_call) |
723 | e2label_check("e2label /dev/vdz /mnt/this", shell=True) |
724 | self.mocker.replay() |
725 | - util._set_label("/dev/vdz", "/mnt/this") |
726 | + util.set_label("/dev/vdz", "/mnt/this") |
727 | |
728 | - def test_wb_format_device_ignores_fsck_when_busy(self): |
729 | + def test_format_device_ignores_fsck_when_busy(self): |
730 | """ |
731 | When blockdev reports the C{device_path} is busy, |
732 | - L{_format_device} skips fsck and returns the partition path. |
733 | + L{format_device} skips fsck and returns the partition path. |
734 | """ |
735 | partition_table_mock = self.mocker.replace(util._read_partition_table) |
736 | partition_table_mock("/dev/vdz") |
737 | self.mocker.result(1) |
738 | self.mocker.replay() |
739 | |
740 | - util._format_device("/dev/vdz") |
741 | + util.format_device("/dev/vdz") |
742 | message = ( |
743 | "INFO: /dev/vdz is busy, no fsck performed. Assuming formatted.") |
744 | self.assertTrue( |
745 | util.hookenv.is_logged(message), "Not logged- %s" % message) |
746 | |
747 | - def test_wb_format_device_runs_fsck_if_device_not_busy(self): |
748 | + def test_format_device_runs_fsck_if_device_not_busy(self): |
749 | """ |
750 | When L{_read_partition_table} reports the device partition table is |
751 | readable, and L{_assert_fstype} returns C{True}, mkfs.ext4 is not run |
752 | @@ -541,12 +541,12 @@ |
753 | self.mocker.result(0) # successful fsck command exit |
754 | self.mocker.replay() |
755 | |
756 | - util._format_device("/dev/vdz") |
757 | + util.format_device("/dev/vdz") |
758 | message = "/dev/vdz already formatted - skipping mkfs.ext4." |
759 | self.assertTrue( |
760 | util.hookenv.is_logged(message), "Not logged- %s" % message) |
761 | |
762 | - def test_wb_format_device_non_ext4_is_formatted(self): |
763 | + def test_format_device_non_ext4_is_formatted(self): |
764 | """ |
765 | When L{_read_partition_table} reports the device partition table is |
766 |          readable, and L{_assert_fstype} returns C{False}, mkfs.ext4 and fsck |
767 | @@ -567,14 +567,14 @@ |
768 |              self.mocker.result(0)  # Successful command exit 0 |
769 | self.mocker.replay() |
770 | |
771 | - util._format_device("/dev/vdz") |
772 | + util.format_device("/dev/vdz") |
773 | message = "NOTICE: Running: mkfs.ext4 /dev/vdz" |
774 | self.assertTrue( |
775 | util.hookenv.is_logged(message), "Not logged- %s" % message) |
776 | |
777 | - def test_wb_format_device_responds_to_fsck_error(self): |
778 | + def test_format_device_responds_to_fsck_error(self): |
779 | """ |
780 | - When fsck returns an error, L{_format_device} exits in |
781 | + When fsck returns an error, L{format_device} exits in |
782 | error and logs the error. |
783 | """ |
784 | partition_table_mock = self.mocker.replace(util._read_partition_table) |
785 | @@ -593,7 +593,7 @@ |
786 | self.mocker.replay() |
787 | |
788 | result = self.assertRaises( |
789 | - SystemExit, util._format_device, "/dev/vdz") |
790 | + SystemExit, util.format_device, "/dev/vdz") |
791 | self.assertEqual(result.code, 1) |
792 | |
793 | messages = ["NOTICE: Running: mkfs.ext4 /dev/vdz", |
794 | @@ -608,17 +608,16 @@ |
795 |          has not yet been defined by the relation principal. |
796 | """ |
797 | self.assertIsNone(util.hookenv.relation_get("mountpoint")) |
798 | - util.mount_volume() |
799 | + self.assertRaises(SystemExit, util.prepare_volume_mount) |
800 | message = "No mountpoint defined by relation. Cannot mount volume yet." |
801 | self.assertTrue( |
802 | util.hookenv.is_logged(message), "Not logged- %s" % message) |
803 | |
804 | - def test_mount_volume_mountpoint_persists_mountpoint_from_relation(self): |
805 | + def test_prepare_volume_mount_mountpoint_persists_mountpoint(self): |
806 | """ |
807 | - L{mount_volume} will read incoming principal relation data for the |
808 | - C{mountpoint}. When the requested C{mountpoint} is determined as |
809 | - mounted, it will then persist the mountpoint data, log a message and |
810 | - set juju relation C{mountpoint} in response. |
811 | + L{prepare_volume_mount} will read incoming principal relation data for |
812 | + the C{mountpoint}. When the requested C{mountpoint} is determined as |
813 | + mounted, it will then persist the mountpoint data, and log a message. |
814 | """ |
815 | util.hookenv._incoming_relation_data = ( |
816 | ("data:1", "mountpoint", "/mnt/this"),) |
817 | @@ -628,7 +627,7 @@ |
818 | self.mocker.result(True) |
819 | self.mocker.replay() |
820 | |
821 | - util.mount_volume() |
822 | + util.prepare_volume_mount() |
823 | self.assertIn(("mountpoint", "/mnt/this"), |
824 | util.hookenv._outgoing_relation_data) |
825 | message = "Volume (/mnt/this) already mounted" |
826 | @@ -639,11 +638,11 @@ |
827 | data = json.load(infile) |
828 | self.assertEqual(data["mountpoint"], "/mnt/this") |
829 | |
830 | - def test_mount_volume_mountpoint_from_relation_data_already_mounted(self): |
831 | +    def test_prepare_volume_mount_mountpoint_already_mounted(self): |
832 | """ |
833 | - L{mount_volume} will log a message and return when the C{mountpoint} is |
834 | - already defined in relation data and the requested C{mountpoint} is |
835 | - already mounted. |
836 | +        L{prepare_volume_mount} will log a message and return when the |
837 | + C{mountpoint} is already defined in relation data and the requested |
838 | + C{mountpoint} is already mounted. |
839 | """ |
840 | util.hookenv._incoming_relation_data = ( |
841 | ("data:1", "mountpoint", "/mnt/this"),) |
842 | @@ -655,234 +654,7 @@ |
843 | self.mocker.result(True) |
844 | self.mocker.replay() |
845 | |
846 | - util.mount_volume() |
847 | + util.prepare_volume_mount() |
848 | message = "Volume (/mnt/this) already mounted" |
849 | self.assertTrue( |
850 | util.hookenv.is_logged(message), "Not logged- %s" % message) |
851 | - |
852 | - def test_mount_volume_nfs_provider_no_nfs_hook_relation_data(self): |
853 | - """ |
854 | - L{mount_volume} will exit in error when set as C{nfs} provider and the |
855 | - nfs relation data does not contain any of the required |
856 | - values: C{mount_path}, C{mount_server}, C{mount_options} or |
857 | - C{mount_type}. |
858 | - """ |
859 | - data = ( |
860 | - ("data:1", "mountpoint", "/mnt/this"), |
861 | - ("nfs:1", "private-address", "me.com"), |
862 | - ("nfs:1", "mountpoint", "/nfs/server/path"), |
863 | - ("nfs:1", "fstype", "nfs"),) |
864 | - util.hookenv._incoming_relation_data = data |
865 | - |
866 | - # Setup persisted data missing values from nfs-relation-changed |
867 | - for skip in range(1, 3): |
868 | - partial_data = () |
869 | - for index, item in enumerate(data): |
870 | - if index != skip: |
871 | - partial_data = partial_data + (item,) |
872 | - util.hookenv._incoming_relation_data = partial_data |
873 | - self.assertEqual( |
874 | - util.hookenv.relation_get("mountpoint", rid="data:1"), |
875 | - "/mnt/this") |
876 | - util.mount_volume() |
877 | - message = ( |
878 | - "Missing required relation values. " |
879 | - "nfs-relation-changed hook needs to run first.") |
880 | - self.assertTrue( |
881 | - util.hookenv.is_logged(message), "Not logged- %s" % message) |
882 | - |
883 | - def test_mount_volume_nfs_provider_success(self): |
884 | - """ |
885 | - L{mount_volume} will read incoming principal relation data for the |
886 | - C{mountpoint}. When storage config is set to the C{nfs} provider |
887 | - L{mount_volume} as a result of the hook C{nfs-relation-changed}. The |
888 | - values required nfs relation data are C{mount_path}, C{mount_server}, |
889 | - C{mount_type} and C{mount_options}. The success of the mount will log a |
890 | - message and the storage charm will call juju relation-set C{mountpoint} |
891 | - to report the successful initialization to the principal. |
892 | - """ |
893 | - util.hookenv._incoming_relation_data = ( |
894 | - ("data:1", "mountpoint", "/mnt/this"), |
895 | - ("nfs:1", "private-address", "me.com"), |
896 | - ("nfs:1", "options", "someopts"), |
897 | - ("nfs:1", "mountpoint", "/nfs/server/path"), |
898 | - ("nfs:1", "fstype", "nfs")) |
899 | - |
900 | - # Mock not mounted |
901 | - ismount = self.mocker.replace(os.path.ismount) |
902 | - ismount("/mnt/this") |
903 | - self.mocker.result(False) |
904 | - |
905 | - # mount_volume calls makedirs when path !exists |
906 | - exists = self.mocker.replace(os.path.exists) |
907 | - exists("/mnt/this") |
908 | - self.mocker.result(False) |
909 | - makedirs = self.mocker.replace(os.makedirs) |
910 | - makedirs("/mnt/this") |
911 | - |
912 | - # Called proper mount command based on nfs data |
913 | - command = "mount -t nfs -o someopts me.com:/nfs/server/path /mnt/this" |
914 | - mount_cmd = self.mocker.replace(subprocess.check_call) |
915 | - mount_cmd(command, shell=True) |
916 | - self.mocker.replay() |
917 | - |
918 | - self.assertEqual(util.get_provider(), "nfs") |
919 | - util.mount_volume() |
920 | - message = "Mount (/mnt/this) successful" |
921 | - self.assertTrue( |
922 | - util.hookenv.is_logged(message), "Not logged- %s" % message) |
923 | - |
924 | - # Wrote proper fstab mount info to mount on reboot |
925 | - fstab_content = "me.com:/nfs/server/path /mnt/this nfs someopts 0 0" |
926 | - with open(util.FSTAB_FILE) as input_file: |
927 | - self.assertEqual(input_file.read(), fstab_content) |
928 | - |
929 | - # Reported success to principal |
930 | - self.assertEqual( |
931 | - util.hookenv._outgoing_relation_data, |
932 | - (("mountpoint", "/mnt/this"),)) |
933 | - # Perserved mountpoint for future departed hook runs |
934 | - persist_path = "%s/%s" % (self.charm_dir, util.PERSIST_DATA_FILE) |
935 | - with open(persist_path) as infile: |
936 | - data = json.load(infile) |
937 | - self.assertEqual(data["mountpoint"], "/mnt/this") |
938 | - |
939 | - def test_mount_volume_bsb_provider_no_device_path_provided(self): |
940 | - """ |
941 | - L{mount_volume} will exit early if called before C{device_path} |
942 | - is specified by the block-storage-broker charm. |
943 | - """ |
944 | - util.hookenv._config = (("provider", "block-storage-broker"),) |
945 | - util.hookenv._incoming_relation_data = ( |
946 | - ("data:1", "mountpoint", "/mnt/this"),) |
947 | - |
948 | - self.assertEqual(util.get_provider(), "block-storage-broker") |
949 | - result = self.assertRaises(SystemExit, util.mount_volume) |
950 | - self.assertEqual(result.code, 0) |
951 | - message = ( |
952 | - "Waiting for persistent nova device provided by " |
953 | - "block-storage-broker.") |
954 | - self.assertTrue( |
955 | - util.hookenv.is_logged(message), "Not logged- %s" % message) |
956 | - |
957 | - def test_mount_volume_bsb_provider_success(self): |
958 | - """ |
959 | - L{mount_volume} reads principal relation data for the C{mountpoint}. |
960 | - When storage config is set to the C{nova} provider, L{mount_volume} |
961 | - will read the optional C{device_path} parameter provided by the nova |
962 | - provider's C{data-relation-changed} hook. |
963 | - The success of the mount will log a message and the storage charm will |
964 | - call relation-set C{mountpoint} to report the successful mount to the |
965 | - principal. |
966 | - """ |
967 | - util.hookenv._incoming_relation_data = ( |
968 | - ("data:1", "mountpoint", "/mnt/this"),) |
969 | - |
970 | - util.hookenv._config = (("provider", "block-storage-broker"),) |
971 | - |
972 | - ismount = self.mocker.replace(os.path.ismount) |
973 | - ismount("/mnt/this") |
974 | - self.mocker.result(False) # Mock not mounted |
975 | - |
976 | - assert_device = self.mocker.replace(util._assert_block_device) |
977 | - assert_device("/dev/vdx") |
978 | - exists = self.mocker.replace(os.path.exists) |
979 | - exists("/dev/vdx1") |
980 | - self.mocker.result(False) |
981 | - create_partition = self.mocker.replace( |
982 | - util._format_device) |
983 | - create_partition("/dev/vdx") |
984 | - |
985 | - _set_label = self.mocker.replace(util._set_label) |
986 | - _set_label("/dev/vdx", "/mnt/this") |
987 | - |
988 | - # Ensure blkid is called to discover fstype of nova device |
989 | - get_fstype = self.mocker.replace(subprocess.check_output) |
990 | - get_fstype("blkid -o value -s TYPE /dev/vdx", shell=True) |
991 | - self.mocker.result("ext4\n") |
992 | - |
993 | - exists("/mnt/this") |
994 | - self.mocker.result(True) |
995 | - |
996 | - mount = self.mocker.replace(subprocess.check_call) |
997 | - mount("mount -t ext4 /dev/vdx /mnt/this", shell=True) |
998 | - self.mocker.replay() |
999 | - |
1000 | - self.assertEqual(util.get_provider(), "block-storage-broker") |
1001 | - util.mount_volume("/dev/vdx") |
1002 | - message = "Mount (/mnt/this) successful" |
1003 | - self.assertTrue( |
1004 | - util.hookenv.is_logged(message), "Not logged- %s" % message) |
1005 | - |
1006 | - # Wrote proper fstab mount info to mount on reboot |
1007 | - fstab_content = "/dev/vdx /mnt/this ext4 default 0 0" |
1008 | - with open(util.FSTAB_FILE) as input_file: |
1009 | - self.assertEqual(input_file.read(), fstab_content) |
1010 | - |
1011 | - # Reported success to principal |
1012 | - self.assertEqual( |
1013 | - util.hookenv._outgoing_relation_data, |
1014 | - (("mountpoint", "/mnt/this"),)) |
1015 | - |
1016 | - def test_mount_volume_bsb_provider_success_postgresql_charm_upgrade(self): |
1017 | - """ |
1018 | - Old versions of the postgresql charm would have created a single |
1019 | - partition with max size from an external attached volume if the user |
1020 | - had used postgresql charm's deprecated volume-map parameter. When the |
1021 | - storage charm sees a single partition on the attached device, it will |
1022 | - attempt to mount that partition instead of the root device. |
1023 | - The success of the mount will log a message and the storage charm will |
1024 | - call relation-set C{mountpoint} to report the successful mount to the |
1025 | - principal. |
1026 | - """ |
1027 | - util.hookenv._incoming_relation_data = ( |
1028 | - ("data:1", "mountpoint", "/mnt/this"),) |
1029 | - |
1030 | - util.hookenv._config = (("provider", "block-storage-broker"),) |
1031 | - |
1032 | - ismount = self.mocker.replace(os.path.ismount) |
1033 | - ismount("/mnt/this") |
1034 | - self.mocker.result(False) # Mock not mounted |
1035 | - |
1036 | - assert_device = self.mocker.replace(util._assert_block_device) |
1037 | - assert_device("/dev/vdx") |
1038 | - exists = self.mocker.replace(os.path.exists) |
1039 | - exists("/dev/vdx1") |
1040 | - self.mocker.result(True) |
1041 | - create_partition = self.mocker.replace( |
1042 | - util._format_device) |
1043 | - create_partition("/dev/vdx1") |
1044 | - |
1045 | - _set_label = self.mocker.replace(util._set_label) |
1046 | - _set_label("/dev/vdx1", "/mnt/this") |
1047 | - |
1048 | - # Ensure blkid is called to discover fstype of nova device |
1049 | - get_fstype = self.mocker.replace(subprocess.check_output) |
1050 | - get_fstype("blkid -o value -s TYPE /dev/vdx1", shell=True) |
1051 | - self.mocker.result("ext4\n") |
1052 | - |
1053 | - exists("/mnt/this") |
1054 | - self.mocker.result(True) |
1055 | - |
1056 | - mount = self.mocker.replace(subprocess.check_call) |
1057 | - mount("mount -t ext4 /dev/vdx1 /mnt/this", shell=True) |
1058 | - self.mocker.replay() |
1059 | - |
1060 | - self.assertEqual(util.get_provider(), "block-storage-broker") |
1061 | - util.mount_volume("/dev/vdx") |
1062 | - messages = ["Mount (/mnt/this) successful", |
1063 | - "Mounting /dev/vdx1 as the device path as the root device " |
1064 | - "is already partitioned."] |
1065 | - for message in messages: |
1066 | - self.assertTrue( |
1067 | - util.hookenv.is_logged(message), "Not logged- %s" % message) |
1068 | - |
1069 | - # Wrote proper fstab mount info to mount on reboot |
1070 | - fstab_content = "/dev/vdx1 /mnt/this ext4 default 0 0" |
1071 | - with open(util.FSTAB_FILE) as input_file: |
1072 | - self.assertEqual(input_file.read(), fstab_content) |
1073 | - |
1074 | - # Reported success to principal |
1075 | - self.assertEqual( |
1076 | - util.hookenv._outgoing_relation_data, |
1077 | - (("mountpoint", "/mnt/this"),)) |
Some English fixes plus a stray change at the end