Merge lp:~jseutter/charms/precise/storage/storage-with-broker-provider into lp:~dpb/charms/precise/storage/trunk
Status: Merged
Merge reported by: David Britton
Merged at revision: not available
Proposed branch: lp:~jseutter/charms/precise/storage/storage-with-broker-provider
Merge into: lp:~dpb/charms/precise/storage/trunk
Diff against target: 741 lines (+324/-106), 10 files modified:
- Makefile (+3/-0)
- config.yaml (+12/-2)
- hooks/common_util.py (+89/-15)
- hooks/storage-provider.d/block-storage-broker/block-storage-relation-broken (+6/-0)
- hooks/storage-provider.d/block-storage-broker/block-storage-relation-changed (+50/-0)
- hooks/storage-provider.d/nfs/data-relation-changed (+0/-7)
- hooks/storage-provider.d/nfs/data-relation-departed (+1/-1)
- hooks/test_common_util.py (+158/-81)
- hooks/testing.py (+3/-0)
- metadata.yaml (+2/-0)
To merge this branch: bzr merge lp:~jseutter/charms/precise/storage/storage-with-broker-provider
Related bugs: none
Reviewer | Review Type | Date Requested | Status
---|---|---|---
David Britton | | | Approve
Chad Smith (community) | | | Approve

Review via email: mp+205283@code.launchpad.net
Commit message
Description of the change
The storage charm now relates to the block-storage-broker charm.
The postgresql charm is the only one that currently works with the storage subordinate charm. To test, you could use the following juju-deployer config:
#######
common:
  services:
    block-storage-broker:
      branch: lp:~chad.smith/charms/precise/block-storage-broker/trunk
    storage:
      branch: lp:~jseutter/charms/precise/storage/storage-with-broker-provider/
    postgresql:
      branch: lp:~jseutter/charms/precise/postgresql/delegate-blockstorage-to-storage-subordinate-patched
doit:
  inherits: common
  series: precise
  relations:
    - [postgresql, storage]
    - [storage, block-storage-broker]
#######
then run something like:
juju-deployer -c storage-
To verify, ssh to the postgresql/0 unit. You should see a symlink in postgresql that points to the volume that the storage charm mounted:
root@juju-
lrwxrwxrwx 1 root root 40 Feb 7 04:01 /var/lib/
David Britton (dpb) wrote:
- 32. By Jerry Seutter: Added missing blockstoragebroker directory.
David Britton (dpb) wrote:
[1] chmod +x storage-
David Britton (dpb) wrote:
[2] remove nova_utils.py from blockstoragebroker dir
David Britton (dpb) wrote:
[3] you are missing symlinks in hooks/. block-storage-
[4] please rename the storage-
David Britton (dpb) wrote:
[5] Upon relation, the device is not set by block-storage-
https:/
--- a/hooks/
+++ b/hooks/
@@ -18,4 +18,7 @@ if os.path.
subprocess
device_path = hookenv.
-common_
+if device_path:
+ common_
+else:
+ hookenv.
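The truncated hunk above amounts to guarding the mount call behind a device-path check. A standalone sketch of that guard (the function name, parameters, and log message are assumptions, since the inline diff is cut off; the real hook calls hookenv and common_util directly):

```python
# Sketch of dpb's [5]: only mount once block-storage-broker has actually
# published a device path; otherwise log and do nothing.
# relation_get, mount_volume and log are passed in as plain callables so the
# sketch stays independent of charm-helpers.

def maybe_mount(relation_get, mount_volume, log):
    """Mount only if the broker relation has set block-device-path."""
    device_path = relation_get("block-device-path")
    if device_path:
        mount_volume(device_path)
        return True
    log("No block-device-path provided by the broker yet; doing nothing.")
    return False
```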
David Britton (dpb) wrote:
[2-amendment] I think the file can go away, afaict, you are only using one method from it to get an instance id (which should at least be expanded to aws, I think)
David Britton (dpb) wrote:
Paste of all changes I had to make so far: https:/
David Britton (dpb) wrote:
[6] There is a fairly serious race condition. If data-relation-
David Britton (dpb) wrote:
Updated diff including thoughts from [6]: https:/
David Britton (dpb) wrote:
I'd like to re-review, but there seems to be some rework necessary here. I'd suggest for testing:
1) deploy
2) add-unit postgres
3) add-unit postgres
4) break relation between postgres and storage, add back
5) destroy-services
6) make sure everything comes down without error
In between each step, make sure the system looks right. That the volume is mounted, that data is being stored there for postgres. This can be seen with the following checks as user postgres:
* ls -ld /var/lib/
* ls -l /mnt/takeitlike
* psql # CREATE DATABASE foo -- do this once on the main node
* psql -d foo # make sure with each new node, you can see the data copied in.
* pg_lsclusters
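The filesystem-level checks above could be scripted roughly as follows. The default paths are assumptions (several are truncated in the thread); adjust them to the unit's real data directory and mountpoint:

```python
# Rough script for the mount/symlink checks dpb lists: the mountpoint must
# actually be mounted, and the data directory must be a symlink resolving
# under it. Paths are placeholders, not the charm's real defaults.
import os

def check_storage(data_dir="/var/lib/postgresql", mountpoint="/mnt/data"):
    """Return a list of human-readable check failures (empty list = OK)."""
    problems = []
    if not os.path.ismount(mountpoint):
        problems.append("%s is not a mountpoint" % mountpoint)
    if os.path.islink(data_dir):
        target = os.path.realpath(data_dir)
        if not target.startswith(mountpoint):
            problems.append("%s resolves to %s, not under %s"
                            % (data_dir, target, mountpoint))
    else:
        problems.append("%s is not a symlink" % data_dir)
    return problems
```

The psql and pg_lsclusters checks still need to be run by hand as user postgres.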
- 33. By Jerry Seutter: Restore execute permission on hooks
Chad Smith (chad.smith) wrote:
Thanks Jerry for posting this MP.
Okay, just wanted to consolidate some review comments from previous email so we can track them and check them off here:
[1] let's kill nova_util.py and migrate get_instance_id up into common_util and call it from there. Lots of unused code that we can remove if we drop nova_util.
[2] get_instance_id should no longer pipe curl through jsonpipe. Let's do something like this:

def _get_instance_id():
    import json
    metadata = json.loads(
        subprocess.check_output("curl %s" % METADATA_URL, shell=True))
    return metadata["uuid"]
[3] in hooks/storage-
Drop the install of jsonpipe and drop all the OS_env file saving stuff; it's not needed as we handle that in block-storage-
[4] drop any config-changed file loads or calls from any hooks in hooks/storage-
# So this stuff should be gone from any other file
current_dir = sys.path[0]
config_changed = "%s/config-changed" % current_dir
if os.path.exists(config_changed):
    subprocess.check_call(config_changed)
[5] We can delete the start and stop hooks from hooks/storage-
We don't even have those hook links defined up in the global hooks directory so they won't get called anyway
Will continue to review the charm changes with a couple live deployments to see how our unit add/remove story is going.
- 34. By Jerry Seutter: Restoring relation hook symlinks
Jerry Seutter (jseutter) wrote:
> [1] chmod +x storage-
Fixed.
- 35. By Jerry Seutter: Removing nova_util
- 36. By Jerry Seutter: Restoring METADATA_URL.
- 37. By Jerry Seutter: Rename blockstoragebroker to block-storage-broker.
- 38. By Jerry Seutter: get_instance_id no longer uses jsonpipe
- 39. By Jerry Seutter: import json
- 40. By Jerry Seutter: Remove python-jsonpipe
- 41. By Jerry Seutter: Removing openstack environment variable lookups
- 42. By Jerry Seutter: Removing unused hooks.
- 43. By Jerry Seutter: Removing unused code.
Jerry Seutter (jseutter) wrote:
dpb, your 1, 2, 3, 4 have been addressed. 5 and your extra comment are still outstanding.
Jerry Seutter (jseutter) wrote:
Chad, your 1, 2, 3, 4 and 5 have all been addressed.
Chad Smith (chad.smith) wrote:
> I'd like to re-review, but there seems to be some rework necessary here. I'd
> suggest for testing:
>
> 1) deploy
> 2) add-unit postgres
> 3) add-unit postgres
> 4) break relation between postgres and storage, add back
Without the subsequent MP, I don't think breaking storage/postgresql relations and re-adding will be possible because volume labels of block-storage-
Chad Smith (chad.smith) wrote:
Jerry, thanks for the updates and pulling in that volume-map and volume-label branch. I'll remove that MP now.
I've been deploying this now with the following bundle and I'm able to juju add-unit postgresql, juju remove-relation postgresql storage, juju add-relation postgresql storage and see the volumes remounted without hook issues. I'll debug a bit more and look into the race condition between data-relation and block-storage-
cat postgres-
common:
  services:
    postgresql:
      branch: lp:~chad.smith/landscape/postgresql-storage-wip
    storage:
      branch: lp:~jseutter/charms/precise/storage/storage-with-broker-provider
    block-storage-broker:
      branch: lp:~chad.smith/charms/precise/block-storage-broker/trunk
doit:
  inherits: common
  series: precise
  relations:
    - [postgresql, storage]
    - [storage, block-storage-broker]
$ juju-deployer -c postgres-
Chad Smith (chad.smith) wrote:
Hey Jerry,
I had a couple of things I needed to fix in this branch to make it work for remove-relation postgresql storage and add-relation postgresql storage.
[6] pulled in persist* functions from block-storage-
[7] persistent mountpoint saved during mount_volume
[8] needed symlink in block-storage-
[9] use persistent mountpoint info instead of relation-get mountpoint (as this relation-data might disappear on some departed/broken hooks)
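A minimal sketch of the persistence pattern behind [6]-[9]: cache relation data (like the mountpoint) to a JSON file during -changed hooks so that -departed/-broken hooks, where relation-get may return nothing, can still find it. The file name mirrors the charm's PERSIST_DATA_FILE; treat the exact layout as an assumption:

```python
# Persist relation data across hook invocations via a small JSON file in
# the charm directory, since relation-data is not guaranteed to survive
# into departed/broken hooks.
import json
import os

PERSIST_FILE = "charm-data"

def persist(charm_dir, key, value):
    """Record key/value in the charm's persistent data file."""
    path = os.path.join(charm_dir, PERSIST_FILE)
    data = {}
    if os.path.exists(path):
        with open(path) as f:
            data = json.load(f)
    data[key] = value
    with open(path, "w") as f:
        json.dump(data, f)

def recall(charm_dir, key):
    """Return the persisted value for key, or None if absent."""
    path = os.path.join(charm_dir, PERSIST_FILE)
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f).get(key)
```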
here's the pastebin of what I patched in your branch, including the unit tests just pulled over from block-storage-
https:/
Okay enough of the comment steamrolling. I was trying to get the storage-related concerns out of the way so I could prove our postgresql charm was the one having problems.
Your branch with this patch works very well with the postgres-
add-relation/remove-relation between storage and postgresql works great, and postgres can pick up and remount the nova volume fine upon relation-rejoined with a simple juju resolved --retry postgresql/0. As expected, postgresql/0 goes into an error hook state when the nova volume disappears.
Approving your branch with the patch above, as we keep lumping more features into it. Sorry man.
- 45. By Jerry Seutter: Adding setting persistence to storage subordinate.
- 46. By Jerry Seutter: Unmounting the volume upon data-relation-departed.
David Britton (dpb):
Preview Diff
1 | === modified file 'Makefile' |
2 | --- Makefile 2014-02-04 01:13:16 +0000 |
3 | +++ Makefile 2014-02-14 00:23:01 +0000 |
4 | @@ -8,6 +8,9 @@ |
5 | |
6 | lint: |
7 | @flake8 --exclude hooks/charmhelpers hooks |
8 | + # flake any python scripts not named *.py |
9 | + fgrep -r bin/python hooks/ | awk -F : '{print $$1}' | xargs -r -n1 flake8 |
10 | + |
11 | |
12 | update-charm-helpers: |
13 | # Pull latest charm-helpers branch and sync the components based on our |
14 | |
15 | === modified file 'config.yaml' |
16 | --- config.yaml 2014-02-04 00:58:01 +0000 |
17 | +++ config.yaml 2014-02-14 00:23:01 +0000 |
18 | @@ -8,9 +8,19 @@ |
19 | default: local |
20 | description: | |
21 | The backend storage provider. Will be mounted at root, and |
22 | - can be one of, local, blockstoragebroker or nfs. If you set to |
23 | - blockstoragebroker, you should provide a suitable value for |
24 | + can be one of, local, block-storage-broker or nfs. If you set to |
25 | + block-storage-broker, you should provide a suitable value for |
26 | volume_size. |
27 | + volume_map: |
28 | + type: string |
29 | + default: "" |
30 | + description: | |
31 | + YAML map as e.g. "{postgres/0: vol-0000010, postgres/1: vol-0000016}". |
32 | + A service unit to specific block storage volume-id that should be |
33 | + attached to each unit. This requires that provider is set to |
34 | + block-storage-broker and that volume-ids specified match a listing |
35 | + from nova volume-list. If a related unit does not have a volume-id |
36 | + specified, a new volume of volume_size will be created for that unit. |
37 | volume_size: |
38 | type: int |
39 | description: | |
40 | |
41 | === added symlink 'hooks/block-storage-relation-broken' |
42 | === target is u'hooks' |
43 | === added symlink 'hooks/block-storage-relation-changed' |
44 | === target is u'hooks' |
45 | === added symlink 'hooks/block-storage-relation-departed' |
46 | === target is u'hooks' |
47 | === modified file 'hooks/common_util.py' |
48 | --- hooks/common_util.py 2014-02-05 16:40:11 +0000 |
49 | +++ hooks/common_util.py 2014-02-14 00:23:01 +0000 |
50 | @@ -5,15 +5,63 @@ |
51 | import sys |
52 | from time import sleep |
53 | from stat import S_ISBLK, ST_MODE |
54 | +import json |
55 | |
56 | CHARM_DIR = os.environ.get("CHARM_DIR") |
57 | FSTAB_FILE = "/etc/fstab" |
58 | +METADATA_URL = "http://169.254.169.254/openstack/2012-08-10/meta_data.json" |
59 | +PERSIST_DATA_FILE = "charm-data" |
60 | |
61 | |
62 | def log(message, level=None): |
63 | """Quaint little log wrapper for juju logging""" |
64 | hookenv.log(message, level) |
65 | |
66 | +# XXX We can remove _persist_data when juju bug:1279018 is fixed to provide |
67 | +# relation-data from *-relation-departed hook runs |
68 | +def _persist_data(key, value): |
69 | + """Save C{key} and C{value} pairs in a persistent file for reference""" |
70 | + data = {} |
71 | + filepath = "%s/%s" % (hookenv.charm_dir(), PERSIST_DATA_FILE) |
72 | + if os.path.exists(filepath): |
73 | + with open(filepath) as inputfile: |
74 | + data = json.load(inputfile) |
75 | + data[key] = value |
76 | + with open(filepath, "w") as outfile: |
77 | + outfile.write(unicode(json.dumps(data, ensure_ascii=False))) |
78 | + |
79 | + |
80 | +def _get_persistent_data(key=None, remove_values=False): |
81 | + """Get a persistent data as specified by C{key} from the PERSIST_DATA_FILE |
82 | + |
83 | + If specified C{key} does not exist, or if no PERSIST_DATA_FILE exists, |
84 | + return C{None}. If C{remove_values} is C{True} remove the specified C{key} |
85 | + from the data file. |
86 | + """ |
87 | + data = {} |
88 | + filepath = "%s/%s" % (hookenv.charm_dir(), PERSIST_DATA_FILE) |
89 | + if os.path.exists(filepath): |
90 | + with open(filepath) as inputfile: |
91 | + data = json.load(inputfile) |
92 | + if key: |
93 | + value = data.get(key, None) |
94 | + if remove_values: |
95 | + data.pop(key, None) |
96 | + if data: |
97 | + with open(filepath, "w") as outfile: |
98 | + outfile.write( |
99 | + unicode(json.dumps(data, ensure_ascii=False))) |
100 | + elif os.path.exists(filepath): |
101 | + os.remove(filepath) |
102 | + else: |
103 | + value = data |
104 | + if not data: |
105 | + value = None |
106 | + if remove_values: |
107 | + if os.path.exists(filepath): |
108 | + os.remove(filepath) |
109 | + return value |
110 | + |
111 | |
112 | def get_provider(): |
113 | """Get the storage charm's provider type as set within it's config.yaml""" |
114 | @@ -27,11 +75,10 @@ |
115 | sys.exit(1) |
116 | |
117 | |
118 | -def is_mounted(): |
119 | - """Return True if the defined mountpoint is mounted""" |
120 | - mountpoint = hookenv.relation_get("mountpoint") |
121 | +def is_mounted(mountpoint): |
122 | + """Return True if the mountpoint is mounted""" |
123 | if not mountpoint: |
124 | - log("INFO: No mountpoint defined by relation assuming not mounted.") |
125 | + log("INFO: No mountpoint given, assuming not mounted.") |
126 | return False |
127 | return os.path.ismount(mountpoint) |
128 | |
129 | @@ -39,12 +86,15 @@ |
130 | def unmount_volume(): |
131 | """Unmount the relation-requested C{mountpoint}""" |
132 | success = False |
133 | - mountpoint = hookenv.relation_get("mountpoint") |
134 | + # Because relation-data is not guaranteed to exist on relation-departed |
135 | + # We need to obtain mountpoint from the persistent relation data we saved |
136 | + # during data-relation-changed |
137 | + mountpoint = _get_persistent_data("mountpoint") |
138 | if not mountpoint: |
139 | log("INFO: No mountpoint defined by relation no unmount performed.") |
140 | return |
141 | |
142 | - if not is_mounted(): |
143 | + if not is_mounted(mountpoint): |
144 | log("%s is not mounted, Done." % mountpoint) |
145 | success = True |
146 | else: |
147 | @@ -144,21 +194,40 @@ |
148 | sys.exit(1) |
149 | |
150 | |
151 | +def _get_from_relation(relation, key, default=None): |
152 | + relids = hookenv.relation_ids(relation) |
153 | + for relid in relids: |
154 | + for unit in hookenv.related_units(relid): |
155 | + relation = hookenv.relation_get(unit=unit, rid=relid) |
156 | + value = relation.get(key, default) |
157 | + if value is not None: |
158 | + return value |
159 | + return None |
160 | + |
161 | + |
162 | def mount_volume(device_path=None): |
163 | """ |
164 | - Mount the attached volume, nfs or blockstoragebroker type, to the principal |
165 | - requested C{mountpoint}. |
166 | + Mount the attached volume, nfs or block-storage-broker type, to the |
167 | + principal requested C{mountpoint}. |
168 | |
169 | If the C{mountpoint} required relation data does not exist, log the |
170 | issue and exit. Otherwise attempt to initialize and mount the blockstorage |
171 | or nfs device. |
172 | """ |
173 | - mountpoint = hookenv.relation_get("mountpoint") |
174 | + provider = get_provider() |
175 | + if provider == "block-storage-broker": |
176 | + # Called as part of a block storage hook, but mountpoint comes |
177 | + # from the data relation. |
178 | + mountpoint = _get_from_relation("data", "mountpoint") |
179 | + else: |
180 | + mountpoint = hookenv.relation_get("mountpoint") |
181 | + |
182 | if not mountpoint: |
183 | log("No mountpoint defined by relation. Cannot mount volume yet.") |
184 | sys.exit(0) |
185 | |
186 | - if is_mounted(): |
187 | + _persist_data("mountpoint", mountpoint) # Saved for our departed hook |
188 | + if is_mounted(mountpoint): |
189 | log("Volume (%s) already mounted" % mountpoint) |
190 | # publish changes to data relation and exit |
191 | for relid in hookenv.relation_ids("data"): |
192 | @@ -166,7 +235,6 @@ |
193 | relid, relation_settings={"mountpoint": mountpoint}) |
194 | sys.exit(0) |
195 | |
196 | - provider = get_provider() |
197 | options = "default" |
198 | if provider == "nfs": |
199 | relids = hookenv.relation_ids("nfs") |
200 | @@ -187,11 +255,11 @@ |
201 | sys.exit(1) |
202 | |
203 | device_path = "%s:%s" % (nfs_server, nfs_path) |
204 | - elif provider == "blockstoragebroker": |
205 | + elif provider == "block-storage-broker": |
206 | if device_path is None: |
207 | - log("ERROR: No persistent nova device provided by " |
208 | - "blockstoragebroker.", hookenv.ERROR) |
209 | - sys.exit(1) |
210 | + log("Waiting for persistent nova device provided by " |
211 | + "block-storage-broker.") |
212 | + sys.exit(0) |
213 | |
214 | _assert_block_device(device_path) |
215 | _format_device(device_path) |
216 | @@ -217,3 +285,9 @@ |
217 | for relid in hookenv.relation_ids("data"): |
218 | hookenv.relation_set( |
219 | relid, relation_settings={"mountpoint": mountpoint}) |
220 | + |
221 | + |
222 | +def get_instance_id(): |
223 | + metadata = json.loads( |
224 | + subprocess.check_output("curl %s" % METADATA_URL, shell=True)) |
225 | + return metadata["uuid"] |
226 | |
227 | === added directory 'hooks/storage-provider.d/block-storage-broker' |
228 | === added file 'hooks/storage-provider.d/block-storage-broker/block-storage-relation-broken' |
229 | --- hooks/storage-provider.d/block-storage-broker/block-storage-relation-broken 1970-01-01 00:00:00 +0000 |
230 | +++ hooks/storage-provider.d/block-storage-broker/block-storage-relation-broken 2014-02-14 00:23:01 +0000 |
231 | @@ -0,0 +1,6 @@ |
232 | +#!/usr/bin/python |
233 | + |
234 | +import common_util |
235 | + |
236 | + |
237 | +common_util.unmount_volume() |
238 | |
239 | === added file 'hooks/storage-provider.d/block-storage-broker/block-storage-relation-changed' |
240 | --- hooks/storage-provider.d/block-storage-broker/block-storage-relation-changed 1970-01-01 00:00:00 +0000 |
241 | +++ hooks/storage-provider.d/block-storage-broker/block-storage-relation-changed 2014-02-14 00:23:01 +0000 |
242 | @@ -0,0 +1,50 @@ |
243 | +#!/usr/bin/python |
244 | + |
245 | +import common_util |
246 | +import sys |
247 | +import yaml |
248 | +from yaml.constructor import ConstructorError |
249 | +from charmhelpers.core import hookenv |
250 | + |
251 | + |
252 | +instance_id = common_util.get_instance_id() |
253 | +config_data = hookenv.config() |
254 | +volume_size = config_data.get("volume_size", None) |
255 | +volume_map = config_data.get("volume_map", None) |
256 | +volume_id = None |
257 | +label = None |
258 | +relids = hookenv.relation_ids("data") |
259 | +for relid in relids: |
260 | + for unit in hookenv.related_units(relid): |
261 | + label = "%s nova volume" % unit |
262 | + if volume_map is not None: |
263 | + # Grab the volume_label for our instance if it exists so we can |
264 | + # request that volume-id from the block-storage-broker |
265 | + try: |
266 | + volume_map = yaml.load(volume_map) |
267 | + if volume_map: |
268 | + volume_id = volume_map.get(unit, None) |
269 | + except ConstructorError as e: |
270 | + hookenv.log( |
271 | + "invalid YAML in 'volume-map': {}".format(e), |
272 | + hookenv.WARNING) |
273 | + |
274 | +if label is None: |
275 | + hookenv.log( |
276 | + "No data-relation-changed hook fired. Awaiting data-relation before " |
277 | + "updating block-storage-broker relation") |
278 | + sys.exit(0) |
279 | + |
280 | +relation_type = hookenv.relation_type() |
281 | +if relation_type == "block-storage": |
282 | + relation_settings = {"instance-id": instance_id, "size": volume_size, |
283 | + "volume-label": label} |
284 | + if volume_id is not None: |
285 | + relation_settings["volume-id"] = volume_id |
286 | + hookenv.relation_set(relation_settings=relation_settings) |
287 | + |
288 | +# Grab info from the block-storage relation regardless of whether we are in the |
289 | +# data relation |
290 | +device_path = common_util._get_from_relation( |
291 | + "block-storage", "block-device-path") |
292 | +common_util.mount_volume(device_path) # Will wait for mountpoint |
293 | |
294 | === added symlink 'hooks/storage-provider.d/block-storage-broker/data-relation-changed' |
295 | === target is u'block-storage-relation-changed' |
296 | === added symlink 'hooks/storage-provider.d/block-storage-broker/data-relation-departed' |
297 | === target is u'block-storage-relation-broken' |
298 | === modified file 'hooks/storage-provider.d/nfs/data-relation-changed' |
299 | --- hooks/storage-provider.d/nfs/data-relation-changed 2014-01-22 20:12:24 +0000 |
300 | +++ hooks/storage-provider.d/nfs/data-relation-changed 2014-02-14 00:23:01 +0000 |
301 | @@ -1,12 +1,5 @@ |
302 | #!/usr/bin/python |
303 | |
304 | import common_util |
305 | -import os |
306 | -import subprocess |
307 | - |
308 | -current_dir = os.path.dirname(os.path.realpath(__file__)) |
309 | -config_changed = "%s/config-changed" % current_dir |
310 | -if os.path.exists(config_changed): |
311 | - subprocess.check_call(config_changed) |
312 | |
313 | common_util.mount_volume() # Will wait until mountpoint is set by principal |
314 | |
315 | === modified file 'hooks/storage-provider.d/nfs/data-relation-departed' |
316 | --- hooks/storage-provider.d/nfs/data-relation-departed 2014-01-22 02:26:11 +0000 |
317 | +++ hooks/storage-provider.d/nfs/data-relation-departed 2014-02-14 00:23:01 +0000 |
318 | @@ -2,4 +2,4 @@ |
319 | |
320 | import common_util |
321 | |
322 | -common_util.unmount_volume(remove_persistent_data=True) |
323 | +common_util.unmount_volume() |
324 | |
325 | === modified file 'hooks/test_common_util.py' |
326 | --- hooks/test_common_util.py 2014-02-04 02:23:53 +0000 |
327 | +++ hooks/test_common_util.py 2014-02-14 00:23:01 +0000 |
328 | @@ -1,4 +1,5 @@ |
329 | import common_util as util |
330 | +import json |
331 | import mocker |
332 | import os |
333 | import subprocess |
334 | @@ -12,6 +13,118 @@ |
335 | self.maxDiff = None |
336 | util.FSTAB_FILE = self.makeFile() |
337 | util.hookenv = TestHookenv({"provider": "nfs"}) |
338 | + self.addCleanup(setattr, os, "environ", os.environ) |
339 | + self.charm_dir = self.makeDir() |
340 | + os.environ = {"CHARM_DIR": self.charm_dir} |
341 | + |
342 | + def test_wb_persist_data_creates_persist_file_if_it_doesnt_exist(self): |
343 | + """ |
344 | + L{_persist_data} will create a new PERSIST_DATA_FILE in C{CHARM_DIR} |
345 | + if the file does not already exist and store the provided C{key}, |
346 | + C{value} pairs. |
347 | + """ |
348 | + persist_path = "%s/%s" % (self.charm_dir, util.PERSIST_DATA_FILE) |
349 | + self.assertFalse(os.path.exists(persist_path)) |
350 | + util._persist_data("key", "value") |
351 | + with open(persist_path) as infile: |
352 | + data = json.load(infile) |
353 | + self.assertEqual(data["key"], "value") |
354 | + |
355 | + def test_wb_persist_data_appends_new_values_to_existing_persist_file(self): |
356 | + """ |
357 | + L{_persist_data} will add new values to an existing PERSIST_DATA_FILE |
358 | + in C{CHARM_DIR} without removing existing C{key}, C{value} pairs. |
359 | + """ |
360 | + persist_path = "%s/%s" % (self.charm_dir, util.PERSIST_DATA_FILE) |
361 | + data = {"key1": "value1"} |
362 | + with open(persist_path, "w") as outfile: |
363 | + outfile.write(unicode(json.dumps(data, ensure_ascii=False))) |
364 | + util._persist_data("key2", "value2") |
365 | + with open(persist_path) as infile: |
366 | + data = json.load(infile) |
367 | + self.assertEqual(data["key1"], "value1") |
368 | + self.assertEqual(data["key2"], "value2") |
369 | + |
370 | + def test_wb_get_persistent_data_returns_none_when_no_persist_file(self): |
371 | + """ |
372 | + L{_get_persistent_data} will return C{None} for any C{key} search if |
373 | + the PERSIST_DATA_FILE does not yet exist. |
374 | + """ |
375 | + persist_path = "%s/%s" % (self.charm_dir, util.PERSIST_DATA_FILE) |
376 | + self.assertFalse(os.path.exists(persist_path)) |
377 | + self.assertIsNone(util._get_persistent_data("key2")) |
378 | + self.assertIsNone(util._get_persistent_data()) |
379 | + |
380 | + def test_wb_get_persistent_data_returns_none_when_no_key_exists(self): |
381 | + """ |
382 | + L{_get_persistent_data} will return C{None} for any C{key} search when |
383 | + the C{key} is not defined in the PERSIST_DATA_FILE. |
384 | + """ |
385 | + persist_path = "%s/%s" % (self.charm_dir, util.PERSIST_DATA_FILE) |
386 | + data = {"key1": "i-123"} |
387 | + with open(persist_path, "w") as outfile: |
388 | + outfile.write(unicode(json.dumps(data, ensure_ascii=False))) |
389 | + self.assertIsNone(util._get_persistent_data("cant-find-this-key")) |
390 | + |
391 | + def test_wb_get_persistent_data_returns_value_when_key_exists(self): |
392 | + """ |
393 | + L{_get_persistent_data} will return the corresponding value for the |
394 | + search C{key} provided when that C{key} is defined in the |
395 | + PERSIST_DATA_FILE. |
396 | + """ |
397 | + persist_path = "%s/%s" % (self.charm_dir, util.PERSIST_DATA_FILE) |
398 | + data = {"key1": "i-123"} |
399 | + with open(persist_path, "w") as outfile: |
400 | + outfile.write(unicode(json.dumps(data, ensure_ascii=False))) |
401 | + self.assertEqual(util._get_persistent_data("key1"), "i-123") |
402 | + |
403 | + def test_wb_get_persistent_data_removes_key_when_remove_values_true(self): |
404 | + """ |
405 | + L{_get_persistent_data} will return the corresponding value for the |
406 | + search C{key} provided when that C{key} is defined in the |
407 | + PERSIST_DATA_FILE. It will also remove that key from the |
408 | + PERSIST_DATA_FILE when C{remove_values} is C{True}. |
409 | + """ |
410 | + persist_path = "%s/%s" % (self.charm_dir, util.PERSIST_DATA_FILE) |
411 | + data = {"key1": "i-123", "key2": "i-456"} |
412 | + with open(persist_path, "w") as outfile: |
413 | + outfile.write(unicode(json.dumps(data, ensure_ascii=False))) |
414 | + self.assertEqual( |
415 | + util._get_persistent_data("key1", remove_values=True), "i-123") |
416 | + with open(persist_path) as infile: |
417 | + new_data = json.load(infile) |
418 | + self.assertEqual(new_data, {"key2": "i-456"}) |
419 | + |
420 | + def test_wb_get_persistent_data_removes_file_when_remove_values_true(self): |
421 | + """ |
422 | + L{_get_persistent_data} will return the corresponding value for the |
423 | + search C{key} provided when that C{key} is defined in the |
424 | + PERSIST_DATA_FILE. It will also remove the PERSIST_DATA_FILE when |
425 | + C{remove_values} is C{True} and the last remaining C{key} has been |
426 | + removed. |
427 | + """ |
428 | + persist_path = "%s/%s" % (self.charm_dir, util.PERSIST_DATA_FILE) |
429 | + data = {"key1": "i-123"} |
430 | + with open(persist_path, "w") as outfile: |
431 | + outfile.write(unicode(json.dumps(data, ensure_ascii=False))) |
432 | + self.assertEqual( |
433 | + util._get_persistent_data("key1", remove_values=True), "i-123") |
434 | + self.assertFalse(os.path.exists(persist_path)) |
435 | + |
436 | + def test_wb_get_persistent_data_removes_file_when_no_key_specified(self): |
437 | + """ |
438 | + L{_get_persistent_data} will return all existing values and remove the |
439 | + PERSIST_DATA_FILE when C{remove_values} is C{True} and no C{key} is |
440 | + specified. |
441 | + """ |
442 | + persist_path = "%s/%s" % (self.charm_dir, util.PERSIST_DATA_FILE) |
443 | + data = {"key1": "i-123", "key2": "i-456"} |
444 | + with open(persist_path, "w") as outfile: |
445 | + outfile.write(unicode(json.dumps(data, ensure_ascii=False))) |
446 | + self.assertEqual( |
447 | + util._get_persistent_data(key=None, remove_values=True), |
448 | + data) |
449 | + self.assertFalse(os.path.exists(persist_path)) |
450 | |
451 | def test_get_provider(self): |
452 | """L{get_provider} returns the C{provider} config setting if present""" |
453 | @@ -21,8 +134,6 @@ |
454 | """ |
455 | L{get_provider} exits with an error if C{provider} configuration unset. |
456 | """ |
457 | - self.addCleanup( |
458 | - setattr, util.hookenv, "_config", {"provider": "nfs"}) |
459 | util.hookenv._config = {} # No provider defined |
460 | |
461 | self.assertRaises(SystemExit, util.get_provider) |
462 | @@ -31,45 +142,32 @@ |
463 | |
464 | def test_is_mounted_when_mountpoint_is_mounted(self): |
465 | """ |
466 | - L{is_mounted} returns C{True} when persisted mountpath is mounted |
467 | + L{is_mounted} returns C{True} when the mountpath is mounted. |
468 | """ |
469 | - self.addCleanup( |
470 | - setattr, util.hookenv, "_incoming_relation_data", ()) |
471 | - util.hookenv._incoming_relation_data = ( |
472 | - (None, "mountpoint", "/mnt/this"),) |
473 | - self.assertEqual(util.hookenv.relation_get("mountpoint"), "/mnt/this") |
474 | - |
475 | ismount = self.mocker.replace(os.path.ismount) |
476 | ismount("/mnt/this") |
477 | self.mocker.result(True) |
478 | self.mocker.replay() |
479 | - |
480 | - self.assertTrue(util.is_mounted()) |
481 | + self.assertTrue(util.is_mounted("/mnt/this")) |
482 | |
483 | def test_is_mounted_when_mountpoint_is_not_mounted(self): |
484 | """ |
485 | - L{is_mounted} returns C{False} when persisted mountpath is not mounted |
486 | + L{is_mounted} returns C{False} when the mountpath is not mounted. |
487 | """ |
488 | - self.addCleanup( |
489 | - setattr, util.hookenv, "_incoming_relation_data", ()) |
490 | - util.hookenv._incoming_relation_data = ( |
491 | - (None, "mountpoint", "/mnt/this"),) |
492 | - |
493 | ismount = self.mocker.replace(os.path.ismount) |
494 | ismount("/mnt/this") |
495 | self.mocker.result(False) |
496 | self.mocker.replay() |
497 | |
498 | - self.assertFalse(util.is_mounted()) |
499 | + self.assertFalse(util.is_mounted("/mnt/this")) |
500 | |
501 | - def test_is_mounted_when_mountpoint_not_defined_by_relation(self): |
+    def test_is_mounted_when_mountpoint_not_defined(self):
         """
-        L{is_mounted} returns C{False} while the relation doesn't define
-        C{mountpoint}
+        L{is_mounted} returns C{False} when the given mountpoint is undefined.
         """
         self.assertIsNone(util.hookenv.relation_get("mountpoint"))
-        self.assertFalse(util.is_mounted())
-        message = "No mountpoint defined by relation assuming not mounted."
+        self.assertFalse(util.is_mounted(None))
+        message = "No mountpoint given, assuming not mounted."
         self.assertTrue(
             util.hookenv.is_logged(message), "Not logged- %s" % message)

@@ -78,23 +176,18 @@
         L{unmount_volume} will log a message and do nothing if mountpoint is
         not defined.
         """
-        self.assertIsNone(util.hookenv.relation_get("mountpoint"))
+        self.assertIsNone(util._get_persistent_data("mountpoint"))
         util.unmount_volume()

         message = "No mountpoint defined by relation no unmount performed."
         self.assertTrue(util.hookenv.is_logged(message), "Message not logged")
-        self.assertFalse(util.is_mounted())

     def test_unmount_volume_when_defined_mountpoint_not_mounted(self):
         """
         L{unmount_volume} will log a message and do nothing if the defined
-        mountpoint is not mounted and C{remove_persistent_data} parameter is
-        unspecified.
+        mountpoint is not mounted.
         """
-        self.addCleanup(
-            setattr, util.hookenv, "_incoming_relation_data", ())
-        util.hookenv._incoming_relation_data = (
-            (None, "mountpoint", "/mnt/this"),)
+        util._persist_data("mountpoint", "/mnt/this")

         ismount = self.mocker.replace(os.path.ismount)
         ismount("/mnt/this")
@@ -113,10 +206,7 @@
         to get a success with a sleep(5) in between attempts. If all attempts
         fail, L{umount_volume} will log the failure and exit 1.
         """
-        self.addCleanup(
-            setattr, util.hookenv, "_incoming_relation_data", ())
-        util.hookenv._incoming_relation_data = (
-            (None, "mountpoint", "/mnt/this"),)
+        util._persist_data("mountpoint", "/mnt/this")

         ismount = self.mocker.replace(os.path.ismount)
         ismount("/mnt/this")
@@ -144,10 +234,7 @@
         L{unmount_volume} will unmount a mounted filesystem calling the shell
         command umount and will log the success.
         """
-        self.addCleanup(
-            setattr, util.hookenv, "_incoming_relation_data", ())
-        util.hookenv._incoming_relation_data = (
-            (None, "mountpoint", "/mnt/this"),)
+        util._persist_data("mountpoint", "/mnt/this")

         ismount = self.mocker.replace(os.path.ismount)
         ismount("/mnt/this")
@@ -390,15 +477,11 @@
         """
         L{mount_volume} will read incoming principal relation data for the
         C{mountpoint}. When the requested C{mountpoint} is determined as
-        mounted, it will log a message and will set juju relation
-        C{mountpoint} in response.
+        mounted, it will then persist the mountpoint data, log a message and
+        set juju relation C{mountpoint} in response.
         """
-        self.addCleanup(
-            setattr, util.hookenv, "_incoming_relation_data", ())
         util.hookenv._incoming_relation_data = (
             (None, "mountpoint", "/mnt/this"),)
-        self.addCleanup(
-            setattr, util.hookenv, "_outgoing_relation_data", ())

         ismount = self.mocker.replace(os.path.ismount)
         ismount("/mnt/this")
@@ -407,9 +490,16 @@

         result = self.assertRaises(SystemExit, util.mount_volume)
         self.assertEqual(result.code, 0)
+        self.assertIn(("mountpoint", "/mnt/this"),
+                      util.hookenv._outgoing_relation_data)
         message = "Volume (/mnt/this) already mounted"
         self.assertTrue(
             util.hookenv.is_logged(message), "Not logged- %s" % message)
+        persist_path = "%s/%s" % (self.charm_dir, util.PERSIST_DATA_FILE)
+        with open(persist_path) as infile:
+            data = json.load(infile)
+        self.assertEqual(data["mountpoint"], "/mnt/this")
+

     def test_mount_volume_mountpoint_from_relation_data(self):
         """
@@ -417,8 +507,6 @@
         already defined in relation data and the requested C{mountpoint} is
         already mounted.
         """
-        self.addCleanup(
-            setattr, util.hookenv, "_incoming_relation_data", ())
         util.hookenv._incoming_relation_data = (
             (None, "mountpoint", "/mnt/this"),)
         self.assertEqual(util.hookenv.relation_get("mountpoint"), "/mnt/this")
@@ -441,8 +529,6 @@
         values: C{mount_path}, C{mount_server}, C{mount_options} or
         C{mount_type}.
         """
-        self.addCleanup(
-            setattr, util.hookenv, "_incoming_relation_data", ())
         data = (
             (None, "mountpoint", "/mnt/this"),
             ("nfs:1", "private-address", "me.com"),
@@ -476,16 +562,12 @@
         message and the storage charm will call juju relation-set C{mountpoint}
         to report the successful initialization to the principal.
         """
-        self.addCleanup(
-            setattr, util.hookenv, "_incoming_relation_data", ())
         util.hookenv._incoming_relation_data = (
             (None, "mountpoint", "/mnt/this"),
             ("nfs:1", "private-address", "me.com"),
             ("nfs:1", "options", "someopts"),
             ("nfs:1", "mountpoint", "/nfs/server/path"),
             ("nfs:1", "fstype", "nfs"))
-        self.addCleanup(
-            setattr, util.hookenv, "_outgoing_relation_data", ())

         # Mock not mounted
         ismount = self.mocker.replace(os.path.ismount)
@@ -520,49 +602,44 @@
         self.assertEqual(
             util.hookenv._outgoing_relation_data,
             (("mountpoint", "/mnt/this"),))
+        # Preserved mountpoint for future departed hook runs
+        persist_path = "%s/%s" % (self.charm_dir, util.PERSIST_DATA_FILE)
+        with open(persist_path) as infile:
+            data = json.load(infile)
+        self.assertEqual(data["mountpoint"], "/mnt/this")
-    def test_mount_volume_bsb_provider_error_no_device_path_provided(self):
-        """
-        L{mount_volume} will exit with errors if called before C{device_path}
-        is specified by the blockstoragebroker charm
-        saved from either the C{start} or C{data-relation-changed} hook runs.
-        """
-        self.addCleanup(
-            setattr, util.hookenv, "_incoming_relation_data", ())
+    def test_mount_volume_bsb_provider_no_device_path_provided(self):
+        """
+        L{mount_volume} will exit early if called before C{device_path}
+        is specified by the block-storage-broker charm.
+        """
+        util.hookenv._config = (("provider", "block-storage-broker"),)
         util.hookenv._incoming_relation_data = (
-            (None, "mountpoint", "/mnt/this"),)
-
-        self.addCleanup(
-            setattr, util.hookenv, "_config", (("provider", "nfs"),))
-        util.hookenv._config = (("provider", "blockstoragebroker"),)
-
-        self.assertEqual(util.get_provider(), "blockstoragebroker")
-        self.assertRaises(SystemExit, util.mount_volume)
+            ("data:1", "mountpoint", "/mnt/this"),)
+
+        self.assertEqual(util.get_provider(), "block-storage-broker")
+        result = self.assertRaises(SystemExit, util.mount_volume)
+        self.assertEqual(result.code, 0)
         message = (
-            "ERROR: No persistent nova device provided by blockstoragebroker.")
+            "Waiting for persistent nova device provided by "
+            "block-storage-broker.")
         self.assertTrue(
             util.hookenv.is_logged(message), "Not logged- %s" % message)

-    def test_mount_volume_nova_provider_success(self):
+    def test_mount_volume_bsb_provider_success(self):
         """
         L{mount_volume} reads principal relation data for the C{mountpoint}.
         When storage config is set to the C{nova} provider, L{mount_volume}
         will read the optional C{device_path} parameter provided by the nova
-        provider's C{data-relation-changed} hook. 
+        provider's C{data-relation-changed} hook.
         The success of the mount will log a message and the storage charm will
         call relation-set C{mountpoint} to report the successful mount to the
         principal.
         """
-        self.addCleanup(
-            setattr, util.hookenv, "_incoming_relation_data", ())
         util.hookenv._incoming_relation_data = (
-            (None, "mountpoint", "/mnt/this"),)
+            ("data:1", "mountpoint", "/mnt/this"),)

-        self.addCleanup(
-            setattr, util.hookenv, "_config", (("provider", "nfs"),))
-        util.hookenv._config = (("provider", "blockstoragebroker"),)
-        self.addCleanup(
-            setattr, util.hookenv, "_outgoing_relation_data", ())
+        util.hookenv._config = (("provider", "block-storage-broker"),)

         ismount = self.mocker.replace(os.path.ismount)
         ismount("/mnt/this")
@@ -590,7 +667,7 @@
         mount("mount -t ext4 /dev/vdx /mnt/this", shell=True)
         self.mocker.replay()

-        self.assertEqual(util.get_provider(), "blockstoragebroker")
+        self.assertEqual(util.get_provider(), "block-storage-broker")
         util.mount_volume("/dev/vdx")
         message = "Mount (/mnt/this) successful"
         self.assertTrue(

=== modified file 'hooks/testing.py'
--- hooks/testing.py    2014-01-31 23:49:37 +0000
+++ hooks/testing.py    2014-02-14 00:23:01 +0000
@@ -73,6 +73,9 @@
     def local_unit(self):
         return os.environ["JUJU_UNIT_NAME"]

+    def charm_dir(self):
+        return os.environ["CHARM_DIR"]
+
     def log(self, message, level):
         self._log = self._log + (message,)

=== modified file 'metadata.yaml'
--- metadata.yaml       2014-01-09 04:19:14 +0000
+++ metadata.yaml       2014-02-14 00:23:01 +0000
@@ -16,3 +16,5 @@
     scope: container
   nfs:
     interface: mount
+  block-storage:
+    interface: volume-request
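The new `block-storage` relation pairs with the `block-storage-relation-changed` hook listed in the file summary (its body is not shown in this excerpt). The tests pin down the flow: until the broker publishes a device path, the hook exits 0 after logging "Waiting for persistent nova device provided by block-storage-broker."; once a path arrives, it is handed to `mount_volume`. A hedged sketch of that flow, with the relation key `block-device-path` and the injected `relation_get`/`log`/`mount_volume` helpers all assumed for illustration:

```python
import sys


def block_storage_relation_changed(relation_get, log, mount_volume):
    """Sketch of the changed-hook flow implied by the tests.

    relation_get, log and mount_volume stand in for the charm's real
    helpers; the "block-device-path" key is an assumption, not taken
    from this diff.
    """
    device_path = relation_get("block-device-path")
    if not device_path:
        # Broker has not attached the nova volume yet; exit cleanly and
        # wait for the next relation-changed event.
        log("Waiting for persistent nova device provided by "
            "block-storage-broker.")
        sys.exit(0)
    mount_volume(device_path)
```

Exiting 0 rather than erroring matters here: it is what the renamed `test_mount_volume_bsb_provider_no_device_path_provided` now asserts, so juju does not mark the hook as failed while the broker is still provisioning the volume.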
I think you forgot the 'storage-provider.d/blockstorageprovider' dir?