Merge lp:~xfactor973/charm-helpers/ceph-functions into lp:charm-helpers
Status: Merged
Merged at revision: 502
Proposed branch: lp:~xfactor973/charm-helpers/ceph-functions
Merge into: lp:charm-helpers
Diff against target: 1173 lines (+738/-72), 6 files modified
  .bzrignore (+1/-0)
  charmhelpers/contrib/storage/linux/ceph.py (+391/-25)
  tests/contrib/storage/test_linux_ceph.py (+321/-22)
  tests/contrib/storage/test_linux_storage_loopback.py (+5/-5)
  tests/contrib/storage/test_linux_storage_lvm.py (+8/-8)
  tests/contrib/storage/test_linux_storage_utils.py (+12/-12)
To merge this branch: bzr merge lp:~xfactor973/charm-helpers/ceph-functions
Related bugs: (none)
Reviewer: James Page (review: Needs Fixing)
Review via email: mp+276476@code.launchpad.net
Commit message
Description of the change
This change adds many additional functions for working with Ceph. Most of the changes are centered around erasure coding support. I added unit tests to bring the testing coverage to 93%. This patch also introduces support for adding and removing cache tiers on existing Ceph pools.
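For reviewers who want a feel for the new API, a minimal usage sketch (the 'admin' service name and the pool names are illustrative only):

    from charmhelpers.contrib.storage.linux.ceph import (
        ErasurePool,
        ReplicatedPool,
    )

    # Create a 3-replica pool; the pg count is sized from the cluster's
    # OSD count.
    data = ReplicatedPool(service='admin', name='data', replicas=3)
    data.create()

    # Create an erasure coded pool from an existing profile, then front it
    # with a writeback cache tier.
    archive = ErasurePool(service='admin', name='archive',
                          erasure_code_profile='default')
    archive.create()
    archive.add_cache_tier(cache_pool='archive-cache', mode='writeback')

    # Later: flush the cache and detach the tier again.
    archive.remove_cache_tier(cache_pool='archive-cache')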
I also added a new pg-count calculation based on Ceph's latest documentation, which recommends rounding to the nearest power of 2. Since computing that power at runtime proved extremely slow in Python (many seconds), I opted to store the answers in a list called powers_of_two. I'm not creative with names.
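The lookup itself is a bisect over that table. A standalone sketch of the calculation (the helper name is mine; the body matches Pool.get_pgs for the >50 OSD case):

    import bisect

    # Precomputed pg counts covering 50 < osds < 240,000 (from the patch).
    powers_of_two = [8192, 16384, 32768, 65536, 131072, 262144, 524288,
                     1048576, 2097152, 4194304, 8388608]

    def pgs_for(osds, pool_size):
        # Ceph guideline: (OSDs * 100) / pool_size, rounded up to the next
        # power of two. pool_size is the replica count for replicated pools
        # or K+M for erasure coded pools.
        estimate = (osds * 100) / pool_size
        index = bisect.bisect_right(powers_of_two, estimate)
        return powers_of_two[index]

    # e.g. 1000 OSDs with 3 replicas: estimate ~33333, giving 65536 pgs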
You will also notice a function called validator. This function was born because of this documentation: http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
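A couple of worked calls, matching the docstring and the new unit tests:

    import six
    from charmhelpers.contrib.storage.linux.ceph import validator

    # Passes: 1 is an int inside the inclusive range [0, 2].
    validator(value=1, valid_type=int, valid_range=[0, 2])

    # Passes: strings are checked by membership in the list.
    validator(value='writeback', valid_type=six.string_types,
              valid_range=['readonly', 'writeback'])

    # Raises AssertionError with a message explaining the violation.
    try:
        validator(value=3, valid_type=int, valid_range=[0, 2])
    except AssertionError as err:
        print(err)  # 3 is greater than maximum allowed value of 2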
Chris Holcombe (xfactor973) wrote:
James Page (james-page) wrote:
$ make test
Checking for Python syntax...
tests/contrib/
Makefile:73: recipe for target 'lint' failed
James Page (james-page) wrote:
The python 3 tests are also erroring; discussed with Chris on IRC....
- 479. By Chris Holcombe: Add basestring from the past
James Page (james-page) wrote:
You might want to make use of six to deal with the py2/py3 compatibility:
https:/
=======
ERROR: Failure: ImportError (No module named past.builtins)
-------
Traceback (most recent call last):
File "/home/
addr.filename, addr.module)
File "/home/
return self.importFrom
File "/home/
mod = load_module(
File "/home/
from past.builtins import basestring
ImportError: No module named past.builtins
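A minimal illustration of the six-based check (the function name is illustrative):

    import six

    def is_string(value):
        # six.string_types is (basestring,) on Python 2 and (str,) on
        # Python 3, so one isinstance check covers both.
        return isinstance(value, six.string_types)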
- 480. By Chris Holcombe: Switch to more compatible six for python 2 and 3 basestring support
- 481. By Chris Holcombe: Merge upstream
- 482. By Chris Holcombe: Remove duplicated fns
James Page (james-page) wrote:
Hi Chris
I'm still seeing test failures under python3:
=======
ERROR: tests.contrib.
-------
testtools.
File "/home/
p.create()
File "/home/
erasure_profile = get_erasure_
File "/home/
return json.loads(out)
File "/usr/lib/
s._
TypeError: the JSON object must be str, not 'MagicMock'
=======
ERROR: tests.contrib.
-------
testtools.
File "/home/
cache_mode = ceph_utils.
File "/home/
osd_json = json.loads(out)
File "/usr/lib/
s._
TypeError: the JSON object must be str, not 'bytes'
=======
ERROR: tests.contrib.
-------
testtools.
File "/home/
ceph_
File "/home/
check_call(cmd)
File "/home/
return _mock_self.
File "/home/
raise effect
TypeError: __init__() missing 2 required positional arguments: 'returncode' and 'cmd'
=======
ERROR: tests.contrib.
-------
testtools.
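For reference, the bytes-vs-str failures above come from handing check_output's bytes straight to json.loads on Python 3; decoding first, as the existing get_osds helper already does, avoids it. A minimal sketch (the ceph invocation is borrowed from the patch):

    import json
    from subprocess import check_output

    def osd_dump(service):
        # check_output returns bytes on Python 3, but json.loads wants str
        # here, so decode before parsing.
        out = check_output(['ceph', '--id', service, 'osd', 'dump',
                            '--format=json']).decode('UTF-8')
        return json.loads(out)

The CalledProcessError failure is related: mock raises side_effect as given, so it needs to be an instance such as CalledProcessError(1, 'ceph'), not the bare class, which cannot be constructed without returncode and cmd.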
James Page (james-page) wrote:
Hi Chris
I've tried every which way to get the unit tests to pass under python3, but even in a clean trusty container, I see the same failures.
unittest.
I've reworked to use assertRaises and fixed another misc issue:
https:/
With this branch merged, all tests pass.
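The shape of that rework, as an illustrative sketch rather than the actual branch: assert the specific exception with assertRaises instead of marking the whole test as an expected failure:

    import unittest

    def divide(a, b):
        return a / b

    class DivideTests(unittest.TestCase):
        def test_divide_by_zero(self):
            # Fails loudly if no exception, or the wrong one, is raised;
            # @unittest.expectedFailure would 'pass' on any failure at all.
            self.assertRaises(ZeroDivisionError, divide, 1, 0)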
Chris Holcombe (xfactor973) wrote:
It looks good to me! Thanks for doing this.
Preview Diff
1 | === modified file '.bzrignore' |
2 | --- .bzrignore 2015-02-10 22:24:39 +0000 |
3 | +++ .bzrignore 2015-11-30 18:12:02 +0000 |
4 | @@ -8,6 +8,7 @@ |
5 | .env/ |
6 | coverage.xml |
7 | docs/_build |
8 | +.idea |
9 | .project |
10 | .pydevproject |
11 | .settings |
12 | |
13 | === modified file 'charmhelpers/contrib/storage/linux/ceph.py' |
14 | --- charmhelpers/contrib/storage/linux/ceph.py 2015-11-19 18:44:50 +0000 |
15 | +++ charmhelpers/contrib/storage/linux/ceph.py 2015-11-30 18:12:02 +0000 |
16 | @@ -23,10 +23,11 @@ |
17 | # James Page <james.page@ubuntu.com> |
18 | # Adam Gandelman <adamg@ubuntu.com> |
19 | # |
20 | +import bisect |
21 | +import six |
22 | |
23 | import os |
24 | import shutil |
25 | -import six |
26 | import json |
27 | import time |
28 | import uuid |
29 | @@ -73,6 +74,394 @@ |
30 | err to syslog = {use_syslog} |
31 | clog to syslog = {use_syslog} |
32 | """ |
33 | +# For 50 < osds < 240,000 OSDs (Roughly 1 Exabyte at 6T OSDs) |
34 | +powers_of_two = [8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608] |
35 | + |
36 | + |
37 | +def validator(value, valid_type, valid_range=None): |
38 | + """ |
39 | + Used to validate these: http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values |
40 | + Example input: |
41 | + validator(value=1, |
42 | + valid_type=int, |
43 | + valid_range=[0, 2]) |
44 | + This says I'm testing value=1. It must be an int inclusive in [0,2] |
45 | + |
46 | + :param value: The value to validate |
47 | + :param valid_type: The type that value should be. |
48 | + :param valid_range: A range of values that value can assume. |
49 | + :return: |
50 | + """ |
51 | + assert isinstance(value, valid_type), "{} is not a {}".format( |
52 | + value, |
53 | + valid_type) |
54 | + if valid_range is not None: |
55 | + assert isinstance(valid_range, list), \ |
56 | + "valid_range must be a list, was given {}".format(valid_range) |
57 | + # If we're dealing with strings |
58 | + if valid_type is six.string_types: |
59 | + assert value in valid_range, \ |
60 | + "{} is not in the list {}".format(value, valid_range) |
61 | + # Integer, float should have a min and max |
62 | + else: |
63 | + if len(valid_range) != 2: |
64 | + raise ValueError( |
65 | + "Invalid valid_range list of {} for {}. " |
66 | + "List must be [min,max]".format(valid_range, value)) |
67 | + assert value >= valid_range[0], \ |
68 | + "{} is less than minimum allowed value of {}".format( |
69 | + value, valid_range[0]) |
70 | + assert value <= valid_range[1], \ |
71 | + "{} is greater than maximum allowed value of {}".format( |
72 | + value, valid_range[1]) |
73 | + |
74 | + |
75 | +class PoolCreationError(Exception): |
76 | + """ |
77 | + A custom error to inform the caller that a pool creation failed. Provides an error message |
78 | + """ |
79 | + def __init__(self, message): |
80 | + super(PoolCreationError, self).__init__(message) |
81 | + |
82 | + |
83 | +class Pool(object): |
84 | + """ |
85 | + An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool. |
86 | + Do not call create() on this base class as it will not do anything. Instantiate a child class and call create(). |
87 | + """ |
88 | + def __init__(self, service, name): |
89 | + self.service = service |
90 | + self.name = name |
91 | + |
92 | + # Create the pool if it doesn't exist already |
93 | + # To be implemented by subclasses |
94 | + def create(self): |
95 | + pass |
96 | + |
97 | + def add_cache_tier(self, cache_pool, mode): |
98 | + """ |
99 | + Adds a new cache tier to an existing pool. |
100 | + :param cache_pool: six.string_types. The cache tier pool name to add. |
101 | + :param mode: six.string_types. The caching mode to use for this pool. valid range = ["readonly", "writeback"] |
102 | + :return: None |
103 | + """ |
104 | + # Check the input types and values |
105 | + validator(value=cache_pool, valid_type=six.string_types) |
106 | + validator(value=mode, valid_type=six.string_types, valid_range=["readonly", "writeback"]) |
107 | + |
108 | + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'add', self.name, cache_pool]) |
109 | + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, mode]) |
110 | + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'set-overlay', self.name, cache_pool]) |
111 | + check_call(['ceph', '--id', self.service, 'osd', 'pool', 'set', cache_pool, 'hit_set_type', 'bloom']) |
112 | + |
113 | + def remove_cache_tier(self, cache_pool): |
114 | + """ |
115 | + Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete. |
116 | + :param cache_pool: six.string_types. The cache tier pool name to remove. |
117 | + :return: None |
118 | + """ |
119 | + # read-only is easy, writeback is much harder |
120 | + mode = get_cache_mode(cache_pool) |
121 | + if mode == 'readonly': |
122 | + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'none']) |
123 | + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool]) |
124 | + |
125 | + elif mode == 'writeback': |
126 | + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'forward']) |
127 | + # Flush the cache and wait for it to return |
128 | + check_call(['ceph', '--id', self.service, '-p', cache_pool, 'cache-flush-evict-all']) |
129 | + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove-overlay', self.name]) |
130 | + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool]) |
131 | + |
132 | + def get_pgs(self, pool_size): |
133 | + """ |
134 | + :param pool_size: int. pool_size is either the number of replicas for replicated pools or the K+M sum for |
135 | + erasure coded pools |
136 | + :return: int. The number of pgs to use. |
137 | + """ |
138 | + validator(value=pool_size, valid_type=int) |
139 | + osds = get_osds(self.service) |
140 | + if not osds: |
141 | + # NOTE(james-page): Default to 200 for older ceph versions |
142 | + # which don't support OSD query from cli |
143 | + return 200 |
144 | + |
145 | + # Calculate based on Ceph best practices |
146 | + if osds < 5: |
147 | + return 128 |
148 | + elif 5 < osds < 10: |
149 | + return 512 |
150 | + elif 10 < osds < 50: |
151 | + return 4096 |
152 | + else: |
153 | + estimate = (osds * 100) / pool_size |
154 | + # Return the next nearest power of 2 |
155 | + index = bisect.bisect_right(powers_of_two, estimate) |
156 | + return powers_of_two[index] |
157 | + |
158 | + |
159 | +class ReplicatedPool(Pool): |
160 | + def __init__(self, service, name, replicas=2): |
161 | + super(ReplicatedPool, self).__init__(service=service, name=name) |
162 | + self.replicas = replicas |
163 | + |
164 | + def create(self): |
165 | + if not pool_exists(self.service, self.name): |
166 | + # Create it |
167 | + pgs = self.get_pgs(self.replicas) |
168 | + cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs)] |
169 | + try: |
170 | + check_call(cmd) |
171 | + except CalledProcessError: |
172 | + raise |
173 | + |
174 | + |
175 | +# Default jerasure erasure coded pool |
176 | +class ErasurePool(Pool): |
177 | + def __init__(self, service, name, erasure_code_profile="default"): |
178 | + super(ErasurePool, self).__init__(service=service, name=name) |
179 | + self.erasure_code_profile = erasure_code_profile |
180 | + |
181 | + def create(self): |
182 | + if not pool_exists(self.service, self.name): |
183 | + # Try to find the erasure profile information so we can properly size the pgs |
184 | + erasure_profile = get_erasure_profile(service=self.service, name=self.erasure_code_profile) |
185 | + |
186 | + # Check for errors |
187 | + if erasure_profile is None: |
188 | + log(message='Failed to discover erasure_profile named={}'.format(self.erasure_code_profile), |
189 | + level=ERROR) |
190 | + raise PoolCreationError(message='unable to find erasure profile {}'.format(self.erasure_code_profile)) |
191 | + if 'k' not in erasure_profile or 'm' not in erasure_profile: |
192 | + # Error |
193 | + log(message='Unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile), |
194 | + level=ERROR) |
195 | + raise PoolCreationError( |
196 | + message='unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile)) |
197 | + |
198 | + pgs = self.get_pgs(int(erasure_profile['k']) + int(erasure_profile['m'])) |
199 | + # Create it |
200 | + cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs), |
201 | + 'erasure', self.erasure_code_profile] |
202 | + try: |
203 | + check_call(cmd) |
204 | + except CalledProcessError: |
205 | + raise |
206 | + |
207 | + """Get an existing erasure code profile if it already exists. |
208 | + Returns json formatted output""" |
209 | + |
210 | + |
211 | +def get_erasure_profile(service, name): |
212 | + """ |
213 | + :param service: six.string_types. The Ceph user name to run the command under |
214 | + :param name: |
215 | + :return: |
216 | + """ |
217 | + try: |
218 | + out = check_output(['ceph', '--id', service, |
219 | + 'osd', 'erasure-code-profile', 'get', |
220 | + name, '--format=json']) |
221 | + return json.loads(out) |
222 | + except (CalledProcessError, OSError, ValueError): |
223 | + return None |
224 | + |
225 | + |
226 | +def pool_set(service, pool_name, key, value): |
227 | + """ |
228 | + Sets a value for a RADOS pool in ceph. |
229 | + :param service: six.string_types. The Ceph user name to run the command under |
230 | + :param pool_name: six.string_types |
231 | + :param key: six.string_types |
232 | + :param value: |
233 | + :return: None. Can raise CalledProcessError |
234 | + """ |
235 | + cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, key, value] |
236 | + try: |
237 | + check_call(cmd) |
238 | + except CalledProcessError: |
239 | + raise |
240 | + |
241 | + |
242 | +def snapshot_pool(service, pool_name, snapshot_name): |
243 | + """ |
244 | + Snapshots a RADOS pool in ceph. |
245 | + :param service: six.string_types. The Ceph user name to run the command under |
246 | + :param pool_name: six.string_types |
247 | + :param snapshot_name: six.string_types |
248 | + :return: None. Can raise CalledProcessError |
249 | + """ |
250 | + cmd = ['ceph', '--id', service, 'osd', 'pool', 'mksnap', pool_name, snapshot_name] |
251 | + try: |
252 | + check_call(cmd) |
253 | + except CalledProcessError: |
254 | + raise |
255 | + |
256 | + |
257 | +def remove_pool_snapshot(service, pool_name, snapshot_name): |
258 | + """ |
259 | + Remove a snapshot from a RADOS pool in ceph. |
260 | + :param service: six.string_types. The Ceph user name to run the command under |
261 | + :param pool_name: six.string_types |
262 | + :param snapshot_name: six.string_types |
263 | + :return: None. Can raise CalledProcessError |
264 | + """ |
265 | + cmd = ['ceph', '--id', service, 'osd', 'pool', 'rmsnap', pool_name, snapshot_name] |
266 | + try: |
267 | + check_call(cmd) |
268 | + except CalledProcessError: |
269 | + raise |
270 | + |
271 | + |
272 | +# max_bytes should be an int or long |
273 | +def set_pool_quota(service, pool_name, max_bytes): |
274 | + """ |
275 | + :param service: six.string_types. The Ceph user name to run the command under |
276 | + :param pool_name: six.string_types |
277 | + :param max_bytes: int or long |
278 | + :return: None. Can raise CalledProcessError |
279 | + """ |
280 | + # Set a byte quota on a RADOS pool in ceph. |
281 | + cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', max_bytes] |
282 | + try: |
283 | + check_call(cmd) |
284 | + except CalledProcessError: |
285 | + raise |
286 | + |
287 | + |
288 | +def remove_pool_quota(service, pool_name): |
289 | + """ |
290 | + Set a byte quota on a RADOS pool in ceph. |
291 | + :param service: six.string_types. The Ceph user name to run the command under |
292 | + :param pool_name: six.string_types |
293 | + :return: None. Can raise CalledProcessError |
294 | + """ |
295 | + cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', '0'] |
296 | + try: |
297 | + check_call(cmd) |
298 | + except CalledProcessError: |
299 | + raise |
300 | + |
301 | + |
302 | +def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure', failure_domain='host', |
303 | + data_chunks=2, coding_chunks=1, |
304 | + locality=None, durability_estimator=None): |
305 | + """ |
306 | + Create a new erasure code profile if one does not already exist for it. Updates |
307 | + the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/ |
308 | + for more details |
309 | + :param service: six.string_types. The Ceph user name to run the command under |
310 | + :param profile_name: six.string_types |
311 | + :param erasure_plugin_name: six.string_types |
312 | + :param failure_domain: six.string_types. One of ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', |
313 | + 'room', 'root', 'row']) |
314 | + :param data_chunks: int |
315 | + :param coding_chunks: int |
316 | + :param locality: int |
317 | + :param durability_estimator: int |
318 | + :return: None. Can raise CalledProcessError |
319 | + """ |
320 | + # Ensure this failure_domain is allowed by Ceph |
321 | + validator(failure_domain, six.string_types, |
322 | + ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row']) |
323 | + |
324 | + cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set', profile_name, |
325 | + 'plugin=' + erasure_plugin_name, 'k=' + str(data_chunks), 'm=' + str(coding_chunks), |
326 | + 'ruleset_failure_domain=' + failure_domain] |
327 | + if locality is not None and durability_estimator is not None: |
328 | + raise ValueError("create_erasure_profile should be called with k, m and one of l or c but not both.") |
329 | + |
330 | + # Add plugin specific information |
331 | + if locality is not None: |
332 | + # For local erasure codes |
333 | + cmd.append('l=' + str(locality)) |
334 | + if durability_estimator is not None: |
335 | + # For Shec erasure codes |
336 | + cmd.append('c=' + str(durability_estimator)) |
337 | + |
338 | + if erasure_profile_exists(service, profile_name): |
339 | + cmd.append('--force') |
340 | + |
341 | + try: |
342 | + check_call(cmd) |
343 | + except CalledProcessError: |
344 | + raise |
345 | + |
346 | + |
347 | +def rename_pool(service, old_name, new_name): |
348 | + """ |
349 | + Rename a Ceph pool from old_name to new_name |
350 | + :param service: six.string_types. The Ceph user name to run the command under |
351 | + :param old_name: six.string_types |
352 | + :param new_name: six.string_types |
353 | + :return: None |
354 | + """ |
355 | + validator(value=old_name, valid_type=six.string_types) |
356 | + validator(value=new_name, valid_type=six.string_types) |
357 | + |
358 | + cmd = ['ceph', '--id', service, 'osd', 'pool', 'rename', old_name, new_name] |
359 | + check_call(cmd) |
360 | + |
361 | + |
362 | +def erasure_profile_exists(service, name): |
363 | + """ |
364 | + Check to see if an Erasure code profile already exists. |
365 | + :param service: six.string_types. The Ceph user name to run the command under |
366 | + :param name: six.string_types |
367 | + :return: int or None |
368 | + """ |
369 | + validator(value=name, valid_type=six.string_types) |
370 | + try: |
371 | + check_call(['ceph', '--id', service, |
372 | + 'osd', 'erasure-code-profile', 'get', |
373 | + name]) |
374 | + return True |
375 | + except CalledProcessError: |
376 | + return False |
377 | + |
378 | + |
379 | +def get_cache_mode(service, pool_name): |
380 | + """ |
381 | + Find the current caching mode of the pool_name given. |
382 | + :param service: six.string_types. The Ceph user name to run the command under |
383 | + :param pool_name: six.string_types |
384 | + :return: int or None |
385 | + """ |
386 | + validator(value=service, valid_type=six.string_types) |
387 | + validator(value=pool_name, valid_type=six.string_types) |
388 | + out = check_output(['ceph', '--id', service, 'osd', 'dump', '--format=json']) |
389 | + try: |
390 | + osd_json = json.loads(out) |
391 | + for pool in osd_json['pools']: |
392 | + if pool['pool_name'] == pool_name: |
393 | + return pool['cache_mode'] |
394 | + return None |
395 | + except ValueError: |
396 | + raise |
397 | + |
398 | + |
399 | +def pool_exists(service, name): |
400 | + """Check to see if a RADOS pool already exists.""" |
401 | + try: |
402 | + out = check_output(['rados', '--id', service, |
403 | + 'lspools']).decode('UTF-8') |
404 | + except CalledProcessError: |
405 | + return False |
406 | + |
407 | + return name in out |
408 | + |
409 | + |
410 | +def get_osds(service): |
411 | + """Return a list of all Ceph Object Storage Daemons currently in the |
412 | + cluster. |
413 | + """ |
414 | + version = ceph_version() |
415 | + if version and version >= '0.56': |
416 | + return json.loads(check_output(['ceph', '--id', service, |
417 | + 'osd', 'ls', |
418 | + '--format=json']).decode('UTF-8')) |
419 | + |
420 | + return None |
421 | |
422 | |
423 | def install(): |
424 | @@ -102,30 +491,6 @@ |
425 | check_call(cmd) |
426 | |
427 | |
428 | -def pool_exists(service, name): |
429 | - """Check to see if a RADOS pool already exists.""" |
430 | - try: |
431 | - out = check_output(['rados', '--id', service, |
432 | - 'lspools']).decode('UTF-8') |
433 | - except CalledProcessError: |
434 | - return False |
435 | - |
436 | - return name in out |
437 | - |
438 | - |
439 | -def get_osds(service): |
440 | - """Return a list of all Ceph Object Storage Daemons currently in the |
441 | - cluster. |
442 | - """ |
443 | - version = ceph_version() |
444 | - if version and version >= '0.56': |
445 | - return json.loads(check_output(['ceph', '--id', service, |
446 | - 'osd', 'ls', |
447 | - '--format=json']).decode('UTF-8')) |
448 | - |
449 | - return None |
450 | - |
451 | - |
452 | def update_pool(client, pool, settings): |
453 | cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool] |
454 | for k, v in six.iteritems(settings): |
455 | @@ -414,6 +779,7 @@ |
456 | |
457 | The API is versioned and defaults to version 1. |
458 | """ |
459 | + |
460 | def __init__(self, api_version=1, request_id=None): |
461 | self.api_version = api_version |
462 | if request_id: |
463 | |
464 | === modified file 'tests/contrib/storage/test_linux_ceph.py' |
465 | --- tests/contrib/storage/test_linux_ceph.py 2015-09-11 12:19:23 +0000 |
466 | +++ tests/contrib/storage/test_linux_ceph.py 2015-11-30 18:12:02 +0000 |
467 | @@ -1,5 +1,7 @@ |
468 | +import unittest |
469 | from mock import patch, call |
470 | |
471 | +import six |
472 | from shutil import rmtree |
473 | from tempfile import mkdtemp |
474 | from threading import Timer |
475 | @@ -14,7 +16,6 @@ |
476 | import os |
477 | import time |
478 | |
479 | - |
480 | LS_POOLS = b""" |
481 | images |
482 | volumes |
483 | @@ -31,6 +32,55 @@ |
484 | bar |
485 | baz |
486 | """ |
487 | +# Vastly abbreviated output from ceph osd dump --format=json |
488 | +OSD_DUMP = b""" |
489 | +{ |
490 | + "pools": [ |
491 | + { |
492 | + "pool": 2, |
493 | + "pool_name": "rbd", |
494 | + "flags": 1, |
495 | + "flags_names": "hashpspool", |
496 | + "type": 1, |
497 | + "size": 3, |
498 | + "min_size": 2, |
499 | + "crush_ruleset": 0, |
500 | + "object_hash": 2, |
501 | + "pg_num": 64, |
502 | + "pg_placement_num": 64, |
503 | + "crash_replay_interval": 0, |
504 | + "last_change": "1", |
505 | + "last_force_op_resend": "0", |
506 | + "auid": 0, |
507 | + "snap_mode": "selfmanaged", |
508 | + "snap_seq": 0, |
509 | + "snap_epoch": 0, |
510 | + "pool_snaps": [], |
511 | + "removed_snaps": "[]", |
512 | + "quota_max_bytes": 0, |
513 | + "quota_max_objects": 0, |
514 | + "tiers": [], |
515 | + "tier_of": -1, |
516 | + "read_tier": -1, |
517 | + "write_tier": -1, |
518 | + "cache_mode": "writeback", |
519 | + "target_max_bytes": 0, |
520 | + "target_max_objects": 0, |
521 | + "cache_target_dirty_ratio_micro": 0, |
522 | + "cache_target_full_ratio_micro": 0, |
523 | + "cache_min_flush_age": 0, |
524 | + "cache_min_evict_age": 0, |
525 | + "erasure_code_profile": "", |
526 | + "hit_set_params": { |
527 | + "type": "none" |
528 | + }, |
529 | + "hit_set_period": 0, |
530 | + "hit_set_count": 0, |
531 | + "stripe_width": 0 |
532 | + } |
533 | + ] |
534 | +} |
535 | +""" |
536 | |
537 | CEPH_CLIENT_RELATION = { |
538 | 'ceph:8': { |
539 | @@ -92,9 +142,256 @@ |
540 | self.addCleanup(_m.stop) |
541 | setattr(self, method, mock) |
542 | |
543 | + def test_validator_valid(self): |
544 | + # 1 is an int |
545 | + ceph_utils.validator(value=1, |
546 | + valid_type=int) |
547 | + |
548 | + def test_validator_valid_range(self): |
549 | + # 1 is an int between 0 and 2 |
550 | + ceph_utils.validator(value=1, |
551 | + valid_type=int, |
552 | + valid_range=[0, 2]) |
553 | + |
554 | + @unittest.expectedFailure |
555 | + def test_validator_invalid_range(self): |
556 | + # 1 is an int that isn't in the valid list of only 0 |
557 | + ceph_utils.validator(value=1, |
558 | + valid_type=int, |
559 | + valid_range=[0]) |
560 | + |
561 | + @unittest.expectedFailure |
562 | + def test_validator_invalid_string_list(self): |
563 | + # foo is a six.string_types that isn't in the valid string list |
564 | + ceph_utils.validator(value="foo", |
565 | + valid_type=six.string_types, |
566 | + valid_range=["valid", "list", "of", "strings"]) |
567 | + |
568 | + def test_pool_add_cache_tier(self): |
569 | + p = ceph_utils.Pool(name='test', service='admin') |
570 | + p.add_cache_tier('cacher', 'readonly') |
571 | + self.check_call.assert_has_calls([ |
572 | + call(['ceph', '--id', 'admin', 'osd', 'tier', 'add', 'test', 'cacher']), |
573 | + call(['ceph', '--id', 'admin', 'osd', 'tier', 'cache-mode', 'cacher', 'readonly']), |
574 | + call(['ceph', '--id', 'admin', 'osd', 'tier', 'set-overlay', 'test', 'cacher']), |
575 | + call(['ceph', '--id', 'admin', 'osd', 'pool', 'set', 'cacher', 'hit_set_type', 'bloom']), |
576 | + ]) |
577 | + |
578 | + @patch.object(ceph_utils, 'get_cache_mode') |
579 | + def test_pool_remove_readonly_cache_tier(self, cache_mode): |
580 | + cache_mode.return_value = 'readonly' |
581 | + |
582 | + p = ceph_utils.Pool(name='test', service='admin') |
583 | + p.remove_cache_tier(cache_pool='cacher') |
584 | + self.check_call.assert_has_calls([ |
585 | + call(['ceph', '--id', 'admin', 'osd', 'tier', 'cache-mode', 'cacher', 'none']), |
586 | + call(['ceph', '--id', 'admin', 'osd', 'tier', 'remove', 'test', 'cacher']), |
587 | + ]) |
588 | + |
589 | + @patch.object(ceph_utils, 'get_cache_mode') |
590 | + def test_pool_remove_writeback_cache_tier(self, cache_mode): |
591 | + cache_mode.return_value = 'writeback' |
592 | + |
593 | + p = ceph_utils.Pool(name='test', service='admin') |
594 | + p.remove_cache_tier(cache_pool='cacher') |
595 | + self.check_call.assert_has_calls([ |
596 | + call(['ceph', '--id', 'admin', 'osd', 'tier', 'cache-mode', 'cacher', 'forward']), |
597 | + call(['ceph', '--id', 'admin', '-p', 'cacher', 'cache-flush-evict-all']), |
598 | + call(['ceph', '--id', 'admin', 'osd', 'tier', 'remove-overlay', 'test']), |
599 | + call(['ceph', '--id', 'admin', 'osd', 'tier', 'remove', 'test', 'cacher']), |
600 | + ]) |
601 | + |
602 | + @patch.object(ceph_utils, 'get_osds') |
603 | + def test_replicated_pool_create_old_ceph(self, get_osds): |
604 | + get_osds.return_value = None |
605 | + p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3) |
606 | + p.create() |
607 | + |
608 | + self.check_call.assert_has_calls([ |
609 | + call(['ceph', '--id', 'admin', 'osd', 'pool', 'create', 'test', str(200)]), |
610 | + ]) |
611 | + |
612 | + @patch.object(ceph_utils, 'get_osds') |
613 | + def test_replicated_pool_create_small_osds(self, get_osds): |
614 | + get_osds.return_value = 4 |
615 | + p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3) |
616 | + p.create() |
617 | + |
618 | + self.check_call.assert_has_calls([ |
619 | + call(['ceph', '--id', 'admin', 'osd', 'pool', 'create', 'test', str(128)]), |
620 | + ]) |
621 | + |
622 | + @patch.object(ceph_utils, 'get_osds') |
623 | + def test_replicated_pool_create_medium_osds(self, get_osds): |
624 | + get_osds.return_value = 8 |
625 | + p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3) |
626 | + p.create() |
627 | + |
628 | + self.check_call.assert_has_calls([ |
629 | + call(['ceph', '--id', 'admin', 'osd', 'pool', 'create', 'test', str(512)]), |
630 | + ]) |
631 | + |
632 | + @patch.object(ceph_utils, 'get_osds') |
633 | + def test_replicated_pool_create_large_osds(self, get_osds): |
634 | + get_osds.return_value = 40 |
635 | + p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3) |
636 | + p.create() |
637 | + |
638 | + self.check_call.assert_has_calls([ |
639 | + call(['ceph', '--id', 'admin', 'osd', 'pool', 'create', 'test', str(4096)]), |
640 | + ]) |
641 | + |
642 | + @patch.object(ceph_utils, 'get_osds') |
643 | + def test_replicated_pool_create_xlarge_osds(self, get_osds): |
644 | + get_osds.return_value = 1000 |
645 | + p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3) |
646 | + p.create() |
647 | + |
648 | + self.check_call.assert_has_calls([ |
649 | + call(['ceph', '--id', 'admin', 'osd', 'pool', 'create', 'test', str(65536)]), |
650 | + ]) |
651 | + |
652 | + @unittest.expectedFailure |
653 | + def test_replicated_pool_create_failed(self): |
654 | + self.check_call.side_effect = CalledProcessError |
655 | + p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3) |
656 | + p.create() |
657 | + |
658 | + @patch.object(ceph_utils, 'pool_exists') |
659 | + def test_replicated_pool_skips_creation(self, pool_exists): |
660 | + pool_exists.return_value = True |
661 | + p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3) |
662 | + p.create() |
663 | + self.check_call.assert_has_calls([]) |
664 | + |
665 | + @unittest.expectedFailure |
666 | + def test_erasure_pool_create_failed(self): |
667 | + p = ceph_utils.ErasurePool(name='test', service='admin', erasure_code_profile='foo') |
668 | + p.create() |
669 | + |
670 | + @patch.object(ceph_utils, 'get_erasure_profile') |
671 | + @patch.object(ceph_utils, 'get_osds') |
672 | + def test_erasure_pool_create(self, get_osds, erasure_profile): |
673 | + get_osds.return_value = 60 |
674 | + erasure_profile.return_value = { |
675 | + 'directory': '/usr/lib/x86_64-linux-gnu/ceph/erasure-code', |
676 | + 'k': '2', |
677 | + 'technique': 'reed_sol_van', |
678 | + 'm': '1', |
679 | + 'plugin': 'jerasure'} |
680 | + p = ceph_utils.ErasurePool(name='test', service='admin') |
681 | + p.create() |
682 | + self.check_call.assert_has_calls( |
683 | + call(['ceph', '--id', 'admin', 'osd', 'pool', 'create', 'test', str(8192), |
684 | + 'erasure', 'default']) |
685 | + ) |
686 | + |
687 | + def test_get_erasure_profile_none(self): |
688 | + self.check_output.side_effect = CalledProcessError(1, 'ceph') |
689 | + return_value = ceph_utils.get_erasure_profile('admin', 'unknown') |
690 | + self.assertEqual(None, return_value) |
691 | + |
692 | + def test_pool_set(self): |
693 | + self.check_call.return_value = 0 |
694 | + ceph_utils.pool_set(service='admin', pool_name='data', key='test', value=2) |
695 | + self.check_call.assert_has_calls( |
696 | + call(['ceph', '--id', 'admin', 'osd', 'pool', 'set', 'data', 'test', 2]) |
697 | + ) |
698 | + |
699 | + @unittest.expectedFailure |
700 | + def test_pool_set_fails(self): |
701 | + self.check_call.side_effect = CalledProcessError |
702 | + ceph_utils.pool_set(service='admin', pool_name='data', key='test', value=2) |
703 | + |
704 | + def test_snapshot_pool(self): |
705 | + self.check_call.return_value = 0 |
706 | + ceph_utils.snapshot_pool(service='admin', pool_name='data', snapshot_name='test-snap-1') |
707 | + self.check_call.assert_has_calls( |
708 | + call(['ceph', '--id', 'admin', 'osd', 'pool', 'mksnap', 'data', 'test-snap-1']) |
709 | + ) |
710 | + |
711 | + @unittest.expectedFailure |
712 | + def test_snapshot_pool_fails(self): |
713 | + self.check_call.side_effect = CalledProcessError |
714 | + ceph_utils.snapshot_pool(service='admin', pool_name='data', snapshot_name='test-snap-1') |
715 | + |
716 | + def test_remove_pool_snapshot(self): |
717 | + self.check_call.return_value = 0 |
718 | + ceph_utils.remove_pool_snapshot(service='admin', pool_name='data', snapshot_name='test-snap-1') |
719 | + self.check_call.assert_has_calls( |
720 | + call(['ceph', '--id', 'admin', 'osd', 'pool', 'rmsnap', 'data', 'test-snap-1']) |
721 | + ) |
722 | + |
723 | + def test_set_pool_quota(self): |
724 | + self.check_call.return_value = 0 |
725 | + ceph_utils.set_pool_quota(service='admin', pool_name='data', max_bytes=1024) |
726 | + self.check_call.assert_has_calls( |
727 | + call(['ceph', '--id', 'admin', 'osd', 'pool', 'set-quota', 'data', 'max_bytes', 1024]) |
728 | + ) |
729 | + |
730 | + def test_remove_pool_quota(self): |
731 | + self.check_call.return_value = 0 |
732 | + ceph_utils.remove_pool_quota(service='admin', pool_name='data') |
733 | + self.check_call.assert_has_calls( |
734 | + call(['ceph', '--id', 'admin', 'osd', 'pool', 'set-quota', 'data', 'max_bytes', '0']) |
735 | + ) |
736 | + |
737 | + @patch.object(ceph_utils, 'erasure_profile_exists') |
738 | + def test_create_erasure_profile(self, existing_profile): |
739 | + existing_profile.return_value = True |
740 | + ceph_utils.create_erasure_profile(service='admin', profile_name='super-profile', erasure_plugin_name='jerasure', |
741 | + failure_domain='rack', data_chunks=10, coding_chunks=3) |
742 | + |
743 | + cmd = ['ceph', '--id', 'admin', 'osd', 'erasure-code-profile', 'set', 'super-profile', |
744 | + 'plugin=' + 'jerasure', 'k=' + str(10), 'm=' + str(3), |
745 | + 'ruleset_failure_domain=' + 'rack', '--force'] |
746 | + self.check_call.assert_has_calls(call(cmd)) |
747 | + |
748 | + @patch.object(ceph_utils, 'erasure_profile_exists') |
749 | + def test_create_erasure_profile_local(self, existing_profile): |
750 | + existing_profile.return_value = False |
751 | + ceph_utils.create_erasure_profile(service='admin', profile_name='super-profile', erasure_plugin_name='local', |
752 | + failure_domain='rack', data_chunks=10, coding_chunks=3, locality=1) |
753 | + |
754 | + cmd = ['ceph', '--id', 'admin', 'osd', 'erasure-code-profile', 'set', 'super-profile', |
755 | + 'plugin=' + 'local', 'k=' + str(10), 'm=' + str(3), |
756 | + 'ruleset_failure_domain=' + 'rack', 'l=' + str(1)] |
757 | + self.check_call.assert_has_calls(call(cmd)) |
758 | + |
759 | + @patch.object(ceph_utils, 'erasure_profile_exists') |
760 | + def test_create_erasure_profile_shec(self, existing_profile): |
761 | + existing_profile.return_value = False |
762 | + ceph_utils.create_erasure_profile(service='admin', profile_name='super-profile', erasure_plugin_name='shec', |
763 | + failure_domain='rack', data_chunks=10, coding_chunks=3, |
764 | + durability_estimator=1) |
765 | + |
766 | + cmd = ['ceph', '--id', 'admin', 'osd', 'erasure-code-profile', 'set', 'super-profile', |
767 | + 'plugin=' + 'shec', 'k=' + str(10), 'm=' + str(3), |
768 | + 'ruleset_failure_domain=' + 'rack', 'c=' + str(1)] |
769 | + self.check_call.assert_has_calls(call(cmd)) |
770 | + |
771 | + def test_rename_pool(self): |
772 | + ceph_utils.rename_pool(service='admin', old_name='old-pool', new_name='new-pool') |
773 | + cmd = ['ceph', '--id', 'admin', 'osd', 'pool', 'rename', 'old-pool', 'new-pool'] |
774 | + self.check_call.assert_called_with(cmd) |
775 | + |
776 | + def test_erasure_profile_exists(self): |
777 | + self.check_call.return_value = 0 |
778 | + profile_exists = ceph_utils.erasure_profile_exists(service='admin', name='super-profile') |
779 | + cmd = ['ceph', '--id', 'admin', |
780 | + 'osd', 'erasure-code-profile', 'get', |
781 | + 'super-profile'] |
782 | + self.check_call.assert_called_with(cmd) |
783 | + self.assertEqual(True, profile_exists) |
784 | + |
785 | + def test_get_cache_mode(self): |
786 | + self.check_output.return_value = OSD_DUMP |
787 | + cache_mode = ceph_utils.get_cache_mode(service='admin', pool_name='rbd') |
788 | + self.assertEqual("writeback", cache_mode) |
789 | + |
790 | @patch('os.path.exists') |
791 | def test_create_keyring(self, _exists): |
792 | - '''It creates a new ceph keyring''' |
793 | + """It creates a new ceph keyring""" |
794 | _exists.return_value = False |
795 | ceph_utils.create_keyring('cinder', 'cephkey') |
796 | _cmd = ['ceph-authtool', '/etc/ceph/ceph.client.cinder.keyring', |
797 | @@ -104,7 +401,7 @@ |
798 | |
799 | @patch('os.path.exists') |
800 | def test_create_keyring_already_exists(self, _exists): |
801 | - '''It creates a new ceph keyring''' |
802 | + """It creates a new ceph keyring""" |
803 | _exists.return_value = True |
804 | ceph_utils.create_keyring('cinder', 'cephkey') |
805 | self.assertTrue(self.log.called) |
806 | @@ -113,7 +410,7 @@ |
807 | @patch('os.remove') |
808 | @patch('os.path.exists') |
809 | def test_delete_keyring(self, _exists, _remove): |
810 | - '''It deletes a ceph keyring.''' |
811 | + """It deletes a ceph keyring.""" |
812 | _exists.return_value = True |
813 | ceph_utils.delete_keyring('cinder') |
814 | _remove.assert_called_with('/etc/ceph/ceph.client.cinder.keyring') |
815 | @@ -122,7 +419,7 @@ |
816 | @patch('os.remove') |
817 | @patch('os.path.exists') |
818 | def test_delete_keyring_not_exists(self, _exists, _remove): |
819 | - '''It creates a new ceph keyring.''' |
820 | + """It creates a new ceph keyring.""" |
821 | _exists.return_value = False |
822 | ceph_utils.delete_keyring('cinder') |
823 | self.assertTrue(self.log.called) |
824 | @@ -130,7 +427,7 @@ |
825 | |
826 | @patch('os.path.exists') |
827 | def test_create_keyfile(self, _exists): |
828 | - '''It creates a new ceph keyfile''' |
829 | + """It creates a new ceph keyfile""" |
830 | _exists.return_value = False |
831 | with patch_open() as (_open, _file): |
832 | ceph_utils.create_key_file('cinder', 'cephkey') |
833 | @@ -139,7 +436,7 @@ |
834 | |
835 | @patch('os.path.exists') |
836 | def test_create_key_file_already_exists(self, _exists): |
837 | - '''It creates a new ceph keyring''' |
838 | + """It creates a new ceph keyring""" |
839 | _exists.return_value = True |
840 | ceph_utils.create_key_file('cinder', 'cephkey') |
841 | self.assertTrue(self.log.called) |
842 | @@ -173,7 +470,7 @@ |
843 | @patch.object(ceph_utils, 'get_osds') |
844 | @patch.object(ceph_utils, 'pool_exists') |
845 | def test_create_pool(self, _exists, _get_osds): |
846 | - '''It creates rados pool correctly with default replicas ''' |
847 | + """It creates rados pool correctly with default replicas """ |
848 | _exists.return_value = False |
849 | _get_osds.return_value = [1, 2, 3] |
850 | ceph_utils.create_pool(service='cinder', name='foo') |
851 | @@ -187,7 +484,7 @@ |
852 | @patch.object(ceph_utils, 'get_osds') |
853 | @patch.object(ceph_utils, 'pool_exists') |
854 | def test_create_pool_2_replicas(self, _exists, _get_osds): |
855 | - '''It creates rados pool correctly with 3 replicas''' |
856 | + """It creates rados pool correctly with 3 replicas""" |
857 | _exists.return_value = False |
858 | _get_osds.return_value = [1, 2, 3] |
859 | ceph_utils.create_pool(service='cinder', name='foo', replicas=2) |
860 | @@ -201,7 +498,7 @@ |
861 | @patch.object(ceph_utils, 'get_osds') |
862 | @patch.object(ceph_utils, 'pool_exists') |
863 | def test_create_pool_argonaut(self, _exists, _get_osds): |
864 | - '''It creates rados pool correctly with 3 replicas''' |
865 | + """It creates rados pool correctly with 3 replicas""" |
866 | _exists.return_value = False |
867 | _get_osds.return_value = None |
868 | ceph_utils.create_pool(service='cinder', name='foo') |
869 | @@ -220,27 +517,27 @@ |
870 | self.check_call.assert_not_called() |
871 | |
872 | def test_keyring_path(self): |
873 | - '''It correctly dervies keyring path from service name''' |
874 | + """It correctly dervies keyring path from service name""" |
875 | result = ceph_utils._keyring_path('cinder') |
876 | self.assertEquals('/etc/ceph/ceph.client.cinder.keyring', result) |
877 | |
878 | def test_keyfile_path(self): |
879 | - '''It correctly dervies keyring path from service name''' |
880 | + """It correctly dervies keyring path from service name""" |
881 | result = ceph_utils._keyfile_path('cinder') |
882 | self.assertEquals('/etc/ceph/ceph.client.cinder.key', result) |
883 | |
884 | def test_pool_exists(self): |
885 | - '''It detects an rbd pool exists''' |
886 | + """It detects an rbd pool exists""" |
887 | self.check_output.return_value = LS_POOLS |
888 | self.assertTrue(ceph_utils.pool_exists('cinder', 'volumes')) |
889 | |
890 | def test_pool_does_not_exist(self): |
891 | - '''It detects an rbd pool exists''' |
892 | + """It detects an rbd pool exists""" |
893 | self.check_output.return_value = LS_POOLS |
894 | self.assertFalse(ceph_utils.pool_exists('cinder', 'foo')) |
895 | |
896 | def test_pool_exists_error(self): |
897 | - ''' Ensure subprocess errors and sandboxed with False ''' |
898 | + """ Ensure subprocess errors and sandboxed with False """ |
899 | self.check_output.side_effect = CalledProcessError(1, 'rados') |
900 | self.assertFalse(ceph_utils.pool_exists('cinder', 'foo')) |
901 | |
902 | @@ -259,7 +556,7 @@ |
903 | ) |
904 | |
905 | def test_rbd_exists_error(self): |
906 | - ''' Ensure subprocess errors and sandboxed with False ''' |
907 | + """ Ensure subprocess errors and sandboxed with False """ |
908 | self.check_output.side_effect = CalledProcessError(1, 'rbd') |
909 | self.assertFalse(ceph_utils.rbd_exists('cinder', 'foo', 'rbd')) |
910 | |
911 | @@ -473,13 +770,13 @@ |
912 | self.service_start.assert_called_with(_services[0]) |
913 | |
914 | def test_make_filesystem_default_filesystem(self): |
915 | - '''make_filesystem() uses ext4 as the default filesystem.''' |
916 | + """make_filesystem() uses ext4 as the default filesystem.""" |
917 | device = '/dev/zero' |
918 | ceph_utils.make_filesystem(device) |
919 | self.check_call.assert_called_with(['mkfs', '-t', 'ext4', device]) |
920 | |
921 | def test_make_filesystem_no_device(self): |
922 | - '''make_filesystem() raises an IOError if the device does not exist.''' |
923 | + """make_filesystem() raises an IOError if the device does not exist.""" |
924 | device = '/no/such/device' |
925 | e = self.assertRaises(IOError, ceph_utils.make_filesystem, device, |
926 | timeout=0) |
927 | @@ -512,9 +809,11 @@ |
928 | The specified device is formatted if it appears before the timeout |
929 | is reached. |
930 | """ |
931 | + |
932 | def create_my_device(filename): |
933 | with open(filename, "w") as device: |
934 | device.write("hello\n") |
935 | + |
936 | temp_dir = mkdtemp() |
937 | self.addCleanup(rmtree, temp_dir) |
938 | device = "%s/mydevice" % temp_dir |
939 | @@ -639,10 +938,10 @@ |
940 | self.relation_ids.side_effect = relation.relation_ids |
941 | self.related_units.side_effect = relation.related_units |
942 | |
943 | -# @patch.object(ceph_utils, 'uuid') |
944 | -# @patch.object(ceph_utils, 'local_unit') |
945 | -# def test_get_request_states(self, mlocal_unit, muuid): |
946 | -# muuid.uuid1.return_value = '0bc7dc54' |
947 | + # @patch.object(ceph_utils, 'uuid') |
948 | + # @patch.object(ceph_utils, 'local_unit') |
949 | + # def test_get_request_states(self, mlocal_unit, muuid): |
950 | + # muuid.uuid1.return_value = '0bc7dc54' |
951 | @patch.object(ceph_utils, 'local_unit') |
952 | def test_get_request_states(self, mlocal_unit): |
953 | mlocal_unit.return_value = 'glance/0' |
954 | |
955 | === modified file 'tests/contrib/storage/test_linux_storage_loopback.py' |
956 | --- tests/contrib/storage/test_linux_storage_loopback.py 2015-11-02 19:48:05 +0000 |
957 | +++ tests/contrib/storage/test_linux_storage_loopback.py 2015-11-30 18:12:02 +0000 |
958 | @@ -17,7 +17,7 @@ |
959 | class LoopbackStorageUtilsTests(unittest.TestCase): |
960 | @patch(STORAGE_LINUX_LOOPBACK + '.check_output') |
961 | def test_loopback_devices(self, output): |
962 | - '''It translates current loopback mapping to a dict''' |
963 | + """It translates current loopback mapping to a dict""" |
964 | output.return_value = LOOPBACK_DEVICES |
965 | ex = { |
966 | '/dev/loop1': '/tmp/bar.img', |
967 | @@ -31,7 +31,7 @@ |
968 | @patch(STORAGE_LINUX_LOOPBACK + '.loopback_devices') |
969 | def test_loopback_create_already_exists(self, loopbacks, check_call, |
970 | create): |
971 | - '''It finds existing loopback device for requested file''' |
972 | + """It finds existing loopback device for requested file""" |
973 | loopbacks.return_value = {'/dev/loop1': '/tmp/bar.img'} |
974 | res = loopback.ensure_loopback_device('/tmp/bar.img', '5G') |
975 | self.assertEquals(res, '/dev/loop1') |
976 | @@ -43,7 +43,7 @@ |
977 | @patch('os.path.exists') |
978 | def test_loop_creation_no_truncate(self, path_exists, create_loopback, |
979 | loopbacks): |
980 | - '''It does not create a new sparse image for loopback if one exists''' |
981 | + """It does not create a new sparse image for loopback if one exists""" |
982 | loopbacks.return_value = {} |
983 | path_exists.return_value = True |
984 | with patch('subprocess.check_call') as check_call: |
985 | @@ -55,7 +55,7 @@ |
986 | @patch('os.path.exists') |
987 | def test_ensure_loopback_creation(self, path_exists, create_loopback, |
988 | loopbacks): |
989 | - '''It creates a new sparse image for loopback if one does not exists''' |
990 | + """It creates a new sparse image for loopback if one does not exists""" |
991 | loopbacks.return_value = {} |
992 | path_exists.return_value = False |
993 | create_loopback.return_value = '/dev/loop0' |
994 | @@ -66,7 +66,7 @@ |
995 | |
996 | @patch.object(loopback, 'loopback_devices') |
997 | def test_create_loopback(self, _devs): |
998 | - '''It corectly calls losetup to create a loopback device''' |
999 | + """It correctly calls losetup to create a loopback device""" |
1000 | _devs.return_value = {'/dev/loop0': '/tmp/foo'} |
1001 | with patch(STORAGE_LINUX_LOOPBACK + '.check_call') as check_call: |
1002 | check_call.return_value = '' |
1003 | |
1004 | === modified file 'tests/contrib/storage/test_linux_storage_lvm.py' |
1005 | --- tests/contrib/storage/test_linux_storage_lvm.py 2014-11-25 13:38:01 +0000 |
1006 | +++ tests/contrib/storage/test_linux_storage_lvm.py 2015-11-30 18:12:02 +0000 |
1007 | @@ -39,14 +39,14 @@ |
1008 | |
1009 | class LVMStorageUtilsTests(unittest.TestCase): |
1010 | def test_find_volume_group_on_pv(self): |
1011 | - '''It determines any volume group assigned to a LVM PV''' |
1012 | + """It determines any volume group assigned to a LVM PV""" |
1013 | with patch(STORAGE_LINUX_LVM + '.check_output') as check_output: |
1014 | check_output.return_value = PVDISPLAY |
1015 | vg = lvm.list_lvm_volume_group('/dev/loop0') |
1016 | self.assertEquals(vg, 'foo') |
1017 | |
1018 | def test_find_empty_volume_group_on_pv(self): |
1019 | - '''Return empty string when no volume group is assigned to the PV''' |
1020 | + """Return empty string when no volume group is assigned to the PV""" |
1021 | with patch(STORAGE_LINUX_LVM + '.check_output') as check_output: |
1022 | check_output.return_value = EMPTY_VG_IN_PVDISPLAY |
1023 | vg = lvm.list_lvm_volume_group('/dev/loop0') |
1024 | @@ -54,38 +54,38 @@ |
1025 | |
1026 | @patch(STORAGE_LINUX_LVM + '.list_lvm_volume_group') |
1027 | def test_deactivate_lvm_volume_groups(self, ls_vg): |
1028 | - '''It deactivates active volume groups on LVM PV''' |
1029 | + """It deactivates active volume groups on LVM PV""" |
1030 | ls_vg.return_value = 'foo-vg' |
1031 | with patch(STORAGE_LINUX_LVM + '.check_call') as check_call: |
1032 | lvm.deactivate_lvm_volume_group('/dev/loop0') |
1033 | check_call.assert_called_with(['vgchange', '-an', 'foo-vg']) |
1034 | |
1035 | def test_remove_lvm_physical_volume(self): |
1036 | - '''It removes LVM physical volume signatures from block device''' |
1037 | + """It removes LVM physical volume signatures from block device""" |
1038 | with patch(STORAGE_LINUX_LVM + '.Popen') as popen: |
1039 | lvm.remove_lvm_physical_volume('/dev/foo') |
1040 | popen.assert_called_with(['pvremove', '-ff', '/dev/foo'], stdin=-1) |
1041 | |
1042 | def test_is_physical_volume(self): |
1043 | - '''It properly reports block dev is an LVM PV''' |
1044 | + """It properly reports block dev is an LVM PV""" |
1045 | with patch(STORAGE_LINUX_LVM + '.check_output') as check_output: |
1046 | check_output.return_value = PVDISPLAY |
1047 | self.assertTrue(lvm.is_lvm_physical_volume('/dev/loop0')) |
1048 | |
1049 | def test_is_not_physical_volume(self): |
1050 | - '''It properly reports block dev is an LVM PV''' |
1051 | + """It properly reports block dev is an LVM PV""" |
1052 | with patch(STORAGE_LINUX_LVM + '.check_output') as check_output: |
1053 | check_output.side_effect = subprocess.CalledProcessError('cmd', 2) |
1054 | self.assertFalse(lvm.is_lvm_physical_volume('/dev/loop0')) |
1055 | |
1056 | def test_pvcreate(self): |
1057 | - '''It correctly calls pvcreate for a given block dev''' |
1058 | + """It correctly calls pvcreate for a given block dev""" |
1059 | with patch(STORAGE_LINUX_LVM + '.check_call') as check_call: |
1060 | lvm.create_lvm_physical_volume('/dev/foo') |
1061 | check_call.assert_called_with(['pvcreate', '/dev/foo']) |
1062 | |
1063 | def test_vgcreate(self): |
1064 | - '''It correctly calls vgcreate for given block dev and vol group''' |
1065 | + """It correctly calls vgcreate for given block dev and vol group""" |
1066 | with patch(STORAGE_LINUX_LVM + '.check_call') as check_call: |
1067 | lvm.create_lvm_volume_group('foo-vg', '/dev/foo') |
1068 | check_call.assert_called_with(['vgcreate', 'foo-vg', '/dev/foo']) |
1069 | |
1070 | === modified file 'tests/contrib/storage/test_linux_storage_utils.py' |
1071 | --- tests/contrib/storage/test_linux_storage_utils.py 2015-08-07 09:00:30 +0000 |
1072 | +++ tests/contrib/storage/test_linux_storage_utils.py 2015-11-30 18:12:02 +0000 |
1073 | @@ -13,7 +13,7 @@ |
1074 | @patch(STORAGE_LINUX_UTILS + '.call') |
1075 | @patch(STORAGE_LINUX_UTILS + '.check_call') |
1076 | def test_zap_disk(self, check_call, call, check_output): |
1077 | - '''It calls sgdisk correctly to zap disk''' |
1078 | + """It calls sgdisk correctly to zap disk""" |
1079 | check_output.return_value = b'200\n' |
1080 | storage_utils.zap_disk('/dev/foo') |
1081 | call.assert_any_call(['sgdisk', '--zap-all', '--', '/dev/foo']) |
1082 | @@ -29,7 +29,7 @@ |
1083 | @patch('os.path.exists') |
1084 | @patch('os.stat') |
1085 | def test_is_block_device(self, S_ISBLK, exists, stat): |
1086 | - '''It detects device node is block device''' |
1087 | + """It detects device node is block device""" |
1088 | class fake_stat: |
1089 | st_mode = True |
1090 | S_ISBLK.return_value = fake_stat() |
1091 | @@ -40,7 +40,7 @@ |
1092 | @patch('os.path.exists') |
1093 | @patch('os.stat') |
1094 | def test_is_block_device_does_not_exist(self, S_ISBLK, exists, stat): |
1095 | - '''It detects device node is block device''' |
1096 | + """It detects device node is block device""" |
1097 | class fake_stat: |
1098 | st_mode = True |
1099 | S_ISBLK.return_value = fake_stat() |
1100 | @@ -49,7 +49,7 @@ |
1101 | |
1102 | @patch(STORAGE_LINUX_UTILS + '.check_output') |
1103 | def test_is_device_mounted(self, check_output): |
1104 | - '''It detects mounted devices as mounted.''' |
1105 | + """It detects mounted devices as mounted.""" |
1106 | check_output.return_value = ( |
1107 | b"/dev/sda1 on / type ext4 (rw,errors=remount-ro)\n") |
1108 | result = storage_utils.is_device_mounted('/dev/sda') |
1109 | @@ -57,7 +57,7 @@ |
1110 | |
1111 | @patch(STORAGE_LINUX_UTILS + '.check_output') |
1112 | def test_is_device_mounted_partition(self, check_output): |
1113 | - '''It detects mounted partitions as mounted.''' |
1114 | + """It detects mounted partitions as mounted.""" |
1115 | check_output.return_value = ( |
1116 | b"/dev/sda1 on / type ext4 (rw,errors=remount-ro)\n") |
1117 | result = storage_utils.is_device_mounted('/dev/sda1') |
1118 | @@ -65,8 +65,8 @@ |
1119 | |
1120 | @patch(STORAGE_LINUX_UTILS + '.check_output') |
1121 | def test_is_device_mounted_partition_with_device(self, check_output): |
1122 | - '''It detects mounted devices as mounted if "mount" shows only a |
1123 | - partition as mounted.''' |
1124 | + """It detects mounted devices as mounted if "mount" shows only a |
1125 | + partition as mounted.""" |
1126 | check_output.return_value = ( |
1127 | b"/dev/sda1 on / type ext4 (rw,errors=remount-ro)\n") |
1128 | result = storage_utils.is_device_mounted('/dev/sda') |
1129 | @@ -74,7 +74,7 @@ |
1130 | |
1131 | @patch(STORAGE_LINUX_UTILS + '.check_output') |
1132 | def test_is_device_mounted_not_mounted(self, check_output): |
1133 | - '''It detects unmounted devices as not mounted.''' |
1134 | + """It detects unmounted devices as not mounted.""" |
1135 | check_output.return_value = ( |
1136 | b"/dev/foo on / type ext4 (rw,errors=remount-ro)\n") |
1137 | result = storage_utils.is_device_mounted('/dev/sda') |
1138 | @@ -82,7 +82,7 @@ |
1139 | |
1140 | @patch(STORAGE_LINUX_UTILS + '.check_output') |
1141 | def test_is_device_mounted_not_mounted_partition(self, check_output): |
1142 | - '''It detects unmounted partitions as not mounted.''' |
1143 | + """It detects unmounted partitions as not mounted.""" |
1144 | check_output.return_value = ( |
1145 | b"/dev/foo on / type ext4 (rw,errors=remount-ro)\n") |
1146 | result = storage_utils.is_device_mounted('/dev/sda1') |
1147 | @@ -90,7 +90,7 @@ |
1148 | |
1149 | @patch(STORAGE_LINUX_UTILS + '.check_output') |
1150 | def test_is_device_mounted_full_disks(self, check_output): |
1151 | - '''It detects mounted full disks as mounted.''' |
1152 | + """It detects mounted full disks as mounted.""" |
1153 | check_output.return_value = ( |
1154 | b"/dev/sda on / type ext4 (rw,errors=remount-ro)\n") |
1155 | result = storage_utils.is_device_mounted('/dev/sda') |
1156 | @@ -98,7 +98,7 @@ |
1157 | |
1158 | @patch(STORAGE_LINUX_UTILS + '.check_output') |
1159 | def test_is_device_mounted_cciss(self, check_output): |
1160 | - '''It detects mounted cciss partitions as mounted.''' |
1161 | + """It detects mounted cciss partitions as mounted.""" |
1162 | check_output.return_value = ( |
1163 | b"/dev/cciss/c0d0 on / type ext4 (rw,errors=remount-ro)\n") |
1164 | result = storage_utils.is_device_mounted('/dev/cciss/c0d0') |
1165 | @@ -106,7 +106,7 @@ |
1166 | |
1167 | @patch(STORAGE_LINUX_UTILS + '.check_output') |
1168 | def test_is_device_mounted_cciss_not_mounted(self, check_output): |
1169 | - '''It detects unmounted cciss partitions as not mounted.''' |
1170 | + """It detects unmounted cciss partitions as not mounted.""" |
1171 | check_output.return_value = ( |
1172 | b"/dev/cciss/c0d1 on / type ext4 (rw,errors=remount-ro)\n") |
1173 | result = storage_utils.is_device_mounted('/dev/cciss/c0d0') |
Results from make test: https://gist.github.com/cholcombe973/a6603ed6e8820ad323d2