Merge lp:~raharper/curtin/trunk.implement_raid_preserve into lp:~curtin-dev/curtin/trunk

Proposed by Ryan Harper
Status: Merged
Merged at revision: 330
Proposed branch: lp:~raharper/curtin/trunk.implement_raid_preserve
Merge into: lp:~curtin-dev/curtin/trunk
Diff against target: 1691 lines (+1571/-44)
4 files modified
curtin/block/__init__.py (+18/-0)
curtin/block/mdadm.py (+630/-0)
curtin/commands/block_meta.py (+32/-44)
tests/unittests/test_block_mdadm.py (+891/-0)
To merge this branch: bzr merge lp:~raharper/curtin/trunk.implement_raid_preserve
Reviewer Review Type Date Requested Status
Scott Moser (community) Approve
Server Team CI bot continuous-integration Approve
Review via email: mp+279603@code.launchpad.net

Description of the change

block_meta: handle 'preserve' flag for raid devices

The raid handler did not respect the 'preserve' flag and
attempted to recreate the raid device instead of verifying
it.

Migrate mdadm usage into the new curtin.block.mdadm module.
Add complete unittests for the new module.
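
For illustration, the raid handler's preserve path now boils down to
this (condensed from the block_meta.py hunk in the diff below):

    if info.get('preserve'):
        # check if the array is already up; if not, try to assemble
        if not mdadm.md_check(info.get('name'), raidlevel,
                              device_paths, spare_device_paths):
            mdadm.mdadm_assemble(info.get('name'),
                                 device_paths, spare_device_paths)
            # try again after attempting to assemble
            if not mdadm.md_check(info.get('name'), raidlevel,
                                  device_paths, spare_device_paths):
                raise ValueError("Unable to confirm preserved raid array")
        # raid is all OK
        return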

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
317. By Ryan Harper

from trunk

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Scott Moser (smoser) wrote :

our block_meta has accumulated a lot of what really *should* be in curtin/block/__init__.py.
there are a lot of functions there that are general purpose, not specifically tied to 'block_meta'.

just after having looked through code on many different MPs recently, i've seen a lot of stuff in block_meta that i'd like in block/__init__.py.

Revision history for this message
Ryan Harper (raharper) wrote :

On Mon, Dec 7, 2015 at 10:27 AM, Scott Moser <email address hidden> wrote:

> our block_meta has accumulated a lot of what really *should* be in
> curtin/block/__init__.py.
> there are a lot of functions there that are general purpose, not
> specifically tied to 'block_meta'.
>
> just after having looked through code on many different MPs
> recently, i've seen a lot of stuff in block_meta that i'd like in
> block/__init__.py.
>

That's good to know. I don't mind moving them out there.

>
>
> Diff comments:
>
> > === modified file 'curtin/commands/block_meta.py'
> > --- curtin/commands/block_meta.py 2015-11-18 18:12:54 +0000
> > +++ curtin/commands/block_meta.py 2015-12-07 15:22:18 +0000
> > @@ -438,6 +438,75 @@
> > return volume_path
> >
> >
> > +def check_raid_array(mdname, raidlevel, devices=[], spares=[]):
> > + LOG.debug('Checking mdadm array: '
> > + 'name={} raidlevel={} devices={} spares={}'.format(
> > + mdname, raidlevel, devices, spares))
> > +
> > + # query the target raid device
> > + md_devname = os.path.join('/dev', mdname)
>
> beware os.path.join("/dev/", "/tmp/foo") returns /tmp/foo.
>

I think we're more concerned about:

>>> os.path.join("/dev", "/tmp/foo")
'/tmp/foo'

> I'd like a general get_full_device_path_for_device_name(devname="sda")
> that would return /dev/sda or what not.
>

I think most of it is here (but expects the volume id)

def get_path_to_storage_volume()

Instead you want it to take what? kernel devnames? /dev/XXXX, or XXX?
Is this just a smarter os.path.join for devices?
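
Something like this (untested sketch) is what I have in mind, building
the path by hand instead of joining:

def dev_path(devname):
    # os.path.join('/dev', '/tmp/foo') returns '/tmp/foo', so don't
    # join; just prefix the short name
    if devname.startswith('/dev/'):
        return devname
    return '/dev/' + devname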

>
> > + (out, _err) = util.subp(["mdadm", "--query", "--detail", "--export",
>
> this next block of code should be in a function. mdadm_query_devname()
>

OK

>
> > + md_devname], capture=True)
> > + if _err:
> > + # this is not fatal as we may need to assemble the array
> > + LOG.warn("raid device '%s' does not exist" % md_devname)
> > + return False
> > +
> > + # Convert mdadm --query --detail --export key=value into dictionary
> > + md_query_data = {k: v for k, v in (x.split('=')
>
> I'm guessing you want to use shlex here.
> see curtin/block/__init__.py's blkid() which returns a dictionary and
> parses output using shlex.
>

Hrm, do you want me to make a general:

def subp_to_dict(cmd)

?

And have blkid use that? The K=V logic is currently embedded in there.
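
As a sketch (untested), using shlex like blkid does:

def subp_to_dict(cmd):
    # run cmd and turn KEY=VALUE output into a dict; shlex.split
    # keeps quoted values intact
    out, _err = util.subp(cmd, capture=True)
    return dict(tok.split('=', 1) for tok in shlex.split(out))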

>
> > + for x in out.strip().split('\n'))}
> > +
> > + # confirm we have /dev/{mdname} by following the udev symlink
> > + mduuid_path = ('/dev/disk/by-id/md-uuid-' +
> > + '{}'.format(md_query_data['MD_UUID']))
> > + mdquery_devname = os.path.realpath(mduuid_path)
> > + if md_devname != mdquery_devname:
>
> this is a good check, thanks.
>

NP, we should probably add that to the vmtests raid check as well.
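
The core of that check is just resolving the md-uuid symlink back to
the device, e.g. (sketch):

mduuid_path = '/dev/disk/by-id/md-uuid-' + md_uuid
assert os.path.realpath(mduuid_path) == md_devname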

>
> > + raise ValueError("Couldn't find correct raid device."
>
> need a ' ' after the '.' there or your ValueError shows
> "device.MD_UUID={}".
>
>
OK

> > + "MD_UUID={} points to {} but expected {}" % (
> > + md_query_data['MD_UUID'],
> > + ...


318. By Ryan Harper

from trunk

319. By Ryan Harper

Fix spacing in raid check exception message.

320. By Ryan Harper

from trunk

321. By Ryan Harper

Create block.mdadm module

322. By Ryan Harper

Add mdadm unittests, apply some fixes and additional checks to mdadm module.

323. By Ryan Harper

mdadm: code-motion, relocating out of mdadm_ section.

324. By Ryan Harper

unittests: add mdadm_stop, mdadm_remove tests

325. By Ryan Harper

unittests: add mdadm_detail_scan

326. By Ryan Harper

unittests: add mdadm_query_detail

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
327. By Ryan Harper

unittests: mdadm helpers

328. By Ryan Harper

unittests: mdadm md_helpers

329. By Ryan Harper

mdadm: use util.load_file for easier mocking

330. By Ryan Harper

unittests: mdadm md_check_raidlevel

331. By Ryan Harper

unittests: mdadm md_check_array_state

332. By Ryan Harper

fix lint/pep8

333. By Ryan Harper

unittests: mdadm md_check_{uuid,devices,spares}

334. By Ryan Harper

unittest: fix pep/lint

335. By Ryan Harper

unittests: mdadm finish md_helpers

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
336. By Ryan Harper

from trunk

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
Scott Moser (smoser) wrote :

this largely looks good.
I'm ok if you want to push on it and get it in, given the good tests that you've got in place.
Just make sure we're passing vmtest and unit tests first.

i won't be able to give a more thorough review till the new year.

Revision history for this message
Ryan Harper (raharper) wrote :

OK, I won't push until it passes vmtests; it already passes make check.

On Fri, Dec 18, 2015 at 4:23 PM, Scott Moser <email address hidden> wrote:

> this largely looks good.
> I'm ok if you want to push on it and get it in, given the good tests that
> you've got in place.
> Just make sure we're passing vmtest and unit tests first.
>
> i won't be able to give a more thorough review till the new year.
>
>
> --
>
> https://code.launchpad.net/~raharper/curtin/trunk.implement_raid_preserve/+merge/279603
> You are the owner of lp:~raharper/curtin/trunk.implement_raid_preserve.
>

337. By Ryan Harper

Don't expect md_devname to exist when creating raid devices.

338. By Ryan Harper

reformat to fix pep on newer systems

339. By Ryan Harper

Allow non-zero return codes for mdadm_assemble

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
340. By Ryan Harper

fix assert_has_calls to use list

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
341. By Ryan Harper

Rename block.dev_long -> block.dev_path

Replacing dev_long with dev_path to indicate that we're returning a device
path.

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Scott Moser (smoser) wrote :

Locally I ran make check and make vmtest successfully.
I merged into trunk and pushed.
Fix-committed in revno 330.

Thanks Ryan.

review: Approve

Preview Diff

1=== modified file 'curtin/block/__init__.py'
2--- curtin/block/__init__.py 2015-12-07 20:22:00 +0000
3+++ curtin/block/__init__.py 2016-01-04 16:25:53 +0000
4@@ -41,6 +41,23 @@
5 return False
6
7
8+def dev_short(devname):
9+ if os.path.sep in devname:
10+ return os.path.basename(devname)
11+ return devname
12+
13+
14+def dev_path(devname):
15+ if devname.startswith('/dev/'):
16+ return devname
17+ else:
18+ return '/dev/' + devname
19+
20+
21+def sys_block_path(devname):
22+ return '/sys/class/block/' + dev_short(devname)
23+
24+
25 def _lsblock_pairs_to_dict(lines):
26 ret = {}
27 for line in lines.splitlines():
28@@ -435,4 +452,5 @@
29 does not exist" % (path, serial))
30 return path
31
32+
33 # vi: ts=4 expandtab syntax=python
34
35=== added file 'curtin/block/mdadm.py'
36--- curtin/block/mdadm.py 1970-01-01 00:00:00 +0000
37+++ curtin/block/mdadm.py 2016-01-04 16:25:53 +0000
38@@ -0,0 +1,630 @@
39+# Copyright (C) 2015 Canonical Ltd.
40+#
41+# Author: Ryan Harper <ryan.harper@canonical.com>
42+#
43+# Curtin is free software: you can redistribute it and/or modify it under
44+# the terms of the GNU Affero General Public License as published by the
45+# Free Software Foundation, either version 3 of the License, or (at your
46+# option) any later version.
47+#
48+# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
49+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
50+# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
51+# more details.
52+#
53+# You should have received a copy of the GNU Affero General Public License
54+# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
55+
56+
57+# This module wraps calls to the mdadm utility for examining Linux SoftRAID
58+# virtual devices. Functions prefixed with 'mdadm_' involve executing
59+# the 'mdadm' command in a subprocess. The remaining functions handle
60+# manipulation of the mdadm output.
61+
62+
63+import os
64+import re
65+import shlex
66+from subprocess import CalledProcessError
67+
68+from curtin.block import (dev_short, dev_path, is_valid_device, sys_block_path)
69+from curtin import util
70+from curtin.log import LOG
71+
72+NOSPARE_RAID_LEVELS = [
73+ 'linear', 'raid0', '0', 0,
74+]
75+
76+SPARE_RAID_LEVELS = [
77+ 'raid1', 'stripe', 'mirror', '1', 1,
78+ 'raid4', '4', 4,
79+ 'raid5', '5', 5,
80+ 'raid6', '6', 6,
81+ 'raid10', '10', 10,
82+]
83+
84+VALID_RAID_LEVELS = NOSPARE_RAID_LEVELS + SPARE_RAID_LEVELS
85+
86+# https://www.kernel.org/doc/Documentation/md.txt
87+'''
88+ clear
89+ No devices, no size, no level
90+ Writing is equivalent to STOP_ARRAY ioctl
91+ inactive
92+ May have some settings, but array is not active
93+ all IO results in error
94+ When written, doesn't tear down array, but just stops it
95+ suspended (not supported yet)
96+ All IO requests will block. The array can be reconfigured.
97+ Writing this, if accepted, will block until array is quiescent
98+ readonly
99+ no resync can happen. no superblocks get written.
100+ write requests fail
101+ read-auto
102+ like readonly, but behaves like 'clean' on a write request.
103+
104+ clean - no pending writes, but otherwise active.
105+ When written to inactive array, starts without resync
106+ If a write request arrives then
107+ if metadata is known, mark 'dirty' and switch to 'active'.
108+ if not known, block and switch to write-pending
109+ If written to an active array that has pending writes, then fails.
110+ active
111+ fully active: IO and resync can be happening.
112+ When written to inactive array, starts with resync
113+
114+ write-pending
115+ clean, but writes are blocked waiting for 'active' to be written.
116+
117+ active-idle
118+ like active, but no writes have been seen for a while (safe_mode_delay).
119+'''
120+
121+ERROR_RAID_STATES = [
122+ 'clear',
123+ 'inactive',
124+ 'suspended',
125+]
126+
127+READONLY_RAID_STATES = [
128+ 'readonly',
129+]
130+
131+READWRITE_RAID_STATES = [
132+ 'read-auto',
133+ 'clean',
134+ 'active',
135+ 'active-idle',
136+ 'write-pending',
137+]
138+
139+VALID_RAID_ARRAY_STATES = (
140+ ERROR_RAID_STATES +
141+ READONLY_RAID_STATES +
142+ READWRITE_RAID_STATES
143+)
144+
145+# need an on-import check of version and set the value for later reference
146+''' mdadm version < 3.3 doesn't include enough info when using --export
147+ and we must use --detail and parse out information. This method
148+ checks the mdadm version and will return True if we can use --export
149+ for a key=value list with enough info, False if the version is < 3.3
150+'''
151+MDADM_USE_EXPORT = util.lsb_release()['codename'] not in ['precise', 'trusty']
152+
153+#
154+# mdadm executors
155+#
156+
157+
158+def mdadm_assemble(md_devname=None, devices=[], spares=[], scan=False):
159+ # md_devname is a /dev/XXXX path
160+ # devices is a non-empty list of /dev/xxx paths
161+ # if spares is a non-empty list, append those /dev/xxx paths
162+ cmd = ["mdadm", "--assemble"]
163+ if scan:
164+ cmd += ['--scan']
165+ else:
166+ valid_mdname(md_devname)
167+ cmd += [dev_path(md_devname), "--run"] + devices
168+ if spares:
169+ cmd += spares
170+
171+ util.subp(cmd, capture=True, rcs=[0, 1, 2])
172+ util.subp(["udevadm", "settle"])
173+
174+
175+def mdadm_create(md_devname, raidlevel, devices, spares=None, md_name=""):
176+ LOG.debug('mdadm_create: ' +
177+ 'md_devname=%s raidlevel=%s ' % (md_devname, raidlevel) +
178+ ' devices=%s spares=%s name=%s' % (devices, spares, md_name))
179+
180+ if raidlevel not in VALID_RAID_LEVELS:
181+ raise ValueError('Invalid raidlevel: ' + str(raidlevel))
182+
183+ min_devices = md_minimum_devices(raidlevel)
184+ if len(devices) < min_devices:
185+ err = 'Not enough devices for raidlevel: ' + str(raidlevel)
186+ err += ' minimum devices needed: ' + str(min_devices)
187+ raise ValueError(err)
188+
189+ if spares and raidlevel not in SPARE_RAID_LEVELS:
190+ err = ('Raidlevel does not support spare devices: ' + str(raidlevel))
191+ raise ValueError(err)
192+
193+ cmd = ["mdadm", "--create", dev_path(md_devname), "--run",
194+ "--level=%s" % raidlevel, "--raid-devices=%s" % len(devices)]
195+ if md_name:
196+ cmd.append("--name=%s" % md_name)
197+
198+ for device in devices:
199+ # Zero out device superblock just in case device has been used for raid
200+ # before, as this will cause many issues
201+ util.subp(["mdadm", "--zero-superblock", device], capture=True)
202+ cmd.append(device)
203+
204+ if spares:
205+ cmd.append("--spare-devices=%s" % len(spares))
206+ for device in spares:
207+ util.subp(["mdadm", "--zero-superblock", device], capture=True)
208+ cmd.append(device)
209+
210+ # Create the raid device
211+ util.subp(["udevadm", "settle"])
212+ util.subp(["udevadm", "control", "--stop-exec-queue"])
213+ util.subp(cmd, capture=True)
214+ util.subp(["udevadm", "control", "--start-exec-queue"])
215+ util.subp(["udevadm", "settle", "--exit-if-exists=%s" % md_devname])
216+
217+
218+def mdadm_examine(devpath, export=MDADM_USE_EXPORT):
219+ ''' execute mdadm --examine, and optionally
220+ append --export.
221+ Parse and return dict of key=val from output'''
222+ cmd = ["mdadm", "--examine"]
223+ if export:
224+ cmd.extend(["--export"])
225+
226+ cmd.extend([devpath])
227+ try:
228+ (out, _err) = util.subp(cmd, capture=True)
229+ except CalledProcessError:
230+ LOG.exception('Error: not a valid md device: ' + devpath)
231+ return {}
232+
233+ if export:
234+ data = __mdadm_export_to_dict(out)
235+ else:
236+ data = __upgrade_detail_dict(__mdadm_detail_to_dict(out))
237+
238+ return data
239+
240+
241+def mdadm_stop(devpath):
242+ if not devpath:
243+ raise ValueError('mdadm_stop: missing parameter devpath')
244+
245+ LOG.info("mdadm stopping: %s" % devpath)
246+ util.subp(["mdadm", "--stop", devpath], rcs=[0, 1], capture=True)
247+
248+
249+def mdadm_remove(devpath):
250+ if not devpath:
251+ raise ValueError('mdadm_remove: missing parameter devpath')
252+
253+ LOG.info("mdadm removing: %s" % devpath)
254+ util.subp(["mdadm", "--remove", devpath], rcs=[0, 1], capture=True)
255+
256+
257+def mdadm_query_detail(md_devname, export=MDADM_USE_EXPORT):
258+ valid_mdname(md_devname)
259+
260+ cmd = ["mdadm", "--query", "--detail"]
261+ if export:
262+ cmd.extend(["--export"])
263+ cmd.extend([md_devname])
264+ (out, _err) = util.subp(cmd, capture=True)
265+
266+ if export:
267+ data = __mdadm_export_to_dict(out)
268+ else:
269+ data = __upgrade_detail_dict(__mdadm_detail_to_dict(out))
270+
271+ return data
272+
273+
274+def mdadm_detail_scan():
275+ (out, _err) = util.subp(["mdadm", "--detail", "--scan"], capture=True)
276+ if not _err:
277+ return out
278+
279+
280+# ------------------------------ #
281+def valid_mdname(md_devname):
282+ if md_devname is None:
283+ raise ValueError('Parameter: md_devname is None')
284+ return False
285+
286+ if not is_valid_device(dev_path(md_devname)):
287+ raise ValueError('Specified md device does not exist: ' +
288+ dev_path(md_devname))
289+ return False
290+
291+ return True
292+
293+
294+def md_sysfs_attr(md_devname, attrname):
295+ if not valid_mdname(md_devname):
296+ raise ValueError('Invalid md devicename')
297+
298+ attrdata = ''
299+ # /sys/class/block/<md_short>/md
300+ sysmd = sys_block_path(dev_short(md_devname)) + "/md"
301+
302+ # /sys/class/block/<md_short>/md/attrname
303+ sysfs_attr_path = os.path.join(sysmd, attrname)
304+ if os.path.isfile(sysfs_attr_path):
305+ with open(sysfs_attr_path) as fp:
306+ attrdata = fp.read().strip()
307+
308+ return attrdata
309+
310+
311+def md_raidlevel_short(raidlevel):
312+ if isinstance(raidlevel, int) or raidlevel in ['linear', 'stripe']:
313+ return raidlevel
314+
315+ return int(raidlevel.replace('raid', ''))
316+
317+
318+def md_minimum_devices(raidlevel):
319+ ''' return the minimum number of devices for a given raid level '''
320+ rl = md_raidlevel_short(raidlevel)
321+ if rl in [0, 1, 'linear', 'stripe']:
322+ return 2
323+ if rl in [5]:
324+ return 3
325+ if rl in [6, 10]:
326+ return 4
327+
328+ return -1
329+
330+
331+def __md_check_array_state(md_devname, mode='READWRITE'):
332+ modes = {
333+ 'READWRITE': READWRITE_RAID_STATES,
334+ 'READONLY': READONLY_RAID_STATES,
335+ 'ERROR': ERROR_RAID_STATES,
336+ }
337+ if mode not in modes:
338+ raise ValueError('Invalid Array State mode: ' + mode)
339+
340+ array_state = md_sysfs_attr(md_devname, 'array_state')
341+ if array_state in modes[mode]:
342+ return True
343+
344+ return False
345+
346+
347+def md_check_array_state_rw(md_devname):
348+ return __md_check_array_state(md_devname, mode='READWRITE')
349+
350+
351+def md_check_array_state_ro(md_devname):
352+ return __md_check_array_state(md_devname, mode='READONLY')
353+
354+
355+def md_check_array_state_error(md_devname):
356+ return __md_check_array_state(md_devname, mode='ERROR')
357+
358+
359+def __mdadm_export_to_dict(output):
360+ ''' convert Key=Value text output into dictionary '''
361+ return dict(tok.split('=', 1) for tok in shlex.split(output))
362+
363+
364+def __mdadm_detail_to_dict(input):
365+ ''' Convert mdadm --detail output to dictionary
366+
367+ /dev/vde:
368+ Magic : a92b4efc
369+ Version : 1.2
370+ Feature Map : 0x0
371+ Array UUID : 93a73e10:427f280b:b7076c02:204b8f7a
372+ Name : wily-foobar:0 (local to host wily-foobar)
373+ Creation Time : Sat Dec 12 16:06:05 2015
374+ Raid Level : raid1
375+ Raid Devices : 2
376+
377+ Avail Dev Size : 20955136 (9.99 GiB 10.73 GB)
378+ Used Dev Size : 20955136 (9.99 GiB 10.73 GB)
379+ Array Size : 10477568 (9.99 GiB 10.73 GB)
380+ Data Offset : 16384 sectors
381+ Super Offset : 8 sectors
382+ Unused Space : before=16296 sectors, after=0 sectors
383+ State : clean
384+ Device UUID : 8fcd62e6:991acc6e:6cb71ee3:7c956919
385+
386+ Update Time : Sat Dec 12 16:09:09 2015
387+ Bad Block Log : 512 entries available at offset 72 sectors
388+ Checksum : 65b57c2e - correct
389+ Events : 17
390+
391+
392+ Device Role : spare
393+ Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
394+ '''
395+ data = {}
396+
397+ device = re.findall('^(\/dev\/[a-zA-Z0-9-\._]+)', input)
398+ if len(device) == 1:
399+ data.update({'device': device[0]})
400+ else:
401+ raise ValueError('Failed to determine device in input')
402+
403+ # FIXME: probably could do a better regex to match the LHS which
404+ # has one, two or three words
405+ for f in re.findall('(\w+|\w+\ \w+|\w+\ \w+\ \w+)' +
406+ '\ \:\ ([a-zA-Z0-9\-\.,: \(\)=\']+)',
407+ input, re.MULTILINE):
408+ key = f[0].replace(' ', '_').lower()
409+ val = f[1]
410+ if key in data:
411+ raise ValueError('Duplicate key in mdadm regex parsing: ' + key)
412+ data.update({key: val})
413+
414+ return data
415+
416+
417+def md_device_key_role(devname):
418+ if not devname:
419+ raise ValueError('Missing parameter devname')
420+ return 'MD_DEVICE_' + dev_short(devname) + '_ROLE'
421+
422+
423+def md_device_key_dev(devname):
424+ if not devname:
425+ raise ValueError('Missing parameter devname')
426+ return 'MD_DEVICE_' + dev_short(devname) + '_DEV'
427+
428+
429+def __upgrade_detail_dict(detail):
430+ ''' This method attempts to convert mdadm --detail output into
431+ a KEY=VALUE output the same as mdadm --detail --export from mdadm v3.3
432+ '''
433+ # if the input already has MD_UUID, it's already been converted
434+ if 'MD_UUID' in detail:
435+ return detail
436+
437+ md_detail = {
438+ 'MD_LEVEL': detail['raid_level'],
439+ 'MD_DEVICES': detail['raid_devices'],
440+ 'MD_METADATA': detail['version'],
441+ 'MD_NAME': detail['name'].split()[0],
442+ }
443+
444+ # examine has ARRAY UUID
445+ if 'array_uuid' in detail:
446+ md_detail.update({'MD_UUID': detail['array_uuid']})
447+ # query,detail has UUID
448+ elif 'uuid' in detail:
449+ md_detail.update({'MD_UUID': detail['uuid']})
450+
451+ device = detail['device']
452+
453+ # MD_DEVICE_vdc1_DEV=/dev/vdc1
454+ md_detail.update({md_device_key_dev(device): device})
455+
456+ if 'device_role' in detail:
457+ role = detail['device_role']
458+ if role != 'spare':
459+ # device_role = Active device 1
460+ role = role.split()[-1]
461+
462+ # MD_DEVICE_vdc1_ROLE=spare
463+ md_detail.update({md_device_key_role(device): role})
464+
465+ return md_detail
466+
467+
468+def md_read_run_mdadm_map():
469+ '''
470+ md1 1.2 59beb40f:4c202f67:088e702b:efdf577a /dev/md1
471+ md0 0.90 077e6a9e:edf92012:e2a6e712:b193f786 /dev/md0
472+
473+ return
474+ # md_shortname = (metaversion, md_uuid, md_devpath)
475+ data = {
476+ 'md1': (1.2, 59beb40f:4c202f67:088e702b:efdf577a, /dev/md1)
477+ 'md0': (0.90, 077e6a9e:edf92012:e2a6e712:b193f786, /dev/md0)
478+ '''
479+
480+ mdadm_map = {}
481+ run_mdadm_map = '/run/mdadm/map'
482+ if os.path.exists(run_mdadm_map):
483+ with open(run_mdadm_map, 'r') as fp:
484+ data = fp.read().strip()
485+ for entry in data.split('\n'):
486+ (key, meta, md_uuid, dev) = entry.split()
487+ mdadm_map.update({key: (meta, md_uuid, dev)})
488+
489+ return mdadm_map
490+
491+
492+def md_get_spares_list(devpath):
493+ sysfs_md = sys_block_path(devpath) + '/md'
494+
495+ if not os.path.exists(sysfs_md):
496+ raise ValueError('Cannot find md sysfs directory: ' +
497+ sysfs_md)
498+
499+ spares = [dev_path(dev[4:])
500+ for dev in os.listdir(sysfs_md)
501+ if (dev.startswith('dev-') and
502+ util.load_file(os.path.join(sysfs_md,
503+ dev,
504+ 'state')).strip() == 'spare')]
505+
506+ return spares
507+
508+
509+def md_get_devices_list(devpath):
510+ sysfs_md = sys_block_path(devpath) + '/md'
511+ if not os.path.exists(sysfs_md):
512+ raise ValueError('Cannot find md sysfs directory: ' +
513+ sysfs_md)
514+ devices = [dev_path(dev[4:])
515+ for dev in os.listdir(sysfs_md)
516+ if (dev.startswith('dev-') and
517+ util.load_file(os.path.join(sysfs_md,
518+ dev,
519+ 'state')).strip() != 'spare')]
520+ return devices
521+
522+
523+def md_check_array_uuid(md_devname, md_uuid):
524+ # confirm we have /dev/{mdname} by following the udev symlink
525+ mduuid_path = ('/dev/disk/by-id/md-uuid-' + md_uuid)
526+ mdlink_devname = dev_path(os.path.realpath(mduuid_path))
527+ if md_devname != mdlink_devname:
528+ err = ('Mismatch between devname and md-uuid symlink: ' +
529+ '%s -> %s != %s' % (mduuid_path, mdlink_devname, md_devname))
530+ raise ValueError(err)
531+
532+ return True
533+
534+
535+def md_get_uuid(md_devname):
536+ if not valid_mdname(md_devname):
537+ raise ValueError('Invalid md devicename')
538+
539+ md_query = mdadm_query_detail(md_devname)
540+ return md_query.get('MD_UUID', None)
541+
542+
543+def _compare_devlist(expected, found):
544+ LOG.debug('comparing device lists: '
545+ 'expected: {} found: {}'.format(expected, found))
546+ expected = set(expected)
547+ found = set(found)
548+ if expected != found:
549+ missing = expected.difference(found)
550+ extra = found.difference(expected)
551+ raise ValueError("RAID array device list does not match."
552+ " Missing: {} Extra: {}".format(missing, extra))
553+
554+
555+def md_check_raidlevel(raidlevel):
556+ # Validate raidlevel against what curtin supports configuring
557+ if raidlevel not in VALID_RAID_LEVELS:
558+ err = ('Invalid raidlevel: ' + raidlevel +
559+ ' Must be one of: ' + str(VALID_RAID_LEVELS))
560+ raise ValueError(err)
561+ return True
562+
563+
564+def md_block_until_in_sync(md_devname):
565+ '''
566+ sync_completed
567+ This shows the number of sectors that have been completed of
568+ whatever the current sync_action is, followed by the number of
569+ sectors in total that could need to be processed. The two
570+ numbers are separated by a '/' thus effectively showing one
571+ value, a fraction of the process that is complete.
572+ A 'select' on this attribute will return when resync completes,
573+ when it reaches the current sync_max (below) and possibly at
574+ other times.
575+ '''
576+ # FIXME: use selectors to block on: /sys/class/block/mdX/md/sync_completed
577+ pass
578+
579+
580+def md_check_array_state(md_devname):
581+ # check array state
582+
583+ writable = md_check_array_state_rw(md_devname)
584+ degraded = int(md_sysfs_attr(md_devname, 'degraded'))
585+ sync_action = md_sysfs_attr(md_devname, 'sync_action')
586+
587+ if not writable:
588+ raise ValueError('Array not in writable state: ' + md_devname)
589+ if degraded > 0:
590+ raise ValueError('Array in degraded state: ' + md_devname)
591+ if sync_action != "idle":
592+ raise ValueError('Array syncing, not idle state: ' + md_devname)
593+
594+ return True
595+
596+
597+def md_check_uuid(md_devname):
598+ md_uuid = md_get_uuid(md_devname)
599+ if not md_uuid:
600+ raise ValueError('Failed to get md UUID from device: ' + md_devname)
601+ return md_check_array_uuid(md_devname, md_uuid)
602+
603+
604+def md_check_devices(md_devname, devices):
605+ if not devices or len(devices) == 0:
606+ raise ValueError('Cannot verify raid array with empty device list')
607+
608+ # collect and compare raid devices based on md name versus
609+ # expected device list.
610+ #
611+ # NB: In some cases, a device might report as a spare until
612+ # md has finished syncing it into the array. Currently
613+ # we fail the check since the specified raid device is not
614+ # yet in its proper role. Callers can check mdadm_sync_action
615+ # state to see if the array is currently recovering, which would
616+ # explain the failure. Also mdadm_degraded will indicate if the
617+ # raid is currently degraded or not, which would also explain the
618+ # failure.
619+ md_raid_devices = md_get_devices_list(md_devname)
620+ LOG.debug('md_check_devices: md_raid_devs: ' + str(md_raid_devices))
621+ _compare_devlist(devices, md_raid_devices)
622+
623+
624+def md_check_spares(md_devname, spares):
625+ # collect and compare spare devices based on md name versus
626+ # expected device list.
627+ md_raid_spares = md_get_spares_list(md_devname)
628+ _compare_devlist(spares, md_raid_spares)
629+
630+
631+def md_check_array_membership(md_devname, devices):
632+ # validate that all devices are members of the correct array
633+ md_uuid = md_get_uuid(md_devname)
634+ for device in devices:
635+ dev_examine = mdadm_examine(device, export=False)
636+ if 'MD_UUID' not in dev_examine:
637+ raise ValueError('Device is not part of an array: ' + device)
638+ dev_uuid = dev_examine['MD_UUID']
639+ if dev_uuid != md_uuid:
640+ err = "Device {} is not part of {} array. ".format(device,
641+ md_devname)
642+ err += "MD_UUID mismatch: device:{} != array:{}".format(dev_uuid,
643+ md_uuid)
644+ raise ValueError(err)
645+
646+
647+def md_check(md_devname, raidlevel, devices=[], spares=[]):
648+ ''' Check passed in variables from storage configuration against
649+ the system we're running upon.
650+ '''
651+ LOG.debug('RAID validation: ' +
652+ 'name={} raidlevel={} devices={} spares={}'.format(md_devname,
653+ raidlevel,
654+ devices,
655+ spares))
656+
657+ md_check_array_state(md_devname)
658+ md_check_raidlevel(raidlevel)
659+ md_check_uuid(md_devname)
660+ md_check_devices(md_devname, devices)
661+ md_check_spares(md_devname, spares)
662+ md_check_array_membership(md_devname, devices + spares)
663+
664+ LOG.debug('RAID array OK: ' + md_devname)
665+ return True
666+
667+
668+# vi: ts=4 expandtab syntax=python
669
670=== modified file 'curtin/commands/block_meta.py'
671--- curtin/commands/block_meta.py 2015-12-08 01:23:09 +0000
672+++ curtin/commands/block_meta.py 2016-01-04 16:25:53 +0000
673@@ -16,9 +16,8 @@
674 # along with Curtin. If not, see <http://www.gnu.org/licenses/>.
675
676 from collections import OrderedDict
677-from curtin import block
678-from curtin import config
679-from curtin import util
680+from curtin import (block, config, util)
681+from curtin.block import mdadm
682 from curtin.log import LOG
683
684 from . import populate_one_subcmd
685@@ -256,9 +255,8 @@
686 block_dev = os.path.join("/dev/", os.path.split(sys_block_path)[-1])
687 # if these fail its okay, the array might not be assembled and thats
688 # fine
689- LOG.info("stopping: %s" % block_dev)
690- util.subp(["mdadm", "--stop", block_dev], rcs=[0, 1])
691- util.subp(["mdadm", "--remove", block_dev], rcs=[0, 1])
692+ mdadm.mdadm_stop(block_dev)
693+ mdadm.mdadm_remove(block_dev)
694
695 elif os.path.exists(os.path.join(sys_block_path, "dm")):
696 # Shut down any volgroups
697@@ -341,12 +339,8 @@
698 "-part%s" % determine_partition_number(volume, storage_config)
699 rule.append(compose_udev_equality('ENV{ID_PART_ENTRY_UUID}', ptuuid))
700 elif vol.get('type') == "raid":
701- (out, _err) = util.subp(["mdadm", "--detail", "--export", path],
702- capture=True)
703- for line in out.splitlines():
704- if "MD_UUID" in line:
705- md_uuid = line.split('=')[-1]
706- break
707+ md_data = mdadm.mdadm_query_detail(path)
708+ md_uuid = md_data.get('MD_UUID')
709 rule.append(compose_udev_equality("ENV{MD_UUID}", md_uuid))
710 elif vol.get('type') == "bcache":
711 rule.append(compose_udev_equality("ENV{DEVNAME}", path))
712@@ -471,7 +465,7 @@
713 # Wipe the disk
714 if info.get('wipe') and info.get('wipe') != "none":
715 # The disk has a label, clear all partitions
716- util.subp(["mdadm", "--assemble", "--scan"], rcs=[0, 1, 2])
717+ mdadm.mdadm_assemble(scan=True)
718 disk_kname = os.path.split(disk)[-1]
719 syspath_partitions = list(
720 os.path.split(prt)[0] for prt in
721@@ -914,39 +908,33 @@
722 device_paths = list(get_path_to_storage_volume(dev, storage_config) for
723 dev in devices)
724
725+ spare_device_paths = []
726 if spare_devices:
727 spare_device_paths = list(get_path_to_storage_volume(dev,
728 storage_config) for dev in spare_devices)
729
730- mdnameparm = ""
731- mdname = info.get('mdname')
732- if mdname:
733- mdnameparm = "--name=%s" % info.get('mdname')
734-
735- cmd = ["mdadm", "--create", "/dev/%s" % info.get('name'), "--run",
736- "--level=%s" % raidlevel, "--raid-devices=%s" % len(device_paths),
737- mdnameparm]
738-
739- for device in device_paths:
740- # Zero out device superblock just in case device has been used for raid
741- # before, as this will cause many issues
742- util.subp(["mdadm", "--zero-superblock", device])
743-
744- cmd.append(device)
745-
746- if spare_devices:
747- cmd.append("--spare-devices=%s" % len(spare_device_paths))
748- for device in spare_device_paths:
749- util.subp(["mdadm", "--zero-superblock", device])
750-
751- cmd.append(device)
752-
753- # Create the raid device
754- util.subp(["udevadm", "settle"])
755- util.subp(["udevadm", "control", "--stop-exec-queue"])
756- util.subp(" ".join(cmd), shell=True)
757- util.subp(["udevadm", "control", "--start-exec-queue"])
758- util.subp(["udevadm", "settle"])
759+ # Handle preserve flag
760+ if info.get('preserve'):
761+ # check if the array is already up, if not try to assemble
762+ if not mdadm.md_check(info.get('name'), raidlevel,
763+ device_paths, spare_device_paths):
764+ LOG.info("assembling preserved raid for "
765+ "{}".format(info.get('name')))
766+
767+ mdadm.mdadm_assemble(info.get('name'),
768+ device_paths, spare_device_paths)
769+
770+ # try again after attempting to assemble
771+ if not mdadm.md_check(info.get('name'), raidlevel,
772+ device_paths, spare_device_paths):
773+ raise ValueError("Unable to confirm preserved raid array: "
774+ " {}".format(info.get('name')))
775+ # raid is all OK
776+ return
777+
778+ mdadm.mdadm_create(info.get('name'), raidlevel,
779+ device_paths, spare_device_paths,
780+ info.get('mdname', ''))
781
782 # Make dname rule for this dev
783 make_dname(info.get('id'), storage_config)
784@@ -958,9 +946,9 @@
785 if state['fstab']:
786 mdadm_location = os.path.join(os.path.split(state['fstab'])[0],
787 "mdadm.conf")
788- (out, _err) = util.subp(["mdadm", "--detail", "--scan"], capture=True)
789+ mdadm_scan_data = mdadm.mdadm_detail_scan()
790 with open(mdadm_location, "w") as fp:
791- fp.write(out)
792+ fp.write(mdadm_scan_data)
793 else:
794 LOG.info("fstab configuration is not present in the environment, so \
795 cannot locate an appropriate directory to write mdadm.conf in, \
796
797=== added file 'tests/unittests/test_block_mdadm.py'
798--- tests/unittests/test_block_mdadm.py 1970-01-01 00:00:00 +0000
799+++ tests/unittests/test_block_mdadm.py 2016-01-04 16:25:53 +0000
800@@ -0,0 +1,891 @@
801+from unittest import TestCase
802+from mock import call, patch
803+from curtin.block import dev_short
804+from curtin.block import mdadm
805+import os
806+import subprocess
807+
808+from sys import version_info
809+if version_info.major == 2:
810+ import __builtin__ as builtins
811+else:
812+ import builtins
813+
814+
815+class MdadmTestBase(TestCase):
816+ def setUp(self):
817+ super(MdadmTestBase, self).setUp()
818+
819+ def add_patch(self, target, attr):
820+ """Patches specified target object and sets it as attr on test
821+ instance also schedules cleanup"""
822+ m = patch(target, autospec=True)
823+ p = m.start()
824+ self.addCleanup(m.stop)
825+ setattr(self, attr, p)
826+
827+
828+class TestBlockMdadmAssemble(MdadmTestBase):
829+ def setUp(self):
830+ super(TestBlockMdadmAssemble, self).setUp()
831+ self.add_patch('curtin.block.mdadm.util', 'mock_util')
832+ self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')
833+
834+ # Common mock settings
835+ self.mock_valid.return_value = True
836+ self.mock_util.lsb_release.return_value = {'codename': 'precise'}
837+ self.mock_util.subp.side_effect = [
838+ ("", ""), # mdadm assemble
839+ ("", ""), # udevadm settle
840+ ]
841+
842+ def test_mdadm_assemble_scan(self):
843+ mdadm.mdadm_assemble(scan=True)
844+ expected_calls = [
845+ call(["mdadm", "--assemble", "--scan"], capture=True,
846+ rcs=[0, 1, 2]),
847+ call(["udevadm", "settle"]),
848+ ]
849+ self.mock_util.subp.assert_has_calls(expected_calls)
850+
851+ def test_mdadm_assemble_md_devname(self):
852+ md_devname = "/dev/md0"
853+ mdadm.mdadm_assemble(md_devname=md_devname)
854+
855+ expected_calls = [
856+ call(["mdadm", "--assemble", md_devname, "--run"], capture=True,
857+ rcs=[0, 1, 2]),
858+ call(["udevadm", "settle"]),
859+ ]
860+ self.mock_util.subp.assert_has_calls(expected_calls)
861+
862+ def test_mdadm_assemble_md_devname_short(self):
863+ md_devname = "md0"
864+ mdadm.mdadm_assemble(md_devname=md_devname)
865+
866+ expected_calls = [
867+ call(["mdadm", "--assemble", "/dev/md0", "--run"], capture=True,
868+ rcs=[0, 1, 2]),
869+ call(["udevadm", "settle"]),
870+ ]
871+ self.mock_util.subp.assert_has_calls(expected_calls)
872+
873+ def test_mdadm_assemble_md_devname_none(self):
874+ with self.assertRaises(ValueError):
875+ md_devname = None
876+ mdadm.mdadm_assemble(md_devname=md_devname)
877+
878+ def test_mdadm_assemble_md_devname_devices(self):
879+ md_devname = "/dev/md0"
880+ devices = ["/dev/vdc1", "/dev/vdd1"]
881+ mdadm.mdadm_assemble(md_devname=md_devname, devices=devices)
882+ expected_calls = [
883+ call(["mdadm", "--assemble", md_devname, "--run"] + devices,
884+ capture=True, rcs=[0, 1, 2]),
885+ call(["udevadm", "settle"]),
886+ ]
887+ self.mock_util.subp.assert_has_calls(expected_calls)
888+
889+
890+class TestBlockMdadmCreate(MdadmTestBase):
891+ def setUp(self):
892+ super(TestBlockMdadmCreate, self).setUp()
893+ self.add_patch('curtin.block.mdadm.util', 'mock_util')
894+ self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')
895+
896+ # Common mock settings
897+ self.mock_valid.return_value = True
898+ self.mock_util.lsb_release.return_value = {'codename': 'precise'}
899+
900+ def prepare_mock(self, md_devname, raidlevel, devices, spares):
901+ side_effects = []
902+ expected_calls = []
903+
904+ # don't mock anything if raidlevel and spares mismatch
905+ if spares and raidlevel not in mdadm.SPARE_RAID_LEVELS:
906+ return (side_effects, expected_calls)
907+
908+ # prepare side-effects
909+ for d in devices + spares:
910+ side_effects.append(("", "")) # mdadm --zero-superblock
911+ expected_calls.append(
912+ call(["mdadm", "--zero-superblock", d], capture=True))
913+
914+ side_effects.append(("", "")) # udevadm settle
915+ expected_calls.append(call(["udevadm", "settle"]))
916+ side_effects.append(("", "")) # udevadm control --stop-exec-queue
917+ expected_calls.append(call(["udevadm", "control",
918+ "--stop-exec-queue"]))
919+ side_effects.append(("", "")) # mdadm create
920+ # build command how mdadm_create does
921+ cmd = (["mdadm", "--create", md_devname, "--run",
922+ "--level=%s" % raidlevel, "--raid-devices=%s" % len(devices)] +
923+ devices)
924+ if spares:
925+ cmd += ["--spare-devices=%s" % len(spares)] + spares
926+
927+ expected_calls.append(call(cmd, capture=True))
928+ side_effects.append(("", "")) # udevadm control --start-exec-queue
929+ expected_calls.append(call(["udevadm", "control",
930+ "--start-exec-queue"]))
931+ side_effects.append(("", "")) # udevadm settle
932+ expected_calls.append(call(["udevadm", "settle",
933+ "--exit-if-exists=%s" % md_devname]))
934+
935+ return (side_effects, expected_calls)
936+
937+ def test_mdadm_create_raid0(self):
938+ md_devname = "/dev/md0"
939+ raidlevel = 0
940+ devices = ["/dev/vdc1", "/dev/vdd1"]
941+ spares = []
942+ (side_effects, expected_calls) = self.prepare_mock(md_devname,
943+ raidlevel,
944+ devices,
945+ spares)
946+
947+ self.mock_util.subp.side_effect = side_effects
948+ mdadm.mdadm_create(md_devname=md_devname, raidlevel=raidlevel,
949+ devices=devices, spares=spares)
950+ self.mock_util.subp.assert_has_calls(expected_calls)
951+
952+ def test_mdadm_create_raid0_with_spares(self):
953+ md_devname = "/dev/md0"
954+ raidlevel = 0
955+ devices = ["/dev/vdc1", "/dev/vdd1"]
956+ spares = ["/dev/vde1"]
957+ (side_effects, expected_calls) = self.prepare_mock(md_devname,
958+ raidlevel,
959+ devices,
960+ spares)
961+
962+ self.mock_util.subp.side_effect = side_effects
963+ with self.assertRaises(ValueError):
964+ mdadm.mdadm_create(md_devname=md_devname, raidlevel=raidlevel,
965+ devices=devices, spares=spares)
966+ self.mock_util.subp.assert_has_calls(expected_calls)
967+
968+ def test_mdadm_create_md_devname_none(self):
969+ md_devname = None
970+ raidlevel = 0
971+ devices = ["/dev/vdc1", "/dev/vdd1"]
972+ spares = ["/dev/vde1"]
973+ with self.assertRaises(ValueError):
974+ mdadm.mdadm_create(md_devname=md_devname, raidlevel=raidlevel,
975+ devices=devices, spares=spares)
976+
977+ def test_mdadm_create_md_devname_missing(self):
978+ self.mock_valid.return_value = False
979+ md_devname = "/dev/wark"
980+ raidlevel = 0
981+ devices = ["/dev/vdc1", "/dev/vdd1"]
982+ spares = ["/dev/vde1"]
983+ with self.assertRaises(ValueError):
984+ mdadm.mdadm_create(md_devname=md_devname, raidlevel=raidlevel,
985+ devices=devices, spares=spares)
986+
987+ def test_mdadm_create_invalid_raidlevel(self):
988+ md_devname = "/dev/md0"
989+ raidlevel = 27
990+ devices = ["/dev/vdc1", "/dev/vdd1"]
991+ spares = ["/dev/vde1"]
992+ with self.assertRaises(ValueError):
993+ mdadm.mdadm_create(md_devname=md_devname, raidlevel=raidlevel,
994+ devices=devices, spares=spares)
995+
996+ def test_mdadm_create_check_min_devices(self):
997+ md_devname = "/dev/md0"
998+ raidlevel = 5
999+ devices = ["/dev/vdc1", "/dev/vdd1"]
1000+ spares = ["/dev/vde1"]
1001+ with self.assertRaises(ValueError):
1002+ mdadm.mdadm_create(md_devname=md_devname, raidlevel=raidlevel,
1003+ devices=devices, spares=spares)
1004+
1005+ def test_mdadm_create_raid5(self):
1006+ md_devname = "/dev/md0"
1007+ raidlevel = 5
1008+ devices = ['/dev/vdc1', '/dev/vdd1', '/dev/vde1']
1009+ spares = ['/dev/vdg1']
1010+ (side_effects, expected_calls) = self.prepare_mock(md_devname,
1011+ raidlevel,
1012+ devices,
1013+ spares)
1014+
1015+ self.mock_util.subp.side_effect = side_effects
1016+ mdadm.mdadm_create(md_devname=md_devname, raidlevel=raidlevel,
1017+ devices=devices, spares=spares)
1018+ self.mock_util.subp.assert_has_calls(expected_calls)
1019+
1020+
1021+class TestBlockMdadmExamine(MdadmTestBase):
1022+ def setUp(self):
1023+ super(TestBlockMdadmExamine, self).setUp()
1024+ self.add_patch('curtin.block.mdadm.util', 'mock_util')
1025+ self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')
1026+
1027+ # Common mock settings
1028+ self.mock_valid.return_value = True
1029+ self.mock_util.lsb_release.return_value = {'codename': 'precise'}
1030+
1031+ def test_mdadm_examine_export(self):
1032+ self.mock_util.lsb_release.return_value = {'codename': 'xenial'}
1033+ self.mock_util.subp.return_value = (
1034+ """
1035+ MD_LEVEL=raid0
1036+ MD_DEVICES=2
1037+ MD_METADATA=0.90
1038+ MD_UUID=93a73e10:427f280b:b7076c02:204b8f7a
1039+ """, "")
1040+
1041+ device = "/dev/vde"
1042+ data = mdadm.mdadm_examine(device, export=True)
1043+
1044+ expected_calls = [
1045+ call(["mdadm", "--examine", "--export", device], capture=True),
1046+ ]
1047+ self.mock_util.subp.assert_has_calls(expected_calls)
1048+ self.assertEqual(data['MD_UUID'],
1049+ '93a73e10:427f280b:b7076c02:204b8f7a')
1050+
1051+ def test_mdadm_examine_no_export(self):
1052+ self.mock_util.subp.return_value = ("""/dev/vde:
1053+ Magic : a92b4efc
1054+ Version : 1.2
1055+ Feature Map : 0x0
1056+ Array UUID : 93a73e10:427f280b:b7076c02:204b8f7a
1057+ Name : wily-foobar:0 (local to host wily-foobar)
1058+ Creation Time : Sat Dec 12 16:06:05 2015
1059+ Raid Level : raid1
1060+ Raid Devices : 2
1061+
1062+ Avail Dev Size : 20955136 (9.99 GiB 10.73 GB)
1063+ Used Dev Size : 20955136 (9.99 GiB 10.73 GB)
1064+ Array Size : 10477568 (9.99 GiB 10.73 GB)
1065+ Data Offset : 16384 sectors
1066+ Super Offset : 8 sectors
1067+ Unused Space : before=16296 sectors, after=0 sectors
1068+ State : clean
1069+ Device UUID : 8fcd62e6:991acc6e:6cb71ee3:7c956919
1070+
1071+ Update Time : Sat Dec 12 16:09:09 2015
1072+ Bad Block Log : 512 entries available at offset 72 sectors
1073+ Checksum : 65b57c2e - correct
1074+ Events : 17
1075+
1076+
1077+ Device Role : spare
1078+ Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
1079+ """, "") # mdadm --examine /dev/vde
1080+
1081+ device = "/dev/vde"
1082+ data = mdadm.mdadm_examine(device, export=False)
1083+
1084+ expected_calls = [
1085+ call(["mdadm", "--examine", device], capture=True),
1086+ ]
1087+ self.mock_util.subp.assert_has_calls(expected_calls)
1088+ self.assertEqual(data['MD_UUID'],
1089+ '93a73e10:427f280b:b7076c02:204b8f7a')
1090+
1091+ def test_mdadm_examine_no_raid(self):
1092+ self.mock_util.subp.side_effect = subprocess.CalledProcessError("", "")
1093+
1094+ device = "/dev/sda"
1095+ data = mdadm.mdadm_examine(device, export=False)
1096+
1097+ expected_calls = [
1098+ call(["mdadm", "--examine", device], capture=True),
1099+ ]
1100+
1101+ # don't mock anything if raidlevel and spares mismatch
1102+ self.mock_util.subp.assert_has_calls(expected_calls)
1103+ self.assertEqual(data, {})
1104+
1105+
1106+class TestBlockMdadmStop(MdadmTestBase):
1107+ def setUp(self):
1108+ super(TestBlockMdadmStop, self).setUp()
1109+ self.add_patch('curtin.block.mdadm.util', 'mock_util')
1110+ self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')
1111+
1112+ # Common mock settings
1113+ self.mock_valid.return_value = True
1114+ self.mock_util.lsb_release.return_value = {'codename': 'xenial'}
1115+ self.mock_util.subp.side_effect = [
1116+ ("", ""), # mdadm stop device
1117+ ]
1118+
1119+ def test_mdadm_stop_no_devpath(self):
1120+ with self.assertRaises(ValueError):
1121+ mdadm.mdadm_stop(None)
1122+
1123+ def test_mdadm_stop(self):
1124+ device = "/dev/vdc"
1125+ mdadm.mdadm_stop(device)
1126+ expected_calls = [
1127+ call(["mdadm", "--stop", device], rcs=[0, 1], capture=True),
1128+ ]
1129+ self.mock_util.subp.assert_has_calls(expected_calls)
1130+
1131+
1132+class TestBlockMdadmRemove(MdadmTestBase):
1133+ def setUp(self):
1134+ super(TestBlockMdadmRemove, self).setUp()
1135+ self.add_patch('curtin.block.mdadm.util', 'mock_util')
1136+ self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')
1137+
1138+ # Common mock settings
1139+ self.mock_valid.return_value = True
1140+ self.mock_util.lsb_release.return_value = {'codename': 'xenial'}
1141+ self.mock_util.subp.side_effect = [
1142+ ("", ""), # mdadm remove device
1143+ ]
1144+
1145+ def test_mdadm_remove_no_devpath(self):
1146+ with self.assertRaises(ValueError):
1147+ mdadm.mdadm_remove(None)
1148+
1149+ def test_mdadm_remove(self):
1150+ device = "/dev/vdc"
1151+ mdadm.mdadm_remove(device)
1152+ expected_calls = [
1153+ call(["mdadm", "--remove", device], rcs=[0, 1], capture=True),
1154+ ]
1155+ self.mock_util.subp.assert_has_calls(expected_calls)
1156+
1157+
1158+class TestBlockMdadmQueryDetail(MdadmTestBase):
1159+ def setUp(self):
1160+ super(TestBlockMdadmQueryDetail, self).setUp()
1161+ self.add_patch('curtin.block.mdadm.util', 'mock_util')
1162+ self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')
1163+
1164+ # Common mock settings
1165+ self.mock_valid.return_value = True
1166+ self.mock_util.lsb_release.return_value = {'codename': 'precise'}
1167+
1168+ def test_mdadm_query_detail_export(self):
1169+ self.mock_util.lsb_release.return_value = {'codename': 'xenial'}
1170+ self.mock_util.subp.return_value = (
1171+ """
1172+ MD_LEVEL=raid1
1173+ MD_DEVICES=2
1174+ MD_METADATA=1.2
1175+ MD_UUID=93a73e10:427f280b:b7076c02:204b8f7a
1176+ MD_NAME=wily-foobar:0
1177+ MD_DEVICE_vdc_ROLE=0
1178+ MD_DEVICE_vdc_DEV=/dev/vdc
1179+ MD_DEVICE_vdd_ROLE=1
1180+ MD_DEVICE_vdd_DEV=/dev/vdd
1181+ MD_DEVICE_vde_ROLE=spare
1182+ MD_DEVICE_vde_DEV=/dev/vde
1183+ """, "")
1184+
1185+ device = "/dev/md0"
1186+ self.mock_valid.return_value = True
1187+ data = mdadm.mdadm_query_detail(device, export=True)
1188+
1189+ expected_calls = [
1190+ call(["mdadm", "--query", "--detail", "--export", device],
1191+ capture=True),
1192+ ]
1193+ self.mock_util.subp.assert_has_calls(expected_calls)
1194+ self.assertEqual(data['MD_UUID'],
1195+ '93a73e10:427f280b:b7076c02:204b8f7a')
1196+
1197+ def test_mdadm_query_detail_no_export(self):
1198+ self.mock_util.subp.return_value = ("""/dev/md0:
1199+ Version : 1.2
1200+ Creation Time : Sat Dec 12 16:06:05 2015
1201+ Raid Level : raid1
1202+ Array Size : 10477568 (9.99 GiB 10.73 GB)
1203+ Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
1204+ Raid Devices : 2
1205+ Total Devices : 3
1206+ Persistence : Superblock is persistent
1207+
1208+ Update Time : Sat Dec 12 16:09:09 2015
1209+ State : clean
1210+ Active Devices : 2
1211+Working Devices : 3
1212+ Failed Devices : 0
1213+ Spare Devices : 1
1214+
1215+ Name : wily-foobar:0 (local to host wily-foobar)
1216+ UUID : 93a73e10:427f280b:b7076c02:204b8f7a
1217+ Events : 17
1218+
1219+ Number Major Minor RaidDevice State
1220+ 0 253 32 0 active sync /dev/vdc
1221+ 1 253 48 1 active sync /dev/vdd
1222+
1223+ 2 253 64 - spare /dev/vde
1224+ """, "") # mdadm --query --detail /dev/md0
1225+
1226+ device = "/dev/md0"
1227+ data = mdadm.mdadm_query_detail(device, export=False)
1228+ expected_calls = [
1229+ call(["mdadm", "--query", "--detail", device], capture=True),
1230+ ]
1231+ self.mock_util.subp.assert_has_calls(expected_calls)
1232+ self.assertEqual(data['MD_UUID'],
1233+ '93a73e10:427f280b:b7076c02:204b8f7a')
1234+
1235+
1236+class TestBlockMdadmDetailScan(MdadmTestBase):
1237+ def setUp(self):
1238+ super(TestBlockMdadmDetailScan, self).setUp()
1239+ self.add_patch('curtin.block.mdadm.util', 'mock_util')
1240+ self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')
1241+
1242+ # Common mock settings
1243+ self.scan_output = ("ARRAY /dev/md0 metadata=1.2 spares=2 name=0 " +
1244+ "UUID=b1eae2ff:69b6b02e:1d63bb53:ddfa6e4a")
1245+ self.mock_valid.return_value = True
1246+ self.mock_util.lsb_release.return_value = {'codename': 'xenial'}
1247+ self.mock_util.subp.side_effect = [
1248+ (self.scan_output, ""), # mdadm --detail --scan
1249+ ]
1250+
1251+ def test_mdadm_detail_scan(self):
1252+ data = mdadm.mdadm_detail_scan()
1253+ expected_calls = [
1254+ call(["mdadm", "--detail", "--scan"], capture=True),
1255+ ]
1256+ self.mock_util.subp.assert_has_calls(expected_calls)
1257+ self.assertEqual(self.scan_output, data)
1258+
1259+ def test_mdadm_detail_scan_error(self):
1260+ self.mock_util.subp.side_effect = [
1261+ ("wark", "error"), # mdadm --detail --scan
1262+ ]
1263+ data = mdadm.mdadm_detail_scan()
1264+ expected_calls = [
1265+ call(["mdadm", "--detail", "--scan"], capture=True),
1266+ ]
1267+ self.mock_util.subp.assert_has_calls(expected_calls)
1268+ self.assertEqual(None, data)
1269+
1270+
1271+class TestBlockMdadmMdHelpers(MdadmTestBase):
1272+ def setUp(self):
1273+ super(TestBlockMdadmMdHelpers, self).setUp()
1274+ self.add_patch('curtin.block.mdadm.util', 'mock_util')
1275+ self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')
1276+
1277+ self.mock_valid.return_value = True
1278+ self.mock_util.lsb_release.return_value = {'codename': 'xenial'}
1279+
1280+ def test_valid_mdname(self):
1281+ mdname = "/dev/md0"
1282+ result = mdadm.valid_mdname(mdname)
1283+ expected_calls = [
1284+ call(mdname)
1285+ ]
1286+ self.mock_valid.assert_has_calls(expected_calls)
1287+ self.assertTrue(result)
1288+
1289+ def test_valid_mdname_short(self):
1290+ mdname = "md0"
1291+ result = mdadm.valid_mdname(mdname)
1292+ expected_calls = [
1293+ call("/dev/md0")
1294+ ]
1295+ self.mock_valid.assert_has_calls(expected_calls)
1296+ self.assertTrue(result)
1297+
1298+ def test_valid_mdname_none(self):
1299+ mdname = None
1300+ with self.assertRaises(ValueError):
1301+ mdadm.valid_mdname(mdname)
1302+
1303+ def test_valid_mdname_not_valid_device(self):
1304+ self.mock_valid.return_value = False
1305+ mdname = "/dev/md0"
1306+ with self.assertRaises(ValueError):
1307+ mdadm.valid_mdname(mdname)
1308+
1309+ @patch.object(builtins, "open")
1310+ def test_md_sysfs_attr(self, mock_open):
1311+ mdname = "/dev/md0"
1312+ attr_name = 'array_state'
1313+ sysfs_path = '/sys/class/block/{}/md/{}'.format(dev_short(mdname),
1314+ attr_name)
1315+ mdadm.md_sysfs_attr(mdname, attr_name)
1316+ mock_open.assert_called_with(sysfs_path)
1317+
1318+ def test_md_sysfs_attr_devname_none(self):
1319+ mdname = None
1320+ attr_name = 'array_state'
1321+ with self.assertRaises(ValueError):
1322+ mdadm.md_sysfs_attr(mdname, attr_name)
1323+
1324+ def test_md_raidlevel_short(self):
1325+ for rl in [0, 1, 5, 6, 10, 'linear', 'stripe']:
1326+ self.assertEqual(rl, mdadm.md_raidlevel_short(rl))
1327+ if isinstance(rl, int):
1328+ long_rl = 'raid%d' % rl
1329+ self.assertEqual(rl, mdadm.md_raidlevel_short(long_rl))
1330+
1331+ def test_md_minimum_devices(self):
1332+ min_to_rl = {
1333+ 2: [0, 1, 'linear', 'stripe'],
1334+ 3: [5],
1335+ 4: [6, 10],
1336+ }
1337+
1338+ for rl in [0, 1, 5, 6, 10, 'linear', 'stripe']:
1339+ min_devs = mdadm.md_minimum_devices(rl)
1340+ self.assertTrue(rl in min_to_rl[min_devs])
1341+
1342+ def test_md_minimum_devices_invalid_rl(self):
1343+ min_devs = mdadm.md_minimum_devices(27)
1344+ self.assertEqual(min_devs, -1)
1345+
1346+ @patch('curtin.block.mdadm.md_sysfs_attr')
1347+ def test_md_check_array_state_rw(self, mock_attr):
1348+ mdname = '/dev/md0'
1349+ mock_attr.return_value = 'clean'
1350+ self.assertTrue(mdadm.md_check_array_state_rw(mdname))
1351+
1352+ @patch('curtin.block.mdadm.md_sysfs_attr')
1353+ def test_md_check_array_state_rw_false(self, mock_attr):
1354+ mdname = '/dev/md0'
1355+ mock_attr.return_value = 'inactive'
1356+ self.assertFalse(mdadm.md_check_array_state_rw(mdname))
1357+
1358+ @patch('curtin.block.mdadm.md_sysfs_attr')
1359+ def test_md_check_array_state_ro(self, mock_attr):
1360+ mdname = '/dev/md0'
1361+ mock_attr.return_value = 'readonly'
1362+ self.assertTrue(mdadm.md_check_array_state_ro(mdname))
1363+
1364+ @patch('curtin.block.mdadm.md_sysfs_attr')
1365+ def test_md_check_array_state_ro_false(self, mock_attr):
1366+ mdname = '/dev/md0'
1367+ mock_attr.return_value = 'inactive'
1368+ self.assertFalse(mdadm.md_check_array_state_ro(mdname))
1369+
1370+ @patch('curtin.block.mdadm.md_sysfs_attr')
1371+ def test_md_check_array_state_error(self, mock_attr):
1372+ mdname = '/dev/md0'
1373+ mock_attr.return_value = 'inactive'
1374+ self.assertTrue(mdadm.md_check_array_state_error(mdname))
1375+
1376+ @patch('curtin.block.mdadm.md_sysfs_attr')
1377+ def test_md_check_array_state_error_false(self, mock_attr):
1378+ mdname = '/dev/md0'
1379+ mock_attr.return_value = 'active'
1380+ self.assertFalse(mdadm.md_check_array_state_error(mdname))
1381+
1382+ def test_md_device_key_role(self):
1383+ devname = '/dev/vda'
1384+ rolekey = mdadm.md_device_key_role(devname)
1385+ self.assertEqual('MD_DEVICE_vda_ROLE', rolekey)
1386+
1387+ def test_md_device_key_role_no_dev(self):
1388+ devname = None
1389+ with self.assertRaises(ValueError):
1390+ mdadm.md_device_key_role(devname)
1391+
1392+ def test_md_device_key_dev(self):
1393+ devname = '/dev/vda'
1394+ devkey = mdadm.md_device_key_dev(devname)
1395+ self.assertEqual('MD_DEVICE_vda_DEV', devkey)
1396+
1397+ def test_md_device_key_dev_no_dev(self):
1398+ devname = None
1399+ with self.assertRaises(ValueError):
1400+ mdadm.md_device_key_dev(devname)
1401+
1402+ @patch('curtin.block.mdadm.os.path.exists')
1403+ @patch('curtin.block.mdadm.os.listdir')
1404+ def tests_md_get_spares_list(self, mock_listdir, mock_exists):
1405+ mdname = '/dev/md0'
1406+ devices = ['dev-vda', 'dev-vdb', 'dev-vdc']
1407+ states = ['in-sync', 'in-sync', 'spare']
1408+
1409+ mock_exists.return_value = True
1410+ mock_listdir.return_value = devices
1411+ self.mock_util.load_file.side_effect = states
1412+
1413+ sysfs_path = '/sys/class/block/md0/md/'
1414+
1415+ expected_calls = []
1416+ for d in devices:
1417+ expected_calls.append(call(os.path.join(sysfs_path, d, 'state')))
1418+
1419+ spares = mdadm.md_get_spares_list(mdname)
1420+ self.mock_util.load_file.assert_has_calls(expected_calls)
1421+ self.assertEqual(['/dev/vdc'], spares)
1422+
1423+ @patch('curtin.block.mdadm.os.path.exists')
1424+ def tests_md_get_spares_list_nomd(self, mock_exists):
1425+ mdname = '/dev/md0'
1426+ mock_exists.return_value = False
1427+ with self.assertRaises(ValueError):
1428+ mdadm.md_get_spares_list(mdname)
1429+
1430+ @patch('curtin.block.mdadm.os.path.exists')
1431+ @patch('curtin.block.mdadm.os.listdir')
1432+ def tests_md_get_devices_list(self, mock_listdir, mock_exists):
1433+ mdname = '/dev/md0'
1434+ devices = ['dev-vda', 'dev-vdb', 'dev-vdc']
1435+ states = ['in-sync', 'in-sync', 'spare']
1436+
1437+ mock_exists.return_value = True
1438+ mock_listdir.return_value = devices
1439+ self.mock_util.load_file.side_effect = states
1440+
1441+ sysfs_path = '/sys/class/block/md0/md/'
1442+
1443+ expected_calls = []
1444+ for d in devices:
1445+ expected_calls.append(call(os.path.join(sysfs_path, d, 'state')))
1446+
1447+ devs = mdadm.md_get_devices_list(mdname)
1448+ self.mock_util.load_file.assert_has_calls(expected_calls)
1449+ self.assertEqual(sorted(['/dev/vda', '/dev/vdb']), sorted(devs))
1450+
1451+ @patch('curtin.block.mdadm.os.path.exists')
1452+    def test_md_get_devices_list_nomd(self, mock_exists):
1453+ mdname = '/dev/md0'
1454+ mock_exists.return_value = False
1455+ with self.assertRaises(ValueError):
1456+ mdadm.md_get_devices_list(mdname)
1457+
1458+ @patch('curtin.block.mdadm.os')
1459+ def test_md_check_array_uuid(self, mock_os):
1460+ devname = '/dev/md0'
1461+ md_uuid = '93a73e10:427f280b:b7076c02:204b8f7a'
1462+ mock_os.path.realpath.return_value = devname
1463+ rv = mdadm.md_check_array_uuid(devname, md_uuid)
1464+ self.assertTrue(rv)
1465+
1466+ @patch('curtin.block.mdadm.os')
1467+ def test_md_check_array_uuid_mismatch(self, mock_os):
1468+ devname = '/dev/md0'
1469+ md_uuid = '93a73e10:427f280b:b7076c02:204b8f7a'
1470+ mock_os.path.realpath.return_value = '/dev/md1'
1471+
1472+ with self.assertRaises(ValueError):
1473+ mdadm.md_check_array_uuid(devname, md_uuid)
1474+
1475+ @patch('curtin.block.mdadm.mdadm_query_detail')
1476+ def test_md_get_uuid(self, mock_query):
1477+ mdname = '/dev/md0'
1478+ md_uuid = '93a73e10:427f280b:b7076c02:204b8f7a'
1479+ mock_query.return_value = {'MD_UUID': md_uuid}
1480+ uuid = mdadm.md_get_uuid(mdname)
1481+ self.assertEqual(md_uuid, uuid)
1482+
1483+ @patch('curtin.block.mdadm.mdadm_query_detail')
1484+ def test_md_get_uuid_dev_none(self, mock_query):
1485+ mdname = None
1486+ with self.assertRaises(ValueError):
1487+ mdadm.md_get_uuid(mdname)
1488+
1489+ def test_md_check_raid_level(self):
1490+ for rl in mdadm.VALID_RAID_LEVELS:
1491+ self.assertTrue(mdadm.md_check_raidlevel(rl))
1492+
1493+ def test_md_check_raid_level_bad(self):
1494+ bogus = '27'
1495+        self.assertNotIn(bogus, mdadm.VALID_RAID_LEVELS)
1496+ with self.assertRaises(ValueError):
1497+ mdadm.md_check_raidlevel(bogus)
1498+
1499+ @patch('curtin.block.mdadm.md_sysfs_attr')
1500+ def test_md_check_array_state(self, mock_attr):
1501+ mdname = '/dev/md0'
1502+ mock_attr.side_effect = [
1503+ 'clean', # array_state
1504+ '0', # degraded
1505+ 'idle', # sync_action
1506+ ]
1507+ self.assertTrue(mdadm.md_check_array_state(mdname))
1508+
1509+ @patch('curtin.block.mdadm.md_sysfs_attr')
1510+ def test_md_check_array_state_norw(self, mock_attr):
1511+ mdname = '/dev/md0'
1512+ mock_attr.side_effect = [
1513+ 'suspended', # array_state
1514+ '0', # degraded
1515+ 'idle', # sync_action
1516+ ]
1517+ with self.assertRaises(ValueError):
1518+ mdadm.md_check_array_state(mdname)
1519+
1520+ @patch('curtin.block.mdadm.md_sysfs_attr')
1521+ def test_md_check_array_state_degraded(self, mock_attr):
1522+ mdname = '/dev/md0'
1523+ mock_attr.side_effect = [
1524+ 'clean', # array_state
1525+ '1', # degraded
1526+ 'idle', # sync_action
1527+ ]
1528+ with self.assertRaises(ValueError):
1529+ mdadm.md_check_array_state(mdname)
1530+
1531+ @patch('curtin.block.mdadm.md_sysfs_attr')
1532+ def test_md_check_array_state_sync(self, mock_attr):
1533+ mdname = '/dev/md0'
1534+ mock_attr.side_effect = [
1535+ 'clean', # array_state
1536+ '0', # degraded
1537+ 'recovery', # sync_action
1538+ ]
1539+ with self.assertRaises(ValueError):
1540+ mdadm.md_check_array_state(mdname)
1541+
1542+ @patch('curtin.block.mdadm.md_check_array_uuid')
1543+ @patch('curtin.block.mdadm.md_get_uuid')
1544+ def test_md_check_uuid(self, mock_guuid, mock_ckuuid):
1545+ mdname = '/dev/md0'
1546+ mock_guuid.return_value = '93a73e10:427f280b:b7076c02:204b8f7a'
1547+ mock_ckuuid.return_value = True
1548+
1549+ rv = mdadm.md_check_uuid(mdname)
1550+ self.assertTrue(rv)
1551+
1552+ @patch('curtin.block.mdadm.md_check_array_uuid')
1553+ @patch('curtin.block.mdadm.md_get_uuid')
1554+ def test_md_check_uuid_nouuid(self, mock_guuid, mock_ckuuid):
1555+ mdname = '/dev/md0'
1556+ mock_guuid.return_value = None
1557+ with self.assertRaises(ValueError):
1558+ mdadm.md_check_uuid(mdname)
1559+
1560+ @patch('curtin.block.mdadm.md_get_devices_list')
1561+ def test_md_check_devices(self, mock_devlist):
1562+ mdname = '/dev/md0'
1563+ devices = ['/dev/vdc', '/dev/vdd']
1564+
1565+ mock_devlist.return_value = devices
1566+ rv = mdadm.md_check_devices(mdname, devices)
1567+ self.assertEqual(rv, None)
1568+
1569+ @patch('curtin.block.mdadm.md_get_devices_list')
1570+ def test_md_check_devices_wrong_devs(self, mock_devlist):
1571+ mdname = '/dev/md0'
1572+ devices = ['/dev/vdc', '/dev/vdd']
1573+
1574+ mock_devlist.return_value = ['/dev/sda']
1575+ with self.assertRaises(ValueError):
1576+ mdadm.md_check_devices(mdname, devices)
1577+
1578+ def test_md_check_devices_no_devs(self):
1579+ mdname = '/dev/md0'
1580+ devices = []
1581+
1582+ with self.assertRaises(ValueError):
1583+ mdadm.md_check_devices(mdname, devices)
1584+
1585+ @patch('curtin.block.mdadm.md_get_spares_list')
1586+ def test_md_check_spares(self, mock_devlist):
1587+ mdname = '/dev/md0'
1588+ spares = ['/dev/vdc', '/dev/vdd']
1589+
1590+ mock_devlist.return_value = spares
1591+ rv = mdadm.md_check_spares(mdname, spares)
1592+ self.assertEqual(rv, None)
1593+
1594+ @patch('curtin.block.mdadm.md_get_spares_list')
1595+ def test_md_check_spares_wrong_devs(self, mock_devlist):
1596+ mdname = '/dev/md0'
1597+ spares = ['/dev/vdc', '/dev/vdd']
1598+
1599+ mock_devlist.return_value = ['/dev/sda']
1600+ with self.assertRaises(ValueError):
1601+ mdadm.md_check_spares(mdname, spares)
1602+
1603+ @patch('curtin.block.mdadm.mdadm_examine')
1604+ @patch('curtin.block.mdadm.mdadm_query_detail')
1605+ @patch('curtin.block.mdadm.md_get_uuid')
1606+ def test_md_check_array_membership(self, mock_uuid, mock_query,
1607+ mock_examine):
1608+ mdname = '/dev/md0'
1609+ devices = ['/dev/vda', '/dev/vdb', '/dev/vdc', '/dev/vdd']
1610+ md_uuid = '93a73e10:427f280b:b7076c02:204b8f7a'
1611+ md_dict = {'MD_UUID': md_uuid}
1612+ mock_query.return_value = md_dict
1613+ mock_uuid.return_value = md_uuid
1614+ mock_examine.side_effect = [md_dict] * len(devices)
1615+ expected_calls = []
1616+ for dev in devices:
1617+ expected_calls.append(call(dev, export=False))
1618+
1619+ rv = mdadm.md_check_array_membership(mdname, devices)
1620+
1621+ self.assertEqual(rv, None)
1622+ mock_uuid.assert_has_calls([call(mdname)])
1623+ mock_examine.assert_has_calls(expected_calls)
1624+
1625+ @patch('curtin.block.mdadm.mdadm_examine')
1626+ @patch('curtin.block.mdadm.mdadm_query_detail')
1627+ @patch('curtin.block.mdadm.md_get_uuid')
1628+ def test_md_check_array_membership_bad_dev(self, mock_uuid, mock_query,
1629+ mock_examine):
1630+ mdname = '/dev/md0'
1631+ devices = ['/dev/vda', '/dev/vdb', '/dev/vdc', '/dev/vdd']
1632+ md_uuid = '93a73e10:427f280b:b7076c02:204b8f7a'
1633+ md_dict = {'MD_UUID': md_uuid}
1634+ mock_query.return_value = md_dict
1635+ mock_uuid.return_value = md_uuid
1636+ mock_examine.side_effect = [
1637+ md_dict,
1638+ {},
1639+ md_dict,
1640+ md_dict,
1641+ ] # one device isn't a member
1642+
1643+ with self.assertRaises(ValueError):
1644+ mdadm.md_check_array_membership(mdname, devices)
1645+
1646+ @patch('curtin.block.mdadm.mdadm_examine')
1647+ @patch('curtin.block.mdadm.mdadm_query_detail')
1648+ @patch('curtin.block.mdadm.md_get_uuid')
1649+ def test_md_check_array_membership_wrong_array(self, mock_uuid, mock_query,
1650+ mock_examine):
1651+ mdname = '/dev/md0'
1652+ devices = ['/dev/vda', '/dev/vdb', '/dev/vdc', '/dev/vdd']
1653+ md_uuid = '93a73e10:427f280b:b7076c02:204b8f7a'
1654+ md_dict = {'MD_UUID': '11111111:427f280b:b7076c02:204b8f7a'}
1655+ mock_query.return_value = md_dict
1656+ mock_uuid.return_value = md_uuid
1657+ mock_examine.side_effect = [md_dict] * len(devices)
1658+
1659+ with self.assertRaises(ValueError):
1660+ mdadm.md_check_array_membership(mdname, devices)
1661+
1662+ @patch('curtin.block.mdadm.md_check_array_membership')
1663+ @patch('curtin.block.mdadm.md_check_spares')
1664+ @patch('curtin.block.mdadm.md_check_devices')
1665+ @patch('curtin.block.mdadm.md_check_uuid')
1666+ @patch('curtin.block.mdadm.md_check_raidlevel')
1667+ @patch('curtin.block.mdadm.md_check_array_state')
1668+ def test_md_check_all_good(self, mock_array, mock_raid, mock_uuid,
1669+ mock_dev, mock_spare, mock_member):
1670+ md_devname = '/dev/md0'
1671+ raidlevel = 1
1672+ devices = ['/dev/vda', '/dev/vdb']
1673+ spares = ['/dev/vdc']
1674+
1675+ mock_array.return_value = None
1676+ mock_raid.return_value = None
1677+ mock_uuid.return_value = None
1678+ mock_dev.return_value = None
1679+ mock_spare.return_value = None
1680+ mock_member.return_value = None
1681+
1682+ mdadm.md_check(md_devname, raidlevel, devices=devices, spares=spares)
1683+
1684+ mock_array.assert_has_calls([call(md_devname)])
1685+ mock_raid.assert_has_calls([call(raidlevel)])
1686+ mock_uuid.assert_has_calls([call(md_devname)])
1687+ mock_dev.assert_has_calls([call(md_devname, devices)])
1688+ mock_spare.assert_has_calls([call(md_devname, spares)])
1689+ mock_member.assert_has_calls([call(md_devname, devices + spares)])
1690+
1691+# vi: ts=4 expandtab syntax=python
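
The tests above pin down the verification surface the preserve path
relies on: md_check() composes the array-state, raidlevel, uuid, device
and spare checks, returns None when everything matches, and raises
ValueError on any mismatch. As a minimal sketch (not the merged
block_meta code; raid_handler() and the 'info' keys here are assumed
for illustration), the preserve flag can gate verification against
re-creation like this:

    # Minimal sketch only: md_check()'s signature is taken from the
    # tests above; the handler wiring and config keys are assumptions.
    from curtin.block import mdadm

    def raid_handler(info):
        md_devname = '/dev/' + info.get('name')
        if info.get('preserve'):
            # Verify the existing array instead of recreating it;
            # md_check() raises ValueError on any mismatch.
            mdadm.md_check(md_devname, info.get('raidlevel'),
                           devices=info.get('devices', []),
                           spares=info.get('spares', []))
            return
        # ...otherwise fall through to array creation as before...

On success md_check() returns None (as test_md_check_all_good asserts),
so a preserved array is left untouched; any ValueError propagates and
fails the install instead of silently rebuilding the array.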
