Merge lp:~raharper/curtin/trunk.md-uefi into lp:~curtin-dev/curtin/trunk

Proposed by Ryan Harper
Status: Merged
Merged at revision: 495
Proposed branch: lp:~raharper/curtin/trunk.md-uefi
Merge into: lp:~curtin-dev/curtin/trunk
Diff against target: 490 lines (+379/-6)
6 files modified
curtin/block/clear_holders.py (+23/-1)
curtin/block/mdadm.py (+27/-1)
examples/tests/mirrorboot-uefi.yaml (+138/-0)
tests/unittests/test_block_mdadm.py (+94/-1)
tests/unittests/test_clear_holders.py (+57/-3)
tests/vmtests/test_mdadm_bcache.py (+40/-0)
To merge this branch: bzr merge lp:~raharper/curtin/trunk.md-uefi
Reviewer Review Type Date Requested Status
Server Team CI bot continuous-integration Approve
Scott Moser (community) Approve
Chad Smith Approve
Joshua Powers (community) Approve
Review via email: mp+322553@code.launchpad.net

Description of the change

clear-holders: mdadm use /proc/mdstat to wait until an array has been stopped

shutdown_mdadm needs to wait until the md device has stopped. Using files in sysfs is unreliable due to a kernel bug (LP: #1682456), so instead use device presence in /proc/mdstat.

Add the --manage flag so mdadm interrupts actions on the device (such as a resync).

Add a vmtest with a storage config that recreates the issue found (LP: #1682584), using
early_commands to dirty the existing disks with RAID configurations.
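The wait described above can be sketched as a simple retry loop (a minimal sketch; wait_for_md_release and its injectable md_present/retries parameters are illustrative names, not curtin's API — the real loop lives in clear_holders.shutdown_mdadm):

```python
import time

# Retry schedule mirroring the branch: poll every 0.4s, up to 60s total.
MDADM_RELEASE_RETRIES = [0.4] * 150


def wait_for_md_release(md_present, kname, retries=MDADM_RELEASE_RETRIES):
    """Poll until md_present(kname) reports the array gone.

    md_present is a callable checking /proc/mdstat; returns True once the
    kernel has released the device, False if the schedule is exhausted.
    """
    for wait in retries:
        if not md_present(kname):
            return True
        time.sleep(wait)
    # one final check after exhausting the retry schedule
    return not md_present(kname)
```

Polling /proc/mdstat rather than sysfs files sidesteps the kernel bug: the mdstat entry disappears only once the kernel has actually released the array.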

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
lp:~raharper/curtin/trunk.md-uefi updated
449. By Ryan Harper

merge from trunk

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
lp:~raharper/curtin/trunk.md-uefi updated
450. By Ryan Harper

Don't force launch into smp mode

451. By Ryan Harper

merge from trunk

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
lp:~raharper/curtin/trunk.md-uefi updated
452. By Ryan Harper

Fix and add unittests for mdadm.md_present

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Scott Moser (smoser) wrote :

some small things.

Revision history for this message
Chad Smith (chad.smith) wrote :

While I don't have a lot of context on raid setup, just an initial review on the code and tests as written.

Revision history for this message
Ryan Harper (raharper) wrote :

Thanks for the reviews; will update.

lp:~raharper/curtin/trunk.md-uefi updated
453. By Ryan Harper

tl;dr wait-for-mdadm comment

454. By Ryan Harper

Use variable to define how often to retry waiting on mdadm release

455. By Ryan Harper

On shutdown failure, log critical, handle missing /proc/mdstat, add unittests

456. By Ryan Harper

Use IOError, it's supported in both py2 and py3

Revision history for this message
Ryan Harper (raharper) wrote :

I've addressed the comments as I replied earlier. Please re-review.
The updated branch is running through vmtest again.

https://jenkins.ubuntu.com/server/job/curtin-vmtest-devel-debug/32/console

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Joshua Powers (powersj) wrote :

LGTM, given my limited exposure to mdadm

review: Approve
Revision history for this message
Chad Smith (chad.smith) wrote :

+1 with a minor nit below

review: Approve
lp:~raharper/curtin/trunk.md-uefi updated
457. By Ryan Harper

remove unused variable and enumerate of MDADM_RELEASE_RETRIES

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Scott Moser (smoser) :
Revision history for this message
Scott Moser (smoser) wrote :

they're small, but i think worth fixing.

review: Needs Fixing
Revision history for this message
Ryan Harper (raharper) wrote :

On Wed, Apr 26, 2017 at 10:03 AM, Scott Moser <email address hidden> wrote:

> Review: Needs Fixing
>
> they're small, but i think worth fixing.
>
>
> Diff comments:
>
> > === modified file 'curtin/block/clear_holders.py'
> > --- curtin/block/clear_holders.py 2017-04-11 20:52:11 +0000
> > +++ curtin/block/clear_holders.py 2017-04-26 14:18:37 +0000
> > @@ -187,7 +191,26 @@
> > blockdev = block.sysfs_to_devpath(device)
> > LOG.debug('using mdadm.mdadm_stop on dev: %s', blockdev)
> > mdadm.mdadm_stop(blockdev)
> > - mdadm.mdadm_remove(blockdev)
> > +
> > + # mdadm stop operation is asynchronous so we must wait for the
> kernel to
> > + # release resources. For more details see lp:1682456
> > + try:
> > + for wait in MDADM_RELEASE_RETRIES:
> > + if mdadm.md_present(block.path_to_kname(blockdev)):
> > + time.sleep(wait)
> > + else:
> > + LOG.debug('%s has been removed', blockdev)
> > + break
> > +
> > + if mdadm.md_present(block.path_to_kname(blockdev)):
> > + raise OSError('Timeout exceeded for removal of %s',
> blockdev)
> > +
> > + except OSError:
> > + LOG.critical('Failed to stop mdadm device %s', device)
> > + if os.path.exists('/proc/mdstat'):
> > + out, _ = util.subp(['cat', '/proc/mdstat'], capture=True)
>
> use util.load_file here. cat and capture for reading a file?
>

It's really just debugging, so the method doesn't matter much.
I'm fine with using load_file.

>
> > + LOG.critical(out)
> > + raise
> >
> >
> > def wipe_superblock(device):
> >
> > === modified file 'curtin/block/mdadm.py'
> > --- curtin/block/mdadm.py 2017-01-31 19:15:24 +0000
> > +++ curtin/block/mdadm.py 2017-04-26 14:18:37 +0000
> > @@ -293,6 +294,26 @@
> > return out
> >
> >
> > +def md_present(mdname):
> > + """Check if mdname is present in /proc/mdstat"""
> > + if not mdname:
> > + raise ValueError('md_present requires a valid md name')
> > +
> > + try:
> > + mdstat = util.load_file('/proc/mdstat')
> > + except IOError:
> > + LOG.warning('Failed to read /proc/mdstat; '
>
> we really should check if this is a ENOENT. that is very specifically
> different than any other type of IOError
> If we got a IOError reading that file, we should just raise the exception
> and fail rather than printing this message.
>
> if util.is_file_not_found_exc(e):
> LOG.warning("....")
> else:
> raise e
>

You mentioned that before; somehow I failed to find it and thought you
were suggesting to write one.

>
> > + 'md modules might not be loaded')
> > + return False
> > +
> > + md_kname = dev_short(mdname)
> > + present = [line for line in mdstat.splitlines()
> > + if line.startswith(md_kname)]
> > + if len(present) > 0:
> > + return True
> > + return False
> > +
> > +
> > # ------------------------------ #
> > def valid_mdname(md_devname):
> > assert_valid_devpath(md_devname)
>
>
> --
> https://code.launchpad.net/~raharper/curtin/trunk.md-uefi/+merge/322553
> You are the owner of lp:~raharper/c...


lp:~raharper/curtin/trunk.md-uefi updated
458. By Ryan Harper

Use load_file, check exception for file-not-found

459. By Ryan Harper

Merge from trunk

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
lp:~raharper/curtin/trunk.md-uefi updated
460. By Ryan Harper

merge from trunk

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Scott Moser (smoser) wrote :

There are some nits here.
only one really needs to be fixed ("md10".startswith("md1") == true).

fix my nits (you dont have to fix the util.loadfile if you dont want)
and then i approve.

review: Approve
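The prefix-match pitfall Scott flagged is easy to demonstrate (md_line_matches is an illustrative helper, not curtin API; the exact-field comparison is what landed in md_present):

```python
def md_line_matches(line, md_kname):
    # Compare the device field before the first ':' exactly; a plain
    # startswith() check would wrongly match 'md10' when looking for 'md1'.
    return line.split(":")[0].rstrip() == md_kname


line = "md10 : active raid1 vdc1[1] vda2[0]"
print("md10".startswith("md1"))       # True  -- the bug
print(md_line_matches(line, "md1"))   # False -- exact match avoids it
print(md_line_matches(line, "md10"))  # True
```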
lp:~raharper/curtin/trunk.md-uefi updated
461. By Ryan Harper

Match entire mdname not just the start

462. By Ryan Harper

fix style lint

463. By Ryan Harper

drop useless out return value

464. By Ryan Harper

Change format for LP bug reference

465. By Ryan Harper

remove debugging late_commands from mirrorboot-uefi vmtest

Revision history for this message
Ryan Harper (raharper) wrote :

Applied fixes for all; thanks for the .startswith catch.

I've pushed the fixes to this MR and started a vmtest run for just the mdadm tests here:

https://jenkins.ubuntu.com/server/job/curtin-vmtest-devel-debug/36/

If that comes out green, I'll land this to trunk tomorrow AM.

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)

Preview Diff

=== modified file 'curtin/block/clear_holders.py'
--- curtin/block/clear_holders.py	2017-04-11 20:52:11 +0000
+++ curtin/block/clear_holders.py	2017-05-03 02:14:17 +0000
@@ -23,12 +23,16 @@
 
 import errno
 import os
+import time
 
 from curtin import (block, udev, util)
 from curtin.block import lvm
 from curtin.block import mdadm
 from curtin.log import LOG
 
+# poll frequenty, but wait up to 60 seconds total
+MDADM_RELEASE_RETRIES = [0.4] * 150
+
 
 def _define_handlers_registry():
     """
@@ -187,7 +191,25 @@
     blockdev = block.sysfs_to_devpath(device)
     LOG.debug('using mdadm.mdadm_stop on dev: %s', blockdev)
     mdadm.mdadm_stop(blockdev)
-    mdadm.mdadm_remove(blockdev)
+
+    # mdadm stop operation is asynchronous so we must wait for the kernel to
+    # release resources. For more details see LP: #1682456
+    try:
+        for wait in MDADM_RELEASE_RETRIES:
+            if mdadm.md_present(block.path_to_kname(blockdev)):
+                time.sleep(wait)
+            else:
+                LOG.debug('%s has been removed', blockdev)
+                break
+
+        if mdadm.md_present(block.path_to_kname(blockdev)):
+            raise OSError('Timeout exceeded for removal of %s', blockdev)
+
+    except OSError:
+        LOG.critical('Failed to stop mdadm device %s', device)
+        if os.path.exists('/proc/mdstat'):
+            LOG.critical("/proc/mdstat:\n%s", util.load_file('/proc/mdstat'))
+        raise
 
 
 def wipe_superblock(device):
 
=== modified file 'curtin/block/mdadm.py'
--- curtin/block/mdadm.py	2017-01-31 19:15:24 +0000
+++ curtin/block/mdadm.py	2017-05-03 02:14:17 +0000
@@ -257,7 +257,8 @@
     assert_valid_devpath(devpath)
 
     LOG.info("mdadm stopping: %s" % devpath)
-    out, err = util.subp(["mdadm", "--stop", devpath], capture=True)
+    out, err = util.subp(["mdadm", "--manage", "--stop", devpath],
+                         capture=True)
     LOG.debug("mdadm stop:\n%s\n%s", out, err)
 
 
@@ -293,6 +294,31 @@
     return out
 
 
+def md_present(mdname):
+    """Check if mdname is present in /proc/mdstat"""
+    if not mdname:
+        raise ValueError('md_present requires a valid md name')
+
+    try:
+        mdstat = util.load_file('/proc/mdstat')
+    except IOError as e:
+        if util.is_file_not_found_exc(e):
+            LOG.warning('Failed to read /proc/mdstat; '
+                        'md modules might not be loaded')
+            return False
+        else:
+            raise e
+
+    md_kname = dev_short(mdname)
+    # Find lines like:
+    # md10 : active raid1 vdc1[1] vda2[0]
+    present = [line for line in mdstat.splitlines()
+               if line.split(":")[0].rstrip() == md_kname]
+    if len(present) > 0:
+        return True
+    return False
+
+
 # ------------------------------ #
 def valid_mdname(md_devname):
     assert_valid_devpath(md_devname)
 
=== added file 'examples/tests/mirrorboot-uefi.yaml'
--- examples/tests/mirrorboot-uefi.yaml	1970-01-01 00:00:00 +0000
+++ examples/tests/mirrorboot-uefi.yaml	2017-05-03 02:14:17 +0000
@@ -0,0 +1,138 @@
+showtrace: true
+
+early_commands:
+  # running block-meta custom from the install environment
+  # inherits the CONFIG environment, so this works to actually prepare
+  # the disks exactly as in this config before the rest of the install
+  # will just blow it all away. We have to clean out the other
+  # environment that could unintentionally mess things up.
+  blockmeta: [env, -u, OUTPUT_FSTAB,
+              TARGET_MOUNT_POINT=/tmp/my.bdir/target,
+              WORKING_DIR=/tmp/my.bdir/work.d,
+              curtin, --showtrace, -v, block-meta, --umount, custom]
+
+storage:
+  config:
+  - grub_device: true
+    id: sda
+    name: sda
+    ptable: msdos
+    type: disk
+    wipe: superblock
+    path: /dev/vdb
+    name: main_disk
+  - id: sdb
+    name: sdb
+    ptable: gpt
+    type: disk
+    wipe: superblock
+    path: /dev/vdc
+    name: second_disk
+  - device: sda
+    flag: boot
+    id: sda-part1
+    name: sda-part1
+    number: 1
+    offset: 4194304B
+    size: 511705088B
+    type: partition
+    uuid: fc7ab24c-b6bf-460f-8446-d3ac362c0625
+    wipe: superblock
+  - device: sda
+    id: sda-part2
+    name: sda-part2
+    number: 2
+    size: 2G
+    type: partition
+    uuid: 47c97eae-f35d-473f-8f3d-d64161d571f1
+    wipe: superblock
+  - device: sda
+    id: sda-part3
+    name: sda-part3
+    number: 3
+    size: 2G
+    type: partition
+    uuid: e3202633-841c-4936-a520-b18d1f7938ea
+    wipe: superblock
+  - device: sdb
+    flag: boot
+    id: sdb-part1
+    name: sdb-part1
+    number: 1
+    offset: 4194304B
+    size: 511705088B
+    type: partition
+    uuid: 86326392-3706-4124-87c6-2992acfa31cc
+    wipe: superblock
+  - device: sdb
+    id: sdb-part2
+    name: sdb-part2
+    number: 2
+    size: 2G
+    type: partition
+    uuid: a33a83dd-d1bf-4940-bf3e-6d931de85dbc
+    wipe: superblock
+  - devices:
+    - sda-part2
+    - sdb-part2
+    id: md0
+    name: md0
+    raidlevel: 1
+    spare_devices: []
+    type: raid
+  - device: sdb
+    id: sdb-part3
+    name: sdb-part3
+    number: 3
+    size: 2G
+    type: partition
+    uuid: 27e29758-fdcf-4c6a-8578-c92f907a8a9d
+    wipe: superblock
+  - devices:
+    - sda-part3
+    - sdb-part3
+    id: md1
+    name: md1
+    raidlevel: 1
+    spare_devices: []
+    type: raid
+  - fstype: fat32
+    id: sda-part1_format
+    label: efi
+    type: format
+    uuid: b3d50fc7-2f9e-4d1a-9e24-28985e4c560b
+    volume: sda-part1
+  - fstype: fat32
+    id: sdb-part1_format
+    label: efi
+    type: format
+    uuid: c604cbb1-2ee1-4575-9489-d38a60fa0cf2
+    volume: sdb-part1
+  - fstype: ext4
+    id: md0_format
+    label: ''
+    type: format
+    uuid: 76a315b7-2979-436c-b156-9ae64a565a59
+    volume: md0
+  - fstype: ext4
+    id: md1_format
+    label: ''
+    type: format
+    uuid: 48dceca6-a9f9-4c7b-bfd3-7f3a0faa4ecc
+    volume: md1
+  - device: md0_format
+    id: md0_mount
+    options: ''
+    path: /
+    type: mount
+  - device: sda-part1_format
+    id: sda-part1_mount
+    options: ''
+    path: /boot/efi
+    type: mount
+  - device: md1_format
+    id: md1_mount
+    options: ''
+    path: /var
+    type: mount
+  version: 1
=== modified file 'tests/unittests/test_block_mdadm.py'
--- tests/unittests/test_block_mdadm.py	2017-01-27 01:00:39 +0000
+++ tests/unittests/test_block_mdadm.py	2017-05-03 02:14:17 +0000
@@ -5,6 +5,7 @@
 from curtin import util
 import os
 import subprocess
+import textwrap
 
 
 class MdadmTestBase(TestCase):
@@ -348,7 +349,7 @@
         device = "/dev/vdc"
         mdadm.mdadm_stop(device)
         expected_calls = [
-            call(["mdadm", "--stop", device], capture=True),
+            call(["mdadm", "--manage", "--stop", device], capture=True)
         ]
         self.mock_util.subp.assert_has_calls(expected_calls)
 
@@ -944,4 +945,96 @@
         mdadm.md_check(md_devname, raidlevel, devices=devices,
                        spares=spares)
 
+    def test_md_present(self):
+        mdname = 'md0'
+        self.mock_util.load_file.return_value = textwrap.dedent("""
+        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
+        [raid4] [raid10]
+        md0 : active raid1 vdc1[1] vda2[0]
+        3143680 blocks super 1.2 [2/2] [UU]
+
+        unused devices: <none>
+        """)
+
+        md_is_present = mdadm.md_present(mdname)
+
+        self.assertTrue(md_is_present)
+        self.mock_util.load_file.assert_called_with('/proc/mdstat')
+
+    def test_md_present_not_found(self):
+        mdname = 'md1'
+        self.mock_util.load_file.return_value = textwrap.dedent("""
+        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
+        [raid4] [raid10]
+        md0 : active raid1 vdc1[1] vda2[0]
+        3143680 blocks super 1.2 [2/2] [UU]
+
+        unused devices: <none>
+        """)
+
+        md_is_present = mdadm.md_present(mdname)
+
+        self.assertFalse(md_is_present)
+        self.mock_util.load_file.assert_called_with('/proc/mdstat')
+
+    def test_md_present_not_found_check_matching(self):
+        mdname = 'md1'
+        found_mdname = 'md10'
+        self.mock_util.load_file.return_value = textwrap.dedent("""
+        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
+        [raid4] [raid10]
+        md10 : active raid1 vdc1[1] vda2[0]
+        3143680 blocks super 1.2 [2/2] [UU]
+
+        unused devices: <none>
+        """)
+
+        md_is_present = mdadm.md_present(mdname)
+
+        self.assertFalse(md_is_present,
+                         "%s mistakenly matched %s" % (mdname, found_mdname))
+        self.mock_util.load_file.assert_called_with('/proc/mdstat')
+
+    def test_md_present_with_dev_path(self):
+        mdname = '/dev/md0'
+        self.mock_util.load_file.return_value = textwrap.dedent("""
+        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
+        [raid4] [raid10]
+        md0 : active raid1 vdc1[1] vda2[0]
+        3143680 blocks super 1.2 [2/2] [UU]
+
+        unused devices: <none>
+        """)
+
+        md_is_present = mdadm.md_present(mdname)
+
+        self.assertTrue(md_is_present)
+        self.mock_util.load_file.assert_called_with('/proc/mdstat')
+
+    def test_md_present_none(self):
+        mdname = ''
+        self.mock_util.load_file.return_value = textwrap.dedent("""
+        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
+        [raid4] [raid10]
+        md0 : active raid1 vdc1[1] vda2[0]
+        3143680 blocks super 1.2 [2/2] [UU]
+
+        unused devices: <none>
+        """)
+
+        with self.assertRaises(ValueError):
+            mdadm.md_present(mdname)
+
+        # util.load_file should NOT have been called
+        self.assertEqual([], self.mock_util.call_args_list)
+
+    def test_md_present_no_proc_mdstat(self):
+        mdname = 'md0'
+        self.mock_util.side_effect = IOError
+
+        md_is_present = mdadm.md_present(mdname)
+        self.assertFalse(md_is_present)
+        self.mock_util.load_file.assert_called_with('/proc/mdstat')
+
+
 # vi: ts=4 expandtab syntax=python
 
=== modified file 'tests/unittests/test_clear_holders.py'
--- tests/unittests/test_clear_holders.py	2017-04-11 20:52:11 +0000
+++ tests/unittests/test_clear_holders.py	2017-05-03 02:14:17 +0000
@@ -360,16 +360,70 @@
         mock_util.subp.assert_called_with(
             ['cryptsetup', 'remove', self.test_blockdev], capture=True)
 
+    @mock.patch('curtin.block.clear_holders.time')
+    @mock.patch('curtin.block.clear_holders.util')
     @mock.patch('curtin.block.clear_holders.LOG')
     @mock.patch('curtin.block.clear_holders.mdadm')
     @mock.patch('curtin.block.clear_holders.block')
-    def test_shutdown_mdadm(self, mock_block, mock_mdadm, mock_log):
+    def test_shutdown_mdadm(self, mock_block, mock_mdadm, mock_log, mock_util,
+                            mock_time):
         """test clear_holders.shutdown_mdadm"""
         mock_block.sysfs_to_devpath.return_value = self.test_blockdev
+        mock_block.path_to_kname.return_value = self.test_blockdev
+        mock_mdadm.md_present.return_value = False
         clear_holders.shutdown_mdadm(self.test_syspath)
         mock_mdadm.mdadm_stop.assert_called_with(self.test_blockdev)
-        mock_mdadm.mdadm_remove.assert_called_with(self.test_blockdev)
+        mock_mdadm.md_present.assert_called_with(self.test_blockdev)
         self.assertTrue(mock_log.debug.called)
+
+    @mock.patch('curtin.block.clear_holders.os')
+    @mock.patch('curtin.block.clear_holders.time')
+    @mock.patch('curtin.block.clear_holders.util')
+    @mock.patch('curtin.block.clear_holders.LOG')
+    @mock.patch('curtin.block.clear_holders.mdadm')
+    @mock.patch('curtin.block.clear_holders.block')
+    def test_shutdown_mdadm_fail_raises_oserror(self, mock_block, mock_mdadm,
+                                                mock_log, mock_util, mock_time,
+                                                mock_os):
+        """test clear_holders.shutdown_mdadm raises OSError on failure"""
+        mock_block.sysfs_to_devpath.return_value = self.test_blockdev
+        mock_block.path_to_kname.return_value = self.test_blockdev
+        mock_mdadm.md_present.return_value = True
+        mock_util.subp.return_value = ("", "")
+        mock_os.path.exists.return_value = True
+
+        with self.assertRaises(OSError):
+            clear_holders.shutdown_mdadm(self.test_syspath)
+
+        mock_mdadm.mdadm_stop.assert_called_with(self.test_blockdev)
+        mock_mdadm.md_present.assert_called_with(self.test_blockdev)
+        mock_util.load_file.assert_called_with('/proc/mdstat')
+        self.assertTrue(mock_log.debug.called)
+        self.assertTrue(mock_log.critical.called)
+
+    @mock.patch('curtin.block.clear_holders.os')
+    @mock.patch('curtin.block.clear_holders.time')
+    @mock.patch('curtin.block.clear_holders.util')
+    @mock.patch('curtin.block.clear_holders.LOG')
+    @mock.patch('curtin.block.clear_holders.mdadm')
+    @mock.patch('curtin.block.clear_holders.block')
+    def test_shutdown_mdadm_fails_no_proc_mdstat(self, mock_block, mock_mdadm,
+                                                 mock_log, mock_util,
+                                                 mock_time, mock_os):
+        """test clear_holders.shutdown_mdadm handles no /proc/mdstat"""
+        mock_block.sysfs_to_devpath.return_value = self.test_blockdev
+        mock_block.path_to_kname.return_value = self.test_blockdev
+        mock_mdadm.md_present.return_value = True
+        mock_os.path.exists.return_value = False
+
+        with self.assertRaises(OSError):
+            clear_holders.shutdown_mdadm(self.test_syspath)
+
+        mock_mdadm.mdadm_stop.assert_called_with(self.test_blockdev)
+        mock_mdadm.md_present.assert_called_with(self.test_blockdev)
+        self.assertEqual([], mock_util.subp.call_args_list)
+        self.assertTrue(mock_log.debug.called)
+        self.assertTrue(mock_log.critical.called)
 
     @mock.patch('curtin.block.clear_holders.LOG')
     @mock.patch('curtin.block.clear_holders.block')
 
=== modified file 'tests/vmtests/test_mdadm_bcache.py'
--- tests/vmtests/test_mdadm_bcache.py	2017-04-03 02:27:33 +0000
+++ tests/vmtests/test_mdadm_bcache.py	2017-05-03 02:14:17 +0000
@@ -16,6 +16,10 @@
         grep -c active /proc/mdstat > mdadm_active2
         ls /dev/disk/by-dname > ls_dname
         find /etc/network/interfaces.d > find_interfacesd
+        cat /proc/mdstat | tee mdstat
+        cat /proc/partitions | tee procpartitions
+        ls -1 /sys/class/block | tee sys_class_block
+        ls -1 /dev/md* | tee dev_md
         """)]
 
     def test_mdadm_output_files_exist(self):
@@ -234,6 +238,42 @@
     __test__ = True
 
 
+class TestMirrorbootPartitionsUEFIAbs(TestMdadmAbs):
+    # alternative config for more complex setup
+    conf_file = "examples/tests/mirrorboot-uefi.yaml"
+    # initialize secondary disk
+    extra_disks = ['10G']
+    disk_to_check = [('main_disk', 2),
+                     ('second_disk', 3),
+                     ('md0', 0),
+                     ('md1', 0)]
+    active_mdadm = "2"
+    uefi = True
+
+
+class TrustyTestMirrorbootPartitionsUEFI(relbase.trusty,
+                                         TestMirrorbootPartitionsUEFIAbs):
+    __test__ = True
+
+    # FIXME(LP: #1523037): dname does not work on trusty
+    # when dname works on trusty, then we need to re-enable by removing line.
+    def test_dname(self):
+        print("test_dname does not work for Trusty")
+
+    def test_ptable(self):
+        print("test_ptable does not work for Trusty")
+
+
+class XenialTestMirrorbootPartitionsUEFI(relbase.xenial,
+                                         TestMirrorbootPartitionsUEFIAbs):
+    __test__ = True
+
+
+class ZestyTestMirrorbootPartitionsUEFI(relbase.zesty,
+                                        TestMirrorbootPartitionsUEFIAbs):
+    __test__ = True
+
+
 class TestRaid5bootAbs(TestMdadmAbs):
     # alternative config for more complex setup
     conf_file = "examples/tests/raid5boot.yaml"
