Merge lp:~raharper/curtin/trunk.md-uefi into lp:~curtin-dev/curtin/trunk
| Status: | Merged |
|---|---|
| Merged at revision: | 495 |
| Proposed branch: | lp:~raharper/curtin/trunk.md-uefi |
| Merge into: | lp:~curtin-dev/curtin/trunk |
| Diff against target: | 490 lines (+379/-6), 6 files modified |
| To merge this branch: | bzr merge lp:~raharper/curtin/trunk.md-uefi |
| Related bugs: | |

Files modified: curtin/block/clear_holders.py (+23/-1), curtin/block/mdadm.py (+27/-1), examples/tests/mirrorboot-uefi.yaml (+138/-0), tests/unittests/test_block_mdadm.py (+94/-1), tests/unittests/test_clear_holders.py (+57/-3), tests/vmtests/test_mdadm_bcache.py (+40/-0)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Server Team CI bot | continuous-integration | | Approve |
| Scott Moser (community) | | | Approve |
| Chad Smith | | | Approve |
| Joshua Powers (community) | | | Approve |
Review via email: mp+322553@code.launchpad.net
Commit message
Description of the change
clear-holders: mdadm: use /proc/mdstat to wait until an array has been stopped

shutdown_mdadm needs to wait until the md device has stopped. Using files in sysfs is unreliable due to a kernel bug (LP: #1682456), so instead use device presence in /proc/mdstat.

Add the --manage flag to force mdadm to interrupt actions on the device (like a resync).

Add a vmtest with a storage config that recreates the issue found (LP: #1682584), using early_commands to dirty existing disks with raid configurations.
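The wait-for-release loop described above can be sketched as follows. This is a standalone rendering for illustration: the names mirror the patch, but the retry schedule and helper signatures here are assumptions, and the real implementation lives in curtin/block/clear_holders.py and curtin/block/mdadm.py.

```python
import time

# Poll frequently, but cap the total wait at roughly 60 seconds.
MDADM_RELEASE_RETRIES = [0.4] * 150


def md_present(md_kname, mdstat_path='/proc/mdstat'):
    """Return True if md_kname (e.g. 'md0') is listed as an array in mdstat."""
    try:
        with open(mdstat_path) as f:
            mdstat = f.read()
    except IOError:
        # md modules may not be loaded; no mdstat file means no arrays
        return False
    # Array lines look like: "md0 : active raid1 vdc1[1] vda2[0]"
    return any(line.split(':')[0].rstrip() == md_kname
               for line in mdstat.splitlines())


def wait_for_md_release(md_kname, mdstat_path='/proc/mdstat'):
    """Block until the kernel drops md_kname from mdstat, or time out."""
    for wait in MDADM_RELEASE_RETRIES:
        if not md_present(md_kname, mdstat_path=mdstat_path):
            return
        time.sleep(wait)
    raise OSError('Timeout exceeded for removal of %s' % md_kname)
```

The key point is that `mdadm --stop` returns before the kernel has released the device, so presence in /proc/mdstat, not sysfs, is polled until the array disappears.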
Server Team CI bot (server-team-bot) wrote:
- 449. By Ryan Harper: merge from trunk
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:449
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
- 450. By Ryan Harper: Don't force launch into smp mode
- 451. By Ryan Harper: merge from trunk
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:451
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
- 452. By Ryan Harper: Fix and add unittests for mdadm.md_present
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:452
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Scott Moser (smoser) wrote:
some small things.
Chad Smith (chad.smith) wrote:
While I don't have a lot of context on raid setup, just an initial review on the code and tests as written.
Ryan Harper (raharper) wrote:
Thanks for the reviews; will update.
- 453. By Ryan Harper: tl;dr wait-for-mdadm comment
- 454. By Ryan Harper: Use variable to define how often to retry waiting on mdadm release
- 455. By Ryan Harper: On shutdown failure, log critical, handle missing /proc/mdstat, add unittests
- 456. By Ryan Harper: Use IOError, it's supported in both py2 and py3
Ryan Harper (raharper) wrote:
I've addressed the comments as I replied earlier. Please re-review.
The updated branch is running through vmtest again.
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:456
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Joshua Powers (powersj) wrote:
LGTM, given my limited exposure to mdadm
Chad Smith (chad.smith) wrote:
+1 with a minor nit below
- 457. By Ryan Harper: remove unused variable and enumerate of MDADM_RELEASE_RETRIES
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:457
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Scott Moser (smoser) wrote:
They're small, but I think worth fixing.
Ryan Harper (raharper) wrote:
On Wed, Apr 26, 2017 at 10:03 AM, Scott Moser <email address hidden> wrote:
> Review: Needs Fixing
>
> they're small, but i think worth fixing.
>
>
> Diff comments:
>
> > === modified file 'curtin/
> > --- curtin/
> > +++ curtin/
> > @@ -187,7 +191,26 @@
> > blockdev = block.sysfs_
> > LOG.debug('using mdadm.mdadm_stop on dev: %s', blockdev)
> > mdadm.mdadm_
> > - mdadm.mdadm_
> > +
> > + # mdadm stop operation is asynchronous so we must wait for the
> kernel to
> > + # release resources. For more details see lp:1682456
> > + try:
> > + for wait in MDADM_RELEASE_
> > + if mdadm.md_
> > + time.sleep(wait)
> > + else:
> > + LOG.debug('%s has been removed', blockdev)
> > + break
> > +
> > + if mdadm.md_
> > + raise OSError('Timeout exceeded for removal of %s',
> blockdev)
> > +
> > + except OSError:
> > + LOG.critical(
> > + if os.path.
> > + out, _ = util.subp(['cat', '/proc/mdstat'], capture=True)
>
> use util.load_file here. cat and capture for reading a file?
>
It's really just debugging, so the method doesn't matter much.
I'm fine with using load_file.
>
> > + LOG.critical(out)
> > + raise
> >
> >
> > def wipe_superblock
> >
> > === modified file 'curtin/
> > --- curtin/
> > +++ curtin/
> > @@ -293,6 +294,26 @@
> > return out
> >
> >
> > +def md_present(mdname):
> > + """Check if mdname is present in /proc/mdstat"""
> > + if not mdname:
> > + raise ValueError(
> > +
> > + try:
> > + mdstat = util.load_
> > + except IOError:
> > + LOG.warning('Failed to read /proc/mdstat; '
>
> we really should check if this is a ENOENT. that is very specifically
> different than any other type of IOError
> If we got a IOError reading that file, we should just raise the exception
> and fail rather than printing this message.
>
> if util.is_
> LOG.warning("....")
> else:
> raise e
>
You mentioned that before, and somehow I failed to find it and thought you were suggesting to write one.
>
> > + 'md modules might not be loaded')
> > + return False
> > +
> > + md_kname = dev_short(mdname)
> > + present = [line for line in mdstat.splitlines()
> > + if line.startswith
> > + if len(present) > 0:
> > + return True
> > + return False
> > +
> > +
> > # -------
> > def valid_mdname(
> > assert_
>
>
> --
> https:/
> You are the owner of lp:~raharper/c...
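The ENOENT distinction Scott asks for in the review above can be sketched like this. This is a standalone rendering under stated assumptions: in curtin the check is provided by util.is_file_not_found_exc and the reader by util.load_file; the helpers below are my stand-ins for illustration.

```python
import errno


def is_file_not_found_exc(exc):
    """True only for 'no such file' errors; other IOErrors stay fatal."""
    return isinstance(exc, (IOError, OSError)) and exc.errno == errno.ENOENT


def read_mdstat(path='/proc/mdstat'):
    """Return mdstat contents, or None when the file simply isn't there."""
    try:
        with open(path) as f:
            return f.read()
    except IOError as e:
        if is_file_not_found_exc(e):
            # md modules might not be loaded; not an error for our purposes
            return None
        # Any other failure (EACCES, EIO, ...) should propagate.
        raise
```

The point of the review comment: a missing /proc/mdstat just means the md modules aren't loaded, but any other read failure is a real error and must not be swallowed by a warning.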
- 458. By Ryan Harper: Use load_file, check exception for file-not-found
- 459. By Ryan Harper: Merge from trunk
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:459
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
- 460. By Ryan Harper: merge from trunk
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:460
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Scott Moser (smoser) wrote:
There are some nits here.
Only one really needs to be fixed ("md10").
Fix my nits (you don't have to fix the util.load_file one if you don't want) and then I approve.
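The "md10" nit is a prefix-matching pitfall: filtering /proc/mdstat lines with startswith lets "md1" falsely match an "md10" array. A minimal before/after sketch (function names are mine, for illustration; the fix landed in revision 461):

```python
# A single mdstat array line, as the kernel formats it.
MDSTAT = "md10 : active raid1 vdc1[1] vda2[0]\n"


def md_present_startswith(mdstat, md_kname):
    # Buggy: startswith('md1') also matches the 'md10' line
    return any(line.startswith(md_kname) for line in mdstat.splitlines())


def md_present_exact(mdstat, md_kname):
    # Fixed: compare the full kname before the ':' separator
    return any(line.split(':')[0].rstrip() == md_kname
               for line in mdstat.splitlines())
```

Comparing the whole token before the colon makes "md1" and "md10" distinct, which is exactly what test_md_present_not_found_check_matching in the diff below exercises.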
- 461. By Ryan Harper: Match entire mdname not just the start
- 462. By Ryan Harper: fix style lint
- 463. By Ryan Harper: drop useless out return value
- 464. By Ryan Harper: Change format for LP bug reference
- 465. By Ryan Harper: remove debugging late_commands from mirrorboot-uefi vmtest
Ryan Harper (raharper) wrote:
Applied fixes for all; thanks for the .startswith catch.
I've pushed the fixes to this MR and started a vmtest run for just the mdadm tests here:
https:/
If that comes out green, I'll land this to trunk tomorrow AM.
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:465
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Preview Diff
1 | === modified file 'curtin/block/clear_holders.py' |
2 | --- curtin/block/clear_holders.py 2017-04-11 20:52:11 +0000 |
3 | +++ curtin/block/clear_holders.py 2017-05-03 02:14:17 +0000 |
4 | @@ -23,12 +23,16 @@ |
5 | |
6 | import errno |
7 | import os |
8 | +import time |
9 | |
10 | from curtin import (block, udev, util) |
11 | from curtin.block import lvm |
12 | from curtin.block import mdadm |
13 | from curtin.log import LOG |
14 | |
15 | +# poll frequenty, but wait up to 60 seconds total |
16 | +MDADM_RELEASE_RETRIES = [0.4] * 150 |
17 | + |
18 | |
19 | def _define_handlers_registry(): |
20 | """ |
21 | @@ -187,7 +191,25 @@ |
22 | blockdev = block.sysfs_to_devpath(device) |
23 | LOG.debug('using mdadm.mdadm_stop on dev: %s', blockdev) |
24 | mdadm.mdadm_stop(blockdev) |
25 | - mdadm.mdadm_remove(blockdev) |
26 | + |
27 | + # mdadm stop operation is asynchronous so we must wait for the kernel to |
28 | + # release resources. For more details see LP: #1682456 |
29 | + try: |
30 | + for wait in MDADM_RELEASE_RETRIES: |
31 | + if mdadm.md_present(block.path_to_kname(blockdev)): |
32 | + time.sleep(wait) |
33 | + else: |
34 | + LOG.debug('%s has been removed', blockdev) |
35 | + break |
36 | + |
37 | + if mdadm.md_present(block.path_to_kname(blockdev)): |
38 | + raise OSError('Timeout exceeded for removal of %s', blockdev) |
39 | + |
40 | + except OSError: |
41 | + LOG.critical('Failed to stop mdadm device %s', device) |
42 | + if os.path.exists('/proc/mdstat'): |
43 | + LOG.critical("/proc/mdstat:\n%s", util.load_file('/proc/mdstat')) |
44 | + raise |
45 | |
46 | |
47 | def wipe_superblock(device): |
48 | |
49 | === modified file 'curtin/block/mdadm.py' |
50 | --- curtin/block/mdadm.py 2017-01-31 19:15:24 +0000 |
51 | +++ curtin/block/mdadm.py 2017-05-03 02:14:17 +0000 |
52 | @@ -257,7 +257,8 @@ |
53 | assert_valid_devpath(devpath) |
54 | |
55 | LOG.info("mdadm stopping: %s" % devpath) |
56 | - out, err = util.subp(["mdadm", "--stop", devpath], capture=True) |
57 | + out, err = util.subp(["mdadm", "--manage", "--stop", devpath], |
58 | + capture=True) |
59 | LOG.debug("mdadm stop:\n%s\n%s", out, err) |
60 | |
61 | |
62 | @@ -293,6 +294,31 @@ |
63 | return out |
64 | |
65 | |
66 | +def md_present(mdname): |
67 | + """Check if mdname is present in /proc/mdstat""" |
68 | + if not mdname: |
69 | + raise ValueError('md_present requires a valid md name') |
70 | + |
71 | + try: |
72 | + mdstat = util.load_file('/proc/mdstat') |
73 | + except IOError as e: |
74 | + if util.is_file_not_found_exc(e): |
75 | + LOG.warning('Failed to read /proc/mdstat; ' |
76 | + 'md modules might not be loaded') |
77 | + return False |
78 | + else: |
79 | + raise e |
80 | + |
81 | + md_kname = dev_short(mdname) |
82 | + # Find lines like: |
83 | + # md10 : active raid1 vdc1[1] vda2[0] |
84 | + present = [line for line in mdstat.splitlines() |
85 | + if line.split(":")[0].rstrip() == md_kname] |
86 | + if len(present) > 0: |
87 | + return True |
88 | + return False |
89 | + |
90 | + |
91 | # ------------------------------ # |
92 | def valid_mdname(md_devname): |
93 | assert_valid_devpath(md_devname) |
94 | |
95 | === added file 'examples/tests/mirrorboot-uefi.yaml' |
96 | --- examples/tests/mirrorboot-uefi.yaml 1970-01-01 00:00:00 +0000 |
97 | +++ examples/tests/mirrorboot-uefi.yaml 2017-05-03 02:14:17 +0000 |
98 | @@ -0,0 +1,138 @@ |
99 | +showtrace: true |
100 | + |
101 | +early_commands: |
102 | + # running block-meta custom from the install environment |
103 | + # inherits the CONFIG environment, so this works to actually prepare |
104 | + # the disks exactly as in this config before the rest of the install |
105 | + # will just blow it all away. We have to clean out the other |
106 | + # environment that could unintentionally mess things up. |
107 | + blockmeta: [env, -u, OUTPUT_FSTAB, |
108 | + TARGET_MOUNT_POINT=/tmp/my.bdir/target, |
109 | + WORKING_DIR=/tmp/my.bdir/work.d, |
110 | + curtin, --showtrace, -v, block-meta, --umount, custom] |
111 | + |
112 | +storage: |
113 | + config: |
114 | + - grub_device: true |
115 | + id: sda |
116 | + name: sda |
117 | + ptable: msdos |
118 | + type: disk |
119 | + wipe: superblock |
120 | + path: /dev/vdb |
121 | + name: main_disk |
122 | + - id: sdb |
123 | + name: sdb |
124 | + ptable: gpt |
125 | + type: disk |
126 | + wipe: superblock |
127 | + path: /dev/vdc |
128 | + name: second_disk |
129 | + - device: sda |
130 | + flag: boot |
131 | + id: sda-part1 |
132 | + name: sda-part1 |
133 | + number: 1 |
134 | + offset: 4194304B |
135 | + size: 511705088B |
136 | + type: partition |
137 | + uuid: fc7ab24c-b6bf-460f-8446-d3ac362c0625 |
138 | + wipe: superblock |
139 | + - device: sda |
140 | + id: sda-part2 |
141 | + name: sda-part2 |
142 | + number: 2 |
143 | + size: 2G |
144 | + type: partition |
145 | + uuid: 47c97eae-f35d-473f-8f3d-d64161d571f1 |
146 | + wipe: superblock |
147 | + - device: sda |
148 | + id: sda-part3 |
149 | + name: sda-part3 |
150 | + number: 3 |
151 | + size: 2G |
152 | + type: partition |
153 | + uuid: e3202633-841c-4936-a520-b18d1f7938ea |
154 | + wipe: superblock |
155 | + - device: sdb |
156 | + flag: boot |
157 | + id: sdb-part1 |
158 | + name: sdb-part1 |
159 | + number: 1 |
160 | + offset: 4194304B |
161 | + size: 511705088B |
162 | + type: partition |
163 | + uuid: 86326392-3706-4124-87c6-2992acfa31cc |
164 | + wipe: superblock |
165 | + - device: sdb |
166 | + id: sdb-part2 |
167 | + name: sdb-part2 |
168 | + number: 2 |
169 | + size: 2G |
170 | + type: partition |
171 | + uuid: a33a83dd-d1bf-4940-bf3e-6d931de85dbc |
172 | + wipe: superblock |
173 | + - devices: |
174 | + - sda-part2 |
175 | + - sdb-part2 |
176 | + id: md0 |
177 | + name: md0 |
178 | + raidlevel: 1 |
179 | + spare_devices: [] |
180 | + type: raid |
181 | + - device: sdb |
182 | + id: sdb-part3 |
183 | + name: sdb-part3 |
184 | + number: 3 |
185 | + size: 2G |
186 | + type: partition |
187 | + uuid: 27e29758-fdcf-4c6a-8578-c92f907a8a9d |
188 | + wipe: superblock |
189 | + - devices: |
190 | + - sda-part3 |
191 | + - sdb-part3 |
192 | + id: md1 |
193 | + name: md1 |
194 | + raidlevel: 1 |
195 | + spare_devices: [] |
196 | + type: raid |
197 | + - fstype: fat32 |
198 | + id: sda-part1_format |
199 | + label: efi |
200 | + type: format |
201 | + uuid: b3d50fc7-2f9e-4d1a-9e24-28985e4c560b |
202 | + volume: sda-part1 |
203 | + - fstype: fat32 |
204 | + id: sdb-part1_format |
205 | + label: efi |
206 | + type: format |
207 | + uuid: c604cbb1-2ee1-4575-9489-d38a60fa0cf2 |
208 | + volume: sdb-part1 |
209 | + - fstype: ext4 |
210 | + id: md0_format |
211 | + label: '' |
212 | + type: format |
213 | + uuid: 76a315b7-2979-436c-b156-9ae64a565a59 |
214 | + volume: md0 |
215 | + - fstype: ext4 |
216 | + id: md1_format |
217 | + label: '' |
218 | + type: format |
219 | + uuid: 48dceca6-a9f9-4c7b-bfd3-7f3a0faa4ecc |
220 | + volume: md1 |
221 | + - device: md0_format |
222 | + id: md0_mount |
223 | + options: '' |
224 | + path: / |
225 | + type: mount |
226 | + - device: sda-part1_format |
227 | + id: sda-part1_mount |
228 | + options: '' |
229 | + path: /boot/efi |
230 | + type: mount |
231 | + - device: md1_format |
232 | + id: md1_mount |
233 | + options: '' |
234 | + path: /var |
235 | + type: mount |
236 | + version: 1 |
237 | |
238 | === modified file 'tests/unittests/test_block_mdadm.py' |
239 | --- tests/unittests/test_block_mdadm.py 2017-01-27 01:00:39 +0000 |
240 | +++ tests/unittests/test_block_mdadm.py 2017-05-03 02:14:17 +0000 |
241 | @@ -5,6 +5,7 @@ |
242 | from curtin import util |
243 | import os |
244 | import subprocess |
245 | +import textwrap |
246 | |
247 | |
248 | class MdadmTestBase(TestCase): |
249 | @@ -348,7 +349,7 @@ |
250 | device = "/dev/vdc" |
251 | mdadm.mdadm_stop(device) |
252 | expected_calls = [ |
253 | - call(["mdadm", "--stop", device], capture=True), |
254 | + call(["mdadm", "--manage", "--stop", device], capture=True) |
255 | ] |
256 | self.mock_util.subp.assert_has_calls(expected_calls) |
257 | |
258 | @@ -944,4 +945,96 @@ |
259 | mdadm.md_check(md_devname, raidlevel, devices=devices, |
260 | spares=spares) |
261 | |
262 | + def test_md_present(self): |
263 | + mdname = 'md0' |
264 | + self.mock_util.load_file.return_value = textwrap.dedent(""" |
265 | + Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] |
266 | + [raid4] [raid10] |
267 | + md0 : active raid1 vdc1[1] vda2[0] |
268 | + 3143680 blocks super 1.2 [2/2] [UU] |
269 | + |
270 | + unused devices: <none> |
271 | + """) |
272 | + |
273 | + md_is_present = mdadm.md_present(mdname) |
274 | + |
275 | + self.assertTrue(md_is_present) |
276 | + self.mock_util.load_file.assert_called_with('/proc/mdstat') |
277 | + |
278 | + def test_md_present_not_found(self): |
279 | + mdname = 'md1' |
280 | + self.mock_util.load_file.return_value = textwrap.dedent(""" |
281 | + Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] |
282 | + [raid4] [raid10] |
283 | + md0 : active raid1 vdc1[1] vda2[0] |
284 | + 3143680 blocks super 1.2 [2/2] [UU] |
285 | + |
286 | + unused devices: <none> |
287 | + """) |
288 | + |
289 | + md_is_present = mdadm.md_present(mdname) |
290 | + |
291 | + self.assertFalse(md_is_present) |
292 | + self.mock_util.load_file.assert_called_with('/proc/mdstat') |
293 | + |
294 | + def test_md_present_not_found_check_matching(self): |
295 | + mdname = 'md1' |
296 | + found_mdname = 'md10' |
297 | + self.mock_util.load_file.return_value = textwrap.dedent(""" |
298 | + Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] |
299 | + [raid4] [raid10] |
300 | + md10 : active raid1 vdc1[1] vda2[0] |
301 | + 3143680 blocks super 1.2 [2/2] [UU] |
302 | + |
303 | + unused devices: <none> |
304 | + """) |
305 | + |
306 | + md_is_present = mdadm.md_present(mdname) |
307 | + |
308 | + self.assertFalse(md_is_present, |
309 | + "%s mistakenly matched %s" % (mdname, found_mdname)) |
310 | + self.mock_util.load_file.assert_called_with('/proc/mdstat') |
311 | + |
312 | + def test_md_present_with_dev_path(self): |
313 | + mdname = '/dev/md0' |
314 | + self.mock_util.load_file.return_value = textwrap.dedent(""" |
315 | + Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] |
316 | + [raid4] [raid10] |
317 | + md0 : active raid1 vdc1[1] vda2[0] |
318 | + 3143680 blocks super 1.2 [2/2] [UU] |
319 | + |
320 | + unused devices: <none> |
321 | + """) |
322 | + |
323 | + md_is_present = mdadm.md_present(mdname) |
324 | + |
325 | + self.assertTrue(md_is_present) |
326 | + self.mock_util.load_file.assert_called_with('/proc/mdstat') |
327 | + |
328 | + def test_md_present_none(self): |
329 | + mdname = '' |
330 | + self.mock_util.load_file.return_value = textwrap.dedent(""" |
331 | + Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] |
332 | + [raid4] [raid10] |
333 | + md0 : active raid1 vdc1[1] vda2[0] |
334 | + 3143680 blocks super 1.2 [2/2] [UU] |
335 | + |
336 | + unused devices: <none> |
337 | + """) |
338 | + |
339 | + with self.assertRaises(ValueError): |
340 | + mdadm.md_present(mdname) |
341 | + |
342 | + # util.load_file should NOT have been called |
343 | + self.assertEqual([], self.mock_util.call_args_list) |
344 | + |
345 | + def test_md_present_no_proc_mdstat(self): |
346 | + mdname = 'md0' |
347 | + self.mock_util.side_effect = IOError |
348 | + |
349 | + md_is_present = mdadm.md_present(mdname) |
350 | + self.assertFalse(md_is_present) |
351 | + self.mock_util.load_file.assert_called_with('/proc/mdstat') |
352 | + |
353 | + |
354 | # vi: ts=4 expandtab syntax=python |
355 | |
356 | === modified file 'tests/unittests/test_clear_holders.py' |
357 | --- tests/unittests/test_clear_holders.py 2017-04-11 20:52:11 +0000 |
358 | +++ tests/unittests/test_clear_holders.py 2017-05-03 02:14:17 +0000 |
359 | @@ -360,16 +360,70 @@ |
360 | mock_util.subp.assert_called_with( |
361 | ['cryptsetup', 'remove', self.test_blockdev], capture=True) |
362 | |
363 | + @mock.patch('curtin.block.clear_holders.time') |
364 | + @mock.patch('curtin.block.clear_holders.util') |
365 | @mock.patch('curtin.block.clear_holders.LOG') |
366 | @mock.patch('curtin.block.clear_holders.mdadm') |
367 | @mock.patch('curtin.block.clear_holders.block') |
368 | - def test_shutdown_mdadm(self, mock_block, mock_mdadm, mock_log): |
369 | + def test_shutdown_mdadm(self, mock_block, mock_mdadm, mock_log, mock_util, |
370 | + mock_time): |
371 | """test clear_holders.shutdown_mdadm""" |
372 | mock_block.sysfs_to_devpath.return_value = self.test_blockdev |
373 | + mock_block.path_to_kname.return_value = self.test_blockdev |
374 | + mock_mdadm.md_present.return_value = False |
375 | clear_holders.shutdown_mdadm(self.test_syspath) |
376 | mock_mdadm.mdadm_stop.assert_called_with(self.test_blockdev) |
377 | - mock_mdadm.mdadm_remove.assert_called_with(self.test_blockdev) |
378 | - self.assertTrue(mock_log.debug.called) |
379 | + mock_mdadm.md_present.assert_called_with(self.test_blockdev) |
380 | + self.assertTrue(mock_log.debug.called) |
381 | + |
382 | + @mock.patch('curtin.block.clear_holders.os') |
383 | + @mock.patch('curtin.block.clear_holders.time') |
384 | + @mock.patch('curtin.block.clear_holders.util') |
385 | + @mock.patch('curtin.block.clear_holders.LOG') |
386 | + @mock.patch('curtin.block.clear_holders.mdadm') |
387 | + @mock.patch('curtin.block.clear_holders.block') |
388 | + def test_shutdown_mdadm_fail_raises_oserror(self, mock_block, mock_mdadm, |
389 | + mock_log, mock_util, mock_time, |
390 | + mock_os): |
391 | + """test clear_holders.shutdown_mdadm raises OSError on failure""" |
392 | + mock_block.sysfs_to_devpath.return_value = self.test_blockdev |
393 | + mock_block.path_to_kname.return_value = self.test_blockdev |
394 | + mock_mdadm.md_present.return_value = True |
395 | + mock_util.subp.return_value = ("", "") |
396 | + mock_os.path.exists.return_value = True |
397 | + |
398 | + with self.assertRaises(OSError): |
399 | + clear_holders.shutdown_mdadm(self.test_syspath) |
400 | + |
401 | + mock_mdadm.mdadm_stop.assert_called_with(self.test_blockdev) |
402 | + mock_mdadm.md_present.assert_called_with(self.test_blockdev) |
403 | + mock_util.load_file.assert_called_with('/proc/mdstat') |
404 | + self.assertTrue(mock_log.debug.called) |
405 | + self.assertTrue(mock_log.critical.called) |
406 | + |
407 | + @mock.patch('curtin.block.clear_holders.os') |
408 | + @mock.patch('curtin.block.clear_holders.time') |
409 | + @mock.patch('curtin.block.clear_holders.util') |
410 | + @mock.patch('curtin.block.clear_holders.LOG') |
411 | + @mock.patch('curtin.block.clear_holders.mdadm') |
412 | + @mock.patch('curtin.block.clear_holders.block') |
413 | + def test_shutdown_mdadm_fails_no_proc_mdstat(self, mock_block, mock_mdadm, |
414 | + mock_log, mock_util, |
415 | + mock_time, mock_os): |
416 | + """test clear_holders.shutdown_mdadm handles no /proc/mdstat""" |
417 | + mock_block.sysfs_to_devpath.return_value = self.test_blockdev |
418 | + mock_block.path_to_kname.return_value = self.test_blockdev |
419 | + mock_mdadm.md_present.return_value = True |
420 | + mock_os.path.exists.return_value = False |
421 | + |
422 | + with self.assertRaises(OSError): |
423 | + clear_holders.shutdown_mdadm(self.test_syspath) |
424 | + |
425 | + mock_mdadm.mdadm_stop.assert_called_with(self.test_blockdev) |
426 | + mock_mdadm.md_present.assert_called_with(self.test_blockdev) |
427 | + self.assertEqual([], mock_util.subp.call_args_list) |
428 | + self.assertTrue(mock_log.debug.called) |
429 | + self.assertTrue(mock_log.critical.called) |
430 | |
431 | @mock.patch('curtin.block.clear_holders.LOG') |
432 | @mock.patch('curtin.block.clear_holders.block') |
433 | |
434 | === modified file 'tests/vmtests/test_mdadm_bcache.py' |
435 | --- tests/vmtests/test_mdadm_bcache.py 2017-04-03 02:27:33 +0000 |
436 | +++ tests/vmtests/test_mdadm_bcache.py 2017-05-03 02:14:17 +0000 |
437 | @@ -16,6 +16,10 @@ |
438 | grep -c active /proc/mdstat > mdadm_active2 |
439 | ls /dev/disk/by-dname > ls_dname |
440 | find /etc/network/interfaces.d > find_interfacesd |
441 | + cat /proc/mdstat | tee mdstat |
442 | + cat /proc/partitions | tee procpartitions |
443 | + ls -1 /sys/class/block | tee sys_class_block |
444 | + ls -1 /dev/md* | tee dev_md |
445 | """)] |
446 | |
447 | def test_mdadm_output_files_exist(self): |
448 | @@ -234,6 +238,42 @@ |
449 | __test__ = True |
450 | |
451 | |
452 | +class TestMirrorbootPartitionsUEFIAbs(TestMdadmAbs): |
453 | + # alternative config for more complex setup |
454 | + conf_file = "examples/tests/mirrorboot-uefi.yaml" |
455 | + # initialize secondary disk |
456 | + extra_disks = ['10G'] |
457 | + disk_to_check = [('main_disk', 2), |
458 | + ('second_disk', 3), |
459 | + ('md0', 0), |
460 | + ('md1', 0)] |
461 | + active_mdadm = "2" |
462 | + uefi = True |
463 | + |
464 | + |
465 | +class TrustyTestMirrorbootPartitionsUEFI(relbase.trusty, |
466 | + TestMirrorbootPartitionsUEFIAbs): |
467 | + __test__ = True |
468 | + |
469 | + # FIXME(LP: #1523037): dname does not work on trusty |
470 | + # when dname works on trusty, then we need to re-enable by removing line. |
471 | + def test_dname(self): |
472 | + print("test_dname does not work for Trusty") |
473 | + |
474 | + def test_ptable(self): |
475 | + print("test_ptable does not work for Trusty") |
476 | + |
477 | + |
478 | +class XenialTestMirrorbootPartitionsUEFI(relbase.xenial, |
479 | + TestMirrorbootPartitionsUEFIAbs): |
480 | + __test__ = True |
481 | + |
482 | + |
483 | +class ZestyTestMirrorbootPartitionsUEFI(relbase.zesty, |
484 | + TestMirrorbootPartitionsUEFIAbs): |
485 | + __test__ = True |
486 | + |
487 | + |
488 | class TestRaid5bootAbs(TestMdadmAbs): |
489 | # alternative config for more complex setup |
490 | conf_file = "examples/tests/raid5boot.yaml" |
PASSED: Continuous integration, rev:448
https://jenkins.ubuntu.com/server/job/curtin-ci/448/
Executed test runs:
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-amd64/448
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-arm64/448
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-ppc64el/448
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-s390x/448
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=vm-i386/448