Merge ~raharper/curtin:fix/clear-holders-refactor into curtin:master
- Git
- lp:~raharper/curtin
- fix/clear-holders-refactor
- Merge into master
Status: Merged
Approved by: Scott Moser
Approved revision: 374a1ff40f72b5ce1ce8a86d4c5047ebd774103a
Merge reported by: Scott Moser
Merged at revision: 4bf7750b6010f0483b22ff626eab58d6a85bb072
Proposed branch: ~raharper/curtin:fix/clear-holders-refactor
Merge into: curtin:master
Diff against target: 679 lines (+250/-96), 7 files modified
- curtin/block/__init__.py (+18/-14)
- curtin/block/clear_holders.py (+86/-54)
- curtin/block/iscsi.py (+7/-8)
- curtin/block/mdadm.py (+65/-0)
- tests/unittests/test_clear_holders.py (+66/-20)
- tests/vmtests/test_lvm.py (+5/-0)
- tests/vmtests/test_lvm_iscsi.py (+3/-0)

Related bugs:

Reviewer | Review Type | Date Requested | Status
---|---|---|---
Scott Moser (community) | | | Approve
Server Team CI bot | continuous-integration | | Approve

Review via email:
Commit message
block/clear_
The block layer's handling of efi/dos signature checks did not ensure
that it always used a devpath (/dev/<kname>), which resulted in
failures when attempting to identify dos partitions, including
extended partitions, because it attempted to open a /sys/class/
path to look for the metadata. In addition, there is a window in which
wiping/removing lvm devices inside dos logical partitions leaves the
/dev/<kname> path to the extended partition missing.
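The devpath normalization behind the signature-check fix can be sketched as below. This is a minimal illustration only; `to_devpath` is a hypothetical stand-in for curtin's real `block.path_to_kname()`/`block.dev_path()` helpers:

```python
import os

def to_devpath(device):
    """Normalize a device reference to /dev/<kname>.

    Accepts a /sys/class/block/<kname> path, a bare kname, or an
    existing /dev path. Signature checks must read from the /dev node,
    since the sysfs path cannot be opened to read on-disk metadata.
    """
    kname = os.path.basename(device.rstrip('/'))
    return '/dev/' + kname
```

All three input forms converge on the same /dev node, so callers no longer depend on which style of path they were handed.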
We now re-read a block device's partition table after completing the
shutdown of each device. This re-read caused mdadm devices that had
been stopped to resume, preventing the wiping of their underlying
devices. The shutdown of raid devices has therefore been refactored:
we collect the list of raid members and spares, fail each element
out of the array, stop the array's rescan/rebuild activity, and finally
remove the mdadm metadata from the underlying devices.
With this in place, we can now successfully run the LVM test cases with
dirty_disk mode enabled while continuing to pass the complex
bcache/raid scenarios.
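The raid teardown ordering described above can be sketched as a pure planning function. This is an illustrative sketch, not curtin's implementation; the step tuples and the function name are invented for the example:

```python
def plan_raid_shutdown(md_dev, members, spares):
    """Return the ordered teardown steps for an mdadm array.

    Mirrors the sequence in the commit message: quiesce the array,
    fail and remove each element, stop the array, then erase the
    mdadm metadata on the underlying devices so the kernel cannot
    re-assemble it after a partition-table re-read.
    """
    steps = []
    # settle any in-flight resync, then freeze further rescan/rebuild
    steps.append(('sync_action', md_dev, 'idle'))
    steps.append(('sync_action', md_dev, 'frozen'))
    # fail each member and spare out of the array, then detach it
    for dev in members + spares:
        steps.append(('fail', md_dev, dev))
        steps.append(('remove', md_dev, dev))
    # stop the (now empty) array
    steps.append(('stop', md_dev))
    # finally wipe the mdadm superblock on every former element
    for dev in members + spares:
        steps.append(('zero-superblock', dev))
    return steps
```

Keeping the zero-superblock step last matters: metadata is only erased once the array is stopped, so a rescan can no longer trigger re-assembly and block the wipe.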
Description of the change

Server Team CI bot (server-team-bot) wrote :

Scott Moser (smoser) wrote :
Some comments inline.
I think I'm generally OK with this.
I'm happy that you can add dirty_disks and show that this fixed something.
- f3eb081... by Ryan Harper
  Change suppress=False to warn_on_fail=True
- a032d77... by Ryan Harper
  Switch to using block.quick_zero(path, partitions=False)
- fcb3775... by Ryan Harper
  Use combine_capture=True to merge sfdisk stderr output into logged msg.
- f9b2ef5... by Ryan Harper
  Drop mdadm_ prefix on newly added mdadm methods.

Ryan Harper (raharper) wrote :
I've updated with your suggestions applied. Kicking off a vmtest run:
https:/

Server Team CI bot (server-team-bot) wrote :
PASSED: Continuous integration, rev:f9b2ef56dfc
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 374a1ff... by Ryan Harper
  Fix remaining clear-holders issues with rescanning partitions and iscsi devices.
  Moving the scan to run before issuing the next shutdown command satisfies the
  LVM scenario where msdos extended partitions disappear; further, we must
  always trigger automatic mode for iscsi devices, because iscsiadm discovery
  reconfigures the defaults.
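The ordering change can be shown as a simplified control-flow sketch; the `rescan`, `exists`, and `shutdown` callables here are stand-ins for curtin's real helpers, not its API:

```python
def shutdown_all(devices, rescan, exists, shutdown):
    """Shut down each device, rescanning partition tables first.

    The rescan runs *before* each shutdown so the existence check sees
    the kernel's current view; a rescan failure is swallowed rather
    than aborting teardown, mirroring the spirit of
    rescan_block_devices(warn_on_fail=False).
    """
    done = []
    for dev in devices:
        try:
            rescan()       # refresh the kernel's partition view
        except OSError:
            pass           # non-fatal: keep tearing down devices
        if exists(dev):    # the rescan may have removed this node
            shutdown(dev)
            done.append(dev)
    return done
```

The key point is that a device which vanished after an earlier shutdown (e.g. an msdos extended partition whose logical volumes were removed) is simply skipped instead of causing a failure.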

Server Team CI bot (server-team-bot) wrote :
PASSED: Continuous integration, rev:374a1ff40f7
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/

Ryan Harper (raharper) wrote :
This now passes vmtest run on jenkins:
https:/

Scott Moser (smoser) :

Scott Moser (smoser) wrote :
An upstream commit landed for this bug.
To view that commit see the following URL:
https:/
Preview Diff
1 | diff --git a/curtin/block/__init__.py b/curtin/block/__init__.py |
2 | index 50e953e..a8ee8a6 100644 |
3 | --- a/curtin/block/__init__.py |
4 | +++ b/curtin/block/__init__.py |
5 | @@ -378,7 +378,7 @@ def stop_all_unused_multipath_devices(): |
6 | LOG.warn("Failed to stop multipath devices: %s", e) |
7 | |
8 | |
9 | -def rescan_block_devices(): |
10 | +def rescan_block_devices(warn_on_fail=True): |
11 | """ |
12 | run 'blockdev --rereadpt' for all block devices not currently mounted |
13 | """ |
14 | @@ -399,13 +399,15 @@ def rescan_block_devices(): |
15 | try: |
16 | util.subp(cmd, capture=True) |
17 | except util.ProcessExecutionError as e: |
18 | - # FIXME: its less than ideal to swallow this error, but until |
19 | - # we fix LP: #1489521 we kind of need to. |
20 | - LOG.warn("Error rescanning devices, possibly known issue LP: #1489521") |
21 | - # Reformatting the exception output so as to not trigger |
22 | - # vmtest scanning for Unexepected errors in install logfile |
23 | - LOG.warn("cmd: %s\nstdout:%s\nstderr:%s\nexit_code:%s", e.cmd, |
24 | - e.stdout, e.stderr, e.exit_code) |
25 | + if warn_on_fail: |
26 | + # FIXME: its less than ideal to swallow this error, but until |
27 | + # we fix LP: #1489521 we kind of need to. |
28 | + LOG.warn( |
29 | + "Error rescanning devices, possibly known issue LP: #1489521") |
30 | + # Reformatting the exception output so as to not trigger |
31 | + # vmtest scanning for Unexepected errors in install logfile |
32 | + LOG.warn("cmd: %s\nstdout:%s\nstderr:%s\nexit_code:%s", e.cmd, |
33 | + e.stdout, e.stderr, e.exit_code) |
34 | |
35 | udevadm_settle() |
36 | |
37 | @@ -753,8 +755,9 @@ def check_dos_signature(device): |
38 | # the underlying disk uses a larger logical block size, so the start of |
39 | # this signature must be at 0x1fe |
40 | # https://en.wikipedia.org/wiki/Master_boot_record#Sector_layout |
41 | - return (is_block_device(device) and util.file_size(device) >= 0x200 and |
42 | - (util.load_file(device, decode=False, read_len=2, offset=0x1fe) == |
43 | + devname = dev_path(path_to_kname(device)) |
44 | + return (is_block_device(devname) and util.file_size(devname) >= 0x200 and |
45 | + (util.load_file(devname, decode=False, read_len=2, offset=0x1fe) == |
46 | b'\x55\xAA')) |
47 | |
48 | |
49 | @@ -769,10 +772,11 @@ def check_efi_signature(device): |
50 | # the start of the gpt partition table header shoult have the signaure |
51 | # 'EFI PART'. |
52 | # https://en.wikipedia.org/wiki/GUID_Partition_Table |
53 | - sector_size = get_blockdev_sector_size(device)[0] |
54 | - return (is_block_device(device) and |
55 | - util.file_size(device) >= 2 * sector_size and |
56 | - (util.load_file(device, decode=False, read_len=8, |
57 | + devname = dev_path(path_to_kname(device)) |
58 | + sector_size = get_blockdev_sector_size(devname)[0] |
59 | + return (is_block_device(devname) and |
60 | + util.file_size(devname) >= 2 * sector_size and |
61 | + (util.load_file(devname, decode=False, read_len=8, |
62 | offset=sector_size) == b'EFI PART')) |
63 | |
64 | |
65 | diff --git a/curtin/block/clear_holders.py b/curtin/block/clear_holders.py |
66 | index e0ee044..cbb7074 100644 |
67 | --- a/curtin/block/clear_holders.py |
68 | +++ b/curtin/block/clear_holders.py |
69 | @@ -110,6 +110,9 @@ def shutdown_bcache(device): |
70 | 'Device path must start with /sys/class/block/', |
71 | device) |
72 | |
73 | + LOG.info('Wiping superblock on bcache device: %s', device) |
74 | + _wipe_superblock(block.sysfs_to_devpath(device), exclusive=False) |
75 | + |
76 | # bcache device removal should be fast but in an extreme |
77 | # case, might require the cache device to flush large |
78 | # amounts of data to a backing device. The strategy here |
79 | @@ -189,14 +192,27 @@ def shutdown_lvm(device): |
80 | name_file = os.path.join(device, 'dm', 'name') |
81 | lvm_name = util.load_file(name_file).strip() |
82 | (vg_name, lv_name) = lvm.split_lvm_name(lvm_name) |
83 | + vg_lv_name = "%s/%s" % (vg_name, lv_name) |
84 | + devname = "/dev/" + vg_lv_name |
85 | + |
86 | + # wipe contents of the logical volume first |
87 | + LOG.info('Wiping lvm logical volume: %s', devname) |
88 | + block.quick_zero(devname, partitions=False) |
89 | |
90 | - # use dmsetup as lvm commands require valid /etc/lvm/* metadata |
91 | - LOG.debug('using "dmsetup remove" on %s', lvm_name) |
92 | - util.subp(['dmsetup', 'remove', lvm_name]) |
93 | + # remove the logical volume |
94 | + LOG.debug('using "lvremove" on %s', vg_lv_name) |
95 | + util.subp(['lvremove', '--force', '--force', vg_lv_name]) |
96 | |
97 | # if that was the last lvol in the volgroup, get rid of volgroup |
98 | if len(lvm.get_lvols_in_volgroup(vg_name)) == 0: |
99 | + pvols = lvm.get_pvols_in_volgroup(vg_name) |
100 | util.subp(['vgremove', '--force', '--force', vg_name], rcs=[0, 5]) |
101 | + |
102 | + # wipe the underlying physical volumes |
103 | + for pv in pvols: |
104 | + LOG.info('Wiping lvm physical volume: %s', pv) |
105 | + block.quick_zero(pv, partitions=False) |
106 | + |
107 | # refresh lvmetad |
108 | lvm.lvm_scan() |
109 | |
110 | @@ -213,10 +229,31 @@ def shutdown_mdadm(device): |
111 | """ |
112 | Shutdown specified mdadm device. |
113 | """ |
114 | + |
115 | blockdev = block.sysfs_to_devpath(device) |
116 | + |
117 | + LOG.info('Wiping superblock on raid device: %s', device) |
118 | + _wipe_superblock(blockdev, exclusive=False) |
119 | + |
120 | + md_devs = ( |
121 | + mdadm.md_get_devices_list(blockdev) + |
122 | + mdadm.md_get_spares_list(blockdev)) |
123 | + mdadm.set_sync_action(blockdev, action="idle") |
124 | + mdadm.set_sync_action(blockdev, action="frozen") |
125 | + for mddev in md_devs: |
126 | + try: |
127 | + mdadm.fail_device(blockdev, mddev) |
128 | + mdadm.remove_device(blockdev, mddev) |
129 | + except util.ProcessExecutionError as e: |
130 | + LOG.debug('Non-fatal error clearing raid array: %s', e.stderr) |
131 | + pass |
132 | + |
133 | LOG.debug('using mdadm.mdadm_stop on dev: %s', blockdev) |
134 | mdadm.mdadm_stop(blockdev) |
135 | |
136 | + for mddev in md_devs: |
137 | + mdadm.zero_device(mddev) |
138 | + |
139 | # mdadm stop operation is asynchronous so we must wait for the kernel to |
140 | # release resources. For more details see LP: #1682456 |
141 | try: |
142 | @@ -244,34 +281,49 @@ def wipe_superblock(device): |
143 | blockdev = block.sysfs_to_devpath(device) |
144 | # when operating on a disk that used to have a dos part table with an |
145 | # extended partition, attempting to wipe the extended partition will fail |
146 | - if block.is_extended_partition(blockdev): |
147 | - LOG.info("extended partitions do not need wiping, so skipping: '%s'", |
148 | - blockdev) |
149 | - else: |
150 | - # release zfs member by exporting the pool |
151 | - if block.is_zfs_member(blockdev): |
152 | - poolname = zfs.device_to_poolname(blockdev) |
153 | - # only export pools that have been imported |
154 | - if poolname in zfs.zpool_list(): |
155 | - zfs.zpool_export(poolname) |
156 | - |
157 | - if is_swap_device(blockdev): |
158 | - shutdown_swap(blockdev) |
159 | - |
160 | - # some volumes will be claimed by the bcache layer but do not surface |
161 | - # an actual /dev/bcacheN device which owns the parts (backing, cache) |
162 | - # The result is that some volumes cannot be wiped while bcache claims |
163 | - # the device. Resolve this by stopping bcache layer on those volumes |
164 | - # if present. |
165 | - for bcache_path in ['bcache', 'bcache/set']: |
166 | - stop_path = os.path.join(device, bcache_path) |
167 | - if os.path.exists(stop_path): |
168 | - LOG.debug('Attempting to release bcache layer from device: %s', |
169 | - device) |
170 | - maybe_stop_bcache_device(stop_path) |
171 | - continue |
172 | - |
173 | - _wipe_superblock(blockdev) |
174 | + try: |
175 | + if block.is_extended_partition(blockdev): |
176 | + LOG.info("extended partitions do not need wiping, so skipping:" |
177 | + " '%s'", blockdev) |
178 | + return |
179 | + except OSError as e: |
180 | + if util.is_file_not_found_exc(e): |
181 | + LOG.debug('Device to wipe disappeared: %s', e) |
182 | + LOG.debug('/proc/partitions says: %s', |
183 | + util.load_file('/proc/partitions')) |
184 | + |
185 | + (parent, partnum) = block.get_blockdev_for_partition(blockdev) |
186 | + out, _e = util.subp(['sfdisk', '-d', parent], |
187 | + capture=True, combine_capture=True) |
188 | + LOG.debug('Disk partition info:\n%s', out) |
189 | + return |
190 | + else: |
191 | + raise e |
192 | + |
193 | + # release zfs member by exporting the pool |
194 | + if block.is_zfs_member(blockdev): |
195 | + poolname = zfs.device_to_poolname(blockdev) |
196 | + # only export pools that have been imported |
197 | + if poolname in zfs.zpool_list(): |
198 | + zfs.zpool_export(poolname) |
199 | + |
200 | + if is_swap_device(blockdev): |
201 | + shutdown_swap(blockdev) |
202 | + |
203 | + # some volumes will be claimed by the bcache layer but do not surface |
204 | + # an actual /dev/bcacheN device which owns the parts (backing, cache) |
205 | + # The result is that some volumes cannot be wiped while bcache claims |
206 | + # the device. Resolve this by stopping bcache layer on those volumes |
207 | + # if present. |
208 | + for bcache_path in ['bcache', 'bcache/set']: |
209 | + stop_path = os.path.join(device, bcache_path) |
210 | + if os.path.exists(stop_path): |
211 | + LOG.debug('Attempting to release bcache layer from device: %s', |
212 | + device) |
213 | + maybe_stop_bcache_device(stop_path) |
214 | + continue |
215 | + |
216 | + _wipe_superblock(blockdev) |
217 | |
218 | |
219 | def _wipe_superblock(blockdev, exclusive=True): |
220 | @@ -512,28 +564,7 @@ def clear_holders(base_paths, try_preserve=False): |
221 | LOG.info('Current device storage tree:\n%s', |
222 | '\n'.join(format_holders_tree(tree) for tree in holder_trees)) |
223 | ordered_devs = plan_shutdown_holder_trees(holder_trees) |
224 | - |
225 | - # run wipe-superblock on layered devices |
226 | - for dev_info in ordered_devs: |
227 | - dev_type = DEV_TYPES.get(dev_info['dev_type']) |
228 | - shutdown_function = dev_type.get('shutdown') |
229 | - if not shutdown_function: |
230 | - continue |
231 | - |
232 | - if try_preserve and shutdown_function in DATA_DESTROYING_HANDLERS: |
233 | - LOG.info('shutdown function for holder type: %s is destructive. ' |
234 | - 'attempting to preserve data, so skipping' % |
235 | - dev_info['dev_type']) |
236 | - continue |
237 | - |
238 | - # for layered block devices, wipe first, then shutdown |
239 | - if dev_info['dev_type'] in ['bcache', 'raid']: |
240 | - LOG.info("Wiping superblock on layered device type: " |
241 | - "'%s' syspath: '%s'", dev_info['dev_type'], |
242 | - dev_info['device']) |
243 | - # we just want to wipe data, we don't care about exclusive |
244 | - _wipe_superblock(block.sysfs_to_devpath(dev_info['device']), |
245 | - exclusive=False) |
246 | + LOG.info('Shutdown Plan:\n%s', "\n".join(map(str, ordered_devs))) |
247 | |
248 | # run shutdown functions |
249 | for dev_info in ordered_devs: |
250 | @@ -548,11 +579,12 @@ def clear_holders(base_paths, try_preserve=False): |
251 | dev_info['dev_type']) |
252 | continue |
253 | |
254 | + # scan before we check |
255 | + block.rescan_block_devices(warn_on_fail=False) |
256 | if os.path.exists(dev_info['device']): |
257 | LOG.info("shutdown running on holder type: '%s' syspath: '%s'", |
258 | dev_info['dev_type'], dev_info['device']) |
259 | shutdown_function(dev_info['device']) |
260 | - udev.udevadm_settle() |
261 | |
262 | |
263 | def start_clear_holders_deps(): |
264 | diff --git a/curtin/block/iscsi.py b/curtin/block/iscsi.py |
265 | index 461f615..0c666b6 100644 |
266 | --- a/curtin/block/iscsi.py |
267 | +++ b/curtin/block/iscsi.py |
268 | @@ -416,18 +416,17 @@ class IscsiDisk(object): |
269 | self.portal, self.target, self.lun) |
270 | |
271 | def connect(self): |
272 | - if self.target in iscsiadm_sessions(): |
273 | - return |
274 | - |
275 | - iscsiadm_discovery(self.portal) |
276 | + if self.target not in iscsiadm_sessions(): |
277 | + iscsiadm_discovery(self.portal) |
278 | |
279 | - iscsiadm_authenticate(self.target, self.portal, self.user, |
280 | - self.password, self.iuser, self.ipassword) |
281 | + iscsiadm_authenticate(self.target, self.portal, self.user, |
282 | + self.password, self.iuser, self.ipassword) |
283 | |
284 | - iscsiadm_login(self.target, self.portal) |
285 | + iscsiadm_login(self.target, self.portal) |
286 | |
287 | - udev.udevadm_settle(self.devdisk_path) |
288 | + udev.udevadm_settle(self.devdisk_path) |
289 | |
290 | + # always set automatic mode |
291 | iscsiadm_set_automatic(self.target, self.portal) |
292 | |
293 | def disconnect(self): |
294 | diff --git a/curtin/block/mdadm.py b/curtin/block/mdadm.py |
295 | index 2e73e71..e0fe0d3 100644 |
296 | --- a/curtin/block/mdadm.py |
297 | +++ b/curtin/block/mdadm.py |
298 | @@ -237,6 +237,44 @@ def mdadm_examine(devpath, export=MDADM_USE_EXPORT): |
299 | return data |
300 | |
301 | |
302 | +def set_sync_action(devpath, action=None, retries=None): |
303 | + assert_valid_devpath(devpath) |
304 | + if not action: |
305 | + return |
306 | + |
307 | + if not retries: |
308 | + retries = [0.2] * 60 |
309 | + |
310 | + sync_action = md_sysfs_attr_path(devpath, 'sync_action') |
311 | + if not os.path.exists(sync_action): |
312 | + # arrays without sync_action can't set values |
313 | + return |
314 | + |
315 | + LOG.info("mdadm set sync_action=%s on array %s", action, devpath) |
316 | + for (attempt, wait) in enumerate(retries): |
317 | + try: |
318 | + LOG.debug('mdadm: set sync_action %s attempt %s', |
319 | + devpath, attempt) |
320 | + val = md_sysfs_attr(devpath, 'sync_action').strip() |
321 | + LOG.debug('sync_action = "%s" ? "%s"', val, action) |
322 | + if val != action: |
323 | + LOG.debug("mdadm: setting array sync_action=%s", action) |
324 | + try: |
325 | + util.write_file(sync_action, content=action) |
326 | + except (IOError, OSError) as e: |
327 | + LOG.debug("mdadm: (non-fatal) write to %s failed %s", |
328 | + sync_action, e) |
329 | + else: |
330 | + LOG.debug("mdadm: set array sync_action=%s SUCCESS", action) |
331 | + return |
332 | + |
333 | + except util.ProcessExecutionError: |
334 | + LOG.debug( |
335 | + "mdadm: set sync_action failed, retrying in %s seconds", wait) |
336 | + time.sleep(wait) |
337 | + pass |
338 | + |
339 | + |
340 | def mdadm_stop(devpath, retries=None): |
341 | assert_valid_devpath(devpath) |
342 | if not retries: |
343 | @@ -305,6 +343,33 @@ def mdadm_remove(devpath): |
344 | LOG.debug("mdadm remove:\n%s\n%s", out, err) |
345 | |
346 | |
347 | +def fail_device(mddev, arraydev): |
348 | + assert_valid_devpath(mddev) |
349 | + |
350 | + LOG.info("mdadm mark faulty: %s in array %s", arraydev, mddev) |
351 | + out, err = util.subp(["mdadm", "--fail", mddev, arraydev], |
352 | + rcs=[0], capture=True) |
353 | + LOG.debug("mdadm mark faulty:\n%s\n%s", out, err) |
354 | + |
355 | + |
356 | +def remove_device(mddev, arraydev): |
357 | + assert_valid_devpath(mddev) |
358 | + |
359 | + LOG.info("mdadm remove %s from array %s", arraydev, mddev) |
360 | + out, err = util.subp(["mdadm", "--remove", mddev, arraydev], |
361 | + rcs=[0], capture=True) |
362 | + LOG.debug("mdadm remove:\n%s\n%s", out, err) |
363 | + |
364 | + |
365 | +def zero_device(devpath): |
366 | + assert_valid_devpath(devpath) |
367 | + |
368 | + LOG.info("mdadm zero superblock on %s", devpath) |
369 | + out, err = util.subp(["mdadm", "--zero-superblock", devpath], |
370 | + rcs=[0], capture=True) |
371 | + LOG.debug("mdadm zero superblock:\n%s\n%s", out, err) |
372 | + |
373 | + |
374 | def mdadm_query_detail(md_devname, export=MDADM_USE_EXPORT): |
375 | valid_mdname(md_devname) |
376 | |
377 | diff --git a/tests/unittests/test_clear_holders.py b/tests/unittests/test_clear_holders.py |
378 | index ddb149f..9c93b5a 100644 |
379 | --- a/tests/unittests/test_clear_holders.py |
380 | +++ b/tests/unittests/test_clear_holders.py |
381 | @@ -132,6 +132,7 @@ class TestClearHolders(CiTestCase): |
382 | mock_block.path_to_kname.assert_called_with(self.test_syspath) |
383 | mock_get_dmsetup_uuid.assert_called_with(self.test_syspath) |
384 | |
385 | + @mock.patch('curtin.block.clear_holders.block') |
386 | @mock.patch('curtin.block.clear_holders.udev.udevadm_settle') |
387 | @mock.patch('curtin.block.clear_holders.get_bcache_sys_path') |
388 | @mock.patch('curtin.block.clear_holders.util') |
389 | @@ -140,7 +141,7 @@ class TestClearHolders(CiTestCase): |
390 | @mock.patch('curtin.block.clear_holders.get_bcache_using_dev') |
391 | def test_shutdown_bcache(self, mock_get_bcache, mock_log, mock_os, |
392 | mock_util, mock_get_bcache_block, |
393 | - mock_udevadm_settle): |
394 | + mock_udevadm_settle, mock_block): |
395 | """test clear_holders.shutdown_bcache""" |
396 | # |
397 | # pass in a sysfs path to a bcache block device, |
398 | @@ -152,6 +153,7 @@ class TestClearHolders(CiTestCase): |
399 | # |
400 | |
401 | device = self.test_syspath |
402 | + mock_block.sys_block_path.return_value = '/dev/null' |
403 | bcache_cset_uuid = 'c08ae789-a964-46fb-a66e-650f0ae78f94' |
404 | |
405 | mock_os.path.exists.return_value = True |
406 | @@ -197,6 +199,7 @@ class TestClearHolders(CiTestCase): |
407 | self.assertEqual(0, len(mock_util.call_args_list)) |
408 | self.assertEqual(0, len(mock_get_bcache_block.call_args_list)) |
409 | |
410 | + @mock.patch('curtin.block.clear_holders.block') |
411 | @mock.patch('curtin.block.clear_holders.get_bcache_sys_path') |
412 | @mock.patch('curtin.block.clear_holders.util') |
413 | @mock.patch('curtin.block.clear_holders.os') |
414 | @@ -204,18 +207,20 @@ class TestClearHolders(CiTestCase): |
415 | @mock.patch('curtin.block.clear_holders.get_bcache_using_dev') |
416 | def test_shutdown_bcache_no_device(self, mock_get_bcache, mock_log, |
417 | mock_os, mock_util, |
418 | - mock_get_bcache_block): |
419 | + mock_get_bcache_block, mock_block): |
420 | device = "/sys/class/block/null" |
421 | + mock_block.sysfs_to_devpath.return_value = '/dev/null' |
422 | mock_os.path.exists.return_value = False |
423 | |
424 | clear_holders.shutdown_bcache(device) |
425 | |
426 | - self.assertEqual(1, len(mock_log.info.call_args_list)) |
427 | + self.assertEqual(3, len(mock_log.info.call_args_list)) |
428 | self.assertEqual(1, len(mock_os.path.exists.call_args_list)) |
429 | self.assertEqual(0, len(mock_get_bcache.call_args_list)) |
430 | self.assertEqual(0, len(mock_util.call_args_list)) |
431 | self.assertEqual(0, len(mock_get_bcache_block.call_args_list)) |
432 | |
433 | + @mock.patch('curtin.block.clear_holders.block') |
434 | @mock.patch('curtin.block.clear_holders.get_bcache_sys_path') |
435 | @mock.patch('curtin.block.clear_holders.util') |
436 | @mock.patch('curtin.block.clear_holders.os') |
437 | @@ -223,8 +228,9 @@ class TestClearHolders(CiTestCase): |
438 | @mock.patch('curtin.block.clear_holders.get_bcache_using_dev') |
439 | def test_shutdown_bcache_no_cset(self, mock_get_bcache, mock_log, |
440 | mock_os, mock_util, |
441 | - mock_get_bcache_block): |
442 | + mock_get_bcache_block, mock_block): |
443 | device = "/sys/class/block/null" |
444 | + mock_block.sysfs_to_devpath.return_value = '/dev/null' |
445 | mock_os.path.exists.side_effect = iter([ |
446 | True, # backing device exists |
447 | False, # cset device not present (already removed) |
448 | @@ -236,7 +242,7 @@ class TestClearHolders(CiTestCase): |
449 | |
450 | clear_holders.shutdown_bcache(device) |
451 | |
452 | - self.assertEqual(2, len(mock_log.info.call_args_list)) |
453 | + self.assertEqual(4, len(mock_log.info.call_args_list)) |
454 | self.assertEqual(3, len(mock_os.path.exists.call_args_list)) |
455 | self.assertEqual(1, len(mock_get_bcache.call_args_list)) |
456 | self.assertEqual(1, len(mock_get_bcache_block.call_args_list)) |
457 | @@ -252,6 +258,7 @@ class TestClearHolders(CiTestCase): |
458 | mock.call(device, retries=retries), |
459 | mock.call(device + '/bcache', retries=retries)]) |
460 | |
461 | + @mock.patch('curtin.block.clear_holders.block') |
462 | @mock.patch('curtin.block.clear_holders.udev.udevadm_settle') |
463 | @mock.patch('curtin.block.clear_holders.get_bcache_sys_path') |
464 | @mock.patch('curtin.block.clear_holders.util') |
465 | @@ -262,8 +269,10 @@ class TestClearHolders(CiTestCase): |
466 | mock_log, mock_os, |
467 | mock_util, |
468 | mock_get_bcache_block, |
469 | - mock_udevadm_settle): |
470 | + mock_udevadm_settle, |
471 | + mock_block): |
472 | device = "/sys/class/block/null" |
473 | + mock_block.sysfs_to_devpath.return_value = '/dev/null' |
474 | mock_os.path.exists.side_effect = iter([ |
475 | True, # backing device exists |
476 | True, # cset device not present (already removed) |
477 | @@ -276,7 +285,7 @@ class TestClearHolders(CiTestCase): |
478 | |
479 | clear_holders.shutdown_bcache(device) |
480 | |
481 | - self.assertEqual(2, len(mock_log.info.call_args_list)) |
482 | + self.assertEqual(4, len(mock_log.info.call_args_list)) |
483 | self.assertEqual(3, len(mock_os.path.exists.call_args_list)) |
484 | self.assertEqual(1, len(mock_get_bcache.call_args_list)) |
485 | self.assertEqual(1, len(mock_get_bcache_block.call_args_list)) |
486 | @@ -293,6 +302,7 @@ class TestClearHolders(CiTestCase): |
487 | mock.call(device, retries=self.remove_retries) |
488 | ]) |
489 | |
490 | + @mock.patch('curtin.block.clear_holders.block') |
491 | @mock.patch('curtin.block.clear_holders.udev.udevadm_settle') |
492 | @mock.patch('curtin.block.clear_holders.get_bcache_sys_path') |
493 | @mock.patch('curtin.block.clear_holders.util') |
494 | @@ -303,8 +313,10 @@ class TestClearHolders(CiTestCase): |
495 | mock_log, mock_os, |
496 | mock_util, |
497 | mock_get_bcache_block, |
498 | - mock_udevadm_settle): |
499 | + mock_udevadm_settle, |
500 | + mock_block): |
501 | device = "/sys/class/block/null" |
502 | + mock_block.sysfs_to_devpath.return_value = '/dev/null' |
503 | mock_os.path.exists.side_effect = iter([ |
504 | True, # backing device exists |
505 | True, # cset device not present (already removed) |
506 | @@ -317,7 +329,7 @@ class TestClearHolders(CiTestCase): |
507 | |
508 | clear_holders.shutdown_bcache(device) |
509 | |
510 | - self.assertEqual(2, len(mock_log.info.call_args_list)) |
511 | + self.assertEqual(4, len(mock_log.info.call_args_list)) |
512 | self.assertEqual(3, len(mock_os.path.exists.call_args_list)) |
513 | self.assertEqual(1, len(mock_get_bcache.call_args_list)) |
514 | self.assertEqual(1, len(mock_get_bcache_block.call_args_list)) |
515 | @@ -333,6 +345,8 @@ class TestClearHolders(CiTestCase): |
516 | ]) |
517 | |
518 | # test bcache shutdown with 'stop' sysfs write failure |
519 | + @mock.patch('curtin.block.clear_holders.block') |
520 | + @mock.patch('curtin.block.wipe_volume') |
521 | @mock.patch('curtin.block.clear_holders.udev.udevadm_settle') |
522 | @mock.patch('curtin.block.clear_holders.get_bcache_sys_path') |
523 | @mock.patch('curtin.block.clear_holders.util') |
524 | @@ -343,9 +357,12 @@ class TestClearHolders(CiTestCase): |
525 | mock_log, mock_os, |
526 | mock_util, |
527 | mock_get_bcache_block, |
528 | - mock_udevadm_settle): |
529 | + mock_udevadm_settle, |
530 | + mock_wipe, |
531 | + mock_block): |
532 | """Test writes sysfs write failures pass if file not present""" |
533 | device = "/sys/class/block/null" |
534 | + mock_block.sysfs_to_devpath.return_value = '/dev/null' |
535 | mock_os.path.exists.side_effect = iter([ |
536 | True, # backing device exists |
537 | True, # cset device not present (already removed) |
538 | @@ -363,7 +380,7 @@ class TestClearHolders(CiTestCase): |
539 | |
540 | clear_holders.shutdown_bcache(device) |
541 | |
542 | - self.assertEqual(2, len(mock_log.info.call_args_list)) |
543 | + self.assertEqual(4, len(mock_log.info.call_args_list)) |
544 | self.assertEqual(3, len(mock_os.path.exists.call_args_list)) |
545 | self.assertEqual(1, len(mock_get_bcache.call_args_list)) |
546 | self.assertEqual(1, len(mock_get_bcache_block.call_args_list)) |
547 | @@ -378,15 +395,20 @@ class TestClearHolders(CiTestCase): |
548 | mock.call(cset, retries=self.remove_retries) |
549 | ]) |
550 | |
551 | + @mock.patch('curtin.block.quick_zero') |
552 | @mock.patch('curtin.block.clear_holders.LOG') |
553 | @mock.patch('curtin.block.clear_holders.block.sys_block_path') |
554 | @mock.patch('curtin.block.clear_holders.lvm') |
555 | @mock.patch('curtin.block.clear_holders.util') |
556 | - def test_shutdown_lvm(self, mock_util, mock_lvm, mock_syspath, mock_log): |
557 | + def test_shutdown_lvm(self, mock_util, mock_lvm, mock_syspath, mock_log, |
558 | + mock_zero): |
559 | """test clear_holders.shutdown_lvm""" |
560 | lvm_name = b'ubuntu--vg-swap\n' |
561 | vg_name = 'ubuntu-vg' |
562 | lv_name = 'swap' |
563 | + vg_lv_name = "%s/%s" % (vg_name, lv_name) |
564 | + devname = "/dev/" + vg_lv_name |
565 | + pvols = ['/dev/wda1', '/dev/wda2'] |
566 | mock_syspath.return_value = self.test_blockdev |
567 | mock_util.load_file.return_value = lvm_name |
568 | mock_lvm.split_lvm_name.return_value = (vg_name, lv_name) |
569 | @@ -394,18 +416,22 @@ class TestClearHolders(CiTestCase): |
570 | clear_holders.shutdown_lvm(self.test_blockdev) |
571 | mock_syspath.assert_called_with(self.test_blockdev) |
572 | mock_util.load_file.assert_called_with(self.test_blockdev + '/dm/name') |
573 | + mock_zero.assert_called_with(devname, partitions=False) |
574 | mock_lvm.split_lvm_name.assert_called_with(lvm_name.strip()) |
575 | self.assertTrue(mock_log.debug.called) |
576 | mock_util.subp.assert_called_with( |
577 | - ['dmsetup', 'remove', lvm_name.strip()]) |
578 | - |
579 | + ['lvremove', '--force', '--force', vg_lv_name]) |
580 | mock_lvm.get_lvols_in_volgroup.assert_called_with(vg_name) |
581 | self.assertEqual(len(mock_util.subp.call_args_list), 1) |
582 | - self.assertTrue(mock_lvm.lvm_scan.called) |
583 | mock_lvm.get_lvols_in_volgroup.return_value = [] |
584 | + self.assertTrue(mock_lvm.lvm_scan.called) |
585 | + mock_lvm.get_pvols_in_volgroup.return_value = pvols |
586 | clear_holders.shutdown_lvm(self.test_blockdev) |
587 | mock_util.subp.assert_called_with( |
588 | ['vgremove', '--force', '--force', vg_name], rcs=[0, 5]) |
589 | + for pv in pvols: |
590 | + mock_zero.assert_any_call(pv, partitions=False) |
591 | + self.assertTrue(mock_lvm.lvm_scan.called) |
592 | |
593 | @mock.patch('curtin.block.clear_holders.block') |
594 | @mock.patch('curtin.block.clear_holders.util') |
595 | @@ -417,18 +443,38 @@ class TestClearHolders(CiTestCase): |
596 | mock_util.subp.assert_called_with( |
597 | ['cryptsetup', 'remove', self.test_blockdev], capture=True) |
598 | |
599 | + @mock.patch('curtin.block.wipe_volume') |
600 | + @mock.patch('curtin.block.path_to_kname') |
601 | + @mock.patch('curtin.block.sysfs_to_devpath') |
602 | @mock.patch('curtin.block.clear_holders.time') |
603 | @mock.patch('curtin.block.clear_holders.util') |
604 | @mock.patch('curtin.block.clear_holders.LOG') |
605 | @mock.patch('curtin.block.clear_holders.mdadm') |
606 | - @mock.patch('curtin.block.clear_holders.block') |
607 | - def test_shutdown_mdadm(self, mock_block, mock_mdadm, mock_log, mock_util, |
608 | - mock_time): |
609 | + def test_shutdown_mdadm(self, mock_mdadm, mock_log, mock_util, |
610 | + mock_time, mock_sysdev, mock_path, mock_wipe): |
611 | """test clear_holders.shutdown_mdadm""" |
612 | - mock_block.sysfs_to_devpath.return_value = self.test_blockdev |
613 | - mock_block.path_to_kname.return_value = self.test_blockdev |
614 | + devices = ['/dev/wda1', '/dev/wda2'] |
615 | + spares = ['/dev/wdb1'] |
616 | + md_devs = (devices + spares) |
617 | + mock_sysdev.return_value = self.test_blockdev |
618 | + mock_path.return_value = self.test_blockdev |
619 | mock_mdadm.md_present.return_value = False |
620 | + mock_mdadm.md_get_devices_list.return_value = devices |
621 | + mock_mdadm.md_get_spares_list.return_value = spares |
622 | + |
623 | clear_holders.shutdown_mdadm(self.test_syspath) |
624 | + |
625 | + mock_wipe.assert_called_with( |
626 | + self.test_blockdev, exclusive=False, mode='superblock') |
627 | + mock_mdadm.set_sync_action.assert_has_calls([ |
628 | + mock.call(self.test_blockdev, action="idle"), |
629 | + mock.call(self.test_blockdev, action="frozen")]) |
630 | + mock_mdadm.fail_device.assert_has_calls( |
631 | + [mock.call(self.test_blockdev, dev) for dev in md_devs]) |
632 | + mock_mdadm.remove_device.assert_has_calls( |
633 | + [mock.call(self.test_blockdev, dev) for dev in md_devs]) |
634 | + mock_mdadm.zero_device.assert_has_calls( |
635 | + [mock.call(dev) for dev in md_devs]) |
636 | mock_mdadm.mdadm_stop.assert_called_with(self.test_blockdev) |
637 | mock_mdadm.md_present.assert_called_with(self.test_blockdev) |
638 | self.assertTrue(mock_log.debug.called) |
639 | diff --git a/tests/vmtests/test_lvm.py b/tests/vmtests/test_lvm.py |
640 | index 40f6cb7..ed708fd 100644 |
641 | --- a/tests/vmtests/test_lvm.py |
642 | +++ b/tests/vmtests/test_lvm.py |
643 | @@ -10,10 +10,15 @@ class TestLvmAbs(VMBaseClass): |
644 | conf_file = "examples/tests/lvm.yaml" |
645 | interactive = False |
646 | extra_disks = ['10G'] |
647 | + dirty_disks = True |
648 | collect_scripts = VMBaseClass.collect_scripts + [textwrap.dedent(""" |
649 | cd OUTPUT_COLLECT_D |
650 | cat /etc/fstab > fstab |
651 | ls /dev/disk/by-dname > ls_dname |
652 | + ls -al /dev/disk/by-dname > lsal_dname |
653 | + ls -al /dev/disk/by-id/ > ls_byid |
654 | + ls -al /dev/disk/by-uuid/ > ls_byuuid |
655 | + cat /proc/partitions > proc_partitions |
656 | find /etc/network/interfaces.d > find_interfacesd |
657 | pvdisplay -C --separator = -o vg_name,pv_name --noheadings > pvs |
658 | lvdisplay -C --separator = -o lv_name,vg_name --noheadings > lvs |
659 | diff --git a/tests/vmtests/test_lvm_iscsi.py b/tests/vmtests/test_lvm_iscsi.py |
660 | index e94fae3..2a11d6e 100644 |
661 | --- a/tests/vmtests/test_lvm_iscsi.py |
662 | +++ b/tests/vmtests/test_lvm_iscsi.py |
663 | @@ -9,6 +9,7 @@ import textwrap |
664 | |
665 | class TestLvmIscsiAbs(TestLvmAbs, TestBasicIscsiAbs): |
666 | interactive = False |
667 | + dirty_disks = True |
668 | iscsi_disks = [ |
669 | {'size': '6G'}, |
670 | {'size': '5G', 'auth': 'user:passw0rd', 'iauth': 'iuser:ipassw0rd'}] |
671 | @@ -20,6 +21,8 @@ class TestLvmIscsiAbs(TestLvmAbs, TestBasicIscsiAbs): |
672 | """ |
673 | cd OUTPUT_COLLECT_D |
674 | ls -al /sys/class/block/dm*/slaves/ > dm_slaves |
675 | + cp -a /etc/udev/rules.d udev_rules_d |
676 | + cp -a /etc/iscsi etc_iscsi |
677 | """)] |
678 | |
679 | fstab_expected = { |
PASSED: Continuous integration, rev:9bc6ab2438f9dc098fd1fb18604c9ed042621ff5
https://jenkins.ubuntu.com/server/job/curtin-ci/925/
Executed test runs:
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-amd64/925
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-arm64/925
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-ppc64el/925
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-s390x/925
Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/curtin-ci/925/rebuild