Merge ~raharper/curtin:fix/clear-holders-refactor into curtin:master

Proposed by Ryan Harper
Status: Merged
Approved by: Scott Moser
Approved revision: 374a1ff40f72b5ce1ce8a86d4c5047ebd774103a
Merge reported by: Scott Moser
Merged at revision: 4bf7750b6010f0483b22ff626eab58d6a85bb072
Proposed branch: ~raharper/curtin:fix/clear-holders-refactor
Merge into: curtin:master
Diff against target: 679 lines (+250/-96)
7 files modified
curtin/block/__init__.py (+18/-14)
curtin/block/clear_holders.py (+86/-54)
curtin/block/iscsi.py (+7/-8)
curtin/block/mdadm.py (+65/-0)
tests/unittests/test_clear_holders.py (+66/-20)
tests/vmtests/test_lvm.py (+5/-0)
tests/vmtests/test_lvm_iscsi.py (+3/-0)
Reviewer Review Type Date Requested Status
Scott Moser (community) Approve
Server Team CI bot continuous-integration Approve
Review via email: mp+345361@code.launchpad.net

Commit message

block/clear_holders/mdadm: refactor handling of layered device wiping

The block layer handling of checking efi/dos signatures did not ensure
that it always used a devpath (/dev/<kname>), which resulted in
failures when attempting to identify dos partitions, including
extended partitions, due to attempting to open /sys/class/block/<kname>
to look for metadata. In addition, there is a window where
wiping/removing lvm devices in dos logical partitions leaves the
/dev/<kname> path to the extended partition missing.

We now re-read a block device's partition table after completing the
shutdown of each device. This re-read caused mdadm devices which had
been stopped to resume, preventing the wiping of the underlying
devices. A refactor of how raid devices are shut down now includes
collecting the list of raid members and spares, failing each element
out of the array, stopping the rescan/rebuild of the array, and
finally removing the mdadm metadata from the underlying devices.
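
The devpath normalization described above can be sketched as follows. This is a simplified stand-in for illustration only; curtin's real helpers (dev_path, path_to_kname) handle more cases:

```python
import os.path

def dev_path_for(path):
    """Return the /dev/<kname> path for a block device referenced by a
    sysfs path, a bare kernel name, or an existing /dev path.

    Signature checks must open the /dev node; opening the
    /sys/class/block/<kname> directory instead is what caused the
    failures described above.
    """
    kname = os.path.basename(path.rstrip('/'))
    return '/dev/' + kname
```

With this, a signature read always targets a real device node regardless of which form of reference the caller held.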

With this in place, we can now successfully run the LVM test cases
with dirty_disk mode enabled while continuing to pass the complex
bcache/raid examples.

LP: #1768893
LP: #1769742
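
The raid teardown order from the commit message can be sketched as command planning. This is an illustration only, not curtin's implementation (which also freezes sync_action via sysfs before removal), and the device names in the usage example are hypothetical:

```python
def plan_raid_teardown(mddev, members, spares):
    """Return the ordered mdadm commands implementing the teardown
    described above: fail and remove every member and spare, stop the
    array, then zero the mdadm superblock on each underlying device.
    """
    cmds = []
    # fail each element out of the array, then remove it
    for dev in members + spares:
        cmds.append(['mdadm', '--fail', mddev, dev])
        cmds.append(['mdadm', '--remove', mddev, dev])
    # stop the array itself
    cmds.append(['mdadm', '--stop', mddev])
    # zeroing happens after --stop, once the members are released,
    # so a partition-table re-read cannot re-assemble the array
    for dev in members + spares:
        cmds.append(['mdadm', '--zero-superblock', dev])
    return cmds
```

Zeroing the member superblocks last is the key step: without it, the re-read of the partition table after shutdown could trigger the stopped array to resume.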

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Scott Moser (smoser) wrote :

Some comments inline.

I think I'm generally OK with this.
I'm happy that you can add dirty_disks and show that this fixed something.

f3eb081... by Ryan Harper

Change suppress=False to warn_on_fail=True

a032d77... by Ryan Harper

Switch to using block.quick_zero(path, partitions=False)

fcb3775... by Ryan Harper

use combine_capture=True to merge sfdisk stderr output into logged msg.

f9b2ef5... by Ryan Harper

Drop mdadm_ prefix on newly added mdadm methods.

Revision history for this message
Ryan Harper (raharper) wrote :

I've updated with your suggestions applied. Kicking off a vmtest run:

https://jenkins.ubuntu.com/server/view/curtin/job/curtin-vmtest-devel-debug/62/console

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
374a1ff... by Ryan Harper

Fix remaining clear-holder issues with rescanning partitions and iscsi devices

Moving the scan to run before issuing the next shutdown command satisfies
the LVM scenario where msdos extended partitions disappear. Further, we
need to always trigger automatic mode for iscsi devices, as iscsiadm
discovery reconfigures the defaults.
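
The rescan-before-shutdown ordering can be sketched as below. This is a minimal illustration with injected callables, not curtin's actual code; in curtin the roles are played by block.rescan_block_devices, os.path.exists, and the per-type shutdown handlers:

```python
def run_shutdown_plan(ordered_devs, rescan, exists, shutdown):
    """Walk the shutdown plan, re-reading partition tables *before*
    each existence check, so a device that vanished mid-plan (e.g. an
    msdos extended partition whose logical partitions were wiped) is
    skipped rather than causing a failure.
    """
    for dev in ordered_devs:
        # scan first: pick up devices removed by the previous shutdown
        rescan()
        if exists(dev):
            shutdown(dev)
```

Scanning before the check (rather than after the shutdown) means the existence test always reflects the effects of the previous step.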

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Scott Moser (smoser) :
review: Approve
Revision history for this message
Scott Moser (smoser) wrote :

An upstream commit landed for this bug.

To view that commit see the following URL:
https://git.launchpad.net/curtin/commit/?id=4bf7750b

Preview Diff

diff --git a/curtin/block/__init__.py b/curtin/block/__init__.py
index 50e953e..a8ee8a6 100644
--- a/curtin/block/__init__.py
+++ b/curtin/block/__init__.py
@@ -378,7 +378,7 @@ def stop_all_unused_multipath_devices():
         LOG.warn("Failed to stop multipath devices: %s", e)


-def rescan_block_devices():
+def rescan_block_devices(warn_on_fail=True):
     """
     run 'blockdev --rereadpt' for all block devices not currently mounted
     """
@@ -399,13 +399,15 @@ def rescan_block_devices():
         try:
             util.subp(cmd, capture=True)
         except util.ProcessExecutionError as e:
-            # FIXME: its less than ideal to swallow this error, but until
-            # we fix LP: #1489521 we kind of need to.
-            LOG.warn("Error rescanning devices, possibly known issue LP: #1489521")
-            # Reformatting the exception output so as to not trigger
-            # vmtest scanning for Unexepected errors in install logfile
-            LOG.warn("cmd: %s\nstdout:%s\nstderr:%s\nexit_code:%s", e.cmd,
-                     e.stdout, e.stderr, e.exit_code)
+            if warn_on_fail:
+                # FIXME: its less than ideal to swallow this error, but until
+                # we fix LP: #1489521 we kind of need to.
+                LOG.warn(
+                    "Error rescanning devices, possibly known issue LP: #1489521")
+                # Reformatting the exception output so as to not trigger
+                # vmtest scanning for Unexepected errors in install logfile
+                LOG.warn("cmd: %s\nstdout:%s\nstderr:%s\nexit_code:%s", e.cmd,
+                         e.stdout, e.stderr, e.exit_code)

     udevadm_settle()

@@ -753,8 +755,9 @@ def check_dos_signature(device):
     # the underlying disk uses a larger logical block size, so the start of
     # this signature must be at 0x1fe
     # https://en.wikipedia.org/wiki/Master_boot_record#Sector_layout
-    return (is_block_device(device) and util.file_size(device) >= 0x200 and
-            (util.load_file(device, decode=False, read_len=2, offset=0x1fe) ==
-             b'\x55\xAA'))
+    devname = dev_path(path_to_kname(device))
+    return (is_block_device(devname) and util.file_size(devname) >= 0x200 and
+            (util.load_file(devname, decode=False, read_len=2, offset=0x1fe) ==
+             b'\x55\xAA'))


@@ -769,10 +772,11 @@ def check_efi_signature(device):
     # the start of the gpt partition table header shoult have the signaure
     # 'EFI PART'.
     # https://en.wikipedia.org/wiki/GUID_Partition_Table
-    sector_size = get_blockdev_sector_size(device)[0]
-    return (is_block_device(device) and
-            util.file_size(device) >= 2 * sector_size and
-            (util.load_file(device, decode=False, read_len=8,
-                            offset=sector_size) == b'EFI PART'))
+    devname = dev_path(path_to_kname(device))
+    sector_size = get_blockdev_sector_size(devname)[0]
+    return (is_block_device(devname) and
+            util.file_size(devname) >= 2 * sector_size and
+            (util.load_file(devname, decode=False, read_len=8,
+                            offset=sector_size) == b'EFI PART'))


diff --git a/curtin/block/clear_holders.py b/curtin/block/clear_holders.py
index e0ee044..cbb7074 100644
--- a/curtin/block/clear_holders.py
+++ b/curtin/block/clear_holders.py
@@ -110,6 +110,9 @@ def shutdown_bcache(device):
                        'Device path must start with /sys/class/block/',
                        device)

+    LOG.info('Wiping superblock on bcache device: %s', device)
+    _wipe_superblock(block.sysfs_to_devpath(device), exclusive=False)
+
     # bcache device removal should be fast but in an extreme
     # case, might require the cache device to flush large
     # amounts of data to a backing device. The strategy here
@@ -189,14 +192,27 @@ def shutdown_lvm(device):
     name_file = os.path.join(device, 'dm', 'name')
     lvm_name = util.load_file(name_file).strip()
     (vg_name, lv_name) = lvm.split_lvm_name(lvm_name)
+    vg_lv_name = "%s/%s" % (vg_name, lv_name)
+    devname = "/dev/" + vg_lv_name
+
+    # wipe contents of the logical volume first
+    LOG.info('Wiping lvm logical volume: %s', devname)
+    block.quick_zero(devname, partitions=False)

-    # use dmsetup as lvm commands require valid /etc/lvm/* metadata
-    LOG.debug('using "dmsetup remove" on %s', lvm_name)
-    util.subp(['dmsetup', 'remove', lvm_name])
+    # remove the logical volume
+    LOG.debug('using "lvremove" on %s', vg_lv_name)
+    util.subp(['lvremove', '--force', '--force', vg_lv_name])

     # if that was the last lvol in the volgroup, get rid of volgroup
     if len(lvm.get_lvols_in_volgroup(vg_name)) == 0:
+        pvols = lvm.get_pvols_in_volgroup(vg_name)
         util.subp(['vgremove', '--force', '--force', vg_name], rcs=[0, 5])
+
+        # wipe the underlying physical volumes
+        for pv in pvols:
+            LOG.info('Wiping lvm physical volume: %s', pv)
+            block.quick_zero(pv, partitions=False)
+
     # refresh lvmetad
     lvm.lvm_scan()

@@ -213,10 +229,31 @@ def shutdown_mdadm(device):
     """
     Shutdown specified mdadm device.
     """
+
     blockdev = block.sysfs_to_devpath(device)
+
+    LOG.info('Wiping superblock on raid device: %s', device)
+    _wipe_superblock(blockdev, exclusive=False)
+
+    md_devs = (
+        mdadm.md_get_devices_list(blockdev) +
+        mdadm.md_get_spares_list(blockdev))
+    mdadm.set_sync_action(blockdev, action="idle")
+    mdadm.set_sync_action(blockdev, action="frozen")
+    for mddev in md_devs:
+        try:
+            mdadm.fail_device(blockdev, mddev)
+            mdadm.remove_device(blockdev, mddev)
+        except util.ProcessExecutionError as e:
+            LOG.debug('Non-fatal error clearing raid array: %s', e.stderr)
+            pass
+
     LOG.debug('using mdadm.mdadm_stop on dev: %s', blockdev)
     mdadm.mdadm_stop(blockdev)

+    for mddev in md_devs:
+        mdadm.zero_device(mddev)
+
     # mdadm stop operation is asynchronous so we must wait for the kernel to
     # release resources. For more details see LP: #1682456
     try:
@@ -244,34 +281,49 @@ def wipe_superblock(device):
     blockdev = block.sysfs_to_devpath(device)
     # when operating on a disk that used to have a dos part table with an
     # extended partition, attempting to wipe the extended partition will fail
-    if block.is_extended_partition(blockdev):
-        LOG.info("extended partitions do not need wiping, so skipping: '%s'",
-                 blockdev)
-    else:
-        # release zfs member by exporting the pool
-        if block.is_zfs_member(blockdev):
-            poolname = zfs.device_to_poolname(blockdev)
-            # only export pools that have been imported
-            if poolname in zfs.zpool_list():
-                zfs.zpool_export(poolname)
-
-        if is_swap_device(blockdev):
-            shutdown_swap(blockdev)
-
-        # some volumes will be claimed by the bcache layer but do not surface
-        # an actual /dev/bcacheN device which owns the parts (backing, cache)
-        # The result is that some volumes cannot be wiped while bcache claims
-        # the device. Resolve this by stopping bcache layer on those volumes
-        # if present.
-        for bcache_path in ['bcache', 'bcache/set']:
-            stop_path = os.path.join(device, bcache_path)
-            if os.path.exists(stop_path):
-                LOG.debug('Attempting to release bcache layer from device: %s',
-                          device)
-                maybe_stop_bcache_device(stop_path)
-                continue
-
-        _wipe_superblock(blockdev)
+    try:
+        if block.is_extended_partition(blockdev):
+            LOG.info("extended partitions do not need wiping, so skipping:"
+                     " '%s'", blockdev)
+            return
+    except OSError as e:
+        if util.is_file_not_found_exc(e):
+            LOG.debug('Device to wipe disappeared: %s', e)
+            LOG.debug('/proc/partitions says: %s',
+                      util.load_file('/proc/partitions'))
+
+            (parent, partnum) = block.get_blockdev_for_partition(blockdev)
+            out, _e = util.subp(['sfdisk', '-d', parent],
+                                capture=True, combine_capture=True)
+            LOG.debug('Disk partition info:\n%s', out)
+            return
+        else:
+            raise e
+
+    # release zfs member by exporting the pool
+    if block.is_zfs_member(blockdev):
+        poolname = zfs.device_to_poolname(blockdev)
+        # only export pools that have been imported
+        if poolname in zfs.zpool_list():
+            zfs.zpool_export(poolname)
+
+    if is_swap_device(blockdev):
+        shutdown_swap(blockdev)
+
+    # some volumes will be claimed by the bcache layer but do not surface
+    # an actual /dev/bcacheN device which owns the parts (backing, cache)
+    # The result is that some volumes cannot be wiped while bcache claims
+    # the device. Resolve this by stopping bcache layer on those volumes
+    # if present.
+    for bcache_path in ['bcache', 'bcache/set']:
+        stop_path = os.path.join(device, bcache_path)
+        if os.path.exists(stop_path):
+            LOG.debug('Attempting to release bcache layer from device: %s',
+                      device)
+            maybe_stop_bcache_device(stop_path)
+            continue
+
+    _wipe_superblock(blockdev)


 def _wipe_superblock(blockdev, exclusive=True):
@@ -512,28 +564,7 @@ def clear_holders(base_paths, try_preserve=False):
     LOG.info('Current device storage tree:\n%s',
              '\n'.join(format_holders_tree(tree) for tree in holder_trees))
     ordered_devs = plan_shutdown_holder_trees(holder_trees)
-
-    # run wipe-superblock on layered devices
-    for dev_info in ordered_devs:
-        dev_type = DEV_TYPES.get(dev_info['dev_type'])
-        shutdown_function = dev_type.get('shutdown')
-        if not shutdown_function:
-            continue
-
-        if try_preserve and shutdown_function in DATA_DESTROYING_HANDLERS:
-            LOG.info('shutdown function for holder type: %s is destructive. '
-                     'attempting to preserve data, so skipping' %
-                     dev_info['dev_type'])
-            continue
-
-        # for layered block devices, wipe first, then shutdown
-        if dev_info['dev_type'] in ['bcache', 'raid']:
-            LOG.info("Wiping superblock on layered device type: "
-                     "'%s' syspath: '%s'", dev_info['dev_type'],
-                     dev_info['device'])
-            # we just want to wipe data, we don't care about exclusive
-            _wipe_superblock(block.sysfs_to_devpath(dev_info['device']),
-                             exclusive=False)
+    LOG.info('Shutdown Plan:\n%s', "\n".join(map(str, ordered_devs)))

     # run shutdown functions
     for dev_info in ordered_devs:
@@ -548,11 +579,12 @@ def clear_holders(base_paths, try_preserve=False):
                      dev_info['dev_type'])
             continue

+        # scan before we check
+        block.rescan_block_devices(warn_on_fail=False)
         if os.path.exists(dev_info['device']):
             LOG.info("shutdown running on holder type: '%s' syspath: '%s'",
                      dev_info['dev_type'], dev_info['device'])
             shutdown_function(dev_info['device'])
-        udev.udevadm_settle()


 def start_clear_holders_deps():
diff --git a/curtin/block/iscsi.py b/curtin/block/iscsi.py
index 461f615..0c666b6 100644
--- a/curtin/block/iscsi.py
+++ b/curtin/block/iscsi.py
@@ -416,18 +416,17 @@ class IscsiDisk(object):
                 self.portal, self.target, self.lun)

     def connect(self):
-        if self.target in iscsiadm_sessions():
-            return
-
-        iscsiadm_discovery(self.portal)
+        if self.target not in iscsiadm_sessions():
+            iscsiadm_discovery(self.portal)

         iscsiadm_authenticate(self.target, self.portal, self.user,
                               self.password, self.iuser, self.ipassword)

         iscsiadm_login(self.target, self.portal)

         udev.udevadm_settle(self.devdisk_path)

+        # always set automatic mode
         iscsiadm_set_automatic(self.target, self.portal)

     def disconnect(self):
diff --git a/curtin/block/mdadm.py b/curtin/block/mdadm.py
index 2e73e71..e0fe0d3 100644
--- a/curtin/block/mdadm.py
+++ b/curtin/block/mdadm.py
@@ -237,6 +237,44 @@ def mdadm_examine(devpath, export=MDADM_USE_EXPORT):
     return data


+def set_sync_action(devpath, action=None, retries=None):
+    assert_valid_devpath(devpath)
+    if not action:
+        return
+
+    if not retries:
+        retries = [0.2] * 60
+
+    sync_action = md_sysfs_attr_path(devpath, 'sync_action')
+    if not os.path.exists(sync_action):
+        # arrays without sync_action can't set values
+        return
+
+    LOG.info("mdadm set sync_action=%s on array %s", action, devpath)
+    for (attempt, wait) in enumerate(retries):
+        try:
+            LOG.debug('mdadm: set sync_action %s attempt %s',
+                      devpath, attempt)
+            val = md_sysfs_attr(devpath, 'sync_action').strip()
+            LOG.debug('sync_action = "%s" ? "%s"', val, action)
+            if val != action:
+                LOG.debug("mdadm: setting array sync_action=%s", action)
+                try:
+                    util.write_file(sync_action, content=action)
+                except (IOError, OSError) as e:
+                    LOG.debug("mdadm: (non-fatal) write to %s failed %s",
+                              sync_action, e)
+            else:
+                LOG.debug("mdadm: set array sync_action=%s SUCCESS", action)
+                return
+
+        except util.ProcessExecutionError:
+            LOG.debug(
+                "mdadm: set sync_action failed, retrying in %s seconds", wait)
+            time.sleep(wait)
+            pass
+
+
 def mdadm_stop(devpath, retries=None):
     assert_valid_devpath(devpath)
     if not retries:
@@ -305,6 +343,33 @@ def mdadm_remove(devpath):
     LOG.debug("mdadm remove:\n%s\n%s", out, err)


+def fail_device(mddev, arraydev):
+    assert_valid_devpath(mddev)
+
+    LOG.info("mdadm mark faulty: %s in array %s", arraydev, mddev)
+    out, err = util.subp(["mdadm", "--fail", mddev, arraydev],
+                         rcs=[0], capture=True)
+    LOG.debug("mdadm mark faulty:\n%s\n%s", out, err)
+
+
+def remove_device(mddev, arraydev):
+    assert_valid_devpath(mddev)
+
+    LOG.info("mdadm remove %s from array %s", arraydev, mddev)
+    out, err = util.subp(["mdadm", "--remove", mddev, arraydev],
+                         rcs=[0], capture=True)
+    LOG.debug("mdadm remove:\n%s\n%s", out, err)
+
+
+def zero_device(devpath):
+    assert_valid_devpath(devpath)
+
+    LOG.info("mdadm zero superblock on %s", devpath)
+    out, err = util.subp(["mdadm", "--zero-superblock", devpath],
+                         rcs=[0], capture=True)
+    LOG.debug("mdadm zero superblock:\n%s\n%s", out, err)
+
+
 def mdadm_query_detail(md_devname, export=MDADM_USE_EXPORT):
     valid_mdname(md_devname)

diff --git a/tests/unittests/test_clear_holders.py b/tests/unittests/test_clear_holders.py
index ddb149f..9c93b5a 100644
--- a/tests/unittests/test_clear_holders.py
+++ b/tests/unittests/test_clear_holders.py
@@ -132,6 +132,7 @@ class TestClearHolders(CiTestCase):
         mock_block.path_to_kname.assert_called_with(self.test_syspath)
         mock_get_dmsetup_uuid.assert_called_with(self.test_syspath)

+    @mock.patch('curtin.block.clear_holders.block')
     @mock.patch('curtin.block.clear_holders.udev.udevadm_settle')
     @mock.patch('curtin.block.clear_holders.get_bcache_sys_path')
     @mock.patch('curtin.block.clear_holders.util')
@@ -140,7 +141,7 @@ class TestClearHolders(CiTestCase):
     @mock.patch('curtin.block.clear_holders.get_bcache_using_dev')
     def test_shutdown_bcache(self, mock_get_bcache, mock_log, mock_os,
                              mock_util, mock_get_bcache_block,
-                             mock_udevadm_settle):
+                             mock_udevadm_settle, mock_block):
         """test clear_holders.shutdown_bcache"""
         #
         # pass in a sysfs path to a bcache block device,
@@ -152,6 +153,7 @@ class TestClearHolders(CiTestCase):
         #

         device = self.test_syspath
+        mock_block.sys_block_path.return_value = '/dev/null'
         bcache_cset_uuid = 'c08ae789-a964-46fb-a66e-650f0ae78f94'

         mock_os.path.exists.return_value = True
@@ -197,6 +199,7 @@ class TestClearHolders(CiTestCase):
         self.assertEqual(0, len(mock_util.call_args_list))
         self.assertEqual(0, len(mock_get_bcache_block.call_args_list))

+    @mock.patch('curtin.block.clear_holders.block')
     @mock.patch('curtin.block.clear_holders.get_bcache_sys_path')
     @mock.patch('curtin.block.clear_holders.util')
     @mock.patch('curtin.block.clear_holders.os')
@@ -204,18 +207,20 @@ class TestClearHolders(CiTestCase):
     @mock.patch('curtin.block.clear_holders.get_bcache_using_dev')
     def test_shutdown_bcache_no_device(self, mock_get_bcache, mock_log,
                                        mock_os, mock_util,
-                                       mock_get_bcache_block):
+                                       mock_get_bcache_block, mock_block):
         device = "/sys/class/block/null"
+        mock_block.sysfs_to_devpath.return_value = '/dev/null'
         mock_os.path.exists.return_value = False

         clear_holders.shutdown_bcache(device)

-        self.assertEqual(1, len(mock_log.info.call_args_list))
+        self.assertEqual(3, len(mock_log.info.call_args_list))
         self.assertEqual(1, len(mock_os.path.exists.call_args_list))
         self.assertEqual(0, len(mock_get_bcache.call_args_list))
         self.assertEqual(0, len(mock_util.call_args_list))
         self.assertEqual(0, len(mock_get_bcache_block.call_args_list))

+    @mock.patch('curtin.block.clear_holders.block')
     @mock.patch('curtin.block.clear_holders.get_bcache_sys_path')
     @mock.patch('curtin.block.clear_holders.util')
     @mock.patch('curtin.block.clear_holders.os')
@@ -223,8 +228,9 @@ class TestClearHolders(CiTestCase):
     @mock.patch('curtin.block.clear_holders.get_bcache_using_dev')
     def test_shutdown_bcache_no_cset(self, mock_get_bcache, mock_log,
                                      mock_os, mock_util,
-                                     mock_get_bcache_block):
+                                     mock_get_bcache_block, mock_block):
         device = "/sys/class/block/null"
+        mock_block.sysfs_to_devpath.return_value = '/dev/null'
         mock_os.path.exists.side_effect = iter([
             True,   # backing device exists
             False,  # cset device not present (already removed)
@@ -236,7 +242,7 @@ class TestClearHolders(CiTestCase):

         clear_holders.shutdown_bcache(device)

-        self.assertEqual(2, len(mock_log.info.call_args_list))
+        self.assertEqual(4, len(mock_log.info.call_args_list))
         self.assertEqual(3, len(mock_os.path.exists.call_args_list))
         self.assertEqual(1, len(mock_get_bcache.call_args_list))
         self.assertEqual(1, len(mock_get_bcache_block.call_args_list))
@@ -252,6 +258,7 @@ class TestClearHolders(CiTestCase):
             mock.call(device, retries=retries),
             mock.call(device + '/bcache', retries=retries)])

+    @mock.patch('curtin.block.clear_holders.block')
     @mock.patch('curtin.block.clear_holders.udev.udevadm_settle')
     @mock.patch('curtin.block.clear_holders.get_bcache_sys_path')
     @mock.patch('curtin.block.clear_holders.util')
@@ -262,8 +269,10 @@ class TestClearHolders(CiTestCase):
                                            mock_log, mock_os,
                                            mock_util,
                                            mock_get_bcache_block,
-                                           mock_udevadm_settle):
+                                           mock_udevadm_settle,
+                                           mock_block):
         device = "/sys/class/block/null"
+        mock_block.sysfs_to_devpath.return_value = '/dev/null'
         mock_os.path.exists.side_effect = iter([
             True,  # backing device exists
             True,  # cset device not present (already removed)
@@ -276,7 +285,7 @@ class TestClearHolders(CiTestCase):

         clear_holders.shutdown_bcache(device)

-        self.assertEqual(2, len(mock_log.info.call_args_list))
+        self.assertEqual(4, len(mock_log.info.call_args_list))
         self.assertEqual(3, len(mock_os.path.exists.call_args_list))
         self.assertEqual(1, len(mock_get_bcache.call_args_list))
         self.assertEqual(1, len(mock_get_bcache_block.call_args_list))
@@ -293,6 +302,7 @@ class TestClearHolders(CiTestCase):
             mock.call(device, retries=self.remove_retries)
         ])

+    @mock.patch('curtin.block.clear_holders.block')
     @mock.patch('curtin.block.clear_holders.udev.udevadm_settle')
     @mock.patch('curtin.block.clear_holders.get_bcache_sys_path')
     @mock.patch('curtin.block.clear_holders.util')
@@ -303,8 +313,10 @@ class TestClearHolders(CiTestCase):
                                            mock_log, mock_os,
                                            mock_util,
                                            mock_get_bcache_block,
-                                           mock_udevadm_settle):
+                                           mock_udevadm_settle,
+                                           mock_block):
         device = "/sys/class/block/null"
+        mock_block.sysfs_to_devpath.return_value = '/dev/null'
         mock_os.path.exists.side_effect = iter([
             True,  # backing device exists
             True,  # cset device not present (already removed)
@@ -317,7 +329,7 @@ class TestClearHolders(CiTestCase):

         clear_holders.shutdown_bcache(device)

-        self.assertEqual(2, len(mock_log.info.call_args_list))
+        self.assertEqual(4, len(mock_log.info.call_args_list))
         self.assertEqual(3, len(mock_os.path.exists.call_args_list))
         self.assertEqual(1, len(mock_get_bcache.call_args_list))
         self.assertEqual(1, len(mock_get_bcache_block.call_args_list))
@@ -333,6 +345,8 @@ class TestClearHolders(CiTestCase):
         ])

     # test bcache shutdown with 'stop' sysfs write failure
+    @mock.patch('curtin.block.clear_holders.block')
+    @mock.patch('curtin.block.wipe_volume')
     @mock.patch('curtin.block.clear_holders.udev.udevadm_settle')
     @mock.patch('curtin.block.clear_holders.get_bcache_sys_path')
     @mock.patch('curtin.block.clear_holders.util')
@@ -343,9 +357,12 @@ class TestClearHolders(CiTestCase):
                                            mock_log, mock_os,
                                            mock_util,
                                            mock_get_bcache_block,
-                                           mock_udevadm_settle):
+                                           mock_udevadm_settle,
+                                           mock_wipe,
+                                           mock_block):
         """Test writes sysfs write failures pass if file not present"""
         device = "/sys/class/block/null"
+        mock_block.sysfs_to_devpath.return_value = '/dev/null'
         mock_os.path.exists.side_effect = iter([
             True,  # backing device exists
             True,  # cset device not present (already removed)
@@ -363,7 +380,7 @@ class TestClearHolders(CiTestCase):

         clear_holders.shutdown_bcache(device)

-        self.assertEqual(2, len(mock_log.info.call_args_list))
+        self.assertEqual(4, len(mock_log.info.call_args_list))
         self.assertEqual(3, len(mock_os.path.exists.call_args_list))
         self.assertEqual(1, len(mock_get_bcache.call_args_list))
         self.assertEqual(1, len(mock_get_bcache_block.call_args_list))
@@ -378,15 +395,20 @@ class TestClearHolders(CiTestCase):
             mock.call(cset, retries=self.remove_retries)
         ])

+    @mock.patch('curtin.block.quick_zero')
     @mock.patch('curtin.block.clear_holders.LOG')
     @mock.patch('curtin.block.clear_holders.block.sys_block_path')
     @mock.patch('curtin.block.clear_holders.lvm')
     @mock.patch('curtin.block.clear_holders.util')
-    def test_shutdown_lvm(self, mock_util, mock_lvm, mock_syspath, mock_log):
+    def test_shutdown_lvm(self, mock_util, mock_lvm, mock_syspath, mock_log,
+                          mock_zero):
         """test clear_holders.shutdown_lvm"""
         lvm_name = b'ubuntu--vg-swap\n'
         vg_name = 'ubuntu-vg'
         lv_name = 'swap'
+        vg_lv_name = "%s/%s" % (vg_name, lv_name)
+        devname = "/dev/" + vg_lv_name
+        pvols = ['/dev/wda1', '/dev/wda2']
         mock_syspath.return_value = self.test_blockdev
         mock_util.load_file.return_value = lvm_name
         mock_lvm.split_lvm_name.return_value = (vg_name, lv_name)
@@ -394,18 +416,22 @@ class TestClearHolders(CiTestCase):
         clear_holders.shutdown_lvm(self.test_blockdev)
         mock_syspath.assert_called_with(self.test_blockdev)
         mock_util.load_file.assert_called_with(self.test_blockdev + '/dm/name')
+        mock_zero.assert_called_with(devname, partitions=False)
         mock_lvm.split_lvm_name.assert_called_with(lvm_name.strip())
         self.assertTrue(mock_log.debug.called)
         mock_util.subp.assert_called_with(
-            ['dmsetup', 'remove', lvm_name.strip()])
-
+            ['lvremove', '--force', '--force', vg_lv_name])
         mock_lvm.get_lvols_in_volgroup.assert_called_with(vg_name)
         self.assertEqual(len(mock_util.subp.call_args_list), 1)
-        self.assertTrue(mock_lvm.lvm_scan.called)
         mock_lvm.get_lvols_in_volgroup.return_value = []
+        self.assertTrue(mock_lvm.lvm_scan.called)
+        mock_lvm.get_pvols_in_volgroup.return_value = pvols
         clear_holders.shutdown_lvm(self.test_blockdev)
         mock_util.subp.assert_called_with(
             ['vgremove', '--force', '--force', vg_name], rcs=[0, 5])
+        for pv in pvols:
+            mock_zero.assert_any_call(pv, partitions=False)
+        self.assertTrue(mock_lvm.lvm_scan.called)

     @mock.patch('curtin.block.clear_holders.block')
     @mock.patch('curtin.block.clear_holders.util')
@@ -417,18 +443,38 @@ class TestClearHolders(CiTestCase):
         mock_util.subp.assert_called_with(
             ['cryptsetup', 'remove', self.test_blockdev], capture=True)

+    @mock.patch('curtin.block.wipe_volume')
+    @mock.patch('curtin.block.path_to_kname')
+    @mock.patch('curtin.block.sysfs_to_devpath')
     @mock.patch('curtin.block.clear_holders.time')
     @mock.patch('curtin.block.clear_holders.util')
     @mock.patch('curtin.block.clear_holders.LOG')
     @mock.patch('curtin.block.clear_holders.mdadm')
-    @mock.patch('curtin.block.clear_holders.block')
-    def test_shutdown_mdadm(self, mock_block, mock_mdadm, mock_log, mock_util,
-                            mock_time):
+    def test_shutdown_mdadm(self, mock_mdadm, mock_log, mock_util,
+                            mock_time, mock_sysdev, mock_path, mock_wipe):
         """test clear_holders.shutdown_mdadm"""
-        mock_block.sysfs_to_devpath.return_value = self.test_blockdev
-        mock_block.path_to_kname.return_value = self.test_blockdev
+        devices = ['/dev/wda1', '/dev/wda2']
+        spares = ['/dev/wdb1']
+        md_devs = (devices + spares)
+        mock_sysdev.return_value = self.test_blockdev
+        mock_path.return_value = self.test_blockdev
         mock_mdadm.md_present.return_value = False
+        mock_mdadm.md_get_devices_list.return_value = devices
+        mock_mdadm.md_get_spares_list.return_value = spares
+
         clear_holders.shutdown_mdadm(self.test_syspath)
+
+        mock_wipe.assert_called_with(
+            self.test_blockdev, exclusive=False, mode='superblock')
+        mock_mdadm.set_sync_action.assert_has_calls([
+            mock.call(self.test_blockdev, action="idle"),
+            mock.call(self.test_blockdev, action="frozen")])
+        mock_mdadm.fail_device.assert_has_calls(
+            [mock.call(self.test_blockdev, dev) for dev in md_devs])
+        mock_mdadm.remove_device.assert_has_calls(
+            [mock.call(self.test_blockdev, dev) for dev in md_devs])
+        mock_mdadm.zero_device.assert_has_calls(
+            [mock.call(dev) for dev in md_devs])
         mock_mdadm.mdadm_stop.assert_called_with(self.test_blockdev)
         mock_mdadm.md_present.assert_called_with(self.test_blockdev)
         self.assertTrue(mock_log.debug.called)
diff --git a/tests/vmtests/test_lvm.py b/tests/vmtests/test_lvm.py
index 40f6cb7..ed708fd 100644
--- a/tests/vmtests/test_lvm.py
+++ b/tests/vmtests/test_lvm.py
@@ -10,10 +10,15 @@ class TestLvmAbs(VMBaseClass):
     conf_file = "examples/tests/lvm.yaml"
     interactive = False
     extra_disks = ['10G']
+    dirty_disks = True
     collect_scripts = VMBaseClass.collect_scripts + [textwrap.dedent("""
         cd OUTPUT_COLLECT_D
         cat /etc/fstab > fstab
         ls /dev/disk/by-dname > ls_dname
+        ls -al /dev/disk/by-dname > lsal_dname
+        ls -al /dev/disk/by-id/ > ls_byid
+        ls -al /dev/disk/by-uuid/ > ls_byuuid
+        cat /proc/partitions > proc_partitions
         find /etc/network/interfaces.d > find_interfacesd
         pvdisplay -C --separator = -o vg_name,pv_name --noheadings > pvs
         lvdisplay -C --separator = -o lv_name,vg_name --noheadings > lvs
diff --git a/tests/vmtests/test_lvm_iscsi.py b/tests/vmtests/test_lvm_iscsi.py
index e94fae3..2a11d6e 100644
--- a/tests/vmtests/test_lvm_iscsi.py
+++ b/tests/vmtests/test_lvm_iscsi.py
@@ -9,6 +9,7 @@ import textwrap

 class TestLvmIscsiAbs(TestLvmAbs, TestBasicIscsiAbs):
     interactive = False
+    dirty_disks = True
     iscsi_disks = [
         {'size': '6G'},
         {'size': '5G', 'auth': 'user:passw0rd', 'iauth': 'iuser:ipassw0rd'}]
@@ -20,6 +21,8 @@ class TestLvmIscsiAbs(TestLvmAbs, TestBasicIscsiAbs):
         """
         cd OUTPUT_COLLECT_D
         ls -al /sys/class/block/dm*/slaves/ > dm_slaves
+        cp -a /etc/udev/rules.d udev_rules_d
+        cp -a /etc/iscsi etc_iscsi
         """)]

     fstab_expected = {
