Merge ~raharper/curtin:fix/bcache-over-raid5 into curtin:master
| Status: | Merged |
|---|---|
| Approved by: | Scott Moser |
| Approved revision: | 46f8000f783cea49cf12051f741edba05fbc7900 |
| Merged at revision: | 46f8000f783cea49cf12051f741edba05fbc7900 |
| Proposed branch: | ~raharper/curtin:fix/bcache-over-raid5 |
| Merge into: | curtin:master |
Diff against target: 381 lines (+120/-58), 7 files modified:

- curtin/block/__init__.py (+11/-9)
- curtin/block/clear_holders.py (+62/-28)
- curtin/commands/block_meta.py (+9/-4)
- examples/tests/mdadm_bcache.yaml (+22/-5)
- tests/unittests/test_block.py (+10/-6)
- tests/unittests/test_clear_holders.py (+4/-4)
- tests/vmtests/test_mdadm_bcache.py (+2/-2)

Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Scott Moser (community) | | | Approve
Server Team CI bot | continuous-integration | | Approve
Review via email: mp+339415@code.launchpad.net |
Description of the change
clear_holders: wipe complex devices before disassembly
The curtin clear-holders code did not wipe the contents of assembled
devices before shutting them down. When curtin later re-assembles the
same underlying devices, into a RAID for example, the stale metadata
that comes back on-line can prevent some tools from creating new
metadata on the target device. In particular, the bcache tools refuse
to overwrite an existing bcache superblock, which fails the deployment
when layering bcache over a RAID volume.
The fix has two parts. First, clear-holders wipes the superblock of
the assembled device before breaking it apart. Second, when creating a
bcache backing device, curtin first checks whether the target device
is already claimed by the bcache layer; if so, it stops and wipes that
device before formatting the bcache superblock.
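The second part of the fix can be sketched as follows. The helpers here (`is_claimed_by_bcache`, `stop_and_wipe`, `make_backing_device`) are simplified stand-ins for illustration, not curtin's actual API; the "claimed" state is simulated with a set of device names.

```python
# Sketch of the backing-device check described above. Helper names
# are illustrative stand-ins, not curtin's real API.

CLAIMED = set()  # device names currently claimed by a (simulated) bcache layer

def is_claimed_by_bcache(dev):
    # In curtin this would inspect sysfs state for the device rather
    # than a set membership check.
    return dev in CLAIMED

def stop_and_wipe(dev):
    # Stop the bcache device and wipe its stale superblock.
    CLAIMED.discard(dev)

def make_backing_device(dev):
    # The bcache tools refuse to overwrite an existing superblock.
    if dev in CLAIMED:
        raise RuntimeError('existing bcache superblock on %s' % dev)
    CLAIMED.add(dev)

def create_backing_device(dev):
    # Check first whether the target is already claimed by the bcache
    # layer; if so, stop and wipe it before formatting.
    if is_claimed_by_bcache(dev):
        stop_and_wipe(dev)
    make_backing_device(dev)
```

Without the up-front check, the `make_backing_device` step fails exactly as the deployment did when layering bcache over a re-assembled RAID.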
curtin.block
- expose 'exclusive' boolean and pass it through to context manager
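A minimal sketch of passing an 'exclusive' flag through to an open context manager; mapping the flag onto `O_EXCL` is an assumption for illustration (on Linux, opening a block device with `O_EXCL` fails while the kernel considers it in use, e.g. while it is still part of an assembled array):

```python
import os
from contextlib import contextmanager

@contextmanager
def exclusive_open(path, exclusive=True):
    # When 'exclusive' is set, add O_EXCL: opening a block device
    # with O_EXCL fails with EBUSY while the kernel holds it busy
    # (mounted, or still part of an assembled RAID).
    flags = os.O_RDWR
    if exclusive:
        flags |= os.O_EXCL
    fd = os.open(path, flags)
    try:
        yield fd
    finally:
        os.close(fd)
```

Callers that must tolerate a still-busy device can pass `exclusive=False`, which is why exposing the boolean is useful.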
curtin.block.clear_holders
- add logic to wipe assembled devices before calling the shutdown
function
- introduce an internal _wipe_superblock helper that contains only the
retry logic
- switch to using dmsetup remove for LVM destruction
- improve identify_raid: it incorrectly identified partitions on RAID0
and RAID1 devices as RAID devices when they are in fact partitions
LP: #1750519
One question inline.