Merge ~mwhudson/curtin:reformat-vs-preserve into curtin:master

Proposed by Michael Hudson-Doyle
Status: Merged
Approved by: Michael Hudson-Doyle
Approved revision: 54a06a6260ac8186db5a02dbedf3db9f6e995965
Merge reported by: Server Team CI bot
Merged at revision: not available
Proposed branch: ~mwhudson/curtin:reformat-vs-preserve
Merge into: curtin:master
Diff against target: 235 lines (+170/-3)
5 files modified
curtin/block/schemas.py (+1/-0)
curtin/commands/block_meta.py (+14/-3)
doc/topics/storage.rst (+5/-0)
examples/tests/partition-existing-raid.yaml (+115/-0)
tests/vmtests/test_preserve_raid.py (+35/-0)
Reviewer Review Type Date Requested Status
Ryan Harper (community) Approve
Server Team CI bot continuous-integration Approve
Dan Bungert Approve
Review via email: mp+404623@code.launchpad.net

Commit message

disk_handler: check wipe field when deciding whether to reformat raids

Currently preserve=true for raid means "preserve the RAID array"
and ALSO "preserve the partition table". So change the
interpretation of preserve to be solely about preserving the
array and instead check the wipe field to decide if the array
should get a new partition table or not.

LP: #1932976
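The rule described in the commit message can be sketched as a small standalone function. This is an illustrative simplification of the patched `disk_handler` branch, not curtin's actual code; `value_as_boolean` below is a hypothetical stand-in for `curtin.config.value_as_boolean`.

```python
def value_as_boolean(value):
    # Hypothetical stand-in for curtin.config.value_as_boolean:
    # treats None, empty and "no"-like strings as false.
    false_values = (False, None, '', '0', 'off', 'false', 'no', 0)
    return value not in false_values


def preserve_ptable(action):
    """Decide whether an existing partition table should be kept.

    For disks, 'preserve' controls the partition table directly. For
    compound devices such as raids, 'preserve' only means "do not
    recreate the array", so the 'wipe' field decides whether the
    device gets a new partition table.
    """
    if action['type'] == 'disk':
        return value_as_boolean(action.get('preserve'))
    return not value_as_boolean(action.get('wipe'))


# A preserved raid with wipe set gets a fresh partition table:
assert preserve_ptable(
    {'type': 'raid', 'preserve': True, 'wipe': 'superblock'}) is False
# A preserved raid without wipe keeps its partition table:
assert preserve_ptable({'type': 'raid', 'preserve': True}) is True
# For plain disks, preserve still controls the table:
assert preserve_ptable({'type': 'disk', 'preserve': True}) is True
```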

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Ryan Harper (raharper) wrote :

Hrm, I didn't think this was true (that preserve means keeping the partition table).

Looking at raid_handler, if you set wipe: superblock, it will wipe a verified array.

```
create_raid = True
if preserve:
    raid_verify(md_devname, raidlevel, device_paths, spare_device_paths)
    LOG.debug('raid %s already present, skipping create', md_devname)
    create_raid = False

if create_raid:
    mdadm.mdadm_create(md_devname, raidlevel,
                       device_paths, spare_device_paths, container_dev,
                       info.get('mdname', ''), metadata)

wipe_mode = info.get('wipe')
if wipe_mode:
    if wipe_mode == 'superblock' and create_raid:
        # Newly created raid devices already wipe member superblocks at
        # their data offset (this is equivalent to wiping the assembled
        # device, see curtin.block.mdadm.zero_device for more details).
        pass
    else:
        LOG.debug('Wiping raid device %s mode=%s', md_devname, wipe_mode)
        block.wipe_volume(md_devname, mode=wipe_mode, exclusive=False)
```

Curtin will verify the array, lower the create_raid flag, and then if wipe
is set, it will clear the array; it only *skips* clearing if wipe=superblock
and we've just created the array (which already wipes the underlying disks).
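Condensed, the quoted logic reduces to a single predicate. This is an illustrative sketch of the behaviour described above, not curtin's actual code:

```python
def should_wipe_raid(preserve, wipe_mode):
    """Whether raid_handler wipes the assembled md device (sketch).

    A wipe happens whenever 'wipe' is set, except that a 'superblock'
    wipe is skipped when the array was just created, since creation
    already zeroes the members at their data offset.
    """
    create_raid = not preserve  # preserved arrays are verified, not created
    if not wipe_mode:
        return False
    if wipe_mode == 'superblock' and create_raid:
        return False
    return True


# Preserved array + wipe: superblock -> the assembled array IS wiped:
assert should_wipe_raid(preserve=True, wipe_mode='superblock') is True
# Freshly created array + wipe: superblock -> skipped:
assert should_wipe_raid(preserve=False, wipe_mode='superblock') is False
# No wipe requested -> nothing is wiped either way:
assert should_wipe_raid(preserve=True, wipe_mode=None) is False
```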

review: Needs Information
cbdd142... by Michael Hudson-Doyle

add vmtest

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
1e414ee... by Michael Hudson-Doyle

fix vmtest

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
b16647c... by Michael Hudson-Doyle

a different approach

b7283ab... by Michael Hudson-Doyle

add a comment. and fix logic to not be 180 degrees wrong

Revision history for this message
Michael Hudson-Doyle (mwhudson) wrote :

OK, I think this is ready for another look now (following the discussion on the bug).

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
760d60a... by Michael Hudson-Doyle

add wipe to schema for raid actions

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Dan Bungert (dbungert) :
review: Approve
Revision history for this message
Ryan Harper (raharper) wrote :

This looks great. Two comments:

1) Let's update doc/topics/storage.rst with the comment you added in the disk handler about what preserve/wipe mean for existing compound devices.

2) In the vmtest, create a partition table on top of the raid that differs from the final ptable in the storage config.

54a06a6... by Michael Hudson-Doyle

address review comments by improving test & documentation

Revision history for this message
Michael Hudson-Doyle (mwhudson) wrote :

Good points, done.

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Ryan Harper (raharper) wrote :

Awesome! This looks great.

review: Approve

Preview Diff

diff --git a/curtin/block/schemas.py b/curtin/block/schemas.py
index d846505..70de260 100644
--- a/curtin/block/schemas.py
+++ b/curtin/block/schemas.py
@@ -323,6 +323,7 @@ RAID = {
         'metadata': {'type': ['string', 'number']},
         'preserve': {'$ref': '#/definitions/preserve'},
         'ptable': {'$ref': '#/definitions/ptable'},
+        'wipe': {'$ref': '#/definitions/wipe'},
         'spare_devices': {'$ref': '#/definitions/devices'},
         'container': {'$ref': '#/definitions/id'},
         'type': {'const': 'raid'},
diff --git a/curtin/commands/block_meta.py b/curtin/commands/block_meta.py
index 8cb2784..e5cf659 100644
--- a/curtin/commands/block_meta.py
+++ b/curtin/commands/block_meta.py
@@ -583,7 +583,17 @@ def disk_handler(info, storage_config):
             'Invalid partition table type: %s in %s' % (ptable, info))

     disk = get_path_to_storage_volume(info.get('id'), storage_config)
-    if config.value_as_boolean(info.get('preserve')):
+    # For disks, 'preserve' is what indicates whether the partition
+    # table should be reused or recreated but for compound devices
+    # such as raids, it indicates if the raid should be created or
+    # assumed to already exist. So to allow a pre-existing raid to get
+    # a new partition table, we use presence of 'wipe' field to
+    # indicate if the disk should be reformatted or not.
+    if info['type'] == 'disk':
+        preserve_ptable = config.value_as_boolean(info.get('preserve'))
+    else:
+        preserve_ptable = not config.value_as_boolean(info.get('wipe'))
+    if preserve_ptable:
         # Handle preserve flag, verifying if ptable specified in config
         if ptable and ptable != PTABLE_UNSUPPORTED:
             current_ptable = block.get_part_table_type(disk)
@@ -591,8 +601,9 @@ def disk_handler(info, storage_config):
             if current_ptable not in PTABLES_SUPPORTED:
                 raise ValueError(
                     "disk '%s' does not have correct partition table or "
-                    "cannot be read, but preserve is set to true. "
-                    "cannot continue installation." % info.get('id'))
+                    "cannot be read, but preserve is set to true (or wipe is "
+                    "not set). cannot continue installation." %
+                    info.get('id'))
         LOG.info("disk '%s' marked to be preserved, so keeping partition "
                  "table" % disk)
     else:
diff --git a/doc/topics/storage.rst b/doc/topics/storage.rst
index b20d5c1..0f33ec0 100644
--- a/doc/topics/storage.rst
+++ b/doc/topics/storage.rst
@@ -865,6 +865,10 @@ If ``wipe`` option is set to values other than 'superblock', curtin will
 wipe contents of the assembled raid device. Curtin skips 'superblock` wipes
 as it already clears raid data on the members before assembling the array.

+To allow a pre-existing (i.e. ``preserve=true``) raid to get a new partition
+table, set the ``wipe`` field to indicate the disk should be
+reformatted (this is different from disk actions, where the preserve field is
+used for this. But that means something different for raid devices).

 **Config Example**::

@@ -930,6 +934,7 @@ the bcache device. This includes checking that backing device and cache
 device are enabled and bound correctly (backing device is cached by expected
 cache device). If ``cache-mode`` is specified, verify that the mode matches.

+
 **wipe**: *superblock, superblock-recursive, pvremove, zero, random*

 If ``wipe`` option is set, curtin will wipe the contents of the bcache device.
diff --git a/examples/tests/partition-existing-raid.yaml b/examples/tests/partition-existing-raid.yaml
new file mode 100644
index 0000000..07cf8d2
--- /dev/null
+++ b/examples/tests/partition-existing-raid.yaml
@@ -0,0 +1,115 @@
+showtrace: true
+
+bucket:
+  - &setup |
+    parted /dev/disk/by-id/virtio-disk-b --script -- \
+        mklabel gpt \
+        mkpart primary 1GiB 9GiB \
+        set 1 boot on
+    parted /dev/disk/by-id/virtio-disk-c --script -- \
+        mklabel gpt \
+        mkpart primary 1GiB 9GiB \
+        set 1 boot on
+    udevadm settle
+    mdadm --create --metadata 1.2 --level 1 -n 2 /dev/md1 --assume-clean \
+        /dev/disk/by-id/virtio-disk-b-part1 /dev/disk/by-id/virtio-disk-c-part1
+    udevadm settle
+    parted /dev/md1 --script -- \
+        mklabel dos
+    udevadm settle
+    mdadm --stop /dev/md1
+    udevadm settle
+
+# Create a RAID now to test curtin's reuse of existing RAIDs.
+early_commands:
+  00-setup-raid: [sh, -exuc, *setup]
+
+storage:
+  config:
+    - type: disk
+      id: id_disk0
+      serial: disk-a
+      ptable: gpt
+      wipe: superblock
+    - type: disk
+      id: id_disk1
+      serial: disk-b
+      ptable: gpt
+      preserve: true
+    - type: disk
+      id: id_disk2
+      serial: disk-c
+      ptable: gpt
+      preserve: true
+    - type: partition
+      id: id_disk0_part1
+      device: id_disk0
+      flag: boot
+      number: 1
+      size: 512M
+    - type: partition
+      id: id_disk0_part2
+      device: id_disk0
+      number: 2
+      size: 3G
+    - type: partition
+      id: id_disk0_part3
+      device: id_disk0
+      number: 3
+      size: 3G
+    - type: partition
+      id: id_disk1_part1
+      device: id_disk1
+      flag: boot
+      number: 1
+      size: 8G
+      preserve: true
+    - type: partition
+      id: id_disk2_part1
+      device: id_disk2
+      flag: boot
+      number: 1
+      size: 8G
+      preserve: true
+    - type: raid
+      id: raid-md1
+      name: md1
+      raidlevel: raid1
+      devices:
+        - id_disk1_part1
+        - id_disk2_part1
+      spare_devices: []
+      metadata: 1.2
+      preserve: true
+      wipe: superblock
+      ptable: gpt
+    - type: partition
+      id: id_raid1_part1
+      device: raid-md1
+      number: 1
+      size: 7G
+    - type: format
+      id: id_efi_format
+      volume: id_disk0_part1
+      fstype: fat32
+    - type: format
+      id: id_root_format
+      volume: id_disk0_part2
+      fstype: ext4
+    - type: format
+      id: id_raid-md1_format
+      volume: id_raid1_part1
+      fstype: ext4
+    - type: mount
+      device: id_root_format
+      id: id_root_mount
+      path: /
+    - type: mount
+      id: id_efi_mount
+      device: id_efi_format
+      path: /boot/efi
+    - type: mount
+      id: id_raid-md1_mount
+      device: id_raid-md1_format
+      path: /srv
+  version: 1
diff --git a/tests/vmtests/test_preserve_raid.py b/tests/vmtests/test_preserve_raid.py
index 7fc6daa..8c85a20 100644
--- a/tests/vmtests/test_preserve_raid.py
+++ b/tests/vmtests/test_preserve_raid.py
@@ -37,4 +37,39 @@ class GroovyTestPreserveRAID(relbase.groovy, TestPreserveRAID):
     __test__ = True


+class TestPartitionExistingRAID(VMBaseClass):
+    """ Test that curtin can repartition an existing RAID. """
+    conf_file = "examples/tests/partition-existing-raid.yaml"
+    extra_disks = ['10G', '10G', '10G']
+    uefi = True
+    extra_collect_scripts = [textwrap.dedent("""
+        cd OUTPUT_COLLECT_D
+        lsblk --nodeps --noheading --raw --output PTTYPE /dev/md1 > md1-pttype
+        exit 0
+        """)]
+
+    def test_correct_ptype(self):
+        self.assertEqual('gpt', self.load_collect_file('md1-pttype').strip())
+
+
+class BionicTestPartitionExistingRAID(
+        relbase.bionic, TestPartitionExistingRAID):
+    __test__ = True
+
+
+class FocalTestPartitionExistingRAID(
+        relbase.focal, TestPartitionExistingRAID):
+    __test__ = True
+
+
+class HirsuteTestPartitionExistingRAID(
+        relbase.hirsute, TestPartitionExistingRAID):
+    __test__ = True
+
+
+class GroovyTestPartitionExistingRAID(
+        relbase.groovy, TestPartitionExistingRAID):
+    __test__ = True
+
+
 # vi: ts=4 expandtab syntax=python
