Merge ~raharper/curtin:fix/refactor-preserve-wipe into curtin:master

Proposed by Ryan Harper
Status: Merged
Approved by: Chad Smith
Approved revision: 667d4acded9df3e2bb240e5f10a30c6f917840c7
Merge reported by: Server Team CI bot
Merged at revision: not available
Proposed branch: ~raharper/curtin:fix/refactor-preserve-wipe
Merge into: curtin:master
Diff against target: 2822 lines (+1841/-473)
19 files modified
curtin/__init__.py (+2/-0)
curtin/block/__init__.py (+79/-0)
curtin/block/bcache.py (+242/-1)
curtin/block/lvm.py (+12/-2)
curtin/block/schemas.py (+4/-0)
curtin/commands/block_meta.py (+420/-410)
curtin/storage_config.py (+31/-30)
doc/topics/storage.rst (+87/-19)
examples/tests/preserve-bcache.yaml (+82/-0)
examples/tests/preserve-lvm.yaml (+77/-0)
examples/tests/preserve-partition-wipe-vg.yaml (+116/-0)
examples/tests/preserve-raid.yaml (+4/-2)
examples/tests/uefi_reuse_esp.yaml (+2/-2)
tests/unittests/test_commands_block_meta.py (+494/-4)
tests/unittests/test_storage_config.py (+4/-3)
tests/vmtests/__init__.py (+2/-0)
tests/vmtests/test_preserve_bcache.py (+67/-0)
tests/vmtests/test_preserve_lvm.py (+80/-0)
tests/vmtests/test_preserve_partition_wipe_vg.py (+36/-0)
Reviewer            Review Type             Date Requested  Status
Server Team CI bot  continuous-integration                  Approve
Chad Smith                                                  Approve
Review via email: mp+373610@code.launchpad.net

Commit message

block-meta: refactor storage_config preserve and wipe settings

The subiquity installer has use-cases where users wish to preserve
some devices and then repurpose them into other storage devices.
For example, retain a particular partition and then add it to an
LVM setup. Curtin currently makes users choose to preserve devices
wholesale or choose to wipe them. To support the 'repurpose' use
case we need to be able to preserve a device and then optionally
wipe the data therein independently of preservation.

The block schema has been updated to accept preserve and wipe
settings for each block device type. For the following types,
dasd, disk, partition, format, lvm_volgroup, lvm_partition,
dm_crypt, raid and bcache, Curtin will accept the preserve
boolean, and if set True, will verify various characteristics
of the target device and compare this to the specified config.

For example, preserving a partition requires a device, a size, and
some optional values, like a flag. Curtin will verify that the
device exists, the partition number is correct, the size of the
partition matches, and the correct flag is set on the device.

The set of verifications performed is storage-type specific.
Any verification failure results in Curtin raising a RuntimeError
exception which includes the expected and found values.

Additional work:

- Migrate bcache creation functions from block-meta into
  block.bcache
- Unittest and vmtests added for verifying preserve for each
  storage type.

LP: #1837214
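
For illustration, a minimal storage-config sketch of this repurpose use
case (hypothetical ids, device names and sizes, written as the python
dicts the block-meta handlers consume):

    storage_config = [
        # keep the existing disk and its partition table
        {'id': 'disk0', 'type': 'disk', 'path': '/dev/vda',
         'ptable': 'gpt', 'preserve': True},
        # keep partition 1 as-is, but clear any old signatures on it
        {'id': 'disk0-part1', 'type': 'partition', 'device': 'disk0',
         'number': 1, 'size': '4GB', 'preserve': True,
         'wipe': 'superblock'},
        # then repurpose it as an LVM physical volume
        {'id': 'vg0', 'type': 'lvm_volgroup', 'name': 'vg0',
         'devices': ['disk0-part1']},
        {'id': 'vg0-lv1', 'type': 'lvm_partition', 'volgroup': 'vg0',
         'name': 'lv1', 'size': '2GB'},
    ]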

Revision history for this message
Ryan Harper (raharper) wrote :

Updated this branch. I've implemented partition verification for existence, size, and flags. Fixed up the vmtest case for reusing a partition, wiping it, and adding it to vgs. I've also added some additional support for setting 'wipe' for various devices (raids, bcache, dm_crypt). Raid already has preserve verification, but we'll need a test case. I've added dm_crypt verification (existence and backing volume). bcache verification won't be hard and that's up next.
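
All of the verification helpers share one shape: read the found value
from the running system, log expected vs. found, and raise RuntimeError
on mismatch. A simplified excerpt of the size check from the diff below
(block and LOG as imported in curtin/commands/block_meta.py):

    def verify_size(devpath, expected_size_bytes):
        # compare the size the kernel reports against the config
        found_size_bytes = block.read_sys_block_size_bytes(devpath)
        msg = ('Verifying %s size, expecting %s bytes, found %s bytes' % (
            devpath, expected_size_bytes, found_size_bytes))
        LOG.debug(msg)
        if expected_size_bytes != found_size_bytes:
            raise RuntimeError(msg)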

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
Michael Hudson-Doyle (mwhudson) wrote :

I really like this for the most part!

The only thing that seems a bit odd is the default wipe behaviour (which probably will not be a problem for subiquity as we should always pass it): the default for raid, lvs and dm_crypt seems to be to wipe the superblock, even if the device is freshly created, which seems a bit strange (I'm also not sure that wiping the superblock in the preserve=True case is particularly sane but also I'm not sure that it isn't ;-p). The default for partition, otoh, seems to be not to wipe, which is also strange because for partitions you can make the case that wiping the superblock of a freshly created partition is actually a good idea! (If you repartition a disk twice with the same settings, you can easily find that a new partition already has a filesystem). Given all that, *I'd* be fine with making wipe a mandatory parameter but I guess MAAS doesn't always set it?

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Ryan Harper (raharper) wrote :

Rebasing this branch and thinking on your comments re: default wipe.

I see what you mean w.r.t wipe defaults. Let me work through the scenarios:

For partition creation, we wipe *prior* to creating partitions, explicitly
at the offset where the partition will be created to prevent "buried treasure"
from being discovered by the kernel as soon as we complete partition creation.

Generally this makes wiping after creating a partition redundant overhead
if the wipe mode is superblock; if the mode is zero or superblock-recursive,
then we should follow those instructions and continue to wipe. I dropped
this last part (avoiding the double superblock wipe), so I'll re-add that.

For composed types (like lvm and raid) it's difficult to wipe before they
are created ... however, on raid we inspect raid members, and if they have
existing mdadm metadata we use that to calculate where the data offset
is and zero that.
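
A sketch of that member inspection, reusing curtin's util.subp and
block.zero_file_at_offsets helpers; the 'Data Offset' parsing and the
member_dev name are illustrative assumptions, not the code in this branch:

    SECTOR = 512
    out, _ = util.subp(['mdadm', '--examine', member_dev], capture=True)
    for line in out.splitlines():
        # mdadm --examine prints e.g. '    Data Offset : 264192 sectors'
        if line.strip().startswith('Data Offset'):
            offset_sectors = int(line.split(':')[1].split()[0])
            # zero where the array data begins so a recreated raid
            # cannot rediscover old contents
            block.zero_file_at_offsets(member_dev,
                                       [offset_sectors * SECTOR],
                                       exclusive=False)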

For raid_handler, we can apply the same skip (only wipe if set and != "superblock").

For LVM, I've not done the work to try to determine where lvm data is stored on pvs;
I'll update this handler to only wipe if it's set.
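
In handler terms that skip comes out roughly like the raid handler in the
diff below (the partition and dm_crypt handlers do the same with their own
create flags):

    wipe_mode = info.get('wipe')
    if wipe_mode:
        if wipe_mode == 'superblock' and create_raid:
            # a freshly created raid already had its members' superblock
            # locations cleared, so a second superblock wipe is redundant
            pass
        else:
            LOG.debug('Wiping raid device %s mode=%s', md_devname, wipe_mode)
            block.wipe_volume(md_devname, mode=wipe_mode, exclusive=False)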

7b92f6a... by Ryan Harper

Adjust wipe mode defaults, depending on device type

1184326... by Ryan Harper

Fix style nits

2684f63... by Ryan Harper

block-meta: refactor volgroup verification

Revision history for this message
Michael Hudson-Doyle (mwhudson) wrote :

On Thu, 27 Feb 2020 at 19:58, Ryan Harper <email address hidden> wrote:

> Rebasing this branch and thinking on your comments re: default wipe.
>
> I see what you mean w.r.t wipe defaults. Let me work through the
> scenarios:
>
> For partition creation, we wipe *prior* to creating partitions, explicitly
> at the offset where the partition will be created to prevent "buried
> treasure"
> from being discovered by the kernel as soon as we complete partition
> creation.
>

Oh! OK then :)

> Generally this makes wiping after creating a partition redundant
> overhead if the wipe mode is superblock; if the mode is zero or
> superblock-recursive, then we should follow those instructions and
> continue to wipe. I dropped this last part (avoiding the double
> superblock wipe), so I'll re-add that.
>

Thanks.

> For composed types (like lvm and raid) it's difficult to wipe before they
> are created ... however, on raid we inspect raid members, and if they have
> existing mdadm metadata we use that to calculate where the data offset
> is and zero that.
>

Well these are a bit different because the tools are a bit more, uh,
assertive about what's going on, or something.

> For raid_handler, we can apply the same skip (only wipe if set and !=
> "superblock").
>
> For LVM, I've not done the work to try to determine where lvm data is
> stored on pvs;
> I'll update this handler to only wipe if it's set.
>

I'm pretty sure that reading from a freshly created logical volume or RAID
will get you zeros irrespective of what was on the disks before (and mdadm
will effectively do an asynchronous wipe: zero anyway aiui) but I guess if
the user explicitly asks for their new raid or lv to have zeros written to
it, we should do that.

Revision history for this message
Ryan Harper (raharper) wrote :

> I'm pretty sure that reading from a freshly created logical volume or RAID
> will get you zeros irrespective of what was on the disks before (and mdadm
> will effectively do an asynchronous wipe: zero anyway aiui) but I guess if

Curtin needs to be a bit more defensive because mdadm is complicated.

https://bugs.launchpad.net/curtin/+bug/1815018

In there, as soon as the raid was recreated, udev does read/probing on the
/dev/mdXXX and *finds* whatever was there before (bcache in the bug IIRC).

The --zero-superblock removes raid *metadata* but does *nothing* for the
contents. So if you recreate a raid with the same stripe size and set of
disks, you will *find* your data again. It's almost as if RAID was designed
to not lose data. Curtin clear-holders is designed to find that data and
kill it until it's really dead. =)

> the user explicitly asks for their new raid or lv to have zeros written to
> it, we should do that.

+1

b15f09c... by Ryan Harper

block-meta: add lvm_partition preserve verification

98e3ac8... by Ryan Harper

block-meta: refactor raid preserve into common pattern

0f89cbb... by Ryan Harper

block-meta: refactor bcache handler to support preserve

Bcache is a bit more fun in that we can create backing, cache
or both along with some optional parameters. In verification
we need to support verifying each of those separate modes. The
most common is a cache + backing in one config, but in some cases
we have multiple sets of the same cache device with different backing
devs.
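
The combined-mode verify entry point, simplified from the diff below,
checks each role and then that the backing superblock references the
cache set:

    def bcache_verify(cachedev, backingdev, cache_mode):
        bcache_verify_cachedev(cachedev)      # is it a cache device?
        bcache_verify_backingdev(backingdev)  # is it a backing device?
        cache_info = bcache.superblock_asdict(cachedev)
        backing_info = bcache.superblock_asdict(backingdev)
        # the backing dev must be attached to this cache set's uuid
        verify_bcache_cset_uuid_match(backingdev, cache_info, backing_info)
        if cache_mode:
            verify_cache_mode(backingdev, backing_info, cache_mode)
        return True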

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
6cc67d9... by Ryan Harper

Add preserve to bcache and dmcrypt

1747713... by Ryan Harper

unittests: add unittests for verifying preserve on lvm_partitions

bba9a45... by Ryan Harper

unittests: add tests for dm_crypt_verify, fix block.dmsetup_info (add capture=True)

16d8616... by Ryan Harper

unittests: add lvm_volgroup handler and preserve tests, fix issues found

f3c695c... by Ryan Harper

creat -> create

12961f0... by Ryan Harper

unittest: add unittests for raid verification

456bc84... by Ryan Harper

bcache: move bcache block-meta methods to block.bcache

73e4f02... by Ryan Harper

unittest: add block-meta bcache handler initial unittest

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
de9b092... by Ryan Harper

Move bcache_verify methods ahead of bcache_handler

e1f35f2... by Ryan Harper

fix verification failure on preserve-raid, missing boot flags

4912642... by Ryan Harper

Fix preserve lvm partition wipe, don't get path until after create.

6cfbc48... by Ryan Harper

Drop disco, add Focal

2b51fc7... by Ryan Harper

fix bcache movement by returning device path

96b83b6... by Ryan Harper

Fix verification of reuse esp, adjust sizes and don't verify flag if not set in config

b500932... by Ryan Harper

Add vmtest for preserve-bcache

56e7a7e... by Ryan Harper

Add vmtest for preserve-lvm

26e716a... by Ryan Harper

Fix tox, and wipe-by-default lvm-part unittest

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
d2b5a53... by Ryan Harper

return a default state dir with no fstab for raid unittest

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
e945f0f... by Ryan Harper

block-meta: drop VerifyResult namedtuple, raise RuntimeError directly.

Revision history for this message
Chad Smith (chad.smith) :
review: Needs Information
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:e945f0f2705bf3a192cada83303f3409ae477d2d
https://jenkins.ubuntu.com/server/job/curtin-ci/7/
Executed test runs:
    None: https://jenkins.ubuntu.com/server/job/admin-lp-git-vote/2086/

review: Approve (continuous-integration)
Revision history for this message
Chad Smith (chad.smith) wrote :

review still in progress, but dinner/EOW calls.

Revision history for this message
Ryan Harper (raharper) wrote :

Thanks, I've replied to what you've asked.

Revision history for this message
Chad Smith (chad.smith) wrote :

Pass #2 on this branch set.

I still am a bit confused about when both wipe and preserve are specified for a given object type. Do you think there is a need for some RTD changes here, with a bit more information about the use of both flags and the implications for the devices?

review: Needs Information
Revision history for this message
Ryan Harper (raharper) wrote :

Doc updates are definitely needed, thanks for bringing them up.

Preserve is meant to keep a device (partition, volgroup, lvm_partition, bcache) as it is described. In effect, the storage-config is saying: I expect that there is a partition on top of a specific disk, that it has a particular number, a specific size, and this flag set on it.

Curtin will verify that all of the info provided is present upon the device to preserve; if verified, then curtin can *use* the device as it is, without modification.

Wipe means to clear *data* on the device itself. For a partition, this means zeroing some portion of the partition (superblock mode writes zeros over the first and last 1MB of a device).
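
A minimal sketch of the superblock mode's semantics, using a hypothetical
standalone helper rather than curtin's actual block.wipe_volume:

    MB = 1024 * 1024

    def superblock_wipe(devpath):
        # zero the first and last 1MB of the device: enough to clear
        # filesystem/raid/lvm signatures without touching the bulk data
        with open(devpath, 'rb+') as fp:
            fp.seek(0, 2)           # seek to end to learn the device size
            size = fp.tell()
            for offset in (0, max(0, size - MB)):
                fp.seek(offset)
                fp.write(b'\0' * MB)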

Revision history for this message
Ryan Harper (raharper) wrote :

Thanks, I'll push an update in a bit.

fe1d8b3... by Ryan Harper

Drop additional if verify() logic since verify_() methods are much like asserts and raise on error

3d25544... by Ryan Harper

Drop leading space in log message

9eae9d5... by Ryan Harper

Drop debug bloop

c0e16d0... by Ryan Harper

doc: update preserve/wipe documentation

b9701c1... by Ryan Harper

Add feature flag for preserve/wipe change

Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:b9701c14d49585e18b147093e9951ff6998c9b38
https://jenkins.ubuntu.com/server/job/curtin-ci/8/
Executed test runs:
    None: https://jenkins.ubuntu.com/server/job/admin-lp-git-vote/2087/

review: Approve (continuous-integration)
Revision history for this message
Chad Smith (chad.smith) wrote :

Thanks for these fixes, minor doc nits and a request for more docs on pvremove
https://paste.ubuntu.com/p/zb4V4FJKQS/

Revision history for this message
Chad Smith (chad.smith) :
3dbcc1f... by Ryan Harper

Rename bcache_verify_bcachedev to bcache_verify

Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:3dbcc1f28ccfdf26cb7e00c6925faf9b6fe261a6
https://jenkins.ubuntu.com/server/job/curtin-ci/9/
Executed test runs:
    None: https://jenkins.ubuntu.com/server/job/admin-lp-git-vote/2088/

review: Approve (continuous-integration)
667d4ac... by Ryan Harper

Add fixes from Chad

Revision history for this message
Chad Smith (chad.smith) wrote :

LGTM!

review: Approve
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:667d4acded9df3e2bb240e5f10a30c6f917840c7
https://jenkins.ubuntu.com/server/job/curtin-ci/10/
Executed test runs:
    None: https://jenkins.ubuntu.com/server/job/admin-lp-git-vote/2089/

review: Approve (continuous-integration)

Preview Diff

1diff --git a/curtin/__init__.py b/curtin/__init__.py
2index 142d288..7114b3b 100644
3--- a/curtin/__init__.py
4+++ b/curtin/__init__.py
5@@ -22,6 +22,8 @@ FEATURES = [
6 'STORAGE_CONFIG_V1',
7 # install supports the 'storage' config version 1 for DD images
8 'STORAGE_CONFIG_V1_DD',
9+ # has separate 'preserve' and 'wipe' config options
10+ 'STORAGE_CONFIG_SEPARATE_PRESERVE_AND_WIPE'
11 # subcommand 'system-install' is present
12 'SUBCOMMAND_SYSTEM_INSTALL',
13 # subcommand 'system-upgrade' is present
14diff --git a/curtin/block/__init__.py b/curtin/block/__init__.py
15index f30c5df..6e5354e 100644
16--- a/curtin/block/__init__.py
17+++ b/curtin/block/__init__.py
18@@ -16,6 +16,9 @@ from curtin.udev import udevadm_settle, udevadm_info
19 from curtin import storage_config
20
21
22+SECTOR_SIZE_BYTES = 512
23+
24+
25 def get_dev_name_entry(devname):
26 """
27 convert device name to path in /dev
28@@ -241,6 +244,73 @@ def _lsblock(args=None):
29 return _lsblock_pairs_to_dict(out)
30
31
32+def _sfdisk_parse(lines):
33+ info = {}
34+ for line in lines:
35+ if ':' not in line:
36+ continue
37+ lhs, _, rhs = line.partition(':')
38+ key = lhs.strip()
39+ value = rhs.strip()
40+ if "," in rhs:
41+ value = dict((item.split('=')
42+ for item in rhs.replace(' ', '').split(',')))
43+ info[key] = value
44+
45+ return info
46+
47+
48+def sfdisk_info(devpath):
49+ ''' returns dict of sfdisk info about disk partitions
50+ {
51+ "/dev/vda1": {
52+ "size": "20744159",
53+ "start": "227328",
54+ "type": "0FC63DAF-8483-4772-8E79-3D69D8477DE4",
55+ "uuid": "29983666-2A66-4F14-8533-7CE13B715462"
56+ },
57+ "device": "/dev/vda",
58+ "first-lba": "34",
59+ "label": "gpt",
60+ "label-id": "E94FCCFE-953D-4D4B-9511-451BBCC17A9A",
61+ "last-lba": "20971486",
62+ "unit": "sectors"
63+ }
64+ '''
65+ (parent, partnum) = get_blockdev_for_partition(devpath)
66+ try:
67+ (out, _err) = util.subp(['sfdisk', '--dump', parent], capture=True)
68+ except util.ProcessExecutionError as e:
69+ LOG.exception(e)
70+ out = ""
71+ return _sfdisk_parse(out.splitlines())
72+
73+
74+def dmsetup_info(devname):
75+ ''' returns dict of info about device mapper dev.
76+
77+ {'blkdevname': 'dm-0',
78+ 'blkdevs_used': 'sda5',
79+ 'name': 'sda5_crypt',
80+ 'subsystem': 'CRYPT',
81+ 'uuid': 'CRYPT-LUKS1-2b370697149743b0b2407d11f88311f1-sda5_crypt'
82+ }
83+ '''
84+ _SEP = '='
85+ fields = ('name,uuid,blkdevname,blkdevs_used,subsystem'.split(','))
86+ try:
87+ (out, _err) = util.subp(['dmsetup', 'info', devname, '-C', '-o',
88+ ','.join(fields), '--noheading',
89+ '--separator', _SEP], capture=True)
90+ except util.ProcessExecutionError as e:
91+ LOG.error('Failed to run dmsetup info:', e)
92+ return {}
93+
94+ values = out.strip().split(_SEP)
95+ info = dict(zip(fields, values))
96+ return info
97+
98+
99 def get_unused_blockdev_info():
100 """
101 return a list of unused block devices.
102@@ -626,6 +696,15 @@ def get_blockdev_sector_size(devpath):
103 return (int(logical), int(physical))
104
105
106+def read_sys_block_size_bytes(device):
107+ """ /sys/class/block/<device>/size and return integer value in bytes"""
108+ device_dir = os.path.join('/sys/class/block', os.path.basename(device))
109+ blockdev_size = os.path.join(device_dir, 'size')
110+ with open(blockdev_size) as d:
111+ size = int(d.read().strip()) * SECTOR_SIZE_BYTES
112+ return size
113+
114+
115 def get_volume_uuid(path):
116 """
117 Get uuid of disk with given path. This address uniquely identifies
118diff --git a/curtin/block/bcache.py b/curtin/block/bcache.py
119index c31852e..188b4e0 100644
120--- a/curtin/block/bcache.py
121+++ b/curtin/block/bcache.py
122@@ -2,13 +2,16 @@
123
124 import errno
125 import os
126+import time
127
128 from curtin import util
129 from curtin.log import LOG
130-from . import sys_block_path
131+from curtin.udev import udevadm_settle
132+from . import dev_path, sys_block_path
133
134 # Wait up to 20 minutes (150 + 300 + 750 = 1200 seconds)
135 BCACHE_RETRIES = [sleep for nap in [1, 2, 5] for sleep in [nap] * 150]
136+BCACHE_REGISTRATION_RETRY = [0.2] * 60
137
138
139 def superblock_asdict(device=None, data=None):
140@@ -163,6 +166,15 @@ def get_cacheset_cachedev(cset_uuid):
141 return None
142
143
144+def attach_backing_to_cacheset(backing_device, cache_device, cset_uuid):
145+ LOG.info("Attaching backing device to cacheset: "
146+ "{} -> {} cset.uuid: {}".format(backing_device, cache_device,
147+ cset_uuid))
148+ backing_device_sysfs = sys_block_path(backing_device)
149+ attach = os.path.join(backing_device_sysfs, "bcache", "attach")
150+ util.write_file(attach, cset_uuid, mode=None)
151+
152+
153 def get_backing_device(bcache_kname):
154 """ For a given bcacheN kname, return the backing device
155 bcache sysfs dir.
156@@ -263,4 +275,233 @@ def _stop_device(device):
157 util.wait_for_removal(bcache_stop, retries=BCACHE_RETRIES)
158
159
160+def register_bcache(bcache_device):
161+ LOG.debug('register_bcache: %s > /sys/fs/bcache/register', bcache_device)
162+ util.write_file('/sys/fs/bcache/register', bcache_device, mode=None)
163+
164+
165+def set_cache_mode(bcache_dev, cache_mode):
166+ LOG.info("Setting cache_mode on {} to {}".format(bcache_dev, cache_mode))
167+ cache_mode_file = '/sys/block/{}/bcache/cache_mode'.format(bcache_dev)
168+ util.write_file(cache_mode_file, cache_mode, mode=None)
169+
170+
171+def validate_bcache_ready(bcache_device, bcache_sys_path):
172+ """ check if bcache is ready, dump info
173+
174+ For cache devices, we expect to find a cacheN symlink
175+ which will point to the underlying cache device; Find
176+ this symlink, read it and compare bcache_device
177+ specified in the parameters.
178+
179+ For backing devices, we expec to find a dev symlink
180+ pointing to the bcacheN device to which the backing
181+ device is enslaved. From the dev symlink, we can
182+ read the bcacheN holders list, which should contain
183+ the backing device kname.
184+
185+ In either case, if we fail to find the correct
186+ symlinks in sysfs, this method will raise
187+ an OSError indicating the missing attribute.
188+ """
189+ # cacheset
190+ # /sys/fs/bcache/<uuid>
191+
192+ # cache device
193+ # /sys/class/block/<cdev>/bcache/set -> # .../fs/bcache/uuid
194+
195+ # backing
196+ # /sys/class/block/<bdev>/bcache/cache -> # .../block/bcacheN
197+ # /sys/class/block/<bdev>/bcache/dev -> # .../block/bcacheN
198+
199+ if bcache_sys_path.startswith('/sys/fs/bcache'):
200+ LOG.debug("validating bcache caching device '%s' from sys_path"
201+ " '%s'", bcache_device, bcache_sys_path)
202+ # we expect a cacheN symlink to point to bcache_device/bcache
203+ sys_path_links = [os.path.join(bcache_sys_path, l)
204+ for l in os.listdir(bcache_sys_path)]
205+ cache_links = [l for l in sys_path_links
206+ if os.path.islink(l) and (
207+ os.path.basename(l).startswith('cache'))]
208+
209+ if len(cache_links) == 0:
210+ msg = ('Failed to find any cache links in %s:%s' % (
211+ bcache_sys_path, sys_path_links))
212+ raise OSError(msg)
213+
214+ for link in cache_links:
215+ target = os.readlink(link)
216+ LOG.debug('Resolving symlink %s -> %s', link, target)
217+ # cacheN -> ../../../devices/.../<bcache_device>/bcache
218+ # basename(dirname(readlink(link)))
219+ target_cache_device = os.path.basename(
220+ os.path.dirname(target))
221+ if os.path.basename(bcache_device) == target_cache_device:
222+ LOG.debug('Found match: bcache_device=%s target_device=%s',
223+ bcache_device, target_cache_device)
224+ return
225+ else:
226+ msg = ('Cache symlink %s ' % target_cache_device +
227+ 'points to incorrect device: %s' % bcache_device)
228+ raise OSError(msg)
229+ elif bcache_sys_path.startswith('/sys/class/block'):
230+ LOG.debug("validating bcache backing device '%s' from sys_path"
231+ " '%s'", bcache_device, bcache_sys_path)
232+ # we expect a 'dev' symlink to point to the bcacheN device
233+ bcache_dev = os.path.join(bcache_sys_path, 'dev')
234+ if os.path.islink(bcache_dev):
235+ bcache_dev_link = (
236+ os.path.basename(os.readlink(bcache_dev)))
237+ LOG.debug('bcache device %s using bcache kname: %s',
238+ bcache_sys_path, bcache_dev_link)
239+
240+ bcache_slaves_path = os.path.join(bcache_dev, 'slaves')
241+ slaves = os.listdir(bcache_slaves_path)
242+ LOG.debug('bcache device %s has slaves: %s',
243+ bcache_sys_path, slaves)
244+ if os.path.basename(bcache_device) in slaves:
245+ LOG.debug('bcache device %s found in slaves',
246+ os.path.basename(bcache_device))
247+ return
248+ else:
249+ msg = ('Failed to find bcache device %s' % bcache_device +
250+ 'in slaves list %s' % slaves)
251+ raise OSError(msg)
252+ else:
253+ msg = 'didnt find "dev" attribute on: %s', bcache_dev
254+ return OSError(msg)
255+
256+ else:
257+ LOG.debug("Failed to validate bcache device '%s' from sys_path"
258+ " '%s'", bcache_device, bcache_sys_path)
259+ msg = ('sysfs path %s does not appear to be a bcache device' %
260+ bcache_sys_path)
261+ return ValueError(msg)
262+
263+
264+def ensure_bcache_is_registered(bcache_device, expected, retry=None):
265+ """ Test that bcache_device is found at an expected path and
266+ re-register the device if it's not ready.
267+
268+ Retry the validation and registration as needed.
269+ """
270+ if not retry:
271+ retry = BCACHE_REGISTRATION_RETRY
272+
273+ for attempt, wait in enumerate(retry):
274+ # find the actual bcache device name via sysfs using the
275+ # backing device's holders directory.
276+ LOG.debug('check just created bcache %s if it is registered,'
277+ ' try=%s', bcache_device, attempt + 1)
278+ try:
279+ udevadm_settle()
280+ if os.path.exists(expected):
281+ LOG.debug('Found bcache dev %s at expected path %s',
282+ bcache_device, expected)
283+ validate_bcache_ready(bcache_device, expected)
284+ else:
285+ msg = 'bcache device path not found: %s' % expected
286+ LOG.debug(msg)
287+ raise ValueError(msg)
288+
289+ # if bcache path exists and holders are > 0 we can return
290+ LOG.debug('bcache dev %s at path %s successfully registered'
291+ ' on attempt %s/%s', bcache_device, expected,
292+ attempt + 1, len(retry))
293+ return
294+
295+ except (OSError, IndexError, ValueError):
296+ # Some versions of bcache-tools will register the bcache device
297+ # as soon as we run make-bcache using udev rules, so wait for
298+ # udev to settle, then try to locate the dev, on older versions
299+ # we need to register it manually though
300+ LOG.debug('bcache device was not registered, registering %s '
301+ 'at /sys/fs/bcache/register', bcache_device)
302+ try:
303+ register_bcache(bcache_device)
304+ except IOError:
305+ # device creation is notoriously racy and this can trigger
306+ # "Invalid argument" IOErrors if it got created in "the
307+ # meantime" - just restart the function a few times to
308+ # check it all again
309+ pass
310+
311+ LOG.debug("bcache dev %s not ready, waiting %ss",
312+ bcache_device, wait)
313+ time.sleep(wait)
314+
315+ # we've exhausted our retries
316+ LOG.warning('Repetitive error registering the bcache dev %s',
317+ bcache_device)
318+ raise RuntimeError("bcache device %s can't be registered" %
319+ bcache_device)
320+
321+
322+def create_cache_device(cache_device):
323+ # /sys/class/block/XXX/YYY/
324+ cache_device_sysfs = sys_block_path(cache_device)
325+
326+ if os.path.exists(os.path.join(cache_device_sysfs, "bcache")):
327+ LOG.debug('caching device already exists at {}/bcache. Read '
328+ 'cset.uuid'.format(cache_device_sysfs))
329+ (out, err) = util.subp(["bcache-super-show", cache_device],
330+ capture=True)
331+ LOG.debug('bcache-super-show=[{}]'.format(out))
332+ [cset_uuid] = [line.split()[-1] for line in out.split("\n")
333+ if line.startswith('cset.uuid')]
334+ else:
335+ LOG.debug('caching device does not yet exist at {}/bcache. Make '
336+ 'cache and get uuid'.format(cache_device_sysfs))
337+ # make the cache device, extracting cacheset uuid
338+ (out, err) = util.subp(["make-bcache", "-C", cache_device],
339+ capture=True)
340+ LOG.debug('out=[{}]'.format(out))
341+ [cset_uuid] = [line.split()[-1] for line in out.split("\n")
342+ if line.startswith('Set UUID:')]
343+
344+ target_sysfs_path = '/sys/fs/bcache/%s' % cset_uuid
345+ ensure_bcache_is_registered(cache_device, target_sysfs_path)
346+ return cset_uuid
347+
348+
349+def create_backing_device(backing_device, cache_device, cache_mode, cset_uuid):
350+ backing_device_sysfs = sys_block_path(backing_device)
351+ target_sysfs_path = os.path.join(backing_device_sysfs, "bcache")
352+
353+ # there should not be any pre-existing bcache device
354+ bdir = os.path.join(backing_device_sysfs, "bcache")
355+ if os.path.exists(bdir):
356+ raise RuntimeError(
357+ 'Unexpected old bcache device: %s', backing_device)
358+
359+ LOG.debug('Creating a backing device on %s', backing_device)
360+ util.subp(["make-bcache", "-B", backing_device])
361+ ensure_bcache_is_registered(backing_device, target_sysfs_path)
362+
363+ # via the holders we can identify which bcache device we just created
364+ # for a given backing device
365+ from .clear_holders import get_holders
366+ holders = get_holders(backing_device)
367+ if len(holders) != 1:
368+ err = ('Invalid number {} of holding devices:'
369+ ' "{}"'.format(len(holders), holders))
370+ LOG.error(err)
371+ raise ValueError(err)
372+ [bcache_dev] = holders
373+ LOG.debug('The just created bcache device is {}'.format(holders))
374+
375+ if cache_device:
376+ # if we specify both then we need to attach backing to cache
377+ if cset_uuid:
378+ attach_backing_to_cacheset(backing_device, cache_device, cset_uuid)
379+ else:
380+ msg = "Invalid cset_uuid: {}".format(cset_uuid)
381+ LOG.error(msg)
382+ raise ValueError(msg)
383+
384+ if cache_mode:
385+ set_cache_mode(bcache_dev, cache_mode)
386+ return dev_path(bcache_dev)
387+
388+
389 # vi: ts=4 expandtab syntax=python
390diff --git a/curtin/block/lvm.py b/curtin/block/lvm.py
391index b3f8bcb..3583f95 100644
392--- a/curtin/block/lvm.py
393+++ b/curtin/block/lvm.py
394@@ -13,12 +13,14 @@ import os
395 _SEP = '='
396
397
398-def _filter_lvm_info(lvtool, match_field, query_field, match_key):
399+def _filter_lvm_info(lvtool, match_field, query_field, match_key, args=None):
400 """
401 filter output of pv/vg/lvdisplay tools
402 """
403+ if args is None:
404+ args = []
405 (out, _) = util.subp([lvtool, '-C', '--separator', _SEP, '--noheadings',
406- '-o', ','.join([match_field, query_field])],
407+ '-o', ','.join([match_field, query_field])] + args,
408 capture=True)
409 return [qf for (mf, qf) in
410 [l.strip().split(_SEP) for l in out.strip().splitlines()]
411@@ -39,6 +41,14 @@ def get_lvols_in_volgroup(vg_name):
412 return _filter_lvm_info('lvdisplay', 'vg_name', 'lv_name', vg_name)
413
414
415+def get_lv_size_bytes(lv_name):
416+ """ get the size in bytes of a logical volume specified by lv_name."""
417+ result = _filter_lvm_info('lvdisplay', 'lv_name', 'lv_size', lv_name,
418+ args=['--units=B'])
419+ if result:
420+ return util.human2bytes(result[0])
421+
422+
423 def split_lvm_name(full):
424 """
425 split full lvm name into tuple of (volgroup, lv_name)
426diff --git a/curtin/block/schemas.py b/curtin/block/schemas.py
427index 34c4400..a9ed605 100644
428--- a/curtin/block/schemas.py
429+++ b/curtin/block/schemas.py
430@@ -65,6 +65,7 @@ BCACHE = {
431 'backing_device': {'$ref': '#/definitions/ref_id'},
432 'cache_device': {'$ref': '#/definitions/ref_id'},
433 'name': {'$ref': '#/definitions/name'},
434+ 'preserve': {'$ref': '#/definitions/preserve'},
435 'type': {'const': 'bcache'},
436 'cache_mode': {
437 'type': ['string'],
438@@ -166,6 +167,7 @@ DM_CRYPT = {
439 'volume': {'$ref': '#/definitions/ref_id'},
440 'key': {'$ref': '#/definitions/id'},
441 'keyfile': {'$ref': '#/definitions/id'},
442+ 'preserve': {'$ref': '#/definitions/preserve'},
443 'type': {'const': 'dm_crypt'},
444 },
445 }
446@@ -201,6 +203,7 @@ LVM_PARTITION = {
447 'properties': {
448 'id': {'$ref': '#/definitions/id'},
449 'name': {'$ref': '#/definitions/name'},
450+ 'preserve': {'type': 'boolean'},
451 'size': {'$ref': '#/definitions/size'}, # XXX: This is not used
452 'type': {'const': 'lvm_partition'},
453 'volgroup': {'$ref': '#/definitions/ref_id'},
454@@ -219,6 +222,7 @@ LVM_VOLGROUP = {
455 'id': {'$ref': '#/definitions/id'},
456 'devices': {'$ref': '#/definitions/devices'},
457 'name': {'$ref': '#/definitions/name'},
458+ 'preserve': {'type': 'boolean'},
459 'uuid': {'$ref': '#/definitions/uuid'}, # XXX: This is not used
460 'type': {'const': 'lvm_volgroup'},
461 },
462diff --git a/curtin/commands/block_meta.py b/curtin/commands/block_meta.py
463index c48ddbf..b0dcb81 100644
464--- a/curtin/commands/block_meta.py
465+++ b/curtin/commands/block_meta.py
466@@ -8,7 +8,8 @@ from curtin.block import (bcache, clear_holders, dasd, iscsi, lvm, mdadm, mkfs,
467 from curtin import distro
468 from curtin.log import LOG, logged_time
469 from curtin.reporter import events
470-from curtin.storage_config import extract_storage_ordered_dict
471+from curtin.storage_config import (extract_storage_ordered_dict,
472+ ptable_uuid_to_flag_entry)
473
474
475 from . import populate_one_subcmd
476@@ -32,9 +33,20 @@ FstabData.__new__.__defaults__ = (None, None, None, "", "0", "0", None)
477 SIMPLE = 'simple'
478 SIMPLE_BOOT = 'simple-boot'
479 CUSTOM = 'custom'
480-BCACHE_REGISTRATION_RETRY = [0.2] * 60
481 PTABLE_UNSUPPORTED = schemas._ptable_unsupported
482
483+SGDISK_FLAGS = {
484+ "boot": 'ef00',
485+ "lvm": '8e00',
486+ "raid": 'fd00',
487+ "bios_grub": 'ef02',
488+ "prep": '4100',
489+ "swap": '8200',
490+ "home": '8302',
491+ "linux": '8300'
492+}
493+
494+
495 CMD_ARGUMENTS = (
496 ((('-D', '--devices'),
497 {'help': 'which devices to operate on', 'action': 'append',
498@@ -507,9 +519,11 @@ def dasd_handler(info, storage_config):
499 disk_layout = info.get('disk_layout')
500 label = info.get('label')
501 mode = info.get('mode')
502+ force_format = config.value_as_boolean(info.get('wipe'))
503
504 dasd_device = dasd.DasdDevice(device_id)
505- if dasd_device.needs_formatting(blocksize, disk_layout, label):
506+ if (force_format or dasd_device.needs_formatting(blocksize,
507+ disk_layout, label)):
508 if config.value_as_boolean(info.get('preserve')):
509 raise ValueError(
510 "dasd '%s' does not match configured properties and"
511@@ -615,6 +629,51 @@ def find_extended_partition(part_device, storage_config):
512 return item_id
513
514
515+def verify_exists(devpath):
516+ LOG.debug('Verifying %s exists', devpath)
517+ if not os.path.exists(devpath):
518+ raise RuntimeError("Device %s does not exist" % devpath)
519+
520+
521+def verify_size(devpath, expected_size_bytes):
522+ found_size_bytes = block.read_sys_block_size_bytes(devpath)
523+ msg = (
524+ 'Verifying %s size, expecting %s bytes, found %s bytes' % (
525+ devpath, expected_size_bytes, found_size_bytes))
526+ LOG.debug(msg)
527+ if expected_size_bytes != found_size_bytes:
528+ raise RuntimeError(msg)
529+
530+
531+def verify_ptable_flag(devpath, expected_flag):
532+ if not SGDISK_FLAGS.get(expected_flag):
533+ raise RuntimeError(
534+ 'Cannot verify unknown partition flag: %s', expected_flag)
535+
536+ info = block.sfdisk_info(devpath)
537+ if devpath not in info:
538+ raise RuntimeError('Device %s not present in sfdisk dump:\n%s' %
539+ devpath, util.json_dumps(info))
540+
541+ entry = info[devpath]
542+ LOG.debug("Device %s ptable entry: %s", devpath, util.json_dumps(entry))
543+ (found_flag, code) = ptable_uuid_to_flag_entry(entry['type'])
544+ msg = (
545+ 'Verifying %s partition flag, expecting %s, found %s' % (
546+ devpath, expected_flag, found_flag))
547+ LOG.debug(msg)
548+ if expected_flag != found_flag:
549+ raise RuntimeError(msg)
550+
551+
552+def partition_verify(devpath, info):
553+ verify_exists(devpath)
554+ verify_size(devpath, int(util.human2bytes(info['size'])))
555+ expected_flag = info.get('flag')
556+ if expected_flag:
557+ verify_ptable_flag(devpath, info['flag'])
558+
559+
560 def partition_handler(info, storage_config):
561 device = info.get('device')
562 size = info.get('size')
563@@ -716,86 +775,83 @@ def partition_handler(info, storage_config):
564 length_sectors = length_sectors + (logdisks * alignment_offset)
565
566 # Handle preserve flag
567+ create_partition = True
568 if config.value_as_boolean(info.get('preserve')):
569- return
570- elif config.value_as_boolean(storage_config.get(device).get('preserve')):
571- raise NotImplementedError("Partition '%s' is not marked to be \
572- preserved, but device '%s' is. At this time, preserving devices \
573- but not also the partitions on the devices is not supported, \
574- because of the possibility of damaging partitions intended to be \
575- preserved." % (info.get('id'), device))
576-
577- # Set flag
578- # 'sgdisk --list-types'
579- sgdisk_flags = {"boot": 'ef00',
580- "lvm": '8e00',
581- "raid": 'fd00',
582- "bios_grub": 'ef02',
583- "prep": '4100',
584- "swap": '8200',
585- "home": '8302',
586- "linux": '8300'}
587-
588- LOG.info("adding partition '%s' to disk '%s' (ptable: '%s')",
589- info.get('id'), device, disk_ptable)
590- LOG.debug("partnum: %s offset_sectors: %s length_sectors: %s",
591- partnumber, offset_sectors, length_sectors)
592-
593- # Wipe the partition if told to do so, do not wipe dos extended partitions
594- # as this may damage the extended partition table
595- if config.value_as_boolean(info.get('wipe')):
596- LOG.info("Preparing partition location on disk %s", disk)
597- if info.get('flag') == "extended":
598- LOG.warn("extended partitions do not need wiping, so skipping: "
599- "'%s'" % info.get('id'))
600- else:
601- # wipe the start of the new partition first by zeroing 1M at the
602- # length of the previous partition
603- wipe_offset = int(offset_sectors * logical_block_size_bytes)
604- LOG.debug('Wiping 1M on %s at offset %s', disk, wipe_offset)
605- # We don't require exclusive access as we're wiping data at an
606- # offset and the current holder maybe part of the current storage
607- # configuration.
608- block.zero_file_at_offsets(disk, [wipe_offset], exclusive=False)
609-
610- if disk_ptable == "msdos":
611- if flag and flag == 'prep':
612- raise ValueError('PReP partitions require a GPT partition table')
613-
614- if flag in ["extended", "logical", "primary"]:
615- partition_type = flag
616+ part_path = block.dev_path(
617+ block.partition_kname(disk_kname, partnumber))
618+ partition_verify(part_path, info)
619+ LOG.debug('Partition %s already present, skipping create', part_path)
620+ create_partition = False
621+
622+ if create_partition:
623+ # Set flag
624+ # 'sgdisk --list-types'
625+ LOG.info("adding partition '%s' to disk '%s' (ptable: '%s')",
626+ info.get('id'), device, disk_ptable)
627+ LOG.debug("partnum: %s offset_sectors: %s length_sectors: %s",
628+ partnumber, offset_sectors, length_sectors)
629+
630+ # Pre-Wipe the partition if told to do so, do not wipe dos extended
631+ # partitions as this may damage the extended partition table
632+ if config.value_as_boolean(info.get('wipe')):
633+ LOG.info("Preparing partition location on disk %s", disk)
634+ if info.get('flag') == "extended":
635+ LOG.warn("extended partitions do not need wiping, "
636+ "so skipping: '%s'" % info.get('id'))
637+ else:
638+ # wipe the start of the new partition first by zeroing 1M at
639+ # the length of the previous partition
640+ wipe_offset = int(offset_sectors * logical_block_size_bytes)
641+ LOG.debug('Wiping 1M on %s at offset %s', disk, wipe_offset)
642+ # We don't require exclusive access as we're wiping data at an
643+ # offset and the current holder maybe part of the current
644+ # storage configuration.
645+ block.zero_file_at_offsets(disk, [wipe_offset],
646+ exclusive=False)
647+
648+ if disk_ptable == "msdos":
649+ if flag and flag == 'prep':
650+ raise ValueError(
651+ 'PReP partitions require a GPT partition table')
652+
653+ if flag in ["extended", "logical", "primary"]:
654+ partition_type = flag
655+ else:
656+ partition_type = "primary"
657+ cmd = ["parted", disk, "--script", "mkpart", partition_type,
658+ "%ss" % offset_sectors, "%ss" % str(offset_sectors +
659+ length_sectors)]
660+ util.subp(cmd, capture=True)
661+ elif disk_ptable == "gpt":
662+ if flag and flag in SGDISK_FLAGS:
663+ typecode = SGDISK_FLAGS[flag]
664+ else:
665+ typecode = SGDISK_FLAGS['linux']
666+ cmd = ["sgdisk", "--new", "%s:%s:%s" % (partnumber, offset_sectors,
667+ length_sectors + offset_sectors),
668+ "--typecode=%s:%s" % (partnumber, typecode), disk]
669+ util.subp(cmd, capture=True)
670+ elif disk_ptable == "vtoc":
671+ disk_device_id = storage_config.get(device).get('device_id')
672+ dasd_device = dasd.DasdDevice(disk_device_id)
673+ dasd_device.partition(partnumber, length_bytes)
674 else:
675- partition_type = "primary"
676- cmd = ["parted", disk, "--script", "mkpart", partition_type,
677- "%ss" % offset_sectors, "%ss" % str(offset_sectors +
678- length_sectors)]
679- util.subp(cmd, capture=True)
680- elif disk_ptable == "gpt":
681- if flag and flag in sgdisk_flags:
682- typecode = sgdisk_flags[flag]
683+ raise ValueError("parent partition has invalid partition table")
684+
685+ # ensure partition exists
686+ part_path = block.dev_path(block.partition_kname(disk_kname,
687+ partnumber))
688+ block.rescan_block_devices([disk])
689+ udevadm_settle(exists=part_path)
690+
691+ wipe_mode = info.get('wipe')
692+ if wipe_mode:
693+ if wipe_mode == 'superblock' and create_partition:
694+ # partition creation pre-wipes partition superblock locations
695+ pass
696 else:
697- typecode = sgdisk_flags['linux']
698- cmd = ["sgdisk", "--new", "%s:%s:%s" % (partnumber, offset_sectors,
699- length_sectors + offset_sectors),
700- "--typecode=%s:%s" % (partnumber, typecode), disk]
701- util.subp(cmd, capture=True)
702- elif disk_ptable == "vtoc":
703- disk_device_id = storage_config.get(device).get('device_id')
704- dasd_device = dasd.DasdDevice(disk_device_id)
705- dasd_device.partition(partnumber, length_bytes)
706- else:
707- raise ValueError("parent partition has invalid partition table")
708-
709- # ensure partition exists
710- part_path = block.dev_path(block.partition_kname(disk_kname, partnumber))
711- block.rescan_block_devices([disk])
712- udevadm_settle(exists=part_path)
713-
714- # wipe the created partition if needed, superblocks have already been wiped
715- wipe_mode = info.get('wipe', 'superblock')
716- if wipe_mode != 'superblock':
717- LOG.debug('Wiping partition %s mode=%s', part_path, wipe_mode)
718- block.wipe_volume(part_path, mode=wipe_mode, exclusive=False)
719+ LOG.debug('Wiping partition %s mode=%s', part_path, wipe_mode)
720+ block.wipe_volume(part_path, mode=wipe_mode, exclusive=False)
721
722 # Make the name if needed
723 if storage_config.get(device).get('name') and partition_type != 'extended':
724@@ -1041,10 +1097,28 @@ def mount_handler(info, storage_config):
725 target=state.get('target'), fstab=state.get('fstab'))
726
727
728+def verify_volgroup_members(vg_name, pv_paths):
729+ # LVM may be offline, so start it
730+ lvm.activate_volgroups()
731+ # Verify that volgroup exists and contains all specified devices
732+ found_pvs = set(lvm.get_pvols_in_volgroup(vg_name))
733+ expected_pvs = set(pv_paths)
734+ msg = ('Verifying lvm volgroup %s members, expected %s, found %s ' % (
735+ vg_name, expected_pvs, found_pvs))
736+ LOG.debug(msg)
737+ if expected_pvs != found_pvs:
738+ raise RuntimeError(msg)
739+
740+
741+def lvm_volgroup_verify(vg_name, device_paths):
742+ verify_volgroup_members(vg_name, device_paths)
743+
744+
745 def lvm_volgroup_handler(info, storage_config):
746 devices = info.get('devices')
747 device_paths = []
748 name = info.get('name')
749+ preserve = config.value_as_boolean(info.get('preserve'))
750 if not devices:
751 raise ValueError("devices for volgroup '%s' must be specified" %
752 info.get('id'))
753@@ -1059,16 +1133,13 @@ def lvm_volgroup_handler(info, storage_config):
754 device_paths.append(get_path_to_storage_volume(device_id,
755 storage_config))
756
757- # Handle preserve flag
758- if config.value_as_boolean(info.get('preserve')):
759- # LVM will probably be offline, so start it
760- util.subp(["vgchange", "-a", "y"])
761- # Verify that volgroup exists and contains all specified devices
762- if set(lvm.get_pvols_in_volgroup(name)) != set(device_paths):
763- raise ValueError("volgroup '%s' marked to be preserved, but does "
764- "not exist or does not contain the right "
765- "physical volumes" % info.get('id'))
766- else:
767+ create_vg = True
768+ if preserve:
769+ lvm_volgroup_verify(name, device_paths)
770+ LOG.debug('lvm_volgroup %s already present, skipping create', name)
771+ create_vg = False
772+
773+ if create_vg:
774 # Create vgrcreate command and run
775 # capture output to avoid printing it to log
776 # Use zero to clear target devices of any metadata
777@@ -1079,9 +1150,34 @@ def lvm_volgroup_handler(info, storage_config):
778 lvm.lvm_scan()
779
780
781+def verify_lv_in_vg(lv_name, vg_name):
782+ found_lvols = lvm.get_lvols_in_volgroup(vg_name)
783+ msg = ('Verifying %s logical volume is in %s volume '
784+ 'group, found %s ' % (lv_name, vg_name, found_lvols))
785+ LOG.debug(msg)
786+ if lv_name not in found_lvols:
787+ raise RuntimeError(msg)
788+
789+
790+def verify_lv_size(lv_name, size):
791+ expected_size_bytes = util.human2bytes(size)
792+ found_size_bytes = lvm.get_lv_size_bytes(lv_name)
793+ msg = ('Verifying %s logical value is size bytes %s, found %s '
794+ % (lv_name, expected_size_bytes, found_size_bytes))
795+ LOG.debug(msg)
796+ if expected_size_bytes != found_size_bytes:
797+ raise RuntimeError(msg)
798+
799+
800+def lvm_partition_verify(lv_name, vg_name, info):
801+ verify_lv_in_vg(lv_name, vg_name)
802+ if 'size' in info:
803+ verify_lv_size(lv_name, info['size'])
804+
805+
806 def lvm_partition_handler(info, storage_config):
807- volgroup = storage_config.get(info.get('volgroup')).get('name')
808- name = info.get('name')
809+ volgroup = storage_config[info['volgroup']]['name']
810+ name = info['name']
811 if not volgroup:
812 raise ValueError("lvm volgroup for lvm partition must be specified")
813 if not name:
814@@ -1089,21 +1185,15 @@ def lvm_partition_handler(info, storage_config):
815 if info.get('ptable'):
816 raise ValueError("Partition tables on top of lvm logical volumes is "
817 "not supported")
818+ preserve = config.value_as_boolean(info.get('preserve'))
819
820- # Handle preserve flag
821- if config.value_as_boolean(info.get('preserve')):
822- if name not in lvm.get_lvols_in_volgroup(volgroup):
823- raise ValueError("lvm partition '%s' marked to be preserved, but "
824- "does not exist or does not mach storage "
825- "configuration" % info.get('id'))
826- elif storage_config.get(info.get('volgroup')).get('preserve'):
827- raise NotImplementedError(
828- "Lvm Partition '%s' is not marked to be preserved, but volgroup "
829- "'%s' is. At this time, preserving volgroups but not also the lvm "
830- "partitions on the volgroup is not supported, because of the "
831- "possibility of damaging lvm partitions intended to be "
832- "preserved." % (info.get('id'), volgroup))
833- else:
834+ create_lv = True
835+ if preserve:
836+ lvm_partition_verify(name, volgroup, info)
837+ LOG.debug('lvm_partition %s already present, skipping create', name)
838+ create_lv = False
839+
840+ if create_lv:
841 # Use 'wipesignatures' (if available) and 'zero' to clear target lv
842 # of any fs metadata
843 cmd = ["lvcreate", volgroup, "--name", name, "--zero=y"]
844@@ -1122,7 +1212,29 @@ def lvm_partition_handler(info, storage_config):
845 # refresh lvmetad
846 lvm.lvm_scan()
847
848- make_dname(info.get('id'), storage_config)
849+ wipe_mode = info.get('wipe', 'superblock')
850+ if wipe_mode and create_lv:
851+ lv_path = get_path_to_storage_volume(info['id'], storage_config)
852+ LOG.debug('Wiping logical volume %s mode=%s', lv_path, wipe_mode)
853+ block.wipe_volume(lv_path, mode=wipe_mode, exclusive=False)
854+
855+ make_dname(info['id'], storage_config)
856+
857+
858+def verify_blkdev_used(dmcrypt_dev, expected_blkdev):
859+ dminfo = block.dmsetup_info(dmcrypt_dev)
860+ found_blkdev = dminfo['blkdevs_used']
861+ msg = (
862+ 'Verifying %s volume, expecting %s , found %s ' % (
863+ dmcrypt_dev, expected_blkdev, found_blkdev))
864+ LOG.debug(msg)
865+ if expected_blkdev != found_blkdev:
866+ raise RuntimeError(msg)
867+
868+
869+def dm_crypt_verify(dmcrypt_dev, volume_path):
870+ verify_exists(dmcrypt_dev)
871+ verify_blkdev_used(dmcrypt_dev, volume_path)
872
873
874 def dm_crypt_handler(info, storage_config):
875@@ -1131,6 +1243,8 @@ def dm_crypt_handler(info, storage_config):
876 keysize = info.get('keysize')
877 cipher = info.get('cipher')
878 dm_name = info.get('dm_name')
879+ dmcrypt_dev = os.path.join("/dev", "mapper", dm_name)
880+ preserve = config.value_as_boolean(info.get('preserve'))
881 if not volume:
882 raise ValueError("volume for cryptsetup to operate on must be \
883 specified")
884@@ -1154,51 +1268,68 @@ def dm_crypt_handler(info, storage_config):
885 else:
886 raise ValueError("encryption key or keyfile must be specified")
887
888- # if zkey is available, attempt to generate and use it; if it's not
889- # available or fails to setup properly, fallback to normal cryptsetup
890- # passing strict=False downgrades log messages to warnings
891- zkey_used = None
892- if block.zkey_supported(strict=False):
893- volume_name = "%s:%s" % (volume_byid_path, dm_name)
894- LOG.debug('Attempting to setup zkey for %s', volume_name)
895- luks_type = 'luks2'
896- gen_cmd = ['zkey', 'generate', '--xts', '--volume-type', luks_type,
897- '--sector-size', '4096', '--name', dm_name,
898- '--description',
899- "curtin generated zkey for %s" % volume_name,
900- '--volumes', volume_name]
901- run_cmd = ['zkey', 'cryptsetup', '--run', '--volumes',
902- volume_byid_path, '--batch-mode', '--key-file', keyfile]
903- try:
904- util.subp(gen_cmd, capture=True)
905- util.subp(run_cmd, capture=True)
906- zkey_used = os.path.join(os.path.split(state['fstab'])[0],
907- "zkey_used")
908- # mark in state that we used zkey
909- util.write_file(zkey_used, "1")
910- except util.ProcessExecutionError as e:
911- LOG.exception(e)
912- msg = 'Setup of zkey on %s failed, fallback to cryptsetup.'
913- LOG.error(msg % volume_path)
914-
915- if not zkey_used:
916- LOG.debug('Using cryptsetup on %s', volume_path)
917- luks_type = "luks"
918- cmd = ["cryptsetup"]
919- if cipher:
920- cmd.extend(["--cipher", cipher])
921- if keysize:
922- cmd.extend(["--key-size", keysize])
923- cmd.extend(["luksFormat", volume_path, keyfile])
924- util.subp(cmd)
925+ create_dmcrypt = True
926+ if preserve:
927+ dm_crypt_verify(dmcrypt_dev, volume_path)
928+ LOG.debug('dm_crypt %s already present, skipping create', dmcrypt_dev)
929+ create_dmcrypt = False
930+
931+ if create_dmcrypt:
932+ # if zkey is available, attempt to generate and use it; if it's not
933+ # available or fails to setup properly, fallback to normal cryptsetup
934+ # passing strict=False downgrades log messages to warnings
935+ zkey_used = None
936+ if block.zkey_supported(strict=False):
937+ volume_name = "%s:%s" % (volume_byid_path, dm_name)
938+ LOG.debug('Attempting to setup zkey for %s', volume_name)
939+ luks_type = 'luks2'
940+ gen_cmd = ['zkey', 'generate', '--xts', '--volume-type', luks_type,
941+ '--sector-size', '4096', '--name', dm_name,
942+ '--description',
943+ "curtin generated zkey for %s" % volume_name,
944+ '--volumes', volume_name]
945+ run_cmd = ['zkey', 'cryptsetup', '--run', '--volumes',
946+ volume_byid_path, '--batch-mode', '--key-file', keyfile]
947+ try:
948+ util.subp(gen_cmd, capture=True)
949+ util.subp(run_cmd, capture=True)
950+ zkey_used = os.path.join(os.path.split(state['fstab'])[0],
951+ "zkey_used")
952+ # mark in state that we used zkey
953+ util.write_file(zkey_used, "1")
954+ except util.ProcessExecutionError as e:
955+ LOG.exception(e)
956+ msg = 'Setup of zkey on %s failed, fallback to cryptsetup.'
957+ LOG.error(msg % volume_path)
958+
959+ if not zkey_used:
960+ LOG.debug('Using cryptsetup on %s', volume_path)
961+ luks_type = "luks"
962+ cmd = ["cryptsetup"]
963+ if cipher:
964+ cmd.extend(["--cipher", cipher])
965+ if keysize:
966+ cmd.extend(["--key-size", keysize])
967+ cmd.extend(["luksFormat", volume_path, keyfile])
968+ util.subp(cmd)
969+
970+ cmd = ["cryptsetup", "open", "--type", luks_type, volume_path, dm_name,
971+ "--key-file", keyfile]
972
973- cmd = ["cryptsetup", "open", "--type", luks_type, volume_path, dm_name,
974- "--key-file", keyfile]
975+ util.subp(cmd)
976
977- util.subp(cmd)
978+ if keyfile_is_tmp:
979+ os.remove(keyfile)
980
981- if keyfile_is_tmp:
982- os.remove(keyfile)
983+ wipe_mode = info.get('wipe')
984+ if wipe_mode:
985+ if wipe_mode == 'superblock' and create_dmcrypt:
986+ # newly created dmcrypt volumes do not need superblock wiping
987+ pass
988+ else:
989+ LOG.debug('Wiping dm_crypt device %s mode=%s',
990+ dmcrypt_dev, wipe_mode)
991+ block.wipe_volume(dmcrypt_dev, mode=wipe_mode, exclusive=False)
992
993 # A crypttab will be created in the same directory as the fstab in the
994 # configuration. This will then be copied onto the system later
995@@ -1214,12 +1345,33 @@ def dm_crypt_handler(info, storage_config):
996 so not writing crypttab")
997
998
999+def verify_md_components(md_devname, raidlevel, device_paths, spare_paths):
1000+ # check if the array is already up, if not try to assemble
1001+ check_ok = mdadm.md_check(md_devname, raidlevel, device_paths,
1002+ spare_paths)
1003+ if not check_ok:
1004+ LOG.info("assembling preserved raid for {}".format(md_devname))
1005+ mdadm.mdadm_assemble(md_devname, device_paths, spare_paths)
1006+ check_ok = mdadm.md_check(md_devname, raidlevel, device_paths,
1007+ spare_paths)
1008+ msg = ('Verifying %s raid composition, found raid is %s'
1009+ % (md_devname, 'OK' if check_ok else 'not OK'))
1010+ LOG.debug(msg)
1011+ if not check_ok:
1012+ raise RuntimeError(msg)
1013+
1014+
1015+def raid_verify(md_devname, raidlevel, device_paths, spare_paths):
1016+ verify_md_components(md_devname, raidlevel, device_paths, spare_paths)
1017+
1018+
1019 def raid_handler(info, storage_config):
1020 state = util.load_command_environment(strict=True)
1021 devices = info.get('devices')
1022 raidlevel = info.get('raidlevel')
1023 spare_devices = info.get('spare_devices')
1024 md_devname = block.dev_path(info.get('name'))
1025+ preserve = config.value_as_boolean(info.get('preserve'))
1026 if not devices:
1027 raise ValueError("devices for raid must be specified")
1028 if raidlevel not in ['linear', 'raid0', 0, 'stripe', 'raid1', 1, 'mirror',
1029@@ -1233,7 +1385,7 @@ def raid_handler(info, storage_config):
1030 device_paths = list(get_path_to_storage_volume(dev, storage_config) for
1031 dev in devices)
1032 LOG.debug('raid: device path mapping: {}'.format(
1033- zip(devices, device_paths)))
1034+ list(zip(devices, device_paths))))
1035
1036 spare_device_paths = []
1037 if spare_devices:
1038@@ -1242,27 +1394,27 @@ def raid_handler(info, storage_config):
1039 LOG.debug('raid: spare device path mapping: {}'.format(
1040 zip(spare_devices, spare_device_paths)))
1041
1042- # Handle preserve flag
1043- if config.value_as_boolean(info.get('preserve')):
1044- # check if the array is already up, if not try to assemble
1045- if not mdadm.md_check(md_devname, raidlevel,
1046- device_paths, spare_device_paths):
1047- LOG.info("assembling preserved raid for "
1048- "{}".format(md_devname))
1049-
1050- mdadm.mdadm_assemble(md_devname, device_paths, spare_device_paths)
1051-
1052- # try again after attempting to assemble
1053- if not mdadm.md_check(md_devname, raidlevel,
1054- devices, spare_device_paths):
1055- raise ValueError("Unable to confirm preserved raid array: "
1056- " {}".format(md_devname))
1057- # raid is all OK
1058- return
1059-
1060- mdadm.mdadm_create(md_devname, raidlevel,
1061- device_paths, spare_device_paths,
1062- info.get('mdname', ''))
1063+ create_raid = True
1064+ if preserve:
1065+ raid_verify(md_devname, raidlevel, device_paths, spare_device_paths)
1066+ LOG.debug('raid %s already present, skipping create', md_devname)
1067+ create_raid = False
1068+
1069+ if create_raid:
1070+ mdadm.mdadm_create(md_devname, raidlevel,
1071+ device_paths, spare_device_paths,
1072+ info.get('mdname', ''))
1073+
1074+ wipe_mode = info.get('wipe')
1075+ if wipe_mode:
1076+ if wipe_mode == 'superblock' and create_raid:
1077+ # Newly created raid devices already wipe member superblocks at
1078+ # their data offset (this is equivalent to wiping the assembled
1079+ # device, see curtin.block.mdadm.zero_device for more details.
1080+ pass
1081+ else:
1082+ LOG.debug('Wiping raid device %s mode=%s', md_devname, wipe_mode)
1083+ block.wipe_volume(md_devname, mode=wipe_mode, exclusive=False)
1084
1085 # Make dname rule for this dev
1086 make_dname(info.get('id'), storage_config)
1087@@ -1287,262 +1439,120 @@ def raid_handler(info, storage_config):
1088 disk_handler(info, storage_config)
1089
1090
1091+def verify_bcache_cachedev(cachedev):
1092+ """ verify that the specified cache_device is a bcache cache device."""
1093+ result = bcache.is_caching(cachedev)
1094+ msg = ('Verifying %s is bcache cache device, found device is %s'
1095+ % (cachedev, 'OK' if result else 'not OK'))
1096+ LOG.debug(msg)
1097+ if not result:
1098+ raise RuntimeError(msg)
1099+
1100+
1101+def verify_bcache_backingdev(backingdev):
1102+ """ verify that the specified backingdev is a bcache backing device."""
1103+ result = bcache.is_backing(backingdev)
1104+ msg = ('Verifying %s is bcache backing device, found device is %s'
1105+ % (backingdev, 'OK' if result else 'not OK'))
1106+ LOG.debug(msg)
1107+ if not result:
1108+ raise RuntimeError(msg)
1109+
1110+
1111+def verify_cache_mode(backing_dev, backing_superblock, expected_mode):
1112+ """ verify the backing device cache-mode is set as expected. """
1113+ found = backing_superblock.get('dev.data.cache_mode', '')
1114+ msg = ('Verifying %s bcache cache-mode, expecting %s, found %s'
1115+ % (backing_dev, expected_mode, found))
1116+ LOG.debug(msg)
1117+ if expected_mode not in found:
1118+ raise RuntimeError(msg)
1119+
1120+
1121+def verify_bcache_cset_uuid_match(backing_dev, cinfo, binfo):
1122+ expected_cset_uuid = cinfo.get('cset.uuid')
1123+ found_cset_uuid = binfo.get('cset.uuid')
1124+ result = ((expected_cset_uuid == found_cset_uuid)
1125+ if expected_cset_uuid else False)
1126+ msg = ('Verifying bcache backing_device %s cset.uuid is %s, found %s'
1127+ % (backing_dev, expected_cset_uuid, found_cset_uuid))
1128+ LOG.debug(msg)
1129+ if not result:
1130+ raise RuntimeError(msg)
1131+
1132+
1133+def bcache_verify_cachedev(cachedev):
1134+ verify_bcache_cachedev(cachedev)
1135+ return True
1136+
1137+
1138+def bcache_verify_backingdev(backingdev):
1139+ verify_bcache_backingdev(backingdev)
1140+ return True
1141+
1142+
1143+def bcache_verify(cachedev, backingdev, cache_mode):
1144+ bcache_verify_cachedev(cachedev)
1145+ bcache_verify_backingdev(backingdev)
1146+ cache_info = bcache.superblock_asdict(cachedev)
1147+ backing_info = bcache.superblock_asdict(backingdev)
1148+ verify_bcache_cset_uuid_match(backingdev, cache_info, backing_info)
1149+ if cache_mode:
1150+ verify_cache_mode(backingdev, backing_info, cache_mode)
1151+
1152+ return True
1153+
1154+
1155 def bcache_handler(info, storage_config):
1156 backing_device = get_path_to_storage_volume(info.get('backing_device'),
1157 storage_config)
1158 cache_device = get_path_to_storage_volume(info.get('cache_device'),
1159 storage_config)
1160 cache_mode = info.get('cache_mode', None)
1161+ preserve = config.value_as_boolean(info.get('preserve'))
1162
1163 if not backing_device or not cache_device:
1164 raise ValueError("backing device and cache device for bcache"
1165 " must be specified")
1166
1167- bcache_sysfs = "/sys/fs/bcache"
1168- udevadm_settle(exists=bcache_sysfs)
1169-
1170- def register_bcache(bcache_device):
1171- LOG.debug('register_bcache: %s > /sys/fs/bcache/register',
1172- bcache_device)
1173- with open("/sys/fs/bcache/register", "w") as fp:
1174- fp.write(bcache_device)
1175-
1176- def _validate_bcache(bcache_device, bcache_sys_path):
1177- """ check if bcache is ready, dump info
1178-
1179- For cache devices, we expect to find a cacheN symlink
1180- which will point to the underlying cache device; Find
1181- this symlink, read it and compare bcache_device
1182- specified in the parameters.
1183-
1184- For backing devices, we expec to find a dev symlink
1185- pointing to the bcacheN device to which the backing
1186- device is enslaved. From the dev symlink, we can
1187- read the bcacheN holders list, which should contain
1188- the backing device kname.
1189-
1190- In either case, if we fail to find the correct
1191- symlinks in sysfs, this method will raise
1192- an OSError indicating the missing attribute.
1193- """
1194- # cacheset
1195- # /sys/fs/bcache/<uuid>
1196-
1197- # cache device
1198- # /sys/class/block/<cdev>/bcache/set -> # .../fs/bcache/uuid
1199-
1200- # backing
1201- # /sys/class/block/<bdev>/bcache/cache -> # .../block/bcacheN
1202- # /sys/class/block/<bdev>/bcache/dev -> # .../block/bcacheN
1203-
1204- if bcache_sys_path.startswith('/sys/fs/bcache'):
1205- LOG.debug("validating bcache caching device '%s' from sys_path"
1206- " '%s'", bcache_device, bcache_sys_path)
1207- # we expect a cacheN symlink to point to bcache_device/bcache
1208- sys_path_links = [os.path.join(bcache_sys_path, l)
1209- for l in os.listdir(bcache_sys_path)]
1210- cache_links = [l for l in sys_path_links
1211- if os.path.islink(l) and (
1212- os.path.basename(l).startswith('cache'))]
1213-
1214- if len(cache_links) == 0:
1215- msg = ('Failed to find any cache links in %s:%s' % (
1216- bcache_sys_path, sys_path_links))
1217- raise OSError(msg)
1218-
1219- for link in cache_links:
1220- target = os.readlink(link)
1221- LOG.debug('Resolving symlink %s -> %s', link, target)
1222- # cacheN -> ../../../devices/.../<bcache_device>/bcache
1223- # basename(dirname(readlink(link)))
1224- target_cache_device = os.path.basename(
1225- os.path.dirname(target))
1226- if os.path.basename(bcache_device) == target_cache_device:
1227- LOG.debug('Found match: bcache_device=%s target_device=%s',
1228- bcache_device, target_cache_device)
1229- return
1230- else:
1231- msg = ('Cache symlink %s ' % target_cache_device +
1232- 'points to incorrect device: %s' % bcache_device)
1233- raise OSError(msg)
1234- elif bcache_sys_path.startswith('/sys/class/block'):
1235- LOG.debug("validating bcache backing device '%s' from sys_path"
1236- " '%s'", bcache_device, bcache_sys_path)
1237- # we expect a 'dev' symlink to point to the bcacheN device
1238- bcache_dev = os.path.join(bcache_sys_path, 'dev')
1239- if os.path.islink(bcache_dev):
1240- bcache_dev_link = (
1241- os.path.basename(os.readlink(bcache_dev)))
1242- LOG.debug('bcache device %s using bcache kname: %s',
1243- bcache_sys_path, bcache_dev_link)
1244-
1245- bcache_slaves_path = os.path.join(bcache_dev, 'slaves')
1246- slaves = os.listdir(bcache_slaves_path)
1247- LOG.debug('bcache device %s has slaves: %s',
1248- bcache_sys_path, slaves)
1249- if os.path.basename(bcache_device) in slaves:
1250- LOG.debug('bcache device %s found in slaves',
1251- os.path.basename(bcache_device))
1252- return
1253- else:
1254- msg = ('Failed to find bcache device %s' % bcache_device +
1255- 'in slaves list %s' % slaves)
1256- raise OSError(msg)
1257- else:
1258- msg = 'didnt find "dev" attribute on: %s', bcache_dev
1259- return OSError(msg)
1260-
1261- else:
1262- LOG.debug("Failed to validate bcache device '%s' from sys_path"
1263- " '%s'", bcache_device, bcache_sys_path)
1264- msg = ('sysfs path %s does not appear to be a bcache device' %
1265- bcache_sys_path)
1266- return ValueError(msg)
1267-
1268- def ensure_bcache_is_registered(bcache_device, expected, retry=None):
1269- """ Test that bcache_device is found at an expected path and
1270- re-register the device if it's not ready.
1271-
1272- Retry the validation and registration as needed.
1273- """
1274- if not retry:
1275- retry = BCACHE_REGISTRATION_RETRY
1276-
1277- for attempt, wait in enumerate(retry):
1278- # find the actual bcache device name via sysfs using the
1279- # backing device's holders directory.
1280- LOG.debug('check just created bcache %s if it is registered,'
1281- ' try=%s', bcache_device, attempt + 1)
1282- try:
1283- udevadm_settle()
1284- if os.path.exists(expected):
1285- LOG.debug('Found bcache dev %s at expected path %s',
1286- bcache_device, expected)
1287- _validate_bcache(bcache_device, expected)
1288- else:
1289- msg = 'bcache device path not found: %s' % expected
1290- LOG.debug(msg)
1291- raise ValueError(msg)
1292-
1293- # if bcache path exists and holders are > 0 we can return
1294- LOG.debug('bcache dev %s at path %s successfully registered'
1295- ' on attempt %s/%s', bcache_device, expected,
1296- attempt + 1, len(retry))
1297- return
1298-
1299- except (OSError, IndexError, ValueError):
1300- # Some versions of bcache-tools will register the bcache device
1301- # as soon as we run make-bcache using udev rules, so wait for
1302- # udev to settle, then try to locate the dev, on older versions
1303- # we need to register it manually though
1304- LOG.debug('bcache device was not registered, registering %s '
1305- 'at /sys/fs/bcache/register', bcache_device)
1306- try:
1307- register_bcache(bcache_device)
1308- except IOError:
1309- # device creation is notoriously racy and this can trigger
1310- # "Invalid argument" IOErrors if it got created in "the
1311- # meantime" - just restart the function a few times to
1312- # check it all again
1313- pass
1314-
1315- LOG.debug("bcache dev %s not ready, waiting %ss",
1316- bcache_device, wait)
1317- time.sleep(wait)
1318-
1319- # we've exhausted our retries
1320- LOG.warning('Repetitive error registering the bcache dev %s',
1321- bcache_device)
1322- raise RuntimeError("bcache device %s can't be registered" %
1323- bcache_device)
1324-
1325- if cache_device:
1326- # /sys/class/block/XXX/YYY/
1327- cache_device_sysfs = block.sys_block_path(cache_device)
1328-
1329- if os.path.exists(os.path.join(cache_device_sysfs, "bcache")):
1330- LOG.debug('caching device already exists at {}/bcache. Read '
1331- 'cset.uuid'.format(cache_device_sysfs))
1332- (out, err) = util.subp(["bcache-super-show", cache_device],
1333- capture=True)
1334- LOG.debug('bcache-super-show=[{}]'.format(out))
1335- [cset_uuid] = [line.split()[-1] for line in out.split("\n")
1336- if line.startswith('cset.uuid')]
1337- else:
1338- LOG.debug('caching device does not yet exist at {}/bcache. Make '
1339- 'cache and get uuid'.format(cache_device_sysfs))
1340- # make the cache device, extracting cacheset uuid
1341- (out, err) = util.subp(["make-bcache", "-C", cache_device],
1342- capture=True)
1343- LOG.debug('out=[{}]'.format(out))
1344- [cset_uuid] = [line.split()[-1] for line in out.split("\n")
1345- if line.startswith('Set UUID:')]
1346-
1347- target_sysfs_path = '/sys/fs/bcache/%s' % cset_uuid
1348- ensure_bcache_is_registered(cache_device, target_sysfs_path)
1349-
1350- if backing_device:
1351- backing_device_sysfs = block.sys_block_path(backing_device)
1352- target_sysfs_path = os.path.join(backing_device_sysfs, "bcache")
1353-
1354- # there should not be any pre-existing bcache device
1355- bdir = os.path.join(backing_device_sysfs, "bcache")
1356- if os.path.exists(bdir):
1357- raise RuntimeError(
1358- 'Unexpected old bcache device: %s', backing_device)
1359-
1360- LOG.debug('Creating a backing device on %s', backing_device)
1361- util.subp(["make-bcache", "-B", backing_device])
1362- ensure_bcache_is_registered(backing_device, target_sysfs_path)
1363-
1364- # via the holders we can identify which bcache device we just created
1365- # for a given backing device
1366- holders = clear_holders.get_holders(backing_device)
1367- if len(holders) != 1:
1368- err = ('Invalid number {} of holding devices:'
1369- ' "{}"'.format(len(holders), holders))
1370- LOG.error(err)
1371- raise ValueError(err)
1372- [bcache_dev] = holders
1373- LOG.debug('The just created bcache device is {}'.format(holders))
1374-
1375- if cache_device:
1376- # if we specify both then we need to attach backing to cache
1377- if cset_uuid:
1378- LOG.info("Attaching backing device to cacheset: "
1379- "{} -> {} cset.uuid: {}".format(backing_device,
1380- cache_device,
1381- cset_uuid))
1382- attach = os.path.join(backing_device_sysfs,
1383- "bcache",
1384- "attach")
1385- with open(attach, "w") as fp:
1386- fp.write(cset_uuid)
1387- else:
1388- msg = "Invalid cset_uuid: {}".format(cset_uuid)
1389- LOG.error(msg)
1390- raise ValueError(msg)
1391-
1392- if cache_mode:
1393- LOG.info("Setting cache_mode on {} to {}".format(bcache_dev,
1394- cache_mode))
1395- cache_mode_file = \
1396- '/sys/block/{}/bcache/cache_mode'.format(bcache_dev)
1397- with open(cache_mode_file, "w") as fp:
1398- fp.write(cache_mode)
1399- else:
1400- # no backing device
1401- if cache_mode:
1402- raise ValueError("cache mode specified which can only be set per \
1403- backing devices, but none was specified")
1404+ create_bcache = True
1405+ if preserve:
1406+ if cache_device and backing_device:
1407+ if bcache_verify(cache_device, backing_device, cache_mode):
1408+ create_bcache = False
1409+ elif cache_device:
1410+ if bcache_verify_cachedev(cache_device):
1411+ create_bcache = False
1412+ elif backing_device:
1413+ if bcache_verify_backingdev(backing_device):
1414+ create_bcache = False
1415+ if not create_bcache:
1416+ LOG.debug('bcache %s already present, skipping create', info['id'])
1417+
1418+ cset_uuid = bcache_dev = None
1419+ if create_bcache and cache_device:
1420+ cset_uuid = bcache.create_cache_device(cache_device)
1421+
1422+ if create_bcache and backing_device:
1423+ bcache_dev = bcache.create_backing_device(backing_device, cache_device,
1424+ cache_mode, cset_uuid)
1425+
1426+ if cache_mode and not backing_device:
1427+ raise ValueError("cache mode specified which can only be set on "
1428+ "backing devices, but none was specified")
1429+
1430+ wipe_mode = info.get('wipe')
1431+ if wipe_mode and bcache_dev:
1432+ LOG.debug('Wiping bcache device %s mode=%s', bcache_dev, wipe_mode)
1433+ block.wipe_volume(bcache_dev, mode=wipe_mode, exclusive=False)
1434
1435 if info.get('name'):
1436 # Make dname rule for this dev
1437 make_dname(info.get('id'), storage_config)
1438
1439 if info.get('ptable'):
1440- raise ValueError("Partition tables on top of lvm logical volumes is \
1441- not supported")
1442+ disk_handler(info, storage_config)
1443+
1444 LOG.debug('Finished bcache creation for backing {} or caching {}'
1445 .format(backing_device, cache_device))
1446
1447diff --git a/curtin/storage_config.py b/curtin/storage_config.py
1448index abc5e4b..f3ee7e5 100644
1449--- a/curtin/storage_config.py
1450+++ b/curtin/storage_config.py
1451@@ -661,35 +661,6 @@ class BlockdevParser(ProbertParser):
1452 configs.append(entry)
1453 return (configs, errors)
1454
1455- def ptable_uuid_to_flag_entry(self, guid):
1456- # map
1457- # https://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs
1458- # to
1459- # curtin/commands/block_meta.py:partition_handler()sgdisk_flags/types
1460- # MBR types
1461- # https://www.win.tue.nl/~aeb/partitions/partition_types-2.html
1462- guid_map = {
1463- 'C12A7328-F81F-11D2-BA4B-00A0C93EC93B': ('boot', 'EF00'),
1464- '21686148-6449-6E6F-744E-656564454649': ('bios_grub', 'EF02'),
1465- '933AC7E1-2EB4-4F13-B844-0E14E2AEF915': ('home', '8302'),
1466- '0FC63DAF-8483-4772-8E79-3D69D8477DE4': ('linux', '8300'),
1467- 'E6D6D379-F507-44C2-A23C-238F2A3DF928': ('lvm', '8e00'),
1468- '024DEE41-33E7-11D3-9D69-0008C781F39F': ('mbr', ''),
1469- '9E1A2D38-C612-4316-AA26-8B49521E5A8B': ('prep', '4200'),
1470- 'A19D880F-05FC-4D3B-A006-743F0F84911E': ('raid', 'fd00'),
1471- '0657FD6D-A4AB-43C4-84E5-0933C84B4F4F': ('swap', '8200'),
1472- '0X83': ('linux', '83'),
1473- '0XF': ('extended', 'f'),
1474- '0X5': ('extended', 'f'),
1475- '0X85': ('extended', 'f'),
1476- '0XC5': ('extended', 'f'),
1477- }
1478- name = code = None
1479- if guid and guid.upper() in guid_map:
1480- name, code = guid_map[guid.upper()]
1481-
1482- return (name, code)
1483-
1484 def get_unique_ids(self, blockdev):
1485 """ extract preferred ID_* keys for www and serial values.
1486
1487@@ -808,7 +779,7 @@ class BlockdevParser(ProbertParser):
1488 entry['size'] *= 512
1489
1490 ptype = blockdev_data.get('ID_PART_ENTRY_TYPE')
1491- flag_name, _flag_code = self.ptable_uuid_to_flag_entry(ptype)
1492+ flag_name, _flag_code = ptable_uuid_to_flag_entry(ptype)
1493
1494 # logical partitions are not tagged in data, however
1495 # the partition number > 4 (ie, not primary nor extended)
1496@@ -1243,6 +1214,36 @@ class ZfsParser(ProbertParser):
1497 return (zpool_configs + zfs_configs, errors)
1498
1499
1500+def ptable_uuid_to_flag_entry(guid):
1501+ # map
1502+ # https://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs
1503+ # to
1504+ # curtin/commands/block_meta.py:partition_handler()sgdisk_flags/types
1505+ # MBR types
1506+ # https://www.win.tue.nl/~aeb/partitions/partition_types-2.html
1507+ guid_map = {
1508+ 'C12A7328-F81F-11D2-BA4B-00A0C93EC93B': ('boot', 'EF00'),
1509+ '21686148-6449-6E6F-744E-656564454649': ('bios_grub', 'EF02'),
1510+ '933AC7E1-2EB4-4F13-B844-0E14E2AEF915': ('home', '8302'),
1511+ '0FC63DAF-8483-4772-8E79-3D69D8477DE4': ('linux', '8300'),
1512+ 'E6D6D379-F507-44C2-A23C-238F2A3DF928': ('lvm', '8e00'),
1513+ '024DEE41-33E7-11D3-9D69-0008C781F39F': ('mbr', ''),
1514+ '9E1A2D38-C612-4316-AA26-8B49521E5A8B': ('prep', '4200'),
1515+ 'A19D880F-05FC-4D3B-A006-743F0F84911E': ('raid', 'fd00'),
1516+ '0657FD6D-A4AB-43C4-84E5-0933C84B4F4F': ('swap', '8200'),
1517+ '0X83': ('linux', '83'),
1518+ '0XF': ('extended', 'f'),
1519+ '0X5': ('extended', 'f'),
1520+ '0X85': ('extended', 'f'),
1521+ '0XC5': ('extended', 'f'),
1522+ }
1523+ name = code = None
1524+ if guid and guid.upper() in guid_map:
1525+ name, code = guid_map[guid.upper()]
1526+
1527+ return (name, code)
1528+
1529+
1530 def extract_storage_config(probe_data, strict=False):
1531 """ Examine a probert storage dictionary and extract a curtin
1532 storage configuration that would recreate all of the
1533diff --git a/doc/topics/storage.rst b/doc/topics/storage.rst
1534index c85174d..f30fc30 100644
1535--- a/doc/topics/storage.rst
1536+++ b/doc/topics/storage.rst
1537@@ -212,7 +212,7 @@ This can specify the manufacturer or model of the disk. It is not currently
1538 used by curtin, but can be useful for a human reading a config file. Future
1539 versions of curtin may make use of this information.
1540
1541-**wipe**: *superblock, superblock-recursive, zero, random*
1542+**wipe**: *superblock, superblock-recursive, pvremove, zero, random*
1543
1544 If wipe is specified, **the disk contents will be destroyed**. In the case that
1545 a disk is a part of virtual block device, like bcache, RAID array, or LVM, then
1546@@ -233,22 +233,34 @@ The ``wipe: random`` option will write pseudo-random data from /dev/urandom
1547 Depending on the size and speed of the disk; it may take a long time to
1548 complete.
1549
1550+The ``wipe: pvremove`` option will execute the ``pvremove`` command to
1551+wipe the LVM metadata so that the device is no longer part of an LVM.
1552+
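+For example, a disk that previously held LVM physical volumes could be
+cleared like this (a minimal sketch; the id and serial are illustrative)::
+
+  - id: disk0
+    type: disk
+    serial: disk-a
+    wipe: pvremove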
1553+
1554 **preserve**: *true, false*
1555
1556 When the preserve key is present and set to ``true`` curtin will attempt
1557-to use the disk without damaging data present on it. If ``preserve`` is set and
1558-``ptable`` is also set, then curtin will validate that the partition table
1559-specified by ``ptable`` exists on the disk and will raise an error if it does
1560-not. If ``preserve`` is set and ``ptable`` is not, then curtin will be able to
1561-use the disk in later commands, but will not check if the disk has a valid
1562-partition table, and will only verify that the disk exists.
1563-
1564-It can be dangerous to try to move or re-size filesystems and partitions
1565-containing data that needs to be preserved. Therefor curtin does not support
1566-preserving a disk without also preserving the partitions on it. If a disk is
1567-set to be preserved and curtin is told to move a partition on that disk,
1568-installation will stop. It is still possible to reformat partitions that do
1569-not need to be preserved.
1570+reuse the existing storage device. Curtin will verify aspects of the device
1571+against the configuration provided. For example, when assessing whether
1572+curtin can use a preserved partition, curtin checks that the device exists,
1573+size of the partition matches the value in the config and checks if the same
1574+partition flag is set. The set of verification checks vary by device type.
1575+If curtin encounters a mismatch between config and what is found on the
1576+device a RuntimeError will be raised with the expected and found values and
1577+halt the installation. Currently curtin will verify the follow storage types:
1578+
1579+- disk
1580+- partition
1581+- lvm_volgroup
1582+- lvm_partition
1583+- dm_crypt
1584+- raid
1585+- bcache
1586+- format
1587+
1588+One specific use-case of ``preserve: true`` is in conjunction with the ``wipe``
1589+flag. This allows a device to be reused while the *content* of the device is
1590+removed.
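+
+For example, the following sketch (ids and serial are illustrative) reuses an
+existing disk and its partition table while destroying the data on it::
+
+  - id: disk0
+    type: disk
+    serial: disk-a
+    ptable: gpt
+    preserve: true
+    wipe: superblock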
1591
1592 **name**: *<name>*
1593
1594@@ -327,7 +339,7 @@ The ``device`` key refers to the ``id`` of a disk in the storage configuration.
1595 The disk entry must already be defined in the list of commands to ensure that
1596 it has already been processed.
1597
1598-**wipe**: *superblock, pvremove, zero, random*
1599+**wipe**: *superblock, superblock-recursive, pvremove, zero, random*
1600
1601 After the partition is added to the disk's partition table, curtin can run a
1602 wipe command on the partition. The wipe command values are the sames as for
1603@@ -337,9 +349,7 @@ disks.
1604
1605 Curtin will automatically wipe 1MB at the starting location of the partition
1606 prior to creating the partition to ensure that other block layers or devices
1607- do not enable themselves and prevent accessing the partition. Wipe
1608- and other destructive operations only occur if the ``preserve`` value
1609- is not set to ``True``.
1610+ do not enable themselves and prevent accessing the partition.
1611
1612 **flag**: *logical, extended, boot, bios_grub, swap, lvm, raid, home, prep*
1613
1614@@ -370,7 +380,7 @@ filesystem or be mounted anywhere on the system.
1615 **preserve**: *true, false*
1616
1617 If the preserve flag is set to true, curtin will verify that the partition
1618-exists and will not modify the partition.
1619+exists and that the ``size`` and ``flag`` match the configuration provided.
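+
+A preserved partition entry might look like this sketch (ids, size and flag
+are illustrative, modeled on the examples in this branch)::
+
+  - id: disk0-part1
+    type: partition
+    device: disk0
+    number: 1
+    size: 1024M
+    flag: boot
+    preserve: true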
1620
1621 **name**: *<name>*
1622
1623@@ -594,6 +604,14 @@ The ``devices`` key gives a list of devices to use as physical volumes. Each
1624 device is specified using the ``id`` of existing devices in the storage config.
1625 Almost anything can be used as a device such as partitions, whole disks, RAID.
1626
1627+**preserve**: *true, false*
1628+
1629+If the ``preserve`` option is True, curtin will verify that the volume group
1630+specified by the ``name`` option is present and that the physical volumes
1631+of the group match the devices specified in ``devices``. There is no ``wipe``
1632+option for volume groups.
1633+
1634+
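+A preserved volume group entry might look like this sketch (ids and names are
+illustrative, modeled on the preserve-lvm example)::
+
+  - id: root_vg
+    type: lvm_volgroup
+    name: root_vg
+    devices:
+      - disk0-part2
+    preserve: true
+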
1635 **Config Example**::
1636
1637 - id: volgroup1
1638@@ -642,6 +660,18 @@ number followed by a SI unit should work, i.e. *B, kB, MB, GB, TB*.
1639 If the ``size`` key is omitted then all remaining space on the volgroup will be
1640 used for the logical volume.
1641
1642+**preserve**: *true, false*
1643+
1644+If the ``preserve`` option is True, curtin will verify that the specified lvm
1645+partition is part of the specified volume group. If ``size`` is specified,
1646+curtin will verify that the size matches the specified value.
1647+
1648+**wipe**: *superblock, superblock-recursive, pvremove, zero, random*
1649+
1650+If the ``wipe`` option is set and ``preserve`` is False, curtin will wipe the
1651+contents of the lvm partition. Curtin skips the wipe settings if it creates
1652+the lvm partition.
1653+
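+A preserved logical volume might look like this sketch (names and size are
+illustrative, modeled on the preserve-lvm example)::
+
+  - id: root_vg_lv1
+    type: lvm_partition
+    name: lv1_root
+    volgroup: root_vg
+    size: 3.5G
+    preserve: true
+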
1654 .. note::
1655
1656 Curtin does not adjust size values. If you specific a size that exceeds the
1657@@ -705,6 +735,19 @@ system will prompt for this password in order to mount the disk.
1658
1659 Exactly one of **key** and **keyfile** must be supplied.
1660
1661+**preserve**: *true, false*
1662+
1663+If the ``preserve`` option is True, curtin will verify that the dm-crypt
1664+device specified is composed of the device specified in ``volume``.
1665+
1666+
1667+**wipe**: *superblock, superblock-recursive, pvremove, zero, random*
1668+
1669+If the ``wipe`` option is set and ``preserve`` is False, curtin will wipe the
1670+contents of the dm-crypt device. Curtin skips the wipe settings if it creates
1671+the dm-crypt volume.
1672+
1673+
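+A preserved dm-crypt entry might look like this sketch (ids, name and key are
+illustrative)::
+
+  - id: dmcrypt0
+    type: dm_crypt
+    dm_name: cryptroot
+    volume: disk0-part2
+    key: example-insecure-passphrase
+    preserve: true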
1674 .. note::
1675
1676 Encrypted disks and partitions are tracked in ``/etc/crypttab`` and will be
1677@@ -768,6 +811,19 @@ version of mdadm used during the install will control the value here. Note
1678 that metadata version 1.2 is the default in mdadm since release version 3.3
1679 in 2013.
1680
1681+**preserve**: *true, false*
1682+
1683+If the ``preserve`` option is True, curtin will verify the composition of
1684+the raid device. This includes the array state, raid level, device md-uuid,
1685+and the composition of the array devices and spares, all of which must match.
1686+
1687+**wipe**: *superblock, superblock-recursive, pvremove, zero, random*
1688+
1689+If the ``wipe`` option is set to a value other than ``superblock``, curtin
1690+wipes the contents of the assembled raid device. Curtin skips ``superblock``
1691+wipes as it already clears raid data on the members before assembling.
1692+
1693+
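+A preserved raid that also wipes the assembled device might look like this
+sketch (ids and member names are illustrative)::
+
+  - id: raid_array
+    type: raid
+    name: md0
+    raidlevel: 1
+    devices:
+      - disk-b-part1
+      - disk-c-part1
+    preserve: true
+    wipe: zero
+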
1694 **Config Example**::
1695
1696 - id: raid_array
1697@@ -825,6 +881,18 @@ If the ``name`` key is present, curtin will create a link to the device at
1698 as long as the device metadata does not change. If users modify the device
1699 such that device metadata is changed then the udev rule may no longer apply.
1700
1701+**preserve**: *true, false*
1702+
1703+If the ``preserve`` option is True, curtin will verify the composition of
1704+the bcache device. This includes checking that the backing device and cache
1705+device are enabled and correctly bound (the backing device is cached by the
1706+expected cache device) and, if ``cache-mode`` is given, that the mode matches.
1707+
1708+**wipe**: *superblock, superblock-recursive, pvremove, zero, random*
1709+
1710+If the ``wipe`` option is set, curtin will wipe the contents of the bcache
1711+device. If only a cache device is specified, the wipe option is ignored.
1712+
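+A preserved bcache entry might look like this sketch (ids are illustrative,
+taken from the preserve-bcache example in this branch)::
+
+  - id: id_bcache0
+    type: bcache
+    name: bcache0
+    backing_device: id_rotary0_part2
+    cache_device: id_ssd0
+    cache_mode: writeback
+    preserve: true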
1713
1714 **Config Example**::
1715
1716diff --git a/examples/tests/preserve-bcache.yaml b/examples/tests/preserve-bcache.yaml
1717new file mode 100644
1718index 0000000..f614f37
1719--- /dev/null
1720+++ b/examples/tests/preserve-bcache.yaml
1721@@ -0,0 +1,82 @@
1722+showtrace: true
1723+
1724+bucket:
1725+ - &setup |
1726+ parted /dev/disk/by-id/virtio-disk-a --script -- \
1727+ mklabel msdos \
1728+        mkpart primary 1MiB 1025MiB \
1729+ set 1 boot on \
1730+ mkpart primary 1026MiB 9218MiB
1731+ udevadm settle
1732+ make-bcache -C /dev/disk/by-id/virtio-disk-b \
1733+ -B /dev/disk/by-id/virtio-disk-a-part2 --writeback
1734+ udevadm settle
1735+ mkfs.ext4 /dev/bcache0
1736+ mount /dev/bcache0 /mnt
1737+ touch /mnt/existing
1738+ umount /mnt
1739+ echo 1 > /sys/class/block/bcache0/bcache/stop
1740+ udevadm settle
1741+
1742+# Create a bcache now to test curtin's reuse of existing bcache.
1743+early_commands:
1744+  00-setup-bcache: [sh, -exuc, *setup]
1745+
1746+
1747+storage:
1748+ config:
1749+ - id: id_rotary0
1750+ type: disk
1751+ name: rotary0
1752+ serial: disk-a
1753+ ptable: msdos
1754+ preserve: true
1755+ grub_device: true
1756+ - id: id_ssd0
1757+ type: disk
1758+ name: ssd0
1759+ serial: disk-b
1760+ preserve: true
1761+ - id: id_rotary0_part1
1762+ type: partition
1763+ name: rotary0-part1
1764+ device: id_rotary0
1765+ number: 1
1766+ offset: 1M
1767+ size: 1024M
1768+ preserve: true
1769+ wipe: superblock
1770+ - id: id_rotary0_part2
1771+ type: partition
1772+ name: rotary0-part2
1773+ device: id_rotary0
1774+ number: 2
1775+ size: 8G
1776+ preserve: true
1777+ - id: id_bcache0
1778+ type: bcache
1779+ name: bcache0
1780+ backing_device: id_rotary0_part2
1781+ cache_device: id_ssd0
1782+ cache_mode: writeback
1783+ preserve: true
1784+ - id: bootfs
1785+ type: format
1786+ label: boot-fs
1787+ volume: id_rotary0_part1
1788+ fstype: ext4
1789+ - id: rootfs
1790+ type: format
1791+ label: root-fs
1792+ volume: id_bcache0
1793+ fstype: ext4
1794+ preserve: true
1795+ - id: rootfs_mount
1796+ type: mount
1797+ path: /
1798+ device: rootfs
1799+ - id: bootfs_mount
1800+ type: mount
1801+ path: /boot
1802+ device: bootfs
1803+ version: 1
1804diff --git a/examples/tests/preserve-lvm.yaml b/examples/tests/preserve-lvm.yaml
1805new file mode 100644
1806index 0000000..046d6b4
1807--- /dev/null
1808+++ b/examples/tests/preserve-lvm.yaml
1809@@ -0,0 +1,77 @@
1810+showtrace: true
1811+bucket:
1812+ - &setup |
1813+ parted /dev/disk/by-id/virtio-disk-a --script -- \
1814+ mklabel gpt \
1815+ mkpart primary 1MiB 2MiB \
1816+ set 1 bios_grub on \
1817+ mkpart primary 3MiB 4099MiB \
1818+ set 2 boot on
1819+ udevadm settle
1820+ ls -al /dev/disk/by-id
1821+ vgcreate --force --zero=y --yes root_vg /dev/disk/by-id/virtio-disk-a-part2
1822+ pvscan --cache
1823+ vgscan --mknodes --cache
1824+ lvcreate root_vg --name lv1_root --zero=y --wipesignatures=y \
1825+ --size 3758096384B
1826+ udevadm settle
1827+ mkfs.ext4 /dev/root_vg/lv1_root
1828+ mount /dev/root_vg/lv1_root /mnt
1829+ touch /mnt/existing
1830+ umount /mnt
1831+ # disable vg/lv
1832+ for vg in `pvdisplay -C --separator = -o vg_name --noheadings`; do
1833+ vgchange -an $vg ||:
1834+ done
1835+ command -v systemctl && systemctl mask lvm2-pvscan\@.service
1836+ rm -rf /etc/lvm/archive /etc/lvm/backup
1837+
1838+# Create a LVM now to test curtin's reuse of existing LVMs
1839+early_commands:
1840+ 00-setup-lvm: [sh, -exuc, *setup]
1841+
1842+storage:
1843+ version: 1
1844+ config:
1845+ - id: main_disk
1846+ type: disk
1847+ ptable: gpt
1848+ name: root_disk
1849+ serial: disk-a
1850+ grub_device: true
1851+ preserve: true
1852+ - id: bios_boot
1853+ type: partition
1854+ size: 1MB
1855+ number: 1
1856+ device: main_disk
1857+ flag: bios_grub
1858+ preserve: true
1859+ - id: main_disk_p2
1860+ type: partition
1861+ number: 2
1862+ size: 4GB
1863+ device: main_disk
1864+ flag: boot
1865+ preserve: true
1866+ - id: root_vg
1867+ type: lvm_volgroup
1868+ name: root_vg
1869+ devices:
1870+ - main_disk_p2
1871+ preserve: true
1872+ - id: root_vg_lv1
1873+ type: lvm_partition
1874+ name: lv1_root
1875+ size: 3.5G
1876+ volgroup: root_vg
1877+ preserve: true
1878+ - id: lv1_root_fs
1879+ type: format
1880+ fstype: ext4
1881+ volume: root_vg_lv1
1882+ preserve: true
1883+ - id: lvroot_mount
1884+ path: /
1885+ type: mount
1886+ device: lv1_root_fs
1887diff --git a/examples/tests/preserve-partition-wipe-vg.yaml b/examples/tests/preserve-partition-wipe-vg.yaml
1888new file mode 100644
1889index 0000000..cef9678
1890--- /dev/null
1891+++ b/examples/tests/preserve-partition-wipe-vg.yaml
1892@@ -0,0 +1,116 @@
1893+showtrace: true
1894+
1895+bucket:
1896+ - &setup |
1897+ parted /dev/disk/by-id/virtio-disk-a --script -- \
1898+ mklabel gpt \
1899+ mkpart primary ext4 2MiB 4MiB \
1900+ set 1 bios_grub on \
1901+ mkpart primary ext4 1GiB 4GiB \
1902+ mkpart primary ext4 4GiB 7GiB
1903+ parted /dev/disk/by-id/virtio-disk-b --script -- \
1904+ mklabel gpt \
1905+ mkpart primary ext4 1GiB 4GiB \
1906+ mkpart primary ext4 4GiB 7GiB
1907+ udevadm settle
1908+ ls -al /dev/disk/by-id
1909+ vgcreate --force --zero=y --yes vg8 /dev/disk/by-id/virtio-disk-b-part1
1910+ pvscan --cache
1911+ vgscan --mknodes --cache
1912+ udevadm settle
1913+ ls -al /dev/disk/by-id
1914+ mkfs.ext4 /dev/disk/by-id/virtio-disk-a-part3
1915+ mkfs.ext4 /dev/disk/by-id/virtio-disk-b-part2
1916+ mount /dev/disk/by-id/virtio-disk-b-part2 /mnt
1917+ touch /mnt/existing-virtio-disk-b-part2
1918+ umount /mnt
1919+
1920+# Partition the disk now to test curtin's reuse of partitions.
1921+early_commands:
1922+ 00-setup-disk: [sh, -exuc, *setup]
1923+
1924+storage:
1925+ config:
1926+ - ptable: gpt
1927+ serial: disk-a
1928+ preserve: true
1929+ name: disk-a
1930+ grub_device: true
1931+ type: disk
1932+ id: disk-sda
1933+ wipe: superblock
1934+ - serial: disk-b
1935+ name: disk-b
1936+ grub_device: false
1937+ type: disk
1938+ id: disk-sdb
1939+ preserve: true
1940+ - device: disk-sda
1941+ size: 2097152
1942+ flag: bios_grub
1943+ preserve: true
1944+ wipe: zero
1945+ type: partition
1946+ id: disk-sda-part-1
1947+ - device: disk-sda
1948+ size: 3G
1949+ flag: linux
1950+ preserve: true
1951+ wipe: zero
1952+ type: partition
1953+ id: disk-sda-part-2
1954+ - device: disk-sdb
1955+ flag: linux
1956+ size: 3G
1957+ preserve: true
1958+ wipe: zero
1959+ type: partition
1960+ id: disk-sdb-part-1
1961+ - device: disk-sdb
1962+ flag: linux
1963+ size: 3G
1964+ preserve: true
1965+ type: partition
1966+ id: disk-sdb-part-2
1967+ - fstype: ext4
1968+ volume: disk-sda-part-2
1969+ preserve: false
1970+ type: format
1971+ id: format-0
1972+ - fstype: ext4
1973+ volume: disk-sdb-part-2
1974+ preserve: true
1975+ type: format
1976+ id: format-disk-sdb-part-2
1977+ - device: format-0
1978+ path: /
1979+ type: mount
1980+ id: mount-0
1981+ - name: vg1
1982+ devices:
1983+ - disk-sdb-part-1
1984+ preserve: false
1985+ type: lvm_volgroup
1986+ id: lvm_volgroup-0
1987+ - name: lv-0
1988+ volgroup: lvm_volgroup-0
1989+ size: 2G
1990+ preserve: false
1991+ type: lvm_partition
1992+ id: lvm_partition-0
1993+ - fstype: ext4
1994+ volume: lvm_partition-0
1995+ preserve: false
1996+ type: format
1997+ id: format-1
1998+ - device: format-1
1999+ path: /home
2000+ type: mount
2001+ id: mount-1
2002+ - device: format-disk-sdb-part-2
2003+ path: /opt
2004+ type: mount
2005+ id: mount-2
2006+
2007+ version: 1
2008+verbosity: 3
2009diff --git a/examples/tests/preserve-raid.yaml b/examples/tests/preserve-raid.yaml
2010index 3a6cc18..9e0489f 100644
2011--- a/examples/tests/preserve-raid.yaml
2012+++ b/examples/tests/preserve-raid.yaml
2013@@ -4,10 +4,12 @@ bucket:
2014 - &setup |
2015 parted /dev/disk/by-id/virtio-disk-b --script -- \
2016 mklabel gpt \
2017- mkpart primary 1GiB 9GiB
2018+ mkpart primary 1GiB 9GiB \
2019+ set 1 boot on
2020 parted /dev/disk/by-id/virtio-disk-c --script -- \
2021 mklabel gpt \
2022- mkpart primary 1GiB 9GiB
2023+ mkpart primary 1GiB 9GiB \
2024+ set 1 boot on
2025 udevadm settle
2026 mdadm --create --metadata 1.2 --level 1 -n 2 /dev/md1 --assume-clean \
2027 /dev/disk/by-id/virtio-disk-b-part1 /dev/disk/by-id/virtio-disk-c-part1
2028diff --git a/examples/tests/uefi_reuse_esp.yaml b/examples/tests/uefi_reuse_esp.yaml
2029index 37a30d3..7ad7fdf 100644
2030--- a/examples/tests/uefi_reuse_esp.yaml
2031+++ b/examples/tests/uefi_reuse_esp.yaml
2032@@ -6,9 +6,9 @@ bucket:
2033 - &setup |
2034 parted /dev/disk/by-id/virtio-disk-a --script -- \
2035 mklabel gpt \
2036- mkpart primary fat32 1MiB 512MiB \
2037+ mkpart primary fat32 1MiB 513MiB \
2038 set 1 esp on \
2039- mkpart primary ext4 512MiB 3512Mib
2040+ mkpart primary ext4 513MiB 3585MiB
2041
2042 udevadm settle
2043 mkfs.vfat -I -n EFI -F 32 /dev/disk/by-id/virtio-disk-a-part1
2044diff --git a/tests/unittests/test_commands_block_meta.py b/tests/unittests/test_commands_block_meta.py
2045index bc4f1cc..d7715c0 100644
2046--- a/tests/unittests/test_commands_block_meta.py
2047+++ b/tests/unittests/test_commands_block_meta.py
2048@@ -1212,15 +1212,116 @@ class TestDasdHandler(CiTestCase):
2049 self.assertEqual(0, m_dasd_format.call_count)
2050
2051
2052+class TestLvmVolgroupHandler(CiTestCase):
2053+
2054+ def setUp(self):
2055+ super(TestLvmVolgroupHandler, self).setUp()
2056+
2057+ basepath = 'curtin.commands.block_meta.'
2058+ self.add_patch(basepath + 'lvm', 'm_lvm')
2059+ self.add_patch(basepath + 'util.subp', 'm_subp')
2060+ self.add_patch(basepath + 'make_dname', 'm_dname')
2061+ self.add_patch(basepath + 'get_path_to_storage_volume', 'm_getpath')
2062+ self.add_patch(basepath + 'block.wipe_volume', 'm_wipe')
2063+
2064+ self.target = "my_target"
2065+ self.config = {
2066+ 'storage': {
2067+ 'version': 1,
2068+ 'config': [
2069+ {'id': 'wda2',
2070+ 'type': 'partition'},
2071+ {'id': 'wdb2',
2072+ 'type': 'partition'},
2073+ {'id': 'lvm-volgroup1',
2074+ 'type': 'lvm_volgroup',
2075+ 'name': 'vg1',
2076+ 'devices': ['wda2', 'wdb2']},
2077+ {'id': 'lvm-part1',
2078+ 'type': 'lvm_partition',
2079+ 'name': 'lv1',
2080+ 'size': 1073741824,
2081+ 'volgroup': 'lvm-volgroup1'},
2082+ ],
2083+ }
2084+ }
2085+ self.storage_config = (
2086+ block_meta.extract_storage_ordered_dict(self.config))
2087+
2088+ def test_lvmvolgroup_creates_volume_group(self):
2089+ """ lvm_volgroup handler creates volume group. """
2090+
2091+ devices = [self.random_string(), self.random_string()]
2092+ self.m_getpath.side_effect = iter(devices)
2093+
2094+ block_meta.lvm_volgroup_handler(self.storage_config['lvm-volgroup1'],
2095+ self.storage_config)
2096+
2097+ self.assertEqual([call(['vgcreate', '--force', '--zero=y', '--yes',
2098+ 'vg1'] + devices, capture=True)],
2099+ self.m_subp.call_args_list)
2100+ self.assertEqual(1, self.m_lvm.lvm_scan.call_count)
2101+
2102+ @patch('curtin.commands.block_meta.lvm_volgroup_verify')
2103+ def test_lvmvolgroup_preserve_existing_volume_group(self, m_verify):
2104+ """ lvm_volgroup handler preserves existing volume group. """
2105+ m_verify.return_value = True
2106+ devices = [self.random_string(), self.random_string()]
2107+ self.m_getpath.side_effect = iter(devices)
2108+
2109+ self.storage_config['lvm-volgroup1']['preserve'] = True
2110+ block_meta.lvm_volgroup_handler(self.storage_config['lvm-volgroup1'],
2111+ self.storage_config)
2112+
2113+ self.assertEqual(0, self.m_subp.call_count)
2114+ self.assertEqual(1, self.m_lvm.lvm_scan.call_count)
2115+
2116+ def test_lvmvolgroup_preserve_verifies_volgroup_members(self):
2117+        """ lvm_volgroup handler preserve verifies volgroup members. """
2118+ devices = [self.random_string(), self.random_string()]
2119+ self.m_getpath.side_effect = iter(devices)
2120+ self.m_lvm.get_pvols_in_volgroup.return_value = devices
2121+ self.storage_config['lvm-volgroup1']['preserve'] = True
2122+
2123+ block_meta.lvm_volgroup_handler(self.storage_config['lvm-volgroup1'],
2124+ self.storage_config)
2125+
2126+ self.assertEqual(1, self.m_lvm.activate_volgroups.call_count)
2127+ self.assertEqual([call('vg1')],
2128+ self.m_lvm.get_pvols_in_volgroup.call_args_list)
2129+ self.assertEqual(0, self.m_subp.call_count)
2130+ self.assertEqual(1, self.m_lvm.lvm_scan.call_count)
2131+
2132+ def test_lvmvolgroup_preserve_raises_exception_wrong_pvs(self):
2133+        """ lvm_volgroup handler preserve raises exception on wrong pv devs."""
2134+ devices = [self.random_string(), self.random_string()]
2135+ self.m_getpath.side_effect = iter(devices)
2136+ self.m_lvm.get_pvols_in_volgroup.return_value = [self.random_string()]
2137+ self.storage_config['lvm-volgroup1']['preserve'] = True
2138+
2139+ with self.assertRaises(RuntimeError):
2140+ block_meta.lvm_volgroup_handler(
2141+ self.storage_config['lvm-volgroup1'], self.storage_config)
2142+
2143+ self.assertEqual(1, self.m_lvm.activate_volgroups.call_count)
2144+ self.assertEqual([call('vg1')],
2145+ self.m_lvm.get_pvols_in_volgroup.call_args_list)
2146+ self.assertEqual(0, self.m_subp.call_count)
2147+ self.assertEqual(0, self.m_lvm.lvm_scan.call_count)
2148+
2149+
2150 class TestLvmPartitionHandler(CiTestCase):
2151
2152 def setUp(self):
2153 super(TestLvmPartitionHandler, self).setUp()
2154
2155- self.add_patch('curtin.commands.block_meta.lvm', 'm_lvm')
2156- self.add_patch('curtin.commands.block_meta.distro', 'm_distro')
2157- self.add_patch('curtin.commands.block_meta.util.subp', 'm_subp')
2158- self.add_patch('curtin.commands.block_meta.make_dname', 'm_dname')
2159+ basepath = 'curtin.commands.block_meta.'
2160+ self.add_patch(basepath + 'lvm', 'm_lvm')
2161+ self.add_patch(basepath + 'distro', 'm_distro')
2162+ self.add_patch(basepath + 'util.subp', 'm_subp')
2163+ self.add_patch(basepath + 'make_dname', 'm_dname')
2164+ self.add_patch(basepath + 'get_path_to_storage_volume', 'm_getpath')
2165+ self.add_patch(basepath + 'block.wipe_volume', 'm_wipe')
2166
2167 self.target = "my_target"
2168 self.config = {
2169@@ -1257,6 +1358,84 @@ class TestLvmPartitionHandler(CiTestCase):
2170 # call_args is an n-tuple of arg list
2171 self.assertIn(expected_size_str, call_args[0])
2172
2173+ def test_lvmpart_wipes_volume_by_default(self):
2174+ """ lvm_partition_handler wipes superblock by default. """
2175+
2176+ self.m_distro.lsb_release.return_value = {'codename': 'bionic'}
2177+ devpath = self.random_string()
2178+ self.m_getpath.return_value = devpath
2179+
2180+ block_meta.lvm_partition_handler(self.storage_config['lvm-part1'],
2181+ self.storage_config)
2182+ self.m_wipe.assert_called_with(devpath, mode='superblock',
2183+ exclusive=False)
2184+
2185+ def test_lvmpart_handles_wipe_setting(self):
2186+ """ lvm_partition_handler handles wipe settings. """
2187+
2188+ self.m_distro.lsb_release.return_value = {'codename': 'bionic'}
2189+ devpath = self.random_string()
2190+ self.m_getpath.return_value = devpath
2191+
2192+ wipe_mode = 'zero'
2193+ self.storage_config['lvm-part1']['wipe'] = wipe_mode
2194+ block_meta.lvm_partition_handler(self.storage_config['lvm-part1'],
2195+ self.storage_config)
2196+ self.m_wipe.assert_called_with(devpath, mode=wipe_mode,
2197+ exclusive=False)
2198+
2199+ @patch('curtin.commands.block_meta.lvm_partition_verify')
2200+ def test_lvmpart_preserve_existing_lvmpart(self, m_verify):
2201+ m_verify.return_value = True
2202+ self.storage_config['lvm-part1']['preserve'] = True
2203+ block_meta.lvm_partition_handler(self.storage_config['lvm-part1'],
2204+ self.storage_config)
2205+ self.assertEqual(0, self.m_distro.lsb_release.call_count)
2206+ self.assertEqual(0, self.m_subp.call_count)
2207+
2208+ def test_lvmpart_preserve_verifies_lv_in_vg_and_lv_size(self):
2209+ self.storage_config['lvm-part1']['preserve'] = True
2210+ self.m_lvm.get_lvols_in_volgroup.return_value = ['lv1']
2211+ self.m_lvm.get_lv_size_bytes.return_value = 1073741824.0
2212+
2213+ block_meta.lvm_partition_handler(self.storage_config['lvm-part1'],
2214+ self.storage_config)
2215+ self.assertEqual([call('vg1')],
2216+ self.m_lvm.get_lvols_in_volgroup.call_args_list)
2217+ self.assertEqual([call('lv1')],
2218+ self.m_lvm.get_lv_size_bytes.call_args_list)
2219+ self.assertEqual(0, self.m_distro.lsb_release.call_count)
2220+ self.assertEqual(0, self.m_subp.call_count)
2221+
2222+ def test_lvmpart_preserve_fails_if_lv_not_in_vg(self):
2223+ self.storage_config['lvm-part1']['preserve'] = True
2224+ self.m_lvm.get_lvols_in_volgroup.return_value = []
2225+
2226+ with self.assertRaises(RuntimeError):
2227+ block_meta.lvm_partition_handler(self.storage_config['lvm-part1'],
2228+ self.storage_config)
2229+
2230+ self.assertEqual([call('vg1')],
2231+ self.m_lvm.get_lvols_in_volgroup.call_args_list)
2232+ self.assertEqual(0, self.m_lvm.get_lv_size_bytes.call_count)
2233+ self.assertEqual(0, self.m_distro.lsb_release.call_count)
2234+ self.assertEqual(0, self.m_subp.call_count)
2235+
2236+ def test_lvmpart_preserve_verifies_lv_size_matches(self):
2237+ self.storage_config['lvm-part1']['preserve'] = True
2238+ self.m_lvm.get_lvols_in_volgroup.return_value = ['lv1']
2239+ self.m_lvm.get_lv_size_bytes.return_value = 0.0
2240+
2241+ with self.assertRaises(RuntimeError):
2242+ block_meta.lvm_partition_handler(self.storage_config['lvm-part1'],
2243+ self.storage_config)
2244+ self.assertEqual([call('vg1')],
2245+ self.m_lvm.get_lvols_in_volgroup.call_args_list)
2246+ self.assertEqual([call('lv1')],
2247+ self.m_lvm.get_lv_size_bytes.call_args_list)
2248+ self.assertEqual(0, self.m_distro.lsb_release.call_count)
2249+ self.assertEqual(0, self.m_subp.call_count)
2250+
2251
2252 class TestDmCryptHandler(CiTestCase):
2253
2254@@ -1431,6 +1610,317 @@ class TestDmCryptHandler(CiTestCase):
2255 self.m_subp.assert_has_calls(expected_calls)
2256 self.assertEqual(len(util.load_file(self.crypttab).splitlines()), 1)
2257
2258+ @patch('curtin.commands.block_meta.dm_crypt_verify')
2259+ def test_dm_crypt_preserves_existing(self, m_verify):
2260+ """ verify dm_crypt preserves existing device. """
2261+ m_verify.return_value = True
2262+ volume_path = self.random_string()
2263+ self.m_getpath.return_value = volume_path
2264+
2265+ info = self.storage_config['dmcrypt0']
2266+ info['preserve'] = True
2267+ block_meta.dm_crypt_handler(info, self.storage_config)
2268+
2269+ self.assertEqual(0, self.m_subp.call_count)
2270+ self.assertEqual(len(util.load_file(self.crypttab).splitlines()), 1)
2271+
2272+ @patch('curtin.commands.block_meta.os.path.exists')
2273+ def test_dm_crypt_preserve_verifies_correct_device_is_present(self, m_ex):
2274+ """ verify dm_crypt preserve verifies correct dev is used. """
2275+ volume_path = self.random_string()
2276+ self.m_getpath.return_value = volume_path
2277+ self.m_block.dmsetup_info.return_value = {
2278+ 'blkdevname': 'dm-0',
2279+ 'blkdevs_used': volume_path,
2280+ 'name': 'cryptroot',
2281+ 'uuid': self.random_string(),
2282+ 'subsystem': 'crypt'
2283+ }
2284+ m_ex.return_value = True
2285+
2286+ info = self.storage_config['dmcrypt0']
2287+ info['preserve'] = True
2288+ block_meta.dm_crypt_handler(info, self.storage_config)
2289+ self.assertEqual(len(util.load_file(self.crypttab).splitlines()), 1)
2290+
2291+ @patch('curtin.commands.block_meta.os.path.exists')
2292+ def test_dm_crypt_preserve_raises_exception_if_not_present(self, m_ex):
2293+ """ verify dm_crypt raises exception if dm device not present. """
2294+ volume_path = self.random_string()
2295+ self.m_getpath.return_value = volume_path
2296+ m_ex.return_value = False
2297+ info = self.storage_config['dmcrypt0']
2298+ info['preserve'] = True
2299+ with self.assertRaises(RuntimeError):
2300+ block_meta.dm_crypt_handler(info, self.storage_config)
2301+
2302+ @patch('curtin.commands.block_meta.os.path.exists')
2303+ def test_dm_crypt_preserve_raises_exception_if_wrong_dev_used(self, m_ex):
2304+ """ verify dm_crypt preserve raises exception on wrong dev used. """
2305+ volume_path = self.random_string()
2306+ self.m_getpath.return_value = volume_path
2307+ self.m_block.dmsetup_info.return_value = {
2308+ 'blkdevname': 'dm-0',
2309+ 'blkdevs_used': self.random_string(),
2310+ 'name': 'cryptroot',
2311+ 'uuid': self.random_string(),
2312+ 'subsystem': 'crypt'
2313+ }
2314+ m_ex.return_value = True
2315+ info = self.storage_config['dmcrypt0']
2316+ info['preserve'] = True
2317+ with self.assertRaises(RuntimeError):
2318+ block_meta.dm_crypt_handler(info, self.storage_config)
2319+
2320+
2321+class TestRaidHandler(CiTestCase):
2322+
2323+ def setUp(self):
2324+ super(TestRaidHandler, self).setUp()
2325+
2326+ basepath = 'curtin.commands.block_meta.'
2327+ self.add_patch(basepath + 'get_path_to_storage_volume', 'm_getpath')
2328+ self.add_patch(basepath + 'util', 'm_util')
2329+ self.add_patch(basepath + 'make_dname', 'm_dname')
2330+ self.add_patch(basepath + 'mdadm', 'm_mdadm')
2331+ self.add_patch(basepath + 'block', 'm_block')
2332+ self.add_patch(basepath + 'udevadm_settle', 'm_uset')
2333+
2334+ self.target = "my_target"
2335+ self.config = {
2336+ 'storage': {
2337+ 'version': 1,
2338+ 'config': [
2339+ {'grub_device': 1,
2340+ 'id': 'sda',
2341+ 'model': 'QEMU HARDDISK',
2342+ 'name': 'main_disk',
2343+ 'ptable': 'gpt',
2344+ 'serial': 'disk-a',
2345+ 'type': 'disk',
2346+ 'wipe': 'superblock'},
2347+ {'device': 'sda',
2348+ 'flag': 'bios_grub',
2349+ 'id': 'bios_boot_partition',
2350+ 'size': '1MB',
2351+ 'type': 'partition'},
2352+ {'device': 'sda',
2353+ 'id': 'sda1',
2354+ 'size': '3GB',
2355+ 'type': 'partition'},
2356+ {'id': 'sdb',
2357+ 'model': 'QEMU HARDDISK',
2358+ 'name': 'second_disk',
2359+ 'ptable': 'gpt',
2360+ 'serial': 'disk-b',
2361+ 'type': 'disk',
2362+ 'wipe': 'superblock'},
2363+ {'device': 'sdb',
2364+ 'id': 'sdb1',
2365+ 'size': '3GB',
2366+ 'type': 'partition'},
2367+ {'id': 'sdc',
2368+ 'model': 'QEMU HARDDISK',
2369+ 'name': 'third_disk',
2370+ 'ptable': 'gpt',
2371+ 'serial': 'disk-c',
2372+ 'type': 'disk',
2373+ 'wipe': 'superblock'},
2374+ {'device': 'sdc',
2375+ 'id': 'sdc1',
2376+ 'size': '3GB',
2377+ 'type': 'partition'},
2378+ {'devices': ['sda1', 'sdb1', 'sdc1'],
2379+ 'id': 'mddevice',
2380+ 'name': 'md0',
2381+ 'raidlevel': 5,
2382+ 'type': 'raid'},
2383+ {'fstype': 'ext4',
2384+ 'id': 'md_root',
2385+ 'type': 'format',
2386+ 'volume': 'mddevice'},
2387+ {'device': 'md_root',
2388+ 'id': 'md_mount',
2389+ 'path': '/',
2390+ 'type': 'mount'}],
2391+ },
2392+ }
2393+ self.storage_config = (
2394+ block_meta.extract_storage_ordered_dict(self.config))
2395+ self.m_util.load_command_environment.return_value = {'fstab': None}
2396+
2397+ def test_raid_handler(self):
2398+ """ raid_handler creates raid device. """
2399+ devices = [self.random_string(), self.random_string(),
2400+ self.random_string()]
2401+ md_devname = '/dev/' + self.storage_config['mddevice']['name']
2402+ self.m_block.dev_path.return_value = '/dev/md0'
2403+ self.m_getpath.side_effect = iter(devices)
2404+ block_meta.raid_handler(self.storage_config['mddevice'],
2405+ self.storage_config)
2406+ self.assertEqual([call(md_devname, 5, devices, [], '')],
2407+ self.m_mdadm.mdadm_create.call_args_list)
2408+
2409+ @patch('curtin.commands.block_meta.raid_verify')
2410+ def test_raid_handler_preserves_existing_device(self, m_verify):
2411+ """ raid_handler preserves existing device. """
2412+
2413+ devices = [self.random_string(), self.random_string(),
2414+ self.random_string()]
2415+ self.m_block.dev_path.return_value = '/dev/md0'
2416+ self.m_getpath.side_effect = iter(devices)
2417+ m_verify.return_value = True
2418+ self.storage_config['mddevice']['preserve'] = True
2419+ block_meta.raid_handler(self.storage_config['mddevice'],
2420+ self.storage_config)
2421+ self.assertEqual(0, self.m_mdadm.mdadm_create.call_count)
2422+
2423+ def test_raid_handler_preserve_verifies_md_device(self):
2424+ """ raid_handler preserve verifies existing raid device. """
2425+
2426+ devices = [self.random_string(), self.random_string(),
2427+ self.random_string()]
2428+ md_devname = '/dev/' + self.storage_config['mddevice']['name']
2429+ self.m_block.dev_path.return_value = '/dev/md0'
2430+ self.m_getpath.side_effect = iter(devices)
2431+ self.m_mdadm.md_check.return_value = True
2432+ self.storage_config['mddevice']['preserve'] = True
2433+ block_meta.raid_handler(self.storage_config['mddevice'],
2434+ self.storage_config)
2435+ self.assertEqual(0, self.m_mdadm.mdadm_create.call_count)
2436+ self.assertEqual([call(md_devname, 5, devices, [])],
2437+ self.m_mdadm.md_check.call_args_list)
2438+
2439+ def test_raid_handler_preserve_verifies_md_device_after_assemble(self):
2440+ """ raid_handler preserve assembles array if first check fails. """
2441+
2442+ devices = [self.random_string(), self.random_string(),
2443+ self.random_string()]
2444+ md_devname = '/dev/' + self.storage_config['mddevice']['name']
2445+ self.m_block.dev_path.return_value = '/dev/md0'
2446+ self.m_getpath.side_effect = iter(devices)
2447+ self.m_mdadm.md_check.side_effect = iter([False, True])
2448+ self.storage_config['mddevice']['preserve'] = True
2449+ block_meta.raid_handler(self.storage_config['mddevice'],
2450+ self.storage_config)
2451+ self.assertEqual(0, self.m_mdadm.mdadm_create.call_count)
2452+ self.assertEqual([call(md_devname, 5, devices, [])] * 2,
2453+ self.m_mdadm.md_check.call_args_list)
2454+ self.assertEqual([call(md_devname, devices, [])],
2455+ self.m_mdadm.mdadm_assemble.call_args_list)
2456+
2457+ def test_raid_handler_preserve_raises_exception_if_verify_fails(self):
2458+ """ raid_handler preserve raises exception on failed verification."""
2459+
2460+ devices = [self.random_string(), self.random_string(),
2461+ self.random_string()]
2462+ md_devname = '/dev/' + self.storage_config['mddevice']['name']
2463+ self.m_block.dev_path.return_value = '/dev/md0'
2464+ self.m_getpath.side_effect = iter(devices)
2465+ self.m_mdadm.md_check.side_effect = iter([False, False])
2466+ self.storage_config['mddevice']['preserve'] = True
2467+ with self.assertRaises(RuntimeError):
2468+ block_meta.raid_handler(self.storage_config['mddevice'],
2469+ self.storage_config)
2470+ self.assertEqual(0, self.m_mdadm.mdadm_create.call_count)
2471+ self.assertEqual([call(md_devname, 5, devices, [])] * 2,
2472+ self.m_mdadm.md_check.call_args_list)
2473+ self.assertEqual([call(md_devname, devices, [])],
2474+ self.m_mdadm.mdadm_assemble.call_args_list)
2475+
2476+
2477+class TestBcacheHandler(CiTestCase):
2478+
2479+ def setUp(self):
2480+ super(TestBcacheHandler, self).setUp()
2481+
2482+ basepath = 'curtin.commands.block_meta.'
2483+ self.add_patch(basepath + 'get_path_to_storage_volume', 'm_getpath')
2484+ self.add_patch(basepath + 'util', 'm_util')
2485+ self.add_patch(basepath + 'make_dname', 'm_dname')
2486+ self.add_patch(basepath + 'bcache', 'm_bcache')
2487+ self.add_patch(basepath + 'block', 'm_block')
2488+ self.add_patch(basepath + 'disk_handler', 'm_disk_handler')
2489+
2490+ self.target = "my_target"
2491+ self.config = {
2492+ 'storage': {
2493+ 'version': 1,
2494+ 'config': [
2495+ {'grub_device': True,
2496+ 'id': 'id_rotary0',
2497+ 'name': 'rotary0',
2498+ 'ptable': 'msdos',
2499+ 'serial': 'disk-a',
2500+ 'type': 'disk',
2501+ 'wipe': 'superblock'},
2502+ {'id': 'id_ssd0',
2503+ 'name': 'ssd0',
2504+ 'serial': 'disk-b',
2505+ 'type': 'disk',
2506+ 'wipe': 'superblock'},
2507+ {'device': 'id_rotary0',
2508+ 'id': 'id_rotary0_part1',
2509+ 'name': 'rotary0-part1',
2510+ 'number': 1,
2511+ 'offset': '1M',
2512+ 'size': '999M',
2513+ 'type': 'partition',
2514+ 'wipe': 'superblock'},
2515+ {'device': 'id_rotary0',
2516+ 'id': 'id_rotary0_part2',
2517+ 'name': 'rotary0-part2',
2518+ 'number': 2,
2519+ 'size': '9G',
2520+ 'type': 'partition',
2521+ 'wipe': 'superblock'},
2522+ {'backing_device': 'id_rotary0_part2',
2523+ 'cache_device': 'id_ssd0',
2524+ 'cache_mode': 'writeback',
2525+ 'id': 'id_bcache0',
2526+ 'name': 'bcache0',
2527+ 'type': 'bcache'},
2528+ {'fstype': 'ext4',
2529+ 'id': 'bootfs',
2530+ 'label': 'boot-fs',
2531+ 'type': 'format',
2532+ 'volume': 'id_rotary0_part1'},
2533+ {'fstype': 'ext4',
2534+ 'id': 'rootfs',
2535+ 'label': 'root-fs',
2536+ 'type': 'format',
2537+ 'volume': 'id_bcache0'},
2538+ {'device': 'rootfs',
2539+ 'id': 'rootfs_mount',
2540+ 'path': '/',
2541+ 'type': 'mount'},
2542+ {'device': 'bootfs',
2543+ 'id': 'bootfs_mount',
2544+ 'path': '/boot',
2545+ 'type': 'mount'}
2546+ ],
2547+ },
2548+ }
2549+ self.storage_config = (
2550+ block_meta.extract_storage_ordered_dict(self.config))
2551+
2552+ def test_bcache_handler(self):
2553+ """ bcache_handler creates bcache device. """
2554+ backing_device = self.random_string()
2555+ caching_device = self.random_string()
2556+ cset_uuid = self.random_string()
2557+ cache_mode = self.storage_config['id_bcache0']['cache_mode']
2558+ self.m_getpath.side_effect = iter([backing_device, caching_device])
2559+ self.m_bcache.create_cache_device.return_value = cset_uuid
2560+
2561+ block_meta.bcache_handler(self.storage_config['id_bcache0'],
2562+ self.storage_config)
2563+ self.assertEqual([call(caching_device)],
2564+ self.m_bcache.create_cache_device.call_args_list)
2565+ self.assertEqual([
2566+ call(backing_device, caching_device, cache_mode, cset_uuid)],
2567+ self.m_bcache.create_backing_device.call_args_list)
2568+
2569
2570 class TestPartitionHandler(CiTestCase):
2571
2572diff --git a/tests/unittests/test_storage_config.py b/tests/unittests/test_storage_config.py
2573index 8663ba5..0f3307d 100644
2574--- a/tests/unittests/test_storage_config.py
2575+++ b/tests/unittests/test_storage_config.py
2576@@ -7,6 +7,7 @@ from curtin.storage_config import ProbertParser as baseparser
2577 from curtin.storage_config import (BcacheParser, BlockdevParser, DasdParser,
2578 DmcryptParser, FilesystemParser, LvmParser,
2579 RaidParser, MountParser, ZfsParser)
2580+from curtin.storage_config import ptable_uuid_to_flag_entry
2581 from curtin import util
2582
2583
2584@@ -207,21 +208,21 @@ class TestBlockdevParser(CiTestCase):
2585 expected_tuple = ('boot', 'EF00')
2586 for guid in boot_guids:
2587 self.assertEqual(expected_tuple,
2588- self.bdevp.ptable_uuid_to_flag_entry(guid))
2589+ ptable_uuid_to_flag_entry(guid))
2590
2591 # XXX: Parameterize me
2592 def test_blockdev_ptable_uuid_flag_invalid(self):
2593 """ BlockdevParser returns (None, None) for invalid uuids. """
2594 for invalid in [None, '', {}, []]:
2595 self.assertEqual((None, None),
2596- self.bdevp.ptable_uuid_to_flag_entry(invalid))
2597+ ptable_uuid_to_flag_entry(invalid))
2598
2599 # XXX: Parameterize me
2600 def test_blockdev_ptable_uuid_flag_unknown_uuid(self):
2601 """ BlockdevParser returns (None, None) for unknown uuids. """
2602 for unknown in [self.random_string(), self.random_string()]:
2603 self.assertEqual((None, None),
2604- self.bdevp.ptable_uuid_to_flag_entry(unknown))
2605+ ptable_uuid_to_flag_entry(unknown))
2606
2607 def test_get_unique_ids(self):
2608 """ BlockdevParser extracts uniq udev ID_ values. """
2609diff --git a/tests/vmtests/__init__.py b/tests/vmtests/__init__.py
2610index 39dfb40..f65b3bf 100644
2611--- a/tests/vmtests/__init__.py
2612+++ b/tests/vmtests/__init__.py
2613@@ -1680,6 +1680,8 @@ class VMBaseClass(TestCase):
2614 'wwn': 'ID_WWN_WITH_EXTENSION',
2615 }
2616 for disk in disks:
2617+ if not disk.get('name'):
2618+ continue
2619 dname_file = "%s.rules" % sanitize_dname(disk.get('name'))
2620 contents = self.load_collect_file("udev_rules.d/%s" % dname_file)
2621 for key, key_name in key_to_udev.items():
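
The guard added above skips dname-rule verification for disks that carry no 'name' key, rather than failing in sanitize_dname. An illustrative reduction of the new behavior (the disk dicts are made up):

    disks = [{'serial': 'disk-a'}, {'name': 'main_disk', 'serial': 'disk-b'}]
    # Only named disks have a <name>.rules file to load and check.
    checked = [d['name'] for d in disks if d.get('name')]
    assert checked == ['main_disk']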
2622diff --git a/tests/vmtests/test_preserve_bcache.py b/tests/vmtests/test_preserve_bcache.py
2623new file mode 100644
2624index 0000000..e2d2a34
2625--- /dev/null
2626+++ b/tests/vmtests/test_preserve_bcache.py
2627@@ -0,0 +1,67 @@
2628+# This file is part of curtin. See LICENSE file for copyright and license info.
2629+
2630+from . import VMBaseClass, skip_if_flag
2631+from .releases import base_vm_classes as relbase
2632+
2633+import textwrap
2634+
2635+
2636+class TestPreserveBcache(VMBaseClass):
2637+ arch_skip = [
2638+ "s390x", # lp:1565029
2639+ ]
2640+ test_type = 'storage'
2641+ conf_file = 'examples/tests/preserve-bcache.yaml'
2642+ nr_cpus = 2
2643+ dirty_disks = False
2644+ extra_disks = ['2G']
2645+ extra_collect_scripts = [textwrap.dedent("""
2646+ cd OUTPUT_COLLECT_D
2647+ ls / > ls-root
2648+ bcache-super-show /dev/vda2 > bcache_super_vda2
2649+ ls /sys/fs/bcache > bcache_ls
2650+ cat /sys/block/bcache0/bcache/cache_mode > bcache_cache_mode
2651+
2652+ exit 0
2653+ """)]
2654+
2655+ @skip_if_flag('expected_failure')
2656+ def test_bcache_output_files_exist(self):
2657+ self.output_files_exist(["bcache_super_vda2", "bcache_ls",
2658+ "bcache_cache_mode"])
2659+
2660+ @skip_if_flag('expected_failure')
2661+ def test_bcache_status(self):
2662+ bcache_cset_uuid = None
2663+ for line in self.load_collect_file("bcache_super_vda2").splitlines():
2664+ if line != "" and line.split()[0] == "cset.uuid":
2665+ bcache_cset_uuid = line.split()[-1].rstrip()
2666+ self.assertIsNotNone(bcache_cset_uuid)
2667+ self.assertTrue(bcache_cset_uuid in
2668+ self.load_collect_file("bcache_ls").splitlines())
2669+
2670+ @skip_if_flag('expected_failure')
2671+ def test_bcache_cachemode(self):
2672+ self.check_file_regex("bcache_cache_mode", r"\[writeback\]")
2673+
2674+ @skip_if_flag('expected_failure')
2675+ def test_proc_cmdline_root_by_uuid(self):
2676+ self.check_file_regex("proc_cmdline", r"root=UUID=")
2677+
2678+ def test_preserved_data_exists(self):
2679+ self.assertIn('existing', self.load_collect_file('ls-root'))
2680+
2681+
2682+class BionicTestPreserveBcache(relbase.bionic, TestPreserveBcache):
2683+ __test__ = True
2684+
2685+
2686+class EoanTestPreserveBcache(relbase.eoan, TestPreserveBcache):
2687+ __test__ = True
2688+
2689+
2690+class FocalTestPreserveBcache(relbase.focal, TestPreserveBcache):
2691+ __test__ = True
2692+
2693+
2694+# vi: ts=4 expandtab syntax=python
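
test_bcache_status above keys on the cset.uuid line of bcache-super-show output, taking the first token as the key and the last as the value. An illustrative line (the UUID is made up) run through the same extraction:

    line = "cset.uuid\t\t0f6f89b0-31f9-4a4e-9f0c-8f2e5e6e4d7a"
    assert line.split()[0] == "cset.uuid"
    cset_uuid = line.split()[-1].rstrip()
    assert cset_uuid == "0f6f89b0-31f9-4a4e-9f0c-8f2e5e6e4d7a"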
2695diff --git a/tests/vmtests/test_preserve_lvm.py b/tests/vmtests/test_preserve_lvm.py
2696new file mode 100644
2697index 0000000..90f15cb
2698--- /dev/null
2699+++ b/tests/vmtests/test_preserve_lvm.py
2700@@ -0,0 +1,80 @@
2701+# This file is part of curtin. See LICENSE file for copyright and license info.
2702+
2703+from . import VMBaseClass
2704+from .releases import base_vm_classes as relbase
2705+
2706+import json
2707+import os
2708+import textwrap
2709+
2710+
2711+class TestLvmPreserveAbs(VMBaseClass):
2712+ conf_file = "examples/tests/preserve-lvm.yaml"
2713+ test_type = 'storage'
2714+ interactive = False
2715+ extra_disks = ['10G']
2716+ dirty_disks = False
2717+ extra_collect_scripts = [textwrap.dedent("""
2718+ cd OUTPUT_COLLECT_D
2719+ lsblk --json --fs -o KNAME,MOUNTPOINT,UUID,FSTYPE > lsblk.json
2720+ lsblk --fs -P -o KNAME,MOUNTPOINT,UUID,FSTYPE > lsblk.out
2721+ pvdisplay -C --separator = -o vg_name,pv_name --noheadings > pvs
2722+ lvdisplay -C --separator = -o lv_name,vg_name --noheadings > lvs
2723+ pvdisplay > pvdisplay
2724+ vgdisplay > vgdisplay
2725+ lvdisplay > lvdisplay
2726+ ls -al /dev/root_vg/ > dev_root_vg
2727+ ls / > ls-root
2728+
2729+ exit 0
2730+ """)]
2731+ conf_replace = {}
2732+
2733+ def get_fstab_output(self):
2734+ rootvg = self._dname_to_kname('root_vg-lv1_root')
2735+ return [
2736+ (self._kname_to_uuid_devpath('dm-uuid', rootvg), '/', 'defaults')
2737+ ]
2738+
2739+ def test_output_files_exist(self):
2740+ self.output_files_exist(["fstab"])
2741+
2742+ def test_rootfs_format(self):
2743+ self.output_files_exist(["lsblk.json"])
2744+ if os.path.getsize(self.collect_path('lsblk.json')) > 0:
2745+ lsblk_data = json.load(open(self.collect_path('lsblk.json')))
2746+ print(json.dumps(lsblk_data, indent=4))
2747+ [entry] = [entry for entry in lsblk_data.get('blockdevices')
2748+ if entry['mountpoint'] == '/']
2749+ print(entry)
2750+ self.assertEqual('ext4', entry['fstype'])
2751+ else:
2752+ # no json output on older releases
2753+ self.output_files_exist(["lsblk.out"])
2754+ lsblk_data = open(self.collect_path('lsblk.out')).readlines()
2755+ print(lsblk_data)
2756+ [root] = [line.strip() for line in lsblk_data
2757+ if 'MOUNTPOINT="/"' in line]
2758+ print(root)
2759+ [fstype] = [val.replace('"', '').split("=")[1]
2760+ for val in root.split() if 'FSTYPE' in val]
2761+ print(fstype)
2762+ self.assertEqual('ext4', fstype)
2763+
2764+ def test_preserved_data_exists(self):
2765+ self.assertIn('existing', self.load_collect_file('ls-root'))
2766+
2767+
2768+class BionicTestLvmPreserve(relbase.bionic, TestLvmPreserveAbs):
2769+ __test__ = True
2770+
2771+
2772+class EoanTestLvmPreserve(relbase.eoan, TestLvmPreserveAbs):
2773+ __test__ = True
2774+
2775+
2776+class FocalTestLvmPreserve(relbase.focal, TestLvmPreserveAbs):
2777+ __test__ = True
2778+
2779+
2780+# vi: ts=4 expandtab syntax=python
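
The fallback branch of test_rootfs_format parses the key="value" pairs format of lsblk --fs -P for releases whose lsblk lacks --json support. An illustrative line (values made up) run through the same comprehension:

    line = 'KNAME="dm-0" MOUNTPOINT="/" UUID="abcd-ef12" FSTYPE="ext4"'
    # Strip quotes and split on '=' to pull the FSTYPE value out of the pairs.
    [fstype] = [val.replace('"', '').split("=")[1]
                for val in line.split() if 'FSTYPE' in val]
    assert fstype == 'ext4'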
2781diff --git a/tests/vmtests/test_preserve_partition_wipe_vg.py b/tests/vmtests/test_preserve_partition_wipe_vg.py
2782new file mode 100644
2783index 0000000..b779ad1
2784--- /dev/null
2785+++ b/tests/vmtests/test_preserve_partition_wipe_vg.py
2786@@ -0,0 +1,36 @@
2787+# This file is part of curtin. See LICENSE file for copyright and license info.
2788+
2789+from . import VMBaseClass
2790+from .releases import base_vm_classes as relbase
2791+
2792+import textwrap
2793+
2794+
2795+class TestPreserveWipeLvm(VMBaseClass):
2796+ """ Test that curtin can reuse a partition that was previously in lvm. """
2797+ conf_file = "examples/tests/preserve-partition-wipe-vg.yaml"
2798+ extra_disks = ['20G']
2799+ uefi = False
2800+ extra_collect_scripts = [textwrap.dedent("""
2801+ cd OUTPUT_COLLECT_D
2802+ ls /opt > ls-opt
2803+ exit 0
2804+ """)]
2805+
2806+ def test_existing_exists(self):
2807+ self.assertIn('existing', self.load_collect_file('ls-opt'))
2808+
2809+
2810+class BionicTestPreserveWipeLvm(relbase.bionic, TestPreserveWipeLvm):
2811+ __test__ = True
2812+
2813+
2814+class EoanTestPreserveWipeLvm(relbase.eoan, TestPreserveWipeLvm):
2815+ __test__ = True
2816+
2817+
2818+class FocalTestPreserveWipeLvm(relbase.focal, TestPreserveWipeLvm):
2819+ __test__ = True
2820+
2821+
2822+# vi: ts=4 expandtab syntax=python
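
For orientation, a hypothetical minimal storage config (written in the dict form the unittests above use, not the shipped preserve-partition-wipe-vg.yaml) showing the repurpose pattern this vmtest exercises: the partition is kept with preserve, its old metadata is cleared with wipe, and the device is handed to a new volume group:

    storage_config = [
        {'id': 'disk0-part1', 'type': 'partition', 'device': 'disk0',
         'number': 1, 'size': '4GB', 'preserve': True, 'wipe': 'superblock'},
        {'id': 'vg1', 'type': 'lvm_volgroup', 'name': 'vg1',
         'devices': ['disk0-part1']},
    ]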
