fix tearing down ChrootableTarget when mounts appear while it is set up
There are several bug reports that boil down to
ChrootableTarget.__exit__ failing to unmount bind mounts with "target is
busy". For example, ssh-ing in while curtin is running tends to trigger
this: the ssh session creates a mount under /run, and unmounting
/target/run then fails because of the sub mount. Fix this by marking the
mountpoints recursively private and then unmounting them recursively.
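The teardown described above can be sketched as building two util-linux
commands per mountpoint (the helper name is hypothetical; curtin's real
code runs these through its own subprocess wrapper):

```python
def umount_commands(mountpoint):
    """Commands to tear down a mountpoint even if new mounts appeared
    beneath it (illustrative sketch, not curtin's actual helper)."""
    return [
        # Stop mount/unmount events propagating out of this subtree.
        ['mount', '--make-rprivate', mountpoint],
        # Unmount the mountpoint and any sub mounts under it.
        ['umount', '--recursive', mountpoint],
    ]
```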
disk_handler: check wipe field when deciding whether to reformat raids
Currently preserve=true for raid means "preserve the RAID array"
and ALSO "preserve the partition table". So change the
interpretation of preserve to be solely about preserving the
array and instead check the wipe field to decide if the array
should get a new partition table or not.
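The new split of responsibilities can be sketched as a small predicate,
assuming a wipe field that is None when unset (hypothetical function
name, not curtin's actual code):

```python
def raid_needs_new_partition_table(preserve, wipe):
    # preserve now only means "keep the existing array"; whether to
    # write a fresh partition table is decided by the wipe field.
    if not preserve:
        return True          # array is recreated, so always reformat
    return wipe is not None  # preserved array: reformat only if wipe set
```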
pylintrc: explicitly list the DISTROS generated-members
For some obscure reason setting
generated-members=DISTROS\.
in pylintrc makes the pylint run flaky, causing failures like:
curtin/commands/install_grub.py:93:19:
E1101: Instance of 'Distros' has no 'debian' member (no-member)
curtin/commands/install_grub.py:155:28:
E1101: Instance of 'Distros' has no 'redhat' member (no-member)
These failures:
- happen about 15% of the time, at least on Bionic
- also happen with the latest stable version of pylint (2.8.2)
- only happen on install_grub.py (which only refers to debian and redhat)
- do not seem to happen on Impish
- possibly related: https://github.com/PyCQA/pylint/issues/1628
But the truth is I don't understand why generated-members=DISTROS\.
even works some of the time for some generated members. Let's replace
it with the explicit list of the (non auto-detected) generated members,
so it will always work, and we'll know why it does.
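A pylintrc entry along these lines would express the explicit list (the
member names shown are illustrative, based on the two members
install_grub.py refers to; the real list would cover all of curtin's
distros):

```ini
[TYPECHECK]
# Replace the flaky regex generated-members=DISTROS\. with an
# explicit list of the non auto-detected generated members.
generated-members=DISTROS.debian,DISTROS.redhat
```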
block_meta: make preserve: true on a raid in a container work
Pass any supplied container name to raid_verify and on to md_check and
check it against the existing device.
Also tidy up a few other things in the raid verification code path: make
checking functions consistently raise ValueError on failure rather than
returning True / False and have the verification of raid level actually
check the level of the existing array.
This also fixes preserve: true on a raid0 array while we are there --
raid0 arrays do not have degraded or sync_action attributes.
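The "raise ValueError instead of returning True/False" convention can be
sketched like this (the function name and signature are illustrative,
not curtin's exact code):

```python
def md_check_raidlevel(md_device, expected_level, actual_level):
    # Verification helpers now raise ValueError on mismatch rather
    # than returning a boolean, so callers cannot ignore a failure.
    if actual_level != expected_level:
        raise ValueError(
            '%s: expected raid level %s but found %s'
            % (md_device, expected_level, actual_level))
```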
NVMe namespaces are only verified if the wwn comes from either the
NGUID or EUI64. With NVMe 1.3 a third ID, UUID, is now[1] supported for
a namespace. If it exists it takes precedence, and the wwid in sysfs
will be populated with that uuid in a uuid.XXXXX format, which does not
validate correctly against the filter in curtin.
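A widened filter could look roughly like the following; the pattern is a
sketch of the idea (accept a uuid.-prefixed wwid alongside the eui.
form), not curtin's actual regular expression:

```python
import re

# Accept eui.-style wwids and, per NVMe 1.3, uuid.-prefixed ones.
# Illustrative pattern only; curtin's real filter may differ.
WWID_RE = re.compile(r'^(eui\.[0-9a-f]+|uuid\.[0-9a-f-]+)$', re.IGNORECASE)

def wwid_is_valid(wwid):
    return WWID_RE.match(wwid) is not None
```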
The 'nodev' marker in /proc/filesystems is intended to indicate
"whether the file system is mounted on a block device" https://red.ht/3toGT5b
Use this info to set nodev items to passno 0, and default to 1 for
non-nodev filesystems or if the filesystem isn't listed there.
Except that /proc/filesystems doesn't list 'swap' or 'none', so
special-case those to passno 0.
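The rule above can be sketched as follows, taking the contents of
/proc/filesystems as a string (hypothetical helper name; curtin's real
code is structured differently):

```python
def get_passno(fstype, proc_filesystems):
    """Sketch: derive an fstab passno from /proc/filesystems contents.

    Lines look like 'nodev\tsysfs' for virtual filesystems and
    '\text4' for block-device-backed ones.
    """
    if fstype in ('swap', 'none'):
        return 0  # never listed in /proc/filesystems; never fsck'ed
    nodev = set()
    for line in proc_filesystems.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[0] == 'nodev':
            nodev.add(fields[1])
    if fstype in nodev:
        return 0  # no block device to check
    return 1      # block-backed or not listed: default to checking
```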