> I'm pretty sure that reading from a freshly created logical volume or RAID
> will get you zeros irrespective of what was on the disks before (and mdadm
> will effectively do an asynchronous wipe: zero anyway aiui) but I guess if
Curtin needs to be a bit more defensive because mdadm is complicated.
https://bugs.launchpad.net/curtin/+bug/1815018
In that bug, as soon as the raid is recreated, udev probes the new
/dev/mdXXX and *finds* whatever was there before (bcache, IIRC).
mdadm --zero-superblock removes the raid *metadata* but does *nothing*
to the contents, so if you recreate a raid with the same stripe size and
set of disks, you will *find* your data again. It's almost as if RAID was
designed to not lose data. Curtin's clear-holders is designed to find that
data and kill it until it's really dead. =)
> the user explicitly asks for their new raid or lv to have zeros written to
> it, we should do that.
+1