Generally, disks should always have wipe set (as we've had so far), unless they need to be preserved in some way.
This is much like the current partial-RAID bug, and it's easy to see where this will fail.
If vdc had *two* partitions, one for the RAID and one for a filesystem to be preserved, it's clear that we *can't* wipe vdc itself, but we still need to wipe the RAID partition.
For that case, we'd want preserve: true on vdc, wipe: superblock on vdc-part1, and preserve: true on vdc-part2.
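The layout described above could look like this as a storage-config excerpt (a sketch only; the ids, paths, and sizes are illustrative assumptions, not taken from the actual config):

```yaml
# Hypothetical storage-config excerpt for the two-partition vdc scenario.
- id: vdc
  type: disk
  path: /dev/vdc        # illustrative path
  preserve: true        # don't wipe the whole disk
- id: vdc-part1
  type: partition
  device: vdc
  number: 1
  wipe: superblock      # the RAID member partition must be wiped
- id: vdc-part2
  type: partition
  device: vdc
  number: 2
  preserve: true        # filesystem to keep
```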
With the fix that allows partitions to be accepted into the clear-holders list, I believe that should work.
So, can we update this config to include a second, preserved partition on vdc, as described above?
I suspect this will fail until we add the fix to block_meta that I sent you the other day:
-def get_disk_paths_from_storage_config(storage_config):
-    """Returns a list of disk paths in a storage config filtering out
-    preserved or disks which do not have wipe configuration.
+def get_device_paths_from_storage_config(storage_config):
+    """Returns a list of device paths in a storage config filtering out
+    preserved or devices which do not have wipe configuration.
     return [get_path_if_present(k, storage_config)
             for (k, v) in storage_config.items()
-            if v.get('type') == 'disk' and
+            if v.get('type') in ['disk', 'partition'] and config.value_as_boolean(v.get('wipe')) and
             not config.value_as_boolean(v.get('preserve'))]
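To make the effect of that filter concrete, here is a standalone sketch of the selection logic. The helpers `value_as_boolean` and `get_path_if_present` below are simplified stand-ins for curtin's real ones, and the sample dict mirrors the vdc scenario above (paths are assumptions):

```python
def value_as_boolean(value):
    # Simplified stand-in for curtin's config.value_as_boolean:
    # treat common "false-y" spellings as False.
    return bool(value) and str(value).lower() not in ('0', 'false', 'none', 'off', 'no', '')


def get_path_if_present(item_id, storage_config):
    # Stand-in helper: return the device path recorded for this item, if any.
    return storage_config[item_id].get('path')


def get_device_paths_from_storage_config(storage_config):
    # Keep disks *and* partitions that have a wipe setting and are not preserved.
    return [get_path_if_present(k, storage_config)
            for (k, v) in storage_config.items()
            if v.get('type') in ['disk', 'partition'] and
            value_as_boolean(v.get('wipe')) and
            not value_as_boolean(v.get('preserve'))]


storage = {
    'vdc': {'type': 'disk', 'path': '/dev/vdc', 'preserve': True},
    'vdc-part1': {'type': 'partition', 'path': '/dev/vdc1', 'wipe': 'superblock'},
    'vdc-part2': {'type': 'partition', 'path': '/dev/vdc2', 'preserve': True},
}
print(get_device_paths_from_storage_config(storage))  # → ['/dev/vdc1']
```

Only vdc-part1 is selected for wiping: the preserved disk and the preserved second partition both drop out of the list.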
% git diff -- curtin/commands/block_meta.py
diff --git a/curtin/commands/block_meta.py b/curtin/commands/block_meta.py
index 79493fc..5d53c0b 100644
--- a/curtin/commands/block_meta.py
+++ b/curtin/commands/block_meta.py
@@ -67,7 +67,7 @@ def block_meta(args):
     if devices is None:
         devices = []
         if 'storage' in cfg:
-            devices = get_disk_paths_from_storage_config(
+            devices = get_device_paths_from_storage_config(
                 extract_storage_ordered_dict(cfg))
         if len(devices) == 0:
             devices = cfg.get('block-meta', {}).get('devices', [])
@@ -1505,9 +1505,9 @@ def zfs_handler(info, storage_config):
-def get_disk_paths_from_storage_config(storage_config):
-    """Returns a list of disk paths in a storage config filtering out
-    preserved or disks which do not have wipe configuration.
+def get_device_paths_from_storage_config(storage_config):
+    """Returns a list of device paths in a storage config filtering out
+    preserved or devices which do not have wipe configuration.

     :param storage_config: Ordered dict of storage configuration
     """
@@ -1516,7 +1516,7 @@ def get_disk_paths_from_storage_config(storage_config):
     return [get_path_if_present(k, storage_config)
             for (k, v) in storage_config.items()
-            if v.get('type') == 'disk' and
+            if v.get('type') in ['disk', 'partition'] and config.value_as_boolean(v.get('wipe')) and
             not config.value_as_boolean(v.get('preserve'))]