Support for Intel VROC (Virtual RAID On CPU)

Bug #1893661 reported by György Szombathelyi
This bug affects 3 people
Affects: curtin (Ubuntu)
Status: Fix Released
Importance: Wishlist
Assigned to: György Szombathelyi

Bug Description

MaaS and curtin currently don't support creating RAID arrays with NVMe devices using the Intel Virtual RAID On CPU technology. Even worse, when curtin encounters one, it tries to destroy it, but fails to do that as well.

Creating a VROC array first requires creating an external container via mdadm (the -e option; maybe this could be generalized):
mdadm -C /dev/md/imsm0 /dev/nvme[0-3]n1 -n 4 -e imsm

Then the RAID array can be created:
mdadm -C /dev/md126 /dev/md/imsm0 -n 4 -l 5

Revision history for this message
Ryan Harper (raharper) wrote :

Hello,

Thanks for filing the bug. Do you know if VROC is a different feature than RSTe? There's an existing feature request for that here:

https://bugs.launchpad.net/curtin/+bug/1790055

Changed in curtin (Ubuntu):
status: New → Incomplete
Revision history for this message
György Szombathelyi (gyurco) wrote :

VROC is only for NVMe devices, where the storage is connected directly to the CPU's PCIe lanes. It's probable that they use the same metadata format, and according to the docs it requires the same steps to create the array (first the container, then the array itself), so I think both can be supported with the same effort.

Revision history for this message
Ryan Harper (raharper) wrote :

Thanks.

I won't mark this as a duplicate yet without further confirmation.

Would you be interested in working on the feature? I don't have access to any Intel VROC devices, which makes it difficult to implement.

I don't believe QEMU or other virt platforms emulate Intel VROC devices; one needs to have access to real hardware for testing.

Revision history for this message
György Szombathelyi (gyurco) wrote :

It would be interesting; I'll try to "borrow" a VROC-enabled server for a few days. I don't think it will be too hard to add this to curtin. I'll have questions about the preferred way to present this to the user, e.g. maybe an external_metadata: xxxx key, where xxxx could be imsm for now, which would trigger this two-stage RAID array creation.
About the MaaS part: I would leave that to the MaaS developers; I'm not sure what should be done there. I think it will at least need a detection method in the commissioning phase.

Revision history for this message
Ryan Harper (raharper) wrote :

Cool!

The first step is to understand what additional metadata is needed to construct a VROC device (this container).

The existing storage config yaml for raid looks like this:

https://curtin.readthedocs.io/en/latest/topics/storage.html#raid-command

It looks like we need a default name for the container;
you use /dev/md/imsm0, and I see other places use /dev/md/imsm. Thoughts?

Maybe this metadata syntax:

- type: raid
  metadata: imsm
  container: /dev/md/imsm0
  name: mirror0
  devices:
    - /dev/nvme0n1p1
    - /dev/nvme1n1p1
  level: 1

Which would run two commands:
  mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/nvme0n1p1 /dev/nvme1n1p1
  mdadm --create /dev/md0 ... /dev/md/imsm0 --name=mirror0

Open questions/TODO:

1) Does the second mdadm command need --metadata=imsm and --raid-devices=2?
2) It appears that the second command does require specifying the RAID level, but what about the devices?
3) How does one remove imsm containers? Curtin attempts to remove existing storage layers on top of physical disks so they can be re-used in other layered storage.
4) Can we boot to root on imsm devices? If so, what grub and initrd changes may be needed?
5) Once created, we need to adjust the mdadm examine/detail output parsing in curtin/block/mdadm.py; I believe the output of imsm device details is broken.

Lastly, it's going to be hard to support this since we cannot emulate such a device in QEMU; curtin relies heavily on virtualization to test for regressions. Typically we'd construct a QEMU VM with storage that looks like an Intel VROC RAID and make sure we can create/destroy these things automatically.

Changed in curtin (Ubuntu):
importance: Undecided → Wishlist
status: Incomplete → Confirmed
Revision history for this message
György Szombathelyi (gyurco) wrote :

1-2) No, only the md device, the container, the RAID level and the number of devices (it's possible to create different RAID volumes with different levels in one container; however, the number of devices must be the same in all).
3) AFAIK, after stopping and removing all arrays from the container, mdadm --zero-superblock on the member devices will remove the container itself. Need to check again.
4) Yes, and even the EFI partition can be on RAID. Btw, it doesn't really work with legacy BIOS; mdadm doesn't find the RAID support on our servers without EFI. I'm not sure whether it's supposed to work with legacy BIOS, or it's just a firmware issue.
5) That needs to be checked, as the cleaning info comes from there.

Yeah, I fear it cannot be automatically tested without actual hardware.

Revision history for this message
György Szombathelyi (gyurco) wrote :

Maybe it would be possible to abstract out the container, like:
- type: raid
  metadata: imsm
  container: /dev/md/imsm0
  name: imsm0
  devices:
    - /dev/nvme0n1p1
    - /dev/nvme1n1p1

- type: raid
  devices:
    - /dev/md/imsm0 # but need to get the number of real devices for -n
  name: mirror0
  level: 1

Revision history for this message
Ryan Harper (raharper) wrote :

> 1-2) No, only the md-device, the container, raid level and number of devices (it's possible to create different RAID volumes with different levels in one container, however the number of devices must be the same in all).

OK.

Looking at https://www.intel.com/content/dam/support/us/en/documents/memory-and-storage/ssd-software/Linux_VROC_6-0_User_Guide.pdf:
without specifying a size (RAID normally never does, it uses whole devices), I can't
quite wrap my head around what creating two volumes in a container means.

If I have 4 disks, and put them into a container and then create a raid0
and a raid5 from the container ... what devices do I have?

Ah, the manual suggests size is important for the multi-volume support

    -l Specifies the RAID level. The options supported are 0, 1, 5, 10.
    -z Specifies the size (in kibibytes) of space dedicated on each disk to the RAID volume. This must be a multiple of the chunk size. For example ...

> 3) AFAIK after stopping and removing all arrays from the container, mdadm --zero-superblock on the member devices will remove the container itself. Need to check again.

OK. We currently do:

a) enumerate devices and spares
b) set_sync_action=idle
c) set_sync_action=frozen
d) wipe the superblock of the composed raid device (it may have metadata for a
   higher-level device, like nested RAID or LVM over RAID, etc.)
e) for each raid members + spares
      mdadm fail device
      mdadm remove device
f) mdadm stop
g) mdadm zero_device
h) wait for /dev/mdX to be released from the kernel

I think we'd need to notice that /dev/mdX is part of a container, and if so, after
tearing down the mdX, then wipe the container? You mentioned that the delete
curtin does isn't sufficient; if you have the curtin install.log with the
failure, that'll help sort this bit out.

https://www.intel.com/content/dam/support/us/en/documents/memory-and-storage/ssd-software/Linux_VROC_6-0_User_Guide.pdf

That has more details.

Specifically, mdadm -vS stops volumes; and if we support the multi-volume
setup, then removing sub-arrays is more complicated.
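
For the record, a rough sketch of what a container-aware teardown could look like, shelling out to mdadm directly. This is only an illustration of the order of operations (stop the volume, stop the container, zero the member superblocks), not curtin's actual clear-holders code, and the device names are just the ones from this report:

    import subprocess

    def run(cmd):
        # Print then run each mdadm step so the teardown order is visible.
        print(' '.join(cmd))
        subprocess.run(cmd, check=True)

    def teardown_imsm(volume, container, members):
        """Illustrative teardown of an imsm volume plus its container."""
        run(['mdadm', '--verbose', '--stop', volume])      # stop the RAID volume
        run(['mdadm', '--verbose', '--stop', container])   # stop the container, releasing members
        for dev in members:
            run(['mdadm', '--zero-superblock', dev])       # wipe imsm metadata from each member

    # Example (device names from this bug report):
    # teardown_imsm('/dev/md126', '/dev/md/imsm0',
    #               ['/dev/nvme0n1', '/dev/nvme1n1', '/dev/nvme2n1', '/dev/nvme3n1'])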

> 4) Yes, and even the EFI partition can be on RAID. Btw, it doesn't really works with the legacy BIOS, mdadm doesn't find the raid support on our servers without EFI. I'm not sure if it's supposed to work with legacy BIOS, or it's just a firmware issue.

EFI really shouldn't be on RAID, in that the ESP is VFAT and I don't believe there
are EFI drivers for mdadm; maybe Intel provides a VROC/mdadm EFI driver, do
you know?

For legacy boot, the open question is whether grub2 can read an mdadm array with
imsm metadata... sounds like no.

> 5) That need to be checked, as the cleaning info comes from there.
>
> Yeah, I fear it cannot be automatically tested without actual hardware.

=(

>
>
> Maybe it would be possible to abstract out the container, like:
> - type: raid
>   metadata: imsm
>   container: /dev/md/imsm0
>   name: imsm0
>   devices:
>     - /dev/nvme0n1p1
>     - /dev/nvme1n1p1
>
> - type: raid
>   devices:
>     - /dev/md/imsm0 # but need to get the number of real devices for -n

We can do that by either repeating the values; it's a shame that the mdadm implementation doesn't just take the container n...


Revision history for this message
György Szombathelyi (gyurco) wrote :

I think your suggestion is a good YAML scheme. I think size_kb: should be optional: if it's omitted, the whole array is filled with one volume. The number of devices can be looked up by pairing 'container' with 'id'.

The EFI firmware can handle the EFI partition on this kind of RAID, and for this reason it's also no problem for GRUB. Linux, of course, can use a VFAT filesystem on the array. That's one of the big pluses of this kind of soft-RAID: it gets some integration into the whole system.

Here are mdadm --query --detail outputs for the container and a level 5 array:

/dev/md127:
           Version : imsm
        Raid Level : container
     Total Devices : 4

   Working Devices : 4

              UUID : ba5ad77a:7618efd1:b178a313:c060a2e7
     Member Arrays : /dev/md/126

    Number   Major   Minor   RaidDevice

       -      259       2       -        /dev/nvme2n1
       -      259       0       -        /dev/nvme1n1
       -      259       1       -        /dev/nvme0n1
       -      259       3       -        /dev/nvme3n1

---------------

/dev/md126:
         Container : /dev/md/imsm0, member 0
        Raid Level : raid5
        Array Size : 2930270208 (2794.52 GiB 3000.60 GB)
     Used Dev Size : 976756736 (931.51 GiB 1000.20 GB)
      Raid Devices : 4
     Total Devices : 4

             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-asymmetric
        Chunk Size : 128K

Consistency Policy : resync

              UUID : 5fa06b36:53e67142:37ff9ad6:44ef0e89
    Number   Major   Minor   RaidDevice   State
       3      259       0       0         active sync   /dev/nvme1n1
       2      259       1       1         active sync   /dev/nvme0n1
       1      259       3       2         active sync   /dev/nvme3n1
       0      259       2       3         active sync   /dev/nvme2n1

Revision history for this message
György Szombathelyi (gyurco) wrote :

I suggest adding a level: container at the top level, as it would imply using the -e switch to mdadm, and it would also be consistent with the query output.

- type: raid
  id: disk_raid_container0
  level: container
  metadata: imsm
  name: /dev/md/imsm0
  devices:
    - /dev/nvme0n1p1
    - /dev/nvme1n1p1

Revision history for this message
György Szombathelyi (gyurco) wrote :

Hmm, just realized that one has to create dummy disk: entries for the devices:
- type: disk
  id: /dev/nvme0n1
  path: /dev/nvme0n1
- type: disk
  id: /dev/nvme1n1
  path: /dev/nvme1n1

then they'll be usable as devices in type: raid entries.

Another buglet: metadata is not passed to mdadm_create() in raid_handler()
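
A minimal sketch of how the metadata value could be threaded into the mdadm --create command line; the helper below is purely illustrative and is not curtin's actual mdadm_create() signature:

    def mdadm_create_cmd(md_devname, devices, raidlevel=None, metadata=None):
        # Build an mdadm --create command line (illustrative only).
        cmd = ['mdadm', '--create', md_devname,
               '--raid-devices=%d' % len(devices)]
        if raidlevel is not None:
            cmd.append('--level=%s' % raidlevel)
        if metadata:
            # e.g. 'imsm' for VROC/RSTe containers; if omitted, mdadm
            # falls back to its default metadata format.
            cmd.append('--metadata=%s' % metadata)
        cmd.extend(devices)
        return cmd

    # The container from the bug description would then be:
    # mdadm_create_cmd('/dev/md/imsm0',
    #                  ['/dev/nvme0n1', '/dev/nvme1n1', '/dev/nvme2n1', '/dev/nvme3n1'],
    #                  metadata='imsm')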

Revision history for this message
György Szombathelyi (gyurco) wrote :

And the weird behavior of --examine on one disk: the current parser errors out because State is duplicated (quadruplicated, actually):

mdadm --query --examine /dev/nvme0n1
/dev/nvme0n1:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : 2b2b3bbf
         Family : 2b2b3bbf
     Generation : 00000004
     Attributes : All supported
           UUID : 3c645e76:1b9c9ddf:dbd3f118:2bb228ad
       Checksum : 367e2a35 correct
    MPB Sectors : 2
          Disks : 4
   RAID Devices : 1

  Disk01 Serial : LJ91620AB71P0FGN
          State : active
             Id : 00000000
    Usable Size : 1953514766 (931.51 GiB 1000.20 GB)

[126]:
           UUID : 31e876aa:81eba73d:47d77898:6b656f1d
     RAID Level : 5 <-- 5
        Members : 4 <-- 4
          Slots : [UUUU] <-- [UUUU]
    Failed disk : none
      This Slot : 1
    Sector Size : 512
     Array Size : 5860540416 (2794.52 GiB 3000.60 GB)
   Per Dev Size : 1953515520 (931.51 GiB 1000.20 GB)
  Sector Offset : 0
    Num Stripes : 7630912
     Chunk Size : 128 KiB <-- 128 KiB
       Reserved : 0
  Migrate State : initialize
      Map State : normal <-- uninitialized
     Checkpoint : 0 (1024)
    Dirty State : clean
     RWH Policy : off

  Disk00 Serial : LJ916308PS1P0FGN
          State : active
             Id : 00000000
    Usable Size : 1953514766 (931.51 GiB 1000.20 GB)

  Disk02 Serial : LJ910504A01P0FGN
          State : active
             Id : 00000000
    Usable Size : 1953514766 (931.51 GiB 1000.20 GB)

  Disk03 Serial : LJ916308CY1P0FGN
          State : active
             Id : 00000000
    Usable Size : 1953514766 (931.51 GiB 1000.20 GB)
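
One way to make the parser tolerant of these repeated keys is to collect duplicates into lists instead of overwriting or failing; a small, self-contained sketch (not the actual parser in curtin/block/mdadm.py):

    def parse_mdadm_examine(output):
        # Parse 'Key : Value' lines, collecting repeated keys (State,
        # Usable Size, the per-disk Serial lines, ...) into lists.
        data = {}
        for line in output.splitlines():
            key, sep, value = line.partition(':')
            key, value = key.strip(), value.strip()
            if not sep or not key or not value:
                continue
            if key in data:
                if not isinstance(data[key], list):
                    data[key] = [data[key]]
                data[key].append(value)
            else:
                data[key] = value
        return data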

Revision history for this message
György Szombathelyi (gyurco) wrote :

This commit adds VROC container and array creation (on clean disks)
https://github.com/gyurco/curtin/commit/fd72c17665c071cde3eb5e047662b04ab993a0dd

Stopping and destroying an existing one is still to be done (plus specifying a size, to be able to create more than one array in one container).
This config snippet was used on our servers:

  - type: disk
    id: /dev/nvme0n1
    path: /dev/nvme0n1
  - type: disk
    id: /dev/nvme1n1
    path: /dev/nvme1n1
  - type: disk
    id: /dev/nvme2n1
    path: /dev/nvme2n1
  - type: disk
    id: /dev/nvme3n1
    path: /dev/nvme3n1

  - type: raid
    id: disk_raid_container0
    raidlevel: container
    metadata: imsm
    name: /dev/md/imsm0
    devices:
     - /dev/nvme0n1
     - /dev/nvme1n1
     - /dev/nvme2n1
     - /dev/nvme3n1

  - type: raid
    id: md126
    name: /dev/md126
    raidlevel: 5
    container: disk_raid_container0

Revision history for this message
Ryan Harper (raharper) wrote :

> I think your suggestion is a good YAML scheme. I think size_kb:
> should be optional to fill the whole array with one volume if
> it's omitted.

Well, it's not that simple. What do we do if it's omitted and
the config includes multiple volumes from the same container?

Curtin config is typically constructed from an "Oracle"; either
MAAS or Subiquity probes storage and then provides a complete
configuration from user input.

So if either added support for VROC, they would know in advance
how many volumes per container and could specify the exact
size.

I believe we want to require size_kb if metadata == 'imsm'

> The number of devices can be looked up by pairing
> 'container' with 'id'.

Yes, I put the id anchor there so that the volumes created
within a container will be able to look up the correct value
rather than having to repeat it.
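
(A minimal sketch of that lookup, assuming the parsed storage config is
just a list of dicts with the same keys as the YAML; the helper name is
illustrative:)

    def container_device_count(storage_config, container_id):
        # Find the container entry by its id and return how many member
        # devices it has, so volumes don't need to repeat the device list.
        for item in storage_config:
            if item.get('type') == 'raid' and item.get('id') == container_id:
                return len(item.get('devices', []))
        raise ValueError('unknown container id: %s' % container_id)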

> Maybe it would be possible to abstract out the container, like:
> - type: raid

Everything needs an id.

id: raid_container_0

> metadata: imsm
> container: /dev/md/imsm0

This isn't needed, IIUC, metadata=imsm implies a container
raid, no?

> name: imsm0

name: /dev/md/imsm0

> devices:
> - /dev/nvme0n1p1
> - /dev/nvme1n1p1
>
> - type: raid
> devices:
> - /dev/md/imsm0 # but need to get the number of real devices for -n

Yes, this is nice, and what we will do instead is:

devices:
  - raid_container_0

> name: mirror0
> level: 1

> I suggest to add a level: container to the top-level, as it would
> imply to use the -e switch to mdadm, and also would be consistent
> to the query output.
>
> - type: raid
> id: disk_raid_container0
> level: container
> metadata: imsm

I think we can skip that if it's true that imsm is always a
container.

> name: /dev/md/imsm0
> devices:
> - /dev/nvme0n1p1
> - /dev/nvme1n1p1

> Another buglet: metadata is not passed to mdadm_create() in
> raid_handler()

Good catch! And in that case we can always pass --metadata=
to mdadm; if metadata is not provided, we use the default.

Containers, of course, require metadata=imsm

Revision history for this message
Ryan Harper (raharper) wrote :

> This commit adds VROC container and array creation (on clean disks)
> https://github.com/gyurco/curtin/commit/fd72c17665c071cde3eb5e047662b04ab993a0dd

Nice!

Now here's the not so fun part. We've not yet moved curtin to github, so code
submissions are done here:

https://curtin.readthedocs.io/en/latest/topics/hacking.html

Revision history for this message
György Szombathelyi (gyurco) wrote : Re: [Bug 1893661] Re: Support for Intel VROC (Virtual RAID On CPU)

On 9/2/20 4:51 PM, Ryan Harper wrote:
>> I think your suggestion is a good YAML scheme. I think size_kb:
>> should be optional to fill the whole array with one volume if
>> it's omitted.
>
> Well, it's not that simple. What do we do if it's omitted and
> the config includes multiple volumes from the same container?
>
> Curtin config is typically constructed from an "Oracle"; either
> MAAS or Subiquity probe storage and then provide complete
> configuration from user-input.
>
> So if either added support for VROC, they would know in advance
> how many volumes per container and could specify the exact
> size.
>
> I believe we want to require size_kb if metadata == 'imsm'
>
Well, I would not complicate the most common case (at least for us).
If some external tool provides the size for all arrays, then OK. But if a
hand-constructed YAML doesn't, then it should just go with that and fill
the whole array.

>
>> The number of devices can be looked up by pairing
>> 'container' with 'id'.
>
> Yes, I put the id anchor there so that the volumes created with
> in a container will be able to lookup the correct value rather
> than having to repeat.
>
>
>> Maybe it would be possible to abstract out the container, like:
>> - type: raid
>
> Everything needs and id.
>
> id: raid_container_0
>
>> metadata: imsm
>> container: /dev/md/imsm0
>
> This isn't needed, IIUC, metadata=imsm implies a container
> raid, no?
>
Yeah, container: is now in the members.

>> name: imsm0
>
> name: /dev/md/imsm0
>
>> devices:
>> - /dev/nvme0n1p1
>> - /dev/nvme1n1p1
>>
>> - type: raid
>> devices:
>> - /dev/md/imsm0 # but need to get the number of real devices for -n
>
> Yes, this is nice, and what we will do instead is:
>
> devices:
> - raid_container_0
>
I did
   container: raid_container_0
Otherwise it wouldn't be obvious that it's a back-reference.

>> name: mirror0
>> level: 1
>
>
>> I suggest to add a level: container to the top-level, as it would
>> imply to use the -e switch to mdadm, and also would be consistent
>> to the query output.
>>
>> - type: raid
>> id: disk_raid_container0
>> level: container
>> metadata: imsm
>
> I think we can skip that if it's true that imsm is always a
> container.
>
True. But then level would be undefined. Allowing level to be None when
metadata=imsm - I'm not sure that's simpler.

>> name: /dev/md/imsm0
>> devices:
>> - /dev/nvme0n1p1
>> - /dev/nvme1n1p1
>
>
>> Another buglet: metadata is not passed to mdadm_create() in
>> raid_handler()
>
> Good catch! And in that case we can always pass --metadata=
> to mdadm, if metadata is not provided, we use the default.
>
As it wasn't reported, I think nobody used it :)

Revision history for this message
György Szombathelyi (gyurco) wrote :

On 9/2/20 4:54 PM, Ryan Harper wrote:
>> This commit adds VROC container and array creation (on clean disks)
>> https://github.com/gyurco/curtin/commit/fd72c17665c071cde3eb5e047662b04ab993a0dd
>
> Nice!
>
> Now here's the not so fun part. We've not yet moved curtin to github, so code
> submissions are done here:
>
> https://curtin.readthedocs.io/en/latest/topics/hacking.html
>
That's not a problem - pushing to GitHub or to Launchpad doesn't
really matter.
The not-so-fun part is handling an existing VROC array.

Revision history for this message
György Szombathelyi (gyurco) wrote :

Only one blocker remaining for a successful reinstall in shutdown_mdadm:

    LOG.debug('Wiping mdadm member devices: %s' % md_devs)
    for mddev in md_devs:
        mdadm.zero_device(mddev, force=True)

As the devices in an array are held by the container, the zero_device call above fails (it cannot get an exclusive lock). As far as I can see, the device list for an array is taken from /sys/block/mdXXX/md/dev-*, and unfortunately it is populated with the underlying drives for both the array and the container.

Is it acceptable to make the mdadm.zero_device failure non-fatal? Otherwise we must detect whether an md device is part of a container, which isn't straightforward.

Or maybe do a query in mdadm_shutdown, and skip the zeroing if the output contains a Container: key?

mdadm --query --detail /dev/md126
/dev/md126:
         Container : /dev/md/imsm0, member 0
        Raid Level : raid5
        Array Size : 2930270208 (2794.52 GiB 3000.60 GB)
     Used Dev Size : 976756736 (931.51 GiB 1000.20 GB)
      Raid Devices : 4
     Total Devices : 4

             State : active, resyncing
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-asymmetric
        Chunk Size : 128K
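
A small sketch of that check, calling mdadm directly via subprocess (illustrative only; the fix described in the next comment uses curtin's mdadm_query_detail helper instead):

    import subprocess

    def is_container_member(md_device):
        # True if 'mdadm --query --detail' reports a 'Container :' line,
        # e.g. "Container : /dev/md/imsm0, member 0".
        out = subprocess.run(['mdadm', '--query', '--detail', md_device],
                             capture_output=True, text=True, check=True).stdout
        return any(line.strip().startswith('Container :')
                   for line in out.splitlines())

    # In mdadm_shutdown, the member-device zeroing could then be skipped
    # when the array being torn down is itself part of a container:
    #   if not is_container_member('/dev/md126'):
    #       for mddev in md_devs:
    #           mdadm.zero_device(mddev, force=True)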

Revision history for this message
György Szombathelyi (gyurco) wrote :

In the end I was able to do a successful reinstall, using mdadm_query_detail to check whether the device is a member array of a container, and skipping the zeroing in that case.

I've sent a merge request. I didn't add size_kb yet, because I'm lazy and we don't really need it. It can be added later, of course.

Revision history for this message
Paride Legovini (paride) wrote :

Hi György,

Is [1] the MP you submitted? I think something went wrong with it; it looks as if you committed the source tree with unresolved git conflicts. The MP should look more or less like [2], if I'm not mistaken.

Could you have a look and update/resubmit it?

Thanks!

[1] https://code.launchpad.net/~gyurco/curtin/+git/curtin/+merge/390234
[2] https://github.com/gyurco/curtin/commit/fd72c17665c071cde3eb5e047662b04ab993a0dd

Revision history for this message
György Szombathelyi (gyurco) wrote :

The problem was that I didn't work on the master branch. Re-based and re-submitted the merge request.

Dan Watkins (oddbloke)
Changed in curtin (Ubuntu):
assignee: nobody → György Szombathelyi (gyurco)
status: Confirmed → In Progress
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

This bug is fixed with commit ac2c09ce to curtin on branch master.
To view that commit see the following URL:
https://git.launchpad.net/curtin/commit/?id=ac2c09ce

Jeff Lane  (bladernr)
tags: added: hwcert-server
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package curtin - 21.1-0ubuntu1

---------------
curtin (21.1-0ubuntu1) hirsute; urgency=medium

  * New upstream release.
    - Release 21.1 [Michael Hudson-Doyle] (LP: #1911841)
    - This adds arm64 compatibility for RH installations [Mark Klein]
    - vmtest: add Hirsute release classes, tool to add vmtest class
      [Ryan Harper]
    - vmtest: fix image-sync after maas URL stream rename
      [Ryan Harper] (LP: #1908543)
    - storage_config: set ptable to vtoc for 'virt' dasds as well as 'ECKD'
      [Michael Hudson-Doyle]
    - install_grub: Fix bootloader-id for RHEL systems, must be redhat
      [Ryan Harper] (LP: #1906543)
    - vmtests: remove LP: #1888726 skip_by_date decorators
    - storage_config: only produce type: dasd actions for ECKD dasds
      [Michael Hudson-Doyle]
    - storage_config: handle some FBA dasd oddities [Michael Hudson-Doyle]
    - apt_config: stop using the deprecated apt-key command
      [Nishanth Aravamudan] (LP: #1892494)
    - allow adding a vtoc partition without a device id [Michael Hudson-Doyle]
    - simplify dasdview parsing code [Michael Hudson-Doyle]
    - fix construction of DasdPartitionTable from fdasd output
      [Michael Hudson-Doyle]
    - Don't install grub if it is already found on CentOS/RHEL
      [Lee Trager] (LP: #1895067)
    - vmtests: Replace newly added Eoan test with Groovy [Ryan Harper]
    - vmtests: test using a disk with RAID partition on it directly in a RAID
      [Michael Hudson-Doyle]
    - fix verification of vtoc partitions [Michael Hudson-Doyle] (LP: #1899471)
    - create an empty vtoc in disk_handler [Michael Hudson-Doyle]
    - remove unused parameters from dasd code [Michael Hudson-Doyle]
    - remove support for calling get_path_to_storage_volume on a dasd action
      [Michael Hudson-Doyle]
    - clear-holders: fix identification of multipath partitions
      [Ryan Harper] (LP: #1900900)
    - vmtests: remove skip_by_dates for now-fixed bcache issue
    - debian/rules: drop PKG_VERSION and UPSTREAM_VERSION [Paride Legovini]
    - deb packaging: fully cleanup directory tree after build
      [Paride Legovini] (LP: #1899698)
    - udevadm_info should use maxsplit=1 instead of maxsplit=2
      [Sergey Bykov] (LP: #1895021)
    - vmtests/multipath-lvm: dont assume device-mapper block names
      [Ryan Harper] (LP: #1898758)
    - vmtest: fix the groovy arm64 subarch [Paride Legovini] (LP: #1898757)
    - tools/curtainer: dearmor gpg key and use apt-key add
      [Ryan Harper] (LP: #1898609)
    - Support imsm external metadata RAID containers
      [Gyorgy Szombathelyi] (LP: #1893661)
    - Drop tools/new-upstream-snapshot [Paride Legovini]

 -- Michael Hudson-Doyle <email address hidden> Fri, 15 Jan 2021 17:07:35 +1300

Changed in curtin (Ubuntu):
status: In Progress → Fix Released