failed creating LVM on top of md devices

Bug #1783413 reported by Joshua Powers
Affects    Status        Importance  Assigned to  Milestone
curtin     Fix Released  Medium      Unassigned
subiquity  Fix Released  Undecided   Unassigned

Bug Description

Summary:
Tried creating an LVM volume group on top of two software RAID devices, and the install failed.

Expected Behavior:
Install works and reboots into a system with /home mounted on LVM + RAID.

Actual Behavior:

Running command ['vgcreate', '--force', '--zero=y', '--yes', 'vg0', '/dev/md0', '/dev/md1'] with allowed return codes [0] (capture=True)
        An error occured handling 'vg-0': ProcessExecutionError - Unexpected error while running command.
        Command: ['vgcreate', '--force', '--zero=y', '--yes', 'vg0', '/dev/md0', '/dev/md1']
        Exit code: 5
        Reason: -
        Stdout: ''
        Stderr: WARNING: Device for PV Ku4Qjy-hGry-zaUE-el9b-Vee1-c0eB-43PGwR not found or rejected by a filter.
                  WARNING: Device for PV Ku4Qjy-hGry-zaUE-el9b-Vee1-c0eB-43PGwR not found or rejected by a filter.
                  A volume group called vg0 already exists.
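
If the installer is left in this state, the stale metadata can be inspected and cleared by hand from the live session (a sketch; vg0 and the md device names are taken from the error above and may differ on other systems):

sudo pvs -a                           # list physical volumes, including the stale PV
sudo vgs -a                           # confirm the leftover vg0 volume group
sudo vgremove --force vg0             # drop the stale volume group
sudo wipefs --all /dev/md0 /dev/md1   # clear any remaining LVM signatures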

Steps to reproduce:
1. Get Bionic live-server ISO from July 24, 2018 (20180724)
2. Launch a VM with 1x QEMU disk (for root) and 5x libvirt disks for raid
3. Accept the defaults until the disk step, then choose manual partitioning
4. Set up the QEMU disk with ext4 mounted at /
5. Set up 2x md devices: 2 disks each, mirrored
6. Set up LVM using the 2x md devices
7. Create a logical volume in the volume group with ext4 mounted at /home (a manual equivalent is sketched after this list)
8. Install fails
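
For reference, the layout from steps 5-7 can be approximated manually with the following shell commands (a sketch; the /dev/vda-/dev/vdd member disks and the vg0/home names are assumptions based on the logs, not the installer's exact calls):

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vda /dev/vdb
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/vdc /dev/vdd
sudo vgcreate vg0 /dev/md0 /dev/md1   # the step that fails in the installer
sudo lvcreate -n home -l 100%FREE vg0
sudo mkfs.ext4 /dev/vg0/home          # mounted at /home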

Logs:
curtin: http://paste.ubuntu.com/p/w7g3Whp9ZC/
subiquity-curtin-install.conf: http://paste.ubuntu.com/p/Bp7v8FVJp3/
subiquity-debug.log: http://paste.ubuntu.com/p/fRF9392Yc8/

Revision history for this message
Ryan Harper (raharper) wrote :

A volume group called vg0 already exists.

Hrm, looking at the curtin log, it didn't detect a vg0 volume group.

Current device storage tree:
sda
|-- sda2
`-- sda1
vdb
`-- md127
vda
`-- md127
vdc
`-- md127
vdd
`-- md127
Shutdown Plan:
{'level': 1, 'device': '/sys/class/block/sda/sda2', 'dev_type': 'partition'}
{'level': 1, 'device': '/sys/class/block/sda/sda1', 'dev_type': 'partition'}
{'level': 1, 'device': '/sys/class/block/md127', 'dev_type': 'raid'}
{'level': 0, 'device': '/sys/class/block/sda', 'dev_type': 'disk'}
{'level': 0, 'device': '/sys/class/block/vdb', 'dev_type': 'disk'}
{'level': 0, 'device': '/sys/class/block/vda', 'dev_type': 'disk'}
{'level': 0, 'device': '/sys/class/block/vdc', 'dev_type': 'disk'}
{'level': 0, 'device': '/sys/class/block/vdd', 'dev_type': 'disk'}

We wipe the contents of the assembled raid device, so if it previously held LVM info, that will get wiped. We ran an LVM scan to see if any devices were present, but none showed up.

If you still have access to the system, can you run a few commands:

pvs -a
vgs -a
lvs -a
dmsetup ls
dmsetup status

Revision history for this message
Joshua Powers (powersj) wrote :

sudo pvs -a: http://paste.ubuntu.com/p/58hfG8cDs2/
sudo vgs -a: http://paste.ubuntu.com/p/9bbrJrM9Sf/
sudo lvs -a: empty
sudo dmsetup ls: No devices found
sudo dmsetup status: No devices found

Revision history for this message
Ryan Harper (raharper) wrote :

OK, I've taken this configuration and recreated the issue with our vmtest: we create the storage config, then tear down the LVM and raid (removing configurations from /etc/lvm and /etc/mdadm), then run the storage config a second time. We see the same storage tree (missing the LVM dm0 entries) and then trigger the vgcreate error where vg0 already exists.

The fix is for curtin to add an LVM scan of block devices after assembling raid arrays; this scan will activate any LVM VGs that can be assembled. Then, once curtin can "see" the LVM devices, it will properly remove them.
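
In shell terms, the added step amounts to something like the following, run after the md arrays are assembled (a sketch of the idea; the actual commit may use different flags or internal curtin helpers):

sudo pvscan        # rediscover physical volumes on the freshly assembled arrays
sudo vgscan        # rebuild the volume group list so the stale vg0 becomes visible
sudo vgchange -ay  # activate the VGs so curtin's teardown can see and remove them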

Changed in curtin:
status: New → Confirmed
Ryan Harper (raharper)
Changed in curtin:
status: Confirmed → In Progress
Scott Moser (smoser)
Changed in curtin:
importance: Undecided → Medium
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

This bug is fixed with commit 6a776e15 to curtin on branch master.
To view that commit see the following URL:
https://git.launchpad.net/curtin/commit/?id=6a776e15

Changed in curtin:
status: In Progress → Fix Committed
Revision history for this message
Ryan Harper (raharper) wrote : Fixed in curtin version 18.2.

This bug is believed to be fixed in curtin in version 18.2. If this is still a problem for you, please make a comment and set the state back to New

Thank you.

Changed in curtin:
status: Fix Committed → Fix Released
Changed in subiquity:
status: New → Fix Released