Merge ~raharper/curtin:fix/lvm-over-bcache into curtin:master

Proposed by Ryan Harper
Status: Merged
Approved by: Chad Smith
Approved revision: c5897529e377c9afca83aea0256a1901832e8536
Merge reported by: Server Team CI bot
Merged at revision: not available
Proposed branch: ~raharper/curtin:fix/lvm-over-bcache
Merge into: curtin:master
Diff against target: 158 lines (+131/-1)
3 files modified
curtin/block/clear_holders.py (+1/-1)
examples/tests/bcache-ceph-nvme-simple.yaml (+107/-0)
tests/vmtests/test_bcache_ceph.py (+23/-0)
Reviewers:
Chad Smith: Approve
Server Team CI bot (continuous-integration): Approve
Review via email: mp+372939@code.launchpad.net

Commit message

clear-holders: increase the level for devices with holders by one

When clear-holders examines a dependent device whose parent is already
in the registry, take the maximum recorded level and increment it by
one so that the dependent device is shut down first. This resolves the
case of an LVM volume on top of a bcache device: the LVM device must be
removed before any of the bcache devices, because the bcache devices
may share a cacheset, and all bcache devices must be stopped before
that cacheset can be removed.

LP: #1844543
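
For context, the sketch below shows the kind of level bookkeeping the one-line change in the diff touches. This is a minimal illustration, not the actual curtin clear_holders code: each device in the holders tree gets a level, a device reached through more than one path keeps the maximum level seen plus one, and teardown runs from the highest level down. Device names mirror the example config further below.

# A simplified sketch, not the actual curtin implementation: flatten a
# holders tree into per-device "levels" so teardown runs from the
# most-dependent device down. Device names are illustrative only.

def plan_shutdown(trees):
    reg = {}  # device name -> highest level seen so far

    def flatten(tree, level=0):
        device = tree['device']
        # If the device was already reached via another path (for example a
        # bcache cache device shared by several bcache devices), take the
        # maximum level seen so far and bump it by one so anything stacked
        # on top of it is shut down first. This is the behaviour the +1 in
        # the diff below introduces.
        if device in reg:
            level = max(reg[device], level) + 1
        reg[device] = level
        for holder in tree.get('holders', []):
            flatten(holder, level=level + 1)

    for tree in trees:
        flatten(tree)

    # Highest level first: the LVM LV/VG come before the bcache devices,
    # which come before the shared cache device and the raw disks.
    return sorted(reg, key=reg.get, reverse=True)


# Illustrative LVM-over-bcache layout: nvme0n1 caches both bcache0 and
# bcache1, and an LVM volume group with one LV sits on bcache1.
trees = [
    {'device': 'nvme0n1', 'holders': [
        {'device': 'bcache0'},
        {'device': 'bcache1', 'holders': [
            {'device': 'ceph-bcache-vg', 'holders': [
                {'device': 'ceph-bcache-lv-0'}]}]}]},
    {'device': 'sdb', 'holders': [
        {'device': 'bcache1', 'holders': [
            {'device': 'ceph-bcache-vg', 'holders': [
                {'device': 'ceph-bcache-lv-0'}]}]}]},
]

print(plan_shutdown(trees))
# ['ceph-bcache-lv-0', 'ceph-bcache-vg', 'bcache1', 'bcache0', 'nvme0n1', 'sdb']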

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Chad Smith (chad.smith) wrote :

Thanks for this test coverage. LGTM

review: Approve

Preview Diff

diff --git a/curtin/block/clear_holders.py b/curtin/block/clear_holders.py
index bcc16e6..fb778f0 100644
--- a/curtin/block/clear_holders.py
+++ b/curtin/block/clear_holders.py
@@ -487,7 +487,7 @@ def plan_shutdown_holder_trees(holders_trees):
         # anything else regardless of whether it was added to the tree via
         # the cache device or backing device first
         if device in reg:
-            level = max(reg[device]['level'], level)
+            level = max(reg[device]['level'], level) + 1
 
         reg[device] = {'level': level, 'device': device,
                        'dev_type': tree['dev_type']}
diff --git a/examples/tests/bcache-ceph-nvme-simple.yaml b/examples/tests/bcache-ceph-nvme-simple.yaml
new file mode 100644
index 0000000..83cb04c
--- /dev/null
+++ b/examples/tests/bcache-ceph-nvme-simple.yaml
@@ -0,0 +1,107 @@
+storage:
+  config:
+  - grub_device: true
+    id: sda
+    model: MM1000GBKAL
+    name: sda
+    ptable: gpt
+    serial: disk-a
+    type: disk
+    wipe: superblock
+  - id: sdb
+    model: MM1000GBKAL
+    name: sdb
+    serial: disk-b
+    type: disk
+    wipe: superblock
+  - id: nvme0n1
+    model: INTEL SSDPEDME400G4
+    name: nvme0n1
+    serial: nvme-CVMD552400
+    type: disk
+    wipe: superblock
+  - backing_device: sdb
+    cache_device: nvme0n1
+    cache_mode: writeback
+    id: bcache1
+    name: bcache1
+    type: bcache
+  - device: sda
+    id: sda-part1
+    name: sda-part1
+    number: 1
+    offset: 4194304B
+    size: 5G
+    type: partition
+    uuid: 1e27e7af-26dc-4af4-9ef5-aea928204997
+    wipe: superblock
+  - device: sda
+    id: sda-part2
+    name: sda-part2
+    number: 2
+    size: 2G
+    type: partition
+    uuid: 0040d622-41f1-4596-842f-82d731ba9054
+    wipe: superblock
+  - device: sda
+    id: sda-part3
+    name: sda-part3
+    number: 3
+    size: 2G
+    type: partition
+    uuid: cb59d827-662c-4da6-b1ef-7967218bd0db
+    wipe: superblock
+  - backing_device: sda-part3
+    cache_device: nvme0n1
+    cache_mode: writeback
+    id: bcache0
+    name: bcache0
+    type: bcache
+  - fstype: fat32
+    id: sda-part1_format
+    label: efi
+    type: format
+    uuid: 27638478-d881-43e5-a93c-1cac7aa60daa
+    volume: sda-part1
+  - fstype: ext4
+    id: sda-part2_format
+    label: boot
+    type: format
+    uuid: cfd11d4f-d77f-4307-b372-b52e81c873f7
+    volume: sda-part2
+  - fstype: ext4
+    id: bcache0_format
+    label: root
+    type: format
+    uuid: 63247841-195c-4939-83e4-cb834d61f95f
+    volume: bcache0
+  - devices:
+    - bcache1
+    id: ceph-bcache-vg
+    name: ceph-bcache-vg
+    type: lvm_volgroup
+  - id: ceph-bcache-lv-0
+    name: ceph-bcache-lv-0
+    size: 3G
+    type: lvm_partition
+    volgroup: ceph-bcache-vg
+  - fstype: xfs
+    id: ceph-bcache-lv-0_format
+    volume: ceph-bcache-lv-0
+    type: format
+  - device: bcache0_format
+    id: bcache0_mount
+    options: ''
+    path: /
+    type: mount
+  - device: sda-part2_format
+    id: sda-part2_mount
+    options: ''
+    path: /boot
+    type: mount
+  - device: sda-part1_format
+    id: sda-part1_mount
+    options: ''
+    path: /boot/efi
+    type: mount
+  version: 1
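
As an aside, here is a small standalone sketch, not part of the proposal, that prints the dependencies declared by the bcache and LVM entries above and makes the shared cacheset visible. It assumes PyYAML is installed and that the new example file exists in the current curtin checkout.

# Standalone helper sketch (not part of this proposal). Assumes PyYAML and
# that examples/tests/bcache-ceph-nvme-simple.yaml exists in the current
# (curtin) checkout.
import yaml

with open('examples/tests/bcache-ceph-nvme-simple.yaml') as fh:
    config = yaml.safe_load(fh)['storage']['config']

for item in config:
    if item['type'] == 'bcache':
        print('{}: backing={} cache={}'.format(
            item['id'], item['backing_device'], item['cache_device']))
    elif item['type'] in ('lvm_volgroup', 'lvm_partition'):
        print('{}: on {}'.format(
            item['id'], item.get('devices') or item.get('volgroup')))

# Expected output shape:
#   bcache1: backing=sdb cache=nvme0n1
#   bcache0: backing=sda-part3 cache=nvme0n1
#   ceph-bcache-vg: on ['bcache1']
#   ceph-bcache-lv-0: on ceph-bcache-vg
# i.e. both bcache devices share one cacheset on nvme0n1 and the LVM stack
# sits on bcache1, so the VG/LV must be removed before either bcache device
# (and the cacheset) can be stopped.
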
diff --git a/tests/vmtests/test_bcache_ceph.py b/tests/vmtests/test_bcache_ceph.py
index c35ee85..07d9bca 100644
--- a/tests/vmtests/test_bcache_ceph.py
+++ b/tests/vmtests/test_bcache_ceph.py
@@ -82,4 +82,27 @@ class DiscoTestBcacheCeph(relbase.disco, TestBcacheCeph):
 class EoanTestBcacheCeph(relbase.eoan, TestBcacheCeph):
     __test__ = True
 
+
+class TestBcacheCephLvm(TestBcacheCeph):
+    test_type = 'storage'
+    nr_cpus = 2
+    uefi = True
+    dirty_disks = True
+    extra_disks = ['20G', '20G']
+    nvme_disks = ['20G']
+    conf_file = "examples/tests/bcache-ceph-nvme-simple.yaml"
+
+    @skip_if_flag('expected_failure')
+    def test_bcache_output_files_exist(self):
+        self.output_files_exist([
+            "bcache-super-show.vda3",
+            "bcache-super-show.vdc",
+            "bcache-super-show.nvme0n1",
+        ])
+
+
+class BionicTestBcacheCephLvm(relbase.bionic, TestBcacheCephLvm):
+    __test__ = True
+
+
 # vi: ts=4 expandtab syntax=python
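
For completeness, a rough standalone equivalent of what the new assertion checks, as a minimal sketch rather than the curtin vmtest harness itself: the collect directory path and the vda3/vdc-to-config mapping below are assumptions inferred from the filenames the test expects.

# Rough standalone equivalent of the new assertion (not curtin's vmtest
# harness). The collect directory path is hypothetical, and the guest
# device names are inferred from the expected filenames: vda3 is presumably
# sda-part3 (bcache0 backing), vdc presumably sdb (bcache1 backing), and
# nvme0n1 is the shared cache device.
import os

collect_dir = 'output/collect'  # hypothetical location of collected artifacts
expected = [
    'bcache-super-show.vda3',
    'bcache-super-show.vdc',
    'bcache-super-show.nvme0n1',
]
missing = [name for name in expected
           if not os.path.exists(os.path.join(collect_dir, name))]
assert not missing, 'missing collected files: %s' % missing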
