Merge ~gyurco/curtin:imsm into curtin:master
Status: Merged
Approved by: Paride Legovini
Approved revision: 0e2b8ff3711d178d2a5c4ad3ff1c78e9ecc827b6
Merge reported by: Server Team CI bot
Merged at revision: not available
Proposed branch: ~gyurco/curtin:imsm
Merge into: curtin:master
Diff against target: 504 lines (+271/-32), 7 files modified
- curtin/block/clear_holders.py (+10/-4)
- curtin/block/mdadm.py (+30/-9)
- curtin/block/schemas.py (+3/-1)
- curtin/commands/block_meta.py (+19/-10)
- tests/unittests/test_block_mdadm.py (+173/-7)
- tests/unittests/test_clear_holders.py (+35/-0)
- tests/unittests/test_commands_block_meta.py (+1/-1)
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Server Team CI bot | continuous-integration | | Approve
Paride Legovini | | | Approve
Ryan Harper (community) | | | Needs Fixing

Review via email: mp+390307@code.launchpad.net
Commit message
Support imsm external metadata RAID containers
LP: #1893661
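For illustration only (not part of this merge proposal): a storage-config fragment, written here as Python dicts for brevity, showing roughly how the new container support might be used. The ids, names, and device references below are assumptions.

```python
# Hypothetical curtin storage-config items for an imsm setup (ids and device
# references are made up for this sketch).
storage_config = [
    # The imsm container itself: raidlevel 'container', external metadata,
    # built directly from the member disks.
    {'id': 'imsm_container', 'type': 'raid', 'name': 'imsm0',
     'raidlevel': 'container', 'metadata': 'imsm',
     'devices': ['nvme0', 'nvme1', 'nvme2', 'nvme3']},
    # An array created inside the container: no 'devices' list, only the
    # 'container' reference; mdadm derives the member count from the container.
    {'id': 'md126', 'type': 'raid', 'name': 'md126',
     'raidlevel': 5, 'container': 'imsm_container'},
]
```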
Description of the change
György Szombathelyi (gyurco) wrote:
I have a few explanations regarding some of your concerns; otherwise, I'll try to implement things as you suggested.
Ryan Harper (raharper) wrote:
Thanks.
György Szombathelyi (gyurco):
- 0141803... by György Szombathelyi: Add md_is_container
- 3856b5c... by György Szombathelyi: Use container instead of container_devcnt
- 6168dbb... by György Szombathelyi: Add container to RAID schema
- 341a674... by György Szombathelyi: Clarify comment
- e3158e4... by György Szombathelyi: Revert unnecessary change
György Szombathelyi (gyurco) wrote:
I think I've addressed most (if not all) of the concerns. Could you review it, please? I have to commission the test servers into production this week; after that I cannot continue the work until I receive another batch of machines.
Ryan Harper (raharper) wrote:
One suggestion inline.
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:1d466b3cb87
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- cd11d1a... by György Szombathelyi: md_is_container -> md_is_in_container (and return bool)
Paride Legovini (paride) wrote:
I'll trigger another CI run after this lands:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:1d466b3cb87
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:a0b1e8c5f68
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:a0b1e8c5f68
https:/
Executed test runs:
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:a0b1e8c5f68
https:/
Executed test runs:
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Paride Legovini (paride) wrote:
Hi György,
I ran the CI on the wrong revision, my bad. We actually have some failures; see the full run log [1]. The errors are just linting issues:
py3-pylint:
curtin/
curtin/
curtin/
curtin/
curtin/
curtin/
curtin/
curtin/
curtin/
curtin/
tests/unittests
tests/unittests
tests/unittests
tests/unittests
tests/unittests
tests/unittests
tests/unittests
py3-pyflakes:
tests/unittests
Please rebase your branch on master (mainly to pick up [2]), run `tox`, and fix all the linting issues it spots. In general I think the MP is in good shape; thanks for working on it!
Paride
[1] https:/
[2] https:/
- 0e2b8ff... by György Szombathelyi: Fix lint issues
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:0e2b8ff3711
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Paride Legovini (paride) wrote:
I can't test on actual hardware, but the code LGTM. Thanks for this MP!
Server Team CI bot (server-team-bot) wrote:
Autolanding: FAILED
More details in the following jenkins job:
https:/
Executed test runs:
FAILURE: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Server Team CI bot (server-team-bot):
Preview Diff
1 | diff --git a/curtin/block/clear_holders.py b/curtin/block/clear_holders.py |
2 | index 116ee81..9553db6 100644 |
3 | --- a/curtin/block/clear_holders.py |
4 | +++ b/curtin/block/clear_holders.py |
5 | @@ -166,10 +166,16 @@ def shutdown_mdadm(device): |
6 | |
7 | blockdev = block.sysfs_to_devpath(device) |
8 | |
9 | - LOG.info('Discovering raid devices and spares for %s', device) |
10 | - md_devs = ( |
11 | - mdadm.md_get_devices_list(blockdev) + |
12 | - mdadm.md_get_spares_list(blockdev)) |
13 | + if mdadm.md_is_in_container(blockdev): |
14 | + LOG.info('Array is in a container, skip discovering ' + |
15 | + 'raid devices and spares for %s', device) |
16 | + md_devs = [] |
17 | + else: |
18 | + LOG.info('Discovering raid devices and spares for %s', device) |
19 | + md_devs = ( |
20 | + mdadm.md_get_devices_list(blockdev) + |
21 | + mdadm.md_get_spares_list(blockdev)) |
22 | + |
23 | mdadm.set_sync_action(blockdev, action="idle") |
24 | mdadm.set_sync_action(blockdev, action="frozen") |
25 | |
26 | diff --git a/curtin/block/mdadm.py b/curtin/block/mdadm.py |
27 | index 32b467c..a6ac970 100644 |
28 | --- a/curtin/block/mdadm.py |
29 | +++ b/curtin/block/mdadm.py |
30 | @@ -26,6 +26,7 @@ from curtin.log import LOG |
31 | |
32 | NOSPARE_RAID_LEVELS = [ |
33 | 'linear', 'raid0', '0', 0, |
34 | + 'container' |
35 | ] |
36 | |
37 | SPARE_RAID_LEVELS = [ |
38 | @@ -145,8 +146,8 @@ def mdadm_assemble(md_devname=None, devices=[], spares=[], scan=False, |
39 | udev.udevadm_settle() |
40 | |
41 | |
42 | -def mdadm_create(md_devname, raidlevel, devices, spares=None, md_name="", |
43 | - metadata=None): |
44 | +def mdadm_create(md_devname, raidlevel, devices, spares=None, container=None, |
45 | + md_name="", metadata=None): |
46 | LOG.debug('mdadm_create: ' + |
47 | 'md_name=%s raidlevel=%s ' % (md_devname, raidlevel) + |
48 | ' devices=%s spares=%s name=%s' % (devices, spares, md_name)) |
49 | @@ -159,8 +160,11 @@ def mdadm_create(md_devname, raidlevel, devices, spares=None, md_name="", |
50 | raise ValueError('Invalid raidlevel: [{}]'.format(raidlevel)) |
51 | |
52 | min_devices = md_minimum_devices(raidlevel) |
53 | - if len(devices) < min_devices: |
54 | - err = 'Not enough devices for raidlevel: ' + str(raidlevel) |
55 | + devcnt = len(devices) if not container else \ |
56 | + len(md_get_devices_list(container)) |
57 | + if devcnt < min_devices: |
58 | + err = 'Not enough devices (' + str(devcnt) + ') ' |
59 | + err += 'for raidlevel: ' + str(raidlevel) |
60 | err += ' minimum devices needed: ' + str(min_devices) |
61 | raise ValueError(err) |
62 | |
63 | @@ -171,13 +175,20 @@ def mdadm_create(md_devname, raidlevel, devices, spares=None, md_name="", |
64 | (hostname, _err) = util.subp(["hostname", "-s"], rcs=[0], capture=True) |
65 | |
66 | cmd = ["mdadm", "--create", md_devname, "--run", |
67 | - "--metadata=%s" % metadata, |
68 | "--homehost=%s" % hostname.strip(), |
69 | - "--level=%s" % raidlevel, |
70 | - "--raid-devices=%s" % len(devices)] |
71 | + "--raid-devices=%s" % devcnt] |
72 | + |
73 | + if not container: |
74 | + cmd.append("--metadata=%s" % metadata) |
75 | + if raidlevel != 'container': |
76 | + cmd.append("--level=%s" % raidlevel) |
77 | + |
78 | if md_name: |
79 | cmd.append("--name=%s" % md_name) |
80 | |
81 | + if container: |
82 | + cmd.append(container) |
83 | + |
84 | for device in devices: |
85 | holders = get_holders(device) |
86 | if len(holders) > 0: |
87 | @@ -508,7 +519,8 @@ def md_sysfs_attr(md_devname, attrname): |
88 | |
89 | |
90 | def md_raidlevel_short(raidlevel): |
91 | - if isinstance(raidlevel, int) or raidlevel in ['linear', 'stripe']: |
92 | + if isinstance(raidlevel, int) or \ |
93 | + raidlevel in ['linear', 'stripe', 'container']: |
94 | return raidlevel |
95 | |
96 | return int(raidlevel.replace('raid', '')) |
97 | @@ -517,7 +529,7 @@ def md_raidlevel_short(raidlevel): |
98 | def md_minimum_devices(raidlevel): |
99 | ''' return the minimum number of devices for a given raid level ''' |
100 | rl = md_raidlevel_short(raidlevel) |
101 | - if rl in [0, 1, 'linear', 'stripe']: |
102 | + if rl in [0, 1, 'linear', 'stripe', 'container']: |
103 | return 2 |
104 | if rl in [5]: |
105 | return 3 |
106 | @@ -603,6 +615,11 @@ def __mdadm_detail_to_dict(input): |
107 | # start after the first newline |
108 | remainder = input[input.find('\n')+1:] |
109 | |
110 | + # keep only the first section (imsm container) |
111 | + arraysection = remainder.find('\n[') |
112 | + if arraysection != -1: |
113 | + remainder = remainder[:arraysection] |
114 | + |
115 | # FIXME: probably could do a better regex to match the LHS which |
116 | # has one, two or three words |
117 | rem = r'(\w+|\w+\ \w+|\w+\ \w+\ \w+)\ \:\ ([a-zA-Z0-9\-\.,: \(\)=\']+)' |
118 | @@ -837,4 +854,8 @@ def md_check(md_devname, raidlevel, devices=[], spares=[]): |
119 | LOG.debug('RAID array OK: ' + md_devname) |
120 | return True |
121 | |
122 | + |
123 | +def md_is_in_container(md_devname): |
124 | + return 'MD_CONTAINER' in mdadm_query_detail(md_devname) |
125 | + |
126 | # vi: ts=4 expandtab syntax=python |
127 | diff --git a/curtin/block/schemas.py b/curtin/block/schemas.py |
128 | index 9e2c41f..4dc2f0a 100644 |
129 | --- a/curtin/block/schemas.py |
130 | +++ b/curtin/block/schemas.py |
131 | @@ -316,12 +316,14 @@ RAID = { |
132 | 'preserve': {'$ref': '#/definitions/preserve'}, |
133 | 'ptable': {'$ref': '#/definitions/ptable'}, |
134 | 'spare_devices': {'$ref': '#/definitions/devices'}, |
135 | + 'container': {'$ref': '#/definitions/id'}, |
136 | 'type': {'const': 'raid'}, |
137 | 'raidlevel': { |
138 | 'type': ['integer', 'string'], |
139 | 'oneOf': [ |
140 | {'enum': [0, 1, 4, 5, 6, 10]}, |
141 | - {'enum': ['raid0', 'linear', '0', |
142 | + {'enum': ['container', |
143 | + 'raid0', 'linear', '0', |
144 | 'raid1', 'mirror', 'stripe', '1', |
145 | 'raid4', '4', |
146 | 'raid5', '5', |
147 | diff --git a/curtin/commands/block_meta.py b/curtin/commands/block_meta.py |
148 | index dee73b1..30fc817 100644 |
149 | --- a/curtin/commands/block_meta.py |
150 | +++ b/curtin/commands/block_meta.py |
151 | @@ -1486,21 +1486,30 @@ def raid_handler(info, storage_config): |
152 | raidlevel = info.get('raidlevel') |
153 | spare_devices = info.get('spare_devices') |
154 | md_devname = block.md_path(info.get('name')) |
155 | + container = info.get('container') |
156 | + metadata = info.get('metadata') |
157 | preserve = config.value_as_boolean(info.get('preserve')) |
158 | - if not devices: |
159 | - raise ValueError("devices for raid must be specified") |
160 | + if not devices and not container: |
161 | + raise ValueError("devices or container for raid must be specified") |
162 | if raidlevel not in ['linear', 'raid0', 0, 'stripe', 'raid1', 1, 'mirror', |
163 | - 'raid4', 4, 'raid5', 5, 'raid6', 6, 'raid10', 10]: |
164 | + 'raid4', 4, 'raid5', 5, 'raid6', 6, 'raid10', 10, |
165 | + 'container']: |
166 | raise ValueError("invalid raidlevel '%s'" % raidlevel) |
167 | - if raidlevel in ['linear', 'raid0', 0, 'stripe']: |
168 | + if raidlevel in ['linear', 'raid0', 0, 'stripe', 'container']: |
169 | if spare_devices: |
170 | raise ValueError("spareunsupported in raidlevel '%s'" % raidlevel) |
171 | |
172 | LOG.debug('raid: cfg: %s', util.json_dumps(info)) |
173 | - device_paths = list(get_path_to_storage_volume(dev, storage_config) for |
174 | - dev in devices) |
175 | - LOG.debug('raid: device path mapping: %s', |
176 | - list(zip(devices, device_paths))) |
177 | + |
178 | + container_dev = None |
179 | + device_paths = [] |
180 | + if container: |
181 | + container_dev = get_path_to_storage_volume(container, storage_config) |
182 | + else: |
183 | + device_paths = list(get_path_to_storage_volume(dev, storage_config) for |
184 | + dev in devices) |
185 | + LOG.debug('raid: device path mapping: {}'.format( |
186 | + zip(devices, device_paths))) |
187 | |
188 | spare_device_paths = [] |
189 | if spare_devices: |
190 | @@ -1517,8 +1526,8 @@ def raid_handler(info, storage_config): |
191 | |
192 | if create_raid: |
193 | mdadm.mdadm_create(md_devname, raidlevel, |
194 | - device_paths, spare_device_paths, |
195 | - info.get('mdname', '')) |
196 | + device_paths, spare_device_paths, container_dev, |
197 | + info.get('mdname', ''), metadata) |
198 | |
199 | wipe_mode = info.get('wipe') |
200 | if wipe_mode: |
201 | diff --git a/tests/unittests/test_block_mdadm.py b/tests/unittests/test_block_mdadm.py |
202 | index dba0f74..b04cf82 100644 |
203 | --- a/tests/unittests/test_block_mdadm.py |
204 | +++ b/tests/unittests/test_block_mdadm.py |
205 | @@ -97,7 +97,7 @@ class TestBlockMdadmCreate(CiTestCase): |
206 | self.mock_holders.return_value = [] |
207 | |
208 | def prepare_mock(self, md_devname, raidlevel, devices, spares, |
209 | - metadata=None): |
210 | + container=None, metadata=None): |
211 | side_effects = [] |
212 | expected_calls = [] |
213 | hostname = 'ubuntu' |
214 | @@ -120,10 +120,15 @@ class TestBlockMdadmCreate(CiTestCase): |
215 | side_effects.append(("", "")) # mdadm create |
216 | # build command how mdadm_create does |
217 | cmd = (["mdadm", "--create", md_devname, "--run", |
218 | - "--metadata=%s" % metadata, |
219 | - "--homehost=%s" % hostname, "--level=%s" % raidlevel, |
220 | - "--raid-devices=%s" % len(devices)] + |
221 | - devices) |
222 | + "--homehost=%s" % hostname, |
223 | + "--raid-devices=%s" % (len(devices) if not container else 4)]) |
224 | + if not container: |
225 | + cmd += ["--metadata=%s" % metadata] |
226 | + if raidlevel != 'container': |
227 | + cmd += ["--level=%s" % raidlevel] |
228 | + if container: |
229 | + cmd += [container] |
230 | + cmd += devices |
231 | if spares: |
232 | cmd += ["--spare-devices=%s" % len(spares)] + spares |
233 | |
234 | @@ -228,6 +233,48 @@ class TestBlockMdadmCreate(CiTestCase): |
235 | devices=devices, spares=spares) |
236 | self.mock_util.subp.assert_has_calls(expected_calls) |
237 | |
238 | + def test_mdadm_create_imsm_container(self): |
239 | + md_devname = "/dev/md/imsm" |
240 | + raidlevel = 'container' |
241 | + devices = ['/dev/nvme0n1', '/dev/nvme1n1', '/dev/nvme2n1'] |
242 | + metadata = 'imsm' |
243 | + spares = [] |
244 | + (side_effects, expected_calls) = self.prepare_mock(md_devname, |
245 | + raidlevel, |
246 | + devices, |
247 | + spares, |
248 | + None, |
249 | + metadata) |
250 | + |
251 | + self.mock_util.subp.side_effect = side_effects |
252 | + mdadm.mdadm_create(md_devname=md_devname, raidlevel=raidlevel, |
253 | + devices=devices, spares=spares, metadata=metadata) |
254 | + self.mock_util.subp.assert_has_calls(expected_calls) |
255 | + |
256 | + @patch("curtin.block._md_get_members_list") |
257 | + def test_mdadm_create_array_in_imsm_container(self, mock_get_members): |
258 | + md_devname = "/dev/md126" |
259 | + raidlevel = 5 |
260 | + devices = [] |
261 | + metadata = 'imsm' |
262 | + spares = [] |
263 | + container = "/dev/md/imsm" |
264 | + (side_effects, expected_calls) = self.prepare_mock(md_devname, |
265 | + raidlevel, |
266 | + devices, |
267 | + spares, |
268 | + container, |
269 | + metadata) |
270 | + |
271 | + self.mock_util.subp.side_effect = side_effects |
272 | + mock_get_members.return_value = [ |
273 | + '/dev/nvme0n1', '/dev/nvme1n1', '/dev/nvme2n1', '/dev/nvme3n1' |
274 | + ] |
275 | + mdadm.mdadm_create(md_devname=md_devname, raidlevel=raidlevel, |
276 | + devices=devices, spares=spares, |
277 | + container=container, metadata=metadata) |
278 | + self.mock_util.subp.assert_has_calls(expected_calls) |
279 | + |
280 | |
281 | class TestBlockMdadmExamine(CiTestCase): |
282 | def setUp(self): |
283 | @@ -315,6 +362,70 @@ class TestBlockMdadmExamine(CiTestCase): |
284 | self.mock_util.subp.assert_has_calls(expected_calls) |
285 | self.assertEqual(data, {}) |
286 | |
287 | + def test_mdadm_examine_no_export_imsm(self): |
288 | + self.mock_util.subp.return_value = ("""/dev/nvme0n1: |
289 | + Magic : Intel Raid ISM Cfg Sig. |
290 | + Version : 1.3.00 |
291 | + Orig Family : 6f8c68e3 |
292 | + Family : 6f8c68e3 |
293 | + Generation : 00000112 |
294 | + Attributes : All supported |
295 | + UUID : 7ec12162:ee5cd20b:0ac8b069:cfbd93ec |
296 | + Checksum : 4a5cebe2 correct |
297 | + MPB Sectors : 2 |
298 | + Disks : 4 |
299 | + RAID Devices : 1 |
300 | + |
301 | + Disk03 Serial : LJ910504Q41P0FGN |
302 | + State : active |
303 | + Id : 00000000 |
304 | + Usable Size : 1953514766 (931.51 GiB 1000.20 GB) |
305 | + |
306 | +[126]: |
307 | + UUID : f9792759:7f61d0c7:e7313d5a:2e7c2e22 |
308 | + RAID Level : 5 |
309 | + Members : 4 |
310 | + Slots : [UUUU] |
311 | + Failed disk : none |
312 | + This Slot : 3 |
313 | + Sector Size : 512 |
314 | + Array Size : 5860540416 (2794.52 GiB 3000.60 GB) |
315 | + Per Dev Size : 1953515520 (931.51 GiB 1000.20 GB) |
316 | + Sector Offset : 0 |
317 | + Num Stripes : 7630912 |
318 | + Chunk Size : 128 KiB |
319 | + Reserved : 0 |
320 | + Migrate State : idle |
321 | + Map State : normal |
322 | + Dirty State : dirty |
323 | + RWH Policy : off |
324 | + |
325 | + Disk00 Serial : LJ91040H2Y1P0FGN |
326 | + State : active |
327 | + Id : 00000003 |
328 | + Usable Size : 1953514766 (931.51 GiB 1000.20 GB) |
329 | + |
330 | + Disk01 Serial : LJ916308CZ1P0FGN |
331 | + State : active |
332 | + Id : 00000002 |
333 | + Usable Size : 1953514766 (931.51 GiB 1000.20 GB) |
334 | + |
335 | + Disk02 Serial : LJ916308RF1P0FGN |
336 | + State : active |
337 | + Id : 00000001 |
338 | + Usable Size : 1953514766 (931.51 GiB 1000.20 GB) |
339 | + """, "") # mdadm --examine /dev/nvme0n1 |
340 | + |
341 | + device = "/dev/nvme0n1" |
342 | + data = mdadm.mdadm_examine(device, export=False) |
343 | + |
344 | + expected_calls = [ |
345 | + call(["mdadm", "--examine", device], capture=True), |
346 | + ] |
347 | + self.mock_util.subp.assert_has_calls(expected_calls) |
348 | + self.assertEqual(data['uuid'], |
349 | + '7ec12162:ee5cd20b:0ac8b069:cfbd93ec') |
350 | + |
351 | |
352 | class TestBlockMdadmStop(CiTestCase): |
353 | def setUp(self): |
354 | @@ -1171,6 +1282,63 @@ class TestBlockMdadmMdHelpers(CiTestCase): |
355 | self.assertFalse(md_is_present) |
356 | self.mock_util.load_file.assert_called_with('/proc/mdstat') |
357 | |
358 | + def test_md_is_in_container_false(self): |
359 | + self.mock_util.subp.return_value = ( |
360 | + """ |
361 | + MD_LEVEL=raid1 |
362 | + MD_DEVICES=2 |
363 | + MD_METADATA=1.2 |
364 | + MD_UUID=93a73e10:427f280b:b7076c02:204b8f7a |
365 | + MD_NAME=wily-foobar:0 |
366 | + MD_DEVICE_vdc_ROLE=0 |
367 | + MD_DEVICE_vdc_DEV=/dev/vdc |
368 | + MD_DEVICE_vdd_ROLE=1 |
369 | + MD_DEVICE_vdd_DEV=/dev/vdd |
370 | + MD_DEVICE_vde_ROLE=spare |
371 | + MD_DEVICE_vde_DEV=/dev/vde |
372 | + """, "") |
373 | + |
374 | + device = "/dev/md0" |
375 | + self.mock_valid.return_value = True |
376 | + is_in_container = mdadm.md_is_in_container(device) |
377 | + |
378 | + expected_calls = [ |
379 | + call(["mdadm", "--query", "--detail", "--export", device], |
380 | + capture=True), |
381 | + ] |
382 | + self.mock_util.subp.assert_has_calls(expected_calls) |
383 | + self.assertEqual(is_in_container, False) |
384 | + |
385 | + def test_md_is_in_container_true(self): |
386 | + self.mock_util.subp.return_value = ( |
387 | + """ |
388 | + MD_LEVEL=raid5 |
389 | + MD_DEVICES=4 |
390 | + MD_CONTAINER=/dev/md/imsm0 |
391 | + MD_MEMBER=0 |
392 | + MD_UUID=5fa06b36:53e67142:37ff9ad6:44ef0e89 |
393 | + MD_DEVNAME=126 |
394 | + MD_DEVICE_ev_nvme2n1_ROLE=3 |
395 | + MD_DEVICE_ev_nvme2n1_DEV=/dev/nvme2n1 |
396 | + MD_DEVICE_ev_nvme1n1_ROLE=0 |
397 | + MD_DEVICE_ev_nvme1n1_DEV=/dev/nvme1n1 |
398 | + MD_DEVICE_ev_nvme0n1_ROLE=1 |
399 | + MD_DEVICE_ev_nvme0n1_DEV=/dev/nvme0n1 |
400 | + MD_DEVICE_ev_nvme3n1_ROLE=2 |
401 | + MD_DEVICE_ev_nvme3n1_DEV=/dev/nvme3n1 |
402 | + """, "") |
403 | + |
404 | + device = "/dev/md0" |
405 | + self.mock_valid.return_value = True |
406 | + is_in_container = mdadm.md_is_in_container(device) |
407 | + |
408 | + expected_calls = [ |
409 | + call(["mdadm", "--query", "--detail", "--export", device], |
410 | + capture=True), |
411 | + ] |
412 | + self.mock_util.subp.assert_has_calls(expected_calls) |
413 | + self.assertEqual(is_in_container, True) |
414 | + |
415 | |
416 | class TestBlockMdadmZeroDevice(CiTestCase): |
417 | |
418 | @@ -1243,6 +1411,4 @@ class TestBlockMdadmZeroDevice(CiTestCase): |
419 | self.mock_examine.assert_called_with(device, export=False) |
420 | self.m_zero.assert_called_with(device, expected_offsets, |
421 | buflen=1024, count=1024, strict=True) |
422 | - |
423 | - |
424 | # vi: ts=4 expandtab syntax=python |
425 | diff --git a/tests/unittests/test_clear_holders.py b/tests/unittests/test_clear_holders.py |
426 | index 25e9e79..d1c2590 100644 |
427 | --- a/tests/unittests/test_clear_holders.py |
428 | +++ b/tests/unittests/test_clear_holders.py |
429 | @@ -238,6 +238,7 @@ class TestClearHolders(CiTestCase): |
430 | mock_mdadm.md_present.return_value = False |
431 | mock_mdadm.md_get_devices_list.return_value = devices |
432 | mock_mdadm.md_get_spares_list.return_value = spares |
433 | + mock_mdadm.md_is_in_container.return_value = False |
434 | |
435 | clear_holders.shutdown_mdadm(self.test_syspath) |
436 | |
437 | @@ -256,6 +257,38 @@ class TestClearHolders(CiTestCase): |
438 | mock_mdadm.md_present.assert_called_with(self.test_blockdev) |
439 | self.assertTrue(mock_log.debug.called) |
440 | |
441 | + @mock.patch('curtin.block.wipe_volume') |
442 | + @mock.patch('curtin.block.path_to_kname') |
443 | + @mock.patch('curtin.block.sysfs_to_devpath') |
444 | + @mock.patch('curtin.block.clear_holders.time') |
445 | + @mock.patch('curtin.block.clear_holders.util') |
446 | + @mock.patch('curtin.block.clear_holders.LOG') |
447 | + @mock.patch('curtin.block.clear_holders.mdadm') |
448 | + def test_shutdown_mdadm_in_container(self, mock_mdadm, mock_log, mock_util, |
449 | + mock_time, mock_sysdev, mock_path, |
450 | + mock_wipe): |
451 | + """test clear_holders.shutdown_mdadm""" |
452 | + devices = ['/dev/wda1', '/dev/wda2'] |
453 | + spares = ['/dev/wdb1'] |
454 | + mock_sysdev.return_value = self.test_blockdev |
455 | + mock_path.return_value = self.test_blockdev |
456 | + mock_mdadm.md_present.return_value = False |
457 | + mock_mdadm.md_get_devices_list.return_value = devices |
458 | + mock_mdadm.md_get_spares_list.return_value = spares |
459 | + mock_mdadm.mdadm_query_detail.return_value = \ |
460 | + {'MD_CONTAINER': '/dev/md/imsm0'} |
461 | + |
462 | + clear_holders.shutdown_mdadm(self.test_syspath) |
463 | + |
464 | + mock_wipe.assert_called_with(self.test_blockdev, exclusive=False, |
465 | + mode='superblock', strict=True) |
466 | + mock_mdadm.set_sync_action.assert_has_calls([ |
467 | + mock.call(self.test_blockdev, action="idle"), |
468 | + mock.call(self.test_blockdev, action="frozen")]) |
469 | + mock_mdadm.mdadm_stop.assert_called_with(self.test_blockdev) |
470 | + mock_mdadm.md_present.assert_called_with(self.test_blockdev) |
471 | + self.assertTrue(mock_log.debug.called) |
472 | + |
473 | @mock.patch('curtin.block.clear_holders.os') |
474 | @mock.patch('curtin.block.clear_holders.time') |
475 | @mock.patch('curtin.block.clear_holders.util') |
476 | @@ -271,6 +304,7 @@ class TestClearHolders(CiTestCase): |
477 | mock_mdadm.md_present.return_value = True |
478 | mock_util.subp.return_value = ("", "") |
479 | mock_os.path.exists.return_value = True |
480 | + mock_mdadm.mdadm_query_detail.return_value = {} |
481 | |
482 | with self.assertRaises(OSError): |
483 | clear_holders.shutdown_mdadm(self.test_syspath) |
484 | @@ -295,6 +329,7 @@ class TestClearHolders(CiTestCase): |
485 | mock_block.path_to_kname.return_value = self.test_blockdev |
486 | mock_mdadm.md_present.return_value = True |
487 | mock_os.path.exists.return_value = False |
488 | + mock_mdadm.mdadm_query_detail.return_value = {} |
489 | |
490 | with self.assertRaises(OSError): |
491 | clear_holders.shutdown_mdadm(self.test_syspath) |
492 | diff --git a/tests/unittests/test_commands_block_meta.py b/tests/unittests/test_commands_block_meta.py |
493 | index d954296..98be573 100644 |
494 | --- a/tests/unittests/test_commands_block_meta.py |
495 | +++ b/tests/unittests/test_commands_block_meta.py |
496 | @@ -1892,7 +1892,7 @@ class TestRaidHandler(CiTestCase): |
497 | self.m_getpath.side_effect = iter(devices) |
498 | block_meta.raid_handler(self.storage_config['mddevice'], |
499 | self.storage_config) |
500 | - self.assertEqual([call(md_devname, 5, devices, [], '')], |
501 | + self.assertEqual([call(md_devname, 5, devices, [], None, '', None)], |
502 | self.m_mdadm.mdadm_create.call_args_list) |
503 | |
504 | @patch('curtin.commands.block_meta.raid_verify') |
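A side note on the detection logic added in the mdadm.py hunk above: the new md_is_in_container() simply checks for an MD_CONTAINER key in the exported array detail. Below is a standalone sketch of that idea, assuming only that `mdadm --query --detail --export` is available; it is not curtin's implementation, just a minimal illustration of the same check.

```python
# Standalone sketch: decide whether an md device is a member of an
# external-metadata container by looking for the MD_CONTAINER key in the
# key=value output of `mdadm --query --detail --export <dev>`.
import subprocess


def query_detail_export(md_devname):
    """Return the --export output of mdadm as a plain dict."""
    out = subprocess.run(
        ["mdadm", "--query", "--detail", "--export", md_devname],
        check=True, capture_output=True, text=True).stdout
    detail = {}
    for line in out.splitlines():
        line = line.strip()
        if "=" in line:
            key, _, value = line.partition("=")
            detail[key] = value
    return detail


def md_is_in_container(md_devname):
    """True if the array lives inside a container (e.g. an imsm container)."""
    return "MD_CONTAINER" in query_detail_export(md_devname)
```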
Thanks for working on this.
I've provided some feedback below in the code. In addition to what we have here, this will also need updates to curtin/block/schemas.py, since we're adding new fields to the raid structure.
diff --git a/curtin/block/schemas.py b/curtin/block/schemas.py
index 9e2c41fe..4dc2f0a7 100644
--- a/curtin/block/schemas.py
+++ b/curtin/block/schemas.py
@@ -316,12 +316,14 @@ RAID = {
         'preserve': {'$ref': '#/definitions/preserve'},
         'ptable': {'$ref': '#/definitions/ptable'},
         'spare_devices': {'$ref': '#/definitions/devices'},
+        'container': {'$ref': '#/definitions/id'},
         'type': {'const': 'raid'},
         'raidlevel': {
            'type': ['integer', 'string'],
            'oneOf': [
                {'enum': [0, 1, 4, 5, 6, 10]},
-               {'enum': ['raid0', 'linear', '0',
+               {'enum': ['container',
+                         'raid0', 'linear', '0',
                          'raid1', 'mirror', 'stripe', '1',
                          'raid4', '4',
                          'raid5', '5',
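For a rough sense of what the extended schema accepts, here is a minimal sketch using the jsonschema package against a simplified subset of the RAID schema. Only the fields relevant to this change are included; curtin's real schema and validation path are larger, so this is an illustration, not curtin's code.

```python
# Minimal sketch: a raid item that names a 'container' and omits 'devices'
# validates against a simplified subset of the extended RAID schema.
import jsonschema

RAID_SUBSET = {
    'type': 'object',
    'required': ['id', 'type', 'raidlevel'],
    'properties': {
        'id': {'type': 'string'},
        'type': {'const': 'raid'},
        'container': {'type': 'string'},  # new field added by this MP
        'raidlevel': {
            'type': ['integer', 'string'],
            'oneOf': [
                {'enum': [0, 1, 4, 5, 6, 10]},
                {'enum': ['container', 'raid0', 'linear', '0',
                          'raid1', 'mirror', 'stripe', '1',
                          'raid4', '4', 'raid5', '5']},
            ],
        },
    },
}

# Raises jsonschema.ValidationError if the item does not conform.
jsonschema.validate(
    instance={'id': 'md126', 'type': 'raid', 'raidlevel': 5,
              'container': 'imsm_container'},
    schema=RAID_SUBSET)
print("raid item with a 'container' reference validates")
```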