Merge ~falcojr/curtin:fix-mdraid-name into curtin:master
Status: Merged
Approved by: Ryan Harper
Approved revision: feb769baef03cf6dd8d6953bc971f7e51dd7d9e6
Merge reported by: Server Team CI bot
Merged at revision: not available
Proposed branch: ~falcojr/curtin:fix-mdraid-name
Merge into: curtin:master
Diff against target: 214 lines (+70/-11), 6 files modified:
  curtin/block/__init__.py (+14/-1)
  curtin/commands/block_meta.py (+2/-2)
  examples/tests/raid5boot.yaml (+1/-1)
  tests/unittests/test_commands_block_meta.py (+34/-5)
  tests/vmtests/__init__.py (+13/-0)
  tests/vmtests/test_mdadm_bcache.py (+6/-2)
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Server Team CI bot | continuous-integration | | Approve
Ryan Harper (community) | | | Approve
Review via email: mp+384209@code.launchpad.net
Commit message
Fix mdraid name creates broken configuration
Allow to pass a raid device name that works for either /dev/mdN
format or /dev/md/NAME format, with or without prefix.
LP: #1803933
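The normalization the commit message describes can be sketched standalone. This is a minimal rendition of the `md_path` helper this branch adds to `curtin/block/__init__.py`, assuming (per the curtin source) that `dev_path` simply prefixes bare names with `/dev/`:

```python
import re


def dev_path(devname):
    # Prefix bare device names with /dev/ (mirrors curtin.block.dev_path).
    if devname.startswith('/dev/'):
        return devname
    return '/dev/' + devname


def md_path(mdname):
    """Normalize a RAID device name to /dev/mdN or /dev/md/NAME."""
    full_mdname = dev_path(mdname)
    if full_mdname.startswith('/dev/md/'):
        # Already in named-array form, e.g. /dev/md/os-raid1
        return full_mdname
    elif re.match(r'/dev/md\d+$', full_mdname):
        # Numbered form, e.g. /dev/md0
        return full_mdname
    elif '/' in mdname:
        # Anything else containing a slash is not a valid md name
        raise ValueError("Invalid RAID device name: {}".format(mdname))
    else:
        # Bare extended name becomes /dev/md/NAME
        return '/dev/md/{}'.format(mdname)


print(md_path('md1'))        # → /dev/md1
print(md_path('os-raid1'))   # → /dev/md/os-raid1
```

So both the `/dev/mdN` and `/dev/md/NAME` spellings, with or without the `/dev/` prefix, resolve to a consistent device path.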
Description of the change
Server Team CI bot (server-team-bot) wrote:
Ryan Harper (raharper) wrote:
This looks good! I think we can update one or more of our raid vmtest configs to use the extended name, and we'll likely want to add a vmtest *unittest* to verify we find the device in /dev/md/XXXX.
See examples/
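The vmtest helper this branch adds resolves a /dev/md/NAME symlink to its kernel name by parsing collected `ls -al /dev/md/*` output. A standalone sketch of that parsing follows; the free-function name `mdname_to_kname` and the sample line are illustrative (in the diff this lives on `VMBaseClass` and reads the collected file):

```python
import os


def mdname_to_kname(ls_dev_md, mdname):
    """Map an mdadm name like 'os-raid1' to its kname (e.g. 'md127')
    by parsing `ls -al /dev/md/*` output."""
    # A matching line looks like:
    # lrwxrwxrwx 1 root root 8 May 28 16:26 /dev/md/os-raid1 -> ../md127
    # The symlink target is the last whitespace-separated field.
    knames = [os.path.basename(line.split()[-1])
              for line in ls_dev_md.splitlines()
              if mdname in line]
    if len(knames) != 1:
        raise ValueError("expected exactly one match for {}".format(mdname))
    return knames[0]


sample = ("lrwxrwxrwx 1 root root 8 May 28 16:26 "
          "/dev/md/os-raid1 -> ../md127")
print(mdname_to_kname(sample, 'os-raid1'))  # → md127
```

This indirection matters because the kernel assigns the mdN number (often md127 for named arrays), so tests can't hard-code it.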
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:aab45a3af8e
https:/
Executed test runs:
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:75935187bf2
https:/
Executed test runs:
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:51ccfd450bb
https:/
Executed test runs:
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:0ecec542308
https:/
Executed test runs:
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:eb3b5d2e4da
https:/
Executed test runs:
FAILURE: https:/
ABORTED: https:/
FAILURE: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:4481197790b
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
James Falcon (falcojr) wrote:
This is ready for re-review.
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:086ff18785b
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:feb769baef0
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Ryan Harper (raharper) wrote:
Thanks! This looks good. I verified the Raid5Boot test passes in vmtest.
Server Team CI bot (server-team-bot) wrote:
Commit message lints:
- Line #2 has 44 too many characters. Line starts with: "Allow to pass a raid"...
Server Team CI bot (server-team-bot):
Preview Diff
diff --git a/curtin/block/__init__.py b/curtin/block/__init__.py
index 35e3a64..a422264 100644
--- a/curtin/block/__init__.py
+++ b/curtin/block/__init__.py
@@ -1,5 +1,5 @@
 # This file is part of curtin. See LICENSE file for copyright and license info.
-
+import re
 from contextlib import contextmanager
 import errno
 import itertools
@@ -67,6 +67,19 @@ def dev_path(devname):
     return '/dev/' + devname
 
 
+def md_path(mdname):
+    """ Convert device name to path in /dev/md """
+    full_mdname = dev_path(mdname)
+    if full_mdname.startswith('/dev/md/'):
+        return full_mdname
+    elif re.match(r'/dev/md\d+$', full_mdname):
+        return full_mdname
+    elif '/' in mdname:
+        raise ValueError("Invalid RAID device name: {}".format(mdname))
+    else:
+        return '/dev/md/{}'.format(mdname)
+
+
 def path_to_kname(path):
     """
     converts a path in /dev or a path in /sys/block to the device kname,
diff --git a/curtin/commands/block_meta.py b/curtin/commands/block_meta.py
index ff0f2e9..4bc7da2 100644
--- a/curtin/commands/block_meta.py
+++ b/curtin/commands/block_meta.py
@@ -502,7 +502,7 @@ def get_path_to_storage_volume(volume, storage_config):
     elif vol.get('type') == "raid":
         # For raid partitions, block device is at /dev/mdX
         name = vol.get('name')
-        volume_path = os.path.join("/dev", name)
+        volume_path = block.md_path(name)
 
     elif vol.get('type') == "bcache":
         # For bcache setups, the only reliable way to determine the name of the
@@ -1485,7 +1485,7 @@ def raid_handler(info, storage_config):
     devices = info.get('devices')
     raidlevel = info.get('raidlevel')
     spare_devices = info.get('spare_devices')
-    md_devname = block.dev_path(info.get('name'))
+    md_devname = block.md_path(info.get('name'))
     preserve = config.value_as_boolean(info.get('preserve'))
     if not devices:
         raise ValueError("devices for raid must be specified")
diff --git a/examples/tests/raid5boot.yaml b/examples/tests/raid5boot.yaml
index b1df21d..d8afe7f 100644
--- a/examples/tests/raid5boot.yaml
+++ b/examples/tests/raid5boot.yaml
@@ -42,7 +42,7 @@ storage:
         size: 3GB
         device: sdc
       - id: mddevice
-        name: md0
+        name: os-raid1
         type: raid
         raidlevel: 5
         devices:
diff --git a/tests/unittests/test_commands_block_meta.py b/tests/unittests/test_commands_block_meta.py
index b768cdc..74a14a7 100644
--- a/tests/unittests/test_commands_block_meta.py
+++ b/tests/unittests/test_commands_block_meta.py
@@ -1779,6 +1779,7 @@ class TestRaidHandler(CiTestCase):
     def setUp(self):
         super(TestRaidHandler, self).setUp()
 
+        orig_md_path = block_meta.block.md_path
         basepath = 'curtin.commands.block_meta.'
         self.add_patch(basepath + 'get_path_to_storage_volume', 'm_getpath')
         self.add_patch(basepath + 'util', 'm_util')
@@ -1787,6 +1788,10 @@ class TestRaidHandler(CiTestCase):
         self.add_patch(basepath + 'block', 'm_block')
         self.add_patch(basepath + 'udevadm_settle', 'm_uset')
 
+        # The behavior of this function is being directly tested in
+        # these tests, so we can't mock it
+        self.m_block.md_path = orig_md_path
+
         self.target = "my_target"
         self.config = {
             'storage': {
@@ -1850,12 +1855,40 @@ class TestRaidHandler(CiTestCase):
             block_meta.extract_storage_ordered_dict(self.config))
         self.m_util.load_command_environment.return_value = {'fstab': None}
 
+    def test_md_name(self):
+        input_to_result = [
+            ('md1', '/dev/md1'),
+            ('os-raid1', '/dev/md/os-raid1'),
+            ('md/os-raid1', '/dev/md/os-raid1'),
+            ('/dev/md1', '/dev/md1'),
+            ('/dev/md/os-raid1', '/dev/md/os-raid1'),
+            ('bad/path', ValueError)
+        ]
+        for index, test in enumerate(input_to_result):
+            param, expected = test
+            self.storage_config['mddevice']['name'] = param
+            try:
+                block_meta.raid_handler(self.storage_config['mddevice'],
+                                        self.storage_config)
+            except ValueError:
+                if param in ['bad/path']:
+                    continue
+                else:
+                    raise
+
+            actual = self.m_mdadm.mdadm_create.call_args_list[index][0][0]
+            self.assertEqual(
+                expected,
+                actual,
+                "Expected {} to result in mdadm being called with {}. "
+                "mdadm instead called with {}".format(param, expected, actual)
+            )
+
     def test_raid_handler(self):
         """ raid_handler creates raid device. """
         devices = [self.random_string(), self.random_string(),
                    self.random_string()]
         md_devname = '/dev/' + self.storage_config['mddevice']['name']
-        self.m_block.dev_path.return_value = '/dev/md0'
         self.m_getpath.side_effect = iter(devices)
         block_meta.raid_handler(self.storage_config['mddevice'],
                                 self.storage_config)
@@ -1868,7 +1901,6 @@ class TestRaidHandler(CiTestCase):
 
         devices = [self.random_string(), self.random_string(),
                    self.random_string()]
-        self.m_block.dev_path.return_value = '/dev/md0'
         self.m_getpath.side_effect = iter(devices)
         m_verify.return_value = True
         self.storage_config['mddevice']['preserve'] = True
@@ -1882,7 +1914,6 @@ class TestRaidHandler(CiTestCase):
         devices = [self.random_string(), self.random_string(),
                    self.random_string()]
         md_devname = '/dev/' + self.storage_config['mddevice']['name']
-        self.m_block.dev_path.return_value = '/dev/md0'
         self.m_getpath.side_effect = iter(devices)
         self.m_mdadm.md_check.return_value = True
         self.storage_config['mddevice']['preserve'] = True
@@ -1898,7 +1929,6 @@ class TestRaidHandler(CiTestCase):
         devices = [self.random_string(), self.random_string(),
                    self.random_string()]
         md_devname = '/dev/' + self.storage_config['mddevice']['name']
-        self.m_block.dev_path.return_value = '/dev/md0'
         self.m_getpath.side_effect = iter(devices)
         self.m_mdadm.md_check.side_effect = iter([False, True])
         self.storage_config['mddevice']['preserve'] = True
@@ -1916,7 +1946,6 @@ class TestRaidHandler(CiTestCase):
         devices = [self.random_string(), self.random_string(),
                    self.random_string()]
         md_devname = '/dev/' + self.storage_config['mddevice']['name']
-        self.m_block.dev_path.return_value = '/dev/md0'
         self.m_getpath.side_effect = iter(devices)
         self.m_mdadm.md_check.side_effect = iter([False, False])
         self.storage_config['mddevice']['preserve'] = True
diff --git a/tests/vmtests/__init__.py b/tests/vmtests/__init__.py
index adfcd24..9b24d36 100644
--- a/tests/vmtests/__init__.py
+++ b/tests/vmtests/__init__.py
@@ -1761,6 +1761,19 @@ class VMBaseClass(TestCase):
         self.assertIsNotNone(kname)
         return kname
 
+    def _mdname_to_kname(self, mdname):
+        # extract kname from /dev/md/ on /dev/<kname>
+        # parsing ls -al output on /dev/md/*:
+        # lrwxrwxrwx 1 root root 8 May 28 16:26 /dev/md/os-raid1 -> ../md127
+        ls_dev_md = self.load_collect_file("ls_al_dev_md")
+        knames = [os.path.basename(line.split()[-1])
+                  for line in ls_dev_md.split('\n')
+                  if mdname in line]
+        self.assertEqual(len(knames), 1)
+        kname = knames.pop()
+        self.assertIsNotNone(kname)
+        return kname
+
     def _kname_to_bypath(self, kname):
         # extract path from /dev/disk/by-path on /dev/<kname>
         # parsing ls -al output on /dev/disk/by-path
diff --git a/tests/vmtests/test_mdadm_bcache.py b/tests/vmtests/test_mdadm_bcache.py
index 8e250cc..53eaa5e 100644
--- a/tests/vmtests/test_mdadm_bcache.py
+++ b/tests/vmtests/test_mdadm_bcache.py
@@ -26,6 +26,7 @@ class TestMdadmAbs(VMBaseClass):
         ls -al /dev/bcache* > lsal_dev_bcache_star
         ls -al /dev/bcache/by-uuid/ | cat >ls_al_bcache_byuuid
         ls -al /dev/bcache/by-label/ | cat >ls_al_bcache_bylabel
+        ls -al /dev/md/* | cat >ls_al_dev_md
 
         exit 0
         """)]
@@ -359,11 +360,14 @@ class TestRaid5bootAbs(TestMdadmAbs):
         ('main_disk', 2),
         ('second_disk', 1),
         ('third_disk', 1),
-        ('md0', 0)]
+        ('os-raid1', 0)]
 
     def get_fstab_expected(self):
+        kname = self._mdname_to_kname('os-raid1')
         return [
-            (self._kname_to_uuid_devpath('md-uuid', 'md0'), '/', 'defaults'),
+            (self._kname_to_uuid_devpath('md-uuid', kname),
+             '/',
+             'defaults'),
         ]
FAILED: Continuous integration, rev:f538c89eda5
https://jenkins.ubuntu.com/server/job/curtin-ci/115/
Executed test runs:
FAILURE: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-amd64/115/
FAILURE: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-arm64/115/
FAILURE: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-ppc64el/115/
FAILURE: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-s390x/115/
Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/curtin-ci/115//rebuild