Merge ~raharper/curtin:fix/wipe-zfs-no-utils into curtin:master
Status: Merged
Approved by: Ryan Harper
Approved revision: 90e4ec0a1e86d96d86cf1a0d2fb960aee998dd92
Merge reported by: Server Team CI bot
Merged at revision: not available
Proposed branch: ~raharper/curtin:fix/wipe-zfs-no-utils
Merge into: curtin:master
Diff against target: 443 lines (+213/-43), 7 files modified
- curtin/block/__init__.py (+14/-0)
- curtin/block/clear_holders.py (+8/-9)
- curtin/block/zfs.py (+18/-7)
- curtin/commands/block_meta.py (+3/-2)
- tests/unittests/test_block.py (+35/-0)
- tests/unittests/test_block_zfs.py (+80/-24)
- tests/unittests/test_clear_holders.py (+55/-1)
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Server Team CI bot | continuous-integration | | Approve
Scott Moser (community) | | | Approve
Review via email: mp+350697@code.launchpad.net
Commit message
clear-holders: handle missing zpool/zfs tools when wiping
Allow curtin to continue to wipe disks on systems where zfs
kernel module is present but zfsutils-linux is not.
LP: #1782744
Description of the change
Server Team CI bot (server-team-bot) wrote:
Scott Moser (smoser) wrote:
Hmm.
So we do have 'zfs_supported' in our zfs module.
It seems like it would make sense to use that.
Here is something to add a check of the needed utilities to that:
http://
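A utilities check along those lines might look like the following minimal sketch. It is standalone (using stdlib `shutil.which` rather than curtin's `util.which`, which is an assumption here), not the actual proposal behind the link above:

```python
import shutil

def missing_zfs_utils():
    """Return the zfs userspace tools ('zpool', 'zfs') not found on PATH."""
    return [prog for prog in ('zpool', 'zfs') if shutil.which(prog) is None]

# A support check could then fail early with a clear message:
missing = missing_zfs_utils()
if missing:
    print("Missing zfs utils: %s" % ','.join(missing))
```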
Ryan Harper (raharper) wrote:
Updated this with changes from Scott.
Needs a vmtest run.
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:7accde148f0
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) wrote:
We discussed this some in a hangout today.
I think http://
What do you think?
Ryan Harper (raharper) wrote:
I think we need some changes to zfs_supported(), which I think you mentioned
was intended to be a wrapper around zfs_assert_supported().
IIUC, the design was to allow:
if zfs.zfs_supported():
    # do zfs stuff here
And that zfs_supported would always return a boolean (it does), but it also
doesn't check anything. I think maybe it could be:
try:
    return zfs.zfs_assert_supported()
except RuntimeError:
    return False
And zfs_assert_supported() would raise RuntimeError when zfs cannot be used.
However, clear-holders wants to attempt to make zfs ready,
so it can't call zfs_assert_supported() without handling the
RuntimeError.
I'd also want to run the assertion only once; there are multiple
calls to zfs_supported (or the assert) through looping (see wipe_superblock)
or any zfs config with multiple pools or zfs volumes. I'd like to
avoid repeating the checks and the attempted module loading, since one time
is sufficient to determine if it's going to work.
What about this:
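As a standalone illustration of the run-once boolean wrapper idea described above (the names, the module-level cache, and the stub `probe` are hypothetical, not curtin's actual code):

```python
_ZFS_SUPPORTED = None  # module-level cache; None means "not yet probed"

def zfs_assert_supported(probe=lambda: False):
    """Raise RuntimeError unless the (stubbed) probe says zfs is usable."""
    if not probe():
        raise RuntimeError("zfs is not supported in this environment")
    return True

def zfs_supported(probe=lambda: False):
    """Boolean wrapper over the asserting check, evaluated at most once."""
    global _ZFS_SUPPORTED
    if _ZFS_SUPPORTED is None:
        try:
            _ZFS_SUPPORTED = zfs_assert_supported(probe)
        except RuntimeError:
            _ZFS_SUPPORTED = False
    return _ZFS_SUPPORTED
```

Note the trade-off: once cached, later calls never re-probe, which is exactly the staleness concern raised further down in the thread.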
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:89bdcf9ae46
https:/
Executed test runs:
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Ryan Harper (raharper) wrote:
vmtest run going here:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:315ea078dcf
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) wrote:
I don't love the caching of zfs_supported.
The primary reason is that zfs_supported might get set to False,
and then 'apt-get install -qy zfsutils-linux' could run,
and then it would stay False.
I guess it's not specifically a problem here now, as curtin
never does an install of zfsutils-linux, but if it did,
then we could effectively cache the wrong value.
That could happen, though, with a curtin command
executed through user-provided config.
Performance-wise, this is what you're saving with the cache:
$ python3 -m timeit --setup="from curtin.block import zfs" "zfs.zfs_assert_supported()"
1000 loops, best of 3: 238 usec per loop
So less than 1/1000 of a second for that call.
I'm fine with caching it, but balance the desire to do so with the potential of a stale cache.
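One way to keep the cache while limiting staleness is an explicit invalidation hook; a hypothetical sketch (not curtin's API), where `probe` stands in for the real detection logic:

```python
_cache = {}

def zfs_supported(probe):
    """Cached support check; runs the probe only on the first call."""
    if 'supported' not in _cache:
        _cache['supported'] = bool(probe())
    return _cache['supported']

def invalidate_zfs_supported():
    """Drop the cached answer, e.g. after installing zfsutils-linux."""
    _cache.pop('supported', None)
```

Anything that changes the environment (a package install, a module load) would call invalidate_zfs_supported() so the next check re-probes.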
Scott Moser (smoser):
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:20188284fe8
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) wrote:
I have one nit inline... You can take it or leave it.
Server Team CI bot (server-team-bot) wrote:
FAILED: Autolanding.
More details in the following jenkins job:
https:/
Executed test runs:
None: https:/
Scott Moser (smoser) wrote:
This has a merge conflict.
Can you rebase and then set back to Approve?
https:/
CONFLICT (content): Merge conflict in tests/unittests
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:90e4ec0a1e8
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
FAILED: Autolanding.
More details in the following jenkins job:
https:/
Executed test runs:
None: https:/
Server Team CI bot (server-team-bot):
Preview Diff
1 | diff --git a/curtin/block/__init__.py b/curtin/block/__init__.py |
2 | index b49b9d3..b771629 100644 |
3 | --- a/curtin/block/__init__.py |
4 | +++ b/curtin/block/__init__.py |
5 | @@ -1074,4 +1074,18 @@ def detect_required_packages_mapping(): |
6 | } |
7 | return mapping |
8 | |
9 | + |
10 | +def get_supported_filesystems(): |
11 | + """ Return a list of filesystems that the kernel currently supports |
12 | + as read from /proc/filesystems. |
13 | + |
14 | + Raises RuntimeError if /proc/filesystems does not exist. |
15 | + """ |
16 | + proc_fs = "/proc/filesystems" |
17 | + if not os.path.exists(proc_fs): |
18 | + raise RuntimeError("Unable to read 'filesystems' from %s" % proc_fs) |
19 | + |
20 | + return [l.split('\t')[1].strip() |
21 | + for l in util.load_file(proc_fs).splitlines()] |
22 | + |
23 | # vi: ts=4 expandtab syntax=python |
24 | diff --git a/curtin/block/clear_holders.py b/curtin/block/clear_holders.py |
25 | index a2042d5..a05c9ca 100644 |
26 | --- a/curtin/block/clear_holders.py |
27 | +++ b/curtin/block/clear_holders.py |
28 | @@ -304,11 +304,14 @@ def wipe_superblock(device): |
29 | partitions = block.get_sysfs_partitions(device) |
30 | |
31 | # release zfs member by exporting the pool |
32 | - if block.is_zfs_member(blockdev): |
33 | + if zfs.zfs_supported() and block.is_zfs_member(blockdev): |
34 | poolname = zfs.device_to_poolname(blockdev) |
35 | # only export pools that have been imported |
36 | if poolname in zfs.zpool_list(): |
37 | - zfs.zpool_export(poolname) |
38 | + try: |
39 | + zfs.zpool_export(poolname) |
40 | + except util.ProcessExecutionError as e: |
41 | + LOG.warning('Failed to export zpool "%s": %s', poolname, e) |
42 | |
43 | if is_swap_device(blockdev): |
44 | shutdown_swap(blockdev) |
45 | @@ -633,13 +636,9 @@ def start_clear_holders_deps(): |
46 | # happens then there is no need to halt installation, as the bcache devices |
47 | # will never appear and will never prevent the disk from being reformatted |
48 | util.load_kernel_module('bcache') |
49 | - # the zfs module is needed to find and export devices which may be in-use |
50 | - # and need to be cleared, only on xenial+. |
51 | - try: |
52 | - if zfs.zfs_supported(): |
53 | - util.load_kernel_module('zfs') |
54 | - except RuntimeError as e: |
55 | - LOG.warning('Failed to load zfs kernel module: %s', e) |
56 | + |
57 | + if not zfs.zfs_supported(): |
58 | + LOG.warning('zfs filesystem is not supported in this environment') |
59 | |
60 | |
61 | # anything that is not identified can assumed to be a 'disk' or similar |
62 | diff --git a/curtin/block/zfs.py b/curtin/block/zfs.py |
63 | index cfb07a9..e279ab6 100644 |
64 | --- a/curtin/block/zfs.py |
65 | +++ b/curtin/block/zfs.py |
66 | @@ -8,7 +8,7 @@ import os |
67 | |
68 | from curtin.config import merge_config |
69 | from curtin import util |
70 | -from . import blkid |
71 | +from . import blkid, get_supported_filesystems |
72 | |
73 | ZPOOL_DEFAULT_PROPERTIES = { |
74 | 'ashift': 12, |
75 | @@ -73,6 +73,15 @@ def _join_pool_volume(poolname, volume): |
76 | |
77 | |
78 | def zfs_supported(): |
79 | + """Return a boolean indicating if zfs is supported.""" |
80 | + try: |
81 | + zfs_assert_supported() |
82 | + return True |
83 | + except RuntimeError: |
84 | + return False |
85 | + |
86 | + |
87 | +def zfs_assert_supported(): |
88 | """ Determine if the runtime system supports zfs. |
89 | returns: True if system supports zfs |
90 | raises: RuntimeError: if system does not support zfs |
91 | @@ -85,13 +94,15 @@ def zfs_supported(): |
92 | if release in ZFS_UNSUPPORTED_RELEASES: |
93 | raise RuntimeError("zfs is not supported on release: %s" % release) |
94 | |
95 | - try: |
96 | - util.subp(['modinfo', 'zfs'], capture=True) |
97 | - except util.ProcessExecutionError as err: |
98 | - if err.stderr.startswith("modinfo: ERROR: Module zfs not found."): |
99 | - raise RuntimeError("zfs kernel module is not available: %s" % err) |
100 | + if 'zfs' not in get_supported_filesystems(): |
101 | + try: |
102 | + util.load_kernel_module('zfs') |
103 | + except util.ProcessExecutionError as err: |
104 | + raise RuntimeError("Failed to load 'zfs' kernel module: %s" % err) |
105 | |
106 | - return True |
107 | + missing_progs = [p for p in ('zpool', 'zfs') if not util.which(p)] |
108 | + if missing_progs: |
109 | + raise RuntimeError("Missing zfs utils: %s" % ','.join(missing_progs)) |
110 | |
111 | |
112 | def zpool_create(poolname, vdevs, mountpoint=None, altroot=None, |
113 | diff --git a/curtin/commands/block_meta.py b/curtin/commands/block_meta.py |
114 | index 63193f5..6bd430d 100644 |
115 | --- a/curtin/commands/block_meta.py |
116 | +++ b/curtin/commands/block_meta.py |
117 | @@ -1264,7 +1264,7 @@ def zpool_handler(info, storage_config): |
118 | """ |
119 | Create a zpool based in storage_configuration |
120 | """ |
121 | - zfs.zfs_supported() |
122 | + zfs.zfs_assert_supported() |
123 | |
124 | state = util.load_command_environment() |
125 | |
126 | @@ -1299,7 +1299,8 @@ def zfs_handler(info, storage_config): |
127 | """ |
128 | Create a zfs filesystem |
129 | """ |
130 | - zfs.zfs_supported() |
131 | + zfs.zfs_assert_supported() |
132 | + |
133 | state = util.load_command_environment() |
134 | poolname = get_poolname(info, storage_config) |
135 | volume = info.get('volume') |
136 | diff --git a/tests/unittests/test_block.py b/tests/unittests/test_block.py |
137 | index d9b19a4..9cf8383 100644 |
138 | --- a/tests/unittests/test_block.py |
139 | +++ b/tests/unittests/test_block.py |
140 | @@ -647,4 +647,39 @@ class TestSlaveKnames(CiTestCase): |
141 | knames = block.get_device_slave_knames(device) |
142 | self.assertEqual(slaves, knames) |
143 | |
144 | + |
145 | +class TestGetSupportedFilesystems(CiTestCase): |
146 | + |
147 | + supported_filesystems = ['sysfs', 'rootfs', 'ramfs', 'ext4'] |
148 | + |
149 | + def _proc_filesystems_output(self, supported=None): |
150 | + if not supported: |
151 | + supported = self.supported_filesystems |
152 | + |
153 | + def devname(fsname): |
154 | + """ in-use filesystem modules not emit the 'nodev' prefix """ |
155 | + return '\t' if fsname.startswith('ext') else 'nodev\t' |
156 | + |
157 | + return '\n'.join([devname(fs) + fs for fs in supported]) + '\n' |
158 | + |
159 | + @mock.patch('curtin.block.util') |
160 | + @mock.patch('curtin.block.os') |
161 | + def test_get_supported_filesystems(self, mock_os, mock_util): |
162 | + """ test parsing /proc/filesystems contents into a filesystem list""" |
163 | + mock_os.path.exists.return_value = True |
164 | + mock_util.load_file.return_value = self._proc_filesystems_output() |
165 | + |
166 | + result = block.get_supported_filesystems() |
167 | + self.assertEqual(sorted(self.supported_filesystems), sorted(result)) |
168 | + |
169 | + @mock.patch('curtin.block.util') |
170 | + @mock.patch('curtin.block.os') |
171 | + def test_get_supported_filesystems_no_proc_path(self, mock_os, mock_util): |
172 | + """ missing /proc/filesystems raises RuntimeError """ |
173 | + mock_os.path.exists.return_value = False |
174 | + with self.assertRaises(RuntimeError): |
175 | + block.get_supported_filesystems() |
176 | + self.assertEqual(0, mock_util.load_file.call_count) |
177 | + |
178 | + |
179 | # vi: ts=4 expandtab syntax=python |
180 | diff --git a/tests/unittests/test_block_zfs.py b/tests/unittests/test_block_zfs.py |
181 | index c61a6da..ca8f118 100644 |
182 | --- a/tests/unittests/test_block_zfs.py |
183 | +++ b/tests/unittests/test_block_zfs.py |
184 | @@ -378,10 +378,10 @@ class TestBlockZfsDeviceToPoolname(CiTestCase): |
185 | self.mock_blkid.assert_called_with(devs=[devname]) |
186 | |
187 | |
188 | -class TestBlockZfsZfsSupported(CiTestCase): |
189 | +class TestBlockZfsAssertZfsSupported(CiTestCase): |
190 | |
191 | def setUp(self): |
192 | - super(TestBlockZfsZfsSupported, self).setUp() |
193 | + super(TestBlockZfsAssertZfsSupported, self).setUp() |
194 | self.add_patch('curtin.block.zfs.util.subp', 'mock_subp') |
195 | self.add_patch('curtin.block.zfs.util.get_platform_arch', 'mock_arch') |
196 | self.add_patch('curtin.block.zfs.util.lsb_release', 'mock_release') |
197 | @@ -394,34 +394,41 @@ class TestBlockZfsZfsSupported(CiTestCase): |
198 | def test_unsupported_arch(self): |
199 | self.mock_arch.return_value = 'i386' |
200 | with self.assertRaises(RuntimeError): |
201 | - zfs.zfs_supported() |
202 | + zfs.zfs_assert_supported() |
203 | |
204 | def test_unsupported_releases(self): |
205 | for rel in ['precise', 'trusty']: |
206 | self.mock_release.return_value = {'codename': rel} |
207 | with self.assertRaises(RuntimeError): |
208 | - zfs.zfs_supported() |
209 | + zfs.zfs_assert_supported() |
210 | |
211 | - def test_missing_module(self): |
212 | - missing = 'modinfo: ERROR: Module zfs not found.\n ' |
213 | + @mock.patch('curtin.block.zfs.util.is_kmod_loaded') |
214 | + @mock.patch('curtin.block.zfs.get_supported_filesystems') |
215 | + def test_missing_module(self, mock_supfs, mock_kmod): |
216 | + missing = 'modprobe: FATAL: Module zfs not found.\n ' |
217 | self.mock_subp.side_effect = ProcessExecutionError(stdout='', |
218 | stderr=missing, |
219 | exit_code='1') |
220 | + mock_supfs.return_value = ['ext4'] |
221 | + mock_kmod.return_value = False |
222 | with self.assertRaises(RuntimeError): |
223 | - zfs.zfs_supported() |
224 | + zfs.zfs_assert_supported() |
225 | |
226 | |
227 | -class TestZfsSupported(CiTestCase): |
228 | +class TestAssertZfsSupported(CiTestCase): |
229 | |
230 | def setUp(self): |
231 | - super(TestZfsSupported, self).setUp() |
232 | + super(TestAssertZfsSupported, self).setUp() |
233 | |
234 | + @mock.patch('curtin.block.zfs.get_supported_filesystems') |
235 | @mock.patch('curtin.block.zfs.util') |
236 | - def test_zfs_supported_returns_true(self, mock_util): |
237 | - """zfs_supported returns True on supported platforms""" |
238 | + def test_zfs_assert_supported_returns_true(self, mock_util, mock_supfs): |
239 | + """zfs_assert_supported returns True on supported platforms""" |
240 | mock_util.get_platform_arch.return_value = 'amd64' |
241 | mock_util.lsb_release.return_value = {'codename': 'bionic'} |
242 | mock_util.subp.return_value = ("", "") |
243 | + mock_supfs.return_value = ['zfs'] |
244 | + mock_util.which.side_effect = iter(['/wark/zpool', '/wark/zfs']) |
245 | |
246 | self.assertNotIn(mock_util.get_platform_arch.return_value, |
247 | zfs.ZFS_UNSUPPORTED_ARCHES) |
248 | @@ -430,45 +437,94 @@ class TestZfsSupported(CiTestCase): |
249 | self.assertTrue(zfs.zfs_supported()) |
250 | |
251 | @mock.patch('curtin.block.zfs.util') |
252 | - def test_zfs_supported_raises_exception_on_bad_arch(self, mock_util): |
253 | - """zfs_supported raises RuntimeError on unspported arches""" |
254 | + def test_zfs_assert_supported_raises_exception_on_bad_arch(self, |
255 | + mock_util): |
256 | + """zfs_assert_supported raises RuntimeError on unspported arches""" |
257 | mock_util.lsb_release.return_value = {'codename': 'bionic'} |
258 | mock_util.subp.return_value = ("", "") |
259 | for arch in zfs.ZFS_UNSUPPORTED_ARCHES: |
260 | mock_util.get_platform_arch.return_value = arch |
261 | with self.assertRaises(RuntimeError): |
262 | - zfs.zfs_supported() |
263 | + zfs.zfs_assert_supported() |
264 | |
265 | @mock.patch('curtin.block.zfs.util') |
266 | - def test_zfs_supported_raises_execption_on_bad_releases(self, mock_util): |
267 | - """zfs_supported raises RuntimeError on unspported releases""" |
268 | + def test_zfs_assert_supported_raises_exc_on_bad_releases(self, mock_util): |
269 | + """zfs_assert_supported raises RuntimeError on unspported releases""" |
270 | mock_util.get_platform_arch.return_value = 'amd64' |
271 | mock_util.subp.return_value = ("", "") |
272 | for release in zfs.ZFS_UNSUPPORTED_RELEASES: |
273 | mock_util.lsb_release.return_value = {'codename': release} |
274 | with self.assertRaises(RuntimeError): |
275 | - zfs.zfs_supported() |
276 | + zfs.zfs_assert_supported() |
277 | |
278 | @mock.patch('curtin.block.zfs.util.subprocess.Popen') |
279 | + @mock.patch('curtin.block.zfs.util.is_kmod_loaded') |
280 | + @mock.patch('curtin.block.zfs.get_supported_filesystems') |
281 | @mock.patch('curtin.block.zfs.util.lsb_release') |
282 | @mock.patch('curtin.block.zfs.util.get_platform_arch') |
283 | - def test_zfs_supported_raises_exception_on_missing_module(self, |
284 | - m_arch, |
285 | - m_release, |
286 | - m_popen): |
287 | - """zfs_supported raises RuntimeError on missing zfs module""" |
288 | + def test_zfs_assert_supported_raises_exc_on_missing_module(self, |
289 | + m_arch, |
290 | + m_release, |
291 | + m_supfs, |
292 | + m_kmod, |
293 | + m_popen, |
294 | + ): |
295 | + """zfs_assert_supported raises RuntimeError modprobe zfs error""" |
296 | |
297 | m_arch.return_value = 'amd64' |
298 | m_release.return_value = {'codename': 'bionic'} |
299 | + m_supfs.return_value = ['ext4'] |
300 | + m_kmod.return_value = False |
301 | process_mock = mock.Mock() |
302 | attrs = { |
303 | 'returncode': 1, |
304 | 'communicate.return_value': |
305 | - ('output', "modinfo: ERROR: Module zfs not found."), |
306 | + ('output', 'modprobe: FATAL: Module zfs not found ...'), |
307 | } |
308 | process_mock.configure_mock(**attrs) |
309 | m_popen.return_value = process_mock |
310 | with self.assertRaises(RuntimeError): |
311 | - zfs.zfs_supported() |
312 | + zfs.zfs_assert_supported() |
313 | + |
314 | + @mock.patch('curtin.block.zfs.get_supported_filesystems') |
315 | + @mock.patch('curtin.block.zfs.util.lsb_release') |
316 | + @mock.patch('curtin.block.zfs.util.get_platform_arch') |
317 | + @mock.patch('curtin.block.zfs.util') |
318 | + def test_zfs_assert_supported_raises_exc_on_missing_binaries(self, |
319 | + mock_util, |
320 | + m_arch, |
321 | + m_release, |
322 | + m_supfs): |
323 | + """zfs_assert_supported raises RuntimeError if no zpool or zfs tools""" |
324 | + mock_util.get_platform_arch.return_value = 'amd64' |
325 | + mock_util.lsb_release.return_value = {'codename': 'bionic'} |
326 | + mock_util.subp.return_value = ("", "") |
327 | + m_supfs.return_value = ['zfs'] |
328 | + mock_util.which.return_value = None |
329 | + |
330 | + with self.assertRaises(RuntimeError): |
331 | + zfs.zfs_assert_supported() |
332 | + |
333 | + |
334 | +class TestZfsSupported(CiTestCase): |
335 | + |
336 | + @mock.patch('curtin.block.zfs.zfs_assert_supported') |
337 | + def test_zfs_supported(self, m_assert_zfs): |
338 | + zfs_supported = True |
339 | + m_assert_zfs.return_value = zfs_supported |
340 | + |
341 | + result = zfs.zfs_supported() |
342 | + self.assertEqual(zfs_supported, result) |
343 | + self.assertEqual(1, m_assert_zfs.call_count) |
344 | + |
345 | + @mock.patch('curtin.block.zfs.zfs_assert_supported') |
346 | + def test_zfs_supported_returns_false_on_assert_fail(self, m_assert_zfs): |
347 | + zfs_supported = False |
348 | + m_assert_zfs.side_effect = RuntimeError('No zfs module') |
349 | + |
350 | + result = zfs.zfs_supported() |
351 | + self.assertEqual(zfs_supported, result) |
352 | + self.assertEqual(1, m_assert_zfs.call_count) |
353 | + |
354 | |
355 | # vi: ts=4 expandtab syntax=python |
356 | diff --git a/tests/unittests/test_clear_holders.py b/tests/unittests/test_clear_holders.py |
357 | index 21f76be..d3f80a0 100644 |
358 | --- a/tests/unittests/test_clear_holders.py |
359 | +++ b/tests/unittests/test_clear_holders.py |
360 | @@ -6,6 +6,7 @@ import os |
361 | import textwrap |
362 | |
363 | from curtin.block import clear_holders |
364 | +from curtin.util import ProcessExecutionError |
365 | from .helpers import CiTestCase |
366 | |
367 | |
368 | @@ -558,6 +559,7 @@ class TestClearHolders(CiTestCase): |
369 | self.assertFalse(mock_block.wipe_volume.called) |
370 | mock_block.is_extended_partition.return_value = False |
371 | mock_block.is_zfs_member.return_value = True |
372 | + mock_zfs.zfs_supported.return_value = True |
373 | mock_zfs.device_to_poolname.return_value = 'fake_pool' |
374 | mock_zfs.zpool_list.return_value = ['fake_pool'] |
375 | clear_holders.wipe_superblock(self.test_syspath) |
376 | @@ -567,6 +569,58 @@ class TestClearHolders(CiTestCase): |
377 | self.test_blockdev, exclusive=True, mode='superblock') |
378 | |
379 | @mock.patch('curtin.block.clear_holders.is_swap_device') |
380 | + @mock.patch('curtin.block.clear_holders.zfs') |
381 | + @mock.patch('curtin.block.clear_holders.LOG') |
382 | + @mock.patch('curtin.block.clear_holders.block') |
383 | + def test_clear_holders_wipe_superblock_no_zfs(self, mock_block, mock_log, |
384 | + mock_zfs, mock_swap): |
385 | + """test clear_holders.wipe_superblock checks zfs supported""" |
386 | + mock_swap.return_value = False |
387 | + mock_block.sysfs_to_devpath.return_value = self.test_blockdev |
388 | + mock_block.is_extended_partition.return_value = True |
389 | + clear_holders.wipe_superblock(self.test_syspath) |
390 | + self.assertFalse(mock_block.wipe_volume.called) |
391 | + mock_block.is_extended_partition.return_value = False |
392 | + mock_block.is_zfs_member.return_value = True |
393 | + mock_zfs.zfs_supported.return_value = False |
394 | + clear_holders.wipe_superblock(self.test_syspath) |
395 | + mock_block.sysfs_to_devpath.assert_called_with(self.test_syspath) |
396 | + self.assertEqual(1, mock_zfs.zfs_supported.call_count) |
397 | + self.assertEqual(0, mock_block.is_zfs_member.call_count) |
398 | + self.assertEqual(0, mock_zfs.device_to_poolname.call_count) |
399 | + self.assertEqual(0, mock_zfs.zpool_list.call_count) |
400 | + mock_block.wipe_volume.assert_called_with( |
401 | + self.test_blockdev, exclusive=True, mode='superblock') |
402 | + |
403 | + @mock.patch('curtin.block.clear_holders.is_swap_device') |
404 | + @mock.patch('curtin.block.clear_holders.zfs') |
405 | + @mock.patch('curtin.block.clear_holders.LOG') |
406 | + @mock.patch('curtin.block.clear_holders.block') |
407 | + def test_clear_holders_wipe_superblock_zfs_no_utils(self, mock_block, |
408 | + mock_log, mock_zfs, |
409 | + mock_swap): |
410 | + """test clear_holders.wipe_superblock handles missing zpool cmd""" |
411 | + mock_swap.return_value = False |
412 | + mock_block.sysfs_to_devpath.return_value = self.test_blockdev |
413 | + mock_block.is_extended_partition.return_value = True |
414 | + clear_holders.wipe_superblock(self.test_syspath) |
415 | + self.assertFalse(mock_block.wipe_volume.called) |
416 | + mock_block.is_extended_partition.return_value = False |
417 | + mock_block.is_zfs_member.return_value = True |
418 | + mock_zfs.zfs_supported.return_value = True |
419 | + mock_zfs.device_to_poolname.return_value = 'fake_pool' |
420 | + mock_zfs.zpool_list.return_value = ['fake_pool'] |
421 | + mock_zfs.zpool_export.side_effect = [ |
422 | + ProcessExecutionError(cmd=['zpool', 'export', 'fake_pool'], |
423 | + stdout="", |
424 | + stderr=("cannot open 'fake_pool': " |
425 | + "no such pool"))] |
426 | + clear_holders.wipe_superblock(self.test_syspath) |
427 | + mock_block.sysfs_to_devpath.assert_called_with(self.test_syspath) |
428 | + mock_block.wipe_volume.assert_called_with( |
429 | + self.test_blockdev, exclusive=True, mode='superblock') |
430 | + |
431 | + @mock.patch('curtin.block.clear_holders.is_swap_device') |
432 | @mock.patch('curtin.block.clear_holders.time') |
433 | @mock.patch('curtin.block.clear_holders.LOG') |
434 | @mock.patch('curtin.block.clear_holders.block') |
435 | @@ -790,7 +844,7 @@ class TestClearHolders(CiTestCase): |
436 | mock_mdadm.mdadm_assemble.assert_called_with( |
437 | scan=True, ignore_errors=True) |
438 | mock_util.load_kernel_module.assert_has_calls([ |
439 | - mock.call('bcache'), mock.call('zfs')]) |
440 | + mock.call('bcache')]) |
441 | |
442 | @mock.patch('curtin.block.clear_holders.lvm') |
443 | @mock.patch('curtin.block.clear_holders.zfs') |
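The `/proc/filesystems` parsing that the diff adds in `get_supported_filesystems` can be illustrated standalone; this sketch mirrors the same tab-split logic on made-up sample content:

```python
def parse_proc_filesystems(content):
    """Parse /proc/filesystems-style text into a list of filesystem names.

    Each line is '<"nodev" or empty>\t<fsname>'; splitting on the tab and
    taking the second field drops the nodev marker.
    """
    return [line.split('\t')[1].strip() for line in content.splitlines()]

sample = "nodev\tsysfs\nnodev\tramfs\n\text4\n"
print(parse_proc_filesystems(sample))  # ['sysfs', 'ramfs', 'ext4']
```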
PASSED: Continuous integration, rev:749fb758b10ff19e8393debcbe358354785cdf2c
https://jenkins.ubuntu.com/server/job/curtin-ci/999/
Executed test runs:
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-arm64/999
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-ppc64el/999
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-s390x/999
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=torkoal/999
Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/curtin-ci/999/rebuild