Merge lp:~wesley-wiedenmeier/curtin/enum-storage-config into lp:~curtin-dev/curtin/trunk
Status: Work in progress
Proposed branch: lp:~wesley-wiedenmeier/curtin/enum-storage-config
Merge into: lp:~curtin-dev/curtin/trunk
Diff against target: 1857 lines (+799/-352), 11 files modified
  curtin/block/__init__.py (+14/-7)
  curtin/block/mkfs.py (+111/-97)
  curtin/block/storage_config.py (+218/-0)
  curtin/commands/block_meta.py (+239/-219)
  curtin/commands/block_wipe.py (+6/-3)
  curtin/commands/curthooks.py (+4/-4)
  curtin/commands/mkfs.py (+6/-4)
  curtin/enum.py (+90/-0)
  curtin/reporter/events.py (+4/-11)
  tests/unittests/test_block_mkfs.py (+19/-7)
  tests/unittests/test_enum.py (+88/-0)
To merge this branch: bzr merge lp:~wesley-wiedenmeier/curtin/enum-storage-config
Related bugs:
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Server Team CI bot | continuous-integration | | Needs Fixing |
| curtin developers | | | Pending |
Review via email:
Commit message
Description of the change
Whenever possible, replace strings in storage config with enums and do enum comparison rather than string comparison for reliability.
This branch isn't quite ready yet; there is a bit more I wanted to add. I'm just using this merge proposal to trigger vmtests.
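For illustration, the pattern this branch introduces can be sketched with the stdlib enum module. The branch's own curtin.enum.make_enum and convert_to_enum helpers are not reproduced here; the names and behavior below are inferred from the diff, and the real helpers also provide a fallback when the builtin enum module is missing.

```python
from enum import Enum

# Stand-in for curtin.enum.make_enum, using the stdlib Enum functional API
WIPE_MODES = Enum('WIPE_MODES', ['superblock', 'zero', 'random', 'pvremove',
                                 'superblock_recursive'])

def convert_to_enum(enum_t, value):
    """Convert a config string to an enum member; pass members through."""
    if isinstance(value, enum_t):
        return value
    try:
        return enum_t[value]  # lookup by member name, e.g. 'zero'
    except KeyError:
        raise ValueError("%r is not a valid %s" % (value, enum_t.__name__))

mode = convert_to_enum(WIPE_MODES, 'zero')
assert mode is WIPE_MODES.zero  # identity comparison instead of string compare
```

A mistyped mode string now fails loudly at conversion time with ValueError, rather than silently falling through a chain of string comparisons.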
Server Team CI bot (server-team-bot) wrote:
Ryan Harper (raharper) wrote:
Thanks for starting this conversion. I've put some comments inline below.
As I read through the changes to block_meta, something occurred to me that
I think we should discuss.
AFAICT, while we get stronger "typing" with the enum class, the resulting
code is more verbose and, I'd say, slightly more opaque, since one needs to
look up the enum types and such.
I found it useful to provide sanity checks on the specific strings/keys we use
in the config yaml, specifically the various 'lvm_partition' versus 'lvm_partitions'
sort of error, and I'm wondering if it would be simpler to provide a yaml validator
and introspection API that MAAS could use to query curtin's storage config
for the set of types allowed and the keys per type. I think that same API or method
call could be used in the curtin/
the enum efforts; just remove the human part of emitting the list of types.
Thoughts?
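A minimal sketch of the validator/introspection idea described above. Everything here is hypothetical, not existing curtin API: the allowed types and per-type keys are derived from static tables (the same data the enum branch encodes in SC_KEY_MAP and the *_KEYS enums), and a caller such as MAAS could query them or validate an entry.

```python
# Hypothetical introspection API sketch; curtin does not expose this today.
GENERAL_KEYS = ('id', 'type', 'name', 'preserve')

TYPE_KEYS = {
    'disk': ('ptable', 'wipe', 'serial', 'path', 'grub_device'),
    'partition': ('device', 'size', 'flag', 'number', 'wipe'),
    'format': ('volume', 'uuid', 'label', 'fstype'),
    'lvm_partition': ('name', 'volgroup', 'size'),
}

def allowed_types():
    """Types a storage config entry may declare."""
    return sorted(TYPE_KEYS)

def allowed_keys(cmd_type):
    """Keys valid for one config type (general keys plus type-specific)."""
    if cmd_type not in TYPE_KEYS:
        raise ValueError("unknown storage config type: %r" % cmd_type)
    return GENERAL_KEYS + TYPE_KEYS[cmd_type]

def validate_entry(entry):
    """Return a list of problems found in a single storage config entry."""
    cmd_type = entry.get('type')
    if cmd_type not in TYPE_KEYS:
        # catches e.g. 'lvm_partitions' vs 'lvm_partition'
        return ["invalid type: %r" % cmd_type]
    return ["unknown key %r for type %r" % (k, cmd_type)
            for k in entry if k not in allowed_keys(cmd_type)]
```

The same tables could drive a standalone config validator, with the enum work reduced to generating TYPE_KEYS rather than threading enum members through every handler.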
Wesley Wiedenmeier (wesley-wiedenmeier) wrote:
I definitely agree that the enum system makes configuration processing more verbose. While it does fix some potential errors from typos in strings, I also think it introduces quite a few new ones, mainly because calling str() on either the builtin or the fallback enum type does not produce useful output. For example, I just figured out that the CI tests failed because I had accidentally used 'fstype' instead of 'fstype.name' when generating the fstab.
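The str() pitfall described here can be shown with the stdlib enum module, standing in for both the builtin and curtin's fallback type:

```python
from enum import Enum

FS_TYPES = Enum('FS_TYPES', ['ext4', 'xfs', 'swap'])

# Interpolating a member directly uses str(), which includes the class name,
# so an fstab line built this way is silently wrong:
bad = "/dev/vda1 / %s defaults 0 1" % FS_TYPES.ext4
assert bad == "/dev/vda1 / FS_TYPES.ext4 defaults 0 1"

# The .name attribute holds the plain config string that fstab expects:
good = "/dev/vda1 / %s defaults 0 1" % FS_TYPES.ext4.name
assert good == "/dev/vda1 / ext4 defaults 0 1"
```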
I do think that the lists of command types and corresponding lists of possible command keys are useful, though. With those in place it would be pretty easy to write a simple API for MAAS to use to get the keys to use for configuration. It would also be easy to write a standalone curtin configuration validator, which would be pretty nice on its own.
I think a good balance between strict typing and simple code would be something similar to the old util.NameSet class, but written so that it inserts keys into its __dict__ rather than implementing __getattr__. I could keep the logic from convert_
I'll try making the changes in a different branch, just for comparison. I'll go through the inline comments here first, though, and address those; thanks for looking through this.
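A sketch of the middle ground described above. The old util.NameSet is not reproduced here, so this shape is an assumption: member names live directly in the instance __dict__ as plain strings, so str() output and yaml/json dumps stay useful, while a typo'd attribute access still fails loudly with AttributeError.

```python
class NameSet(object):
    """Namespace of allowed names; members are plain strings."""

    def __init__(self, *names):
        for name in names:
            # insert directly into __dict__ instead of overriding __getattr__
            self.__dict__[name] = name

    def __contains__(self, name):
        return name in self.__dict__

    def __iter__(self):
        return iter(self.__dict__)

WIPE_MODES = NameSet('superblock', 'zero', 'random')
assert WIPE_MODES.zero == 'zero'   # attribute access, value is just a str
assert 'random' in WIPE_MODES      # simple validation of config strings
```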
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:415
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:419
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:421
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
- 422. By Wesley Wiedenmeier
  Use a (str, bytes) tuple instead of six.string_types, as the six module is not
  present on the precise ephemeral image.
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:422
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
- 423. By Wesley Wiedenmeier
  In unittests.test_enum, skip the enum test class if the tests are running in an
  environment that does not have the builtin enum module available. Since the
  tests validate both the builtin and fallback enum modules, it is okay to skip
  these tests when the builtin enum is not available, as they will still fully
  cover the enum module.
  In block_meta, make a few style fixes to get tox passing for the trusty version
  of pep8, which seems to have different opinions about hanging indents than the
  xenial version of pep8.
Unmerged revisions
- 423. By Wesley Wiedenmeier
  In unittests.test_enum, skip the enum test class if the tests are running in an
  environment that does not have the builtin enum module available. Since the
  tests validate both the builtin and fallback enum modules, it is okay to skip
  these tests when the builtin enum is not available, as they will still fully
  cover the enum module.
  In block_meta, make a few style fixes to get tox passing for the trusty version
  of pep8, which seems to have different opinions about hanging indents than the
  xenial version of pep8.
- 422. By Wesley Wiedenmeier
  Use a (str, bytes) tuple instead of six.string_types, as the six module is not
  present on the precise ephemeral image.
- 421. By Wesley Wiedenmeier
  Use attr_name in enum_t in convert_to_enum, as the fallback enum class does not
  work with isinstance.
- 420. By Wesley Wiedenmeier
  Clean up filesystem type logic in mount_handler.
- 419. By Wesley Wiedenmeier
  Use GEN_KEYS.id instead of 'id' in bcache_handler.
- 418. By Wesley Wiedenmeier
  Use cache_mode.name instead of cache_mode when setting up bcache.
- 417. By Wesley Wiedenmeier
  Don't attempt to use json.dumps on a dict containing enums in raid_handler.
- 416. By Wesley Wiedenmeier
  Fix enum processing in determine_partition_number and clean up logic.
- 415. By Wesley Wiedenmeier
  Skip config items missing a type instead of aborting the installation.
- 414. By Wesley Wiedenmeier
  Add a convert_to_enum function in curtin.enum and use it in functions using
  enums.
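Revision 421's note ("fallback enum class does not work with isinstance") can be illustrated with a hypothetical fallback; curtin's real class is not reproduced here. If the fallback stores its members as plain strings, isinstance against the container class proves nothing, so a conversion helper has to match by attribute name instead.

```python
# Hypothetical stand-in for curtin's fallback enum: members are plain strings
class FallbackEnum(object):
    def __init__(self, name, members):
        self.__name__ = name
        for member in members:
            setattr(self, member, member)

WIPE_MODES = FallbackEnum('WIPE_MODES', ('superblock', 'zero'))

# isinstance cannot tell a member from any other string...
assert not isinstance(WIPE_MODES.zero, FallbackEnum)
assert isinstance(WIPE_MODES.zero, str)

# ...so the conversion helper checks by attribute name instead:
def convert_to_enum(enum_t, value):
    if not hasattr(enum_t, str(value)):
        raise ValueError("%r is not a valid %s" % (value, enum_t.__name__))
    return getattr(enum_t, str(value))

assert convert_to_enum(WIPE_MODES, 'zero') == 'zero'
```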
Preview Diff
1 | === modified file 'curtin/block/__init__.py' |
2 | --- curtin/block/__init__.py 2016-04-22 16:24:00 +0000 |
3 | +++ curtin/block/__init__.py 2016-06-14 21:02:46 +0000 |
4 | @@ -22,7 +22,7 @@ |
5 | import tempfile |
6 | import itertools |
7 | |
8 | -from curtin import util |
9 | +from curtin import (util, enum) |
10 | from curtin.udev import udevadm_settle |
11 | from curtin.log import LOG |
12 | |
13 | @@ -635,7 +635,12 @@ |
14 | fp.write(buf) |
15 | |
16 | |
17 | -def wipe_volume(path, mode="superblock"): |
18 | +WIPE_MODES = enum.make_enum('WIPE_MODES', |
19 | + ("superblock", "zero", "random", "pvremove", |
20 | + "superblock_recursive")) |
21 | + |
22 | + |
23 | +def wipe_volume(path, mode=WIPE_MODES.superblock): |
24 | """wipe a volume/block device |
25 | |
26 | :param path: a path to a block device |
27 | @@ -648,7 +653,9 @@ |
28 | volume and beginning and end of any partitions that are |
29 | known to be on this device. |
30 | """ |
31 | - if mode == "pvremove": |
32 | + mode = enum.convert_to_enum(WIPE_MODES, mode) |
33 | + |
34 | + if mode == WIPE_MODES.pvremove: |
35 | # We need to use --force --force in case it's already in a volgroup and |
36 | # pvremove doesn't want to remove it |
37 | cmds = [] |
38 | @@ -660,14 +667,14 @@ |
39 | # wiping something that is already blank |
40 | for cmd in cmds: |
41 | util.subp(cmd, rcs=[0, 5], capture=True) |
42 | - elif mode == "zero": |
43 | + elif mode == WIPE_MODES.zero: |
44 | wipe_file(path) |
45 | - elif mode == "random": |
46 | + elif mode == WIPE_MODES.random: |
47 | with open("/dev/urandom", "rb") as reader: |
48 | wipe_file(path, reader=reader.read) |
49 | - elif mode == "superblock": |
50 | + elif mode == WIPE_MODES.superblock: |
51 | quick_zero(path, partitions=False) |
52 | - elif mode == "superblock-recursive": |
53 | + elif mode == WIPE_MODES.superblock_recursive: |
54 | quick_zero(path, partitions=True) |
55 | else: |
56 | raise ValueError("wipe mode %s not supported" % mode) |
57 | |
58 | === modified file 'curtin/block/mkfs.py' |
59 | --- curtin/block/mkfs.py 2016-04-14 19:24:47 +0000 |
60 | +++ curtin/block/mkfs.py 2016-06-14 21:02:46 +0000 |
61 | @@ -18,96 +18,107 @@ |
62 | # This module wraps calls to mkfs.<fstype> and determines the appropriate flags |
63 | # for each filesystem type |
64 | |
65 | -from curtin import util |
66 | -from curtin import block |
67 | +from curtin import (util, block, enum) |
68 | |
69 | import string |
70 | import os |
71 | from uuid import uuid1 |
72 | |
73 | +FMT_KEYS = enum.make_enum('FMT_KEYS', ('volume', 'uuid', 'label', 'fstype')) |
74 | +FS_TYPES = enum.make_enum( |
75 | + 'FS_TYPES', ('ext', 'ext2', 'ext3', 'ext4', |
76 | + 'fat', 'vfat', 'fat12', 'fat16', 'fat32', 'fat64', |
77 | + 'btrfs', 'jfs', 'ntfs', 'reiserfs', 'swap', 'xfs',)) |
78 | +FS_FLAGS = enum.make_enum( |
79 | + 'FS_FLAGS', ('label', 'uuid', 'force', 'fatsize', 'quiet', 'sectorsize',)) |
80 | + |
81 | mkfs_commands = { |
82 | - "btrfs": "mkfs.btrfs", |
83 | - "ext2": "mkfs.ext2", |
84 | - "ext3": "mkfs.ext3", |
85 | - "ext4": "mkfs.ext4", |
86 | - "fat": "mkfs.vfat", |
87 | - "fat12": "mkfs.vfat", |
88 | - "fat16": "mkfs.vfat", |
89 | - "fat32": "mkfs.vfat", |
90 | - "vfat": "mkfs.vfat", |
91 | - "jfs": "jfs_mkfs", |
92 | - "ntfs": "mkntfs", |
93 | - "reiserfs": "mkfs.reiserfs", |
94 | - "swap": "mkswap", |
95 | - "xfs": "mkfs.xfs" |
96 | + FS_TYPES.btrfs: "mkfs.btrfs", |
97 | + FS_TYPES.ext: "mkfs.ext4", |
98 | + FS_TYPES.ext2: "mkfs.ext2", |
99 | + FS_TYPES.ext3: "mkfs.ext3", |
100 | + FS_TYPES.ext4: "mkfs.ext4", |
101 | + FS_TYPES.fat: "mkfs.vfat", |
102 | + FS_TYPES.fat12: "mkfs.vfat", |
103 | + FS_TYPES.fat16: "mkfs.vfat", |
104 | + FS_TYPES.fat32: "mkfs.vfat", |
105 | + FS_TYPES.fat64: "mkfs.vfat", |
106 | + FS_TYPES.vfat: "mkfs.vfat", |
107 | + FS_TYPES.jfs: "jfs_mkfs", |
108 | + FS_TYPES.ntfs: "mkntfs", |
109 | + FS_TYPES.reiserfs: "mkfs.reiserfs", |
110 | + FS_TYPES.swap: "mkswap", |
111 | + FS_TYPES.xfs: "mkfs.xfs" |
112 | } |
113 | |
114 | specific_to_family = { |
115 | - "ext2": "ext", |
116 | - "ext3": "ext", |
117 | - "ext4": "ext", |
118 | - "fat12": "fat", |
119 | - "fat16": "fat", |
120 | - "fat32": "fat", |
121 | - "vfat": "fat", |
122 | + FS_TYPES.ext2: FS_TYPES.ext, |
123 | + FS_TYPES.ext3: FS_TYPES.ext, |
124 | + FS_TYPES.ext4: FS_TYPES.ext, |
125 | + FS_TYPES.fat12: FS_TYPES.fat, |
126 | + FS_TYPES.fat16: FS_TYPES.fat, |
127 | + FS_TYPES.fat32: FS_TYPES.fat, |
128 | + FS_TYPES.fat64: FS_TYPES.fat, |
129 | + FS_TYPES.vfat: FS_TYPES.fat, |
130 | } |
131 | |
132 | label_length_limits = { |
133 | - "btrfs": 256, |
134 | - "ext": 16, |
135 | - "fat": 11, |
136 | - "jfs": 16, # see jfs_tune manpage |
137 | - "ntfs": 32, |
138 | - "reiserfs": 16, |
139 | - "swap": 15, # not in manpages, found experimentally |
140 | - "xfs": 12 |
141 | + FS_TYPES.btrfs: 256, |
142 | + FS_TYPES.ext: 16, |
143 | + FS_TYPES.fat: 11, |
144 | + FS_TYPES.jfs: 16, # see jfs_tune manpage |
145 | + FS_TYPES.ntfs: 32, |
146 | + FS_TYPES.reiserfs: 16, |
147 | + FS_TYPES.swap: 15, # not in manpages, found experimentally |
148 | + FS_TYPES.xfs: 12, |
149 | } |
150 | |
151 | family_flag_mappings = { |
152 | - "label": {"btrfs": "--label", |
153 | - "ext": "-L", |
154 | - "fat": "-n", |
155 | - "jfs": "-L", |
156 | - "ntfs": "--label", |
157 | - "reiserfs": "--label", |
158 | - "swap": "--label", |
159 | - "xfs": "-L"}, |
160 | - "uuid": {"btrfs": "--uuid", |
161 | - "ext": "-U", |
162 | - "reiserfs": "--uuid", |
163 | - "swap": "--uuid"}, |
164 | - "force": {"btrfs": "--force", |
165 | - "ext": "-F", |
166 | - "ntfs": "--force", |
167 | - "reiserfs": "-f", |
168 | - "swap": "--force", |
169 | - "xfs": "-f"}, |
170 | - "fatsize": {"fat": "-F"}, |
171 | - "quiet": {"ext": "-q", |
172 | - "ntfs": "-q", |
173 | - "reiserfs": "-q", |
174 | - "xfs": "--quiet"}, |
175 | - "sectorsize": { |
176 | - "btrfs": "--sectorsize", |
177 | - "ext": "-b", |
178 | - "fat": "-S", |
179 | - "ntfs": "--sector-size", |
180 | - "reiserfs": "--block-size"} |
181 | + FS_FLAGS.label: { |
182 | + FS_TYPES.btrfs: "--label", |
183 | + FS_TYPES.ext: "-L", |
184 | + FS_TYPES.fat: "-n", |
185 | + FS_TYPES.jfs: "-L", |
186 | + FS_TYPES.ntfs: "--label", |
187 | + FS_TYPES.reiserfs: "--label", |
188 | + FS_TYPES.swap: "--label", |
189 | + FS_TYPES.xfs: "-L"}, |
190 | + FS_FLAGS.uuid: { |
191 | + FS_TYPES.btrfs: "--uuid", |
192 | + FS_TYPES.ext: "-U", |
193 | + FS_TYPES.reiserfs: "--uuid", |
194 | + FS_TYPES.swap: "--uuid"}, |
195 | + FS_FLAGS.force: { |
196 | + FS_TYPES.btrfs: "--force", |
197 | + FS_TYPES.ext: "-F", |
198 | + FS_TYPES.ntfs: "--force", |
199 | + FS_TYPES.reiserfs: "-f", |
200 | + FS_TYPES.swap: "--force", |
201 | + FS_TYPES.xfs: "-f"}, |
202 | + FS_FLAGS.fatsize: { |
203 | + FS_TYPES.fat: "-F"}, |
204 | + FS_FLAGS.quiet: { |
205 | + FS_TYPES.ext: "-q", |
206 | + FS_TYPES.ntfs: "-q", |
207 | + FS_TYPES.reiserfs: "-q", |
208 | + FS_TYPES.xfs: "--quiet"}, |
209 | + FS_FLAGS.sectorsize: { |
210 | + FS_TYPES.btrfs: "--sectorsize", |
211 | + FS_TYPES.ext: "-b", |
212 | + FS_TYPES.fat: "-S", |
213 | + FS_TYPES.ntfs: "--sector-size", |
214 | + FS_TYPES.reiserfs: "--block-size"} |
215 | } |
216 | |
217 | release_flag_mapping_overrides = { |
218 | "precise": { |
219 | - "force": {"btrfs": None}, |
220 | - "uuid": {"btrfs": None}}, |
221 | + FS_FLAGS.force: {FS_TYPES.btrfs: None}, |
222 | + FS_FLAGS.uuid: {FS_TYPES.btrfs: None}}, |
223 | "trusty": { |
224 | - "uuid": {"btrfs": None}}, |
225 | + FS_FLAGS.uuid: {FS_TYPES.btrfs: None}}, |
226 | } |
227 | |
228 | |
229 | -def valid_fstypes(): |
230 | - return list(mkfs_commands.keys()) |
231 | - |
232 | - |
233 | def get_flag_mapping(flag_name, fs_family, param=None, strict=False): |
234 | ret = [] |
235 | release = util.lsb_release()['codename'] |
236 | @@ -117,13 +128,13 @@ |
237 | else: |
238 | flag_sym_families = family_flag_mappings.get(flag_name) |
239 | if flag_sym_families is None: |
240 | - raise ValueError("unsupported flag '%s'" % flag_name) |
241 | + raise ValueError("unsupported flag '%s'" % flag_name.name) |
242 | flag_sym = flag_sym_families.get(fs_family) |
243 | |
244 | if flag_sym is None: |
245 | if strict: |
246 | raise ValueError("flag '%s' not supported by fs family '%s'" % |
247 | - flag_name, fs_family) |
248 | + flag_name.name, fs_family.name) |
249 | else: |
250 | ret = [flag_sym] |
251 | if param is not None: |
252 | @@ -132,20 +143,22 @@ |
253 | |
254 | |
255 | def mkfs(path, fstype, strict=False, label=None, uuid=None, force=False): |
256 | - """Make filesystem on block device with given path using given fstype and |
257 | - appropriate flags for filesystem family. |
258 | - |
259 | - Filesystem uuid and label can be passed in as kwargs. By default no |
260 | - label or uuid will be used. If a filesystem label is too long curtin |
261 | - will raise a ValueError if the strict flag is true or will truncate |
262 | - it to the maximum possible length. |
263 | - |
264 | - If a flag is not supported by a filesystem family mkfs will raise a |
265 | - ValueError if the strict flag is true or silently ignore it otherwise. |
266 | - |
267 | - Force can be specified to force the mkfs command to continue even if it |
268 | - finds old data or filesystems on the partition. |
269 | - """ |
270 | + """ |
271 | + Make filesystem on block device with given path using given fstype and |
272 | + appropriate flags for filesystem family. |
273 | + |
274 | + Filesystem uuid and label can be passed in as kwargs. By default no |
275 | + label or uuid will be used. If a filesystem label is too long curtin |
276 | + will raise a ValueError if the strict flag is true or will truncate |
277 | + it to the maximum possible length. |
278 | + |
279 | + If a flag is not supported by a filesystem family mkfs will raise a |
280 | + ValueError if the strict flag is true or silently ignore it otherwise. |
281 | + |
282 | + Force can be specified to force the mkfs command to continue even if it |
283 | + finds old data or filesystems on the partition. |
284 | + """ |
285 | + fstype = enum.convert_to_enum(FS_TYPES, fstype) |
286 | |
287 | if path is None: |
288 | raise ValueError("invalid block dev path '%s'" % path) |
289 | @@ -155,7 +168,7 @@ |
290 | fs_family = specific_to_family.get(fstype, fstype) |
291 | mkfs_cmd = mkfs_commands.get(fstype) |
292 | if not mkfs_cmd: |
293 | - raise ValueError("unsupported fs type '%s'" % fstype) |
294 | + raise ValueError("unsupported fs type '%s'" % fstype.name) |
295 | |
296 | if util.which(mkfs_cmd) is None: |
297 | raise ValueError("need '%s' but it could not be found" % mkfs_cmd) |
298 | @@ -165,7 +178,7 @@ |
299 | # use device logical block size to ensure properly formated filesystems |
300 | (logical_bsize, physical_bsize) = block.get_blockdev_sector_size(path) |
301 | if logical_bsize > 512: |
302 | - cmd.extend(get_flag_mapping("sectorsize", fs_family, |
303 | + cmd.extend(get_flag_mapping(FS_FLAGS.sectorsize, fs_family, |
304 | param=str(logical_bsize), |
305 | strict=strict)) |
306 | # mkfs.vfat doesn't calculate this right for non-512b sector size |
307 | @@ -173,36 +186,37 @@ |
308 | cmd.extend(["-s", "1"]) |
309 | |
310 | if force: |
311 | - cmd.extend(get_flag_mapping("force", fs_family, strict=strict)) |
312 | + cmd.extend(get_flag_mapping(FS_FLAGS.force, fs_family, strict=strict)) |
313 | if label is not None: |
314 | limit = label_length_limits.get(fs_family) |
315 | if len(label) > limit: |
316 | if strict: |
317 | raise ValueError("length of fs label for '%s' exceeds max \ |
318 | allowed for fstype '%s'. max is '%s'" |
319 | - % (path, fstype, limit)) |
320 | + % (path, fstype.name, limit)) |
321 | else: |
322 | label = label[:limit] |
323 | - cmd.extend(get_flag_mapping("label", fs_family, param=label, |
324 | + cmd.extend(get_flag_mapping(FS_FLAGS.label, fs_family, param=label, |
325 | strict=strict)) |
326 | |
327 | # If uuid is not specified, generate one and try to use it |
328 | if uuid is None: |
329 | uuid = str(uuid1()) |
330 | - cmd.extend(get_flag_mapping("uuid", fs_family, param=uuid, strict=strict)) |
331 | + cmd.extend(get_flag_mapping(FS_FLAGS.uuid, fs_family, param=uuid, |
332 | + strict=strict)) |
333 | |
334 | - if fs_family == "fat": |
335 | - fat_size = fstype.strip(string.ascii_letters) |
336 | + if fs_family == FS_TYPES.fat: |
337 | + fat_size = fstype.name.strip(string.ascii_letters) |
338 | if fat_size in ["12", "16", "32"]: |
339 | - cmd.extend(get_flag_mapping("fatsize", fs_family, param=fat_size, |
340 | - strict=strict)) |
341 | + cmd.extend(get_flag_mapping(FS_FLAGS.fatsize, fs_family, |
342 | + param=fat_size, strict=strict)) |
343 | |
344 | cmd.append(path) |
345 | util.subp(cmd, capture=True) |
346 | |
347 | # if fs_family does not support specifying uuid then use blkid to find it |
348 | # if blkid is unable to then just return None for uuid |
349 | - if fs_family not in family_flag_mappings['uuid']: |
350 | + if fs_family not in family_flag_mappings[FS_FLAGS.uuid]: |
351 | try: |
352 | uuid = block.blkid()[path]['UUID'] |
353 | except: |
354 | @@ -216,10 +230,10 @@ |
355 | def mkfs_from_config(path, info, strict=False): |
356 | """Make filesystem on block device with given path according to storage |
357 | config given""" |
358 | - fstype = info.get('fstype') |
359 | + fstype = info.get(FMT_KEYS.fstype) |
360 | if fstype is None: |
361 | raise ValueError("fstype must be specified") |
362 | # NOTE: Since old metadata on partitions that have not been wiped can cause |
363 | # some mkfs commands to refuse to work, it's best to use force=True |
364 | - mkfs(path, fstype, strict=strict, force=True, uuid=info.get('uuid'), |
365 | - label=info.get('label')) |
366 | + mkfs(path, fstype, strict=strict, force=True, uuid=info.get(FMT_KEYS.uuid), |
367 | + label=info.get(FMT_KEYS.label)) |
368 | |
369 | === added file 'curtin/block/storage_config.py' |
370 | --- curtin/block/storage_config.py 1970-01-01 00:00:00 +0000 |
371 | +++ curtin/block/storage_config.py 2016-06-14 21:02:46 +0000 |
372 | @@ -0,0 +1,218 @@ |
373 | +# Copyright (C) 2016 Canonical Ltd. |
374 | +# |
375 | +# Author: Wesley Wiedenmeier <wesley.wiedenmeier@canonical.com> |
376 | +# |
377 | +# Curtin is free software: you can redistribute it and/or modify it under |
378 | +# the terms of the GNU Affero General Public License as published by the |
379 | +# Free Software Foundation, either version 3 of the License, or (at your |
380 | +# option) any later version. |
381 | +# |
382 | +# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY |
383 | +# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS |
384 | +# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for |
385 | +# more details. |
386 | +# |
387 | +# You should have received a copy of the GNU Affero General Public License |
388 | +# along with Curtin. If not, see <http://www.gnu.org/licenses/>. |
389 | + |
390 | +# This module provides constants and data processing functions for curtin |
391 | +# storage config |
392 | + |
393 | +from collections import OrderedDict |
394 | + |
395 | +from curtin import enum |
396 | +from curtin.block.mkfs import (FS_TYPES, FMT_KEYS) |
397 | +from curtin.block import WIPE_MODES |
398 | + |
399 | +CMD_TYPES = enum.make_enum( |
400 | + 'CMD_TYPES', ('disk', 'partition', 'format', 'mount', 'lvm_volgroup', |
401 | + 'lvm_partition', 'dm_crypt', 'raid', 'bcache')) |
402 | + |
403 | +# command keys |
404 | +GEN_KEYS = enum.make_enum('GEN_KEYS', ('id', 'type', 'name', 'preserve')) |
405 | +MOUNT_KEYS = enum.make_enum('MOUNT_KEYS', ('path', 'device')) |
406 | +LVM_VG_KEYS = enum.make_enum('LVM_VG_KEYS', ('devices',)) |
407 | +PART_KEYS = enum.make_enum( |
408 | + 'PART_KEYS', ('device', 'size', 'flag', 'number', 'wipe')) |
409 | +LVM_PART_KEYS = enum.make_enum('LVM_PART_KEYS', ('name', 'volgroup', 'size')) |
410 | +DM_CRYPT_KEYS = enum.make_enum( |
411 | + 'DM_CRYPT_KEYS', ('volume', 'key', 'keysize', 'cipher')) |
412 | +BCACHE_KEYS = enum.make_enum( |
413 | + 'BCACHE_KEYS', ('backing_device', 'cache_device', 'cache_mode')) |
414 | +DISK_KEYS = enum.make_enum( |
415 | + 'DISK_KEYS', ('ptable', 'wipe', 'serial', 'path', 'grub_device')) |
416 | +RAID_KEYS = enum.make_enum( |
417 | + 'RAID_KEYS', ('devices', 'raidlevel', 'spare_devices', 'mdname', 'ptable')) |
418 | + |
419 | +# what item specific keys apply to each command type key |
420 | +SC_KEY_MAP = { |
421 | + CMD_TYPES.disk: DISK_KEYS, |
422 | + CMD_TYPES.partition: PART_KEYS, |
423 | + CMD_TYPES.format: FMT_KEYS, |
424 | + CMD_TYPES.mount: MOUNT_KEYS, |
425 | + CMD_TYPES.lvm_volgroup: LVM_VG_KEYS, |
426 | + CMD_TYPES.lvm_partition: LVM_PART_KEYS, |
427 | + CMD_TYPES.dm_crypt: DM_CRYPT_KEYS, |
428 | + CMD_TYPES.raid: RAID_KEYS, |
429 | + CMD_TYPES.bcache: BCACHE_KEYS, |
430 | +} |
431 | + |
432 | +# command specific enums, (FS_TYPE and WIPE_MODES effectively here as well) |
433 | +PT_TYPES = enum.make_enum('PT_TYPES', ('msdos', 'gpt')) |
434 | +# NOTE: This covers both dos and gpt flags, but some (like boot) are shared, so |
435 | +# it make sense to keep them together, and it makes storage config |
436 | +# processing simpler |
437 | +PART_FLAGS = enum.make_enum( |
438 | + 'PART_FLAGS', |
439 | + ('extended', 'logical', 'primary', |
440 | + 'boot', 'lvm', 'raid', 'bios_grub', 'prep', 'swap', 'home', 'linux')) |
441 | +# NOTE: I think that bcache also accepts 'none' in bcache/cache_mode, and this |
442 | +# disables the cache, but since 'none' is a special value in yaml there |
443 | +# was no way to specify cache mode none before, so leaving it out here |
444 | +# preserves behavior |
445 | +BCACHE_MODES = enum.make_enum( |
446 | + 'BCACHE_MODES', ('writethrough', 'writeback', 'writearound')) |
447 | + |
448 | +# mapping of config item attributes with matching cmd specific enum |
449 | +# FIXME: DM_CRYPT_KEYS.keysize and DM_CRYPT_KEYS.cipher should be in here too, |
450 | +# but i don't think that the handler function works properly atm, so I |
451 | +# left them out |
452 | +# FIXME: I have RAID_KEYS.ptable set here, but I am not sure if disk handler |
453 | +# will be able to partition md devices anymore, it may be good to |
454 | +# disable that functionality with a warning until I can get it working |
455 | +# FIXME: RAID_LEVELS needs to be created in block.mdadm, raid information is |
456 | +# still being processed as strings right now |
457 | +SC_CFG_TO_ENUM = { |
458 | + GEN_KEYS.type: CMD_TYPES, |
459 | + DISK_KEYS.ptable: PT_TYPES, |
460 | + DISK_KEYS.wipe: WIPE_MODES, |
461 | + PART_KEYS.wipe: WIPE_MODES, |
462 | + PART_KEYS.flag: PART_FLAGS, |
463 | + FMT_KEYS.fstype: FS_TYPES, |
464 | + BCACHE_KEYS.cache_mode: BCACHE_MODES, |
465 | + # RAID_KEYS.raidlevel: RAID_LEVELS, |
466 | + RAID_KEYS.ptable: PT_TYPES, |
467 | +} |
468 | + |
469 | + |
470 | +def extract_storage_ordered_dict(config): |
471 | + """ |
472 | + Extract storage configuration from curtin configuration as an ordered |
473 | + dictionary. This data structure will allow efficient lookups of storage |
474 | + config entries by their id, and will preserve storage config order |
475 | + """ |
476 | + storage_config = config.get('storage', {}) |
477 | + if not storage_config: |
478 | + raise ValueError("no 'storage' entry in config") |
479 | + scfg = storage_config.get('config') |
480 | + if not scfg: |
481 | + raise ValueError("invalid storage config data") |
482 | + return OrderedDict((d['id'], d) for (i, d) in enumerate(scfg)) |
483 | + |
484 | + |
485 | +def convert_storage_config_str_to_enum(config): |
486 | + """ |
487 | + Items in storage config are mostly strings. Since string comparison is |
488 | + prone to errors, convert strings in storage config to enums |
489 | + """ |
490 | + # non type specific storage config keys |
491 | + general_keys = set(GEN_KEYS) |
492 | + |
493 | + res = OrderedDict() |
494 | + for (item_id, old_item) in config.items(): |
495 | + |
496 | + # all config entries must have a type defined |
497 | + # if a config entry is missing a type just ignore it to preserve old |
498 | + # behavior |
499 | + type_str = old_item.get('type') |
500 | + if not type_str: |
501 | + continue |
502 | + |
503 | + # str in enum does not work, so have to try to retrieve enum val and |
504 | + # catch key error if thrown |
505 | + try: |
506 | + item_type = CMD_TYPES[type_str] |
507 | + item_keys = set(SC_KEY_MAP[item_type]) |
508 | + except KeyError: |
509 | + raise ValueError("invalid storage config type: '{}'" |
510 | + .format(type_str)) |
511 | + |
512 | + # determine what keys are represented in the storage config entry |
513 | + represented_keys = set(k for k in item_keys.union(general_keys) |
514 | + if k.name in old_item) |
515 | + |
516 | + # keys for new_item should be from one of the KEYS enums, to avoid |
517 | + # accessing key attributes by string comparison |
518 | + new_item = dict().fromkeys(represented_keys) |
519 | + |
520 | + # copy values over from old to new, preforming enum lookup for config |
521 | + # entries with an associated enum type |
522 | + for key in represented_keys: |
523 | + str_val = old_item.get(key.name) |
524 | + if str_val in (None, 'none', 'None', 'null', 'Null'): |
525 | + new_item[key] = None |
526 | + elif key in SC_CFG_TO_ENUM: |
527 | + enum_t = SC_CFG_TO_ENUM.get(key) |
528 | + assert enum_t is not None |
529 | + try: |
530 | + new_item[key] = enum_t[str_val] |
531 | + except KeyError: |
532 | + raise ValueError( |
533 | + "{} is not a valid value for attr {} in conf entry {}" |
534 | + .format(str_val, key, item_id)) |
535 | + else: |
536 | + new_item[key] = str_val |
537 | + |
538 | + # udpate res odict, odict goes by insert order, so this will preserve |
539 | + # original storage config order |
540 | + res.update({item_id: new_item}) |
541 | + |
542 | + return res |
543 | + |
544 | + |
545 | +def number_storage_config_partitions(config): |
546 | + """ |
547 | + Ensure that all partitions in storage config have 'number' specified to |
548 | + save processing during block_meta |
549 | + """ |
550 | + return config |
551 | + |
552 | + |
553 | +class StorageConfig(object): |
554 | + """ |
555 | + A class to wrap storage config processing. |
556 | + |
557 | + Implements many of the standard dictionary methods so it can be treated |
558 | + just like the old ordered dictionary most of the time. |
559 | + """ |
560 | + |
561 | + def __init__(self, config): |
562 | + """ |
563 | + Extract storage configuration from global curtin config and run |
564 | + preprocessing stages |
565 | + """ |
566 | + for preprocessor in [extract_storage_ordered_dict, |
567 | + convert_storage_config_str_to_enum, |
568 | + number_storage_config_partitions]: |
569 | + config = preprocessor(config) |
570 | + |
571 | + # FIXME: this class should store the result and skip running the |
572 | + # preprocessor functions after the first time through |
573 | + self.config = config |
574 | + |
575 | + def get(self, attr, default=None): |
576 | + """Get attr that behaves just like dict""" |
577 | + return self.config.get(attr, default) |
578 | + |
579 | + def __getitem__(self, item): |
580 | + """Allow subscripting to access config""" |
581 | + return self.config[item] |
582 | + |
583 | + def __iter__(self): |
584 | + """Allow iter from config items""" |
585 | + for e in self.config: |
586 | + yield e |
587 | + |
588 | + def items(self): |
589 | + """Allow items() call to config items""" |
590 | + return self.config.items() |
591 | |
592 | === modified file 'curtin/commands/block_meta.py' |
593 | --- curtin/commands/block_meta.py 2016-04-22 16:24:00 +0000 |
594 | +++ curtin/commands/block_meta.py 2016-06-14 21:02:46 +0000 |
595 | @@ -17,11 +17,20 @@ |
596 | |
597 | from collections import OrderedDict |
598 | from curtin import (block, config, util) |
599 | -from curtin.block import mdadm |
600 | +from curtin.block import (mdadm, mkfs) |
601 | +from curtin.block import storage_config as sc |
602 | from curtin.log import LOG |
603 | -from curtin.block import mkfs |
604 | from curtin.reporter import events |
605 | |
606 | +# bring in config name sets |
607 | +from curtin.block import WIPE_MODES |
608 | +from curtin.block.mkfs import (FS_TYPES, FMT_KEYS) |
609 | +from curtin.block.storage_config import ( |
610 | + GEN_KEYS, DISK_KEYS, PART_KEYS, MOUNT_KEYS, LVM_VG_KEYS, |
611 | + LVM_PART_KEYS, BCACHE_KEYS, RAID_KEYS, DM_CRYPT_KEYS, |
612 | + CMD_TYPES, PT_TYPES, PART_FLAGS |
613 | +) |
614 | + |
615 | from . import populate_one_subcmd |
616 | from curtin.udev import compose_udev_equality, udevadm_settle |
617 | |
618 | @@ -215,7 +224,7 @@ |
619 | break |
620 | for part_dev in part_devs: |
621 | block.wipe_volume(os.path.join("/dev", part_dev), |
622 | - mode="superblock") |
623 | + mode=WIPE_MODES.superblock) |
624 | |
625 | if os.path.exists(os.path.join(sys_block_path, "bcache")): |
626 | # bcache device that isn't running, if it were, we would have found it |
627 | @@ -275,32 +284,37 @@ |
628 | |
629 | def determine_partition_number(partition_id, storage_config): |
630 | vol = storage_config.get(partition_id) |
631 | - partnumber = vol.get('number') |
632 | - if vol.get('flag') == "logical": |
633 | - if not partnumber: |
634 | - LOG.warn('partition \'number\' key not set in config:\n%s', |
635 | - util.json_dumps(vol)) |
636 | - partnumber = 5 |
637 | - for key, item in storage_config.items(): |
638 | - if item.get('type') == "partition" and \ |
639 | - item.get('device') == vol.get('device') and\ |
640 | - item.get('flag') == "logical": |
641 | - if item.get('id') == vol.get('id'): |
642 | - break |
643 | - else: |
644 | - partnumber += 1 |
645 | + partnumber = vol.get(PART_KEYS.number) |
646 | + if partnumber is not None: |
647 | + return partnumber |
648 | + |
649 | + # previously the warning was generated using util.json_dumps, |
650 | + # but util.json_dumps crashes when given an enum. instead, just |
651 | + # output the volume id |
652 | + LOG.warn("partition 'number' key not set in config: %s", |
653 | + vol.get(GEN_KEYS.id)) |
654 | + |
655 | + if vol.get(PART_KEYS.flag) == PART_FLAGS.logical: |
656 | + partnumber = 5 |
657 | + for key, item in storage_config.items(): |
658 | + if (item.get(GEN_KEYS.type) == CMD_TYPES.partition and |
659 | + item.get(PART_KEYS.device) == vol.get(PART_KEYS.device) and |
660 | + item.get(PART_KEYS.flag) == PART_FLAGS.logical): |
661 | + if item.get(GEN_KEYS.id) == vol.get(GEN_KEYS.id): |
662 | + break |
663 | + else: |
664 | + partnumber += 1 |
665 | else: |
666 | - if not partnumber: |
667 | - LOG.warn('partition \'number\' key not set in config:\n%s', |
668 | - util.json_dumps(vol)) |
669 | - partnumber = 1 |
670 | - for key, item in storage_config.items(): |
671 | - if item.get('type') == "partition" and \ |
672 | - item.get('device') == vol.get('device'): |
673 | - if item.get('id') == vol.get('id'): |
674 | - break |
675 | - else: |
676 | - partnumber += 1 |
677 | + partnumber = 1 |
678 | + for key, item in storage_config.items(): |
679 | + if (item.get(GEN_KEYS.type) == CMD_TYPES.partition and |
680 | + item.get(PART_KEYS.device) == |
681 | + vol.get(PART_KEYS.device)): |
682 | + if item.get(GEN_KEYS.id) == vol.get(GEN_KEYS.id): |
683 | + break |
684 | + else: |
685 | + partnumber += 1 |
686 | + |
687 | return partnumber |
688 | |
689 | |
690 | @@ -310,8 +324,8 @@ |
691 | vol = storage_config.get(volume) |
692 | path = get_path_to_storage_volume(volume, storage_config) |
693 | ptuuid = None |
694 | - dname = vol.get('name') |
695 | - if vol.get('type') in ["partition", "disk"]: |
696 | + dname = vol.get(GEN_KEYS.name) |
697 | + if vol.get(GEN_KEYS.type) in [CMD_TYPES.disk, CMD_TYPES.partition]: |
698 | (out, _err) = util.subp(["blkid", "-o", "export", path], capture=True, |
699 | rcs=[0, 2], retries=[1, 1, 1]) |
700 | for line in out.splitlines(): |
701 | @@ -319,7 +333,8 @@ |
702 | ptuuid = line.split('=')[-1] |
703 | break |
704 | # we may not always be able to find a uniq identifier on devices with names |
705 | - if not ptuuid and vol.get('type') in ["disk", "partition"]: |
706 | + if (not ptuuid and |
707 | + vol.get(GEN_KEYS.type) in [CMD_TYPES.disk, CMD_TYPES.partition]): |
708 | LOG.warning("Can't find a uuid for volume: {}. Skipping dname.".format( |
709 | dname)) |
710 | return |
711 | @@ -328,22 +343,24 @@ |
712 | compose_udev_equality("SUBSYSTEM", "block"), |
713 | compose_udev_equality("ACTION", "add|change"), |
714 | ] |
715 | - if vol.get('type') == "disk": |
716 | + if vol.get(GEN_KEYS.type) == CMD_TYPES.disk: |
717 | rule.append(compose_udev_equality('ENV{DEVTYPE}', "disk")) |
718 | rule.append(compose_udev_equality('ENV{ID_PART_TABLE_UUID}', ptuuid)) |
719 | - elif vol.get('type') == "partition": |
720 | + elif vol.get(GEN_KEYS.type) == CMD_TYPES.partition: |
721 | rule.append(compose_udev_equality('ENV{DEVTYPE}', "partition")) |
722 | - dname = storage_config.get(vol.get('device')).get('name') + \ |
723 | - "-part%s" % determine_partition_number(volume, storage_config) |
724 | + parent_dev = storage_config.get(vol.get(PART_KEYS.device)) |
725 | + dname = parent_dev.get(GEN_KEYS.name) + "-part{}".format( |
726 | + determine_partition_number(volume, storage_config)) |
727 | rule.append(compose_udev_equality('ENV{ID_PART_ENTRY_UUID}', ptuuid)) |
728 | - elif vol.get('type') == "raid": |
729 | + elif vol.get(GEN_KEYS.type) == CMD_TYPES.raid: |
730 | md_data = mdadm.mdadm_query_detail(path) |
731 | md_uuid = md_data.get('MD_UUID') |
732 | rule.append(compose_udev_equality("ENV{MD_UUID}", md_uuid)) |
733 | - elif vol.get('type') == "bcache": |
734 | + elif vol.get(GEN_KEYS.type) == CMD_TYPES.bcache: |
735 | rule.append(compose_udev_equality("ENV{DEVNAME}", path)) |
736 | - elif vol.get('type') == "lvm_partition": |
737 | - volgroup_name = storage_config.get(vol.get('volgroup')).get('name') |
738 | + elif vol.get(GEN_KEYS.type) == CMD_TYPES.lvm_partition: |
739 | + parent_vg = storage_config.get(vol.get(LVM_PART_KEYS.volgroup)) |
740 | + volgroup_name = parent_vg.get(GEN_KEYS.name) |
741 | dname = "%s-%s" % (volgroup_name, dname) |
742 | rule.append(compose_udev_equality("ENV{DM_NAME}", dname)) |
743 | rule.append("SYMLINK+=\"disk/by-dname/%s\"" % dname) |
744 | @@ -364,58 +381,59 @@ |
745 | raise ValueError("volume with id '%s' not found" % volume) |
746 | |
747 | # Find path to block device |
748 | - if vol.get('type') == "partition": |
749 | - partnumber = determine_partition_number(vol.get('id'), storage_config) |
750 | - disk_block_path = get_path_to_storage_volume(vol.get('device'), |
751 | - storage_config) |
752 | + if vol.get(GEN_KEYS.type) == CMD_TYPES.partition: |
753 | + partnumber = determine_partition_number( |
754 | + vol.get(GEN_KEYS.id), storage_config) |
755 | + disk_block_path = get_path_to_storage_volume( |
756 | + vol.get(PART_KEYS.device), storage_config) |
757 | (base_path, disk_kname) = os.path.split(disk_block_path) |
758 | partition_kname = determine_partition_kname(disk_kname, partnumber) |
759 | volume_path = os.path.join(base_path, partition_kname) |
760 | devsync_vol = os.path.join(disk_block_path) |
761 | |
762 | - elif vol.get('type') == "disk": |
763 | + elif vol.get(GEN_KEYS.type) == CMD_TYPES.disk: |
764 | # Get path to block device for disk. Device_id param should refer |
765 | # to id of device in storage config |
766 | - if vol.get('serial'): |
767 | - volume_path = block.lookup_disk(vol.get('serial')) |
768 | - elif vol.get('path'): |
769 | - volume_path = vol.get('path') |
770 | + if vol.get(DISK_KEYS.serial): |
771 | + volume_path = block.lookup_disk(vol.get(DISK_KEYS.serial)) |
772 | + elif vol.get(DISK_KEYS.path): |
773 | + volume_path = vol.get(DISK_KEYS.path) |
774 | else: |
775 | raise ValueError("serial number or path to block dev must be \ |
776 | specified to identify disk") |
777 | |
778 | - elif vol.get('type') == "lvm_partition": |
779 | + elif vol.get(GEN_KEYS.type) == CMD_TYPES.lvm_partition: |
780 | # For lvm partitions, a directory in /dev/ should be present with the |
781 | # name of the volgroup the partition belongs to. We can simply append |
782 | # the id of the lvm partition to the path of that directory |
783 | - volgroup = storage_config.get(vol.get('volgroup')) |
784 | + volgroup = storage_config.get(vol.get(LVM_PART_KEYS.volgroup)) |
785 | if not volgroup: |
786 | raise ValueError("lvm volume group '%s' could not be found" |
787 | - % vol.get('volgroup')) |
788 | - volume_path = os.path.join("/dev/", volgroup.get('name'), |
789 | - vol.get('name')) |
790 | + % vol.get(LVM_PART_KEYS.volgroup)) |
791 | + volume_path = os.path.join("/dev/", volgroup.get(GEN_KEYS.name), |
792 | + vol.get(GEN_KEYS.name)) |
793 | |
794 | - elif vol.get('type') == "dm_crypt": |
795 | + elif vol.get(GEN_KEYS.type) == CMD_TYPES.dm_crypt: |
796 | # For dm_crypted partitions, unencrypted block device is at |
797 | # /dev/mapper/<dm_name> |
798 | - dm_name = vol.get('dm_name') |
799 | + dm_name = vol.get(GEN_KEYS.name) |
800 | if not dm_name: |
801 | - dm_name = vol.get('id') |
802 | + dm_name = vol.get(GEN_KEYS.id) |

803 | volume_path = os.path.join("/dev", "mapper", dm_name) |
804 | |
805 | - elif vol.get('type') == "raid": |
806 | + elif vol.get(GEN_KEYS.type) == CMD_TYPES.raid: |
807 | # For raid partitions, block device is at /dev/mdX |
808 | - name = vol.get('name') |
809 | + name = vol.get(GEN_KEYS.name) |
810 | volume_path = os.path.join("/dev", name) |
811 | |
812 | - elif vol.get('type') == "bcache": |
813 | + elif vol.get(GEN_KEYS.type) == CMD_TYPES.bcache: |
814 | # For bcache setups, the only reliable way to determine the name of the |
815 | # block device is to look in all /sys/block/bcacheX/ dirs and see what |
816 | # block devs are in the slaves dir there. Then, those blockdevs can be |
817 | # checked against the kname of the devs in the config for the desired |
818 | # bcache device. This is not very elegant though |
819 | backing_device_kname = os.path.split(get_path_to_storage_volume( |
820 | - vol.get('backing_device'), storage_config))[-1] |
821 | + vol.get(BCACHE_KEYS.backing_device), storage_config))[-1] |
822 | sys_path = list(filter(lambda x: backing_device_kname in x, |
823 | glob.glob("/sys/block/bcache*/slaves/*")))[0] |
824 | while "bcache" not in os.path.split(sys_path)[-1]: |
825 | @@ -425,7 +443,7 @@ |
826 | |
827 | else: |
828 | raise NotImplementedError("cannot determine the path to storage \ |
829 | - volume '%s' with type '%s'" % (volume, vol.get('type'))) |
830 | + volume '%s' with type '%s'" % (volume, vol.get(GEN_KEYS.type))) |
831 | |
832 | # sync devices |
833 | if not devsync_vol: |
834 | @@ -437,37 +455,47 @@ |
835 | |
836 | |
837 | def disk_handler(info, storage_config): |
838 | - ptable = info.get('ptable') |
839 | + ptable = info.get(DISK_KEYS.ptable) |
840 | |
841 | - disk = get_path_to_storage_volume(info.get('id'), storage_config) |
842 | + disk = get_path_to_storage_volume(info.get(GEN_KEYS.id), storage_config) |
843 | |
844 | # Handle preserve flag |
845 | - if info.get('preserve'): |
846 | + if info.get(GEN_KEYS.preserve): |
847 | if not ptable: |
848 | # Don't need to check state, return |
849 | return |
850 | |
851 | - # Check state of current ptable |
852 | + # Check state of current ptable, try to do this using blkid, but if |
853 | + # blkid fails then try to fall back to using parted. |
854 | + _possible_errors = (util.ProcessExecutionError, StopIteration, |
855 | + IndexError, AttributeError) |
856 | try: |
857 | (out, _err) = util.subp(["blkid", "-o", "export", disk], |
858 | capture=True) |
859 | - except util.ProcessExecutionError: |
860 | - raise ValueError("disk '%s' has no readable partition table or \ |
861 | - cannot be accessed, but preserve is set to true, so cannot \ |
862 | - continue") |
863 | - current_ptable = list(filter(lambda x: "PTTYPE" in x, |
864 | - out.splitlines()))[0].split("=")[-1] |
865 | - if current_ptable == "dos" and ptable != "msdos" or \ |
866 | - current_ptable == "gpt" and ptable != "gpt": |
867 | - raise ValueError("disk '%s' does not have correct \ |
868 | + current_ptable = next(l.split('=')[1] for l in out.splitlines() |
869 | + if 'PTTYPE' in l) |
870 | + except _possible_errors: |
871 | + try: |
872 | + (out, _err) = util.subp(["parted", disk, "--script", "print"], |
873 | + capture=True) |
874 | + current_ptable = next(l.split()[-1] for l in out.splitlines() |
875 | + if "Partition Table" in l) |
876 | + except _possible_errors: |
877 | + raise ValueError("disk '{}' has no readable partition table or\ |
878 | + cannot be accessed, but preserve is set to true, so cannot\ |
879 | + continue".format(info.get(GEN_KEYS.id))) |
880 | + if not (current_ptable == ptable.name or |
881 | + (current_ptable == "dos" and ptable.name == "msdos")): |
882 | + raise ValueError("disk '{}' does not have correct \ |
883 | partition table, but preserve is set to true, so not \ |
884 | - creating table, so not creating table." % info.get('id')) |
885 | + creating table.".format( |
886 | + info.get(GEN_KEYS.id))) |
887 | LOG.info("disk '%s' marked to be preserved, so keeping partition \ |
888 | table") |
889 | return |
890 | |
891 | # Wipe the disk |
892 | - if info.get('wipe') and info.get('wipe') != "none": |
893 | + if info.get(DISK_KEYS.wipe) is not None: |
894 | # The disk has a lable, clear all partitions |
895 | mdadm.mdadm_assemble(scan=True) |
896 | disk_kname = os.path.split(disk)[-1] |
897 | @@ -480,29 +508,30 @@ |
898 | block_no = fp.read().rstrip() |
899 | partition_path = os.path.realpath( |
900 | os.path.join("/dev/block", block_no)) |
901 | - block.wipe_volume(partition_path, mode=info.get('wipe')) |
902 | + block.wipe_volume(partition_path, mode=info.get(DISK_KEYS.wipe)) |
903 | |
904 | clear_holders("/sys/block/%s" % disk_kname) |
905 | - block.wipe_volume(disk, mode=info.get('wipe')) |
906 | + block.wipe_volume(disk, mode=info.get(DISK_KEYS.wipe)) |
907 | |
908 | # Create partition table on disk |
909 | - if info.get('ptable'): |
910 | + if info.get(DISK_KEYS.ptable): |
911 | LOG.info("labeling device: '%s' with '%s' partition table", disk, |
912 | ptable) |
913 | - if ptable == "gpt": |
914 | + if ptable == PT_TYPES.gpt: |
915 | util.subp(["sgdisk", "--clear", disk]) |
916 | - elif ptable == "msdos": |
917 | + elif ptable == PT_TYPES.msdos: |
918 | util.subp(["parted", disk, "--script", "mklabel", "msdos"]) |
919 | |
920 | # Make the name if needed |
921 | - if info.get('name'): |
922 | - make_dname(info.get('id'), storage_config) |
923 | + if info.get(GEN_KEYS.name): |
924 | + make_dname(info.get(GEN_KEYS.id), storage_config) |
925 | |
926 | |
927 | def getnumberoflogicaldisks(device, storage_config): |
928 | logicaldisks = 0 |
929 | for key, item in storage_config.items(): |
930 | - if item.get('device') == device and item.get('flag') == "logical": |
931 | + if (item.get(PART_KEYS.device) == device and |
932 | + item.get(PART_KEYS.flag) == PART_FLAGS.logical): |
933 | logicaldisks = logicaldisks + 1 |
934 | return logicaldisks |
935 | |
936 | @@ -514,9 +543,9 @@ |
937 | break |
938 | |
939 | # skip anything not on this disk, not a 'partition' or 'extended' |
940 | - if command['type'] != 'partition' or command['device'] != disk_id: |
941 | - continue |
942 | - if command.get('flag') == "extended": |
943 | + if (command[GEN_KEYS.type] != CMD_TYPES.partition or |
944 | + command.get(PART_KEYS.device) != disk_id or |
945 | + command.get(PART_KEYS.flag) == PART_FLAGS.extended): |
946 | continue |
947 | |
948 | last_partnum = determine_partition_number(item_id, storage_config) |
949 | @@ -525,10 +554,10 @@ |
950 | |
951 | |
952 | def partition_handler(info, storage_config): |
953 | - device = info.get('device') |
954 | - size = info.get('size') |
955 | - flag = info.get('flag') |
956 | - disk_ptable = storage_config.get(device).get('ptable') |
957 | + device = info.get(PART_KEYS.device) |
958 | + size = info.get(PART_KEYS.size) |
959 | + flag = info.get(PART_KEYS.flag) |
960 | + disk_ptable = storage_config.get(device).get(DISK_KEYS.ptable) |
961 | partition_type = None |
962 | if not device: |
963 | raise ValueError("device must be set for partition to be created") |
964 | @@ -536,7 +565,8 @@ |
965 | raise ValueError("size must be specified for partition to be created") |
966 | |
967 | disk = get_path_to_storage_volume(device, storage_config) |
968 | - partnumber = determine_partition_number(info.get('id'), storage_config) |
969 | + partnumber = determine_partition_number( |
970 | + info.get(GEN_KEYS.id), storage_config) |
971 | |
972 | disk_kname = os.path.split( |
973 | get_path_to_storage_volume(device, storage_config))[-1] |
974 | @@ -553,11 +583,11 @@ |
975 | logical_block_size_bytes)) |
976 | |
977 | if partnumber > 1: |
978 | - if partnumber == 5 and disk_ptable == "msdos": |
979 | + if partnumber == 5 and disk_ptable == PT_TYPES.msdos: |
980 | for key, item in storage_config.items(): |
981 | - if item.get('type') == "partition" and \ |
982 | - item.get('device') == device and \ |
983 | - item.get('flag') == "extended": |
984 | + if (item.get(GEN_KEYS.type) == CMD_TYPES.partition and |
985 | + item.get(PART_KEYS.device) == device and |
986 | + item.get(PART_KEYS.flag) == PART_FLAGS.extended): |
987 | extended_part_no = determine_partition_number( |
988 | key, storage_config) |
989 | break |
990 | @@ -566,9 +596,10 @@ |
991 | previous_partition = "/sys/block/%s/%s/" % \ |
992 | (disk_kname, partition_kname) |
993 | else: |
994 | - pnum = find_previous_partition(device, info['id'], storage_config) |
995 | + pnum = find_previous_partition(device, info.get(GEN_KEYS.id), |
996 | + storage_config) |
997 | LOG.debug("previous partition number for '%s' found to be '%s'", |
998 | - info.get('id'), pnum) |
999 | + info.get(GEN_KEYS.id), pnum) |
1000 | partition_kname = determine_partition_kname(disk_kname, pnum) |
1001 | previous_partition = "/sys/block/%s/%s/" % \ |
1002 | (disk_kname, partition_kname) |
1003 | @@ -594,12 +625,12 @@ |
1004 | offset_sectors = alignment_offset |
1005 | else: |
1006 | # further partitions |
1007 | - if disk_ptable == "gpt" or flag != "logical": |
1008 | + if disk_ptable == PT_TYPES.gpt or flag != PART_FLAGS.logical: |
1009 | # msdos primary and any gpt part start after former partition end |
1010 | offset_sectors = previous_start_sectors + previous_size_sectors |
1011 | else: |
1012 | # msdos extended/logical partitions |
1013 | - if flag == "logical": |
1014 | + if flag == PART_FLAGS.logical: |
1015 | if partnumber == 5: |
1016 | # First logical partition |
1017 | # start at extended partition start + alignment_offset |
1018 | @@ -619,49 +650,52 @@ |
1019 | # logical partitions can't share their start sector with the extended |
1020 | # partition and logical partitions can't go head-to-head, so we have to |
1021 | # realign and for that increase size as required |
1022 | - if info.get('flag') == "extended": |
1023 | + if info.get(PART_KEYS.flag) == PART_FLAGS.extended: |
1024 | logdisks = getnumberoflogicaldisks(device, storage_config) |
1025 | length_sectors = length_sectors + (logdisks * alignment_offset) |
1026 | |
1027 | # Handle preserve flag |
1028 | - if info.get('preserve'): |
1029 | + if info.get(GEN_KEYS.preserve): |
1030 | return |
1031 | - elif storage_config.get(device).get('preserve'): |
1032 | + elif storage_config.get(device).get(GEN_KEYS.preserve): |
1033 | raise NotImplementedError("Partition '%s' is not marked to be \ |
1034 | preserved, but device '%s' is. At this time, preserving devices \ |
1035 | but not also the partitions on the devices is not supported, \ |
1036 | because of the possibility of damaging partitions intended to be \ |
1037 | - preserved." % (info.get('id'), device)) |
1038 | + preserved." % (info.get(GEN_KEYS.id), device)) |
1039 | |
1040 | # Set flag |
1041 | # 'sgdisk --list-types' |
1042 | - sgdisk_flags = {"boot": 'ef00', |
1043 | - "lvm": '8e00', |
1044 | - "raid": 'fd00', |
1045 | - "bios_grub": 'ef02', |
1046 | - "prep": '4100', |
1047 | - "swap": '8200', |
1048 | - "home": '8302', |
1049 | - "linux": '8300'} |
1050 | + sgdisk_flags = { |
1051 | + PART_FLAGS.boot: 'ef00', |
1052 | + PART_FLAGS.lvm: '8e00', |
1053 | + PART_FLAGS.raid: 'fd00', |
1054 | + PART_FLAGS.bios_grub: 'ef02', |
1055 | + PART_FLAGS.prep: '4100', |
1056 | + PART_FLAGS.swap: '8200', |
1057 | + PART_FLAGS.home: '8302', |
1058 | + PART_FLAGS.linux: '8300', |
1059 | + } |
1060 | |
1061 | LOG.info("adding partition '%s' to disk '%s' (ptable: '%s')", |
1062 | - info.get('id'), device, disk_ptable) |
1063 | + info.get(GEN_KEYS.id), device, disk_ptable) |
1064 | LOG.debug("partnum: %s offset_sectors: %s length_sectors: %s", |
1065 | partnumber, offset_sectors, length_sectors) |
1066 | - if disk_ptable == "msdos": |
1067 | - if flag in ["extended", "logical", "primary"]: |
1068 | + if disk_ptable == PT_TYPES.msdos: |
1069 | + if flag in [PART_FLAGS.extended, PART_FLAGS.logical, |
1070 | + PART_FLAGS.primary]: |
1071 | partition_type = flag |
1072 | else: |
1073 | - partition_type = "primary" |
1074 | - cmd = ["parted", disk, "--script", "mkpart", partition_type, |
1075 | + partition_type = PART_FLAGS.primary |
1076 | + cmd = ["parted", disk, "--script", "mkpart", partition_type.name, |
1077 | "%ss" % offset_sectors, "%ss" % str(offset_sectors + |
1078 | length_sectors)] |
1079 | util.subp(cmd, capture=True) |
1080 | - elif disk_ptable == "gpt": |
1081 | + elif disk_ptable == PT_TYPES.gpt: |
1082 | if flag and flag in sgdisk_flags: |
1083 | typecode = sgdisk_flags[flag] |
1084 | else: |
1085 | - typecode = sgdisk_flags['linux'] |
1086 | + typecode = sgdisk_flags[PART_FLAGS.linux] |
1087 | cmd = ["sgdisk", "--new", "%s:%s:%s" % (partnumber, offset_sectors, |
1088 | length_sectors + offset_sectors), |
1089 | "--typecode=%s:%s" % (partnumber, typecode), disk] |
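One nicety of the enum-keyed `sgdisk_flags` mapping in the hunk above is that the fallback-to-linux lookup becomes an ordinary dict `.get`. A small runnable re-creation (the `PART_FLAGS` definition and `typecode_for` helper here are assumptions for illustration; only the flag-to-typecode pairs are taken from the diff):

```python
from enum import Enum

# stand-in for the branch's PART_FLAGS enum
PART_FLAGS = Enum('PART_FLAGS', ['boot', 'lvm', 'raid', 'bios_grub',
                                 'prep', 'swap', 'home', 'linux'])

# sgdisk typecodes keyed by enum member, per 'sgdisk --list-types'
sgdisk_flags = {
    PART_FLAGS.boot: 'ef00',
    PART_FLAGS.lvm: '8e00',
    PART_FLAGS.raid: 'fd00',
    PART_FLAGS.bios_grub: 'ef02',
    PART_FLAGS.prep: '4100',
    PART_FLAGS.swap: '8200',
    PART_FLAGS.home: '8302',
    PART_FLAGS.linux: '8300',
}


def typecode_for(flag):
    # unknown or unset flags fall back to the generic linux typecode,
    # mirroring the gpt branch of partition_handler above
    return sgdisk_flags.get(flag, sgdisk_flags[PART_FLAGS.linux])


print(typecode_for(PART_FLAGS.swap))  # 8200
print(typecode_for(None))             # 8300 (fallback)
```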
1090 | @@ -670,26 +704,27 @@ |
1091 | raise ValueError("parent partition has invalid partition table") |
1092 | |
1093 | # Wipe the partition if told to do so |
1094 | - if info.get('wipe') and info.get('wipe') != "none": |
1095 | + if info.get(PART_KEYS.wipe) is not None: |
1096 | block.wipe_volume( |
1097 | - get_path_to_storage_volume(info.get('id'), storage_config), |
1098 | - mode=info.get('wipe')) |
1099 | + get_path_to_storage_volume(info.get(GEN_KEYS.id), storage_config), |
1100 | + mode=info.get(PART_KEYS.wipe)) |
1101 | # Make the name if needed |
1102 | - if storage_config.get(device).get('name') and partition_type != 'extended': |
1103 | - make_dname(info.get('id'), storage_config) |
1104 | + if (storage_config.get(device).get(GEN_KEYS.name) and |
1105 | + partition_type != PART_FLAGS.extended): |
1106 | + make_dname(info.get(GEN_KEYS.id), storage_config) |
1107 | |
1108 | |
1109 | def format_handler(info, storage_config): |
1110 | - volume = info.get('volume') |
1111 | + volume = info.get(FMT_KEYS.volume) |
1112 | if not volume: |
1113 | raise ValueError("volume must be specified for partition '%s'" % |
1114 | - info.get('id')) |
1115 | + info.get(GEN_KEYS.id)) |
1116 | |
1117 | # Get path to volume |
1118 | volume_path = get_path_to_storage_volume(volume, storage_config) |
1119 | |
1120 | # Handle preserve flag |
1121 | - if info.get('preserve'): |
1122 | + if info.get(GEN_KEYS.preserve): |
1123 | # Volume marked to be preserved, not formatting |
1124 | return |
1125 | |
1126 | @@ -700,17 +735,17 @@ |
1127 | |
1128 | def mount_handler(info, storage_config): |
1129 | state = util.load_command_environment() |
1130 | - path = info.get('path') |
1131 | - filesystem = storage_config.get(info.get('device')) |
1132 | - if not path and filesystem.get('fstype') != "swap": |
1133 | + path = info.get(MOUNT_KEYS.path) |
1134 | + filesystem = storage_config.get(info.get(MOUNT_KEYS.device)) |
1135 | + if not path and filesystem.get(FMT_KEYS.fstype) != FS_TYPES.swap: |
1136 | raise ValueError("path to mountpoint must be specified") |
1137 | - volume = storage_config.get(filesystem.get('volume')) |
1138 | + volume = storage_config.get(filesystem.get(FMT_KEYS.volume)) |
1139 | |
1140 | # Get path to volume |
1141 | - volume_path = get_path_to_storage_volume(filesystem.get('volume'), |
1142 | + volume_path = get_path_to_storage_volume(filesystem.get(FMT_KEYS.volume), |
1143 | storage_config) |
1144 | |
1145 | - if filesystem.get('fstype') != "swap": |
1146 | + if filesystem.get(FMT_KEYS.fstype) != FS_TYPES.swap: |
1147 | # Figure out what point should be |
1148 | while len(path) > 0 and path[0] == "/": |
1149 | path = path[1:] |
1150 | @@ -725,40 +760,41 @@ |
1151 | # Add volume to fstab |
1152 | if state['fstab']: |
1153 | with open(state['fstab'], "a") as fp: |
1154 | - if volume.get('type') in ["raid", "bcache", |
1155 | - "disk", "lvm_partition"]: |
1156 | - location = get_path_to_storage_volume(volume.get('id'), |
1157 | + if volume.get(GEN_KEYS.type) in [CMD_TYPES.disk, |
1158 | + CMD_TYPES.lvm_partition, |
1159 | + CMD_TYPES.bcache, CMD_TYPES.raid]: |
1160 | + location = get_path_to_storage_volume(volume.get(GEN_KEYS.id), |
1161 | storage_config) |
1162 | - elif volume.get('type') in ["partition", "dm_crypt"]: |
1163 | + elif volume.get(GEN_KEYS.type) in [CMD_TYPES.partition, |
1164 | + CMD_TYPES.dm_crypt]: |
1165 | location = "UUID=%s" % block.get_volume_uuid(volume_path) |
1166 | else: |
1167 | raise ValueError("cannot write fstab for volume type '%s'" % |
1168 | - volume.get("type")) |
1169 | + volume.get(GEN_KEYS.type)) |
1170 | |
1171 | - if filesystem.get('fstype') == "swap": |
1172 | + if filesystem.get(FMT_KEYS.fstype) == FS_TYPES.swap: |
1173 | path = "none" |
1174 | options = "sw" |
1175 | else: |
1176 | path = "/%s" % path |
1177 | options = "defaults" |
1178 | |
1179 | - if filesystem.get('fstype') in ["fat", "fat12", "fat16", "fat32", |
1180 | - "fat64"]: |
1181 | - fstype = "vfat" |
1182 | - else: |
1183 | - fstype = filesystem.get('fstype') |
1184 | - fp.write("%s %s %s %s 0 0\n" % (location, path, fstype, options)) |
1185 | + fstype = filesystem.get(FMT_KEYS.fstype) |
1186 | + if mkfs.specific_to_family.get(fstype) == FS_TYPES.fat: |
1187 | + fstype = FS_TYPES.vfat |
1188 | + fp.write("%s %s %s %s 0 0\n" % (location, path, fstype.name, |
1189 | + options)) |
1190 | else: |
1191 | LOG.info("fstab not in environment, so not writing") |
1192 | |
1193 | |
1194 | def lvm_volgroup_handler(info, storage_config): |
1195 | - devices = info.get('devices') |
1196 | + devices = info.get(LVM_VG_KEYS.devices) |
1197 | device_paths = [] |
1198 | - name = info.get('name') |
1199 | + name = info.get(GEN_KEYS.name) |
1200 | if not devices: |
1201 | raise ValueError("devices for volgroup '%s' must be specified" % |
1202 | - info.get('id')) |
1203 | + info.get(GEN_KEYS.id)) |
1204 | if not name: |
1205 | raise ValueError("name for volgroups needs to be specified") |
1206 | |
1207 | @@ -771,7 +807,7 @@ |
1208 | storage_config)) |
1209 | |
1210 | # Handle preserve flag |
1211 | - if info.get('preserve'): |
1212 | + if info.get(GEN_KEYS.preserve): |
1213 | # LVM will probably be offline, so start it |
1214 | util.subp(["vgchange", "-a", "y"]) |
1215 | # Verify that volgroup exists and contains all specified devices |
1216 | @@ -785,7 +821,7 @@ |
1217 | if set(current_paths) != set(device_paths): |
1218 | raise ValueError("volgroup '%s' marked to be preserved, but does \ |
1219 | not exist or does not contain the right physical \ |
1220 | - volumes" % info.get('id')) |
1221 | + volumes" % info.get(GEN_KEYS.id)) |
1222 | else: |
1223 | # Create vgrcreate command and run |
1224 | cmd = ["vgcreate", name] |
1225 | @@ -794,15 +830,16 @@ |
1226 | |
1227 | |
1228 | def lvm_partition_handler(info, storage_config): |
1229 | - volgroup = storage_config.get(info.get('volgroup')).get('name') |
1230 | - name = info.get('name') |
1231 | + lvm_vg_info = storage_config.get(info.get(LVM_PART_KEYS.volgroup)) |
1232 | + volgroup = lvm_vg_info.get(GEN_KEYS.name) |
1233 | + name = info.get(GEN_KEYS.name) |
1234 | if not volgroup: |
1235 | raise ValueError("lvm volgroup for lvm partition must be specified") |
1236 | if not name: |
1237 | raise ValueError("lvm partition name must be specified") |
1238 | |
1239 | # Handle preserve flag |
1240 | - if info.get('preserve'): |
1241 | + if info.get(GEN_KEYS.preserve): |
1242 | (out, _err) = util.subp(["lvdisplay", "-C", "--separator", "=", "-o", |
1243 | "lv_name,vg_name", "--noheadings"], |
1244 | capture=True) |
1245 | @@ -815,43 +852,40 @@ |
1246 | if not found: |
1247 | raise ValueError("lvm partition '%s' marked to be preserved, but \ |
1248 | does not exist or does not mach storage \ |
1249 | - configuration" % info.get('id')) |
1250 | - elif storage_config.get(info.get('volgroup')).get('preserve'): |
1251 | + configuration" % info.get(GEN_KEYS.id)) |
1252 | + elif lvm_vg_info.get(GEN_KEYS.preserve): |
1253 | raise NotImplementedError("Lvm Partition '%s' is not marked to be \ |
1254 | preserved, but volgroup '%s' is. At this time, preserving \ |
1255 | volgroups but not also the lvm partitions on the volgroup is \ |
1256 | not supported, because of the possibility of damaging lvm \ |
1257 | - partitions intended to be preserved." % (info.get('id'), volgroup)) |
1258 | + partitions intended to be preserved." % (info.get(GEN_KEYS.id), |
1259 | + volgroup)) |
1260 | else: |
1261 | cmd = ["lvcreate", volgroup, "-n", name] |
1262 | - if info.get('size'): |
1263 | - cmd.extend(["-L", info.get('size')]) |
1264 | + if info.get(LVM_PART_KEYS.size): |
1265 | + cmd.extend(["-L", info.get(LVM_PART_KEYS.size)]) |
1266 | else: |
1267 | cmd.extend(["-l", "100%FREE"]) |
1268 | |
1269 | util.subp(cmd) |
1270 | |
1271 | - if info.get('ptable'): |
1272 | - raise ValueError("Partition tables on top of lvm logical volumes is \ |
1273 | - not supported") |
1274 | - |
1275 | - make_dname(info.get('id'), storage_config) |
1276 | + make_dname(info.get(GEN_KEYS.id), storage_config) |
1277 | |
1278 | |
1279 | def dm_crypt_handler(info, storage_config): |
1280 | state = util.load_command_environment() |
1281 | - volume = info.get('volume') |
1282 | - key = info.get('key') |
1283 | - keysize = info.get('keysize') |
1284 | - cipher = info.get('cipher') |
1285 | - dm_name = info.get('dm_name') |
1286 | + volume = info.get(DM_CRYPT_KEYS.volume) |
1287 | + key = info.get(DM_CRYPT_KEYS.key) |
1288 | + keysize = info.get(DM_CRYPT_KEYS.keysize) |
1289 | + cipher = info.get(DM_CRYPT_KEYS.cipher) |
1290 | + dm_name = info.get(GEN_KEYS.name) |
1291 | if not volume: |
1292 | raise ValueError("volume for cryptsetup to operate on must be \ |
1293 | specified") |
1294 | if not key: |
1295 | raise ValueError("encryption key must be specified") |
1296 | if not dm_name: |
1297 | - dm_name = info.get('id') |
1298 | + dm_name = info.get(GEN_KEYS.id) |
1299 | |
1300 | volume_path = get_path_to_storage_volume(volume, storage_config) |
1301 | |
1302 | @@ -893,10 +927,10 @@ |
1303 | |
1304 | def raid_handler(info, storage_config): |
1305 | state = util.load_command_environment() |
1306 | - devices = info.get('devices') |
1307 | - raidlevel = info.get('raidlevel') |
1308 | - spare_devices = info.get('spare_devices') |
1309 | - md_devname = block.dev_path(info.get('name')) |
1310 | + devices = info.get(RAID_KEYS.devices) |
1311 | + raidlevel = info.get(RAID_KEYS.raidlevel) |
1312 | + spare_devices = info.get(RAID_KEYS.spare_devices) |
1313 | + md_devname = block.dev_path(info.get(GEN_KEYS.name)) |
1314 | if not devices: |
1315 | raise ValueError("devices for raid must be specified") |
1316 | if raidlevel not in ['linear', 'raid0', 0, 'stripe', 'raid1', 1, 'mirror', |
1317 | @@ -906,7 +940,9 @@ |
1318 | if spare_devices: |
1319 | raise ValueError("spareunsupported in raidlevel '%s'" % raidlevel) |
1320 | |
1321 | - LOG.debug('raid: cfg: {}'.format(util.json_dumps(info))) |
1322 | + # this log message previously used util.json_dumps, but enums cause json to |
1323 | + # crash, so just output md name and devices instead |
1324 | + LOG.debug('raid: name: {}, devices:{}'.format(md_devname, devices)) |
1325 | device_paths = list(get_path_to_storage_volume(dev, storage_config) for |
1326 | dev in devices) |
1327 | LOG.debug('raid: device path mapping: {}'.format( |
1328 | @@ -920,7 +956,7 @@ |
1329 | zip(spare_devices, spare_device_paths))) |
1330 | |
1331 | # Handle preserve flag |
1332 | - if info.get('preserve'): |
1333 | + if info.get(GEN_KEYS.preserve): |
1334 | # check if the array is already up, if not try to assemble |
1335 | if not mdadm.md_check(md_devname, raidlevel, |
1336 | device_paths, spare_device_paths): |
1337 | @@ -939,10 +975,10 @@ |
1338 | |
1339 | mdadm.mdadm_create(md_devname, raidlevel, |
1340 | device_paths, spare_device_paths, |
1341 | - info.get('mdname', '')) |
1342 | + info.get(RAID_KEYS.mdname, '')) |
1343 | |
1344 | # Make dname rule for this dev |
1345 | - make_dname(info.get('id'), storage_config) |
1346 | + make_dname(info.get(GEN_KEYS.id), storage_config) |
1347 | |
1348 | # A mdadm.conf will be created in the same directory as the fstab in the |
1349 | # configuration. This will then be copied onto the installed system later. |
1350 | @@ -961,16 +997,16 @@ |
1351 | |
1352 | # If ptable is specified, call disk_handler on this mdadm device to create |
1353 | # the table |
1354 | - if info.get('ptable'): |
1355 | + if info.get(RAID_KEYS.ptable): |
1356 | disk_handler(info, storage_config) |
1357 | |
1358 | |
1359 | def bcache_handler(info, storage_config): |
1360 | - backing_device = get_path_to_storage_volume(info.get('backing_device'), |
1361 | - storage_config) |
1362 | - cache_device = get_path_to_storage_volume(info.get('cache_device'), |
1363 | - storage_config) |
1364 | - cache_mode = info.get('cache_mode', None) |
1365 | + backing_device = get_path_to_storage_volume( |
1366 | + info.get(BCACHE_KEYS.backing_device), storage_config) |
1367 | + cache_device = get_path_to_storage_volume( |
1368 | + info.get(BCACHE_KEYS.cache_device), storage_config) |
1369 | + cache_mode = info.get(BCACHE_KEYS.cache_mode) |
1370 | |
1371 | if not backing_device or not cache_device: |
1372 | raise ValueError("backing device and cache device for bcache" |
1373 | @@ -1093,74 +1129,58 @@ |
1374 | cache_mode_file = \ |
1375 | '/sys/block/{}/bcache/cache_mode'.format(bcache_dev) |
1376 | with open(cache_mode_file, "w") as fp: |
1377 | - fp.write(cache_mode) |
1378 | + fp.write(cache_mode.name) |
1379 | else: |
1380 | # no backing device |
1381 | if cache_mode: |
1382 | raise ValueError("cache mode specified which can only be set per \ |
1383 | backing devices, but none was specified") |
1384 | |
1385 | - if info.get('name'): |
1386 | + if info.get(GEN_KEYS.name): |
1387 | # Make dname rule for this dev |
1388 | - make_dname(info.get('id'), storage_config) |
1389 | + make_dname(info.get(GEN_KEYS.id), storage_config) |
1390 | |
1391 | - if info.get('ptable'): |
1392 | - raise ValueError("Partition tables on top of lvm logical volumes is \ |
1393 | - not supported") |
1394 | LOG.debug('Finished bcache creation for backing {} or caching {}' |
1395 | .format(backing_device, cache_device)) |
1396 | |
1397 | |
1398 | -def extract_storage_ordered_dict(config): |
1399 | - storage_config = config.get('storage', {}) |
1400 | - if not storage_config: |
1401 | - raise ValueError("no 'storage' entry in config") |
1402 | - scfg = storage_config.get('config') |
1403 | - if not scfg: |
1404 | - raise ValueError("invalid storage config data") |
1405 | - |
1406 | - # Since storage config will often have to be searched for a value by its |
1407 | - # id, and this can become very inefficient as storage_config grows, a dict |
1408 | - # will be generated with the id of each component of the storage_config as |
1409 | - # its index and the component of storage_config as its value |
1410 | - return OrderedDict((d["id"], d) for (i, d) in enumerate(scfg)) |
1411 | - |
1412 | - |
1413 | def meta_custom(args): |
1414 | - """Does custom partitioning based on the layout provided in the config |
1415 | + """ |
1416 | + Does custom partitioning based on the layout provided in the config |
1417 | file. Section with the name storage contains information on which |
1418 | partitions on which disks to create. It also contains information about |
1419 | overlays (raid, lvm, bcache) which need to be setup. |
1420 | """ |
1421 | |
1422 | command_handlers = { |
1423 | - 'disk': disk_handler, |
1424 | - 'partition': partition_handler, |
1425 | - 'format': format_handler, |
1426 | - 'mount': mount_handler, |
1427 | - 'lvm_volgroup': lvm_volgroup_handler, |
1428 | - 'lvm_partition': lvm_partition_handler, |
1429 | - 'dm_crypt': dm_crypt_handler, |
1430 | - 'raid': raid_handler, |
1431 | - 'bcache': bcache_handler |
1432 | + CMD_TYPES.disk: disk_handler, |
1433 | + CMD_TYPES.partition: partition_handler, |
1434 | + CMD_TYPES.format: format_handler, |
1435 | + CMD_TYPES.mount: mount_handler, |
1436 | + CMD_TYPES.lvm_volgroup: lvm_volgroup_handler, |
1437 | + CMD_TYPES.lvm_partition: lvm_partition_handler, |
1438 | + CMD_TYPES.dm_crypt: dm_crypt_handler, |
1439 | + CMD_TYPES.raid: raid_handler, |
1440 | + CMD_TYPES.bcache: bcache_handler |
1441 | } |
1442 | |
1443 | state = util.load_command_environment() |
1444 | cfg = config.load_command_config(args, state) |
1445 | |
1446 | - storage_config_dict = extract_storage_ordered_dict(cfg) |
1447 | + storage_config_dict = sc.StorageConfig(cfg) |
1448 | |
1449 | # set up reportstack |
1450 | stack_prefix = state.get('report_stack_prefix', '') |
1451 | |
1452 | for item_id, command in storage_config_dict.items(): |
1453 | - handler = command_handlers.get(command['type']) |
1454 | + handler = command_handlers.get(command.get(GEN_KEYS.type)) |
1455 | if not handler: |
1456 | - raise ValueError("unknown command type '%s'" % command['type']) |
1457 | + raise ValueError("unknown command type '%s'" % |
1458 | + command.get(GEN_KEYS.type)) |
1459 | with events.ReportEventStack( |
1460 | name=stack_prefix, reporting_enabled=True, level="INFO", |
1461 | - description="configuring %s: %s" % (command['type'], |
1462 | - command['id'])): |
1463 | + description="configuring %s: %s" % (command.get(GEN_KEYS.type), |
1464 | + command.get(GEN_KEYS.id))): |
1465 | try: |
1466 | handler(command, storage_config_dict) |
1467 | except Exception as error: |
1468 | |
1469 | === modified file 'curtin/commands/block_wipe.py' |
1470 | --- curtin/commands/block_wipe.py 2016-04-04 20:12:01 +0000 |
1471 | +++ curtin/commands/block_wipe.py 2016-06-14 21:02:46 +0000 |
1472 | @@ -17,14 +17,17 @@ |
1473 | |
1474 | import sys |
1475 | import curtin.block as block |
1476 | +from curtin.block import WIPE_MODES |
1477 | +from curtin import enum |
1478 | from . import populate_one_subcmd |
1479 | |
1480 | |
1481 | def wipe_main(args): |
1482 | # curtin clear-holders device [device2 [device3]] |
1483 | + wipe_mode = enum.convert_to_enum(WIPE_MODES, args.wipe_mode) |
1484 | for blockdev in args.devices: |
1485 | try: |
1486 | - block.wipe_volume(blockdev, mode=args.mode) |
1487 | + block.wipe_volume(blockdev, mode=wipe_mode) |
1488 | except Exception as e: |
1489 | sys.stderr.write( |
1490 | "Failed to wipe volume %s in mode %s: %s" % |
1491 | @@ -36,8 +39,8 @@ |
1492 | CMD_ARGUMENTS = ( |
1493 | ((('-m', '--mode'), |
1494 | {'help': 'mode for wipe.', 'action': 'store', |
1495 | - 'default': 'superblocks', |
1496 | - 'choices': ['zero', 'superblock', 'superblock-recursive', 'random']}), |
1497 | + 'default': WIPE_MODES.superblock.name, |
1498 | + 'choices': sorted(enum.enum_names(WIPE_MODES))}), |
1499 | ('devices', |
1500 | {'help': 'devices to wipe', 'default': [], 'nargs': '+'}), |
1501 | ) |
1502 | |
1503 | === modified file 'curtin/commands/curthooks.py' |
1504 | --- curtin/commands/curthooks.py 2016-05-17 21:14:36 +0000 |
1505 | +++ curtin/commands/curthooks.py 2016-06-14 21:02:46 +0000 |
1506 | @@ -377,8 +377,8 @@ |
1507 | |
1508 | # FIXME: these methods need moving to curtin.block |
1509 | # and using them from there rather than commands.block_meta |
1510 | - from curtin.commands.block_meta import (extract_storage_ordered_dict, |
1511 | - get_path_to_storage_volume) |
1512 | + from curtin.commands.block_meta import get_path_to_storage_volume |
1513 | + from curtin.block import storage_config |
1514 | |
1515 | grubcfg = cfg.get('grub', {}) |
1516 | |
1517 | @@ -390,14 +390,14 @@ |
1518 | # if there is storage config, look for devices tagged with 'grub_device' |
1519 | storage_cfg_odict = None |
1520 | try: |
1521 | - storage_cfg_odict = extract_storage_ordered_dict(cfg) |
1522 | + storage_cfg_odict = storage_config.StorageConfig(cfg) |
1523 | except ValueError as e: |
1524 | pass |
1525 | |
1526 | if storage_cfg_odict: |
1527 | storage_grub_devices = [] |
1528 | for item_id, item in storage_cfg_odict.items(): |
1529 | - if not item.get('grub_device'): |
1530 | + if not item.get(storage_config.DISK_KEYS.grub_device): |
1531 | continue |
1532 | LOG.debug("checking: %s", item) |
1533 | storage_grub_devices.append( |
1534 | |
1535 | === modified file 'curtin/commands/mkfs.py' |
1536 | --- curtin/commands/mkfs.py 2016-01-19 00:51:51 +0000 |
1537 | +++ curtin/commands/mkfs.py 2016-06-14 21:02:46 +0000 |
1538 | @@ -17,7 +17,8 @@ |
1539 | |
1540 | from . import populate_one_subcmd |
1541 | from curtin.block.mkfs import mkfs as run_mkfs |
1542 | -from curtin.block.mkfs import valid_fstypes |
1543 | +from curtin.block.mkfs import FS_TYPES |
1544 | +from curtin import enum |
1545 | |
1546 | import sys |
1547 | |
1548 | @@ -28,7 +29,7 @@ |
1549 | 'metavar': 'DEVICE', 'action': 'store', 'nargs': '+'}), |
1550 | (('-f', '--fstype'), |
1551 | {'help': 'filesystem type to use. default is ext4', |
1552 | - 'choices': sorted(valid_fstypes()), |
1553 | + 'choices': sorted(enum.enum_names(FS_TYPES)), |
1554 | 'default': 'ext4', 'action': 'store'}), |
1555 | (('-l', '--label'), |
1556 | {'help': 'label to use for filesystem', 'action': 'store'}), |
1557 | @@ -45,13 +46,14 @@ |
1558 | |
1559 | |
1560 | def mkfs(args): |
1561 | + fstype = enum.convert_to_enum(FS_TYPES, args.fstype) |
1562 | for device in args.devices: |
1563 | - uuid = run_mkfs(device, args.fstype, strict=args.strict, |
1564 | + uuid = run_mkfs(device, fstype, strict=args.strict, |
1565 | uuid=args.uuid, label=args.label, |
1566 | force=args.force) |
1567 | |
1568 | print("Created '%s' filesystem in '%s' with uuid '%s' and label '%s'" % |
1569 | - (args.fstype, device, uuid, args.label)) |
1570 | + (fstype.name, device, uuid, args.label)) |
1571 | |
1572 | sys.exit(0) |
1573 | |
1574 | |
1575 | === added file 'curtin/enum.py' |
1576 | --- curtin/enum.py 1970-01-01 00:00:00 +0000 |
1577 | +++ curtin/enum.py 2016-06-14 21:02:46 +0000 |
1578 | @@ -0,0 +1,90 @@ |
1579 | +# Copyright (C) 2016 Canonical Ltd. |
1580 | +# |
1581 | +# Author: Wesley Wiedenmeier <wesley.wiedenmeier@canonical.com> |
1582 | +# |
1583 | +# Curtin is free software: you can redistribute it and/or modify it under |
1584 | +# the terms of the GNU Affero General Public License as published by the |
1585 | +# Free Software Foundation, either version 3 of the License, or (at your |
1586 | +# option) any later version. |
1587 | +# |
1588 | +# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY |
1589 | +# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS |
1590 | +# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for |
1591 | +# more details. |
1592 | +# |
1593 | +# You should have received a copy of the GNU Affero General Public License |
1594 | +# along with Curtin. If not, see <http://www.gnu.org/licenses/>. |
1595 | + |
1596 | +# this provides a python2 and python3 compatible enum class |
1597 | +# it was written by harlowja at: |
1598 | +# https://gist.github.com/harlowja/f60d91128a1fcd99d30d27715d5a9a30 |
1599 | + |
1600 | +from __future__ import absolute_import |
1601 | + |
1602 | +try: |
1603 | + import enum |
1604 | +except ImportError: |
1605 | + enum = None |
1606 | + |
1607 | + |
1608 | +def enum_names(enum): |
1609 | + return [i.name for i in enum] |
1610 | + |
1611 | + |
1612 | +def convert_to_enum(enum_t, attr_name): |
1613 | + """convert attr_name to instance of enum_t""" |
1614 | + # the fallback enum method will not work with isinstance, use comparison |
1615 | + if (isinstance(attr_name, enum_t) or attr_name in enum_t): |
1616 | + return attr_name |
1617 | + elif isinstance(attr_name, str): |
1618 | + try: |
1619 | + return enum_t[attr_name] |
1620 | + except KeyError: |
1621 | + raise ValueError("'{}' not a valid value for '{}'" |
1622 | + .format(attr_name, enum_t)) |
1623 | + raise ValueError("invalid input type for enum {}".format(attr_name)) |
1624 | + |
1625 | + |
1626 | +def make_enum(cls_name, enums): |
1627 | + # Basic validations. |
1628 | + if not enums: |
1629 | + raise ValueError("At least one enum name must be provided") |
1630 | + if not cls_name: |
1631 | + raise ValueError("A class name must be provided") |
1632 | + for e in enums: |
1633 | + # cannot use six.string_types as six module is not present in precise |
1634 | + # ephemeral image |
1635 | + if not isinstance(e, (str, bytes)): |
1636 | + raise TypeError("Enum name '%s' must be a string type" % e) |
1637 | + # Just use the built-in one (if we can). |
1638 | + if enum is not None: |
1639 | + return enum.Enum(cls_name, enums) |
1640 | + else: |
1641 | + # Otherwise make a good enough equivalent. |
1642 | + |
1643 | + class EnumValue(object): |
1644 | + __slots__ = ('name', 'value') |
1645 | + |
1646 | + def __init__(self, name, value): |
1647 | + self.name = name |
1648 | + self.value = value |
1649 | + |
1650 | + def __repr__(self): |
1651 | + return "<%s.%s: %s>" % (cls_name, self.name, self.value) |
1652 | + |
1653 | + enums_vals = [EnumValue(e, i + 1) for i, e in enumerate(enums)] |
1654 | + enum_dict = dict(zip(enums, enums_vals)) |
1655 | + |
1656 | + class EnumMeta(type): |
1657 | + |
1658 | + def __new__(cls, name, parents, dct): |
1659 | + return super(EnumMeta, cls).__new__(cls, name, parents, dct) |
1660 | + |
1661 | + def __iter__(self): |
1662 | + for e in enums_vals: |
1663 | + yield e |
1664 | + |
1665 | + def __getitem__(self, item): |
1666 | + return enum_dict[item] |
1667 | + |
1668 | + return EnumMeta(cls_name, (object,), enum_dict) |
1669 | |
1670 | === modified file 'curtin/reporter/events.py' |
1671 | --- curtin/reporter/events.py 2016-05-06 17:21:33 +0000 |
1672 | +++ curtin/reporter/events.py 2016-06-14 21:02:46 +0000 |
1673 | @@ -25,6 +25,7 @@ |
1674 | import time |
1675 | |
1676 | from . import instantiated_handler_registry |
1677 | +from curtin import enum |
1678 | |
1679 | FINISH_EVENT_TYPE = 'finish' |
1680 | START_EVENT_TYPE = 'start' |
1681 | @@ -32,15 +33,7 @@ |
1682 | |
1683 | DEFAULT_EVENT_ORIGIN = 'curtin' |
1684 | |
1685 | - |
1686 | -class _nameset(set): |
1687 | - def __getattr__(self, name): |
1688 | - if name in self: |
1689 | - return name |
1690 | - raise AttributeError("%s not a valid value" % name) |
1691 | - |
1692 | - |
1693 | -status = _nameset(("SUCCESS", "WARN", "FAIL")) |
1694 | +status = enum.make_enum('status', ('SUCCESS', 'WARN', 'FAIL')) |
1695 | |
1696 | |
1697 | class ReportingEvent(object): |
1698 | @@ -87,12 +80,12 @@ |
1699 | |
1700 | def as_string(self): |
1701 | return '{0}: {1}: {2}: {3}'.format( |
1702 | - self.event_type, self.name, self.result, self.description) |
1703 | + self.event_type, self.name, self.result.name, self.description) |
1704 | |
1705 | def as_dict(self): |
1706 | """The event represented as json friendly.""" |
1707 | data = super(FinishReportingEvent, self).as_dict() |
1708 | - data['result'] = self.result |
1709 | + data['result'] = self.result.name |
1710 | if self.post_files: |
1711 | data['files'] = _collect_file_info(self.post_files) |
1712 | if self.result == status.WARN: |
1713 | |
1714 | === modified file 'tests/unittests/test_block_mkfs.py' |
1715 | --- tests/unittests/test_block_mkfs.py 2016-04-07 20:47:33 +0000 |
1716 | +++ tests/unittests/test_block_mkfs.py 2016-06-14 21:02:46 +0000 |
1717 | @@ -1,4 +1,6 @@ |
1718 | from curtin.block import mkfs |
1719 | +from curtin.block.mkfs import (FMT_KEYS, FS_TYPES) |
1720 | +from curtin.block.storage_config import (GEN_KEYS, CMD_TYPES) |
1721 | |
1722 | from unittest import TestCase |
1723 | import mock |
1724 | @@ -8,9 +10,16 @@ |
1725 | test_uuid = "fb26cc6c-ae73-11e5-9e38-2fb63f0c3155" |
1726 | |
1727 | def _get_config(self, fstype): |
1728 | - return {"fstype": fstype, "type": "format", "id": "testfmt", |
1729 | - "volume": "null", "label": "format1", |
1730 | - "uuid": self.test_uuid} |
1731 | + try: |
1732 | + fs_type_enum = FS_TYPES[fstype] |
1733 | + except KeyError: |
1734 | + fs_type_enum = None |
1735 | + return {FMT_KEYS.fstype: fs_type_enum, |
1736 | + GEN_KEYS.type: CMD_TYPES.format, |
1737 | + GEN_KEYS.id: "testfmt", |
1738 | + FMT_KEYS.volume: "null", |
1739 | + FMT_KEYS.label: "format1", |
1740 | + FMT_KEYS.uuid: self.test_uuid} |
1741 | |
1742 | def _assert_same_flags(self, call, expected): |
1743 | print("call:\n{}".format(call)) |
1744 | @@ -104,13 +113,16 @@ |
1745 | """Do not proceed if filesystem label is too long""" |
1746 | with self.assertRaises(ValueError): |
1747 | conf = self._get_config("ext4") |
1748 | - conf['label'] = "thislabelislongerthan16chars" |
1749 | - self._run_mkfs_with_config(conf, "mkfs.ext4", [], strict=True) |
1750 | + conf[FMT_KEYS.label] = "thislabelislongerthan16chars" |
1751 | + expected_flags = ["-F", ["-L", conf[FMT_KEYS.label]], |
1752 | + ["-U", self.test_uuid]] |
1753 | + self._run_mkfs_with_config(conf, "mkfs.ext4", expected_flags, |
1754 | + strict=True) |
1755 | |
1756 | conf = self._get_config("swap") |
1757 | expected_flags = ["--force", ["--label", "abcdefghijklmno"], |
1758 | - ["--uuid", conf['uuid']]] |
1759 | - conf['label'] = "abcdefghijklmnop" # 16 chars, 15 is max |
1760 | + ["--uuid", self.test_uuid]] |
1761 | + conf[FMT_KEYS.label] = "abcdefghijklmnop" # 16 chars, 15 is max |
1762 | |
1763 | # Raise error, do not truncate with strict = True |
1764 | with self.assertRaises(ValueError): |
1765 | |
1766 | === added file 'tests/unittests/test_enum.py' |
1767 | --- tests/unittests/test_enum.py 1970-01-01 00:00:00 +0000 |
1768 | +++ tests/unittests/test_enum.py 2016-06-14 21:02:46 +0000 |
1769 | @@ -0,0 +1,88 @@ |
1770 | +from unittest import (TestCase, skipIf) |
1771 | +from curtin import enum as curtin_enum |
1772 | + |
1773 | +try: |
1774 | + import enum as builtin_enum |
1775 | +except ImportError: |
1776 | + builtin_enum = None |
1777 | + |
1778 | + |
1779 | +class _disable_builtin_enum(object): |
1780 | + def __init__(self, curtin_enum_instance): |
1781 | + self.curtin_enum_instance = curtin_enum_instance |
1782 | + |
1783 | + def __enter__(self): |
1784 | + self.curtin_enum_instance_old_enum = self.curtin_enum_instance.enum |
1785 | + self.curtin_enum_instance.enum = None |
1786 | + |
1787 | + def __exit__(self, exc_type, exc_val, exc_tb): |
1788 | + self.curtin_enum_instance.enum = self.curtin_enum_instance_old_enum |
1789 | + |
1790 | + |
1791 | +@skipIf(builtin_enum is None, |
1792 | + 'TestEnum disabled when enum module is not available') |
1793 | +class TestEnum(TestCase): |
1794 | + def _get_colors_enum(self): |
1795 | + return curtin_enum.make_enum('colors', ['red', 'green', 'blue']) |
1796 | + |
1797 | + def test_enum_selection(self): |
1798 | + """Ensure that curtin.enum uses py3 enum when available""" |
1799 | + self.assertEqual(builtin_enum, curtin_enum.enum) |
1800 | + if builtin_enum is not None: |
1801 | + self.assertTrue(isinstance(self._get_colors_enum(), |
1802 | + builtin_enum.EnumMeta)) |
1803 | + with _disable_builtin_enum(curtin_enum): |
1804 | + self.assertFalse(isinstance(self._get_colors_enum(), |
1805 | + builtin_enum.EnumMeta)) |
1806 | + |
1807 | + def test_fallback_enum_comparisons(self): |
1808 | + """Ensure that the fallback curtin.enum handles comparisons""" |
1809 | + with _disable_builtin_enum(curtin_enum): |
1810 | + colors = self._get_colors_enum() |
1811 | + self.assertEqual(colors.red, colors.red) |
1812 | + self.assertIs(colors.red, colors.red) |
1813 | + self.assertNotEqual(colors.blue, colors.red) |
1814 | + self.assertIsNot(colors.blue, colors.red) |
1815 | + self.assertNotEqual(colors.blue, 'blue') |
1816 | + self.assertNotEqual(colors.blue, 3) |
1817 | + self.assertEqual(colors.blue.value, 3) |
1818 | + self.assertEqual(colors.blue.name, 'blue') |
1819 | + |
1820 | + def test_fallback_enum_meta(self): |
1821 | + """Ensure that the fallback enum meta works""" |
1822 | + with _disable_builtin_enum(curtin_enum): |
1823 | + colors = self._get_colors_enum() |
1824 | + for color, expected in zip(colors, ('red', 'green', 'blue')): |
1825 | + self.assertEqual(color.name, expected) |
1826 | + self.assertEqual(colors['red'].name, 'red') |
1827 | + with self.assertRaises(KeyError): |
1828 | + colors['purple'] |
1829 | + |
1830 | + def test_enum_names(self): |
1831 | + """Ensure that enum names works with both builtin and fallback enum""" |
1832 | + self.assertEqual(curtin_enum.enum_names(self._get_colors_enum()), |
1833 | + ['red', 'green', 'blue']) |
1834 | + with _disable_builtin_enum(curtin_enum): |
1835 | + self.assertEqual(curtin_enum.enum_names(self._get_colors_enum()), |
1836 | + ['red', 'green', 'blue']) |
1837 | + |
1838 | + def test_convert_to_enum(self): |
1839 | + """Ensure that enum.convert_to_enum() works""" |
1840 | + colors = self._get_colors_enum() |
1841 | + self.assertEqual(curtin_enum.convert_to_enum(colors, colors.red), |
1842 | + colors.red) |
1843 | + self.assertEqual(curtin_enum.convert_to_enum(colors, 'green'), |
1844 | + colors.green) |
1845 | + with self.assertRaises(ValueError): |
1846 | + curtin_enum.convert_to_enum(colors, 'purple') |
1847 | + with self.assertRaises(ValueError): |
1848 | + curtin_enum.convert_to_enum(colors, dict()) |
1849 | + |
1850 | + def test_convert_to_enum_use_fallback(self): |
1851 | + """Ensure that enum.convert_to_enum() works with fallback enum""" |
1852 | + with _disable_builtin_enum(curtin_enum): |
1853 | + colors = self._get_colors_enum() |
1854 | + self.assertEqual(curtin_enum.convert_to_enum(colors, colors.red), |
1855 | + colors.red) |
1856 | + self.assertEqual(curtin_enum.convert_to_enum(colors, 'green'), |
1857 | + colors.green) |
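For context outside the diff: the python2 fallback path added in curtin/enum.py can be exercised standalone. The sketch below is an illustrative, self-contained reimplementation of the same metaclass approach (names mirror the diff, but this is not the actual curtin module):

```python
# Minimal sketch of the py2 fallback in curtin/enum.py above: a metaclass
# supplies iteration and name lookup so the class behaves enough like
# enum.Enum for the comparisons curtin performs.

class EnumValue(object):
    __slots__ = ('name', 'value')

    def __init__(self, name, value):
        self.name = name
        self.value = value

    def __repr__(self):
        return "<EnumValue %s: %s>" % (self.name, self.value)


def make_enum(cls_name, enums):
    # one EnumValue per name, 1-indexed like the stdlib enum
    vals = [EnumValue(e, i + 1) for i, e in enumerate(enums)]
    enum_dict = dict(zip(enums, vals))

    class EnumMeta(type):
        def __iter__(cls):
            # iteration yields members in declaration order
            return iter(vals)

        def __getitem__(cls, item):
            # name lookup, e.g. colors['red']
            return enum_dict[item]

    # enum_dict becomes the class dict, so members are attributes too
    return EnumMeta(cls_name, (object,), enum_dict)


colors = make_enum('colors', ['red', 'green', 'blue'])
```

Since members are plain objects with no `__eq__`, identity comparison works (`colors.red is colors.red`) while comparison against strings or ints fails, which is exactly what the `test_fallback_enum_comparisons` test in the diff asserts.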
FAILED: Continuous integration, rev:412
https://server-team-jenkins.canonical.com/job/curtin-ci/244/
Executed test runs:
None: https://server-team-jenkins.canonical.com/job/generic-update-mp/241/console
Click here to trigger a rebuild:
https://server-team-jenkins.canonical.com/job/curtin-ci/244/rebuild
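On the util.json_dumps comment in the block_meta diff: stdlib json cannot serialize Enum members directly, which is why the branch logs the `.name` string instead. A quick illustration using the stdlib enum module (the `Status` name here is illustrative, modeled on the reporter's status enum):

```python
import json
import enum

# functional API, like curtin's make_enum when stdlib enum is available
Status = enum.Enum('Status', ['SUCCESS', 'WARN', 'FAIL'])

# Enum members raise TypeError under json.dumps...
try:
    json.dumps({'result': Status.SUCCESS})
    serializable = True
except TypeError:
    serializable = False

# ...so logging the .name string serializes fine, matching the
# result.name change in curtin/reporter/events.py above.
as_name = json.dumps({'result': Status.SUCCESS.name})
```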