Merge lp:~raharper/curtin/trunk.power8 into lp:~curtin-dev/curtin/trunk
Status: Work in progress
Proposed branch: lp:~raharper/curtin/trunk.power8
Merge into: lp:~curtin-dev/curtin/trunk
Diff against target: 2563 lines (+1071/-613), 30 files modified:
  curtin/block/__init__.py (+16/-2)
  curtin/commands/block_meta.py (+75/-24)
  curtin/commands/curthooks.py (+2/-1)
  examples/tests/allindata.yaml (+9/-4)
  examples/tests/basic.yaml (+4/-4)
  examples/tests/bcache_basic.yaml (+2/-2)
  examples/tests/lvm.yaml (+1/-1)
  examples/tests/mdadm_bcache.yaml (+3/-3)
  examples/tests/mdadm_bcache_complex.yaml (+3/-3)
  examples/tests/mirrorboot.yaml (+2/-2)
  examples/tests/nvme.yaml (+7/-7)
  examples/tests/raid10boot.yaml (+4/-4)
  examples/tests/raid5bcache.yaml (+5/-5)
  examples/tests/raid5boot.yaml (+3/-3)
  examples/tests/raid6boot.yaml (+4/-4)
  examples/tests/uefi_basic.yaml (+7/-1)
  examples/tests/vlan_network.yaml (+2/-0)
  helpers/common (+2/-0)
  tests/vmtests/__init__.py (+659/-15)
  tests/vmtests/test_basic.py (+10/-280)
  tests/vmtests/test_bcache_basic.py (+139/-22)
  tests/vmtests/test_bonding.py (+1/-0)
  tests/vmtests/test_lvm.py (+48/-21)
  tests/vmtests/test_mdadm_bcache.py (+9/-83)
  tests/vmtests/test_multipath.py (+1/-0)
  tests/vmtests/test_network.py (+4/-0)
  tests/vmtests/test_nvme.py (+3/-1)
  tests/vmtests/test_raid5_bcache.py (+4/-62)
  tests/vmtests/test_uefi_basic.py (+37/-53)
  tools/launch (+5/-6)
To merge this branch: bzr merge lp:~raharper/curtin/trunk.power8
Related bugs:

| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Server Team CI bot | continuous-integration | | Approve |
| curtin developers | | | Pending |

Review via email:
Commit message
Description of the change
autogenerate vmtest data collection
All vmtest classes define lots of duplicate data and require knowledge of the
contents of the input config yaml. This is cumbersome and error-prone.
Fix this by auto-generating the scripts to collect data based on the input
config.
Updated block_meta's get_path_to_storage_volume:
- renamed it to get_path_to_storage_volume_no_sync and dropped the device sync
- added a wrapper function under the original name for existing callers, which calls devsync
- implemented handlers for the 'format', 'mount', and 'lvm_partition' types
Updated block_meta's determine_partition_kname to handle disks that have a
wwn assigned.
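For reference, the updated naming logic (shown in full in the preview diff below) maps disk names to partition names like this:

```python
def determine_partition_kname(disk_kname, partition_number):
    # (sda, 2) => sda2; (nvmen1, 1) => nvmen1p1; (mmcblk0, 5) => mmcblk0p5
    # (wwn-0xfff, 1) => wwn-0xfff-part1; (scsi-$ID, 2) => scsi-$ID-part2
    if disk_kname.startswith("nvme") or disk_kname.startswith("mmcblk"):
        partition_number = "p%s" % partition_number
    elif disk_kname.startswith("wwn-") or disk_kname.startswith("scsi-"):
        partition_number = "-part%s" % partition_number
    return "%s%s" % (disk_kname, partition_number)
```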
Refactored the basic vmtest to read the parsed storage config, calculate the
correct set of expected collected files, and then validate the internal data.
This currently passes for XenialBasic.
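A condensed sketch of the new flow in tests/vmtests/__init__.py (the real generate_storage_scripts carries a template set for every storage type; only the disk and partition templates are shown here):

```python
import os

import yaml

import curtin.util as util
from curtin.commands.block_meta import (extract_storage_ordered_dict,
                                        get_path_to_storage_volume_no_sync)

# Per-type collection command templates (a subset; the full set in the
# preview diff also covers lvm, raid, bcache, format and mount items).
SCRIPT_TEMPLATES = {
    'disk': ["blkid -o export {devpath} > {collectd}/blkid_{basename}",
             "parted --script {devpath} unit B print > "
             "{collectd}/parted_{basename}"],
    'partition': ["blkid -o export {devpath} > {collectd}/blkid_{basename}"],
}


def generate_storage_scripts(storage_config):
    """Render one collection command per template per storage item."""
    scripts = []
    for item_id, command in storage_config.items():
        # predict the device path without syncing (we are not on the target)
        devpath = get_path_to_storage_volume_no_sync(item_id, storage_config)
        for tpl in SCRIPT_TEMPLATES.get(command['type'], []):
            scripts.append(tpl.format(devpath=devpath,
                                      basename=os.path.basename(devpath),
                                      collectd='OUTPUT_COLLECT_D'))
    return scripts


# the storage config is parsed once from the test's input yaml
conf_yaml = yaml.load(util.load_file('examples/tests/basic.yaml'))
storage_config = extract_storage_ordered_dict(conf_yaml)
collect_scripts = generate_storage_scripts(storage_config)
```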
Server Team CI bot (server-team-bot) wrote:
- 384. By Ryan Harper: merge from trunk, resolve conflicts
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:384
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) wrote:
Why the funny '_no_sync' name, versus a 'sync=False' argument?
I'm not opposed to the overall injection of the prep partition.
I don't understand yet how you check the exported data though.
Ryan Harper (raharper) wrote:
On Wed, Jun 29, 2016 at 1:54 PM, Scott Moser <email address hidden> wrote:
> Why the funny '_no_sync' name, versus a 'sync=False' argument?
>
Because of class instantiation. I didn't want to touch everywhere else that
calls it to set sync=True during a real install. If I set it to True by
default, then it attempts to devsync when I just want it to generate plausible
paths at startup time.
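Concretely, the split keeps the public entry point unchanged for every existing caller and only moves the sync (a short sketch matching the diff below):

```python
def get_path_to_storage_volume(volume, storage_config):
    """common path: syncs the device path for existing callers"""
    volpath = get_path_to_storage_volume_no_sync(volume, storage_config)
    devsync(volpath)
    return volpath
```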
>
> I'm not opposed to the overall injection of the prep partition.
> I don't understand yet how you check the exported data though.
>
We're programmatically generating blkid and parted commands and
writing their output at known paths based on a unique disk identifier (wwn).
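For example, with the blkid template from this branch and a disk declared with wwn: '0x2b873dea52e628f8' (as in examples/tests/basic.yaml), the rendered collection command would look like this (the resolved by-id path is illustrative):

```python
import os

blkid_tpl = "blkid -o export {devpath} > {collectd}/blkid_{basename}"
# illustrative resolved path for that wwn
devpath = '/dev/disk/by-id/wwn-0x2b873dea52e628f8'
print(blkid_tpl.format(devpath=devpath,
                       basename=os.path.basename(devpath),
                       collectd='OUTPUT_COLLECT_D'))
# -> blkid -o export /dev/disk/by-id/wwn-0x2b873dea52e628f8 >
#    OUTPUT_COLLECT_D/blkid_wwn-0x2b873dea52e628f8
```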
>
> Diff comments:
>
> > === modified file 'curtin/commands/block_meta.py'
> > --- curtin/commands/block_meta.py 2016-06-08 21:58:29 +0000
> > +++ curtin/commands/block_meta.py 2016-07-19 15:04:36 +0000
> > @@ -266,10 +266,20 @@
> >
> >
> > def determine_partition_kname(disk_kname, partition_number):
> > - for dev_type in ["nvme", "mmcblk"]:
> > - if disk_kname.startswith(dev_type):
> > - partition_number = "p%s" % partition_number
> > - break
> > + """ Modify, if necessary, the prefix of a partition
> > + depending on the type of disk kernel name
>
> maybe the function name was always wrong, but now it is wrong for sure.
> wwn-0xff-part1 is not a kname (kernel name).
>
meh, we can drop the k, determine_partition_name
>
> > +
> > + (sda, 2) => sda2
> > + (nvmen1, 1) => nvmen1p1
> > + (mmcblk0, 5) => mmcblk0p5
> > + (wwn-0xfff, 1) => wwn-0xfff-part1
> > + (scsi-$ID, 2) => scsi-$ID-part2
> > + """
> > + if disk_kname.startswith("nvme") or disk_kname.startswith("mmcblk"):
> > + partition_number = "p%s" % partition_number
> > + elif disk_kname.startswith("wwn-") or disk_kname.startswith('scsi-'):
> > + partition_number = "-part%s" % partition_number
> > +
> > return "%s%s" % (disk_kname, partition_number)
> >
> >
> > @@ -653,6 +687,8 @@
> > info.get('id'), device, disk_ptable)
> > LOG.debug("partnum: %s offset_sectors: %s length_sectors: %s",
> > partnumber, offset_sectors, length_sectors)
> > + (out, err) = util.subp(["cat", "/proc/partitions"], capture=True)
> > + LOG.debug("partitions:\n%s", out)
>
> util.load_file
>
> debugging, will drop, but good use of load_file.
> > if disk_ptable == "msdos":
> > if flag in ["extended", "logical", "primary"]:
> > partition_type = flag
> > @@ -662,6 +698,15 @@
> > "%ss" % offset_sectors, "%ss" % str(offset_sectors +
> > length_sectors)]
> > util.subp(cmd, capture=True)
> > +
> > + (out, err) = util.subp(["cat", "/proc/partitions"], capture=True)
> > + LOG.debug("partitions:\n%s", out)
>
> I guess it's just debug, but load_file is just as easy.
>
> will drop
> > +
> > + if flag in ["prep"]:
> > + cmd = ["parted", "--script", disk, "set %d boot on" % partnumber]
> > + util.subp(cmd, capture=True)
> > + cmd = ["parted", "--script"...
Scott Moser (smoser) wrote:
We really should have some tests that show full use of a disk, such that the inserted partition would not fit.
Thanks for the reply. Generally I think this looks nice. Over time I hope to have less of the copied stuff all round in vmtest and more stuff like what this is doing.
- 385. By Ryan Harper: merge from diglett fixing lvm and common vmtests
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:385
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
- 386. By Ryan Harper: merge from diglett and mdadm-bcache fixes
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:386
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
- 387. By Ryan Harper: merge from diglett fixing remaining storage tests.
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:387
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
- 388. By Ryan Harper: merge from diglett, fixing test_simple
- 389. By Ryan Harper: merge from diglett, full tests run fixes
- 390. By Ryan Harper: more fixups from run on power hardware
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:390
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:390
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Unmerged revisions
- 390. By Ryan Harper: more fixups from run on power hardware
- 389. By Ryan Harper: merge from diglett, full tests run fixes
- 388. By Ryan Harper: merge from diglett, fixing test_simple
- 387. By Ryan Harper: merge from diglett fixing remaining storage tests.
- 386. By Ryan Harper: merge from diglett and mdadm-bcache fixes
- 385. By Ryan Harper: merge from diglett fixing lvm and common vmtests
- 384. By Ryan Harper: merge from trunk, resolve conflicts
- 383. By Ryan Harper: merge from run on diamond/p8 box
- 382. By Ryan Harper: merge from diglett x86 test run
- 381. By Ryan Harper: merge from trunk
Preview Diff
1 | === modified file 'curtin/block/__init__.py' |
2 | --- curtin/block/__init__.py 2016-06-10 14:49:22 +0000 |
3 | +++ curtin/block/__init__.py 2016-07-19 15:04:36 +0000 |
4 | @@ -436,6 +436,19 @@ |
5 | for line in out.splitlines(): |
6 | if "UUID" in line: |
7 | return line.split('=')[-1] |
8 | + |
9 | + # trusty and precise might not include UUID= output, try via by-uuid |
10 | + (out, _err) = util.subp(["ls", "-al", "/dev/disk/by-uuid"], capture=True) |
11 | + if path in out: |
12 | + # prepend ../../ path |
13 | + kname_base = "../../" + os.path.basename(path) |
14 | + # extract the mapping |
15 | + # <uuid> -> ../../<kernel basename> |
16 | + uuid_list = out.split() |
17 | + path_idx = uuid_list.index(kname_base) |
18 | + uuid = uuid_list[path_idx - 2] |
19 | + return uuid |
20 | + |
21 | return '' |
22 | |
23 | |
24 | @@ -479,9 +492,10 @@ |
25 | serial_udev = serial.replace(' ', '_') |
26 | LOG.info('Processing serial %s via udev to %s', serial, serial_udev) |
27 | |
28 | - disks = list(filter(lambda x: serial_udev in x, |
29 | - os.listdir("/dev/disk/by-id/"))) |
30 | + by_id = os.listdir("/dev/disk/by-id/") |
31 | + disks = list(filter(lambda x: serial_udev in x, by_id)) |
32 | if not disks or len(disks) < 1: |
33 | + LOG.debug('cant find serial in: %s', by_id) |
34 | raise ValueError("no disk with serial '%s' found" % serial_udev) |
35 | |
36 | # Sort by length and take the shortest path name, as the longer path names |
37 | |
38 | === modified file 'curtin/commands/block_meta.py' |
39 | --- curtin/commands/block_meta.py 2016-06-08 21:58:29 +0000 |
40 | +++ curtin/commands/block_meta.py 2016-07-19 15:04:36 +0000 |
41 | @@ -266,10 +266,20 @@ |
42 | |
43 | |
44 | def determine_partition_kname(disk_kname, partition_number): |
45 | - for dev_type in ["nvme", "mmcblk"]: |
46 | - if disk_kname.startswith(dev_type): |
47 | - partition_number = "p%s" % partition_number |
48 | - break |
49 | + """ Modify, if necessary, the prefix of a partition |
50 | + depending on the type of disk kernel name |
51 | + |
52 | + (sda, 2) => sda2 |
53 | + (nvmen1, 1) => nvmen1p1 |
54 | + (mmcblk0, 5) => mmcblk0p5 |
55 | + (wwn-0xfff, 1) => wwn-0xfff-part1 |
56 | + (scsi-$ID, 2) => scsi-$ID-part2 |
57 | + """ |
58 | + if disk_kname.startswith("nvme") or disk_kname.startswith("mmcblk"): |
59 | + partition_number = "p%s" % partition_number |
60 | + elif disk_kname.startswith("wwn-") or disk_kname.startswith('scsi-'): |
61 | + partition_number = "-part%s" % partition_number |
62 | + |
63 | return "%s%s" % (disk_kname, partition_number) |
64 | |
65 | |
66 | @@ -354,11 +364,17 @@ |
67 | |
68 | |
69 | def get_path_to_storage_volume(volume, storage_config): |
70 | + """ common path syncs the device path """ |
71 | + volpath = get_path_to_storage_volume_no_sync(volume, storage_config) |
72 | + devsync(volpath) |
73 | + return volpath |
74 | + |
75 | + |
76 | +def get_path_to_storage_volume_no_sync(volume, storage_config): |
77 | # Get path to block device for volume. Volume param should refer to id of |
78 | # volume in storage config |
79 | |
80 | LOG.debug('get_path_to_storage_volume for volume {}'.format(volume)) |
81 | - devsync_vol = None |
82 | vol = storage_config.get(volume) |
83 | if not vol: |
84 | raise ValueError("volume with id '%s' not found" % volume) |
85 | @@ -366,12 +382,11 @@ |
86 | # Find path to block device |
87 | if vol.get('type') == "partition": |
88 | partnumber = determine_partition_number(vol.get('id'), storage_config) |
89 | - disk_block_path = get_path_to_storage_volume(vol.get('device'), |
90 | - storage_config) |
91 | + disk_block_path = get_path_to_storage_volume_no_sync(vol.get('device'), |
92 | + storage_config) |
93 | (base_path, disk_kname) = os.path.split(disk_block_path) |
94 | partition_kname = determine_partition_kname(disk_kname, partnumber) |
95 | volume_path = os.path.join(base_path, partition_kname) |
96 | - devsync_vol = os.path.join(disk_block_path) |
97 | |
98 | elif vol.get('type') == "disk": |
99 | # Get path to block device for disk. Device_id param should refer |
100 | @@ -400,6 +415,13 @@ |
101 | volume_path = os.path.join("/dev/", volgroup.get('name'), |
102 | vol.get('name')) |
103 | |
104 | + elif vol.get('type') == "lvm_volgroup": |
105 | + vg_name = vol.get('name') |
106 | + if not vg_name: |
107 | + raise ValueError("lvm_volgroup missing 'name' value in " |
108 | + "storage config") |
109 | + volume_path = os.path.realpath(os.path.join("/dev/", vg_name)) |
110 | + |
111 | elif vol.get('type') == "dm_crypt": |
112 | # For dm_crypted partitions, unencrypted block device is at |
113 | # /dev/mapper/<dm_name> |
114 | @@ -419,26 +441,38 @@ |
115 | # block devs are in the slaves dir there. Then, those blockdevs can be |
116 | # checked against the kname of the devs in the config for the desired |
117 | # bcache device. This is not very elegant though |
118 | - backing_device_kname = os.path.split(get_path_to_storage_volume( |
119 | - vol.get('backing_device'), storage_config))[-1] |
120 | - sys_path = list(filter(lambda x: backing_device_kname in x, |
121 | - glob.glob("/sys/block/bcache*/slaves/*")))[0] |
122 | - while "bcache" not in os.path.split(sys_path)[-1]: |
123 | - sys_path = os.path.split(sys_path)[0] |
124 | - volume_path = os.path.join("/dev", os.path.split(sys_path)[-1]) |
125 | + backing_device_kname = os.path.split( |
126 | + get_path_to_storage_volume_no_sync(vol.get('backing_device'), |
127 | + storage_config))[-1] |
128 | + bcache_kdevs = list(filter(lambda x: backing_device_kname in x, |
129 | + glob.glob("/sys/block/bcache*/slaves/*"))) |
130 | + if len(bcache_kdevs) > 0: |
131 | + sys_path = bcache_kdevs[0] |
132 | + while "bcache" not in os.path.split(sys_path)[-1]: |
133 | + sys_path = os.path.split(sys_path)[0] |
134 | + volume_path = os.path.join("/dev", os.path.split(sys_path)[-1]) |
135 | + else: |
136 | + # find all bcache configs present in storage config |
137 | + # and use the order of creation to guess the bcache dev id |
138 | + bcache_devs = [id for (id, val) in storage_config.items() |
139 | + if val['type'] == 'bcache'] |
140 | + bcache_idx = bcache_devs.index(vol.get('id')) |
141 | + volume_path = "/dev/bcache%d" % bcache_idx |
142 | LOG.debug('got bcache volume path {}'.format(volume_path)) |
143 | |
144 | + elif vol.get('type') == "format": |
145 | + volume_path = get_path_to_storage_volume_no_sync(vol.get('volume'), |
146 | + storage_config) |
147 | + |
148 | + elif vol.get('type') == "mount": |
149 | + volume_path = get_path_to_storage_volume_no_sync(vol.get('device'), |
150 | + storage_config) |
151 | else: |
152 | raise NotImplementedError("cannot determine the path to storage \ |
153 | volume '%s' with type '%s'" % (volume, vol.get('type'))) |
154 | |
155 | - # sync devices |
156 | - if not devsync_vol: |
157 | - devsync_vol = volume_path |
158 | - devsync(devsync_vol) |
159 | - |
160 | LOG.debug('return volume path {}'.format(volume_path)) |
161 | - return volume_path |
162 | + return os.path.realpath(volume_path) |
163 | |
164 | |
165 | def disk_handler(info, storage_config): |
166 | @@ -653,6 +687,8 @@ |
167 | info.get('id'), device, disk_ptable) |
168 | LOG.debug("partnum: %s offset_sectors: %s length_sectors: %s", |
169 | partnumber, offset_sectors, length_sectors) |
170 | + (out, err) = util.subp(["cat", "/proc/partitions"], capture=True) |
171 | + LOG.debug("partitions:\n%s", out) |
172 | if disk_ptable == "msdos": |
173 | if flag in ["extended", "logical", "primary"]: |
174 | partition_type = flag |
175 | @@ -662,6 +698,15 @@ |
176 | "%ss" % offset_sectors, "%ss" % str(offset_sectors + |
177 | length_sectors)] |
178 | util.subp(cmd, capture=True) |
179 | + |
180 | + (out, err) = util.subp(["cat", "/proc/partitions"], capture=True) |
181 | + LOG.debug("partitions:\n%s", out) |
182 | + |
183 | + if flag in ["prep"]: |
184 | + cmd = ["parted", "--script", disk, "set %d boot on" % partnumber] |
185 | + util.subp(cmd, capture=True) |
186 | + cmd = ["parted", "--script", disk, "set %d prep on" % partnumber] |
187 | + util.subp(cmd, capture=True) |
188 | elif disk_ptable == "gpt": |
189 | if flag and flag in sgdisk_flags: |
190 | typecode = sgdisk_flags[flag] |
191 | @@ -670,6 +715,12 @@ |
192 | cmd = ["sgdisk", "--new", "%s:%s:%s" % (partnumber, offset_sectors, |
193 | length_sectors + offset_sectors), |
194 | "--typecode=%s:%s" % (partnumber, typecode), disk] |
195 | + # Power8 prep partitions must have |
196 | + # GUID=9e1a2d38-c612-4316-aa26-8b49521e5a8b |
197 | + if flag == 'prep': |
198 | + p8guid = '9e1a2d38-c612-4316-aa26-8b49521e5a8b' |
199 | + cmd += ['--partition-guid=%s:%s' % (partnumber, p8guid)] |
200 | + |
201 | util.subp(cmd, capture=True) |
202 | else: |
203 | raise ValueError("parent partition has invalid partition table") |
204 | @@ -730,11 +781,11 @@ |
205 | # Add volume to fstab |
206 | if state['fstab']: |
207 | with open(state['fstab'], "a") as fp: |
208 | - if volume.get('type') in ["raid", "bcache", |
209 | - "disk", "lvm_partition"]: |
210 | + if volume.get('type') in ["lvm_partition"]: |
211 | location = get_path_to_storage_volume(volume.get('id'), |
212 | storage_config) |
213 | - elif volume.get('type') in ["partition", "dm_crypt"]: |
214 | + elif volume.get('type') in ["disk", "raid", "bcache", |
215 | + "partition", "dm_crypt"]: |
216 | location = "UUID=%s" % block.get_volume_uuid(volume_path) |
217 | else: |
218 | raise ValueError("cannot write fstab for volume type '%s'" % |
219 | |
220 | === modified file 'curtin/commands/curthooks.py' |
221 | --- curtin/commands/curthooks.py 2016-06-24 19:27:52 +0000 |
222 | +++ curtin/commands/curthooks.py 2016-07-19 15:04:36 +0000 |
223 | @@ -495,7 +495,8 @@ |
224 | LOG.debug("NOT enabling UEFI nvram updates") |
225 | LOG.debug("Target system may not boot") |
226 | args.append(target) |
227 | - util.subp(args + instdevs, env=env) |
228 | + LOG.debug('install-grub: %s', args) |
229 | + util.subp(args + instdevs, env=env, capture=True) |
230 | |
231 | |
232 | def update_initramfs(target, all_kernels=False): |
233 | |
234 | === modified file 'examples/tests/allindata.yaml' |
235 | --- examples/tests/allindata.yaml 2016-04-14 01:45:54 +0000 |
236 | +++ examples/tests/allindata.yaml 2016-07-19 15:04:36 +0000 |
237 | @@ -6,7 +6,7 @@ |
238 | type: disk |
239 | ptable: gpt |
240 | model: QEMU HARDDISK |
241 | - path: /dev/vdb |
242 | + wwn: '0x0b38330401f57cb6' |
243 | name: main_disk |
244 | grub_device: 1 |
245 | - id: bios_boot_partition |
246 | @@ -40,11 +40,16 @@ |
247 | size: 3GB |
248 | device: sda |
249 | number: 6 # XXX: we really need to stop using id with DiskPartnum |
250 | + - id: sda6 # this partition disables cloud-init's growpart which |
251 | + type: partition # was breaking the partition size test-case. |
252 | + size: 2GB |
253 | + device: sda |
254 | + number: 7 |
255 | - id: sdb |
256 | type: disk |
257 | ptable: gpt |
258 | model: QEMU HARDDISK |
259 | - path: /dev/vdc |
260 | + wwn: '0x0409203126fe097c' |
261 | name: second_disk |
262 | - id: sdb1 |
263 | type: partition |
264 | @@ -66,7 +71,7 @@ |
265 | type: disk |
266 | ptable: gpt |
267 | model: QEMU HARDDISK |
268 | - path: /dev/vdd |
269 | + wwn: '0x5c1c48aa32255436' |
270 | name: third_disk |
271 | - id: sdc1 |
272 | type: partition |
273 | @@ -88,7 +93,7 @@ |
274 | type: disk |
275 | ptable: gpt |
276 | model: QEMU HARDDISK |
277 | - path: /dev/vde |
278 | + wwn: '0x28984e887e4426c8' |
279 | name: fourth_disk |
280 | - id: sdd1 |
281 | type: partition |
282 | |
283 | === modified file 'examples/tests/basic.yaml' |
284 | --- examples/tests/basic.yaml 2016-04-04 16:03:32 +0000 |
285 | +++ examples/tests/basic.yaml 2016-07-19 15:04:36 +0000 |
286 | @@ -6,7 +6,7 @@ |
287 | type: disk |
288 | ptable: msdos |
289 | model: QEMU HARDDISK |
290 | - path: /dev/vdb |
291 | + wwn: '0x2b873dea52e628f8' |
292 | name: main_disk |
293 | wipe: superblock |
294 | grub_device: true |
295 | @@ -39,12 +39,12 @@ |
296 | device: sda2_home |
297 | - id: sparedisk_id |
298 | type: disk |
299 | - path: /dev/vdc |
300 | + wwn: '0x4c5a435e6aff3e28' |
301 | name: sparedisk |
302 | wipe: superblock |
303 | - id: btrfs_disk_id |
304 | type: disk |
305 | - path: /dev/vdd |
306 | + wwn: '0x74464deb314c182b' |
307 | name: btrfs_volume |
308 | wipe: superblock |
309 | - id: btrfs_disk_fmt_id |
310 | @@ -57,7 +57,7 @@ |
311 | device: btrfs_disk_fmt_id |
312 | - id: pnum_disk |
313 | type: disk |
314 | - path: /dev/vde |
315 | + wwn: '0x5b092fb53f5e5816' |
316 | name: pnum_disk |
317 | wipe: superblock |
318 | ptable: gpt |
319 | |
320 | === modified file 'examples/tests/bcache_basic.yaml' |
321 | --- examples/tests/bcache_basic.yaml 2016-04-04 16:03:32 +0000 |
322 | +++ examples/tests/bcache_basic.yaml 2016-07-19 15:04:36 +0000 |
323 | @@ -4,14 +4,14 @@ |
324 | - id: id_rotary0 |
325 | type: disk |
326 | name: rotary0 |
327 | - path: /dev/vdb |
328 | + wwn: '0x116d1a473ea34188' |
329 | ptable: msdos |
330 | wipe: superblock |
331 | grub_device: true |
332 | - id: id_ssd0 |
333 | type: disk |
334 | name: ssd0 |
335 | - path: /dev/vdc |
336 | + wwn: '0x594c150c789537ea' |
337 | wipe: superblock |
338 | - id: id_rotary0_part1 |
339 | type: partition |
340 | |
341 | === modified file 'examples/tests/lvm.yaml' |
342 | --- examples/tests/lvm.yaml 2016-04-04 16:03:32 +0000 |
343 | +++ examples/tests/lvm.yaml 2016-07-19 15:04:36 +0000 |
344 | @@ -6,7 +6,7 @@ |
345 | type: disk |
346 | ptable: msdos |
347 | model: QEMU HARDDISK |
348 | - path: /dev/vdb |
349 | + wwn: '0x770f31415cf36ac3' |
350 | name: main_disk |
351 | - id: sda1 |
352 | type: partition |
353 | |
354 | === modified file 'examples/tests/mdadm_bcache.yaml' |
355 | --- examples/tests/mdadm_bcache.yaml 2016-04-17 22:35:20 +0000 |
356 | +++ examples/tests/mdadm_bcache.yaml 2016-07-19 15:04:36 +0000 |
357 | @@ -7,7 +7,7 @@ |
358 | type: disk |
359 | ptable: gpt |
360 | model: QEMU HARDDISK |
361 | - path: /dev/vdb |
362 | + wwn: '0x6f1d6983311e4c9d' |
363 | name: main_disk |
364 | - id: bios_boot_partition |
365 | type: partition |
366 | @@ -54,13 +54,13 @@ |
367 | - id: sdb |
368 | type: disk |
369 | model: QEMU HARDDISK |
370 | - path: /dev/vdc |
371 | + wwn: '0x11c342997b026e09' |
372 | name: second_disk |
373 | - id: sdc |
374 | type: disk |
375 | ptable: gpt |
376 | model: QEMU HARDDISK |
377 | - path: /dev/vdd |
378 | + wwn: '0x14ae3095379d3350' |
379 | name: third_disk |
380 | - id: sdc1 |
381 | type: partition |
382 | |
383 | === modified file 'examples/tests/mdadm_bcache_complex.yaml' |
384 | --- examples/tests/mdadm_bcache_complex.yaml 2015-11-25 13:45:53 +0000 |
385 | +++ examples/tests/mdadm_bcache_complex.yaml 2016-07-19 15:04:36 +0000 |
386 | @@ -6,7 +6,7 @@ |
387 | type: disk |
388 | ptable: gpt |
389 | model: QEMU HARDDISK |
390 | - path: /dev/vdb |
391 | + wwn: '0x28c124ad5eb83f0a' |
392 | name: main_disk |
393 | - id: bios_boot_partition |
394 | type: partition |
395 | @@ -44,13 +44,13 @@ |
396 | - id: sdb |
397 | type: disk |
398 | model: QEMU HARDDISK |
399 | - path: /dev/vdc |
400 | + wwn: '0x605e49ec47bd5b38' |
401 | name: second_disk |
402 | - id: sdc |
403 | type: disk |
404 | ptable: gpt |
405 | model: QEMU HARDDISK |
406 | - path: /dev/vdd |
407 | + wwn: '0x58521e6a73165fc8' |
408 | name: third_disk |
409 | - id: sdc1 |
410 | type: partition |
411 | |
412 | === modified file 'examples/tests/mirrorboot.yaml' |
413 | --- examples/tests/mirrorboot.yaml 2016-04-04 16:03:32 +0000 |
414 | +++ examples/tests/mirrorboot.yaml 2016-07-19 15:04:36 +0000 |
415 | @@ -6,7 +6,7 @@ |
416 | type: disk |
417 | ptable: gpt |
418 | model: QEMU HARDDISK |
419 | - path: /dev/vdb |
420 | + wwn: '0x3ffd076474f457a2' |
421 | name: main_disk |
422 | grub_device: 1 |
423 | - id: bios_boot_partition |
424 | @@ -22,7 +22,7 @@ |
425 | type: disk |
426 | ptable: gpt |
427 | model: QEMU HARDDISK |
428 | - path: /dev/vdc |
429 | + wwn: '0x075a4ceb60d0299f' |
430 | name: second_disk |
431 | - id: sdb1 |
432 | type: partition |
433 | |
434 | === modified file 'examples/tests/nvme.yaml' |
435 | --- examples/tests/nvme.yaml 2016-04-04 18:20:33 +0000 |
436 | +++ examples/tests/nvme.yaml 2016-07-19 15:04:36 +0000 |
437 | @@ -5,7 +5,7 @@ |
438 | - id: main_disk |
439 | type: disk |
440 | ptable: gpt |
441 | - path: /dev/vdb |
442 | + serial: 'IPR-0 1234567890' |
443 | name: main_disk |
444 | wipe: superblock |
445 | grub_device: true |
446 | @@ -44,9 +44,9 @@ |
447 | device: main_disk_home |
448 | - id: nvme_disk |
449 | type: disk |
450 | - path: /dev/nvme0n1 |
451 | - name: nvme_disk |
452 | - wipe: superblock |
453 | + path: /dev/nvme0n1 # nvme's require a serial but the serial isn't |
454 | + name: nvme_disk # exported into udev until kernel > 4.4 so we |
455 | + wipe: superblock # use a path to find the device |
456 | ptable: gpt |
457 | - id: nvme_disk_p1 |
458 | type: partition |
459 | @@ -63,10 +63,10 @@ |
460 | device: nvme_disk |
461 | - id: nvme_disk2 |
462 | type: disk |
463 | - path: /dev/nvme1n1 |
464 | - wipe: superblock |
465 | + path: /dev/nvme1n1 # nvme's require a serial but the serial isn't |
466 | + name: second_nvme # exported into udev until kernel > 4.4 so we |
467 | + wipe: superblock # use a path to find the device |
468 | ptable: msdos |
469 | - name: second_nvme |
470 | - id: nvme_disk2_p1 |
471 | type: partition |
472 | size: 1GB |
473 | |
474 | === modified file 'examples/tests/raid10boot.yaml' |
475 | --- examples/tests/raid10boot.yaml 2016-04-04 16:03:32 +0000 |
476 | +++ examples/tests/raid10boot.yaml 2016-07-19 15:04:36 +0000 |
477 | @@ -6,7 +6,7 @@ |
478 | type: disk |
479 | ptable: gpt |
480 | model: QEMU HARDDISK |
481 | - path: /dev/vdb |
482 | + wwn: '0x21557fcd125b2cb7' |
483 | name: main_disk |
484 | grub_device: 1 |
485 | - id: bios_boot_partition |
486 | @@ -22,7 +22,7 @@ |
487 | type: disk |
488 | ptable: gpt |
489 | model: QEMU HARDDISK |
490 | - path: /dev/vdc |
491 | + wwn: '0x7ade250975512b10' |
492 | name: second_disk |
493 | - id: sdb1 |
494 | type: partition |
495 | @@ -32,7 +32,7 @@ |
496 | type: disk |
497 | ptable: gpt |
498 | model: QEMU HARDDISK |
499 | - path: /dev/vdd |
500 | + wwn: '0x2e902b105c4f6724' |
501 | name: third_disk |
502 | - id: sdc1 |
503 | type: partition |
504 | @@ -42,7 +42,7 @@ |
505 | type: disk |
506 | ptable: gpt |
507 | model: QEMU HARDDISK |
508 | - path: /dev/vde |
509 | + wwn: '0x5bbd663336493174' |
510 | name: fourth_disk |
511 | - id: sdd1 |
512 | type: partition |
513 | |
514 | === modified file 'examples/tests/raid5bcache.yaml' |
515 | --- examples/tests/raid5bcache.yaml 2016-04-04 16:03:32 +0000 |
516 | +++ examples/tests/raid5bcache.yaml 2016-07-19 15:04:36 +0000 |
517 | @@ -6,31 +6,31 @@ |
518 | model: QEMU HARDDISK |
519 | name: sda |
520 | ptable: msdos |
521 | - path: /dev/vdb |
522 | + wwn: '0x603022fd2a1204a3' |
523 | type: disk |
524 | wipe: superblock |
525 | - id: sdb |
526 | model: QEMU HARDDISK |
527 | name: sdb |
528 | - path: /dev/vdc |
529 | + wwn: '0x6ef079fd51bc1aa3' |
530 | type: disk |
531 | wipe: superblock |
532 | - id: sdc |
533 | model: QEMU HARDDISK |
534 | name: sdc |
535 | - path: /dev/vdd |
536 | + wwn: '0x43ac532e78e70785' |
537 | type: disk |
538 | wipe: superblock |
539 | - id: sdd |
540 | model: QEMU HARDDISK |
541 | name: sdd |
542 | - path: /dev/vde |
543 | + wwn: '0x289f09306d0d11a7' |
544 | type: disk |
545 | wipe: superblock |
546 | - id: sde |
547 | model: QEMU HARDDISK |
548 | name: sde |
549 | - path: /dev/vdf |
550 | + wwn: '0x0e1a6948357d0b01' |
551 | type: disk |
552 | wipe: superblock |
553 | - devices: |
554 | |
555 | === modified file 'examples/tests/raid5boot.yaml' |
556 | --- examples/tests/raid5boot.yaml 2016-04-04 16:03:32 +0000 |
557 | +++ examples/tests/raid5boot.yaml 2016-07-19 15:04:36 +0000 |
558 | @@ -6,7 +6,7 @@ |
559 | type: disk |
560 | ptable: gpt |
561 | model: QEMU HARDDISK |
562 | - path: /dev/vdb |
563 | + wwn: '0x40de6e7235387803' |
564 | name: main_disk |
565 | grub_device: 1 |
566 | - id: bios_boot_partition |
567 | @@ -22,7 +22,7 @@ |
568 | type: disk |
569 | ptable: gpt |
570 | model: QEMU HARDDISK |
571 | - path: /dev/vdc |
572 | + wwn: '0x11c93eb225551234' |
573 | name: second_disk |
574 | - id: sdb1 |
575 | type: partition |
576 | @@ -32,7 +32,7 @@ |
577 | type: disk |
578 | ptable: gpt |
579 | model: QEMU HARDDISK |
580 | - path: /dev/vdd |
581 | + wwn: '0x3878752f02f41fc5' |
582 | name: third_disk |
583 | - id: sdc1 |
584 | type: partition |
585 | |
586 | === modified file 'examples/tests/raid6boot.yaml' |
587 | --- examples/tests/raid6boot.yaml 2016-04-04 16:03:32 +0000 |
588 | +++ examples/tests/raid6boot.yaml 2016-07-19 15:04:36 +0000 |
589 | @@ -6,7 +6,7 @@ |
590 | type: disk |
591 | ptable: gpt |
592 | model: QEMU HARDDISK |
593 | - path: /dev/vdb |
594 | + wwn: '0x2b2518ab3256382e' |
595 | name: main_disk |
596 | grub_device: 1 |
597 | - id: bios_boot_partition |
598 | @@ -22,7 +22,7 @@ |
599 | type: disk |
600 | ptable: gpt |
601 | model: QEMU HARDDISK |
602 | - path: /dev/vdc |
603 | + wwn: '0x4a6a081952db4963' |
604 | name: second_disk |
605 | - id: sdb1 |
606 | type: partition |
607 | @@ -32,7 +32,7 @@ |
608 | type: disk |
609 | ptable: gpt |
610 | model: QEMU HARDDISK |
611 | - path: /dev/vdd |
612 | + wwn: '0x7e6e04e903cf42e9' |
613 | name: third_disk |
614 | - id: sdc1 |
615 | type: partition |
616 | @@ -42,7 +42,7 @@ |
617 | type: disk |
618 | ptable: gpt |
619 | model: QEMU HARDDISK |
620 | - path: /dev/vde |
621 | + wwn: '0x7fa0135968db1f54' |
622 | name: fourth_disk |
623 | - id: sdd1 |
624 | type: partition |
625 | |
626 | === modified file 'examples/tests/uefi_basic.yaml' |
627 | --- examples/tests/uefi_basic.yaml 2016-04-07 21:03:58 +0000 |
628 | +++ examples/tests/uefi_basic.yaml 2016-07-19 15:04:36 +0000 |
629 | @@ -4,7 +4,7 @@ |
630 | - id: id_disk0 |
631 | type: disk |
632 | name: main_disk |
633 | - path: /dev/vdb |
634 | + wwn: '0x3fd832a52c877c49' |
635 | ptable: gpt |
636 | wipe: superblock |
637 | grub_device: true |
638 | @@ -22,6 +22,12 @@ |
639 | size: 3G |
640 | type: partition |
641 | wipe: superblock |
642 | + - device: id_disk0 |
643 | + id: id_disk0_part3 |
644 | + number: 3 |
645 | + size: 2G |
646 | + type: partition |
647 | + wipe: superblock |
648 | - fstype: fat32 |
649 | id: id_efi_format |
650 | label: efi |
651 | |
652 | === modified file 'examples/tests/vlan_network.yaml' |
653 | --- examples/tests/vlan_network.yaml 2016-05-17 20:04:00 +0000 |
654 | +++ examples/tests/vlan_network.yaml 2016-07-19 15:04:36 +0000 |
655 | @@ -8,6 +8,8 @@ |
656 | - address: 10.245.168.16/21 |
657 | dns_nameservers: |
658 | - 10.245.168.2 |
659 | + dns_search: |
660 | + - dellstack |
661 | gateway: 10.245.168.1 |
662 | type: static |
663 | type: physical |
664 | |
665 | === modified file 'helpers/common' |
666 | --- helpers/common 2016-04-06 19:49:42 +0000 |
667 | +++ helpers/common 2016-07-19 15:04:36 +0000 |
668 | @@ -707,6 +707,8 @@ |
669 | chroot "$mp" env DEBIAN_FRONTEND=noninteractive sh -ec ' |
670 | pkg=$1; shift; |
671 | dpkg-reconfigure "$pkg" |
672 | + initrds=$(ls /boot/initrd.img* || true) |
673 | + [ -z "$initrds" ] && update-initramfs -u |
674 | update-grub |
675 | for d in "$@"; do grub-install "$d" || exit; done' \ |
676 | -- "${grub_name}" "${grubdevs[@]}" </dev/null || |
677 | |
678 | === modified file 'tests/vmtests/__init__.py' |
679 | --- tests/vmtests/__init__.py 2016-06-23 14:26:39 +0000 |
680 | +++ tests/vmtests/__init__.py 2016-07-19 15:04:36 +0000 |
681 | @@ -15,6 +15,8 @@ |
682 | import curtin.net as curtin_net |
683 | import curtin.util as util |
684 | |
685 | +from curtin.commands.block_meta import (extract_storage_ordered_dict, |
686 | + get_path_to_storage_volume_no_sync) |
687 | from curtin.commands.install import INSTALL_PASS_MSG |
688 | |
689 | from .image_sync import query as imagesync_query |
690 | @@ -348,6 +350,111 @@ |
691 | stdout=DEVNULL, stderr=subprocess.STDOUT) |
692 | |
693 | |
694 | +def generate_storage_scripts(storage_config): |
695 | + bcache_fs = "ls /sys/fs/bcache > {collectd}/ls_bcache" |
696 | + bcache_cache_mode_map = textwrap.dedent(""" |
697 | + for bfile in $(find /sys/class/block/bcache*/bcache); do |
698 | + bcachedev=$(basename `dirname $bfile`) |
699 | + cache_mode=$(cat $bfile/cache_mode) |
700 | + echo $bcachedev:$cache_mode >> {collectd}/bcache_cache_mode_map |
701 | + done""") |
702 | + bcache_super_cache = ("bcache-super-show {cachedev} " |
703 | + "> {collectd}/bcache_super_cache_{cachebase}") |
704 | + bcache_super_back = ("bcache-super-show {backdev} " |
705 | + "> {collectd}/bcache_super_backing_{backbase}") |
706 | + bcache_backing_map = textwrap.dedent(""" |
707 | + for bfile in $(find /sys/class/block/bcache*/bcache); do |
708 | + bcachedev=$(basename `dirname $bfile`) |
709 | + cfile=`readlink $bfile` |
710 | + cdev=$(basename `dirname $cfile`) |
711 | + echo $bcachedev $cdev >> {collectd}/bcache_backing_map |
712 | + done""") |
713 | + bcache_blkid = ("blkid -o export {devpath} > " |
714 | + "{collectd}/bcache_blkid_{backbase}") |
715 | + blkid_tpl = "blkid -o export {devpath} > {collectd}/blkid_{basename}" |
716 | + block_size = textwrap.dedent(""" |
717 | + kname=$(basename `readlink {devpath}`) |
718 | + q=/sys/class/block/$kname/queue |
719 | + cat $q/logical_block_size > {collectd}/lbs_$kname |
720 | + cat $q/physical_block_size > {collectd}/pbs_$kname |
721 | + blockdev --getbsz /dev/$kname > {collectd}/blockdev_getbsz_$kname |
722 | + blockdev --getpbsz /dev/$kname > {collectd}/blockdev_getpbsz_$kname |
723 | + blockdev --getss /dev/$kname > {collectd}/blockdev_getss_$kname |
724 | + blockdev --getsz /dev/$kname > {collectd}/blockdev_getsz_$kname""") |
725 | + parted_tpl = ("parted --script {devpath} unit B print > " |
726 | + "{collectd}/parted_{basename}") |
727 | + pvdisplay = ("pvdisplay -C --separator = -o vg_name,pv_name" |
728 | + " --noheadings > {collectd}/pvs") |
729 | + lvdisplay = ("lvdisplay -C --separator = -o lv_name,vg_name" |
730 | + " --noheadings > {collectd}/lvs") |
731 | + dmsetup = ("dmsetup info --noheading -c -o vg_name,lv_name,blkdevname" |
732 | + " > {collectd}/dmsetup_info") |
733 | + mdadm_status = "mdadm --detail --scan > {collectd}/mdadm_status" |
734 | + mdadm_active = ("mdadm --detail --scan | grep -c ubuntu" |
735 | + " > {collectd}/mdadm_active") |
736 | + proc_mdstat = "cat /proc/mdstat > {collectd}/proc_mdstat" |
737 | + format_tpl = "{dumpfs} {devpath} > {collectd}/format_{basename}" |
738 | + |
739 | + script_templates = { |
740 | + 'disk': [blkid_tpl, parted_tpl, block_size], |
741 | + 'partition': [blkid_tpl], |
742 | + 'lvm_volgroup': [pvdisplay, dmsetup], |
743 | + 'lvm_partition': [lvdisplay], |
744 | + 'raid': [mdadm_status, mdadm_active, proc_mdstat], |
745 | + 'bcache': [bcache_fs, bcache_cache_mode_map, bcache_backing_map, |
746 | + bcache_super_cache, bcache_super_back, bcache_blkid], |
747 | + 'format': [blkid_tpl, format_tpl], |
748 | + 'mount': [], |
749 | + } |
750 | + |
751 | + format_dumpfs = { |
752 | + 'fat32': '/bin/true', |
753 | + 'ext2': 'dumpe2fs -h', |
754 | + 'ext3': 'dumpe2fs -h', |
755 | + 'ext4': 'dumpe2fs -h', |
756 | + 'btrfs': 'btrfs-show-super', |
757 | + 'xfs': 'xfs_metadump', |
758 | + } |
759 | + |
760 | + storage_scripts = [] |
761 | + for item_id, command in storage_config.items(): |
762 | + templates = script_templates.get(command['type'], []) |
763 | + devpath = get_path_to_storage_volume_no_sync(item_id, |
764 | + storage_config) |
765 | + cachedev = storage_config.get(command.get('cache_device', '')) |
766 | + cachebase = '' |
767 | + if cachedev: |
768 | + cachedev = get_path_to_storage_volume_no_sync(cachedev.get('id'), |
769 | + storage_config) |
770 | + cachebase = os.path.basename(cachedev) |
771 | + |
772 | + backdev = storage_config.get(command.get('backing_device', '')) |
773 | + backbase = '' |
774 | + if backdev: |
775 | + backdev = get_path_to_storage_volume_no_sync(backdev.get('id'), |
776 | + storage_config) |
777 | + backbase = os.path.basename(backdev) |
778 | + |
779 | + basename = os.path.basename(devpath) |
780 | + for template in templates: |
781 | + data = { |
782 | + 'backbase': backbase, |
783 | + 'backdev': backdev, |
784 | + 'cachebase': cachebase, |
785 | + 'cachedev': cachedev, |
786 | + 'collectd': 'OUTPUT_COLLECT_D', |
787 | + 'devpath': devpath, |
788 | + 'basename': basename, |
789 | + 'dumpfs': format_dumpfs.get(command.get('fstype', ''), '') |
790 | + } |
791 | + print(template) |
792 | + script = textwrap.dedent(template.format(**data).rstrip() + " |:") |
793 | + logger.debug('Adding storage script: %s', script) |
794 | + storage_scripts.append(script) |
795 | + |
796 | + return storage_scripts |
797 | + |
798 | + |
799 | class VMBaseClass(TestCase): |
800 | __test__ = False |
801 | arch_skip = [] |
802 | @@ -355,11 +462,12 @@ |
803 | collect_scripts = [] |
804 | conf_file = "examples/tests/basic.yaml" |
805 | disk_block_size = 512 |
806 | - disk_driver = 'virtio-blk' |
807 | + disk_driver = 'scsi-hd' |
808 | disk_to_check = {} |
809 | extra_disks = [] |
810 | extra_kern_args = None |
811 | fstab_expected = {} |
812 | + generate_storage_scripts = True |
813 | image_store_class = ImageStore |
814 | install_timeout = 3000 |
815 | interactive = False |
816 | @@ -395,10 +503,55 @@ |
817 | logger.debug("Image %s\n boot=%s\n kernel=%s\n initrd=%s\n" |
818 | " tarball=%s\n", img_verstr, boot_img, boot_kernel, |
819 | boot_initrd, tarball) |
820 | + |
821 | + try: |
822 | + if cls.arch == 'ppc64el': |
823 | + cls.conf_yaml = inject_prep_partition(cls.conf_file) |
824 | + else: |
825 | + cls.conf_yaml = yaml.load(util.load_file(cls.conf_file)) |
826 | + cls.storage_config_dict = ( |
827 | + extract_storage_ordered_dict(cls.conf_yaml)) |
828 | + except ValueError: |
829 | + cls.storage_config_dict = {} |
830 | + logger.warning('No storage config in %s, no storage scripts', |
831 | + cls.conf_file) |
832 | + pass |
833 | + |
834 | + # generate storage scripts as needed |
835 | + storage_scripts = [] |
836 | + storage_scripts_modified = [] |
837 | + cls.storage_volume_dict = {} |
838 | + if cls.generate_storage_scripts: |
839 | + logger.info('Generating storage scripts from %s', cls.conf_file) |
840 | + storage_scripts = generate_storage_scripts(cls.storage_config_dict) |
841 | + |
842 | + # map storage config to volume paths |
843 | + scd = cls.storage_config_dict |
844 | + for item_id, command in cls.storage_config_dict.items(): |
845 | + devpath = get_path_to_storage_volume_no_sync(item_id, scd) |
846 | + cls.storage_volume_dict.update({item_id: devpath}) |
847 | + |
848 | + # precise does not have virtio-scsi, so we use wwn to predict the |
849 | + # path name, but know that it will be a virtio device with serial |
850 | + if not cls.has_virtio_scsi() and 'storage' in cls.conf_yaml: |
851 | + logger.info('updating script paths for precise workaround') |
852 | + for script in storage_scripts: |
853 | + script = script.replace("wwn-", "virtio-") |
854 | + storage_scripts_modified.append(script) |
855 | + logger.info('updating volume devpaths for precise workaround') |
856 | + for item_id, devpath in cls.storage_volume_dict.items(): |
857 | + new_path = devpath.replace('wwn-', 'virtio-') |
858 | + cls.storage_volume_dict.update({item_id: new_path}) |
859 | + |
860 | + # always use the class specific scripts |
861 | + collect_scripts = cls.collect_scripts |
862 | + # append any generated or modified scripts as well |
863 | + collect_scripts += storage_scripts + storage_scripts_modified |
864 | + |
865 | # set up tempdir |
866 | cls.td = TempDir( |
867 | name=cls.__name__, |
868 | - user_data=generate_user_data(collect_scripts=cls.collect_scripts)) |
869 | + user_data=generate_user_data(collect_scripts=collect_scripts)) |
870 | logger.info('Using tempdir: %s , Image: %s', cls.td.tmpdir, |
871 | img_verstr) |
872 | cls.install_log = os.path.join(cls.td.logs, 'install-serial.log') |
873 | @@ -451,18 +604,27 @@ |
874 | |
875 | # build disk arguments |
876 | disks = [] |
877 | - sc = util.load_file(cls.conf_file) |
878 | - storage_config = yaml.load(sc).get('storage', {}).get('config', {}) |
879 | - cls.disk_wwns = ["wwn=%s" % x.get('wwn') for x in storage_config |
880 | + |
881 | + # modify config for precise's lack of virtio-scsi |
882 | + if not cls.has_virtio_scsi() and 'storage' in cls.conf_yaml: |
883 | + logger.info('Precise detected, modifying storage config for test') |
884 | + cls.conf_yaml = switch_wwn_for_serial_disks(cls.conf_yaml) |
885 | + cls.storage_config_dict = ( |
886 | + extract_storage_ordered_dict(cls.conf_yaml)) |
887 | + |
888 | + cls.disk_wwns = ["wwn=%s" % x.get('wwn') |
889 | + for x in cls.storage_config_dict.values() |
890 | if 'wwn' in x] |
891 | cls.disk_serials = ["serial=%s" % x.get('serial') |
892 | - for x in storage_config if 'serial' in x] |
893 | + for x in cls.storage_config_dict.values() |
894 | + if 'serial' in x] |
895 | |
896 | target_disk = "{}:{}:{}:{}:".format(cls.td.target_disk, |
897 | "", |
898 | cls.disk_driver, |
899 | cls.disk_block_size) |
900 | - if len(cls.disk_wwns): |
901 | + |
902 | + if len(cls.disk_wwns) and cls.disk_driver == "scsi-hd": |
903 | target_disk += cls.disk_wwns[0] |
904 | |
905 | if len(cls.disk_serials): |
906 | @@ -476,7 +638,7 @@ |
907 | extra_disk = '{}:{}:{}:{}:'.format(dpath, disk_sz, |
908 | cls.disk_driver, |
909 | cls.disk_block_size) |
910 | - if len(cls.disk_wwns): |
911 | + if len(cls.disk_wwns) and cls.disk_driver == "scsi-hd": |
912 | w_index = disk_no + 1 |
913 | if w_index < len(cls.disk_wwns): |
914 | extra_disk += cls.disk_wwns[w_index] |
915 | @@ -496,8 +658,23 @@ |
916 | "serial=nvme-%d" % disk_no) |
917 | disks.extend(['--disk', nvme_disk]) |
918 | |
919 | + configs = [cls.conf_file] |
920 | + # write out modified config yaml if needed |
921 | + if cls.arch.startswith('ppc64') or not cls.has_virtio_scsi(): |
922 | + if 'storage' in cls.conf_yaml: |
923 | + conf_name = "modified-" + os.path.basename(cls.conf_file) |
924 | + modified_conf_file = os.path.join(cls.td.install, conf_name) |
925 | + logger.info('storage config modified for testing') |
926 | + logger.debug('modified_config:\n%s', cls.conf_yaml) |
927 | + with open(modified_conf_file, "w") as fp: |
928 | + fp.write(yaml.dump(cls.conf_yaml, default_flow_style=False, |
929 | + indent=4)) |
930 | + |
931 | + shutil.copyfile(modified_conf_file, |
932 | + os.path.join(cls.td.logs, conf_name)) |
933 | + configs = [modified_conf_file] |
934 | + |
935 | # proxy config |
936 | - configs = [cls.conf_file] |
937 | proxy = get_apt_proxy() |
938 | if get_apt_proxy is not None: |
939 | proxy_config = os.path.join(cls.td.install, 'proxy.cfg') |
940 | @@ -577,7 +754,7 @@ |
941 | for (disk_no, disk) in enumerate([cls.td.target_disk]): |
942 | disk = '--disk={},driver={},format={},{}'.format( |
943 | disk, cls.disk_driver, TARGET_IMAGE_FORMAT, bsize_args) |
944 | - if len(cls.disk_wwns): |
945 | + if len(cls.disk_wwns) and cls.disk_driver == "scsi-hd": |
946 | disk += ",%s" % cls.disk_wwns[0] |
947 | if len(cls.disk_serials): |
948 | disk += ",%s" % cls.disk_serials[0] |
949 | @@ -589,7 +766,7 @@ |
950 | dpath = os.path.join(cls.td.disks, 'extra_disk_%d.img' % disk_no) |
951 | disk = '--disk={},driver={},format={},{}'.format( |
952 | dpath, cls.disk_driver, TARGET_IMAGE_FORMAT, bsize_args) |
953 | - if len(cls.disk_wwns): |
954 | + if len(cls.disk_wwns) and cls.disk_driver == "scsi-hd": |
955 | w_index = disk_no + 1 |
956 | if w_index < len(cls.disk_wwns): |
957 | disk += ",%s" % cls.disk_wwns[w_index] |
958 | @@ -697,6 +874,14 @@ |
959 | keep_fail=KEEP_DATA['fail']) |
960 | |
961 | @classmethod |
962 | + def has_virtio_scsi(cls): |
963 | + if cls.release == "precise" and cls.krel in [None, "precise"]: |
964 | + cls.disk_driver = "virtio-blk" |
965 | + return False |
966 | + |
967 | + return True |
968 | + |
969 | + @classmethod |
970 | def expected_interfaces(cls): |
971 | expected = [] |
972 | interfaces = {} |
973 | @@ -745,7 +930,56 @@ |
974 | return boot_log_wrap(cls.__name__, myboot, cmd, console_log, timeout, |
975 | purpose) |
976 | |
977 | + @classmethod |
978 | + def human2bytes(self, size): |
979 | + return util.human2bytes(size) |
980 | + |
981 | # Misc functions that are useful for many tests |
982 | + def _resolve_volpath(self, vol_path, ls_input): |
983 | + part_kname = os.path.basename(vol_path) |
984 | + ls_data = None |
985 | + # if needed, look up storage path in ls_id file which maps |
986 | + # volume paths (/dev/disk/*) to knames (/dev/XXX) |
987 | + if vol_path.startswith('/dev/disk/'): |
988 | + with open(os.path.join(self.td.collect, ls_input)) as fp: |
989 | + ls_data = fp.read() |
990 | + |
991 | + self.assertIn(part_kname, ls_data) |
992 | + # extract the mapping |
993 | + # <vol_base> -> ../../<kernel name> |
994 | + # wwn-0x2b873dea52e628f8-part1 -> ../../sda1 |
995 | + id_list = ls_data.split() |
996 | + partidx = id_list.index(part_kname) |
997 | + part_kname = id_list[partidx + 2].replace("../../", "") |
998 | + |
999 | + return (part_kname, ls_data) |
1000 | + |
1001 | + def _kname_to_fsuuid(self, kname, ls_input): |
1002 | + |
1003 | + # prepend ../../ path |
1004 | + kname_base = "../../" + os.path.basename(kname) |
1005 | + ls_data = None |
1006 | + uuid = None |
1007 | + with open(os.path.join(self.td.collect, ls_input)) as fp: |
1008 | + ls_data = fp.read() |
1009 | + |
1010 | + self.assertIn(kname_base, ls_data) |
1011 | + # extract the mapping |
1012 | + # <uuid> -> ../../<kernel basename> |
1013 | + id_list = ls_data.split() |
1014 | + partidx = id_list.index(kname_base) |
1015 | + uuid = id_list[partidx - 2] |
1016 | + |
1017 | + return (uuid, ls_data) |
1018 | + |
1019 | + def _test_proc_partitions(self, kname): |
1020 | + # with a kname, look at proc_partitions for the size |
1021 | + with open(os.path.join(self.td.collect, 'proc_partitions')) as fp: |
1022 | + proc_partitions = fp.read() |
1023 | + |
1024 | + self.assertIn(kname, proc_partitions) |
1025 | + return proc_partitions |
1026 | + |
1027 | def output_files_exist(self, files): |
1028 | for f in files: |
1029 | self.assertTrue(os.path.exists(os.path.join(self.td.collect, f))) |
1030 | @@ -769,9 +1003,32 @@ |
1031 | # Python 2. |
1032 | self.assertRegexpMatches(s, r) |
1033 | |
1034 | + def get_bcache_data(self, bcache_file): |
1035 | + """ parse bcache data into dict """ |
1036 | + data = util.load_file(os.path.join(self.td.collect, bcache_file)) |
1037 | + ret = {} |
1038 | + for line in data.splitlines(): |
1039 | + if line == "": |
1040 | + continue |
1041 | + val = line.split() |
1042 | + ret[val[0]] = val[1] |
1043 | + return ret |
1044 | + |
1045 | + def get_bcache_backing_map(self): |
1046 | + """ parse backing device to bcache name into dict. """ |
1047 | + bcache_file = "bcache_backing_map" |
1048 | + data = util.load_file(os.path.join(self.td.collect, bcache_file)) |
1049 | + ret = {} |
1050 | + for line in data.splitlines(): |
1051 | + if line == "": |
1052 | + continue |
1053 | + val = line.split() |
1054 | + ret[val[1]] = val[0] |
1055 | + return ret |
1056 | + |
1057 | def get_blkid_data(self, blkid_file): |
1058 | - with open(os.path.join(self.td.collect, blkid_file)) as fp: |
1059 | - data = fp.read() |
1060 | + """ parse blkid data into dict """ |
1061 | + data = util.load_file(os.path.join(self.td.collect, blkid_file)) |
1062 | ret = {} |
1063 | for line in data.splitlines(): |
1064 | if line == "": |
1065 | @@ -780,6 +1037,32 @@ |
1066 | ret[val[0]] = val[1] |
1067 | return ret |
1068 | |
1069 | + def get_parted_data(self, parted_file): |
1070 | + """ parse parted data into dict """ |
1071 | + data = util.load_file(os.path.join(self.td.collect, parted_file)) |
1072 | + ret = {} |
1073 | + splitdata = data.split("\n\n") |
1074 | + |
1075 | + for line in splitdata[0].splitlines(): |
1076 | + if line == "": |
1077 | + continue |
1078 | + val = line.split(':') |
1079 | + ret[val[0].strip()] = val[1].strip() |
1080 | + |
1081 | + partitions = [] |
1082 | + if len(splitdata) > 1: |
1083 | + # populate fields, parted prints the column names |
1084 | + header = splitdata[1].splitlines()[0] |
1085 | + # transform strings into suitable keys |
1086 | + fields = header.lower().replace('file system', |
1087 | + 'filesystem').split() |
1088 | + partitions = [] |
1089 | + for partline in splitdata[1].splitlines()[1:]: |
1090 | + partitions.append(dict(zip(fields, partline.split()))) |
1091 | + |
1092 | + ret['partitions'] = partitions |
1093 | + return ret |
1094 | + |
1095 | def test_fstab(self): |
1096 | if (os.path.exists(self.td.collect + "fstab") and |
1097 | self.fstab_expected is not None): |
1098 | @@ -814,6 +1097,257 @@ |
1099 | self.assertNotIn("/etc/network/interfaces.d/eth0.cfg", |
1100 | interfacesd.split("\n")) |
1101 | |
1102 | + def test_output_files_exist(self): |
1103 | + self.output_files_exist( |
1104 | + ["fstab", "ls_dname", "ls_uuid", "ls_id", "proc_partitions"]) |
1105 | + |
1106 | + def test_disks_ptable(self): |
1107 | + """ test_disks_ptable: Test disks with 'ptable' key present """ |
1108 | + |
1109 | + if not self.generate_storage_scripts: |
1110 | + reason = "This test uses only custom storage tests" |
1111 | + raise SkipTest(reason) |
1112 | + |
1113 | + for id in [id for (id, item) in self.storage_config_dict.items() |
1114 | + if item['type'] == 'disk' and 'ptable' in item]: |
1115 | + vol = self.storage_config_dict.get(id) |
1116 | + vol_path = self.storage_volume_dict.get(id) |
1117 | + |
1118 | + # Trusty blkid does not export some required data |
1119 | + if self.release not in ['trusty', 'precise']: |
1120 | + blkid_file = "blkid_%s" % os.path.basename(vol_path) |
1121 | + blkid_info = self.get_blkid_data(blkid_file) |
1122 | + |
1123 | + # blkid reports 'DOS' for type msdos, so translate config |
1124 | + cfg_to_blkid = { |
1125 | + 'msdos': 'dos', |
1126 | + 'gpt': 'gpt', |
1127 | + } |
1128 | + self.assertEquals(cfg_to_blkid.get(vol.get('ptable')), |
1129 | + blkid_info["PTTYPE"]) |
1130 | + else: |
1131 | + # for precise, trusty! use parted |
1132 | + parted_file = "parted_%s" % os.path.basename(vol_path) |
1133 | + parted_info = self.get_parted_data(parted_file) |
1134 | + self.assertEqual(vol.get('ptable'), |
1135 | + parted_info['Partition Table']) |
1136 | + |
1137 | + def test_partitions(self): |
1138 | + """ test_partitions: for each partition check (kernel path, size) |
1139 | + 1. exists |
1140 | + 2. kpath |
1141 | + 3. size |
1142 | + |
1143 | + """ |
1144 | + if not self.generate_storage_scripts: |
1145 | + reason = "This test uses only custom storage tests" |
1146 | + raise SkipTest(reason) |
1147 | + |
1148 | + for id in [id for (id, item) in self.storage_config_dict.items() |
1149 | + if item['type'] == 'partition']: |
1150 | + |
1151 | + # determine storage path |
1152 | + vol = self.storage_config_dict.get(id) |
1153 | + vol_path = self.storage_volume_dict.get(id) |
1154 | + |
1155 | + # ensure vol_path exists |
1156 | + (vol_kname, ls_id) = self._resolve_volpath(vol_path, |
1157 | + ls_input='ls_id') |
1158 | + print("vol=%s" % vol) |
1159 | + print("vol_path=%s" % vol_path) |
1160 | + print("vol_kname=%s" % vol_kname) |
1161 | + |
1162 | + # check for kname in proc_partitions data, and return contents |
1163 | + proc_partitions = self._test_proc_partitions(vol_kname) |
1164 | + pp_list = proc_partitions.split() |
1165 | + # proc partition line format is: |
1166 | + # major minor #blocks(in 1KB) kname(dev_short) |
1167 | + # 8 1 3145728 sda1 |
1168 | + partidx = pp_list.index(vol_kname) |
1169 | + |
1170 | + # blocks is one element before kname |
1171 | + print("pp_item: %s" % pp_list[partidx - 1]) |
1172 | + part_size_bytes = int(pp_list[partidx - 1]) * 1024 |
1173 | + print("part_size_bytes=%s" % part_size_bytes) |
1174 | + expected_size = int(self.human2bytes(vol.get('size'))) |
1175 | + # extended partitions have a fixed size |
1176 | + if 'flag' in vol and vol['flag'] == "extended": |
1177 | + expected_size = 1024 |
1178 | + print("expected_size=%s" % expected_size) |
1179 | + self.assertEqual(part_size_bytes, expected_size) |
1180 | + |
1181 | + def test_formats(self): |
1182 | + """ test_formats: for each format check (device, uuid and fstype) |
1183 | + 1. underlying device matches |
1184 | + 2. if UUID set, validate |
1185 | + 3. fstype matches |
1186 | + |
1187 | + type: format |
1188 | + id: sda1_root |
1189 | + fstype: ext4 |
1190 | + volume: sda1 |
1191 | + |
1192 | + """ |
1193 | + if not self.generate_storage_scripts: |
1194 | + reason = "This test uses only custom storage tests" |
1195 | + raise SkipTest(reason) |
1196 | + |
1197 | + for id in [id for (id, item) in self.storage_config_dict.items() |
1198 | + if item['type'] == 'format']: |
1199 | + format_cfg = self.storage_config_dict.get(id) |
1200 | + real_dev = self.storage_config_dict.get(format_cfg.get('volume')) |
1201 | + vol_path = self.storage_volume_dict.get(format_cfg.get('volume')) |
1202 | + vol_base = os.path.basename(vol_path) |
1203 | + print("cfg: %s" % format_cfg) |
1204 | + print("vol_path=%s" % vol_path) |
1205 | + print("vol_base=%s" % vol_base) |
1206 | + |
1207 | + if real_dev['type'] in ['bcache']: |
1208 | + print('format backed by bcache') |
1209 | + bdev = real_dev.get('backing_device') |
1210 | + bdev_path = self.storage_volume_dict.get(bdev) |
1211 | + print('bdev_path=%s' % bdev_path) |
1212 | + |
1213 | + # look up backing name to map to bcacheN |
1214 | + (bkname, ls_id) = self._resolve_volpath(bdev_path, |
1215 | + ls_input='ls_id') |
1216 | + print("resolved bdev -> %s" % bkname) |
1217 | + bdev_bcache = self.get_bcache_backing_map() |
1218 | + bcache_kname = bdev_bcache[bkname] |
1219 | + print("mapped %s to bcache kname %s" % (bkname, bcache_kname)) |
1220 | + vol_base = bcache_kname |
1221 | + else: |
1222 | + # ensure vol_path exists |
1223 | + (vol_kname, ls_id) = self._resolve_volpath(vol_path, |
1224 | + ls_input='ls_id') |
1225 | + print("vol_base=%s" % vol_base) |
1226 | + |
1227 | + # blkid reports different values vs. config file |
1228 | + cfg_to_blkid = { |
1229 | + 'fat32': 'vfat', |
1230 | + 'ext3': 'ext3', |
1231 | + 'ext4': 'ext4', |
1232 | + 'btrfs': 'btrfs', |
1233 | + 'xfs': 'xfs', |
1234 | + } |
1235 | + # check fstype in blkid output |
1236 | + blkid_file = "blkid_%s" % vol_base |
1237 | + blkid_info = self.get_blkid_data(blkid_file) |
1238 | + print("blkid_file=%s" % blkid_file) |
1239 | + print("blkid_info: %s" % blkid_info) |
1240 | + self.assertEqual(cfg_to_blkid.get( |
1241 | + format_cfg.get('fstype', 'NO-FSTYPE-SET')), |
1242 | + blkid_info['TYPE']) |
1243 | + print("%s fstype ok" % format_cfg.get('id')) |
1244 | + |
1245 | + if 'uuid' in format_cfg: |
1246 | + uuid = format_cfg.get('uuid', 'NO-UUID-SET') |
1247 | + self.assertEqual(uuid, blkid_info['UUID']) |
1248 | + print("%s uuid ok" % format_cfg.get('id')) |
1249 | + |
1250 | + def test_mounts(self): |
1251 | + """ test_mounts: for each mount check (device and mount point) |
1252 | + |
1253 | + type: mount |
1254 | + id: sda1_root_mount |
1255 | + path: / |
1256 | + device: sda1_root |
1257 | + """ |
1258 | + if not self.generate_storage_scripts: |
1259 | + reason = "This test uses only custom storage tests" |
1260 | + raise SkipTest(reason) |
1261 | + |
1262 | + with open(os.path.join(self.td.collect, "fstab"), "r") as fp: |
1263 | + fstab = fp.read() |
1264 | + |
1265 | + for id in [id for (id, item) in self.storage_config_dict.items() |
1266 | + if item['type'] == 'mount']: |
1267 | + mount_cfg = self.storage_config_dict.get(id) |
1268 | + fs_cfg = self.storage_config_dict.get(mount_cfg.get('device')) |
1269 | + real_dev = self.storage_config_dict.get(fs_cfg.get('volume')) |
1270 | + print("mount_cfg: %s" % mount_cfg) |
1271 | + print("fs_cfg: %s" % fs_cfg) |
1272 | + print("real_dev: %s" % real_dev) |
1273 | + |
1274 | + if real_dev['type'] in ['lvm_partition']: |
1275 | + vol_path = ( |
1276 | + self.storage_volume_dict.get(mount_cfg.get('device'))) |
1277 | + vol_base = os.path.basename(vol_path) |
1278 | + |
1279 | + # check DEVNAME on device |
1280 | + blkid_file = "blkid_%s" % vol_base |
1281 | + blkid_info = self.get_blkid_data(blkid_file) |
1282 | + # most releases support DEVNAME in blkid |
1283 | + if 'DEVNAME' in blkid_info: |
1284 | + self.assertEqual(vol_path, blkid_info['DEVNAME']) |
1285 | + fstab_device = vol_path |
1286 | + |
1287 | + elif real_dev['type'] in ['bcache']: |
1288 | + print('mount backed by bcache') |
1289 | + # translate backing dev kname to bcacheN |
1290 | + # "bcache_raid1" -> "/dev/md0" |
1291 | + bdev = real_dev.get('backing_device') |
1292 | + print('backing_device_id=%s' % bdev) |
1293 | + bdev_path = self.storage_volume_dict.get(bdev) |
1294 | + print('bdev_path=%s' % bdev_path) |
1295 | + |
1296 | + # look up backing name to map to bcacheN |
1297 | + (bkname, ls_id) = self._resolve_volpath(bdev_path, |
1298 | + ls_input='ls_id') |
1299 | + print("resolved bdev -> %s" % bkname) |
1300 | + bdev_bcache = self.get_bcache_backing_map() |
1301 | + bcache_kname = bdev_bcache[bkname] |
1302 | + print("mapped %s to bcache kname %s" % (bkname, bcache_kname)) |
1303 | + |
1304 | + # look up the by-uuid symlink value based on bcache kname |
1305 | + (uuid, ls_uuid) = self._kname_to_fsuuid(bcache_kname, |
1306 | + ls_input='ls_uuid') |
1307 | + # check uuid on device |
1308 | + blkid_file = "blkid_%s" % bcache_kname |
1309 | + print("Using blkid_file: %s" % blkid_file) |
1310 | + blkid_info = self.get_blkid_data(blkid_file) |
1311 | + self.assertEqual(uuid, blkid_info['UUID']) |
1312 | + fstab_device = "UUID=%s" % uuid |
1313 | + |
1314 | + else: |
1315 | + vol_path = ( |
1316 | + self.storage_volume_dict.get(mount_cfg.get('device'))) |
1317 | + vol_base = os.path.basename(vol_path) |
1318 | + |
1319 | + # ensure vol_path exists |
1320 | + (kname, ls_id) = self._resolve_volpath(vol_path, |
1321 | + ls_input='ls_id') |
1322 | + |
1323 | +                # look up the by-uuid symlink value based on kname |
1324 | + (uuid, ls_uuid) = self._kname_to_fsuuid(kname, |
1325 | + ls_input='ls_uuid') |
1326 | + |
1327 | + # check uuid on device |
1328 | + blkid_file = "blkid_%s" % vol_base |
1329 | + blkid_info = self.get_blkid_data(blkid_file) |
1330 | + self.assertEqual(uuid, blkid_info['UUID']) |
1331 | + fstab_device = "UUID=%s" % uuid |
1332 | + |
1333 | + # check fstab_device and path |
1334 | + print("checking fstab_device=%s" % fstab_device) |
1335 | + for line in fstab.split("\n"): |
1336 | + if fstab_device in line: |
1337 | + (dev, mp, fs, op, _, _) = line.split() |
1338 | + self.assertEqual(fstab_device, dev) |
1339 | + self.assertEqual(mount_cfg.get('path', 'PATH-MISSING'), mp) |
1340 | + print("%s mount ok" % mount_cfg.get('id')) |
1341 | + |
1342 | + def test_proxy_set(self): |
1343 | + expected = get_apt_proxy() |
1344 | + with open(os.path.join(self.td.collect, "apt-proxy")) as fp: |
1345 | + apt_proxy_found = fp.read().rstrip() |
1346 | + if expected: |
1347 | + # the proxy should have gotten set through |
1348 | + self.assertIn(expected, apt_proxy_found) |
1349 | + else: |
1350 | + # no proxy, so the output of apt-config dump should be empty |
1351 | + self.assertEqual("", apt_proxy_found) |
1352 | + |
1353 | def run(self, result): |
1354 | super(VMBaseClass, self).run(result) |
1355 | self.record_result(result) |
1356 | @@ -919,6 +1453,24 @@ |
1357 | def test_interfacesd_eth0_removed(self): |
1358 | pass |
1359 | |
1360 | + def test_output_files_exist(self): |
1361 | + pass |
1362 | + |
1363 | + def test_disks_ptable(self): |
1364 | + pass |
1365 | + |
1366 | + def test_partitions(self): |
1367 | + pass |
1368 | + |
1369 | + def test_proxy_set(self): |
1370 | + pass |
1371 | + |
1372 | + def test_formats(self): |
1373 | + pass |
1374 | + |
1375 | + def test_mounts(self): |
1376 | + pass |
1377 | + |
1378 | def _maybe_raise(self, exc): |
1379 | if self.allow_test_fails: |
1380 | raise exc |
1381 | @@ -1018,8 +1570,27 @@ |
1382 | shutdown -P now "Shutting down on precise" |
1383 | """) |
1384 | |
1385 | - scripts = ([collect_prep] + collect_scripts + [collect_post] + |
1386 | - [precise_poweroff]) |
1387 | + # we always collect the following |
1388 | + default_collect = textwrap.dedent("""#!/bin/sh -x |
1389 | + cd OUTPUT_COLLECT_D |
1390 | + cat /proc/partitions > proc_partitions |
1391 | + cat /etc/fstab > fstab |
1392 | + mkdir -p /dev/disk/by-dname |
1393 | + ls /dev/disk/by-dname/ > ls_dname |
1394 | + ls -al /dev/disk/by-uuid/ > ls_uuid |
1395 | + ls -al /dev/disk/by-id/ > ls_id |
1396 | + ls /sys/firmware/efi/ > ls_sys_firmware_efi |
1397 | + find /etc/network/interfaces.d > find_interfacesd |
1398 | + |
1399 | + v="" |
1400 | + out=$(apt-config shell v Acquire::HTTP::Proxy) |
1401 | + eval "$out" |
1402 | + echo "$v" > apt-proxy |
1403 | + """) |
1404 | + |
1405 | + scripts = ([collect_prep] + [default_collect] + collect_scripts + |
1406 | + [collect_post] + [precise_poweroff]) |
1407 | + logger.debug('Using %s collect scripts', len(scripts)) |
1408 | |
1409 | for part in scripts: |
1410 | if not part.startswith("#!"): |
1411 | @@ -1093,5 +1664,78 @@ |
1412 | return ret |
1413 | |
1414 | |
1415 | +def prep_partition_for_device(device): |
1416 | +    """ Return storage config for a Power PReP partition, |
1417 | +    valid for both msdos and gpt partition tables.""" |
1418 | + return { |
1419 | + 'id': 'prep_partition', |
1420 | + 'type': 'partition', |
1421 | + 'size': '8M', |
1422 | + 'flag': 'prep', |
1423 | + 'guid': '9e1a2d38-c612-4316-aa26-8b49521e5a8b', |
1424 | + 'offset': '1M', |
1425 | + 'wipe': 'zero', |
1426 | + 'grub_device': True, |
1427 | + 'device': device} |
1428 | + |
1429 | + |
1430 | +def inject_prep_partition(storage_conf_file): |
1431 | + """ Parse a curtin configuration file and inject |
1432 | + a prep partition if possible""" |
1433 | + conf_yaml = yaml.load(util.load_file(storage_conf_file)) |
1434 | + storage_config = conf_yaml.get('storage', {}).get('config', {}) |
1435 | + if len(storage_config) == 0: |
1436 | + logger.debug('No storage config, skipping prep injection') |
1437 | + return conf_yaml |
1438 | + |
1439 | + all_disks = [item for item in storage_config |
1440 | + if item['type'] == 'disk'] |
1441 | + main_disk = all_disks[0] |
1442 | + main_partitions = [part for part in storage_config |
1443 | + if part['type'] == 'partition' and |
1444 | + part['device'] == main_disk['id']] |
1445 | + |
1446 | + # for msdos disks |
1447 | + # must have a primary partition spot |
1448 | + if main_disk['ptable'] == 'msdos': |
1449 | + primary_parts = [part for part in main_partitions |
1450 | + if part.get('flag', '') not in ['extended', |
1451 | + 'logical']] |
1452 | + if len(primary_parts) > 3: |
1453 | + logger.error("Can't find a primary partition for prep partition") |
1454 | +        raise ValueError("Can't find a primary partition for prep partition") |
1455 | + |
1456 | + prep_partition = prep_partition_for_device(main_disk['id']) |
1457 | + last_partition = primary_parts[-1] |
1458 | + partition_index = storage_config.index(last_partition) |
1459 | + storage_config.insert(partition_index + 1, prep_partition) |
1460 | + if 'grub_device' in main_disk: |
1461 | + del main_disk['grub_device'] |
1462 | + |
1463 | + # for gpt, find the boot_partition and replace it |
1464 | + elif main_disk['ptable'] == 'gpt': |
1465 | + prep_partition = prep_partition_for_device(main_disk['id']) |
1466 | + [boot_partition] = [part for part in main_partitions |
1467 | + if part.get('flag', '') in ['bios_grub']] |
1468 | + partition_index = storage_config.index(boot_partition) |
1469 | + storage_config[partition_index] = prep_partition |
1470 | + |
1471 | + if 'grub_device' in main_disk: |
1472 | + del main_disk['grub_device'] |
1473 | + |
1474 | + return conf_yaml |
1475 | + |
1476 | + |
1477 | +def switch_wwn_for_serial_disks(conf_yaml): |
1478 | + if 'storage' in conf_yaml: |
1479 | + cfg = conf_yaml.get('storage').get('config') |
1480 | + for disk in [item for item in cfg |
1481 | + if item['type'] == 'disk' and 'wwn' in item]: |
1482 | + disk['serial'] = disk['wwn'] |
1483 | + del disk['wwn'] |
1484 | + |
1485 | + return conf_yaml |
1486 | + |
1487 | + |
1488 | apply_keep_settings() |
1489 | logger = _initialize_logging() |
1490 | |
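For illustration, a minimal sketch of the prep-partition injection above, run against a hypothetical msdos storage config; the yaml file name and ids are invented for this example, while the function and the injected 'prep_partition' entry come from this branch:

    from tests.vmtests import inject_prep_partition

    # Hypothetical storage-config.yaml:
    #   storage:
    #     config:
    #       - {id: main_disk, type: disk, ptable: msdos, grub_device: true}
    #       - {id: main_part, type: partition, device: main_disk, size: 3GB}
    conf = inject_prep_partition('storage-config.yaml')
    cfg = conf['storage']['config']
    # an 8M flag=prep partition now follows the last primary partition,
    # and grub_device has moved from the disk onto the prep partition
    prep = [item for item in cfg if item['id'] == 'prep_partition'][0]
    assert prep['grub_device'] and prep['flag'] == 'prep'
    assert 'grub_device' not in cfg[0]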
1491 | === modified file 'tests/vmtests/test_basic.py' |
1492 | --- tests/vmtests/test_basic.py 2016-06-23 14:26:39 +0000 |
1493 | +++ tests/vmtests/test_basic.py 2016-07-19 15:04:36 +0000 |
1494 | @@ -1,188 +1,25 @@ |
1495 | -from . import ( |
1496 | - VMBaseClass, |
1497 | - get_apt_proxy) |
1498 | +from . import VMBaseClass |
1499 | from .releases import base_vm_classes as relbase |
1500 | |
1501 | -import os |
1502 | -import re |
1503 | -import textwrap |
1504 | - |
1505 | |
1506 | class TestBasicAbs(VMBaseClass): |
1507 | interactive = False |
1508 | conf_file = "examples/tests/basic.yaml" |
1509 | + # FIXME: generate extra_disks, and nvme from parsing config yaml |
1510 | extra_disks = ['128G', '128G', '4G'] |
1511 | nvme_disks = ['4G'] |
1512 | +    # FIXME: update disk_to_check method to extract label |
1513 | +    # and partition number from config yaml |
1514 | disk_to_check = [('main_disk', 1), ('main_disk', 2)] |
1515 | - collect_scripts = [textwrap.dedent(""" |
1516 | - cd OUTPUT_COLLECT_D |
1517 | - blkid -o export /dev/vda > blkid_output_vda |
1518 | - blkid -o export /dev/vda1 > blkid_output_vda1 |
1519 | - blkid -o export /dev/vda2 > blkid_output_vda2 |
1520 | - btrfs-show-super /dev/vdd > btrfs_show_super_vdd |
1521 | - cat /proc/partitions > proc_partitions |
1522 | - ls -al /dev/disk/by-uuid/ > ls_uuid |
1523 | - cat /etc/fstab > fstab |
1524 | - mkdir -p /dev/disk/by-dname |
1525 | - ls /dev/disk/by-dname/ > ls_dname |
1526 | - find /etc/network/interfaces.d > find_interfacesd |
1527 | - |
1528 | - v="" |
1529 | - out=$(apt-config shell v Acquire::HTTP::Proxy) |
1530 | - eval "$out" |
1531 | - echo "$v" > apt-proxy |
1532 | - """)] |
1533 | - |
1534 | - def test_output_files_exist(self): |
1535 | - self.output_files_exist( |
1536 | - ["blkid_output_vda", "blkid_output_vda1", "blkid_output_vda2", |
1537 | - "btrfs_show_super_vdd", "fstab", "ls_dname", "ls_uuid", |
1538 | - "proc_partitions"]) |
1539 | - |
1540 | - def test_ptable(self): |
1541 | - blkid_info = self.get_blkid_data("blkid_output_vda") |
1542 | - self.assertEquals(blkid_info["PTTYPE"], "dos") |
1543 | - |
1544 | - def test_partition_numbers(self): |
1545 | - # vde should have partitions 1 and 10 |
1546 | - disk = "vde" |
1547 | - proc_partitions_path = os.path.join(self.td.collect, |
1548 | - 'proc_partitions') |
1549 | - self.assertTrue(os.path.exists(proc_partitions_path)) |
1550 | - found = [] |
1551 | - with open(proc_partitions_path, 'r') as fp: |
1552 | - for line in fp.readlines(): |
1553 | - if disk in line: |
1554 | - found.append(line.split()[3]) |
1555 | - # /proc/partitions should have 3 lines with 'vde' in them. |
1556 | - expected = [disk + s for s in ["", "1", "10"]] |
1557 | - self.assertEqual(found, expected) |
1558 | - |
1559 | - def test_partitions(self): |
1560 | - with open(os.path.join(self.td.collect, "fstab")) as fp: |
1561 | - fstab_lines = fp.readlines() |
1562 | - print("\n".join(fstab_lines)) |
1563 | - # Test that vda1 is on / |
1564 | - blkid_info = self.get_blkid_data("blkid_output_vda1") |
1565 | - fstab_entry = None |
1566 | - for line in fstab_lines: |
1567 | - if blkid_info['UUID'] in line: |
1568 | - fstab_entry = line |
1569 | - break |
1570 | - self.assertIsNotNone(fstab_entry) |
1571 | - self.assertEqual(fstab_entry.split(' ')[1], "/") |
1572 | - |
1573 | - # Test that vda2 is on /home |
1574 | - blkid_info = self.get_blkid_data("blkid_output_vda2") |
1575 | - fstab_entry = None |
1576 | - for line in fstab_lines: |
1577 | - if blkid_info['UUID'] in line: |
1578 | - fstab_entry = line |
1579 | - break |
1580 | - self.assertIsNotNone(fstab_entry) |
1581 | - self.assertEqual(fstab_entry.split(' ')[1], "/home") |
1582 | - |
1583 | - # Test whole disk vdd is mounted at /btrfs |
1584 | - fstab_entry = None |
1585 | - for line in fstab_lines: |
1586 | - if "/dev/vdd" in line: |
1587 | - fstab_entry = line |
1588 | - break |
1589 | - self.assertIsNotNone(fstab_entry) |
1590 | - self.assertEqual(fstab_entry.split(' ')[1], "/btrfs") |
1591 | - |
1592 | - def test_whole_disk_format(self): |
1593 | - # confirm the whole disk format is the expected device |
1594 | - with open(os.path.join(self.td.collect, |
1595 | - "btrfs_show_super_vdd"), "r") as fp: |
1596 | - btrfs_show_super = fp.read() |
1597 | - |
1598 | - with open(os.path.join(self.td.collect, "ls_uuid"), "r") as fp: |
1599 | - ls_uuid = fp.read() |
1600 | - |
1601 | - # extract uuid from btrfs superblock |
1602 | - btrfs_fsid = [line for line in btrfs_show_super.split('\n') |
1603 | - if line.startswith('fsid\t\t')] |
1604 | - self.assertEqual(len(btrfs_fsid), 1) |
1605 | - btrfs_uuid = btrfs_fsid[0].split()[1] |
1606 | - self.assertTrue(btrfs_uuid is not None) |
1607 | - |
1608 | - # extract uuid from /dev/disk/by-uuid on /dev/vdd |
1609 | - # parsing ls -al output on /dev/disk/by-uuid: |
1610 | - # lrwxrwxrwx 1 root root 9 Dec 4 20:02 |
1611 | - # d591e9e9-825a-4f0a-b280-3bfaf470b83c -> ../../vdg |
1612 | - vdd_uuid = [line.split()[8] for line in ls_uuid.split('\n') |
1613 | - if 'vdd' in line] |
1614 | - self.assertEqual(len(vdd_uuid), 1) |
1615 | - vdd_uuid = vdd_uuid.pop() |
1616 | - self.assertTrue(vdd_uuid is not None) |
1617 | - |
1618 | - # compare them |
1619 | - self.assertEqual(vdd_uuid, btrfs_uuid) |
1620 | - |
1621 | - def test_proxy_set(self): |
1622 | - expected = get_apt_proxy() |
1623 | - with open(os.path.join(self.td.collect, "apt-proxy")) as fp: |
1624 | - apt_proxy_found = fp.read().rstrip() |
1625 | - if expected: |
1626 | - # the proxy should have gotten set through |
1627 | - self.assertIn(expected, apt_proxy_found) |
1628 | - else: |
1629 | - # no proxy, so the output of apt-config dump should be empty |
1630 | - self.assertEqual("", apt_proxy_found) |
1631 | |
1632 | |
1633 | class PreciseTestBasic(relbase.precise, TestBasicAbs): |
1634 | __test__ = True |
1635 | - |
1636 | - collect_scripts = [textwrap.dedent(""" |
1637 | - cd OUTPUT_COLLECT_D |
1638 | - blkid -o export /dev/vda > blkid_output_vda |
1639 | - blkid -o export /dev/vda1 > blkid_output_vda1 |
1640 | - blkid -o export /dev/vda2 > blkid_output_vda2 |
1641 | - btrfs-show /dev/vdd > btrfs_show_super_vdd |
1642 | - cat /proc/partitions > proc_partitions |
1643 | - ls -al /dev/disk/by-uuid/ > ls_uuid |
1644 | - cat /etc/fstab > fstab |
1645 | - mkdir -p /dev/disk/by-dname |
1646 | - ls /dev/disk/by-dname/ > ls_dname |
1647 | - find /etc/network/interfaces.d > find_interfacesd |
1648 | - |
1649 | - v="" |
1650 | - out=$(apt-config shell v Acquire::HTTP::Proxy) |
1651 | - eval "$out" |
1652 | - echo "$v" > apt-proxy |
1653 | - """)] |
1654 | - |
1655 | - def test_whole_disk_format(self): |
1656 | - # confirm the whole disk format is the expected device |
1657 | - with open(os.path.join(self.td.collect, |
1658 | - "btrfs_show_super_vdd"), "r") as fp: |
1659 | - btrfs_show_super = fp.read() |
1660 | - |
1661 | - with open(os.path.join(self.td.collect, "ls_uuid"), "r") as fp: |
1662 | - ls_uuid = fp.read() |
1663 | - |
1664 | - # extract uuid from btrfs superblock |
1665 | - btrfs_fsid = re.findall('.*uuid:\ (.*)\n', btrfs_show_super) |
1666 | - |
1667 | - self.assertEqual(len(btrfs_fsid), 1) |
1668 | - btrfs_uuid = btrfs_fsid.pop() |
1669 | - self.assertTrue(btrfs_uuid is not None) |
1670 | - |
1671 | - # extract uuid from /dev/disk/by-uuid on /dev/vdd |
1672 | - # parsing ls -al output on /dev/disk/by-uuid: |
1673 | - # lrwxrwxrwx 1 root root 9 Dec 4 20:02 |
1674 | - # d591e9e9-825a-4f0a-b280-3bfaf470b83c -> ../../vdg |
1675 | - vdd_uuid = [line.split()[8] for line in ls_uuid.split('\n') |
1676 | - if 'vdd' in line] |
1677 | - self.assertEqual(len(vdd_uuid), 1) |
1678 | - vdd_uuid = vdd_uuid.pop() |
1679 | - self.assertTrue(vdd_uuid is not None) |
1680 | - |
1681 | - # compare them |
1682 | - self.assertEqual(vdd_uuid, btrfs_uuid) |
1683 | - |
1684 | + arch_skip = ["ppc64el", "ppc64le", "s390x"] |
1685 | + |
1686 | + # FIXME(LP: #1523037): dname does not work on precise, so we cannot expect |
1687 | + # sda-part2 to exist in /dev/disk/by-dname as we can on other releases |
1688 | +    # when dname works on precise, re-enable by removing this override. |
1689 | def test_ptable(self): |
1690 | print("test_ptable does not work for Precise") |
1691 | |
1692 | @@ -206,6 +43,7 @@ |
1693 | class PreciseHWETTestBasic(relbase.precise_hwe_t, PreciseTestBasic): |
1694 | # FIXME: off due to test_whole_disk_format failing |
1695 | __test__ = False |
1696 | + arch_skip = ["ppc64el", "ppc64le", "s390x"] |
1697 | |
1698 | |
1699 | class TrustyHWEUTestBasic(relbase.trusty_hwe_u, TrustyTestBasic): |
1700 | @@ -238,114 +76,6 @@ |
1701 | class TestBasicScsiAbs(TestBasicAbs): |
1702 | conf_file = "examples/tests/basic_scsi.yaml" |
1703 | disk_driver = 'scsi-hd' |
1704 | - extra_disks = ['128G', '128G', '4G'] |
1705 | - nvme_disks = ['4G'] |
1706 | - collect_scripts = [textwrap.dedent(""" |
1707 | - cd OUTPUT_COLLECT_D |
1708 | - blkid -o export /dev/sda > blkid_output_sda |
1709 | - blkid -o export /dev/sda1 > blkid_output_sda1 |
1710 | - blkid -o export /dev/sda2 > blkid_output_sda2 |
1711 | - btrfs-show-super /dev/sdc > btrfs_show_super_sdc |
1712 | - cat /proc/partitions > proc_partitions |
1713 | - ls -al /dev/disk/by-uuid/ > ls_uuid |
1714 | - ls -al /dev/disk/by-id/ > ls_disk_id |
1715 | - cat /etc/fstab > fstab |
1716 | - mkdir -p /dev/disk/by-dname |
1717 | - ls /dev/disk/by-dname/ > ls_dname |
1718 | - find /etc/network/interfaces.d > find_interfacesd |
1719 | - |
1720 | - v="" |
1721 | - out=$(apt-config shell v Acquire::HTTP::Proxy) |
1722 | - eval "$out" |
1723 | - echo "$v" > apt-proxy |
1724 | - """)] |
1725 | - |
1726 | - def test_output_files_exist(self): |
1727 | - self.output_files_exist( |
1728 | - ["blkid_output_sda", "blkid_output_sda1", "blkid_output_sda2", |
1729 | - "btrfs_show_super_sdc", "fstab", "ls_dname", "ls_uuid", |
1730 | - "ls_disk_id", "proc_partitions"]) |
1731 | - |
1732 | - def test_ptable(self): |
1733 | - blkid_info = self.get_blkid_data("blkid_output_sda") |
1734 | - self.assertEquals(blkid_info["PTTYPE"], "dos") |
1735 | - |
1736 | - def test_partition_numbers(self): |
1737 | - # vde should have partitions 1 and 10 |
1738 | - disk = "sdd" |
1739 | - proc_partitions_path = os.path.join(self.td.collect, |
1740 | - 'proc_partitions') |
1741 | - self.assertTrue(os.path.exists(proc_partitions_path)) |
1742 | - found = [] |
1743 | - with open(proc_partitions_path, 'r') as fp: |
1744 | - for line in fp.readlines(): |
1745 | - if disk in line: |
1746 | - found.append(line.split()[3]) |
1747 | - # /proc/partitions should have 3 lines with 'vde' in them. |
1748 | - expected = [disk + s for s in ["", "1", "10"]] |
1749 | - self.assertEqual(found, expected) |
1750 | - |
1751 | - def test_partitions(self): |
1752 | - with open(os.path.join(self.td.collect, "fstab")) as fp: |
1753 | - fstab_lines = fp.readlines() |
1754 | - print("\n".join(fstab_lines)) |
1755 | - # Test that vda1 is on / |
1756 | - blkid_info = self.get_blkid_data("blkid_output_sda1") |
1757 | - fstab_entry = None |
1758 | - for line in fstab_lines: |
1759 | - if blkid_info['UUID'] in line: |
1760 | - fstab_entry = line |
1761 | - break |
1762 | - self.assertIsNotNone(fstab_entry) |
1763 | - self.assertEqual(fstab_entry.split(' ')[1], "/") |
1764 | - |
1765 | - # Test that vda2 is on /home |
1766 | - blkid_info = self.get_blkid_data("blkid_output_sda2") |
1767 | - fstab_entry = None |
1768 | - for line in fstab_lines: |
1769 | - if blkid_info['UUID'] in line: |
1770 | - fstab_entry = line |
1771 | - break |
1772 | - self.assertIsNotNone(fstab_entry) |
1773 | - self.assertEqual(fstab_entry.split(' ')[1], "/home") |
1774 | - |
1775 | - # Test whole disk sdc is mounted at /btrfs |
1776 | - fstab_entry = None |
1777 | - for line in fstab_lines: |
1778 | - if "/dev/sdc" in line: |
1779 | - fstab_entry = line |
1780 | - break |
1781 | - self.assertIsNotNone(fstab_entry) |
1782 | - self.assertEqual(fstab_entry.split(' ')[1], "/btrfs") |
1783 | - |
1784 | - def test_whole_disk_format(self): |
1785 | - # confirm the whole disk format is the expected device |
1786 | - with open(os.path.join(self.td.collect, |
1787 | - "btrfs_show_super_sdc"), "r") as fp: |
1788 | - btrfs_show_super = fp.read() |
1789 | - |
1790 | - with open(os.path.join(self.td.collect, "ls_uuid"), "r") as fp: |
1791 | - ls_uuid = fp.read() |
1792 | - |
1793 | - # extract uuid from btrfs superblock |
1794 | - btrfs_fsid = [line for line in btrfs_show_super.split('\n') |
1795 | - if line.startswith('fsid\t\t')] |
1796 | - self.assertEqual(len(btrfs_fsid), 1) |
1797 | - btrfs_uuid = btrfs_fsid[0].split()[1] |
1798 | - self.assertTrue(btrfs_uuid is not None) |
1799 | - |
1800 | - # extract uuid from /dev/disk/by-uuid on /dev/sdc |
1801 | - # parsing ls -al output on /dev/disk/by-uuid: |
1802 | - # lrwxrwxrwx 1 root root 9 Dec 4 20:02 |
1803 | - # d591e9e9-825a-4f0a-b280-3bfaf470b83c -> ../../vdg |
1804 | - uuid = [line.split()[8] for line in ls_uuid.split('\n') |
1805 | - if 'sdc' in line] |
1806 | - self.assertEqual(len(uuid), 1) |
1807 | - uuid = uuid.pop() |
1808 | - self.assertTrue(uuid is not None) |
1809 | - |
1810 | - # compare them |
1811 | - self.assertEqual(uuid, btrfs_uuid) |
1812 | |
1813 | |
1814 | class XenialTestScsiBasic(relbase.xenial, TestBasicScsiAbs): |
1815 | |
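The per-class collect scripts removed above are replaced by files whose names are derived from the parsed storage config. A sketch of the naming convention test_formats relies on, with a hypothetical config and volume map; only the "blkid_<basename>" naming comes from the generated tests:

    import os

    # hypothetical slices of storage_config_dict / storage_volume_dict
    storage_config = {
        'sda1_root': {'type': 'format', 'fstype': 'ext4', 'volume': 'sda1'},
    }
    storage_volumes = {'sda1': '/dev/vda1'}

    for fmt_id, item in storage_config.items():
        if item['type'] == 'format':
            vol_base = os.path.basename(storage_volumes[item['volume']])
            print("test_formats reads: blkid_%s" % vol_base)  # blkid_vda1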
1816 | === modified file 'tests/vmtests/test_bcache_basic.py' |
1817 | --- tests/vmtests/test_bcache_basic.py 2016-06-13 20:49:15 +0000 |
1818 | +++ tests/vmtests/test_bcache_basic.py 2016-07-19 15:04:36 +0000 |
1819 | @@ -1,43 +1,160 @@ |
1820 | from . import VMBaseClass |
1821 | from .releases import base_vm_classes as relbase |
1822 | |
1823 | -import textwrap |
1824 | import os |
1825 | |
1826 | |
1827 | class TestBcacheBasic(VMBaseClass): |
1828 | arch_skip = [ |
1829 | "s390x", # lp:1565029 |
1830 | + "ppc64el", # LP:1602299 |
1831 | ] |
1832 | conf_file = "examples/tests/bcache_basic.yaml" |
1833 | extra_disks = ['2G'] |
1834 | - collect_scripts = [textwrap.dedent(""" |
1835 | - cd OUTPUT_COLLECT_D |
1836 | - bcache-super-show /dev/vda2 > bcache_super_vda2 |
1837 | - ls /sys/fs/bcache > bcache_ls |
1838 | - cat /sys/block/bcache0/bcache/cache_mode > bcache_cache_mode |
1839 | - cat /proc/mounts > proc_mounts |
1840 | - cat /proc/partitions > proc_partitions |
1841 | - find /etc/network/interfaces.d > find_interfacesd |
1842 | - """)] |
1843 | + |
1844 | + def output_files_exist_bcache_basic(self): |
1845 | + to_check = ["ls_bcache"] |
1846 | + for id in [id for (id, item) in self.storage_config_dict.items() |
1847 | + if item['type'] == 'bcache']: |
1848 | + vol_base = os.path.basename(self.storage_volume_dict.get(id)) |
1849 | + to_check.append("%s_cache_mode" % vol_base) |
1850 | + to_check.append("bcache_super_%s" % vol_base) |
1851 | + |
1852 | + self.output_files_exist(to_check) |
1853 | |
1854 | def test_bcache_output_files_exist(self): |
1855 | - self.output_files_exist(["bcache_super_vda2", "bcache_ls", |
1856 | - "bcache_cache_mode"]) |
1857 | + always_present = [ |
1858 | + "bcache_backing_map", |
1859 | + "bcache_cache_mode_map", |
1860 | + "ls_bcache", |
1861 | + ] |
1862 | + per_device = [] |
1863 | + for id in [id for (id, item) in self.storage_config_dict.items() |
1864 | + if item['type'] == 'bcache']: |
1865 | + bcache_cfg = self.storage_config_dict.get(id) |
1866 | + |
1867 | + # backingdev related files |
1868 | + bdev = bcache_cfg.get('backing_device') |
1869 | + bdev_path = self.storage_volume_dict.get(bdev) |
1870 | + # look up backing name to map to bcacheN |
1871 | + (bkname, ls_id) = self._resolve_volpath(bdev_path, |
1872 | + ls_input='ls_id') |
1873 | + bdev_bcache = self.get_bcache_backing_map() |
1874 | + bcache_kname = bdev_bcache[bkname] |
1875 | + |
1876 | + blkid_file = "bcache_blkid_%s" % os.path.basename(bdev_path) |
1877 | + if blkid_file not in per_device: |
1878 | + per_device.append(blkid_file) |
1879 | + |
1880 | + blkid_file = ("bcache_super_backing_%s" % |
1881 | + os.path.basename(bdev_path)) |
1882 | + if blkid_file not in per_device: |
1883 | + per_device.append(blkid_file) |
1884 | + |
1885 | + blkid_file = "blkid_%s" % bcache_kname |
1886 | + if blkid_file not in per_device: |
1887 | + per_device.append(blkid_file) |
1888 | + |
1889 | + # caching_device |
1890 | + cdev = bcache_cfg.get('cache_device') |
1891 | + cdev_path = self.storage_volume_dict.get(cdev) # /dev/disk/by-id |
1892 | + blkid_file = "bcache_super_cache_%s" % os.path.basename(cdev_path) |
1893 | + if blkid_file not in per_device: |
1894 | + per_device.append(blkid_file) |
1895 | + |
1896 | + to_check = always_present + per_device |
1897 | + print(to_check) |
1898 | + self.output_files_exist(to_check) |
1899 | + |
1900 | + def bcache_status_basic(self): |
1901 | + for id in [id for (id, item) in self.storage_config_dict.items() |
1902 | + if item['type'] == 'bcache']: |
1903 | + vol_path = self.storage_volume_dict.get(id) |
1904 | + vol_base = os.path.basename(vol_path) |
1905 | + |
1906 | + # check cset.uuid on cache devices matches sysfs/fs/bcache |
1907 | + bcache_super = "bcache_super_%s" % vol_base |
1908 | + bcache_super_info = self.get_bcache_data(bcache_super) |
1909 | + self.check_file_regex("ls_bcache", bcache_super_info['cset.uuid']) |
1910 | |
1911 | def test_bcache_status(self): |
1912 | - bcache_cset_uuid = None |
1913 | - fname = os.path.join(self.td.collect, "bcache_super_vda2") |
1914 | - with open(fname, "r") as fp: |
1915 | - for line in fp.read().splitlines(): |
1916 | - if line != "" and line.split()[0] == "cset.uuid": |
1917 | - bcache_cset_uuid = line.split()[-1].rstrip() |
1918 | - self.assertIsNotNone(bcache_cset_uuid) |
1919 | - with open(os.path.join(self.td.collect, "bcache_ls"), "r") as fp: |
1920 | - self.assertTrue(bcache_cset_uuid in fp.read().splitlines()) |
1921 | + """test_bcache_status: match dev.uuid from superblock """ |
1922 | + for id in [id for (id, item) in self.storage_config_dict.items() |
1923 | + if item['type'] == 'bcache']: |
1924 | + bcache_cfg = self.storage_config_dict.get(id) |
1925 | + print("bcache cfg: %s" % bcache_cfg) |
1926 | + |
1927 | + # backing_device related names |
1928 | + bdev = bcache_cfg.get('backing_device') |
1929 | + print("backing_dev: %s" % bdev) |
1930 | + bdev_path = self.storage_volume_dict.get(bdev) |
1931 | + print("bdev_path: %s" % bdev_path) |
1932 | + # look up backing name to map to bcacheN |
1933 | + (bkname, ls_id) = self._resolve_volpath(bdev_path, |
1934 | + ls_input='ls_id') |
1935 | + print("bkname: %s" % bkname) |
1936 | + |
1937 | + (bk_uuid, ls_uuid) = self._kname_to_fsuuid(bkname, |
1938 | + ls_input='ls_uuid') |
1939 | + |
1940 | + bcache_super = ("bcache_super_backing_%s" % |
1941 | + os.path.basename(bdev_path)) |
1942 | + super_data = self.get_bcache_data(bcache_super) |
1943 | + dev_uuid = super_data['dev.uuid'] |
1944 | + self.assertEqual(bk_uuid, dev_uuid) |
1945 | + print("backing_device uuid OK") |
1946 | + |
1947 | + # cache_device related name |
1948 | + print("check cache dev") |
1949 | + cdev = bcache_cfg.get('cache_device') |
1950 | + print("cdev: %s" % cdev) |
1951 | + cdev_path = self.storage_volume_dict.get(cdev) # /dev/disk/by-id |
1952 | + print("cdev_path: %s" % cdev_path) |
1953 | + # /dev/disk/byid -> kname |
1954 | + (ckname, ls_id) = self._resolve_volpath(cdev_path, |
1955 | + ls_input='ls_id') |
1956 | + print("ckname: %s" % ckname) |
1957 | + (cc_uuid, ls_uuid) = self._kname_to_fsuuid(ckname, |
1958 | + ls_input='ls_uuid') |
1959 | + |
1960 | + bcache_super = ("bcache_super_cache_%s" % |
1961 | + os.path.basename(cdev_path)) |
1962 | + print("super_file: %s" % bcache_super) |
1963 | + super_data = self.get_bcache_data(bcache_super) |
1964 | + dev_uuid = super_data['dev.uuid'] |
1965 | + print("super.dev.uuid: %s" % dev_uuid) |
1966 | + print("cdev.uuid: %s" % cc_uuid) |
1967 | + self.assertEqual(cc_uuid, dev_uuid) |
1968 | + |
1969 | + def bcache_cachemode_basic(self): |
1970 | + for id in [id for (id, item) in self.storage_config_dict.items() |
1971 | + if item['type'] == 'bcache']: |
1972 | + vol = self.storage_config_dict.get(id) |
1973 | + vol_path = self.storage_volume_dict.get(id) |
1974 | + vol_base = os.path.basename(vol_path) |
1975 | + regex = r"\[%s]" % vol.get('cache_mode') |
1976 | + self.check_file_regex("%s_cache_mode" % vol_base, regex) |
1977 | |
1978 | def test_bcache_cachemode(self): |
1979 | - self.check_file_regex("bcache_cache_mode", r"\[writeback\]") |
1980 | + """test_bcache_cachemode: validate bcache cache setting""" |
1981 | + for id in [id for (id, item) in self.storage_config_dict.items() |
1982 | + if item['type'] == 'bcache']: |
1983 | + bcache_cfg = self.storage_config_dict.get(id) |
1984 | + bdev = bcache_cfg.get('backing_device') |
1985 | + bdev_path = self.storage_volume_dict.get(bdev) |
1986 | + print('bdev_path=%s' % bdev_path) |
1987 | + |
1988 | + # look up backing name to map to bcacheN |
1989 | + (bkname, ls_id) = self._resolve_volpath(bdev_path, |
1990 | + ls_input='ls_id') |
1991 | + print("resolved bdev -> %s" % bkname) |
1992 | + bdev_bcache = self.get_bcache_backing_map() |
1993 | + |
1994 | + bcache_kname = bdev_bcache[bkname] |
1995 | + print("mapped %s to bcache kname %s" % (bkname, bcache_kname)) |
1996 | + |
1997 | +            regex = r"%s:.*\[%s\]" % (bcache_kname, bcache_cfg.get('cache_mode')) |
1998 | + self.check_file_regex("bcache_cache_mode_map", regex) |
1999 | |
2000 | |
2001 | class PreciseHWETBcacheBasic(relbase.precise_hwe_t, TestBcacheBasic): |
2002 | |
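Several of the bcache tests above hinge on get_bcache_backing_map() translating a backing device kname into its bcacheN kname. A minimal parsing sketch, assuming the collected bcache_backing_map file holds one "<backing-kname>: <bcacheN>" pair per line; the real collector is defined elsewhere in this branch, so the file format here is an assumption:

    # assumes "vda2: bcache0"-style lines; format is an assumption here
    def parse_backing_map(path):
        mapping = {}
        with open(path) as fp:
            for line in fp:
                if ':' in line:
                    bkname, kname = line.split(':', 1)
                    mapping[bkname.strip()] = kname.strip()
        return mapping

    # e.g. parse_backing_map('bcache_backing_map')['vda2'] == 'bcache0'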
2003 | === modified file 'tests/vmtests/test_bonding.py' |
2004 | --- tests/vmtests/test_bonding.py 2016-06-15 19:38:44 +0000 |
2005 | +++ tests/vmtests/test_bonding.py 2016-07-19 15:04:36 +0000 |
2006 | @@ -167,6 +167,7 @@ |
2007 | |
2008 | class PreciseHWETTestBonding(relbase.precise_hwe_t, TestNetworkAbs): |
2009 | __test__ = True |
2010 | + arch_skip = ["ppc64el", "ppc64le", "s390x"] |
2011 | # package names on precise are different, need to check on ifenslave-2.6 |
2012 | collect_scripts = TestNetworkAbs.collect_scripts + [textwrap.dedent(""" |
2013 | cd OUTPUT_COLLECT_D |
2014 | |
2015 | === modified file 'tests/vmtests/test_lvm.py' |
2016 | --- tests/vmtests/test_lvm.py 2016-06-13 20:49:15 +0000 |
2017 | +++ tests/vmtests/test_lvm.py 2016-07-19 15:04:36 +0000 |
2018 | @@ -1,21 +1,11 @@ |
2019 | from . import VMBaseClass |
2020 | from .releases import base_vm_classes as relbase |
2021 | |
2022 | -import textwrap |
2023 | - |
2024 | |
2025 | class TestLvmAbs(VMBaseClass): |
2026 | conf_file = "examples/tests/lvm.yaml" |
2027 | interactive = False |
2028 | extra_disks = [] |
2029 | - collect_scripts = [textwrap.dedent(""" |
2030 | - cd OUTPUT_COLLECT_D |
2031 | - cat /etc/fstab > fstab |
2032 | - ls /dev/disk/by-dname > ls_dname |
2033 | - find /etc/network/interfaces.d > find_interfacesd |
2034 | - pvdisplay -C --separator = -o vg_name,pv_name --noheadings > pvs |
2035 | - lvdisplay -C --separator = -o lv_name,vg_name --noheadings > lvs |
2036 | - """)] |
2037 | fstab_expected = { |
2038 | '/dev/vg1/lv1': '/srv/data', |
2039 | '/dev/vg1/lv2': '/srv/backup', |
2040 | @@ -26,21 +16,58 @@ |
2041 | ('vg1-lv1', 0), |
2042 | ('vg1-lv2', 0)] |
2043 | |
2044 | - def test_lvs(self): |
2045 | - self.check_file_strippedline("lvs", "lv1=vg1") |
2046 | - self.check_file_strippedline("lvs", "lv2=vg1") |
2047 | - |
2048 | - def test_pvs(self): |
2049 | - self.check_file_strippedline("pvs", "vg1=/dev/vda5") |
2050 | - self.check_file_strippedline("pvs", "vg1=/dev/vda6") |
2051 | - |
2052 | - def test_output_files_exist(self): |
2053 | - self.output_files_exist( |
2054 | - ["fstab", "ls_dname"]) |
2055 | + def test_lvm_volgroup(self): |
2056 | +    """ test_lvm_volgroup: for each lvm_volgroup check (devices, name) |
2057 | + 1. underlying devices are members of the vg |
2058 | + 2. validate the vg name is present |
2059 | + |
2060 | + """ |
2061 | + for id in [id for (id, item) in self.storage_config_dict.items() |
2062 | + if item['type'] == 'lvm_volgroup']: |
2063 | + |
2064 | + # determine storage path |
2065 | + vol = self.storage_config_dict.get(id) |
2066 | + vol_path = self.storage_volume_dict.get(id) |
2067 | + |
2068 | + # ensure vol_path exists |
2069 | + (vol_kname, ls_id) = self._resolve_volpath(vol_path, |
2070 | + ls_input='ls_id') |
2071 | + |
2072 | + for device in vol.get('devices'): |
2073 | + # lookup each device member and resolve path |
2074 | + dev_vol_path = self.storage_volume_dict.get(device) |
2075 | + (devpath, _) = self._resolve_volpath(dev_vol_path, |
2076 | + ls_input='ls_id') |
2077 | + pvs_str = "%s=%s" % (vol.get('name'), "/dev/"+devpath) |
2078 | + self.check_file_strippedline("pvs", pvs_str) |
2079 | + |
2080 | + def test_lvm_partition(self): |
2081 | +    """ test_lvm_partition: for each lvm_partition check (device, name) |
2082 | + 1. device exists |
2083 | + 2. validate the lv name is present in the lvs listing |
2084 | + |
2085 | + """ |
2086 | + for id in [id for (id, item) in self.storage_config_dict.items() |
2087 | + if item['type'] == 'lvm_partition']: |
2088 | + |
2089 | + # determine storage path |
2090 | + vol = self.storage_config_dict.get(id) |
2091 | + |
2092 | + # extract the volgroup name the lvm_partition specified |
2093 | + lv_name = vol.get('name') |
2094 | + vg_id = vol.get('volgroup') |
2095 | + vg_name = self.storage_config_dict.get(vg_id).get('name') |
2096 | + |
2097 | + lvs_str = "%s=%s" % (lv_name, vg_name) |
2098 | + self.check_file_strippedline("lvs", lvs_str) |
2099 | + |
2100 | + def test_output_files_exist_lvm(self): |
2101 | + self.output_files_exist(["pvs", "lvs", "dmsetup_info"]) |
2102 | |
2103 | |
2104 | class PreciseTestLvm(relbase.precise, TestLvmAbs): |
2105 | __test__ = True |
2106 | + arch_skip = ["ppc64el", "ppc64le", "s390x"] |
2107 | |
2108 | # FIXME(LP: #1523037): dname does not work on trusty, so we cannot expect |
2109 | # sda-part2 to exist in /dev/disk/by-dname as we can on other releases |
2110 | |
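The pvs/lvs files checked above keep the key=value layout produced by the old collect script ("pvdisplay -C --separator = -o vg_name,pv_name --noheadings", plus the lvdisplay equivalent), so the new config-driven assertions reduce to stripped-line membership tests. A sketch, where the collect/ directory is a hypothetical example path:

    # mirrors what check_file_strippedline does against the collected files
    def stripped_lines(path):
        with open(path) as fp:
            return [line.strip() for line in fp.readlines()]

    assert 'vg1=/dev/vda5' in stripped_lines('collect/pvs')  # pv in vg1
    assert 'lv1=vg1' in stripped_lines('collect/lvs')        # lv1 -> vg1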
2111 | === modified file 'tests/vmtests/test_mdadm_bcache.py' |
2112 | --- tests/vmtests/test_mdadm_bcache.py 2016-06-13 20:49:15 +0000 |
2113 | +++ tests/vmtests/test_mdadm_bcache.py 2016-07-19 15:04:36 +0000 |
2114 | @@ -1,39 +1,31 @@ |
2115 | from . import VMBaseClass |
2116 | +from .test_bcache_basic import TestBcacheBasic |
2117 | from .releases import base_vm_classes as relbase |
2118 | |
2119 | import textwrap |
2120 | -import os |
2121 | |
2122 | |
2123 | class TestMdadmAbs(VMBaseClass): |
2124 | interactive = False |
2125 | extra_disks = [] |
2126 | active_mdadm = "1" |
2127 | - collect_scripts = [textwrap.dedent(""" |
2128 | - cd OUTPUT_COLLECT_D |
2129 | - cat /etc/fstab > fstab |
2130 | - mdadm --detail --scan > mdadm_status |
2131 | - mdadm --detail --scan | grep -c ubuntu > mdadm_active1 |
2132 | - grep -c active /proc/mdstat > mdadm_active2 |
2133 | - ls /dev/disk/by-dname > ls_dname |
2134 | - find /etc/network/interfaces.d > find_interfacesd |
2135 | - """)] |
2136 | |
2137 | def test_mdadm_output_files_exist(self): |
2138 | self.output_files_exist( |
2139 | - ["fstab", "mdadm_status", "mdadm_active1", "mdadm_active2", |
2140 | + ["fstab", "mdadm_status", "mdadm_active", "proc_mdstat", |
2141 | "ls_dname"]) |
2142 | |
2143 | def test_mdadm_status(self): |
2144 | # ubuntu:<ID> is the name assigned to the md array |
2145 | self.check_file_regex("mdadm_status", r"ubuntu:[0-9]*") |
2146 | - self.check_file_strippedline("mdadm_active1", self.active_mdadm) |
2147 | - self.check_file_strippedline("mdadm_active2", self.active_mdadm) |
2148 | - |
2149 | - |
2150 | -class TestMdadmBcacheAbs(TestMdadmAbs): |
2151 | + self.check_file_strippedline("mdadm_active", self.active_mdadm) |
2152 | + self.check_file_regex("proc_mdstat", r"active") |
2153 | + |
2154 | + |
2155 | +class TestMdadmBcacheAbs(TestBcacheBasic, TestMdadmAbs): |
2156 | arch_skip = [ |
2157 | "s390x", # lp:1565029 |
2158 | + "ppc64el", # LP:1602299 |
2159 | ] |
2160 | conf_file = "examples/tests/mdadm_bcache.yaml" |
2161 | disk_to_check = [('main_disk', 1), |
2162 | @@ -46,76 +38,10 @@ |
2163 | ('cached_array', 0), |
2164 | ('cached_array_2', 0)] |
2165 | extra_disks = ['4G', '4G'] |
2166 | - collect_scripts = TestMdadmAbs.collect_scripts + [textwrap.dedent(""" |
2167 | - cd OUTPUT_COLLECT_D |
2168 | - bcache-super-show /dev/vda6 > bcache_super_vda6 |
2169 | - bcache-super-show /dev/vda7 > bcache_super_vda7 |
2170 | - bcache-super-show /dev/md0 > bcache_super_md0 |
2171 | - ls /sys/fs/bcache > bcache_ls |
2172 | - cat /sys/block/bcache0/bcache/cache_mode > bcache_cache_mode |
2173 | - cat /sys/block/bcache1/bcache/cache_mode >> bcache_cache_mode |
2174 | - cat /sys/block/bcache2/bcache/cache_mode >> bcache_cache_mode |
2175 | - cat /proc/mounts > proc_mounts |
2176 | - find /etc/network/interfaces.d > find_interfacesd |
2177 | - """)] |
2178 | - fstab_expected = { |
2179 | - '/dev/vda1': '/media/sda1', |
2180 | - '/dev/vda7': '/boot', |
2181 | - '/dev/bcache1': '/media/data', |
2182 | - '/dev/bcache0': '/media/bcache_normal', |
2183 | - '/dev/bcache2': '/media/bcachefoo_fulldiskascache_storage' |
2184 | - } |
2185 | - |
2186 | - def test_bcache_output_files_exist(self): |
2187 | - self.output_files_exist(["bcache_super_vda6", |
2188 | - "bcache_super_vda7", |
2189 | - "bcache_super_md0", |
2190 | - "bcache_ls", |
2191 | - "bcache_cache_mode"]) |
2192 | - |
2193 | - def test_bcache_status(self): |
2194 | - bcache_supers = [ |
2195 | - "bcache_super_vda6", |
2196 | - "bcache_super_vda7", |
2197 | - "bcache_super_md0", |
2198 | - ] |
2199 | - bcache_cset_uuid = None |
2200 | - found = {} |
2201 | - for bcache_super in bcache_supers: |
2202 | - with open(os.path.join(self.td.collect, bcache_super), "r") as fp: |
2203 | - for line in fp.read().splitlines(): |
2204 | - if line != "" and line.split()[0] == "cset.uuid": |
2205 | - bcache_cset_uuid = line.split()[-1].rstrip() |
2206 | - if bcache_cset_uuid in found: |
2207 | - found[bcache_cset_uuid].append(bcache_super) |
2208 | - else: |
2209 | - found[bcache_cset_uuid] = [bcache_super] |
2210 | - self.assertIsNotNone(bcache_cset_uuid) |
2211 | - with open(os.path.join(self.td.collect, "bcache_ls"), "r") as fp: |
2212 | - self.assertTrue(bcache_cset_uuid in fp.read().splitlines()) |
2213 | - |
2214 | - # one cset.uuid for all devices |
2215 | - self.assertEqual(len(found), 1) |
2216 | - |
2217 | - # three devices with same cset.uuid |
2218 | - self.assertEqual(len(found[bcache_cset_uuid]), 3) |
2219 | - |
2220 | - # check the cset.uuid in the dict |
2221 | - self.assertEqual(list(found.keys()).pop(), |
2222 | - bcache_cset_uuid) |
2223 | - |
2224 | - def test_bcache_cachemode(self): |
2225 | - # definition is on order 0->back,1->through,2->around |
2226 | - # but after reboot it can be anything since order is not guaranteed |
2227 | - # until we find a way to redetect the order we just check that all |
2228 | - # three are there |
2229 | - self.check_file_regex("bcache_cache_mode", r"\[writeback\]") |
2230 | - self.check_file_regex("bcache_cache_mode", r"\[writethrough\]") |
2231 | - self.check_file_regex("bcache_cache_mode", r"\[writearound\]") |
2232 | |
2233 | |
2234 | class TrustyTestMdadmBcache(relbase.trusty, TestMdadmBcacheAbs): |
2235 | - __test__ = True |
2236 | + __test__ = False # Stack traces for now |
2237 | |
2238 | # FIXME(LP: #1523037): dname does not work on trusty |
2239 | # when dname works on trusty, then we need to re-enable by removing line. |
2240 | |
2241 | === modified file 'tests/vmtests/test_multipath.py' |
2242 | --- tests/vmtests/test_multipath.py 2016-06-23 14:26:39 +0000 |
2243 | +++ tests/vmtests/test_multipath.py 2016-07-19 15:04:36 +0000 |
2244 | @@ -11,6 +11,7 @@ |
2245 | disk_driver = 'scsi-hd' |
2246 | extra_disks = [] |
2247 | nvme_disks = [] |
2248 | + generate_storage_scripts = False |
2249 | collect_scripts = [textwrap.dedent(""" |
2250 | cd OUTPUT_COLLECT_D |
2251 | blkid -o export /dev/sda > blkid_output_sda |
2252 | |
2253 | === modified file 'tests/vmtests/test_network.py' |
2254 | --- tests/vmtests/test_network.py 2016-06-15 19:38:44 +0000 |
2255 | +++ tests/vmtests/test_network.py 2016-07-19 15:04:36 +0000 |
2256 | @@ -322,12 +322,14 @@ |
2257 | |
2258 | |
2259 | class PreciseHWETTestNetwork(relbase.precise_hwe_t, TestNetworkAbs): |
2260 | + arch_skip = ["ppc64el", "ppc64le", "s390x"] |
2261 | # FIXME: off due to hang at test: Starting execute cloud user/final scripts |
2262 | __test__ = False |
2263 | |
2264 | |
2265 | class PreciseHWETTestNetworkStatic(relbase.precise_hwe_t, |
2266 | TestNetworkStaticAbs): |
2267 | + arch_skip = ["ppc64el", "ppc64le", "s390x"] |
2268 | # FIXME: off due to hang at test: Starting execute cloud user/final scripts |
2269 | __test__ = False |
2270 | |
2271 | @@ -398,6 +400,7 @@ |
2272 | |
2273 | |
2274 | class PreciseTestNetworkVlan(relbase.precise, TestNetworkVlanAbs): |
2275 | + arch_skip = ["ppc64el", "ppc64le", "s390x"] |
2276 | __test__ = True |
2277 | |
2278 | # precise ip -d link show output is different (of course) |
2279 | @@ -430,6 +433,7 @@ |
2280 | |
2281 | |
2282 | class PreciseTestNetworkENISource(relbase.precise, TestNetworkENISource): |
2283 | + arch_skip = ["ppc64el", "ppc64le", "s390x"] |
2284 | __test__ = False |
2285 | # not working, still debugging though; possible older ifupdown doesn't |
2286 | # like the multiple iface method. |
2287 | |
2288 | === modified file 'tests/vmtests/test_nvme.py' |
2289 | --- tests/vmtests/test_nvme.py 2016-06-13 20:49:15 +0000 |
2290 | +++ tests/vmtests/test_nvme.py 2016-07-19 15:04:36 +0000 |
2291 | @@ -9,15 +9,17 @@ |
2292 | arch_skip = [ |
2293 | "s390x", # nvme is a pci device, no pci on s390x |
2294 | ] |
2295 | + generate_storage_scripts = False |
2296 | interactive = False |
2297 | conf_file = "examples/tests/nvme.yaml" |
2298 | install_timeout = 600 |
2299 | boot_timeout = 120 |
2300 | extra_disks = [] |
2301 | nvme_disks = ['4G', '4G'] |
2302 | - disk_to_check = [('main_disk', 1), ('main_disk', 2), ('main_disk', 15), |
2303 | + disk_to_check = [('main_disk', 1), ('main_disk', 2), |
2304 | ('nvme_disk', 1), ('nvme_disk', 2), ('nvme_disk', 3), |
2305 | ('second_nvme', 1)] |
2306 | + |
2307 | collect_scripts = [textwrap.dedent(""" |
2308 | cd OUTPUT_COLLECT_D |
2309 | ls /sys/class/ > sys_class |
2310 | |
2311 | === modified file 'tests/vmtests/test_raid5_bcache.py' |
2312 | --- tests/vmtests/test_raid5_bcache.py 2016-06-13 20:49:15 +0000 |
2313 | +++ tests/vmtests/test_raid5_bcache.py 2016-07-19 15:04:36 +0000 |
2314 | @@ -1,72 +1,14 @@ |
2315 | -from . import VMBaseClass |
2316 | +from .test_bcache_basic import TestBcacheBasic |
2317 | +from .test_mdadm_bcache import TestMdadmAbs |
2318 | from .releases import base_vm_classes as relbase |
2319 | |
2320 | -import textwrap |
2321 | -import os |
2322 | - |
2323 | - |
2324 | -class TestMdadmAbs(VMBaseClass): |
2325 | + |
2326 | +class TestMdadmBcacheAbs(TestBcacheBasic, TestMdadmAbs): |
2327 | interactive = False |
2328 | extra_disks = ['10G', '10G', '10G', '10G'] |
2329 | - active_mdadm = "1" |
2330 | - collect_scripts = [textwrap.dedent(""" |
2331 | - cd OUTPUT_COLLECT_D |
2332 | - cat /etc/fstab > fstab |
2333 | - mdadm --detail --scan > mdadm_status |
2334 | - mdadm --detail --scan | grep -c ubuntu > mdadm_active1 |
2335 | - grep -c active /proc/mdstat > mdadm_active2 |
2336 | - ls /dev/disk/by-dname > ls_dname |
2337 | - find /etc/network/interfaces.d > find_interfacesd |
2338 | - """)] |
2339 | - |
2340 | - def test_mdadm_output_files_exist(self): |
2341 | - self.output_files_exist( |
2342 | - ["fstab", "mdadm_status", "mdadm_active1", "mdadm_active2", |
2343 | - "ls_dname"]) |
2344 | - |
2345 | - def test_mdadm_status(self): |
2346 | - # ubuntu:<ID> is the name assigned to the md array |
2347 | - self.check_file_regex("mdadm_status", r"ubuntu:[0-9]*") |
2348 | - self.check_file_strippedline("mdadm_active1", self.active_mdadm) |
2349 | - self.check_file_strippedline("mdadm_active2", self.active_mdadm) |
2350 | - |
2351 | - |
2352 | -class TestMdadmBcacheAbs(TestMdadmAbs): |
2353 | conf_file = "examples/tests/raid5bcache.yaml" |
2354 | disk_to_check = [('md0', 0), ('sda', 2)] |
2355 | |
2356 | - collect_scripts = TestMdadmAbs.collect_scripts + [textwrap.dedent(""" |
2357 | - cd OUTPUT_COLLECT_D |
2358 | - bcache-super-show /dev/vda2 > bcache_super_vda2 |
2359 | - ls /sys/fs/bcache > bcache_ls |
2360 | - cat /sys/block/bcache0/bcache/cache_mode > bcache_cache_mode |
2361 | - cat /proc/mounts > proc_mounts |
2362 | - cat /proc/partitions > proc_partitions |
2363 | - find /etc/network/interfaces.d > find_interfacesd |
2364 | - """)] |
2365 | - fstab_expected = { |
2366 | - '/dev/bcache0': '/', |
2367 | - '/dev/md0': '/srv/data', |
2368 | - } |
2369 | - |
2370 | - def test_bcache_output_files_exist(self): |
2371 | - self.output_files_exist(["bcache_super_vda2", "bcache_ls", |
2372 | - "bcache_cache_mode"]) |
2373 | - |
2374 | - def test_bcache_status(self): |
2375 | - bcache_cset_uuid = None |
2376 | - fname = os.path.join(self.td.collect, "bcache_super_vda2") |
2377 | - with open(fname, "r") as fp: |
2378 | - for line in fp.read().splitlines(): |
2379 | - if line != "" and line.split()[0] == "cset.uuid": |
2380 | - bcache_cset_uuid = line.split()[-1].rstrip() |
2381 | - self.assertIsNotNone(bcache_cset_uuid) |
2382 | - with open(os.path.join(self.td.collect, "bcache_ls"), "r") as fp: |
2383 | - self.assertTrue(bcache_cset_uuid in fp.read().splitlines()) |
2384 | - |
2385 | - def test_bcache_cachemode(self): |
2386 | - self.check_file_regex("bcache_cache_mode", r"\[writeback\]") |
2387 | - |
2388 | |
2389 | class PreciseHWETTestRaid5Bcache(relbase.precise_hwe_t, TestMdadmBcacheAbs): |
2390 | # FIXME: off due to failing install: RUN_ARRAY failed: Invalid argument |
2391 | |
2392 | === modified file 'tests/vmtests/test_uefi_basic.py' |
2393 | --- tests/vmtests/test_uefi_basic.py 2016-06-13 20:49:15 +0000 |
2394 | +++ tests/vmtests/test_uefi_basic.py 2016-07-19 15:04:36 +0000 |
2395 | @@ -3,41 +3,15 @@ |
2396 | from .releases import base_vm_classes as relbase |
2397 | |
2398 | import os |
2399 | -import textwrap |
2400 | - |
2401 | - |
2402 | -class TestBasicAbs(VMBaseClass): |
2403 | + |
2404 | + |
2405 | +class TestUefiBasicAbs(VMBaseClass): |
2406 | interactive = False |
2407 | - arch_skip = ["s390x"] |
2408 | + arch_skip = ["s390x", "ppc64el"] |
2409 | conf_file = "examples/tests/uefi_basic.yaml" |
2410 | extra_disks = [] |
2411 | uefi = True |
2412 | disk_to_check = [('main_disk', 1), ('main_disk', 2)] |
2413 | - collect_scripts = [textwrap.dedent(""" |
2414 | - cd OUTPUT_COLLECT_D |
2415 | - blkid -o export /dev/vda > blkid_output_vda |
2416 | - blkid -o export /dev/vda1 > blkid_output_vda1 |
2417 | - blkid -o export /dev/vda2 > blkid_output_vda2 |
2418 | - cat /proc/partitions > proc_partitions |
2419 | - ls -al /dev/disk/by-uuid/ > ls_uuid |
2420 | - cat /etc/fstab > fstab |
2421 | - mkdir -p /dev/disk/by-dname |
2422 | - ls /dev/disk/by-dname/ > ls_dname |
2423 | - find /etc/network/interfaces.d > find_interfacesd |
2424 | - ls /sys/firmware/efi/ > ls_sys_firmware_efi |
2425 | - cat /sys/class/block/vda/queue/logical_block_size > vda_lbs |
2426 | - cat /sys/class/block/vda/queue/physical_block_size > vda_pbs |
2427 | - blockdev --getsz /dev/vda > vda_blockdev_getsz |
2428 | - blockdev --getss /dev/vda > vda_blockdev_getss |
2429 | - blockdev --getpbsz /dev/vda > vda_blockdev_getpbsz |
2430 | - blockdev --getbsz /dev/vda > vda_blockdev_getbsz |
2431 | - """)] |
2432 | - |
2433 | - def test_output_files_exist(self): |
2434 | - self.output_files_exist( |
2435 | - ["blkid_output_vda", "blkid_output_vda1", "blkid_output_vda2", |
2436 | - "fstab", "ls_dname", "ls_uuid", "ls_sys_firmware_efi", |
2437 | - "proc_partitions"]) |
2438 | |
2439 | def test_sys_firmware_efi(self): |
2440 | sys_efi_expected = [ |
2441 | @@ -61,11 +35,16 @@ |
2442 | """ Test disk logical and physical block size are match |
2443 | the class block size. |
2444 | """ |
2445 | - for bs in ['lbs', 'pbs']: |
2446 | - with open(os.path.join(self.td.collect, |
2447 | - 'vda_' + bs), 'r') as fp: |
2448 | - size = int(fp.read()) |
2449 | - self.assertEqual(self.disk_block_size, size) |
2450 | + for id in [id for (id, item) in self.storage_config_dict.items() |
2451 | + if item['type'] == 'disk']: |
2452 | + vol_path = self.storage_volume_dict.get(id) |
2453 | + (vol_kname, ls_id) = self._resolve_volpath(vol_path, |
2454 | + ls_input='ls_id') |
2455 | + for bs in ['lbs', 'pbs']: |
2456 | + with open(os.path.join(self.td.collect, |
2457 | + bs + '_' + vol_kname), 'r') as fp: |
2458 | + size = int(fp.read()) |
2459 | + self.assertEqual(self.disk_block_size, size) |
2460 | |
2461 | def test_disk_block_size_with_blockdev(self): |
2462 | """ validate maas setting |
2463 | @@ -74,14 +53,19 @@ |
2464 | --getpbsz get physical block (sector) size |
2465 | --getbsz get blocksize |
2466 | """ |
2467 | - for syscall in ['getss', 'getpbsz']: |
2468 | - with open(os.path.join(self.td.collect, |
2469 | - 'vda_blockdev_' + syscall), 'r') as fp: |
2470 | - size = int(fp.read()) |
2471 | - self.assertEqual(self.disk_block_size, size) |
2472 | - |
2473 | - |
2474 | -class PreciseUefiTestBasic(relbase.precise, TestBasicAbs): |
2475 | + for id in [id for (id, item) in self.storage_config_dict.items() |
2476 | + if item['type'] == 'disk']: |
2477 | + vol_path = self.storage_volume_dict.get(id) |
2478 | + (vol_kname, ls_id) = self._resolve_volpath(vol_path, |
2479 | + ls_input='ls_id') |
2480 | + for syscall in ['getss', 'getpbsz']: |
2481 | + with open(os.path.join(self.td.collect, |
2482 | + 'blockdev_' + syscall + '_' + vol_kname), 'r') as fp: |
2483 | + size = int(fp.read()) |
2484 | + self.assertEqual(self.disk_block_size, size) |
2485 | + |
2486 | + |
2487 | +class PreciseUefiTestBasic(relbase.precise, TestUefiBasicAbs): |
2488 | __test__ = True |
2489 | |
2490 | def test_ptable(self): |
2491 | @@ -91,7 +75,7 @@ |
2492 | print("test_dname does not work for Precise") |
2493 | |
2494 | |
2495 | -class TrustyUefiTestBasic(relbase.trusty, TestBasicAbs): |
2496 | +class TrustyUefiTestBasic(relbase.trusty, TestUefiBasicAbs): |
2497 | __test__ = True |
2498 | |
2499 | # FIXME(LP: #1523037): dname does not work on trusty, so we cannot expect |
2500 | @@ -104,15 +88,15 @@ |
2501 | print("test_ptable does not work for Trusty") |
2502 | |
2503 | |
2504 | -class WilyUefiTestBasic(relbase.wily, TestBasicAbs): |
2505 | - __test__ = True |
2506 | - |
2507 | - |
2508 | -class XenialUefiTestBasic(relbase.xenial, TestBasicAbs): |
2509 | - __test__ = True |
2510 | - |
2511 | - |
2512 | -class YakketyUefiTestBasic(relbase.yakkety, TestBasicAbs): |
2513 | +class WilyUefiTestBasic(relbase.wily, TestUefiBasicAbs): |
2514 | + __test__ = True |
2515 | + |
2516 | + |
2517 | +class XenialUefiTestBasic(relbase.xenial, TestUefiBasicAbs): |
2518 | + __test__ = True |
2519 | + |
2520 | + |
2521 | +class YakketyUefiTestBasic(relbase.yakkety, TestUefiBasicAbs): |
2522 | __test__ = True |
2523 | |
2524 | |
2525 | |
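The refactored UEFI block-size tests read one file per disk kname instead of the hard-coded vda_* names. A sketch of the expected layout, where 'collect' and 'vda' are hypothetical example values and 512 assumes the class default disk_block_size:

    import os

    collect, kname = 'collect', 'vda'
    for fname in ('lbs_%s' % kname, 'pbs_%s' % kname,
                  'blockdev_getss_%s' % kname, 'blockdev_getpbsz_%s' % kname):
        with open(os.path.join(collect, fname)) as fp:
            assert int(fp.read()) == 512  # logical/physical sector size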
2526 | === modified file 'tools/launch' |
2527 | --- tools/launch 2016-06-27 14:01:39 +0000 |
2528 | +++ tools/launch 2016-07-19 15:04:36 +0000 |
2529 | @@ -8,7 +8,7 @@ |
2530 | HTTP_PORT_MIN=${HTTP_PORT_MIN:-12000} |
2531 | HTTP_PORT_MAX=${HTTP_PORT_MAX:-65500} |
2532 | MY_D=$(dirname "$0") |
2533 | -DEFAULT_ROOT_ARG="root=LABEL=cloudimg-rootfs" |
2534 | +DEFAULT_ROOT_ARG="LABEL=cloudimg-rootfs" |
2535 | |
2536 | error() { echo "$@" 1>&2; } |
2537 | |
2538 | @@ -578,13 +578,12 @@ |
2539 | if [ -z "$root_arg" ]; then |
2540 | debug 1 "WARN: root_arg is empty with kernel." |
2541 | fi |
2542 | - append="${root_arg:+${root_arg} }ds=nocloud-net;seedfrom=$burl" |
2543 | +    append="${root_arg:+root=${root_arg} }ds=nocloud-net;seedfrom=$burl" |
2544 | |
2545 | - local console_name="" |
2546 | + local console_name="ttyS0" |
2547 | case "${arch_hint}" in |
2548 | - s390x) console_name="";; |
2549 | + s390x) console_name="ttyS0";; |
2550 | ppc64*) console_name="hvc0";; |
2551 | - *) console_name="ttyS0";; |
2552 | esac |
2553 | if [ -n "$console_name" ]; then |
2554 | append="${append} console=${console_name}" |
2555 | @@ -625,7 +624,7 @@ |
2556 | cmd=( |
2557 | xkvm "${pt[@]}" "${netargs[@]}" -- |
2558 | "${bios_opts[@]}" |
2559 | - -m ${mem} ${serial_args} ${video} |
2560 | + -smp 2 -m ${mem} ${serial_args} ${video} |
2561 | -drive "file=$bootimg,if=none,cache=unsafe,format=qcow2,id=boot,index=0" |
2562 | -device "virtio-blk,drive=boot" |
2563 | "${disk_args[@]}" |
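With the root= prefix moved out of DEFAULT_ROOT_ARG into launch itself and a serial console set by default, the assembled kernel append line ends up looking like this (the seedfrom URL is a hypothetical example):

    root=LABEL=cloudimg-rootfs ds=nocloud-net;seedfrom=http://10.0.2.2:12000/ console=ttyS0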
PASSED: Continuous integration, rev:383
https://server-team-jenkins.canonical.com/job/curtin-ci/286/
Executed test runs:
None: https://server-team-jenkins.canonical.com/job/generic-update-mp/283/console
Click here to trigger a rebuild:
https://server-team-jenkins.canonical.com/job/curtin-ci/286/rebuild