Merge ~juliank/autopkgtest/+git/development:blocking-io into ~juliank/autopkgtest/+git/development:master
Status: Superseded
Proposed branch: ~juliank/autopkgtest/+git/development:blocking-io
Merge into: ~juliank/autopkgtest/+git/development:master
Diff against target: 4971 lines (+2286/-985) (has conflicts), 42 files modified:

- .gitlab-ci.yml (+9/-9)
- Makefile (+1/-0)
- debian/README.source (+150/-0)
- debian/changelog (+128/-0)
- debian/control (+29/-26)
- debian/copyright (+11/-1)
- debian/rules (+2/-0)
- debian/tests/control (+8/-11)
- debian/tests/lxd (+2/-2)
- dev/null (+0/-15)
- doc/README.package-tests.rst (+39/-13)
- lib/VirtSubproc.py (+8/-6)
- lib/adt_binaries.py (+4/-4)
- lib/adt_testbed.py (+189/-183)
- lib/autopkgtest_args.py (+31/-1)
- lib/autopkgtest_qemu.py (+385/-0)
- lib/testdesc.py (+154/-34)
- runner/autopkgtest (+77/-28)
- runner/autopkgtest.1 (+34/-0)
- setup-commands/ro-apt (+7/-7)
- setup-commands/setup-testbed (+32/-12)
- ssh-setup/SKELETON (+5/-5)
- ssh-setup/nova (+2/-2)
- tests/autopkgtest (+215/-105)
- tests/autopkgtest_args (+7/-2)
- tests/mypy (+48/-0)
- tests/pycodestyle (+17/-7)
- tests/pyflakes (+13/-6)
- tests/qemu (+59/-0)
- tests/run-parallel (+14/-11)
- tests/shellcheck (+45/-0)
- tests/ssh-setup-lxd (+13/-13)
- tests/testdesc (+22/-9)
- tools/autopkgtest-build-lxc (+7/-3)
- tools/autopkgtest-build-lxd (+3/-3)
- tools/autopkgtest-build-qemu (+387/-282)
- tools/autopkgtest-buildvm-ubuntu-cloud (+30/-26)
- virt/autopkgtest-virt-lxc (+11/-6)
- virt/autopkgtest-virt-lxc.1 (+7/-0)
- virt/autopkgtest-virt-lxd (+10/-3)
- virt/autopkgtest-virt-qemu (+57/-139)
- virt/autopkgtest-virt-ssh (+14/-11)

Conflict in setup-commands/setup-testbed
Conflict in virt/autopkgtest-virt-qemu
Related bugs:
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Julian Andres Klode | Pending | ||
Review via email: mp+404085@code.launchpad.net
This proposal has been superseded by a proposal from 2021-06-11.
Commit message
Description of the change
Unmerged commits
- 237232c... by Julian Andres Klode
-
runner: Set sys.stderr to blocking
We see BlockingIOError quite a bit these days on Ubuntu's autopkgtest
cloud when trying to write to sys.stderr. Not sure why it's non-blocking,
seems Python sets it up that way sometimes...

Traceback (most recent call last):
  File "/home/ubuntu/autopkgtest/lib/VirtSubproc.py", line 740, in mainloop
    command()
  File "/home/ubuntu/autopkgtest/lib/VirtSubproc.py", line 669, in command
    r = f(c, ce)
  File "/home/ubuntu/autopkgtest/lib/VirtSubproc.py", line 364, in cmd_reboot
    caller.hook_wait_reboot(**wait_reboot_args)
  File "/home/ubuntu/autopkgtest/virt/autopkgtest-virt-ssh", line 486, in hook_wait_reboot
    wait_for_ssh(sshcmd, timeout=args.timeout_ssh)
  File "/home/ubuntu/autopkgtest/virt/autopkgtest-virt-ssh", line 320, in wait_for_ssh
    execute_setup_script('debug-failure', fail_ok=True)
  File "/home/ubuntu/autopkgtest/virt/autopkgtest-virt-ssh", line 208, in execute_setup_script
    sys.stderr.write(err)
BlockingIOError: [Errno 11] write could not complete without blocking
- ed4023b... by Iain Lane
-
nova: Drop nova network-show, we're all openstack now
- f0b6d2b... by Iain Lane
-
ssh-setup/nova: Use `openstack network show` in preference to `nova`
Once we no longer care about xenial's `openstack` client tools, we can
drop the fallback and upstream this.
- 42224fc... by Julian Andres Klode
-
UBUNTU: setup-testbed: Setup Acquire::Retries 10, like debci does
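For reference, the apt knob named in this commit title is set with a one-line configuration fragment like the following (the filename is illustrative, not taken from the commit):

```
# /etc/apt/apt.conf.d/90autopkgtest-retries -- illustrative filename.
# Makes apt retry transient download failures up to 10 times.
Acquire::Retries "10";
```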
- da7d955... by Julian Andres Klode
-
setup-testbed: Also remove needrestart
needrestart installs a dpkg hook that fails and inserts errors,
causing the apt test suite to fail (there's no way to isolate against
system dpkg hooks so far), and is installed in cloud images atm.
- d47c7f0... by Iain Lane
-
ssh-setup/nova: Quote arguments to `tr`
Otherwise they can be expanded as globs, which breaks badly if they
match anything in the cwd.

(cherry picked from commit a55a4f342688569f41a2937e942bee0dc8d05be5)
- 3875f4e... by Iain Lane
-
lxd: Add a hook_prepare_reboot and pass arguments from it to wait_reboot
We've got a race condition currently. When rebooting the testbed, the
flow is like this:

  hook_prepare_reboot
  reboot
  hook_wait_reboot

The idea is that hook_wait_reboot waits for the testbed to come back up,
so we can go on with the tests. For lxd, this means fetching the uptime
and waiting for it to go backwards, indicating that the reboot happened.

We fetch the 'initial' uptime in hook_wait_reboot, though. As this is
after we actually ask the testbed to reboot itself, there's a race
condition. If the testbed manages to go down before we get into
hook_wait_reboot, we will not be able to get the initial uptime.

Instead, add a prepare_reboot hook to autopkgtest-virt-lxd, which
fetches the initial uptime. Then it needs to be able to return a value
back to the caller, so that wait_reboot can know what it needs to
compare to. That needs a bit of adjustment to hook_wait_reboot's
interface, to allow it to accept arguments. We make this generic, so
it's optional across backends.
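The handshake this commit describes can be sketched in a few lines of Python; the backend class and driver below are stand-ins for illustration, not the actual autopkgtest code:

```python
# Sketch of the generic hook handshake (illustrative, not the real
# autopkgtest backend code): the driver calls the optional
# hook_prepare_reboot() *before* rebooting and forwards whatever dict
# it returns as keyword arguments to hook_wait_reboot().

class FakeLxdBackend:
    def __init__(self):
        self.uptime = 1000.0

    def hook_prepare_reboot(self):
        # Capture the baseline before the testbed can go down,
        # closing the race window described above.
        return {'initial_uptime': self.uptime}

    def hook_wait_reboot(self, initial_uptime=None):
        # After a reboot the uptime counter restarts near zero,
        # so "uptime went backwards" means the reboot happened.
        return self.uptime < initial_uptime


def reboot_testbed(caller):
    # Generic driver logic: backends without the hook still work.
    try:
        wait_reboot_args = caller.hook_prepare_reboot() or {}
    except AttributeError:
        wait_reboot_args = {}
    caller.uptime = 5.0  # simulate the testbed coming back up
    return caller.hook_wait_reboot(**wait_reboot_args)


print(reboot_testbed(FakeLxdBackend()))  # True
```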
-
TEMPORARY: RT#127293: install haveged again
rng-tools is not working everywhere, so continue to install haveged
in the meantime.
- 43c3f26... by Steve Langasek
-
Include cross-arch packages in pinning.
To ensure cross-installability of 'unstable' versions of libraries whose
native versions are also installed in the base system, and to ensure the
'unstable' versions of cross packages are used, our binary package pins must
include both native and cross packages. Some of the values included in the
pin will be nonsense (the cross variant of any Arch: all package), but this
doesn't matter.
- 57d7040... by Steve Langasek
-
Handle cross-arch test deps for packages which are not Arch: any.
If the package is only built on a subset of archs, the [arch] list is
part of the string in my_packages and needs to be accounted for. Don't
worry about the possibility that our target arch isn't actually in the
architecture list, since in that case it shouldn't be a test dep anyway.
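The parsing concern above can be illustrated with a small sketch; the helper function and regex are hypothetical, not the actual autopkgtest code:

```python
import re

def split_arch_qualifier(entry):
    """Split 'libfoo1 [amd64 i386]' into ('libfoo1', ['amd64', 'i386']).

    Hypothetical helper: entries for packages built only on a subset of
    architectures carry an [arch ...] qualifier that has to be separated
    from the package name before it can be used as a cross-arch test dep.
    """
    m = re.match(r'^\s*(\S+)\s*(?:\[([^]]*)\])?\s*$', entry)
    return m.group(1), (m.group(2) or '').split()

print(split_arch_qualifier('libfoo1 [amd64 i386]'))  # ('libfoo1', ['amd64', 'i386'])
print(split_arch_qualifier('libbar2'))               # ('libbar2', [])
```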
Preview Diff
1 | diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml |
2 | index f28fa86..0601456 100644 |
3 | --- a/.gitlab-ci.yml |
4 | +++ b/.gitlab-ci.yml |
5 | @@ -9,18 +9,18 @@ quicktests: |
6 | - tests/pyflakes |
7 | - tests/testdesc |
8 | |
9 | -tests-sid: |
10 | - stage: test |
11 | - image: debian:sid |
12 | +.tests: &tests |
13 | script: |
14 | - apt-get update |
15 | - apt-get install -y apt-utils autodep8 build-essential debhelper libdpkg-perl procps python3 python3-debian |
16 | - tests/autopkgtest NullRunner NullRunnerRoot ChrootRunner |
17 | |
18 | -tests-stretch: |
19 | +tests-sid: |
20 | stage: test |
21 | - image: debian:stretch |
22 | - script: |
23 | - - apt-get update |
24 | - - apt-get install -y apt-utils autodep8 build-essential debhelper libdpkg-perl procps python3 python3-debian |
25 | - - tests/autopkgtest NullRunner NullRunnerRoot ChrootRunner |
26 | + image: debian:sid |
27 | + <<: *tests |
28 | + |
29 | +tests-stable: |
30 | + stage: test |
31 | + image: debian:stable |
32 | + <<: *tests |
33 | diff --git a/Makefile b/Makefile |
34 | index 1582c44..6fc99b6 100644 |
35 | --- a/Makefile |
36 | +++ b/Makefile |
37 | @@ -53,6 +53,7 @@ programs = tools/autopkgtest-buildvm-ubuntu-cloud \ |
38 | pythonfiles = lib/VirtSubproc.py \ |
39 | lib/adtlog.py \ |
40 | lib/autopkgtest_args.py \ |
41 | + lib/autopkgtest_qemu.py \ |
42 | lib/adt_testbed.py \ |
43 | lib/adt_binaries.py \ |
44 | lib/testdesc.py \ |
45 | diff --git a/debian/README.source b/debian/README.source |
46 | new file mode 100644 |
47 | index 0000000..a6e9b02 |
48 | --- /dev/null |
49 | +++ b/debian/README.source |
50 | @@ -0,0 +1,150 @@ |
51 | +Testing autopkgtest backends |
52 | +============================ |
53 | + |
54 | +This is a cheat-sheet for developers of autopkgtest who do not have any |
55 | +particular requirements for the packages under test or the containers in |
56 | +which they are tested, and just want to prove that the various backends |
57 | +still work. |
58 | + |
59 | +The current working directory is assumed to be the autopkgtest source |
60 | +code. Omit the ./runner/ and ./tools/ prefixes to test the system copy. |
61 | + |
62 | +All examples refer to testing the 'util-linux' source package on amd64, |
63 | +in either Debian 10 or Ubuntu 18.04. Adjust as necessary for the |
64 | +distribution, architecture and package you actually want to test. |
65 | +util-linux is a convenient example of an Essential package with only |
66 | +trivial test coverage and few test-dependencies, hence quick to test. |
67 | + |
68 | +Commands prefixed with # need to be run as root, commands prefixed with $ |
69 | +can be run as an ordinary user. |
70 | + |
71 | +Run all this in a virtual machine if you don't want to run as root on |
72 | +the host system (for qemu this requires nested KVM). |
73 | + |
74 | +null |
75 | +---- |
76 | + |
77 | +No setup required, but you are responsible for installing build- |
78 | +and/or test-dependencies yourself. |
79 | + |
80 | +$ ./runner/autopkgtest util-linux -- null |
81 | + |
82 | +schroot |
83 | +------- |
84 | + |
85 | +# apt install schroot sbuild |
86 | +# mkdir /srv/chroot |
87 | +# sbuild-createchroot \ |
88 | +--arch=amd64 \ |
89 | +buster \ |
90 | +/srv/chroot/buster-amd64-sbuild |
91 | + |
92 | +(if you are in the sbuild group) |
93 | +$ ./runner/autopkgtest util-linux -- schroot buster-amd64-sbuild |
94 | +(or) |
95 | +# ./runner/autopkgtest util-linux -- schroot buster-amd64-sbuild |
96 | + |
97 | +Or for Ubuntu: |
98 | + |
99 | +# apt install ubuntu-keyring |
100 | +# sbuild-createchroot \ |
101 | +--arch=amd64 \ |
102 | +bionic \ |
103 | +/srv/chroot/bionic-amd64-sbuild |
104 | +# ./runner/autopkgtest util-linux -- schroot bionic-amd64-sbuild |
105 | + |
106 | +lxc |
107 | +--- |
108 | + |
109 | +This cheat-sheet assumes lxc (>= 3). |
110 | + |
111 | +# apt install lxc |
112 | +# subnet=10.0.3 |
113 | +# cat > /etc/default/lxc-net <<EOF |
114 | +USE_LXC_BRIDGE="true" |
115 | +LXC_BRIDGE="lxcbr0" |
116 | +LXC_ADDR="${subnet}.1" |
117 | +LXC_NETMASK="255.255.255.0" |
118 | +LXC_NETWORK="${subnet}.0/24" |
119 | +LXC_DHCP_RANGE="${subnet}.2,${subnet}.254" |
120 | +LXC_DHCP_MAX="253" |
121 | +LXC_DHCP_CONFILE="" |
122 | +LXC_DOMAIN="" |
123 | +EOF |
124 | +# cat > /etc/lxc/default.conf <<EOF |
125 | +lxc.net.0.type = veth |
126 | +lxc.net.0.link = lxcbr0 |
127 | +lxc.net.0.flags = up |
128 | +lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx |
129 | +lxc.apparmor.profile = unconfined |
130 | +EOF |
131 | +# service lxc restart |
132 | + |
133 | +# ./tools/autopkgtest-build-lxc debian buster amd64 |
134 | + |
135 | +# ./runner/autopkgtest util-linux -- lxc autopkgtest-buster-amd64 |
136 | +(or) |
137 | +$ ./runner/autopkgtest util-linux -- lxc --sudo autopkgtest-buster-amd64 |
138 | + |
139 | +Or for Ubuntu: |
140 | + |
141 | +# ./tools/autopkgtest-build-lxc ubuntu bionic amd64 |
142 | +# ./runner/autopkgtest util-linux -- lxc autopkgtest-bionic-amd64 |
143 | + |
144 | +lxd |
145 | +--- |
146 | + |
147 | +lxd is not available in Debian, only from third-party snap repositories. |
148 | + |
149 | +# apt install snapd |
150 | +(log out and back in to add /snap/bin to PATH) |
151 | +# snap install lxd |
152 | + |
153 | +# lxd init |
154 | +(for a simple throwaway setup, accept all defaults) |
155 | + |
156 | +# ./tools/autopkgtest-build-lxd images:debian/buster/amd64 |
157 | +# lxc image list |
158 | +(you will see autopkgtest/debian/buster/amd64 listed) |
159 | +# ./runner/autopkgtest util-linux -- lxd autopkgtest/debian/buster/amd64 |
160 | + |
161 | +Or for Ubuntu: |
162 | + |
163 | +# ./tools/autopkgtest-build-lxd ubuntu:bionic |
164 | +# lxc image list |
165 | +(you will see autopkgtest/ubuntu/bionic/amd64 listed) |
166 | +# ./runner/autopkgtest util-linux -- lxd autopkgtest/ubuntu/bionic/amd64 |
167 | + |
168 | +qemu |
169 | +---- |
170 | + |
171 | +This can be done in a VM: |
172 | + |
173 | +# apt install qemu-utils vmdb2 |
174 | +# ./tools/autopkgtest-build-qemu buster ./buster.qcow2 |
175 | + |
176 | +This can be done in a VM if you have nested KVM enabled, or on the host |
177 | +system. The unprivileged user needs write access to /dev/kvm, but no other |
178 | +privileges: |
179 | + |
180 | +# apt install qemu-system-x86 qemu-utils |
181 | +$ ./runner/autopkgtest util-linux -- qemu ./buster.qcow2 |
182 | + |
183 | +autopkgtest-build-qemu doesn't currently work to build Ubuntu images, |
184 | +because vmdb2 assumes grub-install supports the --force-extra-removable |
185 | +option, but Ubuntu's grub-install doesn't have that option. |
186 | +Instead use a cloud image, which can be done unprivileged: |
187 | + |
188 | +$ ./tools/autopkgtest-buildvm-ubuntu-cloud --release=bionic |
189 | +$ ./runner/autopkgtest util-linux -- qemu ./autopkgtest-bionic-amd64.img |
190 | + |
191 | +(If you're running a VM inside a VM, you might need to pass something |
192 | +like --ram-size=512 after the qemu argument to make the inner VM use |
193 | +strictly less memory.) |
194 | + |
195 | +ssh (without a setup script) |
196 | +---------------------------- |
197 | + |
198 | +Prepare 'machine' however you want to, then: |
199 | + |
200 | +$ autopkgtest util-linux -- ssh -H machine |
201 | diff --git a/debian/changelog b/debian/changelog |
202 | index d503cc7..7c63c27 100644 |
203 | --- a/debian/changelog |
204 | +++ b/debian/changelog |
205 | @@ -1,3 +1,131 @@ |
206 | +autopkgtest (5.15) unstable; urgency=medium |
207 | + |
208 | + [ Sebastien Delafond ] |
209 | + * Remove left over .new containers before trying to generate a new one |
210 | + (Closes: #971749) |
211 | + |
212 | + [ Antonio Terceiro ] |
213 | + * virt-lxc: extract common initial argument list for lxc-copy |
214 | + * virt-lxc: add option to limit disk usage by tests |
215 | + |
216 | + [ Paul Gevers ] |
217 | + * tests/lxd: mark test skippable and exit 77 in stead of 0 in case of |
218 | + balling-out |
219 | + * Add support for Architecture field (Closes: #970513) |
220 | + * Check for empty Tests field (Closes: #918882) |
221 | + * With --test-name, don't report when other tests are skipped |
222 | + (Closes: #960267) |
223 | + |
224 | + [ Simon McVittie ] |
225 | + * Check restrictions with testbed compat, not during initialization |
226 | + * Allow restrictions to be ignored from the command line |
227 | + |
228 | + [ Ivo De Decker ] |
229 | + * Assume root-on-testbed with autopkgtest-virt-ssh and improve debugging |
230 | + (Closes: #958727) |
231 | + |
232 | + -- Paul Gevers <elbrus@debian.org> Mon, 26 Oct 2020 21:27:25 +0100 |
233 | + |
234 | +autopkgtest (5.14) unstable; urgency=medium |
235 | + |
236 | + [ Christian Kastner ] |
237 | + * autopkgtest-build-qemu: Support for vmdb2->qemu-debootstrap |
238 | + (Closes: #959389) |
239 | + |
240 | + [ Antonio Terceiro ] |
241 | + * autopkgtest-build-qemu: make sure VM can resolve its own hostname |
242 | + (Closes: #959713) |
243 | + * autopkgtest: add --validate option |
244 | + |
245 | + [ Iain Lane ] |
246 | + * autopkgtest-virt-ssh: Give the wait_port_down socket a timeout |
247 | + |
248 | + [ Simon McVittie ] |
249 | + * Avoid using 'l' as a variable name |
250 | + * qemu: Guess format of main disk image (Closes: #968598) |
251 | + * tools: Don't make qemu guess what format the disk image is |
252 | + |
253 | + [ Paul Gevers ] |
254 | + * Bump standards |
255 | + * Bump debhelper compat to 13 via debhelper-compat BD |
256 | + * Add Rules-Requires-Root: no |
257 | + * Drop postinst script as all supported releases have higher versions |
258 | + |
259 | + -- Paul Gevers <elbrus@debian.org> Tue, 01 Sep 2020 21:28:29 +0200 |
260 | + |
261 | +autopkgtest (5.13.1) unstable; urgency=medium |
262 | + |
263 | + * autopkgtest-build-qemu: revert commit that broke image creation |
264 | + (Closes: #956659) |
265 | + |
266 | + -- Antonio Terceiro <terceiro@debian.org> Fri, 17 Apr 2020 22:02:07 -0300 |
267 | + |
268 | +autopkgtest (5.13) unstable; urgency=medium |
269 | + |
270 | + [ Gordon Ball ] |
271 | + * Use pyflakes3 instead of pyflakes (Closes: #956338) |
272 | + |
273 | + [ Paul Gevers ] |
274 | + * Add support for needs-internet restriction |
275 | + * Add note about ftp-master ruling (Closes: #954157) |
276 | + * README.package-tests.rst: add documentation about needs-internet |
277 | + restriction |
278 | + * Update 5.12 changelog entry with bug number for qemu on ppc64el item |
279 | + |
280 | + -- Paul Gevers <elbrus@debian.org> Thu, 16 Apr 2020 21:07:58 +0200 |
281 | + |
282 | +autopkgtest (5.12.1) unstable; urgency=medium |
283 | + |
284 | + [ Antonio Terceiro ] |
285 | + * adt_testbed: ignore debian/control when checking for dependencies. This |
286 | + fixes a regression observed in the debci test suite. |
287 | + |
288 | + -- Antonio Terceiro <terceiro@debian.org> Sat, 04 Apr 2020 12:20:17 -0300 |
289 | + |
290 | +autopkgtest (5.12) unstable; urgency=medium |
291 | + |
292 | + [ Dan Streetman ] |
293 | + * tools/autopkgtest-build-lxd: pass /dev/null on stdin to lxc launch |
294 | + (LP: #1845037) |
295 | + * autopkgtest: When finding src pkg, skip binary pkgs for other archs |
296 | + (LP: #1845157) (Closes: #939790) |
297 | + * autopkgtest: when checking binary pkg arch, allow *-$ARCH-* values also |
298 | + |
299 | + [ Antonio Terceiro ] |
300 | + * Drop unpacking of dependencies to temporary dir |
301 | + |
302 | + [ Iain Lane ] |
303 | + * setup-testbed: Install rng-tools |
304 | + * lib/adt_testbed.py, runner/autopkgtest: Run --shell-fail in more situations |
305 | + * adt_testbed: Run the debug-fail command in more circumstances |
306 | + |
307 | + [ Sébastien Delafond ] |
308 | + * Add kali support |
309 | + |
310 | + [ Simon McVittie ] |
311 | + * lxd: Actually exit the awk loop after the first apt source |
312 | + * build-lxd: Quote MIRROR and RELEASE properly |
313 | + * d/README.source: Document how to test various backends |
314 | + |
315 | + [ Paul Gevers ] |
316 | + * adt_testbed.py: add date to the start of log so we always have it |
317 | + (Closes: #954366) |
318 | + * Do not ignore distributions that have dashes in their name |
319 | + * lxc/lxd: wait for sysvinit services to finish |
320 | + Thanks to Lars Kruse (Closes: #953655) |
321 | + * lxc: increase timeout around reboot |
322 | + * tests/lxd: add skip-not-installable (Closes: #952594) |
323 | + |
324 | + [ Jelmer Vernooĳ ] |
325 | + * Document that $HOME will exist and will be writeable. |
326 | + |
327 | + [ Thierry Fauck ] |
328 | + * autopkgtest-build-qemu: create primary partition of type Prep to |
329 | + support ppc64el with grub2 (Closes: #926945) |
330 | + * Add proper kernel release name for ppc64el |
331 | + |
332 | + -- Paul Gevers <elbrus@debian.org> Thu, 02 Apr 2020 10:36:09 +0200 |
333 | + |
334 | autopkgtest (5.11) unstable; urgency=medium |
335 | |
336 | [ Dan Streetman ] |
337 | diff --git a/debian/compat b/debian/compat |
338 | deleted file mode 100644 |
339 | index ec63514..0000000 |
340 | --- a/debian/compat |
341 | +++ /dev/null |
342 | @@ -1 +0,0 @@ |
343 | -9 |
344 | diff --git a/debian/control b/debian/control |
345 | index 6b347f0..88d2eff 100644 |
346 | --- a/debian/control |
347 | +++ b/debian/control |
348 | @@ -1,39 +1,42 @@ |
349 | Source: autopkgtest |
350 | Maintainer: Debian CI team <team+ci@tracker.debian.org> |
351 | -Uploaders: Ian Jackson <ijackson@chiark.greenend.org.uk>, Martin Pitt <mpitt@debian.org>, Antonio Terceiro <terceiro@debian.org>, Paul Gevers <elbrus@debian.org> |
352 | +Uploaders: Ian Jackson <ijackson@chiark.greenend.org.uk>, |
353 | + Martin Pitt <mpitt@debian.org>, |
354 | + Antonio Terceiro <terceiro@debian.org>, |
355 | + Paul Gevers <elbrus@debian.org> |
356 | Section: devel |
357 | Priority: optional |
358 | -Standards-Version: 4.4.0 |
359 | -Build-Depends: debhelper (>= 9), |
360 | - python3 (>= 3.1), |
361 | - python3-mock, |
362 | - python3-debian, |
363 | - python3-docutils, |
364 | - pyflakes3 | pyflakes, |
365 | - procps, |
366 | - pycodestyle | pep8, |
367 | +Standards-Version: 4.5.0 |
368 | +Build-Depends: debhelper-compat (= 13), |
369 | + procps, |
370 | + pycodestyle | pep8, |
371 | + pyflakes3, |
372 | + python3 (>= 3.3), |
373 | + python3-debian, |
374 | + python3-docutils, |
375 | + python3-mock |
376 | +Rules-Requires-Root: no |
377 | Vcs-Git: https://salsa.debian.org/ci-team/autopkgtest.git |
378 | Vcs-Browser: https://salsa.debian.org/ci-team/autopkgtest |
379 | |
380 | Package: autopkgtest |
381 | Architecture: all |
382 | -Depends: python3, |
383 | - python3-debian, |
384 | - apt-utils, |
385 | - libdpkg-perl, |
386 | - procps, |
387 | - ${misc:Depends} |
388 | +Depends: apt-utils, |
389 | + libdpkg-perl, |
390 | + procps, |
391 | + python3, |
392 | + python3-debian, |
393 | + ${misc:Depends} |
394 | Recommends: autodep8 |
395 | -Suggests: |
396 | - lxc, |
397 | - lxd, |
398 | - ovmf, |
399 | - qemu-efi-aarch64, |
400 | - qemu-efi-arm, |
401 | - qemu-system, |
402 | - qemu-utils, |
403 | - schroot, |
404 | - vmdb2, |
405 | +Suggests: lxc, |
406 | + lxd, |
407 | + ovmf, |
408 | + qemu-efi-aarch64, |
409 | + qemu-efi-arm, |
410 | + qemu-system, |
411 | + qemu-utils, |
412 | + schroot, |
413 | + vmdb2 |
414 | Breaks: debci (<< 1.7~) |
415 | Description: automatic as-installed testing for Debian packages |
416 | autopkgtest runs tests on binary packages. The tests are run on the |
417 | diff --git a/debian/copyright b/debian/copyright |
418 | index 7b35827..c923c56 100644 |
419 | --- a/debian/copyright |
420 | +++ b/debian/copyright |
421 | @@ -1,7 +1,17 @@ |
422 | Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ |
423 | |
424 | Files: * |
425 | -Copyright: Copyright (C) 2006-2014 Canonical Ltd. |
426 | +Copyright: |
427 | + 2006-2018 Canonical Ltd. and others |
428 | + 2012-2017 Martin Pitt |
429 | + 2016-2020 Simon McVittie |
430 | + 2016-2020 Antonio Terceiro |
431 | + 2017 Jiri Palecek |
432 | + 2017-2020 Collabora Ltd. |
433 | + 2018 Thadeu Lima de Souza Cascardo |
434 | + 2019 Michael Biebl |
435 | + 2019 Raphaël Hertzog |
436 | + 2019 Sébastien Delafond |
437 | License: GPL-2+ |
438 | This program is free software; you can redistribute it and/or modify |
439 | it under the terms of the GNU General Public License as published by |
440 | diff --git a/debian/postinst b/debian/postinst |
441 | deleted file mode 100644 |
442 | index 3d29059..0000000 |
443 | --- a/debian/postinst |
444 | +++ /dev/null |
445 | @@ -1,15 +0,0 @@ |
446 | -#!/bin/sh |
447 | -set -e |
448 | - |
449 | -if [ "$1" = configure ] && dpkg --compare-versions "$2" lt-nl "3.9.3"; then |
450 | - # If this file exists and its only content is "force-unsafe-io", then it was |
451 | - # generated by #775076 and must be removed. |
452 | - if [ -e /etc/dpkg/dpkg.cfg.d/autopkgtest ]; then |
453 | - if [ "`cat /etc/dpkg/dpkg.cfg.d/autopkgtest`" = "force-unsafe-io" ]; then |
454 | - echo "Cleaning up erroneous /etc/dpkg/dpkg.cfg.d/autopkgtest..." |
455 | - rm -f /etc/dpkg/dpkg.cfg.d/autopkgtest |
456 | - fi |
457 | - fi |
458 | -fi |
459 | - |
460 | -#DEBHELPER# |
461 | diff --git a/debian/rules b/debian/rules |
462 | index 692b235..06867d9 100755 |
463 | --- a/debian/rules |
464 | +++ b/debian/rules |
465 | @@ -36,7 +36,9 @@ override_dh_auto_install: |
466 | override_dh_auto_test: |
467 | ifeq (, $(findstring nocheck, $(DEB_BUILD_OPTIONS))) |
468 | if type pyflakes3 >/dev/null 2>&1; then tests/pyflakes; else echo "pyflakes3 not available, skipping"; fi |
469 | + tests/mypy |
470 | tests/pycodestyle || true |
471 | + tests/shellcheck |
472 | tests/testdesc |
473 | tests/autopkgtest_args |
474 | env NO_PKG_MANGLE=1 tests/autopkgtest NullRunner |
475 | diff --git a/debian/tests/control b/debian/tests/control |
476 | index a59b3ef..e95dddb 100644 |
477 | --- a/debian/tests/control |
478 | +++ b/debian/tests/control |
479 | @@ -1,13 +1,10 @@ |
480 | Tests: autopkgtest |
481 | -Depends: autopkgtest, |
482 | - autodep8, |
483 | - build-essential, |
484 | - debhelper (>= 7) |
485 | +Depends: autodep8, autopkgtest, build-essential, debhelper (>= 7) |
486 | Restrictions: needs-root |
487 | Tests-Directory: tests |
488 | |
489 | Tests: pyflakes |
490 | -Depends: pyflakes |
491 | +Depends: pyflakes3 |
492 | Tests-Directory: tests |
493 | |
494 | Tests: installed |
495 | @@ -15,9 +12,9 @@ Depends: autopkgtest |
496 | |
497 | Tests: lxd |
498 | Depends: autopkgtest, |
499 | - lxd, |
500 | - build-essential, |
501 | - debhelper (>= 7), |
502 | - fakeroot, |
503 | - iptables |
504 | -Restrictions: isolation-machine, needs-root, allow-stderr |
505 | + build-essential, |
506 | + debhelper (>= 7), |
507 | + fakeroot, |
508 | + iptables, |
509 | + lxd |
510 | +Restrictions: isolation-machine, needs-root, allow-stderr, skip-not-installable, skippable |
511 | diff --git a/debian/tests/lxd b/debian/tests/lxd |
512 | index f60ab03..d026f70 100755 |
513 | --- a/debian/tests/lxd |
514 | +++ b/debian/tests/lxd |
515 | @@ -4,12 +4,12 @@ arch=$(dpkg --print-architecture) |
516 | if [ "$arch" != i386 -a "$arch" != amd64 ]; then |
517 | # we don't have LXD images for most non-x86 architectures |
518 | echo "Skipping on non-x86 architecture $arch" |
519 | - exit 0 |
520 | + exit 77 |
521 | fi |
522 | |
523 | if [ -z "${AUTOPKGTEST_NORMAL_USER-}" ]; then |
524 | echo "Skipping test because it requires an AUTOPKGTEST_NORMAL_USER" |
525 | - exit 0 |
526 | + exit 77 |
527 | fi |
528 | |
529 | # Detect LXD API extensions |
530 | diff --git a/doc/README.package-tests.rst b/doc/README.package-tests.rst |
531 | index 6c861f0..a5ed023 100644 |
532 | --- a/doc/README.package-tests.rst |
533 | +++ b/doc/README.package-tests.rst |
534 | @@ -40,6 +40,9 @@ During execution of the test, the environment variable |
535 | particular test, which starts empty and will be deleted afterwards (so |
536 | there is no need for the test to clean up files left there). |
537 | |
538 | +Tests can expect that the ``$HOME`` environment variable to be set |
539 | +to a directory that exists and is writeable by the user running the test. |
540 | + |
541 | If tests want to create artifacts which are useful to attach to test |
542 | results, such as additional log files or screenshots, they can put them |
543 | into the directory specified by the ``$AUTOPKGTEST_ARTIFACTS`` |
544 | @@ -172,6 +175,17 @@ Classes: class-1 [, class-2 ...] |
545 | |
546 | Classes are separated by commas and/or whitespace. |
547 | |
548 | +Architecture: dpkg architecture field syntax |
549 | + When package tests are only supported on a limited set of |
550 | + architectures, or are known to not work on a particular (set of) |
551 | + architecture(s), this field can be used to define the supported |
552 | + architectures. The autopkgtest will be skipped when the |
553 | + architecture of the testbed doesn't match the content of this |
554 | + field. The format is the same as in debian/control, with the |
555 | + understanding that ``all`` is not allowed, and ``any`` means that |
556 | + the test will be run on every architecture, which is the default |
557 | + when not specifying this field at all. |
558 | + |
559 | Any unknown fields will cause the whole stanza to be skipped. |
560 | |
561 | Defined restrictions |
562 | @@ -245,6 +259,12 @@ needs-reboot |
563 | The test wants to reboot the machine using |
564 | ``/tmp/autopkgtest-reboot`` (see below). |
565 | |
566 | +needs-internet |
567 | + The test needs unrestricted internet access, e.g. to download test data |
568 | + that's not shipped as a package, or to test a protocol implementation |
569 | + against a test server. Please also see the note about Network access later |
570 | + in this document. |
571 | + |
572 | needs-recommends (deprecated) |
573 | Enable installation of recommended packages in apt for the test |
574 | dependencies. This does not affect build dependencies. |
575 | @@ -425,16 +445,22 @@ the testbed to use it. (Note that the standard tools like |
576 | autopkgtest-build-lxc or mk-sbuild automatically use the apt proxy from |
577 | the host system.) |
578 | |
579 | -In general, tests are also allowed to access the internet. As this |
580 | -usually makes tests less reliable, this should be kept to a minimum; but |
581 | -for many packages their main purpose is to interact with remote web |
582 | -services and thus their testing should actually cover those too, to |
583 | -ensure that the distribution package keeps working with their |
584 | -corresponding web service. |
585 | - |
586 | -Debian's production CI infrastructure allows unrestricted network |
587 | -access, in Ubuntu's infrastructure access to sites other than |
588 | -`*.ubuntu.com` and `*.launchpad.net` happens via a proxy (limited to |
589 | -DNS and http/https). |
590 | - |
591 | -.. vim: ft=rst tw=72 |
592 | +In general, tests should not access the internet themselves. If a test does use |
593 | +the internet outside of the pre-configured apt domain, the test must be marked |
594 | +with the needs-internet restriction. Using the internet usually makes tests |
595 | +less reliable, so this should be kept to a minimum. But for many packages their |
596 | +main purpose is to interact with remote web services and thus their testing |
597 | +should actually cover those too, to ensure that the distribution package keeps |
598 | +working with their corresponding web service. |
599 | + |
600 | +Please note that for Debian, the ftp-master have ruled (in their |
601 | +`REJECT-FAQ (Non-Main II) <https://ftp-master.debian.org/REJECT-FAQ.html>`_ |
602 | +that tests must not execute code they download. In particular, tests must not |
603 | +use external repositories to depend on software (as opposed to data) that is |
604 | +not in Debian. However, currently there is nothing preventing this. |
605 | + |
606 | +Debian's production CI infrastructure allows unrestricted network access |
607 | +on most workers. Tests with needs-internet can be skipped on some to avoid |
608 | +flaky behavior. In Ubuntu's infrastructure access to sites other than |
609 | +`*.ubuntu.com` and `*.launchpad.net` happens via a proxy (limited to DNS and |
610 | +http/https). |
611 | diff --git a/lib/VirtSubproc.py b/lib/VirtSubproc.py |
612 | index 847eac8..d12e3db 100644 |
613 | --- a/lib/VirtSubproc.py |
614 | +++ b/lib/VirtSubproc.py |
615 | @@ -31,9 +31,9 @@ import subprocess |
616 | import traceback |
617 | import errno |
618 | import time |
619 | -import pipes |
620 | import socket |
621 | import shutil |
622 | +import shlex |
623 | |
624 | import adtlog |
625 | |
626 | @@ -333,6 +333,7 @@ def reboot_testbed(): |
627 | def cmd_reboot(c, ce): |
628 | global downtmp |
629 | cmdnumargs(c, ce, 0, 1) |
630 | + wait_reboot_args = {} |
631 | if not downtmp: |
632 | bomb("`reboot' when not open") |
633 | if 'reboot' not in caller.hook_capabilities(): |
634 | @@ -350,7 +351,7 @@ def cmd_reboot(c, ce): |
635 | adtlog.debug('cmd_reboot: saved current downtmp, rebooting') |
636 | |
637 | try: |
638 | - caller.hook_prepare_reboot() |
639 | + wait_reboot_args = caller.hook_prepare_reboot() or {} |
640 | except AttributeError: |
641 | pass |
642 | |
643 | @@ -360,7 +361,7 @@ def cmd_reboot(c, ce): |
644 | else: |
645 | reboot_testbed() |
646 | |
647 | - caller.hook_wait_reboot() |
648 | + caller.hook_wait_reboot(**wait_reboot_args) |
649 | |
650 | # restore downtmp |
651 | check_exec(['sh', '-ec', 'for d in %s; do ' |
652 | @@ -529,7 +530,7 @@ def copyupdown_internal(wh, sd, upp): |
653 | |
654 | deststdout = devnull_read |
655 | srcstdin = devnull_read |
656 | - remfileq = pipes.quote(sd[iremote]) |
657 | + remfileq = shlex.quote(sd[iremote]) |
658 | if not dirsp: |
659 | rune = 'cat %s%s' % ('><'[upp], remfileq) |
660 | if upp: |
661 | @@ -587,8 +588,9 @@ def copyupdown_internal(wh, sd, upp): |
662 | status = subprocs[sdn].wait() |
663 | if not (status == 0 or (sdn == 0 and status == -13)): |
664 | timeout_stop() |
665 | - bomb("%s %s failed, status %d" % |
666 | - (wh, ['source', 'destination'][sdn], status)) |
667 | + adtlog.info("%s %s failed, status %d" % |
668 | + (wh, ['source', 'destination'][sdn], status)) |
669 | + raise FailedCmd(['copy-failed']) |
670 | timeout_stop() |
671 | except Timeout: |
672 | for sdn in [1, 0]: |
673 | diff --git a/lib/adt_binaries.py b/lib/adt_binaries.py |
674 | index 3cdaee0..31caef5 100644 |
675 | --- a/lib/adt_binaries.py |
676 | +++ b/lib/adt_binaries.py |
677 | @@ -107,10 +107,10 @@ class DebBinaries: |
678 | adtlog.debug('Binaries: publish reinstall checking...') |
679 | pkgs_reinstall = set() |
680 | pkg = None |
681 | - for l in open(aptupdate_out.host, encoding='UTF-8'): |
682 | - if l.startswith('Package: '): |
683 | - pkg = l[9:].rstrip() |
684 | - elif l.startswith('Status: install '): |
685 | + for line in open(aptupdate_out.host, encoding='UTF-8'): |
686 | + if line.startswith('Package: '): |
687 | + pkg = line[9:].rstrip() |
688 | + elif line.startswith('Status: install '): |
689 | if pkg in self.registered: |
690 | pkgs_reinstall.add(pkg) |
691 | adtlog.debug('Binaries: publish reinstall needs ' + pkg) |
692 | diff --git a/lib/adt_testbed.py b/lib/adt_testbed.py |
693 | index 4fe8874..10c0bd3 100644 |
694 | --- a/lib/adt_testbed.py |
695 | +++ b/lib/adt_testbed.py |
696 | @@ -24,17 +24,15 @@ import os |
697 | import sys |
698 | import errno |
699 | import time |
700 | -import pipes |
701 | import traceback |
702 | import re |
703 | +import shlex |
704 | import signal |
705 | import subprocess |
706 | import tempfile |
707 | import shutil |
708 | import urllib.parse |
709 | |
710 | -from debian import debian_support |
711 | - |
712 | import adtlog |
713 | import VirtSubproc |
714 | |
715 | @@ -48,7 +46,7 @@ class Testbed: |
716 | setup_commands=[], setup_commands_boot=[], add_apt_pockets=[], |
717 | copy_files=[], pin_packages=[], add_apt_sources=[], |
718 | add_apt_releases=[], apt_default_release=None, |
719 | - enable_apt_fallback=True): |
720 | + enable_apt_fallback=True, shell_fail=False, needs_internet='run', cross_arch=None): |
721 | self.sp = None |
722 | self.lastsend = None |
723 | self.scratch = None |
724 | @@ -56,6 +54,8 @@ class Testbed: |
725 | self._need_reset_apt = False |
726 | self.stop_sent = False |
727 | self.dpkg_arch = None |
728 | + self.cross_arch = cross_arch |
729 | + self.cross_env = [] |
730 | self.exec_cmd = None |
731 | self.output_dir = output_dir |
732 | self.shared_downtmp = None # testbed's downtmp on the host, if supported |
733 | @@ -79,6 +79,8 @@ class Testbed: |
734 | self.eatmydata_prefix = [] |
735 | self.apt_pin_for_releases = [] |
736 | self.enable_apt_fallback = enable_apt_fallback |
737 | + self.needs_internet = needs_internet |
738 | + self.shell_fail = shell_fail |
739 | self.nproc = None |
740 | self.cpu_model = None |
741 | self.cpu_flags = None |
742 | @@ -95,6 +97,9 @@ class Testbed: |
743 | return self.user or 'root' |
744 | |
745 | def start(self): |
746 | + # log date at least once; to ease finding it |
747 | + adtlog.info('starting date: %s' % time.strftime('%Y-%m-%d')) |
748 | + |
749 | # are we running from a checkout? |
750 | root_dir = os.path.dirname(os.path.dirname(os.path.realpath(__file__))) |
751 | if os.path.exists(os.path.join(root_dir, '.git')): |
752 | @@ -110,7 +115,7 @@ class Testbed: |
753 | |
754 | # log command line invocation for the log |
755 | adtlog.info('host %s; command line: %s' % ( |
756 | - os.uname()[1], ' '.join([pipes.quote(w) for w in sys.argv]))) |
757 | + os.uname()[1], ' '.join([shlex.quote(w) for w in sys.argv]))) |
758 | |
759 | self.sp = subprocess.Popen(self.vserver_argv, |
760 | stdin=subprocess.PIPE, |
761 | @@ -135,6 +140,7 @@ class Testbed: |
762 | self.sp.stdin.close() |
763 | ec = self.sp.wait() |
764 | if ec: |
765 | + self.command('auxverb_debug_fail') |
766 | self.bomb('testbed gave exit status %d after quit' % ec) |
767 | self.sp = None |
768 | |
769 | @@ -209,6 +215,8 @@ class Testbed: |
770 | if rc: |
771 | # setup scripts should exit with 100 if it's the package's |
772 | # fault, otherwise it's considered a transient testbed failure |
773 | + if self.shell_fail: |
774 | + self.run_shell() |
775 | if rc == 100: |
776 | self.badpkg('testbed boot setup commands failed with status 100') |
777 | else: |
778 | @@ -221,6 +229,8 @@ class Testbed: |
779 | self.recommends_installed = False |
780 | self.exec_cmd = list(map(urllib.parse.unquote, self.command('print-execute-command', (), 1)[0].split(','))) |
781 | self.caps = self.command('capabilities', (), None) |
782 | + if self.needs_internet in ['try', 'run']: |
783 | + self.caps.append('has_internet') |
784 | adtlog.debug('testbed capabilities: %s' % self.caps) |
785 | for c in self.caps: |
786 | if c.startswith('downtmp-host='): |
787 | @@ -239,6 +249,23 @@ class Testbed: |
788 | self.dpkg_arch = self.check_exec(['dpkg', '--print-architecture'], True).strip() |
789 | adtlog.info('testbed dpkg architecture: ' + self.dpkg_arch) |
790 | |
791 | + # set up environment for cross-architecture if needed |
792 | + if self.cross_arch: |
793 | + self.cross_env = [] |
794 | + argv = ['dpkg-architecture', '-a', self.cross_arch] |
795 | + # ignore stderr |
796 | + (code, vars, err) = self.execute(argv, |
797 | + stdout=subprocess.PIPE, |
798 | + stderr=subprocess.PIPE) |
799 | + if code != 0: |
800 | + self.bomb('"%s" failed with status %i' % (' '.join(argv), code), |
801 | + adtlog.AutopkgtestError) |
802 | + |
803 | + for var in vars.split('\n'): |
804 | + if var.startswith('DEB_HOST'): |
805 | + self.cross_env.append(var) |
806 | + adtlog.info('testbed target architecture: ' + self.cross_arch) |
807 | + |
808 | # do we have eatmydata? |
809 | (code, out, err) = self.execute(['which', 'eatmydata'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) |
810 | if code == 0: |
811 | @@ -331,7 +358,7 @@ class Testbed: |
812 | [self._get_default_release() + '-updates']) |
813 | if self.add_apt_releases: |
814 | get_mirror_and_components = ''' |
815 | - sed -rn 's/^(deb|deb-src) +(\\[.*\\] *)?((http|https|file):[^ ]*) +([^ -]+) +(.*)$/\\2\\3 \\6/p' \\ |
816 | + sed -rn 's/^(deb|deb-src) +(\\[.*\\] *)?((http|https|file):[^ ]*) +([^ ]+) +(.*)$/\\2\\3 \\6/p' \\ |
817 | /etc/apt/sources.list `ls /etc/apt/sources.list.d/*.list 2>/dev/null|| true` | head -1 |
818 | ''' |
819 | mirror_and_components = self.check_exec(['sh', '-ec', get_mirror_and_components], stdout=True).split() |
820 | @@ -365,6 +392,8 @@ class Testbed: |
821 | if rc: |
822 | # setup scripts should exit with 100 if it's the package's |
823 | # fault, otherwise it's considered a transient testbed failure |
824 | + if self.shell_fail: |
825 | + self.run_shell() |
826 | if rc == 100: |
827 | self.badpkg('testbed setup commands failed with status 100') |
828 | else: |
829 | @@ -403,7 +432,23 @@ class Testbed: |
830 | self.recommends_installed = recommends |
831 | if not deps_new: |
832 | return |
833 | - self.satisfy_dependencies_string(', '.join(deps_new), 'install-deps', recommends, shell_on_failure=shell_on_failure, synth_deps=synth_deps) |
834 | + if self.cross_arch: |
835 | + # mock up a .dsc with our dependencies string as our build-deps |
836 | + dsc = TempPath(self, 'autopkgtest-satdep.dsc') |
837 | + with open(dsc.host, 'w', encoding='UTF-8') as f_dsc: |
838 | + f_dsc.write("Source: autopkgtest-satdep\n" |
839 | + "Binary: autopkgtest-satdep\n" |
840 | + "Architecture: any all\n" |
841 | + "Build-Depends: " + ', '.join(deps_new) + "\n") |
842 | + dsc.copydown() |
843 | + |
844 | + # feed the result to apt-get build-dep |
845 | + self.satisfy_build_deps(dsc.tb, recommends, shell_on_failure) |
846 | + # Now handle the synthetic deps as well |
847 | + self.install_apt('', recommends, shell_on_failure, synth_deps) |
848 | + |
849 | + else: |
850 | + self.satisfy_dependencies_string(', '.join(deps_new), 'install-deps', recommends, shell_on_failure=shell_on_failure, synth_deps=synth_deps) |
851 | |
852 | def needs_reset(self): |
853 | # show what caused a reset |
854 | @@ -422,7 +467,7 @@ class Testbed: |
855 | adtlog.debug('%s %s' % (_type.__name__, m)) |
856 | raise _type(m) |
857 | |
858 | - def send(self, string): |
859 | + def send(self, string, debug_if_fails=True): |
860 | try: |
861 | adtlog.debug('sending command to testbed: ' + string) |
862 | self.sp.stdin.write(string) |
863 | @@ -430,11 +475,16 @@ class Testbed: |
864 | self.sp.stdin.flush() |
865 | self.lastsend = string |
866 | except Exception as e: |
867 | + # command() calls back into send() - avoid potential infinite |
868 | + # recursion |
869 | + if debug_if_fails: |
870 | + self.command('auxverb_debug_fail', debug_if_fails=False) |
871 | self.bomb('cannot send to testbed: %s' % e) |
872 | |
873 | def expect(self, keyword, nresults): |
874 | line = self.sp.stdout.readline() |
875 | if not line: |
876 | + self.command('auxverb_debug_fail') |
877 | self.bomb('unexpected eof from the testbed') |
878 | if not line.endswith('\n'): |
879 | self.bomb('unterminated line from the testbed') |
880 | @@ -445,9 +495,11 @@ class Testbed: |
881 | self.bomb('unexpected whitespace-only line from the testbed') |
882 | if ll[0] != keyword: |
883 | if self.lastsend is None: |
884 | + self.command('auxverb_debug_fail') |
885 | self.bomb("got banner `%s', expected `%s...'" % |
886 | (line, keyword)) |
887 | else: |
888 | + self.command('auxverb_debug_fail') |
889 | self.bomb("sent `%s', got `%s', expected `%s...'" % |
890 | (self.lastsend, line, keyword)) |
891 | ll = ll[1:] |
892 | @@ -457,7 +509,7 @@ class Testbed: |
893 | (self.lastsend, line, len(ll), nresults)) |
894 | return ll |
895 | |
896 | - def command(self, cmd, args=(), nresults=0, unquote=True): |
897 | + def command(self, cmd, args=(), nresults=0, unquote=True, debug_if_fails=True): |
898 | # pass args=[None,...] or =(None,...) to avoid more url quoting |
899 | if type(cmd) is str: |
900 | cmd = [cmd] |
901 | @@ -466,7 +518,7 @@ class Testbed: |
902 | else: |
903 | args = list(map(urllib.parse.quote, args)) |
904 | al = cmd + args |
905 | - self.send(' '.join(al)) |
906 | + self.send(' '.join(al), debug_if_fails=debug_if_fails) |
907 | ll = self.expect('ok', nresults) |
908 | if unquote: |
909 | ll = list(map(urllib.parse.unquote, ll)) |
910 | @@ -488,6 +540,7 @@ class Testbed: |
911 | env.append('APT_LISTBUGS_FRONTEND=none') |
912 | env.append('APT_LISTCHANGES_FRONTEND=none') |
913 | env += self.install_tmp_env |
914 | + env += self.cross_env |
915 | |
916 | adtlog.debug('testbed command %s, kind %s, sout %s, serr %s, env %s' % |
917 | (argv, kind, stdout and 'pipe' or 'raw', |
918 | @@ -520,6 +573,7 @@ class Testbed: |
919 | adtlog.error(msg) |
920 | raise |
921 | else: |
922 | + self.command('auxverb_debug_fail') |
923 | self.bomb(msg) |
924 | |
925 | adtlog.debug('testbed command exited with code %i' % proc.returncode) |
926 | @@ -756,176 +810,6 @@ Description: satisfy autopkgtest test dependencies |
927 | |
928 | self.execute(['dpkg', '--purge', 'autopkgtest-satdep']) |
929 | |
930 | - def install_tmp(self, deps, recommends=False): |
931 | - '''Unpack dependencies into temporary directory |
932 | - |
933 | - This is a fallback if the testbed does not have root privileges or a |
934 | - writable file system, and will only work for packages that can be |
935 | - used from a different directory with PATH, LD_LIBRARY_PATH, PYTHONPATH |
936 | - etc. set. |
937 | - |
938 | - Sets/updates self.install_tmp_env to necessary variables. |
939 | - ''' |
940 | - unsupported = [] |
941 | - pkg_constraints = {} # pkg -> (relation, version) |
942 | - |
943 | - # parse deps into pkg_constraints |
944 | - dep_re = re.compile( |
945 | - r'(?P<p>[a-z0-9+-.]+)\s*' |
946 | - r'(\((?P<r><<|<=|>=|=|>>)\s*(?P<v>[^\)]*)\))?$') |
947 | - for dep in deps.split(','): |
948 | - dep = dep.strip() |
949 | - if not dep: |
950 | - continue # trailing comma |
951 | - m = dep_re.match(dep) |
952 | - if not m: |
953 | - unsupported.append(dep) |
954 | - continue |
955 | - pkg_constraints[m.group('p')] = (m.group('r'), m.group('v')) |
956 | - |
957 | - adtlog.debug('install_tmp: "%s" -> %s, unsupported: %s' % |
958 | - (deps, pkg_constraints, unsupported)) |
959 | - |
960 | - if unsupported: |
961 | - adtlog.warning('The following dependencies cannot be handled in ' |
962 | - 'reduced "unpack to temporary directory" mode: ' + |
963 | - ', '.join(unsupported)) |
964 | - |
965 | - # simulate installation, grab packages, and check constraints |
966 | - (rc, out, _) = self.execute(['apt-get', '--quiet', '--simulate', '--no-remove', |
967 | - '-o', 'Debug::pkgProblemResolver=true', |
968 | - '-o', 'Debug::NoLocking=true', |
969 | - '-o', 'APT::Install-Recommends=%s' % recommends, |
970 | - '-o', 'APT::Get::Show-User-Simulation-Note=False', |
971 | - 'install'] + list(pkg_constraints), |
972 | - stdout=subprocess.PIPE) |
973 | - if rc != 0: |
974 | - self.badpkg('Test dependencies are unsatisfiable. A common reason is ' |
975 | - 'that your testbed is out of date with respect to the ' |
976 | - 'archive, and you need to use a current testbed, or ' |
977 | - 'try "--setup-commands ro-apt-update".') |
978 | - |
979 | - def check_constraint(pkg, ver): |
980 | - constraint = pkg_constraints.get(pkg, (None, None)) |
981 | - if constraint[0] is None: |
982 | - return True |
983 | - comp = debian_support.version_compare(ver, constraint[1]) |
984 | - if constraint[0] == '<<': |
985 | - return comp < 0 |
986 | - if constraint[0] == '<=': |
987 | - return comp <= 0 |
988 | - if constraint[0] == '==': |
989 | - return comp == 0 |
990 | - if constraint[0] == '>=': |
991 | - return comp >= 0 |
992 | - if constraint[0] == '>>': |
993 | - return comp > 0 |
994 | - raise ValueError('invalid dependency version relation %s' % constraint[0]) |
995 | - |
996 | - to_install = [] |
997 | - for line in out.splitlines(): |
998 | - if not line.startswith('Inst '): |
999 | - continue |
1000 | - fields = line.split() |
1001 | - pkg = fields[1] |
1002 | - if fields[2].startswith('('): |
1003 | - ver = fields[2][1:] |
1004 | - elif fields[3].startswith('('): |
1005 | - ver = fields[3][1:] |
1006 | - else: |
1007 | - raise ValueError('Cannot parse line: %s' % line) |
1008 | - # ignore Python 2 stuff, with PYTHONPATH we can only support one |
1009 | - # Python major version (3) |
1010 | - if pkg.startswith('python-') or pkg.startswith('libpython-') or \ |
1011 | - 'python2.' in pkg or pkg == 'python': |
1012 | - adtlog.warning('Ignoring Python 2.x dependency %s, not ' |
1013 | - 'supported in unpack only mode' % pkg) |
1014 | - continue |
1015 | - if not check_constraint(pkg, ver): |
1016 | - self.badpkg('test dependency %s (%s %s) is unsatisfiable: available version %s' % |
1017 | - (pkg, pkg_constraints[pkg][0], pkg_constraints[pkg][1], ver)) |
1018 | - to_install.append(pkg) |
1019 | - |
1020 | - adtlog.debug('install_tmp: packages to install: %s' % ' '.join(to_install)) |
1021 | - |
1022 | - if not to_install: |
1023 | - # we already have everything, all good |
1024 | - return |
1025 | - |
1026 | - adtlog.warning('virtualisation system does not offer root or writable ' |
1027 | - 'testbed; unpacking dependencies to temporary dir, ' |
1028 | - 'which will only work for some packages') |
1029 | - |
1030 | - # download and unpack all debs |
1031 | - script = r'''d=%(t)s/deps |
1032 | -mkdir -p $d; cd $d |
1033 | -apt-get download %(pkgs)s >&2 |
1034 | -for p in *.deb; do dpkg-deb --extract $p .; rm $p; done |
1035 | - |
1036 | -# executables |
1037 | -echo PATH=$d/sbin:$d/bin:$d/usr/sbin:$d/usr/bin:$d/usr/games:$PATH |
1038 | - |
1039 | -# shared libraries / Qt plugins |
1040 | -l="" |
1041 | -q="" |
1042 | -for candidate in $(find $d -type d \( -name 'lib' -o -path '*/lib/*-linux-*' \)); do |
1043 | - [ -z "$(ls $candidate/*.so $candidate/*.so.* 2>/dev/null)" ] || l="$candidate:$l" |
1044 | - [ -z "$(ls $candidate/lib*qt*.so* 2>/dev/null)" ] || q="$candidate:$q" |
1045 | -done |
1046 | -[ -z "$l" ] || echo LD_LIBRARY_PATH=$l${LD_LIBRARY_PATH:-} |
1047 | -[ -z "$q" ] || echo QT_PLUGIN_PATH="$q" |
1048 | - |
1049 | -# ImageMagick needs some hacks to make python[3]-wand find its library |
1050 | -l="" |
1051 | -for ml in $(ls usr/lib/*-linux-*/libMagick*.so.* 2>/dev/null); do |
1052 | - if [ -L $ml ]; then continue; fi |
1053 | - l=$(dirname $ml) |
1054 | - ln -sf $(basename "$ml") "${ml%%.so.*}.so" |
1055 | -done |
1056 | -if [ -n "$l" ]; then |
1057 | - [ -d "$l/lib" ] || ln -sf . "$l/lib" |
1058 | - echo MAGICK_HOME="$d/$l" |
1059 | -fi |
1060 | - |
1061 | -# Python modules |
1062 | -p="" |
1063 | -for candidate in $d/usr/lib/python3*/dist-packages; do |
1064 | - [ ! -d $candidate ] || p="$candidate:$p" |
1065 | -done |
1066 | -[ -z "$p" ] || echo PYTHONPATH=$p${PYTHONPATH:-} |
1067 | - |
1068 | -# Perl modules |
1069 | -p="" |
1070 | -for candidate in $d/usr/share/perl* $d/usr/lib/perl5 $d/usr/lib/*/perl5/*; do |
1071 | - [ ! -d $candidate ] || p="$candidate:$p" |
1072 | -done |
1073 | -[ -z "$p" ] || echo PERL5LIB=$p${PERL5LIB:-} |
1074 | - |
1075 | -# gobject-introspection |
1076 | -l="" |
1077 | -if [ -d $d/usr/lib/girepository-1.0 ]; then |
1078 | - l=$d/usr/lib/girepository-1.0 |
1079 | -fi |
1080 | -for candidate in $(find $d -type d -path '*/usr/lib/*/girepository-*'); do |
1081 | - [ -z "$(ls $candidate/*.typelib 2>/dev/null)" ] || l="$candidate:$l" |
1082 | -done |
1083 | -[ -z "$l" ] || echo GI_TYPELIB_PATH="$l:${GI_TYPELIB_PATH:-}" |
1084 | - |
1085 | -# udev rules |
1086 | -if [ -n "$(ls $d/lib/udev/rules.d/*.rules 2>/dev/null)" ] && [ -w /run/udev ]; then |
1087 | - mkdir -p /run/udev/rules.d |
1088 | - cp $d/lib/udev/rules.d/*.rules /run/udev/rules.d/ |
1089 | - udevadm control --reload |
1090 | - udevadm trigger || true |
1091 | -fi |
1092 | -''' % {'t': self.scratch, 'pkgs': ' '.join(to_install)} |
1093 | - (rc, out, _) = self.execute(['sh', '-euc', script], |
1094 | - stdout=subprocess.PIPE, kind='install') |
1095 | - if rc != 0: |
1096 | - self.bomb('failed to download and unpack test dependencies') |
1097 | - self.install_tmp_env = [l.strip() for l in out.splitlines() if l] |
1098 | - adtlog.debug('install_tmp: env is now %s' % self.install_tmp_env) |
1099 | - |
1100 | def install_click(self, clickpath): |
1101 | # copy click into testbed |
1102 | tp = Path(self, clickpath, os.path.join( |
1103 | @@ -1021,6 +905,105 @@ fi |
1104 | if self.execute(['sh', adtlog.verbosity >= 2 and '-exc' or '-ec', script], kind='install')[0] != 0: |
1105 | self.bomb('Failed to update click AppArmor rules') |
1106 | |
1107 | + def _run_apt_build_dep(self, what, prefix, recommends, ignorerc=False): |
1108 | + '''actually run apt-get build-dep''' |
1109 | + |
1110 | + # capture status-fd to stderr |
1111 | + (rc, _, serr) = self.execute(['/bin/sh', '-ec', '%s apt-get build-dep ' |
1112 | + '--assume-yes %s ' |
1113 | + '-o APT::Status-Fd=3 ' |
1114 | + '-o APT::Install-Recommends=%s ' |
1115 | + '-o Dpkg::Options::=--force-confnew ' |
1116 | + '-o Debug::pkgProblemResolver=true 3>&2 2>&1' % |
1117 | + (prefix, what, recommends)], |
1118 | + kind='install', stderr=subprocess.PIPE) |
1119 | + if not ignorerc and rc != 0: |
1120 | + adtlog.debug('apt-get build-dep %s failed; status-fd:\n%s' % (what, serr)) |
1121 | + # check if apt failed during package download, which might be a |
1122 | + # transient error, so retry |
1123 | + if 'dlstatus:' in serr and 'pmstatus:' not in serr: |
1124 | + raise adtlog.AptDownloadError |
1125 | + else: |
1126 | + raise adtlog.AptPermanentError |
1127 | + |
1128 | + def satisfy_build_deps(self, source_pkg, recommends=False, |
1129 | + shell_on_failure=False): |
1130 | + '''Install build-dependencies into testbed with apt-get build-dep |
1131 | + |
1132 | + This requires root privileges and a writable file system. |
1133 | + ''' |
1134 | + # check if we can use apt-get |
1135 | + can_apt_get = False |
1136 | + if 'root-on-testbed' in self.caps: |
1137 | + rc = self.execute(['test', '-w', '/var/lib/dpkg/status'])[0] |
1138 | + if rc == 0: |
1139 | + can_apt_get = True |
1140 | + adtlog.debug('can use apt-get on testbed: %s' % can_apt_get) |
1141 | + |
1142 | + if not can_apt_get: |
1143 | + self.bomb('no root on testbed, unsupported when specifying target architecture') |
1144 | + |
1145 | + what = source_pkg |
1146 | + if self.cross_arch: |
1147 | + what += ' -a ' + self.cross_arch |
1148 | + # install the build-dependencies for the specified source (which can |
1149 | + # be a source package name, or a path); our apt pinning is not |
1150 | + # very clever wrt. resolving transitional dependencies in the pocket, |
1151 | + # so we might need to retry without pinning |
1152 | + download_fail_retries = 3 |
1153 | + while True: |
1154 | + rc = 0 |
1155 | + try: |
1156 | + self._run_apt_build_dep(what, ' '.join(self.eatmydata_prefix), recommends) |
1157 | + |
1158 | + # check if apt failed during package download, which might be a |
1159 | + # transient error, so retry |
1160 | + except adtlog.AptDownloadError: |
1161 | + download_fail_retries -= 1 |
1162 | + if download_fail_retries > 0: |
1163 | + adtlog.warning('apt failed to download packages, retrying in 10s...') |
1164 | + time.sleep(10) |
1165 | + continue |
1166 | + else: |
1167 | + self.bomb('apt repeatedly failed to download packages') |
1168 | + |
1169 | + except adtlog.AptPermanentError: |
1170 | + rc = -1 |
1171 | + if shell_on_failure: |
1172 | + self.run_shell() |
1173 | + |
1174 | + if rc != 0: |
1175 | + if self.apt_pin_for_releases and self.enable_apt_fallback: |
1176 | + release = self.apt_pin_for_releases.pop() |
1177 | + adtlog.warning('Test dependencies are unsatisfiable using apt pinning. ' |
1178 | + 'Retrying using all packages from %s' % release) |
1179 | + self.check_exec(['/bin/sh', '-ec', 'rm /etc/apt/preferences.d/autopkgtest-' + release]) |
1180 | + if not self.apt_pin_for_releases: |
1181 | + self.check_exec(['/bin/sh', '-ec', 'rm -f /etc/apt/preferences.d/autopkgtest-default-release']) |
1182 | + continue |
1183 | + |
1184 | + adtlog.warning('Test dependencies are unsatisfiable - calling ' |
1185 | + 'apt install on test deps directly for further ' |
1186 | + 'data about failing dependencies in test logs') |
1187 | + self._run_apt_build_dep('--simulate ' + what, |
1188 | + ' '.join(self.eatmydata_prefix), |
1189 | + recommends, ignorerc=True) |
1190 | + |
1191 | + if shell_on_failure: |
1192 | + self.run_shell() |
1193 | + if self.enable_apt_fallback: |
1194 | + self.badpkg('Test dependencies are unsatisfiable. A common reason is ' |
1195 | + 'that your testbed is out of date with respect to the ' |
1196 | + 'archive, and you need to use a current testbed or run ' |
1197 | + 'apt-get update or use -U.') |
1198 | + else: |
1199 | + self.badpkg('Test dependencies are unsatisfiable. A common reason is ' |
1200 | + 'that the requested apt pinning prevented dependencies ' |
1201 | + 'from the non-default suite to be installed. In that ' |
1202 | + 'case you need to add those dependencies to the pinning ' |
1203 | + 'list.') |
1204 | + break |
1205 | + |
1206 | def satisfy_dependencies_string(self, deps, what, recommends=False, |
1207 | build_dep=False, shell_on_failure=False, synth_deps=[]): |
1208 | '''Install dependencies from a string into the testbed''' |
1209 | @@ -1066,7 +1049,22 @@ fi |
1210 | if can_apt_get: |
1211 | self.install_apt(deps, recommends, shell_on_failure, synth_deps) |
1212 | else: |
1213 | - self.install_tmp(deps, recommends) |
1214 | + has_dpkg_checkbuilddeps = self.execute( |
1215 | + ['which', 'dpkg-checkbuilddeps'], |
1216 | + stdout=subprocess.DEVNULL, |
1217 | + stderr=subprocess.DEVNULL |
1218 | + )[0] == 0 |
1219 | + if has_dpkg_checkbuilddeps: |
1220 | + rc, _, err = self.execute( |
1221 | + ['dpkg-checkbuilddeps', '-d', deps, "/dev/null"], |
1222 | + stdout=subprocess.PIPE, |
1223 | + stderr=subprocess.PIPE |
1224 | + ) |
1225 | + if rc != 0: |
1226 | + missing = re.sub('dpkg-checkbuilddeps: error: Unmet build dependencies: ', '', err) |
1227 | + self.badpkg('test dependencies missing: %s' % missing) |
1228 | + else: |
1229 | +            adtlog.warning('test dependencies (%s) are not fully satisfied, but continuing anyway since dpkg-checkbuilddeps is not available to determine which ones are missing.' % deps)
1230 | |
1231 | def run_shell(self, cwd=None, extra_env=[]): |
1232 | '''Run shell in testbed for debugging tests''' |
1233 | @@ -1270,7 +1268,10 @@ fi |
1234 | elif rc == 77 and 'skippable' in test.restrictions: |
1235 | test.set_skipped('exit status 77 and marked as skippable') |
1236 | elif rc != 0: |
1237 | - test.failed('non-zero exit status %d' % rc) |
1238 | + if 'needs-internet' in test.restrictions and self.needs_internet == 'try': |
1239 | + test.set_skipped("Failed, but test has needs-internet and that's not guaranteed") |
1240 | + else: |
1241 | + test.failed('non-zero exit status %d' % rc) |
1242 | elif se_size != 0 and 'allow-stderr' not in test.restrictions: |
1243 | with open(se.host, encoding='UTF-8', errors='replace') as f: |
1244 | stderr_top = f.readline().rstrip('\n \t\r') |
1245 | @@ -1354,6 +1355,11 @@ fi |
1246 | '''awk '/^Package-List:/ { show=1; next } (/^ / && show==1) { print $1; next } { show=0 }' |''' \ |
1247 | '''sort -u | tr '\\n' ' ')"; ''' % \ |
1248 | ' '.join(srcpkgs) |
1249 | + if self.cross_arch: |
1250 | + script += 'PKGS="$PKGS $(apt-cache showsrc %s | ' \ |
1251 | + '''awk '/^Package-List:/ { show=1; next } (/^ / && show==1) { print $1 ":%s"; next } { show=0 }' |''' \ |
1252 | + '''sort -u | tr '\\n' ' ')"; ''' % \ |
1253 | + (' '.join(srcpkgs), self.cross_arch) |
1254 | |
1255 | # prefer given packages from series, but make sure that other packages |
1256 | # are taken from default release as much as possible |
1257 | @@ -1374,7 +1380,7 @@ fi |
1258 | if self.default_release is None: |
1259 | |
1260 | script = 'SRCS=$(ls /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2>/dev/null|| true); ' |
1261 | - script += r'''sed -rn '/^(deb|deb-src) +(\[.*\] *)?(http|https|file):/ { s/\[.*\] +//; s/^[^ ]+ +[^ ]* +([^ -]+) +.*$/\1/p }' $SRCS | head -n1''' |
1262 | + script += r'''sed -rn '/^(deb|deb-src) +(\[.*\] *)?(http|https|file):/ { s/\[.*\] +//; s/^[^ ]+ +[^ ]* +([^ ]+) +.*$/\1/p }' $SRCS | head -n1''' |
1263 | self.default_release = self.check_exec(['sh', '-ec', script], stdout=True).strip() |
1264 | |
1265 | return self.default_release |
1266 | diff --git a/lib/autopkgtest_args.py b/lib/autopkgtest_args.py |
1267 | index 7fea2d8..5237c6c 100644 |
1268 | --- a/lib/autopkgtest_args.py |
1269 | +++ b/lib/autopkgtest_args.py |
1270 | @@ -162,6 +162,13 @@ class ArgumentParser(argparse.ArgumentParser): |
1271 | return [arg_line.strip()] |
1272 | |
1273 | |
1274 | +class AppendCommaSeparatedArg(argparse.Action): |
1275 | + def __call__(self, parser, args, value, option_string=None): |
1276 | + result = getattr(args, self.dest, []) |
1277 | + result.extend(value.split(',')) |
1278 | + setattr(args, self.dest, result) |
1279 | + |
1280 | + |
1281 | def parse_args(arglist=None): |
1282 | '''Parse autopkgtest command line arguments. |
1283 | |
1284 | @@ -210,6 +217,11 @@ for details.''' |
1285 | help='Run tests from already installed click package ' |
1286 | '(e. g. "com.example.myapp"), from specified click ' |
1287 | 'source directory or manifest\'s x-source.') |
1288 | + g_test.add_argument('-a', '--architecture', metavar='ARCH', |
1289 | + help='run tests for (and when asked, build binaries ' |
1290 | + 'for) ARCH instead of the testbed host architecture. ' |
1291 | + 'Assumes ARCH is available as a foreign architecture ' |
1292 | + 'on the testbed.') |
1293 | g_test.add_argument('packages', nargs='*', |
1294 | help='testsrc source package and testbinary packages as above') |
1295 | |
1296 | @@ -247,7 +259,8 @@ for details.''' |
1297 | '{ [ "${O%404*Not Found*}" = "$O" ] || exit 100; sleep 15; apt-get update; }''' |
1298 | ' || { sleep 60; apt-get update; } || false)' |
1299 | ' && $(which eatmydata || true) apt-get dist-upgrade -y -o ' |
1300 | - 'Dpkg::Options::="--force-confnew"', |
1301 | + 'Dpkg::Options::="--force-confnew"' |
1302 | + ' && $(which eatmydata || true) apt-get --purge autoremove -y', |
1303 | help='Run apt update/dist-upgrade before the tests') |
1304 | g_setup.add_argument('--setup-commands-boot', metavar='COMMANDS_OR_PATH', |
1305 | action='append', default=[], |
1306 | @@ -299,6 +312,11 @@ for details.''' |
1307 | g_setup.add_argument('--env', metavar='VAR=value', |
1308 | action='append', default=[], |
1309 | help='Set arbitrary environment variable for builds and test') |
1310 | + g_setup.add_argument('--ignore-restrictions', default=[], |
1311 | + metavar='RESTRICTION[,RESTRICTION...]', |
1312 | + action=AppendCommaSeparatedArg, |
1313 | + help='Run tests even if these restrictions would ' |
1314 | + 'normally prevent it') |
1315 | |
1316 | # privileges |
1317 | g_priv = parser.add_argument_group('user/privilege handling options') |
1318 | @@ -346,6 +364,18 @@ for details.''' |
1319 | help='Set "parallel=N" DEB_BUILD_OPTION for building ' |
1320 | 'packages (default: number of available processors)') |
1321 | g_misc.add_argument( |
1322 | + '--needs-internet', dest='needs_internet', |
1323 | + choices=['run', 'try', 'skip'], |
1324 | + default='run', |
1325 | + help='Define how to handle the needs-internet restriction. With "try" ' |
1326 | + 'tests with needs-internet restrictions will be run, but if they fail ' |
1327 | + 'they will be treated as flaky tests. With "skip" these tests will be ' |
1328 | + 'skipped immediately and will not be run. With "run" the restriction ' |
1329 | + 'is basically ignored.') |
1330 | + g_misc.add_argument( |
1331 | + '-V', '--validate', action='store_true', default=False, |
1332 | + help='validate the test control file and exit') |
1333 | + g_misc.add_argument( |
1334 | '-h', '--help', action='help', default=argparse.SUPPRESS, |
1335 | help='show this help message and exit') |
1336 | |
1337 | diff --git a/lib/autopkgtest_qemu.py b/lib/autopkgtest_qemu.py |
1338 | new file mode 100644 |
1339 | index 0000000..01a652d |
1340 | --- /dev/null |
1341 | +++ b/lib/autopkgtest_qemu.py |
1342 | @@ -0,0 +1,385 @@ |
1343 | +#!/usr/bin/python3 |
1344 | +# |
1345 | +# This is not a stable API; for use within autopkgtest only. |
1346 | +# |
1347 | +# Part of autopkgtest. |
1348 | +# autopkgtest is a tool for testing Debian binary packages. |
1349 | +# |
1350 | +# Copyright 2006-2016 Canonical Ltd. |
1351 | +# Copyright 2016-2020 Simon McVittie |
1352 | +# Copyright 2017 Martin Pitt |
1353 | +# Copyright 2017 Jiri Palecek |
1354 | +# Copyright 2017-2018 Collabora Ltd. |
1355 | +# Copyright 2018 Thadeu Lima de Souza Cascardo |
1356 | +# Copyright 2019 Michael Biebl |
1357 | +# Copyright 2019 Raphaël Hertzog
1358 | +# |
1359 | +# autopkgtest-virt-qemu was developed by |
1360 | +# Martin Pitt <martin.pitt@ubuntu.com> |
1361 | +# |
1362 | +# This program is free software; you can redistribute it and/or modify |
1363 | +# it under the terms of the GNU General Public License as published by |
1364 | +# the Free Software Foundation; either version 2 of the License, or |
1365 | +# (at your option) any later version. |
1366 | +# |
1367 | +# This program is distributed in the hope that it will be useful, |
1368 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1369 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1370 | +# GNU General Public License for more details. |
1371 | +# |
1372 | +# You should have received a copy of the GNU General Public License |
1373 | +# along with this program; if not, write to the Free Software |
1374 | +# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. |
1375 | +# |
1376 | +# See the file CREDITS for a full list of credits information (often |
1377 | +# installed as /usr/share/doc/autopkgtest/CREDITS). |
1378 | + |
1379 | +import errno |
1380 | +import fcntl |
1381 | +import json |
1382 | +import os |
1383 | +import re |
1384 | +import shutil |
1385 | +import socket |
1386 | +import subprocess |
1387 | +import sys |
1388 | +import tempfile |
1389 | +import time |
1390 | +from typing import ( |
1391 | + List, |
1392 | + Optional, |
1393 | + Sequence, |
1394 | + Union, |
1395 | +) |
1396 | + |
1397 | +import VirtSubproc |
1398 | +import adtlog |
1399 | + |
1400 | + |
1401 | +def find_free_port(start: int) -> int: |
1402 | + '''Find an unused port in the range [start, start+50)''' |
1403 | + |
1404 | + for p in range(start, start + 50): |
1405 | + adtlog.debug('find_free_port: trying %i' % p) |
1406 | + try: |
1407 | + lockfile = '/tmp/autopkgtest-virt-qemu.port.%i' % p |
1408 | + f = None |
1409 | + try: |
1410 | + f = open(lockfile, 'x') |
1411 | + os.unlink(lockfile) |
1412 | + fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB) |
1413 | + except (IOError, OSError): |
1414 | + adtlog.debug('find_free_port: %i is locked' % p) |
1415 | + continue |
1416 | + finally: |
1417 | + if f: |
1418 | + f.close() |
1419 | + |
1420 | + s = socket.create_connection(('127.0.0.1', p)) |
1421 | + # if that works, the port is taken |
1422 | + s.close() |
1423 | + continue |
1424 | + except socket.error as e: |
1425 | + if e.errno == errno.ECONNREFUSED: |
1426 | + adtlog.debug('find_free_port: %i is free' % p) |
1427 | + return p |
1428 | + else: |
1429 | + pass |
1430 | + |
1431 | + adtlog.debug('find_free_port: all ports are taken') |
1432 | + return 0 |
1433 | + |
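The loop above pairs an advisory lock file (to serialize concurrent autopkgtest instances) with a TCP connect probe. A minimal standalone sketch of just the connect-probe half, using only the standard `socket` module (`probe_free_port` is a hypothetical name, not part of autopkgtest):

```python
import errno
import socket


def probe_free_port(start: int, count: int = 50) -> int:
    """Return the first port in [start, start + count) with no listener.

    A refused connection means nothing is listening, so the port is
    treated as free; a successful connection means it is taken.
    Returns 0 if every probed port is busy, like find_free_port above.
    """
    for port in range(start, min(start + count, 65536)):
        try:
            # If the connect succeeds, something already listens here.
            socket.create_connection(('127.0.0.1', port), timeout=1).close()
        except socket.error as e:
            if e.errno == errno.ECONNREFUSED:
                return port
            # Other errors (e.g. timeout) are treated as "maybe taken".
    return 0


# Demo: occupy an ephemeral port, then probe starting at it.
listener = socket.socket()
listener.bind(('127.0.0.1', 0))
listener.listen(1)
taken = listener.getsockname()[1]
free = probe_free_port(taken)   # taken itself is busy, so we skip past it
listener.close()
```

Without the lock file this probe is racy between processes, which is exactly why the real code adds one.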
1434 | + |
1435 | +def get_cpuflag() -> Sequence[str]: |
1436 | + '''Return QEMU cpu option list suitable for host CPU''' |
1437 | + |
1438 | + try: |
1439 | + with open('/proc/cpuinfo', 'r') as f: |
1440 | + for line in f: |
1441 | + if line.startswith('flags'): |
1442 | + words = line.split() |
1443 | + if 'vmx' in words: |
1444 | + adtlog.debug( |
1445 | + 'Detected KVM capable Intel host CPU, ' |
1446 | + 'enabling nested KVM' |
1447 | + ) |
1448 | + return ['-cpu', 'kvm64,+vmx,+lahf_lm'] |
1449 | + elif 'svm' in words: # AMD kvm |
1450 | + adtlog.debug( |
1451 | + 'Detected KVM capable AMD host CPU, ' |
1452 | + 'enabling nested KVM' |
1453 | + ) |
1454 | + # FIXME: this should really be the one below |
1455 | + # for more reproducible testbeds, but nothing |
1456 | + # except -cpu host works |
1457 | + # return ['-cpu', 'kvm64,+svm,+lahf_lm'] |
1458 | + return ['-cpu', 'host'] |
1459 | + except IOError as e: |
1460 | + adtlog.warning( |
1461 | + 'Cannot read /proc/cpuinfo to detect CPU flags: %s' % e |
1462 | + ) |
1463 | + # fetching CPU flags isn't critical (only used to enable |
1464 | + # nested KVM), so don't fail here |
1465 | + |
1466 | + return [] |
1467 | + |
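The `flags` line in `/proc/cpuinfo` is a plain space-separated word list, so the detection reduces to a word-membership test. A simplified sketch of that scan, operating on text instead of the live file (`detect_vm_flag` is an illustrative helper, not autopkgtest API):

```python
def detect_vm_flag(cpuinfo_text: str) -> str:
    """Return 'vmx' (Intel), 'svm' (AMD) or '' if neither flag is present."""
    for line in cpuinfo_text.splitlines():
        if line.startswith('flags'):
            words = line.split()
            if 'vmx' in words:
                return 'vmx'
            if 'svm' in words:
                return 'svm'
    return ''


sample = 'processor\t: 0\nflags\t\t: fpu vme svm lahf_lm\n'
```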
1468 | + |
1469 | +class QemuImage: |
1470 | + def __init__( |
1471 | + self, |
1472 | + file: str, |
1473 | + format: Optional[str] = None, |
1474 | + readonly: bool = False, |
1475 | + ) -> None: |
1476 | + self.file = file |
1477 | + self.overlay = None # type: Optional[str] |
1478 | + self.readonly = readonly |
1479 | + |
1480 | + if format is not None: |
1481 | + self.format = format |
1482 | + else: |
1483 | + info = json.loads( |
1484 | + VirtSubproc.check_exec([ |
1485 | + 'qemu-img', 'info', |
1486 | + '--output=json', |
1487 | + self.file, |
1488 | + ], outp=True, timeout=5) |
1489 | + ) |
1490 | + |
1491 | + if 'format' not in info: |
1492 | + VirtSubproc.bomb('Unable to determine format of %s' % self.file) |
1493 | + |
1494 | + self.format = str(info['format']) |
1495 | + |
1496 | + def __str__(self) -> str: |
1497 | + bits = [] # type: List[str] |
1498 | + |
1499 | + if self.overlay is None: |
1500 | + bits.append('file={}'.format(self.file)) |
1501 | + else: |
1502 | + bits.append('file={}'.format(self.overlay)) |
1503 | + bits.append('cache=unsafe') |
1504 | + |
1505 | + bits.append('if=virtio') |
1506 | + bits.append('discard=unmap') |
1507 | + bits.append('format={}'.format(self.format)) |
1508 | + |
1509 | + if self.readonly: |
1510 | + bits.append('readonly') |
1511 | + |
1512 | + return ','.join(bits) |
1513 | + |
1514 | + |
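`__str__` turns each `QemuImage` straight into a qemu `-drive` option value. The assembly rules (an overlay replaces the base file and adds `cache=unsafe`; `readonly` is appended last) can be sketched standalone, without the `qemu-img info` probing (`drive_options` is a hypothetical free function mirroring the method above):

```python
from typing import Optional


def drive_options(file: str, fmt: str, readonly: bool = False,
                  overlay: Optional[str] = None) -> str:
    """Build a qemu -drive option value, mirroring QemuImage.__str__:
    an overlay replaces the base file and adds cache=unsafe."""
    bits = []
    if overlay is None:
        bits.append('file={}'.format(file))
    else:
        bits.append('file={}'.format(overlay))
        bits.append('cache=unsafe')
    bits += ['if=virtio', 'discard=unmap', 'format={}'.format(fmt)]
    if readonly:
        bits.append('readonly')
    return ','.join(bits)
```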
1515 | +class Qemu: |
1516 | + def __init__( |
1517 | + self, |
1518 | + images: Sequence[Union[QemuImage, str]], |
1519 | + qemu_command: str, |
1520 | + cpus: int = 1, |
1521 | + efi: bool = False, |
1522 | + overlay: bool = False, |
1523 | + overlay_dir: Optional[str] = None, |
1524 | + qemu_options: Sequence[str] = (), |
1525 | + ram_size: int = 1024, |
1526 | + workdir: Optional[str] = None, |
1527 | + ) -> None: |
1528 | + """ |
1529 | + Constructor. |
1530 | + |
1531 | + images: Disk images for the VM. The first image is assumed to be |
1532 | + the bootable, writable root filesystem, and we actually boot a |
1533 | + snapshot. The remaining images are assumed to be read-only. |
1534 | + qemu_command: qemu executable |
1535 | + |
1536 | + cpus: Number of vCPUs |
1537 | + efi: If true, boot using OVMF/AAVMF firmware (x86/ARM only) |
1538 | + overlay: If true, use a temporary overlay for first image |
1539 | + overlay_dir: Store writable overlays here (default: workdir) |
1540 | + qemu_options: Space-separated options for qemu |
1541 | + ram_size: Amount of RAM in MiB |
1542 | + workdir: Directory for temporary files (default: a random |
1543 | + subdirectory of $TMPDIR) |
1544 | + """ |
1545 | + |
1546 | + self.cpus = cpus |
1547 | + self.images = [] # type: List[QemuImage] |
1548 | + self.overlay_dir = overlay_dir |
1549 | + self.qemu_command = qemu_command |
1550 | + self.ram_size = ram_size |
1551 | + self.ssh_port = find_free_port(10022) |
1552 | + |
1553 | + if workdir is None: |
1554 | + workdir = tempfile.mkdtemp(prefix='autopkgtest-qemu.') |
1555 | + |
1556 | + self.workdir = workdir # type: Optional[str] |
1557 | + os.chmod(workdir, 0o755) |
1558 | + self.shareddir = os.path.join(workdir, 'shared') |
1559 | + os.mkdir(self.shareddir) |
1560 | + self.monitor_socket_path = os.path.join(workdir, 'monitor') |
1561 | + self.ttys0_socket_path = os.path.join(workdir, 'ttyS0') |
1562 | + self.ttys1_socket_path = os.path.join(workdir, 'ttyS1') |
1563 | + |
1564 | + for i, image in enumerate(images): |
1565 | + if isinstance(image, QemuImage): |
1566 | + self.images.append(image) |
1567 | + else: |
1568 | + assert isinstance(image, str) |
1569 | + |
1570 | + self.images.append( |
1571 | + QemuImage( |
1572 | + file=image, |
1573 | + format=None, |
1574 | + readonly=(i != 0), |
1575 | + ) |
1576 | + ) |
1577 | + |
1578 | + if overlay: |
1579 | + self.images[0].overlay = self.prepare_overlay(self.images[0]) |
1580 | + |
1581 | + if self.ssh_port: |
1582 | + adtlog.debug( |
1583 | + 'Forwarding local port %i to VM ssh port 22' % self.ssh_port |
1584 | + ) |
1585 | + nic_opt = ',hostfwd=tcp:127.0.0.1:%i-:22' % self.ssh_port |
1586 | + else: |
1587 | + nic_opt = '' |
1588 | + |
1589 | + # start QEMU |
1590 | + argv = [ |
1591 | + qemu_command, |
1592 | + '-m', str(ram_size), |
1593 | + '-smp', str(cpus), |
1594 | + '-nographic', |
1595 | + '-net', 'nic,model=virtio', |
1596 | + '-net', 'user' + nic_opt, |
1597 | + '-object', 'rng-random,filename=/dev/urandom,id=rng0', |
1598 | + '-device', 'virtio-rng-pci,rng=rng0,id=rng-device0', |
1599 | + '-monitor', 'unix:%s,server,nowait' % self.monitor_socket_path, |
1600 | + '-serial', 'unix:%s,server,nowait' % self.ttys0_socket_path, |
1601 | + '-serial', 'unix:%s,server,nowait' % self.ttys1_socket_path, |
1602 | + '-virtfs', |
1603 | + ( |
1604 | + 'local,id=autopkgtest,path=%s,security_model=none,' |
1605 | + 'mount_tag=autopkgtest' |
1606 | + ) % self.shareddir, |
1607 | + ] |
1608 | + |
1609 | + for i, image in enumerate(self.images): |
1610 | + argv.append('-drive') |
1611 | + argv.append('index=%d,%s' % (i, image)) |
1612 | + |
1613 | + if efi: |
1614 | + if 'qemu-system-x86_64' in qemu_command or \ |
1615 | + 'qemu-system-i386' in qemu_command: |
1616 | + code = '/usr/share/OVMF/OVMF_CODE.fd' |
1617 | + data = '/usr/share/OVMF/OVMF_VARS.fd' |
1618 | + elif 'qemu-system-aarch64' in qemu_command: |
1619 | + code = '/usr/share/AAVMF/AAVMF_CODE.fd' |
1620 | + data = '/usr/share/AAVMF/AAVMF_VARS.fd' |
1621 | + elif 'qemu-system-arm' in qemu_command: |
1622 | + code = '/usr/share/AAVMF/AAVMF32_CODE.fd' |
1623 | + data = '/usr/share/AAVMF/AAVMF32_VARS.fd' |
1624 | + else: |
1625 | + VirtSubproc.bomb('Unknown architecture for EFI boot') |
1626 | + |
1627 | + shutil.copy(data, '%s/efivars.fd' % workdir) |
1628 | + argv.append('-drive') |
1629 | + argv.append('if=pflash,format=raw,read-only,file=' + code) |
1630 | + argv.append('-drive') |
1631 | + argv.append( |
1632 | + 'if=pflash,format=raw,file=%s/efivars.fd' % workdir |
1633 | + ) |
1634 | + |
1635 | + if os.path.exists('/dev/kvm'): |
1636 | + argv.append('-enable-kvm') |
1637 | + # Enable nested KVM by default on x86_64 |
1638 | + if ( |
1639 | + os.uname()[4] == 'x86_64' and |
1640 | + self.qemu_command == 'qemu-system-x86_64' and |
1641 | + '-cpu' not in qemu_options |
1642 | + ): |
1643 | + argv += get_cpuflag() |
1644 | + |
1645 | + # pass through option to qemu |
1646 | + if qemu_options: |
1647 | + argv.extend(qemu_options) |
1648 | + |
1649 | + self.subprocess = subprocess.Popen( |
1650 | + argv, |
1651 | + stdin=subprocess.DEVNULL, |
1652 | + stdout=sys.stderr, |
1653 | + stderr=subprocess.STDOUT, |
1654 | + ) # type: Optional[subprocess.Popen[bytes]] |
1655 | + |
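All three control channels (the monitor plus two serial ports) use the same `unix:<path>,server,nowait` pattern: qemu creates the socket itself and does not block waiting for a client. A small sketch of just that option assembly (`chardev_args` is a hypothetical helper, not part of the class above):

```python
import os
from typing import List


def chardev_args(workdir: str) -> List[str]:
    """Monitor plus two serial ports, each as a unix socket under workdir."""
    def sock(name: str) -> str:
        # server: qemu creates the socket; nowait: don't block for a client
        return 'unix:%s,server,nowait' % os.path.join(workdir, name)

    return ['-monitor', sock('monitor'),
            '-serial', sock('ttyS0'),
            '-serial', sock('ttyS1')]
```

In the constructor above the first `-serial` becomes the guest's ttyS0 and the second its ttyS1, which is why the socket paths are named that way.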
1656 | + @staticmethod |
1657 | + def get_default_qemu_command( |
1658 | + uname_m: Optional[str] = None |
1659 | + ) -> str: |
1660 | + uname_to_qemu_suffix = {'i[3456]86$': 'i386', '^arm': 'arm'} |
1661 | + |
1662 | + if uname_m is None: |
1663 | + uname_m = os.uname()[4] |
1664 | + |
1665 | + for pattern, suffix in uname_to_qemu_suffix.items(): |
1666 | + if re.match(pattern, uname_m): |
1667 | + return 'qemu-system-' + suffix |
1669 | + else: |
1670 | + return 'qemu-system-' + uname_m |
1671 | + |
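The machine-name mapping is regex-based: any `i[3456]86` host gets the `i386` binary, anything starting with `arm` gets `arm`, and every other machine name maps through unchanged. A condensed equivalent using the same patterns:

```python
import re


def default_qemu_command(uname_m: str) -> str:
    """Map a uname -m machine string to a qemu-system-* executable name."""
    for pattern, suffix in (('i[3456]86$', 'i386'), ('^arm', 'arm')):
        if re.match(pattern, uname_m):
            return 'qemu-system-' + suffix
    return 'qemu-system-' + uname_m
```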
1672 | + @property |
1673 | + def monitor_socket(self): |
1674 | + return VirtSubproc.get_unix_socket(self.monitor_socket_path) |
1675 | + |
1676 | + @property |
1677 | + def ttys0_socket(self): |
1678 | + return VirtSubproc.get_unix_socket(self.ttys0_socket_path) |
1679 | + |
1680 | + @property |
1681 | + def ttys1_socket(self): |
1682 | + return VirtSubproc.get_unix_socket(self.ttys1_socket_path) |
1683 | + |
1684 | + def prepare_overlay( |
1685 | + self, |
1686 | + image: QemuImage, |
1687 | + ) -> str: |
1688 | + '''Generate a temporary overlay image''' |
1689 | + |
1690 | + # generate a temporary overlay |
1691 | + if self.overlay_dir is not None: |
1692 | + overlay = os.path.join( |
1693 | + self.overlay_dir, |
1694 | + os.path.basename(image.file) + '.overlay-%s' % time.time() |
1695 | + ) |
1696 | + else: |
1697 | + workdir = self.workdir |
1698 | + assert workdir is not None |
1699 | + overlay = os.path.join(workdir, 'overlay.img') |
1700 | + |
1701 | + adtlog.debug('Creating temporary overlay image in %s' % overlay) |
1702 | + VirtSubproc.check_exec( |
1703 | + [ |
1704 | + 'qemu-img', 'create', |
1705 | + '-f', 'qcow2', |
1706 | + '-F', image.format, |
1707 | + '-b', os.path.abspath(image.file), |
1708 | + overlay, |
1709 | + ], |
1710 | + outp=True, |
1711 | + timeout=300, |
1712 | + ) |
1713 | + return overlay |
1714 | + |
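The overlay is a qcow2 file whose backing store is the original image; `-F` names the backing file's format explicitly so qemu-img does not have to probe it. A command builder mirroring that invocation, as a sketch that only assembles the argv and never runs qemu-img:

```python
import os
from typing import List


def overlay_create_argv(base: str, base_format: str, overlay: str) -> List[str]:
    """qemu-img create command for a qcow2 overlay backed by base.

    -F declares the backing file's format up front instead of letting
    qemu-img guess it from the file contents.
    """
    return ['qemu-img', 'create',
            '-f', 'qcow2',
            '-F', base_format,
            '-b', os.path.abspath(base),
            overlay]
```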
1715 | + def cleanup(self) -> Optional[int]: |
1716 | + ret = None |
1717 | + |
1718 | + if self.subprocess is not None: |
1719 | + self.subprocess.terminate() |
1720 | + ret = self.subprocess.wait() |
1721 | + self.subprocess = None |
1722 | + |
1723 | + if self.workdir is not None: |
1724 | + shutil.rmtree(self.workdir) |
1725 | + self.workdir = None |
1726 | + |
1727 | + return ret |
1728 | diff --git a/lib/testdesc.py b/lib/testdesc.py |
1729 | index 84d24db..c8c9741 100644 |
1730 | --- a/lib/testdesc.py |
1731 | +++ b/lib/testdesc.py |
1732 | @@ -44,7 +44,7 @@ known_restrictions = ['rw-build-tree', 'breaks-testbed', 'needs-root', |
1733 | 'build-needed', 'allow-stderr', 'isolation-container', |
1734 | 'isolation-machine', 'needs-recommends', 'needs-reboot', |
1735 | 'flaky', 'skippable', 'superficial', |
1736 | - 'skip-not-installable'] |
1737 | + 'skip-not-installable', 'needs-internet'] |
1738 | |
1739 | |
1740 | class Unsupported(Exception): |
1741 | @@ -99,9 +99,6 @@ class Test: |
1742 | ''' |
1743 | if '/' in name: |
1744 | raise Unsupported(name, 'test name may not contain / character') |
1745 | - for r in restrictions: |
1746 | - if r not in known_restrictions: |
1747 | - raise Unsupported(name, 'unknown restriction %s' % r) |
1748 | |
1749 | if not ((path is None) ^ (command is None)): |
1750 | raise InvalidControl(name, 'Test must have either path or command') |
1751 | @@ -151,42 +148,54 @@ class Test: |
1752 | else: |
1753 | adtlog.report(self.name, 'FAIL ' + reason) |
1754 | |
1755 | - def check_testbed_compat(self, caps): |
1756 | + def check_testbed_compat(self, caps, ignore_restrictions=()): |
1757 | '''Check for restrictions incompatible with test bed capabilities. |
1758 | |
1759 | Raise Unsupported exception if there are any. |
1760 | ''' |
1761 | - if 'isolation-container' in self.restrictions and \ |
1762 | + effective = set(self.restrictions) - set(ignore_restrictions) |
1763 | + |
1764 | + for r in effective: |
1765 | + if r not in known_restrictions: |
1766 | + raise Unsupported(self.name, 'unknown restriction %s' % r) |
1767 | + |
1768 | + if 'isolation-container' in effective and \ |
1769 | 'isolation-container' not in caps and \ |
1770 | 'isolation-machine' not in caps: |
1771 | raise Unsupported(self.name, |
1772 | 'Test requires container-level isolation but ' |
1773 | 'testbed does not provide that') |
1774 | |
1775 | - if 'isolation-machine' in self.restrictions and \ |
1776 | + if 'isolation-machine' in effective and \ |
1777 | 'isolation-machine' not in caps: |
1778 | raise Unsupported(self.name, |
1779 | 'Test requires machine-level isolation but ' |
1780 | 'testbed does not provide that') |
1781 | |
1782 | - if 'breaks-testbed' in self.restrictions and \ |
1783 | + if 'breaks-testbed' in effective and \ |
1784 | 'revert-full-system' not in caps: |
1785 | raise Unsupported(self.name, |
1786 | 'Test breaks testbed but testbed does not ' |
1787 | 'provide revert-full-system') |
1788 | |
1789 | - if 'needs-root' in self.restrictions and \ |
1790 | + if 'needs-root' in effective and \ |
1791 | 'root-on-testbed' not in caps: |
1792 | raise Unsupported(self.name, |
1793 | 'Test needs root on testbed which is not ' |
1794 | 'available') |
1795 | |
1796 | - if 'needs-reboot' in self.restrictions and \ |
1797 | + if 'needs-reboot' in effective and \ |
1798 | 'reboot' not in caps: |
1799 | raise Unsupported(self.name, |
1800 | 'Test needs to reboot testbed but testbed does ' |
1801 | 'not provide reboot capability') |
1802 | |
1803 | + if 'needs-internet' in effective and \ |
1804 | + 'has_internet' not in caps: |
1805 | + raise Unsupported(self.name, |
1806 | + 'Test needs unrestricted internet access but testbed does ' |
1807 | + 'not provide it') |
1808 | + |
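Each check above follows one pattern: take the restriction set minus the ignore list, then require at least one satisfying capability. A condensed model of that logic (the restriction-to-capability table here paraphrases the checks above for illustration; it is not an autopkgtest API):

```python
def unsupported(restrictions, caps, ignore=()):
    """Return the restrictions a testbed cannot satisfy, sorted.

    Simplified model: each restriction is satisfied by any one of the
    listed capabilities; unknown restrictions are ignored here, while
    the real code raises Unsupported for them.
    """
    needs = {
        'isolation-machine': ('isolation-machine',),
        'isolation-container': ('isolation-container', 'isolation-machine'),
        'breaks-testbed': ('revert-full-system',),
        'needs-root': ('root-on-testbed',),
        'needs-reboot': ('reboot',),
        'needs-internet': ('has_internet',),
    }
    problems = []
    for r in set(restrictions) - set(ignore):
        satisfiers = needs.get(r, ())
        if satisfiers and not any(c in caps for c in satisfiers):
            problems.append(r)
    return sorted(problems)
```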
1809 | # |
1810 | # Parsing for Debian source packages |
1811 | # |
1812 | @@ -234,12 +243,12 @@ def parse_rfc822(path): |
1813 | def _debian_check_unknown_fields(name, record): |
1814 | unknown_keys = set(record.keys()).difference( |
1815 | {'Tests', 'Test-command', 'Restrictions', 'Features', |
1816 | - 'Depends', 'Tests-directory', 'Classes'}) |
1817 | + 'Depends', 'Tests-directory', 'Classes', 'Architecture'}) |
1818 | if unknown_keys: |
1819 | raise Unsupported(name, 'unknown field %s' % unknown_keys.pop()) |
1820 | |
1821 | |
1822 | -def _debian_packages_from_source(srcdir): |
1823 | +def _debian_packages_from_source(srcdir, cross_arch=None): |
1824 | packages = [] |
1825 | packages_no_arch = [] |
1826 | |
1827 | @@ -252,10 +261,17 @@ def _debian_packages_from_source(srcdir): |
1828 | st.get('Package-type', 'deb') != 'deb': |
1829 | continue |
1830 | arch = st['Architecture'] |
1831 | - if arch in ('all', 'any'): |
1832 | + qual_pkg = st['Package'] |
1833 | + # take care to emit an arch qualifier only for arch-dependent |
1834 | + # packages, not for arch: all ones |
1835 | + if cross_arch: |
1836 | + qual_pkg += ':' + cross_arch |
1837 | + if arch == 'all': |
1838 | packages.append(st['Package']) |
1839 | + elif arch == 'any': |
1840 | + packages.append(qual_pkg) |
1841 | else: |
1842 | - packages.append('%s [%s]' % (st['Package'], arch)) |
1843 | + packages.append('%s [%s]' % (qual_pkg, arch)) |
1844 | packages_no_arch.append(st['Package']) |
1845 | |
1846 | return (packages, packages_no_arch) |
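The qualification rule this hunk implements is easy to get wrong: `Architecture: all` binaries must never get a `:arch` suffix, `any` binaries get one when cross-testing, and explicit architecture lists keep their `[arch]` filter on the qualified name. A reduced sketch of that per-package decision (`qualify` is an illustrative helper, not part of testdesc.py):

```python
from typing import Optional


def qualify(package: str, arch_field: str,
            cross_arch: Optional[str] = None) -> str:
    """Render one binary package for a dependency list.

    Mirrors the three cases above: 'all' is arch-independent and never
    qualified, 'any' gets a :arch qualifier when cross-testing, and an
    explicit list keeps its [arch] filter on the qualified name.
    """
    qual = package + ':' + cross_arch if cross_arch else package
    if arch_field == 'all':
        return package
    if arch_field == 'any':
        return qual
    return '%s [%s]' % (qual, arch_field)
```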
1847 | @@ -291,7 +307,7 @@ def _debian_build_deps_from_source(srcdir, testbed_arch): |
1848 | deps = [d.strip() for d in deps.split(',')] |
1849 | |
1850 | # @builddeps@ should always imply build-essential |
1851 | - deps.append('build-essential') |
1852 | + deps.append('build-essential:native') |
1853 | return deps |
1854 | |
1855 | |
1856 | @@ -353,7 +369,8 @@ def _synthesize_deps(dep, testbed_arch): |
1857 | return None |
1858 | |
1859 | |
1860 | -def _parse_debian_depends(testname, dep_str, srcdir, testbed_arch): |
1861 | +def _parse_debian_depends(testname, dep_str, srcdir, testbed_arch, |
1862 | + cross_arch=None): |
1863 | '''Parse Depends: line in a Debian package |
1864 | |
1865 | Split dependencies (comma separated), validate their syntax, and expand @ |
1866 | @@ -364,7 +381,12 @@ def _parse_debian_depends(testname, dep_str, srcdir, testbed_arch): |
1867 | ''' |
1868 | deps = [] |
1869 | synthdeps = [] |
1870 | - (my_packages, my_packages_no_arch) = _debian_packages_from_source(srcdir) |
1871 | + (my_packages, my_packages_no_arch) = _debian_packages_from_source(srcdir, |
1872 | + cross_arch=cross_arch) |
1873 | + if cross_arch: |
1874 | + target_arch = cross_arch |
1875 | + else: |
1876 | + target_arch = testbed_arch |
1877 | for alt_group_str in dep_str.split(','): |
1878 | alt_group_str = alt_group_str.strip() |
1879 | if not alt_group_str: |
1880 | @@ -375,13 +397,25 @@ def _parse_debian_depends(testname, dep_str, srcdir, testbed_arch): |
1881 | for d in my_packages: |
1882 | adtlog.debug('synthesised dependency %s' % d) |
1883 | deps.append(d) |
1884 | - s = _synthesize_deps(d, testbed_arch) |
1885 | + s = _synthesize_deps(d, target_arch) |
1886 | if s: |
1887 | synthdeps.append(s) |
1888 | elif alt_group_str == '@builddeps@': |
1889 | for d in _debian_build_deps_from_source(srcdir, testbed_arch): |
1890 | adtlog.debug('synthesised dependency %s' % d) |
1891 | deps.append(d) |
1892 | + elif alt_group_str == 'build-essential': |
1893 | + # special case; this is how packages declare they want to build |
1894 | + # code during the test (but not necessarily doing a full package |
1895 | + # build), but for cross-architecture testing this declaration is |
1896 | + # wrong because it tries to pull in build-essential for the wrong |
1897 | + # arch. So we apply the same fix-up here that we do in |
1898 | + # runner/autopkgtest. We can't expect the package maintainer to |
1899 | + # do this because they don't know what crossbuild-essential-$arch |
1900 | + # to pull in. |
1901 | + deps.append('build-essential:native') |
1902 | + if cross_arch: |
1903 | + deps.append('crossbuild-essential-%s:native' % cross_arch) |
1904 | else: |
1905 | synthdep_alternatives = [] |
1906 | for dep in alt_group_str.split('|'): |
1907 | @@ -389,8 +423,10 @@ def _parse_debian_depends(testname, dep_str, srcdir, testbed_arch): |
1908 | if pkg not in my_packages_no_arch: |
1909 | synthdep_alternatives = [] |
1910 | break |
1911 | - s = _synthesize_deps(dep, testbed_arch) |
1912 | + s = _synthesize_deps(dep, target_arch) |
1913 | if s: |
1914 | + if cross_arch: |
1915 | + s += ':' + cross_arch |
1916 | synthdep_alternatives.append(s) |
1917 | if synthdep_alternatives: |
1918 | adtlog.debug('marked alternatives %s as a synthesised dependency' % synthdep_alternatives) |
1919 | @@ -398,6 +434,13 @@ def _parse_debian_depends(testname, dep_str, srcdir, testbed_arch): |
1920 | synthdeps.append(synthdep_alternatives) |
1921 | else: |
1922 | synthdeps.append(synthdep_alternatives[0]) |
1923 | + if cross_arch: |
1924 | + for mine in my_packages: |
1925 | + # ignore [arch] filters for matching packages |
1926 | + if alt_group_str + ':' + cross_arch == mine.split(' ')[0]: |
1927 | + adtlog.debug('%s is from our source package, adding arch qualifier for cross-testing' % alt_group_str) |
1928 | + alt_group_str += ':' + cross_arch |
1929 | + break |
1930 | deps.append(alt_group_str) |
1931 | |
1932 | return (deps, synthdeps) |
1933 | @@ -428,10 +471,74 @@ def _autodep8(srcdir): |
1934 | return None |
1935 | |
1936 | |
1937 | +def _matches_architecture(host_arch, arch_wildcard): |
1938 | + try: |
1939 | + subprocess.check_call(['perl', '-mDpkg::Arch', '-e', |
1940 | + 'exit(!Dpkg::Arch::debarch_is(shift, shift))', |
1941 | + host_arch, arch_wildcard]) |
1942 | + except subprocess.CalledProcessError as e: |
1943 | + # returns 1 if host_arch is not matching arch_wildcard; other |
1944 | + # errors shouldn't be ignored |
1945 | + if e.returncode != 1: |
1946 | + raise |
1947 | + return False |
1948 | + return True |
1949 | + |
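`Dpkg::Arch::debarch_is` understands Debian architecture wildcards such as `linux-any` or `any-amd64`. A deliberately simplified pure-Python stand-in, handling only `any`, exact names and the two-part `os-cpu` wildcard form (the real matcher uses Dpkg's triplet tables, which also cover non-Linux OS prefixes and three-part wildcards; `simple_debarch_is` is a hypothetical name):

```python
def simple_debarch_is(host_arch: str, wildcard: str) -> bool:
    """Very reduced model of Dpkg::Arch::debarch_is.

    Assumes host_arch is a bare cpu name ('amd64') implicitly belonging
    to the 'linux' os.
    """
    if wildcard in ('any', host_arch):
        return True
    if '-' in wildcard:
        os_part, cpu_part = wildcard.split('-', 1)
        return os_part in ('any', 'linux') and cpu_part in ('any', host_arch)
    return False
```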
1950 | + |
1951 | +def _check_architecture(name, testbed_arch, architectures): |
1952 | + '''Check if testbed_arch is supported by the architectures |
1953 | + |
1954 | + Entries come in two forms: positive ("only these architectures |
1955 | + are supported"; wildcards allowed) and negative ("!arch": this |
1956 | + architecture is unsupported; wildcards allowed). If any positive |
1957 | + entry is present, every architecture not explicitly listed is |
1958 | + skipped. Debian Policy 7.1 forbids mixing positive and negative |
1959 | + entries for (Build-)Depends, so mixed lists are rejected here too. |
1960 | + The list may also be empty; empty and ["any"] are equivalent, |
1961 | + and "all" is not allowed. |
1962 | + ''' |
1963 | + |
1964 | + if "all" in architectures: |
1965 | + raise Unsupported(name, "Arch 'all' not allowed in Architecture field") |
1966 | + |
1967 | + if len(architectures) == 0 or architectures == ["any"]: |
1968 | + return |
1969 | + |
1970 | + any_negative = False |
1971 | + any_positive = False |
1972 | + for arch in architectures: |
1973 | + if arch[0] == "!": |
1974 | + any_negative = True |
1975 | + if _matches_architecture(testbed_arch, arch[1:]): |
1976 | + raise Unsupported(name, "Test declares architecture as not " + |
1977 | + "supported: %s" % testbed_arch) |
1978 | + if arch[0] != "!": |
1979 | + any_positive = True |
1980 | + |
1981 | + if any_positive: |
1982 | + if any_negative: |
1983 | + raise Unsupported(name, "Architecture list must not mix " + |
1984 | + "negated (!arch) entries with " + |
1985 | + "plain ones") |
1986 | + arch_matched = False |
1987 | + for arch in architectures: |
1988 | + if _matches_architecture(testbed_arch, arch): |
1989 | + arch_matched = True |
1990 | + |
1991 | + if not arch_matched: |
1992 | + raise Unsupported(name, "Test lists explicitly supported " + |
1993 | + "architectures, but the current architecture " + |
1994 | + "%s isn't listed." % testbed_arch) |
1995 | + |
1996 | + |
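The docstring's rules (no mixing of negated and plain entries, empty or `["any"]` means unrestricted, `all` forbidden) reduce to a small decision function. A sketch using exact-name comparison where the real code calls out to `debarch_is` for wildcard support (`arch_allowed` is an illustrative name):

```python
from typing import List


def arch_allowed(testbed_arch: str, architectures: List[str]) -> bool:
    """True if testbed_arch passes the Architecture field.

    Simplified to exact architecture names; raises ValueError where the
    real code raises Unsupported.
    """
    if 'all' in architectures:
        raise ValueError("'all' not allowed in Architecture field")
    if not architectures or architectures == ['any']:
        return True
    negated = {a[1:] for a in architectures if a.startswith('!')}
    plain = {a for a in architectures if not a.startswith('!')}
    if negated and plain:
        raise ValueError('mixing negated and plain entries is not permitted')
    if negated:
        return testbed_arch not in negated
    return testbed_arch in plain
```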
1997 | def parse_debian_source(srcdir, testbed_caps, testbed_arch, control_path=None, |
1998 | - auto_control=True): |
1999 | + auto_control=True, cross_arch=None, ignore_restrictions=(), |
2000 | + testname=None): |
2001 | '''Parse test descriptions from a Debian DEP-8 source dir |
2002 | |
2003 | + @ignore_restrictions: If we would skip the test due to these restrictions, |
2004 | + run it anyway |
2005 | + |
2006 | You can specify an alternative path for the control file (default: |
2007 | srcdir/debian/tests/control). |
2008 | |
2009 | @@ -496,14 +603,18 @@ def parse_debian_source(srcdir, testbed_caps, testbed_arch, control_path=None, |
2010 | '*', 'only one test-name feature allowed') |
2011 | feature_test_name = details[1] |
2012 | features.append(feature) |
2013 | + architectures = record.get('Architecture', '').replace( |
2014 | + ',', ' ').split() |
2015 | |
2016 | if 'Tests' in record: |
2017 | test_names = record['Tests'].replace(',', ' ').split() |
2018 | + if len(test_names) == 0: |
2019 | + raise InvalidControl('*', '"Tests" field is empty') |
2020 | (depends, synth_depends) = _parse_debian_depends( |
2021 | test_names[0], |
2022 | record.get('Depends', '@'), |
2023 | srcdir, |
2024 | - testbed_arch) |
2025 | + testbed_arch, cross_arch=cross_arch) |
2026 | if 'Test-command' in record: |
2027 | raise InvalidControl('*', 'Only one of "Tests" or ' |
2028 | '"Test-Command" may be given') |
2029 | @@ -515,12 +626,15 @@ def parse_debian_source(srcdir, testbed_caps, testbed_arch, control_path=None, |
2030 | for n in test_names: |
2031 | try: |
2032 | _debian_check_unknown_fields(n, record) |
2033 | + _check_architecture(n, testbed_arch, architectures) |
2034 | + |
2035 | test = Test(n, os.path.join(test_dir, n), None, |
2036 | restrictions, features, depends, [], [], synth_depends) |
2037 | - test.check_testbed_compat(testbed_caps) |
2038 | + test.check_testbed_compat(testbed_caps, ignore_restrictions) |
2039 | except Unsupported as u: |
2040 | - u.report() |
2041 | - some_skipped = True |
2042 | + if testname is None or n == testname: |
2043 | + u.report() |
2044 | + some_skipped = True |
2045 | else: |
2046 | tests.append(test) |
2047 | elif 'Test-command' in record: |
2048 | @@ -529,23 +643,25 @@ def parse_debian_source(srcdir, testbed_caps, testbed_arch, control_path=None, |
2049 | command, |
2050 | record.get('Depends', '@'), |
2051 | srcdir, |
2052 | - testbed_arch) |
2053 | + testbed_arch, cross_arch=cross_arch) |
2054 | if feature_test_name is None: |
2055 | command_counter += 1 |
2056 | name = 'command%i' % command_counter |
2057 | else: |
2058 | name = feature_test_name |
2059 | _debian_check_unknown_fields(name, record) |
2060 | + _check_architecture(name, testbed_arch, architectures) |
2061 | test = Test(name, None, command, restrictions, features, |
2062 | depends, [], [], synth_depends) |
2063 | - test.check_testbed_compat(testbed_caps) |
2064 | + test.check_testbed_compat(testbed_caps, ignore_restrictions) |
2065 | tests.append(test) |
2066 | else: |
2067 | raise InvalidControl('*', 'missing "Tests" or "Test-Command"' |
2068 | ' field') |
2069 | except Unsupported as u: |
2070 | - u.report() |
2071 | - some_skipped = True |
2072 | + if testname is None or n == testname: |
2073 | + u.report() |
2074 | + some_skipped = True |
2075 | |
2076 | return (tests, some_skipped) |
2077 | |
2078 | @@ -555,7 +671,7 @@ def parse_debian_source(srcdir, testbed_caps, testbed_arch, control_path=None, |
2079 | # |
2080 | |
2081 | def parse_click_manifest(manifest, testbed_caps, clickdeps, use_installed, |
2082 | - srcdir=None): |
2083 | + srcdir=None, ignore_restrictions=(), testname=None): |
2084 | '''Parse test descriptions from a click manifest. |
2085 | |
2086 | @manifest: String with the click manifest |
2087 | @@ -563,6 +679,9 @@ def parse_click_manifest(manifest, testbed_caps, clickdeps, use_installed, |
2088 | @clickdeps: paths of click packages that these tests need |
2089 | @use_installed: True if test expects the described click to be installed |
2090 | already |
2091 | + @ignore_restrictions: If we would skip the test due to these restrictions, |
2092 | + run it anyway |
2093 | + @testname: If a specific test was requested, don't report on others |
2094 | |
2095 | Return (source_dir, list of Test objects, some_skipped). If this encounters |
2096 | any invalid restrictions, fields, or test restrictions which cannot be met |
2097 | @@ -628,11 +747,12 @@ def parse_click_manifest(manifest, testbed_caps, clickdeps, use_installed, |
2098 | test = Test(name, desc.get('path'), desc.get('command'), |
2099 | desc.get('restrictions', []), desc.get('features', []), |
2100 | desc.get('depends', []), clickdeps, installed_clicks, []) |
2101 | - test.check_testbed_compat(testbed_caps) |
2102 | + test.check_testbed_compat(testbed_caps, ignore_restrictions) |
2103 | tests.append(test) |
2104 | except Unsupported as u: |
2105 | - u.report() |
2106 | - some_skipped = True |
2107 | + if testname is None or name == testname: |
2108 | + u.report() |
2109 | + some_skipped = True |
2110 | |
2111 | if srcdir is None: |
2112 | # do we have an x-source/vcs-bzr link? |
2113 | @@ -659,7 +779,7 @@ def parse_click_manifest(manifest, testbed_caps, clickdeps, use_installed, |
2114 | return (srcdir, tests, some_skipped) |
2115 | |
2116 | |
2117 | -def parse_click(clickpath, testbed_caps, srcdir=None): |
2118 | +def parse_click(clickpath, testbed_caps, srcdir=None, testname=None): |
2119 | '''Parse test descriptions from a click package. |
2120 | |
2121 | Return (source_dir, list of Test objects, some_skipped). If this encounters |
2122 | @@ -681,4 +801,4 @@ def parse_click(clickpath, testbed_caps, srcdir=None): |
2123 | pkg.close() |
2124 | |
2125 | return parse_click_manifest(manifest, testbed_caps, [clickpath], False, |
2126 | - srcdir) |
2127 | + srcdir, testname=testname) |
2128 | diff --git a/runner/autopkgtest b/runner/autopkgtest |
2129 | index 93c6e61..22f796c 100755 |
2130 | --- a/runner/autopkgtest |
2131 | +++ b/runner/autopkgtest |
2132 | @@ -31,7 +31,7 @@ import os |
2133 | import shutil |
2134 | import atexit |
2135 | import json |
2136 | -import pipes |
2137 | +import shlex |
2138 | |
2139 | from debian import deb822 |
2140 | |
2141 | @@ -159,12 +159,16 @@ def run_tests(tests, tree): |
2142 | binaries.publish() |
2143 | doTest = True |
2144 | try: |
2145 | - testbed.install_deps(t.depends, 'needs-recommends' in t.restrictions, opts.shell_fail, t.synth_depends) |
2146 | + testbed.install_deps(t.depends, |
2147 | + 'needs-recommends' in t.restrictions, |
2148 | + opts.shell_fail, t.synth_depends) |
2149 | except adtlog.BadPackageError as e: |
2150 | if 'skip-not-installable' in t.restrictions: |
2151 | errorcode |= 2 |
2152 | adtlog.report(t.name, 'SKIP installation fails and skip-not-installable set') |
2153 | else: |
2154 | + if opts.shell_fail: |
2155 | + testbed.run_shell() |
2156 | errorcode |= 12 |
2157 | adtlog.report(t.name, 'FAIL badpkg') |
2158 | adtlog.preport('blame: ' + ' '.join(blamed)) |
2159 | @@ -200,7 +204,7 @@ def run_tests(tests, tree): |
2160 | def create_testinfo(vserver_args): |
2161 | global testbed |
2162 | |
2163 | - info = {'virt_server': ' '.join([pipes.quote(w) for w in vserver_args])} |
2164 | + info = {'virt_server': ' '.join([shlex.quote(w) for w in vserver_args])} |
2165 | |
2166 | if testbed.initial_kernel_version: |
2167 | info['kernel_version'] = testbed.initial_kernel_version |
2168 | @@ -388,6 +392,12 @@ def build_source(kind, arg, built_binaries): |
2169 | return tests_tree |
2170 | |
2171 | elif kind == 'apt-source': |
2172 | + # Make sure we are selecting the binaries based on the actual target |
2173 | + # architecture, not necessarily the testbed architecture |
2174 | + if opts.architecture: |
2175 | + arch = opts.architecture |
2176 | + else: |
2177 | + arch = testbed.dpkg_arch |
2178 | # The default is to determine the version for "apt-get source |
2179 | # pkg=version" that conforms to the current apt pinning. We only |
2180 | # consider binaries which are shipped in all available versions, |
2181 | @@ -443,7 +453,7 @@ pkgs=$(echo "$pkgs\n" | awk " |
2182 | if (foundarch == 0 || archmatch == 1) thissrc[\\$1] = 1; |
2183 | next } |
2184 | { if (!inlist) next; |
2185 | - inlist=0;''' % {'src': arg, 'arch': testbed.dpkg_arch} |
2186 | + inlist=0;''' % {'src': arg, 'arch': arch} |
2187 | |
2188 | create_command_part2_check_all_pkgs = ''' |
2189 | remaining=0; |
2190 | @@ -580,24 +590,8 @@ dpkg-source -x %(src)s_*.dsc src >/dev/null''' % {'src': arg} |
2191 | if kind not in ['dsc', 'apt-source']: |
2192 | testbed.install_deps([], False) |
2193 | |
2194 | - if kind in ('apt-source', 'git-source'): |
2195 | - # we need to get the downloaded debian/control from the testbed, so |
2196 | - # that we can avoid calling "apt-get build-dep" and thus |
2197 | - # introducing a second mechanism for installing build deps |
2198 | - pkg_control = adt_testbed.Path(testbed, |
2199 | - os.path.join(tmp, 'apt-control'), |
2200 | - os.path.join(result_pwd, 'debian/control'), False) |
2201 | - pkg_control.copyup() |
2202 | - dsc = pkg_control.host |
2203 | - |
2204 | - with open(dsc, encoding='UTF-8') as f: |
2205 | - d = deb822.Deb822(sequence=f) |
2206 | - bd = d.get('Build-Depends', '') |
2207 | - bdi = d.get('Build-Depends-Indep', '') |
2208 | - bda = d.get('Build-Depends-Arch', '') |
2209 | - |
2210 | # determine build command and build-essential packages |
2211 | - build_essential = ['build-essential'] |
2212 | + build_essential = ['build-essential:native'] |
2213 | assert testbed.nproc |
2214 | dpkg_buildpackage = 'DEB_BUILD_OPTIONS="parallel=%s $DEB_BUILD_OPTIONS" dpkg-buildpackage -us -uc -b' % ( |
2215 | opts.build_parallel or testbed.nproc) |
2216 | @@ -607,8 +601,46 @@ dpkg-source -x %(src)s_*.dsc src >/dev/null''' % {'src': arg} |
2217 | if testbed.user or 'root-on-testbed' not in testbed.caps: |
2218 | build_essential += ['fakeroot'] |
2219 | |
2220 | - testbed.satisfy_dependencies_string(bd + ', ' + bdi + ', ' + bda + ', ' + ', '.join(build_essential), arg, |
2221 | - build_dep=True, shell_on_failure=opts.shell_fail) |
2222 | + if opts.architecture: |
2223 | + dpkg_buildpackage += ' -a' + opts.architecture |
2224 | + build_essential += ['crossbuild-essential-%s:native' % opts.architecture] |
2225 | + # apt-get build-dep is the best option here, but we don't call |
2226 | + # it unconditionally because it doesn't take a path to a source |
2227 | + # package as an option in very old releases; so for compatibility |
2228 | + # only use it when we need its multiarch build-dep resolution |
2229 | + # support. |
2230 | + # This is supported in apt 1.1~exp2 and newer, which covers |
2231 | + # Debian oldstable (at time of writing) and Ubuntu 16.04 and |
2232 | + # newer, so once we can rely on this version of apt everywhere, |
2233 | + # the old build-dep resolver code should be deprecated. |
2234 | + all_build_deps = '' |
2235 | + testbed.satisfy_build_deps(result_pwd, |
2236 | + shell_on_failure=opts.shell_fail) |
2237 | + else: |
2238 | + if kind in ('apt-source', 'git-source'): |
2239 | + # we need to get the downloaded debian/control from the |
2240 | + # testbed, so that we can avoid calling "apt-get build-dep" |
2241 | + # and thus introducing a second mechanism for installing |
2242 | + # build deps |
2243 | + pkg_control = adt_testbed.Path(testbed, |
2244 | + os.path.join(tmp, 'apt-control'), |
2245 | + os.path.join(result_pwd, 'debian/control'), False) |
2246 | + pkg_control.copyup() |
2247 | + dsc = pkg_control.host |
2248 | + |
2249 | + with open(dsc, encoding='UTF-8') as f: |
2250 | + d = deb822.Deb822(sequence=f) |
2251 | + bd = d.get('Build-Depends', '') |
2252 | + bdi = d.get('Build-Depends-Indep', '') |
2253 | + bda = d.get('Build-Depends-Arch', '') |
2254 | + |
2255 | + all_build_deps = bd + ', ' + bdi + ', ' + bda + ', ' |
2256 | + |
2257 | + testbed.satisfy_dependencies_string(all_build_deps + |
2258 | + ', '.join(build_essential), |
2259 | + arg, |
2260 | + build_dep=True, |
2261 | + shell_on_failure=opts.shell_fail) |
2262 | |
2263 | # keep patches applied for tests |
2264 | source_rules_command([dpkg_buildpackage, 'dpkg-source --before-build .'], 'build', cwd=result_pwd) |
2265 | @@ -704,7 +736,8 @@ def process_actions(): |
2266 | clicks.append(arg) |
2267 | use_installed = True |
2268 | (srcdir, tests, skipped) = testdesc.parse_click_manifest( |
2269 | - manifest, testbed.caps, clicks, use_installed, pending_click_source) |
2270 | + manifest, testbed.caps, clicks, use_installed, pending_click_source, |
2271 | + opts.ignore_restrictions, testname=testname) |
2272 | |
2273 | elif os.path.exists(arg): |
2274 | # local .click package file |
2275 | @@ -718,7 +751,8 @@ def process_actions(): |
2276 | u = [] |
2277 | manifest = testbed.check_exec(['click', 'info'] + u + [arg], stdout=True) |
2278 | (srcdir, tests, skipped) = testdesc.parse_click_manifest( |
2279 | - manifest, testbed.caps, [], True, pending_click_source) |
2280 | + manifest, testbed.caps, [], True, pending_click_source, |
2281 | + opts.ignore_restrictions, testname=testname) |
2282 | |
2283 | if not srcdir: |
2284 | adtlog.bomb('No click source available for %s' % arg) |
2285 | @@ -733,10 +767,17 @@ def process_actions(): |
2286 | (tests, skipped) = testdesc.parse_debian_source( |
2287 | tests_tree.host, testbed.caps, testbed.dpkg_arch, |
2288 | control_path=control_override, |
2289 | - auto_control=opts.auto_control) |
2290 | + auto_control=opts.auto_control, |
2291 | + ignore_restrictions=opts.ignore_restrictions, |
2292 | + testname=testname, |
2293 | + cross_arch=opts.architecture) |
2294 | except testdesc.InvalidControl as e: |
2295 | adtlog.badpkg(str(e)) |
2296 | |
2297 | + if opts.validate: |
2298 | + adtlog.report("*", "Test specification is valid") |
2299 | + return |
2300 | + |
2301 | if skipped: |
2302 | errorcode |= 2 |
2303 | |
2304 | @@ -776,27 +817,35 @@ def main(): |
2305 | signal.signal(signal.SIGTERM, signal_handler) |
2306 | signal.signal(signal.SIGQUIT, signal_handler) |
2307 | |
2308 | + os.set_blocking(sys.stderr.fileno(), True) |
2309 | + |
2310 | try: |
2311 | setup_trace() |
2312 | testbed = adt_testbed.Testbed(vserver_argv=vserver_args, |
2313 | output_dir=tmp, |
2314 | user=opts.user, |
2315 | + shell_fail=opts.shell_fail, |
2316 | setup_commands=opts.setup_commands, |
2317 | setup_commands_boot=opts.setup_commands_boot, |
2318 | add_apt_pockets=opts.apt_pocket, |
2319 | copy_files=opts.copy, |
2320 | enable_apt_fallback=opts.enable_apt_fallback, |
2321 | + needs_internet=opts.needs_internet, |
2322 | add_apt_sources=getattr(opts, 'add_apt_sources', []), |
2323 | add_apt_releases=getattr(opts, 'add_apt_releases', []), |
2324 | pin_packages=opts.pin_packages, |
2325 | - apt_default_release=opts.apt_default_release) |
2326 | + apt_default_release=opts.apt_default_release, |
2327 | + cross_arch=opts.architecture) |
2328 | testbed.start() |
2329 | testbed.open() |
2330 | process_actions() |
2331 | except Exception: |
2332 | errorcode = print_exception(sys.exc_info(), '') |
2333 | if tmp: |
2334 | - create_testinfo(vserver_args) |
2335 | + try: |
2336 | + create_testinfo(vserver_args) |
2337 | + except Exception: |
2338 | + errorcode = print_exception(sys.exc_info(), '') |
2339 | cleanup() |
2340 | sys.exit(errorcode) |
2341 | |
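The runner changes above swap the deprecated `pipes.quote` for `shlex.quote` and put stderr back into blocking mode before the main loop. A minimal sketch of the quoting behaviour (the argument list here is invented for illustration):

```python
import shlex

# shlex.quote wraps any argument a POSIX shell would otherwise split or
# expand, so the reconstructed command line round-trips safely.
vserver_args = ["autopkgtest-virt-qemu", "--ram-size", "1024", "my image.img"]
virt_server = " ".join(shlex.quote(w) for w in vserver_args)
print(virt_server)  # → autopkgtest-virt-qemu --ram-size 1024 'my image.img'
```

Safe arguments pass through unchanged; only `my image.img`, which contains a space, gets single-quoted.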
2342 | diff --git a/runner/autopkgtest.1 b/runner/autopkgtest.1 |
2343 | index ce4a1bf..5395fc5 100644 |
2344 | --- a/runner/autopkgtest.1 |
2345 | +++ b/runner/autopkgtest.1 |
2346 | @@ -136,6 +136,13 @@ autopkgtest --installed-click com.example.myclick -- [...] |
2347 | .SH TEST OPTIONS |
2348 | |
2349 | .TP |
2350 | +.BR -a " ARCH" " | --architecture=" ARCH |
2351 | + |
2352 | +Run tests for the specified architecture, rather than for the host |
2353 | +architecture as defined by dpkg \-\-print-architecture. When building |
2354 | +packages from source, cross-build for the target architecture as well. |
2355 | + |
2356 | +.TP |
2357 | .BR -B " | " --no-built-binaries |
2358 | Binaries from unbuilt source packages (see above) |
2359 | will not be built or ignored, and dependencies are satisfied with packages from |
2360 | @@ -335,6 +342,22 @@ Disable the apt-get fallback which is used with \fB\-\-apt-pocket\fR or |
2361 | \fB\-\-pin-packages\fR in case installation of dependencies fails due |
2362 | to strict pinning. |
2363 | |
2364 | +.TP |
2365 | +.BI \-\-ignore\-restrictions= RESTRICTION , RESTRICTION... |
2366 | +If a test would normally be skipped because it has |
2367 | +.BI "Restrictions: " RESTRICTION\fR, |
2368 | +run it anyway. Can be specified multiple times. |
2369 | + |
2370 | +For example, you might ignore the restriction |
2371 | +.B isolation\-machine |
2372 | +when using the |
2373 | +.B null |
2374 | +virtualization server if you know that |
2375 | +.B autopkgtest |
2376 | +itself is running on an expendable virtual machine. This option also |
2377 | +works for unknown restrictions, so it can be used when experimenting |
2378 | +with new restrictions. |
2379 | + |
2380 | .SH USER/PRIVILEGE HANDLING OPTIONS |
2381 | |
2382 | .TP |
2383 | @@ -425,6 +448,17 @@ available processors. This is mostly useful in containers where you can |
2384 | restrict the available RAM, but not restrict the number of CPUs. |
2385 | |
2386 | .TP |
2387 | +.BI "--needs-internet=" run | try | skip |
2388 | +Define how to handle the needs\-internet restriction. With "try" tests with |
2389 | +needs-internet restrictions will be run, but if they fail they will be treated |
2390 | +as flaky tests. With "skip" these tests will be skipped immediately and will not |
2391 | +be run. With "run" the restriction is effectively ignored; this is the default. |
2392 | + |
2393 | +.TP |
2394 | +.BR \-V | \-\-validate |
2395 | +Validate the test control file and exit without running any tests. |
2396 | + |
2397 | +.TP |
2398 | .BR \-h | \-\-help |
2399 | Show command line help and exit. |
2400 | |
2401 | diff --git a/setup-commands/ro-apt b/setup-commands/ro-apt |
2402 | index 126ffaf..0995a80 100644 |
2403 | --- a/setup-commands/ro-apt |
2404 | +++ b/setup-commands/ro-apt |
2405 | @@ -10,10 +10,10 @@ |
2406 | |
2407 | set -e |
2408 | M=$(mktemp --directory /run/ro-apt.XXXXX) |
2409 | -mount -t tmpfs tmpfs $M |
2410 | -cp -a /var/lib/dpkg/status /var/lib/dpkg/lock $M |
2411 | -cp -a /var/cache/apt $M/cache_apt |
2412 | -mount -o remount,ro $M |
2413 | -mount -o bind,ro $M/status /var/lib/dpkg/status |
2414 | -mount -o bind,ro $M/lock /var/lib/dpkg/lock |
2415 | -mount -o bind,ro $M/cache_apt /var/cache/apt |
2416 | +mount -t tmpfs tmpfs "$M" |
2417 | +cp -a /var/lib/dpkg/status /var/lib/dpkg/lock "$M" |
2418 | +cp -a /var/cache/apt "$M/cache_apt" |
2419 | +mount -o remount,ro "$M" |
2420 | +mount -o bind,ro "$M/status" /var/lib/dpkg/status |
2421 | +mount -o bind,ro "$M/lock" /var/lib/dpkg/lock |
2422 | +mount -o bind,ro "$M/cache_apt" /var/cache/apt |
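The ro-apt hunk only adds double quotes around `$M`. Why that matters can be shown with a hypothetical mount-point path containing a space:

```shell
#!/bin/sh
# Hypothetical path with a space, to show why "$M" must be quoted (SC2086).
M="/tmp/ro apt demo"
set -- $M           # unquoted: the shell splits on the space
echo "unquoted: $# words"
set -- "$M"         # quoted: the path stays one argument
echo "quoted: $# words"
```

The unquoted form yields two words and would make `mount` operate on the wrong path; the quoted form keeps it as one argument.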
2423 | diff --git a/setup-commands/setup-testbed b/setup-commands/setup-testbed |
2424 | index 835da0a..8b31d53 100755 |
2425 | --- a/setup-commands/setup-testbed |
2426 | +++ b/setup-commands/setup-testbed |
2427 | @@ -82,7 +82,7 @@ EOF |
2428 | fi |
2429 | |
2430 | # serial console for upstart |
2431 | -if [ -e "$root/etc/init/tty2.conf" -a ! -e "$root/etc/init/ttyS0.conf" ]; then |
2432 | +if [ -e "$root/etc/init/tty2.conf" ] && ! [ -e "$root/etc/init/ttyS0.conf" ]; then |
2433 | sed 's/tty2/ttyS0/g; s! *exec.*$!exec /sbin/getty -L ttyS0 115200 vt102!' \ |
2434 | "$root/etc/init/tty2.conf" > "$root/etc/init/ttyS0.conf" |
2435 | fi |
2436 | @@ -122,13 +122,14 @@ fi |
2437 | |
2438 | # set up apt sources |
2439 | if [ -e "$root/etc/os-release" ]; then |
2440 | - DISTRO_ID=`. "$root/etc/os-release" && echo "$ID" || echo INVALID` |
2441 | + # shellcheck disable=SC1090 |
2442 | + DISTRO_ID=$(. "$root/etc/os-release" && echo "$ID" || echo INVALID) |
2443 | fi |
2444 | if [ -z "${MIRROR:-}" ]; then |
2445 | - MIRROR=`awk '/^deb .*'"$DISTRO_ID"'/ { sub(/\[.*\]/, "", $0); print $2; exit }' "$root/etc/apt/sources.list"` |
2446 | + MIRROR=$(awk '/^deb .*'"$DISTRO_ID"'/ { sub(/\[.*\]/, "", $0); print $2; exit }' "$root/etc/apt/sources.list") |
2447 | fi |
2448 | if [ -z "${RELEASE:-}" ]; then |
2449 | - RELEASE=`awk '/^deb .*'"$DISTRO_ID"'/ { sub(/\[.*\]/, "", $0); print $3; exit }' "$root/etc/apt/sources.list"` |
2450 | + RELEASE=$(awk '/^deb .*'"$DISTRO_ID"'/ { sub(/\[.*\]/, "", $0); print $3; exit }' "$root/etc/apt/sources.list") |
2451 | fi |
2452 | |
2453 | if [ -n "${AUTOPKGTEST_KEEP_APT_SOURCES:-}" ]; then |
2454 | @@ -143,10 +144,14 @@ else |
2455 | echo "$0: Attempting to set up Debian/Ubuntu apt sources automatically" >&2 |
2456 | |
2457 | if [ -z "$RELEASE" ]; then |
2458 | + # Deliberately not expanding $RELEASE here |
2459 | + # shellcheck disable=SC2016 |
2460 | echo 'Failed to auto-detect distribution release name; set $RELEASE explicitly' >&2 |
2461 | exit 1 |
2462 | fi |
2463 | if [ -z "$MIRROR" ]; then |
2464 | + # Deliberately not expanding $MIRROR here |
2465 | + # shellcheck disable=SC2016 |
2466 | echo 'Failed to auto-detect apt mirror; set $MIRROR explicitly' >&2 |
2467 | exit 1 |
2468 | fi |
2469 | @@ -207,13 +212,14 @@ if [ -z "${AUTOPKGTEST_IS_SETUP_COMMAND:-}" ] && |
2470 | if [ -n "$IFACE" ] ; then |
2471 | mkdir -p "$root/etc/network/interfaces.d" |
2472 | if ! grep -h -r "^[[:space:]]*auto.*$IFACE" "$root/etc/network/interfaces" "$root/etc/network/interfaces.d" | grep -qv 'auto[[:space:]]*lo'; then |
2473 | - printf "auto $IFACE\niface $IFACE inet dhcp\n" >> "$root/etc/network/interfaces.d/$IFACE" |
2474 | + printf 'auto %s\niface %s inet dhcp\n' "$IFACE" "$IFACE" >> "$root/etc/network/interfaces.d/$IFACE" |
2475 | fi |
2476 | fi |
2477 | fi |
2478 | |
2479 | # go-faster apt/dpkg |
2480 | echo "Acquire::Languages \"none\";" > "$root"/etc/apt/apt.conf.d/90nolanguages |
2481 | +echo "Acquire::Retries \"10\";" > "$root"/etc/apt/apt.conf.d/90retry |
2482 | echo 'force-unsafe-io' > "$root"/etc/dpkg/dpkg.cfg.d/autopkgtest |
2483 | |
2484 | # support backwards compatible env var too |
2485 | @@ -221,10 +227,12 @@ AUTOPKGTEST_APT_PROXY=${AUTOPKGTEST_APT_PROXY:-${ADT_APT_PROXY:-}} |
2486 | |
2487 | # detect apt proxy on the host (in chroot mode) |
2488 | if [ "$root" != "/" ] && [ -z "$AUTOPKGTEST_APT_PROXY" ]; then |
2489 | - RES=`apt-config shell proxy Acquire::http::Proxy` |
2490 | + RES=$(apt-config shell proxy Acquire::http::Proxy) |
2491 | if [ -n "$RES" ]; then |
2492 | - eval $RES |
2493 | - if echo "$proxy" | egrep -q '(localhost|127\.0\.0\.[0-9]*)'; then |
2494 | + # evaluating $RES will set proxy, but shellcheck can't know that |
2495 | + proxy= |
2496 | + eval "$RES" |
2497 | + if echo "$proxy" | grep -E -q '(localhost|127\.0\.0\.[0-9]*)'; then |
2498 | AUTOPKGTEST_APT_PROXY=$(echo "$proxy" | sed -r "s#localhost|127\.0\.0\.[0-9]*#10.0.2.2#") |
2499 | elif [ -n "${proxy:-}" ]; then |
2500 | AUTOPKGTEST_APT_PROXY="$proxy" |
2501 | @@ -238,7 +246,7 @@ if [ "$root" != "/" ] && [ -e /etc/resolv.conf ]; then |
2502 | mv "$root/etc/resolv.conf" "$root/etc/resolv.conf.vmdebootstrap" |
2503 | fi |
2504 | cat /etc/resolv.conf > "$root/etc/resolv.conf" |
2505 | - trap "if [ -e '$root/etc/resolv.conf.vmdebootstrap' ]; then mv '$root/etc/resolv.conf.vmdebootstrap' '$root/etc/resolv.conf'; fi" EXIT INT QUIT PIPE |
2506 | + trap 'if [ -e "$root/etc/resolv.conf.vmdebootstrap" ]; then mv "$root/etc/resolv.conf.vmdebootstrap" "$root/etc/resolv.conf"; fi' EXIT INT QUIT PIPE |
2507 | fi |
2508 | |
2509 | if [ -z "${AUTOPKGTEST_IS_SETUP_COMMAND:-}" ]; then |
2510 | @@ -262,8 +270,18 @@ if [ ! -e "$root/usr/bin/gpg" ]; then |
2511 | fi |
2512 | |
2513 | if ! systemd-detect-virt --quiet --container; then |
2514 | +<<<<<<< setup-commands/setup-testbed |
2515 | chroot "$root" apt-get install -y haveged </dev/null |
2516 | fi |
2517 | +======= |
2518 | + chroot "$root" apt-get install -y rng-tools </dev/null |
2519 | +fi |
2520 | + |
2521 | +if ! systemd-detect-virt --quiet --container; then |
2522 | + chroot "$root" apt-get install -y haveged </dev/null |
2523 | +fi |
2524 | + |
2525 | +>>>>>>> setup-commands/setup-testbed |
2526 | if [ ! -e "$root/usr/share/doc/libpam-systemd" ] && chroot "$root" apt-cache show libpam-systemd >/dev/null 2>&1; then |
2527 | chroot "$root" apt-get install -y libpam-systemd </dev/null |
2528 | fi |
2529 | @@ -285,12 +303,14 @@ if [ -z "${AUTOPKGTEST_IS_SETUP_COMMAND:-}" ]; then |
2530 | cgmanager lxc-common lxc lxd lxd-client open-iscsi mdadm dmeventd lvm2 \ |
2531 | unattended-upgrades update-notifier-common ureadahead debootstrap \ |
2532 | lxcfs ppp pppconfig pppoeconf snapd snap-confine ubuntu-core-launcher \ |
2533 | - thermald xdg-user-dirs zerofree xml-core; do |
2534 | + thermald xdg-user-dirs zerofree xml-core needrestart; do |
2535 | if [ -d "$root/usr/share/doc/$p" ]; then |
2536 | purge_list="$purge_list $p" |
2537 | fi |
2538 | done |
2539 | if [ -n "$purge_list" ]; then |
2540 | + # Deliberately word-splitting $purge_list: |
2541 | + # shellcheck disable=SC2086 |
2542 | chroot "$root" eatmydata apt-get --auto-remove -y purge $purge_list || true |
2543 | fi |
2544 | |
2545 | @@ -310,11 +330,11 @@ else |
2546 | fi |
2547 | |
2548 | if grep -q buntu "$root/etc/os-release" "$root/etc/lsb-release"; then |
2549 | - if ls $root/boot/vmlinu* >/dev/null 2>&1; then |
2550 | + if ls "$root"/boot/vmlinu* >/dev/null 2>&1; then |
2551 | # provides kmods like scsi_debug or mac80211_hwsim on Ubuntu |
2552 | chroot "$root" eatmydata apt-get install -y linux-generic < /dev/null |
2553 | else |
2554 | - if [ "$RELEASE" = precise -a "$ARCH" = armhf ]; then |
2555 | + if [ "$RELEASE" = precise ] && [ "$ARCH" = armhf ]; then |
2556 | # no linux-image-generic in precise/armhf yet |
2557 | chroot "$root" eatmydata apt-get install -y linux-headers-omap < /dev/null |
2558 | else |
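The setup-testbed hunks above apply three shellcheck-driven modernizations: backticks become `$(...)`, `printf` gets a fixed format string, and the obsolescent `[ a -a b ]` becomes two chained tests. A combined sketch (variable values made up for illustration):

```shell
#!/bin/sh
# $(...) nests cleanly where legacy backticks need escaping (SC2006).
RELEASE=$(echo precise)
ARCH=$(echo armhf)
# Chaining two [ ] tests with && replaces the obsolescent -a operator (SC2166).
if [ "$RELEASE" = precise ] && [ "$ARCH" = armhf ]; then
    # A fixed %s format means printf never interprets data as a format (SC2059).
    printf 'release=%s arch=%s\n' "$RELEASE" "$ARCH"
fi
```

This prints `release=precise arch=armhf` without any risk of the data being parsed as format directives.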
2559 | diff --git a/ssh-setup/SKELETON b/ssh-setup/SKELETON |
2560 | index d643e57..96b8e4d 100644 |
2561 | --- a/ssh-setup/SKELETON |
2562 | +++ b/ssh-setup/SKELETON |
2563 | @@ -65,15 +65,15 @@ debug_failure() { |
2564 | |
2565 | case "$1" in |
2566 | open) |
2567 | - open $@;; |
2568 | + open "$@";; |
2569 | cleanup) |
2570 | - cleanup $@;; |
2571 | + cleanup "$@";; |
2572 | revert) |
2573 | - revert $@;; |
2574 | + revert "$@";; |
2575 | wait-reboot) |
2576 | - wait_reboot $@;; |
2577 | + wait_reboot "$@";; |
2578 | debug-failure) |
2579 | - debug_failure $@;; |
2580 | + debug_failure "$@";; |
2581 | '') |
2582 | echo "Needs to be called with command as first argument" >&2 |
2583 | exit 1 |
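The SKELETON change quotes `$@` in the dispatch, so multi-word arguments reach the handlers intact. A small demonstration of the difference:

```shell
#!/bin/sh
count() { echo "$#"; }
set -- "ssh -o User=root" testbed   # two arguments, first contains spaces
count $@      # unquoted: re-split into 4 words
count "$@"    # quoted: the original 2 arguments are preserved
```

Unquoted `$@` prints 4; quoted `"$@"` prints 2, matching what the caller passed.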
2584 | diff --git a/ssh-setup/nova b/ssh-setup/nova |
2585 | index b7b5a59..0b27234 100755 |
2586 | --- a/ssh-setup/nova |
2587 | +++ b/ssh-setup/nova |
2588 | @@ -253,7 +253,7 @@ EOF |
2589 | EXTRA_OPTS='' |
2590 | if [ -n "$NET_ID" ]; then |
2591 | # translate a name into a UUID |
2592 | - OUT=$(nova network-show $NET_ID) |
2593 | + OUT=$(openstack network show $NET_ID 2>/dev/null) |
2594 | NET_ID="$(echo "$OUT"| awk -F'|' '/ id / {gsub(" ", "", $3); print $3}')" |
2595 | EXTRA_OPTS="$EXTRA_OPTS --nic net-id=$NET_ID" |
2596 | fi |
2597 | @@ -430,7 +430,7 @@ if [ $# -eq 0 ]; then |
2598 | error "Invalid number of arguments, command is missing" |
2599 | exit 1 |
2600 | fi |
2601 | -cmd=$(echo $1|tr [[:upper:]] [[:lower:]]) |
2602 | +cmd=$(echo $1|tr '[[:upper:]]' '[[:lower:]]') |
2603 | shift |
2604 | parse_args "$@" |
2605 | |
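The nova fix quotes `tr`'s character classes so the shell cannot glob-expand them against files in the current directory (SC2060). A sketch using the standard single-bracket class form:

```shell
#!/bin/sh
# Quoting stops the shell from treating the class as a glob pattern.
cmd=$(echo "Open" | tr '[:upper:]' '[:lower:]')
echo "$cmd"   # → open
```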
2606 | diff --git a/tests/autopkgtest b/tests/autopkgtest |
2607 | index 8a6949c..0b13e62 100755 |
2608 | --- a/tests/autopkgtest |
2609 | +++ b/tests/autopkgtest |
2610 | @@ -325,6 +325,123 @@ class DebTestsAll: |
2611 | self.assertEqual(code, 0, err) |
2612 | self.assertRegex(out, r'needs-magic\s+PASS', out) |
2613 | |
2614 | + def test_needs_internet_success(self): |
2615 | + '''A needs-internet test succeeds''' |
2616 | + p = self.build_src('Tests: downloads-data\nRestrictions: needs-internet\nDepends: coreutils\n', |
2617 | + {'downloads-data': '#!/bin/sh\necho I am fine\n'}) |
2618 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', p]) |
2619 | + self.assertEqual(code, 0, err) |
2620 | + self.assertRegex(out, r'downloads-data\s+PASS', out) |
2621 | + |
2622 | + def test_needs_internet_skipped(self): |
2623 | + '''A needs-internet test is skipped''' |
2624 | + p = self.build_src('Tests: downloads-data\nRestrictions: needs-internet\nDepends: coreutils\n', |
2625 | + {'downloads-data': '#!/bin/sh\necho I am fine\n'}) |
2626 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', '--needs-internet=skip', p]) |
2627 | + self.assertEqual(code, 8, err) |
2628 | + self.assertRegex(out, r'downloads-data\s+SKIP Test needs unrestricted internet', out) |
2629 | + |
2630 | + def test_needs_internet_tried_success(self): |
2631 | + '''A needs-internet test is tried and succeeds''' |
2632 | + p = self.build_src('Tests: downloads-data\nRestrictions: needs-internet\nDepends: coreutils\n', |
2633 | + {'downloads-data': '#!/bin/sh\necho I am fine\n'}) |
2634 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', '--needs-internet=try', p]) |
2635 | + self.assertEqual(code, 0, err) |
2636 | + self.assertRegex(out, r'downloads-data\s+PASS', out) |
2637 | + |
2638 | + def test_needs_internet_tried_skipped(self): |
2639 | + '''A needs-internet test is tried and skipped''' |
2640 | + p = self.build_src('Tests: downloads-data\nRestrictions: needs-internet\nDepends: coreutils\n', |
2641 | + {'downloads-data': '#!/bin/sh\necho I am sick\nexit 7\n'}) |
2642 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', '--needs-internet=try', p]) |
2643 | + self.assertEqual(code, 8, err) |
2644 | + self.assertRegex(out, r'downloads-data\s+SKIP Failed, but test has needs-internet', out) |
2645 | + |
2646 | + @unittest.skipIf(host_arch != 'amd64', 'needs to run on amd64') |
2647 | + def test_arch_in_supported_list(self): |
2648 | +        '''A test on an explicitly supported architecture succeeds''' |
2649 | + p = self.build_src('Tests: pass\nArchitecture: amd63 amd64\nDepends: coreutils\n', |
2650 | + {'pass': '#!/bin/sh\necho I am fine\n'}) |
2651 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', p]) |
2652 | + self.assertEqual(code, 0, err) |
2653 | + self.assertRegex(out, r'pass\s+PASS', out) |
2654 | + |
2655 | + def test_arch_not_in_negated_list(self): |
2656 | + '''A test on an implicitly supported architecture succeeds''' |
2657 | + p = self.build_src('Tests: pass\nArchitecture: !amd63 !amd63\nDepends: coreutils\n', |
2658 | + {'pass': '#!/bin/sh\necho I am fine\n'}) |
2659 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', p]) |
2660 | + self.assertEqual(code, 0, err) |
2661 | + self.assertRegex(out, r'pass\s+PASS', out) |
2662 | + |
2663 | + @unittest.skipIf(host_arch != 'amd64', 'needs to run on amd64') |
2664 | + def test_arch_in_unsupported_list(self): |
2665 | +        '''A test on an explicitly unsupported architecture is skipped''' |
2666 | + p = self.build_src('Tests: skip-me\nArchitecture: !amd64\nDepends: coreutils\n', |
2667 | + {'skip-me': '#!/bin/sh\necho I am fine\n'}) |
2668 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', p]) |
2669 | + self.assertEqual(code, 8, err) |
2670 | + self.assertRegex(out, r'skip-me\s+SKIP Test declares architecture as not supported', out) |
2671 | + |
2672 | + @unittest.skipIf(host_arch != 'amd64', 'needs to run on amd64') |
2673 | + def test_arch_in_unsupported_wildcard_list(self): |
2674 | +        '''A test on an explicitly unsupported wildcard architecture is skipped''' |
2675 | + p = self.build_src('Tests: skip-me\nArchitecture: !linux-any\nDepends: coreutils\n', |
2676 | + {'skip-me': '#!/bin/sh\necho I am fine\n'}) |
2677 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', p]) |
2678 | + self.assertEqual(code, 8, err) |
2679 | + self.assertRegex(out, r'skip-me\s+SKIP Test declares architecture as not supported', out) |
2680 | + |
2681 | + def test_arch_not_in_supported_list(self): |
2682 | + '''A test on an implicitly unsupported architecture is skipped''' |
2683 | + p = self.build_src('Tests: skip-me\nArchitecture: amd63\nDepends: coreutils\n', |
2684 | + {'skip-me': '#!/bin/sh\necho I am fine\n'}) |
2685 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', p]) |
2686 | + self.assertEqual(code, 8, err) |
2687 | + self.assertRegex(out, r'skip-me\s+SKIP Test lists explicitly supported architectures', out) |
2688 | + |
2689 | + @unittest.skipIf(host_arch == 'hurd', 'needs to run on !hurd') |
2690 | + def test_arch_not_in_supported_wildcard_list(self): |
2691 | + '''A test on an implicitly unsupported wildcard architecture is skipped''' |
2692 | + p = self.build_src('Tests: skip-me\nArchitecture: hurd-any\nDepends: coreutils\n', |
2693 | + {'skip-me': '#!/bin/sh\necho I am fine\n'}) |
2694 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', p]) |
2695 | + self.assertEqual(code, 8, err) |
2696 | + self.assertRegex(out, r'skip-me\s+SKIP Test lists explicitly supported architectures', out) |
2697 | + |
2698 | + def test_arch_in_mixed_list_skipped_wildcard(self): |
2699 | +        '''A test with a mixed architecture list containing a wildcard is skipped''' |
2700 | + p = self.build_src('Tests: skip-me\nArchitecture: !amd63 linux-any\nDepends: coreutils\n', |
2701 | + {'skip-me': '#!/bin/sh\necho I am fine\n'}) |
2702 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', p]) |
2703 | + self.assertEqual(code, 8, err) |
2704 | + self.assertRegex(out, r'skip-me\s+SKIP It is not', out) |
2705 | + |
2706 | + @unittest.skipIf(host_arch != 'amd64', 'needs to run on amd64') |
2707 | + def test_arch_in_mixed_list_skipped_explicit(self): |
2708 | +        '''A test in a mixed list on an unsupported architecture is skipped''' |
2709 | + p = self.build_src('Tests: skip-me\nArchitecture: !amd64 linux-any\nDepends: coreutils\n', |
2710 | + {'skip-me': '#!/bin/sh\necho I am fine\n'}) |
2711 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', p]) |
2712 | + self.assertEqual(code, 8, err) |
2713 | + self.assertRegex(out, r'skip-me\s+SKIP Test declares architecture as not supported', out) |
2714 | + |
2715 | + def test_arch_any(self): |
2716 | + '''A test on "any" architecture succeeds''' |
2717 | + p = self.build_src('Tests: pass\nArchitecture: any\nDepends: coreutils\n', |
2718 | + {'pass': '#!/bin/sh\necho I am fine\n'}) |
2719 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', p]) |
2720 | + self.assertEqual(code, 0, err) |
2721 | + self.assertRegex(out, r'pass\s+PASS', out) |
2722 | + |
2723 | + def test_arch_all(self): |
2724 | +        '''A test on "all" architecture is skipped''' |
2725 | + p = self.build_src('Tests: skip-me\nArchitecture: all\nDepends: coreutils\n', |
2726 | + {'skip-me': '#!/bin/sh\necho I am fine\n'}) |
2727 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', p]) |
2728 | + self.assertEqual(code, 8, err) |
2729 | + self.assertRegex(out, r"skip-me\s+SKIP Arch 'all' not", out) |
2730 | + |
2731 | |
2732 | class DebTestsFailureModes: |
2733 | '''Common deb tests for handling various failure modes |
2734 | @@ -1110,104 +1227,18 @@ bad FAIL non-zero exit status 1 |
2735 | self.assertRegex(out, r'SKIP no tests in this package', out) |
2736 | |
2737 | @unittest.skipIf(os.getuid() == 0, 'needs to run as user') |
2738 | - @unittest.skipIf(os.path.exists('/usr/bin/dotty'), |
2739 | - 'needs graphviz uninstalled') |
2740 | - @unittest.skipUnless(have_apt, 'needs apt-get download working') |
2741 | - def test_tmp_install(self): |
2742 | - '''temp dir unpack of test dependencies''' |
2743 | - |
2744 | - p = self.build_src('Tests: t\nDepends: graphviz, gir1.2-json-1.0 (>= 0.14), python3-gi, a|b,', |
2745 | - {'t': '#!/bin/sh\ndotty -V 2>&1 || true\n' |
2746 | - 'python3 -c "import gi; gi.require_version(\'Json\', \'1.0\'); from gi.repository import Json; print(Json)"'}) |
2747 | - |
2748 | - (code, out, err) = self.runtest(['-B', p]) |
2749 | - self.assertEqual(code, 0, err) |
2750 | - self.assertRegex(out, r't\s+PASS', out) |
2751 | - |
2752 | - # should show test stdout |
2753 | - self.assertIn('dotty version ', out) |
2754 | - try: |
2755 | - from gi.repository import Json |
2756 | - Json # pyflakes |
2757 | - # already installed on the system |
2758 | - self.assertRegex(out, r'(Dynamic|Introspection)Module.*Json.* from .*/usr/lib/.*girepository') |
2759 | - except ImportError: |
2760 | - # should use from local unpack dir |
2761 | - self.assertRegex(out, r'(Dynamic|Introspection)Module.*Json.* from .*tmp.*/deps/usr/lib') |
2762 | - # no stderr |
2763 | - self.assertNotIn(' stderr ', err) |
2764 | - |
2765 | - # downloads dependencies |
2766 | - self.assertIn('libcgraph', err) |
2767 | - self.assertIn('libcgraph', err) |
2768 | - |
2769 | - # warn about restricted functionality |
2770 | - self.assertRegex(err, r'WARNING.*cannot be handled.* a | b') |
2771 | - self.assertRegex(err, r'WARNING.*will only work for some packages') |
2772 | - |
2773 | - @unittest.skipIf(os.getuid() == 0, 'needs to run as user') |
2774 | - @unittest.skipIf(os.path.exists('/usr/share/perl5/Test/Requires.pm'), |
2775 | - 'needs libtest-requires-perl uninstalled') |
2776 | - @unittest.skipIf(os.path.exists('/usr/share/doc/libconvert-uulib-perl'), |
2777 | - 'needs libconvert-uulib-perl uninstalled') |
2778 | - @unittest.skipUnless(have_apt, 'needs apt-get download working') |
2779 | - def test_tmp_install_perl(self): |
2780 | - '''temp dir unpack of Perl dependencies''' |
2781 | - |
2782 | - # one arch: all, one binary |
2783 | - p = self.build_src('Tests: t\nDepends: libtest-requires-perl, libconvert-uulib-perl', |
2784 | - {'t': '#!/usr/bin/perl\nuse Test::Requires;\nuse Convert::UUlib;\n'}) |
2785 | - |
2786 | - (code, out, err) = self.runtest(['-B', p]) |
2787 | - self.assertEqual(code, 0, err) |
2788 | - self.assertRegex(out, r't\s+PASS', out) |
2789 | - |
2790 | - @unittest.skipIf(os.getuid() == 0, 'needs to run as user') |
2791 | - @unittest.skipIf(os.path.exists('/usr/lib/python3/dist-packages/wand/'), |
2792 | - 'needs python3-wand uninstalled') |
2793 | - @unittest.skipUnless(have_apt and subprocess.call(['apt-cache', 'show', 'python3-wand'], |
2794 | - stdout=subprocess.PIPE, |
2795 | - stderr=subprocess.STDOUT) == 0, |
2796 | - 'needs python3-wand package') |
2797 | - def test_tmp_install_imagemagick(self): |
2798 | - '''temp dir unpack of imagemagick dependencies''' |
2799 | - |
2800 | - p = self.build_src('Tests: t\nDepends: python3-wand', |
2801 | - {'t': '#!/usr/bin/env python3\nfrom wand.image import Image\n'}) |
2802 | - |
2803 | - (code, out, err) = self.runtest(['-d', '-B', p]) |
2804 | - self.assertEqual(code, 0, err) |
2805 | - self.assertRegex(out, r't\s+PASS', out) |
2806 | - |
2807 | - # no stderr |
2808 | - self.assertNotIn(' stderr ', err) |
2809 | - |
2810 | - @unittest.skipIf(os.getuid() == 0, 'needs to run as user') |
2811 | - def test_tmp_install_nonexisting_pkg(self): |
2812 | - '''temp dir unpack of nonexisting test dependency''' |
2813 | - |
2814 | - p = self.build_src('Tests: t\nDepends: nosuchpackage', |
2815 | - {'t': '#!/bin/sh\nfalse'}) |
2816 | - |
2817 | - (code, out, err) = self.runtest(['-B', p]) |
2818 | + def test_no_apt_install_with_missing_dependency(self): |
2819 | + '''package with missing tests dependencies without available apt-get install''' |
2820 | +        '''package with missing test dependencies when apt-get install is not available''' |
2821 | + (code, out, err) = self.runtest(['--no-built-binaries', p]) |
2822 | self.assertEqual(code, 12, err) |
2823 | |
2824 | - self.assertRegex(err, r'E: .*nosuchpackage') |
2825 | - self.assertIn('Test dependencies are unsatisfiable', err) |
2826 | - |
2827 | @unittest.skipIf(os.getuid() == 0, 'needs to run as user') |
2828 | - @unittest.skipIf(os.path.exists('/usr/bin/dotty'), |
2829 | - 'needs graphviz uninstalled') |
2830 | - def test_tmp_install_no_such_version(self): |
2831 | - '''temp dir unpack of test dependency with unsatisfiable version''' |
2832 | - |
2833 | - p = self.build_src('Tests: t\nDepends: graphviz (>= 4:999)', |
2834 | - {'t': '#!/bin/sh\nfalse'}) |
2835 | - |
2836 | - (code, out, err) = self.runtest(['-B', p]) |
2837 | - self.assertEqual(code, 12, err) |
2838 | - |
2839 | - self.assertIn('test dependency graphviz (>= 4:999) is unsatisfiable: available version ', err) |
2840 | + def test_no_apt_install_with_dependencies_satisfied(self): |
2841 | + '''package with dependencies satisfied while apt-get install is not available''' |
2842 | + p = self.build_src('Test-Command: /bin/true\nDepends: coreutils\n', {}) |
2843 | + (code, out, err) = self.runtest(['--no-built-binaries', p]) |
2844 | + self.assertEqual(code, 0, err) |
2845 | |
2846 | def test_test_command(self): |
2847 | '''Test-Command: instead of Tests:''' |
2848 | @@ -1348,6 +1379,25 @@ if ($pid) { # parent |
2849 | self.assertNotIn('1_ONE', out) |
2850 | self.assertNotIn('3_THREE', out) |
2851 | |
2852 | + def test_testname_with_others_skipped(self): |
2853 | + '''Run only one specified test between skipped tests''' |
2854 | + |
2855 | + p = self.build_src('Tests: one three\nDepends:\nRestrictions: needs-quantum-computer\n\nTests: two\nDepends:', |
2856 | + {'one': '#!/bin/sh\necho 1_ONE', |
2857 | + 'two': '#!/bin/sh\necho 2_TWO', |
2858 | + 'three': '#!/bin/sh\necho 3_THREE'}) |
2859 | + |
2860 | + (code, out, err) = self.runtest(['-B', '--test-name', 'two', p]) |
2861 | + |
2862 | + self.assertEqual(code, 0, err) |
2863 | + self.assertRegex(out, r'two\s+PASS', out) |
2864 | + self.assertNotIn('one', out) |
2865 | + self.assertNotIn('three', out) |
2866 | + |
2867 | + self.assertIn('2_TWO', out) |
2868 | + self.assertNotIn('1_ONE', out) |
2869 | + self.assertNotIn('3_THREE', out) |
2870 | + |
2871 | def test_testname_noexist(self): |
2872 | '''Run only one specified test which does not exist''' |
2873 | |
2874 | @@ -1585,6 +1635,7 @@ if ($pid) { # parent |
2875 | self.assertRegex(out, r'command1\s+SKIP Test needs to reboot testbed but testbed does not provide reboot capability') |
2876 | self.assertRegex(out, r'command2\s+PASS') |
2877 | |
2878 | + @unittest.skipIf(os.getuid() == 0, 'failure mode for root is different and tested elsewhere') |
2879 | def test_broken_test_deps(self): |
2880 | '''unsatisfiable test dependencies''' |
2881 | |
2882 | @@ -1593,8 +1644,8 @@ if ($pid) { # parent |
2883 | |
2884 | (code, out, err) = self.runtest(['--no-built-binaries', p]) |
2885 | self.assertEqual(code, 12, err) |
2886 | - self.assertIn('Test dependencies are unsatisfiable', err) |
2887 | - self.assertIn('Test dependencies are unsatisfiable', out) |
2888 | + self.assertIn('test dependencies missing', err) |
2889 | + self.assertIn('test dependencies missing', out) |
2890 | |
2891 | def test_continue_with_other_tests(self): |
2892 | '''Install failure should continue with next test''' |
2893 | @@ -1634,6 +1685,40 @@ if ($pid) { # parent |
2894 | self.assertRegex(out, r'notinstallable\s+SKIP installation fails', out) |
2895 | self.assertRegex(out, r'ok\s+PASS', out) |
2896 | |
2897 | + def test_validate(self): |
2898 | + '''--validate command line options''' |
2899 | + p = self.build_src('Tests: hello-world\n' |
2900 | + 'Depends: coreutils\n', |
2901 | + {'hello-world': '#!/bin/sh\necho "HELLO WORLD"\n'}) |
2902 | + (code, out, err) = self.runtest(['-d', '--no-built-binaries', '--validate', p]) |
2903 | + self.assertEqual(code, 0, err) |
2904 | + self.assertIn('Test specification is valid', out) |
2905 | + self.assertNotIn('HELLO WORLD', out) |
2906 | + |
2907 | + def test_unknown_restriction(self): |
2908 | + '''test with unknown restriction gets skipped''' |
2909 | + |
2910 | + p = self.build_src('Test-Command: true\nDepends:\nRestrictions: needs-reassurance', {}) |
2911 | + (code, out, err) = self.runtest(['-d', '-B', p]) |
2912 | + self.assertEqual(code, 8, err) |
2913 | + self.assertRegex(out, r'command1\s+SKIP unknown restriction needs-reassurance') |
2914 | + |
2915 | + def test_unknown_derestriction(self): |
2916 | + '''--ignore-restrictions is respected for unknown restrictions''' |
2917 | + |
2918 | + p = self.build_src('Test-Command: true\nDepends:\nRestrictions: needs-reassurance', {}) |
2919 | + (code, out, err) = self.runtest(['-d', '-B', '--ignore-restrictions=needs-reassurance', p]) |
2920 | + self.assertEqual(code, 0, out + err) |
2921 | + self.assertRegex(out, r'command1\s+PASS', out) |
2922 | + |
2923 | + def test_known_derestriction(self): |
2924 | + '''--ignore-restrictions is respected for known restrictions''' |
2925 | + |
2926 | + p = self.build_src('Test-Command: true\nDepends:\nRestrictions: needs-reboot', {}) |
2927 | + (code, out, err) = self.runtest(['-d', '-B', '--ignore-restrictions=needs-reboot', p]) |
2928 | + self.assertEqual(code, 0, out + err) |
2929 | + self.assertRegex(out, r'command1\s+PASS', out) |
2930 | + |
2931 | |
2932 | @unittest.skipIf(os.getuid() > 0, |
2933 | 'NullRunnerRoot tests need to run as root') |
2934 | @@ -1641,6 +1726,17 @@ class NullRunnerRoot(AdtTestCase): |
2935 | def __init__(self, *args, **kwargs): |
2936 | super(NullRunnerRoot, self).__init__(['null'], *args, **kwargs) |
2937 | |
2938 | + def test_broken_test_deps(self): |
2939 | + '''unsatisfiable test dependencies''' |
2940 | + |
2941 | + p = self.build_src('Tests: p\nDepends: unknown, libc6 (>= 99:99)\n', |
2942 | + {'p': '#!/bin/sh -e\ntrue'}) |
2943 | + |
2944 | + (code, out, err) = self.runtest(['--no-built-binaries', p]) |
2945 | + self.assertEqual(code, 12, err) |
2946 | + self.assertIn('Test dependencies are unsatisfiable', err) |
2947 | + self.assertIn('Test dependencies are unsatisfiable', out) |
2948 | + |
2949 | def test_tmpdir_for_other_users(self): |
2950 | '''$TMPDIR is accessible to non-root users''' |
2951 | |
2952 | @@ -1880,7 +1976,7 @@ Restrictions: needs-root |
2953 | self.assertIn('synthesised dependency bdep4\n', err) |
2954 | self.assertIn('synthesised dependency bdep5\n', err) |
2955 | self.assertIn('synthesised dependency bdep6\n', err) |
2956 | - self.assertIn('synthesised dependency build-essential\n', err) |
2957 | + self.assertIn('synthesised dependency build-essential:native\n', err) |
2958 | self.assertIn('processing dependency testdep2\n', err) |
2959 | |
2960 | def test_build_deps_profiles(self): |
2961 | @@ -1900,7 +1996,7 @@ Restrictions: needs-root |
2962 | self.assertRegex(out, r'pass\s+PASS') |
2963 | |
2964 | self.assertIn('synthesised dependency bdepyes\n', err) |
2965 | - self.assertIn('synthesised dependency build-essential\n', err) |
2966 | + self.assertIn('synthesised dependency build-essential:native\n', err) |
2967 | |
2968 | dpkg_deps_ver = subprocess.check_output(['perl', '-MDpkg::Deps', '-e', 'print $Dpkg::Deps::VERSION'], |
2969 | universal_newlines=True) |
2970 | @@ -2169,10 +2265,10 @@ deb http://foo.ubuntu.com/ fluffy-proposed restricted |
2971 | apt_dir = os.path.join(self.chroot, 'etc', 'apt') |
2972 | with open(os.path.join(apt_dir, 'sources.list'), 'w') as f: |
2973 | f.write('''# comment |
2974 | -deb http://foo.ubuntu.com/ fluffy-updates main non-free |
2975 | -deb-src http://foo.ubuntu.com/ fluffy-updates main non-free |
2976 | deb http://foo.ubuntu.com/ fluffy main non-free |
2977 | deb-src http://foo.ubuntu.com/ fluffy main non-free |
2978 | +deb http://foo.ubuntu.com/ fluffy-updates main non-free |
2979 | +deb-src http://foo.ubuntu.com/ fluffy-updates main non-free |
2980 | deb [trusted=yes arch=6510] http://foo.ubuntu.com/ fluffy main 6510 |
2981 | ''') |
2982 | |
2983 | @@ -2424,6 +2520,18 @@ Pin-Priority: 990 |
2984 | # don't create empty/bogus files |
2985 | self.assertNotIn('testbed-packages', os.listdir(outdir)) |
2986 | |
2987 | + @unittest.skip('chroot runner tests fake testbed_arch as powerpc') |
2988 | + def test_arch_in_mixed_list_skipped_explicit(self): |
2989 | + pass |
2990 | + |
2991 | + @unittest.skip('chroot runner tests fake testbed_arch as powerpc') |
2992 | + def test_arch_in_unsupported_list(self): |
2993 | + pass |
2994 | + |
2995 | + @unittest.skip('chroot runner tests fake testbed_arch as powerpc') |
2996 | + def test_arch_in_supported_list(self): |
2997 | + pass |
2998 | + |
2999 | |
3000 | class DebTestsVirtFS(DebTestsAll): |
3001 | '''Common tests for runners with file system virtualization''' |
3002 | @@ -2540,7 +2648,7 @@ class DebTestsVirtFS(DebTestsAll): |
3003 | self.assertIn('dh build', err) |
3004 | |
3005 | # test should run as user |
3006 | - lines = [l for l in out.splitlines() if l.startswith('XXX')] |
3007 | + lines = [line for line in out.splitlines() if line.startswith('XXX')] |
3008 | self.assertEqual(len(lines), 1, lines) |
3009 | fields = lines[0].split() |
3010 | self.assertEqual(len(fields), 4) |
3011 | @@ -4033,8 +4141,8 @@ class SshRunnerNoScript(AdtTestCase): |
3012 | cmd.append('-s') |
3013 | cmd += ['open', os.environ.get('AUTOPKGTEST_TEST_LXD')] |
3014 | out = subprocess.check_output(cmd, universal_newlines=True) |
3015 | - for l in out.splitlines(): |
3016 | - (k, v) = l.split('=', 1) |
3017 | + for line in out.splitlines(): |
3018 | + (k, v) = line.split('=', 1) |
3019 | self.info[k] = v |
3020 | self.virt_args = ['ssh', '-d', '-H', self.info['hostname'], '-l', self.info['login'], '-i', self.info['identity']] |
3021 | |
3022 | @@ -4453,5 +4561,7 @@ class SshRunnerWithScript(AdtTestCase, DebTestsAll): |
3023 | if __name__ == '__main__': |
3024 | # Force encoding to UTF-8 even in non-UTF-8 locales. |
3025 | import io |
3026 | - sys.stdout = io.TextIOWrapper(sys.stdout.detach(), encoding="UTF-8", line_buffering=True) |
3027 | + real_stdout = sys.stdout |
3028 | + assert isinstance(real_stdout, io.TextIOBase) |
3029 | + sys.stdout = io.TextIOWrapper(real_stdout.detach(), encoding="UTF-8", line_buffering=True) |
3030 | unittest.main(testRunner=unittest.TextTestRunner(stream=sys.stdout, verbosity=2)) |
3031 | diff --git a/tests/autopkgtest_args b/tests/autopkgtest_args |
3032 | index 3871101..0c13625 100755 |
3033 | --- a/tests/autopkgtest_args |
3034 | +++ b/tests/autopkgtest_args |
3035 | @@ -13,7 +13,7 @@ try: |
3036 | patch # pyflakes |
3037 | except ImportError: |
3038 | # fall back to separate package |
3039 | - from mock import patch |
3040 | + from mock import patch # type: ignore |
3041 | |
3042 | test_dir = os.path.dirname(os.path.abspath(__file__)) |
3043 | sys.path.insert(1, os.path.join(os.path.dirname(test_dir), 'lib')) |
3044 | @@ -225,17 +225,22 @@ Files: |
3045 | self.assertEqual(adt_testbed.timeouts['test'], 10000) |
3046 | self.assertEqual(adt_testbed.timeouts['copy'], 300) |
3047 | self.assertEqual(args.testname, None) |
3048 | + self.assertEqual(args.ignore_restrictions, []) |
3049 | |
3050 | def test_options(self): |
3051 | (args, acts, virt) = self.parse( |
3052 | ['-q', '--shell-fail', '--timeout-copy=5', '--set-lang', |
3053 | - 'en_US.UTF-8', 'mypkg', |
3054 | + 'en_US.UTF-8', |
3055 | + '--ignore-restrictions=a,b', |
3056 | + '--ignore-restrictions=c', |
3057 | + 'mypkg', |
3058 | '--', 'foo', '-d', '-s', '--', '-d']) |
3059 | self.assertEqual(args.verbosity, 0) |
3060 | self.assertEqual(args.shell_fail, True) |
3061 | self.assertEqual(adt_testbed.timeouts['copy'], 5) |
3062 | self.assertEqual(args.env, ['LANG=en_US.UTF-8']) |
3063 | self.assertEqual(args.auto_control, True) |
3064 | + self.assertEqual(args.ignore_restrictions, ['a', 'b', 'c']) |
3065 | |
3066 | self.assertEqual(acts, [('apt-source', 'mypkg', False)]) |
3067 | self.assertEqual(virt, ['autopkgtest-virt-foo', '-d', '-s', '--', '-d']) |
3068 | diff --git a/tests/mypy b/tests/mypy |
3069 | new file mode 100755 |
3070 | index 0000000..b4a8d49 |
3071 | --- /dev/null |
3072 | +++ b/tests/mypy |
3073 | @@ -0,0 +1,48 @@ |
3074 | +#!/bin/sh |
3075 | +# Copyright © 2016-2020 Simon McVittie |
3076 | +# Copyright © 2018 Collabora Ltd. |
3077 | +# SPDX-License-Identifier: GPL-2+ |
3078 | + |
3079 | +set -e |
3080 | +set -u |
3081 | + |
3082 | +testdir="$(dirname "$(readlink -f "$0")")" |
3083 | +rootdir="$(dirname "$testdir")" |
3084 | + |
3085 | +export MYPYPATH="${PYTHONPATH:="${rootdir}/lib"}" |
3086 | + |
3087 | +i=0 |
3088 | +for file in \ |
3089 | + "$rootdir"/lib/*.py \ |
3090 | + "$rootdir"/runner/autopkgtest \ |
3091 | + "$rootdir"/tests/*.py \ |
3092 | + "$rootdir"/tests/autopkgtest \ |
3093 | + "$rootdir"/tests/autopkgtest_args \ |
3094 | + "$rootdir"/tests/qemu \ |
3095 | + "$rootdir"/tests/testdesc \ |
3096 | + "$rootdir"/tools/autopkgtest-build-qemu \ |
3097 | + "$rootdir"/tools/autopkgtest-buildvm-ubuntu-cloud \ |
3098 | + "$rootdir"/virt/autopkgtest-virt-chroot \ |
3099 | + "$rootdir"/virt/autopkgtest-virt-lxc \ |
3100 | + "$rootdir"/virt/autopkgtest-virt-lxd \ |
3101 | + "$rootdir"/virt/autopkgtest-virt-null \ |
3102 | + "$rootdir"/virt/autopkgtest-virt-qemu \ |
3103 | + "$rootdir"/virt/autopkgtest-virt-schroot \ |
3104 | + "$rootdir"/virt/autopkgtest-virt-ssh \ |
3105 | +; do |
3106 | + i=$((i + 1)) |
3107 | + if [ "x${MYPY:="$(command -v mypy || echo false)"}" = xfalse ]; then |
3108 | + echo "ok $i - $file # SKIP mypy not found" |
3109 | + elif "${MYPY}" \ |
3110 | + --python-executable="${PYTHON:=python3}" \ |
3111 | + --follow-imports=skip \ |
3112 | + --ignore-missing-imports \ |
3113 | + "$file"; then |
3114 | + echo "ok $i - $file" |
3115 | + else |
3116 | + echo "not ok $i - $file # TODO mypy issues reported" |
3117 | + fi |
3118 | +done |
3119 | +echo "1..$i" |
3120 | + |
3121 | +# vim:set sw=4 sts=4 et: |
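The new tests/mypy script emits TAP (Test Anything Protocol) output by hand: one result line per file, a `# SKIP` directive when the checker is missing, and the plan (`1..N`) printed last, which TAP permits. A stripped-down sketch of that loop, with a hypothetical file list standing in for the real one:

```shell
# Minimal TAP-emitting loop in the style of tests/mypy.  The file list is a
# hypothetical placeholder; neither file exists, so both results are SKIPs.
set -u

i=0
for file in a.py b.py; do
    i=$((i + 1))
    if [ ! -e "$file" ]; then
        # tests/mypy emits this shape when mypy itself is not installed
        echo "ok $i - $file # SKIP not found"
    else
        echo "ok $i - $file"
    fi
done
# Trailing plan line, as in tests/mypy
echo "1..$i"
```

Counting with `i=$((i + 1))` inside the loop keeps the plan line accurate however many entries the list grows to.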
3122 | diff --git a/tests/pycodestyle b/tests/pycodestyle |
3123 | index d6bef92..f46d712 100755 |
3124 | --- a/tests/pycodestyle |
3125 | +++ b/tests/pycodestyle |
3126 | @@ -1,16 +1,26 @@ |
3127 | #!/bin/sh |
3128 | set -e |
3129 | -testdir="$(dirname $(readlink -f $0))" |
3130 | -rootdir="$(dirname $testdir)" |
3131 | -check=$(which pycodestyle || which pep8) |
3132 | +testdir="$(dirname "$(readlink -f "$0")")" |
3133 | +rootdir="$(dirname "$testdir")" |
3134 | +check=$(command -v pycodestyle || command -v pep8) |
3135 | status=0 |
3136 | |
3137 | -$check --ignore E402,E501,W504 $rootdir/lib/*.py $rootdir/tools/autopkgtest-buildvm-ubuntu-cloud || status=$? |
3138 | +"$check" --ignore E402,E501,W504 \ |
3139 | + "$rootdir"/lib/*.py \ |
3140 | + "$rootdir"/tools/autopkgtest-build-qemu \ |
3141 | + "$rootdir"/tools/autopkgtest-buildvm-ubuntu-cloud \ |
3142 | +|| status=$? |
3143 | |
3144 | for v in chroot null schroot lxc lxd qemu ssh; do |
3145 | - $check --ignore E501,E402,W504 $rootdir/virt/autopkgtest-virt-$v || status=$? |
3146 | + "$check" --ignore E501,E402,W504 "$rootdir/virt/autopkgtest-virt-$v" || status=$? |
3147 | done |
3148 | |
3149 | -$check --ignore E501,E402,W504 $rootdir/runner/autopkgtest $testdir/autopkgtest $testdir/testdesc $testdir/autopkgtest_args $testdir/*.py || status=$? |
3150 | +"$check" --ignore E501,E402,W504 \ |
3151 | + "$rootdir/runner/autopkgtest" \ |
3152 | + "$testdir/autopkgtest" \ |
3153 | + "$testdir/autopkgtest_args" \ |
3154 | + "$testdir/qemu" \ |
3155 | + "$testdir/testdesc" \ |
3156 | + "$testdir"/*.py || status=$? |
3157 | |
3158 | -exit $status |
3159 | +exit "$status" |
3160 | diff --git a/tests/pyflakes b/tests/pyflakes |
3161 | index f4f3a68..e6182f2 100755 |
3162 | --- a/tests/pyflakes |
3163 | +++ b/tests/pyflakes |
3164 | @@ -7,18 +7,25 @@ |
3165 | # Author: Martin Pitt <martin.pitt@ubuntu.com> |
3166 | |
3167 | set -e |
3168 | -testdir="$(dirname $(readlink -f $0))" |
3169 | -rootdir="$(dirname $testdir)" |
3170 | +testdir="$(dirname "$(readlink -f "$0")")" |
3171 | +rootdir="$(dirname "$testdir")" |
3172 | |
3173 | if ! type pyflakes3 >/dev/null 2>&1; then |
3174 | echo "pyflakes3 not available, skipping" |
3175 | exit 0 |
3176 | fi |
3177 | |
3178 | -pyflakes3 $rootdir/lib $rootdir/runner/autopkgtest $testdir/autopkgtest \ |
3179 | - $testdir/testdesc $testdir/autopkgtest_args $testdir/*.py \ |
3180 | - $rootdir/tools/autopkgtest-buildvm-ubuntu-cloud |
3181 | +pyflakes3 \ |
3182 | + "$rootdir/lib" \ |
3183 | + "$rootdir/runner/autopkgtest" \ |
3184 | + "$testdir/autopkgtest" \ |
3185 | + "$testdir/autopkgtest_args" \ |
3186 | + "$testdir/qemu" \ |
3187 | + "$testdir/testdesc" \ |
3188 | + "$testdir"/*.py \ |
3189 | + "$rootdir/tools/autopkgtest-build-qemu" \ |
3190 | + "$rootdir/tools/autopkgtest-buildvm-ubuntu-cloud" |
3191 | |
3192 | for v in chroot null schroot lxc lxd qemu ssh; do |
3193 | - pyflakes3 $rootdir/virt/autopkgtest-virt-$v |
3194 | + pyflakes3 "$rootdir/virt/autopkgtest-virt-$v" |
3195 | done |
3196 | diff --git a/tests/qemu b/tests/qemu |
3197 | new file mode 100755 |
3198 | index 0000000..a8f13dd |
3199 | --- /dev/null |
3200 | +++ b/tests/qemu |
3201 | @@ -0,0 +1,59 @@ |
3202 | +#!/usr/bin/python3 |
3203 | + |
3204 | +# This testsuite is part of autopkgtest. |
3205 | +# autopkgtest is a tool for testing Debian binary packages |
3206 | +# |
3207 | +# Copyright 2020 Simon McVittie |
3208 | +# |
3209 | +# This program is free software; you can redistribute it and/or modify |
3210 | +# it under the terms of the GNU General Public License as published by |
3211 | +# the Free Software Foundation; either version 2 of the License, or |
3212 | +# (at your option) any later version. |
3213 | +# |
3214 | +# This program is distributed in the hope that it will be useful, |
3215 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
3216 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
3217 | +# GNU General Public License for more details. |
3218 | +# |
3219 | +# You should have received a copy of the GNU General Public License |
3220 | +# along with this program; if not, write to the Free Software |
3221 | +# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. |
3222 | +# |
3223 | +# See the file CREDITS for a full list of credits information (often |
3224 | +# installed as /usr/share/doc/autopkgtest/CREDITS). |
3225 | + |
3226 | +import os |
3227 | +import sys |
3228 | +import unittest |
3229 | + |
3230 | +test_dir = os.path.dirname(os.path.abspath(__file__)) |
3231 | +root_dir = os.path.dirname(test_dir) |
3232 | + |
3233 | +sys.path[:0] = [test_dir, os.path.join(root_dir, 'lib')] |
3234 | + |
3235 | +from autopkgtest_qemu import Qemu # noqa |
3236 | + |
3237 | + |
3238 | +class QemuTestCase(unittest.TestCase): |
3239 | + def setUp(self) -> None: |
3240 | + super().setUp() |
3241 | + |
3242 | + def tearDown(self) -> None: |
3243 | + super().tearDown() |
3244 | + |
3245 | + def test_default_qemu_command(self) -> None: |
3246 | + get = Qemu.get_default_qemu_command |
3247 | + self.assertEqual(get('aarch64'), 'qemu-system-aarch64') |
3248 | + self.assertEqual(get('armv7l'), 'qemu-system-arm') |
3249 | + self.assertEqual(get('armv8l'), 'qemu-system-arm') |
3250 | + self.assertEqual(get('i686'), 'qemu-system-i386') |
3251 | + self.assertEqual(get('x86_64'), 'qemu-system-x86_64') |
3252 | + |
3253 | + |
3254 | +if __name__ == '__main__': |
3255 | + # Force encoding to UTF-8 even in non-UTF-8 locales. |
3256 | + import io |
3257 | + real_stdout = sys.stdout |
3258 | + assert isinstance(real_stdout, io.TextIOBase) |
3259 | + sys.stdout = io.TextIOWrapper(real_stdout.detach(), encoding="UTF-8", line_buffering=True) |
3260 | + unittest.main(testRunner=unittest.TextTestRunner(stream=sys.stdout, verbosity=2)) |
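The new QemuTestCase pins down a mapping from `uname -m` style architecture names to qemu-system binary names. Rendered as a shell case statement purely for illustration (the real logic lives in `Qemu.get_default_qemu_command` in lib/autopkgtest_qemu.py; this mirrors only the test's assertions, and the pass-through fallback is an assumption):

```shell
# Map a uname-machine string to a qemu-system binary name, mirroring the
# expectations asserted in tests/qemu.
default_qemu_command() {
    case "$1" in
        aarch64) echo qemu-system-aarch64 ;;
        armv*l)  echo qemu-system-arm ;;      # armv7l, armv8l, ...
        i?86)    echo qemu-system-i386 ;;     # i386 ... i686
        x86_64)  echo qemu-system-x86_64 ;;
        *)       echo "qemu-system-$1" ;;     # assumption: literal fallback
    esac
}

default_qemu_command armv8l    # prints qemu-system-arm
```

Note that several 32-bit userland names (armv7l, armv8l, i686) deliberately collapse onto one emulator binary.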
3261 | diff --git a/tests/run-parallel b/tests/run-parallel |
3262 | index 8b8e081..38fb6b5 100755 |
3263 | --- a/tests/run-parallel |
3264 | +++ b/tests/run-parallel |
3265 | @@ -1,21 +1,24 @@ |
3266 | #!/bin/sh |
3267 | # Run tests for different runners in parallel |
3268 | |
3269 | -MYDIR=$(dirname $0) |
3270 | +MYDIR=$(dirname "$0") |
3271 | |
3272 | # these are fast, run them first |
3273 | set -e |
3274 | -$MYDIR/pycodestyle |
3275 | -$MYDIR/pyflakes |
3276 | -$MYDIR/testdesc |
3277 | -$MYDIR/autopkgtest_args |
3278 | +"$MYDIR/mypy" |
3279 | +"$MYDIR/pycodestyle" |
3280 | +"$MYDIR/pyflakes" |
3281 | +"$MYDIR/qemu" |
3282 | +"$MYDIR/shellcheck" |
3283 | +"$MYDIR/testdesc" |
3284 | +"$MYDIR/autopkgtest_args" |
3285 | set +e |
3286 | |
3287 | # get sudo password early, to avoid asking for it in background jobs |
3288 | -[ `id -u` -eq 0 ] || sudo true |
3289 | +[ "$(id -u)" -eq 0 ] || sudo true |
3290 | |
3291 | -(OUT=$($MYDIR/autopkgtest QemuRunner 2>&1) || rc=$?; echo "=== $c ==="; echo "$OUT"; exit $rc) & |
3292 | -(OUT=$($MYDIR/autopkgtest LxcRunner SshRunnerNoScript SshRunnerWithScript 2>&1) || rc=$?; echo "=== $c ==="; echo "$OUT"; exit $rc) & |
3293 | -(OUT=$($MYDIR/autopkgtest NullRunner SchrootRunner SchrootClickRunner LxdRunner 2>&1) || rc=$?; echo "=== $c ==="; echo "$OUT"; exit $rc) & |
3294 | -(OUT=$(sudo $MYDIR/autopkgtest NullRunnerRoot ChrootRunner 2>&1) || rc=$?; echo "=== $c ==="; echo "$OUT"; exit $rc) & |
3295 | -for c in `seq 5`; do wait; done |
3296 | +(OUT=$("$MYDIR/autopkgtest" QemuRunner 2>&1) || rc=$?; echo "=== $c ==="; echo "$OUT"; exit "$rc") & |
3297 | +(OUT=$("$MYDIR/autopkgtest" LxcRunner SshRunnerNoScript SshRunnerWithScript 2>&1) || rc=$?; echo "=== $c ==="; echo "$OUT"; exit "$rc") & |
3298 | +(OUT=$("$MYDIR/autopkgtest" NullRunner SchrootRunner SchrootClickRunner LxdRunner 2>&1) || rc=$?; echo "=== $c ==="; echo "$OUT"; exit "$rc") & |
3299 | +(OUT=$(sudo "$MYDIR/autopkgtest" NullRunnerRoot ChrootRunner 2>&1) || rc=$?; echo "=== $c ==="; echo "$OUT"; exit "$rc") & |
3300 | +for c in $(seq 5); do wait; done |
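Each parallel group in tests/run-parallel buffers its entire output in a variable before printing it under a banner, so a suite's verbose output is emitted in one burst after it finishes rather than streaming interleaved with the other suites. The pattern in isolation, with stand-in commands instead of real test runs (`${rc:-0}` guards the case where the command succeeds and `rc` was never set):

```shell
# Buffer each background job's output, then print it in one burst under a
# banner.  The echo commands are stand-ins for the real suite invocations.
all=$(
    (OUT=$(echo suite-A ran) || rc=$?; echo "=== A ==="; echo "$OUT"; exit "${rc:-0}") &
    (OUT=$(echo suite-B ran) || rc=$?; echo "=== B ==="; echo "$OUT"; exit "${rc:-0}") &
    wait
)
echo "$all"
```

The `|| rc=$?` capture lets the subshell report the suite's own exit status even though the command substitution would otherwise swallow it.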
3301 | diff --git a/tests/shellcheck b/tests/shellcheck |
3302 | new file mode 100755 |
3303 | index 0000000..3e3f8b7 |
3304 | --- /dev/null |
3305 | +++ b/tests/shellcheck |
3306 | @@ -0,0 +1,45 @@ |
3307 | +#!/bin/sh |
3308 | +# Copyright © 2018-2020 Collabora Ltd |
3309 | +# SPDX-License-Identifier: GPL-2+ |
3310 | + |
3311 | +set -e |
3312 | +set -u |
3313 | + |
3314 | +testdir="$(dirname "$(readlink -f "$0")")" |
3315 | +rootdir="$(dirname "$testdir")" |
3316 | + |
3317 | +if ! command -v shellcheck >/dev/null 2>&1; then |
3318 | + echo "1..0 # SKIP shellcheck not available" |
3319 | + exit 0 |
3320 | +fi |
3321 | + |
3322 | +n=0 |
3323 | +for shell_script in \ |
3324 | + "$rootdir"/setup-commands/* \ |
3325 | + "$rootdir"/ssh-setup/* \ |
3326 | + "$rootdir"/tests/mypy \ |
3327 | + "$rootdir"/tests/pycodestyle \ |
3328 | + "$rootdir"/tests/pyflakes \ |
3329 | + "$rootdir"/tests/run-parallel \ |
3330 | + "$rootdir"/tests/shellcheck \ |
3331 | + "$rootdir"/tests/ssh-setup-lxd \ |
3332 | +; do |
3333 | + n=$((n + 1)) |
3334 | + |
3335 | + case "$shell_script" in |
3336 | + (*/ssh-setup/adb | */ssh-setup/maas | */ssh-setup/nova) |
3337 | + echo "ok $n - $shell_script # SKIP Someone who can test this needs to fix it" |
3338 | + continue |
3339 | + ;; |
3340 | + esac |
3341 | + |
3342 | + if shellcheck --shell=dash "$shell_script"; then |
3343 | + echo "ok $n - $shell_script" |
3344 | + else |
3345 | + echo "not ok $n # TODO - $shell_script" |
3346 | + fi |
3347 | +done |
3348 | + |
3349 | +echo "1..$n" |
3350 | + |
3351 | +# vim:set sw=4 sts=4 et: |
3352 | diff --git a/tests/ssh-setup-lxd b/tests/ssh-setup-lxd |
3353 | index 9b87127..5a106ad 100755 |
3354 | --- a/tests/ssh-setup-lxd |
3355 | +++ b/tests/ssh-setup-lxd |
3356 | @@ -27,36 +27,36 @@ ENABLE_SUDO= |
3357 | # optional: port, options, capabilities |
3358 | open() { |
3359 | [ -z "$2" ] || IMAGE="$2" |
3360 | - if [ -z "$IMAGE}" ]; then |
3361 | + if [ -z "${IMAGE}" ]; then |
3362 | echo "ERROR: $0 needs to be called with image name" >&1 |
3363 | exit 1 |
3364 | fi |
3365 | |
3366 | - [ -n "$CONTAINER" ] || CONTAINER=`mktemp -u autopkgtest-test-XXX` |
3367 | + [ -n "$CONTAINER" ] || CONTAINER=$(mktemp -u autopkgtest-test-XXX) |
3368 | |
3369 | lxc launch --ephemeral "$IMAGE" "$CONTAINER" >/dev/null |
3370 | |
3371 | # wait for and parse IPv4 |
3372 | - while ! OUT=`lxc info $CONTAINER|grep 'eth0:.*inet[^6]'`; do |
3373 | + while ! OUT=$(lxc info "$CONTAINER"|grep 'eth0:.*inet[^6]'); do |
3374 | sleep 1 |
3375 | done |
3376 | IP=$(echo "$OUT" | grep -o '10\.[0-9]\+\.[0-9]\+\.[0-9]\+') |
3377 | |
3378 | # create user |
3379 | # password: python3 -c 'from crypt import *; print(crypt("autopkgtest", mksalt(METHOD_CRYPT)))' |
3380 | - lxc exec $CONTAINER -- useradd --password FJfXYBhFnX6xA --create-home $USER |
3381 | + lxc exec "$CONTAINER" -- useradd --password FJfXYBhFnX6xA --create-home "$USER" |
3382 | |
3383 | # install SSH |
3384 | - lxc exec $CONTAINER -- eatmydata apt-get install -y openssh-server >/dev/null 2>&1 |
3385 | + lxc exec "$CONTAINER" -- eatmydata apt-get install -y openssh-server >/dev/null 2>&1 |
3386 | |
3387 | if [ -n "$INSTALL_KEY" ]; then |
3388 | - key=`cat $HOME/.ssh/id_rsa.pub` |
3389 | - lxc exec $CONTAINER -- su -c "mkdir ~/.ssh; echo '$key' > ~/.ssh/authorized_keys" $USER |
3390 | + key=$(cat "$HOME/.ssh/id_rsa.pub") |
3391 | + lxc exec "$CONTAINER" -- su -c "mkdir ~/.ssh; echo '$key' > ~/.ssh/authorized_keys" "$USER" |
3392 | echo "identity=$HOME/.ssh/id_rsa" |
3393 | fi |
3394 | |
3395 | if [ -n "$ENABLE_SUDO" ]; then |
3396 | - lxc exec $CONTAINER -- sh -ec "echo '$USER ALL=(ALL) $ENABLE_SUDO' > /etc/sudoers.d/autopkgtest" |
3397 | + lxc exec "$CONTAINER" -- sh -ec "echo '$USER ALL=(ALL) $ENABLE_SUDO' > /etc/sudoers.d/autopkgtest" |
3398 | fi |
3399 | |
3400 | cat<<EOF |
3401 | @@ -82,11 +82,11 @@ cleanup() { |
3402 | echo "Needs to be called with -n <container name>" >&2 |
3403 | exit 1 |
3404 | fi |
3405 | - lxc delete --force $CONTAINER |
3406 | + lxc delete --force "$CONTAINER" |
3407 | } |
3408 | |
3409 | # parse options |
3410 | -eval set -- $(getopt -o "ksSn:I:c" -- "$@") |
3411 | +eval "set -- $(getopt -o "ksSn:I:c" -- "$@")" |
3412 | while true; do |
3413 | case "$1" in |
3414 | -k) |
3415 | @@ -109,11 +109,11 @@ done |
3416 | |
3417 | case "$1" in |
3418 | open) |
3419 | - open $@;; |
3420 | + open "$@";; |
3421 | cleanup) |
3422 | - cleanup $@;; |
3423 | + cleanup "$@";; |
3424 | revert) |
3425 | - revert $@;; |
3426 | + revert "$@";; |
3427 | '') |
3428 | echo "Needs to be called with command as first argument" >&2 |
3429 | exit 1 |
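The ssh-setup-lxd fix above wraps the getopt invocation as `eval "set -- $(getopt ...)"`. Unquoted, the shell word-splits getopt's output before eval reassembles the words with single spaces, so whitespace inside an argument that getopt quoted gets mangled. A self-contained demonstration using a canned getopt-style string (no real getopt call, to keep it deterministic):

```shell
# Why `eval "set -- $out"` (quoted) beats `eval set -- $out` (unquoted):
# word splitting on the unquoted $out collapses runs of whitespace inside
# getopt's quoted arguments.  $out is a canned stand-in for getopt(1)
# output; note the two spaces inside the quoted argument.
out="-n 'a  b' --"

eval set -- $out        # unquoted: split into words, rejoined with single spaces
unquoted=$2             # now "a b" -- the inner double space is lost

eval "set -- $out"      # quoted: eval parses getopt's own quoting intact
quoted=$2               # still "a  b"

echo "unquoted=[$unquoted] quoted=[$quoted]"
```

This is the same class of quoting bug shellcheck flags throughout the scripts touched by this branch.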
3430 | diff --git a/tests/testdesc b/tests/testdesc |
3431 | index 1751d7b..28a78de 100755 |
3432 | --- a/tests/testdesc |
3433 | +++ b/tests/testdesc |
3434 | @@ -14,7 +14,7 @@ try: |
3435 | patch # pyflakes |
3436 | except ImportError: |
3437 | # fall back to separate package |
3438 | - from mock import patch |
3439 | + from mock import patch # type: ignore |
3440 | |
3441 | test_dir = os.path.dirname(os.path.abspath(__file__)) |
3442 | sys.path.insert(1, os.path.join(os.path.dirname(test_dir), 'lib')) |
3443 | @@ -136,10 +136,8 @@ class Test(unittest.TestCase): |
3444 | def test_unknown_restriction(self): |
3445 | '''Test with unknown restriction''' |
3446 | |
3447 | - with self.assertRaises(testdesc.Unsupported) as cm: |
3448 | - testdesc.Test('foo', 'tests/do_foo', None, ['needs-red'], [], [], |
3449 | - [], [], []) |
3450 | - self.assertIn('unknown restriction needs-red', str(cm.exception)) |
3451 | + testdesc.Test('foo', 'tests/do_foo', None, ['needs-red'], [], [], |
3452 | + [], [], []) |
3453 | |
3454 | def test_neither_path_nor_command(self): |
3455 | '''Test without path nor command''' |
3456 | @@ -166,6 +164,11 @@ class Test(unittest.TestCase): |
3457 | self.assertRaises(testdesc.Unsupported, |
3458 | t.check_testbed_compat, ['root-on-testbed']) |
3459 | t.check_testbed_compat(['isolation-container', 'root-on-testbed']) |
3460 | + self.assertRaises(testdesc.Unsupported, |
3461 | + t.check_testbed_compat, ['needs-quantum-computer']) |
3462 | + t.check_testbed_compat([], |
3463 | + ignore_restrictions=['needs-root', |
3464 | + 'isolation-container']) |
3465 | |
3466 | |
3467 | class Debian(unittest.TestCase): |
3468 | @@ -355,6 +358,14 @@ class Debian(unittest.TestCase): |
3469 | self.call_parse('Depends:') |
3470 | self.assertIn('missing "Tests"', str(cm.exception)) |
3471 | |
3472 | + def test_invalid_control_empty_test(self): |
3473 | + '''another invalid control file''' |
3474 | + |
3475 | + # empty tests field |
3476 | + with self.assertRaises(testdesc.InvalidControl) as cm: |
3477 | + self.call_parse('Tests:') |
3478 | + self.assertIn('"Tests" field is empty', str(cm.exception)) |
3479 | + |
3480 | def test_tests_dir(self): |
3481 | '''non-standard Tests-Directory''' |
3482 | |
3483 | @@ -380,7 +391,7 @@ class Debian(unittest.TestCase): |
3484 | 'Package: one\nArchitecture: any') |
3485 | self.assertEqual(ts[0].depends, ['one', 'bd1', 'bd3:native (>= 7) | bd4', |
3486 | 'bdi1', 'bdi2', 'bda1', 'bda2', |
3487 | - 'build-essential', 'foo (>= 7)']) |
3488 | + 'build-essential:native', 'foo (>= 7)']) |
3489 | self.assertFalse(skipped) |
3490 | |
3491 | @unittest.skipUnless(have_dpkg_build_profiles, |
3492 | @@ -393,7 +404,7 @@ class Debian(unittest.TestCase): |
3493 | 'Source: nums\nBuild-Depends: bd1, bd2 <!check>, bd3 <!cross>, bdnotme <stage1> <cross>\n' |
3494 | '\n' |
3495 | 'Package: one\nArchitecture: any') |
3496 | - self.assertEqual(ts[0].depends, ['one', 'bd1', 'bd2', 'bd3', 'build-essential']) |
3497 | + self.assertEqual(ts[0].depends, ['one', 'bd1', 'bd2', 'bd3', 'build-essential:native']) |
3498 | self.assertFalse(skipped) |
3499 | |
3500 | def test_complex_deps(self): |
3501 | @@ -442,7 +453,7 @@ class Debian(unittest.TestCase): |
3502 | ' , bd2\n' |
3503 | '\n' |
3504 | 'Package: one\nArchitecture: any') |
3505 | - self.assertEqual(ts[0].depends, ['one', 'bd1', 'bd2', 'build-essential']) |
3506 | + self.assertEqual(ts[0].depends, ['one', 'bd1', 'bd2', 'build-essential:native']) |
3507 | self.assertFalse(skipped) |
3508 | |
3509 | @patch('adtlog.report') |
3510 | @@ -924,5 +935,7 @@ fi''' % kls.click_src) |
3511 | |
3512 | if __name__ == '__main__': |
3513 | # Force encoding to UTF-8 even in non-UTF-8 locales. |
3514 | - sys.stdout = io.TextIOWrapper(sys.stdout.detach(), encoding="UTF-8", line_buffering=True) |
3515 | + real_stdout = sys.stdout |
3516 | + assert isinstance(real_stdout, io.TextIOBase) |
3517 | + sys.stdout = io.TextIOWrapper(real_stdout.detach(), encoding="UTF-8", line_buffering=True) |
3518 | unittest.main(testRunner=unittest.TextTestRunner(stream=sys.stdout, verbosity=2)) |
3519 | diff --git a/tools/autopkgtest-build-lxc b/tools/autopkgtest-build-lxc |
3520 | index f453bf6..84d53f8 100755 |
3521 | --- a/tools/autopkgtest-build-lxc |
3522 | +++ b/tools/autopkgtest-build-lxc |
3523 | @@ -27,7 +27,7 @@ set -e |
3524 | DISTRO="$1" |
3525 | RELEASE="$2" |
3526 | if [ -z "$1" ] || [ -z "$2" ]; then |
3527 | - echo "Usage: $0 debian|ubuntu <release> [arch] [script]" >&2 |
3528 | + echo "Usage: $0 debian|ubuntu|kali <release> [arch] [script]" >&2 |
3529 | exit 1 |
3530 | fi |
3531 | |
3532 | @@ -164,8 +164,8 @@ proxy_detect |
3533 | # lxc templates for debian and ubuntu differ; ubuntu uses |
3534 | # $RELEASE/rootfs-$ARCH, while debian uses debian/rootfs-$RELEASE-$ARCH |
3535 | CACHE="$RELEASE" |
3536 | -if [ "$DISTRO" = debian ]; then |
3537 | - CACHE=debian |
3538 | +if [ "$DISTRO" = debian ] || [ "$DISTRO" = kali ] ; then |
3539 | + CACHE=$DISTRO |
3540 | fi |
3541 | |
3542 | if [ ! -e $LXCDIR/$NAME ]; then |
3543 | @@ -176,6 +176,10 @@ else |
3544 | # remove LXC rootfs caches; on btrfs this might be a subvolume, otherwise |
3545 | # rm it |
3546 | btrfs subvolume delete /var/cache/lxc/$CACHE/rootfs-* 2>/dev/null || rm -rf /var/cache/lxc/$CACHE/rootfs-* |
3547 | + # remove leftover .new container if present |
3548 | + if lxc-ls | grep -q ${NAME}.new ; then |
3549 | + lxc-destroy --force --name=${NAME}.new |
3550 | + fi |
3551 | # create a new rootfs in a temp container |
3552 | $LXC_CREATE_PREFIX lxc-create -B best --name=${NAME}.new $LXC_ARGS |
3553 | setup ${NAME}.new |
3554 | diff --git a/tools/autopkgtest-build-lxd b/tools/autopkgtest-build-lxd |
3555 | index 60b9fc1..40c92c5 100755 |
3556 | --- a/tools/autopkgtest-build-lxd |
3557 | +++ b/tools/autopkgtest-build-lxd |
3558 | @@ -94,7 +94,7 @@ setup() { |
3559 | |
3560 | ARCH=$(lxc exec "$CONTAINER" -- dpkg --print-architecture </dev/null) |
3561 | DISTRO=$(lxc exec "$CONTAINER" -- sh -ec 'lsb_release -si 2>/dev/null || . /etc/os-release; echo "${NAME% *}"' </dev/null) |
3562 | - CRELEASE=$(lxc exec "$CONTAINER" -- sh -ec 'lsb_release -sc 2>/dev/null || awk "/^deb/ {sub(/\\[.*\\]/, \"\", \$0); print \$3; quit}" /etc/apt/sources.list' </dev/null) |
3563 | + CRELEASE=$(lxc exec "$CONTAINER" -- sh -ec 'lsb_release -sc 2>/dev/null || awk "/^deb/ {sub(/\\[.*\\]/, \"\", \$0); print \$3; exit}" /etc/apt/sources.list' </dev/null) |
3564 | echo "Container finished booting. Distribution $DISTRO, release $CRELEASE, architecture $ARCH" |
3565 | RELEASE=${RELEASE:-${CRELEASE}} |
3566 | |
3567 | @@ -112,8 +112,8 @@ setup() { |
3568 | lxc exec "$CONTAINER" -- env \ |
3569 | AUTOPKGTEST_KEEP_APT_SOURCES="${AUTOPKGTEST_KEEP_APT_SOURCES:-}" \ |
3570 | AUTOPKGTEST_APT_SOURCES="${AUTOPKGTEST_APT_SOURCES:-}" \ |
3571 | - MIRROR=${MIRROR:-} \ |
3572 | - RELEASE=${RELEASE} \ |
3573 | + MIRROR="${MIRROR:-}" \ |
3574 | + RELEASE="${RELEASE}" \ |
3575 | sh < "$script" |
3576 | break |
3577 | fi |
3578 | diff --git a/tools/autopkgtest-build-qemu b/tools/autopkgtest-build-qemu |
3579 | index 16fdc2a..a2bccc9 100755 |
3580 | --- a/tools/autopkgtest-build-qemu |
3581 | +++ b/tools/autopkgtest-build-qemu |
3582 | @@ -1,9 +1,12 @@ |
3583 | -#!/bin/sh |
3584 | +#!/usr/bin/python3 |
3585 | |
3586 | # autopkgtest-build-qemu is part of autopkgtest |
3587 | # autopkgtest is a tool for testing Debian binary packages |
3588 | # |
3589 | -# Copyright (C) Antonio Terceiro <terceiro@debian.org>. |
3590 | +# Copyright (C) 2016-2020 Antonio Terceiro <terceiro@debian.org>. |
3591 | +# Copyright (C) 2019 Sรฉbastien Delafond |
3592 | +# Copyright (C) 2019-2020 Simon McVittie |
3593 | +# Copyright (C) 2020 Christian Kastner |
3594 | # |
3595 | # Build a QEMU image for using with autopkgtest |
3596 | # |
3597 | @@ -24,283 +27,385 @@ |
3598 | # See the file CREDITS for a full list of credits information (often |
3599 | # installed as /usr/share/doc/autopkgtest/CREDITS). |
3600 | |
3601 | -set -eu |
3602 | - |
3603 | -apt_proxy= |
3604 | -architecture= |
3605 | -mirror= |
3606 | -user_script= |
3607 | -size= |
3608 | - |
3609 | -usage () { |
3610 | - echo "usage: $0 [<options...>] <release> <image> [<mirror>] [<architecture>] [<script>] [<size>]" |
3611 | - echo "" |
3612 | - echo "--apt-proxy=http://PROXY:PORT Set apt proxy [default: auto]" |
3613 | - echo "--arch=ARCH, --architecture=ARCH Set architecture, e.g. i386" |
3614 | - echo " [default: $(dpkg --print-architecture)]" |
3615 | - echo "--mirror=URL Set apt mirror [default:" |
3616 | - echo " http://deb.debian.org/debian]" |
3617 | - echo "--script=SCRIPT Run an extra customization script" |
3618 | - echo "--size=SIZE Set image size [default: 25G]" |
3619 | - exit "${1-1}" |
3620 | -} |
3621 | - |
3622 | -if getopt_temp="$( |
3623 | - getopt -o '' \ |
3624 | - --long 'apt-proxy:,arch:,architecture:,help,mirror:,script:,size:' \ |
3625 | - -n "$0" -- "$@" |
3626 | -)"; then |
3627 | - eval set -- "$getopt_temp" |
3628 | -else |
3629 | - echo "" |
3630 | - usage $? |
3631 | -fi |
3632 | - |
3633 | -while true; do |
3634 | - case "$1" in |
3635 | - (--arch|--architecture) |
3636 | - architecture="$2" |
3637 | - shift 2 |
3638 | - ;; |
3639 | - |
3640 | - (--apt-proxy) |
3641 | - apt_proxy="$2" |
3642 | - shift 2 |
3643 | - ;; |
3644 | - |
3645 | - (--help) |
3646 | - usage 0 |
3647 | - ;; |
3648 | - |
3649 | - (--mirror) |
3650 | - mirror="$2" |
3651 | - shift 2 |
3652 | - ;; |
3653 | - |
3654 | - (--script) |
3655 | - user_script="$2" |
3656 | - shift 2 |
3657 | - ;; |
3658 | - |
3659 | - (--size) |
3660 | - size="$2" |
3661 | - shift 2 |
3662 | - ;; |
3663 | - |
3664 | - (--) |
3665 | - shift |
3666 | - break |
3667 | - ;; |
3668 | - |
3669 | - (-*) |
3670 | - echo "E: Option '$1' not understood" |
3671 | - exit 2 |
3672 | - ;; |
3673 | - |
3674 | - (*) |
3675 | - break |
3676 | - ;; |
3677 | - esac |
3678 | -done |
3679 | - |
3680 | -if [ $# -lt 2 -o $# -gt 6 ]; then |
3681 | - usage 1 |
3682 | -fi |
3683 | - |
3684 | -if ! which vmdb2 > /dev/null; then |
3685 | - echo "E: vmdb2 not found. This script requires vmdb2 to be installed" |
3686 | - exit 2 |
3687 | -fi |
3688 | - |
3689 | -release="$1" |
3690 | -image="$2" |
3691 | - |
3692 | -if [ $# -ge 3 ]; then |
3693 | - if [ -n "$mirror" ]; then |
3694 | - echo "E: --mirror and 3rd positional argument cannot both be specified" |
3695 | - usage 2 |
3696 | - fi |
3697 | - mirror="$3" |
3698 | -elif [ -z "$mirror" ]; then |
3699 | - mirror="http://deb.debian.org/debian" |
3700 | -fi |
3701 | - |
3702 | -if [ $# -ge 4 ]; then |
3703 | - if [ -n "$architecture" ]; then |
3704 | - echo "E: --arch and 4th positional argument cannot both be specified" |
3705 | - usage 2 |
3706 | - fi |
3707 | - architecture="$4" |
3708 | -elif [ -z "$architecture" ]; then |
3709 | - architecture=$(dpkg --print-architecture) |
3710 | -fi |
3711 | - |
3712 | -if [ $# -ge 5 ]; then |
3713 | - if [ -n "$user_script" ]; then |
3714 | - echo "E: --script and 5th positional argument cannot both be specified" |
3715 | - usage 2 |
3716 | - fi |
3717 | - user_script="$5" |
3718 | -elif [ -z "$user_script" ]; then |
3719 | - user_script="/bin/true" |
3720 | -fi |
3721 | - |
3722 | -if [ $# -ge 6 ]; then |
3723 | - if [ -n "$size" ]; then |
3724 | - echo "E: --size and 6th positional argument cannot both be specified" |
3725 | - usage 2 |
3726 | - fi |
3727 | - size="$6" |
3728 | -elif [ -z "$size" ]; then |
3729 | - size="25G" |
3730 | -fi |
3731 | - |
3732 | -# detect apt proxy |
3733 | -# support backwards compatible env var too |
3734 | -AUTOPKGTEST_APT_PROXY=${apt_proxy:-${AUTOPKGTEST_APT_PROXY:-${ADT_APT_PROXY:-}}} |
3735 | -if [ -z "$AUTOPKGTEST_APT_PROXY" ]; then |
3736 | - RES=`apt-config shell proxy Acquire::http::Proxy` |
3737 | - if [ -n "$RES" ]; then |
3738 | - eval $RES |
3739 | - else |
3740 | - RES=`apt-config shell proxy_cmd Acquire::http::Proxy-Auto-Detect` |
3741 | - eval $RES |
3742 | - if [ -n "${proxy_cmd:-}" ]; then |
3743 | - proxy=`$proxy_cmd` |
3744 | - fi |
3745 | - fi |
3746 | - if echo "${proxy:-}" | egrep -q '(localhost|127\.0\.0\.[0-9]*)'; then |
3747 | - # set http_proxy for the initial debootstrap |
3748 | - export http_proxy="$proxy" |
3749 | - |
3750 | - # translate proxy address to one that can be accessed from the |
3751 | - # running VM |
3752 | - AUTOPKGTEST_APT_PROXY=$(echo "$proxy" | sed -r "s#localhost|127\.0\.0\.[0-9]*#10.0.2.2#") |
3753 | - if [ -n "$AUTOPKGTEST_APT_PROXY" ]; then |
3754 | - echo "Detected local apt proxy, using $AUTOPKGTEST_APT_PROXY as virtual machine proxy" |
3755 | - fi |
3756 | - elif [ -n "${proxy:-}" ]; then |
3757 | - AUTOPKGTEST_APT_PROXY="$proxy" |
3758 | - echo "Using $AUTOPKGTEST_APT_PROXY as container proxy" |
3759 | - # set http_proxy for the initial debootstrap |
3760 | - export http_proxy="$proxy" |
3761 | - fi |
3762 | -fi |
3763 | -export AUTOPKGTEST_APT_PROXY |
3764 | - |
3765 | - |
3766 | -script=/bin/true |
3767 | -for s in $(dirname $(dirname "$0"))/setup-commands/setup-testbed \ |
3768 | - /usr/share/autopkgtest/setup-commands/setup-testbed; do |
3769 | - if [ -r "$s" ]; then |
3770 | - script="$s" |
3771 | - break |
3772 | - fi |
3773 | -done |
3774 | - |
3775 | -if [ "$user_script" != "/bin/true" ]; then |
3776 | - echo "Using customization script $user_script ..." |
3777 | -fi |
3778 | - |
3779 | - |
3780 | -case "$mirror" in |
3781 | - *ubuntu*) |
3782 | - kernel=linux-image-virtual |
3783 | - ;; |
3784 | - *) |
3785 | - case "$architecture" in |
3786 | - (armhf) |
3787 | - kernel=linux-image-armmp |
3788 | - ;; |
3789 | - (hppa) |
3790 | - kernel=linux-image-parisc |
3791 | - ;; |
3792 | - (i386) |
3793 | - case "$release" in |
3794 | - (jessie) |
3795 | - kernel=linux-image-586 |
3796 | - ;; |
3797 | - (*) |
3798 | - kernel=linux-image-686 |
3799 | - ;; |
3800 | - esac |
3801 | - ;; |
3802 | - (ppc64) |
3803 | - kernel=linux-image-powerpc64 |
3804 | - ;; |
3805 | - (*) |
3806 | - kernel="linux-image-$architecture" |
3807 | - ;; |
3808 | - esac |
3809 | - ;; |
3810 | -esac |
3811 | - |
3812 | -vmdb2_config=$(mktemp) |
3813 | -trap "rm -rf $vmdb2_config" INT TERM EXIT |
3814 | -cat > "$vmdb2_config" <<EOF |
3815 | -steps: |
3816 | - - mkimg: "{{ image }}" |
3817 | - size: $size |
3818 | - |
3819 | - - mklabel: msdos |
3820 | - device: "{{ image }}" |
3821 | - |
3822 | - - mkpart: primary |
3823 | - device: "{{ image }}" |
3824 | - start: 0% |
3825 | - end: 100% |
3826 | - tag: root |
3827 | - |
3828 | - - kpartx: "{{ image }}" |
3829 | - |
3830 | - - mkfs: ext4 |
3831 | - partition: root |
3832 | - |
3833 | - - mount: root |
3834 | - |
3835 | - - debootstrap: $release |
3836 | - mirror: $mirror |
3837 | - target: root |
3838 | - |
3839 | - - apt: install |
3840 | - packages: |
3841 | - - $kernel |
3842 | - - ifupdown |
3843 | - tag: root |
3844 | - |
3845 | - - grub: bios |
3846 | - tag: root |
3847 | - console: serial |
3848 | - |
3849 | - - chroot: root |
3850 | - shell: | |
3851 | - passwd --delete root |
3852 | - useradd --home-dir /home/user --create-home user |
3853 | - passwd --delete user |
3854 | - echo host > /etc/hostname |
3855 | - |
3856 | - - shell: | |
3857 | - rootdev=\$(ls -1 /dev/mapper/loop* | sort | tail -1) |
3858 | - uuid=\$(blkid -c /dev/null -o value -s UUID \$rootdev) |
3859 | - echo "UUID=\$uuid / ext4 errors=remount-ro 0 1" > \$ROOT/etc/fstab |
3860 | - root-fs: root |
3861 | - |
3862 | - - shell: '$script \$ROOT' |
3863 | - root-fs: root |
3864 | - |
3865 | - - shell: '$user_script \$ROOT' |
3866 | - root-fs: root |
3867 | - |
3868 | -EOF |
3869 | - |
3870 | -vmdb2 \ |
3871 | - --verbose \ |
3872 | - --image="$image".raw \ |
3873 | - "$vmdb2_config" |
3874 | - |
3875 | -qemu-img convert -O qcow2 "$image".raw "$image".new |
3876 | - |
3877 | -rm -f "$image".raw |
3878 | - |
3879 | -# replace a potentially existing image as atomically as possible |
3880 | -mv "$image".new "$image" |
3881 | +import argparse |
3882 | +import json |
3883 | +import logging |
3884 | +import os |
3885 | +import re |
3886 | +import shlex |
3887 | +import shutil |
3888 | +import subprocess |
3889 | +import sys |
3890 | +from contextlib import (suppress) |
3891 | +from tempfile import (TemporaryDirectory) |
3892 | +from typing import (Any, Dict, List, Optional) |
3893 | + |
3894 | + |
3895 | +logger = logging.getLogger('autopkgtest-build-qemu') |
3896 | + |
3897 | +DATA_PATHS = ( |
3898 | + os.path.dirname(os.path.dirname(os.path.abspath(__file__))), |
3899 | + '/usr/share/autopkgtest', |
3900 | +) |
3901 | + |
3902 | +for p in DATA_PATHS: |
3903 | + sys.path.insert(0, os.path.join(p, 'lib')) |
3904 | + |
3905 | +DEBIAN_KERNELS = dict( |
3906 | + armhf='linux-image-armmp-lpae', |
3907 | + hppa='linux-image-parisc', |
3908 | + i386='linux-image-686-pae', |
3909 | + ppc64='linux-image-powerpc64', |
3910 | +) |
3911 | + |
3912 | + |
3913 | +class UsageError(Exception): |
3914 | + pass |
3915 | + |
3916 | + |
3917 | +class BuildQemu: |
3918 | + def __init__(self) -> None: |
3919 | + pass |
3920 | + |
3921 | + def run(self) -> None: |
3922 | + default_arch = subprocess.check_output( |
3923 | + ['dpkg', '--print-architecture'], |
3924 | + universal_newlines=True |
3925 | + ).strip() |
3926 | + default_mirror = 'http://deb.debian.org/debian' |
3927 | + |
3928 | + parser = argparse.ArgumentParser() |
3929 | + |
3930 | + parser.add_argument( |
3931 | + '--architecture', '--arch', |
3932 | + default='', |
3933 | + help='dpkg architecture name [default: %s]' % default_arch, |
3934 | + ) |
3935 | + parser.add_argument( |
3936 | + '--apt-proxy', |
3937 | + default='', |
3938 | + metavar='http://PROXY:PORT', |
3939 | + help='Set apt proxy [default: auto]', |
3940 | + ) |
3941 | + parser.add_argument( |
3942 | + '--mirror', |
3943 | + default='', |
3944 | + metavar='URL', |
3945 | + help=( |
3946 | + 'Debian or Debian derivative mirror ' + |
3947 | + '[default: %s]' % default_mirror |
3948 | + ), |
3949 | + ) |
3950 | + parser.add_argument( |
3951 | + '--script', |
3952 | + default='', |
3953 | + dest='user_script', |
3954 | + help='Run an extra customization script', |
3955 | + ) |
3956 | + parser.add_argument( |
3957 | + '--size', |
3958 | + default='', |
3959 | + help='Set image size [default: 25G]', |
3960 | + ) |
3961 | + parser.add_argument( |
3962 | + 'release', |
3963 | + metavar='RELEASE', |
3964 | + help='An apt suite or codename available from MIRROR', |
3965 | + ) |
3966 | + parser.add_argument( |
3967 | + 'image', |
3968 | + metavar='IMAGE', |
3969 | + help='Filename of qcow2 image to create', |
3970 | + ) |
3971 | + parser.add_argument( |
3972 | + '_mirror', |
3973 | + default=None, |
3974 | + metavar='MIRROR', |
3975 | + nargs='?', |
3976 | + help='Deprecated, use --mirror instead', |
3977 | + ) |
3978 | + parser.add_argument( |
3979 | + '_architecture', |
3980 | + default=None, |
3981 | + metavar='ARCHITECTURE', |
3982 | + nargs='?', |
3983 | + help='Deprecated, use --architecture instead', |
3984 | + ) |
3985 | + parser.add_argument( |
3986 | + '_user_script', |
3987 | + default=None, |
3988 | + metavar='SCRIPT', |
3989 | + nargs='?', |
3990 | + help='Deprecated, use --script instead', |
3991 | + ) |
3992 | + parser.add_argument( |
3993 | + '_size', |
3994 | + default=None, |
3995 | + metavar='SIZE', |
3996 | + nargs='?', |
3997 | + help='Deprecated, use --size instead', |
3998 | + ) |
3999 | + |
4000 | + args = parser.parse_args() |
4001 | + |
4002 | + if args._mirror is not None: |
4003 | + if args.mirror: |
4004 | + parser.error( |
4005 | + "--mirror and 3rd positional argument cannot both be " |
4006 | + "specified" |
4007 | + ) |
4008 | + else: |
4009 | + args.mirror = args._mirror |
4010 | + |
4011 | + if args._architecture is not None: |
4012 | + if args.architecture: |
4013 | + parser.error( |
4014 | + "--architecture and 4th positional argument cannot both " |
4015 | + "be specified" |
4016 | + ) |
4017 | + else: |
4018 | + args.architecture = args._architecture |
4019 | + |
4020 | + if args._user_script is not None: |
4021 | + if args.user_script: |
4022 | + parser.error( |
4023 | + "--script and 5th positional argument cannot both " |
4024 | + "be specified" |
4025 | + ) |
4026 | + else: |
4027 | + args.user_script = args._user_script |
4028 | + |
4029 | + if args._size is not None: |
4030 | + if args.size: |
4031 | + parser.error( |
4032 | + "--size and 6th positional argument cannot both " |
4033 | + "be specified" |
4034 | + ) |
4035 | + else: |
4036 | + args.size = args._size |
4037 | + |
4038 | + vmdb2 = shutil.which('vmdb2') |
4039 | + |
4040 | + if vmdb2 is None: |
4041 | + raise UsageError( |
4042 | + 'vmdb2 not found. This script requires vmdb2 to be installed' |
4043 | + ) |
4044 | + |
4045 | + if not args.mirror: |
4046 | + args.mirror = default_mirror |
4047 | + |
4048 | + if not args.architecture: |
4049 | + args.architecture = default_arch |
4050 | + |
4051 | + if not args.size: |
4052 | + args.size = '25G' |
4053 | + |
4054 | + if not args.apt_proxy: |
4055 | + args.apt_proxy = os.getenv( |
4056 | + 'AUTOPKGTEST_APT_PROXY', |
4057 | + os.getenv('ADT_APT_PROXY', ''), |
4058 | + ) |
4059 | + |
4060 | + if not args.apt_proxy: |
4061 | + args.apt_proxy = subprocess.check_output( |
4062 | + 'eval "$(apt-config shell p Acquire::http::Proxy)"; echo "$p"', |
4063 | + shell=True, |
4064 | + universal_newlines=True, |
4065 | + ).strip() |
4066 | + |
4067 | + if not args.apt_proxy: |
4068 | + proxy_command = subprocess.check_output( |
4069 | + 'eval "$(apt-config shell p Acquire::http::Proxy-Auto-Detect)"; echo "$p"', |
4070 | + shell=True, |
4071 | + universal_newlines=True, |
4072 | + ).strip() |
4073 | + |
4074 | + if proxy_command: |
4075 | + args.apt_proxy = subprocess.check_output( |
4076 | + proxy_command, |
4077 | + shell=True, |
4078 | + universal_newlines=True, |
4079 | + ).strip() |
4080 | + |
4081 | + if args.apt_proxy: |
4082 | + # Set http_proxy for the initial debootstrap |
4083 | + os.environ['http_proxy'] = args.apt_proxy |
4084 | + # Translate proxy address on localhost to one that can be |
4085 | + # accessed from the running VM |
4086 | + os.environ['AUTOPKGTEST_APT_PROXY'] = re.sub( |
4087 | + r'localhost|127\.0\.0\.[0-9]*', |
4088 | + '10.0.2.2', |
4089 | + args.apt_proxy, |
4090 | + ) |
4091 | + |
4092 | + script = '' |
4093 | + |
4094 | + for d in DATA_PATHS: |
4095 | + s = os.path.join(d, 'setup-commands', 'setup-testbed') |
4096 | + |
4097 | + if os.access(s, os.R_OK): |
4098 | + script = s |
4099 | + break |
4100 | + |
4101 | + if args.user_script: |
4102 | + logger.info('Using customization script %s...', args.user_script) |
4103 | + |
4104 | + if args.architecture == default_arch: |
4105 | + override_arch = None |
4106 | + else: |
4107 | + override_arch = args.architecture |
4108 | + |
4109 | + with TemporaryDirectory() as temp: |
4110 | + vmdb2_config = os.path.join(temp, 'vmdb2.yaml') |
4111 | + |
4112 | + self.write_vmdb2_config( |
4113 | + vmdb2_config, |
4114 | + kernel=self.choose_kernel(args.mirror, args.architecture), |
4115 | + mirror=args.mirror, |
4116 | + override_arch=override_arch, |
4117 | + release=args.release, |
4118 | + script=script, |
4119 | + size=args.size, |
4120 | + user_script=args.user_script, |
4121 | + ) |
4122 | + |
4123 | + try: |
4124 | + subprocess.check_call([ |
4125 | + vmdb2, |
4126 | + '--verbose', |
4127 | + '--image=' + args.image + '.raw', |
4128 | + vmdb2_config, |
4129 | + ]) |
4130 | + subprocess.check_call([ |
4131 | + 'qemu-img', |
4132 | + 'convert', |
4133 | + '-f', 'raw', |
4134 | + '-O', 'qcow2', |
4135 | + args.image + '.raw', |
4136 | + args.image + '.new', |
4137 | + ]) |
4138 | + # Replace a potentially existing image as atomically as |
4139 | + # possible |
4140 | + os.rename(args.image + '.new', args.image) |
4141 | + finally: |
4142 | + with suppress(FileNotFoundError): |
4143 | + os.unlink(args.image + '.new') |
4144 | + |
4145 | + with suppress(FileNotFoundError): |
4146 | + os.unlink(args.image + '.raw') |
4147 | + |
4148 | + def write_vmdb2_config( |
4149 | + self, |
4150 | + path: str, |
4151 | + *, |
4152 | + kernel: str, |
4153 | + mirror: str, |
4154 | + override_arch: Optional[str], |
4155 | + release: str, |
4156 | + script: str, |
4157 | + size: str, |
4158 | + user_script: str, |
4159 | + ): |
4160 | + steps = [] # type: List[Dict[str, Any]] |
4161 | + steps.append(dict(mkimg='{{ image }}', size=size)) |
4162 | + steps.append(dict(mklabel='msdos', device='{{ image }}')) |
4163 | + |
4164 | + steps.append( |
4165 | + dict( |
4166 | + mkpart='primary', |
4167 | + device='{{ image }}', |
4168 | + start='0%', |
4169 | + end='100%', |
4170 | + tag='root', |
4171 | + ), |
4172 | + ) |
4173 | + |
4174 | + steps.append(dict(kpartx='{{ image }}')) |
4175 | + steps.append(dict(mkfs='ext4', partition='root')) |
4176 | + steps.append(dict(mount='root')) |
4177 | + |
4178 | + debootstrap = {} # type: Dict[str, Any] |
4179 | + |
4180 | + if override_arch is None: |
4181 | + debootstrap['debootstrap'] = release |
4182 | + else: |
4183 | + debootstrap['qemu-debootstrap'] = release |
4184 | + debootstrap['arch'] = override_arch |
4185 | + |
4186 | + debootstrap['mirror'] = mirror |
4187 | + debootstrap['target'] = 'root' |
4188 | + |
4189 | + steps.append(debootstrap) |
4190 | + |
4191 | + steps.append( |
4192 | + dict( |
4193 | + apt='install', |
4194 | + packages=[kernel, 'ifupdown'], |
4195 | + tag='root', |
4196 | + ), |
4197 | + ) |
4198 | + |
4199 | + steps.append( |
4200 | + dict( |
4201 | + grub='bios', |
4202 | + tag='root', |
4203 | + console='serial', |
4204 | + ), |
4205 | + ) |
4206 | + |
4207 | + steps.append( |
4208 | + dict( |
4209 | + chroot='root', |
4210 | + shell='\n'.join([ |
4211 | + 'passwd --delete root', |
4212 | + 'useradd --home-dir /home/user --create-home user', |
4213 | + 'passwd --delete user', |
4214 | + 'echo host > /etc/hostname', |
4215 | + "echo '127.0.1.1\thost' >> /etc/hosts", |
4216 | + ]), |
4217 | + ), |
4218 | + ) |
4219 | + |
4220 | + steps.append({ |
4221 | + 'shell': '\n'.join([ |
4222 | + 'rootdev=$(ls -1 /dev/mapper/loop* | sort | tail -1)', |
4223 | + 'uuid=$(blkid -c /dev/null -o value -s UUID "$rootdev")', |
4224 | + ('echo "UUID=$uuid / ext4 errors=remount-ro 0 1" ' |
4225 | + '> "$ROOT/etc/fstab"'), |
4226 | + ]), |
4227 | + 'root-fs': 'root', |
4228 | + }) |
4229 | + |
4230 | + for s in (script, user_script): |
4231 | + if s: |
4232 | + steps.append({ |
4233 | + 'shell': shlex.quote(s) + ' "$ROOT"', |
4234 | + 'root-fs': 'root', |
4235 | + }) |
4236 | + |
4237 | + with open(path, 'w') as writer: |
4238 | + # It's really YAML, but YAML is a superset of JSON (except in |
4239 | + # pathological cases), so writing it out as JSON avoids a |
4240 | + # dependency on a non-stdlib YAML library. |
4241 | + json.dump(dict(steps=steps), writer) |
4242 | + |
4243 | + def choose_kernel( |
4244 | + self, |
4245 | + mirror: str, |
4246 | + architecture: str, |
4247 | + ) -> str: |
4248 | + if 'ubuntu' in mirror: |
4249 | + return 'linux-image-virtual' |
4250 | + |
4251 | + return DEBIAN_KERNELS.get(architecture, 'linux-image-' + architecture) |
4252 | + |
4253 | + |
4254 | +if __name__ == '__main__': |
4255 | + try: |
4256 | + BuildQemu().run() |
4257 | + except UsageError as e: |
4258 | + logger.error('%s', e) |
4259 | + sys.exit(2) |
4260 | + except subprocess.CalledProcessError as e: |
4261 | + logger.error('%s', e) |
4262 | + sys.exit(e.returncode or 1) |
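The rewritten `write_vmdb2_config` above serializes the step list with `json.dump` rather than a YAML emitter. As the in-code comment notes, vmdb2 parses its configuration as YAML, and (except in pathological cases) every JSON document is also valid YAML, so the stdlib is sufficient and no third-party YAML dependency is needed. A minimal sketch of the same trick, with an illustrative subset of the steps from this diff:

```python
import json

# vmdb2 reads YAML; JSON is a subset of YAML, so json.dumps
# produces a config vmdb2 can consume without needing PyYAML.
steps = [
    {'mkimg': '{{ image }}', 'size': '25G'},
    {'mklabel': 'msdos', 'device': '{{ image }}'},
    {'mkfs': 'ext4', 'partition': 'root'},
]
config = json.dumps({'steps': steps}, indent=2)
print(config)
```

Round-tripping the string through `json.loads` (or any YAML parser) recovers the same step list.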
4263 | diff --git a/tools/autopkgtest-buildvm-ubuntu-cloud b/tools/autopkgtest-buildvm-ubuntu-cloud |
4264 | index 5b61d86..95f3da1 100755 |
4265 | --- a/tools/autopkgtest-buildvm-ubuntu-cloud |
4266 | +++ b/tools/autopkgtest-buildvm-ubuntu-cloud |
4267 | @@ -42,6 +42,7 @@ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname( |
4268 | os.path.abspath(__file__))), 'lib')) |
4269 | |
4270 | import VirtSubproc |
4271 | +from autopkgtest_qemu import Qemu, QemuImage |
4272 | |
4273 | workdir = tempfile.mkdtemp(prefix='autopkgtest-buildvm-ubuntu-cloud') |
4274 | atexit.register(shutil.rmtree, workdir) |
4275 | @@ -62,10 +63,12 @@ def get_default_release(): |
4276 | sys.stderr.write('WARNING: python-distro-info not installed, falling ' |
4277 | 'back to determining default release from currently ' |
4278 | 'installed release\n') |
4279 | - if subprocess.call(['which', 'lsb_release'], stdout=subprocess.PIPE) == 0: |
4280 | + if shutil.which('lsb_release') is not None: |
4281 | return subprocess.check_output(['lsb_release', '-cs'], |
4282 | universal_newlines=True).strip() |
4283 | |
4284 | + return None |
4285 | + |
4286 | |
4287 | def parse_args(): |
4288 | '''Parse CLI args''' |
4289 | @@ -133,13 +136,11 @@ def parse_args(): |
4290 | args = parser.parse_args() |
4291 | |
4292 | # check our dependencies |
4293 | - if subprocess.call(['which', args.qemu_command], stdout=subprocess.PIPE, |
4294 | - stderr=subprocess.STDOUT) != 0: |
4295 | + if shutil.which(args.qemu_command) is None: |
4296 | sys.stderr.write('ERROR: QEMU command %s not found\n' % |
4297 | args.qemu_command) |
4298 | sys.exit(1) |
4299 | - if subprocess.call(['which', 'genisoimage'], stdout=subprocess.PIPE, |
4300 | - stderr=subprocess.STDOUT) != 0: |
4301 | + if shutil.which('genisoimage') is None: |
4302 | sys.stderr.write('ERROR: genisoimage not found\n') |
4303 | sys.exit(1) |
4304 | if os.path.exists('/dev/kvm') and not os.access('/dev/kvm', os.W_OK): |
4305 | @@ -202,7 +203,7 @@ def download_image(cloud_img_url, release, arch): |
4306 | |
4307 | def resize_image(image, size): |
4308 | print('Resizing image, adding %s...' % size) |
4309 | - subprocess.check_call(['qemu-img', 'resize', image, '+' + size]) |
4310 | + subprocess.check_call(['qemu-img', 'resize', '-f', 'qcow2', image, '+' + size]) |
4311 | |
4312 | |
4313 | DEFAULT_METADATA = 'instance-id: nocloud\nlocal-hostname: autopkgtest\n' |
4314 | @@ -217,6 +218,8 @@ ssh_pwauth: True |
4315 | manage_etc_hosts: True |
4316 | apt_proxy: %(proxy)s |
4317 | apt_mirror: %(mirror)s |
4318 | +bootcmd: |
4319 | + - dpkg --add-architecture i386 |
4320 | runcmd: |
4321 | - sed -i 's/deb-systemd-invoke/true/' /var/lib/dpkg/info/cloud-init.prerm |
4322 | - mount -r /dev/vdb /mnt |
4323 | @@ -331,37 +334,33 @@ def host_tz(): |
4324 | def boot_image(image, seed, qemu_command, verbose, timeout): |
4325 | print('Booting image to run cloud-init...') |
4326 | |
4327 | - tty_sock = os.path.join(workdir, 'ttyS0') |
4328 | - |
4329 | - argv = [qemu_command, '-m', '512', |
4330 | - '-nographic', |
4331 | - '-monitor', 'null', |
4332 | - '-net', 'user', |
4333 | - '-net', 'nic,model=virtio', |
4334 | - '-serial', 'unix:%s,server,nowait' % tty_sock, |
4335 | - '-drive', 'file=%s,if=virtio' % image, |
4336 | - '-drive', 'file=%s,if=virtio,readonly' % seed] |
4337 | - |
4338 | - if os.path.exists('/dev/kvm'): |
4339 | - argv.append('-enable-kvm') |
4340 | + qemu = Qemu( |
4341 | + cpus=1, |
4342 | + images=[ |
4343 | + QemuImage(file=image, format='qcow2'), |
4344 | + QemuImage(file=seed, format='raw', readonly=True), |
4345 | + ], |
4346 | + overlay=False, |
4347 | + qemu_command=qemu_command, |
4348 | + ram_size=512, |
4349 | + ) |
4350 | |
4351 | - qemu = subprocess.Popen(argv) |
4352 | try: |
4353 | if verbose: |
4354 | - tty = VirtSubproc.get_unix_socket(tty_sock) |
4355 | + tty = VirtSubproc.get_unix_socket(qemu.ttys0_socket_path) |
4356 | |
4357 | # wait for cloud-init to finish and VM to shutdown |
4358 | with VirtSubproc.timeout(timeout, 'timed out on cloud-init'): |
4359 | - while qemu.poll() is None: |
4360 | + while qemu.subprocess.poll() is None: |
4361 | if verbose: |
4362 | sys.stdout.buffer.raw.write(tty.recv(4096)) |
4363 | else: |
4364 | time.sleep(1) |
4365 | finally: |
4366 | - if qemu.poll() is None: |
4367 | - qemu.terminate() |
4368 | - if qemu.wait() != 0: |
4369 | - sys.stderr.write('qemu failed with status %i\n' % qemu.returncode) |
4370 | + ret = qemu.cleanup() |
4371 | + assert ret is not None |
4372 | + if ret != 0: |
4373 | + sys.stderr.write('qemu failed with status %i\n' % ret) |
4374 | sys.exit(1) |
4375 | |
4376 | |
4377 | @@ -381,6 +380,11 @@ def install_image(src, dest): |
4378 | # |
4379 | |
4380 | args = parse_args() |
4381 | + |
4382 | +if args.release is None: |
4383 | + sys.stderr.write('Unable to determine default Ubuntu release\n') |
4384 | + sys.exit(1) |
4385 | + |
4386 | image = download_image(args.cloud_image_url, args.release, args.arch) |
4387 | resize_image(image, args.disk_size) |
4388 | seed = build_seed(args.mirror, args.proxy, args.no_apt_upgrade, |
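Several call sites in the hunk above replace `subprocess.call(['which', ...])` with `shutil.which`, which searches `PATH` in-process and returns the executable's full path, or `None` when it is absent, with no child process involved. A sketch of the resulting dependency-check pattern (the helper name is illustrative, not from the diff):

```python
import shutil
import sys

def require_tool(name: str) -> str:
    """Return the full path of an executable, or exit with an error."""
    path = shutil.which(name)   # searches PATH in-process
    if path is None:
        sys.stderr.write('ERROR: %s not found\n' % name)
        sys.exit(1)
    return path

# 'sh' is effectively guaranteed to exist on any POSIX system
print(require_tool('sh'))
```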
4389 | diff --git a/virt/autopkgtest-virt-lxc b/virt/autopkgtest-virt-lxc |
4390 | index afa1069..50da67b 100755 |
4391 | --- a/virt/autopkgtest-virt-lxc |
4392 | +++ b/virt/autopkgtest-virt-lxc |
4393 | @@ -66,6 +66,9 @@ def parse_args(): |
4394 | help='Run lxc-* commands with sudo; use if you run ' |
4395 | 'autopkgtest as normal user') |
4396 | parser.add_argument('--name', help='container name (autopkgtest-lxc-XXXXXX by default)') |
4397 | + parser.add_argument('--disk-limit', |
4398 | + help='limit amount of disk space that can be used by the container.' |
4399 | +                        help='limit amount of disk space that can be used by the container. '
4400 | parser.add_argument('template', help='LXC container name that will be ' |
4401 | 'used as a template') |
4402 | parser.add_argument('lxcargs', nargs=argparse.REMAINDER, |
4403 | @@ -112,12 +115,12 @@ def wait_booted(lxc_name): |
4404 | Do this by checking that the runlevel is someting numeric, i. e. not |
4405 | "unknown" or "S". |
4406 | ''' |
4407 | - timeout = 60 |
4408 | + timeout = 120 |
4409 | while timeout > 0: |
4410 | timeout -= 1 |
4411 | time.sleep(1) |
4412 | (rc, out, _) = VirtSubproc.execute_timeout( |
4413 | - None, 10, sudoify(['lxc-attach', '--name', lxc_name, 'runlevel']), |
4414 | + None, 20, sudoify(['lxc-attach', '--name', lxc_name, 'runlevel']), |
4415 | stdout=subprocess.PIPE) |
4416 | if rc != 0: |
4417 | adtlog.debug('wait_booted: lxc-attach failed, retrying...') |
4418 | @@ -125,7 +128,7 @@ def wait_booted(lxc_name): |
4419 | out = out.strip() |
4420 | if out.split()[-1].isdigit(): |
4421 | adtlog.debug('waiting for network') |
4422 | - VirtSubproc.check_exec(sudoify(['lxc-attach', '--name', lxc_name, '--', 'sh', '-ec', '[ ! -d /run/systemd/system ] || systemctl start network-online.target']), timeout=60) |
4423 | + VirtSubproc.check_exec(sudoify(['lxc-attach', '--name', lxc_name, '--', 'sh', '-ec', r'if [ -d /run/systemd/system ]; then systemctl start network-online.target; else while ps -ef | grep -q "/etc/init\.d/rc"; do sleep 1; done; fi']), timeout=60) |
4424 | return |
4425 | |
4426 | adtlog.debug('wait_booted: runlevel "%s", retrying...' % out) |
4427 | @@ -188,8 +191,11 @@ def start_lxc1(): |
4428 | |
4429 | |
4430 | def start_lxc_copy(): |
4431 | + argv = ['lxc-copy', '--name', args.template, '--newname', lxc_container_name] |
4432 | + if args.disk_limit: |
4433 | + argv += ['--backingstorage', 'loop', '--fssize', args.disk_limit] |
4434 | if args.ephemeral: |
4435 | - argv = ['lxc-copy', '--name', args.template, '--newname', lxc_container_name, '--ephemeral'] |
4436 | + argv += ['--ephemeral'] |
4437 | if shared_dir: |
4438 | argv += ['--mount', 'bind=%s:%s' % (shared_dir, shared_dir)] |
4439 | argv += args.lxcargs |
4440 | @@ -198,7 +204,6 @@ def start_lxc_copy(): |
4441 | if rc != 0: |
4442 | VirtSubproc.bomb('lxc-copy with exit status %i' % rc) |
4443 | else: |
4444 | - argv = ['lxc-copy', '--name', args.template, '--newname', lxc_container_name] |
4445 | rc = VirtSubproc.execute_timeout(None, 310, sudoify(argv, 300), |
4446 | stdout=subprocess.DEVNULL)[0] |
4447 | if rc != 0: |
4448 | @@ -297,7 +302,7 @@ def hook_revert(): |
4449 | hook_open() |
4450 | |
4451 | |
4452 | -def hook_wait_reboot(): |
4453 | +def hook_wait_reboot(*args, **kwargs): |
4454 | adtlog.debug('hook_wait_reboot: waiting for container to shut down...') |
4455 | VirtSubproc.execute_timeout(None, 65, sudoify( |
4456 | ['lxc-wait', '-n', lxc_container_name, '-s', 'STOPPED', '-t', '60'])) |
4457 | diff --git a/virt/autopkgtest-virt-lxc.1 b/virt/autopkgtest-virt-lxc.1 |
4458 | index 042d1a9..864d9d4 100644 |
4459 | --- a/virt/autopkgtest-virt-lxc.1 |
4460 | +++ b/virt/autopkgtest-virt-lxc.1 |
4461 | @@ -63,6 +63,13 @@ generate more expressive unique names you can use that to make it easier to map |
4462 | containers to running tests. |
4463 | |
4464 | .TP |
4465 | +.BI " \-\-disk\-limit" " SIZE"
4466 | +Limits the amount of disk space that the test container is allowed to use for |
4467 | +its root filesystem. When this option is used, the test container is backed
4468 | +by a loop device of the given \fISIZE\fR, so it cannot consume all of the
4469 | +disk space on the host machine.
4470 | + |
4471 | +.TP |
4472 | .BR \-d " | " \-\-debug |
4473 | Enables debugging output. |
4474 | |
4475 | diff --git a/virt/autopkgtest-virt-lxd b/virt/autopkgtest-virt-lxd |
4476 | index cce0931..fe057fd 100755 |
4477 | --- a/virt/autopkgtest-virt-lxd |
4478 | +++ b/virt/autopkgtest-virt-lxd |
4479 | @@ -101,7 +101,7 @@ def wait_booted(): |
4480 | out = out.strip() |
4481 | if out.split()[-1].isdigit(): |
4482 | adtlog.debug('waiting for network') |
4483 | - VirtSubproc.check_exec(['lxc', 'exec', container_name, '--', 'sh', '-ec', '[ ! -d /run/systemd/system ] || systemctl start network-online.target'], timeout=60) |
4484 | + VirtSubproc.check_exec(['lxc', 'exec', container_name, '--', 'sh', '-ec', r'if [ -d /run/systemd/system ]; then systemctl start network-online.target; else while ps -ef | grep -q "/etc/init\.d/rc"; do sleep 1; done; fi'], timeout=60) |
4485 | return |
4486 | |
4487 | adtlog.debug('wait_booted: runlevel "%s", retrying...' % out) |
4488 | @@ -201,11 +201,18 @@ def get_uptime(): |
4489 | return |
4490 | |
4491 | |
4492 | -def hook_wait_reboot(): |
4493 | +def hook_prepare_reboot(): |
4494 | + initial_uptime = get_uptime() |
4495 | + adtlog.debug('hook_prepare_reboot: fetching uptime before reboot: %s' % initial_uptime) |
4496 | + |
4497 | + return {'initial_uptime': initial_uptime} |
4498 | + |
4499 | + |
4500 | +def hook_wait_reboot(*func_args, **kwargs): |
4501 | adtlog.debug('hook_wait_reboot: waiting for container to shut down...') |
4502 | # "lxc exec" exits with 0 when the container stops, so just wait longer |
4503 | # than our timeout |
4504 | - initial_uptime = get_uptime() |
4505 | + initial_uptime = kwargs['initial_uptime'] |
4506 | |
4507 | adtlog.debug('hook_wait_reboot: container up for %s, waiting for reboot' % initial_uptime) |
4508 | |
diff --git a/virt/autopkgtest-virt-qemu b/virt/autopkgtest-virt-qemu
index e8073cf..ec66f80 100755
--- a/virt/autopkgtest-virt-qemu
+++ b/virt/autopkgtest-virt-qemu
@@ -27,14 +27,7 @@

 import sys
 import os
-import subprocess
-import tempfile
-import shutil
 import time
-import socket
-import errno
-import fcntl
-import re
 import argparse

 sys.path.insert(0, '/usr/share/autopkgtest/lib')
@@ -43,27 +36,17 @@ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(

 import VirtSubproc
 import adtlog
+from autopkgtest_qemu import Qemu


 args = None
-workdir = None
-p_qemu = None
-ssh_port = None
+qemu = None
 normal_user = None
-qemu_cmd_default = None


 def parse_args():
- global args, qemu_cmd_default
-
- uname_to_qemu_suffix = {'i[3456]86$': 'i386'}
- arch = os.uname()[4]
- for pattern, suffix in uname_to_qemu_suffix.items():
- if re.match(pattern, arch):
- qemu_cmd_default = 'qemu-system-' + suffix
- break
- else:
- qemu_cmd_default = 'qemu-system-' + arch
+ global args
+ qemu_cmd_default = Qemu.get_default_qemu_command()

 parser = argparse.ArgumentParser()

@@ -86,13 +69,13 @@ def parse_args():
 help='Show boot messages from serial console')
 parser.add_argument('-d', '--debug', action='store_true',
 help='Enable debugging output')
- parser.add_argument('--qemu-options',
- help='Pass through arguments to QEMU command.')
+ parser.add_argument('--qemu-options', default='',
+ help='Pass through (whitespace-separated) arguments to QEMU command.')
 parser.add_argument('--baseimage', action='store_true', default=False,
 help='Provide a read-only copy of the base image at /dev/baseimage')
 parser.add_argument('--efi', action='store_true', default=False,
 help='Use OVMF or AAVMF to boot virtual machine using EFI (default: BIOS)')
- parser.add_argument('image', nargs='+',
+ parser.add_argument('images', nargs='+',
 help='disk image to add to the VM (in order)')

 args = parser.parse_args()
@@ -101,24 +84,8 @@ def parse_args():
 adtlog.verbosity = 2


-def prepare_overlay():
- '''Generate a temporary overlay image'''
-
- # generate a temporary overlay
- if args.overlay_dir:
- overlay = os.path.join(args.overlay_dir, os.path.basename(
- args.image[0]) + '.overlay-%s' % time.time())
- else:
- overlay = os.path.join(workdir, 'overlay.img')
- adtlog.debug('Creating temporary overlay image in %s' % overlay)
- VirtSubproc.check_exec(['qemu-img', 'create', '-f', 'qcow2', '-b',
- os.path.abspath(args.image[0]), overlay],
- outp=True, timeout=300)
- return overlay
-
-
 def wait_boot():
- term = VirtSubproc.get_unix_socket(os.path.join(workdir, 'ttyS0'))
+ term = qemu.ttys0_socket
 VirtSubproc.expect(term, b' login: ', args.timeout_reboot, 'login prompt on ttyS0',
 echo=args.show_boot)
 # this is really ugly, but runlevel, "service status hwclock" etc. all
@@ -131,7 +98,7 @@ def wait_boot():
 def check_ttyS1_shell():
 '''Check if there is a shell running on ttyS1'''

- term = VirtSubproc.get_unix_socket(os.path.join(workdir, 'ttyS1'))
+ term = qemu.ttys1_socket
 term.sendall(b'echo -n o; echo k\n')
 try:
 VirtSubproc.expect(term, b'ok', 1)
@@ -168,7 +135,7 @@ def setup_shell():
 def login_tty_and_setup_shell():
 '''login on ttyS0 and start a root shell on ttyS1 from there'''

- term = VirtSubproc.get_unix_socket(os.path.join(workdir, 'ttyS0'))
+ term = qemu.ttys0_socket

 # send user name
 term.sendall(args.user.encode('UTF-8') + b'\n')
@@ -199,7 +166,7 @@ def login_tty_and_setup_shell():
 def setup_baseimage():
 '''setup /dev/baseimage in VM'''

- term = VirtSubproc.get_unix_socket(os.path.join(workdir, 'ttyS1'))
+ term = qemu.ttys1_socket

 # Setup udev rules for /dev/baseimage; set link_priority to -1024 so
 # that the duplicate UUIDs of the partitions will have no effect.
@@ -211,8 +178,8 @@ def setup_baseimage():
 VirtSubproc.expect(term, b'#', 10)

 # Add the base image as an additional drive
- monitor = VirtSubproc.get_unix_socket(os.path.join(workdir, 'monitor'))
- monitor.sendall(('drive_add 0 file=%s,if=none,readonly=on,serial=BASEIMAGE,id=drive-baseimage\n' % args.image[0]).encode())
+ monitor = qemu.monitor_socket
+ monitor.sendall(('drive_add 0 file=%s,if=none,readonly=on,serial=BASEIMAGE,id=drive-baseimage,format=%s\n' % (qemu.images[0].file, qemu.images[0].format)).encode())
 VirtSubproc.expect(monitor, b'(qemu)', 10)
 monitor.sendall(b'device_add virtio-blk-pci,drive=drive-baseimage,id=virtio-baseimage\n')
 VirtSubproc.expect(monitor, b'(qemu)', 10)
@@ -226,7 +193,7 @@ def setup_baseimage():
 def setup_shared(shared_dir):
 '''Set up shared dir'''

- term = VirtSubproc.get_unix_socket(os.path.join(workdir, 'ttyS1'))
+ term = qemu.ttys1_socket

 term.sendall(b'''mkdir -p -m 1777 /run/autopkgtest/shared
 mount -t 9p -o trans=virtio,access=any autopkgtest /run/autopkgtest/shared
@@ -287,7 +254,7 @@ EOF
 def setup_config(shared_dir):
 '''Set up configuration files'''

- term = VirtSubproc.get_unix_socket(os.path.join(workdir, 'ttyS1'))
+ term = qemu.ttys1_socket

 # copy our timezone, to avoid time skews with the host
 if os.path.exists('/etc/timezone'):
@@ -311,7 +278,7 @@ def setup_config(shared_dir):
 # ensure that we have Python for our the auxverb helpers
 term.sendall(b'type python3 2>/dev/null || type python 2>/dev/null\n')
 try:
- out = VirtSubproc.expect(term, b'/python', 5)
+ out = VirtSubproc.expect(term, b'/python', 30)
 except VirtSubproc.Timeout:
 VirtSubproc.bomb('Neither python3 nor python is installed in the VM, '
 'one of them is required by autopkgtest')
@@ -324,11 +291,15 @@ def setup_config(shared_dir):
 def make_auxverb(shared_dir):
 '''Create auxverb script'''

- auxverb = os.path.join(workdir, 'runcmd')
+ auxverb = os.path.join(qemu.workdir, 'runcmd')
 with open(auxverb, 'w') as f:
 f.write('''#!%(py)s
-import sys, os, tempfile, threading, time, atexit, shutil, fcntl, errno, pipes
+import sys, os, tempfile, threading, time, atexit, shutil, fcntl, errno
 import socket
+try:
+ from shlex import quote
+except ImportError:
+ from pipes import quote

 dir_host = '%(dir)s'
 job_host = tempfile.mkdtemp(prefix='job.', dir=dir_host)
@@ -394,7 +365,7 @@ s.connect('%(tty)s')
 cmd = 'PYTHONHASHSEED=0 /tmp/eofcat %%(d)s/stdin_eof %%(d)s/exit.tmp < %%(d)s/stdin | ' \\
 '(%%(c)s >> %%(d)s/stdout 2>> %%(d)s/stderr; echo $? > %%(d)s/exit.tmp);' \\
 'mv %%(d)s/exit.tmp %%(d)s/exit\\n' %% \\
- {'d': job_guest, 'c': ' '.join(map(pipes.quote, sys.argv[1:]))}
+ {'d': job_guest, 'c': ' '.join(map(quote, sys.argv[1:]))}
 s.sendall(cmd.encode())

 # wait until command has exited
@@ -422,7 +393,7 @@ t_stdout.join()
 t_stderr.join()
 # code 255 means that the auxverb itself failed, so translate
 sys.exit(rc == 255 and 253 or rc)
-''' % {'py': sys.executable, 'tty': os.path.join(workdir, 'ttyS1'), 'dir': shared_dir})
+''' % {'py': sys.executable, 'tty': os.path.join(qemu.workdir, 'ttyS1'), 'dir': shared_dir})

 os.chmod(auxverb, 0o755)

@@ -436,66 +407,6 @@ sys.exit(rc == 255 and 253 or rc)
 VirtSubproc.bomb('failed to connect to VM')


-def get_cpuflag():
- '''Return QEMU cpu option list suitable for host CPU'''
-
- try:
- with open('/proc/cpuinfo', 'r') as f:
- for line in f:
- if line.startswith('flags'):
- words = line.split()
- if 'vmx' in words:
- adtlog.debug('Detected KVM capable Intel host CPU, enabling nested KVM')
- return ['-cpu', 'kvm64,+vmx,+lahf_lm']
- elif 'svm' in words: # AMD kvm
- adtlog.debug('Detected KVM capable AMD host CPU, enabling nested KVM')
- # FIXME: this should really be the one below for more
- # reproducible testbeds, but nothing except -cpu host works
- # return ['-cpu', 'kvm64,+svm,+lahf_lm']
- return ['-cpu', 'host']
- except IOError as e:
- adtlog.warning('Cannot read /proc/cpuinfo to detect CPU flags: %s' % e)
- # fetching CPU flags isn't critical (only used to enable nested KVM),
- # so don't fail here
- pass
-
- return []
-
-
-def find_free_port(start):
- '''Find an unused port in the range [start, start+50)'''
-
- for p in range(start, start + 50):
- adtlog.debug('find_free_port: trying %i' % p)
- try:
- lockfile = '/tmp/autopkgtest-virt-qemu.port.%i' % p
- f = None
- try:
- f = open(lockfile, 'x')
- os.unlink(lockfile)
- fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
- except (IOError, OSError):
- adtlog.debug('find_free_port: %i is locked' % p)
- continue
- finally:
- if f:
- f.close()
-
- s = socket.create_connection(('127.0.0.1', p))
- # if that works, the port is taken
- s.close()
- continue
- except socket.error as e:
- if e.errno == errno.ECONNREFUSED:
- adtlog.debug('find_free_port: %i is free' % p)
- return p
- else:
- pass
-
- adtlog.debug('find_free_port: all ports are taken')
- return None
-
-
 def determine_normal_user(shared_dir):
 '''Check for a normal user to run tests as.'''

@@ -507,7 +418,7 @@ def determine_normal_user(shared_dir):

 # get the first UID in the Debian Policy §9.2.2 "dynamically allocated
 # user account" range
- term = VirtSubproc.get_unix_socket(os.path.join(workdir, 'ttyS1'))
+ term = qemu.ttys1_socket
 term.sendall(b"getent passwd | sort -t: -nk3 | "
 b"awk -F: '{if ($3 >= 1000 && $3 <= 59999) { print $1; exit } }'"
 b"> /run/autopkgtest/shared/normal_user\n")
@@ -526,6 +437,7 @@ def determine_normal_user(shared_dir):


 def hook_open():
+<<<<<<< virt/autopkgtest-virt-qemu
 global workdir, p_qemu, ssh_port

 workdir = tempfile.mkdtemp(prefix='autopkgtest-qemu.')
@@ -597,6 +509,20 @@ def hook_open():
 argv.extend(args.qemu_options.split())

 p_qemu = subprocess.Popen(argv)
+=======
+ global qemu
+
+ qemu = Qemu(
+ cpus=args.cpus,
+ efi=args.efi,
+ images=args.images,
+ overlay=True,
+ overlay_dir=args.overlay_dir,
+ qemu_command=args.qemu_command,
+ qemu_options=args.qemu_options.split(),
+ ram_size=args.ram_size,
+ )
+>>>>>>> virt/autopkgtest-virt-qemu

 try:
 try:
@@ -604,14 +530,16 @@ def hook_open():
 finally:
 # remove overlay as early as possible, to avoid leaking large
 # files; let QEMU run with the deleted inode
+ overlay = qemu.images[0].overlay
+ assert overlay is not None
 os.unlink(overlay)
 setup_shell()
 if args.baseimage:
 setup_baseimage()
- setup_shared(shareddir)
- setup_config(shareddir)
- make_auxverb(shareddir)
- determine_normal_user(shareddir)
+ setup_shared(qemu.shareddir)
+ setup_config(qemu.shareddir)
+ make_auxverb(qemu.shareddir)
+ determine_normal_user(qemu.shareddir)
 except Exception:
 # Clean up on failure
 hook_cleanup()
@@ -633,35 +561,27 @@ def hook_revert():


 def hook_cleanup():
- global p_qemu, workdir
-
- if p_qemu:
- p_qemu.terminate()
- p_qemu.wait()
- p_qemu = None
+ global qemu

- if workdir:
- shutil.rmtree(workdir)
- workdir = None
+ qemu.cleanup()
+ qemu = None


 def hook_prepare_reboot():
 if args.baseimage:
 # Remove baseimage drive again, so that it does not break the subsequent
 # boot due to the duplicate UUID
- monitor = VirtSubproc.get_unix_socket(os.path.join(workdir, 'monitor'))
+ monitor = qemu.monitor_socket
 monitor.sendall(b'device_del virtio-baseimage\n')
 VirtSubproc.expect(monitor, b'(qemu)', 10)
 monitor.close()


-def hook_wait_reboot():
- global workdir
- shareddir = os.path.join(workdir, 'shared')
- os.unlink(os.path.join(shareddir, 'done_shared'))
+def hook_wait_reboot(*func_args, **kwargs):
+ os.unlink(os.path.join(qemu.shareddir, 'done_shared'))
 wait_boot()
 setup_shell()
- setup_shared(shareddir)
+ setup_shared(qemu.shareddir)
 if args.baseimage:
 setup_baseimage()

@@ -671,19 +591,17 @@ def hook_capabilities():
 caps = ['revert', 'revert-full-system', 'root-on-testbed',
 'isolation-machine', 'reboot']
 # disabled, see hook_downtmp()
- # caps.append('downtmp-host=%s' % os.path.join(workdir, 'shared', 'tmp'))
+ # caps.append('downtmp-host=%s' % os.path.join(qemu.workdir, 'shared', 'tmp'))
 if normal_user:
 caps.append('suggested-normal-user=' + normal_user)
 return caps


 def hook_shell(dir, *extra_env):
- global ssh_port, normal_user
-
- if ssh_port:
+ if qemu.ssh_port:
 user = normal_user or '<user>'
 ssh = ' ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -p %i %s@localhost\n' % (
- ssh_port, user)
+ qemu.ssh_port, user)
 else:
 ssh = ''

@@ -698,7 +616,7 @@ Depending on which terminal program you have installed, you can use one of
 The tested source package is in %(dir)s

 Press Enter to resume running tests.
-''' % {'tty0': os.path.join(workdir, 'ttyS0'), 'dir': dir, 'ssh': ssh})
+''' % {'tty0': os.path.join(qemu.workdir, 'ttyS0'), 'dir': dir, 'ssh': ssh})
 with open('/dev/tty', 'r') as f:
 f.readline()

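Reviewer note: the auxverb hunk above drops the deprecated `pipes.quote` in favour of `shlex.quote`, with a guarded import because the generated script must also run under a guest Python 2. A standalone sketch of that pattern and what `quote()` does to shell-unsafe arguments (the example strings are illustrative only):

```python
# Same guarded import as in the generated auxverb script: shlex.quote exists
# on Python 3, pipes.quote is the Python 2 fallback.
try:
    from shlex import quote  # Python 3
except ImportError:
    from pipes import quote  # Python 2

# quote() single-quotes anything that could be interpreted by the shell,
# so the joined string is safe to pass through `sh -c`.
cmd = ' '.join(map(quote, ['echo', "it's done", '$HOME']))
```

Plain words pass through unchanged, while embedded single quotes and `$` expansions get wrapped, so `$HOME` reaches the command literally rather than being expanded by the guest shell.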
diff --git a/virt/autopkgtest-virt-ssh b/virt/autopkgtest-virt-ssh
index 861460f..186ebe6 100755
--- a/virt/autopkgtest-virt-ssh
+++ b/virt/autopkgtest-virt-ssh
@@ -30,8 +30,8 @@ import sys
 import os
 import argparse
 import tempfile
+import shlex
 import shutil
-import pipes
 import time
 import subprocess
 import socket
@@ -326,16 +326,18 @@ def build_auxverb():

 global sshconfig, sshcmd, capabilities, workdir

- if sshconfig['login'] != 'root':
- (sudocmd, askpass) = can_sudo(sshcmd)
- else:
+ if sshconfig['login'] == 'root':
 (sudocmd, askpass) = (None, None)
- if sudocmd:
- if 'root-on-testbed' not in capabilities:
- capabilities.append('root-on-testbed')
+ capabilities.append('root-on-testbed')
 else:
- if 'root-on-testbed' in capabilities:
- capabilities.remove('root-on-testbed')
+ (sudocmd, askpass) = can_sudo(sshcmd)
+ if sudocmd:
+ if 'root-on-testbed' not in capabilities:
+ capabilities.append('root-on-testbed')
+ else:
+ if 'root-on-testbed' in capabilities:
+ adtlog.warning('sudo command failed: removing root-on-testbed capability')
+ capabilities.remove('root-on-testbed')

 extra_cmd = ''
 if askpass:
@@ -375,7 +377,7 @@ def can_sudo(ssh_cmd):
 '/bin/echo -e "#!/bin/sh\necho \'%s\'" > $F;' \
 'chmod u+x $F; sync; echo $F' % args.password
 askpass = VirtSubproc.check_exec(
- ssh_cmd + ['/bin/sh', '-ec', pipes.quote(cmd)],
+ ssh_cmd + ['/bin/sh', '-ec', shlex.quote(cmd)],
 outp=True, timeout=30).strip()
 adtlog.debug('created SUDO_ASKPASS from specified password')
 cleanup_paths.append(askpass)
@@ -436,6 +438,7 @@ def wait_port_down(host, port, timeout):
 VirtSubproc.timeout_start(timeout)
 while True:
 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ s.settimeout(1)
 try:
 res = s.connect_ex((host, port))
 adtlog.debug('wait_port_down() connect: %s' % os.strerror(res))
@@ -455,7 +458,7 @@ def wait_port_down(host, port, timeout):
 VirtSubproc.timeout_stop()


-def hook_wait_reboot():
+def hook_wait_reboot(*func_args, **kwargs):
 global sshcmd

 if args.setup_script:
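Reviewer note on the `wait_port_down()` hunk: the added `s.settimeout(1)` bounds each connect probe, so a host that is dropping packets mid-reboot cannot stall one polling iteration for the kernel's full TCP connect timeout. A self-contained sketch of the probe logic (the `port_is_down` helper is hypothetical; only the `settimeout`/`connect_ex` pairing mirrors the diff):

```python
import socket

def port_is_down(host, port):
    """Probe once; True if nothing accepts connections on host:port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1)  # bound the probe to 1s, as the diff now does
    try:
        # connect_ex returns 0 on success, an errno (e.g. ECONNREFUSED)
        # otherwise, instead of raising
        return s.connect_ex((host, port)) != 0
    finally:
        s.close()

# demonstrate against a throwaway local listener
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 0))
srv.listen(1)
port = srv.getsockname()[1]
up = not port_is_down('127.0.0.1', port)   # listener present: port is up
srv.close()
down = port_is_down('127.0.0.1', port)     # listener gone: port is down
```

Without the timeout, `connect_ex()` to a silently dead host can block for minutes, defeating the loop's own per-iteration pacing.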