Merge lp:~laney/britney/autopkgtest-no-delay-if-never-passed into lp:britney
Status: Superseded
Proposed branch: lp:~laney/britney/autopkgtest-no-delay-if-never-passed
Merge into: lp:britney
Diff against target: 5095 lines (+4252/-116) (has conflicts), 13 files modified:
autopkgtest.py (+730/-0), boottest.py (+293/-0), britney.conf (+52/-15), britney.py (+505/-80), britney_nobreakall.conf (+44/-16), britney_util.py (+110/-0), consts.py (+1/-0), excuse.py (+54/-5), run-autopkgtest (+78/-0), tests/__init__.py (+184/-0), tests/mock_swift.py (+170/-0), tests/test_autopkgtest.py (+1586/-0), tests/test_boottest.py (+445/-0)
Text conflicts: britney.conf, britney.py, britney_nobreakall.conf, britney_util.py
To merge this branch: bzr merge lp:~laney/britney/autopkgtest-no-delay-if-never-passed
Related bugs:
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Ubuntu Package Archive Administrators | | | Pending |
Review via email: mp+277671@code.launchpad.net |
This proposal has been superseded by a proposal from 2015-11-17.
Commit message
Description of the change
This is an attempt at fixing bug #1501699.
With this branch, we now take the "ever_passed" flag into account when deciding whether a test run blocks promotion. A package is no longer held back if, on every architecture, its tests either pass or have never passed at all (i.e. they are ALWAYSFAIL, or RUNNING without any previous pass).
Many testcases relied on the previous behaviour, so I inserted some 'passing' test results to cause the packages to be held.
There are some delicacies around kernel triggering that I don't fully understand (this is my first venture into britney).
- 529. By Iain Lane

  Add passing results for various tests

  We're going to modify britney so that RUNNING tests don't block
  promotion if they have never passed - for this we will need to change a
  few tests so that the packages being tested have passed before, meaning
  that there could potentially be a regression.

- 530. By Iain Lane

  tests: pretty print the excuses so that they are readable

- 531. By Iain Lane

  Fix typo in test name triggerered -> triggered

- 532. By Iain Lane

  Don't wait for tests which have never passed

  They won't ever block promotion, so we might as well make them
  candidates right away (but trigger their tests).
Unmerged revisions
- 532. By Iain Lane

  Don't wait for tests which have never passed

  They won't ever block promotion, so we might as well make them
  candidates right away (but trigger their tests).

- 531. By Iain Lane

  Fix typo in test name triggerered -> triggered

- 530. By Iain Lane

  tests: pretty print the excuses so that they are readable

- 529. By Iain Lane

  Add passing results for various tests

  We're going to modify britney so that RUNNING tests don't block
  promotion if they have never passed - for this we will need to change a
  few tests so that the packages being tested have passed before, meaning
  that there could potentially be a regression.

- 528. By Martin Pitt

  Restrict ADT_ARCHES to architectures we actually run for

  This makes it simpler to run britney against a PPA with
  --architectures='i386 amd64'.

- 527. By Martin Pitt

  Split self.options.adt_arches once during initialization

  For consistency with all other self.options.*_arches.

- 526. By Martin Pitt

  tests: Use proper triggers in all tests

  Also introduce a tr() shortcut for those. Drop the now redundant
  test_rerun_failure_triggers().

- 525. By Martin Pitt

  Autopkgtest: Request one test run per trigger for all packages

  Stop special-casing the kernel and move to one test run per trigger. This
  allows us to only install the triggering package from unstable and run the rest
  out of testing, which gives much better isolation.

- 524. By Martin Pitt

  Let linux-meta trigger systemd

- 523. By Martin Pitt

  run-autopkgtest: Require --trigger argument, improve help
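The "one test run per trigger" scheme from revision 525 shows up in the preview diff's submit() method: each AMQP request body is the package name, a newline, and a JSON parameter dict carrying a single trigger, published to a per-arch debci queue. A minimal sketch of that wire format (the package and version values here are invented examples):

```python
import json


def queue_name(series, arch):
    # Per-series, per-arch queue, as built in submit() in the diff below.
    return 'debci-%s-%s' % (series, arch)


def request_body(pkg, trigger):
    # One request per trigger: "<pkg>\n<json params>", where trigger is
    # a "source/version" string identifying the unstable upload.
    params = {'triggers': [trigger]}
    return pkg + '\n' + json.dumps(params)


# e.g. a systemd test triggered by a (made-up) kernel meta upload:
# request_body('systemd', 'linux-meta/4.2.0.16.18')
```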
Preview Diff
1 | === added file 'autopkgtest.py' |
2 | --- autopkgtest.py 1970-01-01 00:00:00 +0000 |
3 | +++ autopkgtest.py 2015-11-17 11:05:43 +0000 |
4 | @@ -0,0 +1,730 @@ |
5 | +# -*- coding: utf-8 -*- |
6 | + |
7 | +# Copyright (C) 2013 - 2015 Canonical Ltd. |
8 | +# Authors: |
9 | +# Colin Watson <cjwatson@ubuntu.com> |
10 | +# Jean-Baptiste Lallement <jean-baptiste.lallement@canonical.com> |
11 | +# Martin Pitt <martin.pitt@ubuntu.com> |
12 | + |
13 | +# This program is free software; you can redistribute it and/or modify |
14 | +# it under the terms of the GNU General Public License as published by |
15 | +# the Free Software Foundation; either version 2 of the License, or |
16 | +# (at your option) any later version. |
17 | + |
18 | +# This program is distributed in the hope that it will be useful, |
19 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
20 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
21 | +# GNU General Public License for more details. |
22 | + |
23 | +import os |
24 | +import time |
25 | +import json |
26 | +import tarfile |
27 | +import io |
28 | +import copy |
29 | +import re |
30 | +from urllib.parse import urlencode |
31 | +from urllib.request import urlopen |
32 | + |
33 | +import apt_pkg |
34 | +import kombu |
35 | + |
36 | +from consts import (AUTOPKGTEST, BINARIES, DEPENDS, RDEPENDS, SOURCE, VERSION) |
37 | + |
38 | + |
39 | +def srchash(src): |
40 | + '''archive hash prefix for source package''' |
41 | + |
42 | + if src.startswith('lib'): |
43 | + return src[:4] |
44 | + else: |
45 | + return src[0] |
46 | + |
47 | + |
48 | +def latest_item(ver_map, min_version=None): |
49 | + '''Return (ver, value) from version -> value map with latest version number |
50 | + |
51 | + If min_version is given, version has to be >= that, otherwise a KeyError is |
52 | + raised. |
53 | + ''' |
54 | + latest = None |
55 | + for ver in ver_map: |
56 | + if latest is None or apt_pkg.version_compare(ver, latest) > 0: |
57 | + latest = ver |
58 | + if min_version is not None and latest is not None and \ |
59 | + apt_pkg.version_compare(latest, min_version) < 0: |
60 | + latest = None |
61 | + |
62 | + if latest is not None: |
63 | + return (latest, ver_map[latest]) |
64 | + else: |
65 | + raise KeyError('no version >= %s' % min_version) |
66 | + |
67 | + |
68 | +class AutoPackageTest(object): |
69 | + """autopkgtest integration |
70 | + |
71 | + Look for autopkgtest jobs to run for each update that is otherwise a |
72 | + valid candidate, and collect the results. If an update causes any |
73 | + autopkgtest jobs to be run, then they must all pass before the update is |
74 | + accepted. |
75 | + """ |
76 | + |
77 | + def __init__(self, britney, distribution, series, debug=False): |
78 | + self.britney = britney |
79 | + self.distribution = distribution |
80 | + self.series = series |
81 | + self.debug = debug |
82 | + self.excludes = set() |
83 | + self.test_state_dir = os.path.join(britney.options.unstable, |
84 | + 'autopkgtest') |
85 | + # map of requested tests from request() |
86 | + # src -> ver -> arch -> {(triggering-src1, ver1), ...} |
87 | + self.requested_tests = {} |
88 | + # same map for tests requested in previous runs |
89 | + self.pending_tests = None |
90 | + self.pending_tests_file = os.path.join(self.test_state_dir, 'pending.txt') |
91 | + |
92 | + if not os.path.isdir(self.test_state_dir): |
93 | + os.mkdir(self.test_state_dir) |
94 | + self.read_pending_tests() |
95 | + |
96 | + # results map: src -> arch -> [latest_stamp, ver -> trigger -> passed, ever_passed] |
97 | + # - It's tempting to just use a global "latest" time stamp, but due to |
98 | + # swift's "eventual consistency" we might miss results with older time |
99 | + # stamps from other packages that we don't see in the current run, but |
100 | + # will in the next one. This doesn't hurt for older results of the same |
101 | + # package. |
102 | + # - trigger is "source/version" of an unstable package that triggered |
103 | + # this test run. We need to track this to avoid unnecessarily |
104 | + # re-running tests. |
105 | + # - "passed" is a bool |
106 | + # - ever_passed is a bool whether there is any successful test of |
107 | + # src/arch of any version. This is used for detecting "regression" |
108 | + # vs. "always failed" |
109 | + self.test_results = {} |
110 | + self.results_cache_file = os.path.join(self.test_state_dir, 'results.cache') |
111 | + |
112 | + # read the cached results that we collected so far |
113 | + if os.path.exists(self.results_cache_file): |
114 | + with open(self.results_cache_file) as f: |
115 | + self.test_results = json.load(f) |
116 | + self.log_verbose('Read previous results from %s' % self.results_cache_file) |
117 | + else: |
118 | + self.log_verbose('%s does not exist, re-downloading all results ' |
119 | + 'from swift' % self.results_cache_file) |
120 | + |
121 | + def log_verbose(self, msg): |
122 | + if self.britney.options.verbose: |
123 | + print('I: [%s] - %s' % (time.asctime(), msg)) |
124 | + |
125 | + def log_error(self, msg): |
126 | + print('E: [%s] - %s' % (time.asctime(), msg)) |
127 | + |
128 | + @classmethod |
129 | + def has_autodep8(kls, srcinfo, binaries): |
130 | + '''Check if package is covered by autodep8 |
131 | + |
132 | + srcinfo is an item from self.britney.sources |
133 | + binaries is self.britney.binaries['unstable'][arch][0] |
134 | + ''' |
135 | + # DKMS: some binary depends on "dkms" |
136 | + for bin_arch in srcinfo[BINARIES]: |
137 | + binpkg = bin_arch.split('/')[0] # chop off arch |
138 | + try: |
139 | + bininfo = binaries[binpkg] |
140 | + except KeyError: |
141 | + continue |
142 | + if 'dkms' in (bininfo[DEPENDS] or ''): |
143 | + return True |
144 | + return False |
145 | + |
146 | + def tests_for_source(self, src, ver, arch): |
147 | + '''Iterate over all tests that should be run for given source and arch''' |
148 | + |
149 | + sources_info = self.britney.sources['unstable'] |
150 | + binaries_info = self.britney.binaries['unstable'][arch][0] |
151 | + |
152 | + reported_pkgs = set() |
153 | + |
154 | + tests = [] |
155 | + |
156 | + # hack for vivid's gccgo-5 |
157 | + if src == 'gccgo-5': |
158 | + for test in ['juju', 'juju-core', 'juju-mongodb', 'mongodb']: |
159 | + try: |
160 | + tests.append((test, self.britney.sources['testing'][test][VERSION])) |
161 | + except KeyError: |
162 | + # no package in that series? *shrug*, then not (mostly for testing) |
163 | + pass |
164 | + return tests |
165 | + |
166 | + # gcc-N triggers tons of tests via libgcc1, but this is mostly in vain: |
167 | + # gcc already tests itself during build, and it is being used from |
168 | + # -proposed, so holding it back on a dozen unrelated test failures |
169 | + # serves no purpose. Just check some key packages which actually use |
170 | + # gcc during the test, and libreoffice as an example for a libgcc user. |
171 | + if src.startswith('gcc-'): |
172 | + if re.match('gcc-\d$', src): |
173 | + for test in ['binutils', 'fglrx-installer', 'libreoffice', 'linux']: |
174 | + try: |
175 | + tests.append((test, self.britney.sources['testing'][test][VERSION])) |
176 | + except KeyError: |
177 | + # no package in that series? *shrug*, then not (mostly for testing) |
178 | + pass |
179 | + return tests |
180 | + else: |
181 | + # for other compilers such as gcc-snapshot etc. we don't need |
182 | + # to trigger anything |
183 | + return [] |
184 | + |
185 | + # for linux themselves we don't want to trigger tests -- these should |
186 | + # all come from linux-meta*. A new kernel ABI without a corresponding |
187 | + # -meta won't be installed and thus we can't sensibly run tests against |
188 | + # it. |
189 | + if src.startswith('linux') and src.replace('linux', 'linux-meta') in self.britney.sources['testing']: |
190 | + return [] |
191 | + |
192 | + srcinfo = sources_info[src] |
193 | + # we want to test the package itself, if it still has a test in |
194 | + # unstable |
195 | + if srcinfo[AUTOPKGTEST] or self.has_autodep8(srcinfo, binaries_info): |
196 | + reported_pkgs.add(src) |
197 | + tests.append((src, ver)) |
198 | + |
199 | + extra_bins = [] |
200 | + # Hack: For new kernels trigger all DKMS packages by pretending that |
201 | + # linux-meta* builds a "dkms" binary as well. With that we ensure that we |
202 | + # don't regress DKMS drivers with new kernel versions. |
203 | + if src.startswith('linux-meta'): |
204 | + # does this have any image on this arch? |
205 | + for b in srcinfo[BINARIES]: |
206 | + p, a = b.split('/', 1) |
207 | + if a == arch and '-image' in p: |
208 | + extra_bins.append('dkms') |
209 | + |
210 | + # plus all direct reverse dependencies of its binaries which have |
211 | + # an autopkgtest |
212 | + for binary in srcinfo[BINARIES] + extra_bins: |
213 | + binary = binary.split('/')[0] # chop off arch |
214 | + try: |
215 | + rdeps = binaries_info[binary][RDEPENDS] |
216 | + except KeyError: |
217 | + self.log_verbose('Ignoring nonexistant binary %s on %s (FTBFS/NBS)?' % (binary, arch)) |
218 | + continue |
219 | + for rdep in rdeps: |
220 | + rdep_src = binaries_info[rdep][SOURCE] |
221 | + # if rdep_src/unstable is known to be not built yet or |
222 | + # uninstallable, try to run tests against testing; if that |
223 | + # works, then the unstable src does not break the testing |
224 | + # rdep_src and is fine |
225 | + if rdep_src in self.excludes: |
226 | + try: |
227 | + rdep_src_info = self.britney.sources['testing'][rdep_src] |
228 | + self.log_verbose('Reverse dependency %s of %s/%s is unbuilt or uninstallable, running test against testing version %s' % |
229 | + (rdep_src, src, ver, rdep_src_info[VERSION])) |
230 | + except KeyError: |
231 | + self.log_verbose('Reverse dependency %s of %s/%s is unbuilt or uninstallable and not present in testing, ignoring' % |
232 | + (rdep_src, src, ver)) |
233 | + continue |
234 | + else: |
235 | + rdep_src_info = sources_info[rdep_src] |
236 | + if rdep_src_info[AUTOPKGTEST] or self.has_autodep8(rdep_src_info, binaries_info): |
237 | + if rdep_src not in reported_pkgs: |
238 | + tests.append((rdep_src, rdep_src_info[VERSION])) |
239 | + reported_pkgs.add(rdep_src) |
240 | + |
241 | + # Hardcode linux-meta → linux, lxc, glibc, systemd triggers until we get a more flexible |
242 | + # implementation: https://bugs.debian.org/779559 |
243 | + if src.startswith('linux-meta'): |
244 | + for pkg in ['lxc', 'glibc', src.replace('linux-meta', 'linux'), 'systemd']: |
245 | + if pkg not in reported_pkgs: |
246 | + # does this have any image on this arch? |
247 | + for b in srcinfo[BINARIES]: |
248 | + p, a = b.split('/', 1) |
249 | + if a == arch and '-image' in p: |
250 | + try: |
251 | + tests.append((pkg, self.britney.sources['unstable'][pkg][VERSION])) |
252 | + except KeyError: |
253 | + try: |
254 | + tests.append((pkg, self.britney.sources['testing'][pkg][VERSION])) |
255 | + except KeyError: |
256 | + # package not in that series? *shrug*, then not |
257 | + pass |
258 | + break |
259 | + |
260 | + tests.sort(key=lambda s_v: s_v[0]) |
261 | + return tests |
262 | + |
263 | + # |
264 | + # AMQP/cloud interface helpers |
265 | + # |
266 | + |
267 | + def read_pending_tests(self): |
268 | + '''Read pending test requests from previous britney runs |
269 | + |
270 | + Read UNSTABLE/autopkgtest/requested.txt with the format: |
271 | + srcpkg srcver triggering-srcpkg triggering-srcver |
272 | + |
273 | + Initialize self.pending_tests with that data. |
274 | + ''' |
275 | + assert self.pending_tests is None, 'already initialized' |
276 | + self.pending_tests = {} |
277 | + if not os.path.exists(self.pending_tests_file): |
278 | + self.log_verbose('No %s, starting with no pending tests' % |
279 | + self.pending_tests_file) |
280 | + return |
281 | + with open(self.pending_tests_file) as f: |
282 | + for l in f: |
283 | + l = l.strip() |
284 | + if not l: |
285 | + continue |
286 | + try: |
287 | + (src, ver, arch, trigsrc, trigver) = l.split() |
288 | + except ValueError: |
289 | + self.log_error('ignoring malformed line in %s: %s' % |
290 | + (self.pending_tests_file, l)) |
291 | + continue |
292 | + self.pending_tests.setdefault(src, {}).setdefault( |
293 | + ver, {}).setdefault(arch, set()).add((trigsrc, trigver)) |
294 | + self.log_verbose('Read pending requested tests from %s: %s' % |
295 | + (self.pending_tests_file, self.pending_tests)) |
296 | + |
297 | + def update_pending_tests(self): |
298 | + '''Update pending tests after submitting requested tests |
299 | + |
300 | + Update UNSTABLE/autopkgtest/requested.txt, see read_pending_tests() for |
301 | + the format. |
302 | + ''' |
303 | + # merge requested_tests into pending_tests |
304 | + for src, verinfo in self.requested_tests.items(): |
305 | + for ver, archinfo in verinfo.items(): |
306 | + for arch, triggers in archinfo.items(): |
307 | + self.pending_tests.setdefault(src, {}).setdefault( |
308 | + ver, {}).setdefault(arch, set()).update(triggers) |
309 | + self.requested_tests = {} |
310 | + |
311 | + # write it |
312 | + with open(self.pending_tests_file + '.new', 'w') as f: |
313 | + for src in sorted(self.pending_tests): |
314 | + for ver in sorted(self.pending_tests[src]): |
315 | + for arch in sorted(self.pending_tests[src][ver]): |
316 | + for (trigsrc, trigver) in sorted(self.pending_tests[src][ver][arch]): |
317 | + f.write('%s %s %s %s %s\n' % (src, ver, arch, trigsrc, trigver)) |
318 | + os.rename(self.pending_tests_file + '.new', self.pending_tests_file) |
319 | + self.log_verbose('Updated pending requested tests in %s' % |
320 | + self.pending_tests_file) |
321 | + |
322 | + def add_test_request(self, src, ver, arch, trigsrc, trigver): |
323 | + '''Add one test request to the local self.requested_tests queue |
324 | + |
325 | + This will only be done if that test wasn't already requested in a |
326 | + previous run (i. e. not already in self.pending_tests) or there already |
327 | + is a result for it. |
328 | + ''' |
329 | + # check for existing results for both the requested and the current |
330 | + # unstable version: test runs might see newly built versions which we |
331 | + # didn't see in britney yet |
332 | + ver_trig_results = self.test_results.get(src, {}).get(arch, [None, {}, None])[1] |
333 | + unstable_ver = self.britney.sources['unstable'][src][VERSION] |
334 | + try: |
335 | + testing_ver = self.britney.sources['testing'][src][VERSION] |
336 | + except KeyError: |
337 | + testing_ver = unstable_ver |
338 | + for result_ver in set([testing_ver, ver, unstable_ver]): |
339 | + # result_ver might be < ver here; that's okay, if we already have a |
340 | + # result for trigsrc/trigver we don't need to re-run it again |
341 | + if result_ver not in ver_trig_results: |
342 | + continue |
343 | + for trigger in ver_trig_results[result_ver]: |
344 | + (tsrc, tver) = trigger.split('/', 1) |
345 | + if tsrc == trigsrc and apt_pkg.version_compare(tver, trigver) >= 0: |
346 | + self.log_verbose('There already is a result for %s/%s/%s triggered by %s/%s' % |
347 | + (src, result_ver, arch, tsrc, tver)) |
348 | + return |
349 | + |
350 | + if (trigsrc, trigver) in self.pending_tests.get(src, {}).get( |
351 | + ver, {}).get(arch, set()): |
352 | + self.log_verbose('test %s/%s/%s for %s/%s is already pending, not queueing' % |
353 | + (src, ver, arch, trigsrc, trigver)) |
354 | + return |
355 | + self.requested_tests.setdefault(src, {}).setdefault( |
356 | + ver, {}).setdefault(arch, set()).add((trigsrc, trigver)) |
357 | + |
358 | + def fetch_swift_results(self, swift_url, src, arch, trigger=None): |
359 | + '''Download new results for source package/arch from swift''' |
360 | + |
361 | + # prepare query: get all runs with a timestamp later than latest_stamp |
362 | + # for this package/arch; '@' is at the end of each run timestamp, to |
363 | + # mark the end of a test run directory path |
364 | + # example: <autopkgtest-wily>wily/amd64/libp/libpng/20150630_054517@/result.tar |
365 | + query = {'delimiter': '@', |
366 | + 'prefix': '%s/%s/%s/%s/' % (self.series, arch, srchash(src), src)} |
367 | + try: |
368 | + query['marker'] = query['prefix'] + self.test_results[src][arch][0] |
369 | + except KeyError: |
370 | + # no stamp yet, download all results |
371 | + pass |
372 | + |
373 | + # request new results from swift |
374 | + url = os.path.join(swift_url, 'autopkgtest-' + self.series) |
375 | + url += '?' + urlencode(query) |
376 | + try: |
377 | + f = urlopen(url) |
378 | + if f.getcode() == 200: |
379 | + result_paths = f.read().decode().strip().splitlines() |
380 | + elif f.getcode() == 204: # No content |
381 | + result_paths = [] |
382 | + else: |
383 | + self.log_error('Failure to fetch swift results from %s: %u' % |
384 | + (url, f.getcode())) |
385 | + f.close() |
386 | + return |
387 | + f.close() |
388 | + except IOError as e: |
389 | + self.log_error('Failure to fetch swift results from %s: %s' % (url, str(e))) |
390 | + return |
391 | + |
392 | + for p in result_paths: |
393 | + self.fetch_one_result( |
394 | + os.path.join(swift_url, 'autopkgtest-' + self.series, p, 'result.tar'), |
395 | + src, arch, trigger) |
396 | + |
397 | + def fetch_one_result(self, url, src, arch, trigger=None): |
398 | + '''Download one result URL for source/arch |
399 | + |
400 | + Remove matching pending_tests entries. If trigger is given (src, ver) |
401 | + it is added to the triggers of that result. |
402 | + ''' |
403 | + try: |
404 | + f = urlopen(url) |
405 | + if f.getcode() == 200: |
406 | + tar_bytes = io.BytesIO(f.read()) |
407 | + f.close() |
408 | + else: |
409 | + self.log_error('Failure to fetch %s: %u' % (url, f.getcode())) |
410 | + return |
411 | + except IOError as e: |
412 | + self.log_error('Failure to fetch %s: %s' % (url, str(e))) |
413 | + return |
414 | + |
415 | + try: |
416 | + with tarfile.open(None, 'r', tar_bytes) as tar: |
417 | + exitcode = int(tar.extractfile('exitcode').read().strip()) |
418 | + srcver = tar.extractfile('testpkg-version').read().decode().strip() |
419 | + (ressrc, ver) = srcver.split() |
420 | + try: |
421 | + testinfo = json.loads(tar.extractfile('testinfo.json').read().decode()) |
422 | + except KeyError: |
423 | + self.log_error('warning: %s does not have a testinfo.json' % url) |
424 | + testinfo = {} |
425 | + except (KeyError, ValueError, tarfile.TarError) as e: |
426 | + self.log_error('%s is damaged, ignoring: %s' % (url, str(e))) |
427 | + # ignore this; this will leave an orphaned request in pending.txt |
428 | + # and thus require manual retries after fixing the tmpfail, but we |
429 | + # can't just blindly attribute it to some pending test. |
430 | + return |
431 | + |
432 | + if src != ressrc: |
433 | + self.log_error('%s is a result for package %s, but expected package %s' % |
434 | + (url, ressrc, src)) |
435 | + return |
436 | + |
437 | + # parse recorded triggers in test result |
438 | + if 'custom_environment' in testinfo: |
439 | + for e in testinfo['custom_environment']: |
440 | + if e.startswith('ADT_TEST_TRIGGERS='): |
441 | + result_triggers = [tuple(i.split('/', 1)) for i in e.split('=', 1)[1].split() if '/' in i] |
442 | + break |
443 | + else: |
444 | + result_triggers = None |
445 | + |
446 | + stamp = os.path.basename(os.path.dirname(url)) |
447 | + # allow some skipped tests, but nothing else |
448 | + passed = exitcode in [0, 2] |
449 | + |
450 | + self.log_verbose('Fetched test result for %s/%s/%s %s (triggers: %s): %s' % ( |
451 | + src, ver, arch, stamp, result_triggers, passed and 'pass' or 'fail')) |
452 | + |
453 | + # remove matching test requests, remember triggers |
454 | + satisfied_triggers = set() |
455 | + for request_map in [self.requested_tests, self.pending_tests]: |
456 | + for pending_ver, pending_archinfo in request_map.get(src, {}).copy().items(): |
457 | + # don't consider newer requested versions |
458 | + if apt_pkg.version_compare(pending_ver, ver) > 0: |
459 | + continue |
460 | + |
461 | + if result_triggers: |
462 | + # explicitly recording/retrieving test triggers is the |
463 | + # preferred (and robust) way of matching results to pending |
464 | + # requests |
465 | + for result_trigger in result_triggers: |
466 | + satisfied_triggers.add(result_trigger) |
467 | + try: |
468 | + request_map[src][pending_ver][arch].remove(result_trigger) |
469 | + self.log_verbose('-> matches pending request %s/%s/%s for trigger %s' % |
470 | + (src, pending_ver, arch, str(result_trigger))) |
471 | + except (KeyError, ValueError): |
472 | + self.log_verbose('-> does not match any pending request for %s/%s/%s' % |
473 | + (src, pending_ver, arch)) |
474 | + else: |
475 | + # ... but we still need to support results without |
476 | + # testinfo.json and recorded triggers until we stop caring about |
477 | + # existing wily and trusty results; match the latest result to all |
478 | + # triggers for src that have at least the requested version |
479 | + try: |
480 | + t = pending_archinfo[arch] |
481 | + self.log_verbose('-> matches pending request %s/%s for triggers %s' % |
482 | + (src, pending_ver, str(t))) |
483 | + satisfied_triggers.update(t) |
484 | + del request_map[src][pending_ver][arch] |
485 | + except KeyError: |
486 | + self.log_verbose('-> does not match any pending request for %s/%s' % |
487 | + (src, pending_ver)) |
488 | + |
489 | + # FIXME: this is a hack that mostly applies to re-running tests |
490 | + # manually without giving a trigger. Tests which don't get |
491 | + # triggered by a particular kernel version are fine with that, so |
492 | + # add some heuristic once we drop the above code. |
493 | + if trigger: |
494 | + satisfied_triggers.add(trigger) |
495 | + |
496 | + # add this result |
497 | + src_arch_results = self.test_results.setdefault(src, {}).setdefault(arch, [stamp, {}, False]) |
498 | + if passed: |
499 | + # update ever_passed field, unless we got triggered from |
500 | + # linux-meta*: we trigger separate per-kernel tests for reverse |
501 | + # test dependencies, and we don't want to track per-trigger |
502 | + # ever_passed. This would be wrong for everything except the |
503 | + # kernel, and the kernel team tracks per-kernel regressions already |
504 | + if not result_triggers or not result_triggers[0][0].startswith('linux-meta'): |
505 | + src_arch_results[2] = True |
506 | + if satisfied_triggers: |
507 | + for trig in satisfied_triggers: |
508 | + src_arch_results[1].setdefault(ver, {})[trig[0] + '/' + trig[1]] = passed |
509 | + else: |
510 | + # this result did not match any triggers? then we are in backwards |
511 | + # compat mode for results without recorded triggers; update all |
512 | + # results |
513 | + for trig in src_arch_results[1].setdefault(ver, {}): |
514 | + src_arch_results[1][ver][trig] = passed |
515 | + # update latest_stamp |
516 | + if stamp > src_arch_results[0]: |
517 | + src_arch_results[0] = stamp |
518 | + |
519 | + def failed_tests_for_trigger(self, trigsrc, trigver): |
520 | + '''Return (src, arch) set for failed tests for given trigger pkg''' |
521 | + |
522 | + result = set() |
523 | + trigger = trigsrc + '/' + trigver |
524 | + for src, srcinfo in self.test_results.items(): |
525 | + for arch, (stamp, vermap, ever_passed) in srcinfo.items(): |
526 | + for ver, trig_results in vermap.items(): |
527 | + if trig_results.get(trigger) is False: |
528 | + result.add((src, arch)) |
529 | + return result |
530 | + |
531 | + # |
532 | + # Public API |
533 | + # |
534 | + |
535 | + def request(self, packages, excludes=None): |
536 | + if excludes: |
537 | + self.excludes.update(excludes) |
538 | + |
539 | + self.log_verbose('Requested autopkgtests for %s, exclusions: %s' % |
540 | + (['%s/%s' % i for i in packages], str(self.excludes))) |
541 | + for src, ver in packages: |
542 | + for arch in self.britney.options.adt_arches.split(): |
543 | + for (testsrc, testver) in self.tests_for_source(src, ver, arch): |
544 | + self.add_test_request(testsrc, testver, arch, src, ver) |
545 | + |
546 | + if self.britney.options.verbose: |
547 | + for src, verinfo in self.requested_tests.items(): |
548 | + for ver, archinfo in verinfo.items(): |
549 | + for arch, triggers in archinfo.items(): |
550 | + self.log_verbose('Requesting %s/%s/%s autopkgtest to verify %s' % |
551 | + (src, ver, arch, ', '.join(['%s/%s' % i for i in triggers]))) |
552 | + |
553 | + def submit(self): |
554 | + |
555 | + def _arches(verinfo): |
556 | + res = set() |
557 | + for archinfo in verinfo.values(): |
558 | + res.update(archinfo.keys()) |
559 | + return res |
560 | + |
561 | + def _trigsources(verinfo, arch): |
562 | + '''Calculate the triggers for a given verinfo map |
563 | + |
564 | + verinfo is ver -> arch -> {(triggering-src1, ver1), ...}, i. e. an |
565 | + entry of self.requested_tests[arch] |
566 | + |
567 | + Return {trigger1, ...}) set. |
568 | + ''' |
569 | + triggers = set() |
570 | + for archinfo in verinfo.values(): |
571 | + for (t, v) in archinfo.get(arch, []): |
572 | + triggers.add(t + '/' + v) |
573 | + return triggers |
574 | + |
575 | + # build per-queue request strings for new test requests |
576 | + # TODO: Once we support version constraints in AMQP requests, add them |
577 | + # arch → (queue_name, [(pkg, params), ...]) |
578 | + arch_queues = {} |
579 | + for arch in self.britney.options.adt_arches.split(): |
580 | + requests = [] |
581 | + for pkg, verinfo in self.requested_tests.items(): |
582 | + if arch in _arches(verinfo): |
583 | + # if a package gets triggered by several sources, we can |
584 | + # run just one test for all triggers; but for proposed |
585 | + # kernels we want to run a separate test for each, so that |
586 | + # the test runs under that particular kernel |
587 | + triggers = _trigsources(verinfo, arch) |
588 | + for t in sorted(triggers): |
589 | + params = {'triggers': [t]} |
590 | + requests.append((pkg, json.dumps(params))) |
591 | + arch_queues[arch] = ('debci-%s-%s' % (self.series, arch), requests) |
592 | + |
593 | + amqp_url = self.britney.options.adt_amqp |
594 | + |
595 | + if amqp_url.startswith('amqp://'): |
596 | + # in production mode, send them out via AMQP |
597 | + with kombu.Connection(amqp_url) as conn: |
598 | + for arch, (queue, requests) in arch_queues.items(): |
599 | + # don't use SimpleQueue here as it always declares queues; |
600 | + # ACLs might not allow that |
601 | + with kombu.Producer(conn, routing_key=queue, auto_declare=False) as p: |
602 | + for (pkg, params) in requests: |
603 | + p.publish(pkg + '\n' + params) |
604 | + elif amqp_url.startswith('file://'): |
605 | + # in testing mode, adt_amqp will be a file:// URL |
606 | + with open(amqp_url[7:], 'a') as f: |
607 | + for arch, (queue, requests) in arch_queues.items(): |
608 | + for (pkg, params) in requests: |
609 | + f.write('%s:%s %s\n' % (queue, pkg, params)) |
610 | + else: |
611 | + self.log_error('Unknown ADT_AMQP schema in %s' % |
612 | + self.britney.options.adt_amqp) |
613 | + |
614 | + # mark them as pending now |
615 | + self.update_pending_tests() |
616 | + |
617 | + def collect_requested(self): |
618 | + '''Update results from swift for all requested packages |
619 | + |
620 | + This is normally redundant with collect(), but avoids actually |
621 | + sending test requests if results are already available. This mostly |
622 | + happens when you have to blow away results.cache and let it rebuild |
623 | + from scratch. |
624 | + ''' |
625 | + for pkg, verinfo in copy.deepcopy(self.requested_tests).items(): |
626 | + for archinfo in verinfo.values(): |
627 | + for arch in archinfo: |
628 | + self.fetch_swift_results(self.britney.options.adt_swift_url, pkg, arch) |
629 | + |
630 | + def collect(self, packages): |
631 | + '''Update results from swift for all pending packages |
632 | + |
633 | + Remove pending tests for which we have results. |
634 | + ''' |
635 | + for pkg, verinfo in copy.deepcopy(self.pending_tests).items(): |
636 | + for archinfo in verinfo.values(): |
637 | + for arch in archinfo: |
638 | + self.fetch_swift_results(self.britney.options.adt_swift_url, pkg, arch) |
639 | + # also update results for excuses whose tests failed, in case a |
640 | + # manual retry worked |
641 | + for (trigpkg, trigver) in packages: |
642 | + for (pkg, arch) in self.failed_tests_for_trigger(trigpkg, trigver): |
643 | + if arch not in self.pending_tests.get(trigpkg, {}).get(trigver, {}): |
644 | + self.log_verbose('Checking for new results for failed %s on %s for trigger %s/%s' % |
645 | + (pkg, arch, trigpkg, trigver)) |
646 | + self.fetch_swift_results(self.britney.options.adt_swift_url, pkg, arch, (trigpkg, trigver)) |
647 | + |
648 | + # update the results cache |
649 | + with open(self.results_cache_file + '.new', 'w') as f: |
650 | + json.dump(self.test_results, f, indent=2) |
651 | + os.rename(self.results_cache_file + '.new', self.results_cache_file) |
652 | + self.log_verbose('Updated results cache') |
653 | + |
654 | + # new results remove pending requests, update the on-disk cache |
655 | + self.update_pending_tests() |
656 | + |
657 | + def results(self, trigsrc, trigver): |
658 | + '''Return test results for triggering package |
659 | + |
660 | + Return (passed, src, ver, arch -> ALWAYSFAIL|PASS|FAIL|RUNNING) |
661 | + iterable for all package tests that got triggered by trigsrc/trigver. |
662 | + ''' |
663 | + # (src, ver) -> arch -> ALWAYSFAIL|PASS|FAIL|RUNNING |
664 | + pkg_arch_result = {} |
665 | + trigger = trigsrc + '/' + trigver |
666 | + |
667 | + for arch in self.britney.options.adt_arches.split(): |
668 | + for testsrc, testver in self.tests_for_source(trigsrc, trigver, arch): |
669 | + passed = True |
670 | + try: |
671 | + (_, ver_map, ever_passed) = self.test_results[testsrc][arch] |
672 | + |
673 | + # check if we have a result for any version of testsrc that |
674 | + # was triggered for trigsrc/trigver; we prefer PASSes, as |
675 | + # an unrelated package upload could break testsrc's tests |
676 | + # at a later point |
677 | + status = None |
678 | + for ver, trigger_results in ver_map.items(): |
679 | + try: |
680 | + status = trigger_results[trigger] |
681 | + testver = ver |
682 | + # if we found a PASS, we can stop searching |
683 | + if status is True: |
684 | + break |
685 | + except KeyError: |
686 | + pass |
687 | + |
688 | + if status is None: |
689 | + # no result? go to "still running" below |
690 | + raise KeyError |
691 | + |
692 | + if status: |
693 | + result = 'PASS' |
694 | + else: |
695 | + # test failed, check ever_passed flag for that src/arch |
696 | + # unless we got triggered from linux-meta*: we trigger |
697 | + # separate per-kernel tests for reverse test |
698 | + # dependencies, and we don't want to track per-trigger |
699 | + # ever_passed. This would be wrong for everything |
700 | + # except the kernel, and the kernel team tracks |
701 | + # per-kernel regressions already |
702 | + if ever_passed and not trigsrc.startswith('linux-meta') and trigsrc != 'linux': |
703 | + result = 'REGRESSION' |
704 | + passed = False |
705 | + else: |
706 | + result = 'ALWAYSFAIL' |
707 | + except KeyError: |
708 | + # no result for testsrc/testver/arch; still running? |
709 | + try: |
710 | + self.pending_tests[testsrc][testver][arch] |
711 | + # if we can't find a result, assume that it has never passed (i.e. this is the first run) |
712 | + (_, _, ever_passed) = self.test_results.get(testsrc, {}).get(arch, (None, None, False)) |
713 | + |
714 | + passed = not ever_passed |
715 | + result = 'RUNNING' |
716 | + except KeyError: |
717 | + # ignore if adt or swift results are disabled, |
718 | + # otherwise this is unexpected |
719 | + if not hasattr(self.britney.options, 'adt_swift_url'): |
720 | + continue |
721 | + # FIXME: Ignore this error for now as it crashes britney, but investigate! |
722 | + self.log_error('FIXME: Result for %s/%s/%s (triggered by %s) is neither known nor pending!' % |
723 | + (testsrc, testver, arch, trigger)) |
724 | + continue |
725 | + |
726 | + pkg_arch_result.setdefault((testsrc, testver), {})[arch] = (result, passed) |
727 | + |
728 | + for ((testsrc, testver), arch_results) in pkg_arch_result.items(): |
729 | + # extract "passed" |
730 | + ps = [p for (_, p) in arch_results.values()] |
731 | + # and then strip them out - these aren't part of our API |
732 | + arch_results = {arch: r for arch, (r, p) in arch_results.items()} |
733 | + # we've passed if all results are True |
734 | + yield (False not in ps, testsrc, testver, arch_results) |
735 | |
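The per-arch classification in `results()` above is the heart of this change: a failure only counts as a regression if the test has ever passed and the trigger is not a kernel package. A minimal standalone sketch of that decision logic (the helper function and its name are illustrative, not part of the branch):

```python
def classify(status, ever_passed, trigsrc):
    """Sketch of the per-arch verdict computed in results() above.

    status: True if this trigger's test run passed, False if it failed.
    Returns (label, counts_as_passed).
    """
    if status:
        return 'PASS', True
    # Kernel triggers run separate per-kernel tests and ever_passed is
    # not tracked per trigger, so kernel-triggered failures are never
    # counted as regressions.
    if ever_passed and not trigsrc.startswith('linux-meta') and trigsrc != 'linux':
        return 'REGRESSION', False
    return 'ALWAYSFAIL', True
```

A source is then promotable only if every (src, arch) verdict has `counts_as_passed` True, which is what the final `False not in ps` yield expresses.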
736 | === added file 'boottest.py' |
737 | --- boottest.py 1970-01-01 00:00:00 +0000 |
738 | +++ boottest.py 2015-11-17 11:05:43 +0000 |
739 | @@ -0,0 +1,293 @@ |
740 | +# -*- coding: utf-8 -*- |
741 | + |
742 | +# Copyright (C) 2015 Canonical Ltd. |
743 | + |
744 | +# This program is free software; you can redistribute it and/or modify |
745 | +# it under the terms of the GNU General Public License as published by |
746 | +# the Free Software Foundation; either version 2 of the License, or |
747 | +# (at your option) any later version. |
748 | + |
749 | +# This program is distributed in the hope that it will be useful, |
750 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
751 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
752 | +# GNU General Public License for more details. |
753 | + |
754 | + |
755 | +from collections import defaultdict |
756 | +from contextlib import closing |
757 | +import os |
758 | +import subprocess |
759 | +import tempfile |
760 | +from textwrap import dedent |
761 | +import time |
762 | +import urllib.request |
763 | + |
764 | +import apt_pkg |
765 | + |
766 | +from consts import BINARIES |
767 | + |
768 | + |
769 | +FETCH_RETRIES = 3 |
770 | + |
771 | + |
772 | +class TouchManifest(object): |
773 | + """Parses a corresponding touch image manifest. |
774 | + |
775 | + Based on http://cdimage.u.c/ubuntu-touch/daily-preinstalled/pending/vivid-preinstalled-touch-armhf.manifest |
776 | + |
777 | + Assumes the deployment is arranged in such a way that the manifest |
778 | + is available and fresh at: |
779 | + |
780 | + '{britney_cwd}/boottest/images/{distribution}/{series}/manifest' |
781 | + |
782 | + Only the binary name matters; the version is ignored, so call sites can: |
783 | + |
784 | + >>> manifest = TouchManifest('ubuntu-touch', 'vivid') |
785 | + >>> 'webbrowser-app' in manifest |
786 | + True |
787 | + >>> 'firefox' in manifest |
788 | + False |
789 | + |
790 | + """ |
791 | + |
792 | + def __init__(self, project, series, verbose=False, fetch=True): |
793 | + self.verbose = verbose |
794 | + self.path = "boottest/images/{}/{}/manifest".format( |
795 | + project, series) |
796 | + success = False |
797 | + if fetch: |
798 | + retries = FETCH_RETRIES |
799 | + success = self.__fetch_manifest(project, series) |
800 | + |
801 | + while retries > 0 and not success: |
802 | + success = self.__fetch_manifest(project, series) |
803 | + retries -= 1 |
804 | + if not success: |
805 | + print("E: [%s] - Unable to fetch manifest: %s %s" % ( |
806 | + time.asctime(), project, series)) |
807 | + |
808 | + self._manifest = self._load() |
809 | + |
810 | + def __fetch_manifest(self, project, series): |
811 | + # There are two url formats that may lead to the proper manifest |
812 | + # file. The first form is for series that have been released, |
813 | + # the second form is for the current development series. |
814 | + # Only one of these is expected to exist for any given series. |
815 | + url_list = [ |
816 | + "http://cdimage.ubuntu.com/{}/{}/daily-preinstalled/pending/" \ |
817 | + "{}-preinstalled-touch-armhf.manifest".format( |
818 | + project, series, series), |
819 | + "http://cdimage.ubuntu.com/{}/daily-preinstalled/pending/" \ |
820 | + "{}-preinstalled-touch-armhf.manifest".format( |
821 | + project, series), |
822 | + ] |
823 | + |
824 | + success = False |
825 | + for url in url_list: |
826 | + if self.verbose: |
827 | + print("I: [%s] - Fetching manifest from %s" % |
828 | + (time.asctime(), url)) |
829 | + print("I: [%s] - saving it to %s" % |
830 | + (time.asctime(), self.path)) |
831 | + try: |
832 | + response = urllib.request.urlopen(url) |
833 | + if response.code == 200: |
834 | + # Only [re]create the manifest file if one was successfully |
835 | + # downloaded. This allows an existing manifest to be used |
836 | + # if the download fails. |
837 | + path_dir = os.path.dirname(self.path) |
838 | + if not os.path.exists(path_dir): |
839 | + os.makedirs(path_dir) |
840 | + with open(self.path, 'wb') as fp: |
841 | + fp.write(response.read()) |
842 | + success = True |
843 | + break |
844 | + |
845 | + except IOError as e: |
846 | + print("W: [%s] - error connecting to %s: %s" % ( |
847 | + time.asctime(), url, e)) |
848 | + |
849 | + return success |
850 | + |
851 | + def _load(self): |
852 | + pkg_list = [] |
853 | + |
854 | + if not os.path.exists(self.path): |
855 | + return pkg_list |
856 | + |
857 | + with open(self.path) as fd: |
858 | + for line in fd.readlines(): |
859 | + # skip headers and metadata |
860 | + if 'DOCTYPE' in line: |
861 | + continue |
862 | + name, version = line.split() |
863 | + name = name.split(':')[0] |
864 | + if name == 'click': |
865 | + continue |
866 | + pkg_list.append(name) |
867 | + |
868 | + return sorted(pkg_list) |
869 | + |
870 | + def __contains__(self, key): |
871 | + return key in self._manifest |
872 | + |
873 | + |
874 | +class BootTest(object): |
875 | + """Boottest criteria for Britney. |
876 | + |
877 | + This class provides an API for handling the boottest-jenkins |
878 | + integration layer (mostly derived from auto-package-testing/adt): |
879 | + """ |
880 | + VALID_STATUSES = ('PASS',) |
881 | + |
882 | + EXCUSE_LABELS = { |
883 | + "PASS": '<span style="background:#87d96c">Pass</span>', |
884 | + "FAIL": '<span style="background:#ff6666">Regression</span>', |
885 | + "RUNNING": '<span style="background:#99ddff">Test in progress</span>', |
886 | + } |
887 | + |
888 | + script_path = os.path.expanduser( |
889 | + "~/auto-package-testing/jenkins/boottest-britney") |
890 | + |
891 | + def __init__(self, britney, distribution, series, debug=False): |
892 | + self.britney = britney |
893 | + self.distribution = distribution |
894 | + self.series = series |
895 | + self.debug = debug |
896 | + self.rc_path = None |
897 | + self._read() |
898 | + manifest_fetch = getattr( |
899 | + self.britney.options, "boottest_fetch", "no") == "yes" |
900 | + self.phone_manifest = TouchManifest( |
901 | + 'ubuntu-touch', self.series, fetch=manifest_fetch, |
902 | + verbose=self.britney.options.verbose) |
903 | + |
904 | + @property |
905 | + def _request_path(self): |
906 | + return "boottest/work/adt.request.%s" % self.series |
907 | + |
908 | + @property |
909 | + def _result_path(self): |
910 | + return "boottest/work/adt.result.%s" % self.series |
911 | + |
912 | + def _ensure_rc_file(self): |
913 | + if self.rc_path: |
914 | + return |
915 | + self.rc_path = os.path.abspath("boottest/rc.%s" % self.series) |
916 | + with open(self.rc_path, "w") as rc_file: |
917 | + home = os.path.expanduser("~") |
918 | + print(dedent("""\ |
919 | + release: %s |
920 | + aptroot: ~/.chdist/%s-proposed-armhf/ |
921 | + apturi: file:%s/mirror/%s |
922 | + components: main restricted universe multiverse |
923 | + rsync_host: rsync://tachash.ubuntu-ci/boottest/ |
924 | + datadir: ~/proposed-migration/boottest/data""" % |
925 | + (self.series, self.series, home, self.distribution)), |
926 | + file=rc_file) |
927 | + |
928 | + def _run(self, *args): |
929 | + self._ensure_rc_file() |
930 | + if not os.path.exists(self.script_path): |
931 | + print("E: [%s] - Boottest/Jenkins glue script missing: %s" % ( |
932 | + time.asctime(), self.script_path)) |
933 | + return '-' |
934 | + command = [ |
935 | + self.script_path, |
936 | + "-c", self.rc_path, |
937 | + "-r", self.series, |
938 | + "-PU", |
939 | + ] |
940 | + if self.debug: |
941 | + command.append("-d") |
942 | + command.extend(args) |
943 | + return subprocess.check_output(command).strip() |
944 | + |
945 | + def _read(self): |
946 | + """Loads a list of results (source tests and their status). |
947 | + |
948 | + Provides internal data for `get_status()`. |
949 | + """ |
950 | + self.pkglist = defaultdict(dict) |
951 | + if not os.path.exists(self._result_path): |
952 | + return |
953 | + with open(self._result_path) as f: |
954 | + for line in f: |
955 | + line = line.strip() |
956 | + if line.startswith("Suite:") or line.startswith("Date:"): |
957 | + continue |
958 | + linebits = line.split() |
959 | + if len(linebits) < 3: |
960 | + print("W: Invalid line format: '%s', skipped" % line) |
961 | + continue |
962 | + (src, ver, status) = linebits[:3] |
963 | + if not (src in self.pkglist and ver in self.pkglist[src]): |
964 | + self.pkglist[src][ver] = status |
965 | + |
966 | + def get_status(self, name, version): |
967 | + """Return test status for the given source name and version.""" |
968 | + try: |
969 | + return self.pkglist[name][version] |
970 | + except KeyError: |
971 | + # This error handling accounts for outdated apt caches, when |
972 | + # `boottest-britney` erroneously reports results for the |
973 | + # current source version, instead of the proposed. |
974 | + # Returning None here will block source promotion with: |
975 | + # 'UNKNOWN STATUS' excuse. If the jobs are retried and their |
976 | + # results find an up-to-date cache, the problem is gone. |
977 | + print("E: [%s] - Missing boottest results for %s_%s" % ( |
978 | + time.asctime(), name, version)) |
979 | + return None |
980 | + |
981 | + def request(self, packages): |
982 | + """Requests boottests for the given sources list ([(src, ver),]).""" |
983 | + request_path = self._request_path |
984 | + if os.path.exists(request_path): |
985 | + os.unlink(request_path) |
986 | + with closing(tempfile.NamedTemporaryFile(mode="w")) as request_file: |
987 | + for src, ver in packages: |
988 | + if src in self.pkglist and ver in self.pkglist[src]: |
989 | + continue |
990 | + print("%s %s" % (src, ver), file=request_file) |
991 | + # Update 'pkglist' so even if submit/collect is not called |
992 | + # (dry-run), britney has some results. |
993 | + self.pkglist[src][ver] = 'RUNNING' |
994 | + request_file.flush() |
995 | + self._run("request", "-O", request_path, request_file.name) |
996 | + |
997 | + def submit(self): |
998 | + """Submits the current boottests requests for processing.""" |
999 | + self._run("submit", self._request_path) |
1000 | + |
1001 | + def collect(self): |
1002 | + """Collects boottests results and updates internal registry.""" |
1003 | + self._run("collect", "-O", self._result_path) |
1004 | + self._read() |
1005 | + if not self.britney.options.verbose: |
1006 | + return |
1007 | + for src in sorted(self.pkglist): |
1008 | + for ver in sorted(self.pkglist[src]): |
1009 | + status = self.pkglist[src][ver] |
1010 | + print("I: [%s] - Collected boottest status for %s_%s: " |
1011 | + "%s" % (time.asctime(), src, ver, status)) |
1012 | + |
1013 | + def needs_test(self, name, version): |
1014 | + """Whether or not the given source and version should be tested. |
1015 | + |
1016 | + Sources are only considered for boottesting if they produce binaries |
1017 | + that are part of the phone image manifest. See `TouchManifest`. |
1018 | + """ |
1019 | + # Discover all binaries for the 'excused' source. |
1020 | + unstable_sources = self.britney.sources['unstable'] |
1021 | + # Dismiss if source is not yet recognized (??). |
1022 | + if name not in unstable_sources: |
1023 | + return False |
1024 | + # Binaries are a seq of "<binname>/<arch>" and, practically, boottest |
1025 | + # is only concerned about armhf binaries mentioned in the phone |
1026 | + # manifest. Anything else should be skipped. |
1027 | + phone_binaries = [ |
1028 | + b for b in unstable_sources[name][BINARIES] |
1029 | + if b.split('/')[1] in self.britney.options.boottest_arches.split() |
1030 | + and b.split('/')[0] in self.phone_manifest |
1031 | + ] |
1032 | + return bool(phone_binaries) |
1033 | |
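`TouchManifest._load()` above keeps only binary package names, discarding versions, `:arch` qualifiers, the `click` entry, and any stray HTML (DOCTYPE) lines from a mistakenly saved error page. A self-contained sketch of that parsing (the helper name is hypothetical, not part of the branch):

```python
def parse_manifest(lines):
    """Extract sorted binary package names from touch-manifest lines (sketch)."""
    pkgs = []
    for line in lines:
        if 'DOCTYPE' in line:  # skip a saved HTML error page
            continue
        name, _version = line.split()
        name = name.split(':')[0]  # drop ":armhf"-style arch qualifiers
        if name == 'click':  # click packages are not Debian binaries
            continue
        pkgs.append(name)
    return sorted(pkgs)
```

`__contains__` then reduces to a membership test on this list, which is all `BootTest.needs_test()` requires.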
1034 | === modified file 'britney.conf' |
1035 | --- britney.conf 2015-10-27 17:32:31 +0000 |
1036 | +++ britney.conf 2015-11-17 11:05:43 +0000 |
1037 | @@ -1,26 +1,25 @@ |
1038 | # Configuration file for britney |
1039 | |
1040 | # Paths for control files |
1041 | -TESTING = /srv/release.debian.org/britney/var/data-b2/testing |
1042 | -TPU = /srv/release.debian.org/britney/var/data-b2/testing-proposed-updates |
1043 | -PU = /srv/release.debian.org/britney/var/data-b2/proposed-updates |
1044 | -UNSTABLE = /srv/release.debian.org/britney/var/data-b2/unstable |
1045 | +TESTING = data/%(SERIES) |
1046 | +UNSTABLE = data/%(SERIES)-proposed |
1047 | +PARTIAL_UNSTABLE = yes |
1048 | |
1049 | # Output |
1050 | -NONINST_STATUS = /srv/release.debian.org/britney/var/data-b2/non-installable-status |
1051 | -EXCUSES_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/excuses.html |
1052 | -EXCUSES_YAML_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/excuses.yaml |
1053 | -UPGRADE_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/output.txt |
1054 | -HEIDI_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/HeidiResult |
1055 | +NONINST_STATUS = data/%(SERIES)/non-installable-status |
1056 | +EXCUSES_OUTPUT = output/%(SERIES)/excuses.html |
1057 | +EXCUSES_YAML_OUTPUT = output/%(SERIES)/excuses.yaml |
1058 | +UPGRADE_OUTPUT = output/%(SERIES)/output.txt |
1059 | +HEIDI_OUTPUT = output/%(SERIES)/HeidiResult |
1060 | |
1061 | # List of release architectures |
1062 | -ARCHITECTURES = i386 amd64 arm64 armel armhf mips mipsel powerpc ppc64el s390x |
1063 | +ARCHITECTURES = amd64 arm64 armhf i386 powerpc ppc64el |
1064 | |
1065 | # if you're not in this list, arch: all packages are allowed to break on you |
1066 | -NOBREAKALL_ARCHES = i386 amd64 |
1067 | +NOBREAKALL_ARCHES = amd64 |
1068 | |
1069 | # if you're in this list, your packages may not stay in sync with the source |
1070 | -FUCKED_ARCHES = |
1071 | +OUTOFSYNC_ARCHES = |
1072 | |
1073 | # if you're in this list, your uninstallability count may increase |
1074 | BREAK_ARCHES = |
1075 | @@ -29,14 +28,15 @@ |
1076 | NEW_ARCHES = |
1077 | |
1078 | # priorities and delays |
1079 | -MINDAYS_LOW = 10 |
1080 | -MINDAYS_MEDIUM = 5 |
1081 | -MINDAYS_HIGH = 2 |
1082 | +MINDAYS_LOW = 0 |
1083 | +MINDAYS_MEDIUM = 0 |
1084 | +MINDAYS_HIGH = 0 |
1085 | MINDAYS_CRITICAL = 0 |
1086 | MINDAYS_EMERGENCY = 0 |
1087 | DEFAULT_URGENCY = medium |
1088 | |
1089 | # hint permissions |
1090 | +<<<<<<< TREE |
1091 | HINTS_ABA = ALL |
1092 | HINTS_PKERN = STANDARD force |
1093 | HINTS_ADSB = STANDARD force force-hint |
1094 | @@ -52,12 +52,49 @@ |
1095 | HINTS_FREEZE-EXCEPTION = unblock unblock-udeb |
1096 | HINTS_SATBRITNEY = easy |
1097 | HINTS_AUTO-REMOVALS = remove |
1098 | +======= |
1099 | +HINTS_CJWATSON = ALL |
1100 | +HINTS_ADCONRAD = ALL |
1101 | +HINTS_KITTERMAN = ALL |
1102 | +HINTS_LANEY = ALL |
1103 | +HINTS_JRIDDELL = ALL |
1104 | +HINTS_STEFANOR = ALL |
1105 | +HINTS_STGRABER = ALL |
1106 | +HINTS_VORLON = ALL |
1107 | +HINTS_PITTI = ALL |
1108 | +HINTS_FREEZE = block block-all |
1109 | + |
1110 | +HINTS_UBUNTU-TOUCH/DIDROCKS = block unblock |
1111 | +HINTS_UBUNTU-TOUCH/EV = block unblock |
1112 | +HINTS_UBUNTU-TOUCH/KEN-VANDINE = block unblock |
1113 | +HINTS_UBUNTU-TOUCH/LOOL = block unblock |
1114 | +HINTS_UBUNTU-TOUCH/MATHIEU-TL = block unblock |
1115 | +HINTS_UBUNTU-TOUCH/OGRA = block unblock |
1116 | +>>>>>>> MERGE-SOURCE |
1117 | |
1118 | # support for old libraries in testing (smooth update) |
1119 | # use ALL to enable smooth updates for all the sections |
1120 | # |
1121 | # naming a non-existent section will effectively disable new smooth |
1122 | # updates but still allow removals to occur |
1123 | +<<<<<<< TREE |
1124 | SMOOTH_UPDATES = libs oldlibs |
1125 | |
1126 | IGNORE_CRUFT = 1 |
1127 | +======= |
1128 | +SMOOTH_UPDATES = badgers |
1129 | + |
1130 | +REMOVE_OBSOLETE = no |
1131 | + |
1132 | +ADT_ENABLE = yes |
1133 | +ADT_DEBUG = no |
1134 | +ADT_ARCHES = amd64 i386 armhf ppc64el |
1135 | +ADT_AMQP = amqp://test_request:password@162.213.33.228 |
1136 | +# Swift base URL with the results (must be publicly readable and browsable) |
1137 | +ADT_SWIFT_URL = https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac |
1138 | + |
1139 | +BOOTTEST_ENABLE = no |
1140 | +BOOTTEST_DEBUG = yes |
1141 | +BOOTTEST_ARCHES = armhf amd64 |
1142 | +BOOTTEST_FETCH = yes |
1143 | +>>>>>>> MERGE-SOURCE |
1144 | |
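The new britney.conf entries above use a `%(SERIES)` placeholder; the britney.py changes in this branch substitute it while parsing the config file. A minimal sketch of that substitution:

```python
def expand_series(value, series):
    """Replace the %(SERIES) placeholder in a conf value (sketch of the
    substitution this branch adds to britney.py's config parser)."""
    if series is not None:
        value = value.replace("%(SERIES)", series)
    return value
```

For example, with `--series vivid`, `TESTING = data/%(SERIES)` resolves to `data/vivid` and `UNSTABLE = data/%(SERIES)-proposed` to `data/vivid-proposed`.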
1145 | === modified file 'britney.py' |
1146 | --- britney.py 2015-11-01 09:09:30 +0000 |
1147 | +++ britney.py 2015-11-17 11:05:43 +0000 |
1148 | @@ -62,6 +62,9 @@ |
1149 | * Hints, which contains lists of commands which modify the standard behaviour |
1150 | of Britney (see Britney.read_hints). |
1151 | |
1152 | + * Blocks, which contains user-supplied blocks read from Launchpad bugs |
1153 | + (see Britney.read_blocks). |
1154 | + |
1155 | For a more detailed explanation about the format of these files, please read |
1156 | the documentation of the related methods. The exact meaning of them will be |
1157 | instead explained in the chapter "Excuses Generation". |
1158 | @@ -204,11 +207,18 @@ |
1159 | read_nuninst, write_nuninst, write_heidi, |
1160 | eval_uninst, newly_uninst, make_migrationitem, |
1161 | write_excuses, write_heidi_delta, write_controlfiles, |
1162 | - old_libraries, is_nuninst_asgood_generous, |
1163 | + old_libraries, is_nuninst_asgood_generous, ensuredir, |
1164 | clone_nuninst) |
1165 | from consts import (VERSION, SECTION, BINARIES, MAINTAINER, FAKESRC, |
1166 | SOURCE, SOURCEVER, ARCHITECTURE, DEPENDS, CONFLICTS, |
1167 | +<<<<<<< TREE |
1168 | PROVIDES, MULTIARCH, ESSENTIAL) |
1169 | +======= |
1170 | + PROVIDES, RDEPENDS, RCONFLICTS, MULTIARCH, ESSENTIAL) |
1171 | +from autopkgtest import AutoPackageTest, srchash |
1172 | +from boottest import BootTest |
1173 | + |
1174 | +>>>>>>> MERGE-SOURCE |
1175 | |
1176 | __author__ = 'Fabio Tranchitella and the Debian Release Team' |
1177 | __version__ = '2.0' |
1178 | @@ -227,7 +237,7 @@ |
1179 | |
1180 | HINTS_HELPERS = ("easy", "hint", "remove", "block", "block-udeb", "unblock", "unblock-udeb", "approve") |
1181 | HINTS_STANDARD = ("urgent", "age-days") + HINTS_HELPERS |
1182 | - HINTS_ALL = ("force", "force-hint", "block-all") + HINTS_STANDARD |
1183 | + HINTS_ALL = ("force", "force-hint", "force-badtest", "force-skiptest", "block-all") + HINTS_STANDARD |
1184 | |
1185 | def __init__(self): |
1186 | """Class constructor |
1187 | @@ -235,8 +245,7 @@ |
1188 | This method initializes and populates the data lists, which contain all |
1189 | the information needed by the other methods of the class. |
1190 | """ |
1191 | - # britney's "day" begins at 3pm |
1192 | - self.date_now = int(((time.time() / (60*60)) - 15) / 24) |
1193 | + self.date_now = int(time.time()) |
1194 | |
1195 | # parse the command line arguments |
1196 | self.__parse_arguments() |
1197 | @@ -264,7 +273,12 @@ |
1198 | if 'testing' not in self.sources: |
1199 | self.sources['testing'] = self.read_sources(self.options.testing) |
1200 | self.sources['unstable'] = self.read_sources(self.options.unstable) |
1201 | - self.sources['tpu'] = self.read_sources(self.options.tpu) |
1202 | + if hasattr(self.options, 'partial_unstable'): |
1203 | + self.merge_sources('testing', 'unstable') |
1204 | + if hasattr(self.options, 'tpu'): |
1205 | + self.sources['tpu'] = self.read_sources(self.options.tpu) |
1206 | + else: |
1207 | + self.sources['tpu'] = {} |
1208 | |
1209 | if hasattr(self.options, 'pu'): |
1210 | self.sources['pu'] = self.read_sources(self.options.pu) |
1211 | @@ -281,7 +295,15 @@ |
1212 | if arch not in self.binaries['testing']: |
1213 | self.binaries['testing'][arch] = self.read_binaries(self.options.testing, "testing", arch) |
1214 | self.binaries['unstable'][arch] = self.read_binaries(self.options.unstable, "unstable", arch) |
1215 | - self.binaries['tpu'][arch] = self.read_binaries(self.options.tpu, "tpu", arch) |
1216 | + if hasattr(self.options, 'partial_unstable'): |
1217 | + self.merge_binaries('testing', 'unstable', arch) |
1218 | + if hasattr(self.options, 'tpu'): |
1219 | + self.binaries['tpu'][arch] = self.read_binaries(self.options.tpu, "tpu", arch) |
1220 | + else: |
1221 | + # _build_installability_tester relies it being |
1222 | + # properly initialised, so insert two empty dicts |
1223 | + # here. |
1224 | + self.binaries['tpu'][arch] = ({}, {}) |
1225 | if hasattr(self.options, 'pu'): |
1226 | self.binaries['pu'][arch] = self.read_binaries(self.options.pu, "pu", arch) |
1227 | else: |
1228 | @@ -330,6 +352,7 @@ |
1229 | # read additional data |
1230 | self.dates = self.read_dates(self.options.testing) |
1231 | self.urgencies = self.read_urgencies(self.options.testing) |
1232 | + self.blocks = self.read_blocks(self.options.unstable) |
1233 | self.excuses = [] |
1234 | self.dependencies = {} |
1235 | |
1236 | @@ -399,6 +422,10 @@ |
1237 | help="do not build the non-installability status, use the cache from file") |
1238 | parser.add_option("", "--print-uninst", action="store_true", dest="print_uninst", default=False, |
1239 | help="just print a summary of uninstallable packages") |
1240 | + parser.add_option("", "--distribution", action="store", dest="distribution", default="ubuntu", |
1241 | + help="set distribution name") |
1242 | + parser.add_option("", "--series", action="store", dest="series", default=None, |
1243 | + help="set distribution series name") |
1244 | (self.options, self.args) = parser.parse_args() |
1245 | |
1246 | # integrity checks |
1247 | @@ -420,6 +447,8 @@ |
1248 | k, v = line.split('=', 1) |
1249 | k = k.strip() |
1250 | v = v.strip() |
1251 | + if self.options.series is not None: |
1252 | + v = v.replace("%(SERIES)", self.options.series) |
1253 | if k.startswith("MINDAYS_"): |
1254 | self.MINDAYS[k.split("_")[1].lower()] = int(v) |
1255 | elif k.startswith("HINTS_"): |
1256 | @@ -433,14 +462,14 @@ |
1257 | self.options.heidi_delta_output = self.options.heidi_output + "Delta" |
1258 | |
1259 | self.options.nobreakall_arches = self.options.nobreakall_arches.split() |
1260 | - self.options.fucked_arches = self.options.fucked_arches.split() |
1261 | + self.options.outofsync_arches = self.options.outofsync_arches.split() |
1262 | self.options.break_arches = self.options.break_arches.split() |
1263 | self.options.new_arches = self.options.new_arches.split() |
1264 | |
1265 | # Sort the architecture list |
1266 | allarches = sorted(self.options.architectures.split()) |
1267 | arches = [x for x in allarches if x in self.options.nobreakall_arches] |
1268 | - arches += [x for x in allarches if x not in arches and x not in self.options.fucked_arches] |
1269 | + arches += [x for x in allarches if x not in arches and x not in self.options.outofsync_arches] |
1270 | arches += [x for x in allarches if x not in arches and x not in self.options.break_arches] |
1271 | arches += [x for x in allarches if x not in arches and x not in self.options.new_arches] |
1272 | arches += [x for x in allarches if x not in arches] |
1273 | @@ -570,8 +599,8 @@ |
1274 | filename = os.path.join(basedir, "Sources") |
1275 | self.__log("Loading source packages from %s" % filename) |
1276 | |
1277 | - with open(filename, encoding='utf-8') as f: |
1278 | - Packages = apt_pkg.TagFile(f) |
1279 | + f = open(filename, encoding='utf-8') |
1280 | + Packages = apt_pkg.TagFile(f) |
1281 | get_field = Packages.section.get |
1282 | step = Packages.step |
1283 | |
1284 | @@ -592,7 +621,9 @@ |
1285 | [], |
1286 | get_field('Maintainer'), |
1287 | False, |
1288 | + get_field('Testsuite', '').startswith('autopkgtest'), |
1289 | ] |
1290 | + f.close() |
1291 | return sources |
1292 | |
1293 | def read_binaries(self, basedir, distribution, arch, intern=sys.intern): |
1294 | @@ -626,8 +657,8 @@ |
1295 | filename = os.path.join(basedir, "Packages_%s" % arch) |
1296 | self.__log("Loading binary packages from %s" % filename) |
1297 | |
1298 | - with open(filename, encoding='utf-8') as f: |
1299 | - Packages = apt_pkg.TagFile(f) |
1300 | + f = open(filename, encoding='utf-8') |
1301 | + Packages = apt_pkg.TagFile(f) |
1302 | get_field = Packages.section.get |
1303 | step = Packages.step |
1304 | |
1305 | @@ -697,7 +728,7 @@ |
1306 | sources[distribution][dpkg[SOURCE]][BINARIES].append(pkgarch) |
1307 | # if the source package doesn't exist, create a fake one |
1308 | else: |
1309 | - sources[distribution][dpkg[SOURCE]] = [dpkg[SOURCEVER], 'faux', [pkgarch], None, True] |
1310 | + sources[distribution][dpkg[SOURCE]] = [dpkg[SOURCEVER], 'faux', [pkgarch], None, True, False] |
1311 | |
1312 | # register virtual packages and real packages that provide them |
1313 | if dpkg[PROVIDES]: |
1314 | @@ -712,9 +743,85 @@ |
1315 | # add the resulting dictionary to the package list |
1316 | packages[pkg] = dpkg |
1317 | |
1318 | +<<<<<<< TREE |
1319 | +======= |
1320 | + f.close() |
1321 | + |
1322 | + # loop again on the list of packages to register reverse dependencies and conflicts |
1323 | + register_reverses(packages, provides, check_doubles=False) |
1324 | + |
1325 | +>>>>>>> MERGE-SOURCE |
1326 | # return a tuple with the list of real and virtual packages |
1327 | return (packages, provides) |
1328 | |
1329 | +<<<<<<< TREE |
1330 | +======= |
1331 | + def merge_sources(self, source, target): |
1332 | + """Merge sources from `source' into partial suite `target'.""" |
1333 | + source_sources = self.sources[source] |
1334 | + target_sources = self.sources[target] |
1335 | + for pkg, value in source_sources.items(): |
1336 | + if pkg in target_sources: |
1337 | + continue |
1338 | + target_sources[pkg] = list(value) |
1339 | + target_sources[pkg][BINARIES] = list( |
1340 | + target_sources[pkg][BINARIES]) |
1341 | + |
1342 | + def merge_binaries(self, source, target, arch): |
1343 | + """Merge `arch' binaries from `source' into partial suite `target'.""" |
1344 | + source_sources = self.sources[source] |
1345 | + source_binaries, _ = self.binaries[source][arch] |
1346 | + target_sources = self.sources[target] |
1347 | + target_binaries, target_provides = self.binaries[target][arch] |
1348 | + oodsrcs = set() |
1349 | + for pkg, value in source_binaries.items(): |
1350 | + if pkg in target_binaries: |
1351 | + continue |
1352 | + |
1353 | + # Don't merge binaries rendered stale by new sources in target |
1354 | + # that have built on this architecture. |
1355 | + if value[SOURCE] not in oodsrcs: |
1356 | + source_version = source_sources[value[SOURCE]][VERSION] |
1357 | + target_version = target_sources[value[SOURCE]][VERSION] |
1358 | + if source_version != target_version: |
1359 | + current_arch = value[ARCHITECTURE] |
1360 | + built = False |
1361 | + for b in target_sources[value[SOURCE]][BINARIES]: |
1362 | + binpkg, binarch = b.split('/') |
1363 | + if binarch == arch: |
1364 | + target_value = target_binaries[binpkg] |
1365 | + if current_arch in ( |
1366 | + target_value[ARCHITECTURE], "all"): |
1367 | + built = True |
1368 | + break |
1369 | + if built: |
1370 | + continue |
1371 | + oodsrcs.add(value[SOURCE]) |
1372 | + |
1373 | + if pkg in target_binaries: |
1374 | + for p in target_binaries[pkg][PROVIDES]: |
1375 | + target_provides[p].remove(pkg) |
1376 | + if not target_provides[p]: |
1377 | + del target_provides[p] |
1378 | + |
1379 | + target_binaries[pkg] = value |
1380 | + |
1381 | + pkg_arch = pkg + "/" + arch |
1382 | + if pkg_arch not in target_sources[value[SOURCE]][BINARIES]: |
1383 | + target_sources[value[SOURCE]][BINARIES].append(pkg_arch) |
1384 | + |
1385 | + for p in value[PROVIDES]: |
1386 | + if p not in target_provides: |
1387 | + target_provides[p] = [] |
1388 | + target_provides[p].append(pkg) |
1389 | + |
1390 | + for pkg, value in target_binaries.items(): |
1391 | + value[RDEPENDS] = [] |
1392 | + value[RCONFLICTS] = [] |
1393 | + register_reverses( |
1394 | + target_binaries, target_provides, check_doubles=False) |
1395 | + |
1396 | +>>>>>>> MERGE-SOURCE |
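The hunk above merges not-yet-built binaries from a source suite into the target, skipping binaries rendered stale by a newer source that has already built on the architecture (the `oodsrcs` set). A minimal standalone sketch of that staleness check — the data layout here is simplified and hypothetical, flattening britney's tuple-index records into plain dicts:

```python
def is_stale(source_ver, target_ver, built_on_arch):
    """A binary is stale when its source has a newer version in the target
    suite and that newer source already built a binary on this architecture."""
    return source_ver != target_ver and built_on_arch

def merge_candidates(binaries, src_of, source_vers, target_vers,
                     target_built, arch):
    """Return the binaries that may still be copied into the target suite."""
    keep = []
    for pkg in binaries:
        src = src_of[pkg]
        if is_stale(source_vers[src], target_vers[src],
                    arch in target_built.get(src, ())):
            continue  # mirrors the 'oodsrcs' skip in the hunk above
        keep.append(pkg)
    return sorted(keep)
```

The real code additionally rewires the Provides table and reverse dependencies after copying; this sketch only covers the selection step.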
1397 | def read_bugs(self, basedir): |
1398 | """Read the release critical bug summary from the specified directory |
1399 | |
1400 | @@ -730,13 +837,16 @@ |
1401 | bugs = defaultdict(list) |
1402 | filename = os.path.join(basedir, "BugsV") |
1403 | self.__log("Loading RC bugs data from %s" % filename) |
1404 | - for line in open(filename, encoding='ascii'): |
1405 | - l = line.split() |
1406 | - if len(l) != 2: |
1407 | - self.__log("Malformed line found in line %s" % (line), type='W') |
1408 | - continue |
1409 | - pkg = l[0] |
1410 | - bugs[pkg] += l[1].split(",") |
1411 | + try: |
1412 | + for line in open(filename, encoding='ascii'): |
1413 | + l = line.split() |
1414 | + if len(l) != 2: |
1415 | + self.__log("Malformed line found in line %s" % (line), type='W') |
1416 | + continue |
1417 | + pkg = l[0] |
1418 | + bugs[pkg] += l[1].split(",") |
1419 | + except IOError: |
1420 | + self.__log("%s missing; skipping bug-based processing" % filename) |
1421 | return bugs |
1422 | |
1423 | def __maxver(self, pkg, dist): |
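The new try/except around the BugsV read (and the analogous guards added to the Dates and Urgency readers below) lets britney start from an empty state when an input file is missing, instead of crashing on the first run. A hedged sketch of the same pattern, with a hypothetical helper name:

```python
from collections import defaultdict

def read_bug_file(filename):
    """Parse '<pkg> <bug1,bug2,...>' lines; tolerate a missing file."""
    bugs = defaultdict(list)
    try:
        with open(filename, encoding='ascii') as f:
            for line in f:
                fields = line.split()
                if len(fields) != 2:
                    continue  # malformed line; britney logs a warning here
                bugs[fields[0]] += fields[1].split(",")
    except IOError:
        pass  # missing file: skip bug-based processing entirely
    return bugs
```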
1424 | @@ -792,7 +902,8 @@ |
1425 | |
1426 | <package-name> <version> <date-of-upload> |
1427 | |
1428 | - The dates are expressed as the number of days from 1970-01-01. |
1429 | + The dates are expressed as the number of seconds from the Unix epoch |
1430 | + (1970-01-01 00:00:00 UTC). |
1431 | |
1432 | The method returns a dictionary where the key is the binary package |
1433 | name and the value is a tuple with two items, the version and the date. |
1434 | @@ -800,13 +911,17 @@ |
1435 | dates = {} |
1436 | filename = os.path.join(basedir, "Dates") |
1437 | self.__log("Loading upload data from %s" % filename) |
1438 | - for line in open(filename, encoding='ascii'): |
1439 | - l = line.split() |
1440 | - if len(l) != 3: continue |
1441 | - try: |
1442 | - dates[l[0]] = (l[1], int(l[2])) |
1443 | - except ValueError: |
1444 | - self.__log("Dates, unable to parse \"%s\"" % line, type="E") |
1445 | + try: |
1446 | + for line in open(filename, encoding='ascii'): |
1447 | + l = line.split() |
1448 | + if len(l) != 3: continue |
1449 | + try: |
1450 | + dates[l[0]] = (l[1], int(l[2])) |
1451 | + except ValueError: |
1452 | + self.__log("Dates, unable to parse \"%s\"" % line, type="E") |
1453 | + except IOError: |
1454 | + self.__log("%s missing; initialising upload data from scratch" % |
1455 | + filename) |
1456 | return dates |
1457 | |
1458 | def write_dates(self, basedir, dates): |
1459 | @@ -817,6 +932,7 @@ |
1460 | """ |
1461 | filename = os.path.join(basedir, "Dates") |
1462 | self.__log("Writing upload data to %s" % filename) |
1463 | + ensuredir(os.path.dirname(filename)) |
1464 | with open(filename, 'w', encoding='utf-8') as f: |
1465 | for pkg in sorted(dates): |
1466 | f.write("%s %s %d\n" % ((pkg,) + dates[pkg])) |
1467 | @@ -839,31 +955,34 @@ |
1468 | urgencies = {} |
1469 | filename = os.path.join(basedir, "Urgency") |
1470 | self.__log("Loading upload urgencies from %s" % filename) |
1471 | - for line in open(filename, errors='surrogateescape', encoding='ascii'): |
1472 | - l = line.split() |
1473 | - if len(l) != 3: continue |
1474 | - |
1475 | - # read the minimum days associated with the urgencies |
1476 | - urgency_old = urgencies.get(l[0], None) |
1477 | - mindays_old = self.MINDAYS.get(urgency_old, 1000) |
1478 | - mindays_new = self.MINDAYS.get(l[2], self.MINDAYS[self.options.default_urgency]) |
1479 | - |
1480 | - # if the new urgency is lower (so the min days are higher), do nothing |
1481 | - if mindays_old <= mindays_new: |
1482 | - continue |
1483 | - |
1484 | - # if the package exists in testing and it is more recent, do nothing |
1485 | - tsrcv = self.sources['testing'].get(l[0], None) |
1486 | - if tsrcv and apt_pkg.version_compare(tsrcv[VERSION], l[1]) >= 0: |
1487 | - continue |
1488 | - |
1489 | - # if the package doesn't exist in unstable or it is older, do nothing |
1490 | - usrcv = self.sources['unstable'].get(l[0], None) |
1491 | - if not usrcv or apt_pkg.version_compare(usrcv[VERSION], l[1]) < 0: |
1492 | - continue |
1493 | - |
1494 | - # update the urgency for the package |
1495 | - urgencies[l[0]] = l[2] |
1496 | + try: |
1497 | + for line in open(filename, errors='surrogateescape', encoding='ascii'): |
1498 | + l = line.split() |
1499 | + if len(l) != 3: continue |
1500 | + |
1501 | + # read the minimum days associated with the urgencies |
1502 | + urgency_old = urgencies.get(l[0], None) |
1503 | + mindays_old = self.MINDAYS.get(urgency_old, 1000) |
1504 | + mindays_new = self.MINDAYS.get(l[2], self.MINDAYS[self.options.default_urgency]) |
1505 | + |
1506 | + # if the new urgency is lower (so the min days are higher), do nothing |
1507 | + if mindays_old <= mindays_new: |
1508 | + continue |
1509 | + |
1510 | + # if the package exists in testing and it is more recent, do nothing |
1511 | + tsrcv = self.sources['testing'].get(l[0], None) |
1512 | + if tsrcv and apt_pkg.version_compare(tsrcv[VERSION], l[1]) >= 0: |
1513 | + continue |
1514 | + |
1515 | + # if the package doesn't exist in unstable or it is older, do nothing |
1516 | + usrcv = self.sources['unstable'].get(l[0], None) |
1517 | + if not usrcv or apt_pkg.version_compare(usrcv[VERSION], l[1]) < 0: |
1518 | + continue |
1519 | + |
1520 | + # update the urgency for the package |
1521 | + urgencies[l[0]] = l[2] |
1522 | + except IOError: |
1523 | + self.__log("%s missing; using default for all packages" % filename) |
1524 | |
1525 | return urgencies |
1526 | |
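The Urgency reader keeps, per source, whichever urgency implies the shorter wait (lower minimum days), and only for versions still in flight between testing and unstable. The core reduction can be sketched as follows — the MINDAYS table here is illustrative; the real values come from britney.conf:

```python
# Assumed urgency -> minimum-days table (real values are configured).
MINDAYS = {'low': 10, 'medium': 5, 'high': 2, 'critical': 0, 'emergency': 0}
DEFAULT_URGENCY = 'medium'

def reduce_urgency(current, new):
    """Return whichever urgency implies the shorter wait in unstable."""
    old_days = MINDAYS.get(current, 1000)  # unknown/absent -> effectively infinite
    new_days = MINDAYS.get(new, MINDAYS[DEFAULT_URGENCY])
    # Matches 'if mindays_old <= mindays_new: continue' above:
    # only a strictly more urgent upload replaces the recorded urgency.
    return new if new_days < old_days else current
```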
1527 | @@ -912,7 +1031,7 @@ |
1528 | elif len(l) == 1: |
1529 | # All current hints require at least one argument |
1530 | self.__log("Malformed hint found in %s: '%s'" % (filename, line), type="W") |
1531 | - elif l[0] in ["approve", "block", "block-all", "block-udeb", "unblock", "unblock-udeb", "force", "urgent", "remove"]: |
1532 | + elif l[0] in ["approve", "block", "block-all", "block-udeb", "unblock", "unblock-udeb", "force", "force-badtest", "force-skiptest", "urgent", "remove"]: |
1533 | if l[0] == 'approve': l[0] = 'unblock' |
1534 | for package in l[1:]: |
1535 | hints.add_hint('%s %s' % (l[0], package), who) |
1536 | @@ -922,7 +1041,7 @@ |
1537 | else: |
1538 | hints.add_hint(l, who) |
1539 | |
1540 | - for x in ["block", "block-all", "block-udeb", "unblock", "unblock-udeb", "force", "urgent", "remove", "age-days"]: |
1541 | + for x in ["block", "block-all", "block-udeb", "unblock", "unblock-udeb", "force", "force-badtest", "force-skiptest", "urgent", "remove", "age-days"]: |
1542 | z = {} |
1543 | for hint in hints[x]: |
1544 | package = hint.package |
1545 | @@ -954,6 +1073,32 @@ |
1546 | |
1547 | return hints |
1548 | |
1549 | + def read_blocks(self, basedir): |
1550 | + """Read user-supplied blocks from the specified directory. |
1551 | + |
1552 | + The file contains rows with the format: |
1553 | + |
1554 | + <source-name> <bug> <date> |
1555 | + |
1556 | + The dates are expressed as the number of seconds from the Unix epoch |
1557 | + (1970-01-01 00:00:00 UTC). |
1558 | + |
1559 | + The method returns a dictionary where the key is the source package |
1560 | + name and the value is a list of (bug, date) tuples. |
1561 | + """ |
1562 | + blocks = {} |
1563 | + filename = os.path.join(basedir, "Blocks") |
1564 | + self.__log("Loading user-supplied block data from %s" % filename) |
1565 | + for line in open(filename): |
1566 | + l = line.split() |
1567 | + if len(l) != 3: continue |
1568 | + try: |
1569 | + blocks.setdefault(l[0], []) |
1570 | + blocks[l[0]].append((l[1], int(l[2]))) |
1571 | + except ValueError: |
1572 | + self.__log("Blocks, unable to parse \"%s\"" % line, type="E") |
1573 | + return blocks |
1574 | + |
1575 | |
1576 | # Utility methods for package analysis |
1577 | # ------------------------------------ |
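`read_blocks` above introduces a second, user-supplied block mechanism keyed by Launchpad bug. A small sketch of the parse it performs (the function name here is hypothetical; error handling is slightly simplified relative to the original):

```python
def parse_blocks(lines):
    """Parse '<source> <bug> <epoch-seconds>' rows into {source: [(bug, date)]}."""
    blocks = {}
    for line in lines:
        fields = line.split()
        if len(fields) != 3:
            continue
        try:
            entry = (fields[1], int(fields[2]))
        except ValueError:
            continue  # britney logs a parse error here
        blocks.setdefault(fields[0], []).append(entry)
    return blocks
```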
1578 | @@ -1018,11 +1163,34 @@ |
1579 | parse_depends = apt_pkg.parse_depends |
1580 | get_dependency_solvers = self.get_dependency_solvers |
1581 | |
1582 | + # make linux* wait on corresponding -meta package |
1583 | + if src.startswith('linux'): |
1584 | + meta = src.replace('linux', 'linux-meta', 1) |
1585 | + if meta in self.sources[suite]: |
1586 | + # copy binary_u here, don't modify self.binaries! |
1587 | + if binary_u[DEPENDS]: |
1588 | + binary_u[DEPENDS] = binary_u[DEPENDS] + ', ' |
1589 | + else: |
1590 | + binary_u[DEPENDS] = '' |
1591 | + # find binary of our architecture |
1592 | + binpkg = None |
1593 | + for b in self.sources[suite][meta][BINARIES]: |
1594 | + pkg, a = b.split('/') |
1595 | + if a == arch: |
1596 | + binpkg = pkg |
1597 | + break |
1598 | + if binpkg: |
1599 | + binver = self.binaries[suite][arch][0][binpkg][SOURCEVER] |
1600 | + binary_u[DEPENDS] += '%s (>= %s)' % (binpkg, binver) |
1601 | + self.__log('Synthesizing dependency %s to %s: %s' % (meta, src, binary_u[DEPENDS])) |
1602 | + |
1603 | # analyze the dependency fields (if present) |
1604 | if not binary_u[DEPENDS]: |
1605 | - return |
1606 | + return True |
1607 | deps = binary_u[DEPENDS] |
1608 | |
1609 | + all_satisfiable = True |
1610 | + |
1611 | # for every dependency block (formed as conjunction of disjunction) |
1612 | for block, block_txt in zip(parse_depends(deps, False), deps.split(',')): |
1613 | # if the block is satisfied in testing, then skip the block |
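The linux/linux-meta coupling above synthesizes a versioned dependency so a kernel source cannot migrate ahead of its corresponding -meta package. The string manipulation, sketched with illustrative package names and versions:

```python
def meta_source(src):
    """Map a kernel source to its -meta counterpart, e.g.
    'linux' -> 'linux-meta', 'linux-lts-wily' -> 'linux-meta-lts-wily'."""
    return src.replace('linux', 'linux-meta', 1)

def add_meta_depends(existing_depends, meta_binary, meta_source_ver):
    """Append a synthesized 'meta-binary (>= version)' clause to a
    (possibly empty) Depends string, as the hunk above does on a copy."""
    deps = existing_depends + ', ' if existing_depends else ''
    return deps + '%s (>= %s)' % (meta_binary, meta_source_ver)
```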
1614 | @@ -1046,6 +1214,7 @@ |
1615 | if not packages: |
1616 | excuse.addhtml("%s/%s unsatisfiable Depends: %s" % (pkg, arch, block_txt.strip())) |
1617 | excuse.addreason("depends") |
1618 | + all_satisfiable = False |
1619 | continue |
1620 | |
1621 | # for the solving packages, update the excuse to add the dependencies |
1622 | @@ -1058,6 +1227,8 @@ |
1623 | else: |
1624 | excuse.add_break_dep(p, arch) |
1625 | |
1626 | + return all_satisfiable |
1627 | + |
1628 | # Package analysis methods |
1629 | # ------------------------ |
1630 | |
1631 | @@ -1080,6 +1251,7 @@ |
1632 | excuse = Excuse("-" + pkg) |
1633 | excuse.addhtml("Package not in unstable, will try to remove") |
1634 | excuse.addreason("remove") |
1635 | + excuse.set_distribution(self.options.distribution) |
1636 | excuse.set_vers(src[VERSION], None) |
1637 | src[MAINTAINER] and excuse.set_maint(src[MAINTAINER].strip()) |
1638 | src[SECTION] and excuse.set_section(src[SECTION].strip()) |
1639 | @@ -1087,7 +1259,7 @@ |
1640 | # if the package is blocked, skip it |
1641 | for hint in self.hints.search('block', package=pkg, removal=True): |
1642 | excuse.addhtml("Not touching package, as requested by %s " |
1643 | - "(check https://release.debian.org/jessie/freeze_policy.html if update is needed)" % hint.user) |
1644 | + "(contact #ubuntu-release if update is needed)" % hint.user) |
1645 | excuse.addhtml("Not considered") |
1646 | excuse.addreason("block") |
1647 | self.excuses.append(excuse) |
1648 | @@ -1117,6 +1289,7 @@ |
1649 | # build the common part of the excuse, which will be filled by the code below |
1650 | ref = "%s/%s%s" % (src, arch, suite != 'unstable' and "_" + suite or "") |
1651 | excuse = Excuse(ref) |
1652 | + excuse.set_distribution(self.options.distribution) |
1653 | excuse.set_vers(source_t[VERSION], source_t[VERSION]) |
1654 | source_u[MAINTAINER] and excuse.set_maint(source_u[MAINTAINER].strip()) |
1655 | source_u[SECTION] and excuse.set_section(source_u[SECTION].strip()) |
1656 | @@ -1136,6 +1309,7 @@ |
1657 | # the starting point is that there is nothing wrong and nothing worth doing |
1658 | anywrongver = False |
1659 | anyworthdoing = False |
1660 | + unsat_deps = False |
1661 | |
1662 | # for every binary package produced by this source in unstable for this architecture |
1663 | for pkg in sorted(filter(lambda x: x.endswith("/" + arch), source_u[BINARIES]), key=lambda x: x.split("/")[0]): |
1664 | @@ -1143,6 +1317,9 @@ |
1665 | |
1666 | # retrieve the testing (if present) and unstable corresponding binary packages |
1667 | binary_t = pkg in source_t[BINARIES] and self.binaries['testing'][arch][0][pkg_name] or None |
1668 | + if hasattr(self.options, 'partial_unstable') and binary_t is not None and binary_t[ARCHITECTURE] == 'all' and pkg_name not in self.binaries[suite][arch][0]: |
1669 | + excuse.addhtml("Ignoring %s %s (from %s) as it is arch: all and not yet built in unstable" % (pkg_name, binary_t[VERSION], binary_t[SOURCEVER])) |
1670 | + continue |
1671 | binary_u = self.binaries[suite][arch][0][pkg_name] |
1672 | |
1673 | # this is the source version for the new binary package |
1674 | @@ -1155,6 +1332,7 @@ |
1675 | |
1676 | # if the new binary package is not from the same source as the testing one, then skip it |
1677 | # this implies that this binary migration is part of a source migration |
1678 | +<<<<<<< TREE |
1679 | if same_source(source_u[VERSION], pkgsv) and not same_source(source_t[VERSION], pkgsv): |
1680 | anywrongver = True |
1681 | excuse.addhtml("From wrong source: %s %s (%s not %s)" % (pkg_name, binary_u[VERSION], pkgsv, source_t[VERSION])) |
1682 | @@ -1168,6 +1346,13 @@ |
1683 | anywrongver = True |
1684 | excuse.addhtml("Old cruft: %s %s" % (pkg_name, pkgsv)) |
1685 | continue |
1686 | +======= |
1687 | + if not same_source(source_t[VERSION], pkgsv): |
1688 | + if binary_t is None or binary_t[VERSION] != binary_u[VERSION]: |
1689 | + anywrongver = True |
1690 | + excuse.addhtml("From wrong source: %s %s (%s not %s)" % (pkg_name, binary_u[VERSION], pkgsv, source_t[VERSION])) |
1691 | + break |
1692 | +>>>>>>> MERGE-SOURCE |
1693 | |
1694 | # if the source package has been updated in unstable and this is a binary migration, skip it |
1695 | # (the binaries are now out-of-date) |
1696 | @@ -1177,7 +1362,8 @@ |
1697 | continue |
1698 | |
1699 | # find unsatisfied dependencies for the new binary package |
1700 | - self.excuse_unsat_deps(pkg_name, src, arch, suite, excuse) |
1701 | + if not self.excuse_unsat_deps(pkg_name, src, arch, suite, excuse): |
1702 | + unsat_deps = True |
1703 | |
1704 | # if the binary is not present in testing, then it is a new binary; |
1705 | # in this case, there is something worth doing |
1706 | @@ -1237,7 +1423,7 @@ |
1707 | anyworthdoing = True |
1708 | |
1709 | # if there is nothing wrong and there is something worth doing, this is a valid candidate |
1710 | - if not anywrongver and anyworthdoing: |
1711 | + if not anywrongver and not unsat_deps and anyworthdoing: |
1712 | excuse.is_valid = True |
1713 | self.excuses.append(excuse) |
1714 | return True |
1715 | @@ -1276,12 +1462,15 @@ |
1716 | # build the common part of the excuse, which will be filled by the code below |
1717 | ref = "%s%s" % (src, suite != 'unstable' and "_" + suite or "") |
1718 | excuse = Excuse(ref) |
1719 | + excuse.set_distribution(self.options.distribution) |
1720 | excuse.set_vers(source_t and source_t[VERSION] or None, source_u[VERSION]) |
1721 | source_u[MAINTAINER] and excuse.set_maint(source_u[MAINTAINER].strip()) |
1722 | source_u[SECTION] and excuse.set_section(source_u[SECTION].strip()) |
1723 | |
1724 | - # the starting point is that we will update the candidate |
1725 | + # the starting point is that we will update the candidate and run autopkgtests |
1726 | update_candidate = True |
1727 | + run_autopkgtest = True |
1728 | + run_boottest = True |
1729 | |
1730 | # if the version in unstable is older, then stop here with a warning in the excuse and return False |
1731 | if source_t and apt_pkg.version_compare(source_u[VERSION], source_t[VERSION]) < 0: |
1732 | @@ -1294,6 +1483,8 @@ |
1733 | if source_u[FAKESRC]: |
1734 | excuse.addhtml("%s source package doesn't exist" % (src)) |
1735 | update_candidate = False |
1736 | + run_autopkgtest = False |
1737 | + run_boottest = False |
1738 | |
1739 | # retrieve the urgency for the upload, ignoring it if this is a NEW package (not present in testing) |
1740 | urgency = self.urgencies.get(src, self.options.default_urgency) |
1741 | @@ -1311,6 +1502,8 @@ |
1742 | excuse.addhtml("Trying to remove package, not update it") |
1743 | excuse.addreason("remove") |
1744 | update_candidate = False |
1745 | + run_autopkgtest = False |
1746 | + run_boottest = False |
1747 | |
1748 | # check if there is a `block' or `block-udeb' hint for this package, or a `block-all source' hint |
1749 | blocked = {} |
1750 | @@ -1346,7 +1539,7 @@ |
1751 | excuse.addhtml("%s request by %s ignored due to version mismatch: %s" % |
1752 | (unblock_cmd.capitalize(), unblocks[0].user, unblocks[0].version)) |
1753 | if suite == 'unstable' or block_cmd == 'block-udeb': |
1754 | - tooltip = "check https://release.debian.org/jessie/freeze_policy.html if update is needed" |
1755 | + tooltip = "contact #ubuntu-release if update is needed" |
1756 | # redirect people to d-i RM for udeb things: |
1757 | if block_cmd == 'block-udeb': |
1758 | tooltip = "please contact the d-i release manager if an update is needed" |
1759 | @@ -1358,6 +1551,13 @@ |
1760 | excuse.addreason("block") |
1761 | update_candidate = False |
1762 | |
1763 | + if src in self.blocks: |
1764 | + for user_block in self.blocks[src]: |
1765 | + excuse.addhtml("Not touching package as requested in <a href=\"https://launchpad.net/bugs/%s\">bug %s</a> on %s" % |
1766 | + (user_block[0], user_block[0], time.asctime(time.gmtime(user_block[1])))) |
1767 | + excuse.addreason("block") |
1768 | + update_candidate = False |
1769 | + |
1770 | # if the suite is unstable, then we have to check the urgency and the minimum days of |
1771 | # permanence in unstable before updating testing; if the source package is too young, |
1772 | # the check fails and we set update_candidate to False to block the update; consider |
1773 | @@ -1368,7 +1568,7 @@ |
1774 | elif not same_source(self.dates[src][0], source_u[VERSION]): |
1775 | self.dates[src] = (source_u[VERSION], self.date_now) |
1776 | |
1777 | - days_old = self.date_now - self.dates[src][1] |
1778 | + days_old = (self.date_now - self.dates[src][1]) / 60 / 60 / 24 |
1779 | min_days = self.MINDAYS[urgency] |
1780 | |
1781 | for age_days_hint in [ x for x in self.hints.search('age-days', package=src) if \ |
1782 | @@ -1385,6 +1585,8 @@ |
1783 | excuse.addhtml("Too young, but urgency pushed by %s" % (urgent_hints[0].user)) |
1784 | else: |
1785 | update_candidate = False |
1786 | + run_autopkgtest = False |
1787 | + run_boottest = False |
1788 | excuse.addreason("age") |
1789 | |
1790 | if suite in ['pu', 'tpu']: |
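Because Dates now stores seconds since the Unix epoch rather than days (see the docstring change above), the age check divides by 86400 before comparing against the urgency's minimum days. A one-line helper makes the unit conversion explicit:

```python
def days_old(now, uploaded):
    """Age of an upload in days; both arguments are Unix timestamps in seconds."""
    return (now - uploaded) / 60 / 60 / 24
```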
1791 | @@ -1412,12 +1614,16 @@ |
1792 | base = 'testing' |
1793 | else: |
1794 | base = 'stable' |
1795 | - text = "Not yet built on <a href=\"http://buildd.debian.org/status/logs.php?arch=%s&pkg=%s&ver=%s&suite=%s\" target=\"_blank\">%s</a> (relative to testing)" % (quote(arch), quote(src), quote(source_u[VERSION]), base, arch) |
1796 | + text = "Not yet built on <a href=\"https://launchpad.net/%s/+source/%s/%s/+latestbuild/%s\" target=\"_blank\">%s</a> (relative to testing)" % (self.options.distribution, quote(src.split("/")[0]), quote(source_u[VERSION]), quote(arch), arch) |
1797 | |
1798 | - if arch in self.options.fucked_arches: |
1799 | + if arch in self.options.outofsync_arches: |
1800 | text = text + " (but %s isn't keeping up, so never mind)" % (arch) |
1801 | else: |
1802 | update_candidate = False |
1803 | + if arch in self.options.adt_arches.split(): |
1804 | + run_autopkgtest = False |
1805 | + if arch in self.options.boottest_arches.split(): |
1806 | + run_boottest = False |
1807 | excuse.addreason("arch") |
1808 | excuse.addreason("arch-%s" % arch) |
1809 | excuse.addreason("build-arch") |
1810 | @@ -1428,6 +1634,7 @@ |
1811 | # at this point, we check the status of the builds on all the supported architectures |
1812 | # to catch the out-of-date ones |
1813 | pkgs = {src: ["source"]} |
1814 | + built_anywhere = False |
1815 | for arch in self.options.architectures: |
1816 | oodbins = {} |
1817 | uptodatebins = False |
1818 | @@ -1449,38 +1656,58 @@ |
1819 | oodbins[pkgsv].append(pkg) |
1820 | continue |
1821 | else: |
1822 | +<<<<<<< TREE |
1823 | # if the binary is arch all, it doesn't count as |
1824 | # up-to-date for this arch |
1825 | if binary_u[ARCHITECTURE] == arch: |
1826 | uptodatebins = True |
1827 | +======= |
1828 | + uptodatebins = True |
1829 | + built_anywhere = True |
1830 | +>>>>>>> MERGE-SOURCE |
1831 | |
1832 | # if the package is architecture-dependent or the current arch is `nobreakall' |
1833 | # find unsatisfied dependencies for the binary package |
1834 | if binary_u[ARCHITECTURE] != 'all' or arch in self.options.nobreakall_arches: |
1835 | - self.excuse_unsat_deps(pkg, src, arch, suite, excuse) |
1836 | + if not self.excuse_unsat_deps(pkg, src, arch, suite, excuse): |
1837 | + update_candidate = False |
1838 | + if arch in self.options.adt_arches.split(): |
1839 | + run_autopkgtest = False |
1840 | + if arch in self.options.boottest_arches.split(): |
1841 | + run_boottest = False |
1842 | |
1843 | # if there are out-of-date packages, warn about them in the excuse and set update_candidate |
1844 | # to False to block the update; if the architecture where the package is out-of-date is |
1845 | - # in the `fucked_arches' list, then do not block the update |
1846 | + # in the `outofsync_arches' list, then do not block the update |
1847 | if oodbins: |
1848 | oodtxt = "" |
1849 | for v in oodbins.keys(): |
1850 | if oodtxt: oodtxt = oodtxt + "; " |
1851 | - oodtxt = oodtxt + "%s (from <a href=\"http://buildd.debian.org/status/logs.php?" \ |
1852 | - "arch=%s&pkg=%s&ver=%s\" target=\"_blank\">%s</a>)" % \ |
1853 | - (", ".join(sorted(oodbins[v])), quote(arch), quote(src), quote(v), v) |
1854 | + oodtxt = oodtxt + "%s (from <a href=\"https://launchpad.net/%s/+source/" \ |
1855 | + "%s/%s/+latestbuild/%s\" target=\"_blank\">%s</a>)" % \ |
1856 | + (", ".join(sorted(oodbins[v])), self.options.distribution, quote(src.split("/")[0]), quote(v), quote(arch), v) |
1857 | if uptodatebins: |
1858 | - text = "old binaries left on <a href=\"http://buildd.debian.org/status/logs.php?" \ |
1859 | - "arch=%s&pkg=%s&ver=%s\" target=\"_blank\">%s</a>: %s" % \ |
1860 | - (quote(arch), quote(src), quote(source_u[VERSION]), arch, oodtxt) |
1861 | + text = "old binaries left on <a href=\"https://launchpad.net/%s/+source/" \ |
1862 | + "%s/%s/+latestbuild/%s\" target=\"_blank\">%s</a>: %s" % \ |
1863 | + (self.options.distribution, quote(src.split("/")[0]), quote(source_u[VERSION]), quote(arch), arch, oodtxt) |
1864 | else: |
1865 | - text = "missing build on <a href=\"http://buildd.debian.org/status/logs.php?" \ |
1866 | - "arch=%s&pkg=%s&ver=%s\" target=\"_blank\">%s</a>: %s" % \ |
1867 | - (quote(arch), quote(src), quote(source_u[VERSION]), arch, oodtxt) |
1868 | + text = "missing build on <a href=\"https://launchpad.net/%s/+source/" \ |
1869 | + "%s/%s/+latestbuild/%s\" target=\"_blank\">%s</a>: %s" % \ |
1870 | + (self.options.distribution, quote(src.split("/")[0]), quote(source_u[VERSION]), quote(arch), arch, oodtxt) |
1871 | |
1872 | - if arch in self.options.fucked_arches: |
1873 | + if arch in self.options.outofsync_arches: |
1874 | text = text + " (but %s isn't keeping up, so nevermind)" % (arch) |
1875 | else: |
1876 | +<<<<<<< TREE |
1877 | +======= |
1878 | + update_candidate = False |
1879 | + if arch in self.options.adt_arches.split(): |
1880 | + run_autopkgtest = False |
1881 | + if arch in self.options.boottest_arches.split(): |
1882 | + run_boottest = False |
1883 | + excuse.addreason("arch") |
1884 | + excuse.addreason("arch-%s" % arch) |
1885 | +>>>>>>> MERGE-SOURCE |
1886 | if uptodatebins: |
1887 | excuse.addreason("cruft-arch") |
1888 | excuse.addreason("cruft-arch-%s" % arch) |
1889 | @@ -1497,14 +1724,21 @@ |
1890 | excuse.addreason("build-arch") |
1891 | excuse.addreason("build-arch-%s" % arch) |
1892 | |
1893 | - if self.date_now != self.dates[src][1]: |
1894 | - excuse.addhtml(text) |
1895 | + excuse.addhtml(text) |
1896 | |
1897 | # if the source package has no binaries, set update_candidate to False to block the update |
1898 | if len(self.sources[suite][src][BINARIES]) == 0: |
1899 | excuse.addhtml("%s has no binaries on any arch" % src) |
1900 | excuse.addreason("no-binaries") |
1901 | update_candidate = False |
1902 | + run_autopkgtest = False |
1903 | + run_boottest = False |
1904 | + elif not built_anywhere: |
1905 | + excuse.addhtml("%s has no up-to-date binaries on any arch" % src) |
1906 | + excuse.addreason("no-binaries") |
1907 | + update_candidate = False |
1908 | + run_autopkgtest = False |
1909 | + run_boottest = False |
1910 | |
1911 | # if the suite is unstable, then we have to check the release-critical bug lists before |
1912 | # updating testing; if the unstable package has RC bugs that do not apply to the testing |
1913 | @@ -1536,6 +1770,8 @@ |
1914 | excuse.addhtml("Updating %s introduces new bugs: %s" % (pkg, ", ".join( |
1915 | ["<a href=\"http://bugs.debian.org/%s\">#%s</a>" % (quote(a), a) for a in new_bugs]))) |
1916 | update_candidate = False |
1917 | + run_autopkgtest = False |
1918 | + run_boottest = False |
1919 | excuse.addreason("buggy") |
1920 | |
1921 | if len(old_bugs) > 0: |
1922 | @@ -1553,6 +1789,8 @@ |
1923 | excuse.addhtml("Should ignore, but forced by %s" % (forces[0].user)) |
1924 | excuse.force() |
1925 | update_candidate = True |
1926 | + run_autopkgtest = True |
1927 | + run_boottest = True |
1928 | |
1929 | # if the package can be updated, it is a valid candidate |
1930 | if update_candidate: |
1931 | @@ -1561,6 +1799,8 @@ |
1932 | else: |
1933 | # TODO |
1934 | excuse.addhtml("Not considered") |
1935 | + excuse.run_autopkgtest = run_autopkgtest |
1936 | + excuse.run_boottest = run_boottest |
1937 | |
1938 | self.excuses.append(excuse) |
1939 | return update_candidate |
1940 | @@ -1694,6 +1934,7 @@ |
1941 | # add the removal of the package to upgrade_me and build a new excuse |
1942 | upgrade_me.append("-%s" % (src)) |
1943 | excuse = Excuse("-%s" % (src)) |
1944 | + excuse.set_distribution(self.options.distribution) |
1945 | excuse.set_vers(tsrcv, None) |
1946 | excuse.addhtml("Removal request by %s" % (item.user)) |
1947 | excuse.addhtml("Package is broken, will try to remove") |
1948 | @@ -1706,6 +1947,167 @@ |
1949 | # extract the not considered packages, which are in the excuses but not in upgrade_me |
1950 | unconsidered = [e.name for e in self.excuses if e.name not in upgrade_me] |
1951 | |
1952 | + if getattr(self.options, "adt_enable", "no") == "yes" and \ |
1953 | + self.options.series: |
1954 | + # trigger autopkgtests for valid candidates |
1955 | + adt_debug = getattr(self.options, "adt_debug", "no") == "yes" |
1956 | + autopkgtest = AutoPackageTest( |
1957 | + self, self.options.distribution, self.options.series, |
1958 | + debug=adt_debug) |
1959 | + autopkgtest_packages = [] |
1960 | + autopkgtest_excuses = [] |
1961 | + autopkgtest_excludes = [] |
1962 | + for e in self.excuses: |
1963 | + if not e.run_autopkgtest: |
1964 | + autopkgtest_excludes.append(e.name) |
1965 | + continue |
1966 | + # skip removals, binary-only candidates, and proposed-updates |
1967 | + if e.name.startswith("-") or "/" in e.name or "_" in e.name: |
1968 | +                    continue |
1969 | + if e.ver[1] == "-": |
1970 | + continue |
1971 | + autopkgtest_excuses.append(e) |
1972 | + autopkgtest_packages.append((e.name, e.ver[1])) |
1973 | + autopkgtest.request(autopkgtest_packages, autopkgtest_excludes) |
1974 | + if not self.options.dry_run: |
1975 | + autopkgtest.collect_requested() |
1976 | + autopkgtest.submit() |
1977 | + autopkgtest.collect(autopkgtest_packages) |
1978 | + cloud_url = "http://autopkgtest.ubuntu.com/packages/%(h)s/%(s)s/%(r)s/%(a)s" |
1979 | + for e in autopkgtest_excuses: |
1980 | + adtpass = True |
1981 | + for passed, adtsrc, adtver, arch_status in autopkgtest.results( |
1982 | + e.name, e.ver[1]): |
1983 | + for arch in arch_status: |
1984 | + url = cloud_url % {'h': srchash(adtsrc), 's': adtsrc, |
1985 | + 'r': self.options.series, 'a': arch} |
1986 | + e.addtest('autopkgtest', '%s %s' % (adtsrc, adtver), |
1987 | + arch, arch_status[arch], url) |
1988 | + |
1989 | + # hints can override failures |
1990 | + if not passed: |
1991 | + hints = self.hints.search( |
1992 | + 'force-badtest', package=adtsrc) |
1993 | + hints.extend( |
1994 | + self.hints.search('force', package=adtsrc)) |
1995 | + forces = [ |
1996 | + x for x in hints |
1997 | + if same_source(adtver, x.version) ] |
1998 | + if forces: |
1999 | + e.force() |
2000 | + e.addreason('badtest %s %s' % (adtsrc, adtver)) |
2001 | + e.addhtml( |
2002 | + "Should wait for %s %s test, but forced by " |
2003 | + "%s" % (adtsrc, adtver, forces[0].user)) |
2004 | + passed = True |
2005 | + |
2006 | + if not passed: |
2007 | + adtpass = False |
2008 | + |
2009 | + if not adtpass and e.is_valid: |
2010 | + hints = self.hints.search('force-skiptest', package=e.name) |
2011 | + hints.extend(self.hints.search('force', package=e.name)) |
2012 | + forces = [ |
2013 | + x for x in hints |
2014 | + if same_source(e.ver[1], x.version) ] |
2015 | + if forces: |
2016 | + e.force() |
2017 | + e.addreason('skiptest') |
2018 | + e.addhtml( |
2019 | + "Should wait for tests relating to %s %s, but " |
2020 | + "forced by %s" % |
2021 | + (e.name, e.ver[1], forces[0].user)) |
2022 | + else: |
2023 | + upgrade_me.remove(e.name) |
2024 | + unconsidered.append(e.name) |
2025 | + e.addhtml("Not considered") |
2026 | + e.addreason("autopkgtest") |
2027 | + e.is_valid = False |
2028 | + |
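The per-arch result links above are built from a `srchash`-style pool prefix; the actual helper lives in britney_util.py, but presumably follows the usual Debian pool convention (first character of the source name, or the first four characters for `lib*` sources). A sketch under that assumption:

```python
def srchash(src):
    """Assumed pool-style prefix: 'lib*' sources use four characters,
    all others use one (the real helper is in britney_util.py)."""
    return src[:4] if src.startswith('lib') else src[0]

def results_url(src, series, arch):
    """Build the autopkgtest.ubuntu.com results link used in the excuse."""
    return ("http://autopkgtest.ubuntu.com/packages/%(h)s/%(s)s/%(r)s/%(a)s"
            % {'h': srchash(src), 's': src, 'r': series, 'a': arch})
```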
2029 | + if (getattr(self.options, "boottest_enable", "no") == "yes" and |
2030 | + self.options.series): |
2031 | + # trigger 'boottest'ing for valid candidates. |
2032 | + boottest_debug = getattr( |
2033 | + self.options, "boottest_debug", "no") == "yes" |
2034 | + boottest = BootTest( |
2035 | + self, self.options.distribution, self.options.series, |
2036 | + debug=boottest_debug) |
2037 | + boottest_excuses = [] |
2038 | + for excuse in self.excuses: |
2039 | + # Skip already invalid excuses. |
2040 | + if not excuse.run_boottest: |
2041 | + continue |
2042 | + # Also skip removals, binary-only candidates, proposed-updates |
2043 | + # and unknown versions. |
2044 | + if (excuse.name.startswith("-") or |
2045 | + "/" in excuse.name or |
2046 | + "_" in excuse.name or |
2047 | + excuse.ver[1] == "-"): |
2048 | + continue |
2049 | + # Allows hints to skip boottest attempts |
2050 | + hints = self.hints.search( |
2051 | + 'force-skiptest', package=excuse.name) |
2052 | + forces = [x for x in hints |
2053 | + if same_source(excuse.ver[1], x.version)] |
2054 | + if forces: |
2055 | + excuse.addhtml( |
2056 | + "boottest skipped from hints by %s" % forces[0].user) |
2057 | + continue |
2058 | + # Only sources whitelisted in the boottest context should |
2059 | + # be tested (currently only sources building phone binaries). |
2060 | + if not boottest.needs_test(excuse.name, excuse.ver[1]): |
2061 | + # Silently skipping. |
2062 | + continue |
2063 | + # Okay, aggregate required boottests requests. |
2064 | + boottest_excuses.append(excuse) |
2065 | + boottest.request([(e.name, e.ver[1]) for e in boottest_excuses]) |
2066 | + # Dry-run avoids data exchange with external systems. |
2067 | + if not self.options.dry_run: |
2068 | + boottest.submit() |
2069 | + boottest.collect() |
2070 | + # Boottest Jenkins views location. |
2071 | + jenkins_public = "https://jenkins.qa.ubuntu.com/job" |
2072 | + jenkins_private = ( |
2073 | + "http://d-jenkins.ubuntu-ci:8080/view/%s/view/BootTest/job" % |
2074 | + self.options.series.title()) |
2075 | + # Update excuses from the boottest context. |
2076 | + for excuse in boottest_excuses: |
2077 | + status = boottest.get_status(excuse.name, excuse.ver[1]) |
2078 | + label = BootTest.EXCUSE_LABELS.get(status, 'UNKNOWN STATUS') |
2079 | + public_url = "%s/%s-boottest-%s/lastBuild" % ( |
2080 | + jenkins_public, self.options.series, |
2081 | + excuse.name.replace("+", "-")) |
2082 | + private_url = "%s/%s-boottest-%s/lastBuild" % ( |
2083 | + jenkins_private, self.options.series, |
2084 | + excuse.name.replace("+", "-")) |
2085 | + excuse.addhtml( |
2086 | + "Boottest result: %s (Jenkins: <a href=\"%s\">public</a>" |
2087 | + ", <a href=\"%s\">private</a>)" % ( |
2088 | + label, public_url, private_url)) |
2089 | + # Allows hints to force boottest failures/attempts |
2090 | + # to be ignored. |
2091 | + hints = self.hints.search('force', package=excuse.name) |
2092 | + hints.extend( |
2093 | + self.hints.search('force-badtest', package=excuse.name)) |
2094 | + forces = [x for x in hints |
2095 | + if same_source(excuse.ver[1], x.version)] |
2096 | + if forces: |
2097 | + excuse.addhtml( |
2098 | + "Should wait for %s %s boottest, but forced by " |
2099 | + "%s" % (excuse.name, excuse.ver[1], |
2100 | + forces[0].user)) |
2101 | + continue |
2102 | + # Block promotion if the excuse is still valid (adt tests |
2103 | + # passed) but the boottest attempt has failed or is still |
2104 | + # in progress. |
2105 | + if status not in BootTest.VALID_STATUSES: |
2106 | + excuse.addreason("boottest") |
2107 | + if excuse.is_valid: |
2108 | + excuse.is_valid = False |
2109 | + excuse.addhtml("Not considered") |
2110 | + upgrade_me.remove(excuse.name) |
2111 | + unconsidered.append(excuse.name) |
2112 | + |
2113 | # invalidate impossible excuses |
2114 | for e in self.excuses: |
2115 | # parts[0] == package name |
2116 | @@ -2455,7 +2857,7 @@ |
2117 | |
2118 | if not force: |
2119 | self.output_write(eval_uninst(self.options.architectures, |
2120 | - newly_uninst(nuninst_start, nuninst_end)) + "\n") |
2121 | + newly_uninst(nuninst_start, nuninst_end))) |
2122 | |
2123 | if not force: |
2124 | break_arches = set(self.options.break_arches) |
2125 | @@ -2483,7 +2885,7 @@ |
2126 | if force: |
2127 | self.output_write("force breaks:\n") |
2128 | self.output_write(eval_uninst(self.options.architectures, |
2129 | - newly_uninst(nuninst_start, nuninst_end)) + "\n") |
2130 | + newly_uninst(nuninst_start, nuninst_end))) |
2131 | self.output_write("SUCCESS (%d/%d)\n" % (len(actions or self.upgrade_me), len(extra))) |
2132 | self.nuninst_orig = nuninst_end |
2133 | self.all_selected += selected |
2134 | @@ -2498,6 +2900,7 @@ |
2135 | lundo.reverse() |
2136 | |
2137 | undo_changes(lundo, self._inst_tester, self.sources, self.binaries) |
2138 | + self.output_write("\n") |
2139 | |
2140 | |
2141 | def assert_nuninst_is_correct(self): |
2142 | @@ -2541,6 +2944,7 @@ |
2143 | self.nuninst_orig = self.get_nuninst() |
2144 | # nuninst_orig may get updated during the upgrade process |
2145 | self.nuninst_orig_save = self.get_nuninst() |
2146 | + self.all_selected = [] |
2147 | |
2148 | if not self.options.actions: |
2149 | # process `easy' hints |
2150 | @@ -2592,6 +2996,7 @@ |
2151 | # obsolete source packages |
2152 | # a package is obsolete if none of the binary packages in testing |
2153 | # are built by it |
2154 | +<<<<<<< TREE |
2155 | self.__log("> Removing obsolete source packages from testing", type="I") |
2156 | # local copies for performance |
2157 | sources = self.sources['testing'] |
2158 | @@ -2607,6 +3012,24 @@ |
2159 | self.output_write("Removing obsolete source packages from testing (%d):\n" % (len(removals))) |
2160 | self.do_all(actions=removals) |
2161 | |
2162 | +======= |
2163 | + if getattr(self.options, "remove_obsolete", "yes") == "yes": |
2164 | + self.__log("> Removing obsolete source packages from testing", type="I") |
2165 | + # local copies for performance |
2166 | + sources = self.sources['testing'] |
2167 | + binaries = self.binaries['testing'] |
2168 | + used = set(binaries[arch][0][binary][SOURCE] |
2169 | + for arch in binaries |
2170 | + for binary in binaries[arch][0] |
2171 | + ) |
2172 | + removals = [ MigrationItem("-%s/%s" % (source, sources[source][VERSION])) |
2173 | + for source in sources if source not in used |
2174 | + ] |
2175 | + if len(removals) > 0: |
2176 | + self.output_write("Removing obsolete source packages from testing (%d):\n" % (len(removals))) |
2177 | + self.do_all(actions=removals) |
2178 | + |
2179 | +>>>>>>> MERGE-SOURCE |
2180 | # smooth updates |
2181 | if self.options.smooth_updates: |
2182 | self.__log("> Removing old packages left in testing from smooth updates", type="I") |
2183 | @@ -2670,6 +3093,7 @@ |
2184 | self.__log("> Calculating current uninstallability counters", type="I") |
2185 | self.nuninst_orig = self.get_nuninst() |
2186 | self.nuninst_orig_save = self.get_nuninst() |
2187 | + self.all_selected = [] |
2188 | |
2189 | import readline |
2190 | from completer import Completer |
2191 | @@ -2891,6 +3315,7 @@ |
2192 | else: |
2193 | self.upgrade_me = self.options.actions.split() |
2194 | |
2195 | + ensuredir(os.path.dirname(self.options.upgrade_output)) |
2196 | with open(self.options.upgrade_output, 'w', encoding='utf-8') as f: |
2197 | self.__output = f |
2198 | |
2199 | |
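The hint-forcing logic above keeps only hints whose version matches the excuse via `same_source(excuse.ver[1], x.version)`; `same_source` (touched in `britney_util.py` later in this diff) treats a binNMU rebuild such as `2.0-1+b2` as the same source upload as `2.0-1`, using the `binnmu_re` pattern. A minimal sketch of that filtering, assuming plain string comparison instead of britney's real apt_pkg version handling:

```python
import re

# binNMU suffix pattern, as in britney_util.py: "<version>+b<N>"
BINNMU_RE = re.compile(r'^(.*)\+b\d+$')

def same_source(sv1, sv2):
    """Simplified sketch: two versions are 'the same source' if they are
    equal after stripping any binNMU suffix. (The real britney compares
    versions with apt_pkg; string equality is an assumption here.)"""
    def strip_binnmu(v):
        m = BINNMU_RE.match(v)
        return m.group(1) if m else v
    return strip_binnmu(sv1) == strip_binnmu(sv2)

def forced_by_hints(version, hints):
    """Mirror the filtering above: keep hints whose version matches."""
    return [h for h in hints if same_source(version, h['version'])]

# Illustrative hint data; the dict shape is an assumption, not britney's
# actual Hint object.
hints = [{'user': 'pitti', 'version': '2.0-1'}]
print(forced_by_hints('2.0-1+b2', hints))
```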
2200 | === modified file 'britney_nobreakall.conf' |
2201 | --- britney_nobreakall.conf 2015-10-27 17:32:31 +0000 |
2202 | +++ britney_nobreakall.conf 2015-11-17 11:05:43 +0000 |
2203 | @@ -1,26 +1,25 @@ |
2204 | # Configuration file for britney |
2205 | |
2206 | # Paths for control files |
2207 | -TESTING = /srv/release.debian.org/britney/var/data-b2/testing |
2208 | -TPU = /srv/release.debian.org/britney/var/data-b2/testing-proposed-updates |
2209 | -PU = /srv/release.debian.org/britney/var/data-b2/proposed-updates |
2210 | -UNSTABLE = /srv/release.debian.org/britney/var/data-b2/unstable |
2211 | +TESTING = data/%(SERIES) |
2212 | +UNSTABLE = data/%(SERIES)-proposed |
2213 | +PARTIAL_UNSTABLE = yes |
2214 | |
2215 | # Output |
2216 | -NONINST_STATUS = /srv/release.debian.org/britney/var/data-b2/non-installable-status |
2217 | -EXCUSES_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/excuses.html |
2218 | -EXCUSES_YAML_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/excuses.yaml |
2219 | -UPGRADE_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/output.txt |
2220 | -HEIDI_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/HeidiResult |
2221 | +NONINST_STATUS = data/%(SERIES)/non-installable-status |
2222 | +EXCUSES_OUTPUT = output/%(SERIES)/excuses.html |
2223 | +EXCUSES_YAML_OUTPUT = output/%(SERIES)/excuses.yaml |
2224 | +UPGRADE_OUTPUT = output/%(SERIES)/output.txt |
2225 | +HEIDI_OUTPUT = output/%(SERIES)/HeidiResult |
2226 | |
2227 | # List of release architectures |
2228 | -ARCHITECTURES = i386 amd64 arm64 armel armhf mips mipsel powerpc ppc64el s390x |
2229 | +ARCHITECTURES = amd64 arm64 armhf i386 powerpc ppc64el |
2230 | |
2231 | # if you're not in this list, arch: all packages are allowed to break on you |
2232 | -NOBREAKALL_ARCHES = i386 amd64 arm64 armel armhf mips mipsel powerpc ppc64el s390x |
2233 | +NOBREAKALL_ARCHES = amd64 arm64 armhf i386 powerpc ppc64el |
2234 | |
2235 | # if you're in this list, your packages may not stay in sync with the source |
2236 | -FUCKED_ARCHES = |
2237 | +OUTOFSYNC_ARCHES = |
2238 | |
2239 | # if you're in this list, your uninstallability count may increase |
2240 | BREAK_ARCHES = |
2241 | @@ -29,14 +28,15 @@ |
2242 | NEW_ARCHES = |
2243 | |
2244 | # priorities and delays |
2245 | -MINDAYS_LOW = 10 |
2246 | -MINDAYS_MEDIUM = 5 |
2247 | -MINDAYS_HIGH = 2 |
2248 | +MINDAYS_LOW = 0 |
2249 | +MINDAYS_MEDIUM = 0 |
2250 | +MINDAYS_HIGH = 0 |
2251 | MINDAYS_CRITICAL = 0 |
2252 | MINDAYS_EMERGENCY = 0 |
2253 | DEFAULT_URGENCY = medium |
2254 | |
2255 | # hint permissions |
2256 | +<<<<<<< TREE |
2257 | HINTS_ABA = ALL |
2258 | HINTS_PKERN = STANDARD force |
2259 | HINTS_ADSB = STANDARD force force-hint |
2260 | @@ -52,10 +52,38 @@ |
2261 | HINTS_FREEZE-EXCEPTION = unblock unblock-udeb |
2262 | HINTS_SATBRITNEY = easy |
2263 | HINTS_AUTO-REMOVALS = remove |
2264 | +======= |
2265 | +HINTS_CJWATSON = ALL |
2266 | +HINTS_ADCONRAD = ALL |
2267 | +HINTS_KITTERMAN = ALL |
2268 | +HINTS_LANEY = ALL |
2269 | +HINTS_JRIDDELL = ALL |
2270 | +HINTS_STEFANOR = ALL |
2271 | +HINTS_STGRABER = ALL |
2272 | +HINTS_VORLON = ALL |
2273 | +HINTS_PITTI = ALL |
2274 | +HINTS_FREEZE = block block-all |
2275 | + |
2276 | +HINTS_UBUNTU-TOUCH/DIDROCKS = block unblock |
2277 | +HINTS_UBUNTU-TOUCH/EV = block unblock |
2278 | +HINTS_UBUNTU-TOUCH/KEN-VANDINE = block unblock |
2279 | +HINTS_UBUNTU-TOUCH/LOOL = block unblock |
2280 | +HINTS_UBUNTU-TOUCH/MATHIEU-TL = block unblock |
2281 | +HINTS_UBUNTU-TOUCH/OGRA = block unblock |
2282 | +>>>>>>> MERGE-SOURCE |
2283 | |
2284 | # support for old libraries in testing (smooth update) |
2285 | # use ALL to enable smooth updates for all the sections |
2286 | # |
2287 | # naming a non-existent section will effectively disable new smooth |
2288 | # updates but still allow removals to occur |
2289 | -SMOOTH_UPDATES = libs oldlibs |
2290 | +SMOOTH_UPDATES = badgers |
2291 | + |
2292 | +REMOVE_OBSOLETE = no |
2293 | + |
2294 | +ADT_ENABLE = yes |
2295 | +ADT_DEBUG = no |
2296 | +ADT_ARCHES = amd64 i386 armhf ppc64el |
2297 | +ADT_AMQP = amqp://test_request:password@162.213.33.228 |
2298 | +# Swift base URL with the results (must be publicly readable and browsable) |
2299 | +ADT_SWIFT_URL = https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac |
2300 | |
2301 | === modified file 'britney_util.py' |
2302 | --- britney_util.py 2015-09-13 18:33:06 +0000 |
2303 | +++ britney_util.py 2015-11-17 11:05:43 +0000 |
2304 | @@ -39,6 +39,11 @@ |
2305 | |
2306 | binnmu_re = re.compile(r'^(.*)\+b\d+$') |
2307 | |
2308 | +def ensuredir(directory): |
2309 | + if not os.path.isdir(directory): |
2310 | + os.makedirs(directory) |
2311 | + |
2312 | + |
2313 | def same_source(sv1, sv2, binnmu_re=binnmu_re): |
2314 | """Check if two version numbers are built from the same source |
2315 | |
2316 | @@ -201,6 +206,7 @@ |
2317 | return "\n".join(" " + k + ": " + " ".join(libraries[k]) for k in libraries) + "\n" |
2318 | |
2319 | |
2320 | +<<<<<<< TREE |
2321 | def compute_reverse_tree(inst_tester, affected): |
2322 | """Calculate the full dependency tree for a set of packages |
2323 | |
2324 | @@ -219,6 +225,106 @@ |
2325 | affected.update(new_pkg_ids) |
2326 | remain.extend(new_pkg_ids) |
2327 | return None |
2328 | +======= |
2329 | + |
2330 | +def register_reverses(packages, provides, check_doubles=True, iterator=None, |
2331 | + parse_depends=apt_pkg.parse_depends, |
2332 | + DEPENDS=DEPENDS, CONFLICTS=CONFLICTS, |
2333 | + RDEPENDS=RDEPENDS, RCONFLICTS=RCONFLICTS): |
2334 | + """Register reverse dependencies and conflicts for a given |
2335 | + sequence of packages |
2336 | + |
2337 | + This method registers the reverse dependencies and conflicts for a |
2338 | + given sequence of packages. "packages" is a table of real |
2339 | + packages and "provides" is a table of virtual packages. |
2340 | + |
2341 | + iterator is the sequence of packages for which the reverse |
2342 | + relations should be updated. |
2343 | + |
2344 | + The "X=X" parameters are optimizations to avoid "load global" in |
2345 | + the loops. |
2346 | + """ |
2347 | + if iterator is None: |
2348 | + iterator = packages.keys() |
2349 | + else: |
2350 | + iterator = ifilter_only(packages, iterator) |
2351 | + |
2352 | + for pkg in iterator: |
2353 | + # register the list of the dependencies for the depending packages |
2354 | + dependencies = [] |
2355 | + pkg_data = packages[pkg] |
2356 | + if pkg_data[DEPENDS]: |
2357 | + dependencies.extend(parse_depends(pkg_data[DEPENDS], False)) |
2358 | + # go through the list |
2359 | + for p in dependencies: |
2360 | + for a in p: |
2361 | + # strip off Multi-Arch qualifiers like :any or :native |
2362 | + dep = a[0].split(':')[0] |
2363 | + # register real packages |
2364 | + if dep in packages and (not check_doubles or pkg not in packages[dep][RDEPENDS]): |
2365 | + packages[dep][RDEPENDS].append(pkg) |
2366 | + # also register packages which provide the package (if any) |
2367 | + if dep in provides: |
2368 | + for i in provides[dep]: |
2369 | + if i not in packages: continue |
2370 | + if not check_doubles or pkg not in packages[i][RDEPENDS]: |
2371 | + packages[i][RDEPENDS].append(pkg) |
2372 | + # register the list of the conflicts for the conflicting packages |
2373 | + if pkg_data[CONFLICTS]: |
2374 | + for p in parse_depends(pkg_data[CONFLICTS], False): |
2375 | + for a in p: |
2376 | + con = a[0] |
2377 | + # register real packages |
2378 | + if con in packages and (not check_doubles or pkg not in packages[con][RCONFLICTS]): |
2379 | + packages[con][RCONFLICTS].append(pkg) |
2380 | + # also register packages which provide the package (if any) |
2381 | + if con in provides: |
2382 | + for i in provides[con]: |
2383 | + if i not in packages: continue |
2384 | + if not check_doubles or pkg not in packages[i][RCONFLICTS]: |
2385 | + packages[i][RCONFLICTS].append(pkg) |
2386 | + |
2387 | + |
2388 | +def compute_reverse_tree(packages_s, pkg, arch, |
2389 | + set=set, flatten=chain.from_iterable, |
2390 | + RDEPENDS=RDEPENDS): |
2391 | + """Calculate the full dependency tree for the given package |
2392 | + |
2393 | + This method returns the full dependency tree for the package |
2394 | + "pkg", inside the "arch" architecture for a given suite flattened |
2395 | + as an iterable. The first argument "packages_s" is the binary |
2396 | + package table for that given suite (e.g. Britney().binaries["testing"]). |
2397 | + |
2398 | + The tree (or graph) is returned as an iterable of (package, arch) |
2399 | + tuples and the iterable will contain ("pkg", "arch") if it is |
2400 | + available on that architecture. |
2401 | + |
2402 | + If "pkg" is not available on that architecture in that suite, |
2403 | + this returns an empty iterable. |
2404 | + |
2405 | + The method does not promise any ordering of the returned |
2406 | + elements and the iterable is not reusable. |
2407 | + |
2408 | + The flatten=... and the "X=X" parameters are optimizations to |
2409 | + avoid "load global" in the loops. |
2410 | + """ |
2411 | + binaries = packages_s[arch][0] |
2412 | + if pkg not in binaries: |
2413 | + return frozenset() |
2414 | + rev_deps = set(binaries[pkg][RDEPENDS]) |
2415 | + seen = set([pkg]) |
2416 | + |
2417 | + binfilt = ifilter_only(binaries) |
2418 | + revfilt = ifilter_except(seen) |
2419 | + |
2420 | + while rev_deps: |
2421 | + # mark all of the current iteration of packages as affected |
2422 | + seen |= rev_deps |
2423 | + # generate the next iteration, which is the reverse-dependencies of |
2424 | + # the current iteration |
2425 | + rev_deps = set(revfilt(flatten( binaries[x][RDEPENDS] for x in binfilt(rev_deps) ))) |
2426 | + return zip(seen, repeat(arch)) |
2427 | +>>>>>>> MERGE-SOURCE |
2428 | |
2429 | |
2430 | def write_nuninst(filename, nuninst): |
2431 | @@ -377,6 +483,7 @@ |
2432 | or "legacy-html". |
2433 | """ |
2434 | if output_format == "yaml": |
2435 | + ensuredir(os.path.dirname(dest_file)) |
2436 | with open(dest_file, 'w', encoding='utf-8') as f: |
2437 | excuselist = [] |
2438 | for e in excuses: |
2439 | @@ -386,11 +493,13 @@ |
2440 | excusesdata["generated-date"] = datetime.utcnow() |
2441 | f.write(yaml.dump(excusesdata, default_flow_style=False, allow_unicode=True)) |
2442 | elif output_format == "legacy-html": |
2443 | + ensuredir(os.path.dirname(dest_file)) |
2444 | with open(dest_file, 'w', encoding='utf-8') as f: |
2445 | f.write("<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\" \"http://www.w3.org/TR/REC-html40/strict.dtd\">\n") |
2446 | f.write("<html><head><title>excuses...</title>") |
2447 | f.write("<meta http-equiv=\"Content-Type\" content=\"text/html;charset=utf-8\"></head><body>\n") |
2448 | f.write("<p>Generated: " + time.strftime("%Y.%m.%d %H:%M:%S %z", time.gmtime(time.time())) + "</p>\n") |
2449 | + f.write("<p>See the <a href=\"https://wiki.ubuntu.com/ProposedMigration\">documentation</a> for help interpreting this page.</p>\n") |
2450 | f.write("<ul>\n") |
2451 | for e in excuses: |
2452 | f.write("<li>%s" % e.html()) |
2453 | @@ -436,6 +545,7 @@ |
2454 | (PROVIDES, 'Provides'), (CONFLICTS, 'Conflicts'), |
2455 | (ESSENTIAL, 'Essential')) |
2456 | |
2457 | + ensuredir(basedir) |
2458 | for arch in packages_s: |
2459 | filename = os.path.join(basedir, 'Packages_%s' % arch) |
2460 | binaries = packages_s[arch][0] |
2461 | |
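The MERGE-SOURCE `compute_reverse_tree()` above computes the closure of reverse dependencies with a breadth-first sweep over `RDEPENDS`. A self-contained sketch of the same idea, with a plain dict standing in for britney's binary package table and without the `ifilter_*`/`load global` optimizations:

```python
from itertools import chain, repeat

def compute_reverse_tree(rdepends, pkg, arch='amd64'):
    """Breadth-first closure over reverse dependencies, as sketched from
    the function above. 'rdepends' maps a package name to the list of
    packages that depend on it (stand-in for britney's binary table)."""
    if pkg not in rdepends:
        return frozenset()
    seen = {pkg}
    frontier = set(rdepends[pkg])
    while frontier:
        # mark the current iteration as affected, then move to the
        # reverse-dependencies of the current iteration
        seen |= frontier
        frontier = set(chain.from_iterable(
            rdepends.get(p, ()) for p in frontier)) - seen
    return set(zip(seen, repeat(arch)))

# Illustrative dependency data (package names are hypothetical)
rdeps = {'libfoo': ['foo-utils', 'bar'], 'bar': ['baz'],
         'foo-utils': [], 'baz': []}
print(sorted(compute_reverse_tree(rdeps, 'libfoo')))
```

Like the original, the result is a set of `(package, arch)` tuples containing the package itself when it exists on that architecture, and an empty iterable otherwise.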
2462 | === modified file 'consts.py' |
2463 | --- consts.py 2015-09-13 17:33:22 +0000 |
2464 | +++ consts.py 2015-11-17 11:05:43 +0000 |
2465 | @@ -24,6 +24,7 @@ |
2466 | BINARIES = 2 |
2467 | MAINTAINER = 3 |
2468 | FAKESRC = 4 |
2469 | +AUTOPKGTEST = 5 |
2470 | |
2471 | # binary package |
2472 | SOURCE = 2 |
2473 | |
2474 | === modified file 'excuse.py' |
2475 | --- excuse.py 2015-09-06 14:23:05 +0000 |
2476 | +++ excuse.py 2015-11-17 11:05:43 +0000 |
2477 | @@ -1,6 +1,6 @@ |
2478 | # -*- coding: utf-8 -*- |
2479 | |
2480 | -# Copyright (C) 2001-2004 Anthony Towns <ajt@debian.org> |
2481 | +# Copyright (C) 2006, 2011-2015 Anthony Towns <ajt@debian.org> |
2482 | # Andreas Barth <aba@debian.org> |
2483 | # Fabio Tranchitella <kobold@debian.org> |
2484 | |
2485 | @@ -16,6 +16,15 @@ |
2486 | |
2487 | import re |
2488 | |
2489 | +EXCUSES_LABELS = { |
2490 | + "PASS": '<span style="background:#87d96c">Pass</span>', |
2491 | + "FAIL": '<span style="background:#ff6666">Failed</span>', |
2492 | + "ALWAYSFAIL": '<span style="background:#e5c545">Always failed</span>', |
2493 | + "REGRESSION": '<span style="background:#ff6666">Regression</span>', |
2494 | + "RUNNING": '<span style="background:#99ddff">Test in progress</span>', |
2495 | +} |
2496 | + |
2497 | + |
2498 | class Excuse(object): |
2499 | """Excuse class |
2500 | |
2501 | @@ -49,6 +58,9 @@ |
2502 | self._is_valid = False |
2503 | self._dontinvalidate = False |
2504 | self.forced = False |
2505 | + self.run_autopkgtest = False |
2506 | + self.run_boottest = False |
2507 | + self.distribution = "ubuntu" |
2508 | |
2509 | self.invalid_deps = [] |
2510 | self.deps = {} |
2511 | @@ -59,6 +71,9 @@ |
2512 | self.oldbugs = set() |
2513 | self.reason = {} |
2514 | self.htmlline = [] |
2515 | + # type (e. g. "autopkgtest") -> package (e. g. "foo 2-1") -> arch -> |
2516 | + # ['PASS'|'ALWAYSFAIL'|'REGRESSION'|'RUNNING', url] |
2517 | + self.tests = {} |
2518 | |
2519 | def sortkey(self): |
2520 | if self.daysold == None: |
2521 | @@ -98,6 +113,10 @@ |
2522 | """Set the urgency of upload of the package""" |
2523 | self.urgency = date |
2524 | |
2525 | + def set_distribution(self, distribution): |
2526 | + """Set the distribution name""" |
2527 | + self.distribution = distribution |
2528 | + |
2529 | def add_dep(self, name, arch): |
2530 | """Add a dependency""" |
2531 | if name not in self.deps: self.deps[name]=[] |
2532 | @@ -131,19 +150,42 @@ |
2533 | |
2534 | def html(self): |
2535 | """Render the excuse in HTML""" |
2536 | - res = "<a id=\"%s\" name=\"%s\">%s</a> (%s to %s)\n<ul>\n" % \ |
2537 | - (self.name, self.name, self.name, self.ver[0], self.ver[1]) |
2538 | + lp_pkg = "https://launchpad.net/%s/+source/%s" % (self.distribution, self.name.split("/")[0]) |
2539 | + if self.ver[0] == "-": |
2540 | + lp_old = self.ver[0] |
2541 | + else: |
2542 | + lp_old = "<a href=\"%s/%s\">%s</a>" % ( |
2543 | + lp_pkg, self.ver[0], self.ver[0]) |
2544 | + if self.ver[1] == "-": |
2545 | + lp_new = self.ver[1] |
2546 | + else: |
2547 | + lp_new = "<a href=\"%s/%s\">%s</a>" % ( |
2548 | + lp_pkg, self.ver[1], self.ver[1]) |
2549 | + res = ( |
2550 | + "<a id=\"%s\" name=\"%s\" href=\"%s\">%s</a> (%s to %s)\n<ul>\n" % |
2551 | + (self.name, self.name, lp_pkg, self.name, lp_old, lp_new)) |
2552 | if self.maint: |
2553 | res = res + "<li>Maintainer: %s\n" % (self.maint) |
2554 | if self.section and self.section.find("/") > -1: |
2555 | res = res + "<li>Section: %s\n" % (self.section) |
2556 | if self.daysold != None: |
2557 | - if self.daysold < self.mindays: |
2558 | + if self.mindays == 0: |
2559 | + res = res + ("<li>%d days old\n" % self.daysold) |
2560 | + elif self.daysold < self.mindays: |
2561 | res = res + ("<li>Too young, only %d of %d days old\n" % |
2562 | (self.daysold, self.mindays)) |
2563 | else: |
2564 | res = res + ("<li>%d days old (needed %d days)\n" % |
2565 | (self.daysold, self.mindays)) |
2566 | + for testtype in sorted(self.tests): |
2567 | + for pkg in sorted(self.tests[testtype]): |
2568 | + archmsg = [] |
2569 | + for arch in sorted(self.tests[testtype][pkg]): |
2570 | + status, url = self.tests[testtype][pkg][arch] |
2571 | + archmsg.append('<a href="%s">%s: %s</a>' % |
2572 | + (url, arch, EXCUSES_LABELS[status])) |
2573 | + res = res + ("<li>%s for %s: %s</li>\n" % (testtype, pkg, ', '.join(archmsg))) |
2574 | + |
2575 | for x in self.htmlline: |
2576 | res = res + "<li>" + x + "\n" |
2577 | lastdep = "" |
2578 | @@ -172,6 +214,10 @@ |
2579 | """"adding reason""" |
2580 | self.reason[reason] = 1 |
2581 | |
2582 | + def addtest(self, type_, package, arch, state, url): |
2583 | + """Add test result""" |
2584 | + self.tests.setdefault(type_, {}).setdefault(package, {})[arch] = [state, url] |
2585 | + |
2586 | # TODO merge with html() |
2587 | def text(self): |
2588 | """Render the excuse in text""" |
2589 | @@ -184,7 +230,9 @@ |
2590 | if self.section and self.section.find("/") > -1: |
2591 | res.append("Section: %s" % (self.section)) |
2592 | if self.daysold != None: |
2593 | - if self.daysold < self.mindays: |
2594 | + if self.mindays == 0: |
2595 | + res.append("%d days old" % self.daysold) |
2596 | + elif self.daysold < self.mindays: |
2597 | res.append(("Too young, only %d of %d days old" % |
2598 | (self.daysold, self.mindays))) |
2599 | else: |
2600 | @@ -225,5 +273,6 @@ |
2601 | else: |
2602 | excusedata["reason"] = sorted(list(self.reason.keys())) |
2603 | excusedata["is-candidate"] = self.is_valid |
2604 | + excusedata["tests"] = self.tests |
2605 | return excusedata |
2606 | |
2607 | |
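The new `Excuse.tests` attribute added above is a nested mapping, test type → "source version" → architecture → `[state, url]`, populated by `addtest()` and rendered by `html()`. A minimal sketch of that bookkeeping (the package name and URL are illustrative, not real results):

```python
class MiniExcuse:
    """Reduced stand-in for the Excuse class, showing only the tests
    bookkeeping added in this diff."""

    def __init__(self, name):
        self.name = name
        # type (e.g. "autopkgtest") -> package (e.g. "foo 2-1") -> arch ->
        # [state, url]
        self.tests = {}

    def addtest(self, type_, package, arch, state, url):
        """Add a test result, exactly as Excuse.addtest() above does."""
        self.tests.setdefault(type_, {}).setdefault(package, {})[arch] = \
            [state, url]

e = MiniExcuse('glib2.0')
# hypothetical result URL
e.addtest('autopkgtest', 'glib2.0 2.46-1', 'amd64', 'RUNNING',
          'http://autopkgtest.example.com/running')
print(e.tests['autopkgtest']['glib2.0 2.46-1']['amd64'][0])
```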
2608 | === added file 'run-autopkgtest' |
2609 | --- run-autopkgtest 1970-01-01 00:00:00 +0000 |
2610 | +++ run-autopkgtest 2015-11-17 11:05:43 +0000 |
2611 | @@ -0,0 +1,78 @@ |
2612 | +#!/usr/bin/python3 |
2613 | +# Request re-runs of autopkgtests for packages |
2614 | + |
2615 | +import os |
2616 | +import sys |
2617 | +import argparse |
2618 | +import json |
2619 | + |
2620 | +import kombu |
2621 | + |
2622 | +my_dir = os.path.dirname(os.path.realpath(sys.argv[0])) |
2623 | + |
2624 | + |
2625 | +def parse_args(): |
2626 | + '''Parse command line arguments''' |
2627 | + |
2628 | + parser = argparse.ArgumentParser() |
2629 | + parser.add_argument('-c', '--config', |
2630 | + default=os.path.join(my_dir, 'britney.conf'), |
2631 | + help='britney config file (default: %(default)s)') |
2632 | + parser.add_argument('-s', '--series', required=True, |
2633 | + help='Distro series name (required).') |
2634 | + parser.add_argument('-a', '--architecture', action='append', default=[], |
2635 | + help='Only run test(s) on given architecture name(s). ' |
2636 | + 'Can be specified multiple times (default: all).') |
2637 | + parser.add_argument('--trigger', action='append', default=[], |
2638 | + metavar='SOURCE/VERSION', required=True, |
2639 | + help='Add triggering package to request. ' |
2640 | + 'Can be specified multiple times.') |
2641 | + parser.add_argument('--ppa', metavar='LPUSER/PPANAME', |
2642 | + help='Enable PPA for requested test(s)') |
2643 | + parser.add_argument('package', nargs='+', |
2644 | + help='Source package name(s) whose tests to run.') |
2645 | + args = parser.parse_args() |
2646 | + |
2647 | + # verify syntax of triggers |
2648 | + for t in args.trigger: |
2649 | + try: |
2650 | + (src, ver) = t.split('/') |
2651 | + except ValueError: |
2652 | + parser.error('Invalid trigger format "%s", must be "sourcepkg/version"' % t) |
2653 | + |
2654 | + return args |
2655 | + |
2656 | + |
2657 | +def parse_config(config_file): |
2658 | + '''Parse config file (like britney.py)''' |
2659 | + |
2660 | + config = argparse.Namespace() |
2661 | + with open(config_file) as f: |
2662 | + for k, v in [r.split('=', 1) for r in f if '=' in r and not r.strip().startswith('#')]: |
2663 | + k = k.strip() |
2664 | + if not getattr(config, k.lower(), None): |
2665 | + setattr(config, k.lower(), v.strip()) |
2666 | + return config |
2667 | + |
2668 | + |
2669 | +if __name__ == '__main__': |
2670 | + args = parse_args() |
2671 | + config = parse_config(args.config) |
2672 | + if not args.architecture: |
2673 | + args.architecture = config.adt_arches.split() |
2674 | + |
2675 | + params = {} |
2676 | + if args.trigger: |
2677 | + params['triggers'] = args.trigger |
2678 | + if args.ppa: |
2679 | + params['ppa'] = args.ppa |
2680 | + params = '\n' + json.dumps(params) |
2681 | + |
2682 | + with kombu.Connection(config.adt_amqp) as conn: |
2683 | + for arch in args.architecture: |
2684 | + # don't use SimpleQueue here as it always declares queues; |
2685 | + # ACLs might not allow that |
2686 | + with kombu.Producer(conn, routing_key='debci-%s-%s' % (args.series, arch), |
2687 | + auto_declare=False) as p: |
2688 | + for pkg in args.package: |
2689 | + p.publish(pkg + params) |
2690 | |
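`run-autopkgtest`'s `parse_config()` above does line-oriented `KEY = VALUE` parsing: comment lines are skipped, keys are lower-cased, and the first assignment of a key wins. A small usage sketch with a throwaway config file:

```python
import argparse
import os
import tempfile

def parse_config(config_file):
    """Same parsing strategy as run-autopkgtest's parse_config() above."""
    config = argparse.Namespace()
    with open(config_file) as f:
        for k, v in [r.split('=', 1) for r in f
                     if '=' in r and not r.strip().startswith('#')]:
            k = k.strip()
            if not getattr(config, k.lower(), None):
                setattr(config, k.lower(), v.strip())
    return config

# write a throwaway config with values mirroring the ones in this diff
with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
    f.write('# comment\nADT_ARCHES = amd64 i386 armhf ppc64el\n'
            'ADT_ENABLE = yes\n')
    path = f.name
cfg = parse_config(path)
print(cfg.adt_arches)
os.unlink(path)
```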
2691 | === added directory 'tests' |
2692 | === added file 'tests/__init__.py' |
2693 | --- tests/__init__.py 1970-01-01 00:00:00 +0000 |
2694 | +++ tests/__init__.py 2015-11-17 11:05:43 +0000 |
2695 | @@ -0,0 +1,184 @@ |
2696 | +# (C) 2015 Canonical Ltd. |
2697 | +# |
2698 | +# This program is free software; you can redistribute it and/or modify |
2699 | +# it under the terms of the GNU General Public License as published by |
2700 | +# the Free Software Foundation; either version 2 of the License, or |
2701 | +# (at your option) any later version. |
2702 | + |
2703 | +import os |
2704 | +import shutil |
2705 | +import subprocess |
2706 | +import tempfile |
2707 | +import unittest |
2708 | + |
2709 | +PROJECT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) |
2710 | + |
2711 | +architectures = ['amd64', 'arm64', 'armhf', 'i386', 'powerpc', 'ppc64el'] |
2712 | + |
2713 | + |
2714 | +class TestData: |
2715 | + |
2716 | + def __init__(self): |
2717 | + '''Construct local test package indexes. |
2718 | + |
2719 | + The archive is initially empty. You can create new packages with |
2720 | + add(). self.path contains the path of the archive, and |
2721 | + self.apt_source provides an apt source "deb" line. |
2722 | + |
2723 | + It is kept in a temporary directory which gets removed when the |
2724 | + TestData object gets deleted. |
2725 | + ''' |
2726 | + self.path = tempfile.mkdtemp(prefix='testarchive.') |
2727 | + self.apt_source = 'deb file://%s /' % self.path |
2728 | + self.series = 'series' |
2729 | + self.dirs = {False: os.path.join(self.path, 'data', self.series), |
2730 | + True: os.path.join( |
2731 | + self.path, 'data', '%s-proposed' % self.series)} |
2732 | + os.makedirs(self.dirs[False]) |
2733 | + os.mkdir(self.dirs[True]) |
2734 | + self.added_sources = {False: set(), True: set()} |
2735 | + self.added_binaries = {False: set(), True: set()} |
2736 | + |
2737 | + # pre-create all files for all architectures |
2738 | + for arch in architectures: |
2739 | + for dir in self.dirs.values(): |
2740 | + with open(os.path.join(dir, 'Packages_' + arch), 'w'): |
2741 | + pass |
2742 | + for dir in self.dirs.values(): |
2743 | + for fname in ['Dates', 'Blocks']: |
2744 | + with open(os.path.join(dir, fname), 'w'): |
2745 | + pass |
2746 | + for dname in ['Hints']: |
2747 | + os.mkdir(os.path.join(dir, dname)) |
2748 | + |
2749 | + os.mkdir(os.path.join(self.path, 'output')) |
2750 | + |
2751 | + # create temporary home dir for proposed-migration autopkgtest status |
2752 | + self.home = os.path.join(self.path, 'home') |
2753 | + os.environ['HOME'] = self.home |
2754 | + os.makedirs(os.path.join(self.home, 'proposed-migration', |
2755 | + 'autopkgtest', 'work')) |
2756 | + |
2757 | + def __del__(self): |
2758 | + shutil.rmtree(self.path) |
2759 | + |
2760 | + def add(self, name, unstable, fields={}, add_src=True, testsuite=None): |
2761 | + '''Add a binary package to the index file. |
2762 | + |
2763 | + You need to specify at least the package name and in which list to put |
2764 | + it (unstable==True for unstable/proposed, or False for |
2765 | + testing/release). fields specifies all additional entries, e. g. |
2766 | + {'Depends': 'foo, bar', 'Conflicts: baz'}. There are defaults for most |
2767 | + fields. |
2768 | + |
2769 | + Unless add_src is set to False, this will also automatically create a |
2770 | + source record, based on fields['Source'] and name. In that case, the |
2771 | + "Testsuite:" field is set to the testsuite argument. |
2772 | + ''' |
2773 | + assert (name not in self.added_binaries[unstable]) |
2774 | + self.added_binaries[unstable].add(name) |
2775 | + |
2776 | + fields.setdefault('Architecture', 'all') |
2777 | + fields.setdefault('Version', '1') |
2778 | + fields.setdefault('Priority', 'optional') |
2779 | + fields.setdefault('Section', 'devel') |
2780 | + fields.setdefault('Description', 'test pkg') |
2781 | + if fields['Architecture'] == 'all': |
2782 | + for a in architectures: |
2783 | + self._append(name, unstable, 'Packages_' + a, fields) |
2784 | + else: |
2785 | + self._append(name, unstable, 'Packages_' + fields['Architecture'], |
2786 | + fields) |
2787 | + |
2788 | + if add_src: |
2789 | + src = fields.get('Source', name) |
2790 | + if src not in self.added_sources[unstable]: |
2791 | + srcfields = {'Version': fields['Version'], |
2792 | + 'Section': fields['Section']} |
2793 | + if testsuite: |
2794 | + srcfields['Testsuite'] = testsuite |
2795 | + self.add_src(src, unstable, srcfields) |
2796 | + |
2797 | + def add_src(self, name, unstable, fields={}): |
2798 | + '''Add a source package to the index file. |
2799 | + |
2800 | + You need to specify at least the package name and in which list to put |
2801 | + it (unstable==True for unstable/proposed, or False for |
2802 | + testing/release). fields specifies all additional entries, which can be |
2803 | + Version (default: 1), Section (default: devel), Testsuite (default: |
2804 | + none), and Extra-Source-Only. |
2805 | + ''' |
2806 | + assert (name not in self.added_sources[unstable]) |
2807 | + self.added_sources[unstable].add(name) |
2808 | + |
2809 | + fields.setdefault('Version', '1') |
2810 | + fields.setdefault('Section', 'devel') |
2811 | + self._append(name, unstable, 'Sources', fields) |
2812 | + |
2813 | + def _append(self, name, unstable, file_name, fields): |
2814 | + with open(os.path.join(self.dirs[unstable], file_name), 'a') as f: |
2815 | + f.write('''Package: %s |
2816 | +Maintainer: Joe <joe@example.com> |
2817 | +''' % name) |
2818 | + |
2819 | + for k, v in fields.items(): |
2820 | + f.write('%s: %s\n' % (k, v)) |
2821 | + f.write('\n') |
2822 | + |
2823 | + def remove_all(self, unstable): |
2824 | + '''Remove all added packages''' |
2825 | + |
2826 | + self.added_binaries[unstable] = set() |
2827 | + self.added_sources[unstable] = set() |
2828 | + for a in architectures: |
2829 | + open(os.path.join(self.dirs[unstable], 'Packages_' + a), 'w').close() |
2830 | + open(os.path.join(self.dirs[unstable], 'Sources'), 'w').close() |
2831 | + |
2832 | + |
2833 | +class TestBase(unittest.TestCase): |
2834 | + |
2835 | + def setUp(self): |
2836 | + super(TestBase, self).setUp() |
2837 | + self.data = TestData() |
2838 | + self.britney = os.path.join(PROJECT_DIR, 'britney.py') |
2839 | + # create temporary config so that tests can hack it |
2840 | + self.britney_conf = os.path.join(self.data.path, 'britney.conf') |
2841 | + shutil.copy(os.path.join(PROJECT_DIR, 'britney.conf'), self.britney_conf) |
2842 | + assert os.path.exists(self.britney) |
2843 | + |
2844 | + def tearDown(self): |
2845 | + del self.data |
2846 | + |
2847 | + def run_britney(self, args=[]): |
2848 | + '''Run britney. |
2849 | + |
2850 | + Assert that it succeeds and does not produce anything on stderr. |
2851 | + Return (excuses.yaml, excuses.html, britney_out). |
2852 | + ''' |
2853 | + britney = subprocess.Popen([self.britney, '-v', '-c', self.britney_conf, |
2854 | + '--distribution=ubuntu', |
2855 | + '--series=%s' % self.data.series], |
2856 | + stdout=subprocess.PIPE, |
2857 | + stderr=subprocess.PIPE, |
2858 | + cwd=self.data.path, |
2859 | + universal_newlines=True) |
2860 | + (out, err) = britney.communicate() |
2861 | + self.assertEqual(britney.returncode, 0, out + err) |
2862 | + self.assertEqual(err, '') |
2863 | + |
2864 | + with open(os.path.join(self.data.path, 'output', self.data.series, |
2865 | + 'excuses.yaml')) as f: |
2866 | + yaml = f.read() |
2867 | + with open(os.path.join(self.data.path, 'output', self.data.series, |
2868 | + 'excuses.html')) as f: |
2869 | + html = f.read() |
2870 | + |
2871 | + return (yaml, html, out) |
2872 | + |
2873 | + def create_hint(self, username, content): |
2874 | + '''Create a hint file for the given username and content''' |
2875 | + |
2876 | + hints_path = os.path.join( |
2877 | + self.data.path, 'data', self.data.series + '-proposed', 'Hints', username) |
2878 | + with open(hints_path, 'w') as fd: |
2879 | + fd.write(content) |
2880 | |
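The `_append` helper above emits minimal Packages/Sources stanzas: `Key: value` lines, one stanza per package, separated by blank lines. For reference, here is a minimal sketch of the inverse operation for inspecting the fixture files it writes. `parse_stanzas` is a hypothetical helper, not part of this branch, and it deliberately ignores the multi-line continuation fields that real archive indices contain:

```python
def parse_stanzas(text):
    """Parse simple 'Key: value' stanzas separated by blank lines,
    as written by TestData._append, into a list of dicts.

    Caveat: real Packages/Sources files allow continuation lines
    (leading whitespace); this sketch does not handle them.
    """
    stanzas = []
    for block in text.strip().split('\n\n'):
        fields = {}
        for line in block.splitlines():
            # split on the first ': ' only, so values may contain colons
            key, _, value = line.partition(': ')
            fields[key] = value
        stanzas.append(fields)
    return stanzas

sample = ('Package: green\n'
          'Maintainer: Joe <joe@example.com>\n'
          'Version: 2\n'
          '\n'
          'Package: blue\n'
          'Version: 1\n')
print(parse_stanzas(sample))
```

This round-trips the exact shape of stanza that `_append` produces, which is handy when a test fails and you want to diff the generated index files against expectations.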
2881 | === added file 'tests/mock_swift.py' |
2882 | --- tests/mock_swift.py 1970-01-01 00:00:00 +0000 |
2883 | +++ tests/mock_swift.py 2015-11-17 11:05:43 +0000 |
2884 | @@ -0,0 +1,170 @@ |
2885 | +# Mock a Swift server with autopkgtest results |
2886 | +# Author: Martin Pitt <martin.pitt@ubuntu.com> |
2887 | + |
2888 | +import os |
2889 | +import tarfile |
2890 | +import io |
2891 | +import sys |
2892 | +import socket |
2893 | +import time |
2894 | +import tempfile |
2895 | +import json |
2896 | + |
2897 | +try: |
2898 | + from http.server import HTTPServer, BaseHTTPRequestHandler |
2899 | + from urllib.parse import urlparse, parse_qs |
2900 | +except ImportError: |
2901 | + # Python 2 |
2902 | + from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler |
2903 | + from urlparse import urlparse, parse_qs |
2904 | + |
2905 | + |
2906 | +class SwiftHTTPRequestHandler(BaseHTTPRequestHandler): |
2907 | + '''Mock swift container with autopkgtest results |
2908 | + |
2909 | + This accepts retrieving a particular result.tar (e.g. |
2910 | + /container/path/result.tar) or listing the container contents |
2911 | + (/container/?prefix=foo&delimiter=@&marker=foo/bar). |
2912 | + ''' |
2913 | + # map container -> result.tar path -> (exitcode, testpkg-version[, testinfo]) |
2914 | + results = {} |
2915 | + |
2916 | + def do_GET(self): |
2917 | + p = urlparse(self.path) |
2918 | + path_comp = p.path.split('/') |
2919 | + container = path_comp[1] |
2920 | + path = '/'.join(path_comp[2:]) |
2921 | + if path: |
2922 | + self.serve_file(container, path) |
2923 | + else: |
2924 | + self.list_container(container, parse_qs(p.query)) |
2925 | + |
2926 | + def serve_file(self, container, path): |
2927 | + if os.path.basename(path) != 'result.tar': |
2928 | + self.send_error(404, 'File not found (only result.tar supported)') |
2929 | + return |
2930 | + try: |
2931 | + fields = self.results[container][os.path.dirname(path)] |
2932 | + try: |
2933 | + (exitcode, pkgver, testinfo) = fields |
2934 | + except ValueError: |
2935 | + (exitcode, pkgver) = fields |
2936 | + testinfo = None |
2937 | + except KeyError: |
2938 | + self.send_error(404, 'File not found') |
2939 | + return |
2940 | + |
2941 | + self.send_response(200) |
2942 | + self.send_header('Content-type', 'application/octet-stream') |
2943 | + self.end_headers() |
2944 | + |
2945 | + tar = io.BytesIO() |
2946 | + with tarfile.open('result.tar', 'w', tar) as results: |
2947 | + # add exitcode |
2948 | + contents = ('%i' % exitcode).encode() |
2949 | + ti = tarfile.TarInfo('exitcode') |
2950 | + ti.size = len(contents) |
2951 | + results.addfile(ti, io.BytesIO(contents)) |
2952 | + # add testpkg-version |
2953 | + if pkgver is not None: |
2954 | + contents = pkgver.encode() |
2955 | + ti = tarfile.TarInfo('testpkg-version') |
2956 | + ti.size = len(contents) |
2957 | + results.addfile(ti, io.BytesIO(contents)) |
2958 | + # add testinfo.json |
2959 | + if testinfo: |
2960 | + contents = json.dumps(testinfo).encode() |
2961 | + ti = tarfile.TarInfo('testinfo.json') |
2962 | + ti.size = len(contents) |
2963 | + results.addfile(ti, io.BytesIO(contents)) |
2964 | + |
2965 | + self.wfile.write(tar.getvalue()) |
2966 | + |
2967 | + def list_container(self, container, query): |
2968 | + try: |
2969 | + objs = set(['%s/result.tar' % r for r in self.results[container]]) |
2970 | + except KeyError: |
2971 | + self.send_error(404, 'Container does not exist') |
2972 | + return |
2973 | + if 'prefix' in query: |
2974 | + p = query['prefix'][-1] |
2975 | + objs = set([o for o in objs if o.startswith(p)]) |
2976 | + if 'delimiter' in query: |
2977 | + d = query['delimiter'][-1] |
2978 | + # if find() returns a value, we want to include the delimiter, thus |
2979 | + # bump its result; for "not found" return None |
2980 | + find_adapter = lambda i: (i >= 0) and (i + 1) or None |
2981 | + objs = set([o[:find_adapter(o.find(d))] for o in objs]) |
2982 | + if 'marker' in query: |
2983 | + m = query['marker'][-1] |
2984 | + objs = set([o for o in objs if o > m]) |
2985 | + |
2986 | + self.send_response(objs and 200 or 204) # 204: "No Content" |
2987 | + self.send_header('Content-type', 'text/plain') |
2988 | + self.end_headers() |
2989 | + self.wfile.write(('\n'.join(sorted(objs)) + '\n').encode('UTF-8')) |
2990 | + |
2991 | + |
2992 | +class AutoPkgTestSwiftServer: |
2993 | + def __init__(self, port=8080): |
2994 | + self.port = port |
2995 | + self.server_pid = None |
2996 | + self.log = None |
2997 | + |
2998 | + def __del__(self): |
2999 | + if self.server_pid: |
3000 | + self.stop() |
3001 | + |
3002 | + @classmethod |
3003 | + def set_results(klass, results): |
3004 | + '''Set served results. |
3005 | + |
3006 | + results is a map: container -> result.tar path -> |
3007 | + (exitcode, testpkg-version, testinfo) |
3008 | + ''' |
3009 | + SwiftHTTPRequestHandler.results = results |
3010 | + |
3011 | + def start(self): |
3012 | + assert self.server_pid is None, 'already started' |
3013 | + if self.log: |
3014 | + self.log.close() |
3015 | + self.log = tempfile.TemporaryFile() |
3016 | + p = os.fork() |
3017 | + if p: |
3018 | + # parent: wait until server starts |
3019 | + self.server_pid = p |
3020 | + s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) |
3021 | + while True: |
3022 | + if s.connect_ex(('127.0.0.1', self.port)) == 0: |
3023 | + break |
3024 | + time.sleep(0.1) |
3025 | + s.close() |
3026 | + return |
3027 | + |
3028 | + # child; quiesce logging on stderr |
3029 | + os.dup2(self.log.fileno(), sys.stderr.fileno()) |
3030 | + srv = HTTPServer(('', self.port), SwiftHTTPRequestHandler) |
3031 | + srv.serve_forever() |
3032 | + sys.exit(0) |
3033 | + |
3034 | + def stop(self): |
3035 | + assert self.server_pid, 'not running' |
3036 | + os.kill(self.server_pid, 15) |
3037 | + os.waitpid(self.server_pid, 0) |
3038 | + self.server_pid = None |
3039 | + self.log.close() |
3040 | + |
3041 | +if __name__ == '__main__': |
3042 | + srv = AutoPkgTestSwiftServer() |
3043 | + srv.set_results({'autopkgtest-series': { |
3044 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1'), |
3045 | + 'series/i386/g/green/20150101_100000@': (0, 'green 1', {'custom_environment': ['ADT_TEST_TRIGGERS=green']}), |
3046 | + 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1'), |
3047 | + 'series/i386/l/lightgreen/20150101_100101@': (4, 'lightgreen 2'), |
3048 | + 'series/i386/l/lightgreen/20150101_100102@': (0, 'lightgreen 3'), |
3049 | + }}) |
3050 | + srv.start() |
3051 | + print('Running on http://localhost:8080/autopkgtest-series') |
3052 | + print('Press Enter to quit.') |
3053 | + sys.stdin.readline() |
3054 | + srv.stop() |
3055 | |
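`list_container` above implements the three Swift listing query parameters the autopkgtest code relies on: `prefix` filters names, `delimiter` collapses names at the first delimiter (keeping it, to emulate pseudo-directories), and `marker` resumes a listing after a given name. A standalone sketch of the same filtering logic — note it mirrors the mock and searches for the delimiter from the start of the name, rather than only after the prefix as real Swift does:

```python
def list_objects(objs, prefix=None, delimiter=None, marker=None):
    """Filter object names the way the mock's list_container does.

    prefix:    keep only names starting with it
    delimiter: truncate each name just past the first delimiter, if any
    marker:    keep only names sorting strictly after it
    """
    if prefix is not None:
        objs = {o for o in objs if o.startswith(prefix)}
    if delimiter is not None:
        objs = {o[:o.find(delimiter) + 1] if delimiter in o else o
                for o in objs}
    if marker is not None:
        objs = {o for o in objs if o > marker}
    return sorted(objs)

objs = {'series/i386/g/green/20150101_100000@/result.tar',
        'series/i386/g/green/20150102_100000@/result.tar'}
print(list_objects(objs, prefix='series/i386/g/', delimiter='@'))
```

With `delimiter='@'` the two `result.tar` names collapse to their run directories, which is exactly how `fetch_results`-style code enumerates test runs without downloading each tarball.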
3056 | === added file 'tests/test_autopkgtest.py' |
3057 | --- tests/test_autopkgtest.py 1970-01-01 00:00:00 +0000 |
3058 | +++ tests/test_autopkgtest.py 2015-11-17 11:05:43 +0000 |
3059 | @@ -0,0 +1,1586 @@ |
3060 | +#!/usr/bin/python3 |
3061 | +# (C) 2014 - 2015 Canonical Ltd. |
3062 | +# |
3063 | +# This program is free software; you can redistribute it and/or modify |
3064 | +# it under the terms of the GNU General Public License as published by |
3065 | +# the Free Software Foundation; either version 2 of the License, or |
3066 | +# (at your option) any later version. |
3067 | + |
3068 | +from textwrap import dedent |
3069 | + |
3070 | +import apt_pkg |
3071 | +import os |
3072 | +import sys |
3073 | +import fileinput |
3074 | +import unittest |
3075 | +import json |
3076 | +import pprint |
3077 | + |
3078 | +import yaml |
3079 | + |
3080 | +PROJECT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) |
3081 | +sys.path.insert(0, PROJECT_DIR) |
3082 | + |
3083 | +from tests import TestBase, mock_swift |
3084 | + |
3085 | +apt_pkg.init() |
3086 | + |
3087 | + |
3088 | +# shortcut for test triggers |
3089 | +def tr(s): |
3090 | + return {'custom_environment': ['ADT_TEST_TRIGGERS=%s' % s]} |
3091 | + |
3092 | + |
3093 | +class TestAutoPkgTest(TestBase): |
3094 | + '''AMQP/cloud interface''' |
3095 | + |
3096 | + ################################################################ |
3097 | + # Common test code |
3098 | + ################################################################ |
3099 | + |
3100 | + def setUp(self): |
3101 | + super(TestAutoPkgTest, self).setUp() |
3102 | + self.fake_amqp = os.path.join(self.data.path, 'amqp') |
3103 | + |
3104 | + # Disable boottests and set fake AMQP and Swift server |
3105 | + for line in fileinput.input(self.britney_conf, inplace=True): |
3106 | + if 'BOOTTEST_ENABLE' in line: |
3107 | + print('BOOTTEST_ENABLE = no') |
3108 | + elif 'ADT_AMQP' in line: |
3109 | + print('ADT_AMQP = file://%s' % self.fake_amqp) |
3110 | + elif 'ADT_SWIFT_URL' in line: |
3111 | + print('ADT_SWIFT_URL = http://localhost:18085') |
3112 | + elif 'ADT_ARCHES' in line: |
3113 | + print('ADT_ARCHES = amd64 i386') |
3114 | + else: |
3115 | + sys.stdout.write(line) |
3116 | + |
3117 | + # add a bunch of packages to testing to avoid repetition |
3118 | + self.data.add('libc6', False) |
3119 | + self.data.add('libgreen1', False, {'Source': 'green', |
3120 | + 'Depends': 'libc6 (>= 0.9)'}, |
3121 | + testsuite='autopkgtest') |
3122 | + self.data.add('green', False, {'Depends': 'libc6 (>= 0.9), libgreen1', |
3123 | + 'Conflicts': 'blue'}, |
3124 | + testsuite='autopkgtest') |
3125 | + self.data.add('lightgreen', False, {'Depends': 'libgreen1'}, |
3126 | + testsuite='autopkgtest') |
3127 | + self.data.add('darkred', False, {}, |
3128 | + testsuite='autopkgtest') |
3129 | + # autodep8 or similar test |
3130 | + self.data.add('darkgreen', False, {'Depends': 'libgreen1'}, |
3131 | + testsuite='autopkgtest-pkg-foo') |
3132 | + self.data.add('blue', False, {'Depends': 'libc6 (>= 0.9)', |
3133 | + 'Conflicts': 'green'}, |
3134 | + testsuite='specialtest') |
3135 | + |
3136 | + # create mock Swift server (but don't start it yet, as tests first need |
3137 | + # to poke in results) |
3138 | + self.swift = mock_swift.AutoPkgTestSwiftServer(port=18085) |
3139 | + self.swift.set_results({}) |
3140 | + |
3141 | + def tearDown(self): |
3142 | + del self.swift |
3143 | + |
3144 | + def do_test(self, unstable_add, expect_status, expect_excuses={}): |
3145 | + '''Run britney with some unstable packages and verify excuses. |
3146 | + |
3147 | + unstable_add is a list of (binpkgname, field_dict, testsuite_value) |
3148 | + passed to TestData.add for "unstable". |
3149 | + |
3150 | + expect_status is a dict sourcename → (is_candidate, testsrc → arch → status) |
3151 | + that is checked against the excuses YAML. |
3152 | + |
3153 | + expect_excuses is a dict sourcename → [(key, value), ...] |
3154 | + matches that are checked against the excuses YAML. |
3155 | + |
3156 | + Return (output, excuses_dict). |
3157 | + ''' |
3158 | + for (pkg, fields, testsuite) in unstable_add: |
3159 | + self.data.add(pkg, True, fields, True, testsuite) |
3160 | + |
3161 | + self.swift.start() |
3162 | + (excuses_yaml, excuses_html, out) = self.run_britney() |
3163 | + self.swift.stop() |
3164 | + |
3165 | + # convert excuses to source indexed dict |
3166 | + excuses_dict = {} |
3167 | + for s in yaml.load(excuses_yaml)['sources']: |
3168 | + excuses_dict[s['source']] = s |
3169 | + |
3170 | + if 'SHOW_EXCUSES' in os.environ: |
3171 | + print('------- excuses -----') |
3172 | + pprint.pprint(excuses_dict, width=200) |
3173 | + if 'SHOW_HTML' in os.environ: |
3174 | + print('------- excuses.html -----\n%s\n' % excuses_html) |
3175 | + if 'SHOW_OUTPUT' in os.environ: |
3176 | + print('------- output -----\n%s\n' % out) |
3177 | + |
3178 | + for src, (is_candidate, testmap) in expect_status.items(): |
3179 | + self.assertEqual(excuses_dict[src]['is-candidate'], is_candidate, |
3180 | + src + ': ' + pprint.pformat(excuses_dict[src])) |
3181 | + for testsrc, archmap in testmap.items(): |
3182 | + for arch, status in archmap.items(): |
3183 | + self.assertEqual(excuses_dict[src]['tests']['autopkgtest'][testsrc][arch][0], |
3184 | + status, |
3185 | + excuses_dict[src]['tests']['autopkgtest'][testsrc]) |
3186 | + |
3187 | + for src, matches in expect_excuses.items(): |
3188 | + for k, v in matches: |
3189 | + if isinstance(excuses_dict[src][k], list): |
3190 | + self.assertIn(v, excuses_dict[src][k]) |
3191 | + else: |
3192 | + self.assertEqual(excuses_dict[src][k], v) |
3193 | + |
3194 | + self.amqp_requests = set() |
3195 | + try: |
3196 | + with open(self.fake_amqp) as f: |
3197 | + for line in f: |
3198 | + self.amqp_requests.add(line.strip()) |
3199 | + os.unlink(self.fake_amqp) |
3200 | + except IOError: |
3201 | + pass |
3202 | + |
3203 | + try: |
3204 | + with open(os.path.join(self.data.path, 'data/series-proposed/autopkgtest/pending.txt')) as f: |
3205 | + self.pending_requests = f.read() |
3206 | + except IOError: |
3207 | + self.pending_requests = None |
3208 | + |
3209 | + self.assertNotIn('FIXME', out) |
3210 | + |
3211 | + return (out, excuses_dict) |
3212 | + |
3213 | + ################################################################ |
3214 | + # Tests for generic packages |
3215 | + ################################################################ |
3216 | + |
3217 | + def test_no_request_for_uninstallable(self): |
3218 | + '''Does not request a test for an uninstallable package''' |
3219 | + |
3220 | + exc = self.do_test( |
3221 | + # uninstallable unstable version |
3222 | + [('lightgreen', {'Version': '1.1~beta', 'Depends': 'libc6 (>= 0.9), libgreen1 (>= 2)'}, 'autopkgtest')], |
3223 | + {'lightgreen': (False, {})}, |
3224 | + {'lightgreen': [('old-version', '1'), ('new-version', '1.1~beta'), |
3225 | + ('reason', 'depends'), |
3226 | + ('excuses', 'lightgreen/amd64 unsatisfiable Depends: libgreen1 (>= 2)') |
3227 | + ] |
3228 | + })[1] |
3229 | + # autopkgtest should not be triggered for uninstallable pkg |
3230 | + self.assertEqual(exc['lightgreen']['tests'], {}) |
3231 | + |
3232 | + self.assertEqual(self.pending_requests, '') |
3233 | + self.assertEqual(self.amqp_requests, set()) |
3234 | + |
3235 | + def test_no_wait_for_always_failed_test(self): |
3236 | + '''We do not need to wait for results for tests which have always |
3237 | + failed, but the tests should still be triggered.''' |
3238 | + |
3239 | + self.swift.set_results({'autopkgtest-series': { |
3240 | + 'series/i386/d/darkred/20150101_100000@': (4, 'darkred 1'), |
3241 | + }}) |
3242 | + |
3243 | + exc = self.do_test( |
3244 | + [('darkred', {'Version': '2'}, 'autopkgtest')], |
3245 | + {'darkred': (True, {'darkred 2': {'i386': 'RUNNING'}})}, |
3246 | + {'darkred': [('old-version', '1'), ('new-version', '2'), |
3247 | + ('excuses', 'Valid candidate') |
3248 | + ] |
3249 | + })[1] |
3250 | + |
3251 | + self.assertEqual(exc['darkred']['tests'], {'autopkgtest': |
3252 | + {'darkred 2': { |
3253 | + 'amd64': ['RUNNING', |
3254 | + 'http://autopkgtest.ubuntu.com/packages/d/darkred/series/amd64'], |
3255 | + 'i386': ['RUNNING', |
3256 | + 'http://autopkgtest.ubuntu.com/packages/d/darkred/series/i386']}}}) |
3257 | + |
3258 | + self.assertEqual( |
3259 | + self.pending_requests, dedent('''\ |
3260 | + darkred 2 amd64 darkred 2 |
3261 | + darkred 2 i386 darkred 2 |
3262 | + ''')) |
3263 | + self.assertEqual( |
3264 | + self.amqp_requests, |
3265 | + set(['debci-series-amd64:darkred {"triggers": ["darkred/2"]}', |
3266 | + 'debci-series-i386:darkred {"triggers": ["darkred/2"]}'])) |
3267 | + |
3268 | + |
3269 | + def test_multi_rdepends_with_tests_all_running(self): |
3270 | + '''Multiple reverse dependencies with tests (all running)''' |
3271 | + |
3272 | + # green has passed before |
3273 | + self.swift.set_results({'autopkgtest-series': { |
3274 | + 'series/i386/g/green/20150101_100000@': (0, 'green 1'), |
3275 | + }}) |
3276 | + |
3277 | + self.do_test( |
3278 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
3279 | + {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3280 | + 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3281 | + 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3282 | + }) |
3283 | + }, |
3284 | + {'green': [('old-version', '1'), ('new-version', '2')]}) |
3285 | + |
3286 | + # we expect the package's and its reverse dependencies' tests to get |
3287 | + # triggered |
3288 | + self.assertEqual( |
3289 | + self.amqp_requests, |
3290 | + set(['debci-series-i386:green {"triggers": ["green/2"]}', |
3291 | + 'debci-series-amd64:green {"triggers": ["green/2"]}', |
3292 | + 'debci-series-i386:lightgreen {"triggers": ["green/2"]}', |
3293 | + 'debci-series-amd64:lightgreen {"triggers": ["green/2"]}', |
3294 | + 'debci-series-i386:darkgreen {"triggers": ["green/2"]}', |
3295 | + 'debci-series-amd64:darkgreen {"triggers": ["green/2"]}'])) |
3296 | + |
3297 | + # ... and that they get recorded as pending |
3298 | + expected_pending = '''darkgreen 1 amd64 green 2 |
3299 | +darkgreen 1 i386 green 2 |
3300 | +green 2 amd64 green 2 |
3301 | +green 2 i386 green 2 |
3302 | +lightgreen 1 amd64 green 2 |
3303 | +lightgreen 1 i386 green 2 |
3304 | +''' |
3305 | + self.assertEqual(self.pending_requests, expected_pending) |
3306 | + |
3307 | + # if we run britney again this should *not* trigger any new tests |
3308 | + self.do_test([], {'green': (False, {})}) |
3309 | + self.assertEqual(self.amqp_requests, set()) |
3310 | + # but the set of pending tests doesn't change |
3311 | + self.assertEqual(self.pending_requests, expected_pending) |
3312 | + |
3313 | + def test_multi_rdepends_with_tests_all_pass(self): |
3314 | + '''Multiple reverse dependencies with tests (all pass)''' |
3315 | + |
3316 | + # green has passed before |
3317 | + self.swift.set_results({'autopkgtest-series': { |
3318 | + 'series/i386/g/green/20150101_100000@': (0, 'green 1'), |
3319 | + }}) |
3320 | + |
3321 | + # first run requests tests and marks them as pending |
3322 | + self.do_test( |
3323 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
3324 | + {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3325 | + 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3326 | + 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3327 | + }) |
3328 | + }, |
3329 | + {'green': [('old-version', '1'), ('new-version', '2')]}) |
3330 | + |
3331 | + # second run collects the results |
3332 | + self.swift.set_results({'autopkgtest-series': { |
3333 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
3334 | + 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/2')), |
3335 | + 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/2')), |
3336 | + 'series/amd64/l/lightgreen/20150101_100101@': (0, 'lightgreen 1', tr('green/2')), |
3337 | + # version in testing fails |
3338 | + 'series/i386/g/green/20150101_020000@': (4, 'green 1', tr('green/1')), |
3339 | + 'series/amd64/g/green/20150101_020000@': (4, 'green 1', tr('green/1')), |
3340 | + # version in unstable succeeds |
3341 | + 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')), |
3342 | + 'series/amd64/g/green/20150101_100201@': (0, 'green 2', tr('green/2')), |
3343 | + }}) |
3344 | + |
3345 | + out = self.do_test( |
3346 | + [], |
3347 | + {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}, |
3348 | + 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3349 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3350 | + }) |
3351 | + }, |
3352 | + {'green': [('old-version', '1'), ('new-version', '2')]} |
3353 | + )[0] |
3354 | + |
3355 | + # all tests ran, there should be no more pending ones |
3356 | + self.assertEqual(self.pending_requests, '') |
3357 | + |
3358 | + # not expecting any failures to retrieve from swift |
3359 | + self.assertNotIn('Failure', out, out) |
3360 | + |
3361 | + # caches the results and triggers |
3362 | + with open(os.path.join(self.data.path, 'data/series-proposed/autopkgtest/results.cache')) as f: |
3363 | + res = json.load(f) |
3364 | + self.assertEqual(res['green']['i386'], |
3365 | + ['20150101_100200@', |
3366 | + {'1': {}, '2': {'green/2': True}}, |
3367 | + True]) |
3368 | + self.assertEqual(res['lightgreen']['amd64'], |
3369 | + ['20150101_100101@', |
3370 | + {'1': {'green/2': True}}, |
3371 | + True]) |
3372 | + |
3373 | + # third run should not trigger any new tests, should all be in the |
3374 | + # cache |
3375 | + self.swift.set_results({}) |
3376 | + out = self.do_test( |
3377 | + [], |
3378 | + {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}, |
3379 | + 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3380 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3381 | + }) |
3382 | + })[0] |
3383 | + self.assertEqual(self.amqp_requests, set()) |
3384 | + self.assertEqual(self.pending_requests, '') |
3385 | + self.assertNotIn('Failure', out, out) |
3386 | + |
3387 | + def test_multi_rdepends_with_tests_mixed(self): |
3388 | + '''Multiple reverse dependencies with tests (mixed results)''' |
3389 | + |
3390 | + # green has passed before |
3391 | + self.swift.set_results({'autopkgtest-series': { |
3392 | + 'series/i386/g/green/20150101_100000@': (0, 'green 1'), |
3393 | + }}) |
3394 | + |
3395 | + # first run requests tests and marks them as pending |
3396 | + self.do_test( |
3397 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
3398 | + {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3399 | + 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3400 | + 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3401 | + }) |
3402 | + }, |
3403 | + {'green': [('old-version', '1'), ('new-version', '2')]}) |
3404 | + |
3405 | + # second run collects the results |
3406 | + self.swift.set_results({'autopkgtest-series': { |
3407 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
3408 | + 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/2')), |
3409 | + 'series/amd64/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')), |
3410 | + 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')), |
3411 | + 'series/amd64/g/green/20150101_100201@': (4, 'green 2', tr('green/2')), |
3412 | + # unrelated results (wrong trigger), ignore this! |
3413 | + 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/1')), |
3414 | + 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('blue/1')), |
3415 | + }}) |
3416 | + |
3417 | + out = self.do_test( |
3418 | + [], |
3419 | + {'green': (False, {'green 2': {'amd64': 'ALWAYSFAIL', 'i386': 'PASS'}, |
3420 | + 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'RUNNING'}, |
3421 | + 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'PASS'}, |
3422 | + }) |
3423 | + }) |
3424 | + |
3425 | + # not expecting any failures to retrieve from swift |
3426 | + self.assertNotIn('Failure', out, out) |
3427 | + |
3428 | + # there should be some pending ones |
3429 | + self.assertIn('darkgreen 1 amd64 green 2', self.pending_requests) |
3430 | + self.assertIn('lightgreen 1 i386 green 2', self.pending_requests) |
3431 | + |
3432 | + def test_multi_rdepends_with_tests_mixed_no_recorded_triggers(self): |
3433 | + '''Multiple reverse dependencies with tests (mixed results), no recorded triggers''' |
3434 | + |
3435 | + # green has passed before |
3436 | + self.swift.set_results({'autopkgtest-series': { |
3437 | + 'series/i386/g/green/20150101_100000@': (0, 'green 1'), |
3438 | + }}) |
3439 | + |
3440 | + # first run requests tests and marks them as pending |
3441 | + self.do_test( |
3442 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
3443 | + {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3444 | + 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3445 | + 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3446 | + }) |
3447 | + }, |
3448 | + {'green': [('old-version', '1'), ('new-version', '2')]}) |
3449 | + |
3450 | + # second run collects the results |
3451 | + self.swift.set_results({'autopkgtest-series': { |
3452 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1'), |
3453 | + 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 1'), |
3454 | + 'series/amd64/l/lightgreen/20150101_100101@': (4, 'lightgreen 1'), |
3455 | + 'series/i386/g/green/20150101_100200@': (0, 'green 2'), |
3456 | + 'series/amd64/g/green/20150101_100201@': (4, 'green 2'), |
3457 | + }}) |
3458 | + |
3459 | + out = self.do_test( |
3460 | + [], |
3461 | + {'green': (False, {'green 2': {'amd64': 'ALWAYSFAIL', 'i386': 'PASS'}, |
3462 | + 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'RUNNING'}, |
3463 | + 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'PASS'}, |
3464 | + }) |
3465 | + }) |
3466 | + |
3467 | + # not expecting any failures to retrieve from swift |
3468 | + self.assertNotIn('Failure', out, out) |
3469 | + |
3470 | + # there should be some pending ones |
3471 | + self.assertIn('darkgreen 1 amd64 green 2', self.pending_requests) |
3472 | + self.assertIn('lightgreen 1 i386 green 2', self.pending_requests) |
3473 | + |
3474 | + def test_multi_rdepends_with_tests_regression(self): |
3475 | + '''Multiple reverse dependencies with tests (regression)''' |
3476 | + |
3477 | + self.swift.set_results({'autopkgtest-series': { |
3478 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
3479 | + 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
3480 | + 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/1')), |
3481 | + 'series/i386/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')), |
3482 | + 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/1')), |
3483 | + 'series/amd64/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')), |
3484 | + 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')), |
3485 | + 'series/amd64/g/green/20150101_100200@': (0, 'green 2', tr('green/2')), |
3486 | + 'series/amd64/g/green/20150101_100201@': (4, 'green 2', tr('green/2')), |
3487 | + }}) |
3488 | + |
3489 | + out = self.do_test( |
3490 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
3491 | + {'green': (False, {'green 2': {'amd64': 'REGRESSION', 'i386': 'PASS'}, |
3492 | + 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'}, |
3493 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3494 | + }) |
3495 | + }, |
3496 | + {'green': [('old-version', '1'), ('new-version', '2')]} |
3497 | + )[0] |
3498 | + |
3499 | + # we already had all results before the run, so this should not trigger |
3500 | + # any new requests |
3501 | + self.assertEqual(self.amqp_requests, set()) |
3502 | + self.assertEqual(self.pending_requests, '') |
3503 | + |
3504 | + # not expecting any failures to retrieve from swift |
3505 | + self.assertNotIn('Failure', out, out) |
3506 | + |
3507 | + def test_multi_rdepends_with_tests_regression_last_pass(self): |
3508 | + '''Multiple reverse dependencies with tests (regression), last one passes |
3509 | + |
3510 | + This ensures that we don't just evaluate the test result of the last |
3511 | + test, but all of them. |
3512 | + ''' |
3513 | + self.swift.set_results({'autopkgtest-series': { |
3514 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
3515 | + 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
3516 | + 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/2')), |
3517 | + 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/2')), |
3518 | + 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')), |
3519 | + 'series/amd64/g/green/20150101_100200@': (0, 'green 2', tr('green/2')), |
3520 | + 'series/amd64/g/green/20150101_100201@': (4, 'green 2', tr('green/2')), |
3521 | + }}) |
3522 | + |
3523 | + out = self.do_test( |
3524 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
3525 | + {'green': (False, {'green 2': {'amd64': 'REGRESSION', 'i386': 'PASS'}, |
3526 | + 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3527 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3528 | + }) |
3529 | + }, |
3530 | + {'green': [('old-version', '1'), ('new-version', '2')]} |
3531 | + )[0] |
3532 | + |
3533 | + self.assertEqual(self.pending_requests, '') |
3534 | + # not expecting any failures to retrieve from swift |
3535 | + self.assertNotIn('Failure', out, out) |
3536 | + |
3537 | + def test_multi_rdepends_with_tests_always_failed(self): |
3538 | + '''Multiple reverse dependencies with tests (always failed)''' |
3539 | + |
3540 | + self.swift.set_results({'autopkgtest-series': { |
3541 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
3542 | + 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
3543 | + 'series/i386/l/lightgreen/20150101_100100@': (4, 'lightgreen 1', tr('green/1')), |
3544 | + 'series/i386/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')), |
3545 | + 'series/amd64/l/lightgreen/20150101_100100@': (4, 'lightgreen 1', tr('green/1')), |
3546 | + 'series/amd64/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')), |
3547 | + 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')), |
3548 | + 'series/amd64/g/green/20150101_100200@': (4, 'green 2', tr('green/1')), |
3549 | + 'series/amd64/g/green/20150101_100201@': (4, 'green 2', tr('green/2')), |
3550 | + }}) |
3551 | + |
3552 | + out = self.do_test( |
3553 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
3554 | + {'green': (True, {'green 2': {'amd64': 'ALWAYSFAIL', 'i386': 'PASS'}, |
3555 | + 'lightgreen 1': {'amd64': 'ALWAYSFAIL', 'i386': 'ALWAYSFAIL'}, |
3556 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3557 | + }) |
3558 | + }, |
3559 | + {'green': [('old-version', '1'), ('new-version', '2')]} |
3560 | + )[0] |
3561 | + |
3562 | + self.assertEqual(self.pending_requests, '') |
3563 | + # not expecting any failures to retrieve from swift |
3564 | + self.assertNotIn('Failure', out, out) |
3565 | + |
3566 | + def test_multi_rdepends_arch_specific(self): |
3567 | + '''Multiple reverse dependencies with arch specific tests''' |
3568 | + |
3569 | + # green64 has passed before |
3570 | + self.swift.set_results({'autopkgtest-series': { |
3571 | + 'series/amd64/g/green64/20150101_100000@': (0, 'green64 0.1'), |
3572 | + }}) |
3573 | + |
3574 | + self.data.add('green64', False, {'Depends': 'libc6 (>= 0.9), libgreen1', |
3575 | + 'Architecture': 'amd64'}, |
3576 | + testsuite='autopkgtest') |
3577 | + |
3578 | + # first run requests tests and marks them as pending |
3579 | + self.do_test( |
3580 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
3581 | + {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3582 | + 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3583 | + 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3584 | + 'green64 1': {'amd64': 'RUNNING'}, |
3585 | + }) |
3586 | + }) |
3587 | + |
3588 | + self.assertEqual( |
3589 | + self.amqp_requests, |
3590 | + set(['debci-series-i386:green {"triggers": ["green/2"]}', |
3591 | + 'debci-series-amd64:green {"triggers": ["green/2"]}', |
3592 | + 'debci-series-i386:lightgreen {"triggers": ["green/2"]}', |
3593 | + 'debci-series-amd64:lightgreen {"triggers": ["green/2"]}', |
3594 | + 'debci-series-i386:darkgreen {"triggers": ["green/2"]}', |
3595 | + 'debci-series-amd64:darkgreen {"triggers": ["green/2"]}', |
3596 | + 'debci-series-amd64:green64 {"triggers": ["green/2"]}'])) |
3597 | + |
3598 | + self.assertIn('green64 1 amd64', self.pending_requests) |
3599 | + self.assertNotIn('green64 1 i386', self.pending_requests) |
3600 | + |
3601 | + # second run collects the results |
3602 | + self.swift.set_results({'autopkgtest-series': { |
3603 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
3604 | + 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/2')), |
3605 | + 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/2')), |
3606 | + 'series/amd64/l/lightgreen/20150101_100101@': (0, 'lightgreen 1', tr('green/2')), |
3607 | + # version in testing fails |
3608 | + 'series/i386/g/green/20150101_020000@': (4, 'green 1', tr('green/1')), |
3609 | + 'series/amd64/g/green/20150101_020000@': (4, 'green 1', tr('green/1')), |
3610 | + # version in unstable succeeds |
3611 | + 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')), |
3612 | + 'series/amd64/g/green/20150101_100201@': (0, 'green 2', tr('green/2')), |
3613 | + # only amd64 result for green64 |
3614 | + 'series/amd64/g/green64/20150101_100200@': (0, 'green64 1', tr('green/2')), |
3615 | + }}) |
3616 | + |
3617 | + out = self.do_test( |
3618 | + [], |
3619 | + {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}, |
3620 | + 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3621 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3622 | + 'green64 1': {'amd64': 'PASS'}, |
3623 | + }) |
3624 | + }, |
3625 | + {'green': [('old-version', '1'), ('new-version', '2')]} |
3626 | + )[0] |
3627 | + |
3628 | + # all tests ran, there should be no more pending ones |
3629 | + self.assertEqual(self.amqp_requests, set()) |
3630 | + self.assertEqual(self.pending_requests, '') |
3631 | + |
3632 | + # not expecting any failures to retrieve from swift |
3633 | + self.assertNotIn('Failure', out, out) |
3634 | + |
3635 | + def test_unbuilt(self): |
3636 | + '''Unbuilt package should not trigger tests or get considered''' |
3637 | + |
3638 | + self.data.add_src('green', True, {'Version': '2', 'Testsuite': 'autopkgtest'}) |
3639 | + exc = self.do_test( |
3640 | + # uninstallable unstable version |
3641 | + [], |
3642 | + {'green': (False, {})}, |
3643 | + {'green': [('old-version', '1'), ('new-version', '2'), |
3644 | + ('reason', 'no-binaries'), |
3645 | + ('excuses', 'green has no up-to-date binaries on any arch') |
3646 | + ] |
3647 | + })[1] |
3648 | + # autopkgtest should not be triggered for unbuilt pkg |
3649 | + self.assertEqual(exc['green']['tests'], {}) |
3650 | + |
3651 | + def test_rdepends_unbuilt(self): |
3652 | + '''Unbuilt reverse dependency''' |
3653 | + |
3654 | + # old lightgreen fails, thus new green should be held back |
3655 | + self.swift.set_results({'autopkgtest-series': { |
3656 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/1.1')), |
3657 | + 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/1.1')), |
3658 | + 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/1.1')), |
3659 | + 'series/i386/l/lightgreen/20150101_100100@': (4, 'lightgreen 1', tr('green/1.1')), |
3660 | + 'series/amd64/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/1.1')), |
3661 | + 'series/amd64/l/lightgreen/20150101_100100@': (4, 'lightgreen 1', tr('green/1.1')), |
3662 | + 'series/i386/g/green/20150101_020000@': (0, 'green 1', tr('green/1')), |
3663 | + 'series/amd64/g/green/20150101_020000@': (0, 'green 1', tr('green/1')), |
3664 | + 'series/i386/g/green/20150101_100200@': (0, 'green 1.1', tr('green/1.1')), |
3665 | + 'series/amd64/g/green/20150101_100201@': (0, 'green 1.1', tr('green/1.1')), |
3666 | + }}) |
3667 | + |
3668 | + # add unbuilt lightgreen; should run tests against the old version |
3669 | + self.data.add_src('lightgreen', True, {'Version': '2', 'Testsuite': 'autopkgtest'}) |
3670 | + self.do_test( |
3671 | + [('libgreen1', {'Version': '1.1', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
3672 | + {'green': (False, {'green 1.1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3673 | + 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'}, |
3674 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3675 | + }), |
3676 | + 'lightgreen': (False, {}), |
3677 | + }, |
3678 | + {'green': [('old-version', '1'), ('new-version', '1.1')], |
3679 | + 'lightgreen': [('old-version', '1'), ('new-version', '2'), |
3680 | + ('excuses', 'lightgreen has no up-to-date binaries on any arch')] |
3681 | + } |
3682 | + ) |
3683 | + |
3684 | + self.assertEqual(self.amqp_requests, set()) |
3685 | + self.assertEqual(self.pending_requests, '') |
3686 | + |
3687 | + # next run should not trigger any new requests |
3688 | + self.do_test([], {'green': (False, {}), 'lightgreen': (False, {})}) |
3689 | + self.assertEqual(self.amqp_requests, set()) |
3690 | + self.assertEqual(self.pending_requests, '') |
3691 | + |
3692 | + # now lightgreen 2 gets built, should trigger a new test run |
3693 | + self.data.remove_all(True) |
3694 | + self.do_test( |
3695 | + [('libgreen1', {'Version': '1.1', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest'), |
3696 | + ('lightgreen', {'Version': '2'}, 'autopkgtest')], |
3697 | + {}) |
3698 | + self.assertEqual(self.amqp_requests, |
3699 | + set(['debci-series-amd64:lightgreen {"triggers": ["lightgreen/2"]}', |
3700 | + 'debci-series-i386:lightgreen {"triggers": ["lightgreen/2"]}'])) |
3701 | + |
3702 | + # next run collects the results |
3703 | + self.swift.set_results({'autopkgtest-series': { |
3704 | + 'series/i386/l/lightgreen/20150101_100200@': (0, 'lightgreen 2', tr('lightgreen/2')), |
3705 | + 'series/amd64/l/lightgreen/20150101_102000@': (0, 'lightgreen 2', tr('lightgreen/2')), |
3706 | + }}) |
3707 | + self.do_test( |
3708 | + [], |
3709 | + {'green': (True, {'green 1.1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3710 | + # FIXME: expecting a lightgreen test here |
3711 | + # 'lightgreen 2': {'amd64': 'PASS', 'i386': 'PASS'}, |
3712 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3713 | + }), |
3714 | + 'lightgreen': (True, {'lightgreen 2': {'amd64': 'PASS', 'i386': 'PASS'}}), |
3715 | + }, |
3716 | + {'green': [('old-version', '1'), ('new-version', '1.1')], |
3717 | + 'lightgreen': [('old-version', '1'), ('new-version', '2')], |
3718 | + } |
3719 | + ) |
3720 | + self.assertEqual(self.amqp_requests, set()) |
3721 | + self.assertEqual(self.pending_requests, '') |
3722 | + |
3723 | + def test_rdepends_unbuilt_unstable_only(self): |
3724 | + '''Unbuilt reverse dependency which is not in testing''' |
3725 | + |
3726 | + self.swift.set_results({'autopkgtest-series': { |
3727 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
3728 | + 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/2')), |
3729 | + 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/2')), |
3730 | + 'series/amd64/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/2')), |
3731 | + 'series/i386/g/green/20150101_020000@': (0, 'green 1', tr('green/1')), |
3732 | + 'series/amd64/g/green/20150101_020000@': (0, 'green 1', tr('green/1')), |
3733 | + 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')), |
3734 | + 'series/amd64/g/green/20150101_100201@': (0, 'green 2', tr('green/2')), |
3735 | + }}) |
3736 | + # run britney once to pick up previous results |
3737 | + self.do_test( |
3738 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
3739 | + {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}})}) |
3740 | + |
3741 | + # add new uninstallable brokengreen; should not run test at all |
3742 | + exc = self.do_test( |
3743 | + [('brokengreen', {'Version': '1', 'Depends': 'libgreen1, nonexisting'}, 'autopkgtest')], |
3744 | + {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}}), |
3745 | + 'brokengreen': (False, {}), |
3746 | + }, |
3747 | + {'green': [('old-version', '1'), ('new-version', '2')], |
3748 | + 'brokengreen': [('old-version', '-'), ('new-version', '1'), |
3749 | + ('reason', 'depends'), |
3750 | + ('excuses', 'brokengreen/amd64 unsatisfiable Depends: nonexisting')], |
3751 | + })[1] |
3752 | + # autopkgtest should not be triggered for uninstallable pkg |
3753 | + self.assertEqual(exc['brokengreen']['tests'], {}) |
3754 | + |
3755 | + self.assertEqual(self.amqp_requests, set()) |
3756 | + |
3757 | + def test_rdepends_unbuilt_new_version_result(self): |
3758 | + '''Unbuilt reverse dependency gets test result for newer version |
3759 | + |
3760 | + This can happen if the autopkgtest infrastructure runs the unstable |
3761 | + source's tests against the testing binaries. Even when that is done |
3762 | + properly, the package may not be built yet at the time of the britney |
3763 | + run, but be built by the time the test actually runs. |
3764 | + ''' |
3765 | + # old lightgreen fails, thus new green should be held back |
3766 | + self.swift.set_results({'autopkgtest-series': { |
3767 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/1.1')), |
3768 | + 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/1.1')), |
3769 | + 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/1.1')), |
3770 | + 'series/i386/l/lightgreen/20150101_100100@': (4, 'lightgreen 1', tr('green/1.1')), |
3771 | + 'series/amd64/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/1.1')), |
3772 | + 'series/amd64/l/lightgreen/20150101_100100@': (4, 'lightgreen 1', tr('green/1.1')), |
3773 | + 'series/i386/g/green/20150101_020000@': (0, 'green 1', tr('green/1')), |
3774 | + 'series/amd64/g/green/20150101_020000@': (0, 'green 1', tr('green/1')), |
3775 | + 'series/i386/g/green/20150101_100200@': (0, 'green 1.1', tr('green/1.1')), |
3776 | + 'series/amd64/g/green/20150101_100201@': (0, 'green 1.1', tr('green/1.1')), |
3777 | + }}) |
3778 | + |
3779 | + # add unbuilt lightgreen; should run tests against the old version |
3780 | + self.data.add_src('lightgreen', True, {'Version': '2', 'Testsuite': 'autopkgtest'}) |
3781 | + self.do_test( |
3782 | + [('libgreen1', {'Version': '1.1', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
3783 | + {'green': (False, {'green 1.1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3784 | + 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'}, |
3785 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3786 | + }), |
3787 | + 'lightgreen': (False, {}), |
3788 | + }, |
3789 | + {'green': [('old-version', '1'), ('new-version', '1.1')], |
3790 | + 'lightgreen': [('old-version', '1'), ('new-version', '2'), |
3791 | + ('excuses', 'lightgreen has no up-to-date binaries on any arch')] |
3792 | + } |
3793 | + ) |
3794 | + self.assertEqual(self.amqp_requests, set()) |
3795 | + self.assertEqual(self.pending_requests, '') |
3796 | + |
3797 | + # lightgreen 2 stays unbuilt in britney, but we get a test result for it |
3798 | + self.swift.set_results({'autopkgtest-series': { |
3799 | + 'series/i386/l/lightgreen/20150101_100200@': (0, 'lightgreen 2', tr('green/1.1')), |
3800 | + 'series/amd64/l/lightgreen/20150101_102000@': (0, 'lightgreen 2', tr('green/1.1')), |
3801 | + }}) |
3802 | + self.do_test( |
3803 | + [], |
3804 | + {'green': (True, {'green 1.1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3805 | + 'lightgreen 2': {'amd64': 'PASS', 'i386': 'PASS'}, |
3806 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3807 | + }), |
3808 | + 'lightgreen': (False, {}), |
3809 | + }, |
3810 | + {'green': [('old-version', '1'), ('new-version', '1.1')], |
3811 | + 'lightgreen': [('old-version', '1'), ('new-version', '2'), |
3812 | + ('excuses', 'lightgreen has no up-to-date binaries on any arch')] |
3813 | + } |
3814 | + ) |
3815 | + self.assertEqual(self.amqp_requests, set()) |
3816 | + self.assertEqual(self.pending_requests, '') |
3817 | + |
3818 | + # next run should not trigger any new requests |
3819 | + self.do_test([], {'green': (True, {}), 'lightgreen': (False, {})}) |
3820 | + self.assertEqual(self.amqp_requests, set()) |
3821 | + self.assertEqual(self.pending_requests, '') |
3822 | + |
3823 | + def test_rdepends_unbuilt_new_version_fail(self): |
3824 | + '''Unbuilt reverse dependency gets failure for newer version''' |
3825 | + |
3826 | + self.swift.set_results({'autopkgtest-series': { |
3827 | + 'series/i386/l/lightgreen/20150101_100101@': (0, 'lightgreen 1', tr('lightgreen/2')), |
3828 | + }}) |
3829 | + |
3830 | + # add unbuilt lightgreen; should request tests against the old version |
3831 | + self.data.add_src('lightgreen', True, {'Version': '2', 'Testsuite': 'autopkgtest'}) |
3832 | + self.do_test( |
3833 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
3834 | + {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3835 | + 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3836 | + 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3837 | + }), |
3838 | + 'lightgreen': (False, {}), |
3839 | + }, |
3840 | + {'green': [('old-version', '1'), ('new-version', '2')], |
3841 | + 'lightgreen': [('old-version', '1'), ('new-version', '2'), |
3842 | + ('excuses', 'lightgreen has no up-to-date binaries on any arch')] |
3843 | + } |
3844 | + ) |
3845 | + self.assertEqual(len(self.amqp_requests), 6) |
3846 | + |
3847 | + # we only get a result for lightgreen 2, not for the requested 1 |
3848 | + self.swift.set_results({'autopkgtest-series': { |
3849 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
3850 | + 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/2')), |
3851 | + 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 0.5', tr('green/2')), |
3852 | + 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 0.5', tr('green/2')), |
3853 | + 'series/i386/l/lightgreen/20150101_100200@': (4, 'lightgreen 2', tr('green/2')), |
3854 | + 'series/amd64/l/lightgreen/20150101_100200@': (4, 'lightgreen 2', tr('green/2')), |
3855 | + 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')), |
3856 | + 'series/amd64/g/green/20150101_100201@': (0, 'green 2', tr('green/2')), |
3857 | + }}) |
3858 | + self.do_test( |
3859 | + [], |
3860 | + {'green': (False, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}, |
3861 | + 'lightgreen 2': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'}, |
3862 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3863 | + }), |
3864 | + 'lightgreen': (False, {}), |
3865 | + }, |
3866 | + {'green': [('old-version', '1'), ('new-version', '2')], |
3867 | + 'lightgreen': [('old-version', '1'), ('new-version', '2'), |
3868 | + ('excuses', 'lightgreen has no up-to-date binaries on any arch')] |
3869 | + } |
3870 | + ) |
3871 | + self.assertEqual(self.amqp_requests, set()) |
3872 | + self.assertEqual(self.pending_requests, '') |
3873 | + |
3874 | + # next run should not trigger any new requests |
3875 | + self.do_test([], {'green': (False, {}), 'lightgreen': (False, {})}) |
3876 | + self.assertEqual(self.pending_requests, '') |
3877 | + self.assertEqual(self.amqp_requests, set()) |
3878 | + |
3879 | + def test_package_pair_running(self): |
3880 | + '''Two packages in unstable that need to go in together (running)''' |
3881 | + |
3882 | + # green has passed before |
3883 | + self.swift.set_results({'autopkgtest-series': { |
3884 | + 'series/i386/g/green/20150101_100000@': (0, 'green 1'), |
3885 | + }}) |
3886 | + |
3887 | + self.do_test( |
3888 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest'), |
3889 | + ('lightgreen', {'Version': '2', 'Depends': 'libgreen1 (>= 2)'}, 'autopkgtest')], |
3890 | + {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3891 | + 'lightgreen 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3892 | + 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3893 | + }), |
3894 | + 'lightgreen': (False, {'lightgreen 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}}), |
3895 | + }, |
3896 | + {'green': [('old-version', '1'), ('new-version', '2')], |
3897 | + 'lightgreen': [('old-version', '1'), ('new-version', '2')], |
3898 | + }) |
3899 | + |
3900 | + # we expect tests for the package and its reverse dependencies to be |
3901 | + # triggered; lightgreen is requested once per trigger (green/2, lightgreen/2) |
3902 | + self.assertEqual( |
3903 | + self.amqp_requests, |
3904 | + set(['debci-series-i386:green {"triggers": ["green/2"]}', |
3905 | + 'debci-series-amd64:green {"triggers": ["green/2"]}', |
3906 | + 'debci-series-i386:lightgreen {"triggers": ["green/2"]}', |
3907 | + 'debci-series-amd64:lightgreen {"triggers": ["green/2"]}', |
3908 | + 'debci-series-i386:lightgreen {"triggers": ["lightgreen/2"]}', |
3909 | + 'debci-series-amd64:lightgreen {"triggers": ["lightgreen/2"]}', |
3910 | + 'debci-series-i386:darkgreen {"triggers": ["green/2"]}', |
3911 | + 'debci-series-amd64:darkgreen {"triggers": ["green/2"]}'])) |
3912 | + |
3913 | + # ... and that they get recorded as pending |
3914 | + expected_pending = '''darkgreen 1 amd64 green 2 |
3915 | +darkgreen 1 i386 green 2 |
3916 | +green 2 amd64 green 2 |
3917 | +green 2 i386 green 2 |
3918 | +lightgreen 2 amd64 green 2 |
3919 | +lightgreen 2 amd64 lightgreen 2 |
3920 | +lightgreen 2 i386 green 2 |
3921 | +lightgreen 2 i386 lightgreen 2 |
3922 | +''' |
3923 | + self.assertEqual(self.pending_requests, expected_pending) |
3924 | + |
3925 | + def test_binary_from_new_source_package_running(self): |
3926 | + '''building an existing binary for a new source package (running)''' |
3927 | + |
3928 | + self.do_test( |
3929 | + [('libgreen1', {'Version': '2', 'Source': 'newgreen', 'Depends': 'libc6'}, 'autopkgtest')], |
3930 | + {'newgreen': (True, {'newgreen 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3931 | + 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3932 | + 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
3933 | + }), |
3934 | + }, |
3935 | + {'newgreen': [('old-version', '-'), ('new-version', '2')]}) |
3936 | + |
3937 | + self.assertEqual(len(self.amqp_requests), 8) |
3938 | + expected_pending = '''darkgreen 1 amd64 newgreen 2 |
3939 | +darkgreen 1 i386 newgreen 2 |
3940 | +green 1 amd64 newgreen 2 |
3941 | +green 1 i386 newgreen 2 |
3942 | +lightgreen 1 amd64 newgreen 2 |
3943 | +lightgreen 1 i386 newgreen 2 |
3944 | +newgreen 2 amd64 newgreen 2 |
3945 | +newgreen 2 i386 newgreen 2 |
3946 | +''' |
3947 | + self.assertEqual(self.pending_requests, expected_pending) |
3948 | + |
3949 | + def test_binary_from_new_source_package_pass(self): |
3950 | + '''building an existing binary for a new source package (pass)''' |
3951 | + |
3952 | + self.swift.set_results({'autopkgtest-series': { |
3953 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('newgreen/2')), |
3954 | + 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('newgreen/2')), |
3955 | + 'series/i386/g/green/20150101_100000@': (0, 'green 1', tr('newgreen/2')), |
3956 | + 'series/amd64/g/green/20150101_100000@': (0, 'green 1', tr('newgreen/2')), |
3957 | + 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('newgreen/2')), |
3958 | + 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('newgreen/2')), |
3959 | + 'series/i386/n/newgreen/20150101_100200@': (0, 'newgreen 2', tr('newgreen/2')), |
3960 | + 'series/amd64/n/newgreen/20150101_100201@': (0, 'newgreen 2', tr('newgreen/2')), |
3961 | + }}) |
3962 | + |
3963 | + self.do_test( |
3964 | + [('libgreen1', {'Version': '2', 'Source': 'newgreen', 'Depends': 'libc6'}, 'autopkgtest')], |
3965 | + {'newgreen': (True, {'newgreen 2': {'amd64': 'PASS', 'i386': 'PASS'}, |
3966 | + 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3967 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3968 | + 'green 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
3969 | + }), |
3970 | + }, |
3971 | + {'newgreen': [('old-version', '-'), ('new-version', '2')]}) |
3972 | + |
3973 | + self.assertEqual(self.amqp_requests, set()) |
3974 | + self.assertEqual(self.pending_requests, '') |
3975 | + |
3976 | + def test_result_from_older_version(self): |
3977 | + '''test result from older version than the uploaded one''' |
3978 | + |
3979 | + self.swift.set_results({'autopkgtest-series': { |
3980 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('darkgreen/2')), |
3981 | + 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('darkgreen/2')), |
3982 | + }}) |
3983 | + |
3984 | + self.do_test( |
3985 | + [('darkgreen', {'Version': '2', 'Depends': 'libc6 (>= 0.9), libgreen1'}, 'autopkgtest')], |
3986 | + {'darkgreen': (False, {'darkgreen 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}})}) |
3987 | + |
3988 | + self.assertEqual( |
3989 | + self.amqp_requests, |
3990 | + set(['debci-series-i386:darkgreen {"triggers": ["darkgreen/2"]}', |
3991 | + 'debci-series-amd64:darkgreen {"triggers": ["darkgreen/2"]}'])) |
3992 | + self.assertEqual(self.pending_requests, |
3993 | + 'darkgreen 2 amd64 darkgreen 2\ndarkgreen 2 i386 darkgreen 2\n') |
3994 | + |
3995 | + # second run gets the results for darkgreen 2 |
3996 | + self.swift.set_results({'autopkgtest-series': { |
3997 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('darkgreen/1')), |
3998 | + 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('darkgreen/1')), |
3999 | + 'series/i386/d/darkgreen/20150101_100010@': (0, 'darkgreen 2', tr('darkgreen/2')), |
4000 | + 'series/amd64/d/darkgreen/20150101_100010@': (0, 'darkgreen 2', tr('darkgreen/2')), |
4001 | + }}) |
4002 | + self.do_test( |
4003 | + [], |
4004 | + {'darkgreen': (True, {'darkgreen 2': {'amd64': 'PASS', 'i386': 'PASS'}})}) |
4005 | + self.assertEqual(self.amqp_requests, set()) |
4006 | + self.assertEqual(self.pending_requests, '') |
4007 | + |
4008 | + # next run sees a newer darkgreen, should re-run tests |
4009 | + self.data.remove_all(True) |
4010 | + self.do_test( |
4011 | + [('darkgreen', {'Version': '3', 'Depends': 'libc6 (>= 0.9), libgreen1'}, 'autopkgtest')], |
4012 | + {'darkgreen': (False, {'darkgreen 3': {'amd64': 'RUNNING', 'i386': 'RUNNING'}})}) |
4013 | + self.assertEqual( |
4014 | + self.amqp_requests, |
4015 | + set(['debci-series-i386:darkgreen {"triggers": ["darkgreen/3"]}', |
4016 | + 'debci-series-amd64:darkgreen {"triggers": ["darkgreen/3"]}'])) |
4017 | + self.assertEqual(self.pending_requests, |
4018 | + 'darkgreen 3 amd64 darkgreen 3\ndarkgreen 3 i386 darkgreen 3\n') |
4019 | + |
4020 | + def test_old_result_from_rdep_version(self): |
4021 | + '''re-runs reverse dependency test on new versions''' |
4022 | + |
4023 | + self.swift.set_results({'autopkgtest-series': { |
4024 | + 'series/i386/g/green/20150101_100000@': (0, 'green 1', tr('green/1')), |
4025 | + 'series/amd64/g/green/20150101_100000@': (0, 'green 1', tr('green/1')), |
4026 | + 'series/i386/g/green/20150101_100010@': (0, 'green 2', tr('green/2')), |
4027 | + 'series/amd64/g/green/20150101_100010@': (0, 'green 2', tr('green/2')), |
4028 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
4029 | + 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
4030 | + 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/2')), |
4031 | + 'series/amd64/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/2')), |
4032 | + }}) |
4033 | + |
4034 | + self.do_test( |
4035 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
4036 | + {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}, |
4037 | + 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
4038 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
4039 | + }), |
4040 | + }) |
4041 | + |
4042 | + self.assertEqual(self.amqp_requests, set()) |
4043 | + self.assertEqual(self.pending_requests, '') |
4044 | + self.data.remove_all(True) |
4045 | + |
4046 | + # second run: new version re-triggers all tests |
4047 | + self.do_test( |
4048 | + [('libgreen1', {'Version': '3', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
4049 | + {'green': (False, {'green 3': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4050 | + 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4051 | + 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4052 | + }), |
4053 | + }) |
4054 | + |
4055 | + self.assertEqual(len(self.amqp_requests), 6) |
4056 | + |
4057 | + expected_pending = '''darkgreen 1 amd64 green 3 |
4058 | +darkgreen 1 i386 green 3 |
4059 | +green 3 amd64 green 3 |
4060 | +green 3 i386 green 3 |
4061 | +lightgreen 1 amd64 green 3 |
4062 | +lightgreen 1 i386 green 3 |
4063 | +''' |
4064 | + self.assertEqual(self.pending_requests, expected_pending) |
4065 | + |
4066 | + # third run gets the results for green and lightgreen, darkgreen is |
4067 | + # still running |
4068 | + self.swift.set_results({'autopkgtest-series': { |
4069 | + 'series/i386/g/green/20150101_100020@': (0, 'green 3', tr('green/3')), |
4070 | + 'series/amd64/g/green/20150101_100020@': (0, 'green 3', tr('green/3')), |
4071 | + 'series/i386/l/lightgreen/20150101_100010@': (0, 'lightgreen 1', tr('green/3')), |
4072 | + 'series/amd64/l/lightgreen/20150101_100010@': (0, 'lightgreen 1', tr('green/3')), |
4073 | + }}) |
4074 | + self.do_test( |
4075 | + [], |
4076 | + {'green': (False, {'green 3': {'amd64': 'PASS', 'i386': 'PASS'}, |
4077 | + 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
4078 | + 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4079 | + }), |
4080 | + }) |
4081 | + self.assertEqual(self.amqp_requests, set()) |
4082 | + self.assertEqual(self.pending_requests, |
4083 | + 'darkgreen 1 amd64 green 3\ndarkgreen 1 i386 green 3\n') |
4084 | + |
4085 | + # fourth run finally gets the new darkgreen result |
4086 | + self.swift.set_results({'autopkgtest-series': { |
4087 | + 'series/i386/d/darkgreen/20150101_100010@': (0, 'darkgreen 1', tr('green/3')), |
4088 | + 'series/amd64/d/darkgreen/20150101_100010@': (0, 'darkgreen 1', tr('green/3')), |
4089 | + }}) |
4090 | + self.do_test( |
4091 | + [], |
4092 | + {'green': (True, {'green 3': {'amd64': 'PASS', 'i386': 'PASS'}, |
4093 | + 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
4094 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
4095 | + }), |
4096 | + }) |
4097 | + self.assertEqual(self.amqp_requests, set()) |
4098 | + self.assertEqual(self.pending_requests, '') |
4099 | + |
4100 | + def test_tmpfail(self): |
4101 | + '''tmpfail results''' |
4102 | + |
4103 | + # one tmpfail result without testpkg-version, should be ignored |
4104 | + self.swift.set_results({'autopkgtest-series': { |
4105 | + 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('lightgreen/2')), |
4106 | + 'series/i386/l/lightgreen/20150101_100101@': (16, None), |
4107 | + 'series/amd64/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('lightgreen/2')), |
4108 | + 'series/amd64/l/lightgreen/20150101_100101@': (16, 'lightgreen 2', tr('lightgreen/2')), |
4109 | + }}) |
4110 | + |
4111 | + self.do_test( |
4112 | + [('lightgreen', {'Version': '2', 'Depends': 'libgreen1 (>= 1)'}, 'autopkgtest')], |
4113 | + {'lightgreen': (False, {'lightgreen 2': {'amd64': 'REGRESSION', 'i386': 'RUNNING'}})}) |
4114 | + self.assertEqual(self.pending_requests, 'lightgreen 2 i386 lightgreen 2\n') |
4115 | + |
4116 | + # another tmpfail result; its None version must not leak into the cache |
4117 | + self.swift.set_results({'autopkgtest-series': { |
4118 | + 'series/i386/l/lightgreen/20150101_100201@': (16, None), |
4119 | + }}) |
4120 | + self.do_test( |
4121 | + [], |
4122 | + {'lightgreen': (False, {'lightgreen 2': {'amd64': 'REGRESSION', 'i386': 'RUNNING'}})}) |
4123 | + with open(os.path.join(self.data.path, 'data/series-proposed/autopkgtest/results.cache')) as f: |
4124 | + contents = f.read() |
4125 | + self.assertNotIn('null', contents) |
4126 | + self.assertNotIn('None', contents) |
4127 | + |
4128 | + def test_rerun_failure(self): |
4129 | + '''manually re-running failed tests gets picked up''' |
4130 | + |
4131 | + # first run fails |
4132 | + self.swift.set_results({'autopkgtest-series': { |
4133 | + 'series/i386/g/green/20150101_100000@': (0, 'green 2', tr('green/2')), |
4134 | + 'series/i386/g/green/20150101_100101@': (4, 'green 2', tr('green/2')), |
4135 | + 'series/amd64/g/green/20150101_100000@': (0, 'green 2', tr('green/2')), |
4136 | + 'series/amd64/g/green/20150101_100101@': (4, 'green 2', tr('green/2')), |
4137 | + 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/2')), |
4138 | + 'series/i386/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')), |
4139 | + 'series/amd64/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/2')), |
4140 | + 'series/amd64/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')), |
4141 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
4142 | + 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/2')), |
4143 | + }}) |
4144 | + |
4145 | + self.do_test( |
4146 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
4147 | + {'green': (False, {'green 2': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'}, |
4148 | + 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'}, |
4149 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
4150 | + }), |
4151 | + }) |
4152 | + self.assertEqual(self.pending_requests, '') |
4153 | + |
4154 | + # re-running test manually succeeded (note: darkgreen result should be |
4155 | + # cached already) |
4156 | + self.swift.set_results({'autopkgtest-series': { |
4157 | + 'series/i386/g/green/20150101_100201@': (0, 'green 2', tr('green/2')), |
4158 | + 'series/amd64/g/green/20150101_100201@': (0, 'green 2', tr('green/2')), |
4159 | + 'series/i386/l/lightgreen/20150101_100201@': (0, 'lightgreen 1', tr('green/2')), |
4160 | + 'series/amd64/l/lightgreen/20150101_100201@': (0, 'lightgreen 1', tr('green/2')), |
4161 | + }}) |
4162 | + self.do_test( |
4163 | + [], |
4164 | + {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}, |
4165 | + 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
4166 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
4167 | + }), |
4168 | + }) |
4169 | + self.assertEqual(self.pending_requests, '') |
4170 | + |
4171 | + def test_new_runs_dont_clobber_pass(self): |
4172 | + '''passing once is sufficient |
4173 | + |
4174 | + If a test succeeded once for a particular version and trigger, |
4175 | + subsequent failures (which might be triggered by other unstable |
4176 | + uploads) should not invalidate the PASS, as that new failure is the |
4177 | + fault of the new upload, not the original one. |
4178 | + ''' |
4179 | + # new libc6 works fine with green |
4180 | + self.swift.set_results({'autopkgtest-series': { |
4181 | + 'series/i386/g/green/20150101_100000@': (0, 'green 1', tr('libc6/2')), |
4182 | + 'series/amd64/g/green/20150101_100000@': (0, 'green 1', tr('libc6/2')), |
4183 | + }}) |
4184 | + |
4185 | + self.do_test( |
4186 | + [('libc6', {'Version': '2'}, None)], |
4187 | + {'libc6': (True, {'green 1': {'amd64': 'PASS', 'i386': 'PASS'}})}) |
4188 | + self.assertEqual(self.pending_requests, '') |
4189 | + |
4190 | + # new green fails; that's not libc6's fault though, so it should stay |
4191 | + # valid |
4192 | + self.swift.set_results({'autopkgtest-series': { |
4193 | + 'series/i386/g/green/20150101_100100@': (4, 'green 2', tr('green/2')), |
4194 | + 'series/amd64/g/green/20150101_100100@': (4, 'green 2', tr('green/2')), |
4195 | + }}) |
4196 | + self.do_test( |
4197 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
4198 | + {'green': (False, {'green 2': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'}}), |
4199 | + 'libc6': (True, {'green 1': {'amd64': 'PASS', 'i386': 'PASS'}}), |
4200 | + }) |
4201 | + self.assertEqual( |
4202 | + self.amqp_requests, |
4203 | + set(['debci-series-i386:darkgreen {"triggers": ["green/2"]}', |
4204 | + 'debci-series-amd64:darkgreen {"triggers": ["green/2"]}', |
4205 | + 'debci-series-i386:lightgreen {"triggers": ["green/2"]}', |
4206 | + 'debci-series-amd64:lightgreen {"triggers": ["green/2"]}', |
4207 | + ])) |
4208 | + |
4209 | + def test_remove_from_unstable(self): |
4210 | + '''broken package gets removed from unstable''' |
4211 | + |
4212 | + self.swift.set_results({'autopkgtest-series': { |
4213 | + 'series/i386/g/green/20150101_100101@': (0, 'green 1', tr('green/1')), |
4214 | + 'series/amd64/g/green/20150101_100101@': (0, 'green 1', tr('green/1')), |
4215 | + 'series/i386/g/green/20150101_100201@': (0, 'green 2', tr('green/2')), |
4216 | + 'series/amd64/g/green/20150101_100201@': (0, 'green 2', tr('green/2')), |
4217 | + 'series/i386/l/lightgreen/20150101_100101@': (0, 'lightgreen 1', tr('green/2')), |
4218 | + 'series/amd64/l/lightgreen/20150101_100101@': (0, 'lightgreen 1', tr('green/2')), |
4219 | + 'series/i386/l/lightgreen/20150101_100201@': (4, 'lightgreen 2', tr('green/2 lightgreen/2')), |
4220 | + 'series/amd64/l/lightgreen/20150101_100201@': (4, 'lightgreen 2', tr('green/2 lightgreen/2')), |
4221 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
4222 | + 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/2')), |
4223 | + }}) |
4224 | + |
4225 | + self.do_test( |
4226 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest'), |
4227 | + ('lightgreen', {'Version': '2', 'Depends': 'libgreen1 (>= 2)'}, 'autopkgtest')], |
4228 | + {'green': (False, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}, |
4229 | + 'lightgreen 2': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'}, |
4230 | + }), |
4231 | + }) |
4232 | + self.assertEqual(self.pending_requests, '') |
4233 | + self.assertEqual(self.amqp_requests, set()) |
4234 | + |
4235 | + # remove new lightgreen by resetting archive indexes, and re-adding |
4236 | + # green |
4237 | + self.data.remove_all(True) |
4238 | + |
4239 | + self.swift.set_results({'autopkgtest-series': { |
4240 | + # add new result for lightgreen 1 |
4241 | + 'series/i386/l/lightgreen/20150101_100301@': (0, 'lightgreen 1', tr('green/2')), |
4242 | + 'series/amd64/l/lightgreen/20150101_100301@': (0, 'lightgreen 1', tr('green/2')), |
4243 | + }}) |
4244 | + |
4245 | + # the next run picks up the new lightgreen 1 result against green/2 |
4246 | + exc = self.do_test( |
4247 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
4248 | + {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}, |
4249 | + 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
4250 | + }), |
4251 | + })[1] |
4252 | + self.assertNotIn('lightgreen 2', exc['green']['tests']['autopkgtest']) |
4253 | + |
4254 | + # should not trigger new requests |
4255 | + self.assertEqual(self.pending_requests, '') |
4256 | + self.assertEqual(self.amqp_requests, set()) |
4257 | + |
4258 | + # and the next run should not trigger anything new either |
4259 | + self.do_test( |
4260 | + [], |
4261 | + {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}, |
4262 | + 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
4263 | + }), |
4264 | + }) |
4265 | + self.assertEqual(self.pending_requests, '') |
4266 | + self.assertEqual(self.amqp_requests, set()) |
4267 | + |
4268 | + def test_multiarch_dep(self): |
4269 | + '''multi-arch dependency''' |
4270 | + |
4271 | + # lightgreen has passed before |
4272 | + self.swift.set_results({'autopkgtest-series': { |
4273 | + 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1'), |
4274 | + }}) |
4275 | + |
4276 | + self.data.add('rainbow', False, {'Depends': 'lightgreen:any'}, |
4277 | + testsuite='autopkgtest') |
4278 | + |
4279 | + self.do_test( |
4280 | + [('lightgreen', {'Version': '2'}, 'autopkgtest')], |
4281 | + {'lightgreen': (False, {'lightgreen 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4282 | + 'rainbow 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4283 | + }), |
4284 | + }, |
4285 | + {'lightgreen': [('old-version', '1'), ('new-version', '2')]} |
4286 | + ) |
4287 | + |
4288 | + def test_disable_adt(self): |
4289 | + '''Run without autopkgtest requests''' |
4290 | + |
4291 | + # Disable AMQP server config, to ensure we don't touch them with ADT |
4292 | + # disabled |
4293 | + for line in fileinput.input(self.britney_conf, inplace=True): |
4294 | + if line.startswith('ADT_ENABLE'): |
4295 | + print('ADT_ENABLE = no') |
4296 | + elif not line.startswith('ADT_AMQP') and not line.startswith('ADT_SWIFT_URL'): |
4297 | + sys.stdout.write(line) |
4298 | + |
4299 | + exc = self.do_test( |
4300 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
4301 | + {'green': (True, {})}, |
4302 | + {'green': [('old-version', '1'), ('new-version', '2')]})[1] |
4303 | + self.assertNotIn('autopkgtest', exc['green']['tests']) |
4304 | + |
4305 | + self.assertEqual(self.amqp_requests, set()) |
4306 | + self.assertEqual(self.pending_requests, None) |
4307 | + |
4308 | + ################################################################ |
4309 | + # Tests for hint processing |
4310 | + ################################################################ |
4311 | + |
4312 | + def test_hint_force_badtest(self): |
4313 | + '''force-badtest hint''' |
4314 | + |
4315 | + self.swift.set_results({'autopkgtest-series': { |
4316 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
4317 | + 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
4318 | + 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/1')), |
4319 | + 'series/i386/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')), |
4320 | + 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/1')), |
4321 | + 'series/amd64/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')), |
4322 | + 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')), |
4323 | + 'series/amd64/g/green/20150101_100200@': (0, 'green 2', tr('green/2')), |
4324 | + }}) |
4325 | + |
4326 | + self.create_hint('pitti', 'force-badtest lightgreen/1') |
4327 | + |
4328 | + self.do_test( |
4329 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
4330 | + {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}, |
4331 | + 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'}, |
4332 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
4333 | + }), |
4334 | + }, |
4335 | + {'green': [('old-version', '1'), ('new-version', '2'), |
4336 | + ('forced-reason', 'badtest lightgreen 1'), |
4337 | + ('excuses', 'Should wait for lightgreen 1 test, but forced by pitti')] |
4338 | + }) |
4339 | + |
4340 | + def test_hint_force_badtest_different_version(self): |
4341 | + '''force-badtest hint with non-matching version''' |
4342 | + |
4343 | + self.swift.set_results({'autopkgtest-series': { |
4344 | + 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
4345 | + 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')), |
4346 | + 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/1')), |
4347 | + 'series/i386/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')), |
4348 | + 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/1')), |
4349 | + 'series/amd64/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')), |
4350 | + 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')), |
4351 | + 'series/amd64/g/green/20150101_100200@': (0, 'green 2', tr('green/2')), |
4352 | + }}) |
4353 | + |
4354 | + self.create_hint('pitti', 'force-badtest lightgreen/0.1') |
4355 | + |
4356 | + exc = self.do_test( |
4357 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
4358 | + {'green': (False, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}, |
4359 | + 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'}, |
4360 | + 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
4361 | + }), |
4362 | + }, |
4363 | + {'green': [('reason', 'autopkgtest')]} |
4364 | + )[1] |
4365 | + self.assertNotIn('forced-reason', exc['green']) |
4366 | + |
4367 | + def test_hint_force_skiptest(self): |
4368 | + '''force-skiptest hint''' |
4369 | + |
4370 | + self.create_hint('pitti', 'force-skiptest green/2') |
4371 | + |
4372 | + # green has passed before |
4373 | + self.swift.set_results({'autopkgtest-series': { |
4374 | + 'series/i386/g/green/20150101_100000@': (0, 'green 1'), |
4375 | + }}) |
4376 | + |
4377 | + self.do_test( |
4378 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
4379 | + {'green': (True, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4380 | + 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4381 | + 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4382 | + }), |
4383 | + }, |
4384 | + {'green': [('old-version', '1'), ('new-version', '2'), |
4385 | + ('forced-reason', 'skiptest'), |
4386 | + ('excuses', 'Should wait for tests relating to green 2, but forced by pitti')] |
4387 | + }) |
4388 | + |
4389 | + def test_hint_force_skiptest_different_version(self): |
4390 | + '''force-skiptest hint with non-matching version''' |
4391 | + |
4392 | + # green has passed before |
4393 | + self.swift.set_results({'autopkgtest-series': { |
4394 | + 'series/i386/g/green/20150101_100000@': (0, 'green 1'), |
4395 | + }}) |
4396 | + |
4397 | + self.create_hint('pitti', 'force-skiptest green/1') |
4398 | + exc = self.do_test( |
4399 | + [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')], |
4400 | + {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4401 | + 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4402 | + 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4403 | + }), |
4404 | + }, |
4405 | + {'green': [('reason', 'autopkgtest')]} |
4406 | + )[1] |
4407 | + self.assertNotIn('forced-reason', exc['green']) |
4408 | + |
4409 | + ################################################################ |
4410 | + # Kernel related tests |
4411 | + ################################################################ |
4412 | + |
4413 | + def test_detect_dkms_autodep8(self): |
4414 | + '''DKMS packages are autopkgtested (via autodep8)''' |
4415 | + |
4416 | + self.data.add('dkms', False, {}) |
4417 | + self.data.add('fancy-dkms', False, {'Source': 'fancy', 'Depends': 'dkms (>= 1)'}) |
4418 | + |
4419 | + self.swift.set_results({'autopkgtest-series': { |
4420 | + 'series/i386/f/fancy/20150101_100101@': (0, 'fancy 0.1') |
4421 | + }}) |
4422 | + |
4423 | + self.do_test( |
4424 | + [('dkms', {'Version': '2'}, None)], |
4425 | + {'dkms': (False, {'fancy 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}})}, |
4426 | + {'dkms': [('old-version', '1'), ('new-version', '2')]}) |
4427 | + |
4428 | + def test_kernel_triggers_dkms(self): |
4429 | + '''DKMS packages get triggered by kernel uploads''' |
4430 | + |
4431 | + self.data.add('dkms', False, {}) |
4432 | + self.data.add('fancy-dkms', False, {'Source': 'fancy', 'Depends': 'dkms (>= 1)'}) |
4433 | + |
4434 | + self.do_test( |
4435 | + [('linux-image-generic', {'Source': 'linux-meta'}, None), |
4436 | + ('linux-image-grumpy-generic', {'Source': 'linux-meta-lts-grumpy'}, None), |
4437 | + ('linux-image-64only', {'Source': 'linux-meta-64only', 'Architecture': 'amd64'}, None), |
4438 | + ], |
4439 | + {'linux-meta': (True, {'fancy 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}}), |
4440 | + 'linux-meta-lts-grumpy': (True, {'fancy 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}}), |
4441 | + 'linux-meta-64only': (True, {'fancy 1': {'amd64': 'RUNNING'}}), |
4442 | + }) |
4443 | + |
4444 | + # one separate test should be triggered for each kernel |
4445 | + self.assertEqual( |
4446 | + self.amqp_requests, |
4447 | + set(['debci-series-i386:fancy {"triggers": ["linux-meta/1"]}', |
4448 | + 'debci-series-amd64:fancy {"triggers": ["linux-meta/1"]}', |
4449 | + 'debci-series-i386:fancy {"triggers": ["linux-meta-lts-grumpy/1"]}', |
4450 | + 'debci-series-amd64:fancy {"triggers": ["linux-meta-lts-grumpy/1"]}', |
4451 | + 'debci-series-amd64:fancy {"triggers": ["linux-meta-64only/1"]}'])) |
4452 | + |
4453 | + # ... and that they get recorded as pending |
4454 | + expected_pending = '''fancy 1 amd64 linux-meta 1 |
4455 | +fancy 1 amd64 linux-meta-64only 1 |
4456 | +fancy 1 amd64 linux-meta-lts-grumpy 1 |
4457 | +fancy 1 i386 linux-meta 1 |
4458 | +fancy 1 i386 linux-meta-lts-grumpy 1 |
4459 | +''' |
4460 | + self.assertEqual(self.pending_requests, expected_pending) |
4461 | + |
4462 | + def test_dkms_results_per_kernel(self): |
4463 | + '''DKMS results get mapped to the triggering kernel version''' |
4464 | + |
4465 | + self.data.add('dkms', False, {}) |
4466 | + self.data.add('fancy-dkms', False, {'Source': 'fancy', 'Depends': 'dkms (>= 1)'}) |
4467 | + |
4468 | + # works against linux-meta and -64only, fails against grumpy i386, no |
4469 | + # result yet for grumpy amd64 |
4470 | + self.swift.set_results({'autopkgtest-series': { |
4471 | + 'series/amd64/f/fancy/20150101_100301@': (0, 'fancy 1', tr('fancy/1')), |
4472 | + 'series/i386/f/fancy/20150101_100101@': (0, 'fancy 1', tr('linux-meta/1')), |
4473 | + 'series/amd64/f/fancy/20150101_100101@': (0, 'fancy 1', tr('linux-meta/1')), |
4474 | + 'series/amd64/f/fancy/20150101_100201@': (0, 'fancy 1', tr('linux-meta-64only/1')), |
4475 | + 'series/i386/f/fancy/20150101_100301@': (4, 'fancy 1', tr('linux-meta-lts-grumpy/1')), |
4476 | + }}) |
4477 | + |
4478 | + self.do_test( |
4479 | + [('linux-image-generic', {'Source': 'linux-meta'}, None), |
4480 | + ('linux-image-grumpy-generic', {'Source': 'linux-meta-lts-grumpy'}, None), |
4481 | + ('linux-image-64only', {'Source': 'linux-meta-64only', 'Architecture': 'amd64'}, None), |
4482 | + ], |
4483 | + {'linux-meta': (True, {'fancy 1': {'amd64': 'PASS', 'i386': 'PASS'}}), |
4484 | + 'linux-meta-lts-grumpy': (False, {'fancy 1': {'amd64': 'RUNNING', 'i386': 'ALWAYSFAIL'}}), |
4485 | + 'linux-meta-64only': (True, {'fancy 1': {'amd64': 'PASS'}}), |
4486 | + }) |
4487 | + |
4488 | + self.assertEqual(self.pending_requests, 'fancy 1 amd64 linux-meta-lts-grumpy 1\n') |
4489 | + |
4490 | + def test_dkms_results_per_kernel_old_results(self): |
4491 | + '''DKMS results get mapped to the triggering kernel version, old results''' |
4492 | + |
4493 | + self.data.add('dkms', False, {}) |
4494 | + self.data.add('fancy-dkms', False, {'Source': 'fancy', 'Depends': 'dkms (>= 1)'}) |
4495 | + |
4496 | + # works against linux-meta and -64only, fails against grumpy i386, no |
4497 | + # result yet for grumpy amd64 |
4498 | + self.swift.set_results({'autopkgtest-series': { |
4499 | + # old results without trigger info |
4500 | + 'series/i386/f/fancy/20140101_100101@': (0, 'fancy 1', {}), |
4501 | + 'series/amd64/f/fancy/20140101_100101@': (8, 'fancy 1', {}), |
4502 | + # current results with triggers |
4503 | + 'series/i386/f/fancy/20150101_100101@': (0, 'fancy 1', tr('linux-meta/1')), |
4504 | + 'series/amd64/f/fancy/20150101_100101@': (0, 'fancy 1', tr('linux-meta/1')), |
4505 | + 'series/amd64/f/fancy/20150101_100201@': (0, 'fancy 1', tr('linux-meta-64only/1')), |
4506 | + 'series/i386/f/fancy/20150101_100301@': (4, 'fancy 1', tr('linux-meta-lts-grumpy/1')), |
4507 | + }}) |
4508 | + |
4509 | + self.do_test( |
4510 | + [('linux-image-generic', {'Source': 'linux-meta'}, None), |
4511 | + ('linux-image-grumpy-generic', {'Source': 'linux-meta-lts-grumpy'}, None), |
4512 | + ('linux-image-64only', {'Source': 'linux-meta-64only', 'Architecture': 'amd64'}, None), |
4513 | + ], |
4514 | + {'linux-meta': (True, {'fancy 1': {'amd64': 'PASS', 'i386': 'PASS'}}), |
4515 | + # we don't have an explicit result for amd64, so the old one counts |
4516 | + 'linux-meta-lts-grumpy': (True, {'fancy 1': {'amd64': 'ALWAYSFAIL', 'i386': 'ALWAYSFAIL'}}), |
4517 | + 'linux-meta-64only': (True, {'fancy 1': {'amd64': 'PASS'}}), |
4518 | + }) |
4519 | + |
4520 | + self.assertEqual(self.pending_requests, '') |
4521 | + |
4522 | + def test_kernel_triggered_tests(self): |
4523 | + '''linux, lxc, glibc tests get triggered by linux-meta* uploads''' |
4524 | + |
4525 | + self.data.remove_all(False) |
4526 | + self.data.add('libc6-dev', False, {'Source': 'glibc', 'Depends': 'linux-libc-dev'}, |
4527 | + testsuite='autopkgtest') |
4528 | + self.data.add('lxc', False, {'Testsuite-Triggers': 'linux-generic'}, |
4529 | + testsuite='autopkgtest') |
4530 | + self.data.add('systemd', False, {'Testsuite-Triggers': 'linux-generic'}, |
4531 | + testsuite='autopkgtest') |
4532 | + self.data.add('linux-image-1', False, {'Source': 'linux'}, testsuite='autopkgtest') |
4533 | + self.data.add('linux-libc-dev', False, {'Source': 'linux'}, testsuite='autopkgtest') |
4534 | + self.data.add('linux-image', False, {'Source': 'linux-meta', 'Depends': 'linux-image-1'}) |
4535 | + |
4536 | + self.swift.set_results({'autopkgtest-series': { |
4537 | + 'series/amd64/l/lxc/20150101_100101@': (0, 'lxc 0.1') |
4538 | + }}) |
4539 | + |
4540 | + exc = self.do_test( |
4541 | + [('linux-image', {'Version': '2', 'Depends': 'linux-image-2', 'Source': 'linux-meta'}, None), |
4542 | + ('linux-image-64only', {'Source': 'linux-meta-64only', 'Architecture': 'amd64'}, None), |
4543 | + ('linux-image-2', {'Version': '2', 'Source': 'linux'}, 'autopkgtest'), |
4544 | + ('linux-libc-dev', {'Version': '2', 'Source': 'linux'}, 'autopkgtest'), |
4545 | + ], |
4546 | + {'linux-meta': (False, {'lxc 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4547 | + 'glibc 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4548 | + 'linux 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4549 | + 'systemd 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4550 | + }), |
4551 | + 'linux-meta-64only': (False, {'lxc 1': {'amd64': 'RUNNING'}}), |
4552 | + 'linux': (False, {}), |
4553 | + })[1] |
4554 | + # the kernel itself should not trigger tests; we want to trigger |
4555 | + # everything from -meta |
4556 | + self.assertNotIn('autopkgtest', exc['linux']['tests']) |
4557 | + |
4558 | + def test_kernel_waits_on_meta(self): |
4559 | + '''linux waits on linux-meta''' |
4560 | + |
4561 | + self.data.add('dkms', False, {}) |
4562 | + self.data.add('fancy-dkms', False, {'Source': 'fancy', 'Depends': 'dkms (>= 1)'}, testsuite='autopkgtest') |
4563 | + self.data.add('linux-image-generic', False, {'Version': '0.1', 'Source': 'linux-meta', 'Depends': 'linux-image-1'}) |
4564 | + self.data.add('linux-image-1', False, {'Source': 'linux'}, testsuite='autopkgtest') |
4565 | + self.data.add('linux-firmware', False, {'Source': 'linux-firmware'}, testsuite='autopkgtest') |
4566 | + |
4567 | + self.swift.set_results({'autopkgtest-series': { |
4568 | + 'series/i386/f/fancy/20150101_090000@': (0, 'fancy 0.5', tr('fancy/0.5')), |
4569 | + 'series/i386/l/linux/20150101_100000@': (0, 'linux 2', tr('linux-meta/0.2')), |
4570 | + 'series/amd64/l/linux/20150101_100000@': (0, 'linux 2', tr('linux-meta/0.2')), |
4571 | + 'series/i386/l/linux-firmware/20150101_100000@': (0, 'linux-firmware 2', tr('linux-firmware/2')), |
4572 | + 'series/amd64/l/linux-firmware/20150101_100000@': (0, 'linux-firmware 2', tr('linux-firmware/2')), |
4573 | + }}) |
4574 | + |
4575 | + self.do_test( |
4576 | + [('linux-image-generic', {'Version': '0.2', 'Source': 'linux-meta', 'Depends': 'linux-image-2'}, None), |
4577 | + ('linux-image-2', {'Version': '2', 'Source': 'linux'}, 'autopkgtest'), |
4578 | + ('linux-firmware', {'Version': '2', 'Source': 'linux-firmware'}, 'autopkgtest'), |
4579 | + ], |
4580 | + {'linux-meta': (False, {'fancy 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4581 | + 'linux 2': {'amd64': 'PASS', 'i386': 'PASS'} |
4582 | + }), |
4583 | + # no tests, but should wait on linux-meta |
4584 | + 'linux': (False, {}), |
4585 | + # this one does not have a -meta, so don't wait |
4586 | + 'linux-firmware': (True, {'linux-firmware 2': {'amd64': 'PASS', 'i386': 'PASS'}}), |
4587 | + }, |
4588 | + {'linux': [('reason', 'depends'), |
4589 | + ('excuses', 'Depends: linux linux-meta (not considered)')] |
4590 | + } |
4591 | + ) |
4592 | + |
4593 | + # now linux-meta is ready to go |
4594 | + self.swift.set_results({'autopkgtest-series': { |
4595 | + 'series/i386/f/fancy/20150101_100000@': (0, 'fancy 1', tr('linux-meta/0.2')), |
4596 | + 'series/amd64/f/fancy/20150101_100000@': (0, 'fancy 1', tr('linux-meta/0.2')), |
4597 | + }}) |
4598 | + self.do_test( |
4599 | + [], |
4600 | + {'linux-meta': (True, {'fancy 1': {'amd64': 'PASS', 'i386': 'PASS'}, |
4601 | + 'linux 2': {'amd64': 'PASS', 'i386': 'PASS'}}), |
4602 | + 'linux': (True, {}), |
4603 | + 'linux-firmware': (True, {'linux-firmware 2': {'amd64': 'PASS', 'i386': 'PASS'}}), |
4604 | + }, |
4605 | + {'linux': [('excuses', 'Depends: linux linux-meta')] |
4606 | + } |
4607 | + ) |
4608 | + |
4609 | + |
4610 | + ################################################################ |
4611 | + # Tests for special-cased packages |
4612 | + ################################################################ |
4613 | + |
4614 | + def test_gcc(self): |
4615 | + '''gcc only triggers some key packages''' |
4616 | + |
4617 | + self.data.add('binutils', False, {}, testsuite='autopkgtest') |
4618 | + self.data.add('linux', False, {}, testsuite='autopkgtest') |
4619 | + self.data.add('notme', False, {'Depends': 'libgcc1'}, testsuite='autopkgtest') |
4620 | + |
4621 | + # binutils has passed before |
4622 | + self.swift.set_results({'autopkgtest-series': { |
4623 | + 'series/i386/b/binutils/20150101_100000@': (0, 'binutils 1', tr('binutils/1')), |
4624 | + }}) |
4625 | + |
4626 | + exc = self.do_test( |
4627 | + [('libgcc1', {'Source': 'gcc-5', 'Version': '2'}, None)], |
4628 | + {'gcc-5': (False, {'binutils 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}, |
4629 | + 'linux 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}})})[1] |
4630 | + self.assertNotIn('notme 1', exc['gcc-5']['tests']['autopkgtest']) |
4631 | + |
4632 | + def test_alternative_gcc(self): |
4633 | + '''alternative gcc does not trigger anything''' |
4634 | + |
4635 | + self.data.add('binutils', False, {}, testsuite='autopkgtest') |
4636 | + self.data.add('notme', False, {'Depends': 'libgcc1'}, testsuite='autopkgtest') |
4637 | + |
4638 | + exc = self.do_test( |
4639 | + [('libgcc1', {'Source': 'gcc-snapshot', 'Version': '2'}, None)], |
4640 | + {'gcc-snapshot': (True, {})})[1] |
4641 | + self.assertNotIn('autopkgtest', exc['gcc-snapshot']['tests']) |
4642 | + |
4643 | + |
4644 | +if __name__ == '__main__': |
4645 | + unittest.main() |
4646 | |
4647 | === added file 'tests/test_boottest.py' |
4648 | --- tests/test_boottest.py 1970-01-01 00:00:00 +0000 |
4649 | +++ tests/test_boottest.py 2015-11-17 11:05:43 +0000 |
4650 | @@ -0,0 +1,445 @@ |
4651 | +#!/usr/bin/python3 |
4652 | +# (C) 2014 Canonical Ltd. |
4653 | +# |
4654 | +# This program is free software; you can redistribute it and/or modify |
4655 | +# it under the terms of the GNU General Public License as published by |
4656 | +# the Free Software Foundation; either version 2 of the License, or |
4657 | +# (at your option) any later version. |
4658 | + |
4659 | +import mock |
4660 | +import os |
4661 | +import shutil |
4662 | +import sys |
4663 | +import tempfile |
4664 | +import fileinput |
4665 | +import unittest |
4666 | + |
4667 | + |
4668 | +PROJECT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) |
4669 | +sys.path.insert(0, PROJECT_DIR) |
4670 | + |
4671 | +import boottest |
4672 | +from tests import TestBase |
4673 | + |
4674 | + |
4675 | +def create_manifest(manifest_dir, lines): |
4676 | + """Helper function for writing touch image manifests.""" |
4677 | + os.makedirs(manifest_dir) |
4678 | + with open(os.path.join(manifest_dir, 'manifest'), 'w') as fd: |
4679 | + fd.write('\n'.join(lines)) |
4680 | + |
4681 | + |
4682 | +class FakeResponse(object): |
4683 | + |
4684 | + def __init__(self, code=404, content=''): |
4685 | + self.code = code |
4686 | + self.content = content |
4687 | + |
4688 | + def read(self): |
4689 | + return self.content |
4690 | + |
4691 | + |
4692 | +class TestTouchManifest(unittest.TestCase): |
4693 | + |
4694 | + def setUp(self): |
4695 | + super(TestTouchManifest, self).setUp() |
4696 | + self.path = tempfile.mkdtemp(prefix='boottest') |
4697 | + os.chdir(self.path) |
4698 | + self.imagesdir = os.path.join(self.path, 'boottest/images') |
4699 | + os.makedirs(self.imagesdir) |
4700 | + self.addCleanup(shutil.rmtree, self.path) |
4701 | + _p = mock.patch('urllib.request.urlopen') |
4702 | + self.mocked_urlopen = _p.start() |
4703 | + self.mocked_urlopen.side_effect = [ |
4704 | + FakeResponse(code=404), |
4705 | + FakeResponse(code=404), |
4706 | + ] |
4707 | + self.addCleanup(_p.stop) |
4708 | + self.fetch_retries_orig = boottest.FETCH_RETRIES |
4709 | + |
4710 | + def restore_fetch_retries(): |
4711 | + boottest.FETCH_RETRIES = self.fetch_retries_orig |
4712 | + boottest.FETCH_RETRIES = 0 |
4713 | + self.addCleanup(restore_fetch_retries) |
4714 | + |
4715 | + def test_missing(self): |
4716 | + # Missing manifest file silently results in empty contents. |
4717 | + manifest = boottest.TouchManifest('I-dont-exist', 'vivid') |
4718 | + self.assertEqual([], manifest._manifest) |
4719 | + self.assertNotIn('foo', manifest) |
4720 | + |
4721 | + def test_fetch(self): |
4722 | + # A missing local manifest file is fetched dynamically. |
4723 | + self.mocked_urlopen.side_effect = [ |
4724 | + FakeResponse(code=200, content=b'foo 1.0'), |
4725 | + ] |
4726 | + manifest = boottest.TouchManifest('ubuntu-touch', 'vivid') |
4727 | + self.assertNotEqual([], manifest._manifest) |
4728 | + |
4729 | + def test_fetch_disabled(self): |
4730 | + # Manifest auto-fetching can be disabled. |
4731 | + manifest = boottest.TouchManifest('ubuntu-touch', 'vivid', fetch=False) |
4732 | + self.mocked_urlopen.assert_not_called() |
4733 | + self.assertEqual([], manifest._manifest) |
4734 | + |
4735 | + def test_fetch_fails(self): |
4736 | + project = 'fake' |
4737 | + series = 'fake' |
4738 | + manifest_dir = os.path.join(self.imagesdir, project, series) |
4739 | + manifest_lines = [ |
4740 | + 'foo:armhf 1~beta1', |
4741 | + ] |
4742 | + create_manifest(manifest_dir, manifest_lines) |
4743 | + manifest = boottest.TouchManifest(project, series) |
4744 | + self.assertEqual(1, len(manifest._manifest)) |
4745 | + self.assertIn('foo', manifest) |
4746 | + |
4747 | + def test_fetch_exception(self): |
4748 | + self.mocked_urlopen.side_effect = [ |
4749 | + IOError("connection refused"), |
4750 | + IOError("connection refused"), |
4751 | + ] |
4752 | + manifest = boottest.TouchManifest('not-real', 'not-real') |
4753 | + self.assertEqual(0, len(manifest._manifest)) |
4754 | + |
4755 | + def test_simple(self): |
4756 | + # Existing manifest file allows callsites to properly check presence. |
4757 | + manifest_dir = os.path.join(self.imagesdir, 'ubuntu/vivid') |
4758 | + manifest_lines = [ |
4759 | + 'bar 1234', |
4760 | + 'foo:armhf 1~beta1', |
4761 | + 'boing1-1.2\t666', |
4762 | + 'click:com.ubuntu.shorts 0.2.346' |
4763 | + ] |
4764 | + create_manifest(manifest_dir, manifest_lines) |
4765 | + |
4766 | + manifest = boottest.TouchManifest('ubuntu', 'vivid') |
4767 | + # We can dig deeper on the manifest package names list ... |
4768 | + self.assertEqual( |
4769 | + ['bar', 'boing1-1.2', 'foo'], manifest._manifest) |
4770 | + # but the '<name> in manifest' API reads better. |
4771 | + self.assertIn('foo', manifest) |
4772 | + self.assertIn('boing1-1.2', manifest) |
4773 | + self.assertNotIn('baz', manifest) |
4774 | + # 'click' name is blacklisted due to the click package syntax. |
4775 | + self.assertNotIn('click', manifest) |
4776 | + |
4777 | + |
4778 | +class TestBoottestEnd2End(TestBase): |
4779 | + """End2End tests (calling `britney`) for the BootTest criteria.""" |
4780 | + |
4781 | + def setUp(self): |
4782 | + super(TestBoottestEnd2End, self).setUp() |
4783 | + |
4784 | + # Modify shared configuration file. |
4785 | + with open(self.britney_conf, 'r') as fp: |
4786 | + original_config = fp.read() |
4787 | + # Disable autopkgtests. |
4788 | + new_config = original_config.replace( |
4789 | + 'ADT_ENABLE = yes', 'ADT_ENABLE = no') |
4790 | + # Enable boottest. |
4791 | + new_config = new_config.replace( |
4792 | + 'BOOTTEST_ENABLE = no', 'BOOTTEST_ENABLE = yes') |
4793 | + # Disable TouchManifest auto-fetching. |
4794 | + new_config = new_config.replace( |
4795 | + 'BOOTTEST_FETCH = yes', 'BOOTTEST_FETCH = no') |
4796 | + with open(self.britney_conf, 'w') as fp: |
4797 | + fp.write(new_config) |
4798 | + |
4799 | + self.data.add('libc6', False, {'Architecture': 'armhf'}) |
4800 | + |
4801 | + self.data.add( |
4802 | + 'libgreen1', |
4803 | + False, |
4804 | + {'Source': 'green', 'Architecture': 'armhf', |
4805 | + 'Depends': 'libc6 (>= 0.9)'}) |
4806 | + self.data.add( |
4807 | + 'green', |
4808 | + False, |
4809 | + {'Source': 'green', 'Architecture': 'armhf', |
4810 | + 'Depends': 'libc6 (>= 0.9), libgreen1'}) |
4811 | + self.create_manifest([ |
4812 | + 'green 1.0', |
4813 | + 'pyqt5:armhf 1.0', |
4814 | + 'signon 1.0', |
4815 | + 'purple 1.1', |
4816 | + ]) |
4817 | + |
4818 | + def create_manifest(self, lines): |
4819 | + """Create a manifest for this britney run context.""" |
4820 | + path = os.path.join( |
4821 | + self.data.path, |
4822 | + 'boottest/images/ubuntu-touch/{}'.format(self.data.series)) |
4823 | + create_manifest(path, lines) |
4824 | + |
4825 | + def make_boottest(self): |
4826 | + """Create a stub version of boottest-britney script.""" |
4827 | + script_path = os.path.expanduser( |
4828 | + "~/auto-package-testing/jenkins/boottest-britney") |
4829 | + if not os.path.exists(os.path.dirname(script_path)): |
4830 | + os.makedirs(os.path.dirname(script_path)) |
4831 | + with open(script_path, 'w') as f: |
4832 | + f.write('''#!%(py)s |
4833 | +import argparse |
4834 | +import os |
4835 | +import shutil |
4836 | +import sys |
4837 | + |
4838 | +template = """ |
4839 | +green 1.1~beta RUNNING |
4840 | +pyqt5-src 1.1~beta PASS |
4841 | +pyqt5-src 1.1 FAIL |
4842 | +signon 1.1 PASS |
4843 | +purple 1.1 RUNNING |
4844 | +""" |
4845 | + |
4846 | +def request(): |
4847 | + work_path = os.path.dirname(args.output) |
4848 | + os.makedirs(work_path) |
4849 | + shutil.copy(args.input, os.path.join(work_path, 'test_input')) |
4850 | + with open(args.output, 'w') as f: |
4851 | + f.write(template) |
4852 | + |
4853 | +def submit(): |
4854 | + pass |
4855 | + |
4856 | +def collect(): |
4857 | + with open(args.output, 'w') as f: |
4858 | + f.write(template) |
4859 | + |
4860 | +p = argparse.ArgumentParser() |
4861 | +p.add_argument('-r') |
4862 | +p.add_argument('-c') |
4863 | +p.add_argument('-d', default=False, action='store_true') |
4864 | +p.add_argument('-P', default=False, action='store_true') |
4865 | +p.add_argument('-U', default=False, action='store_true') |
4866 | + |
4867 | +sp = p.add_subparsers() |
4868 | + |
4869 | +psubmit = sp.add_parser('submit') |
4870 | +psubmit.add_argument('input') |
4871 | +psubmit.set_defaults(func=submit) |
4872 | + |
4873 | +prequest = sp.add_parser('request') |
4874 | +prequest.add_argument('-O', dest='output') |
4875 | +prequest.add_argument('input') |
4876 | +prequest.set_defaults(func=request) |
4877 | + |
4878 | +pcollect = sp.add_parser('collect') |
4879 | +pcollect.add_argument('-O', dest='output') |
4880 | +pcollect.set_defaults(func=collect) |
4881 | + |
4882 | +args = p.parse_args() |
4883 | +args.func() |
4884 | + ''' % {'py': sys.executable}) |
4885 | + os.chmod(script_path, 0o755) |
4886 | + |
4887 | + def do_test(self, context, expect=None, no_expect=None): |
4888 | + """Process the given package context and assert britney results.""" |
4889 | + for (pkg, fields) in context: |
4890 | + self.data.add(pkg, True, fields, testsuite='autopkgtest') |
4891 | + self.make_boottest() |
4892 | + (excuses_yaml, excuses, out) = self.run_britney() |
4893 | + # print('-------\nexcuses: %s\n-----' % excuses) |
4894 | + # print('-------\nout: %s\n-----' % out) |
4895 | + if expect: |
4896 | + for re in expect: |
4897 | + self.assertRegex(excuses, re) |
4898 | + if no_expect: |
4899 | + for re in no_expect: |
4900 | + self.assertNotRegex(excuses, re) |
4901 | + |
4902 | + def test_runs(self): |
4903 | +        # `Britney` runs and considers binary packages for boottesting |
4904 | +        # when it is enabled in the configuration; only binaries needed |
4905 | +        # in the phone image are considered for boottesting. |
4906 | +        # The boottest status is presented along with its corresponding |
4907 | +        # Jenkins job URLs for the public and the private servers. |
4908 | +        # 'in progress' tests block package promotion. |
4909 | + context = [ |
4910 | + ('green', {'Source': 'green', 'Version': '1.1~beta', |
4911 | + 'Architecture': 'armhf', 'Depends': 'libc6 (>= 0.9)'}), |
4912 | + ('libgreen1', {'Source': 'green', 'Version': '1.1~beta', |
4913 | + 'Architecture': 'armhf', |
4914 | + 'Depends': 'libc6 (>= 0.9)'}), |
4915 | + ] |
4916 | + public_jenkins_url = ( |
4917 | + 'https://jenkins.qa.ubuntu.com/job/series-boottest-green/' |
4918 | + 'lastBuild') |
4919 | + private_jenkins_url = ( |
4920 | + 'http://d-jenkins.ubuntu-ci:8080/view/Series/view/BootTest/' |
4921 | + 'job/series-boottest-green/lastBuild') |
4922 | + self.do_test( |
4923 | + context, |
4924 | + [r'\bgreen\b.*>1</a> to .*>1.1~beta<', |
4925 | + r'<li>Boottest result: {} \(Jenkins: ' |
4926 | + r'<a href="{}">public</a>, <a href="{}">private</a>\)'.format( |
4927 | + boottest.BootTest.EXCUSE_LABELS['RUNNING'], |
4928 | + public_jenkins_url, private_jenkins_url), |
4929 | + '<li>Not considered']) |
4930 | + |
4931 | + # The `boottest-britney` input (recorded for testing purposes), |
4932 | + # contains a line matching the requested boottest attempt. |
4933 | + # '<source> <version>\n' |
4934 | + test_input_path = os.path.join( |
4935 | + self.data.path, 'boottest/work/test_input') |
4936 | + with open(test_input_path) as f: |
4937 | + self.assertEqual( |
4938 | + ['green 1.1~beta\n'], f.readlines()) |
4939 | + |
4940 | + def test_pass(self): |
4941 | +        # `Britney` updates boottesting information in excuses when the |
4942 | +        # package test passes, and marks the package as a valid candidate |
4943 | +        # for promotion. |
4944 | + context = [] |
4945 | + context.append( |
4946 | + ('signon', {'Version': '1.1', 'Architecture': 'armhf'})) |
4947 | + self.do_test( |
4948 | + context, |
4949 | + [r'\bsignon\b.*\(- to .*>1.1<', |
4950 | + '<li>Boottest result: {}'.format( |
4951 | + boottest.BootTest.EXCUSE_LABELS['PASS']), |
4952 | + '<li>Valid candidate']) |
4953 | + |
4954 | + def test_fail(self): |
4955 | +        # `Britney` updates boottesting information in excuses when the |
4956 | +        # package test fails, and blocks package promotion |
4957 | +        # ('Not considered.'). |
4958 | + context = [] |
4959 | + context.append( |
4960 | + ('pyqt5', {'Source': 'pyqt5-src', 'Version': '1.1', |
4961 | + 'Architecture': 'all'})) |
4962 | + self.do_test( |
4963 | + context, |
4964 | + [r'\bpyqt5-src\b.*\(- to .*>1.1<', |
4965 | + '<li>Boottest result: {}'.format( |
4966 | + boottest.BootTest.EXCUSE_LABELS['FAIL']), |
4967 | + '<li>Not considered']) |
4968 | + |
4969 | + def test_unknown(self): |
4970 | +        # `Britney` does not block on missing boottest results for a |
4971 | +        # particular source/version, in this case pyqt5-src_1.2 (not |
4972 | +        # listed in the testing result history). Instead it renders |
4973 | +        # excuses with 'UNKNOWN STATUS' and links to the corresponding |
4974 | +        # Jenkins jobs for further investigation. Source promotion is |
4975 | +        # blocked, though. |
4976 | + context = [ |
4977 | + ('pyqt5', {'Source': 'pyqt5-src', 'Version': '1.2', |
4978 | + 'Architecture': 'armhf'})] |
4979 | + self.do_test( |
4980 | + context, |
4981 | + [r'\bpyqt5-src\b.*\(- to .*>1.2<', |
4982 | + r'<li>Boottest result: UNKNOWN STATUS \(Jenkins: .*\)', |
4983 | + '<li>Not considered']) |
4984 | + |
4985 | + def test_skipped_by_hints(self): |
4986 | + # `Britney` allows boottests to be skipped by hinting the |
4987 | + # corresponding source with 'force-skiptest'. The boottest |
4988 | + # attempt will not be requested. |
4989 | + context = [ |
4990 | + ('pyqt5', {'Source': 'pyqt5-src', 'Version': '1.1', |
4991 | + 'Architecture': 'all'}), |
4992 | + ] |
4993 | + self.create_hint('cjwatson', 'force-skiptest pyqt5-src/1.1') |
4994 | + self.do_test( |
4995 | + context, |
4996 | + [r'\bpyqt5-src\b.*\(- to .*>1.1<', |
4997 | + '<li>boottest skipped from hints by cjwatson', |
4998 | + '<li>Valid candidate']) |
4999 | + |
5000 | + def test_fail_but_forced_by_hints(self): |