Merge lp:~laney/britney/autopkgtest-no-delay-if-never-passed into lp:britney

Proposed by Iain Lane
Status: Superseded
Proposed branch: lp:~laney/britney/autopkgtest-no-delay-if-never-passed
Merge into: lp:britney
Diff against target: 5095 lines (+4252/-116) (has conflicts)
13 files modified
autopkgtest.py (+730/-0)
boottest.py (+293/-0)
britney.conf (+52/-15)
britney.py (+505/-80)
britney_nobreakall.conf (+44/-16)
britney_util.py (+110/-0)
consts.py (+1/-0)
excuse.py (+54/-5)
run-autopkgtest (+78/-0)
tests/__init__.py (+184/-0)
tests/mock_swift.py (+170/-0)
tests/test_autopkgtest.py (+1586/-0)
tests/test_boottest.py (+445/-0)
Text conflict in britney.conf
Text conflict in britney.py
Text conflict in britney_nobreakall.conf
Text conflict in britney_util.py
To merge this branch: bzr merge lp:~laney/britney/autopkgtest-no-delay-if-never-passed
Reviewer Review Type Date Requested Status
Ubuntu Package Archive Administrators Pending
Review via email: mp+277671@code.launchpad.net

This proposal has been superseded by a proposal from 2015-11-17.

Description of the change

This is an attempt at fixing bug #1501699.

With this branch, we now take the "ever_passed" flag into account when deciding whether a test run counts as passed. A package's tests count as passed if, on every arch, the result is ALWAYSFAIL, or the test is RUNNING but has never passed.

Many test cases relied on the previous behaviour, so I inserted some 'passing' test results to cause the packages to be held.

There are some subtleties around kernel triggering that I don't fully understand (this is my first venture into britney/autopkgtest) - please check that I didn't break anything.

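For reviewers, a condensed sketch of the policy this branch implements. The helper below is hypothetical and for illustration only; the real logic lives in AutoPackageTest.results() in the diff, which additionally special-cases linux-meta triggers:

    # Hypothetical condensation of the verdict logic; not part of the branch.
    def verdict(status, ever_passed):
        '''Map one test outcome to (label, blocks_promotion).

        status: True (pass), False (fail) or None (still running).
        ever_passed: whether this src/arch ever had a passing result.
        '''
        if status is True:
            return ('PASS', False)
        if status is False:
            # a failure only blocks promotion if the test used to pass
            return ('REGRESSION', True) if ever_passed else ('ALWAYSFAIL', False)
        # still running: only wait for tests that could regress,
        # i.e. tests that have passed at some point
        return ('RUNNING', ever_passed)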

Unmerged revisions

532. By Iain Lane

Don't wait for tests which have never passed

They won't ever block promotion, so we might as well make them
candidates right away (but trigger their tests).

531. By Iain Lane

Fix typo in test name triggerered -> triggered

530. By Iain Lane

tests: pretty print the excuses so that they are readable

529. By Iain Lane

Add passing results for various tests

We're going to modify britney so that RUNNING tests don't block
promotion if they have never passed - for this we will need to change a
few tests so that the packages being tested have passed before, meaning
that there could potentially be a regression.

528. By Martin Pitt

Restrict ADT_ARCHES to architectures we actually run for

This makes it simpler to run britney against a PPA with
--architectures='i386 amd64'.

527. By Martin Pitt

Split self.options.adt_arches once during initialization

For consistency with all other self.options.*_arches.

526. By Martin Pitt

tests: Use proper triggers in all tests

Also introduce a tr() shortcut for those. Drop the now redundant
test_rerun_failure_triggers().

525. By Martin Pitt

Autopkgtest: Request one test run per trigger for all packages

Stop special-casing the kernel and move to one test run per trigger. This
allows us to only install the triggering package from unstable and run the rest
out of testing, which gives much better isolation.

524. By Martin Pitt

Let linux-meta trigger systemd

523. By Martin Pitt

run-autopkgtest: Require --trigger argument, improve help

Preview Diff

=== added file 'autopkgtest.py'
--- autopkgtest.py 1970-01-01 00:00:00 +0000
+++ autopkgtest.py 2015-11-17 11:05:43 +0000
@@ -0,0 +1,730 @@
1# -*- coding: utf-8 -*-
2
3# Copyright (C) 2013 - 2015 Canonical Ltd.
4# Authors:
5# Colin Watson <cjwatson@ubuntu.com>
6# Jean-Baptiste Lallement <jean-baptiste.lallement@canonical.com>
7# Martin Pitt <martin.pitt@ubuntu.com>
8
9# This program is free software; you can redistribute it and/or modify
10# it under the terms of the GNU General Public License as published by
11# the Free Software Foundation; either version 2 of the License, or
12# (at your option) any later version.
13
14# This program is distributed in the hope that it will be useful,
15# but WITHOUT ANY WARRANTY; without even the implied warranty of
16# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17# GNU General Public License for more details.
18
19import os
20import time
21import json
22import tarfile
23import io
24import copy
25import re
26from urllib.parse import urlencode
27from urllib.request import urlopen
28
29import apt_pkg
30import kombu
31
32from consts import (AUTOPKGTEST, BINARIES, DEPENDS, RDEPENDS, SOURCE, VERSION)
33
34
35def srchash(src):
36 '''archive hash prefix for source package'''
37
38 if src.startswith('lib'):
39 return src[:4]
40 else:
41 return src[0]
42
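    # Illustration (not part of the diff); these prefixes reappear in the
    # swift result paths queried in fetch_swift_results() below:
    srchash('libpng')   # -> 'libp'
    srchash('glibc')    # -> 'g'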
43
44def latest_item(ver_map, min_version=None):
45 '''Return (ver, value) from version -> value map with latest version number
46
47 If min_version is given, version has to be >= that, otherwise a KeyError is
48 raised.
49 '''
50 latest = None
51 for ver in ver_map:
52 if latest is None or apt_pkg.version_compare(ver, latest) > 0:
53 latest = ver
54 if min_version is not None and latest is not None and \
55 apt_pkg.version_compare(latest, min_version) < 0:
56 latest = None
57
58 if latest is not None:
59 return (latest, ver_map[latest])
60 else:
61 raise KeyError('no version >= %s' % min_version)
62
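    # Illustration (not part of the diff): latest_item() under Debian
    # version ordering, with invented version strings.
    ver_map = {'1.0': 'old', '2.0~rc1': 'rc', '2.0': 'new'}
    latest_item(ver_map)                     # -> ('2.0', 'new')
    latest_item(ver_map, min_version='1.5')  # -> ('2.0', 'new')
    latest_item(ver_map, min_version='3.0')  # raises KeyError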
63
64class AutoPackageTest(object):
65 """autopkgtest integration
66
67 Look for autopkgtest jobs to run for each update that is otherwise a
68 valid candidate, and collect the results. If an update causes any
69 autopkgtest jobs to be run, then they must all pass before the update is
70 accepted.
71 """
72
73 def __init__(self, britney, distribution, series, debug=False):
74 self.britney = britney
75 self.distribution = distribution
76 self.series = series
77 self.debug = debug
78 self.excludes = set()
79 self.test_state_dir = os.path.join(britney.options.unstable,
80 'autopkgtest')
81 # map of requested tests from request()
82 # src -> ver -> arch -> {(triggering-src1, ver1), ...}
83 self.requested_tests = {}
84 # same map for tests requested in previous runs
85 self.pending_tests = None
86 self.pending_tests_file = os.path.join(self.test_state_dir, 'pending.txt')
87
88 if not os.path.isdir(self.test_state_dir):
89 os.mkdir(self.test_state_dir)
90 self.read_pending_tests()
91
92 # results map: src -> arch -> [latest_stamp, ver -> trigger -> passed, ever_passed]
93 # - It's tempting to just use a global "latest" time stamp, but due to
94 # swift's "eventual consistency" we might miss results with older time
95 # stamps from other packages that we don't see in the current run, but
96 # will in the next one. This doesn't hurt for older results of the same
97 # package.
98 # - trigger is "source/version" of an unstable package that triggered
99 # this test run. We need to track this to avoid unnecessarily
100 # re-running tests.
101 # - "passed" is a bool
102 # - ever_passed is a bool whether there is any successful test of
103 # src/arch of any version. This is used for detecting "regression"
104 # vs. "always failed"
105 self.test_results = {}
106 self.results_cache_file = os.path.join(self.test_state_dir, 'results.cache')
107
108 # read the cached results that we collected so far
109 if os.path.exists(self.results_cache_file):
110 with open(self.results_cache_file) as f:
111 self.test_results = json.load(f)
112 self.log_verbose('Read previous results from %s' % self.results_cache_file)
113 else:
114 self.log_verbose('%s does not exist, re-downloading all results '
115 'from swift' % self.results_cache_file)
116
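    # Hedged illustration of the results.cache schema documented above;
    # all package names, versions and stamps are invented:
    # {
    #   "systemd": {
    #     "amd64": [
    #       "20151110_124300",
    #       {"227-2ubuntu1": {"linux-meta/4.2.0.18.19": true,
    #                         "systemd/227-2ubuntu1": false}},
    #       true
    #     ]
    #   }
    # }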
117 def log_verbose(self, msg):
118 if self.britney.options.verbose:
119 print('I: [%s] - %s' % (time.asctime(), msg))
120
121 def log_error(self, msg):
122 print('E: [%s] - %s' % (time.asctime(), msg))
123
124 @classmethod
125 def has_autodep8(kls, srcinfo, binaries):
126 '''Check if package is covered by autodep8
127
128 srcinfo is an item from self.britney.sources
129 binaries is self.britney.binaries['unstable'][arch][0]
130 '''
131 # DKMS: some binary depends on "dkms"
132 for bin_arch in srcinfo[BINARIES]:
133 binpkg = bin_arch.split('/')[0] # chop off arch
134 try:
135 bininfo = binaries[binpkg]
136 except KeyError:
137 continue
138 if 'dkms' in (bininfo[DEPENDS] or ''):
139 return True
140 return False
141
142 def tests_for_source(self, src, ver, arch):
143 '''Iterate over all tests that should be run for given source and arch'''
144
145 sources_info = self.britney.sources['unstable']
146 binaries_info = self.britney.binaries['unstable'][arch][0]
147
148 reported_pkgs = set()
149
150 tests = []
151
152 # hack for vivid's gccgo-5
153 if src == 'gccgo-5':
154 for test in ['juju', 'juju-core', 'juju-mongodb', 'mongodb']:
155 try:
156 tests.append((test, self.britney.sources['testing'][test][VERSION]))
157 except KeyError:
158 # no package in that series? *shrug*, then not (mostly for testing)
159 pass
160 return tests
161
162 # gcc-N triggers tons of tests via libgcc1, but this is mostly in vain:
163 # gcc already tests itself during build, and it is being used from
164 # -proposed, so holding it back on a dozen unrelated test failures
165 # serves no purpose. Just check some key packages which actually use
166 # gcc during the test, and libreoffice as an example for a libgcc user.
167 if src.startswith('gcc-'):
168 if re.match('gcc-\d$', src):
169 for test in ['binutils', 'fglrx-installer', 'libreoffice', 'linux']:
170 try:
171 tests.append((test, self.britney.sources['testing'][test][VERSION]))
172 except KeyError:
173 # no package in that series? *shrug*, then not (mostly for testing)
174 pass
175 return tests
176 else:
177 # for other compilers such as gcc-snapshot etc. we don't need
178 # to trigger anything
179 return []
180
181 # for linux themselves we don't want to trigger tests -- these should
182 # all come from linux-meta*. A new kernel ABI without a corresponding
183 # -meta won't be installed and thus we can't sensibly run tests against
184 # it.
185 if src.startswith('linux') and src.replace('linux', 'linux-meta') in self.britney.sources['testing']:
186 return []
187
188 srcinfo = sources_info[src]
189 # we want to test the package itself, if it still has a test in
190 # unstable
191 if srcinfo[AUTOPKGTEST] or self.has_autodep8(srcinfo, binaries_info):
192 reported_pkgs.add(src)
193 tests.append((src, ver))
194
195 extra_bins = []
196 # Hack: For new kernels trigger all DKMS packages by pretending that
197 # linux-meta* builds a "dkms" binary as well. With that we ensure that we
198 # don't regress DKMS drivers with new kernel versions.
199 if src.startswith('linux-meta'):
200 # does this have any image on this arch?
201 for b in srcinfo[BINARIES]:
202 p, a = b.split('/', 1)
203 if a == arch and '-image' in p:
204 extra_bins.append('dkms')
205
206 # plus all direct reverse dependencies of its binaries which have
207 # an autopkgtest
208 for binary in srcinfo[BINARIES] + extra_bins:
209 binary = binary.split('/')[0] # chop off arch
210 try:
211 rdeps = binaries_info[binary][RDEPENDS]
212 except KeyError:
213 self.log_verbose('Ignoring nonexistent binary %s on %s (FTBFS/NBS)?' % (binary, arch))
214 continue
215 for rdep in rdeps:
216 rdep_src = binaries_info[rdep][SOURCE]
217 # if rdep_src/unstable is known to be not built yet or
218 # uninstallable, try to run tests against testing; if that
219 # works, then the unstable src does not break the testing
220 # rdep_src and is fine
221 if rdep_src in self.excludes:
222 try:
223 rdep_src_info = self.britney.sources['testing'][rdep_src]
224 self.log_verbose('Reverse dependency %s of %s/%s is unbuilt or uninstallable, running test against testing version %s' %
225 (rdep_src, src, ver, rdep_src_info[VERSION]))
226 except KeyError:
227 self.log_verbose('Reverse dependency %s of %s/%s is unbuilt or uninstallable and not present in testing, ignoring' %
228 (rdep_src, src, ver))
229 continue
230 else:
231 rdep_src_info = sources_info[rdep_src]
232 if rdep_src_info[AUTOPKGTEST] or self.has_autodep8(rdep_src_info, binaries_info):
233 if rdep_src not in reported_pkgs:
234 tests.append((rdep_src, rdep_src_info[VERSION]))
235 reported_pkgs.add(rdep_src)
236
237 # Hardcode linux-meta → linux, lxc, glibc, systemd triggers until we get a more flexible
238 # implementation: https://bugs.debian.org/779559
239 if src.startswith('linux-meta'):
240 for pkg in ['lxc', 'glibc', src.replace('linux-meta', 'linux'), 'systemd']:
241 if pkg not in reported_pkgs:
242 # does this have any image on this arch?
243 for b in srcinfo[BINARIES]:
244 p, a = b.split('/', 1)
245 if a == arch and '-image' in p:
246 try:
247 tests.append((pkg, self.britney.sources['unstable'][pkg][VERSION]))
248 except KeyError:
249 try:
250 tests.append((pkg, self.britney.sources['testing'][pkg][VERSION]))
251 except KeyError:
252 # package not in that series? *shrug*, then not
253 pass
254 break
255
256 tests.sort(key=lambda s_v: s_v[0])
257 return tests
258
259 #
260 # AMQP/cloud interface helpers
261 #
262
263 def read_pending_tests(self):
264 '''Read pending test requests from previous britney runs
265
266 Read UNSTABLE/autopkgtest/pending.txt with the format:
267 srcpkg srcver arch triggering-srcpkg triggering-srcver
268
269 Initialize self.pending_tests with that data.
270 '''
271 assert self.pending_tests is None, 'already initialized'
272 self.pending_tests = {}
273 if not os.path.exists(self.pending_tests_file):
274 self.log_verbose('No %s, starting with no pending tests' %
275 self.pending_tests_file)
276 return
277 with open(self.pending_tests_file) as f:
278 for l in f:
279 l = l.strip()
280 if not l:
281 continue
282 try:
283 (src, ver, arch, trigsrc, trigver) = l.split()
284 except ValueError:
285 self.log_error('ignoring malformed line in %s: %s' %
286 (self.pending_tests_file, l))
287 continue
288 self.pending_tests.setdefault(src, {}).setdefault(
289 ver, {}).setdefault(arch, set()).add((trigsrc, trigver))
290 self.log_verbose('Read pending requested tests from %s: %s' %
291 (self.pending_tests_file, self.pending_tests))
292
293 def update_pending_tests(self):
294 '''Update pending tests after submitting requested tests
295
296 Update UNSTABLE/autopkgtest/pending.txt, see read_pending_tests() for
297 the format.
298 '''
299 # merge requested_tests into pending_tests
300 for src, verinfo in self.requested_tests.items():
301 for ver, archinfo in verinfo.items():
302 for arch, triggers in archinfo.items():
303 self.pending_tests.setdefault(src, {}).setdefault(
304 ver, {}).setdefault(arch, set()).update(triggers)
305 self.requested_tests = {}
306
307 # write it
308 with open(self.pending_tests_file + '.new', 'w') as f:
309 for src in sorted(self.pending_tests):
310 for ver in sorted(self.pending_tests[src]):
311 for arch in sorted(self.pending_tests[src][ver]):
312 for (trigsrc, trigver) in sorted(self.pending_tests[src][ver][arch]):
313 f.write('%s %s %s %s %s\n' % (src, ver, arch, trigsrc, trigver))
314 os.rename(self.pending_tests_file + '.new', self.pending_tests_file)
315 self.log_verbose('Updated pending requested tests in %s' %
316 self.pending_tests_file)
317
318 def add_test_request(self, src, ver, arch, trigsrc, trigver):
319 '''Add one test request to the local self.requested_tests queue
320
321 This will only be done if that test wasn't already requested in a
322 previous run (i. e. not already in self.pending_tests) or there already
323 is a result for it.
324 '''
325 # check for existing results for both the requested and the current
326 # unstable version: test runs might see newly built versions which we
327 # didn't see in britney yet
328 ver_trig_results = self.test_results.get(src, {}).get(arch, [None, {}, None])[1]
329 unstable_ver = self.britney.sources['unstable'][src][VERSION]
330 try:
331 testing_ver = self.britney.sources['testing'][src][VERSION]
332 except KeyError:
333 testing_ver = unstable_ver
334 for result_ver in set([testing_ver, ver, unstable_ver]):
335 # result_ver might be < ver here; that's okay, if we already have a
336 # result for trigsrc/trigver we don't need to re-run it again
337 if result_ver not in ver_trig_results:
338 continue
339 for trigger in ver_trig_results[result_ver]:
340 (tsrc, tver) = trigger.split('/', 1)
341 if tsrc == trigsrc and apt_pkg.version_compare(tver, trigver) >= 0:
342 self.log_verbose('There already is a result for %s/%s/%s triggered by %s/%s' %
343 (src, result_ver, arch, tsrc, tver))
344 return
345
346 if (trigsrc, trigver) in self.pending_tests.get(src, {}).get(
347 ver, {}).get(arch, set()):
348 self.log_verbose('test %s/%s/%s for %s/%s is already pending, not queueing' %
349 (src, ver, arch, trigsrc, trigver))
350 return
351 self.requested_tests.setdefault(src, {}).setdefault(
352 ver, {}).setdefault(arch, set()).add((trigsrc, trigver))
353
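    # Invented example of the self.requested_tests nesting documented in
    # __init__ (src -> ver -> arch -> set of triggers):
    # requested_tests = {
    #     'libpng': {
    #         '1.2.51-0ubuntu3': {
    #             'amd64': {('gcc-5', '5.2.1-22ubuntu2')},
    #         },
    #     },
    # }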
354 def fetch_swift_results(self, swift_url, src, arch, trigger=None):
355 '''Download new results for source package/arch from swift'''
356
357 # prepare query: get all runs with a timestamp later than latest_stamp
358 # for this package/arch; '@' is at the end of each run timestamp, to
359 # mark the end of a test run directory path
360 # example: <autopkgtest-wily>wily/amd64/libp/libpng/20150630_054517@/result.tar
361 query = {'delimiter': '@',
362 'prefix': '%s/%s/%s/%s/' % (self.series, arch, srchash(src), src)}
363 try:
364 query['marker'] = query['prefix'] + self.test_results[src][arch][0]
365 except KeyError:
366 # no stamp yet, download all results
367 pass
368
369 # request new results from swift
370 url = os.path.join(swift_url, 'autopkgtest-' + self.series)
371 url += '?' + urlencode(query)
372 try:
373 f = urlopen(url)
374 if f.getcode() == 200:
375 result_paths = f.read().decode().strip().splitlines()
376 elif f.getcode() == 204: # No content
377 result_paths = []
378 else:
379 self.log_error('Failure to fetch swift results from %s: %u' %
380 (url, f.getcode()))
381 f.close()
382 return
383 f.close()
384 except IOError as e:
385 self.log_error('Failure to fetch swift results from %s: %s' % (url, str(e)))
386 return
387
388 for p in result_paths:
389 self.fetch_one_result(
390 os.path.join(swift_url, 'autopkgtest-' + self.series, p, 'result.tar'),
391 src, arch, trigger)
392
393 def fetch_one_result(self, url, src, arch, trigger=None):
394 '''Download one result URL for source/arch
395
396 Remove matching pending_tests entries. If trigger is given (src, ver)
397 it is added to the triggers of that result.
398 '''
399 try:
400 f = urlopen(url)
401 if f.getcode() == 200:
402 tar_bytes = io.BytesIO(f.read())
403 f.close()
404 else:
405 self.log_error('Failure to fetch %s: %u' % (url, f.getcode()))
406 return
407 except IOError as e:
408 self.log_error('Failure to fetch %s: %s' % (url, str(e)))
409 return
410
411 try:
412 with tarfile.open(None, 'r', tar_bytes) as tar:
413 exitcode = int(tar.extractfile('exitcode').read().strip())
414 srcver = tar.extractfile('testpkg-version').read().decode().strip()
415 (ressrc, ver) = srcver.split()
416 try:
417 testinfo = json.loads(tar.extractfile('testinfo.json').read().decode())
418 except KeyError:
419 self.log_error('warning: %s does not have a testinfo.json' % url)
420 testinfo = {}
421 except (KeyError, ValueError, tarfile.TarError) as e:
422 self.log_error('%s is damaged, ignoring: %s' % (url, str(e)))
423 # ignore this; this will leave an orphaned request in pending.txt
424 # and thus require manual retries after fixing the tmpfail, but we
425 # can't just blindly attribute it to some pending test.
426 return
427
428 if src != ressrc:
429 self.log_error('%s is a result for package %s, but expected package %s' %
430 (url, ressrc, src))
431 return
432
433 # parse recorded triggers in test result
434 if 'custom_environment' in testinfo:
435 for e in testinfo['custom_environment']:
436 if e.startswith('ADT_TEST_TRIGGERS='):
437 result_triggers = [tuple(i.split('/', 1)) for i in e.split('=', 1)[1].split() if '/' in i]
438 break
439 else:
440 result_triggers = None
441
442 stamp = os.path.basename(os.path.dirname(url))
443 # allow some skipped tests, but nothing else
444 passed = exitcode in [0, 2]
445
446 self.log_verbose('Fetched test result for %s/%s/%s %s (triggers: %s): %s' % (
447 src, ver, arch, stamp, result_triggers, passed and 'pass' or 'fail'))
448
449 # remove matching test requests, remember triggers
450 satisfied_triggers = set()
451 for request_map in [self.requested_tests, self.pending_tests]:
452 for pending_ver, pending_archinfo in request_map.get(src, {}).copy().items():
453 # don't consider newer requested versions
454 if apt_pkg.version_compare(pending_ver, ver) > 0:
455 continue
456
457 if result_triggers:
458 # explicitly recording/retrieving test triggers is the
459 # preferred (and robust) way of matching results to pending
460 # requests
461 for result_trigger in result_triggers:
462 satisfied_triggers.add(result_trigger)
463 try:
464 request_map[src][pending_ver][arch].remove(result_trigger)
465 self.log_verbose('-> matches pending request %s/%s/%s for trigger %s' %
466 (src, pending_ver, arch, str(result_trigger)))
467 except (KeyError, ValueError):
468 self.log_verbose('-> does not match any pending request for %s/%s/%s' %
469 (src, pending_ver, arch))
470 else:
471 # ... but we still need to support results without
472 # testinfo.json and recorded triggers until we stop caring about
473 # existing wily and trusty results; match the latest result to all
474 # triggers for src that have at least the requested version
475 try:
476 t = pending_archinfo[arch]
477 self.log_verbose('-> matches pending request %s/%s for triggers %s' %
478 (src, pending_ver, str(t)))
479 satisfied_triggers.update(t)
480 del request_map[src][pending_ver][arch]
481 except KeyError:
482 self.log_verbose('-> does not match any pending request for %s/%s' %
483 (src, pending_ver))
484
485 # FIXME: this is a hack that mostly applies to re-running tests
486 # manually without giving a trigger. Tests which don't get
487 # triggered by a particular kernel version are fine with that, so
488 # add some heuristic once we drop the above code.
489 if trigger:
490 satisfied_triggers.add(trigger)
491
492 # add this result
493 src_arch_results = self.test_results.setdefault(src, {}).setdefault(arch, [stamp, {}, False])
494 if passed:
495 # update ever_passed field, unless we got triggered from
496 # linux-meta*: we trigger separate per-kernel tests for reverse
497 # test dependencies, and we don't want to track per-trigger
498 # ever_passed. This would be wrong for everything except the
499 # kernel, and the kernel team tracks per-kernel regressions already
500 if not result_triggers or not result_triggers[0][0].startswith('linux-meta'):
501 src_arch_results[2] = True
502 if satisfied_triggers:
503 for trig in satisfied_triggers:
504 src_arch_results[1].setdefault(ver, {})[trig[0] + '/' + trig[1]] = passed
505 else:
506 # this result did not match any triggers? then we are in backwards
507 # compat mode for results without recorded triggers; update all
508 # results
509 for trig in src_arch_results[1].setdefault(ver, {}):
510 src_arch_results[1][ver][trig] = passed
511 # update latest_stamp
512 if stamp > src_arch_results[0]:
513 src_arch_results[0] = stamp
514
515 def failed_tests_for_trigger(self, trigsrc, trigver):
516 '''Return (src, arch) set for failed tests for given trigger pkg'''
517
518 result = set()
519 trigger = trigsrc + '/' + trigver
520 for src, srcinfo in self.test_results.items():
521 for arch, (stamp, vermap, ever_passed) in srcinfo.items():
522 for ver, trig_results in vermap.items():
523 if trig_results.get(trigger) is False:
524 result.add((src, arch))
525 return result
526
527 #
528 # Public API
529 #
530
531 def request(self, packages, excludes=None):
532 if excludes:
533 self.excludes.update(excludes)
534
535 self.log_verbose('Requested autopkgtests for %s, exclusions: %s' %
536 (['%s/%s' % i for i in packages], str(self.excludes)))
537 for src, ver in packages:
538 for arch in self.britney.options.adt_arches.split():
539 for (testsrc, testver) in self.tests_for_source(src, ver, arch):
540 self.add_test_request(testsrc, testver, arch, src, ver)
541
542 if self.britney.options.verbose:
543 for src, verinfo in self.requested_tests.items():
544 for ver, archinfo in verinfo.items():
545 for arch, triggers in archinfo.items():
546 self.log_verbose('Requesting %s/%s/%s autopkgtest to verify %s' %
547 (src, ver, arch, ', '.join(['%s/%s' % i for i in triggers])))
548
549 def submit(self):
550
551 def _arches(verinfo):
552 res = set()
553 for archinfo in verinfo.values():
554 res.update(archinfo.keys())
555 return res
556
557 def _trigsources(verinfo, arch):
558 '''Calculate the triggers for a given verinfo map
559
560 verinfo is ver -> arch -> {(triggering-src1, ver1), ...}, i. e. an
561 entry of self.requested_tests[arch]
562
563 Return a {trigger1, ...} set.
564 '''
565 triggers = set()
566 for archinfo in verinfo.values():
567 for (t, v) in archinfo.get(arch, []):
568 triggers.add(t + '/' + v)
569 return triggers
570
571 # build per-queue request strings for new test requests
572 # TODO: Once we support version constraints in AMQP requests, add them
573 # arch → (queue_name, [(pkg, params), ...])
574 arch_queues = {}
575 for arch in self.britney.options.adt_arches.split():
576 requests = []
577 for pkg, verinfo in self.requested_tests.items():
578 if arch in _arches(verinfo):
579 # if a package gets triggered by several sources, we can
580 # run just one test for all triggers; but for proposed
581 # kernels we want to run a separate test for each, so that
582 # the test runs under that particular kernel
583 triggers = _trigsources(verinfo, arch)
584 for t in sorted(triggers):
585 params = {'triggers': [t]}
586 requests.append((pkg, json.dumps(params)))
587 arch_queues[arch] = ('debci-%s-%s' % (self.series, arch), requests)
588
589 amqp_url = self.britney.options.adt_amqp
590
591 if amqp_url.startswith('amqp://'):
592 # in production mode, send them out via AMQP
593 with kombu.Connection(amqp_url) as conn:
594 for arch, (queue, requests) in arch_queues.items():
595 # don't use SimpleQueue here as it always declares queues;
596 # ACLs might not allow that
597 with kombu.Producer(conn, routing_key=queue, auto_declare=False) as p:
598 for (pkg, params) in requests:
599 p.publish(pkg + '\n' + params)
600 elif amqp_url.startswith('file://'):
601 # in testing mode, adt_amqp will be a file:// URL
602 with open(amqp_url[7:], 'a') as f:
603 for arch, (queue, requests) in arch_queues.items():
604 for (pkg, params) in requests:
605 f.write('%s:%s %s\n' % (queue, pkg, params))
606 else:
607 self.log_error('Unknown ADT_AMQP schema in %s' %
608 self.britney.options.adt_amqp)
609
610 # mark them as pending now
611 self.update_pending_tests()
612
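    # Illustration of what submit() emits for one request (package, series
    # and versions invented). Over AMQP the body is the package name, a
    # newline and the JSON parameters; routing key 'debci-wily-amd64':
    #   libpng
    #   {"triggers": ["gcc-5/5.2.1-22ubuntu2"]}
    # In file:// testing mode the same request is appended as one line:
    #   debci-wily-amd64:libpng {"triggers": ["gcc-5/5.2.1-22ubuntu2"]}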
613 def collect_requested(self):
614 '''Update results from swift for all requested packages
615
616 This is normally redundant with collect(), but avoids actually
617 sending test requests if results are already available. This mostly
618 happens when you have to blow away results.cache and let it rebuild
619 from scratch.
620 '''
621 for pkg, verinfo in copy.deepcopy(self.requested_tests).items():
622 for archinfo in verinfo.values():
623 for arch in archinfo:
624 self.fetch_swift_results(self.britney.options.adt_swift_url, pkg, arch)
625
626 def collect(self, packages):
627 '''Update results from swift for all pending packages
628
629 Remove pending tests for which we have results.
630 '''
631 for pkg, verinfo in copy.deepcopy(self.pending_tests).items():
632 for archinfo in verinfo.values():
633 for arch in archinfo:
634 self.fetch_swift_results(self.britney.options.adt_swift_url, pkg, arch)
635 # also update results for excuses whose tests failed, in case a
636 # manual retry worked
637 for (trigpkg, trigver) in packages:
638 for (pkg, arch) in self.failed_tests_for_trigger(trigpkg, trigver):
639 if arch not in self.pending_tests.get(trigpkg, {}).get(trigver, {}):
640 self.log_verbose('Checking for new results for failed %s on %s for trigger %s/%s' %
641 (pkg, arch, trigpkg, trigver))
642 self.fetch_swift_results(self.britney.options.adt_swift_url, pkg, arch, (trigpkg, trigver))
643
644 # update the results cache
645 with open(self.results_cache_file + '.new', 'w') as f:
646 json.dump(self.test_results, f, indent=2)
647 os.rename(self.results_cache_file + '.new', self.results_cache_file)
648 self.log_verbose('Updated results cache')
649
650 # new results remove pending requests, update the on-disk cache
651 self.update_pending_tests()
652
653 def results(self, trigsrc, trigver):
654 '''Return test results for triggering package
655
656 Return (passed, src, ver, arch -> ALWAYSFAIL|PASS|FAIL|RUNNING)
657 iterable for all package tests that got triggered by trigsrc/trigver.
658 '''
659 # (src, ver) -> arch -> ALWAYSFAIL|PASS|FAIL|RUNNING
660 pkg_arch_result = {}
661 trigger = trigsrc + '/' + trigver
662
663 for arch in self.britney.options.adt_arches.split():
664 for testsrc, testver in self.tests_for_source(trigsrc, trigver, arch):
665 passed = True
666 try:
667 (_, ver_map, ever_passed) = self.test_results[testsrc][arch]
668
669 # check if we have a result for any version of testsrc that
670 # was triggered for trigsrc/trigver; we prefer PASSes, as
671 # it could be that an unrelated package upload could break
672 # testsrc's tests at a later point
673 status = None
674 for ver, trigger_results in ver_map.items():
675 try:
676 status = trigger_results[trigger]
677 testver = ver
678 # if we found a PASS, we can stop searching
679 if status is True:
680 break
681 except KeyError:
682 pass
683
684 if status is None:
685 # no result? go to "still running" below
686 raise KeyError
687
688 if status:
689 result = 'PASS'
690 else:
691 # test failed, check ever_passed flag for that src/arch
692 # unless we got triggered from linux-meta*: we trigger
693 # separate per-kernel tests for reverse test
694 # dependencies, and we don't want to track per-trigger
695 # ever_passed. This would be wrong for everything
696 # except the kernel, and the kernel team tracks
697 # per-kernel regressions already
698 if ever_passed and not trigsrc.startswith('linux-meta') and trigsrc != 'linux':
699 result = 'REGRESSION'
700 passed = False
701 else:
702 result = 'ALWAYSFAIL'
703 except KeyError:
704 # no result for testsrc/testver/arch; still running?
705 try:
706 self.pending_tests[testsrc][testver][arch]
707 # if we can't find a result, assume that it has never passed (i.e. this is the first run)
708 (_, _, ever_passed) = self.test_results.get(testsrc, {}).get(arch, (None, None, False))
709
710 passed = not ever_passed
711 result = 'RUNNING'
712 except KeyError:
713 # ignore if adt or swift results are disabled,
714 # otherwise this is unexpected
715 if not hasattr(self.britney.options, 'adt_swift_url'):
716 continue
717 # FIXME: Ignore this error for now as it crashes britney, but investigate!
718 self.log_error('FIXME: Result for %s/%s/%s (triggered by %s) is neither known nor pending!' %
719 (testsrc, testver, arch, trigger))
720 continue
721
722 pkg_arch_result.setdefault((testsrc, testver), {})[arch] = (result, passed)
723
724 for ((testsrc, testver), arch_results) in pkg_arch_result.items():
725 # extract "passed"
726 ps = [p for (_, p) in arch_results.values()]
727 # and then strip them out - these aren't part of our API
728 arch_results = {arch: r for arch, (r, p) in arch_results.items()}
729 # we've passed if all results are True
730 yield (False not in ps, testsrc, testver, arch_results)
0731
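
A hedged sketch of how britney.py is expected to drive this class in one run; the call order is assumed, and the real wiring lives in britney.py hunks not shown in this preview:

    # Assumed driver sequence using only the public API above.
    autopkgtest = AutoPackageTest(britney, 'ubuntu', 'wily')
    autopkgtest.request([('gcc-5', '5.2.1-22ubuntu2')])
    autopkgtest.collect_requested()   # reuse results already in swift
    autopkgtest.submit()              # queue whatever is still missing
    autopkgtest.collect([('gcc-5', '5.2.1-22ubuntu2')])
    for passed, src, ver, arch_results in autopkgtest.results(
            'gcc-5', '5.2.1-22ubuntu2'):
        print(passed, src, ver, arch_results)
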
=== added file 'boottest.py'
--- boottest.py 1970-01-01 00:00:00 +0000
+++ boottest.py 2015-11-17 11:05:43 +0000
@@ -0,0 +1,293 @@
1# -*- coding: utf-8 -*-
2
3# Copyright (C) 2015 Canonical Ltd.
4
5# This program is free software; you can redistribute it and/or modify
6# it under the terms of the GNU General Public License as published by
7# the Free Software Foundation; either version 2 of the License, or
8# (at your option) any later version.
9
10# This program is distributed in the hope that it will be useful,
11# but WITHOUT ANY WARRANTY; without even the implied warranty of
12# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13# GNU General Public License for more details.
14
15
16from collections import defaultdict
17from contextlib import closing
18import os
19import subprocess
20import tempfile
21from textwrap import dedent
22import time
23import urllib.request
24
25import apt_pkg
26
27from consts import BINARIES
28
29
30FETCH_RETRIES = 3
31
32
33class TouchManifest(object):
34 """Parses a corresponding touch image manifest.
35
36 Based on http://cdimage.u.c/ubuntu-touch/daily-preinstalled/pending/vivid-preinstalled-touch-armhf.manifest
37
38 Assumes the deployment is arranged in a way that the manifest is available
39 and fresh on:
40
41 '{britney_cwd}/boottest/images/{distribution}/{series}/manifest'
42
43 Only binary name matters, version is ignored, so callsites can:
44
45 >>> manifest = TouchManifest('ubuntu-touch', 'vivid')
46 >>> 'webbrowser-app' in manifest
47 True
48 >>> 'firefox' in manifest
49 False
50
51 """
52
53 def __init__(self, project, series, verbose=False, fetch=True):
54 self.verbose = verbose
55 self.path = "boottest/images/{}/{}/manifest".format(
56 project, series)
57 success = False
58 if fetch:
59 retries = FETCH_RETRIES
60 success = self.__fetch_manifest(project, series)
61
62 while retries > 0 and not success:
63 success = self.__fetch_manifest(project, series)
64 retries -= 1
65 if not success:
66 print("E: [%s] - Unable to fetch manifest: %s %s" % (
67 time.asctime(), project, series))
68
69 self._manifest = self._load()
70
71 def __fetch_manifest(self, project, series):
72 # There are two url formats that may lead to the proper manifest
73 # file. The first form is for series that have been released,
74 # the second form is for the current development series.
75 # Only one of these is expected to exist for any given series.
76 url_list = [
77 "http://cdimage.ubuntu.com/{}/{}/daily-preinstalled/pending/" \
78 "{}-preinstalled-touch-armhf.manifest".format(
79 project, series, series),
80 "http://cdimage.ubuntu.com/{}/daily-preinstalled/pending/" \
81 "{}-preinstalled-touch-armhf.manifest".format(
82 project, series),
83 ]
84
85 success = False
86 for url in url_list:
87 if self.verbose:
88 print("I: [%s] - Fetching manifest from %s" %
89 (time.asctime(), url))
90 print("I: [%s] - saving it to %s" %
91 (time.asctime(), self.path))
92 try:
93 response = urllib.request.urlopen(url)
94 if response.code == 200:
95 # Only [re]create the manifest file if one was successfully
96 # downloaded. This allows for an existing image to be used
97 # if the download fails.
98 path_dir = os.path.dirname(self.path)
99 if not os.path.exists(path_dir):
100 os.makedirs(path_dir)
101 with open(self.path, 'wb') as fp:
102 fp.write(response.read())
103 success = True
104 break
105
106 except IOError as e:
107 print("W: [%s] - error connecting to %s: %s" % (
108 time.asctime(), url, e))
109
110 return success
111
112 def _load(self):
113 pkg_list = []
114
115 if not os.path.exists(self.path):
116 return pkg_list
117
118 with open(self.path) as fd:
119 for line in fd.readlines():
120 # skip headers and metadata
121 if 'DOCTYPE' in line:
122 continue
123 name, version = line.split()
124 name = name.split(':')[0]
125 if name == 'click':
126 continue
127 pkg_list.append(name)
128
129 return sorted(pkg_list)
130
131 def __contains__(self, key):
132 return key in self._manifest
133
134
135class BootTest(object):
136 """Boottest criteria for Britney.
137
138 This class provides an API for handling the boottest-jenkins
139 integration layer (mostly derived from auto-package-testing/adt):
140 """
141 VALID_STATUSES = ('PASS',)
142
143 EXCUSE_LABELS = {
144 "PASS": '<span style="background:#87d96c">Pass</span>',
145 "FAIL": '<span style="background:#ff6666">Regression</span>',
146 "RUNNING": '<span style="background:#99ddff">Test in progress</span>',
147 }
148
149 script_path = os.path.expanduser(
150 "~/auto-package-testing/jenkins/boottest-britney")
151
152 def __init__(self, britney, distribution, series, debug=False):
153 self.britney = britney
154 self.distribution = distribution
155 self.series = series
156 self.debug = debug
157 self.rc_path = None
158 self._read()
159 manifest_fetch = getattr(
160 self.britney.options, "boottest_fetch", "no") == "yes"
161 self.phone_manifest = TouchManifest(
162 'ubuntu-touch', self.series, fetch=manifest_fetch,
163 verbose=self.britney.options.verbose)
164
165 @property
166 def _request_path(self):
167 return "boottest/work/adt.request.%s" % self.series
168
169 @property
170 def _result_path(self):
171 return "boottest/work/adt.result.%s" % self.series
172
173 def _ensure_rc_file(self):
174 if self.rc_path:
175 return
176 self.rc_path = os.path.abspath("boottest/rc.%s" % self.series)
177 with open(self.rc_path, "w") as rc_file:
178 home = os.path.expanduser("~")
179 print(dedent("""\
180 release: %s
181 aptroot: ~/.chdist/%s-proposed-armhf/
182 apturi: file:%s/mirror/%s
183 components: main restricted universe multiverse
184 rsync_host: rsync://tachash.ubuntu-ci/boottest/
185 datadir: ~/proposed-migration/boottest/data""" %
186 (self.series, self.series, home, self.distribution)),
187 file=rc_file)
188
189 def _run(self, *args):
190 self._ensure_rc_file()
191 if not os.path.exists(self.script_path):
192 print("E: [%s] - Boottest/Jenking glue script missing: %s" % (
193 time.asctime(), self.script_path))
194 return '-'
195 command = [
196 self.script_path,
197 "-c", self.rc_path,
198 "-r", self.series,
199 "-PU",
200 ]
201 if self.debug:
202 command.append("-d")
203 command.extend(args)
204 return subprocess.check_output(command).strip()
205
206 def _read(self):
207 """Loads a list of results (sources tests and their status).
208
209 Provides internal data for `get_status()`.
210 """
211 self.pkglist = defaultdict(dict)
212 if not os.path.exists(self._result_path):
213 return
214 with open(self._result_path) as f:
215 for line in f:
216 line = line.strip()
217 if line.startswith("Suite:") or line.startswith("Date:"):
218 continue
219 linebits = line.split()
220 if len(linebits) < 3:
221 print("W: Invalid line format: '%s', skipped" % line)
222 continue
223 (src, ver, status) = linebits[:3]
224 if not (src in self.pkglist and ver in self.pkglist[src]):
225 self.pkglist[src][ver] = status
226
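    # Hedged example of the adt.result file parsed above (header lines are
    # skipped; each remaining line is 'src version status'; entries invented):
    #   Suite: wily
    #   Date: Tue Nov 17 11:05:43 UTC 2015
    #   systemd 227-2ubuntu1 PASS
    #   webbrowser-app 0.23+15.10.20151022-0ubuntu1 RUNNING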
227 def get_status(self, name, version):
228 """Return test status for the given source name and version."""
229 try:
230 return self.pkglist[name][version]
231 except KeyError:
232 # This error handling accounts for outdated apt caches, when
233 # `boottest-britney` erroneously reports results for the
234 # current source version, instead of the proposed.
235 # Returning None here will block source promotion with:
236 # 'UNKNOWN STATUS' excuse. If the jobs are retried and its
237 # results find an up-to-date cache, the problem is gone.
238 print("E: [%s] - Missing boottest results for %s_%s" % (
239 time.asctime(), name, version))
240 return None
241
242 def request(self, packages):
243 """Requests boottests for the given sources list ([(src, ver),])."""
244 request_path = self._request_path
245 if os.path.exists(request_path):
246 os.unlink(request_path)
247 with closing(tempfile.NamedTemporaryFile(mode="w")) as request_file:
248 for src, ver in packages:
249 if src in self.pkglist and ver in self.pkglist[src]:
250 continue
251 print("%s %s" % (src, ver), file=request_file)
252 # Update 'pkglist' so even if submit/collect is not called
253 # (dry-run), britney has some results.
254 self.pkglist[src][ver] = 'RUNNING'
255 request_file.flush()
256 self._run("request", "-O", request_path, request_file.name)
257
258 def submit(self):
259 """Submits the current boottests requests for processing."""
260 self._run("submit", self._request_path)
261
262 def collect(self):
263 """Collects boottests results and updates internal registry."""
264 self._run("collect", "-O", self._result_path)
265 self._read()
266 if not self.britney.options.verbose:
267 return
268 for src in sorted(self.pkglist):
269 for ver in sorted(self.pkglist[src]):
270 status = self.pkglist[src][ver]
271 print("I: [%s] - Collected boottest status for %s_%s: "
272 "%s" % (time.asctime(), src, ver, status))
273
274 def needs_test(self, name, version):
275 """Whether or not the given source and version should be tested.
276
277 Sources are only considered for boottesting if they produce binaries
278 that are part of the phone image manifest. See `TouchManifest`.
279 """
280 # Discover all binaries for the 'excused' source.
281 unstable_sources = self.britney.sources['unstable']
282 # Dismiss if source is not yet recognized (??).
283 if name not in unstable_sources:
284 return False
285 # Binaries are a seq of "<binname>/<arch>" and, practically, boottest
286 # is only concerned about armhf binaries mentioned in the phone
287 # manifest. Anything else should be skipped.
288 phone_binaries = [
289 b for b in unstable_sources[name][BINARIES]
290 if b.split('/')[1] in self.britney.options.boottest_arches.split()
291 and b.split('/')[0] in self.phone_manifest
292 ]
293 return bool(phone_binaries)
0294
=== modified file 'britney.conf'
--- britney.conf 2015-10-27 17:32:31 +0000
+++ britney.conf 2015-11-17 11:05:43 +0000
@@ -1,26 +1,25 @@
1# Configuration file for britney1# Configuration file for britney
22
3# Paths for control files3# Paths for control files
4TESTING = /srv/release.debian.org/britney/var/data-b2/testing4TESTING = data/%(SERIES)
5TPU = /srv/release.debian.org/britney/var/data-b2/testing-proposed-updates5UNSTABLE = data/%(SERIES)-proposed
6PU = /srv/release.debian.org/britney/var/data-b2/proposed-updates6PARTIAL_UNSTABLE = yes
7UNSTABLE = /srv/release.debian.org/britney/var/data-b2/unstable
87
9# Output8# Output
10NONINST_STATUS = /srv/release.debian.org/britney/var/data-b2/non-installable-status9NONINST_STATUS = data/%(SERIES)/non-installable-status
11EXCUSES_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/excuses.html10EXCUSES_OUTPUT = output/%(SERIES)/excuses.html
12EXCUSES_YAML_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/excuses.yaml11EXCUSES_YAML_OUTPUT = output/%(SERIES)/excuses.yaml
13UPGRADE_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/output.txt12UPGRADE_OUTPUT = output/%(SERIES)/output.txt
14HEIDI_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/HeidiResult13HEIDI_OUTPUT = output/%(SERIES)/HeidiResult
1514
16# List of release architectures15# List of release architectures
17ARCHITECTURES = i386 amd64 arm64 armel armhf mips mipsel powerpc ppc64el s390x16ARCHITECTURES = amd64 arm64 armhf i386 powerpc ppc64el
1817
19# if you're not in this list, arch: all packages are allowed to break on you18# if you're not in this list, arch: all packages are allowed to break on you
20NOBREAKALL_ARCHES = i386 amd6419NOBREAKALL_ARCHES = amd64
2120
22# if you're in this list, your packages may not stay in sync with the source21# if you're in this list, your packages may not stay in sync with the source
23FUCKED_ARCHES = 22OUTOFSYNC_ARCHES =
2423
25# if you're in this list, your uninstallability count may increase24# if you're in this list, your uninstallability count may increase
26BREAK_ARCHES =25BREAK_ARCHES =
@@ -29,14 +28,15 @@
29NEW_ARCHES =28NEW_ARCHES =
3029
31# priorities and delays30# priorities and delays
32MINDAYS_LOW = 1031MINDAYS_LOW = 0
33MINDAYS_MEDIUM = 532MINDAYS_MEDIUM = 0
34MINDAYS_HIGH = 233MINDAYS_HIGH = 0
35MINDAYS_CRITICAL = 034MINDAYS_CRITICAL = 0
36MINDAYS_EMERGENCY = 035MINDAYS_EMERGENCY = 0
37DEFAULT_URGENCY = medium36DEFAULT_URGENCY = medium
3837
39# hint permissions38# hint permissions
39<<<<<<< TREE
40HINTS_ABA = ALL40HINTS_ABA = ALL
41HINTS_PKERN = STANDARD force41HINTS_PKERN = STANDARD force
42HINTS_ADSB = STANDARD force force-hint42HINTS_ADSB = STANDARD force force-hint
@@ -52,12 +52,49 @@
52HINTS_FREEZE-EXCEPTION = unblock unblock-udeb52HINTS_FREEZE-EXCEPTION = unblock unblock-udeb
53HINTS_SATBRITNEY = easy53HINTS_SATBRITNEY = easy
54HINTS_AUTO-REMOVALS = remove54HINTS_AUTO-REMOVALS = remove
55=======
56HINTS_CJWATSON = ALL
57HINTS_ADCONRAD = ALL
58HINTS_KITTERMAN = ALL
59HINTS_LANEY = ALL
60HINTS_JRIDDELL = ALL
61HINTS_STEFANOR = ALL
62HINTS_STGRABER = ALL
63HINTS_VORLON = ALL
64HINTS_PITTI = ALL
65HINTS_FREEZE = block block-all
66
67HINTS_UBUNTU-TOUCH/DIDROCKS = block unblock
68HINTS_UBUNTU-TOUCH/EV = block unblock
69HINTS_UBUNTU-TOUCH/KEN-VANDINE = block unblock
70HINTS_UBUNTU-TOUCH/LOOL = block unblock
71HINTS_UBUNTU-TOUCH/MATHIEU-TL = block unblock
72HINTS_UBUNTU-TOUCH/OGRA = block unblock
73>>>>>>> MERGE-SOURCE
5574
56# support for old libraries in testing (smooth update)75# support for old libraries in testing (smooth update)
57# use ALL to enable smooth updates for all the sections76# use ALL to enable smooth updates for all the sections
58#77#
59# naming a non-existent section will effectively disable new smooth78# naming a non-existent section will effectively disable new smooth
60# updates but still allow removals to occur79# updates but still allow removals to occur
80<<<<<<< TREE
61SMOOTH_UPDATES = libs oldlibs81SMOOTH_UPDATES = libs oldlibs
6282
63IGNORE_CRUFT = 183IGNORE_CRUFT = 1
84=======
85SMOOTH_UPDATES = badgers
86
87REMOVE_OBSOLETE = no
88
89ADT_ENABLE = yes
90ADT_DEBUG = no
91ADT_ARCHES = amd64 i386 armhf ppc64el
92ADT_AMQP = amqp://test_request:password@162.213.33.228
93# Swift base URL with the results (must be publicly readable and browsable)
94ADT_SWIFT_URL = https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac
95
96BOOTTEST_ENABLE = no
97BOOTTEST_DEBUG = yes
98BOOTTEST_ARCHES = armhf amd64
99BOOTTEST_FETCH = yes
100>>>>>>> MERGE-SOURCE
64101
=== modified file 'britney.py'
--- britney.py 2015-11-01 09:09:30 +0000
+++ britney.py 2015-11-17 11:05:43 +0000
@@ -62,6 +62,9 @@
62 * Hints, which contains lists of commands which modify the standard behaviour62 * Hints, which contains lists of commands which modify the standard behaviour
63 of Britney (see Britney.read_hints).63 of Britney (see Britney.read_hints).
6464
65 * Blocks, which contains user-supplied blocks read from Launchpad bugs
66 (see Britney.read_blocks).
67
65For a more detailed explanation about the format of these files, please read68For a more detailed explanation about the format of these files, please read
66the documentation of the related methods. The exact meaning of them will be69the documentation of the related methods. The exact meaning of them will be
67instead explained in the chapter "Excuses Generation".70instead explained in the chapter "Excuses Generation".
@@ -204,11 +207,18 @@
204 read_nuninst, write_nuninst, write_heidi,207 read_nuninst, write_nuninst, write_heidi,
205 eval_uninst, newly_uninst, make_migrationitem,208 eval_uninst, newly_uninst, make_migrationitem,
206 write_excuses, write_heidi_delta, write_controlfiles,209 write_excuses, write_heidi_delta, write_controlfiles,
207 old_libraries, is_nuninst_asgood_generous,210 old_libraries, is_nuninst_asgood_generous, ensuredir,
208 clone_nuninst)211 clone_nuninst)
209from consts import (VERSION, SECTION, BINARIES, MAINTAINER, FAKESRC,212from consts import (VERSION, SECTION, BINARIES, MAINTAINER, FAKESRC,
210 SOURCE, SOURCEVER, ARCHITECTURE, DEPENDS, CONFLICTS,213 SOURCE, SOURCEVER, ARCHITECTURE, DEPENDS, CONFLICTS,
214<<<<<<< TREE
211 PROVIDES, MULTIARCH, ESSENTIAL)215 PROVIDES, MULTIARCH, ESSENTIAL)
216=======
217 PROVIDES, RDEPENDS, RCONFLICTS, MULTIARCH, ESSENTIAL)
218from autopkgtest import AutoPackageTest, srchash
219from boottest import BootTest
220
221>>>>>>> MERGE-SOURCE
212222
213__author__ = 'Fabio Tranchitella and the Debian Release Team'223__author__ = 'Fabio Tranchitella and the Debian Release Team'
214__version__ = '2.0'224__version__ = '2.0'
@@ -227,7 +237,7 @@
227237
228 HINTS_HELPERS = ("easy", "hint", "remove", "block", "block-udeb", "unblock", "unblock-udeb", "approve")238 HINTS_HELPERS = ("easy", "hint", "remove", "block", "block-udeb", "unblock", "unblock-udeb", "approve")
229 HINTS_STANDARD = ("urgent", "age-days") + HINTS_HELPERS239 HINTS_STANDARD = ("urgent", "age-days") + HINTS_HELPERS
230 HINTS_ALL = ("force", "force-hint", "block-all") + HINTS_STANDARD240 HINTS_ALL = ("force", "force-hint", "force-badtest", "force-skiptest", "block-all") + HINTS_STANDARD
231241
232 def __init__(self):242 def __init__(self):
233 """Class constructor243 """Class constructor
@@ -235,8 +245,7 @@
235 This method initializes and populates the data lists, which contain all245 This method initializes and populates the data lists, which contain all
236 the information needed by the other methods of the class.246 the information needed by the other methods of the class.
237 """247 """
238 # britney's "day" begins at 3pm248 self.date_now = int(time.time())
239 self.date_now = int(((time.time() / (60*60)) - 15) / 24)
240249
241 # parse the command line arguments250 # parse the command line arguments
242 self.__parse_arguments()251 self.__parse_arguments()
@@ -264,7 +273,12 @@
264 if 'testing' not in self.sources:273 if 'testing' not in self.sources:
265 self.sources['testing'] = self.read_sources(self.options.testing)274 self.sources['testing'] = self.read_sources(self.options.testing)
266 self.sources['unstable'] = self.read_sources(self.options.unstable)275 self.sources['unstable'] = self.read_sources(self.options.unstable)
267 self.sources['tpu'] = self.read_sources(self.options.tpu)276 if hasattr(self.options, 'partial_unstable'):
277 self.merge_sources('testing', 'unstable')
278 if hasattr(self.options, 'tpu'):
279 self.sources['tpu'] = self.read_sources(self.options.tpu)
280 else:
281 self.sources['tpu'] = {}
268282
269 if hasattr(self.options, 'pu'):283 if hasattr(self.options, 'pu'):
270 self.sources['pu'] = self.read_sources(self.options.pu)284 self.sources['pu'] = self.read_sources(self.options.pu)
@@ -281,7 +295,15 @@
281 if arch not in self.binaries['testing']:295 if arch not in self.binaries['testing']:
282 self.binaries['testing'][arch] = self.read_binaries(self.options.testing, "testing", arch)296 self.binaries['testing'][arch] = self.read_binaries(self.options.testing, "testing", arch)
283 self.binaries['unstable'][arch] = self.read_binaries(self.options.unstable, "unstable", arch)297 self.binaries['unstable'][arch] = self.read_binaries(self.options.unstable, "unstable", arch)
284 self.binaries['tpu'][arch] = self.read_binaries(self.options.tpu, "tpu", arch)298 if hasattr(self.options, 'partial_unstable'):
299 self.merge_binaries('testing', 'unstable', arch)
300 if hasattr(self.options, 'tpu'):
301 self.binaries['tpu'][arch] = self.read_binaries(self.options.tpu, "tpu", arch)
302 else:
303 # _build_installability_tester relies it being
304 # properly initialised, so insert two empty dicts
305 # here.
306 self.binaries['tpu'][arch] = ({}, {})
285 if hasattr(self.options, 'pu'):307 if hasattr(self.options, 'pu'):
286 self.binaries['pu'][arch] = self.read_binaries(self.options.pu, "pu", arch)308 self.binaries['pu'][arch] = self.read_binaries(self.options.pu, "pu", arch)
287 else:309 else:
@@ -330,6 +352,7 @@
330 # read additional data352 # read additional data
331 self.dates = self.read_dates(self.options.testing)353 self.dates = self.read_dates(self.options.testing)
332 self.urgencies = self.read_urgencies(self.options.testing)354 self.urgencies = self.read_urgencies(self.options.testing)
355 self.blocks = self.read_blocks(self.options.unstable)
333 self.excuses = []356 self.excuses = []
334 self.dependencies = {}357 self.dependencies = {}
335358
@@ -399,6 +422,10 @@
399 help="do not build the non-installability status, use the cache from file")422 help="do not build the non-installability status, use the cache from file")
400 parser.add_option("", "--print-uninst", action="store_true", dest="print_uninst", default=False,423 parser.add_option("", "--print-uninst", action="store_true", dest="print_uninst", default=False,
401 help="just print a summary of uninstallable packages")424 help="just print a summary of uninstallable packages")
425 parser.add_option("", "--distribution", action="store", dest="distribution", default="ubuntu",
426 help="set distribution name")
427 parser.add_option("", "--series", action="store", dest="series", default=None,
428 help="set distribution series name")
402 (self.options, self.args) = parser.parse_args()429 (self.options, self.args) = parser.parse_args()
403 430
404 # integrity checks431 # integrity checks
@@ -420,6 +447,8 @@
420 k, v = line.split('=', 1)447 k, v = line.split('=', 1)
421 k = k.strip()448 k = k.strip()
422 v = v.strip()449 v = v.strip()
450 if self.options.series is not None:
451 v = v.replace("%(SERIES)", self.options.series)
423 if k.startswith("MINDAYS_"):452 if k.startswith("MINDAYS_"):
424 self.MINDAYS[k.split("_")[1].lower()] = int(v)453 self.MINDAYS[k.split("_")[1].lower()] = int(v)
425 elif k.startswith("HINTS_"):454 elif k.startswith("HINTS_"):
@@ -433,14 +462,14 @@
433 self.options.heidi_delta_output = self.options.heidi_output + "Delta"462 self.options.heidi_delta_output = self.options.heidi_output + "Delta"
434463
435 self.options.nobreakall_arches = self.options.nobreakall_arches.split()464 self.options.nobreakall_arches = self.options.nobreakall_arches.split()
436 self.options.fucked_arches = self.options.fucked_arches.split()465 self.options.outofsync_arches = self.options.outofsync_arches.split()
437 self.options.break_arches = self.options.break_arches.split()466 self.options.break_arches = self.options.break_arches.split()
438 self.options.new_arches = self.options.new_arches.split()467 self.options.new_arches = self.options.new_arches.split()
439468
440 # Sort the architecture list469 # Sort the architecture list
441 allarches = sorted(self.options.architectures.split())470 allarches = sorted(self.options.architectures.split())
442 arches = [x for x in allarches if x in self.options.nobreakall_arches]471 arches = [x for x in allarches if x in self.options.nobreakall_arches]
443 arches += [x for x in allarches if x not in arches and x not in self.options.fucked_arches]472 arches += [x for x in allarches if x not in arches and x not in self.options.outofsync_arches]
444 arches += [x for x in allarches if x not in arches and x not in self.options.break_arches]473 arches += [x for x in allarches if x not in arches and x not in self.options.break_arches]
445 arches += [x for x in allarches if x not in arches and x not in self.options.new_arches]474 arches += [x for x in allarches if x not in arches and x not in self.options.new_arches]
446 arches += [x for x in allarches if x not in arches]475 arches += [x for x in allarches if x not in arches]
@@ -570,8 +599,8 @@
570 filename = os.path.join(basedir, "Sources")599 filename = os.path.join(basedir, "Sources")
571 self.__log("Loading source packages from %s" % filename)600 self.__log("Loading source packages from %s" % filename)
572601
573 with open(filename, encoding='utf-8') as f:602 f = open(filename, encoding='utf-8')
574 Packages = apt_pkg.TagFile(f)603 Packages = apt_pkg.TagFile(f)
575 get_field = Packages.section.get604 get_field = Packages.section.get
576 step = Packages.step605 step = Packages.step
577606
@@ -592,7 +621,9 @@
592 [],621 [],
593 get_field('Maintainer'),622 get_field('Maintainer'),
594 False,623 False,
624 get_field('Testsuite', '').startswith('autopkgtest'),
595 ]625 ]
626 f.close()
596 return sources627 return sources
597628
598 def read_binaries(self, basedir, distribution, arch, intern=sys.intern):629 def read_binaries(self, basedir, distribution, arch, intern=sys.intern):
@@ -626,8 +657,8 @@
626 filename = os.path.join(basedir, "Packages_%s" % arch)657 filename = os.path.join(basedir, "Packages_%s" % arch)
627 self.__log("Loading binary packages from %s" % filename)658 self.__log("Loading binary packages from %s" % filename)
628659
629 with open(filename, encoding='utf-8') as f:660 f = open(filename, encoding='utf-8')
630 Packages = apt_pkg.TagFile(f)661 Packages = apt_pkg.TagFile(f)
631 get_field = Packages.section.get662 get_field = Packages.section.get
632 step = Packages.step663 step = Packages.step
633664
@@ -697,7 +728,7 @@
697 sources[distribution][dpkg[SOURCE]][BINARIES].append(pkgarch)728 sources[distribution][dpkg[SOURCE]][BINARIES].append(pkgarch)
698 # if the source package doesn't exist, create a fake one729 # if the source package doesn't exist, create a fake one
699 else:730 else:
700 sources[distribution][dpkg[SOURCE]] = [dpkg[SOURCEVER], 'faux', [pkgarch], None, True]731 sources[distribution][dpkg[SOURCE]] = [dpkg[SOURCEVER], 'faux', [pkgarch], None, True, False]
701732
702 # register virtual packages and real packages that provide them733 # register virtual packages and real packages that provide them
703 if dpkg[PROVIDES]:734 if dpkg[PROVIDES]:
@@ -712,9 +743,85 @@
712 # add the resulting dictionary to the package list743 # add the resulting dictionary to the package list
713 packages[pkg] = dpkg744 packages[pkg] = dpkg
714745
746<<<<<<< TREE
747=======
748 f.close()
749
750 # loop again on the list of packages to register reverse dependencies and conflicts
751 register_reverses(packages, provides, check_doubles=False)
752
753>>>>>>> MERGE-SOURCE
715 # return a tuple with the list of real and virtual packages754 # return a tuple with the list of real and virtual packages
716 return (packages, provides)755 return (packages, provides)
717756
757<<<<<<< TREE
758=======
759 def merge_sources(self, source, target):
760 """Merge sources from `source' into partial suite `target'."""
761 source_sources = self.sources[source]
762 target_sources = self.sources[target]
763 for pkg, value in source_sources.items():
764 if pkg in target_sources:
765 continue
766 target_sources[pkg] = list(value)
767 target_sources[pkg][BINARIES] = list(
768 target_sources[pkg][BINARIES])
769
770 def merge_binaries(self, source, target, arch):
771 """Merge `arch' binaries from `source' into partial suite `target'."""
772 source_sources = self.sources[source]
773 source_binaries, _ = self.binaries[source][arch]
774 target_sources = self.sources[target]
775 target_binaries, target_provides = self.binaries[target][arch]
776 oodsrcs = set()
777 for pkg, value in source_binaries.items():
778 if pkg in target_binaries:
779 continue
780
781 # Don't merge binaries rendered stale by new sources in target
782 # that have built on this architecture.
783 if value[SOURCE] not in oodsrcs:
784 source_version = source_sources[value[SOURCE]][VERSION]
785 target_version = target_sources[value[SOURCE]][VERSION]
786 if source_version != target_version:
787 current_arch = value[ARCHITECTURE]
788 built = False
789 for b in target_sources[value[SOURCE]][BINARIES]:
790 binpkg, binarch = b.split('/')
791 if binarch == arch:
792 target_value = target_binaries[binpkg]
793 if current_arch in (
794 target_value[ARCHITECTURE], "all"):
795 built = True
796 break
797 if built:
798 continue
799 oodsrcs.add(value[SOURCE])
800
801 if pkg in target_binaries:
802 for p in target_binaries[pkg][PROVIDES]:
803 target_provides[p].remove(pkg)
804 if not target_provides[p]:
805 del target_provides[p]
806
807 target_binaries[pkg] = value
808
809 pkg_arch = pkg + "/" + arch
810 if pkg_arch not in target_sources[value[SOURCE]][BINARIES]:
811 target_sources[value[SOURCE]][BINARIES].append(pkg_arch)
812
813 for p in value[PROVIDES]:
814 if p not in target_provides:
815 target_provides[p] = []
816 target_provides[p].append(pkg)
817
818 for pkg, value in target_binaries.items():
819 value[RDEPENDS] = []
820 value[RCONFLICTS] = []
821 register_reverses(
822 target_binaries, target_provides, check_doubles=False)
823
824>>>>>>> MERGE-SOURCE
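
The subtle part of merge_binaries above is the staleness rule: a binary coming from the source suite is skipped whenever the partial target suite already carries a newer version of its source that has built on this architecture. A toy walkthrough with invented data:

    # target already has foo 2.0, which built foo-bin on amd64;
    # the source suite still carries foo-bin from foo 1.0.
    source_version, target_version = '1.0', '2.0'
    built_on_this_arch = True  # foo 2.0 produced foo-bin for amd64 (or arch: all)
    stale = source_version != target_version and built_on_this_arch
    # stale is True, so the old foo-bin is not merged into target
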
     def read_bugs(self, basedir):
         """Read the release critical bug summary from the specified directory
 
@@ -730,13 +837,16 @@
         bugs = defaultdict(list)
         filename = os.path.join(basedir, "BugsV")
         self.__log("Loading RC bugs data from %s" % filename)
-        for line in open(filename, encoding='ascii'):
-            l = line.split()
-            if len(l) != 2:
-                self.__log("Malformed line found in line %s" % (line), type='W')
-                continue
-            pkg = l[0]
-            bugs[pkg] += l[1].split(",")
+        try:
+            for line in open(filename, encoding='ascii'):
+                l = line.split()
+                if len(l) != 2:
+                    self.__log("Malformed line found in line %s" % (line), type='W')
+                    continue
+                pkg = l[0]
+                bugs[pkg] += l[1].split(",")
+        except IOError:
+            self.__log("%s missing; skipping bug-based processing" % filename)
         return bugs
 
     def __maxver(self, pkg, dist):
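
For reference, BugsV rows are `<package> <comma-separated-bug-list>`; a quick sketch of the parse with invented contents:

    from collections import defaultdict

    bugs = defaultdict(list)
    for line in ["glibc 123456,654321", "dpkg 111111"]:
        pkg, buglist = line.split()
        bugs[pkg] += buglist.split(",")
    # bugs == {'glibc': ['123456', '654321'], 'dpkg': ['111111']}
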
@@ -792,7 +902,8 @@
 
         <package-name> <version> <date-of-upload>
 
-        The dates are expressed as the number of days from 1970-01-01.
+        The dates are expressed as the number of seconds from the Unix epoch
+        (1970-01-01 00:00:00 UTC).
 
         The method returns a dictionary where the key is the binary package
         name and the value is a tuple with two items, the version and the date.
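
So a Dates row now looks like this (values invented), where the third field used to be a day count:

    # <package> <version> <seconds since 1970-01-01 00:00:00 UTC>
    line = "hello 2.10-1 1447754400"
    pkg, ver, stamp = line.split()
    dates = {pkg: (ver, int(stamp))}
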
@@ -800,13 +911,17 @@
         dates = {}
         filename = os.path.join(basedir, "Dates")
         self.__log("Loading upload data from %s" % filename)
-        for line in open(filename, encoding='ascii'):
-            l = line.split()
-            if len(l) != 3: continue
-            try:
-                dates[l[0]] = (l[1], int(l[2]))
-            except ValueError:
-                self.__log("Dates, unable to parse \"%s\"" % line, type="E")
+        try:
+            for line in open(filename, encoding='ascii'):
+                l = line.split()
+                if len(l) != 3: continue
+                try:
+                    dates[l[0]] = (l[1], int(l[2]))
+                except ValueError:
+                    self.__log("Dates, unable to parse \"%s\"" % line, type="E")
+        except IOError:
+            self.__log("%s missing; initialising upload data from scratch" %
+                       filename)
         return dates
 
     def write_dates(self, basedir, dates):
@@ -817,6 +932,7 @@
         """
         filename = os.path.join(basedir, "Dates")
         self.__log("Writing upload data to %s" % filename)
+        ensuredir(os.path.dirname(filename))
         with open(filename, 'w', encoding='utf-8') as f:
             for pkg in sorted(dates):
                 f.write("%s %s %d\n" % ((pkg,) + dates[pkg]))
@@ -839,31 +955,34 @@
         urgencies = {}
         filename = os.path.join(basedir, "Urgency")
         self.__log("Loading upload urgencies from %s" % filename)
-        for line in open(filename, errors='surrogateescape', encoding='ascii'):
-            l = line.split()
-            if len(l) != 3: continue
-
-            # read the minimum days associated with the urgencies
-            urgency_old = urgencies.get(l[0], None)
-            mindays_old = self.MINDAYS.get(urgency_old, 1000)
-            mindays_new = self.MINDAYS.get(l[2], self.MINDAYS[self.options.default_urgency])
-
-            # if the new urgency is lower (so the min days are higher), do nothing
-            if mindays_old <= mindays_new:
-                continue
-
-            # if the package exists in testing and it is more recent, do nothing
-            tsrcv = self.sources['testing'].get(l[0], None)
-            if tsrcv and apt_pkg.version_compare(tsrcv[VERSION], l[1]) >= 0:
-                continue
-
-            # if the package doesn't exist in unstable or it is older, do nothing
-            usrcv = self.sources['unstable'].get(l[0], None)
-            if not usrcv or apt_pkg.version_compare(usrcv[VERSION], l[1]) < 0:
-                continue
-
-            # update the urgency for the package
-            urgencies[l[0]] = l[2]
+        try:
+            for line in open(filename, errors='surrogateescape', encoding='ascii'):
+                l = line.split()
+                if len(l) != 3: continue
+
+                # read the minimum days associated with the urgencies
+                urgency_old = urgencies.get(l[0], None)
+                mindays_old = self.MINDAYS.get(urgency_old, 1000)
+                mindays_new = self.MINDAYS.get(l[2], self.MINDAYS[self.options.default_urgency])
+
+                # if the new urgency is lower (so the min days are higher), do nothing
+                if mindays_old <= mindays_new:
+                    continue
+
+                # if the package exists in testing and it is more recent, do nothing
+                tsrcv = self.sources['testing'].get(l[0], None)
+                if tsrcv and apt_pkg.version_compare(tsrcv[VERSION], l[1]) >= 0:
+                    continue
+
+                # if the package doesn't exist in unstable or it is older, do nothing
+                usrcv = self.sources['unstable'].get(l[0], None)
+                if not usrcv or apt_pkg.version_compare(usrcv[VERSION], l[1]) < 0:
+                    continue
+
+                # update the urgency for the package
+                urgencies[l[0]] = l[2]
+        except IOError:
+            self.__log("%s missing; using default for all packages" % filename)
 
         return urgencies
 
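
Urgency rows are `<source> <version> <urgency>`, resolved against the MINDAYS_* settings by always keeping the most urgent (lowest min-days) entry; a sketch with invented input:

    MINDAYS = {'low': 10, 'medium': 5, 'high': 2}
    urgencies = {}
    for line in ["hello 2.10-1 high", "hello 2.10-1 low"]:
        src, ver, urg = line.split()
        mindays_old = MINDAYS.get(urgencies.get(src), 1000)
        if MINDAYS.get(urg, 5) < mindays_old:
            urgencies[src] = urg
    # urgencies == {'hello': 'high'}
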
@@ -912,7 +1031,7 @@
             elif len(l) == 1:
                 # All current hints require at least one argument
                 self.__log("Malformed hint found in %s: '%s'" % (filename, line), type="W")
-            elif l[0] in ["approve", "block", "block-all", "block-udeb", "unblock", "unblock-udeb", "force", "urgent", "remove"]:
+            elif l[0] in ["approve", "block", "block-all", "block-udeb", "unblock", "unblock-udeb", "force", "force-badtest", "force-skiptest", "urgent", "remove"]:
                 if l[0] == 'approve': l[0] = 'unblock'
                 for package in l[1:]:
                     hints.add_hint('%s %s' % (l[0], package), who)
@@ -922,7 +1041,7 @@
             else:
                 hints.add_hint(l, who)
 
-        for x in ["block", "block-all", "block-udeb", "unblock", "unblock-udeb", "force", "urgent", "remove", "age-days"]:
+        for x in ["block", "block-all", "block-udeb", "unblock", "unblock-udeb", "force", "force-badtest", "force-skiptest", "urgent", "remove", "age-days"]:
             z = {}
             for hint in hints[x]:
                 package = hint.package
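
The two new verbs use the usual `<verb> <source>/<version>` hint form; example hint-file lines (package and version invented):

    force-badtest mypackage/1.2-1
    force-skiptest mypackage/1.2-1

force-badtest waives a specific bad test result, while force-skiptest lets the named source migrate without waiting for the tests it triggered; both are matched against the version via same_source() further down.
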
@@ -954,6 +1073,32 @@
 
         return hints
 
+    def read_blocks(self, basedir):
+        """Read user-supplied blocks from the specified directory.
+
+        The file contains rows with the format:
+
+        <source-name> <bug> <date>
+
+        The dates are expressed as the number of seconds from the Unix epoch
+        (1970-01-01 00:00:00 UTC).
+
+        The method returns a dictionary where the key is the source package
+        name and the value is a list of (bug, date) tuples.
+        """
+        blocks = {}
+        filename = os.path.join(basedir, "Blocks")
+        self.__log("Loading user-supplied block data from %s" % filename)
+        for line in open(filename):
+            l = line.split()
+            if len(l) != 3: continue
+            try:
+                blocks.setdefault(l[0], [])
+                blocks[l[0]].append((l[1], int(l[2])))
+            except ValueError:
+                self.__log("Blocks, unable to parse \"%s\"" % line, type="E")
+        return blocks
+
 
     # Utility methods for package analysis
     # ------------------------------------
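
A Blocks row ties a source package to a Launchpad bug and a timestamp; an example of one row and its parse (values invented):

    line = "hello 1234567 1447754400"
    src, bug, ts = line.split()
    blocks = {}
    blocks.setdefault(src, []).append((bug, int(ts)))
    # blocks == {'hello': [('1234567', 1447754400)]}
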
@@ -1018,11 +1163,34 @@
         parse_depends = apt_pkg.parse_depends
         get_dependency_solvers = self.get_dependency_solvers
 
+        # make linux* wait on corresponding -meta package
+        if src.startswith('linux'):
+            meta = src.replace('linux', 'linux-meta', 1)
+            if meta in self.sources[suite]:
+                # copy binary_u here, don't modify self.binaries!
+                if binary_u[DEPENDS]:
+                    binary_u[DEPENDS] = binary_u[DEPENDS] + ', '
+                else:
+                    binary_u[DEPENDS] = ''
+                # find binary of our architecture
+                binpkg = None
+                for b in self.sources[suite][meta][BINARIES]:
+                    pkg, a = b.split('/')
+                    if a == arch:
+                        binpkg = pkg
+                        break
+                if binpkg:
+                    binver = self.binaries[suite][arch][0][binpkg][SOURCEVER]
+                    binary_u[DEPENDS] += '%s (>= %s)' % (binpkg, binver)
+                    self.__log('Synthesizing dependency %s to %s: %s' % (meta, src, binary_u[DEPENDS]))
+
         # analyze the dependency fields (if present)
         if not binary_u[DEPENDS]:
-            return
+            return True
         deps = binary_u[DEPENDS]
 
+        all_satisfiable = True
+
         # for every dependency block (formed as conjunction of disjunction)
         for block, block_txt in zip(parse_depends(deps, False), deps.split(',')):
             # if the block is satisfied in testing, then skip the block
@@ -1046,6 +1214,7 @@
             if not packages:
                 excuse.addhtml("%s/%s unsatisfiable Depends: %s" % (pkg, arch, block_txt.strip()))
                 excuse.addreason("depends")
+                all_satisfiable = False
                 continue
 
             # for the solving packages, update the excuse to add the dependencies
@@ -1058,6 +1227,8 @@
                 else:
                     excuse.add_break_dep(p, arch)
 
+        return all_satisfiable
+
     # Package analysis methods
     # ------------------------
 
@@ -1080,6 +1251,7 @@
             excuse = Excuse("-" + pkg)
             excuse.addhtml("Package not in unstable, will try to remove")
             excuse.addreason("remove")
+            excuse.set_distribution(self.options.distribution)
             excuse.set_vers(src[VERSION], None)
             src[MAINTAINER] and excuse.set_maint(src[MAINTAINER].strip())
             src[SECTION] and excuse.set_section(src[SECTION].strip())
@@ -1087,7 +1259,7 @@
             # if the package is blocked, skip it
             for hint in self.hints.search('block', package=pkg, removal=True):
                 excuse.addhtml("Not touching package, as requested by %s "
-                               "(check https://release.debian.org/jessie/freeze_policy.html if update is needed)" % hint.user)
+                               "(contact #ubuntu-release if update is needed)" % hint.user)
                 excuse.addhtml("Not considered")
                 excuse.addreason("block")
                 self.excuses.append(excuse)
@@ -1117,6 +1289,7 @@
         # build the common part of the excuse, which will be filled by the code below
         ref = "%s/%s%s" % (src, arch, suite != 'unstable' and "_" + suite or "")
         excuse = Excuse(ref)
+        excuse.set_distribution(self.options.distribution)
         excuse.set_vers(source_t[VERSION], source_t[VERSION])
         source_u[MAINTAINER] and excuse.set_maint(source_u[MAINTAINER].strip())
         source_u[SECTION] and excuse.set_section(source_u[SECTION].strip())
@@ -1136,6 +1309,7 @@
         # the starting point is that there is nothing wrong and nothing worth doing
         anywrongver = False
         anyworthdoing = False
+        unsat_deps = False
 
         # for every binary package produced by this source in unstable for this architecture
         for pkg in sorted(filter(lambda x: x.endswith("/" + arch), source_u[BINARIES]), key=lambda x: x.split("/")[0]):
@@ -1143,6 +1317,9 @@
 
             # retrieve the testing (if present) and unstable corresponding binary packages
             binary_t = pkg in source_t[BINARIES] and self.binaries['testing'][arch][0][pkg_name] or None
+            if hasattr(self.options, 'partial_unstable') and binary_t is not None and binary_t[ARCHITECTURE] == 'all' and pkg_name not in self.binaries[suite][arch][0]:
+                excuse.addhtml("Ignoring %s %s (from %s) as it is arch: all and not yet built in unstable" % (pkg_name, binary_t[VERSION], binary_t[SOURCEVER]))
+                continue
             binary_u = self.binaries[suite][arch][0][pkg_name]
 
             # this is the source version for the new binary package
@@ -1155,6 +1332,7 @@
 
             # if the new binary package is not from the same source as the testing one, then skip it
             # this implies that this binary migration is part of a source migration
+<<<<<<< TREE
             if same_source(source_u[VERSION], pkgsv) and not same_source(source_t[VERSION], pkgsv):
                 anywrongver = True
                 excuse.addhtml("From wrong source: %s %s (%s not %s)" % (pkg_name, binary_u[VERSION], pkgsv, source_t[VERSION]))
@@ -1168,6 +1346,13 @@
                 anywrongver = True
                 excuse.addhtml("Old cruft: %s %s" % (pkg_name, pkgsv))
                 continue
+=======
+            if not same_source(source_t[VERSION], pkgsv):
+                if binary_t is None or binary_t[VERSION] != binary_u[VERSION]:
+                    anywrongver = True
+                    excuse.addhtml("From wrong source: %s %s (%s not %s)" % (pkg_name, binary_u[VERSION], pkgsv, source_t[VERSION]))
+                    break
+>>>>>>> MERGE-SOURCE
 
             # if the source package has been updated in unstable and this is a binary migration, skip it
             # (the binaries are now out-of-date)
@@ -1177,7 +1362,8 @@
                 continue
 
             # find unsatisfied dependencies for the new binary package
-            self.excuse_unsat_deps(pkg_name, src, arch, suite, excuse)
+            if not self.excuse_unsat_deps(pkg_name, src, arch, suite, excuse):
+                unsat_deps = True
 
             # if the binary is not present in testing, then it is a new binary;
             # in this case, there is something worth doing
@@ -1237,7 +1423,7 @@
                 anyworthdoing = True
 
         # if there is nothing wrong and there is something worth doing, this is a valid candidate
-        if not anywrongver and anyworthdoing:
+        if not anywrongver and not unsat_deps and anyworthdoing:
             excuse.is_valid = True
             self.excuses.append(excuse)
             return True
@@ -1276,12 +1462,15 @@
         # build the common part of the excuse, which will be filled by the code below
         ref = "%s%s" % (src, suite != 'unstable' and "_" + suite or "")
         excuse = Excuse(ref)
+        excuse.set_distribution(self.options.distribution)
         excuse.set_vers(source_t and source_t[VERSION] or None, source_u[VERSION])
         source_u[MAINTAINER] and excuse.set_maint(source_u[MAINTAINER].strip())
         source_u[SECTION] and excuse.set_section(source_u[SECTION].strip())
 
-        # the starting point is that we will update the candidate
+        # the starting point is that we will update the candidate and run autopkgtests
         update_candidate = True
+        run_autopkgtest = True
+        run_boottest = True
 
         # if the version in unstable is older, then stop here with a warning in the excuse and return False
         if source_t and apt_pkg.version_compare(source_u[VERSION], source_t[VERSION]) < 0:
@@ -1294,6 +1483,8 @@
         if source_u[FAKESRC]:
             excuse.addhtml("%s source package doesn't exist" % (src))
             update_candidate = False
+            run_autopkgtest = False
+            run_boottest = False
 
         # retrieve the urgency for the upload, ignoring it if this is a NEW package (not present in testing)
         urgency = self.urgencies.get(src, self.options.default_urgency)
@@ -1311,6 +1502,8 @@
             excuse.addhtml("Trying to remove package, not update it")
             excuse.addreason("remove")
             update_candidate = False
+            run_autopkgtest = False
+            run_boottest = False
 
         # check if there is a `block' or `block-udeb' hint for this package, or a `block-all source' hint
         blocked = {}
@@ -1346,7 +1539,7 @@
                 excuse.addhtml("%s request by %s ignored due to version mismatch: %s" %
                                (unblock_cmd.capitalize(), unblocks[0].user, unblocks[0].version))
             if suite == 'unstable' or block_cmd == 'block-udeb':
-                tooltip = "check https://release.debian.org/jessie/freeze_policy.html if update is needed"
+                tooltip = "contact #ubuntu-release if update is needed"
                 # redirect people to d-i RM for udeb things:
                 if block_cmd == 'block-udeb':
                     tooltip = "please contact the d-i release manager if an update is needed"
@@ -1358,6 +1551,13 @@
                 excuse.addreason("block")
                 update_candidate = False
 
+        if src in self.blocks:
+            for user_block in self.blocks[src]:
+                excuse.addhtml("Not touching package as requested in <a href=\"https://launchpad.net/bugs/%s\">bug %s</a> on %s" %
+                               (user_block[0], user_block[0], time.asctime(time.gmtime(user_block[1]))))
+                excuse.addreason("block")
+                update_candidate = False
+
         # if the suite is unstable, then we have to check the urgency and the minimum days of
         # permanence in unstable before updating testing; if the source package is too young,
         # the check fails and we set update_candidate to False to block the update; consider
@@ -1368,7 +1568,7 @@
         elif not same_source(self.dates[src][0], source_u[VERSION]):
             self.dates[src] = (source_u[VERSION], self.date_now)
 
-        days_old = self.date_now - self.dates[src][1]
+        days_old = (self.date_now - self.dates[src][1]) / 60 / 60 / 24
         min_days = self.MINDAYS[urgency]
 
         for age_days_hint in [ x for x in self.hints.search('age-days', package=src) if \
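
Since Dates now stores epoch seconds, the age is converted on the fly; note days_old becomes a float, which is fine for the days_old < min_days comparison below:

    date_now = 1447804800   # invented 'now'
    uploaded = 1447545600   # invented upload time
    days_old = (date_now - uploaded) / 60 / 60 / 24  # -> 3.0
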
@@ -1385,6 +1585,8 @@
                 excuse.addhtml("Too young, but urgency pushed by %s" % (urgent_hints[0].user))
             else:
                 update_candidate = False
+                run_autopkgtest = False
+                run_boottest = False
                 excuse.addreason("age")
 
         if suite in ['pu', 'tpu']:
@@ -1412,12 +1614,16 @@
                     base = 'testing'
                 else:
                     base = 'stable'
-                text = "Not yet built on <a href=\"http://buildd.debian.org/status/logs.php?arch=%s&pkg=%s&ver=%s&suite=%s\" target=\"_blank\">%s</a> (relative to testing)" % (quote(arch), quote(src), quote(source_u[VERSION]), base, arch)
+                text = "Not yet built on <a href=\"https://launchpad.net/%s/+source/%s/%s/+latestbuild/%s\" target=\"_blank\">%s</a> (relative to testing)" % (self.options.distribution, quote(src.split("/")[0]), quote(source_u[VERSION]), quote(arch), arch)
 
-                if arch in self.options.fucked_arches:
+                if arch in self.options.outofsync_arches:
                     text = text + " (but %s isn't keeping up, so never mind)" % (arch)
                 else:
                     update_candidate = False
+                    if arch in self.options.adt_arches.split():
+                        run_autopkgtest = False
+                    if arch in self.options.boottest_arches.split():
+                        run_boottest = False
                     excuse.addreason("arch")
                     excuse.addreason("arch-%s" % arch)
                     excuse.addreason("build-arch")
@@ -1428,6 +1634,7 @@
         # at this point, we check the status of the builds on all the supported architectures
         # to catch the out-of-date ones
         pkgs = {src: ["source"]}
+        built_anywhere = False
         for arch in self.options.architectures:
             oodbins = {}
             uptodatebins = False
@@ -1449,38 +1656,58 @@
                         oodbins[pkgsv].append(pkg)
                         continue
                     else:
+<<<<<<< TREE
                         # if the binary is arch all, it doesn't count as
                         # up-to-date for this arch
                         if binary_u[ARCHITECTURE] == arch:
                             uptodatebins = True
+=======
+                        uptodatebins = True
+                        built_anywhere = True
+>>>>>>> MERGE-SOURCE
 
                 # if the package is architecture-dependent or the current arch is `nobreakall'
                 # find unsatisfied dependencies for the binary package
                 if binary_u[ARCHITECTURE] != 'all' or arch in self.options.nobreakall_arches:
-                    self.excuse_unsat_deps(pkg, src, arch, suite, excuse)
+                    if not self.excuse_unsat_deps(pkg, src, arch, suite, excuse):
+                        update_candidate = False
+                        if arch in self.options.adt_arches.split():
+                            run_autopkgtest = False
+                        if arch in self.options.boottest_arches.split():
+                            run_boottest = False
 
             # if there are out-of-date packages, warn about them in the excuse and set update_candidate
             # to False to block the update; if the architecture where the package is out-of-date is
-            # in the `fucked_arches' list, then do not block the update
+            # in the `outofsync_arches' list, then do not block the update
            if oodbins:
                 oodtxt = ""
                 for v in oodbins.keys():
                     if oodtxt: oodtxt = oodtxt + "; "
-                    oodtxt = oodtxt + "%s (from <a href=\"http://buildd.debian.org/status/logs.php?" \
-                        "arch=%s&pkg=%s&ver=%s\" target=\"_blank\">%s</a>)" % \
-                        (", ".join(sorted(oodbins[v])), quote(arch), quote(src), quote(v), v)
+                    oodtxt = oodtxt + "%s (from <a href=\"https://launchpad.net/%s/+source/" \
+                        "%s/%s/+latestbuild/%s\" target=\"_blank\">%s</a>)" % \
+                        (", ".join(sorted(oodbins[v])), self.options.distribution, quote(src.split("/")[0]), quote(v), quote(arch), v)
                 if uptodatebins:
-                    text = "old binaries left on <a href=\"http://buildd.debian.org/status/logs.php?" \
-                        "arch=%s&pkg=%s&ver=%s\" target=\"_blank\">%s</a>: %s" % \
-                        (quote(arch), quote(src), quote(source_u[VERSION]), arch, oodtxt)
+                    text = "old binaries left on <a href=\"https://launchpad.net/%s/+source/" \
+                        "%s/%s/+latestbuild/%s\" target=\"_blank\">%s</a>: %s" % \
+                        (self.options.distribution, quote(src.split("/")[0]), quote(source_u[VERSION]), quote(arch), arch, oodtxt)
                 else:
-                    text = "missing build on <a href=\"http://buildd.debian.org/status/logs.php?" \
-                        "arch=%s&pkg=%s&ver=%s\" target=\"_blank\">%s</a>: %s" % \
-                        (quote(arch), quote(src), quote(source_u[VERSION]), arch, oodtxt)
+                    text = "missing build on <a href=\"https://launchpad.net/%s/+source/" \
+                        "%s/%s/+latestbuild/%s\" target=\"_blank\">%s</a>: %s" % \
+                        (self.options.distribution, quote(src.split("/")[0]), quote(source_u[VERSION]), quote(arch), arch, oodtxt)
 
-                if arch in self.options.fucked_arches:
+                if arch in self.options.outofsync_arches:
                     text = text + " (but %s isn't keeping up, so nevermind)" % (arch)
                 else:
+<<<<<<< TREE
+=======
+                    update_candidate = False
+                    if arch in self.options.adt_arches.split():
+                        run_autopkgtest = False
+                    if arch in self.options.boottest_arches.split():
+                        run_boottest = False
+                    excuse.addreason("arch")
+                    excuse.addreason("arch-%s" % arch)
+>>>>>>> MERGE-SOURCE
                     if uptodatebins:
                         excuse.addreason("cruft-arch")
                         excuse.addreason("cruft-arch-%s" % arch)
@@ -1497,14 +1724,21 @@
                         excuse.addreason("build-arch")
                         excuse.addreason("build-arch-%s" % arch)
 
-            if self.date_now != self.dates[src][1]:
-                excuse.addhtml(text)
+            excuse.addhtml(text)
 
         # if the source package has no binaries, set update_candidate to False to block the update
         if len(self.sources[suite][src][BINARIES]) == 0:
             excuse.addhtml("%s has no binaries on any arch" % src)
             excuse.addreason("no-binaries")
             update_candidate = False
+            run_autopkgtest = False
+            run_boottest = False
+        elif not built_anywhere:
+            excuse.addhtml("%s has no up-to-date binaries on any arch" % src)
+            excuse.addreason("no-binaries")
+            update_candidate = False
+            run_autopkgtest = False
+            run_boottest = False
 
         # if the suite is unstable, then we have to check the release-critical bug lists before
         # updating testing; if the unstable package has RC bugs that do not apply to the testing
@@ -1536,6 +1770,8 @@
                 excuse.addhtml("Updating %s introduces new bugs: %s" % (pkg, ", ".join(
                     ["<a href=\"http://bugs.debian.org/%s\">#%s</a>" % (quote(a), a) for a in new_bugs])))
                 update_candidate = False
+                run_autopkgtest = False
+                run_boottest = False
                 excuse.addreason("buggy")
 
             if len(old_bugs) > 0:
@@ -1553,6 +1789,8 @@
             excuse.addhtml("Should ignore, but forced by %s" % (forces[0].user))
             excuse.force()
             update_candidate = True
+            run_autopkgtest = True
+            run_boottest = True
 
         # if the package can be updated, it is a valid candidate
         if update_candidate:
@@ -1561,6 +1799,8 @@
         else:
             # TODO
             excuse.addhtml("Not considered")
+        excuse.run_autopkgtest = run_autopkgtest
+        excuse.run_boottest = run_boottest
 
         self.excuses.append(excuse)
         return update_candidate
@@ -1694,6 +1934,7 @@
             # add the removal of the package to upgrade_me and build a new excuse
             upgrade_me.append("-%s" % (src))
             excuse = Excuse("-%s" % (src))
+            excuse.set_distribution(self.options.distribution)
             excuse.set_vers(tsrcv, None)
             excuse.addhtml("Removal request by %s" % (item.user))
             excuse.addhtml("Package is broken, will try to remove")
@@ -1706,6 +1947,167 @@
         # extract the not considered packages, which are in the excuses but not in upgrade_me
         unconsidered = [e.name for e in self.excuses if e.name not in upgrade_me]
 
+        if getattr(self.options, "adt_enable", "no") == "yes" and \
+                self.options.series:
+            # trigger autopkgtests for valid candidates
+            adt_debug = getattr(self.options, "adt_debug", "no") == "yes"
+            autopkgtest = AutoPackageTest(
+                self, self.options.distribution, self.options.series,
+                debug=adt_debug)
+            autopkgtest_packages = []
+            autopkgtest_excuses = []
+            autopkgtest_excludes = []
+            for e in self.excuses:
+                if not e.run_autopkgtest:
+                    autopkgtest_excludes.append(e.name)
+                    continue
+                # skip removals, binary-only candidates, and proposed-updates
+                if e.name.startswith("-") or "/" in e.name or "_" in e.name:
+                    continue
+                if e.ver[1] == "-":
+                    continue
+                autopkgtest_excuses.append(e)
+                autopkgtest_packages.append((e.name, e.ver[1]))
+            autopkgtest.request(autopkgtest_packages, autopkgtest_excludes)
+            if not self.options.dry_run:
+                autopkgtest.collect_requested()
+                autopkgtest.submit()
+                autopkgtest.collect(autopkgtest_packages)
+            cloud_url = "http://autopkgtest.ubuntu.com/packages/%(h)s/%(s)s/%(r)s/%(a)s"
+            for e in autopkgtest_excuses:
+                adtpass = True
+                for passed, adtsrc, adtver, arch_status in autopkgtest.results(
+                        e.name, e.ver[1]):
+                    for arch in arch_status:
+                        url = cloud_url % {'h': srchash(adtsrc), 's': adtsrc,
+                                           'r': self.options.series, 'a': arch}
+                        e.addtest('autopkgtest', '%s %s' % (adtsrc, adtver),
+                                  arch, arch_status[arch], url)
+
+                    # hints can override failures
+                    if not passed:
+                        hints = self.hints.search(
+                            'force-badtest', package=adtsrc)
+                        hints.extend(
+                            self.hints.search('force', package=adtsrc))
+                        forces = [
+                            x for x in hints
+                            if same_source(adtver, x.version) ]
+                        if forces:
+                            e.force()
+                            e.addreason('badtest %s %s' % (adtsrc, adtver))
+                            e.addhtml(
+                                "Should wait for %s %s test, but forced by "
+                                "%s" % (adtsrc, adtver, forces[0].user))
+                            passed = True
+
+                    if not passed:
+                        adtpass = False
+
+                if not adtpass and e.is_valid:
+                    hints = self.hints.search('force-skiptest', package=e.name)
+                    hints.extend(self.hints.search('force', package=e.name))
+                    forces = [
+                        x for x in hints
+                        if same_source(e.ver[1], x.version) ]
+                    if forces:
+                        e.force()
+                        e.addreason('skiptest')
+                        e.addhtml(
+                            "Should wait for tests relating to %s %s, but "
+                            "forced by %s" %
+                            (e.name, e.ver[1], forces[0].user))
+                    else:
+                        upgrade_me.remove(e.name)
+                        unconsidered.append(e.name)
+                        e.addhtml("Not considered")
+                        e.addreason("autopkgtest")
+                        e.is_valid = False
+
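
results() yields one entry per trigger, and the per-arch statuses drive both addtest() and the Swift link. A sketch of consuming one such entry; the srchash() body here is an assumption (a pool-style prefix, as used by autopkgtest.ubuntu.com), and the series name is invented:

    def srchash(src):
        # assumed helper: 'glibc' -> 'g', 'libpng' -> 'libp'
        return src[:4] if src.startswith('lib') else src[0]

    passed, adtsrc, adtver, arch_status = (
        True, 'glibc', '2.21-0ubuntu4', {'amd64': 'PASS', 'i386': 'RUNNING'})
    for arch, status in arch_status.items():
        url = "http://autopkgtest.ubuntu.com/packages/%s/%s/wily/%s" % (
            srchash(adtsrc), adtsrc, arch)
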
+        if (getattr(self.options, "boottest_enable", "no") == "yes" and
+                self.options.series):
+            # trigger 'boottest'ing for valid candidates.
+            boottest_debug = getattr(
+                self.options, "boottest_debug", "no") == "yes"
+            boottest = BootTest(
+                self, self.options.distribution, self.options.series,
+                debug=boottest_debug)
+            boottest_excuses = []
+            for excuse in self.excuses:
+                # Skip already invalid excuses.
+                if not excuse.run_boottest:
+                    continue
+                # Also skip removals, binary-only candidates, proposed-updates
+                # and unknown versions.
+                if (excuse.name.startswith("-") or
+                        "/" in excuse.name or
+                        "_" in excuse.name or
+                        excuse.ver[1] == "-"):
+                    continue
+                # Allows hints to skip boottest attempts
+                hints = self.hints.search(
+                    'force-skiptest', package=excuse.name)
+                forces = [x for x in hints
+                          if same_source(excuse.ver[1], x.version)]
+                if forces:
+                    excuse.addhtml(
+                        "boottest skipped from hints by %s" % forces[0].user)
+                    continue
+                # Only sources whitelisted in the boottest context should
+                # be tested (currently only sources building phone binaries).
+                if not boottest.needs_test(excuse.name, excuse.ver[1]):
+                    # Silently skipping.
+                    continue
+                # Okay, aggregate required boottests requests.
+                boottest_excuses.append(excuse)
+            boottest.request([(e.name, e.ver[1]) for e in boottest_excuses])
+            # Dry-run avoids data exchange with external systems.
+            if not self.options.dry_run:
+                boottest.submit()
+                boottest.collect()
+            # Boottest Jenkins views location.
+            jenkins_public = "https://jenkins.qa.ubuntu.com/job"
+            jenkins_private = (
+                "http://d-jenkins.ubuntu-ci:8080/view/%s/view/BootTest/job" %
+                self.options.series.title())
+            # Update excuses from the boottest context.
+            for excuse in boottest_excuses:
+                status = boottest.get_status(excuse.name, excuse.ver[1])
+                label = BootTest.EXCUSE_LABELS.get(status, 'UNKNOWN STATUS')
+                public_url = "%s/%s-boottest-%s/lastBuild" % (
+                    jenkins_public, self.options.series,
+                    excuse.name.replace("+", "-"))
+                private_url = "%s/%s-boottest-%s/lastBuild" % (
+                    jenkins_private, self.options.series,
+                    excuse.name.replace("+", "-"))
+                excuse.addhtml(
+                    "Boottest result: %s (Jenkins: <a href=\"%s\">public</a>"
+                    ", <a href=\"%s\">private</a>)" % (
+                        label, public_url, private_url))
+                # Allows hints to force boottest failures/attempts
+                # to be ignored.
+                hints = self.hints.search('force', package=excuse.name)
+                hints.extend(
+                    self.hints.search('force-badtest', package=excuse.name))
+                forces = [x for x in hints
+                          if same_source(excuse.ver[1], x.version)]
+                if forces:
+                    excuse.addhtml(
+                        "Should wait for %s %s boottest, but forced by "
+                        "%s" % (excuse.name, excuse.ver[1],
+                                forces[0].user))
+                    continue
+                # Block promotion if the excuse is still valid (adt tests
+                # passed) but the boottest attempt has failed or is still in
+                # progress.
+                if status not in BootTest.VALID_STATUSES:
+                    excuse.addreason("boottest")
+                    if excuse.is_valid:
+                        excuse.is_valid = False
+                        excuse.addhtml("Not considered")
+                        upgrade_me.remove(excuse.name)
+                        unconsidered.append(excuse.name)
+
         # invalidate impossible excuses
         for e in self.excuses:
             # parts[0] == package name
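
The final gate above hinges on BootTest.VALID_STATUSES; a reduced sketch of that decision (the contents of the status set are an assumption here, the real list lives in boottest.py):

    VALID_STATUSES = ('PASS', 'SKIPPED')  # assumed contents

    def blocks_promotion(status, excuse_is_valid):
        # a failed or still-running boottest invalidates an otherwise valid excuse
        return excuse_is_valid and status not in VALID_STATUSES
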
@@ -2455,7 +2857,7 @@
 
         if not force:
             self.output_write(eval_uninst(self.options.architectures,
-                                          newly_uninst(nuninst_start, nuninst_end)) + "\n")
+                                          newly_uninst(nuninst_start, nuninst_end)))
 
         if not force:
             break_arches = set(self.options.break_arches)
@@ -2483,7 +2885,7 @@
             if force:
                 self.output_write("force breaks:\n")
                 self.output_write(eval_uninst(self.options.architectures,
-                                              newly_uninst(nuninst_start, nuninst_end)) + "\n")
+                                              newly_uninst(nuninst_start, nuninst_end)))
             self.output_write("SUCCESS (%d/%d)\n" % (len(actions or self.upgrade_me), len(extra)))
             self.nuninst_orig = nuninst_end
             self.all_selected += selected
@@ -2498,6 +2900,7 @@
             lundo.reverse()
 
             undo_changes(lundo, self._inst_tester, self.sources, self.binaries)
+            self.output_write("\n")
 
 
     def assert_nuninst_is_correct(self):
@@ -2541,6 +2944,7 @@
         self.nuninst_orig = self.get_nuninst()
         # nuninst_orig may get updated during the upgrade process
         self.nuninst_orig_save = self.get_nuninst()
+        self.all_selected = []
 
         if not self.options.actions:
             # process `easy' hints
@@ -2592,6 +2996,7 @@
         # obsolete source packages
         # a package is obsolete if none of the binary packages in testing
         # are built by it
+<<<<<<< TREE
         self.__log("> Removing obsolete source packages from testing", type="I")
         # local copies for performance
         sources = self.sources['testing']
@@ -2607,6 +3012,24 @@
             self.output_write("Removing obsolete source packages from testing (%d):\n" % (len(removals)))
             self.do_all(actions=removals)
 
+=======
+        if getattr(self.options, "remove_obsolete", "yes") == "yes":
+            self.__log("> Removing obsolete source packages from testing", type="I")
+            # local copies for performance
+            sources = self.sources['testing']
+            binaries = self.binaries['testing']
+            used = set(binaries[arch][0][binary][SOURCE]
+                       for arch in binaries
+                       for binary in binaries[arch][0]
+                       )
+            removals = [ MigrationItem("-%s/%s" % (source, sources[source][VERSION]))
+                         for source in sources if source not in used
+                       ]
+            if len(removals) > 0:
+                self.output_write("Removing obsolete source packages from testing (%d):\n" % (len(removals)))
+                self.do_all(actions=removals)
+
+>>>>>>> MERGE-SOURCE
         # smooth updates
         if self.options.smooth_updates:
             self.__log("> Removing old packages left in testing from smooth updates", type="I")
@@ -2670,6 +3093,7 @@
         self.__log("> Calculating current uninstallability counters", type="I")
         self.nuninst_orig = self.get_nuninst()
         self.nuninst_orig_save = self.get_nuninst()
+        self.all_selected = []
 
         import readline
         from completer import Completer
@@ -2891,6 +3315,7 @@
         else:
             self.upgrade_me = self.options.actions.split()
 
+        ensuredir(os.path.dirname(self.options.upgrade_output))
         with open(self.options.upgrade_output, 'w', encoding='utf-8') as f:
             self.__output = f
 
=== modified file 'britney_nobreakall.conf'
--- britney_nobreakall.conf 2015-10-27 17:32:31 +0000
+++ britney_nobreakall.conf 2015-11-17 11:05:43 +0000
@@ -1,26 +1,25 @@
 # Configuration file for britney
 
 # Paths for control files
-TESTING = /srv/release.debian.org/britney/var/data-b2/testing
-TPU = /srv/release.debian.org/britney/var/data-b2/testing-proposed-updates
-PU = /srv/release.debian.org/britney/var/data-b2/proposed-updates
-UNSTABLE = /srv/release.debian.org/britney/var/data-b2/unstable
+TESTING = data/%(SERIES)
+UNSTABLE = data/%(SERIES)-proposed
+PARTIAL_UNSTABLE = yes
 
 # Output
-NONINST_STATUS = /srv/release.debian.org/britney/var/data-b2/non-installable-status
-EXCUSES_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/excuses.html
-EXCUSES_YAML_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/excuses.yaml
-UPGRADE_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/output.txt
-HEIDI_OUTPUT = /srv/release.debian.org/britney/var/data-b2/output/HeidiResult
+NONINST_STATUS = data/%(SERIES)/non-installable-status
+EXCUSES_OUTPUT = output/%(SERIES)/excuses.html
+EXCUSES_YAML_OUTPUT = output/%(SERIES)/excuses.yaml
+UPGRADE_OUTPUT = output/%(SERIES)/output.txt
+HEIDI_OUTPUT = output/%(SERIES)/HeidiResult
 
 # List of release architectures
-ARCHITECTURES = i386 amd64 arm64 armel armhf mips mipsel powerpc ppc64el s390x
+ARCHITECTURES = amd64 arm64 armhf i386 powerpc ppc64el
 
 # if you're not in this list, arch: all packages are allowed to break on you
-NOBREAKALL_ARCHES = i386 amd64 arm64 armel armhf mips mipsel powerpc ppc64el s390x
+NOBREAKALL_ARCHES = amd64 arm64 armhf i386 powerpc ppc64el
 
 # if you're in this list, your packages may not stay in sync with the source
-FUCKED_ARCHES =
+OUTOFSYNC_ARCHES =
 
 # if you're in this list, your uninstallability count may increase
 BREAK_ARCHES =
@@ -29,14 +28,15 @@
 NEW_ARCHES =
 
 # priorities and delays
-MINDAYS_LOW = 10
-MINDAYS_MEDIUM = 5
-MINDAYS_HIGH = 2
+MINDAYS_LOW = 0
+MINDAYS_MEDIUM = 0
+MINDAYS_HIGH = 0
 MINDAYS_CRITICAL = 0
 MINDAYS_EMERGENCY = 0
 DEFAULT_URGENCY = medium
 
 # hint permissions
+<<<<<<< TREE
 HINTS_ABA = ALL
 HINTS_PKERN = STANDARD force
 HINTS_ADSB = STANDARD force force-hint
@@ -52,10 +52,38 @@
 HINTS_FREEZE-EXCEPTION = unblock unblock-udeb
 HINTS_SATBRITNEY = easy
 HINTS_AUTO-REMOVALS = remove
+=======
+HINTS_CJWATSON = ALL
+HINTS_ADCONRAD = ALL
+HINTS_KITTERMAN = ALL
+HINTS_LANEY = ALL
+HINTS_JRIDDELL = ALL
+HINTS_STEFANOR = ALL
+HINTS_STGRABER = ALL
+HINTS_VORLON = ALL
+HINTS_PITTI = ALL
+HINTS_FREEZE = block block-all
+
+HINTS_UBUNTU-TOUCH/DIDROCKS = block unblock
+HINTS_UBUNTU-TOUCH/EV = block unblock
+HINTS_UBUNTU-TOUCH/KEN-VANDINE = block unblock
+HINTS_UBUNTU-TOUCH/LOOL = block unblock
+HINTS_UBUNTU-TOUCH/MATHIEU-TL = block unblock
+HINTS_UBUNTU-TOUCH/OGRA = block unblock
+>>>>>>> MERGE-SOURCE
 
 # support for old libraries in testing (smooth update)
 # use ALL to enable smooth updates for all the sections
 #
 # naming a non-existent section will effectively disable new smooth
 # updates but still allow removals to occur
-SMOOTH_UPDATES = libs oldlibs
+SMOOTH_UPDATES = badgers
+
+REMOVE_OBSOLETE = no
+
+ADT_ENABLE = yes
+ADT_DEBUG = no
+ADT_ARCHES = amd64 i386 armhf ppc64el
+ADT_AMQP = amqp://test_request:password@162.213.33.228
+# Swift base URL with the results (must be publicly readable and browsable)
+ADT_SWIFT_URL = https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac
 
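
The data/ and output/ paths rely on %(SERIES) being expanded with the series britney runs against; a minimal sketch of that expansion (the actual substitution mechanism in britney.py is assumed, not quoted):

    series = 'wily'  # invented series
    value = 'data/%(SERIES)-proposed'
    path = value.replace('%(SERIES)', series)  # -> 'data/wily-proposed'
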
=== modified file 'britney_util.py'
--- britney_util.py 2015-09-13 18:33:06 +0000
+++ britney_util.py 2015-11-17 11:05:43 +0000
@@ -39,6 +39,11 @@
 
 binnmu_re = re.compile(r'^(.*)\+b\d+$')
 
+def ensuredir(directory):
+    if not os.path.isdir(directory):
+        os.makedirs(directory)
+
+
 def same_source(sv1, sv2, binnmu_re=binnmu_re):
     """Check if two version numbers are built from the same source
 
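
ensuredir is intentionally minimal; as an aside, on Python 3 the isdir check plus makedirs can be collapsed into one race-free call (an observation, not part of this branch):

    import os
    os.makedirs('output/wily', exist_ok=True)  # no error if it already exists
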
@@ -201,6 +206,7 @@
201 return "\n".join(" " + k + ": " + " ".join(libraries[k]) for k in libraries) + "\n"206 return "\n".join(" " + k + ": " + " ".join(libraries[k]) for k in libraries) + "\n"
202207
203208
209<<<<<<< TREE
204def compute_reverse_tree(inst_tester, affected):210def compute_reverse_tree(inst_tester, affected):
205 """Calculate the full dependency tree for a set of packages211 """Calculate the full dependency tree for a set of packages
206212
@@ -219,6 +225,106 @@
219 affected.update(new_pkg_ids)225 affected.update(new_pkg_ids)
220 remain.extend(new_pkg_ids)226 remain.extend(new_pkg_ids)
221 return None227 return None
228=======
229
230def register_reverses(packages, provides, check_doubles=True, iterator=None,
231 parse_depends=apt_pkg.parse_depends,
232 DEPENDS=DEPENDS, CONFLICTS=CONFLICTS,
233 RDEPENDS=RDEPENDS, RCONFLICTS=RCONFLICTS):
234 """Register reverse dependencies and conflicts for a given
235 sequence of packages
236
237 This method registers the reverse dependencies and conflicts for a
238 given sequence of packages. "packages" is a table of real
239 packages and "provides" is a table of virtual packages.
240
241 iterator is the sequence of packages for which the reverse
242 relations should be updated.
243
244 The "X=X" parameters are optimizations to avoid "load global" in
245 the loops.
246 """
247 if iterator is None:
248 iterator = packages.keys()
249 else:
250 iterator = ifilter_only(packages, iterator)
251
252 for pkg in iterator:
253 # register the list of the dependencies for the depending packages
254 dependencies = []
255 pkg_data = packages[pkg]
256 if pkg_data[DEPENDS]:
257 dependencies.extend(parse_depends(pkg_data[DEPENDS], False))
258 # go through the list
259 for p in dependencies:
260 for a in p:
261 # strip off Multi-Arch qualifiers like :any or :native
262 dep = a[0].split(':')[0]
263 # register real packages
264 if dep in packages and (not check_doubles or pkg not in packages[dep][RDEPENDS]):
265 packages[dep][RDEPENDS].append(pkg)
266 # also register packages which provide the package (if any)
267 if dep in provides:
268 for i in provides[dep]:
269 if i not in packages: continue
270 if not check_doubles or pkg not in packages[i][RDEPENDS]:
271 packages[i][RDEPENDS].append(pkg)
272 # register the list of the conflicts for the conflicting packages
273 if pkg_data[CONFLICTS]:
274 for p in parse_depends(pkg_data[CONFLICTS], False):
275 for a in p:
276 con = a[0]
277 # register real packages
278 if con in packages and (not check_doubles or pkg not in packages[con][RCONFLICTS]):
279 packages[con][RCONFLICTS].append(pkg)
280 # also register packages which provide the package (if any)
281 if con in provides:
282 for i in provides[con]:
283 if i not in packages: continue
284 if not check_doubles or pkg not in packages[i][RCONFLICTS]:
285 packages[i][RCONFLICTS].append(pkg)
286
287
288def compute_reverse_tree(packages_s, pkg, arch,
289 set=set, flatten=chain.from_iterable,
290 RDEPENDS=RDEPENDS):
291 """Calculate the full dependency tree for the given package
292
293 This method returns the full dependency tree for the package
294 "pkg", inside the "arch" architecture for a given suite flattened
295 as an iterable. The first argument "packages_s" is the binary
296 package table for that given suite (e.g. Britney().binaries["testing"]).
297
298 The tree (or graph) is returned as an iterable of (package, arch)
299 tuples and the iterable will contain ("pkg", "arch") if it is
300 available on that architecture.
301
302 If "pkg" is not available on that architecture in that suite,
303 this returns an empty iterable.
304
305 The method does not promise any ordering of the returned
306 elements and the iterable is not reusable.
307
308 The flatten=... and the "X=X" parameters are optimizations to
309 avoid "load global" in the loops.
310 """
311 binaries = packages_s[arch][0]
312 if pkg not in binaries:
313 return frozenset()
314 rev_deps = set(binaries[pkg][RDEPENDS])
315 seen = set([pkg])
316
317 binfilt = ifilter_only(binaries)
318 revfilt = ifilter_except(seen)
319
320 while rev_deps:
321 # mark all of the current iteration of packages as affected
322 seen |= rev_deps
323 # generate the next iteration, which is the reverse-dependencies of
324 # the current iteration
325 rev_deps = set(revfilt(flatten( binaries[x][RDEPENDS] for x in binfilt(rev_deps) )))
326 return zip(seen, repeat(arch))
327>>>>>>> MERGE-SOURCE
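The loop above is a plain breadth-first closure over the RDEPENDS edges; stripped of britney's data model it amounts to this standalone sketch, with rdeps as a simple name → reverse-dependencies dict:

    def reverse_closure(rdeps, pkg):
        if pkg not in rdeps:
            return frozenset()
        seen = {pkg}
        frontier = set(rdeps[pkg])
        while frontier:
            seen |= frontier
            # next level: reverse dependencies of the current frontier,
            # minus everything already visited
            frontier = {r for p in frontier
                        for r in rdeps.get(p, ())} - seen
        return seen

    rdeps = {'libc6': ['libgreen1'], 'libgreen1': ['green', 'lightgreen'],
             'green': [], 'lightgreen': []}
    assert reverse_closure(rdeps, 'libc6') == {
        'libc6', 'libgreen1', 'green', 'lightgreen'}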
222328
223329
224def write_nuninst(filename, nuninst):330def write_nuninst(filename, nuninst):
@@ -377,6 +483,7 @@
377 or "legacy-html".483 or "legacy-html".
378 """484 """
379 if output_format == "yaml":485 if output_format == "yaml":
486 ensuredir(os.path.dirname(dest_file))
380 with open(dest_file, 'w', encoding='utf-8') as f:487 with open(dest_file, 'w', encoding='utf-8') as f:
381 excuselist = []488 excuselist = []
382 for e in excuses:489 for e in excuses:
@@ -386,11 +493,13 @@
386 excusesdata["generated-date"] = datetime.utcnow()493 excusesdata["generated-date"] = datetime.utcnow()
387 f.write(yaml.dump(excusesdata, default_flow_style=False, allow_unicode=True))494 f.write(yaml.dump(excusesdata, default_flow_style=False, allow_unicode=True))
388 elif output_format == "legacy-html":495 elif output_format == "legacy-html":
496 ensuredir(os.path.dirname(dest_file))
389 with open(dest_file, 'w', encoding='utf-8') as f:497 with open(dest_file, 'w', encoding='utf-8') as f:
390 f.write("<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\" \"http://www.w3.org/TR/REC-html40/strict.dtd\">\n")498 f.write("<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\" \"http://www.w3.org/TR/REC-html40/strict.dtd\">\n")
391 f.write("<html><head><title>excuses...</title>")499 f.write("<html><head><title>excuses...</title>")
392 f.write("<meta http-equiv=\"Content-Type\" content=\"text/html;charset=utf-8\"></head><body>\n")500 f.write("<meta http-equiv=\"Content-Type\" content=\"text/html;charset=utf-8\"></head><body>\n")
393 f.write("<p>Generated: " + time.strftime("%Y.%m.%d %H:%M:%S %z", time.gmtime(time.time())) + "</p>\n")501 f.write("<p>Generated: " + time.strftime("%Y.%m.%d %H:%M:%S %z", time.gmtime(time.time())) + "</p>\n")
502 f.write("<p>See the <a href=\"https://wiki.ubuntu.com/ProposedMigration\">documentation</a> for help interpreting this page.</p>\n")
394 f.write("<ul>\n")503 f.write("<ul>\n")
395 for e in excuses:504 for e in excuses:
396 f.write("<li>%s" % e.html())505 f.write("<li>%s" % e.html())
@@ -436,6 +545,7 @@
436 (PROVIDES, 'Provides'), (CONFLICTS, 'Conflicts'),545 (PROVIDES, 'Provides'), (CONFLICTS, 'Conflicts'),
437 (ESSENTIAL, 'Essential'))546 (ESSENTIAL, 'Essential'))
438547
548 ensuredir(basedir)
439 for arch in packages_s:549 for arch in packages_s:
440 filename = os.path.join(basedir, 'Packages_%s' % arch)550 filename = os.path.join(basedir, 'Packages_%s' % arch)
441 binaries = packages_s[arch][0]551 binaries = packages_s[arch][0]
442552
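The ensuredir() calls added above make britney create its output directories on demand, which matters when pointing it at a fresh layout (e.g. a PPA run). The helper itself is added to britney_util.py outside this excerpt; semantically it is the usual mkdir-p guard, along these lines (a sketch — the actual definition may differ):

    import errno
    import os

    def ensuredir(directory):
        # create the directory, tolerating an already-existing one
        try:
            os.makedirs(directory)
        except OSError as e:
            if e.errno != errno.EEXIST or not os.path.isdir(directory):
                raise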
=== modified file 'consts.py'
--- consts.py 2015-09-13 17:33:22 +0000
+++ consts.py 2015-11-17 11:05:43 +0000
@@ -24,6 +24,7 @@
24BINARIES = 224BINARIES = 2
25MAINTAINER = 325MAINTAINER = 3
26FAKESRC = 426FAKESRC = 4
27AUTOPKGTEST = 5
2728
28# binary package29# binary package
29SOURCE = 230SOURCE = 2
3031
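These constants index the per-source-package list that britney builds from the Sources files, so the new AUTOPKGTEST = 5 slot carries the source's Testsuite information. A hypothetical entry, indexed per consts.py (the actual construction happens in britney.py's source reader, not shown in this hunk):

    from consts import FAKESRC, AUTOPKGTEST

    # [VERSION, SECTION, BINARIES, MAINTAINER, FAKESRC, AUTOPKGTEST]
    src = ['2', 'devel', ['libgreen1'], 'Joe <joe@example.com>',
           False, 'autopkgtest']   # last slot assumed to hold Testsuite
    if not src[FAKESRC] and src[AUTOPKGTEST]:
        pass  # package declares a test suite; consider requesting runs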
=== modified file 'excuse.py'
--- excuse.py 2015-09-06 14:23:05 +0000
+++ excuse.py 2015-11-17 11:05:43 +0000
@@ -1,6 +1,6 @@
1# -*- coding: utf-8 -*-1# -*- coding: utf-8 -*-
22
3# Copyright (C) 2001-2004 Anthony Towns <ajt@debian.org>3# Copyright (C) 2006, 2011-2015 Anthony Towns <ajt@debian.org>
4# Andreas Barth <aba@debian.org>4# Andreas Barth <aba@debian.org>
5# Fabio Tranchitella <kobold@debian.org>5# Fabio Tranchitella <kobold@debian.org>
66
@@ -16,6 +16,15 @@
1616
17import re17import re
1818
19EXCUSES_LABELS = {
20 "PASS": '<span style="background:#87d96c">Pass</span>',
21 "FAIL": '<span style="background:#ff6666">Failed</span>',
22 "ALWAYSFAIL": '<span style="background:#e5c545">Always failed</span>',
23 "REGRESSION": '<span style="background:#ff6666">Regression</span>',
24 "RUNNING": '<span style="background:#99ddff">Test in progress</span>',
25}
26
27
19class Excuse(object):28class Excuse(object):
20 """Excuse class29 """Excuse class
21 30
@@ -49,6 +58,9 @@
49 self._is_valid = False58 self._is_valid = False
50 self._dontinvalidate = False59 self._dontinvalidate = False
51 self.forced = False60 self.forced = False
61 self.run_autopkgtest = False
62 self.run_boottest = False
63 self.distribution = "ubuntu"
5264
53 self.invalid_deps = []65 self.invalid_deps = []
54 self.deps = {}66 self.deps = {}
@@ -59,6 +71,9 @@
59 self.oldbugs = set()71 self.oldbugs = set()
60 self.reason = {}72 self.reason = {}
61 self.htmlline = []73 self.htmlline = []
74 # type (e. g. "autopkgtest") -> package (e. g. "foo 2-1") -> arch ->
75 # ['PASS'|'FAIL'|'ALWAYSFAIL'|'REGRESSION'|'RUNNING', url]
76 self.tests = {}
6277
63 def sortkey(self):78 def sortkey(self):
64 if self.daysold == None:79 if self.daysold == None:
@@ -98,6 +113,10 @@
98 """Set the urgency of upload of the package"""113 """Set the urgency of upload of the package"""
99 self.urgency = date114 self.urgency = date
100115
116 def set_distribution(self, distribution):
117 """Set the distribution name"""
118 self.distribution = distribution
119
101 def add_dep(self, name, arch):120 def add_dep(self, name, arch):
102 """Add a dependency"""121 """Add a dependency"""
103 if name not in self.deps: self.deps[name]=[]122 if name not in self.deps: self.deps[name]=[]
@@ -131,19 +150,42 @@
131150
132 def html(self):151 def html(self):
133 """Render the excuse in HTML"""152 """Render the excuse in HTML"""
134 res = "<a id=\"%s\" name=\"%s\">%s</a> (%s to %s)\n<ul>\n" % \153 lp_pkg = "https://launchpad.net/%s/+source/%s" % (self.distribution, self.name.split("/")[0])
135 (self.name, self.name, self.name, self.ver[0], self.ver[1])154 if self.ver[0] == "-":
155 lp_old = self.ver[0]
156 else:
157 lp_old = "<a href=\"%s/%s\">%s</a>" % (
158 lp_pkg, self.ver[0], self.ver[0])
159 if self.ver[1] == "-":
160 lp_new = self.ver[1]
161 else:
162 lp_new = "<a href=\"%s/%s\">%s</a>" % (
163 lp_pkg, self.ver[1], self.ver[1])
164 res = (
165 "<a id=\"%s\" name=\"%s\" href=\"%s\">%s</a> (%s to %s)\n<ul>\n" %
166 (self.name, self.name, lp_pkg, self.name, lp_old, lp_new))
136 if self.maint:167 if self.maint:
137 res = res + "<li>Maintainer: %s\n" % (self.maint)168 res = res + "<li>Maintainer: %s\n" % (self.maint)
138 if self.section and self.section.find("/") > -1:169 if self.section and self.section.find("/") > -1:
139 res = res + "<li>Section: %s\n" % (self.section)170 res = res + "<li>Section: %s\n" % (self.section)
140 if self.daysold != None:171 if self.daysold != None:
141 if self.daysold < self.mindays:172 if self.mindays == 0:
173 res = res + ("<li>%d days old\n" % self.daysold)
174 elif self.daysold < self.mindays:
142 res = res + ("<li>Too young, only %d of %d days old\n" %175 res = res + ("<li>Too young, only %d of %d days old\n" %
143 (self.daysold, self.mindays))176 (self.daysold, self.mindays))
144 else:177 else:
145 res = res + ("<li>%d days old (needed %d days)\n" %178 res = res + ("<li>%d days old (needed %d days)\n" %
146 (self.daysold, self.mindays))179 (self.daysold, self.mindays))
180 for testtype in sorted(self.tests):
181 for pkg in sorted(self.tests[testtype]):
182 archmsg = []
183 for arch in sorted(self.tests[testtype][pkg]):
184 status, url = self.tests[testtype][pkg][arch]
185 archmsg.append('<a href="%s">%s: %s</a>' %
186 (url, arch, EXCUSES_LABELS[status]))
187 res = res + ("<li>%s for %s: %s</li>\n" % (testtype, pkg, ', '.join(archmsg)))
188
147 for x in self.htmlline:189 for x in self.htmlline:
148 res = res + "<li>" + x + "\n"190 res = res + "<li>" + x + "\n"
149 lastdep = ""191 lastdep = ""
@@ -172,6 +214,10 @@
172 """"adding reason"""214 """"adding reason"""
173 self.reason[reason] = 1215 self.reason[reason] = 1
174216
217 def addtest(self, type_, package, arch, state, url):
218 """Add test result"""
219 self.tests.setdefault(type_, {}).setdefault(package, {})[arch] = [state, url]
220
175 # TODO merge with html()221 # TODO merge with html()
176 def text(self):222 def text(self):
177 """Render the excuse in text"""223 """Render the excuse in text"""
@@ -184,7 +230,9 @@
184 if self.section and self.section.find("/") > -1:230 if self.section and self.section.find("/") > -1:
185 res.append("Section: %s" % (self.section))231 res.append("Section: %s" % (self.section))
186 if self.daysold != None:232 if self.daysold != None:
187 if self.daysold < self.mindays:233 if self.mindays == 0:
234 res.append("%d days old" % self.daysold)
235 elif self.daysold < self.mindays:
188 res.append(("Too young, only %d of %d days old" %236 res.append(("Too young, only %d of %d days old" %
189 (self.daysold, self.mindays)))237 (self.daysold, self.mindays)))
190 else:238 else:
@@ -225,5 +273,6 @@
225 else:273 else:
226 excusedata["reason"] = sorted(list(self.reason.keys()))274 excusedata["reason"] = sorted(list(self.reason.keys()))
227 excusedata["is-candidate"] = self.is_valid275 excusedata["is-candidate"] = self.is_valid
276 excusedata["tests"] = self.tests
228 return excusedata277 return excusedata
229278
230279
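Together, addtest() and the new tests attribute give every excuse a nested status map, which html() renders via EXCUSES_LABELS and excusedata exports to YAML. A brief usage sketch, as autopkgtest.py (not shown in this hunk) would call it:

    from excuse import Excuse

    e = Excuse('green')
    e.addtest('autopkgtest', 'lightgreen 1', 'i386', 'PASS',
              'http://autopkgtest.ubuntu.com/packages/l/lightgreen/series/i386')
    e.addtest('autopkgtest', 'lightgreen 1', 'amd64', 'REGRESSION',
              'http://autopkgtest.ubuntu.com/packages/l/lightgreen/series/amd64')

    # e.tests == {'autopkgtest': {'lightgreen 1': {
    #     'i386': ['PASS', '...'], 'amd64': ['REGRESSION', '...']}}}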
=== added file 'run-autopkgtest'
--- run-autopkgtest 1970-01-01 00:00:00 +0000
+++ run-autopkgtest 2015-11-17 11:05:43 +0000
@@ -0,0 +1,78 @@
1#!/usr/bin/python3
2# Request re-runs of autopkgtests for packages
3
4import os
5import sys
6import argparse
7import json
8
9import kombu
10
11my_dir = os.path.dirname(os.path.realpath(sys.argv[0]))
12
13
14def parse_args():
15 '''Parse command line arguments'''
16
17 parser = argparse.ArgumentParser()
18 parser.add_argument('-c', '--config',
19 default=os.path.join(my_dir, 'britney.conf'),
20 help='britney config file (default: %(default)s)')
21 parser.add_argument('-s', '--series', required=True,
22 help='Distro series name (required).')
23 parser.add_argument('-a', '--architecture', action='append', default=[],
24 help='Only run test(s) on given architecture name(s). '
25 'Can be specified multiple times (default: all).')
26 parser.add_argument('--trigger', action='append', default=[],
27 metavar='SOURCE/VERSION', required=True,
28 help='Add triggering package to request. '
29 'Can be specified multiple times.')
30 parser.add_argument('--ppa', metavar='LPUSER/PPANAME',
31 help='Enable PPA for requested test(s)')
32 parser.add_argument('package', nargs='+',
33 help='Source package name(s) whose tests to run.')
34 args = parser.parse_args()
35
36 # verify syntax of triggers
37 for t in args.trigger:
38 try:
39 (src, ver) = t.split('/')
40 except ValueError:
41 parser.error('Invalid trigger format "%s", must be "sourcepkg/version"' % t)
42
43 return args
44
45
46def parse_config(config_file):
47 '''Parse config file (like britney.py)'''
48
49 config = argparse.Namespace()
50 with open(config_file) as f:
51 for k, v in [r.split('=', 1) for r in f if '=' in r and not r.strip().startswith('#')]:
52 k = k.strip()
53 if not getattr(config, k.lower(), None):
54 setattr(config, k.lower(), v.strip())
55 return config
56
57
58if __name__ == '__main__':
59 args = parse_args()
60 config = parse_config(args.config)
61 if not args.architecture:
62 args.architecture = config.adt_arches.split()
63
64 params = {}
65 if args.trigger:
66 params['triggers'] = args.trigger
67 if args.ppa:
68 params['ppa'] = args.ppa
69 params = '\n' + json.dumps(params)
70
71 with kombu.Connection(config.adt_amqp) as conn:
72 for arch in args.architecture:
73 # don't use SimpleQueue here as it always declares queues;
74 # ACLs might not allow that
75 with kombu.Producer(conn, routing_key='debci-%s-%s' % (args.series, arch),
76 auto_declare=False) as p:
77 for pkg in args.package:
78 p.publish(pkg + params)
079
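For example, re-requesting lightgreen's tests against the green/2 trigger:

    $ ./run-autopkgtest -s series --trigger green/2 lightgreen

This publishes one message per configured architecture (ADT_ARCHES) to the debci-<series>-<arch> queues; each body is the package name, a newline, and the JSON parameters:

    lightgreen
    {"triggers": ["green/2"]}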
=== added directory 'tests'
=== added file 'tests/__init__.py'
--- tests/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/__init__.py 2015-11-17 11:05:43 +0000
@@ -0,0 +1,184 @@
1# (C) 2015 Canonical Ltd.
2#
3# This program is free software; you can redistribute it and/or modify
4# it under the terms of the GNU General Public License as published by
5# the Free Software Foundation; either version 2 of the License, or
6# (at your option) any later version.
7
8import os
9import shutil
10import subprocess
11import tempfile
12import unittest
13
14PROJECT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
15
16architectures = ['amd64', 'arm64', 'armhf', 'i386', 'powerpc', 'ppc64el']
17
18
19class TestData:
20
21 def __init__(self):
22 '''Construct local test package indexes.
23
24 The archive is initially empty. You can create new packages with
25 add() and add_src(). self.path contains the path of the archive, and
26 self.apt_source provides an apt source "deb" line.
27
28 It is kept in a temporary directory which gets removed when the
29 TestData object gets deleted.
30 '''
31 self.path = tempfile.mkdtemp(prefix='testarchive.')
32 self.apt_source = 'deb file://%s /' % self.path
33 self.series = 'series'
34 self.dirs = {False: os.path.join(self.path, 'data', self.series),
35 True: os.path.join(
36 self.path, 'data', '%s-proposed' % self.series)}
37 os.makedirs(self.dirs[False])
38 os.mkdir(self.dirs[True])
39 self.added_sources = {False: set(), True: set()}
40 self.added_binaries = {False: set(), True: set()}
41
42 # pre-create all files for all architectures
43 for arch in architectures:
44 for dir in self.dirs.values():
45 with open(os.path.join(dir, 'Packages_' + arch), 'w'):
46 pass
47 for dir in self.dirs.values():
48 for fname in ['Dates', 'Blocks']:
49 with open(os.path.join(dir, fname), 'w'):
50 pass
51 for dname in ['Hints']:
52 os.mkdir(os.path.join(dir, dname))
53
54 os.mkdir(os.path.join(self.path, 'output'))
55
56 # create temporary home dir for proposed-migration autopkgtest status
57 self.home = os.path.join(self.path, 'home')
58 os.environ['HOME'] = self.home
59 os.makedirs(os.path.join(self.home, 'proposed-migration',
60 'autopkgtest', 'work'))
61
62 def __del__(self):
63 shutil.rmtree(self.path)
64
65 def add(self, name, unstable, fields={}, add_src=True, testsuite=None):
66 '''Add a binary package to the index file.
67
68 You need to specify at least the package name and in which list to put
69 it (unstable==True for unstable/proposed, or False for
70 testing/release). fields specifies all additional entries, e. g.
71 {'Depends': 'foo, bar', 'Conflicts': 'baz'}. There are defaults for most
72 fields.
73
74 Unless add_src is set to False, this will also automatically create a
75 source record, based on fields['Source'] and name. In that case, the
76 "Testsuite:" field is set to the testsuite argument.
77 '''
78 assert (name not in self.added_binaries[unstable])
79 self.added_binaries[unstable].add(name)
80
81 fields.setdefault('Architecture', 'all')
82 fields.setdefault('Version', '1')
83 fields.setdefault('Priority', 'optional')
84 fields.setdefault('Section', 'devel')
85 fields.setdefault('Description', 'test pkg')
86 if fields['Architecture'] == 'all':
87 for a in architectures:
88 self._append(name, unstable, 'Packages_' + a, fields)
89 else:
90 self._append(name, unstable, 'Packages_' + fields['Architecture'],
91 fields)
92
93 if add_src:
94 src = fields.get('Source', name)
95 if src not in self.added_sources[unstable]:
96 srcfields = {'Version': fields['Version'],
97 'Section': fields['Section']}
98 if testsuite:
99 srcfields['Testsuite'] = testsuite
100 self.add_src(src, unstable, srcfields)
101
102 def add_src(self, name, unstable, fields={}):
103 '''Add a source package to the index file.
104
105 You need to specify at least the package name and in which list to put
106 it (unstable==True for unstable/proposed, or False for
107 testing/release). fields specifies all additional entries, which can be
108 Version (default: 1), Section (default: devel), Testsuite (default:
109 none), and Extra-Source-Only.
110 '''
111 assert (name not in self.added_sources[unstable])
112 self.added_sources[unstable].add(name)
113
114 fields.setdefault('Version', '1')
115 fields.setdefault('Section', 'devel')
116 self._append(name, unstable, 'Sources', fields)
117
118 def _append(self, name, unstable, file_name, fields):
119 with open(os.path.join(self.dirs[unstable], file_name), 'a') as f:
120 f.write('''Package: %s
121Maintainer: Joe <joe@example.com>
122''' % name)
123
124 for k, v in fields.items():
125 f.write('%s: %s\n' % (k, v))
126 f.write('\n')
127
128 def remove_all(self, unstable):
129 '''Remove all added packages'''
130
131 self.added_binaries[unstable] = set()
132 self.added_sources[unstable] = set()
133 for a in architectures:
134 open(os.path.join(self.dirs[unstable], 'Packages_' + a), 'w').close()
135 open(os.path.join(self.dirs[unstable], 'Sources'), 'w').close()
136
137
138class TestBase(unittest.TestCase):
139
140 def setUp(self):
141 super(TestBase, self).setUp()
142 self.data = TestData()
143 self.britney = os.path.join(PROJECT_DIR, 'britney.py')
144 # create temporary config so that tests can hack it
145 self.britney_conf = os.path.join(self.data.path, 'britney.conf')
146 shutil.copy(os.path.join(PROJECT_DIR, 'britney.conf'), self.britney_conf)
147 assert os.path.exists(self.britney)
148
149 def tearDown(self):
150 del self.data
151
152 def run_britney(self, args=[]):
153 '''Run britney.
154
155 Assert that it succeeds and does not produce anything on stderr.
156 Return (excuses.yaml, excuses.html, britney_out).
157 '''
158 britney = subprocess.Popen([self.britney, '-v', '-c', self.britney_conf,
159 '--distribution=ubuntu',
160 '--series=%s' % self.data.series],
161 stdout=subprocess.PIPE,
162 stderr=subprocess.PIPE,
163 cwd=self.data.path,
164 universal_newlines=True)
165 (out, err) = britney.communicate()
166 self.assertEqual(britney.returncode, 0, out + err)
167 self.assertEqual(err, '')
168
169 with open(os.path.join(self.data.path, 'output', self.data.series,
170 'excuses.yaml')) as f:
171 yaml = f.read()
172 with open(os.path.join(self.data.path, 'output', self.data.series,
173 'excuses.html')) as f:
174 html = f.read()
175
176 return (yaml, html, out)
177
178 def create_hint(self, username, content):
179 '''Create a hint file for the given username and content'''
180
181 hints_path = os.path.join(
182 self.data.path, 'data', self.data.series + '-proposed', 'Hints', username)
183 with open(hints_path, 'w') as fd:
184 fd.write(content)
0185
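A condensed sketch of how the test cases below use this fixture — seed the fake archive, run britney, inspect the generated excuses (names are illustrative; real tests additionally adjust britney.conf in setUp, as TestAutoPkgTest does):

    import yaml
    from tests import TestBase

    class ExampleTest(TestBase):
        def test_new_version_shows_up(self):
            # new version in unstable/proposed with a declared test suite
            self.data.add('goodpkg', True, {'Version': '2'},
                          testsuite='autopkgtest')
            excuses_yaml, excuses_html, out = self.run_britney()
            sources = yaml.load(excuses_yaml)['sources']
            self.assertIn('goodpkg', [e['source'] for e in sources])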
=== added file 'tests/mock_swift.py'
--- tests/mock_swift.py 1970-01-01 00:00:00 +0000
+++ tests/mock_swift.py 2015-11-17 11:05:43 +0000
@@ -0,0 +1,170 @@
1# Mock a Swift server with autopkgtest results
2# Author: Martin Pitt <martin.pitt@ubuntu.com>
3
4import os
5import tarfile
6import io
7import sys
8import socket
9import time
10import tempfile
11import json
12
13try:
14 from http.server import HTTPServer, BaseHTTPRequestHandler
15 from urllib.parse import urlparse, parse_qs
16except ImportError:
17 # Python 2
18 from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
19 from urlparse import urlparse, parse_qs
20
21
22class SwiftHTTPRequestHandler(BaseHTTPRequestHandler):
23 '''Mock swift container with autopkgtest results
24
25 This supports retrieving a particular result.tar (e. g.
26 /container/path/result.tar) or listing the container contents
27 (/container/?prefix=foo&delimiter=@&marker=foo/bar).
28 '''
29 # map container -> result.tar path -> (exitcode, testpkg-version[, testinfo])
30 results = {}
31
32 def do_GET(self):
33 p = urlparse(self.path)
34 path_comp = p.path.split('/')
35 container = path_comp[1]
36 path = '/'.join(path_comp[2:])
37 if path:
38 self.serve_file(container, path)
39 else:
40 self.list_container(container, parse_qs(p.query))
41
42 def serve_file(self, container, path):
43 if os.path.basename(path) != 'result.tar':
44 self.send_error(404, 'File not found (only result.tar supported)')
45 return
46 try:
47 fields = self.results[container][os.path.dirname(path)]
48 try:
49 (exitcode, pkgver, testinfo) = fields
50 except ValueError:
51 (exitcode, pkgver) = fields
52 testinfo = None
53 except KeyError:
54 self.send_error(404, 'File not found')
55 return
56
57 self.send_response(200)
58 self.send_header('Content-type', 'application/octet-stream')
59 self.end_headers()
60
61 tar = io.BytesIO()
62 with tarfile.open('result.tar', 'w', tar) as results:
63 # add exitcode
64 contents = ('%i' % exitcode).encode()
65 ti = tarfile.TarInfo('exitcode')
66 ti.size = len(contents)
67 results.addfile(ti, io.BytesIO(contents))
68 # add testpkg-version
69 if pkgver is not None:
70 contents = pkgver.encode()
71 ti = tarfile.TarInfo('testpkg-version')
72 ti.size = len(contents)
73 results.addfile(ti, io.BytesIO(contents))
74 # add testinfo.json
75 if testinfo:
76 contents = json.dumps(testinfo).encode()
77 ti = tarfile.TarInfo('testinfo.json')
78 ti.size = len(contents)
79 results.addfile(ti, io.BytesIO(contents))
80
81 self.wfile.write(tar.getvalue())
82
83 def list_container(self, container, query):
84 try:
85 objs = set(['%s/result.tar' % r for r in self.results[container]])
86 except KeyError:
87 self.send_error(404, 'Container does not exist')
88 return
89 if 'prefix' in query:
90 p = query['prefix'][-1]
91 objs = set([o for o in objs if o.startswith(p)])
92 if 'delimiter' in query:
93 d = query['delimiter'][-1]
94 # if find() returns a value, we want to include the delimiter, thus
95 # bump its result; for "not found" return None
96 find_adapter = lambda i: i + 1 if i >= 0 else None
97 objs = set([o[:find_adapter(o.find(d))] for o in objs])
98 if 'marker' in query:
99 m = query['marker'][-1]
100 objs = set([o for o in objs if o > m])
101
102 self.send_response(objs and 200 or 204) # 204: "No Content"
103 self.send_header('Content-type', 'text/plain')
104 self.end_headers()
105 self.wfile.write(('\n'.join(sorted(objs)) + '\n').encode('UTF-8'))
106
107
108class AutoPkgTestSwiftServer:
109 def __init__(self, port=8080):
110 self.port = port
111 self.server_pid = None
112 self.log = None
113
114 def __del__(self):
115 if self.server_pid:
116 self.stop()
117
118 @classmethod
119 def set_results(klass, results):
120 '''Set served results.
121
122 results is a map: container -> result.tar path ->
123 (exitcode, testpkg-version, testinfo)
124 '''
125 SwiftHTTPRequestHandler.results = results
126
127 def start(self):
128 assert self.server_pid is None, 'already started'
129 if self.log:
130 self.log.close()
131 self.log = tempfile.TemporaryFile()
132 p = os.fork()
133 if p:
134 # parent: wait until server starts
135 self.server_pid = p
136 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
137 while True:
138 if s.connect_ex(('127.0.0.1', self.port)) == 0:
139 break
140 time.sleep(0.1)
141 s.close()
142 return
143
144 # child; quiesce logging on stderr
145 os.dup2(self.log.fileno(), sys.stderr.fileno())
146 srv = HTTPServer(('', self.port), SwiftHTTPRequestHandler)
147 srv.serve_forever()
148 sys.exit(0)
149
150 def stop(self):
151 assert self.server_pid, 'not running'
152 os.kill(self.server_pid, 15)
153 os.waitpid(self.server_pid, 0)
154 self.server_pid = None
155 self.log.close()
156
157if __name__ == '__main__':
158 srv = AutoPkgTestSwiftServer()
159 srv.set_results({'autopkgtest-series': {
160 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1'),
161 'series/i386/g/green/20150101_100000@': (0, 'green 1', {'custom_environment': ['ADT_TEST_TRIGGERS=green']}),
162 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1'),
163 'series/i386/l/lightgreen/20150101_100101@': (4, 'lightgreen 2'),
164 'series/i386/l/lightgreen/20150101_100102@': (0, 'lightgreen 3'),
165 }})
166 srv.start()
167 print('Running on http://localhost:8080/autopkgtest-series')
168 print('Press Enter to quit.')
169 sys.stdin.readline()
170 srv.stop()
0171
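The server implements just enough of the Swift GET API for britney's result fetcher; against the demo data above it can be exercised with the standard library alone (a usage sketch):

    import io
    import tarfile
    import urllib.request

    base = 'http://localhost:8080/autopkgtest-series'

    # container listing with a prefix, as the result fetcher queries it
    listing = urllib.request.urlopen(
        base + '/?prefix=series/i386/l/lightgreen/').read().decode()

    # fetch one result.tar and read the test's exit code (4 == failed)
    data = urllib.request.urlopen(
        base + '/series/i386/l/lightgreen/20150101_100101@/result.tar').read()
    with tarfile.open(fileobj=io.BytesIO(data)) as tar:
        exitcode = int(tar.extractfile('exitcode').read())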
=== added file 'tests/test_autopkgtest.py'
--- tests/test_autopkgtest.py 1970-01-01 00:00:00 +0000
+++ tests/test_autopkgtest.py 2015-11-17 11:05:43 +0000
@@ -0,0 +1,1586 @@
1#!/usr/bin/python3
2# (C) 2014 - 2015 Canonical Ltd.
3#
4# This program is free software; you can redistribute it and/or modify
5# it under the terms of the GNU General Public License as published by
6# the Free Software Foundation; either version 2 of the License, or
7# (at your option) any later version.
8
9from textwrap import dedent
10
11import apt_pkg
12import os
13import sys
14import fileinput
15import unittest
16import json
17import pprint
18
19import yaml
20
21PROJECT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
22sys.path.insert(0, PROJECT_DIR)
23
24from tests import TestBase, mock_swift
25
26apt_pkg.init()
27
28
29# shortcut for test triggers
30def tr(s):
31 return {'custom_environment': ['ADT_TEST_TRIGGERS=%s' % s]}
32
33
34class TestAutoPkgTest(TestBase):
35 '''AMQP/cloud interface'''
36
37 ################################################################
38 # Common test code
39 ################################################################
40
41 def setUp(self):
42 super(TestAutoPkgTest, self).setUp()
43 self.fake_amqp = os.path.join(self.data.path, 'amqp')
44
45 # Disable boottests and set fake AMQP and Swift server
46 for line in fileinput.input(self.britney_conf, inplace=True):
47 if 'BOOTTEST_ENABLE' in line:
48 print('BOOTTEST_ENABLE = no')
49 elif 'ADT_AMQP' in line:
50 print('ADT_AMQP = file://%s' % self.fake_amqp)
51 elif 'ADT_SWIFT_URL' in line:
52 print('ADT_SWIFT_URL = http://localhost:18085')
53 elif 'ADT_ARCHES' in line:
54 print('ADT_ARCHES = amd64 i386')
55 else:
56 sys.stdout.write(line)
57
58 # add a bunch of packages to testing to avoid repetition
59 self.data.add('libc6', False)
60 self.data.add('libgreen1', False, {'Source': 'green',
61 'Depends': 'libc6 (>= 0.9)'},
62 testsuite='autopkgtest')
63 self.data.add('green', False, {'Depends': 'libc6 (>= 0.9), libgreen1',
64 'Conflicts': 'blue'},
65 testsuite='autopkgtest')
66 self.data.add('lightgreen', False, {'Depends': 'libgreen1'},
67 testsuite='autopkgtest')
68 self.data.add('darkred', False, {},
69 testsuite='autopkgtest')
70 # autodep8 or similar test
71 self.data.add('darkgreen', False, {'Depends': 'libgreen1'},
72 testsuite='autopkgtest-pkg-foo')
73 self.data.add('blue', False, {'Depends': 'libc6 (>= 0.9)',
74 'Conflicts': 'green'},
75 testsuite='specialtest')
76
77 # create mock Swift server (but don't start it yet, as tests first need
78 # to poke in results)
79 self.swift = mock_swift.AutoPkgTestSwiftServer(port=18085)
80 self.swift.set_results({})
81
82 def tearDown(self):
83 del self.swift
84
85 def do_test(self, unstable_add, expect_status, expect_excuses={}):
86 '''Run britney with some unstable packages and verify excuses.
87
88 unstable_add is a list of (binpkgname, field_dict, testsuite_value)
89 passed to TestData.add for "unstable".
90
91 expect_status is a dict sourcename → (is_candidate, testsrc → arch → status)
92 that is checked against the excuses YAML.
93
94 expect_excuses is a dict sourcename → [(key, value), ...] of
95 matches that are checked against the excuses YAML.
96
97 Return (output, excuses_dict).
98 '''
99 for (pkg, fields, testsuite) in unstable_add:
100 self.data.add(pkg, True, fields, True, testsuite)
101
102 self.swift.start()
103 (excuses_yaml, excuses_html, out) = self.run_britney()
104 self.swift.stop()
105
106 # convert excuses to source indexed dict
107 excuses_dict = {}
108 for s in yaml.load(excuses_yaml)['sources']:
109 excuses_dict[s['source']] = s
110
111 if 'SHOW_EXCUSES' in os.environ:
112 print('------- excuses -----')
113 pprint.pprint(excuses_dict, width=200)
114 if 'SHOW_HTML' in os.environ:
115 print('------- excuses.html -----\n%s\n' % excuses_html)
116 if 'SHOW_OUTPUT' in os.environ:
117 print('------- output -----\n%s\n' % out)
118
119 for src, (is_candidate, testmap) in expect_status.items():
120 self.assertEqual(excuses_dict[src]['is-candidate'], is_candidate,
121 src + ': ' + pprint.pformat(excuses_dict[src]))
122 for testsrc, archmap in testmap.items():
123 for arch, status in archmap.items():
124 self.assertEqual(excuses_dict[src]['tests']['autopkgtest'][testsrc][arch][0],
125 status,
126 excuses_dict[src]['tests']['autopkgtest'][testsrc])
127
128 for src, matches in expect_excuses.items():
129 for k, v in matches:
130 if isinstance(excuses_dict[src][k], list):
131 self.assertIn(v, excuses_dict[src][k])
132 else:
133 self.assertEqual(excuses_dict[src][k], v)
134
135 self.amqp_requests = set()
136 try:
137 with open(self.fake_amqp) as f:
138 for line in f:
139 self.amqp_requests.add(line.strip())
140 os.unlink(self.fake_amqp)
141 except IOError:
142 pass
143
144 try:
145 with open(os.path.join(self.data.path, 'data/series-proposed/autopkgtest/pending.txt')) as f:
146 self.pending_requests = f.read()
147 except IOError:
148 self.pending_requests = None
149
150 self.assertNotIn('FIXME', out)
151
152 return (out, excuses_dict)
153
154 ################################################################
155 # Tests for generic packages
156 ################################################################
157
158 def test_no_request_for_uninstallable(self):
159 '''Does not request a test for an uninstallable package'''
160
161 exc = self.do_test(
162 # uninstallable unstable version
163 [('lightgreen', {'Version': '1.1~beta', 'Depends': 'libc6 (>= 0.9), libgreen1 (>= 2)'}, 'autopkgtest')],
164 {'lightgreen': (False, {})},
165 {'lightgreen': [('old-version', '1'), ('new-version', '1.1~beta'),
166 ('reason', 'depends'),
167 ('excuses', 'lightgreen/amd64 unsatisfiable Depends: libgreen1 (>= 2)')
168 ]
169 })[1]
170 # autopkgtest should not be triggered for uninstallable pkg
171 self.assertEqual(exc['lightgreen']['tests'], {})
172
173 self.assertEqual(self.pending_requests, '')
174 self.assertEqual(self.amqp_requests, set())
175
176 def test_no_wait_for_always_failed_test(self):
177 '''We do not need to wait for results for tests which have always
178 failed, but the tests should still be triggered.'''
179
180 self.swift.set_results({'autopkgtest-series': {
181 'series/i386/d/darkred/20150101_100000@': (4, 'darkred 1'),
182 }})
183
184 exc = self.do_test(
185 [('darkred', {'Version': '2'}, 'autopkgtest')],
186 {'darkred': (True, {'darkred 2': {'i386': 'RUNNING'}})},
187 {'darkred': [('old-version', '1'), ('new-version', '2'),
188 ('excuses', 'Valid candidate')
189 ]
190 })[1]
191
192 self.assertEqual(exc['darkred']['tests'], {'autopkgtest':
193 {'darkred 2': {
194 'amd64': ['RUNNING',
195 'http://autopkgtest.ubuntu.com/packages/d/darkred/series/amd64'],
196 'i386': ['RUNNING',
197 'http://autopkgtest.ubuntu.com/packages/d/darkred/series/i386']}}})
198
199 self.assertEqual(
200 self.pending_requests, dedent('''\
201 darkred 2 amd64 darkred 2
202 darkred 2 i386 darkred 2
203 '''))
204 self.assertEqual(
205 self.amqp_requests,
206 set(['debci-series-amd64:darkred {"triggers": ["darkred/2"]}',
207 'debci-series-i386:darkred {"triggers": ["darkred/2"]}']))
208
209
210 def test_multi_rdepends_with_tests_all_running(self):
211 '''Multiple reverse dependencies with tests (all running)'''
212
213 # green has passed before
214 self.swift.set_results({'autopkgtest-series': {
215 'series/i386/g/green/20150101_100000@': (0, 'green 1'),
216 }})
217
218 self.do_test(
219 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
220 {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
221 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
222 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
223 })
224 },
225 {'green': [('old-version', '1'), ('new-version', '2')]})
226
227 # we expect the package's and its reverse dependencies' tests to get
228 # triggered
229 self.assertEqual(
230 self.amqp_requests,
231 set(['debci-series-i386:green {"triggers": ["green/2"]}',
232 'debci-series-amd64:green {"triggers": ["green/2"]}',
233 'debci-series-i386:lightgreen {"triggers": ["green/2"]}',
234 'debci-series-amd64:lightgreen {"triggers": ["green/2"]}',
235 'debci-series-i386:darkgreen {"triggers": ["green/2"]}',
236 'debci-series-amd64:darkgreen {"triggers": ["green/2"]}']))
237
238 # ... and that they get recorded as pending
239 expected_pending = '''darkgreen 1 amd64 green 2
240darkgreen 1 i386 green 2
241green 2 amd64 green 2
242green 2 i386 green 2
243lightgreen 1 amd64 green 2
244lightgreen 1 i386 green 2
245'''
246 self.assertEqual(self.pending_requests, expected_pending)
247
248 # if we run britney again this should *not* trigger any new tests
249 self.do_test([], {'green': (False, {})})
250 self.assertEqual(self.amqp_requests, set())
251 # but the set of pending tests doesn't change
252 self.assertEqual(self.pending_requests, expected_pending)
253
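pending.txt, as asserted above, holds one whitespace-separated record per outstanding request: test source, test version, architecture, trigger source, trigger version. For instance:

    # "darkgreen 1 amd64 green 2" -> darkgreen 1's test on amd64,
    # triggered by green 2
    def parse_pending(text):
        for line in text.splitlines():
            src, ver, arch, trig_src, trig_ver = line.split()
            yield src, ver, arch, '%s/%s' % (trig_src, trig_ver)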
254 def test_multi_rdepends_with_tests_all_pass(self):
255 '''Multiple reverse dependencies with tests (all pass)'''
256
257 # green has passed before
258 self.swift.set_results({'autopkgtest-series': {
259 'series/i386/g/green/20150101_100000@': (0, 'green 1'),
260 }})
261
262 # first run requests tests and marks them as pending
263 self.do_test(
264 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
265 {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
266 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
267 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
268 })
269 },
270 {'green': [('old-version', '1'), ('new-version', '2')]})
271
272 # second run collects the results
273 self.swift.set_results({'autopkgtest-series': {
274 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
275 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/2')),
276 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/2')),
277 'series/amd64/l/lightgreen/20150101_100101@': (0, 'lightgreen 1', tr('green/2')),
278 # version in testing fails
279 'series/i386/g/green/20150101_020000@': (4, 'green 1', tr('green/1')),
280 'series/amd64/g/green/20150101_020000@': (4, 'green 1', tr('green/1')),
281 # version in unstable succeeds
282 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')),
283 'series/amd64/g/green/20150101_100201@': (0, 'green 2', tr('green/2')),
284 }})
285
286 out = self.do_test(
287 [],
288 {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'},
289 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
290 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
291 })
292 },
293 {'green': [('old-version', '1'), ('new-version', '2')]}
294 )[0]
295
296 # all tests ran, there should be no more pending ones
297 self.assertEqual(self.pending_requests, '')
298
299 # not expecting any failures to retrieve from swift
300 self.assertNotIn('Failure', out, out)
301
302 # caches the results and triggers
303 with open(os.path.join(self.data.path, 'data/series-proposed/autopkgtest/results.cache')) as f:
304 res = json.load(f)
305 self.assertEqual(res['green']['i386'],
306 ['20150101_100200@',
307 {'1': {}, '2': {'green/2': True}},
308 True])
309 self.assertEqual(res['lightgreen']['amd64'],
310 ['20150101_100101@',
311 {'1': {'green/2': True}},
312 True])
313
314 # third run should not trigger any new tests, should all be in the
315 # cache
316 self.swift.set_results({})
317 out = self.do_test(
318 [],
319 {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'},
320 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
321 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
322 })
323 })[0]
324 self.assertEqual(self.amqp_requests, set())
325 self.assertEqual(self.pending_requests, '')
326 self.assertNotIn('Failure', out, out)
327
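The cache assertions in the previous test also document the results.cache format: per package and architecture, a triple of the newest Swift timestamp fetched, a version → {trigger: passed} map, and the ever_passed flag that the never-passed logic keys off:

    # res['green']['i386'] from the assertion above, annotated:
    latest_stamp, ver_results, ever_passed = [
        '20150101_100200@',        # newest result fetched from Swift
        {'1': {},                  # green 1: no results for relevant triggers
         '2': {'green/2': True}},  # green 2 passed for trigger green/2
        True]                      # the tests have passed at least once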
328 def test_multi_rdepends_with_tests_mixed(self):
329 '''Multiple reverse dependencies with tests (mixed results)'''
330
331 # green has passed before
332 self.swift.set_results({'autopkgtest-series': {
333 'series/i386/g/green/20150101_100000@': (0, 'green 1'),
334 }})
335
336 # first run requests tests and marks them as pending
337 self.do_test(
338 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
339 {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
340 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
341 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
342 })
343 },
344 {'green': [('old-version', '1'), ('new-version', '2')]})
345
346 # second run collects the results
347 self.swift.set_results({'autopkgtest-series': {
348 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
349 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/2')),
350 'series/amd64/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')),
351 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')),
352 'series/amd64/g/green/20150101_100201@': (4, 'green 2', tr('green/2')),
353 # unrelated results (wrong trigger), ignore this!
354 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/1')),
355 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('blue/1')),
356 }})
357
358 out = self.do_test(
359 [],
360 {'green': (False, {'green 2': {'amd64': 'ALWAYSFAIL', 'i386': 'PASS'},
361 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'RUNNING'},
362 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'PASS'},
363 })
364 })
365
366 # not expecting any failures to retrieve from swift
367 self.assertNotIn('Failure', out, out)
368
369 # there should be some pending ones
370 self.assertIn('darkgreen 1 amd64 green 2', self.pending_requests)
371 self.assertIn('lightgreen 1 i386 green 2', self.pending_requests)
372
373 def test_multi_rdepends_with_tests_mixed_no_recorded_triggers(self):
374 '''Multiple reverse dependencies with tests (mixed results), no recorded triggers'''
375
376 # green has passed before
377 self.swift.set_results({'autopkgtest-series': {
378 'series/i386/g/green/20150101_100000@': (0, 'green 1'),
379 }})
380
381 # first run requests tests and marks them as pending
382 self.do_test(
383 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
384 {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
385 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
386 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
387 })
388 },
389 {'green': [('old-version', '1'), ('new-version', '2')]})
390
391 # second run collects the results
392 self.swift.set_results({'autopkgtest-series': {
393 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1'),
394 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 1'),
395 'series/amd64/l/lightgreen/20150101_100101@': (4, 'lightgreen 1'),
396 'series/i386/g/green/20150101_100200@': (0, 'green 2'),
397 'series/amd64/g/green/20150101_100201@': (4, 'green 2'),
398 }})
399
400 out = self.do_test(
401 [],
402 {'green': (False, {'green 2': {'amd64': 'ALWAYSFAIL', 'i386': 'PASS'},
403 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'RUNNING'},
404 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'PASS'},
405 })
406 })
407
408 # not expecting any failures to retrieve from swift
409 self.assertNotIn('Failure', out, out)
410
411 # there should be some pending ones
412 self.assertIn('darkgreen 1 amd64 green 2', self.pending_requests)
413 self.assertIn('lightgreen 1 i386 green 2', self.pending_requests)
414
415 def test_multi_rdepends_with_tests_regression(self):
416 '''Multiple reverse dependencies with tests (regression)'''
417
418 self.swift.set_results({'autopkgtest-series': {
419 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
420 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
421 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/1')),
422 'series/i386/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')),
423 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/1')),
424 'series/amd64/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')),
425 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')),
426 'series/amd64/g/green/20150101_100200@': (0, 'green 2', tr('green/2')),
427 'series/amd64/g/green/20150101_100201@': (4, 'green 2', tr('green/2')),
428 }})
429
430 out = self.do_test(
431 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
432 {'green': (False, {'green 2': {'amd64': 'REGRESSION', 'i386': 'PASS'},
433 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'},
434 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
435 })
436 },
437 {'green': [('old-version', '1'), ('new-version', '2')]}
438 )[0]
439
440 # we already had all results before the run, so this should not trigger
441 # any new requests
442 self.assertEqual(self.amqp_requests, set())
443 self.assertEqual(self.pending_requests, '')
444
445 # not expecting any failures to retrieve from swift
446 self.assertNotIn('Failure', out, out)
447
448 def test_multi_rdepends_with_tests_regression_last_pass(self):
449 '''Multiple reverse dependencies with tests (regression), last one passes
450
451 This ensures that we don't just evaluate the test result of the last
452 test, but all of them.
453 '''
454 self.swift.set_results({'autopkgtest-series': {
455 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
456 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
457 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/2')),
458 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/2')),
459 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')),
460 'series/amd64/g/green/20150101_100200@': (0, 'green 2', tr('green/2')),
461 'series/amd64/g/green/20150101_100201@': (4, 'green 2', tr('green/2')),
462 }})
463
464 out = self.do_test(
465 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
466 {'green': (False, {'green 2': {'amd64': 'REGRESSION', 'i386': 'PASS'},
467 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
468 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
469 })
470 },
471 {'green': [('old-version', '1'), ('new-version', '2')]}
472 )[0]
473
474 self.assertEqual(self.pending_requests, '')
475 # not expecting any failures to retrieve from swift
476 self.assertNotIn('Failure', out, out)
477
478 def test_multi_rdepends_with_tests_always_failed(self):
479 '''Multiple reverse dependencies with tests (always failed)'''
480
481 self.swift.set_results({'autopkgtest-series': {
482 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
483 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
484 'series/i386/l/lightgreen/20150101_100100@': (4, 'lightgreen 1', tr('green/1')),
485 'series/i386/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')),
486 'series/amd64/l/lightgreen/20150101_100100@': (4, 'lightgreen 1', tr('green/1')),
487 'series/amd64/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')),
488 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')),
489 'series/amd64/g/green/20150101_100200@': (4, 'green 2', tr('green/1')),
490 'series/amd64/g/green/20150101_100201@': (4, 'green 2', tr('green/2')),
491 }})
492
493 out = self.do_test(
494 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
495 {'green': (True, {'green 2': {'amd64': 'ALWAYSFAIL', 'i386': 'PASS'},
496 'lightgreen 1': {'amd64': 'ALWAYSFAIL', 'i386': 'ALWAYSFAIL'},
497 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
498 })
499 },
500 {'green': [('old-version', '1'), ('new-version', '2')]}
501 )[0]
502
503 self.assertEqual(self.pending_requests, '')
504 # not expecting any failures to retrieve from swift
505 self.assertNotIn('Failure', out, out)
506
507 def test_multi_rdepends_arch_specific(self):
508 '''Multiple reverse dependencies with arch specific tests'''
509
510 # green64 has passed before
511 self.swift.set_results({'autopkgtest-series': {
512 'series/amd64/g/green64/20150101_100000@': (0, 'green64 0.1'),
513 }})
514
515 self.data.add('green64', False, {'Depends': 'libc6 (>= 0.9), libgreen1',
516 'Architecture': 'amd64'},
517 testsuite='autopkgtest')
518
519 # first run requests tests and marks them as pending
520 self.do_test(
521 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
522 {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
523 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
524 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
525 'green64 1': {'amd64': 'RUNNING'},
526 })
527 })
528
529 self.assertEqual(
530 self.amqp_requests,
531 set(['debci-series-i386:green {"triggers": ["green/2"]}',
532 'debci-series-amd64:green {"triggers": ["green/2"]}',
533 'debci-series-i386:lightgreen {"triggers": ["green/2"]}',
534 'debci-series-amd64:lightgreen {"triggers": ["green/2"]}',
535 'debci-series-i386:darkgreen {"triggers": ["green/2"]}',
536 'debci-series-amd64:darkgreen {"triggers": ["green/2"]}',
537 'debci-series-amd64:green64 {"triggers": ["green/2"]}']))
538
539 self.assertIn('green64 1 amd64', self.pending_requests)
540 self.assertNotIn('green64 1 i386', self.pending_requests)
541
542 # second run collects the results
543 self.swift.set_results({'autopkgtest-series': {
544 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
545 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/2')),
546 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/2')),
547 'series/amd64/l/lightgreen/20150101_100101@': (0, 'lightgreen 1', tr('green/2')),
548 # version in testing fails
549 'series/i386/g/green/20150101_020000@': (4, 'green 1', tr('green/1')),
550 'series/amd64/g/green/20150101_020000@': (4, 'green 1', tr('green/1')),
551 # version in unstable succeeds
552 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')),
553 'series/amd64/g/green/20150101_100201@': (0, 'green 2', tr('green/2')),
554 # only amd64 result for green64
555 'series/amd64/g/green64/20150101_100200@': (0, 'green64 1', tr('green/2')),
556 }})
557
558 out = self.do_test(
559 [],
560 {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'},
561 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
562 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
563 'green64 1': {'amd64': 'PASS'},
564 })
565 },
566 {'green': [('old-version', '1'), ('new-version', '2')]}
567 )[0]
568
569 # all tests ran, there should be no more pending ones
570 self.assertEqual(self.amqp_requests, set())
571 self.assertEqual(self.pending_requests, '')
572
573 # not expecting any failures to retrieve from swift
574 self.assertNotIn('Failure', out, out)
575
576 def test_unbuilt(self):
577 '''Unbuilt package should not trigger tests or get considered'''
578
579 self.data.add_src('green', True, {'Version': '2', 'Testsuite': 'autopkgtest'})
580 exc = self.do_test(
581 # uninstallable unstable version
582 [],
583 {'green': (False, {})},
584 {'green': [('old-version', '1'), ('new-version', '2'),
585 ('reason', 'no-binaries'),
586 ('excuses', 'green has no up-to-date binaries on any arch')
587 ]
588 })[1]
589 # autopkgtest should not be triggered for unbuilt pkg
590 self.assertEqual(exc['green']['tests'], {})
591
592 def test_rdepends_unbuilt(self):
593 '''Unbuilt reverse dependency'''
594
595 # old lightgreen fails, thus new green should be held back
596 self.swift.set_results({'autopkgtest-series': {
597 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/1.1')),
598 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/1.1')),
599 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/1.1')),
600 'series/i386/l/lightgreen/20150101_100100@': (4, 'lightgreen 1', tr('green/1.1')),
601 'series/amd64/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/1.1')),
602 'series/amd64/l/lightgreen/20150101_100100@': (4, 'lightgreen 1', tr('green/1.1')),
603 'series/i386/g/green/20150101_020000@': (0, 'green 1', tr('green/1')),
604 'series/amd64/g/green/20150101_020000@': (0, 'green 1', tr('green/1')),
605 'series/i386/g/green/20150101_100200@': (0, 'green 1.1', tr('green/1.1')),
606 'series/amd64/g/green/20150101_100201@': (0, 'green 1.1', tr('green/1.1')),
607 }})
608
609 # add unbuilt lightgreen; should run tests against the old version
610 self.data.add_src('lightgreen', True, {'Version': '2', 'Testsuite': 'autopkgtest'})
611 self.do_test(
612 [('libgreen1', {'Version': '1.1', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
613 {'green': (False, {'green 1.1': {'amd64': 'PASS', 'i386': 'PASS'},
614 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'},
615 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
616 }),
617 'lightgreen': (False, {}),
618 },
619 {'green': [('old-version', '1'), ('new-version', '1.1')],
620 'lightgreen': [('old-version', '1'), ('new-version', '2'),
621 ('excuses', 'lightgreen has no up-to-date binaries on any arch')]
622 }
623 )
624
625 self.assertEqual(self.amqp_requests, set())
626 self.assertEqual(self.pending_requests, '')
627
628 # next run should not trigger any new requests
629 self.do_test([], {'green': (False, {}), 'lightgreen': (False, {})})
630 self.assertEqual(self.amqp_requests, set())
631 self.assertEqual(self.pending_requests, '')
632
633 # now lightgreen 2 gets built, should trigger a new test run
634 self.data.remove_all(True)
635 self.do_test(
636 [('libgreen1', {'Version': '1.1', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest'),
637 ('lightgreen', {'Version': '2'}, 'autopkgtest')],
638 {})
639 self.assertEqual(self.amqp_requests,
640 set(['debci-series-amd64:lightgreen {"triggers": ["lightgreen/2"]}',
641 'debci-series-i386:lightgreen {"triggers": ["lightgreen/2"]}']))
642
643 # next run collects the results
644 self.swift.set_results({'autopkgtest-series': {
645 'series/i386/l/lightgreen/20150101_100200@': (0, 'lightgreen 2', tr('lightgreen/2')),
646 'series/amd64/l/lightgreen/20150101_102000@': (0, 'lightgreen 2', tr('lightgreen/2')),
647 }})
648 self.do_test(
649 [],
650 {'green': (True, {'green 1.1': {'amd64': 'PASS', 'i386': 'PASS'},
651 # FIXME: expecting a lightgreen test here
652 # 'lightgreen 2': {'amd64': 'PASS', 'i386': 'PASS'},
653 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
654 }),
655 'lightgreen': (True, {'lightgreen 2': {'amd64': 'PASS', 'i386': 'PASS'}}),
656 },
657 {'green': [('old-version', '1'), ('new-version', '1.1')],
658 'lightgreen': [('old-version', '1'), ('new-version', '2')],
659 }
660 )
661 self.assertEqual(self.amqp_requests, set())
662 self.assertEqual(self.pending_requests, '')
663
664 def test_rdepends_unbuilt_unstable_only(self):
665 '''Unbuilt reverse dependency which is not in testing'''
666
667 self.swift.set_results({'autopkgtest-series': {
668 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
669 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/2')),
670 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/2')),
671 'series/amd64/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/2')),
672 'series/i386/g/green/20150101_020000@': (0, 'green 1', tr('green/1')),
673 'series/amd64/g/green/20150101_020000@': (0, 'green 1', tr('green/1')),
674 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')),
675 'series/amd64/g/green/20150101_100201@': (0, 'green 2', tr('green/2')),
676 }})
677 # run britney once to pick up previous results
678 self.do_test(
679 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
680 {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}})})
681
682 # add new uninstallable brokengreen; should not run test at all
683 exc = self.do_test(
684 [('brokengreen', {'Version': '1', 'Depends': 'libgreen1, nonexisting'}, 'autopkgtest')],
685 {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'}}),
686 'brokengreen': (False, {}),
687 },
688 {'green': [('old-version', '1'), ('new-version', '2')],
689 'brokengreen': [('old-version', '-'), ('new-version', '1'),
690 ('reason', 'depends'),
691 ('excuses', 'brokengreen/amd64 unsatisfiable Depends: nonexisting')],
692 })[1]
693 # autopkgtest should not be triggered for uninstallable pkg
694 self.assertEqual(exc['brokengreen']['tests'], {})
695
696 self.assertEqual(self.amqp_requests, set())
697
698 def test_rdepends_unbuilt_new_version_result(self):
699 '''Unbuilt reverse dependency gets test result for newer version
700
701 This might happen if the autopkgtest infrastructure runs the unstable
702 source tests against the testing binaries. Even if that gets done
703 properly it might still happen that at the time of the britney run the
704 package isn't built yet, but it is once the test gets run.
705 '''
706 # old lightgreen fails, thus new green should be held back
707 self.swift.set_results({'autopkgtest-series': {
708 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/1.1')),
709 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/1.1')),
710 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/1.1')),
711 'series/i386/l/lightgreen/20150101_100100@': (4, 'lightgreen 1', tr('green/1.1')),
712 'series/amd64/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/1.1')),
713 'series/amd64/l/lightgreen/20150101_100100@': (4, 'lightgreen 1', tr('green/1.1')),
714 'series/i386/g/green/20150101_020000@': (0, 'green 1', tr('green/1')),
715 'series/amd64/g/green/20150101_020000@': (0, 'green 1', tr('green/1')),
716 'series/i386/g/green/20150101_100200@': (0, 'green 1.1', tr('green/1.1')),
717 'series/amd64/g/green/20150101_100201@': (0, 'green 1.1', tr('green/1.1')),
718 }})
719
720 # add unbuilt lightgreen; should run tests against the old version
721 self.data.add_src('lightgreen', True, {'Version': '2', 'Testsuite': 'autopkgtest'})
722 self.do_test(
723 [('libgreen1', {'Version': '1.1', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
724 {'green': (False, {'green 1.1': {'amd64': 'PASS', 'i386': 'PASS'},
725 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'},
726 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
727 }),
728 'lightgreen': (False, {}),
729 },
730 {'green': [('old-version', '1'), ('new-version', '1.1')],
731 'lightgreen': [('old-version', '1'), ('new-version', '2'),
732 ('excuses', 'lightgreen has no up-to-date binaries on any arch')]
733 }
734 )
735 self.assertEqual(self.amqp_requests, set())
736 self.assertEqual(self.pending_requests, '')
737
738 # lightgreen 2 stays unbuilt in britney, but we get a test result for it
739 self.swift.set_results({'autopkgtest-series': {
740 'series/i386/l/lightgreen/20150101_100200@': (0, 'lightgreen 2', tr('green/1.1')),
741 'series/amd64/l/lightgreen/20150101_102000@': (0, 'lightgreen 2', tr('green/1.1')),
742 }})
743 self.do_test(
744 [],
745 {'green': (True, {'green 1.1': {'amd64': 'PASS', 'i386': 'PASS'},
746 'lightgreen 2': {'amd64': 'PASS', 'i386': 'PASS'},
747 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
748 }),
749 'lightgreen': (False, {}),
750 },
751 {'green': [('old-version', '1'), ('new-version', '1.1')],
752 'lightgreen': [('old-version', '1'), ('new-version', '2'),
753 ('excuses', 'lightgreen has no up-to-date binaries on any arch')]
754 }
755 )
756 self.assertEqual(self.amqp_requests, set())
757 self.assertEqual(self.pending_requests, '')
758
759 # next run should not trigger any new requests
760 self.do_test([], {'green': (True, {}), 'lightgreen': (False, {})})
761 self.assertEqual(self.amqp_requests, set())
762 self.assertEqual(self.pending_requests, '')
763
764 def test_rdepends_unbuilt_new_version_fail(self):
765 '''Unbuilt reverse dependency gets failure for newer version'''
766
767 self.swift.set_results({'autopkgtest-series': {
768 'series/i386/l/lightgreen/20150101_100101@': (0, 'lightgreen 1', tr('lightgreen/2')),
769 }})
770
771 # add unbuilt lightgreen; should request tests against the old version
772 self.data.add_src('lightgreen', True, {'Version': '2', 'Testsuite': 'autopkgtest'})
773 self.do_test(
774 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
775 {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
776 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
777 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
778 }),
779 'lightgreen': (False, {}),
780 },
781 {'green': [('old-version', '1'), ('new-version', '2')],
782 'lightgreen': [('old-version', '1'), ('new-version', '2'),
783 ('excuses', 'lightgreen has no up-to-date binaries on any arch')]
784 }
785 )
786 self.assertEqual(len(self.amqp_requests), 6)
787
788 # we only get a result for lightgreen 2, not for the requested 1
789 self.swift.set_results({'autopkgtest-series': {
790 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
791 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/2')),
792 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 0.5', tr('green/2')),
793 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 0.5', tr('green/2')),
794 'series/i386/l/lightgreen/20150101_100200@': (4, 'lightgreen 2', tr('green/2')),
795 'series/amd64/l/lightgreen/20150101_100200@': (4, 'lightgreen 2', tr('green/2')),
796 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')),
797 'series/amd64/g/green/20150101_100201@': (0, 'green 2', tr('green/2')),
798 }})
799 self.do_test(
800 [],
801 {'green': (False, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'},
802 'lightgreen 2': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'},
803 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
804 }),
805 'lightgreen': (False, {}),
806 },
807 {'green': [('old-version', '1'), ('new-version', '2')],
808 'lightgreen': [('old-version', '1'), ('new-version', '2'),
809 ('excuses', 'lightgreen has no up-to-date binaries on any arch')]
810 }
811 )
812 self.assertEqual(self.amqp_requests, set())
813 self.assertEqual(self.pending_requests, '')
814
815 # next run should not trigger any new requests
816 self.do_test([], {'green': (False, {}), 'lightgreen': (False, {})})
817 self.assertEqual(self.pending_requests, '')
818 self.assertEqual(self.amqp_requests, set())
819
820 def test_package_pair_running(self):
821 '''Two packages in unstable that need to go in together (running)'''
822
823 # green has passed before
824 self.swift.set_results({'autopkgtest-series': {
825 'series/i386/g/green/20150101_100000@': (0, 'green 1'),
826 }})
827
828 self.do_test(
829 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest'),
830 ('lightgreen', {'Version': '2', 'Depends': 'libgreen1 (>= 2)'}, 'autopkgtest')],
831 {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
832 'lightgreen 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
833 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
834 }),
835 'lightgreen': (False, {'lightgreen 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}}),
836 },
837 {'green': [('old-version', '1'), ('new-version', '2')],
838 'lightgreen': [('old-version', '1'), ('new-version', '2')],
839 })
840
841 # we expect the package's and its reverse dependencies' tests to get
842 # triggered; lightgreen should be triggered for each trigger
843 self.assertEqual(
844 self.amqp_requests,
845 set(['debci-series-i386:green {"triggers": ["green/2"]}',
846 'debci-series-amd64:green {"triggers": ["green/2"]}',
847 'debci-series-i386:lightgreen {"triggers": ["green/2"]}',
848 'debci-series-amd64:lightgreen {"triggers": ["green/2"]}',
849 'debci-series-i386:lightgreen {"triggers": ["lightgreen/2"]}',
850 'debci-series-amd64:lightgreen {"triggers": ["lightgreen/2"]}',
851 'debci-series-i386:darkgreen {"triggers": ["green/2"]}',
852 'debci-series-amd64:darkgreen {"triggers": ["green/2"]}']))
853
854 # ... and that they get recorded as pending
855 expected_pending = '''darkgreen 1 amd64 green 2
856darkgreen 1 i386 green 2
857green 2 amd64 green 2
858green 2 i386 green 2
859lightgreen 2 amd64 green 2
860lightgreen 2 amd64 lightgreen 2
861lightgreen 2 i386 green 2
862lightgreen 2 i386 lightgreen 2
863'''
864 self.assertEqual(self.pending_requests, expected_pending)
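
The AMQP requests asserted above follow a '<queue>:<package> <json-parameters>' shape, with the queue named debci-<series>-<arch>. A minimal sketch of composing one such string, assuming exactly this layout (amqp_request is an illustrative name, not britney's own):

    import json

    def amqp_request(series, arch, package, trigger):
        # the queue name encodes series and architecture; the payload is
        # the test package followed by a JSON parameter blob
        params = json.dumps({'triggers': [trigger]})
        return 'debci-%s-%s:%s %s' % (series, arch, package, params)

    assert amqp_request('series', 'i386', 'green', 'green/2') == \
        'debci-series-i386:green {"triggers": ["green/2"]}'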
865
866 def test_binary_from_new_source_package_running(self):
867 '''building an existing binary for a new source package (running)'''
868
869 self.do_test(
870 [('libgreen1', {'Version': '2', 'Source': 'newgreen', 'Depends': 'libc6'}, 'autopkgtest')],
871 {'newgreen': (True, {'newgreen 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
872 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
873 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
874 }),
875 },
876 {'newgreen': [('old-version', '-'), ('new-version', '2')]})
877
878 self.assertEqual(len(self.amqp_requests), 8)
879 expected_pending = '''darkgreen 1 amd64 newgreen 2
880darkgreen 1 i386 newgreen 2
881green 1 amd64 newgreen 2
882green 1 i386 newgreen 2
883lightgreen 1 amd64 newgreen 2
884lightgreen 1 i386 newgreen 2
885newgreen 2 amd64 newgreen 2
886newgreen 2 i386 newgreen 2
887'''
888 self.assertEqual(self.pending_requests, expected_pending)
889
890 def test_binary_from_new_source_package_pass(self):
891 '''building an existing binary for a new source package (pass)'''
892
893 self.swift.set_results({'autopkgtest-series': {
894 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('newgreen/2')),
895 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('newgreen/2')),
896 'series/i386/g/green/20150101_100000@': (0, 'green 1', tr('newgreen/2')),
897 'series/amd64/g/green/20150101_100000@': (0, 'green 1', tr('newgreen/2')),
898 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('newgreen/2')),
899 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('newgreen/2')),
900 'series/i386/n/newgreen/20150101_100200@': (0, 'newgreen 2', tr('newgreen/2')),
901 'series/amd64/n/newgreen/20150101_100201@': (0, 'newgreen 2', tr('newgreen/2')),
902 }})
903
904 self.do_test(
905 [('libgreen1', {'Version': '2', 'Source': 'newgreen', 'Depends': 'libc6'}, 'autopkgtest')],
906 {'newgreen': (True, {'newgreen 2': {'amd64': 'PASS', 'i386': 'PASS'},
907 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
908 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
909 'green 1': {'amd64': 'PASS', 'i386': 'PASS'},
910 }),
911 },
912 {'newgreen': [('old-version', '-'), ('new-version', '2')]})
913
914 self.assertEqual(self.amqp_requests, set())
915 self.assertEqual(self.pending_requests, '')
916
917 def test_result_from_older_version(self):
918 '''test result from older version than the uploaded one'''
919
920 self.swift.set_results({'autopkgtest-series': {
921 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('darkgreen/2')),
922 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('darkgreen/2')),
923 }})
924
925 self.do_test(
926 [('darkgreen', {'Version': '2', 'Depends': 'libc6 (>= 0.9), libgreen1'}, 'autopkgtest')],
927 {'darkgreen': (False, {'darkgreen 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'}})})
928
929 self.assertEqual(
930 self.amqp_requests,
931 set(['debci-series-i386:darkgreen {"triggers": ["darkgreen/2"]}',
932 'debci-series-amd64:darkgreen {"triggers": ["darkgreen/2"]}']))
933 self.assertEqual(self.pending_requests,
934 'darkgreen 2 amd64 darkgreen 2\ndarkgreen 2 i386 darkgreen 2\n')
935
936 # second run gets the results for darkgreen 2
937 self.swift.set_results({'autopkgtest-series': {
938 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('darkgreen/1')),
939 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('darkgreen/1')),
940 'series/i386/d/darkgreen/20150101_100010@': (0, 'darkgreen 2', tr('darkgreen/2')),
941 'series/amd64/d/darkgreen/20150101_100010@': (0, 'darkgreen 2', tr('darkgreen/2')),
942 }})
943 self.do_test(
944 [],
945 {'darkgreen': (True, {'darkgreen 2': {'amd64': 'PASS', 'i386': 'PASS'}})})
946 self.assertEqual(self.amqp_requests, set())
947 self.assertEqual(self.pending_requests, '')
948
949 # next run sees a newer darkgreen, should re-run tests
950 self.data.remove_all(True)
951 self.do_test(
952 [('darkgreen', {'Version': '3', 'Depends': 'libc6 (>= 0.9), libgreen1'}, 'autopkgtest')],
953 {'darkgreen': (False, {'darkgreen 3': {'amd64': 'RUNNING', 'i386': 'RUNNING'}})})
954 self.assertEqual(
955 self.amqp_requests,
956 set(['debci-series-i386:darkgreen {"triggers": ["darkgreen/3"]}',
957 'debci-series-amd64:darkgreen {"triggers": ["darkgreen/3"]}']))
958 self.assertEqual(self.pending_requests,
959 'darkgreen 3 amd64 darkgreen 3\ndarkgreen 3 i386 darkgreen 3\n')
960
961 def test_old_result_from_rdep_version(self):
962 '''re-runs reverse dependency test on new versions'''
963
964 self.swift.set_results({'autopkgtest-series': {
965 'series/i386/g/green/20150101_100000@': (0, 'green 1', tr('green/1')),
966 'series/amd64/g/green/20150101_100000@': (0, 'green 1', tr('green/1')),
967 'series/i386/g/green/20150101_100010@': (0, 'green 2', tr('green/2')),
968 'series/amd64/g/green/20150101_100010@': (0, 'green 2', tr('green/2')),
969 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
970 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
971 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/2')),
972 'series/amd64/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/2')),
973 }})
974
975 self.do_test(
976 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
977 {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'},
978 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
979 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
980 }),
981 })
982
983 self.assertEqual(self.amqp_requests, set())
984 self.assertEqual(self.pending_requests, '')
985 self.data.remove_all(True)
986
987 # second run: new version re-triggers all tests
988 self.do_test(
989 [('libgreen1', {'Version': '3', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
990 {'green': (False, {'green 3': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
991 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
992 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
993 }),
994 })
995
996 self.assertEqual(len(self.amqp_requests), 6)
997
998 expected_pending = '''darkgreen 1 amd64 green 3
999darkgreen 1 i386 green 3
1000green 3 amd64 green 3
1001green 3 i386 green 3
1002lightgreen 1 amd64 green 3
1003lightgreen 1 i386 green 3
1004'''
1005 self.assertEqual(self.pending_requests, expected_pending)
1006
1007 # third run gets the results for green and lightgreen, darkgreen is
1008 # still running
1009 self.swift.set_results({'autopkgtest-series': {
1010 'series/i386/g/green/20150101_100020@': (0, 'green 3', tr('green/3')),
1011 'series/amd64/g/green/20150101_100020@': (0, 'green 3', tr('green/3')),
1012 'series/i386/l/lightgreen/20150101_100010@': (0, 'lightgreen 1', tr('green/3')),
1013 'series/amd64/l/lightgreen/20150101_100010@': (0, 'lightgreen 1', tr('green/3')),
1014 }})
1015 self.do_test(
1016 [],
1017 {'green': (False, {'green 3': {'amd64': 'PASS', 'i386': 'PASS'},
1018 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
1019 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1020 }),
1021 })
1022 self.assertEqual(self.amqp_requests, set())
1023 self.assertEqual(self.pending_requests,
1024 'darkgreen 1 amd64 green 3\ndarkgreen 1 i386 green 3\n')
1025
1026 # fourth run finally gets the new darkgreen result
1027 self.swift.set_results({'autopkgtest-series': {
1028 'series/i386/d/darkgreen/20150101_100010@': (0, 'darkgreen 1', tr('green/3')),
1029 'series/amd64/d/darkgreen/20150101_100010@': (0, 'darkgreen 1', tr('green/3')),
1030 }})
1031 self.do_test(
1032 [],
1033 {'green': (True, {'green 3': {'amd64': 'PASS', 'i386': 'PASS'},
1034 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
1035 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
1036 }),
1037 })
1038 self.assertEqual(self.amqp_requests, set())
1039 self.assertEqual(self.pending_requests, '')
1040
1041 def test_tmpfail(self):
1042 '''tmpfail results'''
1043
1044 # one tmpfail result without testpkg-version, should be ignored
1045 self.swift.set_results({'autopkgtest-series': {
1046 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('lightgreen/2')),
1047 'series/i386/l/lightgreen/20150101_100101@': (16, None),
1048 'series/amd64/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('lightgreen/2')),
1049 'series/amd64/l/lightgreen/20150101_100101@': (16, 'lightgreen 2', tr('lightgreen/2')),
1050 }})
1051
1052 self.do_test(
1053 [('lightgreen', {'Version': '2', 'Depends': 'libgreen1 (>= 1)'}, 'autopkgtest')],
1054 {'lightgreen': (False, {'lightgreen 2': {'amd64': 'REGRESSION', 'i386': 'RUNNING'}})})
1055 self.assertEqual(self.pending_requests, 'lightgreen 2 i386 lightgreen 2\n')
1056
1057 # one more tmpfail result; it should not confuse britney with a None version
1058 self.swift.set_results({'autopkgtest-series': {
1059 'series/i386/l/lightgreen/20150101_100201@': (16, None),
1060 }})
1061 self.do_test(
1062 [],
1063 {'lightgreen': (False, {'lightgreen 2': {'amd64': 'REGRESSION', 'i386': 'RUNNING'}})})
1064 with open(os.path.join(self.data.path, 'data/series-proposed/autopkgtest/results.cache')) as f:
1065 contents = f.read()
1066 self.assertNotIn('null', contents)
1067 self.assertNotIn('None', contents)
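
What these assertions pin down: a tmpfail result (exit code 16) that recorded a testpkg-version counts as a failure like any other, while one without a version is discarded and the request stays pending. A minimal sketch of that rule, using illustrative names and only the exit codes exercised in these tests (0 pass, 4 fail, 16 tmpfail):

    def classify(exit_code, testpkg_version):
        # no recorded testpkg-version: drop the result entirely, so the
        # i386 case above keeps showing RUNNING
        if testpkg_version is None:
            return None
        # otherwise 0 is a pass and anything else (4, 16, ...) a failure,
        # shown as REGRESSION when the test has passed before
        return 'PASS' if exit_code == 0 else 'FAIL'

    assert classify(16, None) is None
    assert classify(16, 'lightgreen 2') == 'FAIL'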
1068
1069 def test_rerun_failure(self):
1070 '''manually re-running failed tests gets picked up'''
1071
1072 # first run fails
1073 self.swift.set_results({'autopkgtest-series': {
1074 'series/i386/g/green/20150101_100000@': (0, 'green 2', tr('green/2')),
1075 'series/i386/g/green/20150101_100101@': (4, 'green 2', tr('green/2')),
1076 'series/amd64/g/green/20150101_100000@': (0, 'green 2', tr('green/2')),
1077 'series/amd64/g/green/20150101_100101@': (4, 'green 2', tr('green/2')),
1078 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/2')),
1079 'series/i386/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')),
1080 'series/amd64/l/lightgreen/20150101_100000@': (0, 'lightgreen 1', tr('green/2')),
1081 'series/amd64/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')),
1082 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
1083 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/2')),
1084 }})
1085
1086 self.do_test(
1087 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
1088 {'green': (False, {'green 2': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'},
1089 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'},
1090 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
1091 }),
1092 })
1093 self.assertEqual(self.pending_requests, '')
1094
1095 # re-running test manually succeeded (note: darkgreen result should be
1096 # cached already)
1097 self.swift.set_results({'autopkgtest-series': {
1098 'series/i386/g/green/20150101_100201@': (0, 'green 2', tr('green/2')),
1099 'series/amd64/g/green/20150101_100201@': (0, 'green 2', tr('green/2')),
1100 'series/i386/l/lightgreen/20150101_100201@': (0, 'lightgreen 1', tr('green/2')),
1101 'series/amd64/l/lightgreen/20150101_100201@': (0, 'lightgreen 1', tr('green/2')),
1102 }})
1103 self.do_test(
1104 [],
1105 {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'},
1106 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
1107 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
1108 }),
1109 })
1110 self.assertEqual(self.pending_requests, '')
1111
1112 def test_new_runs_dont_clobber_pass(self):
1113 '''passing once is sufficient
1114
1115 If a test succeeded once for a particular version and trigger,
1116 subsequent failures (which might be triggered by other unstable
1117 uploads) should not invalidate the PASS, as that new failure is the
1118 fault of the new upload, not the original one.
1119 '''
1120 # new libc6 works fine with green
1121 self.swift.set_results({'autopkgtest-series': {
1122 'series/i386/g/green/20150101_100000@': (0, 'green 1', tr('libc6/2')),
1123 'series/amd64/g/green/20150101_100000@': (0, 'green 1', tr('libc6/2')),
1124 }})
1125
1126 self.do_test(
1127 [('libc6', {'Version': '2'}, None)],
1128 {'libc6': (True, {'green 1': {'amd64': 'PASS', 'i386': 'PASS'}})})
1129 self.assertEqual(self.pending_requests, '')
1130
1131 # new green fails; that's not libc6's fault though, so it should stay
1132 # valid
1133 self.swift.set_results({'autopkgtest-series': {
1134 'series/i386/g/green/20150101_100100@': (4, 'green 2', tr('green/2')),
1135 'series/amd64/g/green/20150101_100100@': (4, 'green 2', tr('green/2')),
1136 }})
1137 self.do_test(
1138 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
1139 {'green': (False, {'green 2': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'}}),
1140 'libc6': (True, {'green 1': {'amd64': 'PASS', 'i386': 'PASS'}}),
1141 })
1142 self.assertEqual(
1143 self.amqp_requests,
1144 set(['debci-series-i386:darkgreen {"triggers": ["green/2"]}',
1145 'debci-series-amd64:darkgreen {"triggers": ["green/2"]}',
1146 'debci-series-i386:lightgreen {"triggers": ["green/2"]}',
1147 'debci-series-amd64:lightgreen {"triggers": ["green/2"]}',
1148 ]))
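
The invariant this test establishes: results are keyed by (package, version, trigger), and a PASS for one key is never clobbered by a later failure under a different key. A minimal sketch with illustrative names:

    results = {}

    def record(package, version, trigger, passed):
        key = (package, version, trigger)
        # a pass is sticky for its own key; a later failure only affects
        # its own (version, trigger) combination
        results[key] = results.get(key, False) or passed

    record('green', '1', 'libc6/2', True)   # old green passes for libc6/2
    record('green', '2', 'green/2', False)  # new green fails for its own upload
    assert results[('green', '1', 'libc6/2')] is True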
1149
1150 def test_remove_from_unstable(self):
1151 '''broken package gets removed from unstable'''
1152
1153 self.swift.set_results({'autopkgtest-series': {
1154 'series/i386/g/green/20150101_100101@': (0, 'green 1', tr('green/1')),
1155 'series/amd64/g/green/20150101_100101@': (0, 'green 1', tr('green/1')),
1156 'series/i386/g/green/20150101_100201@': (0, 'green 2', tr('green/2')),
1157 'series/amd64/g/green/20150101_100201@': (0, 'green 2', tr('green/2')),
1158 'series/i386/l/lightgreen/20150101_100101@': (0, 'lightgreen 1', tr('green/2')),
1159 'series/amd64/l/lightgreen/20150101_100101@': (0, 'lightgreen 1', tr('green/2')),
1160 'series/i386/l/lightgreen/20150101_100201@': (4, 'lightgreen 2', tr('green/2 lightgreen/2')),
1161 'series/amd64/l/lightgreen/20150101_100201@': (4, 'lightgreen 2', tr('green/2 lightgreen/2')),
1162 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
1163 'series/amd64/d/darkgreen/20150101_100001@': (0, 'darkgreen 1', tr('green/2')),
1164 }})
1165
1166 self.do_test(
1167 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest'),
1168 ('lightgreen', {'Version': '2', 'Depends': 'libgreen1 (>= 2)'}, 'autopkgtest')],
1169 {'green': (False, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'},
1170 'lightgreen 2': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'},
1171 }),
1172 })
1173 self.assertEqual(self.pending_requests, '')
1174 self.assertEqual(self.amqp_requests, set())
1175
1176 # remove new lightgreen by resetting archive indexes, and re-adding
1177 # green
1178 self.data.remove_all(True)
1179
1180 self.swift.set_results({'autopkgtest-series': {
1181 # add new result for lightgreen 1
1182 'series/i386/l/lightgreen/20150101_100301@': (0, 'lightgreen 1', tr('green/2')),
1183 'series/amd64/l/lightgreen/20150101_100301@': (0, 'lightgreen 1', tr('green/2')),
1184 }})
1185
1186 # next run should re-trigger lightgreen 1 to test against green/2
1187 exc = self.do_test(
1188 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
1189 {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'},
1190 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
1191 }),
1192 })[1]
1193 self.assertNotIn('lightgreen 2', exc['green']['tests']['autopkgtest'])
1194
1195 # should not trigger new requests
1196 self.assertEqual(self.pending_requests, '')
1197 self.assertEqual(self.amqp_requests, set())
1198
1199 # but the next run should not trigger anything new
1200 self.do_test(
1201 [],
1202 {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'},
1203 'lightgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
1204 }),
1205 })
1206 self.assertEqual(self.pending_requests, '')
1207 self.assertEqual(self.amqp_requests, set())
1208
1209 def test_multiarch_dep(self):
1210 '''multi-arch dependency'''
1211
1212 # lightgreen has passed before
1213 self.swift.set_results({'autopkgtest-series': {
1214 'series/i386/l/lightgreen/20150101_100000@': (0, 'lightgreen 1'),
1215 }})
1216
1217 self.data.add('rainbow', False, {'Depends': 'lightgreen:any'},
1218 testsuite='autopkgtest')
1219
1220 self.do_test(
1221 [('lightgreen', {'Version': '2'}, 'autopkgtest')],
1222 {'lightgreen': (False, {'lightgreen 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1223 'rainbow 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1224 }),
1225 },
1226 {'lightgreen': [('old-version', '1'), ('new-version', '2')]}
1227 )
1228
1229 def test_disable_adt(self):
1230 '''Run without autopkgtest requests'''
1231
1232 # Disable the AMQP server config to ensure we don't touch the servers
1233 # while ADT is disabled
1234 for line in fileinput.input(self.britney_conf, inplace=True):
1235 if line.startswith('ADT_ENABLE'):
1236 print('ADT_ENABLE = no')
1237 elif not line.startswith('ADT_AMQP') and not line.startswith('ADT_SWIFT_URL'):
1238 sys.stdout.write(line)
1239
1240 exc = self.do_test(
1241 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
1242 {'green': (True, {})},
1243 {'green': [('old-version', '1'), ('new-version', '2')]})[1]
1244 self.assertNotIn('autopkgtest', exc['green']['tests'])
1245
1246 self.assertEqual(self.amqp_requests, set())
1247 self.assertEqual(self.pending_requests, None)
1248
1249 ################################################################
1250 # Tests for hint processing
1251 ################################################################
1252
1253 def test_hint_force_badtest(self):
1254 '''force-badtest hint'''
1255
1256 self.swift.set_results({'autopkgtest-series': {
1257 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
1258 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
1259 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/1')),
1260 'series/i386/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')),
1261 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/1')),
1262 'series/amd64/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')),
1263 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')),
1264 'series/amd64/g/green/20150101_100200@': (0, 'green 2', tr('green/2')),
1265 }})
1266
1267 self.create_hint('pitti', 'force-badtest lightgreen/1')
1268
1269 self.do_test(
1270 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
1271 {'green': (True, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'},
1272 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'},
1273 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
1274 }),
1275 },
1276 {'green': [('old-version', '1'), ('new-version', '2'),
1277 ('forced-reason', 'badtest lightgreen 1'),
1278 ('excuses', 'Should wait for lightgreen 1 test, but forced by pitti')]
1279 })
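
Taken together with the next test, the matching rule is: a force-badtest hint applies only when its package/version argument names the failing test exactly. A minimal sketch of that check, with an illustrative helper name:

    def badtest_hint_applies(hint, package, version):
        kind, _, target = hint.partition(' ')
        return (kind == 'force-badtest'
                and target == '%s/%s' % (package, version))

    assert badtest_hint_applies('force-badtest lightgreen/1', 'lightgreen', '1')
    assert not badtest_hint_applies('force-badtest lightgreen/0.1', 'lightgreen', '1')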
1280
1281 def test_hint_force_badtest_different_version(self):
1282 '''force-badtest hint with non-matching version'''
1283
1284 self.swift.set_results({'autopkgtest-series': {
1285 'series/i386/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
1286 'series/amd64/d/darkgreen/20150101_100000@': (0, 'darkgreen 1', tr('green/2')),
1287 'series/i386/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/1')),
1288 'series/i386/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')),
1289 'series/amd64/l/lightgreen/20150101_100100@': (0, 'lightgreen 1', tr('green/1')),
1290 'series/amd64/l/lightgreen/20150101_100101@': (4, 'lightgreen 1', tr('green/2')),
1291 'series/i386/g/green/20150101_100200@': (0, 'green 2', tr('green/2')),
1292 'series/amd64/g/green/20150101_100200@': (0, 'green 2', tr('green/2')),
1293 }})
1294
1295 self.create_hint('pitti', 'force-badtest lightgreen/0.1')
1296
1297 exc = self.do_test(
1298 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
1299 {'green': (False, {'green 2': {'amd64': 'PASS', 'i386': 'PASS'},
1300 'lightgreen 1': {'amd64': 'REGRESSION', 'i386': 'REGRESSION'},
1301 'darkgreen 1': {'amd64': 'PASS', 'i386': 'PASS'},
1302 }),
1303 },
1304 {'green': [('reason', 'autopkgtest')]}
1305 )[1]
1306 self.assertNotIn('forced-reason', exc['green'])
1307
1308 def test_hint_force_skiptest(self):
1309 '''force-skiptest hint'''
1310
1311 self.create_hint('pitti', 'force-skiptest green/2')
1312
1313 # green has passed before
1314 self.swift.set_results({'autopkgtest-series': {
1315 'series/i386/g/green/20150101_100000@': (0, 'green 1'),
1316 }})
1317
1318 self.do_test(
1319 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
1320 {'green': (True, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1321 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1322 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1323 }),
1324 },
1325 {'green': [('old-version', '1'), ('new-version', '2'),
1326 ('forced-reason', 'skiptest'),
1327 ('excuses', 'Should wait for tests relating to green 2, but forced by pitti')]
1328 })
1329
1330 def test_hint_force_skiptest_different_version(self):
1331 '''force-skiptest hint with non-matching version'''
1332
1333 # green has passed before
1334 self.swift.set_results({'autopkgtest-series': {
1335 'series/i386/g/green/20150101_100000@': (0, 'green 1'),
1336 }})
1337
1338 self.create_hint('pitti', 'force-skiptest green/1')
1339 exc = self.do_test(
1340 [('libgreen1', {'Version': '2', 'Source': 'green', 'Depends': 'libc6'}, 'autopkgtest')],
1341 {'green': (False, {'green 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1342 'lightgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1343 'darkgreen 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1344 }),
1345 },
1346 {'green': [('reason', 'autopkgtest')]}
1347 )[1]
1348 self.assertNotIn('forced-reason', exc['green'])
1349
1350 ################################################################
1351 # Kernel related tests
1352 ################################################################
1353
1354 def test_detect_dkms_autodep8(self):
1355 '''DKMS packages are autopkgtested (via autodep8)'''
1356
1357 self.data.add('dkms', False, {})
1358 self.data.add('fancy-dkms', False, {'Source': 'fancy', 'Depends': 'dkms (>= 1)'})
1359
1360 self.swift.set_results({'autopkgtest-series': {
1361 'series/i386/f/fancy/20150101_100101@': (0, 'fancy 0.1')
1362 }})
1363
1364 self.do_test(
1365 [('dkms', {'Version': '2'}, None)],
1366 {'dkms': (False, {'fancy 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}})},
1367 {'dkms': [('old-version', '1'), ('new-version', '2')]})
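
The detection this test relies on: a binary that depends on dkms is treated as having an autodep8-generated test even without a Testsuite field. A minimal sketch of such a check, with an illustrative name, assuming an unparsed Depends string:

    def has_autodep8_dkms_test(fields):
        depends = fields.get('Depends', '')
        # any dependency whose package name is 'dkms' marks the binary
        # as autodep8-testable
        return any(dep.strip().split()[0] == 'dkms'
                   for dep in depends.split(',') if dep.strip())

    assert has_autodep8_dkms_test({'Depends': 'dkms (>= 1)'})
    assert not has_autodep8_dkms_test({'Depends': 'libc6'})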
1368
1369 def test_kernel_triggers_dkms(self):
1370 '''DKMS packages get triggered by kernel uploads'''
1371
1372 self.data.add('dkms', False, {})
1373 self.data.add('fancy-dkms', False, {'Source': 'fancy', 'Depends': 'dkms (>= 1)'})
1374
1375 self.do_test(
1376 [('linux-image-generic', {'Source': 'linux-meta'}, None),
1377 ('linux-image-grumpy-generic', {'Source': 'linux-meta-lts-grumpy'}, None),
1378 ('linux-image-64only', {'Source': 'linux-meta-64only', 'Architecture': 'amd64'}, None),
1379 ],
1380 {'linux-meta': (True, {'fancy 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}}),
1381 'linux-meta-lts-grumpy': (True, {'fancy 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}}),
1382 'linux-meta-64only': (True, {'fancy 1': {'amd64': 'RUNNING'}}),
1383 })
1384
1385 # one separate test should be triggered for each kernel
1386 self.assertEqual(
1387 self.amqp_requests,
1388 set(['debci-series-i386:fancy {"triggers": ["linux-meta/1"]}',
1389 'debci-series-amd64:fancy {"triggers": ["linux-meta/1"]}',
1390 'debci-series-i386:fancy {"triggers": ["linux-meta-lts-grumpy/1"]}',
1391 'debci-series-amd64:fancy {"triggers": ["linux-meta-lts-grumpy/1"]}',
1392 'debci-series-amd64:fancy {"triggers": ["linux-meta-64only/1"]}']))
1393
1394 # ... and that they get recorded as pending
1395 expected_pending = '''fancy 1 amd64 linux-meta 1
1396fancy 1 amd64 linux-meta-64only 1
1397fancy 1 amd64 linux-meta-lts-grumpy 1
1398fancy 1 i386 linux-meta 1
1399fancy 1 i386 linux-meta-lts-grumpy 1
1400'''
1401 self.assertEqual(self.pending_requests, expected_pending)
1402
1403 def test_dkms_results_per_kernel(self):
1404 '''DKMS results get mapped to the triggering kernel version'''
1405
1406 self.data.add('dkms', False, {})
1407 self.data.add('fancy-dkms', False, {'Source': 'fancy', 'Depends': 'dkms (>= 1)'})
1408
1409 # works against linux-meta and -64only, fails against grumpy i386, no
1410 # result yet for grumpy amd64
1411 self.swift.set_results({'autopkgtest-series': {
1412 'series/amd64/f/fancy/20150101_100301@': (0, 'fancy 1', tr('fancy/1')),
1413 'series/i386/f/fancy/20150101_100101@': (0, 'fancy 1', tr('linux-meta/1')),
1414 'series/amd64/f/fancy/20150101_100101@': (0, 'fancy 1', tr('linux-meta/1')),
1415 'series/amd64/f/fancy/20150101_100201@': (0, 'fancy 1', tr('linux-meta-64only/1')),
1416 'series/i386/f/fancy/20150101_100301@': (4, 'fancy 1', tr('linux-meta-lts-grumpy/1')),
1417 }})
1418
1419 self.do_test(
1420 [('linux-image-generic', {'Source': 'linux-meta'}, None),
1421 ('linux-image-grumpy-generic', {'Source': 'linux-meta-lts-grumpy'}, None),
1422 ('linux-image-64only', {'Source': 'linux-meta-64only', 'Architecture': 'amd64'}, None),
1423 ],
1424 {'linux-meta': (True, {'fancy 1': {'amd64': 'PASS', 'i386': 'PASS'}}),
1425 'linux-meta-lts-grumpy': (False, {'fancy 1': {'amd64': 'RUNNING', 'i386': 'ALWAYSFAIL'}}),
1426 'linux-meta-64only': (True, {'fancy 1': {'amd64': 'PASS'}}),
1427 })
1428
1429 self.assertEqual(self.pending_requests, 'fancy 1 amd64 linux-meta-lts-grumpy 1\n')
1430
1431 def test_dkms_results_per_kernel_old_results(self):
1432 '''DKMS results get mapped to the triggering kernel version, old results'''
1433
1434 self.data.add('dkms', False, {})
1435 self.data.add('fancy-dkms', False, {'Source': 'fancy', 'Depends': 'dkms (>= 1)'})
1436
1437 # works against linux-meta and -64only, fails against grumpy i386, no
1438 # result yet for grumpy amd64
1439 self.swift.set_results({'autopkgtest-series': {
1440 # old results without trigger info
1441 'series/i386/f/fancy/20140101_100101@': (0, 'fancy 1', {}),
1442 'series/amd64/f/fancy/20140101_100101@': (8, 'fancy 1', {}),
1443 # current results with triggers
1444 'series/i386/f/fancy/20150101_100101@': (0, 'fancy 1', tr('linux-meta/1')),
1445 'series/amd64/f/fancy/20150101_100101@': (0, 'fancy 1', tr('linux-meta/1')),
1446 'series/amd64/f/fancy/20150101_100201@': (0, 'fancy 1', tr('linux-meta-64only/1')),
1447 'series/i386/f/fancy/20150101_100301@': (4, 'fancy 1', tr('linux-meta-lts-grumpy/1')),
1448 }})
1449
1450 self.do_test(
1451 [('linux-image-generic', {'Source': 'linux-meta'}, None),
1452 ('linux-image-grumpy-generic', {'Source': 'linux-meta-lts-grumpy'}, None),
1453 ('linux-image-64only', {'Source': 'linux-meta-64only', 'Architecture': 'amd64'}, None),
1454 ],
1455 {'linux-meta': (True, {'fancy 1': {'amd64': 'PASS', 'i386': 'PASS'}}),
1456 # we don't have an explicit result for amd64, so the old one counts
1457 'linux-meta-lts-grumpy': (True, {'fancy 1': {'amd64': 'ALWAYSFAIL', 'i386': 'ALWAYSFAIL'}}),
1458 'linux-meta-64only': (True, {'fancy 1': {'amd64': 'PASS'}}),
1459 })
1460
1461 self.assertEqual(self.pending_requests, '')
1462
1463 def test_kernel_triggered_tests(self):
1464 '''linux, lxc, glibc, systemd tests get triggered by linux-meta* uploads'''
1465
1466 self.data.remove_all(False)
1467 self.data.add('libc6-dev', False, {'Source': 'glibc', 'Depends': 'linux-libc-dev'},
1468 testsuite='autopkgtest')
1469 self.data.add('lxc', False, {'Testsuite-Triggers': 'linux-generic'},
1470 testsuite='autopkgtest')
1471 self.data.add('systemd', False, {'Testsuite-Triggers': 'linux-generic'},
1472 testsuite='autopkgtest')
1473 self.data.add('linux-image-1', False, {'Source': 'linux'}, testsuite='autopkgtest')
1474 self.data.add('linux-libc-dev', False, {'Source': 'linux'}, testsuite='autopkgtest')
1475 self.data.add('linux-image', False, {'Source': 'linux-meta', 'Depends': 'linux-image-1'})
1476
1477 self.swift.set_results({'autopkgtest-series': {
1478 'series/amd64/l/lxc/20150101_100101@': (0, 'lxc 0.1')
1479 }})
1480
1481 exc = self.do_test(
1482 [('linux-image', {'Version': '2', 'Depends': 'linux-image-2', 'Source': 'linux-meta'}, None),
1483 ('linux-image-64only', {'Source': 'linux-meta-64only', 'Architecture': 'amd64'}, None),
1484 ('linux-image-2', {'Version': '2', 'Source': 'linux'}, 'autopkgtest'),
1485 ('linux-libc-dev', {'Version': '2', 'Source': 'linux'}, 'autopkgtest'),
1486 ],
1487 {'linux-meta': (False, {'lxc 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1488 'glibc 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1489 'linux 2': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1490 'systemd 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1491 }),
1492 'linux-meta-64only': (False, {'lxc 1': {'amd64': 'RUNNING'}}),
1493 'linux': (False, {}),
1494 })[1]
1495 # the kernel itself should not trigger tests; we want to trigger
1496 # everything from -meta
1497 self.assertNotIn('autopkgtest', exc['linux']['tests'])
1498
1499 def test_kernel_waits_on_meta(self):
1500 '''linux waits on linux-meta'''
1501
1502 self.data.add('dkms', False, {})
1503 self.data.add('fancy-dkms', False, {'Source': 'fancy', 'Depends': 'dkms (>= 1)'}, testsuite='autopkgtest')
1504 self.data.add('linux-image-generic', False, {'Version': '0.1', 'Source': 'linux-meta', 'Depends': 'linux-image-1'})
1505 self.data.add('linux-image-1', False, {'Source': 'linux'}, testsuite='autopkgtest')
1506 self.data.add('linux-firmware', False, {'Source': 'linux-firmware'}, testsuite='autopkgtest')
1507
1508 self.swift.set_results({'autopkgtest-series': {
1509 'series/i386/f/fancy/20150101_090000@': (0, 'fancy 0.5', tr('fancy/0.5')),
1510 'series/i386/l/linux/20150101_100000@': (0, 'linux 2', tr('linux-meta/0.2')),
1511 'series/amd64/l/linux/20150101_100000@': (0, 'linux 2', tr('linux-meta/0.2')),
1512 'series/i386/l/linux-firmware/20150101_100000@': (0, 'linux-firmware 2', tr('linux-firmware/2')),
1513 'series/amd64/l/linux-firmware/20150101_100000@': (0, 'linux-firmware 2', tr('linux-firmware/2')),
1514 }})
1515
1516 self.do_test(
1517 [('linux-image-generic', {'Version': '0.2', 'Source': 'linux-meta', 'Depends': 'linux-image-2'}, None),
1518 ('linux-image-2', {'Version': '2', 'Source': 'linux'}, 'autopkgtest'),
1519 ('linux-firmware', {'Version': '2', 'Source': 'linux-firmware'}, 'autopkgtest'),
1520 ],
1521 {'linux-meta': (False, {'fancy 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1522 'linux 2': {'amd64': 'PASS', 'i386': 'PASS'}
1523 }),
1524 # no tests, but should wait on linux-meta
1525 'linux': (False, {}),
1526 # this one does not have a -meta, so don't wait
1527 'linux-firmware': (True, {'linux-firmware 2': {'amd64': 'PASS', 'i386': 'PASS'}}),
1528 },
1529 {'linux': [('reason', 'depends'),
1530 ('excuses', 'Depends: linux linux-meta (not considered)')]
1531 }
1532 )
1533
1534 # now linux-meta is ready to go
1535 self.swift.set_results({'autopkgtest-series': {
1536 'series/i386/f/fancy/20150101_100000@': (0, 'fancy 1', tr('linux-meta/0.2')),
1537 'series/amd64/f/fancy/20150101_100000@': (0, 'fancy 1', tr('linux-meta/0.2')),
1538 }})
1539 self.do_test(
1540 [],
1541 {'linux-meta': (True, {'fancy 1': {'amd64': 'PASS', 'i386': 'PASS'},
1542 'linux 2': {'amd64': 'PASS', 'i386': 'PASS'}}),
1543 'linux': (True, {}),
1544 'linux-firmware': (True, {'linux-firmware 2': {'amd64': 'PASS', 'i386': 'PASS'}}),
1545 },
1546 {'linux': [('excuses', 'Depends: linux linux-meta')]
1547 }
1548 )
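
The holding rule exercised here: a linux source waits until its -meta counterpart is a valid candidate, while kernel-adjacent sources without a counterpart (linux-firmware) are not held. A minimal sketch of the name mapping, illustrative only:

    def meta_for(source_name, known_sources):
        # 'linux' maps to 'linux-meta', 'linux-lts-x' to 'linux-meta-lts-x';
        # sources whose counterpart does not exist do not wait
        candidate = source_name.replace('linux', 'linux-meta', 1)
        if candidate != source_name and candidate in known_sources:
            return candidate
        return None

    assert meta_for('linux', {'linux', 'linux-meta'}) == 'linux-meta'
    assert meta_for('linux-firmware', {'linux', 'linux-meta'}) is None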
1549
1550
1551 ################################################################
1552 # Tests for special-cased packages
1553 ################################################################
1554
1555 def test_gcc(self):
1556 '''gcc only triggers some key packages'''
1557
1558 self.data.add('binutils', False, {}, testsuite='autopkgtest')
1559 self.data.add('linux', False, {}, testsuite='autopkgtest')
1560 self.data.add('notme', False, {'Depends': 'libgcc1'}, testsuite='autopkgtest')
1561
1562 # binutils has passed before
1563 self.swift.set_results({'autopkgtest-series': {
1564 'series/i386/b/binutils/20150101_100000@': (0, 'binutils 1', tr('binutils/1')),
1565 }})
1566
1567 exc = self.do_test(
1568 [('libgcc1', {'Source': 'gcc-5', 'Version': '2'}, None)],
1569 {'gcc-5': (False, {'binutils 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'},
1570 'linux 1': {'amd64': 'RUNNING', 'i386': 'RUNNING'}})})[1]
1571 self.assertNotIn('notme 1', exc['gcc-5']['tests']['autopkgtest'])
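
This test and the next pin down the special case: a gcc-N upload triggers only a fixed set of key packages (binutils and linux here) rather than every reverse dependency, and gcc-snapshot triggers nothing. A minimal sketch, illustrative names only:

    GCC_KEY_PACKAGES = {'binutils', 'linux'}

    def gcc_triggers(source_name, rdeps):
        # non-gcc sources trigger their full reverse-dependency set
        if not source_name.startswith('gcc-'):
            return set(rdeps)
        # gcc-snapshot is not the default compiler: trigger nothing
        if source_name == 'gcc-snapshot':
            return set()
        return set(rdeps) & GCC_KEY_PACKAGES

    assert gcc_triggers('gcc-5', {'binutils', 'linux', 'notme'}) == {'binutils', 'linux'}
    assert gcc_triggers('gcc-snapshot', {'binutils', 'notme'}) == set()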
1572
1573 def test_alternative_gcc(self):
1574 '''alternative gcc does not trigger anything'''
1575
1576 self.data.add('binutils', False, {}, testsuite='autopkgtest')
1577 self.data.add('notme', False, {'Depends': 'libgcc1'}, testsuite='autopkgtest')
1578
1579 exc = self.do_test(
1580 [('libgcc1', {'Source': 'gcc-snapshot', 'Version': '2'}, None)],
1581 {'gcc-snapshot': (True, {})})[1]
1582 self.assertNotIn('autopkgtest', exc['gcc-snapshot']['tests'])
1583
1584
1585if __name__ == '__main__':
1586 unittest.main()
1587
=== added file 'tests/test_boottest.py'
--- tests/test_boottest.py 1970-01-01 00:00:00 +0000
+++ tests/test_boottest.py 2015-11-17 11:05:43 +0000
@@ -0,0 +1,445 @@
1#!/usr/bin/python3
2# (C) 2014 Canonical Ltd.
3#
4# This program is free software; you can redistribute it and/or modify
5# it under the terms of the GNU General Public License as published by
6# the Free Software Foundation; either version 2 of the License, or
7# (at your option) any later version.
8
9import mock
10import os
11import shutil
12import sys
13import tempfile
14import fileinput
15import unittest
16
17
18PROJECT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
19sys.path.insert(0, PROJECT_DIR)
20
21import boottest
22from tests import TestBase
23
24
25def create_manifest(manifest_dir, lines):
26 """Helper function for writing touch image manifests."""
27 os.makedirs(manifest_dir)
28 with open(os.path.join(manifest_dir, 'manifest'), 'w') as fd:
29 fd.write('\n'.join(lines))
30
31
32class FakeResponse(object):
33
34 def __init__(self, code=404, content=''):
35 self.code = code
36 self.content = content
37
38 def read(self):
39 return self.content
40
41
42class TestTouchManifest(unittest.TestCase):
43
44 def setUp(self):
45 super(TestTouchManifest, self).setUp()
46 self.path = tempfile.mkdtemp(prefix='boottest')
47 os.chdir(self.path)
48 self.imagesdir = os.path.join(self.path, 'boottest/images')
49 os.makedirs(self.imagesdir)
50 self.addCleanup(shutil.rmtree, self.path)
51 _p = mock.patch('urllib.request.urlopen')
52 self.mocked_urlopen = _p.start()
53 self.mocked_urlopen.side_effect = [
54 FakeResponse(code=404),
55 FakeResponse(code=404),
56 ]
57 self.addCleanup(_p.stop)
58 self.fetch_retries_orig = boottest.FETCH_RETRIES
59
60 def restore_fetch_retries():
61 boottest.FETCH_RETRIES = self.fetch_retries_orig
62 boottest.FETCH_RETRIES = 0
63 self.addCleanup(restore_fetch_retries)
64
65 def test_missing(self):
66 # Missing manifest file silently results in empty contents.
67 manifest = boottest.TouchManifest('I-dont-exist', 'vivid')
68 self.assertEqual([], manifest._manifest)
69 self.assertNotIn('foo', manifest)
70
71 def test_fetch(self):
72 # Missing manifest file is fetched dynamically
73 self.mocked_urlopen.side_effect = [
74 FakeResponse(code=200, content=b'foo 1.0'),
75 ]
76 manifest = boottest.TouchManifest('ubuntu-touch', 'vivid')
77 self.assertNotEqual([], manifest._manifest)
78
79 def test_fetch_disabled(self):
80 # Manifest auto-fetching can be disabled.
81 manifest = boottest.TouchManifest('ubuntu-touch', 'vivid', fetch=False)
82 self.mocked_urlopen.assert_not_called()
83 self.assertEqual([], manifest._manifest)
84
85 def test_fetch_fails(self):
86 project = 'fake'
87 series = 'fake'
88 manifest_dir = os.path.join(self.imagesdir, project, series)
89 manifest_lines = [
90 'foo:armhf 1~beta1',
91 ]
92 create_manifest(manifest_dir, manifest_lines)
93 manifest = boottest.TouchManifest(project, series)
94 self.assertEqual(1, len(manifest._manifest))
95 self.assertIn('foo', manifest)
96
97 def test_fetch_exception(self):
98 self.mocked_urlopen.side_effect = [
99 IOError("connection refused"),
100 IOError("connection refused"),
101 ]
102 manifest = boottest.TouchManifest('not-real', 'not-real')
103 self.assertEqual(0, len(manifest._manifest))
104
105 def test_simple(self):
106 # Existing manifest file allows callsites to properly check presence.
107 manifest_dir = os.path.join(self.imagesdir, 'ubuntu/vivid')
108 manifest_lines = [
109 'bar 1234',
110 'foo:armhf 1~beta1',
111 'boing1-1.2\t666',
112 'click:com.ubuntu.shorts 0.2.346'
113 ]
114 create_manifest(manifest_dir, manifest_lines)
115
116 manifest = boottest.TouchManifest('ubuntu', 'vivid')
117 # We can dig deeper into the manifest package names list ...
118 self.assertEqual(
119 ['bar', 'boing1-1.2', 'foo'], manifest._manifest)
120 # but the '<name> in manifest' API reads better.
121 self.assertIn('foo', manifest)
122 self.assertIn('boing1-1.2', manifest)
123 self.assertNotIn('baz', manifest)
124 # 'click' name is blacklisted due to the click package syntax.
125 self.assertNotIn('click', manifest)
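
The expectations above amount to a simple parsing rule: a package name is everything before the first space or tab, any ':arch' qualifier is dropped, and the bogus 'click' name produced by click-package entries is skipped. A minimal sketch, illustrative name only:

    def manifest_names(lines):
        names = []
        for line in lines:
            name = line.replace('\t', ' ').split(' ', 1)[0].split(':')[0]
            if name and name != 'click':
                names.append(name)
        return sorted(names)

    assert manifest_names(['bar 1234', 'foo:armhf 1~beta1',
                           'boing1-1.2\t666',
                           'click:com.ubuntu.shorts 0.2.346']) == \
        ['bar', 'boing1-1.2', 'foo']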
126
127
128class TestBoottestEnd2End(TestBase):
129 """End2End tests (calling `britney`) for the BootTest criteria."""
130
131 def setUp(self):
132 super(TestBoottestEnd2End, self).setUp()
133
134 # Modify shared configuration file.
135 with open(self.britney_conf, 'r') as fp:
136 original_config = fp.read()
137 # Disable autopkgtests.
138 new_config = original_config.replace(
139 'ADT_ENABLE = yes', 'ADT_ENABLE = no')
140 # Enable boottest.
141 new_config = new_config.replace(
142 'BOOTTEST_ENABLE = no', 'BOOTTEST_ENABLE = yes')
143 # Disable TouchManifest auto-fetching.
144 new_config = new_config.replace(
145 'BOOTTEST_FETCH = yes', 'BOOTTEST_FETCH = no')
146 with open(self.britney_conf, 'w') as fp:
147 fp.write(new_config)
148
149 self.data.add('libc6', False, {'Architecture': 'armhf'})
150
151 self.data.add(
152 'libgreen1',
153 False,
154 {'Source': 'green', 'Architecture': 'armhf',
155 'Depends': 'libc6 (>= 0.9)'})
156 self.data.add(
157 'green',
158 False,
159 {'Source': 'green', 'Architecture': 'armhf',
160 'Depends': 'libc6 (>= 0.9), libgreen1'})
161 self.create_manifest([
162 'green 1.0',
163 'pyqt5:armhf 1.0',
164 'signon 1.0',
165 'purple 1.1',
166 ])
167
168 def create_manifest(self, lines):
169 """Create a manifest for this britney run context."""
170 path = os.path.join(
171 self.data.path,
172 'boottest/images/ubuntu-touch/{}'.format(self.data.series))
173 create_manifest(path, lines)
174
175 def make_boottest(self):
176 """Create a stub version of boottest-britney script."""
177 script_path = os.path.expanduser(
178 "~/auto-package-testing/jenkins/boottest-britney")
179 if not os.path.exists(os.path.dirname(script_path)):
180 os.makedirs(os.path.dirname(script_path))
181 with open(script_path, 'w') as f:
182 f.write('''#!%(py)s
183import argparse
184import os
185import shutil
186import sys
187
188template = """
189green 1.1~beta RUNNING
190pyqt5-src 1.1~beta PASS
191pyqt5-src 1.1 FAIL
192signon 1.1 PASS
193purple 1.1 RUNNING
194"""
195
196def request():
197 work_path = os.path.dirname(args.output)
198 os.makedirs(work_path)
199 shutil.copy(args.input, os.path.join(work_path, 'test_input'))
200 with open(args.output, 'w') as f:
201 f.write(template)
202
203def submit():
204 pass
205
206def collect():
207 with open(args.output, 'w') as f:
208 f.write(template)
209
210p = argparse.ArgumentParser()
211p.add_argument('-r')
212p.add_argument('-c')
213p.add_argument('-d', default=False, action='store_true')
214p.add_argument('-P', default=False, action='store_true')
215p.add_argument('-U', default=False, action='store_true')
216
217sp = p.add_subparsers()
218
219psubmit = sp.add_parser('submit')
220psubmit.add_argument('input')
221psubmit.set_defaults(func=submit)
222
223prequest = sp.add_parser('request')
224prequest.add_argument('-O', dest='output')
225prequest.add_argument('input')
226prequest.set_defaults(func=request)
227
228pcollect = sp.add_parser('collect')
229pcollect.add_argument('-O', dest='output')
230pcollect.set_defaults(func=collect)
231
232args = p.parse_args()
233args.func()
234 ''' % {'py': sys.executable})
235 os.chmod(script_path, 0o755)
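
The stub's template emits one 'source version STATUS' line per boottest attempt, and the tests below expect the entry matching the exact source/version pair to decide the outcome, with no entry meaning UNKNOWN STATUS. A minimal lookup sketch, illustrative name, assuming later lines win on duplicates:

    def boottest_status(results_text, source, version):
        status = None
        for line in results_text.strip().splitlines():
            src, ver, state = line.split()
            if (src, ver) == (source, version):
                status = state
        return status

    sample = 'pyqt5-src 1.1~beta PASS\npyqt5-src 1.1 FAIL\n'
    assert boottest_status(sample, 'pyqt5-src', '1.1') == 'FAIL'
    assert boottest_status(sample, 'pyqt5-src', '1.2') is None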
236
237 def do_test(self, context, expect=None, no_expect=None):
238 """Process the given package context and assert britney results."""
239 for (pkg, fields) in context:
240 self.data.add(pkg, True, fields, testsuite='autopkgtest')
241 self.make_boottest()
242 (excuses_yaml, excuses, out) = self.run_britney()
243 # print('-------\nexcuses: %s\n-----' % excuses)
244 # print('-------\nout: %s\n-----' % out)
245 if expect:
246 for re in expect:
247 self.assertRegex(excuses, re)
248 if no_expect:
249 for re in no_expect:
250 self.assertNotRegex(excuses, re)
251
252 def test_runs(self):
253 # `Britney` runs and considers binary packages for boottesting
254 # when it is enabled in the configuration; only binaries needed
255 # in the phone image are considered for boottesting.
256 # The boottest status is presented along with its corresponding
257 # jenkins job urls for the public and the private servers.
258 # 'in progress' tests block package promotion.
259 context = [
260 ('green', {'Source': 'green', 'Version': '1.1~beta',
261 'Architecture': 'armhf', 'Depends': 'libc6 (>= 0.9)'}),
262 ('libgreen1', {'Source': 'green', 'Version': '1.1~beta',
263 'Architecture': 'armhf',
264 'Depends': 'libc6 (>= 0.9)'}),
265 ]
266 public_jenkins_url = (
267 'https://jenkins.qa.ubuntu.com/job/series-boottest-green/'
268 'lastBuild')
269 private_jenkins_url = (
270 'http://d-jenkins.ubuntu-ci:8080/view/Series/view/BootTest/'
271 'job/series-boottest-green/lastBuild')
272 self.do_test(
273 context,
274 [r'\bgreen\b.*>1</a> to .*>1.1~beta<',
275 r'<li>Boottest result: {} \(Jenkins: '
276 r'<a href="{}">public</a>, <a href="{}">private</a>\)'.format(
277 boottest.BootTest.EXCUSE_LABELS['RUNNING'],
278 public_jenkins_url, private_jenkins_url),
279 '<li>Not considered'])
280
281 # The `boottest-britney` input (recorded for testing purposes),
282 # contains a line matching the requested boottest attempt.
283 # '<source> <version>\n'
284 test_input_path = os.path.join(
285 self.data.path, 'boottest/work/test_input')
286 with open(test_input_path) as f:
287 self.assertEqual(
288 ['green 1.1~beta\n'], f.readlines())
289
290 def test_pass(self):
291 # `Britney` updates boottesting information in excuses when the
292 # package test pass and marks the package as a valid candidate for
293 # promotion.
294 context = []
295 context.append(
296 ('signon', {'Version': '1.1', 'Architecture': 'armhf'}))
297 self.do_test(
298 context,
299 [r'\bsignon\b.*\(- to .*>1.1<',
300 '<li>Boottest result: {}'.format(
301 boottest.BootTest.EXCUSE_LABELS['PASS']),
302 '<li>Valid candidate'])
303
304 def test_fail(self):
305 # `Britney` updates boottesting information in excuses when the
306 # package test fails and blocks the package promotion
307 # ('Not considered.')
308 context = []
309 context.append(
310 ('pyqt5', {'Source': 'pyqt5-src', 'Version': '1.1',
311 'Architecture': 'all'}))
312 self.do_test(
313 context,
314 [r'\bpyqt5-src\b.*\(- to .*>1.1<',
315 '<li>Boottest result: {}'.format(
316 boottest.BootTest.EXCUSE_LABELS['FAIL']),
317 '<li>Not considered'])
318
319 def test_unknown(self):
320 # `Britney` does not choke on missing boottest results for a
321 # particular source/version, in this case pyqt5-src_1.2 (not
322 # listed in the testing result history). Instead it renders
323 # excuses with 'UNKNOWN STATUS' and links to the corresponding
324 # jenkins jobs for further investigation. Source promotion is
325 # still blocked, though.
326 context = [
327 ('pyqt5', {'Source': 'pyqt5-src', 'Version': '1.2',
328 'Architecture': 'armhf'})]
329 self.do_test(
330 context,
331 [r'\bpyqt5-src\b.*\(- to .*>1.2<',
332 r'<li>Boottest result: UNKNOWN STATUS \(Jenkins: .*\)',
333 '<li>Not considered'])
334
335 def test_skipped_by_hints(self):
336 # `Britney` allows boottests to be skipped by hinting the
337 # corresponding source with 'force-skiptest'. The boottest
338 # attempt will not be requested.
339 context = [
340 ('pyqt5', {'Source': 'pyqt5-src', 'Version': '1.1',
341 'Architecture': 'all'}),
342 ]
343 self.create_hint('cjwatson', 'force-skiptest pyqt5-src/1.1')
344 self.do_test(
345 context,
346 [r'\bpyqt5-src\b.*\(- to .*>1.1<',
347 '<li>boottest skipped from hints by cjwatson',
348 '<li>Valid candidate'])
349
350 def test_fail_but_forced_by_hints(self):
The diff has been truncated for viewing.
