Merge lp:~canonical-ci-engineering/britney/queued-announce-and-collect into lp:~ubuntu-release/britney/britney2-ubuntu

Proposed by Francis Ginther
Status: Work in progress
Proposed branch: lp:~canonical-ci-engineering/britney/queued-announce-and-collect
Merge into: lp:~ubuntu-release/britney/britney2-ubuntu
Diff against target: 926 lines (+794/-39)
7 files modified
britney.conf (+4/-0)
britney.py (+60/-0)
testclient.py (+216/-0)
tests/__init__.py (+25/-1)
tests/test_autopkgtest.py (+6/-9)
tests/test_boottest.py (+11/-29)
tests/test_testclient.py (+472/-0)
To merge this branch: bzr merge lp:~canonical-ci-engineering/britney/queued-announce-and-collect
Reviewer: Ubuntu Release Team
Status: Pending
Review via email: mp+259972@code.launchpad.net

Description of the change

Enable TestClient work.
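
For reference, the run is switched on and tuned entirely from britney.conf; a minimal sketch of the new settings (the AMQP URI and the test names below are placeholder values, see the diff for the actual defaults):

    TESTCLIENT_ENABLE = yes
    TESTCLIENT_AMQP_URIS = amqp://user:password@rabbit.example.com:5672//
    TESTCLIENT_REQUIRED_TESTS = boottest snappy-selftest

With TESTCLIENT_REQUIRED_TESTS left empty, collected results are only reported in the excuses; any test name listed there gates promotion until its result is PASS or SKIP.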

Revision history for this message
Francis Ginther (fginther) wrote :

Two inline comments below.

Also, at some point we need to think a little more about how the hints are used. These were originally set up for dealing with autopkgtests, and the hints 'force-skiptest' and 'force-badtest' are really geared toward skipping those known tests. But we really don't want to skip other types of tests just because a package has some poorly written autopkgtest that no one wants to update. In the current MP, 'force-skiptest' is not used, and I think that's the right way to go. I also think we need to remove 'force-badtest' and just use 'force' as a universal override (to override any type of autopkgtest, boottest, or snappy-selftest failure). This may be enough, but perhaps we'll need to define a set of hints specific to each required test type (e.g. 'force-bad-boottest', 'force-bad-snappy-selftest').
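
For illustration, these hints are the usual one-line package/version entries dropped into a Hints file; a sketch ('foo' and '1.1' are placeholders):

    force-badtest foo/1.1
    force foo/1.1

With this MP either line overrides a failing testclient result for foo 1.1, since the britney.py change searches both 'force' and 'force-badtest' hints before invalidating an excuse.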

Revision history for this message
Thomi Richards (thomir-deactivatedaccount) wrote :

I can't say as I understand more than about 10% of this, but I left a few comments.

Revision history for this message
Celso Providelo (cprov) wrote :
436. By Celso Providelo

Merge lp:~cprov/britney/integration-tests

437. By Celso Providelo

Add integration tests for hints on testclient. Also refactor TestBase.{overrideConfig, create_hint} so they can be reused across tests.

438. By Para Siva

Adding distribution to the testclient implementation

439. By Celso Providelo

Extending the TestClient announcement implementation to include the DSC 'Binary' (and potentially other) information in the payload.

Revision history for this message
Martin Pitt (pitti) wrote :

Do you still want to go through with this, or is this stalled now? (I suppose the latter?)

I submitted https://code.launchpad.net/~pitti/britney/britney2-ubuntu-amqp/+merge/263679 for test requests, and I'll work on collecting test results directly from swift. This will avoid separate components like adt-britney which duplicate work from britney (like reverse dependency calculation).

Unmerged revisions

439. By Celso Providelo

Extending the TestClient announcement implementation to include the DSC 'Binary' (and potentially other) information in the payload.

438. By Para Siva

Adding distribution to the testclient implementation

437. By Celso Providelo

Add integration tests for hints on testclient. Also refactor TestBase.{overrideConfig, create_hint} so they can be reused across tests.

436. By Celso Providelo

Merge lp:~cprov/britney/integration-tests

435. By Celso Providelo

Merge lp:~cprov/britney/testclient-api/

Preview Diff

=== modified file 'britney.conf'
--- britney.conf 2015-03-05 14:57:03 +0000
+++ britney.conf 2015-05-29 14:31:13 +0000
@@ -70,3 +70,7 @@
 BOOTTEST_DEBUG = yes
 BOOTTEST_ARCHES = armhf amd64
 BOOTTEST_FETCH = yes
+
+TESTCLIENT_ENABLE = yes
+TESTCLIENT_AMQP_URIS = amqp://guest:guest@162.213.32.181:5672//
+TESTCLIENT_REQUIRED_TESTS =
=== modified file 'britney.py'
--- britney.py 2015-02-20 19:02:00 +0000
+++ britney.py 2015-05-29 14:31:13 +0000
@@ -227,6 +227,7 @@
                     PROVIDES, RDEPENDS, RCONFLICTS, MULTIARCH, ESSENTIAL)
 from autopkgtest import AutoPackageTest, ADT_PASS, ADT_EXCUSES_LABELS
 from boottest import BootTest
+from testclient import TestClient
 
 
 __author__ = 'Fabio Tranchitella and the Debian Release Team'
@@ -1984,6 +1985,65 @@
                         upgrade_me.remove(excuse.name)
                         unconsidered.append(excuse.name)
 
+        if (getattr(self.options, "testclient_enable", "no") == "yes" and
+                self.options.series):
+
+            # Filter only new source candidates excuses.
+            testing_excuses = []
+            for excuse in self.excuses:
+                # Skip removals, binary-only candidates, proposed-updates
+                # and unknown versions.
+                if (excuse.name.startswith("-") or
+                        "/" in excuse.name or
+                        "_" in excuse.name or
+                        excuse.ver[1] == "-"):
+                    continue
+                testing_excuses.append(excuse)
+
+            amqp_uris = getattr(
+                self.options, "testclient_amqp_uris", "").split()
+            testclient = TestClient(
+                self.options.distribution, self.options.series, amqp_uris)
+
+            # Announce new candidates and collect new test results.
+            if not self.options.dry_run:
+                testclient.announce(testing_excuses, self.options.unstable)
+                testclient.collect()
+                testclient.cleanup(testing_excuses)
+
+            # Update excuses considering hints and required_tests (for gating).
+            required_tests = getattr(
+                self.options, "testclient_required_tests", "").split()
+            for excuse in testing_excuses:
+                hints = self.hints.search('force', package=excuse.name)
+                hints.extend(
+                    self.hints.search('force-badtest', package=excuse.name))
+                forces = [x for x in hints
+                          if same_source(excuse.ver[1], x.version)]
+                for test in testclient.getTests(excuse.name, excuse.ver[1]):
+                    label = TestClient.EXCUSE_LABELS.get(
+                        test.get('status'), 'UNKNOWN STATUS')
+                    excuse.addhtml(
+                        "%s result: %s (<a href=\"%s\">results</a>)" % (
+                            test.get('name').capitalize(), label,
+                            test.get('url')))
+                    if forces:
+                        excuse.addhtml(
+                            "Should wait for %s %s %s test, but forced by "
+                            "%s" % (excuse.name, excuse.ver[1],
+                                    test.get('name').capitalize(),
+                                    forces[0].user))
+                        continue
+                    if test.get('name') not in required_tests:
+                        continue
+                    if test.get('status') not in TestClient.VALID_STATUSES:
+                        excuse.addreason(test.get('name'))
+                        if excuse.is_valid:
+                            excuse.is_valid = False
+                            excuse.addhtml("Not considered")
+                            upgrade_me.remove(excuse.name)
+                            unconsidered.append(excuse.name)
+
         # invalidate impossible excuses
         for e in self.excuses:
             # parts[0] == package name
=== added file 'testclient.py'
--- testclient.py 1970-01-01 00:00:00 +0000
+++ testclient.py 2015-05-29 14:31:13 +0000
@@ -0,0 +1,216 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2015 Canonical Ltd.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+from __future__ import print_function
+
+import apt_pkg
+from contextlib import (
+    contextmanager,
+    nested,
+)
+import json
+import os
+
+
+import kombu
+from kombu.pools import producers
+
+
+@contextmanager
+def json_cached_info(path):
+    """Context manager for caching a JSON object on disk."""
+    info = {}
+    if os.path.exists(path):
+        with open(path) as fp:
+            try:
+                info = json.load(fp)
+            except ValueError:
+                # cache is empty or corrupted (!).
+                info = {}
+    else:
+        dirname = os.path.dirname(path)
+        if not os.path.exists(dirname):
+            os.makedirs(dirname)
+    yield info
+    with open(path, 'w') as fp:
+        json.dump(info, fp, indent=2)
+
+
+def make_cache_key(name, version):
+    """Return a json-hashable key for given source & version."""
+    return '{}_{}'.format(name, version)
+
+
+def get_repo_sources(basedir):
+    """Return 'Sources' repository index information for the given basedir.
+
+    Extract and cache repository 'Sources' index into a dictionary
+    key-ed by source-key (see `make_cache_key`) pointing to a dictionary
+    containing relevant DSC fields.
+
+    At moment, we only cache 'Binary'.
+    """
+    fields = (
+        'Binary',
+    )
+    filename = os.path.join(basedir, "Sources")
+    tag_file = apt_pkg.TagFile(open(filename))
+
+    sources = {}
+    while tag_file.step():
+        source_name = tag_file.section.get('Package')
+        source_version = tag_file.section.get('Version')
+        source_key = make_cache_key(source_name, source_version)
+        sources[source_key] = {
+            k: tag_file.section.get(k, '') for k in fields
+        }
+
+    return sources
+
+
+class TestClient(object):
+    """Generic test client implementation.
+
+    announce: announcing new source candidates to a pre-defined rabbitmq
+    exchange (testing subsystems can hook/subscribe).
+
+    collect: collect test results for the context series from a pre-defined
+    rabbitmq exchange (other promotion agents can do the same).
+
+    cleanup: sanitize internal announcement/testing registry after runs.
+    """
+
+    EXCHANGE_CANDIDATES = 'candidates.exchange.v1'
+    EXCHANGE_RESULTS = 'results.exchange.v1'
+
+    VALID_STATUSES = ('PASS', 'SKIP')
+
+    EXCUSE_LABELS = {
+        "PASS": '<span style="background:#87d96c">Pass</span>',
+        "SKIP": '<span style="background:#ffff00">Test skipped</span>',
+        "FAIL": '<span style="background:#ff6666">Regression</span>',
+        "RUNNING": '<span style="background:#99ddff">Test in progress</span>',
+    }
+
+    def __init__(self, distribution, series, amqp_uris):
+        self.distribution = distribution
+        self.series = series
+        self.amqp_uris = amqp_uris
+
+    @property
+    def cache_path(self):
+        """Series-specific test announcement/result cache."""
+        return 'testclient/{}/{}.json'.format(self.distribution, self.series)
+
+    @property
+    def results_queue(self):
+        """Series-specific queue for collecting tests results."""
+        return 'pm.results.{}.{}'.format(self.distribution,
+                                         self.series)
+
+    def announce(self, excuses, unstable_basedir,
+                 get_repo_sources=get_repo_sources):
+        """Announce new source candidates.
+
+        Post a message to the EXCHANGE_CANDATIDATES for every new given
+        excuses (cache announcements so excuses do not get re-annouced).
+        """
+        with nested(json_cached_info(self.cache_path),
+                    kombu.Connection(self.amqp_uris)) as (cache, connection):
+            # XXX cprov 20150521: nested() is deprecated, and multi-statement
+            # 'with' does not support nesting (i.e. the previous context
+            # manager is not available to the next, in this case
+            # 'connection').
+            with producers[connection].acquire(block=True) as producer:
+                publisher = connection.ensure(
+                    producer, producer.publish, max_retries=3)
+                exchange = kombu.Exchange(
+                    self.EXCHANGE_CANDIDATES, type="fanout")
+                repo = get_repo_sources(unstable_basedir)
+                for excuse in excuses:
+                    source_key = make_cache_key(excuse.name, excuse.ver[1])
+                    if source_key in cache.keys():
+                        continue
+                    repo_data = repo[source_key]
+                    payload = {
+                        'source_name': excuse.name,
+                        'source_version': excuse.ver[1],
+                        'source_binaries': [
+                            b.strip() for b in repo_data['Binary'].split(',')
+                        ],
+                        'series': self.series,
+                        'distribution': self.distribution,
+                    }
+                    publisher(payload, exchange=exchange, declare=[exchange])
+                    cache[make_cache_key(excuse.name, excuse.ver[1])] = []
+
+    def collect(self):
+        """Collect available test results.
+
+        Consume all messages from the EXCHANGE_RESULTS (routed to a series-
+        specific queue). Ignore test results for other series and update
+        test results registry.
+        """
+        with nested(json_cached_info(self.cache_path),
+                    kombu.Connection(self.amqp_uris)) as (cache, connection):
+            exchange = kombu.Exchange(
+                self.EXCHANGE_RESULTS, type="fanout")
+            queue = kombu.Queue(self.results_queue, exchange)
+            # XXX cprov 20150521: same as above about nested context managers.
+            with connection.SimpleQueue(queue) as q:
+                for i in range(len(q)):
+                    msg = q.get()
+                    payload = msg.payload
+                    if payload.get('distribution') != self.distribution:
+                        continue
+                    if payload.get('series') != self.series:
+                        continue
+                    tests = cache.setdefault(
+                        make_cache_key(
+                            payload.get('source_name'),
+                            payload.get('source_version')
+                        ), [])
+                    tests.append({
+                        'name': payload.get('test_name'),
+                        'status': payload.get('test_status'),
+                        'url': payload.get('test_url'),
+                    })
+                    msg.ack()
+
+    def cleanup(self, excuses):
+        """Remove test result entries without corresponding excuse.
+
+        If there is not excuse the test results are not relevant anymore.
+        """
+        with json_cached_info(self.cache_path) as cache:
+            current_keys = [
+                make_cache_key(e.name, e.ver[1]) for e in excuses]
+            cached_keys = list(cache.keys())
+            for k in cached_keys:
+                if k not in current_keys:
+                    del cache[k]
+
+    def getTests(self, name, version):
+        """Yields test results for a given source-version pair.
+
+        Tests results are a list of dictionaries container test-name, status
+        and url.
+        """
+        with json_cached_info(self.cache_path) as cache:
+            tests = cache.get(make_cache_key(name, version), [])
+            for test in tests:
+                yield {
+                    'name': test.get('name'),
+                    'status': test.get('status'),
+                    'url': test.get('url'),
+                }
=== modified file 'tests/__init__.py'
--- tests/__init__.py 2015-02-05 14:43:23 +0000
+++ tests/__init__.py 2015-05-29 14:31:13 +0000
@@ -32,6 +32,7 @@
         self.path = tempfile.mkdtemp(prefix='testarchive.')
         self.apt_source = 'deb file://%s /' % self.path
         self.series = 'series'
+        self.distribution = 'ubuntu'
         self.dirs = {False: os.path.join(self.path, 'data', self.series),
                      True: os.path.join(
                          self.path, 'data', '%s-proposed' % self.series)}
@@ -94,7 +95,8 @@
             src = fields.get('Source', name)
             if src not in self.added_sources[unstable]:
                 self.add_src(src, unstable, {'Version': fields['Version'],
-                                             'Section': fields['Section']})
+                                             'Section': fields['Section'],
+                                             'Binary': name})
 
     def add_src(self, name, unstable, fields={}):
         '''Add a source package to the index file.
@@ -162,3 +164,25 @@
             excuses = f.read()
 
         return (excuses, out)
+
+    def overrideConfig(self, overrides):
+        """Overrides briney configuration based on the given key-value map."""
+        with open(self.britney_conf, 'r') as fp:
+            original_config = fp.read()
+        new_config = []
+        for line in original_config.splitlines():
+            for k, v in overrides.iteritems():
+                if line.startswith(k):
+                    line = '{} = {}'.format(k, v)
+            new_config.append(line)
+        with open(self.britney_conf, 'w') as fp:
+            fp.write('\n'.join(new_config))
+        self.addCleanup(self.restore_config, original_config)
+
+    def create_hint(self, username, content):
+        """Populates a hint file for the given 'username' with 'content'."""
+        hints_path = os.path.join(
+            self.data.path,
+            'data/{}-proposed/Hints/{}'.format(self.data.series, username))
+        with open(hints_path, 'w') as fd:
+            fd.write(content)
=== modified file 'tests/test_autopkgtest.py'
--- tests/test_autopkgtest.py 2015-02-05 14:43:23 +0000
+++ tests/test_autopkgtest.py 2015-05-29 14:31:13 +0000
@@ -31,15 +31,12 @@
     def setUp(self):
         super(TestAutoPkgTest, self).setUp()
 
-        # Mofify configuration according to the test context.
-        with open(self.britney_conf, 'r') as fp:
-            original_config = fp.read()
-        # Disable boottests.
-        new_config = original_config.replace(
-            'BOOTTEST_ENABLE = yes', 'BOOTTEST_ENABLE = no')
-        with open(self.britney_conf, 'w') as fp:
-            fp.write(new_config)
-        self.addCleanup(self.restore_config, original_config)
+        self.overrideConfig({
+            'ADT_ENABLE': 'yes',
+            'BOOTTEST_ENABLE': 'no',
+            'BOOTTEST_FETCH': 'no',
+            'TESTCLIENT_ENABLE': 'no',
+        })
 
         # fake adt-britney script
         self.adt_britney = os.path.join(
=== modified file 'tests/test_boottest.py'
--- tests/test_boottest.py 2015-02-20 16:28:47 +0000
+++ tests/test_boottest.py 2015-05-29 14:31:13 +0000
@@ -123,21 +123,14 @@
     def setUp(self):
         super(TestBoottestEnd2End, self).setUp()
 
-        # Modify shared configuration file.
-        with open(self.britney_conf, 'r') as fp:
-            original_config = fp.read()
-        # Disable autopkgtests.
-        new_config = original_config.replace(
-            'ADT_ENABLE = yes', 'ADT_ENABLE = no')
-        # Enable boottest.
-        new_config = new_config.replace(
-            'BOOTTEST_ENABLE = no', 'BOOTTEST_ENABLE = yes')
-        # Disable TouchManifest auto-fetching.
-        new_config = new_config.replace(
-            'BOOTTEST_FETCH = yes', 'BOOTTEST_FETCH = no')
-        with open(self.britney_conf, 'w') as fp:
-            fp.write(new_config)
-        self.addCleanup(self.restore_config, original_config)
+        # Disable autopkgtests + testclient and boottest_fetch
+        # for this testing context.
+        self.overrideConfig({
+            'ADT_ENABLE': 'no',
+            'BOOTTEST_ENABLE': 'yes',
+            'BOOTTEST_FETCH': 'no',
+            'TESTCLIENT_ENABLE': 'no',
+        })
 
         self.data.add('libc6', False, {'Architecture': 'armhf'}),
 
@@ -323,14 +316,6 @@
             r'<li>Boottest result: UNKNOWN STATUS \(Jenkins: .*\)',
             '<li>Not considered'])
 
-    def create_hint(self, username, content):
-        """Populates a hint file for the given 'username' with 'content'."""
-        hints_path = os.path.join(
-            self.data.path,
-            'data/{}-proposed/Hints/{}'.format(self.data.series, username))
-        with open(hints_path, 'w') as fd:
-            fd.write(content)
-
     def test_skipped_by_hints(self):
         # `Britney` allows boottests to be skipped by hinting the
         # corresponding source with 'force-skiptest'. The boottest
@@ -415,12 +400,9 @@
         # Boottest can run simultaneously with autopkgtest (adt).
 
         # Enable ADT in britney configuration.
-        with open(self.britney_conf, 'r') as fp:
-            original_config = fp.read()
-        new_config = original_config.replace(
-            'ADT_ENABLE = no', 'ADT_ENABLE = yes')
-        with open(self.britney_conf, 'w') as fp:
-            fp.write(new_config)
+        self.overrideConfig({
+            'ADT_ENABLE': 'yes',
+        })
 
         # Create a fake 'adt-britney' that reports a RUNNING job for
         # the testing source ('purple_1.1').
=== added file 'tests/test_testclient.py'
--- tests/test_testclient.py 1970-01-01 00:00:00 +0000
+++ tests/test_testclient.py 2015-05-29 14:31:13 +0000
@@ -0,0 +1,472 @@
+#!/usr/bin/python
+# (C) 2015 Canonical Ltd.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+import os
+import shutil
+import sys
+import tempfile
+import unittest
+
+import kombu
+from kombu.pools import producers
+
+
+PROJECT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+sys.path.insert(0, PROJECT_DIR)
+
+from excuse import Excuse
+from testclient import (
+    json_cached_info,
+    make_cache_key,
+    TestClient,
+)
+from tests import TestBase
+
+
+class TestJsonCachedInfo(unittest.TestCase):
+
+    def setUp(self):
+        super(TestJsonCachedInfo, self).setUp()
+        _, self.test_cache = tempfile.mkstemp()
+        self.addCleanup(os.unlink, self.test_cache)
+
+    def test_simple(self):
+        # `json_cached_info` context manager correctly persists a
+        # python dictionary on disk.
+        with json_cached_info(self.test_cache) as cache:
+            self.assertEqual({}, cache)
+            cache['foo'] = 'bar'
+
+        with open(self.test_cache) as fp:
+            self.assertEqual(
+                ['{\n',
+                 '  "foo": "bar"\n',
+                 '}'], fp.readlines())
+
+        with json_cached_info(self.test_cache) as cache:
+            self.assertEqual(cache['foo'], 'bar')
+
+
+def make_excuse(name, version):
+    """Return a `Excuse` for the give source name and version."""
+    e = Excuse(name)
+    e.set_vers('-', version)
+    return e
+
+
+class TestTestClient(unittest.TestCase):
+
+    def setUp(self):
+        super(TestTestClient, self).setUp()
+        self.path = tempfile.mkdtemp(prefix='testclient')
+        os.makedirs(os.path.join(self.path, 'testclient/'))
+        self.addCleanup(shutil.rmtree, self.path)
+
+        os.chdir(self.path)
+        cwd = os.getcwd()
+        self.addCleanup(os.chdir, cwd)
+
+        self.amqp_uris = ['memory://']
+
+    def test_announce(self):
+        # 'announce' post messages to the EXCHANGE_CANDIDATES exchange and
+        # updates its internal cache.
+        testclient = TestClient('ubuntu', 'vivid', self.amqp_uris)
+        test_excuses = [
+            make_excuse('foo', '1.0'),
+            make_excuse('bar', '2.0'),
+        ]
+        test_repo_data = {
+            'foo_1.0': {'Binary': 'a, b'},
+            'bar_2.0': {'Binary': 'c'},
+        }
+
+        with kombu.Connection(self.amqp_uris) as connection:
+            exchange = kombu.Exchange(
+                testclient.EXCHANGE_CANDIDATES, type="fanout")
+            queue = kombu.Queue('testing', exchange)
+            with connection.SimpleQueue(queue) as q:
+                testclient.announce(
+                    test_excuses, None, lambda x: test_repo_data)
+                self.assertEqual(
+                    [{'series': 'vivid',
+                      'distribution': 'ubuntu',
+                      'source_name': 'foo',
+                      'source_version': '1.0',
+                      'source_binaries': ['a', 'b']},
+                     {'series': 'vivid',
+                      'distribution': 'ubuntu',
+                      'source_name': 'bar',
+                      'source_version': '2.0',
+                      'source_binaries': ['c']}],
+                    [q.get().payload for i in range(len(q))])
+
+        with json_cached_info(testclient.cache_path) as cache:
+            self.assertEqual(
+                {'bar_2.0': [], 'foo_1.0': []},
+                cache)
+
+    def test_collect(self):
+        # 'collect' collects test results and aggregates them in its
+        # internal cache.
+        testclient = TestClient('ubuntu', 'vivid', self.amqp_uris)
+
+        result_payloads = [
+            {'source_name': 'foo',
+             'source_version': '1.0',
+             'series': testclient.series,
+             'distribution': testclient.distribution,
+             'test_name': 'snappy',
+             'test_status': 'RUNNING',
+             'test_url': 'http://snappy.com/foo'},
+            {'source_name': 'bar',
+             'source_version': '1.0',
+             'series': testclient.series,
+             'distribution': testclient.distribution,
+             'test_name': 'ubuntu',
+             'test_status': 'RUNNING',
+             'test_url': 'http://ubuntu.com/foo'},
+            {'source_name': 'foo',
+             'source_version': '1.0',
+             'series': testclient.series,
+             'distribution': testclient.distribution,
+             'test_name': 'bbb',
+             'test_status': 'RUNNING',
+             'test_url': 'http://bbb.com/foo'},
+            # This result will be ignored due to the series mismatch.
+            {'source_name': 'zoing',
+             'source_version': '1.0',
+             'series': 'some-other-series',
+             'test_name': 'ubuntu',
+             'test_status': 'RUNNING',
+             'test_url': 'http://ubuntu.com/foo'},
+        ]
+
+        with kombu.Connection(self.amqp_uris) as connection:
+            with producers[connection].acquire(block=True) as producer:
+                # Just for binding destination queue to the exchange.
+                testclient.collect()
+                exchange = kombu.Exchange(
+                    testclient.EXCHANGE_RESULTS, type="fanout")
+                publisher = connection.ensure(
+                    producer, producer.publish, max_retries=3)
+                for payload in result_payloads:
+                    publisher(payload, exchange=exchange, declare=[exchange])
+        testclient.collect()
+
+        with json_cached_info(testclient.cache_path) as cache:
+            self.assertEqual(
+                {'foo_1.0': [{'name': 'snappy',
+                              'status': 'RUNNING',
+                              'url': 'http://snappy.com/foo'},
+                             {'name': 'bbb',
+                              'status': 'RUNNING',
+                              'url': 'http://bbb.com/foo'}],
+                 'bar_1.0': [{'name': 'ubuntu',
+                              'status': 'RUNNING',
+                              'url': 'http://ubuntu.com/foo'}]},
+                cache)
+
+    def test_cleanup(self):
+        # `cleanup` remove cache entries that are not present in the
+        # given excuses list (i.e. not relevant for promotion anymore).
+        testclient = TestClient('ubuntu', 'vivid', self.amqp_uris)
+        test_excuses = [
+            make_excuse('foo', '1.0'),
+            make_excuse('bar', '2.0'),
+        ]
+
+        with json_cached_info(testclient.cache_path) as cache:
+            cache[make_cache_key('foo', '0.9')] = []
+            cache[make_cache_key('foo', '1.0')] = []
+            cache[make_cache_key('bar', '2.0')] = []
+
+        testclient.cleanup(test_excuses)
+
+        with json_cached_info(testclient.cache_path) as cache:
+            self.assertEqual(
+                {'bar_2.0': [], 'foo_1.0': []},
+                cache)
+
+    def test_getTests(self):
+        # `getTests` yields cached tests results for a given source name
+        # and version.
+        testclient = TestClient('ubuntu', 'vivid', self.amqp_uris)
+
+        with json_cached_info(testclient.cache_path) as cache:
+            cache[make_cache_key('foo', '1.0')] = [
+                {'name': 'snappy',
+                 'status': 'RUNNING',
+                 'url': 'http://snappy.com/foo'},
+                {'name': 'bbb',
+                 'status': 'RUNNING',
+                 'url': 'http://bbb.com/foo'}
+            ]
+
+        self.assertEqual(
+            [{'name': 'snappy',
+              'status': 'RUNNING',
+              'url': 'http://snappy.com/foo'},
+             {'name': 'bbb',
+              'status': 'RUNNING',
+              'url': 'http://bbb.com/foo'}],
+            list(testclient.getTests('foo', '1.0')))
+
+        self.assertEqual(
+            [], list(testclient.getTests('bar', '1.0')))
+
+
+def has_local_rabbitmq():
+    """Whether a local rabbitmq server is available with default creds."""
+    with kombu.Connection('amqp://guest:guest@localhost:5672//',
+                          connect_timeout=.1) as c:
+        try:
+            c.connect()
+        except:
+            return False
+    return True
+
+
+@unittest.skipUnless(has_local_rabbitmq(), 'No local rabbitmq')
+class TestTestClientEnd2End(TestBase):
+    """End2End tests (calling `britney`) for the TestClient usage."""
+
+    def setUp(self):
+        super(TestTestClientEnd2End, self).setUp()
+
+        # XXX cprov 20150525: unfortunately, this test requires a proper
+        # amqp transport/server layer (rabbitmq) because kombu 'memory://'
+        # cannot be shared across processes (britney & tests).
+        self.amqp_uris = ['amqp://guest:guest@localhost:5672//']
+
+        self.path = tempfile.mkdtemp(prefix='testclient')
+        os.makedirs(os.path.join(self.path, 'testclient/'))
+        self.addCleanup(shutil.rmtree, self.path)
+
+        os.chdir(self.path)
+        cwd = os.getcwd()
+        self.addCleanup(os.chdir, cwd)
+
+        # Disable autopkgtests + boottest tests and use local rabbit
+        # for this testing context.
+        self.overrideConfig({
+            'ADT_ENABLE': 'no',
+            'BOOTTEST_ENABLE': 'no',
+            'TESTCLIENT_ENABLE': 'yes',
+            'TESTCLIENT_AMQP_URIS': ' '.join(self.amqp_uris),
+        })
+
+        # We publish a version of 'foo' source to make it 'known'.
+        self.data.add(
+            'foo-bin', False, {'Source': 'foo', 'Architecture': 'amd64'})
+
+    def publishTestResults(self, results):
+        """Publish the given list of test results."""
+        with kombu.Connection(self.amqp_uris) as connection:
+            results_exchange = kombu.Exchange(
+                TestClient.EXCHANGE_RESULTS, type="fanout")
+            with producers[connection].acquire(block=True) as producer:
+                publisher = connection.ensure(
+                    producer, producer.publish, max_retries=3)
+                for payload in results:
+                    publisher(payload, exchange=results_exchange)
+
+    def getAnnouncements(self):
+        """Yields announcements payloads."""
+        with kombu.Connection(self.amqp_uris) as connection:
+            candidates_exchange = kombu.Exchange(
+                TestClient.EXCHANGE_CANDIDATES, type="fanout")
+            queue = kombu.Queue('testing', candidates_exchange)
+            with connection.SimpleQueue(queue) as q:
+                for i in range(len(q)):
+                    msg = q.get()
+                    msg.ack()
+                    yield msg.payload
+
+    def do_test(self, context, expect=None, no_expect=None):
+        """Process the given package context and assert britney results."""
+        for (pkg, fields) in context:
+            self.data.add(pkg, True, fields)
+
+        # Creates a queue for collecting announcements from
+        # 'candidates.exchanges'.
+        with kombu.Connection(self.amqp_uris) as connection:
+            candidates_exchange = kombu.Exchange(
+                TestClient.EXCHANGE_CANDIDATES, type="fanout")
+            queue = kombu.Queue('testing', candidates_exchange)
+            with connection.SimpleQueue(queue) as q:
+                q.queue.purge()
+
+        (excuses, out) = self.run_britney()
+
+        #print('-------\nexcuses: %s\n-----' % excuses)
+        if expect:
+            for re in expect:
+                self.assertRegexpMatches(excuses, re)
+        if no_expect:
+            for re in no_expect:
+                self.assertNotRegexpMatches(excuses, re)
+
+    def test_non_required_test(self):
+        # Non-required test results are collected as part of the excuse
+        # report but do not block source promotion (i.e. the excuse is
+        # a 'Valid candidate' even if the test is 'in progress').
+
+        # Publish 'in-progress' results for 'bazinga for "foo_1.1"'.
+        test_results = [{
+            'source_name': 'foo',
+            'source_version': '1.1',
+            'series': self.data.series,
+            'distribution': self.data.distribution,
+            'test_name': 'bazinga',
+            'test_status': 'RUNNING',
+            'test_url': 'http://bazinga.com/foo',
+        }]
+        self.publishTestResults(test_results)
+
+        # Run britney for 'foo_1.1' and valid candidated is recorded.
+        context = [
+            ('foo-bin', {'Source': 'foo', 'Version': '1.1',
+                         'Architecture': 'amd64'}),
+        ]
+        self.do_test(
+            context,
+            [r'\bfoo\b.*>1</a> to .*>1.1<',
+             r'<li>Bazinga result: .*>Test in progress.*'
+             r'href="http://bazinga.com/foo">results',
+             '<li>Valid candidate'])
+
+        # 'foo_1.1' source candidate was announced.
+        self.assertEqual(
+            [{'source_name': 'foo',
+              'source_version': '1.1',
+              'source_binaries': ['foo-bin'],
+              'series': self.data.series,
+              'distribution': self.data.distribution,
+              }], list(self.getAnnouncements()))
+
+    def test_required_test(self):
+        # A required-test result is collected and blocks source package
+        # promotion while it hasn't passed.
+
+        # Make 'bazinga' a required test.
+        self.overrideConfig({
+            'TESTCLIENT_REQUIRED_TESTS': 'bazinga',
+        })
+
+        # Publish 'in-progress' results for 'bazinga for "foo_1.1"'.
+        test_results = [{
+            'source_name': 'foo',
+            'source_version': '1.1',
+            'series': self.data.series,
+            'distribution': self.data.distribution,
+            'test_name': 'bazinga',
+            'test_status': 'RUNNING',
+            'test_url': 'http://bazinga.com/foo',
+        }]
+        self.publishTestResults(test_results)
+
+        # Run britney for 'foo_1.1' and an unconsidered excuse is recorded.
+        context = [
+            ('foo-bin', {'Source': 'foo', 'Version': '1.1',
+                         'Architecture': 'amd64'}),
+        ]
+        self.do_test(
+            context,
+            [r'\bfoo\b.*>1</a> to .*>1.1<',
+             r'<li>Bazinga result: .*>Test in progress.*'
+             r'href="http://bazinga.com/foo">results',
+             '<li>Not considered'])
+
+        # 'foo_1.1' source candidate was announced.
+        self.assertEqual(
+            [{'source_name': 'foo',
+              'source_version': '1.1',
+              'source_binaries': ['foo-bin'],
+              'series': self.data.series,
+              'distribution': self.data.distribution,
+              }], list(self.getAnnouncements()))
+
+    def test_promoted(self):
+        # When all required tests passed (or were skipped) the source
+        # candidate can be promoted.
+
+        # Make 'bazinga' and 'zoing' required test.
+        self.overrideConfig({
+            'TESTCLIENT_REQUIRED_TESTS': 'bazinga zoing',
+        })
+
+        # Publish 'in-progress' results for 'bazinga for "foo_1.1"'.
+        test_results = [{
+            'source_name': 'foo',
+            'source_version': '1.1',
+            'series': self.data.series,
+            'distribution': self.data.distribution,
+            'test_name': 'bazinga',
+            'test_status': 'SKIP',
+            'test_url': 'http://bazinga.com/foo',
+        }, {
+            'source_name': 'foo',
+            'source_version': '1.1',
+            'series': self.data.series,
+            'distribution': self.data.distribution,
+            'test_name': 'zoing',
+            'test_status': 'PASS',
+            'test_url': 'http://zoing.com/foo',
+        }]
+        self.publishTestResults(test_results)
+
+        context = [
+            ('foo-bin', {'Source': 'foo', 'Version': '1.1',
+                         'Architecture': 'amd64'}),
+        ]
+        self.do_test(
+            context,
+            [r'\bfoo\b.*>1</a> to .*>1.1<',
+             r'<li>Bazinga result: .*>Test skipped.*'
+             'href="http://bazinga.com/foo">results',
+             r'<li>Zoing result: .*>Pass.*href="http://zoing.com/foo">results',
+             '<li>Valid candidate'])
+
+    def test_hinted(self):
+        # 'Testclient' promotion respect 'force' and 'force-badtest' hints.
+        # 'force-skiptest' does not fit testclient approach, since it has
+        # no visibility of which tests 'will' run for a new source candidate,
+        # this decision belongs to the test-agents.
+
+        self.overrideConfig({
+            'TESTCLIENT_REQUIRED_TESTS': 'bazinga',
+        })
+        test_results = [{
+            'source_name': 'foo',
+            'source_version': '1.1',
+            'series': self.data.series,
+            'distribution': self.data.distribution,
+            'test_name': 'bazinga',
+            'test_status': 'FAIL',
+            'test_url': 'http://bazinga.com/foo',
+        }]
+        self.publishTestResults(test_results)
+
+        context = [
+            ('foo-bin', {'Source': 'foo', 'Version': '1.1',
+                         'Architecture': 'amd64'}),
+        ]
+        self.create_hint('cjwatson', 'force-badtest foo/1.1')
+        self.do_test(
+            context,
+            [r'\bfoo\b.*>1</a> to .*>1.1<',
+             r'<li>Bazinga result: .*>Regression.*'
+             'href="http://bazinga.com/foo">results',
+             r'<li>Should wait for foo 1.1 Bazinga test, but forced '
+             'by cjwatson',
+             '<li>Valid candidate'])
+
+
+if __name__ == '__main__':
+    unittest.main()
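
Note on running the new tests: tests/test_testclient.py is a standalone unittest module, so something like 'python tests/test_testclient.py' from the branch root should work; the TestTestClientEnd2End cases skip themselves unless a local rabbitmq with the default guest:guest credentials is reachable (see has_local_rabbitmq above).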
