Merge lp:~canonical-ci-engineering/britney/queued-announce-and-collect into lp:~ubuntu-release/britney/britney2-ubuntu

Proposed by Francis Ginther
Status: Work in progress
Proposed branch: lp:~canonical-ci-engineering/britney/queued-announce-and-collect
Merge into: lp:~ubuntu-release/britney/britney2-ubuntu
Diff against target: 926 lines (+794/-39)
7 files modified
britney.conf (+4/-0)
britney.py (+60/-0)
testclient.py (+216/-0)
tests/__init__.py (+25/-1)
tests/test_autopkgtest.py (+6/-9)
tests/test_boottest.py (+11/-29)
tests/test_testclient.py (+472/-0)
To merge this branch: bzr merge lp:~canonical-ci-engineering/britney/queued-announce-and-collect
Reviewer: Ubuntu Release Team
Status: Pending
Review via email: mp+259972@code.launchpad.net

Description of the change

Enable the TestClient work: announce new source candidates over AMQP and collect (and optionally gate on) their test results.
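For reviewers, a rough sketch of what this wires up, pieced together from the diff below (source, series and test names are just the values used in the new tests, and the broker URI is the placeholder value added to britney.conf): with TESTCLIENT_ENABLE set, britney announces each new source candidate on a fanout exchange and later collects results from a second exchange, gating promotion on any tests listed in TESTCLIENT_REQUIRED_TESTS.

    # britney.conf additions (from the diff)
    TESTCLIENT_ENABLE = yes
    TESTCLIENT_AMQP_URIS = amqp://guest:guest@162.213.32.181:5672//
    TESTCLIENT_REQUIRED_TESTS =

    # announcement payload published to 'candidates.exchange.v1'
    {'source_name': 'foo', 'source_version': '1.1',
     'source_binaries': ['foo-bin'],
     'series': 'vivid', 'distribution': 'ubuntu'}

    # result payload consumed from 'results.exchange.v1'
    {'source_name': 'foo', 'source_version': '1.1',
     'series': 'vivid', 'distribution': 'ubuntu',
     'test_name': 'bazinga', 'test_status': 'PASS',
     'test_url': 'http://bazinga.com/foo'}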

Revision history for this message
Francis Ginther (fginther) wrote :

Two inline comments below.

Also, at some point, we need to think a little more about how the hints are used. These were originally set up for dealing with autopkgtests, and the hints 'force-skiptest' and 'force-badtest' are really geared toward skipping those known tests. But we really don't want to skip other types of tests just because a package has some poorly written autopkgtest that no one wants to update. In this current MP, 'force-skiptest' is not used and I think that's the right way to go. I also think we need to remove 'force-badtest' and just use 'force' to act as a universal override (to override any type of autopkgtest, boottest, or snappy-selftest failure). This may be enough, but perhaps we'll need to define a set of hints specific to each required test type (e.g. 'force-bad-boottest', 'force-bad-snappy-selftest').
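For reference, the hint files exercised by the new end-to-end test live under data/<series>-proposed/Hints/<username> and contain one hint per line; a sketch, using the source/version from the tests:

    force-badtest foo/1.1
    force foo/1.1

Both forms are matched against the excuse's source name and version before a failing required test is overridden.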

Revision history for this message
Thomi Richards (thomir-deactivatedaccount) wrote :

I can't say as I understand more than about 10% of this, but I left a few comments.

Revision history for this message
Celso Providelo (cprov) wrote :
436. By Celso Providelo

Merge lp:~cprov/britney/integration-tests

437. By Celso Providelo

Add integration tests for hints on testclient. Also refactoring TestBase.{overrideConfig, create_hint} so they can be reused across tests.

438. By Para Siva

Adding distribution to the testclient implementation

439. By Celso Providelo

Extending TestClient announcements implementation to include the DSC 'Binary' (and potentially other) information on the payload.

Revision history for this message
Martin Pitt (pitti) wrote :

Do you still want to go through with this, or is this stalled now? (I suppose the latter?)

I submitted https://code.launchpad.net/~pitti/britney/britney2-ubuntu-amqp/+merge/263679 for test requests, and I'll work on collecting test results directly from swift. This will avoid separate components like adt-britney which duplicate work from britney (like reverse dependency calculation).

Unmerged revisions

439. By Celso Providelo

Extending TestClient announcements implementation to include the DSC 'Binary' (and potentially other) information on the payload.

438. By Para Siva

Adding distribution to the testclient implementation

437. By Celso Providelo

Add integration tests for hints on testclient. Also refactoring TestBase.{overrideConfig, create_hint} so they can be reused across tests.

436. By Celso Providelo

Merge lp:~cprov/britney/integration-tests

435. By Celso Providelo

Merge lp:~cprov/britney/testclient-api/

Preview Diff

1=== modified file 'britney.conf'
2--- britney.conf 2015-03-05 14:57:03 +0000
3+++ britney.conf 2015-05-29 14:31:13 +0000
4@@ -70,3 +70,7 @@
5 BOOTTEST_DEBUG = yes
6 BOOTTEST_ARCHES = armhf amd64
7 BOOTTEST_FETCH = yes
8+
9+TESTCLIENT_ENABLE = yes
10+TESTCLIENT_AMQP_URIS = amqp://guest:guest@162.213.32.181:5672//
11+TESTCLIENT_REQUIRED_TESTS =
12
13=== modified file 'britney.py'
14--- britney.py 2015-02-20 19:02:00 +0000
15+++ britney.py 2015-05-29 14:31:13 +0000
16@@ -227,6 +227,7 @@
17 PROVIDES, RDEPENDS, RCONFLICTS, MULTIARCH, ESSENTIAL)
18 from autopkgtest import AutoPackageTest, ADT_PASS, ADT_EXCUSES_LABELS
19 from boottest import BootTest
20+from testclient import TestClient
21
22
23 __author__ = 'Fabio Tranchitella and the Debian Release Team'
24@@ -1984,6 +1985,65 @@
25 upgrade_me.remove(excuse.name)
26 unconsidered.append(excuse.name)
27
28+ if (getattr(self.options, "testclient_enable", "no") == "yes" and
29+ self.options.series):
30+
31+ # Filter only new source candidates excuses.
32+ testing_excuses = []
33+ for excuse in self.excuses:
34+ # Skip removals, binary-only candidates, proposed-updates
35+ # and unknown versions.
36+ if (excuse.name.startswith("-") or
37+ "/" in excuse.name or
38+ "_" in excuse.name or
39+ excuse.ver[1] == "-"):
40+ continue
41+ testing_excuses.append(excuse)
42+
43+ amqp_uris = getattr(
44+ self.options, "testclient_amqp_uris", "").split()
45+ testclient = TestClient(
46+ self.options.distribution, self.options.series, amqp_uris)
47+
48+ # Announce new candidates and collect new test results.
49+ if not self.options.dry_run:
50+ testclient.announce(testing_excuses, self.options.unstable)
51+ testclient.collect()
52+ testclient.cleanup(testing_excuses)
53+
54+ # Update excuses considering hints and required_tests (for gating).
55+ required_tests = getattr(
56+ self.options, "testclient_required_tests", "").split()
57+ for excuse in testing_excuses:
58+ hints = self.hints.search('force', package=excuse.name)
59+ hints.extend(
60+ self.hints.search('force-badtest', package=excuse.name))
61+ forces = [x for x in hints
62+ if same_source(excuse.ver[1], x.version)]
63+ for test in testclient.getTests(excuse.name, excuse.ver[1]):
64+ label = TestClient.EXCUSE_LABELS.get(
65+ test.get('status'), 'UNKNOWN STATUS')
66+ excuse.addhtml(
67+ "%s result: %s (<a href=\"%s\">results</a>)" % (
68+ test.get('name').capitalize(), label,
69+ test.get('url')))
70+ if forces:
71+ excuse.addhtml(
72+ "Should wait for %s %s %s test, but forced by "
73+ "%s" % (excuse.name, excuse.ver[1],
74+ test.get('name').capitalize(),
75+ forces[0].user))
76+ continue
77+ if test.get('name') not in required_tests:
78+ continue
79+ if test.get('status') not in TestClient.VALID_STATUSES:
80+ excuse.addreason(test.get('name'))
81+ if excuse.is_valid:
82+ excuse.is_valid = False
83+ excuse.addhtml("Not considered")
84+ upgrade_me.remove(excuse.name)
85+ unconsidered.append(excuse.name)
86+
87 # invalidate impossible excuses
88 for e in self.excuses:
89 # parts[0] == package name
90
91=== added file 'testclient.py'
92--- testclient.py 1970-01-01 00:00:00 +0000
93+++ testclient.py 2015-05-29 14:31:13 +0000
94@@ -0,0 +1,216 @@
95+# -*- coding: utf-8 -*-
96+
97+# Copyright (C) 2015 Canonical Ltd.
98+
99+# This program is free software; you can redistribute it and/or modify
100+# it under the terms of the GNU General Public License as published by
101+# the Free Software Foundation; either version 2 of the License, or
102+# (at your option) any later version.
103+
104+# This program is distributed in the hope that it will be useful,
105+# but WITHOUT ANY WARRANTY; without even the implied warranty of
106+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
107+# GNU General Public License for more details.
108+from __future__ import print_function
109+
110+import apt_pkg
111+from contextlib import (
112+ contextmanager,
113+ nested,
114+)
115+import json
116+import os
117+
118+
119+import kombu
120+from kombu.pools import producers
121+
122+
123+@contextmanager
124+def json_cached_info(path):
125+ """Context manager for caching a JSON object on disk."""
126+ info = {}
127+ if os.path.exists(path):
128+ with open(path) as fp:
129+ try:
130+ info = json.load(fp)
131+ except ValueError:
132+ # cache is empty or corrupted (!).
133+ info = {}
134+ else:
135+ dirname = os.path.dirname(path)
136+ if not os.path.exists(dirname):
137+ os.makedirs(dirname)
138+ yield info
139+ with open(path, 'w') as fp:
140+ json.dump(info, fp, indent=2)
141+
142+
143+def make_cache_key(name, version):
144+ """Return a json-hashable key for given source & version."""
145+ return '{}_{}'.format(name, version)
146+
147+
148+def get_repo_sources(basedir):
149+ """Return 'Sources' repository index information for the given basedir.
150+
151+ Extract and cache repository 'Sources' index into a dictionary
152+ key-ed by source-key (see `make_cache_key`) pointing to a dictionary
153+ containing relevant DSC fields.
154+
155+ At the moment, we only cache 'Binary'.
156+ """
157+ fields = (
158+ 'Binary',
159+ )
160+ filename = os.path.join(basedir, "Sources")
161+ tag_file = apt_pkg.TagFile(open(filename))
162+
163+ sources = {}
164+ while tag_file.step():
165+ source_name = tag_file.section.get('Package')
166+ source_version = tag_file.section.get('Version')
167+ source_key = make_cache_key(source_name, source_version)
168+ sources[source_key] = {
169+ k: tag_file.section.get(k, '') for k in fields
170+ }
171+
172+ return sources
173+
174+
175+class TestClient(object):
176+ """Generic test client implementation.
177+
178+ announce: announcing new source candidates to a pre-defined rabbitmq
179+ exchange (testing subsystems can hook/subscribe).
180+
181+ collect: collect test results for the context series from a pre-defined
182+ rabbitmq exchange (other promotion agents can do the same).
183+
184+ cleanup: sanitize internal announcement/testing registry after runs.
185+ """
186+
187+ EXCHANGE_CANDIDATES = 'candidates.exchange.v1'
188+ EXCHANGE_RESULTS = 'results.exchange.v1'
189+
190+ VALID_STATUSES = ('PASS', 'SKIP')
191+
192+ EXCUSE_LABELS = {
193+ "PASS": '<span style="background:#87d96c">Pass</span>',
194+ "SKIP": '<span style="background:#ffff00">Test skipped</span>',
195+ "FAIL": '<span style="background:#ff6666">Regression</span>',
196+ "RUNNING": '<span style="background:#99ddff">Test in progress</span>',
197+ }
198+
199+ def __init__(self, distribution, series, amqp_uris):
200+ self.distribution = distribution
201+ self.series = series
202+ self.amqp_uris = amqp_uris
203+
204+ @property
205+ def cache_path(self):
206+ """Series-specific test announcement/result cache."""
207+ return 'testclient/{}/{}.json'.format(self.distribution, self.series)
208+
209+ @property
210+ def results_queue(self):
211+ """Series-specific queue for collecting tests results."""
212+ return 'pm.results.{}.{}'.format(self.distribution,
213+ self.series)
214+
215+ def announce(self, excuses, unstable_basedir,
216+ get_repo_sources=get_repo_sources):
217+ """Announce new source candidates.
218+
219+ Post a message to the EXCHANGE_CANDIDATES exchange for every new
220+ given excuse (cache announcements so excuses do not get re-announced).
221+ """
222+ with nested(json_cached_info(self.cache_path),
223+ kombu.Connection(self.amqp_uris)) as (cache, connection):
224+ # XXX cprov 20150521: nested() is deprecated, and multi-statement
225+ # 'with' does not support nesting (i.e. the previous context
226+ # manager is not available to the next, in this case
227+ # 'connection').
228+ with producers[connection].acquire(block=True) as producer:
229+ publisher = connection.ensure(
230+ producer, producer.publish, max_retries=3)
231+ exchange = kombu.Exchange(
232+ self.EXCHANGE_CANDIDATES, type="fanout")
233+ repo = get_repo_sources(unstable_basedir)
234+ for excuse in excuses:
235+ source_key = make_cache_key(excuse.name, excuse.ver[1])
236+ if source_key in cache.keys():
237+ continue
238+ repo_data = repo[source_key]
239+ payload = {
240+ 'source_name': excuse.name,
241+ 'source_version': excuse.ver[1],
242+ 'source_binaries': [
243+ b.strip() for b in repo_data['Binary'].split(',')
244+ ],
245+ 'series': self.series,
246+ 'distribution': self.distribution,
247+ }
248+ publisher(payload, exchange=exchange, declare=[exchange])
249+ cache[make_cache_key(excuse.name, excuse.ver[1])] = []
250+
251+ def collect(self):
252+ """Collect available test results.
253+
254+ Consume all messages from the EXCHANGE_RESULTS (routed to a series-
255+ specific queue). Ignore test results for other series and update
256+ test results registry.
257+ """
258+ with nested(json_cached_info(self.cache_path),
259+ kombu.Connection(self.amqp_uris)) as (cache, connection):
260+ exchange = kombu.Exchange(
261+ self.EXCHANGE_RESULTS, type="fanout")
262+ queue = kombu.Queue(self.results_queue, exchange)
263+ # XXX cprov 20150521: same as above about nested context managers.
264+ with connection.SimpleQueue(queue) as q:
265+ for i in range(len(q)):
266+ msg = q.get()
267+ payload = msg.payload
268+ if payload.get('distribution') != self.distribution:
269+ continue
270+ if payload.get('series') != self.series:
271+ continue
272+ tests = cache.setdefault(
273+ make_cache_key(
274+ payload.get('source_name'),
275+ payload.get('source_version')
276+ ), [])
277+ tests.append({
278+ 'name': payload.get('test_name'),
279+ 'status': payload.get('test_status'),
280+ 'url': payload.get('test_url'),
281+ })
282+ msg.ack()
283+
284+ def cleanup(self, excuses):
285+ """Remove test result entries without corresponding excuse.
286+
287+ If there is not excuse the test results are not relevant anymore.
288+ """
289+ with json_cached_info(self.cache_path) as cache:
290+ current_keys = [
291+ make_cache_key(e.name, e.ver[1]) for e in excuses]
292+ cached_keys = list(cache.keys())
293+ for k in cached_keys:
294+ if k not in current_keys:
295+ del cache[k]
296+
297+ def getTests(self, name, version):
298+ """Yields test results for a given source-version pair.
299+
300+ Test results are a list of dictionaries containing test-name, status
301+ and url.
302+ """
303+ with json_cached_info(self.cache_path) as cache:
304+ tests = cache.get(make_cache_key(name, version), [])
305+ for test in tests:
306+ yield {
307+ 'name': test.get('name'),
308+ 'status': test.get('status'),
309+ 'url': test.get('url'),
310+ }
311
312=== modified file 'tests/__init__.py'
313--- tests/__init__.py 2015-02-05 14:43:23 +0000
314+++ tests/__init__.py 2015-05-29 14:31:13 +0000
315@@ -32,6 +32,7 @@
316 self.path = tempfile.mkdtemp(prefix='testarchive.')
317 self.apt_source = 'deb file://%s /' % self.path
318 self.series = 'series'
319+ self.distribution = 'ubuntu'
320 self.dirs = {False: os.path.join(self.path, 'data', self.series),
321 True: os.path.join(
322 self.path, 'data', '%s-proposed' % self.series)}
323@@ -94,7 +95,8 @@
324 src = fields.get('Source', name)
325 if src not in self.added_sources[unstable]:
326 self.add_src(src, unstable, {'Version': fields['Version'],
327- 'Section': fields['Section']})
328+ 'Section': fields['Section'],
329+ 'Binary': name})
330
331 def add_src(self, name, unstable, fields={}):
332 '''Add a source package to the index file.
333@@ -162,3 +164,25 @@
334 excuses = f.read()
335
336 return (excuses, out)
337+
338+ def overrideConfig(self, overrides):
339+ """Overrides briney configuration based on the given key-value map."""
340+ with open(self.britney_conf, 'r') as fp:
341+ original_config = fp.read()
342+ new_config = []
343+ for line in original_config.splitlines():
344+ for k, v in overrides.iteritems():
345+ if line.startswith(k):
346+ line = '{} = {}'.format(k, v)
347+ new_config.append(line)
348+ with open(self.britney_conf, 'w') as fp:
349+ fp.write('\n'.join(new_config))
350+ self.addCleanup(self.restore_config, original_config)
351+
352+ def create_hint(self, username, content):
353+ """Populates a hint file for the given 'username' with 'content'."""
354+ hints_path = os.path.join(
355+ self.data.path,
356+ 'data/{}-proposed/Hints/{}'.format(self.data.series, username))
357+ with open(hints_path, 'w') as fd:
358+ fd.write(content)
359
360=== modified file 'tests/test_autopkgtest.py'
361--- tests/test_autopkgtest.py 2015-02-05 14:43:23 +0000
362+++ tests/test_autopkgtest.py 2015-05-29 14:31:13 +0000
363@@ -31,15 +31,12 @@
364 def setUp(self):
365 super(TestAutoPkgTest, self).setUp()
366
367- # Mofify configuration according to the test context.
368- with open(self.britney_conf, 'r') as fp:
369- original_config = fp.read()
370- # Disable boottests.
371- new_config = original_config.replace(
372- 'BOOTTEST_ENABLE = yes', 'BOOTTEST_ENABLE = no')
373- with open(self.britney_conf, 'w') as fp:
374- fp.write(new_config)
375- self.addCleanup(self.restore_config, original_config)
376+ self.overrideConfig({
377+ 'ADT_ENABLE': 'yes',
378+ 'BOOTTEST_ENABLE': 'no',
379+ 'BOOTTEST_FETCH': 'no',
380+ 'TESTCLIENT_ENABLE': 'no',
381+ })
382
383 # fake adt-britney script
384 self.adt_britney = os.path.join(
385
386=== modified file 'tests/test_boottest.py'
387--- tests/test_boottest.py 2015-02-20 16:28:47 +0000
388+++ tests/test_boottest.py 2015-05-29 14:31:13 +0000
389@@ -123,21 +123,14 @@
390 def setUp(self):
391 super(TestBoottestEnd2End, self).setUp()
392
393- # Modify shared configuration file.
394- with open(self.britney_conf, 'r') as fp:
395- original_config = fp.read()
396- # Disable autopkgtests.
397- new_config = original_config.replace(
398- 'ADT_ENABLE = yes', 'ADT_ENABLE = no')
399- # Enable boottest.
400- new_config = new_config.replace(
401- 'BOOTTEST_ENABLE = no', 'BOOTTEST_ENABLE = yes')
402- # Disable TouchManifest auto-fetching.
403- new_config = new_config.replace(
404- 'BOOTTEST_FETCH = yes', 'BOOTTEST_FETCH = no')
405- with open(self.britney_conf, 'w') as fp:
406- fp.write(new_config)
407- self.addCleanup(self.restore_config, original_config)
408+ # Disable autopkgtests + testclient and boottest_fetch
409+ # for this testing context.
410+ self.overrideConfig({
411+ 'ADT_ENABLE': 'no',
412+ 'BOOTTEST_ENABLE': 'yes',
413+ 'BOOTTEST_FETCH': 'no',
414+ 'TESTCLIENT_ENABLE': 'no',
415+ })
416
417 self.data.add('libc6', False, {'Architecture': 'armhf'}),
418
419@@ -323,14 +316,6 @@
420 r'<li>Boottest result: UNKNOWN STATUS \(Jenkins: .*\)',
421 '<li>Not considered'])
422
423- def create_hint(self, username, content):
424- """Populates a hint file for the given 'username' with 'content'."""
425- hints_path = os.path.join(
426- self.data.path,
427- 'data/{}-proposed/Hints/{}'.format(self.data.series, username))
428- with open(hints_path, 'w') as fd:
429- fd.write(content)
430-
431 def test_skipped_by_hints(self):
432 # `Britney` allows boottests to be skipped by hinting the
433 # corresponding source with 'force-skiptest'. The boottest
434@@ -415,12 +400,9 @@
435 # Boottest can run simultaneously with autopkgtest (adt).
436
437 # Enable ADT in britney configuration.
438- with open(self.britney_conf, 'r') as fp:
439- original_config = fp.read()
440- new_config = original_config.replace(
441- 'ADT_ENABLE = no', 'ADT_ENABLE = yes')
442- with open(self.britney_conf, 'w') as fp:
443- fp.write(new_config)
444+ self.overrideConfig({
445+ 'ADT_ENABLE': 'yes',
446+ })
447
448 # Create a fake 'adt-britney' that reports a RUNNING job for
449 # the testing source ('purple_1.1').
450
451=== added file 'tests/test_testclient.py'
452--- tests/test_testclient.py 1970-01-01 00:00:00 +0000
453+++ tests/test_testclient.py 2015-05-29 14:31:13 +0000
454@@ -0,0 +1,472 @@
455+#!/usr/bin/python
456+# (C) 2015 Canonical Ltd.
457+#
458+# This program is free software; you can redistribute it and/or modify
459+# it under the terms of the GNU General Public License as published by
460+# the Free Software Foundation; either version 2 of the License, or
461+# (at your option) any later version.
462+
463+import os
464+import shutil
465+import sys
466+import tempfile
467+import unittest
468+
469+import kombu
470+from kombu.pools import producers
471+
472+
473+PROJECT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
474+sys.path.insert(0, PROJECT_DIR)
475+
476+from excuse import Excuse
477+from testclient import (
478+ json_cached_info,
479+ make_cache_key,
480+ TestClient,
481+)
482+from tests import TestBase
483+
484+
485+class TestJsonCachedInfo(unittest.TestCase):
486+
487+ def setUp(self):
488+ super(TestJsonCachedInfo, self).setUp()
489+ _, self.test_cache = tempfile.mkstemp()
490+ self.addCleanup(os.unlink, self.test_cache)
491+
492+ def test_simple(self):
493+ # `json_cached_info` context manager correctly persists a
494+ # python dictionary on disk.
495+ with json_cached_info(self.test_cache) as cache:
496+ self.assertEqual({}, cache)
497+ cache['foo'] = 'bar'
498+
499+ with open(self.test_cache) as fp:
500+ self.assertEqual(
501+ ['{\n',
502+ ' "foo": "bar"\n',
503+ '}'], fp.readlines())
504+
505+ with json_cached_info(self.test_cache) as cache:
506+ self.assertEqual(cache['foo'], 'bar')
507+
508+
509+def make_excuse(name, version):
510+ """Return a `Excuse` for the give source name and version."""
511+ e = Excuse(name)
512+ e.set_vers('-', version)
513+ return e
514+
515+
516+class TestTestClient(unittest.TestCase):
517+
518+ def setUp(self):
519+ super(TestTestClient, self).setUp()
520+ self.path = tempfile.mkdtemp(prefix='testclient')
521+ os.makedirs(os.path.join(self.path, 'testclient/'))
522+ self.addCleanup(shutil.rmtree, self.path)
523+
524+ os.chdir(self.path)
525+ cwd = os.getcwd()
526+ self.addCleanup(os.chdir, cwd)
527+
528+ self.amqp_uris = ['memory://']
529+
530+ def test_announce(self):
531+ # 'announce' post messages to the EXCHANGE_CANDIDATES exchange and
532+ # updates its internal cache.
533+ testclient = TestClient('ubuntu', 'vivid', self.amqp_uris)
534+ test_excuses = [
535+ make_excuse('foo', '1.0'),
536+ make_excuse('bar', '2.0'),
537+ ]
538+ test_repo_data = {
539+ 'foo_1.0': {'Binary': 'a, b'},
540+ 'bar_2.0': {'Binary': 'c'},
541+ }
542+
543+ with kombu.Connection(self.amqp_uris) as connection:
544+ exchange = kombu.Exchange(
545+ testclient.EXCHANGE_CANDIDATES, type="fanout")
546+ queue = kombu.Queue('testing', exchange)
547+ with connection.SimpleQueue(queue) as q:
548+ testclient.announce(
549+ test_excuses, None, lambda x: test_repo_data)
550+ self.assertEqual(
551+ [{'series': 'vivid',
552+ 'distribution': 'ubuntu',
553+ 'source_name': 'foo',
554+ 'source_version': '1.0',
555+ 'source_binaries': ['a', 'b']},
556+ {'series': 'vivid',
557+ 'distribution': 'ubuntu',
558+ 'source_name': 'bar',
559+ 'source_version': '2.0',
560+ 'source_binaries': ['c']}],
561+ [q.get().payload for i in range(len(q))])
562+
563+ with json_cached_info(testclient.cache_path) as cache:
564+ self.assertEqual(
565+ {'bar_2.0': [], 'foo_1.0': []},
566+ cache)
567+
568+ def test_collect(self):
569+ # 'collect' collects test results and aggregates them in its
570+ # internal cache.
571+ testclient = TestClient('ubuntu', 'vivid', self.amqp_uris)
572+
573+ result_payloads = [
574+ {'source_name': 'foo',
575+ 'source_version': '1.0',
576+ 'series': testclient.series,
577+ 'distribution': testclient.distribution,
578+ 'test_name': 'snappy',
579+ 'test_status': 'RUNNING',
580+ 'test_url': 'http://snappy.com/foo'},
581+ {'source_name': 'bar',
582+ 'source_version': '1.0',
583+ 'series': testclient.series,
584+ 'distribution': testclient.distribution,
585+ 'test_name': 'ubuntu',
586+ 'test_status': 'RUNNING',
587+ 'test_url': 'http://ubuntu.com/foo'},
588+ {'source_name': 'foo',
589+ 'source_version': '1.0',
590+ 'series': testclient.series,
591+ 'distribution': testclient.distribution,
592+ 'test_name': 'bbb',
593+ 'test_status': 'RUNNING',
594+ 'test_url': 'http://bbb.com/foo'},
595+ # This result will be ignored due to the series mismatch.
596+ {'source_name': 'zoing',
597+ 'source_version': '1.0',
598+ 'series': 'some-other-series',
599+ 'test_name': 'ubuntu',
600+ 'test_status': 'RUNNING',
601+ 'test_url': 'http://ubuntu.com/foo'},
602+ ]
603+
604+ with kombu.Connection(self.amqp_uris) as connection:
605+ with producers[connection].acquire(block=True) as producer:
606+ # Just for binding destination queue to the exchange.
607+ testclient.collect()
608+ exchange = kombu.Exchange(
609+ testclient.EXCHANGE_RESULTS, type="fanout")
610+ publisher = connection.ensure(
611+ producer, producer.publish, max_retries=3)
612+ for payload in result_payloads:
613+ publisher(payload, exchange=exchange, declare=[exchange])
614+ testclient.collect()
615+
616+ with json_cached_info(testclient.cache_path) as cache:
617+ self.assertEqual(
618+ {'foo_1.0': [{'name': 'snappy',
619+ 'status': 'RUNNING',
620+ 'url': 'http://snappy.com/foo'},
621+ {'name': 'bbb',
622+ 'status': 'RUNNING',
623+ 'url': 'http://bbb.com/foo'}],
624+ 'bar_1.0': [{'name': 'ubuntu',
625+ 'status': 'RUNNING',
626+ 'url': 'http://ubuntu.com/foo'}]},
627+ cache)
628+
629+ def test_cleanup(self):
630+ # `cleanup` removes cache entries that are not present in the
631+ # given excuses list (i.e. not relevant for promotion anymore).
632+ testclient = TestClient('ubuntu', 'vivid', self.amqp_uris)
633+ test_excuses = [
634+ make_excuse('foo', '1.0'),
635+ make_excuse('bar', '2.0'),
636+ ]
637+
638+ with json_cached_info(testclient.cache_path) as cache:
639+ cache[make_cache_key('foo', '0.9')] = []
640+ cache[make_cache_key('foo', '1.0')] = []
641+ cache[make_cache_key('bar', '2.0')] = []
642+
643+ testclient.cleanup(test_excuses)
644+
645+ with json_cached_info(testclient.cache_path) as cache:
646+ self.assertEqual(
647+ {'bar_2.0': [], 'foo_1.0': []},
648+ cache)
649+
650+ def test_getTests(self):
651+ # `getTests` yields cached test results for a given source name
652+ # and version.
653+ testclient = TestClient('ubuntu', 'vivid', self.amqp_uris)
654+
655+ with json_cached_info(testclient.cache_path) as cache:
656+ cache[make_cache_key('foo', '1.0')] = [
657+ {'name': 'snappy',
658+ 'status': 'RUNNING',
659+ 'url': 'http://snappy.com/foo'},
660+ {'name': 'bbb',
661+ 'status': 'RUNNING',
662+ 'url': 'http://bbb.com/foo'}
663+ ]
664+
665+ self.assertEqual(
666+ [{'name': 'snappy',
667+ 'status': 'RUNNING',
668+ 'url': 'http://snappy.com/foo'},
669+ {'name': 'bbb',
670+ 'status': 'RUNNING',
671+ 'url': 'http://bbb.com/foo'}],
672+ list(testclient.getTests('foo', '1.0')))
673+
674+ self.assertEqual(
675+ [], list(testclient.getTests('bar', '1.0')))
676+
677+
678+def has_local_rabbitmq():
679+ """Whether a local rabbitmq server is available with default creds."""
680+ with kombu.Connection('amqp://guest:guest@localhost:5672//',
681+ connect_timeout=.1) as c:
682+ try:
683+ c.connect()
684+ except:
685+ return False
686+ return True
687+
688+
689+@unittest.skipUnless(has_local_rabbitmq(), 'No local rabbitmq')
690+class TestTestClientEnd2End(TestBase):
691+ """End2End tests (calling `britney`) for the TestClient usage."""
692+
693+ def setUp(self):
694+ super(TestTestClientEnd2End, self).setUp()
695+
696+ # XXX cprov 20150525: unfortunately, this test requires a proper
697+ # amqp transport/server layer (rabbitmq) because kombu 'memory://'
698+ # cannot be shared across processes (britney & tests).
699+ self.amqp_uris = ['amqp://guest:guest@localhost:5672//']
700+
701+ self.path = tempfile.mkdtemp(prefix='testclient')
702+ os.makedirs(os.path.join(self.path, 'testclient/'))
703+ self.addCleanup(shutil.rmtree, self.path)
704+
705+ os.chdir(self.path)
706+ cwd = os.getcwd()
707+ self.addCleanup(os.chdir, cwd)
708+
709+ # Disable autopkgtests + boottest tests and use local rabbit
710+ # for this testing context.
711+ self.overrideConfig({
712+ 'ADT_ENABLE': 'no',
713+ 'BOOTTEST_ENABLE': 'no',
714+ 'TESTCLIENT_ENABLE': 'yes',
715+ 'TESTCLIENT_AMQP_URIS': ' '.join(self.amqp_uris),
716+ })
717+
718+ # We publish a version of 'foo' source to make it 'known'.
719+ self.data.add(
720+ 'foo-bin', False, {'Source': 'foo', 'Architecture': 'amd64'})
721+
722+ def publishTestResults(self, results):
723+ """Publish the given list of test results."""
724+ with kombu.Connection(self.amqp_uris) as connection:
725+ results_exchange = kombu.Exchange(
726+ TestClient.EXCHANGE_RESULTS, type="fanout")
727+ with producers[connection].acquire(block=True) as producer:
728+ publisher = connection.ensure(
729+ producer, producer.publish, max_retries=3)
730+ for payload in results:
731+ publisher(payload, exchange=results_exchange)
732+
733+ def getAnnouncements(self):
734+ """Yields announcements payloads."""
735+ with kombu.Connection(self.amqp_uris) as connection:
736+ candidates_exchange = kombu.Exchange(
737+ TestClient.EXCHANGE_CANDIDATES, type="fanout")
738+ queue = kombu.Queue('testing', candidates_exchange)
739+ with connection.SimpleQueue(queue) as q:
740+ for i in range(len(q)):
741+ msg = q.get()
742+ msg.ack()
743+ yield msg.payload
744+
745+ def do_test(self, context, expect=None, no_expect=None):
746+ """Process the given package context and assert britney results."""
747+ for (pkg, fields) in context:
748+ self.data.add(pkg, True, fields)
749+
750+ # Create a queue for collecting announcements from the
751+ # candidates exchange.
752+ with kombu.Connection(self.amqp_uris) as connection:
753+ candidates_exchange = kombu.Exchange(
754+ TestClient.EXCHANGE_CANDIDATES, type="fanout")
755+ queue = kombu.Queue('testing', candidates_exchange)
756+ with connection.SimpleQueue(queue) as q:
757+ q.queue.purge()
758+
759+ (excuses, out) = self.run_britney()
760+
761+ #print('-------\nexcuses: %s\n-----' % excuses)
762+ if expect:
763+ for re in expect:
764+ self.assertRegexpMatches(excuses, re)
765+ if no_expect:
766+ for re in no_expect:
767+ self.assertNotRegexpMatches(excuses, re)
768+
769+ def test_non_required_test(self):
770+ # Non-required test results are collected as part of the excuse
771+ # report but do not block source promotion (i.e. the excuse is
772+ # a 'Valid candidate' even if the test is 'in progress').
773+
774+ # Publish 'in-progress' results for 'bazinga for "foo_1.1"'.
775+ test_results = [{
776+ 'source_name': 'foo',
777+ 'source_version': '1.1',
778+ 'series': self.data.series,
779+ 'distribution': self.data.distribution,
780+ 'test_name': 'bazinga',
781+ 'test_status': 'RUNNING',
782+ 'test_url': 'http://bazinga.com/foo',
783+ }]
784+ self.publishTestResults(test_results)
785+
786+ # Run britney for 'foo_1.1'; a valid candidate is recorded.
787+ context = [
788+ ('foo-bin', {'Source': 'foo', 'Version': '1.1',
789+ 'Architecture': 'amd64'}),
790+ ]
791+ self.do_test(
792+ context,
793+ [r'\bfoo\b.*>1</a> to .*>1.1<',
794+ r'<li>Bazinga result: .*>Test in progress.*'
795+ r'href="http://bazinga.com/foo">results',
796+ '<li>Valid candidate'])
797+
798+ # 'foo_1.1' source candidate was announced.
799+ self.assertEqual(
800+ [{'source_name': 'foo',
801+ 'source_version': '1.1',
802+ 'source_binaries': ['foo-bin'],
803+ 'series': self.data.series,
804+ 'distribution': self.data.distribution,
805+ }], list(self.getAnnouncements()))
806+
807+ def test_required_test(self):
808+ # A required-test result is collected and blocks source package
809+ # promotion until it has passed.
810+
811+ # Make 'bazinga' a required test.
812+ self.overrideConfig({
813+ 'TESTCLIENT_REQUIRED_TESTS': 'bazinga',
814+ })
815+
816+ # Publish 'in-progress' results for 'bazinga for "foo_1.1"'.
817+ test_results = [{
818+ 'source_name': 'foo',
819+ 'source_version': '1.1',
820+ 'series': self.data.series,
821+ 'distribution': self.data.distribution,
822+ 'test_name': 'bazinga',
823+ 'test_status': 'RUNNING',
824+ 'test_url': 'http://bazinga.com/foo',
825+ }]
826+ self.publishTestResults(test_results)
827+
828+ # Run britney for 'foo_1.1' and an unconsidered excuse is recorded.
829+ context = [
830+ ('foo-bin', {'Source': 'foo', 'Version': '1.1',
831+ 'Architecture': 'amd64'}),
832+ ]
833+ self.do_test(
834+ context,
835+ [r'\bfoo\b.*>1</a> to .*>1.1<',
836+ r'<li>Bazinga result: .*>Test in progress.*'
837+ r'href="http://bazinga.com/foo">results',
838+ '<li>Not considered'])
839+
840+ # 'foo_1.1' source candidate was announced.
841+ self.assertEqual(
842+ [{'source_name': 'foo',
843+ 'source_version': '1.1',
844+ 'source_binaries': ['foo-bin'],
845+ 'series': self.data.series,
846+ 'distribution': self.data.distribution,
847+ }], list(self.getAnnouncements()))
848+
849+ def test_promoted(self):
850+ # When all required tests have passed (or were skipped), the source
851+ # candidate can be promoted.
852+
853+ # Make 'bazinga' and 'zoing' required tests.
854+ self.overrideConfig({
855+ 'TESTCLIENT_REQUIRED_TESTS': 'bazinga zoing',
856+ })
857+
858+ # Publish 'in-progress' results for 'bazinga for "foo_1.1"'.
859+ test_results = [{
860+ 'source_name': 'foo',
861+ 'source_version': '1.1',
862+ 'series': self.data.series,
863+ 'distribution': self.data.distribution,
864+ 'test_name': 'bazinga',
865+ 'test_status': 'SKIP',
866+ 'test_url': 'http://bazinga.com/foo',
867+ }, {
868+ 'source_name': 'foo',
869+ 'source_version': '1.1',
870+ 'series': self.data.series,
871+ 'distribution': self.data.distribution,
872+ 'test_name': 'zoing',
873+ 'test_status': 'PASS',
874+ 'test_url': 'http://zoing.com/foo',
875+ }]
876+ self.publishTestResults(test_results)
877+
878+ context = [
879+ ('foo-bin', {'Source': 'foo', 'Version': '1.1',
880+ 'Architecture': 'amd64'}),
881+ ]
882+ self.do_test(
883+ context,
884+ [r'\bfoo\b.*>1</a> to .*>1.1<',
885+ r'<li>Bazinga result: .*>Test skipped.*'
886+ 'href="http://bazinga.com/foo">results',
887+ r'<li>Zoing result: .*>Pass.*href="http://zoing.com/foo">results',
888+ '<li>Valid candidate'])
889+
890+ def test_hinted(self):
891+ # 'Testclient' promotion respects 'force' and 'force-badtest' hints.
892+ # 'force-skiptest' does not fit the testclient approach, since it has
893+ # no visibility of which tests 'will' run for a new source candidate;
894+ # that decision belongs to the test-agents.
895+
896+ self.overrideConfig({
897+ 'TESTCLIENT_REQUIRED_TESTS': 'bazinga',
898+ })
899+ test_results = [{
900+ 'source_name': 'foo',
901+ 'source_version': '1.1',
902+ 'series': self.data.series,
903+ 'distribution': self.data.distribution,
904+ 'test_name': 'bazinga',
905+ 'test_status': 'FAIL',
906+ 'test_url': 'http://bazinga.com/foo',
907+ }]
908+ self.publishTestResults(test_results)
909+
910+ context = [
911+ ('foo-bin', {'Source': 'foo', 'Version': '1.1',
912+ 'Architecture': 'amd64'}),
913+ ]
914+ self.create_hint('cjwatson', 'force-badtest foo/1.1')
915+ self.do_test(
916+ context,
917+ [r'\bfoo\b.*>1</a> to .*>1.1<',
918+ r'<li>Bazinga result: .*>Regression.*'
919+ 'href="http://bazinga.com/foo">results',
920+ r'<li>Should wait for foo 1.1 Bazinga test, but forced '
921+ 'by cjwatson',
922+ '<li>Valid candidate'])
923+
924+
925+if __name__ == '__main__':
926+ unittest.main()
