Merge lp:~vila/ols-store-tests/ols-run-tests into lp:~ubuntuone-pqm-team/ols-store-tests/store-acceptance-tests

Proposed by Vincent Ladeuil
Status: Merged
Approved by: Vincent Ladeuil
Approved revision: 43
Merged at revision: 40
Proposed branch: lp:~vila/ols-store-tests/ols-run-tests
Merge into: lp:~ubuntuone-pqm-team/ols-store-tests/store-acceptance-tests
Diff against target: 305 lines (+156/-11)
8 files modified
Makefile (+3/-3)
README (+1/-1)
dependencies.txt (+1/-0)
ols-vms.conf (+2/-1)
run-landing-tests.sh (+1/-2)
tests/api/snap/helpers.py (+68/-0)
tests/api/snap/test_helpers.py (+70/-0)
tests/api/snap/test_snap_release.py (+10/-4)
To merge this branch: bzr merge lp:~vila/ols-store-tests/ols-run-tests
Reviewer Review Type Date Requested Status
Fabián Ezequiel Gallina (community) Approve
Review via email: mp+318929@code.launchpad.net

Commit message

Display log when test_release_snap_successful fails.

Use ols-run-tests to display test names and timings when running on jenkaas.

Properly isolate pip3 by using the virtualenv one rather than the system one.

Description of the change

Display log when test_release_snap_successful fails.

Use ols-run-tests to display test names and timings when running on jenkaas.

Properly isolate pip3 by using the virtualenv one rather than the system one.

Revision history for this message
Fabián Ezequiel Gallina (fgallina) wrote :

I like where this is going; in fact, I like it so much that I'm proposing we have the log on failure for all tests ;)

Revision history for this message
Vincent Ladeuil (vila) wrote :

> What if we made this part of the setUp and tearDown process and put it in the base TestCase.

The thing is, this trick requires wrapping the test method itself.

There is no good way (at least I haven't found one so far) to achieve this "transparently" for all test methods.

If you look at the unittest implementation, you'll see that __call__() calls run(), which provides no way to wrap the test method call itself (search for testMethod there).

Alternatively, this may be implemented on the TestResult object, but that looks brittle (and would require implementing it for all TestResult objects).

This code originated in ols-vms (https://bazaar.launchpad.net/~ubuntuone-hackers/ols-vms/trunk/view/head:/olsvms/tests/__init__.py#L55 but is no longer used there) and worked well; the constraint of explicitly decorating each test may seem a bit tedious, but it was worth it (not all tests have a use for it).

While borrowing it I attempted to do just what you suggest, but failed.

Come to think of it, that should allow the implementation to be simplified (there is no need for a class since there are no level and fmt parameters anymore).

Also note that here we're not calling logging from any code under test; we only use it as part of the test code.

So this proposal activates it only where we need it. As such, it's not exposed where it's not needed.

I'll keep thinking about generalizing it though.
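
To make the constraint concrete, here is a minimal, self-contained sketch of that per-test decoration (hypothetical names, log capture elided, reduced to the control flow): since run() calls the test method directly, the wrapper has to be applied to each test method explicitly.

```python
import functools
import unittest


def log_on_failure_sketch(func):
    """Wrap a single test method so a cleanup can report on failure.

    unittest's run() invokes the test method directly, with no hook to
    intercept it, so the wrapping must be applied per method.
    """
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        failed = True

        def report():
            # Runs as a cleanup, i.e. after the test method returned (or
            # raised); it only complains when the test did not finish.
            if failed:
                raise Exception('test failed, captured log would go here')

        self.addCleanup(report)
        func(self, *args, **kwargs)
        failed = False  # The test passed; stay silent.
    return wrapper


class Example(unittest.TestCase):

    @log_on_failure_sketch
    def test_pass(self):
        self.assertTrue(True)
```

Running Example('test_pass') against a TestResult succeeds silently; a failing test method would leave failed set to True and the cleanup would surface the captured log.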

41. By Vincent Ladeuil

Display log on failure for all APITestCase.

Revision history for this message
Vincent Ladeuil (vila) wrote :

> What if we made this part of the setUp and tearDown process and put it in the base TestCase.

Take two.

setUp() is too late, this has to happen in __init__.

Last revision does that.

I still think it's a bit too magic (I have a gut feeling this may break in obscure ways for edge cases, parametrized tests for example) and would prefer to decorate the tests that really need it, but let's see how this goes.

42. By Vincent Ladeuil

Hacking from __init__ is too brittle, come back to simpler decorator.

43. By Vincent Ladeuil

Remove useless code.

Revision history for this message
Fabián Ezequiel Gallina (fgallina) wrote :

LGTM

review: Approve

Preview Diff

=== modified file 'Makefile'
--- Makefile 2017-03-01 16:33:48 +0000
+++ Makefile 2017-03-06 11:50:31 +0000
@@ -1,8 +1,8 @@
 # Copyright (C) 2016 Canonical Ltd.

 ENV = $(CURDIR)/env
-PIP = pip3
-PYTHON = python3
+PIP = $(ENV)/bin/pip3
+PYTHON = $(ENV)/bin/python3
 VM = store-acceptance-tests
 VSSH = ols-vms shell ${VM}
 TEST_TARGET ?= discover tests/api
@@ -13,7 +13,7 @@
 bootstrap: $(ENV) wheels install-wheels

 $(ENV):
-	virtualenv -p $(PYTHON) --clear --system-site-packages $(ENV)
+	virtualenv -p python3 --clear --system-site-packages $(ENV)

 wheels:
	[ -d $(WHEELS_DIR) ] && (cd $(WHEELS_DIR) && bzr pull) || (bzr branch $(WHEELS_BRANCH_URL) $(WHEELS_DIR))

=== modified file 'README'
--- README 2017-03-01 16:40:34 +0000
+++ README 2017-03-06 11:50:31 +0000
@@ -95,4 +95,4 @@

 Once there, 'cd' to your mounted working dir and 'make bootstrap'.

-ols jenkaas use './run-landing-tests.sh'.
+ols jenkaas uses './run-landing-tests.sh'.

=== modified file 'dependencies.txt'
--- dependencies.txt 2017-03-01 16:33:48 +0000
+++ dependencies.txt 2017-03-06 11:50:31 +0000
@@ -2,6 +2,7 @@
 make
 squashfs-tools
 snapcraft
+python3-ols-tests
 python3-pip
 python3-pymacaroons
 python3-requests

=== modified file 'ols-vms.conf'
--- ols-vms.conf 2017-03-01 16:33:48 +0000
+++ ols-vms.conf 2017-03-06 11:50:31 +0000
@@ -4,4 +4,5 @@
 vm.architecture = amd64
 vm.update = True
 vm.locale = en_US.UTF-8
-vm.packages = @dependencies.txt, bzr
+apt.sources = ppa:ubuntuone/ols-tests
+vm.packages = @dependencies.txt

=== modified file 'run-landing-tests.sh'
--- run-landing-tests.sh 2017-03-01 16:33:48 +0000
+++ run-landing-tests.sh 2017-03-06 11:50:31 +0000
@@ -5,5 +5,4 @@
 SECRETS=~/.config/ols-store-acceptance-tests/secrets
 test -f ${SECRETS} && . ${SECRETS}

-make api-tests
-
+PYTHONPATH=`pwd` env/bin/python3 /usr/bin/ols-run-tests

=== modified file 'tests/api/snap/helpers.py'
--- tests/api/snap/helpers.py 2016-08-24 05:32:42 +0000
+++ tests/api/snap/helpers.py 2017-03-06 11:50:31 +0000
@@ -1,6 +1,9 @@
+import functools
 import getpass
 import glob
+import io
 import json
+import logging
 import os
 import shutil
 import subprocess
@@ -9,6 +12,7 @@
 import urllib.parse
 import uuid

+
 import requests
 from pymacaroons import Macaroon

@@ -159,11 +163,75 @@
     return body.get('upload_id'), version


+def override_logging(test, stream_out, level=logging.INFO):
+    """Setup a logging handler, restoring the actual handlers after the test.
+
+    This assumes a logging setup where handlers are added to the root logger
+    only.
+
+    :param stream_out: A stream where log messages will be recorded.
+
+    :param level: The logging level to be active during the test.
+
+    :param fmt: The logging format to use during the test.
+
+    """
+    root_logger = logging.getLogger(None)
+    # Using reversed() below ensures we can modify root_logger.handlers as well
+    # as providing the handlers in the right order for cleanups.
+    for handler in reversed(root_logger.handlers):
+        root_logger.removeHandler(handler)
+        test.addCleanup(root_logger.addHandler, handler)
+    # Install the new handler
+    new_handler = logging.StreamHandler(stream_out)
+    test.addCleanup(root_logger.removeHandler, new_handler)
+    root_logger.addHandler(new_handler)
+    # Install the new level, restoring the actual one after the test
+    test.addCleanup(root_logger.setLevel, root_logger.level)
+    root_logger.setLevel(level)
+
+
+def log_on_failure(func):
+    """Decorates a test to display log on failure.
+
+    This requires a 'log_stream' attribute available from the decorated test
+    (which override_logging() provides if called during setUp()).
+
+    With this decoration the test can display the log content when it fails or
+    errors but stay silent otherwise.
+    """
+    @functools.wraps(func)
+    def decorator(*args):
+        test = args[0]
+        display_log = True
+
+        # We need to delay the log acquisition until we attempt to display
+        # it (or we get no content).
+        def delayed_display_log():
+            if display_log:
+                msg = 'Failed test log: >>>\n{}\n<<<'.format(
+                    test.log_stream.getvalue())
+                raise Exception(msg)
+
+        test.addCleanup(delayed_display_log)
+
+        # Run the test without the decoration
+        func(*args)
+        # If it terminates properly, disable log display
+        display_log = False
+    return decorator
+
+
 class APITestCase(unittest.TestCase):

     # Do not ever truncate diffs so we don't ever miss response body changes
     maxDiff = None

+    def setUp(self):
+        super().setUp()
+        self.log_stream = io.StringIO()
+        override_logging(self, self.log_stream)
+
     def assert_success(self, response, content_type='application/json'):
         self.assertTrue(response.ok, 'Response was: [{}] {}'.format(
             response.status_code, response.content))

=== added file 'tests/api/snap/test_helpers.py'
--- tests/api/snap/test_helpers.py 1970-01-01 00:00:00 +0000
+++ tests/api/snap/test_helpers.py 2017-03-06 11:50:31 +0000
@@ -0,0 +1,70 @@
+import io
+import logging
+import unittest
+
+
+from olstests import (
+    assertions,
+    results,
+)
+
+from tests.api.snap import helpers
+
+
+logger = logging.getLogger(__name__)
+
+
+class TestLogOnFailure(unittest.TestCase):
+
+    def setUp(self):
+        super().setUp()
+        self.result = results.TextResult(io.StringIO(), verbosity=2)
+
+        # We don't care about timing here so we always return 0 which
+        # simplifies matching the expected result
+        def zero(atime):
+            return 0.0
+
+        self.result.convert_delta_to_float = zero
+
+    def test_log_not_displayed(self):
+
+        class Test(helpers.APITestCase):
+
+            @helpers.log_on_failure
+            def test_pass(inner):
+                # Check the test name
+                self.assertTrue(inner.id().endswith('Test.test_pass'))
+                logger.info('pass')
+
+        t = Test('test_pass')
+        t.run(self.result)
+        self.assertEqual('pass\n', t.log_stream.getvalue())
+        self.assertEqual('{} ... OK (0.000 secs)\n'.format(t.id()),
+                         self.result.stream.getvalue())
+        self.assertEqual([], self.result.errors)
+
+    def test_log_displayed(self):
+
+        class Test(helpers.APITestCase):
+
+            @helpers.log_on_failure
+            def test_fail(inner):
+                # Check the test name
+                self.assertTrue(inner.id().endswith('Test.test_fail'))
+                logger.info("I'm broken")
+                inner.fail()
+
+        t = Test('test_fail')
+        t.run(self.result)
+        self.assertEqual("I'm broken\n", t.log_stream.getvalue())
+        # FAILERROR: The test FAIL, the cleanup ERRORs out.
+        self.assertEqual(
+            '{} ... FAILERROR (0.000 secs)\n'.format(t.id()),
+            self.result.stream.getvalue())
+        assertions.assertLength(self, 1, self.result.errors)
+        failing_test, traceback = self.result.errors[0]
+        self.assertIs(t, failing_test)
+        expected = traceback.endswith("Failed test log:"
+                                      " >>>\nI'm broken\n\n<<<\n")
+        self.assertTrue(expected, 'Actual traceback: {}'.format(traceback))

=== modified file 'tests/api/snap/test_snap_release.py'
--- tests/api/snap/test_snap_release.py 2017-03-01 15:54:06 +0000
+++ tests/api/snap/test_snap_release.py 2017-03-06 11:50:31 +0000
@@ -3,18 +3,23 @@
 import time
 import uuid

+
 import requests

 from .helpers import (
     SCA_ROOT_URL,
     APITestCase,
     authenticate_with_macaroon,
+    log_on_failure,
     upload_snap,
 )
 from .test_register_name import register_name
 from .test_snap_push import push_snap


+logger = logging.getLogger(__name__)
+
+
 RELEASE_SNAP_RETRY_WAIT = 30


@@ -41,7 +46,7 @@
         if status_url is not None:
             response = poll_until_processed(status_url)
         else:
-            logging.info(
+            logger.info(
                 'Failed to release: data=%s. Retrying in %s seconds.',
                 data, RELEASE_SNAP_RETRY_WAIT)
             time.sleep(RELEASE_SNAP_RETRY_WAIT)
@@ -61,8 +66,8 @@
 def get_snap_status(status_url):
     response = requests.get(status_url, headers={'Cache-Control': 'no-cache'})
     if not response.ok:
-        logging.error('Failed to get snap status. Response was: [%s] %s',
-                      response.status_code, response.content)
+        logger.error('Failed to get snap status. Response was: [%s] %s',
+                     response.status_code, response.content)
     return response


@@ -95,7 +100,7 @@
         cls.auth = authenticate_with_macaroon()

     def setUp(self):
-        super(SnapReleaseTestCase, self).setUp()
+        super().setUp()

         # use same format for package names as snapcraft integration tests
         self.name = 'delete-me-{}'.format(str(uuid.uuid4().int)[:32])
@@ -143,6 +148,7 @@
         }
         self.assertEqual(body, expected)

+    @log_on_failure
     def test_release_snap_successful(self):
         response = register_name(self.name, auth=self.auth)
         updown_id, version = upload_snap(self.name)
