Merge lp:~veebers/autopilot/texttest-run-uprefresh into lp:autopilot

Proposed by Christopher Lee
Status: Needs review
Proposed branch: lp:~veebers/autopilot/texttest-run-uprefresh
Merge into: lp:autopilot
Diff against target: 848 lines (+351/-130)
7 files modified
autopilot/__init__.py (+1/-1)
autopilot/run.py (+21/-19)
autopilot/testresult.py (+85/-22)
autopilot/tests/functional/test_autopilot_functional.py (+87/-54)
autopilot/tests/unit/test_command_line_args.py (+10/-2)
autopilot/tests/unit/test_run.py (+127/-30)
autopilot/tests/unit/test_testresults.py (+20/-2)
To merge this branch: bzr merge lp:~veebers/autopilot/texttest-run-uprefresh
Reviewer Review Type Date Requested Status
platform-qa-bot continuous-integration Approve
PS Jenkins bot continuous-integration Needs Fixing
prod-platform-qa continuous-integration Pending
Christopher Lee Pending
Thomi Richards Pending
Corey Goldberg Pending
Review via email: mp+225247@code.launchpad.net

This proposal supersedes a proposal from 2014-03-03.

Commit message

Improve non-verbose output to console during tests

Description of the change

(Resubmitted w/ merged trunk + conflict fixes.)

This branch improves the non-verbose output so that it resembles the standard unittest runner's output: it prints the per-test status flags as tests run, instead of staying silent in normal (non-verbose) mode.

This branch also bumps the Python version used for testing in the tox.ini configuration to Python 3.4.

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
Corey Goldberg (coreygoldberg) wrote : Posted in a previous version of this proposal

from irc with thomi:

<thomi> you should flush stdout after startTest, since we want to see that output in the case where a test takes a long time to run
<cgoldberg> right
<thomi> also, I think the skip line should have the optional message in '( )'
 so like:
 test.id ... SKIP (Not runnable on the device)
 for example
<cgoldberg> yup.. agree
<thomi> also, I think the status messages can be a bit more verbose.
 particularly, please change:
 XFAIL => EXPECTED FAIL
 and
 NOTOK => UNEXPECTED SUCCESS
<cgoldberg> right. makes sense. I copied the unittest runner's output, but there's no reason not to be a little more explicit
<thomi> also, you should generalise the code (diff lines 82-92) to go in a separate function, something like _wrap_result_with_output_decorator
 which accepts a result object, and wraps it in either a LoggingResultDecorator or your new decorator
 that way, it's easier to test, and we keep the complexity of the construct_XXX functions down
<thomi> cgoldberg: in your tests, I think you can make your 'assertOutput' lines more readable, by doing:
<cgoldberg> thomi, ok.. i can wrap it in a function
<thomi> self.assertOutput('{id} ... OK\n', 'pass')
 one line is better than 3, since we read left->right
 other than those minor quibbles above ^^, this looks great, but you're missing some tests still
<cgoldberg> thomi, hah.. will do
<thomi> I'd like to see an integration test that shows that when you specify the various result formats without the verbose flag, we get the correct result object & wrapper
 similarly for when you do specify the verbose flag
<cgoldberg> gotcha.. yea i can add that too
<thomi> Finally, I think you should add 6 functional tests to the functional test suite
 1 test for each format, * 1 with verbose, and one without verbose
 actually, the verbose case is almost certainly covered already (but please do check)
 so maybe it's only 3 new functional tests :)

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
Corey Goldberg (coreygoldberg) wrote : Posted in a previous version of this proposal

from thomi:

OK, so we need to change things. It seems like there are several things we're trying to control independently of each other here:

1) The level of verbosity autopilot uses when logging. Ideally we'd make sure that all our log levels were sensible, and then this would control the logging framework verbosity level. 0 * '-v' => normal and higher. '-v' => info logs and higher. '-vv' => debug logs and higher. The log *always* gets attached to the test result for every test, regardless of the output file format.

2) The format we store run reports in. We have 'text', 'xml', 'subunit'. I think we need one more, which is 'verbosetext' or something similar. This causes the test log to get printed to stdout as the test is running. This is equivalent to the '-v' flag before this change.

3) The location of the output report. If the -o option is specified, the report is written to that file, in the format specified by the '-f' parameter.

Finally, if, after configuring all this, stdout is not used, the 'status' format should be printed to stdout as well.
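The three independent controls described above could map onto the command line roughly like this. This is an illustrative argparse sketch, not the branch's actual parser; option names are taken from the discussion (-v, -f, -o) but defaults and choices are assumptions.

```python
import argparse

# Three independent knobs: log verbosity, report format, report location.
parser = argparse.ArgumentParser()
parser.add_argument('-v', '--verbose', action='count', default=0,
                    help="-v for info logs, -vv for debug logs")
parser.add_argument('-f', '--format', default='status',
                    choices=['status', 'text', 'xml', 'subunit'])
parser.add_argument('-o', '--output', default=None,
                    help="write the report to this file instead of stdout")

args = parser.parse_args(['-vv', '-f', 'xml', '-o', 'report.xml'])

# If the report goes to a file, stdout is unused, so the terse
# 'status' lines can still be printed there as a summary.
status_to_stdout = args.output is not None
```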

review: Needs Fixing
Revision history for this message
Corey Goldberg (coreygoldberg) wrote : Posted in a previous version of this proposal

Four output formats will be:
* status (new format)
* xml
* subunit
* text
Every format can be sent to stdout or a file.
-v / -vv flag ONLY controls the verbosity of the python log.
make subclass of TextTestResult to print to stream
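The last point can be illustrated with a minimal subclass that echoes "test.id ... STATUS" lines to the stream as tests run. For self-containment this sketch subclasses the stdlib's unittest.TextTestResult; the branch itself subclasses testtools' TextTestResult, so details differ.

```python
import io
import unittest


class StatusFormatTestResult(unittest.TextTestResult):
    """Write 'test.id ... STATUS' lines to the stream as tests run."""

    def startTest(self, test):
        # Flush so the test id is visible even if the test runs long.
        self.stream.write('%s ... ' % test.id())
        self.stream.flush()
        super().startTest(test)

    def addSuccess(self, test):
        self.stream.write('OK\n')
        super().addSuccess(test)

    def addSkip(self, test, reason):
        # Per the review: put the optional skip message in parentheses.
        suffix = ' (%s)' % reason if reason else ''
        self.stream.write('SKIP%s\n' % suffix)
        super().addSkip(test, reason)


class _Sample(unittest.TestCase):
    def test_pass(self):
        pass

    @unittest.skip('Not runnable on the device')
    def test_skipped(self):
        pass


stream = io.StringIO()
result = StatusFormatTestResult(stream, descriptions=False, verbosity=0)
suite = unittest.TestSuite([_Sample('test_pass'), _Sample('test_skipped')])
suite.run(result)
print(stream.getvalue())
```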

Revision history for this message
Corey Goldberg (coreygoldberg) wrote : Posted in a previous version of this proposal

make above fixes

review: Needs Fixing
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
Christopher Lee (veebers) wrote : Posted in a previous version of this proposal

Just a couple of minor things.

Initially I thought I misunderstood the verbose/output changes until I read the comment re: verbose output only going to a log file. I think the -h output needs to be updated to state as much.
(actually I think I'm confused on this point now).

The help states that the default format for -f is text, whereas it is now 'status' (I was initially a little confused why my -v wasn't outputting the tapping details etc.).

lines 488-491 have some odd indentation. I would put the opening [ on a new line, i.e.
  self.run_autopilot(
      [ "run", "--failfast", "-f", "text", "tests"]
  )
# You could put the list args on new lines too if you wanted, but not needed here.

That's all for now, I'll try to give a more functional review later on :-)

review: Needs Fixing
Revision history for this message
Corey Goldberg (coreygoldberg) wrote : Posted in a previous version of this proposal

thanks veebers!

> I think that the -h output needs to be updated to state as such.
> (actually I think I'm confused on this point now).

I will update -h commandline help, and /docs today with clear explanation of log and output handling.

> line 488-491 has some odd indentation.

I just pushed a fix for indentation in functional tests.

Revision history for this message
Corey Goldberg (coreygoldberg) wrote : Posted in a previous version of this proposal

updated help strings.

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Approve (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
Thomi Richards (thomir-deactivatedaccount) wrote : Posted in a previous version of this proposal

Hi,

A few things:

40 - '-v', '--verbose', required=False, default=False, action='count',
41 - help="Show autopilot log messages. Set twice to also log data "
42 - "useful for debugging autopilot itself.")
43 + "-v", "--verbosity", action='count', default=0, required=False,
44 + help="Increase verbosity of test details and 'text' mode output. "
45 + "Set twice to also log data useful for debugging autopilot "
46 + "itself.")

This is incorrect, since this is for the vis tool. The original message was correct, I don't think you need to change it at all.

54 - '-v', '--verbose', required=False, default=False, action='count',
55 - help="Show autopilot log messages. Set twice to also log data useful "
56 - "for debugging autopilot itself.")
57 + "-v", "--verbosity", action='count', default=0, required=False,
58 + help="Increase verbosity of test details and 'text' mode output. "
59 + "Set twice to also log data useful for debugging autopilot "
60 + "itself.")

Same here - this is for the launch command.

A small thing, but this:

85 -# Copyright (C) 2012-2013 Canonical
86 +# Copyright (C) 2012,2013,2014 Canonical

Should be "2012-2014", not "2012,2013,2014", as per legal advice.

102 from autopilot import get_version_string, parse_arguments
103 -import autopilot.globals
104 from autopilot._debug import get_all_debug_profiles
105 -from autopilot.testresult import get_output_formats
106 -from autopilot.utilities import DebugLogFilter, LogFormatter
107 from autopilot.application._launcher import (
108 _get_app_env_from_string_hint,
109 get_application_launcher_wrapper,
110 launch_process,
111 )
112 +import autopilot.globals
113 +from autopilot.testresult import get_output_formats
114 +from autopilot.utilities import DebugLogFilter, LogFormatter

This isn't the standard we've set in the AP codebase. We mix 'import FOO' and 'from FOO import BAR' statements in one block, and alphabetise the module names.

565 + def test_failfast_text_mode(self):

Please move the test_failfast test to a new class, and use scenarios to make sure that *all* test result formats support failfast, not just text and status.

Here's the big change I think we need to make:

Currently, the logger is set up to log to the same stream as the result object. This works for the text format, but not the others. Instead, we should make it so the test log is added to the test result as a detail, for every format. This will help us with the subunit format, for example. TBH, I thought this already happened, but I can't see the code now. Feel free to point it out to me :)
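The "attach the log as a detail" behaviour asked for above can be sketched with stdlib logging alone. This is a hypothetical illustration of the mechanism (capture python logging during the test, then store the captured text on the result), not the testtools addDetail API the project would actually use.

```python
import io
import logging


def attach_log_as_detail(test_details, run_test):
    """Capture python logging emitted by run_test() and attach it as a detail.

    test_details stands in for a test result's details mapping; the log
    is recorded for every output format, instead of being written to
    the result's stream.
    """
    buffer = io.StringIO()
    handler = logging.StreamHandler(buffer)
    root = logging.getLogger()
    root.addHandler(handler)
    old_level = root.level
    root.setLevel(logging.DEBUG)
    try:
        run_test()
    finally:
        # Always detach the handler so later tests don't leak into it.
        root.removeHandler(handler)
        root.setLevel(old_level)
    test_details['test-log'] = buffer.getvalue()


details = {}
attach_log_as_detail(
    details, lambda: logging.getLogger().info('hello from the test'))
print(details['test-log'])
```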

Cheers

review: Needs Fixing
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
Corey Goldberg (coreygoldberg) wrote : Posted in a previous version of this proposal

fixed:

- fixed copyright date format.
- reverted help text for -v arg in launch and vis modes
- added scenario tests for failfast to cover all output formats

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
483. By Christopher Lee

Fix introduced flake8 issues

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :

FAILED: Continuous integration, rev:483
No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want a jenkins rebuild you need to trigger it yourself):
https://code.launchpad.net/~veebers/autopilot/texttest-run-uprefresh/+merge/225247/+edit-commit-message

http://jenkins.qa.ubuntu.com/job/autopilot-ci/751/
Executed test runs:
    SUCCESS: http://jenkins.qa.ubuntu.com/job/autopilot-utopic-amd64-ci/25
        deb: http://jenkins.qa.ubuntu.com/job/autopilot-utopic-amd64-ci/25/artifact/work/output/*zip*/output.zip
    SUCCESS: http://jenkins.qa.ubuntu.com/job/autopilot-utopic-armhf-ci/25
        deb: http://jenkins.qa.ubuntu.com/job/autopilot-utopic-armhf-ci/25/artifact/work/output/*zip*/output.zip
    SUCCESS: http://jenkins.qa.ubuntu.com/job/autopilot-utopic-i386-ci/25
        deb: http://jenkins.qa.ubuntu.com/job/autopilot-utopic-i386-ci/25/artifact/work/output/*zip*/output.zip
    UNSTABLE: http://jenkins.qa.ubuntu.com/job/generic-mediumtests-utopic-autopilot/151
    UNSTABLE: http://jenkins.qa.ubuntu.com/job/autopilot-testrunner-otto-utopic-autopilot/238
    SUCCESS: http://jenkins.qa.ubuntu.com/job/generic-mediumtests-builder-utopic-amd64/1459
        deb: http://jenkins.qa.ubuntu.com/job/generic-mediumtests-builder-utopic-amd64/1459/artifact/work/output/*zip*/output.zip

Click here to trigger a rebuild:
http://s-jenkins.ubuntu-ci:8080/job/autopilot-ci/751/rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
484. By Christopher Lee

Fix 3 failing tests with typo change

485. By Christopher Lee

Fix sp error in test (verbose vs verbosity)

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
486. By Christopher Lee

Merge trunk + fix conflicts

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
platform-qa-bot (platform-qa-bot) wrote :
review: Approve (continuous-integration)

Unmerged revisions

486. By Christopher Lee

Merge trunk + fix conflicts

485. By Christopher Lee

Fix sp error in test (verbose vs verbosity)

484. By Christopher Lee

Fix 3 failing tests with typo change

483. By Christopher Lee

Fix introduced flake8 issues

482. By Christopher Lee

Merge trunk + fix conflicts

481. By Corey Goldberg

fixed comments

480. By Corey Goldberg

flake8 fixes

479. By Corey Goldberg

details unit tests

478. By Corey Goldberg

fixes

477. By Corey Goldberg

fixes per latest review

Preview Diff

1=== modified file 'autopilot/__init__.py'
2--- autopilot/__init__.py 2014-05-20 14:43:24 +0000
3+++ autopilot/__init__.py 2014-07-22 03:40:04 +0000
4@@ -1,7 +1,7 @@
5 # -*- Mode: Python; coding: utf-8; indent-tabs-mode: nil; tab-width: 4 -*-
6 #
7 # Autopilot Functional Test Tool
8-# Copyright (C) 2012-2013 Canonical
9+# Copyright (C) 2012-2014 Canonical
10 #
11 # This program is free software: you can redistribute it and/or modify
12 # it under the terms of the GNU General Public License as published by
13
14=== modified file 'autopilot/run.py'
15--- autopilot/run.py 2014-07-02 21:07:33 +0000
16+++ autopilot/run.py 2014-07-22 03:40:04 +0000
17@@ -31,15 +31,14 @@
18 import os.path
19 from platform import node
20 from random import shuffle
21-import six
22 import subprocess
23 import sys
24 from unittest import TestLoader, TestSuite
25
26+import six
27 from testtools import iterate_tests
28
29 from autopilot import get_version_string, have_vis
30-import autopilot.globals
31 from autopilot import _config as test_config
32 from autopilot._debug import (
33 get_all_debug_profiles,
34@@ -52,6 +51,7 @@
35 get_application_launcher_wrapper,
36 launch_process,
37 )
38+import autopilot.globals
39
40
41 def _get_parser():
42@@ -215,15 +215,18 @@
43 setattr(namespace, self.dest, values)
44
45
46-def setup_logging(verbose):
47- """Configure the root logger and verbose logging to stderr."""
48+def setup_logging(verbosity, path=None, format=None):
49+ """Configure the root logger"""
50 root_logger = get_root_logger()
51 root_logger.setLevel(logging.DEBUG)
52- if verbose == 0:
53+ if format == 'text':
54+ stream = get_output_stream(format, path)
55+ set_stream_handler(root_logger, stream)
56+ else:
57 set_null_log_handler(root_logger)
58- if verbose >= 1:
59- set_stderr_stream_handler(root_logger)
60- if verbose >= 2:
61+ if verbosity >= 1:
62+ autopilot.globals.set_log_verbose(True)
63+ if verbosity >= 2:
64 enable_debug_log_messages()
65 # log autopilot version
66 root_logger.info(get_version_string())
67@@ -237,11 +240,10 @@
68 root_logger.addHandler(logging.NullHandler())
69
70
71-def set_stderr_stream_handler(root_logger):
72- formatter = LogFormatter()
73- stderr_handler = logging.StreamHandler(stream=sys.stderr)
74- stderr_handler.setFormatter(formatter)
75- root_logger.addHandler(stderr_handler)
76+def set_stream_handler(root_logger, stream):
77+ handler = logging.StreamHandler(stream=stream)
78+ handler.setFormatter(LogFormatter())
79+ root_logger.addHandler(handler)
80
81
82 def enable_debug_log_messages():
83@@ -256,7 +258,7 @@
84 )
85
86
87-def get_output_stream(format, path):
88+def get_output_stream(format, path=None):
89 """Get an output stream pointing to 'path' that's appropriate for format
90 'format'.
91
92@@ -270,7 +272,7 @@
93 log_file = _get_log_file_path(path)
94 if format == 'xml':
95 return _get_text_mode_file_stream(log_file)
96- elif format == 'text':
97+ elif format in ('status', 'text'):
98 return _get_binary_mode_file_stream(log_file)
99 else:
100 return _get_raw_binary_mode_file_stream(log_file)
101@@ -639,7 +641,10 @@
102 self.args = defined_args or _parse_arguments()
103
104 def run(self):
105- setup_logging(getattr(self.args, 'verbose', False))
106+ verbosity = getattr(self.args, 'verbose', 0)
107+ format = getattr(self.args, 'format', 'status')
108+ path = getattr(self.args, 'output', '')
109+ setup_logging(verbosity, path, format)
110
111 action = None
112 if self.args.mode == 'list':
113@@ -712,9 +717,6 @@
114 print("Error: %s" % str(e))
115 exit(1)
116
117- if self.args.verbose:
118- autopilot.globals.set_log_verbose(True)
119-
120 result = construct_test_result(self.args)
121 result.startTestRun()
122 try:
123
124=== modified file 'autopilot/testresult.py'
125--- autopilot/testresult.py 2014-07-14 23:30:58 +0000
126+++ autopilot/testresult.py 2014-07-22 03:40:04 +0000
127@@ -1,7 +1,7 @@
128 # -*- Mode: Python; coding: utf-8; indent-tabs-mode: nil; tab-width: 4 -*-
129 #
130 # Autopilot Functional Test Tool
131-# Copyright (C) 2012-2013 Canonical
132+# Copyright (C) 2012,2013,2014 Canonical
133 #
134 # This program is free software: you can redistribute it and/or modify
135 # it under the terms of the GNU General Public License as published by
136@@ -32,18 +32,81 @@
137 try_import,
138 )
139
140-from autopilot.globals import get_log_verbose
141 from autopilot.utilities import _raise_on_unknown_kwargs
142
143
144+class StatusFormatTestResult(TextTestResult):
145+ """A TextTestResult that prints status messages to a stream."""
146+
147+ def __init__(self, stream, failfast):
148+ super(StatusFormatTestResult, self).__init__(stream, failfast)
149+ self.stream = stream
150+
151+ def startTest(self, test):
152+ self.stream.write(u'%s' % test.id())
153+ self.stream.write(u' ... ')
154+ self.stream.flush()
155+ super(StatusFormatTestResult, self).startTest(test)
156+
157+ def stopTest(self, test):
158+ self.stream.write(u'\n')
159+ self.stream.flush()
160+ super(StatusFormatTestResult, self).stopTest(test)
161+
162+ def stopTestRun(self):
163+ self.stream.write(u'-' * 80)
164+ self.stream.write(u'\n')
165+ self.stream.flush()
166+ super(StatusFormatTestResult, self).stopTestRun()
167+
168+ def addExpectedFailure(self, test, err=None, details=None):
169+ self.stream.write(u'EXPECTED FAIL')
170+ super(StatusFormatTestResult, self).addExpectedFailure(
171+ test, err, details
172+ )
173+
174+ def addError(self, test, err=None, details=None):
175+ self.stream.write(u'ERROR')
176+ super(StatusFormatTestResult, self).addError(
177+ test, err, details
178+ )
179+
180+ def addFailure(self, test, err=None, details=None):
181+ self.stream.write(u'FAIL')
182+ super(StatusFormatTestResult, self).addFailure(
183+ test, err, details
184+ )
185+
186+ def addSkip(self, test, reason, details=None):
187+ if not reason:
188+ reason_displayed = ''
189+ else:
190+ reason_displayed = ' (%s)' % reason
191+ self.stream.write(u'SKIP%s' % reason_displayed)
192+ super(StatusFormatTestResult, self).addSkip(
193+ test, reason, details
194+ )
195+
196+ def addSuccess(self, test, details=None):
197+ self.stream.write(u'OK')
198+ super(StatusFormatTestResult, self).addSuccess(
199+ test, details
200+ )
201+
202+ def addUnexpectedSuccess(self, test, details=None):
203+ self.stream.write(u'UNEXPECTED SUCCESS')
204+ super(StatusFormatTestResult, self).addUnexpectedSuccess(
205+ test, details
206+ )
207+
208+
209 class LoggedTestResultDecorator(TestResultDecorator):
210
211 """A decorator that logs messages to python's logging system."""
212
213 def _log(self, level, message):
214- """Perform the actual message logging."""
215- if get_log_verbose():
216- logging.getLogger().log(level, message)
217+ """Perform the actual message logging"""
218+ logging.getLogger().log(level, message)
219
220 def _log_details(self, level, details):
221 """Log the relavent test details."""
222@@ -87,9 +150,8 @@
223
224 """
225 supported_formats = {}
226-
227+ supported_formats['status'] = _construct_status
228 supported_formats['text'] = _construct_text
229-
230 if try_import('junitxml'):
231 supported_formats['xml'] = _construct_xml
232 if try_import('subunit'):
233@@ -98,7 +160,7 @@
234
235
236 def get_default_format():
237- return 'text'
238+ return 'status'
239
240
241 def _construct_xml(**kwargs):
242@@ -106,16 +168,21 @@
243 stream = kwargs.pop('stream')
244 failfast = kwargs.pop('failfast')
245 _raise_on_unknown_kwargs(kwargs)
246- result_object = LoggedTestResultDecorator(
247- ExtendedToOriginalDecorator(
248- JUnitXmlResult(stream)
249- )
250- )
251- result_object.failfast = failfast
252- return result_object
253+ result = ExtendedToOriginalDecorator(JUnitXmlResult(stream))
254+ result.failfast = failfast
255+ return result
256+
257+
258+def _construct_status(**kwargs):
259+ """Status (terse) text output."""
260+ stream = kwargs.pop('stream')
261+ failfast = kwargs.pop('failfast')
262+ _raise_on_unknown_kwargs(kwargs)
263+ return StatusFormatTestResult(stream, failfast)
264
265
266 def _construct_text(**kwargs):
267+ """Verbose text output."""
268 stream = kwargs.pop('stream')
269 failfast = kwargs.pop('failfast')
270 _raise_on_unknown_kwargs(kwargs)
271@@ -127,10 +194,6 @@
272 stream = kwargs.pop('stream')
273 failfast = kwargs.pop('failfast')
274 _raise_on_unknown_kwargs(kwargs)
275- result_object = LoggedTestResultDecorator(
276- ExtendedToStreamDecorator(
277- StreamResultToBytes(stream)
278- )
279- )
280- result_object.failfast = failfast
281- return result_object
282+ result = ExtendedToStreamDecorator(StreamResultToBytes(stream))
283+ result.failfast = failfast
284+ return result
285
286=== modified file 'autopilot/tests/functional/test_autopilot_functional.py'
287--- autopilot/tests/functional/test_autopilot_functional.py 2014-07-14 04:07:05 +0000
288+++ autopilot/tests/functional/test_autopilot_functional.py 2014-07-22 03:40:04 +0000
289@@ -25,9 +25,11 @@
290 import os.path
291 import re
292 from tempfile import mktemp
293+from textwrap import dedent
294+
295+from testscenarios import WithScenarios
296 from testtools import skipIf
297 from testtools.matchers import Contains, Equals, MatchesRegex, Not
298-from textwrap import dedent
299
300 from autopilot import platform
301 from autopilot.tests.functional import AutopilotRunTestBase, remove_if_exists
302@@ -770,15 +772,10 @@
303
304 class AutopilotVerboseFunctionalTests(AutopilotFunctionalTestsBase):
305
306- """Scenarioed functional tests for autopilot's verbose logging."""
307-
308- scenarios = [
309- ('text_format', dict(output_format='text')),
310- ('xml_format', dict(output_format='xml'))
311- ]
312+ """Functional tests for autopilot's verbose logging."""
313
314 def test_verbose_flag_works(self):
315- """Verbose flag must log to stderr."""
316+ """Verbose flag must log to stdout."""
317 self.create_test_file(
318 "test_simple.py", dedent("""\
319
320@@ -792,13 +789,13 @@
321 """)
322 )
323
324- code, output, error = self.run_autopilot(["run",
325- "-f", self.output_format,
326- "-v", "tests"])
327+ code, output, error = self.run_autopilot(
328+ ["run", "-f", "text", "-v", "tests"]
329+ )
330
331 self.assertThat(code, Equals(0))
332 self.assertThat(
333- error, Contains(
334+ output, Contains(
335 "Starting test tests.test_simple.SimpleTest.test_simple"))
336
337 def test_verbose_flag_shows_timestamps(self):
338@@ -816,11 +813,11 @@
339 """)
340 )
341
342- code, output, error = self.run_autopilot(["run",
343- "-f", self.output_format,
344- "-v", "tests"])
345+ code, output, error = self.run_autopilot(
346+ ["run", "-f", "text", "-v", "tests"]
347+ )
348
349- self.assertThat(error, MatchesRegex("^\d\d:\d\d:\d\d\.\d\d\d"))
350+ self.assertThat(output, MatchesRegex("^\d\d:\d\d:\d\d\.\d\d\d"))
351
352 def test_verbose_flag_shows_success(self):
353 """Verbose log must indicate successful tests (text format)."""
354@@ -837,12 +834,12 @@
355 """)
356 )
357
358- code, output, error = self.run_autopilot(["run",
359- "-f", self.output_format,
360- "-v", "tests"])
361+ code, output, error = self.run_autopilot(
362+ ["run", "-f", "text", "-v", "tests"]
363+ )
364
365 self.assertThat(
366- error, Contains("OK: tests.test_simple.SimpleTest.test_simple"))
367+ output, Contains("OK: tests.test_simple.SimpleTest.test_simple"))
368
369 def test_verbose_flag_shows_error(self):
370 """Verbose log must indicate test error with a traceback."""
371@@ -859,15 +856,17 @@
372 """)
373 )
374
375- code, output, error = self.run_autopilot(["run",
376- "-f", self.output_format,
377- "-v", "tests"])
378+ code, output, error = self.run_autopilot(
379+ ["run", "-f", "text", "-v", "tests"]
380+ )
381
382 self.assertThat(
383- error, Contains("ERROR: tests.test_simple.SimpleTest.test_simple"))
384- self.assertThat(error, Contains("traceback:"))
385+ output,
386+ Contains("ERROR: tests.test_simple.SimpleTest.test_simple")
387+ )
388+ self.assertThat(output, Contains("traceback:"))
389 self.assertThat(
390- error,
391+ output,
392 Contains("RuntimeError: Intentionally fail test.")
393 )
394
395@@ -887,13 +886,13 @@
396 """)
397 )
398
399- code, output, error = self.run_autopilot(["run",
400- "-f", self.output_format,
401- "-v", "tests"])
402+ code, output, error = self.run_autopilot(
403+ ["run", "-f", "text", "-v", "tests"]
404+ )
405
406- self.assertIn("FAIL: tests.test_simple.SimpleTest.test_simple", error)
407- self.assertIn("traceback:", error)
408- self.assertIn("AssertionError: False is not true", error)
409+ self.assertIn("FAIL: tests.test_simple.SimpleTest.test_simple", output)
410+ self.assertIn("traceback:", output)
411+ self.assertIn("AssertionError: False is not true", output)
412
413 def test_verbose_flag_captures_nested_autopilottestcase_classes(self):
414 """Verbose log must contain the log details of both the nested and
415@@ -917,20 +916,20 @@
416 """)
417 )
418
419- code, output, error = self.run_autopilot(["run",
420- "-f", self.output_format,
421- "-v", "tests"])
422+ code, output, error = self.run_autopilot(
423+ ["run", "-f", "text", "-v", "tests"])
424
425 self.assertThat(code, Equals(0))
426+
427 self.assertThat(
428- error,
429+ output,
430 Contains(
431 "Starting test tests.test_simple.OuterTestCase."
432 "test_nested_classes"
433 )
434 )
435 self.assertThat(
436- error,
437+ output,
438 Contains(
439 "Starting test tests.test_simple.InnerTestCase."
440 "test_produce_log_output"
441@@ -953,11 +952,10 @@
442 """)
443 )
444
445- code, output, error = self.run_autopilot(["run",
446- "-f", self.output_format,
447- "-vv", "tests"])
448+ code, output, error = self.run_autopilot(
449+ ["run", "-f", "text", "-vv", "tests"])
450
451- self.assertThat(error, Contains("Hello World"))
452+ self.assertThat(output, Contains("Hello World"))
453
454 def test_debug_output_not_shown_by_default(self):
455 """Verbose log must not show debug messages unless we specify '-vv'."""
456@@ -975,11 +973,11 @@
457 """)
458 )
459
460- code, output, error = self.run_autopilot(["run",
461- "-f", self.output_format,
462- "-v", "tests"])
463+ code, output, error = self.run_autopilot(
464+ ["run", "-f", "text", "-v", "tests"]
465+ )
466
467- self.assertThat(error, Not(Contains("Hello World")))
468+ self.assertThat(output, Not(Contains("Hello World")))
469
470 def test_verbose_flag_shows_autopilot_version(self):
471 from autopilot import get_version_string
472@@ -997,13 +995,23 @@
473 """)
474 )
475
476- code, output, error = self.run_autopilot(["run",
477- "-f", self.output_format,
478- "-v", "tests"])
479+ code, output, error = self.run_autopilot(
480+ ["run", "-f", "text", "-v", "tests"]
481+ )
482 self.assertThat(
483- error, Contains(get_version_string()))
484-
485- def test_failfast(self):
486+ output, Contains(get_version_string()))
487+
488+
489+class AutopilotFailFastTests(AutopilotRunTestBase, WithScenarios):
490+
491+ scenarios = [
492+ ('status', dict(format='status')),
493+ ('text', dict(format='text')),
494+ ('xml', dict(format='xml')),
495+ ('subunit', dict(format='subunit')),
496+ ]
497+
498+ def test_failfast_text_mode(self):
499 """Run stops after first error encountered."""
500 self.create_test_file(
501 'test_failfast.py', dedent("""\
502@@ -1020,9 +1028,34 @@
503 raise Exception
504 """)
505 )
506- code, output, error = self.run_autopilot(["run",
507- "--failfast",
508- "tests"])
509+ code, output, error = self.run_autopilot(
510+ ["run", "--failfast", "-f", self.format, "tests"]
511+ )
512 self.assertThat(code, Equals(1))
513 self.assertIn("Ran 1 test", output)
514 self.assertIn("FAILED (failures=1)", output)
515+
516+
517+class AutopilotTestDetailsTests(AutopilotRunTestBase):
518+
519+ def test_logging_added_as_test_detail(self):
520+ msg = "This is some information"
521+ self.create_test_file(
522+ 'test_details.py', dedent("""\
523+ import logging
524+ from autopilot.testcase import AutopilotTestCase
525+
526+
527+ logger = logging.getLogger(__name__)
528+
529+ class SimpleTest(AutopilotTestCase):
530+
531+ def test_one(self):
532+ logger.info('%s')
533+
534+ """ % msg)
535+ )
536+ code, output, error = self.run_autopilot(
537+ ["run", "-f", "text", "tests"]
538+ )
539+ self.assertIn(msg, output)
540
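Side note on the assertion changes above (`error` → `output`): the branch routes test log messages to the normal output stream, so the functional tests now look for them there. A minimal stdlib sketch (an assumption for illustration, not autopilot's actual code) of directing log records to a chosen stream:

```python
import logging
from io import StringIO

# Sketch: attach a StreamHandler so log records land on a stream we
# control, mirroring why the tests above assert on 'output' rather
# than 'error'.
stream = StringIO()
handler = logging.StreamHandler(stream)
logger = logging.getLogger('demo')
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info('Hello World')
assert 'Hello World' in stream.getvalue()
```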
541=== modified file 'autopilot/tests/unit/test_command_line_args.py'
542--- autopilot/tests/unit/test_command_line_args.py 2014-05-23 11:28:21 +0000
543+++ autopilot/tests/unit/test_command_line_args.py 2014-07-22 03:40:04 +0000
544@@ -76,7 +76,7 @@
545
546 def test_launch_command_has_correct_default_verbosity(self):
547 args = parse_args("launch app")
548- self.assertThat(args.verbose, Equals(False))
549+ self.assertThat(args.verbose, Equals(0))
550
551 def test_launch_command_can_specify_verbosity(self):
552 args = parse_args("launch -v app")
553@@ -211,7 +211,15 @@
554
555 def test_run_command_default_format(self):
556 args = parse_args('run foo')
557- self.assertThat(args.format, Equals("text"))
558+ self.assertThat(args.format, Equals("status"))
559+
560+ def test_run_command_status_format_short_version(self):
561+ args = parse_args('run -f status foo')
562+ self.assertThat(args.format, Equals("status"))
563+
564+ def test_run_command_status_format_long_version(self):
565+ args = parse_args('run --format status foo')
566+ self.assertThat(args.format, Equals("status"))
567
568 def test_run_command_text_format_short_version(self):
569 args = parse_args('run -f text foo')
570
571=== modified file 'autopilot/tests/unit/test_run.py'
572--- autopilot/tests/unit/test_run.py 2014-05-23 11:28:21 +0000
573+++ autopilot/tests/unit/test_run.py 2014-07-22 03:40:04 +0000
574@@ -18,7 +18,7 @@
575 #
576
577 from argparse import Namespace
578-from unittest.mock import Mock, patch
579+from io import StringIO
580 import logging
581 import os.path
582 from shutil import rmtree
583@@ -37,12 +37,17 @@
584 Raises,
585 StartsWith,
586 )
587+from unittest.mock import Mock, patch
588
589 if six.PY3:
590 from contextlib import ExitStack
591 else:
592 from contextlib2 import ExitStack
593
594+from autopilot.testresult import (
595+ LoggedTestResultDecorator,
596+ StatusFormatTestResult,
597+)
598 from autopilot import have_vis, run
599
600
601@@ -595,27 +600,10 @@
602 logging.DEBUG
603 )
604
605- def test_set_null_log_handler(self):
606- mock_root_logger = Mock()
607- run.set_null_log_handler(mock_root_logger)
608-
609- self.assertThat(
610- mock_root_logger.addHandler.call_args[0][0],
611- IsInstance(logging.NullHandler)
612- )
613-
614- @patch.object(run, 'get_root_logger')
615- def test_verbse_level_zero_sets_null_handler(self, fake_get_logger):
616- with patch.object(run, 'set_null_log_handler') as fake_set_null:
617- run.setup_logging(0)
618-
619- fake_set_null.assert_called_once_with(
620- fake_get_logger.return_value
621- )
622-
623- def test_stderr_handler_sets_stream_handler_with_custom_formatter(self):
624- mock_root_logger = Mock()
625- run.set_stderr_stream_handler(mock_root_logger)
626+ def test_stream_handler_sets_custom_formatter(self):
627+ mock_root_logger = Mock()
628+ stream = StringIO()
629+ run.set_stream_handler(mock_root_logger, stream)
630
631 self.assertThat(mock_root_logger.addHandler.call_count, Equals(1))
632 created_handler = mock_root_logger.addHandler.call_args[0][0]
633@@ -630,12 +618,16 @@
634 )
635
636 @patch.object(run, 'get_root_logger')
637- def test_verbose_level_one_sets_stream_handler(self, fake_get_logger):
638- with patch.object(run, 'set_stderr_stream_handler') as stderr_handler:
639- run.setup_logging(1)
640+ def test_text_format_sets_stream_handler(self, fake_get_logger):
641+ verbosity = 0
642+ path = None
643+ # handler is set when running in 'text' format mode
644+ with patch.object(run, 'set_stream_handler') as handler:
645+ run.setup_logging(verbosity, path, format='text')
646
647- stderr_handler.assert_called_once_with(
648- fake_get_logger.return_value
649+ handler.assert_called_once_with(
650+ fake_get_logger.return_value,
651+ run.get_output_stream('text')
652 )
653
654 def test_enable_debug_log_messages_sets_debugFilter_attr(self):
655@@ -650,7 +642,7 @@
656 @patch.object(run, 'get_root_logger')
657 def test_verbose_level_two_enables_debug_messages(self, fake_get_logger):
658 with patch.object(run, 'enable_debug_log_messages') as enable_debug:
659- run.setup_logging(2)
660+ run.setup_logging(2, None)
661
662 enable_debug.assert_called_once_with()
663
664@@ -791,6 +783,13 @@
665 run.get_output_stream(format, output)
666 pgts.assert_called_once_with(output)
667
668+ def test_status_format_opens_binary_mode_stream(self):
669+ output = tempfile.mktemp()
670+ format = 'status'
671+ with patch.object(run, '_get_binary_mode_file_stream') as pgbs:
672+ run.get_output_stream(format, output)
673+ pgbs.assert_called_once_with(output)
674+
675 def test_txt_format_opens_binary_mode_stream(self):
676 output = tempfile.mktemp()
677 format = 'text'
678@@ -815,6 +814,14 @@
679 expected = "Using default log filename: %s\n" % path
680 self.assertThat(expected, Equals(output))
681
682+ def test_text_result_uses_logged_decorator(self):
683+ args = Namespace()
684+ args.failfast = False
685+ args.output = None
686+ args.format = "text"
687+ result = run.construct_test_result(args)
688+ self.assertIsInstance(result, LoggedTestResultDecorator)
689+
690
691 class TestProgramTests(TestCase):
692
693@@ -841,12 +848,12 @@
694 self.assertThat(program.args, Equals(fake_args))
695
696 def test_run_calls_setup_logging_with_verbose_arg(self):
697- fake_args = Namespace(verbose=1, mode='')
698+ fake_args = Namespace(verbose=1, mode='text')
699 program = run.TestProgram(fake_args)
700 with patch.object(run, 'setup_logging') as patched_setup_logging:
701 program.run()
702
703- patched_setup_logging.assert_called_once_with(True)
704+ patched_setup_logging.assert_called_once_with(1, '', 'status')
705
706 def test_list_command_calls_list_tests_method(self):
707 fake_args = Namespace(mode='list')
708@@ -966,3 +973,93 @@
709 )
710 defaults.update(kwargs)
711 return Namespace(**defaults)
712+
713+
714+def get_case(kind):
715+ # Define the class in a function so test loading doesn't
716+ # try to load it as a regular test class.
717+ class Test(TestCase):
718+
719+ def test_pass(self):
720+ pass
721+
722+ def test_fail(self):
723+ raise self.failureException
724+
725+ def test_error(self):
726+ raise SyntaxError
727+
728+ def test_skip(self):
729+ self.skipTest('')
730+
731+ def test_skip_reason(self):
732+ self.skipTest('Because')
733+
734+ def test_expected_failure(self):
735+ # We expect the test to fail and it does
736+ self.expectFailure("1 should be 0", self.assertEqual, 1, 0)
737+
738+ def test_unexpected_success(self):
739+ # We expect the test to fail but it doesn't
740+ self.expectFailure("1 is not 1", self.assertEqual, 1, 1)
741+
742+ test_method = 'test_%s' % kind
743+ return Test(test_method)
744+
745+
746+def expand_template_for_test(template, test):
747+ """Expand common references in template.
748+
749+ Tests that check runs output can be simplified if they use templates
750+ instead of literal expected strings.
751+
752+ :param template: A string where common strings have been replaced by a
753+ keyword.
754+
755+ :param test: The test case under scrutiny.
756+ """
757+ kwargs = dict(id=test.id())
758+ return template.format(**kwargs)
759+
760+
761+class StatusFormatTestResultTests(TestCase):
762+
763+ def setUp(self):
764+ super(StatusFormatTestResultTests, self).setUp()
765+
766+ def assertOutput(self, template, kind):
767+ test = get_case(kind)
768+ result = StatusFormatTestResult(StringIO(), False)
769+ test.run(result)
770+
771+ expected = expand_template_for_test(template, test)
772+ self.assertEquals(expected, result.stream.getvalue().strip())
773+
774+ def test_pass(self):
775+ self.assertOutput('{id} ... OK', 'pass')
776+
777+ def test_fail(self):
778+ self.assertOutput('{id} ... FAIL', 'fail')
779+
780+ def test_error(self):
781+ self.assertOutput('{id} ... ERROR', 'error')
782+
783+ def test_skip(self):
784+ self.assertOutput('{id} ... SKIP', 'skip')
785+
786+ def test_skip_reason(self):
787+ self.assertOutput('{id} ... SKIP (Because)', 'skip_reason')
788+
789+ def test_expected_failure(self):
790+ self.assertOutput('{id} ... EXPECTED FAIL', 'expected_failure')
791+
792+ def test_unexpected_success(self):
793+ self.assertOutput('{id} ... UNEXPECTED SUCCESS', 'unexpected_success')
794+
795+ def test_result_uses_status_class(self):
796+ args = Namespace()
797+ args.failfast = False
798+ args.output = None
799+ args.format = 'status'
800+ result = run.construct_test_result(args)
801+ self.assertIsInstance(result, StatusFormatTestResult)
802
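For orientation, the `StatusFormatTestResultTests` templates above (`{id} ... OK`, `{id} ... FAIL`, etc.) describe a result class that emits one status line per test as it runs. A minimal stdlib sketch of that idea (not the real `StatusFormatTestResult`, whose implementation lives in `autopilot/testresult.py`):

```python
import unittest
from io import StringIO

class SketchStatusResult(unittest.TestResult):
    """Illustrative result that prints '<test id> ... STATUS' lines."""

    def __init__(self, stream):
        super().__init__()
        self.stream = stream

    def addSuccess(self, test):
        super().addSuccess(test)
        self.stream.write('%s ... OK\n' % test.id())

    def addFailure(self, test, err):
        super().addFailure(test, err)
        self.stream.write('%s ... FAIL\n' % test.id())

class PassingTest(unittest.TestCase):
    def test_pass(self):
        pass

stream = StringIO()
PassingTest('test_pass').run(SketchStatusResult(stream))
assert stream.getvalue().strip().endswith('... OK')
```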
803=== modified file 'autopilot/tests/unit/test_testresults.py'
804--- autopilot/tests/unit/test_testresults.py 2014-07-01 00:36:26 +0000
805+++ autopilot/tests/unit/test_testresults.py 2014-07-22 03:40:04 +0000
806@@ -27,8 +27,7 @@
807 from testscenarios import WithScenarios
808 import unittest
809
810-from autopilot import testresult
811-from autopilot import run
812+from autopilot import run, testresult
813
814
815 class LoggedTestResultDecoratorTests(TestCase):
816@@ -110,6 +109,9 @@
817
818 class OutputFormatFactoryTests(TestCase):
819
820+ def test_has_status_format(self):
821+ self.assertTrue('status' in testresult.get_output_formats())
822+
823 def test_has_text_format(self):
824 self.assertTrue('text' in testresult.get_output_formats())
825
826@@ -278,6 +280,22 @@
827 self.assertFalse(test_result.wasSuccessful())
828 self.assertEqual(1, test_result.testsRun)
829
830+ def test_traceback_added_as_detail_on_fail(self):
831+ msg = "failure info"
832+
833+ class FailingTests(TestCase):
834+
835+ def test_fails(self):
836+ self.fail(msg)
837+
838+ test = FailingTests('test_fails')
839+ test_result, output_path = self.run_test_with_result(
840+ test
841+ )
842+ self.assertFalse(test_result.wasSuccessful())
843+ tb_detail = test.getDetails()['traceback'].as_text()
844+ self.assertThat(tb_detail, Contains(msg))
845+
846
847 def remove_if_exists(path):
848 if os.path.exists(path):

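The new `test_traceback_added_as_detail_on_fail` above checks that a failure's traceback is attached to the test as a text detail. How autopilot builds that detail is not shown in this diff; a rough stdlib sketch of capturing a traceback as text (an assumption for illustration only):

```python
import traceback

# Sketch: capture a failure's traceback as plain text, analogous to the
# 'traceback' detail asserted on in the unit test above.
try:
    raise AssertionError('failure info')
except AssertionError:
    tb_text = traceback.format_exc()

assert 'failure info' in tb_text
assert 'AssertionError' in tb_text
```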