Merge lp:~veebers/autopilot/texttest-run-uprefresh into lp:autopilot

Proposed by Christopher Lee
Status: Needs review
Proposed branch: lp:~veebers/autopilot/texttest-run-uprefresh
Merge into: lp:autopilot
Diff against target: 848 lines (+351/-130)
7 files modified
autopilot/__init__.py (+1/-1)
autopilot/run.py (+21/-19)
autopilot/testresult.py (+85/-22)
autopilot/tests/functional/test_autopilot_functional.py (+87/-54)
autopilot/tests/unit/test_command_line_args.py (+10/-2)
autopilot/tests/unit/test_run.py (+127/-30)
autopilot/tests/unit/test_testresults.py (+20/-2)
To merge this branch: bzr merge lp:~veebers/autopilot/texttest-run-uprefresh
Reviewer Review Type Date Requested Status
platform-qa-bot continuous-integration Approve
PS Jenkins bot continuous-integration Needs Fixing
prod-platform-qa continuous-integration Pending
Christopher Lee Pending
Thomi Richards Pending
Corey Goldberg Pending
Review via email: mp+225247@code.launchpad.net

This proposal supersedes a proposal from 2014-03-03.

Commit message

Improve non-verbose output to console during tests

Description of the change

(Resubmitted w/ merged trunk + conflict fixes.)

This branch improves the non-verbose console output so that it resembles the standard unittest runner's: it prints the status of each test as it runs, so autopilot is no longer silent in normal (non-verbose) mode.

This branch also bumps the Python version used for testing in the tox.ini configuration to Python 3.4.

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
Corey Goldberg (coreygoldberg) wrote : Posted in a previous version of this proposal

from irc with thomi:

<thomi> you should flush stdout after startTest, since we want to see that output in the case where a test takes a long time to run
<cgoldberg> right
<thomi> also, I think the skip line should have the optional message in '( )'
 so like:
 test.id ... SKIP (Not runnable on the device)
 for example
<cgoldberg> yup.. agree
<thomi> also, I think the status messages can be a bit more verbose.
 particularly, please change:
 XFAIL => EXPECTED FAIL
 and
 NOTOK => UNEXPECTED SUCCESS
<cgoldberg> right. makes sense. I copied the unittest runner's output, but there's no reason not to be a little more explicit
<thomi> also, you should generalise the code (diff lines 82-92) to go in a separate function, something like _wrap_result_with_output_decorator
 which accepts a result object, and wraps it in either a LoggingResultDecorator or your new decorator
 that way, it's easier to test, and we keep the complexity of the construct_XXX functions down
<thomi> cgoldberg: in your tests, I think you can make your 'assertOutput' lines more readable, by doing:
<cgoldberg> thomi, ok.. i can wrap it in a function
<thomi> self.assertOutput('{id} ... OK\n', 'pass')
 one line is better than 3, since we read left->right
 other than those minor quibbles above ^^, this looks great, but you're missing some tests still
<cgoldberg> thomi, hah.. will do
<thomi> I'd like to see an integration test that shows that when you specify the various result formats without the verbose flag, we get the correct result object & wrapper
 similarly for when you do specify the verbose flag
<cgoldberg> gotcha.. yea i can add that too
<thomi> Finally, I think you should add 6 functional tests to the functional test suite
 1 test for each format, * 1 with verbose, and one without verbose
 actually, the verbose case is almost certainly covered already (but please do check)
 so maybe it's only 3 new functional tests :)

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
Corey Goldberg (coreygoldberg) wrote : Posted in a previous version of this proposal

from thomi:

OK, so we need to change things. Seems like there are several things we're trying to control independently of each other here:

1) The level of verbosity autopilot uses when logging. Ideally we'd make sure that all our log levels were sensible, and then this would control the logging framework verbosity level. 0 * '-v' => normal and higher. '-v' => info logs and higher. '-vv' => debug logs and higher. The log *always* gets attached to the test result for every test, regardless of the output file format.

2) The format we store run reports in. We have 'text', 'xml', 'subunit'. I think we need one more, which is 'verbosetext' or something similar. This causes the test log to get printed to stdout as the test is running. This is equivalent to the '-v' flag before this change.

3) The location of the output report. If the -o option is specified, the report is written to that file, in the format specified by the '-f' parameter.

Finally, if, after configuring all this, stdout is not used, the 'status' format should be printed to stdout as well.

review: Needs Fixing
Revision history for this message
Corey Goldberg (coreygoldberg) wrote : Posted in a previous version of this proposal

Four output formats will be:
* status (new format)
* xml
* subunit
* text
Every format can be sent to stdout or a file.
-v / -vv flag ONLY controls the verbosity of the python log.
make subclass of TextTestResult to print to stream

Revision history for this message
Corey Goldberg (coreygoldberg) wrote : Posted in a previous version of this proposal

make above fixes

review: Needs Fixing
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
Christopher Lee (veebers) wrote : Posted in a previous version of this proposal

Just a couple of minor things.

Initially I thought I misunderstood the verbose/output changes until I read the comment re: verbose output only going to a log file. I think that the -h output needs to be updated to state as such.
(actually I think I'm confused on this point now).

The help states that the default format for -f is text, where it is now 'status' (I was initially a little confused why my -v wasn't outputting the tapping details etc.).

line 488-491 has some odd indentation. I would put the opening [ on a new line i.e.
  self.run_autopilot(
      [ "run", "--failfast", "-f", "text", "tests"]
  )
# You could put the list args on new lines too if you wanted, but not needed here.

That's all for now, I'll try to give a more functional review later on :-)

review: Needs Fixing
Revision history for this message
Corey Goldberg (coreygoldberg) wrote : Posted in a previous version of this proposal

thanks veebers!

> I think that the -h output needs to be updated to state as such.
> (actually I think I'm confused on this point now).

I will update -h commandline help, and /docs today with clear explanation of log and output handling.

> line 488-491 has some odd indentation.

I just pushed a fix for indentation in functional tests.

Revision history for this message
Corey Goldberg (coreygoldberg) wrote : Posted in a previous version of this proposal

updated help strings.

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Approve (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
Thomi Richards (thomir-deactivatedaccount) wrote : Posted in a previous version of this proposal

Hi,

A few things:

40 - '-v', '--verbose', required=False, default=False, action='count',
41 - help="Show autopilot log messages. Set twice to also log data "
42 - "useful for debugging autopilot itself.")
43 + "-v", "--verbosity", action='count', default=0, required=False,
44 + help="Increase verbosity of test details and 'text' mode output. "
45 + "Set twice to also log data useful for debugging autopilot "
46 + "itself.")

This is incorrect, since this is for the vis tool. The original message was correct, I don't think you need to change it at all.

54 - '-v', '--verbose', required=False, default=False, action='count',
55 - help="Show autopilot log messages. Set twice to also log data useful "
56 - "for debugging autopilot itself.")
57 + "-v", "--verbosity", action='count', default=0, required=False,
58 + help="Increase verbosity of test details and 'text' mode output. "
59 + "Set twice to also log data useful for debugging autopilot "
60 + "itself.")

Same here - this is for the launch command.

A small thing, but this:

85 -# Copyright (C) 2012-2013 Canonical
86 +# Copyright (C) 2012,2013,2014 Canonical

Should be "2012-2014", not "2012,2013,2014", as per legal advice.

102 from autopilot import get_version_string, parse_arguments
103 -import autopilot.globals
104 from autopilot._debug import get_all_debug_profiles
105 -from autopilot.testresult import get_output_formats
106 -from autopilot.utilities import DebugLogFilter, LogFormatter
107 from autopilot.application._launcher import (
108 _get_app_env_from_string_hint,
109 get_application_launcher_wrapper,
110 launch_process,
111 )
112 +import autopilot.globals
113 +from autopilot.testresult import get_output_formats
114 +from autopilot.utilities import DebugLogFilter, LogFormatter

This isn't the standard we've set in the AP codebase. We mix 'import FOO' and 'from FOO import BAR' statements in one block, and alphabeticise the module names.

565 + def test_failfast_text_mode(self):

Please move the test_failfast test to a new class, and use scenarios to make sure that *all* test result formats support failfast, not just text and status.

Here's the big change I think we need to make:

Currently, the logger is set up to log to the same stream as the result object. This works for the text format, but not the others. Instead, we should make it so the test log is added to the test result as a detail, for every format. This will help us with the subunit format, for example. TBH, I thought this already happened, but I can't see the code now. Feel free to point it out to me :)
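The "log as a detail" idea could be sketched with an in-memory handler whose captured text is later attached to the result as a 'test-log' detail (a minimal illustration of the idea, not the eventual autopilot implementation):

```python
import io
import logging


def capture_log_for_details():
    """Start capturing the root logger's output in memory.

    Returns a 'stop' callable that detaches the handler and returns
    the captured text, ready to be attached to the test result as a
    'test-log' detail regardless of the output format.
    """
    buffer = io.StringIO()
    handler = logging.StreamHandler(buffer)
    root = logging.getLogger()
    root.addHandler(handler)

    def stop():
        root.removeHandler(handler)
        return buffer.getvalue()

    return stop
```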

Cheers

review: Needs Fixing
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal
review: Needs Fixing (continuous-integration)
Revision history for this message
Corey Goldberg (coreygoldberg) wrote : Posted in a previous version of this proposal

fixed:

- fixed copyright date format.
- reverted help text for -v arg in launch and vis modes
- added scenario tests for failfast to cover all output formats

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
483. By Christopher Lee

Fix introduced flake8 issues

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :

FAILED: Continuous integration, rev:483
No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want a jenkins rebuild you need to trigger it yourself):
https://code.launchpad.net/~veebers/autopilot/texttest-run-uprefresh/+merge/225247/+edit-commit-message

http://jenkins.qa.ubuntu.com/job/autopilot-ci/751/
Executed test runs:
    SUCCESS: http://jenkins.qa.ubuntu.com/job/autopilot-utopic-amd64-ci/25
        deb: http://jenkins.qa.ubuntu.com/job/autopilot-utopic-amd64-ci/25/artifact/work/output/*zip*/output.zip
    SUCCESS: http://jenkins.qa.ubuntu.com/job/autopilot-utopic-armhf-ci/25
        deb: http://jenkins.qa.ubuntu.com/job/autopilot-utopic-armhf-ci/25/artifact/work/output/*zip*/output.zip
    SUCCESS: http://jenkins.qa.ubuntu.com/job/autopilot-utopic-i386-ci/25
        deb: http://jenkins.qa.ubuntu.com/job/autopilot-utopic-i386-ci/25/artifact/work/output/*zip*/output.zip
    UNSTABLE: http://jenkins.qa.ubuntu.com/job/generic-mediumtests-utopic-autopilot/151
    UNSTABLE: http://jenkins.qa.ubuntu.com/job/autopilot-testrunner-otto-utopic-autopilot/238
    SUCCESS: http://jenkins.qa.ubuntu.com/job/generic-mediumtests-builder-utopic-amd64/1459
        deb: http://jenkins.qa.ubuntu.com/job/generic-mediumtests-builder-utopic-amd64/1459/artifact/work/output/*zip*/output.zip

Click here to trigger a rebuild:
http://s-jenkins.ubuntu-ci:8080/job/autopilot-ci/751/rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
484. By Christopher Lee

Fix 3 failing tests with typo change

485. By Christopher Lee

Fix sp error in test (verbose vs verbosity)

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
486. By Christopher Lee

Merge trunk + fix conflicts

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
platform-qa-bot (platform-qa-bot) wrote :
review: Approve (continuous-integration)

Unmerged revisions

486. By Christopher Lee

Merge trunk + fix conflicts

485. By Christopher Lee

Fix sp error in test (verbose vs verbosity)

484. By Christopher Lee

Fix 3 failing tests with typo change

483. By Christopher Lee

Fix introduced flake8 issues

482. By Christopher Lee

Merge trunk + fix conflicts

481. By Corey Goldberg

fixed comments

480. By Corey Goldberg

flake8 fixes

479. By Corey Goldberg

details unit tests

478. By Corey Goldberg

fixes

477. By Corey Goldberg

fixes per latest review

Preview Diff

=== modified file 'autopilot/__init__.py'
--- autopilot/__init__.py 2014-05-20 14:43:24 +0000
+++ autopilot/__init__.py 2014-07-22 03:40:04 +0000
@@ -1,7 +1,7 @@
 # -*- Mode: Python; coding: utf-8; indent-tabs-mode: nil; tab-width: 4 -*-
 #
 # Autopilot Functional Test Tool
-# Copyright (C) 2012-2013 Canonical
+# Copyright (C) 2012-2014 Canonical
 #
 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
 
=== modified file 'autopilot/run.py'
--- autopilot/run.py 2014-07-02 21:07:33 +0000
+++ autopilot/run.py 2014-07-22 03:40:04 +0000
@@ -31,15 +31,14 @@
 import os.path
 from platform import node
 from random import shuffle
-import six
 import subprocess
 import sys
 from unittest import TestLoader, TestSuite
 
+import six
 from testtools import iterate_tests
 
 from autopilot import get_version_string, have_vis
-import autopilot.globals
 from autopilot import _config as test_config
 from autopilot._debug import (
     get_all_debug_profiles,
@@ -52,6 +51,7 @@
     get_application_launcher_wrapper,
     launch_process,
 )
+import autopilot.globals
 
 
 def _get_parser():
@@ -215,15 +215,18 @@
         setattr(namespace, self.dest, values)
 
 
-def setup_logging(verbose):
-    """Configure the root logger and verbose logging to stderr."""
+def setup_logging(verbosity, path=None, format=None):
+    """Configure the root logger"""
     root_logger = get_root_logger()
     root_logger.setLevel(logging.DEBUG)
-    if verbose == 0:
+    if format == 'text':
+        stream = get_output_stream(format, path)
+        set_stream_handler(root_logger, stream)
+    else:
         set_null_log_handler(root_logger)
-    if verbose >= 1:
-        set_stderr_stream_handler(root_logger)
-    if verbose >= 2:
+    if verbosity >= 1:
+        autopilot.globals.set_log_verbose(True)
+    if verbosity >= 2:
         enable_debug_log_messages()
     # log autopilot version
     root_logger.info(get_version_string())
@@ -237,11 +240,10 @@
     root_logger.addHandler(logging.NullHandler())
 
 
-def set_stderr_stream_handler(root_logger):
-    formatter = LogFormatter()
-    stderr_handler = logging.StreamHandler(stream=sys.stderr)
-    stderr_handler.setFormatter(formatter)
-    root_logger.addHandler(stderr_handler)
+def set_stream_handler(root_logger, stream):
+    handler = logging.StreamHandler(stream=stream)
+    handler.setFormatter(LogFormatter())
+    root_logger.addHandler(handler)
 
 
 def enable_debug_log_messages():
@@ -256,7 +258,7 @@
     )
 
 
-def get_output_stream(format, path):
+def get_output_stream(format, path=None):
     """Get an output stream pointing to 'path' that's appropriate for format
     'format'.
 
@@ -270,7 +272,7 @@
     log_file = _get_log_file_path(path)
     if format == 'xml':
         return _get_text_mode_file_stream(log_file)
-    elif format == 'text':
+    elif format in ('status', 'text'):
         return _get_binary_mode_file_stream(log_file)
     else:
         return _get_raw_binary_mode_file_stream(log_file)
@@ -639,7 +641,10 @@
         self.args = defined_args or _parse_arguments()
 
     def run(self):
-        setup_logging(getattr(self.args, 'verbose', False))
+        verbosity = getattr(self.args, 'verbose', 0)
+        format = getattr(self.args, 'format', 'status')
+        path = getattr(self.args, 'output', '')
+        setup_logging(verbosity, path, format)
 
         action = None
         if self.args.mode == 'list':
@@ -712,9 +717,6 @@
         print("Error: %s" % str(e))
         exit(1)
 
-    if self.args.verbose:
-        autopilot.globals.set_log_verbose(True)
-
     result = construct_test_result(self.args)
     result.startTestRun()
     try:
 
=== modified file 'autopilot/testresult.py'
--- autopilot/testresult.py 2014-07-14 23:30:58 +0000
+++ autopilot/testresult.py 2014-07-22 03:40:04 +0000
@@ -1,7 +1,7 @@
 # -*- Mode: Python; coding: utf-8; indent-tabs-mode: nil; tab-width: 4 -*-
 #
 # Autopilot Functional Test Tool
-# Copyright (C) 2012-2013 Canonical
+# Copyright (C) 2012,2013,2014 Canonical
 #
 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
@@ -32,18 +32,81 @@
     try_import,
 )
 
-from autopilot.globals import get_log_verbose
 from autopilot.utilities import _raise_on_unknown_kwargs
 
 
+class StatusFormatTestResult(TextTestResult):
+    """A TextTestResult that prints status messages to a stream."""
+
+    def __init__(self, stream, failfast):
+        super(StatusFormatTestResult, self).__init__(stream, failfast)
+        self.stream = stream
+
+    def startTest(self, test):
+        self.stream.write(u'%s' % test.id())
+        self.stream.write(u' ... ')
+        self.stream.flush()
+        super(StatusFormatTestResult, self).startTest(test)
+
+    def stopTest(self, test):
+        self.stream.write(u'\n')
+        self.stream.flush()
+        super(StatusFormatTestResult, self).stopTest(test)
+
+    def stopTestRun(self):
+        self.stream.write(u'-' * 80)
+        self.stream.write(u'\n')
+        self.stream.flush()
+        super(StatusFormatTestResult, self).stopTestRun()
+
+    def addExpectedFailure(self, test, err=None, details=None):
+        self.stream.write(u'EXPECTED FAIL')
+        super(StatusFormatTestResult, self).addExpectedFailure(
+            test, err, details
+        )
+
+    def addError(self, test, err=None, details=None):
+        self.stream.write(u'ERROR')
+        super(StatusFormatTestResult, self).addError(
+            test, err, details
+        )
+
+    def addFailure(self, test, err=None, details=None):
+        self.stream.write(u'FAIL')
+        super(StatusFormatTestResult, self).addFailure(
+            test, err, details
+        )
+
+    def addSkip(self, test, reason, details=None):
+        if not reason:
+            reason_displayed = ''
+        else:
+            reason_displayed = ' (%s)' % reason
+        self.stream.write(u'SKIP%s' % reason_displayed)
+        super(StatusFormatTestResult, self).addSkip(
+            test, reason, details
+        )
+
+    def addSuccess(self, test, details=None):
+        self.stream.write(u'OK')
+        super(StatusFormatTestResult, self).addSuccess(
+            test, details
+        )
+
+    def addUnexpectedSuccess(self, test, details=None):
+        self.stream.write(u'UNEXPECTED SUCCESS')
+        super(StatusFormatTestResult, self).addUnexpectedSuccess(
+            test, details
+        )
+
+
 class LoggedTestResultDecorator(TestResultDecorator):
 
     """A decorator that logs messages to python's logging system."""
 
     def _log(self, level, message):
-        """Perform the actual message logging."""
-        if get_log_verbose():
-            logging.getLogger().log(level, message)
+        """Perform the actual message logging"""
+        logging.getLogger().log(level, message)
 
     def _log_details(self, level, details):
         """Log the relavent test details."""
@@ -87,9 +150,8 @@
 
     """
     supported_formats = {}
-
+    supported_formats['status'] = _construct_status
     supported_formats['text'] = _construct_text
-
     if try_import('junitxml'):
         supported_formats['xml'] = _construct_xml
     if try_import('subunit'):
@@ -98,7 +160,7 @@
 
 
 def get_default_format():
-    return 'text'
+    return 'status'
 
 
 def _construct_xml(**kwargs):
@@ -106,16 +168,21 @@
     stream = kwargs.pop('stream')
     failfast = kwargs.pop('failfast')
     _raise_on_unknown_kwargs(kwargs)
-    result_object = LoggedTestResultDecorator(
-        ExtendedToOriginalDecorator(
-            JUnitXmlResult(stream)
-        )
-    )
-    result_object.failfast = failfast
-    return result_object
+    result = ExtendedToOriginalDecorator(JUnitXmlResult(stream))
+    result.failfast = failfast
+    return result
+
+
+def _construct_status(**kwargs):
+    """Status (terse) text output."""
+    stream = kwargs.pop('stream')
+    failfast = kwargs.pop('failfast')
+    _raise_on_unknown_kwargs(kwargs)
+    return StatusFormatTestResult(stream, failfast)
 
 
 def _construct_text(**kwargs):
+    """Verbose text output."""
     stream = kwargs.pop('stream')
     failfast = kwargs.pop('failfast')
     _raise_on_unknown_kwargs(kwargs)
@@ -127,10 +194,6 @@
     stream = kwargs.pop('stream')
     failfast = kwargs.pop('failfast')
     _raise_on_unknown_kwargs(kwargs)
-    result_object = LoggedTestResultDecorator(
-        ExtendedToStreamDecorator(
-            StreamResultToBytes(stream)
-        )
-    )
-    result_object.failfast = failfast
-    return result_object
+    result = ExtendedToStreamDecorator(StreamResultToBytes(stream))
+    result.failfast = failfast
+    return result
 
=== modified file 'autopilot/tests/functional/test_autopilot_functional.py'
--- autopilot/tests/functional/test_autopilot_functional.py 2014-07-14 04:07:05 +0000
+++ autopilot/tests/functional/test_autopilot_functional.py 2014-07-22 03:40:04 +0000
@@ -25,9 +25,11 @@
 import os.path
 import re
 from tempfile import mktemp
+from textwrap import dedent
+
+from testscenarios import WithScenarios
 from testtools import skipIf
 from testtools.matchers import Contains, Equals, MatchesRegex, Not
-from textwrap import dedent
 
 from autopilot import platform
 from autopilot.tests.functional import AutopilotRunTestBase, remove_if_exists
@@ -770,15 +772,10 @@
 
 class AutopilotVerboseFunctionalTests(AutopilotFunctionalTestsBase):
 
-    """Scenarioed functional tests for autopilot's verbose logging."""
-
-    scenarios = [
-        ('text_format', dict(output_format='text')),
-        ('xml_format', dict(output_format='xml'))
-    ]
+    """Functional tests for autopilot's verbose logging."""
 
     def test_verbose_flag_works(self):
-        """Verbose flag must log to stderr."""
+        """Verbose flag must log to stdout."""
         self.create_test_file(
             "test_simple.py", dedent("""\
 
@@ -792,13 +789,13 @@
             """)
         )
 
-        code, output, error = self.run_autopilot(["run",
-                                                 "-f", self.output_format,
-                                                 "-v", "tests"])
+        code, output, error = self.run_autopilot(
+            ["run", "-f", "text", "-v", "tests"]
+        )
 
         self.assertThat(code, Equals(0))
         self.assertThat(
-            error, Contains(
+            output, Contains(
                 "Starting test tests.test_simple.SimpleTest.test_simple"))
 
     def test_verbose_flag_shows_timestamps(self):
@@ -816,11 +813,11 @@
             """)
         )
 
-        code, output, error = self.run_autopilot(["run",
                                                 "-f", self.output_format,
-                                                 "-v", "tests"])
+        code, output, error = self.run_autopilot(
+            ["run", "-f", "text", "-v", "tests"]
+        )
 
-        self.assertThat(error, MatchesRegex("^\d\d:\d\d:\d\d\.\d\d\d"))
+        self.assertThat(output, MatchesRegex("^\d\d:\d\d:\d\d\.\d\d\d"))
 
     def test_verbose_flag_shows_success(self):
         """Verbose log must indicate successful tests (text format)."""
@@ -837,12 +834,12 @@
             """)
         )
 
-        code, output, error = self.run_autopilot(["run",
-                                                 "-f", self.output_format,
-                                                 "-v", "tests"])
+        code, output, error = self.run_autopilot(
+            ["run", "-f", "text", "-v", "tests"]
+        )
 
         self.assertThat(
-            error, Contains("OK: tests.test_simple.SimpleTest.test_simple"))
+            output, Contains("OK: tests.test_simple.SimpleTest.test_simple"))
 
     def test_verbose_flag_shows_error(self):
         """Verbose log must indicate test error with a traceback."""
@@ -859,15 +856,17 @@
             """)
         )
 
-        code, output, error = self.run_autopilot(["run",
-                                                 "-f", self.output_format,
-                                                 "-v", "tests"])
+        code, output, error = self.run_autopilot(
+            ["run", "-f", "text", "-v", "tests"]
+        )
 
         self.assertThat(
-            error, Contains("ERROR: tests.test_simple.SimpleTest.test_simple"))
-        self.assertThat(error, Contains("traceback:"))
+            output,
+            Contains("ERROR: tests.test_simple.SimpleTest.test_simple")
+        )
+        self.assertThat(output, Contains("traceback:"))
         self.assertThat(
-            error,
+            output,
             Contains("RuntimeError: Intentionally fail test.")
         )
 
@@ -887,13 +886,13 @@
             """)
         )
 
-        code, output, error = self.run_autopilot(["run",
-                                                 "-f", self.output_format,
-                                                 "-v", "tests"])
+        code, output, error = self.run_autopilot(
+            ["run", "-f", "text", "-v", "tests"]
+        )
 
-        self.assertIn("FAIL: tests.test_simple.SimpleTest.test_simple", error)
-        self.assertIn("traceback:", error)
-        self.assertIn("AssertionError: False is not true", error)
+        self.assertIn("FAIL: tests.test_simple.SimpleTest.test_simple", output)
+        self.assertIn("traceback:", output)
+        self.assertIn("AssertionError: False is not true", output)
 
     def test_verbose_flag_captures_nested_autopilottestcase_classes(self):
         """Verbose log must contain the log details of both the nested and
@@ -917,20 +916,20 @@
             """)
         )
 
-        code, output, error = self.run_autopilot(["run",
-                                                 "-f", self.output_format,
-                                                 "-v", "tests"])
+        code, output, error = self.run_autopilot(
+            ["run", "-f", "text", "-v", "tests"])
 
         self.assertThat(code, Equals(0))
+
         self.assertThat(
-            error,
+            output,
             Contains(
                 "Starting test tests.test_simple.OuterTestCase."
                 "test_nested_classes"
            )
        )
        self.assertThat(
-            error,
+            output,
            Contains(
                "Starting test tests.test_simple.InnerTestCase."
                "test_produce_log_output"
@@ -953,11 +952,10 @@
             """)
         )
 
-        code, output, error = self.run_autopilot(["run",
-                                                 "-f", self.output_format,
-                                                 "-vv", "tests"])
+        code, output, error = self.run_autopilot(
+            ["run", "-f", "text", "-vv", "tests"])
 
-        self.assertThat(error, Contains("Hello World"))
+        self.assertThat(output, Contains("Hello World"))
 
     def test_debug_output_not_shown_by_default(self):
963 """Verbose log must not show debug messages unless we specify '-vv'."""961 """Verbose log must not show debug messages unless we specify '-vv'."""
@@ -975,11 +973,11 @@
975 """)973 """)
976 )974 )
977975
978 code, output, error = self.run_autopilot(["run",976 code, output, error = self.run_autopilot(
979 "-f", self.output_format,977 ["run", "-f", "text", "-v", "tests"]
980 "-v", "tests"])978 )
981979
982 self.assertThat(error, Not(Contains("Hello World")))980 self.assertThat(output, Not(Contains("Hello World")))
983981
984 def test_verbose_flag_shows_autopilot_version(self):982 def test_verbose_flag_shows_autopilot_version(self):
985 from autopilot import get_version_string983 from autopilot import get_version_string
@@ -997,13 +995,23 @@
997 """)995 """)
998 )996 )
999997
1000 code, output, error = self.run_autopilot(["run",998 code, output, error = self.run_autopilot(
1001 "-f", self.output_format,999 ["run", "-f", "text", "-v", "tests"]
1002 "-v", "tests"])1000 )
1003 self.assertThat(1001 self.assertThat(
1004 error, Contains(get_version_string()))1002 output, Contains(get_version_string()))
10051003
1006 def test_failfast(self):1004
1005class AutopilotFailFastTests(AutopilotRunTestBase, WithScenarios):
1006
1007 scenarios = [
1008 ('status', dict(format='status')),
1009 ('text', dict(format='text')),
1010 ('xml', dict(format='xml')),
1011 ('subunit', dict(format='subunit')),
1012 ]
1013
1014 def test_failfast_text_mode(self):
1007 """Run stops after first error encountered."""1015 """Run stops after first error encountered."""
1008 self.create_test_file(1016 self.create_test_file(
1009 'test_failfast.py', dedent("""\1017 'test_failfast.py', dedent("""\
@@ -1020,9 +1028,34 @@
1020 raise Exception1028 raise Exception
1021 """)1029 """)
1022 )1030 )
1023 code, output, error = self.run_autopilot(["run",1031 code, output, error = self.run_autopilot(
1024 "--failfast",1032 ["run", "--failfast", "-f", self.format, "tests"]
1025 "tests"])1033 )
1026 self.assertThat(code, Equals(1))1034 self.assertThat(code, Equals(1))
1027 self.assertIn("Ran 1 test", output)1035 self.assertIn("Ran 1 test", output)
1028 self.assertIn("FAILED (failures=1)", output)1036 self.assertIn("FAILED (failures=1)", output)
1037
1038
1039class AutopilotTestDetailsTests(AutopilotRunTestBase):
1040
1041 def test_logging_added_as_test_detail(self):
1042 msg = "This is some information"
1043 self.create_test_file(
1044 'test_details.py', dedent("""\
1045 import logging
1046 from autopilot.testcase import AutopilotTestCase
1047
1048
1049 logger = logging.getLogger(__name__)
1050
1051 class SimpleTest(AutopilotTestCase):
1052
1053 def test_one(self):
1054 logger.info('%s')
1055
1056 """ % msg)
1057 )
1058 code, output, error = self.run_autopilot(
1059 ["run", "-f", "text", "tests"]
1060 )
1061 self.assertIn(msg, output)
10291062
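Most of the assertion churn above comes from the verbose log moving from stderr to stdout, so the functional tests now inspect `output` rather than `error`. A minimal stdlib sketch of the capture pattern these tests rely on (the helper name and the sample command are illustrative, not Autopilot's actual `run_autopilot` helper):

```python
import subprocess
import sys


def run_command(argv):
    # Run a command and return (exit code, stdout, stderr) as separate
    # strings, mirroring the (code, output, error) triple used above.
    proc = subprocess.Popen(
        argv, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        universal_newlines=True,
    )
    out, err = proc.communicate()
    return proc.returncode, out, err


code, output, error = run_command(
    [sys.executable, "-c", "print('OK: tests.test_simple')"]
)
assert code == 0
# The log line lands on stdout, so assertions target `output`:
assert "OK: tests.test_simple" in output
```

Whether a given message appears in `output` or `error` depends entirely on which stream the child process writes to, which is exactly the behaviour this branch changes.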
=== modified file 'autopilot/tests/unit/test_command_line_args.py'
--- autopilot/tests/unit/test_command_line_args.py 2014-05-23 11:28:21 +0000
+++ autopilot/tests/unit/test_command_line_args.py 2014-07-22 03:40:04 +0000
@@ -76,7 +76,7 @@
 
     def test_launch_command_has_correct_default_verbosity(self):
         args = parse_args("launch app")
-        self.assertThat(args.verbose, Equals(False))
+        self.assertThat(args.verbose, Equals(0))
 
     def test_launch_command_can_specify_verbosity(self):
         args = parse_args("launch -v app")
@@ -211,7 +211,15 @@
 
     def test_run_command_default_format(self):
         args = parse_args('run foo')
-        self.assertThat(args.format, Equals("text"))
+        self.assertThat(args.format, Equals("status"))
+
+    def test_run_command_status_format_short_version(self):
+        args = parse_args('run -f status foo')
+        self.assertThat(args.format, Equals("status"))
+
+    def test_run_command_status_format_long_version(self):
+        args = parse_args('run --format status foo')
+        self.assertThat(args.format, Equals("status"))
 
     def test_run_command_text_format_short_version(self):
         args = parse_args('run -f text foo')
 
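The new expectations — `verbose` defaulting to `0` instead of `False`, `-vv` meaning level 2, and `status` as the default format — are consistent with an argparse counting flag. A hypothetical sketch of such a parser (option names mirror the tests above; this is not the actual `autopilot.run` parser code):

```python
import argparse

# Sketch: '-v' with action='count' yields an integer verbosity
# (0 by default, 1 for -v, 2 for -vv) rather than a boolean.
parser = argparse.ArgumentParser(prog="autopilot")
parser.add_argument(
    "-v", "--verbose", action="count", default=0,
    help="Show log messages; repeat (-vv) to include debug output.",
)
parser.add_argument(
    "-f", "--format", default="status",
    choices=["status", "text", "xml", "subunit"],
)

assert parser.parse_args([]).verbose == 0        # default is 0, not False
assert parser.parse_args(["-v"]).verbose == 1
assert parser.parse_args(["-vv"]).verbose == 2
assert parser.parse_args([]).format == "status"  # new default format
```

This is why `test_launch_command_has_correct_default_verbosity` now asserts `Equals(0)`: a count action's default is an integer.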
=== modified file 'autopilot/tests/unit/test_run.py'
--- autopilot/tests/unit/test_run.py 2014-05-23 11:28:21 +0000
+++ autopilot/tests/unit/test_run.py 2014-07-22 03:40:04 +0000
@@ -18,7 +18,7 @@
 #
 
 from argparse import Namespace
-from unittest.mock import Mock, patch
+from io import StringIO
 import logging
 import os.path
 from shutil import rmtree
@@ -37,12 +37,17 @@
     Raises,
     StartsWith,
 )
+from unittest.mock import Mock, patch
 
 if six.PY3:
     from contextlib import ExitStack
 else:
     from contextlib2 import ExitStack
 
+from autopilot.testresult import (
+    LoggedTestResultDecorator,
+    StatusFormatTestResult,
+)
 from autopilot import have_vis, run
 
 
@@ -595,27 +600,10 @@
             logging.DEBUG
         )
 
-    def test_set_null_log_handler(self):
+    def test_stream_handler_sets_custom_formatter(self):
         mock_root_logger = Mock()
-        run.set_null_log_handler(mock_root_logger)
-
-        self.assertThat(
-            mock_root_logger.addHandler.call_args[0][0],
-            IsInstance(logging.NullHandler)
-        )
-
-    @patch.object(run, 'get_root_logger')
-    def test_verbse_level_zero_sets_null_handler(self, fake_get_logger):
-        with patch.object(run, 'set_null_log_handler') as fake_set_null:
-            run.setup_logging(0)
-
-        fake_set_null.assert_called_once_with(
-            fake_get_logger.return_value
-        )
-
-    def test_stderr_handler_sets_stream_handler_with_custom_formatter(self):
-        mock_root_logger = Mock()
-        run.set_stderr_stream_handler(mock_root_logger)
+        stream = StringIO()
+        run.set_stream_handler(mock_root_logger, stream)
 
         self.assertThat(mock_root_logger.addHandler.call_count, Equals(1))
         created_handler = mock_root_logger.addHandler.call_args[0][0]
@@ -630,12 +618,16 @@
         )
 
     @patch.object(run, 'get_root_logger')
-    def test_verbose_level_one_sets_stream_handler(self, fake_get_logger):
-        with patch.object(run, 'set_stderr_stream_handler') as stderr_handler:
-            run.setup_logging(1)
+    def test_text_format_sets_stream_handler(self, fake_get_logger):
+        verbosity = 0
+        path = None
+        # handler is set when running in 'text' format mode
+        with patch.object(run, 'set_stream_handler') as handler:
+            run.setup_logging(verbosity, path, format='text')
 
-        stderr_handler.assert_called_once_with(
-            fake_get_logger.return_value
+        handler.assert_called_once_with(
+            fake_get_logger.return_value,
+            run.get_output_stream('text')
         )
 
     def test_enable_debug_log_messages_sets_debugFilter_attr(self):
@@ -650,7 +642,7 @@
     @patch.object(run, 'get_root_logger')
     def test_verbose_level_two_enables_debug_messages(self, fake_get_logger):
         with patch.object(run, 'enable_debug_log_messages') as enable_debug:
-            run.setup_logging(2)
+            run.setup_logging(2, None)
 
         enable_debug.assert_called_once_with()
 
@@ -791,6 +783,13 @@
             run.get_output_stream(format, output)
         pgts.assert_called_once_with(output)
 
+    def test_status_format_opens_binary_mode_stream(self):
+        output = tempfile.mktemp()
+        format = 'status'
+        with patch.object(run, '_get_binary_mode_file_stream') as pgbs:
+            run.get_output_stream(format, output)
+        pgbs.assert_called_once_with(output)
+
     def test_txt_format_opens_binary_mode_stream(self):
         output = tempfile.mktemp()
         format = 'text'
@@ -815,6 +814,14 @@
         expected = "Using default log filename: %s\n" % path
         self.assertThat(expected, Equals(output))
 
+    def test_text_result_uses_logged_decorator(self):
+        args = Namespace()
+        args.failfast = False
+        args.output = None
+        args.format = "text"
+        result = run.construct_test_result(args)
+        self.assertIsInstance(result, LoggedTestResultDecorator)
+
 
 class TestProgramTests(TestCase):
 
@@ -841,12 +848,12 @@
         self.assertThat(program.args, Equals(fake_args))
 
     def test_run_calls_setup_logging_with_verbose_arg(self):
-        fake_args = Namespace(verbose=1, mode='')
+        fake_args = Namespace(verbose=1, mode='text')
         program = run.TestProgram(fake_args)
         with patch.object(run, 'setup_logging') as patched_setup_logging:
             program.run()
 
-        patched_setup_logging.assert_called_once_with(True)
+        patched_setup_logging.assert_called_once_with(1, '', 'status')
 
     def test_list_command_calls_list_tests_method(self):
         fake_args = Namespace(mode='list')
@@ -966,3 +973,93 @@
         )
         defaults.update(kwargs)
         return Namespace(**defaults)
+
+
+def get_case(kind):
+    # Define the class in a function so test loading doesn't
+    # try to load it as a regular test class.
+    class Test(TestCase):
+
+        def test_pass(self):
+            pass
+
+        def test_fail(self):
+            raise self.failureException
+
+        def test_error(self):
+            raise SyntaxError
+
+        def test_skip(self):
+            self.skipTest('')
+
+        def test_skip_reason(self):
+            self.skipTest('Because')
+
+        def test_expected_failure(self):
+            # We expect the test to fail and it does
+            self.expectFailure("1 should be 0", self.assertEqual, 1, 0)
+
+        def test_unexpected_success(self):
+            # We expect the test to fail but it doesn't
+            self.expectFailure("1 is not 1", self.assertEqual, 1, 1)
+
+    test_method = 'test_%s' % kind
+    return Test(test_method)
+
+
+def expand_template_for_test(template, test):
+    """Expand common references in template.
+
+    Tests that check runs output can be simplified if they use templates
+    instead of literal expected strings.
+
+    :param template: A string where common strings have been replaced by a
+        keyword.
+
+    :param test: The test case under scrutiny.
+    """
+    kwargs = dict(id=test.id())
+    return template.format(**kwargs)
+
+
+class StatusFormatTestResultTests(TestCase):
+
+    def setUp(self):
+        super(StatusFormatTestResultTests, self).setUp()
+
+    def assertOutput(self, template, kind):
+        test = get_case(kind)
+        result = StatusFormatTestResult(StringIO(), False)
+        test.run(result)
+
+        expected = expand_template_for_test(template, test)
+        self.assertEquals(expected, result.stream.getvalue().strip())
+
+    def test_pass(self):
+        self.assertOutput('{id} ... OK', 'pass')
+
+    def test_fail(self):
+        self.assertOutput('{id} ... FAIL', 'fail')
+
+    def test_error(self):
+        self.assertOutput('{id} ... ERROR', 'error')
+
+    def test_skip(self):
+        self.assertOutput('{id} ... SKIP', 'skip')
+
+    def test_skip_reason(self):
+        self.assertOutput('{id} ... SKIP (Because)', 'skip_reason')
+
+    def test_expected_failure(self):
+        self.assertOutput('{id} ... EXPECTED FAIL', 'expected_failure')
+
+    def test_unexpected_success(self):
+        self.assertOutput('{id} ... UNEXPECTED SUCCESS', 'unexpected_success')
+
+    def test_result_uses_status_class(self):
+        args = Namespace()
+        args.failfast = False
+        args.output = None
+        args.format = 'status'
+        result = run.construct_test_result(args)
+        self.assertIsInstance(result, StatusFormatTestResult)
 
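The `StatusFormatTestResultTests` above describe the new result class entirely in terms of its output: one `<test id> ... STATUS` line per test. A stdlib-only sketch of a result with that behaviour (this is not Autopilot's actual `StatusFormatTestResult` implementation, just an illustration of the contract those unit tests pin down):

```python
import io
import unittest


class StatusResult(unittest.TestResult):
    """Write one '<id> ... STATUS' line per test to a stream."""

    def __init__(self, stream):
        super(StatusResult, self).__init__()
        self.stream = stream

    def _status(self, test, status):
        self.stream.write('%s ... %s\n' % (test.id(), status))

    def addSuccess(self, test):
        super(StatusResult, self).addSuccess(test)
        self._status(test, 'OK')

    def addFailure(self, test, err):
        super(StatusResult, self).addFailure(test, err)
        self._status(test, 'FAIL')

    def addSkip(self, test, reason):
        super(StatusResult, self).addSkip(test, reason)
        self._status(test, ('SKIP (%s)' % reason) if reason else 'SKIP')


class Sample(unittest.TestCase):
    def test_pass(self):
        pass


stream = io.StringIO()
Sample('test_pass').run(StatusResult(stream))
print(stream.getvalue().strip())  # ends with "Sample.test_pass ... OK"
```

Expressing the expectations as templates keyed on `{id}`, as `expand_template_for_test` does, keeps each unit test independent of the module path the test class happens to get at runtime.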
=== modified file 'autopilot/tests/unit/test_testresults.py'
--- autopilot/tests/unit/test_testresults.py 2014-07-01 00:36:26 +0000
+++ autopilot/tests/unit/test_testresults.py 2014-07-22 03:40:04 +0000
@@ -27,8 +27,7 @@
 from testscenarios import WithScenarios
 import unittest
 
-from autopilot import testresult
-from autopilot import run
+from autopilot import run, testresult
 
 
 class LoggedTestResultDecoratorTests(TestCase):
@@ -110,6 +109,9 @@
 
 class OutputFormatFactoryTests(TestCase):
 
+    def test_has_status_format(self):
+        self.assertTrue('status' in testresult.get_output_formats())
+
     def test_has_text_format(self):
         self.assertTrue('text' in testresult.get_output_formats())
 
@@ -278,6 +280,22 @@
         self.assertFalse(test_result.wasSuccessful())
         self.assertEqual(1, test_result.testsRun)
 
+    def test_traceback_added_as_detail_on_fail(self):
+        msg = "failure info"
+
+        class FailingTests(TestCase):
+
+            def test_fails(self):
+                self.fail(msg)
+
+        test = FailingTests('test_fails')
+        test_result, output_path = self.run_test_with_result(
+            test
+        )
+        self.assertFalse(test_result.wasSuccessful())
+        tb_detail = test.getDetails()['traceback'].as_text()
+        self.assertThat(tb_detail, Contains(msg))
+
 
 def remove_if_exists(path):
     if os.path.exists(path):
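`test_traceback_added_as_detail_on_fail` checks that a failing test ends up with its formatted traceback stored as a named text detail (retrievable via testtools' `getDetails()['traceback'].as_text()`). The capture step itself can be sketched with only the stdlib; the `record_traceback` helper below is illustrative, not Autopilot's code:

```python
import sys
import traceback

# Collected "details" for one test run; real code attaches these to the
# test via testtools' addDetail/text_content rather than a plain dict.
details = {}


def record_traceback(details, exc_info):
    # Format the active exception exactly as a test runner would and
    # store it under the 'traceback' key.
    details['traceback'] = ''.join(traceback.format_exception(*exc_info))


try:
    raise AssertionError("failure info")
except AssertionError:
    record_traceback(details, sys.exc_info())

assert 'failure info' in details['traceback']
assert 'AssertionError' in details['traceback']
```

Keeping the traceback as a detail (rather than only printing it) is what lets structured formats like xml and subunit carry the failure text alongside the test id.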
