Merge lp:~jml/zope.testing/subunit-output-formatter into lp:~vcs-imports/zope.testing/trunk

Proposed by Jonathan Lange
Status: Merged
Merge reported by: Jonathan Lange
Merged at revision: not available
Proposed branch: lp:~jml/zope.testing/subunit-output-formatter
Merge into: lp:~vcs-imports/zope.testing/trunk
Diff against target: 1378 lines (+1270/-5)
8 files modified
.bzrignore (+6/-0)
src/zope/testing/testrunner/formatter.py (+405/-0)
src/zope/testing/testrunner/options.py (+24/-2)
src/zope/testing/testrunner/testrunner-leaks.txt (+3/-3)
src/zope/testing/testrunner/testrunner-subunit-err.txt (+20/-0)
src/zope/testing/testrunner/testrunner-subunit-leaks.txt (+107/-0)
src/zope/testing/testrunner/testrunner-subunit.txt (+678/-0)
src/zope/testing/testrunner/tests.py (+27/-0)
To merge this branch: bzr merge lp:~jml/zope.testing/subunit-output-formatter

Commit message

Add a subunit output formatter to zope.testing.

Revision history for this message
Robert Collins (lifeless) wrote :

Broad issues:
content and content_type are testtools modules; don't import them from subunit, which only has them as compatibility glue.
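For example, the intended import style looks roughly like this (a sketch only; it assumes testtools 0.9.2 or later is available):

    # Import the content classes from testtools directly; subunit only
    # re-exports them as compatibility glue.
    from testtools.content import Content
    from testtools.content_type import ContentType

    traceback_type = ContentType(
        'text', 'x-traceback', {'language': 'python', 'charset': 'utf8'})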

Perhaps tag tests in layer foo with zope:layer:foo, not zope:testing:foo. In fact it looks like they are, and it's simply a docstring bug to claim otherwise.

As subunit has a progress abstraction, the 'cannot support progress' statement confused me. Perhaps say 'cannot support zope's concept of progress because xxx'.

I see you've worked around the bug in subunit where there isn't a tag method on the test result; perhaps you could submit a patch for that? The contract is (I think) clear, just 'not done'.

194 + Since subunit is a stream protocol format, it has no summary.
Perhaps 'no need for a summary - when the stream is displayed a summary can be created then'.

What is this?
+ def import_errors(self, import_errors):
221 + """Report test-module import errors (if any)."""
222 + # FIXME: Implement this.
... there is code here

235 + def _exc_info_to_details(self, exc_info):
236 + """Translate 'exc_info' into a details dictionary usable with subunit.
237 + """
238 + import subunit
239 + content_type = subunit.content_type.ContentType(
240 + 'text', 'x-traceback', dict(language='python', charset='utf8'))
241 + formatter = OutputFormatter(None)
242 + traceback = formatter.format_traceback(exc_info)
243 + return {
244 + 'traceback': subunit.content.Content(
245 + content_type, lambda: [traceback.encode('utf8')])}

This might be better as
import testtools.content
test = unittest.TestCase()
content = TracebackContent(exc_info, test)
return {'traceback': content}

unless the formatter.format_traceback(exc_info) is doing something non-obvious (and if it is, perhaps you should mention that). If it's doing something non-obvious, then I suggest subclassing testtools.content.Content, similarly to testtools.content.TracebackContent.
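A rough sketch of such a subclass (illustrative only; the class name is made up, and it assumes zope.testing's OutputFormatter.format_traceback is what does the doctest-specific work):

    from testtools.content import Content
    from testtools.content_type import ContentType

    class DoctestAwareTracebackContent(Content):
        # Mirrors testtools.content.TracebackContent, but formats the
        # traceback with zope.testing's doctest-aware formatter.
        def __init__(self, exc_info, formatter):
            content_type = ContentType(
                'text', 'x-traceback',
                {'language': 'python', 'charset': 'utf8'})
            text = formatter.format_traceback(exc_info)
            super(DoctestAwareTracebackContent, self).__init__(
                content_type, lambda: [text.encode('utf8')])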

Also, you might want a global import rather than a scope-local one.

270 + # XXX: Since the subunit stream is designed for machine reading, we
271 + # should really dump the binary profiler stats here. Sadly, the
272 + # "content" API doesn't support this yet. Instead, we dump the
273 + # stringified version of the stats dict, which is functionally the
274 + # same thing. -- jml, 2010-02-14.
275 + plain_text = subunit.content_type.ContentType(
276 + 'text', 'plain', {'charset': 'utf8'})
277 + details = {
278 + 'profiler-stats': subunit.content.Content(
279 + plain_text, lambda: [unicode(stats.stats).encode('utf8')])}

meta: where some dependency is insufficient, it might be nice to file a bug saying 'please provide X', and then reference the bug in this patch. That way, when your later self returns, they have something to prompt the memory.
That said, I'm not sure what subunit is missing here:
(see application/octet-stream in http://www.rfc-editor.org/rfc/rfc2046.txt for details)
cprofile_type = testtools.content_type.ContentType('application', 'octet-stream', {'type':'cProfile'})
content = testtools.content.Content(cprofile_type, lambda: [bpickle(stats)])
return {'profiler-stats': content}

You can also make the content types attributes on self to avoid calculating them every time; they are 'Value Objects'.
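For instance (a sketch; the attribute names follow the patch, and class-level constants would work just as well as instance attributes):

    from testtools.content_type import ContentType

    class SubunitOutputFormatter(object):
        # ContentType instances are immutable value objects, so build them
        # once rather than on every call.
        TRACEBACK_CONTENT_TYPE = ContentType(
            'text', 'x-traceback', {'language': 'python', 'charset': 'utf8'})
        PLAIN_TEXT = ContentType('text', 'plain', {'charset': 'utf8'})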


Revision history for this message
Robert Collins (lifeless) wrote :

Oh, I forgot to mention; for profiling, using bzrlib.lsprof.profile you can get kcachegrind output out really easily; I love it much more than the damaged cProfile output, and it is actually plain text anyway :).

Revision history for this message
Jonathan Lange (jml) wrote :

> Broad issues:
...
> As subunit has a progress abstraction, the 'cannot support progress' statement
> confused me. Perhaps say 'cannot support zope's concept of progress because
> xxx'.
>

Changed to:
    # subunit output is designed for computers, so displaying a progress bar
    # isn't helpful.

Which isn't _strictly_ true, since displaying a progress bar might help the humans running the process. The real reason is it's too hard.

> I see you've worked around the bug in subunit where there isn't a tag method
> on the test result; perhaps you could submit a patch for that ? the contract
> is(I think) clear, just 'not done'.
>

...

> 194 + Since subunit is a stream protocol format, it has no summary.
> Perhaps 'no need for a summary - when the stream is displayed a summary can be
> created then'.
...

Now says:
        Since subunit is a stream protocol format, it has no need for a
        summary. When the stream is finished other tools can generate a
        summary if so desired.

> 235 + def _exc_info_to_details(self, exc_info):
> 236 + """Translate 'exc_info' into a details dictionary usable with
> subunit.
> 237 + """
> 238 + import subunit
> 239 + content_type = subunit.content_type.ContentType(
> 240 + 'text', 'x-traceback', dict(language='python', charset='utf8'))
> 241 + formatter = OutputFormatter(None)
> 242 + traceback = formatter.format_traceback(exc_info)
> 243 + return {
> 244 + 'traceback': subunit.content.Content(
> 245 + content_type, lambda: [traceback.encode('utf8')])}
>
> This might be better as
> import testtools.content
> test = unittest.TestCase()
> content = TracebackContent(exc_info, test)
> return {'traceback': content}
>
> unless the formatter.format_traceback(exc_info) is doing something non-obvious
> (and if it is, perhaps you should mention that). If it's doing something
> non-obvious, then I suggest subclassing testtools.content.Content, similarly to
> testtools.content.TracebackContent.
>

It's the non-obvious thing. Added this comment:
        # In an ideal world, we'd use the pre-bundled 'TracebackContent' class
        # from testtools. However, 'OutputFormatter' contains special logic to
        # handle errors from doctests, so we have to use that and manually
        # create an object equivalent to an instance of 'TracebackContent'.

> Also, you might want a global import rather than a scope local.
>

Yeah, good point.

I've changed it to be a global import. Note that it's still a soft dependency.

>
> 270 + # XXX: Since the subunit stream is designed for machine reading, we
> 271 + # should really dump the binary profiler stats here. Sadly, the
> 272 + # "content" API doesn't support this yet. Instead, we dump the
> 273 + # stringified version of the stats dict, which is functionally the
> 274 + # same thing. -- jml, 2010-02-14.
> 275 + plain_text = subunit.content_type.ContentType(
> 276 + 'text', 'plain', {'charset': 'utf8'})
> 277 + details = {
> 278 + 'profiler-stats': subunit.content.Content(
> 279 + plain_text, lambda: [unicode(stats.stats).encode('utf8')])}
>
> meta: where some dependency is insuffic...


Revision history for this message
Jonathan Lange (jml) wrote :

OK, I think this is ready for landing into trunk. Would appreciate a review and guidance through whatever hoops I must jump through to get this landed.

Revision history for this message
Robert Collins (lifeless) wrote :

On Thu, 2010-03-11 at 20:31 +0000, Jonathan Lange wrote:
> > Broad issues:
> ...
> > As subunit has a progress abstraction, the 'cannot support progress' statement
> > confused me. Perhaps say 'cannot support zope's concept of progress because
> > xxx'.
> >
>
> Changed to:
> # subunit output is designed for computers, so displaying a progress bar
> # isn't helpful.
>
> Which isn't _strictly_ true, since displaying a progress bar might
> help the humans running the process. The real reason is it's too hard.

This still leaves me confused: subunit can fairly easily do a progress
bar, in-process or out-of-process. I was guessing that zope's concept of
progress was strange or something.

> > I think it's great you've written docs/tests for this. I worry that they will
> > be fragile because they encode the subunit stream representation, but that's
> > not what you wrote: you wrote some object calls against subunit. I suggest you
> > use the guts from things like subunit-ls, subunit2pyunit, subunit-stats to
> > make your tests work at the same level as the code: against the object
> > API. This will prevent you having to e.g. include multipart MIME or http
> > chunking in the tests. Specifically I suspect you are really writing smoke
> > tests, and just a stats output will do great for your needs [most of the
> > time].
> >
>
> I suspect you are right. However, at this point I can't be bothered making that change. :)

It's your dime ;)

Let me know of anything I can do to help.

-Rob
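
Robert's suggestion of testing against the object API rather than the raw stream could look roughly like this (a sketch only; 'captured_output' is a placeholder for text produced by the formatter, and it assumes subunit's ProtocolTestCase is available):

    import unittest
    from StringIO import StringIO
    import subunit

    # Replay the captured subunit stream into an ordinary TestResult and
    # assert on the counts, instead of matching the stream text directly.
    result = unittest.TestResult()
    subunit.ProtocolTestCase(StringIO(captured_output)).run(result)
    print result.testsRun, len(result.failures), len(result.errors)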

377. By Jonathan Lange

Correct versions for testtools and subunit

Preview Diff

=== added file '.bzrignore'
--- .bzrignore 1970-01-01 00:00:00 +0000
+++ .bzrignore 2010-03-11 22:36:30 +0000
@@ -0,0 +1,6 @@
+.installed.cfg
+bin
+develop-eggs
+documentation.txt
+src/zope.testing.egg-info
+parts
=== modified file 'src/zope/testing/testrunner/formatter.py'
--- src/zope/testing/testrunner/formatter.py 2009-12-23 21:21:53 +0000
+++ src/zope/testing/testrunner/formatter.py 2010-03-11 22:36:30 +0000
@@ -16,9 +16,12 @@
 $Id: __init__.py 86207 2008-05-03 13:25:02Z ctheune $
 """
 
+from datetime import datetime, timedelta
 import doctest
+import os
 import sys
 import re
+import tempfile
 import traceback
 
 from zope.testing.exceptions import DocTestFailureException
@@ -647,3 +650,405 @@
     else:
         print self.colorize('exception', line)
     print
653
654
655class FakeTest(object):
656 """A fake test object that only has an id."""
657
658 failureException = None
659
660 def __init__(self, test_id):
661 self._id = test_id
662
663 def id(self):
664 return self._id
665
666
667# Conditional imports, we don't want zope.testing to have a hard dependency on
668# subunit.
669try:
670 import subunit
671 from subunit.iso8601 import Utc
672except ImportError:
673 subunit = None
674
675
676# testtools is a hard dependency of subunit itself, guarding separately for
677# richer error messages.
678try:
679 from testtools import content
680except ImportError:
681 content = None
682
683
684class SubunitOutputFormatter(object):
685 """A subunit output formatter.
686
687 This output formatter generates subunit compatible output (see
688 https://launchpad.net/subunit). Subunit output is essentially a stream of
689 results of unit tests. In this formatter, non-test events (such as layer
690 set up) are encoded as specially tagged tests and summary-generating
691 methods (such as modules_with_import_problems) deliberately do nothing.
692
693 In particular, for a layer 'foo', the fake tests related to layer set up
694 and tear down are tagged with 'zope:layer' and are called 'foo:setUp' and
695 'foo:tearDown'. Any tests within layer 'foo' are tagged with
696 'zope:layer:foo'.
697
698 Note that all tags specific to this formatter begin with 'zope:'
699 """
700
701 # subunit output is designed for computers, so displaying a progress bar
702 # isn't helpful.
703 progress = False
704 verbose = property(lambda self: self.options.verbose)
705
706 TAG_INFO_SUBOPTIMAL = 'zope:info_suboptimal'
707 TAG_ERROR_WITH_BANNER = 'zope:error_with_banner'
708 TAG_LAYER = 'zope:layer'
709 TAG_IMPORT_ERROR = 'zope:import_error'
710 TAG_PROFILER_STATS = 'zope:profiler_stats'
711 TAG_GARBAGE = 'zope:garbage'
712 TAG_THREADS = 'zope:threads'
713 TAG_REFCOUNTS = 'zope:refcounts'
714
715 def __init__(self, options):
716 if subunit is None:
717 raise Exception("Requires subunit 0.0.5 or better")
718 if content is None:
719 raise Exception("Requires testtools 0.9.2 or better")
720 self.options = options
721 self._stream = sys.stdout
722 self._subunit = subunit.TestProtocolClient(self._stream)
723 # Used to track the last layer that was set up or torn down. Either
724 # None or (layer_name, last_touched_time).
725 self._last_layer = None
726 self.UTC = Utc()
727 # Content types used in the output.
728 self.TRACEBACK_CONTENT_TYPE = content.ContentType(
729 'text', 'x-traceback', dict(language='python', charset='utf8'))
730 self.PROFILE_CONTENT_TYPE = content.ContentType(
731 'application', 'x-binary-profile')
732 self.PLAIN_TEXT = content.ContentType(
733 'text', 'plain', {'charset': 'utf8'})
734
735 @property
736 def _time_tests(self):
737 return self.verbose > 2
738
739 def _emit_timestamp(self, now=None):
740 """Emit a timestamp to the subunit stream.
741
742 If 'now' is not specified, use the current time on the system clock.
743 """
744 if now is None:
745 now = datetime.now(self.UTC)
746 self._subunit.time(now)
747 return now
748
749 def _emit_tag(self, tag):
750 self._stream.write('tags: %s\n' % (tag,))
751
752 def _stop_tag(self, tag):
753 self._stream.write('tags: -%s\n' % (tag,))
754
755 def _emit_fake_test(self, message, tag, details=None):
756 """Emit a successful fake test to the subunit stream.
757
758 Use this to print tagged informative messages.
759 """
760 test = FakeTest(message)
761 self._subunit.startTest(test)
762 self._emit_tag(tag)
763 self._subunit.addSuccess(test, details)
764
765 def _emit_error(self, error_id, tag, exc_info):
766 """Emit an error to the subunit stream.
767
768 Use this to pass on information about errors that occur outside of
769 tests.
770 """
771 test = FakeTest(error_id)
772 self._subunit.startTest(test)
773 self._emit_tag(tag)
774 self._subunit.addError(test, exc_info)
775
776 def info(self, message):
777 """Print an informative message, but only if verbose."""
778 # info() output is not relevant to actual test results. It only says
779 # things like "Running tests" or "Tearing down left over layers",
780 # things that are communicated already by the subunit stream. Just
781 # suppress the info() output.
782 pass
783
784 def info_suboptimal(self, message):
785 """Print an informative message about losing some of the features.
786
787 For example, when you run some tests in a subprocess, you lose the
788 ability to use the debugger.
789 """
790 # Used _only_ to indicate running in a subprocess.
791 self._emit_fake_test(message.strip(), self.TAG_INFO_SUBOPTIMAL)
792
793 def error(self, message):
794 """Report an error."""
795 # XXX: Mostly used for user errors, sometimes used for errors in the
796 # test framework, sometimes used to record layer setUp failure (!!!).
797 # How should this be encoded?
798 raise NotImplementedError(self.error)
799
800 def error_with_banner(self, message):
801 """Report an error with a big ASCII banner."""
802 # Either "Could not communicate with subprocess"
803 # Or "Can't post-mortem debug when running a layer as a subprocess!"
804 self._emit_fake_test(message, self.TAG_ERROR_WITH_BANNER)
805
806 def _enter_layer(self, layer_name):
807 """Signal in the subunit stream that we are entering a layer."""
808 self._emit_tag('zope:layer:%s' % (layer_name,))
809
810 def _exit_layer(self, layer_name):
811 """Signal in the subunit stream that we are leaving a layer."""
812 self._stop_tag('zope:layer:%s' % (layer_name,))
813
814 def start_set_up(self, layer_name):
815 """Report that we're setting up a layer.
816
817 We do this by emitting a tag of the form 'layer:$LAYER_NAME'.
818 """
819 now = self._emit_timestamp()
820 self._subunit.startTest(FakeTest('%s:setUp' % (layer_name,)))
821 self._emit_tag(self.TAG_LAYER)
822 self._last_layer = (layer_name, now)
823
824 def stop_set_up(self, seconds):
825 layer_name, start_time = self._last_layer
826 self._last_layer = None
827 self._emit_timestamp(start_time + timedelta(seconds=seconds))
828 self._subunit.addSuccess(FakeTest('%s:setUp' % (layer_name,)))
829 self._enter_layer(layer_name)
830
831 def start_tear_down(self, layer_name):
832 """Report that we're tearing down a layer.
833
834 We do this by removing a tag of the form 'layer:$LAYER_NAME'.
835 """
836 self._exit_layer(layer_name)
837 now = self._emit_timestamp()
838 self._subunit.startTest(FakeTest('%s:tearDown' % (layer_name,)))
839 self._emit_tag(self.TAG_LAYER)
840 self._last_layer = (layer_name, now)
841
842 def stop_tear_down(self, seconds):
843 layer_name, start_time = self._last_layer
844 self._last_layer = None
845 self._emit_timestamp(start_time + timedelta(seconds=seconds))
846 self._subunit.addSuccess(FakeTest('%s:tearDown' % (layer_name,)))
847
848 def tear_down_not_supported(self):
849 """Report that we could not tear down a layer.
850
851 Should be called right after start_tear_down().
852 """
853 layer_name, start_time = self._last_layer
854 self._last_layer = None
855 self._emit_timestamp(datetime.now(self.UTC))
856 self._subunit.addSkip(
857 FakeTest('%s:tearDown' % (layer_name,)), "tearDown not supported")
858
859 def summary(self, n_tests, n_failures, n_errors, n_seconds):
860 """Print out a summary.
861
862 Since subunit is a stream protocol format, it has no need for a
863 summary. When the stream is finished other tools can generate a
864 summary if so desired.
865 """
866 pass
867
868 def start_test(self, test, tests_run, total_tests):
869 """Report that we're about to run a test.
870
871 The next output operation should be test_success(), test_error(), or
872 test_failure().
873 """
874 if self._time_tests:
875 self._emit_timestamp()
876 # Note that this always emits newlines, so it will function as well as
877 # other start_test implementations if we are running in a subprocess.
878 self._subunit.startTest(test)
879
880 def stop_test(self, test):
881 """Clean up the output state after a test."""
882 self._subunit.stopTest(test)
883 self._stream.flush()
884
885 def stop_tests(self):
886 """Clean up the output state after a collection of tests."""
887 self._stream.flush()
888
889 def test_success(self, test, seconds):
890 if self._time_tests:
891 self._emit_timestamp()
892 self._subunit.addSuccess(test)
893
894 def import_errors(self, import_errors):
895 """Report test-module import errors (if any)."""
896 if not import_errors:
897 return
898 for error in import_errors:
899 self._emit_error(
900 error.module, self.TAG_IMPORT_ERROR, error.exc_info)
901
902 def modules_with_import_problems(self, import_errors):
903 """Report names of modules with import problems (if any)."""
904 # This is simply a summary method, and subunit output doesn't benefit
905 # from summaries.
906 pass
907
908 def _exc_info_to_details(self, exc_info):
909 """Translate 'exc_info' into a details dictionary usable with subunit.
910 """
911 # In an ideal world, we'd use the pre-bundled 'TracebackContent' class
912 # from testtools. However, 'OutputFormatter' contains special logic to
913 # handle errors from doctests, so we have to use that and manually
914 # create an object equivalent to an instance of 'TracebackContent'.
915 formatter = OutputFormatter(None)
916 traceback = formatter.format_traceback(exc_info)
917 return {
918 'traceback': content.Content(
919 self.TRACEBACK_CONTENT_TYPE, lambda: [traceback.encode('utf8')])}
920
921 def test_error(self, test, seconds, exc_info):
922 """Report that an error occurred while running a test.
923
924 Should be called right after start_test().
925
926 The next output operation should be stop_test().
927 """
928 if self._time_tests:
929 self._emit_timestamp()
930 details = self._exc_info_to_details(exc_info)
931 self._subunit.addError(test, details=details)
932
933 def test_failure(self, test, seconds, exc_info):
934 """Report that a test failed.
935
936 Should be called right after start_test().
937
938 The next output operation should be stop_test().
939 """
940 if self._time_tests:
941 self._emit_timestamp()
942 details = self._exc_info_to_details(exc_info)
943 self._subunit.addFailure(test, details=details)
944
945 def profiler_stats(self, stats):
946 """Report profiler stats."""
947 ignored, filename = tempfile.mkstemp()
948 try:
949 stats.dump_stats(filename)
950 profile_content = content.Content(
951 self.PROFILE_CONTENT_TYPE, open(filename).read)
952 details = {'profiler-stats': profile_content}
953 finally:
954 os.unlink(filename)
955 # Name the test 'zope:profiler_stats' just like its tag.
956 self._emit_fake_test(
957 self.TAG_PROFILER_STATS, self.TAG_PROFILER_STATS, details)
958
959 def tests_with_errors(self, errors):
960 """Report tests with errors.
961
962 Simply not supported by the subunit formatter. Fancy summary output
963 doesn't make sense.
964 """
965 pass
966
967 def tests_with_failures(self, failures):
968 """Report tests with failures.
969
970 Simply not supported by the subunit formatter. Fancy summary output
971 doesn't make sense.
972 """
973 pass
974
975 def totals(self, n_tests, n_failures, n_errors, n_seconds):
976 """Summarize the results of all layers.
977
978 Simply not supported by the subunit formatter. Fancy summary output
979 doesn't make sense.
980 """
981 pass
982
983 def list_of_tests(self, tests, layer_name):
984 """Report a list of test names."""
985 self._enter_layer(layer_name)
986 for test in tests:
987 self._subunit.startTest(test)
988 self._subunit.addSuccess(test)
989 self._exit_layer(layer_name)
990
991 def _get_text_details(self, name, text):
992 """Get a details dictionary that just has some plain text."""
993 return {
994 name: content.Content(
995 self.PLAIN_TEXT, lambda: [text.encode('utf8')])}
996
997 def garbage(self, garbage):
998 """Report garbage generated by tests."""
999 # XXX: Really, 'garbage', 'profiler_stats' and the 'refcounts' twins
1000 # ought to add extra details to a fake test that represents the
1001 # summary information for the whole suite. However, there's no event
1002 # on output formatters for "everything is really finished, honest". --
1003 # jml, 2010-02-14
1004 details = self._get_text_details('garbage', unicode(garbage))
1005 self._emit_fake_test(
1006 self.TAG_GARBAGE, self.TAG_GARBAGE, details)
1007
1008 def test_garbage(self, test, garbage):
1009 """Report garbage generated by a test.
1010
1011 Encoded in the subunit stream as a test error. Clients can filter out
1012 these tests based on the tag if they don't think garbage should fail
1013 the test run.
1014 """
1015 # XXX: Perhaps 'test_garbage' and 'test_threads' ought to be within
1016 # the output for the actual test, appended as details to whatever
1017 # result the test gets. Not an option with the present API, as there's
1018 # no event for "no more output for this test". -- jml, 2010-02-14
1019 self._subunit.startTest(test)
1020 self._emit_tag(self.TAG_GARBAGE)
1021 self._subunit.addError(
1022 test, self._get_text_details('garbage', unicode(garbage)))
1023
1024 def test_threads(self, test, new_threads):
1025 """Report threads left behind by a test.
1026
1027 Encoded in the subunit stream as a test error. Clients can filter out
1028 these tests based on the tag if they don't think left-over threads
1029 should fail the test run.
1030 """
1031 self._subunit.startTest(test)
1032 self._emit_tag(self.TAG_THREADS)
1033 self._subunit.addError(
1034 test, self._get_text_details('garbage', unicode(new_threads)))
1035
1036 def refcounts(self, rc, prev):
1037 """Report a change in reference counts."""
1038 details = self._get_text_details('sys-refcounts', str(rc))
1039 details.update(
1040 self._get_text_details('changes', str(rc - prev)))
1041 # XXX: Emit the details dict as JSON?
1042 self._emit_fake_test(
1043 self.TAG_REFCOUNTS, self.TAG_REFCOUNTS, details)
1044
1045 def detailed_refcounts(self, track, rc, prev):
1046 """Report a change in reference counts, with extra detail."""
1047 details = self._get_text_details('sys-refcounts', str(rc))
1048 details.update(
1049 self._get_text_details('changes', str(rc - prev)))
1050 details.update(
1051 self._get_text_details('track', str(track.delta)))
1052
1053 self._emit_fake_test(
1054 self.TAG_REFCOUNTS, self.TAG_REFCOUNTS, details)
=== modified file 'src/zope/testing/testrunner/options.py'
--- src/zope/testing/testrunner/options.py 2009-12-18 08:23:21 +0000
+++ src/zope/testing/testrunner/options.py 2010-03-11 22:36:30 +0000
@@ -22,7 +22,11 @@
 import sys
 
 from zope.testing.testrunner.profiling import available_profilers
-from zope.testing.testrunner.formatter import OutputFormatter, ColorfulOutputFormatter
+from zope.testing.testrunner.formatter import (
+    OutputFormatter,
+    ColorfulOutputFormatter,
+    SubunitOutputFormatter,
+    )
 from zope.testing.testrunner.formatter import terminal_has_colors
 
 
@@ -192,6 +196,12 @@
 """)
 
 reporting.add_option(
+    '--subunit', action="store_true", dest='subunit',
+    help="""\
+Use subunit output. Will not be colorized.
+""")
+
+reporting.add_option(
     '--slow-test', type='float', dest='slow_test_threshold', metavar='N',
     help="""\
 With -c and -vvv, highlight tests that take longer than N seconds (default:
@@ -531,7 +541,19 @@
     options, positional = parser.parse_args(args[1:], defaults)
     options.original_testrunner_args = args
 
-    if options.color:
+    if options.subunit:
+        try:
+            import subunit
+        except ImportError:
+            print """\
+        Subunit is not installed. Please install Subunit
+        to generate subunit output.
+        """
+            options.fail = True
+            return options
+        else:
+            options.output = SubunitOutputFormatter(options)
+    elif options.color:
         options.output = ColorfulOutputFormatter(options)
         options.output.slow_test_threshold = options.slow_test_threshold
     else:
 
=== modified file 'src/zope/testing/testrunner/testrunner-leaks.txt'
--- src/zope/testing/testrunner/testrunner-leaks.txt 2008-05-05 18:50:48 +0000
+++ src/zope/testing/testrunner/testrunner-leaks.txt 2010-03-11 22:36:30 +0000
@@ -16,7 +16,7 @@
     >>> from zope.testing import testrunner
 
     >>> sys.argv = 'test --layer Layer11$ --layer Layer12$ -N4 -r'.split()
-    >>> _ = testrunner.run(defaults)
+    >>> _ = testrunner.run_internal(defaults)
     Running samplelayers.Layer11 tests:
       Set up samplelayers.Layer1 in 0.000 seconds.
       Set up samplelayers.Layer11 in 0.000 seconds.
@@ -60,7 +60,7 @@
 Let's look at an example test that leaks:
 
     >>> sys.argv = 'test --tests-pattern leak -N4 -r'.split()
-    >>> _ = testrunner.run(defaults)
+    >>> _ = testrunner.run_internal(defaults)
     Running zope.testing.testrunner.layer.UnitTests tests:...
       Iteration 1
       Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
@@ -81,7 +81,7 @@
 type (or class):
 
     >>> sys.argv = 'test --tests-pattern leak -N5 -r -v'.split()
-    >>> _ = testrunner.run(defaults)
+    >>> _ = testrunner.run_internal(defaults)
     Running tests at level 1
     Running zope.testing.testrunner.layer.UnitTests tests:...
       Iteration 1
 
=== added file 'src/zope/testing/testrunner/testrunner-subunit-err.txt'
--- src/zope/testing/testrunner/testrunner-subunit-err.txt 1970-01-01 00:00:00 +0000
+++ src/zope/testing/testrunner/testrunner-subunit-err.txt 2010-03-11 22:36:30 +0000
@@ -0,0 +1,20 @@
1Using subunit output without subunit installed
2==============================================
3
4To use the --subunit reporting option, you must have subunit installed. If you
5do not, you will get an error message:
6
7 >>> import os.path, sys
8 >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
9 >>> defaults = [
10 ... '--path', directory_with_tests,
11 ... '--tests-pattern', '^sampletestsf?$',
12 ... ]
13
14 >>> from zope.testing import testrunner
15
16 >>> sys.argv = 'test --subunit'.split()
17 >>> _ = testrunner.run_internal(defaults)
18 Subunit is not installed. Please install Subunit
19 to generate subunit output.
20 <BLANKLINE>
=== added file 'src/zope/testing/testrunner/testrunner-subunit-leaks.txt'
--- src/zope/testing/testrunner/testrunner-subunit-leaks.txt 1970-01-01 00:00:00 +0000
+++ src/zope/testing/testrunner/testrunner-subunit-leaks.txt 2010-03-11 22:36:30 +0000
@@ -0,0 +1,107 @@
1Debugging Memory Leaks with subunit output
2==========================================
3
4The --report-refcounts (-r) option can be used with the --repeat (-N)
5option to detect and diagnose memory leaks. To use this option, you
6must configure Python with the --with-pydebug option. (On Unix, pass
7this option to configure and then build Python.)
8
9For more detailed documentation, see testrunner-leaks.txt.
10
11 >>> import os.path, sys
12 >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
13 >>> defaults = [
14 ... '--path', directory_with_tests,
15 ... '--tests-pattern', '^sampletestsf?$',
16 ... ]
17
18 >>> from zope.testing import testrunner
19
20Each layer is repeated the requested number of times. For each
21iteration after the first, the system refcount and change in system
22refcount is shown. The system refcount is the total of all refcount in
23the system. When a refcount on any object is changed, the system
24refcount is changed by the same amount. Tests that don't leak show
25zero changes in system refcount.
26
27Let's look at an example test that leaks:
28
29 >>> sys.argv = 'test --subunit --tests-pattern leak -N2 -r'.split()
30 >>> _ = testrunner.run_internal(defaults)
31 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
32 test: zope.testing.testrunner.layer.UnitTests:setUp
33 tags: zope:layer
34 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
35 successful: zope.testing.testrunner.layer.UnitTests:setUp
36 tags: zope:layer:zope.testing.testrunner.layer.UnitTests
37 test: leak.TestSomething.testleak
38 successful: leak.TestSomething.testleak
39 test: leak.TestSomething.testleak
40 successful: leak.TestSomething.testleak
41 test: zope:refcounts
42 tags: zope:refcounts
43 successful: zope:refcounts [ multipart
44 Content-Type: text/plain;charset=utf8
45 ...
46 ...\r
47 <BLANKLINE>
48 ...\r
49 <BLANKLINE>
50 Content-Type: text/plain;charset=utf8
51 ...
52 ...\r
53 <BLANKLINE>
54 ...\r
55 <BLANKLINE>
56 ]
57 tags: -zope:layer:zope.testing.testrunner.layer.UnitTests
58 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
59 test: zope.testing.testrunner.layer.UnitTests:tearDown
60 tags: zope:layer
61 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
62 successful: zope.testing.testrunner.layer.UnitTests:tearDown
63
64Here we see that the system refcount is increasing. If we specify a
65verbosity greater than one, we can get details broken out by object
66type (or class):
67
68 >>> sys.argv = 'test --subunit --tests-pattern leak -N2 -v -r'.split()
69 >>> _ = testrunner.run_internal(defaults)
70 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
71 test: zope.testing.testrunner.layer.UnitTests:setUp
72 tags: zope:layer
73 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
74 successful: zope.testing.testrunner.layer.UnitTests:setUp
75 tags: zope:layer:zope.testing.testrunner.layer.UnitTests
76 test: leak.TestSomething.testleak
77 successful: leak.TestSomething.testleak
78 test: leak.TestSomething.testleak
79 successful: leak.TestSomething.testleak
80 test: zope:refcounts
81 tags: zope:refcounts
82 successful: zope:refcounts [ multipart
83 Content-Type: text/plain;charset=utf8
84 ...
85 ...\r
86 <BLANKLINE>
87 ...\r
88 <BLANKLINE>
89 Content-Type: text/plain;charset=utf8
90 ...
91 ...\r
92 <BLANKLINE>
93 ...\r
94 <BLANKLINE>
95 Content-Type: text/plain;charset=utf8
96 ...
97 ...\r
98 <BLANKLINE>
99 ...
100 <BLANKLINE>
101 ]
102 tags: -zope:layer:zope.testing.testrunner.layer.UnitTests
103 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
104 test: zope.testing.testrunner.layer.UnitTests:tearDown
105 tags: zope:layer
106 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
107 successful: zope.testing.testrunner.layer.UnitTests:tearDown
=== added file 'src/zope/testing/testrunner/testrunner-subunit.txt'
--- src/zope/testing/testrunner/testrunner-subunit.txt 1970-01-01 00:00:00 +0000
+++ src/zope/testing/testrunner/testrunner-subunit.txt 2010-03-11 22:36:30 +0000
@@ -0,0 +1,678 @@
1Subunit Output
2==============
3
4Subunit is a streaming protocol for interchanging test results. More
5information can be found at https://launchpad.net/subunit.
6
7 >>> import os.path, sys
8 >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
9 >>> defaults = [
10 ... '--path', directory_with_tests,
11 ... '--tests-pattern', '^sampletestsf?$',
12 ... ]
13
14 >>> from zope.testing import testrunner
15
16
17Basic output
18------------
19
20Subunit output is line-based, with a 'test:' line for the start of each test
21and a 'successful:' line for each successful test.
22
23Zope layer set up and tear down events are represented as tests tagged with
24'zope:layer'. This allows them to be distinguished from actual tests, provides
25a place for the layer timing information in the subunit stream and allows us
26to include error information if necessary.
27
28Once the layer is set up, all future tests are tagged with
29'zope:layer:LAYER_NAME'.
30
31 >>> sys.argv = 'test --layer 122 --subunit -t TestNotMuch'.split()
32 >>> testrunner.run_internal(defaults)
33 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
34 test: samplelayers.Layer1:setUp
35 tags: zope:layer
36 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
37 successful: samplelayers.Layer1:setUp
38 tags: zope:layer:samplelayers.Layer1
39 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
40 test: samplelayers.Layer12:setUp
41 tags: zope:layer
42 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
43 successful: samplelayers.Layer12:setUp
44 tags: zope:layer:samplelayers.Layer12
45 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
46 test: samplelayers.Layer122:setUp
47 tags: zope:layer
48 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
49 successful: samplelayers.Layer122:setUp
50 tags: zope:layer:samplelayers.Layer122
51 test: sample1.sampletests.test122.TestNotMuch.test_1
52 successful: sample1.sampletests.test122.TestNotMuch.test_1
53 test: sample1.sampletests.test122.TestNotMuch.test_2
54 successful: sample1.sampletests.test122.TestNotMuch.test_2
55 test: sample1.sampletests.test122.TestNotMuch.test_3
56 successful: sample1.sampletests.test122.TestNotMuch.test_3
57 test: sampletests.test122.TestNotMuch.test_1
58 successful: sampletests.test122.TestNotMuch.test_1
59 test: sampletests.test122.TestNotMuch.test_2
60 successful: sampletests.test122.TestNotMuch.test_2
61 test: sampletests.test122.TestNotMuch.test_3
62 successful: sampletests.test122.TestNotMuch.test_3
63 tags: -zope:layer:samplelayers.Layer122
64 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
65 test: samplelayers.Layer122:tearDown
66 tags: zope:layer
67 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
68 successful: samplelayers.Layer122:tearDown
69 tags: -zope:layer:samplelayers.Layer12
70 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
71 test: samplelayers.Layer12:tearDown
72 tags: zope:layer
73 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
74 successful: samplelayers.Layer12:tearDown
75 tags: -zope:layer:samplelayers.Layer1
76 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
77 test: samplelayers.Layer1:tearDown
78 tags: zope:layer
79 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
80 successful: samplelayers.Layer1:tearDown
81 False
82
83
84Timing tests
85------------
86
87When verbosity is high enough, the subunit stream includes timing information
88for the actual tests, as well as for the layers.
89
90 >>> sys.argv = 'test --layer 122 -vvv --subunit -t TestNotMuch'.split()
91 >>> testrunner.run_internal(defaults)
92 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
93 test: samplelayers.Layer1:setUp
94 tags: zope:layer
95 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
96 successful: samplelayers.Layer1:setUp
97 tags: zope:layer:samplelayers.Layer1
98 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
99 test: samplelayers.Layer12:setUp
100 tags: zope:layer
101 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
102 successful: samplelayers.Layer12:setUp
103 tags: zope:layer:samplelayers.Layer12
104 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
105 test: samplelayers.Layer122:setUp
106 tags: zope:layer
107 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
108 successful: samplelayers.Layer122:setUp
109 tags: zope:layer:samplelayers.Layer122
110 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
111 test: sample1.sampletests.test122.TestNotMuch.test_1
112 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
113 successful: sample1.sampletests.test122.TestNotMuch.test_1
114 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
115 test: sample1.sampletests.test122.TestNotMuch.test_2
116 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
117 successful: sample1.sampletests.test122.TestNotMuch.test_2
118 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
119 test: sample1.sampletests.test122.TestNotMuch.test_3
120 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
121 successful: sample1.sampletests.test122.TestNotMuch.test_3
122 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
123 test: sampletests.test122.TestNotMuch.test_1
124 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
125 successful: sampletests.test122.TestNotMuch.test_1
126 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
127 test: sampletests.test122.TestNotMuch.test_2
128 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
129 successful: sampletests.test122.TestNotMuch.test_2
130 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
131 test: sampletests.test122.TestNotMuch.test_3
132 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
133 successful: sampletests.test122.TestNotMuch.test_3
134 tags: -zope:layer:samplelayers.Layer122
135 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
136 test: samplelayers.Layer122:tearDown
137 tags: zope:layer
138 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
139 successful: samplelayers.Layer122:tearDown
140 tags: -zope:layer:samplelayers.Layer12
141 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
142 test: samplelayers.Layer12:tearDown
143 tags: zope:layer
144 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
145 successful: samplelayers.Layer12:tearDown
146 tags: -zope:layer:samplelayers.Layer1
147 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
148 test: samplelayers.Layer1:tearDown
149 tags: zope:layer
150 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
151 successful: samplelayers.Layer1:tearDown
152 False
153
154
155Listing tests
156-------------
157
158A subunit stream is a stream of test results, more or less, so the most
159natural way of listing tests in subunit is to simply emit successful test
160results without actually running the tests.
161
162Note that in this stream, we don't emit fake tests for the layer set up and
163tear down, because it simply doesn't happen.
164
165We also don't include the dependent layers in the stream (in this case Layer1
166and Layer12), since they are not provided to the reporter.
167
168 >>> sys.argv = (
169 ... 'test --layer 122 --list-tests --subunit -t TestNotMuch').split()
170 >>> testrunner.run_internal(defaults)
171 tags: zope:layer:samplelayers.Layer122
172 test: sample1.sampletests.test122.TestNotMuch.test_1
173 successful: sample1.sampletests.test122.TestNotMuch.test_1
174 test: sample1.sampletests.test122.TestNotMuch.test_2
175 successful: sample1.sampletests.test122.TestNotMuch.test_2
176 test: sample1.sampletests.test122.TestNotMuch.test_3
177 successful: sample1.sampletests.test122.TestNotMuch.test_3
178 test: sampletests.test122.TestNotMuch.test_1
179 successful: sampletests.test122.TestNotMuch.test_1
180 test: sampletests.test122.TestNotMuch.test_2
181 successful: sampletests.test122.TestNotMuch.test_2
182 test: sampletests.test122.TestNotMuch.test_3
183 successful: sampletests.test122.TestNotMuch.test_3
184 tags: -zope:layer:samplelayers.Layer122
185 False
186
187
188Profiling tests
189---------------
190
191Test suites often cover a lot of code, and the performance of test suites
192themselves is often a critical part of the development process. Thus, it's
193good to be able to profile a test run.
194
195 >>> sys.argv = (
196 ... 'test --layer 122 --profile=cProfile --subunit '
197 ... '-t TestNotMuch').split()
198 >>> testrunner.run_internal(defaults)
199 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
200 test: samplelayers.Layer1:setUp
201 ...
202 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
203 successful: samplelayers.Layer1:tearDown
204 test: zope:profiler_stats
205 tags: zope:profiler_stats
206 successful: zope:profiler_stats [ multipart
207 Content-Type: application/x-binary-profile
208 profiler-stats
209 ...\r
210 <BLANKLINE>
211 ...
212 <BLANKLINE>
213 ]
214 False
215
216
217Errors
218------
219
220Errors are recorded in the subunit stream as MIME-encoded chunks of text.
221
222 >>> sys.argv = [
223 ... 'test', '--subunit' , '--tests-pattern', '^sampletests_e$',
224 ... ]
225 >>> testrunner.run_internal(defaults)
226 time: 2010-02-05 15:27:05.113541Z
227 test: zope.testing.testrunner.layer.UnitTests:setUp
228 tags: zope:layer
229 time: 2010-02-05 15:27:05.113545Z
230 successful: zope.testing.testrunner.layer.UnitTests:setUp
231 tags: zope:layer:zope.testing.testrunner.layer.UnitTests
232 test: sample2.sampletests_e.eek
233 failure: sample2.sampletests_e.eek [ multipart
234 Content-Type: text/x-traceback;charset=utf8,language=python
235 traceback
236 4B6\r
237 <BLANKLINE>
238 Failed doctest test for sample2.sampletests_e.eek
239 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/sampletests_e.py", line 29, in eek
240 <BLANKLINE>
241 ----------------------------------------------------------------------
242 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/sampletests_e.py", line 31, in sample2.sampletests_e.eek
243 Failed example:
244 f()
245 Exception raised:
246 Traceback (most recent call last):
247 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/doctest/__init__.py", line 1355, in __run
248 compileflags, 1) in test.globs
249 File "<doctest sample2.sampletests_e.eek[line 2, example 0]>", line 1, in <module>
250 f()
251 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/sampletests_e.py", line 19, in f
252 g()
253 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/sampletests_e.py", line 25, in g
254 x = y + 1
255 - __traceback_info__: I don't know what Y should be.
256 NameError: global name 'y' is not defined
257 0\r
258 <BLANKLINE>
259 ]
260 test: sample2.sampletests_e.Test.test1
261 successful: sample2.sampletests_e.Test.test1
262 test: sample2.sampletests_e.Test.test2
263 successful: sample2.sampletests_e.Test.test2
264 test: sample2.sampletests_e.Test.test3
265 error: sample2.sampletests_e.Test.test3 [ multipart
266 Content-Type: text/x-traceback;charset=utf8,language=python
267 traceback
268 29F\r
269 <BLANKLINE>
270 Traceback (most recent call last):
271 File "/usr/lib/python2.6/unittest.py", line 279, in run
272 testMethod()
273 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/sampletests_e.py", line 44, in test3
274 f()
275 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/sampletests_e.py", line 19, in f
276 g()
277 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/sampletests_e.py", line 25, in g
278 x = y + 1
279 - __traceback_info__: I don't know what Y should be.
280 NameError: global name 'y' is not defined
281 0\r
282 <BLANKLINE>
283 ]
284 test: sample2.sampletests_e.Test.test4
285 successful: sample2.sampletests_e.Test.test4
286 test: sample2.sampletests_e.Test.test5
287 successful: sample2.sampletests_e.Test.test5
288 test: e_txt
289 failure: e_txt [
290 multipart
291 Content-Type: text/x-traceback;charset=utf8,language=python
292 traceback
293 329\r
294 <BLANKLINE>
295 Failed doctest test for e.txt
296 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/e.txt", line 0
297 <BLANKLINE>
298 ----------------------------------------------------------------------
299 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/e.txt", line 4, in e.txt
300 Failed example:
301 f()
302 Exception raised:
303 Traceback (most recent call last):
304 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/doctest/__init__.py", line 1355, in __run
305 compileflags, 1) in test.globs
306 File "<doctest e.txt[line 4, example 1]>", line 1, in <module>
307 f()
308 File "<doctest e.txt[line 1, example 0]>", line 2, in f
309 return x
310 NameError: global name 'x' is not defined
311 0\r
312 <BLANKLINE>
313 ]
314 tags: -zope:layer:zope.testing.testrunner.layer.UnitTests
315 time: 2010-02-05 15:27:05.147082Z
316 test: zope.testing.testrunner.layer.UnitTests:tearDown
317 tags: zope:layer
318 time: 2010-02-05 15:27:05.147088Z
319 successful: zope.testing.testrunner.layer.UnitTests:tearDown
320 True
321
322
323Layers that can't be torn down
324------------------------------
325
326A layer can have a tearDown method that raises NotImplementedError. If this is
327the case and there are no remaining tests to run, the subunit stream will say
328that the layer skipped its tearDown.
329
330 >>> import os.path, sys
331 >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
332 >>> from zope.testing import testrunner
333 >>> defaults = [
334 ... '--subunit',
335 ... '--path', directory_with_tests,
336 ... '--tests-pattern', '^sampletestsf?$',
337 ... ]
338
339 >>> sys.argv = 'test -ssample2 --tests-pattern sampletests_ntd$'.split()
340 >>> testrunner.run_internal(defaults)
341 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
342 test: sample2.sampletests_ntd.Layer:setUp
343 tags: zope:layer
344 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
345 successful: sample2.sampletests_ntd.Layer:setUp
346 tags: zope:layer:sample2.sampletests_ntd.Layer
347 test: sample2.sampletests_ntd.TestSomething.test_something
348 successful: sample2.sampletests_ntd.TestSomething.test_something
349 tags: -zope:layer:sample2.sampletests_ntd.Layer
350 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
351 test: sample2.sampletests_ntd.Layer:tearDown
352 tags: zope:layer
353 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
354 skip: sample2.sampletests_ntd.Layer:tearDown [
355 tearDown not supported
356 ]
357 False
358
359
360Module import errors
361--------------------
362
363We report module import errors too. They get encoded as tests with errors. The
364name of the test is the module that could not be imported, the test's result
365is an error containing the traceback. These "tests" are tagged with
366zope:import_error.
367
368Let's create a module with some bad syntax:
369
370 >>> badsyntax_path = os.path.join(directory_with_tests,
371 ... "sample2", "sampletests_i.py")
372 >>> f = open(badsyntax_path, "w")
373 >>> print >> f, "importx unittest" # syntax error
374 >>> f.close()
375
376And then run the tests:
377
378 >>> sys.argv = (
379 ... 'test --subunit --tests-pattern ^sampletests(f|_i)?$ --layer 1 '
380 ... ).split()
381 >>> testrunner.run_internal(defaults)
382 test: sample2.sampletests_i
383 tags: zope:import_error
384 error: sample2.sampletests_i [
385 Traceback (most recent call last):
386 testrunner-ex/sample2/sampletests_i.py", line 1
387 importx unittest
388 ^
389 SyntaxError: invalid syntax
390 ]
391 test: sample2.sample21.sampletests_i
392 tags: zope:import_error
393 error: sample2.sample21.sampletests_i [
394 Traceback (most recent call last):
395 testrunner-ex/sample2/sample21/sampletests_i.py", Line NNN, in ?
396 import zope.testing.huh
397 ImportError: No module named huh
398 ]
399 test: sample2.sample23.sampletests_i
400 tags: zope:import_error
401 error: sample2.sample23.sampletests_i [
402 Traceback (most recent call last):
403 testrunner-ex/sample2/sample23/sampletests_i.py", Line NNN, in ?
404 class Test(unittest.TestCase):
405 testrunner-ex/sample2/sample23/sampletests_i.py", Line NNN, in Test
406 raise TypeError('eek')
407 TypeError: eek
408 ]
409 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
410 test: samplelayers.Layer1:setUp
411 tags: zope:layer
412 ...
413 True
414
415Of course, because we care deeply about test isolation, we're going to have to
416delete the module with bad syntax now, lest it contaminate other tests or even
417future test runs.
418
419 >>> os.unlink(badsyntax_path)
420
421
422Tests in subprocesses
423---------------------
424
425If the tearDown method raises NotImplementedError and there are remaining
426layers to run, the test runner will restart itself as a new process,
427resuming tests where it left off:
428
429 >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$']
430 >>> testrunner.run_internal(defaults)
431 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
432 test: sample1.sampletests_ntd.Layer:setUp
433 tags: zope:layer
434 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
435 successful: sample1.sampletests_ntd.Layer:setUp
436 tags: zope:layer:sample1.sampletests_ntd.Layer
437 test: sample1.sampletests_ntd.TestSomething.test_something
438 successful: sample1.sampletests_ntd.TestSomething.test_something
439 tags: -zope:layer:sample1.sampletests_ntd.Layer
440 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
441 test: sample1.sampletests_ntd.Layer:tearDown
442 tags: zope:layer
443 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
444 skip: sample1.sampletests_ntd.Layer:tearDown [
445 tearDown not supported
446 ]
447 test: Running in a subprocess.
448 tags: zope:info_suboptimal
449 successful: Running in a subprocess.
450 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
451 test: sample2.sampletests_ntd.Layer:setUp
452 tags: zope:layer
453 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
454 successful: sample2.sampletests_ntd.Layer:setUp
455 tags: zope:layer:sample2.sampletests_ntd.Layer
456 test: sample2.sampletests_ntd.TestSomething.test_something
457 successful: sample2.sampletests_ntd.TestSomething.test_something
458 tags: -zope:layer:sample2.sampletests_ntd.Layer
459 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
460 test: sample2.sampletests_ntd.Layer:tearDown
461 tags: zope:layer
462 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
463 skip: sample2.sampletests_ntd.Layer:tearDown [
464 tearDown not supported
465 ]
466 test: Running in a subprocess.
467 tags: zope:info_suboptimal
468 successful: Running in a subprocess.
469 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
470 test: sample3.sampletests_ntd.Layer:setUp
471 tags: zope:layer
472 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
473 successful: sample3.sampletests_ntd.Layer:setUp
474 tags: zope:layer:sample3.sampletests_ntd.Layer
475 test: sample3.sampletests_ntd.TestSomething.test_error1
476 error: sample3.sampletests_ntd.TestSomething.test_error1 [ multipart
477 Content-Type: text/x-traceback;charset=utf8,language=python
478 traceback
479 14F\r
480 <BLANKLINE>
481 Traceback (most recent call last):
482 testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error1
483 raise TypeError("Can we see errors")
484 TypeError: Can we see errors
485 0\r
486 <BLANKLINE>
487 ]
488 test: sample3.sampletests_ntd.TestSomething.test_error2
489 error: sample3.sampletests_ntd.TestSomething.test_error2 [ multipart
490 Content-Type: text/x-traceback;charset=utf8,language=python
491 traceback
492 13F\r
493 <BLANKLINE>
494 Traceback (most recent call last):
495 testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error2
496 raise TypeError("I hope so")
497 TypeError: I hope so
498 0\r
499 <BLANKLINE>
500 ]
501 test: sample3.sampletests_ntd.TestSomething.test_fail1
502 failure: sample3.sampletests_ntd.TestSomething.test_fail1 [ multipart
503 Content-Type: text/x-traceback;charset=utf8,language=python
504 traceback
505 1AA\r
506 <BLANKLINE>
507 Traceback (most recent call last):
508 testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail1
509 self.assertEqual(1, 2)
510 AssertionError: 1 != 2
511 0\r
512 <BLANKLINE>
513 ]
514 test: sample3.sampletests_ntd.TestSomething.test_fail2
515 failure: sample3.sampletests_ntd.TestSomething.test_fail2 [ multipart
516 Content-Type: text/x-traceback;charset=utf8,language=python
517 traceback
518 1AA\r
519 <BLANKLINE>
520 Traceback (most recent call last):
521 testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail2
522 self.assertEqual(1, 3)
523 AssertionError: 1 != 3
524 0\r
525 <BLANKLINE>
526 ]
527 test: sample3.sampletests_ntd.TestSomething.test_something
528 successful: sample3.sampletests_ntd.TestSomething.test_something
529 test: sample3.sampletests_ntd.TestSomething.test_something_else
530 successful: sample3.sampletests_ntd.TestSomething.test_something_else
531 tags: -zope:layer:sample3.sampletests_ntd.Layer
532 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
533 test: sample3.sampletests_ntd.Layer:tearDown
534 tags: zope:layer
535 time: YYYY-MM-DD HH:MM:SS.mmmmmmZ
536 skip: sample3.sampletests_ntd.Layer:tearDown [
537 tearDown not supported
538 ]
539 True
540
541Note that debugging doesn't work when running tests in a subprocess:
542
543 >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$',
544 ... '-D', ]
545 >>> testrunner.run_internal(defaults)
546 time: 2010-02-10 22:41:25.279692Z
547 test: sample1.sampletests_ntd.Layer:setUp
548 tags: zope:layer
549 time: 2010-02-10 22:41:25.279695Z
550 successful: sample1.sampletests_ntd.Layer:setUp
551 tags: zope:layer:sample1.sampletests_ntd.Layer
552 test: sample1.sampletests_ntd.TestSomething.test_something
553 successful: sample1.sampletests_ntd.TestSomething.test_something
554 tags: -zope:layer:sample1.sampletests_ntd.Layer
555 time: 2010-02-10 22:41:25.310078Z
556 test: sample1.sampletests_ntd.Layer:tearDown
557 tags: zope:layer
558 time: 2010-02-10 22:41:25.310171Z
559 skip: sample1.sampletests_ntd.Layer:tearDown [
560 tearDown not supported
561 ]
562 test: Running in a subprocess.
563 tags: zope:info_suboptimal
564 successful: Running in a subprocess.
565 time: 2010-02-10 22:41:25.753076Z
566 test: sample2.sampletests_ntd.Layer:setUp
567 tags: zope:layer
568 time: 2010-02-10 22:41:25.753079Z
569 successful: sample2.sampletests_ntd.Layer:setUp
570 tags: zope:layer:sample2.sampletests_ntd.Layer
571 test: sample2.sampletests_ntd.TestSomething.test_something
572 successful: sample2.sampletests_ntd.TestSomething.test_something
573 tags: -zope:layer:sample2.sampletests_ntd.Layer
574 time: 2010-02-10 22:41:25.779256Z
575 test: sample2.sampletests_ntd.Layer:tearDown
576 tags: zope:layer
577 time: 2010-02-10 22:41:25.779326Z
578 skip: sample2.sampletests_ntd.Layer:tearDown [
579 tearDown not supported
580 ]
581 test: Running in a subprocess.
582 tags: zope:info_suboptimal
583 successful: Running in a subprocess.
584 time: 2010-02-10 22:41:26.310296Z
585 test: sample3.sampletests_ntd.Layer:setUp
586 tags: zope:layer
587 time: 2010-02-10 22:41:26.310299Z
588 successful: sample3.sampletests_ntd.Layer:setUp
589 tags: zope:layer:sample3.sampletests_ntd.Layer
590 test: sample3.sampletests_ntd.TestSomething.test_error1
591 error: sample3.sampletests_ntd.TestSomething.test_error1 [ multipart
592 Content-Type: text/x-traceback;charset=utf8,language=python
593 traceback
594 16A\r
595 <BLANKLINE>
596 Traceback (most recent call last):
597 File "/usr/lib/python2.6/unittest.py", line 305, in debug
598 getattr(self, self._testMethodName)()
599 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample3/sampletests_ntd.py", line 42, in test_error1
600 raise TypeError("Can we see errors")
601 TypeError: Can we see errors
602 0\r
603 <BLANKLINE>
604 ]
605 test: Can't post-mortem debug when running a layer as a subprocess!
606 tags: zope:error_with_banner
607 successful: Can't post-mortem debug when running a layer as a subprocess!
608 test: sample3.sampletests_ntd.TestSomething.test_error2
609 error: sample3.sampletests_ntd.TestSomething.test_error2 [ multipart
610 Content-Type: text/x-traceback;charset=utf8,language=python
611 traceback
612 15A\r
613 <BLANKLINE>
614 Traceback (most recent call last):
615 File "/usr/lib/python2.6/unittest.py", line 305, in debug
616 getattr(self, self._testMethodName)()
617 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample3/sampletests_ntd.py", line 45, in test_error2
618 raise TypeError("I hope so")
619 TypeError: I hope so
620 0\r
621 <BLANKLINE>
622 ]
623 test: Can't post-mortem debug when running a layer as a subprocess!
624 tags: zope:error_with_banner
625 successful: Can't post-mortem debug when running a layer as a subprocess!
626 test: sample3.sampletests_ntd.TestSomething.test_fail1
627 error: sample3.sampletests_ntd.TestSomething.test_fail1 [ multipart
628 Content-Type: text/x-traceback;charset=utf8,language=python
629 traceback
630 1C5\r
631 <BLANKLINE>
632 Traceback (most recent call last):
633 File "/usr/lib/python2.6/unittest.py", line 305, in debug
634 getattr(self, self._testMethodName)()
635 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample3/sampletests_ntd.py", line 48, in test_fail1
636 self.assertEqual(1, 2)
637 File "/usr/lib/python2.6/unittest.py", line 350, in failUnlessEqual
638 (msg or '%r != %r' % (first, second))
639 AssertionError: 1 != 2
640 0\r
641 <BLANKLINE>
642 ]
643 test: Can't post-mortem debug when running a layer as a subprocess!
644 tags: zope:error_with_banner
645 successful: Can't post-mortem debug when running a layer as a subprocess!
646 test: sample3.sampletests_ntd.TestSomething.test_fail2
647 error: sample3.sampletests_ntd.TestSomething.test_fail2 [ multipart
648 Content-Type: text/x-traceback;charset=utf8,language=python
649 traceback
650 1C5\r
651 <BLANKLINE>
652 Traceback (most recent call last):
653 File "/usr/lib/python2.6/unittest.py", line 305, in debug
654 getattr(self, self._testMethodName)()
655 File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample3/sampletests_ntd.py", line 51, in test_fail2
656 self.assertEqual(1, 3)
657 File "/usr/lib/python2.6/unittest.py", line 350, in failUnlessEqual
658 (msg or '%r != %r' % (first, second))
659 AssertionError: 1 != 3
660 0\r
661 <BLANKLINE>
662 ]
663 test: Can't post-mortem debug when running a layer as a subprocess!
664 tags: zope:error_with_banner
665 successful: Can't post-mortem debug when running a layer as a subprocess!
666 test: sample3.sampletests_ntd.TestSomething.test_something
667 successful: sample3.sampletests_ntd.TestSomething.test_something
668 test: sample3.sampletests_ntd.TestSomething.test_something_else
669 successful: sample3.sampletests_ntd.TestSomething.test_something_else
670 tags: -zope:layer:sample3.sampletests_ntd.Layer
671 time: 2010-02-10 22:41:26.340878Z
672 test: sample3.sampletests_ntd.Layer:tearDown
673 tags: zope:layer
674 time: 2010-02-10 22:41:26.340945Z
675 skip: sample3.sampletests_ntd.Layer:tearDown [
676 tearDown not supported
677 ]
678 True
=== modified file 'src/zope/testing/testrunner/tests.py'
--- src/zope/testing/testrunner/tests.py 2009-12-18 08:23:21 +0000
+++ src/zope/testing/testrunner/tests.py 2010-03-11 22:36:30 +0000
@@ -103,6 +103,8 @@
     (re.compile(r'\d+[.]\d\d\d seconds'), 'N.NNN seconds'),
     (re.compile(r'\d+[.]\d\d\d s'), 'N.NNN s'),
     (re.compile(r'\d+[.]\d\d\d{'), 'N.NNN{'),
+    (re.compile(r'\d{4}-\d\d-\d\d \d\d:\d\d:\d\d\.\d+'),
+     'YYYY-MM-DD HH:MM:SS.mmmmmm'),
     (re.compile('( |")[^\n]+testrunner-ex'), r'\1testrunner-ex'),
     (re.compile('( |")[^\n]+testrunner.py'), r'\1testrunner.py'),
     (re.compile(r'> [^\n]*(doc|unit)test[.]py\(\d+\)'),
@@ -251,4 +253,29 @@
             checker=checker,
             )
 
+
+    try:
+        import subunit
+    except ImportError:
+        suites.append(
+            doctest.DocFileSuite(
+                'testrunner-subunit-err.txt',
+                setUp=setUp, tearDown=tearDown,
+                optionflags=doctest.ELLIPSIS + doctest.NORMALIZE_WHITESPACE,
+                checker=checker))
+    else:
+        suites.append(
+            doctest.DocFileSuite(
+                'testrunner-subunit.txt',
+                setUp=setUp, tearDown=tearDown,
+                optionflags=doctest.ELLIPSIS + doctest.NORMALIZE_WHITESPACE,
+                checker=checker))
+        if hasattr(sys, 'gettotalrefcount'):
+            suites.append(
+                doctest.DocFileSuite(
+                    'testrunner-subunit-leaks.txt',
+                    setUp=setUp, tearDown=tearDown,
+                    optionflags=doctest.ELLIPSIS + doctest.NORMALIZE_WHITESPACE,
+                    checker=checker))
+
     return unittest.TestSuite(suites)
