Merge lp:~jml/zope.testing/subunit-output-formatter into lp:~vcs-imports/zope.testing/trunk
Status: Merged
Merge reported by: Jonathan Lange
Merged at revision: not available
Proposed branch: lp:~jml/zope.testing/subunit-output-formatter
Merge into: lp:~vcs-imports/zope.testing/trunk
Diff against target: 1378 lines (+1270/-5), 8 files modified:
  .bzrignore (+6/-0)
  src/zope/testing/testrunner/formatter.py (+405/-0)
  src/zope/testing/testrunner/options.py (+24/-2)
  src/zope/testing/testrunner/testrunner-leaks.txt (+3/-3)
  src/zope/testing/testrunner/testrunner-subunit-err.txt (+20/-0)
  src/zope/testing/testrunner/testrunner-subunit-leaks.txt (+107/-0)
  src/zope/testing/testrunner/testrunner-subunit.txt (+678/-0)
  src/zope/testing/testrunner/tests.py (+27/-0)
To merge this branch: bzr merge lp:~jml/zope.testing/subunit-output-formatter
Related bugs: none
Commit message
Add a subunit output formatter to zope.testing.
Description of the change
Robert Collins (lifeless) wrote:
Oh, I forgot to mention; for profiling, using bzrlib.
Jonathan Lange (jml) wrote:
> Broad issues:
...
> As subunit has a progress abstraction, the 'cannot support progress' statement
> confused me. Perhaps say 'cannot support zopes concept of progress because
> xxx'.
>
Changed to:
# subunit output is designed for computers, so displaying a progress bar
# isn't helpful.
Which isn't _strictly_ true, since displaying a progress bar might help the humans running the process. The real reason is it's too hard.
> I see you've worked around the bug in subunit where there isn't a tag method
> on the test result; perhaps you could submit a patch for that ? the contract
> is(I think) clear, just 'not done'.
>
...
> 194 + Since subunit is a stream protocol format, it has no summary.
> perhaps 'no need for a summary - when the stream is displayed a summary can be
> created then.
...
Now says:
Since subunit is a stream protocol format, it has no need for a
summary. When the stream is finished other tools can generate a
summary if so desired.
> 235 + def _exc_info_
> 236 + """Translate 'exc_info' into a details dictionary usable with
> subunit.
> 237 + """
> 238 + import subunit
> 239 + content_type = subunit.
> 240 + 'text', 'x-traceback', dict(language=
> 241 + formatter = OutputFormatter
> 242 + traceback = formatter.
> 243 + return {
> 244 + 'traceback': subunit.
> 245 + content_type, lambda: [traceback.
>
> This might be better as
> import testtools.content
> test = unittest.TestCase()
> content = TracebackConten
> return {'traceback': content}
>
> unless the formatter.
> (and if it is, perhaps you should mention that. If its doing something
> nonobvious, then I suggest subclassing testtools.
> testtools.
>
It's the non-obvious thing. Added this comment:
# In an ideal world, we'd use the pre-bundled 'TracebackContent' class
# from testtools. However, 'OutputFormatter' contains special logic to
# handle errors from doctests, so we have to use that and manually
# create an object equivalent to an instance of 'TracebackContent'.
> Also, you might want a global import rather than a scope local.
>
Yeah, good point.
I've changed it to be a global import. Note that it's still a soft dependency.
>
> 270 + # XXX: Since the subunit stream is designed for machine reading, we
> 271 + # should really dump the binary profiler stats here. Sadly, the
> 272 + # "content" API doesn't support this yet. Instead, we dump the
> 273 + # stringified version of the stats dict, which is functionally the
> 274 + # same thing. -- jml, 2010-02-14.
> 275 + plain_text = subunit.
> 276 + 'text', 'plain', {'charset': 'utf8'})
> 277 + details = {
> 278 + 'profiler-stats': subunit.
> 279 + plain_text, lambda: [unicode(
>
> meta: where some dependency is insuffic...
Jonathan Lange (jml) wrote:
OK, I think this is ready for landing into trunk. Would appreciate a review and guidance through whatever hoops I must jump through to get this landed.
Robert Collins (lifeless) wrote:
On Thu, 2010-03-11 at 20:31 +0000, Jonathan Lange wrote:
> > Broad issues:
> ...
> > As subunit has a progress abstraction, the 'cannot support progress' statement
> > confused me. Perhaps say 'cannot support zopes concept of progress because
> > xxx'.
> >
>
> Changed to:
> # subunit output is designed for computers, so displaying a progress bar
> # isn't helpful.
>
> Which isn't _strictly_ true, since displaying a progress bar might
> help the humans running the process. The real reason is it's too hard.
This still leaves me confused: subunit can fairly easily do a progress
bar, in-process or out-of-process. I was guessing that zopes concept of
progress was strange or something.
> > I think its great you've written docs/tests for this. I worry that they will
> > be fragile because they encode the subunit stream representation, but thats
> > not what you wrote: you wrote some object calls against subunit. I suggest you
> > use the guts from things like subunit-ls, subunit2pyunit, subunit-stats to
> > make your tests be done at the same level as the code wrote: to the object
> > API. This will prevent you having to e.g. include multipart MIME or http
> > chunking in the tests. Specifically I suspect you are really writing smoke
> > tests, and just a stats output will do great for your needs [most of the
> > time].
> >
>
> I suspect you are right. However, at this point I can't be bothered making that change. :)
It's your dime ;)
Let me know of anything I can do to help.
-Rob
377. By Jonathan Lange

    Correct versions for testtools and subunit
Preview Diff
=== added file '.bzrignore'
--- .bzrignore	1970-01-01 00:00:00 +0000
+++ .bzrignore	2010-03-11 22:36:30 +0000
@@ -0,0 +1,6 @@
+.installed.cfg
+bin
+develop-eggs
+documentation.txt
+src/zope.testing.egg-info
+parts
=== modified file 'src/zope/testing/testrunner/formatter.py'
--- src/zope/testing/testrunner/formatter.py	2009-12-23 21:21:53 +0000
+++ src/zope/testing/testrunner/formatter.py	2010-03-11 22:36:30 +0000
@@ -16,9 +16,12 @@
 $Id: __init__.py 86207 2008-05-03 13:25:02Z ctheune $
 """
 
+from datetime import datetime, timedelta
 import doctest
+import os
 import sys
 import re
+import tempfile
 import traceback
 
 from zope.testing.exceptions import DocTestFailureException
@@ -647,3 +650,405 @@
     else:
         print self.colorize('exception', line)
 
+
+
+class FakeTest(object):
+    """A fake test object that only has an id."""
+
+    failureException = None
+
+    def __init__(self, test_id):
+        self._id = test_id
+
+    def id(self):
+        return self._id
+
+
+# Conditional imports, we don't want zope.testing to have a hard dependency on
+# subunit.
+try:
+    import subunit
+    from subunit.iso8601 import Utc
+except ImportError:
+    subunit = None
+
+
+# testtools is a hard dependency of subunit itself, guarding separately for
+# richer error messages.
+try:
+    from testtools import content
+except ImportError:
+    content = None
+
+
+class SubunitOutputFormatter(object):
+    """A subunit output formatter.
+
+    This output formatter generates subunit compatible output (see
+    https://launchpad.net/subunit). Subunit output is essentially a stream of
+    results of unit tests. In this formatter, non-test events (such as layer
+    set up) are encoded as specially tagged tests and summary-generating
+    methods (such as modules_with_import_problems) deliberately do nothing.
+
+    In particular, for a layer 'foo', the fake tests related to layer set up
+    and tear down are tagged with 'zope:layer' and are called 'foo:setUp' and
+    'foo:tearDown'. Any tests within layer 'foo' are tagged with
+    'zope:layer:foo'.
+
+    Note that all tags specific to this formatter begin with 'zope:'
+    """
+
+    # subunit output is designed for computers, so displaying a progress bar
+    # isn't helpful.
+    progress = False
+    verbose = property(lambda self: self.options.verbose)
+
+    TAG_INFO_SUBOPTIMAL = 'zope:info_suboptimal'
+    TAG_ERROR_WITH_BANNER = 'zope:error_with_banner'
+    TAG_LAYER = 'zope:layer'
+    TAG_IMPORT_ERROR = 'zope:import_error'
+    TAG_PROFILER_STATS = 'zope:profiler_stats'
+    TAG_GARBAGE = 'zope:garbage'
+    TAG_THREADS = 'zope:threads'
+    TAG_REFCOUNTS = 'zope:refcounts'
+
+    def __init__(self, options):
+        if subunit is None:
+            raise Exception("Requires subunit 0.0.5 or better")
+        if content is None:
+            raise Exception("Requires testtools 0.9.2 or better")
+        self.options = options
+        self._stream = sys.stdout
+        self._subunit = subunit.TestProtocolClient(self._stream)
+        # Used to track the last layer that was set up or torn down. Either
+        # None or (layer_name, last_touched_time).
+        self._last_layer = None
+        self.UTC = Utc()
+        # Content types used in the output.
+        self.TRACEBACK_CONTENT_TYPE = content.ContentType(
+            'text', 'x-traceback', dict(language='python', charset='utf8'))
+        self.PROFILE_CONTENT_TYPE = content.ContentType(
+            'application', 'x-binary-profile')
+        self.PLAIN_TEXT = content.ContentType(
+            'text', 'plain', {'charset': 'utf8'})
+
+    @property
+    def _time_tests(self):
+        return self.verbose > 2
+
+    def _emit_timestamp(self, now=None):
+        """Emit a timestamp to the subunit stream.
+
+        If 'now' is not specified, use the current time on the system clock.
+        """
+        if now is None:
+            now = datetime.now(self.UTC)
+        self._subunit.time(now)
+        return now
+
+    def _emit_tag(self, tag):
+        self._stream.write('tags: %s\n' % (tag,))
+
+    def _stop_tag(self, tag):
+        self._stream.write('tags: -%s\n' % (tag,))
+
+    def _emit_fake_test(self, message, tag, details=None):
+        """Emit a successful fake test to the subunit stream.
+
+        Use this to print tagged informative messages.
+        """
+        test = FakeTest(message)
+        self._subunit.startTest(test)
+        self._emit_tag(tag)
+        self._subunit.addSuccess(test, details)
+
+    def _emit_error(self, error_id, tag, exc_info):
+        """Emit an error to the subunit stream.
+
+        Use this to pass on information about errors that occur outside of
+        tests.
+        """
+        test = FakeTest(error_id)
+        self._subunit.startTest(test)
+        self._emit_tag(tag)
+        self._subunit.addError(test, exc_info)
+
+    def info(self, message):
+        """Print an informative message, but only if verbose."""
+        # info() output is not relevant to actual test results. It only says
+        # things like "Running tests" or "Tearing down left over layers",
+        # things that are communicated already by the subunit stream. Just
+        # suppress the info() output.
+        pass
+
+    def info_suboptimal(self, message):
+        """Print an informative message about losing some of the features.
+
+        For example, when you run some tests in a subprocess, you lose the
+        ability to use the debugger.
+        """
+        # Used _only_ to indicate running in a subprocess.
+        self._emit_fake_test(message.strip(), self.TAG_INFO_SUBOPTIMAL)
+
+    def error(self, message):
+        """Report an error."""
+        # XXX: Mostly used for user errors, sometimes used for errors in the
+        # test framework, sometimes used to record layer setUp failure (!!!).
+        # How should this be encoded?
+        raise NotImplementedError(self.error)
+
+    def error_with_banner(self, message):
+        """Report an error with a big ASCII banner."""
+        # Either "Could not communicate with subprocess"
+        # Or "Can't post-mortem debug when running a layer as a subprocess!"
+        self._emit_fake_test(message, self.TAG_ERROR_WITH_BANNER)
+
+    def _enter_layer(self, layer_name):
+        """Signal in the subunit stream that we are entering a layer."""
+        self._emit_tag('zope:layer:%s' % (layer_name,))
+
+    def _exit_layer(self, layer_name):
+        """Signal in the subunit stream that we are leaving a layer."""
+        self._stop_tag('zope:layer:%s' % (layer_name,))
+
+    def start_set_up(self, layer_name):
+        """Report that we're setting up a layer.
+
+        We do this by emitting a tag of the form 'layer:$LAYER_NAME'.
+        """
+        now = self._emit_timestamp()
+        self._subunit.startTest(FakeTest('%s:setUp' % (layer_name,)))
+        self._emit_tag(self.TAG_LAYER)
+        self._last_layer = (layer_name, now)
+
+    def stop_set_up(self, seconds):
+        layer_name, start_time = self._last_layer
+        self._last_layer = None
+        self._emit_timestamp(start_time + timedelta(seconds=seconds))
+        self._subunit.addSuccess(FakeTest('%s:setUp' % (layer_name,)))
+        self._enter_layer(layer_name)
+
+    def start_tear_down(self, layer_name):
+        """Report that we're tearing down a layer.
+
+        We do this by removing a tag of the form 'layer:$LAYER_NAME'.
+        """
+        self._exit_layer(layer_name)
+        now = self._emit_timestamp()
+        self._subunit.startTest(FakeTest('%s:tearDown' % (layer_name,)))
+        self._emit_tag(self.TAG_LAYER)
+        self._last_layer = (layer_name, now)
+
+    def stop_tear_down(self, seconds):
+        layer_name, start_time = self._last_layer
+        self._last_layer = None
+        self._emit_timestamp(start_time + timedelta(seconds=seconds))
+        self._subunit.addSuccess(FakeTest('%s:tearDown' % (layer_name,)))
+
+    def tear_down_not_supported(self):
+        """Report that we could not tear down a layer.
+
+        Should be called right after start_tear_down().
+        """
+        layer_name, start_time = self._last_layer
+        self._last_layer = None
+        self._emit_timestamp(datetime.now(self.UTC))
+        self._subunit.addSkip(
+            FakeTest('%s:tearDown' % (layer_name,)), "tearDown not supported")
+
+    def summary(self, n_tests, n_failures, n_errors, n_seconds):
+        """Print out a summary.
+
+        Since subunit is a stream protocol format, it has no need for a
+        summary. When the stream is finished other tools can generate a
+        summary if so desired.
+        """
+        pass
+
+    def start_test(self, test, tests_run, total_tests):
+        """Report that we're about to run a test.
+
+        The next output operation should be test_success(), test_error(), or
+        test_failure().
+        """
+        if self._time_tests:
+            self._emit_timestamp()
+        # Note that this always emits newlines, so it will function as well as
+        # other start_test implementations if we are running in a subprocess.
+        self._subunit.startTest(test)
+
+    def stop_test(self, test):
+        """Clean up the output state after a test."""
+        self._subunit.stopTest(test)
+        self._stream.flush()
+
+    def stop_tests(self):
+        """Clean up the output state after a collection of tests."""
+        self._stream.flush()
+
+    def test_success(self, test, seconds):
+        if self._time_tests:
+            self._emit_timestamp()
+        self._subunit.addSuccess(test)
+
+    def import_errors(self, import_errors):
+        """Report test-module import errors (if any)."""
+        if not import_errors:
+            return
+        for error in import_errors:
+            self._emit_error(
+                error.module, self.TAG_IMPORT_ERROR, error.exc_info)
+
+    def modules_with_import_problems(self, import_errors):
+        """Report names of modules with import problems (if any)."""
+        # This is simply a summary method, and subunit output doesn't benefit
+        # from summaries.
+        pass
+
+    def _exc_info_to_details(self, exc_info):
+        """Translate 'exc_info' into a details dictionary usable with subunit.
+        """
+        # In an ideal world, we'd use the pre-bundled 'TracebackContent' class
+        # from testtools. However, 'OutputFormatter' contains special logic to
+        # handle errors from doctests, so we have to use that and manually
+        # create an object equivalent to an instance of 'TracebackContent'.
+        formatter = OutputFormatter(None)
+        traceback = formatter.format_traceback(exc_info)
+        return {
+            'traceback': content.Content(
+                self.TRACEBACK_CONTENT_TYPE,
+                lambda: [traceback.encode('utf8')])}
+
+    def test_error(self, test, seconds, exc_info):
+        """Report that an error occurred while running a test.
+
+        Should be called right after start_test().
+
+        The next output operation should be stop_test().
+        """
+        if self._time_tests:
+            self._emit_timestamp()
+        details = self._exc_info_to_details(exc_info)
+        self._subunit.addError(test, details=details)
+
+    def test_failure(self, test, seconds, exc_info):
+        """Report that a test failed.
+
+        Should be called right after start_test().
+
+        The next output operation should be stop_test().
+        """
+        if self._time_tests:
+            self._emit_timestamp()
+        details = self._exc_info_to_details(exc_info)
+        self._subunit.addFailure(test, details=details)
+
+    def profiler_stats(self, stats):
+        """Report profiler stats."""
+        ignored, filename = tempfile.mkstemp()
+        try:
+            stats.dump_stats(filename)
+            profile_content = content.Content(
+                self.PROFILE_CONTENT_TYPE, open(filename).read)
+            details = {'profiler-stats': profile_content}
+        finally:
+            os.unlink(filename)
+        # Name the test 'zope:profiler_stats' just like its tag.
+        self._emit_fake_test(
+            self.TAG_PROFILER_STATS, self.TAG_PROFILER_STATS, details)
+
+    def tests_with_errors(self, errors):
+        """Report tests with errors.
+
+        Simply not supported by the subunit formatter. Fancy summary output
+        doesn't make sense.
+        """
+        pass
+
+    def tests_with_failures(self, failures):
+        """Report tests with failures.
+
+        Simply not supported by the subunit formatter. Fancy summary output
+        doesn't make sense.
+        """
+        pass
+
+    def totals(self, n_tests, n_failures, n_errors, n_seconds):
+        """Summarize the results of all layers.
+
+        Simply not supported by the subunit formatter. Fancy summary output
+        doesn't make sense.
+        """
+        pass
+
+    def list_of_tests(self, tests, layer_name):
+        """Report a list of test names."""
+        self._enter_layer(layer_name)
+        for test in tests:
+            self._subunit.startTest(test)
+            self._subunit.addSuccess(test)
+        self._exit_layer(layer_name)
+
+    def _get_text_details(self, name, text):
+        """Get a details dictionary that just has some plain text."""
+        return {
+            name: content.Content(
+                self.PLAIN_TEXT, lambda: [text.encode('utf8')])}
+
+    def garbage(self, garbage):
+        """Report garbage generated by tests."""
+        # XXX: Really, 'garbage', 'profiler_stats' and the 'refcounts' twins
+        # ought to add extra details to a fake test that represents the
+        # summary information for the whole suite. However, there's no event
+        # on output formatters for "everything is really finished, honest". --
+        # jml, 2010-02-14
+        details = self._get_text_details('garbage', unicode(garbage))
+        self._emit_fake_test(
+            self.TAG_GARBAGE, self.TAG_GARBAGE, details)
+
+    def test_garbage(self, test, garbage):
+        """Report garbage generated by a test.
+
+        Encoded in the subunit stream as a test error. Clients can filter out
+        these tests based on the tag if they don't think garbage should fail
+        the test run.
+        """
+        # XXX: Perhaps 'test_garbage' and 'test_threads' ought to be within
+        # the output for the actual test, appended as details to whatever
+        # result the test gets. Not an option with the present API, as there's
+        # no event for "no more output for this test". -- jml, 2010-02-14
+        self._subunit.startTest(test)
+        self._emit_tag(self.TAG_GARBAGE)
+        self._subunit.addError(
+            test, self._get_text_details('garbage', unicode(garbage)))
+
+    def test_threads(self, test, new_threads):
+        """Report threads left behind by a test.
+
+        Encoded in the subunit stream as a test error. Clients can filter out
+        these tests based on the tag if they don't think left-over threads
+        should fail the test run.
+        """
+        self._subunit.startTest(test)
+        self._emit_tag(self.TAG_THREADS)
+        self._subunit.addError(
+            test, self._get_text_details('garbage', unicode(new_threads)))
+
+    def refcounts(self, rc, prev):
+        """Report a change in reference counts."""
+        details = self._get_text_details('sys-refcounts', str(rc))
+        details.update(
+            self._get_text_details('changes', str(rc - prev)))
+        # XXX: Emit the details dict as JSON?
+        self._emit_fake_test(
+            self.TAG_REFCOUNTS, self.TAG_REFCOUNTS, details)
+
+    def detailed_refcounts(self, track, rc, prev):
+        """Report a change in reference counts, with extra detail."""
+        details = self._get_text_details('sys-refcounts', str(rc))
+        details.update(
+            self._get_text_details('changes', str(rc - prev)))
+        details.update(
+            self._get_text_details('track', str(track.delta)))
+
+        self._emit_fake_test(
+            self.TAG_REFCOUNTS, self.TAG_REFCOUNTS, details)
 
=== modified file 'src/zope/testing/testrunner/options.py'
--- src/zope/testing/testrunner/options.py	2009-12-18 08:23:21 +0000
+++ src/zope/testing/testrunner/options.py	2010-03-11 22:36:30 +0000
@@ -22,7 +22,11 @@
 import sys
 
 from zope.testing.testrunner.profiling import available_profilers
-from zope.testing.testrunner.formatter import OutputFormatter, ColorfulOutputFormatter
+from zope.testing.testrunner.formatter import (
+    OutputFormatter,
+    ColorfulOutputFormatter,
+    SubunitOutputFormatter,
+    )
 from zope.testing.testrunner.formatter import terminal_has_colors
 
 
@@ -192,6 +196,12 @@
     """)
 
 reporting.add_option(
+    '--subunit', action="store_true", dest='subunit',
+    help="""\
+Use subunit output. Will not be colorized.
+""")
+
+reporting.add_option(
     '--slow-test', type='float', dest='slow_test_threshold', metavar='N',
     help="""\
 With -c and -vvv, highlight tests that take longer than N seconds (default:
@@ -531,7 +541,19 @@
     options, positional = parser.parse_args(args[1:], defaults)
     options.original_testrunner_args = args
 
-    if options.color:
+    if options.subunit:
+        try:
+            import subunit
+        except ImportError:
+            print """\
+Subunit is not installed. Please install Subunit
+to generate subunit output.
+"""
+            options.fail = True
+            return options
+        else:
+            options.output = SubunitOutputFormatter(options)
+    elif options.color:
         options.output = ColorfulOutputFormatter(options)
         options.output.slow_test_threshold = options.slow_test_threshold
     else:
=== modified file 'src/zope/testing/testrunner/testrunner-leaks.txt'
--- src/zope/testing/testrunner/testrunner-leaks.txt	2008-05-05 18:50:48 +0000
+++ src/zope/testing/testrunner/testrunner-leaks.txt	2010-03-11 22:36:30 +0000
@@ -16,7 +16,7 @@
     >>> from zope.testing import testrunner
 
     >>> sys.argv = 'test --layer Layer11$ --layer Layer12$ -N4 -r'.split()
-    >>> _ = testrunner.run(defaults)
+    >>> _ = testrunner.run_internal(defaults)
     Running samplelayers.Layer11 tests:
       Set up samplelayers.Layer1 in 0.000 seconds.
       Set up samplelayers.Layer11 in 0.000 seconds.
@@ -60,7 +60,7 @@
 Let's look at an example test that leaks:
 
     >>> sys.argv = 'test --tests-pattern leak -N4 -r'.split()
-    >>> _ = testrunner.run(defaults)
+    >>> _ = testrunner.run_internal(defaults)
     Running zope.testing.testrunner.layer.UnitTests tests:...
     Iteration 1
       Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
@@ -81,7 +81,7 @@
 type (or class):
 
     >>> sys.argv = 'test --tests-pattern leak -N5 -r -v'.split()
-    >>> _ = testrunner.run(defaults)
+    >>> _ = testrunner.run_internal(defaults)
     Running tests at level 1
     Running zope.testing.testrunner.layer.UnitTests tests:...
     Iteration 1
 
=== added file 'src/zope/testing/testrunner/testrunner-subunit-err.txt'
--- src/zope/testing/testrunner/testrunner-subunit-err.txt	1970-01-01 00:00:00 +0000
+++ src/zope/testing/testrunner/testrunner-subunit-err.txt	2010-03-11 22:36:30 +0000
@@ -0,0 +1,20 @@
+Using subunit output without subunit installed
+==============================================
+
+To use the --subunit reporting option, you must have subunit installed. If you
+do not, you will get an error message:
+
+    >>> import os.path, sys
+    >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+    >>> defaults = [
+    ...     '--path', directory_with_tests,
+    ...     '--tests-pattern', '^sampletestsf?$',
+    ...     ]
+
+    >>> from zope.testing import testrunner
+
+    >>> sys.argv = 'test --subunit'.split()
+    >>> _ = testrunner.run_internal(defaults)
+    Subunit is not installed. Please install Subunit
+    to generate subunit output.
+    <BLANKLINE>
542 | === added file 'src/zope/testing/testrunner/testrunner-subunit-leaks.txt' | |||
543 | --- src/zope/testing/testrunner/testrunner-subunit-leaks.txt 1970-01-01 00:00:00 +0000 | |||
544 | +++ src/zope/testing/testrunner/testrunner-subunit-leaks.txt 2010-03-11 22:36:30 +0000 | |||
545 | @@ -0,0 +1,107 @@ | |||
546 | 1 | Debugging Memory Leaks with subunit output | ||
547 | 2 | ========================================== | ||
548 | 3 | |||
549 | 4 | The --report-refcounts (-r) option can be used with the --repeat (-N) | ||
550 | 5 | option to detect and diagnose memory leaks. To use this option, you | ||
551 | 6 | must configure Python with the --with-pydebug option. (On Unix, pass | ||
552 | 7 | this option to configure and then build Python.) | ||
553 | 8 | |||
554 | 9 | For more detailed documentation, see testrunner-leaks.txt. | ||
555 | 10 | |||
556 | 11 | >>> import os.path, sys | ||
557 | 12 | >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex') | ||
558 | 13 | >>> defaults = [ | ||
559 | 14 | ... '--path', directory_with_tests, | ||
560 | 15 | ... '--tests-pattern', '^sampletestsf?$', | ||
561 | 16 | ... ] | ||
562 | 17 | |||
563 | 18 | >>> from zope.testing import testrunner | ||
564 | 19 | |||
565 | 20 | Each layer is repeated the requested number of times. For each | ||
566 | 21 | iteration after the first, the system refcount and change in system | ||
567 | 22 | refcount is shown. The system refcount is the total of all refcount in | ||
568 | 23 | the system. When a refcount on any object is changed, the system | ||
569 | 24 | refcount is changed by the same amount. Tests that don't leak show | ||
570 | 25 | zero changes in system refcount. | ||
571 | 26 | |||
572 | 27 | Let's look at an example test that leaks: | ||
573 | 28 | |||
574 | 29 | >>> sys.argv = 'test --subunit --tests-pattern leak -N2 -r'.split() | ||
575 | 30 | >>> _ = testrunner.run_internal(defaults) | ||
576 | 31 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
577 | 32 | test: zope.testing.testrunner.layer.UnitTests:setUp | ||
578 | 33 | tags: zope:layer | ||
579 | 34 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
580 | 35 | successful: zope.testing.testrunner.layer.UnitTests:setUp | ||
581 | 36 | tags: zope:layer:zope.testing.testrunner.layer.UnitTests | ||
582 | 37 | test: leak.TestSomething.testleak | ||
583 | 38 | successful: leak.TestSomething.testleak | ||
584 | 39 | test: leak.TestSomething.testleak | ||
585 | 40 | successful: leak.TestSomething.testleak | ||
586 | 41 | test: zope:refcounts | ||
587 | 42 | tags: zope:refcounts | ||
588 | 43 | successful: zope:refcounts [ multipart | ||
589 | 44 | Content-Type: text/plain;charset=utf8 | ||
590 | 45 | ... | ||
591 | 46 | ...\r | ||
592 | 47 | <BLANKLINE> | ||
593 | 48 | ...\r | ||
594 | 49 | <BLANKLINE> | ||
595 | 50 | Content-Type: text/plain;charset=utf8 | ||
596 | 51 | ... | ||
597 | 52 | ...\r | ||
598 | 53 | <BLANKLINE> | ||
599 | 54 | ...\r | ||
600 | 55 | <BLANKLINE> | ||
601 | 56 | ] | ||
602 | 57 | tags: -zope:layer:zope.testing.testrunner.layer.UnitTests | ||
603 | 58 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
604 | 59 | test: zope.testing.testrunner.layer.UnitTests:tearDown | ||
605 | 60 | tags: zope:layer | ||
606 | 61 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
607 | 62 | successful: zope.testing.testrunner.layer.UnitTests:tearDown | ||
608 | 63 | |||
609 | 64 | Here we see that the system refcount is increasing. If we specify a | ||
610 | 65 | verbosity greater than one, we can get details broken out by object | ||
611 | 66 | type (or class): | ||
612 | 67 | |||
613 | 68 | >>> sys.argv = 'test --subunit --tests-pattern leak -N2 -v -r'.split() | ||
614 | 69 | >>> _ = testrunner.run_internal(defaults) | ||
615 | 70 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
616 | 71 | test: zope.testing.testrunner.layer.UnitTests:setUp | ||
617 | 72 | tags: zope:layer | ||
618 | 73 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
619 | 74 | successful: zope.testing.testrunner.layer.UnitTests:setUp | ||
620 | 75 | tags: zope:layer:zope.testing.testrunner.layer.UnitTests | ||
621 | 76 | test: leak.TestSomething.testleak | ||
622 | 77 | successful: leak.TestSomething.testleak | ||
623 | 78 | test: leak.TestSomething.testleak | ||
624 | 79 | successful: leak.TestSomething.testleak | ||
625 | 80 | test: zope:refcounts | ||
626 | 81 | tags: zope:refcounts | ||
627 | 82 | successful: zope:refcounts [ multipart | ||
628 | 83 | Content-Type: text/plain;charset=utf8 | ||
629 | 84 | ... | ||
630 | 85 | ...\r | ||
631 | 86 | <BLANKLINE> | ||
632 | 87 | ...\r | ||
633 | 88 | <BLANKLINE> | ||
634 | 89 | Content-Type: text/plain;charset=utf8 | ||
635 | 90 | ... | ||
636 | 91 | ...\r | ||
637 | 92 | <BLANKLINE> | ||
638 | 93 | ...\r | ||
639 | 94 | <BLANKLINE> | ||
640 | 95 | Content-Type: text/plain;charset=utf8 | ||
641 | 96 | ... | ||
642 | 97 | ...\r | ||
643 | 98 | <BLANKLINE> | ||
644 | 99 | ... | ||
645 | 100 | <BLANKLINE> | ||
646 | 101 | ] | ||
647 | 102 | tags: -zope:layer:zope.testing.testrunner.layer.UnitTests | ||
648 | 103 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
649 | 104 | test: zope.testing.testrunner.layer.UnitTests:tearDown | ||
650 | 105 | tags: zope:layer | ||
651 | 106 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
652 | 107 | successful: zope.testing.testrunner.layer.UnitTests:tearDown | ||
653 | 0 | 108 | ||
654 | === added file 'src/zope/testing/testrunner/testrunner-subunit.txt' | |||
655 | --- src/zope/testing/testrunner/testrunner-subunit.txt 1970-01-01 00:00:00 +0000 | |||
656 | +++ src/zope/testing/testrunner/testrunner-subunit.txt 2010-03-11 22:36:30 +0000 | |||
657 | @@ -0,0 +1,678 @@ | |||
658 | 1 | Subunit Output | ||
659 | 2 | ============== | ||
660 | 3 | |||
661 | 4 | Subunit is a streaming protocol for interchanging test results. More | ||
662 | 5 | information can be found at https://launchpad.net/subunit. | ||
663 | 6 | |||
664 | 7 | >>> import os.path, sys | ||
665 | 8 | >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex') | ||
666 | 9 | >>> defaults = [ | ||
667 | 10 | ... '--path', directory_with_tests, | ||
668 | 11 | ... '--tests-pattern', '^sampletestsf?$', | ||
669 | 12 | ... ] | ||
670 | 13 | |||
671 | 14 | >>> from zope.testing import testrunner | ||
672 | 15 | |||
673 | 16 | |||
674 | 17 | Basic output | ||
675 | 18 | ------------ | ||
676 | 19 | |||
677 | 20 | Subunit output is line-based, with a 'test:' line for the start of each test | ||
678 | 21 | and a 'successful:' line for each successful test. | ||
679 | 22 | |||
680 | 23 | Zope layer set up and tear down events are represented as tests tagged with | ||
681 | 24 | 'zope:layer'. This allows them to be distinguished from actual tests, provides | ||
682 | 25 | a place for the layer timing information in the subunit stream, and allows us | ||
683 | 26 | to include error information if necessary. | ||
684 | 27 | |||
685 | 28 | Once the layer is set up, all future tests are tagged with | ||
686 | 29 | 'zope:layer:LAYER_NAME'. | ||
687 | 30 | |||
688 | 31 | >>> sys.argv = 'test --layer 122 --subunit -t TestNotMuch'.split() | ||
689 | 32 | >>> testrunner.run_internal(defaults) | ||
690 | 33 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
691 | 34 | test: samplelayers.Layer1:setUp | ||
692 | 35 | tags: zope:layer | ||
693 | 36 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
694 | 37 | successful: samplelayers.Layer1:setUp | ||
695 | 38 | tags: zope:layer:samplelayers.Layer1 | ||
696 | 39 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
697 | 40 | test: samplelayers.Layer12:setUp | ||
698 | 41 | tags: zope:layer | ||
699 | 42 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
700 | 43 | successful: samplelayers.Layer12:setUp | ||
701 | 44 | tags: zope:layer:samplelayers.Layer12 | ||
702 | 45 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
703 | 46 | test: samplelayers.Layer122:setUp | ||
704 | 47 | tags: zope:layer | ||
705 | 48 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
706 | 49 | successful: samplelayers.Layer122:setUp | ||
707 | 50 | tags: zope:layer:samplelayers.Layer122 | ||
708 | 51 | test: sample1.sampletests.test122.TestNotMuch.test_1 | ||
709 | 52 | successful: sample1.sampletests.test122.TestNotMuch.test_1 | ||
710 | 53 | test: sample1.sampletests.test122.TestNotMuch.test_2 | ||
711 | 54 | successful: sample1.sampletests.test122.TestNotMuch.test_2 | ||
712 | 55 | test: sample1.sampletests.test122.TestNotMuch.test_3 | ||
713 | 56 | successful: sample1.sampletests.test122.TestNotMuch.test_3 | ||
714 | 57 | test: sampletests.test122.TestNotMuch.test_1 | ||
715 | 58 | successful: sampletests.test122.TestNotMuch.test_1 | ||
716 | 59 | test: sampletests.test122.TestNotMuch.test_2 | ||
717 | 60 | successful: sampletests.test122.TestNotMuch.test_2 | ||
718 | 61 | test: sampletests.test122.TestNotMuch.test_3 | ||
719 | 62 | successful: sampletests.test122.TestNotMuch.test_3 | ||
720 | 63 | tags: -zope:layer:samplelayers.Layer122 | ||
721 | 64 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
722 | 65 | test: samplelayers.Layer122:tearDown | ||
723 | 66 | tags: zope:layer | ||
724 | 67 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
725 | 68 | successful: samplelayers.Layer122:tearDown | ||
726 | 69 | tags: -zope:layer:samplelayers.Layer12 | ||
727 | 70 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
728 | 71 | test: samplelayers.Layer12:tearDown | ||
729 | 72 | tags: zope:layer | ||
730 | 73 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
731 | 74 | successful: samplelayers.Layer12:tearDown | ||
732 | 75 | tags: -zope:layer:samplelayers.Layer1 | ||
733 | 76 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
734 | 77 | test: samplelayers.Layer1:tearDown | ||
735 | 78 | tags: zope:layer | ||
736 | 79 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
737 | 80 | successful: samplelayers.Layer1:tearDown | ||
738 | 81 | False | ||
739 | 82 | |||
740 | 83 | |||
741 | 84 | Timing tests | ||
742 | 85 | ------------ | ||
743 | 86 | |||
744 | 87 | When verbosity is high enough, the subunit stream includes timing information | ||
745 | 88 | for the actual tests, as well as for the layers. | ||
746 | 89 | |||
747 | 90 | >>> sys.argv = 'test --layer 122 -vvv --subunit -t TestNotMuch'.split() | ||
748 | 91 | >>> testrunner.run_internal(defaults) | ||
749 | 92 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
750 | 93 | test: samplelayers.Layer1:setUp | ||
751 | 94 | tags: zope:layer | ||
752 | 95 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
753 | 96 | successful: samplelayers.Layer1:setUp | ||
754 | 97 | tags: zope:layer:samplelayers.Layer1 | ||
755 | 98 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
756 | 99 | test: samplelayers.Layer12:setUp | ||
757 | 100 | tags: zope:layer | ||
758 | 101 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
759 | 102 | successful: samplelayers.Layer12:setUp | ||
760 | 103 | tags: zope:layer:samplelayers.Layer12 | ||
761 | 104 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
762 | 105 | test: samplelayers.Layer122:setUp | ||
763 | 106 | tags: zope:layer | ||
764 | 107 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
765 | 108 | successful: samplelayers.Layer122:setUp | ||
766 | 109 | tags: zope:layer:samplelayers.Layer122 | ||
767 | 110 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
768 | 111 | test: sample1.sampletests.test122.TestNotMuch.test_1 | ||
769 | 112 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
770 | 113 | successful: sample1.sampletests.test122.TestNotMuch.test_1 | ||
771 | 114 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
772 | 115 | test: sample1.sampletests.test122.TestNotMuch.test_2 | ||
773 | 116 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
774 | 117 | successful: sample1.sampletests.test122.TestNotMuch.test_2 | ||
775 | 118 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
776 | 119 | test: sample1.sampletests.test122.TestNotMuch.test_3 | ||
777 | 120 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
778 | 121 | successful: sample1.sampletests.test122.TestNotMuch.test_3 | ||
779 | 122 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
780 | 123 | test: sampletests.test122.TestNotMuch.test_1 | ||
781 | 124 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
782 | 125 | successful: sampletests.test122.TestNotMuch.test_1 | ||
783 | 126 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
784 | 127 | test: sampletests.test122.TestNotMuch.test_2 | ||
785 | 128 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
786 | 129 | successful: sampletests.test122.TestNotMuch.test_2 | ||
787 | 130 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
788 | 131 | test: sampletests.test122.TestNotMuch.test_3 | ||
789 | 132 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
790 | 133 | successful: sampletests.test122.TestNotMuch.test_3 | ||
791 | 134 | tags: -zope:layer:samplelayers.Layer122 | ||
792 | 135 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
793 | 136 | test: samplelayers.Layer122:tearDown | ||
794 | 137 | tags: zope:layer | ||
795 | 138 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
796 | 139 | successful: samplelayers.Layer122:tearDown | ||
797 | 140 | tags: -zope:layer:samplelayers.Layer12 | ||
798 | 141 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
799 | 142 | test: samplelayers.Layer12:tearDown | ||
800 | 143 | tags: zope:layer | ||
801 | 144 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
802 | 145 | successful: samplelayers.Layer12:tearDown | ||
803 | 146 | tags: -zope:layer:samplelayers.Layer1 | ||
804 | 147 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
805 | 148 | test: samplelayers.Layer1:tearDown | ||
806 | 149 | tags: zope:layer | ||
807 | 150 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
808 | 151 | successful: samplelayers.Layer1:tearDown | ||
809 | 152 | False | ||
810 | 153 | |||
811 | 154 | |||
812 | 155 | Listing tests | ||
813 | 156 | ------------- | ||
814 | 157 | |||
815 | 158 | A subunit stream is a stream of test results, more or less, so the most | ||
816 | 159 | natural way of listing tests in subunit is to simply emit successful test | ||
817 | 160 | results without actually running the tests. | ||
818 | 161 | |||
819 | 162 | Note that in this stream, we don't emit fake tests for layer set up and | ||
820 | 163 | tear down, because no set up or tear down actually happens. | ||
821 | 164 | |||
822 | 165 | We also don't include the dependent layers in the stream (in this case Layer1 | ||
823 | 166 | and Layer12), since they are not provided to the reporter. | ||
824 | 167 | |||
825 | 168 | >>> sys.argv = ( | ||
826 | 169 | ... 'test --layer 122 --list-tests --subunit -t TestNotMuch').split() | ||
827 | 170 | >>> testrunner.run_internal(defaults) | ||
828 | 171 | tags: zope:layer:samplelayers.Layer122 | ||
829 | 172 | test: sample1.sampletests.test122.TestNotMuch.test_1 | ||
830 | 173 | successful: sample1.sampletests.test122.TestNotMuch.test_1 | ||
831 | 174 | test: sample1.sampletests.test122.TestNotMuch.test_2 | ||
832 | 175 | successful: sample1.sampletests.test122.TestNotMuch.test_2 | ||
833 | 176 | test: sample1.sampletests.test122.TestNotMuch.test_3 | ||
834 | 177 | successful: sample1.sampletests.test122.TestNotMuch.test_3 | ||
835 | 178 | test: sampletests.test122.TestNotMuch.test_1 | ||
836 | 179 | successful: sampletests.test122.TestNotMuch.test_1 | ||
837 | 180 | test: sampletests.test122.TestNotMuch.test_2 | ||
838 | 181 | successful: sampletests.test122.TestNotMuch.test_2 | ||
839 | 182 | test: sampletests.test122.TestNotMuch.test_3 | ||
840 | 183 | successful: sampletests.test122.TestNotMuch.test_3 | ||
841 | 184 | tags: -zope:layer:samplelayers.Layer122 | ||
842 | 185 | False | ||
843 | 186 | |||
844 | 187 | |||
845 | 188 | Profiling tests | ||
846 | 189 | --------------- | ||
847 | 190 | |||
848 | 191 | Test suites often cover a lot of code, and the performance of test suites | ||
849 | 192 | themselves is often a critical part of the development process. Thus, it's | ||
850 | 193 | good to be able to profile a test run. | ||
851 | 194 | |||
852 | 195 | >>> sys.argv = ( | ||
853 | 196 | ... 'test --layer 122 --profile=cProfile --subunit ' | ||
854 | 197 | ... '-t TestNotMuch').split() | ||
855 | 198 | >>> testrunner.run_internal(defaults) | ||
856 | 199 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
857 | 200 | test: samplelayers.Layer1:setUp | ||
858 | 201 | ... | ||
859 | 202 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
860 | 203 | successful: samplelayers.Layer1:tearDown | ||
861 | 204 | test: zope:profiler_stats | ||
862 | 205 | tags: zope:profiler_stats | ||
863 | 206 | successful: zope:profiler_stats [ multipart | ||
864 | 207 | Content-Type: application/x-binary-profile | ||
865 | 208 | profiler-stats | ||
866 | 209 | ...\r | ||
867 | 210 | <BLANKLINE> | ||
868 | 211 | ... | ||
869 | 212 | <BLANKLINE> | ||
870 | 213 | ] | ||
871 | 214 | False | ||
872 | 215 | |||
873 | 216 | |||
874 | 217 | Errors | ||
875 | 218 | ------ | ||
876 | 219 | |||
877 | 220 | Errors are recorded in the subunit stream as MIME-encoded chunks of text. | ||
878 | 221 | |||
879 | 222 | >>> sys.argv = [ | ||
880 | 223 | ... 'test', '--subunit' , '--tests-pattern', '^sampletests_e$', | ||
881 | 224 | ... ] | ||
882 | 225 | >>> testrunner.run_internal(defaults) | ||
883 | 226 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
884 | 227 | test: zope.testing.testrunner.layer.UnitTests:setUp | ||
885 | 228 | tags: zope:layer | ||
886 | 229 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
887 | 230 | successful: zope.testing.testrunner.layer.UnitTests:setUp | ||
888 | 231 | tags: zope:layer:zope.testing.testrunner.layer.UnitTests | ||
889 | 232 | test: sample2.sampletests_e.eek | ||
890 | 233 | failure: sample2.sampletests_e.eek [ multipart | ||
891 | 234 | Content-Type: text/x-traceback;charset=utf8,language=python | ||
892 | 235 | traceback | ||
893 | 236 | 4B6\r | ||
894 | 237 | <BLANKLINE> | ||
895 | 238 | Failed doctest test for sample2.sampletests_e.eek | ||
896 | 239 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/sampletests_e.py", line 29, in eek | ||
897 | 240 | <BLANKLINE> | ||
898 | 241 | ---------------------------------------------------------------------- | ||
899 | 242 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/sampletests_e.py", line 31, in sample2.sampletests_e.eek | ||
900 | 243 | Failed example: | ||
901 | 244 | f() | ||
902 | 245 | Exception raised: | ||
903 | 246 | Traceback (most recent call last): | ||
904 | 247 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/doctest/__init__.py", line 1355, in __run | ||
905 | 248 | compileflags, 1) in test.globs | ||
906 | 249 | File "<doctest sample2.sampletests_e.eek[line 2, example 0]>", line 1, in <module> | ||
907 | 250 | f() | ||
908 | 251 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/sampletests_e.py", line 19, in f | ||
909 | 252 | g() | ||
910 | 253 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/sampletests_e.py", line 25, in g | ||
911 | 254 | x = y + 1 | ||
912 | 255 | - __traceback_info__: I don't know what Y should be. | ||
913 | 256 | NameError: global name 'y' is not defined | ||
914 | 257 | 0\r | ||
915 | 258 | <BLANKLINE> | ||
916 | 259 | ] | ||
917 | 260 | test: sample2.sampletests_e.Test.test1 | ||
918 | 261 | successful: sample2.sampletests_e.Test.test1 | ||
919 | 262 | test: sample2.sampletests_e.Test.test2 | ||
920 | 263 | successful: sample2.sampletests_e.Test.test2 | ||
921 | 264 | test: sample2.sampletests_e.Test.test3 | ||
922 | 265 | error: sample2.sampletests_e.Test.test3 [ multipart | ||
923 | 266 | Content-Type: text/x-traceback;charset=utf8,language=python | ||
924 | 267 | traceback | ||
925 | 268 | 29F\r | ||
926 | 269 | <BLANKLINE> | ||
927 | 270 | Traceback (most recent call last): | ||
928 | 271 | File "/usr/lib/python2.6/unittest.py", line 279, in run | ||
929 | 272 | testMethod() | ||
930 | 273 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/sampletests_e.py", line 44, in test3 | ||
931 | 274 | f() | ||
932 | 275 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/sampletests_e.py", line 19, in f | ||
933 | 276 | g() | ||
934 | 277 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/sampletests_e.py", line 25, in g | ||
935 | 278 | x = y + 1 | ||
936 | 279 | - __traceback_info__: I don't know what Y should be. | ||
937 | 280 | NameError: global name 'y' is not defined | ||
938 | 281 | 0\r | ||
939 | 282 | <BLANKLINE> | ||
940 | 283 | ] | ||
941 | 284 | test: sample2.sampletests_e.Test.test4 | ||
942 | 285 | successful: sample2.sampletests_e.Test.test4 | ||
943 | 286 | test: sample2.sampletests_e.Test.test5 | ||
944 | 287 | successful: sample2.sampletests_e.Test.test5 | ||
945 | 288 | test: e_txt | ||
946 | 289 | failure: e_txt [ | ||
947 | 290 | multipart | ||
948 | 291 | Content-Type: text/x-traceback;charset=utf8,language=python | ||
949 | 292 | traceback | ||
950 | 293 | 329\r | ||
951 | 294 | <BLANKLINE> | ||
952 | 295 | Failed doctest test for e.txt | ||
953 | 296 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/e.txt", line 0 | ||
954 | 297 | <BLANKLINE> | ||
955 | 298 | ---------------------------------------------------------------------- | ||
956 | 299 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample2/e.txt", line 4, in e.txt | ||
957 | 300 | Failed example: | ||
958 | 301 | f() | ||
959 | 302 | Exception raised: | ||
960 | 303 | Traceback (most recent call last): | ||
961 | 304 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/doctest/__init__.py", line 1355, in __run | ||
962 | 305 | compileflags, 1) in test.globs | ||
963 | 306 | File "<doctest e.txt[line 4, example 1]>", line 1, in <module> | ||
964 | 307 | f() | ||
965 | 308 | File "<doctest e.txt[line 1, example 0]>", line 2, in f | ||
966 | 309 | return x | ||
967 | 310 | NameError: global name 'x' is not defined | ||
968 | 311 | 0\r | ||
969 | 312 | <BLANKLINE> | ||
970 | 313 | ] | ||
971 | 314 | tags: -zope:layer:zope.testing.testrunner.layer.UnitTests | ||
972 | 315 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
973 | 316 | test: zope.testing.testrunner.layer.UnitTests:tearDown | ||
974 | 317 | tags: zope:layer | ||
975 | 318 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
976 | 319 | successful: zope.testing.testrunner.layer.UnitTests:tearDown | ||
977 | 320 | True | ||
978 | 321 | |||
979 | 322 | |||
980 | 323 | Layers that can't be torn down | ||
981 | 324 | ------------------------------ | ||
982 | 325 | |||
983 | 326 | A layer can have a tearDown method that raises NotImplementedError. If this is | ||
984 | 327 | the case and there are no remaining tests to run, the subunit stream will say | ||
985 | 328 | that the layer skipped its tearDown. | ||
986 | 329 | |||
987 | 330 | >>> import os.path, sys | ||
988 | 331 | >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex') | ||
989 | 332 | >>> from zope.testing import testrunner | ||
990 | 333 | >>> defaults = [ | ||
991 | 334 | ... '--subunit', | ||
992 | 335 | ... '--path', directory_with_tests, | ||
993 | 336 | ... '--tests-pattern', '^sampletestsf?$', | ||
994 | 337 | ... ] | ||
995 | 338 | |||
996 | 339 | >>> sys.argv = 'test -ssample2 --tests-pattern sampletests_ntd$'.split() | ||
997 | 340 | >>> testrunner.run_internal(defaults) | ||
998 | 341 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
999 | 342 | test: sample2.sampletests_ntd.Layer:setUp | ||
1000 | 343 | tags: zope:layer | ||
1001 | 344 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1002 | 345 | successful: sample2.sampletests_ntd.Layer:setUp | ||
1003 | 346 | tags: zope:layer:sample2.sampletests_ntd.Layer | ||
1004 | 347 | test: sample2.sampletests_ntd.TestSomething.test_something | ||
1005 | 348 | successful: sample2.sampletests_ntd.TestSomething.test_something | ||
1006 | 349 | tags: -zope:layer:sample2.sampletests_ntd.Layer | ||
1007 | 350 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1008 | 351 | test: sample2.sampletests_ntd.Layer:tearDown | ||
1009 | 352 | tags: zope:layer | ||
1010 | 353 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1011 | 354 | skip: sample2.sampletests_ntd.Layer:tearDown [ | ||
1012 | 355 | tearDown not supported | ||
1013 | 356 | ] | ||
1014 | 357 | False | ||
1015 | 358 | |||
1016 | 359 | |||
1017 | 360 | Module import errors | ||
1018 | 361 | -------------------- | ||
1019 | 362 | |||
1020 | 363 | We report module import errors too. They get encoded as tests with errors. The | ||
1021 | 364 | name of the test is the module that could not be imported, and the test's | ||
1022 | 365 | result is an error containing the traceback. These "tests" are tagged with | ||
1023 | 366 | zope:import_error. | ||
1024 | 367 | |||
1025 | 368 | Let's create a module with some bad syntax: | ||
1026 | 369 | |||
1027 | 370 | >>> badsyntax_path = os.path.join(directory_with_tests, | ||
1028 | 371 | ... "sample2", "sampletests_i.py") | ||
1029 | 372 | >>> f = open(badsyntax_path, "w") | ||
1030 | 373 | >>> print >> f, "importx unittest" # syntax error | ||
1031 | 374 | >>> f.close() | ||
1032 | 375 | |||
1033 | 376 | And then run the tests: | ||
1034 | 377 | |||
1035 | 378 | >>> sys.argv = ( | ||
1036 | 379 | ... 'test --subunit --tests-pattern ^sampletests(f|_i)?$ --layer 1 ' | ||
1037 | 380 | ... ).split() | ||
1038 | 381 | >>> testrunner.run_internal(defaults) | ||
1039 | 382 | test: sample2.sampletests_i | ||
1040 | 383 | tags: zope:import_error | ||
1041 | 384 | error: sample2.sampletests_i [ | ||
1042 | 385 | Traceback (most recent call last): | ||
1043 | 386 | testrunner-ex/sample2/sampletests_i.py", line 1 | ||
1044 | 387 | importx unittest | ||
1045 | 388 | ^ | ||
1046 | 389 | SyntaxError: invalid syntax | ||
1047 | 390 | ] | ||
1048 | 391 | test: sample2.sample21.sampletests_i | ||
1049 | 392 | tags: zope:import_error | ||
1050 | 393 | error: sample2.sample21.sampletests_i [ | ||
1051 | 394 | Traceback (most recent call last): | ||
1052 | 395 | testrunner-ex/sample2/sample21/sampletests_i.py", Line NNN, in ? | ||
1053 | 396 | import zope.testing.huh | ||
1054 | 397 | ImportError: No module named huh | ||
1055 | 398 | ] | ||
1056 | 399 | test: sample2.sample23.sampletests_i | ||
1057 | 400 | tags: zope:import_error | ||
1058 | 401 | error: sample2.sample23.sampletests_i [ | ||
1059 | 402 | Traceback (most recent call last): | ||
1060 | 403 | testrunner-ex/sample2/sample23/sampletests_i.py", Line NNN, in ? | ||
1061 | 404 | class Test(unittest.TestCase): | ||
1062 | 405 | testrunner-ex/sample2/sample23/sampletests_i.py", Line NNN, in Test | ||
1063 | 406 | raise TypeError('eek') | ||
1064 | 407 | TypeError: eek | ||
1065 | 408 | ] | ||
1066 | 409 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1067 | 410 | test: samplelayers.Layer1:setUp | ||
1068 | 411 | tags: zope:layer | ||
1069 | 412 | ... | ||
1070 | 413 | True | ||
1071 | 414 | |||
1072 | 415 | Of course, because we care deeply about test isolation, we're going to have to | ||
1073 | 416 | delete the module with bad syntax now, lest it contaminate other tests or even | ||
1074 | 417 | future test runs. | ||
1075 | 418 | |||
1076 | 419 | >>> os.unlink(badsyntax_path) | ||
1077 | 420 | |||
1078 | 421 | |||
1079 | 422 | Tests in subprocesses | ||
1080 | 423 | --------------------- | ||
1081 | 424 | |||
1082 | 425 | If the tearDown method raises NotImplementedError and there are remaining | ||
1083 | 426 | layers to run, the test runner will restart itself as a new process, | ||
1084 | 427 | resuming tests where it left off: | ||
1085 | 428 | |||
1086 | 429 | >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$'] | ||
1087 | 430 | >>> testrunner.run_internal(defaults) | ||
1088 | 431 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1089 | 432 | test: sample1.sampletests_ntd.Layer:setUp | ||
1090 | 433 | tags: zope:layer | ||
1091 | 434 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1092 | 435 | successful: sample1.sampletests_ntd.Layer:setUp | ||
1093 | 436 | tags: zope:layer:sample1.sampletests_ntd.Layer | ||
1094 | 437 | test: sample1.sampletests_ntd.TestSomething.test_something | ||
1095 | 438 | successful: sample1.sampletests_ntd.TestSomething.test_something | ||
1096 | 439 | tags: -zope:layer:sample1.sampletests_ntd.Layer | ||
1097 | 440 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1098 | 441 | test: sample1.sampletests_ntd.Layer:tearDown | ||
1099 | 442 | tags: zope:layer | ||
1100 | 443 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1101 | 444 | skip: sample1.sampletests_ntd.Layer:tearDown [ | ||
1102 | 445 | tearDown not supported | ||
1103 | 446 | ] | ||
1104 | 447 | test: Running in a subprocess. | ||
1105 | 448 | tags: zope:info_suboptimal | ||
1106 | 449 | successful: Running in a subprocess. | ||
1107 | 450 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1108 | 451 | test: sample2.sampletests_ntd.Layer:setUp | ||
1109 | 452 | tags: zope:layer | ||
1110 | 453 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1111 | 454 | successful: sample2.sampletests_ntd.Layer:setUp | ||
1112 | 455 | tags: zope:layer:sample2.sampletests_ntd.Layer | ||
1113 | 456 | test: sample2.sampletests_ntd.TestSomething.test_something | ||
1114 | 457 | successful: sample2.sampletests_ntd.TestSomething.test_something | ||
1115 | 458 | tags: -zope:layer:sample2.sampletests_ntd.Layer | ||
1116 | 459 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1117 | 460 | test: sample2.sampletests_ntd.Layer:tearDown | ||
1118 | 461 | tags: zope:layer | ||
1119 | 462 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1120 | 463 | skip: sample2.sampletests_ntd.Layer:tearDown [ | ||
1121 | 464 | tearDown not supported | ||
1122 | 465 | ] | ||
1123 | 466 | test: Running in a subprocess. | ||
1124 | 467 | tags: zope:info_suboptimal | ||
1125 | 468 | successful: Running in a subprocess. | ||
1126 | 469 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1127 | 470 | test: sample3.sampletests_ntd.Layer:setUp | ||
1128 | 471 | tags: zope:layer | ||
1129 | 472 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1130 | 473 | successful: sample3.sampletests_ntd.Layer:setUp | ||
1131 | 474 | tags: zope:layer:sample3.sampletests_ntd.Layer | ||
1132 | 475 | test: sample3.sampletests_ntd.TestSomething.test_error1 | ||
1133 | 476 | error: sample3.sampletests_ntd.TestSomething.test_error1 [ multipart | ||
1134 | 477 | Content-Type: text/x-traceback;charset=utf8,language=python | ||
1135 | 478 | traceback | ||
1136 | 479 | 14F\r | ||
1137 | 480 | <BLANKLINE> | ||
1138 | 481 | Traceback (most recent call last): | ||
1139 | 482 | testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error1 | ||
1140 | 483 | raise TypeError("Can we see errors") | ||
1141 | 484 | TypeError: Can we see errors | ||
1142 | 485 | 0\r | ||
1143 | 486 | <BLANKLINE> | ||
1144 | 487 | ] | ||
1145 | 488 | test: sample3.sampletests_ntd.TestSomething.test_error2 | ||
1146 | 489 | error: sample3.sampletests_ntd.TestSomething.test_error2 [ multipart | ||
1147 | 490 | Content-Type: text/x-traceback;charset=utf8,language=python | ||
1148 | 491 | traceback | ||
1149 | 492 | 13F\r | ||
1150 | 493 | <BLANKLINE> | ||
1151 | 494 | Traceback (most recent call last): | ||
1152 | 495 | testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error2 | ||
1153 | 496 | raise TypeError("I hope so") | ||
1154 | 497 | TypeError: I hope so | ||
1155 | 498 | 0\r | ||
1156 | 499 | <BLANKLINE> | ||
1157 | 500 | ] | ||
1158 | 501 | test: sample3.sampletests_ntd.TestSomething.test_fail1 | ||
1159 | 502 | failure: sample3.sampletests_ntd.TestSomething.test_fail1 [ multipart | ||
1160 | 503 | Content-Type: text/x-traceback;charset=utf8,language=python | ||
1161 | 504 | traceback | ||
1162 | 505 | 1AA\r | ||
1163 | 506 | <BLANKLINE> | ||
1164 | 507 | Traceback (most recent call last): | ||
1165 | 508 | testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail1 | ||
1166 | 509 | self.assertEqual(1, 2) | ||
1167 | 510 | AssertionError: 1 != 2 | ||
1168 | 511 | 0\r | ||
1169 | 512 | <BLANKLINE> | ||
1170 | 513 | ] | ||
1171 | 514 | test: sample3.sampletests_ntd.TestSomething.test_fail2 | ||
1172 | 515 | failure: sample3.sampletests_ntd.TestSomething.test_fail2 [ multipart | ||
1173 | 516 | Content-Type: text/x-traceback;charset=utf8,language=python | ||
1174 | 517 | traceback | ||
1175 | 518 | 1AA\r | ||
1176 | 519 | <BLANKLINE> | ||
1177 | 520 | Traceback (most recent call last): | ||
1178 | 521 | testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail2 | ||
1179 | 522 | self.assertEqual(1, 3) | ||
1180 | 523 | AssertionError: 1 != 3 | ||
1181 | 524 | 0\r | ||
1182 | 525 | <BLANKLINE> | ||
1183 | 526 | ] | ||
1184 | 527 | test: sample3.sampletests_ntd.TestSomething.test_something | ||
1185 | 528 | successful: sample3.sampletests_ntd.TestSomething.test_something | ||
1186 | 529 | test: sample3.sampletests_ntd.TestSomething.test_something_else | ||
1187 | 530 | successful: sample3.sampletests_ntd.TestSomething.test_something_else | ||
1188 | 531 | tags: -zope:layer:sample3.sampletests_ntd.Layer | ||
1189 | 532 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1190 | 533 | test: sample3.sampletests_ntd.Layer:tearDown | ||
1191 | 534 | tags: zope:layer | ||
1192 | 535 | time: YYYY-MM-DD HH:MM:SS.mmmmmmZ | ||
1193 | 536 | skip: sample3.sampletests_ntd.Layer:tearDown [ | ||
1194 | 537 | tearDown not supported | ||
1195 | 538 | ] | ||
1196 | 539 | True | ||
1197 | 540 | |||
1198 | 541 | Note that debugging doesn't work when running tests in a subprocess: | ||
1199 | 542 | |||
1200 | 543 | >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$', | ||
1201 | 544 | ... '-D', ] | ||
1202 | 545 | >>> testrunner.run_internal(defaults) | ||
1203 | 546 | time: 2010-02-10 22:41:25.279692Z | ||
1204 | 547 | test: sample1.sampletests_ntd.Layer:setUp | ||
1205 | 548 | tags: zope:layer | ||
1206 | 549 | time: 2010-02-10 22:41:25.279695Z | ||
1207 | 550 | successful: sample1.sampletests_ntd.Layer:setUp | ||
1208 | 551 | tags: zope:layer:sample1.sampletests_ntd.Layer | ||
1209 | 552 | test: sample1.sampletests_ntd.TestSomething.test_something | ||
1210 | 553 | successful: sample1.sampletests_ntd.TestSomething.test_something | ||
1211 | 554 | tags: -zope:layer:sample1.sampletests_ntd.Layer | ||
1212 | 555 | time: 2010-02-10 22:41:25.310078Z | ||
1213 | 556 | test: sample1.sampletests_ntd.Layer:tearDown | ||
1214 | 557 | tags: zope:layer | ||
1215 | 558 | time: 2010-02-10 22:41:25.310171Z | ||
1216 | 559 | skip: sample1.sampletests_ntd.Layer:tearDown [ | ||
1217 | 560 | tearDown not supported | ||
1218 | 561 | ] | ||
1219 | 562 | test: Running in a subprocess. | ||
1220 | 563 | tags: zope:info_suboptimal | ||
1221 | 564 | successful: Running in a subprocess. | ||
1222 | 565 | time: 2010-02-10 22:41:25.753076Z | ||
1223 | 566 | test: sample2.sampletests_ntd.Layer:setUp | ||
1224 | 567 | tags: zope:layer | ||
1225 | 568 | time: 2010-02-10 22:41:25.753079Z | ||
1226 | 569 | successful: sample2.sampletests_ntd.Layer:setUp | ||
1227 | 570 | tags: zope:layer:sample2.sampletests_ntd.Layer | ||
1228 | 571 | test: sample2.sampletests_ntd.TestSomething.test_something | ||
1229 | 572 | successful: sample2.sampletests_ntd.TestSomething.test_something | ||
1230 | 573 | tags: -zope:layer:sample2.sampletests_ntd.Layer | ||
1231 | 574 | time: 2010-02-10 22:41:25.779256Z | ||
1232 | 575 | test: sample2.sampletests_ntd.Layer:tearDown | ||
1233 | 576 | tags: zope:layer | ||
1234 | 577 | time: 2010-02-10 22:41:25.779326Z | ||
1235 | 578 | skip: sample2.sampletests_ntd.Layer:tearDown [ | ||
1236 | 579 | tearDown not supported | ||
1237 | 580 | ] | ||
1238 | 581 | test: Running in a subprocess. | ||
1239 | 582 | tags: zope:info_suboptimal | ||
1240 | 583 | successful: Running in a subprocess. | ||
1241 | 584 | time: 2010-02-10 22:41:26.310296Z | ||
1242 | 585 | test: sample3.sampletests_ntd.Layer:setUp | ||
1243 | 586 | tags: zope:layer | ||
1244 | 587 | time: 2010-02-10 22:41:26.310299Z | ||
1245 | 588 | successful: sample3.sampletests_ntd.Layer:setUp | ||
1246 | 589 | tags: zope:layer:sample3.sampletests_ntd.Layer | ||
1247 | 590 | test: sample3.sampletests_ntd.TestSomething.test_error1 | ||
1248 | 591 | error: sample3.sampletests_ntd.TestSomething.test_error1 [ multipart | ||
1249 | 592 | Content-Type: text/x-traceback;charset=utf8,language=python | ||
1250 | 593 | traceback | ||
1251 | 594 | 16A\r | ||
1252 | 595 | <BLANKLINE> | ||
1253 | 596 | Traceback (most recent call last): | ||
1254 | 597 | File "/usr/lib/python2.6/unittest.py", line 305, in debug | ||
1255 | 598 | getattr(self, self._testMethodName)() | ||
1256 | 599 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample3/sampletests_ntd.py", line 42, in test_error1 | ||
1257 | 600 | raise TypeError("Can we see errors") | ||
1258 | 601 | TypeError: Can we see errors | ||
1259 | 602 | 0\r | ||
1260 | 603 | <BLANKLINE> | ||
1261 | 604 | ] | ||
1262 | 605 | test: Can't post-mortem debug when running a layer as a subprocess! | ||
1263 | 606 | tags: zope:error_with_banner | ||
1264 | 607 | successful: Can't post-mortem debug when running a layer as a subprocess! | ||
1265 | 608 | test: sample3.sampletests_ntd.TestSomething.test_error2 | ||
1266 | 609 | error: sample3.sampletests_ntd.TestSomething.test_error2 [ multipart | ||
1267 | 610 | Content-Type: text/x-traceback;charset=utf8,language=python | ||
1268 | 611 | traceback | ||
1269 | 612 | 15A\r | ||
1270 | 613 | <BLANKLINE> | ||
1271 | 614 | Traceback (most recent call last): | ||
1272 | 615 | File "/usr/lib/python2.6/unittest.py", line 305, in debug | ||
1273 | 616 | getattr(self, self._testMethodName)() | ||
1274 | 617 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample3/sampletests_ntd.py", line 45, in test_error2 | ||
1275 | 618 | raise TypeError("I hope so") | ||
1276 | 619 | TypeError: I hope so | ||
1277 | 620 | 0\r | ||
1278 | 621 | <BLANKLINE> | ||
1279 | 622 | ] | ||
1280 | 623 | test: Can't post-mortem debug when running a layer as a subprocess! | ||
1281 | 624 | tags: zope:error_with_banner | ||
1282 | 625 | successful: Can't post-mortem debug when running a layer as a subprocess! | ||
1283 | 626 | test: sample3.sampletests_ntd.TestSomething.test_fail1 | ||
1284 | 627 | error: sample3.sampletests_ntd.TestSomething.test_fail1 [ multipart | ||
1285 | 628 | Content-Type: text/x-traceback;charset=utf8,language=python | ||
1286 | 629 | traceback | ||
1287 | 630 | 1C5\r | ||
1288 | 631 | <BLANKLINE> | ||
1289 | 632 | Traceback (most recent call last): | ||
1290 | 633 | File "/usr/lib/python2.6/unittest.py", line 305, in debug | ||
1291 | 634 | getattr(self, self._testMethodName)() | ||
1292 | 635 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample3/sampletests_ntd.py", line 48, in test_fail1 | ||
1293 | 636 | self.assertEqual(1, 2) | ||
1294 | 637 | File "/usr/lib/python2.6/unittest.py", line 350, in failUnlessEqual | ||
1295 | 638 | (msg or '%r != %r' % (first, second)) | ||
1296 | 639 | AssertionError: 1 != 2 | ||
1297 | 640 | 0\r | ||
1298 | 641 | <BLANKLINE> | ||
1299 | 642 | ] | ||
1300 | 643 | test: Can't post-mortem debug when running a layer as a subprocess! | ||
1301 | 644 | tags: zope:error_with_banner | ||
1302 | 645 | successful: Can't post-mortem debug when running a layer as a subprocess! | ||
1303 | 646 | test: sample3.sampletests_ntd.TestSomething.test_fail2 | ||
1304 | 647 | error: sample3.sampletests_ntd.TestSomething.test_fail2 [ multipart | ||
1305 | 648 | Content-Type: text/x-traceback;charset=utf8,language=python | ||
1306 | 649 | traceback | ||
1307 | 650 | 1C5\r | ||
1308 | 651 | <BLANKLINE> | ||
1309 | 652 | Traceback (most recent call last): | ||
1310 | 653 | File "/usr/lib/python2.6/unittest.py", line 305, in debug | ||
1311 | 654 | getattr(self, self._testMethodName)() | ||
1312 | 655 | File "/home/jml/src/zope.testing/subunit-output-formatter/src/zope/testing/testrunner/testrunner-ex/sample3/sampletests_ntd.py", line 51, in test_fail2 | ||
1313 | 656 | self.assertEqual(1, 3) | ||
1314 | 657 | File "/usr/lib/python2.6/unittest.py", line 350, in failUnlessEqual | ||
1315 | 658 | (msg or '%r != %r' % (first, second)) | ||
1316 | 659 | AssertionError: 1 != 3 | ||
1317 | 660 | 0\r | ||
1318 | 661 | <BLANKLINE> | ||
1319 | 662 | ] | ||
1320 | 663 | test: Can't post-mortem debug when running a layer as a subprocess! | ||
1321 | 664 | tags: zope:error_with_banner | ||
1322 | 665 | successful: Can't post-mortem debug when running a layer as a subprocess! | ||
1323 | 666 | test: sample3.sampletests_ntd.TestSomething.test_something | ||
1324 | 667 | successful: sample3.sampletests_ntd.TestSomething.test_something | ||
1325 | 668 | test: sample3.sampletests_ntd.TestSomething.test_something_else | ||
1326 | 669 | successful: sample3.sampletests_ntd.TestSomething.test_something_else | ||
1327 | 670 | tags: -zope:layer:sample3.sampletests_ntd.Layer | ||
1328 | 671 | time: 2010-02-10 22:41:26.340878Z | ||
1329 | 672 | test: sample3.sampletests_ntd.Layer:tearDown | ||
1330 | 673 | tags: zope:layer | ||
1331 | 674 | time: 2010-02-10 22:41:26.340945Z | ||
1332 | 675 | skip: sample3.sampletests_ntd.Layer:tearDown [ | ||
1333 | 676 | tearDown not supported | ||
1334 | 677 | ] | ||
1335 | 678 | True | ||
1336 | 0 | 679 | ||
1337 | === modified file 'src/zope/testing/testrunner/tests.py' | |||
1338 | --- src/zope/testing/testrunner/tests.py 2009-12-18 08:23:21 +0000 | |||
1339 | +++ src/zope/testing/testrunner/tests.py 2010-03-11 22:36:30 +0000 | |||
1340 | @@ -103,6 +103,8 @@ | |||
1341 | 103 | (re.compile(r'\d+[.]\d\d\d seconds'), 'N.NNN seconds'), | 103 | (re.compile(r'\d+[.]\d\d\d seconds'), 'N.NNN seconds'), |
1342 | 104 | (re.compile(r'\d+[.]\d\d\d s'), 'N.NNN s'), | 104 | (re.compile(r'\d+[.]\d\d\d s'), 'N.NNN s'), |
1343 | 105 | (re.compile(r'\d+[.]\d\d\d{'), 'N.NNN{'), | 105 | (re.compile(r'\d+[.]\d\d\d{'), 'N.NNN{'), |
1344 | 106 | (re.compile(r'\d{4}-\d\d-\d\d \d\d:\d\d:\d\d\.\d+'), | ||
1345 | 107 | 'YYYY-MM-DD HH:MM:SS.mmmmmm'), | ||
1346 | 106 | (re.compile('( |")[^\n]+testrunner-ex'), r'\1testrunner-ex'), | 108 | (re.compile('( |")[^\n]+testrunner-ex'), r'\1testrunner-ex'), |
1347 | 107 | (re.compile('( |")[^\n]+testrunner.py'), r'\1testrunner.py'), | 109 | (re.compile('( |")[^\n]+testrunner.py'), r'\1testrunner.py'), |
1348 | 108 | (re.compile(r'> [^\n]*(doc|unit)test[.]py\(\d+\)'), | 110 | (re.compile(r'> [^\n]*(doc|unit)test[.]py\(\d+\)'), |
1349 | @@ -251,4 +253,29 @@ | |||
1350 | 251 | checker=checker, | 253 | checker=checker, |
1351 | 252 | ) | 254 | ) |
1352 | 253 | ) | 255 | ) |
1353 | 256 | |||
1354 | 257 | try: | ||
1355 | 258 | import subunit | ||
1356 | 259 | except ImportError: | ||
1357 | 260 | suites.append( | ||
1358 | 261 | doctest.DocFileSuite( | ||
1359 | 262 | 'testrunner-subunit-err.txt', | ||
1360 | 263 | setUp=setUp, tearDown=tearDown, | ||
1361 | 264 | optionflags=doctest.ELLIPSIS + doctest.NORMALIZE_WHITESPACE, | ||
1362 | 265 | checker=checker)) | ||
1363 | 266 | else: | ||
1364 | 267 | suites.append( | ||
1365 | 268 | doctest.DocFileSuite( | ||
1366 | 269 | 'testrunner-subunit.txt', | ||
1367 | 270 | setUp=setUp, tearDown=tearDown, | ||
1368 | 271 | optionflags=doctest.ELLIPSIS + doctest.NORMALIZE_WHITESPACE, | ||
1369 | 272 | checker=checker)) | ||
1370 | 273 | if hasattr(sys, 'gettotalrefcount'): | ||
1371 | 274 | suites.append( | ||
1372 | 275 | doctest.DocFileSuite( | ||
1373 | 276 | 'testrunner-subunit-leaks.txt', | ||
1374 | 277 | setUp=setUp, tearDown=tearDown, | ||
1375 | 278 | optionflags=doctest.ELLIPSIS + doctest.NORMALIZE_WHITESPACE, | ||
1376 | 279 | checker=checker)) | ||
1377 | 280 | |||
1378 | 254 | return unittest.TestSuite(suites) | 281 | return unittest.TestSuite(suites) |
Broad issues:
content and content_type are testtools modules; don't import from subunit, it only has them for compatibility glue.
Perhaps tag tests in layer foo with zope:layer:foo, not zope:testing:foo. In fact it looks like they are, and it's simply a docstring bug to claim otherwise.
As subunit has a progress abstraction, the 'cannot support progress' statement confused me. Perhaps say 'cannot support zope's concept of progress because xxx'.
I see you've worked around the bug in subunit where there isn't a tag method on the test result; perhaps you could submit a patch for that? The contract is (I think) clear, just 'not done'.
194 + Since subunit is a stream protocol format, it has no summary.
perhaps 'no need for a summary - when the stream is displayed, a summary can be created then'.
What is this?
+ def import_errors(self, import_errors):
221 + """Report test-module import errors (if any)."""
222 + # FIXME: Implement this.
... there is code here
235 +    def _exc_info_to_details(self, exc_info):
236 +        """Translate 'exc_info' into a details dictionary usable with subunit.
237 +        """
238 +        import subunit
239 +        content_type = subunit.content_type.ContentType(
240 +            'text', 'x-traceback', dict(language='python', charset='utf8'))
241 +        formatter = OutputFormatter(None)
242 +        traceback = formatter.format_traceback(exc_info)
243 +        return {
244 +            'traceback': subunit.content.Content(
245 +                content_type, lambda: [traceback.encode('utf8')])}
This might be better as:

    import testtools.content
    test = unittest.TestCase()
    content = TracebackContent(exc_info, test)
    return {'traceback': content}

unless the formatter.format_traceback(exc_info) is doing something nonobvious (and if it is, perhaps you should mention that). If it is doing something nonobvious, then I suggest subclassing testtools.content.Content similarly to testtools.content.TracebackContent.
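The shape of the 'details' contract being discussed can be sketched with only the stdlib (no testtools needed; `exc_info_to_details` here is an illustrative stand-in, not the patch's actual method): the value must be something that yields utf-8 encoded byte chunks of the formatted traceback.

```python
import sys
import traceback

def exc_info_to_details(exc_info):
    # Illustrative stand-in: format the traceback once, then expose it
    # as a callable returning an iterable of utf-8 byte chunks, which
    # is the shape the subunit/testtools content objects consume.
    text = ''.join(traceback.format_exception(*exc_info))
    return {'traceback': lambda: [text.encode('utf8')]}

try:
    raise TypeError("Can we see errors")
except TypeError:
    details = exc_info_to_details(sys.exc_info())

chunks = list(details['traceback']())
```

Deferring the encoding behind a callable matters because a result stream may serialize the details long after the exception context is gone.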
Also, you might want a global import rather than a scope local.
270 +        # XXX: Since the subunit stream is designed for machine reading, we
271 +        # should really dump the binary profiler stats here. Sadly, the
272 +        # "content" API doesn't support this yet. Instead, we dump the
273 +        # stringified version of the stats dict, which is functionally the
274 +        # same thing. -- jml, 2010-02-14.
275 +        plain_text = subunit.content_type.ContentType(
276 +            'text', 'plain', {'charset': 'utf8'})
277 +        details = {
278 +            'profiler-stats': subunit.content.Content(
279 +                plain_text, lambda: [unicode(stats.stats).encode('utf8')])}
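The stringified-stats fallback quoted above can be reproduced with the stdlib profiler alone (a sketch of the same idea, not the formatter's real code): profile something, stringify the raw stats dict, and encode it as the utf-8 byte chunk a text/plain content object would carry.

```python
import cProfile
import pstats

# Profile a trivial call so there is something in the stats table.
profiler = cProfile.Profile()
profiler.runcall(sum, range(1000))

stats = pstats.Stats(profiler)
# Stringify the raw stats dict and encode it as a single utf-8 chunk,
# the iterable-of-bytes shape a text/plain content object expects.
chunks = [str(stats.stats).encode('utf8')]
details = {'profiler-stats': chunks}
```

The result is human-legible in a pinch but, as the XXX comment says, not something a consumer can feed back into pstats.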
meta: where some dependency is insufficient, it might be nice to file a bug saying 'please provide X', and then reference the bug in this patch. That way, when your later self returns, they have something to prompt the memory.
That said, I'm not sure what subunit is missing here:
(see application/octet-stream in http://www.rfc-editor.org/rfc/rfc2046.txt for details)

    cprofile_type = testtools.content_type.ContentType(
        'application', 'octet-stream', {'type': 'cProfile'})
    content = testtools.content.Content(cprofile_type, lambda: [bpickle(stats)])
    return {'profiler-stats': content}
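`bpickle` is not defined anywhere in the patch; assuming the intent is "the binary form of the stats", a stdlib-only sketch would dump the marshal-format stats to disk and ship those bytes as the octet-stream payload.

```python
import cProfile
import os
import pstats
import tempfile

profiler = cProfile.Profile()
profiler.runcall(sum, range(1000))

# dump_stats writes the binary (marshal) representation that pstats
# can load back later, which is what an application/octet-stream
# content object would carry instead of a stringified dict.
fd, path = tempfile.mkstemp()
os.close(fd)
try:
    pstats.Stats(profiler).dump_stats(path)
    with open(path, 'rb') as stream:
        payload = stream.read()
finally:
    os.remove(path)

details = {'profiler-stats': [payload]}
```

Unlike the text/plain dump, this payload round-trips: `pstats.Stats(filename)` can reconstruct the stats on the consumer side.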
You can also make the content types attributes on self to avoid calculating them every time; they are 'Value Objects'.
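The value-object point can be sketched without testtools: because a content type is immutable once built, it can be constructed once and reused rather than rebuilt on every call (the class names here are illustrative, not zope.testing's real ones).

```python
class ContentType:
    # Immutable value object: instances with equal fields are
    # interchangeable, so one shared instance is as good as a fresh one.
    def __init__(self, primary, sub, parameters):
        self.primary = primary
        self.sub = sub
        self.parameters = dict(parameters)

class SubunitFormatter:
    # Built once at class definition time instead of on every call.
    TRACEBACK_CONTENT_TYPE = ContentType(
        'text', 'x-traceback', {'language': 'python', 'charset': 'utf8'})

    def traceback_details(self, traceback_text):
        # Reuse the shared value object; no per-call construction.
        content_type = self.TRACEBACK_CONTENT_TYPE
        return {'traceback': (content_type, [traceback_text.encode('utf8')])}
```

A class attribute is enough here; setting it in `__init__` would work too, but there is no per-instance state to capture.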