Merge lp:~lifeless/subunit/addSkip into lp:~subunit/subunit/trunk

Proposed by: Robert Collins
Status: Merged
Approved by: Jonathan Lange
Approved revision: 58
Merged at revision: not available
Proposed branch: lp:~lifeless/subunit/addSkip
Merge into: lp:~subunit/subunit/trunk
Diff against target: None lines
To merge this branch: bzr merge lp:~lifeless/subunit/addSkip
Related bugs: none
Reviewer: Jonathan Lange (status: Approve)
Review via email: mp+4033@code.launchpad.net
Revision history for this message
Robert Collins (lifeless) wrote:

lp:~lifeless/subunit/addSkip updated:

  58. By Robert Collins

     subunit-filter can now filter skips too.
Revision history for this message
Jonathan Lange (jml) wrote:

Fine to land, along with the tweaks suggested for lp:~lifeless/subunit/filter.

review: Approve
Preview Diff
=== modified file 'README'
--- README	2009-02-15 11:55:00 +0000
+++ README	2009-02-28 09:27:25 +0000
@@ -40,6 +40,8 @@
 stream on-the-fly. Currently subunit provides:
 * tap2subunit - convert perl's TestAnythingProtocol to subunit.
 * subunit2pyunit - convert a subunit stream to pyunit test results.
+* subunit-filter - filter out tests from a subunit stream.
+* subunit-ls - list the tests present in a subunit stream.
 * subunit-stats - generate a summary of a subunit stream.
 * subunit-tags - add or remove tags from a stream.
 
@@ -200,8 +202,10 @@
 Currently this is not exposed at the python API layer.
 
 The skip result is used to indicate a test that was found by the runner but not
-fully executed due to some policy or dependency issue. Currently this is
-represented in Python as a successful test.
+fully executed due to some policy or dependency issue. This is represented in
+python using the addSkip interface that testtools
+(https://edge.launchpad.net/testtools) defines. When communicating with a non
+skip aware test result, the test is reported as an error.
 The xfail result is used to indicate a test that was expected to fail failing
 in the expected manner. As this is a normal condition for such tests it is
 represented as a successful test in Python.
 
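The skip record the README describes is a "skip: <test id> [" line, the reason text, and a closing "]". A minimal sketch of that wire format; emit_skip is a hypothetical helper for illustration only, the real emitter being TestProtocolClient.addSkip in this branch:

```python
# Sketch of the subunit skip record; emit_skip is illustrative, not
# part of subunit's API.
def emit_skip(test_id, reason="No reason given"):
    # "skip: <test id> [", then the reason, then a closing "]".
    return "skip: %s [\n%s\n]\n" % (test_id, reason)

example = emit_skip("test_foo.test_bar", "missing libfoo")
```

A skip-aware consumer turns this back into an addSkip call; anything else sees it degraded to an error, as the README change notes.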
=== added file 'filters/subunit-filter'
--- filters/subunit-filter	1970-01-01 00:00:00 +0000
+++ filters/subunit-filter	2009-02-22 07:51:04 +0000
@@ -0,0 +1,50 @@
+#!/usr/bin/env python
+# subunit: extensions to python unittest to get test results from subprocesses.
+# Copyright (C) 2008 Robert Collins <robertc@robertcollins.net>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+
+"""Filter a subunit stream to include/exclude tests.
+
+The default is to strip successful tests.
+"""
+
+from optparse import OptionParser
+import sys
+import unittest
+
+from subunit import ProtocolTestCase, TestResultFilter, TestProtocolClient
+
+parser = OptionParser(description=__doc__)
+parser.add_option("--error", action="store_false",
+    help="include errors", default=False, dest="error")
+parser.add_option("-e", "--no-error", action="store_true",
+    help="exclude errors", dest="error")
+parser.add_option("--failure", action="store_false",
+    help="include failures", default=False, dest="failure")
+parser.add_option("-f", "--no-failure", action="store_true",
+    help="include failures", dest="failure")
+parser.add_option("-s", "--success", action="store_false",
+    help="include successes", dest="success")
+parser.add_option("--no-success", action="store_true",
+    help="exclude successes", default=True, dest="success")
+(options, args) = parser.parse_args()
+result = TestProtocolClient(sys.stdout)
+result = TestResultFilter(result, filter_error=options.error, filter_failure=options.failure,
+    filter_success=options.success)
+test = ProtocolTestCase(sys.stdin)
+test.run(result)
+sys.exit(0)
=== added file 'filters/subunit-ls'
--- filters/subunit-ls	1970-01-01 00:00:00 +0000
+++ filters/subunit-ls	2009-02-23 10:54:28 +0000
@@ -0,0 +1,62 @@
+#!/usr/bin/env python
+# subunit: extensions to python unittest to get test results from subprocesses.
+# Copyright (C) 2008 Robert Collins <robertc@robertcollins.net>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+
+"""List tests in a subunit stream."""
+
+import sys
+import unittest
+
+from subunit import ProtocolTestCase
+
+class FilterResult(unittest.TestResult):
+    """Filter test objects for display."""
+
+    def __init__(self, stream):
+        """Create a FilterResult object outputting to stream."""
+        unittest.TestResult.__init__(self)
+        self._stream = stream
+        self.failed_tests = 0
+
+    def addError(self, test, err):
+        self.failed_tests += 1
+        self.reportTest(test)
+
+    def addFailure(self, test, err):
+        self.failed_tests += 1
+        self.reportTest(test)
+
+    def addSuccess(self, test):
+        self.reportTest(test)
+
+    def reportTest(self, test):
+        self._stream.write(test.id() + '\n')
+
+    def wasSuccessful(self):
+        "Tells whether or not this result was a success"
+        return self.failed_tests == 0
+
+
+result = FilterResult(sys.stdout)
+test = ProtocolTestCase(sys.stdin)
+test.run(result)
+if result.wasSuccessful():
+    exit_code = 0
+else:
+    exit_code = 1
+sys.exit(exit_code)
=== modified file 'python/subunit/__init__.py'
--- python/subunit/__init__.py	2009-02-15 11:55:00 +0000
+++ python/subunit/__init__.py	2009-02-28 09:27:25 +0000
@@ -133,7 +133,7 @@
             self.current_test_description == line[offset:-1]):
             self.state = TestProtocolServer.OUTSIDE_TEST
             self.current_test_description = None
-            self.client.addSuccess(self._current_test)
+            self._skip_or_error()
             self.client.stopTest(self._current_test)
         elif (self.state == TestProtocolServer.TEST_STARTED and
             self.current_test_description + " [" == line[offset:-1]):
@@ -142,6 +142,16 @@
         else:
             self.stdOutLineReceived(line)
 
+    def _skip_or_error(self, message=None):
+        """Report the current test as a skip if possible, or else an error."""
+        addSkip = getattr(self.client, 'addSkip', None)
+        if not callable(addSkip):
+            self.client.addError(self._current_test, RemoteError(message))
+        else:
+            if not message:
+                message = "No reason given"
+            addSkip(self._current_test, message)
+
     def _addSuccess(self, offset, line):
         if (self.state == TestProtocolServer.TEST_STARTED and
             self.current_test_description == line[offset:-1]):
@@ -173,8 +183,12 @@
             self.client.addError(self._current_test,
                 RemoteError(self._message))
             self.client.stopTest(self._current_test)
+        elif self.state == TestProtocolServer.READING_SKIP:
+            self.state = TestProtocolServer.OUTSIDE_TEST
+            self.current_test_description = None
+            self._skip_or_error(self._message)
+            self.client.stopTest(self._current_test)
         elif self.state in (
-            TestProtocolServer.READING_SKIP,
             TestProtocolServer.READING_SUCCESS,
             TestProtocolServer.READING_XFAIL,
             ):
@@ -314,6 +328,12 @@
             self._stream.write("%s\n" % line)
         self._stream.write("]\n")
 
+    def addSkip(self, test, reason):
+        """Report a skipped test."""
+        self._stream.write("skip: %s [\n" % test.id())
+        self._stream.write("%s\n" % reason)
+        self._stream.write("]\n")
+
     def addSuccess(self, test):
         """Report a success in a test."""
         self._stream.write("successful: %s\n" % test.id())
@@ -363,7 +383,7 @@
         return self.__description
 
     def id(self):
-        return "%s.%s" % (self._strclass(), self.__description)
+        return "%s" % (self.__description,)
 
     def __str__(self):
         return "%s (%s)" % (self.__description, self._strclass())
@@ -651,6 +671,7 @@
         unittest.TestResult.__init__(self)
         self._stream = stream
         self.failed_tests = 0
+        self.skipped_tests = 0
         self.tags = set()
 
     @property
@@ -663,16 +684,20 @@
     def addFailure(self, test, err):
         self.failed_tests += 1
 
+    def addSkip(self, test, reason):
+        self.skipped_tests += 1
+
     def formatStats(self):
         self._stream.write("Total tests: %5d\n" % self.total_tests)
         self._stream.write("Passed tests: %5d\n" % self.passed_tests)
         self._stream.write("Failed tests: %5d\n" % self.failed_tests)
+        self._stream.write("Skipped tests: %5d\n" % self.skipped_tests)
         tags = sorted(self.tags)
         self._stream.write("Tags: %s\n" % (", ".join(tags)))
 
     @property
     def passed_tests(self):
-        return self.total_tests - self.failed_tests
+        return self.total_tests - self.failed_tests - self.skipped_tests
 
     def stopTest(self, test):
         unittest.TestResult.stopTest(self, test)
@@ -681,3 +706,66 @@
     def wasSuccessful(self):
         """Tells whether or not this result was a success"""
         return self.failed_tests == 0
+
+
+class TestResultFilter(unittest.TestResult):
+    """A pyunit TestResult interface implementation which filters tests.
+
+    Tests that pass the filter are handed on to another TestResult instance
+    for further processing/reporting. To obtain the filtered results,
+    the other instance must be interrogated.
+
+    :ivar result: The result that tests are passed to after filtering.
+    """
+
+    def __init__(self, result, filter_error=False, filter_failure=False,
+        filter_success=True, filter_skip=False):
+        """Create a FilterResult object filtering to result.
+
+        :param filter_error: Filter out errors.
+        :param filter_failure: Filter out failures.
+        :param filter_success: Filter out successful tests.
+        :param filter_skip: Filter out skipped tests.
+        """
+        unittest.TestResult.__init__(self)
+        self.result = result
+        self._filter_error = filter_error
+        self._filter_failure = filter_failure
+        self._filter_success = filter_success
+        self._filter_skip = filter_skip
+
+    def addError(self, test, err):
+        if not self._filter_error:
+            self.result.startTest(test)
+            self.result.addError(test, err)
+            self.result.stopTest(test)
+
+    def addFailure(self, test, err):
+        if not self._filter_failure:
+            self.result.startTest(test)
+            self.result.addFailure(test, err)
+            self.result.stopTest(test)
+
+    def addSkip(self, test, reason):
+        if not self._filter_skip:
+            self.result.startTest(test)
+            # This is duplicated, it would be nice to have on a 'calls
+            # TestResults' mixin perhaps.
+            addSkip = getattr(self.result, 'addSkip', None)
+            if not callable(addSkip):
+                self.result.addError(test, RemoteError(reason))
+            else:
+                self.result.addSkip(test, reason)
+            self.result.stopTest(test)
+
+    def addSuccess(self, test):
+        if not self._filter_success:
+            self.result.startTest(test)
+            self.result.addSuccess(test)
+            self.result.stopTest(test)
+
+    def id_to_orig_id(self, id):
+        if id.startswith("subunit.RemotedTestCase."):
+            return id[len("subunit.RemotedTestCase."):]
+        return id
+
 
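Both _skip_or_error and TestResultFilter.addSkip use the same duck-typing fallback: prefer addSkip when the downstream result supports it, otherwise degrade the skip to an error. A stdlib-only sketch of that pattern; report_skip and LegacyResult are illustrative names, and a plain exception stands in for subunit's RemoteError:

```python
import sys
import unittest

def report_skip(result, test, reason):
    """Prefer addSkip when the result supports it, else record an error.

    Sketch of the fallback in this branch's _skip_or_error;
    report_skip itself is not subunit API.
    """
    add_skip = getattr(result, 'addSkip', None)
    if callable(add_skip):
        add_skip(test, reason or "No reason given")
    else:
        # A pre-skip-aware result only understands errors.
        try:
            raise RuntimeError("skipped: %s" % reason)
        except RuntimeError:
            result.addError(test, sys.exc_info())


class LegacyResult(object):
    """Stand-in for a result class without addSkip."""
    def __init__(self):
        self.errors = []

    def addError(self, test, err):
        self.errors.append((test, err))


class Demo(unittest.TestCase):
    def runTest(self):
        pass


test = Demo()
modern = unittest.TestResult()  # has addSkip
legacy = LegacyResult()         # does not
report_skip(modern, test, None)
report_skip(legacy, test, "no network")
```

The getattr check keeps the protocol server usable with any pyunit TestResult, which is exactly the compatibility the README change describes.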
=== modified file 'python/subunit/tests/__init__.py'
--- python/subunit/tests/__init__.py	2008-12-09 01:00:03 +0000
+++ python/subunit/tests/__init__.py	2009-02-22 06:28:08 +0000
@@ -19,6 +19,7 @@
 
 from subunit.tests import (
     TestUtil,
+    test_subunit_filter,
     test_subunit_stats,
     test_subunit_tags,
     test_tap2subunit,
@@ -29,6 +30,7 @@
     result = TestUtil.TestSuite()
     result.addTest(test_test_protocol.test_suite())
     result.addTest(test_tap2subunit.test_suite())
+    result.addTest(test_subunit_filter.test_suite())
     result.addTest(test_subunit_tags.test_suite())
     result.addTest(test_subunit_stats.test_suite())
     return result
 
=== added file 'python/subunit/tests/test_subunit_filter.py'
--- python/subunit/tests/test_subunit_filter.py	1970-01-01 00:00:00 +0000
+++ python/subunit/tests/test_subunit_filter.py	2009-02-28 09:27:25 +0000
@@ -0,0 +1,125 @@
+#
+# subunit: extensions to python unittest to get test results from subprocesses.
+# Copyright (C) 2005 Robert Collins <robertc@robertcollins.net>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+
+"""Tests for subunit.TestResultFilter."""
+
+import unittest
+from StringIO import StringIO
+
+import subunit
+
+
+class TestTestResultFilter(unittest.TestCase):
+    """Test for TestResultFilter, a TestResult object which filters tests."""
+
+    def _setUp(self):
+        self.output = StringIO()
+
+    def test_default(self):
+        """The default is to exclude success and include everything else."""
+        self.filtered_result = unittest.TestResult()
+        self.filter = subunit.TestResultFilter(self.filtered_result)
+        self.run_tests()
+        # skips are seen as errors by default python TestResult.
+        self.assertEqual(['error', 'skipped'],
+            [error[0].id() for error in self.filtered_result.errors])
+        self.assertEqual(['failed'],
+            [failure[0].id() for failure in
+            self.filtered_result.failures])
+        self.assertEqual(3, self.filtered_result.testsRun)
+
+    def test_exclude_errors(self):
+        self.filtered_result = unittest.TestResult()
+        self.filter = subunit.TestResultFilter(self.filtered_result,
+            filter_error=True)
+        self.run_tests()
+        # skips are seen as errors by default python TestResult.
+        self.assertEqual(['skipped'],
+            [error[0].id() for error in self.filtered_result.errors])
+        self.assertEqual(['failed'],
+            [failure[0].id() for failure in
+            self.filtered_result.failures])
+        self.assertEqual(2, self.filtered_result.testsRun)
+
+    def test_exclude_failure(self):
+        self.filtered_result = unittest.TestResult()
+        self.filter = subunit.TestResultFilter(self.filtered_result,
+            filter_failure=True)
+        self.run_tests()
+        self.assertEqual(['error', 'skipped'],
+            [error[0].id() for error in self.filtered_result.errors])
+        self.assertEqual([],
+            [failure[0].id() for failure in
+            self.filtered_result.failures])
+        self.assertEqual(2, self.filtered_result.testsRun)
+
+    def test_exclude_skips(self):
+        self.filtered_result = subunit.TestResultStats(None)
+        self.filter = subunit.TestResultFilter(self.filtered_result,
+            filter_skip=True)
+        self.run_tests()
+        self.assertEqual(0, self.filtered_result.skipped_tests)
+        self.assertEqual(2, self.filtered_result.failed_tests)
+        self.assertEqual(2, self.filtered_result.testsRun)
+
+    def test_include_success(self):
+        """Success's can be included if requested."""
+        self.filtered_result = unittest.TestResult()
+        self.filter = subunit.TestResultFilter(self.filtered_result,
+            filter_success=False)
+        self.run_tests()
+        self.assertEqual(['error', 'skipped'],
+            [error[0].id() for error in self.filtered_result.errors])
+        self.assertEqual(['failed'],
+            [failure[0].id() for failure in
+            self.filtered_result.failures])
+        self.assertEqual(5, self.filtered_result.testsRun)
+
+    def run_tests(self):
+        self.setUpTestStream()
+        self.test = subunit.ProtocolTestCase(self.input_stream)
+        self.test.run(self.filter)
+
+    def setUpTestStream(self):
+        # While TestResultFilter works on python objects, using a subunit
+        # stream is an easy pithy way of getting a series of test objects to
+        # call into the TestResult, and as TestResultFilter is intended for use
+        # with subunit also has the benefit of detecting any interface skew issues.
+        self.input_stream = StringIO()
+        self.input_stream.write("""tags: global
+test passed
+success passed
+test failed
+tags: local
+failure failed
+test error
+error error
+test skipped
+skip skipped
+test todo
+xfail todo
+""")
+        self.input_stream.seek(0)
+
+
+
+def test_suite():
+    loader = subunit.tests.TestUtil.TestLoader()
+    result = loader.loadTestsFromName(__name__)
+    return result
=== modified file 'python/subunit/tests/test_subunit_stats.py'
--- python/subunit/tests/test_subunit_stats.py	2008-12-09 01:00:03 +0000
+++ python/subunit/tests/test_subunit_stats.py	2009-02-28 09:27:25 +0000
@@ -62,15 +62,17 @@
         # Statistics are calculated usefully.
         self.setUpUsedStream()
         self.assertEqual(5, self.result.total_tests)
-        self.assertEqual(3, self.result.passed_tests)
+        self.assertEqual(2, self.result.passed_tests)
         self.assertEqual(2, self.result.failed_tests)
+        self.assertEqual(1, self.result.skipped_tests)
         self.assertEqual(set(["global", "local"]), self.result.tags)
 
     def test_stat_formatting(self):
         expected = ("""
 Total tests:       5
-Passed tests:      3
+Passed tests:      2
 Failed tests:      2
+Skipped tests:     1
 Tags: global, local
 """)[1:]
         self.setUpUsedStream()
 
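The stats change makes passed_tests a derived value, so skips no longer inflate the pass count. With the five-test stream these tests use (one failure, one error, one skip), the arithmetic works out as:

```python
# Sketch of TestResultStats' derived counter in this branch: passed
# tests are computed from the other totals, not counted directly.
def passed_tests(total_tests, failed_tests, skipped_tests):
    return total_tests - failed_tests - skipped_tests

# 5 tests, 2 failed (one failure plus one error), 1 skipped -> 2 passed.
assert passed_tests(5, 2, 1) == 2
```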
=== modified file 'python/subunit/tests/test_test_protocol.py'
--- python/subunit/tests/test_test_protocol.py	2008-12-14 18:37:08 +0000
+++ python/subunit/tests/test_test_protocol.py	2009-02-28 09:27:25 +0000
@@ -32,6 +32,7 @@
         self.end_calls = []
         self.error_calls = []
         self.failure_calls = []
+        self.skip_calls = []
         self.start_calls = []
         self.success_calls = []
         super(MockTestProtocolServerClient, self).__init__()
@@ -42,6 +43,9 @@
     def addFailure(self, test, error):
         self.failure_calls.append((test, error))
 
+    def addSkip(self, test, reason):
+        self.skip_calls.append((test, reason))
+
     def addSuccess(self, test):
         self.success_calls.append(test)
 
@@ -589,8 +593,8 @@
 class TestTestProtocolServerAddSkip(unittest.TestCase):
     """Tests for the skip keyword.
 
-    In Python this thunks through to Success due to stdlib limitations. (See
-    README).
+    In python this meets the testtools extended TestResult contract.
+    (See https://launchpad.net/testtools).
     """
 
     def setUp(self):
@@ -606,7 +610,9 @@
         self.assertEqual(self.client.end_calls, [self.test])
         self.assertEqual(self.client.error_calls, [])
         self.assertEqual(self.client.failure_calls, [])
-        self.assertEqual(self.client.success_calls, [self.test])
+        self.assertEqual(self.client.success_calls, [])
+        self.assertEqual(self.client.skip_calls,
+            [(self.test, 'No reason given')])
 
     def test_simple_skip(self):
         self.simple_skip_keyword("skip")
@@ -621,7 +627,9 @@
         self.assertEqual(self.client.end_calls, [self.test])
         self.assertEqual(self.client.error_calls, [])
         self.assertEqual(self.client.failure_calls, [])
-        self.assertEqual(self.client.success_calls, [self.test])
+        self.assertEqual(self.client.success_calls, [])
+        self.assertEqual(self.client.skip_calls,
+            [(self.test, "No reason given")])
 
     def skip_quoted_bracket(self, keyword):
         # This tests it is accepted, but cannot test it is used today, because
@@ -633,7 +641,9 @@
         self.assertEqual(self.client.end_calls, [self.test])
         self.assertEqual(self.client.error_calls, [])
         self.assertEqual(self.client.failure_calls, [])
-        self.assertEqual(self.client.success_calls, [self.test])
+        self.assertEqual(self.client.success_calls, [])
+        self.assertEqual(self.client.skip_calls,
+            [(self.test, "]\n")])
 
     def test_skip_quoted_bracket(self):
         self.skip_quoted_bracket("skip")
@@ -772,7 +782,7 @@
         self.assertRaises(NotImplementedError, test.tearDown)
         self.assertEqual("A test description",
                          test.shortDescription())
-        self.assertEqual("subunit.RemotedTestCase.A test description",
+        self.assertEqual("A test description",
                          test.id())
         self.assertEqual("A test description (subunit.RemotedTestCase)", "%s" % test)
         self.assertEqual("<subunit.RemotedTestCase description="
@@ -933,7 +943,6 @@
         self.protocol = subunit.TestProtocolClient(self.io)
         self.test = TestTestProtocolClient("test_start_test")
 
-
     def test_start_test(self):
         """Test startTest on a TestProtocolClient."""
         self.protocol.startTest(self.test)
@@ -968,6 +977,14 @@
             "RemoteException: phwoar crikey\n"
             "]\n" % self.test.id())
 
+    def test_add_skip(self):
+        """Test addSkip on a TestProtocolClient."""
+        self.protocol.addSkip(
+            self.test, "Has it really?")
+        self.assertEqual(
+            self.io.getvalue(),
+            'skip: %s [\nHas it really?\n]\n' % self.test.id())
+
 
 def test_suite():
     loader = subunit.tests.TestUtil.TestLoader()
This includes filters; best to review after that is reviewed. Also, when is that getting reviewed? :)