Merge lp:~jml/subunit/to-csv into lp:~subunit/subunit/trunk

Proposed by Jonathan Lange
Status: Merged
Merged at revision: 162
Proposed branch: lp:~jml/subunit/to-csv
Merge into: lp:~subunit/subunit/trunk
Diff against target: 630 lines (+465/-76)
6 files modified
filters/subunit-notify (+17/-38)
filters/subunit2csv (+23/-0)
filters/subunit2junitxml (+4/-38)
python/subunit/filters.py (+125/-0)
python/subunit/test_results.py (+98/-0)
python/subunit/tests/test_test_results.py (+198/-0)
To merge this branch: bzr merge lp:~jml/subunit/to-csv
Reviewer: Jelmer Vernooij (status: Approve)
Review via email: mp+90960@code.launchpad.net

Commit message

Add a CSV output filter.

Description of the change

Adds a CSV output filter.

The implementation approach was to make a generic result that just calls something whenever a test finishes, passing it everything useful. Then I copied and pasted subunit2junitxml.
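The idea can be sketched as a hypothetical, stripped-down version of that generic result (the real TestByTestResult in this branch also tracks tags and details, and handles failures, errors, skips and expected failures):

```python
import datetime
import unittest

class PerTestResult(unittest.TestResult):
    """Hypothetical simplification of the per-test-callback idea:
    call one callable each time a test finishes, with everything
    accumulated for that test."""

    def __init__(self, on_test):
        super(PerTestResult, self).__init__()
        self._on_test = on_test

    def startTest(self, test):
        super(PerTestResult, self).startTest(test)
        self._start = datetime.datetime.now()
        self._status = None

    def addSuccess(self, test):
        super(PerTestResult, self).addSuccess(test)
        self._status = 'success'

    def stopTest(self, test):
        super(PerTestResult, self).stopTest(test)
        # Hand everything useful for this test to the callback.
        self._on_test(test, self._status, self._start,
                      datetime.datetime.now())

class Example(unittest.TestCase):
    def test_ok(self):
        pass

log = []
suite = unittest.TestLoader().loadTestsFromTestCase(Example)
suite.run(PerTestResult(lambda t, status, start, stop: log.append(status)))
print(log)  # ['success']
```

A CSV writer (or anything else) can then be layered on top purely by supplying a suitable callback, which is why the same base class also serves subunit2junitxml-style filters.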

Output looks like this:

 test,status,start_time,end_time
 foo.bar.baz,success,2012-02-21 12:34:56.000000+00:00,2012-02-21 12:34:59.000000+00:00
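For example, output in this format can be consumed with Python's standard csv module; the inline data below is just the sample row from above, and datetime.fromisoformat (Python 3.7+) parses these timestamps:

```python
import csv
import io
from datetime import datetime

# The sample subunit2csv output from above, inlined for illustration.
data = """test,status,start_time,end_time
foo.bar.baz,success,2012-02-21 12:34:56.000000+00:00,2012-02-21 12:34:59.000000+00:00
"""

durations = {}
for row in csv.DictReader(io.StringIO(data)):
    # The timestamps carry a UTC offset, which fromisoformat understands.
    start = datetime.fromisoformat(row["start_time"])
    end = datetime.fromisoformat(row["end_time"])
    durations[row["test"]] = (end - start).total_seconds()

print(durations)  # {'foo.bar.baz': 3.0}
```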

I've added tests for most of it, but not for the filter script itself since subunit lacks such tests for all of its other filters.

Arguably, the TestByTestResult helper belongs in testtools. I've added it here since it creates value and means subunit users won't need another lock-step upgrade with testtools.

I'm mostly happy to factor out the common code from subunit2junitxml and subunit2csv, but writing tests would be a bit boring.

There's an edge case in addSkip that I don't know how to handle.

Note that if you pass the output to this R script: http://paste.ubuntu.com/824452/, you can create graphs like http://code.mumak.net/2010/11/boiling-kettles-unit-tests-and-data.html

Thanks,
jml

Revision history for this message
Jelmer Vernooij (jelmer) wrote :

It would be nice to have the code that's common to subunit2junitxml and subunit2csv factored out. I'd rather see that happen (it's at least one step further in the right direction) than not have it moved to python/subunit/ at all.

Other than that this seems reasonable to me.

Revision history for this message
Jelmer Vernooij (jelmer) wrote :

I wonder if --no-passthrough perhaps shouldn't be the default in this case? It doesn't seem like the output can be very useful without it enabled.

lp:~jml/subunit/to-csv updated
152. By Jonathan Lange

Try to factor out the filter code.

153. By Jonathan Lange

More tweaking of boundaries.

154. By Jonathan Lange

More fiddling about.

155. By Jonathan Lange

Docstrings and renaming.

156. By Jonathan Lange

Factor out JUnitXML

157. By Jonathan Lange

Rename main() and give it a docstring.

158. By Jonathan Lange

Not XML

159. By Jonathan Lange

Try to reduce double negatives and be more explicit about what happens to
the forwarded input.

160. By Jonathan Lange

Add a post-run hook.

161. By Jonathan Lange

Factor out subunit-notify

Revision history for this message
Jonathan Lange (jml) wrote :

I've factored out the common code from subunit2csv, subunit2junitxml and subunit-notify. This should give us a good basis for factoring out the rest of the scripts, which all have slightly different options (I want to avoid making any behavioural changes in this branch other than adding subunit2csv).

As for --no-passthrough being the default, the same argument applies to subunit2junitxml, and now even the code is the same. I would like to avoid tackling that problem in this branch, since it spreads behavioural change to other scripts (unless I defactor filters.py a little), and since it would involve dealing with backwards compatibility issues, which are always tricky.

Revision history for this message
Jelmer Vernooij (jelmer) wrote :

> I've factored out the common code from subunit2csv, subunit2junitxml and
> subunit-notify. This should give us a good basis for factoring out the rest
> of the scripts, which all have slightly different options (I want to avoid
> making any behavioural changes in this branch other than adding subunit2csv).
Great.

> As for --no-passthrough being the default, the same argument applies to
> subunit2junitxml, and now even the code is the same. I would like to avoid
> tackling that problem in this branch, since it spreads behavioural change to
> other scripts (unless I defactor filters.py a little), and since it would
> involve dealing with backwards compatibility issues, which are always tricky.
I think it's wrong for --passthrough to be the default for subunit2junitxml too, but I'm happy to defer that fix until later since - as you say - it's a pre-existing issue.

Cheers,

Jelmer

review: Approve

Preview Diff

=== modified file 'filters/subunit-notify'
--- filters/subunit-notify	2010-01-25 15:45:45 +0000
+++ filters/subunit-notify	2012-03-27 11:21:22 +0000
@@ -16,50 +16,29 @@
 
 """Notify the user of a finished test run."""
 
-from optparse import OptionParser
-import sys
-
 import pygtk
 pygtk.require('2.0')
 import pynotify
 
-from subunit import DiscardStream, ProtocolTestCase, TestResultStats
+from subunit import TestResultStats
+from subunit.filters import run_filter_script
 
 if not pynotify.init("Subunit-notify"):
     sys.exit(1)
 
-parser = OptionParser(description=__doc__)
-parser.add_option("--no-passthrough", action="store_true",
-    help="Hide all non subunit input.", default=False, dest="no_passthrough")
-parser.add_option("-f", "--forward", action="store_true", default=False,
-    help="Forward subunit stream on stdout.")
-(options, args) = parser.parse_args()
-result = TestResultStats(sys.stdout)
-if options.no_passthrough:
-    passthrough_stream = DiscardStream()
-else:
-    passthrough_stream = None
-if options.forward:
-    forward_stream = sys.stdout
-else:
-    forward_stream = None
-test = ProtocolTestCase(sys.stdin, passthrough=passthrough_stream,
-    forward=forward_stream)
-test.run(result)
-if result.failed_tests > 0:
-    summary = "Test run failed"
-else:
-    summary = "Test run successful"
-body = "Total tests: %d; Passed: %d; Failed: %d" % (
-    result.total_tests,
-    result.passed_tests,
-    result.failed_tests,
+
+def notify_of_result(result):
+    if result.failed_tests > 0:
+        summary = "Test run failed"
+    else:
+        summary = "Test run successful"
+    body = "Total tests: %d; Passed: %d; Failed: %d" % (
+        result.total_tests,
+        result.passed_tests,
+        result.failed_tests,
     )
-nw = pynotify.Notification(summary, body)
-nw.show()
-
-if result.wasSuccessful():
-    exit_code = 0
-else:
-    exit_code = 1
-sys.exit(exit_code)
+    nw = pynotify.Notification(summary, body)
+    nw.show()
+
+
+run_filter_script(TestResultStats, __doc__, notify_of_result)
 
=== added file 'filters/subunit2csv'
--- filters/subunit2csv	1970-01-01 00:00:00 +0000
+++ filters/subunit2csv	2012-03-27 11:21:22 +0000
@@ -0,0 +1,23 @@
+#!/usr/bin/env python
+# subunit: extensions to python unittest to get test results from subprocesses.
+# Copyright (C) 2009 Robert Collins <robertc@robertcollins.net>
+#
+# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
+# license at the users choice. A copy of both licenses are available in the
+# project source as Apache-2.0 and BSD. You may not use this file except in
+# compliance with one of these two licences.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# license you chose for the specific language governing permissions and
+# limitations under that license.
+#
+
+"""Turn a subunit stream into a CSV"""
+
+from subunit.filters import run_filter_script
+from subunit.test_results import CsvResult
+
+
+run_filter_script(CsvResult, __doc__)
 
=== modified file 'filters/subunit2junitxml'
--- filters/subunit2junitxml	2009-12-17 21:53:57 +0000
+++ filters/subunit2junitxml	2012-03-27 11:21:22 +0000
@@ -16,11 +16,10 @@
 
 """Filter a subunit stream to get aggregate statistics."""
 
-from optparse import OptionParser
+
 import sys
-import unittest
+from subunit.filters import run_filter_script
 
-from subunit import DiscardStream, ProtocolTestCase
 try:
     from junitxml import JUnitXmlResult
 except ImportError:
@@ -28,38 +27,5 @@
         "http://pypi.python.org/pypi/junitxml) is required for this filter.")
     raise
 
-parser = OptionParser(description=__doc__)
-parser.add_option("--no-passthrough", action="store_true",
-    help="Hide all non subunit input.", default=False, dest="no_passthrough")
-parser.add_option("-o", "--output-to",
-    help="Output the XML to this path rather than stdout.")
-parser.add_option("-f", "--forward", action="store_true", default=False,
-    help="Forward subunit stream on stdout.")
-(options, args) = parser.parse_args()
-if options.output_to is None:
-    output_to = sys.stdout
-else:
-    output_to = file(options.output_to, 'wb')
-try:
-    result = JUnitXmlResult(output_to)
-    if options.no_passthrough:
-        passthrough_stream = DiscardStream()
-    else:
-        passthrough_stream = None
-    if options.forward:
-        forward_stream = sys.stdout
-    else:
-        forward_stream = None
-    test = ProtocolTestCase(sys.stdin, passthrough=passthrough_stream,
-        forward=forward_stream)
-    result.startTestRun()
-    test.run(result)
-    result.stopTestRun()
-finally:
-    if options.output_to is not None:
-        output_to.close()
-if result.wasSuccessful():
-    exit_code = 0
-else:
-    exit_code = 1
-sys.exit(exit_code)
+
+run_filter_script(JUnitXmlResult, __doc__)
 
=== added file 'python/subunit/filters.py'
--- python/subunit/filters.py	1970-01-01 00:00:00 +0000
+++ python/subunit/filters.py	2012-03-27 11:21:22 +0000
@@ -0,0 +1,125 @@
+# subunit: extensions to python unittest to get test results from subprocesses.
+# Copyright (C) 2009 Robert Collins <robertc@robertcollins.net>
+#
+# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
+# license at the users choice. A copy of both licenses are available in the
+# project source as Apache-2.0 and BSD. You may not use this file except in
+# compliance with one of these two licences.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# license you chose for the specific language governing permissions and
+# limitations under that license.
+#
+
+
+from optparse import OptionParser
+import sys
+
+from subunit import DiscardStream, ProtocolTestCase
+
+
+def make_options(description):
+    parser = OptionParser(description=description)
+    parser.add_option(
+        "--no-passthrough", action="store_true",
+        help="Hide all non subunit input.", default=False,
+        dest="no_passthrough")
+    parser.add_option(
+        "-o", "--output-to",
+        help="Send the output to this path rather than stdout.")
+    parser.add_option(
+        "-f", "--forward", action="store_true", default=False,
+        help="Forward subunit stream on stdout.")
+    return parser
+
+
+def run_tests_from_stream(input_stream, result, passthrough_stream=None,
+                          forward_stream=None):
+    """Run tests from a subunit input stream through 'result'.
+
+    :param input_stream: A stream containing subunit input.
+    :param result: A TestResult that will receive the test events.
+    :param passthrough_stream: All non-subunit input received will be
+        sent to this stream. If not provided, uses the ``TestProtocolServer``
+        default, which is ``sys.stdout``.
+    :param forward_stream: All subunit input received will be forwarded
+        to this stream. If not provided, uses the ``TestProtocolServer``
+        default, which is to not forward any input.
+    """
+    test = ProtocolTestCase(
+        input_stream, passthrough=passthrough_stream,
+        forward=forward_stream)
+    result.startTestRun()
+    test.run(result)
+    result.stopTestRun()
+
+
+def filter_by_result(result_factory, output_path, passthrough, forward,
+                     input_stream=sys.stdin):
+    """Filter an input stream using a test result.
+
+    :param result_factory: A callable that when passed an output stream
+        returns a TestResult. It is expected that this result will output
+        to the given stream.
+    :param output_path: A path to send output to. If None, output will go
+        to ``sys.stdout``.
+    :param passthrough: If True, all non-subunit input will be sent to
+        ``sys.stdout``. If False, that input will be discarded.
+    :param forward: If True, all subunit input will be forwarded directly to
+        ``sys.stdout`` as well as to the ``TestResult``.
+    :param input_stream: The source of subunit input. Defaults to
+        ``sys.stdin``.
+    :return: A test result with the results of the run.
+    """
+    if passthrough:
+        passthrough_stream = sys.stdout
+    else:
+        passthrough_stream = DiscardStream()
+
+    if forward:
+        forward_stream = sys.stdout
+    else:
+        forward_stream = DiscardStream()
+
+    if output_path is None:
+        output_to = sys.stdout
+    else:
+        output_to = file(output_path, 'wb')
+
+    try:
+        result = result_factory(output_to)
+        run_tests_from_stream(
+            input_stream, result, passthrough_stream, forward_stream)
+    finally:
+        if output_path:
+            output_to.close()
+    return result
+
+
+def run_filter_script(result_factory, description, post_run_hook=None):
+    """Main function for simple subunit filter scripts.
+
+    Many subunit filter scripts take a stream of subunit input and use a
+    TestResult to handle the events generated by that stream. This function
+    wraps a lot of the boiler-plate around that by making a script with
+    options for handling passthrough information and stream forwarding, and
+    that will exit with a successful return code (i.e. 0) if the input stream
+    represents a successful test run.
+
+    :param result_factory: A callable that takes an output stream and returns
+        a test result that outputs to that stream.
+    :param description: A description of the filter script.
+    """
+    parser = make_options(description)
+    (options, args) = parser.parse_args()
+    result = filter_by_result(
+        result_factory, options.output_to, not options.no_passthrough,
+        options.forward)
+    if post_run_hook:
+        post_run_hook(result)
+    if result.wasSuccessful():
+        sys.exit(0)
+    else:
+        sys.exit(1)
 
=== modified file 'python/subunit/test_results.py'
--- python/subunit/test_results.py	2011-06-30 11:16:55 +0000
+++ python/subunit/test_results.py	2012-03-27 11:21:22 +0000
@@ -16,9 +16,14 @@
 
 """TestResult helper classes used to by subunit."""
 
+import csv
 import datetime
 
 import testtools
+from testtools.content import (
+    text_content,
+    TracebackContent,
+    )
 
 from subunit import iso8601
 
@@ -493,3 +498,96 @@
     def wasSuccessful(self):
         "Tells whether or not this result was a success"
         return self.failed_tests == 0
+
+
+class TestByTestResult(testtools.TestResult):
+    """Call something every time a test completes."""
+
+    # XXX: Arguably belongs in testtools.
+
+    def __init__(self, on_test):
+        """Construct a ``TestByTestResult``.
+
+        :param on_test: A callable that takes a test case, a status (one of
+            "success", "failure", "error", "skip", or "xfail"), a start time
+            (a ``datetime`` with timezone), a stop time, an iterable of tags,
+            and a details dict. Is called at the end of each test (i.e. on
+            ``stopTest``) with the accumulated values for that test.
+        """
+        super(TestByTestResult, self).__init__()
+        self._on_test = on_test
+
+    def startTest(self, test):
+        super(TestByTestResult, self).startTest(test)
+        self._start_time = self._now()
+        # There's no supported (i.e. tested) behaviour that relies on these
+        # being set, but it makes me more comfortable all the same. -- jml
+        self._status = None
+        self._details = None
+        self._stop_time = None
+
+    def stopTest(self, test):
+        self._stop_time = self._now()
+        super(TestByTestResult, self).stopTest(test)
+        self._on_test(
+            test=test,
+            status=self._status,
+            start_time=self._start_time,
+            stop_time=self._stop_time,
+            # current_tags is new in testtools 0.9.13.
+            tags=getattr(self, 'current_tags', None),
+            details=self._details)
+
+    def _err_to_details(self, test, err, details):
+        if details:
+            return details
+        return {'traceback': TracebackContent(err, test)}
+
+    def addSuccess(self, test, details=None):
+        super(TestByTestResult, self).addSuccess(test)
+        self._status = 'success'
+        self._details = details
+
+    def addFailure(self, test, err=None, details=None):
+        super(TestByTestResult, self).addFailure(test, err, details)
+        self._status = 'failure'
+        self._details = self._err_to_details(test, err, details)
+
+    def addError(self, test, err=None, details=None):
+        super(TestByTestResult, self).addError(test, err, details)
+        self._status = 'error'
+        self._details = self._err_to_details(test, err, details)
+
+    def addSkip(self, test, reason=None, details=None):
+        super(TestByTestResult, self).addSkip(test, reason, details)
+        self._status = 'skip'
+        if details is None:
+            details = {'reason': text_content(reason)}
+        elif reason:
+            # XXX: What if details already has 'reason' key?
+            details['reason'] = text_content(reason)
+        self._details = details
+
+    def addExpectedFailure(self, test, err=None, details=None):
+        super(TestByTestResult, self).addExpectedFailure(test, err, details)
+        self._status = 'xfail'
+        self._details = self._err_to_details(test, err, details)
+
+    def addUnexpectedSuccess(self, test, details=None):
+        super(TestByTestResult, self).addUnexpectedSuccess(test, details)
+        self._status = 'success'
+        self._details = details
+
+
+class CsvResult(TestByTestResult):
+
+    def __init__(self, stream):
+        super(CsvResult, self).__init__(self._on_test)
+        self._write_row = csv.writer(stream).writerow
+
+    def _on_test(self, test, status, start_time, stop_time, tags, details):
+        self._write_row([test.id(), status, start_time, stop_time])
+
+    def startTestRun(self):
+        super(CsvResult, self).startTestRun()
+        self._write_row(['test', 'status', 'start_time', 'stop_time'])
 
=== modified file 'python/subunit/tests/test_test_results.py'
--- python/subunit/tests/test_test_results.py	2011-04-25 20:32:49 +0000
+++ python/subunit/tests/test_test_results.py	2012-03-27 11:21:22 +0000
@@ -14,16 +14,25 @@
 # limitations under that license.
 #
 
+import csv
 import datetime
+import sys
 import unittest
 
 from testtools import TestCase
+from testtools.compat import StringIO
+from testtools.content import (
+    text_content,
+    TracebackContent,
+    )
 from testtools.testresult.doubles import ExtendedTestResult
 
 import subunit
 import subunit.iso8601 as iso8601
 import subunit.test_results
 
+import testtools
+
 
 class LoggingDecorator(subunit.test_results.HookedTestResultDecorator):
 
@@ -294,6 +303,195 @@
             ('stopTest', foo)], result._events)
 
 
+class TestByTestResultTests(testtools.TestCase):
+
+    def setUp(self):
+        super(TestByTestResultTests, self).setUp()
+        self.log = []
+        self.result = subunit.test_results.TestByTestResult(self.on_test)
+        self.result._now = iter(range(5)).next
+
+    def assertCalled(self, **kwargs):
+        defaults = {
+            'test': self,
+            'tags': set(),
+            'details': None,
+            'start_time': 0,
+            'stop_time': 1,
+            }
+        defaults.update(kwargs)
+        self.assertEqual([defaults], self.log)
+
+    def on_test(self, **kwargs):
+        self.log.append(kwargs)
+
+    def test_no_tests_nothing_reported(self):
+        self.result.startTestRun()
+        self.result.stopTestRun()
+        self.assertEqual([], self.log)
+
+    def test_add_success(self):
+        self.result.startTest(self)
+        self.result.addSuccess(self)
+        self.result.stopTest(self)
+        self.assertCalled(status='success')
+
+    def test_add_success_details(self):
+        self.result.startTest(self)
+        details = {'foo': 'bar'}
+        self.result.addSuccess(self, details=details)
+        self.result.stopTest(self)
+        self.assertCalled(status='success', details=details)
+
+    def test_tags(self):
+        if not getattr(self.result, 'tags', None):
+            self.skipTest("No tags in testtools")
+        self.result.tags(['foo'], [])
+        self.result.startTest(self)
+        self.result.addSuccess(self)
+        self.result.stopTest(self)
+        self.assertCalled(status='success', tags=set(['foo']))
+
+    def test_add_error(self):
+        self.result.startTest(self)
+        try:
+            1/0
+        except ZeroDivisionError:
+            error = sys.exc_info()
+        self.result.addError(self, error)
+        self.result.stopTest(self)
+        self.assertCalled(
+            status='error',
+            details={'traceback': TracebackContent(error, self)})
+
+    def test_add_error_details(self):
+        self.result.startTest(self)
+        details = {"foo": text_content("bar")}
+        self.result.addError(self, details=details)
+        self.result.stopTest(self)
+        self.assertCalled(status='error', details=details)
+
+    def test_add_failure(self):
+        self.result.startTest(self)
+        try:
+            self.fail("intentional failure")
+        except self.failureException:
+            failure = sys.exc_info()
+        self.result.addFailure(self, failure)
+        self.result.stopTest(self)
+        self.assertCalled(
+            status='failure',
+            details={'traceback': TracebackContent(failure, self)})
+
+    def test_add_failure_details(self):
+        self.result.startTest(self)
+        details = {"foo": text_content("bar")}
+        self.result.addFailure(self, details=details)
+        self.result.stopTest(self)
+        self.assertCalled(status='failure', details=details)
+
+    def test_add_xfail(self):
+        self.result.startTest(self)
+        try:
+            1/0
+        except ZeroDivisionError:
+            error = sys.exc_info()
+        self.result.addExpectedFailure(self, error)
+        self.result.stopTest(self)
+        self.assertCalled(
+            status='xfail',
+            details={'traceback': TracebackContent(error, self)})
+
+    def test_add_xfail_details(self):
+        self.result.startTest(self)
+        details = {"foo": text_content("bar")}
+        self.result.addExpectedFailure(self, details=details)
+        self.result.stopTest(self)
+        self.assertCalled(status='xfail', details=details)
+
+    def test_add_unexpected_success(self):
+        self.result.startTest(self)
+        details = {'foo': 'bar'}
+        self.result.addUnexpectedSuccess(self, details=details)
+        self.result.stopTest(self)
+        self.assertCalled(status='success', details=details)
+
+    def test_add_skip_reason(self):
+        self.result.startTest(self)
+        reason = self.getUniqueString()
+        self.result.addSkip(self, reason)
+        self.result.stopTest(self)
+        self.assertCalled(
+            status='skip', details={'reason': text_content(reason)})
+
+    def test_add_skip_details(self):
+        self.result.startTest(self)
+        details = {'foo': 'bar'}
+        self.result.addSkip(self, details=details)
+        self.result.stopTest(self)
+        self.assertCalled(status='skip', details=details)
+
+    def test_twice(self):
+        self.result.startTest(self)
+        self.result.addSuccess(self, details={'foo': 'bar'})
+        self.result.stopTest(self)
+        self.result.startTest(self)
+        self.result.addSuccess(self)
+        self.result.stopTest(self)
+        self.assertEqual(
+            [{'test': self,
+              'status': 'success',
+              'start_time': 0,
+              'stop_time': 1,
+              'tags': set(),
+              'details': {'foo': 'bar'}},
+             {'test': self,
+              'status': 'success',
+              'start_time': 2,
+              'stop_time': 3,
+              'tags': set(),
+              'details': None},
+             ],
+            self.log)
+
+
+class TestCsvResult(testtools.TestCase):
+
+    def parse_stream(self, stream):
+        stream.seek(0)
+        reader = csv.reader(stream)
+        return list(reader)
+
+    def test_csv_output(self):
+        stream = StringIO()
+        result = subunit.test_results.CsvResult(stream)
+        result._now = iter(range(5)).next
+        result.startTestRun()
+        result.startTest(self)
+        result.addSuccess(self)
+        result.stopTest(self)
+        result.stopTestRun()
+        self.assertEqual(
+            [['test', 'status', 'start_time', 'stop_time'],
+             [self.id(), 'success', '0', '1'],
+             ],
+            self.parse_stream(stream))
+
+    def test_just_header_when_no_tests(self):
+        stream = StringIO()
+        result = subunit.test_results.CsvResult(stream)
+        result.startTestRun()
+        result.stopTestRun()
+        self.assertEqual(
+            [['test', 'status', 'start_time', 'stop_time']],
+            self.parse_stream(stream))
+
+    def test_no_output_before_events(self):
+        stream = StringIO()
+        subunit.test_results.CsvResult(stream)
+        self.assertEqual([], self.parse_stream(stream))
+
+
 def test_suite():
     loader = subunit.tests.TestUtil.TestLoader()
     result = loader.loadTestsFromName(__name__)
