Merge lp:~jml/subunit/to-csv into lp:~subunit/subunit/trunk

Proposed by Jonathan Lange
Status: Merged
Merged at revision: 162
Proposed branch: lp:~jml/subunit/to-csv
Merge into: lp:~subunit/subunit/trunk
Diff against target: 630 lines (+465/-76)
6 files modified
filters/subunit-notify (+17/-38)
filters/subunit2csv (+23/-0)
filters/subunit2junitxml (+4/-38)
python/subunit/filters.py (+125/-0)
python/subunit/test_results.py (+98/-0)
python/subunit/tests/test_test_results.py (+198/-0)
To merge this branch: bzr merge lp:~jml/subunit/to-csv
Reviewer: Jelmer Vernooij
Status: Approve
Review via email: mp+90960@code.launchpad.net

Commit message

Add a CSV output filter.

Description of the change

Adds a CSV output filter.

The implementation approach was to make a generic result that just calls something whenever a test finishes, passing it everything useful. Then I copied and pasted subunit2junitxml to make the new filter script.
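
Roughly, driving that generic result looks like this (an illustrative sketch, not code from the branch; SampleTest and the callback are invented, the names otherwise match the TestByTestResult added below):

    import unittest

    from subunit.test_results import TestByTestResult

    class SampleTest(unittest.TestCase):
        # An invented test case, purely for illustration.
        def test_ok(self):
            pass

    finished = []

    def on_test(test, status, start_time, stop_time, tags, details):
        # Called once per completed test with everything useful about it.
        finished.append((test.id(), status, start_time, stop_time))

    result = TestByTestResult(on_test)
    result.startTestRun()
    unittest.TestLoader().loadTestsFromTestCase(SampleTest).run(result)
    result.stopTestRun()
    # 'finished' now holds one entry describing SampleTest.test_ok.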

Output looks like this:

 test,status,start_time,stop_time
 foo.bar.baz,success,2012-02-21 12:34:56.000000+00:00,2012-02-21 12:34:59.000000+00:00

I've added tests for most of it, but not for the filter script itself since subunit lacks such tests for all of its other filters.
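
If we did add one, an end-to-end test would probably look something like this sketch (illustrative only, not in the branch; it assumes the script is run from a source tree where filters/subunit2csv is executable):

    import subprocess

    # A minimal subunit v1 stream with a single passing test.
    stream = "test: foo\nsuccess: foo\n"
    proc = subprocess.Popen(
        ["./filters/subunit2csv", "--no-passthrough"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = proc.communicate(stream)
    # 'out' starts with the header row: test,status,start_time,stop_time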

Arguably, the TestByTestResult helper belongs in testtools. I've added it here since it creates value and means subunit users won't need another lock-step upgrade with testtools.

I'm mostly happy to factor out the common code from subunit2junitxml and subunit2csv, but writing tests would be a bit boring.

There's an edge case in addSkip that I don't know how to handle.
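
Concretely, the awkward case is something like this (illustrative only; SkippedTest is an invented test):

    import unittest

    from testtools.content import text_content

    from subunit.test_results import TestByTestResult

    class SkippedTest(unittest.TestCase):
        # An invented test, purely for illustration.
        def test_it(self):
            pass

    test = SkippedTest('test_it')
    result = TestByTestResult(lambda **kwargs: None)
    result.startTest(test)
    details = {'reason': text_content('reason supplied by the caller')}
    result.addSkip(test, reason='reason from the runner', details=details)
    # The caller's dict is mutated in place: details['reason'] now holds
    # 'reason from the runner', and the original entry is silently lost.
    result.stopTest(test)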

Note that if you pass the output to this R script (http://paste.ubuntu.com/824452/), you can create graphs like the ones in http://code.mumak.net/2010/11/boiling-kettles-unit-tests-and-data.html.

Thanks,
jml

Jelmer Vernooij (jelmer) wrote:

It would be nice to have the code that's common to subunit2junitxml and subunit2csv factored out. I'd rather see that happen (it's at least one step further in the right direction) than not have it moved into python/subunit/ at all.

Other than that this seems reasonable to me.

Jelmer Vernooij (jelmer) wrote:

I wonder if --no-passthrough perhaps shouldn't be the default in this case? It doesn't seem like the output can be very useful without it enabled.

lp:~jml/subunit/to-csv updated
152. By Jonathan Lange

Try to factor out the filter code.

153. By Jonathan Lange

More tweaking of boundaries.

154. By Jonathan Lange

More fiddling about.

155. By Jonathan Lange

Docstrings and renaming.

156. By Jonathan Lange

Factor out JUnitXML

157. By Jonathan Lange

Rename main() and give it a docstring.

158. By Jonathan Lange

Not XML

159. By Jonathan Lange

Try to reduce double negatives and be more explicit about what happens to
the forwarded input.

160. By Jonathan Lange

Add a post-run hook.

161. By Jonathan Lange

Factor out subunit-notify

Jonathan Lange (jml) wrote:

I've factored out the common code from subunit2csv, subunit2junitxml and subunit-notify. This should give us a good basis for factoring out the rest of the scripts, which all have slightly different options (I want to avoid making any behavioural changes in this branch other than adding subunit2csv).
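
To give a sense of the shape this leaves each script in, a new filter now boils down to roughly this (an invented example mirroring subunit-notify, not a script in the branch):

    #!/usr/bin/env python
    """Report how many tests were in a subunit stream."""

    import sys

    from subunit import TestResultStats
    from subunit.filters import run_filter_script

    def report(result):
        # Optional post-run hook: called with the populated result once the
        # stream has been consumed.
        sys.stderr.write("Total tests: %d\n" % result.total_tests)

    run_filter_script(TestResultStats, __doc__, report)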

As for --no-passthrough being the default, the same argument applies to subunit2junitxml, and now even the code is the same. I would like to avoid tackling that problem in this branch, since it spreads behavioural change to other scripts (unless I defactor filters.py a little), and since it would involve dealing with backwards compatibility issues, which are always tricky.

Jelmer Vernooij (jelmer) wrote:

> I've factored out the common code from subunit2csv, subunit2junitxml and
> subunit-notify. This should give us a good basis for factoring out the rest
> of the scripts, which all have slightly different options (I want to avoid
> making any behavioural changes in this branch other than adding subunit2csv).
Great.

> As for --no-passthrough being the default, the same argument applies to
> subunit2junitxml, and now even the code is the same. I would like to avoid
> tackling that problem in this branch, since it spreads behavioural change to
> other scripts (unless I defactor filters.py a little), and since it would
> involve dealing with backwards compatibility issues, which are always tricky.
I think it's wrong for --passthrough to be the default for subunit2junitxml too, but I'm happy to defer that fix until later since - as you say - it's a pre-existing issue.

Cheers,

Jelmer

review: Approve

Preview Diff

=== modified file 'filters/subunit-notify'
--- filters/subunit-notify 2010-01-25 15:45:45 +0000
+++ filters/subunit-notify 2012-03-27 11:21:22 +0000
@@ -16,50 +16,29 @@
 
 """Notify the user of a finished test run."""
 
-from optparse import OptionParser
-import sys
-
 import pygtk
 pygtk.require('2.0')
 import pynotify
 
-from subunit import DiscardStream, ProtocolTestCase, TestResultStats
+from subunit import TestResultStats
+from subunit.filters import run_filter_script
 
 if not pynotify.init("Subunit-notify"):
     sys.exit(1)
 
-parser = OptionParser(description=__doc__)
-parser.add_option("--no-passthrough", action="store_true",
-    help="Hide all non subunit input.", default=False, dest="no_passthrough")
-parser.add_option("-f", "--forward", action="store_true", default=False,
-    help="Forward subunit stream on stdout.")
-(options, args) = parser.parse_args()
-result = TestResultStats(sys.stdout)
-if options.no_passthrough:
-    passthrough_stream = DiscardStream()
-else:
-    passthrough_stream = None
-if options.forward:
-    forward_stream = sys.stdout
-else:
-    forward_stream = None
-test = ProtocolTestCase(sys.stdin, passthrough=passthrough_stream,
-    forward=forward_stream)
-test.run(result)
-if result.failed_tests > 0:
-    summary = "Test run failed"
-else:
-    summary = "Test run successful"
-body = "Total tests: %d; Passed: %d; Failed: %d" % (
-    result.total_tests,
-    result.passed_tests,
-    result.failed_tests,
-    )
-nw = pynotify.Notification(summary, body)
-nw.show()
+
+def notify_of_result(result):
+    if result.failed_tests > 0:
+        summary = "Test run failed"
+    else:
+        summary = "Test run successful"
+    body = "Total tests: %d; Passed: %d; Failed: %d" % (
+        result.total_tests,
+        result.passed_tests,
+        result.failed_tests,
+        )
+    nw = pynotify.Notification(summary, body)
+    nw.show()
 
-if result.wasSuccessful():
-    exit_code = 0
-else:
-    exit_code = 1
-sys.exit(exit_code)
+
+run_filter_script(TestResultStats, __doc__, notify_of_result)
=== added file 'filters/subunit2csv'
--- filters/subunit2csv 1970-01-01 00:00:00 +0000
+++ filters/subunit2csv 2012-03-27 11:21:22 +0000
@@ -0,0 +1,23 @@
+#!/usr/bin/env python
+# subunit: extensions to python unittest to get test results from subprocesses.
+# Copyright (C) 2009 Robert Collins <robertc@robertcollins.net>
+#
+# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
+# license at the users choice. A copy of both licenses are available in the
+# project source as Apache-2.0 and BSD. You may not use this file except in
+# compliance with one of these two licences.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# license you chose for the specific language governing permissions and
+# limitations under that license.
+#
+
+"""Turn a subunit stream into a CSV"""
+
+from subunit.filters import run_filter_script
+from subunit.test_results import CsvResult
+
+
+run_filter_script(CsvResult, __doc__)
=== modified file 'filters/subunit2junitxml'
--- filters/subunit2junitxml 2009-12-17 21:53:57 +0000
+++ filters/subunit2junitxml 2012-03-27 11:21:22 +0000
@@ -16,11 +16,10 @@
 
 """Filter a subunit stream to get aggregate statistics."""
 
-from optparse import OptionParser
+
 import sys
-import unittest
+from subunit.filters import run_filter_script
 
-from subunit import DiscardStream, ProtocolTestCase
 try:
     from junitxml import JUnitXmlResult
 except ImportError:
@@ -28,38 +27,5 @@
28 "http://pypi.python.org/pypi/junitxml) is required for this filter.")27 "http://pypi.python.org/pypi/junitxml) is required for this filter.")
29 raise28 raise
3029
31parser = OptionParser(description=__doc__)30
32parser.add_option("--no-passthrough", action="store_true",31run_filter_script(JUnitXmlResult, __doc__)
33 help="Hide all non subunit input.", default=False, dest="no_passthrough")
34parser.add_option("-o", "--output-to",
35 help="Output the XML to this path rather than stdout.")
36parser.add_option("-f", "--forward", action="store_true", default=False,
37 help="Forward subunit stream on stdout.")
38(options, args) = parser.parse_args()
39if options.output_to is None:
40 output_to = sys.stdout
41else:
42 output_to = file(options.output_to, 'wb')
43try:
44 result = JUnitXmlResult(output_to)
45 if options.no_passthrough:
46 passthrough_stream = DiscardStream()
47 else:
48 passthrough_stream = None
49 if options.forward:
50 forward_stream = sys.stdout
51 else:
52 forward_stream = None
53 test = ProtocolTestCase(sys.stdin, passthrough=passthrough_stream,
54 forward=forward_stream)
55 result.startTestRun()
56 test.run(result)
57 result.stopTestRun()
58finally:
59 if options.output_to is not None:
60 output_to.close()
61if result.wasSuccessful():
62 exit_code = 0
63else:
64 exit_code = 1
65sys.exit(exit_code)
6632
=== added file 'python/subunit/filters.py'
--- python/subunit/filters.py 1970-01-01 00:00:00 +0000
+++ python/subunit/filters.py 2012-03-27 11:21:22 +0000
@@ -0,0 +1,125 @@
+# subunit: extensions to python unittest to get test results from subprocesses.
+# Copyright (C) 2009 Robert Collins <robertc@robertcollins.net>
+#
+# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
+# license at the users choice. A copy of both licenses are available in the
+# project source as Apache-2.0 and BSD. You may not use this file except in
+# compliance with one of these two licences.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# license you chose for the specific language governing permissions and
+# limitations under that license.
+#
+
+
+from optparse import OptionParser
+import sys
+
+from subunit import DiscardStream, ProtocolTestCase
+
+
+def make_options(description):
+    parser = OptionParser(description=description)
+    parser.add_option(
+        "--no-passthrough", action="store_true",
+        help="Hide all non subunit input.", default=False,
+        dest="no_passthrough")
+    parser.add_option(
+        "-o", "--output-to",
+        help="Send the output to this path rather than stdout.")
+    parser.add_option(
+        "-f", "--forward", action="store_true", default=False,
+        help="Forward subunit stream on stdout.")
+    return parser
+
+
+def run_tests_from_stream(input_stream, result, passthrough_stream=None,
+    forward_stream=None):
+    """Run tests from a subunit input stream through 'result'.
+
+    :param input_stream: A stream containing subunit input.
+    :param result: A TestResult that will receive the test events.
+    :param passthrough_stream: All non-subunit input received will be
+        sent to this stream. If not provided, uses the ``TestProtocolServer``
+        default, which is ``sys.stdout``.
+    :param forward_stream: All subunit input received will be forwarded
+        to this stream. If not provided, uses the ``TestProtocolServer``
+        default, which is to not forward any input.
+    """
+    test = ProtocolTestCase(
+        input_stream, passthrough=passthrough_stream,
+        forward=forward_stream)
+    result.startTestRun()
+    test.run(result)
+    result.stopTestRun()
+
+
+def filter_by_result(result_factory, output_path, passthrough, forward,
+    input_stream=sys.stdin):
+    """Filter an input stream using a test result.
+
+    :param result_factory: A callable that when passed an output stream
+        returns a TestResult. It is expected that this result will output
+        to the given stream.
+    :param output_path: A path to send output to. If None, output will go
+        to ``sys.stdout``.
+    :param passthrough: If True, all non-subunit input will be sent to
+        ``sys.stdout``. If False, that input will be discarded.
+    :param forward: If True, all subunit input will be forwarded directly to
+        ``sys.stdout`` as well as to the ``TestResult``.
+    :param input_stream: The source of subunit input. Defaults to
+        ``sys.stdin``.
+    :return: A test result with the results of the run.
+    """
+    if passthrough:
+        passthrough_stream = sys.stdout
+    else:
+        passthrough_stream = DiscardStream()
+
+    if forward:
+        forward_stream = sys.stdout
+    else:
+        forward_stream = DiscardStream()
+
+    if output_path is None:
+        output_to = sys.stdout
+    else:
+        output_to = file(output_path, 'wb')
+
+    try:
+        result = result_factory(output_to)
+        run_tests_from_stream(
+            input_stream, result, passthrough_stream, forward_stream)
+    finally:
+        if output_path:
+            output_to.close()
+    return result
+
+
+def run_filter_script(result_factory, description, post_run_hook=None):
+    """Main function for simple subunit filter scripts.
+
+    Many subunit filter scripts take a stream of subunit input and use a
+    TestResult to handle the events generated by that stream. This function
+    wraps a lot of the boiler-plate around that by making a script with
+    options for handling passthrough information and stream forwarding, and
+    that will exit with a successful return code (i.e. 0) if the input stream
+    represents a successful test run.
+
+    :param result_factory: A callable that takes an output stream and returns
+        a test result that outputs to that stream.
+    :param description: A description of the filter script.
+    """
+    parser = make_options(description)
+    (options, args) = parser.parse_args()
+    result = filter_by_result(
+        result_factory, options.output_to, not options.no_passthrough,
+        options.forward)
+    if post_run_hook:
+        post_run_hook(result)
+    if result.wasSuccessful():
+        sys.exit(0)
+    else:
+        sys.exit(1)
=== modified file 'python/subunit/test_results.py'
--- python/subunit/test_results.py 2011-06-30 11:16:55 +0000
+++ python/subunit/test_results.py 2012-03-27 11:21:22 +0000
@@ -16,9 +16,14 @@
 
 """TestResult helper classes used to by subunit."""
 
+import csv
 import datetime
 
 import testtools
+from testtools.content import (
+    text_content,
+    TracebackContent,
+    )
 
 from subunit import iso8601
 
@@ -493,3 +498,96 @@
     def wasSuccessful(self):
         "Tells whether or not this result was a success"
         return self.failed_tests == 0
+
+
+class TestByTestResult(testtools.TestResult):
+    """Call something every time a test completes."""
+
+    # XXX: Arguably belongs in testtools.
+
+    def __init__(self, on_test):
+        """Construct a ``TestByTestResult``.
+
+        :param on_test: A callable that takes a test case, a status (one of
+            "success", "failure", "error", "skip", or "xfail"), a start time
+            (a ``datetime`` with timezone), a stop time, an iterable of tags,
+            and a details dict. Is called at the end of each test (i.e. on
+            ``stopTest``) with the accumulated values for that test.
+        """
+        super(TestByTestResult, self).__init__()
+        self._on_test = on_test
+
+    def startTest(self, test):
+        super(TestByTestResult, self).startTest(test)
+        self._start_time = self._now()
+        # There's no supported (i.e. tested) behaviour that relies on these
+        # being set, but it makes me more comfortable all the same. -- jml
+        self._status = None
+        self._details = None
+        self._stop_time = None
+
+    def stopTest(self, test):
+        self._stop_time = self._now()
+        super(TestByTestResult, self).stopTest(test)
+        self._on_test(
+            test=test,
+            status=self._status,
+            start_time=self._start_time,
+            stop_time=self._stop_time,
+            # current_tags is new in testtools 0.9.13.
+            tags=getattr(self, 'current_tags', None),
+            details=self._details)
+
+    def _err_to_details(self, test, err, details):
+        if details:
+            return details
+        return {'traceback': TracebackContent(err, test)}
+
+    def addSuccess(self, test, details=None):
+        super(TestByTestResult, self).addSuccess(test)
+        self._status = 'success'
+        self._details = details
+
+    def addFailure(self, test, err=None, details=None):
+        super(TestByTestResult, self).addFailure(test, err, details)
+        self._status = 'failure'
+        self._details = self._err_to_details(test, err, details)
+
+    def addError(self, test, err=None, details=None):
+        super(TestByTestResult, self).addError(test, err, details)
+        self._status = 'error'
+        self._details = self._err_to_details(test, err, details)
+
+    def addSkip(self, test, reason=None, details=None):
+        super(TestByTestResult, self).addSkip(test, reason, details)
+        self._status = 'skip'
+        if details is None:
+            details = {'reason': text_content(reason)}
+        elif reason:
+            # XXX: What if details already has 'reason' key?
+            details['reason'] = text_content(reason)
+        self._details = details
+
+    def addExpectedFailure(self, test, err=None, details=None):
+        super(TestByTestResult, self).addExpectedFailure(test, err, details)
+        self._status = 'xfail'
+        self._details = self._err_to_details(test, err, details)
+
+    def addUnexpectedSuccess(self, test, details=None):
+        super(TestByTestResult, self).addUnexpectedSuccess(test, details)
+        self._status = 'success'
+        self._details = details
+
+
+class CsvResult(TestByTestResult):
+
+    def __init__(self, stream):
+        super(CsvResult, self).__init__(self._on_test)
+        self._write_row = csv.writer(stream).writerow
+
+    def _on_test(self, test, status, start_time, stop_time, tags, details):
+        self._write_row([test.id(), status, start_time, stop_time])
+
+    def startTestRun(self):
+        super(CsvResult, self).startTestRun()
+        self._write_row(['test', 'status', 'start_time', 'stop_time'])
=== modified file 'python/subunit/tests/test_test_results.py'
--- python/subunit/tests/test_test_results.py 2011-04-25 20:32:49 +0000
+++ python/subunit/tests/test_test_results.py 2012-03-27 11:21:22 +0000
@@ -14,16 +14,25 @@
 # limitations under that license.
 #
 
+import csv
 import datetime
+import sys
 import unittest
 
 from testtools import TestCase
+from testtools.compat import StringIO
+from testtools.content import (
+    text_content,
+    TracebackContent,
+    )
 from testtools.testresult.doubles import ExtendedTestResult
 
 import subunit
 import subunit.iso8601 as iso8601
 import subunit.test_results
 
+import testtools
+
 
 class LoggingDecorator(subunit.test_results.HookedTestResultDecorator):
 
@@ -294,6 +303,195 @@
             ('stopTest', foo)], result._events)
 
 
+class TestByTestResultTests(testtools.TestCase):
+
+    def setUp(self):
+        super(TestByTestResultTests, self).setUp()
+        self.log = []
+        self.result = subunit.test_results.TestByTestResult(self.on_test)
+        self.result._now = iter(range(5)).next
+
+    def assertCalled(self, **kwargs):
+        defaults = {
+            'test': self,
+            'tags': set(),
+            'details': None,
+            'start_time': 0,
+            'stop_time': 1,
+            }
+        defaults.update(kwargs)
+        self.assertEqual([defaults], self.log)
+
+    def on_test(self, **kwargs):
+        self.log.append(kwargs)
+
+    def test_no_tests_nothing_reported(self):
+        self.result.startTestRun()
+        self.result.stopTestRun()
+        self.assertEqual([], self.log)
+
+    def test_add_success(self):
+        self.result.startTest(self)
+        self.result.addSuccess(self)
+        self.result.stopTest(self)
+        self.assertCalled(status='success')
+
+    def test_add_success_details(self):
+        self.result.startTest(self)
+        details = {'foo': 'bar'}
+        self.result.addSuccess(self, details=details)
+        self.result.stopTest(self)
+        self.assertCalled(status='success', details=details)
+
+    def test_tags(self):
+        if not getattr(self.result, 'tags', None):
+            self.skipTest("No tags in testtools")
+        self.result.tags(['foo'], [])
+        self.result.startTest(self)
+        self.result.addSuccess(self)
+        self.result.stopTest(self)
+        self.assertCalled(status='success', tags=set(['foo']))
+
+    def test_add_error(self):
+        self.result.startTest(self)
+        try:
+            1/0
+        except ZeroDivisionError:
+            error = sys.exc_info()
+        self.result.addError(self, error)
+        self.result.stopTest(self)
+        self.assertCalled(
+            status='error',
+            details={'traceback': TracebackContent(error, self)})
+
+    def test_add_error_details(self):
+        self.result.startTest(self)
+        details = {"foo": text_content("bar")}
+        self.result.addError(self, details=details)
+        self.result.stopTest(self)
+        self.assertCalled(status='error', details=details)
+
+    def test_add_failure(self):
+        self.result.startTest(self)
+        try:
+            self.fail("intentional failure")
+        except self.failureException:
+            failure = sys.exc_info()
+        self.result.addFailure(self, failure)
+        self.result.stopTest(self)
+        self.assertCalled(
+            status='failure',
+            details={'traceback': TracebackContent(failure, self)})
+
+    def test_add_failure_details(self):
+        self.result.startTest(self)
+        details = {"foo": text_content("bar")}
+        self.result.addFailure(self, details=details)
+        self.result.stopTest(self)
+        self.assertCalled(status='failure', details=details)
+
+    def test_add_xfail(self):
+        self.result.startTest(self)
+        try:
+            1/0
+        except ZeroDivisionError:
+            error = sys.exc_info()
+        self.result.addExpectedFailure(self, error)
+        self.result.stopTest(self)
+        self.assertCalled(
+            status='xfail',
+            details={'traceback': TracebackContent(error, self)})
+
+    def test_add_xfail_details(self):
+        self.result.startTest(self)
+        details = {"foo": text_content("bar")}
+        self.result.addExpectedFailure(self, details=details)
+        self.result.stopTest(self)
+        self.assertCalled(status='xfail', details=details)
+
+    def test_add_unexpected_success(self):
+        self.result.startTest(self)
+        details = {'foo': 'bar'}
+        self.result.addUnexpectedSuccess(self, details=details)
+        self.result.stopTest(self)
+        self.assertCalled(status='success', details=details)
+
+    def test_add_skip_reason(self):
+        self.result.startTest(self)
+        reason = self.getUniqueString()
+        self.result.addSkip(self, reason)
+        self.result.stopTest(self)
+        self.assertCalled(
+            status='skip', details={'reason': text_content(reason)})
+
+    def test_add_skip_details(self):
+        self.result.startTest(self)
+        details = {'foo': 'bar'}
+        self.result.addSkip(self, details=details)
+        self.result.stopTest(self)
+        self.assertCalled(status='skip', details=details)
+
+    def test_twice(self):
+        self.result.startTest(self)
+        self.result.addSuccess(self, details={'foo': 'bar'})
+        self.result.stopTest(self)
+        self.result.startTest(self)
+        self.result.addSuccess(self)
+        self.result.stopTest(self)
+        self.assertEqual(
+            [{'test': self,
+              'status': 'success',
+              'start_time': 0,
+              'stop_time': 1,
+              'tags': set(),
+              'details': {'foo': 'bar'}},
+             {'test': self,
+              'status': 'success',
+              'start_time': 2,
+              'stop_time': 3,
+              'tags': set(),
+              'details': None},
+             ],
+            self.log)
+
+
+class TestCsvResult(testtools.TestCase):
+
+    def parse_stream(self, stream):
+        stream.seek(0)
+        reader = csv.reader(stream)
+        return list(reader)
+
+    def test_csv_output(self):
+        stream = StringIO()
+        result = subunit.test_results.CsvResult(stream)
+        result._now = iter(range(5)).next
+        result.startTestRun()
+        result.startTest(self)
+        result.addSuccess(self)
+        result.stopTest(self)
+        result.stopTestRun()
+        self.assertEqual(
+            [['test', 'status', 'start_time', 'stop_time'],
+             [self.id(), 'success', '0', '1'],
+             ],
+            self.parse_stream(stream))
+
+    def test_just_header_when_no_tests(self):
+        stream = StringIO()
+        result = subunit.test_results.CsvResult(stream)
+        result.startTestRun()
+        result.stopTestRun()
+        self.assertEqual(
+            [['test', 'status', 'start_time', 'stop_time']],
+            self.parse_stream(stream))
+
+    def test_no_output_before_events(self):
+        stream = StringIO()
+        subunit.test_results.CsvResult(stream)
+        self.assertEqual([], self.parse_stream(stream))
+
+
 def test_suite():
     loader = subunit.tests.TestUtil.TestLoader()
     result = loader.loadTestsFromName(__name__)
