Merge lp:~benji/lpbuildbot/add-worker-logging into lp:lpbuildbot
| Status: | Merged |
|---|---|
| Approved by: | Gary Poster on 2012-04-27 |
| Approved revision: | 30 |
| Merged at revision: | 23 |
| Proposed branch: | lp:~benji/lpbuildbot/add-worker-logging |
| Merge into: | lp:lpbuildbot |
| Diff against target: | 656 lines (+496/-115), 3 files modified: bzrbuildbot/subunittest.py (+482/-89), bzrbuildbot/test.py (+13/-21), master.cfg (+1/-5) |
| To merge this branch: | bzr merge lp:~benji/lpbuildbot/add-worker-logging |
| Related bugs: |
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Gary Poster (community) | | 2012-04-27 | Approve on 2012-04-27 |
Commit Message
revamp the subunit log observer to provide several logs, statistics, step description, and progress metrics
Description of the Change
This branch revamps the subunit log observer to provide several logs
(along with the standard stdio log):
- a log per worker which contains the IDs of the tests that worker ran
(for re-running that subset later)
- a log of just the subunit output (as best we can get it; there may be
a few stray lines here and there)
- a "summary" log of just the failures and errors
The step's description is continuously updated to show the number of
tests run, passed, failed, and errored, as well as whether or not the
tests have finished yet.
Finally, the number of tests run thus far is reported to the buildbot
progress metrics subsystem so test run ETAs are more accurate.
There is good test coverage, runnable with the command "python
bzrbuildbot/
This branch has been run in a "production" EC2 instance and worked well.
Note that there were no errors in the production run, so there were a
few unexercised code paths, but I believe those are well covered by the
tests.
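As a rough illustration of the statistics-and-description idea described above, here is a minimal, hypothetical stand-in written against a subunit-v1-style text stream. The real observer delegates parsing to python-subunit's TestProtocolServer and reports through Buildbot's step status; this sketch only recognizes the bare result lines and builds the same kind of description string:

```python
# Simplified sketch, NOT the real observer: it ignores tags, layers,
# and multi-line result details, and only counts top-level result lines.
class MiniObserver:
    def __init__(self):
        self.test_count = 0
        self.failure_count = 0
        self.error_count = 0

    def line_received(self, line):
        # Count every completed test; failures and errors also bump
        # their own tallies.
        if line.startswith(('success:', 'successful:')):
            self.test_count += 1
        elif line.startswith('failure:'):
            self.test_count += 1
            self.failure_count += 1
        elif line.startswith('error:'):
            self.test_count += 1
            self.error_count += 1

    def describe(self, done=False):
        """Build a human-readable status list, like the step description."""
        if not self.test_count:
            return ['no test results']
        passed = self.test_count - self.failure_count - self.error_count
        parts = ['%d tests' % self.test_count]
        if passed:
            parts.append('%d passed' % passed)
        if self.failure_count:
            parts.append('%d failures' % self.failure_count)
        if self.error_count:
            parts.append('%d errors' % self.error_count)
        if not done:
            parts.append('(still running)')
        return parts


obs = MiniObserver()
for line in ['test: foo', 'success: foo', 'test: bar', 'failure: bar [', ']']:
    obs.line_received(line)
print(obs.describe())
```

This prints `['2 tests', '1 passed', '1 failures', '(still running)']`; with `done=True` the trailing "(still running)" marker is dropped, matching the behavior of the `describe` helper in the diff below.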
Benji York (benji) wrote:
All excellent points/suggestions. I will have very little time between
now and Monday night so I suggest landing it as-is and making cards for
the follow-up bits.
Preview Diff
| 1 | === modified file 'bzrbuildbot/subunittest.py' |
| 2 | --- bzrbuildbot/subunittest.py 2012-04-13 17:02:46 +0000 |
| 3 | +++ bzrbuildbot/subunittest.py 2012-04-27 20:56:18 +0000 |
| 4 | @@ -1,98 +1,491 @@ |
| 5 | import re |
| 6 | +import subunit |
| 7 | +import testtools |
| 8 | import unittest |
| 9 | |
| 10 | -import buildbot.process.buildstep |
| 11 | - |
| 12 | - |
| 13 | -class SubunitObserver(buildbot.process.buildstep.LogLineObserver): |
| 14 | - should_ignore = re.compile( |
| 15 | - r'^(successful|skip): .+:(setUp|tearDown)').match |
| 16 | - |
| 17 | - matchers = ( |
| 18 | - ('error_count', re.compile(r'^error: ').match), |
| 19 | - ('failure_count', re.compile(r'^failure: ').match), |
| 20 | - ('test_count', re.compile(r'^successful: ').match), |
| 21 | - ) |
| 22 | - |
| 23 | - test_count = 0 |
| 24 | - failure_count = 0 |
| 25 | - error_count = 0 |
| 26 | + |
| 27 | +class SubunitObserver(testtools.TestResult): |
| 28 | + """Watch a subunit stream and report interesting things about it. |
| 29 | + |
| 30 | + Test statistics (number of tests run, number of failures, etc. are |
| 31 | + reported as well as a summary of errors and failures (see addError and |
| 32 | + addFailure) and per-worker listings of the tests run so test-ordering and |
| 33 | + inter-test effects can be reproduced later. |
| 34 | + """ |
| 35 | + |
| 36 | + def __init__(self, step=None): |
| 37 | + testtools.TestResult.__init__(self) |
| 38 | + if step is not None: |
| 39 | + self.step = step |
| 40 | + self.logs = {} |
| 41 | + self.started = False |
| 42 | + self.real_test_count = 0 |
| 43 | + self.failure_count = 0 |
| 44 | + self.error_count = 0 |
| 45 | + # We will pass all incoming lines (which are a subunit stream) to the |
| 46 | + # TestProtocolServer which will in turn call the corresponding |
| 47 | + # TestResult methods on self (we are a TestResult subclass). |
| 48 | + self.protocol = subunit.TestProtocolServer( |
| 49 | + self, subunit.DiscardStream()) |
| 50 | + |
| 51 | + def log(self, name, line): |
| 52 | + """Log a line to the named log.""" |
| 53 | + if name not in self.logs: |
| 54 | + self.logs[name] = self.step.addLog(name) |
| 55 | + self.logs[name].addStdout(line + '\n') |
| 56 | + |
| 57 | + ### Begin Buildbot LogLineObserver methods. |
| 58 | |
| 59 | def outLineReceived(self, line): |
| 60 | - # XXX 2012-04-13 frankban bug=949950: |
| 61 | - # This method matches subunit output. The implementation of |
| 62 | - # stats collection is blocked until `testr run --parallel` |
| 63 | - # will output the subunit stream. |
| 64 | - if self.should_ignore(line): |
| 65 | - return |
| 66 | - for attr, matcher in self.matchers: |
| 67 | - if matcher(line): |
| 68 | - # Increment the corresponding statistic. |
| 69 | - setattr(self, attr, getattr(self, attr) + 1) |
| 70 | + """Process a line of the incoming subunit stream.""" |
| 71 | + # Impedance mismatch: subunit wants lines, observers get lines-no\n |
| 72 | + self.protocol.lineReceived(line + '\n') |
| 73 | + if self.started: |
| 74 | + self.log('subunit', line) |
| 75 | + |
| 76 | + ### End Buildbot LogLineObserver methods |
| 77 | + |
| 78 | + def current_worker_id(self): |
| 79 | + """Return the current worker ID tag value.""" |
| 80 | + for tag in self.current_tags: |
| 81 | + if tag.startswith('worker-'): |
| 82 | + worker_id = tag |
| 83 | break |
| 84 | - |
| 85 | - |
| 86 | -# Tests to verify that the subunit protocol is understood. |
| 87 | - |
| 88 | -class TestSubunitCounting(unittest.TestCase): |
| 89 | - |
| 90 | - def test_error(self): |
| 91 | - line = 'error: foo.bar.baz [ multipart' |
| 92 | - observer = SubunitObserver() |
| 93 | - self.assertEqual(observer.error_count, 0) |
| 94 | - observer.outLineReceived(line) |
| 95 | - self.assertEqual(observer.error_count, 1) |
| 96 | - |
| 97 | - def test_failure(self): |
| 98 | - line = 'failure: foo.bar.baz [ multipart' |
| 99 | - observer = SubunitObserver() |
| 100 | - self.assertEqual(observer.failure_count, 0) |
| 101 | - observer.outLineReceived(line) |
| 102 | - self.assertEqual(observer.failure_count, 1) |
| 103 | - |
| 104 | - def test_success(self): |
| 105 | - line = 'successful: foo.bar.baz' |
| 106 | - observer = SubunitObserver() |
| 107 | - self.assertEqual(observer.test_count, 0) |
| 108 | - observer.outLineReceived(line) |
| 109 | - self.assertEqual(observer.test_count, 1) |
| 110 | - |
| 111 | - def test_ignore_setUp(self): |
| 112 | - # Layer setup is reported as a test with a weird name. They are |
| 113 | - # detected and not counted in the statistics. |
| 114 | - line = 'successful: lp.testing.layers.ZopelessAppServerLayer:setUp' |
| 115 | - observer = SubunitObserver() |
| 116 | - self.assertEqual(observer.test_count, 0) |
| 117 | - observer.outLineReceived(line) |
| 118 | - self.assertEqual(observer.test_count, 0) |
| 119 | - |
| 120 | - def test_ignore_tearDown(self): |
| 121 | - # Layer tearDown is reported as a test with a weird name. They are |
| 122 | - # detected and not counted in the statistics. |
| 123 | - line = 'successful: lp.testing.layers.ZopelessAppServerLayer:tearDown' |
| 124 | - observer = SubunitObserver() |
| 125 | - self.assertEqual(observer.test_count, 0) |
| 126 | - observer.outLineReceived(line) |
| 127 | - self.assertEqual(observer.test_count, 0) |
| 128 | - |
| 129 | - def test_ignore_tearDown_failure(self): |
| 130 | - # Layer tearDown failure is reported as a skipped test with a weird |
| 131 | - # name. They are detected and not counted in the statistics. |
| 132 | - line = 'skip: lp.testing.layers.ZopelessLayer:tearDown [' |
| 133 | - observer = SubunitObserver() |
| 134 | - self.assertEqual(observer.test_count, 0) |
| 135 | - observer.outLineReceived(line) |
| 136 | - self.assertEqual(observer.test_count, 0) |
| 137 | - |
| 138 | - def test_ignore_false_positives(self): |
| 139 | - # Just because a line has subunit-like text in it doesn't mean it's |
| 140 | - # matched. |
| 141 | - line = 'this is not successful: Betamax' |
| 142 | - observer = SubunitObserver() |
| 143 | - self.assertEqual(observer.test_count, 0) |
| 144 | - observer.outLineReceived(line) |
| 145 | - self.assertEqual(observer.test_count, 0) |
| 146 | - |
| 147 | + else: |
| 148 | + worker_id = 'unknown worker (bug in our subunit output?)' |
| 149 | + |
| 150 | + return worker_id |
| 151 | + |
| 152 | + sep1 = u'=' * 70 + '\n' |
| 153 | + sep2 = u'-' * 70 + '\n' |
| 154 | + |
| 155 | + def format_error(self, label, test_id, error_text): |
| 156 | + """Format an error/failure for inclusion in the log.""" |
| 157 | + return u''.join([ |
| 158 | + self.sep1, |
| 159 | + '%s: %s\n' % (label, test_id), |
| 160 | + 'worker ID: %s\n' % self.current_worker_id(), |
| 161 | + self.sep2, |
| 162 | + error_text, |
| 163 | + ]) |
| 164 | + |
| 165 | + ### Begin TestResult methods. |
| 166 | + def time(self, test): |
| 167 | + self.started = True |
| 168 | + testtools.TestResult.time(self, test) |
| 169 | + |
| 170 | + def startTest(self, test): |
| 171 | + self.started = True |
| 172 | + testtools.TestResult.startTest(self, test) |
| 173 | + |
| 174 | + def stopTest(self, test): |
| 175 | + if ('zope:layer' not in self.current_tags and |
| 176 | + 'zope:info_suboptimal' not in self.current_tags): |
| 177 | + self.log(self.current_worker_id(), test.id()) |
| 178 | + self.real_test_count += 1 |
| 179 | + self.step.setProgress('tests', self.real_test_count) |
| 180 | + self.step.step_status.setStatistic( |
| 181 | + 'tests-total', self.real_test_count) |
| 182 | + testtools.TestResult.stopTest(self, test) |
| 183 | + self.step.step_status.setText(self.step.describe(False)) |
| 184 | + |
| 185 | + def addError(self, test, err): |
| 186 | + testtools.TestResult.addError(self, test, err) |
| 187 | + self.log( |
| 188 | + 'summary', self.format_error('ERROR', test.id(), str(err[1]))) |
| 189 | + self.error_count += 1 |
| 190 | + self.step.step_status.setStatistic('tests-errors', self.error_count) |
| 191 | + |
| 192 | + def addFailure(self, test, err): |
| 193 | + testtools.TestResult.addFailure(self, test, err) |
| 194 | + self.log( |
| 195 | + 'summary', self.format_error('FAILURE', test.id(), str(err[1]))) |
| 196 | + self.failure_count += 1 |
| 197 | + self.step.step_status.setStatistic( |
| 198 | + 'tests-failures', self.failure_count) |
| 199 | + |
| 200 | + ### End TestResult methods. |
| 201 | + |
| 202 | + |
| 203 | +def describe(step_status, base_description=None, done=False): |
| 204 | + if base_description is None: |
| 205 | + description = [] |
| 206 | + else: |
| 207 | + description = base_description[:] |
| 208 | + if step_status.hasStatistic('tests-total'): |
| 209 | + total = step_status.getStatistic("tests-total", 0) |
| 210 | + failures = step_status.getStatistic("tests-failures", 0) |
| 211 | + errors = step_status.getStatistic("tests-errors", 0) |
| 212 | + passed = total - (failures + errors) |
| 213 | + |
| 214 | + if total: |
| 215 | + description.append('%d tests' % total) |
| 216 | + if passed: |
| 217 | + description.append('%d passed' % passed) |
| 218 | + if failures: |
| 219 | + description.append('%d failures' % failures) |
| 220 | + if errors: |
| 221 | + description.append('%d errors' % errors) |
| 222 | + if not done: |
| 223 | + description.append('(still running)') |
| 224 | + else: |
| 225 | + description.append('no test results') |
| 226 | + return description |
| 227 | + |
| 228 | + |
| 229 | +# Tests of the above start here. |
| 230 | + |
| 231 | + |
| 232 | +class LogLogger(object): |
| 233 | + def __init__(self, log): |
| 234 | + self.log = log |
| 235 | + |
| 236 | + def addStdout(self, line): |
| 237 | + self.log.append(line) |
| 238 | + |
| 239 | + |
| 240 | +class FauxStepStatus(object): |
| 241 | + def __init__(self, stats=None, text=None): |
| 242 | + if stats is None: |
| 243 | + stats = {} |
| 244 | + self.stats = stats |
| 245 | + if text is None: |
| 246 | + text = [] |
| 247 | + self.text = text |
| 248 | + |
| 249 | + def setStatistic(self, name, value): |
| 250 | + self.stats[name] = value |
| 251 | + |
| 252 | + def hasStatistic(self, name): |
| 253 | + return name in self.stats |
| 254 | + |
| 255 | + def getStatistic(self, name, default=None): |
| 256 | + return self.stats.get(name, default) |
| 257 | + |
| 258 | + def setText(self, text): |
| 259 | + self.text.append(text) |
| 260 | + |
| 261 | + |
| 262 | +class FauxStep(object): |
| 263 | + def __init__(self): |
| 264 | + self.progress = [] |
| 265 | + self.logs = {} |
| 266 | + self.stats = {} |
| 267 | + self.text = [] |
| 268 | + self.step_status = FauxStepStatus(self.stats, self.text) |
| 269 | + |
| 270 | + def setProgress(self, name, value): |
| 271 | + self.progress.append((name, value)) |
| 272 | + |
| 273 | + def addLog(self, name): |
| 274 | + assert name not in self.logs, 'log already exists' |
| 275 | + self.logs[name] = [] |
| 276 | + return LogLogger(self.logs[name]) |
| 277 | + |
| 278 | + def describe(self, done=False): |
| 279 | + return describe(self.step_status, [], done) |
| 280 | + |
| 281 | + |
| 282 | +class TestSubunitObserverStatistics(unittest.TestCase): |
| 283 | + """Verify that test statistics are reported. |
| 284 | + |
| 285 | + SubunitObserver has a step object which has a setProgress method that |
| 286 | + accepts a statistic name and an updated value for that statistic. |
| 287 | + |
| 288 | + There are two ways a SubunitObserver can get access to the step, either |
| 289 | + having it passed into its constructor (as we do in most of the tests) or |
| 290 | + by having the buildbot set it on the instance after construction (as |
| 291 | + Buildbot does it). |
| 292 | + """ |
| 293 | + |
| 294 | + def test_step_provided(self): |
| 295 | + # When creating a SubunitObserver a step can be passed in. |
| 296 | + step = FauxStep() |
| 297 | + observer = SubunitObserver(step) |
| 298 | + # The step got stashed in the right place. |
| 299 | + self.assertIs(step, observer.step) |
| 300 | + |
| 301 | + def test_real_tests_counted(self): |
| 302 | + # When a "real" test ends, the "tests" statistic is updated. Tests |
| 303 | + # are considered "real" if they don't represent one of the zope:layer |
| 304 | + # or zope:info_suboptimal-tagged tests. |
| 305 | + observer = SubunitObserver(FauxStep()) |
| 306 | + observer.outLineReceived('test: foo') |
| 307 | + observer.outLineReceived('success: foo') |
| 308 | + # The progress indicator is told about the test run. |
| 309 | + self.assertEqual([('tests', 1)], observer.step.progress) |
| 310 | + # As well as the step statistics. |
| 311 | + self.assertIn('tests-total', observer.step.stats) |
| 312 | + self.assertEqual(1, observer.step.stats['tests-total']) |
| 313 | + |
| 314 | + def test_nonreal_tests_not_counted(self): |
| 315 | + # When a "non-real" test ends, the "tests" statistic is not updated. |
| 316 | + # Tests are considered "non-real" if they represent one of the |
| 317 | + # zope:layer or zope:info_suboptimal-tagged tests. |
| 318 | + observer = SubunitObserver(FauxStep()) |
| 319 | + observer.outLineReceived('test: foo') |
| 320 | + observer.outLineReceived('tags: zope:layer') |
| 321 | + observer.outLineReceived('success: foo') |
| 322 | + observer.outLineReceived('test: bar') |
| 323 | + observer.outLineReceived('tags: zope:info_suboptimal') |
| 324 | + observer.outLineReceived('success: bar') |
| 325 | + # The test is not reported to the progress counter or statistics. |
| 326 | + self.assertEqual([], observer.step.progress) |
| 327 | + self.assertEqual({}, observer.step.stats) |
| 328 | + |
| 329 | + def test_failures_counted(self): |
| 330 | + # When a new test failure is presented, the statistics are updated. |
| 331 | + observer = SubunitObserver(FauxStep()) |
| 332 | + observer.outLineReceived('test: foo') |
| 333 | + observer.outLineReceived('failure: foo [') |
| 334 | + observer.outLineReceived(']') |
| 335 | + # The failing test is counted. |
| 336 | + self.assertIn('tests-failures', observer.step.stats) |
| 337 | + self.assertEqual(1, observer.step.stats['tests-failures']) |
| 338 | + |
| 339 | + def test_errors_counted(self): |
| 340 | + # When a new test error is presented, the "tests failed" statistic |
| 341 | + # is updated. |
| 342 | + observer = SubunitObserver(FauxStep()) |
| 343 | + observer.outLineReceived('test: foo') |
| 344 | + observer.outLineReceived('error: foo [') |
| 345 | + observer.outLineReceived(']') |
| 346 | + # The failing test is counted. |
| 347 | + self.assertIn('tests-errors', observer.step.stats) |
| 348 | + self.assertEqual(1, observer.step.stats['tests-errors']) |
| 349 | + |
| 350 | + def test_statistics_summarized(self): |
| 351 | + # When tests finish the step statistic's summary is updated. |
| 352 | + observer = SubunitObserver(FauxStep()) |
| 353 | + observer.outLineReceived('test: foo') |
| 354 | + observer.outLineReceived('failure: foo [') |
| 355 | + observer.outLineReceived(']') |
| 356 | + observer.outLineReceived('test: bar') |
| 357 | + observer.outLineReceived('success: bar') |
| 358 | + self.assertSequenceEqual( |
| 359 | + [['1 tests', '1 failures', '(still running)'], |
| 360 | + ['2 tests', '1 passed', '1 failures', '(still running)']], |
| 361 | + observer.step.text) |
| 362 | + # We don't see a final update without the "(still running)" bit |
| 363 | + # because a final "describe" is called by Buildbot itself. |
| 364 | + |
| 365 | + |
| 366 | +class TestSubunitObserverWorkerLogging(unittest.TestCase): |
| 367 | + """Tests run in each worker are logged to a per-worker log. |
| 368 | + |
| 369 | + Tests are run in many workers and then aggregated. Sometimes test |
| 370 | + interactions cause test failures. In that case we want to be able to |
| 371 | + reproduce a test run of just the tests executed by a particular worker. |
| 372 | + To facilitate that, we log each worker's test IDs to their own log. |
| 373 | + """ |
| 374 | + |
| 375 | + def test_worker_logs_contain_test_ids(self): |
| 376 | + # If a test is tagged as having been run by a particular worker, |
| 377 | + # the test ID is added to that worker's log. |
| 378 | + step = FauxStep() |
| 379 | + observer = SubunitObserver(step) |
| 380 | + observer.outLineReceived('test: foo') |
| 381 | + observer.outLineReceived('tags: worker-99') |
| 382 | + observer.outLineReceived('success: foo') |
| 383 | + # The worker had a log created just for it. |
| 384 | + self.assertIn('worker-99', step.logs) |
| 385 | + self.assertEqual(['foo\n'], step.logs['worker-99']) |
| 386 | + |
| 387 | + def test_ignore_fake_test_layer_tests(self): |
| 388 | + # Our subunit stream contains details about layer setUp masquerading |
| 389 | + # as real tests. We do not want to report those fake tests in the |
| 390 | + # per-worker list of tests run. |
| 391 | + step = FauxStep() |
| 392 | + observer = SubunitObserver(step) |
| 393 | + observer.outLineReceived('test: foo') |
| 394 | + observer.outLineReceived('tags: worker-99 zope:layer') |
| 395 | + observer.outLineReceived('success: foo') |
| 396 | + # The worker did not have a log created, because layer "tests" are |
| 397 | + # ignored. |
| 398 | + self.assertNotIn('worker-99', step.logs) |
| 399 | + |
| 400 | + def test_ignore_suboptimal_tests(self): |
| 401 | + # Our subunit stream contains details about starting subprocesses |
| 402 | + # masquerading as real tests. We do not want to report those fake |
| 403 | + # tests in the per-worker list of tests run. |
| 404 | + step = FauxStep() |
| 405 | + observer = SubunitObserver(step) |
| 406 | + observer.outLineReceived('test: foo') |
| 407 | + observer.outLineReceived('tags: worker-99 zope:info_suboptimal') |
| 408 | + observer.outLineReceived('success: foo') |
| 409 | + # The worker did not have a log created, because layer "tests" are |
| 410 | + # ignored. |
| 411 | + self.assertNotIn('worker-99', step.logs) |
| 412 | + |
| 413 | + |
| 414 | +class TestSubunitObserverStdoutLogging(unittest.TestCase): |
| 415 | + """All subunit output is logged in the "subunit" log.""" |
| 416 | + |
| 417 | + def test_subunit_logged(self): |
| 418 | + step = FauxStep() |
| 419 | + observer = SubunitObserver(step) |
| 420 | + observer.outLineReceived('test: foo') |
| 421 | + observer.outLineReceived('tags: worker-99 zope:info_suboptimal') |
| 422 | + observer.outLineReceived('success: foo') |
| 423 | + # A stdout log was created... |
| 424 | + self.assertIn('subunit', step.logs) |
| 425 | + # ...and it contains the lines we fed in. |
| 426 | + self.assertSequenceEqual( |
| 427 | + ['test: foo\n', |
| 428 | + 'tags: worker-99 zope:info_suboptimal\n', |
| 429 | + 'success: foo\n'], |
| 430 | + step.logs['subunit']) |
| 431 | + |
| 432 | + def test_nonsubunit_not_logged(self): |
| 433 | + # All output is ignored until subunit output starts. |
| 434 | + step = FauxStep() |
| 435 | + observer = SubunitObserver(step) |
| 436 | + observer.outLineReceived('foo') |
| 437 | + observer.outLineReceived('bar') |
| 438 | + observer.outLineReceived('time: 2000-01-01T00:00:00') |
| 439 | + observer.outLineReceived('test: foo') |
| 440 | + # The subunit output was written to the "subunit" log. |
| 441 | + self.assertSequenceEqual( |
| 442 | + ['time: 2000-01-01T00:00:00\n', 'test: foo\n'], |
| 443 | + step.logs['subunit']) |
| 444 | + |
| 445 | + |
| 446 | +class TestSubunitObserverSummaryLogging(unittest.TestCase): |
| 447 | + """All test errors and failures are logged in the "summary" log.""" |
| 448 | + |
| 449 | + def test_successful_tests_produce_no_summary(self): |
| 450 | + # The summary log only contains bad things, successes aren't bad. |
| 451 | + step = FauxStep() |
| 452 | + observer = SubunitObserver(step) |
| 453 | + observer = SubunitObserver(FauxStep()) |
| 454 | + observer.outLineReceived('test: foo') |
| 455 | + observer.outLineReceived('successes: foo') |
| 456 | + # A summary log has not been created yet because there are no failures |
| 457 | + # to report. |
| 458 | + self.assertNotIn('summary', step.logs) |
| 459 | + |
| 460 | + def test_failing_tests_are_reported(self): |
| 461 | + # The summary log contains bad things, a test failure is bad. |
| 462 | + step = FauxStep() |
| 463 | + observer = SubunitObserver(step) |
| 464 | + observer.outLineReceived('test: foo') |
| 465 | + observer.outLineReceived('failure: foo [') |
| 466 | + observer.outLineReceived(']') |
| 467 | + self.assertIn('summary', step.logs) |
| 468 | + self.assertIn('FAILURE: foo\n', step.logs['summary'][0]) |
| 469 | + |
| 470 | + def test_tests_with_errors_are_reported(self): |
| 471 | + # The summary log contains bad things, a test error is bad. |
| 472 | + step = FauxStep() |
| 473 | + observer = SubunitObserver(step) |
| 474 | + observer.outLineReceived('test: foo') |
| 475 | + observer.outLineReceived('error: foo [') |
| 476 | + observer.outLineReceived(']') |
| 477 | + self.assertIn('summary', step.logs) |
| 478 | + self.assertIn('ERROR: foo\n', step.logs['summary'][0]) |
| 479 | + |
| 480 | + def test_failure_details_are_reported(self): |
| 481 | + # The details provided with a test failure are logged. |
| 482 | + step = FauxStep() |
| 483 | + observer = SubunitObserver(step) |
| 484 | + observer.outLineReceived('test: foo') |
| 485 | + observer.outLineReceived('error: foo [') |
| 486 | + observer.outLineReceived('DETAILS') |
| 487 | + observer.outLineReceived(']') |
| 488 | + self.assertIn('summary', step.logs) |
| 489 | + self.assertIn('DETAILS', step.logs['summary'][0]) |
| 490 | + |
| 491 | + |
| 492 | +class TestSubunitObserverErrorFormatting(unittest.TestCase): |
| 493 | + """The format_error method is used to make presentable errors.""" |
| 494 | + |
| 495 | + def test_formatted_errors_contain_the_worker_id(self): |
| 496 | + observer = SubunitObserver(FauxStep()) |
| 497 | + observer.outLineReceived('test: foo') |
| 498 | + observer.outLineReceived('tags: worker-99') |
| 499 | + self.assertIn('worker-99', observer.format_error('', '', '')) |
| 500 | + |
| 501 | + |
| 502 | +class TestWorkerTag(unittest.TestCase): |
| 503 | + """Each test should be tagged with the ID of the worker that ran it.""" |
| 504 | + |
| 505 | + def test_current_worker_id_finds_tag(self): |
| 506 | + # If the currently running test has a worker ID, current_worker_id |
| 507 | + # will return it. |
| 508 | + observer = SubunitObserver(FauxStep()) |
| 509 | + observer.outLineReceived('test: foo') |
| 510 | + observer.outLineReceived('tags: worker-99') |
| 511 | + self.assertEqual('worker-99', observer.current_worker_id()) |
| 512 | + |
| 513 | + def test_current_worker_id_complains_when_no_worker_tag_is_present(self): |
| 514 | + # If the currently running test does not have a worker ID, |
| 515 | + # current_worker_id will return a fake ID that includes a hint that |
| 516 | + # there may be a bug. |
| 517 | + observer = SubunitObserver(FauxStep()) |
| 518 | + observer.outLineReceived('test: foo') |
| 519 | + self.assertEqual( |
| 520 | + 'unknown worker (bug in our subunit output?)', |
| 521 | + observer.current_worker_id()) |
| 522 | + |
| 523 | + |
| 524 | +class TestDescribe(unittest.TestCase): |
| 525 | + """The "describe" function describes the current test run.""" |
| 526 | + |
| 527 | + def test_bland_description(self): |
| 528 | + # If nothing has happened yet, then the description doesn't say much. |
| 529 | + status = FauxStepStatus() |
| 530 | + self.assertEqual(['no test results'], describe(status)) |
| 531 | + |
| 532 | + def test_not_done(self): |
| 533 | + # If some tests have run but they aren't done yet, the description |
| 534 | + # says so. |
| 535 | + status = FauxStepStatus() |
| 536 | + status.setStatistic('tests-total', 42) |
| 537 | + self.assertIn('(still running)', describe(status)) |
| 538 | + |
| 539 | + def test_done(self): |
| 540 | + # If the tests have run and are all done, the description doesn't say |
| 541 | + # that they're still running. |
| 542 | + status = FauxStepStatus() |
| 543 | + status.setStatistic('tests-total', 42) |
| 544 | + self.assertNotIn('(still running)', describe(status, done=True)) |
| 545 | + |
| 546 | + def test_failing_tests_described(self): |
| 547 | + # If some tests have failed, they are included in the description. |
| 548 | + status = FauxStepStatus() |
| 549 | + status.setStatistic('tests-total', 42) |
| 550 | + status.setStatistic('tests-failures', 3) |
| 551 | + self.assertIn('3 failures', describe(status)) |
| 552 | + |
| 553 | + def test_erroring_tests_described(self): |
| 554 | + # If some tests have generated errors, they are included in the |
| 555 | + # description. |
| 556 | + status = FauxStepStatus() |
| 557 | + status.setStatistic('tests-total', 42) |
| 558 | + status.setStatistic('tests-errors', 6) |
| 559 | + self.assertIn('6 errors', describe(status)) |
| 560 | + |
| 561 | + def test_base_description_is_left_alone(self): |
| 562 | + # The base description passed into "describe" isn't mutated. |
| 563 | + status = FauxStepStatus() |
| 564 | + status.setStatistic('tests-total', 42) |
| 565 | + status.setStatistic('tests-errors', 6) |
| 566 | + base_description = ['base'] |
| 567 | + describe(status, base_description) |
| 568 | + # Even though a description was generated the base description wasn't |
| 569 | + # touched. |
| 570 | + assert len(base_description) == 1 |
| 571 | + assert len(describe(status, base_description)) > 1 |
| 572 | + self.assertSequenceEqual(['base'], base_description) |
| 573 | + |
| 574 | + def test_passing_tests_are_calculated(self): |
| 575 | + # The number of passing is calculated from the total, failures, and |
| 576 | + # errors. |
| 577 | + status = FauxStepStatus() |
| 578 | + status.setStatistic('tests-total', 42) |
| 579 | + status.setStatistic('tests-failures', 3) |
| 580 | + status.setStatistic('tests-errors', 6) |
| 581 | + self.assertIn('33 passed', describe(status)) |
| 582 | |
| 583 | |
| 584 | if __name__ == '__main__': |
| 585 | |
| 586 | === modified file 'bzrbuildbot/test.py' |
| 587 | --- bzrbuildbot/test.py 2012-04-13 17:02:46 +0000 |
| 588 | +++ bzrbuildbot/test.py 2012-04-27 20:56:18 +0000 |
| 589 | @@ -1,6 +1,16 @@ |
| 590 | +from bzrbuildbot import subunittest |
| 591 | +import buildbot.process.buildstep |
| 592 | import buildbot.steps.shell |
| 593 | |
| 594 | -from bzrbuildbot.subunittest import SubunitObserver as TestObserver |
| 595 | + |
| 596 | +class TestObserver( |
| 597 | + subunittest.SubunitObserver, |
| 598 | + buildbot.process.buildstep.LogLineObserver): |
| 599 | + |
| 600 | + def __init__(self, *args, **kws): |
| 601 | + buildbot.process.buildstep.LogLineObserver.__init__( |
| 602 | + self, *args, **kws) |
| 603 | + subunittest.SubunitObserver.__init__(self) |
| 604 | |
| 605 | |
| 606 | class Test(buildbot.steps.shell.ShellCommand): |
| 607 | @@ -16,23 +26,5 @@ |
| 608 | self.addLogObserver('stdio', TestObserver()) |
| 609 | |
| 610 | def describe(self, done=False): |
| 611 | - description = buildbot.steps.shell.ShellCommand.describe(self, done)[:] |
| 612 | - if self.step_status.hasStatistic('tests-total'): |
| 613 | - total = self.step_status.getStatistic("tests-total", 0) |
| 614 | - failures = self.step_status.getStatistic("tests-failures", 0) |
| 615 | - errors = self.step_status.getStatistic("tests-errors", 0) |
| 616 | - passed = total - (failures + errors) |
| 617 | - |
| 618 | - if total: |
| 619 | - description.append('%d tests' % total) |
| 620 | - if passed: |
| 621 | - description.append('%d passed' % passed) |
| 622 | - if failures: |
| 623 | - description.append('%d failures' % failures) |
| 624 | - if errors: |
| 625 | - description.append('%d errors' % errors) |
| 626 | - if not done: |
| 627 | - description.extend(['(through', 'last', 'completed', 'layer)']) |
| 628 | - else: |
| 629 | - description.append("no test results") |
| 630 | - return description |
| 631 | + description = buildbot.steps.shell.ShellCommand.describe(self, done) |
| 632 | + return subunittest.describe(self.step_status, description, done) |
| 633 | |
| 634 | === modified file 'master.cfg' |
| 635 | --- master.cfg 2012-04-03 18:04:36 +0000 |
| 636 | +++ master.cfg 2012-04-27 20:56:18 +0000 |
| 637 | @@ -118,10 +118,6 @@ |
| 638 | command=['sudo', '/usr/local/bin/launchpad-lxc-build'], |
| 639 | description=['building', 'in', 'base', 'lxc', 'container'], |
| 640 | descriptionDone=['build', 'in', 'base', 'lxc', 'container'])) |
| 641 | - fac.addStep(bzrbuildbot.shell.ShellCommand( |
| 642 | - command=['sudo', '/usr/local/bin/launchpad-lxc-test', '-t', 'stories/gpg'], |
| 643 | - description=['running', 'tests', 'in', 'lxc', 'container'], |
| 644 | - descriptionDone=['ran', 'tests', 'in', 'lxc', 'container'])) |
| 645 | |
| 646 | launchpad_test_factory = factory.BuildFactory() |
| 647 | # default timeout for checkout is 20 minutes (20*60) |
| 648 | @@ -133,7 +129,7 @@ |
| 649 | bzrbuildbot.test.Test( |
| 650 | timeout=6*60*60, |
| 651 | env={'TEMP': properties.WithProperties('%(build_dir)s/temp')}, |
| 652 | - command=['testr', 'run', '--parallel'])) |
| 653 | + command=['testr', 'run', '--parallel', '--subunit', '--full-results'])) |
| 654 | |
| 655 | from buildbot.config import BuilderConfig |
| 656 | c['builders'] = [ |

Yay, great! Thank you.
- Please add a comment as to why you are not importing buildbot (and perhaps then also mention there how to run the tests)
- We talked on IRC about your idea of sorting worker logs by adding all "lower" worker logs whenever you add one. +1, though we can do that in a later branch.
- Similarly, I mentioned adding the failure log initially so it is always in a known place, and you liked that idea. We can also do that in a later branch.
- In bzrbuildbot/test.py, in Test.describe, if done is True, let's close out those logs in the Observer, as we discussed.