Merge lp:~anybox/ocb-server/7.0-test-report into lp:ocb-server

Proposed by Georges Racinet
Status: Merged
Approved by: Holger Brunn (Therp)
Approved revision: 5254
Merged at revision: 5294
Proposed branch: lp:~anybox/ocb-server/7.0-test-report
Merge into: lp:ocb-server
Diff against target: 142 lines (+45/-17)
4 files modified
openerp/modules/loading.py (+12/-4)
openerp/tools/assertion_report.py (+12/-4)
openerp/tools/convert.py (+12/-8)
openerp/tools/yaml_import.py (+9/-1)
To merge this branch: bzr merge lp:~anybox/ocb-server/7.0-test-report
Reviewer Review Type Date Requested Status
Holger Brunn (Therp) code review Approve
Raphaël Valyi - http://www.akretion.com Approve
Leonardo Pistone code review Approve
Review via email: mp+207978@code.launchpad.net

Description of the change

The core of OpenERP already has a system to report test failures (data loading, YML tests, true unit tests).

This minimally intrusive change enhances it to record more detail, allowing the launcher process to end with a useful summary.

There is a matching branch of openerp-command/7.0 that simply prints the data at the end of the tests of all modules. I believe it is already enough to help maintainers quickly get an idea of the errors and dispatch them accordingly.

Here's a sample output for the current broken tests in OCB:

FAIL : 2 failure(s) or error(s) have been recorded
Module account_payment, in test file u'test/payment_order_process.yml': AssertionError in Python code : Due date is not correct.
Module purchase_requisition: Exception during load of legacy data-based tests (yml...)

See http://buildbot.anybox.fr/builders/ocb-7.0-postgresql-9.3/builds/273/steps/testing/logs/stdio for the full log.
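To make the mechanism concrete, here is a minimal standalone sketch (not the actual openerp-command code) of how a launcher could turn the report's ``failures_details`` list into a summary like the one above. The ``AssertionReport`` shape follows the diff in this proposal; the ``print_summary`` helper and its formatting are assumptions for illustration.

```python
class AssertionReport(object):
    """Stripped-down model of openerp/tools/assertion_report.py after this MP."""
    def __init__(self):
        self.successes = 0
        self.failures = 0
        self.failures_details = []

    def record_result(self, result, details=None):
        # result may be None (ignored), True (success) or False (failure)
        if result is True:
            self.successes += 1
        elif result is False:
            self.failures += 1
            if details is not None:
                self.failures_details.append(details)

def print_summary(report):
    """Hypothetical launcher-side rendering of the recorded details."""
    lines = ['FAIL : %d failure(s) or error(s) have been recorded' % report.failures]
    for d in report.failures_details:
        prefix = 'Module %s' % d['module']
        if 'testfile' in d:
            prefix += ", in test file %r" % d['testfile']
        msg = d['msg']
        if d.get('msg_args'):
            msg = msg % d['msg_args']
        lines.append('%s: %s' % (prefix, msg.strip()))
    return '\n'.join(lines)

report = AssertionReport()
report.record_result(False, details=dict(module='purchase_requisition',
                                         msg="Exception during load of legacy "
                                             "data-based tests (yml...)"))
print(print_summary(report))
```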

Revision history for this message
Stefan Rijnhart (Opener) (stefan-opener) wrote :

Hi Georges,

thanks, this is a great help in identifying failed tests! I'd love to have this available. However, the OCB policy dictates that this patch also be proposed to upstream OpenERP. Do you have such a proposal ready? To make it easy to track the status of this change in all projects, you should create a wishlist bug report and link it to both branches.

Revision history for this message
Georges Racinet (gracinet) wrote :

Hi Stefan,

thanks for kindly explaining the policy! Indeed, I started developing the change in the OCB context (where it is useful now).

Revision history for this message
Georges Racinet (gracinet) wrote :

But I believe it could prove useful even in runbot (I don't know what's in the current trunk, though).

Revision history for this message
Laurent Mignon (Acsone) (lmi) wrote :

Hi, Georges,

A useful proposal that we already use with success on all our buildbot slaves here at Acsone (thanks to buildout merge-in). I've also successfully applied the proposal on the official OpenERP trunk thanks to the merge-in functionality of buildout, with:
merges = bzr lp:~anybox/ocb-server/7.0-test-report parts/openerp 5252..5253
Such an enhancement is now a requirement in our continuous integration methodology.
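For readers unfamiliar with the merge-in feature: with the anybox.recipe.openerp buildout recipe, the ``merges`` line above sits in the server part of a buildout configuration, roughly like this (a sketch; the section name, recipe spelling, and version pin are assumptions, not part of this MP):

```ini
[openerp]
recipe = anybox.recipe.openerp:server
version = bzr lp:ocb-server openerp last:1
; apply the proposed revisions on top of the checked-out server
merges = bzr lp:~anybox/ocb-server/7.0-test-report parts/openerp 5252..5253
```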

Thanks for the contrib.
lmi

Revision history for this message
Leonardo Pistone (lepistone) wrote :

Thanks for your work Georges.

On the technical side, it looks good! The way you find the path by opening __init__.py is a bit hacky, but I guess you have to do that (and you are using the openerp tools). Perhaps it would be a bit cleaner to import the module you want and then take module.__file__, but maybe that doesn't even work.

On the functional side, I agree it is an improvement! I have something else to grumble about that has nothing to do with your work: it doesn't feel very good to be improving a custom bit of OpenERP that does a generic thing (in this case, a test runner).

In any case, lgtm!

review: Approve (code review)
Revision history for this message
Raphaël Valyi - http://www.akretion.com (rvalyi) wrote :

Hello Georges,

this is an improvement rather than a fix, which is usually avoided in OCB (I encourage exploring ways of reducing the cost of tracking branches and cherry-picking). That being said, it is still a minor change with few side effects to expect. I will approve once you replay an MP against the official server, as requested by Stefan and the OCB policy. Thanks for jumping into OCB; I think people should clearly understand the value of OCB.

Revision history for this message
Holger Brunn (Therp) (hbrunn) wrote :

#134 should be os.path.sep

review: Needs Fixing
Revision history for this message
Georges Racinet (gracinet) wrote :

Hi Holger, thanks for the review.

Yes, you're right: I thought that tools.file_open did some normalization on '/' (similar to a URL) for its magic lookup in the wanted addons directory, but after re-reading its code I see it does not, and it indeed expects the incoming ``name`` argument to be built with os.sep, as in:

    if name.replace(os.sep, '/').startswith('addons/'):
        subdir = 'addons'
        name2 = name[7:]

That's probably because it also accepts absolute paths.

By the way, I don't know of any platform where os.sep is more than one character, but the extract above would be wrong on such a platform.
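To illustrate the point: the extract hard-codes the 7-character slice for 'addons/', which only matches the length of 'addons' plus the separator when os.sep is a single character. A standalone sketch (a simplification for illustration, not the real tools.file_open):

```python
import os

def addons_subpath(name, sep=os.sep):
    """Simplified version of the extract above: strip a leading 'addons'
    component from ``name``, assuming ``sep`` is the platform separator."""
    if name.replace(sep, '/').startswith('addons/'):
        # name[7:] silently assumes len('addons' + sep) == 7,
        # i.e. that the separator is exactly one character long.
        return name[7:]
    return name

# Correct with a 1-character separator:
print(addons_subpath('addons/account/test.yml', sep='/'))  # account/test.yml
# With a hypothetical 2-character separator, the slice is off by one:
print(addons_subpath('addons::account', sep='::'))         # :account
```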

5254. By Georges Racinet

[FIX] platform independency by using openerp.module.module API

Revision history for this message
Georges Racinet (gracinet) wrote :

Just pushed a new revision that addresses Holger's concern and Leonardo's remark by requesting the module's base path from the openerp.modules.module API.

misc.file_open does not itself proceed this way, but in case of a discrepancy, I guess OpenERP would be in far more trouble than getting a yml test file path wrong in a test report... I did quickly check that they are consistent, though.
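The new approach boils down to os.path.relpath against the module's base path. A sketch with hypothetical paths (get_module_path needs a running server, so the values below are made up for illustration):

```python
import os.path

# Hypothetical values: on a real server, get_module_path('account_payment')
# would return something like ``basepath``; ``filename`` is the YAML test
# file currently being run.
basepath = os.path.join('parts', 'openerp', 'addons', 'account_payment')
filename = os.path.join(basepath, 'test', 'payment_order_process.yml')

# This is what the patch stores under the 'testfile' key of the details dict:
testfile = os.path.relpath(filename, basepath)
print(testfile)  # test/payment_order_process.yml (with os.sep separators)
```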

Revision history for this message
Georges Racinet (gracinet) wrote :

For the record, the branch against mainline openobject-addons has been updated, too.

Revision history for this message
Georges Racinet (gracinet) wrote :

Leonardo: actually, what --tests-enable (or oe initialize --tests) does is not so generic, since it interleaves module initialisation and tests, so that each module's test suite runs against exactly the same demo data set, no matter what dependent modules could add later.
Of course, that is a consequence of the whole design of these tests. Other frameworks I have worked with solved the same kind of problem in other ways.

Rvalyi: thanks for the welcome! You must have missed it, but right after Stefan's comment I complied: I created a bug and linked the two branches to it.
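The interleaving Georges describes can be sketched as a toy model (pseudo-structure for illustration, not the actual loading.py code): each module's tests run right after its own data is loaded, before any dependent module can alter the demo data set.

```python
def initialize_with_tests(graph, load_data, run_tests):
    """Toy model of the interleaved scheme.

    ``graph`` is a list of module names in dependency order;
    ``load_data`` and ``run_tests`` are callbacks standing in for the
    real data-loading and test-running machinery."""
    results = []
    for module_name in graph:
        load_data(module_name)  # demo data of this module only
        # The suite runs before any later module touches the database.
        results.append((module_name, run_tests(module_name)))
    return results

loaded = []
res = initialize_with_tests(
    ['base', 'account', 'account_payment'],
    load_data=loaded.append,
    run_tests=lambda m: 'tested %s with %d module(s) loaded' % (m, len(loaded)))
print(res[0][1])  # 'base' is tested with only itself loaded
```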

Revision history for this message
Raphaël Valyi - http://www.akretion.com (rvalyi) wrote :

Thanks!

review: Approve
Revision history for this message
Leonardo Pistone (lepistone) wrote :

Thanks for your explanation Georges.

Revision history for this message
Holger Brunn (Therp) (hbrunn) :
review: Approve (code review)

Preview Diff

=== modified file 'openerp/modules/loading.py'
--- openerp/modules/loading.py	2013-11-25 10:38:42 +0000
+++ openerp/modules/loading.py	2014-03-04 14:07:44 +0000
@@ -197,13 +197,21 @@
             # 'data' section, but should probably not alter the data,
             # as there is no rollback.
             if tools.config.options['test_enable']:
-                report.record_result(load_test(module_name, idref, mode))
-
+                report.record_result(load_test(module_name, idref, mode),
+                                     details=(dict(module=module_name,
+                                                   msg="Exception during load of legacy "
+                                                       "data-based tests (yml...)")))
             # Run the `fast_suite` and `checks` tests given by the module.
             if module_name == 'base':
                 # Also run the core tests after the database is created.
-                report.record_result(openerp.modules.module.run_unit_tests('openerp'))
-            report.record_result(openerp.modules.module.run_unit_tests(module_name))
+                report.record_result(openerp.modules.module.run_unit_tests('openerp'),
+                                     details=dict(module='openerp',
+                                                  msg="Failure or error in server core "
+                                                      "unit tests"))
+            report.record_result(openerp.modules.module.run_unit_tests(module_name),
+                                 details=dict(module=module_name,
+                                              msg="Failure or error in unit tests, "
+                                                  "check logs for more details"))
 
             processed_modules.append(package.name)
 
 
=== modified file 'openerp/tools/assertion_report.py'
--- openerp/tools/assertion_report.py	2012-03-02 11:02:27 +0000
+++ openerp/tools/assertion_report.py	2014-03-04 14:07:44 +0000
@@ -8,20 +8,28 @@
     def __init__(self):
         self.successes = 0
         self.failures = 0
+        self.failures_details = []
 
     def record_success(self):
         self.successes += 1
 
-    def record_failure(self):
+    def record_failure(self, details=None):
         self.failures += 1
-
-    def record_result(self, result):
+        if details is not None:
+            self.failures_details.append(details)
+
+    def record_result(self, result, details=None):
+        """Record either success or failure, with the provided details in the latter case.
+
+        :param result: a boolean
+        :param details: a dict with keys ``'module'``, ``'testfile'``, ``'msg'``, ``'msg_args'``
+        """
         if result is None:
             pass
         elif result is True:
             self.record_success()
         elif result is False:
-            self.record_failure()
+            self.record_failure(details=details)
 
     def __str__(self):
         res = 'Assertions report: %s successes, %s failures' % (self.successes, self.failures)
 
=== modified file 'openerp/tools/convert.py'
--- openerp/tools/convert.py	2013-12-03 09:24:33 +0000
+++ openerp/tools/convert.py	2014-03-04 14:07:44 +0000
@@ -692,13 +692,15 @@
         if rec_src_count:
             count = int(rec_src_count)
             if len(ids) != count:
-                self.assertion_report.record_failure()
                 msg = 'assertion "%s" failed!\n' \
                       ' Incorrect search count:\n' \
                       ' expected count: %d\n' \
-                      ' obtained count: %d\n' \
-                      % (rec_string, count, len(ids))
-                _logger.error(msg)
+                      ' obtained count: %d\n'
+                msg_args = (rec_string, count, len(ids))
+                _logger.error(msg, msg_args)
+                self.assertion_report.record_failure(details=dict(module=self.module,
+                                                                  msg=msg,
+                                                                  msg_args=msg_args))
                 return
 
         assert ids is not None,\
@@ -720,13 +722,15 @@
             expected_value = _eval_xml(self, test, self.pool, cr, uid, self.idref, context=context) or True
             expression_value = unsafe_eval(f_expr, globals_dict)
             if expression_value != expected_value: # assertion failed
-                self.assertion_report.record_failure()
                 msg = 'assertion "%s" failed!\n' \
                       ' xmltag: %s\n' \
                       ' expected value: %r\n' \
-                      ' obtained value: %r\n' \
-                      % (rec_string, etree.tostring(test), expected_value, expression_value)
-                _logger.error(msg)
+                      ' obtained value: %r\n'
+                msg_args = (rec_string, etree.tostring(test), expected_value, expression_value)
+                self.assertion_report.record_failure(details=dict(module=self.module,
+                                                                  msg=msg,
+                                                                  msg_args=msg_args))
+                _logger.error(msg, msg_args)
                 return
         else: # all tests were successful for this assertion tag (no break)
             self.assertion_report.record_success()
 
=== modified file 'openerp/tools/yaml_import.py'
--- openerp/tools/yaml_import.py	2012-12-01 11:35:24 +0000
+++ openerp/tools/yaml_import.py	2014-03-04 14:07:44 +0000
@@ -1,10 +1,12 @@
 # -*- coding: utf-8 -*-
+import os
 import threading
 import types
 import time # used to eval time.strftime expressions
 from datetime import datetime, timedelta
 import logging
 
+from copy import deepcopy
 import openerp.pooler as pooler
 import openerp.sql_db as sql_db
 import misc
@@ -193,7 +195,13 @@
         return node
 
     def _log_assert_failure(self, msg, *args):
-        self.assertion_report.record_failure()
+        from openerp.modules import module # cannot be made before (loop)
+        basepath = module.get_module_path(self.module)
+        self.assertion_report.record_failure(
+            details=dict(module=self.module,
+                         testfile=os.path.relpath(self.filename, basepath),
+                         msg=msg,
+                         msg_args=deepcopy(args)))
         _logger.error(msg, *args)
 
     def _get_assertion_id(self, assertion):
