Merge lp:~anybox/ocb-server/7.0-test-report into lp:ocb-server

Proposed by Georges Racinet
Status: Merged
Approved by: Holger Brunn (Therp)
Approved revision: 5254
Merged at revision: 5294
Proposed branch: lp:~anybox/ocb-server/7.0-test-report
Merge into: lp:ocb-server
Diff against target: 142 lines (+45/-17)
4 files modified
openerp/modules/loading.py (+12/-4)
openerp/tools/assertion_report.py (+12/-4)
openerp/tools/convert.py (+12/-8)
openerp/tools/yaml_import.py (+9/-1)
To merge this branch: bzr merge lp:~anybox/ocb-server/7.0-test-report
Reviewer Review Type Date Requested Status
Holger Brunn (Therp) code review Approve
Raphaël Valyi - http://www.akretion.com Approve
Leonardo Pistone code review Approve
Review via email: mp+207978@code.launchpad.net

Description of the change

The core of OpenERP already has a system to report test failures (data loading, YML tests, true unit tests).

This minimally intrusive change enhances it to record more detail, allowing the launcher process to end with a useful summary.

There is a matching branch of openerp-command/7.0 that simply prints the data once all modules have been tested. I believe this is already enough to help maintainers quickly get an idea of the errors and dispatch them accordingly.

Here's a sample output for the current broken tests in OCB:

FAIL : 2 failure(s) or error(s) have been recorded
Module account_payment, in test file u'test/payment_order_process.yml': AssertionError in Python code : Due date is not correct.
Module purchase_requisition: Exception during load of legacy data-based tests (yml...)
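The recording pattern behind this summary can be sketched in a few lines. This is an illustrative stand-in, not the actual OpenERP code: the class name and method signatures mirror the `assertion_report.py` changes in the diff below, while `summary()` and the demo values are hypothetical.

```python
class AssertionReport:
    """Toy stand-in for openerp/tools/assertion_report.py with details recording."""

    def __init__(self):
        self.successes = 0
        self.failures = 0
        self.failures_details = []

    def record_success(self):
        self.successes += 1

    def record_failure(self, details=None):
        self.failures += 1
        if details is not None:
            self.failures_details.append(details)

    def record_result(self, result, details=None):
        # None means "no test ran"; True/False record success/failure.
        if result is True:
            self.record_success()
        elif result is False:
            self.record_failure(details=details)

    def summary(self):
        # Hypothetical rendering, shaped like the sample output above.
        lines = ['%d failure(s) or error(s) have been recorded' % self.failures]
        for d in self.failures_details:
            msg = d['msg'] % d['msg_args'] if d.get('msg_args') else d['msg']
            lines.append('Module %s: %s' % (d['module'], msg))
        return '\n'.join(lines)


report = AssertionReport()
report.record_result(True)
report.record_result(False, details=dict(
    module='purchase_requisition',
    msg='Exception during load of legacy data-based tests (yml...)'))
print(report.summary())
```

The launcher process only needs to iterate over `failures_details` at the end of the run, which is essentially what the matching openerp-command branch does.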

See http://buildbot.anybox.fr/builders/ocb-7.0-postgresql-9.3/builds/273/steps/testing/logs/stdio for the full log.

Revision history for this message
Stefan Rijnhart (Opener) (stefan-opener) wrote :

Hi Georges,

thanks, this is a great help in identifying failed tests! I'd love to have this available. However, OCB policy dictates that this patch also be proposed to upstream OpenERP. Do you have such a proposal ready? To make the status of this change easy to track across all projects, you should create a wishlist bug report and link it to both branches.

Revision history for this message
Georges Racinet (gracinet) wrote :

Hi Stefan,

thanks for kindly explaining the policy! I did indeed start developing the change in the OCB context (where it is useful now).

Revision history for this message
Georges Racinet (gracinet) wrote :

but I believe it could prove useful even in runbot (I don't know what's in current trunk, though)

Revision history for this message
Laurent Mignon (Acsone) (lmi) wrote :

Hi, Georges,

A useful proposal that we already use with success in all our buildbot slaves here at Acsone (thanks to buildout's merge-in). I've also successfully applied the proposal on the official OpenERP trunk thanks to the merge-in functionality of buildout, with:
merges = bzr lp:~anybox/ocb-server/7.0-test-report parts/openerp 5252..5253
Such an enhancement is now a requirement in our continuous integration methodology.

Thanks for the contrib.
lmi

Revision history for this message
Leonardo Pistone (lepistone) wrote :

Thanks for your work Georges.

On the technical side, it looks good! The way you find the path by opening the __init__.py is a bit hacky, but I guess you have to do that (and you're using the openerp tools). Perhaps it would be a bit cleaner to import the module you want and then take module.__file__, but maybe that doesn't even work.

On the functional side, I agree that this is an improvement! I have something else to mumble about that has nothing to do with your work: it doesn't feel very good to work on improving a custom bit of OpenERP that does a generic thing (in this case, a test runner).

In any case, lgtm!

review: Approve (code review)
Revision history for this message
Raphaël Valyi - http://www.akretion.com (rvalyi) wrote :

Hello Georges,

this is an improvement rather than a fix, which is usually avoided on OCB (I encourage exploring ways of diminishing the cost of tracking branches and cherry-picking). That being said, it's still a minor change with few side effects to expect. I would approve if you replay an MP to the official server as requested by Stefan and the OCB policy. Thanks for jumping into OCB; I think people should clearly understand the value of OCB.

Revision history for this message
Holger Brunn (Therp) (hbrunn) wrote :

#134 should be os.path.sep

review: Needs Fixing
Revision history for this message
Georges Racinet (gracinet) wrote :

Hi Holger, thanks for the review.

Yes, you're right: I thought that tools.file_open did some normalization on '/' (similar to a URL) for its magic lookup in the wanted addons directory, but after re-reading its code, I see it does not, and indeed expects the incoming ``name`` argument to be built with os.sep, as in:

    if name.replace(os.sep, '/').startswith('addons/'):
        subdir = 'addons'
        name2 = name[7:]

That's probably because it also accepts absolute paths.

By the way, I don't know of any platform where os.sep is more than one character, but the above extract would be wrong on one.

5254. By Georges Racinet

[FIX] platform independency by using openerp.module.module API

Revision history for this message
Georges Racinet (gracinet) wrote :

Just pushed a new revision that addresses Holger's concern and Leonardo's remark by requesting the module's base path from the openerp.module.module API.

misc.file_open does not itself proceed this way, but in case of a discrepancy, I guess OpenERP would be in far more trouble than getting a yml test file path wrong in a test report... I quickly checked that it's consistent, though.
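The platform-independent pattern used in revision 5254 boils down to os.path.relpath against the module's base path. In the sketch below, get_module_path is a hypothetical stand-in for the openerp.modules.module API (its addons directory is invented), while the relpath call is the actual idiom from the fix:

```python
import os.path


def get_module_path(module_name, addons_dir='/opt/openerp/addons'):
    # Hypothetical stand-in: in the real fix, the module's base directory
    # comes from openerp.modules.module.get_module_path().
    return os.path.join(addons_dir, module_name)


basepath = get_module_path('account_payment')
filename = os.path.join(basepath, 'test', 'payment_order_process.yml')

# os.path.relpath splits on os.sep itself, so no manual '/' handling is
# needed, which resolves Holger's concern about diff line #134.
print(os.path.relpath(filename, basepath))
```

On a POSIX system this prints test/payment_order_process.yml, matching the test file path shown in the sample summary above.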

Revision history for this message
Georges Racinet (gracinet) wrote :

For the record, the branch against mainline openobject-addons has been updated, too

Revision history for this message
Georges Racinet (gracinet) wrote :

Leonardo: actually, what --tests-enable (or oe initialize --tests) does is not so generic, since it interleaves module initialisation and tests, so that each module's test suite runs with exactly the same demo data set, no matter what dependent modules might later add.
Of course, that's a consequence of the overall design of these tests. Other frameworks I've worked with solved the same kind of problems in other ways.

Rvalyi: thanks for the welcome! You must have missed it, but right after Stefan's comment, I complied and created a bug, then linked the two branches to it.

Revision history for this message
Raphaël Valyi - http://www.akretion.com (rvalyi) wrote :

Thanks!

review: Approve
Revision history for this message
Leonardo Pistone (lepistone) wrote :

Thanks for your explanation Georges.

Revision history for this message
Holger Brunn (Therp) (hbrunn) :
review: Approve (code review)

Preview Diff

1=== modified file 'openerp/modules/loading.py'
2--- openerp/modules/loading.py 2013-11-25 10:38:42 +0000
3+++ openerp/modules/loading.py 2014-03-04 14:07:44 +0000
4@@ -197,13 +197,21 @@
5 # 'data' section, but should probably not alter the data,
6 # as there is no rollback.
7 if tools.config.options['test_enable']:
8- report.record_result(load_test(module_name, idref, mode))
9-
10+ report.record_result(load_test(module_name, idref, mode),
11+ details=(dict(module=module_name,
12+ msg="Exception during load of legacy "
13+ "data-based tests (yml...)")))
14 # Run the `fast_suite` and `checks` tests given by the module.
15 if module_name == 'base':
16 # Also run the core tests after the database is created.
17- report.record_result(openerp.modules.module.run_unit_tests('openerp'))
18- report.record_result(openerp.modules.module.run_unit_tests(module_name))
19+ report.record_result(openerp.modules.module.run_unit_tests('openerp'),
20+ details=dict(module='openerp',
21+ msg="Failure or error in server core "
22+ "unit tests"))
23+ report.record_result(openerp.modules.module.run_unit_tests(module_name),
24+ details=dict(module=module_name,
25+ msg="Failure or error in unit tests, "
26+ "check logs for more details"))
27
28 processed_modules.append(package.name)
29
30
31=== modified file 'openerp/tools/assertion_report.py'
32--- openerp/tools/assertion_report.py 2012-03-02 11:02:27 +0000
33+++ openerp/tools/assertion_report.py 2014-03-04 14:07:44 +0000
34@@ -8,20 +8,28 @@
35 def __init__(self):
36 self.successes = 0
37 self.failures = 0
38+ self.failures_details = []
39
40 def record_success(self):
41 self.successes += 1
42
43- def record_failure(self):
44+ def record_failure(self, details=None):
45 self.failures += 1
46-
47- def record_result(self, result):
48+ if details is not None:
49+ self.failures_details.append(details)
50+
51+ def record_result(self, result, details=None):
52+ """Record either success or failure, with the provided details in the latter case.
53+
54+ :param result: a boolean
55+ :param details: a dict with keys ``'module'``, ``'testfile'``, ``'msg'``, ``'msg_args'``
56+ """
57 if result is None:
58 pass
59 elif result is True:
60 self.record_success()
61 elif result is False:
62- self.record_failure()
63+ self.record_failure(details=details)
64
65 def __str__(self):
66 res = 'Assertions report: %s successes, %s failures' % (self.successes, self.failures)
67
68=== modified file 'openerp/tools/convert.py'
69--- openerp/tools/convert.py 2013-12-03 09:24:33 +0000
70+++ openerp/tools/convert.py 2014-03-04 14:07:44 +0000
71@@ -692,13 +692,15 @@
72 if rec_src_count:
73 count = int(rec_src_count)
74 if len(ids) != count:
75- self.assertion_report.record_failure()
76 msg = 'assertion "%s" failed!\n' \
77 ' Incorrect search count:\n' \
78 ' expected count: %d\n' \
79- ' obtained count: %d\n' \
80- % (rec_string, count, len(ids))
81- _logger.error(msg)
82+ ' obtained count: %d\n'
83+ msg_args = (rec_string, count, len(ids))
84+ _logger.error(msg, msg_args)
85+ self.assertion_report.record_failure(details=dict(module=self.module,
86+ msg=msg,
87+ msg_args=msg_args))
88 return
89
90 assert ids is not None,\
91@@ -720,13 +722,15 @@
92 expected_value = _eval_xml(self, test, self.pool, cr, uid, self.idref, context=context) or True
93 expression_value = unsafe_eval(f_expr, globals_dict)
94 if expression_value != expected_value: # assertion failed
95- self.assertion_report.record_failure()
96 msg = 'assertion "%s" failed!\n' \
97 ' xmltag: %s\n' \
98 ' expected value: %r\n' \
99- ' obtained value: %r\n' \
100- % (rec_string, etree.tostring(test), expected_value, expression_value)
101- _logger.error(msg)
102+ ' obtained value: %r\n'
103+ msg_args = (rec_string, etree.tostring(test), expected_value, expression_value)
104+ self.assertion_report.record_failure(details=dict(module=self.module,
105+ msg=msg,
106+ msg_args=msg_args))
107+ _logger.error(msg, msg_args)
108 return
109 else: # all tests were successful for this assertion tag (no break)
110 self.assertion_report.record_success()
111
112=== modified file 'openerp/tools/yaml_import.py'
113--- openerp/tools/yaml_import.py 2012-12-01 11:35:24 +0000
114+++ openerp/tools/yaml_import.py 2014-03-04 14:07:44 +0000
115@@ -1,10 +1,12 @@
116 # -*- coding: utf-8 -*-
117+import os
118 import threading
119 import types
120 import time # used to eval time.strftime expressions
121 from datetime import datetime, timedelta
122 import logging
123
124+from copy import deepcopy
125 import openerp.pooler as pooler
126 import openerp.sql_db as sql_db
127 import misc
128@@ -193,7 +195,13 @@
129 return node
130
131 def _log_assert_failure(self, msg, *args):
132- self.assertion_report.record_failure()
133+ from openerp.modules import module # cannot be made before (loop)
134+ basepath = module.get_module_path(self.module)
135+ self.assertion_report.record_failure(
136+ details=dict(module=self.module,
137+ testfile=os.path.relpath(self.filename, basepath),
138+ msg=msg,
139+ msg_args=deepcopy(args)))
140 _logger.error(msg, *args)
141
142 def _get_assertion_id(self, assertion):
