Merge lp:~openerp-dev/openobject-server/6.0-server-fix-stored-function-triggers into lp:openobject-server/6.0

Proposed by Olivier Dony (Odoo)
Status: Needs review
Proposed branch: lp:~openerp-dev/openobject-server/6.0-server-fix-stored-function-triggers
Merge into: lp:openobject-server/6.0
Diff against target: 118 lines (+54/-38)
1 file modified
bin/osv/orm.py (+54/-38)
To merge this branch: bzr merge lp:~openerp-dev/openobject-server/6.0-server-fix-stored-function-triggers
Reviewer Review Type Date Requested Status
OpenERP Publisher's Warranty Team Pending
Review via email: mp+53772@code.launchpad.net

Description of the change

Rewrite of the ORM's _store_get_values(), which is used to generate the list of stored function fields that need to be recomputed as a result of any create/write/unlink operation.

This rewrite attempts to make the code a bit more readable (I had a headache just reading the previous version), but most importantly it changes the logic to respect *all* priorities set on the function fields' triggers.
The previous version of this code only considered the priority of the first function field found for each model, hence causing hard-to-diagnose issues during updates of objects with complex inter-dependent fields.
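For readers unfamiliar with the trigger declarations involved, here is a minimal sketch (not code from this branch; the model and field names are hypothetical) of how priorities are attached to store triggers in the OpenERP 6.0 API, assuming the usual store={'trigger.model': (mapping_function, trigger_fields, priority)} form:

from osv import fields, osv

class order_sketch(osv.osv):
    # hypothetical model, used only to illustrate store-trigger priorities
    _name = 'order.sketch'

    def _amount_untaxed(self, cr, uid, ids, field_name, arg, context=None):
        return dict.fromkeys(ids, 0.0)  # placeholder computation

    def _amount_total(self, cr, uid, ids, field_name, arg, context=None):
        # would normally read amount_untaxed, hence the higher priority below
        return dict.fromkeys(ids, 0.0)  # placeholder computation

    def _get_orders(self, cr, uid, ids, context=None):
        # called for the triggering model (order.line.sketch); returns the ids
        # of the order.sketch records whose stored fields must be recomputed
        return [line.order_id.id for line in
                self.pool.get('order.line.sketch').browse(cr, uid, ids, context=context)]

    _columns = {
        # priority 10: recomputed first when order lines change
        'amount_untaxed': fields.function(_amount_untaxed, method=True, type='float',
            string='Untaxed Amount',
            store={'order.line.sketch': (_get_orders, ['price_unit', 'quantity'], 10)}),
        # priority 20: depends on amount_untaxed, so it must be recomputed afterwards
        'amount_total': fields.function(_amount_total, method=True, type='float',
            string='Total',
            store={'order.line.sketch': (_get_orders, ['price_unit', 'quantity'], 20)}),
    }
order_sketch()

With the previous code, a write() on order.line.sketch could schedule both fields under a single priority, breaking the intended computation order.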

This change allows fixing bug 715470, as explained in the related merge proposal for addons:
https://code.launchpad.net/~openerp-dev/openobject-addons/trunk-bug-715470-ara/+merge/51288

Once merged in 6.0, this fix will be forward-ported to trunk.

Revision history for this message
Olivier Dony (Odoo) (odo-openerp) wrote :

Note that the comments at the top about the structure of _inherits and _inherit_fields are not really related to this issue; they're just provided for reference.
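As a concrete illustration of those structures (not taken from this branch; it just reuses the well-known product models), product.product declares:

    _inherits = {'product.template': 'product_tmpl_id'}

and once the registry is loaded, its _inherit_fields contains entries of the form:

    {'name': ('product.template', 'product_tmpl_id', <fields.char column object>), ...}

i.e. each inherited field name maps to (parent_model, m2o_field_to_reach_parent, field_column_obj), exactly as the new comments describe.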

Unmerged revisions

3367. By Olivier Dony (Odoo)

[FIX] orm: rewrote and fixed _store_get_values to properly respect priority

_store_get_values is in charge of gathering the list of
function fields that should be recomputed as a result
of any write() operation, due to store flag triggers.
The previous version would group fields by model and
only respect the priority of the first trigger for a
given model, creating various problems when priority
of field computation is important.
The new version tries to be more readable and respect
all priorities.
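
As an illustration of the contract described above (the helper below is hypothetical, not code from this branch), the returned [(priority, model_name, [record_ids], [function_fields])] entries are meant to be processed in ascending priority, so that a later, higher-priority field sees the freshly recomputed values of the fields it depends on:

def recompute_stored_fields(pool, cr, uid, todo, context=None):
    # todo comes from model._store_get_values(cr, uid, ids, modified_fields, context)
    # sorting on the first tuple element enforces the trigger priorities
    for priority, model_name, record_ids, function_fields in sorted(todo):
        pool.get(model_name)._store_set_values(cr, uid, record_ids, function_fields, context)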

Preview Diff

=== modified file 'bin/osv/orm.py'
--- bin/osv/orm.py 2011-03-11 08:27:20 +0000
+++ bin/osv/orm.py 2011-03-17 10:02:26 +0000
@@ -396,7 +396,15 @@
     _order = 'id'
     _sequence = None
     _description = None
+
+    # structure:
+    #     { 'parent_model': 'm2o_field', ... }
     _inherits = {}
+
+    # structure:
+    #     { 'field_name': ('parent_model', 'm2o_field_to_reach_parent', field_column_obj), ... }
+    _inherit_fields = None
+
     _table = None
     _invalids = set()
     _log_create = False
@@ -2803,7 +2811,7 @@
                         if f == order:
                             ok = False
                 if ok:
-                    self.pool._store_function[object].append( (self._name, store_field, fnct, fields2, order, length))
+                    self.pool._store_function[object].append((self._name, store_field, fnct, tuple(fields2) if fields2 else None, order, length))
                     self.pool._store_function[object].sort(lambda x, y: cmp(x[4], y[4]))
 
         for (key, _, msg) in self._sql_constraints:
@@ -3692,44 +3700,52 @@
 
         :return: [(priority, model_name, [record_ids,], [function_fields,])]
         """
-        # FIXME: rewrite, cleanup, use real variable names
-        # e.g.: http://pastie.org/1222060
-        result = {}
-        fncts = self.pool._store_function.get(self._name, [])
-        for fnct in range(len(fncts)):
-            if fncts[fnct][3]:
-                ok = False
-                if not fields:
-                    ok = True
-                for f in (fields or []):
-                    if f in fncts[fnct][3]:
-                        ok = True
-                        break
-                if not ok:
-                    continue
-
-            result.setdefault(fncts[fnct][0], {})
-
-            # uid == 1 for accessing objects having rules defined on store fields
-            ids2 = fncts[fnct][2](self, cr, 1, ids, context)
-            for id in filter(None, ids2):
-                result[fncts[fnct][0]].setdefault(id, [])
-                result[fncts[fnct][0]][id].append(fnct)
-        dict = {}
-        for object in result:
-            k2 = {}
-            for id, fnct in result[object].items():
-                k2.setdefault(tuple(fnct), [])
-                k2[tuple(fnct)].append(id)
-            for fnct, id in k2.items():
-                dict.setdefault(fncts[fnct[0]][4], [])
-                dict[fncts[fnct[0]][4]].append((fncts[fnct[0]][4], object, id, map(lambda x: fncts[x][1], fnct)))
-        result2 = []
-        tmp = dict.keys()
-        tmp.sort()
-        for k in tmp:
-            result2 += dict[k]
-        return result2
+        stored_functions = self.pool._store_function.get(self._name, [])
+
+        # use indexed names for the details of the stored_functions:
+        model_name_, func_field_to_compute_, id_mapping_fnct_, trigger_fields_, priority_ = range(5)
+
+        # only keep functions that should be triggered for the ``fields``
+        # being written to.
+        if fields is None:
+            # fields == None when unlink() is being called
+            to_compute = stored_functions
+        else:
+            to_compute = [f for f in stored_functions \
+                            if ((not f[trigger_fields_]) or set(fields).intersection(f[trigger_fields_]))]
+
+        mapping = {}
+        for function in to_compute:
+            # uid == 1 for accessing objects having rules defined on store fields
+            target_ids = [id for id in function[id_mapping_fnct_](self, cr, 1, ids, context) if id]
+
+            # the compound key must consider the priority and model name
+            key = (function[priority_], function[model_name_])
+            for target_id in target_ids:
+                mapping.setdefault(key, {}).setdefault(target_id,set()).add(tuple(function))
+
+        # Here mapping looks like:
+        #   { (10, 'model_a') : { target_id1: [ (function_1_tuple, function_2_tuple) ], ... }
+        #     (20, 'model_a') : { target_id2: [ (function_3_tuple, function_4_tuple) ], ... }
+        #     (99, 'model_a') : { target_id1: [ (function_5_tuple, function_6_tuple) ], ... }
+        #   }
+
+        # Now we need to generate the batch function calls list
+        # call_map =
+        #   { (10, 'model_a') : [(10, 'model_a', [record_ids,], [function_fields,])] }
+        call_map = {}
+        for ((priority,model), id_map) in mapping.iteritems():
+            functions_ids_maps = {}
+            # function_ids_maps =
+            #   { (function_1_tuple, function_2_tuple) : [target_id1, target_id2, ..] }
+            for id, functions in id_map.iteritems():
+                functions_ids_maps.setdefault(tuple(functions), []).append(id)
+            for functions, ids in functions_ids_maps.iteritems():
+                call_map.setdefault((priority,model),[]).append((priority, model, ids, [f[func_field_to_compute_] for f in functions]))
+        ordered_keys = call_map.keys()
+        ordered_keys.sort()
+        result = reduce(operator.add, (call_map[k] for k in ordered_keys)) if call_map else []
+        return result
 
     def _store_set_values(self, cr, uid, ids, fields, context):
         """Calls the fields.function's "implementation function" for all ``fields``, on records with ``ids`` (taking care of