Merge lp:~openerp-commiter/openobject-server/sparse_field into lp:openobject-server
- sparse_field
- Merge into trunk
Status: Merged
Merged at revision: 3915
Proposed branch: lp:~openerp-commiter/openobject-server/sparse_field
Merge into: lp:openobject-server
Diff against target: 465 lines (+318/-13) (has conflicts), 5 files modified
  openerp/addons/base/base.sql (+2/-0)
  openerp/addons/base/ir/ir.xml (+4/-0)
  openerp/addons/base/ir/ir_model.py (+9/-0)
  openerp/osv/fields.py (+113/-7)
  openerp/osv/orm.py (+190/-6)
  Text conflict in openerp/osv/orm.py
To merge this branch: bzr merge lp:~openerp-commiter/openobject-server/sparse_field
Related bugs:
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Olivier Dony (Odoo) | Needs Information | ||
Vo Minh Thu | Pending | ||
Review via email: mp+67105@code.launchpad.net
Commit message
Description of the change
Hi
I am working on magentoerpconnect, an OpenERP module that connects OpenERP with Magento (an e-commerce solution).
In Magento, a customer can easily create new fields on a category of products; for example, for a camera, he will create a 'number_of_pixel' field.
The first version of magentoerpconnect was able to create all of these fields dynamically in OpenERP (an initial import from Magento to OpenERP was done).
But as customers sell a lot of different products, we quickly ran into some limitations.
Some are technical (for example, Postgres allows at most 1600 columns per table) and some are performance limitations (reading 1500 useless empty fields per product is not good).
So we decided to store all custom Magento fields in a single JSON field.
We chose to override the write, create and read functions so that the value of a Magento field is read/written/created directly in the JSON store field, called 'magerp'.
https:/
For example, when I read the field 'x_js_color_
It's very fast because I only need one read call to get both the Magento fields and the OpenERP fields.
(This problem could also be solved with function fields, but in that case read would be called at least 2 times. Moreover, the OpenERP framework doesn't support creating function fields dynamically. So I chose the first option.)
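The single-JSON-column approach described above can be sketched outside the ORM. This is a simplified illustration and not the actual magentoerpconnect code; the column name `magerp` follows the description above, and the helper functions and the plain-dict "record" are stand-ins for the overridden ORM read/write:

```python
import json

def read_sparse(row, field_name):
    """Read one sparse attribute out of the JSON store column.

    `row` is a plain dict standing in for a database record; the real
    module overrides the ORM's read() instead.
    """
    store = json.loads(row.get('magerp') or '{}')
    return store.get(field_name)

def write_sparse(row, field_name, value):
    """Write one sparse attribute into the JSON store column."""
    store = json.loads(row.get('magerp') or '{}')
    store[field_name] = value
    row['magerp'] = json.dumps(store)

record = {'magerp': None}
write_sparse(record, 'x_number_of_pixel', 12)
write_sparse(record, 'x_color', 'black')
# one read of the single 'magerp' column now returns every custom attribute
```

The key point is that no matter how many custom attributes a product has, they all live in one text column, so a single column read (instead of 1500 empty ones) fetches them all.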
It works great, but I have a little problem regarding the creation of the columns in OpenERP.
Indeed, for all of these fields we don't need to create a column, as they are "computed" fields.
The problem is that I cannot override the function _auto_init to cancel the creation of some columns.
At first I just added a hook on the field handling in orm.py:
=== modified file 'openerp/
--- openerp/osv/orm.py 2011-07-05 12:22:22 +0000
+++ openerp/osv/orm.py 2011-07-06 23:17:23 +0000
@@ -2714,6 +2714,12 @@
+
+ def _check_
+ # Don't update custom (also called manual) fields
+ if f.manual and not context.
+ return False
+ return True
def _auto_init(self, cr, context=None):
"""
@@ -2739,7 +2745,6 @@
todo_end = []
- update_
create = not self._table_
@@ -2767,7 +2772,7 @@
if k in ('id', 'write_uid', 'write_date', 'create_uid', 'create_date'):
# Don't update custom (also called manual) fields
- if f.manual and not update_
+ if not self._check_
if isinstance(f, fields.one2many):
There is no big change; I have just extracted the check so an external module can decide whether this field has to be updated.
As the function _auto_init is very big, maybe it would be better to split it and extract a function which can update only one field.
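The refactoring being asked for here, extracting the per-field check into an overridable method, can be sketched like this. It is a simplified stand-in, not the real _auto_init: the class names, the `sparse` attribute and the `update_custom_fields` context key are assumptions for illustration (the truncated diff above only shows the `f.manual and not context...` part):

```python
class BaseModel:
    def _check_field_update(self, field, context=None):
        """Hook: decide whether schema init may (re)create this column.

        External modules can override this, e.g. to skip fields that
        live in a serialized store and need no real DB column.
        """
        context = context or {}
        # don't update custom (also called manual) fields by default
        if getattr(field, 'manual', False) and not context.get('update_custom_fields'):
            return False
        return True

    def _auto_init_columns(self, fields_by_name, context=None):
        # only the fields accepted by the hook get a real DB column
        return [name for name, f in fields_by_name.items()
                if self._check_field_update(f, context=context)]

class SparseAwareModel(BaseModel):
    def _check_field_update(self, field, context=None):
        # sparse fields are computed from the JSON store: never create a column
        if getattr(field, 'sparse', False):
            return False
        return super()._check_field_update(field, context=context)
```

Extracting the predicate keeps _auto_init itself unchanged while letting a subclass veto column creation for individual fields.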
So I propose this merge.
What do you think?
Thanks
Olivier Dony (Odoo) (odo-openerp) wrote:
Sébastien BEAU - http://www.akretion.com (sebastien.beau) wrote:
Hi Olivier.
Your idea looks great, and if openobject-server can directly support serialization fields, it would be perfect. Can we talk about it quickly on Skype (sebastien_beau) or gtalk (<email address hidden>)?
Thanks for everything
Sébastien BEAU - http://www.akretion.com (sebastien.beau) wrote:
Oops, sorry, for gtalk it is <email address hidden>
Vo Minh Thu (thu) wrote:
> Hi Olivier.
> Your idea looks greats and sure if openobject-server can directly support
> serialization_field it should be perfect. Can we talk about it quickly on
> skype (sebastien_beau) or gtalk (<email address hidden>)?
> Thanks for all
Sébastien,
I think Olivier is a bit busy these days to have such a talk; I know I am. Anyway, I would prefer the discussion to happen here, as it is easier for me to deal with it in an asynchronous and archived way.
At least I'm happy you like the idea.
Thu
Sébastien BEAU - http://www.akretion.com (sebastien.beau) wrote:
Ok, no problem Thu, I understand.
So as I said, the idea looks good and very generic.
Do you think it will be possible to introduce serialization fields in the ORM for v6.1?
If you agree to introduce it for v6.1, how do we proceed?
Does OpenERP SA make it? Do I propose a merge? Or does somebody from OpenERP SA work with me on it (maybe the best solution)? What do you think?
I think serialization fields can be used by a lot of modules, not only by magentoerpconnect. So it looks very interesting.
Best regards
Vo Minh Thu (thu) wrote:
Sébastien,
Could the way we work together on this be that you make a merge proposal based on Olivier's explanation/
I think Olivier's stuff is quite clear, but if you need any more guidance, you can ask and proceed by small increments; I'll do my best to answer in a timely manner.
Cheers
Sébastien BEAU - http://www.akretion.com (sebastien.beau) wrote:
Hi Vo Minh Thu
It looks good to me.
Next week I will have a lot of work, but the week after I will start to look at it.
If I need any help I will ask you, and maybe it would be good before starting if I could talk to you directly by phone, Skype or gtalk (a quick talk; I think that, like me, you are very busy).
Thanks for your help.
See you
> Sébastien,
>
> Could the way we work together on this be that you make a merge proposal based
> on Olivier's explanation/
>
> I think Olivier's stuff is quite clear but if you need any more guidance, you
> can ask and proceed by small increment; I'll do my best to answer in a timely
> manner.
>
> Cheers
Sébastien BEAU - http://www.akretion.com (sebastien.beau) wrote:
Hi Olivier, Vo Minh Thu
The work is in progress; you can take a look at my code. It's not totally finished: I still have to refactor some parts and maybe fix some bugs. This week I will try to remove the dependency of magentoerpconnect on base_json_field and use the new sparse field.
You can already give your feedback on my code.
Best regards
Sébastien BEAU - http://www.akretion.com (sebastien.beau) wrote:
Hi Olivier, Vo Minh Thu
Can you take a look at my merge? I tested it with magentoerpconnect and it works perfectly.
Can you give me your feedback?
Thanks a lot, see you
Olivier Dony (Odoo) (odo-openerp) wrote:
On 09/20/2011 07:21 PM, Sébastien BEAU - http://
> Can you take a look on my merge? I test it with magentoerpconnect and
> it's works perfectly.
Hi Sebastien,
I've had a quick look and it looks very nice, with the general idea
properly implemented.
We'll do a final review with Thu to give you some fine-tuning feedback
on our current coding guidelines, and double-check all the performance
aspects, and then it should be ready for merging into 6.1.
Do you have any kind of numbers for magentoerpconnect, comparing the
overall performance of this approach versus the former approach?
Thanks for your excellent work!
Sébastien BEAU - http://www.akretion.com (sebastien.beau) wrote:
Regarding the performance, I didn't take the time to make any tests. If I have the time I will try to make some.
If you want to see how I use it in magentoerpconnect, you can look at the trunk version:
https:/
In the last commit I fixed some bugs with unicode and also removed the dependency on base_json_field.
Also use the trunk version of extra-addons.
Thanks for taking the time to review it. It's a huge pleasure for me to contribute to openobject.
Sébastien BEAU - http://www.akretion.com (sebastien.beau) wrote:
Hi Olivier
I made some tests and here are the results. Each time I repeated the test 5 times.
Each product uses only 20 attributes; there are 100 products in the database.
The write method is tested by writing to only 1 product 10 times.
The read method is tested by:
1 - reading the "tree fields" on 80 products 20 times
Field read for the tree view : ['virtual_
2 - reading the "form fields" on 1 product 20 times
Field read for the form view : ['warranty', 'property_
I made 3 groups of tests: one with a database that has 80 attributes, one with 650 attributes and the last with 1500 attributes.
Each time a script generates the database, so the DB has the same content (whether using the sparse field or not).
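For reference, the measurement protocol above (each figure is the total time of N operations, repeated 5 times) can be reproduced with a small harness like this. It is a generic sketch, not Sébastien's actual script, and the `product_obj`/`cr`/`uid` names in the final comment are hypothetical ORM handles:

```python
import time

def bench(operation, n_ops, repeats=5):
    """Time `n_ops` calls of `operation`, repeated `repeats` times.

    Returns one total duration in seconds per repeat, mirroring the
    lists of numbers in the results that follow.
    """
    durations = []
    for _ in range(repeats):
        start = time.perf_counter()
        for _ in range(n_ops):
            operation()
        durations.append(time.perf_counter() - start)
    return durations

# e.g. bench(lambda: product_obj.write(cr, uid, [product_id], values), n_ops=10)
# would reproduce the "5 tests of 10 writes" protocol
```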
=======
Write: 5 tests of 10 writes on a product with 20 attributes
no sparse field : [0.697422312, 0.682509872, 0.525129595, 0.60792386, 0.669821871, 0.59929823]
with sparse field : [2.252451595, 2.449556106, 2.501016635, 2.502894727, 2.505444089, 2.461155705]
Read: 5 tests reading the 80 products 20 times with the tree view
no sparse field : [10.576385207, 10.360416698, 10.570336236, 10.274077651, 10.216026342, 10.361805433]
with sparse field : [8.088113479, 8.182688259, 8.114095348, 8.086397388, 8.339260072, 8.363422159]
Read: 5 tests reading one product 20 times with the form view
no sparse field : [2.894396522, 2.847947139, 3.07496988, 2.78259301, 2.807981747, 2.854579443]
with sparse field : [2.712508677, 2.581995652, 2.60738759, 2.599448655, 2.779130037, 2.635503123]
Reading products with sparse fields is faster.
Writing products with sparse fields is slower.
=======
Write: 5 tests of 10 writes on a product with 20 attributes
no sparse field : [2.09946129, 2.46996635, 2.338545917, 2.556698342, 2.609010369, 2.415680962]
with sparse field : [3.475333284, 3.484820221, 3.685496297, 3.234880128, 3.557445166, 3.632265353]
Read: 5 tests reading the 80 products 20 times with the tree view
no sparse field : [27.449735591, 27.80642709...
Sébastien BEAU - http://www.akretion.com (sebastien.beau) wrote:
Just to note that I am working for a customer that has a catalog of 55,000 products, so in the coming weeks I will find all the bugs due to my change.
Best Regards
Olivier Dony (Odoo) (odo-openerp) wrote:
Excellent news, the live battle-testing will be very useful indeed!
Thanks for the insightful performance analysis too, this is great stuff for the final review as well.
Sébastien BEAU - http://www.akretion.com (sebastien.beau) wrote:
Hi, after the tests done it looks like it is working pretty well.
I just want to note that when you update a database from v6 to trunk, you have to run this SQL query first:
ALTER TABLE ir_model_fields ADD column serialization_
I don't know if I should add this query somewhere in my code (I already add it when a database is created from scratch), or if you should add it in your migration script.
Best Regards
Olivier Dony (Odoo) (odo-openerp) wrote:
On 10/05/2011 11:39 AM, Sébastien BEAU - http://
> I don't know If I should add this query somewhere my code (I already
> add it when you create a database from scrach). Or If you should add
> it in your migration script.
You don't need to add it anywhere except for the initial DB setup, as
you did. I forwarded your message to our migration team, so they will
integrate this step at the start of the migration process.
Thanks!
Guewen Baconnier @ Camptocamp (gbaconnier-c2c) wrote:
Hi all,
Any chance to see this stunning feature merged in 6.1?
Thanks
Guewen
Olivier Dony (Odoo) (odo-openerp) wrote:
On 2011-11-02, Guewen Baconnier @ Camptocamp wrote:
> Any chance to view this stunning feature merged in the 6.1 ?
Yes, Sébastien did a great job as we said in the previous comments; we will finalize the merge proposal soon.
Guewen Baconnier @ Camptocamp (gbaconnier-c2c) wrote:
Just a quick note: I made a little patch to delete the keys of the sparse field when a None value is sent to the method _fnct_write. We had the problem with magentoerpconnect when retrieving a product from Magento: a lot of attributes in the response from Magento returned a null value, and so null values were stored in the JSON.
We considered replacing None with False, but we thought that deleting the key was the best way to resolve this because, as a side effect, it cleans the JSON.
My patch is here :
http://
Note: a None value cannot be set on a sparse field by a client, only by the server.
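The behaviour Guewen describes, dropping the key instead of storing a null, can be sketched like this. It is a simplified, standalone version; the real patch operates on the already-deserialized store inside the sparse field's _fnct_write:

```python
import json

def write_sparse_value(store_json, field_name, value):
    """Update one key in the serialized store; a None value deletes the key."""
    store = json.loads(store_json or '{}')
    if value is None:
        # dropping the key keeps the JSON clean instead of storing null
        store.pop(field_name, None)
    else:
        store[field_name] = value
    return json.dumps(store)
```

Deleting on None means a flood of null attributes from Magento leaves the JSON store untouched rather than progressively filling it with dead keys.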
Sébastien BEAU - http://www.akretion.com (sebastien.beau) wrote:
Hi Guewen, your patch is now merged in my branch ;) Thank you for this improvement.
@Olivier I did the job, but you gave me the idea, thank you ;)
Maxime Chambreuil (http://www.savoirfairelinux.com) (max3903) wrote:
Hello,
Can someone resolve the conflict in orm.py?
Thank you !
Olivier Dony (Odoo) (odo-openerp) wrote:
Hi,
I've finally merged this work, including Guewen's latest improvements (proper handling of deleted records). It seems to play nicely with the rest of the system, and the performance looks reasonable.
A few minor code improvements were also made, like avoiding the use of magic methods/properties when possible (e.g. don't test __class__, use isinstance()), etc.
Any further testing and feedback would be welcome of course...
Thanks again to Sebastien and everyone who contributed, and happy holidays!
Preview Diff
1 | === modified file 'openerp/addons/base/base.sql' |
2 | --- openerp/addons/base/base.sql 2011-09-30 09:58:14 +0000 |
3 | +++ openerp/addons/base/base.sql 2011-11-03 12:00:29 +0000 |
4 | @@ -49,6 +49,8 @@ |
5 | primary key(id) |
6 | ); |
7 | |
8 | +ALTER TABLE ir_model_fields ADD column serialization_field_id int references ir_model_fields on delete cascade; |
9 | + |
10 | |
11 | ------------------------------------------------------------------------- |
12 | -- Actions |
13 | |
14 | === modified file 'openerp/addons/base/ir/ir.xml' |
15 | --- openerp/addons/base/ir/ir.xml 2011-10-27 07:10:00 +0000 |
16 | +++ openerp/addons/base/ir/ir.xml 2011-11-03 12:00:29 +0000 |
17 | @@ -1018,6 +1018,8 @@ |
18 | <field name="selection" attrs="{'required': [('ttype','in',['selection','reference'])], 'readonly': [('ttype','not in',['selection','reference'])]}"/> |
19 | <field name="size" attrs="{'required': [('ttype','in',['char','reference'])], 'readonly': [('ttype','not in',['char','reference'])]}"/> |
20 | <field name="domain" attrs="{'readonly': [('relation','=','')]}"/> |
21 | + <field name="model_id" invisible="1"/> |
22 | + <field name="serialization_field_id" attrs="{'readonly': [('state','=','base')]}" domain = "[('ttype','=','serialized'), ('model_id', '=', model_id)]"/> |
23 | </group> |
24 | <group colspan="2" col="2"> |
25 | <field name="required"/> |
26 | @@ -1130,6 +1132,8 @@ |
27 | <field name="selection" attrs="{'required': [('ttype','in',['selection','reference'])], 'readonly': [('ttype','not in',['selection','reference'])]}"/> |
28 | <field name="size" attrs="{'required': [('ttype','in',['char','reference'])], 'readonly': [('ttype','not in',['char','reference'])]}"/> |
29 | <field name="domain" attrs="{'readonly': [('relation','=','')]}"/> |
30 | + <field name="model_id" invisible="1"/> |
31 | + <field name="serialization_field_id" attrs="{'readonly': [('state','=','base')]}" domain = "[('ttype','=','serialized'), ('model_id', '=', model_id)]"/> |
32 | </group> |
33 | |
34 | <group colspan="2" col="2"> |
35 | |
36 | === modified file 'openerp/addons/base/ir/ir_model.py' |
37 | --- openerp/addons/base/ir/ir_model.py 2011-10-17 13:37:02 +0000 |
38 | +++ openerp/addons/base/ir/ir_model.py 2011-11-03 12:00:29 +0000 |
39 | @@ -207,6 +207,7 @@ |
40 | 'view_load': fields.boolean('View Auto-Load'), |
41 | 'selectable': fields.boolean('Selectable'), |
42 | 'modules': fields.function(_in_modules, method=True, type='char', size=128, string='In modules', help='List of modules in which the field is defined'), |
43 | + 'serialization_field_id': fields.many2one('ir.model.fields', 'Serialization Field', domain = "[('ttype','=','serialized')]", ondelete='cascade'), |
44 | } |
45 | _rec_name='field_description' |
46 | _defaults = { |
47 | @@ -299,6 +300,14 @@ |
48 | if context and context.get('manual',False): |
49 | vals['state'] = 'manual' |
50 | |
51 | + #For the moment renaming a sparse field or changing the storing system is not allowed. This will be done later |
52 | + if 'serialization_field_id' in vals or 'name' in vals: |
53 | + for field in self.browse(cr, user, ids, context=context): |
54 | + if 'serialization_field_id' in vals and field.serialization_field_id.id != vals['serialization_field_id']: |
55 | + raise except_orm(_('Error!'), _('Changing the storing system for the field "%s" is not allowed.'%field.name)) |
56 | + if field.serialization_field_id and (field.name != vals['name']): |
57 | + raise except_orm(_('Error!'), _('Renaming the sparse field "%s" is not allowed'%field.name)) |
58 | + |
59 | column_rename = None # if set, *one* column can be renamed here |
60 | obj = None |
61 | models_patch = {} # structs of (obj, [(field, prop, change_to),..]) |
62 | |
63 | === modified file 'openerp/osv/fields.py' |
64 | --- openerp/osv/fields.py 2011-10-21 14:28:36 +0000 |
65 | +++ openerp/osv/fields.py 2011-11-03 12:00:29 +0000 |
66 | @@ -45,6 +45,7 @@ |
67 | import openerp.netsvc as netsvc |
68 | import openerp.tools as tools |
69 | from openerp.tools.translate import _ |
70 | +import json |
71 | |
72 | def _symbol_set(symb): |
73 | if symb == None or symb == False: |
74 | @@ -1178,6 +1179,99 @@ |
75 | obj_name = f['relation'] |
76 | self._relations[-1]['relation'] = f['relation'] |
77 | |
78 | + |
79 | +class sparse(function): |
80 | + |
81 | + def convert_value(self, obj, cr, uid, record, value, read_value, context=None): |
82 | + """ |
83 | + + For a many2many field, a list of tuples is expected. |
84 | + Here is the list of tuple that are accepted, with the corresponding semantics :: |
85 | + |
86 | + (0, 0, { values }) link to a new record that needs to be created with the given values dictionary |
87 | + (1, ID, { values }) update the linked record with id = ID (write *values* on it) |
88 | + (2, ID) remove and delete the linked record with id = ID (calls unlink on ID, that will delete the object completely, and the link to it as well) |
89 | + (3, ID) cut the link to the linked record with id = ID (delete the relationship between the two objects but does not delete the target object itself) |
90 | + (4, ID) link to existing record with id = ID (adds a relationship) |
91 | + (5) unlink all (like using (3,ID) for all linked records) |
92 | + (6, 0, [IDs]) replace the list of linked IDs (like using (5) then (4,ID) for each ID in the list of IDs) |
93 | + |
94 | + Example: |
95 | + [(6, 0, [8, 5, 6, 4])] sets the many2many to ids [8, 5, 6, 4] |
96 | + |
97 | + + For a one2many field, a lits of tuples is expected. |
98 | + Here is the list of tuple that are accepted, with the corresponding semantics :: |
99 | + |
100 | + (0, 0, { values }) link to a new record that needs to be created with the given values dictionary |
101 | + (1, ID, { values }) update the linked record with id = ID (write *values* on it) |
102 | + (2, ID) remove and delete the linked record with id = ID (calls unlink on ID, that will delete the object completely, and the link to it as well) |
103 | + |
104 | + Example: |
105 | + [(0, 0, {'field_name':field_value_record1, ...}), (0, 0, {'field_name':field_value_record2, ...})] |
106 | + """ |
107 | + |
108 | + if self._type == 'many2many': |
109 | + #NOTE only the option (0, 0, { values }) is supported for many2many |
110 | + if value[0][0] == 6: |
111 | + return value[0][2] |
112 | + |
113 | + elif self._type == 'one2many': |
114 | + if not read_value: |
115 | + read_value=[] |
116 | + relation_obj = obj.pool.get(self.relation) |
117 | + for vals in value: |
118 | + if vals[0] == 0: |
119 | + read_value.append(relation_obj.create(cr, uid, vals[2], context=context)) |
120 | + elif vals[0] == 1: |
121 | + relation_obj.write(cr, uid, vals[1], vals[2], context=context) |
122 | + elif vals[0] == 2: |
123 | + relation_obj.unlink(cr, uid, vals[1]) |
124 | + read_value.remove(vals[1]) |
125 | + return read_value |
126 | + return value |
127 | + |
128 | + |
129 | + def _fnct_write(self,obj,cr, uid, ids, field_name, value, args, context=None): |
130 | + if not type(ids) == list: |
131 | + ids = [ids] |
132 | + records = obj.browse(cr, uid, ids, context=context) |
133 | + for record in records: |
134 | + # grab serialized value as object - already deserialized |
135 | + serialized = record.__getattr__(self.serialization_field) |
136 | + # we have to delete the key in the json when the value is null |
137 | + if value is None: |
138 | + if field_name in serialized: |
139 | + del serialized[field_name] |
140 | + else: |
141 | + # nothing to do, we dont wan't to store the key with a null value |
142 | + continue |
143 | + else: |
144 | + serialized[field_name] = self.convert_value(obj, cr, uid, record, value, serialized.get(field_name), context=context) |
145 | + obj.write(cr, uid, ids, {self.serialization_field: serialized}, context=context) |
146 | + return True |
147 | + |
148 | + def _fnct_read(self, obj, cr, uid, ids, field_names, args, context=None): |
149 | + results={} |
150 | + records = obj.browse(cr, uid, ids, context=context) |
151 | + for record in records: |
152 | + # grab serialized value as object - already deserialized |
153 | + serialized = record.__getattr__(self.serialization_field) |
154 | + results[record.id] ={} |
155 | + for field_name in field_names: |
156 | + if obj._columns[field_name]._type in ['one2many']: |
157 | + results[record.id].update({field_name : serialized.get(field_name, [])}) |
158 | + else: |
159 | + results[record.id].update({field_name : serialized.get(field_name)}) |
160 | + return results |
161 | + |
162 | + def __init__(self, serialization_field, **kwargs): |
163 | + self.serialization_field = serialization_field |
164 | + #assert serialization_field._type == 'serialized' |
165 | + return super(sparse, self).__init__(self._fnct_read, fnct_inv=self._fnct_write, multi='_json_multi', method=True, **kwargs) |
166 | + |
167 | + |
168 | + |
169 | + |
170 | + |
171 | # --------------------------------------------------------- |
172 | # Dummy fields |
173 | # --------------------------------------------------------- |
174 | @@ -1200,14 +1294,26 @@ |
175 | # --------------------------------------------------------- |
176 | # Serialized fields |
177 | # --------------------------------------------------------- |
178 | + |
179 | class serialized(_column): |
180 | - def __init__(self, string='unknown', serialize_func=repr, deserialize_func=eval, type='text', **args): |
181 | - self._serialize_func = serialize_func |
182 | - self._deserialize_func = deserialize_func |
183 | - self._type = type |
184 | - self._symbol_set = (self._symbol_c, self._serialize_func) |
185 | - self._symbol_get = self._deserialize_func |
186 | - super(serialized, self).__init__(string=string, **args) |
187 | + """ A field able to store an arbitrary python data structure. |
188 | + |
189 | + Note: only plain components allowed. |
190 | + """ |
191 | + |
192 | + def _symbol_set_struct(val): |
193 | + return json.dumps(val) |
194 | + |
195 | + def _symbol_get_struct(self, val): |
196 | + return json.loads(val or '{}') |
197 | + |
198 | + _prefetch = False |
199 | + _type = 'serialized' |
200 | + |
201 | + _symbol_c = '%s' |
202 | + _symbol_f = _symbol_set_struct |
203 | + _symbol_set = (_symbol_c, _symbol_f) |
204 | + _symbol_get = _symbol_get_struct |
205 | |
206 | # TODO: review completly this class for speed improvement |
207 | class property(function): |
208 | |
209 | === modified file 'openerp/osv/orm.py' |
210 | --- openerp/osv/orm.py 2011-10-19 11:46:18 +0000 |
211 | +++ openerp/osv/orm.py 2011-11-03 12:00:29 +0000 |
212 | @@ -518,6 +518,7 @@ |
213 | |
214 | __repr__ = __str__ |
215 | |
216 | +<<<<<<< TREE |
217 | def refresh(self): |
218 | """Force refreshing this browse_record's data and all the data of the |
219 | records that belong to the same cache, by emptying the cache completely, |
220 | @@ -588,6 +589,60 @@ |
221 | pg_type = ('varchar', pg_varchar()) |
222 | else: |
223 | pg_type = get_pg_type(f, getattr(fields, f._type)) |
224 | +======= |
225 | + |
226 | +def get_pg_type(f): |
227 | + """ |
228 | + returns a tuple |
229 | + (type returned by postgres when the column was created, type expression to create the column) |
230 | + """ |
231 | + |
232 | + type_dict = { |
233 | + fields.boolean: 'bool', |
234 | + fields.integer: 'int4', |
235 | + fields.integer_big: 'int8', |
236 | + fields.text: 'text', |
237 | + fields.date: 'date', |
238 | + fields.time: 'time', |
239 | + fields.datetime: 'timestamp', |
240 | + fields.binary: 'bytea', |
241 | + fields.many2one: 'int4', |
242 | + fields.serialized: 'text', |
243 | + } |
244 | + if type(f) in type_dict: |
245 | + f_type = (type_dict[type(f)], type_dict[type(f)]) |
246 | + elif isinstance(f, fields.float): |
247 | + if f.digits: |
248 | + f_type = ('numeric', 'NUMERIC') |
249 | + else: |
250 | + f_type = ('float8', 'DOUBLE PRECISION') |
251 | + elif isinstance(f, (fields.char, fields.reference)): |
252 | + f_type = ('varchar', 'VARCHAR(%d)' % (f.size,)) |
253 | + elif isinstance(f, fields.selection): |
254 | + if isinstance(f.selection, list) and isinstance(f.selection[0][0], (str, unicode)): |
255 | + f_size = reduce(lambda x, y: max(x, len(y[0])), f.selection, f.size or 16) |
256 | + elif isinstance(f.selection, list) and isinstance(f.selection[0][0], int): |
257 | + f_size = -1 |
258 | + else: |
259 | + f_size = getattr(f, 'size', None) or 16 |
260 | + |
261 | + if f_size == -1: |
262 | + f_type = ('int4', 'INTEGER') |
263 | + else: |
264 | + f_type = ('varchar', 'VARCHAR(%d)' % f_size) |
265 | + elif isinstance(f, fields.function) and eval('fields.'+(f._type), globals()) in type_dict: |
266 | + t = eval('fields.'+(f._type), globals()) |
267 | + f_type = (type_dict[t], type_dict[t]) |
268 | + elif isinstance(f, fields.function) and f._type == 'float': |
269 | + if f.digits: |
270 | + f_type = ('numeric', 'NUMERIC') |
271 | + else: |
272 | + f_type = ('float8', 'DOUBLE PRECISION') |
273 | + elif isinstance(f, fields.function) and f._type == 'selection': |
274 | + f_type = ('text', 'text') |
275 | + elif isinstance(f, fields.function) and f._type == 'char': |
276 | + f_type = ('varchar', 'VARCHAR(%d)' % (f.size)) |
277 | +>>>>>>> MERGE-SOURCE |
278 | else: |
279 | logging.getLogger('orm').warn('%s type not supported!', field_type) |
280 | pg_type = None |
281 | @@ -755,7 +810,17 @@ |
282 | for rec in cr.dictfetchall(): |
283 | cols[rec['name']] = rec |
284 | |
285 | - for (k, f) in self._columns.items(): |
286 | + ir_model_fields_obj = self.pool.get('ir.model.fields') |
287 | + |
288 | + high_priority_items=[] |
289 | + low_priority_items=[] |
290 | + #sparse field should be created at the end, indeed this field depend of other field |
291 | + for item in self._columns.items(): |
292 | + if item[1].__class__ == fields.sparse: |
293 | + low_priority_items.append(item) |
294 | + else: |
295 | + high_priority_items.append(item) |
296 | + for (k, f) in high_priority_items + low_priority_items: |
297 | vals = { |
298 | 'model_id': model_id, |
299 | 'model': self._name, |
300 | @@ -770,7 +835,14 @@ |
301 | 'selectable': (f.selectable and 1) or 0, |
302 | 'translate': (f.translate and 1) or 0, |
303 | 'relation_field': (f._type=='one2many' and isinstance(f, fields.one2many)) and f._fields_id or '', |
304 | + 'serialization_field_id': None, |
305 | } |
306 | + if 'serialization_field' in dir(f): |
307 | + serialization_field_id = ir_model_fields_obj.search(cr, 1, [('model','=',vals['model']), ('name', '=', f.serialization_field)]) |
308 | + if not serialization_field_id: |
309 | + raise except_orm(_('Error'), _("The field %s does not exist!" %f.serialization_field)) |
310 | + vals['serialization_field_id'] = serialization_field_id[0] |
311 | + |
312 | # When its a custom field,it does not contain f.select |
313 | if context.get('field_state', 'base') == 'manual': |
314 | if context.get('field_name', '') == k: |
315 | @@ -785,13 +857,13 @@ |
316 | vals['id'] = id |
317 | cr.execute("""INSERT INTO ir_model_fields ( |
318 | id, model_id, model, name, field_description, ttype, |
319 | - relation,view_load,state,select_level,relation_field, translate |
320 | + relation,view_load,state,select_level,relation_field, translate, serialization_field_id |
321 | ) VALUES ( |
322 | - %s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s |
323 | + %s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s |
324 | )""", ( |
325 | id, vals['model_id'], vals['model'], vals['name'], vals['field_description'], vals['ttype'], |
326 | vals['relation'], bool(vals['view_load']), 'base', |
327 | - vals['select_level'], vals['relation_field'], bool(vals['translate']) |
328 | + vals['select_level'], vals['relation_field'], bool(vals['translate']), vals['serialization_field_id'] |
329 | )) |
330 | if 'module' in context: |
331 | name1 = 'field_' + self._table + '_' + k |
332 | @@ -808,12 +880,12 @@ |
333 | cr.commit() |
334 | cr.execute("""UPDATE ir_model_fields SET |
335 | model_id=%s, field_description=%s, ttype=%s, relation=%s, |
336 | - view_load=%s, select_level=%s, readonly=%s ,required=%s, selectable=%s, relation_field=%s, translate=%s |
337 | + view_load=%s, select_level=%s, readonly=%s ,required=%s, selectable=%s, relation_field=%s, translate=%s, serialization_field_id=%s |
338 | WHERE |
339 | model=%s AND name=%s""", ( |
340 | vals['model_id'], vals['field_description'], vals['ttype'], |
341 | vals['relation'], bool(vals['view_load']), |
342 | - vals['select_level'], bool(vals['readonly']), bool(vals['required']), bool(vals['selectable']), vals['relation_field'], bool(vals['translate']), vals['model'], vals['name'] |
343 | + vals['select_level'], bool(vals['readonly']), bool(vals['required']), bool(vals['selectable']), vals['relation_field'], bool(vals['translate']), vals['serialization_field_id'], vals['model'], vals['name'] |
344 | )) |
345 | break |
346 | cr.commit() |
347 | @@ -3101,6 +3173,118 @@ |
348 | cr.execute(line2) |
349 | cr.commit() |
350 | |
351 | +<<<<<<< TREE |
352 | +======= |
353 | + |
354 | + @classmethod |
355 | + def createInstance(cls, pool, cr): |
356 | + return cls.makeInstance(pool, cr, ['_columns', '_defaults', |
357 | + '_inherits', '_constraints', '_sql_constraints']) |
358 | + |
359 | + def __init__(self, pool, cr): |
360 | + """ |
361 | + |
362 | + - copy the stored fields' functions in the osv_pool, |
363 | + - update the _columns with the fields found in ir_model_fields, |
364 | + - ensure there is a many2one for each _inherits'd parent, |
365 | + - update the children's _columns, |
366 | + - give a chance to each field to initialize itself. |
367 | + |
368 | + """ |
369 | + super(orm, self).__init__(pool, cr) |
370 | + |
371 | + if not hasattr(self, '_log_access'): |
372 | + # if not access is not specify, it is the same value as _auto |
373 | + self._log_access = getattr(self, "_auto", True) |
374 | + |
375 | + self._columns = self._columns.copy() |
376 | + for store_field in self._columns: |
377 | + f = self._columns[store_field] |
378 | + if hasattr(f, 'digits_change'): |
379 | + f.digits_change(cr) |
380 | + if not isinstance(f, fields.function): |
381 | + continue |
382 | + if not f.store: |
383 | + continue |
384 | + if self._columns[store_field].store is True: |
385 | + sm = {self._name: (lambda self, cr, uid, ids, c={}: ids, None, 10, None)} |
386 | + else: |
387 | + sm = self._columns[store_field].store |
388 | + for object, aa in sm.items(): |
389 | + if len(aa) == 4: |
390 | + (fnct, fields2, order, length) = aa |
391 | + elif len(aa) == 3: |
392 | + (fnct, fields2, order) = aa |
393 | + length = None |
394 | + else: |
395 | + raise except_orm('Error', |
396 | + ('Invalid function definition %s in object %s !\nYou must use the definition: store={object:(fnct, fields, priority, time length)}.' % (store_field, self._name))) |
397 | + self.pool._store_function.setdefault(object, []) |
398 | + ok = True |
399 | + for x, y, z, e, f, l in self.pool._store_function[object]: |
400 | + if (x==self._name) and (y==store_field) and (e==fields2): |
401 | + if f == order: |
402 | + ok = False |
403 | + if ok: |
404 | + self.pool._store_function[object].append( (self._name, store_field, fnct, fields2, order, length)) |
405 | + self.pool._store_function[object].sort(lambda x, y: cmp(x[4], y[4])) |
406 | + |
407 | + for (key, _, msg) in self._sql_constraints: |
408 | + self.pool._sql_error[self._table+'_'+key] = msg |
409 | + |
410 | + # Load manual fields |
411 | + |
412 | + cr.execute("SELECT id FROM ir_model_fields WHERE name=%s AND model=%s", ('state', 'ir.model.fields')) |
413 | + if cr.fetchone(): |
414 | + cr.execute('SELECT * FROM ir_model_fields WHERE model=%s AND state=%s', (self._name, 'manual')) |
415 | + for field in cr.dictfetchall(): |
416 | + if field['name'] in self._columns: |
417 | + continue |
418 | + attrs = { |
419 | + 'string': field['field_description'], |
420 | + 'required': bool(field['required']), |
421 | + 'readonly': bool(field['readonly']), |
422 | + 'domain': eval(field['domain']) if field['domain'] else None, |
423 | + 'size': field['size'], |
424 | + 'ondelete': field['on_delete'], |
425 | + 'translate': (field['translate']), |
426 | + 'manual': True, |
427 | + #'select': int(field['select_level']) |
428 | + } |
429 | + |
430 | + if field['serialization_field_id']: |
431 | + cr.execute('SELECT name FROM ir_model_fields WHERE id=%s', (field['serialization_field_id'],)) |
432 | + attrs.update({'serialization_field': cr.fetchone()[0], 'type': field['ttype']}) |
433 | + if field['ttype'] in ['many2one', 'one2many', 'many2many']: |
434 | + attrs.update({'relation': field['relation']}) |
435 | + self._columns[field['name']] = fields.sparse(**attrs) |
436 | + elif field['ttype'] == 'selection': |
437 | + self._columns[field['name']] = fields.selection(eval(field['selection']), **attrs) |
438 | + elif field['ttype'] == 'reference': |
439 | + self._columns[field['name']] = fields.reference(selection=eval(field['selection']), **attrs) |
440 | + elif field['ttype'] == 'many2one': |
441 | + self._columns[field['name']] = fields.many2one(field['relation'], **attrs) |
442 | + elif field['ttype'] == 'one2many': |
443 | + self._columns[field['name']] = fields.one2many(field['relation'], field['relation_field'], **attrs) |
444 | + elif field['ttype'] == 'many2many': |
445 | + _rel1 = field['relation'].replace('.', '_') |
446 | + _rel2 = field['model'].replace('.', '_') |
447 | + _rel_name = 'x_%s_%s_%s_rel' % (_rel1, _rel2, field['name']) |
448 | + self._columns[field['name']] = fields.many2many(field['relation'], _rel_name, 'id1', 'id2', **attrs) |
449 | + else: |
450 | + self._columns[field['name']] = getattr(fields, field['ttype'])(**attrs) |
451 | + self._inherits_check() |
452 | + self._inherits_reload() |
453 | + if not self._sequence: |
454 | + self._sequence = self._table + '_id_seq' |
455 | + for k in self._defaults: |
456 | + assert (k in self._columns) or (k in self._inherit_fields), 'Default function defined in %s but field %s does not exist !' % (self._name, k,) |
457 | + for f in self._columns: |
458 | + self._columns[f].restart() |
459 | + |
460 | + __init__.__doc__ = orm_template.__init__.__doc__ + __init__.__doc__ |
461 | + |
462 | +>>>>>>> MERGE-SOURCE |
463 | # |
464 | # Update objects that uses this one to update their _inherits fields |
465 | # |
Hi Sebastien,
I had a quick look at your base_json_fields module and it strikes me as quite complicated for what it tries to accomplish. I think the whole principle of having sparse fields stored in a serialization field could be done in a very simple and clean manner, if properly supported by a dedicated column type. This would also avoid having to subclass osv objects and the orm with hacks to enable it.
The change you are proposing seems simple enough indeed, but it makes the osv/business objects sort of 'in control' of what happens to some columns, something that should really be encapsulated in the ORM layer.
The following is not a review of your patch, but a proposal for an alternative approach.
What about creating a new osv.fields type, e.g. fields.sparse, that would be a subclass of fields.function? This field would automatically behave as a function field in "multi" mode, making reading and writing several fields on the same osv very cheap. The field would have a mandatory "serialization_field" parameter, indicating the name of another real field serving as database backend.
The serialization field could be a normal fields.text, or perhaps we could rewrite/improve/rename fields.serialized to perform the [de]serialization using simplejson instead of repr/eval, effectively abstracting that operation away. Ultimately, the fields.sparse (or whatever we call it) could be very simple, something like (this is pseudo-code, right ;-)):
    class sparse(function):

        def __init__(self, serialization_field=None, **kwargs):
            assert serialization_field is proper type
            self.serialization_field = serialization_field
            return super(..., multi='_json_multi')

        def _fnct_write(model, cr, uid, ids, field_name, field_value, fnct_inv_arg, context):
            records = model.browse(ids)
            for record in records:
                # grab serialized value as object - already deserialized
                serialized = record.getattr(self.serialization_field)
                serialized[field_name] = field_value
                model.write({self.serialization_field: serialized})
            return True

        def _fnct_read(model, cr, uid, ids, field_names, arg, context=None):
            records = model.browse(ids)
            for record in records:
                # grab serialized value as object - already deserialized
                serialized = record.getattr(self.serialization_field)
                for field_name in field_names:
                    results[record.id][field_name] = serialized.get(field_name)
            return results
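The [de]serialization step described above could be as simple as a JSON round-trip. A minimal standalone sketch (plain Python; `json` stands in for the simplejson library mentioned above, and the function names are illustrative, not part of the proposed API):

    import json

    def serialize(values):
        # Dump the sparse-field dict to a JSON string for the backing text column.
        return json.dumps(values)

    def deserialize(text):
        # An empty/NULL column deserializes to an empty dict.
        return json.loads(text) if text else {}

    # Round-trip: write one sparse value, read it back.
    stored = serialize({'x_number_of_pixel': 12})
    assert deserialize(stored)['x_number_of_pixel'] == 12

Unlike repr/eval, this keeps the stored value language-neutral and avoids evaluating arbitrary database content as Python code.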
Then if we simply make fields.sparse available as a custom field, and show the "serialization_field" on the form view of ir.model.fields, we get a very simple (and generic) facility for sparse attributes.
Storing the magento sparse attributes using fields.sparse would be as simple as adding one fields.serialized column in products, and creating the fields.sparse entries in ir.model.fields on demand. No hacking of osv or orm would be needed at all, as function fields are already properly handled (including during updates).
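To make the intended read/write behaviour concrete outside the ORM, here is a hypothetical in-memory model of the _fnct_read/_fnct_write pair sketched above. The class and method names (SparseStore, write, read) are illustrative only; a real fields.sparse would go through the ORM's function-field machinery:

    import json

    class SparseStore:
        """Simulates a record whose sparse fields live in one JSON text column."""

        def __init__(self):
            self.serialized_column = None  # the fields.serialized backend column

        def write(self, field_name, value):
            # Deserialize, update one key, serialize back - like _fnct_write.
            data = json.loads(self.serialized_column) if self.serialized_column else {}
            data[field_name] = value
            self.serialized_column = json.dumps(data)

        def read(self, field_names):
            # One deserialization serves all requested fields - like "multi" mode.
            data = json.loads(self.serialized_column) if self.serialized_column else {}
            # Missing keys read as None, like an unset sparse field.
            return {name: data.get(name) for name in field_names}

    record = SparseStore()
    record.write('x_number_of_pixel', 12)
    record.write('x_color', 'black')
    assert record.read(['x_number_of_pixel', 'x_missing']) == {
        'x_number_of_pixel': 12, 'x_missing': None}

Note how a single deserialization answers a read of any number of sparse fields, which is why the "multi" function-field mode makes this cheap.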
BTW, the double read for function fields is quite negligible, as it all happens server-side. What generally matters mo...