Merge lp:~wgrant/launchpad/tm-suggest-constant into lp:launchpad
Status: Merged
Merged at revision: 17086
Proposed branch: lp:~wgrant/launchpad/tm-suggest-constant
Merge into: lp:launchpad
Diff against target: 374 lines (+211/-30), 6 files modified
  lib/lp/translations/browser/tests/test_pofile_view.py (+71/-0)
  lib/lp/translations/browser/translationmessage.py (+25/-26)
  lib/lp/translations/interfaces/potemplate.py (+3/-0)
  lib/lp/translations/interfaces/translationmessage.py (+22/-0)
  lib/lp/translations/model/potemplate.py (+8/-0)
  lib/lp/translations/model/translationmessage.py (+82/-4)
To merge this branch: bzr merge lp:~wgrant/launchpad/tm-suggest-constant
Related bugs: (none)
Reviewer: Celso Providelo (community), Approve
Review via email: mp+225260@code.launchpad.net
Commit message
Preload suggestion details in POFile:+translate to eliminate per-suggestion queries. But there are still lots of per-message queries.
Description of the change
Drop POFile:+translate from O(messages + suggestions) queries to O(messages) by doing lots of fairly boring preloading. The only thing with significant complexity is preloadPOFilesAndSequence.
We still do the suggestion calculation inside the per-message subviews, so the preloading happens once per message. But it's a good step in reducing the query count of this page, and we can pull the suggestion calculation and preloading up into the enclosing view later.
I've added a query count test for POFile:+translate, which tests all the preloading I've done so far. It doesn't currently add new messages, just new suggestions, since extra messages still generate extra queries.
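A query-count test of this kind renders the page at two data sizes and asserts the statement count stays flat. Sketched generically below; Launchpad's real test uses its own statement recorder, and every name here is a stand-in:

```python
class QueryRecorder:
    """Minimal stand-in for a SQL statement recorder."""

    def __init__(self):
        self.statements = []

    def record(self, sql):
        self.statements.append(sql)


def render_translate_page(recorder, n_suggestions):
    """Simulate rendering POFile:+translate with preloaded suggestions:
    suggestion details come from one bulk query, so adding suggestions
    must not add statements."""
    recorder.record("SELECT ... FROM suggestions WHERE id IN (...)")  # bulk preload
    return ["suggestion-%d" % i for i in range(n_suggestions)]


# Render with a few suggestions, then with many, and require a flat count.
baseline = QueryRecorder()
render_translate_page(baseline, n_suggestions=2)

loaded = QueryRecorder()
render_translate_page(loaded, n_suggestions=20)

assert len(loaded.statements) == len(baseline.statements)
print("query count flat:", len(loaded.statements))  # query count flat: 1
```

The same structure extends to the messages axis once per-message queries are preloaded too, which is why the current test adds only suggestions.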
Nice change; introducing preloadPOFilesAndSequence really solves the problem of loading suggestions for a given message.
We can land it ASAP.
(thinking out loud here, feel free to ignore me)
That said, I don't see a clear path to going further and preloading this across multiple messages. Maybe we won't need it (this step alone may bring enough benefit), or we could investigate something at the model level to make it possible.