Merge lp:~jtv/launchpad/bug-662552-defer-potmsgset-filter into lp:launchpad
Status: | Merged |
---|---|
Approved by: | Brad Crittenden |
Approved revision: | no longer in the source branch. |
Merged at revision: | 11807 |
Proposed branch: | lp:~jtv/launchpad/bug-662552-defer-potmsgset-filter |
Merge into: | lp:launchpad |
Diff against target: |
24 lines (+3/-4) 1 file modified
lib/lp/translations/model/potmsgset.py (+3/-4) |
To merge this branch: | bzr merge lp:~jtv/launchpad/bug-662552-defer-potmsgset-filter |
Related bugs: |
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Brad Crittenden (community) | code | | Approve
Review via email: mp+39426@code.launchpad.net |
Commit message
Global-suggestions query plan tweak.
Description of the change
= Bug 662552: Query Plan Tweak =
This is another tweak for bug 662552: timeouts on POFile:+translate.
Our big time sink there is searching for global suggestions.
In this branch I change the query to look first for any POTMsgSet with the same msgid (including the one we're already looking at), and to exclude that one only later, in the join against TranslationMessage.
Here's what seems to happen: the inner query uses a bitmap heap scan on POTMsgSet to find rows with the right msgid, excluding the one we're already looking at. I moved that exclusion condition out of the inner query and combined it with the outer join condition that's already there.
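The restructuring can be sketched with a toy in-memory model (the table layouts and names below are illustrative, not Launchpad's actual schema): both query shapes return the same suggestions, but the second keeps the inner msgid lookup free of the id filter.

```python
# Toy rows: (id, msgid) for POTMsgSet and (id, potmsgset_id, text) for
# TranslationMessage.  Purely illustrative, not the real schema.
potmsgsets = [(1, "hello"), (2, "hello"), (3, "bye")]
messages = [(10, 1, "hallo"), (11, 2, "bonjour"), (12, 3, "doei")]

def suggestions_filter_inner(current_id, msgid):
    # Old shape: exclude the current POTMsgSet inside the inner lookup.
    matching = {pid for pid, m in potmsgsets
                if m == msgid and pid != current_id}
    return [text for _, pid, text in messages if pid in matching]

def suggestions_filter_outer(current_id, msgid):
    # New shape: the inner lookup matches on msgid alone; the exclusion
    # moves out into the join condition.
    matching = {pid for pid, m in potmsgsets if m == msgid}
    return [text for _, pid, text in messages
            if pid in matching and pid != current_id]

# Both shapes agree; only the plan the database can pick differs.
assert suggestions_filter_inner(1, "hello") == suggestions_filter_outer(1, "hello")
```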
What is a bitmap scan? It's a trick developed by UPC in Barcelona. Instead of scanning through an index or table and gathering up all rows that match, it first does a separate pass to find matching rows. It keeps track of those in a long array of bits; for every matching row, the corresponding bit is set to 1. Then a second pass (shown in the query plan as the parent of the first pass) collects only the rows that have their bit in the array set.
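The two passes can be mimicked in a few lines of Python (a didactic sketch only; in PostgreSQL the bitmap is kept per heap page, and the row data here is made up):

```python
# Heap of rows in storage order: (msgid, payload).
heap = [("bye", "r0"), ("hello", "r1"), ("bye", "r2"), ("hello", "r3")]

# Pass 1: scan for matches and record each one as a bit, one per row.
bitmap = [0] * len(heap)
for i, (msgid, _) in enumerate(heap):
    if msgid == "hello":
        bitmap[i] = 1

# Pass 2: walk the heap in storage order, keeping only flagged rows.
result = [row for i, row in enumerate(heap) if bitmap[i]]
# result == [("hello", "r1"), ("hello", "r3")]
```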
Bitmap scans are an efficient way of dealing with intermediate selectivity: you don't expect to match so many rows that a sequential scan is the best you can do, but neither will there be so few that an index scan is optimal. Every matching row needs to be looked up in the actual table even when all relevant data is also in the index you're using (at least until index-only scans are implemented, but so far this has proven harder than it seems because of tuple visibility issues). There is generally no connection between index order and row storage order, so it's much more efficient to list all matching rows first and then read them all in storage order, than it is to seek randomly around the disk for matching rows as they come in from the index.
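The storage-order advantage can be illustrated by counting page fetches (hypothetical page size and row ids, purely for illustration): a plain index scan visits rows in whatever order the index yields them, while a bitmap scan effectively sorts the matches first, so each page is read at most once.

```python
ROWS_PER_PAGE = 100  # hypothetical page capacity

def pages_touched(row_ids):
    """Count page fetches, where consecutive hits on one page are free."""
    fetches, current = 0, None
    for rid in row_ids:
        page = rid // ROWS_PER_PAGE
        if page != current:
            fetches += 1
            current = page
    return fetches

# Matching row ids as an index might yield them (index order).
index_order = [250, 5, 130, 7, 251, 133]

# A bitmap scan effectively visits them in storage order instead.
bitmap_order = sorted(index_order)

assert pages_touched(bitmap_order) <= pages_touched(index_order)
```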
The bitmap scan is still in the plan after my tweak, just with one condition (msgid match) instead of two (msgid match and id filter). Why is it so much faster? I'm vague on the details, but as far as I can make out the id filter requires data that is not in the index, and so sabotages any opportunity to elide or defer the reading of actual rows from the table.
Query plan before the tweak: https:/
Query plan after the tweak: https:/
You'll notice that the speedup is hard to explain from the subplan timings. There may have been some cold-cache effects from running test queries on different database instances. The 2× to 4× speedup estimate is based on warm-cache figures.
Jeroen
Thanks for the explanation, Jeroen. Like many database tweaks at this level it is a bit mysterious.