- Get this branch:
  git clone -b bb-10.7-selectivity https://git.launchpad.net/maria
Branch information
- Name: bb-10.7-selectivity
- Repository: lp:maria
Recent commits
- 23afec6... by Monty <email address hidden>
-
Update row and key fetch cost models to take into account data copy costs
Before this patch, when calculating the cost of fetching and using a
row/key from the engine, we took into account the cost of finding a
row or key in the engine, but did not consistently take into account
index only accesses, clustered keys or covered keys for all access
paths.

The cost of the WHERE clause (TIME_FOR_COMPARE) was not consistently
considered in best_access_path(). TIME_FOR_COMPARE was used in
calculations in other places, like greedy_search(), but was in some
cases (like scans) applied to a different number of rows than was
accessed.

The cost calculation of row and index scans didn't take into account
the number of rows that were accessed, only the number of accepted
rows.

When using a filter, the cost of index-only reads and the cost of
accessing and disregarding 'filtered rows' were not taken into
account, which made filters cost less than they actually do.

To remedy the above, the following key and row fetch related costs
have been added:

- The cost of fetching and using a row is now split into different costs:
  - Key + row fetch cost (as before), but multiplied by the variable
    'optimizer_cache_cost' (default 0.5). This allows the user to
    tell the optimizer the likelihood of finding the key and row in the
    engine cache.
- RECORD_COPY_COST, the cost of copying a row from the engine to the
  SQL layer or creating a row from the join_cache in the record
  buffer. Mostly affects table scan costs.
- INDEX_COPY_COST, the cost of finding the next key and copying it from
  the engine to the SQL layer. This is used when we calculate the cost of
  index only reads. It makes index scans more expensive than before if
  they cover a lot of rows. (main.index_merge_myisam)
- INDEX_LOOKUP_COST, the cost of finding the first key in a range.
  This replaces the old define IDX_LOOKUP_COST, but with a higher cost.
- INDEX_NEXT_FIND_COST, the cost of finding the next key (and rowid)
  when doing an index scan and comparing the rowid to the filter.
  Before, this cost was assumed to be 0.
- ROW_LOOKUP_COST, the cost of fetching a row by rowid.
All of the above constants/variables are now tuned to be somewhat in
proportion to each other's execution complexity. There is tuning
needed for these in the future, but that can wait until the above are
made user variables, as that will make tuning much easier.

To make the usage of the above easy, there are new (not virtual)
cost calculation functions in handler (a small illustrative sketch
follows the list):
- ha_read_time(), like read_time(), but takes optimizer_cache_cost into
  account.
- ha_read_and_copy_time(), like ha_read_time(), but takes
  RECORD_COPY_COST into account.
- ha_read_and_compare_time(), like ha_read_and_copy_time(), but takes
  TIME_FOR_COMPARE into account.
- ha_read_with_rowid(), reads a row by rowid, taking RECORD_COPY_COST
  into account. This is used with filesort, where we don't need
  to execute the WHERE clause again.
- ha_keyread_time(), like keyread_time(), but takes
  optimizer_cache_cost into account.
- ha_keyread_and_copy_time(), like ha_keyread_time(), but adds
  INDEX_COPY_COST.
- ha_key_scan_time(), like key_scan_time(), but takes
  optimizer_cache_cost into account.
- ha_key_scan_and_compare_time(), like ha_key_scan_time(), but adds
  INDEX_COPY_COST & TIME_FOR_COMPARE.
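
The following standalone sketch (not the server code) shows one way these
helpers could be layered. The "_ex" names, numeric values and simplified
signatures are assumptions for illustration only; the real functions live in
handler.h and take engine-specific arguments.

    #include <cstdio>

    // Illustrative stand-ins for the real defines/variables described above.
    static const double ROW_COPY_COST_EX        = 0.05; // RECORD_COPY_COST
    static const double TIME_FOR_COMPARE_EX     = 5.0;  // used as rows / TIME_FOR_COMPARE
    static const double optimizer_cache_cost_ex = 0.5;  // default mentioned above

    // like read_time(): engine row fetch cost, scaled by the cache factor
    double ha_read_time_ex(double engine_read_cost)
    { return engine_read_cost * optimizer_cache_cost_ex; }

    // ...plus the cost of copying each row from the engine to the SQL layer
    double ha_read_and_copy_time_ex(double engine_read_cost, double rows)
    { return ha_read_time_ex(engine_read_cost) + rows * ROW_COPY_COST_EX; }

    // ...plus the cost of checking the WHERE clause for each row
    double ha_read_and_compare_time_ex(double engine_read_cost, double rows)
    {
      return ha_read_and_copy_time_ex(engine_read_cost, rows) +
             rows / TIME_FOR_COMPARE_EX;
    }

    int main()
    {
      std::printf("cost of fetching and checking 100 rows: %.2f\n",
                  ha_read_and_compare_time_ex(100.0, 100.0));
      return 0;
    }
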
I also added some setup costs for doing different types of scans and
creating temporary tables (on disk and in memory). This encourages
the optimizer to not use these for simple 'a few rows' lookups if
there are adequate key lookup strategies.
- TABLE_SCAN_SETUP_COST, the cost of starting a table scan.
- INDEX_SCAN_SETUP_COST, the cost of starting an index scan.
- HEAP_TEMPTABLE_CREATE_COST, the cost of creating an in-memory
  temporary table.
- DISK_TEMPTABLE_CREATE_COST, the cost of creating an on-disk temporary
  table.
When calculating the cost of fetching ranges, we previously had a cost of
IDX_LOOKUP_COST (0.125) for doing a key dive for a new range. This is
now replaced with 'io_cost * INDEX_LOOKUP_COST (1.0) *
optimizer_cache_cost', which matches the cost we use for 'ref' and
other key lookups. The effect is that the cost is now a bit higher
when we have many ranges for a key.
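
As a rough worked example, using only the numbers already mentioned here:
with the default optimizer_cache_cost of 0.5, an io_cost of 1.0 and
INDEX_LOOKUP_COST of 1.0, the per-range startup cost becomes
1.0 * 1.0 * 0.5 = 0.5 instead of the old 0.125, i.e. four times the old
value, so a plan with many small ranges now pays noticeably more per range.
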
Almost all calculation with TIME_FOR_COMPARE is now done in
best_access_path(). 'JOIN::read_time' now includes the full
cost of finding the rows in the table.

In the result files, many of the changes are now again close to what
they were before the "Update cost for hash and cached joins" commit,
as that commit didn't fix the filter cost (too complex to do
everything in one commit).

The above changes exposed a lot of inconsistencies in optimizer cost
calculation. The main objective with the other changes was to make the
calculation as similar (and accurate) as possible to make different
plans more comparable.

Detailed list of changes:
- Calculate index_only_cost consistently and correctly for all scan
  and ref accesses. The row fetch_cost and index_only_cost now
  take into account clustered keys, covered keys and index only accesses.
- cost_for_index_read() now returns both the full cost and index_only_cost.
- Fixed the cost calculation of get_sweep_read_cost() to match other
  similar costs. This is based on the assumption that data is more
  often stored on SSD than on a hard disk.
- Replaced constant 2.0 with new define TABLE_SCAN_SETUP_COST.
- Some scan cost estimates did not take into account
  TIME_FOR_COMPARE. Now all scan costs take this into
  account. (main.show_explain)
- Added session variable optimizer_cache_hit_ratio (default 50%). By
  adjusting this one can reduce or increase the cost of index or direct
  record lookups. The effect of the default is that key lookups are now
  a bit cheaper than before. See usage of 'optimizer_cache_cost' in
  handler.h.
- JOIN_TAB::scan_time() did not take into account index only scans,
  which produced a wrong cost when an index scan was used. Changed
  JOIN_TAB::scan_time() to take into consideration clustered and
  covered keys. The values are now cached and we only have to call
  this function once; other calls are changed to use the cached
  values. Function renamed to JOIN_TAB::estimate_scan_time().
- Fixed that most index cost calculations are done the same way and
  closer to 'range' calculations. The cost is now lower than
  before for small data sets and higher for large data sets, as we take
  into account how many keys are read (main.opt_trace_selectivity,
  main.limit_rows_examined).
- Ensured that index_scan_cost() ==
  range(scan_of_all_rows_in_table_using_one_range) +
  MULTI_RANGE_READ_INFO_CONST. One effect of this is that if there
  is a choice between doing a full index scan and a range-index scan over
  almost the whole table, then the index scan will be preferred (no
  range-read setup cost). (innodb.innodb, main.show_explain,
  main.range)
- Fixed that EQ_REF and REF take into account clustered and covered
  keys. This changes some plans to use covered or clustered indexes
  as these are much cheaper. (main.subselect_mat_cost,
  main.state_tables_innodb, main.limit_rows_examined)
- Rowid filter setup cost and filter compare cost now take into
  account fetching and checking the rowid (INDEX_NEXT_FIND_COST).
  (main.partition_pruning, heap.heap_btree, main.log_state)
- Added INDEX_NEXT_FIND_COST to
  Range_rowid_filter_cost_info::lookup_cost to account for the time
  to find and check the next key value against the container.
- Introduced ha_keyread_time(rows), which takes into account finding
  the next row and copying the key value to 'record'
  (INDEX_COPY_COST).
- Introduced ha_key_scan_time() for calculating an index scan over
all rows.
- Added IDX_LOOKUP_COST to keyread_time() as a startup cost.
- Added index_only_fetch_cost() as a convenience function to
  OPT_RANGE.
- keyread_time() cost is slightly reduced to prefer shorter keys.
  (main.index_merge_myisam)
- All of the above caused some index_merge combinations to be
  rejected because of cost (main.index_intersect). In some cases
  'ref' was replaced with index_merge because of the low
  cost calculation of get_sweep_read_cost().
- Some index usage moved from PRIMARY to a covering index.
  (main.subselect_innodb)
- Changed cost calculation of filters to take INDEX_LOOKUP_COST and
  TIME_FOR_COMPARE into account. See sql_select.cc::apply_filter().
  Filter parameters and costs are now written to the optimizer trace.
- Don't use matching_records_in_range() to try to estimate the number
  of filtered rows for ranges. The reason is that we want to ensure
  that 'range' is calculated similarly to 'ref'. There is also more work
  needed to calculate the selectivity when using ranges, and when using
  ranges together with filtering. This causes the filtered column in
  EXPLAIN EXTENDED to be 100.00 for some cases where range cannot use
  filtering. (main.rowid_filter)
- Introduced ha_scan_time(), which takes into account the CPU cost of
  finding the next row and copying the row from the engine to
  'record'. This causes the cost of table scans to slightly increase, and
  some tests changed their plan from ALL to RANGE or from ALL to ref.
  (innodb.innodb_mysql, main.select_pkeycache)
  In a few cases where the scan time of very small tables has a lower cost
  than a ref or range, things changed from ref/range to ALL.
  (main.myisam, main.func_group, main.limit_rows_examined,
  main.subselect2)
- Introduced ha_scan_and_compare_time(), which is like ha_scan_time()
  but also adds the cost of the WHERE clause (TIME_FOR_COMPARE).
- Added small cost for creating temporary table for
materialization. This causes some very small tables to use scan
instead of materialization.
- Added checking of the WHERE clause (TIME_FOR_COMPARE) of the
  accepted rows to ROR costs in get_best_ror_intersect().
- Removed '- 0.001' from 'join->best_read' and optimize_straight_join()
  to ensure that the 'Last_query_cost' status variable contains the same
  value as the one that was calculated by the optimizer.
- Take avg_io_cost() into account in handler::keyread_time() and
  handler::read_time(). This should have no effect as it's 1.0 by
  default, except for heap, which overrides these functions.
- Some 'ref_or_null' accesses changed to 'range' because of cost
adjustments (main.order_by)
- Added scan type "scan_with_join_cache" for optimizer_trace. This is
just to show in the trace what kind of scan was used.
- When using 'scan_with_join_cache', take into account the number of
  preceding tables (as we have to restore all fields for all previous
  table combinations when checking the WHERE clause).
  The new cost added is:
  (row_combinations * RECORD_COPY_COST * number_of_cached_tables).
  This increases the cost of join buffering in proportion to the
  number of tables in the join buffer. One effect is that full scans
  are now done earlier, as the cost is then smaller (see the sketch
  after this list). (main.join_outer_innodb, main.greedy_optimizer)
- Removed the usage of 'worst_seeks' in cost_for_index_read(), as it
  caused wrong plans to be created; it preferred JT_EQ_REF even if it
  would be much more expensive than a full table scan. A related issue
  was that worst_seeks only applied to full lookups, not to clustered
  or index only lookups, which is not consistent. This caused some
  plans to use index scan instead of eq_ref. (main.union)
- Changed federated block size from 4096 to 1500, which is the typical
size of an IO packet.
- Added costs for reading rows to Federated. Needed as there is no
caching of rows in the federated engine.
- Added ha_innobase::read_with_rowid() cost function.
- A lot of extra things added to the optimizer trace:
  - More costs, especially for materialization and index_merge.
  - Made labels more uniform.
  - Fixed a lot of minor bugs.
  - Added 'trace_started()' around a lot of trace blocks.
- When calculating the ORDER BY with LIMIT cost for using an index,
  the cost did not take into account the number of row retrievals
  that have to be done or the cost of comparing the rows with the
  WHERE clause. The cost calculated would be just a fraction of
  the real cost. Now we calculate the cost as we do for ranges
  and 'ref'.
- 'Using index for group-by' usage changed a bit, as we now take into
  account the WHERE clause cost when comparing with 'ref' and prefer the
  method with fewer row combinations. (main.group_min_max)
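
A minimal standalone sketch of the extra join-buffer cost term described in
the 'scan_with_join_cache' item above; the "_ex" function name and the
numeric value are assumptions for illustration only.

    #include <cstdio>

    static const double RECORD_COPY_COST_EX = 0.05;  // illustrative stand-in

    // the additional cost term from the item above:
    // row_combinations * RECORD_COPY_COST * number_of_cached_tables
    double join_cache_refill_cost_ex(double row_combinations,
                                     unsigned number_of_cached_tables)
    {
      return row_combinations * RECORD_COPY_COST_EX * number_of_cached_tables;
    }

    int main()
    {
      // the penalty grows linearly with the number of tables in the join buffer
      std::printf("1 cached table:  %.1f\n", join_cache_refill_cost_ex(1000.0, 1));
      std::printf("3 cached tables: %.1f\n", join_cache_refill_cost_ex(1000.0, 3));
      return 0;
    }
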
Bugs fixed:
- Fixed that we don't calculate TIME_FOR_COMPARE twice for some plans,
  like in optimize_straight_join() and greedy_search().
- Fixed a bug in save_explain_data() where we could test the wrong
  index when displaying 'Using index'. This caused some old plans to
  show 'Using index'. (main.subselect_innodb, main.subselect2)
- Fixed a bug in get_best_ror_intersect() where 'min_cost' was not
  updated, and the cost we compared with was not the one that was
  used.
- Fixed very wrong cost calculation for priority queues in
  check_if_pq_applicable(). (main.order_by now correctly uses a priority
  queue)
- When calculating the cost of EQ_REF or REF, we added the cost of
  comparing the WHERE clause with the found rows, not with all row
  combinations. This made ref and eq_ref be regarded as way too cheap
  compared to other access methods.
- FORCE INDEX cost calculation didn't take into account clustered or
covered indexes.
- JT_EQ_REF cost was estimated as avg_io_cost(), which is half the
  cost of a JT_REF key. This may be true for an InnoDB primary key, but
  not for other unique keys or other engines. Now we use a handler
  function to calculate the cost, which allows us to handle clustered
  keys, covered keys and non-covered keys consistently.
- ha_start_keyread() didn't call extra_opt() if keyread was already
enabled but still changed the 'keyread' variable (which is wrong).
Fixed by not doing anything if keyread is already enabled.
- multi_range_read_info_cost() didn't take into account io_cost when
  calculating the cost of ranges.
- fix_semijoin_strategies_for_picked_join_order() used the wrong
  record_count when calling best_access_path() for SJ_OPT_FIRST_MATCH
  and SJ_OPT_LOOSE_SCAN.
- Hash joins didn't provide the correct best_cost to the upper level, which
  means that the cost for hash joins was more expensive than calculated
  in best_access_path() (a difference of 10x * TIME_FOR_COMPARE).
  This is fixed in the new code thanks to the fact that we now include
  TIME_FOR_COMPARE cost in 'read_time'.

Other things:
- Added some 'if (thd->trace_started())' checks to speed up the code.
- Removed the unused function Cost_estimate::is_zero().
- Simplified testing of HA_POS_ERROR in get_best_ror_intersect().
  (No cost changes)
- Moved ha_start_keyread() from join_read_const_table() to join_read_const()
  to enable keyread for all types of JT_CONST tables.
- Made a few very short functions inline in handler.h.

Notes:
- In main.rowid_filter the join order of order and lineitem is swapped.
This is because the cost of doing a range fetch of lineitem(98 rows) is
almost as big as the whole join of order,lineitem. The filtering will
also ensure that we only have to do very small key fetches of the rows
in lineitem.
- main.index_merge_myisam had a few changes where we are now using
  fewer keys for index_merge. This is because index scans are now more
  expensive than before.
- handler->optimizer_cache_cost is updated in ha_external_lock().
  This ensures that it is up to date per statement.
  Not an optimal solution (for locked tables), but should be ok for now.
- 'DELETE FROM t1 WHERE t1.a > 0 ORDER BY t1.a' does not take the cost of
  filesort into consideration when a table scan is chosen.
  (main.myisam_explain_non_select_all)
- perfschema.table_aggregate_global_* has changed because an update
  on a table with 1 row will now use a table scan instead of a key lookup.

TODO in upcoming commits:
- Fix selectivity calculation for ranges with and without filtering and
when there is a ref access but scan is chosen.
For this we have to store the lowest known value for
'accepted_records' in the OPT_RANGE structure.
- Change that records_read does not include filtered rows.
- test_if_cheaper_ordering() needs to be updated to properly calculate costs.
  This will fix tests like main.order_by_innodb and main.single_delete_update.
- Extend get_range_limit_read_cost() to take cost_for_index_read() into
  consideration if there were no quick keys. This will reduce
  the computed cost for ORDER BY with LIMIT in some cases.
  (main.innodb_ext_key)
- Change all optimizer cost constants to user variables. This will make it
possible for the users to tune the cost model if needed instead of having
to ask for a new server build.
- Fix that we take into account selectivity when counting the number of rows
  we have to read when considering using an index table scan to resolve
  ORDER BY.
- Add a new calculation for read_with_rowid() where we take into account the
  benefit of reading multiple rows from the same page.

fixup to be combined with previous commit

Another fixup, to be combined with the previous commits
Fixed a lot of inconsistencies in optimizer cost calculation. The main
objective was to get cost calculation as similar (and accurate) as
possible to make different plans more comparable.

- Replaced constant 2.0 with new define TABLE_SCAN_SETUP_COST.
- Added RECORD_COPY_COST, the cost of finding the next row and copying
it to record for table scans.
- Added INDEX_COPY_COST, the cost of finding the next key and copying it
to record for index scans.
- Added INDEX_NEXT_FIND_COST, the cost of finding the next index entry and
checking it against filter.
- Some scan cost estimates did not take into account
  TIME_FOR_COMPARE. Now all scan costs take this into
  account. (main.show_explain)
- Added session variable optimizer_cache_hit_ratio (default 50%). By
  adjusting this one can reduce or increase the cost of index or direct
  record lookups. The effect of the default is that key lookups are now
  a bit cheaper than before. See usage of 'optimizer_cache_cost' in
  handler.h.
- JOIN_TAB::scan_time() did not take into account index only scans,
  which produced a wrong cost when an index scan was used. Fixed by
  adding support for covering keys. Also cached the calculated values
  to avoid future calls during the optimization phase.
- Fixed that most index cost calculations are done the same way and
  closer to 'range' calculations. The cost is now lower than before for
  small data sets and higher for large data sets, as we take into account
  how many keys are read.
- Ensured that index_scan_cost() ==
  range(scan_of_all_rows_in_table_using_one_range) +
  MULTI_RANGE_READ_INFO_CONST. One effect of this is that if there is
  a choice between doing a full index scan and a range-index scan over almost
  the whole table, then the index scan will be preferred (no range-read
  setup cost).
  (innodb.innodb, main.show_explain, main.range)
- Fixed that EQ_REF and REF take into account clustered and covered keys.
  This changes some plans to use covered or clustered indexes as these are
  much cheaper.
  (main.subselect_mat_cost, main.state_tables_innodb)
- Rowid filter setup cost and filter compare cost now take into account
  fetching and checking the rowid (INDEX_NEXT_FIND_COST).
  (main.partition_pruning, heap.heap_btree, main.log_state)
- Introduced ha_keyread_time(rows), which takes into account finding the
  next row and copying the key value to 'record' (INDEX_COPY_COST).
- Introduced ha_key_scan_time() for calculating an index scan over all
rows.
- Added IDX_LOOKUP_COST to keyread_time() as a startup cost.
- Added index_only_fetch_cost() as a convenience function to
  OPT_RANGE.
- keyread_time() cost is slightly reduced to prefer shorter keys.
  (main.index_merge_myisam)
- All of the above caused some index_merge combinations to be
  rejected because of cost (main.index_intersect). In some cases
  'ref' was replaced with index_merge because of the low
  cost calculation of get_sweep_read_cost().
- Some index usage moved from PRIMARY to a covering index.
  (main.subselect_innodb)
- Don't use matching_records_in_range() to try to estimate the number of
  filtered rows for ranges. The reason is that we want to ensure that 'range'
  is calculated similarly to 'ref'. There is also more work needed to
  calculate the selectivity when using ranges, and when using ranges
  together with filtering.
  This causes the filtered column in EXPLAIN EXTENDED to be 100.00 for
  some cases where range cannot use filtering.
  (main.rowid_filter)
- Introduced ha_scan_time(), which takes into account the CPU cost of
  finding the next row and copying the row from the engine to
  'record'. This causes the cost of table scans to slightly increase, and
  some tests changed their plan from ALL to RANGE or from ALL to ref.
  (innodb.innodb_mysql, main.select_pkeycache)
- Introduced ha_scan_and_compare_time() which is like ha_scan_time()
  but also adds the cost of checking the WHERE clause
  (TIME_FOR_COMPARE).
- Introduced ha_read_with_rowid(), which takes into account
  RECORD_COPY_COST.
- Added checking of the WHERE clause of the accepted rows to ROR costs
  in get_best_ror_intersect().
- Removed '- 0.001' from 'join->best_read' and optimize_straight_join()
  to ensure that the 'Last_query_cost' status variable contains the same
  value as the one that was calculated by the optimizer.
- Added INDEX_NEXT_FIND_COST to Range_rowid_filter_cost_info::lookup_cost
  to account for the time to find and check the next key value against the
  container.
- Changed JOIN_TAB::scan_time() to take into consideration clustered and
  covered keys. The values are now cached and we only have to call this
  function once; other calls are changed to use the cached values.
  Function renamed to JOIN_TAB::estimate_scan_time().
- Take avg_io_cost() into account in handler::keyread_time() and
  handler::read_time(). This should have no effect as it's 1.0 by
  default, except for heap, which overrides these functions.
- Some 'ref_or_null' accesses changed to 'range' because of cost
adjustments (main.order_by)
- Added scan type "scan_with_join_cache" for optimizer_trace. This is just
to show what kind of scan was used.
- I had to remove the usage of 'worst_seeks' in cost_for_index_read(), as it
  causes wrong plans to be created; it preferred JT_EQ_REF even if it would
  be much more expensive than a full table scan. A related issue was that
  worst_seeks only applied to full lookups, not to clustered or index only
  lookups, which is not consistent. This caused some plans to use index
  scan instead of eq_ref. (main.union)

Bugs fixed:
- Fixed that we don't calculate TIME_FOR_COMPARE twice for some plans,
  like in optimize_straight_join() and greedy_search().
- Fixed a bug in save_explain_data() where we could test the wrong
  index when displaying 'Using index'. This caused some old plans to
  show 'Using index'. (main.subselect_innodb, main.subselect2)
- Fixed a bug in get_best_ror_intersect() where 'min_cost' was not updated,
  and the cost we compared with was not the one that was used.
- Fixed very wrong cost calculation for priority queues in
  check_if_pq_applicable().
- When calculating the cost of EQ_REF or REF, we added the cost of comparing
  the WHERE clause with the found rows, not with all row combinations. This
  made ref and eq_ref be regarded as way too cheap compared to other
  access methods.
- FORCE INDEX cost calculation didn't take into account clustered or
covered indexes.
- JT_EQ_REF cost was estimated as avg_io_cost(), which is half the
  cost of a JT_REF key. This may be true for an InnoDB primary key, but
  not for other unique keys or other engines. Now we use a handler
  function to calculate the cost, which allows us to handle clustered
  keys, covered keys and non-covered keys consistently.
- ha_start_keyread() didn't call extra_opt() if keyread was already
enabled but still changed the 'keyread' variable (which is wrong).
Fixed by not doing anything if keyread is already enabled.
- multi_range_read_info_cost() didn't take into account io_cost when
  calculating the cost of ranges.

Other things:
- Added some 'if (thd->trace_started())' checks to speed up the code.
- Removed the unused function Cost_estimate::is_zero().
- Simplified testing of HA_POS_ERROR in get_best_ror_intersect().
  (No cost changes)
- Moved ha_start_keyread() from join_read_const_table() to join_read_const()
  to enable keyread for all types of JT_CONST tables.
- Made a few very short functions inline in handler.h.

TODO in upcoming commit:
- Fix selectivity calculation for ranges with and without filtering and
when there is a ref access but scan is chosen.
For this we have to store the lowest known value for
'accepted_records' in the OPT_RANGE structure.
- test_if_cheaper_ordering() needs to be updated to properly calculate costs.
  This will fix tests like main.order_by_innodb.
- Extend get_range_limit_read_cost() to take cost_for_index_read() into
  consideration if there were no quick keys. This will reduce
  the computed cost for ORDER BY with LIMIT in some cases.
  (main.innodb_ext_key)

COMMENTS:
- In main.rowid_filter the join order of order and lineitem is swapped.
This is because the cost of doing a range fetch of lineitem(98 rows) is
almost as big as the whole join of order,lineitem. The filtering will
also ensure that we only have to do very small key fetches of the rows
in lineitem.
- handler->optimizer_cache_cost is updated in ha_external_lock().
  This ensures that it is up to date per statement.
  Not an optimal solution (for locked tables), but should be ok for now.

Another temporary commit to be combined with previous ones
- a9d18d5... by Monty <email address hidden>
-
Split cost calculations into fetch and total
This patch causes no changes in costs or result files.
Changes:
- Store row compare cost separately in Cost_estimate::comp_cost
- Store cost of fetching rows separately in OPT_RANGE
- Use range->fetch_cost instead of adjust_quick_cost(total_cost)

This was done to simplify cost calculation in sql_select.cc:
- We can use range->fetch_cost directly without having to call
  adjust_quick_cost(). adjust_quick_cost() is now removed.

Other things:
- Removed some unused functions in Cost_estimate

- 55772de... by Monty <email address hidden>
-
Make trace.add() usage uniform
- Before any multiple add() calls, always use (if trace_started()).
- Add unlikely() around all tests of trace_started().
- Change trace.add(); trace.add(); to trace.add().add();
- When trace.add() goes over several lines, use the following formatting:
  trace.
    add(xxx).
    add(yyy).
    add(zzz);

This format was chosen after a discussion between Sergei Petrunia and
me, as it looks similar independent of whether 'trace' is an object or a
pointer. It is also more suitable for an editor's auto-indentation.

Other things:

Added DBUG_ASSERT(thd->trace_started()) to a few functions that should
only be called if trace is enabled.

"use_roworder_index_merge: true" changed to "use_sort_index_merge: false"
as the original output was often not correct.
Also fixed the related 'cause' to be correct.

In best_access_path(), print the cost (and number of rows) before
checking if the plan should be used. This removes the need to print
the cost in two places.

Changed a few "read_time" tags to "cost".
- db44e10... by Monty <email address hidden>
-
Update cost for hash and cached joins
The old code didn't correctly add TIME_FOR_COMPARE to rows that are
part of the scan that will be compared with the attached WHERE clause.

Now the cost calculation for hash join and full join cache join are
identical except for HASH_FANOUT (10%).

The cost for a join with keys is now also uniform.
The total cost for using a key for lookup is calculated in one place as:

  (cost_of_finding_rows_through_key(records) + records/TIME_FOR_COMPARE) *
  record_count_of_previous_row_combinations + startup_cost

startup_cost is the cost of creating a temporary table (if needed).
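
A small sketch that just evaluates the formula above with placeholder
numbers; the parameter names mirror the formula and every numeric value is an
assumption for illustration.

    #include <cstdio>

    static const double TIME_FOR_COMPARE_EX = 5.0;  // illustrative divisor

    double key_lookup_total_cost_ex(double cost_of_finding_rows_through_key,
                                    double records,
                                    double record_count_of_previous_row_combinations,
                                    double startup_cost)
    {
      return (cost_of_finding_rows_through_key + records / TIME_FOR_COMPARE_EX) *
             record_count_of_previous_row_combinations + startup_cost;
    }

    int main()
    {
      // e.g. 10 matching rows per key lookup, 100 previous row combinations
      std::printf("%.2f\n", key_lookup_total_cost_ex(2.0, 10.0, 100.0, 0.0));
      return 0;
    }
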
Best_cost now includes the cost of comparing all WHERE clauses and also
the cost of joining with previous row combinations.

Other things:
- Optimizer trace is now printing the total costs, including testing the
  WHERE clause (TIME_FOR_COMPARE) and comparing with all previous rows.
- In the optimizer trace, also include the total cost of the query together
  with the final join order. This makes it easier to find out where the cost
  was calculated.
- Old code used a filter even if the cost for it was higher than not using a
  filter. This is not corrected.

- ca1847e... by Monty <email address hidden>
-
Adjust costs for doing index scan in cost_group_min_max()

The idea is that when doing a tree dive (once per group), we need to
compare key values, which is fast. For each new group, we have to
compare the full WHERE clause for the row.

Compared to the original code, the cost of group_min_max() has slightly
increased, which affects some tests with only a few rows.
main.group_min_max and main.distinct have been modified to show the
effect of the change.

The patch also adjusts the number of groups in case of quick selects
(see the sketch after this list):
- For simple WHERE clauses, ensure that we have at least as many groups
  as we have conditions on the used group-by key parts.
  The assumption is that each condition will create at least one group.
- Ensure that there are no more groups than rows found by quick_select.
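
A hedged sketch of the group-count adjustment above; the function and
variable names are illustrative, not the actual cost_group_min_max() code.

    #include <algorithm>
    #include <cstdio>

    double adjust_group_count_ex(double estimated_groups,
                                 unsigned conditions_on_group_key_parts,
                                 double rows_found_by_quick_select)
    {
      // at least one group per condition on the used group-by key parts
      estimated_groups = std::max(estimated_groups,
                                  double(conditions_on_group_key_parts));
      // but never more groups than rows found by the quick select
      return std::min(estimated_groups, rows_found_by_quick_select);
    }

    int main()
    {
      std::printf("%.0f\n", adjust_group_count_ex(1.0, 3, 50.0));  // prints 3
      return 0;
    }
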
Test changes:
- For some small tables there has been a change of
  Using index for group-by -> Using index for group-by (scanning)
  Range -> Index and Using index for group-by -> Using index

- 45763ca... by Monty <email address hidden>
-
Return >= 1 from matching_candidates_in_table if records > 0.0

Having rows >= 1.0 helps ensure that when we calculate the total rows of
joins, the number of resulting rows will not be less after the join.
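
A one-line sketch of the clamp described above; the helper name is
hypothetical.

    #include <algorithm>

    // if the table has any rows at all, never report fewer than one matching row
    double clamp_matching_candidates_ex(double records)
    {
      return records > 0.0 ? std::max(records, 1.0) : records;
    }

    int main() { return clamp_matching_candidates_ex(0.3) >= 1.0 ? 0 : 1; }
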
Changes in test cases:
- Join order change for some tables with few records
- 'Filtered' is much higher for tables with few rows, as 1 row is a high
  percentage of a table with few rows.

- 699b8a4... by Monty <email address hidden>
-
Update matching_candidates_in_table() to treat all conditions similarly

Fixed also that the 'with_found_constraint' parameter to
matching_candidates_in_table() is as documented: it is now true only
if there is a reference to a previous table in the WHERE condition for
the currently examined table (as it was originally documented).

Changes in test results:
- Filtered was 25% smaller for some queries (expected).
- Some join orders changed (probably because the tables had very few rows).
- Some more table scans, probably because there would be fewer returned
  rows.
- Some tests expose a bug that if there are more filtered rows, then the
  cost for table scan will be higher. This will be fixed in a later commit.

- 5341d81... by Monty <email address hidden>
-
Fix calculation of selectivity
calculate_cond_selectivity_for_table() is largely rewritten:
- Process keys in the order of rows found, smaller ranges first. If two ranges
  have an equal number of rows, use the one with more key parts (see the
  sketch after this list). This helps us to mark more used fields to not be
  used for further selectivity calculations. See cmp_quick_ranges().
- Ignore keys with fields that were used by previous keys.
- Don't use rec_per_key[] to calculate selectivity for smaller
  secondary key parts. This does not work as the rec_per_key[] value
  is calculated in the context of the previous key parts, not for the
  key part itself. The one exception is if the previous key parts
  are constants.
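
A sketch of the ordering described in the first item above (smaller ranges
first; on a tie, prefer more key parts). This is an illustration, not the
actual cmp_quick_ranges() code, and the struct is hypothetical.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct QuickRange { double rows; unsigned used_key_parts; };

    static bool quick_range_less(const QuickRange &a, const QuickRange &b)
    {
      if (a.rows != b.rows)
        return a.rows < b.rows;                     // fewer rows first
      return a.used_key_parts > b.used_key_parts;   // then more key parts first
    }

    int main()
    {
      std::vector<QuickRange> ranges{{100, 1}, {10, 2}, {10, 3}};
      std::sort(ranges.begin(), ranges.end(), quick_range_less);
      for (const QuickRange &r : ranges)
        std::printf("rows=%.0f key_parts=%u\n", r.rows, r.used_key_parts);
      return 0;
    }
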
Other things:
- Ensure that select->cond_selectivity is always between 0 and 1.
- Ensure that select->opt_range_condition_rows is never updated to
  a higher value. It is initially set to the number of rows in the table.
- We now store in table->opt_range_condition_rows the lowest number of
  rows that any row-read method has found so far. Before, it was only done
  for QUICK_SELECT_I::QS_TYPE_ROR_UNION and QUICK_SELECT_I::QS_TYPE_INDEX_MERGE.
  Now it is done for a lot more methods. See
  calculate_cond_selectivity_for_table() for details.
- Calculate and use selectivity for the first key part of a multi-part key
  if the first key part is a constant:
  WHERE key1_part1=5 and key2_part1=5. If key1 is used, then we can still
  use the selectivity for key2.

Changes in test results:
- 'filtered' is slightly changed, usually to something slightly smaller.
- A few cases where for group by queries the table order changed. This was
  because the number of resulting rows from a group by query with MIN/MAX
  is now set to be smaller.
- A few indexes were changed as we now prefer an index with more key parts
  if the number of resulting rows is the same.

- 6018ad6... by Monty <email address hidden>
-
Fixed bug in SQL_SELECT_LIMIT
We were comparing costs when we should have been comparing the number of
rows that will be examined.

- 3c346be... by Monty <email address hidden>
-
Simple optimization to speed up some handler functions when checking killed
- Avoid checking for has_transactions if the killed flag is not checked
- Simplify code (have checked with gcc -O3 that there are improvements)
- Added handler::fast_increment_statistics() to be used when a handler
  function wants to increase two statistics for one row access (see the
  sketch after this list).
- Made check_limit_rows_examined() inline (even if it didn't make any
  difference for gcc 7.5.0), still the right thing to do
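
A hypothetical sketch of a helper that bumps two per-statement counters for a
single row access; the struct, counter names and signature are assumptions,
and the real fast_increment_statistics() may differ.

    #include <cstdio>

    struct ha_statistics_counters_ex
    {
      unsigned long long rows_read;
      unsigned long long rows_requested;
    };

    inline void fast_increment_statistics_ex(ha_statistics_counters_ex *stats)
    {
      // one call, two counters, for a single row access
      stats->rows_read++;
      stats->rows_requested++;
    }

    int main()
    {
      ha_statistics_counters_ex stats{0, 0};
      fast_increment_statistics_ex(&stats);
      std::printf("%llu %llu\n", stats.rows_read, stats.rows_requested);
      return 0;
    }
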