maria:10-7.selectivity

Last commit made on 2022-05-09
Get this branch:
git clone -b 10-7.selectivity https://git.launchpad.net/maria

Branch information

Name:
10-7.selectivity
Repository:
lp:maria

Recent commits

1356b77... by Monty <email address hidden>

Simple optimization: Remove JOIN::set_group_rpa as it is not needed

a5715b1... by Monty <email address hidden>

Derived tables and union can now create distinct keys

The idea is that instead of marking all select_lex's with DISTINCT, we
only mark those that really need a distinct result.

Benefits of this change:
- Temporary tables used with derived tables, UNION and IN are now smaller
  as duplicates are removed already in the insert phase.
- The optimizer can now produce better plans with EQ_REF. This can be
  seen from the tests where several queries no longer materialize
  derived tables twice.
- Queries affected by 'in_predicate_conversion_threshold', where large IN
  lists are converted to a subquery, now produce better plans.

Other things:
- Removed one duplicate call to sel->init_select() in
  LEX::add_primary_to_query_expression_body()
- I moved the testing of
  tab->table->pos_in_table_list->is_materialized_derived()
  in join_read_const_table() to the caller, as it caused problems for
  derived tables that could be proven to be const tables.
  This is also likely to fix some bugs: if join_read_const_table()
  was aborted, the table was left marked as JT_CONST, which cannot
  be good. I added an ASSERT there for now that can be removed when
  the code has been properly tested.

ad71d45... by Monty <email address hidden>

Fixed crashing bug in create_internal_tmp_table_from_heap()

An assert/crash could happen if newtable.alias was reallocated
(for example if newtable.alias.safe_c_ptr() was called) when doing
*table= newtable.
Fixed by ensuring that we keep the original state of the alias in 'table'.

afbdd1c... by Monty <email address hidden>

Add test cases for MDEV-20595 and MDEV-21633 to show these are solved

MDEV-21633 Assertion `tmp >= 0' failed in best_access_path with
           rowid_filter=ON
MDEV-20595 Assertion `0 < sel && sel <= 2.0' failed in
           table_cond_selectivity

f887fb1... by Monty <email address hidden>

Added 'records_out' and join_type to POSITION

records_out is the number of rows expected to be accepted from a table.
records_read is, in contrast, the number of rows that the optimizer expects
to read from the engine.
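
For illustration only (this is a hypothetical sketch, not the actual
POSITION definition), the two estimates mean roughly this:

  /* Hypothetical sketch, not the real struct */
  struct position_estimates
  {
    double records_read;  /* rows the optimizer expects to read from the
                             engine */
    double records_out;   /* rows expected to be accepted after checking
                             the attached conditions for this table */
  };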

This patch causes no plan changes. The differences in test results come
from renaming "records" to "records_read" and printing of records_out in
the optimizer trace.

Other things:
- Renamed table_cond_selectivity() to table_after_join_selectivity()
  to make the purpose of the function more clear.

f93938b... by Monty <email address hidden>

Align elements in struct system_variables

This reduces the size of THD from 1128 to 1104 (24 bytes).
Not much, but it will still save some memory accesses.

40e84a6... by Monty <email address hidden>

Make the most important optimizer constants user variables

Variables added:
- optimizer_key_copy_cost
- optimizer_index_block_copy_cost
- optimizer_index_next_find_cost
- optimizer_row_copy_cost
- optimizer_where_compare_cost
- optimizer_key_compare_cost

Some defines were renamed to make the internal defines similar to
the visible ones:
RECORD_COPY_COST -> ROW_COPY_COST
INDEX_COPY_COST -> KEY_COPY_COST
TIME_FOR_COMPARE -> WHERE_COMPARE_COST; WHERE_COMPARE_COST was also
"inverted" to be a number between 0 and 1 that is multiplied with accepted
records (similar to other optimizer variables).
TIME_FOR_COMPARE_IDX -> KEY_COMPARE_COST. This is also inverted,
similar to TIME_FOR_COMPARE.
TIME_FOR_COMPARE_ROWID -> ROWID_COMPARE_COST. This is also inverted,
similar to TIME_FOR_COMPARE.

All default costs are identical to what they were before this patch.
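
As an illustration only (not part of the patch), the "inversion" roughly
means the following, assuming the old default of 5 for TIME_FOR_COMPARE:

  /* Before: the WHERE check cost was a divisor applied to accepted rows */
  double old_where_cost(double accepted_rows)
  {
    const double TIME_FOR_COMPARE= 5.0;         /* assumed old default */
    return accepted_rows / TIME_FOR_COMPARE;
  }

  /* After: WHERE_COMPARE_COST is a factor between 0 and 1 that is
     multiplied with the accepted rows; using the reciprocal of the old
     default keeps the resulting cost unchanged */
  double new_where_cost(double accepted_rows)
  {
    const double WHERE_COMPARE_COST= 1.0 / 5.0; /* same default, inverted */
    return accepted_rows * WHERE_COMPARE_COST;
  }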

Other things:
- Compare factor in get_merge_buffers_cost() was inverted.
- Changed namespace to static in filesort_utils.cc

3f3c51e... by Monty <email address hidden>

Update row and key fetch cost models to take into account data copy costs

Before this patch, when calculating the cost of fetching and using a
row/key from the engine, we took into account the cost of finding a
row or key from the engine, but did not consistently take into account
index only accesses, clustered keys or covering keys for all access
paths.

The cost of the WHERE clause (TIME_FOR_COMPARE) was not consistently
considered in best_access_path(). TIME_FOR_COMPARE was used in
calculations in other places, like greedy_search(), but was in some
cases (like scans) applied to a different number of rows than was
actually accessed.

The cost calculation of row and index scans didn't take into account
the number of rows that were accessed, only the number of accepted
rows.

When using a filter, the cost of index_only_reads and the cost of
accessing and disregarding 'filtered rows' were not taken into
account, which made filters cost less than they actually did.

To remedy the above, the following key & row fetch related costs
have been added:

- The cost of fetching and using a row is now split into different costs:
  - Key + row fetch cost (as before), but multiplied with the variable
  'optimizer_cache_cost' (default 0.5). This allows the user to
  tell the optimizer the likelihood of finding the key and row in the
  engine cache.
- RECORD_COPY_COST, the cost of copying a row from the engine to the
  sql layer or creating a row from the join_cache to the record
  buffer. Mostly affects table scan costs.
- INDEX_COPY_COST, the cost of finding the next key and copying it from
  the engine to the SQL layer. This is used when we calculate the cost of
  index only reads. It makes index scans more expensive than before if
  they cover a lot of rows. (main.index_merge_myisam)
- INDEX_LOOKUP_COST, the cost of finding the first key in a range.
  This replaces the old define IDX_LOOKUP_COST, but with a higher cost.
- INDEX_NEXT_FIND_COST, the cost of finding the next key (and rowid)
  when doing an index scan and comparing the rowid to the filter.
  Before, this cost was assumed to be 0.
- ROW_LOOKUP_COST, the cost of fetching a row by rowid.

All of the above constants/variables are now tuned to be somewhat in
proportion to their execution complexity relative to each other. These
will need further tuning in the future, but that can wait until the above
are made user variables, as that will make tuning much easier.

To make the usage of the above easy, there are new (not virtual)
cost calculation functions in handler (a rough sketch of how they
compose follows the list):
- ha_read_time(), like read_time(), but takes optimizer_cache_cost into
  account.
- ha_read_and_copy_time(), like ha_read_time(), but takes
  RECORD_COPY_COST into account.
- ha_read_and_compare_time(), like ha_read_and_copy_time(), but takes
  TIME_FOR_COMPARE into account.
- ha_rndpos_time(). Read row with row id, taking RECORD_COPY_COST
  into account. This is used with filesort where we don't need
  to execute the WHERE clause again.
- ha_keyread_time(), like keyread_time(), but takes
  optimizer_cache_cost into account.
- ha_keyread_and_copy_time(), like ha_keyread_time(), but adds
  INDEX_COPY_COST.
- ha_key_scan_time(), like key_scan_time(), but takes
  optimizer_cache_cost into account.
- ha_key_scan_and_compare_time(), like ha_key_scan_time(), but adds
  INDEX_COPY_COST & TIME_FOR_COMPARE.
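
Rough sketch of how the above helpers compose (illustrative only; the
real functions are handler methods with more arguments that build on the
engine's read_time()/keyread_time(), and the constants here are stand-in
values, not the real defaults):

  static const double RECORD_COPY_COST_SKETCH= 0.05;  /* stand-in value */
  static const double TIME_FOR_COMPARE_SKETCH= 5.0;   /* old-style divisor */

  /* like read_time(), but scaled by the expected engine cache behaviour */
  double ha_read_time_sketch(double rows, double engine_fetch_cost,
                             double optimizer_cache_cost)
  {
    return rows * engine_fetch_cost * optimizer_cache_cost;
  }

  /* add the cost of copying each row from the engine to the SQL layer */
  double ha_read_and_copy_time_sketch(double rows, double fetch, double cache)
  {
    return ha_read_time_sketch(rows, fetch, cache) +
           rows * RECORD_COPY_COST_SKETCH;
  }

  /* add the cost of checking the WHERE clause for each fetched row */
  double ha_read_and_compare_time_sketch(double rows, double fetch,
                                         double cache)
  {
    return ha_read_and_copy_time_sketch(rows, fetch, cache) +
           rows / TIME_FOR_COMPARE_SKETCH;
  }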

I also added some setup costs for doing different types of scans and
creating temporary tables (on disk and in memory). This encourages
the optimizer to not use these for simple lookups of only a few rows if
there are adequate key lookup strategies (see the sketch after this list).
- TABLE_SCAN_SETUP_COST, cost of starting a table scan.
- INDEX_SCAN_SETUP_COST, cost of starting an index scan.
- HEAP_TEMPTABLE_CREATE_COST, cost of creating an in-memory
  temporary table.
- DISK_TEMPTABLE_CREATE_COST, cost of creating an on disk temporary
  table.
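
As a sketch of the idea only (the constant value is a stand-in, not the
real default), a fixed setup cost makes a scan comparatively expensive
for a handful of rows but is negligible for large row counts:

  double table_scan_cost_sketch(double rows, double per_row_cost)
  {
    const double TABLE_SCAN_SETUP_COST= 1.0;    /* stand-in value */
    return TABLE_SCAN_SETUP_COST + rows * per_row_cost;
  }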

When calculating the cost of fetching ranges, we had a cost of
IDX_LOOKUP_COST (0.125) for doing a key dive for a new range. This is
now replaced with 'io_cost * INDEX_LOOKUP_COST (1.0) *
optimizer_cache_cost', which matches the cost we use for 'ref' and
other key lookups. The effect is that the cost is now a bit higher
when we have many ranges for a key.
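
For illustration, with the defaults mentioned in this commit
(io_cost = 1.0, optimizer_cache_cost = 0.5):

  /* Per-range start cost, before and after (sketch, not optimizer code) */
  double old_per_range_start_cost()
  {
    return 0.125;                               /* old IDX_LOOKUP_COST */
  }

  double new_per_range_start_cost(double io_cost, double optimizer_cache_cost)
  {
    const double INDEX_LOOKUP_COST= 1.0;
    return io_cost * INDEX_LOOKUP_COST * optimizer_cache_cost;
  }

  /* With the defaults this gives 1.0 * 1.0 * 0.5 = 0.5 per range instead
     of 0.125, so plans with many ranges on a key now cost a bit more. */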

Almost all calculations with TIME_FOR_COMPARE are now done in
best_access_path(). 'JOIN::read_time' now includes the full
cost for finding the rows in the table.

In the result files, many of the changes are now again close to what
they were before the "Update cost for hash and cached joins" commit,
as that commit didn't fix the filter cost (too complex to do
everything in one commit).

The above changes exposed a lot of inconsistencies in optimizer cost
calculation. The main objective with the other changes was to make the
calculations as similar (and accurate) as possible, to make different
plans more comparable.

Detailed list of changes:

- Calculate index_only_cost consistently and correctly for all scan
  and ref accesses. The row fetch_cost and index_only_cost now
  take into account clustered keys, covering keys and index only accesses.
- cost_for_index_read now returns both full cost and index_only_cost
- Fixed cost calculation of get_sweep_read_cost() to match other
  similar costs. This is based on the assumption that data is more
  often stored on an SSD than on a hard disk.
- Replaced constant 2.0 with new define TABLE_SCAN_SETUP_COST.
- Some scan cost estimates did not take into account
  TIME_FOR_COMPARE. Now all scan costs take this into
  account. (main.show_explain)
- Added session variable optimizer_cache_hit_ratio (default 50%). By
  adjusting this one can reduce or increase the cost of index or direct
  record lookups. The effect of the default is that key lookups are now
  a bit cheaper than before. See usage of 'optimizer_cache_cost' in
  handler.h.
- JOIN_TAB::scan_time() did not take into account index only scans,
  which produced a wrong cost when an index scan was used. Changed
  JOIN_TAB::scan_time() to take into consideration clustered and
  covering keys. The values are now cached and we only have to call
  this function once. Other calls are changed to use the cached
  values. Function renamed to JOIN_TAB::estimate_scan_time().
- Fixed that most index cost calculations are done the same way and
  closer to 'range' calculations. The cost is now lower than
  before for small data sets and higher for large data sets, as we take
  into account how many keys are read (main.opt_trace_selectivity,
  main.limit_rows_examined).
- Ensured that index_scan_cost() ==
  range(scan_of_all_rows_in_table_using_one_range) +
  MULTI_RANGE_READ_INFO_CONST. One effect of this is that if there
  is a choice between doing a full index scan and a range-index scan over
  almost the whole table, then the index scan will be preferred (no
  range-read setup cost). (innodb.innodb, main.show_explain,
  main.range)
  - Fixed that EQ_REF and REF take into account clustered and covering
    keys. This changes some plans to use covering or clustered indexes
    as these are much cheaper. (main.subselect_mat_cost,
    main.state_tables_innodb, main.limit_rows_examined)
  - Rowid filter setup cost and filter compare cost now take into
    account fetching and checking the rowid (INDEX_NEXT_FIND_COST).
    (main.partition_pruning heap.heap_btree main.log_state)
  - Added INDEX_NEXT_FIND_COST to
    Range_rowid_filter_cost_info::lookup_cost to account for the time
    to find and check the next key value against the container.
  - Introduced ha_keyread_time(rows) that takes into account finding
    the next row and copying the key value to 'record'
    (INDEX_COPY_COST).
  - Introduced ha_key_scan_time() for calculating an index scan over
    all rows.
  - Added IDX_LOOKUP_COST to keyread_time() as a startup cost.
  - Added index_only_fetch_cost() as a convenience function to
    OPT_RANGE.
  - keyread_time() cost is slightly reduced to prefer shorter keys.
    (main.index_merge_myisam)
  - All of the above caused some index_merge combinations to be
    rejected because of cost (main.index_intersect). In some cases
    'ref' was replaced with index_merge because of the low
    cost calculation of get_sweep_read_cost().
  - Some index usage moved from PRIMARY to a covering index.
    (main.subselect_innodb)
- Changed cost calculation of filter to take INDEX_LOOKUP_COST and
  TIME_FOR_COMPARE into account. See sql_select.cc::apply_filter().
  Filter parameters and costs are now written to optimizer_trace.
- Don't use matchings_records_in_range() to try to estimate the number
  of filtered rows for ranges. The reason is that we want to ensure
  that 'range' is calculated similarly to 'ref'. There is also more work
  needed to calculate the selectivity when using ranges together with
  filtering. This causes the filtered column in EXPLAIN EXTENDED to be
  100.00 for some cases where range cannot use filtering.
  (main.rowid_filter)
- Introduced ha_scan_time() that takes into account the CPU cost of
  finding the next row and copying the row from the engine to
  'record'. This causes the cost of table scans to slightly increase and
  some tests changed their plan from ALL to RANGE or from ALL to ref.
  (innodb.innodb_mysql, main.select_pkeycache)
  In a few cases where the scan time of very small tables has a lower cost
  than a ref or range, things changed from ref/range to ALL.
  (main.myisam, main.func_group, main.limit_rows_examined,
  main.subselect2)
- Introduced ha_scan_and_compare_time(), which is like ha_scan_time()
  but also adds the cost of the WHERE clause (TIME_FOR_COMPARE).
- Added a small cost for creating a temporary table for
  materialization. This causes some very small tables to use a scan
  instead of materialization.
- Added the cost of checking the WHERE clause (TIME_FOR_COMPARE) on the
  accepted rows to the ROR costs in get_best_ror_intersect().
- Removed '- 0.001' from 'join->best_read' and optimize_straight_join()
  to ensure that the 'Last_query_cost' status variable contains the same
  value as the one that was calculated by the optimizer.
- Take avg_io_cost() into account in handler::keyread_time() and
  handler::read_time(). This should have no effect as it's 1.0 by
  default, except for heap, which overrides these functions.
- Some 'ref_or_null' accesses changed to 'range' because of cost
  adjustments (main.order_by)
- Added scan type "scan_with_join_cache" for optimizer_trace. This is
  just to show in the trace what kind of scan was used.
- When using 'scan_with_join_cache', take into account the number of
  preceding tables (as we have to restore all fields for all previous
  table combinations when checking the WHERE clause).
  The new cost added is:
  (row_combinations * RECORD_COPY_COST * number_of_cached_tables).
  This increases the cost of join buffering in proportion to the
  number of tables in the join buffer. One effect is that full scans
  are now done earlier as the cost is then smaller.
  (main.join_outer_innodb, main.greedy_optimizer)
- Removed the usage of 'worst_seeks' in cost_for_index_read as it
  caused wrong plans to be created; it preferred JT_EQ_REF even if it
  would be much more expensive than a full table scan. A related issue
  was that worst_seeks only applied to full lookups, not to clustered
  or index only lookups, which is not consistent. This caused some
  plans to use index scan instead of eq_ref (main.union).
- Changed federated block size from 4096 to 1500, which is the typical
  size of an IO packet.
- Added costs for reading rows to Federated. Needed as there is no
  caching of rows in the federated engine.
- Added ha_innobase::rndpos_time() cost function.
- A lot of extra things added to optimizer trace
  - More costs, especially for materialization and index_merge.
  - Make labels more uniform
  - Fixed a lot of minor bugs
  - Added 'trace_started()' around a lot of trace blocks.
- When calculating the ORDER BY with LIMIT cost for using an index,
  the cost did not take into account the number of row retrievals
  that have to be done or the cost of comparing the rows with the
  WHERE clause. The cost calculated would be just a fraction of
  the real cost. Now we calculate the cost as we do for ranges
  and 'ref'.
- Usage of 'Using index for group-by' changed a bit, as we now take into
  account the WHERE clause cost when comparing with 'ref' and prefer the
  method with fewer row combinations. (main.group_min_max)

Bugs fixed:
- Fixed that we don't calculate TIME_FOR_COMPARE twice for some plans,
  like in optimize_straight_join() and greedy_search()
- Fixed bug in save_explain_data where we could test for the wrong
  index when displaying 'Using index'. This caused some old plans to
  show 'Using index'. (main.subselect_innodb, main.subselect2)
- Fixed bug in get_best_ror_intersect() where 'min_cost' was not
  updated, and the cost we compared with was not the one that was
  used.
- Fixed very wrong cost calculation for priority queues in
  check_if_pq_applicable(). (main.order_by now correctly uses priority
  queue)
- When calculating the cost of EQ_REF or REF, we added the cost of
  comparing the WHERE clause with the found rows, not with all row
  combinations. This made ref and eq_ref be regarded as way too cheap
  compared to other access methods.
- FORCE INDEX cost calculation didn't take into account clustered or
  covered indexes.
- JT_EQ_REF cost was estimated as avg_io_cost(), which is half the
  cost of a JT_REF key. This may be true for an InnoDB primary key, but
  not for other unique keys or other engines. Now we use a handler
  function to calculate the cost, which allows us to handle
  clustered keys, covering keys and non-covering keys consistently.
- ha_start_keyread() didn't call extra_opt() if keyread was already
  enabled but still changed the 'keyread' variable (which is wrong).
  Fixed by not doing anything if keyread is already enabled.
- multi_range_read_info_cost() didn't take into account io_cost when
  calculating the cost of ranges.
- fix_semijoin_strategies_for_picked_join_order() used the wrong
  record_count when calling best_access_path() for SJ_OPT_FIRST_MATCH
  and SJ_OPT_LOOSE_SCAN.
- Hash joins didn't provide the correct best_cost to the upper level, which
  means that the cost for hash joins was more expensive than calculated
  in best_access_path() (a difference of 10x * TIME_FOR_COMPARE).
  This is fixed in the new code thanks to the fact that we now include
  the TIME_FOR_COMPARE cost in 'read_time'.

Other things:
- Added some 'if (thd->trace_started())' tests to speed up the code
- Removed the unused function Cost_estimate::is_zero()
- Simplified testing of HA_POS_ERROR in get_best_ror_intersect().
  (No cost changes)
- Moved ha_start_keyread() from join_read_const_table() to join_read_const()
  to enable keyread for all types of JT_CONST tables.
- Made a few very short functions inline in handler.h

Notes:
- In main.rowid_filter the join order of order and lineitem is swapped.
  This is because the cost of doing a range fetch of lineitem (98 rows) is
  almost as big as the whole join of order and lineitem. The filtering will
  also ensure that we only have to do very small key fetches of the rows
  in lineitem.
- main.index_merge_myisam had a few changes where we now use
  fewer keys for index_merge. This is because index scans are now more
  expensive than before.
- handler->optimizer_cache_cost is updated in ha_external_lock().
  This ensures that it is up to date per statement.
  Not an optimal solution (for locked tables), but should be ok for now.
- 'DELETE FROM t1 WHERE t1.a > 0 ORDER BY t1.a' does not take cost of
  filesort into consideration when table scan is chosen.
  (main.myisam_explain_non_select_all)
- perfschema.table_aggregate_global_* has changed because an update
  on a table with 1 row will now use table scan instead of key lookup.

TODO in upcoming commits:
- Fix selectivity calculation for ranges with and without filtering and
  when there is a ref access but scan is chosen.
  For this we have to store the lowest known value for
  'accepted_records' in the OPT_RANGE structure.
- Change records_read so that it does not include filtered rows.
- test_if_cheaper_ordering() needs to be updated to properly calculate costs.
  This will fix tests like main.order_by_innodb, main.single_delete_update
- Extend get_range_limit_read_cost() to take into consideration
  cost_for_index_read() if there were no quick keys. This will reduce
  the computed cost for ORDER BY with LIMIT in some cases.
  (main.innodb_ext_key)
- Fix that we take into account selectivity when counting the number of rows
  we have to read when considering using an index table scan to resolve
  ORDER BY.
- Add new calculation for rndpos_time() where we take into account the
  benefit of reading multiple rows from the same page.

3ec8a5c... by Monty <email address hidden>

Split cost calculations into fetch and total

This patch causes no changes in costs or result files.

Changes:
- Store row compare cost separately in Cost_estimate::comp_cost
- Store cost of fetching rows separately in OPT_RANGE
- Use range->fetch_cost instead of adjust_quick_cost(total_cost)

This was done to simplify cost calculation in sql_select.cc:
- We can use range->fetch_cost directly without having to call
  adjust_quick_cost(). adjust_quick_cost() is now removed.

Other things:
- Removed some unused functions in Cost_estimate

01b39c9... by Monty <email address hidden>

Make trace.add() usage uniform

- Before any multiple add() calls, always use 'if (trace_started())'.
- Add unlikely() around all tests of trace_started().
- Change trace.add(); trace.add(); to trace.add().add();
- When trace.add() goes over several lines, use the following formatting:
trace.
 add(xxx).
 add(yyy).
 add(zzz);

This format was chosen after a discussion between Sergei Petrunia and
me, as it looks similar independent of whether 'trace' is an object or a
pointer. It is also more suitable for an editor's auto-indentation.
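
For illustration, with made-up keys and a minimal stand-in for the trace
writer (not the real optimizer trace classes):

  #include <iostream>

  struct Trace_object
  {
    Trace_object &add(const char *key, double val)
    {
      std::cout << key << ": " << val << "\n";
      return *this;                  /* returning *this enables chaining */
    }
  };

  void trace_example(Trace_object &trace, double rows, double cost)
  {
    /* Old style: separate statements */
    trace.add("rows", rows);
    trace.add("cost", cost);

    /* New style: chained add(), using the agreed multi-line format */
    trace.
     add("rows", rows).
     add("cost", cost).
     add("read_time", cost / 2);
  }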

Other things:

Added DBUG_ASSERT(thd->trace_started()) to a few functions that should
only be called if trace is enabled.

"use_roworder_index_merge: true" changed to "use_sort_index_merge: false"
As the original output was often not correct.
Also fixed the related 'cause' to be correct.

In best_access_path(), print the cost (and number of rows) before
checking if the plan should be used. This removes the need to print
the cost in two places.

Changed a few "read_time" tags to "cost".