MDEV-20194 Warnings inconsistently issued upon CHECK on table from older versions
This patch changes the conditions for HA_ERR_TOO_BIG_ROW, and users will complain!
But I hope we now really have a correct row size check.
This patch will be hard to merge because of the changed tests.
Ideally, all row size check code should now live in innodb.max_record_size
or innodb.max_record_size_compressed.
The check for a too-big row size is now performed for every index on CREATE TABLE,
and for every added index on ALTER TABLE. This is how the issue in question was fixed.
The error message for COMPRESSED tables might contain an incorrect maximum
possible row size if the overflowing index is not the clustered one. Fixing
this will require some more effort.
create_table_info_t::row_size_is_acceptable(): high-level interface for the
too-big row size check.
record_size_info_t dict_index_t::record_size_info(): a low-level interface
for performing too-big row checks; it can also be used to obtain the maximum
potential row size.
get_max_record_size_leaf_page()
get_max_record_size_non_leaf_page(): a uniform way to get the maximum
allowed row size for a given page size and index row format. One function
instead of five slightly different computations.
format_too_big_row_error_message(): one function for formatting too-big-row
error messages instead of several.
The test had been disabled in 10.2, in commit
5ec9b88e11118c798ff2381771a72f76b2b72f9e, due to frequent failures.
After the problems were addressed, we failed to re-enable the test
until now.
3793da4... by Anel Husakovic <email address hidden>
Enable the auto parameter of the flag `default-character-set`
Closes #739
When the option `--default-character-set=auto` is given, mysqldump should
detect the character set from the underlying OS settings.
MDEV-16560: [counter] rocksdb.ttl_secondary_read_filtering fail in buildbot
It is not reproducible, but the issue seems to be the same as in
MDEV-20490 and rocksdb.ttl_primary_read_filtering: a compaction triggered
by DROP TABLE falls behind and compacts away the expired rows needed by the
next test. Fix this in the same way.
MDEV-20371: Invalid reads at plan refinement stage: join->positions...
best_access_path() is called from two optimization phases:
1. Plan choice phase, in choose_plan(). Here, the join prefix being
considered is in join->positions[]
2. Plan refinement stage, in fix_semijoin_strategies_for_picked_join_order
Here, the join prefix is in join->best_positions[]
The code used to access join->positions[] from stage #2. This didn't cause any
valgrind or ASAN failures (as join->positions[] had been written to before),
but the effect was similar to reading random data:
The join prefix we've picked (in join->best_positions) could have
nothing in common with the join prefix that was last to be considered
(in join->positions).
863a951... by Daniel Bartholomew <email address hidden>