maria:bb-10.4-MDEV-32517

Last commit made on 2023-10-31
Get this branch:
git clone -b bb-10.4-MDEV-32517 https://git.launchpad.net/maria

Branch merges

Branch information

Name:
bb-10.4-MDEV-32517
Repository:
lp:maria

Recent commits

365d0a4... by Dmitry Shulga <email address hidden>

MDEV-32517: Memory leak on running query in PS mode with condition that has 'ANY/SOME (subquery)' clause and any kind of the following comparisons '>', '>=', '<', '<='

Instantiation of Item_direct_ref_to_item for storing a reference
to Item_func_conv_charset should take place only on the first execution of
a query in PS mode; otherwise it leads to a memory leak.

Added an extra check for
  current_select->first_cond_optimization
before entering the block where a new instance of the class
Item_direct_ref_to_item is created on the PS memory_root.
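
To illustrate the shape of the fix, here is a compilable toy version of that
guard; only the item class names and the first_cond_optimization flag come
from the message above, while Statement_arena, Select_lex_stub and maybe_wrap
are invented stand-ins rather than the server's real types:

  // Illustrative sketch, not the server code: the wrapper item is created
  // on the per-statement arena only while the first optimization pass of a
  // prepared statement is running; later executions reuse that instance,
  // so nothing new is allocated (and leaked) on re-execution.
  #include <memory>
  #include <utility>
  #include <vector>

  struct Item { virtual ~Item() = default; };
  struct Item_func_conv_charset : Item {};
  struct Item_direct_ref_to_item : Item {
    explicit Item_direct_ref_to_item(Item *r) : ref(r) {}
    Item *ref;
  };

  // Stand-in for the prepared statement's memory root: everything allocated
  // here lives as long as the prepared statement itself.
  struct Statement_arena {
    std::vector<std::unique_ptr<Item>> items;
    template <class T, class... Args> T *alloc(Args&&... args) {
      items.push_back(std::make_unique<T>(std::forward<Args>(args)...));
      return static_cast<T *>(items.back().get());
    }
  };

  struct Select_lex_stub {
    bool first_cond_optimization = true;   // cleared after the first run
  };

  Item *maybe_wrap(Select_lex_stub *sel, Statement_arena *arena,
                   Item_func_conv_charset *conv, Item *current)
  {
    // The extra check from the fix: only build the wrapper during the
    // first execution; afterwards keep whatever was built back then.
    if (sel->first_cond_optimization)
      return arena->alloc<Item_direct_ref_to_item>(conv);
    return current;
  }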

6fa69ad... by Kristian Nielsen

MDEV-27436: binlog corruption (/tmp no space left on device at the same moment)

This commit fixes several bugs in error handling around disk full when
writing the statement/transaction binlog caches:

1. If the error occurs during a non-transactional statement, the code
attempts to binlog the partially executed statement (as it cannot roll
back). The stmt_cache->error was still set from the disk full error. This
caused MYSQL_BIN_LOG::write_cache() to get an error while trying to read the
cache to copy it to the binlog. This was then wrongly interpreted as a disk
full error writing to the binlog file. As a result, a partial event group
containing just a GTID event (no query or commit) was binlogged. Fixed by
checking whether an error is set in the statement cache and, if so,
binlogging an INCIDENT event instead of a corrupt event group, as is done
for other errors (a sketch of this check follows item 2 below).

2. For LOAD DATA LOCAL INFILE, if a disk full error occurred while writing to
the statement cache, the code would attempt to abort and read-and-discard
any remaining data sent by the client. The discard code would however
continue trying to write data to the statement cache, and wrongly interpret
another disk full error as end-of-file from the client. This left the client
connection with extra data, which corrupted the communication for the next
command, as well as again causing a corrupt/incomplete event to be
binlogged. Fixed by restoring the default read function before reading any
remaining data from the client connection.
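
The control flow of fix 1 can be sketched as below; Io_cache, Binlog and
binlog_non_transactional_stmt are simplified stand-ins invented for this
illustration, and only the idea of checking the cache's error flag and writing
an incident instead of copying the cache reflects the actual change:

  // Illustrative sketch, not the MariaDB implementation: before copying a
  // statement cache into the binlog, check whether the cache itself already
  // hit a write error (e.g. disk full while spilling to its temp file). If
  // it did, write an incident marker instead of a partial event group, so a
  // replica stops with a clear error rather than applying corrupt events.
  #include <string>
  #include <vector>

  struct Io_cache {
    std::vector<char> data;
    int error = 0;             // set when an earlier write to the cache failed
  };

  struct Binlog {
    std::vector<std::string> events;
    bool write_cache(const Io_cache &cache) {
      events.emplace_back(cache.data.begin(), cache.data.end());
      return true;
    }
    void write_incident(const std::string &reason) {
      events.push_back("INCIDENT: " + reason);
    }
  };

  bool binlog_non_transactional_stmt(Binlog &log, Io_cache &stmt_cache) {
    if (stmt_cache.error) {
      // The cache is unreliable; do not try to read it back. Record an
      // incident so downstream replicas know an event group was lost.
      log.write_incident("disk full while writing statement cache");
      return false;
    }
    return log.write_cache(stmt_cache);
  }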

Reviewed-by: Andrei Elkin <email address hidden>
Signed-off-by: Kristian Nielsen <email address hidden>

e52777f... by Brandon Nesterenko

MDEV-26272: The macro MASTER_INFO_VAR invokes undefined behaviour

Updates to specific replication system variables need to target the
active primary connection to support multi-source replication. These
variables use the Sys_var_multi_source_ulonglong type. This class
uses offsets of the Master_info C++ class to generalize access to
its member variables.

The problem is that the Master_info class is not of standard layout,
and neither are many of its member variables, e.g. rli and
rli->relay_log. Because the class is not of standard layout, using
offsets to access member variables invokes undefined behavior.

This patch changes how Sys_var_multi_source_ulonglong accesses the
member variables of Master_info from using parameterized memory
offsets to “getter” function pointers.

Note that the size parameter and assertion are removed, as they are
no longer needed because the condition is guaranteed by compiler
type-safety checks.
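
A compilable toy version of the getter-pointer idea, under obvious
simplifications: the struct layouts, the _sketch suffix and the two getters
below are invented for illustration and do not match the real Master_info or
Sys_var_multi_source_ulonglong definitions.

  // Illustrative sketch: instead of storing a byte offset into Master_info
  // (undefined behaviour for a non-standard-layout class), the variable
  // definition stores a function pointer returning a reference to the
  // member it exposes; the compiler checks the types, so no size parameter
  // or runtime assertion is needed.
  #include <cstdint>
  #include <iostream>

  struct Relay_log { uint64_t space_limit = 0; };
  struct Relay_log_info { Relay_log relay_log; };
  struct Master_info {
    virtual ~Master_info() = default;          // virtual => not standard layout
    uint64_t connect_retry = 60;
    Relay_log_info rli;
  };

  using mi_getter = uint64_t &(*)(Master_info *);

  struct Sys_var_multi_source_ulonglong_sketch {
    mi_getter get;                              // one getter per variable
    uint64_t read(Master_info *active_mi) const { return get(active_mi); }
    void update(Master_info *active_mi, uint64_t v) const { get(active_mi) = v; }
  };

  static uint64_t &mi_connect_retry(Master_info *mi) { return mi->connect_retry; }
  static uint64_t &mi_relay_space_limit(Master_info *mi) {
    return mi->rli.relay_log.space_limit;
  }

  int main() {
    Master_info mi;
    Sys_var_multi_source_ulonglong_sketch connect_retry{&mi_connect_retry};
    Sys_var_multi_source_ulonglong_sketch relay_space{&mi_relay_space_limit};
    connect_retry.update(&mi, 10);
    relay_space.update(&mi, 1ULL << 20);
    std::cout << connect_retry.read(&mi) << " " << relay_space.read(&mi) << "\n";
  }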

Reviewed By:
============
Kristian Nielsen <email address hidden>

eb8053b... by Rex Johnston

MDEV-31995 Bogus error executing PS for query using CTE with renaming of columns

This commit addresses column naming issues with CTEs in the use of prepared
statements and stored procedures. Usage of either prepared statements or
procedures with Common Table Expressions and column renaming may be affected.

There are three related but different issues addressed here.

1) First execution issue. Consider the following

prepare s from "with cte (col1, col2) as (select a as c1, b as c2 from t
order by c1) select col1, col2 from cte";
execute s;

After parsing, items in the select are named (c1,c2), order by (and group by)
resolution is performed, then item names are set to (col1, col2).
When the statement is executed, context analysis is performed again, but
resolution of elements in the order by clause cannot find c1, because it was
renamed to col1 and stays that way.

The solution is to save the names of these items during context resolution,
before they have been renamed. We can then reset the item names back to the
names they had right after parsing, so that the first execution can resolve
items referred to in order by and group by clauses (a sketch of this
save/restore follows issue 3 below).

2) Second Execution Issue

When the derived table contains more than one select 'unioned' together, one
might reasonably think that dealing only with the items in the first select
(which determines the names in the resultant table) would be sufficient. It is
not; a different problem arises. Consider

prepare st from "with cte (c1,c2) as
  (select a as col1, sum(b) as col2 from t1 where a > 0 group by col1
    union select a as col3, sum(b) as col4 from t2 where b > 2 group by col3)
  select * from cte where c1=1";

When the optimizer (only run during the first execution) pushes the outside
condition "c1=1" into every select in the derived table union, it renames the
items to make the condition valid. In this example, this leaves the first item
in the second select named 'c1'. The second execution will now fail during
'group by' resolution.

Again, the solution is to save the names during context analysis, resetting
before subsequent resolution, but making sure that we save/reset the item
names in all the selects in this union.

3) Memory Leak

During parsing, Item::set_name() is used to allocate memory in the statement
arena. We cannot use this call during statement execution, as that would leak
memory on every execution. Instead, we directly set the item list names to
those in the column list of this CTE (also allocated during parsing).
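
The save/restore idea from 1) and 2) can be sketched as follows; Select_item,
save_item_names and restore_item_names are invented names for this
illustration, and the real code works on the item lists of every select in the
union rather than on a plain vector:

  // Illustrative sketch, not the server code: names produced by the parser
  // are remembered before any renaming, then restored at the start of each
  // re-execution so that order by / group by resolution sees the same names
  // it saw when the statement was first prepared. The saved copies live for
  // the lifetime of the statement, so no per-execution allocation is needed.
  #include <string>
  #include <vector>

  struct Select_item {
    std::string name;        // current (possibly renamed) name
    std::string orig_name;   // name as set by the parser, saved once
  };

  // Called once, during context analysis of the preparation pass, before CTE
  // column renaming or condition pushdown rewrites the names.
  void save_item_names(std::vector<Select_item> &items) {
    for (Select_item &it : items)
      if (it.orig_name.empty())
        it.orig_name = it.name;
  }

  // Called at the start of every execution, for every select of the union,
  // so that group by / order by resolution finds the parser-given names.
  void restore_item_names(std::vector<Select_item> &items) {
    for (Select_item &it : items)
      it.name = it.orig_name;
  }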

Approved by Igor Babaev <email address hidden>

86351f5... by Sergey Petrunia

MDEV-32351: Significant slowdown with outer joins: fix embedded.

For some reason, in the embedded server, the command

let $a=`$query`

ignores the local context. As a workaround, use SET STATEMENT to set
debug_dbug in the same statement.

1cd8a5e... by Oleksandr "Sanja" Byelkin

Fix of the backport of block-nl-join.r_unpack_time_ms.

9bf2e5e... by Sergey Petrunia

MDEV-32351: Significant slowdown with outer joins: Test coverage

Make ANALYZE FORMAT=JSON print block-nl-join.r_unpack_ops when the
analyze_print_r_unpack_ops debug flag is set.

Then, add a testcase.

4ed5900... by Sergey Petrunia

ANALYZE FORMAT=JSON: Backport block-nl-join.r_unpack_time_ms from 11.0 +fix MDEV-30830.

Also fix it to work with hashed join (MDEV-30830).

Reviewed by: Monty <email address hidden>

954a6de... by Igor Babaev

MDEV-32351 Significant slowdown for query with many outer joins

This patch fixes a performance regression introduced in the patch for the
bug MDEV-21104. The regression could affect queries in which a join buffer
was used for an outer join whose on expression allowed a conjunctive
condition depending only on outer tables to be extracted. If the number of
records in the join buffer for which this condition was false greatly
exceeded the number of other records, the slowdown could be significant.

If a conjunctive condition depending only on outer tables is extracted from
the ON expression, this condition is evaluated when the interesting fields of
each surviving record of the outer tables are put into the join buffer. Each
such set of fields for any join operation is supplied with a match flag field
used to generate null-complemented rows. If the condition evaluates to false,
the flag is set to MATCH_IMPOSSIBLE. When looking in the join buffer for
records matching a record of the right operand of the outer join operation,
the records with this flag do not need to be unpacked into record buffers for
evaluation of on expressions.
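
As a rough sketch of the flag check this patch restores (the row layout,
scan_join_buffer and the callback are invented for illustration; only the
MATCH_IMPOSSIBLE flag name comes from the message):

  // Illustrative sketch, not the join cache code: each row stored in the
  // join buffer carries a match flag; rows whose pushed-down ON condition
  // was already found false were flagged MATCH_IMPOSSIBLE when they were
  // put into the buffer, and the scan over the buffer skips them without
  // unpacking their fields or re-evaluating the ON expression.
  #include <cstddef>
  #include <cstdint>
  #include <vector>

  enum Match_flag : uint8_t { MATCH_NOT_FOUND, MATCH_FOUND, MATCH_IMPOSSIBLE };

  struct Buffered_row {
    Match_flag match_flag;
    std::vector<uint8_t> packed_fields;   // never unpacked for flagged rows
  };

  // Process only the buffered rows that can still match the current row of
  // the right operand of the outer join.
  size_t scan_join_buffer(const std::vector<Buffered_row> &buffer,
                          bool (*on_expr_matches)(const Buffered_row &)) {
    size_t matches = 0;
    for (const Buffered_row &row : buffer) {
      if (row.match_flag == MATCH_IMPOSSIBLE)
        continue;                         // the restored shortcut: skip it
      if (on_expr_matches(row))
        ++matches;
    }
    return matches;
  }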

The patch for MDEV-21104, which fixed a wrong-results problem with the
'not exists' optimization, by mistake broke the code that allowed records
with the match flag set to MATCH_IMPOSSIBLE to be ignored when looking for
matching records. As a result, such records were unpacked for each record
of the right operand of the outer join operation. This caused a significant
execution penalty in some cases.

One of the test cases added in the patch can be used only for demonstration
of the restored performance for the reported query. The second test case is
needed to demonstrate the validity of the fix.

11abc21... by Oleksandr "Sanja" Byelkin

fixed typo