maria:bb-10.10-mdev-29502

Last commit made on 2023-09-14
Get this branch:
git clone -b bb-10.10-mdev-29502 https://git.launchpad.net/maria

Branch information

Name:
bb-10.10-mdev-29502
Repository:
lp:maria

Recent commits

6f31e96... by Yuchen Pei <email address hidden>

MDEV-29502 Fix some issues with spider direct aggregate

The direct aggregate (DA) mechanism seems to be intended to work only
when the query would otherwise be executed as a full table scan from
the spider node, with the aggregation also done at the spider node.
Typically this happens in sub_select(). In the test
spider.direct_aggregate_part, direct aggregate allows sending COUNT
statements directly to the data nodes and adding up the results at the
spider node, instead of iterating over the rows one by one at the
spider node.
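
A minimal sketch of the query shape DA targets, assuming a hypothetical
Spider table t1:

    -- Each data node computes its partial COUNT; the spider node only
    -- adds the partial results together instead of fetching every row.
    SELECT COUNT(*) FROM t1;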

By contrast, the group by handler (GBH) typically sends aggregated
queries directly to the data nodes, in which case DA brings no further
improvement.

That is why we fix it by disabling DA when a GBH is used.

There are other reasons supporting this change. First, the creation of
a GBH results in a call to change_to_use_tmp_fields() (as opposed to
setup_copy_fields()), which causes the spider DA function
spider_db_fetch_for_item_sum_funcs() to work on the wrong items. Second,
the spider DA function only calls direct_add() on the items; the
follow-up add() needs to be called by the SQL layer code. In
do_select(), after executing the query with the GBH, it seems that the
required add() is not necessarily called.

Disabling DA when GBH is used does fix the bug. There are a few
other things included in this commit to improve the situation with
spider DA:

1. Add a session variable that allows the user to disable DA
completely. This will help as a temporary measure if/when further bugs
with DA emerge (see the sketch after this list).

2. Move the increment of direct_aggregate_count to the spider DA
function. Currently this is done in rather bizarre and random
locations.

3. Fix the spider_db_mbase_row creation so that the last of its row
fields (the sentinel) is NULL. The code already does a null check, but
the sentinel field lies at an invalid address, causing the segfaults.
With a correct implementation of the row creation, we can avoid such
segfaults.
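
A sketch of point 1; the variable name below is hypothetical, the
actual name is the one defined by this commit:

    -- Disable DA for the session (hypothetical variable name).
    SET SESSION spider_use_direct_aggregate = OFF;
    SELECT COUNT(*) FROM t1;  -- aggregation now happens row by row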

5133873... by Yuchen Pei <email address hidden>

MDEV-31673 MDEV-29502 Remove spider_db_handler::need_lock_before_set_sql_for_exec

This function trivially returns false.

0b6de3d... by Sergei Golubchik

avoid "'sh' is not recognized..." error in mtr on windows

cb384d0... by THIRUNARAYANAN BALATHANDAYUTHAPANI

MDEV-32008 auto_increment value on table increments by one after restart

- This issue was caused by commit 4700f2ac70f8c79f2ac1968b6b59d18716f492bf (MDEV-30796).
During a bulk insert operation, InnoDB wrongly stores the next autoincrement
value as the current autoincrement value. So update the current autoincrement
value rather than the next autoincrement value.
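
A hedged repro sketch (assumes the statement takes the InnoDB bulk
insert path; the exact trigger conditions are not shown):

    CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
    INSERT INTO t1 VALUES (1),(2),(3);
    -- ... restart the server ...
    INSERT INTO t1 VALUES (NULL);
    SELECT MAX(a) FROM t1;  -- expect 4; the bug produced 5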

cd5808e... by Ruoyu Zhong

MDEV-31963 Fix libfmt usage in SFORMAT

`fmt::detail::make_arg` does not accept temporaries. Make it happy by
storing the format arg values in a temporary array first.
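
The affected code path is reachable from SQL through the SFORMAT
function; a minimal smoke test (values arbitrary):

    SELECT SFORMAT('x={}, y={}', 1, 'abc');  -- expect: x=1, y=abc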

Signed-off-by: Ruoyu Zhong <email address hidden>

f4cec36... by Ruoyu Zhong

MDEV-31963 cmake: fix libfmt usage

`fmt::detail::make_arg` does not accept temporaries, so the code snippet
checking for the system libfmt needs to be adjusted accordingly.

Signed-off-by: Ruoyu Zhong <email address hidden>

bf3b787... by THIRUNARAYANAN BALATHANDAYUTHAPANI

MDEV-31835 Remove unnecessary extra HA_EXTRA_IGNORE_INSERT call

- This commit is different from the 10.6 commit c438284863db2ccba8a04437c941a5c8a2d9225b.
Due to commit 045757af4c301757ba449269351cc27b1691a7d6 (MDEV-24621),
InnoDB buffers and pre-sorts the records for each index, and builds
the indexes one page at a time.

Large INSERT IGNORE statements abort the server during the bulk
insert operation. The problem is that an InnoDB merge record exceeds
the page size. To avoid this scenario, InnoDB should catch the
too-big record while buffering the insert operation itself.

row_merge_buf_encode(): returns the length of the encoded index record

row_merge_buf_write(): catches DB_TOO_BIG_RECORD earlier and
returns the error
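
A hedged sketch of the failure shape (sizes illustrative; the real
limit depends on the page size and row format):

    CREATE TABLE t1 (f1 INT PRIMARY KEY, f2 VARCHAR(20000)) ENGINE=InnoDB;
    -- A row big enough that its merge record exceeds the index page size
    -- must now fail with a too-big-record error during buffering instead
    -- of aborting the server.
    INSERT IGNORE INTO t1 VALUES (1, REPEAT('x', 20000));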

afc64ea... by Alexander Barkov

MDEV-31719 Wrong result of: WHERE inet6_column IN ('','::1')

The problem was fixed earlier by a patch for MDEV-27207
  68403eeda320ad0831563ce09a9c4af1549fe65e
and an additional cleanup patch for MDEV-27207
  88dd50b80ad9624d05b72751fd6e4a2cfdb6a3fe
The above patches added MTR tests for INET6.

Now adding UUID-specific MTR tests only.
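
The affected query shape, per the title (table and column names
hypothetical); the new tests exercise the UUID analogue:

    CREATE TABLE t1 (a INET6);
    INSERT INTO t1 VALUES ('::1');
    SELECT * FROM t1 WHERE a IN ('','::1');  -- must return the '::1' row

    CREATE TABLE t2 (u UUID);
    INSERT INTO t2 VALUES ('00000000-0000-0000-0000-000000000001');
    SELECT * FROM t2
    WHERE u IN ('', '00000000-0000-0000-0000-000000000001');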

8aaacb5... by Sergey Petrunia

MDEV-31432 tmp_table field accessed after free

Before this patch, the code in Item_field::print() used
this convention (described in sql_explain.h:ExplainDataStructureLifetime):

- By default, the table that Item_field refers to is accessible.
- ANALYZE and SHOW {EXPLAIN|ANALYZE} may print Items after some
  temporary tables have been dropped. They use the
  QT_DONT_ACCESS_TMP_TABLES flag. When it is ON, Item_field::print()
  will not access the table it refers to if it is a temporary table.

The bug was that an EXPLAIN statement may also compute subqueries (depending
on the subquery context and the @@expensive_subquery_limit setting). After the
computation, the subquery calls JOIN::cleanup(true), which drops some of
its temporary tables. Calling Item_field::print() on an Item that refers to
such a table then accesses freed memory.

In this patch, we take into account that query optimization can compute
a subquery and discard its temporary tables. Item_field::print() now
assumes that any temporary table might have already been dropped.
This means the QT_DONT_ACCESS_TMP_TABLES flag is not needed: we behave
as if it were always set.

But we also make one exception: derived tables are not freed in the
JOIN::cleanup() call. They are freed later, in close_thread_tables(),
at the same time as regular tables are closed.
Because of that, Item_field::print() may assume that temporary tables
representing derived tables are still available.
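
A hedged sketch of the triggering pattern (the query is hypothetical):

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (1),(2),(3);
    -- A cheap, uncorrelated subquery (below @@expensive_subquery_limit)
    -- may be computed during EXPLAIN; printing fields of its dropped
    -- internal temporary tables must not touch freed memory.
    EXPLAIN SELECT * FROM t1 WHERE a IN (SELECT a FROM t1 GROUP BY a);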

Initial patch by: Rex Jonston
Reviewed by: Monty <email address hidden>

9cd2989... by Marko Mäkelä

Merge 10.6 into 10.10