Created by Oleg Tsarev on 2011-05-27 and last modified on 2011-05-27
Get this branch:
bzr branch lp:~percona-dev/percona-server/release-5.1.52-12-rnt-replication_slave_skip_columns_backup
Members of Percona developers can upload to this branch.

Recent revisions

169. By Oleg Tsarev on 2011-05-27


168. By Oleg Tsarev on 2011-05-24

support slave processing

167. By Oleg Tsarev on 2011-05-19

Issue 14794: extend Table_map_log_event

166. By Alexey Kopytov on 2011-05-15

Fix for MySQL bug #54127: mysqld segfaults when built using
--with-max-indexes=128.

When using a non-default MAX_KEY value, a different code path is used
when processing index bitmaps. With the default value of 64, the
optimized "template <> class Bitmap<64>" is used, which represents and
processes bitmaps as 64-bit integers. Otherwise, "template <uint
default_width> class Bitmap" is used, in which case bitmaps are
represented as arrays.

Multiple problems with the "non-optimized" Bitmap class were discovered
when testing a server binary built with --with-max-indexes=128:

1. bitmap_set_prefix() could overrun the internal buffer when resetting
the remainder of the buffer after setting the prefix, due to an
incorrectly calculated remainder length. This was the reason for the
crash on startup in MySQL bug #54127.

2. Bitmap::intersect() did not take into account that bitmap_init()
resets the supplied buffer, so an intersection with a zero bitmap was
always calculated (reported as MySQL bug #61178). This led to numerous
test failures due to different execution plans produced by the optimizer.

3. Bitmap::to_ulonglong() incorrectly calculated the result value due to
serious bugs in [u]int*korr/[u]int*store set of macros in
my_global.h (reported as MySQL bugs #61179 and #61180). This led to test
failures in distinct.test and group_min_max.test.

There are still a number of failing tests when running the test suite
with --with-max-indexes=128:

- create.test contains a test case explicitly testing the 64-bit index

- the ps_N* series of tests verifies the metadata sent by EXPLAIN, where
the field length of "possible_keys" and "key_len" columns depends on
the MAX_KEY value and hence, is different for a binary built with a
non-default value of --with-max-indexes.

165. By Alexey Kopytov on 2011-05-15

The percona_timestamp_no_default test failed due to a bug in the test file.

164. By Oleg Tsarev on 2011-05-07

The create_conversion_table() function in rpl_utility.cc used the column
count from the event without checking the column count of the slave's
table. If some columns are dropped on the slave, an incoming RBR event
from the master crashes the server. Added a check so that the slave now
stops with a slave error instead.

163. By Alexey Kopytov on 2011-04-29

Initial implementation of timestamp_no_default.patch.

This patch allows creating TIMESTAMP columns with no default values.
This is impossible in MySQL for legacy reasons: a TIMESTAMP column
without an explicitly specified default is implicitly defined with one
of CURRENT_TIMESTAMP, NULL, or 0 as the default value.

In order not to break legacy behavior, the special syntax TIMESTAMP NOT
NULL DEFAULT NULL must be used in CREATE TABLE to avoid defining
implicit defaults for a TIMESTAMP column.

SHOW CREATE TABLE was also updated to produce that syntax for TIMESTAMP
columns with no default.

This patch also fixes MySQL bug #33887.
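As a sketch of the syntax described above (the table and column names
are hypothetical; the comments assume the patch's semantics):

```sql
-- Without the patch, a TIMESTAMP column with no explicit default
-- silently receives an implicit one (CURRENT_TIMESTAMP, NULL, or 0).
CREATE TABLE t1 (
  ts1 TIMESTAMP,                        -- implicit default applied
  ts2 TIMESTAMP NOT NULL DEFAULT NULL   -- special syntax: no implicit default
);
```

With the patch applied, SHOW CREATE TABLE reproduces the "TIMESTAMP NOT
NULL DEFAULT NULL" form for such columns.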

162. By Alexey Kopytov on 2011-04-27

Initial implementation of max_binlog_packet.patch

The problem solved by the patch is that the limit set by
max_allowed_packet does not work well with row-based replication. Even
if it is not possible to INSERT/UPDATE values larger than the limit set
by max_allowed_packet, the limit may still be exceeded when shipping the
binary log due to the fact that for each modified row there may be one
or two full row images in the binary log. For example, for a table with
a BLOB column, even if only an INT column is changed, one or two full
BLOB images are stored in the corresponding RBR event, which may break
replication by exceeding the max_allowed_packet limit on either the
master or the slave.

A similar problem exists when replaying the binary log with mysqlbinlog,
but in this case the base64 overhead is added on top of it.

max_binlog_packet.patch addresses both problems by introducing the
max_binlog_packet system variable. The variable's semantics differ
depending on whether it is used in the global or session scope:

- The global max_binlog_packet value, when it is not 0, represents an
  effective packet size limit for binary log operations on both the
  master and slaves. That is, reading a binlog event from the binary
  log on the master, sending it to a slave, reading it by the slave,
  and reading it from the relay log by the slave are all performed
  with the effective max_allowed_packet value equal to
  "max_binlog_packet + RBR event header length" regardless of the
  actual value of max_allowed_packet.

  The default value for max_binlog_packet is zero (i.e. the real value
  of max_allowed_packet will be used in all contexts).

  Unlike most other variables, the client's session max_binlog_packet
  value is NOT initialized from the global max_binlog_packet value. When
  a new client connects, its max_binlog_packet session variable is set
  to 0 regardless of the global variable value.

- The session max_binlog_packet variable can only be set by users with
  the SUPER privilege. Once it is set to a non-zero value, it changes
  the effective value of max_allowed_packet for the current session so
  that the BINLOG event corresponding to an event at most
  max_binlog_packet bytes long could be read by the server, that is:

  the effective max_allowed_packet = (session's max_binlog_packet +
  event header length) * 4 / 3

  where the 4/3 multiplier is the base64 overhead.

The session max_binlog_packet variable is used by mysqlbinlog, which now
has a new command line option with the same name. When passed on the
command line, --max-binlog-packet makes mysqlbinlog prepend its output
with "SET LOCAL max_binlog_packet=<option value>;".

In other words, max_binlog_packet makes it possible to limit the
maximum RBR event size for both replication and binary log
roll-forward, independently of max_allowed_packet.
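A minimal illustration of the intended usage, assuming the semantics
described above (the value 67108864 is arbitrary):

```sql
-- Global scope: caps the effective packet size for binary log
-- operations on master and slaves (0, the default, disables the cap).
SET GLOBAL max_binlog_packet = 67108864;

-- Session scope (SUPER required): this is also what mysqlbinlog
-- prepends to its output when run with --max-binlog-packet=67108864.
SET LOCAL max_binlog_packet = 67108864;
```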

161. By Alexey Kopytov on 2011-04-12

Bug #757749: main.handler_innodb fails in 5.5.11

Using ALTER TABLE to convert an InnoDB table to a MEMORY table could
fail due to a bug in innodb_expand_fast_index_creation.patch.

The problem in innodb_expand_fast_index_creation.patch was that for
ALTER TABLE ... ENGINE=... the code mistakenly checked that the
original table's engine was InnoDB, rather than the target's. This
resulted in a failure if the target engine was MEMORY because it does
not support certain handler calls.

Fixed by making sure the target engine is InnoDB, not the original one.

160. By Alexey Kopytov on 2011-03-26

Initial implementation of innodb_expand_fast_index_creation.patch.

This patch expands the applicability of InnoDB fast index creation to
mysqldump, ALTER TABLE and OPTIMIZE TABLE as follows:

1. mysqldump now has a new option, --innodb-optimize-keys, which
changes the way InnoDB tables are dumped so that secondary and foreign
keys are created after loading the data, thus taking advantage of fast
index creation.
It's an implementation of the feature request reported as MySQL bug

More specifically:

- KEY, UNIQUE KEY and CONSTRAINT specifications are omitted from CREATE
TABLE corresponding to InnoDB tables.

- an additional ALTER TABLE is issued after dumping the data to create
the previously omitted keys.

Delaying foreign key creation does not introduce any additional risks as
mysqldump always prepends its output with SET FOREIGN_KEY_CHECKS=0 anyway.
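The transformation can be sketched as follows; the table definition and
key names are hypothetical, and the exact output of the patched
mysqldump may differ:

```sql
-- With --innodb-optimize-keys, the dump emits CREATE TABLE without
-- the KEY, UNIQUE KEY and CONSTRAINT specifications...
CREATE TABLE t1 (
  id INT NOT NULL,
  parent_id INT,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

-- ...the data is loaded here...

-- ...and the omitted keys are added afterwards, so they are built
-- with fast index creation (SET FOREIGN_KEY_CHECKS=0 is already in
-- effect at this point of the dump).
ALTER TABLE t1
  ADD KEY k_parent (parent_id),
  ADD CONSTRAINT fk_parent FOREIGN KEY (parent_id) REFERENCES t1 (id);
```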

2. When ALTER TABLE requires a table copy, secondary keys are now
dropped and recreated after copying the data. The following
restrictions apply:

- only non-unique keys can be involved in this optimization

- if the table contains foreign keys, or a foreign key is being added as
a part of the current ALTER TABLE statement, the optimization is
disabled for all keys.

3. As OPTIMIZE TABLE is mapped to ALTER TABLE ... ENGINE=InnoDB for
InnoDB tables, it now also benefits from fast index creation with the
same restrictions as for ALTER TABLE.
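For illustration, the mapping mentioned above means that for an InnoDB
table these two statements take the same code path (the table name is
hypothetical):

```sql
OPTIMIZE TABLE t1;
-- is executed as
ALTER TABLE t1 ENGINE=InnoDB;
```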

Branch metadata

Branch format:
Branch format 6
Repository format:
Bazaar pack repository format 1 (needs bzr 0.92)