- 169. By Oleg Tsarev on 2011-06-13
fix Makefile and install_tests.sh (update from lp:percona-server/5.1)
- 168. By Alexey Kopytov on 2011-06-03
Support for fast growing VARCHAR fields in InnoDB.
When ALTER TABLE modifies a VARCHAR column so that its length is
increased in a compatible way (i.e. the old length and the new one are
either both <= 255 or both <= 65535), it is possible to avoid table copy
and only update metadata.
This revision updates rename_field.patch so that in addition to fast
field renaming for InnoDB tables, growing VARCHAR columns can also be
performed without data copying by updating the appropriate fields in
InnoDB data dictionary.
To keep the patch name consistent with the updated functionality, the
patch has been renamed to innodb_fast_alter_column.patch, and the server
variable that controls the functionality has been renamed from
internal_rename_field to innodb_fast_alter_column.
This revision also adds test cases for both fast renames and fast
VARCHAR column growing.
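The compatibility rule above can be sketched as follows. This is an illustrative helper, not actual InnoDB code, and it assumes "both <= 65535" means both lengths fall in the two-byte length-prefix range (256..65535):

```python
def can_grow_varchar_in_place(old_len: int, new_len: int) -> bool:
    """Return True if a VARCHAR column can be grown by updating only
    the metadata in the InnoDB data dictionary (no table copy).

    The length must not shrink, and both the old and the new length
    must need the same number of length-prefix bytes: one byte for
    lengths <= 255, two bytes for lengths in 256..65535.
    """
    if new_len < old_len:
        return False  # shrinking always requires a table rebuild
    one_byte_prefix = old_len <= 255 and new_len <= 255
    two_byte_prefix = 255 < old_len <= 65535 and 255 < new_len <= 65535
    return one_byte_prefix or two_byte_prefix
```

For example, VARCHAR(10) to VARCHAR(200) or VARCHAR(300) to VARCHAR(1000) would qualify, while VARCHAR(200) to VARCHAR(300) crosses the 255-byte prefix boundary and would not.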
- 167. By Oleg Tsarev on 2011-05-31
fix tests: funcs_1.is_engines_innodb, funcs_1.is_columns_is,
funcs_1.storedproc, funcs_1.is_tables_is, percona_server_variables
- 166. By Alexey Kopytov on 2011-05-15
Fix for MySQL bug #54127: mysqld segfaults when built using a
non-default --with-max-indexes value.
When using a non-default MAX_KEY value, a different code path is used
when processing index bitmaps. With the default value of 64, the
optimized "template <> class Bitmap<64>" is used which represents and
processes bitmaps as 64-bit integers. Otherwise, "template <uint
default_width> class Bitmap" is used in which case bitmaps are
represented as arrays.
Multiple problems with the "non-optimized" Bitmap class were discovered
when testing a server binary built with a non-default
--with-max-indexes value:
1. bitmap_set_prefix() could overrun the internal buffer when resetting
the remainder of the buffer after setting the prefix due to an
incorrectly calculated remainder's length. This was the reason for the
crash on startup in MySQL bug #54127.
2. Bitmap::intersect() did not take into account that bitmap_init()
resets the supplied buffer, so an intersection with a zero bitmap was
always calculated (reported as MySQL #61178). This led to numerous test
failures due to different execution plans produced by the optimizer.
3. Bitmap::to_ulonglong() incorrectly calculated the result value due to
serious bugs in the [u]int*korr/[u]int*store sets of macros in
my_global.h (reported as MySQL bugs #61179 and #61180). This led to test
failures in distinct.test and group_min_max.test.
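The first bug class above is a classic off-by-one in computing how many bytes remain after the prefix. A minimal sketch of a correctly bounded byte-array prefix setter, using a simplified flat bitmap rather than the actual Bitmap<> template:

```python
def set_prefix(bitmap: bytearray, n_bits: int, prefix_size: int) -> None:
    """Set the first prefix_size bits to 1 and clear all remaining bits.

    bitmap holds n_bits bits, LSB-first within each byte.  The remainder
    must be computed from the total buffer size minus the bytes already
    written by the prefix; miscounting it overruns the buffer, which is
    the kind of error behind the MySQL bug #54127 startup crash.
    """
    n_bytes = (n_bits + 7) // 8            # total buffer size in bytes
    full_bytes, rem_bits = divmod(prefix_size, 8)
    for i in range(full_bytes):
        bitmap[i] = 0xFF                   # whole bytes of the prefix
    idx = full_bytes
    if rem_bits:
        bitmap[idx] = (1 << rem_bits) - 1  # partial prefix byte
        idx += 1
    for i in range(idx, n_bytes):          # clear exactly the remainder
        bitmap[i] = 0x00
```

The key point is that the clear loop runs from the byte after the prefix up to `n_bytes` and no further, so the buffer is never overrun regardless of the prefix length.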
There are still a number of failing tests when running the test suite
with a non-default --with-max-indexes value:
- create.test contains a test case explicitly testing the 64-bit index
  bitmap limit.
- the ps_N* series of tests verifies the metadata sent by EXPLAIN, where
the field length of "possible_keys" and "key_len" columns depends on
the MAX_KEY value and hence, is different for a binary built with a
non-default value of --with-max-indexes.
- 165. By Alexey Kopytov on 2011-05-15
The timestamp_no_default test failed due to a bug in the test file.
- 164. By Oleg Tsarev on 2011-05-07
rpl_utility.cc: the create_conversion_table() function used the column
count from the event and did not check the column count of the slave
table. If some columns are dropped on the slave, an incoming RBR event
from the master crashes the server. An additional check was added, so
the slave now stops with a slave error instead of crashing.
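The added safety check amounts to roughly the following sketch (hypothetical names, not the actual rpl_utility.cc code): before building the conversion table, compare the column count carried in the RBR event against the slave table's own column count, and raise a replication error instead of indexing past the end of the slave's field array:

```python
class SlaveError(Exception):
    """Raised to stop the slave SQL thread with an error."""

def check_event_columns(event_column_count: int, slave_columns: list) -> None:
    # If the master's event describes more columns than the slave table
    # actually has (e.g. columns were dropped on the slave), stop
    # replication cleanly instead of crashing the server.
    if event_column_count > len(slave_columns):
        raise SlaveError(
            f"event has {event_column_count} columns, "
            f"slave table has only {len(slave_columns)}")
```

A slave with fewer columns than the event describes then fails with a clear error, while the normal case (equal or fewer event columns) proceeds untouched.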
- 163. By Alexey Kopytov on 2011-04-29
Initial implementation of timestamp_no_default.patch.
This patch allows creating TIMESTAMP columns with no default values. It
is impossible in MySQL for legacy reasons, so a TIMESTAMP column without
an explicitly specified default will implicitly be defined with one
of CURRENT_TIMESTAMP, NULL, or 0 as the default value.
In order to not break legacy behavior, a special syntax TIMESTAMP NOT
NULL DEFAULT NULL should be used in CREATE TABLE to avoid defining
implicit defaults for a TIMESTAMP column.
SHOW CREATE TABLE was also updated to produce that syntax for TIMESTAMP
columns with no default value.
This patch also fixes MySQL bug #33887.
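The default-selection behavior described above can be summarized in a small sketch. This is a deliberate simplification for illustration only (the real legacy rules also depend on column position, e.g. the first TIMESTAMP column in a table); the function name and return convention are invented here:

```python
def timestamp_default(not_null: bool, default_clause):
    """Pick the default for a TIMESTAMP column.

    default_clause is the literal DEFAULT text ("NULL", "0",
    "CURRENT_TIMESTAMP", ...) or None when no DEFAULT was written.
    Returns None to mean "no default at all" (the new behavior).
    """
    # New special syntax: TIMESTAMP NOT NULL DEFAULT NULL means the
    # column is created without any default value.
    if not_null and default_clause == "NULL":
        return None
    if default_clause is not None:
        return default_clause              # user-specified default
    # Legacy behavior: MySQL invents an implicit default (simplified).
    return "CURRENT_TIMESTAMP" if not_null else "NULL"
```

The essential point is the first branch: the otherwise-contradictory `NOT NULL DEFAULT NULL` combination is repurposed to opt out of implicit defaults without changing the behavior of any existing CREATE TABLE statement.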
- 162. By Alexey Kopytov on 2011-04-27
Initial implementation of max_binlog_packet.patch.
The problem solved by the patch is that the limit set by
max_allowed_packet does not work well with row-based replication. Even
if it is not possible to INSERT/UPDATE values larger than the limit set
by max_allowed_packet, the limit may still be exceeded when shipping the
binary log due to the fact that for each modified row there may be one
or two full row images in the binary log. So, for example, for a table
with a BLOB column, even if only an INT column is changed, there will be
one or two BLOB images stored into the corresponding RBR event, so it
may break replication by exceeding the max_allowed_packet limit on
either master or slave.
A similar problem exists when replaying the binary log with mysqlbinlog,
but in this case the base64 overhead is added on top of it.
The max_binlog_packet.patch addresses both problems by introducing the
max_binlog_packet system variable. The variable's semantics is quite
different depending on whether it is used in the global or a session scope:
- The global max_binlog_packet value, when it is not 0, represents an
effective packet size limit for binary log operations on both the
master and slaves. i.e. reading a binlog event from the binary log on
the master, sending it to a slave, reading it by a slave, and reading
it from the relay log by the slave are all performed with the
effective max_allowed_packet value equal to "max_binlog_packet + RBR
event header length" regardless of the actual value of
max_allowed_packet.
The default value for max_binlog_packet is zero (i.e. the real value
of max_allowed_packet will be used in all contexts).
Unlike most other variables, the client's session max_binlog_packet
value is NOT initialized from the global max_binlog_packet value. When
a new client connects, its max_binlog_packet session variable is set
to 0 regardless of the global variable value.
- The session max_binlog_packet variable can only be set by users with
the SUPER privilege. Once it is set to a non-zero value, it changes
the effective value of max_allowed_packet for the current session so
that the BINLOG event corresponding to an event at most
max_binlog_packet bytes long could be read by the server, that is:
the effective max_allowed_packet = (session's max_binlog_packet + event
header length) * 4 / 3
where the 4/3 multiplier is the base64 overhead.
The session max_binlog_packet variable is used by mysqlbinlog, which now
has a new command line option with the same name. When passed on the
command line, --max-binlog-packet makes mysqlbinlog prepend its output
with "SET LOCAL max_binlog_packet=<option value>;".
In other words, max_binlog_packet makes it possible to limit the
maximum RBR event size for both replication and binary log replay
independently of max_allowed_packet.
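The effective-limit arithmetic above can be written out as a quick sketch; `effective_max_allowed_packet` is an illustrative helper, not server code:

```python
def effective_max_allowed_packet(max_binlog_packet: int,
                                 event_header_len: int,
                                 max_allowed_packet: int,
                                 session: bool = False) -> int:
    """Compute the effective packet limit under max_binlog_packet.

    A value of 0 means "fall back to max_allowed_packet".  In the
    session scope (the mysqlbinlog / BINLOG-statement case) base64
    encoding inflates the event by 4/3, so the limit is scaled up
    to still admit an event of max_binlog_packet bytes.
    """
    if max_binlog_packet == 0:
        return max_allowed_packet          # default: variable is inactive
    limit = max_binlog_packet + event_header_len
    if session:
        limit = limit * 4 // 3             # base64 overhead of BINLOG '...'
    return limit
```

For instance, with a session max_binlog_packet of 300 bytes and (for simplicity) a zero header length, the effective limit becomes 400 bytes, covering the base64-encoded form of a 300-byte event.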
- 161. By Alexey Kopytov on 2011-04-12
Bug #757749: main.handler_innodb fails in 5.5.11
Using ALTER TABLE to convert an InnoDB table to a MEMORY table could
fail due to a bug in innodb_expand_fast_index_creation.patch.
The problem in innodb_expand_fast_index_creation.patch was that for
ALTER TABLE ENGINE=... the code mistakenly made sure the original
table's engine is InnoDB, rather than the target's one. This resulted in
a failure if the target engine was MEMORY because it does not support
certain handler calls.
Fixed by making sure the target engine is InnoDB, not the original one.
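The fix boils down to testing the target table's engine rather than the source's. A hypothetical sketch of the corrected condition (names invented for illustration):

```python
def use_fast_index_creation(source_engine: str, target_engine: str) -> bool:
    # The buggy code effectively checked source_engine == "InnoDB", so
    # ALTER TABLE ... ENGINE=MEMORY on an InnoDB table wrongly took the
    # fast-index-creation path even though MEMORY does not support the
    # required handler calls.  The corrected check looks at the target.
    return target_engine == "InnoDB"
```

With this check, converting an InnoDB table to MEMORY falls back to the ordinary path, while conversions into InnoDB can still use fast index creation.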
- Branch format:
- Branch format 7
- Repository format:
- Bazaar repository format 2a (needs bzr 1.16 or later)