Merge lp:~sergei.glushchenko/percona-xtrabackup/2.1-xb-bug1240352 into lp:percona-xtrabackup/2.1

Proposed by Sergei Glushchenko
Status: Work in progress
Proposed branch: lp:~sergei.glushchenko/percona-xtrabackup/2.1-xb-bug1240352
Merge into: lp:percona-xtrabackup/2.1
Diff against target: 11 lines (+1/-0)
1 file modified
test/inc/ib_part.sh (+1/-0)
To merge this branch: bzr merge lp:~sergei.glushchenko/percona-xtrabackup/2.1-xb-bug1240352
Reviewer: Alexey Kopytov (community)
Review status: Needs Fixing
Review via email: mp+236339@code.launchpad.net

Description of the change

http://jenkins.percona.com/view/PXB%202.1/job/percona-xtrabackup-2.1-param-new/18/

t/ib_part_include_stream.sh is fixed by using FLUSH TABLES.
It looks like there is another regression in kill_long_selects.

Revision history for this message
Alexey Kopytov (akopytov) wrote :

Sergei,

I don't buy the explanation for failures, because:

1. For MyISAM tables, data is guaranteed to be flushed after each INSERT.
2. Even if that wasn't the case, CHECKSUM TABLE works at the SQL level rather than the filesystem level, which means that as long as the server sees some data, CHECKSUM TABLE should return a non-zero value.

So the reason for failures seems to be something else?
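
For reference, a minimal SQL sketch of point 2 (the table name t1 and the values are hypothetical, not taken from the test):

    CREATE TABLE t1 (a INT) ENGINE=MyISAM;
    INSERT INTO t1 VALUES (1), (2), (3);
    CHECKSUM TABLE t1;  -- expected to be non-zero as soon as the rows are
                        -- visible at the SQL level, whether or not the .MYD
                        -- file has been flushed to disk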

review: Needs Fixing
Revision history for this message
Sergei Glushchenko (sergei.glushchenko) wrote :

Alexey,

From http://dev.mysql.com/doc/refman/5.1/en/concurrent-inserts.html

The MyISAM storage engine supports concurrent inserts to reduce contention between readers and writers for a given table: If a MyISAM table has no holes in the data file (deleted rows in the middle), an INSERT statement can be executed to add rows to the end of the table at the same time that SELECT statements are reading rows from the table. If there are multiple INSERT statements, they are queued and performed in sequence, concurrently with the SELECT statements. The results of a concurrent INSERT may not be visible immediately.

In practice I have seen many times that records appended to MyISAM tables are not immediately visible to SELECT. I think FLUSH TABLES can help with this, but I am not sure.

Revision history for this message
Sergei Glushchenko (sergei.glushchenko) wrote :

If we agree on the reason (I don't see any other), I can also set concurrent_insert to 0.
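
For illustration, a minimal sketch of that alternative as the SQL the test would send (concurrent_insert is the standard MyISAM server setting; issuing it right before the INSERT in ib_part_data() is only an assumption):

    SET GLOBAL concurrent_insert = 0;  -- disable concurrent inserts entirely
    INSERT INTO test VALUES (1), (101), (201), (301), (401);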

Revision history for this message
Alexey Kopytov (akopytov) wrote :

Sergei,

What do MyISAM concurrent inserts have to do with this test, where SELECT is guaranteed to be executed _after_ INSERT finishes, i.e. there is no concurrency whatsoever?

Unmerged revisions

766. By Sergei Glushchenko

Bug 1240352: Sporadic t/ib_part_include_stream.sh failures in Jenkins

The test fails with checksum = 0 before the backup and checksum != 0 after
the restore. A checksum value of zero indicates that the data hasn't made it
into the table yet. Since the table is MyISAM, this could be an effect of
concurrent inserts, for example.

The fix is to FLUSH TABLES in order to make sure all data is written
to the table.
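
As a sketch, the SQL sequence this fix aims for (the table name comes from the test; the CHECKSUM TABLE step is the check described above, not part of this diff):

    INSERT INTO test VALUES (1), (101), (201), (301), (401);
    FLUSH TABLES;
    CHECKSUM TABLE test;  -- should now return a non-zero value before the backup is taken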

Preview Diff

=== modified file 'test/inc/ib_part.sh'
--- test/inc/ib_part.sh 2013-04-28 18:32:11 +0000
+++ test/inc/ib_part.sh 2014-09-29 14:00:38 +0000
@@ -40,6 +40,7 @@
 function ib_part_data()
 {
 	echo 'INSERT INTO test VALUES (1), (101), (201), (301), (401);';
+	echo 'FLUSH TABLES';
 }
 
 function ib_part_init()
