Merge lp:~akopytov/percona-xtrabackup/bug1213102-2.1 into lp:percona-xtrabackup/2.1

Proposed by Alexey Kopytov
Status: Merged
Approved by: Alexey Kopytov
Approved revision: no longer in the source branch.
Merged at revision: 667
Proposed branch: lp:~akopytov/percona-xtrabackup/bug1213102-2.1
Merge into: lp:percona-xtrabackup/2.1
Diff against target: 449 lines (+119/-140)
14 files modified
patches/innodb51.patch (+13/-0)
patches/innodb55.patch (+13/-0)
patches/innodb56.patch (+13/-0)
patches/xtradb51.patch (+13/-0)
patches/xtradb55.patch (+13/-0)
test/inc/common.sh (+29/-0)
test/inc/ib_incremental_common.sh (+10/-36)
test/t/bug1182726.sh (+1/-9)
test/t/bug810269.sh (+1/-22)
test/t/compact_compressed.sh (+4/-3)
test/t/ib_slave_info.sh (+1/-9)
test/t/ib_stream_incremental.sh (+2/-19)
test/t/xb_export.sh (+1/-9)
test/t/xb_incremental_compressed.inc (+5/-33)
To merge this branch: bzr merge lp:~akopytov/percona-xtrabackup/bug1213102-2.1
Reviewer: George Ormond Lorch III (community)
Review type: g2
Status: Approve
Review via email: mp+180764@code.launchpad.net

Description of the change

    Bug #1213102: compact_compressed test is too slow in debug builds

    Removed the page_zip_validate() call from page_cur_insert_rec_zip(), so
    that in UNIV_ZIP_DEBUG builds page validation is now performed only
    after inserting a record, not before.

    This reduced the execution time of the debug compact_compressed test
    from 235 seconds to 165 seconds.

    Another low-hanging optimization for the test is to compress (and thus
    copy) the tables before loading data rather than after, so fully loaded
    tables do not have to be rebuilt.
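The test-suite changes below replace per-row INSERT loops with a single multi-row statement built from Bash brace expansion. The following is a standalone sketch of the tuple-generation trick used by the new multi_row_insert() helper in test/inc/common.sh; the gen_insert name is hypothetical, and it prints the generated statement instead of piping it to the mysql client, so it runs without a server:

```shell
#!/bin/bash
# Sketch of the statement generation behind multi_row_insert():
# Bash expands \({1..N},x\) into N separate "(i,x)" arguments before the
# call, and joining "$*" with IFS=, yields one multi-row INSERT statement.

gen_insert() {
    local table=$1
    shift
    # Run in a subshell so the IFS change does not leak to the caller.
    (IFS=,; echo "INSERT INTO $table VALUES $*")
}

gen_insert test \({1..5},100\)
# INSERT INTO test VALUES (1,100),(2,100),(3,100),(4,100),(5,100)
```

A single round trip to the server with one such statement replaces hundreds of per-row client invocations, which is where the speedup in the diff comes from.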

http://jenkins.percona.com/view/XtraBackup/job/percona-xtrabackup-2.1-param/437/

Revision history for this message
George Ormond Lorch III (gl-az):
review: Approve (g2)

Preview Diff

=== modified file 'patches/innodb51.patch'
--- patches/innodb51.patch 2013-07-27 10:47:09 +0000
+++ patches/innodb51.patch 2013-08-19 04:09:30 +0000
@@ -1503,3 +1503,16 @@

 UNIV_MEM_FREE(buf, n);
 }
+--- a/storage/innodb_plugin/page/page0cur.c
++++ b/storage/innodb_plugin/page/page0cur.c
+@@ -1247,7 +1247,9 @@
+ ut_ad(page_is_comp(page));
+
+ ut_ad(!page_rec_is_supremum(*current_rec));
+-#ifdef UNIV_ZIP_DEBUG
++#if 0
++ /* Disabled to speedup compact_compressed test for debug XtraBackup
++ builds, see LP bug #1213036. */
+ ut_a(page_zip_validate(page_zip, page, index));
+ #endif /* UNIV_ZIP_DEBUG */
+

=== modified file 'patches/innodb55.patch'
--- patches/innodb55.patch 2013-07-27 10:47:09 +0000
+++ patches/innodb55.patch 2013-08-19 04:09:30 +0000
@@ -1433,3 +1433,16 @@

 UNIV_MEM_FREE(buf, n);
 }
+--- a/storage/innobase/page/page0cur.c
++++ b/storage/innobase/page/page0cur.c
+@@ -1247,7 +1247,9 @@
+ ut_ad(page_is_comp(page));
+
+ ut_ad(!page_rec_is_supremum(*current_rec));
+-#ifdef UNIV_ZIP_DEBUG
++#if 0
++ /* Disabled to speedup compact_compressed test for debug XtraBackup
++ builds, see LP bug #1213036. */
+ ut_a(page_zip_validate(page_zip, page, index));
+ #endif /* UNIV_ZIP_DEBUG */
+

=== modified file 'patches/innodb56.patch'
--- patches/innodb56.patch 2013-07-20 14:24:26 +0000
+++ patches/innodb56.patch 2013-08-19 04:09:30 +0000
@@ -1451,3 +1451,16 @@
 DBUG_INJECT_CRASH("ib_commit_inplace_crash",
 crash_inject_count++);
 }
+--- a/storage/innobase/page/page0cur.cc
++++ b/storage/innobase/page/page0cur.cc
+@@ -1207,7 +1207,9 @@
+ == index->id || mtr->inside_ibuf || recv_recovery_is_on());
+
+ ut_ad(!page_cur_is_after_last(cursor));
+-#ifdef UNIV_ZIP_DEBUG
++#if 0
++ /* Disabled to speedup compact_compressed test for debug XtraBackup
++ builds, see LP bug #1213036. */
+ ut_a(page_zip_validate(page_zip, page, index));
+ #endif /* UNIV_ZIP_DEBUG */
+

=== modified file 'patches/xtradb51.patch'
--- patches/xtradb51.patch 2013-07-20 14:24:26 +0000
+++ patches/xtradb51.patch 2013-08-19 04:09:30 +0000
@@ -1585,3 +1585,16 @@

 UNIV_MEM_FREE(buf, n);
 }
+--- a/storage/innodb_plugin/page/page0cur.c
++++ b/storage/innodb_plugin/page/page0cur.c
+@@ -1247,7 +1247,9 @@
+ ut_ad(page_is_comp(page));
+
+ ut_ad(!page_rec_is_supremum(*current_rec));
+-#ifdef UNIV_ZIP_DEBUG
++#if 0
++ /* Disabled to speedup compact_compressed test for debug XtraBackup
++ builds, see LP bug #1213036. */
+ ut_a(page_zip_validate(page_zip, page, index));
+ #endif /* UNIV_ZIP_DEBUG */
+

=== modified file 'patches/xtradb55.patch'
--- patches/xtradb55.patch 2013-07-20 14:24:26 +0000
+++ patches/xtradb55.patch 2013-08-19 04:09:30 +0000
@@ -1583,3 +1583,16 @@

 trx_start_if_not_started(trx);

+--- a/storage/innobase/page/page0cur.c
++++ b/storage/innobase/page/page0cur.c
+@@ -1247,7 +1247,9 @@
+ ut_ad(page_is_comp(page));
+
+ ut_ad(!page_rec_is_supremum(*current_rec));
+-#ifdef UNIV_ZIP_DEBUG
++#if 0
++ /* Disabled to speedup compact_compressed test for debug XtraBackup
++ builds, see LP bug #1213036. */
+ ut_a(page_zip_validate(page_zip, page, index));
+ #endif /* UNIV_ZIP_DEBUG */
+

=== modified file 'test/inc/common.sh'
--- test/inc/common.sh 2013-07-25 15:04:49 +0000
+++ test/inc/common.sh 2013-08-19 04:09:30 +0000
@@ -721,5 +721,34 @@
 fi
 }

+##############################################################################
+# Execute a multi-row INSERT into a specified table.
+#
+# Arguments:
+#
+# $1 -- table specification
+#
+# all subsequent arguments represent tuples to insert in the form:
+# (value1, ..., valueN)
+#
+# Notes:
+#
+# 1. Bash special characters in the arguments must be quoted to screen them
+# from interpreting by Bash, i.e. \(1,...,\'a'\)
+#
+# 2. you can use Bash brace expansion to generate multiple tuples, e.g.:
+# \({1..1000},\'a'\) will generate 1000 tuples (1,'a'), ..., (1000, 'a')
+##############################################################################
+function multi_row_insert()
+{
+ local table=$1
+ shift
+
+ vlog "Inserting $# rows into $table..."
+ (IFS=,; echo "INSERT INTO $table VALUES $*") | \
+ $MYSQL $MYSQL_ARGS
+ vlog "Done."
+}
+
 # To avoid unbound variable error when no server have been started
 SRV_MYSQLD_IDS=

=== modified file 'test/inc/ib_incremental_common.sh'
--- test/inc/ib_incremental_common.sh 2013-07-25 15:04:49 +0000
+++ test/inc/ib_incremental_common.sh 2013-08-19 04:09:30 +0000
@@ -15,50 +15,25 @@
 load_dbase_schema incremental_sample

 # Adding initial rows
-vlog "Adding initial rows to database..."
-numrow=100
-count=0
-while [ "$numrow" -gt "$count" ]
-do
- ${MYSQL} ${MYSQL_ARGS} -e "insert into test values ($count, $numrow);" incremental_sample
- let "count=count+1"
-done
-vlog "Initial rows added"
-
-# Full backup
-# backup root directory
-mkdir -p $topdir/backup
+multi_row_insert incremental_sample.test \({1..100},100\)

 vlog "Starting backup"
-innobackupex $topdir/backup
-full_backup_dir=`grep "innobackupex: Backup created in directory" $OUTFILE | awk -F\' '{print $2}'`
-vlog "Full backup done to directory $full_backup_dir"
+full_backup_dir=$topdir/full_backup
+innobackupex --no-timestamp $full_backup_dir

 # Changing data

 vlog "Making changes to database"
 ${MYSQL} ${MYSQL_ARGS} -e "create table t2 (a int(11) default null, number int(11) default null) engine=innodb" incremental_sample
-let "count=numrow+1"
-let "numrow=1000"
-while [ "$numrow" -gt "$count" ]
-do
- ${MYSQL} ${MYSQL_ARGS} -e "insert into test values ($count, $numrow);" incremental_sample
- ${MYSQL} ${MYSQL_ARGS} -e "insert into t2 values ($count, $numrow);" incremental_sample
- let "count=count+1"
-done
+
+multi_row_insert incremental_sample.test \({101..1000},1000\)
+multi_row_insert incremental_sample.t2 \({101..1000},1000\)

 # Rotate bitmap file here and force checkpoint at the same time
 shutdown_server
 start_server

-i=1001
-while [ "$i" -lt "7500" ]
-do
- ${MYSQL} ${MYSQL_ARGS} -e "insert into t2 values ($i, repeat(\"ab\", 32500));" incremental_sample
- let "i=i+1"
-done
-
-vlog "Changes done"
+multi_row_insert incremental_sample.t2 \({1001..7500},REPEAT\(\'ab\',32500\)\)

 # Saving the checksum of original table
 checksum_test_a=`checksum_table incremental_sample test`
@@ -73,10 +48,9 @@
 vlog "###############"

 # Incremental backup
-innobackupex --incremental --incremental-basedir=$full_backup_dir \
- $topdir/backup $ib_inc_extra_args
-inc_backup_dir=`grep "innobackupex: Backup created in directory" $OUTFILE | tail -n 1 | awk -F\' '{print $2}'`
-vlog "Incremental backup done to directory $inc_backup_dir"
+inc_backup_dir=$topdir/backup_incremental
+innobackupex --no-timestamp --incremental --incremental-basedir=$full_backup_dir \
+ $inc_backup_dir $ib_inc_extra_args

 vlog "Preparing backup"
 # Prepare backup

=== modified file 'test/t/bug1182726.sh'
--- test/t/bug1182726.sh 2013-07-08 09:28:05 +0000
+++ test/t/bug1182726.sh 2013-08-19 04:09:30 +0000
@@ -19,15 +19,7 @@
 load_dbase_schema incremental_sample

 # Adding initial rows
-vlog "Adding initial rows to database..."
-numrow=100
-count=0
-while [ "$numrow" -gt "$count" ]
-do
- ${MYSQL} ${MYSQL_ARGS} -e "insert into test values ($count, $numrow);" incremental_sample
- let "count=count+1"
-done
-vlog "Initial rows added"
+multi_row_insert incremental_sample.test \({1..100},100\)

 # Full backup of the slave server
 switch_server $slave_id

=== modified file 'test/t/bug810269.sh'
--- test/t/bug810269.sh 2013-07-25 15:04:49 +0000
+++ test/t/bug810269.sh 2013-08-19 04:09:30 +0000
@@ -20,28 +20,7 @@
 "ALTER TABLE test ENGINE=InnoDB ROW_FORMAT=compressed \
 KEY_BLOCK_SIZE=4" incremental_sample

-vlog "Adding initial rows to table"
-
-numrow=10000
-count=0
-while [ "$numrow" -gt "$count" ]; do
- sql="INSERT INTO test VALUES ($count, $numrow)"
- let "count=count+1"
- for ((i=0; $i<99; i++)); do
- sql="$sql,($count, $numrow)"
- let "count=count+1"
- done
- ${MYSQL} ${MYSQL_ARGS} -e "$sql" incremental_sample
-done
-
-rows=`${MYSQL} ${MYSQL_ARGS} -Ns -e "SELECT COUNT(*) FROM test" \
- incremental_sample`
-if [ "$rows" != "10000" ]; then
- vlog "Failed to add initial rows"
- exit -1
-fi
-
-vlog "Initial rows added"
+multi_row_insert incremental_sample.test \({1..10000},10000\)

 checksum_a=`checksum_table incremental_sample test`


=== modified file 'test/t/compact_compressed.sh'
--- test/t/compact_compressed.sh 2013-07-25 15:04:49 +0000
+++ test/t/compact_compressed.sh 2013-08-19 04:09:30 +0000
@@ -33,10 +33,7 @@
 start_server

 load_dbase_schema sakila
- load_dbase_data sakila

- backup_dir="$topdir/backup"
-
 vlog "Compressing tables"

 table_list=`${MYSQL} ${MYSQL_ARGS} -Ns -e \
@@ -49,6 +46,10 @@
 KEY_BLOCK_SIZE=$page_size" sakila
 done

+ load_dbase_data sakila
+
+ backup_dir="$topdir/backup"
+
 innobackupex --no-timestamp --compact $backup_dir
 record_db_state sakila


=== modified file 'test/t/ib_slave_info.sh'
--- test/t/ib_slave_info.sh 2013-04-27 18:46:54 +0000
+++ test/t/ib_slave_info.sh 2013-08-19 04:09:30 +0000
@@ -12,15 +12,7 @@
 load_dbase_schema incremental_sample

 # Adding initial rows
-vlog "Adding initial rows to database..."
-numrow=100
-count=0
-while [ "$numrow" -gt "$count" ]
-do
- ${MYSQL} ${MYSQL_ARGS} -e "insert into test values ($count, $numrow);" incremental_sample
- let "count=count+1"
-done
-vlog "Initial rows added"
+multi_row_insert incremental_sample.test \({1..100},100\)

 # Full backup of the slave server
 switch_server $slave_id

=== modified file 'test/t/ib_stream_incremental.sh'
--- test/t/ib_stream_incremental.sh 2013-07-03 18:30:28 +0000
+++ test/t/ib_stream_incremental.sh 2013-08-19 04:09:30 +0000
@@ -22,15 +22,7 @@
 load_dbase_schema incremental_sample

 # Adding initial rows
- vlog "Adding initial rows to database..."
- numrow=100
- count=0
- while [ "$numrow" -gt "$count" ]
- do
- ${MYSQL} ${MYSQL_ARGS} -e "insert into test values ($count, $numrow);" incremental_sample
- let "count=count+1"
- done
- vlog "Initial rows added"
+ multi_row_insert incremental_sample.test \({1..100},100\)

 full_backup_dir=$topdir/full_backup

@@ -38,16 +30,7 @@
 innobackupex --no-timestamp $full_backup_dir

 # Changing data
-
- vlog "Making changes to database"
- let "count=numrow+1"
- let "numrow=500"
- while [ "$numrow" -gt "$count" ]
- do
- ${MYSQL} ${MYSQL_ARGS} -e "insert into test values ($count, $numrow);" incremental_sample
- let "count=count+1"
- done
- vlog "Changes done"
+ multi_row_insert incremental_sample.test \({101..500},500\)

 # Saving the checksum of original table
 checksum_a=`checksum_table incremental_sample test`

=== modified file 'test/t/xb_export.sh'
--- test/t/xb_export.sh 2013-07-25 15:04:49 +0000
+++ test/t/xb_export.sh 2013-08-19 04:09:30 +0000
@@ -35,15 +35,7 @@
 load_dbase_schema incremental_sample

 # Adding some data to database
-vlog "Adding initial rows to database..."
-numrow=100
-count=0
-while [ "$numrow" -gt "$count" ]
-do
- ${MYSQL} ${MYSQL_ARGS} -e "insert into test values ($count, $numrow);" incremental_sample
- let "count=count+1"
-done
-vlog "Initial rows added"
+multi_row_insert incremental_sample.test \({1..100},100\)

 checksum_1=`checksum_table incremental_sample test`
 rowsnum_1=`${MYSQL} ${MYSQL_ARGS} -Ns -e "select count(*) from test" incremental_sample`

=== modified file 'test/t/xb_incremental_compressed.inc'
--- test/t/xb_incremental_compressed.inc 2013-07-25 15:04:49 +0000
+++ test/t/xb_incremental_compressed.inc 2013-08-19 04:09:30 +0000
@@ -12,23 +12,6 @@

 . inc/common.sh

-function add_rows()
-{
- local table=$1
- local start=$2
- local limit=$3
-
- while [ "$limit" -gt "$start" ]; do
- sql="INSERT INTO $table VALUES ($start, $limit)"
- let "start=start+1"
- for ((i=0; $i<99; i++)); do
- sql="${sql},($start, $limit)"
- let "start=start+1"
- done
- ${MYSQL} ${MYSQL_ARGS} -e "$sql" incremental_sample
- done
-}
-
 #
 # Test incremental backup of a compressed tablespace with a specific page size
 #
@@ -61,18 +44,7 @@

 # Adding 10k rows

- vlog "Adding initial rows to database..."
-
- add_rows test 0 10000
-
- rows=`${MYSQL} ${MYSQL_ARGS} -Ns -e "SELECT COUNT(*) FROM test" \
-incremental_sample`
- if [ "$rows" != "10000" ]; then
- vlog "Failed to add initial rows"
- exit -1
- fi
-
- vlog "Initial rows added"
+ multi_row_insert incremental_sample.test \({1..10000},10000\)

 # Full backup

@@ -97,15 +69,15 @@
 number INT(11) DEFAULT NULL) ENGINE=INNODB \
 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=$page_size" incremental_sample

- add_rows test 10001 12500
- add_rows t2 10001 12500
+ multi_row_insert incremental_sample.test \({10001..12500},12500\)
+ multi_row_insert incremental_sample.t2 \({10001..12500},12500\)

 # Rotate bitmap file here and force checkpoint at the same time
 shutdown_server
 start_server

- add_rows test 12501 15000
- add_rows t2 12501 15000
+ multi_row_insert incremental_sample.test \({12501..15000},15000\)
+ multi_row_insert incremental_sample.t2 \({12501..15000},15000\)

 rows=`${MYSQL} ${MYSQL_ARGS} -Ns -e "SELECT COUNT(*) FROM test" \
 incremental_sample`
