Merge lp:~laurynas-biveinis/percona-xtrabackup/bug1044398-2.0 into lp:percona-xtrabackup/2.0

Proposed by Laurynas Biveinis
Status: Work in progress
Proposed branch: lp:~laurynas-biveinis/percona-xtrabackup/bug1044398-2.0
Merge into: lp:percona-xtrabackup/2.0
Prerequisite: lp:~sergei.glushchenko/percona-xtrabackup/xb20-bug1038127
Diff against target: 146 lines (+57/-12)
2 files modified
src/xtrabackup.c (+20/-12)
test/t/bug1044398.sh (+37/-0)
To merge this branch: bzr merge lp:~laurynas-biveinis/percona-xtrabackup/bug1044398-2.0
Reviewer: Alexey Kopytov (community), status: Needs Fixing
Review via email: mp+123235@code.launchpad.net

Description of the change

Fix bug 1044398 (Handling of compressed tablespaces with compressed
page size == server page size broken)

The problem was that incremental backup table metadata format,
after fixing bug 932623, did not contain a compression flag nor a
compressed page size. Instead, it stored only a page size, and
compression was implied whenever page size < UNIV_PAGE_SIZE. This
assumption breaks with compressed page size == UNIV_PAGE_SIZE.

Fixed by storing a new field, zip_size, in the metadata. Its
presence is optional for backwards compatibility: when it is absent,
XtraBackup reverts to the old logic.

Added a new test, bug1044398, to test this backwards compatibility.

Jenkins: http://jenkins.percona.com/job/percona-xtrabackup-2.0-param/253/

Revision history for this message
Alexey Kopytov (akopytov) wrote :

Why does the test case create an uncompressed table with the 16K page size? As I understand it, a compressed 16K page size was the broken case?

review: Needs Information
Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

Alexey - yes, but this test tests compatibility with metadata that lacks zip_size, and the 16K case is tested to check that it is correctly treated as uncompressed.

Revision history for this message
Alexey Kopytov (akopytov) wrote :

Shouldn't we then have a test case that tests the case from the bug report, i.e. a compressed 16K page?

Revision history for this message
Alexey Kopytov (akopytov) wrote :

I also don't think we should test _all_ possible page sizes just for compatibility with 2.0.2. Just the 16K page size with no zip_size in the metadata should be enough.

review: Needs Fixing
Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

Alexey - xb_incremental_compressed tests this case. At this code revision the bug simply does not cause anything wrong, since tablespace creation is a dumb byte copy anyway. However, in my follow-up MP, which introduces fil_create_new_single_file_tablespace()-like code, xb_incremental_compressed starts failing if the current fix is not applied.

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

I believe we must test at least one < 16K case for backwards compatibility. Let's say 1K and 16K then?

Revision history for this message
Alexey Kopytov (akopytov) wrote :

xb_incremental_compressed might be changed at some point, so having an explicit test case covering the specific case from a bug is always a good thing. Naturally, there are a lot of overlapping tests in the MySQL test suite testing the same code paths.

Revision history for this message
Alexey Kopytov (akopytov) wrote :

On 07.09.12 17:40, Laurynas Biveinis wrote:
> I believe we must test at least one < 16K case for backwards compatibility. Let's say 1K and 16K then?
>

Was there ever a problem with 1K pages?

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

OK, I will make a feature test for 16K compressed page. Then we'd have two tests: this feature test and the backwards compatibility test (might live in the same .sh file of course).

> > I believe we must test at least one < 16K case for backwards compatibility.
> Let's say 1K and 16K then?
> >
>
> Was there ever a problem with 1K pages?

No. It's defensive: if we introduced a regression where we started treating small-page_size, zip_size-less metadata as uncompressed, we'd never know without at least one such test. As this is purely a backward file-format compatibility test, I don't see a difference between this case and the 16K page case.
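The mechanics of such a compatibility test are simple: fabricate pre-2.0.3-style metadata by stripping the zip_size line from a new-format .meta file. A minimal sketch (GNU sed assumed; the real test edits the .ibd.meta file produced by an actual incremental backup):

```shell
# Build a new-format .meta file, then delete the zip_size line to
# simulate metadata written by XtraBackup 2.0.2 or earlier.
meta=$(mktemp)
cat > "$meta" <<EOF
page_size = 16384
zip_size = 16384
space_id = 41
EOF

sed -i '/^zip_size/d' "$meta"

cat "$meta"
rm -f "$meta"
```

Feeding the resulting file to the prepare step then exercises whichever backwards-compatibility path (fallback or fatal error) the code implements.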

466. By Stewart Smith

merge fix for Bug #1038127: XtraBackup 2.0.2 is not backwards compatible

467. By Laurynas Biveinis

Fix bug 1044398 (Handling of compressed tablespaces with compressed
page size == server page size broken)

The problem was that incremental backup table metadata format,
after fixing bug 932623, did not contain a compression flag nor a
compressed page size. Instead, it stored only a page size, and
compression was implied whenever page size < UNIV_PAGE_SIZE. This
assumption breaks with compressed page size == UNIV_PAGE_SIZE.

Fixed by storing a new field zip_size in the metadata and using it for
tablespace creation instead of the assumptions.

Added new test bug1044398 to assert that zip_size absence in the delta
metadata is fatal.

Unmerged revisions

467. By Laurynas Biveinis

(commit message of revision 467, quoted above)

Preview Diff

=== modified file 'src/xtrabackup.c'
--- src/xtrabackup.c	2012-09-05 08:32:02 +0000
+++ src/xtrabackup.c	2012-09-11 10:20:52 +0000
@@ -655,6 +655,7 @@
 
 typedef struct {
 	ulint	page_size;
+	ulint	zip_size;
 	ulint	space_id;
 } xb_delta_info_t;
 
@@ -2855,6 +2856,7 @@
 
 	/* set defaults */
 	info->page_size = ULINT_UNDEFINED;
+	info->zip_size = ULINT_UNDEFINED;
 	info->space_id = ULINT_UNDEFINED;
 
 	fp = fopen(filepath, "r");
@@ -2867,6 +2869,8 @@
 		if (fscanf(fp, "%50s = %50s\n", key, value) == 2) {
 			if (strcmp(key, "page_size") == 0) {
 				info->page_size = strtoul(value, NULL, 10);
+			} else if (strcmp(key, "zip_size") == 0) {
+				info->zip_size = strtoul(value, NULL, 10);
 			} else if (strcmp(key, "space_id") == 0) {
 				info->space_id = strtoul(value, NULL, 10);
 			}
@@ -2879,6 +2883,10 @@
 		msg("xtrabackup: page_size is required in %s\n", filepath);
 		r = FALSE;
 	}
+	if (info->zip_size == ULINT_UNDEFINED) {
+		msg("xtrabackup: zip_size is required in %s\n", filepath);
+		r = FALSE;
+	}
 	if (info->space_id == ULINT_UNDEFINED) {
 		msg("xtrabackup: Warning: This backup was taken with XtraBackup 2.0.1 "
 		    "or earlier, some DDL operations between full and incremental "
@@ -2904,8 +2912,10 @@
 	MY_STAT	mystat;
 
 	snprintf(buf, sizeof(buf),
-		 "page_size = %lu\nspace_id = %lu\n",
-		 info->page_size, info->space_id);
+		 "page_size = %lu\n"
+		 "zip_size = %lu\n"
+		 "space_id = %lu\n",
+		 info->page_size, info->zip_size, info->space_id);
 	len = strlen(buf);
 
 	mystat.st_size = len;
@@ -3103,14 +3113,12 @@
 	byte*	incremental_buffer_base = NULL;
 	ulint	page_size;
 	ulint	page_size_shift;
-#ifdef INNODB_VERSION_SHORT
-	ulint	zip_size;
-#endif
 	xb_delta_info_t	info;
 	datasink_t	*ds = ds_ctxt->datasink;
 	ds_file_t	*dstfile = NULL;
 
 	info.page_size = 0;
+	info.zip_size = 0;
 	info.space_id = 0;
 
 #ifdef XTRADB_BASED
@@ -3257,11 +3265,11 @@
 	page_size = UNIV_PAGE_SIZE;
 	page_size_shift = UNIV_PAGE_SIZE_SHIFT;
 #else
-	zip_size = xb_get_zip_size(src_file);
-	if (zip_size == ULINT_UNDEFINED) {
+	info.zip_size = xb_get_zip_size(src_file);
+	if (info.zip_size == ULINT_UNDEFINED) {
 		goto skip;
-	} else if (zip_size) {
-		page_size = zip_size;
+	} else if (info.zip_size) {
+		page_size = info.zip_size;
 		page_size_shift = get_bit_shift(page_size);
 		msg("[%02u] %s is compressed with page size = "
 		    "%lu bytes\n", thread_n, node->name, page_size);
@@ -3367,7 +3375,8 @@
 #ifndef INNODB_VERSION_SHORT
 	if (buf_page_is_corrupted(page + chunk_offset))
 #else
-	if (buf_page_is_corrupted(page + chunk_offset, zip_size))
+	if (buf_page_is_corrupted(page + chunk_offset,
+				  info.zip_size))
 #endif
 	{
 		if (
@@ -5911,8 +5920,7 @@
 	xb_file_set_nocache(src_file, src_path, "OPEN");
 
 	dst_file = xb_delta_open_matching_space(
-			dbname, space_name, info.space_id,
-			info.page_size == UNIV_PAGE_SIZE ? 0 : info.page_size,
+			dbname, space_name, info.space_id, info.zip_size,
 			dst_path, sizeof(dst_path), &success);
 	if (!success) {
 		msg("xtrabackup: error: cannot open %s\n", dst_path);
 
=== added file 'test/t/bug1044398.sh'
--- test/t/bug1044398.sh	1970-01-01 00:00:00 +0000
+++ test/t/bug1044398.sh	2012-09-11 10:20:52 +0000
@@ -0,0 +1,37 @@
+# Test compatibility for pre-bug 1044398 incremental meta files.
+
+. inc/common.sh
+
+if [ -z "$INNODB_VERSION" ]; then
+    echo "Requires InnoDB plugin or XtraDB" >$SKIPPED_REASON
+    exit $SKIPPED_EXIT_CODE
+fi
+
+start_server --innodb_file_per_table
+
+vlog "Creating full backup"
+innobackupex --no-timestamp $topdir/full
+
+vlog "Creating new tablespace"
+run_cmd ${MYSQL} ${MYSQL_ARGS} -e \
+    "CREATE TABLE t1(a INT) ENGINE=InnoDB" test
+
+run_cmd ${MYSQL} ${MYSQL_ARGS} -e \
+    "INSERT INTO t1 VALUES (1)" test
+
+vlog "Creating incremental backup"
+innobackupex --incremental --no-timestamp \
+    --incremental-basedir=$topdir/full $topdir/inc
+
+# Remove zip_size = $page_size line from .meta file
+sed -ie '/zip_size/ d' $topdir/inc/test/t1.ibd.meta
+
+vlog "Preparing backup, applying log"
+innobackupex --apply-log --redo-only $topdir/full
+
+vlog "Preparing backup, applying delta, should fail with missing zip_size error"
+run_cmd_expect_failure $IB_BIN $IB_ARGS --apply-log --redo-only --incremental-dir=$topdir/inc $topdir/full
+
+grep -q "xtrabackup: zip_size is required " $OUTFILE
+
+stop_server
