Merge lp:~laurynas-biveinis/percona-xtrabackup/bug1044398-2.0 into lp:percona-xtrabackup/2.0

Proposed by Laurynas Biveinis
Status: Work in progress
Proposed branch: lp:~laurynas-biveinis/percona-xtrabackup/bug1044398-2.0
Merge into: lp:percona-xtrabackup/2.0
Prerequisite: lp:~sergei.glushchenko/percona-xtrabackup/xb20-bug1038127
Diff against target: 146 lines (+57/-12)
2 files modified
src/xtrabackup.c (+20/-12)
test/t/bug1044398.sh (+37/-0)
To merge this branch: bzr merge lp:~laurynas-biveinis/percona-xtrabackup/bug1044398-2.0
Reviewer         Review Type   Date Requested   Status
Alexey Kopytov   community                      Needs Fixing
Review via email: mp+123235@code.launchpad.net

Description of the change

Fix bug 1044398 (Handling of compressed tablespaces with compressed
page size == server page size broken)

The problem was that the incremental backup table metadata format,
after the fix for bug 932623, contained neither a compression flag nor a
compressed page size. Instead, it stored only a page size, and
compression was implied whenever page size < UNIV_PAGE_SIZE. This
assumption breaks when the compressed page size == UNIV_PAGE_SIZE.
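
As an illustration (not part of the proposal; the table names and values below
are made up), on a server with a 16K page size a tablespace compressed with
KEY_BLOCK_SIZE=16 and a plain uncompressed tablespace produced identical
pre-fix delta metadata:

  CREATE TABLE t_plain (a INT) ENGINE=InnoDB;
  CREATE TABLE t_comp (a INT) ENGINE=InnoDB
      ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=16;

and the pre-fix .ibd.meta for either table carried only:

  page_size = 16384
  space_id = <space id>

so on the prepare side there was no way to tell that t_comp needs a compressed
tablespace.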

Fixed by storing a new field zip_size in the metadata. Its
presence is optional for backwards compatibility: when it's absent,
XtraBackup reverts to the old logic.
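
With zip_size in place the two cases become distinguishable. A rough sketch of
the new .meta contents for the hypothetical tables above (field order as in
the snprintf change in the diff below; zip_size = 0 means uncompressed):

  t_plain (uncompressed):
      page_size = 16384
      zip_size = 0
      space_id = <space id>

  t_comp (compressed, KEY_BLOCK_SIZE=16):
      page_size = 16384
      zip_size = 16384
      space_id = <space id>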

Added new test bug1044398 for testing this backwards compatibility.

Jenkins: http://jenkins.percona.com/job/percona-xtrabackup-2.0-param/253/

Revision history for this message
Alexey Kopytov (akopytov) wrote :

Why does the test case create an uncompressed table with the 16K page size? As I understand it, a compressed 16K page size was the broken case?

review: Needs Information
Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

Alexey - yes, but this test checks compatibility with metadata that lacks zip_size, and the 16K case is tested to see that it is correctly treated as uncompressed.

Revision history for this message
Alexey Kopytov (akopytov) wrote :

Shouldn't we then have a test case that tests the case from the bug report, i.e. a compressed 16K page?

Revision history for this message
Alexey Kopytov (akopytov) wrote :

I also don't think we should test _all_ possible page sizes just for compatibility with 2.0.2. Just one 16K page size with no zip_size in metadata should be enough.

review: Needs Fixing
Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

Alexey - xb_incremental_compressed tests this case. At this code revision the bug simply does not cause anything wrong, since tablespace creation is a dumb byte copy anyway. However, in my follow-up MP, which introduces fil_create_new_single_file_tablespace()-like code, xb_incremental_compressed starts failing if the current fix is not applied.

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

I believe we must test at least one < 16K case for backwards compatibility. Let's say 1K and 16K then?

Revision history for this message
Alexey Kopytov (akopytov) wrote :

xb_incremental_compressed might be changed at some point, so having an explicit test case covering the specific case from the bug is always a good thing. Naturally, there are a lot of overlapping tests in the MySQL test suite exercising the same code paths.

Revision history for this message
Alexey Kopytov (akopytov) wrote :

On 07.09.12 17:40, Laurynas Biveinis wrote:
> I believe we must test at least one < 16K case for backwards compatibility. Let's say 1K and 16K then?
>

Was there ever a problem with 1K pages?

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

OK, I will make a feature test for a 16K compressed page. Then we'd have two tests: this feature test and the backwards compatibility test (they might live in the same .sh file, of course).

> > I believe we must test at least one < 16K case for backwards compatibility.
> > Let's say 1K and 16K then?
>
> Was there ever a problem with 1K pages?

No. It's defensive: if we introduced a regression where we started treating small-page_size, zip_size-less metadata as uncompressed, we'd never know without at least one such test. As this is purely a backward file-format compatibility test, I don't see a difference between this case and the 16K page case.

466. By Stewart Smith

merge fix for Bug #1038127: XtraBackup 2.0.2 is not backwards compatible

467. By Laurynas Biveinis

Fix bug 1044398 (Handling of compressed tablespaces with compressed
page size == server page size broken)

The problem was that incremental backup table metadata format,
after fixing bug 932623, did not contain a compression flag nor a
compressed page size. Instead, it stored only a page size, and
compression was implied whenever page size < UNIV_PAGE_SIZE. This
assumption breaks with compressed page size == UNIV_PAGE_SIZE.

Fixed by storing a new field zip_size in the metadata and using it for
tablespace creation instead of the assumptions.

Added new test bug1044398 to assert that zip_size absence in the delta
metadata is fatal.

Unmerged revisions

467. By Laurynas Biveinis


Preview Diff

=== modified file 'src/xtrabackup.c'
--- src/xtrabackup.c 2012-09-05 08:32:02 +0000
+++ src/xtrabackup.c 2012-09-11 10:20:52 +0000
@@ -655,6 +655,7 @@
 
 typedef struct {
         ulint   page_size;
+        ulint   zip_size;
         ulint   space_id;
 } xb_delta_info_t;
 
@@ -2855,6 +2856,7 @@
 
         /* set defaults */
         info->page_size = ULINT_UNDEFINED;
+        info->zip_size = ULINT_UNDEFINED;
         info->space_id = ULINT_UNDEFINED;
 
         fp = fopen(filepath, "r");
@@ -2867,6 +2869,8 @@
         if (fscanf(fp, "%50s = %50s\n", key, value) == 2) {
                 if (strcmp(key, "page_size") == 0) {
                         info->page_size = strtoul(value, NULL, 10);
+                } else if (strcmp(key, "zip_size") == 0) {
+                        info->zip_size = strtoul(value, NULL, 10);
                 } else if (strcmp(key, "space_id") == 0) {
                         info->space_id = strtoul(value, NULL, 10);
                 }
@@ -2879,6 +2883,10 @@
                 msg("xtrabackup: page_size is required in %s\n", filepath);
                 r = FALSE;
         }
+        if (info->zip_size == ULINT_UNDEFINED) {
+                msg("xtrabackup: zip_size is required in %s\n", filepath);
+                r = FALSE;
+        }
         if (info->space_id == ULINT_UNDEFINED) {
                 msg("xtrabackup: Warning: This backup was taken with XtraBackup 2.0.1 "
                     "or earlier, some DDL operations between full and incremental "
@@ -2904,8 +2912,10 @@
         MY_STAT         mystat;
 
         snprintf(buf, sizeof(buf),
-                 "page_size = %lu\nspace_id = %lu\n",
-                 info->page_size, info->space_id);
+                 "page_size = %lu\n"
+                 "zip_size = %lu\n"
+                 "space_id = %lu\n",
+                 info->page_size, info->zip_size, info->space_id);
         len = strlen(buf);
 
         mystat.st_size = len;
@@ -3103,14 +3113,12 @@
         byte*           incremental_buffer_base = NULL;
         ulint           page_size;
         ulint           page_size_shift;
-#ifdef INNODB_VERSION_SHORT
-        ulint           zip_size;
-#endif
         xb_delta_info_t info;
         datasink_t      *ds = ds_ctxt->datasink;
         ds_file_t       *dstfile = NULL;
 
         info.page_size = 0;
+        info.zip_size = 0;
         info.space_id = 0;
 
 #ifdef XTRADB_BASED
@@ -3257,11 +3265,11 @@
         page_size = UNIV_PAGE_SIZE;
         page_size_shift = UNIV_PAGE_SIZE_SHIFT;
 #else
-        zip_size = xb_get_zip_size(src_file);
-        if (zip_size == ULINT_UNDEFINED) {
+        info.zip_size = xb_get_zip_size(src_file);
+        if (info.zip_size == ULINT_UNDEFINED) {
                 goto skip;
-        } else if (zip_size) {
-                page_size = zip_size;
+        } else if (info.zip_size) {
+                page_size = info.zip_size;
                 page_size_shift = get_bit_shift(page_size);
                 msg("[%02u] %s is compressed with page size = "
                     "%lu bytes\n", thread_n, node->name, page_size);
@@ -3367,7 +3375,8 @@
 #ifndef INNODB_VERSION_SHORT
         if (buf_page_is_corrupted(page + chunk_offset))
 #else
-        if (buf_page_is_corrupted(page + chunk_offset, zip_size))
+        if (buf_page_is_corrupted(page + chunk_offset,
+                                  info.zip_size))
 #endif
         {
                 if (
@@ -5911,8 +5920,7 @@
         xb_file_set_nocache(src_file, src_path, "OPEN");
 
         dst_file = xb_delta_open_matching_space(
-                        dbname, space_name, info.space_id,
-                        info.page_size == UNIV_PAGE_SIZE ? 0 : info.page_size,
+                        dbname, space_name, info.space_id, info.zip_size,
                         dst_path, sizeof(dst_path), &success);
         if (!success) {
                 msg("xtrabackup: error: cannot open %s\n", dst_path);
=== added file 'test/t/bug1044398.sh'
--- test/t/bug1044398.sh 1970-01-01 00:00:00 +0000
+++ test/t/bug1044398.sh 2012-09-11 10:20:52 +0000
@@ -0,0 +1,37 @@
+# Test handling of pre-bug 1044398 incremental meta files: missing zip_size must be fatal.
+
+. inc/common.sh
+
+if [ -z "$INNODB_VERSION" ]; then
+    echo "Requires InnoDB plugin or XtraDB" >$SKIPPED_REASON
+    exit $SKIPPED_EXIT_CODE
+fi
+
+start_server --innodb_file_per_table
+
+vlog "Creating full backup"
+innobackupex --no-timestamp $topdir/full
+
+vlog "Creating new tablespace"
+run_cmd ${MYSQL} ${MYSQL_ARGS} -e \
+    "CREATE TABLE t1(a INT) ENGINE=InnoDB" test
+
+run_cmd ${MYSQL} ${MYSQL_ARGS} -e \
+    "INSERT INTO t1 VALUES (1)" test
+
+vlog "Creating incremental backup"
+innobackupex --incremental --no-timestamp \
+    --incremental-basedir=$topdir/full $topdir/inc
+
+# Remove the zip_size line from the .meta file to emulate the pre-fix format
+sed -ie '/zip_size/ d' $topdir/inc/test/t1.ibd.meta
+
+vlog "Preparing backup, applying log"
+innobackupex --apply-log --redo-only $topdir/full
+
+vlog "Preparing backup, applying delta, should fail with missing zip_size error"
+run_cmd_expect_failure $IB_BIN $IB_ARGS --apply-log --redo-only --incremental-dir=$topdir/inc $topdir/full
+
+grep -q "xtrabackup: zip_size is required " $OUTFILE
+
+stop_server
