xtrabackup_56 crashes with segfault during --prepare

Bug #1291299 reported by Pavelp
Affects                                                              | Status       | Importance | Assigned to    | Milestone
MySQL Server                                                         | Unknown      | Unknown    |                |
Percona XtraBackup (moved to https://jira.percona.com/projects/PXB)  | Fix Released | High       | Alexey Kopytov |
  2.1                                                                | Fix Released | High       | Alexey Kopytov |
  2.2                                                                | Fix Released | High       | Alexey Kopytov |

Bug Description

innodb_version.............. 5.6.15-rel63.0
version..................... 5.6.15-56-log
version_comment............. Percona Server (GPL), Release rel63.0, Revision 519
version_compile_machine..... x86_64
version_compile_os.......... Linux

# xtrabackup --version
xtrabackup version 2.1.8 for Percona Server 5.1.73 unknown-linux-gnu (x86_64) (revision id: 733)

Steps:
[1]
# xtrabackup_56 --backup --target-dir=.
...
...
...
xtrabackup: The latest check point (for incremental): '1260343926214'
xtrabackup: Stopping log copying thread.
.>> log scanned up to (1260528996709)

xtrabackup: Transaction log of lsn (1259945473453) to (1260528996709) was copied.

[2]
# xtrabackup_56 --prepare --target-dir=. --use-memory=4G
xtrabackup_56 version 2.1.8 for MySQL server 5.6.15 Linux (x86_64) (revision id: 733)
xtrabackup: cd to /home/pavel/slv222/.
xtrabackup: This target seems to be not prepared yet.
xtrabackup: xtrabackup_logfile detected: size=656474112, start_lsn=(1259945473453)
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 656474112
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 656474112
xtrabackup: Starting InnoDB instance for recovery.
xtrabackup: Using 4294967296 bytes for buffer pool (set by --use-memory parameter)
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Compressed tables use zlib 1.2.3
InnoDB: Using CPU crc32 instructions
InnoDB: Initializing buffer pool, size = 4.0G
InnoDB: Completed initialization of buffer pool
InnoDB: Highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 1259945473453
InnoDB: Database was not shutdown normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages
InnoDB: from the doublewrite buffer...
InnoDB: Doing recovery: scanned up to log sequence number 1259950715904 (0%)
InnoDB: Doing recovery: scanned up to log sequence number 1259955958784 (1%)
InnoDB: Doing recovery: scanned up to log sequence number 1259961201664 (2%)
InnoDB: Doing recovery: scanned up to log sequence number 1259966444544 (3%)
InnoDB: Doing recovery: scanned up to log sequence number 1259971687424 (4%)
InnoDB: Doing recovery: scanned up to log sequence number 1259976930304 (5%)
InnoDB: Doing recovery: scanned up to log sequence number 1259982173184 (6%)
InnoDB: Doing recovery: scanned up to log sequence number 1259987416064 (7%)
InnoDB: Doing recovery: scanned up to log sequence number 1259992658944 (8%)
InnoDB: Doing recovery: scanned up to log sequence number 1259997901824 (8%)
InnoDB: Doing recovery: scanned up to log sequence number 1260003144704 (9%)
InnoDB: Doing recovery: scanned up to log sequence number 1260008387584 (10%)
InnoDB: Doing recovery: scanned up to log sequence number 1260013630464 (11%)
InnoDB: Doing recovery: scanned up to log sequence number 1260018873344 (12%)
InnoDB: Doing recovery: scanned up to log sequence number 1260024116224 (13%)
InnoDB: Doing recovery: scanned up to log sequence number 1260029359104 (14%)
InnoDB: Doing recovery: scanned up to log sequence number 1260034601984 (15%)
InnoDB: Doing recovery: scanned up to log sequence number 1260039844864 (16%)
InnoDB: Doing recovery: scanned up to log sequence number 1260045087744 (17%)
InnoDB: Doing recovery: scanned up to log sequence number 1260050330624 (17%)
InnoDB: Doing recovery: scanned up to log sequence number 1260055573504 (18%)
InnoDB: Doing recovery: scanned up to log sequence number 1260060816384 (19%)
InnoDB: Doing recovery: scanned up to log sequence number 1260066059264 (20%)
InnoDB: Doing recovery: scanned up to log sequence number 1260071302144 (21%)
InnoDB: Doing recovery: scanned up to log sequence number 1260076545024 (22%)
InnoDB: Doing recovery: scanned up to log sequence number 1260081787904 (23%)
InnoDB: Doing recovery: scanned up to log sequence number 1260087030784 (24%)
InnoDB: Doing recovery: scanned up to log sequence number 1260092273664 (25%)
InnoDB: Doing recovery: scanned up to log sequence number 1260097516544 (26%)
InnoDB: Doing recovery: scanned up to log sequence number 1260102759424 (26%)
InnoDB: Doing recovery: scanned up to log sequence number 1260108002304 (27%)
InnoDB: Doing recovery: scanned up to log sequence number 1260113245184 (28%)
InnoDB: Doing recovery: scanned up to log sequence number 1260118488064 (29%)
InnoDB: Doing recovery: scanned up to log sequence number 1260123730944 (30%)
InnoDB: Doing recovery: scanned up to log sequence number 1260128973824 (31%)
InnoDB: Doing recovery: scanned up to log sequence number 1260134216704 (32%)
InnoDB: Doing recovery: scanned up to log sequence number 1260139459584 (33%)
InnoDB: Doing recovery: scanned up to log sequence number 1260144702464 (34%)
InnoDB: Doing recovery: scanned up to log sequence number 1260149945344 (35%)
Segmentation fault (core dumped)

[3]
# gdb xtrabackup_56 core.11437
(gdb) bt
#0 0x00007fe95cfb2670 in __strrchr_sse42 () from /lib64/libc.so.6
#1 0x00000000005cae90 in strrchr (data_dir_path=0x0, tablename=0x7fe842beda61 "berg/#sql-7781_ff0ad", extention=0x9eb461 "ibd") at /usr/include/string.h:251
#2 os_file_make_remote_pathname (data_dir_path=0x0, tablename=0x7fe842beda61 "berg/#sql-7781_ff0ad", extention=0x9eb461 "ibd")
    at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/os/os0file.cc:3277
#3 0x00000000004d72c3 in fil_create_new_single_table_tablespace (space_id=4781, tablename=0x7fe842beda61 "berg/#sql-7781_ff0ad", dir_path=<value optimized out>, flags=1024,
    flags2=<value optimized out>, size=4) at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/fil/fil0fil.cc:3369
#4 0x00000000004d7a89 in fil_op_log_parse_or_replay (ptr=0x7fe842beda76 "\035\222\255\001\033\222\255\001\037\035\222\255", end_ptr=<value optimized out>, type=<value optimized out>,
    space_id=4781, log_flags=<value optimized out>) at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/fil/fil0fil.cc:2406
#5 0x0000000000490b9b in recv_parse_log_recs (available_memory=4278190080, store_to_hash=1, buf=<value optimized out>, len=<value optimized out>, start_lsn=<value optimized out>,
    contiguous_lsn=<value optimized out>, group_scanned_lsn=0x7fff3978f918) at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/log/log0recv.cc:2508
#6 recv_scan_log_recs (available_memory=4278190080, store_to_hash=1, buf=<value optimized out>, len=<value optimized out>, start_lsn=<value optimized out>,
    contiguous_lsn=<value optimized out>, group_scanned_lsn=0x7fff3978f918) at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/log/log0recv.cc:2963
#7 0x0000000000491884 in recv_group_scan_log_recs (type=78656949, limit_lsn=18446744073709551615, min_flushed_lsn=1222941760714, max_flushed_lsn=1222941760714)
    at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/log/log0recv.cc:3023
#8 recv_recovery_from_checkpoint_start_func (type=78656949, limit_lsn=18446744073709551615, min_flushed_lsn=1222941760714, max_flushed_lsn=1222941760714)
    at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/log/log0recv.cc:3315
#9 0x00000000004bf0fb in innobase_start_or_create_for_mysql () at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/srv/srv0start.cc:2363
#10 0x00000000004553ab in innodb_init () at xtrabackup.cc:1438
#11 xtrabackup_prepare_func () at xtrabackup.cc:5455
#12 0x000000000045777b in main (argc=0, argv=0x1fa0ed0) at xtrabackup.cc:6035

I use the MySQL 5.6 "DATA DIRECTORY" feature, but I don't know whether it causes the bug or not.
Feel free to ask if you need any further information.
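
For context on the crash mechanism: frame #1 of the backtrace shows strrchr() being entered with data_dir_path == 0x0 while an MLOG_FILE_CREATE2 record for a remote tablespace is replayed. The standalone sketch below (simplified names, not InnoDB source) shows why that call faults:

// Standalone illustration only; a simplified stand-in for
// os_file_make_remote_pathname(), not the real InnoDB code.
#include <cstring>

static const char *make_remote_pathname(const char *data_dir_path,
                                        const char *tablename,
                                        const char *extension) {
    // Building "<data_dir_path>/<tablename>.<extension>" starts by locating
    // the last path separator; with data_dir_path == NULL this strrchr()
    // dereferences a NULL pointer, which is the fault seen in
    // __strrchr_sse42 in frame #0.
    const char *last_slash = std::strrchr(data_dir_path, '/');
    (void)tablename;
    (void)extension;
    return last_slash;
}

int main() {
    // During --prepare xtrabackup has no remote data directory for the
    // tablespace named in the MLOG_FILE_CREATE2 record, hence the NULL.
    make_remote_pathname(nullptr, "berg/#sql-7781_ff0ad", "ibd");  // segfaults
    return 0;
}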

Revision history for this message
Alexey Kopytov (akopytov) wrote :

Thanks for the stacktrace!

It appears to be a problem in the InnoDB code which I have reported as http://bugs.mysql.com/bug.php?id=72022

The fix is rather simple, so I hope we can provide one in XtraBackup without waiting for the upstream fix.

tags: added: low-hanging-fruit
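
The thread does not include the patch itself. Purely as an illustrative sketch of the kind of guard that avoids the NULL dereference seen in frames #0-#2 (simplified names; the actual XtraBackup/MySQL fix may differ), a defensive check in the path-building helper could look like this:

// Illustrative sketch only, not the actual XtraBackup/MySQL patch: a
// simplified stand-in for os_file_make_remote_pathname() that refuses to
// build a remote path when no data directory is known, instead of calling
// strrchr() on a NULL pointer.
#include <cstdio>
#include <string>

// Returns an empty string when the remote path cannot be built.
static std::string make_remote_pathname(const char *data_dir_path,
                                        const char *tablename,
                                        const char *extension) {
    // Defensive guard: during --prepare the remote data directory for the
    // tablespace named in an MLOG_FILE_CREATE2 record may be unknown (NULL).
    if (data_dir_path == nullptr || tablename == nullptr ||
        extension == nullptr) {
        return std::string();
    }
    return std::string(data_dir_path) + "/" + tablename + "." + extension;
}

int main() {
    // With the guard a NULL data directory fails cleanly instead of
    // crashing inside strrchr().
    std::string p = make_remote_pathname(nullptr, "berg/#sql-7781_ff0ad", "ibd");
    std::printf("remote pathname: %s\n",
                p.empty() ? "(unavailable)" : p.c_str());
    return 0;
}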
Revision history for this message
Pavelp (2pavelbox) wrote :

Thank you!
Are there any workarounds for this bug until a fix is provided?

Revision history for this message
Alexey Kopytov (akopytov) wrote :

The workaround is to avoid creating tables with the DATA DIRECTORY option, or running ALTER TABLE against such tables, while the backup is being taken. That way there will be no MLOG_FILE_CREATE2 records for remote tablespaces in xtrabackup_logfile.

Revision history for this message
Pavelp (2pavelbox) wrote :

It seems that "ALTER TABLE ... DATA DIRECTORY" has no effect:
"Plan the location in advance, because you cannot use the DATA DIRECTORY clause with the ALTER TABLE statement."
https://dev.mysql.com/doc/refman/5.6/en/tablespace-placing.html

Revision history for this message
Alexey Kopytov (akopytov) wrote :

That's correct, but just to be clear about the workaround I suggested: you can still have tables created with the DATA DIRECTORY option; just make sure they are not CREATEd or ALTERed during the backup.

Revision history for this message
Pavelp (2pavelbox) wrote :

I have not ALTERed or CREATEd such tables during the backup.

The backup takes a long time, so there is a small possibility that some user or script did such things. But:
1. I reproduced the bug every time I took a backup. (It is hard to believe that someone ALTERed or CREATEd tables each time without my knowing about it.)
2. I made a partial backup of a small table ( http://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/partial_backups_innobackupex.html ). It took only a short time, and during that time no tables were ALTERed or CREATEd.

bt for case 2:
#0 0x00007f002ae74670 in __strrchr_sse42 () from /lib64/libc.so.6
#1 0x00000000005cae90 in strrchr (data_dir_path=0x0, tablename=0x7f00191a7440 "berg/#sql-7781_232117", extention=0x9eb461 "ibd") at /usr/include/string.h:251
#2 os_file_make_remote_pathname (data_dir_path=0x0, tablename=0x7f00191a7440 "berg/#sql-7781_232117", extention=0x9eb461 "ibd")
    at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/os/os0file.cc:3277
#3 0x00000000004d72c3 in fil_create_new_single_table_tablespace (space_id=4964, tablename=0x7f00191a7440 "berg/#sql-7781_232117", dir_path=<value optimized out>,
    flags=1024, flags2=<value optimized out>, size=4) at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/fil/fil0fil.cc:3369
#4 0x00000000004d7a89 in fil_op_log_parse_or_replay (ptr=0x7f00191a7456 "\035\223d\001\033\223d\001\037\035\223d", end_ptr=<value optimized out>,
    type=<value optimized out>, space_id=4964, log_flags=<value optimized out>)
    at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/fil/fil0fil.cc:2406
#5 0x0000000000490b9b in recv_parse_log_recs (available_memory=88080384, store_to_hash=1, buf=<value optimized out>, len=<value optimized out>,
    start_lsn=<value optimized out>, contiguous_lsn=<value optimized out>, group_scanned_lsn=0x7fff3d9f9fd8)
    at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/log/log0recv.cc:2508
#6 recv_scan_log_recs (available_memory=88080384, store_to_hash=1, buf=<value optimized out>, len=<value optimized out>, start_lsn=<value optimized out>,
    contiguous_lsn=<value optimized out>, group_scanned_lsn=0x7fff3d9f9fd8)
    at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/log/log0recv.cc:2963
#7 0x0000000000491884 in recv_group_scan_log_recs (type=78656949, limit_lsn=18446744073709551615, min_flushed_lsn=1222941760714, max_flushed_lsn=1222941760714)
    at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/log/log0recv.cc:3023
#8 recv_recovery_from_checkpoint_start_func (type=78656949, limit_lsn=18446744073709551615, min_flushed_lsn=1222941760714, max_flushed_lsn=1222941760714)
    at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/log/log0recv.cc:3315
#9 0x00000000004bf0fb in innobase_start_or_create_for_mysql () at /usr/src/debug/percona-xtrabackup-2.1.8/mysql-5.6/storage/innobase/srv/srv0start.cc:2363
#10 0x00000000004553ab in innodb_init () at xtrabackup.cc:1438
#11 xtrabackup_prepare_func () at xtrabackup.cc:5455
#12 0x000000000045777b in main (argc=0, argv=0x19c4350) at xtrabackup.cc:6035

Revision history for this message
Pavelp (2pavelbox) wrote :

Besides, in case 2 I made a backup of a table that has no "DATA DIRECTORY" option.

Revision history for this message
Alexey Kopytov (akopytov) wrote :

Somehow MLOG_FILE_CREATE2 is still in the redo log, and the only reason for that could be that a remote tablespace was either CREATEd or ALTERed while the backup was being taken.

Speaking of partial backups, XtraBackup does not filter the REDO log (though it could), so it still tries to apply all log records, no matter what tablespace they modify. It just ignores those that do not exist. However, "create tablespace" records are always applied.
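
To make that behaviour concrete, here is a conceptual sketch (not InnoDB source; record types and names are simplified) of the rule described above: ordinary records for tablespaces missing from the backup are ignored, but tablespace-create records are replayed unconditionally, which is how an MLOG_FILE_CREATE2 for a remote tablespace reaches the crashing code path even in a partial backup.

// Conceptual sketch only; the real redo log has many more record types and
// a far more involved dispatcher.
#include <cstdio>
#include <string>
#include <unordered_set>

enum class RecordType { FileCreate2, PageChange };

struct RedoRecord {
    RecordType  type;
    unsigned    space_id;
    std::string name;  // tablespace name, used by file operations
};

static void apply_record(const RedoRecord &rec,
                         const std::unordered_set<unsigned> &spaces_in_backup) {
    if (rec.type == RecordType::FileCreate2) {
        // "Create tablespace" records are replayed unconditionally,
        // regardless of which tables the partial backup contains.
        std::printf("replaying CREATE of %s (space %u)\n",
                    rec.name.c_str(), rec.space_id);
        return;
    }
    if (spaces_in_backup.count(rec.space_id) == 0) {
        // Ordinary records for tablespaces not present in the backup are
        // simply ignored.
        return;
    }
    std::printf("applying change to space %u\n", rec.space_id);
}

int main() {
    std::unordered_set<unsigned> spaces_in_backup = {1, 42};
    apply_record({RecordType::PageChange, 4781, ""}, spaces_in_backup);   // ignored
    apply_record({RecordType::FileCreate2, 4781, "berg/#sql-7781_ff0ad"},
                 spaces_in_backup);                                       // replayed
    return 0;
}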

Revision history for this message
Pavelp (2pavelbox) wrote :

I understand this, and I used a partial backup to minimize the time the backup takes.
Is there a way to make sure that an ALTER or CREATE is (or is not) present in the redo log copied by XtraBackup? Can the mysqlbinlog command help?

Revision history for this message
Shahriyar Rzayev (rzayev-sehriyar) wrote :

Percona now uses JIRA for bug reports, so this bug report has been migrated to: https://jira.percona.com/browse/PXB-402
