Merge lp:~hrvojem/percona-xtrabackup/bug1153943-2.0 into lp:percona-xtrabackup/2.0

Proposed by Hrvoje Matijakovic on 2013-03-19
Status: Merged
Approved by: Alexey Kopytov on 2013-03-20
Approved revision: 517
Merged at revision: 521
Proposed branch: lp:~hrvojem/percona-xtrabackup/bug1153943-2.0
Merge into: lp:percona-xtrabackup/2.0
Diff against target: 135 lines (+38/-14)
5 files modified
doc/source/howtos/recipes_ibkx_stream.rst (+22/-8)
doc/source/innobackupex/importing_exporting_tables_ibk.rst (+4/-0)
doc/source/innobackupex/parallel_copy_ibk.rst (+1/-1)
doc/source/innobackupex/partial_backups_innobackupex.rst (+5/-3)
doc/source/xtrabackup_bin/creating_a_backup.rst (+6/-2)
To merge this branch: bzr merge lp:~hrvojem/percona-xtrabackup/bug1153943-2.0
Reviewer: Alexey Kopytov (community), requested 2013-03-19, approved 2013-03-20
Review via email: mp+154124@code.launchpad.net
Alexey Kopytov (akopytov):
review: Approve

Preview Diff

=== modified file 'doc/source/howtos/recipes_ibkx_stream.rst'
--- doc/source/howtos/recipes_ibkx_stream.rst 2012-04-21 07:12:28 +0000
+++ doc/source/howtos/recipes_ibkx_stream.rst 2013-03-19 15:30:30 +0000
@@ -12,37 +12,51 @@
 
 * Stream the backup into a tar archive named 'backup.tar' ::
 
-   innobackupex --stream=tar ./ > backup.tar
+   $ innobackupex --stream=tar ./ > backup.tar
 
 * The same, but compress it ::
 
-   innobackupex --stream=tar ./ | gzip - > backup.tar.gz
+   $ innobackupex --stream=tar ./ | gzip - > backup.tar.gz
 
 * Encrypt the backup ::
 
-   innobackupex --stream=tar . | gzip - | openssl des3 -salt -k "password" > backup.tar.gz.des3
+   $ innobackupex --stream=tar . | gzip - | openssl des3 -salt -k "password" > backup.tar.gz.des3
 
 * Send it to another server instead of storing it locally ::
 
-   innobackupex --stream=tar ./ | ssh user@desthost "cat - > /data/backups/backup.tar"
+   $ innobackupex --stream=tar ./ | ssh user@desthost "cat - > /data/backups/backup.tar"
 
 * The same thing can be done with ``netcat``. ::
 
   ## On the destination host:
-   nc -l 9999 | cat - > /data/backups/backup.tar
+   $ nc -l 9999 | cat - > /data/backups/backup.tar
   ## On the source host:
-   innobackupex --stream=tar ./ | nc desthost 9999
+   $ innobackupex --stream=tar ./ | nc desthost 9999
 
 * The same thing, but done as a one-liner: ::
 
-   ssh user@desthost "( nc -l 9999 > /data/backups/backup.tar & )" \
+   $ ssh user@desthost "( nc -l 9999 > /data/backups/backup.tar & )" \
    && innobackupex --stream=tar ./ | nc desthost 9999
 
 * Throttling the throughput to 10MB/sec. This requires the 'pv' tool; you can find it at the `official site <http://www.ivarch.com/programs/quickref/pv.shtml>`_ or install it from the distribution package ("apt-get install pv") ::
 
-   innobackupex --stream=tar ./ | pv -q -L10m \
+   $ innobackupex --stream=tar ./ | pv -q -L10m \
    | ssh user@desthost "cat - > /data/backups/backup.tar"
 
+* Checksumming the backup during streaming ::
+
+   ## On the destination host:
+   $ nc -l 9999 | tee >(sha1sum > destination_checksum) > /data/backups/backup.tar
+   ## On the source host:
+   $ innobackupex --stream=tar ./ | tee >(sha1sum > source_checksum) | nc desthost 9999
+   ## Compare the checksums:
+   ## On the source host:
+   $ cat source_checksum
+   65e4f916a49c1f216e0887ce54cf59bf3934dbad -
+   ## On the destination host:
+   $ cat destination_checksum
+   65e4f916a49c1f216e0887ce54cf59bf3934dbad -
+
 Examples using the |xbstream| option for streaming:
 
 * Stream the backup into an xbstream archive named 'backup.xbstream' ::
 
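To complete the xbstream recipe above on the receiving end, extraction would look roughly like this (the paths and the use of xbstream's -x/-C flags are illustrative assumptions, not part of the diff):

   ## On the source host:
   $ innobackupex --stream=xbstream ./ > backup.xbstream
   ## Extract on the destination host:
   $ xbstream -x -C /data/backups/ < backup.xbstream
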
=== modified file 'doc/source/innobackupex/importing_exporting_tables_ibk.rst'
--- doc/source/innobackupex/importing_exporting_tables_ibk.rst 2011-07-28 05:29:04 +0000
+++ doc/source/innobackupex/importing_exporting_tables_ibk.rst 2013-03-19 15:30:30 +0000
@@ -27,6 +27,10 @@
 
 Each :term:`.exp` file will be used for importing that table.
 
+.. note::
+
+   InnoDB performs a slow shutdown (i.e. a full purge and change buffer merge) on --export; otherwise the tablespaces would not be consistent and thus could not be imported. All the usual performance considerations apply: a sufficiently large buffer pool (i.e. --use-memory, 100MB by default) and fast enough storage, otherwise the export can take a prohibitive amount of time to complete.
+
 Importing tables
 ================
 
 
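For context, a prepare-for-export run matching the note above might look like this (the path and the 4G value are illustrative; --use-memory raises the 100MB default):

   $ innobackupex --apply-log --export --use-memory=4G /data/backups/mysql/
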
=== modified file 'doc/source/innobackupex/parallel_copy_ibk.rst'
--- doc/source/innobackupex/parallel_copy_ibk.rst 2012-06-04 10:29:26 +0000
+++ doc/source/innobackupex/parallel_copy_ibk.rst 2013-03-19 15:30:30 +0000
@@ -4,7 +4,7 @@
 Accelerating with :option:`--parallel` copy and `--compress-threads`
 =====================================================================
 
-When performing a local backup, multiple files can be copied concurrently by using the :option:`--parallel` option. This option specifies the number of threads created by |xtrabackup| to copy data files.
+When performing a local backup or a streaming backup with the |xbstream| option, multiple files can be copied concurrently by using the :option:`--parallel` option. This option specifies the number of threads created by |xtrabackup| to copy data files.
 
 To take advantage of this option, either the multiple tablespaces option must be enabled (:term:`innodb_file_per_table`) or the shared tablespace must be stored in multiple :term:`ibdata` files with the :term:`innodb_data_file_path` option. Having multiple files for the database (or splitting one into many) doesn't have a measurable impact on performance.
 
 
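As a sketch of the options discussed above (thread counts and paths are arbitrary; the compressed stream assumes the xbstream format, since tar streams cannot be copied in parallel):

   ## Local backup with four copy threads:
   $ innobackupex --parallel=4 /data/backups/
   ## Streaming backup with parallel copy and compression threads:
   $ innobackupex --stream=xbstream --parallel=4 --compress --compress-threads=4 ./ > backup.xbstream
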
=== modified file 'doc/source/innobackupex/partial_backups_innobackupex.rst'
--- doc/source/innobackupex/partial_backups_innobackupex.rst 2012-09-19 11:35:43 +0000
+++ doc/source/innobackupex/partial_backups_innobackupex.rst 2013-03-19 15:30:30 +0000
@@ -86,7 +86,9 @@
 
 Restoring should be done by :doc:`importing the tables <importing_exporting_tables_ibk>` in the partial backup to the server.
 
-Although it can be done by copying back the prepared backup to a "clean" :term:`datadir` (in that case, make sure of having included the ``mysql`` database)...
-
-
+It can also be done by copying back the prepared backup to a "clean" :term:`datadir` (in that case, make sure to have included the ``mysql`` database). The system database can be created with: ::
+
+   $ sudo mysql_install_db --user=mysql
+
+The latter method can later be used to create a dump of the specific table that needs to be restored.
 
 
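A minimal restore sequence along the lines of the new paragraph, assuming the usual innobackupex workflow and illustrative paths:

   ## Create a clean datadir with the system database:
   $ sudo mysql_install_db --user=mysql
   ## Copy the prepared backup back and fix ownership:
   $ innobackupex --copy-back /data/backups/mysql/
   $ sudo chown -R mysql:mysql /var/lib/mysql
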
=== modified file 'doc/source/xtrabackup_bin/creating_a_backup.rst'
--- doc/source/xtrabackup_bin/creating_a_backup.rst 2012-02-13 12:07:03 +0000
+++ doc/source/xtrabackup_bin/creating_a_backup.rst 2013-03-19 15:30:30 +0000
@@ -14,9 +14,9 @@
 
 An example command to perform a backup follows:
 
-.. code-block:: guess
+.. code-block:: bash
 
-   xtrabackup --backup --datadir=/var/lib/mysql/ --target-dir=/data/backups/mysql/
+   $ xtrabackup --backup --datadir=/var/lib/mysql/ --target-dir=/data/backups/mysql/
 
 This takes a backup of :file:`/var/lib/mysql` and stores it at :file:`/data/backups/mysql/`. If you specify a relative path, the target directory will be relative to the current directory.
 
@@ -39,6 +39,10 @@
 
   xtrabackup: Transaction log of lsn (<LSN>) to (<LSN>) was copied.
 
+.. note::
+
+   The log copying thread checks the transaction log every second to see whether any new log records were written that need to be copied, but it might not be able to keep up with the amount of writes going to the transaction logs, and it will hit an error if log records are overwritten before they can be read.
+
 After the backup is finished, the target directory will contain files such as the following, assuming you have a single InnoDB table :file:`test.tbl1` and you are using MySQL's :term:`innodb_file_per_table` option: ::
 
   /data/backups/mysql/ibdata1
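To carry the example above through to a restorable backup, the prepare step would follow (same illustrative target directory):

   $ xtrabackup --prepare --target-dir=/data/backups/mysql/
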
