Merge lp:~hrvojem/percona-xtradb-cluster/bug919065 into lp:~percona-dev/percona-xtradb-cluster/5.5.17-22.1

Proposed by Hrvoje Matijakovic
Status: Merged
Approved by: Vadim Tkachenko
Approved revision: no longer in the source branch.
Merged at revision: 3696
Proposed branch: lp:~hrvojem/percona-xtradb-cluster/bug919065
Merge into: lp:~percona-dev/percona-xtradb-cluster/5.5.17-22.1
Diff against target: 82 lines (+37/-13)
3 files modified
doc/source/features/highavailability.rst (+7/-13)
doc/source/features/multimaster-replication.rst (+29/-0)
doc/source/index.rst (+1/-0)
To merge this branch: bzr merge lp:~hrvojem/percona-xtradb-cluster/bug919065
Reviewer: Vadim Tkachenko
Status: Approve
Review via email: mp+89443@code.launchpad.net
Revision history for this message
Vadim Tkachenko (vadim-tk) wrote :

good to go

review: Approve

Preview Diff

=== added file 'doc/source/_static/XtraDBClusterUML1.png'
Binary files doc/source/_static/XtraDBClusterUML1.png 1970-01-01 00:00:00 +0000 and doc/source/_static/XtraDBClusterUML1.png 2012-01-20 14:40:33 +0000 differ
=== modified file 'doc/source/features/highavailability.rst'
--- doc/source/features/highavailability.rst 2012-01-18 14:14:48 +0000
+++ doc/source/features/highavailability.rst 2012-01-20 14:40:33 +0000
@@ -1,23 +1,17 @@
 High Availability
 =================

-Basic setup: you run 3-nodes setup.
-
-The |Percona XtraDB Cluster| will continue to function when you take any of nodes down.
+In a basic setup with 3 nodes, the |Percona XtraDB Cluster| will continue to function if you take any of the nodes down.
 At any point in time you can shutdown any Node to perform maintenance or make
 configuration changes. Even in unplanned situations like Node crash or if it
-becomes network unavailable the Cluster will continue to work, and you'll be able
+becomes unavailable over the network, the Cluster will continue to work and you'll be able
 to run queries on working nodes.

-The biggest question there, what will happen when the Node joins the cluster back, and
-there were changes to data while the node was down. Let's focus on this with details.
-
-There is two ways that Node may use when it joins the cluster: State Snapshot Transfer
- (SST) and Incremental State Transfer (IST).
-
-* SST is the full copy if data from one node to another. It's used when new node joins the cluster, it has to transfer data from existing node. There are three methods of SST available in Percona XtraDB Cluster: :program:`mysqldump`, :program:`rsync` and :program:`xtrabackup` (Percona |XtraBackup| with support of XtraDB Cluster will be released soon, so far you need to use our `source code repository <http://www.percona.com/doc/percona-xtrabackup/installation/compiling_xtrabackup.html>`_). The downside of `mysqldump` and `rsync` is that your cluster becomes READ-ONLY for time that takes to copy data from one node to another (SST applies :command:`FLUSH TABLES WITH READ LOCK` command). Xtrabackup SST does not require :command:`READ LOCK` for full time, only for syncing |.FRM| files (the same as with regular backup).
-
-* Even with that, SST may be intrusive, that’s why there is IST mechanism. If you put your node down for short period of time and then start it, the node is able to fetch only changes made during period it was down.This is done using caching mechanism on nodes. Each node contains a cache, ring-buffer, (the size is configurable) of last N changes, and the node is able to transfer part of this cache. Obviously IST can be done only if amount of changes needed to transfer is less than N. If it exceeds N, then the joining node has to perform SST.
+In case there were changes to data while the node was down, there are two options that the Node may use when it joins the cluster: State Snapshot Transfer (SST) and Incremental State Transfer (IST).
+
+* SST is the full copy of data from one node to another. It's used when a new node joins the cluster and has to transfer data from an existing node. There are three methods of SST available in Percona XtraDB Cluster: :program:`mysqldump`, :program:`rsync` and :program:`xtrabackup` (Percona |XtraBackup| with support of XtraDB Cluster will be released soon, currently you need to use our `source code repository <http://www.percona.com/doc/percona-xtrabackup/installation/compiling_xtrabackup.html>`_). The downside of `mysqldump` and `rsync` is that your cluster becomes *READ-ONLY* while data is being copied from one node to another (SST applies the :command:`FLUSH TABLES WITH READ LOCK` command). Xtrabackup SST does not require :command:`READ LOCK` for the entire syncing process, only for syncing the |.FRM| files (the same as with a regular backup).
+
+* Even so, SST may be intrusive, which is why there is the IST mechanism. If you take your node down for a short period of time and then start it, the node is able to fetch only those changes made during the period it was down. This is done using a caching mechanism on the nodes. Each node contains a cache, a ring-buffer (the size is configurable), of the last N changes, and the node is able to transfer part of this cache. Obviously, IST can be done only if the amount of changes needed to transfer is less than N. If it exceeds N, then the joining node has to perform SST.

 You can monitor current state of Node by using

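As an illustration of the SST/IST description in the hunk above, a minimal configuration sketch for choosing the SST method and sizing the IST ring-buffer cache could look like the following. The gcache.size value is an assumption chosen for the example; size it so the cache covers the writes expected during your longest planned node downtime.

    [mysqld]
    # SST method used for full copies: mysqldump, rsync or xtrabackup
    wsrep_sst_method=xtrabackup
    # Galera ring-buffer (gcache) that serves IST; a node that misses
    # more changes than fit in this cache has to fall back to a full SST
    wsrep_provider_options="gcache.size=1G"

The current state of a node can be inspected through the standard wsrep status counters, for example: SHOW STATUS LIKE 'wsrep_local_state%';
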
=== added file 'doc/source/features/multimaster-replication.rst'
--- doc/source/features/multimaster-replication.rst 1970-01-01 00:00:00 +0000
+++ doc/source/features/multimaster-replication.rst 2012-01-20 14:40:33 +0000
@@ -0,0 +1,29 @@
+Multi-Master replication
+========================
+
+Multi-Master replication stands for the ability to write to any node in the cluster without worrying that it will eventually get out of sync, as regularly happens with standard MySQL replication if you imprudently write to the wrong server.
+This is a long-awaited feature, and demand for it has been growing for the last two years or more.
+
+With |Percona XtraDB Cluster| you can write to any node, and the Cluster guarantees consistency of writes. That is, the write is either committed on all the nodes or not committed at all.
+For simplicity, this diagram shows a two-node example, but the same logic applies with N nodes:
+
+.. image:: ../_static/XtraDBClusterUML1.png
+
+All queries are executed locally on the node, and there is special handling only on *COMMIT*. When the *COMMIT* is issued, the transaction has to pass certification on all the nodes. If it does not pass, you
+will receive an *ERROR* as a response to that query. After that, the transaction is applied on the local node.
+
+Response time of *COMMIT* consists of several parts:
+ * Network round-trip time,
+ * Certification time,
+ * Local applying time
+
+Please note that applying the transaction on remote nodes does not affect the response time of *COMMIT*,
+as it happens in the background after the certification response.
+
+There are two important consequences of this architecture:
+ * First: we can have several appliers working in parallel, which gives us true parallel replication. The slave can have many parallel threads, and this can be tuned by the variable :option:`wsrep_slave_threads`.
+ * Second: there might be a small period of time when the slave is out-of-sync with the master. This happens because the master may apply events faster than the slave. If you read from the slave, you may read data that has not been updated yet. You can see that from the diagram. However, this behavior can be changed by setting the variable :option:`wsrep_causal_reads=ON`. In this case, the read on the slave will wait until the event is applied (this, however, will increase the response time of the read). This gap between the slave and the master is the reason why this replication is called "virtually synchronous replication", and not true "synchronous replication".
+
+The described behavior of *COMMIT* also has a second serious implication. If you run write transactions on two different nodes, the cluster will use an `optimistic locking model <http://en.wikipedia.org/wiki/Optimistic_concurrency_control>`_. That means a transaction will not check for possible locking conflicts during the individual queries, but rather at the *COMMIT* stage, and you may get an *ERROR* response on *COMMIT*. This is mentioned because it is one of the incompatibilities with regular |InnoDB| that you might experience. In InnoDB, *DEADLOCK* and *LOCK TIMEOUT* errors usually happen in response to a particular query, but not on *COMMIT*. It's good practice to check the error codes after a *COMMIT* query, but there are still many applications that do not do that.
+
+If you plan to use the Multi-Master capabilities of |XtraDB Cluster| and run write transactions on several nodes, you need to make sure you handle the response to the *COMMIT* query.

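To make the COMMIT-handling advice above concrete, here is a sketch of what a certification conflict looks like from the client side. In Galera-based clusters a transaction that fails certification typically surfaces as a deadlock error on *COMMIT*; the table and values below are made up for the example:

    -- Sessions on two different nodes update the same row concurrently:
    BEGIN;
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;
    COMMIT;
    -- The transaction that loses certification receives, on COMMIT:
    --   ERROR 1213 (40001): Deadlock found when trying to get lock
    -- The application should catch this error and retry the whole transaction.

The read-your-writes behavior mentioned in the hunk can likewise be enabled per session with SET SESSION wsrep_causal_reads=ON;
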
=== modified file 'doc/source/index.rst'
--- doc/source/index.rst 2012-01-18 14:14:48 +0000
+++ doc/source/index.rst 2012-01-20 14:40:33 +0000
@@ -55,6 +55,7 @@
 :glob:

 features/highavailability
+ features/multimaster-replication

 FAQ
 ===
