Merge lp:~hrvojem/percona-xtradb-cluster/bug918060 into lp:~percona-dev/percona-xtradb-cluster/5.5.17-22.1

Proposed by Hrvoje Matijakovic
Status: Merged
Approved by: Vadim Tkachenko
Approved revision: no longer in the source branch.
Merged at revision: 3693
Proposed branch: lp:~hrvojem/percona-xtradb-cluster/bug918060
Merge into: lp:~percona-dev/percona-xtradb-cluster/5.5.17-22.1
Diff against target: 748 lines (+195/-138)
15 files modified
doc/source/3nodesec2.rst (+14/-15)
doc/source/bugreport.rst (+4/-0)
doc/source/conf.py (+7/-1)
doc/source/faq.rst (+28/-33)
doc/source/features/highavailability.rst (+28/-0)
doc/source/glossary.rst (+13/-8)
doc/source/index.rst (+13/-3)
doc/source/installation.rst (+3/-3)
doc/source/installation/bin_distro_specific.rst (+1/-1)
doc/source/installation/compiling_xtradb_cluster.rst (+4/-3)
doc/source/installation/yum_repo.rst (+1/-1)
doc/source/intro.rst (+35/-15)
doc/source/limitation.rst (+15/-29)
doc/source/resources.rst (+14/-11)
doc/source/singlebox.rst (+15/-15)
To merge this branch: bzr merge lp:~hrvojem/percona-xtradb-cluster/bug918060
Reviewer Review Type Date Requested Status
Vadim Tkachenko Approve
Review via email: mp+89042@code.launchpad.net
Revision history for this message
Vadim Tkachenko (vadim-tk):
review: Approve

Preview Diff

=== modified file 'doc/source/3nodesec2.rst'
--- doc/source/3nodesec2.rst 2011-12-23 07:28:16 +0000
+++ doc/source/3nodesec2.rst 2012-01-18 14:24:25 +0000
@@ -1,39 +1,39 @@
 How to setup 3 node cluster in EC2 enviroment
 ==============================================
 
-This is how-to setup 3-node cluster in EC2 enviroment.
+This is how to setup 3-node cluster in EC2 enviroment.
 
 Assume you are running *m1.xlarge* instances with OS *Red Hat Enterprise Linux 6.1 64-bit*.
 
 Install XtraDB Cluster from RPM:
 
-1. Install Percona's regular and testing repositories ::
+1. Install Percona's regular and testing repositories: ::
 
    rpm -Uhv http://repo.percona.com/testing/centos/6/os/noarch/percona-testing-0.0-1.noarch.rpm
    rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
 
-2. Install Percona XtraDB Cluster packages ::
+2. Install Percona XtraDB Cluster packages: ::
 
    yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client
 
-3. Create data directories ::
+3. Create data directories: ::
 
    mkdir -p /mnt/data
    mysql_install_db --datadir=/mnt/data
 
-4. Stop firewall. Cluster requires couple TCP ports to operate. Easiest way ::
+4. Stop firewall. Cluster requires couple TCP ports to operate. Easiest way: ::
 
    service iptables stop
 
 If you want to open only specific ports, you need to open 3306, 4444, 4567, 4568 ports.
-For example for 4567 port (substitute 192.168.0.1 by your IP) ::
+For example for 4567 port (substitute 192.168.0.1 by your IP): ::
 
    iptables -A INPUT -i eth0 -p tcp -m tcp --source 192.168.0.1/24 --dport 4567 -j ACCEPT
 
 
 5. Create /etc/my.cnf files.
 
-On the first node (assume IP 10.93.46.58) ::
+On the first node (assume IP 10.93.46.58): ::
 
    [mysqld]
    datadir=/mnt/data
@@ -54,7 +54,7 @@
    innodb_autoinc_lock_mode=2
 
 
-On the second node ::
+On the second node: ::
 
    [mysqld]
    datadir=/mnt/data
@@ -74,33 +74,32 @@
    innodb_locks_unsafe_for_binlog=1
    innodb_autoinc_lock_mode=2
 
-On the third (and following nodes) config is similar, with following change ::
+On the third (and following nodes) config is similar, with the following change: ::
 
    wsrep_node_name=node3
 
 6. Start mysqld
 
-On the first node ::
+On the first node: ::
 
    /usr/sbin/mysqld
    or
    mysqld_safe
 
-You should be able to see in console (or in error-log file) ::
+You should be able to see in console (or in error-log file): ::
 
    111216 0:16:42 [Note] /usr/sbin/mysqld: ready for connections.
    Version: '5.5.17' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona XtraDB Cluster (GPL), Release alpha22.1, Revision 3673 wsrep_22.3.r3673
    111216 0:16:42 [Note] WSREP: Assign initial position for certification: 0, protocol version: 1
    111216 0:16:42 [Note] WSREP: Synchronized with group, ready for connections
 
-On the second (and following nodes) ::
-
+On the second (and following nodes): ::
 
    /usr/sbin/mysqld
    or
    mysqld_safe
 
-You should be able to see in console (or in error-log file) ::
+You should be able to see in console (or in error-log file): ::
 
    111216 0:21:39 [Note] WSREP: Flow-control interval: [12, 23]
    111216 0:21:39 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 0)
@@ -141,7 +140,7 @@
 
 When all nodes are in SYNCED stage your cluster is ready!
 
-7. Connect to database on any node and create database ::
+7. Connect to database on any node and create database: ::
 
    mysql
    > CREATE DATABASE hello_tom;
 
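
The firewall step above shows the rule for port 4567 only. As a minimal sketch extending the same approach to all four ports the guide lists (3306, 4444, 4567, 4568), assuming the same eth0 interface and 192.168.0.0/24 subnet as the example:

   # Open every TCP port XtraDB Cluster needs, one iptables rule per port.
   # Interface and subnet come from the example above; adjust to your network.
   for PORT in 3306 4444 4567 4568; do
       iptables -A INPUT -i eth0 -p tcp -m tcp \
           --source 192.168.0.1/24 --dport $PORT -j ACCEPT
   done
   service iptables save   # persist the rules across reboots on RHEL/CentOS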
=== added file 'doc/source/_static/cluster-diagram1.png'
Binary files doc/source/_static/cluster-diagram1.png 1970-01-01 00:00:00 +0000 and doc/source/_static/cluster-diagram1.png 2012-01-18 14:24:25 +0000 differ
=== added file 'doc/source/bugreport.rst'
--- doc/source/bugreport.rst 1970-01-01 00:00:00 +0000
+++ doc/source/bugreport.rst 2012-01-18 14:24:25 +0000
@@ -0,0 +1,4 @@
+How to Report Bugs
+==================
+
+All bugs can be reported on `Launchpad <https://bugs.launchpad.net/percona-xtradb-cluster/+filebug>`_. Please note that error.log files from **all** the nodes need to be submitted.
=== modified file 'doc/source/conf.py'
--- doc/source/conf.py 2011-12-13 16:28:28 +0000
+++ doc/source/conf.py 2012-01-18 14:24:25 +0000
@@ -44,7 +44,7 @@
 
 # General information about the project.
 project = u'Percona XtraDB Cluster'
-copyright = u'2011, Percona Inc'
+copyright = u'2012, Percona Inc'
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
@@ -96,6 +96,8 @@
 
 .. |XtraDB| replace:: :term:`XtraDB`
 
+.. |IST| replace:: :term:`IST`
+
 .. |XtraDB Cluster| replace:: :term:`XtraDB Cluster`
 
 .. |Percona XtraDB Cluster| replace:: :term:`Percona XtraDB Cluster`
@@ -104,6 +106,10 @@
 
 .. |MyISAM| replace:: :term:`MyISAM`
 
+.. |split brain| replace:: :term:`split brain`
+
+.. |.frm| replace:: :term:`.frm`
+
 .. |LSN| replace:: :term:`LSN`
 
 .. |XtraBackup| replace:: *XtraBackup*
 
=== modified file 'doc/source/faq.rst'
--- doc/source/faq.rst 2012-01-14 17:24:09 +0000
+++ doc/source/faq.rst 2012-01-18 14:24:25 +0000
@@ -6,64 +6,65 @@
 ========================================================
 A: For auto-increment particular, Cluster changes auto_increment_offset
 for each new node.
-In the single node workload, locking handled by usual way how InnoDB handles locks.
+In the single node workload, locking handled by usual way how |InnoDB| handles locks.
 In case of write load on several nodes, Cluster uses optimistic locking (http://en.wikipedia.org/wiki/Optimistic_concurrency_control) and application may receive lock error in the response on COMMIT query.
 
 Q: What if one of the nodes crashes and innodb recovery roll back some transactions?
 =====================================================================================
 A: When the node crashes, after the restart it will copy whole dataset from another node
-(if there were changes to data since crash)
+(if there were changes to data since crash).
 
 Q: Is there a chance to have different table structure on the nodes?
 =====================================================================
-what I mean is like having 4nodes, 4 tables like sessions_a, sessions_b, sessions_c and sessions_d and have each only on one of the nodes?
+What I mean is like having 4 nodes, 4 tables like sessions_a, sessions_b, sessions_c and sessions_d and have each only on one of the nodes?
 
 A: Not at the moment for InnoDB tables. But it will work for MEMORY tables.
 
-Q: What if a node fail and/or what if there is a network issue between them
-============================================================================
+Q: What if a node fail and/or what if there is a network issue between them?
+=============================================================================
 A: Then Quorum mechanism in XtraDB Cluster will decide what nodes can accept traffic
 and will shutdown nodes that not belong to quorum. Later when the failure is fixed,
 the nodes will need to copy data from working cluster.
 
 Q: How would it handle split brain?
 ====================================
-A: It would not handle it. The |split brain| is hard stop, XtraDB Cluster can't resolve it.
+A: It would not handle it. The |split brain| is hard stop, |XtraDB Cluster| can't resolve it.
 That's why the minimal recommendation is to have 3 nodes.
 However there is possibility to allow a node to handle the traffic, option is: ::
 
    wsrep_provider_options="pc.ignore_sb = yes"
 
-Q: I have a two nodes setup. When node1 fails, node2 does not accept commands, why ?
-=====================================================================================
+Q: I have a two nodes setup. When node1 fails, node2 does not accept commands, why?
+====================================================================================
 A: This is expected behaviour, to prevent |split brain|. See previous question.
 
-Q: What tcp ports are used by Percona XtraDB Cluster ?
+Q: What tcp ports are used by Percona XtraDB Cluster?
 ======================================================
 A: You may need to open up to 4 ports if you are using firewall.
 
- * Regular MySQL port, default 3306
- * Port for group communication, default 4567. It can be changed by the option: ::
-
-   wsrep_provider_options = "gmcast.listen_addr=tcp://0.0.0.0:4010; "
-
- * Port for State Transfer, default 4444. It can be changed by the option: ::
-
-   wsrep_sst_receive_address=10.11.12.205:5555
-
- * Port for Incremental State Transfer, default port for group communication + 1 (4568). It can be changed by the option: ::
-
-   wsrep_provider_options = "ist.recv_addr=10.11.12.206:7777; "
-Q: Is there "async" mode for Cluster or only "sync" commits are supported ?
+1. Regular MySQL port, default 3306.
+
+2. Port for group communication, default 4567. It can be changed by the option: ::
+
+   wsrep_provider_options ="gmcast.listen_addr=tcp://0.0.0.0:4010; "
+
+3. Port for State Transfer, default 4444. It can be changed by the option: ::
+
+   wsrep_sst_receive_address=10.11.12.205:5555
+
+4. Port for Incremental State Transfer, default port for group communication + 1 (4568). It can be changed by the option: ::
+
+   wsrep_provider_options = "ist.recv_addr=10.11.12.206:7777; "
+
+Q: Is there "async" mode for Cluster or only "sync" commits are supported?
 ===========================================================================
-A: There is no "async" mode, all commits are syncronious on all nodes.
-Or, to be fully correct, the commits are "virtually" syncronious. Which
-means that transaction should pass "certification" on nodes, not physicall commit.
+A: There is no "async" mode, all commits are synchronous on all nodes.
+Or, to be fully correct, the commits are "virtually" synchronous. Which
+means that transaction should pass "certification" on nodes, not physical commit.
 "Certification" means a guarantee that transaction does not have conflicts with
 another transactions on corresponding node.
 
-Q: Does it work with regular MySQL replication ?
+Q: Does it work with regular MySQL replication?
 ================================================
 A: Yes. On the node you are going to use as master, you should enable log-bin and log-slave-update options.
 
@@ -73,9 +74,3 @@
 
    echo 0 > /selinux/enforce
 
-
-
-
-
-
-
 
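
The four ports in the FAQ answer above can also be pinned in one place in my.cnf. A minimal sketch reusing the illustrative addresses from that answer (note that gmcast.listen_addr and ist.recv_addr can share a single wsrep_provider_options string, separated by semicolons):

   # Sketch: all four cluster ports made explicit; the addresses are the
   # illustrative ones from the FAQ answer, not recommendations.
   cat >> /etc/my.cnf <<'EOF'
   [mysqld]
   port = 3306
   wsrep_provider_options = "gmcast.listen_addr=tcp://0.0.0.0:4010; ist.recv_addr=10.11.12.206:7777; "
   wsrep_sst_receive_address = 10.11.12.205:5555
   EOF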
=== added directory 'doc/source/features'
=== added file 'doc/source/features/highavailability.rst'
--- doc/source/features/highavailability.rst 1970-01-01 00:00:00 +0000
+++ doc/source/features/highavailability.rst 2012-01-18 14:24:25 +0000
@@ -0,0 +1,28 @@
+High Availability
+=================
+
+Basic setup: you run 3-nodes setup.
+
+The |Percona XtraDB Cluster| will continue to function when you take any of nodes down.
+At any point in time you can shutdown any Node to perform maintenance or make
+configuration changes. Even in unplanned situations like Node crash or if it
+becomes network unavailable the Cluster will continue to work, and you'll be able
+to run queries on working nodes.
+
+The biggest question there, what will happen when the Node joins the cluster back, and
+there were changes to data while the node was down. Let's focus on this with details.
+
+There is two ways that Node may use when it joins the cluster: State Snapshot Transfer
+(SST) and Incremental State Transfer (IST).
+
+* SST is the full copy if data from one node to another. It's used when new node joins the cluster, it has to transfer data from existing node. There are three methods of SST available in Percona XtraDB Cluster: :program:`mysqldump`, :program:`rsync` and :program:`xtrabackup` (Percona |XtraBackup| with support of XtraDB Cluster will be released soon, so far you need to use our `source code repository <http://www.percona.com/doc/percona-xtrabackup/installation/compiling_xtrabackup.html>`_). The downside of `mysqldump` and `rsync` is that your cluster becomes READ-ONLY for time that takes to copy data from one node to another (SST applies :command:`FLUSH TABLES WITH READ LOCK` command). Xtrabackup SST does not require :command:`READ LOCK` for full time, only for syncing |.FRM| files (the same as with regular backup).
+
+* Even with that, SST may be intrusive, that’s why there is IST mechanism. If you put your node down for short period of time and then start it, the node is able to fetch only changes made during period it was down. This is done using caching mechanism on nodes. Each node contains a cache, ring-buffer, (the size is configurable) of last N changes, and the node is able to transfer part of this cache. Obviously IST can be done only if amount of changes needed to transfer is less than N. If it exceeds N, then the joining node has to perform SST.
+
+You can monitor current state of Node by using
+
+.. code-block:: mysql
+
+   SHOW STATUS LIKE 'wsrep_local_state_comment';
+
+When it is ‘Synced (6)’, the node is ready to handle traffic.
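
The wsrep_local_state_comment check above is easy to script when automating maintenance. A sketch that polls the local node until it reports Synced (assuming a local client login that needs no extra credentials):

   # Poll the local node until it is Synced and safe to put back
   # behind the load balancer.
   until mysql -N -e "SHOW STATUS LIKE 'wsrep_local_state_comment'" | grep -q Synced; do
       sleep 5
   done
   echo "node is Synced, ready for traffic"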
=== modified file 'doc/source/glossary.rst'
--- doc/source/glossary.rst 2012-01-11 17:47:46 +0000
+++ doc/source/glossary.rst 2012-01-18 14:24:25 +0000
@@ -7,30 +7,35 @@
  LSN
   Each InnoDB page (usually 16kb in size) contains a log sequence number, or LSN. The LSN is the system version number for the entire database. Each page's LSN shows how recently it was changed.
 
-
  InnoDB
-  Storage engine which provides ACID-compilant transactions and foreing key support, among others improvements over :term:`MyISAM`. It is the default engine for |MySQL| as of the 5.5 series.
+  Storage engine which provides ACID-compliant transactions and foreign key support, among others improvements over :term:`MyISAM`. It is the default engine for |MySQL| as of the 5.5 series.
 
  MyISAM
   Previous default storage engine for |MySQL| for versions prior to 5.5. It doesn't fully support transactions but in some scenarios may be faster than :term:`InnoDB`. Each table is stored on disk in 3 files: :term:`.frm`, :term:`.MYD`, :term:`.MYI`
+
+ IST
+  Incremental State Transfer. Functionallity which instead of whole state snapshot can catch up with te group by receiving the missing writesets, but only if the writeset is still in the donor's writeset cache
+
+ XtraBackup
+  *Percona XtraBackup* is an open-source hot backup utility for |MySQL| - based servers that doesn’t lock your database during the backup.
 
  XtraDB
   *Percona XtraDB* is an enhanced version of the InnoDB storage engine, designed to better scale on modern hardware, and including a variety of other features useful in high performance environments. It is fully backwards compatible, and so can be used as a drop-in replacement for standard InnoDB. More information `here <http://www.percona.com/docs/wiki/Percona-XtraDB:start>`_ .
 
  XtraDB Cluster
-  *Percona XtraDB Cluster* is a high availablity solution for MySQL
+  *Percona XtraDB Cluster* is a high availability solution for MySQL
 
  Percona XtraDB Cluster
-  *Percona XtraDB Cluster* is a high availablity solution for MySQL
+  *Percona XtraDB Cluster* is a high availability solution for MySQL
 
  my.cnf
-  This file refers to the database server's main configuration file. Most linux distributions place it as :file:`/etc/mysql/my.cnf`, but the location and name depends on the particular installation. Note that this is not the only way of configuring the server, some systems does not have one even and rely on the command options to start the server and its defaults values.
+  This file refers to the database server's main configuration file. Most Linux distributions place it as :file:`/etc/mysql/my.cnf`, but the location and name depends on the particular installation. Note that this is not the only way of configuring the server, some systems does not have one even and rely on the command options to start the server and its defaults values.
 
  datadir
   The directory in which the database server stores its databases. Most Linux distribution use :file:`/var/lib/mysql` by default.
 
  ibdata
-  Default prefix for tablespace files, e.g. :file:`ibdata1` is a 10MB autoextensible file that |MySQL| creates for the shared tablespace by default.
+  Default prefix for tablespace files, e.g. :file:`ibdata1` is a 10MB autoextendable file that |MySQL| creates for the shared tablespace by default.
 
  innodb_file_per_table
   InnoDB option to use separate .ibd files for each table.
@@ -54,10 +59,10 @@
   Each table using the :program:`MERGE` storage engine, besides of a :term:`.frm` file, will have :term:`.MRG` file containing the names of the |MyISAM| tables associated with it.
 
  .TRG
-  File containing the TRiGgers associated to a table, e.g. `:file:`mytable.TRG`. With the :term:`.TRN` file, they represent all the Trigger definitions.
+  File containing the triggers associated to a table, e.g. `:file:`mytable.TRG`. With the :term:`.TRN` file, they represent all the trigger definitions.
 
  .TRN
-  File containing the TRiggers' Names associated to a table, e.g. `:file:`mytable.TRN`. With the :term:`.TRG` file, they represent all the trigger definitions.
+  File containing the triggers' Names associated to a table, e.g. `:file:`mytable.TRN`. With the :term:`.TRG` file, they represent all the trigger definitions.
 
  .ARM
   Each table with the :program:`Archive Storage Engine` has ``.ARM`` file which contains the metadata of it.
 
=== modified file 'doc/source/index.rst'
--- doc/source/index.rst 2012-01-11 16:46:28 +0000
+++ doc/source/index.rst 2012-01-18 14:24:25 +0000
@@ -12,13 +12,13 @@
 
  * Synchronous replication. Transaction either commited on all nodes or none.
 
  * Multi-master replication. You can write to any node.
 
  * Parallel applying events on slave. Real “parallel replication”.
 
  * Automatic node provisioning.
 
- * Data consistency. No more unsyncronised slaves.
+ * Data consistency. No more unsynchronized slaves.
 
 Percona XtraDB Cluster is fully compatible with MySQL or Percona Server in the following meaning:
 
@@ -35,6 +35,7 @@
    :glob:
 
    intro
+   resources
 
 Installation
 ============
@@ -46,6 +47,15 @@
    installation
    installation/compiling_xtradb_cluster
 
+Features
+========
+
+.. toctree::
+   :maxdepth: 1
+   :glob:
+
+   features/highavailability
+
 FAQ
 ===
 
@@ -65,7 +75,7 @@
 
    singlebox
    3nodesec2
-
+   bugreport
 
 Percona XtraDB Cluster limitations
 ==================================
 
=== modified file 'doc/source/installation.rst'
--- doc/source/installation.rst 2011-12-14 02:32:00 +0000
+++ doc/source/installation.rst 2012-01-18 14:24:25 +0000
@@ -61,9 +61,9 @@
 Install XtraBackup SST method
 ==============================
 
-To use Percona XtraBackup for State Transfer method (copy snapsot of data between nodes)
-you can use the regular xtrabackup package with the script what supports galera information.
-You can take *innobackupex* script from source code `innobackupex <http://bazaar.launchpad.net/~percona-dev/percona-xtrabackup/galera-info/view/head:/innobackupex>`_
+To use Percona XtraBackup for State Transfer method (copy snapshot of data between nodes)
+you can use the regular xtrabackup package with the script what supports Galera information.
+You can take *innobackupex* script from source code `innobackupex <http://bazaar.launchpad.net/~percona-dev/percona-xtrabackup/galera-info/view/head:/innobackupex>`_.
 
 To inform node to use xtrabackup you need to specify in my.cnf: ::
 
 
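
The hunk context ends before showing the my.cnf line it refers to. As an assumption based on the standard wsrep settings (not shown in this diff), the fragment would look roughly like this, with wsrep_sst_auth only needed if your SST script uses credentials:

   # Assumed my.cnf fragment; the diff stops before the actual line.
   wsrep_sst_method = xtrabackup
   wsrep_sst_auth   = root:yourpassword   # illustrative user:password pair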
=== modified file 'doc/source/installation/bin_distro_specific.rst'
--- doc/source/installation/bin_distro_specific.rst 2011-12-13 06:38:29 +0000
+++ doc/source/installation/bin_distro_specific.rst 2012-01-18 14:24:25 +0000
@@ -19,7 +19,7 @@
    cp ./usr/bin/tar4ibd /usr/bin
    cp ./usr/bin/innobackupex-1.5.1 /usr/bin
 
-* If you use a version prior to 1.6, the stock perl causes an issue with the backup scripts version detection. Edit :file:`/usr/bin/innobackupex-1.5.1`. Comment out the lines below as shown below ::
+* If you use a version prior to 1.6, the stock perl causes an issue with the backup scripts version detection. Edit :file:`/usr/bin/innobackupex-1.5.1`. Comment out the lines below as shown below: ::
 
    $perl_version = chr($required_perl_version[0])
       . chr($required_perl_version[1])
 
=== modified file 'doc/source/installation/compiling_xtradb_cluster.rst'
--- doc/source/installation/compiling_xtradb_cluster.rst 2011-12-13 06:38:29 +0000
+++ doc/source/installation/compiling_xtradb_cluster.rst 2012-01-18 14:24:25 +0000
@@ -30,11 +30,12 @@
 Compiling
 ------------
 
-The most esiest way to build binaries is to run
-script BUILD/compile-pentium64-wsrep
+The most esiest way to build binaries is to run script: ::
+
+   BUILD/compile-pentium64-wsrep
 
 If you feel confident to use cmake, you make compile with cmake adding
 -DWITH_WSREP=1 to parameters.
 
-Exampes how to build RPM and DEB packages you can find in packaging/percona directory in the source code
+Examples how to build RPM and DEB packages you can find in packaging/percona directory in the source code.
 
 
=== modified file 'doc/source/installation/yum_repo.rst'
--- doc/source/installation/yum_repo.rst 2012-01-10 02:33:47 +0000
+++ doc/source/installation/yum_repo.rst 2012-01-18 14:24:25 +0000
@@ -13,7 +13,7 @@
 
    $ rpm -Uhv http://repo.percona.com/testing/centos/6/os/noarch/percona-testing-0.0-1.noarch.rpm
 
-You may want to install also Percona stable repository, which provides Percona-shared-compat rpm, needed to satisfy dependencies ::
+You may want to install also Percona stable repository, which provides Percona-shared-compat rpm, needed to satisfy dependencies: ::
 
    $ rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
 
 
=== modified file 'doc/source/intro.rst'
--- doc/source/intro.rst 2011-12-14 00:33:37 +0000
+++ doc/source/intro.rst 2012-01-18 14:24:25 +0000
@@ -4,37 +4,57 @@
 
 *Percona XtraDB Cluster* is open-source, free |MySQL| High Availability software
 
+General introduction
+====================
+
+1. The Cluster consists of Nodes. Recommended configuration is to have at least 3 nodes, but you can make it running with 2 nodes as well.
+2. Each Node is regular |MySQL| / |Percona Server| setup. The point is that you can convert your existing MySQL / Percona Server into Node and roll Cluster using it as a base. Or otherwise – you can detach Node from Cluster and use it as just a regular server.
+3. Each Node contains the full copy of data. That defines XtraDB Cluster behavior in many ways. And obviously there are benefits and drawbacks.
+
+.. image:: _static/cluster-diagram1.png
+
+Benefits of such approach:
+ * When you execute a query, it is executed locally on the node. All data is available locally, no need for remote access.
+ * No central management. You can loose any node at any point of time, and the cluster will continue to function.
+ * Good solution for scaling a read workload. You can put read queries to any of the nodes.
+
+Drawbacks:
+ * Overhead of joining new node. The new node has to copy full dataset from one of existing nodes. If it is 100GB, it copies 100GB.
+ * This can’t be used as an effective write scaling solution. There might be some improvements in write throughput when you run write traffic to 2 nodes vs all traffic to 1 node, but you can't expect a lot. All writes still have to go on all nodes.
+ * You have several duplicates of data, for 3 nodes – 3 duplicates.
+
 What is core difference Percona XtraDB Cluster from MySQL Replication ?
 =======================================================================
 
-Let's take look into well know CAP theorem for Distributed systems.
+Let's take look into the well known CAP theorem for Distributed systems.
 Characteristics of Distributed systems:
 
- C - Consistency (all your data is consistent on all nodes)
+ C - Consistency (all your data is consistent on all nodes),
 
- A - Availability (your system is AVAILABLE to handle requests in case of failure of one or several nodes )
+ A - Availability (your system is AVAILABLE to handle requests in case of failure of one or several nodes ),
 
- P - Partitioning tolerance (in case of inter-node connection failure, each node is still available to handle requests)
+ P - Partitioning tolerance (in case of inter-node connection failure, each node is still available to handle requests).
 
 
 CAP theorem says that each Distributed system can have only two out of these three.
 
-MySQL replication has: Availability and Partitioning tolerance
+MySQL replication has: Availability and Partitioning tolerance.
 
-Percona XtraDB Cluster has: Consistency and Availability
+Percona XtraDB Cluster has: Consistency and Availability.
 
-That is MySQL replication does not guarantee Consistency of your data, while Percona XtraDB Cluster provides data Consistency. (And, yes, Percona XtraDB Cluster looses Partitioning tolerance property)
+That is MySQL replication does not guarantee Consistency of your data, while Percona XtraDB Cluster provides data Consistency. (And yes, Percona XtraDB Cluster looses Partitioning tolerance property).
 
 Components
 ==========
 
 *Percona XtraDB Cluster* is based on `Percona Server with XtraDB <http://www.percona.com/software/percona-server/>`_
-and includes `Write Set REPlication patches <https://launchpad.net/codership-mysql>`_.
+and includes `Write Set Replication patches <https://launchpad.net/codership-mysql>`_.
 It uses the `Galera library <https://launchpad.net/galera>`_, version 2.x,
 a generic Synchronous Multi-Master replication plugin for transactional applications.
 
-Galera library is developed by `Codership Oy <http://www.codership.com/>`_
+Galera library is developed by `Codership Oy <http://www.codership.com/>`_.
 
 Galera 2.x supports such new features as:
- * Incremental State Transfer, especially useful for WAN deployments
- * RSU, Rolling Schema Update. Schema change does not block operations against table
+ * Incremental State Transfer (|IST|), especially useful for WAN deployments,
+ * RSU, Rolling Schema Update. Schema change does not block operations against table.
+
 
=== modified file 'doc/source/limitation.rst'
--- doc/source/limitation.rst 2011-12-14 00:33:37 +0000
+++ doc/source/limitation.rst 2012-01-18 14:24:25 +0000
@@ -2,43 +2,29 @@
 Percona XtraDB Cluster Limitations
 ====================================
 
-There are some limitations which we should be aware of. Some of them will be eliminated later as product is improved; some are design limitations.
+There are some limitations which you should be aware of. Some of them will be eliminated later as product is improved and some are design limitations.
 
- - Currently replication works only with InnoDB storage engine. Any writes to
-   tables of other types, including system (mysql.*) tables are not replicated.
-   However, DDL statements are replicated in statement level, and changes
-   to mysql.* tables will get replicated that way.
-   So, you can safely issue: CREATE USER...,
-   but issuing: INSERT INTO mysql.user..., will not be replicated.
-
- - DELETE operation is unsupported on tables without primary key. Also rows in
-   tables without primary key may appear in different order on different nodes.
-   As a result SELECT...LIMIT... may return slightly different sets.
+ - Currently replication works only with |InnoDB| storage engine. Any writes to tables of other types, including system (mysql.*) tables, are not replicated. However, DDL statements are replicated in statement level, and changes to mysql.* tables will get replicated that way. So, you can safely issue: CREATE USER..., but issuing: INSERT INTO mysql.user..., will not be replicated.
+
+ - DELETE operation is unsupported on tables without primary key. Also rows in tables without primary key may appear in different order on different nodes. As a result SELECT...LIMIT... may return slightly different sets.
 
  - Unsupported queries:
    * LOCK/UNLOCK TABLES cannot be supported in multi-master setups.
    * lock functions (GET_LOCK(), RELEASE_LOCK()... )
 
- - Query log cannot be directed to table. If you enable query logging,
-   you must forward the log to a file:
-   log_output = FILE
-   Use general_log and general_log_file to choose query logging and the
-   log file name
-
- - Maximum allowed transaction size is defined by wsrep_max_ws_rows and
-   wsrep_max_ws_size. Anything bigger (e.g. huge LOAD DATA) will be rejected.
-
- - Due to cluster level optimistic concurrency control, transaction issuing
-   COMMIT may still be aborted at that stage. There can be two transactions.
-   writing to same rows and committing in separate XtraDB Cluster nodes, and only one
-   of the them can successfully commit. The failing one will be aborted.
-   For cluster level aborts, XtraDB Cluster gives back deadlock error.
-   code (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).
+ - Query log cannot be directed to table. If you enable query logging, you must forward the log to a file: log_output = FILE. Use general_log and general_log_file to choose query logging and the log file name.
+
+ - Maximum allowed transaction size is defined by wsrep_max_ws_rows and wsrep_max_ws_size. Anything bigger (e.g. huge LOAD DATA) will be rejected.
+
+ - Due to cluster level optimistic concurrency control, transaction issuing COMMIT may still be aborted at that stage. There can be two transactions writing to same rows and committing in separate XtraDB Cluster nodes, and only one of the them can successfully commit. The failing one will be aborted. For cluster level aborts, XtraDB Cluster gives back deadlock error code:
+   (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).
 
  - XA transactions can not be supported due to possible rollback on commit.
 
- - The write throughput of the whole cluster is limited by weakest node. If one node becomes slow, whole cluster is slow. If you have requirements for stable high performance, then it should be supported by corresponding hardware (10Gb network, SSD)
+ - The write throughput of the whole cluster is limited by weakest node. If one node becomes slow, whole cluster is slow. If you have requirements for stable high performance, then it should be supported by corresponding hardware (10Gb network, SSD).
 
- - The minimal recommended size of cluster is 3 nodes
+ - The minimal recommended size of cluster is 3 nodes.
 
  - DDL statements are problematic and may stall cluster. Later, the support of DDL will be improved, but will always require special treatment.
+
+
 
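
Since a cluster-level certification failure surfaces to the client as error 1213 (ER_LOCK_DEADLOCK), the usual client-side answer is a bounded retry. A sketch, with a purely illustrative table and statement:

   # Retry a write a few times; a certification conflict on COMMIT
   # looks exactly like a deadlock to the client.
   for ATTEMPT in 1 2 3; do
       mysql test -e "UPDATE t SET val = val + 1 WHERE id = 1" && break
       echo "aborted (likely ER_LOCK_DEADLOCK), retrying ($ATTEMPT)"
       sleep 1
   done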
=== modified file 'doc/source/resources.rst'
--- doc/source/resources.rst 2011-12-21 05:51:01 +0000
+++ doc/source/resources.rst 2012-01-18 14:24:25 +0000
@@ -1,3 +1,7 @@
+=========
+Resources
+=========
+
 In general there are 4 resources that need to be different when you want to run several MySQL/Galera nodes on one host:
 
 1) data directory
@@ -5,19 +9,18 @@
 3) galera replication listen port and/or address
 4) receive address for state snapshot transfer
 
-
-and later incremental state transfer receive address will be added to the bunch. (I know, it is kinda a lot, but we don't see how it can be meaningfully reduced yet)
-
-The first two is the usual mysql stuff.
-
-You figured out the third. It is also possible to pass it via
-
-wsrep_provider_options="gmcast.listen_addr=tcp://127.0.0.1:5678"
-
-as most other galera options. This may save you some extra typing.
+and later incremental state transfer receive address will be added to the bunch. (I know, it is kinda a lot, but we don't see how it can be meaningfully reduced yet).
+
+The first two are the usual mysql stuff.
+
+You figured out the third. It is also possible to pass it via: ::
+
+   wsrep_provider_options="gmcast.listen_addr=tcp://127.0.0.1:5678"
+
+as most other Galera options. This may save you some extra typing.
 
 The fourth one is wsrep_sst_receive_address. This is the address at which the node will be listening for and receiving the state. Note that in galera cluster _joining_ nodes are waiting for connections from donors. It goes contrary to tradition and seems to confuse people time and again, but there are good reasons it was made like that.
 
 If you use mysqldump SST it should be the same as this mysql client connection address plus you need to set wsrep_sst_auth variable to hold user:password pair. The user should be privileged enough to read system tables from donor and create system tables on this node. For simplicity that could be just the root user. Note that it also means that you need to properly set up the privileges on the new node before attempting to join the cluster.
 
-If you use rsync or xtrabackup SST, wsrep_sst_auth is not necessary unless your SST script makes use of it. wsrep_sst_address can be anything local (it may even be the same on all nodes provided you'll be starting them one at a time).
+If you use rsync or |xtrabackup| SST, wsrep_sst_auth is not necessary unless your SST script makes use of it. wsrep_sst_address can be anything local (it may even be the same on all nodes provided you'll be starting them one at a time).
 
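
Put together, the four per-node resources from the list above map to four my.cnf settings. A sketch for a second node on the same host (all paths, ports and addresses are illustrative):

   cat > /etc/my.5000.cnf <<'EOF'
   [mysqld]
   # 1) data directory
   datadir = /data/bench/d2
   # 2) client port and socket
   port = 5000
   socket = /tmp/node5000.sock
   # 3) galera replication listen port
   wsrep_provider_options = "gmcast.listen_addr=tcp://127.0.0.1:5010"
   # 4) receive address for state snapshot transfer
   wsrep_sst_receive_address = 127.0.0.1:5020
   EOF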
=== modified file 'doc/source/singlebox.rst'
--- doc/source/singlebox.rst 2011-12-16 03:50:49 +0000
+++ doc/source/singlebox.rst 2012-01-18 14:24:25 +0000
@@ -1,64 +1,64 @@
 How to setup 3 node cluster on single box
 ==========================================
 
-This is how-to setup 3-node cluster on the single physical box.
+This is how to setup 3-node cluster on the single physical box.
 
-Assume you installed Percona XtraDB Cluster from binary .tar.gz into directory
+Assume you installed |Percona XtraDB Cluster| from binary .tar.gz into directory
 
 /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6
 
 
-Now we need to create couple my.cnf files and couple data directories.
+Now we need to create couple my.cnf files and couple of data directories.
 
-Assume we created (see the content of files at the end of document)
+Assume we created (see the content of files at the end of document):
 
  * /etc/my.4000.cnf
  * /etc/my.5000.cnf
  * /etc/my.6000.cnf
 
-and data directories
+and data directories:
 
  * /data/bench/d1
  * /data/bench/d2
  * /data/bench/d3
 
-and assume the local IP address is 10.11.12.205
+and assume the local IP address is 10.11.12.205.
 
-then we should be able to start initial node as (from directory /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6)::
+Then we should be able to start initial node as (from directory /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6): ::
 
    bin/mysqld --defaults-file=/etc/my.4000.cnf
 
-Following output will let out know that node was started succsefully::
+Following output will let out know that node was started successfully: ::
 
    111215 19:01:49 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
    111215 19:01:49 [Note] WSREP: New cluster view: global state: 4c286ccc-2792-11e1-0800-94bd91e32efa:0, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 1
 
 
-And you can check used ports::
+And you can check used ports: ::
 
    netstat -anp | grep mysqld
    tcp        0      0 0.0.0.0:4000      0.0.0.0:*      LISTEN      8218/mysqld
    tcp        0      0 0.0.0.0:4010      0.0.0.0:*      LISTEN      8218/mysqld
 
 
-After first node, we start second and third::
+After first node, we start second and third: ::
 
    bin/mysqld --defaults-file=/etc/my.5000.cnf
    bin/mysqld --defaults-file=/etc/my.6000.cnf
 
-Succesfull start will produce following output::
+Successful start will produce the following output: ::
 
    111215 19:22:26 [Note] WSREP: Shifting JOINER -> JOINED (TO: 2)
    111215 19:22:26 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2)
    111215 19:22:26 [Note] WSREP: Synchronized with group, ready for connections
 
 
-Now you can connect to any node and create database, which will be automatically propagated to another nodes::
+Now you can connect to any node and create database, which will be automatically propagated to other nodes: ::
 
    mysql -h127.0.0.1 -P5000 -e "CREATE DATABASE hello_peter"
 
 
-Configuration files (/etc/my.4000.cnf)::
+Configuration files (/etc/my.4000.cnf): ::
 
    /etc/my.4000.cnf
 
@@ -90,7 +90,7 @@
    innodb_locks_unsafe_for_binlog=1
    innodb_autoinc_lock_mode=2
 
-Configuration files (/etc/my.5000.cnf). PLEASE see difference in *wsrep_cluster_address* ::
+Configuration files (/etc/my.5000.cnf). PLEASE see the difference in *wsrep_cluster_address*: ::
 
    /etc/my.5000.cnf
    [mysqld]
@@ -122,7 +122,7 @@
    innodb_autoinc_lock_mode=2
 
 
-Configuration files (/etc/my.6000.cnf). PLEASE see difference in *wsrep_cluster_address* ::
+Configuration files (/etc/my.6000.cnf). PLEASE see the difference in *wsrep_cluster_address*: ::
 
    /etc/my.6000.cnf
 
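
Once the three local nodes from this walkthrough are up, a quick sketch to confirm that each one sees the full cluster (ports as in the /etc/my.*.cnf files above):

   # Each node should report wsrep_cluster_size = 3.
   for PORT in 4000 5000 6000; do
       mysql -h127.0.0.1 -P$PORT -N -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
   done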
