Merge lp:~hrvojem/percona-xtradb-cluster/bug918060 into lp:~percona-dev/percona-xtradb-cluster/5.5.17-22.1
Proposed by: Hrvoje Matijakovic
Status: Merged
Approved by: Vadim Tkachenko
Approved revision: no longer in the source branch.
Merged at revision: 3693
Proposed branch: lp:~hrvojem/percona-xtradb-cluster/bug918060
Merge into: lp:~percona-dev/percona-xtradb-cluster/5.5.17-22.1
Diff against target: 748 lines (+195/-138), 15 files modified
  doc/source/3nodesec2.rst (+14/-15)
  doc/source/bugreport.rst (+4/-0)
  doc/source/conf.py (+7/-1)
  doc/source/faq.rst (+28/-33)
  doc/source/features/highavailability.rst (+28/-0)
  doc/source/glossary.rst (+13/-8)
  doc/source/index.rst (+13/-3)
  doc/source/installation.rst (+3/-3)
  doc/source/installation/bin_distro_specific.rst (+1/-1)
  doc/source/installation/compiling_xtradb_cluster.rst (+4/-3)
  doc/source/installation/yum_repo.rst (+1/-1)
  doc/source/intro.rst (+35/-15)
  doc/source/limitation.rst (+15/-29)
  doc/source/resources.rst (+14/-11)
  doc/source/singlebox.rst (+15/-15)
To merge this branch: bzr merge lp:~hrvojem/percona-xtradb-cluster/bug918060
Related bugs:
Reviewer: Vadim Tkachenko (Approve)
Review via email: mp+89042@code.launchpad.net
Revision history for this message
Vadim Tkachenko (vadim-tk):
review: Approve
Preview Diff
1 | === modified file 'doc/source/3nodesec2.rst' | |||
2 | --- doc/source/3nodesec2.rst 2011-12-23 07:28:16 +0000 | |||
3 | +++ doc/source/3nodesec2.rst 2012-01-18 14:24:25 +0000 | |||
4 | @@ -1,39 +1,39 @@ | |||
5 | 1 | How to setup 3 node cluster in EC2 enviroment | 1 | How to setup 3 node cluster in EC2 enviroment |
6 | 2 | ============================================== | 2 | ============================================== |
7 | 3 | 3 | ||
9 | 4 | This is how-to setup 3-node cluster in EC2 enviroment. | 4 | This is how to setup 3-node cluster in EC2 enviroment. |
10 | 5 | 5 | ||
11 | 6 | Assume you are running *m1.xlarge* instances with OS *Red Hat Enterprise Linux 6.1 64-bit*. | 6 | Assume you are running *m1.xlarge* instances with OS *Red Hat Enterprise Linux 6.1 64-bit*. |
12 | 7 | 7 | ||
13 | 8 | Install XtraDB Cluster from RPM: | 8 | Install XtraDB Cluster from RPM: |
14 | 9 | 9 | ||
16 | 10 | 1. Install Percona's regular and testing repositories :: | 10 | 1. Install Percona's regular and testing repositories: :: |
17 | 11 | 11 | ||
18 | 12 | rpm -Uhv http://repo.percona.com/testing/centos/6/os/noarch/percona-testing-0.0-1.noarch.rpm | 12 | rpm -Uhv http://repo.percona.com/testing/centos/6/os/noarch/percona-testing-0.0-1.noarch.rpm |
19 | 13 | rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm | 13 | rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm |
20 | 14 | 14 | ||
22 | 15 | 2. Install Percona XtraDB Cluster packages :: | 15 | 2. Install Percona XtraDB Cluster packages: :: |
23 | 16 | 16 | ||
24 | 17 | yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client | 17 | yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client |
25 | 18 | 18 | ||
27 | 19 | 3. Create data directories :: | 19 | 3. Create data directories: :: |
28 | 20 | 20 | ||
29 | 21 | mkdir -p /mnt/data | 21 | mkdir -p /mnt/data |
30 | 22 | mysql_install_db --datadir=/mnt/data | 22 | mysql_install_db --datadir=/mnt/data |
31 | 23 | 23 | ||
33 | 24 | 4. Stop firewall. Cluster requires couple TCP ports to operate. Easiest way :: | 24 | 4. Stop firewall. Cluster requires couple TCP ports to operate. Easiest way: :: |
34 | 25 | 25 | ||
35 | 26 | service iptables stop | 26 | service iptables stop |
36 | 27 | 27 | ||
37 | 28 | If you want to open only specific ports, you need to open 3306, 4444, 4567, 4568 ports. | 28 | If you want to open only specific ports, you need to open 3306, 4444, 4567, 4568 ports. |
39 | 29 | For example for 4567 port (substitute 192.168.0.1 by your IP) :: | 29 | For example for 4567 port (substitute 192.168.0.1 by your IP): :: |
40 | 30 | 30 | ||
41 | 31 | iptables -A INPUT -i eth0 -p tcp -m tcp --source 192.168.0.1/24 --dport 4567 -j ACCEPT | 31 | iptables -A INPUT -i eth0 -p tcp -m tcp --source 192.168.0.1/24 --dport 4567 -j ACCEPT |
42 | 32 | 32 | ||
43 | 33 | 33 | ||
44 | 34 | 5. Create /etc/my.cnf files. | 34 | 5. Create /etc/my.cnf files. |
45 | 35 | 35 | ||
47 | 36 | On the first node (assume IP 10.93.46.58) :: | 36 | On the first node (assume IP 10.93.46.58): :: |
48 | 37 | 37 | ||
49 | 38 | [mysqld] | 38 | [mysqld] |
50 | 39 | datadir=/mnt/data | 39 | datadir=/mnt/data |
51 | @@ -54,7 +54,7 @@ | |||
52 | 54 | innodb_autoinc_lock_mode=2 | 54 | innodb_autoinc_lock_mode=2 |
53 | 55 | 55 | ||
54 | 56 | 56 | ||
56 | 57 | On the second node :: | 57 | On the second node: :: |
57 | 58 | 58 | ||
58 | 59 | [mysqld] | 59 | [mysqld] |
59 | 60 | datadir=/mnt/data | 60 | datadir=/mnt/data |
60 | @@ -74,33 +74,32 @@ | |||
61 | 74 | innodb_locks_unsafe_for_binlog=1 | 74 | innodb_locks_unsafe_for_binlog=1 |
62 | 75 | innodb_autoinc_lock_mode=2 | 75 | innodb_autoinc_lock_mode=2 |
63 | 76 | 76 | ||
65 | 77 | On the third (and following nodes) config is similar, with following change :: | 77 | On the third (and following nodes) config is similar, with the following change: :: |
66 | 78 | 78 | ||
67 | 79 | wsrep_node_name=node3 | 79 | wsrep_node_name=node3 |
68 | 80 | 80 | ||
69 | 81 | 6. Start mysqld | 81 | 6. Start mysqld |
70 | 82 | 82 | ||
72 | 83 | On the first node :: | 83 | On the first node: :: |
73 | 84 | 84 | ||
74 | 85 | /usr/sbin/mysqld | 85 | /usr/sbin/mysqld |
75 | 86 | or | 86 | or |
76 | 87 | mysqld_safe | 87 | mysqld_safe |
77 | 88 | 88 | ||
79 | 89 | You should be able to see in console (or in error-log file) :: | 89 | You should be able to see in console (or in error-log file): :: |
80 | 90 | 90 | ||
81 | 91 | 111216 0:16:42 [Note] /usr/sbin/mysqld: ready for connections. | 91 | 111216 0:16:42 [Note] /usr/sbin/mysqld: ready for connections. |
82 | 92 | Version: '5.5.17' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona XtraDB Cluster (GPL), Release alpha22.1, Revision 3673 wsrep_22.3.r3673 | 92 | Version: '5.5.17' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona XtraDB Cluster (GPL), Release alpha22.1, Revision 3673 wsrep_22.3.r3673 |
83 | 93 | 111216 0:16:42 [Note] WSREP: Assign initial position for certification: 0, protocol version: 1 | 93 | 111216 0:16:42 [Note] WSREP: Assign initial position for certification: 0, protocol version: 1 |
84 | 94 | 111216 0:16:42 [Note] WSREP: Synchronized with group, ready for connections | 94 | 111216 0:16:42 [Note] WSREP: Synchronized with group, ready for connections |
85 | 95 | 95 | ||
88 | 96 | On the second (and following nodes) :: | 96 | On the second (and following nodes): :: |
87 | 97 | |||
89 | 98 | 97 | ||
90 | 99 | /usr/sbin/mysqld | 98 | /usr/sbin/mysqld |
91 | 100 | or | 99 | or |
92 | 101 | mysqld_safe | 100 | mysqld_safe |
93 | 102 | 101 | ||
95 | 103 | You should be able to see in console (or in error-log file) :: | 102 | You should be able to see in console (or in error-log file): :: |
96 | 104 | 103 | ||
97 | 105 | 111216 0:21:39 [Note] WSREP: Flow-control interval: [12, 23] | 104 | 111216 0:21:39 [Note] WSREP: Flow-control interval: [12, 23] |
98 | 106 | 111216 0:21:39 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 0) | 105 | 111216 0:21:39 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 0) |
99 | @@ -141,7 +140,7 @@ | |||
100 | 141 | 140 | ||
101 | 142 | When all nodes are in SYNCED stage your cluster is ready! | 141 | When all nodes are in SYNCED stage your cluster is ready! |
102 | 143 | 142 | ||
104 | 144 | 7. Connect to database on any node and create database :: | 143 | 7. Connect to database on any node and create database: :: |
105 | 145 | 144 | ||
106 | 146 | mysql | 145 | mysql |
107 | 147 | > CREATE DATABASE hello_tom; | 146 | > CREATE DATABASE hello_tom; |
108 | 148 | 147 | ||
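The hunks above cut the first node's my.cnf off right after the datadir line, so for orientation here is a rough sketch of the wsrep settings such a config carries in this release; the provider path, cluster name and SST method are illustrative values, not taken from the hidden hunk lines:

    [mysqld]
    datadir=/mnt/data
    binlog_format=ROW
    wsrep_provider=/usr/lib64/libgalera_smm.so
    wsrep_cluster_address=gcomm://     # empty address bootstraps the first node
    wsrep_cluster_name=my_ec2_cluster
    wsrep_node_name=node1
    wsrep_sst_method=rsync
    innodb_locks_unsafe_for_binlog=1
    innodb_autoinc_lock_mode=2

On the second node the sketch would differ only in wsrep_node_name=node2 and wsrep_cluster_address=gcomm://10.93.46.58, pointing at the first node's IP quoted above.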
109 | === added file 'doc/source/_static/cluster-diagram1.png' | |||
110 | 149 | Binary files doc/source/_static/cluster-diagram1.png 1970-01-01 00:00:00 +0000 and doc/source/_static/cluster-diagram1.png 2012-01-18 14:24:25 +0000 differ | 148 | Binary files doc/source/_static/cluster-diagram1.png 1970-01-01 00:00:00 +0000 and doc/source/_static/cluster-diagram1.png 2012-01-18 14:24:25 +0000 differ |
111 | === added file 'doc/source/bugreport.rst' | |||
112 | --- doc/source/bugreport.rst 1970-01-01 00:00:00 +0000 | |||
113 | +++ doc/source/bugreport.rst 2012-01-18 14:24:25 +0000 | |||
114 | @@ -0,0 +1,4 @@ | |||
115 | 1 | How to Report Bugs | ||
116 | 2 | ================== | ||
117 | 3 | |||
118 | 4 | All bugs can be reported on `Launchpad <https://bugs.launchpad.net/percona-xtradb-cluster/+filebug>`_. Please note that error.log files from **all** the nodes need to be submitted. | ||
119 | 0 | 5 | ||
120 | === modified file 'doc/source/conf.py' | |||
121 | --- doc/source/conf.py 2011-12-13 16:28:28 +0000 | |||
122 | +++ doc/source/conf.py 2012-01-18 14:24:25 +0000 | |||
123 | @@ -44,7 +44,7 @@ | |||
124 | 44 | 44 | ||
125 | 45 | # General information about the project. | 45 | # General information about the project. |
126 | 46 | project = u'Percona XtraDB Cluster' | 46 | project = u'Percona XtraDB Cluster' |
128 | 47 | copyright = u'2011, Percona Inc' | 47 | copyright = u'2012, Percona Inc' |
129 | 48 | 48 | ||
130 | 49 | # The version info for the project you're documenting, acts as replacement for | 49 | # The version info for the project you're documenting, acts as replacement for |
131 | 50 | # |version| and |release|, also used in various other places throughout the | 50 | # |version| and |release|, also used in various other places throughout the |
132 | @@ -96,6 +96,8 @@ | |||
133 | 96 | 96 | ||
134 | 97 | .. |XtraDB| replace:: :term:`XtraDB` | 97 | .. |XtraDB| replace:: :term:`XtraDB` |
135 | 98 | 98 | ||
136 | 99 | .. |IST| replace:: :term:`IST` | ||
137 | 100 | |||
138 | 99 | .. |XtraDB Cluster| replace:: :term:`XtraDB Cluster` | 101 | .. |XtraDB Cluster| replace:: :term:`XtraDB Cluster` |
139 | 100 | 102 | ||
140 | 101 | .. |Percona XtraDB Cluster| replace:: :term:`Percona XtraDB Cluster` | 103 | .. |Percona XtraDB Cluster| replace:: :term:`Percona XtraDB Cluster` |
141 | @@ -104,6 +106,10 @@ | |||
142 | 104 | 106 | ||
143 | 105 | .. |MyISAM| replace:: :term:`MyISAM` | 107 | .. |MyISAM| replace:: :term:`MyISAM` |
144 | 106 | 108 | ||
145 | 109 | .. |split brain| replace:: :term:`split brain` | ||
146 | 110 | |||
147 | 111 | .. |.frm| replace:: :term:`.frm` | ||
148 | 112 | |||
149 | 107 | .. |LSN| replace:: :term:`LSN` | 113 | .. |LSN| replace:: :term:`LSN` |
150 | 108 | 114 | ||
151 | 109 | .. |XtraBackup| replace:: *XtraBackup* | 115 | .. |XtraBackup| replace:: *XtraBackup* |
152 | 110 | 116 | ||
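These replace directives are defined once in conf.py (typically via rst_prolog), so any page in the docs can use the shorthand; a hypothetical line of page source showing the new |IST| and |split brain| substitutions in use:

    If the donor's writeset cache still holds the missing transactions,
    the joining node performs |IST| instead of a full snapshot; this is
    unrelated to the |split brain| scenario described in the FAQ.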
153 | === modified file 'doc/source/faq.rst' | |||
154 | --- doc/source/faq.rst 2012-01-14 17:24:09 +0000 | |||
155 | +++ doc/source/faq.rst 2012-01-18 14:24:25 +0000 | |||
156 | @@ -6,64 +6,65 @@ | |||
157 | 6 | ======================================================== | 6 | ======================================================== |
158 | 7 | A: For auto-increment particular, Cluster changes auto_increment_offset | 7 | A: For auto-increment particular, Cluster changes auto_increment_offset |
159 | 8 | for each new node. | 8 | for each new node. |
161 | 9 | In the single node workload, locking handled by usual way how InnoDB handles locks. | 9 | In the single node workload, locking handled by usual way how |InnoDB| handles locks. |
162 | 10 | In case of write load on several nodes, Cluster uses optimistic locking (http://en.wikipedia.org/wiki/Optimistic_concurrency_control) and application may receive lock error in the response on COMMIT query. | 10 | In case of write load on several nodes, Cluster uses optimistic locking (http://en.wikipedia.org/wiki/Optimistic_concurrency_control) and application may receive lock error in the response on COMMIT query. |
163 | 11 | 11 | ||
164 | 12 | Q: What if one of the nodes crashes and innodb recovery roll back some transactions? | 12 | Q: What if one of the nodes crashes and innodb recovery roll back some transactions? |
165 | 13 | ===================================================================================== | 13 | ===================================================================================== |
166 | 14 | A: When the node crashes, after the restart it will copy whole dataset from another node | 14 | A: When the node crashes, after the restart it will copy whole dataset from another node |
168 | 15 | (if there were changes to data since crash) | 15 | (if there were changes to data since crash). |
169 | 16 | 16 | ||
170 | 17 | Q: Is there a chance to have different table structure on the nodes? | 17 | Q: Is there a chance to have different table structure on the nodes? |
171 | 18 | ===================================================================== | 18 | ===================================================================== |
173 | 19 | what I mean is like having 4nodes, 4 tables like sessions_a, sessions_b, sessions_c and sessions_d and have each only on one of the nodes? | 19 | What I mean is like having 4 nodes, 4 tables like sessions_a, sessions_b, sessions_c and sessions_d and have each only on one of the nodes? |
174 | 20 | 20 | ||
175 | 21 | A: Not at the moment for InnoDB tables. But it will work for MEMORY tables. | 21 | A: Not at the moment for InnoDB tables. But it will work for MEMORY tables. |
176 | 22 | 22 | ||
179 | 23 | Q: What if a node fail and/or what if there is a network issue between them | 23 | Q: What if a node fail and/or what if there is a network issue between them? |
180 | 24 | ============================================================================ | 24 | ============================================================================= |
181 | 25 | A: Then Quorum mechanism in XtraDB Cluster will decide what nodes can accept traffic | 25 | A: Then Quorum mechanism in XtraDB Cluster will decide what nodes can accept traffic |
182 | 26 | and will shutdown nodes that not belong to quorum. Later when the failure is fixed, | 26 | and will shutdown nodes that not belong to quorum. Later when the failure is fixed, |
183 | 27 | the nodes will need to copy data from working cluster. | 27 | the nodes will need to copy data from working cluster. |
184 | 28 | 28 | ||
185 | 29 | Q: How would it handle split brain? | 29 | Q: How would it handle split brain? |
186 | 30 | ==================================== | 30 | ==================================== |
188 | 31 | A: It would not handle it. The |split brain| is hard stop, XtraDB Cluster can't resolve it. | 31 | A: It would not handle it. The |split brain| is hard stop, |XtraDB Cluster| can't resolve it. |
189 | 32 | That's why the minimal recommendation is to have 3 nodes. | 32 | That's why the minimal recommendation is to have 3 nodes. |
190 | 33 | However there is possibility to allow a node to handle the traffic, option is: :: | 33 | However there is possibility to allow a node to handle the traffic, option is: :: |
191 | 34 | 34 | ||
192 | 35 | wsrep_provider_options="pc.ignore_sb = yes" | 35 | wsrep_provider_options="pc.ignore_sb = yes" |
193 | 36 | 36 | ||
196 | 37 | Q: I have a two nodes setup. When node1 fails, node2 does not accept commands, why ? | 37 | Q: I have a two nodes setup. When node1 fails, node2 does not accept commands, why? |
197 | 38 | ===================================================================================== | 38 | ==================================================================================== |
198 | 39 | A: This is expected behaviour, to prevent |split brain|. See previous question. | 39 | A: This is expected behaviour, to prevent |split brain|. See previous question. |
199 | 40 | 40 | ||
201 | 41 | Q: What tcp ports are used by Percona XtraDB Cluster ? | 41 | Q: What tcp ports are used by Percona XtraDB Cluster? |
202 | 42 | ====================================================== | 42 | ====================================================== |
203 | 43 | A: You may need to open up to 4 ports if you are using firewall. | 43 | A: You may need to open up to 4 ports if you are using firewall. |
204 | 44 | 44 | ||
219 | 45 | * Regular MySQL port, default 3306 | 45 | 1. Regular MySQL port, default 3306. |
220 | 46 | * Port for group communication, default 4567. It can be changed by the option: :: | 46 | |
221 | 47 | 47 | 2. Port for group communication, default 4567. It can be changed by the option: :: | |
222 | 48 | wsrep_provider_options = "gmcast.listen_addr=tcp://0.0.0.0:4010; " | 48 | |
223 | 49 | 49 | wsrep_provider_options ="gmcast.listen_addr=tcp://0.0.0.0:4010; " | |
224 | 50 | * Port for State Transfer, default 4444. It can be changed by the option: :: | 50 | |
225 | 51 | 51 | 3. Port for State Transfer, default 4444. It can be changed by the option: :: | |
226 | 52 | wsrep_sst_receive_address=10.11.12.205:5555 | 52 | |
227 | 53 | 53 | wsrep_sst_receive_address=10.11.12.205:5555 | |
228 | 54 | * Port for Incremental State Transfer, default port for group communication + 1 (4568). It can be changed by the option: :: | 54 | |
229 | 55 | 55 | 4. Port for Incremental State Transfer, default port for group communication + 1 (4568). It can be changed by the option: :: | |
230 | 56 | wsrep_provider_options = "ist.recv_addr=10.11.12.206:7777; " | 56 | |
231 | 57 | 57 | wsrep_provider_options = "ist.recv_addr=10.11.12.206:7777; " | |
232 | 58 | Q: Is there "async" mode for Cluster or only "sync" commits are supported ? | 58 | |
233 | 59 | Q: Is there "async" mode for Cluster or only "sync" commits are supported? | ||
234 | 59 | =========================================================================== | 60 | =========================================================================== |
238 | 60 | A: There is no "async" mode, all commits are syncronious on all nodes. | 61 | A: There is no "async" mode, all commits are synchronous on all nodes. |
239 | 61 | Or, to be fully correct, the commits are "virtually" syncronious. Which | 62 | Or, to be fully correct, the commits are "virtually" synchronous. Which |
240 | 62 | means that transaction should pass "certification" on nodes, not physicall commit. | 63 | means that transaction should pass "certification" on nodes, not physical commit. |
241 | 63 | "Certification" means a guarantee that transaction does not have conflicts with | 64 | "Certification" means a guarantee that transaction does not have conflicts with |
242 | 64 | another transactions on corresponding node. | 65 | another transactions on corresponding node. |
243 | 65 | 66 | ||
245 | 66 | Q: Does it work with regular MySQL replication ? | 67 | Q: Does it work with regular MySQL replication? |
246 | 67 | ================================================ | 68 | ================================================ |
247 | 68 | A: Yes. On the node you are going to use as master, you should enable log-bin and log-slave-update options. | 69 | A: Yes. On the node you are going to use as master, you should enable log-bin and log-slave-update options. |
248 | 69 | 70 | ||
249 | @@ -73,9 +74,3 @@ | |||
250 | 73 | 74 | ||
251 | 74 | echo 0 > /selinux/enforce | 75 | echo 0 > /selinux/enforce |
252 | 75 | 76 | ||
253 | 76 | |||
254 | 77 | |||
255 | 78 | |||
256 | 79 | |||
257 | 80 | |||
258 | 81 | |||
259 | 82 | 77 | ||
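Pulling the four defaults from the port list above into one place, a firewall that opens only what the cluster needs could look like this; the interface and source subnet are the same illustrative values used in the EC2 how-to:

    iptables -A INPUT -i eth0 -p tcp -m tcp --source 192.168.0.1/24 --dport 3306 -j ACCEPT
    iptables -A INPUT -i eth0 -p tcp -m tcp --source 192.168.0.1/24 --dport 4444 -j ACCEPT
    iptables -A INPUT -i eth0 -p tcp -m tcp --source 192.168.0.1/24 --dport 4567 -j ACCEPT
    iptables -A INPUT -i eth0 -p tcp -m tcp --source 192.168.0.1/24 --dport 4568 -j ACCEPT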
260 | === added directory 'doc/source/features' | |||
261 | === added file 'doc/source/features/highavailability.rst' | |||
262 | --- doc/source/features/highavailability.rst 1970-01-01 00:00:00 +0000 | |||
263 | +++ doc/source/features/highavailability.rst 2012-01-18 14:24:25 +0000 | |||
264 | @@ -0,0 +1,28 @@ | |||
265 | 1 | High Availability | ||
266 | 2 | ================= | ||
267 | 3 | |||
268 | 4 | Basic setup: you run 3-nodes setup. | ||
269 | 5 | |||
270 | 6 | The |Percona XtraDB Cluster| will continue to function when you take any of nodes down. | ||
271 | 7 | At any point in time you can shutdown any Node to perform maintenance or make | ||
272 | 8 | configuration changes. Even in unplanned situations like Node crash or if it | ||
273 | 9 | becomes network unavailable the Cluster will continue to work, and you'll be able | ||
274 | 10 | to run queries on working nodes. | ||
275 | 11 | |||
276 | 12 | The biggest question there, what will happen when the Node joins the cluster back, and | ||
277 | 13 | there were changes to data while the node was down. Let's focus on this with details. | ||
278 | 14 | |||
279 | 15 | There is two ways that Node may use when it joins the cluster: State Snapshot Transfer | ||
280 | 16 | (SST) and Incremental State Transfer (IST). | ||
281 | 17 | |||
282 | 18 | * SST is the full copy if data from one node to another. It's used when new node joins the cluster, it has to transfer data from existing node. There are three methods of SST available in Percona XtraDB Cluster: :program:`mysqldump`, :program:`rsync` and :program:`xtrabackup` (Percona |XtraBackup| with support of XtraDB Cluster will be released soon, so far you need to use our `source code repository <http://www.percona.com/doc/percona-xtrabackup/installation/compiling_xtrabackup.html>`_). The downside of `mysqldump` and `rsync` is that your cluster becomes READ-ONLY for time that takes to copy data from one node to another (SST applies :command:`FLUSH TABLES WITH READ LOCK` command). Xtrabackup SST does not require :command:`READ LOCK` for full time, only for syncing |.FRM| files (the same as with regular backup). | ||
283 | 19 | |||
284 | 20 | * Even with that, SST may be intrusive, that’s why there is IST mechanism. If you put your node down for short period of time and then start it, the node is able to fetch only changes made during period it was down.This is done using caching mechanism on nodes. Each node contains a cache, ring-buffer, (the size is configurable) of last N changes, and the node is able to transfer part of this cache. Obviously IST can be done only if amount of changes needed to transfer is less than N. If it exceeds N, then the joining node has to perform SST. | ||
285 | 21 | |||
286 | 22 | You can monitor current state of Node by using | ||
287 | 23 | |||
288 | 24 | .. code-block:: mysql | ||
289 | 25 | |||
290 | 26 | SHOW STATUS LIKE 'wsrep_local_state_comment'; | ||
291 | 27 | |||
292 | 28 | When it is ‘Synced (6)’, the node is ready to handle traffic. | ||
293 | 0 | 29 | ||
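Run from the shell, the monitoring query above gives output along these lines; the table is illustrative, built around the 'Synced (6)' value the page quotes:

    $ mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
    +---------------------------+------------+
    | Variable_name             | Value      |
    +---------------------------+------------+
    | wsrep_local_state_comment | Synced (6) |
    +---------------------------+------------+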
294 | === modified file 'doc/source/glossary.rst' | |||
295 | --- doc/source/glossary.rst 2012-01-11 17:47:46 +0000 | |||
296 | +++ doc/source/glossary.rst 2012-01-18 14:24:25 +0000 | |||
297 | @@ -7,30 +7,35 @@ | |||
298 | 7 | LSN | 7 | LSN |
299 | 8 | Each InnoDB page (usually 16kb in size) contains a log sequence number, or LSN. The LSN is the system version number for the entire database. Each page's LSN shows how recently it was changed. | 8 | Each InnoDB page (usually 16kb in size) contains a log sequence number, or LSN. The LSN is the system version number for the entire database. Each page's LSN shows how recently it was changed. |
300 | 9 | 9 | ||
301 | 10 | |||
302 | 11 | InnoDB | 10 | InnoDB |
304 | 12 | Storage engine which provides ACID-compilant transactions and foreing key support, among others improvements over :term:`MyISAM`. It is the default engine for |MySQL| as of the 5.5 series. | 11 | Storage engine which provides ACID-compliant transactions and foreign key support, among others improvements over :term:`MyISAM`. It is the default engine for |MySQL| as of the 5.5 series. |
305 | 13 | 12 | ||
306 | 14 | MyISAM | 13 | MyISAM |
307 | 15 | Previous default storage engine for |MySQL| for versions prior to 5.5. It doesn't fully support transactions but in some scenarios may be faster than :term:`InnoDB`. Each table is stored on disk in 3 files: :term:`.frm`, :term:`.MYD`, :term:`.MYI` | 14 | Previous default storage engine for |MySQL| for versions prior to 5.5. It doesn't fully support transactions but in some scenarios may be faster than :term:`InnoDB`. Each table is stored on disk in 3 files: :term:`.frm`, :term:`.MYD`, :term:`.MYI` |
308 | 15 | |||
309 | 16 | IST | ||
310 | 17 | Incremental State Transfer. Functionallity which instead of whole state snapshot can catch up with te group by receiving the missing writesets, but only if the writeset is still in the donor's writeset cache | ||
311 | 18 | |||
312 | 19 | XtraBackup | ||
313 | 20 | *Percona XtraBackup* is an open-source hot backup utility for |MySQL| - based servers that doesn’t lock your database during the backup. | ||
314 | 16 | 21 | ||
315 | 17 | XtraDB | 22 | XtraDB |
316 | 18 | *Percona XtraDB* is an enhanced version of the InnoDB storage engine, designed to better scale on modern hardware, and including a variety of other features useful in high performance environments. It is fully backwards compatible, and so can be used as a drop-in replacement for standard InnoDB. More information `here <http://www.percona.com/docs/wiki/Percona-XtraDB:start>`_ . | 23 | *Percona XtraDB* is an enhanced version of the InnoDB storage engine, designed to better scale on modern hardware, and including a variety of other features useful in high performance environments. It is fully backwards compatible, and so can be used as a drop-in replacement for standard InnoDB. More information `here <http://www.percona.com/docs/wiki/Percona-XtraDB:start>`_ . |
317 | 19 | 24 | ||
318 | 20 | XtraDB Cluster | 25 | XtraDB Cluster |
320 | 21 | *Percona XtraDB Cluster* is a high availablity solution for MySQL | 26 | *Percona XtraDB Cluster* is a high availability solution for MySQL |
321 | 22 | 27 | ||
322 | 23 | Percona XtraDB Cluster | 28 | Percona XtraDB Cluster |
324 | 24 | *Percona XtraDB Cluster* is a high availablity solution for MySQL | 29 | *Percona XtraDB Cluster* is a high availability solution for MySQL |
325 | 25 | 30 | ||
326 | 26 | my.cnf | 31 | my.cnf |
328 | 27 | This file refers to the database server's main configuration file. Most linux distributions place it as :file:`/etc/mysql/my.cnf`, but the location and name depends on the particular installation. Note that this is not the only way of configuring the server, some systems does not have one even and rely on the command options to start the server and its defaults values. | 32 | This file refers to the database server's main configuration file. Most Linux distributions place it as :file:`/etc/mysql/my.cnf`, but the location and name depends on the particular installation. Note that this is not the only way of configuring the server, some systems does not have one even and rely on the command options to start the server and its defaults values. |
329 | 28 | 33 | ||
330 | 29 | datadir | 34 | datadir |
331 | 30 | The directory in which the database server stores its databases. Most Linux distribution use :file:`/var/lib/mysql` by default. | 35 | The directory in which the database server stores its databases. Most Linux distribution use :file:`/var/lib/mysql` by default. |
332 | 31 | 36 | ||
333 | 32 | ibdata | 37 | ibdata |
335 | 33 | Default prefix for tablespace files, e.g. :file:`ibdata1` is a 10MB autoextensible file that |MySQL| creates for the shared tablespace by default. | 38 | Default prefix for tablespace files, e.g. :file:`ibdata1` is a 10MB autoextendable file that |MySQL| creates for the shared tablespace by default. |
336 | 34 | 39 | ||
337 | 35 | innodb_file_per_table | 40 | innodb_file_per_table |
338 | 36 | InnoDB option to use separate .ibd files for each table. | 41 | InnoDB option to use separate .ibd files for each table. |
339 | @@ -54,10 +59,10 @@ | |||
340 | 54 | Each table using the :program:`MERGE` storage engine, besides of a :term:`.frm` file, will have :term:`.MRG` file containing the names of the |MyISAM| tables associated with it. | 59 | Each table using the :program:`MERGE` storage engine, besides of a :term:`.frm` file, will have :term:`.MRG` file containing the names of the |MyISAM| tables associated with it. |
341 | 55 | 60 | ||
342 | 56 | .TRG | 61 | .TRG |
344 | 57 | File containing the TRiGgers associated to a table, e.g. `:file:`mytable.TRG`. With the :term:`.TRN` file, they represent all the Trigger definitions. | 62 | File containing the triggers associated to a table, e.g. `:file:`mytable.TRG`. With the :term:`.TRN` file, they represent all the trigger definitions. |
345 | 58 | 63 | ||
346 | 59 | .TRN | 64 | .TRN |
348 | 60 | File containing the TRiggers' Names associated to a table, e.g. `:file:`mytable.TRN`. With the :term:`.TRG` file, they represent all the Trigger definitions. | 65 | File containing the triggers' Names associated to a table, e.g. `:file:`mytable.TRN`. With the :term:`.TRG` file, they represent all the trigger definitions. |
349 | 61 | 66 | ||
350 | 62 | .ARM | 67 | .ARM |
351 | 63 | Each table with the :program:`Archive Storage Engine` has ``.ARM`` file which contains the metadata of it. | 68 | Each table with the :program:`Archive Storage Engine` has ``.ARM`` file which contains the metadata of it. |
352 | 64 | 69 | ||
353 | === modified file 'doc/source/index.rst' | |||
354 | --- doc/source/index.rst 2012-01-11 16:46:28 +0000 | |||
355 | +++ doc/source/index.rst 2012-01-18 14:24:25 +0000 | |||
356 | @@ -12,13 +12,13 @@ | |||
357 | 12 | 12 | ||
358 | 13 | * Synchronous replication. Transaction either commited on all nodes or none. | 13 | * Synchronous replication. Transaction either commited on all nodes or none. |
359 | 14 | 14 | ||
361 | 15 | * Multi-master replication. You can write to any node. | 15 | * Multi-master replication. You can write to any node. |
362 | 16 | 16 | ||
363 | 17 | * Parallel applying events on slave. Real “parallel replication”. | 17 | * Parallel applying events on slave. Real “parallel replication”. |
364 | 18 | 18 | ||
365 | 19 | * Automatic node provisioning. | 19 | * Automatic node provisioning. |
366 | 20 | 20 | ||
368 | 21 | * Data consistency. No more unsyncronised slaves. | 21 | * Data consistency. No more unsynchronized slaves. |
369 | 22 | 22 | ||
370 | 23 | Percona XtraDB Cluster is fully compatible with MySQL or Percona Server in the following meaning: | 23 | Percona XtraDB Cluster is fully compatible with MySQL or Percona Server in the following meaning: |
371 | 24 | 24 | ||
372 | @@ -35,6 +35,7 @@ | |||
373 | 35 | :glob: | 35 | :glob: |
374 | 36 | 36 | ||
375 | 37 | intro | 37 | intro |
376 | 38 | resources | ||
377 | 38 | 39 | ||
378 | 39 | Installation | 40 | Installation |
379 | 40 | ============ | 41 | ============ |
380 | @@ -46,6 +47,15 @@ | |||
381 | 46 | installation | 47 | installation |
382 | 47 | installation/compiling_xtradb_cluster | 48 | installation/compiling_xtradb_cluster |
383 | 48 | 49 | ||
384 | 50 | Features | ||
385 | 51 | ======== | ||
386 | 52 | |||
387 | 53 | .. toctree:: | ||
388 | 54 | :maxdepth: 1 | ||
389 | 55 | :glob: | ||
390 | 56 | |||
391 | 57 | features/highavailability | ||
392 | 58 | |||
393 | 49 | FAQ | 59 | FAQ |
394 | 50 | === | 60 | === |
395 | 51 | 61 | ||
396 | @@ -65,7 +75,7 @@ | |||
397 | 65 | 75 | ||
398 | 66 | singlebox | 76 | singlebox |
399 | 67 | 3nodesec2 | 77 | 3nodesec2 |
401 | 68 | 78 | bugreport | |
402 | 69 | 79 | ||
403 | 70 | Percona XtraDB Cluster limitations | 80 | Percona XtraDB Cluster limitations |
404 | 71 | ================================== | 81 | ================================== |
405 | 72 | 82 | ||
406 | === modified file 'doc/source/installation.rst' | |||
407 | --- doc/source/installation.rst 2011-12-14 02:32:00 +0000 | |||
408 | +++ doc/source/installation.rst 2012-01-18 14:24:25 +0000 | |||
409 | @@ -61,9 +61,9 @@ | |||
410 | 61 | Install XtraBackup SST method | 61 | Install XtraBackup SST method |
411 | 62 | ============================== | 62 | ============================== |
412 | 63 | 63 | ||
416 | 64 | To use Percona XtraBackup for State Transfer method (copy snapsot of data between nodes) | 64 | To use Percona XtraBackup for State Transfer method (copy snapshot of data between nodes) |
417 | 65 | you can use the regular xtrabackup package with the script what supports galera information. | 65 | you can use the regular xtrabackup package with the script what supports Galera information. |
418 | 66 | You can take *innobackupex* script from source code `innobackupex <http://bazaar.launchpad.net/~percona-dev/percona-xtrabackup/galera-info/view/head:/innobackupex>`_ | 66 | You can take *innobackupex* script from source code `innobackupex <http://bazaar.launchpad.net/~percona-dev/percona-xtrabackup/galera-info/view/head:/innobackupex>`_. |
419 | 67 | 67 | ||
420 | 68 | To inform node to use xtrabackup you need to specify in my.cnf: :: | 68 | To inform node to use xtrabackup you need to specify in my.cnf: :: |
421 | 69 | 69 | ||
422 | 70 | 70 | ||
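The hunk stops just before the actual my.cnf lines, so as a hedged sketch of the usual xtrabackup SST settings (the credentials are placeholders, not values from the hidden lines):

    [mysqld]
    wsrep_sst_method=xtrabackup
    wsrep_sst_auth=root:rootpassword   # user:password the SST script uses on the donor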
423 | === modified file 'doc/source/installation/bin_distro_specific.rst' | |||
424 | --- doc/source/installation/bin_distro_specific.rst 2011-12-13 06:38:29 +0000 | |||
425 | +++ doc/source/installation/bin_distro_specific.rst 2012-01-18 14:24:25 +0000 | |||
426 | @@ -19,7 +19,7 @@ | |||
427 | 19 | cp ./usr/bin/tar4ibd /usr/bin | 19 | cp ./usr/bin/tar4ibd /usr/bin |
428 | 20 | cp ./usr/bin/innobackupex-1.5.1 /usr/bin | 20 | cp ./usr/bin/innobackupex-1.5.1 /usr/bin |
429 | 21 | 21 | ||
431 | 22 | * If you use a version prior to 1.6, the stock perl causes an issue with the backup scripts version detection. Edit :file:`/usr/bin/innobackupex-1.5.1`. Comment out the lines below as shown below :: | 22 | * If you use a version prior to 1.6, the stock perl causes an issue with the backup scripts version detection. Edit :file:`/usr/bin/innobackupex-1.5.1`. Comment out the lines below as shown below: :: |
432 | 23 | 23 | ||
433 | 24 | $perl_version = chr($required_perl_version[0]) | 24 | $perl_version = chr($required_perl_version[0]) |
434 | 25 | . chr($required_perl_version[1]) | 25 | . chr($required_perl_version[1]) |
435 | 26 | 26 | ||
436 | === modified file 'doc/source/installation/compiling_xtradb_cluster.rst' | |||
437 | --- doc/source/installation/compiling_xtradb_cluster.rst 2011-12-13 06:38:29 +0000 | |||
438 | +++ doc/source/installation/compiling_xtradb_cluster.rst 2012-01-18 14:24:25 +0000 | |||
439 | @@ -30,11 +30,12 @@ | |||
440 | 30 | Compiling | 30 | Compiling |
441 | 31 | ------------ | 31 | ------------ |
442 | 32 | 32 | ||
445 | 33 | The most esiest way to build binaries is to run | 33 | The most esiest way to build binaries is to run script: :: |
446 | 34 | script BUILD/compile-pentium64-wsrep | 34 | |
447 | 35 | BUILD/compile-pentium64-wsrep | ||
448 | 35 | 36 | ||
449 | 36 | If you feel confident to use cmake, you make compile with cmake adding | 37 | If you feel confident to use cmake, you make compile with cmake adding |
450 | 37 | -DWITH_WSREP=1 to parameters. | 38 | -DWITH_WSREP=1 to parameters. |
451 | 38 | 39 | ||
453 | 39 | Exampes how to build RPM and DEB packages you can find in packaging/percona directory in the source code | 40 | Examples how to build RPM and DEB packages you can find in packaging/percona directory in the source code. |
454 | 40 | 41 | ||
455 | 41 | 42 | ||
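For the cmake route mentioned above, a minimal invocation would be along these lines; apart from -DWITH_WSREP=1 the steps follow a standard MySQL 5.5 source build and are illustrative:

    cmake . -DWITH_WSREP=1
    make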
456 | === modified file 'doc/source/installation/yum_repo.rst' | |||
457 | --- doc/source/installation/yum_repo.rst 2012-01-10 02:33:47 +0000 | |||
458 | +++ doc/source/installation/yum_repo.rst 2012-01-18 14:24:25 +0000 | |||
459 | @@ -13,7 +13,7 @@ | |||
460 | 13 | 13 | ||
461 | 14 | $ rpm -Uhv http://repo.percona.com/testing/centos/6/os/noarch/percona-testing-0.0-1.noarch.rpm | 14 | $ rpm -Uhv http://repo.percona.com/testing/centos/6/os/noarch/percona-testing-0.0-1.noarch.rpm |
462 | 15 | 15 | ||
464 | 16 | You may want to install also Percona stable repository, which provides Percona-shared-compat rpm, needed to satisfy dependencies :: | 16 | You may want to install also Percona stable repository, which provides Percona-shared-compat rpm, needed to satisfy dependencies: :: |
465 | 17 | 17 | ||
466 | 18 | $ rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm | 18 | $ rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm |
467 | 19 | 19 | ||
468 | 20 | 20 | ||
469 | === modified file 'doc/source/intro.rst' | |||
470 | --- doc/source/intro.rst 2011-12-14 00:33:37 +0000 | |||
471 | +++ doc/source/intro.rst 2012-01-18 14:24:25 +0000 | |||
472 | @@ -4,37 +4,57 @@ | |||
473 | 4 | 4 | ||
474 | 5 | *Percona XtraDB Cluster* is open-source, free |MySQL| High Availability software | 5 | *Percona XtraDB Cluster* is open-source, free |MySQL| High Availability software |
475 | 6 | 6 | ||
476 | 7 | General introduction | ||
477 | 8 | ==================== | ||
478 | 9 | |||
479 | 10 | 1. The Cluster consists of Nodes. Recommended configuration is to have at least 3 nodes, but you can make it running with 2 nodes as well. | ||
480 | 11 | 2. Each Node is regular |MySQL| / |Percona Server| setup. The point is that you can convert your existing MySQL / Percona Server into Node and roll Cluster using it as a base. Or otherwise – you can detach Node from Cluster and use it as just a regular server. | ||
481 | 12 | 3. Each Node contains the full copy of data. That defines XtraDB Cluster behavior in many ways. And obviously there are benefits and drawbacks. | ||
482 | 13 | |||
483 | 14 | .. image:: _static/cluster-diagram1.png | ||
484 | 15 | |||
485 | 16 | Benefits of such approach: | ||
486 | 17 | * When you execute a query, it is executed locally on the node. All data is available locally, no need for remote access. | ||
487 | 18 | * No central management. You can loose any node at any point of time, and the cluster will continue to function. | ||
488 | 19 | * Good solution for scaling a read workload. You can put read queries to any of the nodes. | ||
489 | 20 | |||
490 | 21 | Drawbacks: | ||
491 | 22 | * Overhead of joining new node. The new node has to copy full dataset from one of existing nodes. If it is 100GB, it copies 100GB. | ||
492 | 23 | * This can’t be used as an effective write scaling solution. There might be some improvements in write throughput when you run write traffic to 2 nodes vs all traffic to 1 node, but you can't expect a lot. All writes still have to go on all nodes. | ||
493 | 24 | * You have several duplicates of data, for 3 nodes – 3 duplicates. | ||
494 | 25 | |||
495 | 7 | What is core difference Percona XtraDB Cluster from MySQL Replication ? | 26 | What is core difference Percona XtraDB Cluster from MySQL Replication ? |
496 | 8 | ======================================================================= | 27 | ======================================================================= |
497 | 9 | 28 | ||
499 | 10 | Let's take look into well know CAP theorem for Distributed systems. | 29 | Let's take look into the well known CAP theorem for Distributed systems. |
500 | 11 | Characteristics of Distributed systems: | 30 | Characteristics of Distributed systems: |
501 | 12 | 31 | ||
507 | 13 | C - Consistency (all your data is consistent on all nodes) | 32 | C - Consistency (all your data is consistent on all nodes), |
508 | 14 | 33 | ||
509 | 15 | A - Availability (your system is AVAILABLE to handle requests in case of failure of one or several nodes ) | 34 | A - Availability (your system is AVAILABLE to handle requests in case of failure of one or several nodes ), |
510 | 16 | 35 | ||
511 | 17 | P - Partitioning tolerance (in case of inter-node connection failure, each node is still available to handle requests) | 36 | P - Partitioning tolerance (in case of inter-node connection failure, each node is still available to handle requests). |
512 | 18 | 37 | ||
513 | 19 | 38 | ||
514 | 20 | CAP theorem says that each Distributed system can have only two out of these three. | 39 | CAP theorem says that each Distributed system can have only two out of these three. |
515 | 21 | 40 | ||
521 | 22 | MySQL replication has: Availability and Partitioning tolerance | 41 | MySQL replication has: Availability and Partitioning tolerance. |
522 | 23 | 42 | ||
523 | 24 | Percona XtraDB Cluster has: Consistency and Availability | 43 | Percona XtraDB Cluster has: Consistency and Availability. |
524 | 25 | 44 | ||
525 | 26 | That is MySQL replication does not guarantee Consistency of your data, while Percona XtraDB Cluster provides data Consistency. (And, yes, Percona XtraDB Cluster looses Partitioning tolerance property) | 45 | That is MySQL replication does not guarantee Consistency of your data, while Percona XtraDB Cluster provides data Consistency. (And yes, Percona XtraDB Cluster looses Partitioning tolerance property). |
526 | 27 | 46 | ||
527 | 28 | Components | 47 | Components |
528 | 29 | ========== | 48 | ========== |
529 | 30 | 49 | ||
530 | 31 | *Percona XtraDB Cluster* is based on `Percona Server with XtraDB <http://www.percona.com/software/percona-server/>`_ | 50 | *Percona XtraDB Cluster* is based on `Percona Server with XtraDB <http://www.percona.com/software/percona-server/>`_ |
532 | 32 | and includes `Write Set REPlication patches <https://launchpad.net/codership-mysql>`_. | 51 | and includes `Write Set Replication patches <https://launchpad.net/codership-mysql>`_. |
533 | 33 | It uses the `Galera library <https://launchpad.net/galera>`_, version 2.x, | 52 | It uses the `Galera library <https://launchpad.net/galera>`_, version 2.x, |
534 | 34 | a generic Synchronous Multi-Master replication plugin for transactional applications. | 53 | a generic Synchronous Multi-Master replication plugin for transactional applications. |
535 | 35 | 54 | ||
537 | 36 | Galera library is developed by `Codership Oy <http://www.codership.com/>`_ | 55 | Galera library is developed by `Codership Oy <http://www.codership.com/>`_. |
538 | 37 | 56 | ||
539 | 38 | Galera 2.x supports such new features as: | 57 | Galera 2.x supports such new features as: |
542 | 39 | * Incremental State Transfer, especially useful for WAN deployments | 58 | * Incremental State Transfer (|IST|), especially useful for WAN deployments, |
543 | 40 | * RSU, Rolling Schema Update. Schema change does not block operations against table | 59 | * RSU, Rolling Schema Update. Schema change does not block operations against table. |
544 | 60 | |||
545 | 41 | 61 | ||
546 | === modified file 'doc/source/limitation.rst' | |||
547 | --- doc/source/limitation.rst 2011-12-14 00:33:37 +0000 | |||
548 | +++ doc/source/limitation.rst 2012-01-18 14:24:25 +0000 | |||
549 | @@ -2,43 +2,29 @@ | |||
550 | 2 | Percona XtraDB Cluster Limitations | 2 | Percona XtraDB Cluster Limitations |
551 | 3 | ==================================== | 3 | ==================================== |
552 | 4 | 4 | ||
565 | 5 | There are some limitations which we should be aware of. Some of them will be eliminated later as product is improved; some are design limitations. | 5 | There are some limitations which you should be aware of. Some of them will be eliminated later as product is improved and some are design limitations. |
566 | 6 | 6 | ||
567 | 7 | - Currently replication works only with InnoDB storage engine. Any writes to | 7 | - Currently replication works only with |InnoDB| storage engine. Any writes to tables of other types, including system (mysql.*) tables, are not replicated. However, DDL statements are replicated in statement level, and changes to mysql.* tables will get replicated that way. So, you can safely issue: CREATE USER..., but issuing: INSERT INTO mysql.user..., will not be replicated. |
568 | 8 | tables of other types, including system (mysql.*) tables are not replicated. | 8 | |
569 | 9 | However, DDL statements are replicated in statement level, and changes | 9 | - DELETE operation is unsupported on tables without primary key. Also rows in tables without primary key may appear in different order on different nodes. As a result SELECT...LIMIT... may return slightly different sets. |
558 | 10 | to mysql.* tables will get replicated that way. | ||
559 | 11 | So, you can safely issue: CREATE USER..., | ||
560 | 12 | but issuing: INSERT INTO mysql.user..., will not be replicated. | ||
561 | 13 | |||
562 | 14 | - DELETE operation is unsupported on tables without primary key. Also rows in | ||
563 | 15 | tables without primary key may appear in different order on different nodes. | ||
564 | 16 | As a result SELECT...LIMIT... may return slightly different sets. | ||
570 | 17 | 10 | ||
571 | 18 | - Unsupported queries: | 11 | - Unsupported queries: |
572 | 19 | * LOCK/UNLOCK TABLES cannot be supported in multi-master setups. | 12 | * LOCK/UNLOCK TABLES cannot be supported in multi-master setups. |
573 | 20 | * lock functions (GET_LOCK(), RELEASE_LOCK()... ) | 13 | * lock functions (GET_LOCK(), RELEASE_LOCK()... ) |
574 | 21 | 14 | ||
590 | 22 | - Query log cannot be directed to table. If you enable query logging, | 15 | - Query log cannot be directed to table. If you enable query logging, you must forward the log to a file: log_output = FILE. Use general_log and general_log_file to choose query logging and the log file name. |
591 | 23 | you must forward the log to a file: | 16 | |
592 | 24 | log_output = FILE | 17 | - Maximum allowed transaction size is defined by wsrep_max_ws_rows and wsrep_max_ws_size. Anything bigger (e.g. huge LOAD DATA) will be rejected. |
593 | 25 | Use general_log and general_log_file to choose query logging and the | 18 | |
594 | 26 | log file name | 19 | - Due to cluster level optimistic concurrency control, transaction issuing COMMIT may still be aborted at that stage. There can be two transactions writing to same rows and committing in separate XtraDB Cluster nodes, and only one of the them can successfully commit. The failing one will be aborted. For cluster level aborts, XtraDB Cluster gives back deadlock error code: |
595 | 27 | 20 | (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)). | |
581 | 28 | - Maximum allowed transaction size is defined by wsrep_max_ws_rows and | ||
582 | 29 | wsrep_max_ws_size. Anything bigger (e.g. huge LOAD DATA) will be rejected. | ||
583 | 30 | |||
584 | 31 | - Due to cluster level optimistic concurrency control, transaction issuing | ||
585 | 32 | COMMIT may still be aborted at that stage. There can be two transactions. | ||
586 | 33 | writing to same rows and committing in separate XtraDB Cluster nodes, and only one | ||
587 | 34 | of the them can successfully commit. The failing one will be aborted. | ||
588 | 35 | For cluster level aborts, XtraDB Cluster gives back deadlock error. | ||
589 | 36 | code (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)). | ||
596 | 37 | 21 | ||
597 | 38 | - XA transactions can not be supported due to possible rollback on commit. | 22 | - XA transactions can not be supported due to possible rollback on commit. |
598 | 39 | 23 | ||
600 | 40 | - The write throughput of the whole cluster is limited by weakest node. If one node becomes slow, whole cluster is slow. If you have requirements for stable high performance, then it should be supported by corresponding hardware (10Gb network, SSD) | 24 | - The write throughput of the whole cluster is limited by weakest node. If one node becomes slow, whole cluster is slow. If you have requirements for stable high performance, then it should be supported by corresponding hardware (10Gb network, SSD). |
601 | 41 | 25 | ||
603 | 42 | - The minimal recommended size of cluster is 3 nodes | 26 | - The minimal recommended size of cluster is 3 nodes. |
604 | 43 | 27 | ||
605 | 44 | - DDL statements are problematic and may stall cluster. Later, the support of DDL will be improved, but will always require special treatment. | 28 | - DDL statements are problematic and may stall cluster. Later, the support of DDL will be improved, but will always require special treatment. |
606 | 29 | |||
607 | 30 | |||
608 | 45 | 31 | ||
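To make the COMMIT-time abort concrete, a hypothetical conflict between sessions on two different nodes; the table, values and prompts are illustrative, and the error is the standard MySQL ER_LOCK_DEADLOCK message referenced above:

    node1> BEGIN; UPDATE t SET val = 1 WHERE id = 1;
    node2> BEGIN; UPDATE t SET val = 2 WHERE id = 1;
    node1> COMMIT;    -- passes certification first
    node2> COMMIT;
    ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

An application that writes to several nodes should therefore be prepared to catch error 1213 and retry the transaction.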
609 | === modified file 'doc/source/resources.rst' | |||
610 | --- doc/source/resources.rst 2011-12-21 05:51:01 +0000 | |||
611 | +++ doc/source/resources.rst 2012-01-18 14:24:25 +0000 | |||
612 | @@ -1,3 +1,7 @@ | |||
613 | 1 | ========= | ||
614 | 2 | Resources | ||
615 | 3 | ========= | ||
616 | 4 | |||
617 | 1 | In general there are 4 resources that need to be different when you want to run several MySQL/Galera nodes on one host: | 5 | In general there are 4 resources that need to be different when you want to run several MySQL/Galera nodes on one host: |
618 | 2 | 6 | ||
619 | 3 | 1) data directory | 7 | 1) data directory |
620 | @@ -5,19 +9,18 @@ | |||
621 | 5 | 3) galera replication listen port and/or address | 9 | 3) galera replication listen port and/or address |
622 | 6 | 4) receive address for state snapshot transfer | 10 | 4) receive address for state snapshot transfer |
623 | 7 | 11 | ||
634 | 8 | 12 | and later incremental state transfer receive address will be added to the bunch. (I know, it is kinda a lot, but we don't see how it can be meaningfully reduced yet). | |
635 | 9 | and later incremental state transfer receive address will be added to the bunch. (I know, it is kinda a lot, but we don't see how it can be meaningfully reduced yet) | 13 | |
636 | 10 | 14 | The first two are the usual mysql stuff. | |
637 | 11 | The first two is the usual mysql stuff. | 15 | |
638 | 12 | 16 | You figured out the third. It is also possible to pass it via: :: | |
639 | 13 | You figured out the third. It is also possible to pass it via | 17 | |
640 | 14 | 18 | wsrep_provider_options="gmcast.listen_addr=tcp://127.0.0.1:5678" | |
641 | 15 | wsrep_provider_options="gmcast.listen_addr=tcp://127.0.0.1:5678" | 19 | |
642 | 16 | 20 | as most other Galera options. This may save you some extra typing. | |
633 | 17 | as most other galera options. This may save you some extra typing. | ||
643 | 18 | 21 | ||
644 | 19 | The fourth one is wsrep_sst_receive_address. This is the address at which the node will be listening for and receiving the state. Note that in galera cluster _joining_ nodes are waiting for connections from donors. It goes contrary to tradition and seems to confuse people time and again, but there are good reasons it was made like that. | 22 | The fourth one is wsrep_sst_receive_address. This is the address at which the node will be listening for and receiving the state. Note that in galera cluster _joining_ nodes are waiting for connections from donors. It goes contrary to tradition and seems to confuse people time and again, but there are good reasons it was made like that. |
645 | 20 | 23 | ||
646 | 21 | If you use mysqldump SST it should be the same as this mysql client connection address plus you need to set wsrep_sst_auth variable to hold user:password pair. The user should be privileged enough to read system tables from donor and create system tables on this node. For simplicity that could be just the root user. Note that it also means that you need to properly set up the privileges on the new node before attempting to join the cluster. | 24 | If you use mysqldump SST it should be the same as this mysql client connection address plus you need to set wsrep_sst_auth variable to hold user:password pair. The user should be privileged enough to read system tables from donor and create system tables on this node. For simplicity that could be just the root user. Note that it also means that you need to properly set up the privileges on the new node before attempting to join the cluster. |
647 | 22 | 25 | ||
649 | 23 | If you use rsync or xtrabackup SST, wsrep_sst_auth is not necessary unless your SST script makes use of it. wsrep_sst_address can be anything local (it may even be the same on all nodes provided you'll be starting them one at a time). | 26 | If you use rsync or |xtrabackup| SST, wsrep_sst_auth is not necessary unless your SST script makes use of it. wsrep_sst_address can be anything local (it may even be the same on all nodes provided you'll be starting them one at a time). |
650 | 24 | 27 | ||
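Putting the four per-node resources side by side, a sketch of how two nodes on one host might differ; the ports and paths are illustrative, loosely following the singlebox example below:

    # node 1 (/etc/my.4000.cnf)
    datadir=/data/bench/d1
    port=4000
    wsrep_provider_options="gmcast.listen_addr=tcp://127.0.0.1:4010"
    wsrep_sst_receive_address=127.0.0.1:4020

    # node 2 (/etc/my.5000.cnf)
    datadir=/data/bench/d2
    port=5000
    wsrep_provider_options="gmcast.listen_addr=tcp://127.0.0.1:5010"
    wsrep_sst_receive_address=127.0.0.1:5020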
651 | === modified file 'doc/source/singlebox.rst' | |||
652 | --- doc/source/singlebox.rst 2011-12-16 03:50:49 +0000 | |||
653 | +++ doc/source/singlebox.rst 2012-01-18 14:24:25 +0000 | |||
654 | @@ -1,64 +1,64 @@ | |||
655 | 1 | How to setup 3 node cluster on single box | 1 | How to setup 3 node cluster on single box |
656 | 2 | ========================================== | 2 | ========================================== |
657 | 3 | 3 | ||
659 | 4 | This is how-to setup 3-node cluster on the single physical box. | 4 | This is how to setup 3-node cluster on the single physical box. |
660 | 5 | 5 | ||
662 | 6 | Assume you installed Percona XtraDB Cluster from binary .tar.gz into directory | 6 | Assume you installed |Percona XtraDB Cluster| from binary .tar.gz into directory |
663 | 7 | 7 | ||
664 | 8 | /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6 | 8 | /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6 |
665 | 9 | 9 | ||
666 | 10 | 10 | ||
668 | 11 | Now we need to create couple my.cnf files and couple data directories. | 11 | Now we need to create couple my.cnf files and couple of data directories. |
669 | 12 | 12 | ||
671 | 13 | Assume we created (see the content of files at the end of document) | 13 | Assume we created (see the content of files at the end of document): |
672 | 14 | 14 | ||
673 | 15 | * /etc/my.4000.cnf | 15 | * /etc/my.4000.cnf |
674 | 16 | * /etc/my.5000.cnf | 16 | * /etc/my.5000.cnf |
675 | 17 | * /etc/my.6000.cnf | 17 | * /etc/my.6000.cnf |
676 | 18 | 18 | ||
678 | 19 | and data directories | 19 | and data directories: |
679 | 20 | 20 | ||
680 | 21 | * /data/bench/d1 | 21 | * /data/bench/d1 |
681 | 22 | * /data/bench/d2 | 22 | * /data/bench/d2 |
682 | 23 | * /data/bench/d3 | 23 | * /data/bench/d3 |
683 | 24 | 24 | ||
685 | 25 | and assume the local IP address is 10.11.12.205 | 25 | and assume the local IP address is 10.11.12.205. |
686 | 26 | 26 | ||
688 | 27 | then we should be able to start initial node as (from directory /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6):: | 27 | Then we should be able to start initial node as (from directory /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6): :: |
689 | 28 | 28 | ||
690 | 29 | bin/mysqld --defaults-file=/etc/my.4000.cnf | 29 | bin/mysqld --defaults-file=/etc/my.4000.cnf |
691 | 30 | 30 | ||
693 | 31 | Following output will let out know that node was started succsefully:: | 31 | Following output will let out know that node was started successfully: :: |
694 | 32 | 32 | ||
695 | 33 | 111215 19:01:49 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0) | 33 | 111215 19:01:49 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0) |
696 | 34 | 111215 19:01:49 [Note] WSREP: New cluster view: global state: 4c286ccc-2792-11e1-0800-94bd91e32efa:0, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 1 | 34 | 111215 19:01:49 [Note] WSREP: New cluster view: global state: 4c286ccc-2792-11e1-0800-94bd91e32efa:0, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 1 |
697 | 35 | 35 | ||
698 | 36 | 36 | ||
700 | 37 | And you can check used ports:: | 37 | And you can check used ports: :: |
701 | 38 | 38 | ||
702 | 39 | netstat -anp | grep mysqld | 39 | netstat -anp | grep mysqld |
703 | 40 | tcp 0 0 0.0.0.0:4000 0.0.0.0:* LISTEN 8218/mysqld | 40 | tcp 0 0 0.0.0.0:4000 0.0.0.0:* LISTEN 8218/mysqld |
704 | 41 | tcp 0 0 0.0.0.0:4010 0.0.0.0:* LISTEN 8218/mysqld | 41 | tcp 0 0 0.0.0.0:4010 0.0.0.0:* LISTEN 8218/mysqld |
705 | 42 | 42 | ||
706 | 43 | 43 | ||
708 | 44 | After first node, we start second and third:: | 44 | After first node, we start second and third: :: |
709 | 45 | 45 | ||
710 | 46 | bin/mysqld --defaults-file=/etc/my.5000.cnf | 46 | bin/mysqld --defaults-file=/etc/my.5000.cnf |
711 | 47 | bin/mysqld --defaults-file=/etc/my.6000.cnf | 47 | bin/mysqld --defaults-file=/etc/my.6000.cnf |
712 | 48 | 48 | ||
714 | 49 | Succesfull start will produce following output:: | 49 | Successful start will produce the following output: :: |
715 | 50 | 50 | ||
716 | 51 | 111215 19:22:26 [Note] WSREP: Shifting JOINER -> JOINED (TO: 2) | 51 | 111215 19:22:26 [Note] WSREP: Shifting JOINER -> JOINED (TO: 2) |
717 | 52 | 111215 19:22:26 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2) | 52 | 111215 19:22:26 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2) |
718 | 53 | 111215 19:22:26 [Note] WSREP: Synchronized with group, ready for connections | 53 | 111215 19:22:26 [Note] WSREP: Synchronized with group, ready for connections |
719 | 54 | 54 | ||
720 | 55 | 55 | ||
722 | 56 | Now you can connect to any node and create database, which will be automatically propagated to another nodes:: | 56 | Now you can connect to any node and create database, which will be automatically propagated to other nodes: :: |
723 | 57 | 57 | ||
724 | 58 | mysql -h127.0.0.1 -P5000 -e "CREATE DATABASE hello_peter" | 58 | mysql -h127.0.0.1 -P5000 -e "CREATE DATABASE hello_peter" |
725 | 59 | 59 | ||
726 | 60 | 60 | ||
728 | 61 | Configuration files (/etc/my.4000.cnf):: | 61 | Configuration files (/etc/my.4000.cnf): :: |
729 | 62 | 62 | ||
730 | 63 | /etc/my.4000.cnf | 63 | /etc/my.4000.cnf |
731 | 64 | 64 | ||
732 | @@ -90,7 +90,7 @@ | |||
733 | 90 | innodb_locks_unsafe_for_binlog=1 | 90 | innodb_locks_unsafe_for_binlog=1 |
734 | 91 | innodb_autoinc_lock_mode=2 | 91 | innodb_autoinc_lock_mode=2 |
735 | 92 | 92 | ||
737 | 93 | Configuration files (/etc/my.5000.cnf). PLEASE see difference in *wsrep_cluster_address* :: | 93 | Configuration files (/etc/my.5000.cnf). PLEASE see the difference in *wsrep_cluster_address*: :: |
738 | 94 | 94 | ||
739 | 95 | /etc/my.5000.cnf | 95 | /etc/my.5000.cnf |
740 | 96 | [mysqld] | 96 | [mysqld] |
741 | @@ -122,7 +122,7 @@ | |||
742 | 122 | innodb_autoinc_lock_mode=2 | 122 | innodb_autoinc_lock_mode=2 |
743 | 123 | 123 | ||
744 | 124 | 124 | ||
746 | 125 | Configuration files (/etc/my.6000.cnf). PLEASE see difference in *wsrep_cluster_address* :: | 125 | Configuration files (/etc/my.6000.cnf). PLEASE see the difference in *wsrep_cluster_address*: :: |
747 | 126 | 126 | ||
748 | 127 | /etc/my.6000.cnf | 127 | /etc/my.6000.cnf |
749 | 128 | 128 |