Merge lp:~hrvojem/percona-xtradb-cluster/bug918060 into lp:~percona-dev/percona-xtradb-cluster/5.5.17-22.1

Proposed by Hrvoje Matijakovic
Status: Merged
Approved by: Vadim Tkachenko
Approved revision: no longer in the source branch.
Merged at revision: 3693
Proposed branch: lp:~hrvojem/percona-xtradb-cluster/bug918060
Merge into: lp:~percona-dev/percona-xtradb-cluster/5.5.17-22.1
Diff against target: 748 lines (+195/-138)
15 files modified
doc/source/3nodesec2.rst (+14/-15)
doc/source/bugreport.rst (+4/-0)
doc/source/conf.py (+7/-1)
doc/source/faq.rst (+28/-33)
doc/source/features/highavailability.rst (+28/-0)
doc/source/glossary.rst (+13/-8)
doc/source/index.rst (+13/-3)
doc/source/installation.rst (+3/-3)
doc/source/installation/bin_distro_specific.rst (+1/-1)
doc/source/installation/compiling_xtradb_cluster.rst (+4/-3)
doc/source/installation/yum_repo.rst (+1/-1)
doc/source/intro.rst (+35/-15)
doc/source/limitation.rst (+15/-29)
doc/source/resources.rst (+14/-11)
doc/source/singlebox.rst (+15/-15)
To merge this branch: bzr merge lp:~hrvojem/percona-xtradb-cluster/bug918060
Reviewer: Vadim Tkachenko
Status: Approve
Review via email: mp+89042@code.launchpad.net
Revision history for this message
Vadim Tkachenko (vadim-tk) :
review: Approve

Preview Diff

1=== modified file 'doc/source/3nodesec2.rst'
2--- doc/source/3nodesec2.rst 2011-12-23 07:28:16 +0000
3+++ doc/source/3nodesec2.rst 2012-01-18 14:24:25 +0000
4@@ -1,39 +1,39 @@
5 How to setup 3 node cluster in EC2 enviroment
6 ==============================================
7
8-This is how-to setup 3-node cluster in EC2 enviroment.
9+This is how to set up a 3-node cluster in an EC2 environment.
10
11 Assume you are running *m1.xlarge* instances with OS *Red Hat Enterprise Linux 6.1 64-bit*.
12
13 Install XtraDB Cluster from RPM:
14
15-1. Install Percona's regular and testing repositories ::
16+1. Install Percona's regular and testing repositories: ::
17
18 rpm -Uhv http://repo.percona.com/testing/centos/6/os/noarch/percona-testing-0.0-1.noarch.rpm
19 rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
20
21-2. Install Percona XtraDB Cluster packages ::
22+2. Install Percona XtraDB Cluster packages: ::
23
24 yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client
25
26-3. Create data directories ::
27+3. Create data directories: ::
28
29 mkdir -p /mnt/data
30 mysql_install_db --datadir=/mnt/data
31
32-4. Stop firewall. Cluster requires couple TCP ports to operate. Easiest way ::
33+4. Stop the firewall. The cluster requires a couple of TCP ports to operate. The easiest way: ::
34
35 service iptables stop
36
37 If you want to open only specific ports, you need to open 3306, 4444, 4567, 4568 ports.
38-For example for 4567 port (substitute 192.168.0.1 by your IP) ::
39+For example, for port 4567 (substitute 192.168.0.1 with your IP): ::
40
41 iptables -A INPUT -i eth0 -p tcp -m tcp --source 192.168.0.1/24 --dport 4567 -j ACCEPT
42
43
44 5. Create /etc/my.cnf files.
45
46-On the first node (assume IP 10.93.46.58) ::
47+On the first node (assume IP 10.93.46.58): ::
48
49 [mysqld]
50 datadir=/mnt/data
51@@ -54,7 +54,7 @@
52 innodb_autoinc_lock_mode=2
53
54
55-On the second node ::
56+On the second node: ::
57
58 [mysqld]
59 datadir=/mnt/data
60@@ -74,33 +74,32 @@
61 innodb_locks_unsafe_for_binlog=1
62 innodb_autoinc_lock_mode=2
63
64-On the third (and following nodes) config is similar, with following change ::
65+On the third (and following) nodes the config is similar, with the following change: ::
66
67 wsrep_node_name=node3
68
69 6. Start mysqld
70
71-On the first node ::
72+On the first node: ::
73
74 /usr/sbin/mysqld
75 or
76 mysqld_safe
77
78-You should be able to see in console (or in error-log file) ::
79+You should be able to see in the console (or in the error log file): ::
80
81 111216 0:16:42 [Note] /usr/sbin/mysqld: ready for connections.
82 Version: '5.5.17' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona XtraDB Cluster (GPL), Release alpha22.1, Revision 3673 wsrep_22.3.r3673
83 111216 0:16:42 [Note] WSREP: Assign initial position for certification: 0, protocol version: 1
84 111216 0:16:42 [Note] WSREP: Synchronized with group, ready for connections
85
86-On the second (and following nodes) ::
87-
88+On the second (and following nodes): ::
89
90 /usr/sbin/mysqld
91 or
92 mysqld_safe
93
94-You should be able to see in console (or in error-log file) ::
95+You should be able to see in the console (or in the error log file): ::
96
97 111216 0:21:39 [Note] WSREP: Flow-control interval: [12, 23]
98 111216 0:21:39 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 0)
99@@ -141,7 +140,7 @@
100
101 When all nodes are in SYNCED stage your cluster is ready!
102
103-7. Connect to database on any node and create database ::
104+7. Connect to database on any node and create database: ::
105
106 mysql
107 > CREATE DATABASE hello_tom;
108
109=== added file 'doc/source/_static/cluster-diagram1.png'
110Binary files doc/source/_static/cluster-diagram1.png 1970-01-01 00:00:00 +0000 and doc/source/_static/cluster-diagram1.png 2012-01-18 14:24:25 +0000 differ
111=== added file 'doc/source/bugreport.rst'
112--- doc/source/bugreport.rst 1970-01-01 00:00:00 +0000
113+++ doc/source/bugreport.rst 2012-01-18 14:24:25 +0000
114@@ -0,0 +1,4 @@
115+How to Report Bugs
116+==================
117+
118+All bugs can be reported on `Launchpad <https://bugs.launchpad.net/percona-xtradb-cluster/+filebug>`_. Please note that error.log files from **all** the nodes need to be submitted.
119
120=== modified file 'doc/source/conf.py'
121--- doc/source/conf.py 2011-12-13 16:28:28 +0000
122+++ doc/source/conf.py 2012-01-18 14:24:25 +0000
123@@ -44,7 +44,7 @@
124
125 # General information about the project.
126 project = u'Percona XtraDB Cluster'
127-copyright = u'2011, Percona Inc'
128+copyright = u'2012, Percona Inc'
129
130 # The version info for the project you're documenting, acts as replacement for
131 # |version| and |release|, also used in various other places throughout the
132@@ -96,6 +96,8 @@
133
134 .. |XtraDB| replace:: :term:`XtraDB`
135
136+.. |IST| replace:: :term:`IST`
137+
138 .. |XtraDB Cluster| replace:: :term:`XtraDB Cluster`
139
140 .. |Percona XtraDB Cluster| replace:: :term:`Percona XtraDB Cluster`
141@@ -104,6 +106,10 @@
142
143 .. |MyISAM| replace:: :term:`MyISAM`
144
145+.. |split brain| replace:: :term:`split brain`
146+
147+.. |.frm| replace:: :term:`.frm`
148+
149 .. |LSN| replace:: :term:`LSN`
150
151 .. |XtraBackup| replace:: *XtraBackup*
152
153=== modified file 'doc/source/faq.rst'
154--- doc/source/faq.rst 2012-01-14 17:24:09 +0000
155+++ doc/source/faq.rst 2012-01-18 14:24:25 +0000
156@@ -6,64 +6,65 @@
157 ========================================================
158 A: For auto-increment particular, Cluster changes auto_increment_offset
159 for each new node.
160-In the single node workload, locking handled by usual way how InnoDB handles locks.
161+In the single node workload, locking is handled in the usual way that |InnoDB| handles locks.
162 In case of write load on several nodes, Cluster uses optimistic locking (http://en.wikipedia.org/wiki/Optimistic_concurrency_control) and application may receive lock error in the response on COMMIT query.
163
164 Q: What if one of the nodes crashes and innodb recovery roll back some transactions?
165 =====================================================================================
166 A: When the node crashes, after the restart it will copy whole dataset from another node
167-(if there were changes to data since crash)
168+(if there were changes to the data since the crash).
169
170 Q: Is there a chance to have different table structure on the nodes?
171 =====================================================================
172-what I mean is like having 4nodes, 4 tables like sessions_a, sessions_b, sessions_c and sessions_d and have each only on one of the nodes?
173+What I mean is like having 4 nodes, 4 tables like sessions_a, sessions_b, sessions_c and sessions_d and have each only on one of the nodes?
174
175 A: Not at the moment for InnoDB tables. But it will work for MEMORY tables.
176
177-Q: What if a node fail and/or what if there is a network issue between them
178-============================================================================
179+Q: What if a node fail and/or what if there is a network issue between them?
180+=============================================================================
181 A: Then Quorum mechanism in XtraDB Cluster will decide what nodes can accept traffic
182 and will shutdown nodes that not belong to quorum. Later when the failure is fixed,
183 the nodes will need to copy data from working cluster.
184
185 Q: How would it handle split brain?
186 ====================================
187-A: It would not handle it. The |split brain| is hard stop, XtraDB Cluster can't resolve it.
188+A: It would not handle it. The |split brain| is a hard stop, |XtraDB Cluster| can't resolve it.
189 That's why the minimal recommendation is to have 3 nodes.
190 However there is possibility to allow a node to handle the traffic, option is: ::
191
192 wsrep_provider_options="pc.ignore_sb = yes"
193
194-Q: I have a two nodes setup. When node1 fails, node2 does not accept commands, why ?
195-=====================================================================================
196+Q: I have a two-node setup. When node1 fails, node2 does not accept commands, why?
197+====================================================================================
198 A: This is expected behaviour, to prevent |split brain|. See previous question.
199
200-Q: What tcp ports are used by Percona XtraDB Cluster ?
201+Q: What tcp ports are used by Percona XtraDB Cluster?
202 ======================================================
203 A: You may need to open up to 4 ports if you are using firewall.
204
205- * Regular MySQL port, default 3306
206- * Port for group communication, default 4567. It can be changed by the option: ::
207-
208- wsrep_provider_options = "gmcast.listen_addr=tcp://0.0.0.0:4010; "
209-
210- * Port for State Transfer, default 4444. It can be changed by the option: ::
211-
212- wsrep_sst_receive_address=10.11.12.205:5555
213-
214- * Port for Incremental State Transfer, default port for group communication + 1 (4568). It can be changed by the option: ::
215-
216- wsrep_provider_options = "ist.recv_addr=10.11.12.206:7777; "
217-
218-Q: Is there "async" mode for Cluster or only "sync" commits are supported ?
219+1. Regular MySQL port, default 3306.
220+
221+2. Port for group communication, default 4567. It can be changed by the option: ::
222+
223+ wsrep_provider_options ="gmcast.listen_addr=tcp://0.0.0.0:4010; "
224+
225+3. Port for State Transfer, default 4444. It can be changed by the option: ::
226+
227+ wsrep_sst_receive_address=10.11.12.205:5555
228+
229+4. Port for Incremental State Transfer, default port for group communication + 1 (4568). It can be changed by the option: ::
230+
231+ wsrep_provider_options = "ist.recv_addr=10.11.12.206:7777; "
232+
233+Q: Is there "async" mode for Cluster or only "sync" commits are supported?
234 ===========================================================================
235-A: There is no "async" mode, all commits are syncronious on all nodes.
236-Or, to be fully correct, the commits are "virtually" syncronious. Which
237-means that transaction should pass "certification" on nodes, not physicall commit.
238+A: There is no "async" mode, all commits are synchronous on all nodes.
239+Or, to be fully correct, the commits are "virtually" synchronous, which
240+means that a transaction has to pass "certification" on the nodes, not a physical commit.
241 "Certification" means a guarantee that transaction does not have conflicts with
242 another transactions on corresponding node.
243
244-Q: Does it work with regular MySQL replication ?
245+Q: Does it work with regular MySQL replication?
246 ================================================
247 A: Yes. On the node you are going to use as master, you should enable log-bin and log-slave-update options.
248
249@@ -73,9 +74,3 @@
250
251 echo 0 > /selinux/enforce
252
253-
254-
255-
256-
257-
258-
259
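
The last FAQ entry above mentions enabling log-bin and log-slave-update on the node that will act as a regular replication master. A minimal sketch of the extra my.cnf lines might look like this (the log-slave-updates spelling, the server-id value and the log name are assumptions):

    # Append binary-log settings to the node that will serve as an asynchronous replication master
    cat >> /etc/my.cnf <<'EOF'
    log-bin=mysql-bin
    log-slave-updates
    server-id=1
    EOF
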
260=== added directory 'doc/source/features'
261=== added file 'doc/source/features/highavailability.rst'
262--- doc/source/features/highavailability.rst 1970-01-01 00:00:00 +0000
263+++ doc/source/features/highavailability.rst 2012-01-18 14:24:25 +0000
264@@ -0,0 +1,28 @@
265+High Availability
266+=================
267+
268+Basic setup: you run a 3-node setup.
269+
270+The |Percona XtraDB Cluster| will continue to function when you take any of the nodes down.
271+At any point in time you can shut down any Node to perform maintenance or make
272+configuration changes. Even in unplanned situations like a Node crash or the Node
273+becoming unreachable over the network, the Cluster will continue to work, and you'll be able
274+to run queries on the working nodes.
275+
276+The biggest question is what will happen when the Node rejoins the cluster and
277+there were changes to the data while the node was down. Let's look at this in detail.
278+
279+There are two ways that a Node may use when it joins the cluster: State Snapshot Transfer
280+(SST) and Incremental State Transfer (IST).
281+
282+* SST is the full copy of data from one node to another. It's used when a new node joins the cluster and has to transfer data from an existing node. There are three methods of SST available in Percona XtraDB Cluster: :program:`mysqldump`, :program:`rsync` and :program:`xtrabackup` (Percona |XtraBackup| with support of XtraDB Cluster will be released soon, so far you need to use our `source code repository <http://www.percona.com/doc/percona-xtrabackup/installation/compiling_xtrabackup.html>`_). The downside of `mysqldump` and `rsync` is that your cluster becomes READ-ONLY for the time it takes to copy data from one node to another (SST applies the :command:`FLUSH TABLES WITH READ LOCK` command). Xtrabackup SST does not require a :command:`READ LOCK` for the full time, only for syncing the |.frm| files (the same as with a regular backup).
283+
284+* Even with that, SST may be intrusive, which is why there is the IST mechanism. If you put your node down for a short period of time and then start it, the node is able to fetch only the changes made during the period it was down. This is done using a caching mechanism on the nodes. Each node contains a cache, a ring-buffer (the size is configurable), of the last N changes, and the node is able to transfer part of this cache. Obviously, IST can be done only if the amount of changes needed to be transferred is less than N. If it exceeds N, then the joining node has to perform SST.
285+
286+You can monitor the current state of a Node by using:
287+
288+.. code-block:: mysql
289+
290+ SHOW STATUS LIKE 'wsrep_local_state_comment';
291+
292+When it is ‘Synced (6)’, the node is ready to handle traffic.
293
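
To build on the monitoring query shown above, a small shell loop such as the following could be used to wait until a restarted node has finished SST/IST and reports Synced (local root access and the 5-second polling interval are assumptions):

    # Poll the wsrep state until the node reports Synced
    until mysql -u root -N -e "SHOW STATUS LIKE 'wsrep_local_state_comment'" | grep -q Synced; do
        echo "node not synced yet, waiting..."
        sleep 5
    done
    echo "node is Synced and ready to handle traffic"
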
294=== modified file 'doc/source/glossary.rst'
295--- doc/source/glossary.rst 2012-01-11 17:47:46 +0000
296+++ doc/source/glossary.rst 2012-01-18 14:24:25 +0000
297@@ -7,30 +7,35 @@
298 LSN
299 Each InnoDB page (usually 16kb in size) contains a log sequence number, or LSN. The LSN is the system version number for the entire database. Each page's LSN shows how recently it was changed.
300
301-
302 InnoDB
303- Storage engine which provides ACID-compilant transactions and foreing key support, among others improvements over :term:`MyISAM`. It is the default engine for |MySQL| as of the 5.5 series.
304+ Storage engine which provides ACID-compliant transactions and foreign key support, among others improvements over :term:`MyISAM`. It is the default engine for |MySQL| as of the 5.5 series.
305
306 MyISAM
307 Previous default storage engine for |MySQL| for versions prior to 5.5. It doesn't fully support transactions but in some scenarios may be faster than :term:`InnoDB`. Each table is stored on disk in 3 files: :term:`.frm`, :term:`.MYD`, :term:`.MYI`
308+
309+ IST
310+ Incremental State Transfer. Functionality which, instead of a whole state snapshot, allows a node to catch up with the group by receiving the missing writesets, but only if the writesets are still in the donor's writeset cache.
311+
312+ XtraBackup
313+ *Percona XtraBackup* is an open-source hot backup utility for |MySQL|-based servers that doesn’t lock your database during the backup.
314
315 XtraDB
316 *Percona XtraDB* is an enhanced version of the InnoDB storage engine, designed to better scale on modern hardware, and including a variety of other features useful in high performance environments. It is fully backwards compatible, and so can be used as a drop-in replacement for standard InnoDB. More information `here <http://www.percona.com/docs/wiki/Percona-XtraDB:start>`_ .
317
318 XtraDB Cluster
319- *Percona XtraDB Cluster* is a high availablity solution for MySQL
320+ *Percona XtraDB Cluster* is a high availability solution for MySQL
321
322 Percona XtraDB Cluster
323- *Percona XtraDB Cluster* is a high availablity solution for MySQL
324+ *Percona XtraDB Cluster* is a high availability solution for MySQL
325
326 my.cnf
327- This file refers to the database server's main configuration file. Most linux distributions place it as :file:`/etc/mysql/my.cnf`, but the location and name depends on the particular installation. Note that this is not the only way of configuring the server, some systems does not have one even and rely on the command options to start the server and its defaults values.
328+ This file refers to the database server's main configuration file. Most Linux distributions place it as :file:`/etc/mysql/my.cnf`, but the location and name depends on the particular installation. Note that this is not the only way of configuring the server, some systems does not have one even and rely on the command options to start the server and its defaults values.
329
330 datadir
331 The directory in which the database server stores its databases. Most Linux distribution use :file:`/var/lib/mysql` by default.
332
333 ibdata
334- Default prefix for tablespace files, e.g. :file:`ibdata1` is a 10MB autoextensible file that |MySQL| creates for the shared tablespace by default.
335+ Default prefix for tablespace files, e.g. :file:`ibdata1` is a 10MB autoextendable file that |MySQL| creates for the shared tablespace by default.
336
337 innodb_file_per_table
338 InnoDB option to use separate .ibd files for each table.
339@@ -54,10 +59,10 @@
340 Each table using the :program:`MERGE` storage engine, besides of a :term:`.frm` file, will have :term:`.MRG` file containing the names of the |MyISAM| tables associated with it.
341
342 .TRG
343- File containing the TRiGgers associated to a table, e.g. `:file:`mytable.TRG`. With the :term:`.TRN` file, they represent all the Trigger definitions.
344+ File containing the triggers associated to a table, e.g. `:file:`mytable.TRG`. With the :term:`.TRN` file, they represent all the trigger definitions.
345
346 .TRN
347- File containing the TRiggers' Names associated to a table, e.g. `:file:`mytable.TRN`. With the :term:`.TRG` file, they represent all the Trigger definitions.
348+ File containing the triggers' Names associated to a table, e.g. `:file:`mytable.TRN`. With the :term:`.TRG` file, they represent all the trigger definitions.
349
350 .ARM
351 Each table with the :program:`Archive Storage Engine` has ``.ARM`` file which contains the metadata of it.
352
353=== modified file 'doc/source/index.rst'
354--- doc/source/index.rst 2012-01-11 16:46:28 +0000
355+++ doc/source/index.rst 2012-01-18 14:24:25 +0000
356@@ -12,13 +12,13 @@
357
358 * Synchronous replication. Transaction either commited on all nodes or none.
359
360- * Multi-master replication. You can write to any node.
361+ * Multi-master replication. You can write to any node.
362
363 * Parallel applying events on slave. Real “parallel replication”.
364
365 * Automatic node provisioning.
366
367- * Data consistency. No more unsyncronised slaves.
368+ * Data consistency. No more unsynchronized slaves.
369
370 Percona XtraDB Cluster is fully compatible with MySQL or Percona Server in the following meaning:
371
372@@ -35,6 +35,7 @@
373 :glob:
374
375 intro
376+ resources
377
378 Installation
379 ============
380@@ -46,6 +47,15 @@
381 installation
382 installation/compiling_xtradb_cluster
383
384+Features
385+========
386+
387+.. toctree::
388+ :maxdepth: 1
389+ :glob:
390+
391+ features/highavailability
392+
393 FAQ
394 ===
395
396@@ -65,7 +75,7 @@
397
398 singlebox
399 3nodesec2
400-
401+ bugreport
402
403 Percona XtraDB Cluster limitations
404 ==================================
405
406=== modified file 'doc/source/installation.rst'
407--- doc/source/installation.rst 2011-12-14 02:32:00 +0000
408+++ doc/source/installation.rst 2012-01-18 14:24:25 +0000
409@@ -61,9 +61,9 @@
410 Install XtraBackup SST method
411 ==============================
412
413-To use Percona XtraBackup for State Transfer method (copy snapsot of data between nodes)
414-you can use the regular xtrabackup package with the script what supports galera information.
415-You can take *innobackupex* script from source code `innobackupex <http://bazaar.launchpad.net/~percona-dev/percona-xtrabackup/galera-info/view/head:/innobackupex>`_
416+To use Percona XtraBackup as the State Transfer method (copying a snapshot of data between nodes)
417+you can use the regular xtrabackup package with a script that supports Galera information.
418+You can take the *innobackupex* script from the source code: `innobackupex <http://bazaar.launchpad.net/~percona-dev/percona-xtrabackup/galera-info/view/head:/innobackupex>`_.
419
420 To inform node to use xtrabackup you need to specify in my.cnf: ::
421
422
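
The hunk above ends before the actual my.cnf lines, so as a sketch only, the settings would typically be along these lines (wsrep_sst_method and wsrep_sst_auth are assumed to be the relevant variable names, and the credentials are placeholders):

    # Tell the node to use the xtrabackup SST method (values are illustrative)
    cat >> /etc/my.cnf <<'EOF'
    wsrep_sst_method=xtrabackup
    wsrep_sst_auth=root:rootpass
    EOF
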
423=== modified file 'doc/source/installation/bin_distro_specific.rst'
424--- doc/source/installation/bin_distro_specific.rst 2011-12-13 06:38:29 +0000
425+++ doc/source/installation/bin_distro_specific.rst 2012-01-18 14:24:25 +0000
426@@ -19,7 +19,7 @@
427 cp ./usr/bin/tar4ibd /usr/bin
428 cp ./usr/bin/innobackupex-1.5.1 /usr/bin
429
430-* If you use a version prior to 1.6, the stock perl causes an issue with the backup scripts version detection. Edit :file:`/usr/bin/innobackupex-1.5.1`. Comment out the lines below as shown below ::
431+* If you use a version prior to 1.6, the stock perl causes an issue with the backup script's version detection. Edit :file:`/usr/bin/innobackupex-1.5.1` and comment out the lines as shown below: ::
432
433 $perl_version = chr($required_perl_version[0])
434 . chr($required_perl_version[1])
435
436=== modified file 'doc/source/installation/compiling_xtradb_cluster.rst'
437--- doc/source/installation/compiling_xtradb_cluster.rst 2011-12-13 06:38:29 +0000
438+++ doc/source/installation/compiling_xtradb_cluster.rst 2012-01-18 14:24:25 +0000
439@@ -30,11 +30,12 @@
440 Compiling
441 ------------
442
443-The most esiest way to build binaries is to run
444-script BUILD/compile-pentium64-wsrep
445+The easiest way to build binaries is to run the script: ::
446+
447+ BUILD/compile-pentium64-wsrep
448
449 If you feel confident to use cmake, you make compile with cmake adding
450 -DWITH_WSREP=1 to parameters.
451
452-Exampes how to build RPM and DEB packages you can find in packaging/percona directory in the source code
453+Examples of how to build RPM and DEB packages can be found in the packaging/percona directory in the source code.
454
455
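
For the cmake path mentioned above, a minimal sketch of the build might look like the following (the out-of-source build directory and the -j4 parallelism are assumptions; only -DWITH_WSREP=1 comes from the text above):

    # Configure and build with the wsrep patches enabled
    mkdir -p build && cd build
    cmake .. -DWITH_WSREP=1
    make -j4
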
456=== modified file 'doc/source/installation/yum_repo.rst'
457--- doc/source/installation/yum_repo.rst 2012-01-10 02:33:47 +0000
458+++ doc/source/installation/yum_repo.rst 2012-01-18 14:24:25 +0000
459@@ -13,7 +13,7 @@
460
461 $ rpm -Uhv http://repo.percona.com/testing/centos/6/os/noarch/percona-testing-0.0-1.noarch.rpm
462
463-You may want to install also Percona stable repository, which provides Percona-shared-compat rpm, needed to satisfy dependencies ::
464+You may also want to install the Percona stable repository, which provides the Percona-shared-compat rpm needed to satisfy dependencies: ::
465
466 $ rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
467
468
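
Once both repositories are in place, the cluster packages can be pulled in with yum; the package names below are the same ones used in the EC2 how-to earlier in this branch:

    # Install the server and client packages from the Percona repositories
    yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client
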
469=== modified file 'doc/source/intro.rst'
470--- doc/source/intro.rst 2011-12-14 00:33:37 +0000
471+++ doc/source/intro.rst 2012-01-18 14:24:25 +0000
472@@ -4,37 +4,57 @@
473
474 *Percona XtraDB Cluster* is open-source, free |MySQL| High Availability software
475
476+General introduction
477+====================
478+
479+1. The Cluster consists of Nodes. The recommended configuration is to have at least 3 nodes, but you can run it with 2 nodes as well.
480+2. Each Node is a regular |MySQL| / |Percona Server| setup. The point is that you can convert your existing MySQL / Percona Server into a Node and roll out a Cluster using it as a base. Or the other way around – you can detach a Node from the Cluster and use it as just a regular server.
481+3. Each Node contains the full copy of the data. That defines XtraDB Cluster behavior in many ways. And obviously there are benefits and drawbacks.
482+
483+.. image:: _static/cluster-diagram1.png
484+
485+Benefits of such an approach:
486+ * When you execute a query, it is executed locally on the node. All data is available locally, no need for remote access.
487+ * No central management. You can lose any node at any point of time, and the cluster will continue to function.
488+ * Good solution for scaling a read workload. You can put read queries to any of the nodes.
489+
490+Drawbacks:
491+ * Overhead of joining a new node. The new node has to copy the full dataset from one of the existing nodes. If it is 100GB, it copies 100GB.
492+ * This can’t be used as an effective write scaling solution. There might be some improvements in write throughput when you run write traffic to 2 nodes vs all traffic to 1 node, but you can't expect a lot. All writes still have to go on all nodes.
493+ * You have several duplicates of data, for 3 nodes – 3 duplicates.
494+
495 What is core difference Percona XtraDB Cluster from MySQL Replication ?
496 =======================================================================
497
498-Let's take look into well know CAP theorem for Distributed systems.
499+Let's take a look at the well-known CAP theorem for Distributed systems.
500 Characteristics of Distributed systems:
501
502- C - Consistency (all your data is consistent on all nodes)
503-
504- A - Availability (your system is AVAILABLE to handle requests in case of failure of one or several nodes )
505-
506- P - Partitioning tolerance (in case of inter-node connection failure, each node is still available to handle requests)
507+ C - Consistency (all your data is consistent on all nodes),
508+
509+ A - Availability (your system is AVAILABLE to handle requests in case of failure of one or several nodes ),
510+
511+ P - Partitioning tolerance (in case of inter-node connection failure, each node is still available to handle requests).
512
513
514 CAP theorem says that each Distributed system can have only two out of these three.
515
516-MySQL replication has: Availability and Partitioning tolerance
517-
518-Percona XtraDB Cluster has: Consistency and Availability
519-
520-That is MySQL replication does not guarantee Consistency of your data, while Percona XtraDB Cluster provides data Consistency. (And, yes, Percona XtraDB Cluster looses Partitioning tolerance property)
521+MySQL replication has: Availability and Partitioning tolerance.
522+
523+Percona XtraDB Cluster has: Consistency and Availability.
524+
525+That is, MySQL replication does not guarantee Consistency of your data, while Percona XtraDB Cluster provides data Consistency. (And yes, Percona XtraDB Cluster loses the Partitioning tolerance property).
526
527 Components
528 ==========
529
530 *Percona XtraDB Cluster* is based on `Percona Server with XtraDB <http://www.percona.com/software/percona-server/>`_
531-and includes `Write Set REPlication patches <https://launchpad.net/codership-mysql>`_.
532+and includes `Write Set Replication patches <https://launchpad.net/codership-mysql>`_.
533 It uses the `Galera library <https://launchpad.net/galera>`_, version 2.x,
534 a generic Synchronous Multi-Master replication plugin for transactional applications.
535
536-Galera library is developed by `Codership Oy <http://www.codership.com/>`_
537+The Galera library is developed by `Codership Oy <http://www.codership.com/>`_.
538
539 Galera 2.x supports such new features as:
540- * Incremental State Transfer, especially useful for WAN deployments
541- * RSU, Rolling Schema Update. Schema change does not block operations against table
542+ * Incremental State Transfer (|IST|), especially useful for WAN deployments,
543+ * RSU, Rolling Schema Update. A schema change does not block operations against the table.
544+
545
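
To make the RSU feature listed above a bit more concrete, a hedged sketch of a rolling schema change on a single node might look like this (the wsrep_OSU_method variable and its RSU/TOI values are assumed to be available in this wsrep build, and test.t1 is a hypothetical table):

    # Apply an ALTER locally on this node only, then switch back to the default method; repeat on each node in turn
    mysql -u root -e "SET SESSION wsrep_OSU_method='RSU';
                      ALTER TABLE test.t1 ADD COLUMN c2 INT;
                      SET SESSION wsrep_OSU_method='TOI';"
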
546=== modified file 'doc/source/limitation.rst'
547--- doc/source/limitation.rst 2011-12-14 00:33:37 +0000
548+++ doc/source/limitation.rst 2012-01-18 14:24:25 +0000
549@@ -2,43 +2,29 @@
550 Percona XtraDB Cluster Limitations
551 ====================================
552
553-There are some limitations which we should be aware of. Some of them will be eliminated later as product is improved; some are design limitations.
554-
555- - Currently replication works only with InnoDB storage engine. Any writes to
556- tables of other types, including system (mysql.*) tables are not replicated.
557- However, DDL statements are replicated in statement level, and changes
558- to mysql.* tables will get replicated that way.
559- So, you can safely issue: CREATE USER...,
560- but issuing: INSERT INTO mysql.user..., will not be replicated.
561-
562- - DELETE operation is unsupported on tables without primary key. Also rows in
563- tables without primary key may appear in different order on different nodes.
564- As a result SELECT...LIMIT... may return slightly different sets.
565+There are some limitations which you should be aware of. Some of them will be eliminated later as the product is improved and some are design limitations.
566+
567+ - Currently replication works only with the |InnoDB| storage engine. Any writes to tables of other types, including system (mysql.*) tables, are not replicated. However, DDL statements are replicated at the statement level, and changes to mysql.* tables will get replicated that way. So, you can safely issue: CREATE USER..., but issuing: INSERT INTO mysql.user..., will not be replicated.
568+
569+ - The DELETE operation is unsupported on tables without a primary key. Also, rows in tables without a primary key may appear in a different order on different nodes. As a result, SELECT...LIMIT... may return slightly different sets.
570
571 - Unsupported queries:
572 * LOCK/UNLOCK TABLES cannot be supported in multi-master setups.
573 * lock functions (GET_LOCK(), RELEASE_LOCK()... )
574
575- - Query log cannot be directed to table. If you enable query logging,
576- you must forward the log to a file:
577- log_output = FILE
578- Use general_log and general_log_file to choose query logging and the
579- log file name
580-
581- - Maximum allowed transaction size is defined by wsrep_max_ws_rows and
582- wsrep_max_ws_size. Anything bigger (e.g. huge LOAD DATA) will be rejected.
583-
584- - Due to cluster level optimistic concurrency control, transaction issuing
585- COMMIT may still be aborted at that stage. There can be two transactions.
586- writing to same rows and committing in separate XtraDB Cluster nodes, and only one
587- of the them can successfully commit. The failing one will be aborted.
588- For cluster level aborts, XtraDB Cluster gives back deadlock error.
589- code (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).
590+ - Query log cannot be directed to table. If you enable query logging, you must forward the log to a file: log_output = FILE. Use general_log and general_log_file to choose query logging and the log file name.
591+
592+ - Maximum allowed transaction size is defined by wsrep_max_ws_rows and wsrep_max_ws_size. Anything bigger (e.g. huge LOAD DATA) will be rejected.
593+
594+ - Due to cluster-level optimistic concurrency control, a transaction issuing COMMIT may still be aborted at that stage. There can be two transactions writing to the same rows and committing on separate XtraDB Cluster nodes, and only one of them can successfully commit. The failing one will be aborted. For cluster-level aborts, XtraDB Cluster gives back a deadlock error code:
595+ (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).
596
597 - XA transactions can not be supported due to possible rollback on commit.
598
599- - The write throughput of the whole cluster is limited by weakest node. If one node becomes slow, whole cluster is slow. If you have requirements for stable high performance, then it should be supported by corresponding hardware (10Gb network, SSD)
600+ - The write throughput of the whole cluster is limited by the weakest node. If one node becomes slow, the whole cluster is slow. If you have requirements for stable high performance, then it should be supported by corresponding hardware (10Gb network, SSD).
601
602- - The minimal recommended size of cluster is 3 nodes
603+ - The minimal recommended size of a cluster is 3 nodes.
604
605 - DDL statements are problematic and may stall cluster. Later, the support of DDL will be improved, but will always require special treatment.
606+
607+
608
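
Since a COMMIT can be rejected with the deadlock error 1213 described above, applications are expected to retry. A rough shell-level sketch of such a retry (the statement, the test.t1 table, the credentials and the retry count are all assumptions):

    # Retry a write a few times if the cluster aborts it with a certification conflict (ER_LOCK_DEADLOCK)
    for ATTEMPT in 1 2 3; do
        if mysql -u root -e "UPDATE test.t1 SET val = val + 1 WHERE id = 1"; then
            break    # the commit went through
        fi
        echo "attempt $ATTEMPT failed, retrying..."
    done
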
609=== modified file 'doc/source/resources.rst'
610--- doc/source/resources.rst 2011-12-21 05:51:01 +0000
611+++ doc/source/resources.rst 2012-01-18 14:24:25 +0000
612@@ -1,3 +1,7 @@
613+=========
614+Resources
615+=========
616+
617 In general there are 4 resources that need to be different when you want to run several MySQL/Galera nodes on one host:
618
619 1) data directory
620@@ -5,19 +9,18 @@
621 3) galera replication listen port and/or address
622 4) receive address for state snapshot transfer
623
624-
625-and later incremental state transfer receive address will be added to the bunch. (I know, it is kinda a lot, but we don't see how it can be meaningfully reduced yet)
626-
627-The first two is the usual mysql stuff.
628-
629-You figured out the third. It is also possible to pass it via
630-
631-wsrep_provider_options="gmcast.listen_addr=tcp://127.0.0.1:5678"
632-
633-as most other galera options. This may save you some extra typing.
634+and later the incremental state transfer receive address will be added to the bunch. (I know, it is kinda a lot, but we don't see how it can be meaningfully reduced yet).
635+
636+The first two are the usual mysql stuff.
637+
638+You figured out the third. It is also possible to pass it via: ::
639+
640+ wsrep_provider_options="gmcast.listen_addr=tcp://127.0.0.1:5678"
641+
642+as most other Galera options. This may save you some extra typing.
643
644 The fourth one is wsrep_sst_receive_address. This is the address at which the node will be listening for and receiving the state. Note that in galera cluster _joining_ nodes are waiting for connections from donors. It goes contrary to tradition and seems to confuse people time and again, but there are good reasons it was made like that.
645
646 If you use mysqldump SST it should be the same as this mysql client connection address plus you need to set wsrep_sst_auth variable to hold user:password pair. The user should be privileged enough to read system tables from donor and create system tables on this node. For simplicity that could be just the root user. Note that it also means that you need to properly set up the privileges on the new node before attempting to join the cluster.
647
648-If you use rsync or xtrabackup SST, wsrep_sst_auth is not necessary unless your SST script makes use of it. wsrep_sst_address can be anything local (it may even be the same on all nodes provided you'll be starting them one at a time).
649+If you use rsync or |XtraBackup| SST, wsrep_sst_auth is not necessary unless your SST script makes use of it. wsrep_sst_address can be anything local (it may even be the same on all nodes provided you'll be starting them one at a time).
650
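
For the mysqldump SST case above, a minimal sketch of preparing the credentials referenced by wsrep_sst_auth could be (the user name, password and blanket privileges are assumptions; the text above notes that simply using root is the easiest choice):

    # Create a user privileged enough for mysqldump-based state transfer (illustrative credentials)
    mysql -u root -e "GRANT ALL PRIVILEGES ON *.* TO 'sst_user'@'%' IDENTIFIED BY 'sst_password';"
    # Point the node at those credentials in my.cnf
    cat >> /etc/my.cnf <<'EOF'
    wsrep_sst_auth=sst_user:sst_password
    EOF
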
651=== modified file 'doc/source/singlebox.rst'
652--- doc/source/singlebox.rst 2011-12-16 03:50:49 +0000
653+++ doc/source/singlebox.rst 2012-01-18 14:24:25 +0000
654@@ -1,64 +1,64 @@
655 How to setup 3 node cluster on single box
656 ==========================================
657
658-This is how-to setup 3-node cluster on the single physical box.
659+This is how to set up a 3-node cluster on a single physical box.
660
661-Assume you installed Percona XtraDB Cluster from binary .tar.gz into directory
662+Assume you installed |Percona XtraDB Cluster| from binary .tar.gz into directory
663
664 /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6
665
666
667-Now we need to create couple my.cnf files and couple data directories.
668+Now we need to create a couple of my.cnf files and a couple of data directories.
669
670-Assume we created (see the content of files at the end of document)
671+Assume we created (see the content of the files at the end of this document):
672
673 * /etc/my.4000.cnf
674 * /etc/my.5000.cnf
675 * /etc/my.6000.cnf
676
677-and data directories
678+and data directories:
679
680 * /data/bench/d1
681 * /data/bench/d2
682 * /data/bench/d3
683
684-and assume the local IP address is 10.11.12.205
685+and assume the local IP address is 10.11.12.205.
686
687-then we should be able to start initial node as (from directory /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6)::
688+Then we should be able to start the initial node (from the directory /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6): ::
689
690 bin/mysqld --defaults-file=/etc/my.4000.cnf
691
692-Following output will let out know that node was started succsefully::
693+The following output will let you know that the node was started successfully: ::
694
695 111215 19:01:49 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
696 111215 19:01:49 [Note] WSREP: New cluster view: global state: 4c286ccc-2792-11e1-0800-94bd91e32efa:0, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 1
697
698
699-And you can check used ports::
700+And you can check used ports: ::
701
702 netstat -anp | grep mysqld
703 tcp 0 0 0.0.0.0:4000 0.0.0.0:* LISTEN 8218/mysqld
704 tcp 0 0 0.0.0.0:4010 0.0.0.0:* LISTEN 8218/mysqld
705
706
707-After first node, we start second and third::
708+After the first node, we start the second and the third: ::
709
710 bin/mysqld --defaults-file=/etc/my.5000.cnf
711 bin/mysqld --defaults-file=/etc/my.6000.cnf
712
713-Succesfull start will produce following output::
714+A successful start will produce the following output: ::
715
716 111215 19:22:26 [Note] WSREP: Shifting JOINER -> JOINED (TO: 2)
717 111215 19:22:26 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2)
718 111215 19:22:26 [Note] WSREP: Synchronized with group, ready for connections
719
720
721-Now you can connect to any node and create database, which will be automatically propagated to another nodes::
722+Now you can connect to any node and create a database, which will be automatically propagated to the other nodes: ::
723
724 mysql -h127.0.0.1 -P5000 -e "CREATE DATABASE hello_peter"
725
726
727-Configuration files (/etc/my.4000.cnf)::
728+Configuration files (/etc/my.4000.cnf): ::
729
730 /etc/my.4000.cnf
731
732@@ -90,7 +90,7 @@
733 innodb_locks_unsafe_for_binlog=1
734 innodb_autoinc_lock_mode=2
735
736-Configuration files (/etc/my.5000.cnf). PLEASE see difference in *wsrep_cluster_address* ::
737+Configuration files (/etc/my.5000.cnf). PLEASE see the difference in *wsrep_cluster_address*: ::
738
739 /etc/my.5000.cnf
740 [mysqld]
741@@ -122,7 +122,7 @@
742 innodb_autoinc_lock_mode=2
743
744
745-Configuration files (/etc/my.6000.cnf). PLEASE see difference in *wsrep_cluster_address* ::
746+Configuration files (/etc/my.6000.cnf). PLEASE see the difference in *wsrep_cluster_address*: ::
747
748 /etc/my.6000.cnf
749
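
Putting the steps from singlebox.rst together, a small start script could look like the following sketch (the install path and config file names come from the examples above; running the daemons in the background and the 30-second waits are assumptions):

    #!/bin/sh
    # Start the three local nodes one after another and show the listening ports
    cd /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6
    bin/mysqld --defaults-file=/etc/my.4000.cnf &
    sleep 30    # give the first node time to initialize the cluster
    bin/mysqld --defaults-file=/etc/my.5000.cnf &
    bin/mysqld --defaults-file=/etc/my.6000.cnf &
    sleep 30    # wait for the joining nodes to finish state transfer
    netstat -anp | grep mysqld
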
