Merge lp:~hrvojem/percona-xtradb-cluster/docfix into lp:~percona-dev/percona-xtradb-cluster/5.5.17-22.1

Proposed by Hrvoje Matijakovic on 2012-01-15
Status: Merged
Approved by: Vadim Tkachenko on 2012-01-19
Approved revision: 3692
Merged at revision: 3693
Proposed branch: lp:~hrvojem/percona-xtradb-cluster/docfix
Merge into: lp:~percona-dev/percona-xtradb-cluster/5.5.17-22.1
Diff against target: 656 lines (+129/-138)
14 files modified
doc/source/3nodesec2.rst (+14/-15)
doc/source/bugreport.rst (+4/-0)
doc/source/conf.py (+3/-1)
doc/source/faq.rst (+28/-33)
doc/source/glossary.rst (+7/-8)
doc/source/index.rst (+4/-3)
doc/source/installation.rst (+3/-3)
doc/source/installation/bin_distro_specific.rst (+1/-1)
doc/source/installation/compiling_xtradb_cluster.rst (+4/-3)
doc/source/installation/yum_repo.rst (+1/-1)
doc/source/intro.rst (+16/-15)
doc/source/limitation.rst (+15/-29)
doc/source/resources.rst (+14/-11)
doc/source/singlebox.rst (+15/-15)
To merge this branch: bzr merge lp:~hrvojem/percona-xtradb-cluster/docfix
Reviewer: Stewart Smith
Review Type: community
Date Requested: 2012-01-15
Status: Approve on 2012-01-15
Review via email: mp+88598@code.launchpad.net

Description of the change

Documentation fixes

Stewart Smith (stewart) :
review: Approve
3692. By Hrvoje Matijakovic on 2012-01-17

fixed more typos

Preview Diff

1=== modified file 'doc/source/3nodesec2.rst'
2--- doc/source/3nodesec2.rst 2011-12-23 07:28:16 +0000
3+++ doc/source/3nodesec2.rst 2012-01-17 16:06:40 +0000
4@@ -1,39 +1,39 @@
5 How to setup 3 node cluster in EC2 enviroment
6 ==============================================
7
8-This is how-to setup 3-node cluster in EC2 enviroment.
9+This is how to set up a 3-node cluster in the EC2 environment.
10
11 Assume you are running *m1.xlarge* instances with OS *Red Hat Enterprise Linux 6.1 64-bit*.
12
13 Install XtraDB Cluster from RPM:
14
15-1. Install Percona's regular and testing repositories ::
16+1. Install Percona's regular and testing repositories: ::
17
18 rpm -Uhv http://repo.percona.com/testing/centos/6/os/noarch/percona-testing-0.0-1.noarch.rpm
19 rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
20
21-2. Install Percona XtraDB Cluster packages ::
22+2. Install Percona XtraDB Cluster packages: ::
23
24 yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client
25
26-3. Create data directories ::
27+3. Create data directories: ::
28
29 mkdir -p /mnt/data
30 mysql_install_db --datadir=/mnt/data
31
32-4. Stop firewall. Cluster requires couple TCP ports to operate. Easiest way ::
33+4. Stop firewall. Cluster requires couple TCP ports to operate. Easiest way: ::
34
35 service iptables stop
36
37 If you want to open only specific ports, you need to open 3306, 4444, 4567, 4568 ports.
38-For example for 4567 port (substitute 192.168.0.1 by your IP) ::
39+For example for 4567 port (substitute 192.168.0.1 by your IP): ::
40
41 iptables -A INPUT -i eth0 -p tcp -m tcp --source 192.168.0.1/24 --dport 4567 -j ACCEPT
42
43
44 5. Create /etc/my.cnf files.
45
46-On the first node (assume IP 10.93.46.58) ::
47+On the first node (assume IP 10.93.46.58): ::
48
49 [mysqld]
50 datadir=/mnt/data
51@@ -54,7 +54,7 @@
52 innodb_autoinc_lock_mode=2
53
54
55-On the second node ::
56+On the second node: ::
57
58 [mysqld]
59 datadir=/mnt/data
60@@ -74,33 +74,32 @@
61 innodb_locks_unsafe_for_binlog=1
62 innodb_autoinc_lock_mode=2
63
64-On the third (and following nodes) config is similar, with following change ::
65+On the third (and following nodes) config is similar, with the following change: ::
66
67 wsrep_node_name=node3
68
69 6. Start mysqld
70
71-On the first node ::
72+On the first node: ::
73
74 /usr/sbin/mysqld
75 or
76 mysqld_safe
77
78-You should be able to see in console (or in error-log file) ::
79+You should be able to see in console (or in error-log file): ::
80
81 111216 0:16:42 [Note] /usr/sbin/mysqld: ready for connections.
82 Version: '5.5.17' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona XtraDB Cluster (GPL), Release alpha22.1, Revision 3673 wsrep_22.3.r3673
83 111216 0:16:42 [Note] WSREP: Assign initial position for certification: 0, protocol version: 1
84 111216 0:16:42 [Note] WSREP: Synchronized with group, ready for connections
85
86-On the second (and following nodes) ::
87-
88+On the second (and following nodes): ::
89
90 /usr/sbin/mysqld
91 or
92 mysqld_safe
93
94-You should be able to see in console (or in error-log file) ::
95+You should be able to see in console (or in error-log file): ::
96
97 111216 0:21:39 [Note] WSREP: Flow-control interval: [12, 23]
98 111216 0:21:39 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 0)
99@@ -141,7 +140,7 @@
100
101 When all nodes are in SYNCED stage your cluster is ready!
102
103-7. Connect to database on any node and create database ::
104+7. Connect to database on any node and create database: ::
105
106 mysql
107 > CREATE DATABASE hello_tom;
108
109=== added file 'doc/source/bugreport.rst'
110--- doc/source/bugreport.rst 1970-01-01 00:00:00 +0000
111+++ doc/source/bugreport.rst 2012-01-17 16:06:40 +0000
112@@ -0,0 +1,4 @@
113+How to Report Bugs
114+==================
115+
116+All bugs can be reported on `Launchpad <https://bugs.launchpad.net/percona-xtradb-cluster/+filebug>`_. Please note that error.log files from **all** the nodes need to be submitted.
117
118=== modified file 'doc/source/conf.py'
119--- doc/source/conf.py 2011-12-13 16:28:28 +0000
120+++ doc/source/conf.py 2012-01-17 16:06:40 +0000
121@@ -44,7 +44,7 @@
122
123 # General information about the project.
124 project = u'Percona XtraDB Cluster'
125-copyright = u'2011, Percona Inc'
126+copyright = u'2012, Percona Inc'
127
128 # The version info for the project you're documenting, acts as replacement for
129 # |version| and |release|, also used in various other places throughout the
130@@ -104,6 +104,8 @@
131
132 .. |MyISAM| replace:: :term:`MyISAM`
133
134+.. |split brain| replace:: :term:`split brain`
135+
136 .. |LSN| replace:: :term:`LSN`
137
138 .. |XtraBackup| replace:: *XtraBackup*
139
140=== modified file 'doc/source/faq.rst'
141--- doc/source/faq.rst 2012-01-14 17:24:09 +0000
142+++ doc/source/faq.rst 2012-01-17 16:06:40 +0000
143@@ -6,64 +6,65 @@
144 ========================================================
145 A: For auto-increment particular, Cluster changes auto_increment_offset
146 for each new node.
147-In the single node workload, locking handled by usual way how InnoDB handles locks.
147+In the single node workload, locking is handled the usual way |InnoDB| handles locks.
149 In case of write load on several nodes, Cluster uses optimistic locking (http://en.wikipedia.org/wiki/Optimistic_concurrency_control) and application may receive lock error in the response on COMMIT query.
150
151 Q: What if one of the nodes crashes and innodb recovery roll back some transactions?
152 =====================================================================================
153 A: When the node crashes, after the restart it will copy whole dataset from another node
154-(if there were changes to data since crash)
155+(if there were changes to data since crash).
156
157 Q: Is there a chance to have different table structure on the nodes?
158 =====================================================================
159-what I mean is like having 4nodes, 4 tables like sessions_a, sessions_b, sessions_c and sessions_d and have each only on one of the nodes?
160+What I mean is like having 4 nodes, 4 tables like sessions_a, sessions_b, sessions_c and sessions_d and have each only on one of the nodes?
161
162 A: Not at the moment for InnoDB tables. But it will work for MEMORY tables.
163
164-Q: What if a node fail and/or what if there is a network issue between them
165-============================================================================
166+Q: What if a node fail and/or what if there is a network issue between them?
167+=============================================================================
168 A: Then Quorum mechanism in XtraDB Cluster will decide what nodes can accept traffic
169 and will shutdown nodes that not belong to quorum. Later when the failure is fixed,
170 the nodes will need to copy data from working cluster.
171
172 Q: How would it handle split brain?
173 ====================================
174-A: It would not handle it. The |split brain| is hard stop, XtraDB Cluster can't resolve it.
174+A: It would not handle it. The |split brain| is a hard stop, |XtraDB Cluster| can't resolve it.
176 That's why the minimal recommendation is to have 3 nodes.
177 However there is possibility to allow a node to handle the traffic, option is: ::
178
179 wsrep_provider_options="pc.ignore_sb = yes"
180
181-Q: I have a two nodes setup. When node1 fails, node2 does not accept commands, why ?
182-=====================================================================================
183+Q: I have a two nodes setup. When node1 fails, node2 does not accept commands, why?
184+====================================================================================
185 A: This is expected behaviour, to prevent |split brain|. See previous question.
186
187-Q: What tcp ports are used by Percona XtraDB Cluster ?
188+Q: What tcp ports are used by Percona XtraDB Cluster?
189 ======================================================
190 A: You may need to open up to 4 ports if you are using firewall.
191
192- * Regular MySQL port, default 3306
193- * Port for group communication, default 4567. It can be changed by the option: ::
194-
195- wsrep_provider_options = "gmcast.listen_addr=tcp://0.0.0.0:4010; "
196-
197- * Port for State Transfer, default 4444. It can be changed by the option: ::
198-
199- wsrep_sst_receive_address=10.11.12.205:5555
200-
201- * Port for Incremental State Transfer, default port for group communication + 1 (4568). It can be changed by the option: ::
202-
203- wsrep_provider_options = "ist.recv_addr=10.11.12.206:7777; "
204-
205-Q: Is there "async" mode for Cluster or only "sync" commits are supported ?
206+1. Regular MySQL port, default 3306.
207+
208+2. Port for group communication, default 4567. It can be changed by the option: ::
209+
210+ wsrep_provider_options ="gmcast.listen_addr=tcp://0.0.0.0:4010; "
211+
212+3. Port for State Transfer, default 4444. It can be changed by the option: ::
213+
214+ wsrep_sst_receive_address=10.11.12.205:5555
215+
216+4. Port for Incremental State Transfer, default port for group communication + 1 (4568). It can be changed by the option: ::
217+
218+ wsrep_provider_options = "ist.recv_addr=10.11.12.206:7777; "
219+
220+Q: Is there "async" mode for Cluster or only "sync" commits are supported?
221 ===========================================================================
222-A: There is no "async" mode, all commits are syncronious on all nodes.
223-Or, to be fully correct, the commits are "virtually" syncronious. Which
224-means that transaction should pass "certification" on nodes, not physicall commit.
225+A: There is no "async" mode; all commits are synchronous on all nodes.
226+Or, to be fully correct, the commits are "virtually" synchronous, which
227+means that a transaction should pass "certification" on nodes, not a physical commit.
228 "Certification" means a guarantee that transaction does not have conflicts with
229 another transactions on corresponding node.
230
231-Q: Does it work with regular MySQL replication ?
232+Q: Does it work with regular MySQL replication?
233 ================================================
234 A: Yes. On the node you are going to use as master, you should enable log-bin and log-slave-update options.
235
236@@ -73,9 +74,3 @@
237
238 echo 0 > /selinux/enforce
239
240-
241-
242-
243-
244-
245-
246
247=== modified file 'doc/source/glossary.rst'
248--- doc/source/glossary.rst 2012-01-11 17:47:46 +0000
249+++ doc/source/glossary.rst 2012-01-17 16:06:40 +0000
250@@ -7,9 +7,8 @@
251 LSN
252 Each InnoDB page (usually 16kb in size) contains a log sequence number, or LSN. The LSN is the system version number for the entire database. Each page's LSN shows how recently it was changed.
253
254-
255 InnoDB
256- Storage engine which provides ACID-compilant transactions and foreing key support, among others improvements over :term:`MyISAM`. It is the default engine for |MySQL| as of the 5.5 series.
257+ Storage engine which provides ACID-compliant transactions and foreign key support, among others improvements over :term:`MyISAM`. It is the default engine for |MySQL| as of the 5.5 series.
258
259 MyISAM
260 Previous default storage engine for |MySQL| for versions prior to 5.5. It doesn't fully support transactions but in some scenarios may be faster than :term:`InnoDB`. Each table is stored on disk in 3 files: :term:`.frm`, :term:`.MYD`, :term:`.MYI`
261@@ -18,19 +17,19 @@
262 *Percona XtraDB* is an enhanced version of the InnoDB storage engine, designed to better scale on modern hardware, and including a variety of other features useful in high performance environments. It is fully backwards compatible, and so can be used as a drop-in replacement for standard InnoDB. More information `here <http://www.percona.com/docs/wiki/Percona-XtraDB:start>`_ .
263
264 XtraDB Cluster
265- *Percona XtraDB Cluster* is a high availablity solution for MySQL
266+ *Percona XtraDB Cluster* is a high availability solution for MySQL
267
268 Percona XtraDB Cluster
269- *Percona XtraDB Cluster* is a high availablity solution for MySQL
270+ *Percona XtraDB Cluster* is a high availability solution for MySQL
271
272 my.cnf
273- This file refers to the database server's main configuration file. Most linux distributions place it as :file:`/etc/mysql/my.cnf`, but the location and name depends on the particular installation. Note that this is not the only way of configuring the server, some systems does not have one even and rely on the command options to start the server and its defaults values.
274+ This file refers to the database server's main configuration file. Most Linux distributions place it as :file:`/etc/mysql/my.cnf`, but the location and name depends on the particular installation. Note that this is not the only way of configuring the server, some systems does not have one even and rely on the command options to start the server and its defaults values.
275
276 datadir
277 The directory in which the database server stores its databases. Most Linux distribution use :file:`/var/lib/mysql` by default.
278
279 ibdata
280- Default prefix for tablespace files, e.g. :file:`ibdata1` is a 10MB autoextensible file that |MySQL| creates for the shared tablespace by default.
281+ Default prefix for tablespace files, e.g. :file:`ibdata1` is a 10MB autoextendable file that |MySQL| creates for the shared tablespace by default.
282
283 innodb_file_per_table
284 InnoDB option to use separate .ibd files for each table.
285@@ -54,10 +53,10 @@
286 Each table using the :program:`MERGE` storage engine, besides of a :term:`.frm` file, will have :term:`.MRG` file containing the names of the |MyISAM| tables associated with it.
287
288 .TRG
289- File containing the TRiGgers associated to a table, e.g. `:file:`mytable.TRG`. With the :term:`.TRN` file, they represent all the Trigger definitions.
290+ File containing the triggers associated to a table, e.g. `:file:`mytable.TRG`. With the :term:`.TRN` file, they represent all the trigger definitions.
291
292 .TRN
293- File containing the TRiggers' Names associated to a table, e.g. `:file:`mytable.TRN`. With the :term:`.TRG` file, they represent all the Trigger definitions.
294+ File containing the triggers' Names associated to a table, e.g. `:file:`mytable.TRN`. With the :term:`.TRG` file, they represent all the trigger definitions.
295
296 .ARM
297 Each table with the :program:`Archive Storage Engine` has ``.ARM`` file which contains the metadata of it.
298
299=== modified file 'doc/source/index.rst'
300--- doc/source/index.rst 2012-01-11 16:46:28 +0000
301+++ doc/source/index.rst 2012-01-17 16:06:40 +0000
302@@ -12,13 +12,13 @@
303
304 * Synchronous replication. Transaction either commited on all nodes or none.
305
306- * Multi-master replication. You can write to any node.
307+ * Multi-master replication. You can write to any node.
308
309 * Parallel applying events on slave. Real “parallel replication”.
310
311 * Automatic node provisioning.
312
313- * Data consistency. No more unsyncronised slaves.
314+ * Data consistency. No more unsynchronized slaves.
315
316 Percona XtraDB Cluster is fully compatible with MySQL or Percona Server in the following meaning:
317
318@@ -35,6 +35,7 @@
319 :glob:
320
321 intro
322+ resources
323
324 Installation
325 ============
326@@ -65,7 +66,7 @@
327
328 singlebox
329 3nodesec2
330-
331+ bugreport
332
333 Percona XtraDB Cluster limitations
334 ==================================
335
336=== modified file 'doc/source/installation.rst'
337--- doc/source/installation.rst 2011-12-14 02:32:00 +0000
338+++ doc/source/installation.rst 2012-01-17 16:06:40 +0000
339@@ -61,9 +61,9 @@
340 Install XtraBackup SST method
341 ==============================
342
343-To use Percona XtraBackup for State Transfer method (copy snapsot of data between nodes)
344-you can use the regular xtrabackup package with the script what supports galera information.
345-You can take *innobackupex* script from source code `innobackupex <http://bazaar.launchpad.net/~percona-dev/percona-xtrabackup/galera-info/view/head:/innobackupex>`_
346+To use Percona XtraBackup for State Transfer method (copy snapshot of data between nodes)
347+you can use the regular xtrabackup package with the script that supports Galera information.
348+You can take the *innobackupex* script from the source code `innobackupex <http://bazaar.launchpad.net/~percona-dev/percona-xtrabackup/galera-info/view/head:/innobackupex>`_.
349
350 To inform node to use xtrabackup you need to specify in my.cnf: ::
351
352
353=== modified file 'doc/source/installation/bin_distro_specific.rst'
354--- doc/source/installation/bin_distro_specific.rst 2011-12-13 06:38:29 +0000
355+++ doc/source/installation/bin_distro_specific.rst 2012-01-17 16:06:40 +0000
356@@ -19,7 +19,7 @@
357 cp ./usr/bin/tar4ibd /usr/bin
358 cp ./usr/bin/innobackupex-1.5.1 /usr/bin
359
360-* If you use a version prior to 1.6, the stock perl causes an issue with the backup scripts version detection. Edit :file:`/usr/bin/innobackupex-1.5.1`. Comment out the lines below as shown below ::
361+* If you use a version prior to 1.6, the stock perl causes an issue with the backup scripts version detection. Edit :file:`/usr/bin/innobackupex-1.5.1`. Comment out the lines below as shown below: ::
362
363 $perl_version = chr($required_perl_version[0])
364 . chr($required_perl_version[1])
365
366=== modified file 'doc/source/installation/compiling_xtradb_cluster.rst'
367--- doc/source/installation/compiling_xtradb_cluster.rst 2011-12-13 06:38:29 +0000
368+++ doc/source/installation/compiling_xtradb_cluster.rst 2012-01-17 16:06:40 +0000
369@@ -30,11 +30,12 @@
370 Compiling
371 ------------
372
373-The most esiest way to build binaries is to run
374-script BUILD/compile-pentium64-wsrep
375+The easiest way to build binaries is to run the script: ::
376+
377+ BUILD/compile-pentium64-wsrep
378
379 If you feel confident to use cmake, you make compile with cmake adding
380 -DWITH_WSREP=1 to parameters.
381
382-Exampes how to build RPM and DEB packages you can find in packaging/percona directory in the source code
382+Examples of how to build RPM and DEB packages can be found in the packaging/percona directory in the source code.
384
385
386=== modified file 'doc/source/installation/yum_repo.rst'
387--- doc/source/installation/yum_repo.rst 2012-01-10 02:33:47 +0000
388+++ doc/source/installation/yum_repo.rst 2012-01-17 16:06:40 +0000
389@@ -13,7 +13,7 @@
390
391 $ rpm -Uhv http://repo.percona.com/testing/centos/6/os/noarch/percona-testing-0.0-1.noarch.rpm
392
393-You may want to install also Percona stable repository, which provides Percona-shared-compat rpm, needed to satisfy dependencies ::
394+You may also want to install the Percona stable repository, which provides the Percona-shared-compat rpm, needed to satisfy dependencies: ::
395
396 $ rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
397
398
399=== modified file 'doc/source/intro.rst'
400--- doc/source/intro.rst 2011-12-14 00:33:37 +0000
401+++ doc/source/intro.rst 2012-01-17 16:06:40 +0000
402@@ -7,34 +7,35 @@
403 What is core difference Percona XtraDB Cluster from MySQL Replication ?
404 =======================================================================
405
406-Let's take look into well know CAP theorem for Distributed systems.
407+Let's take a look at the well-known CAP theorem for Distributed systems.
408 Characteristics of Distributed systems:
409
410- C - Consistency (all your data is consistent on all nodes)
411-
412- A - Availability (your system is AVAILABLE to handle requests in case of failure of one or several nodes )
413-
414- P - Partitioning tolerance (in case of inter-node connection failure, each node is still available to handle requests)
415+ C - Consistency (all your data is consistent on all nodes),
416+
417+ A - Availability (your system is AVAILABLE to handle requests in case of failure of one or several nodes ),
418+
419+ P - Partitioning tolerance (in case of inter-node connection failure, each node is still available to handle requests).
420
421
422 CAP theorem says that each Distributed system can have only two out of these three.
423
424-MySQL replication has: Availability and Partitioning tolerance
425-
426-Percona XtraDB Cluster has: Consistency and Availability
427-
428-That is MySQL replication does not guarantee Consistency of your data, while Percona XtraDB Cluster provides data Consistency. (And, yes, Percona XtraDB Cluster looses Partitioning tolerance property)
429+MySQL replication has: Availability and Partitioning tolerance.
430+
431+Percona XtraDB Cluster has: Consistency and Availability.
432+
433+That is, MySQL replication does not guarantee Consistency of your data, while Percona XtraDB Cluster provides data Consistency. (And yes, Percona XtraDB Cluster loses the Partitioning tolerance property.)
434
435 Components
436 ==========
437
438 *Percona XtraDB Cluster* is based on `Percona Server with XtraDB <http://www.percona.com/software/percona-server/>`_
439-and includes `Write Set REPlication patches <https://launchpad.net/codership-mysql>`_.
440+and includes `Write Set Replication patches <https://launchpad.net/codership-mysql>`_.
441 It uses the `Galera library <https://launchpad.net/galera>`_, version 2.x,
442 a generic Synchronous Multi-Master replication plugin for transactional applications.
443
444-Galera library is developed by `Codership Oy <http://www.codership.com/>`_
445+Galera library is developed by `Codership Oy <http://www.codership.com/>`_.
446
447 Galera 2.x supports such new features as:
448- * Incremental State Transfer, especially useful for WAN deployments
449- * RSU, Rolling Schema Update. Schema change does not block operations against table
450+ * Incremental State Transfer, especially useful for WAN deployments,
451+ * RSU, Rolling Schema Update. Schema change does not block operations against table.
452+
453
454=== modified file 'doc/source/limitation.rst'
455--- doc/source/limitation.rst 2011-12-14 00:33:37 +0000
456+++ doc/source/limitation.rst 2012-01-17 16:06:40 +0000
457@@ -2,43 +2,29 @@
458 Percona XtraDB Cluster Limitations
459 ====================================
460
461-There are some limitations which we should be aware of. Some of them will be eliminated later as product is improved; some are design limitations.
462-
463- - Currently replication works only with InnoDB storage engine. Any writes to
464- tables of other types, including system (mysql.*) tables are not replicated.
465- However, DDL statements are replicated in statement level, and changes
466- to mysql.* tables will get replicated that way.
467- So, you can safely issue: CREATE USER...,
468- but issuing: INSERT INTO mysql.user..., will not be replicated.
469-
470- - DELETE operation is unsupported on tables without primary key. Also rows in
471- tables without primary key may appear in different order on different nodes.
472- As a result SELECT...LIMIT... may return slightly different sets.
473+There are some limitations which you should be aware of. Some of them will be eliminated later as the product is improved and some are design limitations.
474+
475+ - Currently replication works only with |InnoDB| storage engine. Any writes to tables of other types, including system (mysql.*) tables, are not replicated. However, DDL statements are replicated at statement level, and changes to mysql.* tables will get replicated that way. So, you can safely issue: CREATE USER..., but issuing: INSERT INTO mysql.user..., will not be replicated.
476+
477+ - DELETE operation is unsupported on tables without a primary key. Also, rows in tables without a primary key may appear in different order on different nodes. As a result, SELECT...LIMIT... may return slightly different sets.
478
479 - Unsupported queries:
480 * LOCK/UNLOCK TABLES cannot be supported in multi-master setups.
481 * lock functions (GET_LOCK(), RELEASE_LOCK()... )
482
483- - Query log cannot be directed to table. If you enable query logging,
484- you must forward the log to a file:
485- log_output = FILE
486- Use general_log and general_log_file to choose query logging and the
487- log file name
488-
489- - Maximum allowed transaction size is defined by wsrep_max_ws_rows and
490- wsrep_max_ws_size. Anything bigger (e.g. huge LOAD DATA) will be rejected.
491-
492- - Due to cluster level optimistic concurrency control, transaction issuing
493- COMMIT may still be aborted at that stage. There can be two transactions.
494- writing to same rows and committing in separate XtraDB Cluster nodes, and only one
495- of the them can successfully commit. The failing one will be aborted.
496- For cluster level aborts, XtraDB Cluster gives back deadlock error.
497- code (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).
498+ - Query log cannot be directed to a table. If you enable query logging, you must forward the log to a file: log_output = FILE. Use general_log and general_log_file to choose query logging and the log file name.
499+
500+ - Maximum allowed transaction size is defined by wsrep_max_ws_rows and wsrep_max_ws_size. Anything bigger (e.g. huge LOAD DATA) will be rejected.
501+
502+ - Due to cluster level optimistic concurrency control, a transaction issuing COMMIT may still be aborted at that stage. There can be two transactions writing to the same rows and committing in separate XtraDB Cluster nodes, and only one of them can successfully commit. The failing one will be aborted. For cluster level aborts, XtraDB Cluster gives back deadlock error code:
503+ (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).
504
505 - XA transactions can not be supported due to possible rollback on commit.
506
507- - The write throughput of the whole cluster is limited by weakest node. If one node becomes slow, whole cluster is slow. If you have requirements for stable high performance, then it should be supported by corresponding hardware (10Gb network, SSD)
508+ - The write throughput of the whole cluster is limited by the weakest node. If one node becomes slow, the whole cluster is slow. If you have requirements for stable high performance, then it should be supported by corresponding hardware (10Gb network, SSD).
509
510- - The minimal recommended size of cluster is 3 nodes
511+ - The minimal recommended size of cluster is 3 nodes.
512
513 - DDL statements are problematic and may stall cluster. Later, the support of DDL will be improved, but will always require special treatment.
514+
515+
516
517=== modified file 'doc/source/resources.rst'
518--- doc/source/resources.rst 2011-12-21 05:51:01 +0000
519+++ doc/source/resources.rst 2012-01-17 16:06:40 +0000
520@@ -1,3 +1,7 @@
521+=========
522+Resources
523+=========
524+
525 In general there are 4 resources that need to be different when you want to run several MySQL/Galera nodes on one host:
526
527 1) data directory
528@@ -5,19 +9,18 @@
529 3) galera replication listen port and/or address
530 4) receive address for state snapshot transfer
531
532-
533-and later incremental state transfer receive address will be added to the bunch. (I know, it is kinda a lot, but we don't see how it can be meaningfully reduced yet)
534-
535-The first two is the usual mysql stuff.
536-
537-You figured out the third. It is also possible to pass it via
538-
539-wsrep_provider_options="gmcast.listen_addr=tcp://127.0.0.1:5678"
540-
541-as most other galera options. This may save you some extra typing.
542+and later incremental state transfer receive address will be added to the bunch. (I know, it is kinda a lot, but we don't see how it can be meaningfully reduced yet).
543+
544+The first two are the usual mysql stuff.
545+
546+You figured out the third. It is also possible to pass it via: ::
547+
548+ wsrep_provider_options="gmcast.listen_addr=tcp://127.0.0.1:5678"
549+
550+as most other Galera options. This may save you some extra typing.
551
552 The fourth one is wsrep_sst_receive_address. This is the address at which the node will be listening for and receiving the state. Note that in galera cluster _joining_ nodes are waiting for connections from donors. It goes contrary to tradition and seems to confuse people time and again, but there are good reasons it was made like that.
553
554 If you use mysqldump SST it should be the same as this mysql client connection address plus you need to set wsrep_sst_auth variable to hold user:password pair. The user should be privileged enough to read system tables from donor and create system tables on this node. For simplicity that could be just the root user. Note that it also means that you need to properly set up the privileges on the new node before attempting to join the cluster.
555
556-If you use rsync or xtrabackup SST, wsrep_sst_auth is not necessary unless your SST script makes use of it. wsrep_sst_address can be anything local (it may even be the same on all nodes provided you'll be starting them one at a time).
557+If you use rsync or |xtrabackup| SST, wsrep_sst_auth is not necessary unless your SST script makes use of it. wsrep_sst_address can be anything local (it may even be the same on all nodes provided you'll be starting them one at a time).
558
559=== modified file 'doc/source/singlebox.rst'
560--- doc/source/singlebox.rst 2011-12-16 03:50:49 +0000
561+++ doc/source/singlebox.rst 2012-01-17 16:06:40 +0000
562@@ -1,64 +1,64 @@
563 How to setup 3 node cluster on single box
564 ==========================================
565
566-This is how-to setup 3-node cluster on the single physical box.
566+This is how to set up a 3-node cluster on a single physical box.
568
569-Assume you installed Percona XtraDB Cluster from binary .tar.gz into directory
570+Assume you installed |Percona XtraDB Cluster| from binary .tar.gz into directory
571
572 /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6
573
574
575-Now we need to create couple my.cnf files and couple data directories.
575+Now we need to create a couple of my.cnf files and a couple of data directories.
577
578-Assume we created (see the content of files at the end of document)
579+Assume we created (see the content of files at the end of document):
580
581 * /etc/my.4000.cnf
582 * /etc/my.5000.cnf
583 * /etc/my.6000.cnf
584
585-and data directories
586+and data directories:
587
588 * /data/bench/d1
589 * /data/bench/d2
590 * /data/bench/d3
591
592-and assume the local IP address is 10.11.12.205
593+and assume the local IP address is 10.11.12.205.
594
595-then we should be able to start initial node as (from directory /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6)::
595+Then we should be able to start the initial node as (from directory /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6): ::
597
598 bin/mysqld --defaults-file=/etc/my.4000.cnf
599
600-Following output will let out know that node was started succsefully::
600+The following output will let you know that the node was started successfully: ::
602
603 111215 19:01:49 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
604 111215 19:01:49 [Note] WSREP: New cluster view: global state: 4c286ccc-2792-11e1-0800-94bd91e32efa:0, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 1
605
606
607-And you can check used ports::
608+And you can check used ports: ::
609
610 netstat -anp | grep mysqld
611 tcp 0 0 0.0.0.0:4000 0.0.0.0:* LISTEN 8218/mysqld
612 tcp 0 0 0.0.0.0:4010 0.0.0.0:* LISTEN 8218/mysqld
613
614
615-After first node, we start second and third::
616+After first node, we start second and third: ::
617
618 bin/mysqld --defaults-file=/etc/my.5000.cnf
619 bin/mysqld --defaults-file=/etc/my.6000.cnf
620
621-Succesfull start will produce following output::
622+Successful start will produce the following output: ::
623
624 111215 19:22:26 [Note] WSREP: Shifting JOINER -> JOINED (TO: 2)
625 111215 19:22:26 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2)
626 111215 19:22:26 [Note] WSREP: Synchronized with group, ready for connections
627
628
629+Now you can connect to any node and create a database, which will be automatically propagated to other nodes: ::
630+Now you can connect to any node and create database, which will be automatically propagated to other nodes: ::
631
632 mysql -h127.0.0.1 -P5000 -e "CREATE DATABASE hello_peter"
633
634
635-Configuration files (/etc/my.4000.cnf)::
636+Configuration files (/etc/my.4000.cnf): ::
637
638 /etc/my.4000.cnf
639
640@@ -90,7 +90,7 @@
641 innodb_locks_unsafe_for_binlog=1
642 innodb_autoinc_lock_mode=2
643
644-Configuration files (/etc/my.5000.cnf). PLEASE see difference in *wsrep_cluster_address* ::
645+Configuration files (/etc/my.5000.cnf). PLEASE see the difference in *wsrep_cluster_address*: ::
646
647 /etc/my.5000.cnf
648 [mysqld]
649@@ -122,7 +122,7 @@
650 innodb_autoinc_lock_mode=2
651
652
653-Configuration files (/etc/my.6000.cnf). PLEASE see difference in *wsrep_cluster_address* ::
654+Configuration files (/etc/my.6000.cnf). PLEASE see the difference in *wsrep_cluster_address*: ::
655
656 /etc/my.6000.cnf
657
