Merge lp:~hrvojem/percona-xtradb-cluster/bug1253055-5.5 into lp:percona-xtradb-cluster/5.5

Proposed by Hrvoje Matijakovic
Status: Merged
Approved by: Raghavendra D Prabhu
Approved revision: no longer in the source branch.
Merged at revision: 760
Proposed branch: lp:~hrvojem/percona-xtradb-cluster/bug1253055-5.5
Merge into: lp:percona-xtradb-cluster/5.5
Diff against target: 711 lines (+627/-9)
6 files modified
doc-pxc/source/faq.rst (+39/-1)
doc-pxc/source/howtos/ubuntu_howto.rst (+2/-7)
doc-pxc/source/index.rst (+1/-0)
doc-pxc/source/intro.rst (+1/-1)
doc-pxc/source/wsrep-provider-index.rst (+574/-0)
doc-pxc/source/wsrep-system-index.rst (+10/-0)
To merge this branch: bzr merge lp:~hrvojem/percona-xtradb-cluster/bug1253055-5.5
Reviewer Review Type Date Requested Status
PXC core team Pending
Review via email: mp+219386@code.launchpad.net

Preview Diff

=== modified file 'doc-pxc/source/faq.rst'
--- doc-pxc/source/faq.rst 2014-05-08 16:52:49 +0000
+++ doc-pxc/source/faq.rst 2014-05-14 11:32:28 +0000
@@ -22,13 +22,51 @@
 
 .. code-block:: mysql
 
-   SELECT * FROM someinnodbtable WHERE id=1;
+   SELECT 1 FROM dual;
 
 3 different results are possible:
  * You get the row with id=1 (node is healthy)
  * Unknown error (node is online but Galera is not connected/synced with the cluster)
  * Connection error (node is not online)
 
+You can also check the node health with the ``clustercheck`` script. You need to set up the ``clustercheck`` user first:
+
+.. code-block:: mysql
+
+   GRANT USAGE ON *.* TO 'clustercheck'@'localhost' IDENTIFIED BY PASSWORD '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19';
+
+You can then check the node health by running the ``clustercheck`` script:
+
+.. code-block:: bash
+
+   /usr/bin/clustercheck clustercheck password 0
+
+If the node is running correctly you should get the following status: ::
+
+ HTTP/1.1 200 OK
+ Content-Type: text/plain
+ Connection: close
+ Content-Length: 40
+
+ Percona XtraDB Cluster Node is synced.
+
+In case the node isn't synced, or if it is offline, the status will look like: ::
+
+ HTTP/1.1 503 Service Unavailable
+ Content-Type: text/plain
+ Connection: close
+ Content-Length: 44
+
+ Percona XtraDB Cluster Node is not synced.
+
+.. note::
+
+   ``clustercheck`` syntax:
+
+   ``<user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>``
+
+   Recommended: ``server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local``
+
+   Compatibility: ``server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local``
+
 Q: How does XtraDB Cluster handle big transactions?
 ===================================================
 A: XtraDB Cluster populates the write set in memory before replication, and this sets one limit for how large transactions make sense. There are wsrep variables for the maximum row count and the maximum size of the write set to make sure that the server does not run out of memory.
 
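Editor's note: the FAQ addition above shows the two HTTP-style responses that ``clustercheck`` emits. As an illustration only, the decision logic behind those responses can be sketched as a small shell function (a hypothetical stand-in: the real script queries MySQL for ``wsrep_local_state`` and also honors the ``available_when_donor``/``available_when_readonly`` flags):

```bash
#!/bin/bash
# Hypothetical sketch of the clustercheck response logic, not the real script.
# wsrep_local_state value 4 means the node is Synced.
node_status() {
    local state="$1"
    if [ "$state" = "4" ]; then
        printf 'HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nPercona XtraDB Cluster Node is synced.\n'
    else
        printf 'HTTP/1.1 503 Service Unavailable\r\nContent-Type: text/plain\r\n\r\nPercona XtraDB Cluster Node is not synced.\n'
    fi
}

node_status 4   # synced node: 200 response
node_status 2   # donor/desynced node: 503 response
```

A load balancer polling this endpoint only needs the HTTP status line, which is why the script prints it first.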
=== modified file 'doc-pxc/source/howtos/ubuntu_howto.rst'
--- doc-pxc/source/howtos/ubuntu_howto.rst 2014-02-06 09:57:33 +0000
+++ doc-pxc/source/howtos/ubuntu_howto.rst 2014-05-14 11:32:28 +0000
@@ -31,11 +31,7 @@
 Installation
 ------------
 
-Percona repository should be set up as described in the :ref:`apt-repo` guide. The following command will install the |Percona XtraDB Cluster| packages: ::
-
-  $ apt-get install percona-xtradb-cluster-server-5.5 percona-xtradb-cluster-client-5.5 percona-xtradb-cluster-galera-2.x
-
-When these two commands have been executed successfully on all three nodes |Percona XtraDB Cluster| is installed.
+Installation information can be found in the :ref:`installation` guide.
 
 .. note::
 
@@ -83,11 +79,10 @@
   # Authentication for SST method
   wsrep_sst_auth="sstuser:s3cretPass"
 
-.. note:: For the first member of the cluster, the :variable:`wsrep_cluster_address` variable should contain an empty ``gcomm://`` while the cluster is being bootstrapped. As soon as the cluster has been bootstrapped and at least one more node has joined, that line can be replaced in the :file:`my.cnf` configuration file by the one where :variable:`wsrep_cluster_address` contains all three node addresses. If the node gets restarted without making this change, it will bootstrap a new cluster instead of joining the existing one.
 
 After this, the first node can be started with the following command: ::
 
-  [root@pxc1 ~]# /etc/init.d/mysql start
+  [root@pxc1 ~]# /etc/init.d/mysql bootstrap-pxc
 
 This command will start the first node and bootstrap the cluster (more information about bootstrapping the cluster can be found in the :ref:`bootstrap` manual).
 
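Editor's note: the note removed above concerns :variable:`wsrep_cluster_address` — a bare ``gcomm://`` is used only while bootstrapping the very first node, while joined nodes list the member addresses. As an illustration, building such a value can be sketched with a hypothetical helper (the node IPs are examples, not from this proposal):

```bash
# Hypothetical helper: build a wsrep_cluster_address value from node IPs.
# With no arguments it yields the bare "gcomm://" used only when
# bootstrapping the first node of a new cluster.
cluster_address() {
    if [ "$#" -eq 0 ]; then
        echo "gcomm://"
    else
        local IFS=,
        echo "gcomm://$*"   # join the IPs with commas
    fi
}

cluster_address                                            # -> gcomm://
cluster_address 192.168.70.61 192.168.70.62 192.168.70.63  # -> gcomm://192.168.70.61,192.168.70.62,192.168.70.63
```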
=== modified file 'doc-pxc/source/index.rst'
--- doc-pxc/source/index.rst 2013-12-30 17:01:54 +0000
+++ doc-pxc/source/index.rst 2014-05-14 11:32:28 +0000
@@ -98,6 +98,7 @@
   release-notes/release-notes_index
   wsrep-status-index
   wsrep-system-index
+  wsrep-provider-index
   wsrep-files-index
   faq
   glossary
 
=== modified file 'doc-pxc/source/intro.rst'
--- doc-pxc/source/intro.rst 2013-02-01 13:41:34 +0000
+++ doc-pxc/source/intro.rst 2014-05-14 11:32:28 +0000
@@ -44,7 +44,7 @@
 
 Percona XtraDB Cluster has: Consistency and Availability.
 
-That is MySQL replication does not guarantee Consistency of your data, while Percona XtraDB Cluster provides data Consistency. (And yes, Percona XtraDB Cluster looses Partitioning tolerance property).
+That is, |MySQL| replication does not guarantee Consistency of your data, while |Percona XtraDB Cluster| provides data Consistency (but it loses the Partitioning tolerance property).
 
 Components
 ==========
 
=== added file 'doc-pxc/source/wsrep-provider-index.rst'
--- doc-pxc/source/wsrep-provider-index.rst 1970-01-01 00:00:00 +0000
+++ doc-pxc/source/wsrep-provider-index.rst 2014-05-14 11:32:28 +0000
@@ -0,0 +1,574 @@
+.. _wsrep_provider_index:
+
+============================================
+ Index of :variable:`wsrep_provider` options
+============================================
+
+The following variables can be set and checked in the :variable:`wsrep_provider_options` variable. The value of a variable can be changed in the |MySQL| configuration file, :file:`my.cnf`, or by setting the variable value in the |MySQL| client.
+
+To change the value of a variable in :file:`my.cnf`, the following syntax should be used: ::
+
+  wsrep_provider_options="variable1=value1;[variable2=value2]"
+
+For example, to increase the size of the Galera buffer storage from the default value to 512MB, the :file:`my.cnf` option should look like: ::
+
+  wsrep_provider_options="gcache.size=512M"
+
+Dynamic variables can be changed from the |MySQL| client by using the ``SET GLOBAL`` syntax. To change the value of :variable:`pc.ignore_sb`, the following command should be used: ::
+
+  mysql> SET GLOBAL wsrep_provider_options="pc.ignore_sb=true";
+
+
+Index
+=====
+
+.. variable:: base_host
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: value of the :variable:`wsrep_node_address`
+
+This variable sets the value of the node's base IP. This is an IP address on which Galera listens for connections from other nodes. Setting this value incorrectly would stop the node from communicating with other nodes.
+
+.. variable:: base_port
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 4567
+
+This variable sets the port on which Galera listens for connections from other nodes. Setting this value incorrectly would stop the node from communicating with other nodes.
+
+.. variable:: cert.log_conflicts
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: no
+
+This variable controls whether details of certification conflicts are logged.
+
+.. variable:: evs.causal_keepalive_period
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: value of :variable:`evs.keepalive_period`
+
+This variable is used for development purposes and shouldn't be used by regular users.
+
+.. variable:: evs.debug_log_mask
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: Yes
+   :default: 0x1
+
+This variable is used for EVS (Extended Virtual Synchrony) debugging; it can be used only when :variable:`wsrep_debug` is set to ``ON``.
+
+.. variable:: evs.inactive_check_period
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: PT0.5S
+
+This variable defines how often to check for peer inactivity.
+
+.. variable:: evs.inactive_timeout
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: PT15S
+
+This variable defines the inactivity limit; once this limit is reached, the node will be pronounced dead.
+
+.. variable:: evs.info_log_mask
+
+   :cli: No
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 0
+
+This variable is used for controlling the extra EVS info logging.
+
+.. variable:: evs.install_timeout
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: Yes
+   :default: PT15S
+
+This variable defines the timeout on waiting for install message acknowledgments.
+
+.. variable:: evs.join_retrans_period
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: PT1S
+
+This variable defines how often to retransmit EVS join messages when forming cluster membership.
+
+.. variable:: evs.keepalive_period
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: PT1S
+
+This variable defines how often keepalive beacons will be emitted (in the absence of any other traffic).
+
+.. variable:: evs.max_install_timeouts
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 1
+
+This variable defines how many membership install rounds to try before giving up (total rounds will be :variable:`evs.max_install_timeouts` + 2).
+
+.. variable:: evs.send_window
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 4
+
+This variable defines the maximum number of data packets in replication at a time. For WAN setups, this value may be set considerably higher, e.g. 512. This variable must be no less than :variable:`evs.user_send_window`.
+
+.. variable:: evs.stats_report_period
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: PT1M
+
+This variable defines the reporting period of EVS statistics.
+
+.. variable:: evs.suspect_timeout
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: PT5S
+
+This variable defines the inactivity period after which the node is “suspected” to be dead. If all remaining nodes agree on that, the node will be dropped out of the cluster even before :variable:`evs.inactive_timeout` is reached.
+
+.. variable:: evs.use_aggregate
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: true
+
+When this variable is enabled, smaller packets will be aggregated into one.
+
+.. variable:: evs.user_send_window
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: Yes
+   :default: 2
+
+This variable defines the maximum number of data packets in replication at a time. For WAN setups, this value may be set considerably higher, e.g. 512.
+
+.. variable:: evs.version
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 0
+
+This status variable is used to check which EVS (Extended Virtual Synchrony) protocol version is used.
+
+.. variable:: evs.view_forget_timeout
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: P1D
+
+This variable defines the timeout after which past views will be dropped from history.
+
+.. variable:: gcache.dir
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: :term:`datadir`
+
+This variable can be used to define the location of the :file:`galera.cache` file.
+
+.. variable:: gcache.keep_pages_size
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Local, Global
+   :dyn: No
+   :default: 0
+
+This variable is used to specify the total size of the page storage pages to keep for caching purposes. If only page storage is enabled, one page is always present.
+
+.. variable:: gcache.mem_size
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 0
+
+
+.. variable:: gcache.name
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: /var/lib/mysql/galera.cache
+
+This variable can be used to specify the name of the Galera cache file.
+
+.. variable:: gcache.page_size
+
+   :cli: No
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 128M
+
+This variable can be used to specify the size of the page files in the page storage.
+
+.. variable:: gcache.size
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 128M
+
+Size of the transaction cache for Galera replication. This defines the size of the :file:`galera.cache` file, which is used as the source for |IST|. The bigger this value is, the better the chances that a re-joining node will get |IST| instead of |SST|.
+
+.. variable:: gcs.fc_debug
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 0
+
+This variable specifies after how many writesets debug statistics about flow control will be posted.
+
+.. variable:: gcs.fc_factor
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 1.0
+
+This variable is used for replication flow control. Replication is resumed when the slave queue drops below :variable:`gcs.fc_factor` * :variable:`gcs.fc_limit`.
+
+.. variable:: gcs.fc_limit
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 16
+
+This variable is used for replication flow control. When the slave queue exceeds this limit, replication will be paused.
+
+.. variable:: gcs.fc_master_slave
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: no
+
+This variable is used to specify whether there is only one master node in the cluster.
+
+.. variable:: gcs.max_packet_size
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 64500
+
+This variable is used to specify the writeset size after which writesets will be fragmented.
+
+.. variable:: gcs.max_throttle
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 0.25
+
+This variable specifies how much replication can be throttled during state transfer in order to avoid running out of memory. The value can be set to ``0.0`` if stopping replication is acceptable in order to finish the state transfer.
+
+.. variable:: gcs.recv_q_hard_limit
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 9223372036854775807
+
+This variable specifies the maximum allowed size of the receive queue. This should normally be half of (RAM + swap). If this limit is exceeded, Galera will abort the server.
+
+.. variable:: gcs.recv_q_soft_limit
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 0.25
+
+This variable specifies the fraction of :variable:`gcs.recv_q_hard_limit` after which the replication rate will be throttled.
+
+.. variable:: gcs.sync_donor
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: no
+
+This variable controls whether the rest of the cluster should stay in sync with the donor node. When this variable is set to ``Yes``, the whole cluster will be blocked if the donor node is blocked by SST.
+
+.. variable:: gmcast.listen_addr
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: tcp://0.0.0.0:4567
+
+This variable defines the address on which the node listens for connections from other nodes in the cluster.
+
+.. variable:: gmcast.mcast_addr
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: None
+
+This variable should be set if UDP multicast is to be used for replication.
+
+.. variable:: gmcast.mcast_ttl
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 1
+
+This variable can be used to define the TTL for multicast packets.
+
+.. variable:: gmcast.peer_timeout
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: PT3S
+
+This variable specifies the connection timeout to initiate message relaying.
+
+.. variable:: gmcast.time_wait
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: PT5S
+
+This variable specifies the time to wait before allowing a peer that was declared outside of the stable view to reconnect.
+
+.. variable:: gmcast.version
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 0
+
+This status variable is used to check which gmcast protocol version is being used.
+
+.. variable:: ist.recv_addr
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: value of :variable:`wsrep_node_address`
+
+This variable specifies the address on which the node listens for Incremental State Transfer (|IST|).
+
+.. variable:: pc.announce_timeout
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: PT3S
+
+.. variable:: pc.checksum
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: false
+
+This variable controls whether replicated messages will be checksummed or not.
+
+.. variable:: pc.ignore_quorum
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: Yes
+   :default: false
+
+When this variable is set to ``TRUE``, the node will completely ignore quorum calculations. This should be used with extreme caution even in master-slave setups, because slaves won't automatically reconnect to the master in this case.
+
+.. variable:: pc.ignore_sb
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: Yes
+   :default: false
+
+When this variable is set to ``TRUE``, the node will process updates even in the case of a split brain. This should be used with extreme caution in multi-master setups, but it should simplify things in a master-slave cluster (especially if only 2 nodes are used).
+
+.. variable:: pc.linger
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: PT20S
+
+This variable specifies the period for which the PC protocol waits for EVS termination.
+
+.. variable:: pc.npvo
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: false
+
+When this variable is set to ``TRUE``, a more recent primary component overrides older ones in case of conflicting primary components.
+
+.. variable:: pc.version
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 0
+
+This status variable is used to check which ``PC`` protocol version is being used.
+
+.. variable:: pc.wait_prim
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: true
+
+When this variable is set to ``true``, the node will wait for a primary component for the period defined in :variable:`pc.wait_prim_timeout`. This option can be used to bring up a non-primary component and make it primary with the :variable:`pc.bootstrap` option.
+
+.. variable:: pc.wait_prim_timeout
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: PT30S
+
+This variable is used to define how long the node should wait for a primary component.
+
+.. variable:: pc.weight
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: Yes
+   :default: 1
+
+This variable specifies the node weight that's going to be used for Weighted Quorum calculations.
+
+.. variable:: protonet.backend
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: asio
+
+This variable is used to define which transport backend should be used. Currently only ``ASIO`` is supported.
+
+.. variable:: protonet.version
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 0
+
+This status variable is used to show which transport backend protocol version is used.
+
+.. variable:: repl.causal_read_timeout
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: Yes
+   :default: PT30S
+
+This variable specifies the causal read timeout.
+
+.. variable:: repl.commit_order
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: No
+   :default: 3
+
+This variable is used to specify out-of-order committing (which is used to improve parallel applying performance). Allowed values are:
+
+ * ``0`` – BYPASS: all commit order monitoring is turned off (useful for measuring performance penalty)
+ * ``1`` – OOOC: allow out of order committing for all transactions
+ * ``2`` – LOCAL_OOOC: allow out of order committing only for local transactions
+ * ``3`` – NO_OOOC: no out of order committing is allowed (strict total order committing)
+
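Editor's note: the new page documents :variable:`wsrep_provider_options` as a semicolon-separated list of ``name=value`` pairs. As an illustration only, reading a single option back out of such a string can be sketched with a hypothetical helper (not part of PXC):

```bash
# Hypothetical helper: extract one option from a wsrep_provider_options
# string, which is a semicolon-separated list of name=value pairs.
get_provider_opt() {
    # $1 = options string, $2 = option name
    printf '%s\n' "$1" | tr ';' '\n' | sed -n "s/^[[:space:]]*$2=//p"
}

opts="gcache.size=512M; pc.ignore_sb=true"
get_provider_opt "$opts" gcache.size     # -> 512M
get_provider_opt "$opts" pc.ignore_sb    # -> true
```

Splitting on ``;`` first and only then matching ``name=`` avoids false matches when one option name is a prefix of another.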
=== modified file 'doc-pxc/source/wsrep-system-index.rst'
--- doc-pxc/source/wsrep-system-index.rst 2014-05-08 07:51:46 +0000
+++ doc-pxc/source/wsrep-system-index.rst 2014-05-14 11:32:28 +0000
@@ -143,6 +143,16 @@
  * MIXED
  * NONE - This option resets the forced state of the binlog format
 
+.. variable:: wsrep_load_data_splitting
+
+   :cli: Yes
+   :conf: Yes
+   :scope: Global
+   :dyn: Yes
+   :default: ON
+
+This variable controls whether ``LOAD DATA`` transaction splitting is performed or not.
+
 .. variable:: wsrep_log_conflicts
 
    :cli: Yes
