Merge lp:~hrvojem/percona-xtradb-cluster/bug1253055-5.5 into lp:percona-xtradb-cluster/5.5

Proposed by Hrvoje Matijakovic
Status: Merged
Approved by: Raghavendra D Prabhu
Approved revision: no longer in the source branch.
Merged at revision: 760
Proposed branch: lp:~hrvojem/percona-xtradb-cluster/bug1253055-5.5
Merge into: lp:percona-xtradb-cluster/5.5
Diff against target: 711 lines (+627/-9)
6 files modified
doc-pxc/source/faq.rst (+39/-1)
doc-pxc/source/howtos/ubuntu_howto.rst (+2/-7)
doc-pxc/source/index.rst (+1/-0)
doc-pxc/source/intro.rst (+1/-1)
doc-pxc/source/wsrep-provider-index.rst (+574/-0)
doc-pxc/source/wsrep-system-index.rst (+10/-0)
To merge this branch: bzr merge lp:~hrvojem/percona-xtradb-cluster/bug1253055-5.5
Reviewer: PXC core team (status: Pending)
Review via email: mp+219386@code.launchpad.net

Preview Diff

1=== modified file 'doc-pxc/source/faq.rst'
2--- doc-pxc/source/faq.rst 2014-05-08 16:52:49 +0000
3+++ doc-pxc/source/faq.rst 2014-05-14 11:32:28 +0000
4@@ -22,13 +22,51 @@
5
6 .. code-block:: mysql
7
8- SELECT * FROM someinnodbtable WHERE id=1;
9+ SELECT 1 FROM dual;
10
11 3 different results are possible:
12 * You get the row with id=1 (node is healthy)
13 * Unknown error (node is online but Galera is not connected/synced with the cluster)
14 * Connection error (node is not online)
15
16+You can also check the node health with the ``clustercheck`` script. First you need to set up the ``clustercheck`` user:
17+
18+.. code-block:: mysql
19+
20+ GRANT USAGE ON *.* TO 'clustercheck'@'localhost' IDENTIFIED BY PASSWORD '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19';
21+
22+You can then check the node health by running the ``clustercheck`` script:
23+
24+.. code-block:: bash
25+
26+ /usr/bin/clustercheck clustercheck password 0
27+
28+If the node is running correctly, you should get the following status: ::
29+
30+ HTTP/1.1 200 OK
31+ Content-Type: text/plain
32+ Connection: close
33+ Content-Length: 40
34+
35+ Percona XtraDB Cluster Node is synced.
36+
37+In case the node isn't synced, or if it is offline, the status will look like: ::
38+
39+ HTTP/1.1 503 Service Unavailable
40+ Content-Type: text/plain
41+ Connection: close
42+ Content-Length: 44
43+
44+ Percona XtraDB Cluster Node is not synced.
45+
46+.. note::
47+
48+ clustercheck syntax:
49+
50+ <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>
51+ Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local
52+ Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local
53+
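The ``server_args`` parameters shown in the note are meant for an ``xinetd`` service definition, which is how ``clustercheck`` is usually exposed to load balancers. A minimal sketch, assuming the conventional ``mysqlchk`` service name and port 9200 (path, credentials, and values are illustrative): ::

    # /etc/xinetd.d/mysqlchk (illustrative; add "mysqlchk 9200/tcp" to /etc/services if needed)
    service mysqlchk
    {
        disable        = no
        socket_type    = stream
        port           = 9200
        wait           = no
        user           = nobody
        server         = /usr/bin/clustercheck
        # <user> <pass> <available_when_donor> <log_file> <available_when_readonly> <defaults_extra_file>
        server_args    = clustercheck password 1 /var/log/clustercheck.log 0 /etc/my.cnf.local
        log_on_failure += USERID
        only_from      = 0.0.0.0/0
    }
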
54 Q: How does XtraDB Cluster handle big transactions?
55 ==================================================
56 A: XtraDB Cluster populates write set in memory before replication and this sets one limit for how large transactions make sense. There are wsrep variables for max row count and max size of write set to make sure that server is not running out of memory.
57
58=== modified file 'doc-pxc/source/howtos/ubuntu_howto.rst'
59--- doc-pxc/source/howtos/ubuntu_howto.rst 2014-02-06 09:57:33 +0000
60+++ doc-pxc/source/howtos/ubuntu_howto.rst 2014-05-14 11:32:28 +0000
61@@ -31,11 +31,7 @@
62 Installation
63 ------------
64
65-Percona repository should be set up as described in the :ref:`apt-repo` guide. Following command will install |Percona XtraDB Cluster| packages: ::
66-
67- $ apt-get install percona-xtradb-cluster-server-5.5 percona-xtradb-cluster-client-5.5 percona-xtradb-cluster-galera-2.x
68-
69-When these two commands have been executed successfully on all three nodes |Percona XtraDB Cluster| is installed.
70+Installation information can be found in the :ref:`installation` guide.
71
72 .. note::
73
74@@ -83,11 +79,10 @@
75 # Authentication for SST method
76 wsrep_sst_auth="sstuser:s3cretPass"
77
78-.. note:: For the first member of the cluster variable :variable:`wsrep_cluster_address` should contain empty ``gcomm://`` when the cluster is being bootstrapped. But as soon as we have bootstrapped the cluster and have at least one more node joined that line can be removed from the :file:`my.cnf` configuration file and the one where :variable:`wsrep_cluster_address` contains all three node addresses. In case the node gets restarted and without making this change it will make bootstrap new cluster instead of joining the existing one.
79
80 After this, first node can be started with the following command: ::
81
82- [root@pxc1 ~]# /etc/init.d/mysql start
83+ [root@pxc1 ~]# /etc/init.d/mysql bootstrap-pxc
84
85 This command will start the first node and bootstrap the cluster (more information about bootstrapping the cluster can be found in the :ref:`bootstrap` manual).
86
87
88=== modified file 'doc-pxc/source/index.rst'
89--- doc-pxc/source/index.rst 2013-12-30 17:01:54 +0000
90+++ doc-pxc/source/index.rst 2014-05-14 11:32:28 +0000
91@@ -98,6 +98,7 @@
92 release-notes/release-notes_index
93 wsrep-status-index
94 wsrep-system-index
95+ wsrep-provider-index
96 wsrep-files-index
97 faq
98 glossary
99
100=== modified file 'doc-pxc/source/intro.rst'
101--- doc-pxc/source/intro.rst 2013-02-01 13:41:34 +0000
102+++ doc-pxc/source/intro.rst 2014-05-14 11:32:28 +0000
103@@ -44,7 +44,7 @@
104
105 Percona XtraDB Cluster has: Consistency and Availability.
106
107-That is MySQL replication does not guarantee Consistency of your data, while Percona XtraDB Cluster provides data Consistency. (And yes, Percona XtraDB Cluster looses Partitioning tolerance property).
108+That is |MySQL| replication does not guarantee Consistency of your data, while |Percona XtraDB Cluster| provides data Consistency (but it loses the Partitioning tolerance property).
109
110 Components
111 ==========
112
113=== added file 'doc-pxc/source/wsrep-provider-index.rst'
114--- doc-pxc/source/wsrep-provider-index.rst 1970-01-01 00:00:00 +0000
115+++ doc-pxc/source/wsrep-provider-index.rst 2014-05-14 11:32:28 +0000
116@@ -0,0 +1,574 @@
117+.. _wsrep_provider_index:
118+
119+============================================
120+ Index of :variable:`wsrep_provider` options
121+============================================
122+
123+The following variables can be set and checked in the :variable:`wsrep_provider_options` variable. The value can be changed in the |MySQL| configuration file, :file:`my.cnf`, or by setting it from the |MySQL| client.
124+
125+To change a value in :file:`my.cnf`, the following syntax should be used: ::
126+
127+ wsrep_provider_options="variable1=value1;[variable2=value2]"
128+
129+For example, to increase the size of the Galera buffer storage from the default value to 512MB, the :file:`my.cnf` option should look like: ::
130+
131+ wsrep_provider_options="gcache.size=512M"
132+
133+Dynamic variables can be changed from the |MySQL| client by using the ``SET GLOBAL`` syntax. To change the value of :variable:`pc.ignore_sb`, the following command should be used: ::
134+
135+ mysql> SET GLOBAL wsrep_provider_options="pc.ignore_sb=true";
136+
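The full set of options currently in effect can be inspected from the |MySQL| client as well: ::

    mysql> SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G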
137+
138+Index
139+=====
140+
141+.. variable:: base_host
142+
143+ :cli: Yes
144+ :conf: Yes
145+ :scope: Global
146+ :dyn: No
147+ :default: value of the :variable:`wsrep_node_address`
148+
149+This variable sets the value of the node's base IP. This is an IP address on which Galera listens for connections from other nodes. Setting this value incorrectly would stop the node from communicating with other nodes.
150+
151+.. variable:: base_port
152+
153+ :cli: Yes
154+ :conf: Yes
155+ :scope: Global
156+ :dyn: No
157+ :default: 4567
158+
159+This variable sets the port on which Galera listens for connections from other nodes. Setting this value incorrectly would stop the node from communicating with other nodes.
160+
161+.. variable:: cert.log_conflicts
162+
163+ :cli: Yes
164+ :conf: Yes
165+ :scope: Global
166+ :dyn: No
167+ :default: no
168+
169+.. variable:: evs.causal_keepalive_period
170+
171+ :cli: Yes
172+ :conf: Yes
173+ :scope: Global
174+ :dyn: No
175+ :default: value of :variable:`evs.keepalive_period`
176+
177+This variable is used for development purposes and shouldn't be used by regular users.
178+
179+.. variable:: evs.debug_log_mask
180+
181+ :cli: Yes
182+ :conf: Yes
183+ :scope: Global
184+ :dyn: Yes
185+ :default: 0x1
186+
187+This variable is used for EVS (Extended Virtual Synchrony) debugging. It can be used only when :variable:`wsrep_debug` is set to ``ON``.
188+
189+.. variable:: evs.inactive_check_period
190+
191+ :cli: Yes
192+ :conf: Yes
193+ :scope: Global
194+ :dyn: No
195+ :default: PT0.5S
196+
197+This variable defines how often to check for peer inactivity.
198+
199+.. variable:: evs.inactive_timeout
200+
201+ :cli: Yes
202+ :conf: Yes
203+ :scope: Global
204+ :dyn: No
205+ :default: PT15S
206+
207+This variable defines the inactivity limit; once this limit is reached, the node will be pronounced dead.
208+
209+.. variable:: evs.info_log_mask
210+
211+ :cli: No
212+ :conf: Yes
213+ :scope: Global
214+ :dyn: No
215+ :default: 0
216+
217+This variable is used for controlling the extra EVS info logging.
218+
219+.. variable:: evs.install_timeout
220+
221+ :cli: Yes
222+ :conf: Yes
223+ :scope: Global
224+ :dyn: Yes
225+ :default: PT15S
226+
227+This variable defines the timeout on waiting for install message acknowledgments.
228+
229+.. variable:: evs.join_retrans_period
230+
231+ :cli: Yes
232+ :conf: Yes
233+ :scope: Global
234+ :dyn: No
235+ :default: PT1S
236+
237+This variable defines how often to retransmit EVS join messages when forming cluster membership.
238+
239+.. variable:: evs.keepalive_period
240+
241+ :cli: Yes
242+ :conf: Yes
243+ :scope: Global
244+ :dyn: No
245+ :default: PT1S
246+
247+This variable defines how often keepalive beacons will be emitted (in the absence of any other traffic).
248+
249+.. variable:: evs.max_install_timeouts
250+
251+ :cli: Yes
252+ :conf: Yes
253+ :scope: Global
254+ :dyn: No
255+ :default: 1
256+
257+This variable defines how many membership install rounds to try before giving up (total rounds will be :variable:`evs.max_install_timeouts` + 2).
258+
259+.. variable:: evs.send_window
260+
261+ :cli: Yes
262+ :conf: Yes
263+ :scope: Global
264+ :dyn: No
265+ :default: 4
266+
267+This variable defines the maximum number of data packets in replication at a time. For WAN setups it may be set considerably higher, e.g. 512. This variable must be no less than :variable:`evs.user_send_window`.
268+
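For example, for a WAN setup both send windows can be raised together in :file:`my.cnf` (the values follow the suggestion above and are illustrative): ::

    wsrep_provider_options="evs.send_window=512;evs.user_send_window=512"
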
269+.. variable:: evs.stats_report_period
270+
271+ :cli: Yes
272+ :conf: Yes
273+ :scope: Global
274+ :dyn: No
275+ :default: PT1M
276+
277+This variable defines the control period of EVS statistics reporting.
278+
279+.. variable:: evs.suspect_timeout
280+
281+ :cli: Yes
282+ :conf: Yes
283+ :scope: Global
284+ :dyn: No
285+ :default: PT5S
286+
287+This variable defines the inactivity period after which the node is “suspected” to be dead. If all remaining nodes agree on that, the node will be dropped out of the cluster even before :variable:`evs.inactive_timeout` is reached.
288+
289+.. variable:: evs.use_aggregate
290+
291+ :cli: Yes
292+ :conf: Yes
293+ :scope: Global
294+ :dyn: No
295+ :default: true
296+
297+When this variable is enabled, smaller packets will be aggregated into one.
298+
299+.. variable:: evs.user_send_window
300+
301+ :cli: Yes
302+ :conf: Yes
303+ :scope: Global
304+ :dyn: Yes
305+ :default: 2
306+
307+This variable defines the maximum number of data packets in replication at a time. For WAN setups it may be set considerably higher, e.g. 512.
308+
309+.. variable:: evs.version
310+
311+ :cli: Yes
312+ :conf: Yes
313+ :scope: Global
314+ :dyn: No
315+ :default: 0
316+
317+This status variable is used to check which EVS (Extended Virtual Synchrony) protocol version is used.
318+
319+.. variable:: evs.view_forget_timeout
320+
321+ :cli: Yes
322+ :conf: Yes
323+ :scope: Global
324+ :dyn: No
325+ :default: P1D
326+
327+This variable defines the timeout after which past views will be dropped from history.
328+
329+.. variable:: gcache.dir
330+
331+ :cli: Yes
332+ :conf: Yes
333+ :scope: Global
334+ :dyn: No
335+ :default: :term:`datadir`
336+
337+This variable can be used to define the location of the :file:`galera.cache` file.
338+
339+.. variable:: gcache.keep_pages_size
340+
341+ :cli: Yes
342+ :conf: Yes
343+ :scope: Local, Global
344+ :dyn: No
345+ :default: 0
346+
347+This variable is used to specify the total size of the page storage pages to keep for caching purposes. If only page storage is enabled, one page is always present.
348+
349+.. variable:: gcache.mem_size
350+
351+ :cli: Yes
352+ :conf: Yes
353+ :scope: Global
354+ :dyn: No
355+ :default: 0
356+
357+
358+.. variable:: gcache.name
359+
360+ :cli: Yes
361+ :conf: Yes
362+ :scope: Global
363+ :dyn: No
364+ :default: /var/lib/mysql/galera.cache
365+
366+This variable can be used to specify the name of the Galera cache file.
367+
368+.. variable:: gcache.page_size
369+
370+ :cli: No
371+ :conf: Yes
372+ :scope: Global
373+ :dyn: No
374+ :default: 128M
375+
376+This variable can be used to specify the size of the page files in the page storage.
377+
378+.. variable:: gcache.size
379+
380+ :cli: Yes
381+ :conf: Yes
382+ :scope: Global
383+ :dyn: No
384+ :default: 128M
385+
386+Size of the transaction cache for Galera replication. This defines the size of the :file:`galera.cache` file, which is used as the source for |IST|. If this value is bigger, there are better chances that the re-joining node will get |IST| instead of |SST|.
387+
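For example, to enlarge the cache and place it on a separate disk, this option can be combined with :variable:`gcache.dir` in :file:`my.cnf` (size and path are illustrative): ::

    wsrep_provider_options="gcache.size=1G;gcache.dir=/mnt/galera"
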
388+.. variable:: gcs.fc_debug
389+
390+ :cli: Yes
391+ :conf: Yes
392+ :scope: Global
393+ :dyn: No
394+ :default: 0
395+
396+This variable specifies after how many writesets the debug statistics about SST flow control will be posted.
397+
398+.. variable:: gcs.fc_factor
399+
400+ :cli: Yes
401+ :conf: Yes
402+ :scope: Global
403+ :dyn: No
404+ :default: 1.0
405+
406+This variable is used for replication flow control. Replication will be paused until the slave queue size drops below :variable:`gcs.fc_factor` * :variable:`gcs.fc_limit`.
407+
408+.. variable:: gcs.fc_limit
409+
410+ :cli: Yes
411+ :conf: Yes
412+ :scope: Global
413+ :dyn: No
414+ :default: 16
415+
416+This variable is used for replication flow control. When the slave queue exceeds this limit, replication will be paused.
417+
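For example, to let the slave queue grow larger before replication is paused, and to resume replication closer to that limit, the two flow control variables can be combined in :file:`my.cnf` (values are illustrative): ::

    wsrep_provider_options="gcs.fc_limit=256;gcs.fc_factor=0.9"
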
418+.. variable:: gcs.fc_master_slave
419+
420+ :cli: Yes
421+ :conf: Yes
422+ :scope: Global
423+ :dyn: No
424+ :default: no
425+
426+This variable is used to specify if there is only one master node in the cluster.
427+
428+.. variable:: gcs.max_packet_size
429+
430+ :cli: Yes
431+ :conf: Yes
432+ :scope: Global
433+ :dyn: No
434+ :default: 64500
435+
436+This variable is used to specify the writeset size above which writesets will be fragmented.
437+
438+.. variable:: gcs.max_throttle
439+
440+ :cli: Yes
441+ :conf: Yes
442+ :scope: Global
443+ :dyn: No
444+ :default: 0.25
445+
446+This variable specifies how much replication can be throttled during the state transfer in order to avoid running out of memory. The value can be set to ``0.0`` if stopping replication is acceptable in order to finish the state transfer.
447+
448+.. variable:: gcs.recv_q_hard_limit
449+
450+ :cli: Yes
451+ :conf: Yes
452+ :scope: Global
453+ :dyn: No
454+ :default: 9223372036854775807
455+
456+This variable specifies the maximum allowed size of the receive queue. This should normally be half of (RAM + swap). If this limit is exceeded, Galera will abort the server.
457+
458+.. variable:: gcs.recv_q_soft_limit
459+
460+ :cli: Yes
461+ :conf: Yes
462+ :scope: Global
463+ :dyn: No
464+ :default: 0.25
465+
466+This variable specifies the fraction of the :variable:`gcs.recv_q_hard_limit` after which replication rate will be throttled.
467+
468+.. variable:: gcs.sync_donor
469+
470+ :cli: Yes
471+ :conf: Yes
472+ :scope: Global
473+ :dyn: No
474+ :default: no
475+
476+This variable controls whether the rest of the cluster should be in sync with the donor node. When this variable is set to ``Yes``, the whole cluster will be blocked if the donor node is blocked by SST.
477+
478+.. variable:: gmcast.listen_addr
479+
480+ :cli: Yes
481+ :conf: Yes
482+ :scope: Global
483+ :dyn: No
484+ :default: tcp://0.0.0.0:4567
485+
486+This variable defines the address on which the node listens for connections from other nodes in the cluster.
487+
488+.. variable:: gmcast.mcast_addr
489+
490+ :cli: Yes
491+ :conf: Yes
492+ :scope: Global
493+ :dyn: No
494+ :default: None
495+
496+This variable should be set if UDP multicast is to be used for replication.
497+
498+.. variable:: gmcast.mcast_ttl
499+
500+ :cli: Yes
501+ :conf: Yes
502+ :scope: Global
503+ :dyn: No
504+ :default: 1
505+
506+This variable can be used to define TTL for multicast packets.
507+
508+.. variable:: gmcast.peer_timeout
509+
510+ :cli: Yes
511+ :conf: Yes
512+ :scope: Global
513+ :dyn: No
514+ :default: PT3S
515+
516+This variable specifies the connection timeout to initiate message relaying.
517+
518+.. variable:: gmcast.time_wait
519+
520+ :cli: Yes
521+ :conf: Yes
522+ :scope: Global
523+ :dyn: No
524+ :default: PT5S
525+
526+This variable specifies the time to wait before allowing a peer that was declared outside of the stable view to reconnect.
527+
528+.. variable:: gmcast.version
529+
530+ :cli: Yes
531+ :conf: Yes
532+ :scope: Global
533+ :dyn: No
534+ :default: 0
535+
536+This status variable is used to check which gmcast protocol version is being used.
537+
538+.. variable:: ist.recv_addr
539+
540+ :cli: Yes
541+ :conf: Yes
542+ :scope: Global
543+ :dyn: No
544+ :default: value of :variable:`wsrep_node_address`
545+
546+This variable specifies the address on which the node listens for Incremental State Transfer (|IST|).
547+
548+.. variable:: pc.announce_timeout
549+
550+ :cli: Yes
551+ :conf: Yes
552+ :scope: Global
553+ :dyn: No
554+ :default: PT3S
555+
556+.. variable:: pc.checksum
557+
558+ :cli: Yes
559+ :conf: Yes
560+ :scope: Global
561+ :dyn: No
562+ :default: false
563+
564+This variable controls whether the replicated messages will be checksummed or not.
565+
566+.. variable:: pc.ignore_quorum
567+
568+ :cli: Yes
569+ :conf: Yes
570+ :scope: Global
571+ :dyn: Yes
572+ :default: false
573+
574+When this variable is set to ``TRUE``, the node will completely ignore quorum calculations. This should be used with extreme caution, even in master-slave setups, because slaves won't automatically reconnect to the master in this case.
575+
576+.. variable:: pc.ignore_sb
577+
578+ :cli: Yes
579+ :conf: Yes
580+ :scope: Global
581+ :dyn: Yes
582+ :default: false
583+
584+When this variable is set to ``TRUE``, the node will process updates even in the case of a split brain. This should be used with extreme caution in multi-master setups, but it should simplify things in a master-slave cluster (especially if only 2 nodes are used).
585+
586+.. variable:: pc.linger
587+
588+ :cli: Yes
589+ :conf: Yes
590+ :scope: Global
591+ :dyn: No
592+ :default: PT20S
593+
594+This variable specifies the period for which the PC protocol waits for EVS termination.
595+
596+.. variable:: pc.npvo
597+
598+ :cli: Yes
599+ :conf: Yes
600+ :scope: Global
601+ :dyn: No
602+ :default: false
603+
604+When this variable is set to ``TRUE``, a more recent primary component overrides older ones in case of conflicting primary components.
605+
606+.. variable:: pc.version
607+
608+ :cli: Yes
609+ :conf: Yes
610+ :scope: Global
611+ :dyn: No
612+ :default: 0
613+
614+This status variable is used to check which ``PC`` protocol version is being used.
615+
616+.. variable:: pc.wait_prim
617+
618+ :cli: Yes
619+ :conf: Yes
620+ :scope: Global
621+ :dyn: No
622+ :default: true
623+
624+When this variable is set to ``true``, the node will wait up to :variable:`pc.wait_prim_timeout` for the primary component. This option can be used to bring up a non-primary component and make it primary with the :variable:`pc.bootstrap` option.
625+
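A non-primary component can then be promoted from the |MySQL| client, for example: ::

    mysql> SET GLOBAL wsrep_provider_options="pc.bootstrap=true";
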
626+.. variable:: pc.wait_prim_timeout
627+
628+ :cli: Yes
629+ :conf: Yes
630+ :scope: Global
631+ :dyn: No
632+ :default: PT30S
633+
634+This variable is used to define how long the node should wait for the primary component.
635+
636+.. variable:: pc.weight
637+
638+ :cli: Yes
639+ :conf: Yes
640+ :scope: Global
641+ :dyn: Yes
642+ :default: 1
643+
644+This variable specifies the node weight that's going to be used for Weighted Quorum calculations.
645+
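For example, in a two-datacenter cluster the nodes in the primary datacenter can be given a higher weight so that they retain quorum if the link between the sites fails (the weight is illustrative): ::

    wsrep_provider_options="pc.weight=2"
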
646+.. variable:: protonet.backend
647+
648+ :cli: Yes
649+ :conf: Yes
650+ :scope: Global
651+ :dyn: No
652+ :default: asio
653+
654+This variable is used to define which transport backend should be used. Currently only ``ASIO`` is supported.
655+
656+.. variable:: protonet.version
657+
658+ :cli: Yes
659+ :conf: Yes
660+ :scope: Global
661+ :dyn: No
662+ :default: 0
663+
664+This status variable is used to show which transport backend protocol version is used.
665+
666+.. variable:: repl.causal_read_timeout
667+
668+ :cli: Yes
669+ :conf: Yes
670+ :scope: Global
671+ :dyn: Yes
672+ :default: PT30S
673+
674+This variable specifies the causal read timeout.
675+
676+.. variable:: repl.commit_order
677+
678+ :cli: Yes
679+ :conf: Yes
680+ :scope: Global
681+ :dyn: No
682+ :default: 3
683+
684+This variable is used to specify Out-Of-Order committing (which is used to improve parallel applying performance). Allowed values are:
685+
686+ * ``0`` – BYPASS: all commit order monitoring is turned off (useful for measuring performance penalty)
687+ * ``1`` – OOOC: allow out of order committing for all transactions
688+ * ``2`` – LOCAL_OOOC: allow out of order committing only for local transactions
689+ * ``3`` – NO_OOOC: no out of order committing is allowed (strict total order committing)
690+
691
692=== modified file 'doc-pxc/source/wsrep-system-index.rst'
693--- doc-pxc/source/wsrep-system-index.rst 2014-05-08 07:51:46 +0000
694+++ doc-pxc/source/wsrep-system-index.rst 2014-05-14 11:32:28 +0000
695@@ -143,6 +143,16 @@
696 * MIXED
697 * NONE - This option resets the forced state of the binlog format
698
699+.. variable:: wsrep_load_data_splitting
700+
701+ :cli: Yes
702+ :conf: Yes
703+ :scope: Global
704+ :dyn: Yes
705+ :default: ON
706+
707+This variable controls whether ``LOAD DATA`` transaction splitting is enabled or not.
708+
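Since the variable is dynamic, it can be changed at runtime, for example: ::

    mysql> SET GLOBAL wsrep_load_data_splitting=OFF;
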
709 .. variable:: wsrep_log_conflicts
710
711 :cli: Yes
