Merge lp:~hrvojem/percona-xtradb-cluster/bug1259649-5.6 into lp:percona-xtradb-cluster

Proposed by Hrvoje Matijakovic
Status: Merged
Approved by: Hrvoje Matijakovic
Approved revision: no longer in the source branch.
Merged at revision: 764
Proposed branch: lp:~hrvojem/percona-xtradb-cluster/bug1259649-5.6
Merge into: lp:percona-xtradb-cluster
Diff against target: 873 lines (+750/-6)
11 files modified
doc-pxc/source/conf.py (+1/-1)
doc-pxc/source/faq.rst (+39/-1)
doc-pxc/source/index.rst (+1/-0)
doc-pxc/source/installation/yum_repo.rst (+6/-1)
doc-pxc/source/intro.rst (+1/-1)
doc-pxc/source/limitation.rst (+1/-1)
doc-pxc/source/manual/xtrabackup_sst.rst (+7/-0)
doc-pxc/source/release-notes/Percona-XtraDB-Cluster-5.6.15-25.5.rst (+54/-0)
doc-pxc/source/release-notes/release-notes_index.rst (+1/-0)
doc-pxc/source/wsrep-provider-index.rst (+638/-0)
doc-pxc/source/wsrep-system-index.rst (+1/-1)
To merge this branch: bzr merge lp:~hrvojem/percona-xtradb-cluster/bug1259649-5.6
Reviewer Review Type Date Requested Status
Raghavendra D Prabhu (community) Approve
Review via email: mp+211172@code.launchpad.net
Revision history for this message
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

+ /usr/bin/clustercheck clustercheck password 0

is fine. But mention this part:

        # <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
        # Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
        # Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local"
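
For reference, these arguments belong in the xinetd service entry that invokes clustercheck. A minimal sketch, assuming a mysqlchk service name and port 9200 (both conventional choices, not taken from this branch):

        # /etc/xinetd.d/mysqlchk -- illustrative xinetd entry for clustercheck
        # server_args: <user> <pass> <available_when_donor> <log_file> <available_when_readonly> <defaults_extra_file>
        service mysqlchk
        {
                disable      = no
                type         = UNLISTED
                socket_type  = stream
                port         = 9200
                wait         = no
                user         = nobody
                server       = /usr/bin/clustercheck
                server_args  = clustercheck password 1 /var/log/clustercheck.log 0 /etc/my.cnf.local
        }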

+In CentOS ``mysql-libs`` conflicts with ``Percona-XtraDB-Cluster-server-56.x86_64`` package. To avoid this you can replace the ``mysql-libs`` with ``Percona-Server-shared-51.x86_64`` package.

To replace it, you need to remove mysql-libs (there is no 'replace' in yum) before installing the PXC packages. Dependency resolution ensures that -shared-51 is installed.
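
A minimal sketch of that flow on CentOS (package names as above):

        # yum has no 'replace', so remove the conflicting package first
        yum remove mysql-libs
        # dependency resolution then pulls in Percona-Server-shared-51
        yum install Percona-XtraDB-Cluster-server-56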

+ - Currently replication works only with |InnoDB| storage engine. Any writes to tables of other types, including system (mysql.*) tables, are not replicated. However, ``DDL`` statements are replicated in statement level, and changes to mysql.* tables will get replicated that way. So, you can safely issue: ``CREATE USER...``, but issuing: ``INSERT INTO mysql.user...``, will not be replicated.

Mention wsrep_replicate_myisam here.
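
Something like the following would illustrate it (a sketch; the setting is experimental and can also be changed at runtime with SET GLOBAL):

        # my.cnf -- enable experimental MyISAM replication
        [mysqld]
        wsrep_replicate_myisam = ON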

+Based on `Percona Server 5.6.15-63.0 <http://www.percona.com/doc/percona-server/5.6/release-notes/Percona-Server-5.6.15-63.0.html>`_ including all the bug fixes in it, `Galera Replicator 3.3 <https://launchpad.net/galera/+milestone/25.3.3>`_ and on `Codership wsrep API 5.6.15-25.2 <https://launchpad.net/codership-mysql/+milestone/5.6.15-25.2>`_ is now the first **General Availability** release. All of |Percona|'s software is open-source and free, all the details of the release can be found in the `5.6.15-25.5 milestone <https://launchpad.net/percona-xtradb-cluster/+milestone/5.6.15-25.5>`_ at Launchpad.

It is Galera 3.4 here. https://launchpad.net/galera/+milestone/25.3.4

Codership API 25.5 - https://launchpad.net/codership-mysql/+milestone/5.6.16-25.5

+ Bug fixed :bug:`1219605`.

Replication of partition tables without binlogging enabled
failed, partition truncation didn't work because of lack of TO
isolation there.

+ Bug fixed :bug:`1272982`.

Ignore this. :)

+ Bug fixed :bug:`1281682`.

New options (under [sst]) are added: inno-backup-opts, inno-apply-opts, inno-move-opts which pass options to backup, apply and move stages of innobackupex.
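
An illustrative [sst] section for the release note (the option values here are made-up examples, not defaults):

        # my.cnf -- pass extra flags to the innobackupex stages during SST
        [sst]
        inno-backup-opts = "--parallel=4"
        inno-apply-opts  = "--use-memory=1G"
        inno-move-opts   = "--force-non-empty-directories"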

+ Using ``LOAD DATA INFILE`` in with :variable:`autocommit` set to ``0`` and :variable:`wsrep_load_data_splitting` set to ``ON`` could lead to data loss. Bug fixed :bug:`1281810`.

Let us not mention 'data loss' here. It was not that explicit of a loss. Instead, 'incomplete loading of records while chunking' would be better here.
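
The affected scenario, for reference (table and file names are made up):

        SET autocommit = 0;
        SET GLOBAL wsrep_load_data_splitting = ON;
        -- splitting commits the load in ~10,000-row chunks; before the fix
        -- some chunks could be missed, i.e. incomplete loading of records
        LOAD DATA INFILE '/tmp/data.csv' INTO TABLE t1;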

+ Bug fixed :bug:`1284670`.

Not user visible, safe to skip (may cause unneeded alarm among users :)).

+ If a particular node was setup to be the donor via setting :variable:`wsrep_sst_donor` and the donor node was in desynced state for some reason, then the joiner would wait for the donor to become synced again which could take a long time. Bug fixed :bug:`1285380`.

This is a bit more complex. Best way to describe would be to say "The joiner would wait and not fall back to choosing other potential donor ...
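
The trailing-comma form is the relevant bit here (node name is illustrative):

        # my.cnf
        [mysqld]
        # prefer node1 as donor; the trailing comma should let the joiner
        # fall back to any other suitable donor instead of waiting forever
        wsrep_sst_donor = node1,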


review: Needs Fixing
Revision history for this message
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

One more thing: for sst-initial-timeout, the default is 100 seconds; a value of 0 disables it.
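
That is, something along these lines in my.cnf (value illustrative):

        [sst]
        # wait up to 130 seconds for the first SST packet; 0 disables the timeout
        sst-initial-timeout = 130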

Revision history for this message
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

+In CentOS ``mysql-libs`` conflicts with ``Percona-XtraDB-Cluster-server-56.x86_64`` package. To avoid this you need to remove the ``mysql-libs`` package before installing |Percona XtraDB Cluster|.

In addition, mention that the -shared-51 package provides that dependency during installation if required.

+ Bug fixed :bug:`1284670`.

Skip this (as mentioned before).

Others look good.

review: Approve

Preview Diff

1=== modified file 'doc-pxc/source/conf.py'
2--- doc-pxc/source/conf.py 2014-02-20 10:50:23 +0000
3+++ doc-pxc/source/conf.py 2014-03-20 09:44:42 +0000
4@@ -55,7 +55,7 @@
5 # The short X.Y version.
6 version = '5.6.15'
7 # The full version, including alpha/beta/rc tags.
8-release = '5.6.15-25.4'
9+release = '5.6.15-25.5'
10
11 # The language for content autogenerated by Sphinx. Refer to documentation
12 # for a list of supported languages.
13
14=== modified file 'doc-pxc/source/faq.rst'
15--- doc-pxc/source/faq.rst 2013-09-05 13:37:01 +0000
16+++ doc-pxc/source/faq.rst 2014-03-20 09:44:42 +0000
17@@ -22,13 +22,51 @@
18
19 .. code-block:: mysql
20
21- SELECT * FROM someinnodbtable WHERE id=1;
22+ SELECT 1 FROM dual;
23
24 3 different results are possible:
25 * You get the row with id=1 (node is healthy)
26 * Unknown error (node is online but Galera is not connected/synced with the cluster)
27 * Connection error (node is not online)
28
29+You can also check the node health with the ``clustercheck`` script. You need to set up the ``clustercheck`` user:
30+
31+.. code-block:: mysql
32+
33+ GRANT USAGE ON *.* TO 'clustercheck'@'localhost' IDENTIFIED BY PASSWORD '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19';
34+
35+You can then check the node health by running the ``clustercheck`` script:
36+
37+.. code-block:: bash
38+
39+ /usr/bin/clustercheck clustercheck password 0
40+
41+If the node is running correctly, you should get the following status: ::
42+
43+ HTTP/1.1 200 OK
44+ Content-Type: text/plain
45+ Connection: close
46+ Content-Length: 40
47+
48+ Percona XtraDB Cluster Node is synced.
49+
50+In case the node isn't synced, or if it is offline, the status will look like: ::
51+
52+ HTTP/1.1 503 Service Unavailable
53+ Content-Type: text/plain
54+ Connection: close
55+ Content-Length: 44
56+
57+ Percona XtraDB Cluster Node is not synced.
58+
59+.. note::
60+
61+ clustercheck syntax:
62+
63+ <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>
64+ Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local
65+ Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local
66+
67 Q: How does XtraDB Cluster handle big transaction?
68 ==================================================
69 A: XtraDB Cluster populates write set in memory before replication and this sets one limit for how large transactions make sense. There are wsrep variables for max row count and max size of of write set to make sure that server is not running out of memory.
70
71=== modified file 'doc-pxc/source/index.rst'
72--- doc-pxc/source/index.rst 2014-01-30 13:23:38 +0000
73+++ doc-pxc/source/index.rst 2014-03-20 09:44:42 +0000
74@@ -99,6 +99,7 @@
75 release-notes/release-notes_index
76 wsrep-status-index
77 wsrep-system-index
78+ wsrep-provider-index
79 wsrep-files-index
80 faq
81 glossary
82
83=== modified file 'doc-pxc/source/installation/yum_repo.rst'
84--- doc-pxc/source/installation/yum_repo.rst 2014-01-29 15:10:12 +0000
85+++ doc-pxc/source/installation/yum_repo.rst 2014-03-20 09:44:42 +0000
86@@ -38,7 +38,7 @@
87
88 .. warning::
89
90- In order to sucessfully install |Percona XtraDB Cluster| ``socat`` package will need to be installed first.
91+ In order to successfully install |Percona XtraDB Cluster|, the ``socat`` package will need to be installed first. The ``socat`` package can be installed from the `EPEL <https://fedoraproject.org/wiki/EPEL>`_ repositories.
92
93
94 Percona `yum` Experimental repository
95@@ -50,3 +50,8 @@
96
97 .. note::
98 This repository works for both RHEL/CentOS 5 and RHEL/CentOS 6
99+
100+Resolving package conflicts
101+===========================
102+
103+In CentOS, ``mysql-libs`` conflicts with the ``Percona-XtraDB-Cluster-server-56.x86_64`` package. To avoid this, you need to remove the ``mysql-libs`` package before installing |Percona XtraDB Cluster|. The ``Percona-Server-shared-51.x86_64`` package provides that dependency during installation if required.
104
105=== modified file 'doc-pxc/source/intro.rst'
106--- doc-pxc/source/intro.rst 2014-01-29 15:10:12 +0000
107+++ doc-pxc/source/intro.rst 2014-03-20 09:44:42 +0000
108@@ -44,7 +44,7 @@
109
110 |Percona XtraDB Cluster| has: Consistency and Availability.
111
112-That is |MySQL| replication does not guarantee Consistency of your data, while |Percona XtraDB Cluster| provides data Consistency (but it looses Partitioning tolerance property).
113+That is |MySQL| replication does not guarantee Consistency of your data, while |Percona XtraDB Cluster| provides data Consistency (but it loses Partitioning tolerance property).
114
115 Components
116 ==========
117
118=== modified file 'doc-pxc/source/limitation.rst'
119--- doc-pxc/source/limitation.rst 2013-11-21 11:29:19 +0000
120+++ doc-pxc/source/limitation.rst 2014-03-20 09:44:42 +0000
121@@ -6,7 +6,7 @@
122
123 There are some limitations which you should be aware of. Some of them will be eliminated later as product is improved and some are design limitations.
124
125- - Currently replication works only with |InnoDB| storage engine. Any writes to tables of other types, including system (mysql.*) tables, are not replicated. However, DDL statements are replicated in statement level, and changes to mysql.* tables will get replicated that way. So, you can safely issue: CREATE USER..., but issuing: INSERT INTO mysql.user..., will not be replicated.
126+ - Currently replication works only with the |InnoDB| storage engine. Any writes to tables of other types, including system (mysql.*) tables, are not replicated. However, ``DDL`` statements are replicated at the statement level, and changes to mysql.* tables will get replicated that way. So, you can safely issue: ``CREATE USER...``, but issuing: ``INSERT INTO mysql.user...``, will not be replicated. You can enable experimental |MyISAM| replication support with :variable:`wsrep_replicate_myisam`.
127
128 - Unsupported queries:
129
130
131=== modified file 'doc-pxc/source/manual/xtrabackup_sst.rst'
132--- doc-pxc/source/manual/xtrabackup_sst.rst 2014-02-20 09:44:18 +0000
133+++ doc-pxc/source/manual/xtrabackup_sst.rst 2014-03-20 09:44:42 +0000
134@@ -166,6 +166,13 @@
135
136 This option introduces stream-based compression/decompression. When these options are set, compression/decompression are done on stream, in contrast to earlier PXB-based one where decompression was done after streaming to disk, involving additional I/O; hence I/O is saved here (almost halved on joiner). You can use any compression utility which works on stream - gzip, pigz (which is multi-threaded and hence, recommended) etc. Also, note that, compressor has to be set on donor and decompressor on joiner (though you can have decompressor set on donor and vice-versa for config homogeneity, it won't affect that particular SST). To use Xtrabackup-based compression as before use ``compress`` under ``[xtrabackup]`` as before, also having both enabled won't cause any failure (though you will be wasting CPU cycles with this).
137
138+.. option:: sst-initial-timeout
139+
140+ :Values: 0 (Disabled)
141+ :Default: 100
142+
143+This option is used to configure the initial timeout to receive the first packet via SST. This has been implemented so that if the donor dies somewhere in between, the joiner doesn't hang.
144+
145 .. _tar_ag_xbstream:
146
147 Tar against xbstream
148
149=== added file 'doc-pxc/source/release-notes/Percona-XtraDB-Cluster-5.6.15-25.5.rst'
150--- doc-pxc/source/release-notes/Percona-XtraDB-Cluster-5.6.15-25.5.rst 1970-01-01 00:00:00 +0000
151+++ doc-pxc/source/release-notes/Percona-XtraDB-Cluster-5.6.15-25.5.rst 2014-03-20 09:44:42 +0000
152@@ -0,0 +1,54 @@
153+.. rn:: 5.6.15-25.5
154+
155+======================================
156+ |Percona XtraDB Cluster| 5.6.15-25.5
157+======================================
158+
159+Percona is glad to announce the release of |Percona XtraDB Cluster| 5.6 on March 20th, 2014. Binaries are available from the `downloads area <http://www.percona.com/downloads/Percona-XtraDB-Cluster-56/release-5.6.15-25.5/>`_ or from our :doc:`software repositories </installation>`.
160+
161+This release is based on `Percona Server 5.6.15-63.0 <http://www.percona.com/doc/percona-server/5.6/release-notes/Percona-Server-5.6.15-63.0.html>`_ including all the bug fixes in it, on `Galera Replicator 3.4 <https://launchpad.net/galera/+milestone/25.3.4>`_, and on `Codership wsrep API 25.5 <https://launchpad.net/codership-mysql/+milestone/5.6.16-25.5>`_, and is the current **General Availability** release. All of |Percona|'s software is open-source and free; all the details of the release can be found in the `5.6.15-25.5 milestone <https://launchpad.net/percona-xtradb-cluster/+milestone/5.6.15-25.5>`_ at Launchpad.
162+
163+
164+Bugs fixed
165+==========
166+
167+ Replication of partitioned tables without binlogging enabled failed, and partition truncation didn't work because of the lack of TO isolation there. Bug fixed :bug:`1219605`.
168+
169+ The wsrep patch did not allow the server to start with the query cache enabled. This restriction and check have now been removed, and the query cache can be fully enabled from the config file. Bug fixed :bug:`1279220`.
170+
171+ New SST options have been implemented: :option:`inno-backup-opts`, :option:`inno-apply-opts`, and :option:`inno-move-opts`, which pass options to the backup, apply, and move stages of innobackupex. Bug fixed :bug:`1281682`.
172+
173+ Using ``LOAD DATA INFILE`` with :variable:`autocommit` set to ``0`` and :variable:`wsrep_load_data_splitting` set to ``ON`` could lead to incomplete loading of records while chunking. Bug fixed :bug:`1281810`.
174+
175+ ``Garbd`` could crash on *CentOS* if the :variable:`gmcast.listen_addr` variable wasn't set. Bug fixed :bug:`1283100`.
176+
177+ The node couldn't be started with the :variable:`wsrep_provider_options` option :variable:`debug` set to ``1``. Bug fixed :bug:`1285208`.
178+
179+ The joiner would wait and not fall back to choosing other potential donor nodes (not listed in :variable:`wsrep_sst_donor`) by their state. This happened even when a comma was added at the end. This has been fixed for that particular case. Bug fixed :bug:`1285380`.
180+
181+ Bootstrapping a node in a ``NON-PRIMARY`` state would lead to a crash. Bug fixed :bug:`1286450`.
182+
183+ New versions of the xtrabackup SST scripts were ignoring the ``--socket`` parameter passed by mysqld. Bug fixed :bug:`1289483`.
184+
185+ A regression in Galera required explicitly setting :variable:`socket.ssl` to ``Yes`` even if the :variable:`socket.ssl_key` and :variable:`socket.ssl_cert` variables were set up. Bug fixed :bug:`1290006`.
186+
187+ Fixed the ``clang`` build issues that were happening during the Galera build. Bug fixed :bug:`1290462`.
188+
189+ An initial configurable timeout, of 100 seconds by default, to receive the first packet via SST has been implemented, so that if the donor dies somewhere in between, the joiner doesn't hang. The timeout can be configured with the :option:`sst-initial-timeout` option. Bug fixed :bug:`1292991`.
190+
191+ A better diagnostic error message has been implemented for when the :variable:`wsrep_max_ws_size` limit has been exceeded. Bug fixed :bug:`1280557`.
192+
193+ Fixed incorrect warnings and implemented better handling of repeated usage with the same value for :variable:`wsrep_desync`. Bug fixed :bug:`1281696`.
194+
195+ Fixed the issue with :variable:`wsrep_slave_threads` wherein if the number of slave threads was changed before closing threads from an earlier change, it could increase the total number of threads beyond the value specified in :variable:`wsrep_slave_threads`.
196+
197+ A regression in mutex handling caused dynamic update of :variable:`wsrep_log_conflicts` to hang the server. Bug fixed :bug:`1293624`.
198+
199+ The presence of the :file:`/tmp/test` directory and an empty ``test`` database caused |Percona XtraBackup| to fail, causing SST to fail. This is a |Percona XtraBackup| issue, but it has been fixed in PXC's xtrabackup SST separately by using unique temporary directories with |Percona XtraBackup|. Bug fixed :bug:`1294760`.
200+
201+ After installing the ``auth_socket`` plugin, any local user could get root access to the server. If you're using this plugin, an upgrade is advised. This is a regression introduced in |Percona Server| :rn:`5.6.11-60.3`. Bug fixed :bug:`1289599`.
202+
203+Other bug fixes: :bug:`1289776`, :bug:`1279343`, :bug:`1259649`, :bug:`1292533`, :bug:`1272982`, :bug:`1284670`
204+
205+We did our best to eliminate bugs and problems during testing of this release, but this is software, so bugs are expected. If you encounter them, please report them to our `bug tracking system <https://bugs.launchpad.net/percona-xtradb-cluster/+filebug>`_.
206+
207
208=== modified file 'doc-pxc/source/release-notes/release-notes_index.rst'
209--- doc-pxc/source/release-notes/release-notes_index.rst 2014-02-20 10:50:23 +0000
210+++ doc-pxc/source/release-notes/release-notes_index.rst 2014-03-20 09:44:42 +0000
211@@ -6,6 +6,7 @@
212 :maxdepth: 1
213 :glob:
214
215+ Percona-XtraDB-Cluster-5.6.15-25.5
216 Percona-XtraDB-Cluster-5.6.15-25.4
217 Percona-XtraDB-Cluster-5.6.15-25.3
218 Percona-XtraDB-Cluster-5.6.15-25.2
219
220=== added file 'doc-pxc/source/wsrep-provider-index.rst'
221--- doc-pxc/source/wsrep-provider-index.rst 1970-01-01 00:00:00 +0000
222+++ doc-pxc/source/wsrep-provider-index.rst 2014-03-20 09:44:42 +0000
223@@ -0,0 +1,638 @@
224+.. _wsrep_provider_index:
225+
226+============================================
227+ Index of :variable:`wsrep_provider` options
228+============================================
229+
230+The following variables can be set and checked in the :variable:`wsrep_provider_options` variable. The value of a variable can be changed in the |MySQL| configuration file, :file:`my.cnf`, or by setting the variable value in the |MySQL| client.
231+
232+To change the value in :file:`my.cnf`, the following syntax should be used: ::
233+
234+ wsrep_provider_options="variable1=value1;[variable2=value2]"
235+
236+For example, to increase the size of the Galera buffer storage from the default value to 512MB, the :file:`my.cnf` option should look like: ::
237+
238+ wsrep_provider_options="gcache.size=512M"
239+
240+Dynamic variables can be changed from the |MySQL| client by using the ``SET GLOBAL`` syntax. To change the value of :variable:`pc.ignore_sb`, the following command should be used::
241+
242+ mysql> SET GLOBAL wsrep_provider_options="pc.ignore_sb=true";
243+
244+
245+Index
246+=====
247+
248+.. variable:: base_host
249+
250+ :cli: Yes
251+ :conf: Yes
252+ :scope: Global
253+ :dyn: No
254+ :default: value of the :variable:`wsrep_node_address`
255+
256+This variable sets the value of the node's base IP. This is an IP address on which Galera listens for connections from other nodes. Setting this value incorrectly would stop the node from communicating with other nodes.
257+
258+.. variable:: base_port
259+
260+ :cli: Yes
261+ :conf: Yes
262+ :scope: Global
263+ :dyn: No
264+ :default: 4567
265+
266+This variable sets the port on which Galera listens for connections from other nodes. Setting this value incorrectly would stop the node from communicating with other nodes.
267+
268+.. variable:: cert.log_conflicts
269+
270+ :cli: Yes
271+ :conf: Yes
272+ :scope: Global
273+ :dyn: No
274+ :default: no
275+
276+.. variable:: evs.causal_keepalive_period
277+
278+ :cli: Yes
279+ :conf: Yes
280+ :scope: Global
281+ :dyn: No
282+ :default: value of :variable:`evs.keepalive_period`
283+
284+This variable is used for development purposes and shouldn't be used by regular users.
285+
286+.. variable:: evs.debug_log_mask
287+
288+ :cli: Yes
289+ :conf: Yes
290+ :scope: Global
291+ :dyn: Yes
292+ :default: 0x1
293+
294+This variable is used for EVS (Extended Virtual Synchrony) debugging. It can be used only when :variable:`wsrep_debug` is set to ``ON``.
295+
296+.. variable:: evs.inactive_check_period
297+
298+ :cli: Yes
299+ :conf: Yes
300+ :scope: Global
301+ :dyn: No
302+ :default: PT0.5S
303+
304+This variable defines how often to check for peer inactivity.
305+
306+.. variable:: evs.inactive_timeout
307+
308+ :cli: Yes
309+ :conf: Yes
310+ :scope: Global
311+ :dyn: No
312+ :default: PT15S
313+
314+This variable defines the inactivity limit; once this limit is reached, the node will be pronounced dead.
315+
316+.. variable:: evs.info_log_mask
317+
318+ :cli: No
319+ :conf: Yes
320+ :scope: Global
321+ :dyn: No
322+ :default: 0
323+
324+This variable is used for controlling the extra EVS info logging.
325+
326+.. variable:: evs.install_timeout
327+
328+ :cli: Yes
329+ :conf: Yes
330+ :scope: Global
331+ :dyn: Yes
332+ :default: PT15S
333+
334+This variable defines the timeout on waiting for install message acknowledgments.
335+
336+.. variable:: evs.join_retrans_period
337+
338+ :cli: Yes
339+ :conf: Yes
340+ :scope: Global
341+ :dyn: No
342+ :default: PT1S
343+
344+This variable defines how often to retransmit EVS join messages when forming cluster membership.
345+
346+.. variable:: evs.keepalive_period
347+
348+ :cli: Yes
349+ :conf: Yes
350+ :scope: Global
351+ :dyn: No
352+ :default: PT1S
353+
354+This variable defines how often keepalive beacons will be emitted (in the absence of any other traffic).
355+
356+.. variable:: evs.max_install_timeouts
357+
358+ :cli: Yes
359+ :conf: Yes
360+ :scope: Global
361+ :dyn: No
362+ :default: 1
363+
364+This variable defines how many membership install rounds to try before giving up (total rounds will be :variable:`evs.max_install_timeouts` + 2).
365+
366+.. variable:: evs.send_window
367+
368+ :cli: Yes
369+ :conf: Yes
370+ :scope: Global
371+ :dyn: No
372+ :default: 4
373+
374+This variable defines the maximum number of data packets in replication at a time. For WAN setups it may be set considerably higher, e.g. 512. This variable must be no less than :variable:`evs.user_send_window`.
375+
376+.. variable:: evs.stats_report_period
377+
378+ :cli: Yes
379+ :conf: Yes
380+ :scope: Global
381+ :dyn: No
382+ :default: PT1M
383+
384+This variable defines the control period of EVS statistics reporting.
385+
386+.. variable:: evs.suspect_timeout
387+
388+ :cli: Yes
389+ :conf: Yes
390+ :scope: Global
391+ :dyn: No
392+ :default: PT5S
393+
394+This variable defines the inactivity period after which the node is “suspected” to be dead. If all remaining nodes agree on that, the node will be dropped out of cluster even before :variable:`evs.inactive_timeout` is reached.
395+
396+.. variable:: evs.use_aggregate
397+
398+ :cli: Yes
399+ :conf: Yes
400+ :scope: Global
401+ :dyn: No
402+ :default: true
403+
404+When this variable is enabled, smaller packets will be aggregated into one.
405+
406+.. variable:: evs.user_send_window
407+
408+ :cli: Yes
409+ :conf: Yes
410+ :scope: Global
411+ :dyn: Yes
412+ :default: 2
413+
414+This variable defines the maximum number of data packets in replication at a time. For WAN setups it may be set considerably higher, e.g. 512.
415+
416+.. variable:: evs.version
417+
418+ :cli: Yes
419+ :conf: Yes
420+ :scope: Global
421+ :dyn: No
422+ :default: 0
423+
424+.. variable:: evs.view_forget_timeout
425+
426+ :cli: Yes
427+ :conf: Yes
428+ :scope: Global
429+ :dyn: No
430+ :default: P1D
431+
432+This variable defines the timeout after which past views will be dropped from history.
433+
434+.. variable:: gcache.dir
435+
436+ :cli: Yes
437+ :conf: Yes
438+ :scope: Global
439+ :dyn: No
440+ :default: :term:`datadir`
441+
442+This variable can be used to define the location of the :file:`galera.cache` file.
443+
444+.. variable:: gcache.keep_pages_size
445+
446+ :cli: Yes
447+ :conf: Yes
448+ :scope: Local, Global
449+ :dyn: No
450+ :default: 0
451+
452+This variable is used to specify the total size of the page storage pages to keep for caching purposes. If only page storage is enabled, one page is always present.
453+
454+.. variable:: gcache.mem_size
455+
456+ :cli: Yes
457+ :conf: Yes
458+ :scope: Global
459+ :dyn: No
460+ :default: 0
461+
462+
463+.. variable:: gcache.name
464+
465+ :cli: Yes
466+ :conf: Yes
467+ :scope: Global
468+ :dyn: No
469+ :default: /var/lib/mysql/galera.cache
470+
471+This variable can be used to specify the name of the Galera cache file.
472+
473+.. variable:: gcache.page_size
474+
475+ :cli: No
476+ :conf: Yes
477+ :scope: Global
478+ :dyn: No
479+ :default: 128M
480+
481+This variable can be used to specify the size of the page files in the page storage.
482+
483+.. variable:: gcache.size
484+
485+ :cli: Yes
486+ :conf: Yes
487+ :scope: Global
488+ :dyn: No
489+ :default: 128M
490+
491+Size of the transaction cache for Galera replication. This defines the size of the :file:`galera.cache` file which is used as a source for |IST|. If this value is bigger, there are better chances that the re-joining node will get |IST| instead of |SST|.
492+
493+.. variable:: gcs.fc_debug
494+
495+ :cli: Yes
496+ :conf: Yes
497+ :scope: Global
498+ :dyn: No
499+ :default: 0
500+
501+This variable specifies after how many writesets the debug statistics about replication flow control will be posted.
502+
503+.. variable:: gcs.fc_factor
504+
505+ :cli: Yes
506+ :conf: Yes
507+ :scope: Global
508+ :dyn: No
509+ :default: 1
510+
511+This variable is used for replication flow control. Replication will be paused until the slave queue size goes below :variable:`gcs.fc_factor` * :variable:`gcs.fc_limit`.
512+
513+.. variable:: gcs.fc_limit
514+
515+ :cli: Yes
516+ :conf: Yes
517+ :scope: Global
518+ :dyn: No
519+ :default: 16
520+
521+This variable is used for replication flow control. When the slave queue exceeds this limit, replication will be paused.
522+
523+.. variable:: gcs.fc_master_slave
524+
525+ :cli: Yes
526+ :conf: Yes
527+ :scope: Global
528+ :dyn: No
529+ :default: NO
530+
531+This variable is used to specify if there is only one master node in the cluster.
532+
533+.. variable:: gcs.max_packet_size
534+
535+ :cli: Yes
536+ :conf: Yes
537+ :scope: Global
538+ :dyn: No
539+ :default: 64500
540+
541+This variable is used to specify the maximum packet size; writesets exceeding this size will be fragmented.
542+
543+.. variable:: gcs.max_throttle
544+
545+ :cli: Yes
546+ :conf: Yes
547+ :scope: Global
548+ :dyn: No
549+ :default: 0.25
550+
551+This variable specifies how much the replication can be throttled during the state transfer in order to avoid running out of memory. The value can be set to ``0.0`` if stopping replication is acceptable in order to finish the state transfer.
552+
553+.. variable:: gcs.recv_q_hard_limit
554+
555+ :cli: Yes
556+ :conf: Yes
557+ :scope: Global
558+ :dyn: No
559+ :default: 9223372036854775807
560+
561+This variable specifies the maximum allowed size of the receive queue. This should normally be half of (RAM + swap). If this limit is exceeded, Galera will abort the server.
562+
563+.. variable:: gcs.recv_q_soft_limit
564+
565+ :cli: Yes
566+ :conf: Yes
567+ :scope: Global
568+ :dyn: No
569+ :default: 0.25
570+
571+This variable specifies the fraction of the :variable:`gcs.recv_q_hard_limit` after which replication rate will be throttled.
572+
573+.. variable:: gcs.sync_donor
574+
575+ :cli: Yes
576+ :conf: Yes
577+ :scope: Global
578+ :dyn: No
579+ :default: NO
580+
581+This variable controls whether the rest of the cluster should be kept in sync with the donor node. When this variable is set to ``Yes``, the whole cluster will be blocked if the donor node is blocked by SST.
582+
583+.. variable:: gmcast.listen_addr
584+
585+ :cli: Yes
586+ :conf: Yes
587+ :scope: Global
588+ :dyn: No
589+ :default: tcp://0.0.0.0:4567
590+
591+This variable defines the address on which the node listens for connections from other nodes in the cluster.
592+
593+.. variable:: gmcast.mcast_addr
594+
595+ :cli: Yes
596+ :conf: Yes
597+ :scope: Global
598+ :dyn: No
599+ :default: None
600+
601+This variable should be set if UDP multicast is to be used for replication.
602+
603+.. variable:: gmcast.mcast_ttl
604+
605+ :cli: Yes
606+ :conf: Yes
607+ :scope: Global
608+ :dyn: No
609+ :default: 1
610+
611+This variable can be used to define TTL for multicast packets.
612+
613+.. variable:: gmcast.peer_timeout
614+
615+ :cli: Yes
616+ :conf: Yes
617+ :scope: Global
618+ :dyn: No
619+ :default: PT3S
620+
621+This variable specifies the connection timeout to initiate message relaying.
622+
623+.. variable:: gmcast.segment
624+
625+ :cli: Yes
626+ :conf: Yes
627+ :scope: Global
628+ :dyn: No
629+ :default: 0
630+
631+This variable specifies the group segment this member should be a part of. Members of the same segment are treated as equally physically close.
632+
633+.. variable:: gmcast.time_wait
634+
635+ :cli: Yes
636+ :conf: Yes
637+ :scope: Global
638+ :dyn: No
639+ :default: PT5S
640+
641+This variable specifies the time to wait before allowing a peer declared outside of the stable view to reconnect.
642+
643+.. variable:: gmcast.version
644+
645+ :cli: Yes
646+ :conf: Yes
647+ :scope: Global
648+ :dyn: No
649+ :default: 0
650+
651+.. variable:: ist.recv_addr
652+
653+ :cli: Yes
654+ :conf: Yes
655+ :scope: Global
656+ :dyn: No
657+ :default: value of :variable:`wsrep_node_address`
658+
659+This variable specifies the address on which the node listens for Incremental State Transfer (|IST|).
660+
661+.. variable:: pc.checksum
662+
663+ :cli: Yes
664+ :conf: Yes
665+ :scope: Global
666+ :dyn: No
667+ :default: true
668+
669+This variable controls whether the replicated messages will be checksummed or not.
670+
671+.. variable:: pc.ignore_quorum
672+
673+ :cli: Yes
674+ :conf: Yes
675+ :scope: Global
676+ :dyn: Yes
677+ :default: false
678+
679+When this variable is set to ``TRUE``, the node will completely ignore quorum calculations. This should be used with extreme caution even in master-slave setups, because slaves won't automatically reconnect to the master in this case.
680+
681+.. variable:: pc.ignore_sb
682+
683+ :cli: Yes
684+ :conf: Yes
685+ :scope: Global
686+ :dyn: Yes
687+ :default: false
688+
689+When this variable is set to ``TRUE``, the node will process updates even in the case of split brain. This should be used with extreme caution in multi-master setups, but it should simplify things in a master-slave cluster (especially if only 2 nodes are used).
690+
691+.. variable:: pc.linger
692+
693+ :cli: Yes
694+ :conf: Yes
695+ :scope: Global
696+ :dyn: No
697+ :default: PT20S
698+
699+This variable specifies the period for which the PC protocol waits for EVS termination.
700+
701+.. variable:: pc.npvo
702+
703+ :cli: Yes
704+ :conf: Yes
705+ :scope: Global
706+ :dyn: No
707+ :default: false
708+
709+When this variable is set to ``TRUE``, a more recent primary component overrides older ones in case of conflicting primary components.
710+
711+.. variable:: pc.version
712+
713+ :cli: Yes
714+ :conf: Yes
715+ :scope: Global
716+ :dyn: No
717+ :default: 0
718+
719+.. variable:: pc.weight
720+
721+ :cli: Yes
722+ :conf: Yes
723+ :scope: Global
724+ :dyn: Yes
725+ :default: 1
726+
727+This variable specifies the node weight that's going to be used for Weighted Quorum calculations.
728+
729+.. variable:: protonet.backend
730+
731+ :cli: Yes
732+ :conf: Yes
733+ :scope: Global
734+ :dyn: No
735+ :default: asio
736+
737+This variable is used to define which transport backend should be used. Currently only ``ASIO`` is supported.
738+
739+.. variable:: protonet.version
740+
741+ :cli: Yes
742+ :conf: Yes
743+ :scope: Global
744+ :dyn: No
745+ :default: 0
746+
747+.. variable:: repl.causal_read_timeout
748+
749+ :cli: Yes
750+ :conf: Yes
751+ :scope: Global
752+ :dyn: Yes
753+ :default: PT30S
754+
755+This variable specifies the causal read timeout.
756+
757+.. variable:: repl.commit_order
758+
759+ :cli: Yes
760+ :conf: Yes
761+ :scope: Global
762+ :dyn: No
763+ :default: 3
764+
765+This variable is used to specify Out-Of-Order committing (which is used to improve parallel applying performance). Allowed values are:
766+
767+ * ``0`` – BYPASS: all commit order monitoring is turned off (useful for measuring performance penalty)
768+ * ``1`` – OOOC: allow out of order committing for all transactions
769+ * ``2`` – LOCAL_OOOC: allow out of order committing only for local transactions
770+ * ``3`` – NO_OOOC: no out of order committing is allowed (strict total order committing)
771+
772+.. variable:: repl.key_format
773+
774+ :cli: Yes
775+ :conf: Yes
776+ :scope: Global
777+ :dyn: Yes
778+ :default: FLAT8
779+
780+This variable is used to specify the replication key format. Allowed values are:
781+
782+ * ``FLAT8`` - shorter key; higher probability of key match false positives.
783+ * ``FLAT16`` - longer key; lower probability of false positives.
784+ * ``FLAT8A`` - same as ``FLAT8`` but with annotations for debug purposes.
785+ * ``FLAT16A`` - same as ``FLAT16`` but with annotations for debug purposes.
786+
787+.. variable:: repl.proto_max
788+
789+ :cli: Yes
790+ :conf: Yes
791+ :scope: Global
792+ :dyn: No
793+ :default: 5
794+
795+This variable is used to specify the highest communication protocol version to accept in the cluster. This variable is used only for debugging.
796+
797+.. variable:: socket.checksum
798+
799+ :cli: Yes
800+ :conf: Yes
801+ :scope: Global
802+ :dyn: No
803+ :default: 2
804+
805+This variable is used to choose the checksum algorithm for network packets. Available options are:
806+
807+ * ``0`` - disable checksum
808+ * ``1`` - plain ``CRC32`` (used in Galera 2.x)
809+ * ``2`` - hardware accelerated ``CRC32-C``
810+
811+.. variable:: socket.ssl
812+
813+ :cli: Yes
814+ :conf: Yes
815+ :scope: Global
816+ :dyn: No
817+ :default: No
818+
819+This variable is used to specify whether SSL encryption should be used.
820+
821+
822+.. variable:: socket.ssl_cert
823+
824+ :cli: Yes
825+ :conf: Yes
826+ :scope: Global
827+ :dyn: No
828+
829+This variable is used to specify the path (absolute or relative to working directory) to an SSL certificate (in PEM format).
830+
831+.. variable:: socket.ssl_key
832+
833+ :cli: Yes
834+ :conf: Yes
835+ :scope: Global
836+ :dyn: No
837+
838+
839+This variable is used to specify the path (absolute or relative to working directory) to an SSL private key for the certificate (in PEM format).
840+
841+.. variable:: socket.ssl_compression
842+
843+ :cli: Yes
844+ :conf: Yes
845+ :scope: Global
846+ :dyn: No
847+ :default: Yes
848+
849+This variable is used to specify whether SSL compression is to be used.
850+
851+.. variable:: socket.ssl_cipher
852+
853+ :cli: Yes
854+ :conf: Yes
855+ :scope: Global
856+ :dyn: No
857+ :default: AES128-SHA
858+
859+This variable is used to specify which cipher will be used for encryption.
860+
861+
862
863=== modified file 'doc-pxc/source/wsrep-system-index.rst'
864--- doc-pxc/source/wsrep-system-index.rst 2014-03-09 13:26:24 +0000
865+++ doc-pxc/source/wsrep-system-index.rst 2014-03-20 09:44:42 +0000
866@@ -327,7 +327,7 @@
867 :dyn: Yes
868 :default: 1
869
870-This variable controls the number of threads that can apply replication transactions in parallel. Galera supports true parallel replication, replication that applies transactions in parallel only when it is safe to do so. The variable is dynamic, you can increase/decrease it anytime, note that, when you decrease it, it won't kill the threads immediately but stop them after they are done applying current transaction (the effect with increase is immediate though). If any replication consistency problems are encountered, it's recommended to set this back to ``1`` to see if that resolves the issue. The default value can be increased for better throughput. You may want to increase it many a time as suggested `here <http://www.codership.com/wiki/doku.php?id=flow_control>`_, in JOINED state for instance to speed up the catchup process to SYNCED. You can also estimate the optimal value for this from :variable:`wsrep_cert_deps_distance` as suggested `here <http://www.codership.com/wiki/doku.php?id=monitoring#checking_replication_health>`_. You can also refer to `this <http://www.codership.com/wiki/doku.php?id=configuration_tips#parallel_applying_wsrep_slave_threads>`_ for more configuration tips.
871+This variable controls the number of threads that can apply replication transactions in parallel. Galera supports true parallel replication, that is, replication that applies transactions in parallel only when it is safe to do so. The variable is dynamic and you can increase/decrease it anytime; note that when you decrease it, it won't kill the threads immediately but will stop them after they are done applying the current transaction (the effect of an increase is immediate though). If any replication consistency problems are encountered, it's recommended to set this back to ``1`` to see if that resolves the issue. The default value can be increased for better throughput. You may want to increase it at times, as suggested `in Codership documentation <http://www.codership.com/wiki/doku.php?id=flow_control>`_, for instance in JOINED state to speed up the catchup process to SYNCED. You can also estimate the optimal value for this from :variable:`wsrep_cert_deps_distance` as suggested `on this page <http://www.codership.com/wiki/doku.php?id=monitoring#checking_replication_health>`_. You can also refer to `this <http://www.codership.com/wiki/doku.php?id=configuration_tips#parallel_applying_wsrep_slave_threads>`_ for more configuration tips.
872
873 .. variable:: wsrep_sst_auth
874
