Merge lp:~hrvojem/percona-xtradb-cluster/bug1133638 into lp:percona-xtradb-cluster/percona-xtradb-cluster-5.5

Proposed by Hrvoje Matijakovic
Status: Merged
Approved by: Raghavendra D Prabhu
Approved revision: no longer in the source branch.
Merged at revision: 388
Proposed branch: lp:~hrvojem/percona-xtradb-cluster/bug1133638
Merge into: lp:percona-xtradb-cluster/percona-xtradb-cluster-5.5
Diff against target: 446 lines (+321/-13)
7 files modified
doc-pxc/source/faq.rst (+3/-2)
doc-pxc/source/howtos/cenots_howto.rst (+6/-0)
doc-pxc/source/howtos/ubuntu_howto.rst (+302/-0)
doc-pxc/source/index.rst (+1/-0)
doc-pxc/source/installation.rst (+2/-0)
doc-pxc/source/limitation.rst (+3/-6)
doc-pxc/source/wsrep-system-index.rst (+4/-5)
To merge this branch: bzr merge lp:~hrvojem/percona-xtradb-cluster/bug1133638
Reviewer: Raghavendra D Prabhu (community) - Approve
Review via email: mp+160283@code.launchpad.net
Revision history for this message
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

Regarding wsrep_retry_autocommit, can you add this part

"if it is 0 it won't be retried and if it is 1 it will be retried once." to disambiguate?

Revision history for this message
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

Bug #1155897: Cluster does not follow MDL semantics -- this one is good.

Revision history for this message
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

> Bug #1155897: Cluster does not follow MDL semantics -- this one is good.

I meant "Limitations page in the doc is out of date" here.

Revision history for this message
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

Regarding

"Bug #1155897: Cluster does not follow MDL semantics"

has
"With regards to DDL under RSU method, is MDL honoured or is it not?"

been tested?

Revision history for this message
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

Regarding

"+This variable controls how many replication events will be grouped together. Replication events are grouped in SQL slave thread by skipping events which may cause commit. This way the wsrep node acting in |MySQL| slave role and all other wsrep nodes in provider replication group, will see same (huge) transactions. This implementation is still experimental. This may help with the bottleneck of having only one |MySQL| slave facing commit time delay of synchronous provider."

for Bug #1170066

Has this been sourced from any codership docs?
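
To illustrate what the grouping described in that paragraph would mean in practice, a rough sketch (the value 50 is arbitrary, picked only for this example): with the setting below, the node acting as an asynchronous MySQL slave skips intermediate commits and applies a batch of replication events as one larger wsrep transaction.

    [mysqld]
    # Group replication events in the SQL slave thread before committing
    # through the synchronous provider; 0 (the default) means no grouping,
    # documented range is 0-1000.
    wsrep_mysql_replication_bundle=50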

Revision history for this message
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

Everything looks good here except Bug #1155897.

Revision history for this message
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

Approved.

review: Approve

Preview Diff

=== modified file 'doc-pxc/source/faq.rst'
--- doc-pxc/source/faq.rst 2013-01-25 07:37:41 +0000
+++ doc-pxc/source/faq.rst 2013-06-13 13:58:35 +0000
@@ -47,7 +47,7 @@
 
 Q: How would it handle split brain?
 ====================================
-A: It would not handle it. The |split brain| is hard stop, |XtraDB Cluster| can't resolve it.
+A: It would not handle it. The |split brain| is hard stop, XtraDB Cluster can't resolve it.
 That's why the minimal recommendation is to have 3 nodes.
 However there is possibility to allow a node to handle the traffic, option is: ::
 
@@ -58,11 +58,12 @@
 A: It is possible in two ways:
 
 1. By default Galera reads starting position from a text file <datadir>/grastate.dat. Just make this file identical on all nodes, and there will be no state transfer upon start.
+
 2. With :variable:`wsrep_start_position` variable - start the nodes with the same *UUID:seqno* value and there you are.
 
 Q: I have a two nodes setup. When node1 fails, node2 does not accept commands, why?
 ====================================================================================
-A: This is expected behaviour, to prevent |split brain|. See previous question.
+A: This is expected behavior, to prevent |split brain|. See previous question.
 
 Q: What tcp ports are used by Percona XtraDB Cluster?
 ======================================================
 
=== modified file 'doc-pxc/source/howtos/cenots_howto.rst'
--- doc-pxc/source/howtos/cenots_howto.rst 2013-03-12 09:32:01 +0000
+++ doc-pxc/source/howtos/cenots_howto.rst 2013-06-13 13:58:35 +0000
@@ -161,6 +161,9 @@
  # Node #2 address
  wsrep_node_address=192.168.70.72
 
+ # Cluster name
+ wsrep_cluster_name=my_centos_cluster
+
  # SST method
  wsrep_sst_method=xtrabackup
 
@@ -222,6 +225,9 @@
  # Node #3 address
  wsrep_node_address=192.168.70.73
 
+ # Cluster name
+ wsrep_cluster_name=my_centos_cluster
+
  # SST method
  wsrep_sst_method=xtrabackup
 
 
=== added file 'doc-pxc/source/howtos/ubuntu_howto.rst'
--- doc-pxc/source/howtos/ubuntu_howto.rst 1970-01-01 00:00:00 +0000
+++ doc-pxc/source/howtos/ubuntu_howto.rst 2013-06-13 13:58:35 +0000
@@ -0,0 +1,302 @@
+.. _ubuntu_howto:
+
+Installing Percona XtraDB Cluster on *Ubuntu*
+=============================================
+
+This tutorial will show how to install the |Percona XtraDB Cluster| on three *Ubuntu* 12.04.2 LTS servers, using the packages from Percona repositories.
+
+This cluster will be assembled of three servers/nodes: ::
+
+ node #1
+ hostname: pxc1
+ IP: 192.168.70.61
+
+ node #2
+ hostname: pxc2
+ IP: 192.168.70.62
+
+ node #3
+ hostname: pxc3
+ IP: 192.168.70.63
+
+Prerequisites
+-------------
+
+ * All three nodes have a *Ubuntu* 12.04.2 LTS installation.
+
+ * Firewall has been set up to allow connecting to ports 3306, 4444, 4567 and 4568
+
+ * AppArmor profile for |MySQL| is `disabled <http://www.mysqlperformanceblog.com/2012/12/20/percona-xtradb-cluster-selinux-is-not-always-the-culprit/>`_
+
+Installation
+------------
+
+Percona repository should be set up as described in the :ref:`apt-repo` guide. Following command will install |Percona XtraDB Cluster| packages: ::
+
+ $ apt-get install percona-xtradb-cluster-server-5.5 percona-xtradb-cluster-client-5.5
+
+When these two commands have been executed successfully on all three nodes |Percona XtraDB Cluster| is installed.
+
+.. note::
+
+ Debian/Ubuntu installation prompts for root password, this was set to: ``Passw0rd``. After the packages have been installed, ``mysqld`` will be started automatically. In this example mysqld is stopped on all three nodes after successful installation with: ``/etc/init.d/mysql stop``.
+
+Configuring the nodes
+---------------------
+
+Individual nodes should be configured to be able to bootstrap the cluster. More details about bootstrapping the cluster can be found in the :ref:`bootstrap` guide.
+
+Configuration file :file:`/etc/mysql/my.cnf` for the first node should look like: ::
+
+ [mysqld]
+
+ datadir=/var/lib/mysql
+ user=mysql
+
+ # Path to Galera library
+ wsrep_provider=/usr/lib/libgalera_smm.so
+
+ # Empty gcomm address is being used when cluster is getting bootstrapped
+ wsrep_cluster_address=gcomm://
+
+ # Cluster connection URL contains the IPs of node#1, node#2 and node#3
+ #wsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63
+
+ # In order for Galera to work correctly binlog format should be ROW
+ binlog_format=ROW
+
+ # MyISAM storage engine has only experimental support
+ default_storage_engine=InnoDB
+
+ # This is a recommended tuning variable for performance
+ innodb_locks_unsafe_for_binlog=1
+
+ # This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
+ innodb_autoinc_lock_mode=2
+
+ # Node #1 address
+ wsrep_node_address=192.168.70.61
+
+ # SST method
+ wsrep_sst_method=xtrabackup
+
+ # Cluster name
+ wsrep_cluster_name=my_ubuntu_cluster
+
+ # Authentication for SST method
+ wsrep_sst_auth="sstuser:s3cretPass"
+
+.. note:: For the first member of the cluster variable :variable:`wsrep_cluster_address` should contain empty ``gcomm://`` when the cluster is being bootstrapped. But as soon as we have bootstrapped the cluster and have at least one more node joined that line can be removed from the :file:`my.cnf` configuration file and the one where :variable:`wsrep_cluster_address` contains all three node addresses. In case the node gets restarted and without making this change it will make bootstrap new cluster instead of joining the existing one.
+
+After this, first node can be started with the following command: ::
+
+ [root@pxc1 ~]# /etc/init.d/mysql start
+
+This command will start the first node and bootstrap the cluster (more information about bootstrapping cluster can be found in :ref:`bootstrap` manual).
+
+After the first node has been started, cluster status can be checked by:
+
+.. code-block:: mysql
+
+ mysql> show status like 'wsrep%';
+ +----------------------------+--------------------------------------+
+ | Variable_name              | Value                                |
+ +----------------------------+--------------------------------------+
+ | wsrep_local_state_uuid     | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |
+ ...
+ | wsrep_local_state          | 4                                    |
+ | wsrep_local_state_comment  | Synced                               |
+ ...
+ | wsrep_cluster_size         | 1                                    |
+ | wsrep_cluster_status       | Primary                              |
+ | wsrep_connected            | ON                                   |
+ ...
+ | wsrep_ready                | ON                                   |
+ +----------------------------+--------------------------------------+
+ 40 rows in set (0.01 sec)
+
+This output shows that the cluster has been successfully bootstrapped.
+
+In order to perform successful :ref:`state_snapshot_transfer` using |XtraBackup| new user needs to be set up with proper `privileges <http://www.percona.com/doc/percona-xtrabackup/innobackupex/privileges.html#permissions-and-privileges-needed>`_:
+
+.. code-block:: mysql
+
+ mysql@pxc1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cretPass';
+ mysql@pxc1> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
+ mysql@pxc1> FLUSH PRIVILEGES;
+
+
+.. note::
+
+ MySQL root account can also be used for setting up the :ref:`state_snapshot_transfer` with Percona XtraBackup, but it's recommended to use a different (non-root) user for this.
+
+Configuration file :file:`/etc/mysql/my.cnf` on the second node (``pxc2``) should look like this: ::
+
+ [mysqld]
+
+ datadir=/var/lib/mysql
+ user=mysql
+
+ # Path to Galera library
+ wsrep_provider=/usr/lib/libgalera_smm.so
+
+ # Cluster connection URL contains IPs of node#1, node#2 and node#3
+ wsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63
+
+ # In order for Galera to work correctly binlog format should be ROW
+ binlog_format=ROW
+
+ # MyISAM storage engine has only experimental support
+ default_storage_engine=InnoDB
+
+ # This is a recommended tuning variable for performance
+ innodb_locks_unsafe_for_binlog=1
+
+ # This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
+ innodb_autoinc_lock_mode=2
+
+ # Node #2 address
+ wsrep_node_address=192.168.70.62
+
+ # Cluster name
+ wsrep_cluster_name=my_ubuntu_cluster
+
+ # SST method
+ wsrep_sst_method=xtrabackup
+
+ #Authentication for SST method
+ wsrep_sst_auth="sstuser:s3cretPass"
+
+Second node can be started with the following command: ::
+
+ [root@pxc2 ~]# /etc/init.d/mysql start
+
+After the server has been started it should receive the state snapshot transfer automatically. Cluster status can now be checked on both nodes. This is the example from the second node (``pxc2``):
+
+.. code-block:: mysql
+
+ mysql> show status like 'wsrep%';
+ +----------------------------+--------------------------------------+
+ | Variable_name              | Value                                |
+ +----------------------------+--------------------------------------+
+ | wsrep_local_state_uuid     | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |
+ ...
+ | wsrep_local_state          | 4                                    |
+ | wsrep_local_state_comment  | Synced                               |
+ ...
+ | wsrep_cluster_size         | 2                                    |
+ | wsrep_cluster_status       | Primary                              |
+ | wsrep_connected            | ON                                   |
+ ...
+ | wsrep_ready                | ON                                   |
+ +----------------------------+--------------------------------------+
+ 40 rows in set (0.01 sec)
+
+This output shows that the new node has been successfully added to the cluster.
+
+MySQL configuration file :file:`/etc/mysql/my.cnf` on the third node (``pxc3``) should look like this: ::
+
+ [mysqld]
+
+ datadir=/var/lib/mysql
+ user=mysql
+
+ # Path to Galera library
+ wsrep_provider=/usr/lib/libgalera_smm.so
+
+ # Cluster connection URL contains IPs of node#1, node#2 and node#3
+ wsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63
+
+ # In order for Galera to work correctly binlog format should be ROW
+ binlog_format=ROW
+
+ # MyISAM storage engine has only experimental support
+ default_storage_engine=InnoDB
+
+ # This is a recommended tuning variable for performance
+ innodb_locks_unsafe_for_binlog=1
+
+ # This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
+ innodb_autoinc_lock_mode=2
+
+ # Node #3 address
+ wsrep_node_address=192.168.70.63
+
+ # Cluster name
+ wsrep_cluster_name=my_ubuntu_cluster
+
+ # SST method
+ wsrep_sst_method=xtrabackup
+
+ #Authentication for SST method
+ wsrep_sst_auth="sstuser:s3cretPass"
+
+Third node can now be started with the following command: ::
+
+ [root@pxc3 ~]# /etc/init.d/mysql start
+
+After the server has been started it should receive the SST same as the second node. Cluster status can now be checked on both nodes. This is the example from the third node (``pxc3``):
+
+.. code-block:: mysql
+
+ mysql> show status like 'wsrep%';
+ +----------------------------+--------------------------------------+
+ | Variable_name              | Value                                |
+ +----------------------------+--------------------------------------+
+ | wsrep_local_state_uuid     | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |
+ ...
+ | wsrep_local_state          | 4                                    |
+ | wsrep_local_state_comment  | Synced                               |
+ ...
+ | wsrep_cluster_size         | 3                                    |
+ | wsrep_cluster_status       | Primary                              |
+ | wsrep_connected            | ON                                   |
+ ...
+ | wsrep_ready                | ON                                   |
+ +----------------------------+--------------------------------------+
+ 40 rows in set (0.01 sec)
+
+This output confirms that the third node has joined the cluster.
+
+Testing the replication
+-----------------------
+
+Although the password change from the first node has replicated successfully, this example will show that writing on any node will replicate to the whole cluster. In order to check this, new database will be created on second node and table for that database will be created on the third node.
+
+Creating the new database on the second node:
+
+.. code-block:: mysql
+
+ mysql@pxc2> CREATE DATABASE percona;
+ Query OK, 1 row affected (0.01 sec)
+
+Creating the ``example`` table on the third node:
+
+.. code-block:: mysql
+
+ mysql@pxc3> USE percona;
+ Database changed
+
+ mysql@pxc3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
+ Query OK, 0 rows affected (0.05 sec)
+
+Inserting records on the first node:
+
+.. code-block:: mysql
+
+ mysql@pxc1> INSERT INTO percona.example VALUES (1, 'percona1');
+ Query OK, 1 row affected (0.02 sec)
+
+Retrieving all the rows from that table on the second node:
+
+.. code-block:: mysql
+
+ mysql@pxc2> SELECT * FROM percona.example;
+ +---------+-----------+
+ | node_id | node_name |
+ +---------+-----------+
+ |       1 | percona1  |
+ +---------+-----------+
+ 1 row in set (0.00 sec)
+
+This small example shows that all nodes in the cluster are synchronized and working as intended.
=== modified file 'doc-pxc/source/index.rst'
--- doc-pxc/source/index.rst 2013-03-12 09:32:01 +0000
+++ doc-pxc/source/index.rst 2013-06-13 13:58:35 +0000
@@ -79,6 +79,7 @@
    :glob:
 
    howtos/cenots_howto
+   howtos/ubuntu_howto
    howtos/singlebox
    howtos/3nodesec2
    howtos/haproxy
 
=== modified file 'doc-pxc/source/installation.rst'
--- doc-pxc/source/installation.rst 2013-03-12 09:32:01 +0000
+++ doc-pxc/source/installation.rst 2013-06-13 13:58:35 +0000
@@ -44,6 +44,8 @@
 
  $ sudo apt-get install percona-xtradb-cluster-server-5.5 percona-xtradb-cluster-client-5.5
 
+More detailed example of the |Percona XtraDB Cluster| installation and configuration can be seen in :ref:`ubuntu_howto` tutorial.
+
 Prerequisites
 =============
 
 
=== modified file 'doc-pxc/source/limitation.rst'
--- doc-pxc/source/limitation.rst 2012-06-01 04:28:35 +0000
+++ doc-pxc/source/limitation.rst 2013-06-13 13:58:35 +0000
@@ -1,3 +1,5 @@
+.. _limitations:
+
 ====================================
  Percona XtraDB Cluster Limitations
 ====================================
@@ -6,8 +8,6 @@
 
  - Currently replication works only with |InnoDB| storage engine. Any writes to tables of other types, including system (mysql.*) tables, are not replicated. However, DDL statements are replicated in statement level, and changes to mysql.* tables will get replicated that way. So, you can safely issue: CREATE USER..., but issuing: INSERT INTO mysql.user..., will not be replicated.
 
- - DELETE operation is unsupported on tables without primary key. Also rows in tables without primary key may appear in different order on different nodes. As a result SELECT...LIMIT... may return slightly different sets.
-
  - Unsupported queries:
    * LOCK/UNLOCK TABLES cannot be supported in multi-master setups.
    * lock functions (GET_LOCK(), RELEASE_LOCK()... )
@@ -21,10 +21,7 @@
 
  - XA transactions can not be supported due to possible rollback on commit.
 
- - The write throughput of the whole cluster is limited by weakest node. If one node becomes slow, whole cluster is slow. If you have requirements for stable high performance, then it should be supported by corresponding hardware (10Gb network, SSD).
+ - The write throughput of the whole cluster is limited by weakest node. If one node becomes slow, whole cluster is slow. If you have requirements for stable high performance, then it should be supported by corresponding hardware.
 
  - The minimal recommended size of cluster is 3 nodes.
 
- - DDL statements are problematic and may stall cluster. Later, the support of DDL will be improved, but will always require special treatment.
-
-
 
=== modified file 'doc-pxc/source/wsrep-system-index.rst'
--- doc-pxc/source/wsrep-system-index.rst 2013-03-12 09:32:01 +0000
+++ doc-pxc/source/wsrep-system-index.rst 2013-06-13 13:58:35 +0000
@@ -13,8 +13,8 @@
  :default: TOI
 
 This variable can be used to select schema upgrade method. Available values are:
- * TOI - Total Order Isolation - When this method is selected ``DDL`` is processed in the same order with regards to other transactions in each cluster node. This guarantees data consistency. In case of ``DDL`` statements cluster will have parts of database locked and it will behave like a single server. In some cases (like big ``ALTER TABLE``) this could have impact on cluster's performance and high availability, but it could be fine for quick changes that happen almost instantly (like fast index changes).
+ * TOI - Total Order Isolation - When this method is selected ``DDL`` is processed in the same order with regards to other transactions in each cluster node. This guarantees data consistency. In case of ``DDL`` statements cluster will have parts of database locked and it will behave like a single server. In some cases (like big ``ALTER TABLE``) this could have impact on cluster's performance and high availability, but it could be fine for quick changes that happen almost instantly (like fast index changes). When ``DDL`` is processed under total order isolation (TOI) the ``DDL`` statement will be replicated up front to the cluster. i.e. cluster will assign global transaction ID for the ``DDL`` statement before the ``DDL`` processing begins. Then every node in the cluster has the responsibility to execute the ``DDL`` in the given slot in the sequence of incoming transactions, and this ``DDL`` execution has to happen with high priority.
  * RSU - Rolling Schema Upgrade - When this method is selected ``DDL`` statements won't be replicated across the cluster, instead it's up to the user to run them on each node separately. The node applying the changes will desynchronize from the cluster briefly, while normal work happens on all the other nodes. When the ``DDL`` statement is processed node will apply delayed replication events. The schema changes **must** be backwards compatible for this method to work, otherwise the node that receives the change will likely break Galera replication. If the replication breaks the SST will be triggered when the node tries to join again but the change will be undone.
 
 .. variable:: wsrep_auto_increment_control
 
@@ -172,7 +172,7 @@
  :default: 0 (no grouping)
  :range: 0-1000
 
-This variable controls how many replication events will be grouped together. This implementation is still experimental.
+This variable controls how many replication events will be grouped together. Replication events are grouped in SQL slave thread by skipping events which may cause commit. This way the wsrep node acting in |MySQL| slave role and all other wsrep nodes in provider replication group, will see same (huge) transactions. This implementation is still experimental. This may help with the bottleneck of having only one |MySQL| slave facing commit time delay of synchronous provider.
 
 .. variable:: wsrep_node_address
 
@@ -271,8 +271,7 @@
  :dyn: No
  :default: 1
 
-This variable sets the number of times autocommitted transactions will be tried in the cluster if it encounters certification errors. In case there is a conflict, it should be safe for the cluster node to simply retry the statement without the client's knowledge with the hopes that it will pass the next time. This can be useful to help an application using autocommit to avoid the deadlock errors that can be triggered by replication conflicts. Note that the default 1 is not a retry, but the first try. Retries start when this variable is set to 2 or higher.
-
+This variable sets the number of times autocommitted transactions will be tried in the cluster if it encounters certification errors. In case there is a conflict, it should be safe for the cluster node to simply retry the statement without the client's knowledge with the hopes that it will pass the next time. This can be useful to help an application using autocommit to avoid the deadlock errors that can be triggered by replication conflicts. If this variable is set to ``0`` transaction won't be retried and if it is set to ``1`` it will be retried once.
 
 .. variable:: wsrep_slave_threads
 
