Merge lp:~hrvojem/percona-xtradb-cluster/pxc-12 into lp:percona-xtradb-cluster/percona-xtradb-cluster-5.5
Status: | Merged |
---|---|
Approved by: | Hrvoje Matijakovic |
Approved revision: | no longer in the source branch. |
Merged at revision: | 384 |
Proposed branch: | lp:~hrvojem/percona-xtradb-cluster/pxc-12 |
Merge into: | lp:percona-xtradb-cluster/percona-xtradb-cluster-5.5 |
Diff against target: |
647 lines (+458/-60) 10 files modified
doc-pxc/source/glossary.rst (+1/-1) doc-pxc/source/howtos/cenots_howto.rst (+299/-0) doc-pxc/source/index.rst (+3/-1) doc-pxc/source/installation.rst (+37/-26) doc-pxc/source/installation/yum_repo.rst (+1/-1) doc-pxc/source/manual/bootstrap.rst (+5/-1) doc-pxc/source/manual/state_snapshot_transfer.rst (+42/-0) doc-pxc/source/resources.rst (+0/-29) doc-pxc/source/wsrep-files-index.rst (+68/-0) doc-pxc/source/wsrep-system-index.rst (+2/-1) |
To merge this branch: | bzr merge lp:~hrvojem/percona-xtradb-cluster/pxc-12 |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Raghavendra D Prabhu (community) | Approve | ||
Jay Janssen (community) | Approve | ||
Vadim Tkachenko | Pending | ||
Review via email: mp+151496@code.launchpad.net |
Commit message
Description of the change
Jay Janssen (jay-janssen) wrote : | # |
Jay Janssen (jay-janssen) wrote : | # |
Per previous comments
Hrvoje Matijakovic (hrvojem) wrote : | # |
Fixed everything except:
> "40 +.. note::
> 441 +
> 442 + Starting the cluster using the ``/etc/init.d/myslq start --wsrep-
> cluster-
> (like CentOS). This method won't work on Debian/Ubuntu due to difference in
> the init script.
> 443 +"
> How would you bootstrap if you can't use the init script?
>
This is already described in the 3rd paragraph on that page:
http://
Let me know if you still think this should be described for other distros.
Jay Janssen (jay-janssen) wrote : | # |
On Mar 7, 2013, at 9:16 AM, Hrvoje Matijakovic <email address hidden> wrote:
> This is already described in the 3rd paragraph on that page:
> http://
Then I suggest linking to it there --- "you can also bootstrap by modifying the config file as described at this link..."
Jay Janssen, MySQL Consulting Lead, Percona
http://
Percona Live in Santa Clara, CA April 22nd-25th 2013
http://
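For distributions where the init-script argument doesn't work, the config-file approach referred to above can be sketched like this (illustrative; the address list is the one used elsewhere in this proposal — temporarily edit, then restore):

```ini
# my.cnf on the node used for bootstrapping:
# temporarily point the node at an empty group so it
# bootstraps a new cluster instead of joining an existing one
wsrep_cluster_address=gcomm://

# once the other nodes have joined, restore the full address list:
# wsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73
```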
Raghavendra D Prabhu (raghavendra-prabhu) wrote : | # |
> "The downside of `mysqldump` and `rsync` is that the cluster becomes *READ-
> ONLY* while data is being copied from one node to another (SST applies
> :command:`FLUSH TABLES WITH READ LOCK` command). "
> The cluster becomes read-only, or the Donor node? I think only the Donor node
> (but I could be wrong, I haven't done much with mysqldump and rsync.
That's right, by default only the donor node becomes read-only.
However, keep in mind this parameter:
gcs.sync_donor Should the rest of the cluster keep in sync with the donor? “Yes” means that if the donor is blocked by state transfer, the whole cluster is blocked with it.
from http://
(maybe you can reference the variable index we have here)
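For reference, that provider option would be set from :file:`my.cnf` through ``wsrep_provider_options`` (a sketch; ``Yes`` is shown for illustration, the default is ``No``):

```ini
# my.cnf on any node - passed through to the Galera provider;
# gcs.sync_donor=Yes makes the whole cluster wait for a blocked donor
wsrep_provider_options="gcs.sync_donor=Yes"
```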
Now, with Xtrabackup it will also be read-only, but for a
shorter while:
=======
# make a prep copy before locking tables, if using rsync
backup_files(1);
# flush tables with read lock
mysql_lockall();
}
if ($option_
write_slave_info();
}
# backup non-InnoDB files and tables
# (or finalize the backup by syncing changes if using rsync)
backup_files(0);
# resume ibbackup and wait till log copying is finished
resume_ibbackup();
# release read locks on all tables
mysql_unlockall() if !$option_no_lock;
=======
So, under FTWRL, Xtrabackup does:
1) Write binlog info, galera info and slave info
2) A lighter rsync (which is a good optimization)
3) Resume the ibbackup for copying the log files from a
particular checkpoint
Along the same lines, rsync SST does:
a) FTWRL
b) Disable writes (with Google's patch)
c) Full rsync
d) Allow writes
e) Unlock tables
Now, if you compare both: in Xtrabackup, 2 + 3 should take less
time than c in rsync SST, hence being under FTWRL for a shorter
while. (It is better if the Xtrabackup devs/AlexeyK also review the
above).
So, I think rather than saying read-only only for mysqldump and
rsync SST, we should compare the duration of FTWRL in both and
hence, the duration up to which the node is read-locked.
> "+ $ yum install Percona-
> xtrabackup"
> xtrabackup is a dependency of PXC server, AFAIK.
> "+ wsrep_cluster_
> Actually, there's no reason why node1's ip can't be in the list, Galera is
> smart enough to remove it -- and this gives you a setting that you can make
> identical on all the nodes.
> Do we need innodb_
> I wouldn't necessarily recommend using root for SST, but a separate user.
> Include a link to the xtrabackup doc about grants necessary.
> Why are you not setting wsrep_cluster_name?
> "40 +.. note::
> 441 +
> 442 + Starting the cluster using the ``/etc/init.d/myslq start --wsrep-
> cluster-
> (like CentOS). This method won't work on Debian/Ubuntu due to difference in
> the init script...
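The point about the address list can be illustrated with a sketch (IPs taken from the howto in this diff): since Galera is smart enough to remove the node's own address, the same lines can be used verbatim on every node:

```ini
# identical on all three nodes; each node ignores its own address
wsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73
wsrep_cluster_name=my_centos_cluster
```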
Raghavendra D Prabhu (raghavendra-prabhu) wrote : | # |
> Thats right, by default only the donor node becomes read-only.
> However, keep in mind this parameter:
> gcs.sync_donor Should the rest of the cluster keep in sync with the donor?
> “Yes” means that if the donor is blocked by state transfer, the whole cluster
> is blocked with it.
> from http://
> (may be you can reference here to the variable index we have)
> Now, with Xtrabackup too, it will be read-only too but for a
> shorter while:
> =======
> # make a prep copy before locking tables, if using rsync
> backup_files(1);
> # flush tables with read lock
> mysql_lockall();
> }
> if ($option_
> write_slave_info();
> }
> # backup non-InnoDB files and tables
> # (or finalize the backup by syncing changes if using rsync)
> backup_files(0);
> # resume ibbackup and wait till log copying is finished
> resume_ibbackup();
> # release read locks on all tables
> mysql_unlockall() if !$option_no_lock;
> =======
> So, under FTWRL, Xtrabackup does:
> 1) Write binlog info, galera info and slave info
> 2) A lighter rsync (which is a good optimization)
> 3) Resume the ibbackup for copying the log files from a
> particular checkpoint
> Along same lines, rsync SST does:
> a) FTWRL
> b) Disable writes (with google' patch)
> c) Full rsync
> d) Allow writes
> e) Unlock tables
> Now, if you compare both. in Xtrabackup 2 + 3 should take less
> time than c in rsync SST, hence being under FTWRL for a shorter
> while. (It is better if Xtrabackup devs/AlexeyK also review the
> above).
> So, I think rather than saying read-only only for mysqldump and
> rsync SST, we should compare the duration of FTWRL in both and
> hence, the duration upto which the node is read-locked.
Sorry, I misread the original draft. What I mean here is that for
Xtrabackup it may not be read-only only for the duration of syncing the
FRM files, but for steps 1, 2 and 3 above - so this part requires expansion.
Jay Janssen (jay-janssen) wrote : | # |
Assuming all my comments are integrated, I approve (didn't read the whole diff again though)
Raghavendra D Prabhu (raghavendra-prabhu) wrote : | # |
Looks good. Add the 'default No' part as discussed.
Preview Diff
1 | === modified file 'doc-pxc/source/glossary.rst' |
2 | --- doc-pxc/source/glossary.rst 2013-02-01 13:41:34 +0000 |
3 | +++ doc-pxc/source/glossary.rst 2013-03-12 09:35:25 +0000 |
4 | @@ -23,7 +23,7 @@ |
5 | Incremental State Transfer. Functionality which instead of whole state snapshot can catch up with te group by receiving the missing writesets, but only if the writeset is still in the donor's writeset cache. |
6 | |
7 | SST |
8 | - State Snapshot Transfer is the full copy of data from one node to another. It's used when a new node joins the cluster, it has to transfer data from existing node. There are three methods of SST available in Percona XtraDB Cluster: :program:`mysqldump`, :program:`rsync` and :program:`xtrabackup` (Percona |XtraBackup| with support of XtraDB Cluster will be released soon, currently you need to use our `source code repository <http://www.percona.com/doc/percona-xtrabackup/installation/compiling_xtrabackup.html>`_). The downside of `mysqldump` and `rsync` is that your cluster becomes *READ-ONLY* while data is being copied from one node to another (SST applies :command:`FLUSH TABLES WITH READ LOCK` command). Xtrabackup SST does not require :command:`READ LOCK` for the entire syncing process, only for syncing |.FRM| files (the same as with regular backup). |
9 | + State Snapshot Transfer is the full copy of data from one node to another. It's used when a new node joins the cluster and has to transfer data from an existing node. There are three methods of SST available in Percona XtraDB Cluster: :program:`mysqldump`, :program:`rsync` and :program:`xtrabackup`. The downside of `mysqldump` and `rsync` is that the donor node becomes *READ-ONLY* while data is being copied from one node to another (SST applies the :command:`FLUSH TABLES WITH READ LOCK` command). Xtrabackup SST does not require a :command:`READ LOCK` for the entire syncing process, only for syncing the |MySQL| system tables and writing the binlog, galera and slave information (same as a regular XtraBackup backup). The state snapshot transfer method can be configured with the :variable:`wsrep_sst_method` variable. |
10 | |
11 | UUID |
12 | Universally Unique IDentifier which uniquely identifies the state and the sequence of changes node undergoes. |
13 | |
14 | === added file 'doc-pxc/source/howtos/cenots_howto.rst' |
15 | --- doc-pxc/source/howtos/cenots_howto.rst 1970-01-01 00:00:00 +0000 |
16 | +++ doc-pxc/source/howtos/cenots_howto.rst 2013-03-12 09:35:25 +0000 |
17 | @@ -0,0 +1,299 @@ |
18 | +.. _centos_howto: |
19 | + |
20 | +Installing Percona XtraDB Cluster on *CentOS* |
21 | +============================================= |
22 | + |
23 | +This tutorial will show how to install the |Percona XtraDB Cluster| on three CentOS 6.3 servers, using the packages from Percona repositories. |
24 | + |
25 | +This cluster will be assembled from three servers/nodes: :: |
26 | + |
27 | + node #1 |
28 | + hostname: percona1 |
29 | + IP: 192.168.70.71 |
30 | + |
31 | + node #2 |
32 | + hostname: percona2 |
33 | + IP: 192.168.70.72 |
34 | + |
35 | + node #3 |
36 | + hostname: percona3 |
37 | + IP: 192.168.70.73 |
38 | + |
39 | +Prerequisites |
40 | +------------- |
41 | + |
42 | + * All three nodes have a CentOS 6.3 installation. |
43 | + |
44 | + * Firewall has been set up to allow connecting to ports 3306, 4444, 4567 and 4568 |
45 | + |
46 | + * SELinux is disabled |
47 | + |
48 | +Installation |
49 | +------------ |
50 | + |
51 | +The Percona repository should be set up as described in the :ref:`yum-repo` guide. To enable the repository, the following command should be used: :: |
52 | + |
53 | + $ rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm |
54 | + |
55 | +The following command will install the |Percona XtraDB Cluster| packages: :: |
56 | + |
57 | + $ yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client |
58 | + |
59 | +When these two commands have been executed successfully on all three nodes, |Percona XtraDB Cluster| is installed. |
60 | + |
61 | +Configuring the nodes |
62 | +--------------------- |
63 | + |
64 | +Individual nodes should be configured to be able to bootstrap the cluster. More details about bootstrapping the cluster can be found in the :ref:`bootstrap` guide. |
65 | + |
66 | +Configuration file :file:`/etc/my.cnf` for the first node should look like: :: |
67 | + |
68 | + [mysqld] |
69 | + |
70 | + datadir=/var/lib/mysql |
71 | + user=mysql |
72 | + |
73 | + # Path to Galera library |
74 | + wsrep_provider=/usr/lib64/libgalera_smm.so |
75 | + |
76 | + # Cluster connection URL contains the IPs of node#1, node#2 and node#3 |
77 | + wsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73 |
78 | + |
79 | + # In order for Galera to work correctly binlog format should be ROW |
80 | + binlog_format=ROW |
81 | + |
82 | + # MyISAM storage engine has only experimental support |
83 | + default_storage_engine=InnoDB |
84 | + |
85 | + # This is a recommended tuning variable for performance |
86 | + innodb_locks_unsafe_for_binlog=1 |
87 | + |
88 | + # This changes how InnoDB autoincrement locks are managed and is a requirement for Galera |
89 | + innodb_autoinc_lock_mode=2 |
90 | + |
91 | + # Node #1 address |
92 | + wsrep_node_address=192.168.70.71 |
93 | + |
94 | + # SST method |
95 | + wsrep_sst_method=xtrabackup |
96 | + |
97 | + # Cluster name |
98 | + wsrep_cluster_name=my_centos_cluster |
99 | + |
100 | + # Authentication for SST method |
101 | + wsrep_sst_auth="sstuser:s3cret" |
102 | + |
103 | + |
104 | +After this, the first node can be started with the following command: :: |
105 | + |
106 | + [root@percona1 ~]# /etc/init.d/mysql start --wsrep-cluster-address="gcomm://" |
107 | + |
108 | +This command will start the cluster with the initial :variable:`wsrep_cluster_address` set to ``gcomm://``. This way the cluster will be bootstrapped, and in case the node or |MySQL| has to be restarted later, there will be no need to change the configuration file. |
109 | + |
110 | +After the first node has been started, cluster status can be checked by: |
111 | + |
112 | +.. code-block:: mysql |
113 | + |
114 | + mysql> show status like 'wsrep%'; |
115 | + +----------------------------+--------------------------------------+ |
116 | + | Variable_name | Value | |
117 | + +----------------------------+--------------------------------------+ |
118 | + | wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec | |
119 | + ... |
120 | + | wsrep_local_state | 4 | |
121 | + | wsrep_local_state_comment | Synced | |
122 | + ... |
123 | + | wsrep_cluster_size | 1 | |
124 | + | wsrep_cluster_status | Primary | |
125 | + | wsrep_connected | ON | |
126 | + ... |
127 | + | wsrep_ready | ON | |
128 | + +----------------------------+--------------------------------------+ |
129 | + 40 rows in set (0.01 sec) |
130 | + |
131 | +This output shows that the cluster has been successfully bootstrapped. |
132 | + |
133 | +It's recommended not to leave an empty password for the root account. The password can be changed with: |
134 | + |
135 | +.. code-block:: mysql |
136 | + |
137 | + mysql@percona1> UPDATE mysql.user SET password=PASSWORD("Passw0rd") where user='root'; |
138 | + mysql@percona1> FLUSH PRIVILEGES; |
139 | + |
140 | +In order to perform a successful :ref:`state_snapshot_transfer` using |XtraBackup|, a new user needs to be set up with the proper `privileges <http://www.percona.com/doc/percona-xtrabackup/innobackupex/privileges.html#permissions-and-privileges-needed>`_: |
141 | + |
142 | +.. code-block:: mysql |
143 | + |
144 | + mysql@percona1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cret'; |
145 | + mysql@percona1> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost'; |
146 | + mysql@percona1> FLUSH PRIVILEGES; |
147 | + |
148 | + |
149 | +.. note:: |
150 | + |
151 | + The MySQL root account can also be used for setting up the SST with Percona XtraBackup, but it's recommended to use a different (non-root) user for this. |
152 | + |
153 | +Configuration file :file:`/etc/my.cnf` on the second node (``percona2``) should look like this: :: |
154 | + |
155 | + [mysqld] |
156 | + |
157 | + datadir=/var/lib/mysql |
158 | + user=mysql |
159 | + |
160 | + # Path to Galera library |
161 | + wsrep_provider=/usr/lib64/libgalera_smm.so |
162 | + |
163 | + # Cluster connection URL contains IPs of node#1, node#2 and node#3 |
164 | + wsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73 |
165 | + |
166 | + # In order for Galera to work correctly binlog format should be ROW |
167 | + binlog_format=ROW |
168 | + |
169 | + # MyISAM storage engine has only experimental support |
170 | + default_storage_engine=InnoDB |
171 | + |
172 | + # This is a recommended tuning variable for performance |
173 | + innodb_locks_unsafe_for_binlog=1 |
174 | + |
175 | + # This changes how InnoDB autoincrement locks are managed and is a requirement for Galera |
176 | + innodb_autoinc_lock_mode=2 |
177 | + |
178 | + # Node #2 address |
179 | + wsrep_node_address=192.168.70.72 |
180 | + |
181 | + # SST method |
182 | + wsrep_sst_method=xtrabackup |
183 | + |
184 | + #Authentication for SST method |
185 | + wsrep_sst_auth="sstuser:s3cret" |
186 | + |
187 | +The second node can be started with the following command: :: |
188 | + |
189 | + [root@percona2 ~]# /etc/init.d/mysql start |
190 | + |
191 | +After the server has been started, it should receive the state snapshot transfer automatically. This means that the second node won't have an empty root password anymore. In order to connect to the cluster and check the status, the root password that was changed on the first node should be used. Cluster status can now be checked on both nodes. This is an example from the second node (``percona2``): |
192 | + |
193 | +.. code-block:: mysql |
194 | + |
195 | + mysql> show status like 'wsrep%'; |
196 | + +----------------------------+--------------------------------------+ |
197 | + | Variable_name | Value | |
198 | + +----------------------------+--------------------------------------+ |
199 | + | wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec | |
200 | + ... |
201 | + | wsrep_local_state | 4 | |
202 | + | wsrep_local_state_comment | Synced | |
203 | + ... |
204 | + | wsrep_cluster_size | 2 | |
205 | + | wsrep_cluster_status | Primary | |
206 | + | wsrep_connected | ON | |
207 | + ... |
208 | + | wsrep_ready | ON | |
209 | + +----------------------------+--------------------------------------+ |
210 | + 40 rows in set (0.01 sec) |
211 | + |
212 | +This output shows that the new node has been successfully added to the cluster. |
213 | + |
214 | +MySQL configuration file :file:`/etc/my.cnf` on the third node (``percona3``) should look like this: :: |
215 | + |
216 | + [mysqld] |
217 | + |
218 | + datadir=/var/lib/mysql |
219 | + user=mysql |
220 | + |
221 | + # Path to Galera library |
222 | + wsrep_provider=/usr/lib64/libgalera_smm.so |
223 | + |
224 | + # Cluster connection URL contains IPs of node#1, node#2 and node#3 |
225 | + wsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73 |
226 | + |
227 | + # In order for Galera to work correctly binlog format should be ROW |
228 | + binlog_format=ROW |
229 | + |
230 | + # MyISAM storage engine has only experimental support |
231 | + default_storage_engine=InnoDB |
232 | + |
233 | + # This is a recommended tuning variable for performance |
234 | + innodb_locks_unsafe_for_binlog=1 |
235 | + |
236 | + # This changes how InnoDB autoincrement locks are managed and is a requirement for Galera |
237 | + innodb_autoinc_lock_mode=2 |
238 | + |
239 | + # Node #3 address |
240 | + wsrep_node_address=192.168.70.73 |
241 | + |
242 | + # SST method |
243 | + wsrep_sst_method=xtrabackup |
244 | + |
245 | + #Authentication for SST method |
246 | + wsrep_sst_auth="sstuser:s3cret" |
247 | + |
248 | +The third node can now be started with the following command: :: |
249 | + |
250 | + [root@percona3 ~]# /etc/init.d/mysql start |
251 | + |
252 | +After the server has been started, it should receive the SST the same way as the second node did. Cluster status can now be checked on all nodes. This is an example from the third node (``percona3``): |
253 | + |
254 | +.. code-block:: mysql |
255 | + |
256 | + mysql> show status like 'wsrep%'; |
257 | + +----------------------------+--------------------------------------+ |
258 | + | Variable_name | Value | |
259 | + +----------------------------+--------------------------------------+ |
260 | + | wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec | |
261 | + ... |
262 | + | wsrep_local_state | 4 | |
263 | + | wsrep_local_state_comment | Synced | |
264 | + ... |
265 | + | wsrep_cluster_size | 3 | |
266 | + | wsrep_cluster_status | Primary | |
267 | + | wsrep_connected | ON | |
268 | + ... |
269 | + | wsrep_ready | ON | |
270 | + +----------------------------+--------------------------------------+ |
271 | + 40 rows in set (0.01 sec) |
272 | + |
273 | +This output confirms that the third node has joined the cluster. |
274 | + |
275 | +Testing the replication |
276 | +----------------------- |
277 | + |
278 | +Although the password change from the first node has replicated successfully, this example will show that writing on any node will replicate to the whole cluster. In order to check this, a new database will be created on the second node and a table for that database will be created on the third node. |
279 | + |
280 | +Creating the new database on the second node: |
281 | + |
282 | +.. code-block:: mysql |
283 | + |
284 | + mysql@percona2> CREATE DATABASE percona; |
285 | + Query OK, 1 row affected (0.01 sec) |
286 | + |
287 | +Creating the ``example`` table on the third node: |
288 | + |
289 | +.. code-block:: mysql |
290 | + |
291 | + mysql@percona3> USE percona; |
292 | + Database changed |
293 | + |
294 | + mysql@percona3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30)); |
295 | + Query OK, 0 rows affected (0.05 sec) |
296 | + |
297 | +Inserting records on the first node: |
298 | + |
299 | +.. code-block:: mysql |
300 | + |
301 | + mysql@percona1> INSERT INTO percona.example VALUES (1, 'percona1'); |
302 | + Query OK, 1 row affected (0.02 sec) |
303 | + |
304 | +Retrieving all the rows from that table on the second node: |
305 | + |
306 | +.. code-block:: mysql |
307 | + |
308 | + mysql@percona2> SELECT * FROM percona.example; |
309 | + +---------+-----------+ |
310 | + | node_id | node_name | |
311 | + +---------+-----------+ |
312 | + | 1 | percona1 | |
313 | + +---------+-----------+ |
314 | + 1 row in set (0.00 sec) |
315 | + |
316 | +This small example shows that all nodes in the cluster are synchronized and working as intended. |
317 | |
318 | === modified file 'doc-pxc/source/index.rst' |
319 | --- doc-pxc/source/index.rst 2013-02-01 13:41:34 +0000 |
320 | +++ doc-pxc/source/index.rst 2013-03-12 09:35:25 +0000 |
321 | @@ -36,7 +36,6 @@ |
322 | :glob: |
323 | |
324 | intro |
325 | - resources |
326 | limitation |
327 | |
328 | Installation |
329 | @@ -67,6 +66,7 @@ |
330 | :glob: |
331 | |
332 | manual/bootstrap |
333 | + manual/state_snapshot_transfer |
334 | manual/restarting_nodes |
335 | manual/failover |
336 | manual/monitoring |
337 | @@ -78,6 +78,7 @@ |
338 | :maxdepth: 1 |
339 | :glob: |
340 | |
341 | + howtos/cenots_howto |
342 | howtos/singlebox |
343 | howtos/3nodesec2 |
344 | howtos/haproxy |
345 | @@ -95,6 +96,7 @@ |
346 | release-notes/release-notes_index |
347 | wsrep-status-index |
348 | wsrep-system-index |
349 | + wsrep-files-index |
350 | faq |
351 | glossary |
352 | |
353 | |
354 | === modified file 'doc-pxc/source/installation.rst' |
355 | --- doc-pxc/source/installation.rst 2013-02-01 13:41:34 +0000 |
356 | +++ doc-pxc/source/installation.rst 2013-03-12 09:35:25 +0000 |
357 | @@ -24,28 +24,52 @@ |
358 | installation/yum_repo |
359 | installation/apt_repo |
360 | |
361 | -|Percona| provides repositories for :program:`yum` (``RPM`` packages for *Red Hat*, *CentOS*, *Amazon Linux AMI*, and *Fedora*) and :program:`apt` (:file:`.deb` packages for *Ubuntu* and *Debian*) for software such as |Percona Server|, |XtraDB|, |XtraBackup|, and *Percona Toolkit*. This makes it easy to install and update your software and its dependencies through your operating system's package manager. |
362 | +|Percona| provides repositories for :program:`yum` (``RPM`` packages for *Red Hat*, *CentOS* and *Amazon Linux AMI*) and :program:`apt` (:file:`.deb` packages for *Ubuntu* and *Debian*) for software such as |Percona Server|, |XtraDB|, |XtraBackup|, and *Percona Toolkit*. This makes it easy to install and update your software and its dependencies through your operating system's package manager. |
363 | |
364 | This is the recommend way of installing where possible. |
365 | |
366 | +``YUM``-Based Systems |
367 | +--------------------- |
368 | + |
369 | +Once the repository is set up, use the following commands: :: |
370 | + |
371 | + $ yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client |
372 | + |
373 | +A more detailed example of the |Percona XtraDB Cluster| installation and configuration can be seen in the :ref:`centos_howto` tutorial. |
374 | + |
375 | +``DEB``-Based Systems |
376 | +--------------------- |
377 | + |
378 | +Once the repository is set up, use the following commands: :: |
379 | + |
380 | + $ sudo apt-get install percona-xtradb-cluster-server-5.5 percona-xtradb-cluster-client-5.5 |
381 | + |
382 | +Prerequisites |
383 | +============= |
384 | + |
385 | +In order for |Percona XtraDB Cluster| to work correctly, the firewall has to be set up to allow connections on the following ports: 3306, 4444, 4567 and 4568. |Percona XtraDB Cluster| currently doesn't work with ``SELinux`` or ``apparmor``, so they should be disabled; otherwise individual nodes won't be able to communicate and form the cluster. |
386 | |
387 | Initial configuration |
388 | ===================== |
389 | |
390 | -In order to start using XtraDB Cluster, you need to configure my.cnf file. Following options are needed: :: |
391 | +In order to start using |Percona XtraDB Cluster|, the following options are needed in the |MySQL| configuration file :file:`my.cnf`: :: |
392 | |
393 | [mysqld] |
394 | + |
395 | wsrep_provider — a path to Galera library. |
396 | - wsrep_cluster_address — cluster connection URL. |
397 | - binlog_format=ROW |
398 | - default_storage_engine=InnoDB |
399 | - innodb_autoinc_lock_mode=2 |
400 | - innodb_locks_unsafe_for_binlog=1 |
401 | - |
402 | -Additional parameters to tune: :: |
403 | - |
404 | - wsrep_slave_threads # specifies amount of threads to apply events |
405 | - wsrep_sst_method |
406 | + wsrep_cluster_address — Cluster connection URL containing the IPs of other nodes in the cluster |
407 | + wsrep_sst_method - method used for the state snapshot transfer |
408 | + |
409 | + binlog_format=ROW - In order for Galera to work correctly binlog format should be ROW |
410 | + default_storage_engine=InnoDB - MyISAM storage engine has only experimental support |
411 | + innodb_locks_unsafe_for_binlog=1 - This is a recommended tuning variable for performance |
412 | + innodb_autoinc_lock_mode=2 - This changes how InnoDB autoincrement locks are managed |
413 | + |
414 | +Additional parameters to specify: :: |
415 | + |
416 | + wsrep_sst_auth=user:password |
417 | + |
418 | +If any other :ref:`state_snapshot_transfer` method besides :program:`rsync` is specified in :variable:`wsrep_sst_method`, credentials for |SST| need to be specified. |
419 | |
420 | Example: :: |
421 | |
422 | @@ -53,23 +77,10 @@ |
423 | wsrep_cluster_address=gcomm://10.11.12.206 |
424 | wsrep_slave_threads=8 |
425 | wsrep_sst_method=rsync |
426 | - #wsrep_sst_method=xtrabackup - alternative way to do SST |
427 | - wsrep_cluster_name=percona_test_cluster |
428 | binlog_format=ROW |
429 | default_storage_engine=InnoDB |
430 | + innodb_locks_unsafe_for_binlog=1 |
431 | innodb_autoinc_lock_mode=2 |
432 | - innodb_locks_unsafe_for_binlog=1 |
433 | |
434 | Detailed list of variables can be found in :ref:`wsrep_system_index` and :ref:`wsrep_status_index`. |
435 | |
436 | -Install XtraBackup SST method |
437 | -============================== |
438 | - |
439 | -To use Percona XtraBackup for State Transfer method (copy snapshot of data between nodes) |
440 | -you can use the regular xtrabackup package with the script what supports Galera information. |
441 | -You can take *innobackupex* script from source code `innobackupex <http://bazaar.launchpad.net/~percona-dev/percona-xtrabackup/galera-info/view/head:/innobackupex>`_. |
442 | - |
443 | -To inform node to use xtrabackup you need to specify in my.cnf: :: |
444 | - |
445 | - wsrep_sst_method=xtrabackup |
446 | - |
447 | |
448 | === modified file 'doc-pxc/source/installation/yum_repo.rst' |
449 | --- doc-pxc/source/installation/yum_repo.rst 2013-02-01 13:41:34 +0000 |
450 | +++ doc-pxc/source/installation/yum_repo.rst 2013-03-12 09:35:25 +0000 |
451 | @@ -6,7 +6,7 @@ |
452 | |
453 | The |Percona| :program:`yum` repository supports popular *RPM*-based operating systems, including the *Amazon Linux AMI*. |
454 | |
455 | -The easiest way to install the *Percona Yum* repository is to install an *RPM* that configures :program:`yum` and installs the `Percona GPG key <http://www.percona.com/downloads/RPM-GPG-KEY-percona>`_. You can also do the installation manually. |
456 | +The easiest way to install the *Percona Yum* repository is to install an *RPM* that configures :program:`yum` and installs the `Percona GPG key <https://www.percona.com/downloads/RPM-GPG-KEY-percona>`_. Installation can also be done manually. |
457 | |
458 | Automatic Installation |
459 | ======================= |
460 | |
461 | === modified file 'doc-pxc/source/manual/bootstrap.rst' |
462 | --- doc-pxc/source/manual/bootstrap.rst 2013-02-01 13:41:34 +0000 |
463 | +++ doc-pxc/source/manual/bootstrap.rst 2013-03-12 09:35:25 +0000 |
464 | @@ -30,10 +30,14 @@ |
465 | |
466 | In case cluster that's being bootstrapped has already been set up before, and to avoid editing the :file:`my.cnf` twice to change the :variable:`wsrep_cluster_address` to ``gcomm://`` and then to change it back to other node addresses, first node can be started with: :: |
467 | |
468 | - /etc/init.d/myslq start --wsrep-cluster-address="gcomm://" |
469 | + /etc/init.d/mysql start --wsrep-cluster-address="gcomm://" |
470 | |
471 | This way values in :file:`my.cnf` would remain unchanged. Next time node is restarted it won't require updating the configuration file. This can be useful in case cluster has been previously set up and for some reason all nodes went down and the cluster needs to be bootstrapped again. |
472 | |
473 | +.. note:: |
474 | + |
475 | + Starting the cluster using ``/etc/init.d/mysql start --wsrep-cluster-address="gcomm://"`` will only work on RedHat-based distributions (like CentOS). This method won't work on Debian/Ubuntu due to differences in the init script. |
476 | + |
477 | Other Reading |
478 | ============= |
479 | |
480 | |
481 | === added file 'doc-pxc/source/manual/state_snapshot_transfer.rst' |
482 | --- doc-pxc/source/manual/state_snapshot_transfer.rst 1970-01-01 00:00:00 +0000 |
483 | +++ doc-pxc/source/manual/state_snapshot_transfer.rst 2013-03-12 09:35:25 +0000 |
484 | @@ -0,0 +1,42 @@ |
485 | +.. _state_snapshot_transfer: |
486 | + |
487 | +========================= |
488 | + State Snapshot Transfer |
489 | +========================= |
490 | + |
491 | +State Snapshot Transfer is a full data copy from one node (donor) to the joining node (joiner). It's used when a new node joins the cluster. In order to be synchronized with the cluster, the new node has to transfer data from a node that is already part of the cluster. |
492 | +There are three methods of SST available in Percona XtraDB Cluster: :program:`mysqldump`, :program:`rsync` and :program:`xtrabackup`. The downside of `mysqldump` and `rsync` is that the donor node becomes *READ-ONLY* while data is being copied from one node to another (SST applies the :command:`FLUSH TABLES WITH READ LOCK` command). Xtrabackup SST does not require a :command:`READ LOCK` for the entire syncing process, only for syncing the |MySQL| system tables and writing the binlog, galera and slave information (same as a regular XtraBackup backup). The state snapshot transfer method can be configured with the :variable:`wsrep_sst_method` variable. |
493 | + |
494 | +.. note:: |
495 | + |
496 | + If the :variable:`gcs.sync_donor` variable is set to ``Yes`` (default is ``No``), the whole cluster will be blocked if the donor is blocked by the State Snapshot Transfer, not just the donor node. |
497 | + |
498 | +Using *Percona Xtrabackup* |
499 | +========================== |
500 | + |
501 | +This is the least blocking method as it locks the tables only to copy the |MyISAM| system tables. |XtraBackup| is run locally on the donor node, so it's important that the correct user credentials are set up there. In order for PXC to perform the SST using |XtraBackup|, credentials for connecting to the donor node need to be set up in the :variable:`wsrep_sst_auth` variable. Besides the credentials, one more important thing is that the :term:`datadir` needs to be specified in the server configuration file :file:`my.cnf`, otherwise the transfer process will fail. |
502 | + |
503 | +More information about the required credentials can be found in the |XtraBackup| `manual <http://www.percona.com/doc/percona-xtrabackup/innobackupex/privileges.html#permissions-and-privileges-needed>`_. An easy way to test whether the credentials will work is to run |innobackupex| on the donor node with the username and password specified in the :variable:`wsrep_sst_auth` variable. For example, if the value of :variable:`wsrep_sst_auth` is ``root:Passw0rd``, the |innobackupex| command should look like: :: |
504 | + |
505 | + innobackupex --user=root --password=Passw0rd /tmp/ |
506 | + |
507 | +The script used for this method can be found in :file:`/usr/bin/wsrep_sst_xtrabackup` and is provided with the |PXC| binary packages. |
508 | + |
509 | +Using ``mysqldump`` |
510 | +=================== |
511 | + |
512 | +This method uses the standard :program:`mysqldump` to dump all the databases from the donor node and import them to the joining node. For this method to work, :variable:`wsrep_sst_auth` needs to be set up with the root credentials. This method is the slowest one, and it also acquires a global lock while performing the |SST|, which blocks writes on the donor node. |
513 | + |
514 | +The script used for this method can be found in :file:`/usr/bin/wsrep_sst_mysqldump` and is provided with the |PXC| binary packages. |
515 | + |
516 | +Using ``rsync`` |
517 | +=============== |
518 | + |
519 | +This method uses :program:`rsync` to copy files from the donor to the joining node. In some cases this can be faster than |XtraBackup|, but it requires a global data lock, which blocks writes on the donor node. This method doesn't require username/password credentials to be set up in the :variable:`wsrep_sst_auth` variable. |
520 | + |
521 | +The script used for this method can be found in :file:`/usr/bin/wsrep_sst_rsync` and is provided with the |PXC| binary packages. |
522 | + |
523 | +Other Reading |
524 | +============= |
525 | + |
526 | +* `SST Methods for MySQL <http://www.codership.com/wiki/doku.php?id=sst_mysql>`_ |
527 | |
528 | === removed file 'doc-pxc/source/resources.rst' |
529 | --- doc-pxc/source/resources.rst 2013-01-25 07:37:41 +0000 |
530 | +++ doc-pxc/source/resources.rst 1970-01-01 00:00:00 +0000 |
531 | @@ -1,29 +0,0 @@ |
532 | -.. _resources: |
533 | - |
534 | -========= |
535 | -Resources |
536 | -========= |
537 | - |
538 | -In general there are 4 resources that need to be different when you want to run several MySQL/Galera nodes on one host: |
539 | - |
540 | -1) data directory |
541 | -2) mysql client port and/or address |
542 | -3) galera replication listen port and/or address |
543 | -4) receive address for state snapshot transfer |
544 | - |
545 | -and later incremental state transfer receive address will be added to the bunch. (I know, it is kinda a lot, but we don't see how it can be meaningfully reduced yet). |
546 | - |
547 | -The first two are the usual mysql stuff. |
548 | - |
549 | -You figured out the third. It is also possible to pass it via: :: |
550 | - |
551 | - wsrep_provider_options="gmcast.listen_addr=tcp://127.0.0.1:5678" |
552 | - |
553 | -as most other Galera options. This may save you some extra typing. |
554 | - |
555 | -The fourth one is :option:`wsrep_sst_receive_address`. This is the address at which the node will be listening for and receiving the state. Note that in galera cluster joining nodes are waiting for connections from donors. It goes contrary to tradition and seems to confuse people time and again, but there are good reasons it was made like that. |
556 | - |
557 | -If you use mysqldump SST it should be the same as this mysql client connection address plus you need to set :option:`wsrep_sst_auth` variable to hold user:password pair. The user should be privileged enough to read system tables from donor and create system tables on this node. For simplicity that could be just the root user. Note that it also means that you need to properly set up the privileges on the new node before attempting to join the cluster. If you use |xtrabackup| as SST method, it will use /usr/bin/wsrep_sst_xtrabackup provided in Percona-XtraDB-Cluster-server package. And this script also needs user password if you have a password for root@localhost. |
558 | - |
559 | -If you use rsync SST, :option:`wsrep_sst_auth` is not necessary unless your SST script makes use of it. :option:`wsrep_sst_receive_address` can be anything local (it may even be the same on all nodes provided you'll be starting them one at a time). |
560 | - |
561 | |
562 | === added file 'doc-pxc/source/wsrep-files-index.rst' |
563 | --- doc-pxc/source/wsrep-files-index.rst 1970-01-01 00:00:00 +0000 |
564 | +++ doc-pxc/source/wsrep-files-index.rst 2013-03-12 09:35:25 +0000 |
565 | @@ -0,0 +1,68 @@ |
566 | +.. _wsrep_file_index: |
567 | + |
568 | +=============================== |
569 | + Index of files created by PXC |
570 | +=============================== |
571 | + |
572 | +* :file:`GRA_*.log` |
573 | +  These files contain binlog events in ROW format representing the failed transaction, which means that the slave thread was not able to apply one of the transactions. For each of those files, a corresponding warning or error message is present in the mysql error log file. Those errors can also be false positives, like a bad ``DDL`` statement (dropping a table that doesn't exist, for example), and therefore nothing to worry about. However, it's always recommended to check these logs to understand what is happening. |
574 | + |
575 | +  To be able to analyze these files, the `binlog header <http://www.mysqlperformanceblog.com/wp-content/uploads/2012/12/GRA-header.zip>`_ needs to be added to the log file. :: |
576 | + |
577 | + $ cat GRA-header > /var/lib/mysql/GRA_1_2-bin.log |
578 | + $ cat /var/lib/mysql/GRA_1_2.log >> /var/lib/mysql/GRA_1_2-bin.log |
579 | + $ mysqlbinlog -vvv /var/lib/mysql/GRA_1_2-bin.log |
580 | + /*!40019 SET @@session.max_insert_delayed_threads=0*/; |
581 | + /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/; |
582 | + DELIMITER /*!*/; |
583 | + # at 4 |
584 | + #120715 9:45:56 server id 1 end_log_pos 107 Start: binlog v 4, server v 5.5.25-debug-log created 120715 9:45:56 at startup |
585 | + # Warning: this binlog is either in use or was not closed properly. |
586 | + ROLLBACK/*!*/; |
587 | + BINLOG ' |
588 | + NHUCUA8BAAAAZwAAAGsAAAABAAQANS41LjI1LWRlYnVnLWxvZwAAAAAAAAAAAAAAAAAAAAAAAAAA |
589 | + AAAAAAAAAAAAAAAAAAA0dQJQEzgNAAgAEgAEBAQEEgAAVAAEGggAAAAICAgCAA== |
590 | + '/*!*/; |
591 | + # at 107 |
592 | + #130226 11:48:50 server id 1 end_log_pos 83 Query thread_id=3 exec_time=0 error_code=0 |
593 | + use `test`/*!*/; |
594 | + SET TIMESTAMP=1361875730/*!*/; |
595 | + SET @@session.pseudo_thread_id=3/*!*/; |
596 | + SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/; |
597 | + SET @@session.sql_mode=1437073440/*!*/; |
598 | + SET @@session.auto_increment_increment=3, @@session.auto_increment_offset=2/*!*/; |
599 | + /*!\C utf8 *//*!*/; |
600 | + SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/; |
601 | + SET @@session.lc_time_names=0/*!*/; |
602 | + SET @@session.collation_database=DEFAULT/*!*/; |
603 | + drop table test |
604 | + /*!*/; |
605 | + DELIMITER ; |
606 | + # End of log file |
607 | + ROLLBACK /* added by mysqlbinlog */; |
608 | + /*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/; |
609 | + |
610 | + This information can be used for checking the |MySQL| error log for the corresponding error message. :: |
611 | + |
612 | + 130226 11:48:50 [ERROR] Slave SQL: Error 'Unknown table 'test'' on query. Default database: 'test'. Query: 'drop table test', Error_code: 1051 |
613 | + 130226 11:48:50 [Warning] WSREP: RBR event 1 Query apply warning: 1, 3 |
614 | + 130226 11:48:50 [Warning] WSREP: Ignoring error for TO isolated action: source: dd40ad88-7ff9-11e2-0800-e93cbffe93d7 version: 2 local: 0 state: APPLYING flags: 65 conn_id: 3 trx_id: -1 seqnos (l: 5, g: 3, s: 2, d: 2, ts: 1361875730070283555) |
615 | + |
616 | +  In this example, a ``DROP TABLE`` statement was executed on a table that doesn't exist. |
617 | + |
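The header-stitching steps above can be sketched as a small script. This is a minimal sketch using stand-in file contents; on a real node the `GRA-header` file is the one downloaded above and the log lives under :file:`/var/lib/mysql`:

```shell
# Stitch a binlog header onto a headerless GRA_*.log so that
# mysqlbinlog can decode it. File contents here are stand-ins.
workdir="$(mktemp -d)"
cd "$workdir"
printf 'HEADER' > GRA-header      # stand-in for the downloaded GRA-header
printf 'EVENTS' > GRA_1_2.log     # stand-in for a real GRA_1_2.log
cat GRA-header GRA_1_2.log > GRA_1_2-bin.log
# On a real node you would now run: mysqlbinlog -vvv GRA_1_2-bin.log
cat GRA_1_2-bin.log               # prints HEADEREVENTS
```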
618 | +.. _galera.cache: galera_cache |
619 | + |
620 | +* :file:`galera.cache` |
621 | +  This file is used as the main writeset store. It's implemented as a permanent ring-buffer file that is preallocated on disk when the node is initialized. The file size can be controlled with the :variable:`gcache.size` variable. The bigger this value is, the more writesets are cached and the better the chances are that a re-joining node will get |IST| instead of |SST|. The filename can be changed with the :variable:`gcache.name` variable. |
622 | + |
623 | +* :file:`grastate.dat` |
624 | + This file contains the Galera state information. |
625 | + |
626 | +  An example of this file looks like this: :: |
627 | + |
628 | + # GALERA saved state |
629 | + version: 2.1 |
630 | + uuid: 1917033b-7081-11e2-0800-707f5d3b106b |
631 | + seqno: -1 |
632 | + cert_index: |
633 | + |
634 | |
635 | === modified file 'doc-pxc/source/wsrep-system-index.rst' |
636 | --- doc-pxc/source/wsrep-system-index.rst 2013-02-01 13:41:34 +0000 |
637 | +++ doc-pxc/source/wsrep-system-index.rst 2013-03-12 09:35:25 +0000 |
638 | @@ -290,8 +290,9 @@ |
639 | :conf: Yes |
640 | :scope: Global |
641 | :dyn: Yes |
642 | + :format: <username>:<password> |
643 | |
644 | -This variable should contain the authentication information needed for State Snapshot Transfer. Required information depends on the method selected in the :variable:`wsrep_sst_method`. More information about required authentication can be found in :ref:`resources`. This variable will appear masked in the logs and in the ``SHOW VARIABLES`` query. |
645 | +This variable should contain the authentication information needed for the State Snapshot Transfer. The required information depends on the method selected in :variable:`wsrep_sst_method`. More information about the required authentication can be found in the :ref:`state_snapshot_transfer` documentation. This variable will appear masked in the logs and in the ``SHOW VARIABLES`` query. |
646 | |
647 | .. variable:: wsrep_sst_donor |
648 |
"The downside of `mysqldump` and `rsync` is that the cluster becomes *READ-ONLY* while data is being copied from one node to another (SST applies :command:`FLUSH TABLES WITH READ LOCK` command). "
The cluster becomes read-only, or the Donor node? I think only the Donor node (but I could be wrong, I haven't done much with mysqldump and rsync).
"+ $ yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client xtrabackup"
xtrabackup is a dependency of PXC server, AFAIK.
"+ wsrep_cluster_address=gcomm://192.168.70.72,192.168.70.73"
Actually, there's no reason why node1's ip can't be in the list, Galera is smart enough to remove it -- and this gives you a setting that you can make identical on all the nodes.
Do we need innodb_locks_unsafe_for_binlog?
I wouldn't necessarily recommend using root for SST, but a separate user. Include a link to the xtrabackup doc about grants necessary.
Why are you not setting wsrep_cluster_name?
"40 +.. note::
441 +
442 + Starting the cluster using the ``/etc/init.d/myslq start --wsrep-cluster-address="gcomm://"`` will only work on RedHat based distributions (like CentOS). This method won't work on Debian/Ubuntu due to difference in the init script.
443 +"
Spelling of 'mysql' ^^
How would you bootstrap if you can't use the init script?
"+This method uses :program:`rsync` to copy files from donor to the joining node. In some cases this can be faster than using the |XtraBackup| but requires the global data lock which will block writes to the cluster. This method doesn't require username/password credentials to be set up in the variable :variable:`wsrep_sst_auth`."
- Again, I think you mean 'node' here, not 'cluster'.