Percona XtraDB Cluster (PXC) is different enough from async replication that it can be a bit of a puzzle how to do things the Galera way. This post will attempt to illustrate the basics of setting up a 2-node PXC cluster from scratch.
Requirements
Two servers (could be VMs) that can talk to each other. I’m using CentOS for this post. Here’s a dirt-simple Vagrant setup to make this easy (on VirtualBox): https://github.com/jayjanssen/two_centos_nodes
These servers are talking over the 192.168.70.0/24 internal network for our example.
jayj@~/Src $ git clone https://github.com/jayjanssen/two_centos_nodes.git
jayj@~/Src $ cd two_centos_nodes
jayj@~/Src/two_centos_nodes $ vagrant up
Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
[node1] Importing base box 'centos-6_4-64_percona'...
[node1] Matching MAC address for NAT networking...
[node1] Setting the name of the VM...
[node1] Clearing any previously set forwarded ports...
[node1] Creating shared folders metadata...
[node1] Clearing any previously set network interfaces...
[node1] Preparing network interfaces based on configuration...
[node1] Forwarding ports...
[node1] -- 22 => 2222 (adapter 1)
[node1] Booting VM...
[node1] Waiting for machine to boot. This may take a few minutes...
[node1] Machine booted and ready!
[node1] Setting hostname...
[node1] Configuring and enabling network interfaces...
[node1] Mounting shared folders...
[node1] -- /vagrant
[node2] Importing base box 'centos-6_4-64_percona'...
[node2] Matching MAC address for NAT networking...
[node2] Setting the name of the VM...
[node2] Clearing any previously set forwarded ports...
[node2] Fixed port collision for 22 => 2222. Now on port 2200.
[node2] Creating shared folders metadata...
[node2] Clearing any previously set network interfaces...
[node2] Preparing network interfaces based on configuration...
[node2] Forwarding ports...
[node2] -- 22 => 2200 (adapter 1)
[node2] Booting VM...
[node2] Waiting for machine to boot. This may take a few minutes...
[node2] Machine booted and ready!
[node2] Setting hostname...
[node2] Configuring and enabling network interfaces...
[node2] Mounting shared folders...
[node2] -- /vagrant
Install the software
These steps should be repeated on both nodes:
jayj@~/Src/two_centos_nodes $ vagrant ssh node1
Last login: Tue Sep 10 14:15:50 2013 from 10.0.2.2
[root@node1 ~]# yum localinstall http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
Loaded plugins: downloadonly, fastestmirror, priorities
Setting up Local Package Process
percona-release-0.0-1.x86_64.rpm                                         | 6.1 kB     00:00
Examining /var/tmp/yum-root-t61o64/percona-release-0.0-1.x86_64.rpm: percona-release-0.0-1.x86_64
Marking /var/tmp/yum-root-t61o64/percona-release-0.0-1.x86_64.rpm to be installed
Determining fastest mirrors
epel/metalink                                                            |  15 kB     00:00
 * base: mirror.atlanticmetro.net
 * epel: mirror.seas.harvard.edu
 * extras: centos.mirror.netriplex.com
 * updates: mirror.team-cymru.org
base                                                                     | 3.7 kB     00:00
base/primary_db                                                          | 4.4 MB     00:01
epel                                                                     | 4.2 kB     00:00
epel/primary_db                                                          | 5.5 MB     00:04
extras                                                                   | 3.4 kB     00:00
extras/primary_db                                                        |  18 kB     00:00
updates                                                                  | 3.4 kB     00:00
updates/primary_db                                                       | 4.4 MB     00:03
Resolving Dependencies
--> Running transaction check
---> Package percona-release.x86_64 0:0.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================
 Package                    Arch              Version          Repository                                                   Size
================================================================================================================================
Installing:
 percona-release            x86_64            0.0-1            /percona-release-0.0-1.x86_64                               3.6 k

Transaction Summary
================================================================================================================================
Install       1 Package(s)

Total size: 3.6 k
Installed size: 3.6 k
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : percona-release-0.0-1.x86_64                                                                                  1/1
  Verifying  : percona-release-0.0-1.x86_64                                                                                  1/1

Installed:
  percona-release.x86_64 0:0.0-1

Complete!
[root@node1 ~]# yum install -y Percona-XtraDB-Cluster-server
Loaded plugins: downloadonly, fastestmirror, priorities
Loading mirror speeds from cached hostfile
 * base: mirror.atlanticmetro.net
 * epel: mirror.seas.harvard.edu
 * extras: centos.mirror.netriplex.com
 * updates: mirror.team-cymru.org
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package Percona-XtraDB-Cluster-server.x86_64 1:5.5.31-23.7.5.438.rhel6 will be installed
--> Processing Dependency: xtrabackup >= 1.9.0 for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: Percona-XtraDB-Cluster-client for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: libaio.so.1(LIBAIO_0.4)(64bit) for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: rsync for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: Percona-XtraDB-Cluster-galera for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: nc for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: Percona-XtraDB-Cluster-shared for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: libaio.so.1(LIBAIO_0.1)(64bit) for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: libaio.so.1()(64bit) for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Running transaction check
---> Package Percona-XtraDB-Cluster-client.x86_64 1:5.5.31-23.7.5.438.rhel6 will be installed
---> Package Percona-XtraDB-Cluster-galera.x86_64 0:2.6-1.152.rhel6 will be installed
---> Package Percona-XtraDB-Cluster-shared.x86_64 1:5.5.31-23.7.5.438.rhel6 will be obsoleting
---> Package libaio.x86_64 0:0.3.107-10.el6 will be installed
---> Package mysql-libs.x86_64 0:5.1.69-1.el6_4 will be obsoleted
---> Package nc.x86_64 0:1.84-22.el6 will be installed
---> Package percona-xtrabackup.x86_64 0:2.1.4-656.rhel6 will be installed
--> Processing Dependency: perl(DBD::mysql) for package: percona-xtrabackup-2.1.4-656.rhel6.x86_64
--> Processing Dependency: perl(Time::HiRes) for package: percona-xtrabackup-2.1.4-656.rhel6.x86_64
---> Package rsync.x86_64 0:3.0.6-9.el6 will be installed
--> Running transaction check
---> Package perl-DBD-MySQL.x86_64 0:4.013-3.el6 will be installed
--> Processing Dependency: perl(DBI::Const::GetInfoType) for package: perl-DBD-MySQL-4.013-3.el6.x86_64
--> Processing Dependency: perl(DBI) for package: perl-DBD-MySQL-4.013-3.el6.x86_64
--> Processing Dependency: libmysqlclient.so.16(libmysqlclient_16)(64bit) for package: perl-DBD-MySQL-4.013-3.el6.x86_64
--> Processing Dependency: libmysqlclient.so.16()(64bit) for package: perl-DBD-MySQL-4.013-3.el6.x86_64
---> Package perl-Time-HiRes.x86_64 4:1.9721-131.el6_4 will be installed
--> Running transaction check
---> Package Percona-Server-shared-compat.x86_64 0:5.5.33-rel31.1.566.rhel6 will be obsoleting
---> Package perl-DBI.x86_64 0:1.609-4.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================
 Package                         Arch       Version                      Repository     Size
================================================================================================================================
Installing:
 Percona-Server-shared-compat    x86_64     5.5.33-rel31.1.566.rhel6     percona       3.4 M
     replacing  mysql-libs.x86_64 5.1.69-1.el6_4
 Percona-XtraDB-Cluster-server   x86_64     1:5.5.31-23.7.5.438.rhel6    percona        15 M
 Percona-XtraDB-Cluster-shared   x86_64     1:5.5.31-23.7.5.438.rhel6    percona       648 k
     replacing  mysql-libs.x86_64 5.1.69-1.el6_4
Installing for dependencies:
 Percona-XtraDB-Cluster-client   x86_64     1:5.5.31-23.7.5.438.rhel6    percona       6.3 M
 Percona-XtraDB-Cluster-galera   x86_64     2.6-1.152.rhel6              percona       1.1 M
 libaio                          x86_64     0.3.107-10.el6               base           21 k
 nc                              x86_64     1.84-22.el6                  base           57 k
 percona-xtrabackup              x86_64     2.1.4-656.rhel6              percona       6.8 M
 perl-DBD-MySQL                  x86_64     4.013-3.el6                  base          134 k
 perl-DBI                        x86_64     1.609-4.el6                  base          705 k
 perl-Time-HiRes                 x86_64     4:1.9721-131.el6_4           updates        47 k
 rsync                           x86_64     3.0.6-9.el6                  base          334 k

Transaction Summary
================================================================================================================================
Install      12 Package(s)

Total download size: 35 M
Downloading Packages:
(1/12): Percona-Server-shared-compat-5.5.33-rel31.1.566.rhel6.x86_64.rpm         | 3.4 MB     00:04
(2/12): Percona-XtraDB-Cluster-client-5.5.31-23.7.5.438.rhel6.x86_64.rpm         | 6.3 MB     00:03
(3/12): Percona-XtraDB-Cluster-galera-2.6-1.152.rhel6.x86_64.rpm                 | 1.1 MB     00:00
(4/12): Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64.rpm         |  15 MB     00:04
(5/12): Percona-XtraDB-Cluster-shared-5.5.31-23.7.5.438.rhel6.x86_64.rpm         | 648 kB     00:00
(6/12): libaio-0.3.107-10.el6.x86_64.rpm                                         |  21 kB     00:00
(7/12): nc-1.84-22.el6.x86_64.rpm                                                |  57 kB     00:00
(8/12): percona-xtrabackup-2.1.4-656.rhel6.x86_64.rpm                            | 6.8 MB     00:03
(9/12): perl-DBD-MySQL-4.013-3.el6.x86_64.rpm                                    | 134 kB     00:00
(10/12): perl-DBI-1.609-4.el6.x86_64.rpm                                         | 705 kB     00:00
(11/12): perl-Time-HiRes-1.9721-131.el6_4.x86_64.rpm                             |  47 kB     00:00
(12/12): rsync-3.0.6-9.el6.x86_64.rpm                                            | 334 kB     00:00
--------------------------------------------------------------------------------------------------------------------------------
Total                                                                   1.0 MB/s |  35 MB     00:34
warning: rpmts_HdrFromFdno: Header V4 DSA/SHA1 Signature, key ID cd2efd2a: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-percona
Importing GPG key 0xCD2EFD2A:
 Userid : Percona MySQL Development Team <mysql-dev@percona.com>
 Package: percona-release-0.0-1.x86_64 (@/percona-release-0.0-1.x86_64)
 From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-percona
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : libaio-0.3.107-10.el6.x86_64                                      1/13
  Installing : 1:Percona-XtraDB-Cluster-shared-5.5.31-23.7.5.438.rhel6.x86_64    2/13
  Installing : 1:Percona-XtraDB-Cluster-client-5.5.31-23.7.5.438.rhel6.x86_64    3/13
  Installing : Percona-XtraDB-Cluster-galera-2.6-1.152.rhel6.x86_64              4/13
  Installing : nc-1.84-22.el6.x86_64                                             5/13
  Installing : 4:perl-Time-HiRes-1.9721-131.el6_4.x86_64                         6/13
  Installing : perl-DBI-1.609-4.el6.x86_64                                       7/13
  Installing : rsync-3.0.6-9.el6.x86_64                                          8/13
  Installing : Percona-Server-shared-compat-5.5.33-rel31.1.566.rhel6.x86_64      9/13
  Installing : perl-DBD-MySQL-4.013-3.el6.x86_64                                10/13
  Installing : percona-xtrabackup-2.1.4-656.rhel6.x86_64                        11/13
  Installing : 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64   12/13
ls: cannot access /var/lib/mysql/*.err: No such file or directory
ls: cannot access /var/lib/mysql/*.err: No such file or directory
130917 14:43:16 [Note] WSREP: Read nil XID from storage engines, skipping position init
130917 14:43:16 [Note] WSREP: wsrep_load(): loading provider library 'none'
130917 14:43:16 [Note] WSREP: Service disconnected.
130917 14:43:17 [Note] WSREP: Some threads may fail to exit.
130917 14:43:17 [Note] WSREP: Read nil XID from storage engines, skipping position init
130917 14:43:17 [Note] WSREP: wsrep_load(): loading provider library 'none'
130917 14:43:18 [Note] WSREP: Service disconnected.
130917 14:43:19 [Note] WSREP: Some threads may fail to exit.

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h node1 password 'new-password'

Alternatively you can run:
/usr/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

Please report any problems with the /usr/bin/mysqlbug script!

Percona recommends that all production deployments be protected with a support
contract (http://www.percona.com/mysql-suppport/) to ensure the highest uptime,
be eligible for hot fixes, and boost your team's productivity.

/var/tmp/rpm-tmp.rVkEi4: line 95: x0: command not found

Percona XtraDB Cluster is distributed with several useful UDFs from Percona Toolkit.
Run the following commands to create these functions:
mysql -e "CREATE FUNCTION fnv1a_64 RETURNS INTEGER SONAME 'libfnv1a_udf.so'"
mysql -e "CREATE FUNCTION fnv_64 RETURNS INTEGER SONAME 'libfnv_udf.so'"
mysql -e "CREATE FUNCTION murmur_hash RETURNS INTEGER SONAME 'libmurmur_udf.so'"
See http://code.google.com/p/maatkit/source/browse/trunk/udf for more details

  Erasing    : mysql-libs-5.1.69-1.el6_4.x86_64                                 13/13
  Verifying  : 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64    1/13
  Verifying  : 1:Percona-XtraDB-Cluster-client-5.5.31-23.7.5.438.rhel6.x86_64    2/13
  Verifying  : perl-DBD-MySQL-4.013-3.el6.x86_64                                 3/13
  Verifying  : Percona-Server-shared-compat-5.5.33-rel31.1.566.rhel6.x86_64      4/13
  Verifying  : rsync-3.0.6-9.el6.x86_64                                          5/13
  Verifying  : perl-DBI-1.609-4.el6.x86_64                                       6/13
  Verifying  : percona-xtrabackup-2.1.4-656.rhel6.x86_64                         7/13
  Verifying  : 1:Percona-XtraDB-Cluster-shared-5.5.31-23.7.5.438.rhel6.x86_64    8/13
  Verifying  : 4:perl-Time-HiRes-1.9721-131.el6_4.x86_64                         9/13
  Verifying  : nc-1.84-22.el6.x86_64                                            10/13
  Verifying  : libaio-0.3.107-10.el6.x86_64                                     11/13
  Verifying  : Percona-XtraDB-Cluster-galera-2.6-1.152.rhel6.x86_64             12/13
  Verifying  : mysql-libs-5.1.69-1.el6_4.x86_64                                 13/13

Installed:
  Percona-Server-shared-compat.x86_64 0:5.5.33-rel31.1.566.rhel6
  Percona-XtraDB-Cluster-server.x86_64 1:5.5.31-23.7.5.438.rhel6
  Percona-XtraDB-Cluster-shared.x86_64 1:5.5.31-23.7.5.438.rhel6

Dependency Installed:
  Percona-XtraDB-Cluster-client.x86_64 1:5.5.31-23.7.5.438.rhel6
  Percona-XtraDB-Cluster-galera.x86_64 0:2.6-1.152.rhel6
  libaio.x86_64 0:0.3.107-10.el6
  nc.x86_64 0:1.84-22.el6
  percona-xtrabackup.x86_64 0:2.1.4-656.rhel6
  perl-DBD-MySQL.x86_64 0:4.013-3.el6
  perl-DBI.x86_64 0:1.609-4.el6
  perl-Time-HiRes.x86_64 4:1.9721-131.el6_4
  rsync.x86_64 0:3.0.6-9.el6

Replaced:
  mysql-libs.x86_64 0:5.1.69-1.el6_4

Complete!
Disable iptables and SELinux
It is possible to run PXC with these enabled, but for simplicity here we just disable them (on both nodes!):
[root@node1 ~]# echo 0 > /selinux/enforce
[root@node1 ~]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
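If you would rather leave iptables running, the alternative is to open the TCP ports PXC uses between the nodes: 3306 (MySQL), 4567 (Galera group communication), 4568 (IST) and 4444 (SST). A rough sketch for our 192.168.70.0/24 network follows; the eth1 interface name is an assumption based on this Vagrant environment, so adjust for your own setup:

```shell
# Allow the other cluster node to reach the PXC ports instead of
# disabling the firewall entirely (run on both nodes).
for port in 3306 4444 4567 4568; do
    iptables -I INPUT -i eth1 -p tcp -s 192.168.70.0/24 --dport $port -j ACCEPT
done
# Persist the rules across restarts of the iptables service.
service iptables save
```

These rules are firewall configuration for the cluster hosts, so they need root and a live iptables service to apply.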
Configure the cluster nodes
Create a my.cnf file on each node and put this into it:
[mysqld]
datadir                        = /var/lib/mysql
binlog_format                  = ROW
wsrep_cluster_name             = twonode
wsrep_cluster_address          = gcomm://192.168.70.2,192.168.70.3
wsrep_node_address             = 192.168.70.2
wsrep_provider                 = /usr/lib64/libgalera_smm.so
wsrep_sst_method               = xtrabackup
wsrep_sst_auth                 = sst:secret
innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode       = 2
Note that wsrep_node_address should be set to the proper address on each node. We only need this setting because in this environment we are not using the default NIC.
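Concretely, the only line that differs in node2's my.cnf is its own address:

```
wsrep_node_address = 192.168.70.3
```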
Bootstrap node1
Bootstrapping is simply starting up the first node in the cluster. Any data on this node is taken as the source of truth for the other nodes.
[root@node1 ~]# service mysql bootstrap-pxc
Bootstrapping PXC (Percona XtraDB Cluster)Starting MySQL (Percona XtraDB Cluster).. SUCCESS!
[root@node1 mysql]# mysql -e "show global status like 'wsrep%'"
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | 43ac4bea-1fa8-11e3-8496-070bd0e5c133 |
| wsrep_protocol_version     | 4                                    |
| wsrep_last_committed       | 0                                    |
| wsrep_replicated           | 0                                    |
| wsrep_replicated_bytes     | 0                                    |
| wsrep_received             | 2                                    |
| wsrep_received_bytes       | 133                                  |
| wsrep_local_commits        | 0                                    |
| wsrep_local_cert_failures  | 0                                    |
| wsrep_local_bf_aborts      | 0                                    |
| wsrep_local_replays        | 0                                    |
| wsrep_local_send_queue     | 0                                    |
| wsrep_local_send_queue_avg | 0.000000                             |
| wsrep_local_recv_queue     | 0                                    |
| wsrep_local_recv_queue_avg | 0.000000                             |
| wsrep_flow_control_paused  | 0.000000                             |
| wsrep_flow_control_sent    | 0                                    |
| wsrep_flow_control_recv    | 0                                    |
| wsrep_cert_deps_distance   | 0.000000                             |
| wsrep_apply_oooe           | 0.000000                             |
| wsrep_apply_oool           | 0.000000                             |
| wsrep_apply_window         | 0.000000                             |
| wsrep_commit_oooe          | 0.000000                             |
| wsrep_commit_oool          | 0.000000                             |
| wsrep_commit_window        | 0.000000                             |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| wsrep_cert_index_size      | 0                                    |
| wsrep_causal_reads         | 0                                    |
| wsrep_incoming_addresses   | 192.168.70.2:3306                    |
| wsrep_cluster_conf_id      | 1                                    |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_state_uuid   | 43ac4bea-1fa8-11e3-8496-070bd0e5c133 |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| wsrep_local_index          | 0                                    |
| wsrep_provider_name        | Galera                               |
| wsrep_provider_vendor      | Codership Oy <info@codership.com>    |
| wsrep_provider_version     | 2.6(r152)                            |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
We can see the cluster is Primary, the size is 1, and our local state is Synced. This is a one node cluster!
Prep for SST
SST is how new nodes (post-bootstrap) get a copy of the data when joining the cluster. It is in essence (and in reality) a full backup. We specified xtrabackup as our SST method in my.cnf, along with a username/password for it (sst:secret). We now need to set up a GRANT on node1 so XtraBackup can run against it to SST node2:
[root@node1 ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.5.31 Percona XtraDB Cluster (GPL), wsrep_23.7.5.r3880

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sst'@'localhost' IDENTIFIED BY 'secret';
Query OK, 0 rows affected (0.00 sec)
This GRANT only needs to be issued once; it is part of the data copied over by SST, so it will already exist on any further nodes you add to the cluster.
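Before another node tries to join, you can sanity-check the credentials by running a throwaway backup with them on node1. This is only a suggested verification step, not part of the original walkthrough; /tmp/sst-test is a hypothetical scratch directory:

```shell
# Run a test backup as the SST user; if the GRANT is wrong,
# innobackupex will fail with an access-denied error here
# rather than during node2's SST.
innobackupex --user=sst --password=secret /tmp/sst-test
```

This needs a running mysqld on node1, since innobackupex connects to it to lock and copy the data.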
Start node2
Assuming you’ve installed the software and created the my.cnf on node2, it should be ready to start up:
[root@node2 ~]# service mysql start
Starting MySQL (Percona XtraDB Cluster).....SST in progress, setting sleep higher. SUCCESS!
If we check the status of the cluster again:
[root@node1 mysql]# mysql -e "show global status like 'wsrep%'"
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | 43ac4bea-1fa8-11e3-8496-070bd0e5c133 |
| wsrep_protocol_version     | 4                                    |
| wsrep_last_committed       | 0                                    |
| wsrep_replicated           | 0                                    |
| wsrep_replicated_bytes     | 0                                    |
| wsrep_received             | 6                                    |
| wsrep_received_bytes       | 393                                  |
| wsrep_local_commits        | 0                                    |
| wsrep_local_cert_failures  | 0                                    |
| wsrep_local_bf_aborts      | 0                                    |
| wsrep_local_replays        | 0                                    |
| wsrep_local_send_queue     | 0                                    |
| wsrep_local_send_queue_avg | 0.000000                             |
| wsrep_local_recv_queue     | 0                                    |
| wsrep_local_recv_queue_avg | 0.000000                             |
| wsrep_flow_control_paused  | 0.000000                             |
| wsrep_flow_control_sent    | 0                                    |
| wsrep_flow_control_recv    | 0                                    |
| wsrep_cert_deps_distance   | 0.000000                             |
| wsrep_apply_oooe           | 0.000000                             |
| wsrep_apply_oool           | 0.000000                             |
| wsrep_apply_window         | 0.000000                             |
| wsrep_commit_oooe          | 0.000000                             |
| wsrep_commit_oool          | 0.000000                             |
| wsrep_commit_window        | 0.000000                             |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| wsrep_cert_index_size      | 0                                    |
| wsrep_causal_reads         | 0                                    |
| wsrep_incoming_addresses   | 192.168.70.3:3306,192.168.70.2:3306  |
| wsrep_cluster_conf_id      | 2                                    |
| wsrep_cluster_size         | 2                                    |
| wsrep_cluster_state_uuid   | 43ac4bea-1fa8-11e3-8496-070bd0e5c133 |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| wsrep_local_index          | 1                                    |
| wsrep_provider_name        | Galera                               |
| wsrep_provider_vendor      | Codership Oy <info@codership.com>    |
| wsrep_provider_version     | 2.6(r152)                            |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
We can see that there are now 2 nodes in the cluster!
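As a quick smoke test of replication (not part of the original walkthrough), you can write on one node and read on the other. This assumes the root user can still log in without a password, as is the default right after install; pxc_test is just a hypothetical scratch database:

```shell
# On node1: create a test table and insert a row.
[root@node1 ~]# mysql -e "CREATE DATABASE pxc_test;
    CREATE TABLE pxc_test.t (i INT PRIMARY KEY) ENGINE=InnoDB;
    INSERT INTO pxc_test.t VALUES (1);"

# On node2: the row should already be there, since Galera
# replicates synchronously on commit.
[root@node2 ~]# mysql -e "SELECT * FROM pxc_test.t"
```

Note that the table must be InnoDB with a primary key; Galera only replicates transactional tables and certifies rows by key.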
The network connection is established over the default Galera port of 4567:
[root@node1 ~]# netstat -ant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address               Foreign Address             State
tcp        0      0 0.0.0.0:3306                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:4567                0.0.0.0:*                   LISTEN
tcp        0      0 10.0.2.15:22                10.0.2.2:61129              ESTABLISHED
tcp        0    108 192.168.70.2:4567           192.168.70.3:50170          ESTABLISHED
tcp        0      0 :::22                       :::*                        LISTEN
Summary
In these steps we:
- Installed the PXC server package and its dependencies
- Did the bare-minimum configuration to get it started
- Bootstrapped the first node
- Prepared for SST
- Started the second node (its SST was streamed over netcat on port 4444)
- Confirmed both nodes were in the cluster
The setup can certainly be more involved than this, but this gives a simple illustration of what it takes to get things rolling.
The post Percona XtraDB Cluster: Setting up a simple cluster appeared first on MySQL Performance Blog.