
MySQL Cluster Configuration

2006-09-11 10:48
1. Introduction
===============
This document describes how to install and configure a MySQL cluster based on two servers, so that MySQL keeps running even if either server fails or goes down.

Note!
Although this is a two-server MySQL cluster, an additional third server is required as the management node. It can be shut down after the cluster has started, though shutting it down is not recommended. In theory you can build a MySQL cluster with only two servers, but with such an architecture the cluster stops working as soon as one server goes down, which defeats the purpose of clustering. For this reason a third server is needed to run the management node.

If you do not have three physical servers available, consider experimenting in VMware or another virtual machine environment.

The three servers are assumed to be as follows:

Server1: mysql1.vmtest.net 192.168.0.1
Server2: mysql2.vmtest.net 192.168.0.2
Server3: mysql3.vmtest.net 192.168.0.3

Server1 and Server2 are the servers that actually make up the MySQL cluster. The requirements for Server3, the management node, are much lower: it needs only minor system changes and no MySQL installation, so a low-end machine can be used, and it can run other services at the same time.

2. Installing MySQL on Server1 and Server2
==========================================
Download mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz from http://www.mysql.com.
Note: it must be the Max version of MySQL; the Standard version does not support clustering!

Perform the following steps on both Server1 and Server2:
# mv mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz /usr/local/
# cd /usr/local/
# groupadd mysql
# useradd -g mysql mysql
# tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# rm -f mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# mv mysql-max-4.1.9-pc-linux-gnu-i686 mysql
# cd mysql
# scripts/mysql_install_db --user=mysql
# chown -R root .
# chown -R mysql data
# chgrp -R mysql .
# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
# chmod +x /etc/rc.d/init.d/mysqld
# chkconfig --add mysqld

Do not start MySQL yet!

3. Installing and configuring the management node (Server3)
============================================================
As the management node, Server3 needs only two binaries: ndb_mgm and ndb_mgmd.

Download mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz from http://www.mysql.com.

# mkdir /usr/src/mysql-mgm
# cd /usr/src/mysql-mgm
# tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# cd mysql-max-4.1.9-pc-linux-gnu-i686
# mv bin/ndb_mgm .
# mv bin/ndb_mgmd .
# chmod +x ndb_mg*
# mv ndb_mg* /usr/bin/
# cd
# rm -rf /usr/src/mysql-mgm

Now create the configuration file for the management node:

# mkdir /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
# vi config.ini

Add the following to config.ini:

[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Server
[NDB_MGMD]
HostName=192.168.0.3 # IP address of the management node (Server3)
# Storage Engines
[NDBD]
HostName=192.168.0.1 # IP address of cluster server Server1
DataDir=/var/lib/mysql-cluster
[NDBD]
HostName=192.168.0.2 # IP address of cluster server Server2
DataDir=/var/lib/mysql-cluster
# The two [MYSQLD] sections below may list the hostnames of Server1 and Server2,
# but leaving them empty makes it quicker to swap servers in and out of the
# cluster; otherwise this file must be updated whenever a server is replaced.
[MYSQLD]
[MYSQLD]

Save and exit, then start the management node on Server3:
# ndb_mgmd

Note that this starts only the management node daemon, not the management console, so you will not see any output after it starts.
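A quick way to confirm that the daemon is running (a minimal sketch; the grep command itself will also appear in the output):

# ps aux | grep ndb_mgmd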

4. Configuring the cluster servers and starting MySQL
=====================================================
Make the following changes on both Server1 and Server2:

# vi /etc/my.cnf

[mysqld]
ndbcluster
ndb-connectstring=192.168.0.3 # IP address of Server3
[mysql_cluster]
ndb-connectstring=192.168.0.3 # IP address of Server3

Save and exit, then create the data directory and start MySQL:

# mkdir /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
# /usr/local/mysql/bin/ndbd --initial
# /etc/rc.d/init.d/mysqld start

You can add /usr/local/mysql/bin/ndbd to /etc/rc.local so that ndbd starts at boot, as sketched below.
Note: the --initial flag is needed only the first time you start ndbd, or after config.ini on Server3 has been changed!
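On a distribution that executes /etc/rc.local at boot, appending a single line is enough (a minimal sketch; adjust the path to match your installation):

# echo "/usr/local/mysql/bin/ndbd" >> /etc/rc.local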

5. Checking that everything works
=================================
Return to the management node (Server3) and start the management console:

# /usr/bin/ndb_mgm
Type the show command to check the current status (a sample output follows):

[root@mysql3 root]# /usr/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.0.1 (Version: 4.1.9, Nodegroup: 0, Master)
id=3 @192.168.0.2 (Version: 4.1.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.3 (Version: 4.1.9)

[mysqld(API)] 2 node(s)
id=4 (Version: 4.1.9)
id=5 (Version: 4.1.9)

ndb_mgm>

If everything above looks fine, it is time to test MySQL.
Note that this document never sets a MySQL root password; you are advised to set one yourself on both Server1 and Server2, as sketched below.
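A minimal sketch using the mysqladmin tool shipped with the installation above (run it on both servers and choose your own password):

# /usr/local/mysql/bin/mysqladmin -u root password 'your-password'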

On Server1:

# /usr/local/mysql/bin/mysql -u root -p
> use test;
> CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
> INSERT INTO ctest () VALUES (1);
> SELECT * FROM ctest;

You should see 1 row returned (with the value 1).

If that works, switch to Server2, repeat the same test, and check the result. If it succeeds, run an INSERT on Server2 and switch back to Server1 to check that everything still works; a sketch of this cross-check follows.
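For example, the cross-check could look like this (the value 2 is just an arbitrary second test row):

On Server2:
> SELECT * FROM ctest;
> INSERT INTO ctest () VALUES (2);

Back on Server1:
> SELECT * FROM ctest;

Both rows (1 and 2) should come back on both servers.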
If there are no problems, congratulations!

6. Destructive testing
======================
Unplug the network cable of Server1 or Server2 and check whether the other cluster server keeps working (you can test with a SELECT query). When the test is finished, just plug the cable back in.

If you have no physical access to the servers, that is, you cannot unplug a cable, you can test like this instead.
On Server1 or Server2:

# ps aux | grep ndbd
You will see all ndbd processes:

root 5578 0.0 0.3 6220 1964 ? S 03:14 0:00 ndbd
root 5579 0.0 20.4 492072 102828 ? R 03:14 0:04 ndbd
root 23532 0.0 0.1 3680 684 pts/1 S 07:59 0:00 grep ndbd

Then kill the ndbd processes to "break" that cluster server:

# kill -9 5578 5579

Then run a SELECT query on the other cluster server to test it. Running the show command in the management console on the management node will also show the status of the "broken" server.
When the test is done, simply restart the ndbd process on the broken server:

# ndbd
Note! As mentioned above, the --initial flag is NOT needed this time!

That completes the MySQL cluster configuration!

================================================================================

MySQL Cluster: Two web server setup (three servers required for true redundancy)
HOWTO set up a MySQL cluster for two servers
Introduction

This HOWTO was designed for a classic setup of two servers behind a load balancer. The aim is true redundancy: either server can be unplugged and yet the site will remain up.
Notes

You MUST have a third server as a management node, but this can be shut down after the cluster starts. Also note that I do not recommend shutting down the management server (see the extra notes at the bottom of this document for more information). You cannot run a MySQL cluster with just two servers AND have true redundancy.

Although it is possible to set the cluster up on two physical servers you WILL NOT GET the ability to "kill" one server and have the cluster continue as normal. For this you need a third server running the management node.

I am going to talk about three servers:

mysql1.domain.com 192.168.0.1
mysql2.domain.com 192.168.0.2
mysql3.domain.com 192.168.0.3

Servers 1 and 2 will be the two that end up "clustered". This is perfect for two servers behind a load balancer or using round-robin DNS, and is a good replacement for replication. Server 3 needs only minor changes made to it and does NOT require a MySQL install. It can be a low-end machine and can be carrying out other tasks.

STAGE 1: Install MySQL on the first two servers:

Complete the following steps on both mysql1 and mysql2:

cd /usr/local/
wget http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz/from/http://www.signal42.com/mirrors/mysql/
groupadd mysql
useradd -g mysql mysql
tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
ln -s mysql-max-4.1.9-pc-linux-gnu-i686 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root .
chown -R mysql data
chgrp -R mysql .
cp support-files/mysql.server /etc/rc.d/init.d/
chmod +x /etc/rc.d/init.d/mysql.server
chkconfig --add mysql.server

Do not start mysql yet.
STAGE 2: Install and configure the management server

You need the following files from the bin/ directory of the MySQL tarball: ndb_mgm and ndb_mgmd. Download the whole mysql-max tarball and extract them from bin/.

mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz/from/http://www.signal42.com/mirrors/mysql/
tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
cd mysql-max-4.1.9-pc-linux-gnu-i686
mv bin/ndb_mgm .
mv bin/ndb_mgmd .
chmod +x ndb_mg*
mv ndb_mg* /usr/bin/
cd
rm -rf /usr/src/mysql-mgm

You now need to set up the config file for this management node:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi [or emacs or any other editor] config.ini

Now, insert the following (changing the bits as indicated):

[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Server
[NDB_MGMD]
HostName=192.168.0.3 # the IP of THIS SERVER
# Storage Engines
[NDBD]
HostName=192.168.0.1 # the IP of the FIRST SERVER
DataDir=/var/lib/mysql-cluster
[NDBD]
HostName=192.168.0.2 # the IP of the SECOND SERVER
DataDir=/var/lib/mysql-cluster
# 2 MySQL Clients
# I personally leave this blank to allow rapid changes of the MySQL clients;
# you can enter the hostnames of the above two servers here, but I suggest you don't.
[MYSQLD]
[MYSQLD]

Now, start the management server:

ndb_mgmd

This is the MySQL management server, not the management console. You should therefore not expect any output (we will start the console later).
STAGE 3: Configure the storage/SQL servers and start mysql

On each of the two storage/SQL servers (192.168.0.1 and 192.168.0.2) enter the following (changing the bits as appropriate):

vi /etc/my.cnf

Press i to enter insert mode and add the following on both servers (changing the IP address to the IP of the management server that you set up in stage 2):

[mysqld]
ndbcluster
ndb-connectstring=192.168.0.3 # the IP of the MANAGEMENT (THIRD) SERVER
[mysql_cluster]
ndb-connectstring=192.168.0.3 # the IP of the MANAGEMENT (THIRD) SERVER

Now, we make the data directory and start the storage engine:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
/usr/local/mysql/bin/ndbd --initial
/etc/rc.d/init.d/mysql.server start

If you have done one server, now go back to the start of stage 3 and repeat exactly the same procedure on the second server.
NOTE that you should ONLY use --initial if you are either starting from scratch or have changed the config.ini file on the management server.
STAGE 4: Check it's working

You can now return to the management server (mysql3) and enter the management console:

/usr/bin/ndb_mgm

Enter the command SHOW to see what is going on. A sample output looks like this:

[root@mysql3 mysql-cluster]# /usr/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.0.1 (Version: 4.1.9, Nodegroup: 0, Master)
id=3 @192.168.0.2 (Version: 4.1.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.3 (Version: 4.1.9)

[mysqld(API)] 2 node(s)
id=4 (Version: 4.1.9)
id=5 (Version: 4.1.9)

ndb_mgm>

If you see

not connected, accepting connect from 192.168.0.[1/2/3]

in the first or last two lines then you have a problem. Please email me with as much detail as you can give and I can try to find out where you have gone wrong and change this HOWTO to fix it.

If you are OK to here, it is time to test MySQL. On either server mysql1 or mysql2 enter the following commands (note that we have no root password yet):

mysql
use test;
CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
INSERT INTO ctest () VALUES (1);
SELECT * FROM ctest;

You should see 1 row returned (with the value 1).

If this works, now go to the other server, run the same SELECT, and see what you get. Insert from that host, go back to host 1, and see if it works. If it does, congratulations!

The final test is to kill one server to see what happens. If you have physical access to the machine, simply unplug its network cable and see if the other server keeps going fine (try the SELECT query). If you don't have physical access, do the following:

ps aux | grep ndbd

You get an output like this:

root 5578 0.0 0.3 6220 1964 ? S 03:14 0:00 ndbd
root 5579 0.0 20.4 492072 102828 ? R 03:14 0:04 ndbd
root 23532 0.0 0.1 3680 684 pts/1 S 07:59 0:00 grep ndbd

In this case ignore the "grep ndbd" command (the last line) but kill the first two processes by issuing kill -9 pid pid:

kill -9 5578 5579

Then try the SELECT on the other server. While you are at it, run a SHOW command on the management node to see that the server has died (the dead node will be listed as not connected). To restart it, just issue

ndbd

NOTE: no --initial!
Further notes about setup

I strongly recommend that you read all of this (and bookmark this page). It will almost certainly save you a lot of searching.
The Management Server

I strongly recommend that you do not stop the management server once it has started. This is for several reasons:
- The server takes hardly any resources.
- If a cluster falls over, you want to be able to just ssh in and type ndbd to start it. You don't want to have to start messing around with another server.
- If you want to take backups then you need the management server up.
- The cluster log is sent to the management server, so it is an important tool for checking what is going on in the cluster or what has happened since the last check.
- All commands from the ndb_mgm client are sent to the management server, so there are no management commands without it.
- The management server is required in case of cluster reconfiguration (a crashed server or a network split). If it is not running, a "split-brain" scenario will occur. The management server's arbitration role is required in this type of setup to provide better fault tolerance.

However you are welcome to stop the server if you prefer.
Starting and stopping ndbd automatically on boot

To achieve this, do the following on both mysql1 and mysql2:

echo "ndbd" > /etc/rc.d/init.d/ndbd
chmod +x /etc/rc.d/init.d/ndbd
chkconfig --add ndbd

Note that this is a really quick script. You really ought to write one that at least checks whether ndbd is already started on the machine; a sketch of such a script follows.
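A slightly safer sketch (assuming ndbd is on root's PATH and pidof is available, as on typical Red Hat-style systems):

#!/bin/sh
# /etc/rc.d/init.d/ndbd -- start the NDB storage daemon
# only if it is not already running
if pidof ndbd > /dev/null 2>&1; then
    echo "ndbd is already running"
else
    ndbd
fi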
Use of hostnames

You will note that I have used IP addresses exclusively throughout this setup. This is because using hostnames simply increases the number of things that can go wrong. Mikael Ronström of MySQL AB kindly explains: "Hostnames certainly work with MySQL Cluster. But using hostnames introduces quite a few error sources since a proper DNS lookup system must be set up, sometimes /etc/hosts must be edited and there might be security blocks ensuring that communication between certain machines is not possible other than on certain ports". I strongly suggest that while testing you use IP addresses if you can, then once it is all working change to hostnames.
RAM

Use the following formula to work out the amount of RAM that you need on each storage node:

(Size of database * NoOfReplicas * 1.1) / Number of storage nodes

NoOfReplicas is set to two by default; you can change it in config.ini if you want. So, for example, to run a 4GB database across two storage nodes you need (4GB * 2 * 1.1) / 2 = 4.4GB of RAM on each storage node, just under 9GB in total. For the SQL nodes and management nodes you don't need much RAM at all.

Note: A lot of people have emailed me querying the maths above! Remember that the cluster is fault tolerant, and each piece of data is stored on at least two nodes (two by default, as set by NoOfReplicas). So you need TWICE the space you would need for just one copy, multiplied by 1.1 for overhead.
Adding storage nodes

If you decide to add storage nodes, bear in mind that three is not an optimal number. If you are going to move up from the two above, move to four; a sketch of the resulting [NDBD] sections follows.
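With four storage nodes the [NDBD] part of config.ini would look something like this (the last two IP addresses are hypothetical examples):

[NDBD]
HostName=192.168.0.1
DataDir=/var/lib/mysql-cluster
[NDBD]
HostName=192.168.0.2
DataDir=/var/lib/mysql-cluster
[NDBD]
HostName=192.168.0.4
DataDir=/var/lib/mysql-cluster
[NDBD]
HostName=192.168.0.5
DataDir=/var/lib/mysql-cluster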
Adding SQL nodes

If you want to add another SQL node (i.e. you have another server that you want to add to the cluster but you don't need it to act as a storage node), just add the following to /etc/my.cnf on that server (it must be a mysql-max install):

[mysqld]
ndbcluster
ndb-connectstring=192.168.0.3 # the IP of the MANAGEMENT (THIRD) SERVER
[mysql_cluster]
ndb-connectstring=192.168.0.3 # the IP of the MANAGEMENT (THIRD) SERVER

Then you need to make sure that there is another [MYSQLD] line at the end of config.ini on the management server, as sketched below. Restart the cluster (see below for an important note) and restart mysql on the new API node. It should then connect.
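For example, with one extra SQL node the end of config.ini would read (a sketch):

[MYSQLD]
[MYSQLD]
# extra slot for the new SQL node:
[MYSQLD]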
Important note on changing config.ini

If you ever change config.ini you must stop the whole cluster and restart it so that the config file is re-read. Stop the cluster by issuing the SHUTDOWN command in the ndb_mgm console on the management server and then restart all the storage nodes, as sketched below.
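The sequence looks like this (the first two steps on the management server, the last on each storage node; --initial is needed here because config.ini has changed):

ndb_mgm
ndb_mgm> SHUTDOWN
ndb_mgm> QUIT
ndb_mgmd

Then, on each storage node:

ndbd --initial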

Some useful configuration options that you will need if you have large tables (see the sketch after this list):

DataMemory: defines the space available to store the actual records in the database. The entire DataMemory is allocated in memory, so it is important that the machine contains enough memory to hold it. Note that DataMemory is also used to store ordered indexes, which use about 10 bytes per record. Default: 80MB
IndexMemory: controls the amount of storage used for hash indexes in MySQL Cluster. Hash indexes are always used for primary key indexes, unique indexes, and unique constraints. Default: 18MB
MaxNoOfAttributes: defines the number of attributes that can be defined in the cluster. Default: 1000
MaxNoOfTables: obvious (bear in mind that each BLOB field creates another table for various reasons, so take this into account). Default: 128
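These all live in the [NDBD DEFAULT] section of config.ini; for instance (the values here are hypothetical and must be sized for your own data):

[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=512M
IndexMemory=128M
MaxNoOfAttributes=5000
MaxNoOfTables=512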

See the MySQL Cluster documentation at mysql.com for further information about the things you can put in the [NDBD] section of config.ini.
A note about security

MySQL Cluster is not secure. By default anyone can connect to your management server and shut the whole thing down. I suggest the following precautions:
- Install APF and block all ports except those you use (do NOT include any MySQL Cluster ports). Add the IPs of your cluster machines to the /etc/apf/allow_hosts file, as sketched after this list.
- Run MySQL Cluster over a second network card on a second, isolated, network.
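Allowing the cluster machines in APF could look like this (a sketch, assuming APF is installed with its default layout):

echo "192.168.0.1" >> /etc/apf/allow_hosts
echo "192.168.0.2" >> /etc/apf/allow_hosts
echo "192.168.0.3" >> /etc/apf/allow_hosts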
Other resources

I found the following resources very useful:
- The MySQL Cluster documentation. This is gradually being reworked. I would, however, suggest that you at least read these pages:
  - On-line Backup of MySQL Cluster
  - Defining MySQL Cluster Storage Nodes, for information that you will need to allow for bigger database memory or a larger number of tables, indexes, and unique indexes
- The MySQL Cluster mailing list
- Google
- MySQL Forums
- The #mysql IRC channel on freenode and EFNet. If you need a free (open source) IRC client I recommend Bersirc.
Thanks

I must thank several others who have contributed to this: Mikael Ronström from MySQL AB for helping me to get this to work and spotting my silly mistake right at the end; Lewis Bergman for proof-reading this page and pointing out some improvements, as well as suffering the frustration with me; and Martin Pala for explaining the final reason to keep the management server up, as well as a few other minor changes. Thanks also to Terry from Advanced Network Hosts, who paid me to set a cluster up and at the same time produce a HOWTO.