
MySQL distributed cluster: master-slave replication and read/write splitting with the Amoeba middleware

2017-12-02 22:38
First, a note on the difference between Amoeba and MySQL Proxy for read/write splitting:

With MySQL Proxy (the 0.6.x releases current at the time), building read/write splitting over a larger set of read and write servers takes considerable work, because there is no ready-made Lua script for it. MySQL Proxy has no configuration file at all; Lua scripts are everything (and Lua itself is quite convenient), so a complex setup means writing a lot of scripting. Amoeba, by contrast, only needs the relevant configuration to meet the same requirement.



Assume the following scenario, with three database nodes named Master, Slave1, and Slave2:

Amoeba: Amoeba <192.168.14.129>

Master: Master <192.168.14.131> (read/write)

Slaves: Slave1 <192.168.14.132>, Slave2 <192.168.14.133> (two equivalent databases; read-only, load-balanced)

Replication between the master and the slaves still relies on MySQL's own replication mechanism; Amoeba does not provide replication.

1. Enable master-slave replication on the databases.

a. Edit the configuration files

master.cnf

cnf

server-id = 1   # identifies the master
# enable the general query log, used to verify read/write splitting

log = /home/mysql/mysql/log/mysql.log

slave1.cnf

cnf


server-id = 2

# enable the general query log, used to verify read/write splitting

log = /home/mysql/mysql/log/mysql.log

slave2.cnf

cnf


server-id = 3

# enable the general query log, used to verify read/write splitting

log = /home/mysql/mysql/log/mysql.log
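These snippets show only the lines the article adds. For replication to work, the master must also have binary logging enabled (the show master status output below implies it already is in this setup). As a quick, hedged sanity check on the master:

SQL

mysql> show variables like 'log_bin';
mysql> show variables like 'server_id';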

b. On the Master, create two accounts with only the REPLICATION SLAVE privilege (user repl_user, password copy), one allowed from Slave1 and one from Slave2.

SQL


mysql> grant replication slave on *.* to repl_user@192.168.14.132 identified by 'copy';

mysql> grant replication slave on *.* to repl_user@192.168.14.133 identified by 'copy';

c. Check the Master's binary log position

SQL


mysql> show master status;

+------------------+----------+--------------+------------------+

| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |

+------------------+----------+--------------+------------------+

| mysql-bin.000017 | 2009 | | |

+------------------+----------+--------------+------------------+

1 row in set (0.00 sec)

d. On Slave1 and Slave2, start replication from the Master.

Run the following on each slave:

SQL

mysql> stop slave;

Query OK, 0 rows affected (0.02 sec)

mysql> change master to

-> master_host='192.168.14.131',

-> master_user='repl_user',

-> master_password='copy',

-> master_log_file='mysql-bin.000017',

-> master_log_pos=2009;

Query OK, 0 rows affected (0.03 sec)

mysql> start slave;

Query OK, 0 rows affected (0.00 sec)
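Not shown in the original article, but worth doing after start slave: confirm that both replication threads are running on each slave. A minimal check:

SQL

mysql> show slave status\G
-- Slave_IO_Running and Slave_SQL_Running should both be Yes,
-- and Seconds_Behind_Master should be 0 or close to it.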

2. Configuring read/write splitting in Amoeba

a. Grant Amoeba access on Master, Slave1, and Slave2.

Run the following on Master, Slave1, and Slave2:

SQL


mysql> grant all on test.* to test_user@192.168.14.129 identified by '1234';

Amoeba uses the same account and password to access all three databases.
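To confirm the grant took effect on each node (a small check added here, not in the original article):

SQL

mysql> show grants for 'test_user'@'192.168.14.129';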

b. Edit Amoeba's configuration files

For a detailed description of the configuration files, see the official documentation: http://docs.hexnova.com/amoeba/rw-splitting.html

dbServers.xml

XML


<?xml version="1.0" encoding="gbk"?>

<!DOCTYPE amoeba:dbServers SYSTEM "dbserver.dtd">

<amoeba:dbServers xmlns:amoeba="http://amoeba.meidusa.com/">

<!--

Each dbServer needs to be configured into a Pool,

If you need to configure multiple dbServer with load balancing that can be simplified by the following configuration:

add attribute with name virtual = "true" in dbServer, but the configuration does not allow the element with name factoryConfig

such as 'multiPool' dbServer

-->

<!-- common part of the database connection configuration -->

<dbServer name="abstractServer" abstractive="true">

<factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">

<property name="manager">${defaultManager}</property>

<property name="sendBufferSize">64</property>

<property name="receiveBufferSize">128</property>

<!-- mysql port -->

<property name="port">3306</property>

<!-- mysql schema -->

<property name="schema">test</property>

<!-- mysql user -->

<property name="user">test_user</property>

<!-- mysql password -->

<property name="password">1234</property>

</factoryConfig>

<poolConfig class="com.meidusa.amoeba.net.poolable.PoolableObjectPool">

<property name="maxActive">500</property>

<property name="maxIdle">500</property>

<property name="minIdle">10</property>

<property name="minEvictableIdleTimeMillis">600000</property>

<property name="timeBetweenEvictionRunsMillis">600000</property>

<property name="testOnBorrow">true</property>

<property name="testWhileIdle">true</property>

</poolConfig>

</dbServer>

<!-- per-server part for Master, Slave1 and Slave2: only the IP address differs -->

<dbServer name="master" parent="abstractServer">

<factoryConfig>

<!-- mysql ip -->

<property name="ipAddress">192.168.14.131</property>

</factoryConfig>

</dbServer>

<dbServer name="slave1" parent="abstractServer">

<factoryConfig>

<!-- mysql ip -->

<property name="ipAddress">192.168.14.132</property>

</factoryConfig>

</dbServer>

<dbServer name="slave2" parent="abstractServer">

<factoryConfig>

<!-- mysql ip -->

<property name="ipAddress">192.168.14.133</property>

</factoryConfig>

</dbServer>

<!-- database pool (virtual server) that load-balances reads -->

<dbServer name="slaves" virtual="true">

<poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">

<!-- Load balancing strategy: 1=ROUNDROBIN , 2=WEIGHTBASED , 3=HA-->

<property name="loadbalance">1</property>

<!-- Separated by commas,such as: server1,server2,server1 -->

<property name="poolNames">slave1,slave2</property>

</poolConfig>

</dbServer>

</amoeba:dbServers>

amoeba.xml

XML


<?xml version="1.0" encoding="gbk"?>

<!DOCTYPE amoeba:configuration SYSTEM "amoeba.dtd">

<amoeba:configuration xmlns:amoeba="http://amoeba.meidusa.com/">

<proxy>

<!-- service class must implements com.meidusa.amoeba.service.Service -->

<service name="Amoeba for Mysql" class="com.meidusa.amoeba.net.ServerableConnectionManager">

<!-- Amoeba listening port -->

<property name="port">8066</property>

<!-- bind ipAddress -->

<!--

<property name="ipAddress">127.0.0.1</property>

-->

<property name="manager">${clientConnectioneManager}</property>

<property name="connectionFactory">

<bean class="com.meidusa.amoeba.mysql.net.MysqlClientConnectionFactory">

<property name="sendBufferSize">128</property>

<property name="receiveBufferSize">64</property>

</bean>

</property>

<property name="authenticator">

<bean class="com.meidusa.amoeba.mysql.server.MysqlClientAuthenticator">

<!-- account and password clients use to connect to Amoeba -->

<property name="user">root</property>

<property name="password">root</property>

<property name="filter">

<bean class="com.meidusa.amoeba.server.IPAccessController">

<property name="ipFile">${amoeba.home}/conf/access_list.conf</property>

</bean>

</property>

</bean>

</property>

</service>

<!-- server class must implements com.meidusa.amoeba.service.Service -->

<service name="Amoeba Monitor Server" class="com.meidusa.amoeba.monitor.MonitorServer">

<!-- port -->

<!-- default value: random number

<property name="port">9066</property>

-->

<!-- bind ipAddress -->

<property name="ipAddress">127.0.0.1</property>

<property name="daemon">true</property>

<property name="manager">${clientConnectioneManager}</property>

<property name="connectionFactory">

<bean class="com.meidusa.amoeba.monitor.net.MonitorClientConnectionFactory"></bean>

</property>

</service>

<runtime class="com.meidusa.amoeba.mysql.context.MysqlRuntimeContext">

<!-- proxy server net IO Read thread size -->

<property name="readThreadPoolSize">20</property>

<!-- proxy server client process thread size -->

<property name="clientSideThreadPoolSize">30</property>

<!-- mysql server data packet process thread size -->

<property name="serverSideThreadPoolSize">30</property>

<!-- per connection cache prepared statement size -->

<property name="statementCacheSize">500</property>

<!-- query timeout( default: 60 second , TimeUnit:second) -->

<property name="queryTimeout">60</property>

</runtime>

</proxy>

<!--

Each ConnectionManager will start as thread

manager responsible for the Connection IO read , Death Detection

-->

<connectionManagerList>

<connectionManager name="clientConnectioneManager" class="com.meidusa.amoeba.net.MultiConnectionManagerWrapper">

<property name="subManagerClassName">com.meidusa.amoeba.net.ConnectionManager</property>

<!--

default value is avaliable Processors

<property name="processors">5</property>

-->

</connectionManager>

<connectionManager name="defaultManager" class="com.meidusa.amoeba.net.MultiConnectionManagerWrapper">

<property name="subManagerClassName">com.meidusa.amoeba.net.AuthingableConnectionManager</property>

<!--

default value is avaliable Processors

<property name="processors">5</property>

-->

</connectionManager>

</connectionManagerList>

<!-- default using file loader -->

<dbServerLoader class="com.meidusa.amoeba.context.DBServerConfigFileLoader">

<property name="configFile">${amoeba.home}/conf/dbServers.xml</property>

</dbServerLoader>

<queryRouter class="com.meidusa.amoeba.mysql.parser.MysqlQueryRouter">

<property name="ruleLoader">

<bean class="com.meidusa.amoeba.route.TableRuleFileLoader">

<property name="ruleFile">${amoeba.home}/conf/rule.xml</property>

<property name="functionFile">${amoeba.home}/conf/ruleFunctionMap.xml</property>

</bean>

</property>

<property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>

<property name="LRUMapSize">1500</property>

<!-- default pool: the master -->

<property name="defaultPool">master</property>

<!-- write pool -->

<property name="writePool">master</property>

<!-- read pool: the virtual server / pool defined in dbServers.xml -->

<property name="readPool">slaves</property>

<property name="needParse">true</property>

</queryRouter>

</amoeba:configuration>

rule.xml

XML


<?xml version="1.0" encoding="gbk"?>

<!DOCTYPE amoeba:rule SYSTEM "rule.dtd">

<amoeba:rule xmlns:amoeba="http://amoeba.meidusa.com/">

<tableRule name="message" schema="test" defaultPools="server1">

</tableRule>

</amoeba:rule>

If you do not need sharding, no real rules are required, but the file must still contain a tableRule element, otherwise Amoeba reports an error; an empty placeholder rule like the one above is enough.

3. Testing read/write splitting

a. On Master, Slave1, and Slave2, watch the log file mysql.log:

Shell


tail -f ./log/mysql.log

b. Start Amoeba and connect to it with MySQL GUI Tools (the original article showed the test statements in screenshots, which are not reproduced here).

Run a few INSERT and SELECT statements through the Amoeba connection (see the sketch below), then check the log output on each node.
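As a sketch, statements like the following, run against Amoeba (port 8066, user root/root as configured above, default schema test), produce the activity seen in the logs below; the exact statements from the original screenshots are not known, and the client invocation in the comment is an assumption:

SQL

-- assumed connection: mysql -h 192.168.14.129 -P 8066 -u root -proot test
mysql> insert into t_message values(1, 'c1');
mysql> insert into t_user values(8, 'n8', 'p8');
mysql> select * from t_user;
mysql> select * from t_message;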

Master mysql.log

Shell


[mysql@prx1 mysql]$ tail -f log/mysql.log

370 Query SET SESSION sql_mode=''

370 Query SET NAMES utf8

370 Query SHOW FULL TABLES

370 Query SHOW COLUMNS FROM `t_message`

370 Query SHOW COLUMNS FROM `t_user`

370 Query SHOW PROCEDURE STATUS

370 Query SHOW FUNCTION STATUS

110813 15:21:11 370 Query SHOW VARIABLES LIKE 'character_set_server'

370 Query SHOW FULL COLUMNS FROM `test`.`t_message`

110813 15:21:12 370 Query SHOW CREATE TABLE `test`.`t_message`

110813 15:22:40 374 Connect test_user@192.168.14.129 on test

375 Connect test_user@192.168.14.129 on test

376 Connect test_user@192.168.14.129 on test

110813 15:23:40 370 Query insert into t_message values(1, 'c1')

110813 15:24:07 377 Connect test_user@192.168.14.129 on test

378 Connect test_user@192.168.14.129 on test

379 Connect test_user@192.168.14.129 on test

110813 15:24:15 370 Query insert into t_user values(8, 'n8', 'p8')

110813 15:24:24 370 Query SHOW FULL COLUMNS FROM `test`.`t_user`

370 Query SHOW CREATE TABLE `test`.`t_user`

110813 15:24:35 370 Query SHOW FULL COLUMNS FROM `test`.`t_message`

370 Query SHOW CREATE TABLE `test`.`t_message`

Slave1 mysql.log

Shell


[mysql@prx2 mysql]$ tail -f log/mysql.log

317 Connect test_user@192.168.14.129 on test

318 Connect test_user@192.168.14.129 on test

110813 15:35:30 315 Query SELECT @@sql_mode

110813 15:35:32 315 Query SELECT @@sql_mode

110813 15:35:44 315 Query SELECT @@SQL_MODE

110813 15:35:46 315 Query SELECT @@SQL_MODE

110813 15:37:18 319 Connect test_user@192.168.14.129 on test

320 Connect test_user@192.168.14.129 on test

110813 15:37:19 321 Connect test_user@192.168.14.129 on test

110813 15:37:26 246 Quit

110813 15:38:21 315 Query SELECT @@SQL_MODE

110813 15:38:22 42 Query BEGIN

42 Query insert into t_message values(1, 'c1')

42 Query COMMIT /* implicit, from Xid_log_event */

110813 15:38:50 322 Connect test_user@192.168.14.129 on test

323 Connect test_user@192.168.14.129 on test

324 Connect test_user@192.168.14.129 on test

110813 15:38:58 42 Query BEGIN

42 Query insert into t_user values(8, 'n8', 'p8')

42 Query COMMIT /* implicit, from Xid_log_event */

110813 15:39:08 315 Query SELECT @@SQL_MODE

110813 15:39:19 315 Query SELECT @@SQL_MODE

110813 15:44:08 325 Connect test_user@192.168.14.129 on test

326 Connect test_user@192.168.14.129 on test

327 Connect test_user@192.168.14.129 on test

Slave2 mysql.log

Shell


[mysql@prx3 mysql]$ tail -f log/mysql.log

110813 15:35:08 305 Connect test_user@192.168.14.129 on test

306 Connect test_user@192.168.14.129 on test

307 Connect test_user@192.168.14.129 on test

110813 15:35:31 304 Query SELECT @@sql_mode

304 Query SELECT @@sql_mode

110813 15:35:44 304 Query SELECT @@SQL_MODE

110813 15:35:46 304 Query SELECT * FROM t_message t

110813 15:37:18 308 Connect test_user@192.168.14.129 on test

309 Connect test_user@192.168.14.129 on test

310 Connect test_user@192.168.14.129 on test

110813 15:38:21 8 Query BEGIN

8 Query insert into t_message values(1, 'c1')

8 Query COMMIT /* implicit, from Xid_log_event */

110813 15:38:50 311 Connect test_user@192.168.14.129 on test

312 Connect test_user@192.168.14.129 on test

313 Connect test_user@192.168.14.129 on test

110813 15:38:58 304 Query SELECT @@SQL_MODE

8 Query BEGIN

8 Query insert into t_user values(8, 'n8', 'p8')

8 Query COMMIT /* implicit, from Xid_log_event */

110813 15:39:08 304 Query select * from t_user

110813 15:39:19 304 Query select * from t_message

110813 15:44:08 314 Connect test_user@192.168.14.129 on test

315 Connect test_user@192.168.14.129 on test

316 Connect test_user@192.168.14.129 on test

From the logs we can see:

On the Master, only the INSERT statements from the test were executed.

On Slave1, the INSERTs appear only as replicated statements, the result of master-slave replication.

On Slave2, the INSERTs arrived via replication, and the SELECT statements were also executed there.

This shows that read/write splitting is working.
