Clustering: Implementing a Highly Available Director
2011-02-18 21:00
node1 (primary)  DIP: 192.168.1.4/24  node1.a.com
node2 (standby)  DIP: 192.168.1.5/24  node2.a.com
VIP: 192.168.1.1/24
1. The two nodes address each other by host name, so each must be able to resolve the other's name.
2. Heartbeat messages can travel over either of two kinds of link (example ha.cf directives follow this list):
Over Ethernet: bcast, ucast, or mcast
Over a serial (null-modem) cable
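For reference, each of these paths is selected with a directive in ha.cf. A minimal sketch, where the interface and addresses are placeholders for this topology and you would keep only the line(s) you need:
bcast eth0                      # broadcast heartbeats on eth0
ucast eth0 192.168.1.5          # unicast straight to the peer's DIP
mcast eth0 225.0.0.1 694 1 0    # multicast group 225.0.0.1, port 694, ttl 1, loop 0
serial /dev/ttyS0               # heartbeats over a null-modem serial cable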
Configure node1:
# vim /etc/hosts
Add: 192.168.1.5 node2.a.com node2
so the file contains both entries:
192.168.1.4 node1.a.com node1
192.168.1.5 node2.a.com node2
Verify: # ping node2
If the host name resolves, name resolution is working.
Download heartbeat and its companion packages:
ipvsadm
heartbeat-2.1.4-9.el5.i386.rpm
heartbeat-pils-2.1.4-10.el5.i386.rpm
heartbeat-stonith-2.1.4-10.el5.i386.rpm
libnet-1.1.4-3.el5.i386.rpm
perl-MailTools-1.77-1.el5.noarch.rpm
Note: install the packages with "yum --nogpgcheck -y localinstall <package files>".
--nogpgcheck skips GPG signature checking on the local packages; it is localinstall that resolves dependencies automatically, which is why it is used here.
Copy the downloaded packages over to the other Director host as well:
# scp *.rpm 192.168.1.5:/root
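Once the RPMs sit in one directory on a node, the whole set installs with a single command. Run this on both node1 and node2 (assumes you are in the directory holding the packages):
# yum --nogpgcheck -y localinstall *.rpm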
The installed packages ship with service scripts, but none does everything we need here, so create our own:
# vim ipvsd
Contents:
#!/bin/bash
#
# LVS script for VS/DR
#
. /etc/rc.d/init.d/functions
#
VIP=192.168.1.1
RIP1=192.168.10.221
RIP2=192.168.10.222
PORT=80
#
case "$1" in
start)
/sbin/ifconfig eth0:1 $VIP broadcast $VIP netmask 255.255.255.255 up
/sbin/route add -host $VIP dev eth0:1
# Since this is the Director we must be able to forward packets
echo 1 > /proc/sys/net/ipv4/ip_forward
# Clear all iptables rules.
/sbin/iptables -F
# Reset iptables counters.
/sbin/iptables -Z
# Clear all ipvsadm rules/services.
/sbin/ipvsadm -C
# Add an IP virtual service for $VIP port $PORT.
# This recipe uses the weighted least-connection (wlc)
# method, a weighted, dynamic scheduler suited to production.
/sbin/ipvsadm -A -t $VIP:$PORT -s wlc
# Now direct packets for this VIP to
# the real server IP (RIP) inside the cluster
/sbin/ipvsadm -a -t $VIP:$PORT -r $RIP1 -g -w 1
/sbin/ipvsadm -a -t $VIP:$PORT -r $RIP2 -g -w 2
/bin/touch /var/lock/subsys/ipvsadm &> /dev/null
;;
stop)
# Stop forwarding packets
echo 0 > /proc/sys/net/ipv4/ip_forward
# Reset ipvsadm
/sbin/ipvsadm -C
# Bring down the VIP interface
/sbin/ifconfig eth0:1 down
/sbin/route del $VIP
/bin/rm -f /var/lock/subsys/ipvsadm
echo "ipvs is stopped..."
;;
status)
if [ ! -e /var/lock/subsys/ipvsadm ]; then
echo "ipvsadm is stopped ..."
else
echo "ipvs is running ..."
ipvsadm -L -n
fi
;;
*)
echo "Usage: $0 {start|stop|status}"
;;
esac
Note: the two RIPs do not correspond to actual machines, because this exercise does not need working Real Servers. (A sketch of what real servers would need in DR mode follows, for reference only.)
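For completeness, a minimal sketch of what each Real Server would need in a genuine LVS-DR deployment: the VIP bound to a loopback alias, plus kernel settings so the real server never answers ARP for the VIP. These are standard DR-mode settings, not part of this walkthrough:
# On each real server (hypothetical; not needed for this demo):
ifconfig lo:0 192.168.1.1 broadcast 192.168.1.1 netmask 255.255.255.255 up
route add -host 192.168.1.1 dev lo:0
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce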
Make the script executable (chmod +x ipvsd), then test it:
# ./ipvsd start
# ipvsadm -ln
If ipvs entries appear, as in the sample below, the script works.
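The listing should look roughly like this (the version banner and connection counters may differ on your system):
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.1:80 wlc
  -> 192.168.10.221:80            Route   1      0          0
  -> 192.168.10.222:80            Route   2      0          0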
# ./ipvsd stop
# cp ipvsd /etc/ha.d/resource.d/
Copy the script into the same directory on node2:
# scp ipvsd 192.168.1.5:/etc/ha.d/resource.d/
# cd /etc/ha.d
# cp /usr/share/doc/heartbeat-2.1.4/ha.cf ./
# cp /usr/share/doc/heartbeat-2.1.4/authkeys ./
# cp /usr/share/doc/heartbeat-2.1.4/haresources ./
# vim ha.cf
Enable the keepalive, deadtime, warntime, and initdead directives (a combined example follows), and add:
node node1.a.com
node node2.a.com
bcast eth0    # name whichever interface carries your DIP
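Put together, the relevant ha.cf lines look like this; the timing values shown are the ones suggested in the sample file and are only a starting point, so tune them for your environment:
keepalive 2          # seconds between heartbeats
deadtime 30          # declare the peer dead after this many silent seconds
warntime 10          # log a late-heartbeat warning after this long
initdead 120         # extra grace period at boot (should be >= 2 * deadtime)
bcast eth0
node node1.a.com
node node2.a.com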
#dd if=/dev/urandom bs=512 count=1 | openssl md5
Copy the last line of that command's output (the MD5 digest).
# vim authkeys
Add:
auth 1
1 sha1 <paste the copied digest here>
# chmod 600 authkeys
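A finished authkeys is just two lines; the digest below is a made-up placeholder, so paste the one you generated (the file must be identical on both nodes):
auth 1
1 sha1 0a1b2c3d4e5f60718293a4b5c6d7e8f9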
# vim haresources
Add:
node1.a.com ipvsd    # the primary node's host name, followed by the resource script
# scp -rp ha.cf authkeys haresources 192.168.1.5:/etc/ha.d
# service heartbeat start
Start the heartbeat service on node2 as well:
# service heartbeat start
Back on node1:
# ifconfig
A new eth0:1 alias interface has appeared (sample output below), which confirms the setup works and node1 is the active Director.
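The alias should look roughly like this (the hardware address here is a placeholder):
eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:00:00:01
          inet addr:192.168.1.1  Bcast:192.168.1.1  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1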
# cd /usr/lib/heartbeat
Switch node1 to standby:
# ./hb_standby
Then on node2:
# ifconfig
The eth0:1 alias has moved here, which confirms failover works and node2 is now the active Director.
Back on node1:
# cd /usr/lib/heartbeat
Make node1 the primary again:
# ./hb_takeover
# ifconfig
The eth0:1 alias is back on node1, which confirms the takeover works and node1 is the active Director again.
Configuration complete!
### If you find any errors in this article, please point them out right away; much appreciated! ###
This article comes from the "E-guys" blog; please retain the source when reposting: http://eguys.blog.51cto.com/2517622/496156