CentOS 6.x RHCS Cluster Configuration
2016-06-04 21:54
RHCS: Red Hat Cluster Suite
Prerequisites:
Time synchronization, hosts name resolution, and passwordless SSH trust between all nodes. A cluster of 3 or more nodes is recommended; a 2-node cluster requires a quorum disk.
Environment:
node1 (management node): 10.11.8.187
node2: 10.11.8.186
node3: 10.11.8.200
Installation:
yum install luci rgmanager
PS: Installing rgmanager alone pulls in the HA cluster packages as dependencies. Alternatively, skip this step and let luci install the packages on all nodes in one batch.
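The per-node installation can be sketched as a loop run from node1 (a sketch only: the node names come from the environment section above, and passwordless root SSH between nodes is assumed; the commands are echoed for review rather than executed):

```shell
# Sketch: batch-install the cluster packages on every node from node1.
# Echo the ssh commands first so they can be reviewed; drop the echo to run them.
NODES="node1 node2 node3"
for n in $NODES; do
  echo "ssh root@$n yum -y install ricci rgmanager"
done
```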
luci configuration:
1. Edit /etc/sysconfig/luci on node1 and set the service port to an unprivileged port above 1024:
[root@node1 ~]# vim /etc/sysconfig/luci
port = 8084
2. Set a password for the ricci user on all nodes
[root@node1 ~]# passwd ricci
Changing password for user ricci.
New password:
BAD PASSWORD: it is too short
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
3. Start the ricci service on all nodes
service ricci start
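To run this on every node in one pass, a loop like the one below can be used (a sketch: the commands are echoed for review instead of executed, and enabling the service at boot with chkconfig is an addition not shown above):

```shell
# Sketch: start ricci on every node and enable it at boot (chkconfig is the
# SysV service tool on CentOS 6). Remove the echo to run the commands via SSH.
NODES="node1 node2 node3"
for n in $NODES; do
  echo "ssh root@$n 'service ricci start && chkconfig ricci on'"
done
```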
4. Start the luci service on node1:
[root@node1 ~]# service luci start
Start luci...                                              [  OK  ]
Point your web browser to https://node1:8084 (or equivalent) to access luci
Cluster configuration:
Open https://node1:8084 in a browser and log in with the root credentials of the node running luci. The interface after login is shown in the figure:
Under Manage Clusters, select Create to create a new cluster:
PS: The password here is the ricci user's password, and TCP port 11111 is the port the ricci service listens on.
Resource and service configuration:
This example uses a web service. The httpd installation is not shown; each of the three nodes serves a different test page so failover can be observed.
1. Create the resources
PS: Unlike corosync, RHCS resources cannot be started until they have been added to a service.
2. Create the service
ERROR
After starting the service once from the command line it returned to normal. The cause is unknown; troubleshooting failed, so this round goes to the bug.
The service came up anyway.
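The configuration that luci builds here is written to /etc/cluster/cluster.conf on every node. For the web example, the resource-manager section looks roughly like the fragment below (the floating IP address and the use of a script resource wrapping the httpd init script are illustrative assumptions, not values captured from this setup):

```xml
<rm>
  <resources>
    <ip address="10.11.8.100" monitor_link="on"/>
    <script file="/etc/init.d/httpd" name="httpd"/>
  </resources>
  <service autostart="1" name="web_service" recovery="relocate">
    <ip ref="10.11.8.100"/>
    <script ref="httpd"/>
  </service>
</rm>
```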
The clusvcadm command-line management tool:
clusvcadm: invalid option -- '-'
usage: clusvcadm [command]

Resource Group Control Commands:
  -v                     Display version and exit
  -d <group>             Disable <group>. This stops a group until an administrator
                         enables it again, the cluster loses and regains quorum, or an
                         administrator-defined event script explicitly enables it again.
  -e <group>             Enable <group>
  -e <group> -F          Enable <group> according to failover domain rules (deprecated;
                         always the case when using central processing)
  -e <group> -m <member> Enable <group> on <member>
  -r <group> -m <member> Relocate <group> [to <member>]. Stops a group and starts it on
                         another cluster member.
  -M <group> -m <member> Migrate <group> to <member> (e.g. for live migration of VMs)
  -q                     Quiet operation
  -R <group>             Restart a group in place.
  -s <group>             Stop <group>. This temporarily stops a group. After the next
                         group or cluster member transition, the group will be restarted
                         (if possible).
  -Z <group>             Freeze resource group. This prevents transitions and status
                         checks, and is useful if an administrator needs to administer
                         part of a service without stopping the whole service.
  -U <group>             Unfreeze (thaw) resource group. Restores a group to normal
                         operation.
  -c <group>             Convalesce (repair, fix) resource group. Attempts to start
                         failed, non-critical resources within a resource group.

Resource Group Locking (for cluster Shutdown / Debugging):
  -l                     Lock local resource group managers. This prevents resource
                         groups from starting.
  -S                     Show lock state
  -u                     Unlock local resource group managers. This allows resource
                         groups to start.
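Against the help text above, the invocations most relevant to the web_service group in this article look like this (a sketch: the commands are echoed for review rather than executed against the cluster):

```shell
# Sketch of common clusvcadm invocations for this cluster's service,
# echoed so they can be reviewed before being run on a cluster node.
SVC="web_service"
echo "clusvcadm -e $SVC -m node2"   # start the service on node2
echo "clusvcadm -r $SVC -m node3"   # relocate it to node3
echo "clusvcadm -R $SVC"            # restart it in place
echo "clusvcadm -d $SVC"            # disable (stop) it
```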
Service relocation test:
[root@node1 ~]# clustat
Cluster Status for web_cluster @ Wed May 25 22:10:04 2016
Member Status: Quorate

 Member Name          ID   Status
 ------ ----          ---- ------
 node1                   1 Online, Local, rgmanager
 node2                   2 Online, rgmanager
 node3                   3 Online, rgmanager

 Service Name             Owner (Last)   State
 ------- ----             ----- ------   -----
 service:web_service      node3          started

[root@node1 ~]# clusvcadm -r web_service -m node2
Trying to relocate service:web_service to node2...Success
service:web_service is now running on node2

[root@node1 ~]# clustat
Cluster Status for web_cluster @ Wed May 25 22:11:00 2016
Member Status: Quorate

 Member Name          ID   Status
 ------ ----          ---- ------
 node1                   1 Online, Local, rgmanager
 node2                   2 Online, rgmanager
 node3                   3 Online, rgmanager

 Service Name             Owner (Last)   State
 ------- ----             ----- ------   -----
 service:web_service      node2          started
In any case the service runs normally; I will add the cause of the error once I track it down.