
Setting Up Ceph on CentOS 7 (Part 1)

2017-07-20 16:28


Create five machines: one admin node, one client, one monitor node, and two object-storage nodes.

I. Host Preparation

1. Prepare five hosts, name them, and configure their networks (you may choose different hostnames; the table below is used throughout this guide).

IP            Hostname
192.168.1.76  admin-node (ceph-deploy)
192.168.1.77  client
192.168.1.78  node01 (monitor daemon)
192.168.1.79  node02 (object storage)
192.168.1.80  node03 (object storage)

The detailed steps are as follows:

Rename each host to match the table using hostnamectl; run the appropriate command on each machine, as shown below.
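On each machine, substituting its hostname from the table:

192.168.1.76: # hostnamectl set-hostname admin-node
192.168.1.77: # hostnamectl set-hostname client
192.168.1.78: # hostnamectl set-hostname node01
192.168.1.79: # hostnamectl set-hostname node02
192.168.1.80: # hostnamectl set-hostname node03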

From a machine on the same network, use ping to verify that each address is reachable, for example:

[zhoujing@zhouj ~]$ ping 192.168.1.76
PING 192.168.1.76 (192.168.1.76) 56(84) bytes of data.
64 bytes from 192.168.1.76: icmp_seq=1 ttl=64 time=0.419 ms
64 bytes from 192.168.1.76: icmp_seq=2 ttl=64 time=0.485 ms
64 bytes from 192.168.1.76: icmp_seq=3 ttl=64 time=0.456 ms
^C
--- 192.168.1.76 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2081ms
rtt min/avg/max/mdev = 0.419/0.453/0.485/0.032 ms
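To check all five hosts in one go, a small shell loop (IPs from the table above) can be run from any machine on the LAN:

for ip in 192.168.1.76 192.168.1.77 192.168.1.78 192.168.1.79 192.168.1.80; do
    ping -c 1 -W 1 $ip > /dev/null && echo "$ip reachable" || echo "$ip unreachable"
done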


On each virtual machine's console:

[root@admin-node ~]#nmcli d
DEVICE   TYPE       STATE         CONNECTION
enp0s3   ethernet   disconnected  --

[root@admin-node ~]#nmcli connection modify enp0s3 ipv4.method manual ipv4.addresses 192.168.1.76/24
[root@admin-node ~]#nmcli connection modify enp0s3 ipv4.dns 192.168.3.1
[root@admin-node ~]#nmcli connection modify enp0s3 ipv4.gateway 192.168.1.1
[root@admin-node ~]#nmcli c down enp0s3; nmcli c up enp0s3
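To confirm the new settings took effect, inspect the connection's IPv4 properties and the interface address:

[root@admin-node ~]# nmcli connection show enp0s3 | grep ipv4
[root@admin-node ~]# ip addr show enp0s3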


Configure the network on all five virtual machines in the same way and verify that they can reach one another.

2. Edit the /etc/hosts file on the admin-node and add the following entries.

Run vi /etc/hosts and append:

192.168.1.76 admin-node
192.168.1.77 client
192.168.1.78 node01
192.168.1.79 node02
192.168.1.80 node03
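After saving, name resolution can be verified without DNS; getent reads /etc/hosts directly:

[root@admin-node ~]# getent hosts node01
192.168.1.78    node01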


3. Create a ceph user on each of the five hosts (admin-node shown here):

[root@admin-node ~]#adduser -d /home/ceph -m ceph
[root@admin-node ~]#passwd ceph


Grant the account passwordless sudo:

[root@admin-node ~]#echo -e 'Defaults:ceph !requiretty\nceph ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
[root@admin-node ~]#chmod 440 /etc/sudoers.d/ceph
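To confirm the entry works, switch to the ceph user and run a command through sudo; it should print root without prompting for a password:

[root@admin-node ~]# su - ceph -c 'sudo whoami'
root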


II. Configure Repositories, Firewall, and SELinux

Perform the following on every node (admin-node shown here):

[root@admin-node ~]#yum -y install centos-release-ceph-hammer epel-release yum-plugin-priorities
If yum reports that another process holds the lock, remove the stale pid file: rm -f /var/run/yum.pid

Give the Ceph repository priority over the stock repositories:
[root@admin-node ~]#sed -i -e "s/enabled=1/enabled=1\npriority=1/g" /etc/yum.repos.d/CentOS-Ceph-Hammer.repo
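To confirm the repository is active and the priority line was inserted:

[root@admin-node ~]# yum repolist | grep -i ceph
[root@admin-node ~]# grep -A1 enabled=1 /etc/yum.repos.d/CentOS-Ceph-Hammer.repo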

[root@admin-node ~]# firewall-cmd --add-service=ssh --permanent
[root@admin-node ~]# firewall-cmd --reload
Disable SELinux:
[root@admin-node ~]#sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@admin-node ~]#setenforce 0
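setenforce 0 takes effect immediately; getenforce should now report Permissive (the config-file change applies after the next reboot):

[root@admin-node ~]# getenforce
Permissive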

[root@admin-node ~]# firewall-cmd --add-port=6789/tcp --permanent
[root@admin-node ~]# firewall-cmd --add-port=6800-7100/tcp --permanent
[root@admin-node ~]# firewall-cmd --reload
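Port 6789/tcp is used by the Ceph monitor, and 6800-7100/tcp covers the OSD daemons; the active rules can be reviewed with:

[root@admin-node ~]# firewall-cmd --list-all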


III. Install NTP

Clock skew between nodes causes Ceph health warnings, so time must be synchronized on every node:

[root@admin-node ~]#yum -y install ntpdate ntp-doc
[root@admin-node ~]#ntpdate 0.cn.pool.ntp.org
[root@admin-node ~]#systemctl enable ntpd.service
[root@admin-node ~]#systemctl start ntpd.service
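Once ntpd is running, peer status can be checked; an asterisk in the first column marks the currently selected time source:

[root@admin-node ~]# ntpq -p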


IV. Set up an SSH key on admin-node for passwordless access to the other nodes

[root@admin-node ~]#ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
3c:00:ab:fe:17:55:42:bf:65:68:90:ce:97:29:2f:af root@admin-node
The key's randomart image is: (omitted)

* Create ~/.ssh/config:
[root@admin-node ~]# vi ~/.ssh/config
* Add the following (one entry per node, each logging in as the ceph user):
Host admin-node
    Hostname 192.168.1.76
    User ceph
Host client
    Hostname 192.168.1.77
    User ceph
Host node01
    Hostname 192.168.1.78
    User ceph
Host node02
    Hostname 192.168.1.79
    User ceph
Host node03
    Hostname 192.168.1.80
    User ceph
[root@admin-node ~]# chmod 600 ~/.ssh/config
* Copy the public key to the other nodes:
[root@admin-node ~]#ssh-copy-id node01
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ceph@192.168.1.78's password:
Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node01'"
and check to make sure that only the key(s) you wanted were added.

[root@admin-node ~]#ssh-copy-id node02
[root@admin-node ~]#ssh-copy-id node03
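With the keys in place, commands on the storage nodes should run without a password prompt (logging in as the ceph user from ~/.ssh/config):

[root@admin-node ~]# for h in node01 node02 node03; do ssh $h hostname; done
node01
node02
node03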


V. Install ceph-deploy on the admin node and deploy Ceph to the remaining nodes

[root@admin-node ~]#sudo yum -y install ceph-deploy

[root@admin-node ~]# mkdir ceph
[root@admin-node ~]# cd ceph

[root@admin-node ceph]# ceph-deploy new node01
[root@admin-node ceph]# vi ./ceph.conf
* Append the following at the end; it sets the number of replicas kept of each object to 2, which suits this small test cluster:
osd pool default size = 2

* Install Ceph on the other nodes:
[root@admin-node ceph]# ceph-deploy install admin-node node01 node02 node03

* Set up the monitor and gather the keys:
[root@admin-node ceph]# ceph-deploy mon create-initial
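If this step succeeds, ceph-deploy gathers the monitor, bootstrap, and admin keyrings into the working directory; a quick listing (exact file names may vary by release):

[root@admin-node ceph]# ls -1 ceph.conf *.keyring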


VI. Configure and manage the Ceph cluster from the admin node

* Prepare the object-storage daemons; the target directory must already exist on each node (see the sketch below):
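A minimal sketch for creating the target directory on each node over the passwordless SSH configured earlier (the path matches the prepare command below):

[root@admin-node ceph]# for h in node01 node02 node03; do ssh $h sudo mkdir -p /var/lib/ceph/osd; done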
[root@admin-node ceph]#ceph-deploy osd prepare node01:/var/lib/ceph/osd node02:/var/lib/ceph/osd node03:/var/lib/ceph/osd

* Activate the object-storage daemons:
[root@admin-node ceph]#ceph-deploy osd activate node01:/var/lib/ceph/osd node02:/var/lib/ceph/osd node03:/var/lib/ceph/osd

* Push the configuration file and keyring from admin-node to the other nodes:
[root@admin-node ceph]#ceph-deploy admin admin-node node01 node02 node03
[root@admin-node ceph]#sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
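If you intend to run the ceph CLI from the other nodes too, the keyring there needs the same permissions; a hedged sketch using the SSH setup from earlier:

[root@admin-node ceph]# for h in node01 node02 node03; do ssh $h sudo chmod 644 /etc/ceph/ceph.client.admin.keyring; done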

* Check the cluster health:
[root@admin-node ceph]#ceph health
On success this prints: HEALTH_OK
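For more detail than the one-line summary, the status and OSD-tree commands show monitor quorum and where each OSD landed (output varies by cluster):

[root@admin-node ceph]# ceph -s
[root@admin-node ceph]# ceph osd tree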


To tear everything down and start over:

* Remove the Ceph packages:
#ceph-deploy purge admin-node node01 node02 node03
* Purge the remaining data and settings:
#ceph-deploy purgedata admin-node node01 node02 node03
#ceph-deploy forgetkeys