Ceph Multi-Node Quick Deployment
A Ceph environment needs at least four nodes: admin (if you use ceph-deploy), monitor, osd1 and osd2. Ceph stores at least one replica of each object, so at least two OSD nodes are required; the monitor node maintains the cluster maps used to locate objects.
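A possible layout for this article (only 172.16.1.4 appears in the text; the other addresses are assumptions, adjust to your own network):
admin-node  172.16.1.3   runs ceph-deploy
monitor     172.16.1.4   runs ceph-mon
osd1        172.16.1.5   runs ceph-osd
osd2        172.16.1.6   runs ceph-osd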
Environment:
All nodes run CentOS 6.5 (Linux 2.6.32) with Ceph 0.94.5 (Hammer).
I. Pre-deployment configuration (on all nodes)
1. Set SELinux to disabled.
2. Disable the firewall, or open the required ports (6789 and others); see the Ceph preflight guide: http://docs.ceph.com/docs/master/start/quick-start-preflight/
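If you prefer to keep iptables running instead of disabling it, a minimal sketch for CentOS 6 (6789 is the default monitor port, 6800-7300 the default OSD port range):
sudo iptables -A INPUT -p tcp --dport 6789 -j ACCEPT        # monitor
sudo iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT   # OSD daemons
sudo service iptables save                                  # persist the rules across reboots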
3. Install the NTP service:
sudo yum install ntp ntpdate ntp-doc
After installation, configure /etc/ntp.conf.
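On CentOS 6 you will typically also want ntpd enabled and running, for example:
sudo chkconfig ntpd on      # start ntpd at boot
sudo service ntpd start
ntpq -p                     # verify that time sources are reachable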
4. Install the SSH service (distributed platforms generally need passwordless SSH login):
a. sudo yum install openssh-server
b. On every node create a user for Ceph, e.g. ceph-admin, ceph-monitor, ceph-osd1, ceph-osd2, and give each a password and home directory (to keep passwords easy to remember, the password can match the username). From here on, everything in {} must be replaced with your own value, e.g. ceph-admin:
ssh user@ceph-server
sudo useradd -d /home/{username} -m {username}
sudo passwd {username}
Allow the user to run sudo without a password:
echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
sudo chmod 0440 /etc/sudoers.d/{username}
c. On the ceph-admin node run ssh-keygen to generate SSH keys; leave the passphrase empty (just press Enter):
ssh-keygen
Generating public/private key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ceph-admin/.ssh/id_rsa.
Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
Note (important): edit /etc/hosts on every node and add the IP-to-hostname mapping of every OSD and monitor node (e.g. 172.16.1.4 monitor), then reboot the nodes.
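For example (172.16.1.4 is the monitor address used later in this article; the other addresses are placeholders for your own network):
172.16.1.4   monitor
172.16.1.5   osd1
172.16.1.6   osd2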
d. On ceph-admin run ssh-copy-id to copy the SSH key to the other nodes:
ssh-copy-id {username}@monitor
ssh-copy-id {username}@osd1
ssh-copy-id {username}@osd2
e. It is recommended to create a config file under ~/.ssh/ with entries as follows (a filled-in example follows the template):
Host {node name from /etc/hosts, e.g. monitor}
Hostname {hostname}
User {username}
Host {node name from /etc/hosts, e.g. osd1}
Hostname {hostname}
User {username}
Host {node name from /etc/hosts, e.g. osd2}
Hostname {hostname}
User {username}
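Filled in with the example usernames from step b (hostnames assumed to match the /etc/hosts entries), the file might look like this:
Host monitor
Hostname monitor
User ceph-monitor
Host osd1
Hostname osd1
User ceph-osd1
Host osd2
Hostname osd2
User ceph-osd2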
Also disable the tty requirement for sudo: run sudo visudo, find Defaults requiretty and change it to Defaults:ceph !requiretty (replace ceph with your Ceph username).
With this in place, when you SSH from the admin node you can use the short names defined in the config file directly.
At this point it is worth taking an image or snapshot of one of the configured nodes; it makes it easy to add monitor and OSD nodes later, or to rebuild a node if something goes wrong during deployment.
5. On the admin node install ceph-deploy and yum-plugin-priorities.
Create ceph.repo under /etc/yum.repos.d/ with the following content ({ceph-release} is a Ceph release name such as firefly or hammer; {distro} is the distro tag, el6 for CentOS 6, el7 for CentOS 7; a filled-in baseurl follows the template):
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
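For the environment in this article (Ceph 0.94.5 Hammer on CentOS 6) the baseurl line becomes:
baseurl=http://download.ceph.com/rpm-hammer/el6/noarch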
Update and install:
sudo yum update && sudo yum install ceph-deploy
II. Deployment
1. On the admin node create a directory ~/my-cephcluster (run the ceph-deploy commands below from inside it); on the monitor node create a ceph directory under /etc/ and make sure it is writable.
2. Create the initial monitor configuration:
ceph-deploy new {initial-monitor-node(s)}
This generates three files in the current directory: ceph.conf, ceph.log and ceph.mon.keyring.
Edit ceph.conf and add two lines under [global]: public_network = {ip-address}/{netmask} (e.g. 172.16.1.0/24), and osd_pool_default_size = 2, which lowers the replica count from the default 3 to 2 so that the cluster can run with only two OSD nodes (a sketch of the result follows).
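A sketch of the resulting [global] section (fsid and the mon entries are generated by ceph-deploy new, so take those from your own ceph.conf rather than from here):
[global]
fsid = {generated by ceph-deploy new}
mon_initial_members = monitor
mon_host = 172.16.1.4
public_network = 172.16.1.0/24
osd_pool_default_size = 2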
3. Install Ceph (this installs the ceph packages on all nodes at once; monitor, osd1, osd2 and admin-node are the names configured in /etc/hosts):
ceph-deploy install monitor osd1 osd2 admin-node
4. Create the initial monitor:
ceph-deploy mon create-initial
This generates the following files in the current directory ({cluster-name} defaults to ceph); at this point the monitor is up and running:
{cluster-name}.client.admin.keyring
{cluster-name}.bootstrap-osd.keyring
{cluster-name}.bootstrap-mds.keyring
{cluster-name}.bootstrap-rgw.keyring
Two problems commonly occur at this step:
a. The ceph service fails to start on the monitor node, complaining that the ip_address cannot be found or that public_network does not exist. In that case add the following to ceph.conf:
[mon.a]
host = monitor
mon_addr = 172.16.1.4:6789
b. An error that /etc/ceph/client.admin.keyring cannot be found, while the status returned by the monitor shows addr: 127.0.0.1:6789/0. This means /etc/hosts on the monitor node is missing the mapping for 172.16.1.4; add a line such as 172.16.1.4 monitor.
5. Add the OSD nodes.
Create a directory on each OSD node; the filesystem holding the directory needs more than 10 GB of free space (node2 and node3 below stand for the two OSD nodes, i.e. osd1 and osd2 in /etc/hosts):
ssh node2
sudo mkdir /var/local/osd0
exit
ssh node3
sudo mkdir /var/local/osd1
6. Prepare the OSDs:
ceph-deploy osd prepare {ceph-node}:/path/to/directory
e.g. ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
7. Activate the OSDs:
ceph-deploy osd activate {ceph-node}:/path/to/directory
e.g. ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
8. Copy ceph.client.admin.keyring to every node that needs to run ceph commands (so that ceph commands on non-monitor nodes do not have to specify the monitor address and key):
ceph-deploy admin admin-node node1 node2 node3
Give the ceph.client.admin.keyring file read permission:
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
9. Verify the deployment:
ceph health
When all placement groups reach the active+clean state, ceph health reports HEALTH_OK and the deployment has succeeded.
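For example, on a healthy cluster:
ceph health     # should print HEALTH_OK
ceph osd tree   # both osd.0 and osd.1 should be listed as up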
III. Extensions
1. Add an OSD node
You can install the new node (osd3) directly from the image or snapshot made in Part I.
Then create the directory: sudo mkdir /var/local/osd3
On the admin node, prepare it: ceph-deploy osd prepare node3:/var/local/osd3
Activate it: ceph-deploy osd activate node3:/var/local/osd3
Run ceph -w to watch the result.
2. Add a metadata server (MDS) node to enable the CephFS interface.
Again, install the new node (mds-node) from the Part I image or snapshot, then run:
ceph-deploy mds create mds-node
3. Add an RGW (object gateway) node:
ceph-deploy rgw create rgw-node (rgw-node can be any node with the ceph and ceph-radosgw packages installed, including the monitor node)
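On Hammer, ceph-deploy sets the gateway up on civetweb listening on port 7480 by default, so a quick smoke test (assuming that default) is:
curl http://rgw-node:7480/   # an anonymous ListAllMyBuckets XML response means the gateway is up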
4. Add monitor nodes:
ceph-deploy mon add node2 node3 (node2 and node3 can be OSD nodes such as osd1 and osd2)
Check the result: ceph quorum_status --format json-pretty
5. Reading and writing object data (see ceph-deploy --help or ceph --help to learn more):
To store object data in the Ceph Storage Cluster, a Ceph client must:
Set an object name
Specify a pool (the default pool is rbd)
The Ceph Client retrieves the latest cluster map and the CRUSH algorithm calculates how to map the object to a placement group, and then calculates how to assign the placement group to a Ceph OSD Daemon dynamically. To find the object location, all you need is the object name and the pool name. For example:
ceph osd map {poolname} {object-name}
Exercise: Locate an Object
As an exercise, let's create an object. Specify an object name, a path to a test file containing some object data, and a pool name using the rados put command on the command line. For example:
echo {Test-data} > testfile.txt
rados put {object-name} {file-path} --pool=rbd
rados put test-object-1 testfile.txt --pool=rbd
To verify that the Ceph Storage Cluster stored the object, execute the following:
rados -p rbd ls
Now, identify the object location:
ceph osd map {pool-name} {object-name}
ceph osd map rbd test-object-1
Ceph should output the object’s location. For example:
osdmap e537 pool 'rbd' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up [1,0] acting [1,0]
To remove the test object, simply delete it using the rados rm command. For example:
rados rm test-object-1 --pool=rbd
As the cluster evolves, the object location may change dynamically. One benefit of Ceph’s dynamic rebalancing is that Ceph relieves you from having to perform the migration manually.