
OpenStack Cinder - Add more volume nodes

2013-12-07

With this being the first of a short series, I'd like to publish some articles intended to cover the steps required to configure Cinder (the OpenStack block storage service) in a mid/large deployment scenario. The idea is to discuss at least three topics: how to scale the service by adding more volume nodes; how to ensure high availability for the API and Scheduler sub-services; and how to leverage the multi-backend feature that landed in Grizzly.

I'm starting the series with this post on the scaling issue. Cinder is composed of three main parts: the API server, the scheduler and the volume service. The volume service is essentially an abstraction layer between the API and the actual resource providers.

By adding more volume nodes to the environment you will be able to increase the total block storage offered to the tenants. Each volume node can provide volumes either by allocating them locally or on a remote container such as an NFS or GlusterFS share.
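For instance, to back a volume node with an NFS share, the Grizzly-era NFS driver needs little more than two settings in cinder.conf (the share address below is a placeholder and option names should be verified against your release):

volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/shares.conf

where /etc/cinder/shares.conf simply lists one share per line, e.g. 192.168.0.100:/export/cinder.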

Some assumptions before getting into the practice:

you're familiar with the general OpenStack architecture

you have at least one Cinder node configured and working as expected

The first thing to do on the candidate node is to install the required packages. I'm running the examples on CentOS and using the RDO repository, which makes this step as simple as:

# yum install openstack-cinder

If you plan to host new volumes using the locally available storage, don't forget to create a volume group called cinder-volumes (the name can be configured via the volume_group parameter).
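Assuming a spare disk is available at /dev/sdb (an example device name, adjust it to your hardware), the volume group can be created with the standard LVM tools:

# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb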
Also don't forget to configure tgtd to include the config files created dynamically by Cinder. Add a line like the following:

include /etc/cinder/volumes/*

in your /etc/tgt/targets.conf file. Now enable and start the tgtd service:

# chkconfig tgtd on
# service tgtd start

Amongst the three init services installed by openstack-cinder, you only need to run openstack-cinder-volume, which is configured in /etc/cinder/cinder.conf. Configure it to connect to the existing Cinder database (the one in use by the pre-existing node) and to the existing AMQP broker (again, the one in use by the pre-existing node) by setting the following:

sql_connection=mysql://cinder:${CINDER_DB_PASSWORD}@${CINDER_DB_HOST}/cinder
qpid_hostname=${QPIDD_BROKER}

Set the credentials if needed and/or change the rpc_backend setting if you're not using Qpid as your message broker.
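For instance, a sketch of the equivalent settings for RabbitMQ (option names as in the Grizzly-era shared rpc code; check them against your release):

rpc_backend=cinder.openstack.common.rpc.impl_kombu
rabbit_host=${RABBIT_BROKER}
rabbit_password=${RABBIT_PASSWORD}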
One more setting is not strictly required but worth checking if you're using the local resources:

iscsi_ip_address=${TGTD_IP_ADDRESS}

That should match the public IP address of the volume node just installed. The iSCSI targets created locally using tgtadm/tgtd have to be reachable by the Nova nodes. The IP address of each target is stored in the database with every volume created; the iscsi_ip_address parameter determines the IP address that will be handed to the initiators.
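Later on, once volumes have been created on this node, you can double-check which targets tgtd is exporting with plain tgtadm (nothing Cinder-specific here):

# tgtadm --lld iscsi --mode target --op show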

At this point you should be ready to start the volume service:

# service openstack-cinder-volume start
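You probably also want the service to come back after a reboot, same as for tgtd:

# chkconfig openstack-cinder-volume on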

Verify that it started by checking the logs (/var/log/cinder/volume.log) or by issuing the following on any Cinder node:

# cinder-manage host list

You should see all of your volume nodes listed. From now on you can create new volumes as usual and they will be allocated on one of the volume nodes; keep in mind that the scheduler defaults to the node with the most space available.
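As a quick end-to-end test, you can create a small volume and check which node the scheduler placed it on (the volume name is arbitrary and the host attribute is only visible to admin users):

# cinder create --display-name test 1
# cinder show test | grep os-vol-host-attr:host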

Category: techie

Tags: openstack, cinder, fedoraplanet