
OpenStack Juno Deployment Notes

2015-05-28 20:34


1. Base Environment and Configuration

1.1. Deployment Topology

Minimal three-node topology (controller, network, compute), plus block and object storage nodes.



1.2. VM Specifications

Guest OS: CentOS-7 x86_64

VM specs:

Node        CPU      RAM   Disk    NICs
controller  2 cores  4G    50G     eth0(192.168.8.202) eth1(10.0.0.11)
network     2 cores  4G    50G     eth0(192.168.8.203) eth1(10.0.0.21) eth2(10.0.1.21)
compute1    2 cores  4G    50G     eth0(192.168.8.204) eth1(10.0.0.31) eth2(10.0.1.31) eth3(10.0.2.31)
block1      1 core   1G    50G*2   eth0(192.168.8.179) eth1(10.0.0.41) eth2(10.0.2.41)
object1     2 cores  4G    50G*3   eth0(192.168.8.205) eth1(10.0.0.51)
object2     2 cores  4G    50G*3   eth0(192.168.8.206) eth1(10.0.0.52)

1.3. VM Network Configuration

1.3.1. Edit the Interface Configuration

Edit the configuration file of each NIC (each NIC gets a different configuration; see the deployment topology):

vim /etc/sysconfig/network-scripts/ifcfg-enXXX


TYPE=Ethernet
BOOTPROTO=none
NAME=eth1
UUID=01d93c96-c3f8-4b9c-8b1c-e51d2fa71295
DEVICE=ens224
ONBOOT=yes
IPADDR=10.0.0.11
NETMASK=255.255.255.0


Restart networking:

service network restart


1.3.2. Set the Hostname

Set it permanently:

hostnamectl set-hostname (controller/network/compute1)


Name resolution:

Edit the /etc/hosts file:
vi /etc/hosts


# controller
10.0.0.11 controller
# network
10.0.0.21 network
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
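If you prefer to script it, the same /etc/hosts entries can be generated from a single node list; a small sketch, with the node/IP pairs taken from the table above:

```shell
# gen_hosts prints /etc/hosts entries for "name:ip" pairs.
gen_hosts() {
    for pair in "$@"; do
        name=${pair%%:*}   # text before the colon
        ip=${pair#*:}      # text after the colon
        printf '# %s\n%s %s\n' "$name" "$ip" "$name"
    done
}

gen_hosts controller:10.0.0.11 network:10.0.0.21 compute1:10.0.0.31 \
          block1:10.0.0.41 object1:10.0.0.51 object2:10.0.0.52
```

Append the output to /etc/hosts on every node.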


Finally, reboot the server:
reboot


1.3.3. Verify the Network Configuration

Ping every node from every other node.

1.4. Configure Package Repositories

1.4.1. Configure the Base Repository

cd /etc/yum.repos.d/

wget http://mirrors.163.com/.help/CentOS7-Base-163.repo

mv CentOS-Base.repo CentOS-Base.repo.bak

mv CentOS7-Base-163.repo CentOS-Base.repo

yum clean all

yum makecache

yum update

1.4.2. Add the OpenStack Repositories

yum install yum-plugin-priorities -y

yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm -y

yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm -y

yum upgrade

yum install openstack-selinux -y

1.4.3. Install the pip Package Manager

Install git:

yum install git

Install pip (see https://pip.pypa.io/en/stable/installing.html):

wget https://bootstrap.pypa.io/get-pip.py

python get-pip.py

Change the pip index:

a. Under /root, create the file
.pip/pip.conf


[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple

b. Under /root, create the file
.pydistutils.cfg


[easy_install]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
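Both files can be written in one go; a sketch that writes into a scratch directory ($dest stands in for /root, so swap it to match the steps above):

```shell
# Point pip and easy_install at the TUNA mirror.
# $dest is a scratch directory here; use /root on the real node.
dest=$(mktemp -d)

mkdir -p "$dest/.pip"
cat > "$dest/.pip/pip.conf" <<'EOF'
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
EOF

cat > "$dest/.pydistutils.cfg" <<'EOF'
[easy_install]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
EOF
```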

2. Database and Message Queue (controller)

Both the database and the message queue are installed on the controller node.

2.1. Database

2.1.1. Install the Database

Install MariaDB:

yum install mariadb mariadb-server MySQL-python

Configure the database (/etc/my.cnf; note that different options go under different sections):

[mysqld]
...
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8


2.1.2. Start the Database

Enable and start the database:

systemctl enable mariadb.service

systemctl start mariadb.service

Set the database password (follow the prompts; the password used here is XXXXXXXX):

mysql_secure_installation

2.2. Message Queue

2.2.1. Install rabbitmq-server

yum install rabbitmq-server

2.2.2. Start RabbitMQ

systemctl enable rabbitmq-server.service

systemctl start rabbitmq-server.service

2.2.3. Set the guest Password

The password used here is RABBIT_PASS:

rabbitmqctl change_password guest RABBIT_PASS

2.2.4. Allow Remote Access for guest (only needed on RabbitMQ 3.3 and later)

Check the version:

rabbitmqctl status | grep rabbit

Edit the configuration file:

Create or edit
/etc/rabbitmq/rabbitmq.config


[{rabbit, [{loopback_users, []}]}].


Restart the service:

systemctl restart rabbitmq-server.service

3. Keystone Identity Service (controller)

3.1. Installation and Configuration

3.1.1. Create the Database

a. Log in to the database:

mysql -u root -p

b. Create the keystone database:

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';


Exit the database.


c. Generate a random token (needed later when configuring keystone):
`openssl rand -hex 10` ⇒ eacff018577347bcba55
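`openssl rand -hex 10` emits 10 random bytes as 20 hex characters; a quick sanity check that the generated token looks right:

```shell
# openssl rand -hex N prints N random bytes as 2*N hex characters.
TOKEN=$(openssl rand -hex 10)

# Verify: exactly 20 characters, all of them hex digits.
echo "token=$TOKEN length=${#TOKEN}"
case "$TOKEN" in
    *[!0-9a-f]*) echo "not hex" ;;
    *)           echo "ok" ;;
esac
```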

3.1.2. Install and Configure Keystone

Install:

yum install openstack-keystone python-keystoneclient

Configure keystone in
/etc/keystone/keystone.conf


[DEFAULT]
...
admin_token = eacff018577347bcba55
verbose = true  # optional, enables verbose logging
[database]
...
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
[token]
...
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
[revoke]
...
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone


Set up PKI and fix ownership:

keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

chown -R keystone:keystone /var/log/keystone

chown -R keystone:keystone /etc/keystone/ssl

chmod -R o-rwx /etc/keystone/ssl

Sync the database:

su -s /bin/sh -c "keystone-manage db_sync" keystone

Enable the service at boot:

systemctl enable openstack-keystone.service

systemctl start openstack-keystone.service

3.2. Create Tenants, Users, and Roles

3.2.1. Set Environment Variables

Admin token:

export OS_SERVICE_TOKEN=eacff018577347bcba55

Endpoint address:

export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0

3.2.2. Create the Tenants, Users, and Roles

Create admin:

Create the admin tenant:

keystone tenant-create --name admin --description "Admin Tenant"

Create the admin user (the password is ADMIN_PASS):

keystone user-create --name admin --pass ADMIN_PASS --email admin@admin.com

Create the admin role:

keystone role-create --name admin

Bind the admin user to the admin tenant:

keystone user-role-add --user admin --tenant admin --role admin



Create demo:

Create the demo tenant:

keystone tenant-create --name demo --description "Demo Tenant"


Create the demo user:

keystone user-create --name demo --tenant demo --pass DEMO_PASS --email demo@demo.com


Create the service tenant:

keystone tenant-create --name service --description "Service Tenant"


3.2.3. Create the Service and API Endpoint

Create the identity service:

keystone service-create --name keystone --type identity --description "OpenStack Identity"


Create the API endpoint (you can also paste the service id by hand):

keystone endpoint-create \
--service-id $(keystone service-list | awk '/ identity / {print $2}') \
--publicurl http://controller:5000/v2.0 \
--internalurl http://controller:5000/v2.0 \
--adminurl http://controller:35357/v2.0 \
--region regionOne
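The `$(keystone service-list | awk '/ identity / {print $2}')` substitution just pulls the id column out of the table that `keystone service-list` prints. A dry run on canned output (the id below is a made-up placeholder):

```shell
# Simulated `keystone service-list` output; the id is a placeholder.
sample='+----------------------------------+----------+----------+--------------------+
|                id                |   name   |   type   |    description     |
+----------------------------------+----------+----------+--------------------+
| 1f2076c8a4cd4f1c8d853a7f3f42e7c1 | keystone | identity | OpenStack Identity |
+----------------------------------+----------+----------+--------------------+'

# awk selects the row containing " identity " and prints field 2,
# which is the id column ($1 is the leading "|").
echo "$sample" | awk '/ identity / {print $2}'
```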


3.2.4. Create Credential Scripts

For the admin user, create the file
keystonerc_admin


export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v2.0


For the demo user, create the file
keystonerc_demo


export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v2.0


Make them executable:

chmod +x keystonerc_admin

chmod +x keystonerc_demo

Test them:

source keystonerc_admin

keystone user-list
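Sourcing one of these files simply exports the four OS_* variables into the current shell; a sketch using a throwaway copy of keystonerc_admin:

```shell
# Write a throwaway copy of keystonerc_admin and source it.
rc=$(mktemp)
cat > "$rc" <<'EOF'
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v2.0
EOF

. "$rc"
echo "user=$OS_USERNAME tenant=$OS_TENANT_NAME"
```

The keystone client reads these variables when no --os-* flags are passed on the command line.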

4. Glance Image Service (controller)

4.1. Database Setup

4.1.1. Create the Glance Database

a. Log in to the database:

mysql -u root -p

b. Create the glance database (the password is GLANCE_DBPASS):

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' \
IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';


Exit the database.


4.1.2. Register Glance with Keystone

Source the admin credentials:

source keystonerc_admin

Create the glance user, service, and endpoint:

keystone user-create --name glance --pass GLANCE_PASS --email glance@admin.com
keystone user-role-add --user glance --tenant service --role admin
keystone service-create --name glance --type image \
--description "OpenStack Image Service"

keystone endpoint-create \
--service-id $(keystone service-list | awk '/ image / {print $2}') \
--publicurl http://controller:9292 \
--internalurl http://controller:9292 \
--adminurl http://controller:9292 \
--region regionOne


4.2. Install and Configure Glance

4.2.1. Install Glance

yum install openstack-glance python-glanceclient

4.2.2. Configure Glance

Edit the
/etc/glance/glance-api.conf
file:

[database]
...
connection = mysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS

[paste_deploy]
...
flavor=keystone

[glance_store]
...
filesystem_store_datadir=/var/lib/glance/images/

[DEFAULT]
...
verbose=True
debug=True
notification_driver = noop


Edit the
/etc/glance/glance-registry.conf
file:

[database]
...
connection = mysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS

[paste_deploy]
...
flavor = keystone

[DEFAULT]
...
notification_driver = noop
verbose=True
debug=True


Sync the database:

su -s /bin/sh -c "glance-manage db_sync" glance

Enable the services at boot:

systemctl enable openstack-glance-api.service openstack-glance-registry.service

systemctl start openstack-glance-api.service openstack-glance-registry.service

4.3. Verify Glance

a. Create a tmp directory and download a small test image:

mkdir /tmp/images

wget -P /tmp/images http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img

b. Source the credentials:

source keystonerc_admin

c. Upload the image:

glance image-create --name "cirros-0.3.3-x86_64" --file /tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --is-public True --progress


d. Verify the image:

glance image-list



e. Remove the local tmp files:

rm -r /tmp/images

5. Nova Compute Service (controller && compute)

5.1. Configure Nova on the Controller Node

5.1.1. Keystone Database and Service

A. Create the database

a. Log in to the database:

mysql -u root -p

b. Create the nova database (the password is NOVA_DBPASS):

CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';


Exit the database.


B. Connect to the Keystone service

a. Source the admin credentials:

source keystonerc_admin

b. Create the nova user, service, and endpoint:

keystone user-create --name nova --pass NOVA_PASS --email nova@admin.com
keystone user-role-add --user nova --tenant service --role admin
keystone service-create --name nova --type compute --description "OpenStack Compute"

keystone endpoint-create \
--service-id $(keystone service-list | awk '/ compute / {print $2}') \
--publicurl http://controller:8774/v2/%\(tenant_id\)s \
--internalurl http://controller:8774/v2/%\(tenant_id\)s \
--adminurl http://controller:8774/v2/%\(tenant_id\)s \
--region regionOne


5.1.2. Install and Configure Nova

A. Install the Nova packages

yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient -y

B. Configure Nova

Edit /etc/nova/nova.conf

[database]
...
# add this section yourself if it is missing
connection = mysql://nova:NOVA_DBPASS@controller/nova

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS

auth_strategy = keystone

my_ip = 10.0.0.11

vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS

[glance]
...
host = controller


Sync the database:

su -s /bin/sh -c "nova-manage db sync" nova

Enable the Nova services at boot:

systemctl enable openstack-nova-api.service

systemctl enable openstack-nova-cert.service

systemctl enable openstack-nova-consoleauth.service

systemctl enable openstack-nova-scheduler.service

systemctl enable openstack-nova-conductor.service

systemctl enable openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service

systemctl start openstack-nova-cert.service

systemctl start openstack-nova-consoleauth.service

systemctl start openstack-nova-scheduler.service

systemctl start openstack-nova-conductor.service

systemctl start openstack-nova-novncproxy.service

5.2. Configure Nova on the Compute Node

5.2.1. Install the Nova compute package

yum install openstack-nova-compute sysfsutils -y

5.2.2. Configure Nova compute

Edit
/etc/nova/nova.conf

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS

auth_strategy = keystone

my_ip = 10.0.0.31

vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.0.0.31
novncproxy_base_url = http://controller:6080/vnc_auto.html

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS

[glance]
...
host = controller
[libvirt]
...
virt_type = qemu


Enable and start the services:

systemctl enable libvirtd.service openstack-nova-compute.service

systemctl start libvirtd.service openstack-nova-compute.service

Warning: if the nova-compute service fails to start, it most likely cannot reach the message queue on the controller. Check whether port 5672 is listening on the controller and whether its firewall is down.

Check the port:
netstat -tunlp | grep 5672


Disable the controller's firewall:

systemctl disable firewalld.service

systemctl stop firewalld.service

5.3. Verify the Nova Service

a. Source the keystonerc_admin file:

source keystonerc_admin

b. List the nova services (there should be 5):

nova service-list



c. List the images:

nova image-list

6. Neutron Networking Service (all nodes)

6.1. Configure Neutron on the Controller

6.1.1. Keystone Database and Service

A. Create the database

a. Log in to the database:

mysql -u root -p

b. Create the neutron database (the password is NEUTRON_DBPASS):

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller' \
IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';


Exit the database.


B. Connect to the Keystone service

Source the admin credentials:

source keystonerc_admin

Create the neutron user, service, and endpoint:

keystone user-create --name neutron --pass NEUTRON_PASS --email neutron@admin.com
keystone user-role-add --user neutron --tenant service --role admin
keystone service-create --name neutron --type network --description "OpenStack Networking"

keystone endpoint-create \
--service-id $(keystone service-list | awk '/ network / {print $2}') \
--publicurl http://controller:9696 \
--adminurl http://controller:9696 \
--internalurl http://controller:9696 \
--region regionOne


6.1.2. Install and Configure the Neutron Network Components

A. Install Neutron

yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which -y

B. Configure the Neutron service

Edit /etc/neutron/neutron.conf

[database]
...
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS

auth_strategy = keystone

core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_auth_url = http://controller:35357/v2.0
nova_region_name = regionOne
nova_admin_username = nova
# your service tenant identifier: keystone tenant-get service
nova_admin_tenant_id = 8be4e4bfde3c40959e151476983a6648
nova_admin_password = NOVA_PASS

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS


C. Configure the ML2 plugin

Edit
/etc/neutron/plugins/ml2/ml2_conf.ini


[ml2]
...
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
...
tunnel_id_ranges = 1:1000

[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


D. Point Nova at the Neutron service

Edit
/etc/nova/nova.conf


[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[neutron]
...
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS


E. Start the Neutron service

Link the plugin:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Sync the database:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron

Restart the Nova services:

systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service

Enable the service:

systemctl enable neutron-server.service

systemctl start neutron-server.service

F. Verify

source keystonerc_admin


neutron ext-list




6.2. Configure Neutron on the Network Node

6.2.1. Kernel Parameters

Edit
/etc/sysctl.conf


net.ipv4.ip_forward=1

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

Apply:

sysctl -p

6.2.2. Install and Configure Neutron

A. Install the Neutron packages

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y

B. Common configuration

Edit
/etc/neutron/neutron.conf


[database]
# comment out every connection line

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS

auth_strategy = keystone

core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS


C. ML2 configuration

Edit
/etc/neutron/plugins/ml2/ml2_conf.ini


[ml2]
...
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_flat]
...
flat_networks = external

[ml2_type_gre]
...
tunnel_id_ranges = 1:1000

[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
...
local_ip = 10.0.1.21
enable_tunneling = True
bridge_mappings = external:br-ex

[agent]
...
tunnel_types = gre


D. L3 agent configuration

Edit
/etc/neutron/l3_agent.ini


[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex
router_delete_namespaces = True
verbose=True


E. DHCP agent configuration

Edit
/etc/neutron/dhcp_agent.ini


[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
dhcp_delete_namespaces = True
verbose=True


F. Metadata agent configuration

Edit
/etc/neutron/metadata_agent.ini


[DEFAULT]
...
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

verbose=True


On the controller node, edit
/etc/nova/nova.conf


[neutron]
...
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET


On the controller node, restart the compute API service:

systemctl restart openstack-nova-api.service

G. OVS service configuration

Start the OVS service:

systemctl enable openvswitch.service

systemctl start openvswitch.service

Add the external bridge:

ovs-vsctl add-br br-ex

Add the external bridge port (ens256):

ovs-vsctl add-port br-ex ens256

Temporarily disable GRO:

ethtool -K ens256 gro off

6.2.3. Start and Verify the Neutron Services

A. Start the services

a. Link the plugin:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

b. Back up and patch the systemd unit file:

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
/usr/lib/systemd/system/neutron-openvswitch-agent.service.orig

sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
/usr/lib/systemd/system/neutron-openvswitch-agent.service
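The sed expression uses commas as delimiters so the slashes in the paths need no escaping; it rewrites the plugin path inside the unit file. A dry run on a sample line (the unit-file content below is illustrative, not copied from the package):

```shell
# Sample line as it might appear in the unit file (illustrative).
line='ExecStart=/usr/bin/neutron-openvswitch-agent --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini'

# Same substitution as above, with ',' as the s-command delimiter.
echo "$line" | sed 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g'
```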

c. Start the networking services:

systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service \
neutron-ovs-cleanup.service

systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service

B. Verify the installation

Run on the controller:

source keystonerc_admin


neutron agent-list




6.3. Configure Neutron on the Compute Node

6.3.1. Kernel Parameters

Edit
/etc/sysctl.conf


net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

Apply:

sysctl -p

6.3.2. Install and Configure Neutron

A. Install the Neutron packages

yum install openstack-neutron-ml2 openstack-neutron-openvswitch -y

B. Common configuration

Edit
/etc/neutron/neutron.conf


[database]
# remove every connection line
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS

auth_strategy = keystone

core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS


C. ML2 configuration

Edit
/etc/neutron/plugins/ml2/ml2_conf.ini


[ml2]
...
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
...
tunnel_id_ranges = 1:1000

[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
...
local_ip = 10.0.1.31
enable_tunneling = True

[agent]
...
tunnel_types = gre


D. Start the OVS service

systemctl enable openvswitch.service

systemctl start openvswitch.service

E. Point Nova at Neutron

Edit the compute node's
/etc/nova/nova.conf


[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[neutron]
...
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS


6.3.3. Start and Verify the Neutron Services

A. Start the services

a. Link the plugin:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

b. Back up and patch the systemd unit file:

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
/usr/lib/systemd/system/neutron-openvswitch-agent.service.orig

sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
/usr/lib/systemd/system/neutron-openvswitch-agent.service

c. Start the networking services:

systemctl restart openstack-nova-compute.service

systemctl enable neutron-openvswitch-agent.service

systemctl start neutron-openvswitch-agent.service

B. Verify the installation

Run on the controller:

source keystonerc_admin


neutron agent-list




6.4. Create a Test Network

6.4.1. External Network (controller)

Run these commands on the controller; source keystonerc_admin first.

A. Create the network

Create a new network:

neutron net-create ext-net --router:external True \
--provider:physical_network external --provider:network_type flat


B. Create a subnet

Create a subnet:

The floating IP pool is 192.168.8.205 to 192.168.8.210, the gateway is 192.168.8.254, and the CIDR is 192.168.8.0/24.

neutron subnet-create ext-net --name ext-subnet2 \
--allocation-pool start=192.168.8.205,end=192.168.8.210 \
--disable-dhcp --gateway 192.168.8.254 192.168.8.0/24
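As a quick cross-check, the allocation pool above spans six addresses; a throwaway sketch that counts them from the start/end last octets:

```shell
# Count the addresses in the floating IP pool 192.168.8.205-210.
start=192.168.8.205
end=192.168.8.210

first=${start##*.}   # last octet of the pool start (205)
last=${end##*.}      # last octet of the pool end (210)
echo "pool size: $((last - first + 1))"
```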


6.4.2. Tenant Network

Verify that an ordinary tenant can create its own network:

source keystonerc_demo


A. Create the network

Create a tenant network (DHCP is enabled by default):

neutron net-create demo-net

B. Create the tenant subnet

Create the tenant subnet:

The network and mask are up to you (a gateway and CIDR are required):

neutron subnet-create demo-net --name demo-subnet \
--gateway 192.168.1.1 192.168.1.0/24


C. Create a router linking the external and tenant networks

a. Create the tenant router:

neutron router-create demo-router

b. Attach the tenant subnet:

neutron router-interface-add demo-router demo-subnet

c. Set the external network as the gateway:

neutron router-gateway-set demo-router ext-net

6.4.3. Verify the Network

Ping the gateway, 192.168.8.254.

7. Install the Horizon Dashboard

Horizon is installed on the controller node.

7.1. Install and Configure Horizon

7.1.1. Install the Horizon packages

yum install openstack-dashboard httpd mod_wsgi memcached python-memcached -y

7.1.2. Configure Horizon

Edit
/etc/openstack-dashboard/local_settings


a. Set the controller host:

OPENSTACK_HOST = "controller"

b. Allow access from all hosts:

ALLOWED_HOSTS = ['*']

c. Configure memcached sessions:

c. 配置memcached session

CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}


d. Set the time zone (optional):


TIME_ZONE = "CST"

7.2. Enable Horizon

On CentOS, allow httpd to make network connections:

setsebool -P httpd_can_network_connect on

Fix ownership:

chown -R apache:apache /usr/share/openstack-dashboard/static

Start the services:

systemctl enable httpd.service memcached.service

systemctl start httpd.service memcached.service

7.3. Verify

Horizon's logs live in /var/log/httpd/; tail -f <file> is handy for watching them.

From a host with a browser on the same network, open http://192.168.8.202/dashboard

Log in with the credentials you put in keystonerc_admin or keystonerc_demo.

At this point, the core OpenStack environment is complete.

8. Cinder Block Storage Service

Cinder needs one extra storage node.

8.1. Configure Cinder on the Controller

8.1.1. Keystone Database and Service

A. Create the database

a. Log in to the database:

mysql -u root -p

b. Create the cinder database (the password is CINDER_DBPASS):

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller' \
IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';


Exit the database.


B. Connect to the Keystone service

a. Source the admin credentials:

source keystonerc_admin

b. Create the cinder user, service, and endpoints (both API versions are needed):

keystone user-create --name cinder --pass CINDER_PASS --email cinder@admin.com
keystone user-role-add --user cinder --tenant service --role admin

keystone service-create --name cinder --type volume --description "OpenStack Block Storage"
keystone service-create --name cinderv2 --type volumev2 --description "OpenStack Block Storage"

keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volume / {print $2}') \
--publicurl http://controller:8776/v1/%\(tenant_id\)s \
--internalurl http://controller:8776/v1/%\(tenant_id\)s \
--adminurl http://controller:8776/v1/%\(tenant_id\)s \
--region regionOne

keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volumev2 / {print $2}') \
--publicurl http://controller:8776/v2/%\(tenant_id\)s \
--internalurl http://controller:8776/v2/%\(tenant_id\)s \
--adminurl http://controller:8776/v2/%\(tenant_id\)s \
--region regionOne


8.1.2. Install and Configure Cinder

A. Install the Cinder components

yum install openstack-cinder python-cinderclient python-oslo-db -y

B. Configure Cinder

Edit
/etc/cinder/cinder.conf


[database]
...
connection = mysql://cinder:CINDER_DBPASS@controller/cinder

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS

auth_strategy = keystone
my_ip = 10.0.0.11
verbose = True

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS


Sync the database:

su -s /bin/sh -c "cinder-manage db sync" cinder

Start the services:

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

8.2. Configure Cinder on the Storage Node

8.2.1. Environment Preparation

A. Hostname and IP

This mirrors Part 1, so only the checklist is given here:

Configure the VM's NICs and IPs.

Set the hostname, and add the storage node's hostname, block1, to the name resolution of the other VMs.

Configure the package repositories.

Set NTP and the time zone.

B. LVM setup

a. Install LVM (often already installed):

yum install lvm2

b. Start the LVM services:

systemctl enable lvm2-lvmetad.service

systemctl start lvm2-lvmetad.service

c. Create the LVM physical volume (if the sdb1 partition does not exist, create it with
fdisk /dev/sdb
: press n, then p, accept the defaults with Enter, then w to save and exit):

pvcreate /dev/sdb1

d. Create the LVM volume group:

vgcreate cinder-volumes /dev/sdb1

e. Edit the /etc/lvm/lvm.conf file:

[devices]
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
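The filter is evaluated per device, top-down: the first matching a/ (accept) or r/ (reject) pattern wins, so sda and sdb are accepted and everything else falls through to the final reject-all. A rough sketch of that evaluation logic (a toy model, not the actual LVM code):

```shell
# Toy model of LVM filter evaluation: first match wins.
# Patterns mirror filter = [ "a/sda/", "a/sdb/", "r/.*/" ].
lvm_filter() {
    case "$1" in
        *sda*) echo accept ;;   # a/sda/
        *sdb*) echo accept ;;   # a/sdb/
        *)     echo reject ;;   # r/.*/
    esac
}

lvm_filter /dev/sda    # accept
lvm_filter /dev/sdb1   # accept
lvm_filter /dev/sdc    # reject
```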


8.2.2. Install and Configure Cinder

A. Install

yum install openstack-cinder targetcli python-oslo-db MySQL-python -y

B. Configure Cinder

Edit
/etc/cinder/cinder.conf


[database]
...
connection = mysql://cinder:CINDER_DBPASS@controller/cinder

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS

auth_strategy = keystone
my_ip = 10.0.0.41
verbose = True

glance_host = controller
iscsi_helper = lioadm

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS


Start the services:

systemctl enable openstack-cinder-volume.service target.service

systemctl start openstack-cinder-volume.service target.service

8.3. Verify the Installation (controller)

Run on the controller node.

source keystonerc_admin


cinder service-list


source keystonerc_demo


cinder create --display-name demo-volume1 1


cinder list


9. Ceilometer Telemetry Service

9.1. Configure Ceilometer on the Controller

9.1.1. Create and Configure the Database

Install MongoDB:

yum install mongodb-server mongodb

Configure MongoDB:

Edit
/etc/mongod.conf


a. Set bind_ip:

bind_ip = 10.0.0.11

b. Limit the journal size (the default is 1G, under /var/lib/mongodb/journal):

smallfiles=true

c. Start the service:

systemctl enable mongod.service

systemctl start mongod.service

Create the Ceilometer database:

mongo --host controller --eval '
db = db.getSiblingDB("ceilometer");
db.addUser({user: "ceilometer",
pwd: "CEILOMETER_DBPASS",
roles: [ "readWrite", "dbAdmin" ]})'


Create the Ceilometer keystone user, service, and endpoint:

source keystonerc_admin

keystone user-create --name ceilometer --pass CEILOMETER_PASS --email ceilometer@admin.com
keystone user-role-add --user ceilometer --tenant service --role admin

keystone service-create --name ceilometer --type metering --description "Telemetry"

keystone endpoint-create \
--service-id $(keystone service-list | awk '/ metering / {print $2}') \
--publicurl http://controller:8777 \
--internalurl http://controller:8777 \
--adminurl http://controller:8777 \
--region regionOne


9.1.2. Install and Configure Ceilometer

Install the packages:

yum install openstack-ceilometer-api openstack-ceilometer-collector openstack-ceilometer-notification openstack-ceilometer-central openstack-ceilometer-alarm python-ceilometerclient

Generate a metering secret:

openssl rand -hex 10 → f38dc8796751c37b3447

Edit the
/etc/ceilometer/ceilometer.conf
file:

[database]
...
connection = mongodb://ceilometer:CEILOMETER_DBPASS@controller:27017/ceilometer

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
auth_strategy = keystone
verbose=true

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = CEILOMETER_PASS

[service_credentials]
...
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = CEILOMETER_PASS

[publisher]
...
metering_secret = f38dc8796751c37b3447


Start the services:

systemctl enable openstack-ceilometer-api.service
systemctl enable openstack-ceilometer-notification.service
systemctl enable openstack-ceilometer-central.service
systemctl enable openstack-ceilometer-collector.service
systemctl enable openstack-ceilometer-alarm-evaluator.service
systemctl enable openstack-ceilometer-alarm-notifier.service

systemctl start openstack-ceilometer-api.service
systemctl start openstack-ceilometer-notification.service
systemctl start openstack-ceilometer-central.service
systemctl start openstack-ceilometer-collector.service
systemctl start openstack-ceilometer-alarm-evaluator.service
systemctl start openstack-ceilometer-alarm-notifier.service


9.2. Configure Ceilometer on the Compute Nodes

Run the following on every compute node.

Install the packages:

yum install openstack-ceilometer-compute python-ceilometerclient python-pecan

Edit the configuration file:

vim /etc/ceilometer/ceilometer.conf


[publisher]
metering_secret = f38dc8796751c37b3447

[DEFAULT]
rabbit_host = controller
rabbit_password = RABBIT_PASS
verbose=True

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = CEILOMETER_PASS

[service_credentials]
...
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = CEILOMETER_PASS
os_endpoint_type = internalURL
os_region_name = regionOne


vim /etc/nova/nova.conf


[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2


Enable the services:

systemctl enable openstack-ceilometer-compute.service

systemctl start openstack-ceilometer-compute.service

systemctl restart openstack-nova-compute.service

9.3. Configure Ceilometer for the Image Service

Run the following on the controller.

Edit both of these configuration files:
/etc/glance/glance-api.conf
/etc/glance/glance-registry.conf


[DEFAULT]
notification_driver = messagingv2
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS


Restart the services:

systemctl restart openstack-glance-api.service openstack-glance-registry.service

9.4. Configure Ceilometer for Block Storage

Run the following on both the controller node and the block1 node.

Edit the configuration file:

vim /etc/cinder/cinder.conf


[DEFAULT]
control_exchange = cinder
notification_driver = messagingv2


Restart the services:

On the controller:

systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service

On block1:

systemctl restart openstack-cinder-volume.service

9.5. Verify the Ceilometer Installation

Run on the controller.

Source the credentials:

source keystonerc_admin

List the meters:

ceilometer meter-list



Download an image (to generate a metered event):

glance image-download "cirros-0.3.3-x86_64" > cirros.img

List the meters again:

ceilometer meter-list



Inspect the statistics:

ceilometer statistics -m image.download -p 60