[Deployment Part 4] Building OpenStack on VMware: Installing Nova on the Controller Node
2015-04-11 15:38
I. An OpenStack deployment generally needs at least two physical machines, or nodes: the controller node and the compute node introduced earlier. The controller node usually does not require much hardware, while the compute node does. In practice, however, both nodes often have identical hardware, so we also install Nova on the controller node to put its resources to use.
II. Installing the Compute Service on the Controller Node
1. Conventions: Nova stores its data in MySQL with the following parameters:
Database: nova
User: novadbadmin
Password: nova4smtest
2. Install the Compute service packages
sudo apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient
3. Edit /etc/nova/nova.conf to set the database, message queue, and IP parameters:
[ sudo vi /etc/nova/nova.conf ]
Update the settings as follows:
#-----------nova config1---------------------------------------
[database]
connection = mysql://novadbadmin:nova4smtest@192.168.3.180/nova
[DEFAULT]
rpc_backend = rabbit
rabbit_host = 192.168.3.180
rabbit_userid = guest
rabbit_password = mq4smtest
rabbit_port = 5672
# IP of the controller node
my_ip = 192.168.3.180
vncserver_listen = 192.168.3.180
vncserver_proxyclient_address = 192.168.3.180
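To avoid typos when hand-editing, the step-3 settings can first be staged in a scratch file and sanity-checked before being merged into /etc/nova/nova.conf. This is only a convenience sketch; the /tmp path is an arbitrary choice.

```shell
# Stage the step-3 settings in a scratch file for review before
# merging them into /etc/nova/nova.conf by hand.
cat > /tmp/nova-step3.conf <<'EOF'
[database]
connection = mysql://novadbadmin:nova4smtest@192.168.3.180/nova

[DEFAULT]
rpc_backend = rabbit
rabbit_host = 192.168.3.180
rabbit_userid = guest
rabbit_password = mq4smtest
rabbit_port = 5672
my_ip = 192.168.3.180
vncserver_listen = 192.168.3.180
vncserver_proxyclient_address = 192.168.3.180
EOF

# Quick sanity check: every expected key must be present.
for key in connection rpc_backend rabbit_host my_ip vncserver_listen; do
    grep -q "^$key = " /tmp/nova-step3.conf || echo "missing: $key"
done
```

If the loop prints nothing, all expected keys are present.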
4. Remove the default SQLite database
sudo rm /var/lib/nova/nova.sqlite
5. Create the database and account, and grant privileges
sudo mysql -uroot -p#db4smtest# -e 'CREATE DATABASE nova;'
sudo mysql -uroot -p#db4smtest# -e 'CREATE USER novadbadmin;'
sudo mysql -uroot -p#db4smtest# -e "GRANT ALL PRIVILEGES ON nova.* TO 'novadbadmin'@'localhost' IDENTIFIED BY 'nova4smtest';"
sudo mysql -uroot -p#db4smtest# -e "GRANT ALL PRIVILEGES ON nova.* TO 'novadbadmin'@'%' IDENTIFIED BY 'nova4smtest';"
sudo mysql -uroot -p#db4smtest# -e "SET PASSWORD FOR 'novadbadmin'@'%' = PASSWORD('nova4smtest');"
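The separate mysql invocations above can also be collected into one SQL script that is reviewed before it touches the database. A sketch; the /tmp filename is an arbitrary choice:

```shell
# Same statements as above, batched into one reviewable script.
# Run it afterwards with: sudo mysql -uroot -p < /tmp/nova-db.sql
cat > /tmp/nova-db.sql <<'EOF'
CREATE DATABASE nova;
CREATE USER novadbadmin;
GRANT ALL PRIVILEGES ON nova.* TO 'novadbadmin'@'localhost' IDENTIFIED BY 'nova4smtest';
GRANT ALL PRIVILEGES ON nova.* TO 'novadbadmin'@'%' IDENTIFIED BY 'nova4smtest';
SET PASSWORD FOR 'novadbadmin'@'%' = PASSWORD('nova4smtest');
EOF

wc -l < /tmp/nova-db.sql   # prints 5: one statement per line
```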
6. Create the Compute service tables
sudo nova-manage db sync
sm@controller:~$ sudo nova-manage db sync
2015-04-11 14:14:37.517 7018 INFO migrate.versioning.api [-] 215 -> 216...
2015-04-11 14:14:41.130 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.130 7018 INFO migrate.versioning.api [-] 216 -> 217...
2015-04-11 14:14:41.133 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.134 7018 INFO migrate.versioning.api [-] 217 -> 218...
2015-04-11 14:14:41.137 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.137 7018 INFO migrate.versioning.api [-] 218 -> 219...
2015-04-11 14:14:41.142 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.142 7018 INFO migrate.versioning.api [-] 219 -> 220...
2015-04-11 14:14:41.146 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.146 7018 INFO migrate.versioning.api [-] 220 -> 221...
2015-04-11 14:14:41.150 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.150 7018 INFO migrate.versioning.api [-] 221 -> 222...
2015-04-11 14:14:41.153 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.153 7018 INFO migrate.versioning.api [-] 222 -> 223...
2015-04-11 14:14:41.156 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.156 7018 INFO migrate.versioning.api [-] 223 -> 224...
2015-04-11 14:14:41.158 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.159 7018 INFO migrate.versioning.api [-] 224 -> 225...
2015-04-11 14:14:41.162 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.162 7018 INFO migrate.versioning.api [-] 225 -> 226...
2015-04-11 14:14:41.166 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.166 7018 INFO migrate.versioning.api [-] 226 -> 227...
2015-04-11 14:14:41.172 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.172 7018 INFO migrate.versioning.api [-] 227 -> 228...
2015-04-11 14:14:41.205 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.205 7018 INFO migrate.versioning.api [-] 228 -> 229...
2015-04-11 14:14:41.232 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.232 7018 INFO migrate.versioning.api [-] 229 -> 230...
2015-04-11 14:14:41.279 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.280 7018 INFO migrate.versioning.api [-] 230 -> 231...
2015-04-11 14:14:41.324 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.324 7018 INFO migrate.versioning.api [-] 231 -> 232...
2015-04-11 14:14:41.480 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.481 7018 INFO migrate.versioning.api [-] 232 -> 233...
2015-04-11 14:14:41.521 7018 INFO migrate.versioning.api [-] done
2015-04-11 14:14:41.522 7018 INFO migrate.versioning.api [-] 233 -> 234...
2015-04-11 14:14:41.541 7018 INFO migrate.versioning.api [-] done
7. Create the Compute service account and assign the admin role
keystone user-create --name=nova --pass=nova4smtest --email=sm@163.com
sm@controller:~$ keystone user-create --name=nova --pass=nova4smtest --email=sm@163.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |            sm@163.com            |
| enabled  |               True               |
|    id    | 375634b4a4194df6ba136a36bc2a6e68 |
|   name   |               nova               |
| username |               nova               |
+----------+----------------------------------+
keystone user-role-add --user=nova --tenant=service --role=admin
8. Edit /etc/nova/nova.conf again to set the authentication parameters:
[ sudo vi /etc/nova/nova.conf ]
Update the settings as follows:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://192.168.3.180:5000
auth_host = 192.168.3.180
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova4smtest
Double-check the resulting nova.conf:
sm@controller:~$ sudo more /etc/nova/nova.conf
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
rabbit_host = 192.168.3.180
rabbit_userid = guest
rabbit_password = mq4smtest
rabbit_port = 5672
my_ip = 192.168.3.180
vncserver_listen = 192.168.3.180
vncserver_proxyclient_address = 192.168.3.180
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://192.168.3.180:5000
auth_host = 192.168.3.180
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova4smtest
[database]
connection = mysql://novadbadmin:nova4smtest@192.168.3.180/nova
9. Register the service
keystone service-create --name=nova --type=compute --description="OpenStack Compute Service"
sm@controller:~$ keystone service-create --name=nova --type=compute --description="OpenStack Compute Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    OpenStack Compute Service     |
|   enabled   |               True               |
|      id     | af57e640e51f4a55b2afab9f3b734c2a |
|     name    |               nova               |
|     type    |             compute              |
+-------------+----------------------------------+
10. Create the service endpoint
keystone endpoint-create --service-id=$(keystone service-list | awk '/ compute / {print $2}') --publicurl=http://192.168.3.180:8774/v2/%\(tenant_id\)s --internalurl=http://192.168.3.180:8774/v2/%\(tenant_id\)s --adminurl=http://192.168.3.180:8774/v2/%\(tenant_id\)s
sm@controller:~$ keystone endpoint-create --service-id=$(keystone service-list | awk '/ compute / {print $2}') --publicurl=http://192.168.3.180:8774/v2/%\(tenant_id\)s --internalurl=http://192.168.3.180:8774/v2/%\(tenant_id\)s --adminurl=http://192.168.3.180:8774/v2/%\(tenant_id\)s
+-------------+--------------------------------------------+
|   Property  |                   Value                    |
+-------------+--------------------------------------------+
|   adminurl  | http://192.168.3.180:8774/v2/%(tenant_id)s |
|      id     |      0e5527a14148441bb9b02f43ad301c63      |
| internalurl | http://192.168.3.180:8774/v2/%(tenant_id)s |
|  publicurl  | http://192.168.3.180:8774/v2/%(tenant_id)s |
|    region   |                 regionOne                  |
|  service_id |      af57e640e51f4a55b2afab9f3b734c2a      |
+-------------+--------------------------------------------+
11. Restart the services
sudo service nova-api restart
sudo service nova-cert restart
sudo service nova-consoleauth restart
sudo service nova-scheduler restart
sudo service nova-conductor restart
sudo service nova-novncproxy restart
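The six restart commands differ only in the service name, so a loop keeps them consistent. The sketch below generates a reviewable script rather than restarting anything directly; the /tmp path is an arbitrary choice.

```shell
# Generate a restart script covering all six controller-side services;
# review it, then run it with: sudo sh /tmp/restart-nova.sh
for svc in nova-api nova-cert nova-consoleauth \
           nova-scheduler nova-conductor nova-novncproxy; do
    echo "service $svc restart"
done > /tmp/restart-nova.sh

cat /tmp/restart-nova.sh   # one restart command per service
```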
sm@controller:~$ sudo service nova-api restart
nova-api stop/waiting
nova-api start/running, process 7122
sm@controller:~$ sudo service nova-cert restart
nova-cert stop/waiting
nova-cert start/running, process 7140
sm@controller:~$ sudo service nova-consoleauth restart
nova-consoleauth stop/waiting
nova-consoleauth start/running, process 7164
sm@controller:~$ sudo service nova-scheduler restart
nova-scheduler stop/waiting
nova-scheduler start/running, process 7182
sm@controller:~$ sudo service nova-conductor restart
nova-conductor stop/waiting
nova-conductor start/running, process 7209
sm@controller:~$ sudo service nova-novncproxy restart
nova-novncproxy stop/waiting
nova-novncproxy start/running, process 7249
12. Verify the Compute service status
1) sudo nova-manage service list
sm@controller:~$ sudo nova-manage service list
Binary            Host        Zone      Status   State  Updated_At
nova-cert         controller  internal  enabled  :-)    2015-04-11 06:19:22
nova-scheduler    controller  internal  enabled  :-)    2015-04-11 06:19:23
nova-consoleauth  controller  internal  enabled  :-)    2015-04-11 06:19:23
nova-conductor    controller  internal  enabled  :-)    2015-04-11 06:19:23
If every component shows a smiley (:-)), Nova has been installed correctly.
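Scanning the table by eye does not scale once more hosts join, so the State column can be checked mechanically. The sketch below runs on a fabricated saved sample of the output (a down service shows XXX instead of the smiley); in practice, pipe the real sudo nova-manage service list into the same awk filter.

```shell
# Fabricated sample of nova-manage service list output.
cat > /tmp/service-list.txt <<'EOF'
Binary            Host        Zone      Status   State  Updated_At
nova-cert         controller  internal  enabled  :-)    2015-04-11 06:19:22
nova-scheduler    controller  internal  enabled  XXX    2015-04-11 06:19:23
EOF

# Print every binary whose State column (field 5) is not the smiley.
awk 'NR > 1 && $5 != ":-)" { print $1 " is down" }' /tmp/service-list.txt
```

On this sample the filter prints "nova-scheduler is down".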
2) nova image-list
sm@controller:~$ nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 6dbf0884-e9bb-406e-9f77-32aef79dd976 | cirros-0.3.2-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+
An image is available, so the service is ready to serve requests.
13. Log files
Location: /var/log/nova/
Files:
/var/log/nova/nova-api.log
/var/log/nova/nova-cert.log
/var/log/nova/nova-conductor.log
/var/log/nova/nova-consoleauth.log
/var/log/nova/nova-manage.log
/var/log/nova/nova-scheduler.log
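When a service fails to start, grepping those logs for ERROR entries is usually the fastest diagnosis. Sketched here on fabricated sample logs so it is self-contained; on the real node, point the same grep at /var/log/nova/*.log.

```shell
# Fabricated sample logs standing in for /var/log/nova/*.log.
mkdir -p /tmp/nova-logs
printf '2015-04-11 INFO nova started\n2015-04-11 ERROR AMQP server unreachable\n' \
    > /tmp/nova-logs/nova-api.log
printf '2015-04-11 INFO scheduler started\n' > /tmp/nova-logs/nova-scheduler.log

# One ERROR count per file; non-zero counts deserve a closer look.
grep -c ERROR /tmp/nova-logs/*.log
```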
Note: if the controller node should also act as a compute node, repeat the first part of Deployment Parts 5, 8, and 9 on the controller node.