Delete Duplicate OpenStack Hypervisors and Services
2016-02-01 11:41
http://thornelabs.net/2014/08/03/delete-duplicate-openstack-hypervisors-and-services.html
If you ever change the hostname of any of your OpenStack nodes and restart the OpenStack services on those nodes, the services are going to re-register to the OpenStack cluster under the new hostname. Because of this, when you run nova hypervisor-list, nova service-list, neutron agent-list, or cinder service-list, you are going to have duplicate entries.
Unfortunately, there are no commands to clean up the duplicate entries, so you have to modify the various OpenStack databases by hand.
Before making any changes, it would be wise to back up all of the OpenStack databases. This can be done with the following command (see the mysqldump documentation for more information):
mysqldump --all-databases > /root/openstack-databases.sql
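If you prefer per-database dumps that are easier to restore selectively, a small sketch like the following can generate them. This is a hypothetical helper, not part of the original post: it only prints the mysqldump commands (pipe the output to sh to actually run them), the database list is an assumption that may differ on your install, and it relies on the same passwordless MySQL root access used throughout this post.

```shell
# Hypothetical helper: print one mysqldump command per OpenStack database,
# with a timestamp in the filename so repeated backups do not overwrite
# each other. Nothing is executed; pipe the output to `sh` to take the
# backups. Assumes passwordless MySQL root access, as elsewhere in this post.
backup_cmds() {
    stamp=$(date +%Y%m%d-%H%M%S)
    # Adjust this list to match the databases present on your install.
    for db in nova neutron cinder keystone glance; do
        printf 'mysqldump -u root %s > /root/%s-%s.sql\n' "$db" "$db" "$stamp"
    done
}
backup_cmds
```

Running `backup_cmds | sh` would then take the dumps, and restoring a single service's database later is just `mysql -u root nova < /root/nova-<stamp>.sql`.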
As mentioned by Dave Johnston in the comments below, OpenStack Juno has begun introducing commands to do what is described in this post, so you do not have to manually modify the database.
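For example, on Juno and later the stale nova-compute row can be removed by id through the CLI rather than SQL. The sketch below only prints the commands, since running them needs a live cloud; the service id (7) is hypothetical and should be read from your own service list output.

```shell
# Dry run only: print the Juno-era CLI commands rather than executing them,
# since they require a live cloud. The service id (7) is hypothetical; read
# the real id of the down nova-compute service from `nova service-list`.
juno_cleanup_cmds() {
    echo 'nova service-list'
    echo 'nova service-delete 7'
}
juno_cleanup_cmds
```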
Delete Duplicate Nova Hypervisors
Identify that you have duplicate hypervisor entries by running nova hypervisor-list:
root@controller1:~# nova hypervisor-list
+----+---------------------+
| ID | Hypervisor hostname |
+----+---------------------+
| 1  | compute1            |
| 3  | compute1.local      |
+----+---------------------+
Log into your OpenStack controller node as root and login to the MySQL command line as the root user:
mysql -u root
Select the nova database:
USE nova;
Run the following SQL command to identify what hypervisors have similar hostnames:
mysql> SELECT id, created_at, updated_at, hypervisor_hostname FROM compute_nodes;
+----+---------------------+---------------------+---------------------+
| id | created_at          | updated_at          | hypervisor_hostname |
+----+---------------------+---------------------+---------------------+
|  1 | 2014-08-03 19:37:12 | 2014-08-03 19:47:33 | compute1            |
|  3 | 2014-08-03 19:47:35 | 2014-08-03 20:02:35 | compute1.local      |
+----+---------------------+---------------------+---------------------+
2 rows in set (0.00 sec)
In this scenario, compute1 was the original hostname and compute1.local is the new hostname. compute1 is the entry you will want to delete.
First, make sure the new hostname's nova-compute service is registering as up. If you run the nova service-list command, nova-compute on compute1 should be down and nova-compute on compute1.local should be up.
root@controller1:~# nova service-list
+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host           | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
| nova-scheduler   | controller1    | internal | enabled | up    | 2014-08-03T19:49:16.000000 | None            |
| nova-conductor   | controller1    | internal | enabled | up    | 2014-08-03T19:49:16.000000 | None            |
| nova-cert        | controller1    | internal | enabled | up    | 2014-08-03T19:49:09.000000 | None            |
| nova-consoleauth | controller1    | internal | enabled | up    | 2014-08-03T19:49:11.000000 | None            |
| nova-compute     | compute1       | nova     | enabled | down  | 2014-08-03T19:47:34.000000 | None            |
| nova-compute     | compute1.local | nova     | enabled | up    | 2014-08-03T19:49:15.000000 | None            |
+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
If this is the case, and the updated_at time for compute1 in the SQL output above is older than compute1.local, you can safely delete the compute1 entry
from the various tables in the nova database.
First, get the id of compute1. In this scenario it is simply 1.
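If you would rather capture that id non-interactively (useful when scripting this cleanup), the same SELECT can be run through the mysql client's -N and -e flags, which suppress the column header and execute a single statement. The helper below is hypothetical and only builds the SQL string; the commented line shows how it would be fed to mysql under this post's passwordless-root assumption.

```shell
# Hypothetical helper: build the lookup SQL for a given old hostname,
# mirroring the SELECT on compute_nodes shown above. Feeding it to the
# mysql client with -N (skip column names) and -e (execute one statement)
# would return the bare id.
node_id_sql() {
    printf "SELECT id FROM compute_nodes WHERE hypervisor_hostname='%s';" "$1"
}
node_id_sql compute1; echo
# id=$(mysql -u root nova -N -e "$(node_id_sql compute1)")
```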
Delete the duplicate hypervisor from the nova database by running the following SQL commands:
DELETE FROM compute_node_stats WHERE compute_node_id='1';
DELETE FROM compute_nodes WHERE hypervisor_hostname='compute1';
In addition, and as seen above, there are going to be duplicate entries in the Nova services table. The duplicate entry can be deleted by running the following SQL command:
DELETE FROM services WHERE host='compute1';
nova hypervisor-list should now have no duplicate entries:
root@controller1:~# nova hypervisor-list
+----+---------------------+
| ID | Hypervisor hostname |
+----+---------------------+
| 3  | compute1.local      |
+----+---------------------+
nova service-list should also now have no duplicate entries:
root@controller1:~# nova service-list
+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host           | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
| nova-scheduler   | controller1    | internal | enabled | up    | 2014-08-03T20:29:37.000000 | None            |
| nova-conductor   | controller1    | internal | enabled | up    | 2014-08-03T20:29:36.000000 | None            |
| nova-cert        | controller1    | internal | enabled | up    | 2014-08-03T20:29:31.000000 | None            |
| nova-consoleauth | controller1    | internal | enabled | up    | 2014-08-03T20:29:33.000000 | None            |
| nova-compute     | compute1.local | nova     | enabled | up    | 2014-08-03T20:29:36.000000 | None            |
+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
Delete Duplicate Neutron Services
Even though you have deleted the duplicate Nova hypervisors and services, OpenStack compute nodes typically run some Neutron services as well (in most cases just the Open vSwitch agent). If you run neutron agent-list, you will most likely have duplicate entries for the Open vSwitch agent:
root@controller1:~# neutron agent-list
+--------------------------------------+--------------------+----------------+-------+----------------+
| id                                   | agent_type         | host           | alive | admin_state_up |
+--------------------------------------+--------------------+----------------+-------+----------------+
| 5a0296d1-18db-4b49-948c-15577b1116c1 | L3 agent           | controller1    | :-)   | True           |
| 70b694e3-e27a-418e-aeb7-ce2c0a521717 | Open vSwitch agent | compute1.local | :-)   | True           |
| a15b145d-ef9a-4fae-8368-072a938e329d | DHCP agent         | controller1    | :-)   | True           |
| da8a56c9-a117-45c0-b4c7-e46f66eeec01 | Open vSwitch agent | compute1       | xxx   | True           |
| edc42804-2f52-4cc7-a29d-1f09eb911643 | Open vSwitch agent | controller1    | :-)   | True           |
+--------------------------------------+--------------------+----------------+-------+----------------+
The Open vSwitch agent service for compute1.local is up, so you can safely delete the duplicate entry for compute1 from the Neutron database.
Again, log into your OpenStack controller node as root and login to the MySQL command line as the root user:
mysql -u root
Select the neutron database:
USE neutron;
Delete the duplicate Neutron service from the agents table by running the following SQL command:
DELETE FROM agents WHERE host='compute1';
neutron agent-list should now have no duplicate entries:
root@controller1:~# neutron agent-list
+--------------------------------------+--------------------+----------------+-------+----------------+
| id                                   | agent_type         | host           | alive | admin_state_up |
+--------------------------------------+--------------------+----------------+-------+----------------+
| 5a0296d1-18db-4b49-948c-15577b1116c1 | L3 agent           | controller1    | :-)   | True           |
| 70b694e3-e27a-418e-aeb7-ce2c0a521717 | Open vSwitch agent | compute1.local | :-)   | True           |
| a15b145d-ef9a-4fae-8368-072a938e329d | DHCP agent         | controller1    | :-)   | True           |
| edc42804-2f52-4cc7-a29d-1f09eb911643 | Open vSwitch agent | controller1    | :-)   | True           |
+--------------------------------------+--------------------+----------------+-------+----------------+
Delete Duplicate Cinder Services
If you happen to have a Cinder node and also changed its hostname, there will be duplicate entries in the Cinder services table:
root@controller1:~# cinder service-list
+------------------+---------------+------+---------+-------+----------------------------+
| Binary           | Host          | Zone | Status  | State | Updated_at                 |
+------------------+---------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller1   | nova | enabled | up    | 2014-08-03T19:49:04.000000 |
| cinder-volume    | cinder1       | nova | enabled | down  | 2014-08-03T19:48:53.000000 |
| cinder-volume    | cinder1.local | nova | enabled | up    | 2014-08-03T19:49:08.000000 |
+------------------+---------------+------+---------+-------+----------------------------+
The cinder-volume service for cinder1.local is up, so you can safely delete the duplicate entry for cinder1 from the Cinder database.
Once again, log into your OpenStack controller node as root and login to the MySQL command line as the root user:
mysql -u root
Select the cinder database:
USE cinder;
Delete the duplicate Cinder service from the services table by running the following SQL command:
DELETE FROM services WHERE host='cinder1';
cinder service-list should now have no duplicate entries:
root@controller1:~# cinder service-list
+------------------+---------------+------+---------+-------+----------------------------+
| Binary           | Host          | Zone | Status  | State | Updated_at                 |
+------------------+---------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller1   | nova | enabled | up    | 2014-08-03T20:31:47.000000 |
| cinder-volume    | cinder1.local | nova | enabled | up    | 2014-08-03T20:31:51.000000 |
+------------------+---------------+------+---------+-------+----------------------------+
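If you have renamed several nodes, the individual DELETE statements from this post can be generated by one small script. This is a sketch under the same assumptions as the rest of the post: passwordless MySQL root access, the table layout shown earlier, exactly one compute_nodes row per old hostname, and all three databases (nova, neutron, cinder) present. It only prints the SQL so you can review it before piping it into mysql; the subquery replaces the manual id lookup done earlier.

```shell
# Sketch: generate every DELETE statement from this post for one stale
# hostname. Nothing is executed; review the printed SQL, then pipe it into
# `mysql -u root`. The subquery assumes exactly one compute_nodes row
# matches the old hostname; DELETEs that match nothing are harmless.
cleanup_sql() {
    old="$1"
    cat <<EOF
USE nova;
DELETE FROM compute_node_stats WHERE compute_node_id=(SELECT id FROM compute_nodes WHERE hypervisor_hostname='$old');
DELETE FROM compute_nodes WHERE hypervisor_hostname='$old';
DELETE FROM services WHERE host='$old';
USE neutron;
DELETE FROM agents WHERE host='$old';
USE cinder;
DELETE FROM services WHERE host='$old';
EOF
}
cleanup_sql compute1
```

Once the output looks right, `cleanup_sql compute1 | mysql -u root` would remove every duplicate created by renaming compute1; a Cinder node would use its own old hostname (cinder1 in this post's scenario).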
References
How to remove compute node from havana
Manage services