
heartbeat_check in oslo_messaging

2016-11-08 13:18
I have recently been testing OpenStack controller-node high availability (three controllers). When one of the controller nodes is shut down, nova service-list shows every nova service as down, and the nova-compute log fills up with errors like these:
2016-11-08 03:46:23.887 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.275 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.276 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.276 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.277 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.277 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.278 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.278 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe


The exception above was traced to oslo_messaging/_drivers/impl_rabbit.py:
def _heartbeat_thread_job(self):
    """Thread that maintains inactive connections
    """
    while not self._heartbeat_exit_event.is_set():
        with self._connection_lock.for_heartbeat():

            recoverable_errors = (
                self.connection.recoverable_channel_errors +
                self.connection.recoverable_connection_errors)

            try:
                try:
                    self._heartbeat_check()
                    # NOTE(sileht): We need to drain event to receive
                    # heartbeat from the broker but don't hold the
                    # connection too much times. In amqpdriver a connection
                    # is used exclusivly for read or for write, so we have
                    # to do this for connection used for write drain_events
                    # already do that for other connection
                    try:
                        self.connection.drain_events(timeout=0.001)
                    except socket.timeout:
                        pass
                except recoverable_errors as exc:
                    LOG.info(_LI("A recoverable connection/channel error "
                                 "occurred, trying to reconnect: %s"), exc)
                    self.ensure_connection()
            except Exception:
                LOG.warning(_LW("Unexpected error during heartbeart "
                                "thread processing, retrying..."))
                LOG.debug('Exception', exc_info=True)

        self._heartbeat_exit_event.wait(
            timeout=self._heartbeat_wait_timeout)
    self._heartbeat_exit_event.clear()
The heartbeat check exists to detect whether the connection between a service and the RabbitMQ server is still alive, and oslo_messaging starts the heartbeat_check task in the background as soon as the service starts. Shutting down a controller node also shuts down one of the RabbitMQ server nodes. The heartbeat thread then keeps spinning in the while loop, repeatedly hitting the exceptions caught by recoverable_errors, and it only leaves the loop once self._heartbeat_exit_event.is_set() becomes true. Arguably there should be some kind of timeout or retry cap here, so the thread does not sit in this loop for several minutes before things recover; a rough sketch of that idea follows.
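For illustration only, here is a minimal sketch of that idea at the kombu level. This is not oslo_messaging code; heartbeat_loop and MAX_CONSECUTIVE_FAILURES are hypothetical names. The point is simply that the loop gives up after a bounded number of consecutive recoverable failures instead of retrying forever:

import socket

MAX_CONSECUTIVE_FAILURES = 5   # hypothetical cap on back-to-back heartbeat failures

def heartbeat_loop(connection, exit_event, wait_timeout=15.0):
    """Heartbeat a kombu connection, but stop retrying after a fixed
    number of consecutive recoverable failures.  exit_event is a
    threading.Event used to shut the loop down cleanly.
    """
    failures = 0
    while not exit_event.is_set():
        try:
            connection.heartbeat_check()
            # Drain briefly so heartbeats sent by the broker are actually read.
            try:
                connection.drain_events(timeout=0.001)
            except socket.timeout:
                pass
            failures = 0                       # healthy again, reset the counter
        except connection.recoverable_connection_errors:
            failures += 1
            if failures >= MAX_CONSECUTIVE_FAILURES:
                # Bail out instead of looping forever; the caller can decide
                # whether to rebuild the connection against another node.
                raise
            connection.ensure_connection(max_retries=1)
        exit_event.wait(timeout=wait_timeout)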

Today I deployed the three-controller HA setup in virtual machines and added the following options to nova.conf:
[oslo_messaging_rabbit]
rabbit_max_retries = 2           # maximum number of reconnection attempts
heartbeat_timeout_threshold = 0  # disables the heartbeat check
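For context, this is roughly how the driver decides whether to run the heartbeat thread at all, paraphrased from the same impl_rabbit.py; treat it as a sketch, since exact names and details vary between oslo.messaging releases. With heartbeat_timeout_threshold set to 0, the thread shown earlier is simply never started:

def _heartbeat_supported_and_enabled(self):
    # heartbeat_timeout_threshold = 0 means heartbeats are disabled,
    # so the heartbeat thread is never created.
    if self.driver_conf.heartbeat_timeout_threshold <= 0:
        return False
    return self.connection.supports_heartbeats

def _heartbeat_start(self):
    if self._heartbeat_supported_and_enabled():
        self._heartbeat_exit_event = threading.Event()
        self._heartbeat_thread = threading.Thread(
            target=self._heartbeat_thread_job)
        self._heartbeat_thread.daemon = True
        self._heartbeat_thread.start()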

In testing, nova-compute no longer keeps throwing the exceptions caught by recoverable_errors, and nova service-list no longer shows all services as down.
This still needs to be verified on physical machines.
Tags: oslo messaging