OpenStack: Analysis of Several Issues in Nova
2014-02-12 10:09
1. How two nova commands execute:
nova service-list
+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
| nova-scheduler | openstack2 | internal | enabled | up | 2014-02-12T02:17:31.000000 | None |
| nova-conductor | openstack2 | internal | enabled | up | 2014-02-12T02:17:32.000000 | None |
| nova-cert | openstack2 | internal | enabled | up | 2014-02-12T02:17:31.000000 | None |
| nova-consoleauth | openstack2 | internal | enabled | up | 2014-02-12T02:17:31.000000 | None |
| nova-compute | openstack2 | nova | enabled | up | 2014-02-12T02:17:33.000000 | None |
| nova-network | openstack2 | internal | enabled | up | 2014-02-12T02:17:29.000000 | None |
| nova-compute | openstack-1 | nova | enabled | up | 2014-02-12T02:17:27.000000 | None |
| nova-network | openstack-1 | internal | enabled | up | 2014-02-12T02:17:32.000000 | None |
+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
nova host-list
+--------------+-------------+----------+
| host_name | service | zone |
+--------------+-------------+----------+
| openstack2 | scheduler | internal |
| openstack2 | conductor | internal |
| openstack2 | cert | internal |
| openstack2 | consoleauth | internal |
| openstack2 | compute | nova |
| openstack2 | network | internal |
| openstack-1 | compute | nova |
| openstack-1 | network | internal |
+--------------+-------------+----------+
So how does the controller node learn the state of each node, and how does it know which services are running on each node?
The nova-api service exposes the corresponding REST APIs. The HostController provides, among others, the following methods (excerpted, truncated parts marked with # ...):

class HostController(object):
    def __init__(self):
        self.api = compute.HostAPI()
        super(HostController, self).__init__()

    @wsgi.serializers(xml=HostIndexTemplate)
    def index(self, req):
        # ...
        services = self.api.service_get_all(context, filters=filters,
                                            set_zones=True)
        for service in services:
            hosts.append({'host_name': service['host'],
                          'service': service['topic'],
                          'zone': service['availability_zone']})
        return {'hosts': hosts}

    @wsgi.serializers(xml=HostShowTemplate)
    def show(self, req, id):
        context = req.environ['nova.context']
        host_name = id
        try:
            service = self.api.service_get_by_compute_host(context, host_name)
        # ...

    def update(self, req, id, body):
        # ...
        return result

The ServiceController provides the following methods (excerpted):

class ServiceController(object):
    def __init__(self, ext_mgr=None, *args, **kwargs):
        self.host_api = compute.HostAPI()
        self.servicegroup_api = servicegroup.API()
        self.ext_mgr = ext_mgr

    def _get_services(self, req):
        # ...
        services = self.host_api.service_get_all(context, set_zones=True)

    def index(self, req):
        detailed = self.ext_mgr.is_loaded('os-extended-services')
        services = self._get_services_list(req, detailed)
        return {'services': services}

    @wsgi.deserializers(xml=ServiceUpdateDeserializer)
    @wsgi.serializers(xml=ServiceUpdateTemplate)
    def update(self, req, id, body):
        try:
            self.host_api.service_update(context, host, binary,
                                         status_detail)
        # ...
        return ret_value

From this we can see that both the query and the update operations on hosts and services are delegated to methods on HostAPI. Looking at service_get_all and compute_node_get:

def service_get_all(self, context, filters=None, set_zones=False):
    # ...
    services = service_obj.ServiceList.get_all(context, disabled,
                                               set_zones=set_zones)
    # ...
    return ret_services

def compute_node_get(self, context, compute_id):
    return self.db.compute_node_get(context, int(compute_id))

Both are ultimately implemented through database queries. Note that each row in the services table already carries the host information, which is why nova host-list can be answered from the same table.
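The Status column in nova service-list comes from the disabled flag stored in the services table, while the State column (up/down) is computed at query time by the servicegroup API: with the default DB driver, a service counts as up when its heartbeat timestamp is recent enough (CONF.service_down_time, 60 seconds by default). A minimal sketch of that check (the helper name and the row layout here are illustrative, not Nova's actual code):

```python
import datetime

# CONF.service_down_time defaults to 60 seconds in Nova; hard-coded for the sketch.
SERVICE_DOWN_TIME = 60

def service_is_up(service, now=None):
    """Mimic the DB servicegroup driver: a service counts as 'up' when its
    last heartbeat (updated_at) is no older than SERVICE_DOWN_TIME."""
    if now is None:
        now = datetime.datetime.utcnow()
    last_heartbeat = service['updated_at'] or service['created_at']
    elapsed = (now - last_heartbeat).total_seconds()
    return abs(elapsed) <= SERVICE_DOWN_TIME

# Example: a heartbeat 10 seconds old reads as up, one 5 minutes old as down.
now = datetime.datetime(2014, 2, 12, 2, 17, 40)
fresh = {'updated_at': now - datetime.timedelta(seconds=10), 'created_at': None}
stale = {'updated_at': now - datetime.timedelta(minutes=5), 'created_at': None}
print(service_is_up(fresh, now))  # True
print(service_is_up(stale, now))  # False
```

This is also why a crashed service keeps showing enabled but flips to down: nothing rewrites the row; the timestamp simply goes stale.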
So when are these service and host records created?
Let's first look at what each service does when it starts up.
# Entry point of nova-scheduler
def main():
    config.parse_args(sys.argv)
    logging.setup("nova")
    utils.monkey_patch()
    server = service.Service.create(binary='nova-scheduler',
                                    topic=CONF.scheduler_topic)
    service.serve(server)
    service.wait()

# nova/service.py
class Service(service.Service):
    def __init__(self, host, binary, topic, manager, report_interval=None,
                 periodic_enable=None, periodic_fuzzy_delay=None,
                 periodic_interval_max=None, db_allowed=True,
                 *args, **kwargs):
        super(Service, self).__init__()
        self.host = host
        self.binary = binary
        self.topic = topic
        self.manager_class_name = manager
        self.servicegroup_api = servicegroup.API(db_allowed=db_allowed)
        manager_class = importutils.import_class(self.manager_class_name)
        self.manager = manager_class(host=self.host, *args, **kwargs)
        self.report_interval = report_interval
        self.periodic_enable = periodic_enable
        self.periodic_fuzzy_delay = periodic_fuzzy_delay
        self.periodic_interval_max = periodic_interval_max
        self.saved_args, self.saved_kwargs = args, kwargs
        self.backdoor_port = None
        self.conductor_api = conductor.API(use_local=db_allowed)
        self.conductor_api.wait_until_ready(context.get_admin_context())

    def start(self):
        verstr = version.version_string_with_package()
        LOG.audit(_('Starting %(topic)s node (version %(version)s)'),
                  {'topic': self.topic, 'version': verstr})
        self.basic_config_check()
        self.manager.init_host()
        self.model_disconnected = False
        ctxt = context.get_admin_context()
        try:
            self.service_ref = self.conductor_api.service_get_by_args(
                ctxt, self.host, self.binary)
            self.service_id = self.service_ref['id']
        except exception.NotFound:
            self.service_ref = self._create_service_ref(ctxt)

        self.manager.pre_start_hook()

        if self.backdoor_port is not None:
            self.manager.backdoor_port = self.backdoor_port

        self.conn = rpc.create_connection(new=True)
        LOG.debug(_("Creating Consumer connection for Service %s") %
                  self.topic)
        rpc_dispatcher = self.manager.create_rpc_dispatcher(self.backdoor_port)

        # Share this same connection for these Consumers
        self.conn.create_consumer(self.topic, rpc_dispatcher, fanout=False)
        node_topic = '%s.%s' % (self.topic, self.host)
        self.conn.create_consumer(node_topic, rpc_dispatcher, fanout=False)
        self.conn.create_consumer(self.topic, rpc_dispatcher, fanout=True)

        # Consume from all consumers in a thread
        self.conn.consume_in_thread()
        self.manager.post_start_hook()

        LOG.debug(_("Join ServiceGroup membership for this service %s") %
                  self.topic)
        # Add service to the ServiceGroup membership group.
        self.servicegroup_api.join(self.host, self.topic, self)

        if self.periodic_enable:
            if self.periodic_fuzzy_delay:
                initial_delay = random.randint(0, self.periodic_fuzzy_delay)
            else:
                initial_delay = None
            self.tg.add_dynamic_timer(
                self.periodic_tasks,
                initial_delay=initial_delay,
                periodic_interval_max=self.periodic_interval_max)

    def _create_service_ref(self, context):
        svc_values = {
            'host': self.host,
            'binary': self.binary,
            'topic': self.topic,
            'report_count': 0
        }
        service = self.conductor_api.service_create(context, svc_values)
        self.service_id = service['id']
        return service

    def __getattr__(self, key):
        manager = self.__dict__.get('manager', None)
        return getattr(manager, key)

    @classmethod
    def create(cls, host=None, binary=None, topic=None, manager=None,
               report_interval=None, periodic_enable=None,
               periodic_fuzzy_delay=None, periodic_interval_max=None,
               db_allowed=True):
        if not host:
            host = CONF.host
        if not binary:
            binary = os.path.basename(sys.argv[0])
        if not topic:
            topic = binary.rpartition('nova-')[2]
        if not manager:
            manager_cls = ('%s_manager' % binary.rpartition('nova-')[2])
            manager = CONF.get(manager_cls, None)
        if report_interval is None:
            report_interval = CONF.report_interval
        if periodic_enable is None:
            periodic_enable = CONF.periodic_enable
        if periodic_fuzzy_delay is None:
            periodic_fuzzy_delay = CONF.periodic_fuzzy_delay

        service_obj = cls(host, binary, topic, manager,
                          report_interval=report_interval,
                          periodic_enable=periodic_enable,
                          periodic_fuzzy_delay=periodic_fuzzy_delay,
                          periodic_interval_max=periodic_interval_max,
                          db_allowed=db_allowed)
        return service_obj

    def kill(self):
        """Destroy the service object in the datastore."""
        self.stop()
        try:
            self.conductor_api.service_destroy(context.get_admin_context(),
                                               self.service_id)
        except exception.NotFound:
            LOG.warn(_('Service killed that has no database entry'))
On startup, the service checks whether a matching record already exists in the database; if not, it creates one.
Structure of the services table:
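As a rough sketch (the column set is approximated from Havana-era Nova and simplified; the real table also carries created_at/updated_at and soft-delete bookkeeping columns), the services table and the check-then-create logic from Service.start() can be modeled like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified sketch of the services table.
conn.execute("""
    CREATE TABLE services (
        id INTEGER PRIMARY KEY,
        host TEXT,
        "binary" TEXT,
        topic TEXT,
        report_count INTEGER DEFAULT 0,
        disabled INTEGER DEFAULT 0,
        disabled_reason TEXT
    )""")

def service_get_or_create(conn, host, binary, topic):
    """Mimic Service.start(): look the record up by (host, binary) and
    create it only when it does not exist yet (_create_service_ref)."""
    row = conn.execute(
        'SELECT id FROM services WHERE host=? AND "binary"=?',
        (host, binary)).fetchone()
    if row:
        return row[0]
    cur = conn.execute(
        'INSERT INTO services (host, "binary", topic, report_count) '
        "VALUES (?, ?, ?, 0)", (host, binary, topic))
    return cur.lastrowid

first = service_get_or_create(conn, "openstack2", "nova-scheduler", "scheduler")
again = service_get_or_create(conn, "openstack2", "nova-scheduler", "scheduler")
print(first == again)  # True: a restart reuses the existing row
```

This matches what the original article observes: restarting a service does not add rows, it only refreshes the existing one.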
2. How does the HostManager in the scheduler manage and schedule compute nodes?
It does so by querying the compute_node table in the database directly.
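A minimal illustration of that flow (not Nova's actual code; the row fields and the RAM-based selection are simplified stand-ins for HostManager.get_all_host_states plus the filter/weigher pipeline):

```python
# Fake compute_node rows as the scheduler would read them from the DB.
compute_nodes = [
    {"host": "openstack2",  "free_ram_mb": 2048, "vcpus": 8, "vcpus_used": 6},
    {"host": "openstack-1", "free_ram_mb": 4096, "vcpus": 8, "vcpus_used": 2},
]

def get_all_host_states(rows):
    """Turn raw compute_node rows into per-host state dicts,
    roughly what HostManager builds for the filters/weighers."""
    return {row["host"]: {"free_ram_mb": row["free_ram_mb"],
                          "free_vcpus": row["vcpus"] - row["vcpus_used"]}
            for row in rows}

def pick_host(states, ram_mb_needed):
    """RamFilter + RamWeigher in miniature: drop hosts that cannot fit
    the request, then prefer the host with the most free RAM."""
    fits = {h: s for h, s in states.items()
            if s["free_ram_mb"] >= ram_mb_needed}
    return max(fits, key=lambda h: fits[h]["free_ram_mb"])

states = get_all_host_states(compute_nodes)
print(pick_host(states, ram_mb_needed=1024))  # openstack-1
```

The key point is unchanged from the article: the scheduler never talks to compute nodes directly when ranking them; it works from the usage snapshot that each nova-compute has written into compute_node.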
3. When is the data in the compute_node table created and updated?
The nova-compute service updates the compute_node table through a periodic task (a method decorated with @periodic_task.periodic_task); the data it writes mainly describes the compute node's current resource usage.
Structure of the compute_node table:
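A stripped-down sketch of that periodic-task machinery (the decorator, class, and runner names here are simplified stand-ins, not the real oslo/nova implementation, and the "table" is just a dict):

```python
def periodic_task(fn):
    """Minimal stand-in for nova's @periodic_task.periodic_task:
    tag the method so a runner can discover and schedule it."""
    fn._periodic = True
    return fn

class FakeComputeManager(object):
    """Sketch of how nova-compute refreshes compute_node usage data."""
    def __init__(self):
        # Stands in for this host's row in the compute_node table.
        self.compute_node = {"vcpus_used": 0, "memory_mb_used": 0}

    @periodic_task
    def update_available_resource(self, context=None):
        # In real Nova this asks the hypervisor driver for current usage
        # and writes it back to the compute_node table.
        self.compute_node["vcpus_used"] = 4
        self.compute_node["memory_mb_used"] = 2048

def run_periodic_tasks(manager):
    """Invoke every method tagged as a periodic task once; Nova's
    service framework does this on a timer (tg.add_dynamic_timer)."""
    for name in dir(manager):
        fn = getattr(manager, name)
        if callable(fn) and getattr(fn, "_periodic", False):
            fn()

mgr = FakeComputeManager()
run_periodic_tasks(mgr)
print(mgr.compute_node)  # {'vcpus_used': 4, 'memory_mb_used': 2048}
```

Together with question 2 this closes the loop: nova-compute periodically pushes usage into compute_node, and the scheduler's HostManager reads that table when placing new instances.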