
OpenStack Nova: Architecture and Source Code Analysis


I. Nova Architecture

Nova is the core service of OpenStack: it maintains and manages the compute resources of the cloud environment, and the entire lifecycle of an instance is handled by Nova.

1.1 nova-api

Receives and responds to clients' API calls.

1.2 compute core

nova-scheduler

Decides which compute node a virtual machine will run on.

nova-compute

Manages the lifecycle of virtual machines by driving the hypervisor. Normally runs on each compute node.

hypervisor

The software that performs hardware virtualization for virtual machines, e.g. KVM or VMware.

nova-conductor

nova-compute needs to update the database constantly (for example, to record virtual machine state changes); for security and scalability, this database access is performed indirectly through nova-conductor.

1.3 database

Nova needs to persist some of its data, so a database, usually MySQL, is installed on the controller node.

1.4 Message Queue

Used for communication between Nova's sub-services, thereby decoupling them; RabbitMQ is typically used.
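Over this queue Nova uses two RPC patterns, both of which appear later in this article: an asynchronous "cast" (fire and forget) and a synchronous "call" (block for the reply). A minimal stdlib-only sketch of the idea (the function names here are illustrative, not oslo.messaging's real API):

```python
import queue
import threading

# Illustrative sketch of the two RPC styles used over Nova's message
# queue: "cast" returns immediately (asynchronous, no reply), while
# "call" blocks until the server side replies (synchronous).

request_q = queue.Queue()

def server_loop():
    # Stand-in for a sub-service consuming RPC messages from the queue.
    while True:
        method, args, reply_q = request_q.get()
        result = f'handled {method}({args})'
        if reply_q is not None:      # a "call": the client is waiting
            reply_q.put(result)

threading.Thread(target=server_loop, daemon=True).start()

def cast(method, args):
    """Asynchronous RPC: enqueue the message and return at once."""
    request_q.put((method, args, None))

def call(method, args):
    """Synchronous RPC: enqueue the message and wait for the reply."""
    reply_q = queue.Queue()
    request_q.put((method, args, reply_q))
    return reply_q.get()

cast('build_and_run_instance', {'host': 'node1'})  # fire and forget
print(call('select_destinations', {'vcpus': 2}))   # blocks for the result
```

The broker decouples the caller from the callee: nova-api can cast a message and keep serving requests while nova-conductor processes it at its own pace.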

II. Source Code Analysis of Instance Creation


1. Execution flow in the nova-api process:

a. nova:api:openstack:compute:servers.py:ServersController:create():

Parses the instance-related data from the req and body of the user's API request, such as the flavor (inst_type), image id (image_uuid), availability zone (availability_zone), forced host and node (forced_host, forced_node), metadata (metadata), and requested networks (requested_networks); it then calls nova:compute:api.py:API:create() to actually start creating the instance, and finally returns the response to the user.

    def create(self, req, body):
        """Creates a new server for a given user."""
        context = req.environ['nova.context']
        server_dict = body['server']
        password = self._get_server_admin_password(server_dict)
        name = common.normalize_name(server_dict['name'])
        description = name
        if api_version_request.is_supported(req, min_version='2.19'):
            description = server_dict.get('description')

        # Arguments to be passed to instance create function
        create_kwargs = {}

        # TODO(alex_xu): This is for back-compatible with stevedore
        # extension interface. But the final goal is that merging
        # all of extended code into ServersController.
        self._create_by_func_list(server_dict, create_kwargs, body)

        availability_zone = server_dict.pop("availability_zone", None)

        if api_version_request.is_supported(req, min_version='2.52'):
            create_kwargs['tags'] = server_dict.get('tags')

        helpers.translate_attributes(helpers.CREATE,
                                     server_dict, create_kwargs)

        target = {
            'project_id': context.project_id,
            'user_id': context.user_id,
            'availability_zone': availability_zone}
        context.can(server_policies.SERVERS % 'create', target)

        # TODO(Shao He, Feng) move this policy check to os-availability-zone
        # extension after refactor it.
        parse_az = self.compute_api.parse_availability_zone
        try:
            availability_zone, host, node = parse_az(context,
                                                     availability_zone)
        except exception.InvalidInput as err:
            raise exc.HTTPBadRequest(explanation=six.text_type(err))
        if host or node:
            context.can(server_policies.SERVERS % 'create:forced_host', {})

        # NOTE(danms): Don't require an answer from all cells here, as
        # we assume that if a cell isn't reporting we won't schedule into
        # it anyway. A bit of a gamble, but a reasonable one.
        min_compute_version = service_obj.get_minimum_version_all_cells(
            nova_context.get_admin_context(), ['nova-compute'])
        supports_device_tagging = (min_compute_version >=
                                   DEVICE_TAGGING_MIN_COMPUTE_VERSION)

        block_device_mapping = create_kwargs.get("block_device_mapping")
        # TODO(Shao He, Feng) move this policy check to os-block-device-mapping
        # extension after refactor it.
        if block_device_mapping:
            context.can(server_policies.SERVERS % 'create:attach_volume',
                        target)
            for bdm in block_device_mapping:
                if bdm.get('tag', None) and not supports_device_tagging:
                    msg = _('Block device tags are not yet supported.')
                    raise exc.HTTPBadRequest(explanation=msg)

        image_uuid = self._image_from_req_data(server_dict, create_kwargs)

        # NOTE(cyeoh): Although upper layer can set the value of
        # return_reservation_id in order to request that a reservation
        # id be returned to the client instead of the newly created
        # instance information we do not want to pass this parameter
        # to the compute create call which always returns both. We use
        # this flag after the instance create call to determine what
        # to return to the client
        return_reservation_id = create_kwargs.pop('return_reservation_id',
                                                  False)

        requested_networks = server_dict.get('networks', None)

        if requested_networks is not None:
            requested_networks = self._get_requested_networks(
                requested_networks, supports_device_tagging)

        # Skip policy check for 'create:attach_network' if there is no
        # network allocation request.
        if requested_networks and len(requested_networks) and \
                not requested_networks.no_allocate:
            context.can(server_policies.SERVERS % 'create:attach_network',
                        target)

        flavor_id = self._flavor_id_from_req_data(body)
        try:
            inst_type = flavors.get_flavor_by_flavor_id(
                    flavor_id, ctxt=context, read_deleted="no")

            supports_multiattach = common.supports_multiattach_volume(req)
            (instances, resv_id) = self.compute_api.create(context,
                            inst_type,
                            image_uuid,
                            display_name=name,
                            display_description=description,
                            availability_zone=availability_zone,
                            forced_host=host, forced_node=node,
                            metadata=server_dict.get('metadata', {}),
                            admin_password=password,
                            requested_networks=requested_networks,
                            check_server_group_quota=True,
                            supports_multiattach=supports_multiattach,
                            **create_kwargs)

        ......

        # If the caller wanted a reservation_id, return it
        if return_reservation_id:
            return wsgi.ResponseObject({'reservation_id': resv_id})

        req.cache_db_instances(instances)
        server = self._view_builder.create(req, instances[0])

        if CONF.api.enable_instance_password:
            server['server']['adminPass'] = password

        robj = wsgi.ResponseObject(server)

        return self._add_location(robj)

b. nova:compute:api.py:API:create():

? ? ? ? 這個函數(shù)檢查是否指定IP和端口,是否有可用主機聚合以及生成過濾器屬性,最后調(diào)用_create_instance()函數(shù)。

    def create(self, context, instance_type,
               image_href, kernel_id=None, ramdisk_id=None,
               min_count=None, max_count=None,
               display_name=None, display_description=None,
               key_name=None, key_data=None, security_groups=None,
               availability_zone=None, forced_host=None, forced_node=None,
               user_data=None, metadata=None, injected_files=None,
               admin_password=None, block_device_mapping=None,
               access_ip_v4=None, access_ip_v6=None, requested_networks=None,
               config_drive=None, auto_disk_config=None, scheduler_hints=None,
               legacy_bdm=True, shutdown_terminate=False,
               check_server_group_quota=False, tags=None,
               supports_multiattach=False):
        if requested_networks and max_count is not None and max_count > 1:
            self._check_multiple_instances_with_specified_ip(
                requested_networks)
            if utils.is_neutron():
                self._check_multiple_instances_with_neutron_ports(
                    requested_networks)

        if availability_zone:
            available_zones = availability_zones.\
                get_availability_zones(context.elevated(), True)
            if forced_host is None and availability_zone not in \
                    available_zones:
                msg = _('The requested availability zone is not available')
                raise exception.InvalidRequest(msg)

        filter_properties = scheduler_utils.build_filter_properties(
                scheduler_hints, forced_host, forced_node, instance_type)

        return self._create_instance(
                       context, instance_type,
                       image_href, kernel_id, ramdisk_id,
                       min_count, max_count,
                       display_name, display_description,
                       key_name, key_data, security_groups,
                       availability_zone, user_data, metadata,
                       injected_files, admin_password,
                       access_ip_v4, access_ip_v6,
                       requested_networks, config_drive,
                       block_device_mapping, auto_disk_config,
                       filter_properties=filter_properties,
                       legacy_bdm=legacy_bdm,
                       shutdown_terminate=shutdown_terminate,
                       check_server_group_quota=check_server_group_quota,
                       tags=tags, supports_multiattach=supports_multiattach)

c. nova:compute:api.py:API:_create_instance():

? ? ? ? 這個函數(shù)主要的代碼包含了三個部分:1.通過調(diào)用_provision_instances()函數(shù)將虛擬機參數(shù)寫入到數(shù)據(jù)庫之中;2.如果創(chuàng)建了域,則調(diào)用build_instances()函數(shù);3.如果沒有創(chuàng)建域,則調(diào)用schedule_and_build_instances()函數(shù)。

    def _create_instance(self, context, instance_type,
               image_href, kernel_id, ramdisk_id,
               min_count, max_count,
               display_name, display_description,
               key_name, key_data, security_groups,
               availability_zone, user_data, metadata, injected_files,
               admin_password, access_ip_v4, access_ip_v6,
               requested_networks, config_drive,
               block_device_mapping, auto_disk_config, filter_properties,
               reservation_id=None, legacy_bdm=True, shutdown_terminate=False,
               check_server_group_quota=False, tags=None,
               supports_multiattach=False):
        ......

        instances_to_build = self._provision_instances(
            context, instance_type, min_count, max_count, base_options,
            boot_meta, security_groups, block_device_mapping,
            shutdown_terminate, instance_group, check_server_group_quota,
            filter_properties, key_pair, tags, supports_multiattach)

        instances = []
        request_specs = []
        build_requests = []
        for rs, build_request, im in instances_to_build:
            build_requests.append(build_request)
            instance = build_request.get_new_instance(context)
            instances.append(instance)
            request_specs.append(rs)

        if CONF.cells.enable:
            # NOTE(danms): CellsV1 can't do the new thing, so we
            # do the old thing here. We can remove this path once
            # we stop supporting v1.
            for instance in instances:
                instance.create()
            # NOTE(melwitt): We recheck the quota after creating the objects
            # to prevent users from allocating more resources than their
            # allowed quota in the event of a race. This is configurable
            # because it can be expensive if strict quota limits are not
            # required in a deployment.
            if CONF.quota.recheck_quota:
                try:
                    compute_utils.check_num_instances_quota(
                        context, instance_type, 0, 0,
                        orig_num_req=len(instances))
                except exception.TooManyInstances:
                    with excutils.save_and_reraise_exception():
                        # Need to clean up all the instances we created
                        # along with the build requests, request specs,
                        # and instance mappings.
                        self._cleanup_build_artifacts(instances,
                                                      instances_to_build)

            self.compute_task_api.build_instances(context,
                instances=instances, image=boot_meta,
                filter_properties=filter_properties,
                admin_password=admin_password,
                injected_files=injected_files,
                requested_networks=requested_networks,
                security_groups=security_groups,
                block_device_mapping=block_device_mapping,
                legacy_bdm=False)
        else:
            self.compute_task_api.schedule_and_build_instances(
                context,
                build_requests=build_requests,
                request_spec=request_specs,
                image=boot_meta,
                admin_password=admin_password,
                injected_files=injected_files,
                requested_networks=requested_networks,
                block_device_mapping=block_device_mapping,
                tags=tags)

        return instances, reservation_id

Let's first analyze the first part of _create_instance(): the _provision_instances() function.

? ? ? ? 該函數(shù)主要建立了四張表:

req_spec: the request spec needed to schedule the instance; stored in the request_specs table of the nova_api database.
instance: the instance's own data; eventually stored in the nova database.
build_request: while an instance is being created, nova-api does not yet write it to the instances table of the nova database; the data is kept in the build_requests table of the nova_api database instead.
inst_mapping: the mapping of the instance to its cell; stored in the instance_mappings table of the nova_api database.
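As a rough illustration only (plain dataclasses standing in for Nova's real versioned objects; the field choices are assumptions), the four records can be pictured like this:

```python
from dataclasses import dataclass
from typing import Optional
import uuid

# Illustrative stand-ins for the four records _provision_instances()
# creates per requested instance; not Nova's actual object model.

@dataclass
class RequestSpec:            # nova_api DB, request_specs table
    flavor: str
    image_uuid: str

@dataclass
class Instance:               # will eventually land in the nova (cell) DB
    uuid: str
    vm_state: str = 'building'
    task_state: str = 'scheduling'

@dataclass
class BuildRequest:           # nova_api DB, build_requests table
    instance: Instance

@dataclass
class InstanceMapping:        # nova_api DB, instance_mappings table
    instance_uuid: str
    cell_id: Optional[int] = None   # filled once scheduling picks a cell

def provision(flavor, image_uuid):
    inst = Instance(uuid=str(uuid.uuid4()))
    return (RequestSpec(flavor, image_uuid),
            BuildRequest(inst),
            InstanceMapping(inst.uuid))

spec, build_req, mapping = provision('m1.small', 'abc-123')
print(build_req.instance.vm_state, build_req.instance.task_state)
```

Note that the instance data rides inside the build request until scheduling decides on a cell; only then is the instance written to that cell's database and the mapping filled in.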

Finally, let's look at the third part: schedule_and_build_instances(), which starts the instance scheduling process.

d. nova:conductor:api.py:ComputeTaskAPI:schedule_and_build_instances()

????????該函數(shù)調(diào)用了nova:conductor:rpcapi.py:ComputeTaskAPI:schedule_and_build_instances()函數(shù),此rpcapi.py下的schedule_and_build_instances()函數(shù)又封裝了nova-api所產(chǎn)生的參數(shù),并且進行RPC異步調(diào)用,注意由于是異步調(diào)用,nova-api會立即返回,繼續(xù)響應用戶的API請求,從此刻開始,由conductor來接收RPC消息來繼續(xù)進行虛擬機的調(diào)度過程。

Throughout the steps above, vm_state is building and task_state is scheduling. Concretely, populate_instance_for_create() in the API class of nova/compute/api.py sets vm_state to BUILDING and task_state to SCHEDULING in the instance record, marking the instance as being scheduled.

populate_instance_for_create() is invoked from _provision_instances() when the instance record is created.

2. Execution flow in the nova-conductor process

nova:conductor:manager.py:ComputeTaskManager:schedule_and_build_instances():

? ? ? ? nova-conductor進程調(diào)用該函數(shù)接收nova-api發(fā)送的RPC消息,該函數(shù)主要調(diào)用了_schedule _instances()函數(shù),_schedule_instances()函數(shù)又調(diào)用了nova: scheduler:client:_init_.py:SchedulerClient:select_destinations()函數(shù),該函數(shù)又調(diào)用了nova:scheduler:client:query.py:select_destinati ons()函數(shù),最后又調(diào)用了nova: scheduler:rpcapi.py: SchedulerAPI:select_destinations()函數(shù),于是又到了RPC調(diào)用環(huán)節(jié),不過該函數(shù)采用的是RPC同步調(diào)用,過程中會一直等待調(diào)用返回。此時,nova-scheduler進程接收到RPC消息,開始正式進行虛擬機調(diào)度過程。

    def schedule_and_build_instances(self, context, build_requests,
                                     request_specs, image,
                                     admin_password, injected_files,
                                     requested_networks, block_device_mapping,
                                     tags=None):
        ......
            with obj_target_cell(instance, cell) as cctxt:
                self.compute_rpcapi.build_and_run_instance(
                    cctxt, instance=instance, image=image,
                    request_spec=request_spec,
                    filter_properties=filter_props,
                    admin_password=admin_password,
                    injected_files=injected_files,
                    requested_networks=requested_networks,
                    security_groups=legacy_secgroups,
                    block_device_mapping=instance_bdms,
                    host=host.service_host, node=host.nodename,
                    limits=host.limits, host_list=host_list)

3. Execution flow in the nova-scheduler process

nova:scheduler:manager.py:SchedulerManager:select_destinations():

In this function the nova-scheduler process receives the RPC message by which nova-conductor asked it to schedule the instance. Internally it calls the driver's select_destinations(); the driver is effectively a pluggable scheduler. Setting scheduler_driver to filter_scheduler in nova.conf selects the filter scheduler (the alternatives are caching_scheduler, chance_scheduler, and fake_scheduler). The filter scheduler first removes compute nodes that fail any of the configured filters (also specified in nova.conf), then computes a weight for each remaining node and creates the instance on the node with the highest weight. The filter processing itself will be covered in the next article.

    def select_destinations(self, ctxt, request_spec=None,
            filter_properties=None, spec_obj=_sentinel, instance_uuids=None,
            return_objects=False, return_alternates=False):
        ......
        # Only return alternates if both return_objects and return_alternates
        # are True.
        return_alternates = return_alternates and return_objects
        selections = self.driver.select_destinations(ctxt, spec_obj,
                instance_uuids, alloc_reqs_by_rp_uuid, provider_summaries,
                allocation_request_version, return_alternates)
        ......
        return selections
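The filter-then-weigh flow just described can be reduced to a toy sketch (the RAM/disk filters and free-RAM weigher here are made up for illustration; Nova's real filters and weighers are pluggable classes configured in nova.conf):

```python
# Toy sketch of filter_scheduler's two phases: drop hosts that fail any
# filter, then rank the survivors by weight and pick the best one.
# The filter and weigher functions are illustrative, not Nova's.

hosts = [
    {'name': 'node1', 'free_ram_mb': 2048, 'free_disk_gb': 40},
    {'name': 'node2', 'free_ram_mb': 8192, 'free_disk_gb': 10},
    {'name': 'node3', 'free_ram_mb': 4096, 'free_disk_gb': 80},
]

def ram_filter(host, spec):
    return host['free_ram_mb'] >= spec['ram_mb']

def disk_filter(host, spec):
    return host['free_disk_gb'] >= spec['disk_gb']

def ram_weigher(host):
    return host['free_ram_mb']       # more free RAM -> higher weight

def select_destination(hosts, spec, filters, weigher):
    # Phase 1: keep only hosts that pass every filter.
    candidates = [h for h in hosts if all(f(h, spec) for f in filters)]
    if not candidates:
        raise RuntimeError('No valid host was found')
    # Phase 2: pick the candidate with the highest weight.
    return max(candidates, key=weigher)

spec = {'ram_mb': 4096, 'disk_gb': 20}
best = select_destination(hosts, spec, [ram_filter, disk_filter], ram_weigher)
print(best['name'])
```

Here node1 is filtered out on RAM and node2 on disk, so node3 is the only candidate and wins; with a smaller request, weighting would break the tie among multiple survivors.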

Once the target compute node has been selected, nova-scheduler returns it to nova-conductor (the RPC call was synchronous), and control returns to nova:conductor:manager.py:ComputeTaskManager:schedule_and_build_instances(), where the nova-conductor process continues.

4. Execution flow in the nova-conductor process (continued)

nova:conductor:manager.py:ComputeTaskManager:schedule_and_build_instances():

After some further processing in this function, nova-conductor calls nova:compute:rpcapi.py:ComputeAPI:build_and_run_instance(). That function makes the now-familiar RPC call, this time asynchronous again, to tell the nova-compute process to build the instance on the compute node where that process runs.

    def build_and_run_instance(self, ctxt, instance, host, image, request_spec,
            filter_properties, admin_password=None, injected_files=None,
            requested_networks=None, security_groups=None,
            block_device_mapping=None, node=None, limits=None,
            host_list=None):
        # NOTE(edleafe): compute nodes can only use the dict form of limits.
        if isinstance(limits, objects.SchedulerLimits):
            limits = limits.to_dict()
        kwargs = {"instance": instance,
                  "image": image,
                  "request_spec": request_spec,
                  "filter_properties": filter_properties,
                  "admin_password": admin_password,
                  "injected_files": injected_files,
                  "requested_networks": requested_networks,
                  "security_groups": security_groups,
                  "block_device_mapping": block_device_mapping,
                  "node": node,
                  "limits": limits,
                  "host_list": host_list,
                 }
        client = self.router.client(ctxt)
        version = self._ver(ctxt, '4.19')
        if not client.can_send_version(version):
            version = '4.0'
            kwargs.pop("host_list")
        cctxt = client.prepare(server=host, version=version)
        cctxt.cast(ctxt, 'build_and_run_instance', **kwargs)

5. Execution flow in the nova-compute process

nova:compute:manager.py:ComputeManager:build_and_run_instance():

? ? ? ? 該函數(shù)繼續(xù)調(diào)用_do_build_and_run_instance()函數(shù),該函數(shù)內(nèi)部會更新instance表中的vm_state的狀態(tài)為BUILDING(貌似沒變)以及task_state的狀態(tài)為none。

    def _do_build_and_run_instance(self, context, instance, image,
            request_spec, filter_properties, admin_password, injected_files,
            requested_networks, security_groups, block_device_mapping,
            node=None, limits=None, host_list=None):

        try:
            LOG.debug('Starting instance...', instance=instance)
            instance.vm_state = vm_states.BUILDING
            instance.task_state = None
            instance.save(expected_task_state=
                    (task_states.SCHEDULING, None))
......

Then _do_build_and_run_instance() calls _build_and_run_instance(), which in turn calls _build_resources() to allocate network and disk resources. Once the resources are allocated, task_state is updated to SPAWNING; the driver's spawn() is then called to actually create the instance (here the driver is libvirt.LibvirtDriver, i.e. the hypervisor driver, set via compute_driver in nova.conf; the same driver is used from here on). spawn() is the longest-running step. When it finishes, vm_state in the instance record becomes ACTIVE, task_state becomes None, and power_state becomes RUNNING. At this point the instance creation process is complete.

    def _build_and_run_instance(self, context, instance, image, injected_files,
            admin_password, requested_networks, security_groups,
            block_device_mapping, node, limits, filter_properties,
            request_spec=None):

        ......
                with self._build_resources(context, instance,
                        requested_networks, security_groups, image_meta,
                        block_device_mapping) as resources:
                    instance.vm_state = vm_states.BUILDING
                    instance.task_state = task_states.SPAWNING
                    # NOTE(JoshNang) This also saves the changes to the
                    # instance from _allocate_network_async, as they aren't
                    # saved in that function to prevent races.
                    instance.save(expected_task_state=
                            task_states.BLOCK_DEVICE_MAPPING)
                    block_device_info = resources['block_device_info']
                    network_info = resources['network_info']
                    allocs = resources['allocations']
                    LOG.debug('Start spawning the instance on the hypervisor.',
                              instance=instance)
                    with timeutils.StopWatch() as timer:
                        self.driver.spawn(context, instance, image_meta,
                                          injected_files, admin_password,
                                          allocs, network_info=network_info,
                                          block_device_info=block_device_info)
                    LOG.info('Took %0.2f seconds to spawn the instance on '
                             'the hypervisor.', timer.elapsed(),
                             instance=instance)
                    ......
        compute_utils.notify_about_instance_create(context, instance,
                self.host, phase=fields.NotificationPhase.END,
                bdms=block_device_mapping)

Next, let's look at what _build_resources() actually does: 1. it calls _build_networks_for_instance() to allocate network resources for the instance; internally this uses the driver to obtain a MAC address (the IP address is assigned by DHCP when the instance boots) and then calls _allocate_network() to allocate the network asynchronously, setting task_state to NETWORKING while vm_state stays unchanged; 2. before preparing block devices it calls the driver's prepare_networks_before_block_device_mapping() to configure the instance's network; 3. it sets task_state to BLOCK_DEVICE_MAPPING (vm_state unchanged) and calls _prep_block_device() to allocate block devices for the instance, which again relies on the driver.

    def _build_resources(self, context, instance, requested_networks,
                         security_groups, image_meta, block_device_mapping):
        resources = {}
        network_info = None
        try:
            LOG.debug('Start building networks asynchronously for instance.',
                      instance=instance)
            network_info = self._build_networks_for_instance(context, instance,
                    requested_networks, security_groups)
            resources['network_info'] = network_info
        ......

        try:
            # Depending on a virt driver, some network configuration is
            # necessary before preparing block devices.
            self.driver.prepare_networks_before_block_device_mapping(
                instance, network_info)

            # Verify that all the BDMs have a device_name set and assign a
            # default to the ones missing it with the help of the driver.
            self._default_block_device_names(instance, image_meta,
                                             block_device_mapping)

            LOG.debug('Start building block device mappings for instance.',
                      instance=instance)
            instance.vm_state = vm_states.BUILDING
            instance.task_state = task_states.BLOCK_DEVICE_MAPPING
            instance.save()

            block_device_info = self._prep_block_device(context, instance,
                    block_device_mapping)
            resources['block_device_info'] = block_device_info
        ......
                    raise exception.BuildAbortException(
                            instance_uuid=instance.uuid,
                            reason=six.text_type(exc))
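Pulling together the state changes described throughout this walkthrough, a successful boot moves through roughly the following (vm_state, task_state) pairs (an informal summary; the strings match Nova's vm_states/task_states constants, with power_state becoming RUNNING only at the end):

```python
# Informal summary of instance state transitions during a successful
# boot, as traced in this article; task_state None means "no task".
TRANSITIONS = [
    # (step,                      vm_state,   task_state)
    ('nova-api provisions',       'building', 'scheduling'),
    ('nova-compute starts build', 'building', None),
    ('allocate networks',         'building', 'networking'),
    ('prepare block devices',     'building', 'block_device_mapping'),
    ('driver.spawn()',            'building', 'spawning'),
    ('boot finished',             'active',   None),
]

for step, vm_state, task_state in TRANSITIONS:
    print(f'{step:25s} vm_state={vm_state:8s} task_state={task_state}')
```

Notice that vm_state stays building for the entire process; the fine-grained progress is carried by task_state, which is why operators watch task_state to see where a stuck boot stopped.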
