[Starlingx-discuss] Cannot allocate a VM - STX2.0-AIO-SX - Error500
Mariano Ucha
lw2dht at gmail.com
Sat Sep 28 01:13:14 UTC 2019
Hi Chris! Thank you for your answer.
I have an All-in-One Simplex installation. It's running on hardware with 128 GB of RAM and 2 sockets with 8 cores each.
Would you prefer that I create a Launchpad bug to keep the logs and track this?
Outputs:
> What does "virsh capabilities|grep -C5 pages" show on the compute node?
controller-0:~$ virsh capabilities|grep -C5 pages
<feature name='arat'/>
<feature name='ssbd'/>
<feature name='xsaveopt'/>
<feature name='pdpe1gb'/>
<feature name='invtsc'/>
<pages unit='KiB' size='4'/>
<pages unit='KiB' size='2048'/>
<pages unit='KiB' size='1048576'/>
</cpu>
<power_management>
<suspend_mem/>
</power_management>
<iommu support='yes'/>
--
</migration_features>
<topology>
<cells num='2'>
<cell id='0'>
<memory unit='KiB'>67073312</memory>
<pages unit='KiB' size='4'>16506184</pages>
<pages unit='KiB' size='2048'>0</pages>
<pages unit='KiB' size='1048576'>1</pages>
<distances>
<sibling id='0' value='10'/>
<sibling id='1' value='20'/>
</distances>
<cpus num='16'>
--
<cpu id='23' socket_id='0' core_id='7' siblings='7,23'/>
</cpus>
</cell>
<cell id='1'>
<memory unit='KiB'>67108860</memory>
<pages unit='KiB' size='4'>16515071</pages>
<pages unit='KiB' size='2048'>0</pages>
<pages unit='KiB' size='1048576'>1</pages>
<distances>
<sibling id='0' value='20'/>
<sibling id='1' value='10'/>
</distances>
<cpus num='16'>
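If I'm reading those counts right, each NUMA cell reports zero 2048 KiB (2 MiB) pages and only one 1048576 KiB (1 GiB) page. A quick way to double-check the live per-node pools, using the standard Linux sysfs paths (independent of libvirt):

controller-0:~$ grep . /sys/devices/system/node/node*/hugepages/hugepages-*/{nr,free}_hugepages

That prints the total (nr_hugepages) and free (free_hugepages) counts per page size per NUMA node.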
> What does /etc/nova/nova.conf show inside the nova-compute container?
controller-0:/home/sysadmin# docker exec -it 3171e2560f0e cat /etc/nova/nova.conf
[DEFAULT]
allow_resize_to_same_host = true
block_device_allocate_retries = 2400
block_device_allocate_retries_interval = 3
compute_driver = libvirt.LibvirtDriver
compute_monitors = cpu.virt_driver
concurrent_disk_operations = 2
cpu_allocation_ratio = 16
default_ephemeral_format = ext4
default_mempages_size = 2048
disk_allocation_ratio = 1
enable_new_services = false
firewall_driver = nova.virt.firewall.NoopFirewallDriver
force_raw_images = false
instance_usage_audit = true
instance_usage_audit_period = hour
linuxnet_interface_driver = openvswitch
log_config_append = /etc/nova/logging.conf
long_rpc_timeout = 400
map_new_hosts = false
metadata_port = 80
metadata_workers = 1
mkisofs_cmd = /usr/bin/genisoimage
my_ip = 192.168.206.3
network_allocate_retries = 2
notify_on_state_change = vm_and_task_state
osapi_compute_listen = 0.0.0.0
osapi_compute_listen_port = 8774
osapi_compute_workers = 1
ram_allocation_ratio = 1
remove_unused_original_minimum_age_seconds = 3600
reserved_host_memory_mb = 18548
reserved_huge_pages = node:0,size:4,count:3712000
reserved_huge_pages = node:0,size:1048576,count:1
reserved_huge_pages = node:1,size:4,count:512000
reserved_huge_pages = node:1,size:1048576,count:1
resume_guests_state_on_host_boot = true
running_deleted_instance_poll_interval = 60
service_down_time = 90
shared_pcpu_map = ""
state_path = /var/lib/nova
transport_url = rabbit://nova-rabbitmq-user:8d1ceef0af51Ti0*@rabbitmq.openstack.svc.cluster.local:5672/nova
use_neutron = true
vcpu_pin_set = "3-15,19-31"
[api_database]
connection = mysql+pymysql://nova:27b1df1e010cTi0*@mariadb.openstack.svc.cluster.local:3306/nova_api
idle_timeout = 60
max_overflow = 64
max_pool_size = 1
max_retries = -1
[cache]
backend = dogpile.cache.memcached
enabled = true
memcache_servers = memcached.openstack.svc.cluster.local:11211
[cell0_database]
connection = mysql+pymysql://nova:27b1df1e010cTi0*@mariadb.openstack.svc.cluster.local:3306/nova_cell0
idle_timeout = 60
max_overflow = 64
max_pool_size = 1
max_retries = -1
[conductor]
workers = 1
[database]
connection = mysql+pymysql://nova:27b1df1e010cTi0*@mariadb.openstack.svc.cluster.local:3306/nova
idle_timeout = 60
max_overflow = 64
max_pool_size = 1
max_retries = -1
[filter_scheduler]
build_failure_weight_multiplier = 0
cpu_weight_multiplier = 0
disk_weight_multiplier = 0
enabled_filters = RetryFilter,ComputeFilter,AvailabilityZoneFilter,AggregateInstanceExtraSpecsFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,NUMATopologyFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter,PciPassthroughFilter
pci_weight_multiplier = 0
ram_weight_multiplier = 0
shuffle_best_same_weighed_hosts = true
soft_affinity_weight_multiplier = 20
soft_anti_affinity_weight_multiplier = 20
[glance]
api_servers = http://glance-api.openstack.svc.cluster.local:9292/
num_retries = 3
[ironic]
api_endpoint = http://ironic-api.openstack.svc.cluster.local:6385/
auth_type = password
auth_url = http://keystone-api.openstack.svc.cluster.local:5000/v3
auth_version = v3
memcache_secret_key = 6e87ddafaccfTi0*
memcache_servers = memcached.openstack.svc.cluster.local:11211
password = a35725804eafTi0*
project_domain_name = service
project_name = service
region_name = RegionOne
user_domain_name = service
username = ironic
[keystone_authtoken]
auth_type = password
auth_uri = http://keystone-api.openstack.svc.cluster.local:5000/v3
auth_url = http://keystone-api.openstack.svc.cluster.local:5000/v3
auth_version = v3
memcache_secret_key = 6e87ddafaccfTi0*
memcache_security_strategy = ENCRYPT
memcached_servers = memcached.openstack.svc.cluster.local:11211
password = 1b8cc9189685Ti0*
project_domain_name = service
project_name = service
region_name = RegionOne
user_domain_name = service
username = nova
[libvirt]
connection_uri = qemu+tcp://127.0.0.1/system
cpu_mode = host-model
disk_cachemodes = network=writeback
hw_disk_discard = unmap
images_rbd_ceph_conf = /etc/ceph/ceph.conf
images_rbd_pool = vms
images_type = default
live_migration_completion_timeout = 180
live_migration_inbound_addr = 192.168.206.3
live_migration_permit_auto_converge = true
mem_stats_period_seconds = 0
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
rbd_user = cinder
remove_unused_resized_minimum_age_seconds = 86400
virt_type = kvm
[metrics]
required = false
[neutron]
auth_type = password
auth_url = http://keystone-api.openstack.svc.cluster.local:5000/v3
auth_version = v3
default_floating_pool = public
metadata_proxy_shared_secret = password
password = d2d5840aa336Ti0*
physnets = physnet1,physnet0
project_domain_name = service
project_name = service
region_name = RegionOne
service_metadata_proxy = true
url = http://neutron-server.openstack.svc.cluster.local:9696/
user_domain_name = service
username = neutron
[neutron_physnet_physnet0]
numa_nodes = 0
[neutron_physnet_physnet1]
numa_nodes = 0
[notifications]
notification_format = unversioned
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_notifications]
driver = messagingv2
[oslo_messaging_rabbit]
rabbit_ha_queues = true
[oslo_middleware]
enable_proxy_headers_parsing = true
[oslo_policy]
policy_file = /etc/nova/policy.yaml
[pci]
alias = {"vendor_id": "8086", "product_id": "0435", "name": "qat-dh895xcc-pf"}
alias = {"vendor_id": "8086", "product_id": "0443", "name": "qat-dh895xcc-vf"}
alias = {"vendor_id": "8086", "product_id": "37c8", "name": "qat-c62x-pf"}
alias = {"vendor_id": "8086", "product_id": "37c9", "name": "qat-c62x-vf"}
alias = {"name": "gpu"}
passthrough_whitelist = {"address": "0000:01:00.1"}
[placement]
auth_type = password
auth_url = http://keystone-api.openstack.svc.cluster.local:5000/v3
auth_version = v3
os_region_name = RegionOne
password = 7f593ab48d0dTi0*
project_domain_name = service
project_name = service
user_domain_name = service
username = placement
[scheduler]
discover_hosts_in_cells_interval = 30
periodic_task_interval = -1
workers = 1
[service_user]
auth_type = password
auth_url = http://keystone-api.openstack.svc.cluster.local:5000/v3
password = 1b8cc9189685Ti0*
project_domain_name = service
project_name = service
region_name = RegionOne
send_service_user_token = true
user_domain_name = service
username = nova
[spice]
html5proxy_host = 0.0.0.0
server_listen = 0.0.0.0
[vnc]
enabled = true
novncproxy_base_url = http://novncproxy.openstack.svc.cluster.local/vnc_auto.html
novncproxy_host = 0.0.0.0
novncproxy_port = 6080
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.206.3
[workarounds]
enable_numa_live_migration = true
[wsgi]
api_paste_config = /etc/nova/api-paste.ini
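One thing I notice in the config, though I may be reading it wrong: reserved_huge_pages claims the only 1 GiB page on each node for the platform, and virsh reports zero 2 MiB pages, so by my arithmetic there is nothing left for a flavor that requires large pages:

# My reading of the numbers above (not authoritative):
#   node 0: 2 MiB pages total = 0, 1 GiB pages total = 1, reserved 1 GiB = 1  ->  0 large pages free
#   node 1: 2 MiB pages total = 0, 1 GiB pages total = 1, reserved 1 GiB = 1  ->  0 large pages free
# hw:mem_page_size=large needs at least one unreserved 2 MiB or 1 GiB page on
# a single NUMA node, so NUMATopologyFilter would reject the only host.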
Regards,
Mariano
From: Chris Friesen
Sent: Friday, September 27, 2019 16:49
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Cannot allocate a VM - STX2.0-AIO-SX - Error500
Specifying "hw:mem_page_size=large" means that it will only use hugepages, not 4K pages. On the compute node do you have enough CPU and free hugepages on the same NUMA node to satisfy the request? The 'NUMATopologyFilter: (start: 1, end: 0)' part of the log says that this filter ruled out your only compute node.
What does "virsh capabilities|grep -C5 pages" show on the compute node?
What does /etc/nova/nova.conf show inside the nova-compute container?
It's possible to enable debug logging through nova.conf, but it's a little tricky to get the syntax right. Basically you'd be using the "system helm-override-update" command. Can you give the output of "system helm-override-show" for the nova chart in the openstack application?
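Something along these lines, though this is an untested sketch from memory (the stx-openstack / nova / openstack app, chart, and namespace names here are my assumption, so check them against "system application-list" first):

# Values file that turns on debug logging for nova:
cat > nova-debug.yaml <<EOF
conf:
  nova:
    DEFAULT:
      debug: true
EOF
# Show the current overrides for the nova chart, then apply the new ones:
system helm-override-show stx-openstack nova openstack
system helm-override-update stx-openstack nova openstack --values nova-debug.yaml
system application-apply stx-openstack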
Chris
On 9/24/2019 6:16 AM, Mariano Ucha wrote:
Hi all!
I'm having trouble allocating a VM with OpenStack. I'm using STX 2.0 AIO-SX with DPDK, and when I try to allocate a VM with a flavor that has the hw:mem_page_size=large metadata, the system gives me error 500: "No valid host was found. There are not enough hosts available."
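For reference, the flavor metadata is set with the standard OpenStack CLI (the flavor name and sizes below are illustrative, not my exact flavor):

openstack flavor create --ram 4096 --vcpus 2 --disk 20 test-flavor
openstack flavor set --property hw:mem_page_size=large test-flavor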
Traceback:
File "/var/lib/openstack/lib/python2.7/site-packages/nova/conductor/manager.py", line 1346, in schedule_and_build_instances
    instance_uuids, return_alternates=True)
File "/var/lib/openstack/lib/python2.7/site-packages/nova/conductor/manager.py", line 800, in _schedule_instances
    return_alternates=return_alternates)
File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations
    instance_uuids, return_objects, return_alternates)
File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 160, in select_destinations
    return cctxt.call(ctxt, 'select_destinations', **msg_args)
File "/var/lib/openstack/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 178, in call
    retry=self.retry)
File "/var/lib/openstack/lib/python2.7/site-packages/oslo_messaging/transport.py", line 128, in _send
    retry=retry)
File "/var/lib/openstack/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 645, in send
    call_monitor_timeout, retry=retry)
File "/var/lib/openstack/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 636, in _send
    raise result
Logs from Nova Scheduler:
2019-09-23 17:37:56.537 1 INFO nova.filters [req-f57095ac-6539-4efb-a6a5-f27aebb0a27c edbcf8808d8a4f48a39b8eebb3951292 8ae61d9905d94b8aa66193b20a3ab973 - default default] Filter NUMATopologyFilter returned 0 hosts
2019-09-23 17:37:56.538 1 INFO nova.filters [req-f57095ac-6539-4efb-a6a5-f27aebb0a27c edbcf8808d8a4f48a39b8eebb3951292 8ae61d9905d94b8aa66193b20a3ab973 - default default] Filtering removed all hosts for the request with instance ID '66175096-37a4-4a2c-b23a-e7a4a6f10ca1'. Filter results: ['RetryFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'AggregateInstanceExtraSpecsFilter: (start: 1, end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter: (start: 1, end: 1)', 'NUMATopologyFilter: (start: 1, end: 0)']
Logs from Nova Conductor:
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager Traceback (most recent call last):
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager File "/var/lib/openstack/lib/python2.7/site-packages/nova/conductor/manager.py", line 1346, in schedule_and_build_instances
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager instance_uuids, return_alternates=True)
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager File "/var/lib/openstack/lib/python2.7/site-packages/nova/conductor/manager.py", line 800, in _schedule_instances
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager return_alternates=return_alternates)
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager instance_uuids, return_objects, return_alternates)
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 160, in select_destinations
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager return cctxt.call(ctxt, 'select_destinations', **msg_args)
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager File "/var/lib/openstack/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 178, in call
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager retry=self.retry)
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager File "/var/lib/openstack/lib/python2.7/site-packages/oslo_messaging/transport.py", line 128, in _send
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager retry=retry)
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager File "/var/lib/openstack/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 645, in send
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager call_monitor_timeout, retry=retry)
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager File "/var/lib/openstack/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 636, in _send
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager raise result
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager NoValidHost_Remote: No valid host was found. There are not enough hosts available.
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager Traceback (most recent call last):
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager File "/var/lib/openstack/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 229, in inner
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager return func(*args, **kwargs)
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/manager.py", line 168, in select_destinations
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager allocation_request_version, return_alternates)
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 96, in select_destinations
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager allocation_request_version, return_alternates)
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 265, in _schedule
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager claimed_instance_uuids)
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 302, in _ensure_sufficient_hosts
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager raise exception.NoValidHost(reason=reason)
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager NoValidHost: No valid host was found. There are not enough hosts available.
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager
2019-09-23 17:37:56.542 1 ERROR nova.conductor.manager [req-f57095ac-6539-4efb-a6a5-f27aebb0a27c edbcf8808d8a4f48a39b8eebb3951292 8ae61d9905d94b8aa66193b20a3ab973 - default default] Failed to schedule instances: NoValidHost_Remote: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):
File "/var/lib/openstack/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 229, in inner
return func(*args, **kwargs)
File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/manager.py", line 168, in select_destinations
allocation_request_version, return_alternates)
File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 96, in select_destinations
allocation_request_version, return_alternates)
File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 265, in _schedule
claimed_instance_uuids)
File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 302, in _ensure_sufficient_hosts
raise exception.NoValidHost(reason=reason)
NoValidHost: No valid host was found. There are not enough hosts available.
2019-09-23 17:37:56.627 1 WARNING nova.scheduler.utils [req-f57095ac-6539-4efb-a6a5-f27aebb0a27c edbcf8808d8a4f48a39b8eebb3951292 8ae61d9905d94b8aa66193b20a3ab973 - default default] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):
File "/var/lib/openstack/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 229, in inner
return func(*args, **kwargs)
File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/manager.py", line 168, in select_destinations
allocation_request_version, return_alternates)
File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 96, in select_destinations
allocation_request_version, return_alternates)
File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 265, in _schedule
claimed_instance_uuids)
File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 302, in _ensure_sufficient_hosts
raise exception.NoValidHost(reason=reason)
NoValidHost: No valid host was found. There are not enough hosts available.
: NoValidHost_Remote: No valid host was found. There are not enough hosts available.
2019-09-23 17:37:56.630 1 WARNING nova.scheduler.utils [req-f57095ac-6539-4efb-a6a5-f27aebb0a27c edbcf8808d8a4f48a39b8eebb3951292 8ae61d9905d94b8aa66193b20a3ab973 - default default] [instance: 66175096-37a4-4a2c-b23a-e7a4a6f10ca1] Setting instance to ERROR state.: NoValidHost_Remote: No valid host was found. There are not enough hosts available.
How can I change the Nova log level to DEBUG?
Is there anything else I can check?
Regards,
Mariano