It looks like you don't have any hugepages allocated. Lock the node, then allocate them via the dashboard or the CLI (something like "system host-memory-modify controller-0 0 -1G 2" to allocate two 1G hugepages on NUMA node 0), then unlock the node.
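For reference, the whole sequence would look roughly like this (just a sketch, assuming the host is controller-0 as in your output; check "system help host-memory-modify" for the exact flags on your release):

controller-0:~$ system host-lock controller-0
controller-0:~$ system host-memory-modify controller-0 0 -1G 2
controller-0:~$ system host-unlock controller-0

Repeat the host-memory-modify step for NUMA node 1 if you want pages on both nodes.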
Kubernetes only allows a single hugepage size, so whatever size we use for the OpenStack guests has to match the hugepage size used by vswitch (which defaults to 1G but can be changed to 2MB).
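On the OpenStack side you then request the matching page size through the flavor extra spec, e.g. (assuming 1G pages and a made-up flavor name):

controller-0:~$ openstack flavor set my-hugepage-flavor --property hw:mem_page_size=1GB

hw:mem_page_size also accepts explicit sizes like 2MB, or the keyword "large".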
Chris
On 9/27/2019 7:13 PM, Mariano Ucha wrote:
Hi Chris! Thank you for your answer.
I have an All-in-One Simplex installation. I installed it on hardware with 128 GB of RAM and two sockets with 8 cores each.
Do you prefer that I create a Launchpad bug to keep the logs and track this?
Outputs:
What does "virsh capabilities|grep -C5 pages" show on the compute node?
controller-0:~$ virsh capabilities|grep -C5 pages
      <feature name='arat'/>
      <feature name='ssbd'/>
      <feature name='xsaveopt'/>
      <feature name='pdpe1gb'/>
      <feature name='invtsc'/>
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
      <pages unit='KiB' size='1048576'/>
    </cpu>
    <power_management>
      <suspend_mem/>
    </power_management>
    <iommu support='yes'/>
--
    </migration_features>
    <topology>
      <cells num='2'>
        <cell id='0'>
          <memory unit='KiB'>67073312</memory>
          <pages unit='KiB' size='4'>16506184</pages>
          <pages unit='KiB' size='2048'>0</pages>
          <pages unit='KiB' size='1048576'>1</pages>
          <distances>
            <sibling id='0' value='10'/>
            <sibling id='1' value='20'/>
          </distances>
          <cpus num='16'>
--
            <cpu id='23' socket_id='0' core_id='7' siblings='7,23'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>67108860</memory>
          <pages unit='KiB' size='4'>16515071</pages>
          <pages unit='KiB' size='2048'>0</pages>
          <pages unit='KiB' size='1048576'>1</pages>
          <distances>
            <sibling id='0' value='20'/>
            <sibling id='1' value='10'/>
          </distances>
          <cpus num='16'>