[Starlingx-discuss] questions about '[Feature] Huge page management' https://storyboard.openstack.org/#!/story/2004763

Huang, Marvin Marvin.Huang at windriver.com
Wed Apr 24 20:33:18 UTC 2019


Hi Austin,

According to Storyboard story 2004763 (https://storyboard.openstack.org/#!/story/2004763), I understand that you are working on the feature "Huge page management".

I have some questions about the feature.
Although one of its two tasks is still in review (the other is shown as merged), you may already have the answers.

In the description of the story, there is a requirement:
"- Enable k8s huge page feature for worker nodes that do not have the openstack compute label. It should be disabled otherwise."

Question: what does this mean for users?

By 'enable', does it mean that users can modify the memory allocation on the node via the following?
system host-memory-modify <worker-name> <processor> [-2M <number of 2M hugepages>] [-1G <number of 1G hugepages>] [-f <function>] ...
or Horizon: Admin -> Platform -> Host Inventory ...

Otherwise ('disabled'), will the CLIs (system host-memory-xxx) reject any requests?
Or will the corresponding Horizon pages have no controls for updating the memory allocation, or have those controls disabled?
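For context, this is how I would check the k8s side of it once a worker is up (just a sketch of what I have in mind; 'worker-0' is an example hostname):

    # Check whether kubelet advertises huge pages as node resources
    # (hugepages-2Mi / hugepages-1Gi entries under Capacity/Allocatable)
    kubectl describe node worker-0 | grep -i hugepages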


"- Automatically defaults for worker nodes with openstack compute label. Changes will be applied on the unlock.
    - Current 2M huge page default settings
    - 1-1G huge page per numa node for vswitch "

Questions: in this situation, is the k8s huge page feature disabled (per the first requirement above)?
                And will the (host-memory) CLIs reject any requests?
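For reference, this is how I understand a worker gets the openstack compute label in the first place (a sketch; 'worker-0' is an example hostname):

    # Lock the worker, assign the openstack compute label, then unlock
    # (my understanding is that the memory defaults are applied on this unlock)
    system host-lock worker-0
    system host-label-assign worker-0 openstack-compute-node=enabled
    system host-unlock worker-0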


And a question related to VMs:
If a VM that uses huge pages (with a flavor that has 'hw:mem_page_size=large' or 'hw:mem_page_size=1048576') is launched, will the free memory pages be decreased accordingly on the worker it is running on?
That is, if the VM consumes one 1G huge page, the number of free 1G pages on the hosting worker should be reduced by one. Is this still the expected behavior?

This is the assumption in https://bugs.launchpad.net/starlingx/+bug/1813325.
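For reference, the kind of flavor I have in mind (the flavor name and sizes are just examples):

    # Create a flavor and request 1G huge pages via the extra spec
    openstack flavor create --ram 4096 --vcpus 2 --disk 20 m1.huge
    openstack flavor set m1.huge --property hw:mem_page_size=1048576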

Regards,
Marvin
