Re: [Starlingx-discuss] Support single huge page size for openstack worker node
Hi All,

The change that removes auto-provision of huge pages has been merged into the master branch.

A new provisioning step is required prior to unlocking an AIO controller or a worker node if vswitch_type is set to OVS-DPDK.

Configure vSwitch memory per NUMA node:
system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>
e.g. system host-memory-modify -f vswitch -1G 1 compute-0 0

If you plan to create VMs that use huge pages, you would need to configure VM huge pages as well.

Please make any necessary documentation changes.

Regards,
Tao
Hi Tao,

Thanks for sharing the information.

Cristopher, I was wondering whether this change to 1 GB huge pages affected the testing environment. Last time, you, Erich, and I were debugging a lack of memory during sanity, and it was mainly because of huge page reservations.

On Wed, Aug 28, 2019 at 12:51 PM Liu, Tao <Tao.Liu@windriver.com> wrote:
Hi All,
The change that removes auto-provision of huge pages has been merged into the master branch.
A new provisioning step is required prior to unlocking an AIO controller or a worker node, if vswitch_type is set to OVS-DPDK.
Configure vSwitch memory per NUMA node:
system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>
e.g. system host-memory-modify -f vswitch -1G 1 compute-0 0
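A minimal sketch of the full sequence on a host with two NUMA nodes (the hostname compute-0 is illustrative; one 1G page per processor matches the guidance later in this thread): list the host memory to see which processors are present, then allocate one 1G page for vswitch on each:

system host-memory-list compute-0
system host-memory-modify -f vswitch -1G 1 compute-0 0
system host-memory-modify -f vswitch -1G 1 compute-0 1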
Thanks, this enables users to choose according to their needs and performance requirements.
If you plan to create VMs using the huge pages, you would need to configure VM huge pages as well.
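For the VM huge pages, a minimal sketch (the count of 6 pages and the hostname compute-0 are illustrative; size the allocation according to how many VMs will use huge pages, as discussed later in this thread):

system host-memory-modify -1G 6 compute-0 0
system host-memory-modify -1G 6 compute-0 1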
Please make any necessary documentation changes.
I would encourage important code changes like this to come with a documentation patch in the same change.

Regards,
Victor R.
Regards,
Tao
From: Liu, Tao
Sent: Tuesday, August 20, 2019 8:39 PM
To: 'starlingx-discuss@lists.starlingx.io'
Subject: Re: Pending: Support single huge page size for openstack worker node
Hi All,
The changes to support single huge page size have been merged into master.
In this first update, the auto-provision of VM huge pages has been changed from 2M to 1G.
A subsequent update will remove the auto-provisioning of huge pages for both VM and vSwitch.
If any test case depends on the default VM huge pages, a new step will be required to allocate huge pages for VMs on an openstack worker node.
Prior to unlocking a worker node, user provisioning of the vSwitch memory would be required if vswitch_type is set to OVS-DPDK.
Regards,
Tao
From: Liu, Tao
Sent: Thursday, August 15, 2019 1:06 PM
To: starlingx-discuss@lists.starlingx.io
Subject: Pending: Support single huge page size for openstack worker node
Hi All,
Per story 2006295 (https://storyboard.openstack.org/#!/story/2006295), we are in the process of supporting a single huge page size for openstack worker nodes. This means we will enforce the provisioning of a single huge page size per worker, which aligns with the non-openstack worker behavior. The automated test cases that attempt to allocate both 2M and 1G huge pages on a worker node should be updated.
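A hedged illustration of what the enforcement means for test steps (the -2M flag form and the exact rejection behavior are my assumptions based on the description above, not confirmed output):

# provisioning a single size per worker is fine, e.g. only 1G pages:
system host-memory-modify -1G 4 compute-0 0
# a follow-up step that tries to add a different size on the same worker,
# e.g. system host-memory-modify -2M 512 compute-0 0, is expected to be
# rejected and should be removed or updated in the automated tests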
The code changes are available here:
https://review.opendev.org/#/c/676710/
Regards,
Tao Liu, Member of Technical Staff, Engineering, Wind River
direct 613.963.1413 fax: 613.492.7870 skype: tao_at_home
350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
Hello,

This change will break the automation that we use to do the setup. We'll adapt it; it seems to be a quick change. Tomorrow, with the build that contains this change, we can do the required testing (not sure, but most likely we won't have sanity results tomorrow for BareMetal).

The issue that we faced last time was on virtual environments. We don't change the vswitch on virtual environments, so I expect that this won't have an impact.

Tao, could you please point us to the documentation for the proper use of "system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>"? We'd need to know which values are adequate for <1G hugepages number> and <processor>. I'm thinking that these are related to the actual amount of memory and CPUs on each bare metal server. Are there best practices? Recommended values? Should we stick to <1G hugepages number>=1 and <processor>=0 no matter our hardware specs?

Thanks in advance.

Cristopher Lemus
Hi Cristopher,

OVS-DPDK is NOT supported in a virtual environment, and VM huge pages were not auto-provisioned previously, so there is no impact for virtual environments.

For bare metal, you would need to allocate one 1G huge page per NUMA node for vSwitch memory if vswitch_type is set to OVS-DPDK (assuming all bare metal servers support 1G pages nowadays).

Use 'system host-memory-list <host name or id>' to discover how many processors are supported on the host, then allocate one 1G huge page per processor for vswitch, for example:
system host-memory-modify -f vswitch -1G 1 compute-0 0
system host-memory-modify -f vswitch -1G 1 compute-0 1

For VM huge pages, we used to auto-provision the possible VM huge pages as 2M pages, i.e. VM possible = (node total memory - platform reserved) * 0.9 - vswitch. With the single huge page size support, you would need to allocate X 1G huge pages for VMs to satisfy the automated test cases (if the test cases launch VMs that use huge pages). This depends on how many VMs are launched during the test. I think 6 to 10 1G huge pages should be enough, and that is safe for small bare metal servers. For example:
system host-memory-modify -1G 6 compute-0 0
system host-memory-modify -1G 6 compute-0 1

Regards,
Tao
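For reference, a back-of-the-envelope reading of the old auto-provisioning formula above, using purely hypothetical numbers (65536 MiB node total, 8192 MiB platform reserved, 1024 MiB vswitch):

# old formula: VM possible = (node total - platform reserved) * 0.9 - vswitch, in 2M pages
echo $(( ((65536 - 8192) * 9 / 10 - 1024) / 2 ))   # prints 25292, i.e. roughly 25000 2M pages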
Hi Tao,

Thanks a lot for the explanation and examples; they were really useful for quickly making the required adjustments to our automation. We're able to continue executing the sanity test.

Regards,

Cristopher Lemus
participants (3)
- Lemus Contreras, Cristopher J
- Liu, Tao
- Victor Rodriguez