[Starlingx-discuss] Limitations on Multus/SRIOV CNI Plugins

Xu, Chenjie chenjie.xu at intel.com
Tue Jul 2 08:56:42 UTC 2019


Hi all,
The following 2 bugs have been reported to track limitations 1 and 2:
https://bugs.launchpad.net/starlingx/+bug/1835018
https://bugs.launchpad.net/starlingx/+bug/1835020

Limitation 3 should be documented in the StarlingX guide on how to use Multus/SR-IOV CNI, which is still missing. For example:

To run SRIOV+DPDK, the pod needs to request memory and mount the hugepage-volume on the correct host path, as follows:
   resources:
     requests:
       memory: 2Gi
       intel.com/pci_sriov_net_physnet0: 2
     limits:
       memory: 2Gi
       intel.com/pci_sriov_net_physnet0: 2
   volumeMounts:
   - name: hugepage-volume
     mountPath: /dev/hugepages
   volumes:
   - name: hugepage-volume
     hostPath:
       path: /dev/hugepages
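
For the guide, it may also help to show where these fragments fit in a complete pod spec. Below is a minimal sketch, assuming a Multus network attachment named sriov-net0 bound to the intel.com/pci_sriov_net_physnet0 resource and a placeholder DPDK image; the pod name, image, and attachment name are illustrative only, not taken from an existing StarlingX example:

   apiVersion: v1
   kind: Pod
   metadata:
     name: sriov-dpdk-test                        # illustrative name
     annotations:
       k8s.v1.cni.cncf.io/networks: sriov-net0    # assumed NetworkAttachmentDefinition name
   spec:
     containers:
     - name: dpdk-app
       image: dpdk-test:latest                    # placeholder image
       command: ["sleep", "infinity"]
       resources:
         requests:
           memory: 2Gi
           intel.com/pci_sriov_net_physnet0: 2
         limits:
           memory: 2Gi
           intel.com/pci_sriov_net_physnet0: 2
       volumeMounts:
       - name: hugepage-volume
         mountPath: /dev/hugepages
     volumes:
     - name: hugepage-volume
       hostPath:
         path: /dev/hugepages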

Best Regards,
Xu, Chenjie
From: Xu, Chenjie
Sent: Thursday, June 27, 2019 4:32 PM
To: Webster, Steven <Steven.Webster at windriver.com>; Khalil, Ghada <Ghada.Khalil at windriver.com>; Peters, Matt <Matt.Peters at windriver.com>
Cc: Zhao, Forrest <forrest.zhao at intel.com>; Guo, Ruijing <ruijing.guo at intel.com>; Le, Huifeng <huifeng.le at intel.com>
Subject: Limitations on Multus/SRIOV CNI Plugins

Hi Steven,
During my testing of the Multus/SRIOV CNI plugins, I have the following findings:

1.      The sysadmin needs to set the MAC address manually for each VF.

2.      The configuration format for the SR-IOV network device plugin has changed; see:
https://github.com/intel/sriov-network-device-plugin#configurations
For now, StarlingX uses its own Docker image and doesn't need to change the configuration. But this will become a bug when StarlingX updates the Docker image to a newer version.
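
For reference, the newer format described in that link is selector-based. A rough sketch of what the updated configuration might look like is below; the resource name, vendor/device IDs, and driver names are examples only, not the values StarlingX would actually use:

   # ConfigMap consumed by the SR-IOV network device plugin (sketch of the newer,
   # selector-based format; actual values for StarlingX would differ).
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: sriovdp-config
     namespace: kube-system
   data:
     config.json: |
       {
         "resourceList": [
           {
             "resourceName": "pci_sriov_net_physnet0",
             "selectors": {
               "vendors": ["8086"],
               "drivers": ["i40evf", "ixgbevf"]
             }
           }
         ]
       }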

3.      This is not a bug, but it should be noted:

Normally, huge pages should be supported by Kubernetes to run SRIOV+DPDK, and the pod needs to request huge pages as follows:

   resources:
     requests:
       memory: 2Gi
       intel.com/pci_sriov_net_physnet0: 2
     limits:
       hugepages-1Gi: 2Gi
       memory: 2Gi
       intel.com/pci_sriov_net_physnet0: 2
   volumeMounts:
   - name: hugepage-volume
     mountPath: /dev/hugepages
   volumes:
   - name: hugepage-volume
     emptyDir:
       medium: HugePages



However, the Kubernetes provided by StarlingX will:

- enable huge pages for non-OpenStack-based worker nodes
- disable huge pages for OpenStack-based worker nodes:
https://opendev.org/starlingx/config/src/branch/master/puppet-manifests/src/modules/platform/manifests/kubernetes.pp#L118
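
For context, huge page support in Kubernetes of this era is controlled by the HugePages feature gate, so "disable huge pages" effectively means turning that gate off for the kubelet. A minimal sketch of such a setting is below; the exact mechanism and variable names used by the linked puppet manifest may differ, so treat this as illustrative only:

   # KubeletConfiguration fragment that disables the HugePages feature gate
   # (illustrative only; the real StarlingX setting lives in the linked kubernetes.pp).
   apiVersion: kubelet.config.k8s.io/v1beta1
   kind: KubeletConfiguration
   featureGates:
     HugePages: false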



But in my testing, the pod can still get huge pages on an OpenStack-based worker node where Kubernetes doesn't provide huge page support. In that case, the pod needs to request memory and mount the hugepage-volume on the correct host path, as follows:
   resources:
     requests:
       memory: 2Gi
       intel.com/pci_sriov_net_physnet0: 2
     limits:
       memory: 2Gi
       intel.com/pci_sriov_net_physnet0: 2
   volumeMounts:
   - name: hugepage-volume
     mountPath: /dev/hugepages
   volumes:
   - name: hugepage-volume
     hostPath:
       path: /dev/hugepages
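
The pod presumably still gets huge pages here because the hostPath volume mounts the host's hugetlbfs directly, bypassing the kubelet's huge page accounting. For completeness, the intel.com/pci_sriov_net_physnet0 resource requested above is normally tied to a Multus network attachment that the pod references through the k8s.v1.cni.cncf.io/networks annotation. A minimal sketch is below; the attachment name, CNI version, and IPAM settings are assumptions for illustration, not values from a StarlingX deployment:

   apiVersion: "k8s.cni.cncf.io/v1"
   kind: NetworkAttachmentDefinition
   metadata:
     name: sriov-net0                  # assumed name, matching the pod annotation sketch earlier
     annotations:
       k8s.v1.cni.cncf.io/resourceName: intel.com/pci_sriov_net_physnet0
   spec:
     config: '{
       "cniVersion": "0.3.0",
       "type": "sriov",
       "name": "sriov-net0",
       "ipam": {
         "type": "host-local",
         "subnet": "10.10.0.0/24"
       }
     }'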

Best Regards,
Xu, Chenjie

