[Starlingx-discuss] [Networking] StarlingX vs K8S networking question
Yang, Bin
bin.yang at intel.com
Mon Nov 25 02:11:36 UTC 2019
Beyond the K8s network performance enhancements, do we plan to connect the networks between VMs and containers?
e.g. using kuryr [1] to run both OpenStack VMs and K8s pods on the same Neutron network.
[1]: https://github.com/openstack/kuryr-kubernetes
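For context, a very rough sketch of the kind of Neutron mapping kuryr-kubernetes is driven by, wrapped in a ConfigMap as in its containerized deployment (the ConfigMap name and option names are my reading of the kuryr docs, and the subnet UUIDs are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kuryr-config               # illustrative name
  namespace: kube-system
data:
  kuryr.conf: |
    [neutron_defaults]
    # Neutron subnets shared by the OpenStack VMs and the K8s pods/services
    pod_subnet = 11111111-2222-3333-4444-555555555555
    service_subnet = 66666666-7777-8888-9999-aaaaaaaaaaaa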
Thanks,
Bin
From: Jones, Bruce E <bruce.e.jones at intel.com>
Sent: Friday, November 22, 2019 06:09
To: Waines, Greg <Greg.Waines at windriver.com>; Peters, Matt <Matt.Peters at windriver.com>; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Networking] StarlingX vs K8S networking question
Glad you went to the talk, Greg. Thanks for the update!
Have we done any network performance tests for containers on the current StarlingX solution? Maybe we are already in good shape….
brucej
From: Waines, Greg [mailto:Greg.Waines at windriver.com]
Sent: Thursday, November 21, 2019 2:07 PM
To: Jones, Bruce E <bruce.e.jones at intel.com>; Peters, Matt <Matt.Peters at windriver.com>; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Networking] StarlingX vs K8S networking question
I attended their talk on "Liberating Kubernetes from kube-proxy and iptables".
My quick view (possibly inaccurate) was:
* Yet another CNI,
* basically built on Linux's eBPF ... the evolution from iptables ... designed for higher performance,
* but the performance improvement doesn’t appear to kick in until you are well into the thousands of services, etc.
* also unclear whether it has all the rich routing capabilities of Calico.
... but like I said, just a very initial view from their talk.
Greg.
From: "Jones, Bruce E" <bruce.e.jones at intel.com<mailto:bruce.e.jones at intel.com>>
Date: Wednesday, November 20, 2019 at 12:42 PM
To: "Peters, Matt" <Matt.Peters at windriver.com<mailto:Matt.Peters at windriver.com>>, "starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>" <starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>>
Subject: Re: [Starlingx-discuss] [Networking] StarlingX vs K8S networking question
Matt (and Forrest) – thanks for the prompt replies!
Are we looking into the Cilium project [1]? In talking to them here at the conference, it looks like they have a very good solution.
Brucej
[1] https://cilium.io/
From: Peters, Matt [mailto:Matt.Peters at windriver.com]
Sent: Wednesday, November 20, 2019 4:09 AM
To: Jones, Bruce E <bruce.e.jones at intel.com>; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Networking] StarlingX vs K8S networking question
Hi Bruce,
StarlingX container networking leverages Calico [1] as the default CNI for K8s. Calico implements Pod networking and provides an implementation of K8s Network Policies [2]. The Calico network policies give the end user full control over inter-pod communication, defining service-level and pod-level access controls for micro-segmentation of the network communication.
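As a concrete (illustrative) example of the kind of micro-segmentation this enables, here is a minimal sketch using the standard K8s NetworkPolicy API; the namespace, labels and port are made up for illustration:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
  - Ingress                  # no ingress rules listed, so all ingress is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: api               # pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend      # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080

Calico also offers its own NetworkPolicy and GlobalNetworkPolicy resources when richer selectors or host-level rules are needed.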
In addition, StarlingX provides the Multus and SR-IOV CNI plugins as part of the standard deployment. Together, these allow container workloads to terminate high-bandwidth network interfaces directly in the container. This configuration can be leveraged for latency-sensitive applications that require raw access to network payloads.
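To illustrate how that is typically wired up (the network name, device-plugin resource name, image and IPAM settings below are placeholders rather than StarlingX defaults), a NetworkAttachmentDefinition selects the SR-IOV CNI and a pod requests the extra interface via annotation:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net0
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/sriov_net0   # SR-IOV device-plugin resource (illustrative)
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "sriov",
    "ipam": { "type": "host-local", "subnet": "192.168.10.0/24" }
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net0     # extra interface attached by Multus
spec:
  containers:
  - name: app
    image: example/dpdk-app:latest              # placeholder image
    resources:
      requests:
        intel.com/sriov_net0: "1"               # request one VF from the device plugin
      limits:
        intel.com/sriov_net0: "1"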
Finally, K8s networking is on an evolution path toward higher-performance Linux networking through kernel features such as XDP [3]. XDP is starting to be leveraged by many K8s CNI solutions, including Calico [4].
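As an illustration of the approach described in [4], Calico's XDP-based mitigation is driven by an untracked deny policy applied to host endpoints; a minimal sketch (the name, selector label and blocked CIDR are invented for the example):

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: block-attackers
spec:
  selector: has(host-endpoint)   # matches labelled host endpoints (illustrative label)
  order: 10
  doNotTrack: true               # untracked deny rules are candidates for XDP offload
  applyOnForward: true
  types:
  - Ingress
  ingress:
  - action: Deny
    source:
      nets:
      - 198.51.100.0/24          # example attacker range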
[1] https://www.projectcalico.org/
[2] https://docs.projectcalico.org/v3.8/security/kubernetes-network-policy
[3] https://www.iovisor.org/technology/xdp
[4] https://www.projectcalico.org/introducing-xdp-optimized-denial-of-service-mitigation/
From: "Jones, Bruce E" <bruce.e.jones at intel.com<mailto:bruce.e.jones at intel.com>>
Date: Tuesday, November 19, 2019 at 9:40 PM
To: "starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>" <starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>>
Subject: [Starlingx-discuss] [Networking] StarlingX vs K8S networking question
I am at Kubecon this week and met with a potential partner who is interested in StarlingX. He is very concerned about networking in off-the-shelf Kubernetes and is hopeful that StarlingX has addressed the issues. I don’t know enough about networking to answer his questions.
In particular, the partner is concerned about how IP addresses are assigned to nodes and thus to pods, and about network data path performance in K8s. He believes that the default model of “everyone can see everything” is not acceptable for enterprises where isolation of applications is important.
If someone who knows StarlingX networking could send me a write up on how StarlingX improves on K8S networking, that would be very helpful for me in further discussions with the potential partner.
Thank you!
brucej