[Starlingx-discuss] [Distributed Edge cloud] Edge cloud: kubelet-taking 100% in edge cloud
Zvonar, Bill
Bill.Zvonar at windriver.com
Wed May 26 15:45:46 UTC 2021
Hi Sriram, we discussed this on the community call today...
Nobody on the call had any specific suggestions for you, but I will ask the Containers team to weigh in on this.
Thanks, Bill...
From: Dharwadkar, Sriram <Sriram.Dharwadkar at commscope.com>
Sent: Friday, May 21, 2021 3:21 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [Distributed Edge cloud] Edge cloud: kubelet-taking 100% in edge cloud
Hi Team,
We have deployed Distributed StarlingX on one of our systems (with load 20.06). Below is the version information:
OS="centos"
SW_VERSION="20.06"
BUILD_TARGET="Host Installer"
BUILD_TYPE="Formal"
BUILD_ID="r/stx.4.0"
JOB="STX_4.0_build_layer_flock"
BUILD_BY="starlingx.build at cengn.ca"
BUILD_NUMBER="22"
BUILD_HOST="starlingx_mirror"
BUILD_DATE="2020-08-05 12:25:52 +0000"
The edge cloud is running in AIO-Duplex low-latency mode on 2 controllers with 24 cores each.
When we deploy CPU-intensive pods attached to SR-IOV VFs, we observe an issue where kubelet takes 100% CPU.
Below are the details captured.
controller-1:/proc/2455541/fd# ps -ealf | grep -i kubelet
4 S root 2453203 1 39 80 0 - 265440 - 12:47 ? 00:31:18 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/containerd/containerd.sock --cni-bin-dir=/usr/libexec/cni --node-ip=10.222.35.3 --volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/ --feature-gates TopologyManager=true --cpu-manager-policy=static --topology-manager-policy=single-numa-node --system-reserved=memory=7000Mi --reserved-cpus=0-2 --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock
0 S root 3554919 2551522 0 80 0 - 28181 - 14:06 pts/2 00:00:00 grep --color=auto -i kubelet
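Kubelet is running with --cpu-manager-policy=static and --topology-manager-policy=single-numa-node, so the exclusive CPU assignments it has handed out can also be inspected on the node. A minimal sketch, assuming the default kubelet root dir /var/lib/kubelet:

# Exclusive CPUs assigned to containers by the static CPU manager
cat /var/lib/kubelet/cpu_manager_state | python -m json.tool

# Confirm the reserved/platform CPUs from the running command line
ps -o args= -C kubelet | tr ' ' '\n' | grep -E 'reserved-cpus|cpu-manager-policy'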
controller-1:/proc/2455541/fd# ps -eLP | grep 2453203
2453203 2453203 0 ? 00:00:00 kubelet
2453203 2453206 2 ? 00:00:19 kubelet
2453203 2453207 1 ? 00:00:00 kubelet
2453203 2453208 1 ? 00:00:12 kubelet
2453203 2453209 0 ? 00:00:00 kubelet
2453203 2453210 1 ? 00:00:00 kubelet
2453203 2453211 1 ? 00:00:00 kubelet
2453203 2453215 2 ? 00:00:00 kubelet
2453203 2453218 2 ? 00:00:00 kubelet
2453203 2453254 1 ? 00:00:00 kubelet
2453203 2453255 1 ? 00:00:00 kubelet
2453203 2453287 0 ? 00:00:00 kubelet
2453203 2453290 1 ? 00:00:19 kubelet
2453203 2455519 1 ? 00:00:17 kubelet
2453203 2455540 2 ? 00:00:14 kubelet
2453203 2455541 0 ? 00:28:24 kubelet
.....
There are around 31 kubelet threads running. One of the threads (2455541) is taking 100% CPU.
strace on that thread shows the output below; it is continuously retrying a read on fd 32:
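The hot thread can also be confirmed with a per-thread CPU view (a minimal sketch; PID 2453203 is from the listing above, and pidstat comes from the sysstat package, which may or may not be installed on the controller):

# Per-thread CPU usage of the kubelet process
top -H -p 2453203

# Or sample per-thread usage over five one-second intervals
pidstat -t -p 2453203 1 5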
controller-1:/proc/2455541/fd# strace -p 2455541
strace: Process 2455541 attached
read(32, 0xc002125900, 4608) = ? ERESTARTNOINTR (To be restarted)
read(32, 0xc002125900, 4608) = ? ERESTARTNOINTR (To be restarted)
read(32, 0xc002125900, 4608) = ? ERESTARTNOINTR (To be restarted)
read(32, 0xc002125900, 4608) = ? ERESTARTNOINTR (To be restarted)
read(32, 0xc002125900, 4608) = ? ERESTARTNOINTR (To be restarted)
read(32, 0xc002125900, 4608) = ? ERESTARTNOINTR (To be restarted)
read(32, 0xc002125900, 4608) = ? ERESTARTNOINTR (To be restarted)
read(32, 0xc002125900, 4608) = ? ERESTARTNOINTR (To be restarted)
read(32, 0xc002125900, 4608) = ? ERESTARTNOINTR (To be restarted)
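ERESTARTNOINTR indicates the read() is being interrupted and transparently restarted by the kernel, so userspace never sees an error and the call just loops. The rate of these restarts can be sampled for a short window (a minimal sketch; the thread ID is from this node):

# Watch the looping read with timestamps and per-call duration for 5 seconds
timeout 5 strace -ttT -e trace=read -p 2455541 2>&1 | tail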
controller-1:/proc/2455541# cd fd
controller-1:/proc/2455541/fd# ls
0 1 10 11 12 13 14 15 16 17 18 19 2 20 21 22 23 24 25 26 27 29 3 30 31 32 34 4 5 6 7 8 9
Fd 32 turns out to be the speed attribute of a Calico virtual interface. Kubelet is spinning on this read continuously, which drives the CPU usage up to 100%.
controller-1:/proc/2455541/fd# ls -l 32
lr-x------ 1 root root 64 May 21 13:44 32 -> /sys/devices/virtual/net/cali0db93ba21d0/speed
controller-1:/proc/2455541/fd#
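The same sysfs attribute can be read by hand to see how it behaves for this Calico veth (a minimal sketch; the interface name is from this node and will differ elsewhere). On a veth whose link/carrier is down, reading speed typically fails with EINVAL rather than returning a value, which may be related to the looping read:

# Link state of the Calico interface, then the same read kubelet is stuck on
cat /sys/devices/virtual/net/cali0db93ba21d0/operstate
cat /sys/devices/virtual/net/cali0db93ba21d0/speed; echo "read exit code: $?"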
Within a few minutes we can no longer ssh to controller-1. On the console, top shows kubelet taking 100% CPU.
Any clues on this issue would be a great help; let me know if any further information is required.
Regards,
Sriram