[Starlingx-discuss] [Feature] Container pinning on worker nodes - question

Gauld, James James.Gauld at windriver.com
Fri May 31 20:18:52 UTC 2019


José,
System inventory allows configuration of "Platform" physical cores.  The Platform cores are a reserved subset of the available cores, and always include the corresponding SMT siblings if hyper-threading is enabled.  The remaining online cores are configured for vSwitch or VMs.

When you ran "system host-cpu-list <host>", the subset of logical cpus reserved for platform was just the logical cpu 0. This is the expected value with hyper-threading disabled, and when there is only 1 platform core.
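
If you want more than one platform core, the platform function can be reassigned while the host is locked. A sketch (the exact argument order may vary by release; check "system help host-cpu-modify"):

system host-lock compute-0
system host-cpu-modify -f platform -p0 2 compute-0
system host-unlock compute-0

After unlock, "system host-cpu-list compute-0" should show two logical cpus assigned the Platform function on processor 0.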


Some further background on affinity settings in general:

For the openstack worker, all systemd tasks will launch with the default CPUAffinity (i.e., task affinity) unless put in a specific cgroup.
grep -rs CPUAffinity /etc/systemd
/etc/systemd/system.conf.d/platform-cpuaffinity.conf:CPUAffinity="0"
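
You can spot-check that a systemd-launched task actually inherits that mask with taskset (PID 1 is used here purely as an illustration):
taskset -pc 1
(on your host this should report an affinity list of 0)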

On the openstack worker, all pods have the same cpuset affinity as the parent k8s-infra cgroup.
The following nested cgroups all have the same cpuset setting, in your case "0".
E.g.,
tail /sys/fs/cgroup/cpuset/k8s-infra/cpuset.cpus
tail /sys/fs/cgroup/cpuset/k8s-infra/kubepods/cpuset.cpus
tail /sys/fs/cgroup/cpuset/k8s-infra/kubepods/*/cpuset.cpus
tail /sys/fs/cgroup/cpuset/k8s-infra/kubepods/*/*/cpuset.cpus
tail /sys/fs/cgroup/cpuset/k8s-infra/kubepods/*/*/*/cpuset.cpus
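
Rather than tailing each level by hand, you can dump every cpuset.cpus under k8s-infra in one pass; each line prints the file path plus its cpu list, so any pod that deviates from the parent stands out:
find /sys/fs/cgroup/cpuset/k8s-infra -name cpuset.cpus -exec grep -H . {} \;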

The parent cgroup mount has all online logical cpus:
compute-0:~# tail /sys/fs/cgroup/cpuset/cpuset.cpus

The nodeset is configured to be the subset of NUMA nodes where the Platform cpus are allocated, which is numa node "0".
Logical cpu 0 resides on socket_id 0 and numa_node 0, which is why the nodeset is set to "0".
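
The nodeset shows up in the same cgroup hierarchy as cpuset.mems, and in your case should read "0" at every level:
tail /sys/fs/cgroup/cpuset/k8s-infra/cpuset.mems
tail /sys/fs/cgroup/cpuset/k8s-infra/kubepods/cpuset.mems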

You can see the socket/node/core/logical cpu enumeration in various places, including "system host-cpu-list <host>", "lscpu -e", and libvirt's "virsh capabilities" (in the <topology> section, "cell id" is the numa node), as well as by walking through /sys/devices/system/cpu/cpu<X> and /sys/devices/system/node/node<Y>.
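
For example, straight from sysfs for logical cpu 0 and numa node 0:
cat /sys/devices/system/cpu/cpu0/topology/physical_package_id   (socket id)
cat /sys/devices/system/cpu/cpu0/topology/core_id               (physical core id)
cat /sys/devices/system/node/node0/cpulist                      (logical cpus on node 0)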

On the openstack worker, the QEMU VMs are launched under /sys/fs/cgroup/cpuset/machine.slice/<machine-qemu-instance-X>/, and their pinning is in that cgroup's cpuset.cpus.
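
For example, to see the pinning of each running instance (the scope names under machine.slice encode the instance):
tail /sys/fs/cgroup/cpuset/machine.slice/machine-qemu*/cpuset.cpus
or per domain via libvirt:
virsh vcpupin <domain>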

To see the hierarchy of cgroups and each process:
sudo LANG=POSIX systemd-cgls cpuset
<each pid will show up, including its cgroup hierarchy>

For a given pid or tid (light-weight pid, i.e. thread), you can obtain the cgroup cpuset name:
cat /proc/<pid>/cpuset
cat /proc/<pid>/task/<tid>/cpuset

You can then look at /sys/fs/cgroup/cpuset/<path-to-name>/cpuset.cpus
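
Putting the two steps together for a single <pid> (the value in /proc/<pid>/cpuset is a path relative to the cpuset mount, so it can be appended directly):
cpuset_name=$(cat /proc/<pid>/cpuset)
cat /sys/fs/cgroup/cpuset${cpuset_name}/cpuset.cpus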

You can see the affinity a given Linux task has:
grep  Cpus_allowed_list: /proc/<pid>/status
grep  Cpus_allowed_list: /proc/<pid>/task/<tid>/status
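
Or for every thread of a process at once:
grep Cpus_allowed_list: /proc/<pid>/task/*/status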

To see the affinity of all tasks:
ps-sched.sh
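
ps-sched.sh is a StarlingX helper script; if it is not on your path, plain ps gives a similar per-thread view (note PSR is the cpu a thread last ran on, not its full affinity mask):
ps -eLo pid,tid,class,rtprio,ni,pri,psr,pcpu,comm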

Within a container, you can see the subset of CPUs you were given by looking at /proc/self:
kubectl exec -it <container> -- /bin/bash
grep  Cpus_allowed_list: /proc/self/status
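
From inside the pod you can also confirm that the cgroup path matches the k8s-infra hierarchy above, and that tools which respect affinity see the reduced cpu count:
cat /proc/self/cpuset
nproc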

-Jim

-----Original Message-----
From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] 
Sent: May-31-19 12:19 PM
To: starlingx-discuss at lists.starlingx.io; Gauld, James
Subject: [Feature] Container pinning on worker nodes - question 

Hi Jim 

I'm checking the patch for CPU pinning [1] to create some validation scenarios. One of the sentences says:

"For openstack based worker nodes including AIO (i.e., host-label openstack-compute-node=enabled):
- the k8s cpuset and nodeset include the assigned platform cores"

My question is regarding the correct way to validate this sentence. I'm checking 'cpuset' and 'nodeset' but I always obtain a value of 0. Please refer to my procedure at [2] and let me know if I'm checking the correct values; if not, please let me know which ones I should use.

1- https://review.opendev.org/#/c/648511/
2- http://paste.openstack.org/show/752368/ 

Regards,
José




