[Starlingx-discuss] LP1827258 (OOM on compute node) analysis
Hi,

LP1827258 [0] highlights that an "order zero" (4KB) allocation caused an OOM. It looks unreasonable for a 4KB allocation to fail. Here we have two problems:

1. Why does a 4KB memory allocation cause the OOM?
2. Why does the system not have enough memory?

1. Why does a 4KB memory allocation cause the OOM?
==================================================
From compute-1_20190507.124154/var/log/kern.log [1]:

[ 1515.471830] calico-node invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=999
...
[ 1515.471950] Node 0 DMA free:15848kB min:60kB low:72kB high:88kB
[ 1515.471954] lowmem_reserve[]: 0 2800 15838 15838
[ 1515.471956] Node 0 DMA32 free:63620kB min:11480kB low:14348kB high:17220kB
[ 1515.471959] lowmem_reserve[]: 0 0 13038 13038
[ 1515.471961] Node 0 Normal free:53148kB min:53464kB low:66828kB high:80196kB
[ 1515.471964] lowmem_reserve[]: 0 0 0 0
...
[ 1515.472142] Out of memory: Kill process 60356 (kubernetes-entr) score 1000 or sacrifice child

gfp_mask=0x201da: it means the page allocation is from ZONE_NORMAL.
But for Node 0 Normal, free:53148kB < min:53464kB: the zone does not have enough free space.

Conclusion:
*******************************************************
* min is the OOM watermark.                           *
* Since free < min, the kernel starts the OOM killer. *
*******************************************************

We can find the kernel code for this logic:
-------------------------------------------
mm/page_alloc.c: __zone_watermark_ok()

    /*
     * Check watermarks for an order-0 allocation request. If these
     * are not met, then a high-order request also cannot go ahead
     * even if a suitable page happened to be free.
     */
    if (free_pages <= min + z->lowmem_reserve[classzone_idx])
        return false;

    /* If this is an order-0 request then the watermark is fine */
    if (!order)
        return true;

And here is a related document:
https://www.kernel.org/doc/Documentation/sysctl/vm.txt

min_free_kbytes:
----------------
This is used to force the Linux VM to keep a minimum number of
kilobytes free. The VM uses this number to compute a
watermark[WMARK_MIN] value for each lowmem zone in the system. Each
lowmem zone gets a number of reserved free pages based proportionally
on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.

And the watermarks are set up based on this kernel parameter:
-------------------------------------------------------------
mm/page_alloc.c: init_per_zone_wmark_min()

Conclusion:
*************************************************************************
* The default setting is calculated from the total system memory size. *
* I think min watermark = 53464kB for 16GB is reasonable.               *
*************************************************************************
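To make the __zone_watermark_ok() logic above concrete, here is a minimal sketch of the order-0 test with the Node 0 Normal numbers from the log plugged in (Python purely for illustration; the real check is the C code above and works in pages, not kB):

    def zone_watermark_ok(free_kb, min_kb, lowmem_reserve_kb):
        # Order-0 request: fail if free memory is at or below the
        # min watermark plus the zone's lowmem reserve.
        return free_kb > min_kb + lowmem_reserve_kb

    # Node 0 Normal: free:53148kB min:53464kB lowmem_reserve[]: 0 0 0 0
    print(zone_watermark_ok(53148, 53464, 0))   # False -> OOM killer runs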
2. Why does the system not have enough memory?
==============================================

From compute-1_20190507.124154/var/log/kern.log [1]:

2019-05-06T16:24:12.266 localhost kernel: debug [    0.000000] On node 0 totalpages: 4174118
...
2019-05-06T17:28:56.749 compute-1 kernel: info [ 1515.471986] Node 0 hugepages_total=1 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
2019-05-06T17:28:56.749 compute-1 kernel: info [ 1515.471987] Node 0 hugepages_total=6807 hugepages_free=6807 hugepages_surp=0 hugepages_size=2048kB
...

From hieradata/192.168.204.77.yaml [1]:

platform::compute::hugepage::params::vm_2M_pages: '"7024,7172"'
...
platform::compute::params::worker_base_reserved: ("node0:8000MB:1" "node1:2000MB:1")

From puppet.log [0]:

...
Exec[Allocate 7024 /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages]
...
Exec[Allocate 7172 /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages]
...

The total memory on node 0 is 16GB. The kernel log shows only 6807 2M hugepages, which is smaller than the 7024 requested: the system could not provide 7024 2M pages. The quick budget calculation below shows what the allocation that did succeed leaves for normal 4K pages.
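Here is that budget, using only numbers from the logs above (plain arithmetic in Python; units are kB):

    # Node 0 memory budget, values taken from kern.log above (kB).
    total_kb   = 4174118 * 4       # "On node 0 totalpages: 4174118", 4kB pages (~15.9GB)
    huge_2m_kb = 6807 * 2048       # 6807 x 2M hugepages actually allocated
    huge_1g_kb = 1048576           # 1 x 1G hugepage

    left_kb = total_kb - huge_2m_kb - huge_1g_kb
    print(left_kb // 1024, "MB")   # ~1667 MB left for ALL 4K allocations

Roughly 1.6GB is left for the kernel, platform services, and every pod, far below the 8000MB node-0 reservation in worker_reserved.conf, so free memory soon drops under the 53464kB min watermark from problem 1.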
But the total requested hugepage size is 7024 * 2M + 1G = 14.7GB. That is not reasonable, because two memory reservations should have been honored:

1. 8GB reserved by worker_reserved.conf:
   WORKER_BASE_RESERVED=("node0:8000MB:1" "node1:2000MB:1")
2. 10% headroom reserved by this sysinv code (host.py):
   vm_hugepages_nr_2M = int(m.vm_hugepages_possible_2M * 0.9)

After code review, it looks like the hugepage allocation check code has a problem. [2] It only checks whether the total requested hugepage memory is bigger than the total node memory size. So if the pending hugepage size falls between the maximum possible size and the node total size, the check passes.

By default, when there is no pending hugepage request, _update_huge_pages() allocates int(m.vm_hugepages_possible_2M * 0.9) 2M hugepages. But vm_hugepages_nr_2M_pending is used with priority when it is set. If the user manually configures the 2M hugepage count with a wrong (oversized) value, this issue is triggered: the hugepage allocation overflows into reserved memory, the normal 4K pages run out, and the OOM min watermark is hit.

A patch has been submitted for review. [2]
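For illustration only, here is a rough sketch of the kind of bound the check needs. This is NOT the actual patch in [2]; the function name is hypothetical, and vm_hugepages_possible_2M is the sysinv attribute quoted above:

    def check_pending_2m_hugepages(pending_2m, possible_2m):
        # Validate a pending 2M-hugepage request against the maximum
        # POSSIBLE hugepage count (i.e. after platform reservations),
        # not against the node's total memory size.
        if pending_2m > possible_2m:
            raise ValueError(
                "%d 2M hugepages requested, but only %d are possible "
                "after reserved memory is excluded" % (pending_2m, possible_2m))

With a bound like this, an oversized request such as the 7024 pages on node 0 would be rejected up front instead of eating into the 8GB platform reservation.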
[0] https://bugs.launchpad.net/starlingx/+bug/1827258/
[1] https://bugs.launchpad.net/starlingx/+bug/1827258/+attachment/5262103/+files...
[2] https://review.opendev.org/#/c/667811/1/sysinv/sysinv/sysinv/sysinv/api/cont...

Yang, Bin