[Starlingx-discuss] [Containers] Sanity Test - ISO 20190424

Lemus Contreras, Cristopher J cristopher.j.lemus.contreras at intel.com
Fri Apr 26 22:06:06 UTC 2019


Hi All,

Some tests were made to find the point where the memory is allocated:

Just after `config_controller`, the system is using just a handful of GBs:

controller-0:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:            93G        3.2G         84G         47M        5.5G         88G
Swap:            0B          0B          0B
controller-0:~$
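
To catch the exact point where the jump happens, memory can be sampled in the background while the unlock proceeds. A rough sketch (the log path and 30-second interval are arbitrary choices):

# Log timestamped memory usage during provisioning (hypothetical log path)
while true; do
    free -m | awk -v t="$(date +%T)" '/^Mem:/{print t, $3 " MB used"}' >> /tmp/mem_trace.log
    sleep 30
done &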


Right after the unlock, when the system passes from "offline" to "intest" status, used memory jumps from 5.1GB to 71GB. This is with only the kube-system pods running:

              total        used        free      shared  buff/cache   available
Mem:            93G         71G         19G         45M        1.9G         20G
Swap:            0B          0B          0B



NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-84cdb6bd7c-w75rk   1/1     Running   1          36m
calico-node-zp8xv                          1/1     Running   1          36m
coredns-84bb87857f-lp8sl                   1/1     Running   1          36m
coredns-84bb87857f-r6mdf                   0/1     Pending   0          36m
kube-apiserver-controller-0                1/1     Running   1          35m
kube-controller-manager-controller-0       1/1     Running   2          35m
kube-proxy-w7sfq                           1/1     Running   1          36m
kube-scheduler-controller-0                1/1     Running   2          35m
tiller-deploy-d87d7bd75-hjb7w              1/1     Running   1          36m
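
If metrics-server happens to be deployed (it may not be on a fresh install), per-pod usage can be compared against that 71G directly; otherwise, listing the largest resident processes gives a first approximation:

kubectl top pods -n kube-system
# Fallback without metrics-server: biggest resident processes first
ps -eo rss,pid,comm --sort=-rss | head -15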



Bug updated with this info.

Regards,

Cristopher Lemus




On 4/26/19, 11:30 AM, "Victor Rodriguez" <vm.rod25 at gmail.com> wrote:

    Hi team
    
    My findings so far this morning:
    
    In order to know how much memory a container is really consuming, I
    tested two tools: docker stats, and reading from /proc/<pid>/smaps.
    
    I created a simple C program that allocates X KB of memory with malloc
    and then frees it:
    
    https://github.com/VictorRodriguez/hobbies/blob/master/dev_ops/footprint/memory.c
    
    Reserving 5000 Kb of memory
    Value of String = simple_test
    Address = 2895619200
    Waiting for 30 seconds
    
    I compiled it and copied it into my Docker image:
    
    https://github.com/VictorRodriguez/hobbies/blob/master/dev_ops/footprint/Dockerfile
    
    When I run the container and monitor its memory with docker stats, it
    reports only 2.5 KB, while from the /proc kernel info I get:
    
    vmrod at vmrod-ubuntu-devel:/tmp$ ./usr/bin/psstop | grep docker
    docker-containe      1857  : 0      Kb
    dockerd              2758  : 0      Kb
    docker-containe      3368  : 0      Kb
    docker-containe      5438  : 0      Kb
    docker-containe      25159 : 0      Kb
    docker               25105 : 48378  Kb
    
    (The first column is the PID, the second the memory consumed.) In this
    case psstop reports 48378 KB vs the 5000 KB that I know I requested.
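
    A way to cross-check the same number from the kernel side (a sketch;
    <container_name> is a placeholder, and smaps_rollup needs kernel >= 4.14,
    otherwise sum the Rss fields from /proc/<pid>/smaps):

    # Resolve the container's init PID, then read its RSS/PSS from the kernel
    pid=$(docker inspect -f '{{.State.Pid}}' <container_name>)
    grep -E '^(Rss|Pss):' /proc/$pid/smaps_rollup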
    
    In order to find the memory leak, we must be able to trust the tools we
    use to measure it. Cristopher, can you help me repeat the same
    experiment to see whether you get the same behavior? If so, we can
    start passing -m to each docker run to limit the container memory size
    (2 GB should be enough, right?)
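
    Something along these lines (illustrative only; the image name is a
    placeholder):

    # Cap the container at 2 GB of RAM, with no additional swap
    docker run -m 2g --memory-swap 2g --rm footprint-test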
    
    WIP
    
    regards
    
    On Thu, Apr 25, 2019 at 10:33 PM Victor Rodriguez <vm.rod25 at gmail.com> wrote:
    >
    > Can we consider tracking the VM used by the running processes from /proc? We can work on a script using psstop(0) or another similar tool, what do you think? This might help us find which process is consuming the memory over time.
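    >
    > A rough sketch of such a sampler, assuming psstop prints the consumed KB in its fourth column:
    >
    > # Append the top memory consumers every 60s so growth can be tracked over time
    > while true; do date >> /tmp/psstop.log; ./psstop | sort -k4 -n | tail -10 >> /tmp/psstop.log; sleep 60; done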
    >
    > I also see the same problem of consuming almost 90% of the memory, not only in all-in-one systems but also in duplex
    >
    > (0) https://github.com/clearlinux/psstop
    >
    > Regards
    > Victor Rodriguez
    >
    > On Thu, Apr 25, 2019, 21:59 Cordoba Malibran, Erich <erich.cordoba.malibran at intel.com> wrote:
    >>
    >> Hi,
    >>
    >> In this case we have:
    >>
    >> HugePages_Total: 34104
    >> HugePages_Free: 34104
    >> HugePages_Rsvd: 0
    >> HugePages_Surp: 0
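    >>
    >> Those counters come from /proc/meminfo; a quick way to see the hugepage pool next to the remaining 4K memory:
    >>
    >> grep -i -e huge -e memavailable /proc/meminfo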
    >>
    >> So, I'm not sure whether it can be related to bug 1825814.
    >>
    >> Also, for people not seeing this issue: how much memory do you have in your bare metal systems? What's the minimum memory required to run an AIO system? Our failing system has 97 GB, and free -h shows:
    >>
    >>                     total        used        free      shared  buff/cache   available
    >> Mem:            93G         84G        3.2G         66M        5.6G        4.8G
    >> Swap:            0B          0B          0B
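    >>
    >> For StarlingX's own view of how host memory is split between platform use and hugepages, sysinv has a per-host listing (assuming the host is provisioned):
    >>
    >> system host-memory-list controller-0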
    >>
    >>
    >> A couple of months ago I reported a similar issue[0]; in that case, after three days in stand-by, the system started to throw out-of-memory errors. Has anyone performed a longevity test over several days? The systems that work now might fail after a while if memory usage keeps increasing over time.
    >>
    >> -Erich
    >>
    >> [0] http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/002923.html
    >>
    >>
    >>
    >> From: "Li, Cheng1" <cheng1.li at intel.com>
    >> Date: Thursday, April 25, 2019 at 8:29 PM
    >> To: "Lemus Contreras, Cristopher J" <cristopher.j.lemus.contreras at intel.com>, "Miller, Frank" <Frank.Miller at windriver.com>, "Perez Ibarra, Maria G" <maria.g.perez.ibarra at intel.com>, "starlingx-discuss at lists.starlingx.io" <starlingx-discuss at lists.starlingx.io>
    >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424
    >>
    >> Actually, I had also reported this memory issue[1] a few days ago.
    >> Memory exhaustion happens because too little 4K memory is allocated for the system/software load.
    >>
    >> [1] https://bugs.launchpad.net/starlingx/+bug/1825814
    >>
    >> Thanks,
    >> Cheng
    >>
    >> From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com]
    >> Sent: Friday, April 26, 2019 1:50 AM
    >> To: Miller, Frank <Frank.Miller at windriver.com>; Perez Ibarra, Maria G <maria.g.perez.ibarra at intel.com>; starlingx-discuss at lists.starlingx.io
    >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424
    >>
    >> Hi Frank,
    >>
    >> We had a Zoom call with Al Bailey to troubleshoot the issues we are observing. The bug where a single CPU was taking all of the workload is resolved.
    >>
    >> What we observed seems to be a memory exhaustion issue; additional information was gathered and added to this bug for further troubleshooting: https://bugs.launchpad.net/starlingx/+bug/1826308
    >>
    >> If additional information is required, please, just let us know.
    >>
    >> Thanks & Regards,
    >>
    >> Cristopher Lemus
    >>
    >> From: "Miller, Frank" <mailto:Frank.Miller at windriver.com>
    >> Date: Thursday, April 25, 2019 at 8:24 AM
    >> To: "Perez Ibarra, Maria G" <mailto:maria.g.perez.ibarra at intel.com>, "mailto:starlingx-discuss at lists.starlingx.io" <mailto:starlingx-discuss at lists.starlingx.io>
    >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424
    >>
    >> Maria:
    >>
    >> It looks like the commit referenced yesterday [1] does not address the issue in your BM labs.  Can you set up a live debug session so that some container SMEs can investigate?
    >>
    >> Frank
    >> [1] https://review.opendev.org/#/c/655240/
    >>
    >> From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com]
    >> Sent: Thursday, April 25, 2019 12:12 AM
    >> To: starlingx-discuss at lists.starlingx.io
    >> Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424
    >>
    >> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-24 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/)
    >>
    >> Status: RED
    >>
    >> ===========================================
    >>
    >> Sanity Test is executed in a Containers – Bare Metal Environment
    >>
    >> AIO - Simplex
    >>
    >> Setup             Manual [PASS]
    >> Provisioning      01 TCs [PASS]
    >> Sanity OpenStack  49 TCs [FAIL] | 40 TCs FAIL
    >> Sanity Platform   07 TCs [FAIL] | 07 TCs FAIL
    >>
    >> TOTAL: 57 TCs [Fail: 47 TCs]
    >>
    >> AIO – Duplex
    >>
    >> Setup             Manual [PASS]
    >> Provisioning      01 TCs [PASS]
    >> Sanity OpenStack  52 TCs [FAIL] | 42 TCs FAIL
    >> Sanity Platform   05 TCs [FAIL] | 05 TCs FAIL
    >>
    >> TOTAL: 57 TCs [Fail: 47 TCs]
    >>
    >> Standard - Local Storage (2+2)
    >>
    >> Setup             Manual [PASS]
    >> Provisioning      01 TCs [PASS]
    >> Sanity OpenStack  49 TCs [PASS]
    >> Sanity Platform   07 TCs [PASS]
    >>
    >> TOTAL: 57 TCS PASS
    >>
    >> Standard - Dedicated Storage (2+2+2)
    >>
    >> Setup             Manual [PASS]
    >> Provisioning      01 TCs [PASS]
    >> Sanity OpenStack  52 TCs [PASS]
    >> Sanity Platform   05 TCs [PASS]
    >>
    >> TOTAL: 57 TCS PASS
    >>
    >>
    >>
    >> Sanity Test is executed in a Containers - Virtual Environment
    >>
    >> AIO - Simplex
    >>
    >> Setup             04 TCs [PASS]
    >> Provisioning      01 TCs [FAIL]
    >> Sanity OpenStack  49 TCs [FAIL]
    >> Sanity Platform   07 TCs [FAIL]
    >>
    >> TOTAL: 61 TCs [Fail: 57 TCs]
    >>
    >>
    >> AIO - Duplex
    >>
    >> Setup             04 TCs [PASS]
    >> Provisioning      01 TCs [FAIL]
    >> Sanity OpenStack  49 TCs [FAIL]
    >> Sanity Platform   07 TCs [FAIL]
    >>
    >> TOTAL: 61 TCs [Fail: 57 TCs]
    >>
    >>
    >> Standard – Local Storage
    >>
    >> Setup             04 TCs [PASS]
    >> Provisioning      01 TCs [FAIL]
    >> Sanity OpenStack  49 TCs [FAIL]
    >> Sanity Platform   07 TCs [FAIL]
    >>
    >> TOTAL: 61 TCs [Fail: 57 TCs]
    >>
    >>
    >> Standard – Dedicated Storage
    >>
    >> Setup             04 TCs [PASS]
    >> Provisioning      01 TCs [FAIL]
    >> Sanity OpenStack  49 TCs [FAIL]
    >> Sanity Platform   07 TCs [FAIL]
    >>
    >> TOTAL: 61 TCs [Fail: 57 TCs]
    >>
    >> - Some pods are failing during BM sanity execution: https://bugs.launchpad.net/starlingx/+bug/1826308
    >> - Bare metal sanity was tested with: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/
    >> - Virtual sanity was tested with: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T013000Z/
    >> - Tomorrow we will double-check virtual sanity with the latest ISO, which includes the fixes.
    >>
    >> For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack
    >>
    >>
    >> Regards
    >> Maria G.
    >>
    >>
    >>
    >>
    >> _______________________________________________
    >> Starlingx-discuss mailing list
    >> Starlingx-discuss at lists.starlingx.io
    >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
    


