[Starlingx-discuss] [Containers] Sanity Test - ISO 20190522

Bailey, Henry Albert (Al) Al.Bailey at windriver.com
Thu May 23 21:11:02 UTC 2019


This would be the pass that failed:
2019-05-23 15:16:42.330 99286 INFO sysinv.conductor.kube_app [-] Application (stx-openstack) apply started.
2019-05-23 15:16:43.227 99286 INFO sysinv.conductor.kube_app [-] Secret default-registry-key created under Namespace openstack.
2019-05-23 15:16:43.266 99286 ERROR sysinv.common.kubernetes [req-24203373-fa32-407a-ab7a-67c9b4788dc3 admin admin] Failed to copy Secret ceph-pool-kube-rbd from Namespace kube-system to Namespace openstack: (404)
Reason: Not Found


Which sounds a lot like this bug
https://bugs.launchpad.net/starlingx/+bug/1828896


That bug was listed as fixed, but it was also reported as seen a week after the fix was submitted.
I suspect the bug needs to be reopened.
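
For anyone wanting to confirm the same failure, a quick check (assuming kubectl access on the active controller) is whether the source secret actually exists before the apply:

    kubectl -n kube-system get secret ceph-pool-kube-rbd

If that returns NotFound, the copy into the openstack namespace will fail with the same 404 shown in the log above.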

Al

-----Original Message-----
From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] 
Sent: Thursday, May 23, 2019 5:03 PM
To: Saul Wold; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190522

> -----Original Message-----
> From: Saul Wold [mailto:sgw at linux.intel.com]
> Sent: Thursday, May 23, 2019 3:53 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190522
> 
> 
> 
> On 5/23/19 1:49 PM, Alonso, Juan Carlos wrote:
> > If you followed the steps on the Wiki, your deployment and the sanity
> > deployment are the same.
> >
> > I agree with you that the apply should not have to run twice. The
> > automation has logic to handle this issue: when it appears, the suite
> > executes a re-apply. This is because sanity across all configs takes a long
> > time and we need to have the results. If the apply fails on the second try,
> > it won't be applied and will FAIL, and then we need to debug and open a bug.
> >
> > This issue is not frequent, and on my side I have seen it mostly in the
> > virtual environment. We would have to deploy all the configs manually every
> > day to see whether it is present.
> >
> Hmm, I see it every time I run the sanity Provision-Containers test on a
> fresh environment, every time! So about 10 times in the last couple of days.
> 
> So again, what else could be different in our virtual environments that would
> make this fail consistently for me?

Could it be the image downloads? In the end, the automation uses proxies over a NAT network on the host to download images from the public registry, and this could cause timeouts that make the apply fail. So it would be interesting to check the logs (/var/log/sysinv.log) and verify that it is not failing due to a timeout when downloading images. Our bare metal environments use a local registry, so the download is faster and hence we are not facing those issues.
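
A quick way to look for that (a rough filter; the exact messages may vary):

    grep -iE 'timeout|download' /var/log/sysinv.log | tail -n 50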

Regards,
José

> 
> Sau!
> 
> > Regards.
> > Juan Carlos Alonso
> >
> > -----Original Message-----
> > From: Saul Wold [mailto:sgw at linux.intel.com]
> > Sent: Thursday, May 23, 2019 3:26 PM
> > To: Cordoba Malibran, Erich <erich.cordoba.malibran at intel.com>;
> > Alonso, Juan Carlos <juan.carlos.alonso at intel.com>;
> > starlingx-discuss at lists.starlingx.io
> > Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO
> > 20190522
> >
> >
> >
> > On 5/23/19 1:09 PM, Cordoba Malibran, Erich wrote:
> >> As a last resort you can do:
> >>
> >> sudo -u postgres psql -d sysinv -c "update kube_app set status='uploaded' where name='stx-openstack';"
> >>
> >> as described here:
> >> https://wiki.openstack.org/wiki/StarlingX/Containers/FAQ
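> >>
> >> (For what it's worth, after resetting the status that way, the application
> >> state can be checked with
> >>
> >>     system application-list
> >>
> >> before retrying the apply.)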
> >>
> >>
> > This is not the problem, as I can re-run the application-apply and it
> > succeeds. What I am trying to understand is whether anyone else is seeing
> > this issue (i.e. having to re-run the application-apply) in the virtual
> > environment.
> >
> > If the sanity test is NOT seeing it, I would like to understand what's
> > different between my setup and the sanity testing environment. If sanity
> > testing IS seeing it, then I would argue that it's a failure.
> > There should not be a requirement to run the apply twice, or it should be
> > noted in the testing results.
> >
> > Sau!
> >
> >>
> >> On 5/23/19, 3:03 PM, "Alonso, Juan Carlos" <juan.carlos.alonso at intel.com>
> >> wrote:
> >>
> >>       Yes, I have seen this issue, even when executing the apply for the
> >>       first time.
> >>       I faced this error when the status was stuck on "uploading" or
> >>       "applying"; then the application cannot be removed or deleted.
> >>
> >>       Regards.
> >>       Juan Carlos Alonso
> >>
> >>       -----Original Message-----
> >>       From: Saul Wold [mailto:sgw at linux.intel.com]
> >>       Sent: Thursday, May 23, 2019 11:30 AM
> >>       To: starlingx-discuss at lists.starlingx.io
> >>       Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190522
> >>
> >>
> >>       Thanks for these results; glad to see the virtual environment mostly
> >>       working again.
> >>
> >>       I do have a question: I have tried to reproduce the ansible-based
> >>       install locally, and I am seeing a failure when trying to do the
> >>       application-apply of stx-openstack. My failure is:
> >>
> >>       stx-openstack | 1.0-13-centos-stable-versioned | armada-manifest |
> >>       manifest.yaml | apply-failed | operation aborted, check logs for detail |
> >>
> >>       When run a second time, the application-apply works. I have attached
> >>       the sysinv.log, which should contain both the failure and the success.
> >>
> >>       I attempted an application-delete and it failed with a vague message
> >>       (see line 1480 of the log); it seems to have occurred during exception
> >>       handling in sysinv.common.exception:
> >>
> >>       Delete of application %(name)s (%(version)s) failed: %(reason)s.
> >>
> >>       I would like to know from folks if they are seeing a similar issue
> >>       with having to run application-apply twice?
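> >>
> >>       (For reference, the retry here was just a plain second run of
> >>
> >>           system application-apply stx-openstack
> >>
> >>       with no other changes in between.)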
> >>
> >>       Thanks
> >>           Sau!
> >>
> >>       On 5/22/19 5:15 PM, Perez Ibarra, Maria G wrote:
> >>       > *Status of the Sanity Test for last CENGN ISO*: bootimage.iso from
> >>       > 2019-MAY-22 (link
> >>       > <http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190522T013000Z/>)
> >>       >
> >>       > Status: *YELLOW*
> >>       >
> >>       > ======================
> >>       >
> >>       > Bare Metal environment
> >>       >
> >>       > ======================
> >>       >
> >>       > *AIO - Simplex:*
> >>       >
> >>       >      Setup                    03 TCs
> >>       >
> >>       >      Provision-Containers     01 TCs
> >>       >
> >>       >      Sanity-OpenStack         49 TCs | 3 TCs FAIL
> >>       >
> >>       >      Sanity-Platform          11 TCs | 3 TCs FAIL
> >>       >
> >>       >      ------------------------------
> >>       >
> >>       >      TOTAL:                   64 TCs
> >>       >
> >>       > *AIO - Duplex:*
> >>       >
> >>       >      Setup                    03 TCs
> >>       >
> >>       >      Provision-Containers     01 TCs
> >>       >
> >>       >      Sanity-OpenStack         52 TCs | 3 TCs FAIL
> >>       >
> >>       >      Sanity-Platform          09 TCs | 5 TCs FAIL
> >>       >
> >>       >      ------------------------------
> >>       >
> >>       >      TOTAL:                   65 TCs
> >>       >
> >>       > *Standard - Local Storage (2+2):*
> >>       >
> >>       >      Setup                    03 TCs
> >>       >
> >>       >      Provision-Containers     01 TCs
> >>       >
> >>       >      Sanity-OpenStack         52 TCs
> >>       >
> >>       >      Sanity-Platform          09 TCs
> >>       >
> >>       >      ------------------------------
> >>       >
> >>       >      TOTAL:                   65 TCs
> >>       >
> >>       > *Standard - External Storage (2+2+2):*
> >>       >
> >>       >      Setup                    03 TCs
> >>       >
> >>       >      Provision-Containers     01 TCs
> >>       >
> >>       >      Sanity-OpenStack         52 TCs
> >>       >
> >>       >      Sanity-Platform          05 TCs | 2 TCs FAIL
> >>       >
> >>       >      ------------------------------
> >>       >
> >>       >      TOTAL:                   61 TCs
> >>       >
> >>       > ===================
> >>       >
> >>       > Virtual Environment
> >>       >
> >>       > ===================
> >>       >
> >>       > *AIO - Simplex*
> >>       >
> >>       > Setup                    03 TCs
> >>       >
> >>       > Provisioning             01 TCs
> >>       >
> >>       > Sanity OpenStack         49 TCs | 3 TCs FAIL
> >>       >
> >>       > Sanity Platform          07 TCs | 2 TCs FAIL
> >>       >
> >>       > ------------------------------
> >>       >
> >>       > TOTAL:                   60 TCs
> >>       >
> >>       > *AIO - Duplex*
> >>       >
> >>       > Setup                    03 TCs
> >>       >
> >>       > Provisioning             01 TCs
> >>       >
> >>       > Sanity OpenStack         51 TCs
> >>       >
> >>       > Sanity Platform          05 TCs | 4 TCs FAIL
> >>       >
> >>       > ------------------------------
> >>       >
> >>       > TOTAL:                   [ 61 TCs PASS ]
> >>       >
> >>       > *Standard - Local Storage*
> >>       >
> >>       > Setup                    03 TCs
> >>       >
> >>       > Provisioning             01 TCs
> >>       >
> >>       > Sanity OpenStack         52 TCs | 1 TC FAIL
> >>       >
> >>       > Sanity Platform          05 TCs | 4 TCs FAIL
> >>       >
> >>       > ------------------------------
> >>       >
> >>       > TOTAL:                   [ 61 TCs PASS ]
> >>       >
> >>       > ---------------------------------------------------------------
> >>       >
> >>       > VM resize failed by "No valid host was found"
> >>       > https://bugs.launchpad.net/starlingx/+bug/1824412
> >>       >
> >>       > Some pods are failing; tomorrow we'll double-check to determine
> >>       > whether it is a problem with the suite.
> >>       >
> >>       > For more detail of the tests:
> >>       > https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack
> >>       >
> >>       > Regards!
> >>       >
> >>       > Maria G.
> >>       >
> >>       >
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

