On 5/23/19 1:09 PM, Cordoba Malibran, Erich wrote:
As a last resort you can do:
sudo -u postgres psql -d sysinv -c "update kube_app set status='uploaded' where name='stx-openstack';"
as described here: https://wiki.openstack.org/wiki/StarlingX/Containers/FAQ
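To confirm the reset took effect before retrying, you can check from both sides (the select is just a sketch against the same kube_app table the update above touches; system application-list is the normal check):

  sudo -u postgres psql -d sysinv -c "select name, status from kube_app;"
  system application-list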
This is not the problem; I can re-run the application-apply and it succeeds. What I am trying to understand is whether anyone else is seeing this issue (i.e., having to re-run application-apply) in the virtual environment. If Sanity testing is NOT seeing it, I would like to understand what's different between my setup and the sanity testing environment. If Sanity testing IS seeing it, then I would argue that it's a failure: there should not be a requirement to run the apply twice, or it should be noted in the testing results.

Sau!
On 5/23/19, 3:03 PM, "Alonso, Juan Carlos" <juan.carlos.alonso@intel.com> wrote:
Yes, I have seen this issue, even when executing apply for the first time. I hit this error when the status got stuck on "uploading" or "applying"; the application then cannot be removed or deleted.
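For clarity, these are the standard commands that fail in that stuck state (assuming the stx-openstack app):

  system application-remove stx-openstack
  system application-delete stx-openstack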
Regards,
Juan Carlos Alonso
-----Original Message-----
From: Saul Wold [mailto:sgw@linux.intel.com]
Sent: Thursday, May 23, 2019 11:30 AM
To: starlingx-discuss@lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190522
Thanks for these results; I'm glad to see the Virtual environment mostly working again.
I do have a question: I have tried to reproduce the Ansible-based install locally, and I am seeing a failure when trying to do the application-apply of stx-openstack. My failure is:
application   | version                        | manifest name   | manifest file | status       | progress
--------------+--------------------------------+-----------------+---------------+--------------+------------------------------------------
stx-openstack | 1.0-13-centos-stable-versioned | armada-manifest | manifest.yaml | apply-failed | operation aborted, check logs for detail
When run a second time, the application-apply works. I have attached the sysinv.log, which should contain both the failure and the success.
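For reference, the retry is nothing special, just the same apply again followed by a status check (assuming the default app name):

  system application-apply stx-openstack
  system application-list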
I attempted an application-delete and it failed with a vague message (see line 1480 of the log); it seems to have occurred during exception handling in sysinv.common.exception:
Delete of application %(name)s (%(version)s) failed: %(reason)s.
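To find both entries without reading the whole attachment, something like this should work over the attached file (on a live system I believe the log lives at /var/log/sysinv.log):

  grep -n -E 'apply-failed|Delete of application' sysinv.log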
I would like to know from folks whether they are seeing a similar issue with having to run application-apply twice.
Thanks,
Sau!
On 5/22/19 5:15 PM, Perez Ibarra, Maria G wrote:
> *Status of the Sanity Test for last CENGN ISO*: bootimage.iso from
> 2019-MAY-22 (link: <http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190522T013000Z/>)
>
> Status: *YELLOW*
>
> ======================
> Bare Metal environment
> ======================
>
> *AIO - Simplex:*
> Setup                 03 TCs
> Provision-Containers  01 TCs
> Sanity-OpenStack      49 TCs | 3 TCs FAIL
> Sanity-Platform       11 TCs | 3 TCs FAIL
> ------------------------------
> TOTAL: 64 TCs
>
> *AIO - Duplex:*
> Setup                 03 TCs
> Provision-Containers  01 TCs
> Sanity-OpenStack      52 TCs | 3 TCs FAIL
> Sanity-Platform       09 TCs | 5 TCs FAIL
> ------------------------------
> TOTAL: 65 TCs
>
> *Standard - Local Storage (2+2):*
> Setup                 03 TCs
> Provision-Containers  01 TCs
> Sanity-OpenStack      52 TCs
> Sanity-Platform       09 TCs
> ------------------------------
> TOTAL: 65 TCs
>
> *Standard - External Storage (2+2+2):*
> Setup                 03 TCs
> Provision-Containers  01 TCs
> Sanity-OpenStack      52 TCs
> Sanity-Platform       05 TCs | 2 TCs FAIL
> ------------------------------
> TOTAL: 61 TCs
>
> ===================
> Virtual Environment
> ===================
>
> *AIO - Simplex*
> Setup             03 TCs
> Provisioning      01 TCs
> Sanity OpenStack  49 TCs | 3 TCs FAIL
> Sanity Platform   07 TCs | 2 TCs FAIL
> ------------------------------
> TOTAL: 60 TCs
>
> *AIO - Duplex*
> Setup             03 TCs
> Provisioning      01 TCs
> Sanity OpenStack  51 TCs
> Sanity Platform   05 TCs | 4 TCs FAIL
> ------------------------------
> TOTAL: [ 61 TCs PASS ]
>
> *Standard - Local Storage*
> Setup             03 TCs
> Provisioning      01 TCs
> Sanity OpenStack  52 TCs | 1 TC FAIL
> Sanity Platform   05 TCs | 4 TCs FAIL
> ------------------------------
> TOTAL: [ 61 TCs PASS ]
>
> ---------------------------------------------------------------
>
> VM resize failed by "No valid host was found"
> https://bugs.launchpad.net/starlingx/+bug/1824412
>
> Some pods are failing; tomorrow we will double-check to determine whether it is a problem with the suite.
>
> For more detail of the tests:
> https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack
>
> Regards!
>
> Maria G.
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss@lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss