[Starlingx-discuss] [Containers] Sanity Test - ISO 20190522
Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-MAY-22 (link <http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190522T013000Z/>)

Status: YELLOW

======================
Bare Metal environment
======================

AIO - Simplex:
Setup                   03 TCs
Provision-Containers    01 TCs
Sanity-OpenStack        49 TCs | 3 TCs FAIL
Sanity-Platform         11 TCs | 3 TCs FAIL
------------------------------
TOTAL: 64 TCs

AIO - Duplex:
Setup                   03 TCs
Provision-Containers    01 TCs
Sanity-OpenStack        52 TCs | 3 TCs FAIL
Sanity-Platform         09 TCs | 5 TCs FAIL
------------------------------
TOTAL: 65 TCs

Standard - Local Storage (2+2):
Setup                   03 TCs
Provision-Containers    01 TCs
Sanity-OpenStack        52 TCs
Sanity-Platform         09 TCs
------------------------------
TOTAL: 65 TCs

Standard - External Storage (2+2+2):
Setup                   03 TCs
Provision-Containers    01 TCs
Sanity-OpenStack        52 TCs
Sanity-Platform         05 TCs | 2 TCs FAIL
------------------------------
TOTAL: 61 TCs

===================
Virtual Environment
===================

AIO - Simplex
Setup                   03 TCs
Provisioning            01 TCs
Sanity OpenStack        49 TCs | 3 TCs FAIL
Sanity Platform         07 TCs | 2 TCs FAIL
------------------------------
TOTAL: 60 TCs

AIO - Duplex
Setup                   03 TCs
Provisioning            01 TCs
Sanity OpenStack        51 TCs
Sanity Platform         05 TCs | 4 TCs FAIL
------------------------------
TOTAL: [ 61 TCs PASS ]

Standard - Local Storage
Setup                   03 TCs
Provisioning            01 TCs
Sanity OpenStack        52 TCs | 1 TC FAIL
Sanity Platform         05 TCs | 4 TCs FAIL
------------------------------
TOTAL: [ 61 TCs PASS ]

---------------------------------------------------------------
VM resize failed with "No valid host was found": https://bugs.launchpad.net/starlingx/+bug/1824412
Some pods are failing; tomorrow we will double-check whether it is a problem in the test suite.

For more detail on the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack

Regards!
Maria G.
Maria,

Nice to see that sanity is much better today. For the one bug you were hitting that caused your ~30 case failures, https://bugs.launchpad.net/starlingx/+bug/1824412, can you please follow Gerry Kopec's comments from 5/7?

You should be able to change the config option and retest:

    system helm-override-update nova openstack --set conf.nova.DEFAULT.allow_resize_to_same_host=true
    system application-apply stx-openstack

Once the nova pods are restarted you should be able to see the conf option set inside one of the nova pods and can retry the test:

    kubectl exec -it -n openstack <nova-api-osapi pod name> cat /etc/nova/nova.conf

I think overall we'd want to turn this to true in all environments, as that was the behaviour in the non-containerized setup. According to Zhipeng, who tried the method on the SH side, it works. Please help to verify.

Thx. - cindy
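Putting Gerry's steps together, a minimal end-to-end sequence looks roughly like the following (a sketch only; the pod name is a placeholder you would look up on your own system):

    # Set the nova override and re-apply the application
    system helm-override-update nova openstack --set conf.nova.DEFAULT.allow_resize_to_same_host=true
    system application-apply stx-openstack

    # After the nova pods restart, find an osapi pod and confirm the option is set
    kubectl get pods -n openstack | grep nova-api-osapi
    kubectl exec -it -n openstack <nova-api-osapi pod name> -- grep allow_resize_to_same_host /etc/nova/nova.conf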
Hi Cindy,

I ran those steps and we were able to do the resize on AIO. I think this has been discussed multiple times, but will this be enabled by default in the near future, or will it remain disabled?

Thanks & Regards,

Cristopher Lemus
Thanks for these results, glad to see the Virtual environment mostly working again.

I do have a question. I have tried to reproduce the ansible-based install locally and I am seeing a failure when trying to do the application-apply of stx-openstack. My failure is:

    stx-openstack | 1.0-13-centos-stable-versioned | armada-manifest | manifest.yaml | apply-failed | operation aborted, check logs for detail |

When run a second time, the application-apply works. I have attached the sysinv.log that should contain both the failure and the success.

I also attempted an application-delete and it failed with a vague message (see line 1480 of the log); it seems to have occurred during exception handling in sysinv.common.exception:

    Delete of application %(name)s (%(version)s) failed: %(reason)s.

I would like to know from folks if they are seeing a similar issue with having to run application-apply twice.

Thanks
   Sau!
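For anyone trying to reproduce this, a minimal check-and-retry sequence (a sketch only, using the commands already discussed in this thread) is:

    # Check the current application status (expect "apply-failed" after the first attempt)
    system application-list

    # Inspect the conductor log around the failure
    tail -n 200 /var/log/sysinv.log

    # Retry the apply; in my case the second attempt succeeds
    system application-apply stx-openstack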
Yes, I have seen this issue, even when executing the apply for the first time. I hit this error when the status got stuck on "uploading" or "applying"; the application then cannot be removed or deleted.

Regards.
Juan Carlos Alonso
As a last resort you can do:

    sudo -u postgres psql -d sysinv -c "update kube_app set status='uploaded' where name='stx-openstack';"

as described here: https://wiki.openstack.org/wiki/StarlingX/Containers/FAQ
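Combining this with the earlier discussion, one possible recovery sequence when the application is stuck (a sketch only; verify the status values on your own system first) would be:

    # Reset the stuck application status directly in the sysinv database
    sudo -u postgres psql -d sysinv -c "update kube_app set status='uploaded' where name='stx-openstack';"

    # Confirm it now shows as "uploaded"
    system application-list

    # Retry the apply
    system application-apply stx-openstack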
On 5/23/19 1:09 PM, Cordoba Malibran, Erich wrote:
As a last resource you can do a :
sudo -u postgres psql -d sysinv -c"update kube_app set status='uploaded' where name='stx-openstack';"
as described here: https://wiki.openstack.org/wiki/StarlingX/Containers/FAQ
This is not the problem, as I can re-run the application-apply and it succeeds. What I am trying to understand is whether anyone else is seeing this issue (i.e. having to re-run the application-apply) in the virtual environment.

If Sanity testing is NOT seeing it, I would like to understand what's different between my setup and the sanity testing environment. If Sanity testing IS seeing it, then I would argue that it's a failure: there should not be a requirement to run the apply twice, or it should be noted in the testing results.

Sau!
If you followed the steps on the Wiki, your deployment and the sanity deployment are the same.

I agree with you that the apply should not have to be run twice. The automation has logic to handle this issue: when it appears, the suite executes a re-apply (see the sketch below). This is because sanity across all configs takes a long time and we need to have the results; if the apply fails on the second try, the application won't be applied, the run will FAIL, and then we need to debug and open a bug.

This issue is not frequent, and at least on my side I have seen it mostly in the virtual environment; we would have to deploy all the configs manually every day to see whether it is present.

Regards.
Juan Carlos Alonso
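The retry logic the suite uses is roughly equivalent to the following (a hypothetical sketch, not the actual suite code; the status strings and polling interval are assumptions):

    #!/bin/bash
    # Apply stx-openstack, and re-apply once if the first attempt fails.
    for attempt in 1 2; do
        system application-apply stx-openstack
        # Poll until the application leaves the "applying" state
        while system application-list | grep stx-openstack | grep -q applying; do
            sleep 30
        done
        if system application-list | grep stx-openstack | grep -q ' applied '; then
            echo "stx-openstack applied on attempt ${attempt}"
            exit 0
        fi
        echo "apply attempt ${attempt} failed"
    done
    # Failed twice: report FAIL and open a bug
    exit 1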
On 5/23/19 1:49 PM, Alonso, Juan Carlos wrote:
If you followed the steps on Wiki, your deployment and the sanity's deployment are the same.
I am agree with you that should not run the apply twice. The automation has logic to handle this issue, when it appears the suite execute a re-apply, this is because sanity in all configs takes a long time and we need to have the results, if the apply fails in the second try, it won’t be applied and will FAIL, then we need to debug and open a bug.
This issue is not frequent, and at least on my side I have seen it mostly in virtual environment, we would have to deploy all the configs manually everyday to see if it is present.
Hmm, I see it every time I run the sanity Provision-Containers test on a fresh environment, every time! So about 10 times in the last couple of days.

So again, what else could be different in our Virtual Environments that would make this fail consistently for me?

Sau!
-----Original Message----- From: Saul Wold [mailto:sgw@linux.intel.com] Sent: Thursday, May 23, 2019 3:53 PM To: starlingx-discuss@lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190522
On 5/23/19 1:49 PM, Alonso, Juan Carlos wrote:
If you followed the steps on Wiki, your deployment and the sanity's deployment are the same.
I am agree with you that should not run the apply twice. The automation has logic to handle this issue, when it appears the suite execute a re-apply, this is because sanity in all configs takes a long time and we need to have the results, if the apply fails in the second try, it won’t be applied and will FAIL, then we need to debug and open a bug.
This issue is not frequent, and at least on my side I have seen it mostly in virtual environment, we would have to deploy all the configs manually everyday to see if it is present.
Hmm, I see it everytime I run the sanity Provision-Containers test on a fresh environment, every time! So about 10 times in the last couple of days.
So again, what else could be different in our Virtual Environments that would make this fail consistently for me.
Could it be the image downloads? The automation uses proxies over a NAT network on the host to download images from the public registry, and this could cause timeouts that make the apply fail. It would be interesting to check the logs (/var/log/sysinv.log) and verify that it is not failing due to a timeout while downloading images; see the example below. Our bare-metal environments use a local registry, so the download is faster and hence we are not facing those issues.

Regards,
José
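A quick way to look for this (a sketch; the exact log strings are assumptions and may differ between releases):

    # Look for download/timeout errors around the failed apply in the conductor log
    grep -iE 'timeout|failed to download|image' /var/log/sysinv.log | tail -n 50

    # Check whether any pods are stuck pulling images
    kubectl get pods --all-namespaces | grep -iE 'ImagePullBackOff|ErrImagePull'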
This would be the pass that failed:

    2019-05-23 15:16:42.330 99286 INFO sysinv.conductor.kube_app [-] Application (stx-openstack) apply started.
    2019-05-23 15:16:43.227 99286 INFO sysinv.conductor.kube_app [-] Secret default-registry-key created under Namespace openstack.
    2019-05-23 15:16:43.266 99286 ERROR sysinv.common.kubernetes [req-24203373-fa32-407a-ab7a-67c9b4788dc3 admin admin] Failed to copy Secret ceph-pool-kube-rbd from Namespace kube-system to Namespace openstack: (404)
    Reason: Not Found

Which sounds a lot like this bug: https://bugs.launchpad.net/starlingx/+bug/1828896

That bug was listed as fixed, but also reported as seen a week after the fix was submitted. I suspect the bug needs to be reopened.

Al
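Based on that error, a quick pre-check before the apply (a sketch only; the Secret name comes from the log excerpt above) would be:

    # The apply copies this Secret from kube-system to openstack; verify it exists first
    kubectl get secret ceph-pool-kube-rbd -n kube-system

    # Also confirm the rbd-provisioner pod that creates it is up
    kubectl get pods -n kube-system | grep rbd-provisioner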
On 5/23/19 2:11 PM, Bailey, Henry Albert (Al) wrote:
This would be the pass that failed 2019-05-23 15:16:42.330 99286 INFO sysinv.conductor.kube_app [-] Application (stx-openstack) apply started. 2019-05-23 15:16:43.227 99286 INFO sysinv.conductor.kube_app [-] Secret default-registry-key created under Namespace openstack. 2019-05-23 15:16:43.266 99286 ERROR sysinv.common.kubernetes [req-24203373-fa32-407a-ab7a-67c9b4788dc3 admin admin] Failed to copy Secret ceph-pool-kube-rbd from Namespace kube-system to Namespace openstack: (404) Reason: Not Found
Which sounds a lot like this bug https://bugs.launchpad.net/starlingx/+bug/1828896
That bug was listed as fixed, but also reported as seen a week after the fix was submitted. I suspect the bug needs to be reopened.
Huzzah to Al!!

Maybe Bob can take a look at this and comment on why this might still be an issue: is it due to timing, and do the testing scripts need to be modified to wait for the right actions to complete (for example, with a wait loop like the sketch below)?

Thanks
   sau!
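If timing is the cause, a hypothetical wait before the apply (a sketch, not a tested fix; the 5-minute timeout is an assumption) could look like:

    #!/bin/bash
    # Wait up to ~5 minutes for the ceph-pool-kube-rbd Secret before applying stx-openstack
    for i in $(seq 1 30); do
        if kubectl get secret ceph-pool-kube-rbd -n kube-system >/dev/null 2>&1; then
            break
        fi
        sleep 10
    done
    system application-apply stx-openstack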
Al
-----Original Message----- From: Perez Carranza, Jose [mailto:jose.perez.carranza@intel.com] Sent: Thursday, May 23, 2019 5:03 PM To: Saul Wold; starlingx-discuss@lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190522
-----Original Message----- From: Saul Wold [mailto:sgw@linux.intel.com] Sent: Thursday, May 23, 2019 3:53 PM To: starlingx-discuss@lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190522
On 5/23/19 1:49 PM, Alonso, Juan Carlos wrote:
If you followed the steps on Wiki, your deployment and the sanity's deployment are the same.
I am agree with you that should not run the apply twice. The automation has logic to handle this issue, when it appears the suite execute a re-apply, this is because sanity in all configs takes a long time and we need to have the results, if the apply fails in the second try, it won’t be applied and will FAIL, then we need to debug and open a bug.
This issue is not frequent, and at least on my side I have seen it mostly in virtual environment, we would have to deploy all the configs manually everyday to see if it is present.
Hmm, I see it everytime I run the sanity Provision-Containers test on a fresh environment, every time! So about 10 times in the last couple of days.
So again, what else could be different in our Virtual Environments that would make this fail consistently for me.
Could be the images download?... At the end the automation is using proxies over a NAT network on the host to download images form the public registry and this could cause some timeouts that could make apply fail, so should be interesting check the logs (var/log/sysinv.log) and verify if is not failing due a timeout when downloading images. On our bare metal environments are using local registry the download is faster an hence we are not facing those issues.
Regards, José
Sau!
Regards. Juan Carlos Alonso
-----Original Message----- From: Saul Wold [mailto:sgw@linux.intel.com] Sent: Thursday, May 23, 2019 3:26 PM To: Cordoba Malibran, Erich <erich.cordoba.malibran@intel.com>; Alonso, Juan Carlos <juan.carlos.alonso@intel.com>; starlingx-discuss@lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190522
On 5/23/19 1:09 PM, Cordoba Malibran, Erich wrote:
As a last resource you can do a :
sudo -u postgres psql -d sysinv -c"update kube_app set status='uploaded' where name='stx-openstack';"
as described here: https://wiki.openstack.org/wiki/StarlingX/Containers/FAQ
This is not the problem, as a I can re-run the application-apply and it succeeds, what I am trying to understand is if anyone else is seeing this issue (ie re-run the application-apply) in the virtual environment.
If Sanity test is NOT seeing it, I would like to understand what's different between my setup and the sanity testing environment. If Sanity testing IS seeing it, then I would argue that it's a failure. There should not be a requirement to run the apply twice or it should be noted in the testing results.
Sau!
On 5/23/19, 3:03 PM, "Alonso, Juan Carlos" <juan.carlos.alonso@intel.com>
wrote:
Yes, I have seen this issue, even when execute apply for first time. I faced this error when status hold on "uploading" or "applying", then
cannot be removed or deleted.
Regards. Juan Carlos Alonso
-----Original Message----- From: Saul Wold [mailto:sgw@linux.intel.com] Sent: Thursday, May 23, 2019 11:30 AM To: starlingx-discuss@lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190522
Thanks for these results, glad to see the Virtual environment mostly
working again.
I do have a question, I have tried to reproduce the ansible based install locally and I am seeing a failure when trying to do the application-apply of stx-openstack. My failure is
stx-openstack | 1.0-13-centos-stable-versioned | armada-manifest | manifest.yaml | apply-failed | operation aborted, check logs for detail |
When run a second time, the application-apply works, I have attached
the sysinv.log that should contain both the failure and the success.
I attempted an application-delete and it failed with a vague message (see line 1480 of the log); it seems to have occurred during exception handling in sysinv.common.exception:
Delete of application %(name)s (%(version)s) failed: %(reason)s.
I would like to know from folks if they are seeing a similar issue with having to run application-apply twice?
Thanks Sau!
Hi Saul,

Looks like this thread references sanity from this build:
- http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190522T013...

The determinism checks to enforce and prevent the premature launch of stx-openstack landed the next day:
- http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190523T013...
./cgcs-root/stx/stx-config 4758cdfbd864826d46e6e06571d40693dd040b14 2019-05-22 00:22:50 -0400 Robert Church robert.church@windriver.com
Make rbd-provisioner installation more deterministic

Can you verify which build you are using?

Bob
On 5/23/19 4:34 PM, Church, Robert wrote:
Hi Saul,
Looks like this thread references sanity from this build: - http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190522T013...
The determinism checks to enforce and prevent the premature launch of stx-openstack landed the next day:
- http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190523T013...
./cgcs-root/stx/stx-config 4758cdfbd864826d46e6e06571d40693dd040b14 2019-05-22 00:22:50 -0400 Robert Church robert.church@windriver.com
Make rbd-provisioner installation more deterministic
Can you verify which build you are using?
I am indeed using the older 0522 build; I will try again later with the 0523 build, off for a run now. Thanks for the evening response!
Sau!
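For reference, the build running on a controller can be confirmed from /etc/build.info (standard location on StarlingX installs, as far as I know):

grep -E 'BUILD_ID|BUILD_DATE' /etc/build.info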
Bob
On 5/23/19, 5:22 PM, "Saul Wold" <sgw@linux.intel.com> wrote:
On 5/23/19 2:11 PM, Bailey, Henry Albert (Al) wrote:
> This would be the pass that failed
> 2019-05-23 15:16:42.330 99286 INFO sysinv.conductor.kube_app [-] Application (stx-openstack) apply started.
> 2019-05-23 15:16:43.227 99286 INFO sysinv.conductor.kube_app [-] Secret default-registry-key created under Namespace openstack.
> 2019-05-23 15:16:43.266 99286 ERROR sysinv.common.kubernetes [req-24203373-fa32-407a-ab7a-67c9b4788dc3 admin admin] Failed to copy Secret ceph-pool-kube-rbd from Namespace kube-system to Namespace openstack: (404)
> Reason: Not Found
>
> Which sounds a lot like this bug
> https://bugs.launchpad.net/starlingx/+bug/1828896
>
> That bug was listed as fixed, but also reported as seen a week after the fix was submitted.
> I suspect the bug needs to be reopened.

Huzzah to Al!!
Maybe Bob can take a look at this and comment on why this might still be an issue: is it a timing problem, and do the testing scripts need to be modified to wait for the right actions to complete?
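A quick manual check of the timing theory (illustrative commands; this assumes the rbd-provisioner chart is what creates that secret in kube-system) would be to confirm the secret and the provisioner pod exist before kicking off the apply:

kubectl get secret ceph-pool-kube-rbd -n kube-system
kubectl get pods -n kube-system | grep rbd-provisioner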
Thanks sau!
On 5/23/19 2:03 PM, Perez Carranza, Jose wrote:
-----Original Message----- From: Saul Wold [mailto:sgw@linux.intel.com] Sent: Thursday, May 23, 2019 3:53 PM To: starlingx-discuss@lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190522
On 5/23/19 1:49 PM, Alonso, Juan Carlos wrote:
If you followed the steps on the Wiki, your deployment and the sanity deployment are the same.
I agree with you that we should not have to run the apply twice. The automation has logic to handle this issue: when it appears, the suite executes a re-apply. This is because sanity across all configs takes a long time and we need to have the results; if the apply fails on the second try, the application is left unapplied, the run is marked FAIL, and then we need to debug and open a bug.
This issue is not frequent, and at least on my side I have seen it mostly in the virtual environment; we would have to deploy all the configs manually every day to see whether it is present.
Hmm, I see it every time I run the sanity Provision-Containers test on a fresh environment, every time! So about 10 times in the last couple of days.
So again, what else could be different in our Virtual Environments that would make this fail consistently for me?
Could it be the image download? In the end, the automation uses proxies over a NAT network on the host to download images from the public registry, and this could cause timeouts that make the apply fail, so it would be interesting to check the logs (/var/log/sysinv.log) and verify that it is not failing due to a timeout while downloading images. Our bare metal environments use a local registry, so the download is faster and hence we are not facing those issues.
I am not behind any proxy or NAT; I am working on one of my machines that is connected without going through the Intel proxy. I have attached my logs; maybe you can look at them, as I am not familiar with what to look for at this point. Thanks
Sau!
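As a rough starting point for digging through that log (assuming the default /var/log/sysinv.log location), greps along these lines can show whether the failure was a download timeout or something else:

sudo grep -iE 'timeout|timed out|failed to download' /var/log/sysinv.log | tail -n 50
sudo grep -iE 'apply-failed|operation aborted|ERROR sysinv' /var/log/sysinv.log | tail -n 50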
Regards, José
Thanks for these results, glad to see the Virtual environment mostly working again.

I do have a question. I have tried to reproduce the Ansible-based install locally, and I am seeing a failure when trying to do the application-apply of stx-openstack. My failure is:

stx-openstack | 1.0-13-centos-stable-versioned | armada-manifest | manifest.yaml | apply-failed | operation aborted, check logs for detail |

When run a second time, the application-apply works. I have attached the sysinv.log (too large, ping me if you need it or find a place for it), which should contain both the failure and the success.

I attempted an application-delete and it failed with a vague message (see line 1480 of the log); it seems to have occurred during exception handling in sysinv.common.exception:

Delete of application %(name)s (%(version)s) failed: %(reason)s.

I would like to know from folks if they are seeing a similar issue with having to run application-apply twice?

Thanks
Sau!
participants (9)
- Alonso, Juan Carlos
- Bailey, Henry Albert (Al)
- Church, Robert
- Cordoba Malibran, Erich
- Lemus Contreras, Cristopher J
- Perez Carranza, Jose
- Perez Ibarra, Maria G
- Saul Wold
- Xie, Cindy