[Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff status

Sun, Austin austin.sun at intel.com
Fri Jul 30 00:28:00 UTC 2021


Hi Danishka:
     Would you share the bug number once you have created the bug?

Thanks.
BR
Austin Sun.

From: open infra <openinfradn at gmail.com>
Sent: Thursday, July 29, 2021 11:16 AM
To: Sun, Austin <austin.sun at intel.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff status

Thanks, Austin. I will file a bug.

On Thu, Jul 29, 2021 at 6:25 AM Sun, Austin <austin.sun at intel.com> wrote:
Hi Danishka:
I checked the three log excerpts you shared, but it’s hard to find any hint in them to triage the issue. Most likely something is wrong on the worker node. Would you report a bug [1] and upload all the logs from the controller and worker nodes?

FYI: One way to collect the logs is to run “collect --all” from controller-0, which will gather all the necessary info from the system; a rough example is below.

[1] https://bugs.launchpad.net/starlingx/+bugs
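
For reference, running the collect tool from controller-0 looks roughly like this (a sketch; the exact options and output location may vary by release):

    controller-0:~$ collect --all

The resulting tarball is typically written under /scratch and can be attached to the Launchpad bug.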


Thanks.
BR
Austin Sun.

From: open infra <openinfradn at gmail.com>
Sent: Monday, July 26, 2021 8:58 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff status

Hi,

After rebooting the entire StarlingX (r5, standard with dedicated storage) environment, I noticed that the OpenStack VMs cannot start and that the hypervisor status is down (we have only one worker node).
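
For what it's worth, the hypervisor state was checked roughly as follows (a sketch; this assumes OpenStack CLI access is set up via the usual clouds.yaml / OS_CLOUD method, which may differ in your environment):

    $ export OS_CLOUD=openstack_helm
    $ openstack hypervisor list

which lists the single worker's state as down.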

Furthermore, the stx-openstack apply failed because the nova-compute-worker-0-13cc482d-7t9kq pod was not ready [1] and its status is CrashLoopBackOff [2]. The pod description is at [3].
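
The pod status and description in [1]-[3] below were gathered with the usual kubectl commands, roughly:

    $ kubectl -n openstack get pods | grep nova-compute
    $ kubectl -n openstack describe pod nova-compute-worker-0-13cc482d-7t9kq
    $ kubectl -n openstack logs nova-compute-worker-0-13cc482d-7t9kq --previous

(--previous shows the log of the last crashed container, which is usually the most useful for a CrashLoopBackOff; add -c <container> if the pod has more than one container.)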


The VMs were created using nova-local storage and mount a shared Ceph-based volume. Locking and unlocking the worker and re-applying stx-openstack didn't fix the issue.
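
The recovery attempt was roughly the following sequence, run from controller-0 (a sketch; the host name worker-0 is assumed):

    $ system host-lock worker-0
    $ system host-unlock worker-0
    $ system application-apply stx-openstack

and the apply still ended up failed.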

I would appreciate any hints or suggestions on how to fix this issue and avoid similar issues in the future.

[1] https://paste.opendev.org/show/807707/
[2] https://paste.opendev.org/show/807705/
[3] https://paste.opendev.org/show/807704/

Regards,
Danishka

