[Starlingx-discuss] OpenStack apply failed and nova-compute-worker-0 pod in CrashLoopBackOff status
Hi,

After rebooting the entire StarlingX (r5, standard dedicated storage) environment, I noticed that the OpenStack VMs cannot start and the hypervisor status is down (we have only one worker node). Furthermore, the stx-openstack apply failed because the nova-compute-worker-0-13cc482d-7t9kq pod was not ready [1] and its status is CrashLoopBackOff [2]. Here is the description of the pod [3].

The VMs were created using nova-local and mount a shared volume, which is a Ceph-based volume. Locking and unlocking the worker plus re-applying stx-openstack didn't fix the issue.

I would appreciate any hint or suggestion to fix this issue and to avoid similar issues in the future.

[1] https://paste.opendev.org/show/807707/
[2] https://paste.opendev.org/show/807705/
[3] https://paste.opendev.org/show/807704/

Regards,
Danishka
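For anyone retracing the recovery attempt above, the lock/unlock and re-apply cycle corresponds roughly to the following commands on controller-0 (a sketch; the host, application, and pod names are taken from this thread):

  # Lock and unlock the worker to force it through a fresh configuration cycle
  system host-lock worker-0
  system host-unlock worker-0

  # Re-apply the stx-openstack application and watch its progress
  system application-apply stx-openstack
  system application-show stx-openstack

  # Check whether the nova-compute pod recovers
  kubectl -n openstack get pods | grep nova-compute
  kubectl -n openstack describe pod nova-compute-worker-0-13cc482d-7t9kq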
I tried to start the VMs directly on the hypervisor, but no luck:

worker-0:~# virsh start instance-00000005
error: Failed to start domain instance-00000005
error: Secret not found: no secret with matching uuid '457eb676-33da-42ec-9a8c-9293d545c337'

worker-0:~# virsh start instance-00000052
error: Failed to start domain instance-00000052
error: Secret not found: no secret with matching uuid '457eb676-33da-42ec-9a8c-9293d545c337'

I could not figure out which object has the UUID 457eb676-33da-42ec-9a8c-9293d545c337.
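The missing UUID is most likely a libvirt secret holding the Ceph client key that libvirt needs to attach RBD-backed volumes. With stx-openstack it is normally created automatically when the libvirt/nova-compute pods start, so it can be absent while those pods are crash-looping. A sketch of how one might inspect and, if necessary, redefine it on the worker (the client.cinder name is an assumption, not confirmed by this thread):

  # See which secrets libvirt currently knows about
  worker-0:~# virsh secret-list

  # If the secret is gone, redefine it with the UUID the domains expect...
  worker-0:~# cat > ceph-secret.xml <<'EOF'
  <secret ephemeral='no' private='no'>
    <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
    <usage type='ceph'>
      <name>client.cinder secret</name>
    </usage>
  </secret>
  EOF
  worker-0:~# virsh secret-define --file ceph-secret.xml

  # ...then load the matching Ceph key into it (client name assumed)
  worker-0:~# virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
      --base64 "$(ceph auth get-key client.cinder)"

That said, the cleaner fix is to get the crash-looping pods healthy again, since they normally manage this secret.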
Hi Danishka,

I checked the three pieces of log you shared, but it's hard to find any hint to triage the issue. Most likely something is wrong on the worker node. Could you report a bug [1] and upload all the logs from the controllers and workers?

FYI: one way to collect the logs is to run "collect --all" from controller-0, which will gather all the necessary info from the system.

[1] https://bugs.launchpad.net/starlingx/+bugs

Thanks.
BR Austin Sun.
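For reference, a minimal sketch of that collection step (run as sysadmin on controller-0; the output path and bundle name below are assumptions, the tool prints the actual location when it finishes):

  # Gather logs and config from every host in the system into one bundle
  controller-0:~$ collect --all
  # Example result (name/path assumed): /scratch/ALL_NODES_20210729.153000.tar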
Thanks, Austin. I will file a bug.
Hi Danishka,

Could you let me know the bug number once you have created it?

Thanks.
BR Austin Sun.
Hi Austin,

Sorry for the delay. The bug ID is 1938508.
Is there a Google Drive or similar location available for uploading the tar file of the 'collect' output?

Regards,
Danishka
You can directly click "Add attachment or patch" <https://bugs.launchpad.net/starlingx/+bug/1938508/+addcomment> at the bottom of the bug page.
I just asked because the file size is almost 1 GB. Let me try.
I am unable to upload the file.
I suggest you split it into smaller files and upload those.
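A sketch of one way to do that with standard coreutils (the bundle name is hypothetical, and the 90 MB chunk size is an arbitrary choice to stay under typical attachment limits):

  # Split the ~1 GB collect bundle into numbered ~90 MB chunks
  split -b 90M -d ALL_NODES_20210730.tar ALL_NODES_20210730.tar.part

  # Attach each resulting .part00, .part01, ... file to the bug separately.
  # To reassemble after download:
  cat ALL_NODES_20210730.tar.part* > ALL_NODES_20210730.tar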
I uploaded all the files yesterday.
participants (2)
- open infra
- Sun, Austin