[Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff status

open infra openinfradn at gmail.com
Tue Aug 3 04:42:41 UTC 2021


I uploaded all the files yesterday.

On Mon, Aug 2, 2021 at 6:15 AM Sun, Austin <austin.sun at intel.com> wrote:

> I suggest you split it into smaller files and upload them.
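>
> For example, a minimal sketch (the tarball name below is only a
> placeholder for your actual collect output):
>
>     split -b 100M ALL_NODES_<timestamp>.tar collect-part-
>
> Each resulting collect-part-* piece can be attached to the bug separately
> and rejoined later with "cat collect-part-* > collected.tar".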
>
>
>
>
>
>
>
> *From:* open infra <openinfradn at gmail.com>
> *Sent:* Friday, July 30, 2021 9:17 PM
> *To:* Sun, Austin <austin.sun at intel.com>
> *Cc:* starlingx-discuss at lists.starlingx.io
> *Subject:* Re: [Starlingx-discuss] Openstack applying failed and
> nova-compute-worker-0 pod with CrashLoopBackOff status
>
>
>
> I am unable to upload the file.
>
>
>
>
>
> On Fri, Jul 30, 2021 at 2:15 PM open infra <openinfradn at gmail.com> wrote:
>
> I just asked because the file size is almost 1 GB.
>
> Let me try.
>
>
>
> On Fri, Jul 30, 2021 at 1:30 PM Sun, Austin <austin.sun at intel.com> wrote:
>
> You can click “Add attachment or patch”
> <https://bugs.launchpad.net/starlingx/+bug/1938508/+addcomment> directly
> at the bottom of the bug page.
>
>
>
>
>
> *From:* open infra <openinfradn at gmail.com>
> *Sent:* Friday, July 30, 2021 3:46 PM
> *To:* Sun, Austin <austin.sun at intel.com>
> *Cc:* starlingx-discuss at lists.starlingx.io
> *Subject:* Re: [Starlingx-discuss] Openstack applying failed and
> nova-compute-worker-0 pod with CrashLoopBackOff status
>
>
>
> Hi Austin,
>
>
>
> Sorry for the delay. Bug ID is 1938508.
>
> Is there a Google Drive or similar location available for uploading the
> tar file of the 'collect' output?
>
>
>
> Regards,
>
> Danishka
>
>
>
> On Fri, Jul 30, 2021 at 5:58 AM Sun, Austin <austin.sun at intel.com> wrote:
>
> Hi Danishka:
>
>      Would you let us know the bug number once you have created it?
>
>
>
> Thanks.
>
> BR
> Austin Sun.
>
>
>
> *From:* open infra <openinfradn at gmail.com>
> *Sent:* Thursday, July 29, 2021 11:16 AM
> *To:* Sun, Austin <austin.sun at intel.com>
> *Cc:* starlingx-discuss at lists.starlingx.io
> *Subject:* Re: [Starlingx-discuss] Openstack applying failed and
> nova-compute-worker-0 pod with CrashLoopBackOff status
>
>
>
> Thanks, Austin. I will file a bug.
>
>
>
> On Thu, Jul 29, 2021 at 6:25 AM Sun, Austin <austin.sun at intel.com> wrote:
>
> Hi Danishka:
>
> I checked the three pieces of log you shared, but it’s hard to find any
> hint to triage the issue.
>
> Most likely something is wrong on the worker nodes.
>
> Could you please report a bug [1] and upload the full logs for the
> controller and the workers?
>
>
>
> FYI: one way to collect the logs is to run “collect --all” from
> controller-0, which will gather all the necessary info from the system.
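>
> A minimal sketch of that, run as sysadmin on the active controller (exact
> behaviour may differ slightly between releases):
>
>     collect --all
>
> The resulting tarball is normally written to /scratch on controller-0 and
> is what should be attached to the bug.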
>
>
>
> [1] https://bugs.launchpad.net/starlingx/+bugs
>
>
>
>
>
> Thanks.
>
> BR
> Austin Sun.
>
>
>
> *From:* open infra <openinfradn at gmail.com>
> *Sent:* Monday, July 26, 2021 8:58 PM
> *To:* starlingx-discuss at lists.starlingx.io
> *Subject:* [Starlingx-discuss] Openstack applying failed and
> nova-compute-worker-0 pod with CrashLoopBackOff status
>
>
>
> Hi,
>
>
>
> After rebooting the entire stx (r5, standard dedicated storage)
> environment, I noticed that the OpenStack VMs cannot start and the
> hypervisor status is down (we have only one worker node).
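>
> (The down state can be confirmed with the OpenStack CLI, however it is set
> up in your deployment, e.g.:
>
>     openstack hypervisor list
>
> which should list the single worker's hypervisor with State "down".)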
>
>
>
> Furthermore, the stx-openstack apply failed because the
> nova-compute-worker-0-13cc482d-7t9kq pod was not ready [1] and its status
> is CrashLoopBackOff [2]. Here is the description of the pod [3].
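>
> (For reference, a minimal sketch of how a pod in this state can be
> inspected, assuming the stx-openstack "openstack" namespace:
>
>     kubectl -n openstack get pods | grep nova-compute
>     kubectl -n openstack describe pod nova-compute-worker-0-13cc482d-7t9kq
>     kubectl -n openstack logs nova-compute-worker-0-13cc482d-7t9kq \
>         --all-containers --previous
>
> The --previous flag prints the log of the last crashed container instance.)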
>
>
>
>
>
> The VMs were created on nova-local and mount a shared volume, which is a
> Ceph-based volume. Locking and unlocking the worker and re-applying
> stx-openstack didn't fix the issue.
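>
> (A sketch of those steps, assuming worker-0 is the single worker and the
> platform openrc has been sourced on controller-0:
>
>     system host-lock worker-0
>     system host-unlock worker-0
>     system application-apply stx-openstack
>
> Progress of the re-apply can be watched with "system application-show
> stx-openstack".)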
>
>
>
> I would appreciate any hint or suggestion on how to fix this issue and
> avoid similar issues in the future.
>
>
>
> [1] https://paste.opendev.org/show/807707/
>
> [2] https://paste.opendev.org/show/807705/
>
> [3] https://paste.opendev.org/show/807704/
>
>
>
> Regards,
>
> Danishka
>
>