[Starlingx-discuss] stx-openstack application applying failed

Waines, Greg Greg.Waines at windriver.com
Mon Jun 28 12:06:17 UTC 2021


Here is what you need to do … we will update this in the docs.
Greg.

# Increase size of cgts-vg LVG in order to increase size of docker fs
export NODE=controller-1

# Identify the root disk and its UUID
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')

# Create a new 35 GiB lvm_phys_vol partition on the root disk and add it to cgts-vg
NEW_SIZE=35
NEW_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NEW_SIZE})
NEW_PARTITION_UUID=$(echo ${NEW_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-pv-add ${NODE} cgts-vg ${NEW_PARTITION_UUID}

# With the extra space in cgts-vg, grow the docker filesystem to 60 GiB
system host-fs-modify ${NODE} docker=60
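
Once that completes, you can verify that the new physical volume landed in cgts-vg and that the docker filesystem reports the new size (a quick sanity check using the standard system CLI listings; exact column layout may vary by release):

# Physical volumes in cgts-vg on the node
system host-pv-list ${NODE}

# Host filesystems, including the docker fs size
system host-fs-list ${NODE}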



From: Embedded Devel <lists at optimcloud.com>
Sent: Sunday, June 27, 2021 7:18 AM
To: Waines, Greg <Greg.Waines at windriver.com>; open infra <openinfradn at gmail.com>; Zvonar, Bill <Bill.Zvonar at windriver.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] stx-openstack application applying failed



On Saturday 26 June 2021 20:17:36 PM (+07:00), Waines, Greg wrote:
Agreed that increasing the docker fs size is not documented well … especially the case where you need to grow the cgts-vg logical volume group in order to increase the docker filesystem size. We have plans to fix this.

Yup, it seems this is exactly what I need right now too, as I'm running into the following:


system host-fs-modify controller-0 docker=60
HostFs update failed: Not enough free space on cgts-vg. Current free space 16 GiB, requested total increase 30 GiB
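
That error can be anticipated by checking how much free space cgts-vg has before attempting the resize; a minimal sketch (column names may differ slightly between releases):

# Current and available size of the cgts-vg volume group
system host-lvg-list controller-0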




Greg.

From: open infra <openinfradn at gmail.com>
Sent: Friday, June 25, 2021 10:30 AM
To: Zvonar, Bill <Bill.Zvonar at windriver.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] stx-openstack application applying failed

Finally managed to deploy OpenStack, but I'm not sure what caused the issue.
I increased the disk capacity for docker on the worker and reviewed the network.

On Thu, Jun 24, 2021 at 1:59 PM open infra <openinfradn at gmail.com> wrote:
I managed to deploy stx-monitoring, which requires labelling only on the controller nodes.
Definitely something is wrong with the worker-0 labelling.


On Wed, Jun 23, 2021 at 8:52 PM open infra <openinfradn at gmail.com> wrote:
Here is more information about the issue.
http://paste.openstack.org/show/806872/

Then I set the openstack-compute-node label on controller-0 and re-applied stx-openstack (just to test).
The stx-openstack apply then progressed up to 55%.

I can lock/unlock worker-0 via the controller nodes, so it should not be a problem with the management network.
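
As a cross-check, host and node health can be compared from both sides; a hedged sketch, assuming the Kubernetes node name matches the host name:

# Host availability as seen by StarlingX
system host-list

# Node readiness and labels as seen by Kubernetes
kubectl get nodes --show-labels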


On Mon, Jun 21, 2021 at 4:40 PM open infra <openinfradn at gmail.com> wrote:
Thank you Bill and Thiago.
Now I have switched to Release 5.
Don't we need to set the following labels for a Release 5 deployment if we intend to deploy stx-openstack?

For controllers:

system host-label-assign $NODE openstack-control-plane=enabled

For worker nodes:

system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled

These labels are not mentioned in the Release 5 installation guide.
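
For reference, a hedged sketch of assigning and then verifying those labels (the host names controller-0/controller-1/worker-0 are assumptions for a standard dedicated-storage lab):

# Controllers
for NODE in controller-0 controller-1; do
  system host-label-assign ${NODE} openstack-control-plane=enabled
done

# Worker nodes
for NODE in worker-0; do
  system host-label-assign ${NODE} openstack-compute-node=enabled
  system host-label-assign ${NODE} openvswitch=enabled
done

# Confirm what was applied
system host-label-list worker-0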

On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill <Bill.Zvonar at windriver.com> wrote:
Hi again Danishka, we discussed this too.

It was suggested that you check /var/log/armada to see if there are any Armada startup logs that would help understand what's going on.

Thanks, Bill...
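
In case it helps, a minimal sketch of that check on the active controller (no assumptions beyond the /var/log/armada path mentioned above):

# List the Armada logs, newest first, then tail the most recent one
ls -lt /var/log/armada/
sudo tail -n 200 "$(ls -t /var/log/armada/* | head -1)"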

From: open infra <openinfradn at gmail.com>
Sent: Saturday, May 22, 2021 2:28 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] stx-openstack application applying failed

Hi,

I have deployed StarlingX R4 (bare metal dedicated storage installation).
The stx-openstack application apply failed.

When I list the openstack pods, I can see that osh-openstack-garbd-garbd-7d4957d9f4-kz95v is stuck in Pending.
I have re-uploaded stx-openstack, but got the same result.
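
For reference, the Pending pod can be inspected with standard kubectl from the active controller (a minimal sketch using the pod name reported above):

# Pod status and the scheduler events explaining why it is Pending
kubectl -n openstack get pod osh-openstack-garbd-garbd-7d4957d9f4-kz95v -o wide
kubectl -n openstack describe pod osh-openstack-garbd-garbd-7d4957d9f4-kz95v

# Recent events in the openstack namespace
kubectl -n openstack get events --sort-by=.metadata.creationTimestamp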

I would highly appreciate it if someone could help resolve this matter, as we have a demo next week.

More details available here.

describe pod osh-openstack-garbd-garbd: http://paste.openstack.org/show/805587/
describe nodes: http://paste.openstack.org/show/805589/

Regards,
Danishka


