[Starlingx-discuss] stx-openstack application applying failed

Waines, Greg Greg.Waines at windriver.com
Tue Jun 29 11:25:58 UTC 2021


when I cut & paste your output into    grep rootfs | awk '{print $4}'
I get
/dev/disk/by-path/pci-0000:00:17.0-ata-1.0

the command is basically setting ROOT_DISK to the value of rootfs_device
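
To make the parsing explicit, here is a tiny illustration (mine, not part of the original instructions): the table row is split on whitespace, so the pipe characters count as fields and the device path lands in awk field $4.

# Illustration only: run the rootfs_device row through the same pipeline
echo "| rootfs_device | /dev/disk/by-path/pci-0000:00:17.0-ata-1.0 |" \
  | grep rootfs | awk '{print $4}'
# fields: $1 = "|", $2 = "rootfs_device", $3 = "|", $4 = the device path
# prints: /dev/disk/by-path/pci-0000:00:17.0-ata-1.0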

Greg.

From: Embedded Devel <lists at optimcloud.com>
Sent: Monday, June 28, 2021 8:15 AM
To: Waines, Greg <Greg.Waines at windriver.com>; open infra <openinfradn at gmail.com>; Zvonar, Bill <Bill.Zvonar at windriver.com>
Cc: starlingx-discuss at lists.starlingx.io; Camp, MaryX <maryx.camp at intel.com>
Subject: Re: RE: [Starlingx-discuss] stx-openstack application applying failed

On Monday 28 June 2021 19:06:17 PM (+07:00), Waines, Greg wrote:
Here is what you need to do … we will update this in docs.
Greg.

# Increase size of cgts-vg LVG in order to increase size of docker fs
export NODE=controller-1
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
NEW_SIZE=35
NEW_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NEW_SIZE})
NEW_PARTITION_UUID=$(echo ${NEW_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-pv-add ${NODE} cgts-vg ${NEW_PARTITION_UUID}

system host-fs-modify controller-1 docker=60
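
If it helps, here is a sketch of how you could verify the result afterwards; these are standard sysinv commands, but the check itself is an addition, not part of the original steps.

# Confirm the new partition exists and was added to cgts-vg as a PV
system host-disk-partition-list ${NODE}
system host-pv-list ${NODE}
# Confirm the docker filesystem now reports the requested size
system host-fs-list ${NODE}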


ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')  fails on my stx 5.0 simplex

[sysadmin at controller-0 ~(keystone_admin)]$ system host-show ${NODE}
+-----------------------+----------------------------------------------------------------------+
| Property              | Value                                                                |
+-----------------------+----------------------------------------------------------------------+
| action                | none                                                                 |
| administrative        | unlocked                                                             |
| availability          | available                                                            |
| bm_ip                 | None                                                                 |
| bm_type               | none                                                                 |
| bm_username           | None                                                                 |
| boot_device           | /dev/disk/by-path/pci-0000:00:17.0-ata-1.0                           |
| capabilities          | {u'stor_function': u'monitor', u'Personality': u'Controller-Active'} |
| clock_synchronization | ntp                                                                  |
| config_applied        | 65c1c5ac-546a-45fc-8d82-e9644f1930a2                                 |
| config_status         | None                                                                 |
| config_target         | 65c1c5ac-546a-45fc-8d82-e9644f1930a2                                 |
| console               | tty0                                                                 |
| created_at            | 2021-06-26T10:51:25.595104+00:00                                     |
| device_image_update   | None                                                                 |
| hostname              | controller-0                                                         |
| id                    | 1                                                                    |
| install_output        | text                                                                 |
| install_state         | None                                                                 |
| install_state_info    | None                                                                 |
| inv_state             | inventoried                                                          |
| invprovision          | provisioned                                                          |
| location              | {}                                                                   |
| mgmt_ip               | 192.168.204.2                                                        |
| mgmt_mac              | 00:00:00:00:00:00                                                    |
| operational           | enabled                                                              |
| personality           | controller                                                           |
| reboot_needed         | False                                                                |
| reserved              | False                                                                |
| rootfs_device         | /dev/disk/by-path/pci-0000:00:17.0-ata-1.0                           |
| serialid              | None                                                                 |
| software_load         | 21.05                                                                |
| subfunction_avail     | available                                                            |
| subfunction_oper      | enabled                                                              |
| subfunctions          | controller,worker                                                    |
| task                  |                                                                      |
| tboot                 | false                                                                |
| ttys_dcd              | None                                                                 |
| updated_at            | 2021-06-28T12:13:20.119581+00:00                                     |
| uptime                | 15240                                                                |
| uuid                  | 2b237d4f-fc3d-4f83-bdf2-b2689469b89e                                 |
| vim_progress_status   | services-enabled                                                     |
+-----------------------+----------------------------------------------------------------------+
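
The rootfs_device row is clearly present in that output, so the grep should match; a hedged guess is that the quotes around '{print $4}' were pasted as curly quotes from the mail client. An alternative one-liner that skips the separate grep (a suggestion, not from the original instructions):

# On a simplex system the node is controller-0
export NODE=controller-0
ROOT_DISK=$(system host-show ${NODE} | awk '/rootfs_device/ {print $4}')
echo ${ROOT_DISK}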






From: Embedded Devel <lists at optimcloud.com>
Sent: Sunday, June 27, 2021 7:18 AM
To: Waines, Greg <Greg.Waines at windriver.com>; open infra <openinfradn at gmail.com>; Zvonar, Bill <Bill.Zvonar at windriver.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] stx-openstack application applying failed

On Saturday 26 June 2021 20:17:36 PM (+07:00), Waines, Greg wrote:
Agreed that increasing the docker fs is not documented well … especially if you need to increase the cgts-vg logical volume group in order to increase the docker filesystem size.  We have plans to fix this.

Yup, seems this is exactly what I need right now, as I'm running into the following:


system host-fs-modify controller-0 docker=60
HostFs update failed: Not enough free space on cgts-vg. Current free space 16 GiB, requested total increase 30 GiB
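
Reading that error: raising docker to 60 GiB needs another 30 GiB in cgts-vg, but only 16 GiB is free, which is why the partition/PV steps above are needed first. A quick way to see the current numbers (a sketch using standard sysinv commands, not quoted from the thread):

# Current sizes of the host filesystems backed by cgts-vg
system host-fs-list controller-0
# Free and used space in the cgts-vg volume group
system host-lvg-list controller-0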




Greg.

From: open infra <openinfradn at gmail.com>
Sent: Friday, June 25, 2021 10:30 AM
To: Zvonar, Bill <Bill.Zvonar at windriver.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] stx-openstack application applying failed

Finally managed to deploy OpenStack, but I am not sure what caused the issue.
I increased the disk capacity for docker on the worker and reviewed the network.
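
For reference, a sketch of what the worker-side resize typically looks like; the worker name and target size here are assumptions for illustration, not taken from the thread.

# Depending on the release, the worker may need to be locked before resizing
system host-lock worker-0
system host-fs-modify worker-0 docker=60
system host-unlock worker-0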

On Thu, Jun 24, 2021 at 1:59 PM open infra <openinfradn at gmail.com> wrote:
I managed to deploy stx-monitoring, which requires labelling only on the controller nodes.
There is definitely something wrong with the worker-0 labelling.


On Wed, Jun 23, 2021 at 8:52 PM open infra <openinfradn at gmail.com> wrote:
Here is more information about the issue.
http://paste.openstack.org/show/806872/

Then I set the openstack-compute-node label on controller-0 and re-applied stx-openstack (just to test).
The stx-openstack apply then progressed up to 55%.
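
For completeness, that test amounts to something like the following (a sketch; these are the standard sysinv commands):

system host-label-assign controller-0 openstack-compute-node=enabled
system application-apply stx-openstack
# Watch the apply progress and status
system application-show stx-openstack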

I can lock/unlock worker-0 via the controller nodes, so it should not be a problem with the management network.


On Mon, Jun 21, 2021 at 4:40 PM open infra <openinfradn at gmail.com> wrote:
Thank you Bill and Thiago.
Now I have switched to Release 5.
Don't we need to set the following labels for a release 5 deployment if we intend to deploy stx-openstack?

for controllers:

system host-label-assign $NODE openstack-control-plane=enabled

For worker nodes:

system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled

These labels are not mentioned in the release 5 installation guide.
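
One way to confirm which labels each host actually has (a sketch using the standard sysinv CLI, not quoted from the guide):

# List the labels currently assigned to a host
system host-label-list controller-0
system host-label-list worker-0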

On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill <Bill.Zvonar at windriver.com> wrote:
Hi again Danishka, we discussed this too.

It was suggested that you check /var/log/armada to see if there are any Armada startup logs that would help understand what's going on.
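
A hedged example of what that check might look like on the active controller; the exact directory and file names are an assumption here.

# List the Armada logs and tail the most recent one
sudo ls -l /var/log/armada/
sudo tail -n 100 /var/log/armada/armada.log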

Thanks, Bill...

From: open infra <openinfradn at gmail.com>
Sent: Saturday, May 22, 2021 2:28 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] stx-openstack application applying failed

Hi,

I have deployed StarlingX R4 (bare metal dedicated storage installation).
The stx-openstack application apply failed.

When I list the OpenStack pods, I can see that osh-openstack-garbd-garbd-7d4957d9f4-kz95v is stuck in Pending.
I have re-uploaded stx-openstack but got the same result.
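
For anyone following along, the usual way to dig into a Pending pod is kubectl describe plus a look at the nodes (a generic sketch, not taken from the thread):

# Show pods in the openstack namespace that are still Pending
kubectl -n openstack get pods --field-selector=status.phase=Pending
# The Events section usually explains why scheduling failed
kubectl -n openstack describe pod osh-openstack-garbd-garbd-7d4957d9f4-kz95v
# Check node labels, taints, and allocatable resources
kubectl describe nodes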

I would highly appreciate it if someone could help resolve this matter, as we have a demo next week.

More details available here.

describe pod osh-openstack-garbd-garbd
http://paste.openstack.org/show/805587/
describe nodes http://paste.openstack.org/show/805589/

Regards,
Danishka
