[Starlingx-discuss] DUPLEX: Controller-1 is not adding nova-local on initial unlock
Image: centos 20190325, Duplex

Hi,

following the instructions from
https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnAIODX
controller-1 is not adding nova-local, and the OSD is in the state
"configuring-on-unlock".

In the boot log I see that controller-1 has a problem retrieving the
installation uuid from the active controller-0:

[   20.549131] controller_config[7585]: Checking connectivity to controller-platform-nfs for up to 70 seconds over interface 172.27.1.4
         Starting Titanium Cloud libvirt QEMU cleanup...
[   20.581204] controller_config[7585]: *****************************************************
[  OK  ] Started Titanium Cloud libvirt QEMU cleanup.
[   20.611110] controller_config[7585]: *****************************************************
[   20.632070] controller_config[7585]: Unable to retrieve installation uuid from active controller
[   20.643064] controller_config[7585]: *****************************************************
[   20.654068] controller_config[7585]: *****************************************************
[   20.665357] controller_config[7585]: Pausing for 5 seconds...
[  OK  ] Started Crash recovery kernel arming.
[  OK  ] Started Titanium Cloud Affine Platform.

Additionally, in the Web UI I see the following for the local volume group:

    nova-local    adding (on unlock)    -    0.0    0.0    0    0

and the related PV list shows:

    /dev/nvme0n1p5
      Info
        UUID                        280ce844-d5d5-4f26-bc80-2b9f64836b93
        State                       adding
        Type                        partition
        Volume Group Name           nova-local
        Physical Volume UUID        None
        Physical Volume Name        /dev/nvme0n1p5
        Physical Volume Size        0 bytes
        Physical Extents Total      0
        Physical Extents Allocated  0

Is it normal that the sizes are 0 GB at this stage? I had configured the
partition with 500 GB.

The output from "system host-list":

+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 5  | controller-1 | controller  | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+

and "ceph -s":

[wrsroot@controller-0 ~(keystone_admin)]$ ceph -s
    cluster 1769254c-c103-460e-afb3-c8368993b91a
     health HEALTH_WARN
            448 pgs degraded
            448 pgs stuck degraded
            448 pgs stuck unclean
            448 pgs stuck undersized
            448 pgs undersized
            recovery 1116/2232 objects degraded (50.000%)
     monmap e1: 1 mons at {controller=172.27.1.2:6789/0}
            election epoch 5, quorum 0 controller
     osdmap e24: 2 osds: 1 up, 1 in
            flags sortbitwise,require_jewel_osds
      pgmap v1056: 448 pgs, 7 pools, 1588 bytes data, 1116 objects
            76376 kB used, 952 GB / 952 GB avail
            1116/2232 objects degraded (50.000%)
                 448 active+undersized+degraded

Any idea or hint is welcome!

Thanks

Marcel
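P.S.: For completeness, the same state can also be checked from the CLI; a
minimal sketch (controller-1 as in my setup, exact output will differ):

    # local volume groups on controller-1 (nova-local is the one stuck in "adding (on unlock)")
    system host-lvg-list controller-1

    # physical volumes, including the one on /dev/nvme0n1p5 in state "adding"
    system host-pv-list controller-1

    # disk partitions backing the PV (the 500 GB partition should show up here)
    system host-disk-partition-list controller-1

    # storage (OSD) configuration on controller-1 and the ceph view of the OSDs
    system host-stor-list controller-1
    ceph osd tree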
participants (1)
- Marcel Schaible