[Starlingx-discuss] Duplex: CEPH HEALTH_WARN after initial unlock of controller-0
Hi,

after unlocking controller-0 in a Duplex configuration ceph shows a HEALTH_WARN:

[wrsroot@controller-0 ~(keystone_admin)]$ ceph -s
    cluster b6a2009b-7857-4bb6-835a-479b6ececb63
     health HEALTH_WARN
            448 pgs degraded
            448 pgs stuck unclean
            448 pgs undersized
            recovery 1116/2232 objects degraded (50.000%)
     monmap e1: 1 mons at {controller=172.27.1.100:6789/0}
            election epoch 5, quorum 0 controller
     osdmap e22: 2 osds: 1 up, 1 in
            flags sortbitwise,require_jewel_osds
      pgmap v1976: 448 pgs, 7 pools, 1588 bytes data, 1116 objects
            91396 kB used, 952 GB / 952 GB avail
            1116/2232 objects degraded (50.000%)
                 448 active+undersized+degraded

Is this expected and when should the health status switch to HEALTH_OK?

We use the image from 20190429.

Thanks

Marcel
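For anyone landing on the same state, a minimal sketch of how to inspect it, assuming the standard Ceph CLI from the Jewel release shown in the output above (pool names and counts will differ per deployment):

  # Explain why the cluster is in HEALTH_WARN (lists the degraded/undersized PGs)
  ceph health detail

  # Show how OSDs map onto hosts; the output above has 2 OSDs with only 1 up/in
  ceph osd tree

  # Show each pool's replication settings; with the default host failure domain,
  # "size 2" needs an up OSD on two separate hosts before PGs can go active+clean
  ceph osd pool ls detail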
Hi Marcel,

Until the second node of the duplex configuration is configured and unlocked, one expects the ceph cluster to be "unhealthy", yes.

===

It seems to me that the documentation is not obvious on that subject, but I'm not sure it needs to be. If I understand the examples correctly, on AIO-Simplex (one node), we set the "Ceph pool replication to 1. This step applies to AIO-SX only.", and therefore we expect ceph to become healthy for the simplex configuration. But with duplex, your pool replication is 2... so it is waiting for the second cluster node to come up. (A quick check of the configured replication is sketched after the quoted message below.)

Duplex instruction: https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnAIODX#Configure_Ceph_for_Controller-0

Duplex then refers to a portion of the simplex instruction for ceph configuration: https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Configure_Ceph_for_Controller-0

M

On Thu, 2019-05-02 at 15:20 +0200, Marcel Schaible wrote:
Hi,
after unlocking controller-0 in a Duplex configuration ceph shows a HEALTH_WARN:

[wrsroot@controller-0 ~(keystone_admin)]$ ceph -s
    cluster b6a2009b-7857-4bb6-835a-479b6ececb63
     health HEALTH_WARN
            448 pgs degraded
            448 pgs stuck unclean
            448 pgs undersized
            recovery 1116/2232 objects degraded (50.000%)
     monmap e1: 1 mons at {controller=172.27.1.100:6789/0}
            election epoch 5, quorum 0 controller
     osdmap e22: 2 osds: 1 up, 1 in
            flags sortbitwise,require_jewel_osds
      pgmap v1976: 448 pgs, 7 pools, 1588 bytes data, 1116 objects
            91396 kB used, 952 GB / 952 GB avail
            1116/2232 objects degraded (50.000%)
                 448 active+undersized+degraded
Is this expected and when should the health status switch to HEALTH_OK?
We use the image from 20190429.
Thanks
Marcel
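A quick way to check the replication Michel describes, sketched with the stock Ceph CLI (the pool name below is only an example; StarlingX creates several pools):

  # List every pool with its replication settings; "size 2" means two copies,
  # which cannot be satisfied until an OSD on controller-1 joins the cluster
  ceph osd pool ls detail

  # Or query a single pool explicitly (pool name here is an example)
  ceph osd pool get cinder-volumes size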
Hi Michel,

thanks for your reply. In the meantime we got controller-1 up and running and ceph shows a healthy status.

Marcel
Michel Thebeau <michel.thebeau@windriver.com> wrote on 7 May 2019 at 16:45:
Hi Marcel,
Until the second node of the duplex configuration is configured and unlocked, one expects the ceph cluster to be "unhealthy", yes.
===
It seems to me that the documentation is not obvious on that subject, but I'm not sure it needs to be. If I understand the examples correctly, on AIO-Simplex (one node), we set the "Ceph pool replication to 1. This step applies to AIO-SX only.", and therefore we expect ceph to become healthy for the simplex configuration. But with duplex, your pool replication is 2... so it is waiting for the second cluster node to come up.
Duplex instruction: https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnAIODX#Configure_Ceph_for_Controller-0

Duplex then refers to a portion of the simplex instruction for ceph configuration: https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Configure_Ceph_for_Controller-0
M
On Thu, 2019-05-02 at 15:20 +0200, Marcel Schaible wrote:
Hi,
after unlocking controller-0 in a Duplex configuration ceph shows a HEALTH_WARN:

[wrsroot@controller-0 ~(keystone_admin)]$ ceph -s
    cluster b6a2009b-7857-4bb6-835a-479b6ececb63
     health HEALTH_WARN
            448 pgs degraded
            448 pgs stuck unclean
            448 pgs undersized
            recovery 1116/2232 objects degraded (50.000%)
     monmap e1: 1 mons at {controller=172.27.1.100:6789/0}
            election epoch 5, quorum 0 controller
     osdmap e22: 2 osds: 1 up, 1 in
            flags sortbitwise,require_jewel_osds
      pgmap v1976: 448 pgs, 7 pools, 1588 bytes data, 1116 objects
            91396 kB used, 952 GB / 952 GB avail
            1116/2232 objects degraded (50.000%)
                 448 active+undersized+degraded
Is this expected and when should the health status switch to HEALTH_OK?
We use the image from 20190429.
Thanks
Marcel
Yes, it's expected. To ensure HA on AIO-DX, data is replicated between controller nodes, so until you get controller-1 up ceph will be in health warn. With replication set to 2, you need at least one OSD on each controller.

________________________________________
From: Marcel Schaible [marcel@schaible-consulting.de]
Sent: Thursday, May 02, 2019 4:20 PM
To: starlingx-discuss@lists.starlingx.io
Subject: [Starlingx-discuss] Duplex: CEPH HEALTH_WARN after initial unlock of controller-0

Hi,

after unlocking controller-0 in a Duplex configuration ceph shows a HEALTH_WARN:

[wrsroot@controller-0 ~(keystone_admin)]$ ceph -s
    cluster b6a2009b-7857-4bb6-835a-479b6ececb63
     health HEALTH_WARN
            448 pgs degraded
            448 pgs stuck unclean
            448 pgs undersized
            recovery 1116/2232 objects degraded (50.000%)
     monmap e1: 1 mons at {controller=172.27.1.100:6789/0}
            election epoch 5, quorum 0 controller
     osdmap e22: 2 osds: 1 up, 1 in
            flags sortbitwise,require_jewel_osds
      pgmap v1976: 448 pgs, 7 pools, 1588 bytes data, 1116 objects
            91396 kB used, 952 GB / 952 GB avail
            1116/2232 objects degraded (50.000%)
                 448 active+undersized+degraded

Is this expected and when should the health status switch to HEALTH_OK?

We use the image from 20190429.

Thanks

Marcel
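For reference, a minimal way to confirm the recovery Ovidiu and Marcel describe once controller-1 is provisioned and unlocked (standard Ceph CLI; exact output depends on the deployment):

  # Both OSDs should now show up/in, one under each controller host
  ceph osd tree

  # With both replicas available the PGs go active+clean and health returns to HEALTH_OK
  ceph -s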
participants (3)

- Marcel Schaible
- Michel Thebeau
- Poncea, Ovidiu