Hi Thales Elero,
let me explain in more detail our experience with bare metal Ceph on STX 10.
As Giuseppe also mentioned, we initially tried to use bare metal Ceph (v14) on STX 10, but even before installing stx-openstack (that is, with no workloads on Kubernetes other than the platform ones) we had a problem of intermittent Ceph connectivity: roughly every 3 to 5 minutes the command ceph -s showed HEALTH_WARN, with 1 OSD up and 1 down. After a few minutes the alarm was cleared and Ceph automatically returned to HEALTH_OK; then, after a few more minutes, the same failure occurred again at random.
Within this failure timeframe, something like the following was logged in sysinv.log:
2025-07-21 16:20:41.617972 mgr.controller-0 (mgr.4416) 136728 : cluster [DBG] pgmap v136800: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-07-21 16:20:43.618575 mgr.controller-0 (mgr.4416) 136729 : cluster [DBG] pgmap v136801: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-07-21 16:20:52.223339 mon.controller-0 (mon.0) 105968 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-07-21 16:20:52.223410 mon.controller-0 (mon.0) 105969 : cluster [INF] osd.1 failed (root-storage-tier, chassis=group-0, host=controller-1) (connection refused reported
by osd.0)
2025-07-21 16:20:52.223536 mon.controller-0 (mon.0) 105970 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-07-21 16:20:52.422884 mon.controller-0 (mon.0) 105971 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-07-21 16:20:52.423676 mon.controller-0 (mon.0) 105972 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-07-21 16:20:52.677873 mon.controller-0 (mon.0) 105973 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2025-07-21 16:20:52.677131 mon.controller-0 (mon.0) 105974 : cluster [WRN] Health check failed: 1 host (1 osds) down (OSD_HOST_DOWN)
2025-07-21 16:20:52.687625 mon.controller-0 (mon.0) 105975 : cluster [DBG] osdmap e109: 2 total, 1 up, 2 in
2025-07-21 16:20:45.619637 mgr.controller-0 (mgr.4416) 136730 : cluster [DBG] pgmap v136802: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
This caused some events and, consequently, some alarms:
SET (MAJOR) 800.011 Loss of replication in replication group group-0: OSDs are down
SET (WARNING) 800.001 Storage Alarm Condition: HEALTH_WARN [PGs are degraded/stuck or undersized]. Please check 'ceph -s' for more details
CLEAR (WARNING) 800.001 Storage Alarm Condition: HEALTH_WARN [PGs are degraded/stuck or undersized]. Please check 'ceph -s' for more details
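For reference, during one of these windows the state can be cross-checked with the standard Ceph and StarlingX CLIs only (nothing custom on our side), for example:

# cluster health and which OSD is currently flapping
ceph -s
ceph health detail
ceph osd tree
# corresponding platform alarms and events on the controller
fm alarm-list
fm event-list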
We also ran some tests with the management IPsec VPN disabled (suspecting low throughput on the 1 Gbps management network), using a standard flat network as in previous releases. This did not help: the intermittent behaviour still occurred on the flat network, even with low throughput (a few MBps). For this reason, and because of the deprecation notes in the changelog, we decided to give Rook-Ceph v18 a try (without the management IPsec VPN). The Kubernetes cluster was stable (no events), but we cannot install stx-openstack on Rook-Ceph due to the incompatibilities Giuseppe reported.
Summing up: on older releases (STX 7 and STX 9) we ran bare metal Ceph and OpenStack without any issue, but with STX 10 we cannot get a stable cluster (with or without OpenStack installed). We also tried one of the latest stx-openstack TGZ packages (probably designed and built against the beta STX 11 reference platform), but it raised an error on upload, so it was impossible to use that newer version of stx-openstack with STX 10.
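For clarity, the error showed up at the standard application upload step, i.e. something like the following (the filename is just a placeholder for the TGZ we tried):

# upload the stx-openstack application tarball to the platform
system application-upload stx-openstack-<version>.tgz
# and, only if the upload succeeds, apply it
system application-apply stx-openstack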
So the question is: is there a workaround, or a way to tune some Ceph parameters/configuration, that would let us use bare metal Ceph with stx-openstack on STX 10 without this intermittent problem?
We have not found similar problems or solutions on the STX website (changelog, docs, …) or in the bug tracking system. We only tried tuning some timeouts (e.g. osd_heartbeat_grace=60), with no luck.
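One way to apply that kind of tuning on a running cluster is through the Ceph monitor config database (the value 60 below is simply the one we mentioned above; whether this is the right knob at all is exactly our question):

# raise the heartbeat grace period cluster-wide on Ceph v14
ceph config set global osd_heartbeat_grace 60
# verify the value the OSDs actually see
ceph config get osd osd_heartbeat_grace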
We have another question. Reinstalling the whole lab each time, we have tested and installed almost every stable release since STX 3, but we have never tested or installed a pre-release.
Currently we have STX 10 installed, but without OpenStack. Is there a way to cleanly and successfully upgrade to the pre-release (or main branch) you mentioned, or must we reinstall the whole distributed cloud using that pre-release?
Our next step, assuming we get a functional OpenStack cluster, is to configure SR-IOV passthrough inside OpenStack for some NVIDIA GPUs to support some client AI use cases.
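For context, what we expect to need is the usual Nova PCI passthrough setup, roughly along the lines below (the product ID, alias name and flavor name are placeholders, not values from our lab, and in stx-openstack this would have to go through helm overrides rather than editing nova.conf directly):

# in nova.conf, [pci] section (vendor 10de = NVIDIA, product ID is a placeholder):
#   passthrough_whitelist = { "vendor_id": "10de", "product_id": "XXXX" }
#   alias = { "vendor_id": "10de", "product_id": "XXXX", "device_type": "type-PCI", "name": "nvidia-gpu" }
# then request the device from a flavor:
openstack flavor create --ram 16384 --vcpus 8 --disk 40 gpu.flavor
openstack flavor set gpu.flavor --property "pci_passthrough:alias"="nvidia-gpu:1"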
Any help would be appreciated.
Thank you in advance.
Best Regards,
Paolo Napoli – Cloud Architect | Cloud & Digital Architecture