Issue Uploading OpenStack Package on StarlingX Release 10 with Rook-Ceph
Dear StarlingX Community,

We are currently testing the StarlingX + OpenStack Release 10 (v24.09) solution in our labs. During the installation process (on bare metal) we encountered some issues, initially related to OSD synchronization using host-based Ceph. For this reason we switched to Rook-Ceph, since host-based Ceph will be deprecated. However, after switching we were unable to upload the OpenStack application (via system application-upload) and proceed with the installation.

I have attached both an installation report and the full sysinv log showing the error that occurred during the upload of the OpenStack application package. Has anyone experienced a similar issue?

Thank you very much for your support!

Best regards,
Giuseppe Del Gaudio - Cloud Engineer | Cloud & Digital Architecture
NTT DATA Italia, Via Bastioni 14 / S. Michele 10, Salerno, Italia
Email: giuseppe.delgaudio@nttdata.com - www.nttdata.com/it
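P.S. For reference, this is roughly the upload/apply flow we are following (the tarball name below is a placeholder for the one shipped with our build):

# upload the application tarball on the active controller
system application-upload stx-openstack-<version>.tgz

# check the resulting state (should go upload-in-progress -> uploaded)
system application-list
system application-show stx-openstack

# once uploaded, apply it
system application-apply stx-openstack

# the upload error details end up here on the active controller
tail -f /var/log/sysinv.log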
Hi Giuseppe,

Glad to hear you are using stx-openstack! Unfortunately, the stx-openstack from the stx.10.0 release does not support Rook-Ceph. This support is being added as part of the stx.11.0 release. It is already merged into the main branch if you want to give it a try.

Please keep us in the loop!

Thales Elero Cervi
StarlingX TSC Member | StarlingX OpenStack Project Lead
Hi, dear Thales Elero,

Let me explain in more detail our experience with bare metal Ceph on STX 10. As Giuseppe also said, we initially tried to use bare metal Ceph (v14) on STX 10, but even before installing stx-openstack (that is, with no workloads on Kubernetes except the platform ones) we had a problem of intermittent Ceph connectivity: roughly every 3 to 5 minutes the command ceph -s showed HEALTH_WARN, with 1 OSD up and 1 down. After a few minutes the alarm was cleared and Ceph automatically updated the status to HEALTH_OK; then, after a few more minutes, the same failure randomly reappeared.

Within this failure timeframe, something like this was logged in the sysinv.log:

2025-07-21 16:20:41.617972 mgr.controller-0 (mgr.4416) 136728 : cluster [DBG] pgmap v136800: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-07-21 16:20:43.618575 mgr.controller-0 (mgr.4416) 136729 : cluster [DBG] pgmap v136801: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-07-21 16:20:52.223339 mon.controller-0 (mon.0) 105968 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-07-21 16:20:52.223410 mon.controller-0 (mon.0) 105969 : cluster [INF] osd.1 failed (root=storage-tier, chassis=group-0, host=controller-1) (connection refused reported by osd.0)
2025-07-21 16:20:52.223536 mon.controller-0 (mon.0) 105970 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-07-21 16:20:52.422884 mon.controller-0 (mon.0) 105971 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-07-21 16:20:52.423676 mon.controller-0 (mon.0) 105972 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-07-21 16:20:52.677873 mon.controller-0 (mon.0) 105973 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2025-07-21 16:20:52.677131 mon.controller-0 (mon.0) 105974 : cluster [WRN] Health check failed: 1 host (1 osds) down (OSD_HOST_DOWN)
2025-07-21 16:20:52.687625 mon.controller-0 (mon.0) 105975 : cluster [DBG] osdmap e109: 2 total, 1 up, 2 in
2025-07-21 16:20:45.619637 mgr.controller-0 (mgr.4416) 136730 : cluster [DBG] pgmap v136802: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail

That causes some events and, consequently, some alarms:

SET (MAJOR) 800.011 Loss of replication in replication group group-0: OSDs are down
SET (WARNING) 800.001 Storage Alarm Condition: HEALTH_WARN [PGs are degraded/stuck or undersized]. Please check 'ceph -s' for more details
CLEAR (WARNING) 800.001 Storage Alarm Condition: HEALTH_WARN [PGs are degraded/stuck or undersized]. Please check 'ceph -s' for more details

We also ran some tests with the management IPsec VPN disabled (thinking we were hitting low throughput on the 1 Gbps management network), using a standard flat network as in previous releases. This did not help: the intermittent behaviour still occurred on the flat network, even with low throughput (a few MB/s).

For this reason, and because of the deprecation note in the changelog, we decided to give Rook-Ceph v18 a try (without the management IPsec VPN). The Kubernetes cluster was stable (no events), but we cannot install stx-openstack on Rook-Ceph due to the incompatibilities shown by Giuseppe.

Summing up: on older releases (STX 7, STX 9) we ran bare metal Ceph and OpenStack without any issue, but with STX 10 we cannot get a stable cluster (with or without OpenStack installed).
We have also tried one of the latest stx-openstack TGZs (probably designed and implemented against the beta STX 11 reference platform), but it raised an error on upload, so it was impossible to use that newer version of stx-openstack with STX 10.

So the question is: is there a workaround, or a way to tune some Ceph parameters/config, so that we can use bare metal Ceph on STX 10 with stx-openstack without this intermittent problem? We have not found similar problems or solutions on the STX website (changelog, docs, ...) or in the bug tracking system. We just tried to tune some timeouts (e.g. osd_heartbeat_grace=60), but with no luck.

We have another question. Reinstalling the whole lab each time, we have tested and installed almost every stable release since STX 3, but we have never tested or installed a pre-release. Currently we have STX 10 installed, but without OpenStack. Is there a way to cleanly and successfully upgrade to the pre-release (or main branch) you mentioned, or must we reinstall the whole distributed cloud using that pre-release?

Our next step, assuming we have a functional OpenStack cluster, is to configure SR-IOV passthrough inside OpenStack for some NVIDIA GPUs, to support some client AI use cases.

Any help will be appreciated. Thank you in advance.

Best Regards,
Paolo Napoli – Cloud Architect | Cloud & Digital Architecture
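P.S. For completeness, this is roughly how we applied the timeout change. It is only a sketch: on StarlingX the puppet/sysinv-managed /etc/ceph/ceph.conf may override or revert values set at runtime, so we are not sure this is the supported way to tune it.

# runtime change through the monitor config database (Ceph Nautilus)
ceph config set osd osd_heartbeat_grace 60
ceph config dump | grep osd_heartbeat_grace

# alternative: inject into the running OSDs only (not persisted across restarts)
ceph tell osd.* injectargs '--osd_heartbeat_grace 60'

# check what a daemon is actually using (run on the host where osd.0 lives)
ceph daemon osd.0 config get osd_heartbeat_grace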
Hi Paolo,

You are right, the stx-openstack from stx.11.0 will fail to upload/apply on a StarlingX stx.10.0 system. I forgot that in my first email, but there is some code depending on stx.11.0 for the new stx-openstack application.

Since your StarlingX stx.10.0 without OpenStack is already struggling with host-based Ceph, I would suggest exploring the Ceph config, and we could eventually ping someone from the community that maintains the storage functions of StarlingX. Another option would be exploring an upgrade from stx.10.0 to stx.11.0, for which I would suggest waiting for the official stx.11.0 release.

For both, please join our stx-openstack project room so we can try to help you directly: https://matrix.to/#/#starlingx-openstack:opendev.org

Cheers,
Thales.
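P.S. If you want to confirm the version gate yourself before uploading, you can inspect the application tarball locally. This is only a sketch, assuming the tarball follows the usual StarlingX application layout with a metadata.yaml at the top level (exact key names can differ between releases):

# extract just the application metadata from the stx-openstack tarball
tar -xzf stx-openstack-<version>.tgz metadata.yaml

# look for the minimum platform / Kubernetes versions the application declares
cat metadata.yaml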
Hi, dear Thales Elero,

Thank you very much for your time and your prompt response. In September we have an important POC with an Italian company, and we proposed StarlingX for their use case. They need to share bare metal GPUs with VMs inside OpenStack, but the problem is that neither Rook-Ceph nor bare metal Ceph is working in our lab with STX 10. This makes us not confident about the client's POC, given the limited project timing: we planned to spend most of the time on GPU sharing, taking a successful OpenStack installation for granted.

Can you put us in touch with a colleague on the STX team who can help us address these issues? Thanks in advance.

Best Regards,
PN
Hi Paolo,

I see you are having trouble with Ceph in stx.10. I am one of the maintainers from the Storage team.

What deployment type are you using: All-in-one or Standard? Additionally, how many OSDs are you planning to use?

If this is an All-in-one with OpenStack, you must have at least 4 to 6 cores reserved for the platform. If you add 3 or more OSDs on each controller, you may consider increasing this number. Besides that, you should check the memory reserved for the platform; usually, setting it to 12 GiB on each controller should be enough.

Let me know what problems you were facing when trying to use bare metal Ceph.

Regards,
Felipe
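P.S. In case you need to adjust those reservations, the commands are roughly the ones below. This is a sketch only; the values are placeholders and the exact arguments should be checked against the documentation for your release. The host has to be locked first.

# lock the host before changing the platform reservations
system host-lock controller-0

# platform reserved memory, per processor (value in MiB)
system host-memory-modify -m <MiB> controller-0 0
system host-memory-modify -m <MiB> controller-0 1

# number of cores assigned to the platform function, per processor
system host-cpu-modify -f platform -p0 <cores> -p1 <cores> controller-0

# unlock to apply the new configuration
system host-unlock controller-0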
Hi Felipe, all,

Sorry for the delayed response, but we were on holiday in Italy.

What deployment type are you using? Is it an All-in-one or Standard?

We have reinstalled the edge cloud with STX 10 and host-based Ceph as the backend storage. The current configuration is Standard with controller storage; the table below reports the configuration of the physical nodes.

Node          CPU                                   Sockets  Total Cores  Total Threads  Installed Memory
controller-0  2× Intel Xeon Silver 4210R @ 2.40GHz  2        20 (10×2)    40 (20×2)      61 GB
controller-1  2× Intel Xeon Silver 4210R @ 2.40GHz  2        20 (10×2)    40 (20×2)      61 GB
worker-0      2× Intel Xeon Gold 5218R @ 2.10GHz    2        40 (20×2)    80 (40×2)      61 GB
worker-1      2× Intel Xeon Gold 5218R @ 2.10GHz    2        40 (20×2)    80 (40×2)      61 GB

On the controllers we have a dedicated disk for the OSD, with a size of 2.2 TB (one dedicated disk per controller, 4.4 TB in total). In addition, we use worker-0 as a monitor, so in total we have one OSD + monitor on each controller and one additional monitor on worker-0.

If this is an All-in-one with OpenStack, you must have at least 4 or 6 cores reserved for the platform.

At the moment the edge cloud is not running any workloads and, as shown in the command output, all the cores are dedicated to the platform.

[sysadmin@controller-1 ~(keystone_admin)]$ system host-cpu-list controller-0
+--------------------------------------+----------+-----------+----------+--------+---------------------------------------------+-------------------+
| uuid                                 | log_core | processor | phy_core | thread | processor_model                             | assigned_function |
+--------------------------------------+----------+-----------+----------+--------+---------------------------------------------+-------------------+
| 452910c7-c586-4aed-b4f1-312c5fcc3fa7 | 0        | 0         | 0        | 0      | Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz | Platform          |
| 319b61eb-ff58-45ba-b41c-6405e74a22e8 | 1        | 1         | 0        | 0      | Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz | Platform          |
| 50d139e0-7a03-4f16-8949-371e48c3597a | 2        | 0         | 4        | 0      | Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz | Platform          |
| ab35cbea-ea52-4705-b646-3bed7bd5272f | 3        | 1         | 4        | 0      | Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz | Platform          |
| 08bbb824-17ab-48a3-b31c-d34366146acf | 4        | 0         | 1        | 0      | Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz | Platform          |
| 94ff63a6-9eab-40e5-a2e3-51c34f995093 | 5        | 1         | 1        | 0      | Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz | Platform          |
| 3b9bd8be-7a9a-45b8-b3cf-1253912caefa | 6        | 0         | 3        | 0      | Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz | Platform          |
| c6f47ba6-3b89-4f74-bb52-5ae0560d092c | 7        | 1         | 3        | 0      | Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz | Platform          |
etc ...
On the memory side, on the controllers we have 8 GiB per processor dedicated to the platform:

[sysadmin@controller-1 ~(keystone_admin)]$ system host-memory-list controller-0
| processor | mem_total(MiB) | mem_platform(MiB) | mem_avail(MiB) | hugepages(hp)_configured | vs_hp_size(MiB) | vs_hp_total | vs_hp_avail | vs_hp_reqd | app_total_4K | app_hp_as_percentage | app_hp_total_2M | app_hp_avail_2M | app_hp_pending_2M | app_hp_total_1G | app_hp_avail_1G | app_hp_pending_1G | app_hp_use_1G |
| 1         | 30200          | 8250              | 28562          | False                    | None            | None        | None        | None       | None         | False                | None            | None            | None              | None            | None            | None              | False         |
| 0         | 31518          | 8250              | 29228          | False                    | None            | None        | None        | None       | None         | False                | None            | None            | None              | None            | None            | None              | False         |

Do we need to increase it to 12 GiB in total, or on each processor?

Let me know what problems you were facing when trying to use the bare metal Ceph.

As we can see from the command output below, the cluster is in an OK state but, as Paolo mentioned, very frequently we see cluster degradation, according to these StarlingX events:

SET (MAJOR) 800.011 Loss of replication in replication group group-0: OSDs are down
SET (WARNING) 800.001 Storage Alarm Condition: HEALTH_WARN [PGs are degraded/stuck or undersized]. Please check 'ceph -s' for more details
CLEAR (WARNING) 800.001 Storage Alarm Condition: HEALTH_WARN [PGs are degraded/stuck or undersized]. Please check 'ceph -s' for more details

After about 2 minutes the cluster is no longer degraded, but this error happens frequently (every 5-10 minutes).
______________________________________________________________________________________________________ sysadmin@controller-0:~$ ceph -s cluster: id: 41a1d434-06a2-4d4d-92d7-8e28577b9e33 health: HEALTH_OK services: mon: 3 daemons, quorum controller-0,controller-1,worker-0 (age 2h) mgr: controller-1(active, since 3h), standbys: controller-0 mds: kube-cephfs:1 {0=worker-0=up:active} 2 up:standby osd: 2 osds: 2 up (since 13s), 2 in (since 2h) data: pools: 3 pools, 192 pgs objects: 22 objects, 12 KiB usage: 31 GiB used, 4.3 TiB / 4.4 TiB avail pgs: 192 active+clean ______________________________________________________________________________________________________ we tested all the physical connection ( Interfaces and Cables ) and all works fine, and in addition to that the ceph-rook that we tested ( with no issue ) uses the same interface with kubernetes cluster (mgmt on eno3 interface) on the ceph osd log we have notice that we have a connection refused, and after the ceph service crash ______________________________________________________________________________________________________ -22> 2025-08-18 19:05:26.422 7f84c21cf700 10 osd.1 267 do_waiters -- start -21> 2025-08-18 19:05:26.422 7f84c21cf700 10 osd.1 267 do_waiters -- finish -20> 2025-08-18 19:05:26.524 7f84c19ce700 10 osd.1 267 tick_without_osd_lock -19> 2025-08-18 19:05:26.524 7f84c19ce700 20 osd.1 267 scrub_random_backoff lost coin flip, randomly backing off -18> 2025-08-18 19:05:26.524 7f84c19ce700 10 osd.1 267 promote_throttle_recalibrate 0 attempts, promoted 0 objects and 0 B; target 25 obj/sec or 5 MiB/sec -17> 2025-08-18 19:05:26.524 7f84c19ce700 20 osd.1 267 promote_throttle_recalibrate new_prob 1000 -16> 2025-08-18 19:05:26.524 7f84c19ce700 10 osd.1 267 promote_throttle_recalibrate actual 0, actual/prob ratio 1, adjusted new_prob 1000, prob 1000 -> 1000 -15> 2025-08-18 19:05:27.409 7f84c21cf700 10 osd.1 267 tick -14> 2025-08-18 19:05:27.409 7f84c21cf700 10 osd.1 267 do_waiters -- start -13> 2025-08-18 19:05:27.409 7f84c21cf700 10 osd.1 267 do_waiters -- finish -12> 2025-08-18 19:05:27.476 7f84c19ce700 10 osd.1 267 tick_without_osd_lock -11> 2025-08-18 19:05:27.477 7f84c19ce700 20 osd.1 267 scrub_random_backoff lost coin flip, randomly backing off -10> 2025-08-18 19:05:27.477 7f84c19ce700 10 osd.1 267 promote_throttle_recalibrate 0 attempts, promoted 0 objects and 0 B; target 25 obj/sec or 5 MiB/sec -9> 2025-08-18 19:05:27.477 7f84c19ce700 20 osd.1 267 promote_throttle_recalibrate new_prob 1000 -8> 2025-08-18 19:05:27.477 7f84c19ce700 10 osd.1 267 promote_throttle_recalibrate actual 0, actual/prob ratio 1, adjusted new_prob 1000, prob 1000 -> 1000 -7> 2025-08-18 19:05:27.720 7f84a5996700 5 osd.1 267 heartbeat osd_stat(store_statfs(0x22a2f119000/0x0/0x22e1a23c000, data 0x22a2f119000/0x22a2f119000, compress 0x0/0x0/0x0, omap 0x33101e, meta 0x0), peers [0] op hist []) -6> 2025-08-18 19:05:27.721 7f84a5996700 20 osd.1 267 check_full_status cur ratio 0.00702066, physical ratio 0.00702066, new state none -5> 2025-08-18 19:05:27.721 7f84c4458700 20 osd.1 267 share_map_peer 0x5621f5e00900 already has epoch 267 -4> 2025-08-18 19:05:27.721 7f84c4c59700 20 osd.1 267 share_map_peer 0x5621f5e00900 already has epoch 267 -3> 2025-08-18 19:05:28.416 7f84c21cf700 10 osd.1 267 tick -2> 2025-08-18 19:05:28.416 7f84c21cf700 10 osd.1 267 do_waiters -- start -1> 2025-08-18 19:05:28.416 7f84c21cf700 10 osd.1 267 do_waiters -- finish 0> 2025-08-18 19:05:28.452 7f84c6ec9c00 -1 *** Caught signal (Aborted) ** in thread 7f84c6ec9c00 
 thread_name:ceph-osd

 ceph version 14.2.22 (58663f20a1ce5b36f35a70a7836d737ebd9a4e6b) nautilus (stable)
 1: (()+0x13140) [0x7f84c72c9140]
 2: (pthread_cond_wait()+0x1e2) [0x7f84c72c47b2]
 3: (AsyncMessenger::wait()+0x187) [0x5621db3d8e87]
 4: (main()+0x3501) [0x5621daa69301]
 5: (__libc_start_main()+0xea) [0x7f84c7105d0a]
 6: (_start()+0x2a) [0x5621daa9ad1a]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
______________________________________________________________________________________________________

The following is reported in ceph.log:

______________________________________________________________________________________________________
2025-08-18 19:04:29.980705 mon.controller-0 (mon.0) 2001 : cluster [DBG] osd.0 reported immediately failed by osd.1
2025-08-18 19:04:29.980765 mon.controller-0 (mon.0) 2002 : cluster [INF] osd.0 failed (root=storage-tier,chassis=group-0,host=controller-0) (connection refused reported by osd.1)
2025-08-18 19:04:29.980904 mon.controller-0 (mon.0) 2003 : cluster [DBG] osd.0 reported immediately failed by osd.1
2025-08-18 19:04:29.980967 mon.controller-0 (mon.0) 2004 : cluster [DBG] osd.0 reported immediately failed by osd.1
2025-08-18 19:04:30.156776 mon.controller-0 (mon.0) 2005 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2025-08-18 19:04:30.156824 mon.controller-0 (mon.0) 2006 : cluster [WRN] Health check failed: 1 host (1 osds) down (OSD_HOST_DOWN)
2025-08-18 19:04:30.165785 mon.controller-0 (mon.0) 2007 : cluster [DBG] osdmap e264: 2 total, 1 up, 2 in
2025-08-18 19:04:23.097732 mgr.controller-1 (mgr.914159) 6341 : cluster [DBG] pgmap v6585: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:04:25.098803 mgr.controller-1 (mgr.914159) 6342 : cluster [DBG] pgmap v6586: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:04:27.099828 mgr.controller-1 (mgr.914159) 6343 : cluster [DBG] pgmap v6587: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:04:29.100338 mgr.controller-1 (mgr.914159) 6344 : cluster [DBG] pgmap v6588: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:04:31.102055 mgr.controller-1 (mgr.914159) 6345 : cluster [DBG] pgmap v6590: 192 pgs: 192 peering; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:04:31.165482 mon.controller-0 (mon.0) 2008 : cluster [WRN] Health check failed: Reduced data availability: 80 pgs inactive, 192 pgs peering (PG_AVAILABILITY)
2025-08-18 19:04:31.169131 mon.controller-0 (mon.0) 2009 : cluster [DBG] osdmap e265: 2 total, 1 up, 2 in
2025-08-18 19:04:37.156811 mon.controller-0 (mon.0) 2010 : cluster [WRN] Health check failed: Degraded data redundancy: 22/44 objects degraded (50.000%), 16 pgs degraded (PG_DEGRADED)
2025-08-18 19:04:37.156852 mon.controller-0 (mon.0) 2011 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 80 pgs inactive, 192 pgs peering)
2025-08-18 19:04:33.103025 mgr.controller-1 (mgr.914159) 6346 : cluster [DBG] pgmap v6592: 192 pgs: 192 peering; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:04:35.104018 mgr.controller-1 (mgr.914159) 6347 : cluster [DBG] pgmap v6593: 192 pgs: 192 peering; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:04:37.105277 mgr.controller-1 (mgr.914159) 6348 : cluster [DBG] pgmap v6594: 192 pgs: 16 active+undersized+degraded, 176 active+undersized; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail; 22/44 objects degraded (50.000%)
2025-08-18 19:04:39.105861 mgr.controller-1 (mgr.914159) 6349 : cluster [DBG] pgmap v6595: 192 pgs: 16 active+undersized+degraded, 176 active+undersized; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail; 22/44 objects degraded (50.000%)
2025-08-18 19:04:41.107015 mgr.controller-1 (mgr.914159) 6350 : cluster [DBG] pgmap v6596: 192 pgs: 16 active+undersized+degraded, 176 active+undersized; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail; 22/44 objects degraded (50.000%)
2025-08-18 19:04:43.107585 mgr.controller-1 (mgr.914159) 6351 : cluster [DBG] pgmap v6597: 192 pgs: 16 active+undersized+degraded, 176 active+undersized; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail; 22/44 objects degraded (50.000%)
2025-08-18 19:04:45.108211 mgr.controller-1 (mgr.914159) 6352 : cluster [DBG] pgmap v6598: 192 pgs: 16 active+undersized+degraded, 176 active+undersized; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail; 22/44 objects degraded (50.000%)
2025-08-18 19:04:47.109462 mgr.controller-1 (mgr.914159) 6353 : cluster [DBG] pgmap v6599: 192 pgs: 16 active+undersized+degraded, 176 active+undersized; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail; 22/44 objects degraded (50.000%)
2025-08-18 19:04:49.110090 mgr.controller-1 (mgr.914159) 6354 : cluster [DBG] pgmap v6600: 192 pgs: 16 active+undersized+degraded, 176 active+undersized; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail; 22/44 objects degraded (50.000%)
2025-08-18 19:04:51.111519 mgr.controller-1 (mgr.914159) 6355 : cluster [DBG] pgmap v6601: 192 pgs: 16 active+undersized+degraded, 176 active+undersized; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail; 22/44 objects degraded (50.000%)
2025-08-18 19:04:54.177020 mon.controller-0 (mon.0) 2014 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2025-08-18 19:04:54.177069 mon.controller-0 (mon.0) 2015 : cluster [INF] Health check cleared: OSD_HOST_DOWN (was: 1 host (1 osds) down)
2025-08-18 19:04:54.183156 mon.controller-0 (mon.0) 2016 : cluster [INF] osd.0 [v2:10.186.228.41:6802/1341660,v1:10.186.228.41:6803/1341660] boot
2025-08-18 19:04:54.183233 mon.controller-0 (mon.0) 2017 : cluster [DBG] osdmap e266: 2 total, 2 up, 2 in
2025-08-18 19:04:55.192320 mon.controller-0 (mon.0) 2018 : cluster [DBG] osdmap e267: 2 total, 2 up, 2 in
2025-08-18 19:04:57.199828 mon.controller-0 (mon.0) 2019 : cluster [WRN] Health check update: Degraded data redundancy: 15/44 objects degraded (34.091%), 11 pgs degraded (PG_DEGRADED)
2025-08-18 19:04:59.816822 mon.controller-0 (mon.0) 2020 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 15/44 objects degraded (34.091%), 11 pgs degraded)
2025-08-18 19:04:59.816865 mon.controller-0 (mon.0) 2021 : cluster [INF] Cluster is now healthy
2025-08-18 19:04:53.112174 mgr.controller-1 (mgr.914159) 6356 : cluster [DBG] pgmap v6602: 192 pgs: 16 active+undersized+degraded, 176 active+undersized; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail; 22/44 objects degraded (50.000%)
2025-08-18 19:04:55.112760 mgr.controller-1 (mgr.914159) 6357 : cluster [DBG] pgmap v6604: 192 pgs: 16 active+undersized+degraded, 176 active+undersized; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail; 22/44 objects degraded (50.000%)
2025-08-18 19:04:57.113784 mgr.controller-1 (mgr.914159) 6358 : cluster [DBG] pgmap v6606: 192 pgs: 98 active+clean, 11 active+undersized+degraded, 83 active+undersized; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail; 15/44 objects degraded (34.091%)
2025-08-18 19:04:59.114732 mgr.controller-1 (mgr.914159) 6359 : cluster [DBG] pgmap v6607: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:05:01.115735 mgr.controller-1 (mgr.914159) 6360 : cluster [DBG] pgmap v6608: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:05:03.116245 mgr.controller-1 (mgr.914159) 6361 : cluster [DBG] pgmap v6609: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:05:05.117284 mgr.controller-1 (mgr.914159) 6362 : cluster [DBG] pgmap v6610: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:05:07.118338 mgr.controller-1 (mgr.914159) 6363 : cluster [DBG] pgmap v6611: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:05:09.119350 mgr.controller-1 (mgr.914159) 6364 : cluster [DBG] pgmap v6612: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:05:11.120356 mgr.controller-1 (mgr.914159) 6365 : cluster [DBG] pgmap v6613: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:05:13.120865 mgr.controller-1 (mgr.914159) 6366 : cluster [DBG] pgmap v6614: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:05:15.121815 mgr.controller-1 (mgr.914159) 6367 : cluster [DBG] pgmap v6615: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:05:17.122842 mgr.controller-1 (mgr.914159) 6368 : cluster [DBG] pgmap v6616: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:05:19.123815 mgr.controller-1 (mgr.914159) 6369 : cluster [DBG] pgmap v6617: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:05:21.124822 mgr.controller-1 (mgr.914159) 6370 : cluster [DBG] pgmap v6618: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-08-18 19:05:30.817593 mon.controller-0 (mon.0) 2034 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-08-18 19:05:30.817660 mon.controller-0 (mon.0) 2035 : cluster [INF] osd.1 failed (root=storage-tier,chassis=group-0,host=controller-1) (connection refused reported by osd.0)
2025-08-18 19:05:30.817787 mon.controller-0 (mon.0) 2036 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-08-18 19:05:31.018275 mon.controller-0 (mon.0) 2037 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-08-18 19:05:31.018352 mon.controller-0 (mon.0) 2038 : cluster [DBG] osd.1 reported immediately failed by osd.0
______________________________________________________________________________________________________

Every time the service crashes, a new HANG file is created; an example is attached. Are there any Ceph parameters we can check to diagnose and resolve this issue? We also noticed that Rook-Ceph uses a more recent Ceph version (18.2.2), whereas the host-based installation runs version 14.2.22. Is there a reason behind this choice?

Thanks in advance, any help will be useful.
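For anyone hitting the same symptom, a minimal sketch of first checks (assuming admin access on both controllers; these are standard Ceph Nautilus and Linux commands, not StarlingX-specific, and the crash ID, interface name and peer IP are placeholders taken from the output above):

______________________________________________________________________________________________________
# List crashes recorded by the Ceph crash module and inspect the most recent one
ceph crash ls
ceph crash info <crash-id>

# Show the heartbeat/network settings the running OSD is actually using
ceph daemon osd.1 config show | grep -E 'heartbeat|cluster_network|public_network'

# Verify that the peer OSD ports are listening and reachable over the mgmt interface
ss -tlnp | grep ceph-osd
nc -zv 10.186.228.41 6803

# Watch for interface errors/drops on the mgmt NIC while the failure reproduces
ip -s link show eno3
______________________________________________________________________________________________________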
Regards,

Giuseppe Del Gaudio - Cloud Engineer | Cloud & Digital Architecture
Via Bastioni 14 / S. Michele 10, Salerno, Italia
Email: giuseppe.delgaudio@nttdata.com
Tel: +39 3346368189
Learn more at www.nttdata.com/it

________________________________
From: Sanches Zanoni, Felipe <Felipe.SanchesZanoni@windriver.com>
Sent: Monday, 11 August 2025 13:32
To: PAOLO NAPOLI <Paolo.Napoli@emeal.nttdata.com>; Cervi, Thales Elero <ThalesElero.Cervi@windriver.com>; starlingx-discuss@lists.starlingx.io
Cc: STEFANO VELTRI <stefano.veltri@emeal.nttdata.com>; GIUSEPPE DEL GAUDIO <Giuseppe.DelGaudio@emeal.nttdata.com>
Subject: Re: Issue Uploading OpenStack Package on StarlingX Release 10 with Rook-Ceph

Hi Paolo,

I see you are having trouble with Ceph in stx.10. I am one of the maintainers from the Storage team.

What deployment type are you using? Is it an All-in-one or Standard? Additionally, how many OSDs are you planning to use?

If this is an All-in-one with OpenStack, you must have at least 4 to 6 cores reserved for the platform. If you add 3 or more OSDs on each controller, you may consider increasing this number. Besides that, you should check the memory reserved for the platform; usually, setting it to 12 GiB on each controller should be enough. (A sketch of the corresponding commands follows after the quoted message below.)

Let me know what problems you were facing when trying to use the bare metal Ceph.

Regards,
Felipe

________________________________
From: PAOLO NAPOLI <Paolo.Napoli@nttdata.com>
Sent: Wednesday, August 6, 2025 2:31 AM
To: Cervi, Thales Elero <ThalesElero.Cervi@windriver.com>; starlingx-discuss@lists.starlingx.io
Cc: STEFANO VELTRI <stefano.veltri@nttdata.com>; GIUSEPPE DEL GAUDIO <Giuseppe.DelGaudio@nttdata.com>
Subject: Re: Issue Uploading OpenStack Package on StarlingX Release 10 with Rook-Ceph

Hi, dear Thales Elero,

thank you very much for your time and your prompt response. In September we have an important POC with an Italian company, and we proposed StarlingX for their use case. They need to share bare-metal GPUs with VMs inside OpenStack, but the problem is that neither Rook-Ceph nor bare-metal Ceph is working in our lab with STX 10. This makes us less confident about the client's POC, given the limited project timeline: we planned to spend the remaining time on GPU sharing, assuming the OpenStack installation would succeed. Can you put us in touch with some of your colleagues on the STX team who can help us address these issues?

Thanks in advance.

Best Regards,
PN
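Regarding the platform CPU and memory reservations Felipe mentions above, a minimal sketch of how they can be inspected and adjusted with the StarlingX system CLI (the host must be locked to change them; the values are examples only, and the exact option names should be verified against `system help host-cpu-modify` and `system help host-memory-modify` on your release):

______________________________________________________________________________________________________
# Inspect the current per-host assignments
system host-cpu-list controller-0
system host-memory-list controller-0

# Lock the host before changing platform reservations
system host-lock controller-0

# Reserve 6 cores on processor 0 for platform use (example value)
system host-cpu-modify -f platform -p0 6 controller-0

# Reserve 12 GiB (12288 MiB) of platform memory on processor 0 (example value)
system host-memory-modify -m 12288 controller-0 0

# Unlock to apply
system host-unlock controller-0
______________________________________________________________________________________________________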
________________________________
From: Cervi, Thales Elero <ThalesElero.Cervi@windriver.com>
Date: Tuesday, 5 August 2025 at 14:18
To: PAOLO NAPOLI <Paolo.Napoli@emeal.nttdata.com>, starlingx-discuss@lists.starlingx.io
Cc: STEFANO VELTRI <stefano.veltri@emeal.nttdata.com>, GIUSEPPE DEL GAUDIO <Giuseppe.DelGaudio@emeal.nttdata.com>
Subject: Re: Issue Uploading OpenStack Package on StarlingX Release 10 with Rook-Ceph

Hi, Paolo.

You are right, stx-openstack from stx.11.0 will fail to upload/apply on a StarlingX from stx.10.0. I forgot that in my first email, but the new stx-openstack application depends on some code that is only available in stx.11.0.

Since your StarlingX stx.10.0 without OpenStack is already struggling with your host-based Ceph, I would suggest exploring the Ceph config, and we could eventually ping someone from the community who maintains the storage functions of StarlingX. Another option would be exploring an upgrade from stx.10.0 to stx.11.0, for which I would suggest waiting for the official stx.11.0 release.

For both, please join our stx-openstack project room so we can try to help you directly: https://matrix.to/#/#starlingx-openstack:opendev.org

Cheers,
Thales.

________________________________
From: PAOLO NAPOLI <Paolo.Napoli@nttdata.com>
Sent: Monday, August 4, 2025 10:11 AM
To: Cervi, Thales Elero <ThalesElero.Cervi@windriver.com>; starlingx-discuss@lists.starlingx.io
Cc: STEFANO VELTRI <stefano.veltri@nttdata.com>; GIUSEPPE DEL GAUDIO <Giuseppe.DelGaudio@nttdata.com>
Subject: Re: Issue Uploading OpenStack Package on StarlingX Release 10 with Rook-Ceph

Hi, dear Thales Elero,

let me explain in more detail our experience with bare-metal Ceph on STX 10. As Giuseppe also said, we initially tried to use bare-metal Ceph (v14) on STX 10, but even before installing stx-openstack (that is to say, with no workloads on Kubernetes except the platform ones), we had a problem of intermittent Ceph connectivity: roughly every 3 to 5 minutes the command ceph -s showed HEALTH_WARN because 1 OSD was up and 1 was down. After a few minutes the alarm was cleared and Ceph automatically returned to HEALTH_OK, and then after a few more minutes the same failure randomly recurred.
Within this failure timeframe, something like this was logged in the sysinv.log:

2025-07-21 16:20:41.617972 mgr.controller-0 (mgr.4416) 136728 : cluster [DBG] pgmap v136800: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-07-21 16:20:43.618575 mgr.controller-0 (mgr.4416) 136729 : cluster [DBG] pgmap v136801: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail
2025-07-21 16:20:52.223339 mon.controller-0 (mon.0) 105968 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-07-21 16:20:52.223410 mon.controller-0 (mon.0) 105969 : cluster [INF] osd.1 failed (root=storage-tier,chassis=group-0,host=controller-1) (connection refused reported by osd.0)
2025-07-21 16:20:52.223536 mon.controller-0 (mon.0) 105970 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-07-21 16:20:52.422884 mon.controller-0 (mon.0) 105971 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-07-21 16:20:52.423676 mon.controller-0 (mon.0) 105972 : cluster [DBG] osd.1 reported immediately failed by osd.0
2025-07-21 16:20:52.677873 mon.controller-0 (mon.0) 105973 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2025-07-21 16:20:52.677131 mon.controller-0 (mon.0) 105974 : cluster [WRN] Health check failed: 1 host (1 osds) down (OSD_HOST_DOWN)
2025-07-21 16:20:52.687625 mon.controller-0 (mon.0) 105975 : cluster [DBG] osdmap e109: 2 total, 1 up, 2 in
2025-07-21 16:20:45.619637 mgr.controller-0 (mgr.4416) 136730 : cluster [DBG] pgmap v136802: 192 pgs: 192 active+clean; 12 KiB data, 31 GiB used, 4.3 TiB / 4.4 TiB avail

That causes some events and, consequently, some alarms:

SET (MAJOR) 800.011 Loss of replication in replication group group-0: OSDs are down
SET (WARNING) 800.001 Storage Alarm Condition: HEALTH_WARN [PGs are degraded/stuck or undersized], please check 'ceph -s' for more details
CLEAR (WARNING) 800.001 Storage Alarm Condition: HEALTH_WARN [PGs are degraded/stuck or undersized], please check 'ceph -s' for more details

We also ran some tests with the management IPsec VPN disabled (thinking we had low throughput on the 1 Gbps management network) and used a standard flat network as in previous releases. This did not help: the intermittent behaviour still occurred on the flat network, even with low throughput (a few MB/s).

For this reason, and because of the deprecation noted in the changelog, we decided to give Rook-Ceph v18 a try (without management IPsec VPNs). The Kubernetes cluster was stable (no events), but we cannot install stx-openstack on Rook-Ceph due to the incompatibilities shown by Giuseppe.

Summing up: in older releases such as STX 7 or STX 9 we were using bare-metal Ceph and OpenStack without any issue, but with STX 10 we cannot get a stable cluster (with or without OpenStack installed). We also tried one of the latest stx-openstack TGZs (probably designed and implemented against a beta STX 11 reference platform), but it raised an error on upload, so it was impossible to use that newer version of stx-openstack with STX 10.

So, the question is: is there a workaround, or a way to tune some Ceph parameters/config, to use bare-metal Ceph in STX 10 with stx-openstack without this intermittent problem? We haven't found similar problems or solutions on the StarlingX website (changelog, docs, ...) or in the bug tracking system. We only tried tuning some timeouts (e.g. osd_heartbeat_grace=60), but with no luck.
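For reference, osd_heartbeat_grace and similar options can be changed at runtime on a Nautilus cluster in a couple of ways (a minimal sketch, assuming admin keyring access; the value 60 is only an example, and on StarlingX any change not persisted through the platform's own configuration may be reverted):

______________________________________________________________________________________________________
# Persist an option in the monitors' central config store (Nautilus and later)
ceph config set osd osd_heartbeat_grace 60
ceph config get osd osd_heartbeat_grace

# Or inject it into the running daemons without persisting it
ceph tell osd.* injectargs '--osd_heartbeat_grace 60'

# Confirm what a daemon is actually using (run on the node hosting that OSD)
ceph daemon osd.0 config get osd_heartbeat_grace
______________________________________________________________________________________________________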
We have another question. Reinstalling the whole lab each time, we have tested and installed almost every stable release since STX 3, but we have never tested or installed a pre-release. Currently we have STX 10 installed, but without OpenStack. Is there a way to cleanly and successfully upgrade to the pre-release (or main branch) you mentioned, or must we reinstall the whole distributed cloud using that pre-release?

Our next step, assuming we have a functional OpenStack cluster, is to configure SR-IOV/PCI passthrough inside OpenStack for some NVIDIA GPUs to support the client's AI use cases.

Any help will be appreciated. Thank you in advance.

Best Regards,
Paolo Napoli - Cloud Architect | Cloud & Digital Architecture
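On the GPU passthrough step mentioned above, the generic upstream OpenStack approach is PCI passthrough: the device is exposed to Nova in nova.conf and requested through a flavor property. In stx-openstack these settings are normally applied through Helm overrides rather than by editing nova.conf directly, so treat the snippet below as an illustrative upstream sketch; the vendor/product IDs, alias name and flavor name are placeholders:

______________________________________________________________________________________________________
# nova.conf on the compute node (upstream example; stx-openstack applies this via Helm overrides)
[pci]
# Expose the GPU to Nova (10de = NVIDIA vendor ID; product_id is a placeholder)
# Note: passthrough_whitelist is named device_spec on newer OpenStack releases
passthrough_whitelist = { "vendor_id": "10de", "product_id": "1db6" }
alias = { "vendor_id": "10de", "product_id": "1db6", "device_type": "type-PCI", "name": "gpu" }

# Then request the device through a flavor and boot an instance with it
openstack flavor set gpu.large --property "pci_passthrough:alias"="gpu:1"
______________________________________________________________________________________________________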
participants (4)
- Cervi, Thales Elero
- GIUSEPPE DEL GAUDIO
- PAOLO NAPOLI
- Sanches Zanoni, Felipe