[Starlingx-discuss] ceph ops enabling in sysinv-conductor
Hi Bob,

I found many functions in sysinv-conductor/ceph.py that could manage the ceph cluster, such as creating/deleting/configuring pools, auditing PGs, etc. Why are these functions not enabled? Or is the plan to ask the user to manage the ceph cluster themselves, for example creating pools and configuring the PG number?

I am currently looking at this issue: "too few PGs per OSD". The user may deploy only a few OSDs, which raises the alarm. For such an issue, should we ask the user to decide the correct PG number, or can the user simply ignore it?
https://bugs.launchpad.net/starlingx/+bug/1844164

BR!
Martin, Chen
SSP, Software Engineer
021-61164330
Hi Martin,

In STX 1.0, we ran bare-metal openstack services for which we created and managed specific ceph storage pools for glance, cinder, nova, and swift. In that environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up.

With the move to containerizing the openstack services in STX 2.0, pool creation is now driven mostly by helm charts packaged into an application (the radosgw pools are the exception). Since we don't know what additional application(s) will be deployed and what pools may be created, we are no longer managing pools as we did in STX 1.0. The related functions in sysinv/conductor/ceph.py are left over from STX 1.0 and need to be removed and/or repurposed to meet any new requirements.

I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps. This currently results in a single pool for 6 OSDs:

  cluster:
    id:     6231df84-33be-4aa4-82ea-7408e0f2421c
    health: HEALTH_WARN
            too few PGs per OSD (21 < min 30)

  services:
    mon: 3 daemons, quorum controller-0,controller-1,storage-0
    mgr: controller-0(active), standbys: controller-1
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   1 pools, 64 pgs
    objects: 0 objects, 0 B
    usage:   645 MiB used, 5.4 TiB / 5.4 TiB avail
    pgs:     64 active+clean

Since every installation will install platform-integ-apps, I think we should do the following:

1. Update the chunk_size calculation in sysinv/helm/rbd_provisioner.py so that it is dynamically calculated based on the number of OSDs provisioned in the cluster (a sketch of this kind of calculation follows below). As this may be the only pool created, it should meet the minimum size characteristics to avoid a ceph warning.
2. Update the rbd-provisioner helm chart to support explicitly setting the pg_num based on the chunk size. Do this to allow setting a user override for the chunk size so that we can re-apply platform-integ-apps and explicitly set new values.

Regards,
Bob
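(For illustration only, a minimal sketch of the kind of dynamic calculation item 1 describes, assuming the commonly cited Ceph sizing guideline of roughly 100 PGs per OSD divided by the replication factor; the function and parameter names here are hypothetical and are not the actual rbd_provisioner.py code.)

def estimate_pg_num(num_osds, replicas=2, target_pgs_per_osd=100, minimum=32):
    """Suggest a pool pg_num from the OSD count (hypothetical helper).

    Aim for roughly target_pgs_per_osd PGs per OSD divided by the
    replication factor, then round down to a power of two so the PG
    count stays well-formed.
    """
    if num_osds <= 0:
        return minimum
    target = (num_osds * target_pgs_per_osd) // replicas
    # Round down to the nearest power of two, but never below the minimum.
    pg_num = 1 << (max(target, 1).bit_length() - 1)
    return max(pg_num, minimum)

# e.g. estimate_pg_num(6) -> 256 for the 6-OSD cluster above.

Ceph raises the HEALTH_WARN shown above when the per-OSD PG count falls below the configured minimum (30 in this log), so whatever target and rounding policy is chosen only needs to keep the single pool above that floor as OSDs are added.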
Hi Bob

1. We could update the default PG number in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps is applied, the user may not have added any OSDs yet, so the default PG number would only be a reference value for the user. As for the user override, I think it should be added to ceph-pools-audit (stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml). However, a PG number update triggers a rebalance, which could jam the management network (a sketch of how such an override could be resolved follows below).

2. The case above only covers the system application. For any other PG number alarm, the user should be asked to manage it themselves.

BR!
Martin, Chen
SSP, Software Engineer
021-61164330
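(For illustration only, a minimal sketch of how a user-supplied chunk_size override could take precedence over the computed default when the overrides are generated; the function name and arguments are hypothetical and are not the actual rbd_provisioner.py or ceph-pools-audit code.)

def resolve_chunk_size(user_override, computed_default):
    """Pick the effective chunk_size for the rbd pool (hypothetical helper).

    A user override (for example one supplied when re-applying
    platform-integ-apps) always wins; otherwise fall back to the value
    computed from the current OSD count. If no OSDs are provisioned yet,
    the computed value is only a reference default.
    """
    if user_override is not None:
        return int(user_override)
    return computed_default

# Example: resolve_chunk_size(None, 64) -> 64; resolve_chunk_size("128", 64) -> 128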
See inline…

From: "Chen, Haochuan Z" <haochuan.z.chen@intel.com>
Date: Tuesday, December 3, 2019 at 2:18 AM
To: Robert Church <Robert.Church@windriver.com>
Cc: "'starlingx-discuss@lists.starlingx.io'" <starlingx-discuss@lists.starlingx.io>, Ovidiu Poncea <Ovidiu.Poncea@windriver.com>
Subject: RE: ceph ops enabling in sysinv-conductor

Hi Bob

1. We could update the default PG number in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps is applied, the user may not have added any OSDs yet, so the default PG number would only be a reference value for the user. As for the user override, I think it should be added to ceph-pools-audit (stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml). However, a PG number update triggers a rebalance, which could jam the management network.

[RTC] For a more robust solution, consider the following:

* We can update _met_app_apply_prerequisites() in sysinv/conductor.py for platform-integ-apps to require OSDs to be provisioned prior to applying the application. This will ensure an accurate OSD view when the application is initially applied (a sketch of this kind of check follows below).
* The provisioner system overrides (in sysinv/helm/rbd_provisioner.py) should have the ability to calculate and set an optimal PG number to avoid generating a warning.
* The rbd-provisioner chart should also support setting a new chunk size (if greater than the existing size) to update the PG num. This will support user PG updates from the helm-overrides API.
* I think it's potentially a good idea for ceph-pool-audit to support adjusting the PG numbers as well, but this is tricky: I don't think we can reduce pg_num in Mimic without creating a new pool and copying the contents (pg_autoscaling was added in Nautilus). This audit code would have to be very specific in adjusting the PG num, as installing applications that add additional pools will change the PG num distribution.

2. The case above only covers the system application. For any other PG number alarm, the user should be asked to manage it themselves.

BR!
Martin, Chen
SSP, Software Engineer
021-61164330
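(For illustration only, a minimal sketch of the prerequisite check described in the first [RTC] bullet; the helper names and the shape of the OSD lookup are assumptions, not the actual sysinv conductor code.)

def met_app_apply_prerequisites(app_name, list_provisioned_osds):
    """Return True only when the application may be applied (sketch).

    For platform-integ-apps, require at least one provisioned OSD so the
    rbd-provisioner overrides are generated from an accurate OSD view.
    list_provisioned_osds is assumed to be a callable returning the
    currently provisioned OSDs (e.g. a thin wrapper over the sysinv
    database API); it is a stand-in, not the real conductor interface.
    """
    if app_name != "platform-integ-apps":
        return True
    return len(list_provisioned_osds()) > 0

# met_app_apply_prerequisites("platform-integ-apps", lambda: []) -> False,
# so the apply would be rejected until OSDs exist;
# with lambda: ["osd.0", "osd.1"] it returns True.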
Thanks Bob. I will fix this issue with these three items:

* Update _met_app_apply_prerequisites() in sysinv/conductor.py for platform-integ-apps to require OSDs to be provisioned prior to applying the application. This will ensure an accurate OSD view when the application is initially applied.
* Give the provisioner system overrides (in sysinv/helm/rbd_provisioner.py) the ability to calculate and set an optimal PG number to avoid generating a warning.
* Update the rbd-provisioner chart to support setting a new chunk size (if greater than the existing size) to update the PG num. This will support user PG updates from the helm-overrides API (a sketch of this guard follows below).

BR!
Martin, Chen
SSP, Software Engineer
021-61164330
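(For illustration only, a minimal sketch of the "grow only if larger" guard implied by the third item, assuming a Mimic-level cluster where pg_num cannot be reduced in place; the function name is hypothetical, while ceph osd pool set pg_num/pgp_num are the standard Ceph commands for growing a pool's PG count.)

import shlex

def build_pg_num_update(pool, current_pg_num, requested_pg_num):
    """Return the ceph commands to grow a pool's PG count, or None (sketch).

    Only grow the PG count: Mimic cannot reduce pg_num without recreating
    the pool, and any change triggers a data rebalance, so equal or smaller
    requests are skipped. pgp_num is bumped along with pg_num so the new
    PGs actually take part in data placement.
    """
    if requested_pg_num <= current_pg_num:
        return None
    pool = shlex.quote(pool)
    return [
        "ceph osd pool set %s pg_num %d" % (pool, requested_pg_num),
        "ceph osd pool set %s pgp_num %d" % (pool, requested_pg_num),
    ]

# build_pg_num_update("kube-rbd", 64, 128) returns the two commands;
# build_pg_num_update("kube-rbd", 128, 64) returns None (no shrink).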
Hi Bob

One more question: what are a storage tier and a storage profile? Since, as you said, we no longer manage pools and PG numbers, are these also unnecessary, and should we remove them as well?

BR!
Martin, Chen
SSP, Software Engineer
021-61164330
participants (2)
- Chen, Haochuan Z
- Church, Robert