Hi Bob
One more question: what are the storage tier and storage profile? As you said, we no longer manage pools and PG numbers, so are these also unnecessary, and should we remove them?
BR!
Martin, Chen
SSP, Software Engineer
021-61164330
From: Church, Robert <Robert.Church@windriver.com>
Sent: Wednesday, December 4, 2019 12:09 AM
To: Chen, Haochuan Z <haochuan.z.chen@intel.com>
Cc: 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>; Poncea, Ovidiu <Ovidiu.Poncea@windriver.com>
Subject: Re: ceph ops enabling in sysinv-conductor
See inline…
From: "Chen, Haochuan Z" <haochuan.z.chen@intel.com>
Date: Tuesday, December 3, 2019 at 2:18 AM
To: Robert Church <Robert.Church@windriver.com>
Cc: "'starlingx-discuss@lists.starlingx.io'" <starlingx-discuss@lists.starlingx.io>, Ovidiu Poncea <Ovidiu.Poncea@windriver.com>
Subject: RE: ceph ops enabling in sysinv-conductor
Hi Bob
1. We could update the default PG number in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps is applied, the user may not have added OSDs yet, so the default PG number would still only be a reference value for the user.
For user overrides, I think the logic should be added to ceph-pools-audit (stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml). However, updating the PG number triggers a rebalance, which can congest the management network. (A rough sketch of deriving the default from the OSD count follows after point 2.)
[RTC] For a more robust solution, consider the following:
2. The above case only covers system applications. For any further PG-number alarms, we should request that the user manage them.
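Something like the following is what I have in mind for point 1 (a rough sketch only; TARGET_PGS_PER_OSD, REPLICATION, and the helper name are my assumptions, not the current sysinv API):

    # Sketch: derive a default pg_num from the OSD count instead of
    # hard-coding it. The constants are assumed values; the real chart
    # or backend capabilities may supply them differently.
    TARGET_PGS_PER_OSD = 100   # common Ceph sizing guidance
    REPLICATION = 2            # assumed pool replication factor

    def suggested_pg_num(num_osds):
        if num_osds == 0:
            return 64          # keep today's static default as a fallback
        raw = num_osds * TARGET_PGS_PER_OSD // REPLICATION
        # Ceph prefers powers of two; round down, with a small floor.
        pg_num = 32
        while pg_num * 2 <= raw:
            pg_num *= 2
        return pg_num

For example, suggested_pg_num(6) returns 256, which would clear the "too few PGs per OSD" warning on the 6-OSD cluster from your earlier mail below.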
BR!
Martin, Chen
SSP, Software Engineer
021-61164330
From: Church, Robert <Robert.Church@windriver.com>
Sent: Tuesday, December 3, 2019 10:34 AM
To: Chen, Haochuan Z <haochuan.z.chen@intel.com>
Cc: 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>; Poncea, Ovidiu <Ovidiu.Poncea@windriver.com>
Subject: Re: ceph ops enabling in sysinv-conductor
Hi Martin,
In STX 1.0, we ran bare-metal OpenStack services for which we created and managed specific Ceph storage pools for glance, cinder, nova, and swift. In this environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up.
With the move to containerizing the OpenStack services in STX 2.0, pool creation is now driven mostly by Helm charts packaged into an application (radosgw pools are the exception). Since we don't know what additional application(s) will be deployed and what pools may be created, we are no longer managing pools as in STX 1.0.
These related functions in sysinv/conductor/ceph.py remain from STX 1.0 and need to be removed and/or repurposed to meet any new requirements.
I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps. This currently results in a single pool for the 6 OSDs:
  cluster:
    id:     6231df84-33be-4aa4-82ea-7408e0f2421c
    health: HEALTH_WARN
            too few PGs per OSD (21 < min 30)

  services:
    mon: 3 daemons, quorum controller-0,controller-1,storage-0
    mgr: controller-0(active), standbys: controller-1
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   1 pools, 64 pgs
    objects: 0 objects, 0 B
    usage:   645 MiB used, 5.4 TiB / 5.4 TiB avail
    pgs:     64 active+clean
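(For reference: assuming the pool's replication factor is 2, the 21 in the warning works out to 64 PGs * 2 replicas / 6 OSDs ≈ 21.3, truncated to 21, which is below Ceph's default warning threshold of 30 PGs per OSD.)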
Since every installation will install platform-integ-apps, I think we should do the following:
Regards,
Bob
From: "Chen, Haochuan Z" <haochuan.z.chen@intel.com>
Date: Monday, December 2, 2019 at 1:15 AM
To: Robert Church <Robert.Church@windriver.com>
Cc: "'starlingx-discuss@lists.starlingx.io'" <starlingx-discuss@lists.starlingx.io>, Ovidiu Poncea <Ovidiu.Poncea@windriver.com>
Subject: ceph ops enabling in sysinv-conductor
Hi Bob
I found quite a few functions in sysinv-conductor/ceph.py that can manage the Ceph cluster, such as creating/deleting/configuring pools, auditing PGs, etc.
Why are these functions not enabled? Or is the plan to ask the user to manage the Ceph cluster themselves, e.g. to create pools and configure PG numbers?
I am currently looking at this "too few PGs" issue: when a user deploys only a few OSDs, the alarm is raised. For such an issue, should we ask the user to decide the correct PG number, or can the user simply ignore it?
https://bugs.launchpad.net/starlingx/+bug/1844164
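For example, if we ask the user to decide, the fix would be something like the following (the pool name kube-rbd is my assumption for the platform-integ-apps pool; on pre-Nautilus Ceph, pgp_num must be raised along with pg_num):

    ceph osd pool set kube-rbd pg_num 256
    ceph osd pool set kube-rbd pgp_num 256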
BR!
Martin, Chen
SSP, Software Engineer
021-61164330