[Starlingx-discuss] Discussion about StarlingX release notes in CEPH upgrade

Chen, Tingjie tingjie.chen at intel.com
Mon Apr 8 15:16:15 UTC 2019


Hi Frank & Daniel,
Following are my comments.

For Frank,
-------------------------------
1/ Simplified OSD replacement process that is more robust.
Daniel has explained this in detail and precisely. As a supplement: the simplified process was first introduced in Luminous (http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-osds/#replacing-an-osd) and has evolved in Mimic around the new ceph-volume command: http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/#replacing-an-osd
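For illustration, the Mimic flow keeps the OSD id and re-creates the OSD with ceph-volume; a rough sketch (untested here, /dev/sdb and osd id 1 are just example names):

    # Mark the failed OSD destroyed but keep its id and crush weight
    ceph osd destroy 1 --yes-i-really-mean-it
    # Wipe the replacement disk and re-create the OSD with the same id
    ceph-volume lvm zap /dev/sdb
    ceph-volume lvm create --osd-id 1 --data /dev/sdb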

2/ Several sleep settings.
It is a good question. The rework of the sleep implementation was introduced in Luminous. I will check the scenarios for the sleep settings later, since I am deploying a new image now.
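For reference, these are ordinary OSD config options, so they can be set in ceph.conf or injected at runtime; the values below are placeholders, not proposed settings:

    [osd]
    osd_recovery_sleep = 0.1
    osd_snap_trim_sleep = 0.05
    osd_scrub_sleep = 0.1

    # or at runtime, for example:
    ceph tell osd.* injectargs '--osd_recovery_sleep 0.1'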

3/ CLI changes.
These were also introduced initially in Luminous, and I have added the detailed changes to the etherpad for review: https://etherpad.openstack.org/p/stx-ceph-uprev-mimic-release-notes
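A few of the Luminous-era additions, picked just for illustration (the full list is in the etherpad):

    ceph features                            # show feature sets of connected clients/daemons
    ceph osd safe-to-destroy osd.1           # check whether osd.1 can be removed without data risk
    ceph osd purge 1 --yes-i-really-mean-it  # remove osd.1 from crush, auth and the osd map in one step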


For Daniel,
---------------------------------
Thanks for your notes; they are practical for StarlingX deployment.

1/ Config options can now be centrally stored and managed by the monitor
You can refer to the details here: https://ceph.com/community/new-mimic-centralized-configuration-management/
Note also that containerized Ceph configuration management (no puppet) is a different case.
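To illustrate how the monitor-backed config store is used in Mimic (the option name is just an example):

    # Import an existing ceph.conf into the monitor config database
    ceph config assimilate-conf -i /etc/ceph/ceph.conf
    # Set and read an option centrally, without editing files on each node
    ceph config set osd osd_recovery_sleep 0.1
    ceph config get osd osd_recovery_sleep
    # Show everything stored by the monitors
    ceph config dump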

Thanks,
Tingjie

From: Badea, Daniel [mailto:Daniel.Badea at windriver.com]
Sent: Monday, April 8, 2019 8:23 PM
To: Miller, Frank <Frank.Miller at windriver.com>; Chen, Tingjie <tingjie.chen at intel.com>; Jones, Bruce E <bruce.e.jones at intel.com>; Xie, Cindy <cindy.xie at intel.com>; Poncea, Ovidiu <Ovidiu.Poncea at windriver.com>; Cabrales, Ada <ada.cabrales at intel.com>; Perez, Ricardo O <ricardo.o.perez at intel.com>; Hernandez Gonzalez, Fernando <fernando.hernandez.gonzalez at intel.com>; Zhu, Vivian <vivian.zhu at intel.com>; Hu, Yong <yong.hu at intel.com>; Liu, Changcheng <changcheng.liu at intel.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: Discussion about StarlingX release notes in CEPH upgrade

Hi Frank,

I looked at the release notes put together by Tingjie and here are my notes:

  *   Each OSD has a device class associated with it. Documentation: https://ceph.com/community/new-luminous-crush-device-classes/ . Notes:

     *   Purpose: added to simplify OSD crush placement based on hardware properties reported by the kernel
     *   We are currently using storage tiers to partition Ceph storage pool access between faster and slower disks. When a new storage tier is created, the entire crush tree hierarchy is cloned and OSDs can then be attached to it. Pools are then configured to use the new crush tree root.
     *   With Ceph Luminous there is no need to clone the entire crush tree when we want to create "faster" pools. The command that creates a crush rule for a pool now supports a device-class parameter that can be used to filter OSDs based on their type: hdd, ssd or nvme (see the sketch after this list).
     *   If we are using multiple Ceph tiers exclusively for partitioning OSDs based on their hardware characteristics, then we can take advantage of the device-class feature, but we would also need to update the logic related to replication and storage node locking: OSDs of all classes would be anchored to one storage node, whereas currently they are anchored to different crush trees. However, there is no urgent reason to use the new feature now. We are already updating the crush map automatically, and we can mix any kind of disks into a Ceph tier (which is not possible when using device classes).
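For illustration, a device-class rule would replace the cloned crush tree; the rule and pool names below are hypothetical:

    # Create a replicated rule that only selects SSD-backed OSDs
    ceph osd crush rule create-replicated fast-rule default host ssd
    # Point an existing pool at that rule
    ceph osd pool set mypool crush_rule fast-rule
    # List the device classes Ceph auto-detected
    ceph osd crush class ls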

  *   Simplified OSD replacement procedure. Documentation: http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/#replacing-an-osd . Notes:

     *   The replacement procedure is based entirely on the "ceph-volume" utility, which we are not currently using.
     *   The replacement procedure is not documented in Ceph Jewel, so I can't tell what's "simplified".
     *   Currently, when replacing a storage disk:

        *   if puppet finds a Ceph cluster signature on the disk that differs from the current one, it fails and the storage node fails to unlock
        *   if the signature matches the current Ceph cluster, the disk is used as is
        *   otherwise the disk is set up as an OSD: ceph-disk prepare, ceph-disk activate, etc. (sketched after this list)

     *   There is no reason to use the new OSD replacement procedure now.
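For reference, the Jewel-era flow that puppet drives today looks roughly like this (the device name is a placeholder):

    # Partition the disk and mark it as a Ceph data device
    ceph-disk prepare /dev/sdb
    # Mount the data partition and start the OSD daemon
    ceph-disk activate /dev/sdb1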

  *   Pools are expected to be associated with the application using them. Notes:

     *   We already hit this issue. It was fixed by running "pool application enable" (example below).
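For the record, the fix is a one-liner (the pool name here is hypothetical):

    ceph osd pool application enable kube rbd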

  *   Config options can now be centrally stored and managed by the monitor. Notes:

     *   Not sure how this helps. Configuration is already managed by sysinv and puppet.

  *   RGW now supports data compression for objects. Notes:

     *   We may want to expose this configuration option via system service parameters (see the sketch below).
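If we do expose it, the underlying RGW knob is set per placement target; a sketch using the default zone and placement names (compression applies to newly written objects, and I believe the RGW needs a restart to pick it up):

    radosgw-admin zone placement modify --rgw-zone=default \
        --placement-id=default-placement --compression=zlib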


Best regards,
Daniel B.
________________________________
From: Miller, Frank
Sent: Thursday, April 04, 2019 23:40
To: Chen, Tingjie; Jones, Bruce E; Xie, Cindy; Poncea, Ovidiu; Badea, Daniel; Cabrales, Ada; Perez, Ricardo O; Hernandez Gonzalez, Fernando; Zhu, Vivian; Hu, Yong; Liu, Changcheng
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: Discussion about StarlingX release notes in CEPH upgrade
Tingjie:

Thanks for putting this together, as it gives a very good summary of the changes in the CEPH mimic version, which is expected to merge into StarlingX in the near future.  This list will be a good reference for those who will be running TCs for the new CEPH version.  I have a couple of questions; would you be able to help me:


1.       One of the notes indicates "There is a simplified OSD replacement process that is more robust."

*         Can you explain what these changes are?

*         Will this result in any changes to the steps an operator takes to replace a CEPH disk?

2.       Another note indicates "Several sleep settings, including osd_recovery_sleep, osd_snap_trim_sleep, and osd_scrub_sleep, have been reimplemented to work efficiently."

*         Can you share the settings used in StarlingX today with CEPH jewel, as well as the planned settings that will be used in StarlingX with CEPH mimic?  Will any of these settings change value when CEPH mimic is merged into StarlingX?

3.       One more note indicates "CLI changes"

*         Can you explain which CLIs have changed?


Frank

From: Chen, Tingjie [mailto:tingjie.chen at intel.com]
Sent: Wednesday, April 03, 2019 11:44 PM
To: Jones, Bruce E <bruce.e.jones at intel.com>; Xie, Cindy <cindy.xie at intel.com>; Poncea, Ovidiu <Ovidiu.Poncea at windriver.com>; Badea, Daniel <Daniel.Badea at windriver.com>; Cabrales, Ada <ada.cabrales at intel.com>; Perez, Ricardo O <ricardo.o.perez at intel.com>; Hernandez Gonzalez, Fernando <fernando.hernandez.gonzalez at intel.com>; Miller, Frank <Frank.Miller at windriver.com>; Zhu, Vivian <vivian.zhu at intel.com>; Hu, Yong <yong.hu at intel.com>; Liu, Changcheng <changcheng.liu at intel.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: Discussion about StarlingX release notes in CEPH upgrade

Hi,

I have filed release notes for the Ceph upgrade to Mimic.
https://etherpad.openstack.org/p/stx-ceph-uprev-mimic-release-notes

There are two parts.
The first is major changes: the official changes from 10.2.6 (Jewel) -> 13.2.2 (Mimic); there are many changes across the three major version updates.
The second is known issues in StarlingX; this part may expand after validation and system test, if there are non-blocking issues.

Your comments and concerns are welcome.

Thanks,
Tingjie

SSG OTC NST Storage
Tel: +86(21)88216699
Mobile: 15901876439


