[Starlingx-discuss] Discussion about StarlingX release notes in CEPH upgrade

Badea, Daniel Daniel.Badea at windriver.com
Mon Apr 8 12:23:09 UTC 2019


Hi Frank,

I looked at the release notes put together by Tingjie and here are my notes:

  *   Each OSD has a device class associated with it. Documentation: https://ceph.com/community/new-luminous-crush-device-classes/ . Notes:
     *   Purpose: added to simplify OSD CRUSH placement based on hardware properties reported by the kernel.
     *   We currently use storage tiers to partition Ceph pool access between faster and slower disks. When a new storage tier is created the entire CRUSH tree hierarchy is cloned, then OSDs can be attached to it. Pools are then configured to use the new CRUSH tree root.
     *   With Ceph Luminous there is no need to clone the entire CRUSH tree when we want to create "faster" pools. The command that creates a CRUSH rule for a pool now supports a device-class parameter that can be used to filter OSDs by their type: hdd, ssd or nvme (see the command sketch after this list).
     *   If we are using multiple Ceph tiers exclusively to partition OSDs by their hardware characteristics, then we can take advantage of the device-class feature, but we also need to update the logic related to replication and storage node locking: OSDs of all classes would be anchored to one storage node, whereas currently they are anchored to different CRUSH trees. However, there is no urgent reason to adopt the new feature now. We already update the CRUSH map automatically and we can mix any kind of disks into a Ceph tier (which is not possible when using device classes).
  *   Simplified OSD replacement procedure. Documentation  http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/#replacing-an-osd . Notes:
      *    The replacement procedure is based entirely on the "ceph-volume" utility, which we are not currently using (a rough sketch from the linked documentation follows this list).
      *    The replacement procedure is not documented for Ceph Jewel, so I can't tell what exactly was "simplified".
     *    Currently, when replacing a storage disk:
         *   if puppet finds a Ceph cluster signature on the disk that differs from the current cluster's, it fails and the storage node fails to unlock
         *   if the signature matches the current Ceph cluster, the disk is used as is
         *   otherwise the disk is set up as an OSD: ceph-disk prepare, ceph-disk activate, etc.
     *   There is no reason to use the new OSD replacement procedure now.
  *   Pools are expected to be associated with the application using them. Notes:
     *   We already hit this issue; it was fixed by running "ceph osd pool application enable" (see the example after this list).
  *   Config options can now be centrally stored and managed by the monitor. Notes:
     *   Not sure how this helps; configuration is already managed by sysinv and puppet (a sketch of the new "ceph config" commands follows this list).
  *   RGW now supports data compression for objects. Notes:
     *   We may want to expose this configuration option via system service parameters (see the radosgw-admin sketch after this list).
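
For reference, here is a minimal sketch of the device-class commands Luminous introduced; the rule name "ssd-rule" and pool name "fast-pool" are placeholders I picked for illustration, not anything configured in StarlingX:

    # Create a replicated CRUSH rule that only selects SSD-class OSDs
    ceph osd crush rule create-replicated ssd-rule default host ssd

    # Point an existing pool at the new rule (no need to clone the CRUSH tree)
    ceph osd pool set fast-pool crush_rule ssd-rule

    # Show the device classes that were auto-detected for the OSDs
    ceph osd crush class ls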
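
And a rough sketch of the ceph-volume based replacement flow from the linked Mimic documentation; the OSD id, device path and fsid are placeholders, and this is not what our puppet manifests do today:

    # Mark the failed OSD as destroyed but keep its id so the replacement reuses it
    ceph osd destroy 12 --yes-i-really-mean-it

    # Wipe the replacement disk
    ceph-volume lvm zap /dev/sdX

    # Prepare the new disk under the same OSD id, then activate it
    ceph-volume lvm prepare --osd-id 12 --data /dev/sdX
    ceph-volume lvm activate 12 <osd-fsid>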
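
The pool application fix mentioned above comes down to commands like these (the pool name "images" is only an example):

    # Tag the pool with the application that uses it (rbd, rgw or cephfs)
    ceph osd pool application enable images rbd

    # Verify which applications are enabled on the pool
    ceph osd pool application get images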
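
For the centralized config feature, the new "ceph config" commands look roughly like this; the option and value are chosen purely as an illustration:

    # Store an option in the monitors' central configuration database
    ceph config set osd osd_recovery_sleep 0.1

    # Read it back for a specific daemon, or dump everything stored centrally
    ceph config get osd.0 osd_recovery_sleep
    ceph config dump

    # Import an existing ceph.conf into the central database
    ceph config assimilate-conf -i /etc/ceph/ceph.conf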
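
If we do expose RGW compression, enabling it upstream looks roughly like this; the zone and placement ids below are the Ceph defaults and may differ in our deployment:

    # Enable zlib compression for newly written objects on the default placement target
    radosgw-admin zone placement modify \
        --rgw-zone=default \
        --placement-id=default-placement \
        --compression=zlib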


Best regards,
Daniel B.
________________________________
From: Miller, Frank
Sent: Thursday, April 04, 2019 23:40
To: Chen, Tingjie; Jones, Bruce E; Xie, Cindy; Poncea, Ovidiu; Badea, Daniel; Cabrales, Ada; Perez, Ricardo O; Hernandez Gonzalez, Fernando; Zhu, Vivian; Hu, Yong; Liu, Changcheng
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: Discussion about StarlingX release notes in CEPH upgrade

Tingjie:

Thanks for putting this together as it gives a very good summary of the changes in the CEPH mimic version, which is expected to merge into StarlingX in the near future.  This list will be a good reference for those who will be running TCs for the new CEPH version.  I have a few questions – would you be able to help me:


1.  One of the notes indicates “There is a simplified OSD replacement process that is more robust.”
    *   Can you explain what these changes are?
    *   Will this result in any changes to the steps an operator takes to replace a CEPH disk?

2.  Another note indicates “Several sleep settings, include osd_recovery_sleep, osd_snap_trim_sleep, and osd_scrub_sleep have been reimplemented to work efficiently.”
    *   Can you share the settings used in StarlingX today with CEPH jewel as well as the planned settings that will be used in StarlingX with CEPH mimic?  Will any of these settings change value when CEPH mimic is merged into StarlingX?

3.  One more note indicates “CLI changes”
    *   Can you explain which CLIs have changed?


Frank

From: Chen, Tingjie [mailto:tingjie.chen at intel.com]
Sent: Wednesday, April 03, 2019 11:44 PM
To: Jones, Bruce E <bruce.e.jones at intel.com>; Xie, Cindy <cindy.xie at intel.com>; Poncea, Ovidiu <Ovidiu.Poncea at windriver.com>; Badea, Daniel <Daniel.Badea at windriver.com>; Cabrales, Ada <ada.cabrales at intel.com>; Perez, Ricardo O <ricardo.o.perez at intel.com>; Hernandez Gonzalez, Fernando <fernando.hernandez.gonzalez at intel.com>; Miller, Frank <Frank.Miller at windriver.com>; Zhu, Vivian <vivian.zhu at intel.com>; Hu, Yong <yong.hu at intel.com>; Liu, Changcheng <changcheng.liu at intel.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: Discussion about StarlingX release notes in CEPH upgrade

Hi,

I have filed release notes for the Ceph upgrade to Mimic.
https://etherpad.openstack.org/p/stx-ceph-uprev-mimic-release-notes

There are 2 parts.
The first is Major changes: the official changes from 10.2.6 (Jewel) -> 13.2.2 (Mimic); there are many changes across the three major version updates.
The second is known issues in StarlingX; this part may expand after validation and system test if there are non-blocking issues.

Your comments and concerns are welcome.

Thanks,
Tingjie

SSG OTC NST Storage
Tel: +86(21)88216699
Mobile: 15901876439
