From your comment, I have one concern. Kickstart is expected to remove the VGs, LVs and PVs on the boot device and the rootfs device.

 

In the current ks script, it only checks the devices "vda vdb sda sdb dda ddb hda hdb nvme0n1 nvme1n1" under /dev/.

But if a user adds a host with a command such as "system host-add -n <host name> -m <mgmt mac> -b sdc -r sdd", setting the boot device to sdc and the rootfs device to sdd.

In that case, the ks script will not remove the VGs and LVs on those disks.
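
To make the concern concrete, here is a rough sketch of the kind of wipe loop I mean (this is my own illustration, not the actual ks code; the wipe_lvm_on_disk helper and the boot_device/rootfs_device variable names are assumptions):

    #!/bin/bash
    # Sketch only: wipe LVM metadata from one disk so the installer can
    # repartition it cleanly (not the real kickstart implementation).
    wipe_lvm_on_disk() {
        local dev="$1"
        [ -b "/dev/$dev" ] || return 0
        # find every PV that lives on this disk (or one of its partitions)
        pvs --noheadings -o pv_name,vg_name 2>/dev/null | while read -r pv vg; do
            case "$pv" in
                /dev/"$dev"*)
                    [ -n "$vg" ] && vgremove -f "$vg"   # removes the LVs too
                    pvremove -ff -y "$pv"
                    ;;
            esac
        done
    }

    # Today only this hard-coded list is considered ...
    for d in vda vdb sda sdb dda ddb hda hdb nvme0n1 nvme1n1; do
        wipe_lvm_on_disk "$d"
    done

    # ... so the boot/rootfs devices chosen at host-add time (e.g. sdc/sdd
    # from "system host-add ... -b sdc -r sdd") would also need to be wiped;
    # boot_device/rootfs_device are hypothetical variables for illustration.
    for d in "$boot_device" "$rootfs_device"; do
        [ -n "$d" ] && wipe_lvm_on_disk "$d"
    done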

 

BR!

 

Martin, Chen

IOTG, Software Engineer

021-61164330

 

From: Chen, Haochuan Z
Sent: Wednesday, October 14, 2020 12:06 PM
To: 'Poncea, Ovidiu' <Ovidiu.Poncea@windriver.com>; Hu, Yong <yong.hu@intel.com>; Sun, Austin <austin.sun@intel.com>
Cc: Penney, Don <Don.Penney@windriver.com>; Waines, Greg <Greg.Waines@windriver.com>; Jones, Bruce E <bruce.e.jones@intel.com>
Subject: RE: please review this patch - https://review.opendev.org/#/c/737228/ for Rook-Ceph

 

Thanks Ovidiu! I updated the patch.

 

Currently, Rook backup and restore only supports the case where the OSDs are not wiped.

 

For Rook, there is a rook-ceph-operator, which manages the whole Ceph cluster: launching the mon, mgr and osd deployments, creating the CephFS filesystem and pools, and managing the crushmap.

So during restore, after the k8s cluster is restored, the rook-ceph-operator's state is restored as well, which means the mon, mgr and osd deployments are re-deployed as they used to be. This restore is the operator's behavior, not something done by StarlingX.

If the OSD disks are wiped, the OSD pods will fail to launch.
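
For example, after a restore this can be seen directly (the rook-ceph namespace and the app=rook-ceph-osd label are Rook defaults, assumed here):

    # Check whether the operator re-created the OSD deployments after restore
    kubectl -n rook-ceph get deployment -l app=rook-ceph-osd
    # With wiped disks the OSD pods go into CrashLoopBackOff, because
    # ceph-osd no longer finds its data on the disk
    kubectl -n rook-ceph get pods -l app=rook-ceph-osd
    kubectl -n rook-ceph logs -l app=rook-ceph-osd --tail=20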

 

If we force rook-ceph restore to support wipe-osd, the process would be as follows (a rough sketch of steps 4 and 5 is shown after the list):

1. Ansible restores the k8s cluster (launched by StarlingX).

2. The k8s cluster restores the rook-ceph-operator (done by the k8s cluster).

3. The rook-ceph-operator re-deploys the OSD pods, which fail to launch because the disks were wiped.

4. The Ansible restore script reads the rook-ceph-operator's OSD deployment status, cleans up that information, and requests the operator to remove the OSD deployments.

5. The Ansible restore script uses the saved OSD deployment info to re-launch the osd-prepare jobs for initialization, after which the rook-ceph-operator launches the OSD deployments again.
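
A rough sketch of what steps 4 and 5 could look like (namespace, labels and deployment name are Rook defaults; the exact cleanup the operator needs is an assumption on my side):

    # Step 4 (sketch): save the OSD deployment info, then remove the failed
    # OSD deployments so the operator stops trying to run them
    kubectl -n rook-ceph get deployment -l app=rook-ceph-osd -o yaml > osd-deployments.yaml
    kubectl -n rook-ceph delete deployment -l app=rook-ceph-osd

    # Step 5 (sketch): clear the old prepare jobs and restart the operator so
    # it runs osd-prepare again on the wiped disks and re-creates the OSDs
    kubectl -n rook-ceph delete job -l app=rook-ceph-osd-prepare
    kubectl -n rook-ceph rollout restart deployment rook-ceph-operator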

 

I prefer that rook-ceph only supports the no-wipe-OSD case. If users want to wipe the OSD disks, they can remove the rook-ceph application after the restore.
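
Something along these lines after the restore completes ("rook-ceph-apps" is my assumption for the registered application name; it should be verified with application-list first):

    # Check how the rook application is registered, then remove it so the
    # OSD disks can be re-initialized from scratch
    system application-list
    system application-remove rook-ceph-apps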

 

BR!

 

Martin, Chen

IOTG, Software Engineer

021-61164330

 

From: Poncea, Ovidiu <Ovidiu.Poncea@windriver.com>
Sent: Tuesday, October 13, 2020 5:34 PM
To: Hu, Yong <yong.hu@intel.com>; Sun, Austin <austin.sun@intel.com>
Cc: Penney, Don <Don.Penney@windriver.com>; Waines, Greg <Greg.Waines@windriver.com>; Jones, Bruce E <bruce.e.jones@intel.com>; Chen, Haochuan Z <haochuan.z.chen@intel.com>
Subject: RE: please review this patch - https://review.opendev.org/#/c/737228/ for Rook-Ceph

 

Done. Sorry for the required rework; we must be careful with this area, as issues here can be very problematic.

 

From: Hu, Yong <yong.hu@intel.com>
Sent: Tuesday, October 13, 2020 04:19
To: Poncea, Ovidiu <Ovidiu.Poncea@windriver.com>; Sun, Austin <austin.sun@intel.com>
Cc: Penney, Don <Don.Penney@windriver.com>; Waines, Greg <Greg.Waines@windriver.com>; Jones, Bruce E <bruce.e.jones@intel.com>; Chen, Haochuan Z <haochuan.z.chen@intel.com>
Subject: please review this patch - https://review.opendev.org/#/c/737228/ for Rook-Ceph
Importance: High

 

Hi Ovidiu,

Could you please review this patch? Don has been waiting for your feedback since Sept 14 in order to move forward with the merge.

 

https://review.opendev.org/#/c/737228/

 

If you have any concerns, @Sun, Austin will book a half-hour meeting today (in your morning).

 

 

Regards,

Yong