Hi Ovidiu & Bob
How about this Thursday (morning in Bob's time zone, afternoon for Ovidiu)? Let me know if that works for you and I will book a meeting.
We could sync on rook status, and I will share an update on the duplex deployment.
Thanks
Martin, Chen
IOTG, Software Engineer
021-61164330
From: Chen, Haochuan Z
Sent: Friday, November 6, 2020 10:24 PM
To: Sun, Austin <austin.sun@intel.com>; Qi, Mingyuan <mingyuan.qi@intel.com>; 'Church, Robert' <Robert.Church@windriver.com>; 'Poncea, Ovidiu' <Ovidiu.Poncea@windriver.com>; 'Waines, Greg' <Greg.Waines@windriver.com>; 'Voiculeasa, Dan' <Dan.Voiculeasa@windriver.com>
Cc: Hu, Yong <yong.hu@intel.com>; 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>
Subject: RE: rook deployment for duplex
Hi Ovidiu, Bob and Greg
I have introduced my duplex design to Dan. If you think it is necessary, I could book a meeting next Tuesday to walk through it again.
Thanks!
Martin, Chen
IOTG, Software Engineer
021-61164330
From: Chen, Haochuan Z
Sent: Thursday, November 5, 2020 4:22 PM
To: Sun, Austin <austin.sun@intel.com>; Qi, Mingyuan <mingyuan.qi@intel.com>; Church, Robert <Robert.Church@windriver.com>;
Poncea, Ovidiu <Ovidiu.Poncea@windriver.com>; Waines, Greg <Greg.Waines@windriver.com>; Voiculeasa, Dan <Dan.Voiculeasa@windriver.com>
Cc: Hu, Yong <yong.hu@intel.com>;
starlingx-discuss@lists.starlingx.io
Subject: rook deployment for duplex
Hi Ovidiu
I have confirmed the issue you raised for duplex, so I have changed the rook deployment for duplex as follows.
1, duplex will use host networking; the ceph mon IP is the controller floating IP
2, there is only one monitor pod, running on the active controller
3, as with the native ceph cluster, there is a drbd filesystem named drbd-cephmon that syncs the mon data between both controllers and is mounted at /var/lib/ceph/mon-a/data (see the quick check after this list)
4, there is a cronjob that periodically checks whether a host is offline or locked. If a host is offline and the mon pod is deployed on that host, the cronjob deletes the mon deployment and the rook-ceph operator will recreate the mon deployment on the other controller
# when the rook operator launches the mon deployment, the deployment's nodeSelector binds it to that host, so the mon deployment must be deleted to make the operator reschedule it on the other controller
# this is similar to what SM does for native ceph
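As a quick sanity check of item 3 on a running duplex system, something like the following could be run on the active controller; this is only a sketch and assumes the drbd resource is named drbd-cephmon as above:
cat /proc/drbd                           # overall drbd replication state
drbdadm role drbd-cephmon                # should report Primary on the active controller
mount | grep /var/lib/ceph/mon-a/data    # mon data directory should sit on the drbd device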
We could discuss tomorrow.
Task items for the above design:
1, update sysinv to enable the drbd ceph-mon filesystem for rook (a CLI sketch follows at the end of this item)
1), add the command "system storage-backend-add rook --confirmed"
2), in cgtsclient, add storage_rook.py
3), in sysinv-api, add storage_rook.py, which will call update_rook_config in sysinv-conductor
4), in sysinv-conductor, update_rook_config will add the runtime class:
    if cutils.is_aio_duplex_system(self.dbapi):
        # On 2 node systems we have a floating Ceph monitor.
        classes.append('platform::drbd::cephmon::runtime')
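For reference, a minimal sketch of exercising the new backend from the CLI once the above changes are in; storage-backend-list is only used here to verify the backend was created:
source /etc/platform/openrc                   # load the platform admin credentials
system storage-backend-add rook --confirmed   # triggers update_rook_config in the conductor
system storage-backend-list                   # verify the rook backend is listed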
2, update the rook-app helm override (a hedged override sketch follows below)
for duplex, use host networking and a single monitor (mon count: 1).
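A sketch of how these overrides might be applied; the application/chart names (rook-ceph-apps, rook-ceph), the kube-system namespace and the value paths in the YAML are assumptions and need to match the actual chart values:
# hypothetical value paths; adjust to the real rook-ceph chart structure
cat > duplex-overrides.yaml <<EOF
cluster:
  mon:
    count: 1
  network:
    hostNetwork: true
EOF
system helm-override-update --values duplex-overrides.yaml rook-ceph-apps rook-ceph kube-system
system application-apply rook-ceph-apps       # re-apply so the override takes effect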
3, add a pre-install job in the rook-ceph helm chart
# controller-1 is the active controller
# 192.188.204.2 is the floating IP
# rook-operator will read this configmap, and the only monitor pod, mon-a, will be deployed on the active controller
export ROOK_EXTERNAL_CEPH_MON_DATA=a=192.188.204.2:6789
export ROOK_EXTERNAL_MAX_MON_ID=0
export ROOK_EXTERNAL_MAPPING='{"node":{"a":{"Name":"controller-1","Hostname":"controller-1","Address":"192.188.204.2"}}}'
kubectl -n kube-system create configmap rook-ceph-mon-endpoints \
--from-literal=data="$ROOK_EXTERNAL_CEPH_MON_DATA" \
--from-literal=mapping="$ROOK_EXTERNAL_MAPPING" \
--from-literal=maxMonId="$ROOK_EXTERNAL_MAX_MON_ID"
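To verify the pre-created configmap afterwards (name and namespace taken from the command above):
kubectl -n kube-system get configmap rook-ceph-mon-endpoints -o yaml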
4, add a cron job in the rook-ceph-provision helm chart
If any host goes down and the mon-a pod was deployed on that host (at this point the mon-a pod is stuck terminating/pending), the job does the following (a sketch follows after these steps):
a), delete the mon-a pod and the mon-a deployment.
b), update the configmap rook-ceph-mon-endpoints, field mapping:
'{"node":{"a":{"Name":"controller-0","Hostname":"controller-0","Address":"192.188.204.2"}}}' # the hostname changes to the now-active controller-0
c), delete the operator pod and wait for it to relaunch and recreate the mon-a deployment.
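A hedged sketch of steps a) to c); detection of the offline/locked host is omitted, and the kube-system namespace, the deployment name rook-ceph-mon-a and the app=rook-ceph-operator label are assumptions that must match the deployed charts:
ACTIVE_HOST=controller-0        # hypothetical: the surviving/active controller
FLOATING_IP=192.188.204.2       # floating IP from the design above

# a) remove the stuck mon-a deployment (its pod is deleted along with it)
kubectl -n kube-system delete deployment rook-ceph-mon-a --ignore-not-found

# b) point the mon mapping at the active controller; escape the inner quotes
#    so the JSON patch payload stays valid
MAPPING='{"node":{"a":{"Name":"'"$ACTIVE_HOST"'","Hostname":"'"$ACTIVE_HOST"'","Address":"'"$FLOATING_IP"'"}}}'
MAPPING_ESCAPED=$(printf '%s' "$MAPPING" | sed 's/"/\\"/g')
kubectl -n kube-system patch configmap rook-ceph-mon-endpoints \
  --type merge -p "{\"data\":{\"mapping\":\"$MAPPING_ESCAPED\"}}"

# c) restart the operator so it recreates the mon-a deployment on the active controller
kubectl -n kube-system delete pod -l app=rook-ceph-operator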
BR!
Martin, Chen
IOTG, Software Engineer
021-61164330