[Starlingx-discuss] rook deployment for duplex
Chen, Haochuan Z
haochuan.z.chen at intel.com
Tue Nov 24 14:45:58 UTC 2020
Hi
I have almost finished the rook duplex implementation. The last issue: when I swact, the active controller's drbd-cephmon always stays Connected.
controller-0:~$ sudo drbd-overview
sudo: ldap_sasl_bind_s(): Can't contact LDAP server
0:drbd-pgsql/0 Unconfigured . . . .
1:drbd-rabbit/0 Unconfigured . . . .
2:drbd-platform/0 Unconfigured . . . .
5:drbd-extension/0 Unconfigured . . . .
7:drbd-etcd/0 Unconfigured . . . .
8:drbd-dockerdistribution/0 Unconfigured . . . .
9:drbd-cephmon/0 Connected Primary/Secondary UpToDate/UpToDate C r-----
controller-0:~$
I try to "drbdadm down" with these command, but fail with such error "9: State change failed: (-12) Device is held open by someone"
export OCF_RESOURCE_INSTANCE=drbd-cephmon
export OCF_RESOURCE_TYPE=drbd
export OCF_RA_VERSION_MINOR=1
export OCF_RA_VERSION_MAJOR=1
export OCF_ROOT=/usr/lib/ocf
export OCF_RESKEY_drbd_resource=drbd-cephmon
export OCF_RESKEY_drbdconf=/etc/drbd.conf
export OCF_RESKEY_CRM_meta_notify=true
export OCF_RESKEY_CRM_meta_clone_max=2
export OCF_RESKEY_CRM_meta_clone_node_max=1
export OCF_RESKEY_CRM_meta_master_max=1
export OCF_RESKEY_CRM_meta_master_node_max=1
/usr/lib/ocf/resource.d/linbit/drbd stop
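To find what still holds the device open, I plan to check along these lines (a rough sketch; the minor number 9 and the /var/lib/ceph/mon-a/data mount point come from the design below):
# is the cephmon filesystem still mounted on this controller?
mount | grep drbd9
# which processes keep the mount point or the device busy?
sudo fuser -vm /var/lib/ceph/mon-a/data
sudo lsof /dev/drbd9
# once the holder (e.g. the ceph-mon pod) is stopped and the mount released:
sudo umount /var/lib/ceph/mon-a/data
sudo drbdadm secondary drbd-cephmon
sudo drbdadm down drbd-cephmon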
Any idea how to debug this further?
Thanks
Martin, Chen
IOTG, Software Engineer
021-61164330
From: Chen, Haochuan Z
Sent: Tuesday, November 10, 2020 10:00 PM
To: Sun, Austin <austin.sun at intel.com>; Qi, Mingyuan <mingyuan.qi at intel.com>; 'Church, Robert' <Robert.Church at windriver.com>; 'Poncea, Ovidiu' <Ovidiu.Poncea at windriver.com>; 'Waines, Greg' <Greg.Waines at windriver.com>; 'Voiculeasa, Dan' <Dan.Voiculeasa at windriver.com>
Cc: Hu, Yong <yong.hu at intel.com>; 'starlingx-discuss at lists.starlingx.io' <starlingx-discuss at lists.starlingx.io>
Subject: RE: rook deployment for duplex
Hi Ovidiu & Bob
Would this Thursday morning (Bob's time) / Thursday afternoon (Ovidiu's time) work for a meeting?
We could sync on rook status, and I will share the duplex deployment updates.
Thanks
Martin, Chen
IOTG, Software Engineer
021-61164330
From: Chen, Haochuan Z
Sent: Friday, November 6, 2020 10:24 PM
To: Sun, Austin <austin.sun at intel.com>; Qi, Mingyuan <mingyuan.qi at intel.com>; 'Church, Robert' <Robert.Church at windriver.com>; 'Poncea, Ovidiu' <Ovidiu.Poncea at windriver.com>; 'Waines, Greg' <Greg.Waines at windriver.com>; 'Voiculeasa, Dan' <Dan.Voiculeasa at windriver.com>
Cc: Hu, Yong <yong.hu at intel.com>; 'starlingx-discuss at lists.starlingx.io' <starlingx-discuss at lists.starlingx.io>
Subject: RE: rook deployment for duplex
Hi Ovidiu, Bob and Greg
I have introduced my design for duplex to Dan. If you think it necessary, I could book a meeting next Tuesday to explain it again.
Thanks!
Martin, Chen
IOTG, Software Engineer
021-61164330
From: Chen, Haochuan Z
Sent: Thursday, November 5, 2020 4:22 PM
To: Sun, Austin <austin.sun at intel.com>; Qi, Mingyuan <mingyuan.qi at intel.com>; Church, Robert <Robert.Church at windriver.com>; Poncea, Ovidiu <Ovidiu.Poncea at windriver.com>; Waines, Greg <Greg.Waines at windriver.com>; Voiculeasa, Dan <Dan.Voiculeasa at windriver.com>
Cc: Hu, Yong <yong.hu at intel.com>; starlingx-discuss at lists.starlingx.io
Subject: rook deployment for duplex
Hi Ovidiu
I have confirmed the issue you raised for duplex, so I have changed the rook deployment for duplex as follows.
1. Duplex will use host networking; the ceph mon IP is the controller floating IP.
2. There is only one monitor pod, running on the active controller.
3. As with the native ceph cluster, there is a DRBD filesystem named drbd-cephmon to sync mon data between the two controllers, mounted at /var/lib/ceph/mon-a/data.
4. There is a cronjob that periodically checks whether any host is offline or locked. If a host is offline and the mon pod is deployed on that host, the cronjob deletes the mon deployment so the rook-ceph operator recreates it on the other controller.
# Because when the rook operator launches the mon deployment, its nodeSelector binds the deployment to that host, the mon deployment must be deleted to make the operator reschedule it onto the other controller.
# This is similar to what SM does for the native ceph monitor.
We could discuss tomorrow.
Task items for the above design:
1. Update sysinv to enable drbd-cephmon for rook
1) Add "system storage-backend-add rook --confirmed"
2) In cgtsclient, add storage_rook.py
3) In sysinv-api, add storage_rook.py, which will call sysinv-conductor update_rook_config
4) In sysinv-conductor update_rook_config:
if cutils.is_aio_duplex_system(self.dbapi):
    # On 2 node systems we have a floating Ceph monitor.
    classes.append('platform::drbd::cephmon::runtime')
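Once that runtime manifest has applied, the new resource should show up next to the other DRBD filesystems; a quick check with the standard drbd-utils commands:
sudo drbd-overview | grep cephmon
sudo drbdadm role drbd-cephmon     # expect Primary on the active controller
sudo drbdadm cstate drbd-cephmon   # expect Connected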
2. Update the rook-app helm override
For duplex, use host networking and a single mon (mon count 1); a rough override sketch follows.
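Sketch of the override (value keys and app/chart names to be confirmed against the chart's values.yaml):
cat > rook-duplex-overrides.yaml <<EOF
cluster:
  mon:
    count: 1
  network:
    hostNetwork: true
EOF
system helm-override-update --values rook-duplex-overrides.yaml rook-ceph-apps rook-ceph kube-system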
3. Add a pre-install job in the rook-ceph helm chart:
# controller-1 is the active controller
# 192.188.204.2 is the floating IP
# The rook operator reads this configmap, so the only monitor pod, mon-a, will be deployed on the active controller.
export ROOK_EXTERNAL_CEPH_MON_DATA=a=192.188.204.2:6789
export ROOK_EXTERNAL_MAX_MON_ID=0
export ROOK_EXTERNAL_MAPPING='{"node":{"a":{"Name":"controller-1","Hostname":"controller-1","Address":"192.188.204.2"}}}'
kubectl -n kube-system create configmap rook-ceph-mon-endpoints \
--from-literal=data="$ROOK_EXTERNAL_CEPH_MON_DATA" \
--from-literal=mapping="$ROOK_EXTERNAL_MAPPING" \
--from-literal=maxMonId="$ROOK_EXTERNAL_MAX_MON_ID"
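After the pre-install job runs, the resulting configmap can be checked with:
kubectl -n kube-system get configmap rook-ceph-mon-endpoints -o yaml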
4. Add a cron job in the rook-ceph-provision helm chart.
If any host is down and the mon-a pod was deployed on that host (at that point the mon-a pod is stuck terminating/pending):
a) Delete the mon-a pod and the mon-a deployment.
b) Update the configmap rook-ceph-mon-endpoints, field "mapping":
'{"node":{"a":{"Name":"controller-0","Hostname":"controller-0","Address":"192.188.204.2"}}}' # hostname changed to the active controller, controller-0
c) Delete the operator pod and wait for it to relaunch and recreate the mon-a deployment.
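For reference, a rough shell sketch of steps a) to c) as the cron job could run them (deployment, configmap and label names follow the design above; treat them as assumptions to adjust against the real chart):
# a) remove the stale mon-a deployment (its pod goes with it)
kubectl -n kube-system delete deployment rook-ceph-mon-a --ignore-not-found
# b) point mon "a" at the now-active controller-0 in the endpoints configmap
kubectl -n kube-system patch configmap rook-ceph-mon-endpoints --type merge \
  -p '{"data":{"mapping":"{\"node\":{\"a\":{\"Name\":\"controller-0\",\"Hostname\":\"controller-0\",\"Address\":\"192.188.204.2\"}}}"}}'
# c) restart the operator so it recreates the mon-a deployment on controller-0
kubectl -n kube-system delete pod -l app=rook-ceph-operator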
BR!
Martin, Chen
IOTG, Software Engineer
021-61164330