Hi Ovidiu

 

I confirm the issue you raised for duplex, so I am changing the Rook deployment for duplex as follows.

1. Duplex will use the host network; the Ceph mon IP is the controller floating IP.

2. There is only one monitor pod, running on the active controller.

3. Same as with the native Ceph cluster, there is a DRBD filesystem named drbd-cephmon to sync mon data between the two controllers, mounted at /var/lib/ceph/mon-a/data.

4. There is a cronjob that periodically checks whether a host is offline or locked. If a host is offline and the mon pod is deployed on that host, it deletes the mon deployment, and the rook-ceph operator will recreate the mon deployment on the other controller (a rough sketch of this check follows the notes below).

# Because when rook-operator launches the mon deployment, the nodeSelector of that deployment binds it to one host. So we must delete the mon deployment to make the operator reschedule it to the other controller.

# This is similar to what SM does for native Ceph.
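For item 4, the periodic check could look roughly like the sketch below. This is only a sketch under assumptions: the deployment name rook-ceph-mon-a, the kube-system namespace (as used in the pre-install job later in this mail), and the kubernetes.io/hostname nodeSelector that rook-operator normally sets on mon deployments. It also only covers the offline case; the locked case would need a sysinv query on top.

    # Sketch of the periodic mon failover check (names/namespace are assumptions)
    NS=kube-system

    # Which host is mon-a currently pinned to by the operator?
    MON_HOST=$(kubectl -n "$NS" get deployment rook-ceph-mon-a \
        -o jsonpath='{.spec.template.spec.nodeSelector.kubernetes\.io/hostname}')

    # Is that host still Ready?
    READY=$(kubectl get node "$MON_HOST" \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')

    if [ "$READY" != "True" ]; then
        # Host is not Ready (offline): delete the pinned deployment so the
        # operator can reschedule mon-a on the other controller (see task item 4)
        kubectl -n "$NS" delete deployment rook-ceph-mon-a
    fi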

 

We could discuss this tomorrow.

 

 

Task items for the above design:

1. Update sysinv to enable DRBD ceph-mon for rook

    1) Add the command: system storage-backend-add rook --confirmed

    2) In cgtsclient, add storage_rook.py

    3) In sysinv-api, add storage_rook.py, which will call sysinv-conductor update_rook_config

    4) In sysinv-conductor update_rook_config:

           if cutils.is_aio_duplex_system(self.dbapi):
               # On 2 node systems we have a floating Ceph monitor.
               classes.append('platform::drbd::cephmon::runtime')
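Once this is in place, enabling the backend on a duplex system should just be the new CLI command; something like the following, with storage-backend-list only as a sanity check:

    # Enable the rook storage backend (new command from item 1 above)
    system storage-backend-add rook --confirmed

    # Confirm the backend shows up
    system storage-backend-list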

 

2. Update the rook-app override

    For duplex, use hostNetwork and a single mon (count: 1); a rough override sketch follows.
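For reference, the duplex override could look roughly like this. The exact key names below (network.provider, mon.count, allowMultiplePerNode) are assumptions based on the upstream Rook CephCluster settings, not the final rook-app values layout:

    # Sketch of a duplex values override (key names are assumptions)
    cat > duplex-overrides.yaml <<'EOF'
    network:
      provider: host          # use host networking on duplex
    mon:
      count: 1                # single floating monitor
      allowMultiplePerNode: false
    EOF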

 

3. Add a pre-install job in the rook-ceph helm chart

 

    # controller-1 is the active controller

    # 192.188.204.2 is the floating IP

    # rook-operator will read this configmap, and the only monitor pod, mon-a, will be deployed on the active controller

    export ROOK_EXTERNAL_CEPH_MON_DATA=a=192.188.204.2:6789

    export ROOK_EXTERNAL_MAX_MON_ID=0

    export ROOK_EXTERNAL_MAPPING='{"node":{"a":{"Name":"controller-1","Hostname":"controller-1","Address":"192.188.204.2"}}}'

 

    kubectl -n kube-system create configmap rook-ceph-mon-endpoints   \

            --from-literal=data="$ROOK_EXTERNAL_CEPH_MON_DATA"        \

            --from-literal=mapping="$ROOK_EXTERNAL_MAPPING"           \

            --from-literal=maxMonId="$ROOK_EXTERNAL_MAX_MON_ID"
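After the pre-install job has run, a quick sanity check of the seeded configmap (nothing design-specific, just kubectl):

    # Verify the seeded mon endpoint data and node mapping
    kubectl -n kube-system get configmap rook-ceph-mon-endpoints -o jsonpath='{.data.data}'; echo
    kubectl -n kube-system get configmap rook-ceph-mon-endpoints -o jsonpath='{.data.mapping}'; echo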

 

4. Add a cron job in the rook-ceph-provision helm chart

    If any host is down and the mon-a pod is deployed on that host (at this point the mon-a pod is stuck terminating/pending):

      a) Delete the mon-a pod and the mon-a deployment.

      b) Update the configmap rook-ceph-mon-endpoints, field mapping:

          '{"node":{"a":{"Name":"controller-0","Hostname":"controller-0","Address":"192.188.204.2"}}}'        # hostname changed to the now-active controller-0

      c) Delete the operator pod; when the operator relaunches it will recreate the mon-a deployment. A rough sketch of these steps follows.
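Roughly, the cron job actions would look like the sketch below. The kube-system namespace and the pod labels (app=rook-ceph-mon, app=rook-ceph-operator) are my assumptions here and need to be confirmed against the deployed chart.

    # Sketch of failover steps a)-c); namespace and labels are assumptions
    NS=kube-system

    # a) remove the pinned mon-a deployment and its stuck pod
    kubectl -n "$NS" delete deployment rook-ceph-mon-a --ignore-not-found
    kubectl -n "$NS" delete pod -l app=rook-ceph-mon --force --grace-period=0

    # b) re-point the mapping at the now-active controller-0; the inner quotes
    #    are escaped because "mapping" stores a JSON string inside the configmap
    kubectl -n "$NS" patch configmap rook-ceph-mon-endpoints --type merge -p \
      '{"data":{"mapping":"{\"node\":{\"a\":{\"Name\":\"controller-0\",\"Hostname\":\"controller-0\",\"Address\":\"192.188.204.2\"}}}"}}'

    # c) restart the operator; on startup it recreates the mon-a deployment
    #    using the updated mapping
    kubectl -n "$NS" delete pod -l app=rook-ceph-operator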

 

BR!

 

Martin, Chen

IOTG, Software Engineer

021-61164330