Hi

 

Here is a summary.

 

The root cause is a mismatch between two timers: mgr-restful-plugin has a timer to check ceph-mgr readiness, and the service manager (sm) has a timer to audit mgr-restful-plugin. The timeout settings of these two timers do not match and are not properly calibrated against each other.

 

Thanks!

 

Martin, Chen

IOTG, Software Engineer

021-61164330

 

From: Chen, Haochuan Z
Sent: Friday, April 30, 2021 3:26 PM
To: Church, Robert <Robert.Church@windriver.com>; Qian, Bin <Bin.Qian@windriver.com>
Cc: starlingx-discuss@lists.starlingx.io
Subject: LP1918420 issue debug and fix solution proposal

 

Hi Bob and Qian Bin,

 

I have investigated this issue and describe the failure reason below. You can download the log file from the bug to check:

https://bugs.launchpad.net/starlingx/+bug/1918420

 

Log file:

https://bugs.launchpad.net/starlingx/+bug/1918420/+attachment/5493137/+files/ALL_NODES_20210428.182915.tar

 

In the above log file, at 14:19 controller-0 rebooted while it was the active controller. Controller-1 then began to swact to active.

From the controller-1 sm.log file: sm on controller-1 enabled the services and launched the enabled-service audit, but the mgr-restful-plugin audit failed:

2021-04-28T15:19:50.000 controller-1 sm: debug time[993.980] log<816> INFO: sm[91418]: sm_node_swact_monitor.cpp(37): Swact update: controller-1 is now active

2021-04-28T15:19:50.000 controller-1 sm: debug time[993.980] log<817> INFO: sm[91418]: sm_node_swact_monitor.cpp(57): Swact has completed successfully.

2021-04-28T15:19:50.000 controller-1 sm: debug time[993.980] log<818> INFO: sm[91418]: sm_service_group_fsm.c(904): Service group (vim-services) was in the go-active state and is now in the active state.

2021-04-28T15:19:51.000 controller-1 sm: debug time[994.849] log<820> INFO: sm[91418]: sm_main_event_handler.c(170): Set node (controller-1) requested, action=2, admin_state=unlocked, oper_state=enabled, avail_status=available, seqno=1.

2021-04-28T15:19:51.000 controller-1 sm: debug time[995.073] log<821> INFO: sm[91418]: sm_main_event_handler.c(170): Set node (controller-0) requested, action=2, admin_state=unlocked, oper_state=enabled, avail_status=available, seqno=2.

2021-04-28T15:19:51.000 controller-1 sm: debug time[995.329] log<822> INFO: sm[91418]: sm_service_audit.c(176): Action (audit-enabled) timeout with result (failed), state (unknown), status (unknown), and condition (unknown) for service (mgr-restful-plugin), reason_text=, exit_code=-65534.

2021-04-28T15:19:51.000 controller-1 sm: debug time[995.329] log<823> INFO: sm[91418]: sm_service_action.c(345): Aborting service (mgr-restful-plugin) with kill signal, pid=527592.

2021-04-28T15:19:51.000 controller-1 sm: debug time[995.329] log<824> INFO: sm[91418]: sm_service_audit.c(75): Max retires not met for action (audit-enabled) of service (mgr-restful-plugin), attempts=1.

2021-04-28T15:19:51.000 controller-1 sm: debug time[995.330] log<825> ERROR: sm[91418]: sm_service_audit.c(227): Failed to query service based on pid (527592), error=NOT_FOUND.

 

From controller-1's mgr-restful-plugin.log, at 15:19:51, we can see that just after the swact, "ceph mgr services list --format json" still returns the link "https://controller-0:7999" (it takes a while before it starts returning "controller-1:7999").
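
For reference, a minimal sketch of this lookup in Python (the helper name is mine, and it assumes the standard ceph CLI JSON output where the restful module's URL appears under the "restful" key):

import json
import subprocess

def get_restful_url():
    # Ask ceph-mgr which URL the restful module is serving on.
    # Just after a swact this can still point at the old controller.
    out = subprocess.check_output(
        ['ceph', 'mgr', 'services', '--format', 'json'])
    return json.loads(out).get('restful')  # e.g. "https://controller-0:7999/"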

Since controller-0 is rebooting at this point, mgr-restful-plugin's ping to the ceph-mgr service does not work, which makes the sm audit fail:

2021-04-28 15:19:47,994 99448 WARNING mgr-restful-plugin REST API ping failed: reason=HTTPSConnectionPool(host='controller-0', port=7999): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f4d0cd23e90>, 'Connection to controller-0 timed out. (connect timeout=15)'))

2021-04-28 15:19:47,994 99448 INFO mgr-restful-plugin REST API ping failure count=0
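
The ping in question amounts to roughly the following (a sketch only, not the plugin's actual code; the 15s connect timeout matches the log line above, and the function name is mine):

import requests

CONNECT_TIMEOUT = 15  # matches "connect timeout=15" in the log above

def ping_restful(url):
    # While controller-0 is rebooting, the connect times out and the
    # plugin's failure counter keeps climbing.
    try:
        requests.get(url, timeout=CONNECT_TIMEOUT, verify=False)
        return True
    except requests.exceptions.RequestException:
        return False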

 

After the audit fails twice, sm disables all services and tries to swact back to controller-0. But since controller-0 has still not recovered, sm keeps looping, enabling and then disabling services:

2021-04-28T15:22:30.000 controller-1 sm: debug time[1153.993] log<1678> INFO: sm[91418]: sm_service_domain_filter.c(338): Uncontrolled swact start

2021-04-28T15:22:30.000 controller-1 sm: debug time[1153.993] log<1679> INFO: sm[91418]: sm_node_swact_monitor.cpp(29): Swact has started, host will be standby

2021-04-28T15:22:30.000 controller-1 sm: debug time[1153.993] log<1680> INFO: sm[91418]: sm_service_domain_filter.c(338): Uncontrolled swact start

2021-04-28T15:22:30.000 controller-1 sm: debug time[1153.993] log<1681> INFO: sm[91418]: sm_node_swact_monitor.cpp(29): Swact has started, host will be standby

2021-04-28T15:22:30.000 controller-1 sm: debug time[1153.993] log<1682> INFO: sm[91418]: sm_service_domain_filter.c(338): Uncontrolled swact start

2021-04-28T15:22:30.000 controller-1 sm: debug time[1153.993] log<1683> INFO: sm[91418]: sm_node_swact_monitor.cpp(29): Swact has started, host will be standby

 

During this window, the user asked sysinv-conductor to remove a system application. As the cluster-host-ip service had been disabled by sm, the application removal failed:

sysinv 2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes [-] Exception generating bootstrap token: CalledProcessError: Command '['kubeadm', '--kubeconfig=/etc/kubernetes/admin.conf', 'config', 'view']' returned non-zero exit status 1

sysinv 2021-04-28 15:35:21.663 1072608 WARNING sysinv.conductor.kube_app [-] Failed to clear Armada locks.: MaxRetryError: HTTPSConnectionPool(host='192.168.206.1', port=6443): Max retries exceeded with url: /apis/armada.process/v1/namespaces/kube-system/locks/locks.armada.process.lock (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fb256c28650>: Failed to establish a new connection: [Errno 111] ECONNREFUSED',))

2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes Traceback (most recent call last):

2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/kubernetes.py", line 168, in _get_kubernetes_join_cmd

2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes     subprocess.check_call(cmd, stdout=f)  # pylint: disable=not-callable

2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes   File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call

2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes     raise CalledProcessError(retcode, cmd)

2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes CalledProcessError: Command '['kubeadm', '--kubeconfig=/etc/kubernetes/admin.conf', 'config', 'view']' returned non-zero exit status 1

2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes

 

In mgr-restful-plugin.py, the timer that checks ceph-mgr service readiness allows a failure count of 5 with a 15s timeout per attempt; on each failure it waits 3s and checks again.
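
Schematically, and with illustrative names rather than the plugin's actual ones, that check behaves like this:

import time

PING_TIMEOUT = 15  # seconds each readiness ping may block
RETRY_DELAY = 3    # seconds to wait before the next attempt
MAX_FAILURES = 5   # failure count allowed before giving up

def wait_for_ceph_mgr(ping):
    # One failed pass costs up to 15s + 3s = 18s, so an external
    # auditor with only a 15s budget can expire mid-check.
    failures = 0
    while failures < MAX_FAILURES:
        if ping():  # blocks for up to PING_TIMEOUT
            return True
        failures += 1
        time.sleep(RETRY_DELAY)
    return False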

In sm, the audit of mgr-restful-plugin allows 2 timeout attempts with a 15s timeout, which is shorter than the plugin's worst-case check cycle above. I propose changing the sm timeout to 20s (longer than 15s + 3s) and the timeout attempts to 3. The relevant SERVICE_ACTIONS entry:

INSERT INTO "SERVICE_ACTIONS" VALUES('mgr-restful-plugin','audit-enabled','lsb-script','','mgr-restful-plugin','status','',2,2,2,15,40);

 

My patch, for your review:

https://review.opendev.org/c/starlingx/ha/+/788897

 

Thanks!

 

Martin, Chen

IOTG, Software Engineer

021-61164330