Hi Ankush, we discussed this on the community call today…

It wasn't apparent to anyone at a glance what the issue might be.

The only suggestion was that you could open a Launchpad bug (https://bugs.launchpad.net/starlingx) with all the details of the issue.

Thanks, Bill...

 

From: Rai, Ankush <Ankush.Rai@commscope.com>
Sent: Tuesday, May 25, 2021 10:51 AM
To: starlingx-discuss@lists.starlingx.io
Subject: [Starlingx-discuss] Issue - Host Switchover failure. [732]



Hi, we are seeing a node switchover failure.


Issue: The "Swact Host" (switch active controller) action in the Central Cloud is failing. The StarlingX GUI does not show any failure, but after progressing for a while it returns to the old state, i.e. the same controller node remains Active.

Steps to reproduce:

(1) Under the Hosts tab, for the active controller, click the Actions dropdown and select "Swact Host"
(2) Verify the Controller-0 and Controller-1 personalities in the Host Inventory section

Expected Result:

(1) The swact host action should be successful.
(2) Whichever controller was active before should now be displayed as Standby.
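
For reference, the same action can also be attempted from the CLI, which may surface a more explicit error than the GUI (a sketch using the standard system client, run on the active controller; controller-1 below is assumed to be the currently active controller):

source /etc/platform/openrc      # load admin credentials
system host-list                 # confirm current personalities and availability
system host-swact controller-1   # trigger the switchover of the active controller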

Note: StarlingX GUI issue screenshots are attached

From sysinv.log:

sysinv 2021-03-29 12:35:48.468 29411 INFO sysinv.api.controllers.v1.host [-] controller-1 1. delta_handle ['action']
sysinv 2021-03-29 12:35:48.850 29411 INFO sysinv.api.controllers.v1.rest_api [-] PATCH cmd:http://controller-1:7777/v1/servicenode/controller-1 hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:{"origin": "sysinv", "action": "swact-pre-check", "admin": "unknown", "oper": "unknown", "avail": ""}
sysinv 2021-03-29 12:35:48.940 29411 INFO sysinv.api.controllers.v1.rest_api [-] Response={u'origin': u'sm', u'oper': u'unknown', u'admin': u'unknown', u'hostname': u'controller-1', u'avail': u'', u'error_details': None, u'action': u'swact-pre-check', u'error_code': u'0'}
sysinv 2021-03-29 12:35:48.942 29411 INFO sysinv.api.controllers.v1.host [-] controller-1 Action staged: swact
sysinv 2021-03-29 12:35:48.942 29411 INFO sysinv.api.controllers.v1.host [-] controller-1 post action_stage hostupdate action=swact notify_vim=False notify_mtc=True skip_notify_mtce=False
sysinv 2021-03-29 12:35:48.942 29411 INFO sysinv.api.controllers.v1.host [-] controller-1 2. delta_handle ['action']
sysinv 2021-03-29 12:35:48.943 29411 INFO sysinv.api.controllers.v1.host [-] controller-1 apply ihost_val {'task': u'Swacting'}
sysinv 2021-03-29 12:35:48.957 29411 INFO sysinv.api.controllers.v1.host [-] controller-1 Action swact perform notify_mtce
cmd:http://localhost:2112/v1/hosts/6cf35736-20dd-4921-a5ed-faebbfa036b4 hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:{"tboot": "false", "ttys_dcd": null, "subfunctions": "controller,worker,lowlatency", "bm_ip": null, "install_state": "completed+", "rootfs_device": "/dev/sda", "bm_username": null, "clock_synchronization": "ntp", "operation": "modify", "serialid": null, "id": 2, "console": "ttyS0,115200", "uuid": "6cf35736-20dd-4921-a5ed-faebbfa036b4", "mgmt_ip": "10.222.21.3", "software_load": "20.06", "config_status": null, "hostname": "controller-1", "iscsi_initiator_name": "iqn.1994-05.com.redhat:4e6911b5176d", "capabilities": {"stor_function": "monitor"}, "install_output": "text", "device_image_update": null, "location": {}, "availability": "available", "invprovision": "provisioned", "peer_id": null, "administrative": "unlocked", "personality": "controller", "recordtype": "standard", "reboot_needed": false, "bm_mac": null, "inv_state": "inventoried", "mtce_info": null, "isystem_uuid": "73b38f1a-8b20-436e-9eba-9a619a448cf4", "boot_device": "/dev/sda", "install_state_info": null, "mgmt_mac": "2c:ea:7f:65:a8:a6", "subfunction_oper": "enabled", "target_load": "20.06", "vsc_controllers": null, "operational": "enabled", "subfunction_avail": "available", "action": "swact", "bm_type": null}
sysinv 2021-03-29 12:36:34.304 10553 ERROR sysinv.openstack.common.rpc.common [-] Failed to consume message from queue: [Errno 104] Connection reset by peer: error: [Errno 104] Connection reset by peer
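
The ERROR line suggests sysinv lost its RPC connection to the message queue mid-swact. For context, this is roughly how we have been trying to correlate it on the active controller (an illustrative sketch; the log paths are the standard StarlingX locations, and sm-dump requires root):

sudo sm-dump                          # service group and service states as seen by SM
grep -i swact /var/log/sm.log         # service management view of the switchover window
grep -i swact /var/log/mtcAgent.log   # maintenance agent view of the same window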


Some alarms are raised during the switchover process, and it seems the peer node is not responding.

After some time, the original active/standby roles are restored.
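
For reference, the alarms can be inspected with the standard fault management client (a sketch; <alarm-uuid> is a placeholder for a UUID taken from the list output):

fm alarm-list                 # active alarms raised during the switchover window
fm alarm-show <alarm-uuid>    # full details, including probable cause, for one alarm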


Please let me know how to further debug the issue.
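
If a full log bundle would help (e.g. for a Launchpad bug), we can generate one with the collect utility (a sketch; run as sysadmin on the active controller, it gathers the host's logs into a tarball under /scratch):

collect    # attach the resulting tarball from /scratch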


Thanks,

Ankush