[Starlingx-discuss] [QUERY] Worker nodes in reboot loop due to heartbeat miss as per hbsAgent.log

MacDonald, Eric Eric.MacDonald at windriver.com
Thu Nov 5 17:16:27 UTC 2020


I suggest you check that ‘multicast’ is working on the management network.
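A quick way to check this (a sketch only; the interface name vlan143 and the heartbeat multicast group 239.1.1.2 are taken from the hbsClient.log config dump below, and this assumes tcpdump is available on the worker) is to confirm the management interface has joined the multicast group and that heartbeat pulses are actually arriving:

worker-1:~$ ip maddr show dev vlan143
worker-1:~$ sudo tcpdump -ni vlan143 host 239.1.1.2

If tcpdump shows no traffic to 239.1.1.2, look at IGMP snooping / multicast forwarding on the switches carrying the management network.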

You can configure the system to only ‘alarm’ rather than ‘fail’ for ‘maintenance heartbeat loss’ while you debug.
Doing so through the service parameter shown below should allow the host to enable.

[sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-modify platform maintenance heartbeat_failure_action=alarm
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| uuid        | ee4af4b8-11f2-4638-9f20-11b13db837d4 |
| service     | platform                             |
| section     | maintenance                          |
| name        | heartbeat_failure_action             |
| value       | alarm                                |
| personality | None                                 |
| resource    | None                                 |
+-------------+--------------------------------------+

[sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-apply platform
Applying platform service parameters

Now lock and unlock the failing hosts.
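For example (worker-1 taken from the alarm list below; repeat for each failing host):

[sysadmin at controller-0 ~(keystone_admin)]$ system host-lock worker-1
[sysadmin at controller-0 ~(keystone_admin)]$ system host-unlock worker-1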

Then switch back to the ‘fail’ action once you have the issue resolved.
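For example, set the same service parameter back and re-apply it:

[sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-modify platform maintenance heartbeat_failure_action=fail
[sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-apply platform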

Eric

From: Gaur, Shubham <Shubham.Gaur at commscope.com>
Sent: Thursday, November 5, 2020 12:01 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [QUERY] Worker nodes in reboot loop due to heartbeat miss as per hbsAgent.log

[Please note this e-mail is from an EXTERNAL e-mail address]

Hi All,



Setup:      Distributed StarlingX 4.0.

Problem: The central cloud is up and running, but the worker nodes in the edge cloud are in a continuous reboot loop.

  *   Edge cloud configuration: 1 controller and 2 workers.
  *   Attached hbsAgent logs. (Couldn't find anything in the puppet logs.)
  *   Tried locking and unlocking the worker nodes as per the alarm raised in fm alarm-list.
  *   Couldn't resolve the issue.

  *   Are there any other logs to look at? Help required!
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────



FM Alarm-List

+----------+-------------------------------------------------------------------------+------------------------------+----------+---------------+
| Alarm ID | Reason Text                                                             | Entity ID                    | Severity | Time Stamp    |
+----------+-------------------------------------------------------------------------+------------------------------+----------+---------------+
| 200.004  | worker-1 experienced a service-affecting failure. Auto-recovery in      | host=worker-1                | critical | 2020-11-05T14 |
|          | progress. Manual Lock and Unlock may be required if auto-recovery is    |                              |          | :15:55.127546 |
|          | unsuccessful.                                                           |                              |          |               |
|          |                                                                         |                              |          |               |
| 200.005  | worker-1 experienced a persistent critical 'Management Network'         | host=worker-1.network=       | critical | 2020-11-05T14 |
|          | communication failure.                                                  | Management                   |          | :15:45.610330 |
+----------+-------------------------------------------------------------------------+------------------------------+----------+---------------+



──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────



WORKER 1

---------After Unlock-------

worker-1:~$ uptime
 09:41:11 up 4 min,  1 user,  load average: 0.75, 1.07, 0.52



worker-1:~$ tail -f /var/log/hbsClient.log
2020-11-05T09:38:31.754 [31382.00028] worker-1 hbsClient --- daemon_main.cpp   ( 434) main                    : Info : Build Date  : Thu Jul  2 16:51:50 UTC 2020
2020-11-05T09:38:31.754 [31382.00029] worker-1 hbsClient --- daemon_main.cpp   ( 435) main                    : Info : ------------------------------------------------------
2020-11-05T09:38:31.754 [31382.00030] worker-1 hbsClient --- nlEvent.cpp       ( 291) open_netlink_socket     : Info : NLMon Groups: 1
2020-11-05T09:38:31.754 [31382.00031] worker-1 hbsClient hbs hbsClient.cpp     (1349) daemon_service_run      : Info : Pmon Pulse Counter Timer init with 600 seconds timeout
2020-11-05T09:38:31.754 [31382.00032] worker-1 hbsClient hbs hbsClient.cpp     (1353) daemon_service_run      : Info : Process Stall-Monitor starting in 600 seconds
2020-11-05T09:38:31.754 [31382.00033] worker-1 hbsClient hbs hbsClient.cpp     (1356) daemon_service_run      : Info : Ready Event Period 5 seconds
2020-11-05T09:38:31.754 [31382.00034] worker-1 hbsClient hbs hbsClient.cpp     (1359) daemon_service_run      : Info : Sending Heartbeat Ready Event
2020-11-05T09:38:31.783 [31382.00035] worker-1 hbsClient hbs hbsClient.cpp     ( 913) _service_pulse_request  : Info : Caching New RRI: 3 (from controller-0)
2020-11-05T09:38:31.783 [31382.00036] worker-1 hbsClient hbs hbsClient.cpp     (1045) _service_pulse_request  : Warn : controller-0 Mgmnt proividing no cluster history
2020-11-05T09:38:36.514 [31382.00037] worker-1 hbsClient hbs hbsClient.cpp     (1037) _service_pulse_request  : Info : controller-0 Mgmnt providing cluster history
...
...
2020-11-05T09:43:07.144 [31382.00038] worker-1 hbsClient sig daemon_signal.cpp ( 172) daemon_signal_hdlr      : Info : Received SIGTERM
2020-11-05T09:43:07.144 [31382.00039] worker-1 hbsClient --- daemon_config.cpp ( 301) daemon_dump_cfg         : Info : Configuration Settings ...
2020-11-05T09:43:07.144 [31382.00040] worker-1 hbsClient --- daemon_config.cpp ( 302) daemon_dump_cfg         : Info : scheduling_priority   = 99
2020-11-05T09:43:07.144 [31382.00041] worker-1 hbsClient --- daemon_config.cpp ( 314) daemon_dump_cfg         : Info : mgmnt_iface           = vlan143
2020-11-05T09:43:07.144 [31382.00042] worker-1 hbsClient --- daemon_config.cpp ( 315) daemon_dump_cfg         : Info : clstr_iface           = vlan143
2020-11-05T09:43:07.144 [31382.00043] worker-1 hbsClient --- daemon_config.cpp ( 316) daemon_dump_cfg         : Info : multicast             = 239.1.1.2
2020-11-05T09:43:07.144 [31382.00044] worker-1 hbsClient --- daemon_config.cpp ( 327) daemon_dump_cfg         : Info : uri_path              =
2020-11-05T09:43:07.144 [31382.00045] worker-1 hbsClient --- daemon_config.cpp ( 328) daemon_dump_cfg         : Info : keystone_prefix_path  =
2020-11-05T09:43:07.144 [31382.00046] worker-1 hbsClient --- daemon_config.cpp ( 329) daemon_dump_cfg         : Info : keystone_auth_host    =
2020-11-05T09:43:07.144 [31382.00047] worker-1 hbsClient --- daemon_config.cpp ( 330) daemon_dump_cfg         : Info : keystone_identity_uri =
2020-11-05T09:43:07.144 [31382.00048] worker-1 hbsClient --- daemon_config.cpp ( 331) daemon_dump_cfg         : Info : keystone_auth_uri     =
2020-11-05T09:43:07.144 [31382.00049] worker-1 hbsClient --- daemon_config.cpp ( 336) daemon_dump_cfg         : Info : keystone_region_name  = none
2020-11-05T09:43:07.144 [31382.00050] worker-1 hbsClient --- daemon_config.cpp ( 338) daemon_dump_cfg         : Info : barbican_api_host     = none
2020-11-05T09:43:07.144 [31382.00051] worker-1 hbsClient --- daemon_config.cpp ( 340) daemon_dump_cfg         : Info : mtc_rx_mgmnt_port     = 2101
2020-11-05T09:43:07.144 [31382.00052] worker-1 hbsClient --- daemon_config.cpp ( 358) daemon_dump_cfg         : Info : pmon_pulse_port       = 2109
2020-11-05T09:43:07.144 [31382.00053] worker-1 hbsClient --- daemon_config.cpp ( 362) daemon_dump_cfg         : Info : start_delay           = 600
2020-11-05T09:43:07.144 [31382.00054] worker-1 hbsClient --- daemon_config.cpp ( 366) daemon_dump_cfg         : Info : mask                  = 1180003c
2020-11-05T09:43:07.144 [31382.00055] worker-1 hbsClient --- daemon_config.cpp ( 367) daemon_dump_cfg         : Info : mode                  = none
2020-11-05T09:43:07.144 [31382.00056] worker-1 hbsClient --- daemon_config.cpp ( 370) daemon_dump_cfg         : Info : stall_pmon_thld       = 1250
2020-11-05T09:43:07.144 [31382.00057] worker-1 hbsClient --- daemon_config.cpp ( 371) daemon_dump_cfg         : Info : stall_mon_period      = 120
2020-11-05T09:43:07.144 [31382.00058] worker-1 hbsClient --- daemon_config.cpp ( 372) daemon_dump_cfg         : Info : stall_poll_period     = 20
2020-11-05T09:43:07.144 [31382.00059] worker-1 hbsClient --- daemon_config.cpp ( 373) daemon_dump_cfg         : Info : stall_rec_thld        = 2
2020-11-05T09:43:07.144 [31382.00060] worker-1 hbsClient --- daemon_config.cpp ( 404) daemon_dump_cfg         : Info : debug_filter          = none
2020-11-05T09:43:07.144 [31382.00061] worker-1 hbsClient --- daemon_config.cpp ( 405) daemon_dump_cfg         : Info : debug_event           = none
2020-11-05T09:43:07.144 -----------------------------------------------------------------------------------------
2020-11-05T09:43:07.144 Service State and Traceback -------------------------------------------------------------
2020-11-05T09:43:07.144 -----------------------------------------------------------------------------------------
packet_write_wait: Connection to 192.168.22.105 port 22: Broken pipe






───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────



CONTROLLER-0: SYSTEM: edge



------Before Unlocking Worker-----

[root at controller-0 log(keystone_admin)]# tail -f /var/log/hbsAgent.log
2020-11-05T08:19:12.739 +--------------+-----+-------+-------+-------+-------+------------+----------+-----------------+
2020-11-05T09:19:12.739 +--------------+-----+-------+-------+-------+-------+------------+----------+-----------------+
2020-11-05T09:19:12.739 | Mgmnt:    3  | Mon |  Mis  |  Max  |  Deg  | Fail  | Pulses Tot |  Pulses  | Enabled  ( 100) |
2020-11-05T09:19:12.739 +--------------+-----+-------+-------+-------+-------+------------+----------+-----------------+
2020-11-05T09:19:12.739 | controller-0 |  n  |     0 |     0 |     0 |     0 |          0 |        0 | 100 msec
2020-11-05T09:19:12.739 | worker-0     |  n  |     0 |     0 |    12 |    12 |       1d2c |       31 | 100 msec
2020-11-05T09:19:12.739 | worker-1     |  n  |     0 |     0 |     8 |     8 |       1750 |      42f | 100 msec
2020-11-05T09:19:12.739 +--------------+-----+-------+-------+-------+-------+------------+----------+-----------------+
2020-11-05T09:33:29.862 [85778.00574] controller-0 hbsAgent hbs nodeClass.cpp     (7761) mon_host                : Info : worker-1 heartbeat stop
2020-11-05T09:33:29.862 [85778.00575] controller-0 hbsAgent hbs hbsAgent.cpp      (2004) daemon_service_run      : Info : worker-1 heartbeat service disabled by stop command



------After Unlocking Worker-----

[root at controller-0 log(keystone_admin)]# tail -f /var/log/hbsAgent.log
2020-11-05T09:33:29.862 [85778.00574] controller-0 hbsAgent hbs nodeClass.cpp     (7761) mon_host                : Info : worker-1 heartbeat stop
2020-11-05T09:33:29.862 [85778.00575] controller-0 hbsAgent hbs hbsAgent.cpp      (2004) daemon_service_run      : Info : worker-1 heartbeat service disabled by stop command
2020-11-05T09:38:36.442 [85778.00576] controller-0 hbsAgent hbs hbsCluster.cpp    ( 265) hbs_cluster_add         : Info : worker-1 added to cluster
2020-11-05T09:38:36.442 [85778.00577] controller-0 hbsAgent hbs hbsCluster.cpp    ( 201) cluster_list            : Info : cluster: worker-1
2020-11-05T09:38:36.442 [85778.00578] controller-0 hbsAgent hbs nodeClass.cpp     (7749) mon_host                : Info : worker-1 heartbeat start
2020-11-05T09:38:36.539 [85778.00579] controller-0 hbsAgent hbs hbsCluster.cpp    ( 453) hbs_cluster_update      : Info : controller-0 added new controller-0:Mgmnt history to vault ; now have 1 network views
2020-11-05T09:38:36.544 [85778.00580] controller-0 hbsAgent hbs hbsCluster.cpp    ( 631) hbs_cluster_send        : Info : cluster state notification Reason: worker-1 Mgmnt heartbeat start
2020-11-05T09:38:36.544 Cluster Vault : C0 Mgmnt  [1:1]>[0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0]
2020-11-05T09:38:37.490 [85778.00581] controller-0 hbsAgent hbs hbsCluster.cpp    ( 631) hbs_cluster_send        : Info : cluster state notification Reason: worker-1 Mgmnt heartbeat pass
2020-11-05T09:38:37.490 Cluster Vault : C0 Mgmnt  [1:1] [1:1] [1:1] [1:1] [1:1] [1:1] [1:1] [1:1] [1:1] [1:1]>[0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0] [0:0]
2020-11-05T09:42:51.607 [85778.00582] controller-0 hbsAgent hbs nodeClass.cpp     (8593) lost_pulses             : Info : worker-1 Mgmnt Pulse Miss (  2) (max:  1)
2020-11-05T09:42:51.713 [85778.00583] controller-0 hbsAgent hbs nodeClass.cpp     (8593) lost_pulses             : Info : worker-1 Mgmnt Pulse Miss (  3) (max:  2)
2020-11-05T09:42:51.818 [85778.00584] controller-0 hbsAgent hbs nodeClass.cpp     (8593) lost_pulses             : Info : worker-1 Mgmnt Pulse Miss (  4) (max:  3)
2020-11-05T09:42:51.818 [85778.00585] controller-0 hbsAgent hbs nodeClass.cpp     (8616) lost_pulses             : Warn : worker-1 Mgmnt -> MINOR
2020-11-05T09:42:51.923 [85778.00586] controller-0 hbsAgent hbs nodeClass.cpp     (8586) lost_pulses             : Info : worker-1 Mgmnt Pulse Miss (  5) (max:  4) (in minor)
2020-11-05T09:42:52.028 [85778.00587] controller-0 hbsAgent hbs nodeClass.cpp     (8586) lost_pulses             : Info : worker-1 Mgmnt Pulse Miss (  6) (max:  5) (in minor)
2020-11-05T09:42:52.028 [85778.00588] controller-0 hbsAgent alm alarm.cpp         ( 146) alarm_                  : Info : worker-1 Management set major 200.005
2020-11-05T09:42:52.028 [85778.00589] controller-0 hbsAgent hbs nodeClass.cpp     (8635) lost_pulses             : Warn : worker-1 Mgmnt -> DEGRADED
2020-11-05T09:42:52.133 [85778.00590] controller-0 hbsAgent hbs nodeClass.cpp     (8579) lost_pulses             : Info : worker-1 Mgmnt Pulse Miss (  7) (max:  6) (in degrade)
2020-11-05T09:42:52.238 [85778.00591] controller-0 hbsAgent hbs nodeClass.cpp     (8579) lost_pulses             : Info : worker-1 Mgmnt Pulse Miss (  8) (max:  7) (in degrade)
2020-11-05T09:42:52.343 [85778.00592] controller-0 hbsAgent hbs nodeClass.cpp     (8579) lost_pulses             : Info : worker-1 Mgmnt Pulse Miss (  9) (max:  8) (in degrade)
2020-11-05T09:42:52.448 [85778.00593] controller-0 hbsAgent hbs nodeClass.cpp     (8562) lost_pulses             : Info : worker-1 Mgmnt Pulse Miss (10) (log throttled to every 4095)
2020-11-05T09:42:52.448 [85778.00594] controller-0 hbsAgent hbs nodeClass.cpp     (8705) lost_pulses             :Error : worker-1 Mgmnt *** Heartbeat Loss ***
2020-11-05T09:42:52.448 [85778.00595] controller-0 hbsAgent alm alarm.cpp         ( 146) alarm_                  : Info : worker-1 Management set critical 200.005
2020-11-05T09:42:52.448 controller-0 view from controller-0 event Mgmnt: 1:1 . . . . . . . . . . 1:0 . . . . . . . .
2020-11-05T09:42:52.448 [85778.00596] controller-0 hbsAgent hbs hbsCluster.cpp    ( 631) hbs_cluster_send        : Info : cluster state notification Reason: worker-1 heartbeat loss
2020-11-05T09:42:52.448 Cluster Vault : C0 Mgmnt  [1:1] [1:1] [1:1] [1:1] [1:1] [1:1] [1:0] [1:0] [1:0] [1:0] [1:0] [1:0] [1:0] [1:0] [1:0]>[1:1] [1:1] [1:1] [1:1] [1:1]
2020-11-05T09:42:52.449 [85778.00597] controller-0 hbsAgent hbs nodeClass.cpp     (7761) mon_host                : Info : worker-1 heartbeat stop
2020-11-05T09:42:52.449 [85778.00598] controller-0 hbsAgent hbs hbsCluster.cpp    ( 329) hbs_cluster_del         : Info : worker-1 deleted from cluster
2020-11-05T09:42:52.449 [85778.00599] controller-0 hbsAgent hbs hbsCluster.cpp    ( 201) cluster_list            : Info : cluster:
2020-11-05T09:42:52.449 [85778.00600] controller-0 hbsAgent hbs hbsCluster.cpp    ( 631) hbs_cluster_send        : Info : cluster state notification Reason: worker-1 deleted
2020-11-05T09:42:52.449 [85778.00601] controller-0 hbsAgent hbs hbsAgent.cpp      (2004) daemon_service_run      : Info : worker-1 heartbeat service disabled by stop command



Thanks and regards,

Shubham
