[Starlingx-discuss] Stx8 interface bonding issue

Webster, Steven Steven.Webster at windriver.com
Wed Apr 19 13:58:32 UTC 2023


Hello, this does look like a valid issue.  The following STX bug has been raised:

https://bugs.launchpad.net/starlingx/+bug/2010119

The issue appears to have been resolved in ifenslave v2.13 (STX 8 is on v2.12).
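
You can confirm the version installed on a node with a standard Debian package query:

# Show the installed ifenslave version (expect 2.12 on STX 8 nodes)
dpkg-query -W -f='${Version}\n' ifenslave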

The workaround noted in your mail can be applied to a system and will not be overwritten until the ifenslave package is updated via a system upgrade or some other patch that touches those files. The noted workaround produces essentially the same result as the official fix that went into v2.13.
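
For reference, you can confirm that the hook file is shipped by the ifenslave package (and so would only change when that package does) with dpkg:

# Show which package owns the modified hook file
dpkg -S /etc/network/if-pre-up.d/ifenslave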

The proper way to patch this in a production STX 8 environment would be to create and install a patch after pulling in ifenslave v2.13 to be explicitly built (this is likely exactly what the bug fix will be, if you want to have a go at doing it on the master branch). The STX patching documentation [1] needs some updating for Debian (it still references RPMs), but the sw-patch commands are still valid, and references [2] and [3] should help if you intend to go this route.
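
A rough outline of that flow with the sw-patch CLI is below, assuming you have already built a patch containing the rebuilt ifenslave; the patch file name and ID are placeholders:

# Upload the patch to the patching repo on the active controller
sudo sw-patch upload ifenslave-2.13-fix.patch

# Apply it and confirm its state
sudo sw-patch apply <PATCH_ID>
sudo sw-patch query

# Install it on each host (lock the host first if the patch is reboot-required),
# then confirm all hosts are patch-current
sudo sw-patch host-install worker-0
sudo sw-patch query-hosts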

Cheers,

Steve

[1] https://docs.starlingx.io/developer_resources/starlingx_patching.html
[2] https://review.opendev.org/c/starlingx/update/+/845169
[3] https://opendev.org/starlingx/update/src/branch/master/sw-patch/cgcs-patch/cgcs_make_patch

From: Sergei Akhmatov <sergei.akhmatov at xunison.com>
Sent: Wednesday, March 1, 2023 8:18 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Stx8 interface bonding issue

Hello.

I have a StarlingX 8 release (Debian-based) installation with duplex controllers + 2 workers, and I have run into an issue with the following configuration:
All nodes are on bare metal with 2x10G interfaces.
The interfaces are bonded with LACP, and VLAN interfaces are created for the different networks:

+--------------------------------------+--------------+----------+----------+---------+--------------+--------------------------+-----------------------------------+--------------------------------------------------+
| uuid                                 | name         | class    | type     | vlan id | ports        | uses i/f                 | used by i/f                       | attributes                                       |
+--------------------------------------+--------------+----------+----------+---------+--------------+--------------------------+-----------------------------------+--------------------------------------------------+
| 1c259ab0-c08f-4860-976c-6cc00e4cfaed | mgmt0        | platform | vlan     | 201     | []           | ['ae0']                  | []                                | MTU=1500                                         |
| 4b86faef-509f-4efb-a53a-3b2d1d9b98e8 | oam0         | platform | vlan     | 100     | []           | ['ae0']                  | []                                | MTU=1500                                         |
| 5edb4655-a048-4839-b01d-f2ec64e5ad01 | enp2s0f1     | None     | ethernet | None    | ['enp2s0f1'] | []                       | ['ae0']                           | MTU=1500                                         |
| 792d08ec-63cc-4ffa-aba2-8978e411433e | ae0          | None     | ae       | None    | []           | ['enp2s0f0', 'enp2s0f1'] | ['clusterhost0', 'mgmt0', 'oam0'] | MTU=1500,AE_MODE=802.3ad,AE_XMIT_POLICY=layer2+3 |
| 8d7fe091-4c63-4ef7-8983-cd2007470443 | enp2s0f0     | None     | ethernet | None    | ['enp2s0f0'] | []                       | ['ae0']                           | MTU=1500                                         |
| e6ff860f-a6ef-49df-bde7-437665625970 | clusterhost0 | platform | vlan     | 101     | []           | ['ae0']                  | []                                | MTU=1500                                         |
+--------------------------------------+--------------+----------+----------+---------+--------------+--------------------------+-----------------------------------+--------------------------------------------------+
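
(The table above is the host interface listing from the system CLI. A rough sketch of the kind of commands that produce this bond-plus-VLAN layout follows, for anyone reproducing it; the host, interface and VLAN names are taken from the table, and the flag spellings follow the STX interface docs but may differ between releases, so treat it as illustrative rather than exact.)

# List all interfaces on the node (produces a table like the one above)
system host-if-list -a worker-0

# Sketch: LACP bond over the two 10G ports, then a platform VLAN on top of it
system host-if-add -c platform --aemode 802.3ad --txhashpolicy layer2+3 worker-0 ae0 ae enp2s0f0 enp2s0f1
system host-if-add -c platform -V 201 worker-0 mgmt0 vlan ae0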

The worker nodes start correctly after a reboot, but after 24 hours the mgmt IP is removed from the interface and they become unavailable.
It turned out that networking.service is in a failed state:

sysadmin@worker-0:~$ systemctl status networking.service
● networking.service - Raise network interfaces
     Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Tue 2023-02-28 13:26:44 UTC; 46s ago
       Docs: man:interfaces(5)
    Process: 2068 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
   Main PID: 2068 (code=exited, status=1/FAILURE)
        CPU: 561ms
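
The unit log from the current boot should show why ifup exited with status 1 (standard systemd tooling, nothing STX-specific):

# Show networking.service messages from the current boot
journalctl -u networking.service -b --no-pager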

Because of that, the dhclient process is not running, the ‘valid_lft’ timeout is never refreshed, and after the lease time expires the IP address is removed from the interface.
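
A quick way to watch this happening before the address disappears (vlan201 here matches the mgmt VLAN dhclient shown further below; adjust the interface name as needed):

# Remaining DHCP lease lifetime on the interface; with no dhclient left
# running, valid_lft only counts down until the address is dropped
ip -4 addr show dev vlan201 | grep valid_lft

# Check whether a dhclient is still running for that interface
pgrep -af 'dhclient.*vlan201'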

Running ‘ifup -a’ manually gave me an error:

$ sudo /sbin/ifup -a --read-environment
No iface stanza found for master ae0
run-parts: /etc/network/if-pre-up.d/ifenslave exited with return code 1
ifup: failed to bring up enp2s0f0
No iface stanza found for master ae0
run-parts: /etc/network/if-pre-up.d/ifenslave exited with return code 1
ifup: failed to bring up enp2s0f1

Googling this error led me to a number of messages about Ethernet bonding being somewhat broken in Debian Bullseye at some point: https://blog.rtsp.us/debian-11-bullseye-bonding-problem-9d8d8866117e
Hacking /etc/network/if-pre-up.d/ifenslave as suggested in the article solved the problem:

sed -i 's/ifstate -l/ifquery -l/g' /etc/network/if-pre-up.d/ifenslave
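
A quick sanity check after applying the workaround (assuming the stock Debian hook path; ae0 is the bond name from this setup):

# The hook should now consult ifquery for configured interfaces instead of ifstate
grep -n 'ifquery -l' /etc/network/if-pre-up.d/ifenslave

# The bond master should be listed here if it is declared in /etc/network/interfaces,
# which is what the patched pre-up check relies on
ifquery --list | grep ae0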


After a reboot, the networking service is in an active state and is managing the dhclient processes:

sysadmin@worker-0:~$ systemctl status networking.service
● networking.service - Raise network interfaces
     Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
     Active: active (exited) since Tue 2023-02-28 14:28:43 UTC; 52s ago
       Docs: man:interfaces(5)
    Process: 2064 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=0/SUCCESS)
   Main PID: 2064 (code=exited, status=0/SUCCESS)
      Tasks: 8 (limit: 306419)
     Memory: 10.0M
        CPU: 678ms
     CGroup: /system.slice/networking.service
             ├─2436 /sbin/dhclient -4 -v -i -pf /run/dhclient.vlan101.pid -lf /var/lib/dhcp/dhclient.vlan101.leases -I -df /var/lib/dhcp/dhclient6.vlan102.leases vlan101
             └─2569 /sbin/dhclient -4 -v -i -pf /run/dhclient.vlan201.pid -lf /var/lib/dhcp/dhclient.vlan201.leases -I -df /var/lib/dhcp/dhclient6.vlan202.leases vlan201

Could someone please validate the issue?
What is the preferred way to fix this in a production environment until it can be fixed upstream?