[Starlingx-discuss] Questions about patch fd6cfc upstreaming

Qin, Kailun kailun.qin at intel.com
Wed Nov 21 03:01:06 UTC 2018


Allain,

Thanks a lot for the information! The scenario is reasonable and detailed enough for me. I'll feed this back to the Neutron team, along with some other follow-up answers, and see how it goes.

BR,
Kailun

From: Legacy, Allain [mailto:Allain.Legacy at windriver.com]
Sent: Wednesday, November 21, 2018 2:13 AM
To: Qin, Kailun <kailun.qin at intel.com>; Peters, Matt <Matt.Peters at windriver.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: Questions about patch fd6cfc upstreaming

We only observed this type of issue in a large office configuration where the neutron-server is overloaded during a DOR (dead office recovery) test, in which all nodes are powered off and back on. In such a scenario the system is overloaded for an extended period, and there is a long delay between when events occur and when notifications are received by subscribers. It is difficult to reproduce this on small systems, where the time between event and notification is short.

I don't remember the exact details of the entire scenario, but the high-level issue was that we wanted to avoid agents receiving and processing RPC messages that were sent to them before they started up. That happens more frequently in a DOR test because the server has a stale view of the system state and can send RPC messages to nodes that are not enabled yet. That is, its agent DB table may show all agents as healthy, depending on how long it took for the DOR to recover the controller node.

What we found was that it was possible for the server to think that the agent was up when it was actually down. During the window in which the server sees the agent as up, it can send it RPC messages. Those messages get queued up and delivered to the agent once it is finally up. The problem is that, since the agent was not actually up in the first place, those messages were never really valid. Therefore we wanted the agent to discard any RPC requests until after it was able to resync to the server. This allowed the system to avoid unnecessary transitions based on old data.
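
To make the intent concrete, here is a minimal sketch of the idea (illustrative only, not the actual patch; the "_fully_synced" flag and helper names are made up):

    # Illustrative only -- the "_fully_synced" flag and helper names are
    # hypothetical, not taken from the actual Neutron code or the patch.
    class DhcpAgentSketch(object):
        def __init__(self):
            # Treat incoming notifications as potentially stale until the
            # first successful resync against neutron-server completes.
            self._fully_synced = False

        def sync_state(self):
            # ... fetch the authoritative set of networks from neutron-server
            # and reconcile local DHCP state here ...
            self._fully_synced = True

        def network_create_end(self, context, payload):
            # Notification handler: messages queued while the agent was down
            # are delivered as soon as its RPC server starts, so they may
            # reflect decisions the server has since reversed.
            if not self._fully_synced:
                return  # drop the stale notification; sync_state() covers it
            self._configure_dhcp_for_network(payload['network']['id'])

        def _configure_dhcp_for_network(self, network_id):
            # Placeholder for the real enable/configure logic.
            pass

The point is simply that notification handlers become no-ops until the first resync has established a trustworthy view of the server's state.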

One of the specific problems that this was addressing was something like this:


1. A subnet had no remaining IP addresses to allocate.

2. A DHCP agent (agent-X) received a stale "create network" message, so it reserved a DHCP port with an IP address (this consumed the last available IP address).

3. Meanwhile, the DHCP agent (agent-Y) that was actually assigned the network came up but was not able to reserve a DHCP port because there were no IP addresses available.

4. The first agent (agent-X) was then taken down because its node was rebooted by system maintenance.

5. The second agent (agent-Y) never retried the DHCP port creation because the DHCP agent has no periodic audit, so there was no DHCP server servicing the network.




Regards,
Allain

Allain Legacy, Software Developer, Wind River
direct 613.270.2279  fax 613.492.7870 skype allain.legacy
350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5



From: Qin, Kailun [mailto:kailun.qin at intel.com]
Sent: Tuesday, November 20, 2018 2:02 AM
To: Peters, Matt
Cc: starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>
Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming

Hi Matt,

I'm working on upstreaming patch fd6cfc, which tries to address the stale RPC message issue seen when the DHCP agent restarts.

The patch is in good shape (https://review.openstack.org/609463/), but the Neutron community has been questioning the exact failure modes of this issue. The DHCP agent performs a full sync after it restarts (https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L195), so what kind of corner cases and negative behaviors could happen even with this full sync?
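
To make the question concrete, here is a simplified, self-contained model of the window I suspect (this is not Neutron code; the ordering and names are assumptions made purely for illustration):

    # Toy model only: stale notifications queued while the agent was down are
    # drained after the full sync and applied without any staleness check.
    import queue

    def restart_agent(pending_rpc, networks_from_server):
        hosted = set()

        # Full sync: take the server's current view as authoritative.
        hosted.update(networks_from_server)

        # Drain notifications that were queued while the agent was down.
        # Without a staleness guard, an old "add"/"remove" can contradict
        # the state just established by the sync.
        while not pending_rpc.empty():
            op, net = pending_rpc.get()
            if op == 'add':
                hosted.add(net)
            elif op == 'remove':
                hosted.discard(net)
        return hosted

    if __name__ == '__main__':
        stale = queue.Queue()
        stale.put(('add', 'network1'))  # sent while the agent was still seen as up
        # The server has since rescheduled network1 to another agent:
        print(restart_agent(stale, networks_from_server=[]))  # -> {'network1'}

In this toy model the restarted agent ends up servicing network1 even though the server no longer assigns it, which is the kind of negative behavior I am trying to trigger.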

Based on the commit message, I tried to reproduce this issue with the following steps:

1. Schedule network1 to agent1.

2. Turn down agent1 at almost the same time.

3. network1 is rescheduled to agent2 after agent1 is found to be dead.

4. Bring agent1 back up, expecting stale RPC messages to be received by agent1 so that both agent1 and agent2 end up servicing network1.

However, I can only hit the described failure mode by sending another scheduling operation (network1 -> agent1) after step 2 is done; otherwise everything seems to work as expected.
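
For reference, here is roughly how I drive the scheduling operations with python-neutronclient (the UUIDs and credentials are placeholders, and stopping/restarting the neutron-dhcp-agent service on agent1's host is done out-of-band, e.g. with systemctl):

    # Rough reproduction sketch using python-neutronclient.
    from keystoneauth1 import loading, session
    from neutronclient.v2_0 import client

    auth = loading.get_plugin_loader('password').load_from_options(
        auth_url='http://controller:5000/v3',
        username='admin', password='secret', project_name='admin',
        user_domain_name='Default', project_domain_name='Default')
    neutron = client.Client(session=session.Session(auth=auth))

    NETWORK1 = '<network1-uuid>'
    AGENT1 = '<agent1-uuid>'

    # Step 1: schedule network1 to agent1.
    neutron.add_network_to_dhcp_agent(AGENT1, {'network_id': NETWORK1})

    # Step 2: stop neutron-dhcp-agent on agent1's host, then wait for
    # neutron-server to report agent1 as dead and reschedule network1.

    # The extra operation needed to hit the failure: target agent1 again
    # while it is down, so the request becomes a queued, stale message.
    neutron.add_network_to_dhcp_agent(AGENT1, {'network_id': NETWORK1})

    # Step 4: restart neutron-dhcp-agent on agent1's host, then check which
    # agents are hosting network1.
    for agent in neutron.list_dhcp_agent_hosting_networks(NETWORK1)['agents']:
        print(agent['id'], agent['host'], agent['alive'])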

Could you please provide more details about the failure pattern of this issue and/or the reproduction steps? Thanks a lot.

BR,
Kailun


