[Starlingx-discuss] Questions about patch fd6cfc upstreaming
From: Qin, Kailun
Sent: Tuesday, November 20, 2018 2:02 AM

Hi Matt,

I'm working on upstreaming patch fd6cfc, which tries to address the stale-RPC-message issue seen when the DHCP agent restarts. The patch is in good shape (https://review.openstack.org/609463/), but the Neutron community has been asking about the exact failure modes of this issue. The DHCP agent performs a full sync after it restarts (https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py...), so what corner cases and negative behaviors could still happen even with this full sync?

Based on the commit message, I tried to reproduce the issue with the following steps:
1. Schedule network1 to agent1.
2. Turn down agent1 at almost the same time.
3. network1 is rescheduled to agent2 after agent1 is found to be dead.
4. Turn up agent1, expecting stale RPC messages to be received by agent1 so that both agent1 and agent2 end up servicing network1.

However, I can only hit the described failure mode by sending another scheduling operation (network1 -> agent1) after step 2 is done; otherwise everything seems to work as expected. Would you please help provide more details about the failure pattern of this issue and/or the reproduction steps? Thanks a lot.

BR, Kailun
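For reference, the reproduction attempt above maps roughly onto the python-neutronclient admin calls below. The endpoint, credentials and UUIDs are placeholders, the legacy-style authentication is shown only for brevity, and step 3 is written as an explicit reschedule even though the server may do it automatically; this is a sketch of the manual test, not part of the patch itself.

    # Sketch of the manual reproduction steps (placeholder IDs and credentials).
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')  # placeholder

    NET1, AGENT1, AGENT2 = 'net1-uuid', 'agent1-uuid', 'agent2-uuid'

    # 1. Schedule network1 to agent1.
    neutron.add_network_to_dhcp_agent(AGENT1, {'network_id': NET1})

    # 2. Stop agent1 at almost the same time (e.g. stop the neutron-dhcp-agent
    #    service on that host; not an API call).

    # 3. Once agent1 is reported dead, network1 ends up on agent2 (shown here
    #    as an explicit reschedule).
    neutron.remove_network_from_dhcp_agent(AGENT1, NET1)
    neutron.add_network_to_dhcp_agent(AGENT2, {'network_id': NET1})

    # Extra operation that was needed to hit the failure: schedule network1
    # back to agent1 while it is still down, so a stale RPC message is queued.
    neutron.add_network_to_dhcp_agent(AGENT1, {'network_id': NET1})

    # 4. Restart agent1, then check which DHCP agents are hosting network1.
    print(neutron.list_dhcp_agent_hosting_networks(NET1))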
From: Legacy, Allain
Sent: Wednesday, November 21, 2018 2:13 AM

We only observed this type of issue in a large office configuration where the neutron-server is overloaded during a DOR test (dead office recovery), in which all nodes are powered off and back on. In such a scenario the system is overloaded for an extended period and there is a long delay between when events occur and when notifications are received by subscribers. It is difficult to reproduce this on small systems where the time between event and notification is short.

I don't remember the exact details of the entire scenario, but the high-level issue was that we wanted to avoid agents receiving and processing RPC messages that were sent to them before they started up. That happens more frequently in a DOR test because the server has a stale view of the system state and can send RPC messages to nodes that are not enabled yet. That is, its agent DB table may show that all agents are healthy, depending on how long it took for the DOR to recover the controller node.

What we found was that it was possible for the server to think that the agent was up when it was actually down. During the window where the server sees the agent as up, it can send it RPC messages. Those messages get queued up and delivered to the agent once it is finally up. The problem is that, since the agent was not actually up in the first place, those messages were never really valid. Therefore we wanted the agent to discard any RPC requests until after it was able to resync to the server. This allowed the system to avoid unnecessary transitions based on old data.

One of the specific problems this was addressing was something like this:
1. A subnet had no remaining IP addresses to allocate.
2. A DHCP agent (agent-X) received a stale "create network" message, so it reserved a DHCP port with an IP address (this used the last available IP address).
3. Meanwhile, the DHCP agent (agent-Y) that was actually assigned the network came up and was not able to reserve a DHCP port because there were no IP addresses available.
4. The first agent (agent-X) was taken down because its node was rebooted by system maintenance.
5. The second agent (agent-Y) never retries the DHCP port creation, because the DHCP agent has no periodic audit, so there was no DHCP server servicing the network.

Regards,
Allain
Allain Legacy, Software Developer, Wind River
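The behavior the patch is after is essentially "ignore agent-facing notifications until the first resync with the server has completed". Below is a minimal sketch of that idea; every name in it (_discard_until_synced, _initial_sync_done, ToyDhcpAgent) is invented for illustration and is not taken from the actual fd6cfc/609463 change.

    # Illustrative sketch only: drop DHCP-agent notifications that arrive before
    # the first resync with neutron-server has completed. All names here are
    # invented for the example; this is not the code from the patch.
    import functools
    import threading


    def _discard_until_synced(f):
        @functools.wraps(f)
        def wrapped(self, context, payload):
            if not self._initial_sync_done.is_set():
                # Produced while this agent was down or not yet ready; rely on
                # the upcoming full sync instead of acting on stale data.
                return None
            return f(self, context, payload)
        return wrapped


    class ToyDhcpAgent(object):
        def __init__(self):
            self._initial_sync_done = threading.Event()

        def run(self):
            self.sync_state()              # full resync against neutron-server
            self._initial_sync_done.set()  # only now honour queued RPC casts

        def sync_state(self):
            pass  # stand-in for the real full sync

        @_discard_until_synced
        def network_create_end(self, context, payload):
            print('configuring network %s' % payload['network']['id'])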
From: Qin, Kailun
Sent: Wednesday, November 21, 2018 11:01 AM

Allain,

Great, thanks for the information! The scenario is reasonable and detailed enough for me. Let's feed this back to the Neutron team, along with some other follow-up answers, and see how it goes.

BR, Kailun
From: Qin, Kailun
Sent: Monday, December 17, 2018 9:29 AM

Hi Allain, Matt,

I followed up on this RFE (https://bugs.launchpad.net/neutron/+bug/1795212) and received feedback from a Neutron core reviewer. He thinks the delay is not a good approach, because how much time the agent will need for a full sync after restart is unknown. He would prefer something based on timestamps, discarding messages that were sent before the agent started.

I believe you have also considered a timestamp/sequence/lifetime-number-based approach, so that stale messages can be discarded with more certainty. What's your opinion? Should we keep the delay approach for the DHCP agent and discuss it further in the drivers meeting to get more feedback, in the spirit of avoiding compatibility changes and changes that would impact running against an unmodified server? Or should we switch our investigation to the timestamp-based solution?

Thanks!

BR, Kailun
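For context, the reviewer's suggestion amounts to stamping each notification on the server side and having the agent drop anything stamped before its own start time. A rough sketch of that idea follows; the 'timestamp' payload field and the attribute names are assumptions made for illustration, since no such field is sent today, which is exactly where the compatibility concern comes from.

    # Rough sketch of the timestamp-based alternative. The 'timestamp' field in
    # the payload is assumed/invented here; an unmodified neutron-server would
    # not send it, hence the backward-compatibility concern.
    import time


    class TimestampFilteringAgent(object):
        def __init__(self):
            self._started_at = time.time()

        def _is_stale(self, payload):
            sent_at = payload.get('timestamp')
            if sent_at is None:
                # Unmodified server: no way to tell, so process the message.
                return False
            return sent_at < self._started_at

        def port_delete_end(self, context, payload):
            if self._is_stale(payload):
                return None  # sent before this agent instance started; ignore
            self._handle_port_delete(payload)

        def _handle_port_delete(self, payload):
            pass  # stand-in for the real handler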
From: Legacy, Allain
Sent: Monday, December 17, 2018 11:11 PM

In my opinion, it does not matter how long the full sync takes. Processing any RPC messages, even ones that are not stale, before the initial full sync completes is not guaranteed to provide consistent results. For example, if a port-update-end arrives before that port is received as part of the initial sync, it will unnecessarily result in a full resync of that port's network. Similarly, if a port-delete-end arrives before that port is received as part of the initial sync, it will be added to the "deleted_ports" list, but that list is not referenced during the full sync, so the information for that port will remain in the DHCP configuration for that network even though the port no longer exists. That will cause issues later when a new port is created and reuses the IP address of the deleted port.

If the core reviewers prefer using timestamps embedded within the RPC payload, then we can explore that option, but it will come with backward-compatibility constraints and additional complexity.

Regards,
Allain
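A toy model of the second race described above, purely for illustration (this is not the real agent code, and the data structures are heavily simplified): the delete is recorded while the initial sync is still in flight, the sync then writes its older snapshot of the port, and the stale entry survives.

    # Toy model only (not the real agent): a port-delete-end handled mid-sync is
    # recorded in deleted_ports, but the full sync rebuilds the config from an
    # older server snapshot without consulting that list.
    class ToyAgent(object):
        def __init__(self):
            self.dhcp_config = {}      # port_id -> IP rendered into the DHCP config
            self.deleted_ports = set()

        def port_delete_end(self, port_id):
            self.deleted_ports.add(port_id)
            self.dhcp_config.pop(port_id, None)  # no-op: port not configured yet

        def full_sync(self, server_snapshot):
            self.dhcp_config.update(server_snapshot)  # deleted_ports is ignored


    agent = ToyAgent()
    snapshot = {'port-1': '10.0.0.5'}  # server state fetched at the start of sync
    agent.port_delete_end('port-1')    # delete notification arrives mid-sync
    agent.full_sync(snapshot)
    assert 'port-1' in agent.dhcp_config  # stale entry survives; its IP can later
                                          # collide with a newly created port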
From: Qin, Kailun
Sent: Tuesday, December 18, 2018 8:36 AM

Allain,

Thanks a lot for your comments. Makes sense to me. Let's keep the proposed agent delay approach and see how it goes with the Neutron team.

BR, Kailun
From: Qin, Kailun
Sent: Tuesday, December 18, 2018 6:42 AM

Hello Allain,

The community responded with another question/case:
1. The agent starts and is doing a full sync, so it gets the list of ports and networks from the server and starts configuring them one by one, right?
2. During this time, processing of incoming RPC messages is blocked, right?
3. Now (still during the initial full sync) someone deletes ports, so a port-delete-end message is sent to the DHCP agent, but the agent refuses to process this message, right?
4. The full sync ends and the agent is still handling the port that was deleted in step 3. Am I right, or will it be cleaned up somehow?

It sounds like a good question to me, based on our current implementation. What do you think?

BR, Kailun
From: Legacy, Allain
Sent: Tuesday, December 18, 2018 9:23 PM

The RPC handlers (e.g., port_update_end) are all wrapped with "_wait_if_syncing", so they don't actually start processing until after the sync has completed. We are only trying to prevent messages from being processed between the start of the process lifetime and the beginning of the initial sync. That window is what leads to the issues we have noted.

To address yesterday's question about the initial delay being long, I don't think it needs to be more than ~10 seconds. Any stale RPC messages would be consumed quickly, since they are discarded without doing any real work in the agent.

Regards,
Allain
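For readers following along in the code: the wrapping Allain refers to is a reader/writer-lock pattern in neutron/agent/dhcp/agent.py, where the full sync holds the write side and each notification handler takes the read side, so handlers block until the sync finishes. Roughly (simplified from memory, so treat the details as approximate):

    # Simplified rendition of the locking pattern in neutron/agent/dhcp/agent.py
    # (approximate): sync_state() runs under the write lock, and notification
    # handlers such as port_update_end are decorated with _wait_if_syncing so
    # they wait on the read lock until the sync is done.
    import functools

    from oslo_concurrency import lockutils

    _SYNC_STATE_LOCK = lockutils.ReaderWriterLock()


    def _sync_lock(f):
        """Block all notification handlers while a full sync runs."""
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            with _SYNC_STATE_LOCK.write_lock():
                return f(*args, **kwargs)
        return wrapped


    def _wait_if_syncing(f):
        """Make a handler wait until any in-progress sync has completed."""
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            with _SYNC_STATE_LOCK.read_lock():
                return f(*args, **kwargs)
        return wrapped

The unprotected window is therefore only between process start and the moment sync_state() first takes the write lock, which is the window the ~10 second delay is meant to cover.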
From: Qin, Kailun
Sent: Wednesday, December 19, 2018 10:12 AM

Allain,

Thanks a lot for the feedback! Excuse me, I somehow missed the "_wait_if_syncing" decorator. Exactly, with this wrapper we should not have any problem with the case cited by the community.

BR, Kailun
Hi Allain, Matt,

I discussed this RFE in the Neutron drivers meeting last night. It was a heated discussion and took up almost all of the meeting time. However, the Neutron drivers team thought the delay approach was not reliable and would not perform predictably in all situations (there is no perfect setting for every deployment), along with some other concerns. If we do want the RFE to move forward, they would prefer approaches such as purge_queue in RabbitMQ (and possibly in oslo.messaging, https://www.rabbitmq.com/rabbitmqctl.8.html#purge_queue) or a resource queue as the l3-agent uses. Please see the meeting minutes for further details: http://eavesdrop.openstack.org/meetings/neutron_drivers/2018/neutron_drivers....

What do you think or suggest? Thanks.

BR, Kailun

From: Qin, Kailun
Sent: Wednesday, December 19, 2018 10:12 AM

Allain,

Thanks a lot for the feedback! Excuse me, I somehow missed the "_wait_if_syncing" decorator. Exactly, with this wrapper we should not have any problem with the case cited by the community.

BR, Kailun

From: Legacy, Allain
Sent: Tuesday, December 18, 2018 9:23 PM

The RPC handlers (e.g., port_update_end) are all wrapped with "_wait_if_syncing", so they do not actually start processing until after the sync has completed. We are only trying to prevent messages from being processed between the start of the process lifetime and the beginning of the initial sync. That window is what leads to the issues we have noted. To address yesterday's question about the initial delay being long, I don't think it needs to be more than ~10 seconds. Any stale RPC messages would be consumed quickly since they are discarded without doing any real work in the agent.

Regards, Allain
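For illustration, the gate plus the proposed startup discard window can be sketched roughly as follows (simplified, with invented names; this is not the actual Neutron decorator, which only waits while a sync is in progress and does not discard anything):

import functools
import threading
import time

# Simplified sketch combining the two mechanisms discussed above: a
# "wait if syncing" style gate plus the proposed startup window during which
# RPC messages are simply discarded. Invented names; not the real Neutron code.
STARTUP_DISCARD_WINDOW = 10  # seconds, per the ~10s suggestion above

class ToyAgent:
    def __init__(self):
        self._started_at = time.monotonic()
        self._sync_done = threading.Event()

    def run_initial_sync(self):
        # ... the full resync against the server would happen here ...
        self._sync_done.set()

def wait_if_syncing(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        # Proposed behaviour: drop messages that arrive within the first few
        # seconds of the process lifetime; they may have been queued while the
        # agent was down and are therefore suspect.
        if time.monotonic() - self._started_at < STARTUP_DISCARD_WINDOW:
            return None
        # Existing behaviour: block until the initial sync has completed.
        self._sync_done.wait()
        return func(self, *args, **kwargs)
    return wrapper

class ToyDhcpRpcHandlers(ToyAgent):
    @wait_if_syncing
    def port_update_end(self, payload):
        pass  # normal processing happens only after the gate above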
From: Qin, Kailun
Sent: Tuesday, December 18, 2018 6:42 AM

Hello Allain,

The community responds with another question/case:
1. The agent starts and is doing a full sync, so it gets the list of ports and networks from the server and starts configuring them one by one, right?
2. During this time, processing of incoming RPC messages is blocked, right?
3. Now (still during the initial full sync) someone deletes ports, so a port-delete-end message is sent to the DHCP agent, but the agent refuses to process this message, right?
4. The full sync ends and the agent is still handling the port that was deleted in step 3, am I right? Or will it be cleaned up somehow?

It sounds like a good question to me based on our current implementation. What do you think?

BR, Kailun

From: Qin, Kailun
Sent: Tuesday, December 18, 2018 8:36 AM

Allain,

Thanks a lot for your comments. Makes sense to me. Let's keep with the proposed agent delay approach and see how it goes with the Neutron team.

BR, Kailun

From: Qin, Kailun
Sent: Monday, December 17, 2018 9:29 AM

Hi Allain, Matt,

I followed up on this RFE (https://bugs.launchpad.net/neutron/+bug/1795212) and received feedback from a Neutron core reviewer. He thinks the delay is not a good approach because the time the agent needs to do a full sync after a restart is unknown. He prefers something based on timestamps that discards messages sent before the agent was started. I believe you have also considered a timestamp/sequence/lifetime-number based approach so that stale messages can be discarded with more certainty. What's your opinion? Should we keep the DHCP agent delay approach and discuss it further in the drivers meeting to gather more feedback, in the spirit of avoiding compatibility changes and changes that would impact running against an unmodified server? Or should we redirect our investigation to the timestamp-based solution? Thanks!

BR, Kailun
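For context, the timestamp-based idea the reviewer suggested amounts to something like the sketch below (the "sent_at" field is hypothetical and would have to be added on the server side, and the comparison assumes reasonably synchronized clocks, which is part of the compatibility and complexity concern mentioned above):

import time

# Sketch of the timestamp-discard idea: the server stamps each notification
# when it sends it, and the agent drops anything stamped before the agent
# process started. Field names are hypothetical; not existing Neutron code.
AGENT_STARTED_AT = time.time()

def should_process(payload):
    sent_at = payload.get("sent_at")
    if sent_at is None:
        return True                     # unmodified server: fail open
    return sent_at >= AGENT_STARTED_AT  # discard anything sent before startup

def port_update_end(context, payload):
    if not should_process(payload):
        return                          # stale: silently discard
    pass                                # ... normal handling would go here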
From: Qin, Kailun
Sent: Wednesday, November 21, 2018 11:01 AM

Allain,

Many thanks for the information! The scenario is reasonable and detailed enough for me. Let's feed this back, along with some other follow-up answers, to the Neutron team and see how it goes.

BR, Kailun
I read through the meeting minutes. I am not familiar with the resource queue or the oslo purge-queue functionality, but it does sound like it might provide a more deterministic solution. I recommend that we abandon our stale-RPC mechanism and work with the Neutron developers to investigate the feasibility of implementing a different approach based on their recommendations.

Allain
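Conceptually, the resource-queue idea boils down to something like the sketch below (an invented class for illustration, not the l3-agent's actual implementation): each update carries a sequence number, and an update is skipped once a newer one for the same resource has been queued.

import heapq
import itertools
import threading

# Conceptual sketch of a per-resource update queue: only the newest queued
# update for a given resource is processed; superseded updates are skipped
# when popped. Invented for illustration; not Neutron code.
class ToyResourceQueue:
    def __init__(self):
        self._counter = itertools.count()
        self._heap = []
        self._latest = {}   # resource_id -> newest sequence number seen
        self._lock = threading.Lock()

    def add(self, resource_id, payload):
        with self._lock:
            seq = next(self._counter)
            self._latest[resource_id] = seq
            heapq.heappush(self._heap, (seq, resource_id, payload))

    def pop(self):
        # Return the next (resource_id, payload) that has not been superseded,
        # or None when the queue is empty.
        with self._lock:
            while self._heap:
                seq, resource_id, payload = heapq.heappop(self._heap)
                if self._latest.get(resource_id) == seq:
                    return resource_id, payload
            return None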
Hi Allain,

Thanks for your comments. I'll abandon our current patch and start investigating the feasibility of implementing a different approach, as they recommended.

BR, Kailun
participants (2)
- Legacy, Allain
- Qin, Kailun