[Starlingx-discuss] Analysis report about Network Trunk feature for StarlingX upstreaming
Le, Huifeng
huifeng.le at intel.com
Tue Oct 16 01:45:56 UTC 2018
Matt,
Just FYI. Below is the community feedback (https://bugs.launchpad.net/neutron/+bug/1797368): the behavioral difference is known to the community, and the preference is to keep the current design (a short illustration of the documented semantics follows the quote). Thanks much!
“Hi, thanks for reporting this bug. In reality this is working as intended. The trunk admin status is simply a locking mechanism for management operations on the trunk, as articulated in the documentation: https://docs.openstack.org/neutron/pike/admin/config-trunking.html
In particular:
When admin_state is set to DOWN, the user is blocked from performing operations on the trunk. admin_state is set by the user and should not be used to monitor the health of the trunk.
Having said that, I can see the confusion and the different expectation some users may have. Neutron resources' admin states are not necessarily used for blocking the datapath (I think another example of that might be neutron router floating IP ports, but I am no longer sure).
The reason this was designed this way was mainly simplicity and robustness. Ensuring that the operation of turning the trunk admin status up/down worked reliably and atomically across all sub-ports in a trunk is not straightforward; therefore, in the absence of a strong use case, the existing semantics were chosen.
”
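To make the documented semantics concrete, here is a minimal sketch (not from the bug or the patch), assuming python-openstacksdk, an existing trunk named trunk0, and a clouds.yaml entry called mycloud; the subport UUID is a placeholder:

import openstack
from openstack import exceptions as os_exc

conn = openstack.connect(cloud='mycloud')      # assumed clouds.yaml entry
trunk = conn.network.find_trunk('trunk0')      # placeholder trunk name

# Lock the trunk for management operations only; traffic on existing
# subports is expected to keep flowing.
conn.network.update_trunk(trunk, admin_state_up=False)

# While admin_state_up is False the server rejects subport changes.
try:
    conn.network.add_trunk_subports(
        trunk,
        [{'port_id': 'SUBPORT-UUID',           # placeholder UUID
          'segmentation_type': 'vlan',
          'segmentation_id': 100}])
except os_exc.HttpException as exc:
    print('subport add blocked while the trunk is administratively down:', exc)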
Best Regards,
Le, Huifeng
From: Peters, Matt [mailto:Matt.Peters at windriver.com]
Sent: Wednesday, September 19, 2018 9:15 PM
To: Le, Huifeng <huifeng.le at intel.com>
Cc: Zhao, Forrest <forrest.zhao at intel.com>; starlingx-discuss at lists.starlingx.io
Subject: Re: Analysis report about Network Trunk feature for StarlingX upstreaming
Hello Huifeng,
I wanted to follow up on item #1 below (ba9d9f60a7a2665194cacb92a05e0acd2dc3de41: Add rpc notification for trunk updates). The concern is that there is a difference in behavior between ports and trunks that may impact the user experience, not just the agent/server behavior.
The problem is that it is not symmetric with setting admin_state_up on a Port. If you set that attribute to False on a Port, then that port is disabled in the vswitch; packets are no longer sent/received. Based on that expectation, it would make sense that setting admin_state_up=False on a trunk would disable that trunk on the vswitch (i.e., stop processing VLAN packets arriving from that VM instance), but that is not what happens; it continues to be operational. It is our opinion that this is incorrect behavior that is worth correcting.
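To illustrate the asymmetry (a sketch only, assuming python-openstacksdk; the UUIDs and cloud name are placeholders):

import openstack

conn = openstack.connect(cloud='mycloud')

# Port: the backend is expected to stop sending/receiving packets on it.
conn.network.update_port('PARENT-PORT-UUID', admin_state_up=False)

# Trunk: today this only locks management operations (add/remove subports);
# VLAN traffic from the instance keeps flowing, unlike the port case above.
conn.network.update_trunk('TRUNK-UUID', admin_state_up=False)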
Based on the above behavioral difference, I think it makes sense to pursue this with the neutron team since they may want to align on this behavior as well. If the neutron team rejects this change, then we can align on the current upstream behavior.
Regards, Matt
From: "Peters, Matt" <Matt.Peters at windriver.com<mailto:Matt.Peters at windriver.com>>
Date: Friday, September 7, 2018 at 11:00 AM
To: "Le, Huifeng" <huifeng.le at intel.com<mailto:huifeng.le at intel.com>>, "Jolliffe, Ian" <Ian.Jolliffe at windriver.com<mailto:Ian.Jolliffe at windriver.com>>, "Jones, Bruce E" <bruce.e.jones at intel.com<mailto:bruce.e.jones at intel.com>>, Brent Rowsell <Brent.Rowsell at windriver.com<mailto:Brent.Rowsell at windriver.com>>
Cc: "Zhao, Forrest" <forrest.zhao at intel.com<mailto:forrest.zhao at intel.com>>, "Troyer, Dean" <dean.troyer at intel.com<mailto:dean.troyer at intel.com>>, "starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>" <starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>>
Subject: RE: Analysis report about Network Trunk feature for StarlingX upstreaming
See inline for specific responses to the review information.
Responses marked with [MP>]
From: Le, Huifeng [mailto:huifeng.le at intel.com]
Sent: Sunday, August 19, 2018 10:32 PM
To: Jolliffe, Ian; Jones, Bruce E; Rowsell, Brent; Peters, Matt
Cc: Zhao, Forrest; Troyer, Dean; starlingx-discuss at lists.starlingx.io
Subject: RE: Analysis report about Network Trunk feature for StarlingX upstreaming
Ian,
Thanks very much for the comments. Some responses are below for your reference; please help to review. Thanks much!
Best Regards,
Le, Huifeng
From: Jolliffe, Ian [mailto:Ian.Jolliffe at windriver.com]
Sent: Saturday, August 18, 2018 4:16 AM
To: Le, Huifeng <huifeng.le at intel.com>; Jones, Bruce E <bruce.e.jones at intel.com>; Rowsell, Brent <Brent.Rowsell at windriver.com>; Peters, Matt <Matt.Peters at windriver.com>
Cc: Zhao, Forrest <forrest.zhao at intel.com>; Troyer, Dean <dean.troyer at intel.com>; starlingx-discuss at lists.starlingx.io
Subject: Re: Analysis report about Network Trunk feature for StarlingX upstreaming
Hi Huifeng;
Thanks for the updates/analysis, comments below.
Ian
Ian/Brent/Matt,
We did an analysis of the Network Trunk related patches for StarlingX upstreaming; below are the suggestions. Could you please help to review and comment? Thanks much!
1. ba9d9f60a7a2665194cacb92a05e0acd2dc3de41: Add rpc notification for trunk updates
Function: send a notification to the agent when a trunk is updated
Analysis:
(1) The trunk's AFTER_UPDATE event is generated for the API call PUT /v2.0/trunks/{trunk-id} (a rough sketch of this notification path follows the suggestion below).
The update request only changes fields like name, description, or admin_state_up. Setting admin_state_up to False locks the trunk, in that it prevents operations such as adding/removing subports.
In upstream Neutron, admin_state_up is used on the server side (e.g. add_subports, remove_subports, delete_trunk) and is not used on the agent side.
(2) The OVS trunk agent driver uses OVSDB events to handle trunk events, so there is no need to manually trigger a trunk update event.
(3) The Linux bridge trunk agent driver will handle trunk update events triggered by the server, but it would need this patch only if admin_state_up updates had to be handled.
Suggestion: Not a bug for Neutron upstream, suggest not to upstream
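For reference, the mechanism the patch adds can be sketched roughly as follows, assuming neutron_lib's callback registry; notify_agent_of_trunk_update() is a hypothetical stand-in, and the exact payload shape depends on the neutron_lib version:

from neutron_lib.callbacks import events, registry, resources


def notify_agent_of_trunk_update(trunk):
    # Hypothetical stand-in for the RPC push to the L2 agent hosting the
    # trunk's parent port (carrying the new admin_state_up value).
    pass


def _handle_trunk_update(resource, event, trigger, payload=None):
    # payload.latest_state holds the updated trunk in recent neutron_lib releases
    notify_agent_of_trunk_update(payload.latest_state)


registry.subscribe(_handle_trunk_update, resources.TRUNK, events.AFTER_UPDATE)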
If this is not upstreamed, are there dependencies or changes required in the StarlingX code base? What are the implications of not upstreaming?
[hle2] For STX, the trunk_updated event forces the trunk's parent port to refresh (e.g. handle_trunks->mark_port_for_refresh(trunk['port_id']), etc.) to get the new "admin_state_up" value from the server, and this value is used in handle_updated_port() to determine whether it is allowed to update the port/device status on the server side.
"admin_state_up" is mainly used to control operations on the Neutron server side, such as add_subports, remove_subports, delete_trunk, etc., and all three of these operations force the port to refresh (handle_trunks/handle_subports->mark_port_for_refresh). So presumably the general flow is not impacted whether or not the trunk_updated event is handled.
But in some corner cases, the "admin_state_up" check on the agent side may cause issues (please help to review whether this makes sense), e.g. for the following calling flow (suppose the trunk's 'admin_state_up' is 'up'): (1) add_subports, (2) set "admin_state_up" to 'down'. Step (1) may fail to set the device's state on the agent side if the AVS agent's handle_updated_port() (in the daemon loop) executes after step (2).
So to my understanding:
(1) If using the OVS agent in STX, there is no impact from not upstreaming.
(2) If using the AVS agent with STX, I suggest removing the "admin_state_up" check in the AVS agent (in handle_updated_port() of avs/agent.py), like below; a sketch of the suggested change follows the snippet.
if trunk_details and trunk_details['admin_state_up']:
…
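A minimal sketch of the suggested change (not the actual avs/agent.py code; the helper function below stands in for whatever handle_updated_port() does with a trunk parent port):

def _should_update_device_state(trunk_details):
    # Current behavior (paraphrased): skip the update while the trunk is
    # administratively down.
    # return bool(trunk_details) and trunk_details['admin_state_up']

    # Suggested behavior: only require trunk details, so the device status
    # update cannot be skipped by a racing admin_state_up change.
    return bool(trunk_details)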
[MP>] Thanks for the detailed analysis. The suggested change will need to be tested to see if it fully resolves the original issue. However, since the driver could be used by other agents (beyond just the AVS agent), I think it would not hurt to have the additional notification sent, to complete the driver definition for the set of RPC notifications.
2. 6955351c5eca6e37061fb0140d11ea53693fe0e1: Add support to delete bound network
Function: enable deleting a trunk if its parent port can_be_trunked (i.e. the port is not bound, or the driver's can_trunk_bound_port=True)
Analysis: Applies to the LinuxBridge driver and the AVS bridge driver (can_trunk_bound_port=True); no impact for OVSTrunkDriver (can_trunk_bound_port=False). A workaround is also available for Linux bridge (e.g. unbind the port first, then delete the trunk). The gating logic is sketched below.
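An illustrative sketch (not the actual Neutron/STX code) of the check described above:

def can_be_trunked(port, driver):
    """Return True if a trunk may be created/deleted on this parent port."""
    # A port is considered bound once it has been scheduled to a host.
    is_bound = bool(port.get('binding:host_id'))
    # Bound ports are allowed only when the loaded trunk driver advertises
    # support, e.g. Linux bridge/AVS (can_trunk_bound_port=True) vs. OVS (False).
    return (not is_bound) or driver.can_trunk_bound_port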
Suggestion: it is a low-priority bug for upstream Neutron (it only applies to Linux bridge and a workaround is available); suggest not to upstream
I think you need to propose a fix, or this will need to be carried long term.
[hle2] yes, let’s try to propose a fix for upstream.
[MP>] Agree.
3. 43a684946e781a25d21a4f50b8dc67d61be42809: Enable trunk service by default
Function: add “trunk” to DEFAULT_SERVICE_PLUGINS
Analysis: It is a deployment configuration for the downstream product (an example configuration follows below)
Suggestion: Not a bug for Neutron upstream, suggest not to upstream
Agree
[MP>] Agree.
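For reference, the deployment-side equivalent of this patch is simply enabling the trunk service plugin in neutron.conf (standard Neutron configuration) rather than changing the code default, for example:

[DEFAULT]
service_plugins = router,trunk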
4. c54d804792f10b7f505de6794274c4df4768f6f0: Include trunk presence in port details
Function: add a trunk_port (bool) flag to port_details to identify whether the port is the parent port of a trunk
Analysis: It is a performance improvement for the AVS agent, reducing RPC calls from the agent to the server; the OVS agent has a different implementation and gains no improvement from this field (a rough illustration follows)
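Roughly, the idea is that the server includes a boolean alongside the other port details so the agent does not need a separate RPC call to discover whether the port is a trunk parent (field layout simplified; not the actual RPC payload):

port_details = {
    'port_id': 'PORT-UUID',        # placeholder
    'admin_state_up': True,
    'trunk_port': True,            # added flag: this port is a trunk parent port
}

if port_details.get('trunk_port'):
    pass  # trunk-specific handling without an extra server round-trip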
Suggestion: Not a bug for Neutron upstream, suggest not to upstream
Agree
[MP>] Agree.
5. 3eed837ebd236e6b1959ea88d9ab5322c9eef6b9: Ignore trunk subports on same vlan as vlan-subnet ports
Function: ignore trunk subports on the same VLAN as VLAN-subnet ports
Analysis: It is a bug fix for the AVS agent
Suggestion: Not a bug for Neutron upstream, suggest not to upstream
Agree
[MP>] Agree.
Best Regards,
Le, Huifeng