[Starlingx-discuss] [Starlingx-discuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Xu, Chenjie chenjie.xu at intel.com
Wed Jul 31 05:34:38 UTC 2019


Hi Kunpeng,
I can’t reproduce this bug on stx 2.0. I have tried passing through both a physical NIC and a VF to the VM. Rebooting the VM doesn’t cause ovs-vswitchd to restart, and other VMs aren’t affected. Maybe you can use stx 2.0 instead of stx 1.0, given that stx 2.0 will be released in the near future.
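In case it helps when you retest on your side, this is roughly how the check can be done, by comparing the ovs-vswitchd PID and log before and after the VM reboot (just a sketch, assuming OVS is managed by systemd, as on your node where you used systemctl restart ovs-vswitchd):

pidof ovs-vswitchd                                                # note the PID before rebooting the VM
systemctl status ovs-vswitchd | grep Active                       # note the start time
pidof ovs-vswitchd                                                # after the reboot: an unchanged PID means no restart
grep -c "opened log file" /var/log/openvswitch/ovs-vswitchd.log   # the startup line quoted from your log below; a new one at reboot time means a restart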

Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Monday, July 22, 2019 2:29 PM
To: Xu, Chenjie <chenjie.xu at intel.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Starlingx-discuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi Chenjie,

Actually the syslog logged nothing when I restarted the VM. ovs-vswitchd also didn’t log the reason why OVS restarted, so it is difficult to debug.
I don’t know whether stx 2.0 will reproduce this bug, but on stx 1.0 it can be reproduced consistently.

Thanks
Kunpeng


On Jul 22, 2019, at 11:31, Xu, Chenjie <chenjie.xu at intel.com> wrote:

Hi Kunpeng,
Sorry for not seeing logs.rar. The following logs in openvswitch/ovs-vswitchd.log show that ovs-vswitchd was restarted, but they don’t show why it was restarted:
2019-07-18T12:29:59.948Z|00286|connmgr|INFO|br-phy0<->unix#9: 1 flow_mods in the last 0 s (1 adds)
2019-07-19T02:04:11.973Z|00151|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
2019-07-19T02:04:21.273Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2019-07-19T02:04:21.277Z|00002|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 0
2019-07-19T02:04:21.277Z|00003|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 1
2019-07-19T02:04:21.277Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 32 CPU cores
2019-07-19T02:04:21.277Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2019-07-19T02:04:21.277Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2019-07-19T02:04:21.279Z|00007|dpdk|INFO|Using DPDK 17.11.0
2019-07-19T02:04:21.279Z|00008|dpdk|INFO|DPDK Enabled - initializing...

The syslog doesn’t contain the logs for 2019-07-19. Could you please collect that part of the log?
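Something like the following should pull out just that window (a sketch only; adjust the timestamp pattern to whatever format your syslog uses):

grep "Jul 19 02:0" /var/log/syslog
journalctl --since "2019-07-19 02:00:00" --until "2019-07-19 02:10:00" | grep -iE "ovs|openvswitch|dpdk"   # alternative if the node logs through journald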

I will try to reproduce this bug on StarlingX 2.0 and will let you know the result.

Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Monday, July 22, 2019 10:27 AM
To: Xu, Chenjie <chenjie.xu at intel.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Starlingx-discuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi Chenjie,
I attached those logs to my last email; I don’t know why you didn’t get them. I will attach the archive (logs.tar) again. If you still cannot find it, please let me know.

Thanks
Kunpeng



On Jul 19, 2019, at 16:17, Xu, Chenjie <chenjie.xu at intel.com> wrote:

Hi Kunpeng,
From the logs below, we can see that:
1. The OVS agent detects that OVS is dead.
2. After OVS has been restarted, the OVS agent resets the bridges and recovers the ports.
2019-07-19 02:04:23.188 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is dead. OVSNeutronAgent will keep running and checking OVS status periodically.
2019-07-19 02:04:23.401 186476 INFO eventlet.wsgi.server [req-fb780aa9-92a7-4cee-8195-fa34b5d7b0e0 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200  len: 306 time: 0.0467389
2019-07-19 02:04:24.190 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is restarted. OVSNeutronAgent will reset bridges and recover ports.

Could you please attach the logs below?
/var/log/openvswitch/ovs-vswitchd.log
/var/log/openvswitch/ovsdb-server.log
/var/log/syslog
neutron log (the log file is specified in /etc/neutron/neutron.conf)
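For the neutron log, if the path is not obvious, something like this usually locates it (a sketch; log_file/log_dir are the standard oslo.log options, but your build may place things differently):

grep -E "^(log_file|log_dir)" /etc/neutron/neutron.conf
ls /var/log/neutron/   # common default location when neither option is set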

Best Regards,
Xu, Chenjie
From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Friday, July 19, 2019 10:21 AM
To: Xu, Chenjie <chenjie.xu at intel.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Starlingx-discuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi Chenjie,

Here are the logs. At 02:04 UTC on 2019-07-19 I restarted the VM. In openstack.log I found some error messages; I don’t know whether they are relevant.

2019-07-19 02:04:16.141 186477 INFO eventlet.wsgi.server [req-6600ca74-1f93-4e54-88c1-35f964f1e055 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200  len: 306 time: 0.0178909
2019-07-19 02:04:21.077 243655 ERROR neutron.agent.linux.async_process [-] Error received from [ovsdb-client monitor tcp:127.0.0.1:6639 Interface name,ofport,external_ids --format=json]: ovsdb-client: tcp:127.0.0.1:6639: receive failed (End of file)
2019-07-19 02:04:21.077 243655 ERROR neutron.agent.linux.async_process [-] Process [ovsdb-client monitor tcp:127.0.0.1:6639 Interface name,ofport,external_ids --format=json] dies due to the error: ovsdb-client: tcp:127.0.0.1:6639: receive failed (End of file)
2019-07-19 02:04:22.077 243655 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: send error: Connection refused
2019-07-19 02:04:22.079 243655 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: connection dropped (Connection refused)
2019-07-19 02:04:22.089 243737 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: send error: Connection refused
2019-07-19 02:04:22.090 243737 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: connection dropped (Connection refused)
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out: Timeout: 10 seconds
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Failed to communicate with the switch: RuntimeError: ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int Traceback (most recent call last):
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_int.py", line 52, in check_canary_table
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int     flows = self.dump_flows(constants.CANARY_TABLE)
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 147, in dump_flows
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int     reply_multi=True)
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 95, in _send_msg
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int     raise RuntimeError(m)
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int RuntimeError: ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int
2019-07-19 02:04:23.188 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is dead. OVSNeutronAgent will keep running and checking OVS status periodically.
2019-07-19 02:04:23.401 186476 INFO eventlet.wsgi.server [req-fb780aa9-92a7-4cee-8195-fa34b5d7b0e0 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200  len: 306 time: 0.0467389
2019-07-19 02:04:24.190 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is restarted. OVSNeutronAgent will reset bridges and recover ports.
2019-07-19 02:04:24.242 243655 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Mapping physical network providernet-a to bridge br-phy0
2019-07-19 02:04:24.295 243655 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Bridge br-phy0 has datapath-ID 0000f8f21e640120

Kunpeng

On Jul 19, 2019, at 09:29, Xu, Chenjie <chenjie.xu at intel.com> wrote:

Hi Kunpeng,
You can check the bridges and the OpenFlow flows with the following commands:
ovs-vsctl show
ovs-ofctl dump-flows br-int
ovs-ofctl dump-flows br-phy0

The virtual networks used by the VMs depend on those OpenFlow flows, and restarting ovs-vswitchd does not reinstall them. That’s why you lose the connections to the VMs when ovs-vswitchd restarts.
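You can see this directly by counting the flows around a restart (a sketch only, and obviously disruptive, so please only try it on a test node):

ovs-ofctl dump-flows br-int | wc -l   # flow count while everything is healthy
systemctl restart ovs-vswitchd
ovs-ofctl dump-flows br-int | wc -l   # the flows installed by the neutron agent are gone until it resets the bridges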

I think we need to figure out why ovs-vswitchd is restarted when you restart the VM. Could you please check the logs below to see why ovs-vswitchd restarted?
/var/log/openvswitch/ovs-vswitchd.log
/var/log/syslog
neutron log (the log file is specified in /etc/neutron/neutron.conf)

Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Thursday, July 18, 2019 7:09 PM
To: Xu, Chenjie <chenjie.xu at intel.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Starlingx-discuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

I also find that when ovs-vswitchd is restarted, I lose the connections to the VMs.

Before restarting OVS:

controller-0:/home/wrsroot# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.3/24 brd 127.168.204.255 scope host lo
       valid_lft forever preferred_lft forever
    inet 169.254.202.2/24 scope global lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.2/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.5/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.6/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.152/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: enp59s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff
4: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff
    inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link
       valid_lft forever preferred_lft forever
5: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff
6: enp94s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff
7: enp94s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff
8: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:8060/64 scope link
       valid_lft forever preferred_lft forever
9: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:8061/64 scope link
       valid_lft forever preferred_lft forever
10: ovs-netdev: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff
13: br-int: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff
16: br-phy0: <BROADCAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:120/64 scope link
       valid_lft forever preferred_lft forever
17: lldp16ba3755-27: <BROADCAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7c3a:87ff:fea3:9803/64 scope link
       valid_lft forever preferred_lft forever

After:

controller-0:/home/wrsroot# systemctl restart ovs-vswitchd
controller-0:/home/wrsroot# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.3/24 brd 127.168.204.255 scope host lo
       valid_lft forever preferred_lft forever
    inet 169.254.202.2/24 scope global lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.2/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.5/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.6/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.152/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: enp59s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff
4: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff
    inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link
       valid_lft forever preferred_lft forever
5: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff
6: enp94s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff
7: enp94s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff
8: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:8060/64 scope link
       valid_lft forever preferred_lft forever
9: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:8061/64 scope link
       valid_lft forever preferred_lft forever
10: ovs-netdev: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff
13: br-int: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff
16: br-phy0: <BROADCAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:120/64 scope link
       valid_lft forever preferred_lft forever
17: lldp16ba3755-27: <BROADCAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7c3a:87ff:fea3:9803/64 scope link
       valid_lft forever preferred_lft forever
20: tapfb74713e-cc: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0e:2d:36:ee:2d:85 brd ff:ff:ff:ff:ff:ff
21: tap1a965902-0b: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 5e:f6:81:31:54:21 brd ff:ff:ff:ff:ff:ff

One of the VMs:

[inline screenshot omitted: image001.png]


On Jul 18, 2019, at 18:09, 张鲲鹏 <zhang.kunpeng at 99cloud.net> wrote:

Hi Xu, Chenjie

I have tried to create a VM with 2 pci-passthrough network ports that are not bound to DPDK, and there was the same problem when I rebooted it.
It was also the same when I rebooted a VM with 2 SR-IOV VFs.
Do you have any ideas for debugging this problem?

Thanks
Kunpeng


On Jul 17, 2019, at 14:59, Xu, Chenjie <chenjie.xu at intel.com> wrote:

Hi Kunpeng,
Maybe you can use SR-IOV and pass through a VF, which has performance similar to the physical NIC, to the VM. Then you can use DPDK inside the VM with the VF.
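For reference, attaching a VF instead of the whole PF looks roughly like this from the OpenStack CLI (a sketch with placeholder names; it also needs the usual SR-IOV setup such as the PCI whitelist on the compute node):

openstack port create --network <provider-or-tenant-net> --vnic-type direct vf-port-0
openstack server create --flavor <flavor> --image <image> --port vf-port-0 vm-with-vf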

Sorry, I don’t have an easy way to disable DPDK in stx 1.0. The following command is for stx 2.0, which is still in progress:
system modify --vswitch_type none
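After the unlock, one way to double-check whether the running OVS still has DPDK enabled (a sketch; the exact other_config keys can differ between builds):

ovs-vsctl get Open_vSwitch . other_config                   # dpdk-init="true" means DPDK is still on
grep "DPDK Enabled" /var/log/openvswitch/ovs-vswitchd.log   # this line is printed at startup when DPDK initializes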

Best Regards,
Xu, Chenjie
From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Tuesday, July 16, 2019 5:40 PM
To: Xu, Chenjie <chenjie.xu at intel.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Starlingx-discuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi Chenjie,

Well, I will try the network topology as you suggested, but passing through a NIC with DPDK is our customer’s requirement.
Do you have an easy way to disable DPDK in Open vSwitch on stx 1.0?
I tried to execute “system modify --vswitch_type none” before “system host-unlock controller-0”, but it didn’t work well.

Thanks
Kunpeng

On Jul 16, 2019, at 16:49, Xu, Chenjie <chenjie.xu at intel.com> wrote:

Hi Kunpeng,
When you reboot the VM with two physical pci-passthrough NICs, ovs-vswitchd is restarted and the interfaces and bridges go down. The virtual networks used by the VMs are based on these interfaces and bridges, so the other VMs lose their connections.

Typically you should not pass through a PCI device that is bound to DPDK to the VM. Are the 2 network ports that are passed through to the VM bound to DPDK? (See the note after the list below for one way to check.) If so, could you please try OVS-DPDK with the following network topology:
2 network ports without DPDK   >   VM
2 network ports with DPDK      >   Data Network
1 network port without DPDK    >   OAM
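If you are not sure which ports are bound to DPDK, the dpdk-devbind tool shows the current bindings (a sketch; the script location differs between builds, so you may need to search for it first):

find / -name dpdk-devbind.py 2>/dev/null
/usr/share/dpdk/usertools/dpdk-devbind.py --status   # ports under "Network devices using DPDK-compatible driver" are the DPDK-bound ones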

Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Tuesday, July 16, 2019 3:54 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [Starlingx-discuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi guys,

Recently I hit a strange problem in StarlingX. When I reboot one VM that has two physical pci-passthrough NICs, all of the VMs become unreachable. I lose my connections to all the VMs, and the VMs also lose connectivity to each other.

Below is the StarlingX environment.

1. stx1.0 version, bootimage[1]
2. Simplex deployment
3. 5 network ports. Only one doesn’t support DPDK, and it is used for the OAM network. Of the rest, two are used for the data network and the other two are passed through to a VM.
4. The VM was also attached to two virtual networks. I tested the case of attaching only one virtual network, and there was no problem.

When I rebooted the VM, several things happened: the interfaces and bridges went down, all the virtual DHCP services went down, and ovs-vswitchd was restarted. But after I brought the interfaces and DHCP services back up and rebooted the other VMs, I got my connections to them back.
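Roughly the kind of commands I mean by bringing the interfaces back up, for reference (just an example; br-phy0 is the provider bridge on my node, and the other interface names are placeholders):

ip link set br-phy0 up
ip link set <tap-or-lldp-interface> up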

It’s OK when I reboot a VM without a physical NIC. We think it may be caused by OVS-DPDK: when we stopped using OVS-DPDK and started plain OVS manually, the problem was gone.

I cannot understand this problem. Could anybody give me some comments on it? Thanks a lot.

[1] http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/bootimage.iso

Kunpeng
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

