[Starlingx-discuss] [Starlingx-discuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

张鲲鹏 zhang.kunpeng at 99cloud.net
Thu Jul 18 11:09:05 UTC 2019


I also find that when ovs-vswitchd is restarted, I lose the connections to the VMs.

Before restarting ovs-vswitchd:

controller-0:/home/wrsroot# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.3/24 brd 127.168.204.255 scope host lo
       valid_lft forever preferred_lft forever
    inet 169.254.202.2/24 scope global lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.2/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.5/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.6/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.152/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: enp59s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff
4: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff
    inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link
       valid_lft forever preferred_lft forever
5: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff
6: enp94s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff
7: enp94s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff
8: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:8060/64 scope link
       valid_lft forever preferred_lft forever
9: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:8061/64 scope link
       valid_lft forever preferred_lft forever
10: ovs-netdev: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff
13: br-int: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff
16: br-phy0: <BROADCAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:120/64 scope link
       valid_lft forever preferred_lft forever
17: lldp16ba3755-27: <BROADCAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7c3a:87ff:fea3:9803/64 scope link
       valid_lft forever preferred_lft forever

After:

controller-0:/home/wrsroot# systemctl restart ovs-vswitchd
controller-0:/home/wrsroot# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.3/24 brd 127.168.204.255 scope host lo
       valid_lft forever preferred_lft forever
    inet 169.254.202.2/24 scope global lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.2/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.5/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.6/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.152/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: enp59s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff
4: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff
    inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link
       valid_lft forever preferred_lft forever
5: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff
6: enp94s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff
7: enp94s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff
8: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:8060/64 scope link
       valid_lft forever preferred_lft forever
9: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:8061/64 scope link
       valid_lft forever preferred_lft forever
10: ovs-netdev: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff
13: br-int: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff
16: br-phy0: <BROADCAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:120/64 scope link
       valid_lft forever preferred_lft forever
17: lldp16ba3755-27: <BROADCAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7c3a:87ff:fea3:9803/64 scope link
       valid_lft forever preferred_lft forever
20: tapfb74713e-cc: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0e:2d:36:ee:2d:85 brd ff:ff:ff:ff:ff:ff
21: tap1a965902-0b: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 5e:f6:81:31:54:21 brd ff:ff:ff:ff:ff:ff
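
As a cross-check (a sketch, assuming the standard OVS CLI tools are available on the host), comparing what ovs-vswitchd reports before and after the restart shows which bridges and ports came back and in what state:

controller-0:/home/wrsroot# ovs-vsctl show            # bridges and ports, including any "error:" annotations on ports
controller-0:/home/wrsroot# ovs-ofctl show br-int     # OpenFlow view of br-int: port numbers and LINK_DOWN/LIVE state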

One of the VMs (console screenshot attached as PastedGraphic-8.png):


> On Jul 18, 2019, at 18:09, 张鲲鹏 <zhang.kunpeng at 99cloud.net> wrote:
> 
> Hi Xu,Chenjie
> 
> I have tried to create a VM with 2 pci-passthrough network ports without DPDK, and the same problem occurred when I rebooted it.
> It was also the same when I rebooted the VM with 2 SR-IOV VFs.
> Do you have any ideas on how to debug this problem?
> 
> Thanks 
> Kunpeng  
> 
>> On Jul 17, 2019, at 14:59, Xu, Chenjie <chenjie.xu at intel.com> wrote:
>> 
>> Hi Kunpeng,
>> Maybe you can use SR-IOV and pass through a VF, which has performance similar to the physical NIC, to the VM. Then you can use DPDK inside the VM with the VF.
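>> 
>> For reference, a minimal sketch of attaching a VF to a VM with the OpenStack CLI (the names "physnet0", "sriov-net", "cirros", "m1.small" and "vm1" are placeholders, not taken from this thread):
>> 
>> openstack network create --provider-network-type vlan --provider-physical-network physnet0 sriov-net
>> openstack subnet create --network sriov-net --subnet-range 192.168.10.0/24 sriov-subnet
>> openstack port create --network sriov-net --vnic-type direct sriov-port   # vnic-type "direct" requests an SR-IOV VF
>> openstack server create --image cirros --flavor m1.small --nic port-id=sriov-port vm1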
>>  
>> Sorry, I don't have an easy way to disable DPDK in stx1.0. The following command is used in stx2.0, which is still in progress:
>> system modify --vswitch_type none
>>  
>> Best Regards,
>> Xu, Chenjie
>> From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
>> Sent: Tuesday, July 16, 2019 5:40 PM
>> To: Xu, Chenjie <chenjie.xu at intel.com>
>> Cc: starlingx-discuss at lists.starlingx.io
>> Subject: Re: [Starlingx-discuss] [Starlingx-discuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
>>  
>> Hi Chenjie,
>>  
>> Well, I will try the network topology you suggested, but passing a NIC with DPDK through to the VM is our customer's requirement.
>> Also, do you have an easy way to disable DPDK in Open vSwitch on stx1.0?
>> I had tried executing “system modify --vswitch_type none” before “system host-unlock controller-0”, but it didn't work well.
>>  
>> Thanks 
>> Kunpeng
>>  
>> On Jul 16, 2019, at 16:49, Xu, Chenjie <chenjie.xu at intel.com> wrote:
>>  
>> Hi Kunpeng,
>> When you reboot the VM with two physical pci-passthrough NICs, ovs-vswitchd is restarted and the interfaces and bridges go down. The virtual networks used by the VMs are based on these interfaces and bridges, so the other VMs lose their connections.
>>  
>> Typically you should not pass through a PCI device that is bound to DPDK to the VM. Are the 2 network ports that are passed through to the VM bound to DPDK? (A quick way to check is sketched after the topology below.) If so, could you please try OVS-DPDK with the following network topology:
>> 2 network ports without DPDK     >      VM
>> 2 network ports with DPDK        >      Data Network
>> 1 network port without DPDK      >      OAM
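>> 
>> As mentioned above, a quick sketch for checking which ports are bound to DPDK (run on the host; the location of dpdk-devbind.py varies by install):
>> ovs-vsctl --columns=name,type list Interface      # type=dpdk means OVS drives the port through DPDK
>> dpdk-devbind.py --status                          # shows which PCI NICs are bound to a DPDK-compatible driver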
>>  
>> Best Regards,
>> Xu, Chenjie
>>  
>> From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
>> Sent: Tuesday, July 16, 2019 3:54 PM
>> To: starlingx-discuss at lists.starlingx.io
>> Subject: [Starlingx-discuss] [Starlingx-discuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
>>  
>> Hi guys,
>>  
>> Recently I hit a strange problem in StarlingX. When I reboot one VM that has two physical pci-passthrough NICs, all of the VMs become unreachable: I lose my connections to all the VMs, and the VMs also lose connectivity to each other.
>>  
>> Below is the StarlingX environment.
>>  
>> 1. stx1.0 version, bootimage[1]
>> 2. Simplex deployment
>> 3. 5 network ports. Only one doesn't support DPDK, and it is used for the OAM network. Of the rest, two are used for the data network, and the other two are passed through to a VM.
>> 4. The VM is also attached to two virtual networks. I have tested the case of attaching only one virtual network; there was no problem.
>>  
>> When I rebooted the VM, several things happened: the interfaces and bridges went down, all the virtual DHCP services went down, and ovs-vswitchd was restarted. But after I brought the interfaces and DHCP services back up and rebooted the other VMs, I got my connections to them back.
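>> 
>> For the record, the manual recovery was roughly the following (a sketch using the interface names from the ip output earlier in this archive; the DHCP agent service name is an assumption and may differ in stx1.0):
>> ip link set br-phy0 up
>> ip link set lldp16ba3755-27 up
>> sudo systemctl restart neutron-dhcp-agent   # assumed service name; restart the DHCP agent however your deployment manages it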
>>  
>> It's OK to reboot a VM that has no physical NIC. We suspected the problem might be caused by OVS-DPDK, so we stopped using OVS-DPDK and started plain OVS manually, and the problem was gone.
>>  
>> I cannot understand this problem; could anybody give me some comments on it? Thanks a lot.
>>  
>> [1] http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/bootimage.iso
>>  
>> Kunpeng
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
