[Starlingx-discuss] Deployment Option (PCI - Passthrough Nic on VM)

Himanshu Goyal himanshugoyal500 at gmail.com
Mon Jan 28 15:09:30 UTC 2019


Thanks a lot Eric,

I will check on the management switch with our IT team and get back to
you.

I need help with one more thing: I need a PCI-passthrough NIC on a VM.
Could you please provide the steps to create a VM with a PCI-passthrough
NIC?

Currently I'm following the steps below:

Step 1: system host-lock compute-0
Step 2: neutron providernet-create pci_net --type flat
Step 3: system host-if-modify -m 1500 -n pci_0 -p "pci_net" -nt pci-passthrough compute-0 ens513f1

Step 3 gives a "wrong command" error. Could you please provide the correct
command for this, or let me know if there is another way to configure it?
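
For reference, my best guess at the sequence is sketched below. This is only
an assumption on my part, pieced together from the docs; in particular the
host-if-modify flag names seem to differ between releases (some use
--ifclass/-c instead of -nt), so please correct whatever is wrong:

  # confirm the port and interface names that inventory knows about
  system host-port-list compute-0
  system host-if-list -a compute-0

  # show the options this release's host-if-modify actually accepts
  system help host-if-modify

  # assumed form of the interface change (flag names unverified)
  system host-if-modify -m 1500 -n pci_0 -p pci_net -c pci-passthrough compute-0 ens513f1

  system host-unlock compute-0

  # on the OpenStack side (assumed, standard Neutron/Nova usage): create a
  # tenant network on pci_net, a port with vnic_type direct-physical, and
  # boot the VM with that port
  neutron port-create <pci_net_tenant_network> --name pci_port --vnic-type direct-physical
  nova boot --flavor <flavor> --image <image> --nic port-id=<pci_port_uuid> pci-test-vm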

Regards.
Himanshu Goyal
On Mon, Jan 28, 2019 at 8:03 PM MacDonald, Eric <
Eric.MacDonald at windriver.com> wrote:

> From the logs it looks like compute enabled fine this time but then failed
> shortly after due to intermittent heartbeat loss.
>
>
>
> The stages are …
>
>
>
> *First mtcAlive with no indication of config failure ; a good thing*
>
> 2019-01-28T12:22:15.430 [44890.06760] controller-0 mtcAgent |-|
> mtcNodeHdlrs.cpp  (1056) enable_handler          : Info : compute-0 is
> MTCALIVE (uptime:235 secs)
>
> 2019-01-28T12:22:15.430 fmAPI.cpp(471): Enqueue raise alarm request: UUID
> (c698824d-0d3e-42f0-a7a6-f705403ac8fe) alarm id (200.022) instant id
> (host=compute-0.status=online)
>
>
>
> *Out-Of-Service In-Test passed ; a good thing*
>
> 2019-01-28T12:22:15.430 [44890.06761] controller-0 mtcAgent inv
> mtcInvApi.cpp     (1085) mtcInvApi_update_state  : Info : compute-0 intest
> (seq:19)
>
> 2019-01-28T12:22:27.557 [44890.06764] controller-0 mtcAgent |-|
> mtcNodeHdlrs.cpp  (1172) enable_handler          : Info : compute-0 got
> GOENABLED
>
>
>
> *Starting of local host services passed ; a good thing*
>
> 2019-01-28T12:22:27.562 [44890.06766] controller-0 mtcAgent |-|
> mtcNodeHdlrs.cpp  (1212) enable_handler          : Info : compute-0
> Starting Host Services
>
> 2019-01-28T12:22:27.628 [44890.06769] controller-0 mtcAgent hdl
> mtcNodeHdlrs.cpp  (2586) host_services_handler   : Info : compute-0 start
> compute host services completed
>
>
>
> *Heartbeat client was immediately ready as expected ; a good thing*
>
> 2019-01-28T12:22:28.253 [44890.06770] controller-0 mtcAgent |-|
> nodeClass.cpp     (4717) declare_service_ready   : Info : compute-0 got
> hbsClient ready event
>
>
>
> *Heartbeat start and 11 second trial soak passed ; a good thing*
>
> 2019-01-28T12:22:28.253 [44890.06771] controller-0 mtcAgent msg
> mtcCtrlMsg.cpp    ( 804) send_hbs_command        : Info : compute-0 sending
> 'start' to heartbeat service
>
> 2019-01-28T12:22:28.253 [44890.06772] controller-0 mtcAgent |-|
> mtcNodeHdlrs.cpp  (1327) enable_handler          : Info : compute-0
> Starting 11 sec Heartbeat Soak (with ready event)
>
> 2019-01-28T12:22:39.253 [44890.06773] controller-0 mtcAgent |-|
> mtcNodeHdlrs.cpp  (1341) enable_handler          : Info : compute-0
> heartbeating
>
>
>
> *Compute-0 Enabled ok ; very good thing*
>
> 2019-01-28T12:22:39.502 [44890.06778] controller-0 mtcAgent |-|
> mtcNodeHdlrs.cpp  (1488) enable_handler          : Info : compute-0 is
> ENABLE
>
>
>
> *Intermittent heartbeat loss starts.*
>
>
>
> *Then after ~4 minutes compute-0 started experiencing Management network
> heartbeat loss*
>
> 2019-01-28T12:26:58.819 [44890.06781] controller-0 mtcAgent ---
> nodeClass.cpp     (4613) manage_heartbeat_degrade: Warn : compute-0 Mgmnt
> *** Heartbeat Miss ***
>
> 2019-01-28T12:26:59.257 [44890.06792] controller-0 mtcAgent hdl
> mtcNodeHdlrs.cpp  (1637) recovery_handler        : Info : compute-0
> requesting mtcAlive with 5 sec timeout
>
> 2019-01-28T12:26:59.257 [44890.06794] controller-0 mtcAgent hdl
> mtcNodeHdlrs.cpp  (1655) recovery_handler        : Info : compute-0 got
> requested mtcAlive
>
>
>
> *Tries the 11 second heartbeat soak again after getting the requested
> mtcAlive message*
>
> 2019-01-28T12:26:59.257 [44890.06795] controller-0 mtcAgent |-|
> nodeClass.cpp     (2480) stop_offline_handler    : Info : compute-0
> stopping offline handler (unlocked-enabled-failed) (stage:3)
>
> 2019-01-28T12:26:59.257 [44890.06796] controller-0 mtcAgent hdl
> mtcNodeHdlrs.cpp  (1722) recovery_handler        : Warn : compute-0
> Connectivity Recovered ; host did not reset
>
> 2019-01-28T12:26:59.257 [44890.06797] controller-0 mtcAgent hdl
> mtcNodeHdlrs.cpp  (1724) recovery_handler        : Warn : compute-0 ...
> continuing with graceful recovery
>
> 2019-01-28T12:26:59.257 [44890.06798] controller-0 mtcAgent hdl
> mtcNodeHdlrs.cpp  (1725) recovery_handler        : Warn : compute-0 ...
> with no affect to host services
>
> 2019-01-28T12:26:59.257 [44890.06799] controller-0 mtcAgent msg
> mtcCtrlMsg.cpp    ( 804) send_hbs_command        : Info : compute-0 sending
> 'start' to heartbeat service
>
> 2019-01-28T12:26:59.257 [44890.06800] controller-0 mtcAgent |-|
> mtcNodeHdlrs.cpp  (2389) recovery_handler        : Info : compute-0
> Starting 11 sec Heartbeat Soak (with ready event)
>
>
>
> *The heartbeat loss is intermittent*
>
> *After the first loss, the recovery tried the 11 second soak again, which failed*
>
> 2019-01-28T12:26:59.982 [44890.06802] controller-0 mtcAgent ---
> nodeClass.cpp     (4613) manage_heartbeat_degrade: Warn : compute-0 Mgmnt
> *** Heartbeat Miss ***
>
> 2019-01-28T12:27:00.403 [44890.06803] controller-0 mtcAgent ---
> nodeClass.cpp     (4495) manage_heartbeat_failure:Error : compute-0 Mgmnt
> *** Heartbeat Loss ***
>
> 2019-01-28T12:27:00.403 [44890.06804] controller-0 mtcAgent ---
> nodeClass.cpp     (4506) manage_heartbeat_failure:Error : compute-0 Mgmnt
> network heartbeat failure
>
> 2019-01-28T12:27:00.403 [44890.06805] controller-0 mtcAgent ---
> nodeClass.cpp     (4515) manage_heartbeat_failure: Warn : compute-0
> restarting graceful recovery
>
> 2019-01-28T12:27:00.403 [44890.06806] controller-0 mtcAgent |-|
> mtcNodeHdlrs.cpp  (1557) recovery_handler        : Info : compute-0
> Graceful Recovery (uptime was 519)
>
>
>
> *Heartbeat loss seems intermittent ; the recovery algorithm reached its
> retry threshold of 3 and force-failed the node*
>
> 2019-01-28T12:27:01.556 [44890.06828] controller-0 mtcAgent hdl
> mtcNodeHdlrs.cpp  (1592) recovery_handler        :Error : compute-0
> Graceful Recovery Failed (retries=3
>
> 2019-01-28T12:27:01.556 [44890.06829] controller-0 mtcAgent |-|
> nodeClass.cpp     (7193) force_full_enable       : Info : compute-0 Forcing
> Full Enable Sequence
>
> 2019-01-28T12:27:01.556 [44890.06830] controller-0 mtcAgent ---
> nodeClass.cpp     (1644) alarm_enabled_failure   :Error : compute-0
> critical enable failure
>
>
>
> The forced failure caused a reboot and another recovery retry, which is
> where the logs end.
>
> This sequence will likely continue until the heartbeat (potentially
> multicast) issue is fixed.
>
>
>
> Based on your observation that heartbeat passed with the switch removed, I'd
> look at how the switch or your management network is handling multicast
> messaging, and at whether the QOS settings on the network might be dropping
> packets.
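>
> One simple way to sanity check that (the interface name below is only a
> placeholder ; substitute your actual management interface on the controller
> or compute):
>
> # watch for multicast traffic on the management interface while the
> # heartbeat service is running ; gaps here would point at the switch
> sudo tcpdump -i <mgmt-interface> -n multicast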
>
>
>
> Eric.
>
>
>
> *From:* Himanshu Goyal [mailto:himanshugoyal500 at gmail.com]
> *Sent:* Monday, January 28, 2019 3:56 AM
> *To:* MacDonald, Eric
> *Cc:* Peters, Matt; starlingx-discuss at lists.starlingx.io
> *Subject:* Re: [Starlingx-discuss] Deployment Option (error: compute boot
> in loop)
> *Importance:* High
>
>
>
> Hi Eric,
>
>
>
> The compute reboot issue is resolved: after unlock, I connected the
> controller and the compute directly, without the switch in between.
>
> I don't know why it is not working with the switch after unlock. Please
> check whether this is only a heartbeat issue on the mgmt network or
> something else; currently I'm using only 1 compute, so it is fine for now.
>
>
>
> Regards,
>
> Himanshu Goyal
>
>
>
> On Mon, Jan 28, 2019 at 12:33 PM Himanshu Goyal <
> himanshugoyal500 at gmail.com> wrote:
>
> Hi Eric,
>
>
>
> Please find attached the requested mtcAgent.log, containing one cycle of the
> compute from PXE boot to unlock.
>
>
>
> Regards,
>
> Himanshu Goyal
>
>
>
> On Fri, Jan 25, 2019 at 10:12 PM Himanshu Goyal <
> himanshugoyal500 at gmail.com> wrote:
>
> Sure Eric, I have left for the day; I will get back to you first thing on
> Monday morning.
>
>
>
> Many Thanks,
>
> Himanshu Goyal
>
>
>
> On Fri, Jan 25, 2019 at 8:17 PM MacDonald, Eric <
> Eric.MacDonald at windriver.com> wrote:
>
> Hi Himanshu,
>
>
>
> The mtcAgent.log shows that compute-0 has experienced the following errors
> several times over the span of almost 3 days.
>
>
>
> *Configuration failures*
>
> 2019-01-24T12:30:34.764 [44890.04799] controller-0 mtcAgent hdl
> mtcNodeHdlrs.cpp  (1046) enable_handler          :Error : compute-0
> configuration incomplete or failed (oob:2:1)
>
> 2019-01-24T12:30:34.764 [44890.04800] controller-0 mtcAgent ---
> nodeClass.cpp     (1606) alarm_config_failure    :Error : compute-0
> critical config failure
>
> 2019-01-24T12:30:34.764 [44890.04801] controller-0 mtcAgent alm
> mtcAlarm.cpp      ( 417) mtcAlarm_critical       :Error : compute-0 setting
> critical 'Configuration' failure alarm (200.011
>
>
>
> *Out-Of-Service GoEnable Test Failures*
>
> 2019-01-24T12:00:56.186 [44890.04700] controller-0 mtcAgent hdl
> mtcNodeHdlrs.cpp  (1163) enable_handler          :Error : compute-0 got
> GOENABLED
>
>
>
> *Experiencing Management network heartbeat failures.*
>
> 2019-01-23T15:03:44.535 [44890.01949] controller-0 mtcAgent ---
> nodeClass.cpp     (4506) manage_heartbeat_failure:Error : compute-0 Mgmnt
> network heartbeat failure
>
> 2019-01-23T15:03:45.688 [44890.01967] controller-0 mtcAgent ---
> nodeClass.cpp     (4495) manage_heartbeat_failure:Error : compute-0 Mgmnt
> *** Heartbeat Loss **
>
>
>
> Since compute-0 is locked the in-service alarms against it will be
> suppressed.
>
>
>
> Since there are many errors over time, I would recommend tailing the
> mtcAgent.log file to a new, small file you can send to me. This file would
> contain all the logs that are produced for one cycle of an unlock of
> compute-0.
>
>
>
> This way we can focus on one case at a time.
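>
> For example, something along these lines (the output file name is
> arbitrary):
>
> # on the active controller, start the capture just before the unlock,
> # then stop it (Ctrl-C / kill) once compute-0 has gone through its cycle
> tail -n 0 -f /var/log/mtcAgent.log > /tmp/compute-0-unlock.log &
> system host-unlock compute-0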
>
>
>
> Eric.
>
>
>
>
>
> *From:* Himanshu Goyal [mailto:himanshugoyal500 at gmail.com]
> *Sent:* Friday, January 25, 2019 9:31 AM
> *To:* MacDonald, Eric
> *Cc:* Peters, Matt; starlingx-discuss at lists.starlingx.io
> *Subject:* Re: [Starlingx-discuss] Deployment Option (error: compute boot
> in loop)
> *Importance:* High
>
>
>
> Hi Eric,
>
>
>
> Below are the requested command outputs:
>
>
>
> *fm alarm-list *
>
> ######################################################
>
> [root at controller-0 wrsroot(keystone_admin)]# fm alarm-list
>
>
> | Alarm ID | Reason Text | Entity ID | Severity | Time Stamp |
>
> | 200.001 | compute-0 was administratively locked to take it out-of-service. | host=compute-0 | warning | 2019-01-25T15:00:12.159130 |
>
> | 300.004 | No enabled compute host with connectivity to provider network. | service=networking.providernet=4dbbc02e-c93c-494e-bd8d-1706eb09a3a6 | major | 2019-01-25T14:58:56.711665 |
>
> | 400.002 | Service group cloud-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=cloud-services | major | 2019-01-22T19:12:07.245142 |
>
> | 400.002 | Service group controller-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=controller-services | major | 2019-01-22T19:11:40.174144 |
>
> | 400.002 | Service group vim-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=vim-services | major | 2019-01-22T19:11:39.052152 |
>
> | 400.002 | Service group oam-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=oam-services | major | 2019-01-22T19:11:38.728175 |
>
> | 400.002 | Service group patching-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=patching-services | major | 2019-01-22T19:11:37.756169 |
>
> | 400.002 | Service group directory-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=directory-services | major | 2019-01-22T19:11:37.513178 |
>
> | 400.002 | Service group web-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=web-services | major | 2019-01-22T19:11:37.189196 |
>
> | 400.005 | Communication failure detected with peer over port bond0 on host controller-0 | host=controller-0.network=mgmt | major | 2019-01-22T19:11:36.690138 |
>
> | 400.005 | Communication failure detected with peer over port bond1 on host controller-0 | host=controller-0.network=oam | major | 2019-01-22T19:11:36.427073 |
>
> [root at controller-0 wrsroot(keystone_admin)]#
>
>
> #########################################################################################################
>
>
>
> Please find the output of fgrep Error /var/log/mtcAgent.log in the attached file.
>
>
>
> Regards,
>
> Himanshu Goyal
>
>
>
> On Fri, Jan 25, 2019 at 7:43 PM MacDonald, Eric <
> Eric.MacDonald at windriver.com> wrote:
>
> fgrep Error /var/log/mtcAgent.log
>
> fm alarm-list
>
>
>
> *From:* Peters, Matt [mailto:Matt.Peters at windriver.com]
> *Sent:* Friday, January 25, 2019 8:05 AM
> *To:* Himanshu Goyal
> *Cc:* starlingx-discuss at lists.starlingx.io
> *Subject:* Re: [Starlingx-discuss] Deployment Option (error: compute boot
> in loop)
>
>
>
> The DPDK and OVS status look fine now.
>
> I would check the /var/log/mtcAgent.log on the active controller host to
> determine what is being reported for the reboot.
>
>
>
> *From: *Himanshu Goyal <himanshugoyal500 at gmail.com>
> *Date: *Friday, January 25, 2019 at 3:56 AM
> *To: *"Peters, Matt" <Matt.Peters at windriver.com>
> *Cc: *"Hu, Yong" <yong.hu at intel.com>, "
> starlingx-discuss at lists.starlingx.io" <
> starlingx-discuss at lists.starlingx.io>
> *Subject: *Re: [Starlingx-discuss] Deployment Option (error: compute boot
> in loop)
>
>
>
> Hi Matt,
>
>
>
> Please also find below the output of dpdk-devbind.py --status:
>
>
>
> compute-0:/var/log/puppet/latest# python
> /usr/share/openvswitch/scripts/dpdk-devbind.py --status
>
>
>
>
> ##########################################################################################################
>
> python /usr/share/openvswitch/scripts/dpdk-devbind.py --status
>
> Network devices using DPDK-compatible driver
>
> ============================================
>
> 0000:04:00.0 'I350 Gigabit Network Connection 1521' drv=vfio-pci unused=
>
>
>
> Network devices using kernel driver
>
> ===================================
>
> 0000:02:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
> if=ens513f1 drv=ixgbe unused=vfio-pci
>
> 0000:04:00.3 'I350 Gigabit Network Connection 1521' if=enp4s0f3 drv=igb
> unused=vfio-pci *Active*
>
>
>
> Other Network devices
>
> =====================
>
> 0000:02:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
> unused=vfio-pci
>
>
>
> Crypto devices using DPDK-compatible driver
>
> ===========================================
>
> <none>
>
>
>
> Crypto devices using kernel driver
>
> ==================================
>
> <none>
>
>
>
> Other Crypto devices
>
> ====================
>
> <none>
>
>
>
> Eventdev devices using DPDK-compatible driver
>
> =============================================
>
> <none>
>
>
>
> Eventdev devices using kernel driver
>
> ====================================
>
> <none>
>
>
>
> Other Eventdev devices
>
> ======================
>
> <none>
>
>
>
> Mempool devices using DPDK-compatible driver
>
> ============================================
>
> <none>
>
>
>
> Mempool devices using kernel driver
>
> ===================================
>
> <none>
>
>
>
> Other Mempool devices
>
> =====================
>
> <none>
>
> compute-0:/var/log/puppet/latest#
>
>
> ################################################################################
>
>
>
> Regards,
>
> Himanshu Goyal
>
>
>
> On Fri, Jan 25, 2019 at 2:14 PM Himanshu Goyal <himanshugoyal500 at gmail.com>
> wrote:
>
> Hi Matt,
>
>
>
> Sorry, interrupt remapping was disabled in my BIOS. I tried again after
> enabling it, and now there are no errors in dmesg or ovs-vsctl. *But the
> compute is still rebooting in a loop*.
>
> Please find attached dmesg log output for reference.
>
>
>
> #######################################################################
>
> compute-0:/home/wrsroot#* sudo ovs-vsctl show*
>
> 8a1f7179-2014-4b4e-ad23-2065b8024964
>
>     Manager "ptcp:6640:127.0.0.1"
>
>         is_connected: true
>
>     Bridge "br-phy0"
>
>         Controller "tcp:127.0.0.1:6633"
>
>             is_connected: true
>
>         fail_mode: secure
>
>         Port "lldp64416a6d-7d"
>
>             Interface "lldp64416a6d-7d"
>
>                 type: internal
>
>         Port "phy-br-phy0"
>
>             Interface "phy-br-phy0"
>
>                 type: patch
>
>                 options: {peer="int-br-phy0"}
>
>         Port "br-phy0"
>
>             Interface "br-phy0"
>
>                 type: internal
>
>         Port "eth0"
>
>             Interface "eth0"
>
>                 type: dpdk
>
>                 options: {dpdk-devargs="0000:04:00.0", n_rxq="2"}
>
>     Bridge br-int
>
>         Controller "tcp:127.0.0.1:6633"
>
>             is_connected: true
>
>         fail_mode: secure
>
>         Port br-int
>
>             Interface br-int
>
>                 type: internal
>
>         Port "int-br-phy0"
>
>             Interface "int-br-phy0"
>
>                 type: patch
>
>                 options: {peer="phy-br-phy0"}
>
>     ovs_version: "2.9.0"
>
> compute-0:/home/wrsroot#
>
>
> #############################################################################
>
>
>
> Please suggest any other logs that I can check.
>
>
>
> Regards,
>
> Himanshu Goyal
>
>
>
> On Fri, Jan 25, 2019 at 12:21 PM Himanshu Goyal <
> himanshugoyal500 at gmail.com> wrote:
>
> Thanks Matt,
>
>
>
> Below are the details of my server BIOS. Currently my server's BIOS version
> is SE5C610.86B.01.01.0014.121820151719.
>
> Please suggest which BIOS version we should use, and whether this version
> can be upgraded to the required one.
>
>
>
> #################################################################
>
> compute-0:/home/wrsroot# dmidecode -t1
>
> # dmidecode 3.0
>
> Getting SMBIOS data from sysfs.
>
> SMBIOS 2.7 present.
>
>
>
> Handle 0x0001, DMI type 1, 27 bytes
>
> System Information
>
>         Manufacturer: Intel Corporation
>
>         Product Name: S2600WT2
>
>         Version: ....................
>
>         Serial Number: ............
>
>         UUID: 803BBEFB-BFC1-E511-906E-0012795D96DD
>
>         Wake-up Type: Power Switch
>
>         SKU Number: SKU Number
>
>         Family: Family
>
>
> #############################################################################
>
>
>
>
> ################################################################################
>
> compute-0:/home/wrsroot# sudo dmidecode --type bios
>
> # dmidecode 3.0
>
> Getting SMBIOS data from sysfs.
>
> SMBIOS 2.7 present.
>
>
>
> Handle 0x0000, DMI type 0, 24 bytes
>
> BIOS Information
>
>         Vendor: Intel Corporation
>
>         Version: SE5C610.86B.01.01.0014.121820151719
>
>         Release Date: 12/18/2015
>
>         Address: 0xF0000
>
>         Runtime Size: 64 kB
>
>         ROM Size: 16384 kB
>
>         Characteristics:
>
>                 PCI is supported
>
>                 PNP is supported
>
>                 BIOS is upgradeable
>
>                 BIOS shadowing is allowed
>
>                 Boot from CD is supported
>
>                 Selectable boot is supported
>
>                 EDD is supported
>
>                 5.25"/1.2 MB floppy services are supported (int 13h)
>
>                 3.5"/720 kB floppy services are supported (int 13h)
>
>                 3.5"/2.88 MB floppy services are supported (int 13h)
>
>                 Print screen service is supported (int 5h)
>
>                 8042 keyboard services are supported (int 9h)
>
>                 Serial services are supported (int 14h)
>
>                 Printer services are supported (int 17h)
>
>                 CGA/mono video services are supported (int 10h)
>
>                 ACPI is supported
>
>                 USB legacy is supported
>
>                 LS-120 boot is supported
>
>                 ATAPI Zip drive boot is supported
>
>                 BIOS boot specification is supported
>
>                 Function key-initiated network boot is supported
>
>                 Targeted content distribution is supported
>
>                 UEFI is supported
>
>         BIOS Revision: 0.0
>
>         Firmware Revision: 0.0
>
>
>
> Handle 0x000B, DMI type 13, 22 bytes
>
> BIOS Language Information
>
>         Language Description Format: Abbreviated
>
>         Installable Languages: 1
>
>                 enUS
>
>         Currently Installed Language: enUS
>
>
> ########################################################################################################
>
>
>
> Regards,
>
> Himanshu Goyal
>
>
>
> On Thu, Jan 24, 2019 at 10:55 PM Peters, Matt <Matt.Peters at windriver.com>
> wrote:
>
> Based on the logs, it looks like you have an older system that doesn’t
> fully support VT-d IOMMU remapping.  You can try updating your BIOS if
> there is a newer version available.  We don’t currently support setting
> arbitrary module parameters, so we don’t have a way to implement the
> workaround module param specified in the logs (i.e. disabling interrupt
> remapping).
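>
> As a quick check of what the kernel itself reports about DMAR / interrupt
> remapping support on that host (generic Linux commands, nothing
> StarlingX-specific), you could run:
>
> dmesg | grep -i -e DMAR -e "interrupt remapping"
> cat /proc/cmdline
>
> If the parameter referenced in the logs is the vfio one (an assumption on my
> part), its current value can be read with:
>
> cat /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts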
>
>
>
> Maybe there are others on the distribution list that could offer
> additional suggestions.
>
>
>
> *From: *Himanshu Goyal <himanshugoyal500 at gmail.com>
> *Date: *Thursday, January 24, 2019 at 11:32 AM
> *To: *"Peters, Matt" <Matt.Peters at windriver.com>
> *Cc: *"Hu, Yong" <yong.hu at intel.com>, "
> starlingx-discuss at lists.starlingx.io" <
> starlingx-discuss at lists.starlingx.io>
> *Subject: *Re: [Starlingx-discuss] Deployment Option (error: compute boot
> in loop)
>
>
>
> Hi Peters,
>
>
>
> Please find the output of the requested command in the attached file.
>
>
>
> Regards,
>
> Himanshu Goyal
>
>
> On Thu, Jan 24, 2019 at 6:56 PM Peters, Matt <Matt.Peters at windriver.com>
> wrote:
>
> Can you confirm that VT-d is enabled properly by running the following?
>
>
>
> dmesg | grep -i -e DMAR -e IOMMU
>
>
>
> *From: *Himanshu Goyal <himanshugoyal500 at gmail.com>
> *Date: *Thursday, January 24, 2019 at 8:19 AM
> *To: *"Peters, Matt" <Matt.Peters at windriver.com>
> *Cc: *"Hu, Yong" <yong.hu at intel.com>, "
> starlingx-discuss at lists.starlingx.io" <
> starlingx-discuss at lists.starlingx.io>
> *Subject: *Re: [Starlingx-discuss] Deployment Option (error: compute boot
> in loop)
>
>
>
> Hi Peters,
>
>
>
> Error logs from ovs-vswitchd.log:
>
> #######################################
>
> 2019-01-24T18:42:04.213Z|00830|dpdk|INFO|EAL: PCI device 0000:02:00.0 on
> NUMA socket 0
>
> 2019-01-24T18:42:04.213Z|00831|dpdk|INFO|EAL:   probe driver: 8086:10fb
> net_ixgbe
>
> 2019-01-24T18:42:04.215Z|00832|dpdk|ERR|EAL:   0000:02:00.0 failed to
> select IOMMU type
>
> 2019-01-24T18:42:04.215Z|00833|dpdk|ERR|EAL: Driver cannot attach the
> device (0000:02:00.0)
>
> 2019-01-24T18:42:04.215Z|00834|netdev_dpdk|WARN|Error attaching device
> '0000:02:00.0' to DPDK
>
> 2019-01-24T18:42:04.215Z|00835|netdev|WARN|eth0: could not set
> configuration (Invalid argument)
>
> ########################################
>
>
>
> Please find attached log file also.
>
>
>
> Regards,
>
> Himanshu Goyal
>
>
>
> On Thu, Jan 24, 2019 at 6:32 PM Peters, Matt <Matt.Peters at windriver.com>
> wrote:
>
> The name of eth0 is correct.  It is just an assigned name by configuration
> management.
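>
> If you want to confirm which physical device that logical name maps to, the
> PCI address in the dpdk-devargs option is what matters, e.g.:
>
> sudo ovs-vsctl get Interface eth0 options:dpdk-devargs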
>
>
>
> What err logs are present in /var/log/openvswitch/ovs-vswitchd.log?
>
>
>
> *From: *Himanshu Goyal <himanshugoyal500 at gmail.com>
> *Date: *Thursday, January 24, 2019 at 6:52 AM
> *To: *"Hu, Yong" <yong.hu at intel.com>
> *Cc: *"starlingx-discuss at lists.starlingx.io" <
> starlingx-discuss at lists.starlingx.io>
> *Subject: *Re: [Starlingx-discuss] Deployment Option (error: compute boot
> in loop)
>
>
>
> Thanks a lot Yong,
>
>
>
> I checked the OVS/DPDK status; it is giving me the error shown in the output
> below.
>
> br-phy0 is attaching a port named eth0, but my compute port name is
> ens513f0; I think that may be the issue.
>
>
>
>
> #################################################################################
>
> *ovs-vsctl show output:*
>
> compute-0:/usr/share/openvswitch/scripts$ sudo ovs-vsctl show
>
> 543d08f4-ff1e-4a8d-8e48-11f01356750d
>
>     Manager "ptcp:6640:127.0.0.1"
>
>         is_connected: true
>
>     Bridge "br-phy0"
>
>         Controller "tcp:127.0.0.1:6633"
>
>             is_connected: true
>
>         fail_mode: secure
>
>         Port "phy-br-phy0"
>
>             Interface "phy-br-phy0"
>
>                 type: patch
>
>                 options: {peer="int-br-phy0"}
>
>         Port "br-phy0"
>
>             Interface "br-phy0"
>
>                 type: internal
>
>         Port "lldpabeb30a6-6c"
>
>             Interface "lldpabeb30a6-6c"
>
>                 type: internal
>
>         Port "eth0"
>
>           *  Interface "eth0"*
>
>                 type: dpdk
>
>                 options: {dpdk-devargs="0000:02:00.0", n_rxq="2"}
>
>                 *error: "Error attaching device '0000:02:00.0' to DPDK"*
>
>     Bridge br-int
>
>         Controller "tcp:127.0.0.1:6633"
>
>             is_connected: true
>
>         fail_mode: secure
>
>         Port "int-br-phy0"
>
>             Interface "int-br-phy0"
>
>                 type: patch
>
>                 options: {peer="phy-br-phy0"}
>
>         Port br-int
>
>             Interface br-int
>
>                 type: internal
>
>     ovs_version: "2.9.0"
>
>
> #####################################################################################################
>
>
>
>
> ##############################################################################
>
> *Output of dpdk-devbind.py:*
>
>
>
> compute-0:/usr/share/openvswitch/scripts$ python
> /usr/share/openvswitch/scripts/dpdk-devbind.py --status
>
>
>
> Network devices using DPDK-compatible driver
>
> ============================================
>
> 0000:02:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
> drv=vfio-pci unused=
>
>
>
> Network devices using kernel driver
>
> ===================================
>
> 0000:02:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
> if=ens513f1 drv=ixgbe unused=vfio-pci *Active*
>
> 0000:04:00.0 'I350 Gigabit Network Connection 1521' if=*enp4s0f0* drv=igb
> unused=vfio-pci
>
> 0000:04:00.3 'I350 Gigabit Network Connection 1521' if=enp4s0f3 drv=igb
> unused=vfio-pci
>
>
>
> Other Network devices
>
> =====================
>
> <none>
>
>
>
> Crypto devices using DPDK-compatible driver
>
> ===========================================
>
> <none>
>
>
>
> Crypto devices using kernel driver
>
> ==================================
>
> <none>
>
>
>
> Other Crypto devices
>
> ====================
>
> <none>
>
>
>
> Eventdev devices using DPDK-compatible driver
>
> =============================================
>
> <none>
>
>
>
> Eventdev devices using kernel driver
>
> ====================================
>
> <none>
>
>
>
> Other Eventdev devices
>
> ======================
>
> <none>
>
>
>
> Mempool devices using DPDK-compatible driver
>
> ============================================
>
> <none>
>
>
>
> Mempool devices using kernel driver
>
> ===================================
>
> <none>
>
>
>
> Other Mempool devices
>
> =====================
>
> <none>
>
> compute-0:/usr/share/openvswitch/scripts$
>
>  #############################################################
>
>
>
> Both management and OAM ports support DPDK.
>
>
>
> *lspci output:*
>
> *##################################################*
>
> compute-0:/usr/share/openvswitch/scripts$ lspci -nn | grep Eth
>
> 02:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit
> SFI/SFP+ Network Connection [8086:10fb] (rev 01)
>
> 02:00.1 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit
> SFI/SFP+ Network Connection [8086:10fb] (rev 01)
>
> 04:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network
> Connection [8086:1521] (rev 01)
>
> 04:00.3 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network
> Connection [8086:1521] (rev 01)
>
> ###############################################################
>
> compute-0:/usr/share/openvswitch/scripts$
>
>
>
>
>
> Many Thanks,
>
> Himanshu Goyal
>
>
>
>
>
> On Thu, Jan 24, 2019 at 2:24 PM Hu, Yong <yong.hu at intel.com> wrote:
>
> I saw you enabled VT-d; how about VT-x in the BIOS?
>
>
>
> Another possibility is that your ethernet NIC is not compatible with
> OVS/DPDK.
>
> Please follow these steps to dig out more info:
>
> 1. Lock your compute node
>
> 2. SSH to your compute node and check the OVS/DPDK status with "sudo
> ovs-vsctl show" and "python /usr/share/openvswitch/scripts/dpdk-devbind.py
> --status"
>
>
>
> *From: *Himanshu Goyal <himanshugoyal500 at gmail.com>
> *Date: *Thursday, 24 January 2019 at 4:46 PM
> *To: *"Hu, Yong" <yong.hu at intel.com>
> *Cc: *"Alonso, Juan Carlos" <juan.carlos.alonso at intel.com>, "
> starlingx-discuss at lists.starlingx.io" <
> starlingx-discuss at lists.starlingx.io>
> *Subject: *Re: [Starlingx-discuss] Deployment Option (error: compute boot
> in loop)
>
> Hi Yong,
>
>
>
> I checked the server BIOS settings; Intel VT-d is enabled on my compute
> machine.
>
> The compute machine reboots only after unlocking. Before the reboot it came
> up as unlocked, enabled & online.
>
>
>
> Please suggest how I can debug this.
>
>
>
> Many Thanks,
>
> Himanshu Goyal
>
>
>
> On Wed, Jan 23, 2019 at 9:40 PM Hu, Yong <yong.hu at intel.com> wrote:
>
> intel_iommu=on and iommu are for PCI PT and SR-IOV, as far as I know.
>
> There are 2 other flags, VT-x and VT-d, to be enabled. Please have a look in
> the BIOS.
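>
> A quick way to check from the OS side (generic Linux checks, not
> StarlingX-specific):
>
> # VT-x shows up as the 'vmx' CPU flag
> grep -c vmx /proc/cpuinfo
>
> # confirm the intel_iommu/iommu boot arguments are actually applied
> cat /proc/cmdline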
>
>
>
>
>
>
>
>