Hi All,
Please let me know how to create SR-IOV VFs in StarlingX. I'm using Distributed StarlingX 4.0.
I have made the necessary changes in the BIOS to enable SR-IOV and set 16 VFs.
worker-1:/sys# cat /sys/devices/pci0000:3a/0000:3a:00.0/0000:3b:00.1/sriov_totalvfs
16
worker-1:/sys# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.18.0-147.3.1.rt24.96.el8_1.tis.8.x86_64 root=UUID=92dfc508-183f-4d3b-b073-9402d9850c1a ro security_profile=standard module_blacklist=integrity,ima audit=0
tboot=false crashkernel=auto biosdevname=0 console=ttyS0,115200 iommu=pt usbcore.autosuspend=-1 selinux=0 enforcing=0 nmi_watchdog=0 softlockup_panic=0
intel_iommu=on user_namespace.enable=1 skew_tick=1 hugepagesz=1G hugepages=30 hugepagesz=2M hugepages=0 default_hugepagesz=1G irqaffinity=0 rcu_nocbs=1-31 nohz_full=1-31 kthread_cpus=0 nopti nospectre_v2 nospectre_v1
I ran the command below to associate 8 VFs with the vfio-pci driver. The command is successful.
system host-if-modify worker-1 -n data0 -c pci-sriov -N 8 --vf-driver=vfio c9c03672-a664-4184-8b4f-e5dd2e00b7dd
[sysadmin@controller-0 ~(keystone_admin)]$ system host-if-list worker-1
+--------------------------------------+----------+-----------+----------+---------+-----------------+---------------+-------------+------------+
| uuid                                 | name     | class     | type     | vlan id | ports           | uses i/f      | used by i/f | attributes |
+--------------------------------------+----------+-----------+----------+---------+-----------------+---------------+-------------+------------+
| 910f9be5-7f50-44f2-a3de-1922a0702b62 | mgmt0    | platform  | vlan     | 143     | []              | [u'pxeboot0'] | []          | MTU=1500   |
| c9c03672-a664-4184-8b4f-e5dd2e00b7dd | data0    | pci-sriov | ethernet | None    | [u'enp59s0f1d'] | []            | []          | MTU=1500   |
| fa6f5ffb-9bb2-4991-91b1-e84dcc444315 | pxeboot0 | platform  | ethernet | None    | [u'eno1']       | []            | [u'mgmt0']  | MTU=1500   |
+--------------------------------------+----------+-----------+----------+---------+-----------------+---------------+-------------+------------+
[sysadmin@controller-0 ~(keystone_admin)]$ system host-if-show worker-1 data0
+-----------------+--------------------------------------+
| Property        | Value                                |
+-----------------+--------------------------------------+
| ifname          | data0                                |
| iftype          | ethernet                             |
| ports           | [u'enp59s0f1d']                      |
| imac            | bc:97:e1:28:76:81                    |
| imtu            | 1500                                 |
| ifclass         | pci-sriov                            |
| ptp_role        | none                                 |
| aemode          | None                                 |
| schedpolicy     | None                                 |
| txhashpolicy    | None                                 |
| uuid            | c9c03672-a664-4184-8b4f-e5dd2e00b7dd |
| ihost_uuid      | 0f5ea0b9-0c76-4547-bc40-fa6d5cadfa00 |
| vlan_id         | None                                 |
| uses            | []                                   |
| used_by         | []                                   |
| created_at      |                                      |
| updated_at      |                                      |
| sriov_numvfs    | 8                                    |
| sriov_vf_driver | vfio                                 |
| accelerated     | [False]                              |
+-----------------+--------------------------------------+
I locked the node, executed the above commands, and unlocked it. Once the node came back up, I see that the number of VFs is still zero, so none of the VFs were created.
However, /etc/pcidp/config.json was updated with the device pool information.
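For completeness, a rough sketch of the lock/configure/unlock sequence I followed is below; the interface-datanetwork-assign step is my assumption of how physnet0 ended up in the config.json shown further down:
system host-lock worker-1
system host-if-modify worker-1 -n data0 -c pci-sriov -N 8 --vf-driver=vfio c9c03672-a664-4184-8b4f-e5dd2e00b7dd
# assumed step: attach the data network named in config.json to the interface
system interface-datanetwork-assign worker-1 data0 physnet0
system host-unlock worker-1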
worker-1:/sys# cat /sys/devices/pci0000:3a/0000:3a:00.0/0000:3b:00.1/sriov_numvfs
0
worker-1:/sys# cat /etc/pcidp/config.json
{
  "resourceList": [
    {
      "resourceName": "pci_sriov_net_physnet0",
      "selectors": {
        "vendors": ["14e4"],
        "drivers": ["vfio-pci"],
        "devices": ["16dc"],
        "pfNames": ["enp59s0f1d"]
      }
    }
  ]
}
worker-1:/sys# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP mode DEFAULT group default qlen 1000
link/ether f0:d4:e2:e9:8e:c4 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
link/ether f0:d4:e2:e9:8e:c5 brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
link/ether f0:d4:e2:e9:8e:c6 brd ff:ff:ff:ff:ff:ff
5: enp59s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
link/ether bc:97:e1:28:76:80 brd ff:ff:ff:ff:ff:ff
6: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
link/ether f0:d4:e2:e9:8e:c7 brd ff:ff:ff:ff:ff:ff
7: enp59s0f1d: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether bc:97:e1:28:76:81 brd ff:ff:ff:ff:ff:ff
8: vlan143@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP mode DEFAULT group default qlen 1000
link/ether f0:d4:e2:e9:8e:c4 brd ff:ff:ff:ff:ff:ff
9: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:4f:eb:37:d2 brd ff:ff:ff:ff:ff:ff
10: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
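For reference, a couple of additional checks that could be collected on worker-1 (the interface name is taken from the output above; bnxt_en is assumed to be the PF driver for this Broadcom NIC):
readlink /sys/class/net/enp59s0f1d/device/driver    # confirm which driver currently owns the PF
dmesg | grep -iE "bnxt|sriov"                       # look for driver errors when the VFs were requested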
How do I get those VFs created?
Is there anything I'm missing? Let me know if any further information is required.
Regards,
Sriram
From: Dharwadkar, Sriram
Sent: Wednesday, October 14, 2020 12:17 AM
To: 'MacDonald, Eric' <Eric.MacDonald@windriver.com>
Subject: RE: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded
Hi Eric,
The “system host-if-modify $NODE -n sriov0 -c pci-sriov -N 8 --vf-driver=netdevice $DATA0IFUUID” command succeeds, but I don’t see the SR-IOV VFs being created, nor is config.json updated.
In fact, the worker node went into the “operational = disabled” state.
Any idea what could cause this?
Regards,
Sriram
From: Dharwadkar, Sriram
Sent: Thursday, October 8, 2020 11:35 PM
To: 'MacDonald, Eric' <Eric.MacDonald@windriver.com>
Subject: RE: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded
Hi Eric,
The hardware supports SR-IOV; I was able to do everything manually.
On worker-1, I ran:
echo 16 > /sys/devices/pci0000:3a/0000:3a:00.0/0000:3b:00.1/sriov_numvfs
Once I updated the VF count, 16 VFs were created:
7: enp59s0f1d: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether bc:97:e1:28:76:81 brd ff:ff:ff:ff:ff:ff
vf 0 MAC 32:14:b1:2f:81:30, spoof checking off, link-state auto, trust off
vf 1 MAC 1a:12:0b:a8:ef:64, spoof checking off, link-state auto, trust off
vf 2 MAC 0a:cf:87:8a:42:d8, spoof checking off, link-state auto, trust off
vf 3 MAC 12:23:d1:28:44:16, vlan 201, spoof checking off, link-state auto, trust off
vf 4 MAC 76:59:0b:52:8f:1d, vlan 203, spoof checking off, link-state auto, trust off
vf 5 MAC 86:ed:ef:a3:b0:75, spoof checking off, link-state auto, trust off
vf 6 MAC 9e:71:b6:f8:33:49, spoof checking off, link-state auto, trust off
vf 7 MAC 92:ab:ef:af:ff:6c, spoof checking off, link-state auto, trust off
vf 8 MAC c2:18:50:52:ef:7e, spoof checking off, link-state auto, trust off
vf 9 MAC 16:d2:ba:93:5e:c1, spoof checking off, link-state auto, trust off
vf 10 MAC ba:8f:1c:00:6c:91, spoof checking off, link-state auto, trust off
vf 11 MAC 0a:6c:ac:3b:1f:99, spoof checking off, link-state auto, trust off
vf 12 MAC e2:39:d2:91:2f:55, spoof checking off, link-state auto, trust off
vf 13 MAC b2:97:a2:fe:76:47, vlan 202, spoof checking off, link-state auto, trust off
vf 14 MAC ea:89:0e:c4:f1:90, vlan 200, spoof checking off, link-state auto, trust off
vf 15 MAC 86:5e:37:bc:e7:e5, spoof checking off, link-state auto, trust off
I did a modprobe of vfio-pci and created config.json with details for two pools.
worker-0:/home/sysadmin/dpdk# cat /etc/pcidp/config.json
{
  "resourceList": [
    {
      "resourceName": "bcm_sriov_netdevice",
      "selectors": {
        "vendors": ["14e4"],
        "devices": ["16dc"],
        "drivers": ["bnxt_en"],
        "pfNames": ["enp59s0f0#0-7"]
      }
    },
    {
      "resourceName": "bcm_sriov_vfio",
      "selectors": {
        "vendors": ["14e4"],
        "devices": ["16dc"],
        "drivers": ["vfio-pci"],
        "pfNames": ["enp59s0f0#8-15"]
      }
    }
  ]
}
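(For reference, a rough sketch of one way to move the second group of VFs onto vfio-pci manually; the PCI address below is only a placeholder, the real VF addresses can be read from the /sys/class/net/enp59s0f0/device/virtfn* symlinks.)
modprobe vfio-pci
# repeat for each VF that should land in the vfio pool (VFs 8-15 here)
VF=0000:3b:08.0                                      # placeholder VF address
echo "$VF" > /sys/bus/pci/devices/$VF/driver/unbind  # detach the VF from its current driver
echo vfio-pci > /sys/bus/pci/devices/$VF/driver_override
echo "$VF" > /sys/bus/pci/drivers_probe              # rebind using the override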
Then the SR-IOV device plugin also came up:
kube-sriov-device-plugin-amd64-ghxmf 1/1 Running 0 10h 192.168.22.107 worker-1 <none> <none>
I could also deploy a Pod that uses these VFs.
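(To double-check that both pools are advertised, something like the command below can be used; the intel.com resource prefix is the device plugin's default and is an assumption here.)
kubectl get node worker-1 -o jsonpath='{.status.allocatable}'
# expect entries such as "intel.com/bcm_sriov_netdevice": "8" and "intel.com/bcm_sriov_vfio": "8"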
I'm not sure why “system host-if-modify $NODE -n sriov0 -c pci-sriov -N 8 --vf-driver=netdevice $DATA0IFUUID” should fail.
Also, is there any way to associate one set of VFs with the netdevice driver and another set with the vfio-pci driver?
In the above example, I think the command creates a single group of 8 VFs, associates it with the netdevice driver, and creates only one device pool in /etc/pcidp/config.json.
Regards,
Sriram
From: MacDonald, Eric <Eric.MacDonald@windriver.com>
Sent: Wednesday, October 7, 2020 7:10 PM
To: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Subject: Re: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded
The server you are installing on might not support SR-IOV.
From: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Sent: Wednesday, October 7, 2020 12:44 AM
To: MacDonald, Eric <Eric.MacDonald@windriver.com>
Cc: starlingx-discuss@lists.starlingx.io <starlingx-discuss@lists.starlingx.io>
Subject: RE: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded
Hi Eric,
I was able to figure out the issue. Thanks for the pointers.
While bringing up the data network, I had used the command below for both workers, as I wanted SR-IOV interfaces.
I'm not sure if that is the reason why the worker nodes were in the disabled and offline state.
When I changed the class from pci-sriov to "data", the worker nodes became enabled and the operational status is available.
Do you see any problem with the first command?
Regards,
Sriram
From: MacDonald, Eric <Eric.MacDonald@windriver.com>
Sent: Tuesday, October 6, 2020 12:36 AM
To: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Subject: Re: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded
Sriram,
For the worker node configuration failures, look for Error/Warn logs in /var/log/puppet/*.
Those logs should help you understand what configuration failed.
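(For example, something along these lines; the exact subdirectory layout under /var/log/puppet/ varies from run to run.)
grep -riE "error|warn" /var/log/puppet/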
Eric.
From: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Sent: Monday, October 5, 2020 2:57 PM
To: MacDonald, Eric <Eric.MacDonald@windriver.com>
Subject: RE: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded
Hi Eric,
Thanks for the explanation.
Initially I was planning to bring up 2 controllers (All-in-One standalone) + 1 worker node. For that kind of configuration we would need high-capacity hardware for all the nodes, which may not be available. That's the reason I deployed the 1 standard controller-with-storage + 2 worker nodes configuration.
The controller node came up without any issues, and the worker nodes also came up after system host-unlock of worker-0 and worker-1. But I could see these alarms:
in progress. Manual Lock and Unlock may be required if auto-recovery is unsuccessful
in progress. Manual Lock and Unlock may be required if auto-recovery is unsuccessful
I did a lock and unlock of the nodes. After the nodes came up, I still see the same issue.
How do I check for configuration failures?
Also, there is a taint on both worker nodes ("Taints: services=disabled:NoExecute"), which I can see using kubectl describe node worker-0 and kubectl describe node worker-1.
So I don't see the SR-IOV CNI and SR-IOV device plugin being deployed on the worker nodes.
Regards,
Sriram
From: MacDonald, Eric <Eric.MacDonald@windriver.com>
Sent: Monday, October 5, 2020 9:14 PM
To: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Subject: Re: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded
Sriram,
The AIO controller has a lot more work to do during startup as it contains both control and compute functions.
As a result, you may temporarily see CPU resource utilization logs during the latter stages of the provisioning.
Now that you went to the standard config, you might see another degrade for a short period of time following the unlock of the second controller, during the initial filesystem sync.
Again, degrade is general, but if you see a degrade there should be an alarm that represents the reason for the degrade.
Eric.
From: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Sent: Monday, October 5, 2020 10:44 AM
To: MacDonald, Eric <Eric.MacDonald@windriver.com>
Subject: RE: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded
Thanks Eric.
I reinstalled the system from the ISO again. I didn't see this error again with the different configuration (I selected standard controller instead of All-in-One Duplex).
I will check for alarms in case of any further issues.
Regards,
Sriram
From: MacDonald, Eric <Eric.MacDonald@windriver.com>
Sent: Monday, October 5, 2020 5:55 PM
To: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Subject: Re: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded
What does 'fm alarm-list' show ?
As a general rule, if a host is degraded there should be an alarm raised for that degraded condition.
Eric MacDonald
StarlingX Maintenance
From: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Sent: Monday, October 5, 2020 4:00 AM
To: starlingx-discuss@lists.starlingx.io <starlingx-discuss@lists.starlingx.io>
Subject: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded
Hi,
I have installed Distributed StarlingX 4.0 in "All-in-One Duplex" mode. There are two nodes in the central cloud and two in the edge cloud. The central cloud is up and running.
For the edge cloud configuration, I configured the private registry in the bootstrap override file. From the central cloud, I was able to add the edge cloud. The images required for the StarlingX installation are downloaded from the private registry, and the installation goes through without any issues.
[sysadmin@controller-0 ~(keystone_admin)]$ dcmanager subcloud list
+----+------+------------+--------------+---------------+---------+
| id | name | management | availability | deploy status | sync |
+----+------+------------+--------------+---------------+---------+
| 47 | edge | unmanaged | offline | complete | unknown |
+----+------+------------+--------------+---------------+---------+
[sysadmin@controller-0 ~(keystone_admin)]$
Then I followed the steps mentioned in the document
https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/aio_duplex_install_kubernetes.html#configure-controller-0
and finally unlocked controller-0. The system rebooted and came up successfully. Afterwards, I see the availability as “degraded”:
[sysadmin@controller-0 log(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | degraded     |
+----+--------------+-------------+----------------+-------------+--------------+
tail -f /var/log/sysinv.log shows “Prerequisites not met”:
ysinv 2020-10-05 07:57:12.271 96246 INFO ceph_client [-] Result: {u'waiting': [], u'has_failed': False, u'state': u'success', u'is_waiting': False, u'running': [], u'failed': [], u'finished': [{u'outb': u'{"fsid":"50634828-68b2-43c4-aaa0-ebf53f6e675a","health":{"checks":{},"status":"HEALTH_OK","overall_status":"HEALTH_WARN"},"election_epoch":7,"quorum":[0],"quorum_names":["controller"],"monmap":{"epoch":1,"fsid":"50634828-68b2-43c4-aaa0-ebf53f6e675a","modified":"2020-10-05
06:53:11.461060","created":"2020-10-05 06:53:11.461060","features":{"persistent":["kraken","luminous","mimic","osdmap-prune"],"optional":[]},"mons":[{"rank":0,"name":"controller","addr":"192.168.22.101:6789/0","public_addr":"192.168.22.101:6789/0"}]},"osdmap":{"osdmap":{"epoch":10,"num_osds":1,"num_up_osds":1,"num_in_osds":1,"full":false,"nearfull":false,"num_remapped_pgs":0}},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":112181248,"bytes_avail":1197865828352,"bytes_total":1197978009600},"fsmap":{"epoch":1,"by_rank":[]},"mgrmap":{"epoch":48,"active_gid":24132,"active_name":"controller-0","active_addr":"192.168.22.102:6804/93283","available":true,"standbys":[],"modules":["restful"],"available_modules":[{"name":"balancer","can_run":true,"error_string":""},{"name":"dashboard","can_run":false,"error_string":"Frontend
assets not found: incomplete build?"},{"name":"hello","can_run":true,"error_string":""},{"name":"iostat","can_run":true,"error_string":""},{"name":"localpool","can_run":true,"error_string":""},{"name":"prometheus","can_run":true,"error_string":""},{"name":"restful","can_run":true,"error_string":""},{"name":"selftest","can_run":true,"error_string":""},{"name":"smart","can_run":true,"error_string":""},{"name":"status","can_run":true,"error_string":""},{"name":"telegraf","can_run":true,"error_string":""},{"name":"telemetry","can_run":true,"error_string":""},{"name":"zabbix","can_run":true,"error_string":""}],"services":{"restful":"https://controller-0:7999/"}},"servicemap":{"epoch":1,"modified":"0.000000","services":{}}}\n',
u'outs': u'', u'command': u'status format=json'}], u'is_finished': True, u'id': u'140404196232080'}
sysinv 2020-10-05 07:57:12.284 96246 INFO sysinv.conductor.manager [-] Platform managed application platform-integ-apps: Prerequisites not met.
sysinv 2020-10-05 07:57:12.286 96246 INFO sysinv.conductor.manager [-] Platform managed application oidc-auth-apps: Prerequisites not met.
sysinv 2020-10-05 07:57:12.291 96246 INFO sysinv.api.controllers.v1.rest_api [-] GET cmd:http://localhost:30001/nfvi-plugins/v1/sw-update hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:None
sysinv 2020-10-05 07:57:12.293 96246 INFO sysinv.ap
I’m not sure why availability is shown as degraded. Any help would be appreciated. Let me know if any logs are required.
Regards,
Sriram