Issue with sharing NVIDIA MIG vGPU via SR-IOV on OpenStack
Dear StarlingX community,

We are attempting to configure vGPU sharing via MIG (Multi-Instance GPU) on NVIDIA H200 cards within a StarlingX (STX) 24.09 environment running OpenStack (Caracal).

Platform configuration details: StarlingX (STX) 24.09 with OpenStack (Caracal); GPU: NVIDIA H200; CPU: AMD EPYC; kernel: vmlinuz-6.6.0-1-amd64.

We successfully installed and deployed the nvidia-vgpu-ubuntu-aie-580_580.82.0.deb patch on the two controllers via USM. The NVIDIA side is functional: we were able to partition the GPU into MIG instances and associate those slices with VFs, following the official NVIDIA GRID vGPU guide (https://docs.nvidia.com/vgpu/latest/grid-vgpu-user-guide/index.html#ubuntu-install-configure-vgpu). The GPU supports vGPU/MIG sharing via VFs (Virtual Functions) using vendor-specific VFIO.

To enable VF sharing with OpenStack, we applied a helm override configuration in StarlingX, updating Nova PCI parameters based on the official OpenStack PCI passthrough documentation (https://docs.openstack.org/nova/latest/admin/pci-passthrough.html). In that configuration, 0000:03:00.2 is the PCI address of a VF linked to a MIG instance with a capacity of 7G_141.

However, when we attempt to attach the VF to a guest VM using a Nova flavor, the physical node crashes, producing the attached logs.

Does anyone have specific experience or documentation regarding this combination of StarlingX/OpenStack and vGPU/MIG with NVIDIA vendor-specific VFIO? Specifically, we are looking for the following:

* Confirmation or correction of the Nova PCI overrides.
* Guidelines/commands to correctly integrate the vGPU/MIG driver with Nova and StarlingX.
* Any additional steps required on the STX system or the OpenStack components.

Note: a VM without a GPU but with an Ethernet card shared via SR-IOV works well.

Thank you!

Giuseppe Del Gaudio - Cloud Engineer | Cloud & Digital Architecture
NTT DATA Italia
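Since the override itself is only referenced as a screenshot, the general shape of such a Nova PCI override and of the flavor-side request on stx-openstack is sketched below. The chart name, namespace, values-file layout and flavor name are assumptions rather than the exact configuration used here, and the alias/device_spec values are the ones quoted later in this thread:

# Sketch only (assumed layout): deliver Nova [pci] options via a helm override,
# then request one VF per instance through the alias from a flavor.
cat > nova-pci-overrides.yaml <<'EOF'
conf:
  nova:
    pci:
      # several device_spec lines may need the chart's multistring handling
      device_spec: '{"address": "0000:03:00.2", "resource_class": "H200-vf"}'
      alias: '{"name": "ah200-vf", "device_type": "type-VF", "resource_class": "H200-vf"}'
EOF
system helm-override-update stx-openstack nova openstack --values nova-pci-overrides.yaml
system application-apply stx-openstack

# Standard Nova request for one device from the "ah200-vf" alias:
openstack flavor set gpu.vf.small --property "pci_passthrough:alias"="ah200-vf:1"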
We are exploring the use of NVIDIA GPUs, but from the context of making the GPU resources available within Kubernetes. Currently we're trying to get the NVIDIA GPU Operator for Kubernetes working, and we haven't done much testing of PCI passthrough to VMs.

It's interesting that the NVIDIA documentation you linked doesn't seem to mention the "Hopper" architecture at all.

Can you provide a source for the nvidia-vgpu-ubuntu-aie-580_580.82.0.deb package that you said you installed?

After installing the NVIDIA vGPU software, were the NVIDIA-specific VFIO drivers installed as per the Virtual GPU Software User Guide (https://docs.nvidia.com/vgpu/latest/grid-vgpu-user-guide/index.html#verify-install-update-vgpu-ubuntu)?

# lsmod | grep vfio
nvidia_vgpu_vfio       27099  0
nvidia              12316924  1 nvidia_vgpu_vfio
vfio_mdev              12841  0
mdev                   20414  2 vfio_mdev,nvidia_vgpu_vfio
vfio_iommu_type1       22342  0
vfio                   32331  3 vfio_mdev,nvidia_vgpu_vfio,vfio_iommu_type1
#

Which version of the guest drivers were you using?

Regards,
Chris
Hi,

Thank you for the reply, and sorry for the delayed response; I did not receive any notification.

At the moment we are completely focused on sharing vGPUs (specifically MIG) via OpenStack on the STX 10 platform. As you mentioned, the NVIDIA documentation does not include any reference to the Hopper architecture, but the procedure we followed to create the VFs, etc., works correctly on Hopper.

We cannot provide the vGPU manager software because it is bound by a license. This software installs the manager and the GPU driver. We have also installed the latest driver, nvidia-vgpu-ubuntu-aie-580_580.95, provided with the NVIDIA 580.95.02 driver. As mentioned above, this type of GPU on the STX 6.6 kernel uses vendor-specific VFIO, so we cannot use mdev to share cards between OpenStack and STX bare metal, only VFs.

After the package installation, the following kernel modules are loaded on the system:

sysadmin@controller-0:~$ lsmod | grep vfio
nvidia_vgpu_vfio      126976  12
nvidia              14381056  4 nvidia_vgpu_vfio
vfio_pci_core          90112  1 nvidia_vgpu_vfio
mdev                   20480  1 nvidia_vgpu_vfio
vfio_iommu_type1       45056  0
vfio                   61440  3 vfio_pci_core,nvidia_vgpu_vfio,vfio_iommu_type1
kvm                  1347584  2 kvm_amd,nvidia_vgpu_vfio
irqbypass              12288  3 vfio_pci_core,nvidia_vgpu_vfio,kvm

All the devices are attached to the nvidia kernel module.

The guest drivers for the VMs are matched to the vGPU manager and are provided in the same zip file as the NVIDIA software, but the VM fails to even start up. When OpenStack schedules the VM and starts the machine, the physical STX host crashes and reboots with a kernel panic.

In nova.conf we added this configuration to pass the VFs:

[pci]
device_spec = {"address": "0000:03:00.2", "resource_class": "H200-vf", "managed": "no", "remote_managed": "true"}
device_spec = {"address": "0000:03:00.3", "resource_class": "H200-vf", "managed": "no", "remote_managed": "true"}
device_spec = {"address": "0000:03:00.4", "resource_class": "H200-vf", "managed": "no", "remote_managed": "true"}
device_spec = {"address": "0000:03:00.5", "resource_class": "H200-vf", "managed": "no", "remote_managed": "true"}
alias = {"name": "ah200-vf", "device_type":"type-VF", "resource_class": "H200-vf"}

After OpenStack schedules the VM, the bare-metal host crashes as follows:

============================================================
[ 168.612066] EXT4-fs (rbd0): mounted filesystem 7fe62dc3-02b1-4fa2-8314-2043ff4c0fa5 r/w with ordered data mode. Quota mode: none.
[ 168.612113] EXT4-fs (rbd1): mounted filesystem 76d79800-51f9-47db-b8cd-2a7b83c74ee1 r/w with ordered data mode. Quota mode: none.
[ 430.648672] nvidia 0000:03:01.2: Enabling HDA controller
[ 430.649641] nvidia 0000:03:01.2: Enabling HDA controller
[ 430.649676] nvidia 0000:03:01.2: Runtime PM usage count underflow!
[ 430.649818] ------------[ cut here ]------------ [ 430.649821] WARNING: CPU: 0 PID: 9 at drivers/vfio/group.c:688 vfio_device_set_group+0xc8/0x1d0 [vfio] [ 430.649833] Modules linked in: vfio_pci sctp_diag tcp_diag udp_diag raw_diag inet_diag unix_diag xt_CHECKSUM nbd rbd libceph dns_resolver nfnetlink_cttimeout ip6_tables xt_set ip6t_rpfilter ipt_rpfilter ip_set_hash_net ip_set_hash_ip ip_set veth xt_statistic wireguard libchacha20poly1305 chacha_x86_64 poly1305_x86_64 curve25519_x86_64 libcurve25519_generic libchacha vxlan gre openvswitch nf_conncount nf_conntrack_netlink xt_recent xt_MASQUERADE xt_mark xt_conntrack bnxt_en(OE) nft_chain_nat xt_comment xt_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_multiport binfmt_misc iscsi_target_mod target_core_mod pci_pf_stub nfsv3 nfs fscache netfs esp4 dm_crypt trusted asn1_encoder xt_addrtype nft_compat nf_tables nfnetlink br_netfilter bridge nfsd virtio_net net_failover failover auth_rpcgss nfs_acl lockd grace 8021q garp stp mrp llc xfs cls_u32 sch_sfq sch_htb nvidia_vgpu_vfio(OE) nvidia(OE) vfio_pci_core mdev vfio_iommu_type1 vfio sctp ip6_udp_tunnel udp_tunnel xprtrdma(O) svcrdma(O) rpcrdma(O) nvmet_rdma(O) [ 430.649866] nvme_rdma(O) ib_srp(O) ib_isert(O) ib_iser(O) rdma_rxe(O) rdma_ucm(O) rdma_cm(O) iw_cm(O) ib_ucm(O) ib_cm(O) intel_uncore_frequency_common drbd lru_cache libcrc32c fuse drm sunrpc bpf_preload efivarfs ip_tables overlay ext4 mbcache jbd2 dm_multipath dm_mod mlx5_ib(O) ib_uverbs(O) wmi_bmof kvm_amd kvm ib_core(O) mlx5_core(O) irqbypass mlxfw(O) crct10dif_pclmul mlxdevm(O) crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 mlx_compat(O) rapl nvme psample uas nvme_core tls t10_pi acpi_cpufreq usb_storage macsec crc64_rocksoft_generic pci_hyperv_intf ccp crc64_rocksoft mpt3sas crc64 wmi i2c_designware_platform i2c_designware_core ipmi_si ipmi_devintf ipmi_msghandler iavf(O) i40e(O) ice(O) [last unloaded: bnxt_en(OE)] [ 430.649899] CPU: 0 PID: 9 Comm: kworker/0:1 Kdump: loaded Tainted: G OE 6.6.0-1-amd64 #1 Debian 6.6.63-1.stx.99 [ 430.649902] Hardware name: Lenovo ThinkSystem SR675 V3/SB27B87354, BIOS QGE140H-8.21 05/14/2025 [ 430.649904] Workqueue: events work_for_cpu_fn [ 430.649909] RIP: 0010:vfio_device_set_group+0xc8/0x1d0 [vfio] [ 430.649914] Code: 8b 60 03 00 00 48 8d 42 d8 48 39 d1 75 0f eb 5f 48 8b 50 28 48 8d 42 d8 48 39 d1 74 52 4c 3b 28 75 ee 4c 89 f7 e8 a8 d4 49 c8 <0f> 0b 48 c7 c7 d8 ca ac c1 48 c7 c3 ea ff ff ff e8 93 d4 49 c8 4c [ 430.649916] RSP: 0018:ff4108f84028fd80 EFLAGS: 00010246 [ 430.649918] RAX: ff18105388a1b080 RBX: ff1810543e9b0000 RCX: ff1810543e9b0360 [ 430.649919] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ff1810543e9b0370 [ 430.649919] RBP: ff1810571c71e000 R08: ff18105573011b0e R09: 0000000000000000 [ 430.649920] R10: 0000000000000001 R11: 0000000000000000 R12: ff1810539209c600 [ 430.649920] R13: ff1810538d9a20c0 R14: ff1810543e9b0370 R15: ff4108f88850f990 [ 430.649921] FS: 0000000000000000(0000) GS:ff1810b14f200000(0000) knlGS:0000000000000000 [ 430.649922] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 430.649923] CR2: 00007fb69bcb0000 CR3: 0000006091402002 CR4: 0000000000771ef0 [ 430.649924] PKRU: 55555554 [ 430.649924] Call Trace: [ 430.649927] <TASK> [ 430.649930] ? __warn+0x84/0x140 [ 430.649933] ? vfio_device_set_group+0xc8/0x1d0 [vfio] [ 430.649939] ? report_bug+0x198/0x1b0 [ 430.649943] ? handle_bug+0x53/0x90 [ 430.649946] ? exc_invalid_op+0x18/0x70 [ 430.649948] ? asm_exc_invalid_op+0x1a/0x20 [ 430.649951] ? 
vfio_device_set_group+0xc8/0x1d0 [vfio] [ 430.649955] vfio_register_group_dev+0x4e/0xd0 [vfio] [ 430.649961] vfio_pci_core_register_device+0x197/0x410 [vfio_pci_core] [ 430.649967] ? device_initialize+0xab/0x110 [ 430.649973] vfio_pci_probe+0x51/0x120 [vfio_pci] [ 430.649976] local_pci_probe+0x47/0xa0 [ 430.649980] work_for_cpu_fn+0x17/0x30 [ 430.649982] process_one_work+0x175/0x360 [ 430.649984] worker_thread+0x280/0x390 [ 430.649986] ? __pfx_worker_thread+0x10/0x10 [ 430.649987] kthread+0xdd/0x110 [ 430.649990] ? __pfx_kthread+0x10/0x10 [ 430.649991] ret_from_fork+0x31/0x50 [ 430.649994] ? __pfx_kthread+0x10/0x10 [ 430.649996] ret_from_fork_asm+0x1b/0x30 [ 430.650000] </TASK> [ 430.650000] ---[ end trace 0000000000000000 ]--- [ 430.650010] vfio-pci: probe of 0000:03:01.2 failed with error -22 [ 430.662135] tun: Universal TUN/TAP device driver, 1.6 [ 430.663054] tap5f9c2335-39: entered promiscuous mode [ 431.328345] BUG: kernel NULL pointer dereference, address: 0000000000000010 [ 431.328586] #PF: supervisor read access in kernel mode [ 431.328748] #PF: error_code(0x0000) - not-present page [ 431.328912] PGD 488b01067 P4D 0 [ 431.329016] Oops: 0000 [#1] PREEMPT SMP NOPTI [ 431.329153] CPU: 55 PID: 109169 Comm: qemu-system-x86 Kdump: loaded Tainted: G W OE 6.6.0-1-amd64 #1 Debian 6.6.63-1.stx.99 [ 431.329538] Hardware name: Lenovo ThinkSystem SR675 V3/SB27B87354, BIOS QGE140H-8.21 05/14/2025 [ 431.329811] RIP: 0010:vfio_df_open+0x37/0x110 [vfio] [ 431.329978] Code: 48 83 ec 08 48 8b 1f 8b 83 5c 03 00 00 85 c0 75 5d c7 83 5c 03 00 00 01 00 00 00 48 8b 2f 4c 8b 67 28 48 8b 45 00 48 8b 40 68 <48> 8b 78 10 e8 90 f2 8e c7 84 c0 0f 84 af 00 00 00 4d 85 e4 74 51 [ 431.330566] RSP: 0018:ff4108f8980efcf0 EFLAGS: 00010246 [ 431.330730] RAX: 0000000000000000 RBX: ff1810543e9b4800 RCX: 00000000000000d7 [ 431.330954] RDX: ffffffff893b78f0 RSI: ffffffffc14c3980 RDI: ff1810b2e0c3c000 [ 431.331178] RBP: ff1810543e9b4800 R08: 000000000000000c R09: ff1810b2e0c3c000 [ 431.331402] R10: 0000000000000001 R11: fefefefefefefeff R12: 0000000000000000 [ 431.331626] R13: 000000000000002b R14: ff1810b2e0c3c000 R15: ffffffffe0c3c000 [ 431.331850] FS: 00007f80e22c3e80(0000) GS:ff181110cb5c0000(0000) knlGS:0000000000000000 [ 431.332105] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 431.332286] CR2: 0000000000000010 CR3: 00000004717fa001 CR4: 0000000000771ee0 [ 431.332511] PKRU: 55555554 [ 431.332597] Call Trace: [ 431.332677] <TASK> [ 431.332744] ? __die+0x24/0x70 [ 431.332845] ? page_fault_oops+0x15b/0x460 [ 431.332973] ? terminate_walk+0xee/0x100 [ 431.341558] ? bsearch+0x57/0x90 [ 431.350127] ? exc_page_fault+0x69/0x150 [ 431.358608] ? asm_exc_page_fault+0x26/0x30 [ 431.366985] ? __symbol_put+0x70/0xa0 [ 431.375241] ? vfio_df_open+0x37/0x110 [vfio] [ 431.383395] vfio_group_fops_unl_ioctl+0x292/0x720 [vfio] [ 431.391539] __x64_sys_ioctl+0x8f/0xd0 [ 431.399556] do_syscall_64+0x58/0xb0 [ 431.407438] ? sched_clock+0x10/0x30 [ 431.415189] ? get_vtime_delta+0xf/0xb0 [ 431.422834] ? ct_kernel_exit.constprop.0+0x81/0xa0 [ 431.430412] ? __ct_user_enter+0x5e/0xd0 [ 431.437861] ? syscall_exit_to_user_mode+0x32/0x40 [ 431.445257] ? do_syscall_64+0x65/0xb0 [ 431.452534] ? __do_sys_newlstat+0x64/0x90 [ 431.459705] ? sched_clock+0x10/0x30 [ 431.466733] ? get_vtime_delta+0xf/0xb0 [ 431.473674] ? ct_kernel_exit.constprop.0+0x81/0xa0 [ 431.480554] ? __ct_user_enter+0x5e/0xd0 [ 431.487304] ? syscall_exit_to_user_mode+0x32/0x40 [ 431.494015] ? do_syscall_64+0x65/0xb0 [ 431.500672] ? 
__ct_user_enter+0x5e/0xd0
[ 431.507176] entry_SYSCALL_64_after_hwframe+0x78/0xe2
[ 431.513580] RIP: 0033:0x7f80e2cf5277
[ 431.519962] Code: 00 00 00 48 8b 05 19 cc 0d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d e9 cb 0d 00 f7 d8 64 89 01 48
[ 431.533533] RSP: 002b:00007ffd09bc3398 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 431.540313] RAX: ffffffffffffffda RBX: 0000559d6081bcb0 RCX: 00007f80e2cf5277
[ 431.546982] RDX: 0000559d6081d690 RSI: 0000000000003b6a RDI: 0000000000000028
[ 431.553537] RBP: 0000559d60822800 R08: 0000559d5f19cbd0 R09: 00007ffd09bc12f9
[ 431.559970] R10: 000000000000006f R11: 0000000000000246 R12: 0000559d6081b280
[ 431.566256] R13: 0000559d6081d690 R14: 00007ffd09bc45c0 R15: 0000559d6081d690
[ 431.572378] </TASK>
[ 431.578322] Modules linked in: vhost_net vhost vhost_iotlb tap tun vfio_pci sctp_diag tcp_diag udp_diag raw_diag inet_diag unix_diag xt_CHECKSUM nbd rbd libceph dns_resolver nfnetlink_cttimeout ip6_tables xt_set ip6t_rpfilter ipt_rpfilter ip_set_hash_net ip_set_hash_ip ip_set veth xt_statistic wireguard libchacha20poly1305 chacha_x86_64 poly1305_x86_64 curve25519_x86_64 libcurve25519_generic libchacha vxlan gre openvswitch nf_conncount nf_conntrack_netlink xt_recent xt_MASQUERADE xt_mark xt_conntrack bnxt_en(OE) nft_chain_nat xt_comment xt_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_multiport binfmt_misc iscsi_target_mod target_core_mod pci_pf_stub nfsv3 nfs fscache netfs esp4 dm_crypt trusted asn1_encoder xt_addrtype nft_compat nf_tables nfnetlink br_netfilter bridge nfsd virtio_net net_failover failover auth_rpcgss nfs_acl lockd grace 8021q garp stp mrp llc xfs cls_u32 sch_sfq sch_htb nvidia_vgpu_vfio(OE) nvidia(OE) vfio_pci_core mdev vfio_iommu_type1 vfio sctp ip6_udp_tunnel udp_tunnel xprtrdma(O)
[ 431.578381] svcrdma(O) rpcrdma(O) nvmet_rdma(O) nvme_rdma(O) ib_srp(O) ib_isert(O) ib_iser(O) rdma_rxe(O) rdma_ucm(O) rdma_cm(O) iw_cm(O) ib_ucm(O) ib_cm(O) intel_uncore_frequency_common drbd lru_cache libcrc32c fuse drm sunrpc bpf_preload efivarfs ip_tables overlay ext4 mbcache jbd2 dm_multipath dm_mod mlx5_ib(O) ib_uverbs(O) wmi_bmof kvm_amd kvm ib_core(O) mlx5_core(O) irqbypass mlxfw(O) crct10dif_pclmul mlxdevm(O) crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 mlx_compat(O) rapl nvme psample uas nvme_core tls t10_pi acpi_cpufreq usb_storage macsec crc64_rocksoft_generic pci_hyperv_intf ccp crc64_rocksoft mpt3sas crc64 wmi i2c_designware_platform i2c_designware_core ipmi_si ipmi_devintf ipmi_msghandler iavf(O) i40e(O) ice(O) [last unloaded: bnxt_en(OE)]
[ 431.704871] CR2: 0000000000000010
===========================================================

Do you have any suggestions on OpenStack or STX to make everything work? Or do you have any ideas about this error?

Note: we also tested this type of passthrough on a standard system (not STX) with Ubuntu 24, kernel 6.8, and KVM 10, and everything works correctly.

P.S. We also tried STX 11 rc1 with kernel 6.12 and hit the same error.

Thanks for the help.

Regards,
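One generic check worth capturing alongside the logs above is which kernel driver owns the PF and each VF at the moment the instance is scheduled; with the vendor-specific framework the VFs should stay on the NVIDIA stack and never be rebound to plain vfio-pci. This is ordinary lspci/sysfs inspection, not anything specific to the NVIDIA tooling:

# Show the driver in use for the PF and one VF.
lspci -nnk -s 0000:03:00.0
lspci -nnk -s 0000:03:00.2

# Same information straight from sysfs for the four VFs listed in nova.conf.
for dev in 0000:03:00.2 0000:03:00.3 0000:03:00.4 0000:03:00.5; do
    echo -n "$dev -> "
    basename "$(readlink -f /sys/bus/pci/devices/$dev/driver)"
done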
Hi Giuseppe,

The community briefly touched on the challenge you brought up in this thread on the StarlingX TSC & Community call last week. Based on that conversation, more specific focus on GPU support in the platform is still ahead for the StarlingX community. The challenge you're facing seems to be addressed in a later OpenStack version than the one integrated in the platform version you're using; there might be a way to port that back. Were you able to make progress with troubleshooting on your end in the meantime?

Plans for upcoming StarlingX versions for GPUs include a few additions from the CNCF ecosystem, including:

* NVIDIA DRA driver for GPUs in Kubernetes: https://github.com/NVIDIA/k8s-dra-driver-gpu
* https://docs.google.com/document/d/1BNWqgx_SmZDi-va_V31v3DnuVwYnF2EmN7D-O_fB...

@Brad, @Chris: Can you please add and correct anything I missed?

Thanks and Best Regards,
Ildikó

———
Ildikó Váncsa
Director of Community
OpenInfra Foundation
Hi Giuseppe, I took a look at the kernel exception that you included, and let me preface this, I did the assistance with some AI tooling so there's a chance that the answers could be wrong. It looks like you tripped a VFIO bug when passing the GPU through to QEMU. You have the nvidia drivers loaded, and they're bound to the GPU, not vfio-pci. VFIO cannot claim a device that is already owned by another driver. The other part of the problem is there is a gap in functionality in Caracal addressed in Epoxy: Enable VFIO devices with kernel variant drivers : Blueprints : OpenStack Compute (nova)<https://blueprints.launchpad.net/nova/+spec/enable-vfio-devices-with-kernel-variant-drivers> The Openstack Nova documentation in Caracal focuses on mbed-backed vGPUs Attaching virtual GPU devices to guests — nova 29.3.1.dev15 documentation<https://docs.openstack.org/nova/2024.1/admin/virtual-gpu.html>, Nova Caracal’s docs do not yet describe the vendor-specific VFIO cdev/iommufd flow (no “managed=no” toggle in PCI device spec, no sysfsdev handoff) Here are 3 AI assisted possible solutions if you want to attempt to make this work now in Caracal. We ARE planning to upgrade to Epoxy in the spring release. 3 Ways to make it work on Caracal You have three viable paths. I’ll list them in order of “least invasive” to “most change”. Option A — Prove the host/NVIDIA stack with a direct QEMU/virsh POC (no Nova), using the vendor-specific flow This validates kernel+driver+QEMU before we tackle Nova. 1. Enable VFs and assign a vGPU/MIG profile to a chosen VF (not the PF) per NVIDIA’s vendor framework: # enable SR-IOV VFs on the PF once nvidia-vgpu services are running /usr/lib/nvidia/sriov-manage -e 0000:03:00.0 # pick one VF, e.g. 0000:03:00.2, and see creatable types cat /sys/bus/pci/devices/0000:03:00.2/nvidia/creatable_vgpu_types # program the desired vGPU type (example only) echo <vgpu_type_id> > /sys/bus/pci/devices/0000:03:00.2/nvidia/current_vgpu_type Do not rebind the VF to vfio-pci. The VF must remain with nvidia/nvidia_vgpu_vfio. Then launch QEMU with the device-centric VFIO cdev path (libvirt or CLI). A commonly used form is to pass a vfio-pci device with sysfsdev= pointing at the VF (QEMU uses iommufd/cdev, not legacy groups): qemu-system-x86_64 \ ... \ -device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:03:00.2 \ ... This is the method people use successfully with the new NVIDIA framework on recent kernels/QEMU. It bypasses legacy group fds and matches how iommufd is supposed to work. (Background doc: QEMU’s VFIO iommufd device-centric mode.) [forum.proxmox.com]<https://forum.proxmox.com/threads/vgpu-with-nvidia-on-kernel-6-8.150840/>, [qemu.org]<https://www.qemu.org/docs/master/devel/vfio-iommufd.html> If that boot succeeds and nvidia-smi inside the guest sees the assigned profile after installing the NVIDIA vGPU guest driver, your host and NVIDIA pieces are good. (NVIDIA’s vGPU user guide explains guest driver requirements.) [docs.nvidia.com]<https://docs.nvidia.com/vgpu/latest/grid-vgpu-user-guide/index.html> If you prefer virsh: you can hand-craft a domain XML with a <hostdev> and managed='no' (so libvirt doesn’t touch host binding), but be aware that libvirt support for vendor-specific VFIO has been under active development; many admins validate first with raw QEMU args. 
Option B — Keep Caracal but avoid Nova's default "managed=yes" path (workaround)

Nova Caracal doesn't expose a config knob to mark a PCI device as managed=no, and that's exactly what the vendor-specific VFIO path needs. There is ongoing work in Nova to add support for kernel variant drivers and a managed flag for SR-IOV/VFIO devices, but that's targeted for 2025.1. Practically, in Caracal your choices are:

* Use a libvirt hook to inject the QEMU sysfsdev device (and keep Nova oblivious), or
* Patch the Nova libvirt driver templates in your StarlingX helm overrides to force managed=no and pass through the device without re-binding (advanced; fragile across updates).

The clean solution is to wait for (or backport) the Nova work adding the managed flag for the PCI device spec and vendor-specific VFIO handling (https://blueprints.launchpad.net/nova/+spec/enable-vfio-devices-with-kernel-variant-drivers). Otherwise, libvirt will unbind the device from nvidia and bind it to vfio-pci, which breaks NVIDIA's model and can retrigger the kernel oops you saw.

For context, other virtualization stacks that already incorporated the vendor-specific model instruct admins to: 1. enable the VF, 2. echo the profile into .../nvidia/current_vgpu_type, and 3. add the VF as a PCI device with managed='no'. That pattern avoids rebinding on the host and has been reported to work (https://github.com/OpenNebula/one/issues/6841).

Option C — Use mdev-backed vGPU (only if your H200 driver branch exposes mdev)

Historically, Nova supports vGPU via mdev types (devices.enabled_mdev_types, [mdev_*] device_addresses=...). If your driver branch still exposes mdev for MIG (some newer AIE/vGPU stacks de-emphasize mdev in favor of vendor-specific VFIO on KVM), then you can simply follow Nova's virtual GPU admin guide (https://docs.openstack.org/nova/latest/admin/virtual-gpu.html) and avoid SR-IOV VFs altogether. If your installed 580.xx stack is vendor-specific only on KVM, mdev will not be available; this is a known shift vendors have been making.

Brad
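For completeness, the mdev-based setup that Option C refers to is the one described in Nova's virtual GPU admin guide. A minimal sketch, assuming the driver branch actually exposes an mdev type (the type name nvidia-1053 below is purely illustrative, and on stx-openstack these options would be delivered through helm overrides rather than by editing nova.conf directly):

# Sketch only: mdev-backed vGPU per the Nova virtual GPU admin guide.
# Check what the host really exposes first; if this listing is empty,
# the installed driver branch does not offer mdev types at all.
ls /sys/class/mdev_bus/*/mdev_supported_types

# nova.conf options (illustrative type name; addresses taken from this thread):
# [devices]
# enabled_mdev_types = nvidia-1053
#
# [mdev_nvidia-1053]
# device_addresses = 0000:03:00.4,0000:03:00.5

# Flavor-side request for one vGPU resource:
openstack flavor set vgpu.small --property "resources:VGPU=1"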
On Nov 7, 2025, at 01:31, giuseppe.delgaudio@nttdata.com wrote:
Chris Friesen wrote:
We are exploring the use of NVIDIA GPUs, but from the context of making the GPU resources available within Kubernetes. Currently we're trying to get the NVIDIA GPU Operator for Kubernetes working, and we haven't done much testing of PCI passthrough to VMs.
It's interesting that the NVIDIA documentation you linked doesn't seem to mention the "Hopper" architecture at all.
Can you provide a source for the nvidia-vgpu-ubuntu-aie-580_580.82.0.deb package that you said you installed?
After installing the NVIDIA vGPU software were the NVIDIA-specific vfio drivers installed as per Virtual GPU Software User Guide - NVIDIA Docs <https://docs.nvidia.com/vgpu/latest/grid-vgpu-user-guide/index.html#verify-install-update-vgpu-ubuntu> ?
# lsmod | grep vfio nvidia_vgpu_vfio 27099 0 nvidia 12316924 1 nvidia_vgpu_vfio vfio_mdev 12841 0 mdev 20414 2 vfio_mdev,nvidia_vgpu_vfio vfio_iommu_type1 22342 0 vfio 32331 3 vfio_mdev,nvidia_vgpu_vfio,vfio_iommu_type1 #
Which version of the guest drivers were you using?
Regards, Chris
Hi,
Thank you for the reply, sorry fo the delayed response but i don't received any notification.
At the moment, we are completely focused on sharing vGPUs (specifically MIG) via Openstack on the STX 10 platform. As you mentioned, the Nvidia documentation does not include any reference to the Hopper architecture, but the procedure we followed to create VFs, etc., works correctly on Hopper.
We cannot provide the VGPU manager software because it is bound by a license. This software installs the manager and the GPU driver. We have also installed the latest driver nvidia-vgpu-ubuntu-aie-580_580.95 provided with the nvidia 580.95.02 driver. As mentioned above, this type of GPU on STX Kernel 6.6 uses Vendor Specific VFIO, so we cannot use MDEV to share cards between Openstack and STX bare metal, only VF.
On the system after the package installation we have the following kernel module installed:
sysadmin@controller-0:~$ lsmod | grep vfio nvidia_vgpu_vfio 126976 12 nvidia 14381056 4 nvidia_vgpu_vfio vfio_pci_core 90112 1 nvidia_vgpu_vfio mdev 20480 1 nvidia_vgpu_vfio vfio_iommu_type1 45056 0 vfio 61440 3 vfio_pci_core,nvidia_vgpu_vfio,vfio_iommu_type1 kvm 1347584 2 kvm_amd,nvidia_vgpu_vfio irqbypass 12288 3 vfio_pci_core,nvidia_vgpu_vfio,kvm
all the devices are attached with nvidia kernel module
The guest drivers on the VMs are associated with the VGPU manager and are provided in the same zip file as the NVIDIA software, but the VM fails to even start up. When OpenStack schedules the VM and starts the machine, the physical STX host crashes and reboots with a kernel panic.
On nova conf we add this configuration to pass the VF:
[pci] device_spec = {"address": "0000:03:00.2", "resource_class": "H200-vf", "managed": "no", "remote_managed": "true"} device_spec = {"address": "0000:03:00.3", "resource_class": "H200-vf", "managed": "no", "remote_managed": "true"} device_spec = {"address": "0000:03:00.4", "resource_class": "H200-vf", "managed": "no", "remote_managed": "true"} device_spec = {"address": "0000:03:00.5", "resource_class": "H200-vf", "managed": "no", "remote_managed": "true"} alias = {"name": "ah200-vf", "device_type":"type-VF", "resource_class": "H200-vf"}
After Openstack shedule the VM the bare-metal host have the following crash:
============================================================
[ 168.612066] EXT4-fs (rbd0): mounted filesystem 7fe62dc3-02b1-4fa2-8314-2043ff4c0fa5 r/w with ordered data mode. Quota mode: none. [ 168.612113] EXT4-fs (rbd1): mounted filesystem 76d79800-51f9-47db-b8cd-2a7b83c74ee1 r/w with ordered data mode. Quota mode: none. [ 430.648672] nvidia 0000:03:01.2: Enabling HDA controller [ 430.649641] nvidia 0000:03:01.2: Enabling HDA controller [ 430.649676] nvidia 0000:03:01.2: Runtime PM usage count underflow! [ 430.649818] ------------[ cut here ]------------ [ 430.649821] WARNING: CPU: 0 PID: 9 at drivers/vfio/group.c:688 vfio_device_set_group+0xc8/0x1d0 [vfio] [ 430.649833] Modules linked in: vfio_pci sctp_diag tcp_diag udp_diag raw_diag inet_diag unix_diag xt_CHECKSUM nbd rbd libceph dns_resolver nfnetlink_cttimeout ip6_tables xt_set ip6t_rpfilter ipt_rpfilter ip_set_hash_net ip_set_hash_ip ip_set veth xt_statistic wireguard libchacha20poly1305 chacha_x86_64 poly1305_x86_64 curve25519_x86_64 libcurve25519_generic libchacha vxlan gre openvswitch nf_conncount nf_conntrack_netlink xt_recent xt_MASQUERADE xt_mark xt_conntrack bnxt_en(OE) nft_chain_nat xt_comment xt_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_multiport binfmt_misc iscsi_target_mod target_core_mod pci_pf_stub nfsv3 nfs fscache netfs esp4 dm_crypt trusted asn1_encoder xt_addrtype nft_compat nf_tables nfnetlink br_netfilter bridge nfsd virtio_net net_failover failover auth_rpcgss nfs_acl lockd grace 8021q garp stp mrp llc xfs cls_u32 sch_sfq sch_htb nvidia_vgpu_vfio(OE) nvidia(OE) vfio_pci_core mdev vfio_iommu_type1 vfio sctp ip6_udp_tunnel udp_tunnel xprtrdma(O) svcrdma(O) rpcrdma(O) nvmet_rdma(O) [ 430.649866] nvme_rdma(O) ib_srp(O) ib_isert(O) ib_iser(O) rdma_rxe(O) rdma_ucm(O) rdma_cm(O) iw_cm(O) ib_ucm(O) ib_cm(O) intel_uncore_frequency_common drbd lru_cache libcrc32c fuse drm sunrpc bpf_preload efivarfs ip_tables overlay ext4 mbcache jbd2 dm_multipath dm_mod mlx5_ib(O) ib_uverbs(O) wmi_bmof kvm_amd kvm ib_core(O) mlx5_core(O) irqbypass mlxfw(O) crct10dif_pclmul mlxdevm(O) crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 mlx_compat(O) rapl nvme psample uas nvme_core tls t10_pi acpi_cpufreq usb_storage macsec crc64_rocksoft_generic pci_hyperv_intf ccp crc64_rocksoft mpt3sas crc64 wmi i2c_designware_platform i2c_designware_core ipmi_si ipmi_devintf ipmi_msghandler iavf(O) i40e(O) ice(O) [last unloaded: bnxt_en(OE)] [ 430.649899] CPU: 0 PID: 9 Comm: kworker/0:1 Kdump: loaded Tainted: G OE 6.6.0-1-amd64 #1 Debian 6.6.63-1.stx.99 [ 430.649902] Hardware name: Lenovo ThinkSystem SR675 V3/SB27B87354, BIOS QGE140H-8.21 05/14/2025 [ 430.649904] Workqueue: events work_for_cpu_fn [ 430.649909] RIP: 0010:vfio_device_set_group+0xc8/0x1d0 [vfio] [ 430.649914] Code: 8b 60 03 00 00 48 8d 42 d8 48 39 d1 75 0f eb 5f 48 8b 50 28 48 8d 42 d8 48 39 d1 74 52 4c 3b 28 75 ee 4c 89 f7 e8 a8 d4 49 c8 <0f> 0b 48 c7 c7 d8 ca ac c1 48 c7 c3 ea ff ff ff e8 93 d4 49 c8 4c [ 430.649916] RSP: 0018:ff4108f84028fd80 EFLAGS: 00010246 [ 430.649918] RAX: ff18105388a1b080 RBX: ff1810543e9b0000 RCX: ff1810543e9b0360 [ 430.649919] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ff1810543e9b0370 [ 430.649919] RBP: ff1810571c71e000 R08: ff18105573011b0e R09: 0000000000000000 [ 430.649920] R10: 0000000000000001 R11: 0000000000000000 R12: ff1810539209c600 [ 430.649920] R13: ff1810538d9a20c0 R14: ff1810543e9b0370 R15: ff4108f88850f990 [ 430.649921] FS: 0000000000000000(0000) GS:ff1810b14f200000(0000) knlGS:0000000000000000 [ 430.649922] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 430.649923] CR2: 
00007fb69bcb0000 CR3: 0000006091402002 CR4: 0000000000771ef0 [ 430.649924] PKRU: 55555554 [ 430.649924] Call Trace: [ 430.649927] <TASK> [ 430.649930] ? __warn+0x84/0x140 [ 430.649933] ? vfio_device_set_group+0xc8/0x1d0 [vfio] [ 430.649939] ? report_bug+0x198/0x1b0 [ 430.649943] ? handle_bug+0x53/0x90 [ 430.649946] ? exc_invalid_op+0x18/0x70 [ 430.649948] ? asm_exc_invalid_op+0x1a/0x20 [ 430.649951] ? vfio_device_set_group+0xc8/0x1d0 [vfio] [ 430.649955] vfio_register_group_dev+0x4e/0xd0 [vfio] [ 430.649961] vfio_pci_core_register_device+0x197/0x410 [vfio_pci_core] [ 430.649967] ? device_initialize+0xab/0x110 [ 430.649973] vfio_pci_probe+0x51/0x120 [vfio_pci] [ 430.649976] local_pci_probe+0x47/0xa0 [ 430.649980] work_for_cpu_fn+0x17/0x30 [ 430.649982] process_one_work+0x175/0x360 [ 430.649984] worker_thread+0x280/0x390 [ 430.649986] ? __pfx_worker_thread+0x10/0x10 [ 430.649987] kthread+0xdd/0x110 [ 430.649990] ? __pfx_kthread+0x10/0x10 [ 430.649991] ret_from_fork+0x31/0x50 [ 430.649994] ? __pfx_kthread+0x10/0x10 [ 430.649996] ret_from_fork_asm+0x1b/0x30 [ 430.650000] </TASK> [ 430.650000] ---[ end trace 0000000000000000 ]--- [ 430.650010] vfio-pci: probe of 0000:03:01.2 failed with error -22 [ 430.662135] tun: Universal TUN/TAP device driver, 1.6 [ 430.663054] tap5f9c2335-39: entered promiscuous mode [ 431.328345] BUG: kernel NULL pointer dereference, address: 0000000000000010 [ 431.328586] #PF: supervisor read access in kernel mode [ 431.328748] #PF: error_code(0x0000) - not-present page [ 431.328912] PGD 488b01067 P4D 0 [ 431.329016] Oops: 0000 [#1] PREEMPT SMP NOPTI [ 431.329153] CPU: 55 PID: 109169 Comm: qemu-system-x86 Kdump: loaded Tainted: G W OE 6.6.0-1-amd64 #1 Debian 6.6.63-1.stx.99 [ 431.329538] Hardware name: Lenovo ThinkSystem SR675 V3/SB27B87354, BIOS QGE140H-8.21 05/14/2025 [ 431.329811] RIP: 0010:vfio_df_open+0x37/0x110 [vfio] [ 431.329978] Code: 48 83 ec 08 48 8b 1f 8b 83 5c 03 00 00 85 c0 75 5d c7 83 5c 03 00 00 01 00 00 00 48 8b 2f 4c 8b 67 28 48 8b 45 00 48 8b 40 68 <48> 8b 78 10 e8 90 f2 8e c7 84 c0 0f 84 af 00 00 00 4d 85 e4 74 51 [ 431.330566] RSP: 0018:ff4108f8980efcf0 EFLAGS: 00010246 [ 431.330730] RAX: 0000000000000000 RBX: ff1810543e9b4800 RCX: 00000000000000d7 [ 431.330954] RDX: ffffffff893b78f0 RSI: ffffffffc14c3980 RDI: ff1810b2e0c3c000 [ 431.331178] RBP: ff1810543e9b4800 R08: 000000000000000c R09: ff1810b2e0c3c000 [ 431.331402] R10: 0000000000000001 R11: fefefefefefefeff R12: 0000000000000000 [ 431.331626] R13: 000000000000002b R14: ff1810b2e0c3c000 R15: ffffffffe0c3c000 [ 431.331850] FS: 00007f80e22c3e80(0000) GS:ff181110cb5c0000(0000) knlGS:0000000000000000 [ 431.332105] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 431.332286] CR2: 0000000000000010 CR3: 00000004717fa001 CR4: 0000000000771ee0 [ 431.332511] PKRU: 55555554 [ 431.332597] Call Trace: [ 431.332677] <TASK> [ 431.332744] ? __die+0x24/0x70 [ 431.332845] ? page_fault_oops+0x15b/0x460 [ 431.332973] ? terminate_walk+0xee/0x100 [ 431.341558] ? bsearch+0x57/0x90 [ 431.350127] ? exc_page_fault+0x69/0x150 [ 431.358608] ? asm_exc_page_fault+0x26/0x30 [ 431.366985] ? __symbol_put+0x70/0xa0 [ 431.375241] ? vfio_df_open+0x37/0x110 [vfio] [ 431.383395] vfio_group_fops_unl_ioctl+0x292/0x720 [vfio] [ 431.391539] __x64_sys_ioctl+0x8f/0xd0 [ 431.399556] do_syscall_64+0x58/0xb0 [ 431.407438] ? sched_clock+0x10/0x30 [ 431.415189] ? get_vtime_delta+0xf/0xb0 [ 431.422834] ? ct_kernel_exit.constprop.0+0x81/0xa0 [ 431.430412] ? __ct_user_enter+0x5e/0xd0 [ 431.437861] ? 
syscall_exit_to_user_mode+0x32/0x40 [ 431.445257] ? do_syscall_64+0x65/0xb0 [ 431.452534] ? __do_sys_newlstat+0x64/0x90 [ 431.459705] ? sched_clock+0x10/0x30 [ 431.466733] ? get_vtime_delta+0xf/0xb0 [ 431.473674] ? ct_kernel_exit.constprop.0+0x81/0xa0 [ 431.480554] ? __ct_user_enter+0x5e/0xd0 [ 431.487304] ? syscall_exit_to_user_mode+0x32/0x40 [ 431.494015] ? do_syscall_64+0x65/0xb0 [ 431.500672] ? __ct_user_enter+0x5e/0xd0 [ 431.507176] entry_SYSCALL_64_after_hwframe+0x78/0xe2 [ 431.513580] RIP: 0033:0x7f80e2cf5277 [ 431.519962] Code: 00 00 00 48 8b 05 19 cc 0d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d e9 cb 0d 00 f7 d8 64 89 01 48 [ 431.533533] RSP: 002b:00007ffd09bc3398 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 431.540313] RAX: ffffffffffffffda RBX: 0000559d6081bcb0 RCX: 00007f80e2cf5277 [ 431.546982] RDX: 0000559d6081d690 RSI: 0000000000003b6a RDI: 0000000000000028 [ 431.553537] RBP: 0000559d60822800 R08: 0000559d5f19cbd0 R09: 00007ffd09bc12f9 [ 431.559970] R10: 000000000000006f R11: 0000000000000246 R12: 0000559d6081b280 [ 431.566256] R13: 0000559d6081d690 R14: 00007ffd09bc45c0 R15: 0000559d6081d690 [ 431.572378] </TASK> [ 431.578322] Modules linked in: vhost_net vhost vhost_iotlb tap tun vfio_pci sctp_diag tcp_diag udp_diag raw_diag inet_diag unix_diag xt_CHECKSUM nbd rbd libceph dns_resolver nfnetlink_cttimeout ip6_tables xt_set ip6t_rpfilter ipt_rpfilter ip_set_hash_net ip_set_hash_ip ip_set veth xt_statistic wireguard libchacha20poly1305 chacha_x86_64 poly1305_x86_64 curve25519_x86_64 libcurve25519_generic libchacha vxlan gre openvswitch nf_conncount nf_conntrack_netlink xt_recent xt_MASQUERADE xt_mark xt_conntrack bnxt_en(OE) nft_chain_nat xt_comment xt_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_multiport binfmt_misc iscsi_target_mod target_core_mod pci_pf_stub nfsv3 nfs fscache netfs esp4 dm_crypt trusted asn1_encoder xt_addrtype nft_compat nf_tables nfnetlink br_netfilter bridge nfsd virtio_net net_failover failover auth_rpcgss nfs_acl lockd grace 8021q garp stp mrp llc xfs cls_u32 sch_sfq sch_htb nvidia_vgpu_vfio(OE) nvidia(OE) vfio_pci_core mdev vfio_iommu_type1 vfio sctp ip6_udp_tunnel udp_tunnel xprtrdma(O) [ 431.578381] svcrdma(O) rpcrdma(O) nvmet_rdma(O) nvme_rdma(O) ib_srp(O) ib_isert(O) ib_iser(O) rdma_rxe(O) rdma_ucm(O) rdma_cm(O) iw_cm(O) ib_ucm(O) ib_cm(O) intel_uncore_frequency_common drbd lru_cache libcrc32c fuse drm sunrpc bpf_preload efivarfs ip_tables overlay ext4 mbcache jbd2 dm_multipath dm_mod mlx5_ib(O) ib_uverbs(O) wmi_bmof kvm_amd kvm ib_core(O) mlx5_core(O) irqbypass mlxfw(O) crct10dif_pclmul mlxdevm(O) crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 mlx_compat(O) rapl nvme psample uas nvme_core tls t10_pi acpi_cpufreq usb_storage macsec crc64_rocksoft_generic pci_hyperv_intf ccp crc64_rocksoft mpt3sas crc64 wmi i2c_designware_platform i2c_designware_core ipmi_si ipmi_devintf ipmi_msghandler iavf(O) i40e(O) ice(O) [last unloaded: bnxt_en(OE)] [ 431.704871] CR2: 0000000000000010
===========================================================
Do you have any suggestions on OpenStack or STX to make everything work? Or do you have any ideas about this error?
Notes: e also tested this type of passthrough on a standard system (NO STX) with Ubuntu 24, Kernel 6.8, and KVM 10, and everything works correctly.
P.S. We also tried STX 11 RC1 with kernel 6.12 and hit the same error.
Thanks for the help
Regards,
Hi Brad, all,

Thanks for your feedback. We also did some additional checks on our side and were able to pinpoint the root cause. Unfortunately, the problem comes down to limitations in the current STX/Caracal stack: the components in use cannot support the vendor-specific VFIO flow required by the NVIDIA vGPU framework.

What we found:

1. Libvirt/QEMU versions. The vendor-specific VFIO handling is only fully supported starting from libvirt 10 (and the matching QEMU versions). Caracal ships older builds, and updating them would require changes that are too invasive for the STX base OS and its packaging. (Ref: https://libvirt.org/news.html)

2. Nova "managed=no" support. Nova only exposes the managed=no flag for PCI devices from version 30 onwards. Since Caracal doesn't include this, Nova always forces the default flow that rebinds devices, which breaks the NVIDIA driver model. Backporting this would mean touching Nova templates and Helm charts at a level we can't realistically maintain.

3. No MDEV support on our platform. NVIDIA enables MDEV only on specific kernel/distro combinations, mainly Ubuntu 20.04 and 22.04, because their vGPU driver depends on certain kernel headers. The STX kernel doesn't provide those headers, so MDEV simply isn't available as an option here.

Given these constraints, getting the vendor-specific VFIO workflow to operate correctly on Caracal would require changes that go too deep into STX (kernel, hypervisor stack, Nova).

Thanks all,

Giuseppe Del Gaudio - Cloud Engineer | Cloud & Digital Architecture
Via Bastioni 14/ S.Michele 10, Salerno, Italia
Email: giuseppe.delgaudio@nttdata.com
Tel: +39 3346368189
Learn more at www.nttdata.com/it

________________________________
From: Borgald, Brad <Brad.Borgald@windriver.com>
Sent: Wednesday, 26 November 2025, 15:46
To: Ildiko Vancsa <ildiko@openinfra.dev>; GIUSEPPE DEL GAUDIO <Giuseppe.DelGaudio@emeal.nttdata.com>
Cc: starlingx-discuss@lists.starlingx.io; Friesen, Chris <Chris.Friesen@windriver.com>
Subject: Re: Issue with sharing Nvidia MIG vGPU via SRIOV on Openstack

Hi Giuseppe,

I took a look at the kernel exception that you included. Let me preface this: I did the analysis with some AI tooling, so there is a chance the answers could be wrong. It looks like you tripped a VFIO bug when passing the GPU through to QEMU. You have the NVIDIA drivers loaded and bound to the GPU, not vfio-pci, and VFIO cannot claim a device that is already owned by another driver.
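A quick way to double-check which driver currently owns the VF (using the 0000:03:00.2 address from your device_spec as an example) is:

   lspci -nnk -s 0000:03:00.2
   # the driver symlink in sysfs points at whatever driver is currently bound
   readlink /sys/bus/pci/devices/0000:03:00.2/driver

If that reports nvidia or nvidia_vgpu_vfio rather than vfio-pci, then any attempt by the management stack to rebind the device to plain vfio-pci would be consistent with the conflict described above.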
The other part of the problem is a gap in functionality in Caracal that is addressed in Epoxy: "Enable VFIO devices with kernel variant drivers" (https://blueprints.launchpad.net/nova/+spec/enable-vfio-devices-with-kernel-variant-drivers). The OpenStack Nova documentation for Caracal focuses on mdev-backed vGPUs ("Attaching virtual GPU devices to guests", https://docs.openstack.org/nova/2024.1/admin/virtual-gpu.html); it does not yet describe the vendor-specific VFIO cdev/iommufd flow (no "managed=no" toggle in the PCI device spec, no sysfsdev handoff).

Here are three AI-assisted possible solutions if you want to attempt to make this work now in Caracal. We ARE planning to upgrade to Epoxy in the spring release.

Three ways to make it work on Caracal, listed in order from least invasive to most change:

Option A: Prove the host/NVIDIA stack with a direct QEMU/virsh POC (no Nova), using the vendor-specific flow.

This validates kernel + driver + QEMU before we tackle Nova.

1. Enable VFs and assign a vGPU/MIG profile to a chosen VF (not the PF) per NVIDIA's vendor framework:

   # enable SR-IOV VFs on the PF once nvidia-vgpu services are running
   /usr/lib/nvidia/sriov-manage -e 0000:03:00.0

   # pick one VF, e.g. 0000:03:00.2, and see creatable types
   cat /sys/bus/pci/devices/0000:03:00.2/nvidia/creatable_vgpu_types

   # program the desired vGPU type (example only)
   echo <vgpu_type_id> > /sys/bus/pci/devices/0000:03:00.2/nvidia/current_vgpu_type

   Do not rebind the VF to vfio-pci. The VF must remain with nvidia/nvidia_vgpu_vfio.

2. Launch QEMU with the device-centric VFIO cdev path (libvirt or CLI). A commonly used form is to pass a vfio-pci device with sysfsdev= pointing at the VF (QEMU uses iommufd/cdev, not legacy groups):

   qemu-system-x86_64 \
     ... \
     -device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:03:00.2 \
     ...

   This is the method people use successfully with the new NVIDIA framework on recent kernels/QEMU. It bypasses legacy group fds and matches how iommufd is supposed to work. (Background: QEMU's VFIO iommufd device-centric mode.)
   https://forum.proxmox.com/threads/vgpu-with-nvidia-on-kernel-6-8.150840/
   https://www.qemu.org/docs/master/devel/vfio-iommufd.html

If that boot succeeds and nvidia-smi inside the guest sees the assigned profile after installing the NVIDIA vGPU guest driver, your host and NVIDIA pieces are good. (NVIDIA's vGPU user guide explains the guest driver requirements: https://docs.nvidia.com/vgpu/latest/grid-vgpu-user-guide/index.html)

If you prefer virsh: you can hand-craft a domain XML with a <hostdev> and managed='no' (so libvirt doesn't touch host binding), but be aware that libvirt support for vendor-specific VFIO has been under active development; many admins validate first with raw QEMU args. (https://github.com/OpenNebula/one/issues/6841)
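A minimal sketch of such a <hostdev> entry, assuming the same 0000:03:00.2 VF (the rest of the domain XML is omitted):

   <hostdev mode='subsystem' type='pci' managed='no'>
     <source>
       <address domain='0x0000' bus='0x03' slot='0x00' function='0x2'/>
     </source>
   </hostdev>

With managed='no', libvirt expects the device to already be bound to a suitable driver and will not detach it from the host driver or rebind it to vfio-pci itself.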
Option B: Keep Caracal but avoid Nova's default "managed=yes" path (workaround).

Nova in Caracal doesn't expose a config knob to mark a PCI device as managed=no, and that's exactly what the vendor-specific VFIO path needs. There is ongoing work in Nova to add support for kernel variant drivers and a managed flag for SR-IOV/VFIO devices, but that's targeted for 2025.1. Practically, in Caracal your choices are:

* Use a libvirt hook to inject the QEMU sysfsdev device (and keep Nova oblivious), or
* Patch the Nova libvirt driver templates in your StarlingX helm overrides to force managed=no and pass through the device without rebinding (advanced; fragile across updates).

The clean solution is to wait for (or backport) the Nova work adding the managed flag for the PCI device spec and vendor-specific VFIO handling. Otherwise, libvirt will unbind the device from nvidia and bind it to vfio-pci, which breaks NVIDIA's model and can retrigger the kernel oops you saw.
https://blueprints.launchpad.net/nova/+spec/enable-vfio-devices-with-kernel-variant-drivers

For context, other virtualization stacks that already incorporate the vendor-specific model instruct admins to: 1. enable the VF, 2. echo the profile into .../nvidia/current_vgpu_type, and 3. add the VF as a PCI device with managed='no'. That pattern avoids rebinding on the host and has been reported to work.
https://github.com/OpenNebula/one/issues/6841

Option C: Use mdev-backed vGPU (only if your H200 driver branch exposes mdev).

Historically, Nova supports vGPU via mdev types (devices.enabled_mdev_types and the per-type [mdev_*] device_addresses options). If your driver branch still exposes mdev for MIG (some newer AIE/vGPU stacks de-emphasize mdev in favor of vendor-specific VFIO on KVM), then you can simply follow Nova's virtual GPU admin guide and avoid SR-IOV VFs altogether. If your installed 580.xx stack is vendor-specific only on KVM, mdev will not be available; this is a known shift vendors have been making.
https://docs.openstack.org/nova/latest/admin/virtual-gpu.html
https://github.com/OpenNebula/one/issues/6841
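If mdev does turn out to be available on your driver branch, the Nova side of Option C is just configuration. A minimal sketch, assuming a hypothetical mdev type name nvidia-699 and the 0000:03:00.2 address:

   [devices]
   enabled_mdev_types = nvidia-699

   [mdev_nvidia-699]
   device_addresses = 0000:03:00.2

The real type names come from the host's mdev_supported_types sysfs directories, and a flavor then requests the vGPU with the "resources:VGPU=1" property per the Nova virtual GPU guide.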
Brad

________________________________
From: Ildiko Vancsa <ildiko@openinfra.dev>
Sent: Tuesday, November 18, 2025 9:46 AM
To: giuseppe.delgaudio@nttdata.com
Cc: starlingx-discuss@lists.starlingx.io; Borgald, Brad <Brad.Borgald@windriver.com>; Friesen, Chris <Chris.Friesen@windriver.com>
Subject: Re: Issue with sharing Nvidia MIG vGPU via SRIOV on Openstack

Hi Giuseppe,

The community briefly touched on the challenge you brought up in this thread on the StarlingX TSC & Community call last week. Based on the conversation, more specific GPU support in the platform is still ahead for the StarlingX community. The challenge you're facing seems to be addressed in a later OpenStack version than the one integrated in the platform version you're using; there might be a way to port that back. Were you able to make progress with troubleshooting on your end in the meantime?

Plans for upcoming StarlingX versions for GPUs include a few additions from the CNCF ecosystem, including:
- the NVIDIA DRA driver for GPUs in Kubernetes: https://github.com/NVIDIA/k8s-dra-driver-gpu
- https://docs.google.com/document/d/1BNWqgx_SmZDi-va_V31v3DnuVwYnF2EmN7D-O_fB6Oo/edit?tab=t.0#heading=h.bxuci8gx6hna

@Brad, @Chris: Can you please add and correct anything I missed?

Thanks and Best Regards,
Ildikó

Ildikó Váncsa
Director of Community
OpenInfra Foundation
On Nov 7, 2025, at 01:31, giuseppe.delgaudio@nttdata.com wrote:
Chris Friesen wrote:
We are exploring the use of NVIDIA GPUs, but from the context of making the GPU resources available within Kubernetes. Currently we're trying to get the NVIDIA GPU Operator for Kubernetes working, and we haven't done much testing of PCI passthrough to VMs.
It's interesting that the NVIDIA documentation you linked doesn't seem to mention the "Hopper" architecture at all.
Can you provide a source for the nvidia-vgpu-ubuntu-aie-580_580.82.0.deb package that you said you installed?
After installing the NVIDIA vGPU software were the NVIDIA-specific vfio drivers installed as per the Virtual GPU Software User Guide (https://docs.nvidia.com/vgpu/latest/grid-vgpu-user-guide/index.html#verify-install-update-vgpu-ubuntu) ?
# lsmod | grep vfio
nvidia_vgpu_vfio       27099  0
nvidia              12316924  1 nvidia_vgpu_vfio
vfio_mdev              12841  0
mdev                   20414  2 vfio_mdev,nvidia_vgpu_vfio
vfio_iommu_type1       22342  0
vfio                   32331  3 vfio_mdev,nvidia_vgpu_vfio,vfio_iommu_type1
#
Which version of the guest drivers were you using?
Regards, Chris
Hi,
Thank you for the reply, and sorry for the delayed response; I didn't receive any notification.
At the moment we are completely focused on sharing vGPUs (specifically MIG) via OpenStack on the STX 10 platform. As you mentioned, the NVIDIA documentation does not include any reference to the Hopper architecture, but the procedure we followed to create the VFs and so on works correctly on Hopper.
We cannot provide the vGPU manager software because it is bound to a license; the package installs both the manager and the GPU driver. We have also installed the latest driver, nvidia-vgpu-ubuntu-aie-580_580.95, provided with the NVIDIA 580.95.02 driver. As mentioned above, this type of GPU on the STX 6.6 kernel uses vendor-specific VFIO, so we cannot use MDEV to share the cards between OpenStack and STX bare metal, only VFs.
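One way to confirm that no mdev types are exposed on this kernel and driver combination, assuming the PF at 0000:03:00.0 and the VF addresses used earlier in this thread, is to look for the standard mdev sysfs entries:

# any device that registered mdev types shows up here
ls /sys/class/mdev_bus/
# check a specific device, e.g. the PF or one of its VFs
ls /sys/bus/pci/devices/0000:03:00.0/mdev_supported_types
ls /sys/bus/pci/devices/0000:03:00.2/mdev_supported_types

If none of these paths exist for the GPU, the installed vGPU manager is operating in vendor-specific VFIO mode only, which matches what we describe above.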
After the package installation, the following kernel modules are loaded on the system:
sysadmin@controller-0:~$ lsmod | grep vfio
nvidia_vgpu_vfio      126976  12
nvidia              14381056  4 nvidia_vgpu_vfio
vfio_pci_core          90112  1 nvidia_vgpu_vfio
mdev                   20480  1 nvidia_vgpu_vfio
vfio_iommu_type1       45056  0
vfio                   61440  3 vfio_pci_core,nvidia_vgpu_vfio,vfio_iommu_type1
kvm                  1347584  2 kvm_amd,nvidia_vgpu_vfio
irqbypass              12288  3 vfio_pci_core,nvidia_vgpu_vfio,kvm
All the devices are bound to the nvidia kernel module.
The guest drivers for the VMs match the vGPU manager and are provided in the same zip file as the NVIDIA software, but the VM fails to even start: when OpenStack schedules the VM and starts the machine, the physical STX host crashes and reboots with a kernel panic.
In nova.conf we added this configuration to pass through the VFs:
[pci]
device_spec = {"address": "0000:03:00.2", "resource_class": "H200-vf", "managed": "no", "remote_managed": "true"}
device_spec = {"address": "0000:03:00.3", "resource_class": "H200-vf", "managed": "no", "remote_managed": "true"}
device_spec = {"address": "0000:03:00.4", "resource_class": "H200-vf", "managed": "no", "remote_managed": "true"}
device_spec = {"address": "0000:03:00.5", "resource_class": "H200-vf", "managed": "no", "remote_managed": "true"}
alias = {"name": "ah200-vf", "device_type": "type-VF", "resource_class": "H200-vf"}
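For reference, a flavor would then request one of these VFs through the alias defined above; the flavor name and sizing here are only an example:

openstack flavor create --vcpus 4 --ram 8192 --disk 40 gpu-vf-test
openstack flavor set gpu-vf-test --property "pci_passthrough:alias"="ah200-vf:1"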
After OpenStack schedules the VM, the bare-metal host crashes with the following trace:
============================================================
[ 168.612066] EXT4-fs (rbd0): mounted filesystem 7fe62dc3-02b1-4fa2-8314-2043ff4c0fa5 r/w with ordered data mode. Quota mode: none. [ 168.612113] EXT4-fs (rbd1): mounted filesystem 76d79800-51f9-47db-b8cd-2a7b83c74ee1 r/w with ordered data mode. Quota mode: none. [ 430.648672] nvidia 0000:03:01.2: Enabling HDA controller [ 430.649641] nvidia 0000:03:01.2: Enabling HDA controller [ 430.649676] nvidia 0000:03:01.2: Runtime PM usage count underflow! [ 430.649818] ------------[ cut here ]------------ [ 430.649821] WARNING: CPU: 0 PID: 9 at drivers/vfio/group.c:688 vfio_device_set_group+0xc8/0x1d0 [vfio] [ 430.649833] Modules linked in: vfio_pci sctp_diag tcp_diag udp_diag raw_diag inet_diag unix_diag xt_CHECKSUM nbd rbd libceph dns_resolver nfnetlink_cttimeout ip6_tables xt_set ip6t_rpfilter ipt_rpfilter ip_set_hash_net ip_set_hash_ip ip_set veth xt_statistic wireguard libchacha20poly1305 chacha_x86_64 poly1305_x86_64 curve25519_x86_64 libcurve25519_generic libchacha vxlan gre openvswitch nf_conncount nf_conntrack_netlink xt_recent xt_MASQUERADE xt_mark xt_conntrack bnxt_en(OE) nft_chain_nat xt_comment xt_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_multiport binfmt_misc iscsi_target_mod target_core_mod pci_pf_stub nfsv3 nfs fscache netfs esp4 dm_crypt trusted asn1_encoder xt_addrtype nft_compat nf_tables nfnetlink br_netfilter bridge nfsd virtio_net net_failover failover auth_rpcgss nfs_acl lockd grace 8021q garp stp mrp llc xfs cls_u32 sch_sfq sch_htb nvidia_vgpu_vfio(OE) nvidia(OE) vfio_pci_core mdev vfio_iommu_type1 vfio sctp ip6_udp_tunnel udp_tunnel xprtrdma(O) svcrdma(O) rpcrdma(O) nvmet_rdma(O) [ 430.649866] nvme_rdma(O) ib_srp(O) ib_isert(O) ib_iser(O) rdma_rxe(O) rdma_ucm(O) rdma_cm(O) iw_cm(O) ib_ucm(O) ib_cm(O) intel_uncore_frequency_common drbd lru_cache libcrc32c fuse drm sunrpc bpf_preload efivarfs ip_tables overlay ext4 mbcache jbd2 dm_multipath dm_mod mlx5_ib(O) ib_uverbs(O) wmi_bmof kvm_amd kvm ib_core(O) mlx5_core(O) irqbypass mlxfw(O) crct10dif_pclmul mlxdevm(O) crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 mlx_compat(O) rapl nvme psample uas nvme_core tls t10_pi acpi_cpufreq usb_storage macsec crc64_rocksoft_generic pci_hyperv_intf ccp crc64_rocksoft mpt3sas crc64 wmi i2c_designware_platform i2c_designware_core ipmi_si ipmi_devintf ipmi_msghandler iavf(O) i40e(O) ice(O) [last unloaded: bnxt_en(OE)] [ 430.649899] CPU: 0 PID: 9 Comm: kworker/0:1 Kdump: loaded Tainted: G OE 6.6.0-1-amd64 #1 Debian 6.6.63-1.stx.99 [ 430.649902] Hardware name: Lenovo ThinkSystem SR675 V3/SB27B87354, BIOS QGE140H-8.21 05/14/2025 [ 430.649904] Workqueue: events work_for_cpu_fn [ 430.649909] RIP: 0010:vfio_device_set_group+0xc8/0x1d0 [vfio] [ 430.649914] Code: 8b 60 03 00 00 48 8d 42 d8 48 39 d1 75 0f eb 5f 48 8b 50 28 48 8d 42 d8 48 39 d1 74 52 4c 3b 28 75 ee 4c 89 f7 e8 a8 d4 49 c8 <0f> 0b 48 c7 c7 d8 ca ac c1 48 c7 c3 ea ff ff ff e8 93 d4 49 c8 4c [ 430.649916] RSP: 0018:ff4108f84028fd80 EFLAGS: 00010246 [ 430.649918] RAX: ff18105388a1b080 RBX: ff1810543e9b0000 RCX: ff1810543e9b0360 [ 430.649919] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ff1810543e9b0370 [ 430.649919] RBP: ff1810571c71e000 R08: ff18105573011b0e R09: 0000000000000000 [ 430.649920] R10: 0000000000000001 R11: 0000000000000000 R12: ff1810539209c600 [ 430.649920] R13: ff1810538d9a20c0 R14: ff1810543e9b0370 R15: ff4108f88850f990 [ 430.649921] FS: 0000000000000000(0000) GS:ff1810b14f200000(0000) knlGS:0000000000000000 [ 430.649922] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 430.649923] CR2: 
00007fb69bcb0000 CR3: 0000006091402002 CR4: 0000000000771ef0 [ 430.649924] PKRU: 55555554 [ 430.649924] Call Trace: [ 430.649927] <TASK> [ 430.649930] ? __warn+0x84/0x140 [ 430.649933] ? vfio_device_set_group+0xc8/0x1d0 [vfio] [ 430.649939] ? report_bug+0x198/0x1b0 [ 430.649943] ? handle_bug+0x53/0x90 [ 430.649946] ? exc_invalid_op+0x18/0x70 [ 430.649948] ? asm_exc_invalid_op+0x1a/0x20 [ 430.649951] ? vfio_device_set_group+0xc8/0x1d0 [vfio] [ 430.649955] vfio_register_group_dev+0x4e/0xd0 [vfio] [ 430.649961] vfio_pci_core_register_device+0x197/0x410 [vfio_pci_core] [ 430.649967] ? device_initialize+0xab/0x110 [ 430.649973] vfio_pci_probe+0x51/0x120 [vfio_pci] [ 430.649976] local_pci_probe+0x47/0xa0 [ 430.649980] work_for_cpu_fn+0x17/0x30 [ 430.649982] process_one_work+0x175/0x360 [ 430.649984] worker_thread+0x280/0x390 [ 430.649986] ? __pfx_worker_thread+0x10/0x10 [ 430.649987] kthread+0xdd/0x110 [ 430.649990] ? __pfx_kthread+0x10/0x10 [ 430.649991] ret_from_fork+0x31/0x50 [ 430.649994] ? __pfx_kthread+0x10/0x10 [ 430.649996] ret_from_fork_asm+0x1b/0x30 [ 430.650000] </TASK> [ 430.650000] ---[ end trace 0000000000000000 ]--- [ 430.650010] vfio-pci: probe of 0000:03:01.2 failed with error -22 [ 430.662135] tun: Universal TUN/TAP device driver, 1.6 [ 430.663054] tap5f9c2335-39: entered promiscuous mode [ 431.328345] BUG: kernel NULL pointer dereference, address: 0000000000000010 [ 431.328586] #PF: supervisor read access in kernel mode [ 431.328748] #PF: error_code(0x0000) - not-present page [ 431.328912] PGD 488b01067 P4D 0 [ 431.329016] Oops: 0000 [#1] PREEMPT SMP NOPTI [ 431.329153] CPU: 55 PID: 109169 Comm: qemu-system-x86 Kdump: loaded Tainted: G W OE 6.6.0-1-amd64 #1 Debian 6.6.63-1.stx.99 [ 431.329538] Hardware name: Lenovo ThinkSystem SR675 V3/SB27B87354, BIOS QGE140H-8.21 05/14/2025 [ 431.329811] RIP: 0010:vfio_df_open+0x37/0x110 [vfio] [ 431.329978] Code: 48 83 ec 08 48 8b 1f 8b 83 5c 03 00 00 85 c0 75 5d c7 83 5c 03 00 00 01 00 00 00 48 8b 2f 4c 8b 67 28 48 8b 45 00 48 8b 40 68 <48> 8b 78 10 e8 90 f2 8e c7 84 c0 0f 84 af 00 00 00 4d 85 e4 74 51 [ 431.330566] RSP: 0018:ff4108f8980efcf0 EFLAGS: 00010246 [ 431.330730] RAX: 0000000000000000 RBX: ff1810543e9b4800 RCX: 00000000000000d7 [ 431.330954] RDX: ffffffff893b78f0 RSI: ffffffffc14c3980 RDI: ff1810b2e0c3c000 [ 431.331178] RBP: ff1810543e9b4800 R08: 000000000000000c R09: ff1810b2e0c3c000 [ 431.331402] R10: 0000000000000001 R11: fefefefefefefeff R12: 0000000000000000 [ 431.331626] R13: 000000000000002b R14: ff1810b2e0c3c000 R15: ffffffffe0c3c000 [ 431.331850] FS: 00007f80e22c3e80(0000) GS:ff181110cb5c0000(0000) knlGS:0000000000000000 [ 431.332105] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 431.332286] CR2: 0000000000000010 CR3: 00000004717fa001 CR4: 0000000000771ee0 [ 431.332511] PKRU: 55555554 [ 431.332597] Call Trace: [ 431.332677] <TASK> [ 431.332744] ? __die+0x24/0x70 [ 431.332845] ? page_fault_oops+0x15b/0x460 [ 431.332973] ? terminate_walk+0xee/0x100 [ 431.341558] ? bsearch+0x57/0x90 [ 431.350127] ? exc_page_fault+0x69/0x150 [ 431.358608] ? asm_exc_page_fault+0x26/0x30 [ 431.366985] ? __symbol_put+0x70/0xa0 [ 431.375241] ? vfio_df_open+0x37/0x110 [vfio] [ 431.383395] vfio_group_fops_unl_ioctl+0x292/0x720 [vfio] [ 431.391539] __x64_sys_ioctl+0x8f/0xd0 [ 431.399556] do_syscall_64+0x58/0xb0 [ 431.407438] ? sched_clock+0x10/0x30 [ 431.415189] ? get_vtime_delta+0xf/0xb0 [ 431.422834] ? ct_kernel_exit.constprop.0+0x81/0xa0 [ 431.430412] ? __ct_user_enter+0x5e/0xd0 [ 431.437861] ? 
syscall_exit_to_user_mode+0x32/0x40 [ 431.445257] ? do_syscall_64+0x65/0xb0 [ 431.452534] ? __do_sys_newlstat+0x64/0x90 [ 431.459705] ? sched_clock+0x10/0x30 [ 431.466733] ? get_vtime_delta+0xf/0xb0 [ 431.473674] ? ct_kernel_exit.constprop.0+0x81/0xa0 [ 431.480554] ? __ct_user_enter+0x5e/0xd0 [ 431.487304] ? syscall_exit_to_user_mode+0x32/0x40 [ 431.494015] ? do_syscall_64+0x65/0xb0 [ 431.500672] ? __ct_user_enter+0x5e/0xd0 [ 431.507176] entry_SYSCALL_64_after_hwframe+0x78/0xe2 [ 431.513580] RIP: 0033:0x7f80e2cf5277 [ 431.519962] Code: 00 00 00 48 8b 05 19 cc 0d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d e9 cb 0d 00 f7 d8 64 89 01 48 [ 431.533533] RSP: 002b:00007ffd09bc3398 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 431.540313] RAX: ffffffffffffffda RBX: 0000559d6081bcb0 RCX: 00007f80e2cf5277 [ 431.546982] RDX: 0000559d6081d690 RSI: 0000000000003b6a RDI: 0000000000000028 [ 431.553537] RBP: 0000559d60822800 R08: 0000559d5f19cbd0 R09: 00007ffd09bc12f9 [ 431.559970] R10: 000000000000006f R11: 0000000000000246 R12: 0000559d6081b280 [ 431.566256] R13: 0000559d6081d690 R14: 00007ffd09bc45c0 R15: 0000559d6081d690 [ 431.572378] </TASK> [ 431.578322] Modules linked in: vhost_net vhost vhost_iotlb tap tun vfio_pci sctp_diag tcp_diag udp_diag raw_diag inet_diag unix_diag xt_CHECKSUM nbd rbd libceph dns_resolver nfnetlink_cttimeout ip6_tables xt_set ip6t_rpfilter ipt_rpfilter ip_set_hash_net ip_set_hash_ip ip_set veth xt_statistic wireguard libchacha20poly1305 chacha_x86_64 poly1305_x86_64 curve25519_x86_64 libcurve25519_generic libchacha vxlan gre openvswitch nf_conncount nf_conntrack_netlink xt_recent xt_MASQUERADE xt_mark xt_conntrack bnxt_en(OE) nft_chain_nat xt_comment xt_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_multiport binfmt_misc iscsi_target_mod target_core_mod pci_pf_stub nfsv3 nfs fscache netfs esp4 dm_crypt trusted asn1_encoder xt_addrtype nft_compat nf_tables nfnetlink br_netfilter bridge nfsd virtio_net net_failover failover auth_rpcgss nfs_acl lockd grace 8021q garp stp mrp llc xfs cls_u32 sch_sfq sch_htb nvidia_vgpu_vfio(OE) nvidia(OE) vfio_pci_core mdev vfio_iommu_type1 vfio sctp ip6_udp_tunnel udp_tunnel xprtrdma(O) [ 431.578381] svcrdma(O) rpcrdma(O) nvmet_rdma(O) nvme_rdma(O) ib_srp(O) ib_isert(O) ib_iser(O) rdma_rxe(O) rdma_ucm(O) rdma_cm(O) iw_cm(O) ib_ucm(O) ib_cm(O) intel_uncore_frequency_common drbd lru_cache libcrc32c fuse drm sunrpc bpf_preload efivarfs ip_tables overlay ext4 mbcache jbd2 dm_multipath dm_mod mlx5_ib(O) ib_uverbs(O) wmi_bmof kvm_amd kvm ib_core(O) mlx5_core(O) irqbypass mlxfw(O) crct10dif_pclmul mlxdevm(O) crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 mlx_compat(O) rapl nvme psample uas nvme_core tls t10_pi acpi_cpufreq usb_storage macsec crc64_rocksoft_generic pci_hyperv_intf ccp crc64_rocksoft mpt3sas crc64 wmi i2c_designware_platform i2c_designware_core ipmi_si ipmi_devintf ipmi_msghandler iavf(O) i40e(O) ice(O) [last unloaded: bnxt_en(OE)] [ 431.704871] CR2: 0000000000000010
===========================================================
Do you have any suggestions on OpenStack or STX to make everything work? Or do you have any ideas about this error?
Note: we also tested this type of passthrough on a standard system (not STX) with Ubuntu 24, kernel 6.8, and KVM 10, and everything works correctly.
P.S. We also tried STX 11 RC1 with kernel 6.12 and hit the same error.
Thanks for the help
Regards,
participants (5)
- Borgald, Brad
- Chris Friesen
- GIUSEPPE DEL GAUDIO
- giuseppe.delgaudio@nttdata.com
- Ildiko Vancsa