Enabling Nested CPU Virtualization with NVIDIA GPU Passthrough on VMware ESXi (GUI Bypass Method)

Step-by-step guide to bypass ESXi Web UI restrictions and enable both nested hardware-assisted virtualization and PCI GPU passthrough for AI, Deep Learning, or Docker workloads.


I would like to thank Kıvanç Kayıran for his help with this guide.

Overview

When setting up AI, Deep Learning, or Docker workloads on VMware ESXi, you often require both Hardware-Assisted CPU Virtualization (Nested Virtualization) and direct access to an enterprise GPU (e.g., NVIDIA A40) via PCI Passthrough.

By default, the ESXi Web UI restricts enabling both features simultaneously. If you try to save the VM settings with both enabled, ESXi throws the following error:

"Failed to reconfigure virtual machine. Nested Hardware-Assisted Virtualization is not supported on a virtual machine with a PCI passthrough device."

This guide outlines a proven, step-by-step workaround that bypasses the GUI restriction by using the ESXi Web UI for hardware attachment and the command line (SSH) for injecting the virtualization flags.

Prerequisites

  • The target Virtual Machine must be Powered Off.
  • The NVIDIA GPU must be toggled to "Active" for Passthrough in the ESXi Host's Hardware settings.
  • SSH access to the ESXi host must be enabled.
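If SSH is not yet enabled, it can be switched on from the DCUI, the Host Client, or an existing ESXi Shell session. As a sketch (command names as shipped with standard ESXi builds), you can also start the service and sanity-check that the host actually sees the NVIDIA GPU:

```shell
# Enable and start the SSH (TSM-SSH) service on the ESXi host
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

# Sanity check: confirm the NVIDIA GPU shows up on the host's PCI bus
esxcli hardware pci list | grep -i nvidia
```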

Step-by-Step Configuration

Step 1: Attach Hardware via ESXi Web UI (The GUI Trick)

To bypass the UI validation error, we must initially disable CPU virtualization in the UI before attaching the GPU.

  1. Log in to the ESXi Web Interface.
  2. Right-click your Virtual Machine and select Edit Settings.
  3. Expand the CPU section. UNCHECK the box for "Expose hardware assisted virtualization to the guest OS".
  4. Click Add Other Device (or Add New Device) -> PCI Device.
  5. Select your NVIDIA GPU from the dropdown list.
  6. Expand the Memory section and CHECK the box for "Reserve all guest memory (All-locked)". (Crucial for high-VRAM enterprise GPUs to prevent DevicePowerOn failures).
  7. Click Save. The UI will save the configuration without errors.

Step 2: Inject Virtualization Flags via SSH

Now that the hardware is attached and saved, we will forcefully enable nested virtualization directly in the VM's configuration file.

  1. Connect to your ESXi host via SSH:
ssh root@<YOUR_ESXI_HOST_IP>
  2. Define your VM's .vmx file path (replace the path with your actual datastore and VM name):
VMX="/vmfs/volumes/datastore1/YourVMName/YourVMName.vmx"
  3. Run the following command block to safely clean any conflicting flags and inject the "Golden Configuration":
# 1. Clean old/conflicting flags safely (without deleting hardware entries).
#    $VMX is quoted throughout, since VM names often contain spaces.
sed -i '/vhv.enable/d' "$VMX"
sed -i '/vhv.allowPassthru/d' "$VMX"
sed -i '/pciPassthru.use64bitMMIO/d' "$VMX"
sed -i '/pciPassthru.64bitMMIOSizeGB/d' "$VMX"
sed -i '/hypervisor.cpuid.v0/d' "$VMX"

# 2. Forcefully enable Nested Virtualization, Passthrough Bypass, and 64-bit MMIO
echo 'vhv.enable = "TRUE"' >> "$VMX"
echo 'vhv.allowPassthru = "TRUE"' >> "$VMX"
echo 'pciPassthru.use64bitMMIO = "TRUE"' >> "$VMX"
echo 'pciPassthru.64bitMMIOSizeGB = "128"' >> "$VMX"
echo 'hypervisor.cpuid.v0 = "FALSE"' >> "$VMX"
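After appending the flags, it is worth confirming that each key now appears exactly once in the file before moving on. A minimal check, in plain POSIX shell with no ESXi-specific tools (it reuses the VMX variable defined above):

```shell
# Count occurrences of each golden-configuration key in the .vmx file.
# After the cleanup + append above, every key should appear exactly once.
for key in vhv.enable vhv.allowPassthru pciPassthru.use64bitMMIO \
           pciPassthru.64bitMMIOSizeGB hypervisor.cpuid.v0; do
  echo "$key: $(grep -c "$key" "$VMX")"
done
```

If any count is 0 the append step was missed; if any count is greater than 1, rerun the sed cleanup before appending again.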

What each line does (Golden Configuration explained)

| Flag | Purpose |
| --- | --- |
| vhv.enable = "TRUE" | Turns on Hardware-Assisted (Nested) Virtualization inside the guest. The guest OS can see CPU virtualization extensions (e.g. Intel VT-x / AMD-V) and run its own hypervisor (Hyper-V, KVM, Docker with virtualization, etc.). |
| vhv.allowPassthru = "TRUE" | Allows nested virtualization together with PCI passthrough. Without this, ESXi would block the combination. This is the key flag that the GUI refuses to set when a passthrough device is present. |
| pciPassthru.use64bitMMIO = "TRUE" | Enables 64-bit MMIO (Memory-Mapped I/O) for passthrough devices. Enterprise GPUs (e.g. NVIDIA A40) need large MMIO windows; 32-bit addressing is too small and can cause failures or instability. |
| pciPassthru.64bitMMIOSizeGB = "128" | Sets the size of the 64-bit MMIO region to 128 GB. Must be large enough for the GPU's BAR space and firmware. 128 GB is a safe value for high-VRAM GPUs; reduce only if you have a good reason. |
| hypervisor.cpuid.v0 = "FALSE" | Hides the hypervisor CPUID bit from the guest. Some guest OSes or drivers (e.g. NVIDIA) behave differently when they detect they are inside a VM. Setting this to FALSE can avoid "running in a VM" checks and improve compatibility. |

The sed commands at the top of the block remove any existing lines containing these keys, so you don't get duplicate or conflicting entries when appending the new values.
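The five sed invocations can equally be collapsed into a single loop; this is purely a stylistic alternative, not a different mechanism (it assumes the VMX variable from Step 2):

```shell
# Delete every line mentioning one of the golden-configuration keys,
# leaving all other entries (hardware, passthrough device IDs) untouched.
for key in vhv.enable vhv.allowPassthru pciPassthru.use64bitMMIO \
           pciPassthru.64bitMMIOSizeGB hypervisor.cpuid.v0; do
  sed -i "/$key/d" "$VMX"
done
```

Note that the keys double as sed regular expressions here; the literal dots match "any character", which is harmless for this cleanup but worth knowing if you adapt the pattern.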

Step 3: Reload the VM Configuration

Force ESXi to re-read the updated .vmx file from disk, so the VM does not end up in an "Invalid" state.

  1. Find your VM's internal ID (VMID):
vim-cmd vmsvc/getallvms | grep "YourVMName"

Note the numeric ID in the far-left column.

  2. Reload the VM using its ID (replace <VMID> with your specific number):
vim-cmd vmsvc/reload <VMID>
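Optionally, the reload can be sanity-checked from the same SSH session before returning to the GUI (vim-cmd subcommand names as shipped with ESXi):

```shell
# Confirm the VM is still registered and powered off after the reload;
# this typically prints "Retrieved runtime info" followed by "Powered off".
vim-cmd vmsvc/power.getstate <VMID>
```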

Step 4: Power On (Crucial Warning)

  1. Return to the ESXi Web Interface and press F5 to refresh the page.
  2. WARNING: Do NOT open "Edit Settings" again. If you do, the ESXi GUI will re-validate the file, detect the nested virtualization, and throw an error upon saving.
  3. Simply right-click your Virtual Machine and select Power On.
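If you would rather not touch the Web UI at all at this point, the VM can also be powered on from the same SSH session, using the VMID found in Step 3:

```shell
# Power the VM on directly from the host, avoiding the Web UI entirely
vim-cmd vmsvc/power.on <VMID>
```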

Your VM will now boot successfully with both Nested CPU Virtualization active and the NVIDIA GPU fully passed through!