10G NIC performance: VFIO vs virtio


We ran an experiment to measure the network performance overhead in a virtualized environment, comparing the VFIO passthrough and virtio approaches.

Test Topology

Two Intel Grantley-EP platforms (Xeon E5-2697 v3) connected by a 10G link; 96 GB of memory.

NIC: Intel 82599ES

Test Tool: iperf

OS: RHEL 7.1
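
As a rough sketch, the throughput measurement with iperf would look something like the following; the receiver address 192.168.1.1 and the 60-second run are illustrative, not taken from the original test setup.

# on the receiving host
iperf -s
# on the sending host (192.168.1.1 stands in for the receiver's address)
iperf -c 192.168.1.1 -t 60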

Result summary

  • In the native environment, iperf achieves 9.4 Gbps throughput. Since iperf is a software packet generator running as a normal process, this is a reasonable number.
  • With VFIO passthrough, network performance is also 9.4 Gbps; that is, for a typical software network application, we observe no virtualization overhead with the VFIO passthrough method.
  • With the virtio approach, if properly configured (details below), network performance can also reach 9.4 Gbps.

Here are the details for each configuration.

VFIO passthrough VF (SR-IOV) to guest

Requirements

  1. Your NIC supports SR-IOV (the next section shows how to check).
  2. The NIC driver (usually igb or ixgbe) is loaded with 'max_vfs=<num>'; run modinfo on the driver to confirm the exact parameter name (a sketch follows this list).
  3. Kernel support needed: the NIC driver, the vfio-pci module, and the IOMMU (for Intel, intel_iommu=on on the kernel command line).
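
As a minimal sketch of requirement 2, assuming the igb driver and 4 VFs per port (adjust the driver name and VF count for your NIC):

# confirm the exact parameter name first
modinfo igb | grep -i vfs
# reload the driver with VFs enabled
modprobe -r igb
modprobe igb max_vfs=4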

Check if your NIC supports SR-IOV

lspci -s <NIC_BDF> -vvv | grep -i "Single Root I/O Virtualization"
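
On an SR-IOV-capable NIC, this should print a capability line similar to the one below (the capability offset varies by device):

Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)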

Assign the VF to a guest

Unbind the VF from its current driver and bind it to the vfio-pci driver.

  1. Unbind from the previous driver (taking an igbvf device as the example):

echo <vf_BDF> > /sys/bus/pci/devices/<vf_BDF>/driver/unbind
lspci -s <vf_BDF> -n      # get its vendor:device ID
# it will print something like:
0a:13.3 0200: 8086:1520 (rev 01)
# 8086:1520 is the vendor:device ID pair

  2. Bind to the vfio-pci driver:

echo 8086 1520 > /sys/bus/pci/drivers/vfio-pci/new_id

Now you can see that the device is bound to the vfio-pci driver:

lspci -s <vf_BDF> -k
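
The output should include a line like 'Kernel driver in use: vfio-pci'. As a final sketch, the VF can then be assigned to a guest at launch with QEMU's vfio-pci device; the memory and CPU sizes and the guest image path here are illustrative, not from the original writeup:

qemu-system-x86_64 -enable-kvm -m 4096 -smp 4 \
    -device vfio-pci,host=<vf_BDF> \
    /path/to/guest.img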