10G NIC performance: VFIO vs virtio

We did an experiment to measure the network performance overhead of a virtualization environment, comparing the VFIO passthrough and virtio approaches.

== Test Topology ==


2 Intel Grantley-EP platforms (Xeon E5-2697 v3) connected by a 10G link; 96 GB memory.<br />
NIC: Intel 82599ES [http://ark.intel.com/products/41282/Intel-82599ES-10-Gigabit-Ethernet-Controller]<br />
Test Tool: iperf <br />
OS: RHEL 7.1 <br />
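
The exact iperf options used are not recorded on this page; a typical single-stream TCP run between the two machines would look like the sketch below (the receiver address is a placeholder):
 # on the receiving machine
 iperf -s
 # on the sending machine: 60-second TCP test against the receiver
 iperf -c <receiver_ip> -t 60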


== Result summary ==
*In the native (non-virtualized) environment, iperf reaches '''9.4''' Gbps throughput. Since iperf is a software packet generator running as an ordinary user process, this is a reasonable number.
*With VFIO passthrough, network performance is also '''9.4''' Gbps; that is, for a typical software network application we observe no measurable overhead from the virtualization environment when using the VFIO passthrough method.
*With the virtio approach, if properly configured (see details below), network performance can also reach '''9.4''' Gbps; with a poor configuration it drops to '''3.6''' Gbps.


=== Some references first ===
SR-IOV: [http://www.intel.com/content/www/us/en/network-adapters/virtualization.html]<br />
VT-d assignment: [[How_to_assign_devices_with_VT-d_in_KVM]]
----


Here are the details for each configuration.
 
== VFIO passthrough VF (SR-IOV) to guest ==
=== Requirements ===
#Your NIC supports SR-IOV (see below for how to check)
#The physical-function driver (usually igb or ixgbe) is loaded with 'max_vfs=<num>' (run modinfo on the driver to confirm the exact parameter name; an example follows this list)
#Kernel modules needed: the NIC driver, the vfio-pci module, and IOMMU support (intel_iommu)
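
As an illustration, with the ixgbe driver used by the 82599 the VFs might be created like this (a sketch; the parameter name and VF count depend on your driver version, and newer kernels may require the sysfs 'sriov_numvfs' interface instead):
 # confirm the VF-related parameter exposed by the PF driver
 modinfo ixgbe | grep -i vfs
 # reload the PF driver, asking it to create 4 virtual functions
 modprobe -r ixgbe
 modprobe ixgbe max_vfs=4
 # the VFs should now appear as additional PCI devices
 lspci | grep -i "Virtual Function"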
 
=== Check if your NIC supports SR-IOV ===
'''<nowiki>lspci -s <NIC_BDF> -vvv | grep -i "Single Root I/O Virtualization"</nowiki>'''
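
If the NIC supports it, the output should contain a line similar to the following (the capability offset shown here is only illustrative):
 Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)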
 
=== Assign the VF to a guest ===
Unbind the VF from its current driver (an igbvf device is used as the example below) and bind it to the vfio-pci driver:
#unbind from the previous driver (taking an igbvf device as the example)
#;echo <vf_BDF> > /sys/bus/pci/devices/<vf_BDF>/driver/unbind
#;lspci -s <vf_BDF> -n      //to get its vendor and device ID
#://it will return something like the line below
#;<nowiki>0a:13.3 0200: 8086:1520 (rev 01)</nowiki>
#://8086:1520 is the vendor:device ID pair
#bind it to the vfio-pci driver
#;echo 8086 1520 > /sys/bus/pci/drivers/vfio-pci/new_id
Now you can verify that the device is bound to the vfio-pci driver:<br />
'''lspci -s <vf_BDF> -k'''
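
If the bind succeeded, the output includes a line like:
 Kernel driver in use: vfio-pci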
 
Create the guest with direct passthrough via the VFIO framework:<br />
'''<nowiki>qemu-kvm -m 16G -smp 8 -net none -device vfio-pci,host=81:10.0 -drive file=/var/lib/libvirt/images/rhel7.1.img,if=virtio -nographic</nowiki>'''
<br />'-net none' tells QEMU not to emulate any network device
<br />'-device vfio-pci,host=' assigns a vfio-pci device, identified by its host BDF (81:10.0 is the VF's BDF in this example)
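
Inside the guest, the passed-through VF then appears as an ordinary PCI NIC and is driven by the normal VF driver (e.g. igbvf/ixgbevf). A quick sanity check and test run (interface name and address are placeholders):
 # inside the guest: the VF shows up as a regular Ethernet device
 lspci | grep -i ethernet
 # configure the interface as usual, then run the iperf server
 ip addr add <guest_ip>/24 dev <guest_iface>
 ip link set <guest_iface> up
 iperf -s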
 
== Virtio ==
=== Requirements ===
Virtio support compiled into the guest kernel (the RHEL 7.1 stock kernel already provides it; a quick runtime check follows the list):
 CONFIG_VIRTIO=m
 CONFIG_VIRTIO_RING=m
 CONFIG_VIRTIO_PCI=m
 CONFIG_VIRTIO_BALLOON=m
 CONFIG_VIRTIO_BLK=m
 CONFIG_VIRTIO_NET=m
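
To double-check on a running RHEL 7.1 guest (a quick sanity check; paths and module names assume the stock kernel):
 # config options of the running kernel
 grep VIRTIO /boot/config-$(uname -r)
 # virtio modules actually loaded
 lsmod | grep virtio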
 
Create the guest with a virtio network device:<br />
'''<nowiki>qemu-kvm -m 16G -smp 8 -device virtio-net-pci,netdev=net0 -netdev tap,id=net0,script=/etc/qemu-ifup -drive file=/var/lib/libvirt/images/rhel7.1.img,if=virtio -nographic</nowiki>'''
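
The tap backend invokes the helper script /etc/qemu-ifup when the tap device is created. Its contents are site-specific and not recorded on this page; a minimal sketch, assuming an existing host bridge named br0, could look like:
 #!/bin/sh
 # /etc/qemu-ifup (sketch): QEMU passes the tap interface name as $1;
 # attach it to an existing bridge ('br0' is an assumed name, adjust to your setup)
 BRIDGE=br0
 ip link set "$1" up
 ip link set "$1" master "$BRIDGE"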
 
<br />
==== Poor performance configuration ====
<br />'''<nowiki>qemu-kvm -m 16G -smp 8 -net nic,model=virtio -net tap,script=/etc/qemu-ifup -drive file=/var/lib/libvirt/images/rhel7.1.img,if=virtio -nographic</nowiki>'''
<br />If the <nowiki>'-device virtio-net-pci,netdev=...'</nowiki> form is not used and the legacy <nowiki>'-net nic,model=virtio -net tap'</nowiki> syntax is used instead, throughput drops to 3.6 Gbps, most likely because the legacy '-net' options connect the NIC and the tap device through QEMU's internal hub rather than directly, adding extra per-packet processing.
 
[[Category:VFIO]][[Category:Results]][[Category:Virtio]][[Category:Networking]]
