=Virtio=

== Paravirtualized drivers for kvm/Linux ==

* Virtio was chosen to be the main platform for IO virtualization in KVM
* The idea behind it is to have a common framework for hypervisors for IO virtualization
* More information (although not up to date) can be found [[Media:KvmForum2007$kvm_pv_drv.pdf|here]]
* At the moment network/block/balloon devices are supported for kvm
* The host implementation is in userspace - qemu, so no driver is needed in the host.

== How to use Virtio ==

* Get kvm version >= 60
* Get a Linux kernel with virtio drivers for the guest
** Get kernel >= 2.6.25 and activate the following options (modules should also work, but take care of the initial ramdisk)
*** CONFIG_VIRTIO_PCI=y (Virtualization -> PCI driver for virtio devices)
*** CONFIG_VIRTIO_BALLOON=y (Virtualization -> Virtio balloon driver)
*** CONFIG_VIRTIO_BLK=y (Device Drivers -> Block -> Virtio block driver)
*** CONFIG_VIRTIO_NET=y (Device Drivers -> Network device support -> Virtio network driver)
*** CONFIG_VIRTIO=y (automatically selected)
*** CONFIG_VIRTIO_RING=y (automatically selected)
*** You can safely disable SATA/SCSI and all other NIC drivers if you only use virtio (disk/nic)
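If the guest kernel is already built, a quick sanity check is to grep its config for the options above. The sketch below runs against an inline example fragment; on a real guest you would read /boot/config-$(uname -r) instead (that path is an assumption, locations vary by distribution):

```shell
# Check that the virtio options listed above are enabled (=y or =m).
# "cfg" stands in for a real kernel config, e.g. /boot/config-$(uname -r).
cfg='CONFIG_VIRTIO=y
CONFIG_VIRTIO_RING=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y'

for opt in VIRTIO VIRTIO_RING VIRTIO_PCI VIRTIO_BALLOON VIRTIO_BLK VIRTIO_NET; do
    if printf '%s\n' "$cfg" | grep -q "^CONFIG_${opt}=[ym]"; then
        echo "CONFIG_${opt}: enabled"
    else
        echo "CONFIG_${opt}: missing"
    fi
done
```

If any of these come back as modules (=m), remember the note above about the initial ramdisk.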
* As an alternative, one can use a standard guest kernel > 2.6.18 and make use of the backward compatibility backport
** The backport and instructions can be found in [http://www.kernel.org/pub/scm/virt/kvm/kvm-guest-drivers-linux.git kvm-guest-drivers-linux.git]
* Use the virtio-net-pci device for network devices (or model=virtio for the old -net ... -net syntax) and if=virtio for disk
** Example
 
<pre><nowiki>
x86_64-softmmu/qemu-system-x86_64 -boot c -drive file=/images/xpbase.qcow2,if=virtio -m 384 -netdev type=tap,script=/etc/kvm/qemu-ifup,id=net0 -device virtio-net-pci,netdev=net0
</nowiki></pre>
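For older qemu/kvm builds that predate the -netdev/-device syntax, the bullet above mentions model=virtio with the legacy -net options. A sketch of the equivalent legacy invocation, assembled and printed rather than executed (it assumes the same example image and qemu-ifup script as above):

```shell
# Legacy -net syntax, equivalent to the -netdev/-device example above.
# Printed instead of run, since it needs a real disk image and tap setup.
cmd="x86_64-softmmu/qemu-system-x86_64 -boot c \
-drive file=/images/xpbase.qcow2,if=virtio -m 384 \
-net nic,model=virtio \
-net tap,script=/etc/kvm/qemu-ifup"

echo "$cmd"
```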
* -hd[ab] for disk won't work, use -drive
* Disks will show up as /dev/vd[a-z][1-9]; if you migrate, you need to change "root=" in the Lilo/GRUB config
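Because the device names change when switching to virtio (e.g. /dev/hda1 becomes /dev/vda1), the root= arguments in the bootloader config need rewriting. A minimal sketch on an illustrative GRUB menu.lst fragment (on a real system you would edit the actual bootloader config in place, and /etc/fstab as well):

```shell
# Rewrite IDE-style device names to virtio names in a GRUB config fragment.
# The fragment below is an example; real configs live under /boot/grub/.
grub_fragment='title Linux
kernel /vmlinuz-2.6.25 root=/dev/hda1 ro
initrd /initrd-2.6.25.img'

printf '%s\n' "$grub_fragment" | sed 's|/dev/hda|/dev/vda|g'
```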
 
* At the moment the kernel modules are loaded automatically in the guest, but the network interface should be started manually (dhclient/ifconfig)
* Currently performance is much better when using a host kernel configured with CONFIG_HIGH_RES_TIMERS. Another option is to use HPET/RTC and the -clock qemu option.
* Expected performance
** Performance varies from host to host, kernel to kernel
** On my laptop I measured 1.1Gbps rx throughput using 2.6.23, 850Mbps tx.
** Ping latency is 300-500 usec
* Enjoy, more to come :)

== How to get high performance with Virtio ==

* Get the latest drop from [http://dpdk.org/download dpdk.org]
* Add the [http://dpdk.org/browse/virtio-net-pmd/refs/ librte_pmd_virtio] poll-mode driver
** Example
<pre><nowiki>
testpmd -c 0xff -n 1 \
    -d librte_pmd_virtio.so \
    -- \
    --disable-hw-vlan --disable-rss \
    -i --rxq=1 --txq=1 --rxd=256 --txd=256
</nowiki></pre>
  
 
__NOTOC__
 

Latest revision as of 05:23, 12 October 2016
