Paravirtualized drivers for kvm/Linux
- Virtio was chosen to be the main platform for IO virtualization in KVM
- The idea behind it is to provide a common framework for IO virtualization across hypervisors
- More information (although not up to date) can be found in the kvm pv driver presentation: http://kvm.qumranet.com/kvmwiki/KvmForum2007?action=AttachFile&do=get&target=kvm_pv_drv.pdf
- At the moment network/block/balloon devices are supported for kvm
- The host implementation is in userspace (qemu), so no driver is needed on the host
How to use Virtio
- Get kvm version >= 60
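One way to check which kvm userspace build you have is the qemu version banner; on the kvm-NN releases of this period the banner carries the kvm release number (treat the exact banner wording as an assumption):
    # the first line of the help output prints the version string
    qemu-system-x86_64 -h | head -n 1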
- Get Linux kernel with virtio drivers for the guest
  - Get a kernel >= 2.6.25 and enable the following options (building them as modules also works, but make sure they end up in the initial ramdisk); a quick check is sketched after this list:
    - CONFIG_VIRTIO_PCI=y (Virtualization -> PCI driver for virtio devices)
    - CONFIG_VIRTIO_BALLOON=y (Virtualization -> Virtio balloon driver)
    - CONFIG_VIRTIO_BLK=y (Device Drivers -> Block -> Virtio block driver)
    - CONFIG_VIRTIO_NET=y (Device Drivers -> Network device support -> Virtio network driver)
    - CONFIG_VIRTIO=y (automatically selected)
    - CONFIG_VIRTIO_RING=y (automatically selected)
    - You can safely disable SATA/SCSI and all other NIC drivers if you only use virtio for disk and network
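A quick way to confirm the options above in a running guest kernel, assuming the distribution ships the kernel config under /boot (that location is distribution-specific):
    # list the virtio-related options of the running kernel
    grep VIRTIO /boot/config-$(uname -r)
Each of the options listed above should appear as =y, or as =m if you rebuild the initial ramdisk to include the modules.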
- As an alternative, one can use a standard guest kernel > 2.6.18 and make use of the backward compatibility drivers
  - The backport and instructions can be found in kvm-guest-drivers-linux.git: http://www.kernel.org/pub/scm/virt/kvm/kvm-guest-drivers-linux.git
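A minimal sketch of fetching the backport; the build and installation steps themselves are documented in the repository:
    # clone the out-of-tree virtio guest drivers for older kernels
    git clone http://www.kernel.org/pub/scm/virt/kvm/kvm-guest-drivers-linux.git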
- Use model=virtio for the network devices and if=virtio for disk
  - Example:
    qemu/x86_64-softmmu/qemu-system-x86_64 -boot c -drive file=/images/xpbase.qcow2,if=virtio,boot=on -m 384 -net nic,model=virtio -net tap,script=/etc/kvm/qemu-ifup
- -hd[ab] for disk won't work, use -drive
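For example, a legacy '-hda /images/xpbase.qcow2' (same image as in the example above) becomes:
    # attach the image as a virtio disk rather than an IDE one
    -drive file=/images/xpbase.qcow2,if=virtio,boot=on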
- Disks will show up as /dev/vd[a-z][1-9]; if you migrate an existing guest you need to change "root=" in the LILO/GRUB config
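For instance, a guest that previously booted with root=/dev/sda1 (hypothetical IDE/SCSI device name) would need its kernel command line changed to something like:
    root=/dev/vda1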
- At the moment the kernel modules are automatically loaded in the guest but the interface should be started manually (dhclient/ifconfig)
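A minimal sketch, assuming the virtio NIC shows up as eth0 inside the guest (the interface name is an assumption):
    # bring the interface up and request an address via DHCP
    ifconfig eth0 up
    dhclient eth0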
- Currently performance is much better when using a host kernel configured with CONFIG_HIGH_RES_TIMERS. Another option is to use HPET/RTC and the -clock qemu option.
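To check the host kernel (again assuming its config is exported under /boot), and to pass an explicit clock source to qemu (verify the accepted -clock values against your qemu's help output; 'hpet' is used here only as an illustration):
    # confirm the host kernel has high resolution timers enabled
    grep CONFIG_HIGH_RES_TIMERS /boot/config-$(uname -r)
    # example qemu option selecting the HPET clock source
    -clock hpet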
- Expected performance
  - Performance varies from host to host and from kernel to kernel
  - On my laptop I measured 1.1 Gbps rx throughput and 850 Mbps tx using 2.6.23
  - Ping latency is 300-500 usec
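The latency number can be reproduced roughly with ping from the host to the guest (the address below is a placeholder); the throughput figures need a network benchmark tool of your choice, which is not specified here:
    # round-trip latency shows up in the time= field
    ping -c 10 <guest-ip>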
- Enjoy, more to come :)