https://linux-kvm.org/api.php?action=feedcontributions&user=Jasowang&feedformat=atom
KVM - User contributions [en]
2024-03-29T10:37:13Z
User contributions
MediaWiki 1.39.5
https://linux-kvm.org/index.php?title=File:Iotlb.png&diff=173628
File:Iotlb.png
2016-02-15T08:16:17Z
<p>Jasowang: Jasowang uploaded a new version of File:Iotlb.png</p>
<hr />
<div></div>
Jasowang
https://linux-kvm.org/index.php?title=Device_iotlb&diff=173627
Device iotlb
2016-02-04T09:41:24Z
<p>Jasowang: /* Implementation (RFC) */</p>
<hr />
<div>= Vhost-net Device IOTLB =<br />
<br />
== Overview ==<br />
<br />
This page describes the design of a Device IOTLB for vhost-net, which provides a secure and efficient environment for DPDK-like programs running in the guest.<br />
<br />
== Design Goals ==<br />
<br />
* Architecture independent: The implementation should be compatible with QEMU's current IOMMU architecture, so that it stays architecture independent and is easy to port to various platform/IOMMU implementations.<br />
* Efficient for DPDK-like programs: End users should see no noticeable performance degradation when running DPDK-like programs in the guest. The design is optimized for DPDK-like programs, which use fixed mappings in the guest.<br />
* Compatible: The implementation should remain compatible with the current vhost-net memory region API, in order to keep supporting VMs without DMAR enabled.<br />
<br />
== Design ==<br />
<br />
* Vhost-net can query the mappings from guest I/O virtual addresses (iova) to host virtual addresses (hva) through an ioctl.<br />
* The translation results can be cached for a while in a per-device IOTLB inside vhost to speed up future translations.<br />
* QEMU can invalidate one or more mappings cached by vhost through an ioctl.<br />
* QEMU can start or stop the DMAR through an ioctl.<br />
<br />
[[Image:iotlb.png]]<br />
<br />
* The figure above shows the design of the device IOTLB for vhost-net:<br />
** Startup:<br />
*** When DMAR is enabled by the guest IOMMU driver, QEMU notifies vhost to start the device IOTLB. The device IOTLB starts with no entries cached.<br />
** DMA emulation (a sketch of this path follows the list):<br />
*** When vhost emulates a DMA, it first tries to translate the guest iova to an hva through the device IOTLB.<br />
*** If vhost cannot find such a translation, it suspends itself and asks QEMU for assistance with the translation.<br />
*** QEMU is notified and queries the IOMMU for the translation.<br />
*** After the translation finishes, QEMU sends the result to vhost.<br />
*** Vhost then restarts the DMA.<br />
** TLB invalidation:<br />
*** Vhost snoops the TLB invalidations emulated by QEMU.<br />
*** If a TLB invalidation relates to a device whose DMA is emulated by vhost, vhost is notified and the corresponding TLB entries cached by vhost are cleared.<br />
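<br />
The translate-or-miss path can be sketched roughly as below. This is illustrative C pseudocode, not the actual RFC patches; the helper names (iotlb_lookup, fill_iotlb_request, wait_for_iotlb_update) and the field layout are hypothetical.<br />
<pre>
/* Sketch of the vhost DMA-emulation path described above.
 * Helper names are hypothetical; the real RFC code differs. */
static void *vhost_iova_to_hva(struct vhost_virtqueue *vq, u64 iova, u64 len)
{
        void *hva;

        /* Fast path: hit in the per-device IOTLB. */
        hva = iotlb_lookup(vq->dev->iotlb, iova, len);
        if (hva)
                return hva;

        /* Miss: fill the translation request at the address set by
         * VHOST_SET_VRING_IOTLB_REQUEST and kick the eventfd set by
         * VHOST_SET_VRING_IOTLB_CALL so QEMU notices. */
        fill_iotlb_request(vq, iova, len);
        eventfd_signal(vq->iotlb_call, 1);

        /* Suspend until QEMU replies via VHOST_UPDATE_IOTLB, then
         * retry; the DMA is restarted with the freshly cached entry. */
        wait_for_iotlb_update(vq);
        return iotlb_lookup(vq->dev->iotlb, iova, len);
}
</pre>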
<br />
== Implementation (RFC) ==<br />
<br />
* Kernel side:<br />
** Four new ioctls are introduced:<br />
*** VHOST_SET_VRING_IOTLB_REQUEST: Set the per-virtqueue address of the IOTLB request. Each virtqueue fills the translation request at this address when there is an IOTLB miss in vhost.<br />
*** VHOST_SET_VRING_IOTLB_CALL: Set a per-virtqueue eventfd used to notify QEMU that there is a pending translation request.<br />
*** VHOST_UPDATE_IOTLB: Update or invalidate an IOTLB mapping.<br />
*** VHOST_RUN_IOTLB: Start or stop the IOTLB.<br />
** Convert the pre-sorted array to an interval tree (see the sketch below):<br />
*** Easy detection of intersecting mappings<br />
*** Scales to thousands of mappings<br />
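<br />
A minimal sketch of the interval tree lookup, using the kernel's generic interval tree from &lt;linux/interval_tree.h&gt; (4.x-era rb_root API); the node layout here is an assumption for illustration, not the actual patch:<br />
<pre>
#include <linux/interval_tree.h>
#include <linux/kernel.h>

struct iotlb_node {
        struct interval_tree_node it;   /* it.start/it.last: iova range */
        u64 uaddr;                      /* hva corresponding to it.start */
};

/* O(log n) lookup of a cached mapping covering [iova, iova + len - 1]. */
static struct iotlb_node *iotlb_find(struct rb_root *root, u64 iova, u64 len)
{
        struct interval_tree_node *it;

        it = interval_tree_iter_first(root, iova, iova + len - 1);
        return it ? container_of(it, struct iotlb_node, it) : NULL;
}
</pre>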
<br />
* QEMU side (VT-d and PCI were chosen first; a sketch of the QEMU-side miss handling follows this list):<br />
** convert virtio to use the DMA helpers / a DMA address space<br />
** convert vhost to use a DMA address space<br />
** introduce a TLB listener which snoops the TLB invalidations<br />
** large slpte support<br />
** PCI ATS support, possibly needed for guest-aware device IOTLB co-operation<br />
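<br />
On a miss, QEMU's job is to translate through the emulated IOMMU and push the result back. A rough userspace sketch; the message layout and the iommu_translate() helper are assumptions for illustration, since the RFC uapi is not reproduced here:<br />
<pre>
#include <stdint.h>
#include <sys/ioctl.h>

/* This struct is an assumption for the sketch, not the RFC uapi. */
struct iotlb_msg {
        uint64_t iova;
        uint64_t size;
        uint64_t uaddr;         /* filled in by QEMU on reply */
        uint8_t  perm;
        uint8_t  valid;         /* 0 invalidates, 1 updates */
};

static void handle_iotlb_miss(int vhost_fd, struct iotlb_msg *req)
{
        /* Walk the emulated IOMMU (e.g. VT-d) page tables for the
         * device; iommu_translate() stands in for QEMU's IOMMU
         * translation callback. */
        req->uaddr = iommu_translate(req->iova, req->size, req->perm);
        req->valid = 1;

        /* Reply to vhost; the suspended virtqueue resumes once the
         * entry is cached in the device IOTLB. */
        ioctl(vhost_fd, VHOST_UPDATE_IOTLB, req);
}
</pre>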
<br />
== Status ==<br />
<br />
* In progress, mostly done (except for ATS).<br />
* Benchmarks show a 100% hit rate when using intel_iommu=strict. With ATS, we can probably also reach a 100% hit rate in the default mode.</div>
Jasowang
https://linux-kvm.org/index.php?title=Multiqueue&diff=173602
Multiqueue
2016-02-04T08:38:47Z
<p>Jasowang: /* Git & Cmdline */</p>
<hr />
<div>= Multiqueue virtio-net =<br />
<br />
== Overview ==<br />
<br />
This page provides information about the design of multiqueue virtio-net, an approach that enables packet send/receive processing to scale with the number of vcpus available to the guest. It gives an overview of multiqueue virtio-net and discusses the design of the various parts involved, and it also contains some basic performance test results. This work is in progress and the design may change.<br />
<br />
== Contact ==<br />
* Jason Wang <jasowang@redhat.com><br />
<br />
== Rationale ==<br />
<br />
Today's high-end servers have more and more processors, and guests running on them tend to have an increasing number of vcpus. The scalability of the guest's protocol stack is restricted by single-queue virtio-net:<br />
<br />
* Network performance does not scale as the number of vcpus increases: the guest cannot transmit or receive packets in parallel, since virtio-net has only one TX and one RX queue and virtio-net drivers must synchronize before sending and receiving packets. Even though there are software techniques such as RFS to spread the load across processors, they only help one direction of the traffic and are really expensive in a guest, since they depend on IPIs, which bring extra overhead in a virtualized environment.<br />
* Multiqueue NICs are increasingly common and are well supported by the Linux kernel, but the current virtual NIC cannot utilize multiqueue support: the tap and virtio-net backends must serialize the concurrent transmit/receive requests coming from different cpus.<br />
<br />
To remove those bottlenecks, we must allow parallel packet processing by introducing multiqueue support in both the back-end and the guest drivers. Ideally, packet handling would be done by processors in parallel without interleaving, and network performance would scale with the number of vcpus.<br />
<br />
== Git & Cmdline ==<br />
* qemu-kvm -netdev tap,id=hn0,queues=M -device virtio-net-pci,netdev=hn0,vectors=2M+2 ...<br />
<br />
== Status & Challenges ==<br />
<br />
* Status<br />
** merged upstream<br />
<br />
== Design Goals ==<br />
<br />
=== Parallel send/receive processing ===<br />
<br />
To make sure the whole stack can work in parallel, the parallelism of not only the front-end (the guest driver) but also the back-end (vhost and tap/macvtap) must be exploited. This is done by:<br />
<br />
* Allowing multiple sockets to be attached to tap/macvtap<br />
* Using multiple vhost threads to serve as the backend of a multiqueue-capable virtio-net adapter<br />
* Using a multiqueue-aware virtio-net driver to send and receive packets to/from each queue<br />
<br />
=== In order delivery ===<br />
<br />
Packets of a specific stream are delivered in order to the TCP/IP stack in the guest.<br />
<br />
=== Low overhead ===<br />
<br />
The multiqueue implementation should have low overhead; cache locality and send-side scaling can be maintained by<br />
<br />
* making sure the packets from a single connection are mapped to a specific processor,<br />
* sending the send completions (TCP ACKs) to the same vcpu that sent the data,<br />
* other considerations such as NUMA and HT.<br />
<br />
=== No assumption about the underlying hardware ===<br />
<br />
The implementation should not target a specific hardware/environment. For example, we should not optimize only for host NICs with RSS or flow director support.<br />
<br />
=== Compatibility ===<br />
* Guest ABI: Based on the virtio specification, the multiqueue implementation of virtio-net should keep compatibility with single queue operation. The multiqueue capability must be enabled through feature negotiation, which makes sure a single-queue driver can work on a multiqueue backend and a multiqueue driver can work on a single-queue backend.<br />
* Userspace ABI: As the changes may touch tun/tap, which has non-virtualized users, the semantics of the existing ioctls must be kept so as not to break applications that use them. New functionality must be added through new ioctls.<br />
<br />
=== Management friendly ===<br />
The backend (tap/macvtap) should provide an easy way to change the number of queues/sockets, and QEMU with multiqueue support should also be friendly to management software: QEMU should be able to accept file descriptors through the command line and through SCM_RIGHTS.<br />
<br />
== High level Design ==<br />
The main goal of multiqueue is to exploit the parallelism of each module involved in packet transmission and reception:<br />
* macvtap/tap: For single-queue virtio-net, one macvtap/tap socket is abstracted as a queue for both tx and rx. We can reuse and extend this abstraction so that macvtap/tap can dequeue and enqueue packets from multiple sockets. Each socket can then be treated as a tx/rx queue, and macvtap/tap becomes in fact a multiqueue device in the host. The host network code can then transmit and receive packets in parallel.<br />
* vhost: The parallelism can be achieved by using multiple vhost threads to handle multiple sockets. Currently, there are two design choices:<br />
** 1:1 mapping between vhost threads and sockets. This method needs no vhost changes: it just launches the same number of vhost threads as there are queues. Each vhost thread handles one tx ring and one rx ring, just as for single-queue virtio-net.<br />
** M:N mapping between vhost threads and sockets. This method allows a single vhost thread to poll more than one tx/rx ring and socket, and uses separate threads to handle tx and rx requests.<br />
* qemu: QEMU is in charge of the following:<br />
** allow multiple tap file descriptors to be used for a single emulated NIC<br />
** a userspace multiqueue virtio-net implementation, used to maintain compatibility and to handle management and migration<br />
** control vhost based on the userspace multiqueue virtio-net state<br />
* guest driver:<br />
** Allocate multiple rx/tx queues<br />
** Assign each queue an MSI-X vector in order to parallelize packet processing in the guest stack<br />
<br />
The big picture looks like:<br />
[[Image:ver1.jpg|left]]<br />
<br />
<br />
=== Choices and considerations ===<br />
* 1:1 or M:N: 1:1 is much simpler than M:N for both the code and queue/vhost management in QEMU, and in theory it can provide better parallelism than M:N. Performance tests of both implementations are needed.<br />
* Whether to use per-cpu queues: Modern 10Gb cards (and Microsoft RSS) suggest the per-cpu queue abstraction, which tries to allocate as many tx/rx queues as there are cpus and maps them 1:1. This provides better parallelism and cache locality, and could also simplify other parts of the design, such as in-order delivery and the flow director. For virtio-net, at least for guests with a small number of vcpus, per-cpu queues are the better choice.<br />
<br />
== Current status ==<br />
<br />
* macvtap/macvlan have basic multiqueue support.<br />
* bridge does not have queues, but when it uses a multiqueue tap as one of its ports, some optimization may be needed.<br />
* 1:1 Implementation <br />
** qemu parts: http://www.spinics.net/lists/kvm/msg52808.html<br />
** tap and guest driver: http://www.spinics.net/lists/kvm/msg59993.html<br />
* M:N Implementation<br />
** kk's newest series (qemu/vhost/guest drivers): http://www.spinics.net/lists/kvm/msg52094.html<br />
<br />
== Detail Design ==<br />
<br />
=== Multiqueue Macvtap ===<br />
<br />
==== Basic design: ====<br />
* Each socket is abstracted as a queue, and the basic idea is to allow multiple sockets to be attached to a single macvtap device.<br />
* Queues are attached by opening the named inode multiple times.<br />
* Each time it is opened, a new socket is attached to the device and a file descriptor is returned, to be used by a virtio-net backend (QEMU or vhost-net).<br />
==== Parallel processing ====<br />
To make the tx path lockless, macvtap uses NETIF_F_LLTX to avoid tx lock contention when the host transmits packets. So it is indeed a multiqueue network device from the host's point of view.<br />
==== Queue selector ====<br />
It has a simple flow director implementation; when it needs to transmit packets to the guest, the queue number is determined by the following steps (see the sketch after this list):<br />
# if the skb has an rx queue mapping (for example, it comes from a multiqueue NIC), use it to choose the socket/queue<br />
# otherwise, if we can calculate the rxhash of the skb, use it to choose the socket/queue<br />
# if the above two steps fail, fall back to the first available socket/queue<br />
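<br />
A minimal sketch of this selection logic, using the skb helpers of that era (skb_rx_queue_recorded(), skb_get_rx_queue(), skb_get_rxhash()); the surrounding driver logic is simplified:<br />
<pre>
/* Sketch of the three-step queue selection described above; the same
 * logic is reused by multiqueue tun/tap below. */
static u32 select_queue(struct sk_buff *skb, u32 numqueues)
{
        u32 rxhash;

        /* 1. The skb already carries an rx queue mapping (e.g. it came
         *    from a multiqueue NIC): reuse that mapping. */
        if (skb_rx_queue_recorded(skb))
                return skb_get_rx_queue(skb) % numqueues;

        /* 2. Hash the flow so a single connection sticks to one queue. */
        rxhash = skb_get_rxhash(skb);
        if (rxhash)
                return rxhash % numqueues;

        /* 3. Fall back to the first available socket/queue. */
        return 0;
}
</pre>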
=== Multiqueue tun/tap ===<br />
==== Basic design ====<br />
* Borrowing the idea from macvtap, we simply allow multiple sockets to be attached to a single tap device.<br />
* As there is no named inode for a tap device, new ioctls (IFF_ATTACH_QUEUE/IFF_DETACH_QUEUE) are introduced to attach or detach a socket from tun/tap; these can be used by the virtio-net backend to add or delete a queue.<br />
* All socket-related structures are moved to the file's private_data and initialized during file open.<br />
* To keep the semantics of TUNSETIFF and make the changes transparent to legacy users of tun/tap, the allocation and initialization of the network device is still done in TUNSETIFF, and the first queue is atomically attached.<br />
* IFF_ATTACH_QUEUE is used to attach an unattached file/socket to a tap device. IFF_DETACH_QUEUE is used to detach a file from a tap device; it can also temporarily disable a queue, which is useful for maintaining backward compatibility with guests (running a single-queue driver on a multiqueue device).<br />
<br />
Example:<br />
Pseudo code to create a two-queue tap device (a runnable sketch against the interface that was eventually merged follows):<br />
# fd1 = open("/dev/net/tun")<br />
# ioctl(fd1, TUNSETIFF, "tap")<br />
# fd2 = open("/dev/net/tun")<br />
# ioctl(fd2, IFF_ATTACH_QUEUE, "tap")<br />
Then we have a two-queue tap device with fd1 and fd2 as its queue sockets.<br />
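<br />
For comparison, the interface that was eventually merged (see Documentation/networking/tuntap.txt, section 3.3) creates queues by opening /dev/net/tun repeatedly with IFF_MULTI_QUEUE set, and enables/disables a queue via TUNSETQUEUE with ifr_flags set to IFF_ATTACH_QUEUE or IFF_DETACH_QUEUE. A minimal sketch:<br />
<pre>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Open 'queues' file descriptors on the same multiqueue tap device;
 * each fd is one queue and can be handed to a virtio-net backend. */
int tun_alloc_mq(const char *dev, int queues, int *fds)
{
        struct ifreq ifr;
        int fd, i;

        memset(&ifr, 0, sizeof(ifr));
        ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
        strncpy(ifr.ifr_name, dev, IFNAMSIZ - 1);

        for (i = 0; i < queues; i++) {
                if ((fd = open("/dev/net/tun", O_RDWR)) < 0)
                        goto err;
                if (ioctl(fd, TUNSETIFF, (void *)&ifr) < 0) {
                        close(fd);
                        goto err;
                }
                fds[i] = fd;
        }
        return 0;
err:
        while (--i >= 0)
                close(fds[i]);
        return -1;
}
</pre>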
<br />
==== Parallel processing ====<br />
Just like macvtap, NETIF_F_LLTX is also used in tun/tap to avoid tx lock contention, so tun/tap is in fact also a multiqueue network device in the host.<br />
==== Queue selector (same as macvtap) ====<br />
It has the same simple flow director implementation; when it needs to transmit packets to the guest, the queue number is determined by:<br />
# if the skb has an rx queue mapping (for example, it comes from a multiqueue NIC), use it to choose the socket/queue<br />
# otherwise, if we can calculate the rxhash of the skb, use it to choose the socket/queue<br />
# if the above two steps fail, fall back to the first available socket/queue<br />
<br />
==== Further Optimization? ====<br />
<br />
# rxhash can only be used for distributing workloads across vcpus. The target vcpu may not be the one that is expected to do the recvmsg(). So further optimizations may be needed, such as (see the sketch after this list):<br />
## A simple hash-to-queue table that records the cpu/queue used by each flow. It is updated when the guest sends packets; when tun/tap transmits packets to the guest, this table can be used for queue selection.<br />
## Some co-operation between the host and the guest driver to pass information such as which vcpu issues the recvmsg().<br />
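<br />
A sketch of the proposed hash-to-queue table (in the merged tun driver this idea became the flow cache; the names and the fixed-size, lock-free layout here are illustrative only):<br />
<pre>
#define FLOW_TABLE_SIZE 1024

struct flow_entry {
        u32 rxhash;     /* flow hash */
        u16 queue;      /* queue the guest last used to send this flow */
};

static struct flow_entry flow_table[FLOW_TABLE_SIZE];

/* Tx path (guest -> host): remember which queue the flow used. */
static void flow_update(u32 rxhash, u16 queue)
{
        struct flow_entry *e = &flow_table[rxhash % FLOW_TABLE_SIZE];

        e->rxhash = rxhash;
        e->queue = queue;
}

/* Rx path (host -> guest): steer the packet back to that queue,
 * or return -1 if the flow is unknown. */
static int flow_lookup(u32 rxhash)
{
        struct flow_entry *e = &flow_table[rxhash % FLOW_TABLE_SIZE];

        return e->rxhash == rxhash ? e->queue : -1;
}
</pre>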
<br />
=== vhost ===<br />
* 1:1 without changes<br />
* M:N [TBD]<br />
<br />
=== qemu changes ===<br />
The changes in QEMU consist of the following parts:<br />
* Add generic multiqueue support to the NIC layer: as the receive function of the NIC backend is only aware of VLANClientState, we must make it aware of the queue index, so:<br />
** Store queue_index in VLANClientState<br />
** Store multiple VLANClientState in NICState<br />
** Let netdev parameters accept multiple netdev ids, and link those tap based VLANClientState to their peers in NICState<br />
* Userspace multiqueue support in virtio-net<br />
** Allocate multiple virtqueues<br />
** Expose the queue numbers through config space<br />
** Enable multiqueue support in the backend only when the feature is negotiated<br />
** Handle packet request based on the queue_index of virtqueue and VLANClientState<br />
** migration handling<br />
* Vhost enable/disable<br />
** launch multiple vhost threads<br />
** set up the eventfds and control the start/stop of the vhost_net backend<br />
Usage looks like:<br />
qemu -netdev tap,id=hn0,fd=100 -netdev tap,id=hn1,fd=101 -device virtio-net-pci,netdev=hn0#hn1,queue=2 .....<br />
<br />
TODO: more user-friendly cmdline such as<br />
qemu -netdev tap,id=hn0,fd=100,fd=101 -device virtio-net-pci,netdev=hn0,queues=2<br />
<br />
<br />
=== guest driver ===<br />
The changes in the guest driver are mainly (a sketch of the queue choice follows the list):<br />
* Allocate the number of tx and rx queues based on the queue number in the config space<br />
* Assign each queue an MSI-X vector<br />
* Per-queue handling of TX/RX requests<br />
* Simply use skb_tx_hash() to choose the tx queue<br />
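<br />
A minimal sketch of that tx queue choice, written against a ~3.x-era ndo_select_queue() signature; the function name is illustrative:<br />
<pre>
/* Pick the tx queue purely from the kernel's generic skb hash, as
 * described above; no per-vcpu binding yet. */
static u16 virtnet_select_queue(struct net_device *dev, struct sk_buff *skb)
{
        return skb_tx_hash(dev, skb);
}
</pre>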
<br />
==== Future Optimizations ====<br />
* Per-vcpu queues: Allocate as many tx/rx queues as there are vcpus, and bind each tx/rx queue pair to a specific vcpu by:<br />
** setting the MSI-X irq affinity for tx/rx,<br />
** using smp_processor_id() to choose the tx queue.<br />
* Comment: in theory, this should improve parallelism. [TBD]<br />
<br />
* ...<br />
<br />
== Enable MQ feature ==<br />
* create tap device with multiple queues, please reference<br />
Documentation/networking/tuntap.txt:(3.3 Multiqueue tuntap interface)<br />
* enable mq for tap (suppose N queue pairs) -netdev tap,vhost=on,queues=N<br />
* enable mq and specify msix vectors in qemu cmdline (2N+2 vectors, N for tx queues, N for rx queues, 1 for config, and one for possible control vq): -device virtio-net-pci,mq=on,vectors=2N+2...<br />
* enable mq in guest by 'ethtool -L eth0 combined $queue_num'<br />
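<br />
For example, with N=2 queue pairs (so 2N+2 = 6 vectors), the host side would look like:<br />
qemu-kvm ... -netdev tap,id=hn0,vhost=on,queues=2 -device virtio-net-pci,netdev=hn0,mq=on,vectors=6 ...<br />
and inside the guest:<br />
ethtool -L eth0 combined 2<br />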
<br />
== Test ==<br />
* Test tool: netperf, iperf<br />
* Test protocol: TCP_STREAM TCP_MAERTS TCP_RR<br />
** between localhost and guest<br />
** between external host and guest with a 10gb direct link<br />
** regression criteria: throughput / %cpu<br />
* Test method:<br />
** multiple sessions of netperf: 1 2 4 8 16<br />
** compare with the single queue implementation<br />
* Other<br />
** numactl to bind the cpu node and memory node<br />
** autotest implements a performance regression test, using a T-test<br />
** use netperf demo-mode to get more stable results<br />
== Performance Numbers ==<br />
[[Multiqueue-performance-Sep-13|Performance]]<br />
== TODO ==<br />
== Reference ==</div>
Jasowang
https://linux-kvm.org/index.php?title=Multiqueue&diff=173586
Multiqueue
2016-02-02T09:43:03Z
<p>Jasowang: /* Status & Challenges */</p>
<hr />
<div>= Multiqueue virtio-net =<br />
<br />
== Overview ==<br />
<br />
This page provides information about the design of multi-queue virtio-net, an approach enables packet sending/receiving processing to scale with the number of available vcpus of guest. This page provides an overview of multiqueue virtio-net and discusses the design of various parts involved. The page also contains some basic performance test result. This work is in progress and the design may changes.<br />
<br />
== Contact ==<br />
* Jason Wang <jasowang@redhat.com><br />
<br />
== Rationale ==<br />
<br />
Today's high-end server have more processors, guests running on them tend have an increasing number of vcpus. The scale of the protocol stack in guest in restricted because of the single queue virtio-net:<br />
<br />
* The network performance does not scale as the number of vcpus increasing: Guest can not transmit or retrieve packets in parallel as virtio-net have only one TX and RX, virtio-net drivers must be synchronized before sending and receiving packets. Even through there's software technology to spread the loads into different processor such as RFS, such kind of method is only for transmission and is really expensive in guest as they depends on IPI which may brings extra overhead in virtualized environment.<br />
* Multiqueue nic were more common used and is well supported by linux kernel, but current virtual nic can not utilize the multi queue support: the tap and virtio-net backend must serialize the co-current transmission/receiving request comes from different cpus.<br />
<br />
In order the remove those bottlenecks, we must allow the paralleled packet processing by introducing multi queue support for both back-end and guest drivers. Ideally, we may let the packet handing be done by processors in parallel without interleaving and scale the network performance as the number of vcpus increasing.<br />
<br />
== Git & Cmdline ==<br />
<br />
* kernel changes: git://github.com/jasowang/kernel-mq.git<br />
* qemu-kvm changes: git://github.com/jasowang/qemu-kvm-mq.git<br />
* qemu-kvm -netdev tap,id=hn0,queues=M -device virtio-net-pci,netdev=hn0,vectors=2M+1 ...<br />
<br />
== Status & Challenges ==Have patches for all part but need performance tuning for small packet transmission<br />
<br />
* Status<br />
** merged upstream<br />
<br />
* Challenges<br />
** small packet transmission<br />
*** Reason:<br />
**** spread the packets into different queue reduce the possibility of batching thus damage the performance.<br />
**** some but not too much batching may help for the performance<br />
*** Solution and challenge:<br />
***** Find a adaptive/dynamic algorithm to switch between one queue mode and multiqueue mode<br />
****** Find the threshold to do the switch, not easy as the traffic were unexpected in real workloads<br />
****** Avoid the packet re-ordering when switching packet<br />
***** Current Status & working on:<br />
****** Add ioctl to notify tap to switch to one queue mode<br />
****** switch when needed<br />
** vhost threading<br />
*** Three model were proposed:<br />
**** per vq pairs, each vhost thread is polling a tx/rx vq pairs<br />
***** simple<br />
***** regression in small packet transmission<br />
***** lack the numa affinity as vhost does the copy<br />
***** may not scale well when using multiqueue as we may create more vhost threads than the numer of host cpu<br />
**** multi-workers: multiple worker thread for a device<br />
***** wakeup all threads and the thread contend for the work<br />
***** improve the parallism especially for RR tes<br />
***** broadcast wakeup and contention<br />
***** #vhost threads may greater than #cpu<br />
***** no numa consideration<br />
***** regression in small packet / small #instances<br />
**** per-cpu vhost thread:<br />
***** pick a random thread in the same socket (except for the cpu that initiated the request) to handle the request<br />
***** best performance in most conditions<br />
***** schedule by it self and bypass the host scheduler, only suitable for network load<br />
***** regression in small packet / small #instances<br />
*** Solution and challenge:<br />
**** More testing<br />
<br />
== Design Goals ==<br />
<br />
=== Parallel send/receive processing ===<br />
<br />
To make sure the whole stack could be worked in parallel, the parallelism of not only the front-end (guest driver) but also the back-end (vhost and tap/macvtap) must be explored. This is done by:<br />
<br />
* Allowing multiple sockets to be attached to tap/macvtap<br />
* Using multiple threaded vhost to serve as the backend of a multiqueue capable virtio-net adapter<br />
* Use a multi-queue awared virtio-net driver to send and receive packets to/from each queue<br />
<br />
=== In order delivery ===<br />
<br />
Packets for a specific stream are delivered in order to the TCP/IP stack in guest.<br />
<br />
=== Low overhead ===<br />
<br />
The multiqueue implementation should be low overhead, cache locality and send-side scaling could be maintained by<br />
<br />
* making sure the packets form a single connection are mapped to a specific processor.<br />
* the send completion (TCP ACK) were sent to the same vcpu who send the data<br />
* other considerations such as NUMA and HT<br />
<br />
=== No assumption about the underlying hardware ===<br />
<br />
The implementation should not tagert for specific hardware/environment. For example we should not only optimize the the host nic with RSS or flow director support.<br />
<br />
=== Compatibility ===<br />
* Guest ABI: Based on the virtio specification, the multiqueue implementation of virtio-net should keep the compatibility with the single queue. The ability of multiqueue must be enabled through feature negotiation which make sure single queue driver can work under multiqueue backend, and multiqueue driver can work in single queue backend.<br />
* Userspace ABI: As the changes may touch tun/tap which may have non-virtualized users, the semantics of ioctl must be kept in order to not break the application that use them. New function must be doen through new ioctls.<br />
<br />
=== Management friendly ===<br />
The backend (tap/macvtap) should provides an easy to changes the number of queues/sockets. and qemu with multiqueue support should also be management software friendly, qemu should have the ability to accept file descriptors through cmdline and SCM_RIGHTS.<br />
<br />
== High level Design ==<br />
The main goals of multiqueue is to explore the parallelism of each module who is involved in the packet transmission and reception:<br />
* macvtap/tap: For single queue virtio-net, one socket of macvtap/tap was abstracted as a queue for both tx and rx. We can reuse and extend this abstraction to allow macvtap/tap can dequeue and enqueue packets from multiple sockets. Then each socket can be treated as a tx and rx, and macvtap/tap is fact a multi-queue device in the host. The host network codes can then transmit and receive packets in parallel.<br />
* vhost: The parallelism could be done through using multiple vhost threads to handle multiple sockets. Currently, there's two choices in design.<br />
** 1:1 mapping between vhost threads and sockets. This method does not need vhost changes and just launch the the same number of vhost threads as queues. Each vhost thread is just used to handle one tx ring and rx ring just as they are used for single queue virtio-net.<br />
** M:N mapping between vhost threads and sockets. This methods allow a single vhost thread to poll more than one tx/rx rings and sockests and use separated threads to handle tx and rx request.<br />
* qemu: qemu is in charge of the fllowing things<br />
** allow multiple tap file descriptors to be used for a single emulated nic<br />
** userspace multiqueue virtio-net implementation which is used to maintaining compatibility, doing management and migration<br />
** control the vhost based on the userspace multiqueue virtio-net<br />
* guest driver<br />
** Allocate multiple rx/tx queues<br />
** Assign each queue a MSI-X vector in order to parallize the packet processing in guest stack<br />
<br />
The big picture looks like:<br />
[[Image:ver1.jpg|left]]<br />
<br />
<br />
=== Choices and considerations ===<br />
* 1:1 or M:N, 1:1 is much more simpler than M:N for both coding and queue/vhost management in qemu. And in theory, it could provides better parallelism than M:N. Performance test is needed to be done by both of the implementation.<br />
* Whether use a per-cpu queue: Morden 10gb card ( and M$ RSS) suggest to use the abstract of per-cpu queue that tries to allocate as many tx/rx queues as cpu numbers and does a 1:1 mapping between them. This can provides better parallelism and cache locality. Also this could simplify the other design such as in order delivery and flow director. For virtio-net, at least for guest with small number of vcpus, per-cpu queues is a better choice. <br />
The big picture is shown as.<br />
<br />
== Current status ==<br />
<br />
* macvtap/macvlan have basic multiqueue support.<br />
* bridge does not have queue, but when it use a multiqueue tap as one of it port, some optimization may be needed.<br />
* 1:1 Implementation <br />
** qemu parts: http://www.spinics.net/lists/kvm/msg52808.html<br />
** tap and guest driver: http://www.spinics.net/lists/kvm/msg59993.html<br />
* M:N Implementation<br />
** kk's newest series(qemu/vhost/guest drivers): http://www.spinics.net/lists/kvm/msg52094.html<br />
<br />
== Detail Design ==<br />
<br />
=== Multiqueue Macvtap ===<br />
<br />
==== Basic design: ====<br />
* Each socket were abstracted as a queue and the basic is allow multiple sockets to be attached to a single macvtap device. <br />
* Queue attaching is done through open the named inode many times<br />
* Each time it is opened a new socket were attached to the deivce and a file descriptor were returned and use for a backend of virtio-net backend (qemu or vhost-net).<br />
==== Parallel processing ====<br />
In order to make the tx path lockless, macvtap use NETIF_F_LLTX to avoid the tx lock contention when host transmit packets. So its indeed a multiqueue network device from the point view of host.<br />
==== Queue selector ====<br />
It has a simple flow director implementation, when it needs to transmit packets to guest, the queue number is determined by:<br />
# if the skb have rx queue mapping (for example comes from a mq nic), use this to choose the socket/queue<br />
# if we can calculate the rxhash of the skb, use it to choose the socket/queue<br />
# if the above two steps fail, always find the first available socket/queue<br />
=== Multiqueue tun/tap ===<br />
==== Basic design ====<br />
* Borrow the idea of macvtap, we just allow multiple sockets to be attached in a single tap device. <br />
* As there's no named inode for tap device, new ioctls IFF_ATTACH_QUEUE/IFF_DETACH_QUEUE is introduced to attach or detach a socket from tun/tap which can be used by virtio-net backend to add or delete a queue.. <br />
* All socket related structures were moved to the private_data of file and initialized during file open. <br />
* In order to keep semantics of TUNSETIFF and make the changes transparent to the legacy user of tun/tap, the allocating and initializing of network device is still done in TUNSETIFF, and first queue is atomically attached.<br />
* IFF_DETACH_QUEUE is used to attach an unattached file/socket to a tap device. * IFF_DETACH_QUEUE is used to detach a file from a tap device. IFF_DETACH_QUEUE is used for temporarily disable a queue that is useful for maintain backward compatibility of guest. ( Running single queue driver on a multiqueue device ).<br />
<br />
Example:<br />
pesudo codes to create a two queue tap device<br />
# fd1 = open("/dev/net/tun")<br />
# ioctl(fd1, TUNSETIFF, "tap")<br />
# fd2 = open("/dev/net/tun")<br />
# ioctl(fd2, IFF_ATTACH_QUEUE, "tap")<br />
then we have two queues tap device with fd1 and fd2 as its queue sockets.<br />
<br />
==== Parallel processing ====<br />
Just like macvtap, NETIF_F_LLTX is also used to tun/tap to avoid tx lock contention. And tun/tap is also in fact a multiqueue network device of host.<br />
==== Queue selector (same as macvtap)====<br />
It has a simple flow director implementation, when it needs to transmit packets to guest, the queue number is determined by:<br />
# if the skb have rx queue mapping (for example comes from a mq nic), use this to choose the socket/queue<br />
# if we can calculate the rxhash of the skb, use it to choose the socket/queue<br />
# if the above two steps fail, always find the first available socket/queue<br />
<br />
==== Further Optimization ? ===<br />
<br />
# rxhash can only used for distributing workloads into different vcpus. The target vcpu may not be the one who is expected to do the recvmsg(). So more optimizations may need as:<br />
## A simple hash to queue table to record the cpu/queue used by the flow. It is updated when guest send packets, and when tun/tap transmit packets to guest, this table could be used to do the queue selection.<br />
## Some co-operation between host and guest driver to pass information such as which vcpu is isssue a recvmsg().<br />
<br />
=== vhost ===<br />
* 1:1 without changes<br />
* M:N [TBD]<br />
<br />
=== qemu changes ===<br />
The changes in qemu contain three parts:<br />
* Add generic multiqueue support to the nic layer: as the receiving function of the nic backend is only aware of VLANClientState, we must make it aware of the queue index, so:<br />
** Store queue_index in VLANClientState<br />
** Store multiple VLANClientState in NICState<br />
** Let netdev parameters accept multiple netdev ids, and link those tap based VLANClientState to their peers in NICState<br />
* Userspace multiqueue support in virtio-net<br />
** Allocate multiple virtqueues<br />
** Expose the queue numbers through config space<br />
** Enable multiqueue support in the backend only when the feature has been negotiated<br />
** Handle packet requests based on the queue_index of the virtqueue and VLANClientState<br />
** Migration handling<br />
* Vhost enable/disable<br />
** Launch multiple vhost threads<br />
** Set up eventfds and control the start/stop of the vhost_net backend<br />
Usage looks like:<br />
qemu -netdev tap,id=hn0,fd=100 -netdev tap,id=hn1,fd=101 -device virtio-net-pci,netdev=hn0#hn1,queue=2 .....<br />
<br />
TODO: more user-friendly cmdline such as<br />
qemu -netdev tap,id=hn0,fd=100,fd=101 -device virtio-net-pci,netdev=hn0,queues=2<br />
<br />
<br />
=== guest driver ===<br />
The changes in the guest driver are mainly:<br />
* Allocate the number of tx and rx queue based on the queue number in config space<br />
* Assign each queue a MSI-X vector<br />
* Per-queue handling of TX/RX request<br />
* Simply use skb_tx_hash() to choose the queue (see the sketch below)<br />
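On the tx side this can be as small as wiring skb_tx_hash() into the driver's ndo_select_queue hook; a minimal sketch (the function name is invented, and the hook signature matches kernels of this era):<br />
<br />
 /* Sketch: choose the tx queue with skb_tx_hash(); hooked up through<br />
  * net_device_ops.ndo_select_queue. The name is illustrative. */<br />
 static u16 virtnet_select_queue(struct net_device *dev, struct sk_buff *skb)<br />
 {<br />
         return skb_tx_hash(dev, skb);<br />
 }<br />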
<br />
==== Future Optimizations ====<br />
* Per-vcpu queue: allocate as many tx/rx queue pairs as there are vcpus, and bind each pair to a specific vcpu by:<br />
** Setting the MSI-X irq affinity for tx/rx.<br />
** Using smp_processor_id() to choose the tx queue.<br />
* Comments: in theory, this should improve parallelism. [TBD]<br />
<br />
* ...<br />
<br />
== Enable MQ feature ==<br />
* Create a tap device with multiple queues; see Documentation/networking/tuntap.txt (3.3 Multiqueue tuntap interface)<br />
* Enable mq for tap (assuming N queue pairs): -netdev tap,vhost=on,queues=N<br />
* Enable mq and specify msix vectors on the qemu cmdline (2N+2 vectors: N for tx queues, N for rx queues, 1 for config, and 1 for the possible control vq): -device virtio-net-pci,mq=on,vectors=2N+2...<br />
* Enable mq in the guest with 'ethtool -L eth0 combined $queue_num' (a concrete example follows)<br />
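As a concrete example of the steps above, with N=4 queue pairs the vector count is 2*4+2=10:<br />
<br />
 qemu -netdev tap,id=hn0,vhost=on,queues=4 -device virtio-net-pci,netdev=hn0,mq=on,vectors=10 ...<br />
 ethtool -L eth0 combined 4     # run inside the guest<br />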
<br />
== Test ==<br />
* Test tool: netperf, iperf<br />
* Test protocol: TCP_STREAM TCP_MAERTS TCP_RR<br />
** between localhost and guest<br />
** between external host and guest with a 10gb direct link<br />
** regression criteria: throughput/%cpu<br />
* Test method:<br />
** multiple netperf sessions: 1, 2, 4, 8, 16<br />
** compare with the single queue implementation<br />
* Other<br />
** numactl to bind the cpu node and memory node<br />
** autotest implements a performance regression test that uses a T-test<br />
** use netperf demo-mode to get more stable results<br />
== Performance Numbers ==<br />
[[Multiqueue-performance-Sep-13|Performance]]<br />
== TODO ==<br />
== Reference ==</div>
Jasowang
https://linux-kvm.org/index.php?title=Multiqueue&diff=173585
Multiqueue
2016-02-02T09:42:16Z
<p>Jasowang: /* Contact */</p>
<hr />
<div>= Multiqueue virtio-net =<br />
<br />
== Overview ==<br />
<br />
This page provides information about the design of multiqueue virtio-net, an approach that enables packet transmit/receive processing to scale with the number of vcpus available to the guest. It gives an overview of multiqueue virtio-net, discusses the design of the various parts involved, and contains some basic performance test results. This work is in progress and the design may change.<br />
<br />
== Contact ==<br />
* Jason Wang <jasowang@redhat.com><br />
<br />
== Rationale ==<br />
<br />
Today's high-end servers have more processors, and guests running on them tend to have an increasing number of vcpus. The scalability of the guest protocol stack is restricted by single-queue virtio-net:<br />
<br />
* Network performance does not scale as the number of vcpus increases: the guest cannot transmit or receive packets in parallel, as virtio-net has only one TX and one RX queue and virtio-net drivers must synchronize before sending and receiving packets. Even though there are software techniques such as RFS to spread the load across processors, they address only one direction of the traffic and are expensive in a guest, since they depend on IPIs, which bring extra overhead in a virtualized environment.<br />
* Multiqueue NICs have become common and are well supported by the Linux kernel, but the current virtual NIC cannot utilize this multiqueue support: tap and the virtio-net backend must serialize concurrent transmit/receive requests coming from different cpus.<br />
<br />
In order to remove those bottlenecks, we must allow parallel packet processing by introducing multiqueue support to both the backend and the guest driver. Ideally, packet handling can then be done by processors in parallel without interleaving, and network performance scales as the number of vcpus increases.<br />
<br />
== Git & Cmdline ==<br />
<br />
* kernel changes: git://github.com/jasowang/kernel-mq.git<br />
* qemu-kvm changes: git://github.com/jasowang/qemu-kvm-mq.git<br />
* qemu-kvm -netdev tap,id=hn0,queues=M -device virtio-net-pci,netdev=hn0,vectors=2M+1 ...<br />
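For example, with M=4 queue pairs the formula above gives vectors=2*4+1=9:<br />
<br />
 qemu-kvm -netdev tap,id=hn0,queues=4 -device virtio-net-pci,netdev=hn0,vectors=9 ...<br />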
<br />
== Status & Challenges ==<br />
* Status<br />
** Patches exist for all parts, but small packet transmission needs performance tuning<br />
** Several new vhost threading models have been proposed<br />
<br />
* Challenges<br />
** Small packet transmission<br />
*** Reason:<br />
**** Spreading packets across different queues reduces the chance of batching and thus hurts performance.<br />
**** Some, but not too much, batching may help performance.<br />
*** Solution and challenge:<br />
***** Find an adaptive/dynamic algorithm to switch between single-queue and multiqueue mode<br />
****** Finding the threshold for the switch is not easy, as traffic is unpredictable in real workloads<br />
****** Avoid packet re-ordering when switching modes<br />
***** Current status & working on:<br />
****** Add an ioctl to notify tap to switch to single-queue mode<br />
****** Switch when needed<br />
** Vhost threading<br />
*** Three models have been proposed:<br />
**** Per vq pair: each vhost thread polls one tx/rx vq pair<br />
***** Simple<br />
***** Regression in small packet transmission<br />
***** Lacks NUMA affinity, as vhost does the copy<br />
***** May not scale well with multiqueue, as we may create more vhost threads than there are host cpus<br />
**** Multi-worker: multiple worker threads per device<br />
***** All threads are woken up and contend for the work<br />
***** Improves parallelism, especially for RR tests<br />
***** Broadcast wakeup causes contention<br />
***** The number of vhost threads may be greater than the number of host cpus<br />
***** No NUMA consideration<br />
***** Regression with small packets / few instances<br />
**** Per-cpu vhost thread:<br />
***** Picks a random thread in the same socket (other than the cpu that initiated the request) to handle the request<br />
***** Best performance in most conditions<br />
***** Schedules by itself, bypassing the host scheduler; only suitable for network load<br />
***** Regression with small packets / few instances<br />
*** Solution and challenge:<br />
**** More testing<br />
<br />
== Design Goals ==<br />
<br />
=== Parallel send/receive processing ===<br />
<br />
To make sure the whole stack can work in parallel, parallelism must be exploited not only in the front-end (guest driver) but also in the back-end (vhost and tap/macvtap). This is done by:<br />
<br />
* Allowing multiple sockets to be attached to tap/macvtap<br />
* Using multi-threaded vhost to serve as the backend of a multiqueue-capable virtio-net adapter<br />
* Using a multiqueue-aware virtio-net driver to send and receive packets to/from each queue<br />
<br />
=== In order delivery ===<br />
<br />
Packets for a specific stream are delivered in order to the TCP/IP stack in guest.<br />
<br />
=== Low overhead ===<br />
<br />
The multiqueue implementation should have low overhead; cache locality and send-side scaling can be maintained by:<br />
<br />
* Making sure the packets from a single connection are mapped to a specific processor<br />
* Ensuring the send completion (TCP ACK) is delivered to the same vcpu that sent the data<br />
* Other considerations such as NUMA and HT<br />
<br />
=== No assumption about the underlying hardware ===<br />
<br />
The implementation should not target specific hardware or environments. For example, we should not optimize only for host NICs with RSS or flow director support.<br />
<br />
=== Compatibility ===<br />
* Guest ABI: based on the virtio specification, the multiqueue implementation of virtio-net should keep compatibility with single queue. Multiqueue must be enabled through feature negotiation, which makes sure a single-queue driver can work with a multiqueue backend and a multiqueue driver can work with a single-queue backend (see the sketch below).<br />
* Userspace ABI: as the changes touch tun/tap, which has non-virtualized users, the existing ioctl semantics must be kept so as not to break applications that use them. New functionality must be added through new ioctls.<br />
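For reference, the negotiation as it was eventually standardized uses the VIRTIO_NET_F_MQ feature bit plus a max_virtqueue_pairs field in config space, and the driver then programs the number of active pairs through the control virtqueue (class VIRTIO_NET_CTRL_MQ, command VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET). A simplified driver-side sketch, with the function name invented and the control-vq send elided:<br />
<br />
 #include &lt;linux/virtio_config.h&gt;<br />
 #include &lt;linux/virtio_net.h&gt;<br />
 <br />
 /* Simplified negotiation sketch; negotiate_queue_pairs() is invented. */<br />
 static u16 negotiate_queue_pairs(struct virtio_device *vdev)<br />
 {<br />
         u16 pairs = 1;  /* single queue pair when the feature is absent */<br />
 <br />
         if (virtio_has_feature(vdev, VIRTIO_NET_F_MQ))<br />
                 virtio_cread(vdev, struct virtio_net_config,<br />
                              max_virtqueue_pairs, &pairs);<br />
 <br />
         /* a real driver now sends VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET on the<br />
          * control vq to tell the device how many pairs it will use */<br />
         return pairs;<br />
 }<br />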
<br />
=== Management friendly ===<br />
The backend (tap/macvtap) should provide an easy way to change the number of queues/sockets, and qemu with multiqueue support should be friendly to management software: qemu should be able to accept file descriptors through the cmdline and via SCM_RIGHTS.<br />
<br />
== High level Design ==<br />
The main goal of multiqueue is to exploit the parallelism of each module involved in packet transmission and reception:<br />
* macvtap/tap: for single-queue virtio-net, one macvtap/tap socket was abstracted as a queue for both tx and rx. We can reuse and extend this abstraction so that macvtap/tap can enqueue and dequeue packets through multiple sockets. Each socket can then be treated as a tx/rx queue, making macvtap/tap in fact a multiqueue device in the host, and the host networking code can transmit and receive packets in parallel.<br />
* vhost: parallelism can be achieved by using multiple vhost threads to handle multiple sockets. Currently there are two design choices:<br />
** 1:1 mapping between vhost threads and sockets. This method needs no vhost changes and just launches the same number of vhost threads as queues. Each vhost thread handles one tx ring and one rx ring, just as for single-queue virtio-net.<br />
** M:N mapping between vhost threads and sockets. This method allows a single vhost thread to poll more than one tx/rx ring and socket, and uses separate threads to handle tx and rx requests.<br />
* qemu: qemu is in charge of the following things:<br />
** Allowing multiple tap file descriptors to be used for a single emulated nic<br />
** A userspace multiqueue virtio-net implementation, used to maintain compatibility and to handle management and migration<br />
** Controlling vhost based on the userspace multiqueue virtio-net<br />
* guest driver<br />
** Allocate multiple rx/tx queues<br />
** Assign each queue an MSI-X vector in order to parallelize packet processing in the guest stack<br />
<br />
The big picture looks like:<br />
[[Image:ver1.jpg|left]]<br />
<br />
<br />
=== Choices and considerations ===<br />
* 1:1 or M:N: 1:1 is much simpler than M:N for both coding and queue/vhost management in qemu, and in theory it could provide better parallelism. Performance tests of both implementations are needed.<br />
* Whether to use per-cpu queues: modern 10gb cards (and Microsoft RSS) suggest the per-cpu queue abstraction, which tries to allocate as many tx/rx queues as there are cpus and maps them 1:1. This provides better parallelism and cache locality, and also simplifies other parts of the design such as in-order delivery and the flow director. For virtio-net, at least for guests with a small number of vcpus, per-cpu queues are the better choice. <br />
The big picture is shown above.<br />
<br />
== Current status ==<br />
<br />
* macvtap/macvlan have basic multiqueue support.<br />
* The bridge does not have queues, but when it uses a multiqueue tap as one of its ports, some optimization may be needed.<br />
* 1:1 Implementation <br />
** qemu parts: http://www.spinics.net/lists/kvm/msg52808.html<br />
** tap and guest driver: http://www.spinics.net/lists/kvm/msg59993.html<br />
* M:N Implementation<br />
** kk's newest series (qemu/vhost/guest drivers): http://www.spinics.net/lists/kvm/msg52094.html<br />
<br />
== Detail Design ==<br />
<br />
=== Multiqueue Macvtap ===<br />
<br />
==== Basic design ====<br />
* Each socket is abstracted as a queue, and the basic idea is to allow multiple sockets to be attached to a single macvtap device. <br />
* Queue attaching is done by opening the named inode multiple times.<br />
* Each time it is opened, a new socket is attached to the device and a file descriptor is returned for use by the virtio-net backend (qemu or vhost-net).<br />
==== Parallel processing ====<br />
In order to make the tx path lockless, macvtap uses NETIF_F_LLTX to avoid tx lock contention when the host transmits packets. So it is indeed a multiqueue network device from the host's point of view.<br />
==== Queue selector ====<br />
Macvtap has a simple flow director implementation. When it needs to transmit packets to the guest, the queue number is determined as follows:<br />
# If the skb has an rx queue mapping (for example, it comes from a multiqueue NIC), use it to choose the socket/queue.<br />
# Otherwise, if an rxhash can be computed for the skb, use it to choose the socket/queue.<br />
# If both of the above fail, fall back to the first available socket/queue.<br />
=== Multiqueue tun/tap ===<br />
==== Basic design ====<br />
* Borrowing the idea from macvtap, we simply allow multiple sockets to be attached to a single tap device. <br />
* As there is no named inode for a tap device, new ioctls IFF_ATTACH_QUEUE/IFF_DETACH_QUEUE are introduced to attach a socket to or detach a socket from tun/tap; the virtio-net backend can use them to add or delete a queue. <br />
* All socket-related structures are moved to the private_data of the file and initialized during file open. <br />
* In order to keep the semantics of TUNSETIFF and make the changes transparent to legacy users of tun/tap, allocation and initialization of the network device is still done in TUNSETIFF, and the first queue is attached atomically.<br />
* IFF_ATTACH_QUEUE is used to attach an unattached file/socket to a tap device. IFF_DETACH_QUEUE is used to detach a file from a tap device; a detached queue is temporarily disabled, which is useful for maintaining backward compatibility with guests (e.g. running a single-queue driver on a multiqueue device).<br />
<br />
Example:<br />
Pseudo code to create a two-queue tap device:<br />
# fd1 = open("/dev/net/tun")<br />
# ioctl(fd1, TUNSETIFF, "tap")<br />
# fd2 = open("/dev/net/tun")<br />
# ioctl(fd2, IFF_ATTACH_QUEUE, "tap")<br />
Then we have a two-queue tap device with fd1 and fd2 as its queue sockets.<br />
<br />
==== Parallel processing ====<br />
Just like macvtap, NETIF_F_LLTX is used by tun/tap to avoid tx lock contention, so tun/tap is in fact also a multiqueue network device from the host's point of view.<br />
==== Queue selector (same as macvtap) ====<br />
tun/tap uses the same simple flow director as macvtap: the queue is chosen by the skb's recorded rx queue if present, otherwise by its rxhash, and otherwise the first available socket/queue is used.<br />
<br />
==== Further Optimization? ====<br />
<br />
# rxhash can only be used for distributing workloads across vcpus; the target vcpu may not be the one that is expected to do the recvmsg(). So further optimizations may be needed, such as:<br />
## A simple hash-to-queue table that records the cpu/queue used by each flow. It is updated when the guest sends packets; when tun/tap transmits packets to the guest, the table is consulted for queue selection.<br />
## Some co-operation between the host and the guest driver to pass information such as which vcpu issued the recvmsg().<br />
<br />
=== vhost ===<br />
* 1:1 without changes<br />
* M:N [TBD]<br />
<br />
=== qemu changes ===<br />
The changes in qemu contain three parts:<br />
* Add generic multiqueue support to the nic layer: as the receiving function of the nic backend is only aware of VLANClientState, we must make it aware of the queue index, so:<br />
** Store queue_index in VLANClientState<br />
** Store multiple VLANClientState in NICState<br />
** Let netdev parameters accept multiple netdev ids, and link those tap based VLANClientState to their peers in NICState<br />
* Userspace multiqueue support in virtio-net<br />
** Allocate multiple virtqueues<br />
** Expose the queue numbers through config space<br />
** Enable multiqueue support in the backend only when the feature has been negotiated<br />
** Handle packet requests based on the queue_index of the virtqueue and VLANClientState<br />
** Migration handling<br />
* Vhost enable/disable<br />
** Launch multiple vhost threads<br />
** Set up eventfds and control the start/stop of the vhost_net backend<br />
Usage looks like:<br />
qemu -netdev tap,id=hn0,fd=100 -netdev tap,id=hn1,fd=101 -device virtio-net-pci,netdev=hn0#hn1,queue=2 .....<br />
<br />
TODO: more user-friendly cmdline such as<br />
qemu -netdev tap,id=hn0,fd=100,fd=101 -device virtio-net-pci,netdev=hn0,queues=2<br />
<br />
<br />
=== guest driver ===<br />
The changes in the guest driver are mainly:<br />
* Allocate the number of tx and rx queue based on the queue number in config space<br />
* Assign each queue a MSI-X vector<br />
* Per-queue handling of TX/RX request<br />
* Simply use skb_tx_hash() to choose the queue<br />
<br />
==== Future Optimizations ====<br />
* Per-vcpu queue: allocate as many tx/rx queue pairs as there are vcpus, and bind each pair to a specific vcpu by:<br />
** Setting the MSI-X irq affinity for tx/rx.<br />
** Using smp_processor_id() to choose the tx queue.<br />
* Comments: in theory, this should improve parallelism. [TBD]<br />
<br />
* ...<br />
<br />
== Enable MQ feature ==<br />
* Create a tap device with multiple queues; see Documentation/networking/tuntap.txt (3.3 Multiqueue tuntap interface)<br />
* Enable mq for tap (assuming N queue pairs): -netdev tap,vhost=on,queues=N<br />
* Enable mq and specify msix vectors on the qemu cmdline (2N+2 vectors: N for tx queues, N for rx queues, 1 for config, and 1 for the possible control vq): -device virtio-net-pci,mq=on,vectors=2N+2...<br />
* Enable mq in the guest with 'ethtool -L eth0 combined $queue_num'<br />
<br />
== Test ==<br />
* Test tool: netperf, iperf<br />
* Test protocol: TCP_STREAM TCP_MAERTS TCP_RR<br />
** between localhost and guest<br />
** between external host and guest with a 10gb direct link<br />
** regression criteria: throughput/%cpu<br />
* Test method:<br />
** multiple netperf sessions: 1, 2, 4, 8, 16<br />
** compare with the single queue implementation<br />
* Other<br />
** numactl to bind the cpu node and memory node<br />
** autotest implements a performance regression test that uses a T-test<br />
** use netperf demo-mode to get more stable results<br />
== Performance Numbers ==<br />
[[Multiqueue-performance-Sep-13|Performance]]<br />
== TODO ==<br />
== Reference ==</div>
Jasowang
https://linux-kvm.org/index.php?title=NetworkingTodo&diff=173352
NetworkingTodo
2015-06-16T03:18:58Z
<p>Jasowang: </p>
<hr />
<div>This page should cover all networking related activity in KVM,<br />
currently most info is related to virtio-net.<br />
<br />
TODO: add bugzilla entry links.<br />
<br />
=== projects in progress. contributions are still very welcome! ===<br />
<br />
* virtio 1.0 support for linux guests<br />
required for maintainability<br />
mid.gmane.org/1414081380-14623-1-git-send-email-mst@redhat.com<br />
Developer: MST,Cornelia Huck<br />
<br />
* virtio 1.0 support in qemu<br />
required for maintainability<br />
mid.gmane.org/20141024103839.7162b93f.cornelia.huck@de.ibm.com<br />
Developer: Cornelia Huck, MST<br />
<br />
* improve net polling for cpu overcommit<br />
exit busy loop when another process is runnable<br />
mid.gmane.org/20140822073653.GA7372@gmail.com<br />
mid.gmane.org/1408608310-13579-2-git-send-email-jasowang@redhat.com<br />
Another idea is to make busy_read/busy_poll dynamic, like the dynamic PLE window.<br />
Developer: Jason Wang, MST<br />
<br />
* vhost-net/tun/macvtap cross endian support<br />
mid.gmane.org/1414572130-17014-2-git-send-email-clg@fr.ibm.com<br />
Developer: Cédric Le Goater, MST<br />
<br />
* BQL/aggregation for virtio net<br />
dependencies: orphan packets less aggressively, enable tx interrupt <br />
Developers: MST, Jason<br />
<br />
* orphan packets less aggressively (was: make pktgen work for virtio-net, or partially orphan)<br />
virtio-net orphans all skbs during tx, this used to be optimal.<br />
Recent changes in guest networking stack and hardware advances<br />
such as APICv changed optimal behaviour for drivers.<br />
We need to revisit optimizations such as orphaning all packets early<br />
to have optimal behaviour.<br />
<br />
this should also fix pktgen which is currently broken with virtio net:<br />
orphaning all skbs makes pktgen wait forever for the refcnt.<br />
Jason's idea: bring back tx interrupt (partially)<br />
Jason's idea: introduce a flag to tell pktgen not to wait<br />
Discussion here: https://patchwork.kernel.org/patch/1800711/<br />
MST's idea: add a .ndo_tx_polling not only for pktgen<br />
Developers: Jason Wang, MST<br />
<br />
* enable tx interrupt (conditionally?)<br />
Small packet TCP stream performance is not good. This is because<br />
virtio-net orphans the packet during ndo_start_xmit(), which disables the <br />
TCP small-packet optimizations like TCP Small Queues and autocorking. The<br />
idea is to enable the tx interrupt for TCP small packets.<br />
Jason's idea: switch between poll and tx interrupt mode based on recent statistics.<br />
MST's idea: use a per descriptor flag for virtio to force interrupt for a specific packet.<br />
Developer: Jason Wang, MST<br />
<br />
* vhost-net polling<br />
mid.gmane.org/20141029123831.A80F338002D@moren.haifa.ibm.com<br />
Developer: Razya Ladelsky<br />
<br />
* support more queues in tun and macvtap<br />
We limit TUN to 8 queues, but we really want 1 queue per guest CPU. The<br />
limit comes from net core, need to teach it to allocate array of<br />
pointers and not an array of queues. Jason has a draft patch to use flex<br />
array. Another thing is to move the flow caches out of tun_struct.<br />
http://mid.gmane.org/1408369040-1216-1-git-send-email-pagupta@redhat.com<br />
tun part is done.<br />
Developers: Pankaj Gupta, Jason Wang<br />
<br />
* enable multiqueue by default<br />
Multiqueue causes regression in some workloads, thus<br />
it is off by default. Documentation/networking/scaling.txt<br />
Detect and enable/disable<br />
automatically so we can make it on by default?<br />
depends on: BQL<br />
This is because GSO tends to batch less when mq is enabled.<br />
https://patchwork.kernel.org/patch/2235191/<br />
Developer: Jason Wang<br />
<br />
* rework on flow caches<br />
Current hlist implementation of flow caches has several limitations:<br />
1) in the worst case, linear search is bad<br />
2) it does not scale<br />
https://patchwork.kernel.org/patch/2025121/<br />
Developer: Jason Wang<br />
<br />
* bridge without promisc/allmulti mode in NIC<br />
given hardware support, teach bridge to program mac/vlan filtering in NIC<br />
Helps performance and security on noisy LANs<br />
http://comments.gmane.org/gmane.linux.network/266546<br />
Done for unicast, but not for multicast.<br />
Developer: Vlad Yasevich<br />
<br />
* Improve stats, make them more helpful for performance analysis<br />
Developer: Sriram Narasimhan?<br />
<br />
* Enable LRO with bridging<br />
Enable GRO for packets coming to bridge from a tap interface<br />
Better support for windows LRO<br />
Extend virtio-header with statistics for GRO packets:<br />
number of packets coalesced and number of duplicate ACKs coalesced<br />
Developer: Dmitry Fleytman?<br />
<br />
* IPoIB infiniband bridging<br />
Plan: implement macvtap for ipoib and virtio-ipoib<br />
Developer: Marcel Apfelbaum<br />
<br />
* interrupt coalescing<br />
Reduce the number of interrupts<br />
Rx interrupt coalescing should be good for rx stream throughput.<br />
Tx interrupt coalescing will help the optimization of enabling tx<br />
interrupt conditionally.<br />
Developer: Jason Wang<br />
<br />
* sharing config interrupts<br />
Support more devices by sharing a single msi vector between multiple<br />
virtio devices.<br />
(Applies to virtio-blk too).<br />
Developer: Amos Kong<br />
<br />
* Multi-queue macvtap with real multiple queues<br />
Macvtap only provides multiple queues to user in the form of multiple<br />
sockets. As each socket will perform dev_queue_xmit() and we don't<br />
really have multiple real queues on the device, we now have lock<br />
contention. This contention needs to be addressed.<br />
Developer: Vlad Yasevich<br />
<br />
* better xmit queueing for tun<br />
when guest is slower than host, tun drops packets aggressively. This is<br />
because keeping packets on the internal queue does not work well.<br />
Re-enable functionality to stop queue, probably with some watchdog to<br />
help with buggy guests.<br />
Developer: MST<br />
<br />
* Dev watchdog for virtio-net:<br />
Implement a watchdog for virtio-net. This will be useful for hunting host bugs early.<br />
Developer: Julio Faracco <jcfaracco@gmail.com><br />
<br />
* Extend virtio_net header for future offloads<br />
virtio_net header is currently fixed sized and only supports<br />
segmentation offloading (the current layout is sketched below). It would be useful if we could<br />
attach other data to virtio_net header to support things like<br />
vlan acceleration, IPv6 fragment_id pass-through, rx and tx-hash<br />
pass-through and some other ideas.<br />
Developer: Vlad Yasevich <vyasevic@redhat.com><br />
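For context, the current fixed-size header only describes checksum and segmentation state; a sketch of the layout as found in linux/virtio_net.h (comments added):<br />
<br />
 struct virtio_net_hdr {<br />
         __u8 flags;         /* e.g. VIRTIO_NET_HDR_F_NEEDS_CSUM */<br />
         __u8 gso_type;      /* e.g. VIRTIO_NET_HDR_GSO_TCPV4 */<br />
         __u16 hdr_len;      /* length of the packet headers */<br />
         __u16 gso_size;     /* GSO segment size */<br />
         __u16 csum_start;   /* where checksumming starts */<br />
         __u16 csum_offset;  /* offset to store the checksum */<br />
 };<br />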
<br />
=== projects in need of an owner ===<br />
<br />
* improve netdev polling for virtio.<br />
There are two kinds of netdev polling:<br />
- netpoll - used for debugging<br />
- rx busy polling for virtio-net [DONE]<br />
see https://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=91815639d8804d1eee7ce2e1f7f60b36771db2c9. 1 byte netperf TCP_RR shows 127% improvement.<br />
Future work is to co-operate with the host, and only do the busy polling when there is no other runnable process on the host cpu. <br />
contact: Jason Wang<br />
<br />
* drop vhostforce<br />
it's an optimization, probably not worth it anymore<br />
<br />
* avoid userspace virtio-net when vhost is enabled.<br />
ATM we run in userspace until DRIVER_OK<br />
this doubles our security attack surface,<br />
so it's best avoided.<br />
<br />
* feature negotiation for dpdk/vhost user<br />
feature negotiation seems to be broken<br />
<br />
* switch dpdk to qemu vhost user<br />
this seems like a better interface than<br />
character device in userspace,<br />
designed for out of process networking<br />
<br />
* netmap - like approach to zero copy networking<br />
is anything like this feasible on linux?<br />
<br />
* vhost-user: clean up protocol<br />
address multiple issues in vhost user protocol:<br />
missing VHOST_NET_SET_BACKEND<br />
make more messages synchronous (with a reply)<br />
VHOST_SET_MEM_TABLE, VHOST_SET_VRING_CALL<br />
mid.gmane.org/541956B8.1070203@huawei.com<br />
mid.gmane.org/54192136.2010409@huawei.com<br />
Contact: MST<br />
<br />
* ethtool selftest support for virtio-net<br />
Implement the ethtool selftest method for virtio-net for regression testing, e.g. the CVEs found for tun/macvtap, qemu and vhost.<br />
http://mid.gmane.org/1409881866-14780-1-git-send-email-hjxiaohust@gmail.com<br />
Contact: Jason Wang, Pankaj Gupta<br />
<br />
* vhost-net scalability tuning: threading for many VMs<br />
Plan: switch to workqueue shared by many VMs<br />
http://www.mail-archive.com/kvm@vger.kernel.org/msg69868.html<br />
<br />
http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/479e3578ed05bfac85257b4200427735!OpenDocument<br />
<br />
Contact: Razya Ladelsky, Bandan Das<br />
Testing: netperf guest to guest<br />
<br />
* DPDK with vhost-user<br />
Support vhost-user in addition to vhost net cuse device<br />
Contact: Linhaifeng, MST<br />
<br />
* DPDK with vhost-net/user: fix offloads<br />
DPDK requires disabling offloads ATM,<br />
need to fix this.<br />
Contact: MST<br />
<br />
* reduce per-device memory allocations<br />
vhost device is very large due to need to<br />
keep large arrays of iovecs around.<br />
we do need large arrays for correctness,<br />
but we could move them out of line,<br />
and add short inline arrays for typical use-cases.<br />
contact: MST<br />
<br />
* batch tx completions in vhost<br />
vhost already batches up to 64 tx completions for zero copy<br />
batch non zero copy as well<br />
contact: Jason Wang<br />
<br />
* better parallelize small queues<br />
don't wait for ring full to kick.<br />
add api to detect ring almost full (e.g. 3/4) and kick<br />
depends on: BQL<br />
contact: MST<br />
<br />
* improve vhost-user unit test<br />
support running on machines without hugetlbfs<br />
support running with more vm memory layouts<br />
Contact: MST<br />
<br />
* tun: fix RX livelock<br />
it's easy for guest to starve out host networking<br />
one way to fix this is to use napi <br />
Contact: MST<br />
<br />
* large-order allocations<br />
see 28d6427109d13b0f447cba5761f88d3548e83605<br />
contact: MST<br />
<br />
* reduce networking latency:<br />
allow handling short packets from softirq or VCPU context<br />
Plan:<br />
We are going through the scheduler 3 times<br />
(could be up to 5 if softirqd is involved)<br />
Consider RX: host irq -> io thread -> VCPU thread -><br />
guest irq -> guest thread.<br />
This adds a lot of latency.<br />
We can cut it by some 1.5x if we do a bit of work<br />
either in the VCPU or softirq context.<br />
Testing: netperf TCP RR - should be improved drastically<br />
netperf TCP STREAM guest to host - no regression<br />
Contact: MST<br />
<br />
* device failover to allow migration with assigned devices<br />
https://fedoraproject.org/wiki/Features/Virt_Device_Failover<br />
Contact: Gal Hammer, Cole Robinson, Laine Stump, MST<br />
<br />
* Reuse vringh code for better maintainability<br />
This project seems abandoned?<br />
Contact: Rusty Russell<br />
<br />
* use kvm eventfd support for injecting level-triggered interrupts<br />
aim: enable vhost by default for level interrupts.<br />
The benefit is security: we want to avoid using userspace<br />
virtio net so that vhost-net is always used.<br />
<br />
Alex emulated (post & re-enable) level-triggered interrupt in KVM for<br />
skipping userspace. VFIO already enjoyed the performance benefit,<br />
let's do it for virtio-pci. Current virtio-pci devices still use<br />
level-interrupt in userspace.<br />
see: kernel:<br />
7a84428af [PATCH] KVM: Add resampling irqfds for level triggered interrupts<br />
qemu:<br />
68919cac [PATCH] hw/vfio: set interrupts using pci irq wrappers<br />
(virtio-pci didn't use the wrappers)<br />
e1d1e586 [PATCH] vfio-pci: Add KVM INTx acceleration<br />
<br />
Contact: Amos Kong, MST <br />
<br />
* Head of line blocking issue with zerocopy<br />
zerocopy has several defects that will cause head of line blocking problem:<br />
- limit the number of pending DMAs<br />
- complete in order<br />
This means that if some of the DMAs are delayed, all the others are delayed as well. This can be reproduced with the following case:<br />
- boot two VMs, VM1 (tap1) and VM2 (tap2), on host1 (which has eth0)<br />
- set up tbf to limit the tap2 bandwidth to 10Mbit/s (an example command follows below)<br />
- start two netperf instances: one from VM1 to VM2, another from VM1 to an external host whose traffic goes through eth0 on the host<br />
Then you can see that not only is VM1 to VM2 throttled, but VM1 to the external host is throttled as well.<br />
For this issue, one solution is to orphan the frags when enqueuing to a non-work-conserving qdisc.<br />
But we have similar issues in other cases:<br />
- The card has its own priority queues<br />
- The host has two interfaces, one 1G and one 10G, so throttling the 1G one may cause traffic over the 10G one to be throttled.<br />
The final solution is to remove receive buffering at tun, and convert it to use NAPI<br />
Contact: Jason Wang, MST<br />
Reference: https://lkml.org/lkml/2014/1/17/105<br />
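The tbf setup in the reproduction above could look like this (illustrative parameters):<br />
<br />
 tc qdisc add dev tap2 root tbf rate 10mbit burst 10kb latency 70ms<br />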
<br />
* network traffic throttling<br />
the block layer implemented "continuous leaky bucket" for throttling;<br />
we can apply the continuous leaky bucket to networking:<br />
IOPS/BPS * RX/TX/TOTAL<br />
Developer: Amos Kong<br />
<br />
* Allocate mac_table dynamically<br />
<br />
In the future, maybe we can allocate the mac_table dynamically instead<br />
of embedding it in VirtIONet. Then we can just do a pointer swap and<br />
g_free() and save a memcpy() here (sketched below).<br />
Contact: Amos Kong<br />
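A sketch of the pointer-swap idea; this assumes a hypothetical VirtIONet whose mac_table is a separately allocated array, and incoming_macs/macs_len are placeholders:<br />
<br />
 /* hypothetical: mac_table as a separately allocated array */<br />
 uint8_t *new_table = g_memdup(incoming_macs, macs_len); /* build the new table */<br />
 uint8_t *old_table = n->mac_table;<br />
 n->mac_table = new_table;  /* pointer swap instead of memcpy() */<br />
 g_free(old_table);<br />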
<br />
* reduce conflict with VCPU thread<br />
if VCPU and networking run on same CPU,<br />
they conflict resulting in bad performance.<br />
Fix that, push vhost thread out to another CPU<br />
more aggressively.<br />
Contact: Amos Kong<br />
<br />
* rx mac filtering in tun<br />
the need for this is still not understood as we have filtering in bridge<br />
we have a small table of addresses, need to make it larger<br />
if we only need filtering for unicast (multicast is handled by IMP filtering)<br />
Contact: Amos Kong<br />
<br />
* vlan filtering in tun<br />
the need for this is still not understood as we have filtering in bridge<br />
Contact: Amos Kong<br />
<br />
<br />
<br />
* add documentation for macvlan and macvtap<br />
recent docs here:<br />
http://backreference.org/2014/03/20/some-notes-on-macvlanmacvtap/<br />
need to integrate in iproute and kernel docs.<br />
<br />
* receive side zero copy<br />
The ideal is a NIC with accelerated RFS support,<br />
So we can feed the virtio rx buffers into the correct NIC queue.<br />
Depends on non promisc NIC support in bridge.<br />
Search for "Xin Xiaohui: Provide a zero-copy method on KVM virtio-net"<br />
for a very old prototype<br />
<br />
* RDMA bridging<br />
<br />
* DMA engine (IOAT) use in tun<br />
Old patch here: [PATCH RFC] tun: dma engine support<br />
It does not speed things up. Need to see why and<br />
what can be done.<br />
<br />
* virtio API extension: improve small packet/large buffer performance:<br />
support "reposting" buffers for mergeable buffers,<br />
support pool for indirect buffers<br />
<br />
* more GSO type support:<br />
The kernel does not yet support more GSO types: FCOE, GRE, UDP_TUNNEL<br />
<br />
* ring aliasing:<br />
using vhost-net as a networking backend with virtio-net in QEMU<br />
being what's guest facing.<br />
This gives you the best of both worlds: QEMU acts as a first<br />
line of defense against a malicious guest while still getting the<br />
performance advantages of vhost-net (zero-copy).<br />
In fact a bit of complexity in vhost was put there in the vague hope to<br />
support something like this: virtio rings are not translated through<br />
regular memory tables, instead, vhost gets a pointer to ring address.<br />
This allows qemu to act as a man in the middle,<br />
verifying the descriptors but not touching the packet data.<br />
<br />
* non-virtio device support with vhost<br />
Use vhost interface for guests that don't use virtio-net<br />
<br />
* Extend sndbuf scope to int64<br />
The current sndbuf limit is INT_MAX in tap_set_sndbuf().<br />
Large values (like 8388607T) can be converted correctly by qapi from the qemu commandline;<br />
if we want to support such large values, we should extend the sndbuf limit from 'int' to 'int64'.<br />
Why is this useful?<br />
Upstream discussion: https://lists.gnu.org/archive/html/qemu-devel/2014-04/msg04192.html<br />
<br />
* unit test for vhost-user<br />
We don't have a unit test for vhost-user.<br />
The idea is to implement a simple vhost-user backend over a userspace stack<br />
and load PXE in the guest.<br />
Contact: MST and Jason Wang<br />
<br />
* better qtest for virtio-net<br />
We test only boot and hotplug for virtio-net.<br />
Need to test more.<br />
Contact: MST and Jason Wang<br />
<br />
=== vague ideas: path to implementation not clear ===<br />
<br />
* change tcp_tso_should_defer for kvm: batch more<br />
aggressively.<br />
in particular, see below<br />
<br />
* tcp: increase gso buffering for cubic,reno<br />
At the moment we push out an skb whenever the limit becomes<br />
large enough to send a full-sized TSO skb even if the skb,<br />
in fact, is not full-sized.<br />
The reason for this seems to be that some congestion avoidance<br />
protocols rely on the number of packets in flight to calculate<br />
CWND, so if we underuse the available CWND it shrinks<br />
which degrades performance:<br />
http://www.mail-archive.com/netdev@vger.kernel.org/msg08738.html<br />
<br />
However, there seems to be no reason to do this for<br />
protocols such as reno and cubic which don't rely on packets in flight,<br />
and so will simply increase CWND a bit more to compensate for the<br />
underuse.<br />
<br />
* ring redesign:<br />
find a way to test raw ring performance <br />
fix cacheline bounces <br />
reduce interrupts<br />
<br />
<br />
* irq/numa affinity:<br />
networking goes much faster with irq pinning:<br />
both with and without numa.<br />
what can be done to make the non-pinned setup go faster?<br />
<br />
* vlan filtering in bridge<br />
kernel part is done (Vlad Yasevich)<br />
teach qemu to notify libvirt to enable the filter (still to do; the existing NIC_RX_FILTER_CHANGED event contains vlan-tables)<br />
<br />
* tx coalescing<br />
Delay several packets before kicking the device.<br />
<br />
* bridging on top of macvlan <br />
add code to forward LRO status from macvlan (not macvtap)<br />
back to the lowerdev, so that setting up forwarding<br />
from macvlan disables LRO on the lowerdev<br />
<br />
* virtio: preserve packets exactly with LRO<br />
LRO is not normally compatible with forwarding.<br />
virtio we are getting packets from a linux host,<br />
so we could thinkably preserve packets exactly<br />
even with LRO. I am guessing other hardware could be<br />
doing this as well.<br />
<br />
* vxlan<br />
What could we do here?<br />
<br />
* bridging without promisc mode with OVS<br />
<br />
=== high level issues: not clear what the project is, yet ===<br />
<br />
* security: iptables<br />
At the moment most people disable iptables to get<br />
good performance on 10G/s networking.<br />
Any way to improve experience?<br />
<br />
* performance<br />
Going through scheduler and full networking stack twice<br />
(host+guest) adds a lot of overhead<br />
Any way to allow bypassing some layers?<br />
<br />
* manageability<br />
Still hard to figure out VM networking,<br />
VM networking is through libvirt, host networking through NM<br />
Any way to integrate?<br />
<br />
=== testing projects ===<br />
Keeping networking stable is the highest priority.<br />
<br />
* Write some unit tests for vhost-net/vhost-scsi<br />
* Run weekly test on upstream HEAD covering test matrix with autotest<br />
* Measure the effect of each of the above-mentioned optimizations<br />
- Use autotest network performance regression testing (that runs netperf)<br />
- Also test any wild idea that works. Some may be useful.<br />
* Migrate some of the performance regression autotest functionality into Netperf<br />
- Get the CPU-utilization of the Host and the other-party, and add them to the report. This is also true for other Host measures, such as vmexits, interrupts, ...<br />
- Run Netperf in demo-mode, and measure only the time when all the sessions are active (could be many seconds after the beginning of the tests)<br />
- Packaging of Netperf in Fedora / RHEL (exists in Fedora). Licensing could be an issue.<br />
- Make the scripts more visible<br />
<br />
=== non-virtio-net devices ===<br />
* e1000: stabilize<br />
<br />
=== test matrix ===<br />
<br />
DOA test matrix (all combinations should work):<br />
vhost: test both on and off, obviously<br />
test: hotplug/unplug, vlan/mac filtering, netperf,<br />
file copy both ways: scp, NFS, NTFS<br />
guests: linux: release and debug kernels, windows<br />
conditions: plain run, run while under migration,<br />
vhost on/off migration<br />
networking setup: simple, qos with cgroups<br />
host configuration: host-guest, external-guest</div>
Jasowang
https://linux-kvm.org/index.php?title=NetworkingTodo&diff=173349
NetworkingTodo
2015-06-08T10:13:18Z
<p>Jasowang: </p>
<hr />
<div>This page should cover all networking related activity in KVM,<br />
currently most info is related to virtio-net.<br />
<br />
TODO: add bugzilla entry links.<br />
<br />
=== projects in progress. contributions are still very wellcome! ===<br />
<br />
* virtio 1.0 support for linux guests<br />
required for maintainatibility<br />
mid.gmane.org/1414081380-14623-1-git-send-email-mst@redhat.com<br />
Developer: MST,Cornelia Huck<br />
<br />
* virtio 1.0 support in qemu<br />
required for maintainatibility<br />
mid.gmane.org/20141024103839.7162b93f.cornelia.huck@de.ibm.com<br />
Developer: Cornelia Huck, MST<br />
<br />
* improve net polling for cpu overcommit<br />
exit busy loop when another process is runnable<br />
mid.gmane.org/20140822073653.GA7372@gmail.com<br />
mid.gmane.org/1408608310-13579-2-git-send-email-jasowang@redhat.com<br />
Another idea is make the busy_read/busy_poll dynamic like dynamic PLE window.<br />
Developer: Jason Wang, MST<br />
<br />
* vhost-net/tun/macvtap cross endian support<br />
mid.gmane.org/1414572130-17014-2-git-send-email-clg@fr.ibm.com<br />
Developer: Cédric Le Goater, MST<br />
<br />
* BQL/aggregation for virtio net<br />
dependencies: orphan packets less agressively, enable tx interrupt <br />
Developers: MST, Jason<br />
<br />
* orphan packets less agressively (was make pktgen works for virtio-net ( or partially orphan ))<br />
virtio-net orphans all skbs during tx, this used to be optimal.<br />
Recent changes in guest networking stack and hardware advances<br />
such as APICv changed optimal behaviour for drivers.<br />
We need to revisit optimizations such as orphaning all packets early<br />
to have optimal behaviour.<br />
<br />
this should also fix pktgen which is currently broken with virtio net:<br />
orphaning all skbs makes pktgen wait for ever to the refcnt.<br />
Jason's idea: bring back tx interrupt (partially)<br />
Jason's idea: introduce a flag to tell pktgen not for wait<br />
Discussion here: https://patchwork.kernel.org/patch/1800711/<br />
MST's idea: add a .ndo_tx_polling not only for pktgen<br />
Developers: Jason Wang, MST<br />
<br />
* enable tx interrupt (conditionally?)<br />
Small packet TCP stream performance is not good. This is because<br />
virtio-net orphan the packet during ndo_start_xmit() which disable the <br />
TCP small packet optimizations like TCP small Queue and AutoCork. The<br />
idea is enable the tx interrupt to TCP small packets.<br />
Jason's idea: switch between poll and tx interrupt mode based on recent statistics.<br />
MST's idea: use a per descriptor flag for virtio to force interrupt for a specific packet.<br />
Developer: Jason Wang, MST<br />
<br />
* vhost-net polling<br />
mid.gmane.org/20141029123831.A80F338002D@moren.haifa.ibm.com<br />
Developer: Razya Ladelsky<br />
<br />
* support more queues in tun<br />
We limit TUN to 8 queues, but we really want 1 queue per guest CPU. The<br />
limit comes from net core, need to teach it to allocate array of<br />
pointers and not array of queues. Jason has an draft patch to use flex<br />
array. Another thing is to move the flow caches out of tun_struct.<br />
http://mid.gmane.org/1408369040-1216-1-git-send-email-pagupta@redhat.com<br />
Developers: Pankaj Gupta, Jason Wang<br />
<br />
* enable multiqueue by default<br />
Multiqueue causes regression in some workloads, thus<br />
it is off by default. Documentation/networking/scaling.txt<br />
Detect and enable/disable<br />
automatically so we can make it on by default?<br />
depends on: BQL<br />
This is because GSO tends to batch less when mq is enabled.<br />
https://patchwork.kernel.org/patch/2235191/<br />
Developer: Jason Wang<br />
<br />
* rework on flow caches<br />
Current hlist implementation of flow caches has several limitations:<br />
1) at worst case, linear search will be bad<br />
2) not scale<br />
https://patchwork.kernel.org/patch/2025121/<br />
Developer: Jason Wang<br />
<br />
* bridge without promisc/allmulti mode in NIC<br />
given hardware support, teach bridge to program mac/vlan filtering in NIC<br />
Helps performance and security on noisy LANs<br />
http://comments.gmane.org/gmane.linux.network/266546<br />
Done for unicast, but not for multicast.<br />
Developer: Vlad Yasevich<br />
<br />
* Improve stats, make them more helpful for per analysis<br />
Developer: Sriram Narasimhan?<br />
<br />
* Enable LRO with bridging<br />
Enable GRO for packets coming to bridge from a tap interface<br />
Better support for windows LRO<br />
Extend virtio-header with statistics for GRO packets:<br />
number of packets coalesced and number of duplicate ACKs coalesced<br />
Developer: Dmitry Fleytman?<br />
<br />
* IPoIB infiniband bridging<br />
Plan: implement macvtap for ipoib and virtio-ipoib<br />
Developer: Marcel Apfelbaum<br />
<br />
* interrupt coalescing<br />
Reduce the number of interrupt<br />
Rx interrupt coalescing should be good for rx stream throughput.<br />
Tx interrupt coalescing will help the optimization of enabling tx<br />
interrupt conditionally.<br />
Developer: Jason Wang<br />
<br />
* sharing config interrupts<br />
Support more devices by sharing a single msi vector between multiple<br />
virtio devices.<br />
(Applies to virtio-blk too).<br />
Developer: Amos Kong<br />
<br />
* Multi-queue macvtap with real multiple queues<br />
Macvtap only provides multiple queues to user in the form of multiple<br />
sockets. As each socket will perform dev_queue_xmit() and we don't<br />
really have multiple real queues on the device, we now have a lock<br />
contention. This contention needs to be addressed.<br />
Developer: Vlad Yasevich<br />
<br />
* better xmit queueing for tun<br />
when guest is slower than host, tun drops packets aggressively. This is<br />
because keeping packets on the internal queue does not work well.<br />
Re-enable functionality to stop queue, probably with some watchdog to<br />
help with buggy guests.<br />
Developer: MST<br />
<br />
* Dev watchdog for virtio-net:<br />
Implement a watchdog for virtio-net. This will be useful for hunting host bugs early.<br />
Developer: Julio Faracco <jcfaracco@gmail.com><br />
<br />
* Extend virtio_net header for future offloads<br />
virtio_net header is currently fixed sized and only supports<br />
segmentation offloading. It would be useful that would could<br />
attach other data to virtio_net header to support things like<br />
vlan acceleration, IPv6 fragment_id pass-through, rx and tx-hash<br />
pass-through and some other ideas.<br />
Developer: Vlad Yasevich <vyasevic@redhat.com><br />
<br />
=== projects in need of an owner ===<br />
<br />
* improve netdev polling for virtio.<br />
There are two kinds of netdev polling:<br />
- netpoll - used for debugging<br />
- rx busy polling for virtio-net [DONE]<br />
see https://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=91815639d8804d1eee7ce2e1f7f60b36771db2c9. 1 byte netperf TCP_RR shows 127% improvement.<br />
Future work is co-operate with host, and only does the busy polling when there's no other process in host cpu. <br />
contact: Jason Wang<br />
<br />
* drop vhostforce<br />
it's an optimization, probbaly not worth it anymore<br />
<br />
* avoid userspace virtio-net when vhost is enabled.<br />
ATM we run in userspace until DRIVER_OK<br />
this doubles our security attack surface,<br />
so it's best avoided.<br />
<br />
* feature negotiation for dpdk/vhost user<br />
feature negotiation seems to be broken<br />
<br />
* switch dpdk to qemu vhost user<br />
this seems like a better interface than<br />
character device in userspace,<br />
designed for out of process networking<br />
<br />
* netmap - like approach to zero copy networking<br />
is anything like this feasible on linux?<br />
<br />
* vhost-user: clean up protocol<br />
address multiple issues in vhost user protocol:<br />
missing VHOST_NET_SET_BACKEND<br />
make more messages synchronous (with a reply)<br />
VHOST_SET_MEM_TABLE, VHOST_SET_VRING_CALL<br />
mid.gmane.org/541956B8.1070203@huawei.com<br />
mid.gmane.org/54192136.2010409@huawei.com<br />
Contact: MST<br />
<br />
* ethtool seftest support for virtio-net<br />
Implement selftest ethtool method for virtio-net for regression test e.g the CVEs found for tun/macvtap, qemu and vhost.<br />
http://mid.gmane.org/1409881866-14780-1-git-send-email-hjxiaohust@gmail.com<br />
Contact: Jason Wang<br />
<br />
* vhost-net scalability tuning: threading for many VMs<br />
Plan: switch to workqueue shared by many VMs<br />
http://www.mail-archive.com/kvm@vger.kernel.org/msg69868.html<br />
<br />
http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/479e3578ed05bfac85257b4200427735!OpenDocument<br />
<br />
Contact: Razya Ladelsky, Bandan Das<br />
Testing: netperf guest to guest<br />
<br />
* DPDK with vhost-user<br />
Support vhost-user in addition to vhost net cuse device<br />
Contact: Linhaifeng, MST<br />
<br />
* DPDK with vhost-net/user: fix offloads<br />
DPDK requires disabling offloads ATM,<br />
need to fix this.<br />
Contact: MST<br />
<br />
* reduce per-device memory allocations<br />
vhost device is very large due to need to<br />
keep large arrays of iovecs around.<br />
we do need large arrays for correctness,<br />
but we could move them out of line,<br />
and add short inline arrays for typical use-cases.<br />
contact: MST<br />
<br />
* batch tx completions in vhost<br />
vhost already batches up to 64 tx completions for zero copy<br />
batch non zero copy as well<br />
contact: Jason Wang<br />
<br />
* better parallelize small queues<br />
don't wait for ring full to kick.<br />
add api to detect ring almost full (e.g. 3/4) and kick<br />
depends on: BQL<br />
contact: MST<br />
<br />
* improve vhost-user unit test<br />
support running on machines without hugetlbfs<br />
support running with more vm memory layouts<br />
Contact: MST<br />
<br />
* tun: fix RX livelock<br />
it's easy for guest to starve out host networking<br />
open way to fix this is to use napi <br />
Contact: MST<br />
<br />
* large-order allocations<br />
see 28d6427109d13b0f447cba5761f88d3548e83605<br />
contact: MST<br />
<br />
* reduce networking latency:<br />
allow handling short packets from softirq or VCPU context<br />
Plan:<br />
We are going through the scheduler 3 times<br />
(could be up to 5 if softirqd is involved)<br />
Consider RX: host irq -> io thread -> VCPU thread -><br />
guest irq -> guest thread.<br />
This adds a lot of latency.<br />
We can cut it by some 1.5x if we do a bit of work<br />
either in the VCPU or softirq context.<br />
Testing: netperf TCP RR - should be improved drastically<br />
netperf TCP STREAM guest to host - no regression<br />
Contact: MST<br />
<br />
* device failover to allow migration with assigned devices<br />
https://fedoraproject.org/wiki/Features/Virt_Device_Failover<br />
Contact: Gal Hammer, Cole Robinson, Laine Stump, MST<br />
<br />
* Reuse vringh code for better maintainability<br />
This project seems abandoned?<br />
Contact: Rusty Russell<br />
<br />
* use kvm eventfd support for injecting level-triggered interrupts<br />
aim: enable vhost by default for level interrupts.<br />
The benefit is security: we want to avoid using userspace<br />
virtio net so that vhost-net is always used.<br />
<br />
Alex emulated (post & re-enable) level-triggered interrupt in KVM for<br />
skipping userspace. VFIO already enjoied the performance benefit,<br />
let's do it for virtio-pci. Current virtio-pci devices still use<br />
level-interrupt in userspace.<br />
see: kernel:<br />
7a84428af [PATCH] KVM: Add resampling irqfds for level triggered interrupts<br />
qemu:<br />
68919cac [PATCH] hw/vfio: set interrupts using pci irq wrappers<br />
(virtio-pci didn't use the wrappers)<br />
e1d1e586 [PATCH] vfio-pci: Add KVM INTx acceleration<br />
<br />
Contact: Amos Kong, MST <br />
<br />
* Head of line blocking issue with zerocopy<br />
zerocopy has several defects that will cause head of line blocking problem:<br />
- limit the number of pending DMAs<br />
- complete in order<br />
This means is one of some of the DMAs were delayed, all other will also delayed. This could be reproduced with following case:<br />
- boot two VMS VM1(tap1) and VM2(tap2) on host1 (has eth0)<br />
- setup tbf to limit the tap2 bandwidth to 10Mbit/s<br />
- start two netperf instances one from VM1 to VM2, another from VM1 to an external host whose traffic go through eth0 on host<br />
Then you can see not only VM1 to VM2 is throttled, but also VM1 to external host were also throttled.<br />
For this issue, a solution is orphan the frags when en queuing to non work conserving qdisc.<br />
But we have have similar issues in other case:<br />
- The card has its own priority queues<br />
- Host has two interface, one is 1G another is 10G, so throttle 1G may lead traffic over 10G to be throttled.<br />
The final solution is to remove receive buffering at tun, and convert it to use NAPI<br />
Contact: Jason Wang, MST<br />
Reference: https://lkml.org/lkml/2014/1/17/105<br />
<br />
* network traffic throttling<br />
block implemented "continuous leaky bucket" for throttling<br />
we can use continuous leaky bucket to network<br />
IOPS/BPS * RX/TX/TOTAL<br />
Developer: Amos Kong<br />
<br />
* Allocate mac_table dynamically<br />
<br />
In the future, maybe we can allocate the mac_table dynamically instead<br />
of embed it in VirtIONet. Then we can just does a pointer swap and<br />
gfree() and can save a memcpy() here.<br />
Contact: Amos Kong<br />
<br />
* reduce conflict with VCPU thread<br />
if VCPU and networking run on same CPU,<br />
they conflict resulting in bad performance.<br />
Fix that, push vhost thread out to another CPU<br />
more aggressively.<br />
Contact: Amos Kong<br />
<br />
* rx mac filtering in tun<br />
the need for this is still not understood as we have filtering in bridge<br />
we have a small table of addresses, need to make it larger<br />
if we only need filtering for unicast (multicast is handled by IMP filtering)<br />
Contact: Amos Kong<br />
<br />
* vlan filtering in tun<br />
the need for this is still not understood as we have filtering in bridge<br />
Contact: Amos Kong<br />
<br />
<br />
<br />
* add documentation for macvlan and macvtap<br />
recent docs here:<br />
http://backreference.org/2014/03/20/some-notes-on-macvlanmacvtap/<br />
need to integrate in iproute and kernel docs.<br />
<br />
* receive side zero copy<br />
The ideal is a NIC with accelerated RFS support,<br />
So we can feed the virtio rx buffers into the correct NIC queue.<br />
Depends on non promisc NIC support in bridge.<br />
Search for "Xin Xiaohui: Provide a zero-copy method on KVM virtio-net"<br />
for a very old prototype<br />
<br />
* RDMA bridging<br />
<br />
* DMA emgine (IOAT) use in tun<br />
Old patch here: [PATCH RFC] tun: dma engine support<br />
It does not speed things up. Need to see why and<br />
what can be done.<br />
<br />
* virtio API extension: improve small packet/large buffer performance:<br />
support "reposting" buffers for mergeable buffers,<br />
support pool for indirect buffers<br />
<br />
* more GSO type support:<br />
Kernel not support more type of GSO: FCOE, GRE, UDP_TUNNEL<br />
<br />
* ring aliasing:<br />
using vhost-net as a networking backend with virtio-net in QEMU<br />
being what's guest facing.<br />
This gives you the best of both worlds: QEMU acts as a first<br />
line of defense against a malicious guest while still getting the<br />
performance advantages of vhost-net (zero-copy).<br />
In fact a bit of complexity in vhost was put there in the vague hope to<br />
support something like this: virtio rings are not translated through<br />
regular memory tables, instead, vhost gets a pointer to ring address.<br />
This allows qemu to act as a man in the middle,<br />
verifying the descriptors but not touching the packet data (see the sketch below).<br />
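<br />
The vhost uapi already hints at this: ring addresses are passed explicitly<br />
via VHOST_SET_VRING_ADDR rather than looked up through the memory table. A<br />
minimal sketch of pointing vhost at a qemu-owned shadow ring (the shadow<br />
ring itself is the hypothetical part; the ioctl and struct are real):<br />
<br />
 #include <stdint.h>
 #include <sys/ioctl.h>
 #include <linux/vhost.h>

 /* Give vhost a qemu-owned shadow ring; qemu validates descriptors from
  * the guest's real ring and copies the safe ones into this shadow. */
 static int set_shadow_ring(int vhost_fd, unsigned int idx,
                            void *desc, void *used, void *avail)
 {
     struct vhost_vring_addr addr = {
         .index           = idx,
         .desc_user_addr  = (uintptr_t)desc,
         .used_user_addr  = (uintptr_t)used,
         .avail_user_addr = (uintptr_t)avail,
     };

     return ioctl(vhost_fd, VHOST_SET_VRING_ADDR, &addr);
 }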
<br />
* non-virtio device support with vhost<br />
Use vhost interface for guests that don't use virtio-net<br />
<br />
* Extend sndbuf scope to int64<br />
The current sndbuf limit is INT_MAX in tap_set_sndbuf();<br />
large values (like 8388607T) are converted correctly by qapi from the qemu command line.<br />
If we want to support these large values, we should extend the sndbuf limit from 'int' to 'int64' (see the sketch below).<br />
Why is this useful?<br />
Upstream discussion: https://lists.gnu.org/archive/html/qemu-devel/2014-04/msg04192.html<br />
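<br />
A sketch of the QEMU-side widening, assuming a hypothetical 64-bit<br />
TUNSETSNDBUF64 ioctl; the real TUNSETSNDBUF takes an int, so the kernel<br />
side would need extending as well:<br />
<br />
 #include <stdint.h>
 #include <sys/ioctl.h>

 /* Hypothetical 64-bit companion to TUNSETSNDBUF (_IOW('T', 212, int)) */
 #define TUNSETSNDBUF64 _IOW('T', 230, int64_t)

 static int tap_set_sndbuf64(int fd, int64_t sndbuf)
 {
     if (sndbuf <= 0)
         return -1;               /* reject instead of truncating to int */

     /* Today anything qapi parsed above INT_MAX cannot be passed
      * through the int-sized TUNSETSNDBUF. */
     return ioctl(fd, TUNSETSNDBUF64, &sndbuf);
 }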
<br />
* unit test for vhost-user<br />
We don't have a unit test for vhost-user.<br />
The idea is to implement a simple vhost-user backend over a userspace stack<br />
and load a PXE ROM in the guest.<br />
Contact: MST and Jason Wang<br />
<br />
* better qtest for virtio-net<br />
We test only boot and hotplug for virtio-net.<br />
Need to test more.<br />
Contact: MST and Jason Wang<br />
<br />
=== vague ideas: path to implementation not clear ===<br />
<br />
* change tcp_tso_should_defer for kvm: batch more<br />
aggressively.<br />
In particular, see below.<br />
<br />
* tcp: increase gso buffering for cubic,reno<br />
At the moment we push out an skb whenever the limit becomes<br />
large enough to send a full-sized TSO skb even if the skb,<br />
in fact, is not full-sized.<br />
The reason for this seems to be that some congestion avoidance<br />
protocols rely on the number of packets in flight to calculate<br />
CWND, so if we underuse the available CWND it shrinks<br />
which degrades performance:<br />
http://www.mail-archive.com/netdev@vger.kernel.org/msg08738.html<br />
<br />
However, there seems to be no reason to do this for<br />
protocols such as reno and cubic, which don't rely on packets in flight<br />
and so will simply increase CWND a bit more to compensate for the<br />
underuse (see the sketch below).<br />
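<br />
A hedged sketch of how the deferral could be gated, assuming a hypothetical<br />
TCP_CONG_PKTS_IN_FLIGHT flag on tcp_congestion_ops (no such flag exists<br />
today; reno and cubic would simply leave it unset):<br />
<br />
 #include <net/tcp.h>

 #define TCP_CONG_PKTS_IN_FLIGHT 0x4    /* hypothetical flag */

 /* Only push out a not-yet-full TSO skb early when the congestion control
  * algorithm actually derives CWND from the number of packets in flight. */
 static bool ca_needs_inflight_packets(const struct sock *sk)
 {
         const struct tcp_congestion_ops *ca = inet_csk(sk)->icsk_ca_ops;

         return ca->flags & TCP_CONG_PKTS_IN_FLIGHT;
 }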
<br />
* ring redesign:<br />
find a way to test raw ring performance <br />
fix cacheline bounces <br />
reduce interrupts<br />
<br />
<br />
* irq/numa affinity:<br />
networking goes much faster with irq pinning:<br />
both with and without numa.<br />
what can be done to make the non-pinned setup go faster?<br />
<br />
* vlan filtering in bridge<br />
kernel part is done (Vlad Yasevich)<br />
teach qemu to notify libvirt to enable the filter (still to do); the existing NIC_RX_FILTER_CHANGED event contains the vlan tables<br />
<br />
* tx coalescing<br />
Delay several packets before kicking the device (see the sketch below).<br />
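<br />
A sketch of the idea in a virtio-net style xmit path; send_queue is trimmed<br />
down, and xmit_skb(), pending_kicks and TX_KICK_BATCH are illustrative. A<br />
real version also needs a timer so a short batch is not stranded:<br />
<br />
 #include <linux/skbuff.h>
 #include <linux/virtio.h>

 #define TX_KICK_BATCH 8

 struct send_queue {                     /* trimmed-down sketch */
         struct virtqueue *vq;
         unsigned int pending_kicks;
 };

 void xmit_skb(struct send_queue *sq, struct sk_buff *skb); /* adds to vring */

 static void xmit_coalesced(struct send_queue *sq, struct sk_buff *skb)
 {
         xmit_skb(sq, skb);

         if (++sq->pending_kicks < TX_KICK_BATCH)
                 return;                 /* hold the kick back */

         if (virtqueue_kick_prepare(sq->vq))   /* honours event index */
                 virtqueue_notify(sq->vq);
         sq->pending_kicks = 0;
 }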
<br />
* bridging on top of macvlan <br />
add code to forward LRO status from macvlan (not macvtap)<br />
back to the lowerdev, so that setting up forwarding<br />
from macvlan disables LRO on the lowerdev<br />
<br />
* virtio: preserve packets exactly with LRO<br />
LRO is not normally compatible with forwarding.<br />
With virtio we are getting packets from a linux host,<br />
so we could conceivably preserve packets exactly<br />
even with LRO. I am guessing other hardware could be<br />
doing this as well.<br />
<br />
* vxlan<br />
What could we do here?<br />
<br />
* bridging without promisc mode with OVS<br />
<br />
=== high level issues: not clear what the project is, yet ===<br />
<br />
* security: iptables<br />
At the moment most people disable iptables to get<br />
good performance on 10Gb/s networking.<br />
Any way to improve the experience?<br />
<br />
* performance<br />
Going through scheduler and full networking stack twice<br />
(host+guest) adds a lot of overhead<br />
Any way to allow bypassing some layers?<br />
<br />
* manageability<br />
VM networking is still hard to figure out:<br />
guest networking is managed through libvirt, host networking through NetworkManager.<br />
Any way to integrate the two?<br />
<br />
=== testing projects ===<br />
Keeping networking stable is the highest priority.<br />
<br />
* Write some unit tests for vhost-net/vhost-scsi<br />
* Run weekly test on upstream HEAD covering test matrix with autotest<br />
* Measure the effect of each of the above-mentioned optimizations<br />
- Use autotest network performance regression testing (that runs netperf)<br />
- Also test any wild idea that works. Some may be useful.<br />
* Migrate some of the performance regression autotest functionality into Netperf<br />
- Get the CPU utilization of the host and the other party, and add them to the report. The same goes for other host measures, such as vmexits, interrupts, ...<br />
- Run Netperf in demo-mode, and measure only the time when all the sessions are active (could be many seconds after the beginning of the tests)<br />
- Packaging of Netperf in Fedora / RHEL (exists in Fedora). Licensing could be an issue.<br />
- Make the scripts more visible<br />
<br />
=== non-virtio-net devices ===<br />
* e1000: stabilize<br />
<br />
=== test matrix ===<br />
<br />
DOA test matrix (all combinations should work):<br />
vhost: test both on and off, obviously<br />
test: hotplug/unplug, vlan/mac filtering, netperf,<br />
file copy both ways: scp, NFS, NTFS<br />
guests: linux: release and debug kernels, windows<br />
conditions: plain run, run while under migration,<br />
vhost on/off migration<br />
networking setup: simple, qos with cgroups<br />
host configuration: host-guest, external-guest</div>
Jasowang
https://linux-kvm.org/index.php?title=Multiqueue-performance-2015-02-09&diff=173133
Multiqueue-performance-2015-02-09
2015-02-09T07:27:30Z
<p>Jasowang: </p>
<hr />
<div>This page contains the performance test results for multiqueue.<br />
<br />
[[Media:ixgbe.txt]]</div>
Jasowang
https://linux-kvm.org/index.php?title=NetworkingTodo&diff=120071
NetworkingTodo
2014-11-14T02:00:10Z
<p>Jasowang: </p>
<hr />
<div>This page should cover all networking related activity in KVM,<br />
currently most info is related to virtio-net.<br />
<br />
TODO: add bugzilla entry links.<br />
<br />
=== projects in progress. contributions are still very wellcome! ===<br />
<br />
* virtio 1.0 support for linux guests<br />
required for maintainatibility<br />
mid.gmane.org/1414081380-14623-1-git-send-email-mst@redhat.com<br />
Developer: MST,Cornelia Huck<br />
<br />
* virtio 1.0 support in qemu<br />
required for maintainatibility<br />
mid.gmane.org/20141024103839.7162b93f.cornelia.huck@de.ibm.com<br />
Developer: Cornelia Huck, MST<br />
<br />
* improve net polling for cpu overcommit<br />
exit busy loop when another process is runnable<br />
mid.gmane.org/20140822073653.GA7372@gmail.com<br />
mid.gmane.org/1408608310-13579-2-git-send-email-jasowang@redhat.com<br />
Developer: Jason Wang, MST<br />
<br />
* vhost-net/tun/macvtap cross endian support<br />
mid.gmane.org/1414572130-17014-2-git-send-email-clg@fr.ibm.com<br />
Developer: Cédric Le Goater, MST<br />
<br />
* BQL/aggregation for virtio net<br />
dependencies: orphan packets less agressively, enable tx interrupt <br />
Developers: MST, Jason<br />
* orphan packets less agressively (was make pktgen works for virtio-net ( or partially orphan ))<br />
virtio-net orphans all skbs during tx, this used to be optimal.<br />
Recent changes in guest networking stack and hardware advances<br />
such as APICv changed optimal behaviour for drivers.<br />
We need to revisit optimizations such as orphaning all packets early<br />
to have optimal behaviour.<br />
<br />
this should also fix pktgen which is currently broken with virtio net:<br />
orphaning all skbs makes pktgen wait for ever to the refcnt.<br />
Jason's idea: bring back tx interrupt (partially)<br />
Jason's idea: introduce a flag to tell pktgen not for wait<br />
Discussion here: https://patchwork.kernel.org/patch/1800711/<br />
MST's idea: add a .ndo_tx_polling not only for pktgen<br />
Developers: Jason Wang, MST<br />
<br />
* enable tx interrupt (conditionally?)<br />
Small packet TCP stream performance is not good. This is because virtio-net orphan the packet during ndo_start_xmit() which disable the TCP small packet optimizations like TCP small Queue and AutoCork. The idea is enable the tx interrupt to TCP small packets.<br />
Jason's idea: switch between poll and tx interrupt mode based on recent statistics.<br />
MST's idea: use a per descriptor flag for virtio to force interrupt for a specific packet.<br />
Developer: Jason Wang, MST<br />
<br />
<br />
<br />
* vhost-net polling<br />
mid.gmane.org/20141029123831.A80F338002D@moren.haifa.ibm.com<br />
Developer: Razya Ladelsky<br />
<br />
<br />
* support more queues in tun<br />
We limit TUN to 8 queues, but we really want<br />
1 queue per guest CPU. The limit comes from net<br />
core, need to teach it to allocate array of<br />
pointers and not array of queues.<br />
Jason has an draft patch to use flex array.<br />
Another thing is to move the flow caches out of tun_struct.<br />
http://mid.gmane.org/1408369040-1216-1-git-send-email-pagupta@redhat.com<br />
Developers: Pankaj Gupta, Jason Wang<br />
<br />
* enable multiqueue by default<br />
Multiqueue causes regression in some workloads, thus<br />
it is off by default. Documentation/networking/scaling.txt<br />
Detect and enable/disable<br />
automatically so we can make it on by default?<br />
depends on: BQL<br />
This is because GSO tends to batch less when mq is enabled.<br />
https://patchwork.kernel.org/patch/2235191/<br />
Developer: Jason Wang<br />
<br />
* rework on flow caches<br />
Current hlist implementation of flow caches has several limitations:<br />
1) at worst case, linear search will be bad<br />
2) not scale<br />
https://patchwork.kernel.org/patch/2025121/<br />
Developer: Jason Wang<br />
<br />
<br />
<br />
* ethtool seftest support for virtio-net<br />
Implement selftest ethtool method for virtio-net for regression test e.g the CVEs found for tun/macvtap, qemu and vhost.<br />
http://mid.gmane.org/1409881866-14780-1-git-send-email-hjxiaohust@gmail.com<br />
Developers: Hengjinxiao,Jason Wang<br />
<br />
<br />
* bridge without promisc/allmulti mode in NIC<br />
given hardware support, teach bridge<br />
to program mac/vlan filtering in NIC<br />
Helps performance and security on noisy LANs<br />
http://comments.gmane.org/gmane.linux.network/266546<br />
Done for unicast, but not for multicast.<br />
Developer: Vlad Yasevich<br />
<br />
* Improve stats, make them more helpful for per analysis<br />
Developer: Sriram Narasimhan?<br />
<br />
* Enable LRO with bridging<br />
Enable GRO for packets coming to bridge from a tap interface<br />
Better support for windows LRO<br />
Extend virtio-header with statistics for GRO packets:<br />
number of packets coalesced and number of duplicate ACKs coalesced<br />
Developer: Dmitry Fleytman?<br />
<br />
* IPoIB infiniband bridging<br />
Plan: implement macvtap for ipoib and virtio-ipoib<br />
Developer: Marcel Apfelbaum<br />
<br />
<br />
<br />
* interrupt coalescing<br />
Reduce the number of interrupt<br />
Rx interrupt coalescing should be good for rx stream throughput.<br />
Tx interrupt coalescing will help the optimization of enabling tx interrupt conditionally.<br />
Developer: Jason Wang<br />
<br />
<br />
<br />
* sharing config interrupts<br />
Support more devices by sharing a single msi vector<br />
between multiple virtio devices.<br />
(Applies to virtio-blk too).<br />
Developer: Amos Kong<br />
<br />
<br />
* Multi-queue macvtap with real multiple queues<br />
Macvtap only provides multiple queues to user in the form of multiple<br />
sockets. As each socket will perform dev_queue_xmit() and we don't<br />
really have multiple real queues on the device, we now have a lock<br />
contention. This contention needs to be addressed.<br />
Developer: Vlad Yasevich<br />
<br />
* better xmit queueing for tun<br />
when guest is slower than host, tun drops packets<br />
aggressively. This is because keeping packets on<br />
the internal queue does not work well.<br />
re-enable functionality to stop queue,<br />
probably with some watchdog to help with buggy guests.<br />
Developer: MST<br />
<br />
* Dev watchdog for virtio-net:<br />
Implement a watchdog for virtio-net. This will be useful for hunting host bugs early.<br />
Developer: Julio Faracco <jcfaracco@gmail.com><br />
<br />
<br />
=== projects in need of an owner ===<br />
<br />
* improve netdev polling for virtio.<br />
There are two kinds of netdev polling:<br />
- netpoll - used for debugging<br />
- rx busy polling for virtio-net [DONE]<br />
see https://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=91815639d8804d1eee7ce2e1f7f60b36771db2c9. 1 byte netperf TCP_RR shows 127% improvement.<br />
Future work is co-operate with host, and only does the busy polling when there's no other process in host cpu. <br />
contact: Jason Wang<br />
<br />
* drop vhostforce<br />
it's an optimization, probbaly not worth it anymore<br />
<br />
* feature negotiation for dpdk/vhost user<br />
feature negotiation seems to be broken<br />
<br />
* switch dpdk to qemu vhost user<br />
this seems like a better interface than<br />
character device in userspace,<br />
designed for out of process networking<br />
<br />
* netmap - like approach to zero copy networking<br />
is anything like this feasible on linux?<br />
<br />
* vhost-user: clean up protocol<br />
address multiple issues in vhost user protocol:<br />
missing VHOST_NET_SET_BACKEND<br />
make more messages synchronous (with a reply)<br />
VHOST_SET_MEM_TABLE, VHOST_SET_VRING_CALL<br />
mid.gmane.org/541956B8.1070203@huawei.com<br />
mid.gmane.org/54192136.2010409@huawei.com<br />
Contact: MST<br />
<br />
<br />
<br />
* vhost-net scalability tuning: threading for many VMs<br />
Plan: switch to workqueue shared by many VMs<br />
http://www.mail-archive.com/kvm@vger.kernel.org/msg69868.html<br />
<br />
http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/479e3578ed05bfac85257b4200427735!OpenDocument<br />
<br />
Contact: Razya Ladelsky, Bandan Das<br />
Testing: netperf guest to guest<br />
<br />
* DPDK with vhost-user<br />
Support vhost-user in addition to vhost net cuse device<br />
Contact: Linhaifeng, MST<br />
<br />
* DPDK with vhost-net/user: fix offloads<br />
DPDK requires disabling offloads ATM,<br />
need to fix this.<br />
Contact: MST<br />
<br />
* reduce per-device memory allocations<br />
vhost device is very large due to need to<br />
keep large arrays of iovecs around.<br />
we do need large arrays for correctness,<br />
but we could move them out of line,<br />
and add short inline arrays for typical use-cases.<br />
contact: MST<br />
<br />
* batch tx completions in vhost<br />
vhost already batches up to 64 tx completions for zero copy<br />
batch non zero copy as well<br />
contact: Jason Wang<br />
<br />
* better parallelize small queues<br />
don't wait for ring full to kick.<br />
add api to detect ring almost full (e.g. 3/4) and kick<br />
depends on: BQL<br />
contact: MST<br />
<br />
* improve vhost-user unit test<br />
support running on machines without hugetlbfs<br />
support running with more vm memory layouts<br />
Contact: MST<br />
<br />
* tun: fix RX livelock<br />
it's easy for guest to starve out host networking<br />
open way to fix this is to use napi <br />
Contact: MST<br />
<br />
* large-order allocations<br />
see 28d6427109d13b0f447cba5761f88d3548e83605<br />
contact: MST<br />
<br />
* reduce networking latency:<br />
allow handling short packets from softirq or VCPU context<br />
Plan:<br />
We are going through the scheduler 3 times<br />
(could be up to 5 if softirqd is involved)<br />
Consider RX: host irq -> io thread -> VCPU thread -><br />
guest irq -> guest thread.<br />
This adds a lot of latency.<br />
We can cut it by some 1.5x if we do a bit of work<br />
either in the VCPU or softirq context.<br />
Testing: netperf TCP RR - should be improved drastically<br />
netperf TCP STREAM guest to host - no regression<br />
Contact: MST<br />
<br />
* device failover to allow migration with assigned devices<br />
https://fedoraproject.org/wiki/Features/Virt_Device_Failover<br />
Contact: Gal Hammer, Cole Robinson, Laine Stump, MST<br />
<br />
* Reuse vringh code for better maintainability<br />
This project seems abandoned?<br />
Contact: Rusty Russell<br />
<br />
* use kvm eventfd support for injecting level-triggered interrupts<br />
aim: enable vhost by default for level interrupts.<br />
The benefit is security: we want to avoid using userspace<br />
virtio net so that vhost-net is always used.<br />
<br />
Alex emulated (post & re-enable) level-triggered interrupts in KVM to<br />
skip userspace. VFIO already enjoys the performance benefit;<br />
let's do the same for virtio-pci. Current virtio-pci devices still use<br />
level interrupts in userspace.<br />
see: kernel:<br />
7a84428af [PATCH] KVM: Add resampling irqfds for level triggered interrupts<br />
qemu:<br />
68919cac [PATCH] hw/vfio: set interrupts using pci irq wrappers<br />
(virtio-pci didn't use the wrappers)<br />
e1d1e586 [PATCH] vfio-pci: Add KVM INTx acceleration<br />
<br />
Contact: Amos Kong, MST <br />
<br />
* Head of line blocking issue with zerocopy<br />
zerocopy has several defects that cause a head-of-line blocking problem:<br />
- the number of pending DMAs is limited<br />
- completions are done in order<br />
This means that if some of the DMAs are delayed, all others are delayed as well. This can be reproduced with the following case:<br />
- boot two VMs, VM1 (tap1) and VM2 (tap2), on host1 (which has eth0)<br />
- set up tbf to limit the tap2 bandwidth to 10Mbit/s<br />
- start two netperf instances, one from VM1 to VM2, another from VM1 to an external host whose traffic goes through eth0 on the host<br />
Then you can see that not only is VM1 to VM2 throttled, but VM1 to the external host is throttled as well.<br />
For this issue, a solution is to orphan the frags when enqueuing to a non-work-conserving qdisc (see the sketch below).<br />
But we have similar issues in other cases:<br />
- The card has its own priority queues<br />
- The host has two interfaces, one 1G and one 10G, so throttling the 1G one may cause traffic over the 10G one to be throttled too.<br />
The final solution is to remove receive buffering at tun and convert it to use NAPI<br />
Contact: Jason Wang, MST<br />
Reference: https://lkml.org/lkml/2014/1/17/105<br />
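<br />
A kernel-style sketch (not a real patch) of the orphan-on-enqueue workaround; skb_orphan_frags() is the existing kernel helper, while TCQ_F_WORK_CONSERVING is a hypothetical flag used here only to mark the decision point:<br />
<pre>
/* Before enqueuing to a qdisc that may hold packets for a long time
 * (a non-work-conserving one such as tbf), copy the zerocopy frags so
 * the vhost DMA can complete immediately and not block other DMAs. */
static int enqueue_maybe_orphan(struct sk_buff *skb, struct Qdisc *q)
{
        if (!(q->flags & TCQ_F_WORK_CONSERVING)) {      /* hypothetical flag */
                if (skb_orphan_frags(skb, GFP_ATOMIC))  /* copy zerocopy frags */
                        return NET_XMIT_DROP;
        }
        return q->enqueue(skb, q);                      /* simplified */
}
</pre>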
<br />
* network traffic throttling<br />
the block layer implemented "continuous leaky bucket" throttling;<br />
we can apply the same continuous leaky bucket to networking:<br />
IOPS/BPS * RX/TX/TOTAL (see the sketch below)<br />
Developer: Amos Kong<br />
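<br />
A minimal, self-contained userspace sketch of a continuous leaky bucket, with illustrative names; the real implementation would mirror the existing block-layer throttling code:<br />
<pre>
#include <time.h>

/* Tokens drain continuously at the configured rate; a packet may pass
 * only while the bucket has room, which yields smooth throttling. */
struct leaky_bucket {
        double level;   /* bytes (or ops) currently in the bucket */
        double rate;    /* allowed bytes (or ops) per second      */
        double burst;   /* bucket depth                           */
        struct timespec last;
};

static double elapsed(const struct timespec *a, const struct timespec *b)
{
        return (b->tv_sec - a->tv_sec) + (b->tv_nsec - a->tv_nsec) / 1e9;
}

/* Returns 1 if 'cost' may pass now, 0 if it must be delayed. */
static int bucket_admit(struct leaky_bucket *lb, double cost)
{
        struct timespec now;

        clock_gettime(CLOCK_MONOTONIC, &now);
        lb->level -= lb->rate * elapsed(&lb->last, &now); /* continuous drain */
        if (lb->level < 0)
                lb->level = 0;
        lb->last = now;
        if (lb->level + cost > lb->burst)
                return 0;
        lb->level += cost;
        return 1;
}
</pre>
A device would keep one such bucket per enabled limit in the IOPS/BPS * RX/TX/TOTAL matrix.<br />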
<br />
* Allocate mac_table dynamically<br />
<br />
In the future, maybe we can allocate the mac_table dynamically instead<br />
of embedding it in VirtIONet. Then we can just do a pointer swap and<br />
g_free(), saving a memcpy() here (see the sketch below).<br />
Contact: Amos Kong<br />
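<br />
A sketch of the pointer-swap idea using GLib allocation as QEMU does; the MacTable layout here is illustrative, not the real VirtIONet field:<br />
<pre>
#include <glib.h>
#include <stdint.h>

#define ETH_ALEN 6

typedef struct MacTable {
    int in_use;
    uint8_t macs[];     /* flexible array of ETH_ALEN-byte entries */
} MacTable;

/* With the table embedded in VirtIONet we must memcpy() new entries
 * over the old ones.  With a dynamically allocated table we build the
 * new table, publish it with a pointer swap, and g_free() the old. */
static void mac_table_replace(MacTable **slot, MacTable *newtab)
{
    MacTable *old = *slot;

    *slot = newtab;     /* pointer swap instead of memcpy() */
    g_free(old);
}
</pre>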
<br />
* reduce conflict with VCPU thread<br />
if the VCPU and networking run on the same CPU,<br />
they conflict, resulting in bad performance.<br />
Fix that: push the vhost thread out to another CPU<br />
more aggressively.<br />
Contact: Amos Kong<br />
<br />
* rx mac filtering in tun<br />
the need for this is still not understood, as we already have filtering in the bridge<br />
we have a small table of addresses; we need to make it larger<br />
if we only need filtering for unicast (multicast is handled by IMP filtering)<br />
Contact: Amos Kong<br />
<br />
* vlan filtering in tun<br />
the need for this is still not understood, as we already have filtering in the bridge<br />
Contact: Amos Kong<br />
<br />
<br />
<br />
* add documentation for macvlan and macvtap<br />
recent docs here:<br />
http://backreference.org/2014/03/20/some-notes-on-macvlanmacvtap/<br />
need to integrate into the iproute and kernel docs.<br />
<br />
* receive side zero copy<br />
The ideal is a NIC with accelerated RFS support,<br />
so we can feed the virtio rx buffers into the correct NIC queue.<br />
Depends on non-promisc NIC support in the bridge.<br />
Search for "Xin Xiaohui: Provide a zero-copy method on KVM virtio-net"<br />
for a very old prototype<br />
<br />
* RDMA bridging<br />
<br />
* DMA engine (IOAT) use in tun<br />
Old patch here: [PATCH RFC] tun: dma engine support<br />
It does not speed things up. Need to see why and<br />
what can be done.<br />
<br />
* virtio API extension: improve small packet/large buffer performance:<br />
support "reposting" buffers for mergeable buffers,<br />
support pool for indirect buffers<br />
<br />
* more GSO type support:<br />
The kernel does not support more GSO types yet: FCOE, GRE, UDP_TUNNEL<br />
<br />
* ring aliasing:<br />
using vhost-net as a networking backend with virtio-net in QEMU<br />
being what's guest facing.<br />
This gives you the best of both worlds: QEMU acts as a first<br />
line of defense against a malicious guest while still getting the<br />
performance advantages of vhost-net (zero-copy).<br />
In fact a bit of complexity in vhost was put there in the vague hope to<br />
support something like this: virtio rings are not translated through<br />
regular memory tables, instead, vhost gets a pointer to ring address.<br />
This allows qemu to act as a man in the middle,<br />
verifying the descriptors but not touching the packet data.<br />
<br />
* non-virtio device support with vhost<br />
Use vhost interface for guests that don't use virtio-net<br />
<br />
* Extend sndbuf scope to int64<br />
The current sndbuf limit is INT_MAX in tap_set_sndbuf();<br />
large values (like 8388607T) are converted correctly by qapi from the qemu command line.<br />
If we want to support such large values, we should extend the sndbuf limit from 'int' to 'int64' (see the sketch below)<br />
Why is this useful?<br />
Upstream discussion: https://lists.gnu.org/archive/html/qemu-devel/2014-04/msg04192.html<br />
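<br />
A hedged sketch of the widening, simplified from the tap code: accept an int64 from the command line but clamp before TUNSETSNDBUF, whose argument is an int; tap_set_sndbuf64() is a hypothetical name:<br />
<pre>
#include <stdint.h>
#include <limits.h>
#include <sys/ioctl.h>
#include <linux/if_tun.h>

static int tap_set_sndbuf64(int fd, int64_t sndbuf)
{
    /* The kernel ioctl takes an int, so clamp rather than truncate. */
    int val = sndbuf > INT_MAX ? INT_MAX : (int)sndbuf;

    return ioctl(fd, TUNSETSNDBUF, &val);
}
</pre>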
<br />
=== vague ideas: path to implementation not clear ===<br />
<br />
* change tcp_tso_should_defer for kvm: batch more<br />
aggressively.<br />
in particular, see below<br />
<br />
* tcp: increase gso buffering for cubic,reno<br />
At the moment we push out an skb whenever the limit becomes<br />
large enough to send a full-sized TSO skb even if the skb,<br />
in fact, is not full-sized.<br />
The reason for this seems to be that some congestion avoidance<br />
protocols rely on the number of packets in flight to calculate<br />
CWND, so if we underuse the available CWND it shrinks<br />
which degrades performance:<br />
http://www.mail-archive.com/netdev@vger.kernel.org/msg08738.html<br />
<br />
However, there seems to be no reason to do this for<br />
protocols such as reno and cubic which don't rely on packets in flight,<br />
and so will simply increase CWND a bit more to compensate for the<br />
underuse.<br />
<br />
* ring redesign:<br />
find a way to test raw ring performance <br />
fix cacheline bounces <br />
reduce interrupts<br />
<br />
<br />
* irq/numa affinity:<br />
networking goes much faster with irq pinning:<br />
both with and without numa.<br />
what can be done to make the non-pinned setup go faster?<br />
<br />
* vlan filtering in bridge<br />
kernel part is done (Vlad Yasevich)<br />
teach qemu to notify libvirt to enable the filter (still to do) (the existing NIC_RX_FILTER_CHANGED event contains the vlan tables)<br />
<br />
* tx coalescing<br />
Delay several packets before kicking the device.<br />
<br />
* bridging on top of macvlan <br />
add code to forward LRO status from macvlan (not macvtap)<br />
back to the lowerdev, so that setting up forwarding<br />
from macvlan disables LRO on the lowerdev<br />
<br />
* virtio: preserve packets exactly with LRO<br />
LRO is not normally compatible with forwarding.<br />
with virtio we are getting packets from a Linux host,<br />
so we could conceivably preserve packets exactly<br />
even with LRO. I am guessing other hardware could be<br />
doing this as well.<br />
<br />
* vxlan<br />
What could we do here?<br />
<br />
* bridging without promisc mode with OVS<br />
<br />
=== high level issues: not clear what the project is, yet ===<br />
<br />
* security: iptables<br />
At the moment most people disable iptables to get<br />
good performance on 10Gb/s networking.<br />
Any way to improve experience?<br />
<br />
* performance<br />
Going through the scheduler and the full networking stack twice<br />
(host+guest) adds a lot of overhead.<br />
Any way to allow bypassing some layers?<br />
<br />
* manageability<br />
It is still hard to figure out VM networking:<br />
VM networking goes through libvirt, host networking through NM.<br />
Any way to integrate?<br />
<br />
=== testing projects ===<br />
Keeping networking stable is the highest priority.<br />
<br />
* Write some unit tests for vhost-net/vhost-scsi<br />
* Run weekly test on upstream HEAD covering test matrix with autotest<br />
* Measure the effect of each of the above-mentioned optimizations<br />
- Use autotest network performance regression testing (that runs netperf)<br />
- Also test any wild idea that works. Some may be useful.<br />
* Migrate some of the performance regression autotest functionality into Netperf<br />
- Get the CPU utilization of the host and of the other party, and add them to the report. The same goes for other host measures, such as vmexits, interrupts, ...<br />
- Run Netperf in demo-mode, and measure only the time when all the sessions are active (could be many seconds after the beginning of the tests)<br />
- Packaging of Netperf in Fedora / RHEL (exists in Fedora). Licensing could be an issue.<br />
- Make the scripts more visible<br />
<br />
=== non-virtio-net devices ===<br />
* e1000: stabilize<br />
<br />
=== test matrix ===<br />
<br />
DOA test matrix (all combinations should work):<br />
vhost: test both on and off, obviously<br />
test: hotplug/unplug, vlan/mac filtering, netperf,<br />
file copy both ways: scp, NFS, NTFS<br />
guests: linux: release and debug kernels, windows<br />
conditions: plain run, run while under migration,<br />
vhost on/off migration<br />
networking setup: simple, qos with cgroups<br />
host configuration: host-guest, external-guest</div>
Jasowang
https://linux-kvm.org/index.php?title=NetworkingTodo&diff=38708
NetworkingTodo
2014-10-08T08:57:55Z
<p>Jasowang: </p>
<hr />
<div>This page should cover all networking-related activity in KVM;<br />
currently most info is related to virtio-net.<br />
<br />
TODO: add bugzilla entry links.<br />
<br />
=== projects in progress. contributions are still very welcome! ===<br />
<br />
* large-order allocations<br />
see 28d6427109d13b0f447cba5761f88d3548e83605<br />
Developer: MST<br />
<br />
* vhost-net scalability tuning: threading for many VMs<br />
Plan: switch to workqueue shared by many VMs<br />
http://www.mail-archive.com/kvm@vger.kernel.org/msg69868.html<br />
<br />
http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/479e3578ed05bfac85257b4200427735!OpenDocument<br />
<br />
Developer: Bandan Das<br />
Testing: netperf guest to guest<br />
<br />
* support more queues<br />
We limit TUN to 8 queues, but we really want<br />
1 queue per guest CPU. The limit comes from the net<br />
core; we need to teach it to allocate an array of<br />
pointers rather than an array of queues (see the sketch below).<br />
Jason has a draft patch to use a flex array.<br />
Another thing is to move the flow caches out of tun_struct.<br />
Developer: Jason Wang<br />
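<br />
A minimal sketch of the array-of-pointers layout; MAX_TAP_QUEUES and the struct names are illustrative, not the actual tun code:<br />
<pre>
/* Embedding the queue structures makes tun_struct grow linearly with
 * the queue limit; an array of pointers costs 8 bytes per unused slot
 * and lets queues be allocated on demand (or via a flex array). */
#define MAX_TAP_QUEUES 256          /* e.g. one per guest CPU */

struct tun_queue;                   /* allocated only when attached */

struct tun_dev {
        struct tun_queue *queues[MAX_TAP_QUEUES];  /* pointers, not queues */
        unsigned int numqueues;
};
</pre>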
<br />
* enable multiqueue by default<br />
Multiqueue causes regressions in some workloads, so<br />
it is off by default. Detect and enable/disable<br />
automatically so that we can make it on by default.<br />
The regression occurs because GSO tends to batch less when mq is enabled.<br />
https://patchwork.kernel.org/patch/2235191/<br />
Developer: Jason Wang<br />
<br />
* rework on flow caches<br />
The current hlist implementation of the flow caches has several limitations:<br />
1) in the worst case, linear search performs badly<br />
2) it does not scale<br />
https://patchwork.kernel.org/patch/2025121/<br />
Developer: Jason Wang<br />
<br />
* eliminate the extra copy in virtio-net driver<br />
We currently do an extra copy of 128 bytes for every packet.<br />
This could be eliminated for small packets by:<br />
1) using build_skb() and a head frag (see the sketch below)<br />
2) a bigger vnet header length ( >= NET_SKB_PAD + NET_IP_ALIGN )<br />
Or use a dedicated queue for small packet receiving? (reordering)<br />
Developer: Jason Wang<br />
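<br />
A kernel-style sketch (not compilable standalone) of idea 1) above; build_skb(), SKB_DATA_ALIGN and NET_SKB_PAD are the existing kernel interfaces, the rest is simplified:<br />
<pre>
/* Wrap an already-filled page fragment with build_skb() instead of
 * copying the packet into a freshly allocated skb head. */
static struct sk_buff *receive_small_buildskb(void *buf, unsigned int len)
{
        /* frag_size covers the whole buffer, including shared info */
        unsigned int frag_size = SKB_DATA_ALIGN(NET_SKB_PAD + len) +
                                 SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
        struct sk_buff *skb = build_skb(buf, frag_size);

        if (!skb)
                return NULL;
        skb_reserve(skb, NET_SKB_PAD);  /* keep the usual headroom */
        skb_put(skb, len);              /* packet data, no memcpy  */
        return skb;
}
</pre>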
<br />
* orphan packets less aggressively (was: make pktgen work for virtio-net (or partially orphan))<br />
virtio-net orphans all skbs during tx, this used to be optimal.<br />
Recent changes in guest networking stack and hardware advances<br />
such as APICv changed optimal behaviour for drivers.<br />
We need to revisit optimizations such as orphaning all packets early<br />
to have optimal behaviour.<br />
<br />
this should also fix pktgen, which is currently broken with virtio-net:<br />
orphaning all skbs makes pktgen wait forever for the refcount.<br />
Jason's idea: bring back the tx interrupt (partially)<br />
Jason's idea: introduce a flag to tell pktgen not to wait<br />
Discussion here: https://patchwork.kernel.org/patch/1800711/<br />
MST's idea: add a .ndo_tx_polling not only for pktgen<br />
Developers: Jason Wang, MST<br />
<br />
* Head of line blocking issue with zerocopy<br />
zerocopy has several defects that cause a head-of-line blocking problem:<br />
- the number of pending DMAs is limited<br />
- completions are done in order<br />
This means that if some of the DMAs are delayed, all others are delayed as well. This can be reproduced with the following case:<br />
- boot two VMs, VM1 (tap1) and VM2 (tap2), on host1 (which has eth0)<br />
- set up tbf to limit the tap2 bandwidth to 10Mbit/s<br />
- start two netperf instances, one from VM1 to VM2, another from VM1 to an external host whose traffic goes through eth0 on the host<br />
Then you can see that not only is VM1 to VM2 throttled, but VM1 to the external host is throttled as well.<br />
For this issue, a solution is to orphan the frags when enqueuing to a non-work-conserving qdisc.<br />
But we have similar issues in other cases:<br />
- The card has its own priority queues<br />
- The host has two interfaces, one 1G and one 10G, so throttling the 1G one may cause traffic over the 10G one to be throttled too.<br />
The final solution is to remove receive buffering at tun and convert it to use NAPI<br />
Developer: developers welcome! (Jason Wang)<br />
Reference: https://lkml.org/lkml/2014/1/17/105<br />
<br />
* Write an ethtool selftest for virtio-net<br />
Implement the ethtool selftest method for virtio-net, for regression testing, e.g. the CVEs found for tun/macvtap, qemu and vhost.<br />
Developer: Jason Wang<br />
<br />
* Dev watchdog for virtio-net:<br />
Implement a watchdog for virtio-net. This will be useful for hunting host bugs early.<br />
Developer: Jason Wang<br />
<br />
<br />
* guest programmable mac/vlan filtering with macvtap<br />
Developer: Amos Kong<br />
qemu: https://bugzilla.redhat.com/show_bug.cgi?id=848203 (applied by upstream)<br />
libvirt: https://bugzilla.redhat.com/show_bug.cgi?id=848199<br />
http://git.qemu.org/?p=qemu.git;a=commit;h=b1be42803b31a913bab65bab563a8760ad2e7f7f<br />
Status: [[GuestProgrammableMacVlanFiltering]]<br />
<br />
* bridge without promisc mode in NIC<br />
given hardware support, teach bridge<br />
to program mac/vlan filtering in NIC<br />
Helps performance and security on noisy LANs<br />
http://comments.gmane.org/gmane.linux.network/266546<br />
Developer: Vlad Yasevich<br />
<br />
* reduce networking latency:<br />
allow handling short packets from softirq or VCPU context<br />
Plan:<br />
We are going through the scheduler 3 times<br />
(could be up to 5 if softirqd is involved)<br />
Consider RX: host irq -> io thread -> VCPU thread -><br />
guest irq -> guest thread.<br />
This adds a lot of latency.<br />
We can cut it by some 1.5x if we do a bit of work<br />
either in the VCPU or softirq context.<br />
Testing: netperf TCP RR - should be improved drastically<br />
netperf TCP STREAM guest to host - no regression<br />
Developer: MST<br />
<br />
* Flexible buffers: put virtio header inline with packet data<br />
https://patchwork.kernel.org/patch/1540471/<br />
Developer: MST<br />
<br />
* device failover to allow migration with assigned devices<br />
https://fedoraproject.org/wiki/Features/Virt_Device_Failover<br />
Developer: Gal Hammer, Cole Robinson, Laine Stump, MST<br />
<br />
* Reuse vringh code for better maintainability<br />
Developer: Rusty Russell<br />
<br />
* Improve stats, make them more helpful for performance analysis<br />
Developer: Sriram Narasimhan<br />
<br />
* Enable GRO for packets coming to bridge from a tap interface<br />
Developer: Dmitry Fleytman<br />
<br />
* Better support for windows LRO<br />
Extend virtio-header with statistics for GRO packets:<br />
number of packets coalesced and number of duplicate ACKs coalesced<br />
Developer: Dmitry Fleytman<br />
<br />
* IPoIB infiniband bridging<br />
Plan: implement macvtap for ipoib and virtio-ipoib<br />
Developer: MST<br />
<br />
* netdev polling for virtio.<br />
There are two kinds of netdev polling:<br />
- netpoll - used for debugging<br />
- rx busy polling for virtio-net [DONE]<br />
see https://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=91815639d8804d1eee7ce2e1f7f60b36771db2c9. 1 byte netperf TCP_RR shows 127% improvement.<br />
Future work is to co-operate with the host and only do the busy polling when there is no other runnable process on the host CPU. <br />
Developer: Jason Wang<br />
<br />
* interrupt coalescing<br />
Reduce the number of interrupts.<br />
Rx interrupt coalescing should be good for rx stream throughput.<br />
Tx interrupt coalescing will help the optimization of enabling the tx interrupt conditionally (see the sketch below).<br />
Developer: Jason Wang<br />
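<br />
The natural user-visible knobs for this are the existing ethtool coalescing API; a sketch with example values only (the function name is hypothetical):<br />
<pre>
#include <linux/ethtool.h>

static void virtio_net_coalesce_defaults(struct ethtool_coalesce *ec)
{
        ec->rx_coalesce_usecs       = 64;  /* fire the rx irq after 64us... */
        ec->rx_max_coalesced_frames = 32;  /* ...or after 32 frames         */
        ec->tx_coalesce_usecs       = 64;
        ec->tx_max_coalesced_frames = 32;
}
</pre>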
<br />
* enable tx interrupt conditionally<br />
Small-packet TCP stream performance is not good. This is because virtio-net orphans the packet during ndo_start_xmit(), which disables TCP small-packet optimizations like TCP Small Queues and auto-corking. The idea is to enable the tx interrupt for small TCP packets (see the sketch below).<br />
Jason's idea: switch between polling and tx-interrupt mode based on recent statistics.<br />
MST's idea: use a per-descriptor flag for virtio to force an interrupt for a specific packet.<br />
Developer: Jason Wang, MST<br />
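<br />
An illustrative guest-driver sketch of the conditional-tx-interrupt idea; the threshold and want_tx_interrupt() are made up, while virtqueue_enable_cb_delayed() and skb_orphan() are the existing kernel interfaces:<br />
<pre>
#define TX_IRQ_LEN_THRESHOLD 128   /* made-up cutoff for "small" */

static bool want_tx_interrupt(const struct sk_buff *skb)
{
        return skb->len < TX_IRQ_LEN_THRESHOLD;
}

/* in start_xmit() (pseudo-usage):
 *      if (want_tx_interrupt(skb))
 *              virtqueue_enable_cb_delayed(vq);  // completion irq; keeps
 *                                                // TSQ/auto-corking working
 *      else
 *              skb_orphan(skb);                  // legacy poll-style path
 */
</pre>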
<br />
* use kvm eventfd support for injecting level-triggered interrupts<br />
aim: enable vhost by default for level interrupts.<br />
The benefit is security: we want to avoid using userspace<br />
virtio net so that vhost-net is always used.<br />
<br />
Alex emulated (post & re-enable) level-triggered interrupts in KVM to<br />
skip userspace. VFIO already enjoys the performance benefit;<br />
let's do the same for virtio-pci. Current virtio-pci devices still use<br />
level interrupts in userspace.<br />
<br />
kernel:<br />
7a84428af [PATCH] KVM: Add resampling irqfds for level triggered interrupts<br />
qemu:<br />
68919cac [PATCH] hw/vfio: set interrupts using pci irq wrappers<br />
(virtio-pci didn't use the wrappers)<br />
e1d1e586 [PATCH] vfio-pci: Add KVM INTx acceleration<br />
<br />
Developer: Amos Kong<br />
<br />
* sharing config interrupts<br />
Support more devices by sharing a single MSI vector<br />
between multiple virtio devices.<br />
(Applies to virtio-blk too).<br />
Developer: Amos Kong<br />
<br />
* network traffic throttling<br />
the block layer implemented "continuous leaky bucket" throttling;<br />
we can apply the same continuous leaky bucket to networking:<br />
IOPS/BPS * RX/TX/TOTAL<br />
Developer: Amos Kong<br />
<br />
* Allocate mac_table dynamically<br />
<br />
In the future, maybe we can allocate the mac_table dynamically instead<br />
of embedding it in VirtIONet. Then we can just do a pointer swap and<br />
g_free(), saving a memcpy() here.<br />
Developer: Amos Kong<br />
<br />
* reduce conflict with VCPU thread<br />
if the VCPU and networking run on the same CPU,<br />
they conflict, resulting in bad performance.<br />
Fix that: push the vhost thread out to another CPU<br />
more aggressively.<br />
Developer: Amos Kong<br />
<br />
* rx mac filtering in tun<br />
the need for this is still not understood, as we already have filtering in the bridge<br />
we have a small table of addresses; we need to make it larger<br />
if we only need filtering for unicast (multicast is handled by IMP filtering)<br />
Developer: Amos Kong<br />
<br />
* vlan filtering in tun<br />
the need for this is still not understood, as we already have filtering in the bridge<br />
Developer: Amos Kong<br />
<br />
=== projects that are not started yet - no owner ===<br />
<br />
* add documentation for macvlan and macvtap<br />
recent docs here:<br />
http://backreference.org/2014/03/20/some-notes-on-macvlanmacvtap/<br />
need to integrate into the iproute and kernel docs.<br />
<br />
* receive side zero copy<br />
The ideal is a NIC with accelerated RFS support,<br />
so we can feed the virtio rx buffers into the correct NIC queue.<br />
Depends on non-promisc NIC support in the bridge.<br />
Search for "Xin Xiaohui: Provide a zero-copy method on KVM virtio-net"<br />
for a very old prototype<br />
<br />
* RDMA bridging<br />
<br />
* DMA engine (IOAT) use in tun<br />
Old patch here: [PATCH RFC] tun: dma engine support<br />
It does not speed things up. Need to see why and<br />
what can be done.<br />
<br />
* virtio API extension: improve small packet/large buffer performance:<br />
support "reposting" buffers for mergeable buffers,<br />
support pool for indirect buffers<br />
<br />
* more GSO type support:<br />
The kernel does not support more GSO types yet: FCOE, GRE, UDP_TUNNEL<br />
<br />
* ring aliasing:<br />
using vhost-net as a networking backend with virtio-net in QEMU<br />
being what's guest facing.<br />
This gives you the best of both worlds: QEMU acts as a first<br />
line of defense against a malicious guest while still getting the<br />
performance advantages of vhost-net (zero-copy).<br />
In fact a bit of complexity in vhost was put there in the vague hope to<br />
support something like this: virtio rings are not translated through<br />
regular memory tables, instead, vhost gets a pointer to ring address.<br />
This allows qemu to act as a man in the middle,<br />
verifying the descriptors but not touching the packet data.<br />
<br />
* non-virtio device support with vhost<br />
Use vhost interface for guests that don't use virtio-net<br />
<br />
* Extend sndbuf scope to int64<br />
<br />
The current sndbuf limit is INT_MAX in tap_set_sndbuf();<br />
large values (like 8388607T) are converted correctly by qapi from the qemu command line.<br />
If we want to support such large values, we should extend the sndbuf limit from 'int' to 'int64'<br />
<br />
Upstream discussion: https://lists.gnu.org/archive/html/qemu-devel/2014-04/msg04192.html<br />
<br />
=== vague ideas: path to implementation not clear ===<br />
<br />
* change tcp_tso_should_defer for kvm: batch more<br />
aggressively.<br />
in particular, see below<br />
<br />
* tcp: increase gso buffering for cubic,reno<br />
At the moment we push out an skb whenever the limit becomes<br />
large enough to send a full-sized TSO skb even if the skb,<br />
in fact, is not full-sized.<br />
The reason for this seems to be that some congestion avoidance<br />
protocols rely on the number of packets in flight to calculate<br />
CWND, so if we underuse the available CWND it shrinks<br />
which degrades performance:<br />
http://www.mail-archive.com/netdev@vger.kernel.org/msg08738.html<br />
<br />
However, there seems to be no reason to do this for<br />
protocols such as reno and cubic which don't rely on packets in flight,<br />
and so will simply increase CWND a bit more to compensate for the<br />
underuse.<br />
<br />
* ring redesign:<br />
find a way to test raw ring performance <br />
fix cacheline bounces <br />
reduce interrupts<br />
<br />
<br />
* irq/numa affinity:<br />
networking goes much faster with irq pinning:<br />
both with and without numa.<br />
what can be done to make the non-pinned setup go faster?<br />
<br />
* vlan filtering in bridge<br />
kernel part is done (Vlad Yasevich)<br />
teach qemu to notify libvirt to enable the filter (still to do) (the existing NIC_RX_FILTER_CHANGED event contains the vlan tables)<br />
<br />
* tx coalescing<br />
Delay several packets before kicking the device.<br />
<br />
* bridging on top of macvlan <br />
add code to forward LRO status from macvlan (not macvtap)<br />
back to the lowerdev, so that setting up forwarding<br />
from macvlan disables LRO on the lowerdev<br />
<br />
* virtio: preserve packets exactly with LRO<br />
LRO is not normally compatible with forwarding.<br />
with virtio we are getting packets from a Linux host,<br />
so we could conceivably preserve packets exactly<br />
even with LRO. I am guessing other hardware could be<br />
doing this as well.<br />
<br />
* vxlan<br />
What could we do here?<br />
<br />
* bridging without promisc mode with OVS<br />
<br />
=== high level issues: not clear what the project is, yet ===<br />
<br />
* security: iptables<br />
At the moment most people disable iptables to get<br />
good performance on 10Gb/s networking.<br />
Any way to improve experience?<br />
<br />
* performance<br />
Going through the scheduler and the full networking stack twice<br />
(host+guest) adds a lot of overhead.<br />
Any way to allow bypassing some layers?<br />
<br />
* manageability<br />
It is still hard to figure out VM networking:<br />
VM networking goes through libvirt, host networking through NM.<br />
Any way to integrate?<br />
<br />
=== testing projects ===<br />
Keeping networking stable is the highest priority.<br />
<br />
* Write some unit tests for vhost-net/vhost-scsi<br />
* Run weekly test on upstream HEAD covering test matrix with autotest<br />
* Measure the effect of each of the above-mentioned optimizations<br />
- Use autotest network performance regression testing (that runs netperf)<br />
- Also test any wild idea that works. Some may be useful.<br />
* Migrate some of the performance regression autotest functionality into Netperf<br />
- Get the CPU utilization of the host and of the other party, and add them to the report. The same goes for other host measures, such as vmexits, interrupts, ...<br />
- Run Netperf in demo-mode, and measure only the time when all the sessions are active (could be many seconds after the beginning of the tests)<br />
- Packaging of Netperf in Fedora / RHEL (exists in Fedora). Licensing could be an issue.<br />
- Make the scripts more visible<br />
<br />
=== non-virtio-net devices ===<br />
* e1000: stabilize<br />
<br />
=== test matrix ===<br />
<br />
DOA test matrix (all combinations should work):<br />
vhost: test both on and off, obviously<br />
test: hotplug/unplug, vlan/mac filtering, netperf,<br />
file copy both ways: scp, NFS, NTFS<br />
guests: linux: release and debug kernels, windows<br />
conditions: plain run, run while under migration,<br />
vhost on/off migration<br />
networking setup: simple, qos with cgroups<br />
host configuration: host-guest, external-guest</div>
Jasowang
https://linux-kvm.org/index.php?title=NetworkingTodo&diff=23034
NetworkingTodo
2014-09-12T08:30:58Z
<p>Jasowang: </p>
<hr />
<div>This page should cover all networking related activity in KVM,<br />
currently most info is related to virtio-net.<br />
<br />
TODO: add bugzilla entry links.<br />
<br />
=== projects in progress. contributions are still very wellcome! ===<br />
<br />
* large-order allocations<br />
see 28d6427109d13b0f447cba5761f88d3548e83605<br />
Developer: MST<br />
<br />
* vhost-net scalability tuning: threading for many VMs<br />
Plan: switch to workqueue shared by many VMs<br />
http://www.mail-archive.com/kvm@vger.kernel.org/msg69868.html<br />
<br />
http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/479e3578ed05bfac85257b4200427735!OpenDocument<br />
<br />
Developer: Bandan Das<br />
Testing: netperf guest to guest<br />
<br />
* support more queues<br />
We limit TUN to 8 queues, but we really want<br />
1 queue per guest CPU. The limit comes from net<br />
core, need to teach it to allocate array of<br />
pointers and not array of queues.<br />
Jason has an draft patch to use flex array.<br />
Another thing is to move the flow caches out of tun_struct.<br />
Developer: Jason Wang<br />
<br />
* enable multiqueue by default<br />
Multiqueue causes regression in some workloads, thus<br />
it is off by default. Detect and enable/disable<br />
automatically so we can make it on by default.<br />
This is because GSO tends to batch less when mq is enabled.<br />
https://patchwork.kernel.org/patch/2235191/<br />
Developer: Jason Wang<br />
<br />
* rework on flow caches<br />
Current hlist implementation of flow caches has several limitations:<br />
1) at worst case, linear search will be bad<br />
2) not scale<br />
https://patchwork.kernel.org/patch/2025121/<br />
Developer: Jason Wang<br />
<br />
* eliminate the extra copy in virtio-net driver<br />
We need do an extra copy of 128 bytes for every packets. <br />
This could be eliminated for small packets by:<br />
1) use build_skb() and head frag<br />
2) bigger vnet header length ( >= NET_SKB_PAD + NET_IP_ALIGN )<br />
Or use a dedicated queue for small packet receiving ? (reordering)<br />
Developer: Jason Wang<br />
<br />
* orphan packets less agressively (was make pktgen works for virtio-net ( or partially orphan ))<br />
virtio-net orphans all skbs during tx, this used to be optimal.<br />
Recent changes in guest networking stack and hardware advances<br />
such as APICv changed optimal behaviour for drivers.<br />
We need to revisit optimizations such as orphaning all packets early<br />
to have optimal behaviour.<br />
<br />
this should also fix pktgen which is currently broken with virtio net:<br />
orphaning all skbs makes pktgen wait for ever to the refcnt.<br />
Jason's idea: bring back tx interrupt (partially)<br />
Jason's idea: introduce a flag to tell pktgen not for wait<br />
Discussion here: https://patchwork.kernel.org/patch/1800711/<br />
MST's idea: add a .ndo_tx_polling not only for pktgen<br />
Developers: Jason Wang, MST<br />
<br />
* Announce self by guest driver [Done]<br />
Send gARP by guest driver. Guest part is finished.<br />
Qemu part is merged by MST.<br />
Developer: Jason Wang<br />
<br />
* Head of line blocking issue with zerocopy<br />
zerocopy has several defects that will cause head of line blocking problem:<br />
- limit the number of pending DMAs<br />
- complete in order<br />
This means is one of some of the DMAs were delayed, all other will also delayed. This could be reproduced with following case:<br />
- boot two VMS VM1(tap1) and VM2(tap2) on host1 (has eth0)<br />
- setup tbf to limit the tap2 bandwidth to 10Mbit/s<br />
- start two netperf instances one from VM1 to VM2, another from VM1 to an external host whose traffic go through eth0 on host<br />
Then you can see not only VM1 to VM2 is throttled, but also VM1 to external host were also throttled.<br />
For this issue, a solution is orphan the frags when en queuing to non work conserving qdisc.<br />
But we have have similar issues in other case:<br />
- The card has its own priority queues<br />
- Host has two interface, one is 1G another is 10G, so throttle 1G may lead traffic over 10G to be throttled.<br />
The final solution is to remove receive buffering at tun, and convert it to user NAPI<br />
Developer: Jason Wang<br />
Reference: https://lkml.org/lkml/2014/1/17/105<br />
<br />
* Write a ethtool seftest for virtio-net<br />
Implement selftest ethtool method for virtio-net for regression test e.g the CVEs found for tun/macvtap, qemu and vhost.<br />
Developer: CSDN summer code project student <br />
<br />
* guest programmable mac/vlan filtering with macvtap<br />
Developer: Amos Kong<br />
qemu: https://bugzilla.redhat.com/show_bug.cgi?id=848203 (applied by upstream)<br />
libvirt: https://bugzilla.redhat.com/show_bug.cgi?id=848199<br />
http://git.qemu.org/?p=qemu.git;a=commit;h=b1be42803b31a913bab65bab563a8760ad2e7f7f<br />
Status: [[GuestProgrammableMacVlanFiltering]]<br />
<br />
* bridge without promisc mode in NIC<br />
given hardware support, teach bridge<br />
to program mac/vlan filtering in NIC<br />
Helps performance and security on noisy LANs<br />
http://comments.gmane.org/gmane.linux.network/266546<br />
Developer: Vlad Yasevich<br />
<br />
* reduce networking latency:<br />
allow handling short packets from softirq or VCPU context<br />
Plan:<br />
We are going through the scheduler 3 times<br />
(could be up to 5 if softirqd is involved)<br />
Consider RX: host irq -> io thread -> VCPU thread -><br />
guest irq -> guest thread.<br />
This adds a lot of latency.<br />
We can cut it by some 1.5x if we do a bit of work<br />
either in the VCPU or softirq context.<br />
Testing: netperf TCP RR - should be improved drastically<br />
netperf TCP STREAM guest to host - no regression<br />
Developer: MST<br />
<br />
* Flexible buffers: put virtio header inline with packet data<br />
https://patchwork.kernel.org/patch/1540471/<br />
Developer: MST<br />
<br />
* device failover to allow migration with assigned devices<br />
https://fedoraproject.org/wiki/Features/Virt_Device_Failover<br />
Developer: Gal Hammer, Cole Robinson, Laine Stump, MST<br />
<br />
* Reuse vringh code for better maintainability<br />
Developer: Rusty Russell<br />
<br />
* Improve stats, make them more helpful for per analysis<br />
Developer: Sriram Narasimhan<br />
<br />
* Enable GRO for packets coming to bridge from a tap interface<br />
Developer: Dmitry Fleytman<br />
<br />
* Better support for windows LRO<br />
Extend virtio-header with statistics for GRO packets:<br />
number of packets coalesced and number of duplicate ACKs coalesced<br />
Developer: Dmitry Fleytman<br />
<br />
* IPoIB infiniband bridging<br />
Plan: implement macvtap for ipoib and virtio-ipoib<br />
Developer: MST<br />
<br />
* netdev polling for virtio.<br />
There are two kinds of netdev polling:<br />
- netpoll - used for debugging<br />
- rx busy polling for virtio-net [DONE]<br />
see https://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=91815639d8804d1eee7ce2e1f7f60b36771db2c9. 1 byte netperf TCP_RR shows 127% improvement.<br />
Future work is co-operate with host, and only does the busy polling when there's no other process in host cpu. <br />
Developer: Jason Wang<br />
<br />
* interrupt coalescing<br />
Reduce the number of interrupt<br />
Rx interrupt coalescing should be good for rx stream throughput.<br />
Tx interrupt coalescing will help the optimization of enabling tx interrupt conditionally.<br />
Developer: Jason Wang<br />
<br />
* enable tx interrupt conditionally<br />
Small packet TCP stream performance is not good. This is because virtio-net orphan the packet during ndo_start_xmit() which disable the TCP small packet optimizations like TCP small Queue and AutoCork. The idea is enable the tx interrupt to TCP small packets.<br />
Jason's idea: switch between poll and tx interrupt mode based on recent statistics.<br />
MST's idea: use a per descriptor flag for virtio to force interrupt for a specific packet.<br />
Developer: Jason Wang, MST<br />
<br />
* use kvm eventfd support for injecting level-triggered interrupts<br />
aim: enable vhost by default for level interrupts.<br />
The benefit is security: we want to avoid using userspace<br />
virtio net so that vhost-net is always used.<br />
<br />
Alex emulated (post & re-enable) level-triggered interrupt in KVM for<br />
skipping userspace. VFIO already enjoied the performance benefit,<br />
let's do it for virtio-pci. Current virtio-pci devices still use<br />
level-interrupt in userspace.<br />
<br />
kernel:<br />
7a84428af [PATCH] KVM: Add resampling irqfds for level triggered interrupts<br />
qemu:<br />
68919cac [PATCH] hw/vfio: set interrupts using pci irq wrappers<br />
(virtio-pci didn't use the wrappers)<br />
e1d1e586 [PATCH] vfio-pci: Add KVM INTx acceleration<br />
<br />
Developer: Amos Kong<br />
<br />
* sharing config interrupts<br />
Support more devices by sharing a single msi vector<br />
between multiple virtio devices.<br />
(Applies to virtio-blk too).<br />
Developer: Amos Kong<br />
<br />
* network traffic throttling<br />
block implemented "continuous leaky bucket" for throttling<br />
we can use continuous leaky bucket to network<br />
IOPS/BPS * RX/TX/TOTAL<br />
Developer: Amos Kong<br />
<br />
* Allocate mac_table dynamically<br />
<br />
In the future, maybe we can allocate the mac_table dynamically instead<br />
of embed it in VirtIONet. Then we can just does a pointer swap and<br />
gfree() and can save a memcpy() here.<br />
Developer: Amos Kong<br />
<br />
* reduce conflict with VCPU thread<br />
if VCPU and networking run on same CPU,<br />
they conflict resulting in bad performance.<br />
Fix that, push vhost thread out to another CPU<br />
more aggressively.<br />
Developer: Amos Kong<br />
<br />
<br />
=== projects that are not started yet - no owner ===<br />
<br />
* add documentation for macvlan and macvtap<br />
recent docs here:<br />
http://backreference.org/2014/03/20/some-notes-on-macvlanmacvtap/<br />
need to integrate in iproute and kernel docs.<br />
<br />
* receive side zero copy<br />
The ideal is a NIC with accelerated RFS support,<br />
So we can feed the virtio rx buffers into the correct NIC queue.<br />
Depends on non promisc NIC support in bridge.<br />
Search for "Xin Xiaohui: Provide a zero-copy method on KVM virtio-net"<br />
for a very old prototype<br />
<br />
* RDMA bridging<br />
<br />
* DMA emgine (IOAT) use in tun<br />
Old patch here: [PATCH RFC] tun: dma engine support<br />
It does not speed things up. Need to see why and<br />
what can be done.<br />
<br />
* virtio API extension: improve small packet/large buffer performance:<br />
support "reposting" buffers for mergeable buffers,<br />
support pool for indirect buffers<br />
<br />
* more GSO type support:<br />
Kernel not support more type of GSO: FCOE, GRE, UDP_TUNNEL<br />
<br />
* ring aliasing:<br />
using vhost-net as a networking backend with virtio-net in QEMU<br />
being what's guest facing.<br />
This gives you the best of both worlds: QEMU acts as a first<br />
line of defense against a malicious guest while still getting the<br />
performance advantages of vhost-net (zero-copy).<br />
In fact a bit of complexity in vhost was put there in the vague hope to<br />
support something like this: virtio rings are not translated through<br />
regular memory tables, instead, vhost gets a pointer to ring address.<br />
This allows qemu acting as a man in the middle,<br />
verifying the descriptors but not touching the packet data.<br />
<br />
* non-virtio device support with vhost<br />
Use vhost interface for guests that don't use virtio-net<br />
<br />
* Extend sndbuf scope to int64<br />
<br />
Current sndbuf limit is INT_MAX in tap_set_sndbuf(),<br />
large values (like 8388607T) can be converted rightly by qapi from qemu commandline,<br />
If we want to support the large values, we should extend sndbuf limit from 'int' to 'int64'<br />
<br />
Upstream discussion: https://lists.gnu.org/archive/html/qemu-devel/2014-04/msg04192.html<br />
<br />
=== vague ideas: path to implementation not clear ===<br />
<br />
* change tcp_tso_should_defer for kvm: batch more<br />
aggressively.<br />
in particular, see below<br />
<br />
* tcp: increase gso buffering for cubic,reno<br />
At the moment we push out an skb whenever the limit becomes<br />
large enough to send a full-sized TSO skb even if the skb,<br />
in fact, is not full-sized.<br />
The reason for this seems to be that some congestion avoidance<br />
protocols rely on the number of packets in flight to calculate<br />
CWND, so if we underuse the available CWND it shrinks<br />
which degrades performance:<br />
http://www.mail-archive.com/netdev@vger.kernel.org/msg08738.html<br />
<br />
However, there seems to be no reason to do this for<br />
protocols such as reno and cubic which don't rely on packets in flight,<br />
and so will simply increase CWND a bit more to compensate for the<br />
underuse.<br />
<br />
* ring redesign:<br />
find a way to test raw ring performance <br />
fix cacheline bounces <br />
reduce interrupts<br />
<br />
<br />
* irq/numa affinity:<br />
networking goes much faster with irq pinning:<br />
both with and without numa.<br />
what can be done to make the non-pinned setup go faster?<br />
<br />
* rx mac filtering in tun<br />
the need for this is still not understood as we have filtering in bridge<br />
we have a small table of addresses, need to make it larger<br />
if we only need filtering for unicast (multicast is handled by IMP filtering)<br />
<br />
* vlan filtering in tun<br />
the need for this is still not understood as we have filtering in bridge<br />
<br />
* vlan filtering in bridge<br />
kernel part is done (Vlad Yasevich)<br />
teach qemu to notify libvirt to enable the filter (still to do) (existed NIC_RX_FILTER_CHANGED event contains vlan-tables)<br />
<br />
* tx coalescing<br />
Delay several packets before kick the device.<br />
<br />
* bridging on top of macvlan <br />
add code to forward LRO status from macvlan (not macvtap)<br />
back to the lowerdev, so that setting up forwarding<br />
from macvlan disables LRO on the lowerdev<br />
<br />
* virtio: preserve packets exactly with LRO<br />
LRO is not normally compatible with forwarding.<br />
virtio we are getting packets from a linux host,<br />
so we could thinkably preserve packets exactly<br />
even with LRO. I am guessing other hardware could be<br />
doing this as well.<br />
<br />
* vxlan<br />
What could we do here?<br />
<br />
* bridging without promisc mode with OVS<br />
<br />
=== high level issues: not clear what the project is, yet ===<br />
<br />
* security: iptables<br />
At the moment most people disables iptables to get<br />
good performance on 10G/s networking.<br />
Any way to improve experience?<br />
<br />
* performance<br />
Going through scheduler and full networking stack twice<br />
(host+guest) adds a lot of overhead<br />
Any way to allow bypassing some layers?<br />
<br />
* manageability<br />
Still hard to figure out VM networking,<br />
VM networking is through libvirt, host networking through NM<br />
Any way to integrate?<br />
<br />
=== testing projects ===<br />
Keeping networking stable is highest priority.<br />
<br />
* Write some unit tests for vhost-net/vhost-scsi<br />
* Run weekly test on upstream HEAD covering test matrix with autotest<br />
* Measure the effect of each of the above-mentioned optimizations<br />
- Use autotest network performance regression testing (that runs netperf)<br />
- Also test any wild idea that works. Some may be useful.<br />
* Migrate some of the performance regression autotest functionality into Netperf<br />
- Get the CPU-utilization of the Host and the other-party, and add them to the report. This is also true for other Host measures, such as vmexits, interrupts, ...<br />
- Run Netperf in demo-mode, and measure only the time when all the sessions are active (could be many seconds after the beginning of the tests)<br />
- Packaging of Netperf in Fedora / RHEL (exists in Fedora). Licensing could be an issue.<br />
- Make the scripts more visible<br />
<br />
=== non-virtio-net devices ===<br />
* e1000: stabilize<br />
<br />
=== test matrix ===<br />
<br />
DOA test matrix (all combinations should work):<br />
vhost: test both on and off, obviously<br />
test: hotplug/unplug, vlan/mac filtering, netperf,<br />
file copy both ways: scp, NFS, NTFS<br />
guests: linux: release and debug kernels, windows<br />
conditions: plain run, run while under migration,<br />
vhost on/off migration<br />
networking setup: simple, qos with cgroups<br />
host configuration: host-guest, external-guest</div>
Jasowang
https://linux-kvm.org/index.php?title=NetworkingTodo&diff=23025
NetworkingTodo
2014-08-21T09:02:42Z
<p>Jasowang: </p>
<hr />
<div>This page should cover all networking related activity in KVM,<br />
currently most info is related to virtio-net.<br />
<br />
TODO: add bugzilla entry links.<br />
<br />
=== projects in progress. contributions are still very wellcome! ===<br />
<br />
* large-order allocations<br />
see 28d6427109d13b0f447cba5761f88d3548e83605<br />
Developer: MST<br />
<br />
* vhost-net scalability tuning: threading for many VMs<br />
Plan: switch to workqueue shared by many VMs<br />
http://www.mail-archive.com/kvm@vger.kernel.org/msg69868.html<br />
<br />
http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/479e3578ed05bfac85257b4200427735!OpenDocument<br />
<br />
Developer: Bandan Das<br />
Testing: netperf guest to guest<br />
<br />
* support more queues<br />
We limit TUN to 8 queues, but we really want<br />
1 queue per guest CPU. The limit comes from net<br />
core, need to teach it to allocate array of<br />
pointers and not array of queues.<br />
Jason has an draft patch to use flex array.<br />
Another thing is to move the flow caches out of tun_struct.<br />
Developer: Jason Wang<br />
<br />
* enable multiqueue by default<br />
Multiqueue causes regression in some workloads, thus<br />
it is off by default. Detect and enable/disable<br />
automatically so we can make it on by default.<br />
This is because GSO tends to batch less when mq is enabled.<br />
https://patchwork.kernel.org/patch/2235191/<br />
Developer: Jason Wang<br />
<br />
* rework on flow caches<br />
Current hlist implementation of flow caches has several limitations:<br />
1) at worst case, linear search will be bad<br />
2) not scale<br />
https://patchwork.kernel.org/patch/2025121/<br />
Developer: Jason Wang<br />
<br />
* eliminate the extra copy in virtio-net driver<br />
We need do an extra copy of 128 bytes for every packets. <br />
This could be eliminated for small packets by:<br />
1) use build_skb() and head frag<br />
2) bigger vnet header length ( >= NET_SKB_PAD + NET_IP_ALIGN )<br />
Or use a dedicated queue for small packet receiving ? (reordering)<br />
Developer: Jason Wang<br />
<br />
* orphan packets less agressively (was make pktgen works for virtio-net ( or partially orphan ))<br />
virtio-net orphans all skbs during tx, this used to be optimal.<br />
Recent changes in guest networking stack and hardware advances<br />
such as APICv changed optimal behaviour for drivers.<br />
We need to revisit optimizations such as orphaning all packets early<br />
to have optimal behaviour.<br />
<br />
this should also fix pktgen which is currently broken with virtio net:<br />
orphaning all skbs makes pktgen wait for ever to the refcnt.<br />
Jason's idea: bring back tx interrupt (partially)<br />
Jason's idea: introduce a flag to tell pktgen not for wait<br />
Discussion here: https://patchwork.kernel.org/patch/1800711/<br />
MST's idea: add a .ndo_tx_polling not only for pktgen<br />
Developers: Jason Wang, MST<br />
<br />
* Announce self by guest driver [Done]<br />
Send gARP by guest driver. Guest part is finished.<br />
Qemu part is merged by MST.<br />
Developer: Jason Wang<br />
<br />
* Head of line blocking issue with zerocopy<br />
zerocopy has several defects that will cause head of line blocking problem:<br />
- limit the number of pending DMAs<br />
- complete in order<br />
This means is one of some of the DMAs were delayed, all other will also delayed. This could be reproduced with following case:<br />
- boot two VMS VM1(tap1) and VM2(tap2) on host1 (has eth0)<br />
- setup tbf to limit the tap2 bandwidth to 10Mbit/s<br />
- start two netperf instances one from VM1 to VM2, another from VM1 to an external host whose traffic go through eth0 on host<br />
Then you can see not only VM1 to VM2 is throttled, but also VM1 to external host were also throttled.<br />
For this issue, a solution is orphan the frags when en queuing to non work conserving qdisc.<br />
But we have have similar issues in other case:<br />
- The card has its own priority queues<br />
- Host has two interface, one is 1G another is 10G, so throttle 1G may lead traffic over 10G to be throttled.<br />
The final solution is to remove receive buffering at tun, and convert it to user NAPI<br />
Developer: Jason Wang<br />
Reference: https://lkml.org/lkml/2014/1/17/105<br />
<br />
* Write a ethtool seftest for virtio-net<br />
Implement selftest ethtool method for virtio-net for regression test e.g the CVEs found for tun/macvtap, qemu and vhost.<br />
Developer: CSDN summer code project student <br />
<br />
* guest programmable mac/vlan filtering with macvtap<br />
Developer: Amos Kong<br />
qemu: https://bugzilla.redhat.com/show_bug.cgi?id=848203 (applied by upstream)<br />
libvirt: https://bugzilla.redhat.com/show_bug.cgi?id=848199<br />
http://git.qemu.org/?p=qemu.git;a=commit;h=b1be42803b31a913bab65bab563a8760ad2e7f7f<br />
Status: [[GuestProgrammableMacVlanFiltering]]<br />
<br />
* bridge without promisc mode in NIC<br />
given hardware support, teach bridge<br />
to program mac/vlan filtering in NIC<br />
Helps performance and security on noisy LANs<br />
http://comments.gmane.org/gmane.linux.network/266546<br />
Developer: Vlad Yasevich<br />
<br />
* reduce networking latency:<br />
allow handling short packets from softirq or VCPU context<br />
Plan:<br />
We are going through the scheduler 3 times<br />
(could be up to 5 if softirqd is involved)<br />
Consider RX: host irq -> io thread -> VCPU thread -><br />
guest irq -> guest thread.<br />
This adds a lot of latency.<br />
We can cut it by some 1.5x if we do a bit of work<br />
either in the VCPU or softirq context.<br />
Testing: netperf TCP RR - should be improved drastically<br />
netperf TCP STREAM guest to host - no regression<br />
Developer: MST<br />
<br />
* Flexible buffers: put virtio header inline with packet data<br />
https://patchwork.kernel.org/patch/1540471/<br />
Developer: MST<br />
<br />
* device failover to allow migration with assigned devices<br />
https://fedoraproject.org/wiki/Features/Virt_Device_Failover<br />
Developer: Gal Hammer, Cole Robinson, Laine Stump, MST<br />
<br />
* Reuse vringh code for better maintainability<br />
Developer: Rusty Russell<br />
<br />
* Improve stats, make them more helpful for per analysis<br />
Developer: Sriram Narasimhan<br />
<br />
* Enable GRO for packets coming to bridge from a tap interface<br />
Developer: Dmitry Fleytman<br />
<br />
* Better support for windows LRO<br />
Extend virtio-header with statistics for GRO packets:<br />
number of packets coalesced and number of duplicate ACKs coalesced<br />
Developer: Dmitry Fleytman<br />
<br />
* IPoIB infiniband bridging<br />
Plan: implement macvtap for ipoib and virtio-ipoib<br />
Developer: MST<br />
<br />
* netdev polling for virtio.<br />
There are two kinds of netdev polling:<br />
- netpoll - used for debugging<br />
- rx busy polling for virtio-net [DONE]<br />
see https://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=91815639d8804d1eee7ce2e1f7f60b36771db2c9. 1 byte netperf TCP_RR shows 127% improvement.<br />
Future work is co-operate with host, and only does the busy polling when there's no other process in host cpu. <br />
Developer: Jason Wang<br />
<br />
* interrupt coalescing<br />
Reduce the number of interrupt<br />
Rx interrupt coalescing should be good for rx stream throughput.<br />
Tx interrupt coalescing will help the optimization of enabling tx interrupt conditionally.<br />
Developer: Jason Wang<br />
<br />
* use kvm eventfd support for injecting level-triggered interrupts<br />
aim: enable vhost by default for level interrupts.<br />
The benefit is security: we want to avoid using userspace<br />
virtio net so that vhost-net is always used.<br />
<br />
Alex emulated (post & re-enable) level-triggered interrupt in KVM for<br />
skipping userspace. VFIO already enjoied the performance benefit,<br />
let's do it for virtio-pci. Current virtio-pci devices still use<br />
level-interrupt in userspace.<br />
<br />
kernel:<br />
7a84428af [PATCH] KVM: Add resampling irqfds for level triggered interrupts<br />
qemu:<br />
68919cac [PATCH] hw/vfio: set interrupts using pci irq wrappers<br />
(virtio-pci didn't use the wrappers)<br />
e1d1e586 [PATCH] vfio-pci: Add KVM INTx acceleration<br />
<br />
Developer: Amos Kong<br />
<br />
* sharing config interrupts<br />
Support more devices by sharing a single msi vector<br />
between multiple virtio devices.<br />
(Applies to virtio-blk too).<br />
Developer: Amos Kong<br />
<br />
* network traffic throttling<br />
block implemented "continuous leaky bucket" for throttling<br />
we can use continuous leaky bucket to network<br />
IOPS/BPS * RX/TX/TOTAL<br />
Developer: Amos Kong<br />
<br />
* Allocate mac_table dynamically<br />
<br />
In the future, maybe we can allocate the mac_table dynamically instead<br />
of embed it in VirtIONet. Then we can just does a pointer swap and<br />
gfree() and can save a memcpy() here.<br />
Developer: Amos Kong<br />
<br />
* reduce conflict with VCPU thread<br />
if VCPU and networking run on same CPU,<br />
they conflict resulting in bad performance.<br />
Fix that, push vhost thread out to another CPU<br />
more aggressively.<br />
Developer: Amos Kong<br />
<br />
<br />
=== projects that are not started yet - no owner ===<br />
<br />
* add documentation for macvlan and macvtap<br />
recent docs here:<br />
http://backreference.org/2014/03/20/some-notes-on-macvlanmacvtap/<br />
need to integrate in iproute and kernel docs.<br />
<br />
* receive side zero copy<br />
The ideal is a NIC with accelerated RFS support,<br />
So we can feed the virtio rx buffers into the correct NIC queue.<br />
Depends on non promisc NIC support in bridge.<br />
Search for "Xin Xiaohui: Provide a zero-copy method on KVM virtio-net"<br />
for a very old prototype<br />
<br />
* RDMA bridging<br />
<br />
* DMA emgine (IOAT) use in tun<br />
Old patch here: [PATCH RFC] tun: dma engine support<br />
It does not speed things up. Need to see why and<br />
what can be done.<br />
<br />
* virtio API extension: improve small packet/large buffer performance:<br />
support "reposting" buffers for mergeable buffers,<br />
support pool for indirect buffers<br />
<br />
* more GSO type support:<br />
Kernel not support more type of GSO: FCOE, GRE, UDP_TUNNEL<br />
<br />
* ring aliasing:<br />
using vhost-net as a networking backend with virtio-net in QEMU<br />
being what's guest facing.<br />
This gives you the best of both worlds: QEMU acts as a first<br />
line of defense against a malicious guest while still getting the<br />
performance advantages of vhost-net (zero-copy).<br />
In fact a bit of complexity in vhost was put there in the vague hope to<br />
support something like this: virtio rings are not translated through<br />
regular memory tables, instead, vhost gets a pointer to ring address.<br />
This allows qemu acting as a man in the middle,<br />
verifying the descriptors but not touching the packet data.<br />
<br />
* non-virtio device support with vhost<br />
Use vhost interface for guests that don't use virtio-net<br />
<br />
* Extend sndbuf scope to int64<br />
<br />
Current sndbuf limit is INT_MAX in tap_set_sndbuf(),<br />
large values (like 8388607T) can be converted rightly by qapi from qemu commandline,<br />
If we want to support the large values, we should extend sndbuf limit from 'int' to 'int64'<br />
<br />
Upstream discussion: https://lists.gnu.org/archive/html/qemu-devel/2014-04/msg04192.html<br />
<br />
=== vague ideas: path to implementation not clear ===<br />
<br />
* change tcp_tso_should_defer for kvm: batch more<br />
aggressively.<br />
in particular, see below<br />
<br />
* tcp: increase gso buffering for cubic,reno<br />
At the moment we push out an skb whenever the limit becomes<br />
large enough to send a full-sized TSO skb even if the skb,<br />
in fact, is not full-sized.<br />
The reason for this seems to be that some congestion avoidance<br />
protocols rely on the number of packets in flight to calculate<br />
CWND, so if we underuse the available CWND it shrinks<br />
which degrades performance:<br />
http://www.mail-archive.com/netdev@vger.kernel.org/msg08738.html<br />
<br />
However, there seems to be no reason to do this for<br />
protocols such as reno and cubic which don't rely on packets in flight,<br />
and so will simply increase CWND a bit more to compensate for the<br />
underuse.<br />
<br />
* ring redesign:<br />
find a way to test raw ring performance <br />
fix cacheline bounces <br />
reduce interrupts<br />
<br />
<br />
* irq/numa affinity:<br />
networking goes much faster with irq pinning:<br />
both with and without numa.<br />
what can be done to make the non-pinned setup go faster?<br />
<br />
* rx mac filtering in tun<br />
the need for this is still not understood as we have filtering in bridge<br />
we have a small table of addresses, need to make it larger<br />
if we only need filtering for unicast (multicast is handled by IMP filtering)<br />
<br />
* vlan filtering in tun<br />
the need for this is still not understood as we have filtering in bridge<br />
<br />
* vlan filtering in bridge<br />
kernel part is done (Vlad Yasevich)<br />
teach qemu to notify libvirt to enable the filter (still to do) (existed NIC_RX_FILTER_CHANGED event contains vlan-tables)<br />
<br />
* tx coalescing<br />
Delay several packets before kicking the device (see the sketch below).<br />
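A rough sketch of what this could look like in a virtio driver: batch a few descriptors, then kick once. TX_BATCH and the `more` hint are illustrative; the virtqueue calls are the standard kernel ones:<br />
<br />
 #include <linux/virtio.h><br />
 #include <linux/scatterlist.h><br />
 #include <linux/skbuff.h><br />
 <br />
 #define TX_BATCH 8                     /* illustrative batch size */<br />
 <br />
 static int xmit_one(struct virtqueue *vq, struct scatterlist *sg,<br />
                     unsigned int num, struct sk_buff *skb,<br />
                     unsigned int *pending, bool more)<br />
 {<br />
     int err = virtqueue_add_outbuf(vq, sg, num, skb, GFP_ATOMIC);<br />
     if (err)<br />
         return err;<br />
 <br />
     /* Kick once per batch (or when no more packets are queued),<br />
      * instead of once per packet. */<br />
     if (++*pending >= TX_BATCH || !more) {<br />
         if (virtqueue_kick_prepare(vq))<br />
             virtqueue_notify(vq);<br />
         *pending = 0;<br />
     }<br />
     return 0;<br />
 }<br />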
<br />
* bridging on top of macvlan <br />
add code to forward LRO status from macvlan (not macvtap)<br />
back to the lowerdev, so that setting up forwarding<br />
from macvlan disables LRO on the lowerdev<br />
<br />
* virtio: preserve packets exactly with LRO<br />
LRO is not normally compatible with forwarding.<br />
With virtio we are getting packets from a Linux host,<br />
so we could conceivably preserve packets exactly<br />
even with LRO. I am guessing other hardware could be<br />
doing this as well.<br />
<br />
* vxlan<br />
What could we do here?<br />
<br />
* bridging without promisc mode with OVS<br />
<br />
=== high level issues: not clear what the project is, yet ===<br />
<br />
* security: iptables<br />
At the moment most people disable iptables to get<br />
good performance on 10Gb/s networking.<br />
Any way to improve experience?<br />
<br />
* performance<br />
Going through the scheduler and the full networking stack twice<br />
(host+guest) adds a lot of overhead.<br />
Any way to allow bypassing some layers?<br />
<br />
* manageability<br />
Still hard to figure out VM networking:<br />
VM networking goes through libvirt, host networking through NM.<br />
Any way to integrate?<br />
<br />
=== testing projects ===<br />
Keeping networking stable is the highest priority.<br />
<br />
* Write some unit tests for vhost-net/vhost-scsi<br />
* Run weekly test on upstream HEAD covering test matrix with autotest<br />
* Measure the effect of each of the above-mentioned optimizations<br />
- Use autotest network performance regression testing (that runs netperf)<br />
- Also test any wild idea that works. Some may be useful.<br />
* Migrate some of the performance regression autotest functionality into Netperf<br />
- Get the CPU-utilization of the Host and the other-party, and add them to the report. This is also true for other Host measures, such as vmexits, interrupts, ...<br />
- Run Netperf in demo-mode, and measure only the time when all the sessions are active (could be many seconds after the beginning of the tests)<br />
- Packaging of Netperf in Fedora / RHEL (exists in Fedora). Licensing could be an issue.<br />
- Make the scripts more visible<br />
<br />
=== non-virtio-net devices ===<br />
* e1000: stabilize<br />
<br />
=== test matrix ===<br />
<br />
DOA test matrix (all combinations should work):<br />
vhost: test both on and off, obviously<br />
test: hotplug/unplug, vlan/mac filtering, netperf,<br />
file copy both ways: scp, NFS, NTFS<br />
guests: linux: release and debug kernels, windows<br />
conditions: plain run, run while under migration,<br />
vhost on/off migration<br />
networking setup: simple, qos with cgroups<br />
host configuration: host-guest, external-guest</div>
Jasowang
https://linux-kvm.org/index.php?title=NetworkingTodo&diff=20596
NetworkingTodo
2014-06-12T05:43:36Z
<p>Jasowang: </p>
<hr />
<div>This page should cover all networking related activity in KVM,<br />
currently most info is related to virtio-net.<br />
<br />
TODO: add bugzilla entry links.<br />
<br />
=== projects in progress. contributions are still very welcome! ===<br />
<br />
* large-order allocations<br />
see 28d6427109d13b0f447cba5761f88d3548e83605<br />
Developer: MST<br />
<br />
* vhost-net scalability tuning: threading for many VMs<br />
Plan: switch to a workqueue shared by many VMs (see the sketch below)<br />
http://www.mail-archive.com/kvm@vger.kernel.org/msg69868.html<br />
<br />
http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/479e3578ed05bfac85257b4200427735!OpenDocument<br />
<br />
Developer: Bandan Das<br />
Testing: netperf guest to guest<br />
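A rough sketch of the shared-workqueue shape (illustrative names, not the actual patch):<br />
<br />
 #include <linux/errno.h><br />
 #include <linux/kernel.h><br />
 #include <linux/workqueue.h><br />
 <br />
 /* One workqueue shared by all vhost instances instead of one<br />
  * kthread per VM. */<br />
 static struct workqueue_struct *vhost_wq;<br />
 <br />
 struct vhost_work_item {<br />
     struct work_struct work;<br />
     void *vq;                    /* the virtqueue to service */<br />
 };<br />
 <br />
 static void vhost_work_fn(struct work_struct *work)<br />
 {<br />
     struct vhost_work_item *item =<br />
         container_of(work, struct vhost_work_item, work);<br />
     /* ... handle tx/rx for item->vq ... */<br />
 }<br />
 <br />
 static int vhost_shared_init(void)<br />
 {<br />
     /* WQ_UNBOUND lets the scheduler spread work across CPUs. */<br />
     vhost_wq = alloc_workqueue("vhost", WQ_UNBOUND, 0);<br />
     return vhost_wq ? 0 : -ENOMEM;<br />
 }<br />
 <br />
 static void vhost_queue(struct vhost_work_item *item)<br />
 {<br />
     INIT_WORK(&item->work, vhost_work_fn);<br />
     queue_work(vhost_wq, &item->work);<br />
 }<br />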
<br />
* support more queues<br />
We limit TUN to 8 queues, but we really want<br />
1 queue per guest CPU. The limit comes from net<br />
core, need to teach it to allocate array of<br />
pointers and not array of queues.<br />
Jason has a draft patch to use a flex array (see the sketch below).<br />
Another thing is to move the flow caches out of tun_struct.<br />
Developer: Jason Wang<br />
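A rough sketch of "array of pointers, not array of queues"; MAX_TAP_QUEUES is illustrative, and kcalloc() stands in for the flex array used in the draft:<br />
<br />
 #include <linux/errno.h><br />
 #include <linux/rcupdate.h><br />
 #include <linux/slab.h><br />
 <br />
 #define MAX_TAP_QUEUES 256       /* illustrative, e.g. one per VCPU */<br />
 <br />
 struct tun_file;                 /* per-queue state, allocated on demand */<br />
 <br />
 struct tun_struct {<br />
     /* An unused slot costs one pointer, not a whole queue, so<br />
      * sizing for many queues becomes cheap. */<br />
     struct tun_file __rcu **tfiles;<br />
     unsigned int numqueues;<br />
 };<br />
 <br />
 static int tun_alloc_queues(struct tun_struct *tun)<br />
 {<br />
     tun->tfiles = kcalloc(MAX_TAP_QUEUES, sizeof(*tun->tfiles),<br />
                           GFP_KERNEL);<br />
     return tun->tfiles ? 0 : -ENOMEM;<br />
 }<br />
 <br />
 static void tun_attach_queue(struct tun_struct *tun, unsigned int index,<br />
                              struct tun_file *tfile)<br />
 {<br />
     rcu_assign_pointer(tun->tfiles[index], tfile);<br />
     tun->numqueues++;<br />
 }<br />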
<br />
* enable multiqueue by default<br />
Multiqueue causes regression in some workloads, thus<br />
it is off by default. Detect and enable/disable<br />
automatically so we can make it on by default.<br />
This is because GSO tends to batch less when mq is enabled.<br />
https://patchwork.kernel.org/patch/2235191/<br />
Developer: Jason Wang<br />
<br />
* rework on flow caches<br />
The current hlist implementation of flow caches has several limitations:<br />
1) in the worst case, lookup degenerates to a linear search<br />
2) it does not scale<br />
https://patchwork.kernel.org/patch/2025121/<br />
Developer: Jason Wang<br />
<br />
* eliminate the extra copy in virtio-net driver<br />
We currently do an extra copy of 128 bytes for every packet.<br />
This could be eliminated for small packets by:<br />
1) using build_skb() and a head frag (see the sketch below)<br />
2) a bigger vnet header length ( >= NET_SKB_PAD + NET_IP_ALIGN )<br />
Or use a dedicated queue for small packet receiving? (reordering)<br />
Developer: Jason Wang<br />
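A rough sketch of the build_skb() variant, assuming the receive buffer was allocated with enough headroom for the pad and the vnet header; layout and names are illustrative:<br />
<br />
 #include <linux/skbuff.h><br />
 <br />
 /* Receive path without the 128-byte copy: wrap the existing buffer<br />
  * in an skb instead of copying into a freshly allocated one. */<br />
 static struct sk_buff *receive_small(void *buf, unsigned int len,<br />
                                      unsigned int headroom,<br />
                                      unsigned int truesize)<br />
 {<br />
     struct sk_buff *skb;<br />
 <br />
     /* buf points at the start of the head frag; the packet itself<br />
      * sits at buf + headroom (headroom covers NET_SKB_PAD,<br />
      * NET_IP_ALIGN and the vnet header in this layout). */<br />
     skb = build_skb(buf, truesize);<br />
     if (!skb)<br />
         return NULL;<br />
 <br />
     skb_reserve(skb, headroom);  /* skip pad + vnet header */<br />
     skb_put(skb, len);           /* expose the packet payload */<br />
     return skb;<br />
 }<br />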
<br />
* orphan packets less aggressively (was: make pktgen work for virtio-net, or partially orphan)<br />
virtio-net orphans all skbs during tx; this used to be optimal.<br />
Recent changes in the guest networking stack and hardware advances<br />
such as APICv changed the optimal behaviour for drivers.<br />
We need to revisit optimizations such as orphaning all packets early<br />
to get optimal behaviour.<br />
<br />
this should also fix pktgen, which is currently broken with virtio-net:<br />
orphaning all skbs makes pktgen wait forever for the refcnt.<br />
Jason's idea: bring back the tx interrupt (partially)<br />
Jason's idea: introduce a flag to tell pktgen not to wait<br />
Discussion here: https://patchwork.kernel.org/patch/1800711/<br />
MST's idea: add a .ndo_tx_polling not only for pktgen<br />
Developers: Jason Wang, MST<br />
<br />
* Announce self by guest driver [Done]<br />
Send gARP by guest driver. Guest part is finished.<br />
The Qemu part was merged by MST.<br />
Developer: Jason Wang<br />
<br />
* Head of line blocking issue with zerocopy<br />
zerocopy has several defects that cause a head-of-line blocking problem:<br />
- it limits the number of pending DMAs<br />
- it completes in order<br />
This means that if some of the DMAs are delayed, all others are delayed as well. This can be reproduced with the following case:<br />
- boot two VMs, VM1(tap1) and VM2(tap2), on host1 (which has eth0)<br />
- set up tbf to limit the tap2 bandwidth to 10Mbit/s<br />
- start two netperf instances, one from VM1 to VM2, another from VM1 to an external host whose traffic goes through eth0 on the host<br />
Then you can see that not only is VM1-to-VM2 traffic throttled, VM1-to-external-host traffic is throttled as well.<br />
For this issue, a solution is to orphan the frags when enqueuing to a non-work-conserving qdisc (see the sketch below).<br />
But we have similar issues in other cases:<br />
- The card has its own priority queues<br />
- The host has two interfaces, one 1G and another 10G, so throttling the 1G one may cause traffic over the 10G one to be throttled.<br />
The final solution is to remove receive buffering at tun, and convert it to use NAPI.<br />
Developer: Jason Wang<br />
Reference: https://lkml.org/lkml/2014/1/17/105<br />
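A rough sketch of orphaning the frags on enqueue; skb_orphan_frags() is the real helper, while the "may hold packets" test is an assumed placeholder for detecting non-work-conserving qdiscs:<br />
<br />
 #include <linux/skbuff.h><br />
 #include <net/sch_generic.h><br />
 <br />
 /* Placeholder: a real patch would key off the qdisc type (e.g. tbf)<br />
  * rather than a blanket "true". */<br />
 static bool qdisc_may_hold_packets(const struct Qdisc *q)<br />
 {<br />
     return true;<br />
 }<br />
 <br />
 static int enqueue_zerocopy_safe(struct Qdisc *q, struct sk_buff *skb)<br />
 {<br />
     if (qdisc_may_hold_packets(q)) {<br />
         /* Copy userspace frags now so the zerocopy completion is<br />
          * signalled immediately; otherwise one slow queue blocks<br />
          * all other traffic from the same vhost (head of line). */<br />
         if (skb_orphan_frags(skb, GFP_ATOMIC))<br />
             return -ENOMEM;<br />
     }<br />
     /* ... normal enqueue path ... */<br />
     return 0;<br />
 }<br />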
<br />
* Write an ethtool selftest for virtio-net<br />
Implement the selftest ethtool method for virtio-net for regression testing, e.g. the CVEs found for tun/macvtap, qemu and vhost (see the sketch below).<br />
Developer: CSDN summer code project student <br />
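A rough sketch of how a selftest could be wired through the standard ethtool_ops hooks; the test body is left as a stub since the actual tests are the point of the project:<br />
<br />
 #include <linux/ethtool.h><br />
 #include <linux/netdevice.h><br />
 <br />
 static const char virtnet_test_strings[][ETH_GSTRING_LEN] = {<br />
     "loopback",<br />
 };<br />
 <br />
 static int virtnet_get_sset_count(struct net_device *dev, int sset)<br />
 {<br />
     return sset == ETH_SS_TEST ?<br />
            ARRAY_SIZE(virtnet_test_strings) : -EOPNOTSUPP;<br />
 }<br />
 <br />
 static void virtnet_get_strings(struct net_device *dev, u32 sset, u8 *data)<br />
 {<br />
     if (sset == ETH_SS_TEST)<br />
         memcpy(data, virtnet_test_strings, sizeof(virtnet_test_strings));<br />
 }<br />
 <br />
 static void virtnet_self_test(struct net_device *dev,<br />
                               struct ethtool_test *test, u64 *data)<br />
 {<br />
     /* Stub: send a packet to ourselves and check it comes back,<br />
      * exercising the tun/macvtap/vhost paths the CVEs went through. */<br />
     data[0] = 0;                 /* 0 = pass, 1 = fail */<br />
     if (data[0])<br />
         test->flags |= ETH_TEST_FL_FAILED;<br />
 }<br />
 <br />
 static const struct ethtool_ops virtnet_ethtool_ops = {<br />
     .get_sset_count = virtnet_get_sset_count,<br />
     .get_strings    = virtnet_get_strings,<br />
     .self_test      = virtnet_self_test,<br />
 };<br />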
<br />
* guest programmable mac/vlan filtering with macvtap<br />
Developer: Amos Kong<br />
qemu: https://bugzilla.redhat.com/show_bug.cgi?id=848203 (applied by upstream)<br />
libvirt: https://bugzilla.redhat.com/show_bug.cgi?id=848199<br />
http://git.qemu.org/?p=qemu.git;a=commit;h=b1be42803b31a913bab65bab563a8760ad2e7f7f<br />
Status: [[GuestProgrammableMacVlanFiltering]]<br />
<br />
* bridge without promisc mode in NIC<br />
given hardware support, teach bridge<br />
to program mac/vlan filtering in NIC<br />
Helps performance and security on noisy LANs<br />
http://comments.gmane.org/gmane.linux.network/266546<br />
Developer: Vlad Yasevich<br />
<br />
* reduce networking latency:<br />
allow handling short packets from softirq or VCPU context<br />
Plan:<br />
We are going through the scheduler 3 times<br />
(could be up to 5 if softirqd is involved)<br />
Consider RX: host irq -> io thread -> VCPU thread -><br />
guest irq -> guest thread.<br />
This adds a lot of latency.<br />
We can cut it by some 1.5x if we do a bit of work<br />
either in the VCPU or softirq context.<br />
Testing: netperf TCP RR - should be improved drastically<br />
netperf TCP STREAM guest to host - no regression<br />
Developer: MST<br />
<br />
* Flexible buffers: put virtio header inline with packet data<br />
https://patchwork.kernel.org/patch/1540471/<br />
Developer: MST<br />
<br />
* device failover to allow migration with assigned devices<br />
https://fedoraproject.org/wiki/Features/Virt_Device_Failover<br />
Developer: Gal Hammer, Cole Robinson, Laine Stump, MST<br />
<br />
* Reuse vringh code for better maintainability<br />
Developer: Rusty Russell<br />
<br />
* Improve stats, make them more helpful for performance analysis<br />
Developer: Sriram Narasimhan<br />
<br />
* Enable GRO for packets coming to bridge from a tap interface<br />
Developer: Dmitry Fleytman<br />
<br />
* Better support for windows LRO<br />
Extend the virtio header with statistics for GRO packets:<br />
number of packets coalesced and number of duplicate ACKs coalesced (see the sketch below)<br />
Developer: Dmitry Fleytman<br />
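A purely hypothetical sketch of such an extension; the struct and field names are invented for illustration and are not part of the virtio spec:<br />
<br />
 #include <linux/types.h><br />
 #include <linux/virtio_net.h><br />
 <br />
 /* Hypothetical: stats the host GRO path could hand to a guest LRO<br />
  * consumer, appended after the existing header. */<br />
 struct virtio_net_hdr_gro_stats {<br />
     struct virtio_net_hdr hdr;   /* the existing header */<br />
     __le16 coalesced_pkts;       /* packets merged into this one */<br />
     __le16 dup_acks;             /* duplicate ACKs coalesced away */<br />
 };<br />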
<br />
* IPoIB infiniband bridging<br />
Plan: implement macvtap for ipoib and virtio-ipoib<br />
Developer: MST<br />
<br />
* netdev polling for virtio.<br />
There are two kinds of netdev polling:<br />
- netpoll - used for debugging<br />
- proposed low latency net polling<br />
See http://lkml.indiana.edu/hypermail/linux/kernel/1303.0/00553.html<br />
Jason has a draft patch to enable low latency polling for virtio-net.<br />
May also consider it for tun/macvtap.<br />
Developer: Jason Wang<br />
<br />
* use kvm eventfd support for injecting level-triggered interrupts<br />
aim: enable vhost by default for level interrupts.<br />
The benefit is security: we want to avoid using userspace<br />
virtio net so that vhost-net is always used.<br />
<br />
Alex emulated (post & re-enable) level-triggered interrupts in KVM to<br />
skip userspace. VFIO already enjoys the performance benefit;<br />
let's do it for virtio-pci. Current virtio-pci devices still use<br />
level interrupts in userspace (see the sketch below).<br />
<br />
kernel:<br />
7a84428af [PATCH] KVM: Add resampling irqfds for level triggered interrupts<br />
qemu:<br />
68919cac [PATCH] hw/vfio: set interrupts using pci irq wrappers<br />
(virtio-pci didn't use the wrappers)<br />
e1d1e586 [PATCH] vfio-pci: Add KVM INTx acceleration<br />
<br />
Developer: Amos Kong<br />
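A minimal userspace sketch of registering a resampling irqfd with the API added by the kernel commit above; vmfd and gsi are assumed to come from the usual KVM setup:<br />
<br />
 #include <linux/kvm.h><br />
 #include <sys/eventfd.h><br />
 #include <sys/ioctl.h><br />
 <br />
 /* KVM asserts the level-triggered line when irq_efd is signalled and<br />
  * signals resample_efd on guest EOI, so the backend can re-check the<br />
  * device and re-assert without bouncing through userspace emulation. */<br />
 static int setup_resample_irqfd(int vmfd, unsigned int gsi,<br />
                                 int *irq_efd, int *resample_efd)<br />
 {<br />
     struct kvm_irqfd irqfd = { 0 };<br />
 <br />
     *irq_efd = eventfd(0, 0);<br />
     *resample_efd = eventfd(0, 0);<br />
     if (*irq_efd < 0 || *resample_efd < 0)<br />
         return -1;<br />
 <br />
     irqfd.fd = *irq_efd;<br />
     irqfd.gsi = gsi;<br />
     irqfd.flags = KVM_IRQFD_FLAG_RESAMPLE;<br />
     irqfd.resamplefd = *resample_efd;<br />
     return ioctl(vmfd, KVM_IRQFD, &irqfd);<br />
 }<br />
vhost-net's irq eventfd could then be plugged in as irq_efd, with the resample eventfd driving re-injection.<br />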
<br />
* sharing config interrupts<br />
Support more devices by sharing a single msi vector<br />
between multiple virtio devices.<br />
(Applies to virtio-blk too).<br />
Developer: Amos Kong<br />
<br />
* network traffic throttling<br />
the block layer implemented "continuous leaky bucket" for throttling;<br />
we can apply the same continuous leaky bucket to networking:<br />
IOPS/BPS * RX/TX/TOTAL<br />
Developer: Amos Kong<br />
<br />
* Allocate mac_table dynamically<br />
<br />
In the future, maybe we can allocate the mac_table dynamically instead<br />
of embedding it in VirtIONet. Then we can just do a pointer swap plus<br />
g_free() and save a memcpy() here.<br />
Developer: Amos Kong<br />
<br />
* reduce conflict with VCPU thread<br />
if the VCPU and networking run on the same CPU,<br />
they conflict, resulting in bad performance.<br />
Fix that: push the vhost thread out to another CPU<br />
more aggressively.<br />
Developer: Amos Kong<br />
<br />
<br />
=== projects that are not started yet - no owner ===<br />
<br />
* add documentation for macvlan and macvtap<br />
recent docs here:<br />
http://backreference.org/2014/03/20/some-notes-on-macvlanmacvtap/<br />
need to integrate them into the iproute and kernel docs.<br />
<br />
* receive side zero copy<br />
The ideal is a NIC with accelerated RFS support,<br />
so we can feed the virtio rx buffers into the correct NIC queue.<br />
Depends on non promisc NIC support in bridge.<br />
Search for "Xin Xiaohui: Provide a zero-copy method on KVM virtio-net"<br />
for a very old prototype<br />
<br />
* RDMA bridging<br />
<br />
* DMA engine (IOAT) use in tun<br />
Old patch here: [PATCH RFC] tun: dma engine support<br />
It does not speed things up. Need to see why and<br />
what can be done.<br />
<br />
* virtio API extension: improve small packet/large buffer performance:<br />
support "reposting" buffers for mergeable buffers,<br />
support pool for indirect buffers<br />
<br />
* more GSO type support:<br />
the kernel does not yet support other GSO types: FCOE, GRE, UDP_TUNNEL<br />
<br />
* ring aliasing:<br />
using vhost-net as a networking backend with virtio-net in QEMU<br />
being what's guest facing.<br />
This gives you the best of both worlds: QEMU acts as a first<br />
line of defense against a malicious guest while still getting the<br />
performance advantages of vhost-net (zero-copy).<br />
In fact a bit of complexity in vhost was put there in the vague hope to<br />
support something like this: virtio rings are not translated through<br />
regular memory tables, instead, vhost gets a pointer to ring address.<br />
This allows qemu to act as a man in the middle,<br />
verifying the descriptors but not touching the packet data.<br />
<br />
* non-virtio device support with vhost<br />
Use vhost interface for guests that don't use virtio-net<br />
<br />
* Extend sndbuf scope to int64<br />
<br />
The current sndbuf limit is INT_MAX in tap_set_sndbuf();<br />
large values (like 8388607T) are converted correctly by qapi from the qemu command line.<br />
If we want to support such large values, we should extend the sndbuf limit from 'int' to 'int64'.<br />
<br />
Upstream discussion: https://lists.gnu.org/archive/html/qemu-devel/2014-04/msg04192.html<br />
<br />
=== vague ideas: path to implementation not clear ===<br />
<br />
* change tcp_tso_should_defer for kvm: batch more<br />
aggressively.<br />
in particular, see below<br />
<br />
* tcp: increase gso buffering for cubic,reno<br />
At the moment we push out an skb whenever the limit becomes<br />
large enough to send a full-sized TSO skb even if the skb,<br />
in fact, is not full-sized.<br />
The reason for this seems to be that some congestion avoidance<br />
protocols rely on the number of packets in flight to calculate<br />
CWND, so if we underuse the available CWND it shrinks<br />
which degrades performance:<br />
http://www.mail-archive.com/netdev@vger.kernel.org/msg08738.html<br />
<br />
However, there seems to be no reason to do this for<br />
protocols such as reno and cubic which don't rely on packets in flight,<br />
and so will simply increase CWND a bit more to compensate for the<br />
underuse.<br />
<br />
* ring redesign:<br />
find a way to test raw ring performance <br />
fix cacheline bounces <br />
reduce interrupts<br />
<br />
<br />
* irq/numa affinity:<br />
networking goes much faster with irq pinning:<br />
both with and without numa.<br />
what can be done to make the non-pinned setup go faster?<br />
<br />
* rx mac filtering in tun<br />
the need for this is still not understood as we have filtering in bridge<br />
we have a small table of addresses, need to make it larger<br />
if we only need filtering for unicast (multicast is handled by IMP filtering)<br />
<br />
* vlan filtering in tun<br />
the need for this is still not understood as we have filtering in bridge<br />
<br />
* vlan filtering in bridge<br />
kernel part is done (Vlad Yasevich)<br />
teach qemu to notify libvirt to enable the filter (still to do) (existed NIC_RX_FILTER_CHANGED event contains vlan-tables)<br />
<br />
* tx coalescing<br />
Delay several packets before kicking the device.<br />
<br />
* interrupt coalescing<br />
Reduce the number of interrupts (see the sketch below).<br />
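A rough sketch of one way to do this in a virtio driver, using the standard delayed-callback helper so the device interrupts only after a batch of buffers has been used:<br />
<br />
 #include <linux/virtio.h><br />
 <br />
 static void tx_done(struct virtqueue *vq)<br />
 {<br />
     unsigned int len;<br />
 <br />
     while (virtqueue_get_buf(vq, &len))<br />
         ;                        /* reclaim completed buffers */<br />
 <br />
     /* Ask for the next interrupt only after most in-flight buffers<br />
      * have been consumed, instead of after every single one. */<br />
     if (!virtqueue_enable_cb_delayed(vq)) {<br />
         /* buffers were used while re-enabling: reclaim again */<br />
         while (virtqueue_get_buf(vq, &len))<br />
             ;<br />
     }<br />
 }<br />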
<br />
* bridging on top of macvlan <br />
add code to forward LRO status from macvlan (not macvtap)<br />
back to the lowerdev, so that setting up forwarding<br />
from macvlan disables LRO on the lowerdev<br />
<br />
* virtio: preserve packets exactly with LRO<br />
LRO is not normally compatible with forwarding.<br />
With virtio we are getting packets from a Linux host,<br />
so we could conceivably preserve packets exactly<br />
even with LRO. I am guessing other hardware could be<br />
doing this as well.<br />
<br />
* vxlan<br />
What could we do here?<br />
<br />
* bridging without promisc mode with OVS<br />
<br />
=== high level issues: not clear what the project is, yet ===<br />
<br />
* security: iptables<br />
At the moment most people disable iptables to get<br />
good performance on 10Gb/s networking.<br />
Any way to improve experience?<br />
<br />
* performance<br />
Going through the scheduler and the full networking stack twice<br />
(host+guest) adds a lot of overhead.<br />
Any way to allow bypassing some layers?<br />
<br />
* manageability<br />
Still hard to figure out VM networking:<br />
VM networking goes through libvirt, host networking through NM.<br />
Any way to integrate?<br />
<br />
=== testing projects ===<br />
Keeping networking stable is the highest priority.<br />
<br />
* Write some unit tests for vhost-net/vhost-scsi<br />
* Run weekly test on upstream HEAD covering test matrix with autotest<br />
* Measure the effect of each of the above-mentioned optimizations<br />
- Use autotest network performance regression testing (that runs netperf)<br />
- Also test any wild idea that works. Some may be useful.<br />
* Migrate some of the performance regression autotest functionality into Netperf<br />
- Get the CPU-utilization of the Host and the other-party, and add them to the report. This is also true for other Host measures, such as vmexits, interrupts, ...<br />
- Run Netperf in demo-mode, and measure only the time when all the sessions are active (could be many seconds after the beginning of the tests)<br />
- Packaging of Netperf in Fedora / RHEL (exists in Fedora). Licensing could be an issue.<br />
- Make the scripts more visible<br />
<br />
=== non-virtio-net devices ===<br />
* e1000: stabilize<br />
<br />
=== test matrix ===<br />
<br />
DOA test matrix (all combinations should work):<br />
vhost: test both on and off, obviously<br />
test: hotplug/unplug, vlan/mac filtering, netperf,<br />
file copy both ways: scp, NFS, NTFS<br />
guests: linux: release and debug kernels, windows<br />
conditions: plain run, run while under migration,<br />
vhost on/off migration<br />
networking setup: simple, qos with cgroups<br />
host configuration: host-guest, external-guest</div>
Jasowang
https://linux-kvm.org/index.php?title=NetworkingTodo&diff=5471
NetworkingTodo
2014-03-21T09:52:11Z
<p>Jasowang: </p>
<hr />
<div>This page should cover all networking related activity in KVM,<br />
currently most info is related to virtio-net.<br />
<br />
TODO: add bugzilla entry links.<br />
<br />
=== projects in progress. contributions are still very welcome! ===<br />
<br />
* large-order allocations<br />
see 28d6427109d13b0f447cba5761f88d3548e83605<br />
Developer: MST<br />
<br />
* vhost-net scalability tuning: threading for many VMs<br />
Plan: switch to workqueue shared by many VMs<br />
http://www.mail-archive.com/kvm@vger.kernel.org/msg69868.html<br />
<br />
http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/479e3578ed05bfac85257b4200427735!OpenDocument<br />
<br />
Developer: Bandan Das<br />
Testing: netperf guest to guest<br />
<br />
* support more queues<br />
We limit TUN to 8 queues, but we really want<br />
1 queue per guest CPU. The limit comes from net<br />
core, need to teach it to allocate array of<br />
pointers and not array of queues.<br />
Jason has a draft patch to use a flex array.<br />
Another thing is to move the flow caches out of tun_struct.<br />
Developer: Jason Wang<br />
<br />
* enable multiqueue by default<br />
Multiqueue causes regression in some workloads, thus<br />
it is off by default. Detect and enable/disable<br />
automatically so we can make it on by default.<br />
This is because GSO tends to batch less when mq is enabled.<br />
https://patchwork.kernel.org/patch/2235191/<br />
Developer: Jason Wang<br />
<br />
* rework on flow caches<br />
The current hlist implementation of flow caches has several limitations:<br />
1) in the worst case, lookup degenerates to a linear search<br />
2) it does not scale<br />
https://patchwork.kernel.org/patch/2025121/<br />
Developer: Jason Wang<br />
<br />
* eliminate the extra copy in virtio-net driver<br />
We currently do an extra copy of 128 bytes for every packet.<br />
This could be eliminated for small packets by:<br />
1) using build_skb() and a head frag<br />
2) a bigger vnet header length ( >= NET_SKB_PAD + NET_IP_ALIGN )<br />
Or use a dedicated queue for small packet receiving? (reordering)<br />
Developer: Jason Wang<br />
<br />
* orphan packets less aggressively (was: make pktgen work for virtio-net, or partially orphan)<br />
virtio-net orphans all skbs during tx; this used to be optimal.<br />
Recent changes in the guest networking stack and hardware advances<br />
such as APICv changed the optimal behaviour for drivers.<br />
We need to revisit optimizations such as orphaning all packets early<br />
to get optimal behaviour.<br />
<br />
this should also fix pktgen, which is currently broken with virtio-net:<br />
orphaning all skbs makes pktgen wait forever for the refcnt.<br />
Jason's idea: bring back the tx interrupt (partially)<br />
Jason's idea: introduce a flag to tell pktgen not to wait<br />
Discussion here: https://patchwork.kernel.org/patch/1800711/<br />
MST's idea: add a .ndo_tx_polling not only for pktgen<br />
Developers: Jason Wang, MST<br />
<br />
* Announce self by guest driver<br />
Send gARP by guest driver. Guest part is finished.<br />
The Qemu part is ongoing.<br />
V8 new RFC posted here (limits the changes to virtio-net only)<br />
https://lists.gnu.org/archive/html/qemu-devel/2014-03/msg02648.html<br />
V7 patches are here:<br />
http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg01127.html<br />
Developer: Jason Wang<br />
<br />
* guest programmable mac/vlan filtering with macvtap<br />
Developer: Amos Kong<br />
qemu: https://bugzilla.redhat.com/show_bug.cgi?id=848203 (applied by upstream)<br />
libvirt: https://bugzilla.redhat.com/show_bug.cgi?id=848199<br />
http://git.qemu.org/?p=qemu.git;a=commit;h=b1be42803b31a913bab65bab563a8760ad2e7f7f<br />
Status: [[GuestProgrammableMacVlanFiltering]]<br />
<br />
* bridge without promisc mode in NIC<br />
given hardware support, teach bridge<br />
to program mac/vlan filtering in NIC<br />
Helps performance and security on noisy LANs<br />
http://comments.gmane.org/gmane.linux.network/266546<br />
Developer: Vlad Yasevich<br />
<br />
* reduce networking latency:<br />
allow handling short packets from softirq or VCPU context<br />
Plan:<br />
We are going through the scheduler 3 times<br />
(could be up to 5 if softirqd is involved)<br />
Consider RX: host irq -> io thread -> VCPU thread -><br />
guest irq -> guest thread.<br />
This adds a lot of latency.<br />
We can cut it by some 1.5x if we do a bit of work<br />
either in the VCPU or softirq context.<br />
Testing: netperf TCP RR - should be improved drastically<br />
netperf TCP STREAM guest to host - no regression<br />
Developer: MST<br />
<br />
* Flexible buffers: put virtio header inline with packet data<br />
https://patchwork.kernel.org/patch/1540471/<br />
Developer: MST<br />
<br />
* device failover to allow migration with assigned devices<br />
https://fedoraproject.org/wiki/Features/Virt_Device_Failover<br />
Developer: Gal Hammer, Cole Robinson, Laine Stump, MST<br />
<br />
* Reuse vringh code for better maintainability<br />
Developer: Rusty Russell<br />
<br />
* Improve stats, make them more helpful for performance analysis<br />
Developer: Sriram Narasimhan<br />
<br />
* Enable GRO for packets coming to bridge from a tap interface<br />
Developer: Dmitry Fleytman<br />
<br />
* Better support for windows LRO<br />
Extend virtio-header with statistics for GRO packets:<br />
number of packets coalesced and number of duplicate ACKs coalesced<br />
Developer: Dmitry Fleytman<br />
<br />
* IPoIB infiniband bridging<br />
Plan: implement macvtap for ipoib and virtio-ipoib<br />
Developer: MST<br />
<br />
* netdev polling for virtio.<br />
There are two kinds of netdev polling:<br />
- netpoll - used for debugging<br />
- proposed low latency net polling<br />
See http://lkml.indiana.edu/hypermail/linux/kernel/1303.0/00553.html<br />
Jason has a draft patch to enable low latency polling for virtio-net.<br />
May also consider it for tun/macvtap.<br />
Developer: Jason Wang<br />
<br />
* sharing config interrupts<br />
Support more devices by sharing a single msi vector<br />
between multiple virtio devices.<br />
(Applies to virtio-blk too).<br />
Developer: Amos Kong<br />
<br />
* use kvm eventfd support for injecting level interrupts,<br />
enable vhost by default for level interrupts<br />
Developer: Amos Kong<br />
<br />
* network traffic throttling<br />
the block layer implemented "continuous leaky bucket" for throttling;<br />
we can apply the same continuous leaky bucket to networking:<br />
IOPS/BPS * RX/TX/TOTAL<br />
Developer: Amos Kong<br />
<br />
* Allocate mac_table dynamically<br />
<br />
In the future, maybe we can allocate the mac_table dynamically instead<br />
of embedding it in VirtIONet. Then we can just do a pointer swap plus<br />
g_free() and save a memcpy() here.<br />
Developer: Amos Kong<br />
<br />
<br />
<br />
=== projects that are not started yet - no owner ===<br />
<br />
<br />
<br />
* receive side zero copy<br />
The ideal is a NIC with accelerated RFS support,<br />
so we can feed the virtio rx buffers into the correct NIC queue.<br />
Depends on non promisc NIC support in bridge.<br />
Search for "Xin Xiaohui: Provide a zero-copy method on KVM virtio-net"<br />
for a very old prototype<br />
<br />
* RDMA bridging<br />
<br />
* DMA engine (IOAT) use in tun<br />
Old patch here: [PATCH RFC] tun: dma engine support<br />
It does not speed things up. Need to see why and<br />
what can be done.<br />
<br />
* virtio API extension: improve small packet/large buffer performance:<br />
support "reposting" buffers for mergeable buffers,<br />
support pool for indirect buffers<br />
<br />
* more GSO type support:<br />
the kernel does not yet support other GSO types: FCOE, GRE, UDP_TUNNEL<br />
<br />
* ring aliasing:<br />
using vhost-net as a networking backend with virtio-net in QEMU<br />
being what's guest facing.<br />
This gives you the best of both worlds: QEMU acts as a first<br />
line of defense against a malicious guest while still getting the<br />
performance advantages of vhost-net (zero-copy).<br />
In fact a bit of complexity in vhost was put there in the vague hope to<br />
support something like this: virtio rings are not translated through<br />
regular memory tables, instead, vhost gets a pointer to ring address.<br />
This allows qemu to act as a man in the middle,<br />
verifying the descriptors but not touching the packet data.<br />
<br />
* non-virtio device support with vhost<br />
Use vhost interface for guests that don't use virtio-net<br />
<br />
=== vague ideas: path to implementation not clear ===<br />
<br />
* change tcp_tso_should_defer for kvm: batch more<br />
aggressively.<br />
in particular, see below<br />
<br />
* tcp: increase gso buffering for cubic,reno<br />
At the moment we push out an skb whenever the limit becomes<br />
large enough to send a full-sized TSO skb even if the skb,<br />
in fact, is not full-sized.<br />
The reason for this seems to be that some congestion avoidance<br />
protocols rely on the number of packets in flight to calculate<br />
CWND, so if we underuse the available CWND it shrinks<br />
which degrades performance:<br />
http://www.mail-archive.com/netdev@vger.kernel.org/msg08738.html<br />
<br />
However, there seems to be no reason to do this for<br />
protocols such as reno and cubic which don't rely on packets in flight,<br />
and so will simply increase CWND a bit more to compensate for the<br />
underuse.<br />
<br />
* ring redesign:<br />
find a way to test raw ring performance <br />
fix cacheline bounces <br />
reduce interrupts<br />
<br />
<br />
<br />
* irq/numa affinity:<br />
networking goes much faster with irq pinning:<br />
both with and without numa.<br />
what can be done to make the non-pinned setup go faster?<br />
<br />
* reduce conflict with VCPU thread<br />
if the VCPU and networking run on the same CPU,<br />
they conflict, resulting in bad performance.<br />
Fix that: push the vhost thread out to another CPU<br />
more aggressively.<br />
<br />
* rx mac filtering in tun<br />
the need for this is still not understood as we have filtering in bridge<br />
we have a small table of addresses, need to make it larger<br />
if we only need filtering for unicast (multicast is handled by IMP filtering)<br />
<br />
* vlan filtering in tun<br />
the need for this is still not understood as we have filtering in bridge<br />
<br />
* vlan filtering in bridge<br />
kernel part is done (Vlad Yasevich)<br />
teach qemu to notify libvirt to enable the filter (still to do) (existed NIC_RX_FILTER_CHANGED event contains vlan-tables)<br />
<br />
* tx coalescing<br />
Delay several packets before kicking the device.<br />
<br />
* interrupt coalescing<br />
Reduce the number of interrupts.<br />
<br />
* bridging on top of macvlan <br />
add code to forward LRO status from macvlan (not macvtap)<br />
back to the lowerdev, so that setting up forwarding<br />
from macvlan disables LRO on the lowerdev<br />
<br />
* virtio: preserve packets exactly with LRO<br />
LRO is not normally compatible with forwarding.<br />
With virtio we are getting packets from a Linux host,<br />
so we could conceivably preserve packets exactly<br />
even with LRO. I am guessing other hardware could be<br />
doing this as well.<br />
<br />
* vxlan<br />
What could we do here?<br />
<br />
* bridging without promisc mode with OVS<br />
<br />
=== high level issues: not clear what the project is, yet ===<br />
<br />
* security: iptables<br />
At the moment most people disable iptables to get<br />
good performance on 10Gb/s networking.<br />
Any way to improve experience?<br />
<br />
* performance<br />
Going through the scheduler and the full networking stack twice<br />
(host+guest) adds a lot of overhead.<br />
Any way to allow bypassing some layers?<br />
<br />
* manageability<br />
Still hard to figure out VM networking:<br />
VM networking goes through libvirt, host networking through NM.<br />
Any way to integrate?<br />
<br />
=== testing projects ===<br />
Keeping networking stable is the highest priority.<br />
<br />
* Write some unit tests for vhost-net/vhost-scsi<br />
* Run weekly test on upstream HEAD covering test matrix with autotest<br />
* Measure the effect of each of the above-mentioned optimizations<br />
- Use autotest network performance regression testing (that runs netperf)<br />
- Also test any wild idea that works. Some may be useful.<br />
* Migrate some of the performance regression autotest functionality into Netperf<br />
- Get the CPU-utilization of the Host and the other-party, and add them to the report. This is also true for other Host measures, such as vmexits, interrupts, ...<br />
- Run Netperf in demo-mode, and measure only the time when all the sessions are active (could be many seconds after the beginning of the tests)<br />
- Packaging of Netperf in Fedora / RHEL (exists in Fedora). Licensing could be an issue.<br />
- Make the scripts more visible<br />
<br />
=== non-virtio-net devices ===<br />
* e1000: stabilize<br />
<br />
=== test matrix ===<br />
<br />
DOA test matrix (all combinations should work):<br />
vhost: test both on and off, obviously<br />
test: hotplug/unplug, vlan/mac filtering, netperf,<br />
file copy both ways: scp, NFS, NTFS<br />
guests: linux: release and debug kernels, windows<br />
conditions: plain run, run while under migration,<br />
vhost on/off migration<br />
networking setup: simple, qos with cgroups<br />
host configuration: host-guest, external-guest</div>
Jasowang
https://linux-kvm.org/index.php?title=NetworkingTodo&diff=5470
NetworkingTodo
2014-03-21T09:50:28Z
<p>Jasowang: </p>
<hr />
<div>This page should cover all networking related activity in KVM,<br />
currently most info is related to virtio-net.<br />
<br />
TODO: add bugzilla entry links.<br />
<br />
=== projects in progress. contributions are still very welcome! ===<br />
<br />
* large-order allocations<br />
see 28d6427109d13b0f447cba5761f88d3548e83605<br />
Developer: MST<br />
<br />
* vhost-net scalability tuning: threading for many VMs<br />
Plan: switch to workqueue shared by many VMs<br />
http://www.mail-archive.com/kvm@vger.kernel.org/msg69868.html<br />
<br />
http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/479e3578ed05bfac85257b4200427735!OpenDocument<br />
<br />
Developer: Bandan Das<br />
Testing: netperf guest to guest<br />
<br />
* support more queues<br />
We limit TUN to 8 queues, but we really want<br />
1 queue per guest CPU. The limit comes from net<br />
core, need to teach it to allocate array of<br />
pointers and not array of queues.<br />
Jason has a draft patch to use a flex array.<br />
Another thing is to move the flow caches out of tun_struct.<br />
Developer: Jason Wang<br />
<br />
* enable multiqueue by default<br />
Multiqueue causes regression in some workloads, thus<br />
it is off by default. Detect and enable/disable<br />
automatically so we can make it on by default.<br />
This is because GSO tends to batch less when mq is enabled.<br />
https://patchwork.kernel.org/patch/2235191/<br />
Developer: Jason Wang<br />
<br />
* rework on flow caches<br />
The current hlist implementation of flow caches has several limitations:<br />
1) in the worst case, lookup degenerates to a linear search<br />
2) it does not scale<br />
https://patchwork.kernel.org/patch/2025121/<br />
Developer: Jason Wang<br />
<br />
* eliminate the extra copy in virtio-net driver<br />
We currently do an extra copy of 128 bytes for every packet.<br />
This could be eliminated for small packets by:<br />
1) using build_skb() and a head frag<br />
2) a bigger vnet header length ( >= NET_SKB_PAD + NET_IP_ALIGN )<br />
Or use a dedicated queue for small packet receiving? (reordering)<br />
Developer: Jason Wang<br />
<br />
* orphan packets less aggressively (was: make pktgen work for virtio-net, or partially orphan)<br />
virtio-net orphans all skbs during tx; this used to be optimal.<br />
Recent changes in the guest networking stack and hardware advances<br />
such as APICv changed the optimal behaviour for drivers.<br />
We need to revisit optimizations such as orphaning all packets early<br />
to get optimal behaviour.<br />
<br />
this should also fix pktgen, which is currently broken with virtio-net:<br />
orphaning all skbs makes pktgen wait forever for the refcnt.<br />
Jason's idea: bring back the tx interrupt (partially)<br />
Jason's idea: introduce a flag to tell pktgen not to wait<br />
Discussion here: https://patchwork.kernel.org/patch/1800711/<br />
MST's idea: add a .ndo_tx_polling not only for pktgen<br />
Developers: Jason Wang, MST<br />
<br />
* Announce self by guest driver<br />
Send gARP by guest driver. Guest part is finished.<br />
The Qemu part is ongoing.<br />
V8 new RFC posted here (limits the changes to virtio-net only)<br />
https://lists.gnu.org/archive/html/qemu-devel/2014-03/msg02648.html<br />
V7 patches are here:<br />
http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg01127.html<br />
Developer: Jason Wang<br />
<br />
* guest programmable mac/vlan filtering with macvtap<br />
Developer: Amos Kong<br />
qemu: https://bugzilla.redhat.com/show_bug.cgi?id=848203 (applied by upstream)<br />
libvirt: https://bugzilla.redhat.com/show_bug.cgi?id=848199<br />
http://git.qemu.org/?p=qemu.git;a=commit;h=b1be42803b31a913bab65bab563a8760ad2e7f7f<br />
Status: [[GuestProgrammableMacVlanFiltering]]<br />
<br />
* bridge without promisc mode in NIC<br />
given hardware support, teach bridge<br />
to program mac/vlan filtering in NIC<br />
Helps performance and security on noisy LANs<br />
http://comments.gmane.org/gmane.linux.network/266546<br />
Developer: Vlad Yasevich<br />
<br />
* reduce networking latency:<br />
allow handling short packets from softirq or VCPU context<br />
Plan:<br />
We are going through the scheduler 3 times<br />
(could be up to 5 if softirqd is involved)<br />
Consider RX: host irq -> io thread -> VCPU thread -><br />
guest irq -> guest thread.<br />
This adds a lot of latency.<br />
We can cut it by some 1.5x if we do a bit of work<br />
either in the VCPU or softirq context.<br />
Testing: netperf TCP RR - should be improved drastically<br />
netperf TCP STREAM guest to host - no regression<br />
Developer: MST<br />
<br />
* Flexible buffers: put virtio header inline with packet data<br />
https://patchwork.kernel.org/patch/1540471/<br />
Developer: MST<br />
<br />
* device failover to allow migration with assigned devices<br />
https://fedoraproject.org/wiki/Features/Virt_Device_Failover<br />
Developer: Gal Hammer, Cole Robinson, Laine Stump, MST<br />
<br />
* Reuse vringh code for better maintainability<br />
Developer: Rusty Russell<br />
<br />
* Improve stats, make them more helpful for performance analysis<br />
Developer: Sriram Narasimhan<br />
<br />
* Enable GRO for packets coming to bridge from a tap interface<br />
Developer: Dmitry Fleytman<br />
<br />
* Better support for windows LRO<br />
Extend virtio-header with statistics for GRO packets:<br />
number of packets coalesced and number of duplicate ACKs coalesced<br />
Developer: Dmitry Fleytman<br />
<br />
* IPoIB infiniband bridging<br />
Plan: implement macvtap for ipoib and virtio-ipoib<br />
Developer: MST<br />
<br />
* netdev polling for virtio.<br />
There are two kinds of netdev polling:<br />
- netpoll - used for debugging<br />
- proposed low latency net polling<br />
See http://lkml.indiana.edu/hypermail/linux/kernel/1303.0/00553.html<br />
Developer: Jason Wang<br />
<br />
* sharing config interrupts<br />
Support more devices by sharing a single msi vector<br />
between multiple virtio devices.<br />
(Applies to virtio-blk too).<br />
Developer: Amos Kong<br />
<br />
* use kvm eventfd support for injecting level interrupts,<br />
enable vhost by default for level interrupts<br />
Developer: Amos Kong<br />
<br />
* network traffic throttling<br />
the block layer implemented "continuous leaky bucket" for throttling;<br />
we can apply the same continuous leaky bucket to networking:<br />
IOPS/BPS * RX/TX/TOTAL<br />
Developer: Amos Kong<br />
<br />
* Allocate mac_table dynamically<br />
<br />
In the future, maybe we can allocate the mac_table dynamically instead<br />
of embedding it in VirtIONet. Then we can just do a pointer swap plus<br />
g_free() and save a memcpy() here.<br />
Developer: Amos Kong<br />
<br />
<br />
<br />
=== projects that are not started yet - no owner ===<br />
<br />
<br />
<br />
* receive side zero copy<br />
The ideal is a NIC with accelerated RFS support,<br />
so we can feed the virtio rx buffers into the correct NIC queue.<br />
Depends on non promisc NIC support in bridge.<br />
Search for "Xin Xiaohui: Provide a zero-copy method on KVM virtio-net"<br />
for a very old prototype<br />
<br />
* RDMA bridging<br />
<br />
* DMA engine (IOAT) use in tun<br />
Old patch here: [PATCH RFC] tun: dma engine support<br />
It does not speed things up. Need to see why and<br />
what can be done.<br />
<br />
* virtio API extension: improve small packet/large buffer performance:<br />
support "reposting" buffers for mergeable buffers,<br />
support pool for indirect buffers<br />
<br />
* more GSO type support:<br />
the kernel does not yet support more GSO types: FCOE, GRE, UDP_TUNNEL<br />
<br />
* ring aliasing:<br />
using vhost-net as a networking backend with virtio-net in QEMU<br />
being what's guest facing.<br />
This gives you the best of both worlds: QEMU acts as a first<br />
line of defense against a malicious guest while still getting the<br />
performance advantages of vhost-net (zero-copy).<br />
In fact a bit of complexity in vhost was put there in the vague hope to<br />
support something like this: virtio rings are not translated through<br />
regular memory tables; instead, vhost gets a pointer to the ring address.<br />
This allows qemu to act as a man in the middle,<br />
verifying the descriptors but not touching the packet data.<br />
<br />
* non-virtio device support with vhost<br />
Use vhost interface for guests that don't use virtio-net<br />
<br />
=== vague ideas: path to implementation not clear ===<br />
<br />
* change tcp_tso_should_defer for kvm: batch more<br />
aggressively.<br />
In particular, see below.<br />
<br />
* tcp: increase gso buffering for cubic,reno<br />
At the moment we push out an skb whenever the limit becomes<br />
large enough to send a full-sized TSO skb even if the skb,<br />
in fact, is not full-sized.<br />
The reason for this seems to be that some congestion avoidance<br />
protocols rely on the number of packets in flight to calculate<br />
CWND, so if we underuse the available CWND it shrinks<br />
which degrades performance:<br />
http://www.mail-archive.com/netdev@vger.kernel.org/msg08738.html<br />
<br />
However, there seems to be no reason to do this for<br />
protocols such as reno and cubic which don't rely on packets in flight,<br />
and so will simply increase CWND a bit more to compensate for the<br />
underuse.<br />
<br />
* ring redesign:<br />
find a way to test raw ring performance <br />
fix cacheline bounces <br />
reduce interrupts<br />
<br />
<br />
<br />
* irq/numa affinity:<br />
networking goes much faster with irq pinning:<br />
both with and without numa.<br />
what can be done to make the non-pinned setup go faster?<br />
<br />
* reduce conflict with VCPU thread<br />
if VCPU and networking run on the same CPU,<br />
they conflict, resulting in bad performance.<br />
Fix that, push vhost thread out to another CPU<br />
more aggressively.<br />
<br />
* rx mac filtering in tun<br />
the need for this is still not understood, as we already have filtering in the bridge;<br />
we have a small table of addresses and need to make it larger<br />
if we only need filtering for unicast (multicast is handled by IMP filtering)<br />
<br />
* vlan filtering in tun<br />
the need for this is still not understood, as we already have filtering in the bridge<br />
<br />
* vlan filtering in bridge<br />
kernel part is done (Vlad Yasevich)<br />
teach qemu to notify libvirt to enable the filter (still to do); the existing NIC_RX_FILTER_CHANGED event contains the vlan tables<br />
<br />
* tx coalescing<br />
Delay several packets before kicking the device.<br />
<br />
* interrupt coalescing<br />
Reduce the number of interrupts<br />
<br />
* bridging on top of macvlan <br />
add code to forward LRO status from macvlan (not macvtap)<br />
back to the lowerdev, so that setting up forwarding<br />
from macvlan disables LRO on the lowerdev<br />
<br />
* virtio: preserve packets exactly with LRO<br />
LRO is not normally compatible with forwarding.<br />
with virtio we are getting packets from a linux host,<br />
so we could conceivably preserve packets exactly<br />
even with LRO. I am guessing other hardware could be<br />
doing this as well.<br />
<br />
* vxlan<br />
What could we do here?<br />
<br />
* bridging without promisc mode with OVS<br />
<br />
=== high level issues: not clear what the project is, yet ===<br />
<br />
* security: iptables<br />
At the moment most people disable iptables to get<br />
good performance on 10Gb/s networking.<br />
Any way to improve experience?<br />
<br />
* performance<br />
Going through the scheduler and the full networking stack twice<br />
(host+guest) adds a lot of overhead.<br />
Any way to allow bypassing some layers?<br />
<br />
* manageability<br />
Still hard to figure out VM networking:<br />
VM networking is through libvirt, host networking through NM.<br />
Any way to integrate?<br />
<br />
=== testing projects ===<br />
Keeping networking stable is highest priority.<br />
<br />
* Write some unit tests for vhost-net/vhost-scsi<br />
* Run weekly test on upstream HEAD covering test matrix with autotest<br />
* Measure the effect of each of the above-mentioned optimizations<br />
- Use autotest network performance regression testing (that runs netperf)<br />
- Also test any wild idea that works. Some may be useful.<br />
* Migrate some of the performance regression autotest functionality into Netperf<br />
- Get the CPU-utilization of the Host and the other-party, and add them to the report. This is also true for other Host measures, such as vmexits, interrupts, ...<br />
- Run Netperf in demo-mode, and measure only the time when all the sessions are active (could be many seconds after the beginning of the tests)<br />
- Packaging of Netperf in Fedora / RHEL (exists in Fedora). Licensing could be an issue.<br />
- Make the scripts more visible<br />
<br />
=== non-virtio-net devices ===<br />
* e1000: stabilize<br />
<br />
=== test matrix ===<br />
<br />
DOA test matrix (all combinations should work):<br />
vhost: test both on and off, obviously<br />
test: hotplug/unplug, vlan/mac filtering, netperf,<br />
file copy both ways: scp, NFS, NTFS<br />
guests: linux: release and debug kernels, windows<br />
conditions: plain run, run while under migration,<br />
vhost on/off migration<br />
networking setup: simple, qos with cgroups<br />
host configuration: host-guest, external-guest</div>
Jasowang
https://linux-kvm.org/index.php?title=Multiqueue&diff=5040
Multiqueue
2014-03-04T02:43:39Z
<p>Jasowang: </p>
<hr />
<div>= Multiqueue virtio-net =<br />
<br />
== Overview ==<br />
<br />
This page provides information about the design of multi-queue virtio-net, an approach that enables packet transmit/receive processing to scale with the number of vcpus available to the guest. It gives an overview of multiqueue virtio-net and discusses the design of the various parts involved. The page also contains some basic performance test results. This work is in progress and the design may change.<br />
<br />
== Contact ==<br />
* Jason Wang <jasowang@redhat.com><br />
* Amos Kong <akong@redhat.com><br />
<br />
== Rationale ==<br />
<br />
Today's high-end servers have more processors, and guests running on them tend to have an increasing number of vcpus. The scalability of the guest's protocol stack is restricted by the single queue virtio-net:<br />
<br />
* Network performance does not scale as the number of vcpus increases: the guest cannot transmit or receive packets in parallel, as virtio-net has only one TX and one RX queue and virtio-net drivers must synchronize around them. Even though there are software techniques such as RFS to spread the load across processors, they only address one direction of the traffic, and they are really expensive in the guest because they depend on IPIs, which bring extra overhead in a virtualized environment.<br />
* Multiqueue nics are increasingly common and are well supported by the linux kernel, but the current virtual nic cannot utilize multiqueue support: the tap and virtio-net backends must serialize the concurrent transmit/receive requests coming from different cpus.<br />
<br />
In order to remove those bottlenecks, we must allow parallel packet processing by introducing multiqueue support in both the back-end and the guest driver. Ideally, packet handling is done by processors in parallel without interleaving, and network performance scales with the number of vcpus.<br />
<br />
== Git & Cmdline ==<br />
<br />
* kernel changes: git://github.com/jasowang/kernel-mq.git<br />
* qemu-kvm changes: git://github.com/jasowang/qemu-kvm-mq.git<br />
* qemu-kvm -netdev tap,id=hn0,queues=M -device virtio-net-pci,netdev=hn0,vectors=2M+1 ...<br />
<br />
== Status & Challenges ==<br />
* Status<br />
** Patches exist for all parts, but performance tuning is needed for small packet transmission<br />
** Several new vhost threading models were proposed<br />
<br />
* Challenges<br />
** small packet transmission<br />
*** Reason:<br />
**** spreading packets across different queues reduces the chance of batching and thus hurts performance.<br />
**** some, but not too much, batching may help performance<br />
*** Solution and challenge:<br />
***** Find an adaptive/dynamic algorithm to switch between single queue mode and multiqueue mode<br />
****** Finding the threshold for the switch is not easy, as traffic is unpredictable in real workloads<br />
****** Avoid packet re-ordering when switching<br />
***** Current status & working on:<br />
****** Add an ioctl to notify tap to switch to single queue mode<br />
****** switch when needed<br />
** vhost threading<br />
*** Three models were proposed:<br />
**** per vq pair: each vhost thread polls one tx/rx vq pair<br />
***** simple<br />
***** regression in small packet transmission<br />
***** lacks numa affinity, as vhost does the copy<br />
***** may not scale well with multiqueue, as we may create more vhost threads than the number of host cpus<br />
**** multi-workers: multiple worker threads per device<br />
***** wake up all threads and let them contend for the work<br />
***** improves parallelism, especially for RR tests<br />
***** broadcast wakeup and contention<br />
***** #vhost threads may be greater than #cpus<br />
***** no numa consideration<br />
***** regression with small packets / small #instances<br />
**** per-cpu vhost thread:<br />
***** pick a random thread in the same socket (except the cpu that initiated the request) to handle the request<br />
***** best performance in most conditions<br />
***** schedules by itself and bypasses the host scheduler; only suitable for network load<br />
***** regression with small packets / small #instances<br />
*** Solution and challenge:<br />
**** More testing<br />
<br />
== Design Goals ==<br />
<br />
=== Parallel send/receive processing ===<br />
<br />
To make sure the whole stack can work in parallel, the parallelism of not only the front-end (guest driver) but also the back-end (vhost and tap/macvtap) must be exploited. This is done by:<br />
<br />
* Allowing multiple sockets to be attached to tap/macvtap<br />
* Using multiple vhost threads to serve as the backend of a multiqueue capable virtio-net adapter<br />
* Using a multi-queue aware virtio-net driver to send and receive packets to/from each queue<br />
<br />
=== In order delivery ===<br />
<br />
Packets for a specific stream are delivered in order to the TCP/IP stack in guest.<br />
<br />
=== Low overhead ===<br />
<br />
The multiqueue implementation should have low overhead; cache locality and send-side scaling can be maintained by<br />
<br />
* making sure the packets from a single connection are mapped to a specific processor.<br />
* sending the completion (TCP ACK) to the same vcpu that sent the data<br />
* other considerations such as NUMA and HT<br />
<br />
=== No assumption about the underlying hardware ===<br />
<br />
The implementation should not target specific hardware/environments. For example, we should not optimize only for host nics with RSS or flow director support.<br />
<br />
=== Compatibility ===<br />
* Guest ABI: based on the virtio specification, the multiqueue implementation of virtio-net should keep compatibility with single queue. The multiqueue capability must be enabled through feature negotiation, which makes sure a single queue driver can work with a multiqueue backend and a multiqueue driver can work with a single queue backend.<br />
* Userspace ABI: as the changes may touch tun/tap, which have non-virtualized users, the semantics of the existing ioctls must be kept in order not to break applications that use them. New functionality must be added through new ioctls.<br />
<br />
=== Management friendly ===<br />
The backend (tap/macvtap) should provide an easy way to change the number of queues/sockets, and qemu with multiqueue support should also be management-software friendly: qemu should be able to accept file descriptors through the cmdline and SCM_RIGHTS.<br />
<br />
== High level Design ==<br />
The main goal of multiqueue is to exploit the parallelism of each module involved in packet transmission and reception:<br />
* macvtap/tap: for single queue virtio-net, one socket of macvtap/tap was abstracted as a queue for both tx and rx. We can reuse and extend this abstraction to allow macvtap/tap to dequeue and enqueue packets from multiple sockets. Each socket can then be treated as a tx and rx queue, and macvtap/tap is in fact a multi-queue device in the host. The host network code can then transmit and receive packets in parallel.<br />
* vhost: the parallelism can be achieved by using multiple vhost threads to handle multiple sockets. Currently, there are two choices in the design.<br />
** 1:1 mapping between vhost threads and sockets. This method needs no vhost changes and just launches the same number of vhost threads as queues. Each vhost thread handles one tx ring and one rx ring, just as for single queue virtio-net.<br />
** M:N mapping between vhost threads and sockets. This method allows a single vhost thread to poll more than one tx/rx ring and socket, and uses separate threads to handle tx and rx requests.<br />
* qemu: qemu is in charge of the following things<br />
** allow multiple tap file descriptors to be used for a single emulated nic<br />
** a userspace multiqueue virtio-net implementation, used to maintain compatibility and to do management and migration<br />
** control vhost based on the userspace multiqueue virtio-net<br />
* guest driver<br />
** Allocate multiple rx/tx queues<br />
** Assign each queue a MSI-X vector in order to parallelize the packet processing in the guest stack<br />
<br />
The big picture looks like:<br />
[[Image:ver1.jpg|left]]<br />
<br />
<br />
=== Choices and considerations ===<br />
* 1:1 or M:N: 1:1 is much simpler than M:N for both coding and queue/vhost management in qemu, and in theory it can provide better parallelism than M:N. Performance tests of both implementations are needed.<br />
* Whether to use a per-cpu queue: modern 10gb cards (and Microsoft RSS) suggest a per-cpu queue abstraction that tries to allocate as many tx/rx queues as there are cpus and does a 1:1 mapping between them. This provides better parallelism and cache locality, and it also simplifies other parts of the design such as in-order delivery and flow director. For virtio-net, at least for guests with a small number of vcpus, per-cpu queues are the better choice.<br />
<br />
== Current status ==<br />
<br />
* macvtap/macvlan have basic multiqueue support.<br />
* bridge does not have queues, but when it uses a multiqueue tap as one of its ports, some optimization may be needed.<br />
* 1:1 Implementation <br />
** qemu parts: http://www.spinics.net/lists/kvm/msg52808.html<br />
** tap and guest driver: http://www.spinics.net/lists/kvm/msg59993.html<br />
* M:N Implementation<br />
** kk's newest series(qemu/vhost/guest drivers): http://www.spinics.net/lists/kvm/msg52094.html<br />
<br />
== Detailed Design ==<br />
<br />
=== Multiqueue Macvtap ===<br />
<br />
==== Basic design ====<br />
* Each socket is abstracted as a queue, and the basic idea is to allow multiple sockets to be attached to a single macvtap device. <br />
* Queue attaching is done by opening the named inode multiple times (see the sketch after this list)<br />
* Each time it is opened, a new socket is attached to the device and a file descriptor is returned, which is used by a virtio-net backend (qemu or vhost-net).<br />
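As referenced above, a small sketch of queue attaching by repeated open(); the node name /dev/tap11 is hypothetical (the real index comes from the macvtap device):<br />
<pre>
#include <fcntl.h>

/* Each open() of the macvtap character device attaches one more
 * socket/queue to the device. */
static int open_two_queues(int fds[2])
{
    fds[0] = open("/dev/tap11", O_RDWR);  /* queue 0 */
    fds[1] = open("/dev/tap11", O_RDWR);  /* queue 1 */
    return (fds[0] < 0 || fds[1] < 0) ? -1 : 0;
}
</pre>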
==== Parallel processing ====<br />
In order to make the tx path lockless, macvtap uses NETIF_F_LLTX to avoid tx lock contention when the host transmits packets. So it is indeed a multiqueue network device from the point of view of the host.<br />
==== Queue selector ====<br />
It has a simple flow director implementation; when it needs to transmit packets to the guest, the queue number is determined by the following steps (see the sketch after this list):<br />
# if the skb has an rx queue mapping (for example, it comes from a mq nic), use it to choose the socket/queue<br />
# if we can calculate the rxhash of the skb, use it to choose the socket/queue<br />
# if the above two steps fail, always pick the first available socket/queue<br />
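A kernel-style sketch of the three steps above; this is illustrative, not the actual macvtap code, and assumes circa-3.x helpers such as skb_get_rxhash():<br />
<pre>
#include <linux/skbuff.h>

static u16 mq_select_queue(struct sk_buff *skb, unsigned int numqueues)
{
    u32 rxhash;

    /* 1. trust an rx queue mapping recorded by a mq nic */
    if (skb_rx_queue_recorded(skb))
        return skb_get_rx_queue(skb) % numqueues;

    /* 2. otherwise spread flows by their hash */
    rxhash = skb_get_rxhash(skb);
    if (rxhash)
        return rxhash % numqueues;

    /* 3. fall back to the first available queue */
    return 0;
}
</pre>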
=== Multiqueue tun/tap ===<br />
==== Basic design ====<br />
* Borrowing the idea from macvtap, we simply allow multiple sockets to be attached to a single tap device. <br />
* As there is no named inode for a tap device, new ioctls IFF_ATTACH_QUEUE/IFF_DETACH_QUEUE are introduced to attach or detach a socket from tun/tap, which can be used by the virtio-net backend to add or delete a queue. <br />
* All socket related structures were moved to the private_data of the file and are initialized during file open. <br />
* In order to keep the semantics of TUNSETIFF and make the changes transparent to legacy users of tun/tap, the allocation and initialization of the network device is still done in TUNSETIFF, and the first queue is automatically attached.<br />
* IFF_ATTACH_QUEUE is used to attach an unattached file/socket to a tap device. IFF_DETACH_QUEUE is used to detach a file from a tap device; this temporarily disables a queue, which is useful for maintaining backward compatibility with guests (running a single queue driver on a multiqueue device).<br />
<br />
Example:<br />
pseudo code to create a two-queue tap device (a runnable C sketch follows):<br />
# fd1 = open("/dev/net/tun")<br />
# ioctl(fd1, TUNSETIFF, "tap")<br />
# fd2 = open("/dev/net/tun")<br />
# ioctl(fd2, IFF_ATTACH_QUEUE, "tap")<br />
then we have a two-queue tap device with fd1 and fd2 as its queue sockets.<br />
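The pseudo code above reflects the RFC naming. For reference, the interface that eventually landed upstream (documented in Documentation/networking/tuntap.txt, cited below) attaches each queue via TUNSETIFF with IFF_MULTI_QUEUE, and uses TUNSETQUEUE with the IFF_ATTACH_QUEUE/IFF_DETACH_QUEUE flags to re-enable/disable a queue. A minimal runnable sketch:<br />
<pre>
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Open one queue of the multiqueue tap device `name`; calling this
 * twice with the same name yields a two-queue device. */
static int tap_open_queue(const char *name)
{
    struct ifreq ifr;
    int fd = open("/dev/net/tun", O_RDWR);

    if (fd < 0)
        return -1;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* Temporarily disable (attach = 0) or re-enable (attach = 1) a queue. */
static int tap_set_queue(int fd, int attach)
{
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = attach ? IFF_ATTACH_QUEUE : IFF_DETACH_QUEUE;
    return ioctl(fd, TUNSETQUEUE, &ifr);
}
</pre>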
<br />
==== Parallel processing ====<br />
Just like macvtap, NETIF_F_LLTX is used in tun/tap to avoid tx lock contention. So tun/tap is also in fact a multiqueue network device of the host.<br />
==== Queue selector (same as macvtap) ====<br />
It has the same simple flow director implementation as macvtap; when it needs to transmit packets to the guest, the queue number is determined by:<br />
# if the skb has an rx queue mapping (for example, it comes from a mq nic), use it to choose the socket/queue<br />
# if we can calculate the rxhash of the skb, use it to choose the socket/queue<br />
# if the above two steps fail, always pick the first available socket/queue<br />
<br />
==== Further Optimizations ====<br />
<br />
# rxhash can only be used for distributing workloads across different vcpus. The target vcpu may not be the one expected to do the recvmsg(). So more optimizations may be needed, such as:<br />
## A simple hash-to-queue table that records the cpu/queue used by each flow. It is updated when the guest sends packets; when tun/tap transmits packets to the guest, this table is used for queue selection (see the sketch after this list).<br />
## Some cooperation between the host and the guest driver to pass information such as which vcpu is issuing recvmsg().<br />
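As referenced above, a minimal sketch of the hash-to-queue table (illustrative only; a real implementation would need locking/RCU and entry aging). The queue index is stored +1 so that 0 can mean "no entry":<br />
<pre>
#include <linux/types.h>

#define FLOW_TABLE_SIZE 1024  /* power of two */

static u16 flow_table[FLOW_TABLE_SIZE];

/* guest TX path: remember which queue this flow used */
static void flow_update(u32 rxhash, u16 queue)
{
    flow_table[rxhash & (FLOW_TABLE_SIZE - 1)] = queue + 1;
}

/* host-to-guest RX path: reuse the recorded queue if there is one */
static u16 flow_lookup(u32 rxhash, u16 fallback)
{
    u16 q = flow_table[rxhash & (FLOW_TABLE_SIZE - 1)];

    return q ? q - 1 : fallback;
}
</pre>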
<br />
=== vhost ===<br />
* 1:1 without changes<br />
* M:N [TBD]<br />
<br />
=== qemu changes ===<br />
The changes in qemu consist of two parts:<br />
* Add generic multiqueue support to the nic layer: as the receive function of the nic backend is only aware of VLANClientState, we must make it aware of the queue index, so:<br />
** Store queue_index in VLANClientState<br />
** Store multiple VLANClientState in NICState<br />
** Let the netdev parameter accept multiple netdev ids, and link those tap based VLANClientState to their peers in NICState<br />
* Userspace multiqueue support in virtio-net<br />
** Allocate multiple virtqueues<br />
** Expose the queue numbers through config space<br />
** Enable the multiqueue support of the backend only when the feature is negotiated<br />
** Handle packet requests based on the queue_index of the virtqueue and VLANClientState<br />
** migration handling<br />
* Vhost enable/disable<br />
** launch multiple vhost threads<br />
** set up eventfds and control the start/stop of the vhost_net backend<br />
Usage looks like:<br />
qemu -netdev tap,id=hn0,fd=100 -netdev tap,id=hn1,fd=101 -device virtio-net-pci,netdev=hn0#hn1,queue=2 .....<br />
<br />
TODO: a more user-friendly cmdline, such as<br />
qemu -netdev tap,id=hn0,fd=100,fd=101 -device virtio-net-pci,netdev=hn0,queues=2<br />
<br />
<br />
=== guest driver ===<br />
The changes in the guest driver are mainly:<br />
* Allocate tx and rx queues based on the queue number in config space<br />
* Assign each queue a MSI-X vector<br />
* Per-queue handling of TX/RX requests<br />
* Simply use skb_tx_hash() to choose the tx queue<br />
<br />
==== Future Optimizations ====<br />
* Per-vcpu queue: allocate as many tx/rx queues as there are vcpus, and bind tx/rx queue pairs to a specific vcpu by:<br />
** Setting the MSI-X irq affinity for tx/rx.<br />
** Using smp_processor_id() to choose the tx queue.<br />
* Comments: in theory, this should improve parallelism. [TBD]<br />
<br />
* ...<br />
<br />
== Enable MQ feature ==<br />
* create a tap device with multiple queues; please refer to<br />
Documentation/networking/tuntap.txt (3.3 Multiqueue tuntap interface)<br />
* enable mq for tap (suppose N queue pairs): -netdev tap,vhost=on,queues=N<br />
* enable mq and specify msi-x vectors on the qemu cmdline (2N+2 vectors: N for tx queues, N for rx queues, 1 for config, and 1 for the possible control vq): -device virtio-net-pci,mq=on,vectors=2N+2...<br />
* enable mq in guest by 'ethtool -L eth0 combined $queue_num'<br />
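A concrete example of the above, assuming N=4 queue pairs (so 2N+2 = 10 vectors; the id hn0 is just a label):<br />
 qemu -netdev tap,id=hn0,vhost=on,queues=4 -device virtio-net-pci,netdev=hn0,mq=on,vectors=10 ...<br />
and, in the guest: ethtool -L eth0 combined 4<br />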
<br />
== Test ==<br />
* Test tool: netperf, iperf<br />
* Test protocols: TCP_STREAM, TCP_MAERTS, TCP_RR<br />
** between localhost and guest<br />
** between an external host and guest over a 10gb direct link<br />
** regression criteria: throughput per %cpu<br />
* Test method:<br />
** multiple sessions of netperf: 1 2 4 8 16<br />
** compare with the single queue implementation<br />
* Other<br />
** numactl to bind the cpu node and memory node<br />
** autotest implements a performance regression test, using a T-test<br />
** use netperf demo-mode to get more stable results<br />
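For instance, a demo-mode run of the RR case might look like the following (the host address and 60s duration are illustrative; -D 1 makes netperf emit interim results every second so only the steady-state window is averaged):<br />
 netperf -H 192.168.100.2 -l 60 -D 1 -t TCP_RR<br />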
== Performance Numbers ==<br />
[[Multiqueue-performance-Sep-13|Performance]]<br />
== TODO ==<br />
== Reference ==</div>
Jasowang
https://linux-kvm.org/index.php?title=Multiqueue&diff=4866
Multiqueue
2013-08-23T07:35:27Z
<p>Jasowang: </p>
<hr />
<div>= Multiqueue virtio-net =<br />
<br />
== Overview ==<br />
<br />
This page provides information about the design of multi-queue virtio-net, an approach enables packet sending/receiving processing to scale with the number of available vcpus of guest. This page provides an overview of multiqueue virtio-net and discusses the design of various parts involved. The page also contains some basic performance test result. This work is in progress and the design may changes.<br />
<br />
== Contact ==<br />
* Jason Wang <jasowang@redhat.com><br />
* Amos Kong <akong@redhat.com><br />
<br />
== Rationale ==<br />
<br />
Today's high-end server have more processors, guests running on them tend have an increasing number of vcpus. The scale of the protocol stack in guest in restricted because of the single queue virtio-net:<br />
<br />
* The network performance does not scale as the number of vcpus increasing: Guest can not transmit or retrieve packets in parallel as virtio-net have only one TX and RX, virtio-net drivers must be synchronized before sending and receiving packets. Even through there's software technology to spread the loads into different processor such as RFS, such kind of method is only for transmission and is really expensive in guest as they depends on IPI which may brings extra overhead in virtualized environment.<br />
* Multiqueue nic were more common used and is well supported by linux kernel, but current virtual nic can not utilize the multi queue support: the tap and virtio-net backend must serialize the co-current transmission/receiving request comes from different cpus.<br />
<br />
In order the remove those bottlenecks, we must allow the paralleled packet processing by introducing multi queue support for both back-end and guest drivers. Ideally, we may let the packet handing be done by processors in parallel without interleaving and scale the network performance as the number of vcpus increasing.<br />
<br />
== Git & Cmdline ==<br />
<br />
* kernel changes: git://github.com/jasowang/kernel-mq.git<br />
* qemu-kvm changes: git://github.com/jasowang/qemu-kvm-mq.git<br />
* qemu-kvm -netdev tap,id=hn0,queues=M -device virtio-net-pci,netdev=hn0,vectors=2M+1 ...<br />
<br />
== Status & Challenges ==<br />
* Status<br />
** Have patches for all part but need performance tuning for small packet transmission<br />
** Several of new vhost threading model were proposed<br />
<br />
* Challenges<br />
** small packet transmission<br />
*** Reason:<br />
**** spread the packets into different queue reduce the possibility of batching thus damage the performance.<br />
**** some but not too much batching may help for the performance<br />
*** Solution and challenge:<br />
***** Find a adaptive/dynamic algorithm to switch between one queue mode and multiqueue mode<br />
****** Find the threshold to do the switch, not easy as the traffic were unexpected in real workloads<br />
****** Avoid the packet re-ordering when switching packet<br />
***** Current Status & working on:<br />
****** Add ioctl to notify tap to switch to one queue mode<br />
****** switch when needed<br />
** vhost threading<br />
*** Three models were proposed:<br />
**** per vq pair: each vhost thread polls one tx/rx vq pair<br />
***** simple<br />
***** regression in small packet transmission<br />
***** lacks NUMA affinity, as vhost does the copy<br />
***** may not scale well with multiqueue, as we may create more vhost threads than the number of host cpus<br />
**** multi-workers: multiple worker threads per device<br />
***** wake up all threads and let them contend for the work<br />
***** improves parallelism, especially for RR tests<br />
***** broadcast wakeup causes contention<br />
***** #vhost threads may be greater than #cpus<br />
***** no NUMA consideration<br />
***** regression for small packets / small #instances<br />
**** per-cpu vhost thread:<br />
***** pick a random thread in the same socket (except the cpu that initiated the request) to handle the request<br />
***** best performance in most conditions<br />
***** schedules by itself and bypasses the host scheduler; only suitable for network load<br />
***** regression for small packets / small #instances<br />
*** Solution and challenge:<br />
**** More testing<br />
<br />
== Design Goals ==<br />
<br />
=== Parallel send/receive processing ===<br />
<br />
To make sure the whole stack can work in parallel, the parallelism of not only the front-end (guest driver) but also the back-end (vhost and tap/macvtap) must be exploited. This is done by:<br />
<br />
* Allowing multiple sockets to be attached to tap/macvtap<br />
* Using a multi-threaded vhost to serve as the backend of a multiqueue capable virtio-net adapter<br />
* Using a multiqueue-aware virtio-net driver to send and receive packets to/from each queue<br />
<br />
=== In order delivery ===<br />
<br />
Packets for a specific stream are delivered in order to the TCP/IP stack in guest.<br />
<br />
=== Low overhead ===<br />
<br />
The multiqueue implementation should have low overhead; cache locality and send-side scaling can be maintained by<br />
<br />
* making sure packets from a single connection are mapped to a specific processor.<br />
* delivering the send completion (TCP ACK) to the same vcpu that sent the data<br />
* other considerations such as NUMA and HT<br />
<br />
=== No assumption about the underlying hardware ===<br />
<br />
The implementation should not target a specific hardware/environment. For example, we should not optimize only for host NICs with RSS or flow director support.<br />
<br />
=== Compatibility ===<br />
* Guest ABI: Based on the virtio specification, the multiqueue implementation of virtio-net should keep compatibility with single queue operation. The multiqueue capability must be enabled through feature negotiation, which makes sure a single queue driver can work on a multiqueue backend and a multiqueue driver can work on a single queue backend.<br />
* Userspace ABI: As the changes may touch tun/tap, which has non-virtualized users, the semantics of the existing ioctls must be kept in order not to break applications that use them. New functionality must be added through new ioctls.<br />
<br />
=== Management friendly ===<br />
The backend (tap/macvtap) should provide an easy way to change the number of queues/sockets, and qemu with multiqueue support should also be management software friendly: qemu should be able to accept file descriptors through the cmdline and through SCM_RIGHTS.<br />
<br />
== High level Design ==<br />
The main goal of multiqueue is to exploit the parallelism of each module involved in packet transmission and reception:<br />
* macvtap/tap: For single queue virtio-net, one macvtap/tap socket is abstracted as a queue for both tx and rx. We can reuse and extend this abstraction to allow macvtap/tap to dequeue and enqueue packets from multiple sockets. Each socket can then be treated as a tx/rx queue, and macvtap/tap is in fact a multiqueue device in the host. The host network code can then transmit and receive packets in parallel.<br />
* vhost: The parallelism can be achieved by using multiple vhost threads to handle multiple sockets. Currently, there are two design choices.<br />
** 1:1 mapping between vhost threads and sockets. This method needs no vhost changes; it just launches the same number of vhost threads as queues. Each vhost thread handles one tx ring and one rx ring, just as for single queue virtio-net.<br />
** M:N mapping between vhost threads and sockets. This method allows a single vhost thread to poll more than one tx/rx ring and socket, and uses separate threads to handle tx and rx requests.<br />
* qemu: qemu is in charge of the following things:<br />
** allow multiple tap file descriptors to be used for a single emulated nic<br />
** a userspace multiqueue virtio-net implementation, used to maintain compatibility and to do management and migration<br />
** controlling vhost based on the userspace multiqueue virtio-net<br />
* guest driver<br />
** Allocate multiple rx/tx queues<br />
** Assign each queue an MSI-X vector in order to parallelize packet processing in the guest stack<br />
<br />
The big picture looks like:<br />
[[Image:ver1.jpg|left]]<br />
<br />
<br />
=== Choices and considerations ===<br />
* 1:1 or M:N: 1:1 is much simpler than M:N for both coding and queue/vhost management in qemu, and in theory it could provide better parallelism than M:N. Performance tests of both implementations are needed.<br />
* Whether to use per-cpu queues: modern 10gb cards (and Microsoft RSS) suggest the per-cpu queue abstraction, which tries to allocate as many tx/rx queues as there are cpus and maps them 1:1. This provides better parallelism and cache locality, and also simplifies other parts of the design such as in-order delivery and the flow director. For virtio-net, at least for guests with a small number of vcpus, per-cpu queues are the better choice.<br />
<br />
== Current status ==<br />
<br />
* macvtap/macvlan have basic multiqueue support.<br />
* bridge does not have queues, but when it uses a multiqueue tap as one of its ports, some optimization may be needed.<br />
* 1:1 Implementation <br />
** qemu parts: http://www.spinics.net/lists/kvm/msg52808.html<br />
** tap and guest driver: http://www.spinics.net/lists/kvm/msg59993.html<br />
* M:N Implementation<br />
** kk's newest series (qemu/vhost/guest drivers): http://www.spinics.net/lists/kvm/msg52094.html<br />
<br />
== Detail Design ==<br />
<br />
=== Multiqueue Macvtap ===<br />
<br />
==== Basic design: ====<br />
* Each socket is abstracted as a queue, and the basic idea is to allow multiple sockets to be attached to a single macvtap device. <br />
* Queue attaching is done by opening the named inode multiple times<br />
* Each time it is opened, a new socket is attached to the device and a file descriptor is returned, to be used by the virtio-net backend (qemu or vhost-net).<br />
==== Parallel processing ====<br />
In order to make the tx path lockless, macvtap uses NETIF_F_LLTX to avoid tx lock contention when the host transmits packets. So it is indeed a multiqueue network device from the host's point of view.<br />
==== Queue selector ====<br />
It has a simple flow director implementation; when it needs to transmit packets to the guest, the queue number is determined by the following steps (a minimal sketch follows this list):<br />
# if the skb has an rx queue mapping (for example, it comes from a mq NIC), use it to choose the socket/queue<br />
# if the rxhash of the skb can be calculated, use it to choose the socket/queue<br />
# if the above two steps fail, fall back to the first available socket/queue<br />
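A minimal C sketch of these three steps (illustrative only: struct pkt, its fields and select_queue() are hypothetical names, not the actual kernel code):<br />
 #include <stdbool.h><br />
 #include <stdint.h><br />
 <br />
 struct pkt {<br />
     bool     has_rx_queue;  /* queue recorded by a mq NIC, if any */<br />
     uint16_t rx_queue;<br />
     uint32_t rxhash;        /* 0 when no flow hash could be computed */<br />
 };<br />
 <br />
 /* Pick the socket/queue for a packet heading to the guest. */<br />
 static uint16_t select_queue(const struct pkt *p, uint16_t numqueues)<br />
 {<br />
     if (p->has_rx_queue)             /* 1. reuse the NIC rx queue */<br />
         return p->rx_queue % numqueues;<br />
     if (p->rxhash)                   /* 2. spread flows by rxhash */<br />
         return p->rxhash % numqueues;<br />
     return 0;                        /* 3. fall back to the first queue */<br />
 }<br />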
=== Multiqueue tun/tap ===<br />
==== Basic design ====<br />
* Borrowing the idea from macvtap, we just allow multiple sockets to be attached to a single tap device. <br />
* As there is no named inode for a tap device, new ioctls (IFF_ATTACH_QUEUE/IFF_DETACH_QUEUE) are introduced to attach or detach a socket from tun/tap, which can be used by the virtio-net backend to add or delete a queue. <br />
* All socket related structures are moved to the private_data of the file and initialized during file open. <br />
* In order to keep the semantics of TUNSETIFF and make the changes transparent to legacy users of tun/tap, the allocation and initialization of the network device is still done in TUNSETIFF, and the first queue is automatically attached.<br />
* IFF_ATTACH_QUEUE is used to attach an unattached file/socket to a tap device.<br />
* IFF_DETACH_QUEUE is used to detach a file from a tap device. Temporarily disabling a queue this way is useful for maintaining backward compatibility with the guest (running a single queue driver on a multiqueue device).<br />
<br />
Example:<br />
pseudo code to create a two-queue tap device:<br />
# fd1 = open("/dev/net/tun")<br />
# ioctl(fd1, TUNSETIFF, "tap")<br />
# fd2 = open("/dev/net/tun")<br />
# ioctl(fd2, IFF_ATTACH_QUEUE, "tap")<br />
Then we have a two-queue tap device with fd1 and fd2 as its queue sockets.<br />
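For reference, a minimal runnable sketch against the multiqueue tap API that was eventually merged (see Documentation/networking/tuntap.txt), where each queue is attached by issuing TUNSETIFF with IFF_MULTI_QUEUE on its own descriptor; error handling is omitted:<br />
 #include <fcntl.h><br />
 #include <string.h><br />
 #include <sys/ioctl.h><br />
 #include <linux/if.h><br />
 #include <linux/if_tun.h><br />
 #include <unistd.h><br />
 <br />
 int main(void)<br />
 {<br />
     struct ifreq ifr;<br />
     int fd1, fd2;<br />
 <br />
     memset(&ifr, 0, sizeof(ifr));<br />
     ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;<br />
     strncpy(ifr.ifr_name, "tap0", IFNAMSIZ);<br />
 <br />
     fd1 = open("/dev/net/tun", O_RDWR);   /* first queue */<br />
     ioctl(fd1, TUNSETIFF, &ifr);<br />
     fd2 = open("/dev/net/tun", O_RDWR);   /* second queue */<br />
     ioctl(fd2, TUNSETIFF, &ifr);<br />
 <br />
     /* fd1/fd2 can now be handed to qemu/vhost as two queues;<br />
      * TUNSETQUEUE with ifr_flags = IFF_DETACH_QUEUE temporarily<br />
      * disables a queue, and IFF_ATTACH_QUEUE re-enables it. */<br />
     pause();<br />
     return 0;<br />
 }<br />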
<br />
==== Parallel processing ====<br />
Just like macvtap, NETIF_F_LLTX is used in tun/tap to avoid tx lock contention, so tun/tap is also in fact a multiqueue network device in the host.<br />
==== Queue selector (same as macvtap)====<br />
The same three-step selection as for macvtap above applies: first the skb's rx queue mapping, then the rxhash, and finally the first available socket/queue as a fallback.<br />
<br />
==== Further Optimization? ====<br />
<br />
# rxhash can only be used for distributing workloads across different vcpus; the target vcpu may not be the one that is expected to do the recvmsg(). So more optimizations may be needed, such as (see the sketch after this list):<br />
## A simple hash-to-queue table that records the cpu/queue used by each flow. It is updated when the guest sends packets; when tun/tap transmits packets to the guest, the table is used for queue selection.<br />
## Some co-operation between the host and the guest driver to pass information such as which vcpu issued a recvmsg().<br />
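A hypothetical sketch of such a hash-to-queue table (the names and table size are assumptions for illustration, not the actual implementation):<br />
 #include <stdint.h><br />
 <br />
 #define FLOW_TABLE_SIZE 1024   /* assumed: a power of two */<br />
 <br />
 static uint16_t flow_to_queue[FLOW_TABLE_SIZE];<br />
 <br />
 /* tx path: remember which queue the guest sent this flow from. */<br />
 static void flow_update(uint32_t rxhash, uint16_t txq)<br />
 {<br />
     flow_to_queue[rxhash & (FLOW_TABLE_SIZE - 1)] = txq;<br />
 }<br />
 <br />
 /* rx path: deliver to the queue the flow last used, so the vcpu<br />
  * that does the recvmsg() is also the one that gets the packets. */<br />
 static uint16_t flow_lookup(uint32_t rxhash, uint16_t numqueues)<br />
 {<br />
     return flow_to_queue[rxhash & (FLOW_TABLE_SIZE - 1)] % numqueues;<br />
 }<br />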
<br />
=== vhost ===<br />
* 1:1 without changes<br />
* M:N [TBD]<br />
<br />
=== qemu changes ===<br />
The changes in qemu consist of three parts:<br />
* Add generic multiqueue support to the nic layer: as the receiving function of the nic backend is only aware of VLANClientState, we must make it aware of the queue index, so:<br />
** Store queue_index in VLANClientState<br />
** Store multiple VLANClientStates in NICState<br />
** Let the netdev parameter accept multiple netdev ids, and link those tap based VLANClientStates to their peers in NICState<br />
* Userspace multiqueue support in virtio-net<br />
** Allocate multiple virtqueues<br />
** Expose the queue number through the config space<br />
** Enable multiqueue support in the backend only when the feature has been negotiated<br />
** Handle packet requests based on the queue_index of the virtqueue and VLANClientState<br />
** Migration handling<br />
* Vhost enable/disable<br />
** Launch multiple vhost threads<br />
** Set up eventfds and control the start/stop of the vhost_net backend<br />
Usage looks like:<br />
qemu -netdev tap,id=hn0,fd=100 -netdev tap,id=hn1,fd=101 -device virtio-net-pci,netdev=hn0#hn1,queue=2 .....<br />
<br />
TODO: more user-friendly cmdline such as<br />
qemu -netdev tap,id=hn0,fd=100,fd=101 -device virtio-net-pci,netdev=hn0,queues=2<br />
<br />
<br />
=== guest driver ===<br />
The changes in the guest driver are mainly:<br />
* Allocate tx and rx queues according to the queue number in the config space<br />
* Assign each queue an MSI-X vector<br />
* Per-queue handling of TX/RX requests<br />
* Simply use skb_tx_hash() to choose the tx queue<br />
<br />
==== Future Optimizations ====<br />
* Per-vcpu queues: allocate as many tx/rx queues as there are vcpus, and bind each tx/rx queue pair to a specific vcpu by:<br />
** Setting the MSI-X irq affinity for tx/rx.<br />
** Using smp_processor_id() to choose the tx queue.<br />
* Comments: In theory, this should improve parallelism [TBD]; a sketch of the idea follows below.<br />
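A sketch of what the per-vcpu tx queue choice could look like in the guest driver, assuming one tx/rx queue pair per vcpu (virtnet_pick_txq() is a hypothetical helper, not the actual driver code):<br />
 #include <linux/netdevice.h><br />
 #include <linux/smp.h><br />
 <br />
 /* With per-queue MSI-X irq affinity set, picking the queue that<br />
  * belongs to the sending vcpu keeps the send completions (and<br />
  * TCP ACKs) on the same vcpu that sent the data. */<br />
 static u16 virtnet_pick_txq(struct net_device *dev)<br />
 {<br />
     return smp_processor_id() % dev->real_num_tx_queues;<br />
 }<br />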
<br />
* ...<br />
<br />
== Enable MQ feature ==<br />
* Create a tap device with multiple queues; see<br />
Documentation/networking/tuntap.txt (3.3 Multiqueue tuntap interface)<br />
* Enable mq for tap (assuming N queue pairs): -netdev tap,vhost=on,queues=N<br />
* Enable mq and specify msix vectors on the qemu cmdline (2N+1 vectors: N for tx queues, N for rx queues, 1 for config): -device virtio-net-pci,mq=on,vectors=2N+1...<br />
* Enable mq in the guest with 'ethtool -L eth0 combined $queue_num'<br />
<br />
== Test ==<br />
* Test tool: netperf, iperf<br />
* Test protocol: TCP_STREAM TCP_MAERTS TCP_RR<br />
** between localhost and guest<br />
** between external host and guest with a 10gb direct link<br />
** regression criteria: throughput / %cpu<br />
* Test method:<br />
** multiple sessions of netperf: 1 2 4 8 16<br />
** compare with the single queue implementation<br />
* Other<br />
** numactl to bind cpu node and memory node<br />
** autotest implements a performance regression test, using a T-test<br />
** use netperf demo mode to get more stable results<br />
== Performance Numbers ==<br />
[[Multiqueue-performance-Sep-13|Performance]]<br />
== TODO ==<br />
== Reference ==</div>
Jasowang
https://linux-kvm.org/index.php?title=NetworkingTodo&diff=4861
NetworkingTodo
2013-08-20T07:44:27Z
<p>Jasowang: </p>
<hr />
<div>This page should cover all networking related activity in KVM,<br />
currently most info is related to virtio-net.<br />
<br />
TODO: add bugzilla entry links.<br />
<br />
=== projects in progress. contributions are still very welcome! ===<br />
<br />
* vhost-net scalability tuning: threading for many VMs<br />
Plan: switch to workqueue shared by many VMs<br />
http://www.mail-archive.com/kvm@vger.kernel.org/msg69868.html<br />
<br />
http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/479e3578ed05bfac85257b4200427735!OpenDocument<br />
<br />
Developer: Bandan Das<br />
Testing: netperf guest to guest<br />
<br />
* multiqueue support in macvtap<br />
multiqueue is only supported for tun.<br />
Add support for macvtap.<br />
Developer: Jason Wang<br />
<br />
* support more queues<br />
We limit TUN to 8 queues, but we really want<br />
1 queue per guest CPU. The limit comes from net<br />
core, need to teach it to allocate array of<br />
pointers and not array of queues.<br />
Jason has a draft patch to use flex array.<br />
Another thing is to move the flow caches out of tun_struct.<br />
Developer: Jason Wang<br />
<br />
* enable multiqueue by default<br />
Multiqueue causes regression in some workloads, thus<br />
it is off by default. Detect and enable/disable<br />
automatically so we can make it on by default.<br />
This is because GSO tends to batch less when mq is enabled.<br />
https://patchwork.kernel.org/patch/2235191/<br />
Developer: Jason Wang<br />
<br />
* rework on flow caches<br />
Current hlist implementation of flow caches has several limitations:<br />
1) in the worst case, linear search is bad<br />
2) it does not scale<br />
https://patchwork.kernel.org/patch/2025121/<br />
Developer: Jason Wang<br />
<br />
* eliminate the extra copy in virtio-net driver<br />
We need to do an extra copy of 128 bytes for every packet.<br />
This could be eliminated for small packets by:<br />
1) using build_skb() and a head frag<br />
2) a bigger vnet header length ( >= NET_SKB_PAD + NET_IP_ALIGN )<br />
Or using a dedicated queue for small packet receiving? (reordering) A sketch of option 1) follows below.<br />
Developer: Jason Wang<br />
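A hedged sketch of option 1) above, assuming the receive buffer already reserves room for struct skb_shared_info (wrap_small_pkt() is a hypothetical helper, not the actual patch):<br />
 #include <linux/skbuff.h><br />
 <br />
 /* Wrap the receive buffer directly as the skb head instead of<br />
  * copying its first 128 bytes into a freshly allocated skb. */<br />
 static struct sk_buff *wrap_small_pkt(void *buf, unsigned int len)<br />
 {<br />
     struct sk_buff *skb = build_skb(buf, 0);   /* 0: kmalloc'ed buf */<br />
 <br />
     if (skb)<br />
         skb_put(skb, len);<br />
     return skb;<br />
 }<br />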
<br />
* make pktgen work for virtio-net (or partially orphan)<br />
virtio-net orphans the skb during tx,<br />
which makes pktgen wait forever on the refcnt.<br />
Jason's idea: introduce a flag to tell pktgen not to wait<br />
Discussion here: https://patchwork.kernel.org/patch/1800711/<br />
MST's idea: add a .ndo_tx_polling not only for pktgen<br />
Developer: Jason Wang<br />
<br />
* Add HW_VLAN_TX support for tap<br />
Eliminate the extra data moving for tagged packets<br />
Developer: Jason Wang<br />
<br />
* Announce self by guest driver<br />
Send gARP by guest driver. Guest part is finished.<br />
Qemu is ongoing.<br />
V7 patches are here:<br />
http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg01127.html<br />
Developer: Jason Wang<br />
<br />
* guest programmable mac/vlan filtering with macvtap<br />
Developer: Amos Kong<br />
qemu: https://bugzilla.redhat.com/show_bug.cgi?id=848203<br />
libvirt: https://bugzilla.redhat.com/show_bug.cgi?id=848199<br />
https://git.kernel.org/cgit/virt/kvm/mst/qemu.git/patch/?id=1c0fa6b709d02fe4f98d4ce7b55a6cc3c925791c<br />
Status: [[GuestProgrammableMacVlanFiltering]]<br />
<br />
* bridge without promisc mode in NIC<br />
given hardware support, teach bridge<br />
to program mac/vlan filtering in NIC<br />
Helps performance and security on noisy LANs<br />
http://comments.gmane.org/gmane.linux.network/266546<br />
Developer: Vlad Yasevich<br />
<br />
* reduce networking latency:<br />
allow handling short packets from softirq or VCPU context<br />
Plan:<br />
We are going through the scheduler 3 times<br />
(could be up to 5 if softirqd is involved)<br />
Consider RX: host irq -> io thread -> VCPU thread -><br />
guest irq -> guest thread.<br />
This adds a lot of latency.<br />
We can cut it by some 1.5x if we do a bit of work<br />
either in the VCPU or softirq context.<br />
Testing: netperf TCP RR - should be improved drastically<br />
netperf TCP STREAM guest to host - no regression<br />
Developer: MST<br />
<br />
* Flexible buffers: put virtio header inline with packet data<br />
https://patchwork.kernel.org/patch/1540471/<br />
Developer: MST<br />
<br />
* device failover to allow migration with assigned devices<br />
https://fedoraproject.org/wiki/Features/Virt_Device_Failover<br />
Developer: Gal Hammer, Cole Robinson, Laine Stump, MST<br />
<br />
* Reuse vringh code for better maintainability<br />
Developer: Rusty Russell<br />
<br />
* Improve stats, make them more helpful for perf analysis<br />
Developer: Sriram Narasimhan<br />
<br />
* Bug: e1000 & rtl8139: changing the macaddr in the guest is not updated in qemu (info network)<br />
Developer: Amos Kong<br />
https://bugzilla.redhat.com/show_bug.cgi?id=922589<br />
<br />
* Enable GRO for packets coming to bridge from a tap interface<br />
Developer: Dmitry Fleytman<br />
<br />
* Better support for windows LRO<br />
Extend virtio-header with statistics for GRO packets:<br />
number of packets coalesced and number of duplicate ACKs coalesced<br />
Developer: Dmitry Fleytman<br />
<br />
* IPoIB infiniband bridging<br />
Plan: implement macvtap for ipoib and virtio-ipoib<br />
Developer: MST<br />
<br />
* netdev polling for virtio.<br />
There are two kinds of netdev polling:<br />
- netpoll - used for debugging<br />
- proposed low latency net polling<br />
See http://lkml.indiana.edu/hypermail/linux/kernel/1303.0/00553.html<br />
Developer: Jason Wang<br />
<br />
=== projects that are not started yet - no owner ===<br />
* sharing config interrupts<br />
Support more devices by sharing a single msi vector<br />
between multiple virtio devices.<br />
(Applies to virtio-blk too).<br />
<br />
* receive side zero copy<br />
The ideal is a NIC with accelerated RFS support,<br />
so we can feed the virtio rx buffers into the correct NIC queue.<br />
Depends on non promisc NIC support in bridge.<br />
<br />
* RDMA bridging<br />
<br />
* DMA engine (IOAT) use in tun<br />
Old patch here: [PATCH RFC] tun: dma engine support<br />
It does not speed things up. Need to see why and<br />
what can be done.<br />
<br />
* use kvm eventfd support for injecting level interrupts,<br />
enable vhost by default for level interrupts<br />
<br />
* virtio API extension: improve small packet/large buffer performance:<br />
support "reposting" buffers for mergeable buffers,<br />
support pool for indirect buffers<br />
<br />
* more GSO type support:<br />
The kernel does not support some GSO types yet: FCOE, GRE, UDP_TUNNEL<br />
<br />
* ring aliasing:<br />
using vhost-net as a networking backend with virtio-net in QEMU<br />
being what's guest facing.<br />
This gives you the best of both worlds: QEMU acts as a first<br />
line of defense against a malicious guest while still getting the<br />
performance advantages of vhost-net (zero-copy).<br />
In fact a bit of complexity in vhost was put there in the vague hope to<br />
support something like this: virtio rings are not translated through<br />
regular memory tables, instead, vhost gets a pointer to ring address.<br />
This allows qemu to act as a man in the middle,<br />
verifying the descriptors but not touching the packet data.<br />
<br />
* non-virtio device support with vhost<br />
Use vhost interface for guests that don't use virtio-net<br />
<br />
=== vague ideas: path to implementation not clear ===<br />
<br />
* ring redesign:<br />
find a way to test raw ring performance <br />
fix cacheline bounces <br />
reduce interrupts<br />
<br />
<br />
<br />
* irq/numa affinity:<br />
networking goes much faster with irq pinning:<br />
both with and without numa.<br />
what can be done to make the non-pinned setup go faster?<br />
<br />
* reduce conflict with VCPU thread<br />
if VCPU and networking run on same CPU,<br />
they conflict resulting in bad performance.<br />
Fix that, push vhost thread out to another CPU<br />
more aggressively.<br />
<br />
* rx mac filtering in tun<br />
the need for this is still not understood as we have filtering in bridge<br />
we have a small table of addresses, need to make it larger<br />
if we only need filtering for unicast (multicast is handled by IMP filtering)<br />
<br />
* vlan filtering in tun<br />
the need for this is still not understood as we have filtering in bridge<br />
<br />
* vlan filtering in bridge<br />
kernel part is done (Vlad Yasevich)<br />
teach qemu to notify libvirt to enable the filter (still to do) (the existing NIC_RX_FILTER_CHANGED event contains vlan tables)<br />
<br />
* tx coalescing<br />
Delay several packets before kicking the device.<br />
<br />
* interrupt coalescing<br />
Reduce the number of interrupts<br />
<br />
* bridging on top of macvlan <br />
add code to forward LRO status from macvlan (not macvtap)<br />
back to the lowerdev, so that setting up forwarding<br />
from macvlan disables LRO on the lowerdev<br />
<br />
* virtio: preserve packets exactly with LRO<br />
LRO is not normally compatible with forwarding.<br />
With virtio we are getting packets from a linux host,<br />
so we could conceivably preserve packets exactly<br />
even with LRO. I am guessing other hardware could be<br />
doing this as well.<br />
<br />
* vxlan<br />
What could we do here?<br />
<br />
* bridging without promisc mode with OVS<br />
<br />
<br />
=== testing projects ===<br />
Keeping networking stable is highest priority.<br />
<br />
* Write some unit tests for vhost-net/vhost-scsi<br />
* Run weekly test on upstream HEAD covering test matrix with autotest<br />
* Measure the effect of each of the above-mentioned optimizations<br />
- Use autotest network performance regression testing (that runs netperf)<br />
- Also test any wild idea that works. Some may be useful.<br />
<br />
=== non-virtio-net devices ===<br />
* e1000: stabilize<br />
<br />
=== test matrix ===<br />
<br />
DOA test matrix (all combinations should work):<br />
vhost: test both on and off, obviously<br />
test: hotplug/unplug, vlan/mac filtering, netperf,<br />
file copy both ways: scp, NFS, NTFS<br />
guests: linux: release and debug kernels, windows<br />
conditions: plain run, run while under migration,<br />
vhost on/off migration<br />
networking setup: simple, qos with cgroups<br />
host configuration: host-guest, external-guest</div>
Jasowang
https://linux-kvm.org/index.php?title=Multiqueue&diff=4546
Multiqueue
2012-05-25T00:58:34Z
<p>Jasowang: </p>
<hr />
<div>= Multiqueue virtio-net =<br />
<br />
== Overview ==<br />
<br />
This page provides information about the design of multi-queue virtio-net, an approach enables packet sending/receiving processing to scale with the number of available vcpus of guest. This page provides an overview of multiqueue virtio-net and discusses the design of various parts involved. The page also contains some basic performance test result. This work is in progress and the design may changes.<br />
<br />
== Contact ==<br />
* Jason Wang <jasowang@redhat.com><br />
* Amos Kong <akong@redhat.com><br />
<br />
== Rationale ==<br />
<br />
Today's high-end server have more processors, guests running on them tend have an increasing number of vcpus. The scale of the protocol stack in guest in restricted because of the single queue virtio-net:<br />
<br />
* The network performance does not scale as the number of vcpus increasing: Guest can not transmit or retrieve packets in parallel as virtio-net have only one TX and RX, virtio-net drivers must be synchronized before sending and receiving packets. Even through there's software technology to spread the loads into different processor such as RFS, such kind of method is only for transmission and is really expensive in guest as they depends on IPI which may brings extra overhead in virtualized environment.<br />
* Multiqueue nic were more common used and is well supported by linux kernel, but current virtual nic can not utilize the multi queue support: the tap and virtio-net backend must serialize the co-current transmission/receiving request comes from different cpus.<br />
<br />
In order the remove those bottlenecks, we must allow the paralleled packet processing by introducing multi queue support for both back-end and guest drivers. Ideally, we may let the packet handing be done by processors in parallel without interleaving and scale the network performance as the number of vcpus increasing.<br />
<br />
== Git & Cmdline ==<br />
<br />
* kernel changes: git://github.com/jasowang/kernel-mq.git<br />
* qemu-kvm changes: git://github.com/jasowang/qemu-kvm-mq.git<br />
* qemu-kvm -netdev tap,id=hn0,queues=M -device virtio-net-pci,netdev=hn0,queues=M,vectors=N ...<br />
<br />
== Status & Challenges ==<br />
* Status<br />
** Have patches for all part but need performance tuning for small packet transmission<br />
** Several of new vhost threading model were proposed<br />
<br />
* Challenges<br />
** small packet transmission<br />
*** Reason:<br />
**** spread the packets into different queue reduce the possibility of batching thus damage the performance.<br />
**** some but not too much batching may help for the performance<br />
*** Solution and challenge:<br />
***** Find a adaptive/dynamic algorithm to switch between one queue mode and multiqueue mode<br />
****** Find the threshold to do the switch, not easy as the traffic were unexpected in real workloads<br />
****** Avoid the packet re-ordering when switching packet<br />
***** Current Status & working on:<br />
****** Add ioctl to notify tap to switch to one queue mode<br />
****** switch when needed<br />
** vhost threading<br />
*** Three model were proposed:<br />
**** per vq pairs, each vhost thread is polling a tx/rx vq pairs<br />
***** simple<br />
***** regression in small packet transmission<br />
***** lack the numa affinity as vhost does the copy<br />
***** may not scale well when using multiqueue as we may create more vhost threads than the numer of host cpu<br />
**** multi-workers: multiple worker thread for a device<br />
***** wakeup all threads and the thread contend for the work<br />
***** improve the parallism especially for RR tes<br />
***** broadcast wakeup and contention<br />
***** #vhost threads may greater than #cpu<br />
***** no numa consideration<br />
***** regression in small packet / small #instances<br />
**** per-cpu vhost thread:<br />
***** pick a random thread in the same socket (except for the cpu that initiated the request) to handle the request<br />
***** best performance in most conditions<br />
***** schedule by it self and bypass the host scheduler, only suitable for network load<br />
***** regression in small packet / small #instances<br />
*** Solution and challenge:<br />
**** More testing<br />
<br />
== Design Goals ==<br />
<br />
=== Parallel send/receive processing ===<br />
<br />
To make sure the whole stack could be worked in parallel, the parallelism of not only the front-end (guest driver) but also the back-end (vhost and tap/macvtap) must be explored. This is done by:<br />
<br />
* Allowing multiple sockets to be attached to tap/macvtap<br />
* Using multiple threaded vhost to serve as the backend of a multiqueue capable virtio-net adapter<br />
* Use a multi-queue awared virtio-net driver to send and receive packets to/from each queue<br />
<br />
=== In order delivery ===<br />
<br />
Packets for a specific stream are delivered in order to the TCP/IP stack in guest.<br />
<br />
=== Low overhead ===<br />
<br />
The multiqueue implementation should be low overhead, cache locality and send-side scaling could be maintained by<br />
<br />
* making sure the packets form a single connection are mapped to a specific processor.<br />
* the send completion (TCP ACK) were sent to the same vcpu who send the data<br />
* other considerations such as NUMA and HT<br />
<br />
=== No assumption about the underlying hardware ===<br />
<br />
The implementation should not tagert for specific hardware/environment. For example we should not only optimize the the host nic with RSS or flow director support.<br />
<br />
=== Compatibility ===<br />
* Guest ABI: Based on the virtio specification, the multiqueue implementation of virtio-net should keep the compatibility with the single queue. The ability of multiqueue must be enabled through feature negotiation which make sure single queue driver can work under multiqueue backend, and multiqueue driver can work in single queue backend.<br />
* Userspace ABI: As the changes may touch tun/tap which may have non-virtualized users, the semantics of ioctl must be kept in order to not break the application that use them. New function must be doen through new ioctls.<br />
<br />
=== Management friendly ===<br />
The backend (tap/macvtap) should provides an easy to changes the number of queues/sockets. and qemu with multiqueue support should also be management software friendly, qemu should have the ability to accept file descriptors through cmdline and SCM_RIGHTS.<br />
<br />
== High level Design ==<br />
The main goal of multiqueue is to exploit the parallelism of each module involved in packet transmission and reception:<br />
* macvtap/tap: For single queue virtio-net, one socket of macvtap/tap is abstracted as a queue for both tx and rx. We can reuse and extend this abstraction to let macvtap/tap dequeue and enqueue packets from multiple sockets. Each socket can then be treated as a tx/rx queue, and macvtap/tap is in fact a multi-queue device in the host. The host network code can then transmit and receive packets in parallel.<br />
* vhost: Parallelism can be achieved by using multiple vhost threads to handle multiple sockets. Currently, there are two design choices:<br />
** 1:1 mapping between vhost threads and sockets. This method needs no vhost changes; it just launches the same number of vhost threads as queues. Each vhost thread handles one tx ring and one rx ring, just as for single queue virtio-net.<br />
** M:N mapping between vhost threads and sockets. This method allows a single vhost thread to poll more than one tx/rx ring and socket, and uses separate threads to handle tx and rx requests.<br />
* qemu: qemu is in charge of the following things:<br />
** allow multiple tap file descriptors to be used for a single emulated nic<br />
** a userspace multiqueue virtio-net implementation, which is used to maintain compatibility and to handle management and migration<br />
** control vhost based on the userspace multiqueue virtio-net state<br />
* guest driver<br />
** Allocate multiple rx/tx queues<br />
** Assign each queue an MSI-X vector in order to parallelize the packet processing in the guest stack<br />
<br />
The big picture looks like:<br />
[[Image:ver1.jpg|left]]<br />
<br />
<br />
=== Choices and considerations ===<br />
* 1:1 or M:N: 1:1 is much simpler than M:N for both coding and queue/vhost management in qemu, and in theory it could provide better parallelism than M:N. Performance tests are needed for both implementations.<br />
* Whether to use per-cpu queues: Modern 10gb cards (and Microsoft RSS) suggest the per-cpu queue abstraction, which tries to allocate as many tx/rx queues as there are cpus and does a 1:1 mapping between them. This provides better parallelism and cache locality, and could also simplify other parts of the design such as in order delivery and flow director. For virtio-net, at least for guests with a small number of vcpus, per-cpu queues are the better choice.<br />
The big picture is shown in the figure above.<br />
<br />
== Current status ==<br />
<br />
* macvtap/macvlan have basic multiqueue support.<br />
* bridge does not have queues, but when it uses a multiqueue tap as one of its ports, some optimization may be needed.<br />
* 1:1 Implementation <br />
** qemu parts: http://www.spinics.net/lists/kvm/msg52808.html<br />
** tap and guest driver: http://www.spinics.net/lists/kvm/msg59993.html<br />
* M:N Implementation<br />
** kk's newest series (qemu/vhost/guest drivers): http://www.spinics.net/lists/kvm/msg52094.html<br />
<br />
== Detail Design ==<br />
<br />
=== Multiqueue Macvtap ===<br />
<br />
==== Basic design: ====<br />
* Each socket is abstracted as a queue, and the basic idea is to allow multiple sockets to be attached to a single macvtap device.<br />
* Queue attaching is done by opening the named inode multiple times.<br />
* Each time it is opened, a new socket is attached to the device and a file descriptor is returned, to be used by a virtio-net backend (qemu or vhost-net).<br />
==== Parallel processing ====<br />
In order to make the tx path lockless, macvtap uses NETIF_F_LLTX to avoid tx lock contention when the host transmits packets. So it is indeed a multiqueue network device from the point of view of the host.<br />
==== Queue selector ====<br />
It has a simple flow director implementation; when it needs to transmit packets to the guest, the queue number is determined by the following steps (sketched in code below):<br />
# if the skb has an rx queue mapping (for example, it comes from a mq nic), use it to choose the socket/queue<br />
# otherwise, if we can calculate the rxhash of the skb, use it to choose the socket/queue<br />
# if the above two steps fail, fall back to the first available socket/queue<br />
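<br />
A minimal sketch of this three-step fallback in C; the skb fields used here are simplified stand-ins for the kernel's real helpers (recorded rx queue, rxhash), not the actual macvtap code:<br />
<br />
 /* Sketch of the three-step queue selection described above. */<br />
 struct skb_info {<br />
     int has_rx_queue;        /* rx queue recorded (e.g. by a mq nic)? */<br />
     unsigned int rx_queue;<br />
     int has_rxhash;          /* rxhash available? */<br />
     unsigned int rxhash;<br />
 };<br />
 <br />
 static unsigned int select_queue(const struct skb_info *skb, unsigned int numqueues)<br />
 {<br />
     if (skb->has_rx_queue)   /* 1. reuse the recorded rx queue */<br />
         return skb->rx_queue % numqueues;<br />
     if (skb->has_rxhash)     /* 2. hash the flow to a queue */<br />
         return skb->rxhash % numqueues;<br />
     return 0;                /* 3. fall back to the first queue */<br />
 }<br />
<br />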
=== Multiqueue tun/tap ===<br />
==== Basic design ====<br />
* Borrowing the idea from macvtap, we just allow multiple sockets to be attached to a single tap device.<br />
* As there is no named inode for a tap device, new ioctls TUNATTACHQUEUE/TUNDETACHQUEUE are introduced to attach or detach a socket from tun/tap, which can be used by the virtio-net backend to add or delete a queue.<br />
* All socket related structures are moved to the private_data of the file and initialized during file open.<br />
* In order to keep the semantics of TUNSETIFF and make the changes transparent to legacy users of tun/tap, the allocation and initialization of the network device is still done in TUNSETIFF, and the first queue is automatically attached.<br />
* TUNATTACH is used to attach an unattached file/socket to a tap device.<br />
* TUNDETACH is used to detach a file from a tap device. It can temporarily disable a queue, which is useful for maintaining backward compatibility of the guest (running a single queue driver on a multiqueue device).<br />
<br />
Example:<br />
Pseudo code to create a two queue tap device:<br />
# fd1 = open("/dev/net/tun")<br />
# ioctl(fd1, TUNSETIFF, "tap")<br />
# fd2 = open("/dev/net/tun")<br />
# ioctl(fd2, TUNATTACHQUEUE, "tap")<br />
Then we have a two queue tap device with fd1 and fd2 as its queue sockets; a more concrete sketch follows.<br />
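<br />
A more concrete version of the pseudo code above, as a sketch only: TUNSETIFF and its struct ifreq argument are the real tun API, while TUNATTACHQUEUE is the RFC ioctl described on this page, so the sketch assumes the RFC headers that define it and assumes it also takes a struct ifreq:<br />
<br />
 #include <fcntl.h><br />
 #include <string.h><br />
 #include <sys/ioctl.h><br />
 #include <linux/if.h><br />
 #include <linux/if_tun.h><br />
 <br />
 /* Create a two queue tap device named "tap0".<br />
  * TUNATTACHQUEUE comes from the RFC patches, not mainline headers. */<br />
 int create_two_queue_tap(int fds[2])<br />
 {<br />
     struct ifreq ifr;<br />
 <br />
     memset(&ifr, 0, sizeof(ifr));<br />
     strncpy(ifr.ifr_name, "tap0", IFNAMSIZ - 1);<br />
     ifr.ifr_flags = IFF_TAP | IFF_NO_PI;<br />
 <br />
     fds[0] = open("/dev/net/tun", O_RDWR);   /* first queue */<br />
     if (fds[0] < 0 || ioctl(fds[0], TUNSETIFF, &ifr) < 0)<br />
         return -1;<br />
 <br />
     fds[1] = open("/dev/net/tun", O_RDWR);   /* second queue */<br />
     if (fds[1] < 0 || ioctl(fds[1], TUNATTACHQUEUE, &ifr) < 0)<br />
         return -1;<br />
 <br />
     return 0;<br />
 }<br />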
<br />
==== Parallel processing ====<br />
Just like macvtap, NETIF_F_LLTX is also used for tun/tap to avoid tx lock contention, so tun/tap is also in fact a multiqueue network device in the host.<br />
==== Queue selector (same as macvtap) ====<br />
It has a simple flow director implementation; when it needs to transmit packets to the guest, the queue number is determined by:<br />
# if the skb has an rx queue mapping (for example, it comes from a mq nic), use it to choose the socket/queue<br />
# otherwise, if we can calculate the rxhash of the skb, use it to choose the socket/queue<br />
# if the above two steps fail, fall back to the first available socket/queue<br />
<br />
==== Further Optimization? ====<br />
<br />
# rxhash can only be used for distributing workloads across different vcpus; the target vcpu may not be the one expected to do the recvmsg(). So more optimizations may be needed, such as the following (a sketch follows this list):<br />
## A simple hash-to-queue table that records the cpu/queue used by each flow. It is updated when the guest sends packets; when tun/tap transmits packets to the guest, this table can be used for queue selection.<br />
## Some co-operation between the host and the guest driver to pass information such as which vcpu is issuing a recvmsg().<br />
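<br />
A toy sketch of such a hash-to-queue table, assuming a fixed-size direct-mapped table indexed by rxhash; real code would also need sizing, entry ageing, and locking:<br />
<br />
 #define FLOW_TABLE_SIZE 1024   /* toy size */<br />
 <br />
 /* rxhash -> last queue the guest used for the flow */<br />
 static unsigned short flow_to_queue[FLOW_TABLE_SIZE];<br />
 <br />
 /* guest->host (tx) path: remember which queue the flow used */<br />
 static void flow_update(unsigned int rxhash, unsigned short queue)<br />
 {<br />
     flow_to_queue[rxhash % FLOW_TABLE_SIZE] = queue;<br />
 }<br />
 <br />
 /* host->guest (rx) path: pick the queue recorded for the flow */<br />
 static unsigned short flow_select(unsigned int rxhash)<br />
 {<br />
     return flow_to_queue[rxhash % FLOW_TABLE_SIZE];<br />
 }<br />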
<br />
=== vhost ===<br />
* 1:1 without changes<br />
* M:N [TBD]<br />
<br />
=== qemu changes ===<br />
The changes in qemu contain the following parts:<br />
* Add generic multiqueue support to the nic layer: As the receiving function of the nic backend is only aware of VLANClientState, we must make it aware of the queue index, so:<br />
** Store queue_index in VLANClientState<br />
** Store multiple VLANClientState in NICState<br />
** Let netdev parameters accept multiple netdev ids, and link those tap based VLANClientState to their peers in NICState<br />
* Userspace multiqueue support in virtio-net<br />
** Allocate multiple virtqueues<br />
** Expose the queue numbers through config space<br />
** Enable the multiqueue support of the backend only when the feature is negotiated<br />
** Handle packet requests based on the queue_index of the virtqueue and VLANClientState<br />
** migration handling<br />
* Vhost enable/disable<br />
** launch multiple vhost threads<br />
** set up eventfds and control the start/stop of the vhost_net backend<br />
Usage looks like:<br />
qemu -netdev tap,id=hn0,fd=100 -netdev tap,id=hn1,fd=101 -device virtio-net-pci,netdev=hn0#hn1,queue=2 .....<br />
<br />
TODO: more user-friendly cmdline such as<br />
qemu -netdev tap,id=hn0,fd=100,fd=101 -device virtio-net-pci,netdev=hn0,queues=2<br />
<br />
<br />
=== guest driver ===<br />
The changes in the guest driver are mainly:<br />
* Allocate the number of tx and rx queues based on the queue number in config space<br />
* Assign each queue an MSI-X vector<br />
* Per-queue handling of TX/RX requests<br />
* Simply use skb_tx_hash() to choose the tx queue (see the sketch below)<br />
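<br />
For illustration, a sketch of what the driver-side queue choice can look like; ndo_select_queue and skb_tx_hash() are real kernel interfaces, but this simplified hook is an assumption and ignores details such as the exact signature in the kernel version in question:<br />
<br />
 #include <linux/netdevice.h><br />
 #include <linux/skbuff.h><br />
 <br />
 /* Sketch: pick a tx queue per flow so a single connection stays on one queue;<br />
  * skb_tx_hash() hashes the flow over dev->real_num_tx_queues. */<br />
 static u16 virtnet_select_queue(struct net_device *dev, struct sk_buff *skb)<br />
 {<br />
     return skb_tx_hash(dev, skb);<br />
 }<br />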
<br />
==== Future Optimizations ====<br />
* Per-vcpu queues: Allocate as many tx/rx queue pairs as there are vcpus, and bind each tx/rx queue pair to a specific vcpu by:<br />
** setting the MSI-X irq affinity for tx/rx<br />
** using smp_processor_id() to choose the tx queue<br />
* Comments: In theory, this should improve parallelism. [TBD]<br />
<br />
* ...<br />
<br />
== Test ==<br />
* Test tool: netperf, iperf<br />
* Test protocol: TCP_STREAM TCP_MAERTS TCP_RR<br />
** between localhost and guest<br />
** between external host and guest with a 10gb direct link<br />
** regression criteria: throughput/%cpu<br />
* Test method:<br />
** multiple sessions of netperf: 1 2 4 8 16<br />
** compare with the single queue implementation<br />
* Other<br />
** numactl is used to bind the cpu node and memory node<br />
** autotest implements a performance regression test using a T-test<br />
== Performance Numbers ==<br />
[[Multiqueue-performance-Sep-13|Performance]]<br />
== TODO ==<br />
== Reference ==</div>
Jasowang
https://linux-kvm.org/index.php?title=Multiqueue&diff=4537
Multiqueue
2012-05-08T14:44:50Z
<p>Jasowang: </p>
<hr />
<div>= Multiqueue virtio-net =<br />
<br />
== Overview ==<br />
<br />
This page provides information about the design of multi-queue virtio-net, an approach that enables packet sending/receiving processing to scale with the number of available vcpus of the guest. This page gives an overview of multiqueue virtio-net and discusses the design of the various parts involved. The page also contains some basic performance test results. This work is in progress and the design may change.<br />
<br />
== Contact ==<br />
Jason Wang <jasowang@redhat.com><br />
<br />
== Rationale ==<br />
<br />
Today's high-end server have more processors, guests running on them tend have an increasing number of vcpus. The scale of the protocol stack in guest in restricted because of the single queue virtio-net:<br />
<br />
* The network performance does not scale as the number of vcpus increases: the guest cannot transmit or receive packets in parallel, because virtio-net has only one TX queue and one RX queue, and virtio-net drivers must synchronize on them before sending and receiving packets. Even though there are software techniques such as RFS to spread the load across processors, they address only one direction of the traffic and are really expensive in a guest, as they depend on IPIs, which bring extra overhead in a virtualized environment.<br />
* Multiqueue nics are increasingly common and are well supported by the linux kernel, but the current virtual nic cannot utilize this multiqueue support: the tap and virtio-net backends must serialize the concurrent transmission/receiving requests that come from different cpus.<br />
<br />
In order to remove those bottlenecks, we must allow parallel packet processing by introducing multiqueue support for both the back-end and the guest driver. Ideally, packet handling would then be done by processors in parallel without interleaving, and network performance would scale with the number of vcpus.<br />
<br />
== Status & Challenges ==<br />
* Status<br />
** Patches exist for all parts, but performance tuning is needed for small packet transmission<br />
** Changes in vhost are still needed<br />
<br />
* Challenges<br />
** small packet transmission<br />
*** Reason:<br />
**** Spreading packets across different queues reduces the possibility of batching and thus hurts performance.<br />
**** Some (but not too much) batching helps performance.<br />
**** Why aren't big packets or rx affected?<br />
***** For big packet transmission, plenty of packets are batched anyway, as the host is the slow side in this condition.<br />
***** For rx, the guest is almost always fast enough, so there is almost always batching, except for very small packets.<br />
*** Solution and challenge:<br />
***** Find an adaptive/dynamic algorithm to switch between single queue mode and multiqueue mode (an illustrative sketch of such a heuristic appears after this list)<br />
****** Finding the threshold for the switch is not easy, as traffic is unpredictable in real workloads<br />
****** Avoid packet re-ordering when switching modes<br />
***** Current status & working on:<br />
****** Add an ioctl to notify tap to switch to single queue mode<br />
****** Switch when needed<br />
** vhost threading<br />
*** Three models were proposed:<br />
**** Per vq pair: each vhost thread polls one tx/rx vq pair<br />
***** simple<br />
***** regression in small packet transmission<br />
***** lacks numa affinity, which matters as vhost does the copy<br />
***** may not scale well when using multiqueue, as we may create more vhost threads than the number of host cpus<br />
**** Multi-workers: multiple worker threads for a device<br />
***** wake up all threads and let them contend for the work<br />
***** improves parallelism, especially for RR tests<br />
***** broadcast wakeup and contention<br />
***** the number of vhost threads may be greater than the number of cpus<br />
***** no numa consideration<br />
***** regression with small packets / a small number of instances<br />
**** Per-cpu vhost thread:<br />
***** pick a random thread in the same socket (except for the cpu that initiated the request) to handle the request<br />
***** best performance in most conditions<br />
***** schedules by itself, bypassing the host scheduler; only suitable for network loads<br />
***** regression with small packets / a small number of instances<br />
*** Solution and challenge:<br />
**** More testing<br />
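<br />
As an illustration of the adaptive switching idea above, here is a minimal sketch in C; the counters, thresholds and helper names are all hypothetical (the actual RFC work notifies tap through a new ioctl and is still being tuned):<br />
 /* Hypothetical heuristic: watch how well tx requests batch and<br />
  * switch the tap backend between single queue and multiqueue mode. */<br />
 #define BATCH_LOW  2  /* avg packets per guest kick: poor batching */<br />
 #define BATCH_HIGH 8  /* avg packets per guest kick: good batching */<br />
 <br />
 struct mq_stats {<br />
     unsigned long packets; /* packets handled since the last check */<br />
     unsigned long kicks;   /* guest notifications in the same span */<br />
 };<br />
 <br />
 static void mq_mode_check(struct mq_stats *s, int *multiqueue)<br />
 {<br />
     unsigned long avg = s->kicks ? s->packets / s->kicks : 0;<br />
 <br />
     if (*multiqueue && avg < BATCH_LOW)<br />
         *multiqueue = 0; /* small packets, little batching: 1 queue */<br />
     else if (!*multiqueue && avg > BATCH_HIGH)<br />
         *multiqueue = 1; /* batching is healthy: spread the load */<br />
     s->packets = s->kicks = 0;<br />
 }<br />
The hard parts, as noted above, are picking the thresholds for real workloads and avoiding packet re-ordering at the moment of the switch.<br />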
<br />
== Design Goals ==<br />
<br />
=== Parallel send/receive processing ===<br />
<br />
To make sure the whole stack can work in parallel, the parallelism of not only the front-end (the guest driver) but also the back-end (vhost and tap/macvtap) must be exploited. This is done by:<br />
<br />
* Allowing multiple sockets to be attached to tap/macvtap<br />
* Using multi-threaded vhost to serve as the backend of a multiqueue-capable virtio-net adapter<br />
* Using a multiqueue-aware virtio-net driver to send and receive packets to/from each queue<br />
<br />
=== In order delivery ===<br />
<br />
Packets for a specific stream are delivered in order to the TCP/IP stack in guest.<br />
<br />
=== Low overhead ===<br />
<br />
The multiqueue implementation should have low overhead; cache locality and send-side scaling can be maintained by:<br />
<br />
* making sure the packets from a single connection are mapped to a specific processor<br />
* delivering send completions (TCP ACKs) to the same vcpu that sent the data<br />
* other considerations such as NUMA and HT<br />
<br />
=== No assumption about the underlying hardware ===<br />
<br />
The implementation should not target specific hardware or environments. For example, we should not optimize only for host nics with RSS or flow director support.<br />
<br />
=== Compatibility ===<br />
* Guest ABI: Based on the virtio specification, the multiqueue implementation of virtio-net should keep compatibility with single queue. Multiqueue must be enabled through feature negotiation, which makes sure a single queue driver can work with a multiqueue backend and a multiqueue driver can work with a single queue backend (a feature-negotiation sketch follows this list).<br />
* Userspace ABI: As the changes may touch tun/tap, which may have non-virtualized users, the semantics of the existing ioctls must be kept in order not to break applications that use them. New functionality must be done through new ioctls.<br />
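<br />
A sketch of what the guest-side feature negotiation might look like; VIRTIO_NET_F_MULTIQUEUE and the num_queues config field are placeholders for whatever the final specification defines:<br />
 /* Hypothetical config space layout carrying the queue count. */<br />
 struct virtio_net_config_mq {<br />
     u8  mac[6];<br />
     u16 status;<br />
     u16 num_queues;  /* tx/rx queue pairs offered by the device */<br />
 };<br />
 <br />
 #define VIRTIO_NET_F_MULTIQUEUE 22  /* placeholder feature bit */<br />
 <br />
 static u16 virtnet_num_queues(struct virtio_device *vdev)<br />
 {<br />
     u16 n = 1;  /* stay in single queue mode unless negotiated */<br />
 <br />
     if (virtio_has_feature(vdev, VIRTIO_NET_F_MULTIQUEUE))<br />
         virtio_cread(vdev, struct virtio_net_config_mq,<br />
                      num_queues, &n);<br />
     return n;<br />
 }<br />
An old single queue driver simply never acknowledges the feature bit, so the backend keeps serving one queue pair, which is exactly the compatibility behaviour required above.<br />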
<br />
=== Management friendly ===<br />
The backend (tap/macvtap) should provide an easy way to change the number of queues/sockets, and qemu with multiqueue support should also be friendly to management software: qemu should be able to accept file descriptors through the cmdline and through SCM_RIGHTS.<br />
<br />
== High level Design ==<br />
The main goal of multiqueue is to exploit the parallelism of each module involved in packet transmission and reception:<br />
* macvtap/tap: For single queue virtio-net, one socket of macvtap/tap is abstracted as a queue for both tx and rx. We can reuse and extend this abstraction to allow macvtap/tap to dequeue and enqueue packets from multiple sockets. Each socket can then be treated as one tx and rx queue, making macvtap/tap in fact a multiqueue device in the host. The host network code can then transmit and receive packets in parallel.<br />
* vhost: The parallelism can be achieved by using multiple vhost threads to handle multiple sockets. Currently there are two design choices:<br />
** 1:1 mapping between vhost threads and sockets. This method needs no vhost changes and just launches the same number of vhost threads as there are queues. Each vhost thread handles one tx ring and one rx ring, just as for single queue virtio-net.<br />
** M:N mapping between vhost threads and sockets. This method allows a single vhost thread to poll more than one tx/rx ring and socket, and uses separate threads to handle tx and rx requests.<br />
* qemu: qemu is in charge of the following things:<br />
** allow multiple tap file descriptors to be used for a single emulated nic<br />
** userspace multiqueue virtio-net implementation, used to maintain compatibility and to do management and migration<br />
** control the vhost backend based on the userspace multiqueue virtio-net state<br />
* guest driver<br />
** Allocate multiple rx/tx queues<br />
** Assign each queue a MSI-X vector in order to parallelize packet processing in the guest stack<br />
<br />
The big picture looks like:<br />
[[Image:ver1.jpg|left]]<br />
<br />
<br />
=== Choices and considerations ===<br />
* 1:1 or M:N? 1:1 is much simpler than M:N for both coding and queue/vhost management in qemu, and in theory it could provide better parallelism than M:N. Performance tests of both implementations are needed.<br />
* Whether to use a per-cpu queue: Modern 10gb cards (and Microsoft RSS) suggest the abstraction of per-cpu queues, which tries to allocate as many tx/rx queues as there are cpus and does a 1:1 mapping between them. This provides better parallelism and cache locality, and could also simplify other parts of the design such as in-order delivery and the flow director. For virtio-net, at least for guests with a small number of vcpus, per-cpu queues are the better choice.<br />
<br />
== Current status ==<br />
<br />
* macvtap/macvlan have basic multiqueue support.<br />
* The bridge does not have queues, but when it uses a multiqueue tap as one of its ports, some optimization may be needed.<br />
* 1:1 Implementation <br />
** qemu parts: http://www.spinics.net/lists/kvm/msg52808.html<br />
** tap and guest driver: http://www.spinics.net/lists/kvm/msg59993.html<br />
* M:N Implementation<br />
** kk's newest series(qemu/vhost/guest drivers): http://www.spinics.net/lists/kvm/msg52094.html<br />
<br />
== Detailed Design ==<br />
<br />
=== Multiqueue Macvtap ===<br />
<br />
==== Basic design ====<br />
* Each socket is abstracted as a queue, and the basic idea is to allow multiple sockets to be attached to a single macvtap device. <br />
* Queue attaching is done by opening the named device inode multiple times (see the sketch after this list).<br />
* Each time the inode is opened, a new socket is attached to the device and a file descriptor is returned, to be used by a virtio-net backend (qemu or vhost-net).<br />
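<br />
A minimal user-space sketch of the attach-by-open model, assuming the macvtap device exposes its inode as /dev/tapN:<br />
 #include <fcntl.h><br />
 <br />
 /* Each open() of the macvtap inode attaches one more socket/queue. */<br />
 static int open_macvtap_queues(const char *dev, int fds[2])<br />
 {<br />
     fds[0] = open(dev, O_RDWR); /* first queue  */<br />
     fds[1] = open(dev, O_RDWR); /* second queue */<br />
     return (fds[0] < 0 || fds[1] < 0) ? -1 : 0;<br />
 }<br />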
==== Parallel processing ====<br />
In order to make the tx path lockless, macvtap uses NETIF_F_LLTX to avoid tx lock contention when the host transmits packets, so it is indeed a multiqueue network device from the host's point of view.<br />
==== Queue selector ====<br />
It has a simple flow director implementation; when it needs to transmit packets to the guest, the queue number is determined as follows (a sketch in C appears after this list):<br />
# if the skb has an rx queue mapping (for example, it comes from a mq nic), use it to choose the socket/queue<br />
# if the rxhash of the skb can be calculated, use it to choose the socket/queue<br />
# if the above two steps fail, fall back to the first available socket/queue<br />
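<br />
A sketch of that selection logic in C; struct mq_dev and its fields are made up for illustration, while the skb helpers are the ordinary kernel ones of this era:<br />
 static struct socket *mq_select_queue(struct mq_dev *dev,<br />
                                       struct sk_buff *skb)<br />
 {<br />
     unsigned int n = dev->numqueues;<br />
     u32 rxhash;<br />
 <br />
     /* 1. Trust the rx queue recorded by a multiqueue host nic. */<br />
     if (skb_rx_queue_recorded(skb))<br />
         return dev->socks[skb_get_rx_queue(skb) % n];<br />
 <br />
     /* 2. Hash the flow so one stream sticks to one socket/queue. */<br />
     rxhash = skb_get_rxhash(skb);<br />
     if (rxhash)<br />
         return dev->socks[rxhash % n];<br />
 <br />
     /* 3. Fall back to the first available socket/queue. */<br />
     return dev->socks[0];<br />
 }<br />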
=== Multiqueue tun/tap ===<br />
==== Basic design ====<br />
* Borrowing the idea from macvtap, we allow multiple sockets to be attached to a single tap device. <br />
* As there is no named inode for a tap device, new ioctls TUNATTACHQUEUE/TUNDETACHQUEUE are introduced to attach or detach a socket from tun/tap; they can be used by the virtio-net backend to add or delete a queue. <br />
* All socket-related structures are moved to the private_data of the file and initialized during file open. <br />
* In order to keep the semantics of TUNSETIFF and make the changes transparent to legacy users of tun/tap, the allocation and initialization of the network device is still done in TUNSETIFF, and the first queue is automatically attached.<br />
* TUNATTACHQUEUE is used to attach an unattached file/socket to a tap device.<br />
* TUNDETACHQUEUE is used to detach a file from a tap device; it can temporarily disable a queue, which is useful for maintaining backward compatibility of guests (running a single queue driver on a multiqueue device).<br />
<br />
Example:<br />
Pseudo code to create a two-queue tap device:<br />
# fd1 = open("/dev/net/tun")<br />
# ioctl(fd1, TUNSETIFF, "tap")<br />
# fd2 = open("/dev/net/tun")<br />
# ioctl(fd2, TUNATTACHQUEUE, "tap")<br />
Then we have a two-queue tap device, with fd1 and fd2 as its queue sockets. A fuller C sketch of this sequence follows.<br />
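<br />
In C, the sequence might look like the sketch below. TUNSETIFF is the existing ioctl; TUNATTACHQUEUE is the proposed one and is not part of the mainline tun/tap ABI, so its request number and argument convention here are assumptions:<br />
 #include <fcntl.h><br />
 #include <string.h><br />
 #include <sys/ioctl.h><br />
 #include <linux/if.h><br />
 #include <linux/if_tun.h><br />
 <br />
 static int create_two_queue_tap(int fds[2])<br />
 {<br />
     struct ifreq ifr;<br />
 <br />
     memset(&ifr, 0, sizeof(ifr));<br />
     strncpy(ifr.ifr_name, "tap0", IFNAMSIZ);<br />
     ifr.ifr_flags = IFF_TAP | IFF_NO_PI;<br />
 <br />
     fds[0] = open("/dev/net/tun", O_RDWR);<br />
     if (fds[0] < 0 || ioctl(fds[0], TUNSETIFF, &ifr) < 0)<br />
         return -1; /* creates the device, attaches the 1st queue */<br />
 <br />
     fds[1] = open("/dev/net/tun", O_RDWR);<br />
     if (fds[1] < 0 || ioctl(fds[1], TUNATTACHQUEUE, &ifr) < 0)<br />
         return -1; /* proposed ioctl: attaches the 2nd queue */<br />
     return 0;<br />
 }<br />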
<br />
==== Parallel processing ====<br />
Just like macvtap, NETIF_F_LLTX is used by tun/tap to avoid tx lock contention, and tun/tap is likewise in fact a multiqueue network device from the host's point of view.<br />
==== Queue selector ====<br />
It has the same simple flow director implementation as macvtap (see the sketch above); when it needs to transmit packets to the guest, the queue number is determined as follows:<br />
# if the skb has an rx queue mapping (for example, it comes from a mq nic), use it to choose the socket/queue<br />
# if the rxhash of the skb can be calculated, use it to choose the socket/queue<br />
# if the above two steps fail, fall back to the first available socket/queue<br />
<br />
==== Further Optimization? ====<br />
<br />
# rxhash can only be used for distributing workloads across different vcpus; the target vcpu may not be the one that is expected to do the recvmsg(). So more optimizations may be needed, such as:<br />
## A simple hash-to-queue table that records the cpu/queue used by each flow. It is updated when the guest sends packets; when tun/tap transmits packets to the guest, this table can be used to do the queue selection (a sketch appears after this list).<br />
## Some co-operation between the host and the guest driver to pass information such as which vcpu is issuing a recvmsg().<br />
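<br />
A sketch of the first idea, a hash-to-queue table; the size and names are illustrative, and the real thing would need locking and ageing of stale entries:<br />
 #define FLOW_TABLE_SIZE 1024<br />
 <br />
 struct flow_entry {<br />
     u32 rxhash;      /* flow hash recorded on guest transmit */<br />
     u16 queue_index; /* queue (and hence vcpu) the flow uses */<br />
 };<br />
 <br />
 static struct flow_entry flow_table[FLOW_TABLE_SIZE];<br />
 <br />
 /* Guest->host path: remember which queue this flow came from. */<br />
 static void flow_update(u32 rxhash, u16 queue_index)<br />
 {<br />
     struct flow_entry *e = &flow_table[rxhash % FLOW_TABLE_SIZE];<br />
 <br />
     e->rxhash = rxhash;<br />
     e->queue_index = queue_index;<br />
 }<br />
 <br />
 /* Host->guest path: reuse that queue, else fall back to hashing. */<br />
 static u16 flow_lookup(u32 rxhash, u16 numqueues)<br />
 {<br />
     struct flow_entry *e = &flow_table[rxhash % FLOW_TABLE_SIZE];<br />
 <br />
     return (e->rxhash == rxhash) ? e->queue_index<br />
                                  : (u16)(rxhash % numqueues);<br />
 }<br />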
<br />
=== vhost ===<br />
* 1:1 without changes<br />
* M:N [TBD]<br />
<br />
=== qemu changes ===<br />
The changes in qemu consist of three parts:<br />
* Add generic multiqueue support to the nic layer: as the receiving function of the nic backend is only aware of VLANClientState, we must make it aware of the queue index (see the data structure sketch after this list), so:<br />
** Store queue_index in VLANClientState<br />
** Store multiple VLANClientStates in NICState<br />
** Let the netdev parameters accept multiple netdev ids, and link those tap-based VLANClientStates to their peers in NICState<br />
* Userspace multiqueue support in virtio-net<br />
** Allocate multiple virtqueues<br />
** Expose the number of queues through config space<br />
** Enable multiqueue support in the backend only when the feature has been negotiated<br />
** Handle packet requests based on the queue_index of the virtqueue and the VLANClientState<br />
** Migration handling<br />
* Vhost enable/disable<br />
** Launch multiple vhost threads<br />
** Set up eventfds and control the start/stop of the vhost_net backend<br />
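<br />
A rough sketch of the nic-layer data structure change described in the first bullet; field and constant names are illustrative, not the actual patch series:<br />
 #define MAX_QUEUE_NUM 8<br />
 <br />
 typedef struct VLANClientState {<br />
     /* ... existing fields ... */<br />
     int queue_index;  /* which queue this client serves */<br />
 } VLANClientState;<br />
 <br />
 typedef struct NICState {<br />
     /* one VLANClientState per queue, each peered with the<br />
      * matching tap netdev given on the command line */<br />
     VLANClientState *ncs[MAX_QUEUE_NUM];<br />
     int num_queues;<br />
 } NICState;<br />
<br />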
Usage looks like:<br />
qemu -netdev tap,id=hn0,fd=100 -netdev tap,id=hn1,fd=101 -device virtio-net-pci,netdev=hn0#hn1,queue=2 .....<br />
<br />
TODO: more user-friendly cmdline such as<br />
qemu -netdev tap,id=hn0,fd=100,fd=101 -device virtio-net-pci,netdev=hn0,queues=2<br />
<br />
<br />
=== guest driver ===<br />
The changes in the guest driver are mainly:<br />
* Allocate the number of tx and rx queues based on the queue number in config space<br />
* Assign each queue a MSI-X vector<br />
* Per-queue handling of TX/RX requests<br />
* Simply use skb_tx_hash() to choose the tx queue (a sketch follows this list)<br />
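<br />
A sketch of that last point, wired up through the driver's ndo_select_queue hook (illustrative; the real patches may differ in detail):<br />
 /* skb_tx_hash() maps a flow hash onto [0, real_num_tx_queues),<br />
  * which keeps a single connection on a single tx queue. */<br />
 static u16 virtnet_select_queue(struct net_device *dev,<br />
                                 struct sk_buff *skb)<br />
 {<br />
     return skb_tx_hash(dev, skb);<br />
 }<br />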
<br />
==== Future Optimizations ====<br />
* Per-vcpu queues: Allocate as many tx/rx queue pairs as there are vcpus, and bind each tx/rx queue pair to a specific vcpu by:<br />
** Setting the MSI-X irq affinity for tx/rx.<br />
** Using smp_processor_id() to choose the tx queue.<br />
* Comments: in theory, this should improve parallelism. [TBD]<br />
<br />
* ...<br />
<br />
== Test ==<br />
* Test tool: netperf, iperf<br />
* Test protocols: TCP_STREAM, TCP_MAERTS, TCP_RR<br />
** between localhost and guest<br />
** between external host and guest with a 10gb direct link<br />
* Test method:<br />
** multiple sessions of netperf: 1, 2, 4, 8, 16<br />
** compare with the single queue implementation<br />
* Other<br />
** numactl to bind the cpu node and memory node (example below)<br />
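<br />
For example, one TCP_RR session against a guest, with qemu bound to one NUMA node (the address and node numbers are placeholders):<br />
 numactl --cpunodebind=0 --membind=0 qemu .....<br />
 netperf -H 192.168.0.2 -t TCP_RR -l 60<br />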
== Performance Numbers ==<br />
[[Multiqueue-performance-Sep-13|Performance]]<br />
== TODO ==<br />
== Reference ==</div>
Jasowang