NetworkingTodo
This page should cover all networking-related activity in KVM; currently most info is related to virtio-net.
=== projects in progress. contributions are still very welcome! ===
- vhost-net scalability tuning: threading for many VMs
Plan: switch to a workqueue shared by many VMs
Developer: Shirley Ma?, MST
Testing: netperf guest to guest
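A rough sketch of that direction (illustrative only; vhost_wq, vhost_vq_work and vhost_vq_fn are made-up names, not the actual patch):
<pre>
/* Sketch: replace per-device vhost threads with one workqueue
 * shared by all VMs; each virtqueue kick queues a work item.
 * All names here are illustrative, not the real vhost code. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct vhost_virtqueue;			/* opaque here */

static struct workqueue_struct *vhost_wq;

struct vhost_vq_work {
	struct work_struct work;
	struct vhost_virtqueue *vq;	/* queue to poll */
};

static void vhost_vq_fn(struct work_struct *work)
{
	struct vhost_vq_work *w =
		container_of(work, struct vhost_vq_work, work);
	/* poll w->vq here: handle_rx()/handle_tx() equivalents */
	(void)w;
}

static int __init vhost_wq_init(void)
{
	/* one unbound workqueue, concurrency managed by the core */
	vhost_wq = alloc_workqueue("vhost", WQ_UNBOUND, 0);
	return vhost_wq ? 0 : -ENOMEM;
}
</pre>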
- multiqueue support in macvtap
Multiqueue is currently only supported for tun;
add support for macvtap.
Developer: Jason Wang
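For reference, this is the existing tun multiqueue API that the macvtap work would mirror: each fd opened with IFF_MULTI_QUEUE attaches one queue of the same device.
<pre>
/* Open one queue of a multi-queue tap device; calling this N
 * times with the same name yields N queue fds. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

int tap_open_queue(const char *name)
{
	struct ifreq ifr;
	int fd = open("/dev/net/tun", O_RDWR);

	if (fd < 0)
		return -1;
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
	if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}
</pre>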
- enable multiqueue by default
Multiqueue causes regressions in some workloads, so
it is off by default. Detect such workloads and enable/disable
multiqueue automatically, so that it can be on by default.
Developer: Jason Wang
- guest programmable mac/vlan filtering with macvtap
Developer: Dragos Tatulea?, Amos Kong
Status: [[GuestProgrammableMacVlanFiltering]]
- bridge without promisc mode in NIC
Given hardware support, teach the bridge
to program mac/vlan filtering in the NIC.
Helps performance and security on noisy LANs.
Developer: Vlad Yasevich
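A minimal sketch of the idea, assuming the existing dev_uc_add()/dev_uc_del() helpers program the hardware filter through the driver's ndo_set_rx_mode(); br_port_sync_addr is a made-up name, not actual bridge code:
<pre>
/* Sketch: instead of putting the port in promiscuous mode, push
 * each address the bridge needs into the NIC's unicast filter. */
#include <linux/netdevice.h>

static int br_port_sync_addr(struct net_device *port_dev,
			     const unsigned char *addr, bool add)
{
	/* dev_uc_add()/dev_uc_del() reach the hardware via the
	 * driver's ndo_set_rx_mode() */
	return add ? dev_uc_add(port_dev, addr)
		   : dev_uc_del(port_dev, addr);
}
</pre>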
- allow handling short packets from softirq or VCPU context
Testing: netperf TCP_RR - should improve drastically;
netperf TCP_STREAM guest to host - no regression.
Developer: MST
- Flexible buffers: put virtio header inline with packet data
Developer: MST
- device failover to allow migration with assigned devices
https://fedoraproject.org/wiki/Features/Virt_Device_Failover
Developer: Gal Hammer, Cole Robinson, Laine Stump, MST
- Reuse vringh code for better maintainability
Developer: Rusty Russell
=== projects that are not started yet - no owner ===
- receive side zero copy
The ideal is a NIC with accelerated RFS support,
so we can feed the virtio rx buffers into the correct NIC queue.
Depends on non-promisc NIC support in the bridge.
- IPoIB infiniband bridging
Plan: implement macvtap for ipoib and virtio-ipoib
- RDMA bridging
- use kvm eventfd support for injecting level interrupts,
enable vhost by default for level interrupts
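A minimal userspace sketch using the existing KVM_IRQFD resample support (vmfd is an already-created VM fd; error handling omitted):
<pre>
/* Sketch: level-triggered irqfd. KVM signals resamplefd on EOI,
 * so the device can re-raise the line if it is still asserted. */
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int level_irqfd_setup(int vmfd, unsigned int gsi)
{
	struct kvm_irqfd irqfd = {
		.fd         = eventfd(0, 0),
		.resamplefd = eventfd(0, 0),
		.gsi        = gsi,
		.flags      = KVM_IRQFD_FLAG_RESAMPLE,
	};

	return ioctl(vmfd, KVM_IRQFD, &irqfd);
}
</pre>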
- DMA engine (IOAT) use in tun
- virtio API extension: improve small packet/large buffer performance:
support "reposting" buffers for mergeable buffers, support pool for indirect buffers
=== vague ideas: path to implementation not clear ===
- ring redesign:
find a way to test raw ring performance
fix cacheline bounces
reduce interrupts
- support more queues
We limit TUN to 8 queues
- irq/numa affinity:
Networking goes much faster with irq pinning,
both with and without numa.
What can be done to make the non-pinned setup go faster?
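For reference, pinning is just a cpumask write per irq through the standard /proc/irq interface; a tiny helper:
<pre>
/* Pin an irq to the CPUs in mask; same effect as
 * echo <mask> > /proc/irq/<irq>/smp_affinity */
#include <stdio.h>

int pin_irq(int irq, unsigned int cpu_mask)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%x\n", cpu_mask);
	return fclose(f);
}
</pre>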
- reduce conflict with VCPU thread
If the VCPU and networking run on the same CPU,
they conflict, resulting in bad performance.
Fix that: push the vhost thread out to another CPU
more aggressively.
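One way management tools can already do this from userspace, assuming they know the vhost worker thread's tid (discovery not shown):
<pre>
/* Move the vhost worker thread to a given CPU so it stops
 * competing with the VCPU thread. */
#define _GNU_SOURCE
#include <sched.h>
#include <sys/types.h>

int move_vhost_thread(pid_t vhost_tid, int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	return sched_setaffinity(vhost_tid, sizeof(set), &set);
}
</pre>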
- rx mac filtering in tun
The need for this is still not understood, as we have filtering in the bridge.
We have a small table of addresses; need to make it larger
if we only need filtering for unicast
(multicast is handled by IGMP filtering).
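The existing small table is programmed with the TUNSETTXFILTER ioctl; enlarging struct tun_filter's address table is what this item amounts to. A sketch of how userspace sets it today:
<pre>
/* Program tun's mac filter via TUNSETTXFILTER; the kernel-side
 * table only holds a handful of exact addresses today. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/if_tun.h>
#include <linux/if_ether.h>

int tun_set_mac_filter(int tunfd,
		       const unsigned char (*macs)[ETH_ALEN],
		       unsigned int count)
{
	/* struct tun_filter ends in a flexible array of addresses */
	char buf[sizeof(struct tun_filter) + count * ETH_ALEN];
	struct tun_filter *flt = (struct tun_filter *)buf;

	memset(flt, 0, sizeof(*flt));
	flt->count = count;
	memcpy(flt->addr, macs, count * ETH_ALEN);
	return ioctl(tunfd, TUNSETTXFILTER, flt);
}
</pre>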
- vlan filtering in tun
The need for this is still not understood, as we have filtering in the bridge.
- vlan filtering in bridge
IGMP snooping in bridge should take vlans into account
=== testing projects ===
Keeping networking stable is the highest priority.
- Run a weekly test on upstream HEAD covering the test matrix with autotest
=== non-virtio-net devices ===
- e1000: stabilize
=== test matrix ===
DOA test matrix (all combinations should work):
vhost: test both on and off, obviously
test: hotplug/unplug, vlan/mac filtering, netperf,
      file copy both ways: scp, NFS, NTFS
guests: linux: release and debug kernels, windows
conditions: plain run, run while under migration,
            vhost on/off migration
networking setup: simple, qos with cgroups
host configuration: host-guest, external-guest