NetworkingTodo
Revision as of 17:42, 23 May 2013
This page should cover all networking related activity in KVM, currently most info is related to virtio-net.
TODO: add bugzilla entry links.
=== projects in progress. contributions are still very welcome! ===
- vhost-net scalability tuning: threading for many VMs
  Plan: switch to workqueue shared by many VMs
        www.mail-archive.com/kvm@vger.kernel.org/msg69868.html
  Developer: Shirley Ma?, MST
  Testing: netperf guest to guest
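  A minimal sketch of the shared-workqueue idea, written as a standalone kernel
  module: one unbound workqueue serves work items from many VMs instead of one
  vhost kthread per VM. All identifiers here (vhost_shared_wq, vhost_vm_work,
  vhost_vm_handle) are invented for illustration and are not the actual
  vhost-net code.

 #include <linux/module.h>
 #include <linux/workqueue.h>

 static struct workqueue_struct *vhost_shared_wq;

 struct vhost_vm_work {
     struct work_struct work;
     int vm_id;                      /* stand-in for per-VM state */
 };

 static void vhost_vm_handle(struct work_struct *work)
 {
     struct vhost_vm_work *w = container_of(work, struct vhost_vm_work, work);

     pr_info("processing virtqueue kicks for VM %d\n", w->vm_id);
 }

 static struct vhost_vm_work vm0;

 static int __init vhost_wq_init(void)
 {
     /* one unbound workqueue shared by all VMs, instead of a thread per VM */
     vhost_shared_wq = alloc_workqueue("vhost-shared", WQ_UNBOUND, 0);
     if (!vhost_shared_wq)
         return -ENOMEM;

     vm0.vm_id = 0;
     INIT_WORK(&vm0.work, vhost_vm_handle);
     queue_work(vhost_shared_wq, &vm0.work);  /* would run on each guest kick */
     return 0;
 }

 static void __exit vhost_wq_exit(void)
 {
     destroy_workqueue(vhost_shared_wq);
 }

 module_init(vhost_wq_init);
 module_exit(vhost_wq_exit);
 MODULE_LICENSE("GPL");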
- multiqueue support in macvtap
  Multiqueue is currently supported only for tun; add the same support to
  macvtap (the tun side is sketched below).
  Developer: Jason Wang
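  For reference, this is roughly how the existing tun multiqueue API is driven
  from userspace, and macvtap would grow an equivalent: each open of
  /dev/net/tun with IFF_MULTI_QUEUE set attaches one more queue to the same
  device. A sketch; the device name mqtap0 is arbitrary and error handling is
  abbreviated.

 #include <fcntl.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 #include <unistd.h>
 #include <sys/ioctl.h>
 #include <linux/if.h>
 #include <linux/if_tun.h>

 #define NQUEUES 4   /* would ideally be one per guest CPU */

 int main(void)
 {
     int fds[NQUEUES];
     struct ifreq ifr;

     memset(&ifr, 0, sizeof(ifr));
     ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
     strncpy(ifr.ifr_name, "mqtap0", IFNAMSIZ - 1);

     /* every additional open + TUNSETIFF attaches one more queue */
     for (int i = 0; i < NQUEUES; i++) {
         fds[i] = open("/dev/net/tun", O_RDWR);
         if (fds[i] < 0 || ioctl(fds[i], TUNSETIFF, &ifr) < 0) {
             perror("tun queue setup");
             exit(1);
         }
     }
     printf("created %d queues on %s\n", NQUEUES, ifr.ifr_name);

     /* each fd now feeds one queue; pass one to each vhost worker */
     for (int i = 0; i < NQUEUES; i++)
         close(fds[i]);
     return 0;
 }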
- enable multiqueue by default
  Multiqueue causes regressions in some workloads, so it is off by default.
  Detect such workloads and enable/disable automatically, so that multiqueue
  can be on by default.
  Developer: Jason Wang
- guest programmable mac/vlan filtering with macvtap
  Developer: Dragos Tatulea?, Amos Kong
  Status: GuestProgrammableMacVlanFiltering
- bridge without promisc mode in NIC
  Given hardware support, teach the bridge to program mac/vlan filtering in
  the NIC. Helps performance and security on noisy LANs.
  http://comments.gmane.org/gmane.linux.network/266546
  Developer: Vlad Yasevich
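  A hedged kernel-side sketch of the direction: program the NIC's hardware
  unicast filter for a known MAC instead of leaving the port in promiscuous
  mode. The device name "eth0" and the MAC are placeholders; the actual
  project is to have the bridge fdb drive dev_uc_add()/dev_uc_del() on its
  ports automatically.

 #include <linux/module.h>
 #include <linux/netdevice.h>
 #include <linux/if_ether.h>
 #include <linux/rtnetlink.h>

 static const unsigned char guest_mac[ETH_ALEN] = {
     0x52, 0x54, 0x00, 0x12, 0x34, 0x56,    /* placeholder guest MAC */
 };
 static struct net_device *nic;

 static int __init ucfilter_init(void)
 {
     int err;

     nic = dev_get_by_name(&init_net, "eth0");   /* placeholder NIC */
     if (!nic)
         return -ENODEV;

     /* dev_uc_add programs the filter via the driver's ndo_set_rx_mode */
     rtnl_lock();
     err = dev_uc_add(nic, guest_mac);
     rtnl_unlock();
     if (err)
         dev_put(nic);
     return err;
 }

 static void __exit ucfilter_exit(void)
 {
     rtnl_lock();
     dev_uc_del(nic, guest_mac);
     rtnl_unlock();
     dev_put(nic);
 }

 module_init(ucfilter_init);
 module_exit(ucfilter_exit);
 MODULE_LICENSE("GPL");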
- reduce networking latency:
  Allow handling short packets from softirq or VCPU context.
  Plan: we currently go through the scheduler 3 times (up to 5 if softirqd is
  involved). Consider RX: host irq -> io thread -> VCPU thread -> guest irq ->
  guest thread. This adds a lot of latency; we can cut it by some 1.5x if we
  do a bit of work in either the VCPU or the softirq context.
  Testing: netperf TCP_RR - should improve drastically;
           netperf TCP_STREAM guest to host - no regression
  Developer: MST
- Flexible buffers: put virtio header inline with packet data
  https://patchwork.kernel.org/patch/1540471/
  Developer: MST
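  The layout change being proposed, sketched in userspace terms: instead of a
  separate descriptor for struct virtio_net_hdr, a single buffer carries the
  header immediately followed by the packet data. build_inline_buf is a
  hypothetical helper, named here only for illustration.

 #include <string.h>
 #include <linux/virtio_net.h>

 /* Lay out [virtio_net_hdr][packet data] in one contiguous buffer,
  * so the whole thing can be posted as a single descriptor. */
 size_t build_inline_buf(char *buf, size_t buflen,
                         const void *pkt, size_t pktlen)
 {
     struct virtio_net_hdr hdr;

     if (buflen < sizeof(hdr) + pktlen)
         return 0;
     memset(&hdr, 0, sizeof(hdr));   /* no checksum/GSO offload in this sketch */
     memcpy(buf, &hdr, sizeof(hdr));
     memcpy(buf + sizeof(hdr), pkt, pktlen);
     return sizeof(hdr) + pktlen;
 }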
- device failover to allow migration with assigned devices
  https://fedoraproject.org/wiki/Features/Virt_Device_Failover
  Developer: Gal Hammer, Cole Robinson, Laine Stump, MST
- Reuse vringh code for better maintainability
Developer: Rusty Russell
- Improve stats, make them more helpful for performance analysis
Developer: Sriram Narasimhan
- Bug: e1000 & rtl8139: changing the MAC address in the guest does not update QEMU (info network)
Developer: Amos Kong
=== projects that are not started yet - no owner ===
- netdev polling for virtio.
See http://lkml.indiana.edu/hypermail/linux/kernel/1303.0/00553.html
- receive side zero copy
The ideal is a NIC with accelerated RFS support, so we can feed the virtio rx buffers into the correct NIC queue. Depends on non-promisc NIC support in bridge.
- IPoIB infiniband bridging
Plan: implement macvtap for ipoib and virtio-ipoib
- RDMA bridging
- DMA engine (IOAT) use in tun
- use kvm eventfd support for injecting level interrupts,
  enable vhost by default for level interrupts (see the irqfd sketch below)
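  KVM already exposes the building block for this: an irqfd with a resample
  eventfd (KVM_IRQFD_FLAG_RESAMPLE) models a level-triggered line. A hedged
  sketch, assuming vm_fd and gsi come from existing VM setup code:

 #include <stdint.h>
 #include <string.h>
 #include <unistd.h>
 #include <sys/eventfd.h>
 #include <sys/ioctl.h>
 #include <linux/kvm.h>

 /* Returns the fd to signal for asserting the level, or -1 on error. */
 int setup_level_irqfd(int vm_fd, unsigned int gsi)
 {
     int irq_fd = eventfd(0, 0);       /* written to assert the line */
     int resample_fd = eventfd(0, 0);  /* signaled by KVM on guest EOI */
     struct kvm_irqfd irqfd;

     if (irq_fd < 0 || resample_fd < 0)
         return -1;

     memset(&irqfd, 0, sizeof(irqfd));
     irqfd.fd = irq_fd;
     irqfd.gsi = gsi;
     irqfd.flags = KVM_IRQFD_FLAG_RESAMPLE;
     irqfd.resamplefd = resample_fd;

     if (ioctl(vm_fd, KVM_IRQFD, &irqfd) < 0)
         return -1;

     /* write(irq_fd, &(uint64_t){1}, 8) now asserts the interrupt;
      * KVM de-asserts on guest EOI and signals resample_fd so the
      * device model can re-assert if the source is still pending. */
     return irq_fd;
 }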
- virtio API extension: improve small packet/large buffer performance:
support "reposting" buffers for mergeable buffers, support pool for indirect buffers
=== vague ideas: path to implementation not clear ===
- ring redesign:
  - find a way to test raw ring performance
  - fix cacheline bounces
  - reduce interrupts
- support more queues
  We limit TUN to 8 queues, but we really want 1 queue per guest CPU. The
  limit comes from the net core; we need to teach it to allocate an array of
  pointers rather than an array of queues (see the sketch below).
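  The suggested net-core change is just moving from a fixed embedded array of
  queue structs to an array of queue pointers sized for many CPUs; a schematic
  C fragment, with struct names invented for illustration:

 struct tun_queue { /* per-queue state elided */ int placeholder; };

 #define MAX_TUN_QUEUES 256          /* e.g. one per possible guest CPU */

 /* today: 8 queue structs embedded in the device, so the limit is baked in */
 struct tun_device_old {
     struct tun_queue queues[8];
 };

 /* proposed: an array of pointers, with queues allocated only as attached */
 struct tun_device_new {
     struct tun_queue *queues[MAX_TUN_QUEUES];
 };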
- irq/numa affinity:
  Networking goes much faster with IRQ pinning, both with and without NUMA.
  What can be done to make the non-pinned setup go faster?
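  What "irq pinning" means concretely: writing a CPU bitmap to
  /proc/irq/<n>/smp_affinity, which is what tuning scripts and irqbalance do.
  A small sketch, assuming cpu < 32 so the mask fits in one hex word:

 #include <stdio.h>

 /* Pin an IRQ to a single CPU via procfs; returns 0 on success. */
 int pin_irq_to_cpu(int irq, int cpu)
 {
     char path[64];
     FILE *f;

     snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
     f = fopen(path, "w");
     if (!f)
         return -1;
     fprintf(f, "%x\n", 1u << cpu);  /* hex bitmap of allowed CPUs */
     fclose(f);
     return 0;
 }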
- reduce conflict with VCPU thread
  If the VCPU and networking run on the same CPU, they conflict, resulting in
  bad performance. Fix that: push the vhost thread out to another CPU more
  aggressively (see the affinity sketch below).
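  One way to do the pushing from userspace, sketched with sched_setaffinity(2);
  the vhost worker's tid is assumed to be discovered externally (e.g. the
  vhost-<pid> kernel thread in ps output):

 #define _GNU_SOURCE
 #include <sched.h>
 #include <sys/types.h>

 /* Move a thread (e.g. a vhost worker) onto a specific CPU. */
 int move_thread_to_cpu(pid_t tid, int cpu)
 {
     cpu_set_t set;

     CPU_ZERO(&set);
     CPU_SET(cpu, &set);
     return sched_setaffinity(tid, sizeof(set), &set);
 }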
- rx mac filtering in tun
  The need for this is still not fully understood, as we already have
  filtering in the bridge. tun has a small table of addresses that would need
  to be made larger, if we only need filtering for unicast (multicast is
  handled by IGMP filtering). The existing table is programmed as sketched
  below.
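  The small address table mentioned above is programmed with TUNSETTXFILTER;
  a sketch of filling it from userspace, which also shows where the fixed
  table size becomes the constraint:

 #include <stdlib.h>
 #include <string.h>
 #include <sys/ioctl.h>
 #include <linux/if_tun.h>
 #include <linux/if_ether.h>

 /* Install n unicast MACs into tun's filter table; returns the ioctl result. */
 int set_tun_mac_filter(int tun_fd, const unsigned char (*macs)[ETH_ALEN], int n)
 {
     struct tun_filter *filt;
     int ret;

     /* struct tun_filter ends in a flexible array of MAC addresses */
     filt = calloc(1, sizeof(*filt) + n * ETH_ALEN);
     if (!filt)
         return -1;
     filt->count = n;    /* tun keeps only a small exact-match table today */
     memcpy(filt->addr, macs, n * ETH_ALEN);
     ret = ioctl(tun_fd, TUNSETTXFILTER, filt);
     free(filt);
     return ret;
 }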
- vlan filtering in tun
  The need for this is still not fully understood, as we already have
  filtering in the bridge.
- vlan filtering in bridge
IGMP snooping in bridge should take vlans into account
=== testing projects ===
Keeping networking stable is highest priority.
- Run weekly test on upstream HEAD covering test matrix with autotest
=== non-virtio-net devices ===
- e1000: stabilize
=== test matrix ===
DOA test matrix (all combinations should work):
  vhost: test both on and off, obviously
  test: hotplug/unplug, vlan/mac filtering, netperf,
        file copy both ways: scp, NFS, NTFS
  guests: linux: release and debug kernels, windows
  conditions: plain run, run while under migration, vhost on/off migration
  networking setup: simple, qos with cgroups
  host configuration: host-guest, external-guest