NetworkingTodo

This page should cover all networking related activity in KVM, currently most info is related to virtio-net.

Stabilization is currently the highest priority. DOA test matrix (all combinations should work):

       vhost: test both on and off, obviously
       test: hotplug/unplug, vlan/mac filtering, netperf,
            file copy both ways: scp, NFS, NTFS
       guests: linux: release and debug kernels, windows
       conditions: plain run, run while under migration,
               vhost on/off migration
       networking setup: simple, qos with cgroups
       host configuration: host-guest, external-guest
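
As a starting point for automating this, the sketch below just enumerates the combinations above as labelled test cases. The dimension values are taken from the list; the labels, the output format, and the idea of generating the matrix from a small C program (rather than directly in autotest) are illustrative assumptions.

 /*
  * Minimal sketch: enumerate the DOA matrix above as labelled test cases.
  * The dimension values come from the list; the labels and output format
  * are illustrative assumptions. The per-case tests (hotplug/unplug,
  * vlan/mac filtering, netperf, scp/NFS file copy) would run inside each case.
  */
 #include <stdio.h>

 int main(void)
 {
     const char *vhost[]   = { "vhost=on", "vhost=off" };
     const char *guest[]   = { "linux-release", "linux-debug", "windows" };
     const char *cond[]    = { "plain", "under-migration", "vhost-on-off-migration" };
     const char *setup[]   = { "simple", "qos-cgroups" };
     const char *hostcfg[] = { "host-guest", "external-guest" };
     unsigned v, g, c, s, h, n = 0;

     for (v = 0; v < 2; v++)
         for (g = 0; g < 3; g++)
             for (c = 0; c < 3; c++)
                 for (s = 0; s < 2; s++)
                     for (h = 0; h < 2; h++)
                         printf("case %u: %s %s %s %s %s\n", ++n,
                                vhost[v], guest[g], cond[c], setup[s], hostcfg[h]);
     return 0;
 }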

== vhost-net driver projects ==

  • iovec length limitations
  • mergeable buffers: fix host->guest BW regression
  • scalability tuning: figure out the best threading model to use
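
For context on the mergeable-buffers item: when VIRTIO_NET_F_MRG_RXBUF is negotiated, each received packet is prefixed with a header whose num_buffers field says how many rx buffers the packet was merged across. The sketch below mirrors the layout from linux/virtio_net.h, simplified here only to make the item concrete.

 /*
  * Receive header used with mergeable rx buffers (VIRTIO_NET_F_MRG_RXBUF),
  * simplified from linux/virtio_net.h. num_buffers tells the guest how many
  * descriptor chains the host spread this packet over; the host->guest BW
  * regression work is about how these buffers get filled.
  */
 #include <stdint.h>

 struct virtio_net_hdr {
     uint8_t  flags;       /* e.g. VIRTIO_NET_HDR_F_NEEDS_CSUM */
     uint8_t  gso_type;    /* e.g. VIRTIO_NET_HDR_GSO_TCPV4 */
     uint16_t hdr_len;     /* length of headers to copy for GSO */
     uint16_t gso_size;    /* GSO segment size */
     uint16_t csum_start;  /* where checksumming starts */
     uint16_t csum_offset; /* offset of checksum field from csum_start */
 };

 struct virtio_net_hdr_mrg_rxbuf {
     struct virtio_net_hdr hdr;
     uint16_t num_buffers; /* number of merged rx buffers used for this packet */
 };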

== qemu projects ==

  • fix hotplug issues
  • migration with multiple macs/vlans
       qemu only sends the announce ping with the first mac and no vlan;
       it needs to be sent for all macs/vlans
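
The "ping" here is qemu's self-announce packet after migration. Purely as an illustration of what a per-mac announce could look like, the sketch below sends one gratuitous ARP per MAC from userspace; it is not qemu's actual announce code or packet format, and the interface name, IP address and MAC list are made up.

 /*
  * Illustration only: send one gratuitous ARP per guest MAC so switches
  * relearn every address after migration. Not qemu's actual self-announce
  * code or packet format; interface, IP and MACs below are made up.
  */
 #include <arpa/inet.h>
 #include <net/if.h>
 #include <netinet/if_ether.h>
 #include <netpacket/packet.h>
 #include <stdint.h>
 #include <string.h>
 #include <sys/socket.h>
 #include <unistd.h>

 static int announce_mac(int fd, int ifindex, const uint8_t mac[6], uint32_t ip)
 {
     uint8_t frame[42];
     struct ether_header *eh = (struct ether_header *)frame;
     struct ether_arp *arp = (struct ether_arp *)(frame + sizeof(*eh));
     struct sockaddr_ll addr = { 0 };

     memset(eh->ether_dhost, 0xff, ETH_ALEN);          /* L2 broadcast */
     memcpy(eh->ether_shost, mac, ETH_ALEN);
     eh->ether_type = htons(ETHERTYPE_ARP);

     arp->ea_hdr.ar_hrd = htons(ARPHRD_ETHER);
     arp->ea_hdr.ar_pro = htons(ETHERTYPE_IP);
     arp->ea_hdr.ar_hln = ETH_ALEN;
     arp->ea_hdr.ar_pln = 4;
     arp->ea_hdr.ar_op  = htons(ARPOP_REQUEST);
     memcpy(arp->arp_sha, mac, ETH_ALEN);
     memcpy(arp->arp_spa, &ip, 4);
     memset(arp->arp_tha, 0, ETH_ALEN);
     memcpy(arp->arp_tpa, &ip, 4);                     /* target == sender: gratuitous */

     addr.sll_family  = AF_PACKET;
     addr.sll_ifindex = ifindex;
     addr.sll_halen   = ETH_ALEN;
     memset(addr.sll_addr, 0xff, ETH_ALEN);
     return sendto(fd, frame, sizeof(frame), 0,
                   (struct sockaddr *)&addr, sizeof(addr));
 }

 int main(void)
 {
     static const uint8_t macs[][6] = {               /* hypothetical guest MACs */
         { 0x52, 0x54, 0x00, 0x12, 0x34, 0x56 },
         { 0x52, 0x54, 0x00, 0x12, 0x34, 0x57 },
     };
     int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ARP));
     int ifindex = if_nametoindex("tap0");            /* hypothetical device */
     uint32_t ip = inet_addr("192.168.1.100");        /* hypothetical guest IP */
     size_t i;

     for (i = 0; i < sizeof(macs) / sizeof(macs[0]); i++)
         announce_mac(fd, ifindex, macs[i], ip);
     close(fd);
     return 0;
 }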

== virtio projects ==

  • suspend/resume support
  • API extension: improve small packet/large buffer performance:
       support "reposting" buffers for mergeable buffers,
       support pool for indirect buffers
  • ring redesign:
       find a way to test raw ring performance
       fix cacheline bounces
       reduce interrupts
       Developer: Michael
       see patchset: virtio: put last seen used index into ring itself
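
Roughly, the idea in that patchset (which later surfaced as the event-index mechanism) is to publish the driver's last seen used index in spare space at the tail of the avail ring, so the device can tell whether an interrupt is actually needed without touching an extra shared flag cacheline. The sketch below is a simplification, not the exact layout from the patches.

 /*
  * Simplified sketch of "put last seen used index into ring itself":
  * the driver publishes how far it has consumed the used ring in otherwise
  * unused space after the avail ring, and the device reads it to decide
  * whether an interrupt is needed. Not the exact layout from the patchset.
  */
 #include <stdint.h>

 struct vring_avail {
     uint16_t flags;
     uint16_t idx;
     uint16_t ring[];   /* queue_size entries, then one extra uint16_t:     */
                        /* the driver's last seen used index ("used_event") */
 };

 /* Device side: interrupt only if the new used idx has passed used_event. */
 static inline int need_interrupt(uint16_t event, uint16_t new_idx, uint16_t old_idx)
 {
     return (uint16_t)(new_idx - event - 1) < (uint16_t)(new_idx - old_idx);
 }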

=== projects involving other kernel components and/or networking stack ===

  • rx mac filtering in tun
       the need for this is still not understood, as we already have filtering in the bridge
       we have a small table of addresses; it needs to be made larger
       if we only need filtering for unicast (multicast is handled by IMP filtering)
  • vlan filtering in tun
       the need for this is still not understood, as we already have filtering in the bridge
       for a small # of vlans we can use BPF (see the filter sketch after this list)
  • zero copy tx/rx for macvtap
  • multiqueue (involves all of vhost, qemu, virtio, networking stack)
  • kvm MSI interrupt injection fast path
  • kvm eventfd support for injecting level interrupts
  • DMA engine (IOAT) use in tun
  • allow handling short packets from softirq context
  • irq affinity:
       networking goes much faster with irq pinning:
       both with and without numa.
       what can be done to make the non-pinned setup go faster?
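
On the mac/vlan filtering items above, one option is a classic BPF program attached to the tap fd with the existing TUNATTACHFILTER ioctl, as sketched below for a single unicast MAC. The offsets assume the filter sees the plain Ethernet frame; the MAC is made up, and a real filter would also need to pass broadcast/multicast (or match vlan tags for the vlan case).

 /*
  * Rough sketch: accept only frames addressed to one unicast MAC by attaching
  * a classic BPF program to a tap fd via TUNATTACHFILTER. Offsets assume the
  * filter sees the plain Ethernet frame; the MAC (52:54:00:12:34:56) is made
  * up, and a real filter would also pass broadcast/multicast.
  */
 #include <linux/filter.h>
 #include <linux/if_tun.h>
 #include <sys/ioctl.h>

 static struct sock_filter mac_filter[] = {
     /* first 2 bytes of destination MAC */
     BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
     BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x5254, 0, 3),
     /* remaining 4 bytes of destination MAC */
     BPF_STMT(BPF_LD | BPF_W | BPF_ABS, 2),
     BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x00123456, 0, 1),
     BPF_STMT(BPF_RET | BPF_K, 0xffffffff),   /* accept */
     BPF_STMT(BPF_RET | BPF_K, 0),            /* drop */
 };

 static int attach_mac_filter(int tap_fd)
 {
     struct sock_fprog prog = {
         .len    = sizeof(mac_filter) / sizeof(mac_filter[0]),
         .filter = mac_filter,
     };
     return ioctl(tap_fd, TUNATTACHFILTER, &prog);
 }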

== testing projects ==

  • Cover test matrix with autotest
  • Test with windows drivers, pass WHQL

== Short term plans ==

  • multiqueue: Krishna Kumar
  • tx zero copy: Shirley Ma
  • rx zero copy: Xin Xiaohui


== non-virtio-net devices ==

  • e1000: bug in dealing with short frames (affects Sun guests)

== bugzilla entries ==

should be fixed upstream (must verify):

  • https://bugzilla.redhat.com/show_bug.cgi?id=623552
  • https://bugzilla.redhat.com/show_bug.cgi?id=632747
  • https://bugzilla.redhat.com/show_bug.cgi?id=632745

known minor bug in qemu: https://bugzilla.redhat.com/show_bug.cgi?id=581750

need to fix (mst is working on it): https://bugzilla.redhat.com/show_bug.cgi?id=623735

feature request (multiqueue): https://bugzilla.redhat.com/show_bug.cgi?id=632751

e1000 bug: https://bugzilla.redhat.com/show_bug.cgi?id=602205

== abandoned projects ==

  • Add GSO/checksum offload support to AF_PACKET (raw) sockets
       (see the sketch after this list)
       status: incomplete
  • guest kernel 2.6.31 seems to work well. Under certain workloads,
       virtio performance has regressed with guest kernels 2.6.32 and up
       (but still better than userspace). A patch has been posted:
       http://www.spinics.net/lists/netdev/msg115292.html
       status: might be fixed, need to test
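
For context on the AF_PACKET offload item: the direction this work took is the PACKET_VNET_HDR socket option, which prefixes every packet on the socket with a virtio_net_hdr so checksum/GSO metadata can pass through; how completely that covers the item is presumably why it is marked incomplete. A minimal sketch, assuming the option is available in the running kernel:

 /*
  * Minimal sketch, assuming PACKET_VNET_HDR is available: with this option
  * set, every packet read from or written to the AF_PACKET socket is
  * prefixed with a struct virtio_net_hdr carrying checksum/GSO metadata.
  * Error handling and the actual datapath are omitted.
  */
 #include <arpa/inet.h>
 #include <linux/if_ether.h>
 #include <linux/if_packet.h>
 #include <sys/socket.h>
 #include <unistd.h>

 int open_offload_packet_socket(void)
 {
     int one = 1;
     int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

     if (fd < 0)
         return -1;
     if (setsockopt(fd, SOL_PACKET, PACKET_VNET_HDR, &one, sizeof(one)) < 0) {
         close(fd);
         return -1;
     }
     return fd;
 }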