NetworkingTodo

Revision as of 11:33, 21 September 2010

This page should cover all networking-related activity in KVM; currently most of the information relates to virtio-net.

Stabilization is currently the highest priority. DOA test matrix (all combinations should work):

  • vhost: test both on and off, obviously
  • test: hotplug/unplug, vlan/mac filtering, netperf,
    file copy both ways: scp, NFS, NTFS
  • guests: linux (release and debug kernels), windows
  • conditions: plain run, run while under migration,
    vhost on/off migration
  • networking setup: simple, qos with cgroups
  • host configuration: host-guest, external-guest

vhost-net driver projects

  • iovec length limitations (see the sketch after this list)
     Developer: Jason Wang <jasowang@redhat.com>
     Testing: guest to host file transfer on windows
  • mergeable buffers: fix host->guest BW regression
     Developer: David Stevens <dlstevens@us.ibm.com>
     Testing: netperf host to guest, default flags
  • scalability tuning: threading for guest to guest
     Developer: MST
     Testing: netperf guest to guest
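
A note on the first item above: the iovec length limit is not specific to vhost; POSIX already bounds the number of entries a single writev()/readv() call may take at IOV_MAX. The sketch below only illustrates the generic pattern of splitting a transfer so no single call exceeds that bound; it is not vhost-net code, and the handling of short writes is simplified.

    /* Illustrative only: cap the number of iovec entries per writev() at
     * IOV_MAX and loop over the rest.  A complete version would also cope
     * with a short write that stops in the middle of an iovec entry. */
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <limits.h>

    ssize_t writev_capped(int fd, const struct iovec *iov, int iovcnt)
    {
        ssize_t total = 0;

        while (iovcnt > 0) {
            int n = iovcnt > IOV_MAX ? IOV_MAX : iovcnt;
            ssize_t r = writev(fd, iov, n);

            if (r < 0)
                return total ? total : -1;
            total += r;
            iov += n;
            iovcnt -= n;
        }
        return total;
    }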

qemu projects

  • fix hotplug issues
     Developer: MST
     Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=623735
  • migration with multiple macs/vlans
     qemu only sends the announce packet ("ping") for the first mac with no vlan;
     it needs to be sent for all macs/vlans (see the sketch after this list)
  • bugfix: crash with illegal fd= value on command line
     Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=581750
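
On the multiple macs/vlans item above: the idea of the fix is to emit one announcement per configured (mac, vlan) pair after migration instead of a single untagged frame for the first MAC. The sketch below builds a plain gratuitous ARP with an optional 802.1Q tag; it only illustrates that loop body, it is not qemu's actual self-announce code, and the helper name and IP argument are invented here.

    /* Hypothetical sketch: build one gratuitous ARP per (mac, vlan) pair.
     * Not qemu code; qemu's real self-announce frame may differ.
     * 'ip' is the guest IPv4 address in network byte order; vlan < 0 means
     * an untagged frame.  Returns the frame length written into buf
     * (buf must hold at least 64 bytes). */
    #include <stdint.h>
    #include <string.h>
    #include <stddef.h>

    size_t build_announce(uint8_t *buf, const uint8_t mac[6],
                          uint32_t ip, int vlan)
    {
        static const uint8_t arp_req[] = { 0,1, 8,0, 6,4, 0,1 };
        size_t off;

        memset(buf, 0xff, 6);            /* dst: broadcast            */
        memcpy(buf + 6, mac, 6);         /* src: the MAC we announce  */
        off = 12;
        if (vlan >= 0) {                 /* optional 802.1Q tag       */
            buf[off++] = 0x81; buf[off++] = 0x00;
            buf[off++] = (vlan >> 8) & 0x0f;
            buf[off++] = vlan & 0xff;
        }
        buf[off++] = 0x08; buf[off++] = 0x06;  /* ethertype: ARP      */

        memcpy(buf + off, arp_req, sizeof(arp_req)); off += sizeof(arp_req);
        memcpy(buf + off, mac, 6); off += 6;   /* sender MAC          */
        memcpy(buf + off, &ip, 4); off += 4;   /* sender IP           */
        memset(buf + off, 0, 6);   off += 6;   /* target MAC (unused) */
        memcpy(buf + off, &ip, 4); off += 4;   /* target IP == sender */
        return off;
    }

A caller would loop over every configured MAC and every VLAN it sits on, build a frame for each combination and hand it to the backend, padding to the 60-byte Ethernet minimum if the backend requires it.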

virtio projects

  • suspend/resume support
  • API extension: improve small packet/large buffer performance:
     support "reposting" buffers for mergeable buffers,
     support a pool for indirect buffers
  • ring redesign:
     find a way to test raw ring performance
     fix cacheline bounces
     reduce interrupts
     Developer: MST
     see patchset: virtio: put last seen used index into ring itself
     (a sketch of the interrupt-suppression idea follows this list)
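
The referenced patchset lets the guest publish the last used-ring index it has seen inside the ring itself, so the host can skip interrupts the guest would ignore anyway. A rough sketch of that idea follows; the field names are invented here, and the memory barriers and wraparound-safe comparison a real implementation needs are omitted.

    /* Sketch of interrupt suppression via a published "last seen used"
     * index.  Not the actual virtio ring layout. */
    #include <stdint.h>
    #include <stdbool.h>

    struct ring_shared {
        uint16_t used_idx;        /* host: index of next free used slot */
        uint16_t last_seen_used;  /* guest: how far it has consumed     */
    };

    /* Guest side: record progress after consuming used entries. */
    void guest_ack_used(struct ring_shared *r, uint16_t consumed_up_to)
    {
        r->last_seen_used = consumed_up_to;   /* needs a write barrier */
    }

    /* Host side: after advancing used_idx past old_used_idx, interrupt only
     * if the guest had already caught up with old_used_idx; otherwise it is
     * still polling and will pick up the new entries on its own. */
    bool host_needs_interrupt(const struct ring_shared *r,
                              uint16_t old_used_idx)
    {
        return r->last_seen_used == old_used_idx;
    }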

projects involving other kernel components and/or networking stack

  • rx mac filtering in tun
     the need for this is still not understood, as we have filtering in the bridge
     we have a small table of addresses; it needs to be made larger
     if we only need filtering for unicast (multicast is handled by IMP filtering)
  • vlan filtering in tun
     the need for this is still not understood, as we have filtering in the bridge
     for a small # of vlans we can use BPF (see the sketch after this list)
  • zero copy tx/rx for macvtap
     Developers: tx zero copy: Shirley Ma; rx zero copy: Xin Xiaohui
  • multiqueue (involves all of vhost, qemu, virtio, networking stack)
     Developer: Krishna Kumar
     Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=632751
  • kvm MSI interrupt injection fast path
     Developer: MST
  • kvm eventfd support for injecting level interrupts
  • DMA engine (IOAT) use in tun
  • allow handling short packets from softirq context
     Testing: netperf TCP STREAM guest to host
              netperf TCP RR
  • irq affinity:
     networking goes much faster with irq pinning, both with and without numa.
     what can be done to make the non-pinned setup go faster?
     (a pinning sketch also follows this list)
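
For the "small # of vlans" point under vlan filtering in tun above, a classic BPF program can already be attached to a tap fd with the TUNATTACHFILTER ioctl. A minimal sketch, assuming tagged frames reach the tap fd with the 802.1Q header in-band, and using VLAN ID 100 (passed by the caller) as an example value:

    /* Sketch: accept only frames tagged with the given VLAN ID on a tap fd,
     * drop everything else.  Assumes the 802.1Q tag is present in the frame
     * data seen by the tap; the fd and VLAN ID are example values. */
    #include <linux/filter.h>
    #include <linux/if_tun.h>
    #include <sys/ioctl.h>

    int attach_vlan_filter(int tap_fd, unsigned int vid)
    {
        struct sock_filter insns[] = {
            BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 12),            /* ethertype       */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x8100, 0, 4), /* !802.1Q: drop   */
            BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 14),            /* TCI             */
            BPF_STMT(BPF_ALU | BPF_AND | BPF_K, 0x0fff),       /* keep VID bits   */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, vid, 0, 1),    /* wrong VID: drop */
            BPF_STMT(BPF_RET | BPF_K, 0xffff),                 /* accept          */
            BPF_STMT(BPF_RET | BPF_K, 0),                      /* drop            */
        };
        struct sock_fprog prog = {
            .len    = sizeof(insns) / sizeof(insns[0]),
            .filter = insns,
        };

        return ioctl(tap_fd, TUNATTACHFILTER, &prog);
    }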

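And for the irq affinity item: pinning is done by writing a hex CPU mask to /proc/irq/<n>/smp_affinity. A minimal sketch with example values (a real setup would pin each device/vhost vector near the CPU running the corresponding guest or vhost thread, and irqbalance may need to be stopped so the setting sticks):

    /* Sketch: pin an IRQ to a set of CPUs by writing a hex mask to procfs.
     * IRQ number and mask below are example values only. */
    #include <stdio.h>

    int pin_irq(unsigned int irq, const char *cpumask_hex)
    {
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/irq/%u/smp_affinity", irq);
        f = fopen(path, "w");
        if (!f)
            return -1;
        fprintf(f, "%s\n", cpumask_hex);
        return fclose(f);
    }

    int main(void)
    {
        return pin_irq(42, "4") ? 1 : 0;   /* mask 0x4 = CPU 2 only */
    }
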
testing projects

  • Cover test matrix with autotest
  • Test with windows drivers, pass WHQL

non-virtio-net devices

  • e1000: stabilize
     Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=602205

bugzilla entries for bugs fixed

  • verify these are ok upstream
    https://bugzilla.redhat.com/show_bug.cgi?id=623552
    https://bugzilla.redhat.com/show_bug.cgi?id=632747
    https://bugzilla.redhat.com/show_bug.cgi?id=632745


abandoned projects

  • Add GSO/checksum offload support to AF_PACKET(raw) sockets.
     status: incomplete
  • guest kernel 2.6.31 seems to work well. Under certain workloads,
     virtio performance has regressed with guest kernels 2.6.32 and up
     (but still better than userspace). A patch has been posted:
     http://www.spinics.net/lists/netdev/msg115292.html
     status: might be fixed, need to test