
This page should cover all networking related activity in KVM; currently most info relates to virtio-net.

TODO: add bugzilla entry links.

=== projects in progress - contributions are still very welcome! ===

  • vhost-net scalability tuning: threading for many VMs
     Plan: switch to a workqueue shared by many VMs
     (see the sketch after this list)
     Developer: Bandan Das
     Testing: netperf guest to guest
  • multiqueue support in macvtap
      multiqueue is only supported for tun.
      Add support for macvtap.
      Developer: Jason Wang
  • support more queues
    We limit TUN to 8 queues, but we really want
    1 queue per guest CPU. The limit comes from net
    core, need to teach it to allocate array of
    pointers and not array of queues.
    Jason has a draft patch to use flex array.
    Another thing is to move the flow caches out of tun_struct.
    Developer: Jason Wang
  • enable multiqueue by default
      Multiqueue causes regressions in some workloads
      (GSO tends to batch less when mq is enabled), so
      it is off by default. Detect this and enable/disable
      automatically so we can make it on by default.
      Developer: Jason Wang
  • rework on flow caches
      Current hlist implementation of flow caches has several limitations:
      1) in the worst case, linear search is slow
      2) it does not scale
      Developer: Jason Wang
  • eliminate the extra copy in virtio-net driver
      We currently do an extra copy of 128 bytes for every packet.
      This could be eliminated for small packets by:
      1) using build_skb() and head frags (see the sketch after this list)
      2) a bigger vnet header length ( >= NET_SKB_PAD + NET_IP_ALIGN )
      Or use a dedicated queue for receiving small packets? (reordering)
      Developer: Jason Wang
  • make pktgen work for virtio-net (or partially orphan)
      virtio-net orphans the skb during tx,
      which makes pktgen wait forever on the refcnt.
      Jason's idea: introduce a flag to tell pktgen not to wait
      Discussion here:
      MST's idea: add a .ndo_tx_polling, not only for pktgen
      Developer: Jason Wang
  • Add HW_VLAN_TX support for tap
      Eliminate the extra data moving for tagged packets
      Developer: Jason Wang
  • Announce self by guest driver
      Send gratuitous ARP from the guest driver. The guest part is
      finished (see the sketch after this list); qemu is ongoing.
      V7 patches are here:
      Developer: Jason Wang
  • guest programmable mac/vlan filtering with macvtap
       Developer: Amos Kong
       Status: GuestProgrammableMacVlanFiltering
  • bridge without promisc mode in NIC
      given hardware support, teach bridge
      to program mac/vlan filtering in the NIC.
      Helps performance and security on noisy LANs.
      Developer: Vlad Yasevich
  • reduce networking latency:
      allow handling short packets from softirq or VCPU context.
      We are going through the scheduler 3 times
      (could be up to 5 if softirqd is involved).
      Consider RX: host irq -> io thread -> VCPU thread ->
      guest irq -> guest thread.
      This adds a lot of latency.
      We can cut it by some 1.5x if we do a bit of work
      either in the VCPU or softirq context.
      Testing: netperf TCP RR - should be improved drastically
               netperf TCP STREAM guest to host - no regression
      Developer: MST
  • Flexible buffers: put virtio header inline with packet data
      (see the sketch after this list)
      Developer: MST
  • device failover to allow migration with assigned devices
 Developer: Gal Hammer, Cole Robinson, Laine Stump, MST
  • Reuse vringh code for better maintainability
 Developer: Rusty Russell
  • Improve stats, make them more helpful for performance analysis
 Developer: Sriram Narasimhan
  • Bug: e1000 & rtl8139: changing the MAC address in the guest is not reflected in qemu (info network)
 Developer: Amos Kong
  • Enable GRO for packets coming to bridge from a tap interface
 Developer: Dmitry Fleytman
  • Better support for windows LRO
 Extend virtio-header with statistics for GRO packets:
 number of packets coalesced and number of duplicate ACKs coalesced
 Developer: Dmitry Fleytman
  • IPoIB infiniband bridging
 Plan: implement macvtap for ipoib and virtio-ipoib
 Developer: MST
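
A minimal sketch of what the shared-workqueue plan for vhost-net scalability could look like; the names here (vhost_wq, vhost_dev_work, vhost_dev_flush_pending) are hypothetical illustrations, not Bandan's actual patch.
<pre>
/* Hypothetical names throughout: vhost_wq, vhost_dev_work and
 * vhost_dev_flush_pending() are illustrations, not the actual patch. */
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct vhost_dev;					/* opaque here */
void vhost_dev_flush_pending(struct vhost_dev *dev);	/* hypothetical */

static struct workqueue_struct *vhost_wq;	/* one queue, many VMs */

struct vhost_dev_work {
	struct work_struct work;
	struct vhost_dev *dev;
};

static void vhost_dev_work_fn(struct work_struct *work)
{
	struct vhost_dev_work *w =
		container_of(work, struct vhost_dev_work, work);

	/* run this device's pending virtqueue work */
	vhost_dev_flush_pending(w->dev);
}

static int vhost_wq_init(void)
{
	/* WQ_UNBOUND: let the scheduler pick a CPU per work item */
	vhost_wq = alloc_workqueue("vhost", WQ_UNBOUND, 0);
	return vhost_wq ? 0 : -ENOMEM;
}

static void vhost_dev_work_init(struct vhost_dev_work *w, struct vhost_dev *d)
{
	w->dev = d;
	INIT_WORK(&w->work, vhost_dev_work_fn);
}

/* called where the per-device kthread used to be woken */
static void vhost_work_queue_shared(struct vhost_dev_work *w)
{
	queue_work(vhost_wq, &w->work);
}
</pre>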
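
A sketch of option 1 from the extra-copy item: wrap the receive buffer with build_skb() instead of copying into a fresh skb head. The buffer size and helper names are illustrative assumptions, not the real virtio-net code.
<pre>
/* Illustrative sketch; RX_FRAG_SIZE and function names are assumptions. */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define RX_FRAG_SIZE	2048	/* assumed per-buffer size */

static void *virtnet_alloc_rx_frag(void)
{
	/* page-fragment allocator; headroom comes out of the frag */
	return netdev_alloc_frag(RX_FRAG_SIZE);
}

/* assumes the device wrote packet data at
 * buf + NET_SKB_PAD + NET_IP_ALIGN (hence the bigger
 * vnet header length in option 2 above) */
static struct sk_buff *virtnet_build_rx_skb(void *buf, unsigned int len)
{
	struct sk_buff *skb;

	/* no copy: the skb head points straight at the receive buffer */
	skb = build_skb(buf, RX_FRAG_SIZE);
	if (!skb)
		return NULL;

	skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
	skb_put(skb, len);
	return skb;
}
</pre>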
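
A sketch of the guest side of self-announcement (the part described above as finished); virtnet_send_command() here is a simplified stand-in for the driver's real control-virtqueue helper.
<pre>
/* Sketch only; virtnet_send_command() is a simplified stand-in. */
#include <linux/types.h>
#include <linux/netdevice.h>
#include <linux/virtio_net.h>

void virtnet_send_command(struct net_device *dev, u8 class, u8 cmd);

static void virtnet_handle_announce(struct net_device *dev, u16 status)
{
	if (!(status & VIRTIO_NET_S_ANNOUNCE))
		return;

	/* fires NETDEV_NOTIFY_PEERS, which sends gratuitous ARP/NA */
	netdev_notify_peers(dev);

	/* ack so the host stops setting the status bit */
	virtnet_send_command(dev, VIRTIO_NET_CTRL_ANNOUNCE,
			     VIRTIO_NET_CTRL_ANNOUNCE_ACK);
}
</pre>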
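
A sketch of the flexible-buffers idea: lay the virtio-net header inline before the packet data so transmit needs one descriptor rather than two. It assumes the device accepts such a layout; the struct and function names are hypothetical.
<pre>
/* Sketch; struct inline_tx_buf and xmit_inline() are hypothetical. */
#include <linux/gfp.h>
#include <linux/scatterlist.h>
#include <linux/types.h>
#include <linux/virtio.h>
#include <linux/virtio_net.h>

struct inline_tx_buf {
	struct virtio_net_hdr hdr;	/* header immediately before data */
	u8 data[1500];			/* illustrative payload area */
};

static int xmit_inline(struct virtqueue *vq, struct inline_tx_buf *buf,
		       unsigned int len)
{
	struct scatterlist sg;

	/* one descriptor covers header + packet, instead of two */
	sg_init_one(&sg, buf, sizeof(buf->hdr) + len);
	return virtqueue_add_outbuf(vq, &sg, 1, buf, GFP_ATOMIC);
}
</pre>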

=== projects that are not started yet - no owner ===

  • sharing config interrupts
      Support more devices by sharing a single MSI vector
      between multiple virtio devices
      (applies to virtio-blk too; see the sketch after this list).
  • netdev polling for virtio.
 There are two kinds of netdev polling:
 - netpoll - used for debugging
 - proposed low latency net polling
  • receive side zero copy
      The ideal is a NIC with accelerated RFS support,
      so we can feed the virtio rx buffers into the correct NIC queue.
      Depends on non-promisc NIC support in bridge.
  • RDMA bridging
  • DMA engine (IOAT) use in tun
 Old patch here: [PATCH RFC] tun: dma engine support
 It does not speed things up. Need to see why and
 what can be done.
  • use kvm eventfd support for injecting level interrupts,
 enable vhost by default for level interrupts
  • virtio API extension: improve small packet/large buffer performance:
 support "reposting" buffers for mergeable buffers,
 support pool for indirect buffers
  • more GSO type support:
      The kernel does not yet support additional GSO types:
      FCOE, GRE, UDP_TUNNEL (see the sketch after this list).
  • ring aliasing:
      using vhost-net as a networking backend with virtio-net in QEMU
      being what's guest facing.
      This gives you the best of both worlds: QEMU acts as a first
      line of defense against a malicious guest while still getting the
      performance advantages of vhost-net (zero-copy).
      In fact a bit of complexity in vhost was put there in the vague hope
      of supporting something like this: virtio rings are not translated
      through regular memory tables; instead, vhost gets a pointer to the
      ring address (see the sketch after this list).
      This allows qemu to act as a man in the middle,
      verifying the descriptors but not touching the packet data.
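
One way the shared config interrupt could be wired up, sketched with IRQF_SHARED dispatch; the registration list and the config_changed() callback are pure assumptions about how dispatch might work, not existing code.
<pre>
/* Sketch; list and callback are assumptions, not existing code. */
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/types.h>

struct shared_cfg_dev {
	struct list_head node;
	void *vdev;				/* the virtio device */
	bool (*config_changed)(void *vdev);	/* true if it was ours */
};

static LIST_HEAD(shared_cfg_devs);

static irqreturn_t shared_cfg_irq(int irq, void *data)
{
	struct shared_cfg_dev *d;
	irqreturn_t ret = IRQ_NONE;

	/* one vector, many devices: ask each if its config changed */
	list_for_each_entry(d, &shared_cfg_devs, node)
		if (d->config_changed(d->vdev))
			ret = IRQ_HANDLED;
	return ret;
}

static int shared_cfg_register(int irq, struct shared_cfg_dev *d)
{
	list_add(&d->node, &shared_cfg_devs);
	/* IRQF_SHARED lets several devices hang off one vector */
	return request_irq(irq, shared_cfg_irq, IRQF_SHARED,
			   "virtio-config", d);
}
</pre>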
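
A sketch of where new GSO types would plug into the skb-to-virtio mapping; the GRE and UDP_TUNNEL branches are commented out because no VIRTIO_NET_HDR_GSO_* values exist for them yet.
<pre>
/* Sketch; the GRE/UDP_TUNNEL virtio values are made up, since the
 * virtio spec defines no such GSO types yet. */
#include <linux/errno.h>
#include <linux/skbuff.h>
#include <linux/virtio_net.h>

static int gso_type_to_virtio(struct sk_buff *skb, u8 *out)
{
	unsigned int gso = skb_shinfo(skb)->gso_type;

	if (gso & SKB_GSO_TCPV4)
		*out = VIRTIO_NET_HDR_GSO_TCPV4;
	else if (gso & SKB_GSO_TCPV6)
		*out = VIRTIO_NET_HDR_GSO_TCPV6;
	else if (gso & SKB_GSO_UDP)
		*out = VIRTIO_NET_HDR_GSO_UDP;
	/* the missing cases; VIRTIO_NET_HDR_GSO_GRE and
	 * VIRTIO_NET_HDR_GSO_UDP_TUNNEL would have to be specified:
	 *
	 * else if (gso & SKB_GSO_GRE) ...
	 * else if (gso & SKB_GSO_UDP_TUNNEL) ...
	 */
	else
		return -EINVAL;

	if (gso & SKB_GSO_TCP_ECN)
		*out |= VIRTIO_NET_HDR_GSO_ECN;
	return 0;
}
</pre>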
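
For the ring aliasing item, a userspace sketch of the existing VHOST_SET_VRING_ADDR hook that lets qemu hand vhost the ring addresses directly; error handling and the rest of the vhost setup sequence are omitted.
<pre>
/* Userspace sketch (qemu side); error handling omitted. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int set_ring(int vhost_fd, unsigned int idx,
		    void *desc, void *avail, void *used)
{
	struct vhost_vring_addr addr = {
		.index		 = idx,
		/* vhost uses these pointers as-is, without translating
		 * them through the memory table -- which is what lets
		 * qemu keep ownership of the rings */
		.desc_user_addr	 = (uint64_t)(uintptr_t)desc,
		.avail_user_addr = (uint64_t)(uintptr_t)avail,
		.used_user_addr	 = (uint64_t)(uintptr_t)used,
	};

	return ioctl(vhost_fd, VHOST_SET_VRING_ADDR, &addr);
}
</pre>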

=== vague ideas: path to implementation not clear ===

  • ring redesign:
     find a way to test raw ring performance 
     fix cacheline bounces 
     reduce interrupts

  • irq/numa affinity:
    networking goes much faster with irq pinning:
    both with and without numa.
    what can be done to make the non-pinned setup go faster?
  • reduce conflict with VCPU thread
   if VCPU and networking run on same CPU,
   they conflict resulting in bad performance.
   Fix that, push vhost thread out to another CPU
   more aggressively.
  • rx mac filtering in tun
       the need for this is still not understood, as we have filtering in bridge;
       we have a small table of addresses, need to make it larger
       if we only need filtering for unicast (multicast is handled by IGMP filtering)
  • vlan filtering in tun
       the need for this is still not understood as we have filtering in bridge
  • vlan filtering in bridge
       IGMP snooping in bridge should take vlans into account
  • tx coalescing
        Delay several packets before kicking the device
        (see the sketch after this list).
  • interrupt coalescing
        Reduce the number of interrupts.
  • bridging on top of macvlan
 add code to forward LRO status from macvlan (not macvtap)
 back to the lowerdev, so that setting up forwarding
 from macvlan disables LRO on the lowerdev
  • preserve packets exactly with LRO
      LRO is not normally compatible with forwarding.
      With virtio we are getting packets from a linux host,
      so we could conceivably preserve packets exactly
      even with LRO. I am guessing other hardware could be
      doing this as well.
  • vxlan
 What could we do here?
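
A minimal sketch of the tx coalescing idea: defer virtqueue_kick() until a batch fills or a short timer fires. Batch size, timeout, and locking are assumptions glossed over here.
<pre>
/* Sketch; batch size, timeout and locking are all glossed over. */
#include <linux/hrtimer.h>
#include <linux/kernel.h>
#include <linux/virtio.h>

#define TX_BATCH	16		/* assumed batch size */
#define TX_TIMEOUT_NS	(50 * 1000)	/* assumed 50us deadline */

struct coalesce_state {
	struct virtqueue *vq;
	struct hrtimer timer;
	unsigned int pending;
};

static enum hrtimer_restart tx_coalesce_timeout(struct hrtimer *t)
{
	struct coalesce_state *c =
		container_of(t, struct coalesce_state, timer);

	/* deadline hit: flush whatever is queued so far */
	virtqueue_kick(c->vq);
	c->pending = 0;
	return HRTIMER_NORESTART;
}

static void tx_coalesce_init(struct coalesce_state *c, struct virtqueue *vq)
{
	c->vq = vq;
	c->pending = 0;
	hrtimer_init(&c->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	c->timer.function = tx_coalesce_timeout;
}

/* called after each queued packet instead of kicking directly */
static void tx_coalesce_queue(struct coalesce_state *c)
{
	if (++c->pending >= TX_BATCH) {
		hrtimer_cancel(&c->timer);
		virtqueue_kick(c->vq);		/* batch full: kick now */
		c->pending = 0;
	} else if (!hrtimer_active(&c->timer)) {
		/* arm a short deadline so a lone packet is not stuck */
		hrtimer_start(&c->timer, ns_to_ktime(TX_TIMEOUT_NS),
			      HRTIMER_MODE_REL);
	}
}
</pre>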

=== testing projects ===

Keeping networking stable is highest priority.

  • Write some unit tests for vhost-net/vhost-scsi
  • Run weekly test on upstream HEAD covering test matrix with autotest
  • Measure the effect of each of the above-mentioned optimizations
 - Use autotest network performance regression testing (that runs netperf)
 - Also test any wild idea that works. Some may be useful.

=== non-virtio-net devices ===

  • e1000: stabilize

=== test matrix ===

DOA test matrix (all combinations should work):

  • vhost: test both on and off, obviously
  • test: hotplug/unplug, vlan/mac filtering, netperf,
    file copy both ways: scp, NFS, NTFS
  • guests: linux: release and debug kernels, windows
  • conditions: plain run, run while under migration,
    vhost on/off migration
  • networking setup: simple, qos with cgroups
  • host configuration: host-guest, external-guest