Multiqueue-optimization

Future Optimizations

Per-CPU queue

For better parallelism, per-CPU queues are needed. This could be done by:

  1. For the backend (tap/macvtap): allocate as many queues as there are vCPUs.
  2. For the guest driver: do a 1:1 mapping between queues and vCPUs, and make sure each rx/tx queue pair is handled on the same vCPU (see the sketch after this list):
    1. bind the queue pair's irq affinity to that CPU
    2. use the smp processor id to choose the tx queue when transmitting
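
The following kernel-style C sketch illustrates step 2 under these assumptions; the helper names (bind_queue_irq, select_txq) are hypothetical, not the actual virtio-net implementation:

  #include <linux/cpumask.h>
  #include <linux/interrupt.h>
  #include <linux/netdevice.h>
  #include <linux/skbuff.h>
  #include <linux/smp.h>

  /* Hypothetical sketch: assumes queue pair i is owned by vCPU i. */

  /* 2.1: hint that queue i's irq should fire on CPU i, so rx work
   * for the pair stays on the owning vCPU. */
  static void bind_queue_irq(unsigned int irq, unsigned int cpu)
  {
          irq_set_affinity_hint(irq, cpumask_of(cpu));
  }

  /* 2.2: pick the tx queue from the CPU the sender runs on; with the
   * 1:1 mapping, tx for a pair happens on the same vCPU as rx. */
  static u16 select_txq(struct net_device *dev, struct sk_buff *skb)
  {
          return (u16)(smp_processor_id() % dev->real_num_tx_queues);
  }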

Figure: Per-cpu.jpg

Flow director

For better stream protocol (e.g. TCP) performance, we'd better make sure the packets of a single flow are handled on the same vCPU. This could be done by implementing a simple flow director in tap/macvtap.

The flow director is a hash-to-queue table which uses the lower bits of a packet's hash value as its index to record which queue the flow belongs to. The table is built while the guest transmits packets to the backend (tap/macvtap), and it is queried when the backend wants to choose a queue for transmitting a packet to the guest. It works as follows:

  1. When the virtio-net backend transmits packets to tap/macvtap, the hash of each skb is calculated, and its lower bits are used as the index under which to store the queue number in the table.
  2. When tap/macvtap wants to transmit packets to the guest, the lower bits of the skb's hash are used to look up which queue/flow the packet belongs to, and the packet is then put on that queue, as sketched below.
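
A minimal C sketch of such a table; the names (flow_table, flow_table_record, flow_table_lookup) and the table size are assumptions for illustration, not the real tap/macvtap code:

  #include <linux/skbuff.h>
  #include <linux/types.h>

  #define FLOW_TABLE_BITS 8
  #define FLOW_TABLE_SIZE (1 << FLOW_TABLE_BITS)
  #define FLOW_TABLE_MASK (FLOW_TABLE_SIZE - 1)

  /* Hash-to-queue table indexed by the lower bits of the skb hash. */
  static u16 flow_table[FLOW_TABLE_SIZE];

  /* Build step (guest -> backend): remember which queue this flow's
   * packets were transmitted on. */
  static void flow_table_record(struct sk_buff *skb, u16 queue)
  {
          flow_table[skb_get_hash(skb) & FLOW_TABLE_MASK] = queue;
  }

  /* Query step (backend -> guest): send packets of the same flow back
   * on the queue recorded during the build step. */
  static u16 flow_table_lookup(struct sk_buff *skb)
  {
          return flow_table[skb_get_hash(skb) & FLOW_TABLE_MASK];
  }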

The flow director also needs the co-operation of per-CPU queues to work.

Figure: Flow-director.jpg