VMchannel Requirements

From KVM
Revision as of 03:43, 16 October 2009 by Amit (talk | contribs) (Add notes on invocation)

Requirements

  • We want an interface between the guest and the host
  • The channel is to be used for simple communication, such as sharing the clipboard between the user desktop and the guest desktop
    • For relatively low rate of data transfer -- a few MB/s
    • Events to be delivered to the guest, like 'shutdown', 'reboot', 'logoff'
    • Queries to the guest, like 'which users are logged in'
  • Survive live migration
  • Support for multiple agents (consumers of the data) on the guest
  • Multiple channels could be opened at the same time
  • In the multi-channel case, one blocked channel shouldn't block communication on the others (and one channel shouldn't hog all the bandwidth)
  • Stable ABI (for future upgrades)
  • Channel addressing
    • An agent in the guest should be able to find the channel it's interested in
  • Dynamic channel creation
  • Security
    • No threats to the host
    • Unprivileged users should be able to use the channel
  • Should work out of the box, without any configuration necessary on the part of the guest
  • Notifications of channels being added / removed (hotplugging)
  • Notifications on connection / disconnection on guest as well as host side
  • An API inside qemu to communicate with agents in the guest


History

A few reasons why the obvious solutions do not work:

  • via the fully emulated serial device
    • performance (exit per byte)
    • scalability - only 4 serial ports per guest
    • accessed by root only in the guest
  • via TCP/IP network sockets
    • The guest may not have networking enabled
    • The guest firewall may block access to the host IPs
    • Windows can't bind sockets to specific ethernet interfaces


Use Cases

  • Guest - Host clipboard copy/paste operations
    • By a VMM or via an internal API within qemu
  • libguestfs (offline usage)
    • For poking inside a guest to fetch the list of installed apps, etc.
  • Online usage
    • Locking desktop session when vnc session is closed
  • Cluster I/O Fencing aka STONITH
    • Current models require networking between guest/host
      • fence_virsh, xen0 -> ssh to a defined host to perform fencing; no migration tracking; requires SSH key distribution to work.
      • fence_xvm -> tracks migrations, but requires multicast between guest/host; distributed key recommended but not required
    • Using VMChannel-Serial, the requirement of guest-host networking can be avoided
    • Key distribution of any sort can be avoided, making this easier to configure than existing solutions


Invocation

The virtio-console code is extended to spawn multiple serial ports inside the guest. Creating the serial ports on the host looks like this:

-device virtio-serial-pci \
-device virtserialport,chardev=...,name=org.linux-kvm.port.0 \
-device virtserialport,chardev=...,name=org.linux-kvm.port.1,cache_buffers=0 \
-device virtserialport,chardev=...,name=org.linux-kvm.port.2,byte_limit=1048576

This creates three ports. The guest sees them as:

/dev/vcon1
/dev/vcon2
/dev/vcon3
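For context, each `chardev=...` above refers to a host-side chardev backend that must be defined separately. A fuller invocation might look like the following; the socket path and id are illustrative assumptions, not part of the original example:

```shell
# Hypothetical sketch: back one virtserialport with a unix-socket chardev
# on the host. The id "port0char" and path "/tmp/port0.sock" are made up.
qemu-system-x86_64 ... \
    -device virtio-serial-pci \
    -chardev socket,id=port0char,path=/tmp/port0.sock,server,nowait \
    -device virtserialport,chardev=port0char,name=org.linux-kvm.port.0
```

A host-side application can then connect to /tmp/port0.sock to exchange data with whatever guest agent has opened the corresponding /dev/vcon device.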

The ports are assigned some properties:

  • cache_buffers: default 1. This property specifies whether ports retain data received when the char device is closed on the guest or when the qemu chardev connection is closed on the host.
  • byte_limit, guest_byte_limit: These properties specify the maximum number of bytes that can be cached before no more data is pushed out. This prevents out-of-memory conditions in the host or the guest.
  • name: This property exposes the string in the guest's sysfs entries, where udev rules can find it and create symlinks to the actual device. This helps guest applications discover the ports they are interested in. For example,
$ cat /sys/class/virtio-console/vcon1/name
org.linux-kvm.port.0
$
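The sysfs lookup shown above can be done programmatically by a guest agent: walk the class directory, read each port's name attribute, and map the desired name back to a device node. A minimal sketch, assuming the /sys/class/virtio-console layout from this page (the function name and the base-path parameter are illustrative):

```python
import os

def find_port(port_name, base="/sys/class/virtio-console"):
    """Return the /dev node for the port whose sysfs 'name' attribute
    matches port_name, or None if no such port exists.

    The directory layout (base/<dev>/name -> port name string) follows
    the vcon example on this page; it is not a guaranteed stable ABI.
    """
    if not os.path.isdir(base):
        return None
    for dev in sorted(os.listdir(base)):
        name_file = os.path.join(base, dev, "name")
        try:
            with open(name_file) as f:
                if f.read().strip() == port_name:
                    return "/dev/" + dev
        except OSError:
            # Entry without a readable name attribute; skip it.
            continue
    return None
```

With the three-port setup above, find_port("org.linux-kvm.port.0") would return "/dev/vcon1", which the agent can then open like an ordinary character device.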