Revision as of 07:49, 30 October 2009 by Amit (Update TODO)


  • We want an interface between the guest and the host
  • The channel is to be used for simple communication, like sharing of the clipboard between the user desktop and the guest desktop
    • For relatively low data-transfer rates -- a few MB/s
    • Events to be delivered to the guest, like 'shutdown', 'reboot', 'logoff'
    • Queries to the guest, like 'which users are logged in'
  • Survive live migration
  • Support for multiple agents (consumers of the data) on the guest
  • Multiple channels could be opened at the same time
  • In the multi-channel case, one blocked channel shouldn't block communication on the others (and no single channel should hog all the bandwidth)
  • Stable ABI (for future upgrades)
  • Channel addressing
    • An agent in the guest should be able to find the channel it's interested in
  • Dynamic channel creation
  • Security
    • No threats to the host
    • Unprivileged user should be able to use the channel
  • Should work out of the box, without any configuration necessary on the part of the guest
  • Notifications of channels being added / removed (hotplugging)
  • Notifications on connection / disconnection on guest as well as host side
  • An API inside qemu to communicate with agents in the guest


A few reasons why the obvious solutions do not work:

  • via the fully emulated serial device
    • performance (one VM exit per byte transferred)
    • scalability - only 4 serial ports per guest
    • accessed by root only in the guest
  • via TCP/IP network sockets
    • The guest may not have networking enabled
    • The guest firewall may block access to the host IPs
    • Windows can't bind sockets to specific ethernet interfaces

Use Cases

  • Guest - Host clipboard copy/paste operations
    • By a VMM or via an internal API within qemu
  • libguestfs (offline usage)
    • For poking inside a guest to fetch the list of installed apps, etc.
  • Online usage
    • Locking desktop session when vnc session is closed
  • Cluster I/O Fencing aka STONITH
    • Current models require networking between guest/host
      • fence_virsh, xen0 -> ssh to a defined host to perform fencing; no migration tracking; requires ssh key distribution to work
      • fence_xvm -> tracks migrations, but requires multicast between guest/host; distributed key recommended but not required
    • Using VMChannel-Serial, the requirement of guest-host networking can be avoided
    • Key distribution of any sort can be avoided, making this easier to configure than existing solutions


The virtio-console code is extended to spawn multiple serial ports inside the guest. Creating the serial ports on the host looks like this:

-device virtio-serial-pci \
-device virtserialport,chardev=...,name=org.linux-kvm.port.0 \
-device virtserialport,chardev=...,name=org.linux-kvm.port.1,cache_buffers=0 \
-device virtserialport,chardev=...,name=org.linux-kvm.port.2,byte_limit=1048576

This creates three ports. The guest sees them as /dev/vconN character devices; for example, org.linux-kvm.port.0 shows up in the guest as /dev/vcon1.


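The `chardev=...` placeholders above need concrete host backends. A complete invocation might pair each port with a Unix-socket chardev, as in the following sketch (the socket path and chardev id here are hypothetical, not part of the original example):

```shell
qemu-system-x86_64 ... \
  -chardev socket,id=port0,path=/tmp/port0,server,nowait \
  -device virtio-serial-pci \
  -device virtserialport,chardev=port0,name=org.linux-kvm.port.0
```

A host-side client such as socat can then connect to /tmp/port0 to exchange data with an agent in the guest.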
The ports are assigned some properties:

  • cache_buffers: default 1. This property specifies whether ports retain data received when the char device is closed on the guest or when the qemu chardev connection is closed on the host.
  • byte_limit, guest_byte_limit: These properties specify the max. number of bytes that can be cached before no more data is pushed out. This is to prevent OOM conditions from happening inside the host or the guest.
  • name: This property puts the string in the guest's sysfs entries for it to be found by udev rules to create symlinks to the actual device. This is useful for guest applications discovering the ports of their interest. For example,
$ cat /sys/class/virtio-console/vcon1/name
org.linux-kvm.port.0
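Building on the sysfs name attribute, a guest-side script can enumerate all ports and their names. This is a sketch, not part of the shipped tools; the sysfs base path matches the example above and is parameterized so the function can be tried against any directory tree:

```shell
# list_ports: print each vcon device together with its sysfs name attribute.
# Defaults to the virtio-console class directory seen inside the guest.
list_ports() {
    base="${1:-/sys/class/virtio-console}"
    for d in "$base"/vcon*; do
        # skip non-matches and ports without a name attribute
        [ -e "$d/name" ] || continue
        printf '%s -> %s\n' "${d##*/}" "$(cat "$d/name")"
    done
}

# Inside a guest with named ports, this might print lines such as:
#   vcon1 -> org.linux-kvm.port.0
list_ports
```

An application looking for a specific channel can grep this output for the name it registered on the host command line.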

A simple udev rule can create a symlink to the actual device from this name: for example, create a new file in /etc/udev/rules.d, such as 90-virtio-console.rules, and put the following rule in it:

KERNEL=="vcon*", SYMLINK+="virtio-console/$attr{name}"

In the above example, a symlink /dev/virtio-console/org.linux-kvm.port.0 will be created, pointing to the actual device /dev/vcon1.
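With the symlink in place, a minimal guest agent can consume events from the port. The event names below come from the requirements list above; treating the protocol as newline-delimited text is an assumption made here for illustration:

```shell
# handle_events: react to newline-delimited events read from stdin.
# (Line-oriented framing is an assumption of this sketch.)
handle_events() {
    while IFS= read -r event; do
        case "$event" in
            shutdown|reboot|logoff) echo "received event: $event" ;;
            *)                      echo "ignoring: $event" ;;
        esac
    done
}

# In the guest this would be fed from the udev-created symlink:
#   handle_events < /dev/virtio-console/org.linux-kvm.port.0
# Demo with canned input:
handle_events <<EOF
shutdown
ping
EOF
```

Because the port is just a character device, an unprivileged reader like this satisfies the "no configuration in the guest" goal once the udev rule is installed.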

To create console ports, use:

-device virtio-serial-pci \
-device virtconsole,chardev=...,name=console.0

You can then spawn a tty on this console port from the guest:

# agetty /dev/hvc0 9600 vt100

Multiple console ports can be specified on the command line and they will be available as /dev/hvc0, /dev/hvc1...


A repository containing test code is put up at

This repo contains manual and automated test suites that can be run to verify the functionality. The test-virtserial.c file also documents the various options and the ways in which they can be tested.


A few items from my TODO list. Please drop me a note at <> if you're planning to pick something from here so that there's no duplication of work. Thanks!

  • Guest kernel driver
    • Check the device removal path, especially for virtio-console (hvc) ports
  • Host device
    • Convert to VMState
    • Generic chardev infrastructure: during migration, resend any data that the other endpoint did not receive