- We want an interface between the guest and the host
- The channel is to be used for simple communication, like sharing of the clipboard between the user desktop and the guest desktop
- For relatively low rates of data transfer -- a few MB/s
- Events to be delivered to the guest, like 'shutdown', 'reboot', 'logoff'
- Queries to the guest, like 'which users are logged in'
- Survive live migration
- Support for multiple agents (consumers of the data) on the guest
- Multiple channels could be opened at the same time
- In the multi-channel case, one blocked channel shouldn't block communication on the others (and one channel shouldn't hog all the bandwidth)
- Stable ABI (for future upgrades)
- Channel addressing
- An agent in the guest should be able to find the channel it's interested in
- Dynamic channel creation
- No threats to the host
- Unprivileged user should be able to use the channel
- Should work out of the box, without any configuration necessary on the part of the guest
- Notifications of channels being added / removed (hotplugging)
- Notifications on connection / disconnection on guest as well as host side
- An API inside qemu to communicate with agents in the guest
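As a purely hypothetical illustration of the event and query requirements above, a guest agent could speak a simple newline-delimited text protocol over the channel. Nothing below is part of the actual design; virtio-serial itself only provides a byte stream, and the message names and framing here are invented for the example:

```python
def handle_message(line, handlers):
    """Dispatch one newline-delimited message from the host.

    Messages are either events ("shutdown", "reboot", "logoff") or
    queries ("query logged-in-users").  This framing is purely
    illustrative -- the channel itself carries only raw bytes.
    """
    parts = line.strip().split(" ", 1)
    kind = parts[0]
    handler = handlers.get(kind)
    if handler is None:
        return "error unknown-message"
    arg = parts[1] if len(parts) > 1 else None
    return handler(arg)

# Example handler table for a hypothetical guest agent:
handlers = {
    "shutdown": lambda arg: "ok",
    "query": lambda arg: "users alice,bob"
             if arg == "logged-in-users" else "error",
}
```

A real agent would read lines from the port device in a loop and write each reply back on the same stream.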
A few reasons why the obvious solutions do not work:
- via the fully emulated serial device.
- performance (exit per byte)
- scalability - only 4 serial ports per guest
- accessed by root only in the guest
- via TCP/IP network sockets
- The guest may not have networking enabled
- The guest firewall may block access to the host IPs
- Windows can't bind sockets to specific ethernet interfaces
- via slirp
This implementation does exist upstream as "-net channel" http://www.nabble.com/-PATCH--specify-vmchannel-as-a-net-option-td21911523.html
- Again, based on networking so same drawbacks mentioned above apply
- Currently used by libguestfs
- Guest - Host clipboard copy/paste operations
- By a VMM or via an internal API within qemu
- libguestfs (offline usage)
- For poking inside a guest to fetch the list of installed apps, etc.
- Online usage
- Locking the desktop session when the VNC session is closed
- Cluster I/O Fencing aka STONITH
- Current models require networking between guest/host
- Using VMChannel-Serial, the guest-host networking requirement can be avoided
- Key distribution of any sort can be avoided, making this easier to configure than existing solutions
The virtio-console code is extended to spawn multiple serial ports inside the guest. Creating the serial ports on the host looks like this:
-device virtio-serial \
-device virtserialport,chardev=...,name=org.linux-kvm.port.0 \
-device virtserialport,chardev=...,name=org.linux-kvm.port.1
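The chardev backends are elided above; a complete invocation might look like the following, using unix-socket chardevs. The ids and socket paths here are examples only, not part of the documented interface:

```
qemu-kvm ... \
    -chardev socket,path=/tmp/port0.sock,server,nowait,id=channel0 \
    -chardev socket,path=/tmp/port1.sock,server,nowait,id=channel1 \
    -device virtio-serial \
    -device virtserialport,chardev=channel0,name=org.linux-kvm.port.0 \
    -device virtserialport,chardev=channel1,name=org.linux-kvm.port.1
```

A host application can then connect to /tmp/port0.sock to talk to whatever guest agent has opened the corresponding port.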
This creates two ports. The guest sees them as /dev/vport0p1 and /dev/vport0p2.
The ports are assigned some properties:
- name: This property puts the string in the guest's sysfs entries for it to be found by udev rules to create symlinks to the actual device. This is useful for guest applications discovering the ports of their interest. For example,
$ cat /sys/class/virtio-ports/vport0p1/name
org.linux-kvm.port.0
$
A simple udev rule can create a symlink to the actual device from this name. The rule for creating these symlinks has been included in udev-151. If you have an older version of udev, you can add it locally by creating a new rules file, e.g. /etc/udev/rules.d/90-virtio-ports.rules, with the following contents:
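A rule along these lines does the job (this is approximately what newer udev releases ship; check your udev version for the exact form):

```
KERNEL=="vport*", ATTRS{name}=="?*", SYMLINK+="virtio-ports/$attrs{name}"
```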
In the above example, a symlink, /dev/virtio-ports/org.linux-kvm.port.0, will be created that points to the actual device, /dev/vport0p1.
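An agent that can't rely on the udev symlink can do the same lookup itself by scanning the sysfs class directory. A minimal sketch (the function name and its sysfs_root parameter are illustrative, not an existing API):

```python
import os

SYSFS_ROOT = "/sys/class/virtio-ports"  # guest sysfs location for the ports

def find_port(name, sysfs_root=SYSFS_ROOT):
    """Return the /dev node for the virtio-serial port whose 'name'
    attribute matches, or None if no such port exists.  This scans the
    sysfs class directory the same way the udev rule does."""
    try:
        entries = os.listdir(sysfs_root)
    except FileNotFoundError:
        return None
    for entry in entries:  # e.g. vport0p1, vport0p2, ...
        name_file = os.path.join(sysfs_root, entry, "name")
        try:
            with open(name_file) as f:
                if f.read().strip() == name:
                    return "/dev/" + entry
        except OSError:
            continue
    return None
```

The agent can then open the returned device node for ordinary read/write I/O, which works for unprivileged users provided the device permissions allow it.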
To create console ports, use:
-device virtio-serial \
-device virtconsole,chardev=...,name=console.0
You can then spawn a tty on this console port from the guest:
# agetty /dev/hvc0 9600 vt100
Multiple console ports can be specified on the command line, and they will be available as /dev/hvc0, /dev/hvc1, and so on.
The code for the qemu device has been merged in qemu upstream sources on 20 Jan 2010, starting from commit 98b19252cf1bd97c54bc4613f3537c5ec0aae263. The feature will be available in the next major QEMU release after 0.12.
The corresponding kernel driver will be available in the Linux kernel 2.6.34 release.
A repository containing test code is put up at
This repo contains manual as well as automated testsuites that can be run to verify the functionality. The test-virtserial.c file also documents the various options and the ways in which they can be tested.
See also the Fedora 13 Feature Page for more information: https://fedoraproject.org/wiki/Features/VirtioSerial#How_To_Test
A few items from my TODO list. Please drop me a note at <firstname.lastname@example.org> if you're planning to pick something from here so that there's no duplication of work. Thanks!
- Guest kernel driver
- Allow module removal
- Host device
- Convert to VMState