= Nested Guests =


Nested guests are KVM guests run in a KVM guest. As of Feb 2018 this feature is considered working but experimental, and some limitations apply.


When describing nested guests, we will use the following terminology:


* "L0" – the bare metal host, running KVM
* "L1" – a VM running on L0; also called the "guest hypervisor", as it ''itself'' is capable of running KVM
* "L2" – a VM running on L1; also called the "nested guest"




= Detailed =
== Why use it? ==

An additional layer of virtualization sometimes comes in handy. You might have access to a large virtual machine in a cloud environment that you want to compartmentalize into multiple workloads. You might be running a lab environment in a training session.

== How to run ==
The KVM kernel modules do not enable nesting by default (though your distribution may override this default). To enable nesting, set the <code>nested</code> module parameter to <code>Y</code> or <code>1</code>. You may set this parameter persistently in a file in <code>/etc/modprobe.d</code> in the L0 host, for example:


 # If you have an Intel CPU, use this:
 $ cat /etc/modprobe.d/kvm_intel.conf
 options kvm-intel nested=Y
 
 # If you have an AMD CPU, then this:
 $ cat /etc/modprobe.d/kvm_amd.conf
 options kvm-amd nested=1
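To confirm or toggle the setting on a running L0 host without rebooting, the module parameter is also exposed through sysfs. A minimal sketch for the Intel module follows; substitute <code>kvm_amd</code> on AMD hardware:

```shell
# Check whether nesting is currently enabled:
cat /sys/module/kvm_intel/parameters/nested   # "Y" (or "1") means enabled

# Apply a changed modprobe.d setting without a reboot by reloading the
# module; this only works while no guest is currently using KVM:
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel
```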


Once your L0 host is capable of nesting, you should be able to start an L1 guest with the <code>-cpu host</code> option (or for better live migration compatibility, use a named CPU model supported by QEMU, such as: <code>-cpu Haswell-noTSX-IBRS,vmx=on</code>) and the guest will subsequently be capable of running an L2 guest with accelerated KVM.
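As an illustration, a minimal QEMU invocation for an L1 guest might look like the following; the disk image name and memory size are placeholders, and only the <code>-enable-kvm</code> and <code>-cpu</code> options matter here:

```shell
# Hypothetical L1 guest; l1-guest.qcow2 is a placeholder disk image.
qemu-system-x86_64 \
    -enable-kvm \
    -cpu host \
    -m 4096 \
    -drive file=l1-guest.qcow2,if=virtio

# Inside the booted L1 guest, verify that KVM acceleration is available:
grep -c -E '(vmx|svm)' /proc/cpuinfo   # non-zero: the virt extension is exposed
ls /dev/kvm                            # the KVM device node should exist
```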




== Limitations ==


Once an L1 guest has started an L2 guest, it is no longer capable of being migrated, saved, or loaded (see [[Migration]] for details on these actions) until the L2 guest shuts down. This is currently an inherent limitation (that is being worked on, as of Feb 2018) of the KVM implementation on all architectures except s390x.


Attempting to migrate or save & load an L1 guest while an L2 guest is running will result in undefined behavior. You might see a <code>kernel BUG!</code> entry in <code>dmesg</code>, a kernel oops, or an outright kernel panic. In any case, an L1 guest migrated or loaded in this way can no longer be considered stable or secure, and must be restarted.
Migrating an L1 guest merely configured to support nesting, while not actually running L2 guests, is expected to function normally. Live-migrating an L2 guest from one L1 guest to another is also expected to succeed.

== Links ==
* [https://www.redhat.com/archives/libvirt-users/2018-February/msg00014.html Discussion on limitations of nested guests] (<code>libvirt-users</code> mailing list, February 2018)
* [https://bugzilla.redhat.com/show_bug.cgi?id=1076294 Red Hat Bugzilla Bug 1076294]
* [https://bugzilla.kernel.org/show_bug.cgi?id=198621 kernel.org Bugzilla Bug 198621]
* [https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/kvm/nested-vmx.txt Intel nVMX upstream specification]
[[Category:Docs]]
