Nested Virtualization
Beware: this wiki article is a draft and may not be accurate or complete.
What is
Nested virtualization is when you run a hypervisor, like PVE or others, inside a virtual machine (which is itself running on another hypervisor) instead of on real hardware. In other words, you have a host hypervisor hosting a guest hypervisor (as a VM), which can in turn host its own VMs.
This obviously adds overhead to the nested environment, but it can be useful in some cases:
- it lets you test (or learn) how to manage hypervisors before an actual deployment, or rehearse a dangerous/tricky procedure involving hypervisors before doing it on the real thing.
- it could enable businesses to deploy their own virtualization environment on public services (cloud). See also http://www.ibm.com/developerworks/cloud/library/cl-nestedvirtualization/
Requirements
In order to get the fastest possible performance, close to native, any hypervisor should have access to some (real) hardware features that are generally useful for virtualization, the so-called hardware-assisted virtualization extensions (see http://en.wikipedia.org/wiki/Hardware-assisted_virtualization).
In nested virtualization, the guest hypervisor should also have access to hardware-assisted virtualization extensions, which implies that the host hypervisor has to expose those extensions to its virtual machines.
You will also need to allocate plenty of CPU, RAM and disk to those guest hypervisors.
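To check whether your physical host CPU offers such extensions at all, you can look for the vmx (Intel) or svm (AMD) flags in /proc/cpuinfo; this is just a quick sanity check, the flag shown depends on the CPU vendor:
root@proxmox:~# egrep '(vmx|svm)' --color=always /proc/cpuinfo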
Nested PVE
PVE can:
- host a nested (guest) hypervisor, but it does not expose hardware-assisted virtualization extensions to its VMs by default, so you cannot expect optimal performance for virtual machines in the guest hypervisor unless you configure the host as described below.
- be hosted as a nested (guest) hypervisor. If the host hypervisor can expose hardware-assisted virtualization extensions to PVE, PVE can use them and provide better performance to its guests; otherwise, as in the default PVE-inside-PVE case, any VM (kvm) will only work after you turn off KVM hardware virtualization (see VM options, or the CLI sketch below).
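If you need to turn KVM hardware virtualization off for a particular guest, you can do it from the web GUI (VM options) or with qm on the host. A minimal sketch, assuming the nested guest has VMID 101 (a made-up placeholder):
# qm set 101 -kvm 0
This writes kvm: 0 into the VM configuration; set it back to 1 once hardware-assisted virtualization is available to the guest.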
PVE hosts a (guest) hypervisor with hardware assisted support
To have hardware-assisted virtualization, you have to:
In the host (the one installed on hardware)
- use an AMD CPU or a very recent Intel one
- use kernel >= 3.10
- enable nested support
To check whether it is enabled, run the following ("kvm_amd" for AMD CPUs, "kvm_intel" for Intel):
root@proxmox:~# cat /sys/module/kvm_amd/parameters/nested
0
0 means it is disabled. To enable it ("kvm-amd" for AMD, "kvm-intel" for Intel):
# echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf
then reboot and check again:
root@proxmox:~# cat /sys/module/kvm_amd/parameters/nested
1
(pay attention to where the dash "-" is used and where it is the underscore "_" instead)
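On an Intel host the equivalent steps would look like the following sketch; reloading the module is an alternative to rebooting, but only works while no VM is running, and depending on the kernel version the value may be reported as Y/N instead of 1/0:
# echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel.conf
# modprobe -r kvm_intel
# modprobe kvm_intel
# cat /sys/module/kvm_intel/parameters/nested
Y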
Then create a guest
- as CPU type use "host"
- don't use virtio as driver type (note: verify if still true with new qemu >= 2.2!)
- in <VMID>.conf, add by hand (an example config snippet is shown after this list)
args: -enable-kvm
(N.B. in older qemu versions it was args: -enable-nesting)
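As an illustration only, the relevant lines of /etc/pve/qemu-server/<VMID>.conf for such a guest could look like the snippet below; the cores, memory and MAC address values are made-up placeholders, not recommendations:
cores: 4
memory: 4096
cpu: host
args: -enable-kvm
net0: e1000=DE:AD:BE:EF:12:34,bridge=vmbr0
Note the e1000 network card instead of virtio, as mentioned in the list above.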
Once the guest OS is installed, if it is GNU/Linux you can log in and verify that hardware virtualization support is enabled by running
root@guest1# egrep '(vmx|svm)' --color=always /proc/cpuinfo
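As a further optional check, once a KVM-capable hypervisor is installed inside the guest you can verify that the kvm modules are loaded and that the /dev/kvm device exists; the commands below are a generic sketch:
root@guest1# lsmod | grep kvm
root@guest1# ls -l /dev/kvm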
Set up a cluster of self-nested PVE
In the host Proxmox you create 2 VMs, and in each one you install a new instance of Proxmox, so you can experiment with cluster concepts without needing multiple physical servers.
- log into (web gui) your host pve (running on real hardware)
=> PVE
- create two or more vm guests (kvm) in your host PVE, each with enough ram/disk
=> PVE => VM
- install PVE from iso on each guest vm (same network)
=> PVE => VM1 (guest PVE) => PVE => VM2 (guest PVE) ...
- log into (ssh/console) the first guest vm & create cluster CLUSTERNAME (the full command sequence is recapped after this list)
=> PVE => VM1 (guest PVE) => #pvecm create CLUSTERNAME
- log into each other guest vm & join cluster <CLUSTERNAME>
=> PVE => VM2 (guest PVE) => #pvecm add <IP address of VM1>
- log into (web gui) any guest vm (guest pve) and manage the new (guest) cluster
=> PVE => VM1/2 (guest PVE) => #pvecm n
- create vm or ct inside the guest pve (nodes of CLUSTERNAME)
- don't use virtio network for those guests (kvm): it won't work.
- you have to turn off KVM hardware virtualization (see vm options)
- install only CLI-based, small CTs or VMs for those guests (do not try anything with a GUI, don't even think of running Windows...)
=> PVE => VM1/2 (guest PVE) => VM/CT
- install something on (eg) a vm (eg: a basic ubuntu server) from iso
=> PVE => VM2 (guest PVE) => VM (basic ubuntu server)
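To recap the cluster commands used in the steps above in one place, here is a sketch; the IP address 192.168.1.101 for VM1 is a made-up placeholder, and pvecm status / pvecm nodes are simply the usual ways to verify the result:
on VM1 (first guest PVE):
# pvecm create CLUSTERNAME
on VM2 (and any further guest PVE):
# pvecm add 192.168.1.101
on any node, to verify:
# pvecm status
# pvecm nodes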
VM/CT performance without hardware-assisted virtualization extensions
If you can't set up hardware-assisted virtualization extensions for the guest, performance is far from optimal! Use it only to practice or test!
- ct (openvz) will be faster, of course, and quite usable
- vm (kvm) will be really slow, nearly unusable (you can expect 10x slower or more), since (as said above) they are running without KVM hardware virtualization
but at least you can try or test "guest pve" features or setups:
- you could create a small test cluster to practice with cluster concepts and operations
- you could test a new pve version before upgrading
- you could test setups conflicting with your production setup