Migration of servers to Proxmox VE
= Introduction =

There are various ways to migrate existing servers to Proxmox VE. They can be divided into two categories:

; Physical-to-Virtual (P2V): Migration of physical servers to Proxmox VE
; Virtual-to-Virtual (V2V): Migration of virtual machines and containers from other hypervisors to Proxmox VE
  
= Physical-to-Virtual (P2V) =

Follow these steps to do a P2V migration and turn a physical machine into a Proxmox VE virtual machine that uses Qemu and KVM.
  
== Clonezilla Live CDs ==

This method is fast, reliable and OS independent as it uses live CDs.

* Get a live Linux CD like [http://clonezilla.org/ Clonezilla].
* Prepare the source host: make sure that standard IDE drivers are enabled (for WinXP/Win2003 use the mergeide.reg ([[File:Mergeide.zip]])); for W2k see this link (solution 2 worked for me): [http://www.motherboard.windowsreinstall.com/problems.htm]
* Boot the physical host with Clonezilla, go for beginner mode, select device - device and then remote, and follow the wizard.
* On the Proxmox VE host, prepare a KVM guest. Make sure it has a big enough IDE disk assigned (add 1 GB extra to be on the safe side), then boot this KVM guest with the live CD as well and execute a shell.
* Become root and run fdisk (fdisk /dev/sda) to make sure that sda is there, then exit fdisk with 'w'. This step was essential.
* Enter all commands that the wizard on the source server tells you (configure the network and request the copy process).
* After success, change the boot device to the hard disk and start the VM. The wizard only copies the data, so an 80 GB disk holding only 5 GB of data takes just a few minutes on a gigabit network. Windows will install all needed drivers automatically; only the Intel e1000 NIC drivers must be loaded from an ISO (Intel provides one big driver ISO containing all NIC drivers).
  
== VMware Converter ==

Physical (running) Windows server to Proxmox VE (KVM) using VMware vCenter Converter Standalone Client (V5)

Tested on an HP ProLiant ML350 G5 and G6.

=== Prepare Windows ===

'''VMware vCenter Converter Standalone Client'''
*Download [https://my.vmware.com/web/vmware/info/slug/infrastructure_operations_management/vmware_vcenter_converter_standalone/5_0 here] (version 5.x is Free Software and also free of charge, but you need to create an account and log in before downloading)

'''Mergeide.reg'''
*Execute mergeide.reg ([[File:Mergeide.zip]])

=== Prepare location to save local image ===

This guide uses an external USB hard drive. You may also save to a mapped network share.

'''NOTE:''' Although the final image will be around the same size as the actual amount of data on the server, the Proxmox VE server should have enough free space to fit the total physical disk of the server, unless you plan to [http://pve.proxmox.com/wiki/Shrink_Qcow2_Disk_Files shrink the windows disks] once migrated to Proxmox VE.
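The free-space requirement from the note above can be checked before copying anything. A small sketch, assuming the default ''local'' directory storage under /var/lib/vz and an 80 GB source disk (both values are examples, adjust them to your setup):

```shell
#!/bin/sh
# Verify the target storage can hold the full physical disk image.
# STORE and DISK_BYTES are assumptions -- adjust to your environment.
STORE=${STORE:-/var/lib/vz}              # default "local" directory storage
DISK_BYTES=$((80 * 1024 * 1024 * 1024))  # total size of the source disk

AVAIL=$(df --output=avail -B1 "$STORE" | tail -n 1)
if [ "$AVAIL" -ge "$DISK_BYTES" ]; then
    echo "enough free space for the import"
else
    echo "not enough free space, missing $((DISK_BYTES - AVAIL)) bytes"
fi
```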
  
=== VMware vCenter Settings ===

Launch VMware vCenter and use the following settings:
*'''Source type:''' Powered-on machine
*'''Specify the powered-on machine:''' This local machine
*'''Select destination type:''' VMware Workstation or other VMware virtual machine
*'''Select VMware Product:''' VMware Workstation 8.0.x
*'''Name:''' Enter the desired name for the image
*'''Select a location for the virtual machine:''' Browse to the USB or network drive where the image will be saved.

The next screen shows the settings for the virtual machine.
Click on Advanced options, select the Post-conversion tab and make sure 'Install VMware Tools on the destination virtual machine' is NOT checked. We do not want to install VMware tools.

Click Next and Finish.

It will now convert your physical machine to a .vmdk file.

'''NOTE:''' Depending on your hardware, you may need to boot the .vmdk file using VMware Workstation or Player before moving the file to the Proxmox VE server. This allows Windows to install additional drivers for the disk controller. If prompted to convert the disk to Workstation 9.x compatibility, say Yes. You won't know if you need this step until starting the Windows VM in the final step. If you get a blue screen during boot, you should try this step.
  
==== Prepare the VM on Proxmox VE ====

Create a new KVM virtual machine. You'll want to use similar CPU and memory as the physical system. In the Hard Disk menu, leave everything as default; we won't be using the disk created by Proxmox VE. Finish creating the VM and make note of the VMID. For this guide, we'll be using 100 as an example.

Once the VMware Converter has completed, disable all of the network adapters on the physical server and shut down. Disabling the network adapters will avoid potential IP conflicts if you start the physical server back into Windows after you have your new virtual server running.

==== Move the image to the Proxmox VE Server ====

Plug the USB hard drive into the server.
  
From the Proxmox VE command line:
<pre>
mkdir /mnt/usb
mount /dev/sdc1 /mnt/usb/
ls /mnt/usb
</pre>

You should see the contents of the USB drive. In my case, the vmdk file was located in /mnt/usb/windows-server/
  
==== Converting to qcow2 ====

We want to convert the .vmdk image file to qcow2. To do this, use the following command:
<pre>
qemu-img convert -f vmdk /mnt/usb/windows-server/windows-server.vmdk -O qcow2 /var/lib/vz/images/100/windows-server.qcow2
</pre>
This can take a while depending on the size of the file and the speed of your system.
  
==== Final Steps ====

Once the conversion is complete, we need to edit the configuration file for the VM.
<pre>
nano /etc/pve/local/qemu-server/100.conf
</pre>
In the line starting with ide0: change vm-100-disk-1.raw,size=32G to windows-server.qcow2
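The same one-line change can be scripted with sed instead of editing in nano; a sketch, assuming the disk line created by the wizard looks exactly like the text above:

```shell
#!/bin/sh
# Point ide0 at the converted image instead of the empty wizard disk.
# Assumes the config contains: ide0: local:100/vm-100-disk-1.raw,size=32G
CONF=/etc/pve/local/qemu-server/100.conf
sed -i 's|vm-100-disk-1\.raw,size=32G|windows-server.qcow2|' "$CONF"
grep '^ide0:' "$CONF"   # show the edited line to verify the change
```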
  
You may delete the empty disk created by Proxmox VE when you created the VM.
<pre>
rm /var/lib/vz/images/100/vm-100-disk-1.raw
</pre>

Start the VM and open the console. Windows should boot up normally. It'll take a few minutes to detect the hardware changes. If the Windows system had a static IP address, you'll have to reconfigure the settings.
  
==== Alternative Methods ====

* [https://github.com/QuinnyPig/OpenVZ-Template-Creators OpenVZ Template Creators] to create non-Debian templates (we have [[Debian Appliance Builder|DAB]] for Debian).

= Virtual-to-Virtual (V2V) =

Follow these steps to do a V2V migration and move a virtual machine from another hypervisor to a Proxmox VE virtual machine that uses Qemu and KVM.
  
== VMware ==

This explains the migration from a VMware ESXi 6.7 hypervisor to Proxmox VE 6.1. It is tested with guests with the following operating systems:
* Debian 10
* Windows 10 Pro
* Windows Server 2016 Standard

=== Exporting ===

==== GUI ====

In VMware ESXi navigate to your virtual machine in the tree on the left.
Choose Actions -> Export.
This will download disk images and information about your virtual machine in the .ovf format in the browser.
Move those files to a storage that is accessible by your Proxmox VE host.
  
==== CLI ====

Install VMware's [https://my.vmware.com/group/vmware/get-download?downloadGroup=OVFTOOL440 ovftool] on your Proxmox VE host.
ovftool version 4.4 has been reported to work with the following versions of ESXi: 6.5 and 6.7. Others (for example, [https://forum.proxmox.com/threads/unable-to-migrate-vm-from-esxi-6-0-0-to-proxmox-6-3-2.82301/post-362781 6.0]) might crash with an unhelpful error message.

Remove any attached disk or ISO from your ESXi VM and run
 ovftool vi://root@<ip-of-esxi>/<name-of-a-virtual-machine> .
to export a virtual machine from ESXi directly into your current directory.
  
You can replace the dot with any other path, for example "/mnt/pve/<some-storage>".
This way you can export directly to a storage that you created in Proxmox VE.

{{note | You have to extract .ova archives}}

With some configurations you get a single (.ova) archive file that contains all .vmdk disk images and the small XML description .ovf file. If this is the case, then you have to extract it before you continue with <code>qm importovf</code>.
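An .ova file is a plain tar archive, so the standard tar tool is enough to unpack it (the archive name below is an example):

```shell
#!/bin/sh
# Unpack an .ova archive; it contains the .ovf description plus the
# .vmdk disk image(s) that qm importovf needs.
tar -xvf exported-vm.ova
ls *.ovf *.vmdk
```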
=== Importing ===

Go to the command line interface of Proxmox VE.
Use the command <code>qm importovf <new-vmid> </path/to/file.ovf> <storage></code> to import the virtual machine. For example:
 qm importovf 200 /tmp/exported-vm.ovf local-lvm
This will create a new virtual machine, using cores, memory and VM name as read from the OVF manifest, and import the disks. You have to configure the network manually.
You can find syntax and an example on how to use this command on its [https://pve.proxmox.com/pve-docs/qm.1.html man page].
 
  
{{note | Windows guests require a few additional steps}}

If you are migrating a Windows guest, you additionally have to execute the following commands. This example assumes that your imported virtual machine has the ID 130.

* Use UEFI instead of BIOS for the virtual machine:
 qm set 130 --bios ovmf
* Windows requires additional drivers to use SCSI. Therefore, you should temporarily use SATA instead. In the following command, 'scsi0' should be replaced with the drive that's used to boot Windows.
 sed -i 's/scsi0:/sata0:/' /etc/pve/qemu-server/130.conf

This will give you an initial working version. You can then improve your experience and access remaining SCSI drives by installing additional drivers as explained in [[Windows 10 guest best practices]].
=== Server self-migration ===

It is also possible to migrate without the need to export each VM separately, including virtual disks. This way, you can convert a server from vSphere to Proxmox VE without the need of a second server. For this process your vSphere should use VMFS6 and you need at least one empty HDD.

1. Export the VM information without the disks using ovftool (you still need to configure the network for each VM).
 ovftool -NoDisks vi://root@<ip-of-esxi>/<name-of-a-virtual-machine> .
2. Install Proxmox VE on some disk that does not contain any important data. You don't need vSphere anymore at this point. If you have an OS disk with only vSphere on it, then you can now overwrite it with Proxmox VE.
{{warning | Do not use/touch any other existing drives which are VMFS formatted }}
3. Create a directory on the above-mentioned spare HDD.
4. Install vmfs6-tools, which you need to mount (read-only) the VMFS-formatted drives with the vSphere virtual disks.
 apt install vmfs6-tools -y
5. List all available drives to identify the VMFS-formatted ones.
 fdisk -l
6. Mount the VMFS disk partition (note it is read-only).
 vmfs6-fuse /dev/<device><partition> <mount-point>
7. Convert the vSphere disk to a suitable format for Proxmox VE.
 qemu-img convert -f vmdk <path to your vmdk file>.vmdk -O <raw/qcow2> <path to your empty directory>.raw/qcow2 -p
8. While the conversion is in progress you may create the first VM from the ovf.
 qm importovf 200 /tmp/exported-vm.ovf
9. As soon as the conversion is finished you may attach the new Proxmox VE disk image to the VM.
 qm rescan
If all VM images have been moved away from a VMFS6 disk, you can format it and use it in Proxmox VE.
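If the datastore holds several guests, step 7 has to be repeated per disk. A sketch that only prints the qemu-img command for each descriptor .vmdk it finds; the paths are assumptions, and the leading echo has to be removed to actually run the conversions:

```shell
#!/bin/sh
# Print one conversion command per virtual disk found on the mounted
# (read-only) VMFS datastore. SRC and DST are example paths.
SRC=${SRC:-/mnt/vmfs}
DST=${DST:-/mnt/spare}
# ESXi stores the raw extents as *-flat.vmdk; qemu-img must be pointed
# at the small descriptor .vmdk instead, so the flat files are skipped.
find "$SRC" -name '*.vmdk' ! -name '*-flat.vmdk' | while read -r vmdk; do
    name=$(basename "$vmdk" .vmdk)
    echo qemu-img convert -f vmdk "$vmdk" -O qcow2 "$DST/$name.qcow2" -p
done
```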
  
== HyperV ==

This explains the migration from a Hyper-V on Windows 10 hypervisor to Proxmox VE 6.1. It is tested with a Proxmox VE 6.1 guest.

Proxmox VE uses [https://manpages.debian.org/testing/qemu-utils/qemu-img.1.en.html qemu-img] to import disks. In my tests it supported .vhdx even though [https://linux.die.net/man/1/qemu-img some sources] don't explicitly list .vhdx as a supported format. There are additional options if this should not work anymore:
* [https://docs.microsoft.com/en-us/powershell/module/hyper-v/convert-vhd?view=win10-ps Convert-VHD] is a command line tool that is built into Windows and converts .vhdx to .vhd. This format is supported by qemu-img.
* [https://www.starwindsoftware.com/starwind-v2v-converter StarWind V2V Converter] is a graphical tool that can create .qcow2 disks from Hyper-V guests.
 
In Hyper-V-Manager choose "Export..." at the right side and then a destination. While many files (for example for snapshots) will be exported, we are only interested in the .vhdx file(s). Move those files to a storage that is accessible by your Proxmox VE host. You will have to recreate the virtual machine settings yourself.

Go to the GUI of Proxmox VE and create a new virtual machine. We don't need the hard disk that the virtual machine creation wizard created; delete it in the hardware options of the virtual machine.

Now go to the command line interface of Proxmox VE. Use the command <code>qm importdisk</code> to import the .vhdx virtual hard disk. You can find syntax and an example on how to use this command on its [https://pve.proxmox.com/pve-docs/qm.1.html man page].
  
== XEN ==

XEN also uses the qemu disk format, so it should work in the same manner as described under "VMware" above.

You can use [[xenmigrate]] to do it.
  
== FreeNAS ==

Those are the necessary steps to migrate an Ubuntu Bionic VM from FreeNAS 11.2 to Proxmox VE 6.2-1.

The VM in FreeNAS was created with the following parameters:
* Boot Loader Type: UEFI
* Guest OS: Ubuntu Bionic
* Disk
** Disk Mode: AHCI
** Zvol: test/ubuntu-1xmtpt

Check the name of your zvol by going to Virtual Machines → Options of the VM ⋮ → Devices → Options of your disk ⋮ → Edit → Zvol
  
=== Preparation in FreeNAS ===

# Create a shared directory in Sharing → Unix (NFS) Shares with path <code>/mnt/test</code>.
# Enable SSH in Services & edit the SSH service (Actions) to allow password login for root.
# Copy the zvol to the shared directory:
## Log in to FreeNAS via SSH <pre>ssh root@ip.of.your.freenas</pre>
## Copy the zvol to the shared directory <pre>dd if=/dev/zvol/test/ubuntu-1xmtpt of=/mnt/test/ubuntu.raw bs=1m</pre>
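Optionally, before importing you can check that the dd copy is intact; a minimal sketch, run on FreeNAS and assuming the zvol and file names from the steps above:

```shell
#!/bin/sh
# Byte-compare the source zvol with the copied image. cmp is silent
# and returns status 0 when both are identical.
cmp /dev/zvol/test/ubuntu-1xmtpt /mnt/test/ubuntu.raw && echo "copy is identical"
```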
 
  
=== Importing to Proxmox VE ===

# Create a virtual machine (here vmid is 103) in Proxmox VE. Make sure to set BIOS to OVMF (this is UEFI).
# Delete the disk that was created in step 1.
# Create a directory <code>mkdir /home/user/freenas</code>
# Mount the shared directory from FreeNAS <pre>sudo mount -t nfs 192.168.31.241:/mnt/test /home/user/freenas</pre>
# Import the image of the FreeNAS VM to the Proxmox VE VM as unused disk (vmid 103, storage local) <pre>qm importdisk 103 /home/user/freenas/ubuntu.raw local --format qcow2</pre>
# In the GUI of Proxmox VE:
## Go to the hardware view of your new virtual machine
## Set the display to SPICE
## Double click on the unused disk to attach it and choose VirtIO as bus
## Go to the options view of your new virtual machine
## Choose your new VirtIO disk as bootdisk
== Qemu/KVM ==

Create a new VM on Proxmox VE and add the existing disk image to this new VM, set the boot order and start.
'''Minimal example'''

First a VM has to be created. 120 is an unused VM ID.
<pre>qm create 120 --bootdisk scsi0</pre>
someImage.img is an image that was created before. someStorage is the name of a storage as listed in ''pvesm status''.
<pre>qm importdisk 120 someImage.img someStorage</pre>
''qm importdisk'' adds the image as an unused disk to the virtual machine. Thus, making it the bootdisk is still necessary.
<pre>qm set 120 --scsi0 someStorage:vm-120-disk-0</pre>
  
= Further information =

If your use case is not covered by this article you should check out the [[Additional ways to migrate to Proxmox VE|additional ways to migrate to Proxmox VE]] in the wiki.
It gathers years of knowledge for cases which are not as common as the ones explained here.

[[Category: HOWTO]]

Latest revision as of 14:54, 2 May 2022

Introduction

There are various ways to migrate existing servers to Proxmox VE. They can be divided into two categories:

Physical-to-Virtual (P2V)
Migration of physical servers to Proxmox VE
Virtual-to-Virtual (V2V)
Migration of virtual machines and containers from other hypervisors to Proxmox VE

Physical-to-Virtual (P2V)

Follow these steps to do a P2V migration and turn a physical machine into a Proxmox VE virtual machine that uses Qemu and KVM.

Clonezilla Live CDs

This method is fast, reliable and OS independent as it uses live CDs.

  • Get a live Linux CD like Clonezilla.
  • Prepare the source host: make sure that you have standard IDE drivers enabled (for WinXP/Win2003 use the mergeide.reg (File:Mergeide.zip)); for W2k this link helped (solution 2 worked): http://www.motherboard.windowsreinstall.com/problems.htm
  • Now boot the physical host with Clonezilla, choose beginner mode, select device - device and then remote, and just follow the wizard.
  • On the Proxmox VE host, prepare a KVM guest. Make sure a big enough IDE disk is assigned (add 1 GB extra to be on the safe side), then boot this KVM guest with the live CD as well and execute a shell.
  • Become root and run fdisk (fdisk /dev/sda) to make sure that sda is present; exit fdisk with 'w'. This step was essential.
  • Now enter all the commands that the wizard on the source server tells you (configure the network and start the copy process).
  • After success, change the boot device to the hard disk and start the VM. The wizard copies only the actual data, so an 80 GB disk holding just 5 GB of data took only a few minutes over a gigabit network. Windows will install all needed drivers automatically; only the Intel e1000 NIC drivers must be loaded from an ISO (Intel provides one big driver ISO containing all its NIC drivers).

VMware Converter

Physical (running) Windows server to Proxmox VE (KVM) using VMware vCenter Converter Standalone Client (V5)

Tested on an HP ProLiant ML350 G5 and G6

Prepare Windows

VMware vCenter Converter Standalone Client

  • Download here (version 5.x is free of charge, but you need to create an account and log in before downloading)

Mergeide.reg

Prepare location to save local image

This guide is using an external USB Hard Drive. You may also save to a mapped network share.

NOTE: Although the final image will be around the same size as the actual amount of data on the server, the Proxmox VE server should have enough free space to fit the total physical disk of the server, unless you plan to shrink the Windows disks once migrated to Proxmox VE.

VMware vCenter Settings

Launch VMware vCenter and use the following settings:

  • Source type: Powered-on machine
  • Specify the powered-on machine: This local machine
  • Select destination type: VMware Workstation or other VMware virtual machine
  • Select VMware Product: VMware Workstation 8.0.x
  • Name: Enter the desired name for the image
  • Select a location for the virtual machine: Browse to the USB or Network drive where the image will be saved.

The next screen shows the settings for the virtual machine.

Click on Advanced options, select the Post-conversion tab and make sure 'Install VMware Tools on the destination virtual machine' is NOT checked. We do not want to install VMware Tools.

Click next and Finish.

It will now convert your physical machine to a .vmdk file.

NOTE: Depending on your hardware, you may need to boot the .vmdk file using VMware Workstation or Player before moving the file to the Proxmox VE server. This allows Windows to install additional drivers for the disk controller. If prompted to convert the disk to Workstation 9.x compatibility, say Yes. You won't know whether you need this step until starting the Windows VM in the final step: if you get a blue screen during boot, try this step.

Prepare the VM on Proxmox VE

Create a new KVM virtual machine. You’ll want to use similar CPU and memory as the physical system. In the Hard Disk menu, leave everything as default. We won’t be using the disk created by Proxmox VE. Finish creating the VM. Make note of the VMID. For this guide, we’ll be using 100 as an example.

Once the VMware Converter has completed, disable all of the network adapters on the physical server and shut it down. Disabling the network adapters avoids potential IP conflicts in case you start the physical server back into Windows after your new virtual server is running.

Move the image to the Proxmox VE Server

Plug a USB Hard Drive into the server

From the Proxmox VE command line:

mkdir /mnt/usb
mount /dev/sdc1 /mnt/usb/
ls /mnt/usb

You should see the contents of the USB drive. In my case, the vmdk file was located in /mnt/usb/windows-server/

Converting to qcow2

We want to convert the .vmdk image file to qcow2. To do this, use the following command:

qemu-img convert -f vmdk /mnt/usb/windows-server/windows-server.vmdk -O qcow2 /var/lib/vz/images/100/windows-server.qcow2

This can take a while depending on the size of file and speed of your system.

Final Steps

Once the conversion is complete, we need to edit the configuration file for the VM.

nano /etc/pve/local/qemu-server/100.conf

In the line starting with ide0: we want to change vm-100-disk-1.raw,size=32G to windows-server.qcow2
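As a hedged illustration (the storage name local and the disk size are assumptions, your values may differ), the edit changes the line like this:

```
# excerpt of /etc/pve/local/qemu-server/100.conf -- before:
ide0: local:100/vm-100-disk-1.raw,size=32G
# after the edit:
ide0: local:100/windows-server.qcow2
```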

You may delete the empty disk created by Proxmox VE when you created the VM.

rm /var/lib/vz/images/100/vm-100-disk-1.raw

Start the VM and open the console. Windows should boot up normally. It’ll take a few minutes to detect the hardware changes. If the Windows system had a static IP address, you'll have to reconfigure the settings.

Alternative Methods

Virtual-to-Virtual (V2V)

Follow these steps to do a V2V migration and move a virtual machine from another hypervisor to a Proxmox VE virtual machine that uses Qemu and KVM.

VMware

This explains the migration from a VMware ESXi 6.7 hypervisor to Proxmox VE 6.1. It was tested with guests running the following operating systems:

  • Debian 10
  • Windows 10 Pro
  • Windows Server 2016 Standard

Exporting

GUI

In VMware ESXi navigate to your virtual machine in the tree on the left. Choose Actions->Export. This will download disk images and information about your virtual machine in the .ovf format in the browser. Move those files to a storage that is accessible by your Proxmox VE host.

CLI

Install VMware's ovftool on your Proxmox VE host. ovftool version 4.4 has been reported to work with the following versions of ESXi: 6.5 and 6.7. Others (for example, 6.0) might crash with an unhelpful error message.

Remove any attached disk or ISO from your ESXi VM and run

ovftool vi://root@<ip-of-esxi>/<name-of-a-virtual-machine> .

to export a virtual machine from ESXi directly into your current directory.


You can replace the dot with any other path, for example "/mnt/pve/<some-storage>". This way you can export directly to a storage that you created in Proxmox VE.

Note: You have to extract .ova archives

With some configurations you get a single (.ova) archive file that contains all .vmdk disk images and the small XML description .ovf file. If this is the case, then you have to extract it before you continue with qm importovf.

Importing

Go to the command line interface of Proxmox VE. Use the command qm importovf <new-vmid> </path/to/file.ovf> <storage> to import the virtual machine. For example:

qm importovf 200 /tmp/exported-vm.ovf local-lvm

This will create a new virtual machine using cores, memory and VM name as read from the OVF manifest, and import the disks. You have to configure the network manually. You can find syntax and an example on how to use this command on its man page.
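The network can be configured after the import, for example with qm set (the virtio model and the bridge name vmbr0 are assumptions, adjust them to your setup):

```
qm set 200 --net0 virtio,bridge=vmbr0
```

This matches the VM ID 200 used in the example above.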

Note: Windows guests require a few additional steps

If you are migrating a Windows guest, you additionally have to execute the following commands. This example assumes that your imported virtual machine has the ID 130.

  • Use UEFI instead of BIOS for the virtual machine:
qm set 130 --bios ovmf
  • Windows requires additional drivers to use SCSI. Therefore, you should temporarily use SATA instead. In the following command, 'scsi0' should be replaced with the drive that's used to boot Windows.
sed -i 's/scsi0:/sata0:/' /etc/pve/qemu-server/130.conf

This will give you an initial working version. You can then improve your experience and access the remaining SCSI drives by installing additional drivers as explained in Windows 10 guest best practices.
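For illustration, the sed command only renames the bus of the boot disk in the VM's configuration file; the storage name local-lvm and the size shown here are assumptions:

```
# excerpt of /etc/pve/qemu-server/130.conf -- before:
scsi0: local-lvm:vm-130-disk-0,size=32G
# after the sed command:
sata0: local-lvm:vm-130-disk-0,size=32G
```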

Server self-migration

It is also possible to migrate without exporting each VM (including its virtual disks) separately.
This way, you can convert a server from vSphere to Proxmox VE without the need for a second server.
For this process your vSphere should use VMFS6 and you need at least one empty HDD.
1. Export the VM information without the disks using ovftool (you will still need to configure the network for each VM afterwards).

ovftool -NoDisks vi://root@<ip-of-esxi>/<name-of-a-virtual-machine> .

2. Install Proxmox VE on some disk that does not contain any important data. You don't need vSphere anymore at this point. If you have an OS disk with only vSphere on it, then you can now overwrite it with Proxmox VE.

Warning: Do not use/touch any other existing drives which are VMFS formatted

3. Create a directory on the above-mentioned spare HDD.
4. Install vmfs6-tools, which you need to mount (read-only) the VMFS-formatted drives containing the vSphere virtual disks.

apt install vmfs6-tools -y

5. List all available drives to identify the VMFS formatted ones

fdisk -l

6. Mount the VMFS disk partition read-only with vmfs6-fuse from the vmfs6-tools package (the mount point /mnt/vmfs is an example, any empty directory works)

vmfs6-fuse /dev/<device><partition> /mnt/vmfs

7. Convert the vSphere disk to a format suitable for Proxmox VE

qemu-img convert -p -f vmdk <path to your vmdk file>.vmdk -O <raw|qcow2> <path to your empty directory>/<disk name>.<raw|qcow2>

8. While the conversion is in progress you may create the first VM from the ovf (replace <storage> with the name of a target storage)

qm importovf 200 /tmp/exported-vm.ovf <storage>

9. As soon as the conversion is finished you can attach the new Proxmox VE disk image to the VM

qm rescan

Once all VM images have been moved off a VMFS6 disk, you can format it and use it with Proxmox VE.

HyperV

This explains the migration from Hyper-V on a Windows 10 hypervisor to Proxmox VE 6.1. It was tested with a Proxmox VE 6.1 guest.

Proxmox VE uses qemu-img to import disks. In my tests it supported .vhdx, even though some sources don't explicitly list .vhdx as a supported format. There are additional options if this should not work anymore:

  • Convert-VHD is a command line tool built into Windows that converts .vhdx to .vhd. This format is supported by qemu-img.
  • StarWind V2V Converter is a graphical tool that can create .qcow2 disks from Hyper-V guests.
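As a side note on how such tools tell these container formats apart: each non-raw format starts with a characteristic signature, which is why qemu-img can usually auto-detect the input format. A minimal Python sketch (the signature table only covers the formats mentioned here; raw images have no signature at all):

```python
# Sketch: guess a disk image's container format from its leading magic bytes.
# These are the well-known signatures of the formats discussed above; a raw
# image carries no signature, so anything unrecognized may simply be raw.
SIGNATURES = {
    b"KDMV": "vmdk",      # sparse VMDK extent header ("VMDK" little-endian)
    b"QFI\xfb": "qcow2",  # qcow2 header magic
    b"vhdxfile": "vhdx",  # VHDX file type identifier
}

def sniff_image_format(path):
    with open(path, "rb") as f:
        head = f.read(8)  # longest signature above is 8 bytes
    for magic, fmt in SIGNATURES.items():
        if head.startswith(magic):
            return fmt
    return "raw?"  # no known signature; possibly a raw image
```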

In Hyper-V-Manager choose "Export..." at the right side and then a destination. While many files (for example for snapshots) will be exported, we are only interested in the .vhdx file(s). Move those files to a storage that is accessible by your Proxmox VE host. You will have to recreate the virtual machine settings yourself.

Go to the GUI of Proxmox VE and create a new virtual machine. We don't need the hard disk that the virtual machine creation wizard created. Delete it in the hardware options of the virtual machine.

Now go to the command line interface of Proxmox VE. Use the command "qm importdisk" to import the .vhdx virtual hard disk. You can find syntax and an example on how to use this command on its man page.
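A hypothetical invocation (the VM ID 110, the storage name local-lvm and the file path are placeholders, not from the original guide):

```
qm importdisk 110 /path/to/exported-disk.vhdx local-lvm
```

The disk then shows up as an unused disk in the VM's hardware view, where it can be attached and selected as boot disk.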

XEN

Xen also uses the qemu disk format, so it should work in the same manner as described in the VMware section above.

You can use xenmigrate to do it.

FreeNAS

These are the necessary steps to migrate an Ubuntu Bionic VM from FreeNAS 11.2 to Proxmox VE 6.2-1. The VM in FreeNAS was created with the following parameters:

  • Boot Loader Type: UEFI
  • Guest OS: Ubuntu Bionic
  • Disk
    • Disk Mode: AHCI
    • Zvol: test/ubuntu-1xmtpt

Check the name of your zvol by going to Virtual Machines → Options of the VM ⋮→ Devices → Options of your disk ⋮ → Edit → Zvol

Preparation in FreeNAS

  1. Create a shared directory in Sharing → Unix (NFS) Shares with path /mnt/test.
  2. Enable SSH in Services & edit the SSH service (Actions) to allow password login for root
  3. Copy the zvol to the shared directory
    1. Log in to FreeNAS via SSH
      ssh root@ip.of.your.freenas
    2. Copy the zvol to the shared directory
      dd if=/dev/zvol/test/ubuntu-1xmtpt of=/mnt/test/ubuntu.raw bs=1m

Importing to Proxmox VE

  1. Create a virtual machine (here vmid is 103) in Proxmox VE. Make sure to set BIOS to OVMF (this is UEFI).
  2. Delete the disk that was created in step 1.
  3. Create a directory: mkdir /home/user/freenas
  4. Mount the shared directory from FreeNAS
     sudo mount -t nfs 192.168.31.241:/mnt/test /home/user/freenas
  5. Import the image of the FreeNAS VM to the Proxmox VE VM as an unused disk (vmid 103, storage local)
    qm importdisk 103 /home/user/freenas/ubuntu.raw local --format qcow2
  6. In the GUI of Proxmox VE:
    1. Go to the hardware view of your new virtual machine
    2. Set the display to spice
    3. Double click on the unused disk to attach it and choose Virtio as bus
    4. Go to the options view of your new virtual machine
    5. Choose your new virtio disk as bootdisk

Qemu/KVM

Create a new VM on Proxmox VE, add the existing disk image to this new VM, set the boot order, and start it.


Minimal example

First a VM has to be created. 120 is an unused VM ID.

qm create 120 --bootdisk scsi0 

someImage.img is an image that was created before. someStorage is the name of a storage as listed in pvesm status.

qm importdisk 120 someImage.img someStorage

qm importdisk adds the image as an unused disk to the virtual machine. Thus, making it the bootdisk is still necessary.

qm set 120 --scsi0 someStorage:vm-120-disk-0
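After these two commands, the relevant lines in /etc/pve/qemu-server/120.conf look roughly like this (a sketch; the exact volume name depends on the storage type):

```
bootdisk: scsi0
scsi0: someStorage:vm-120-disk-0
```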

Further information

If your use case is not covered by this article you should check out the additional ways to migrate to Proxmox VE in the wiki. It gathers years of knowledge for cases which are not as common as the ones explained here.