Migration of servers to Proxmox VE
= Introduction =

There are various ways to migrate existing servers to Proxmox VE. They can be divided into two categories:

; Physical-to-Virtual (P2V): Migration of physical servers to Proxmox VE
; Virtual-to-Virtual (V2V): Migration of virtual machines and containers from other hypervisors to Proxmox VE

Moving Linux servers is usually straightforward, so most of the troubleshooting hints below concern Windows.
= Physical-to-Virtual (P2V) =

Follow these steps to do a P2V migration and turn a physical machine into a Proxmox VE virtual machine that uses Qemu and KVM.

== Windows systems specific P2V issues ==

=== Inaccessible boot device ===

Booting a virtual clone (IDE) of a physical Windows system partition may fail with a BSOD reporting:

<pre>
STOP: 0x0000007B (0xF741B84C,0xC0000034,0x00000000,0x00000000)
INACCESSIBLE_BOOT_DEVICE
</pre>

This means that the source physical Windows machine had no driver support for an IDE controller, or at least not for the one emulated by KVM (see this [http://support.microsoft.com/kb/314082/en-us Microsoft KB article] for details). As Microsoft suggests, create a mergeide.reg file ([[File:Mergeide.zip]]) on the physical machine and merge it into the registry '''before''' the P2V migration. It may not always be necessary, but it does no harm and can save you a lot of time and headaches.
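A minimal way to merge the file from a command prompt, assuming mergeide.reg was copied to C:\ (path hypothetical):

<pre>
REM Silently merge the IDE driver registry entries
regedit /s C:\mergeide.reg
</pre>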
 
  
Windows 2000: see [http://www.motherboard.windowsreinstall.com/problems.htm this page].
  
=== Disk booting tips ===

* Check that your disk has the "boot flag" enabled. On NTFS disks you can check this with GParted by booting the VM from a live CD ISO; see this [http://gparted.sourceforge.net/docs/help-manual/C/gparted.html#gparted-fix-operating-system-boot-problems GParted manual page]. This is not strictly Windows-specific, but it is worth remembering here.
* Make sure that boot.ini on your system partition still matches your current disk/partition configuration (see this [http://support.microsoft.com/kb/124550 KB article] and this [http://support.microsoft.com/kb/102873 KB article]); a sample follows below.
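For reference, a typical boot.ini for a single-disk, single-partition Windows XP installation looks like this (the exact entries depend on your layout):

<pre>
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect
</pre>

After cloning, the rdisk() and partition() values must still point at the partition that actually holds Windows.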
 
  
=== Hidden non-present devices ===

If your source physical machine had network interface controllers (NICs) with static IPs assigned and you clone the system to a VM, you will need to add a virtual Ethernet controller provided by KVM. Windows will not let you assign the same fixed IP to the new virtual NIC, because it reports that a hidden, non-present device is already configured with that IP (see this [http://support.microsoft.com/kb/241257 Microsoft KB article]).

The old NIC should be removable from Windows' "known" hardware, but you will not find it in Device Manager, even if you enable "show hidden devices".

To be able to see, and then remove, the old physical NIC in Device Manager, you have to:

* open a console window
* type <pre>set devmgr_show_nonpresent_devices=1</pre> and press Enter
* then, in the same console (do not close it), type <pre>start devmgmt.msc</pre> and press Enter
* this opens the same Device Manager panel you could have opened from the Control Panel, but now, if you enable "show hidden devices", the non-present devices will appear (dimmed), and you will be able to remove them, freeing the IP for your virtual NIC.

=== Other suggested pages ===

* [[Acpi_kvm | Acpi setup]]
* [[Windows_2003_guest_best_practices | Windows 2003 guest best practices]]
* [[Paravirtualized_Network_Drivers_for_Windows | Paravirtualized Network Drivers for Windows]]
* [[Paravirtualized_Block_Drivers_for_Windows | Paravirtualized Block Drivers for Windows]]
== Fake RAID specific issues ==

If you have a physical PC to clone and you use a method that requires access to the file system from GNU/Linux (like with fsarchiver), you can run into trouble if you have a so-called "fake RAID", that is, a RAID created in the motherboard's BIOS configuration.

Let's say you have a RAID1. In short, GNU/Linux (e.g. SystemRescueCD) sees the device, sees that the disks have some RAID metadata, and when you try to mount them it uses the "mdadm" tools, which cannot work since that RAID format is entirely different.

To avoid this, boot with the 'nodmraid' kernel parameter.

For example, if you boot SystemRescueCD, hit the "Tab" key at the boot menu. You will see something like:

<pre>ifcpu64.c32 rescue64 scandelay=1 -- rescue32 scandelay=1</pre>

Modify it (on both sides of the "--" separator) like this:

<pre>ifcpu64.c32 rescue64 scandelay=1 nodmraid -- rescue32 scandelay=1 nodmraid</pre>
== Clonezilla Live CDs ==

This method is fast, reliable and OS independent, as it uses live CDs.

* Get a live Linux CD like [http://clonezilla.org/ Clonezilla].
* Prepare the source host: make sure that you have standard IDE drivers enabled (for WinXP/Win2003 use the mergeide.reg ([[File:Mergeide.zip]])); for W2k follow this link (solution 2 worked for me): [http://www.motherboard.windowsreinstall.com/problems.htm]
* Boot the physical host with Clonezilla, go for beginner mode, select device - device and then remote, and just follow the wizard.
* On the Proxmox VE host, prepare a KVM guest. Make sure it has a big enough IDE disk assigned (add 1 GB extra to be on the safe side), then boot this KVM guest with the live CD as well and execute a shell.
* Become root and run fdisk (fdisk /dev/sda) to make sure that sda is there; exit fdisk with 'w' (this was essential).
* Enter all the commands that the wizard on the source server tells you (configure the network and request the copy process).
* After success, change the boot device to the hard disk and start the VM. The wizard only copies the data: an 80 GB disk holding only 5 GB of data took just a few minutes for the whole process on a gigabit network. Windows will install all needed drivers automatically; only the Intel e1000 NIC drivers must be loaded from an ISO (Intel provides one big driver ISO containing all NIC drivers).
== VMware Converter ==

Physical (running) Windows server to Proxmox VE (KVM) using VMware vCenter Converter Standalone Client (V5).

Tested on an HP ProLiant ML350 G5 and G6.

=== Prepare Windows ===

'''VMware vCenter Converter Standalone Client'''

* Download [https://my.vmware.com/web/vmware/info/slug/infrastructure_operations_management/vmware_vcenter_converter_standalone/5_0 here] (version 5.x is Free Software and also free of charge, but you need to create an account and log in before downloading).

'''Mergeide.reg'''

* Execute mergeide.reg ([[File:Mergeide.zip]])

=== Prepare location to save local image ===

This guide uses an external USB hard drive. You may also save to a mapped network share.

'''NOTE:''' Although the final image will be around the same size as the actual amount of data on the server, the Proxmox VE server should have enough free space to fit the total physical disk of the server, unless you plan to [http://pve.proxmox.com/wiki/Shrink_Qcow2_Disk_Files shrink the Windows disks] once migrated to Proxmox VE.

=== VMware vCenter Settings ===

Launch VMware vCenter Converter and use the following settings:

* '''Source type:''' Powered-on machine
* '''Specify the powered-on machine:''' This local machine
* '''Select destination type:''' VMware Workstation or other VMware virtual machine
* '''Select VMware Product:''' VMware Workstation 8.0.x
* '''Name:''' Enter the desired name for the image
* '''Select a location for the virtual machine:''' Browse to the USB or network drive where the image will be saved.

The next screen shows the settings for the virtual machine.

Click on Advanced options, select the Post-conversion tab and make sure 'Install VMware Tools on the destination virtual machine' is NOT checked. We do not want to install VMware tools.

Click Next and Finish. It will now convert your physical machine to a .vmdk file.

'''NOTE:''' Depending on your hardware, you may need to boot the .vmdk file using VMware Workstation or Player before moving the file to the Proxmox VE server. This allows Windows to install additional drivers for the disk controller. If prompted to convert the disk to Workstation 9.x compatibility, say Yes. You will not know whether you need this step until starting the Windows VM in the final step: if you get a blue screen during boot, you should try this step.

==== Prepare the VM on Proxmox VE ====

Create a new KVM virtual machine. You will want to use a similar CPU and memory as the physical system. In the Hard Disk menu, leave everything as default; we will not be using the disk created by Proxmox VE. Finish creating the VM and make a note of the VMID. For this guide, we will be using 100 as an example.

Once the VMware converter has completed, disable all of the network adapters on the physical server and shut it down. Disabling the network adapters avoids potential IP conflicts if you start the physical server back into Windows after your new virtual server is running.

==== Move the image to the Proxmox VE Server ====

Plug the USB hard drive into the server. From the Proxmox VE command line:

<pre>
mkdir /mnt/usb
mount /dev/sdc1 /mnt/usb/
ls /mnt/usb
</pre>

You should see the contents of the USB drive. In this case, the vmdk file was located in /mnt/usb/windows-server/

==== Converting to qcow2 ====

We want to convert the .vmdk image file to qcow2. To do this, use the following command:

<pre>qemu-img convert -f vmdk /mnt/usb/windows-server/windows-server.vmdk -O qcow2 /var/lib/vz/images/100/windows-server.qcow2</pre>

This can take a while, depending on the size of the file and the speed of your system.

==== Final Steps ====

Once the conversion is complete, we need to edit the configuration file for the VM:

<pre>nano /etc/pve/local/qemu-server/100.conf</pre>

In the line with ide0: we want to change vm-100-disk-1.raw,size=32G to windows-server.qcow2.
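For example, the relevant line might change as follows; the storage name "local" is an assumption, so keep whatever storage prefix your configuration already shows:

<pre>ide0: local:100/vm-100-disk-1.raw,size=32G</pre>

becomes

<pre>ide0: local:100/windows-server.qcow2</pre>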
You may delete the empty disk created by Proxmox VE when you created the VM:

<pre>rm /var/lib/vz/images/100/vm-100-disk-1.raw</pre>

Start the VM and open the console. Windows should boot up normally. It will take a few minutes to detect the hardware changes. If the Windows system had a static IP address, you will have to reconfigure the settings.
== How to migrate directly from a Windows physical machine to a VM raw file using SSH ==

Assumptions:

* You have a physical machine in which you imported the mergeide.reg ([[File:Mergeide.zip]]), as shown at the top of this page.
* The physical machine has disk device /dev/sda (check this with "fdisk -l" and look for "hda" or "sda" or similar, NOT dm- or others).
* You have already created a virtual machine with a raw, file-based disk, and the VM is not running.
* The VM has VMID 101 (read this from the VM list in the web interface).

Download SystemRescueCD ( http://www.sysresccd.org ), burn it and reboot the physical machine with it in the CD tray.

At its bash prompt, give eth0 an IP, or use DHCP.

To assign an IP (use an IP on the same subnet as the Proxmox VE server):

<pre>ifconfig eth0 192.168.1.10 netmask 255.255.255.0 up</pre>

To use DHCP:

<pre>dhcpcd eth0</pre>

To start the imaging process on the physical machine:

<pre>dd if=/dev/sda | ssh root@proxmox dd of=/var/lib/vz/images/vmid/diskname.raw</pre>

Change root@proxmox to root@yourproxmoxip and change vmid/diskname to the VMID number and the name you want to give the disk. For example, VMID 101 with a disk called vm-101-disk-1.raw would be:

<pre>dd if=/dev/sda | ssh root@proxmox dd of=/var/lib/vz/images/101/vm-101-disk-1.raw</pre>

Once this completes, you can assign this disk file to your VM in the Proxmox VE interface.

An 8 GB physical server tested over a gigabit network was up and running in about 10 minutes.
 
== How to migrate directly from a Windows physical machine to a VM logical volume using netcat ==

Assumptions:

* You have a physical machine in which you imported the mergeide.reg ([[File:Mergeide.zip]]), as shown at the top of this page.
* The physical machine has disk device /dev/sda (check this with "fdisk -l" and look for "hda" or "sda" or similar, NOT dm- or others).
* You have already created a virtual machine with a disk on LVM, larger in size than the physical machine's disk (at least 1 GB more). The VM is STOPPED; do NOT start it during this procedure!
* The VM has VMID 103 (read this from the VM list in the web interface).
* The LVM VG is named SHARED-VG (read this with "vgdisplay -s").
* So the LV partition is /dev/SHARED-VG/vm-103-disk-1 (read this with "lvdisplay | grep LV | grep 103").

Download SystemRescueCD ( http://www.sysresccd.org ), burn it and reboot the physical machine with it in the CD tray.

At its bash prompt, give eth0 an IP coherent with the network of the Proxmox VE server (in the following example, the server is 10.0.0.27):

<pre>ifconfig eth0 10.0.0.17 netmask 255.255.255.0 up</pre>

Then assign the root user a password, so you can log in via SSH:

<pre>passwd</pre>

Log in via SSH on both the physical machine and the Proxmox VE server. On the Proxmox VE server, give the command:

<pre>date; netcat -w30 -vvnlp 3333 | gzip -dc > /dev/SHARED-VG/vm-103-disk-1; date</pre>

And on the physical machine give the command:

<pre>dd if=/dev/sda | gzip -c | nc -w30 -vvn 10.0.0.27 3333</pre>

You have to start the second command WITHIN 30 seconds of the first one (that is the -w30)!

This way, the Proxmox VE machine waits for a stream of bytes on TCP port 3333, then gunzips the stream and redirects it onto the VM's virtual disk. The physical machine outputs its ENTIRE hard disk as a stream of bytes, gzips it and passes it via netcat to the Proxmox VE server. The two date commands are useful to record when the process starts and ends.

Migrating a 75 GB hard disk took about 90 minutes on a 100 Mbit LAN. The VM started and worked as expected, after recognizing the new IDE controllers at first boot.
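Neither dd nor netcat prints progress by itself. On Linux, GNU dd prints its current statistics when it receives a USR1 signal, so from a second console on the physical machine you can run (this assumes only one dd process is running):

<pre>kill -USR1 $(pidof dd)</pre>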
 
 
 
An alternative method for Windows VMs is using ntfsclone with netcat. First, create the actual VM and boot it with the SystemRescueCD. Partition the virtual disk to match the partition structure of the source machine. Once you have the partitions built out, you can copy them directly from the source machine to the new VM. This only copies the used data. The commands are:

On the destination system:

<pre>nc -l -p 1234 | ntfsclone -r -O /dev/sda3 -</pre>

On the source system:

<pre>ntfsclone -s -o - /dev/sdb1 | netcat DestSrvIP 1234</pre>
== How to migrate from a physical machine to a VM, on a smaller partition ==

While it is quite easy to "clone" a partition to another (real or virtual) one of the same or bigger size, it is not so simple if the destination partition is smaller than the original one. Fortunately it is possible, and a proven method follows, but you have to take care of some preconditions:

* Since you want to clone to a smaller partition, we operate at the filesystem level, copying all the files from the source filesystem to the destination one.
* So we have to make sure that the destination partition '''has''' enough room for all the files, preferably with some free space left over.
* The cloning is not possible directly, i.e. disk-to-disk; we have to "save" the source partition first, and then "restore" it onto the destination one.
* We have to be sure that the tools used know how to copy files on the filesystems involved, including symlinks, hardlinks, filesystem-specific attributes, and so on.

=== Main tool: fsarchiver ===

One free tool you can use for this is [http://www.fsarchiver.org fsarchiver], which "is a system tool that allows you to save the contents of a file-system to a compressed archive file. The file-system can be restored on a partition which has a different size and it can be restored on a different file-system. Unlike tar/dar, FSArchiver also creates the file-system when it extracts the data to partitions. Everything is checksummed in the archive in order to protect the data. If the archive is corrupt, you just lose the current file, not the whole archive. Fsarchiver is released under the GPL-v2 license. It's still under heavy development so it must not be used on critical data." So, you have been warned. The latest fsarchiver should be in the latest [http://www.sysresccd.org SystemRescueCD], although you can also obtain it from your favourite recent distribution.

When cloning NTFS, '''be sure''' to use version 0.6.10 or a patched previous version, because there was a bug that caused errors with NTFS junctions (something like Linux symlinks).

If your source physical server is configured in RAID1 using "fake RAID" (motherboard BIOS RAID), see the section about it in this article.

=== mergeide ===

As said above, install mergeide.reg ([[File:Mergeide.zip]]) on the physical Windows machine (see this [http://support.microsoft.com/kb/314082/en-us Microsoft KB article] for details) to enable support for the natively supported IDE controllers in Windows. Without this, the cloned XP failed to boot for me.

=== Running fsarchiver from SystemRescueCD ===

Use the latest SystemRescueCD available, which has a recent fsarchiver version. Confirm the version is right by typing:

<pre>
# fsarchiver -V
</pre>

It should be at least 0.6.10, particularly if you are cloning NTFS filesystems.

Then run:

<pre>
# fsarchiver probe simple
</pre>

which reports your current disk/partition layout in a simple, understandable way, like:

<pre>
[======DISK======] [=============NAME==============] [====SIZE====] [MAJ] [MIN]
[sda             ] [WDC WD5002ABYS-5               ] [    15.00 GB] [  8] [  0]

[=====DEVICE=====] [==FILESYS==] [======LABEL======] [====SIZE====] [MAJ] [MIN]
[loop0           ] [squashfs   ] [<unknown>        ] [  671.85 MB] [  7] [  0]
[sda1            ] [ntfs       ] [System           ] [    15.00 GB] [  8] [  1]
[ramzswap0       ] [swap       ] [<unknown>        ] [  248.47 MB] [251] [  0]
</pre>

Then you have to provide a folder to save the partition "backup" to, which will be quite large (fsarchiver has several compression levels, but just to be sure, have enough free space there to accommodate the whole uncompressed data). If you need to reach a "tmpfolder" Windows share, you have to install the smbfs package first (standard repositories), so that you can then run:

<pre>
# mkdir /mnt/tmpfolder
# mount -t smbfs //windows/tmpfolder /mnt/tmpfolder -o user=username
</pre>

giving a suitable password when asked.

=== Backup the partition ===

Then you have to perform the "backup". '''BE CAREFUL: the first path is the backup file to save, the second is the source partition; do not swap them.''' I used:

<pre>
# fsarchiver savefs -v -o /mnt/tmpfolder/physical.fsa /dev/sda1
</pre>

(-v is verbose and shows each file as it is backed up; with a lot of files and a slow video card this can slow down the entire process considerably.)

Then (if no errors were reported) mount the same live CD in a KVM VM with a 15 GB empty virtual disk (virtio), so /dev/vda. After mounting the SMB share in the same way, run GParted (installed on the live CD; run startx to enter the graphical interface) and create an empty NTFS partition there, /dev/vda1, with the boot flag.

=== Restore the partition ===

Then run the restore.

'''Note:''' here I use '''/dev/vda1''' while the original was '''/dev/sda1''', and '''id=0''' because I restore the '''first''' partition in physical.fsa (yes, it may store more than one) as /dev/vda1:

<pre>
# fsarchiver restfs -v /mnt/tmpfolder/physical.fsa id=0,dest=/dev/vda1
</pre>

Check that there are no restore errors. It was quite quick and just worked. (Well, no, the first few times I had to retry, but in the end it worked.)

=== Successful cases ===

* I successfully cloned a Windows XP Professional machine from a physical 40 GB partition (12 GB used) to a virtual (KVM) 15 GB one, stored on PVE LVM. The archive file was around 7 GB. (I had just 2 hardlink restore errors, reported in the log on non-critical files, although I would prefer 0 errors.)
* I successfully cloned a Windows 2003 AD controller machine from a physical 250 GB partition (5 GB used) to a virtual (KVM) 15 GB one, stored on PVE LVM. The archive file was around 3.4 GB. (0 restore errors of any kind.)
* Cloned a physical Ubuntu 9.04 desktop from a 32 GB RAID5 disk to a virtual KVM 12 GB disk. Just had to reinstall GRUB to make it boot inside KVM. No errors.
== Physical (running) server to Proxmox VE (KVM) using SelfImage ==

Warning: SelfImage does not snapshot the physical disk. If possible, shut down services and avoid writes during the copy process.

=== Prepare the Windows operating system ===

Install [http://www.chip.de/downloads/SelfImage_30991577.html SelfImage] on the physical Windows machine. Execute the mergeide.reg ([[File:Mergeide.zip]]) (see this [http://support.microsoft.com/kb/314082/en-us Microsoft KB article] for details) to enable support for the natively supported IDE controllers in Windows.

=== Prepare the Proxmox VE VM ===

Create a new KVM virtual machine with a suitable disk size. It is recommended to choose a size 1 GB bigger than the physical disk you want to migrate, to make sure all your data fits on the virtual disk.

Use VNC or SSH to connect to a console on the Proxmox VE host. Export the qcow2 disk in the VM's image directory with NBD:

<pre>qemu-nbd -t /var/lib/vz/images/xxx/vm-xxx-disk.qcow2 -p 1024</pre>

where xxx is the VM ID.

If you use LVM as the storage model, you can also export the logical volume created by Proxmox VE via the web interface:

<pre>qemu-nbd -t /dev/VolumeGroup/vm-xxx-disk-1 -p 1024</pre>

=== Do the migration ===

Start SelfImage on the physical machine and choose to image the entire hard disk, not a partition. As output file, select NBD with your '''Proxmox VE host IP''' and '''port 1024''' as parameters. Click Start.

When imaging is complete, press CTRL+C on the Proxmox VE console to stop the export of your virtual disk file.

Start the virtual machine and have fun.
== Physical (running) server to Proxmox VE (KVM) using Windows backup ==

This may be useful if any other approach fails or looks too complicated for you, as it does not involve any software other than Windows and PVE itself.

It was used successfully to convert a live Windows 2000 Server SP4 to PVE 1.5. It was inspired by a blog post about a similar migration to XEN, but proved to be effective for PVE too, and probably for any other virtualization environment around.

Here is an [http://replay.waybackmachine.org/20090203053232/http://blog.geekpoet.net/2008/05/manual-p2v-migration-of-windows-server.html archive of the original blog post], and here is the '''[[Media:ManualP2V-with-NewSid.zip|cache]]''' of the post along with the NewSID file.

=== The workflow ===

This method is about:

* making a backup of your physical machine (pm)
* creating a virtual machine (vm) from scratch, as identical as possible to the pm
* installing the same base OS as the pm on the vm, from the original media
* restoring the backup of the pm onto the vm

Easy, huh? Well, it works! Of course there are some points to take care of, and some drawbacks too, but it is a very simple method, and you may find the drawbacks (if any) worth the easiness you get in return.

You do not need to get new software or learn fancy Linux commands: all you have to know (and it is assumed you ALREADY know) is:

* how to create a VM in PVE (KVM)
* how to make a backup/restore from Windows (with the bundled software!)

Ok, follow me.

=== Backup the physical machine ===

The program you need is NTBackup. On a standard installation you can find it under the menu:

* Start/Program Files/Accessories/System utilities/Backup
* if it has been removed, you can add it back from your Windows CD

Start a backup that includes:

* ALL the Windows hard drives
* the system state (this includes the whole registry, IPs, hostname, and so on)

Then choose the "file" option as a destination:

* be sure to create this file on a network share or a USB drive, but NOT on the disks you are backing up
* be sure that the network share has enough space to hold the BIG file

NOTE: Some services may "lock" some files in a way that prevents NTBackup from copying them (while they are in use):

* this is normal, and usually some of these locked files are:
** files used by the open programs of the logged-in user (e.g. NTBackup logs)
** SQL data files, webserver logs, and so on (usually running as services)
* You can find "locked" files by running NTBackup once and then looking in its log for lines containing the string "will be ignored"
** to find such lines in a BIG log file (mine had 250k lines), use the console FIND command ("c:\FIND /?" is your friend); see the example below
* the only workaround is, during the backup timeframe:
** close all the open (interactive) programs of the logged-in user
** temporarily stop the services that are locking the files (if you can)
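For example, assuming NTBackup wrote its log to C:\backup01.log (path hypothetical), this prints every line mentioning a skipped file:

<pre>find /i "will be ignored" C:\backup01.log</pre>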
 
 
 
=== Creating a similar virtual machine ===

On PVE, you must create a vm configured just like the pm you want to migrate:

* same OS (you have to use a CD or an ISO for that)
* same %WINDIR% folder name (e.g. c:\windows or c:\winnt, etc.) (the source blog post does not mention this, but I did it)
* same SP (service pack) level, hotfixes, etc.
* same CPU number
* same pagefile configuration (the source blog post does not mention this, but it turned out to matter in my conversion)
* same drive layout
** number of disks and partitions; those on the vm must be at least equal in size to the pm ones
** same drive letter mapping
* same number of network cards

Now, when your vm is just like your pm, except that it does not yet have any of the programs and configuration the pm has:

* add '''one more temporary drive''' to hold temporary files during the migration; assign it a drive letter not used on the pm
* onto this temporary drive, copy:
** the file '''c:\boot.ini''' from the vm
** the whole folder '''c:\windows\repair''' or '''c:\winnt\repair''', or whatever your %WINDIR% is on your vm
** the backup file you previously made with NTBackup, stored somewhere on your network

Then, '''before restoring''', switch the vm NIC to an unused network in PVE, e.g.:

* create a new fake bridge, not connected to any eth<x>,
* then delete the "working" vm NIC and
* re-add another NIC on the vm, linked to the new fake bridge

Otherwise your new vm would "talk" to the real network/domain controllers, etc., and would result in a duplicate machine, duplicate IP address, and so on. This would be '''BAD'''.

=== Restore the backup on the virtual machine ===

Log on to the vm and perform a restore of the backup file you have on the temporary disk.

* '''Note:''' choose "always replace the files on my computer" (see Tools > Options > Restore in NTBackup)

'''BEFORE REBOOTING''' (at the end of the restore, NTBackup asks you to reboot; hold down your mouse for a while...) restore:

* the file '''c:\boot.ini''' from the vm
* the whole folder '''c:\windows\repair''' or '''c:\winnt\repair''', or whatever your %WINDIR% is on your vm

replacing those restored from the backup file.

=== Reboot the virtual machine ===

If you get any BSOD (blue screen), you may try to:

* boot in safe mode
* remove unrecognized components (e.g. SCSI card)
* make Windows detect those components and reinstall the drivers

I did not see any BSOD, so there is nothing more to tell here.

Otherwise, log on to the vm:

* some components may not work at first; e.g. the mouse cursor may seem "out of sync" with the VNC one. Be patient and wait, because
* Windows may recognize more virtual components and install them automatically (PCI, video, network, and so on); after a reboot they will work as expected.

After that, you can:

* turn off the pm
* on the vm, add a NIC linked to the real network, assigning the real IP the pm had
* reboot and log into the domain
== Physical server (or KVM, VMware, XEN or other) to Proxmox VE OpenVZ container ==

External links:

* [http://howtoforge.com/how-to-convert-physical-systems-and-xen-vms-into-openvz-containers-debian-etch How To Convert Physical Systems And Xen VMs Into OpenVZ Containers]
* [http://wiki.openvz.org/Physical_to_container Physical_to_container]

== Alternative methods ==

* [https://github.com/QuinnyPig/OpenVZ-Template-Creators OpenVZ Template Creators] to create non-Debian templates (we have [[Debian Appliance Builder|DAB]] for Debian).

= Virtual-to-Virtual (V2V) =

Follow these steps to do a V2V migration and move a virtual machine from another hypervisor to a Proxmox VE virtual machine that uses Qemu and KVM.

== VMware ==

This explains the migration from a VMware ESXi 6.7 hypervisor to Proxmox VE 6.1. It is tested with guests with the following operating systems:

* Debian 10
* Windows 10 Pro
* Windows Server 2016 Standard

=== Exporting ===

==== GUI ====

In VMware ESXi, navigate to your virtual machine in the tree on the left. Choose Actions -> Export. This will download disk images and information about your virtual machine in the .ovf format in the browser. Move those files to a storage that is accessible by your Proxmox VE host.

==== CLI ====

Install VMware's [https://my.vmware.com/group/vmware/get-download?downloadGroup=OVFTOOL440 ovftool] on your Proxmox VE host.

ovftool version 4.4 has been reported to work with the following versions of ESXi: 6.5 and 6.7. Others (for example, [https://forum.proxmox.com/threads/unable-to-migrate-vm-from-esxi-6-0-0-to-proxmox-6-3-2.82301/post-362781 6.0]) might crash with an unhelpful error message.

Remove any attached disk or ISO from your ESXi VM and run

<pre>ovftool vi://root@<ip-of-esxi>/<name-of-a-virtual-machine> .</pre>

to export a virtual machine from ESXi directly into your current directory.
  
You can replace the dot with any other path, for example "/mnt/pve/<some-storage>". This way you can export directly to a storage that you created in Proxmox VE.

{{note | You have to extract .ova archives}}

With some configurations you get a single (.ova) archive file that contains all the .vmdk disk images and the small XML description (.ovf) file. If this is the case, you have to extract it before you continue with <code>qm importovf</code>.
=== Importing ===

Go to the command line interface of Proxmox VE. Use the command <code>qm importovf <new-vmid> <path/to/file.ovf> <storage></code> to import the virtual machine. For example:

<pre>qm importovf 200 /tmp/exported-vm.ovf local-lvm</pre>

This will create a new virtual machine, using cores, memory and VM name as read from the OVF manifest, and import the disks. You have to configure the network manually. You can find the syntax and an example of how to use this command on its [https://pve.proxmox.com/pve-docs/qm.1.html man page].
  
{{note | Windows guests require a few additional steps}}

If you are migrating a Windows guest, you additionally have to execute the following commands. This example assumes that your imported virtual machine has the ID 130.

* Use UEFI instead of BIOS for the virtual machine:

<pre>qm set 130 --bios ovmf</pre>

* Windows requires additional drivers to use SCSI. Therefore, you should temporarily use SATA instead. In the following command, 'scsi0' should be replaced with the drive that is used to boot Windows:

<pre>sed -i 's/scsi0:/sata0:/' /etc/pve/qemu-server/130.conf</pre>

This will give you an initial working version. You can then improve your experience and access the remaining SCSI drives by installing additional drivers, as explained in [[Windows 10 guest best practices]].
=== Server self-migration ===

It is also possible to migrate without exporting each VM separately, including its virtual disks. This way, you can convert a server from vSphere to Proxmox VE without the need for a second server. For this process your vSphere should use VMFS6, and you need at least one empty HDD.

1. Export the VM information without the disks using ovftool (you still need to configure the network configuration for each VM):

<pre>ovftool -NoDisks vi://root@<ip-of-esxi>/<name-of-a-virtual-machine> .</pre>

2. Install Proxmox VE on some disk that does not contain any important data. You do not need vSphere anymore at this point. If you have an OS disk with only vSphere on it, you can now overwrite it with Proxmox VE.

{{warning | Do not use/touch any other existing drives which are VMFS formatted}}

3. Create a directory on the above-mentioned spare HDD.

4. Install vmfs6-tools, which you need to mount (read-only) the VMFS-formatted drives holding the vSphere virtual disks:

<pre>apt install vmfs6-tools -y</pre>

5. List all available drives to identify the VMFS-formatted ones:

<pre>fdisk -l</pre>

6. Mount the VMFS disk partition with vmfs6-fuse from that package (note that it is read-only):

<pre>vmfs6-fuse /dev/<device><partition> /mnt/<mountpoint></pre>

7. Convert the vSphere disk to a format suitable for Proxmox VE:

<pre>qemu-img convert -f vmdk <path to your vmdk file>.vmdk -O <raw/qcow2> <path to your empty directory>.raw/qcow2 -p</pre>

8. While the conversion is in progress, you may create the first VM from the ovf:

<pre>qm importovf 200 /tmp/exported-vm.ovf</pre>

9. As soon as the conversion is finished, you can make the new Proxmox VE disk image visible to the VM:

<pre>qm rescan</pre>

If all VM images have been moved away from a VMFS6 disk, you can format it and use it in Proxmox VE.
== VMware to Proxmox VE (KVM), older manual method ==

This howto describes the migration of a Windows 2003 Server (or Windows XP) from VMware to Proxmox VE (KVM). The procedure can be applied to '''VMware Linux guests''' as well; just skip the Windows-specific steps.

=== Prepare the Windows operating system ===

Before you begin, make a copy of the VMware image.

==== Remove VMware tools ====

Start the Windows virtual machine on VMware and remove the VMware tools via the Windows control panel. Reboot.

==== Enable IDE ====

* Start the Windows virtual machine on VMware and execute the mergeide.reg ([[File:Mergeide.zip]]). This changes the registry so that your Windows can boot from IDE, which is necessary for KVM.
* Make sure Atapi.sys, Intelide.sys, Pciide.sys, and Pciidex.sys are in the %SystemRoot%\System32\Drivers folder. If any are missing, they can be extracted from %SystemRoot%\Driver Cache\I386\Driver.cab, which can be opened in Windows file explorer like a directory, and the missing files can be copied out.
* See this [http://support.microsoft.com/kb/314082/en-us Microsoft KB article] for details.
* Shut down Windows.

=== Prepare the disk file ===

The disk file used for this howto: win2003.vmdk

* Convert your VMDK disk file with vmware-vdiskmanager.exe to a single growable file (vmware-vdiskmanager.exe is located in your VMware installation path, e.g. "C:\Program Files\VMware\VMware Server"). Open a cmd window and go to the directory where your vmdk disk files are:

<pre>"C:\Program Files\VMware\VMware Server\vmware-vdiskmanager" -r win2003.vmdk -t 0 win2003-pve.vmdk</pre>

* Create a new KVM VM on the Proxmox VE web interface, do not start it, and take note of the VMID (e.g. 102).
* Copy win2003-pve.vmdk to your Proxmox VE server into the directory /var/lib/vz/images/VMID (I used [http://winscp.net WinSCP], as I worked on a Windows desktop).
* Convert the win2003-pve.vmdk file to qemu format:

<pre>qemu-img convert -f vmdk win2003-pve.vmdk -O qcow2 win2003-pve.qcow2</pre>

* Or copy the vmdk to an LVM-based disk of the same size or larger:

<pre>dd if=win2003-pve.vmdk of=/dev/VG_iscsi/vm-102-disk-1</pre>

'''Alternatively''', your .vmdk file might be in a flat format, so running "qemu-img convert ''-f vmdk''" will abort with "Operation not permitted". You can check your .vmdk file's actual format with the *nix command "file". If it is a flat file instead of the native format, the output would look like:

<pre>
# file myVMwFlatImage-pve.vmdk
myVMwFlatImage-pve.vmdk: x86 boot sector; partition 1: ID=0x83, active, starthead 1,
  startsector 63, 208782 sectors; partition 2: ID=0x8e, starthead 0, startsector 208845,
  16563015 sectors, code offset 0x48
</pre>

In that case, simply convert the .vmdk into .qcow2 omitting the "-f vmdk" from the command above, and let qemu-img auto-detect the source format:

<pre>qemu-img convert myVMwFlatImage-pve.vmdk -O qcow2 myVMwFlatImage-pve.qcow2</pre>

(This was experienced with a VMware ESXi 3.5 clone of a RHEL guest.)

'''If your KVM guest starts but does not boot''' (the VNC console reports something like "booting from hard disk...boot failed: not a bootable disk"), try a "preallocated virtual disk" instead of a "single growable file". Launch "vmware-vdiskmanager" without parameters to see all the instructions. You specify the disk type with the parameter "-t <disk type>", where:

<pre>
  Disk types:
      0                  : single growable virtual disk
      1                  : growable virtual disk split in 2GB files
      2                  : preallocated virtual disk
      3                  : preallocated virtual disk split in 2GB files
      4                  : preallocated ESX-type virtual disk
      5                  : compressed disk optimized for streaming
</pre>

So you have to use type 2:

<pre>vmware-vdiskmanager -r whatever.vmdk -t 2 whatever-pve.vmdk</pre>

Be aware that vmware-vdiskmanager may now create 2 files:

* one whatever-pve.vmdk, really small: it is actually a text file, which you can read, and which refers to
* another vmdk, like whatever-pve-flat.vmdk, which has the full size of your disk; this is what you want to convert for KVM afterwards.
* Note that the second file has the real size of the hard disk as seen from inside the guest VM, so if you are copying it to LVM with dd, you have to make enough room for this file size, or more.
* This happened converting an ubuntu.vmdk from a Linux VMware Server 2 to PVE using LVM storage, and the solution proposed here worked.

=== Alternative migration using a rescue CD disk image ===

As an alternative to copying the .vmdk file, you can also boot the VMware guest using a rescue CD image; I use http://www.sysresccd.org/

There was no (obvious) method in (my) VMware Server 2.0 to specify that the VMware guest should boot from a CD image, and trying to intercept the VMware boot process in time to press F2 proved futile so many times that I gave up. Instead, I changed the disk order: for some VMware guests by adding a new little disk, for others just by swapping the existing disks. Then it booted from the rescue CD image.

Once inside the rescue CD image, you can use SSH from the Proxmox VE server into the VMware guest running the rescue CD image. After that, simply run this command:

<pre>ssh root@vmware_guest dd if=/dev/sdb | dd of=/dev/VG/vm-103-disk-1</pre>

REMEMBER that you swapped the disks in VMware to be able to boot from the rescue CD image. You can also dd to a file if you prefer. Remember to also dd all the other disks from the VMware guest.

It is possible to use gzip to transfer less data over the network, but I was on a local network on the same switch, so it did not matter. I did not see any higher speeds using gzip; my speeds were between 4 MB/s and 12 MB/s.

=== Adapt the new KVM virtual machine ===

* Go to the hardware tab in the web interface and remove the default hard disk file.
* Add the migrated hard disk as IDE for Windows (only IDE works for Windows out of the box) or SCSI for Linux.
* 2012-09-24: virtio now also works for Windows, but Windows cannot boot from it the first time. So boot with IDE, then shut down, add or change a second hard disk as virtio, then boot up. Then shut down. Then change ide0 to virtio0 (I do that using vi in the console). Then you can boot Windows again, and it will switch to the virtio driver, which reports as being from Red Hat.
* Start the new virtual machine via the management interface.
* The first boot takes some time as some drivers have to be loaded.
* Do not forget to install the [[Paravirtualized_Network_Drivers_for_Windows]].
* Finished!

For comments or problems please post to the [http://www.proxmox.com/forum/forumdisplay.php?f=12 Proxmox VE forum] or to the [http://pve.proxmox.com/wiki/Mailing_Lists mailing list].
== HyperV ==

This explains the migration from a Hyper-V on Windows 10 hypervisor to Proxmox VE 6.1. It is tested with a Proxmox VE 6.1 guest.

Proxmox VE uses [https://manpages.debian.org/testing/qemu-utils/qemu-img.1.en.html qemu-img] to import disks. In my tests it supported .vhdx, even though [https://linux.die.net/man/1/qemu-img some sources] do not explicitly list .vhdx as a supported format. There are additional options if this should stop working:

* [https://docs.microsoft.com/en-us/powershell/module/hyper-v/convert-vhd?view=win10-ps Convert-VHD] is a command line tool that is built into Windows and converts .vhdx to .vhd. This format is supported by qemu-img.
* [https://www.starwindsoftware.com/starwind-v2v-converter StarWind V2V Converter] is a graphical tool that can create .qcow2 disks from Hyper-V guests.

In Hyper-V Manager, choose "Export..." on the right side and then a destination. While many files (for example for snapshots) will be exported, we are only interested in the .vhdx file(s). Move those files to a storage that is accessible by your Proxmox VE host. You will have to recreate the virtual machine settings yourself.

Go to the GUI of Proxmox VE and create a new virtual machine. We do not need the hard disk that the virtual machine creation wizard created; delete it in the hardware options of the virtual machine.

Now go to the command line interface of Proxmox VE. Use the command "qm importdisk" to import the .vhdx virtual hard disk. You can find the syntax and an example of how to use this command on its man page.
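For example, assuming the exported disk is at /mnt/usb/exported-vm.vhdx and the new VM has ID 105 (both hypothetical), the following imports it to the storage local-lvm, where it shows up as an "unused disk" that you can then attach in the hardware view:

<pre>qm importdisk 105 /mnt/usb/exported-vm.vhdx local-lvm</pre>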
 
 
 
== XEN ==

XEN also uses the qemu disk format, so it should work in the same manner as described under "VMware to Proxmox VE (KVM)". You can use [[xenmigrate]] to do it.

== Move OpenVZ containers to Proxmox VE ==

You can move existing OpenVZ containers (container = VE = VPS) with vzmigrate or vzdump (see the examples below):

* Use vzmigrate offline migration to move your container to Proxmox VE
* Use vzdump to restore from a backup

After you have moved your container, you need to edit the corresponding config file:

<pre>nano /etc/vz/conf/xyz.conf</pre>

Add/edit the following line:

<pre>ORIGIN_SAMPLE="pve.auto"</pre>

Now you can manage the resource settings in the Proxmox VE management interface.
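A minimal sketch of both options, assuming container ID 101, a Proxmox VE host reachable at 192.168.1.5, and a dump archive path as shown (all hypothetical):

<pre>
# Offline migration from the old OpenVZ host to the Proxmox VE host
vzmigrate 192.168.1.5 101

# Or: back up on the old host, copy the archive to the Proxmox VE host, then restore it there
vzdump 101
vzrestore /vz/dump/vzdump-101.tar 101
</pre>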
 
  
== FreeNAS ==

These are the necessary steps to migrate an Ubuntu Bionic VM from FreeNAS 11.2 to Proxmox VE 6.2-1. The VM in FreeNAS was created with the following parameters:

* Boot Loader Type: UEFI
* Guest OS: Ubuntu Bionic
* Disk
** Disk Mode: AHCI
** Zvol: test/ubuntu-1xmtpt

Check the name of your zvol by going to Virtual Machines → Options of the VM (⋮) → Devices → Options of your disk (⋮) → Edit → Zvol

=== Preparation in FreeNAS ===

# Create a shared directory in Sharing → Unix (NFS) Shares with path <code>/mnt/test</code>.
# Enable SSH in Services and edit the SSH service (Actions) to allow password login for root.
# Copy the zvol to the shared directory:
## Log in to FreeNAS via SSH <pre>ssh root@ip.of.your.freenas</pre>
## Copy the zvol to the shared directory <pre>dd if=/dev/zvol/test/ubuntu-1xmtpt of=/mnt/test/ubuntu.raw bs=1m</pre>

=== Importing to Proxmox VE ===

# Create a virtual machine (here the VMID is 103) in Proxmox VE. Make sure to set the BIOS to OVMF (this is UEFI).
# Delete the disk that was created in step 1.
# Create a directory <pre>mkdir /home/user/freenas</pre>
# Mount the shared directory from FreeNAS <pre>mount -t nfs 192.168.31.241:/mnt/test /home/user/freenas</pre>
# Import the image of the FreeNAS VM to the Proxmox VE VM as an unused disk (VMID 103, storage local) <pre>qm importdisk 103 /home/user/freenas/ubuntu.raw local --format qcow2</pre>
# In the GUI of Proxmox VE:
## Go to the hardware view of your new virtual machine
## Set the display to SPICE
## Double-click on the unused disk to attach it and choose VirtIO as the bus
## Go to the options view of your new virtual machine
## Choose your new virtio disk as boot disk
== Qemu/KVM ==

Create a new VM on Proxmox VE and add the existing disk image to this new VM, set the boot order, and start it.

'''Minimal example'''

First a VM has to be created; 120 is an unused VM ID:

<pre>qm create 120 --bootdisk scsi0</pre>

someImage.img is an image that was created before; someStorage is the name of a storage as listed by ''pvesm status'':

<pre>qm importdisk 120 someImage.img someStorage</pre>

''qm importdisk'' adds the image as an unused disk to the virtual machine, so making it the boot disk is still necessary:

<pre>qm set 120 --scsi0 someStorage:vm-120-disk-0</pre>
= Convert Windows to use SCSI (KVM) =

This requires a 2.6.24 or 2.6.32 kernel; SCSI will not work on 2.6.18. This procedure is needed to get Windows to load and activate the SCSI drivers; once they are active, you can switch the disk and it should just work. Tested with Windows 2003 SP1 on Proxmox VE 1.5 with the 2.6.32 kernel installed. It has not been tested for performance or reliability, but theoretically it will be better; use at your own risk.

* With the virtual machine powered off, add a new temporary SCSI virtual disk (a sketch with qm commands follows after this list).
* Boot the virtual machine and make sure the drivers load for the new disk. You do not need to format it; just make sure you can see it in Device Manager.
* Shut down the virtual machine, then remove the new disk and any IDE disks. Do not delete the LVM volume or disk image file.
* Add the disks you just removed back as SCSI disks and boot the virtual machine.
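A minimal sketch of the add/remove steps using the CLI, assuming VMID 101, storage "local" and a 1 GB temporary disk (all hypothetical; the same can be done in the GUI):

<pre>
# Add a temporary 1 GB SCSI disk so Windows activates the SCSI driver
qm set 101 --scsi1 local:1

# Once the driver is active, detach the temporary disk again (it becomes an "unused disk" you can delete)
qm set 101 --delete scsi1
</pre>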
 
 
 
= Further information =

If your use case is not covered by this article, you should check out the [[Additional ways to migrate to Proxmox VE|additional ways to migrate to Proxmox VE]] in the wiki. It gathers years of knowledge for cases which are not as common as the ones explained here.
[[Category: HOWTO]]
 
[[Category: HOWTO]]

Revision as of 12:54, 2 May 2022

Introduction

There are various ways to migrate existing servers to Proxmox VE. They can be divided into two categories:

Physical-to-Virtual (P2V)
Migration of physical servers to Proxmox VE
Virtual-to-Virtual (V2V)
Migration of virtual machines and containers from other hypervisors to Proxmox VE

Physical-to-Virtual (P2V)

Follow these steps to do a P2V migration and turn a physical machine into a Proxmox VE virtual machine that uses Qemu and KVM.

Clonezilla Live CDs

This method is fast, reliable and OS independent as it uses live CDs.

  • Get a live Linux CD like Clonezilla.
  • Prepare the source host, make sure that you have standard IDE drivers enabled (for WinXP/Win2003 use the mergeide.reg (File:Mergeide.zip) ), for W2k I followed this link (solution 2 worked for me): [1]
  • Now, boot the physical host with Clonezilla, go for beginner mode and select device - device and then remote, just follow the wizard
  • On the Proxmox VE host, prepare a KVM guest, make sure you got a big enough IDE disk assigned (add 1 GB extra to be on the safe side) and also boot this KVM guest with the live CD and execute a shell.
  • Become root and run fdisk (fdisk /dev/sda/) to make sure that sda is here, exit fdisk with 'w'. this was essential.
  • Now enter all commands from the wizard on the source server tells you (configure network and request the copy process)
  • After success (the wizard just copies the data, means I got a 80 GB disk but only 5 GB data on it so it was just a few minutes for the whole process on a gigabit network) just change the boot device to the hard disk and start the VM. Windows will install all needed drivers automatically, just the Intel NIC drivers for e1000 must be loaded from ISO (I got one big driver ISO from Intel containing all NIC drivers).

VMware Converter

Physical (running) Windows server to Proxmox VE (KVM) using VMware vCenter Converter Standalone Client (V5)

Tested on an HP ProLiant ML350 G5 and G6

Prepare Windows

VMware vCenter Converter Standalone Client

  • Download here(version 5.x is Free Software and also free of charge but you need to create an account and login before download)

Mergeide.reg

Prepare location to save local image

This guide is using an external USB Hard Drive. You may also save to a mapped network share.

NOTE Although the final image will be around the same size as the actual amount of data on the server, the Proxmox VE server should have enough free space to fit the total physical disk of the server unless you plan to shrink the windows disks. once migrated to Proxmox VE.

VMware vCenter Settings

Launch VMware vCenter and use the following settings:

  • Source type: Powered-on machine
  • Specify the powered-on machine: This local machine
  • Select destination type: VMware Workstation or other VMware virtual machine
  • Select VMware Product: VMware Workstation 8.0.x
  • Name: Enter the desired name for the image
  • Select a location for the virtual machine: Browse to the USB or Network drive where the image will be saved.

The next screen shows the settings for the virtual machine.

Click on Advanced options, select the Post-conversion tab and make sure ‘Install VMware Tools on the destination virtual machine’ is NOT check. We do not want to install VMware tools.

Click next and Finish.

It will now convert your physical machine to a .vmdk file.

NOTE: Depending on your hardware, you may need to boot the .vmdk file using VMware Workstation or Player before moving the file to the Proxmox VE server. This allows windows to install additional drivers for the disk controller. If promoted to convert the disk to Workstation 9.x compatibility, say Yes. You won't know if you need this step until starting the Windows VM in the final step. If you get a blue screen during boot, you should try this step.

Prepare the VM on Proxmox VE

Create a new KVM virtual machine. You’ll want to use similar CPU and memory as the physical system. In the Hard Disk menu, leave everything as default. We won’t be using the disk created by Proxmox VE. Finish creating the VM. Make note of the VMID. For this guide, we’ll be using 100 as an example.

Once the VMware Converter has completed, disable all of the network adapters on the physical server and shut it down. Disabling the network adapters avoids potential IP conflicts if you later boot the physical server back into Windows after your new virtual server is running.

Move the image to the Proxmox VE Server

Plug a USB Hard Drive into the server

From the Proxmox VE command line:

mkdir /mnt/usb
mount /dev/sdc1 /mnt/usb/
ls /mnt/usb

You should see the contents of the USB drive (the device name /dev/sdc1 may differ on your system; check with lsblk). In my case, the .vmdk file was located in /mnt/usb/windows-server/

Converting to qcow2

We want to convert the .vmdk image file to qcow2. To do this, use the following command:

qemu-img convert -f vmdk /mnt/usb/windows-server/windows-server.vmdk -O qcow2 /var/lib/vz/images/100/windows-server.qcow2

This can take a while depending on the size of the file and the speed of your system.
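Optionally, you can sanity-check the converted image before attaching it (not part of the original procedure, just a quick verification):

qemu-img info /var/lib/vz/images/100/windows-server.qcow2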

Final Steps

Once the conversion is complete, we need to edit the configuration file for the VM.

nano /etc/pve/local/qemu-server/100.conf

In the line starting with ide0:, we want to change vm-100-disk-1.raw,size=32G to windows-server.qcow2
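For example, assuming the default 'local' directory storage (the exact storage name may differ on your system), the line would change roughly from

ide0: local:100/vm-100-disk-1.raw,size=32G

to

ide0: local:100/windows-server.qcow2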

You may delete the empty disk created by Proxmox VE when you created the VM.

rm /var/lib/vz/images/100/vm-100-disk-1.raw

Start the VM and open the console. Windows should boot up normally. It’ll take a few minutes to detect the hardware changes. If the Windows system had a static IP address, you'll have to reconfigure the settings.

Alternative Methods

Virtual-to-Virtual (V2V)

Follow these steps to do a V2V migration and move a virtual machine from another hypervisor to a Proxmox VE virtual machine that uses Qemu and KVM.

VMware

This explains the migration from a VMware ESXi 6.7 hypervisor to Proxmox VE 6.1. It is tested with guests with the following operating systems:

  • Debian 10
  • Windows 10 Pro
  • Windows Server 2016 Standard

Exporting

GUI

In VMware ESXi navigate to your virtual machine in the tree on the left. Choose Actions->Export. This will download disk images and information about your virtual machine in the .ovf format in the browser. Move those files to a storage that is accessible by your Proxmox VE host.

CLI

Install VMware's ovftool on your Proxmox VE host. ovftool version 4.4 has been reported to work with ESXi 6.5 and 6.7. Other versions (for example, 6.0) might crash with an unhelpful error message.

Remove any attached disk or ISO from your ESXi VM and run

ovftool vi://root@<ip-of-esxi>/<name-of-a-virtual-machine> .

to export a virtual machine from ESXi directly into your current directory.


You can replace the dot with any other path, for example "/mnt/pve/<some-storage>". This way you can export directly to a storage that you created in Proxmox VE.

Note: You have to extract .ova archives

With some configurations you get a single archive file (.ova) that contains all .vmdk disk images and the small XML description file (.ovf). If this is the case, you have to extract it before you continue with qm importovf.
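An .ova archive is a plain tar archive, so extracting it is enough (the file name here is just an example):

tar -xvf exported-vm.ova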

Importing

Go to the command line interface of Proxmox VE. Use the command qm importovf <new-vmid> </path/to/file.ovf> <storage> to import the virtual machine. For example:

qm importovf 200 /tmp/exported-vm.ovf local-lvm

This will create a new virtual machine using cores, memory and VM name as read from the OVF manifest, and import the disks. You have to configure the network manually. You can find syntax and an example on how to use this command on its man page.

Note: Windows guests require a few additional steps

If you are migrating a Windows guest, you additionally have to execute the following commands. This example assumes that your imported virtual machine has the ID 130.

  • Use UEFI instead of BIOS for the virtual machine:
qm set 130 --bios ovmf
  • Windows requires additional drivers to use SCSI. Therefore, you should temporarily use SATA instead. In the following command, 'scsi0' should be replaced with the drive that's used to boot Windows.
sed -i 's/scsi0:/sata0:/' /etc/pve/qemu-server/130.conf
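Assuming default disk naming on a local-lvm storage (names here are illustrative only), this would turn a line like

scsi0: local-lvm:vm-130-disk-0,size=32G

into

sata0: local-lvm:vm-130-disk-0,size=32G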

This will give you an initial working version. You can then improve your experience and access the remaining SCSI drives by installing additional drivers as explained in Windows 10 guest best practices.

Server self-migration

It is also possible to migrate without exporting each VM (including its virtual disks) separately.
This way, you can convert a server from vSphere to Proxmox VE without needing a second server.
For this process your vSphere host should use VMFS6 and you need at least one empty HDD.
1. Export the VM information without the disks using ovftool (you still need to configure the network settings for each VM yourself).

ovftool -NoDisks vi://root@<ip-of-esxi>/<name-of-a-virtual-machine> .

2. Install Proxmox VE on some disk that does not contain any important data. You don't need vSphere anymore at this point. If you have an OS disk with only vSphere on it, then you can now overwrite it with Proxmox VE.

Warning: Do not use/touch any other existing drives which are VMFS formatted

3. Create a directory on the above-mentioned spare HDD.
4. Install vmfs6-tools, which you need to mount (read-only) the VMFS-formatted drives containing the vSphere virtual disks.

apt install vmfs6-tools -y

5. List all available drives to identify the VMFS formatted ones

fdisk -l

6. Mount the VMFS disk partition (note that it is read-only)

mkdir -p /mnt/vmfs
vmfs6-fuse /dev/<device><partition> /mnt/vmfs

7. Convert the vSphere disk to a suitable format for Proxmox VE

qemu-img convert -p -f vmdk <path-to-your-vmdk-file>.vmdk -O <raw|qcow2> <path-to-your-empty-directory>/<disk-name>.<raw|qcow2>
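A concrete example with hypothetical paths, using the mount point from step 6 and a directory created in step 3:

qemu-img convert -p -f vmdk /mnt/vmfs/myvm/myvm.vmdk -O qcow2 /mnt/spare/images/myvm.qcow2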

8. While the conversion is in progress, you may create the first VM from the ovf

qm importovf 200 /tmp/exported-vm.ovf

9. As soon as the conversion is finished, you may attach the new Proxmox VE disk image to the VM.

qm rescan
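qm rescan adds the converted image as an unused disk to the matching VM, provided the target directory is configured as a storage in Proxmox VE. You can then attach it, for example (storage and file names are illustrative):

qm set 200 --scsi0 <storage>:200/myvm.qcow2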

Once all VM images have been moved off a VMFS6 disk, you can format it and use it with Proxmox VE.

Hyper-V

This explains the migration from a Hyper-V hypervisor on Windows 10 to Proxmox VE 6.1. It was tested with a Proxmox VE 6.1 guest.

Proxmox VE uses qemu-img to import disks. In my tests it supported .vhdx, even though some sources don't explicitly list .vhdx as a supported format. There are additional options should this ever stop working:

  • Convert-VHD is a command-line tool built into Windows that converts .vhdx to .vhd. This format is supported by qemu-img.
  • StarWind V2V Converter is a graphical tool that can create .qcow2 disks from Hyper-V guests.

In Hyper-V-Manager choose "Export..." at the right side and then a destination. While many files (for example for snapshots) will be exported, we are only interested in the .vhdx file(s). Move those files to a storage that is accessible by your Proxmox VE host. You will have to recreate the virtual machine settings yourself.

Go to the GUI of Proxmox VE and create a new virtual machine. We don't need the hard disk that the virtual machine creation wizard created. Delete it in the hardware options of the virtual machine.

Now go to the command line interface of Proxmox VE. Use the command "qm importdisk" to import the .vhdx virtual hard disk. You can find the syntax and an example of how to use this command on its man page.
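For example (VM ID, path, and storage name are placeholders, not values from this guide):

qm importdisk 105 /mnt/usb/windows-guest.vhdx local-lvm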

Xen

Xen also uses the qemu disk image format, so migration should work in the same manner as described under "VMware" above.

You can use xenmigrate to do it.

FreeNAS

These are the necessary steps to migrate an Ubuntu Bionic VM from FreeNAS 11.2 to Proxmox VE 6.2-1. The VM in FreeNAS was created with the following parameters:

  • Boot Loader Type: UEFI
  • Guest OS: Ubuntu Bionic
  • Disk
    • Disk Mode: AHCI
    • Zvol: test/ubuntu-1xmtpt

Check the name of your zvol by going to Virtual Machines → Options of the VM (⋮) → Devices → Options of your disk (⋮) → Edit → Zvol

Preparation in FreeNAS

  1. Create a shared directory in Sharing → Unix (NFS) Shares with path /mnt/test.
  2. Enable SSH in Services & edit the SSH service (Actions) to allow password login for root
  3. Copy the zvol to the shared directory
    1. Log in to FreeNAS via SSH
      ssh root@ip.of.your.freenas
    2. Copy the zvol to the shared directory
      dd if=/dev/zvol/test/ubuntu-1xmtpt of=/mnt/test/ubuntu.raw bs=1m

Importing to Proxmox VE

  1. Create a virtual machine (here vmid is 103) in Proxmox VE. Make sure to set BIOS to OVMF (this is UEFI).
  2. Delete the disk that was created in step 1.
  3. Create a directory: mkdir /home/user/freenas
  4. Mount the shared directory from FreeNAS
     sudo mount -t nfs 192.168.31.241:/mnt/test /home/user/freenas
  5. Import the image of the FreeNAS VM to the Proxmox VE VM as unused disk (vmid 103, storage local)
    qm importdisk 103 /home/user/freenas/ubuntu.raw local --format qcow2
  6. In the GUI of Proxmox VE:
    1. Go to the hardware view of your new virtual machine
    2. Set the display to SPICE
    3. Double click on the unused disk to attach it and choose Virtio as bus
    4. Go to the options view of your new virtual machine
    5. Choose your new virtio disk as bootdisk

Qemu/KVM

Create a new VM on Proxmox VE, add the existing disk image to this new VM, set the boot order, and start it.


Minimal example

First a VM has to be created. 120 is an unused VM ID.

qm create 120 --bootdisk scsi0 

someImage.img is an image that was created before. someStorage is the name of a storage as listed in pvesm status.

qm importdisk 120 someImage.img someStorage

qm importdisk adds the image as an unused disk to the virtual machine. Thus, making it the bootdisk is still necessary.

qm set 120 --scsi0 someStorage:vm-120-disk-0

Further information

If your use case is not covered by this article, check out the additional ways to migrate to Proxmox VE in the wiki. It gathers years of knowledge for cases that are not as common as the ones explained here.