Migration of servers to Proxmox VE
 
== SSH Migration of a Windows machine to a VM raw file ==
 
Assumptions:
 
 
* You have a physical machine in which you imported the [[File:Mergeide.zip | mergeide.reg]], as shown on top of this page
 
* The physical machine has disk device /dev/sda (read this with "fdisk -l", and look for "hda" or "sda" or similar, NOT dm- or other...)
 
* You have already created a virtual machine with a raw type, file-based disk in stopped state.
 
* The VM has VMID 101 (read this from the VM list in the web interface)
 
 
Download [http://www.sysresccd.org SystemRescueCD], burn it and reboot the physical machine with it in the CD tray.
 
 
At its bash prompt, give eth0 an IP, or use DHCP:
 
 
To assign IP: 
 
<pre>ifconfig eth0 192.168.1.10 netmask 255.255.255.0 up  (use IP on same subnet as Proxmox VE server) </pre>
 
 
To use DHCP:
 
<pre>dhcpcd eth0</pre>
 
 
To start the image process on the physical machine: 
 
<pre>dd if=/dev/sda | ssh root@proxmox dd of=/var/lib/vz/images/vmid/diskname.raw</pre>
 
Replace root@proxmox with root@&lt;your Proxmox VE IP&gt;, and change vmid/diskname to the VMID number and the name you want to give the disk. For example, with VMID 101 and a disk called vm-101-disk-1.raw this would be:
 
<pre> dd if=/dev/sda | ssh root@proxmox dd of=/var/lib/vz/images/101/vm-101-disk-1.raw</pre>
 
 
Once this completes you can assign this disk file to your VM on the Proxmox VE interface.
 
 
Tested with an 8 GB physical server over a gigabit network; the VM was up and running in about 10 minutes.
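On slower links it can pay off to compress the stream in flight. As a sanity check you can rehearse the pipeline shape locally on a scratch file before pointing it at /dev/sda; this is a sketch added here for illustration, not a command from the original article:

```shell
# Rehearse a compressed dd pipeline on a scratch file (a stand-in for /dev/sda).
# In the real migration, everything after "gzip -c |" would run remotely, e.g.:
#   dd if=/dev/sda bs=1M | gzip -c | ssh root@proxmox 'gunzip -c | dd of=... bs=1M'
src=$(mktemp)
dst=$(mktemp)
head -c 2097152 /dev/urandom > "$src"          # 2 MB of random data as a fake disk
dd if="$src" bs=1M status=none | gzip -c | gunzip -c | dd of="$dst" bs=1M status=none
cmp "$src" "$dst" && echo "copy verified"
rm -f "$src" "$dst"
```

Note that on a gigabit LAN the compression itself can become the bottleneck, so this variant mainly helps on 100 Mbit or WAN links.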
 
 
== Migrate a Windows machine to a VM logical volume using netcat ==
 
Assumptions:
 
 
* You have a physical machine in which you imported the [[File:Mergeide.zip | mergeide.reg]], as shown on top of this page
 
* The physical machine has disk device /dev/sda (read this with "fdisk -l", and look for "hda" or "sda" or similar, NOT dm- or other...)
 
* You have already created a virtual machine with a disk on LVM, larger in size than the physical machine one (at least 1 GB more), STOPPED, do NOT start it during this procedure!
 
* The VM has VMID 103 (read this from the VM list in the web interface)
 
* The LVM VG is named SHARED-VG (read this with: "vgdisplay -s")
 
* So the LV partition is /dev/SHARED-VG/vm-103-disk-1 (read this with: "lvdisplay |grep LV|grep 103")
 
 
Download [http://www.sysresccd.org SystemRescueCD], burn it and reboot the physical machine with it in the CD tray.
 
 
At its bash prompt, give eth0 an IP address on the same subnet as the Proxmox VE server (which is 10.0.0.27 in the following example):
 
<pre>ifconfig eth0 10.0.0.17 netmask 255.255.255.0 up</pre>
 
then assign root user a password, so you can log in via ssh:
 
<pre>passwd</pre>
 
 
Log in via SSH on both the physical machine and the Proxmox VE server.
 
 
On the Proxmox VE server, give the command:
 
<pre>date; netcat -w30 -vvnlp 3333 | gzip -dc > /dev/SHARED-VG/vm-103-disk-1; date</pre>
 
and on the physical machine give the command:
 
<pre>dd if=/dev/sda | gzip -c | nc -w30 -vvn 10.0.0.27 3333</pre>
 
You have to start the second command WITHIN 30 seconds of issuing the first one (the -w30...)!
 
 
This way, the Proxmox VE machine waits for a stream of bytes on TCP port 3333, then gunzips it and redirects it to the VM's virtual disk.
 
 
The physical machine outputs its ENTIRE disk as a stream of bytes, gzips it, and passes it via netcat to the Proxmox VE server.
 
 
The two date commands are useful to record when the process starts and ends.
 
 
Tested migrating a 75 GB disk in about 90 minutes on a 100 Mbit LAN. The VM started and worked as expected, after recognizing the new IDE controllers at first boot.
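Before booting the VM it is worth verifying the transfer. Since the LV is larger than the source disk, compare checksums over the source's exact byte count only. The device names in the comments are the ones assumed above (/dev/sda, /dev/SHARED-VG/vm-103-disk-1); the sketch below runs on two scratch files so it can be tried safely:

```shell
# Verify a block-level copy: checksum the source, then the same number of
# bytes from the (possibly larger) target. With real devices this would be:
#   sha256sum /dev/sda                                      (physical machine)
#   head -c <sda-size-in-bytes> /dev/SHARED-VG/vm-103-disk-1 | sha256sum
src=$(mktemp)
dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"      # fake source disk
cat "$src" > "$dst"
head -c 4096 /dev/zero >> "$dst"           # target is larger, like the LV
size=$(stat -c %s "$src")
sum_src=$(sha256sum "$src" | cut -d' ' -f1)
sum_dst=$(head -c "$size" "$dst" | sha256sum | cut -d' ' -f1)
[ "$sum_src" = "$sum_dst" ] && echo "checksums match"
rm -f "$src" "$dst"
```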
 
 
An alternate method for Windows VMs is using '''ntfsclone''' with '''ncat'''.  First, create the actual VM, and boot it with the System Rescue CD.  Partition it to match the partition structure of the host machine.  Once you have the partitions built out, you can copy the partitions directly from the source machine to the new VM.  This will only copy used data.  The commands are:
 
 
On the destination system :
 
# nc -l -p 1234 | ntfsclone -r -O /dev/sda3 -
 
 
On the source system :
 
# ntfsclone -s -o - /dev/sdb1 | nc DestSrvIP 1234
 
 
== Migration to a smaller partition on a VM ==
 
While it's quite easy to "clone" a partition to another (real or virtual) one of the same or bigger size, it's not so simple when the destination partition is smaller than the original one. Fortunately this is possible, and at least one proven method follows, but you have to take care of some preconditions:
 
* Since you want to clone to a smaller partition, we will operate at the file system level, copying all the files from the source file system to the destination one.
 
* So, we have to make sure that the destination partition '''has''' enough room for all the files, preferably with some free space left over.
 
* The cloning is not possible directly, i.e disk-to-disk, but we have to "save" the source partition, and then "restore" it on the destination one.
 
* Make sure the tools used are known to handle the file systems involved correctly, including symlinks, hardlinks, file-system-specific attributes, and so on.
 
 
=== FSArchiver ===
 
One free tool you can use for this is [http://www.fsarchiver.org FSArchiver], a system tool that allows you to save the contents of a file system to a compressed archive file. The file system can be restored on a partition of a different size, and it can be restored on a different file system. Unlike tar/dar, FSArchiver also creates the file system when it extracts the data to partitions. Everything is checksummed in the archive in order to protect the data. If the archive is corrupt, you just lose the current file, not the whole archive. FSArchiver is released under the GPL-v2 license. It's still under heavy development, so it must not be used on critical data. So, you've been warned. The latest '''FSArchiver''' should be in the latest [http://www.sysresccd.org SystemRescueCD], although you can also obtain it from your favorite recent distribution.
 
 
When cloning NTFS, '''be sure''' to use version 0.6.10 or later (or a patched earlier version), because there was a bug that caused errors with NTFS junctions (something like Linux symlinks).
 
 
If your origin physical server is configured in RAID-1 using "fake RAID" (motherboard BIOS RAID), see the section about it in this article.
 
 
=== mergeide ===
 
As said above, install [[File:Mergeide.zip | mergeide.reg]] on the physical Windows machine (see [http://support.microsoft.com/kb/314082/en-us Microsoft KB article] for details) to provide support for the natively supported IDE controllers in Windows. Without this, cloned XP booting failed for me.
 
 
== Running FSArchiver from SystemRescueCd ==
 
Use the latest SystemRescueCD available, that has a recent FSArchiver version:
 
 
Confirm the version is right with:
 
# fsarchiver -V
 
 
it should be at least "0.6.10", particularly if you are cloning NTFS file systems. Then run:
 
# fsarchiver probe simple
 
 
which will report your current disk/partition layout in a simple, understandable way, like:
 
 
<pre>
[======DISK======] [=============NAME==============] [====SIZE====] [MAJ] [MIN]
[sda             ] [WDC WD5002ABYS-5               ] [    15.00 GB] [  8] [  0]

[=====DEVICE=====] [==FILESYS==] [======LABEL======] [====SIZE====] [MAJ] [MIN]
[loop0           ] [squashfs   ] [<unknown>        ] [  671.85 MB] [  7] [  0]
[sda1            ] [ntfs       ] [System           ] [   15.00 GB] [  8] [  1]
[ramzswap0       ] [swap       ] [<unknown>        ] [  248.47 MB] [251] [  0]
</pre>
 
 
You have to provide a folder to save the partition "backup", which will be quite large (FSArchiver has several compression levels, but to be safe make sure there is enough free space to accommodate all the uncompressed files). If you need to reach a "tmpfolder" Windows share, you have to install the '''smbfs''' package first (standard repositories), so that you can then run:
 
<pre>
# mkdir /mnt/tmpfolder
# mount -t smbfs //windows/tmpfolder /mnt/tmpfolder -o user=username
</pre>
 
 
giving a suitable password when asked.
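Before running fsarchiver savefs, a quick sanity check that the destination has enough room avoids a failed run halfway through. This is a generic sketch added for illustration; /mnt/tmpfolder and the 15 GB figure are illustrative values matching the example partition above, and /tmp is used so the check can be tried anywhere:

```shell
# Check free space on the destination before saving the archive.
# TARGET would be your mounted share (e.g. /mnt/tmpfolder); /tmp is used here
# only so the check can be tried anywhere. NEED_GB is the (illustrative)
# used size of the source file system.
TARGET=/tmp
NEED_GB=15
avail_kb=$(df -Pk "$TARGET" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge $((NEED_GB * 1024 * 1024)) ]; then
    echo "enough space on $TARGET for ${NEED_GB} GB"
else
    echo "WARNING: less than ${NEED_GB} GB free on $TARGET" >&2
fi
```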
 
 
=== Backup the partition ===
 
Now you have to perform the "backup". '''BE CAREFUL: the first path is the backup file to save, the second is the source partition; do not swap them.'''
 
 
I used:
 
# fsarchiver savefs -v -o /mnt/tmpfolder/physical.fsa /dev/sda1
 
 
(-v is verbose and shows each file as it is backed up; if you have a lot of files and a slow video card, this can slow down the entire process considerably)
 
 
Then (if no errors were reported) boot the same live CD in a KVM VM with an empty 15 GB virtual disk, better if a bit larger, attached as virtio, so it appears as /dev/vda. After mounting the SMB share in the same way, run GParted (included in SystemRescueCd; run startx to enter the graphical interface), create an empty NTFS partition there, /dev/vda1, and set the boot flag.
 
 
=== Restore the partition ===
 
Then run
 
 
'''Note:''' I use '''/dev/vda1''' here while the original was '''/dev/sda1''', and '''id=0''' because I restore the '''first''' partition stored in physical.fsa (yes, it may store more than one) as /dev/vda1.
 
 
<pre>
# fsarchiver restfs -v /mnt/tmpfolder/physical.fsa id=0,dest=/dev/vda1
</pre>
 
 
Check that no restore errors are reported. In my case it was quite quick and just worked.
 
 
=== Successful cases ===
 
* I successfully cloned a Windows XP Professional machine from a physical 40 GB (12 GB used) partition to a virtual (KVM) 15 GB one, stored on Proxmox VE LVM. The archive file was around 7 GB. (I had just 2 hard-link restore errors, but the log showed they affected non-critical files, although I would prefer 0 errors...)
 
* I successfully cloned a Windows 2003 AD controller machine from a physical 250 GB (5 GB used) partition to a virtual (KVM) 15 GB one, stored on Proxmox VE LVM. The archive file was around 3.4 GB. (0 restore errors of any kind.)
 
* Cloned a physical Ubuntu 9.04 desktop from a 32 GB RAID5 disk to a 12 GB virtual KVM disk. Just had to reinstall GRUB to make it boot inside KVM. No errors.
 
 
== SelfImage ==
 
 
Warning: SelfImage does not take a snapshot of the physical disk, so shut down services and avoid writes during the copy process.
 
 
=== Prepare the Windows operating system ===
 
 
Install [http://www.chip.de/downloads/SelfImage_30991577.html SelfImage] on the physical Windows machine. Execute the [[File:Mergeide.zip | mergeide.reg]] (see [http://support.microsoft.com/kb/314082/en-us Microsoft KB article] for details) to provide support for the natively supported IDE controllers in Windows.
 
 
=== Prepare the Proxmox VE VM ===
 
 
Create a new KVM virtual machine with a suitable disk size. It is recommended to choose a size 1 GB bigger than the physical disk that you want to migrate, to make sure all your data fits on the virtual disk.
 
 
Use VNC or SSH to connect to a console on the Proxmox VE host. Export the qcow2 disk in the VM's image directory with NBD:
 
 
<pre>qemu-nbd -t /var/lib/vz/images/xxx/vm-xxx-disk.qcow2 -p 1024</pre>
 
 
where xxx is the VM ID. (Note: with recent QEMU versions you may need to specify the image format explicitly, e.g. add -f qcow2 here, or -f raw when exporting a logical volume.)
 
 
If you use LVM as storage model, you can also export the logical volume created by Proxmox VE via the web interface
 
 
<pre>qemu-nbd -t /dev/VolumeGroup/vm-xxx-disk-1 -p 1024</pre>
 
 
=== Do the migration ===
 
 
Start SelfImage on the physical machine, choose to image entire hard disk, not partition. On output file select NBD with your '''PVE host IP''' and '''port 1024''' as parameters. Click Start.
 
 
When imaging is complete press CTRL+C on the PVE console to stop the export of your virtual disk file.
 
 
Start the virtual machine and have fun.
 
 
== Windows backup ==
 
 
This may be useful if every other approach fails, or looks too complicated to you, as it does not involve any software other than Windows and PVE itself.
 
It was used successfully to convert a live Windows 2000 Server SP4 to PVE 1.5, and was inspired by a blog post about a similar migration to Xen, but it proved to be effective also for PVE, and probably any other VE around.
 
 
Here is an [http://replay.waybackmachine.org/20090203053232/http://blog.geekpoet.net/2008/05/manual-p2v-migration-of-windows-server.html archive of the original blog post]
 
 
Here is the '''[[Media:ManualP2V-with-NewSid.zip|cache]]''' of the post along with the NewSID file.
 
 
=== The Workflow ===
 
 
This method is about
 
* making a backup on your physical machine (pm)
 
* creating a virtual machine (vm) from scratch, identical as possible to the pm
 
* installing the same base OS of the pm on the vm, from the original supports
 
* restoring the backup of the pm on the vm
 
 
Easy, huh? Well, it works! Of course there are some points to take care of, and some drawbacks too, but it is a very simple method, and you may find the drawbacks (if any) worth the easiness you get in return!
 
 
You don't need to get new software or learn fancy Linux commands: all you need to know (and it's assumed you ALREADY know) is
 
* how to create a vm in Proxmox VE (KVM)
 
* how to make a backup/restore from windows (with the bundled software!)
 
 
OK, follow me:
 
 
=== Backup the physical machine ===
 
The program you need is NTBackup, on a standard installation you can find it under the menu
 
* Start/Program Files/Accessories/System utilities/Backup
 
* if it has been removed, you can add it back from your windows CD
 
 
Start a backup that includes
 
* ALL the windows hard drives
 
* the system state (this includes the whole registry, IPs, hostname, and so on)
 
 
Then, choose the "file" option as a destination
 
* be sure to create this file on a network share, or a USB drive but NOT on the disks you are backing up
 
* be sure that the network share has enough space to hold the BIG file...
 
 
NOTE: Some services may "lock" some files in a way that NTBackup won't be able to copy them (while they're in use):
 
* this is normal, and usually some of these locked files are:
 
** files used by the open programs the logged user has (e.g. NTBackup logs...)
 
** SQL data files, web server logs, and so on (usually running as services)
 
* You can find "locked" files running NTBackup once, and then looking at its log for lines containing a "will be ignored" string
 
** to find such lines in a BIG log file (mine had 250k lines) use the console FIND command ("c:\FIND /?" is your friend)
 
* the only workaround is, during the backup time frame:
 
** close all the open (interactive) programs the logged user has
 
** temporarily stop the services that are locking the files (if you can)
 
 
=== Cloning the virtual machine ===
 
 
* on PVE, you must create a vm configured just like the pm you want to migrate
 
** same OS (you have to use a CD or a ISO for that)
 
** same %WINDIR% folder name (e.g.: c:\windows or c:\winnt, etc.) (the source blog post doesn't mention this, but I did it...)
 
** same SP (service pack applied) level, hotfixes, etc.
 
** same CPU number
 
** same page file configuration (the source blog post doesn't mention this but it turned out to matter in my conversion)
 
** same drive layouts
 
*** number of disks and partitions, those on the vm must be at least equal in size to the pm ones
 
*** same drive letters mapping
 
** same network card number
 
 
Now, when your vm is just like your pm, except it doesn't have any of the programs and configuration the pm has:
 
* add '''one more temporary drive''' to hold (temporary) files during the migration, assign it a drive letter not used in the pm
 
* on this temporary drive, copy
 
** the file '''c:\boot.ini''' from the vm
 
** the whole folder '''c:\windows\repair''' or '''c:\winnt\repair''', or whatever your %WINDIR% is on your vm
 
** the backup file you previously did with NTBackup, stored somewhere on your network
 
 
Then, '''before restoring''', switch the vm NIC to a non-used network in PVE, e.g.:
 
* create a new fake bridge, not connected to any eth<x>,
 
* then delete the "working" vm NIC and
 
* re-add another NIC on the vm, linked to the new fake bridge
 
 
otherwise your new vm would "talk" to the real network / domain controllers, etc, and would result in a duplicate machine, duplicate IP address, and so on. This would be '''BAD'''.
 
 
=== Restore the backup on the virtual machine ===
 
 
Log on to the vm and perform a restore of the backup file you have on the temporary disk
 
* '''note''': choose "always replace the files on my computer" (see tools>options>restore in NTBackup)
 
 
'''BEFORE REBOOTING''' (at the end of the restore NTBackup asks you to reboot, hold down your mouse for a while...)
 
* restore
 
** the file '''c:\boot.ini''' from the vm
 
** the whole folder '''c:\windows\repair''' or '''c:\winnt\repair''', or whatever your %WINDIR% is on your vm
 
 
replacing those restored from the backup file.
 
 
=== Reboot the virtual machine ===
 
If you get any BSOD (blue screen), you may try to:
 
* boot in safe mode
 
* remove unrecognized components (e.g.: SCSI card)
 
* make windows detect those components and reinstall drivers
 
 
I didn't see any BSOD, so nothing to tell here.
 
 
Otherwise, log on to the vm,
 
* some component may not work, e.g. (mine) the mouse cursor may seem "out of sync" with the VNC one, be patient and wait because:
 
* windows may recognize more virtual components and install them automatically (PCI, video, network, and so on: after a reboot they will work as expected)
 
 
After that, you can
 
* turn off the pm
 
* on the vm add a NIC linked to the real network, assigning the real IP the pm had
 
* reboot and log into the domain
 
  
 
== VMware Converter ==
 
== VMware Converter ==
Line 374: Line 100:
 
==== Alternative Methods ====
 
==== Alternative Methods ====
 
* [https://github.com/QuinnyPig/OpenVZ-Template-Creators OpenVZ Template Creators] to create non Debian (we have [[Debian Appliance Builder|DAB]] for it) templates.
 
* [https://github.com/QuinnyPig/OpenVZ-Template-Creators OpenVZ Template Creators] to create non Debian (we have [[Debian Appliance Builder|DAB]] for it) templates.
 
== Windows systems specific issues ==
 
=== inaccessible boot device ===
 
Booting a virtual clone (IDE) of a physical Windows system partition may fail with a BSOD referring to the problem
 
<pre>
 
STOP: 0x0000007B (0xF741B84C,0xC0000034,0x00000000,0x00000000)
 
INACCESSIBLE_BOOT_DEVICE
 
</pre> This means that the source physical Windows machine had no support for the IDE controller, or at least not the one virtually provided by KVM (see this [http://support.microsoft.com/kb/314082/en-us Microsoft KB article] for details): as Microsoft suggests, create a mergeide.reg file ([[File:Mergeide.zip]]) on the physical machine and merge it into the registry '''before''' the P2V migration. It may not be necessary, but it does no harm and can save you lots of time and headaches.
 
 
Windows 2000: see [http://www.motherboard.windowsreinstall.com/problems.htm]
 
 
=== Disk booting tips ===
 
* check that your disk has the "boot flag" enabled (you can check this with GParted on NTFS disks, booting the VM from a live CD ISO; see this [http://gparted.sourceforge.net/docs/help-manual/C/gparted.html#gparted-fix-operating-system-boot-problems GParted manual page]). Maybe not so Windows-specific, but better to mention it here.
 
* be sure that boot.ini on your system partition is still suitable to your current disk/partition configuration (see this [http://support.microsoft.com/kb/124550 kb article] and this [http://support.microsoft.com/kb/102873 kb article])
 
 
=== Hidden non present devices ===
 
If your source physical machine had network interface controllers (NICs) with static IPs assigned and you then clone the system to a VM, you will need to add a virtual Ethernet controller, provided by KVM, but Windows won't let you assign the same fixed IP to the new virtual NIC, because (it reports) there is a hidden, non-present device already configured with the same IP (see this [http://support.microsoft.com/kb/241257 Microsoft KB article]).
 
 
Of course it should be removed from Windows' "known" hardware, but you won't find it in Device Manager, even if you enable "show hidden devices". Why? Because Microsoft is complicated :-)
 
 
To be able to see, and then remove the old physical NIC from "device management", you have to
 
* open a console window
 
* type: <pre>set devmgr_show_nonpresent_devices=1</pre> (no spaces around the "=") and press enter
 
* then, in the same console (do not close it) type: <pre>start devmgmt.msc</pre> and press enter
 
* this will open the same Device Manager panel you could have opened from the Control Panel, but now, if you set the "show hidden devices" flag, the "nonpresent" devices will appear (grayed out), and you will be able to remove them, freeing the IP for your virtual NIC.
 
 
=== Other suggested pages ===
 
* [[Acpi_kvm | Acpi setup]]
 
* [[Windows_2003_guest_best_practices | Windows 2003 guest best practices]]
 
* [[Paravirtualized_Network_Drivers_for_Windows | Paravirtualized Network Drivers for Windows]]
 
* [[Paravirtualized_Block_Drivers_for_Windows | Paravirtualized Block Drivers for Windows]]
 
 
== Fake Raid specific issues ==
 
If you have a physical PC to clone and you use a method that requires access to the file system from GNU/Linux (like with FSArchiver), you can run into trouble if you have a so-called "fake RAID", that is, a RAID created using the motherboard BIOS configuration.
 
Let's say we have a RAID1. In short, GNU/Linux (e.g. SystemRescueCd) sees the devices, sees that the disks have some RAID metadata, and when you try to mount them it uses the "mdadm" machinery, which can't work since that RAID format is totally different.
 
To avoid this, you have to boot with the boot parameter 'nodmraid'.
 
For example, if you boot with SystemRescueCd, at the menu where you choose what to boot, hit the "Tab" key.
 
You will have something like:
 
<pre>ifcpu64.c32 rescue64 scandelay=1 -- rescue32 scandelay=1</pre>
 
modify it (on both sides of the "--" separator) like this:
 
<pre>ifcpu64.c32 rescue64 scandelay=1 nodmraid -- rescue32 scandelay=1 nodmraid</pre>
 
 
== Physical Linux server to Container ==
 
 
Take a look at these links if you want to migrate a physical system or Xen VM into an OpenVZ container:
 
*[http://howtoforge.com/how-to-convert-physical-systems-and-xen-vms-into-openvz-containers-debian-etch How To Convert Physical Systems And Xen VMs Into OpenVZ Containers]
 
*[http://wiki.openvz.org/Physical_to_container Physical_to_container]
 
 
Here we explain how to do a Physical-to-Virtual migration from a Linux installation into a Proxmox VE LXC container.
 
 
Log into the machine you want to migrate into a PVE container as root and first stop any running services such as web servers or databases e.g. `systemctl stop apache2`, `systemctl stop mysql` etc. You need to be root (or run with sudo) so that tar can access and archive all files correctly using commands such as:
 
 
# cd /

# tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run /

(The extra --exclude flags keep virtual file systems such as /proc and /sys out of the archive; they contain nothing worth migrating and are provided afresh inside the container.)
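tar's --exclude patterns decide what ends up in the template tarball. A quick way to convince yourself of what a pattern excludes is to rehearse on a scratch tree; all paths below are illustrative:

```shell
# Rehearse tar's --exclude behaviour on a scratch tree before archiving "/".
root=$(mktemp -d)
mkdir -p "$root/etc" "$root/proc"
echo conf > "$root/etc/fstab"
echo junk > "$root/proc/kcore"
# Exclude the proc directory; tar skips matched directories without descending.
tar -cpzf "$root.tar.gz" --exclude='./proc' -C "$root" .
tar -tzf "$root.tar.gz" | grep -q 'etc/fstab'  && echo "etc/fstab archived"
tar -tzf "$root.tar.gz" | grep -q 'proc/kcore' || echo "proc/kcore excluded"
rm -rf "$root" "$root.tar.gz"
```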
 
 
The tar -p switch is critical for preserving file permissions. Copy this tarball into the container template cache directory on the PVE server, which defaults to /var/lib/vz/template/cache. You can then create a new container using this tarball as a template via the web UI or using pct. If you choose the PVE web UI to create the new container, be sure to uncheck the 'Unprivileged container' option (under the General tab), or else you are likely to run into file ownership issues. Creating a new privileged i386 container on ZFS storage with container ID 101 using pct would look something like this:
 
 
# pct create 101 local:vztmpl/backup.tar.gz -arch i386 -hostname your.hostname.here -onboot 1 -rootfs local-zfs:300 -memory 4096 -cores 2
 
 
Note this command doesn't configure networking. I enable networking after creating the container via the Proxmox web interface (it can also be done on the command line, e.g. with pct set 101 -net0 name=eth0,bridge=vmbr0,ip=dhcp).
 
 
  
 
  
= Further information =


== Move OpenVZ containers to Proxmox VE ==

OpenVZ is deprecated and was superseded by the Linux Containers (LXC) project, which uses mainline kernel features.

You can move existing OpenVZ containers (CT) to Proxmox VE with vzdump.

Then check out the [[Convert OpenVZ to LXC]] article.


If your use case is not covered by this article, check out the [[Additional ways to migrate to Proxmox VE|additional ways to migrate to Proxmox VE]] article in the wiki. It gathers years of knowledge for cases which are not as common as the ones explained here.
 
  
 
[[Category: HOWTO]]

Revision as of 12:20, 18 March 2020

Introduction

There are various ways to migrate existing servers to Proxmox VE. They can be divided into two categories:

Physical-to-Virtual (P2V)
Migration of physical servers to Proxmox VE
Virtual-to-Virtual (V2V)
Migration of virtual machines and containers from other hypervisors to Proxmox VE

Physical-to-Virtual (P2V)

Follow these steps to do a P2V migration and turn a physical machine into a Proxmox VE virtual machine that uses Qemu and KVM.

Clonezilla Live CDs

This method is fast, reliable and OS independent as it uses live CDs.

  • Get a live Linux CD like Clonezilla.
  • Prepare the source host: make sure that you have standard IDE drivers enabled (for WinXP/Win2003 use the mergeide.reg (File:Mergeide.zip)); for W2k I followed this link (solution 2 worked for me): [1]
  • Now, boot the physical host with Clonezilla, go for beginner mode and select device - device and then remote, just follow the wizard
  • On the Proxmox VE host, prepare a KVM guest, make sure you got a big enough IDE disk assigned (add 1 GB extra to be on the safe side) and also boot this KVM guest with the live CD and execute a shell.
  • Become root and run fdisk (fdisk /dev/sda) to make sure that sda is there, then exit fdisk with 'w'. This was essential.
  • Now enter all the commands that the wizard on the source server tells you (configure the network and request the copy process)
  • After success, just change the boot device to the hard disk and start the VM. (The wizard copies only the data: I had an 80 GB disk with only 5 GB of data on it, so the whole process took just a few minutes on a gigabit network.) Windows will install all needed drivers automatically; only the Intel e1000 NIC drivers must be loaded from an ISO (I used one big driver ISO from Intel containing all NIC drivers).

VMware Converter

Physical (running) Windows server to Proxmox VE (KVM) using VMware vCenter Converter Standalone Client (V5)

Tested on an HP ProLiant ML350 G5 and G6

Prepare Windows

VMware vCenter Converter Standalone Client

  • Download here (version 5.x is free of charge, but you need to create an account and log in before downloading)

Mergeide.reg

Prepare location to save local image

This guide is using an external USB Hard Drive. You may also save to a mapped network share.

NOTE: Although the final image will be around the same size as the actual amount of data on the server, the Proxmox VE server should have enough free space to fit the total physical disk of the server, unless you plan to shrink the Windows disks once migrated to Proxmox VE.

VMware vCenter Settings

Launch VMware vCenter and use the following settings:

  • Source type: Powered-on machine
  • Specify the powered-on machine: This local machine
  • Select destination type: VMware Workstation or other VMware virtual machine
  • Select VMware Product: VMware Workstation 8.0.x
  • Name: Enter the desired name for the image
  • Select a location for the virtual machine: Browse to the USB or Network drive where the image will be saved.

The next screen shows the settings for the virtual machine.

Click on Advanced options, select the Post-conversion tab and make sure ‘Install VMware Tools on the destination virtual machine’ is NOT checked. We do not want to install VMware Tools.

Click next and Finish.

It will now convert your physical machine to a .vmdk file.

NOTE: Depending on your hardware, you may need to boot the .vmdk file using VMware Workstation or Player before moving the file to the Proxmox VE server. This allows Windows to install additional drivers for the disk controller. If prompted to convert the disk to Workstation 9.x compatibility, say Yes. You won't know whether you need this step until starting the Windows VM in the final step; if you get a blue screen during boot, you should try it.

Prepare the VM on Proxmox VE

Create a new KVM virtual machine. You’ll want to use similar CPU and memory as the physical system. In the Hard Disk menu, leave everything as default. We won’t be using the disk created by Proxmox VE. Finish creating the VM. Make note of the VMID. For this guide, we’ll be using 100 as an example.

Once the VMware converter has completed, disable all of the networks adapters on the physical server and shut down. Disabling the network adapters will avoid potential IP conflicts if you will start the physical server back into Windows after you have your new virtual server running.

Move the image to the Proxmox VE Server

Plug a USB Hard Drive into the server

From the Proxmox VE command line:

mkdir /mnt/usb
mount /dev/sdc1 /mnt/usb/
ls /mnt/usb

You should see the contents of the USB drive. In my case, the vmdk file was located in /mnt/usb/windows-server/

Converting to qcow2

We want to convert the .vmdk image file to qcow2. To do this, use the following command:

qemu-img convert -f vmdk /mnt/usb/windows-server/windows-server.vmdk -O qcow2 /var/lib/vz/images/100/windows-server.qcow2

This can take a while depending on the size of file and speed of your system.

Final Steps

Once the conversion is complete, we need to edit the configuration file for the VM.

<pre>
nano /etc/pve/local/qemu-server/100.conf
</pre>

In the line starting with ide0: we want to change vm-100-disk-1.raw,size=32G to windows-server.qcow2
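Assuming the default local directory storage, the resulting line should look something like this (the storage name may differ on your system):

```
ide0: local:100/windows-server.qcow2
```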

You may delete the empty disk created by Proxmox VE when you created the VM.

<pre>
rm /var/lib/vz/images/100/vm-100-disk-1.raw
</pre>

Start the VM and open the console. Windows should boot up normally. It’ll take a few minutes to detect the hardware changes. If the Windows system had a static IP address, you'll have to reconfigure the settings.

=== Alternative Methods ===

== Virtual-to-Virtual (V2V) ==

Follow these steps to do a V2V migration and move a virtual machine from another hypervisor to a Proxmox VE virtual machine that uses Qemu and KVM.

=== VMware ===

This explains the migration from a VMware ESXi 6.7 hypervisor to Proxmox VE 6.1. It was tested with guests running the following operating systems:

* Debian 10
* Windows 10 Pro
* Windows Server 2016 Standard

In VMware ESXi, navigate to your virtual machine in the tree on the left. Choose Actions->Export. This will download the disk images and information about your virtual machine in the .ovf format in the browser. Move those files to a storage that is accessible by your Proxmox VE host.

Go to the command line interface of Proxmox VE. Use the command "qm importovf" to import the virtual machine. This will create a new virtual machine, using cores, memory and VM name as read from the OVF manifest, and import the disks. You have to configure the network manually. You can find the syntax and an example of how to use this command on its man page.
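A minimal invocation could look like this; the VMID 130, the path and the storage name local-lvm are placeholders:

```shell
# Create VM 130 from the exported OVF manifest and import its disks to local-lvm
qm importovf 130 /path/to/exported-vm.ovf local-lvm
```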

If your guest is Windows, you additionally have to execute the following commands. This example assumes that your imported virtual machine has the ID 130.

* qm set 130 --bios ovmf
* sed -i 's/scsi/ide/g' /etc/pve/qemu-server/130.conf

This gives you a first working version. You can then improve your experience by installing additional drivers as explained in Windows 10 guest best practices.

=== HyperV ===

This explains the migration from a Hyper-V on Windows 10 hypervisor to Proxmox VE 6.1. It is tested with a Proxmox VE 6.1 guest.

Proxmox VE uses qemu-img to import disks. In my tests it supported .vhdx even though some sources don't explicitly list .vhdx as a supported format. There are additional options if this should stop working:

* Convert-VHD is a command-line tool that is built into Windows and converts .vhdx to .vhd. This format is supported by qemu-img.
* StarWind V2V Converter is a graphical tool that can create .qcow2 disks from Hyper-V guests.

In Hyper-V-Manager choose "Export..." on the right side and then a destination. While many files (for example for snapshots) will be exported, we are only interested in the .vhdx file(s). Move those files to a storage that is accessible by your Proxmox VE host. You will have to recreate the virtual machine settings yourself.

Go to the GUI of Proxmox VE and create a new virtual machine. We don't need the hard disk that the virtual machine creation wizard created. Delete it in the hardware options of the virtual machine.

Now go to the command line interface of Proxmox VE. Use the command "qm importdisk" to import the .vhdx virtual hard disk. You can find the syntax and an example of how to use this command on its man page.
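For example, assuming the new VM got the ID 130, the exported .vhdx file lies in /mnt/usb, and local-lvm is the target storage (all three are placeholders):

```shell
# Import the Hyper-V disk; it will be attached to VM 130 as an unused disk
qm importdisk 130 /mnt/usb/windows-vm.vhdx local-lvm
```

The imported disk then appears as an unused disk in the VM's hardware options, where you can attach it and set it as the boot disk.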

=== XEN ===

XEN also uses the qemu disk format, so it should work in the same manner as described in the VMware section above.

You can also use xenmigrate to do this.

=== Qemu/KVM ===

Create a new VM on Proxmox VE, add the existing disk image to this new VM, set the boot order, and start it.

==== Minimal example ====

First, a VM has to be created. Here 120 is an unused VM ID.

<pre>
qm create 120 --bootdisk scsi0
</pre>

someImage.img is an image that was created before. someStorage is the name of a storage as listed in pvesm status.

<pre>
qm importdisk 120 someImage.img someStorage
</pre>

qm importdisk adds the image as an unused disk to the virtual machine. Thus, making it the boot disk is still necessary.

<pre>
qm set 120 --scsi0 someStorage:vm-120-disk-0
</pre>

== Further information ==

If your use case is not covered by this article, check out the additional ways to migrate to Proxmox VE in the wiki. It gathers years of knowledge about cases that are less common than the ones explained here.