Migration of servers to Proxmox VE
Introduction
You can migrate existing servers to Proxmox VE.
Windows-specific P2V issues
inaccessible boot device
Booting a virtual clone (IDE) of a physical Windows system partition may fail with a BSOD reporting the error:
STOP: 0x0000007B (0xF741B84C,0xC0000034,0x00000000,0x00000000) INACCESSIBLE_BOOT_DEVICE
This means that the source physical Windows machine had no driver support for the IDE controller, or at least not for the one emulated by KVM (see the Microsoft KB article for details). As Microsoft suggests, create a mergeide.reg file on the physical machine and merge it into the registry before the P2V migration. It may not always be necessary, but it does no harm and can save you a lot of time and headaches.
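For example (a minimal sketch, assuming the file was saved as C:\mergeide.reg, a hypothetical path), you can merge it silently from a console on the physical machine:
regedit /s C:\mergeide.reg
The /s switch only suppresses the confirmation dialog; double-clicking the .reg file in Explorer works just as well.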
disk booting tips
- check that your disk has the "boot flag" enabled (you can verify this with GParted on NTFS disks by booting the VM from a live CD ISO; see the GParted manual page). This is not strictly Windows-specific, but it is worth a reminder here.
- be sure that boot.ini on your system partition still matches your current disk/partition configuration (see the related Microsoft KB articles)
If your source physical machine had network interface controllers (NICs) with static IPs assigned, and you then clone the system to a VM, you will need to add a virtual Ethernet controller provided by KVM. Windows, however, won't let you assign the same fixed IP to the new virtual NIC, because (it reports) a hidden, non-present device is already configured with that IP (see the Microsoft KB article).
The old NIC should simply be removed from Windows' "known" hardware, but you won't find it in Device Manager, even if you enable "show hidden devices". Why? Because Microsoft makes it complicated :-)
To see, and then remove, the old physical NIC in Device Manager, you have to:
- open a console window
- type the following and press Enter:
set devmgr_show_nonpresent_devices=1
- then, in the same console (do not close it), type the following and press Enter:
start devmgmt.msc
- this will open the same Device Manager panel you could have opened from the Control Panel, but now, if you set the "show hidden devices" flag, the non-present devices will appear (dimmed), and you will be able to remove them, freeing the IP for your virtual NIC.
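If you need this more than once, the same two commands can go into a small batch file (a minimal sketch; the file name show-hidden-devices.cmd is just an example):
@echo off
rem make Device Manager list non-present devices, then open it from the same environment
set devmgr_show_nonpresent_devices=1
start devmgmt.msc
The variable must be set in the same console (or script) that launches devmgmt.msc, which is why the two commands belong together.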
other suggested pages
- Acpi setup
- Windows 2003 guest best practices
- Paravirtualized Network Drivers for Windows
- Paravirtualized Block Drivers for Windows
Physical server to Proxmox VE (KVM)
There are free tools (like VMware Converter) available to migrate a physical host to VMware. So the first step is to migrate the physical server to a VMware image. The second step is to follow the howto "VMware to Proxmox VE (KVM)". Another option to migrate a Windows machine is to use the tool SelfImage. This way you don't have to struggle with VMware Converter and VMware, so it should be easier and faster.
How to migrate directly from a Windows physical machine to a VM raw file using SSH
assumptions:
* you have a physical machine in which you imported the mergeide.reg, as shown on top of this page
* the physical machine has disk device /dev/sda (read this with "fdisk -l", and look for "hda" or "sda" or similar, NOT dm- or other...)
* you have already created a virtual machine with a raw type, file-based disk and not running
* the VM has VMid 101 (read this from the VM list in the web interface)
download SystemRescueCD ( http://www.sysresccd.org ), burn it and reboot the physical machine with it in the cd tray.
At its bash prompt, give eth0 an ip, or use dhcp:
To assign ip:
ifconfig eth0 192.168.1.10 netmask 255.255.255.0 up (use ip on same subnet as proxmox server)
To use DHCP:
dhcpcd eth0
To start the image process on the physical machine:
dd if=/dev/sda | ssh root@proxmox dd of=/var/lib/vz/images/vmid/diskname.raw
Change root@proxmox to root@yourproxmoxip and change vmid/diskname to the VMid number and the name you want to give the disk. For example, VMid 101 with a disk called vm-101-disk-1.raw would be:
dd if=/dev/sda | ssh root@proxmox dd of=/var/lib/vz/images/101/vm-101-disk-1.raw
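If the copy is slow, a larger block size usually helps (a sketch of the same command, assuming the Proxmox server answers at 192.168.1.5, a hypothetical address):
dd if=/dev/sda bs=1M | ssh root@192.168.1.5 "dd of=/var/lib/vz/images/101/vm-101-disk-1.raw bs=1M"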
Once this completes you can assign this disk file to your VM on the Proxmox interface.
Tested with an 8 GB physical server over a gigabit network; the VM was up and running in about 10 minutes.
How to migrate directly from a Windows physical machine to a VM logical volume using NETCAT
assumptions:
- you have a physical machine in which you imported the mergeide.reg, as shown on top of this page
- the physical machine has disk device /dev/sda (read this with "fdisk -l", and look for "hda" or "sda" or similar, NOT dm- or other...)
- you have already created a virtual machine with a disk on LVM, larger than the physical machine's disk (at least 1 GB more); the VM must be STOPPED, do NOT start it during this procedure!
- the VM has VMid 103 (read this from the VM list in the web interface)
- the LVM VG is named SHARED-VG (read this with: "vgdisplay -s")
- so the LV partition is /dev/SHARED-VG/vm-103-disk-1 (read this with: "lvdisplay |grep LV|grep 103")
download SystemRescueCD ( http://www.sysresccd.org ), burn it and reboot the physical machine with it in the cd tray.
At its bash prompt, give eth0 an IP on the same subnet as the Proxmox server (in the following example the Proxmox server is 10.0.0.27):
ifconfig eth0 10.0.0.17 netmask 255.255.255.0 up
then assign the root user a password, so you can log in via SSH:
passwd
Log in via SSH to both the physical machine and the Proxmox server, then on the Proxmox server give the command:
date; netcat -w30 -vvnlp 3333 | gzip -dc > /dev/SHARED-VG/vm-103-disk-1; date
and on the physical machine give the command:
dd if=/dev/sda | gzip -c | nc -w30 -vvn 10.0.0.27 3333
You have to start the second command WITHIN 30 seconds of the first one (that is what the -w30 is for)!
This way, the Proxmox machine waits for a stream of bytes on TCP port 3333, gunzips it, and redirects it onto the VM's virtual disk.
The physical machine outputs its ENTIRE hard disk as a stream of bytes, gzips it, and passes it via netcat to the Proxmox server.
The two date commands are useful to record when the process starts and ends.
Tested by migrating a 75 GB hard disk in about 90 minutes on a 100 Mbit LAN. The VM started and worked as expected after recognizing the new IDE controllers at first boot.
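If you want to verify the copy, you can compare checksums over exactly the number of bytes the source disk holds (a sketch; 80026361856 is just a placeholder for the byte count printed by blockdev). On the physical machine, still booted from SystemRescueCD:
blockdev --getsize64 /dev/sda
dd if=/dev/sda bs=1M | md5sum
On the Proxmox server, limiting the read to the source size (the LV is larger than the source disk):
head -c 80026361856 /dev/SHARED-VG/vm-103-disk-1 | md5sum
The two md5 sums should match.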
How to migrate from a physical machine to a VM, on a smaller partition
While it's quite easy to "clone" a partition to another (real or virtual) of the same or bigger size, it's not so simple if the destination partition is smaller than the original one. Fortunately it is possible, and at least one proven method follows, but you have to take care of some preconditions:
- since you want to clone to a smaller partition, we will operate at the filesystem level, copying all the files from the source filesystem to the destination one.
- so, we have to make sure that the destination partition has enough room for all the files, preferably with some free space left over.
- the cloning is not possible directly, i.e. disk-to-disk; we have to "save" the source partition, and then "restore" it onto the destination one.
- we have to be sure that the tools used know very well how to copy files on the filesystems involved, including symlinks, hardlinks, filesystem specific attributes, and so on.
main tool: fsarchiver
One free tool you can use for this is fsarchiver, which "is a system tool that allows you to save the contents of a file-system to a compressed archive file. The file-system can be restored on a partition which has a different size and it can be restored on a different file-system. Unlike tar/dar, FSArchiver also creates the file-system when it extracts the data to partitions. Everything is checksummed in the archive in order to protect the data. If the archive is corrupt, you just lose the current file, not the whole archive. FSArchiver is released under the GPL-v2 license. It's still under heavy development so it must not be used on critical data." So, you've been warned. The latest fsarchiver should be included in the latest SystemRescueCD, although you can also get it from your favourite recent distribution.
When cloning NTFS, be sure to use version 0.6.10 (or a patched earlier version), because there was a bug that caused errors with NTFS junctions (something like Linux symlinks).
mergeide
As said above, merge mergeide.reg on the physical Windows machine (see the Microsoft KB article for details) to enable support for the natively supported IDE controllers in Windows. Without this, my cloned XP failed to boot.
running fsarchiver from ubuntu livecd
I used an Ubuntu 10.04 LiveCD, where I installed the fsarchiver package 0.6.8-1ubuntu0.1 from the universe repository. This repository is disabled by default, so you have to enable it first:
#sudo nano /etc/apt/sources.list
then, after uncommenting the universe lines, install it:
#sudo apt-get update
#sudo apt-get install fsarchiver
once installed, confirm the version is right by typing
# sudo aptitude show fsarchiver
it should be at least "0.6.10", or "0.6.8-1ubuntu0.1" (patched with the fixes from 0.6.9 and 0.6.10), particularly if you are cloning NTFS filesystems.
then run
#sudo fsarchiver probe simple
that will report your disks/partitions current layout in a simple, understandable way, like:
ubuntu@ubuntu:~$ sudo fsarchiver probe simple
[======DISK======] [=============NAME==============] [====SIZE====] [MAJ] [MIN]
[vda             ] [RED HAT VIRTIO SCSI DEVICE     ] [    15.00 GB] [  8] [  0]
[=====DEVICE=====] [==FILESYS==] [======LABEL======] [====SIZE====] [MAJ] [MIN]
[loop0           ] [squashfs   ] [<unknown>        ] [   671.85 MB] [  7] [  0]
[vda1            ] [ntfs       ] [System           ] [    15.00 GB] [  8] [  1]
[ramzswap0       ] [swap       ] [<unknown>        ] [   248.47 MB] [251] [  0]
Then, you have to provide a folder to save the partition "backup", which will be quite large (fsarchiver has several compression levels, but to be safe have enough free space there to accommodate the whole uncompressed filesystem...). If you need to reach a "tmpfolder" Windows share, you have to install the smbfs package first (standard repositories), so that you can then run:
#sudo mkdir /mnt/tmpfolder
#sudo mount -t smbfs //windows/tmpfolder /mnt/tmpfolder -o user=username
giving a suitable password when asked.
backup the partition
Then you have to perform the "backup". BE CAREFUL: the first path is the backup file to save to, the second is the source partition; do not swap them.
I used:
#sudo fsarchiver savefs -v -o /mnt/tmpfolder/physical.fsa /dev/sda1
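Before restoring, you can also inspect the archive to confirm it contains what you expect (fsarchiver provides an archinfo command for this):
#sudo fsarchiver archinfo /mnt/tmpfolder/physical.fsa
It prints the archive attributes and the filesystems stored inside, together with their id numbers (the id=0 used in the restore step below).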
Then (if no errors were reported) I booted the same LiveCD in a KVM VM with an empty 15 GB virtual disk (virtio), so /dev/vda. After installing smbfs and fsarchiver in the same way, I ran GParted (included on the LiveCD) and created an empty NTFS partition there, /dev/vda1.
restore the partition
Then, I ran
Note: I use /dev/vda1 here while the original was /dev/sda1, and id=0 because I am restoring the first partition stored in physical.fsa (yes, it may store more than one) onto /dev/vda1
#sudo fsarchiver restfs -v /mnt/tmpfolder/physical.fsa id=0,dest=/dev/vda1
Check that there are no restore errors. It was quite quick and just worked. Well, honestly, it took me a few attempts at first, but in the end it worked :-)
successful cases
- I successfully cloned a Windows XP Professional machine from a physical 40 GB partition (12 GB used) to a virtual (KVM) 15 GB one, stored on PVE LVM. The archive file was around 7 GB. (I had just 2 hardlink restore errors, reported in the log on non-critical files, although I would have preferred 0 errors...)
- I successfully cloned a Windows 2003 AD controller machine from a physical 250 GB partition (5 GB used) to a virtual (KVM) 15 GB one, stored on PVE LVM. The archive file was around 3.4 GB. (I had 0 restore errors of any kind.)
- Cloned a physical Ubuntu 9.04 desktop from a 32 GB RAID5 disk to a virtual 12 GB KVM disk. I just had to reinstall GRUB to make it boot inside KVM. No errors.
Physical (running) server to Proxmox VE (KVM) using SelfImage
Prepare the Windows operating system
Install SelfImage on the physical Windows machine. Execute the mergeide.reg (see Microsoft KB article for details) to provide support for the natively supported IDE controllers in Windows.
Prepare the Proxmox VE VM
Create a new KVM virtual machine with a suitable disk size. It is recommended that you choose a size 1 GB bigger than the physical disk you want to migrate, to make sure all your data fits onto the virtual disk.
Use VNC or SSH to connect to a console on the Proxmox VE host. Export the qcow2 disk in the VM's image directory with NBD:
qemu-nbd -t /var/lib/vz/images/xxx/vm-xxx-disk.qcow2
where xxx is the VM ID.
If you use LVM as the storage model, you can also export the logical volume created by Proxmox via the web interface:
qemu-nbd -t /dev/VolumeGroup/vm-xxx-disk-1
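You can verify that the export is up by checking that something is listening on TCP port 1024 on the PVE host (a quick sketch; 1024 is the default port qemu-nbd listens on, which is also what SelfImage will use below):
netstat -ltn | grep 1024
Leave the qemu-nbd process running in its console; it must stay in the foreground for the whole imaging.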
Do the migration
Start SelfImage on the physical machine and choose to image the entire hard disk, not a partition. As output file, select NBD with your PVE host IP and port 1024 as parameters. Click Start.
When imaging is complete, press CTRL+C on the PVE console to stop the export of your virtual disk file.
Start the virtual machine and have fun.
Physical (running) server to Proxmox VE (KVM) using Windows backup
This may be useful if every other approach fails or looks too complicated, as it does not involve any software other than Windows and PVE itself. It was used successfully to convert a live Windows 2000 Server SP4 to PVE 1.5, and was inspired by a blog post about a similar migration to XEN, but it proved effective for PVE too, and probably for any other virtualization environment around.
here is the original blog post: http://blog.geekpoet.net/2008/05/manual-p2v-migration-of-windows-server.html
The workflow
This method is about
- making a backup on your physical machine (pm)
- creating a virtual machine (vm) from scratch, identical as possible to the pm
- installing the same base OS as the pm on the vm, from the original installation media
- restoring the backup of the pm on the vm
Easy, huh? Well, it works! Of course there are some points to take care of, and some drawbacks too, but it is a very simple method, and you may find the drawbacks (if any) worth the ease you get in return!
You don't need to get new software or learn fancy Linux commands: all you need to know (and you are supposed to ALREADY know it) is
- how to create a vm in pve (kvm)
- how to make a backup/restore from windows (with the bundled software!)
Ok, follow me:
backup the physical machine
The program you need is NTBackup; on a standard installation you can find it under the menu
- Start/Program Files/Accessories/System utilities/Backup
- if it has been removed, you can add it back from your windows CD
Start a backup that includes
- ALL the windows hard drives
- the system state (this include the whole registry, ips, hostname, and so on)
Then, choose the "file" option as a destination
- be sure to create this file on a network share, or a usb drive but NOT on the disks you are backing up
- be sure that the network share has enough space to hold the BIG file...
NOTE: Some services may "lock" some files in a way that NTBackup won't be able to copy them (while they're in use):
- this is normal, and usually some of these locked files are:
- files used by programs the logged-on user has open (e.g. NTBackup logs...)
- SQL data files, webserver logs, and so on (usually owned by running services)
- You can find "locked" files by running NTBackup once and then looking in its log for lines containing the string "will be ignored"
- to find such lines in a BIG log file (mine had 250k lines) use the console FIND command ("c:\FIND /?" is your friend); see the example after this list
- the only workaround is, during the backup timeframe:
- close all the open (interactive) programs the logged-on user has
- temporarily stop the services that are locking the files (if you can)
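For example (a minimal sketch, assuming the NTBackup log was copied to C:\temp\backup01.log, a hypothetical path):
find /I "will be ignored" C:\temp\backup01.log > C:\temp\locked-files.txt
The /I switch makes the search case-insensitive; the redirected output gives you a short list of files to deal with before the real backup run.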
creating a similar virtual machine
- on PVE, you must create a vm configured just like the pm you want to migrate
- same OS (you have to use a CD or a iso for that)
- same %WINDIR% folder name (e.g.: c:\windows or c:\winnt, etc.) (the source blog post doesn't mention this but I did it...)
- same SP (service pack applied) level, hotfixes, etc.
- same CPU number
- same pagefile configuration (the source blog post doesn't mention this but it turned out to matter in my conversion)
- same drive layouts
- number of disks and partitions, those on the vm must be at least equal in size to the pm ones
- same drive letters mapping
- same network card number
Now, when your vm is just like your pm, except it doesn't have any of the programs and configuration the pm has:
- add one more temporary drive to hold (temporary) files during the migration, assign it a drive letter not used in the pm
- on this temporary drive, copy
- the file c:\boot.ini from the vm
- the whole folder c:\windows\repair or c:\winnt\repair, or whatever your %WINDIR% is on your vm
- the backup file you previously did with NTBackup, stored somewhere on your network
Then, before restoring, switch the vm NIC to a non-used network in PVE, e.g.:
- create a new fake bridge, not connected to any eth<x>,
- then delete the "working" vm NIC and
- re-add another NIC on the vm, linked to the new fake bridge
otherwise your new vm would "talk" to the real network / domain controllers, etc, and would result in a duplicate machine, duplicate ip address, and so on. This would be BAD.
restore the backup on the virtual machine
Log on to the vm and perform a restore of the backup file you have on the temporary disk
- note: choose "always replace the files on my computer" (see tools>options>restore in NTBackup)
BEFORE REBOOTING (at the end of the restore NTBackup asks you to reboot; hold off on that for a moment...)
- restore
- the file c:\boot.ini from the vm
- the whole folder c:\windows\repair or c:\winnt\repair, or whatever your %WINDIR% is on your vm
replacing those restored from the backup file.
reboot the virtual machine
if you got any BSOD (blue screens) you may try to:
- boot in safe mode
- remove unrecognized components (e.g.: SCSI card)
- make windows detect those components and reinstall drivers
I didn't see any BSOD, so, nothing to tell here.
Otherwise, log on to the vm,
- some components may not work; e.g. (in my case) the mouse cursor may seem "out of sync" with the VNC one. Be patient and wait, because:
- windows may recognize more virtual components and install them automatically (PCI, video, network, and so on: after a reboot they will work as expected)
After that, you can
- turn off the pm
- on the vm add a NIC linked to the real network, assigning the real IP the pm had
- reboot and log into the domain
VMware to Proxmox VE (KVM)
This howto describes the migration of a Windows 2003 Server (or Windows XP) from VMware to Proxmox VE (KVM).
Prepare the Windows operating system
Before you begin make a copy of the VMware image.
Remove VMware tools
Start the Windows virtual machine on VMware and remove the VMware tools via the Windows control panel. Reboot.
Enable IDE
- Start the Windows virtual machine on VMware and execute the mergeide.reg. The registry is now changed so that your Windows can boot from IDE, which is necessary for KVM.
- Make sure Atapi.sys, Intelide.sys, Pciide.sys, and Pciidex.sys are in the %SystemRoot%\System32\Drivers folder. If any are missing, they can be extracted from %SystemRoot%\Driver Cache\I386\Driver.cab (see the example after this list)
- see Microsoft KB article for details.
- Shutdown Windows.
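For example (a sketch using atapi.sys; repeat for the other files as needed):
expand "%SystemRoot%\Driver Cache\I386\Driver.cab" -F:atapi.sys "%SystemRoot%\System32\Drivers"
The -F: switch extracts only the named file from the cabinet.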
Prepare the disk file
My disk file used for this howto: win2003.vmdk
- Change your VMDK disk file to a single growable file with vmware-vdiskmanager.exe (located in your VMware installation path, e.g. "C:\Program Files\VMware\VMware Server"). Open a cmd prompt and go to the directory where your vmdk disk files are:
"C:\Program Files\VMware\VMware Server\vmware-vdiskmanager" -r win2003.vmdk -t 0 win2003-pve.vmdk
- Create a new KVM VM on the Proxmox VE web interface; do not start it, and take note of the VMID (e.g. 102)
- Copy the win2003-pve.vmdk to your Proxmox VE server into the following dir: /var/lib/vz/images/VMID (I used WinSCP as I worked on a Windows desktop)
- Convert the win2003-pve.vmdk file to the qcow2 format:
qemu-img convert -f vmdk win2003-pve.vmdk -O qcow2 win2003-pve.qcow2
- Or, copy the vmdk to an LVM based disk of the same size or larger
dd if=win2003-pve.vmdk of=/dev/VG_iscsi/vm-102-disk-1
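Either way, you can check the result before assigning it to the VM (a quick sketch using the qcow2 file from the step above, with VMID 102):
qemu-img info /var/lib/vz/images/102/win2003-pve.qcow2
It should report the expected virtual size and the qcow2 format; a tiny virtual size usually means you converted the small descriptor vmdk instead of the flat one (see below).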
If your KVM VM starts but does not boot (the VNC console reports something like "booting from hard disk...boot failed: not a bootable disk"), try a "preallocated virtual disk" instead of the "single growable file". To find out how:
launch "vmware-vdiskmanager" without parameters and you will see all the instructions. You select the disk type using the parameter "-t <disk type>", where:
Disk types:
0 : single growable virtual disk
1 : growable virtual disk split in 2GB files
2 : preallocated virtual disk
3 : preallocated virtual disk split in 2GB files
4 : preallocated ESX-type virtual disk
5 : compressed disk optimized for streaming
so, you have to use type 2
vmware-vdiskmanager -r whatever.vmdk -t 2 whatever-pve.vmdk
Be aware that vmware-vdiskmanager could now create 2 files:
- one whatever-pve.vmdk, which is really small: it is actually a text descriptor file, which you can open and see that it refers to the second file
- another vmdk, like whatever-pve-flat.vmdk, which has the full size of your disk; this is what you want to convert for KVM afterwards.
- please note that the second file has the real size of the hard disk as seen from inside the guest vm, so if you are copying it to LVM with dd, you have to have at least that much room available.
- This happened when converting a ubuntu.vmdk from a Linux VMware Server 2 to PVE using LVM storage, and the solution proposed above worked.
Adapt the new KVM Virtual Machine
- Go to the hardware tab on the web interface and remove the default harddisk file
- Add the migrated hard disk as IDE for Windows (only IDE works for Windows) or SCSI for Linux
- Start the new Virtual Machine via the management interface
- The first boot takes some time as some drivers have to be loaded
- Do not forget to install Paravirtualized_Network_Drivers_for_Windows
- Finished!
For comments or problems please post to the Proxmox VE forum or to the mailing list
XEN to Proxmox VE (KVM)
XEN also uses qemu disk format, so it should work in the same manner as described under "VMware to Proxmox VE (KVM)".
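For a file-backed Xen guest, you would typically convert the raw disk image and place it in the VM's image directory (a minimal sketch; xen-guest.img and VMID 104 are hypothetical):
qemu-img convert -f raw xen-guest.img -O qcow2 /var/lib/vz/images/104/vm-104-disk-1.qcow2
If the Xen guest lives on an LVM volume instead, you can dd the volume onto the target disk as shown in the VMware section.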
Move OpenVZ containers to Proxmox VE
You can move existing OpenVZ containers (container=VE=VPS) with vzmigrate or vzdump:
- Use vzmigrate offline migration to move your container to Proxmox VE
- Use vzdump to restore from a backup (see the example commands after this list)
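For example (a minimal sketch; the container ID 101, the target host 192.168.1.20 and the dump file name are hypothetical):
vzmigrate 192.168.1.20 101
or, with a dump taken on the old host and copied to the Proxmox VE server:
vzdump 101
vzrestore /var/lib/vz/dump/vzdump-101.tar 101
Both are run as root; vzmigrate needs SSH root access from the old host to the new one.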
After you moved your container you need to add the following line to the corresponding config file (see /etc/vz/conf/xyz.conf):
nano /etc/vz/conf/xyz.conf
Add/edit the following line:
ORIGIN_SAMPLE="pve.auto"
Now you can manage resource settings on the Proxmox VE management interface.
Physical server (or KVM, VMware, XEN or other) to Proxmox VE OpenVZ Container
External links:
Convert Windows to use SCSI (KVM)
This requires a 2.6.24 or 2.6.32 kernel; SCSI will not work on 2.6.18. This procedure is needed to get Windows to load and activate the SCSI drivers; once they are active you can switch the disk and it should Just Work. Tested with Windows 2003 SP1 on Proxmox VE 1.5 with the 2.6.32 kernel installed. I have not yet tested it for performance or reliability, but in theory it should be better; use at your own risk.
- With the Virtual Machine powered off, add a new temporary SCSI virtual disk
- Boot the Virtual Machine and make sure the drivers load for the new disk. You do not need to format it; just make sure you can see it in Device Manager
- Shut down the Virtual Machine, then remove the new disk and any IDE disks; do not delete the LVM volume or disk image file.
- Add the disks you just removed back as SCSI disks and boot the Virtual Machine.