Moving disk image from one KVM machine to another

From Proxmox VE
Currently moving disks between VMs is only possible via the command line. In most cases moving a disk to a different VM is as simple as moving a configuration line from one file to another and renaming a file or volume. However, there are cases where special care has to be taken (particularly when using LVM storages).
== Step 1: Move Disk to Target Storage ==
In case you also want to move the disk to a different storage, the easiest way is to start with that, as you can do this via the GUI.
If the target storage is not available on the PVE host because you want to move the disk to a different cluster member, start by moving the disk to a shared storage, perform the remaining steps here, then use the GUI again to move the disk to the final target storage. You can even do this if the target host is not part of the same cluster, provided some shared storage can be used in between. Note that using the same storage on multiple clusters is not recommended and should only be done as a temporary measure; take care to avoid clashing VMIDs on the storage.
  
If the above does not work for you, the easiest path is to move the disk to a directory-based storage in .qcow2 format, manually move that file to a directory storage on the target host, add it to the target VM, and move it to the target storage at the end.
  
The steps below assume the disk is on the same host.
== Step 2: Find the disk config entry in the old VM ==

Find the configuration line of the disk you want to move and remember it; we'll move it to a different file later, in step 4.

Assuming you want to move the SCSI-1 disk from VM 400 to VM 2300, open <tt>/etc/pve/qemu-server/400.conf</tt> and find the following line:
<pre>
scsi1: tank:vm-400-disk-1,discard=on,size=16G
</pre>
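Looking the line up can also be scripted. The following is a minimal sketch (the config path and disk key are taken from the example above; <tt>find_disk_line</tt> is a hypothetical helper, not part of PVE):

```shell
# find_disk_line <config-file> <disk-key>
# Print the configuration line for one disk key (e.g. scsi1) from a VM
# config file. Sketch only; paths and key names follow the example above.
find_disk_line() {
    conf=$1
    key=$2
    grep "^${key}:" "$conf"
}
```

For the example, <tt>find_disk_line /etc/pve/qemu-server/400.conf scsi1</tt> would print the <tt>scsi1:</tt> line.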
== Step 3: Rename/Move the disk ==

This step is only necessary if the disk name contains the VM ID, which is the case for most storage types (iSCSI-Direct is an example where you can skip this step).

All of the examples below assume there's no disk on the target storage for that VM already. If there is, increase the trailing number so that the name is unique. E.g. if you already have a <tt>vm-2300-disk-1</tt> and a <tt>vm-2300-disk-2</tt>, then use <tt>vm-2300-disk-3</tt> instead.
=== For directory based storages (Directory, NFS, Gluster): ===
Find the path and rename the file.
For example, assuming the disk line was <tt>local:400/vm-400-disk-1.qcow2</tt>:
<source lang="bash">
# pvesm path 'local:400/vm-400-disk-1.qcow2'
/var/lib/vz/images/400/vm-400-disk-1.qcow2
# mkdir -p /var/lib/vz/images/2300
# mv /var/lib/vz/images/400/vm-400-disk-1.qcow2 /var/lib/vz/images/2300/vm-2300-disk-1.qcow2
</source>
=== For ZFS: ===
Assuming the storage is named <tt>tank</tt>, its pool property is <tt>tank/host/vms</tt>, and the disk line was <tt>tank:vm-400-disk-1</tt>:
<source lang="bash">
# zfs rename tank/host/vms/vm-400-disk-1 tank/host/vms/vm-2300-disk-1
</source>
=== For ceph: ===
Assuming the pool is named <tt>rbd</tt>, the disk line was <tt>myceph:vm-400-disk-1</tt>, and there's a monitor at the address 1.2.3.4, we use the following command:
<source lang="bash">
# rbd -p rbd -m 1.2.3.4 -n client.admin --keyring /etc/pve/priv/ceph/myceph.keyring --auth_supported cephx mv vm-400-disk-1 vm-2300-disk-1
</source>
If you only have one ceph storage local to your PVE cluster, or have a local ceph configuration for easier maintenance, you might be able to shorten this command to just:
<source lang="bash">
# rbd -p rbd mv vm-400-disk-1 vm-2300-disk-1
</source>
== Step 4: Update the configuration ==

Find the configuration line from step 2 again and delete it from there, then add it to the new VM with the VMID updated.

As in step 2, we're assuming we're moving the SCSI-1 disk from VM 400 to VM 2300, so we delete the following from <tt>/etc/pve/qemu-server/400.conf</tt>:
<pre>
scsi1: tank:vm-400-disk-1,discard=on,size=16G
</pre>

Now figure out a free storage slot on VM 2300 (by reading the ide/scsi/virtio/sata keys of the config and finding an unused number - beware of the limits on each of them, which you can find in the <tt>vm.conf</tt>(5) manpage or by simply testing it out in the GUI).
Assuming we want to add it as a virtio-3 disk, add the following line (note that the volume name reflects the new VMID after the rename in step 3):
<pre>
virtio3: tank:vm-2300-disk-1,discard=on,size=16G
</pre>
== Step 5: Storage related fixups ==

When using an LVM storage, there's the option to make use of LVM tags. If you're using this option (<tt>tagged_only</tt> in the storage's section in <tt>/etc/pve/storage.cfg</tt>), then you have to update the disk's tag.

For instance, if, as above, you moved <tt>vm-400-disk-1</tt> to become <tt>vm-2300-disk-1</tt>, you have to remove the tag <tt>pve-vm-400</tt> and add the tag <tt>pve-vm-2300</tt> on the LV. Assuming your storage is named <tt>pve</tt>, you can accomplish this with the following command:
<source lang="bash">
# lvchange --deltag pve-vm-400 --addtag pve-vm-2300 pve/vm-2300-disk-1
</source>
  
 
[[Category: HOWTO]]
 

Revision as of 11:32, 13 October 2016
