Moving disk image from one KVM machine to another

{{Note|Article is outdated! Please use "'''Disk Action''' -> '''Reassign Owner'''" for VMs or "'''Volume Actions''' -> '''Reassign Owner'''" for Containers.}}
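
On PVE versions that offer the "Reassign Owner" action, the same thing is available on the CLI via <tt>qm move-disk</tt> with the <tt>--target-vmid</tt> option; a minimal sketch (using the example VM 400 / VM 2300 and the scsi1 disk from the steps below, and assuming your version supports this option):
<source lang="bash">
# qm move-disk 400 scsi1 --target-vmid 2300   # needs a PVE version with --target-vmid support
</source>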

Currently moving disks between VMs is only possible via the commandline.
In most cases moving a disk to a different VM is as simple as moving a
configuration line from one file to another and renaming a file or volume.
However, there are cases where special care has to be taken (particularly when
using LVM storages).
  
== Step 1: Move Disk to Target Storage ==

If you also want to move the disk to a different storage, the easiest way
is to start with that, as you can do this via the GUI.
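
The GUI "Move disk" action can also be driven from the commandline with <tt>qm move_disk</tt>; a minimal sketch, where <tt>tank2</tt> is a placeholder name for the target storage and VM 400 / scsi1 are the examples used throughout this article:
<source lang="bash">
# qm move_disk 400 scsi1 tank2   # 'tank2' is a placeholder target storage name
</source>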
  
If the target storage happens to not be available on the PVE host because you
want to move the disk to a different cluster member, you'll want to start by
moving the disk to a shared storage, then perform the remaining steps here, then
use the GUI again to move the disk to the final target storage. You can even do
this if the target host is not part of the same cluster, provided it is possible
to use some shared storage in between.
Note that using the same storage on multiple clusters is not recommended and
should only be done as a temporary measure, and care has to be taken to avoid
clashing VMIDs on the storage.
  
If the above does not work for you, the easiest path is to move the disk to a
directory based storage in .qcow2 format, manually move that to a directory
storage on the target host, add it to the target VM, and move it to the target
storage at the end.

The steps below assume the disk is on the same host.

== Step 2: Find the disk config entry in the old VM ==

Find the configuration line of the disk you want to move and remember it; we'll
move it to a different file later in Step 4.

Assuming you want to move the SCSI-1 disk from VM 400 to VM 2300, open
<tt>/etc/pve/qemu-server/400.conf</tt> and find the following line:
<pre>
scsi1: tank:vm-400-disk-1,discard=on,size=16G
</pre>
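
Alternatively, <tt>qm config</tt> prints the same configuration without opening the file; a quick check with the example VMID from above:
<source lang="bash">
# qm config 400 | grep scsi1
scsi1: tank:vm-400-disk-1,discard=on,size=16G
</source>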

== Step 3: Rename/Move the disk ==

This step is only necessary if the disk name contains the VM ID, which is the
case for most storage types (iSCSI-Direct is an example where you can skip this
step).

All of the examples below assume there's no disk on the target storage for that
VM already. If there is, increase the trailing number so that the name is
unique. E.g. if you already have a <tt>vm-2300-disk-1</tt> and a
<tt>vm-2300-disk-2</tt>, then use <tt>vm-2300-disk-3</tt> instead.

=== For directory based storages (Directory, NFS, Gluster): ===
Find the path and rename the file.
For example, assuming the disk line was <tt>local:400/vm-400-disk-1.qcow2</tt>:
<source lang="bash">
# pvesm path 'local:400/vm-400-disk-1.qcow2'
/var/lib/vz/images/400/vm-400-disk-1.qcow2
# mkdir -p /var/lib/vz/images/2300
# mv /var/lib/vz/images/400/vm-400-disk-1.qcow2 /var/lib/vz/images/2300/vm-2300-disk-1.qcow2
</source>
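
If you want to double-check the rename, the same <tt>pvesm path</tt> command should now resolve the new volume ID (output assuming the example paths above):
<source lang="bash">
# pvesm path 'local:2300/vm-2300-disk-1.qcow2'
/var/lib/vz/images/2300/vm-2300-disk-1.qcow2
</source>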

=== For LVM (thin) storages: ===
Use
<source lang="bash">
# lvs
(...)
vm-400-disk-1 pve Vwi-aotz--  42.00g
</source>
to find the disk's logical volume and volume group on the host (the host is named <tt>mala</tt> in this example). The disk must appear in the source VM's config as well, so the following command should give a matching result.

<source lang="bash">
# cat /etc/pve/nodes/mala/qemu-server/400.conf | grep lvm
scsi1: local-lvm:vm-400-disk-1,size=42G
</source>

There are two crucial steps to move the disk. The first is renaming the logical
volume according to the target VM.
<source lang="bash">
# lvrename pve/vm-400-disk-1 pve/vm-2300-disk-1
Renamed "vm-400-disk-1" to "vm-2300-disk-1" in volume group "pve"
</source>

The second is to adapt the VM config files. Delete the line for this disk from
the source VM config
<source lang="bash">
# sed -i.backup '/vm-400-disk-1/d' /etc/pve/nodes/mala/qemu-server/400.conf
</source>
and add it to that of the target VM 2300, with the volume name changed according
to the <tt>lvrename</tt> above. This means:
<source lang="bash">
# echo "scsi1: local-lvm:vm-2300-disk-1,size=42G" >> /etc/pve/nodes/mala/qemu-server/2300.conf
</source>
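
Since <tt>sed</tt> was called with <tt>-i.backup</tt>, the original file is kept as <tt>400.conf.backup</tt>, so this edit is easy to undo. A quick check that both changes landed (reusing the example names):
<source lang="bash">
# grep vm-400-disk-1 /etc/pve/nodes/mala/qemu-server/400.conf || echo "line removed"
line removed
# qm config 2300 | grep scsi1
scsi1: local-lvm:vm-2300-disk-1,size=42G
</source>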

=== For ZFS: ===
Assuming the storage is named <tt>tank</tt>, its pool property is
<tt>tank/host/vms</tt>, and the disk line was <tt>tank:vm-400-disk-1</tt>:
<source lang="bash">
# zfs rename tank/host/vms/vm-400-disk-1 tank/host/vms/vm-2300-disk-1
</source>
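
The rename can be confirmed with <tt>zfs list</tt> (standard ZFS tooling; output assumes the dataset layout above):
<source lang="bash">
# zfs list -t volume -o name | grep vm-2300
tank/host/vms/vm-2300-disk-1
</source>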

=== For ceph: ===
Assuming the pool is named <tt>rbd</tt>, the disk line was
<tt>myceph:vm-400-disk-1</tt>, and there's a monitor at the address 1.2.3.4, we
use the following command:
<source lang="bash">
# rbd -m 1.2.3.4 -n client.admin --keyring /etc/pve/priv/ceph/myceph.keyring --auth_supported cephx mv rbd/vm-400-disk-1 rbd/vm-2300-disk-1
</source>
If you only have one ceph storage, local to your PVE cluster, or have a local
ceph configuration for easier maintenance, you might be able to shorten this
command to just:
<source lang="bash">
# rbd mv rbd/vm-400-disk-1 rbd/vm-2300-disk-1
</source>
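
Here, too, the rename can be confirmed, this time with <tt>rbd ls</tt> (add the same <tt>-m</tt>/<tt>-n</tt>/<tt>--keyring</tt> options as above if your setup needs them):
<source lang="bash">
# rbd ls rbd | grep vm-2300
vm-2300-disk-1
</source>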

== Step 4: Update the configuration ==

Find the configuration line from Step 2 again and delete it from there, then add
it to the new VM with the VMID updated.

As in Step 2, we're assuming we're moving the SCSI-1 disk from VM 400 to VM 2300,
so we delete the following from <tt>/etc/pve/qemu-server/400.conf</tt>:
<pre>
scsi1: tank:vm-400-disk-1,discard=on,size=16G
</pre>

Now figure out a free storage slot on VM 2300 by reading the ide/scsi/virtio/sata
keys of the config and finding an unused number. Beware of the limits on each of
them, which you can find in the <tt>qm.conf</tt>(5) manpage or by simply testing
it out in the GUI.
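
A quick way to see which slots are already taken is to grep the target config for disk keys (plain <tt>grep</tt>, using the example VMID):
<source lang="bash">
# grep -E '^(ide|sata|scsi|virtio)[0-9]+:' /etc/pve/qemu-server/2300.conf
</source>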

Assuming we want to add it as a virtio-3 disk, add the following line (note that
the volume name carries the new VMID after the rename in Step 3):
<pre>
virtio3: tank:vm-2300-disk-1,discard=on,size=16G
</pre>

== Step 5: Storage related fixups ==

When using an LVM storage, there's the option to make use of LVM tags. If you're
using this option (<tt>tagged_only</tt> in the storage's section in
<tt>/etc/pve/storage.cfg</tt>), then you have to update the disk's tag.

For instance, if like above you moved <tt>vm-400-disk-1</tt> to become
<tt>vm-2300-disk-1</tt>, you have to remove the tag <tt>pve-vm-400</tt> and add
the tag <tt>pve-vm-2300</tt> to the LV.
Assuming your volume group is named <tt>pve</tt>, you can accomplish this with
the following command:
<source lang="bash">
# lvchange --deltag pve-vm-400 --addtag pve-vm-2300 pve/vm-2300-disk-1
</source>
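
You can verify the tag change with <tt>lvs -o +lv_tags</tt> (example output assuming the names and size used above; middle columns elided):
<source lang="bash">
# lvs -o +lv_tags pve/vm-2300-disk-1
  LV             VG  Attr       LSize  (...) LV Tags
  vm-2300-disk-1 pve Vwi-aotz-- 42.00g (...) pve-vm-2300
</source>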
  
''Credits: from kawarmc in the Proxmox Forums''
 
 
[[Category: HOWTO]]
 