Moving disk image from one KVM machine to another

From Proxmox VE
Latest revision as of 13:34, 26 May 2023

Note: Article is outdated! Please use "Disk Action -> Reassign Owner" for VMs or "Volume Actions -> Reassign Owner" for Containers.

Currently, moving disks between VMs is only possible via the command line. In most cases, moving a disk to a different VM is as simple as moving a configuration line from one file to another and renaming a file or volume. However, there are cases where special care must be taken (particularly when using LVM storages).

Step 1: Move Disk to Target Storage

If you also want to move the disk to a different storage, the easiest way is to start with that, as you can do it via the GUI.

If the target storage is not available on the current PVE host because you want to move the disk to a different cluster member, start by moving the disk to a shared storage, then perform the remaining steps here, and finally use the GUI again to move the disk to the final target storage. You can even do this if the target host is not part of the same cluster, provided some shared storage can be used in between. Note that using the same storage on multiple clusters is not recommended and should only be done as a temporary measure, and care has to be taken to avoid clashing VMIDs on the storage.

If the above does not work for you, the easiest path is to move the disk to a directory-based storage in .qcow2 format, manually move that to a directory storage on the target host, add it to the target VM, and move it to the target storage at the end.

The steps below assume the disk is on the same host.
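If you prefer the command line for the storage move itself, qm also provides a subcommand for it. The storage name "shared-nfs" below is a placeholder for your own shared storage; also note that the exact subcommand name has changed across PVE releases (newer versions also accept "qm disk move"), so check "qm help" on your version:

```shell
# qm move_disk 400 scsi1 shared-nfs
```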

Step 2: Find the disk config entry in the old VM

Find the configuration line of the disk you want to move and remember it; we'll move it to a different file later in step 4.

Assuming you want to move the SCSI-1 disk from VM 400 to VM 2300, open /etc/pve/qemu-server/400.conf, and find the following line:

scsi1: tank:vm-400-disk-1,discard=on,size=16G
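If you are unsure which line belongs to the disk, you can search the config for the volume name (path and names as in the running example):

```shell
# grep 'vm-400-disk-1' /etc/pve/qemu-server/400.conf
scsi1: tank:vm-400-disk-1,discard=on,size=16G
```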

Step 3: Rename/Move the disk

This step is only necessary if the disk's name contains the VM ID, which is the case for most storage types (iSCSI-Direct is an example where you can skip this step).

All of the examples below assume there's no disk on the target storage for that VM already. If there is, increase the trailing number so that the name is unique. E.g. if you already have a vm-2300-disk-1 and vm-2300-disk-2, then use vm-2300-disk-3 instead.
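To see which names are already taken on the target storage, you can list the target VM's volumes with pvesm (storage and VM ID as in the running example):

```shell
# pvesm list tank --vmid 2300
```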

For directory based storages (Directory, NFS, Gluster):

Find the path and rename the file. For example, assuming the disk line was: local:400/vm-400-disk-1.qcow2:

# pvesm path 'local:400/vm-400-disk-1.qcow2'
/var/lib/vz/images/400/vm-400-disk-1.qcow2
# mkdir -p /var/lib/vz/images/2300
# mv /var/lib/vz/images/400/vm-400-disk-1.qcow2 /var/lib/vz/images/2300/vm-2300-disk-1.qcow2

For LVM (thin) storages:

Use

# lvs
(...)
vm-400-disk-1 pve Vwi-aotz--   42.00g

to find the disk's logical volume name on the host (named mala in this example). The same name must appear in the source VM's config, so the following command should give a similar result.

# grep lvm /etc/pve/nodes/mala/qemu-server/400.conf
scsi1: local-lvm:vm-400-disk-1,size=42G

There are two crucial steps to move the disk. The first is renaming the logical volume according to the target VM.

# lvrename pve/vm-400-disk-1 pve/vm-2300-disk-1
Renamed "vm-400-disk-1" to "vm-2300-disk-1" in volume group "pve"

The second is to adapt the VM config files. Delete the line for this disk from the source VM config

# sed -i.backup '/vm-400-disk-1/d' /etc/pve/nodes/mala/qemu-server/400.conf

and add it to the config of the target VM 2300, with changes according to lvrename. This means

# echo "scsi1: local-lvm:vm-2300-disk-1,size=42G" >> /etc/pve/nodes/mala/qemu-server/2300.conf


For ZFS:

Assuming the storage is named tank, and the pool property is tank/host/vms, and the disk line was: tank:vm-400-disk-1:

# zfs rename tank/host/vms/vm-400-disk-1 tank/host/vms/vm-2300-disk-1
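You can verify the rename by listing the datasets under the pool property's path (names as in the example above):

```shell
# zfs list -r tank/host/vms
```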

For ceph:

Assuming the pool is named rbd and the disk line was: myceph:vm-400-disk-1, and there's a monitor at the address 1.2.3.4, we use the following command:

# rbd -m 1.2.3.4 -n client.admin --keyring /etc/pve/priv/ceph/myceph.keyring --auth_supported cephx mv rbd/vm-400-disk-1 rbd/vm-2300-disk-1

If you only have one ceph storage, local to your PVE cluster, or have a local ceph configuration for easier maintenance, you might be able to shorten this command to just:

# rbd mv rbd/vm-400-disk-1 rbd/vm-2300-disk-1
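Either way, you can confirm that the image was renamed by listing the pool afterwards (add the same -m/-n/--keyring options as above if your setup requires them):

```shell
# rbd ls rbd
```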

Step 4: Update the configuration

Find the configuration line from step 2 again and delete it from there, then add it to the new VM with the VMID updated.

As in step 2, we're assuming we're trying to move the SCSI-1 disk from VM 400 to VM 2300. So we delete the following from /etc/pve/qemu-server/400.conf:

scsi1: tank:vm-400-disk-1,discard=on,size=16G

Now figure out a free storage slot on VM 2300 by reading the ide/scsi/virtio/sata keys of the config and finding an unused number. Beware of the limits on each of them, which you can find in the qm.conf(5) manpage or by simply testing it out in the GUI.
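One quick way to list the disk keys already in use on the target VM is to grep its config for the relevant key prefixes (path as in the running example):

```shell
# grep -E '^(ide|sata|scsi|virtio)[0-9]+:' /etc/pve/qemu-server/2300.conf
```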

Assuming we want to add it as virtio-3 disk, add the following line:

virtio3: tank:vm-2300-disk-1,discard=on,size=16G

Step 5: Storage related fixups

When using an LVM storage, there's the option to make use of LVM tags. If you're using this option (tagged_only in the storage's section in /etc/pve/storage.cfg), then you have to update the disk's tag.

For instance, if like above you moved vm-400-disk-1 to become vm-2300-disk-1, you have to remove the tag pve-vm-400 and add the tag pve-vm-2300 to the LV. Assuming your volume group is named pve, you can accomplish this with the following command:

# lvchange --deltag pve-vm-400 --addtag pve-vm-2300 pve/vm-2300-disk-1
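To verify the result, you can display the LV's tags with lvs (VG and LV names as above):

```shell
# lvs -o lv_name,lv_tags pve/vm-2300-disk-1
```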