[PVE-User] RBD Storage from 6.1 to 3.4 (or 4.4)

Uwe Sauter uwe.sauter.de at gmail.com
Thu Jan 30 12:51:41 CET 2020


If you can afford the downtime of the VMs, you might be able to migrate the disk images using "rbd export | ncat" on the old
cluster and "ncat | rbd import" on the new one.

I haven't tried this across such a large version gap, but from Proxmox 5.4 to 6.1 it worked without a problem.
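In case it helps, here is a rough sketch of what I mean. Pool names, image names, the port and the target IP are just placeholders for your setup, and the receiving side has to be started first:

On a node of the new cluster (receiver):

    ncat -l 7777 | rbd import - rbd/vm-100-disk-0

On a node of the old cluster (sender):

    rbd export rbd2/vm-100-disk-1 - | ncat <new-node-ip> 7777

If you prefer not to use ncat, piping through ssh should work just as well, e.g. "rbd export rbd2/vm-100-disk-1 - | ssh root@<new-node> 'rbd import - rbd/vm-100-disk-0'".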

Regards,

	Uwe


On 30.01.20 at 12:46, Fabrizio Cuseo wrote:
> 
> I have installed a new cluster with the last release, with a local ceph storage.
> I also have 2 old and smaller clusters, and I need to migrate all the VMs to the new cluster.
> The best method I have used in the past is to add the RBD storage of the old cluster on the NEW cluster, so I can stop the VM, move the .cfg file, start the VM (all these operations are really quick), and then move the disk (online) from the old storage to the new storage.
> 
> But now, if I add the RBD storage, copying the keyring file of the old cluster to the new cluster, naming it after the storage ID, and using the old cluster's monitor IPs, I can see the storage summary (total and used space), but when I go to "Content" I get this error: "rbd error: rbd: listing images failed: (95) Operation not supported (500)".
> 
> If, from the new cluster's CLI, I use the command:
> 
> rbd -k /etc/pve/priv/ceph/CephOLD.keyring -m 172.16.20.31 ls rbd2
> 
> I can see the list of disk images, but I also get the error: "librbd::api::Trash: list: error listing rbd trash entries: (95) Operation not supported"
> 
> 
> The new cluster's Ceph release is Nautilus, and the old one's is Firefly.
> 
> Any ideas?
> 
> Thanks in advance, Fabrizio
> 
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 



