Ceph RBD Mirroring
This page describes how to use rbd-mirror to mirror Ceph images to a second Ceph cluster in a one-way mirror setup.
For more details on the commands used, see the official Ceph documentation.
Requirements
- Two Ceph clusters
- Nodes on both clusters can connect to the nodes in the other cluster
- At least one pool in both clusters. Pools that should be mirrored need the same name.
- rbd-mirror installed on the backup cluster ONLY (apt install rbd-mirror).
rbd-mirror can be installed and used on multiple nodes of the backup cluster. For simplicity, this guide uses only one backup node with rbd-mirror installed.
This guide assumes that you have two clusters: one called master, which contains the images used in production, and a backup cluster, to which the images are mirrored for disaster recovery. The general idea is that one or more rbd-mirror daemons on the backup cluster pull changes from the master cluster.
This approach maintains a crash-consistent copy of the original image. It will not allow you to fail back to the master cluster; you need two-way mirroring for that, which you can set up at the time you want to fail back.
Enable journaling
Only images with the exclusive-lock and journaling features will be mirrored. Because journaling depends on exclusive-lock, you need to enable both features.
To check whether these features are already enabled on an image, run the following command on the master cluster:
# rbd info <your_pool_name>/<your_vm_disk_image>
# rbd info data/vm-100-disk-0
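The relevant part of the output is the features line. It might look something like this (abridged example output; the exact feature set varies per image):
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
In this example exclusive-lock is already enabled, but journaling still needs to be enabled.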
To enable a feature:
# rbd feature enable data/vm-100-disk-0 journaling
You need to do this for every image you want to mirror.
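If you have many images, a small shell loop can save time. A minimal sketch, assuming the pool is named data, every image in it should be mirrored, and exclusive-lock is already enabled on all of them:
# for image in $(rbd ls data); do rbd feature enable data/${image} journaling; done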
Enable pool mirroring
The next step is to set the mirroring mode on each pool you want to mirror. You can choose between pool mode, which mirrors every image in the pool that has the journaling feature enabled, and image mode, which additionally requires mirroring to be enabled explicitly per image. This has to be done on both clusters on the corresponding pools.
# rbd mirror pool enable <your_pool_name> <mode>
# rbd mirror pool enable data pool
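If you use image mode instead, you additionally have to enable mirroring for each image, for example:
# rbd mirror pool enable data image
# rbd mirror image enable data/vm-100-disk-0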
Create rbd-mirror user on the master cluster
On one of the monitor hosts of the master cluster, create a user:
# ceph auth get-or-create client.rbd-mirror.master mon 'profile rbd' osd 'profile rbd' -o /etc/pve/priv/master.client.rbd-mirror.master.keyring
Note: You can restrict the user to a specific pool by writing 'profile rbd pool=data' instead.
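For example, a user restricted to the data pool would be created like this:
# ceph auth get-or-create client.rbd-mirror.master mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/pve/priv/master.client.rbd-mirror.master.keyring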
Copy configs and keys
Copy the ceph.conf file from the master cluster to the backup cluster's /etc/ceph/ directory under the name master.conf (be careful not to overwrite your backup cluster's ceph.conf file!).
# scp /etc/ceph/ceph.conf root@<rbd-mirror-node>:/etc/ceph/master.conf
Copy the previously generated keyring-file (master.client.rbd-mirror.master.keyring) to the backup cluster's /etc/pve/priv/ directory.
# scp /etc/pve/priv/master.client.rbd-mirror.master.keyring root@<rbd-mirror-node>:/etc/pve/priv/
While each cluster refers to itself as ceph, the backup cluster refers to the master cluster as master. This is determined by the names of the config and keyring files.
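To verify that the backup node can reach the master cluster with the copied files, you can run a quick test (a sketch; --cluster master makes the tools read /etc/ceph/master.conf, and the keyring path is passed explicitly because /etc/pve/priv/ is not in Ceph's default search path):
# rbd --cluster master --id rbd-mirror.master --keyring /etc/pve/priv/master.client.rbd-mirror.master.keyring ls data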
Create client ID
On a node of the backup cluster, create a unique client ID to be used for each rbd-mirror daemon instance:
# ceph auth get-or-create client.rbd-mirror.backup mon 'profile rbd' osd 'profile rbd' -o /etc/pve/priv/ceph.client.rbd-mirror.backup.keyring
Start rbd-mirror daemon
You should now be able to start the daemon (as root).
Run the following on the rbd-mirror node in the backup cluster:
# systemctl enable ceph-rbd-mirror.target
# cp /lib/systemd/system/ceph-rbd-mirror@.service /etc/systemd/system/ceph-rbd-mirror@.service
# sed -i -e 's/setuser ceph.*/setuser root --setgroup root/' /etc/systemd/system/ceph-rbd-mirror@.service
# systemctl enable ceph-rbd-mirror@rbd-mirror.backup.service
# systemctl start ceph-rbd-mirror@rbd-mirror.backup.service
Replacing the ceph user in the unit file is only necessary because the keyring file was placed under /etc/pve/priv/ (to have it available cluster-wide); the ceph user cannot access that directory. By default, Ceph searches for keyrings in /etc/ceph/.
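You can then check that the daemon came up properly (assuming the instance name rbd-mirror.backup from above):
# systemctl status ceph-rbd-mirror@rbd-mirror.backup.service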
Add peer
On the backup cluster, add the master cluster as a peer of the pool:
# rbd mirror pool peer add <pool_name> <master_client_id>@<name_of_master_cluster>
# rbd mirror pool peer add data client.rbd-mirror.master@master
Verify that the peering succeeded with the following command:
# rbd mirror pool info <pool_name>
# rbd mirror pool info data
If all went well, this prints the peer and the mirror mode. Take note of the peer's UUID; it is necessary if you want to remove the peer in the future.
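Should you ever need to remove the peer, pass that UUID to the remove command:
# rbd mirror pool peer remove data <peer_uuid>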
Every image in the master cluster that has the journaling feature enabled should now show up in your backup cluster. You can check the current mirror state with the following command:
# rbd mirror pool status data --verbose
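With a single image mirroring correctly, the output looks roughly like this (abridged example; the verbose flag additionally lists each image, which should be in the state up+replaying):
health: OK
images: 1 total
    1 replaying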
Failover
If you want to switch to the backup cluster, you need to promote the backup images to primary images. This should only be done when your master cluster has crashed, or after you have taken the necessary steps on the master cluster before switching, e.g. demoting the images on the master cluster.
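In an orderly failover, first demote the image on the master cluster:
# rbd mirror image demote data/vm-100-disk-0
Then promote it on the backup cluster:
# rbd mirror image promote data/vm-100-disk-0
If the master cluster is down and the image cannot be demoted first, force the promotion on the backup cluster:
# rbd mirror image promote --force data/vm-100-disk-0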
Please also check out Ceph's rbd-mirror documentation: http://docs.ceph.com/docs/nautilus/rbd/rbd-mirroring/