Ceph RBD Mirroring

From Proxmox VE
Revision as of 13:28, 24 May 2019 by A.antreich (talk | contribs) (Add a note why the ceph user is replaced with root in the unit file)

Configuring rbd-mirror for Off-Site Backup (One-Way Mirroring)

Requirements

  • Two Ceph clusters
  • One or more pools of the same name in both clusters
  • Installed rbd-mirror on the backup cluster ONLY (apt install rbd-mirror)

This guide assumes you have two clusters: one called master, where your images are used in production, and a backup cluster, where you want to create your disaster recovery backup. The general idea is that one or more rbd-mirror daemons on the backup cluster pull changes from the master cluster. This is sufficient to maintain a crash-consistent copy of the original image. This approach will not help you when you want to fail back to the master cluster; for that you will need two-way mirroring, or at least to set it up at the time you want to fail back.

First of all, only images with the "exclusive-lock" and "journaling" features will be mirrored. Because "journaling" depends on "exclusive-lock", you will need to enable both features. To check whether these features are already enabled on an image, run the following:

# rbd info <your_pool_name>/<your_vm_disk_image>

e.g.

# rbd info data/vm-100-disk-0

To enable a feature:

# rbd feature enable data/vm-100-disk-0 journaling

You need to do this on every image you want to mirror.
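With many images, this is easier done in a loop over all images in a pool. A minimal sketch, assuming the pool is named data (rbd ls lists the image names; enabling a feature that is already active only prints an error and changes nothing):

```shell
# Enable exclusive-lock and journaling on every image in the pool "data".
for image in $(rbd ls data); do
    rbd feature enable "data/${image}" exclusive-lock
    rbd feature enable "data/${image}" journaling
done
```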


The next step is to set the mirroring mode on each pool you want to mirror. You can choose between pool mode and image mode. This has to be done on both clusters on the corresponding pools, e.g. data/data.

# rbd mirror pool enable <your_pool_name> <mode>

e.g.

# rbd mirror pool enable data pool


On one of the monitor hosts of the master cluster create a user:

# ceph auth get-or-create client.rbd-mirror.master mon 'profile rbd' osd 'profile rbd' -o /etc/pve/priv/master.client.rbd-mirror.master.keyring

Note: You can restrict this user to a specific pool by writing 'profile rbd pool=data' in the osd capabilities instead.
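The pool-restricted variant of the command above could look like this (assuming the pool is named data):

```shell
# Same user as above, but the osd capability is limited to the pool "data".
ceph auth get-or-create client.rbd-mirror.master \
    mon 'profile rbd' osd 'profile rbd pool=data' \
    -o /etc/pve/priv/master.client.rbd-mirror.master.keyring
```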


Copy the ceph.conf file from your master cluster to the backup cluster's "/etc/ceph/" directory under the name master.conf (be careful not to overwrite your backup cluster's own ceph.conf file). Copy the previously generated keyring file (master.client.rbd-mirror.master.keyring) to the backup cluster's "/etc/pve/priv/" directory. This step is necessary because it is not possible to mirror two clusters with the same name; therefore we use a different name (master), which is represented only by the different config filename and the corresponding keyring file.
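Copying the two files could be done like this, run from a node of the master cluster (the hostname backup-node1 is a placeholder for a node of your backup cluster):

```shell
# Copy the master cluster's config under a new name; do NOT overwrite
# the backup cluster's own /etc/ceph/ceph.conf.
scp /etc/ceph/ceph.conf root@backup-node1:/etc/ceph/master.conf

# Copy the keyring created for client.rbd-mirror.master.
scp /etc/pve/priv/master.client.rbd-mirror.master.keyring \
    root@backup-node1:/etc/pve/priv/
```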

On a node of the backup cluster, create a unique client id to be used for each rbd-mirror daemon instance:

# ceph auth get-or-create client.rbd-mirror.backup mon 'profile rbd' osd 'profile rbd' -o /etc/pve/priv/ceph.client.rbd-mirror.backup.keyring

You should now be able to start the daemon (as root):

# systemctl enable ceph-rbd-mirror.target
# cp /lib/systemd/system/ceph-rbd-mirror@.service /etc/systemd/system/ceph-rbd-mirror@.service
# sed -i -e 's/setuser ceph.*/setuser root --setgroup root/' /etc/systemd/system/ceph-rbd-mirror@.service
# systemctl enable ceph-rbd-mirror@rbd-mirror.backup.service
# systemctl start ceph-rbd-mirror@rbd-mirror.backup.service

The replacement of the ceph user in the unit file is only necessary if you put the keyring file under /etc/pve/priv/ (to have the file available cluster-wide), as the user ceph cannot access that directory. By default, Ceph tools search /etc/ceph/ for these files.


To start mirroring, add the master cluster as a peer to the backup cluster:

# rbd mirror pool peer add <pool_name> <master_client_id>@<name_of_master_cluster>

e.g.

# rbd mirror pool peer add data client.rbd-mirror.master@master

Verify that the peering succeeded with the following command:

# rbd mirror pool info <pool_name>

e.g.

# rbd mirror pool info data

If all went well, this prints the peer and the mirror mode. The printed uuid is needed if you ever want to remove the peer in the future.
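Should you ever need to remove the peer again, you would pass that uuid to rbd mirror pool peer remove, for example (the uuid is a placeholder, taken from the output of rbd mirror pool info):

```shell
# Remove the previously added peer from the pool "data".
rbd mirror pool peer remove data <peer_uuid>
```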

You should now see in your backup cluster each image that is marked with the journaling feature in the master cluster. You can verify the current mirror state with the following command:

# rbd mirror pool status data --verbose

If you want to switch to the backup cluster, you need to promote the backup images to primary images. This should only be done when your master cluster has crashed, or after you have taken the necessary steps on the master cluster, e.g. demoting the images there.
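The relevant commands are rbd mirror image demote and rbd mirror image promote. A sketch of a failover for one image, assuming the pool data and the image vm-100-disk-0 from the examples above:

```shell
# Planned failover: on the master cluster, demote the image first
# so it is no longer primary.
rbd mirror image demote data/vm-100-disk-0

# Then, on the backup cluster, promote the mirrored copy to primary.
rbd mirror image promote data/vm-100-disk-0

# Disaster case: if the master cluster is gone and a clean demote is
# impossible, the promotion can be forced (the latest writes may be lost).
rbd mirror image promote --force data/vm-100-disk-0
```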

Please also check out Ceph's rbd-mirror documentation: http://docs.ceph.com/docs/luminous/rbd/rbd-mirroring/