Ceph RBD Mirroring

This page describes how to use rbd-mirror to mirror Ceph images to another Ceph cluster in a one-way mirror setup.

For more details on the commands used, check the official Ceph documentation: https://docs.ceph.com/docs/nautilus/rbd/rbd-mirroring/

Requirements

  • Two Ceph clusters
  • Nodes on both clusters can connect to the nodes in the other cluster
  • At least one pool in both clusters. Pools that should be mirrored need the same name.
  • rbd-mirror installed on the backup cluster ONLY (apt install rbd-mirror).

rbd-mirror can be installed and used on multiple nodes of the backup cluster. For simplicity, this guide uses only one backup node with rbd-mirror installed.

Introduction

This guide assumes that you have two clusters: one called master, which contains the images used in production, and a backup cluster, to which the images are mirrored for disaster recovery. The general idea is that one or more rbd-mirror daemons on the backup cluster pull changes from the master cluster.

This approach is suitable for maintaining a crash-consistent copy of the original image. It will not allow you to fail back to the master cluster; for that you need two-way mirroring, which you can also set up at the time you want to fail back.

Prepare pool

Image features

Only images with the exclusive-lock and journaling features will be mirrored. Because journaling depends on exclusive-lock, you need to enable both features.

To check whether or not these features are already enabled on an image run the following command on the master cluster:

# rbd info <your_pool_name>/<your_vm_disk_image>

e.g.:

# rbd info data/vm-100-disk-0

To enable a feature:

# rbd feature enable data/vm-100-disk-0 journaling

You need to do this for every image you want to mirror.
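
If there are many images, a small shell loop can help. This is a minimal sketch assuming the pool is named data and that exclusive-lock is already enabled on the images; for images that already have journaling enabled, the command simply fails and the loop moves on to the next image:

# for img in $(rbd ls data); do rbd feature enable data/"$img" journaling; done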

Mirror mode

The next step is to set the mirroring mode on each pool you want to mirror. You can choose between pool mode and image mode: in pool mode, every image in the pool that has the journaling feature is mirrored, while in image mode mirroring has to be enabled explicitly for each image. This has to be done on both clusters on the corresponding pools (e.g. data/data).

# rbd mirror pool enable <your_pool_name> <mode>

e.g.:

# rbd mirror pool enable data pool
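
If you choose image mode instead, mirroring additionally has to be enabled per image on the master cluster, e.g.:

# rbd mirror pool enable data image
# rbd mirror image enable data/vm-100-disk-0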

User creation

On one of the monitor hosts of the master cluster create a user:

# ceph auth get-or-create client.rbd-mirror.master mon 'profile rbd' osd 'profile rbd' -o /etc/pve/priv/master.client.rbd-mirror.master.keyring

Note: You can restrict this user to a specific pool by writing 'profile rbd pool=data' for the osd capability.
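
For example, restricted to the data pool used in this guide:

# ceph auth get-or-create client.rbd-mirror.master mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/pve/priv/master.client.rbd-mirror.master.keyring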

Copy configs and keys

Copy the ceph.conf file from the master cluster to the backup cluster's /etc/ceph/ directory under the name master.conf (be careful not to overwrite your backup cluster's own ceph.conf file!).

# scp /etc/ceph/ceph.conf root@<rbd-mirror-node>:/etc/ceph/master.conf

Copy the previously generated keyring file (master.client.rbd-mirror.master.keyring) to the backup cluster's /etc/pve/priv/ directory.

# scp /etc/pve/priv/master.client.rbd-mirror.master.keyring root@<rbd-mirror-node>:/etc/pve/priv/

While each cluster sees itself as ceph, the backup cluster sees the master cluster as master. This is determined by the names of the config and keyring files.
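
From the rbd-mirror node you can now run a quick sanity check against the master cluster. This sketch assumes a pool named data exists there; the explicit --keyring option is used because the keyring does not reside in /etc/ceph/:

# rbd --cluster master --id rbd-mirror.master --keyring /etc/pve/priv/master.client.rbd-mirror.master.keyring ls data

If this lists the images of the master cluster, the config and keyring are in place.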

Create client ID

On a node of the backup cluster create a unique client ID to be used for each rbd-mirror daemon instance:

# ceph auth get-or-create client.rbd-mirror.backup mon 'profile rbd' osd 'profile rbd' -o /etc/pve/priv/ceph.client.rbd-mirror.backup.keyring

Start rbd-mirror daemon

You should now be able to start the daemon (as root):

# systemctl enable ceph-rbd-mirror.target
# cp /lib/systemd/system/ceph-rbd-mirror@.service /etc/systemd/system/ceph-rbd-mirror@.service
# sed -i -e 's/setuser ceph.*/setuser root --setgroup root/' /etc/systemd/system/ceph-rbd-mirror@.service
# systemctl enable ceph-rbd-mirror@rbd-mirror.backup.service
# systemctl start ceph-rbd-mirror@rbd-mirror.backup.service

Replacing the ceph user in the unit file is only necessary if you put the keyring file under /etc/pve/priv/ (to have the file available cluster-wide), as the user ceph cannot access that directory. By default, Ceph searches for these files in /etc/ceph/.
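
To confirm that the daemon is running, check its unit status or follow its log:

# systemctl status ceph-rbd-mirror@rbd-mirror.backup.service
# journalctl -fu ceph-rbd-mirror@rbd-mirror.backup.service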

Add peer

On the backup cluster, add the master cluster as a peer of the pool:

# rbd mirror pool peer add <pool_name> <master_client_id>@<name_of_master_cluster>

e.g.:

# rbd mirror pool peer add data client.rbd-mirror.master@master

Verify

Verify that the peering succeeded with the following command:

# rbd mirror pool info <pool_name>

e.g.:

# rbd mirror pool info data

If all went well, this prints the peer and the mirror mode. Take note of the printed UUID; it is necessary if you want to remove the peer in the future.
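
Should you ever need to remove the peer again, pass that UUID to the following command (shown with the example pool data; <peer_uuid> is a placeholder):

# rbd mirror pool peer remove data <peer_uuid>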

You should now see in your backup cluster every image that is marked with the journaling feature in the master cluster. You can verify the current mirror state with the following command:

# rbd mirror pool status data --verbose
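
You can also query the state of a single image, e.g.:

# rbd mirror image status data/vm-100-disk-0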

If you want to switch over to the backup cluster, you need to promote the backup images to primary images. This should only be done when your master cluster has crashed, or after you have taken the necessary steps on the master cluster, e.g. demoting the images there.
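
A sketch of such a switch, using the example image from above: demote the image on the master cluster first, then promote it on the backup cluster. If the master cluster is down, the promotion has to be forced.

On the master cluster:

# rbd mirror image demote data/vm-100-disk-0

On the backup cluster:

# rbd mirror image promote data/vm-100-disk-0

If the master cluster crashed and a demote is not possible:

# rbd mirror image promote --force data/vm-100-disk-0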

Please also check out Ceph's rbd-mirror documentation: https://docs.ceph.com/docs/nautilus/rbd/rbd-mirroring/