DRBD
Introduction
DRBD® refers to block devices designed as a building block to form high availability (HA) clusters. This is done by mirroring a whole block device via a dedicated network. DRBD can be understood as a network-based RAID 1. For detailed information please visit Linbit.
Main features of the integration in Proxmox VE:
- LVM on top of DRBD (Primary/Primary)
- All VM disks (LVM volumes on the DRBD device) are replicated in real time on both Proxmox VE nodes via the network
- Ability to live migrate running machines in a few seconds, without downtime and WITHOUT the need of a SAN (iSCSI, FC, NFS), as the data is already on both nodes.
Note: Currently only KVM guests can use the new Storage Model.
For this HowTo I used an Intel Entry Server main board with 2 Intel GBit network cards and two simple SATA hard drives. On /dev/sda I installed Proxmox VE as usual; /dev/sdb will be used for DRBD. eth0 is used for the default vmbr0 and eth1 will be used for DRBD.
System requirements
You need 2 identical Proxmox VE servers with the following extra hardware:
- Free network card (connected with a direct crossover cable)
- Second raid volume (e.g. /dev/sdb)
- Use a hardware RAID controller with BBU to eliminate performance issues concerning internal metadata (see Florian's blog).
Preparations
Make sure you run at least Proxmox VE 1.4 (currently beta) on both servers and create the well-known standard Proxmox VE cluster.
Network
Configure eth1 on both nodes with a fixed private IP address via the web interface and reboot each server.
For better understanding, here is my /etc/network/interfaces file from my first node after the reboot:
cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
iface eth1 inet static
        address 10.0.7.105
        netmask 255.255.240.0

auto vmbr0
iface vmbr0 inet static
        address 192.168.7.105
        netmask 255.255.240.0
        gateway 192.168.2.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
And from the second node:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
iface eth1 inet static
        address 10.0.7.106
        netmask 255.255.240.0

auto vmbr0
iface vmbr0 inet static
        address 192.168.7.106
        netmask 255.255.240.0
        gateway 192.168.2.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
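Before continuing, it makes sense to verify that the dedicated DRBD link actually works. A simple ping between the eth1 addresses of both nodes (the addresses from the example above) is enough:

# on proxmox-105
ping -c 3 10.0.7.106

# on proxmox-106
ping -c 3 10.0.7.105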
Disk for DRBD
I will use /dev/sdb1 for DRBD, so I need to create a single big partition on /dev/sdb - make sure that this is 100% identical on both nodes.
Just run fdisk /dev/sdb and create a primary partition (/dev/sdb1):
proxmox-105:~# fdisk /dev/sdb

The number of cylinders for this disk is set to 19457.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-19457, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-19457, default 19457):
Using default value 19457

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x49e2fd2f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       19457   156288321   8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
proxmox-105:~#
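To make sure the partitioning really is 100% identical, you can dump the partition table on both nodes and compare the output. sfdisk is part of the standard util-linux tools; this is just one convenient way to check:

# run on both nodes and compare the output
sfdisk -d /dev/sdb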
DRBD configuration
Software installation
Install the DRBD user tools on both nodes (in my case DRBD 8.3.2 in the kernel and drbd8-utils 8.3.2):
apt-get install drbd8-utils
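The DRBD kernel module and the user space tools should have matching versions; a mismatch is a common source of problems. A quick way to check both (this is just a suggestion, not a required step) is:

# on both nodes
modinfo drbd | grep ^version
dpkg -l drbd8-utils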
Prepare drbd.conf file
Edit the DRBD config file. Please note that this file needs to be identical on both nodes. Here is an example:
proxmox-105:~# cat /etc/drbd.conf
global { usage-count no; }
common { syncer { rate 30M; } }

resource r0 {
        protocol C;
        startup {
                wfc-timeout 15;
                degr-wfc-timeout 60;
                become-primary-on both;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "my-secret";
                allow-two-primaries;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
        }
        on proxmox-105 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 10.0.7.105:7788;
                meta-disk internal;
        }
        on proxmox-106 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 10.0.7.106:7788;
                meta-disk internal;
        }
}
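Before starting DRBD, you can let drbdadm parse the configuration (it will complain about syntax errors) and verify that the file really is identical on both nodes, for example by comparing checksums:

# check that the resource definition parses cleanly
drbdadm dump r0

# run on both nodes and compare the checksums
md5sum /etc/drbd.conf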
Start DRBD
On both servers, start DRBD:
/etc/init.d/drbd start
Now create the device metadata, also on both nodes:
drbdadm create-md r0
Bring the device up, also on both nodes:
drbdadm up r0
Now you can check the current status of DRBD; it should look like this on both nodes:
proxmox-105:~# cat /proc/drbd
version: 8.3.2 (api:88/proto:86-90)
GIT-hash: dd7985327f146f33b86d4bff5ca8c94234ce840e build by root@oahu, 2009-09-10 15:18:39
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:2096348
DRBD has successfully allocated its resources and is ready for operation. Now start the initial synchronization (only on one node!!!):
drbdadm -- --overwrite-data-of-peer primary r0
Wait until the initial sync is finished (depending on the size of the device and the speed of the link, this can take some time):
proxmox-120:~# watch cat /proc/drbd
Finally, check that DRBD starts in Primary/Primary mode. To do this, just stop the DRBD service on both nodes:
/etc/init.d/drbd stop
And start again on both nodes:
/etc/init.d/drbd start
Now DRBD should be in Primary/Primary mode:
proxmox-105:~# cat /proc/drbd
version: 8.3.2 (api:88/proto:86-90)
GIT-hash: dd7985327f146f33b86d4bff5ca8c94234ce840e build by root@oahu, 2009-09-10 15:18:39
 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r----
    ns:0 nr:0 dw:0 dr:268 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
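If you prefer not to restart the service, you should also be able to reach the same state by promoting the resource manually on both nodes once the initial sync has finished; the restart above mainly verifies that the become-primary-on both startup option does its job automatically:

# on both nodes, after the initial sync has completed
drbdadm primary r0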
LVM configuration
Now we have a running /dev/drbd0 in Primary/Primary mode, and we need to add LVM on top of this device.
Adapt your lvm.conf
You need to change the filter section in the LVM configuration so that LVM ignores the backing device (/dev/sdb1) and only uses the DRBD device - otherwise LVM would see the same physical volume twice. Just edit the corresponding line in lvm.conf, on both nodes:
nano /etc/lvm/lvm.conf

    # By default we accept every block device:
    filter = [ "r|/dev/sdb1|", "r|/dev/disk/|", "r|/dev/block/|", "a/.*/" ]
Note: if your device is not /dev/sdb, change this according to your system.
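As an illustration only: if your backing device were /dev/sdc1 instead of /dev/sdb1 (not the layout used in this HowTo), the filter line would become:

    filter = [ "r|/dev/sdc1|", "r|/dev/disk/|", "r|/dev/block/|", "a/.*/" ]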
Create the physical volume for LVM
On one node, create the physical volume:
proxmox-105:~# pvcreate /dev/drbd0
  Physical volume "/dev/drbd0" successfully created
proxmox-105:~#
Check your physical volumes; the output should look like this:
proxmox-105:~# pvscan
  PV /dev/sda2    VG pve   lvm2 [465.26 GB / 4.00 GB free]
  PV /dev/drbd0            lvm2 [149.04 GB]
  Total: 2 [613.30 GB] / in use: 1 [613.30 GB] / in no VG: 1 [4.00 GB]
proxmox-105:~#
Create the volume group
On one node, create the volume group:
proxmox-105:~# vgcreate drbdvg /dev/drbd0
  Volume group "drbdvg" successfully created
proxmox-105:~#
Check your physical volumes again; the output should look like this:
proxmox-105:~# pvscan
  PV /dev/sda2    VG pve      lvm2 [465.26 GB / 4.00 GB free]
  PV /dev/drbd0   VG drbdvg   lvm2 [149.04 GB / 149.04 GB free]
  ...
proxmox-105:~#
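Since /dev/drbd0 is Primary on both sides, the new volume group should also be visible on the second node without any further action. A quick sanity check there does not hurt:

# on the second node, the volume group should show up as well
pvscan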
Add the LVM group to the Proxmox VE storage list via web interface
Go to 'Configuration/Storage' in the Proxmox VE web interface, click on the red arrow and select 'Add LVM group'.
You will see the previously created volume group (drbdvg). Select it, give it a Storage Name (this cannot be changed later, so think twice) and enable sharing by ticking the 'shared' box.
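Behind the scenes this adds an entry to /etc/pve/storage.cfg. Assuming the storage name 'drbd-storage' (just an example, any name works), the entry looks roughly like the sketch below; the exact key layout can differ between Proxmox VE releases, so treat it as an illustration rather than something to copy verbatim:

lvm: drbd-storage
        vgname drbdvg
        content images
        shared 1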
Create the first VM on DRBD for testing and live migration
In order to check end-to-end functionality, create a new KVM VM - and obviously store the VM disk on the previously created storage. When you install the operating system (in my case Ubuntu 9.04), all the data is already replicated automatically on both nodes - just finish the installation and reboot into your VM.
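You can double-check that the VM's disk really lives in the shared volume group by listing the logical volumes on both nodes; the lists should be identical (Proxmox VE typically names the volume something like vm-<vmid>-disk-1):

# run on both nodes and compare
lvs drbdvg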
Try to live migrate the VM - as all data is available on both nodes, it takes just a few seconds. The overall process can take a bit longer if the VM is under load and if there is a lot of RAM involved. But in any case, the downtime is minimal and you will see no interruption at all - just as if you were using iSCSI or NFS as the storage back-end.
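Live migration can be started from the web interface; it should also work from the command line with qm. The VM ID 101 and the target node below are just examples, and the exact option name may vary between releases (check the qm man page):

# migrate VM 101 to the other node while it keeps running
qm migrate 101 proxmox-106 --online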
DRBD support
DRBD can be configured in many ways and there is a lot of room for optimization and performance tuning. If you run DRBD in a production environment, we highly recommend the 24/7 support packages from the DRBD developers. The company behind DRBD is Linbit; they are located just a few miles away from the Proxmox headquarters, which makes cooperation easy.
Final considerations
Now you have fully redundant storage for your VMs without the need for expensive SAN equipment, configured in about 30 to 60 minutes - starting from bare metal. If you want to achieve a similar setup with traditional virtualization solutions using a SAN, you will need at least four server boxes plus the storage network.
Traditional setup with SAN (e.g. iSCSI, NFS, FC):
- Two servers for a redundant SAN
- Two servers for redundant virtualization hosts
- Extra storage network, VLANs, FC switches, etc.
- Complex setup
Proxmox VE with DRBD:
- Only two servers with RAID controllers and a bunch of hard drives (configure two RAID volumes, one for Proxmox VE and one for the DRBD device)
Beginning with kernel 2.6.32, DRBD is integrated into the mainline kernel. This makes it much easier to maintain and upgrade the systems and can be considered the "highest quality certificate" for kernel module developers.
Additionally, DRBD will be the basis for high availability (HA) - please help by testing and reporting bugs and issues in our forum or mailing list.