Two-Node High Availability Cluster
Note: This is a work in progress, stay tuned!
Note: Article about Proxmox VE 2.0 beta
Introduction
This article explores how to build a two-node cluster with HA enabled under Proxmox. HA is generally recommended to be deployed on at least three nodes to prevent strange behaviours and potentially lethal data inconsistencies (for further information, look up "Quorum"). Nevertheless, with some tweaking, it is also possible to run Proxmox successfully on a two-node cluster.
Although for two-node clusters it is recommended to use a third, shared quorum disk partition, Proxmox allows you to build the cluster without one. Let's see how.
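The usual tweak for this is to tell cman that a two-node cluster is expected, so the cluster stays quorate with a single vote. As a minimal sketch (the cluster name and config_version below are placeholders), in /etc/pve/cluster.conf this looks roughly like:

 <?xml version="1.0"?>
 <cluster name="hacluster" config_version="2">
   <!-- two_node="1" keeps the cluster quorate with a single vote,
        which is what makes a two-node setup workable -->
   <cman two_node="1" expected_votes="1"/>
   <!-- clusternodes, fencedevices and rm sections follow;
        other attributes Proxmox sets on <cman> are omitted in this sketch -->
 </cluster>

Remember to increase config_version whenever you edit this file, so the new configuration propagates to both nodes.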
System requirements
If you run HA, only high-end server hardware with no single point of failure should be used. This includes redundant disks (hardware RAID), redundant power supplies, UPS systems, and network bonding.
- Fully configured Proxmox_VE_2.0_Cluster, with 2 nodes.
- Shared storage (SAN for Virtual Disk Image Store for HA KVM). In this case, no external storage was used. Instead, a cheaper alternative (DRBD) was tested.
- Reliable network, suitably configured
- Fencing device(s) - reliable and TESTED! We will use HP's iLO for this example (see the test sketch after this list).
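Before relying on a fence device, it is worth checking it manually from each node. As a hedged example (the IP address and credentials are placeholders), the standard fence_ilo agent can query the power status of the peer node's iLO board:

 # ask the peer node's iLO for its power status (placeholders: IP, login, password)
 fence_ilo -a 192.168.0.10 -l Administrator -p secret -o status

If this does not return a sane status from both nodes, fix the fencing setup before going any further.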
What is DRBD used for?
For this testing configuration, two DRBD resources were created: one for VM images and another for VM user data. With DRBD properly configured, a mirrored RAID-1 is effectively created over the network (be aware that, although possible, using WAN links would mean high latencies). As VMs and data are replicated synchronously on both nodes, if one of them fails it is possible to restart the "dead" machines on the other node without data loss.
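As a rough sketch of what such a resource could look like (the node names, backing disks and IP addresses below are placeholder assumptions, not taken from this setup), a DRBD resource file using protocol C, the synchronous mode the paragraph above relies on, might be:

 # /etc/drbd.d/r0.res - hypothetical resource for the VM image volume
 resource r0 {
     protocol C;                 # fully synchronous replication: no data loss on failover
     on proxmox1 {
         device    /dev/drbd0;
         disk      /dev/sdb1;    # backing partition on this node
         address   10.0.0.1:7788;
         meta-disk internal;
     }
     on proxmox2 {
         device    /dev/drbd0;
         disk      /dev/sdb1;
         address   10.0.0.2:7788;
         meta-disk internal;
     }
 }

The second resource (for user data) would be defined the same way with its own device, backing disk and port.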
Configuring Fencing
Fencing is vital for Proxmox to manage a node loss and thus provide effective HA. Fencing is the mechanism used to prevent data inconsistencies between the nodes of a cluster by ensuring that a node reported as "dead" really is down. If it isn't, a reboot or power-off signal is sent to force it into a safe state.
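In Proxmox VE 2.0, fencing is declared in /etc/pve/cluster.conf. The following fragment is only an illustrative sketch (device names, IP addresses and credentials are assumptions): it registers one fence_ilo device per node and attaches each device to the corresponding cluster node:

 <clusternodes>
   <clusternode name="proxmox1" votes="1" nodeid="1">
     <fence>
       <method name="1">
         <device name="fence-node1"/>
       </method>
     </fence>
   </clusternode>
   <!-- proxmox2 is defined the same way, pointing at fence-node2 -->
 </clusternodes>
 <fencedevices>
   <!-- one iLO board per physical node; login/passwd are placeholders -->
   <fencedevice agent="fence_ilo" name="fence-node1" ipaddr="192.168.0.10" login="Administrator" passwd="secret"/>
   <fencedevice agent="fence_ilo" name="fence-node2" ipaddr="192.168.0.11" login="Administrator" passwd="secret"/>
 </fencedevices>

As with any change to cluster.conf, bump config_version before activating the new configuration, and test that fencing actually powers off a node before trusting HA with real VMs.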