High Availability Cluster 4.x
Latest revision as of 12:50, 17 August 2020

Note: Most information was moved to our reference documentation, see High Availability.

Introduction

Proxmox VE High Availability Cluster (Proxmox VE HA Cluster) enables the definition of highly available virtual machines. In simple terms, if a virtual machine (VM) is configured for HA and its physical host fails, the VM is automatically restarted on one of the remaining Proxmox VE cluster nodes.

The Proxmox VE HA Cluster is based on the Proxmox VE HA Manager (pve-ha-manager), which uses watchdog fencing. A major benefit of the Linux softdog or a hardware watchdog is zero configuration - it just works out of the box.

To learn more about the functionality of the new Proxmox VE HA manager, install the HA simulator.

For more up-to-date documentation, see High Availability.

HA Simulator

[Screenshot: HA Simulator in action]

By using the HA simulator, you can test and learn all the functionality of the Proxmox VE HA solution.

The simulator allows you to watch and test the behaviour of a real-world 3-node cluster with 6 VMs.

You do not have to set up or configure a real cluster; the HA simulator runs out of the box on the current code base.

Install with apt:

apt-get install pve-ha-simulator

To start the simulator, you need X11 forwarding to your local system.

If you are on a Linux machine you can use:

ssh root@<IPofPVE4> -Y

On Windows, you can use MobaXterm.

Then create a working directory for the simulator:

mkdir working

To start the simulator, type:

pve-ha-simulator working/

Hardware Watchdogs

If no hardware watchdog is defined, Proxmox VE loads the softdog module, which emulates the /dev/watchdog device.

To enable a hardware watchdog, you need to specify the module to load in /etc/default/pve-ha-manager:

WATCHDOG_MODULE=mywatchdogmodule
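Once a module is set and the node has rebooted, a quick sanity check might look like the following sketch (the module name iTCO_wdt is just an example; yours depends on the hardware):

```shell
# Check that the chosen watchdog driver (e.g. iTCO_wdt) is loaded instead
# of the softdog fallback:
lsmod | grep -i -e wdt -e softdog
# Kernel messages usually show which driver registered the device:
dmesg | grep -i watchdog
```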

Also, please disable the NMI watchdog, which is embedded in the CPU APIC.

Edit /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet nmi_watchdog=0"

Then run:

# update-grub
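The whole sequence can be sketched as follows (assuming the default Debian GRUB setup; adjust the sed pattern if your GRUB_CMDLINE_LINUX_DEFAULT already contains other options):

```shell
# Append nmi_watchdog=0 to the default kernel command line, regenerate the
# GRUB config, and verify the setting after the next reboot.
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet"$/GRUB_CMDLINE_LINUX_DEFAULT="quiet nmi_watchdog=0"/' /etc/default/grub
update-grub
# After rebooting, this should print 0:
cat /proc/sys/kernel/nmi_watchdog
```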

iTCO Watchdog (module "iTCO_wdt")

This is a hardware watchdog, available on almost all Intel motherboards (ICH chipset) for the last 15 years.


IPMI Watchdog (module "ipmi_watchdog")

For IPMI watchdogs you may have to set the action; otherwise the watchdog may not do anything when it triggers.

For this purpose, edit /etc/modprobe.d/ipmi_watchdog.conf (simply create the file if it does not exist):

options ipmi_watchdog action=power_cycle panic_wdt_timeout=10

NOTE: reboot, or reload the ipmi_watchdog module, for the changes to take effect.
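Instead of a full reboot, the module can usually be reloaded in place, as in this sketch (only do this while HA is not actively relying on the watchdog):

```shell
# Unload and reload ipmi_watchdog so the new options take effect:
modprobe -r ipmi_watchdog
modprobe ipmi_watchdog
# Confirm the action parameter was picked up (expected value: power_cycle):
cat /sys/module/ipmi_watchdog/parameters/action
```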

Dell iDRAC (module "ipmi_watchdog")

For Dell iDRAC, please deactivate the Automated System Recovery Agent in the iDRAC configuration.

[Screenshot: iDRAC ASR setting (Idrac-asr.png)]


If OpenManage is installed, you need to disable watchdog management in OpenManage:



/opt/dell/srvadmin/sbin/dcecfg command=removepopalias aliasname=dcifru

Then reboot the server.

After the restart, check that the watchdog timer is 10 seconds and not overridden by OpenManage:

idracadm getsysinfo -w

Watchdog Information:
Recovery Action         = Power Cycle
Present countdown value = 9 seconds
Initial countdown value = 10 seconds

or

# ipmitool mc watchdog get
Watchdog Timer Use:     SMS/OS (0x44)
Watchdog Timer Is:      Started/Running
Watchdog Timer Actions: Hard Reset (0x01)
Pre-timeout interval:   0 seconds
Timer Expiration Flags: 0x00
Initial Countdown:      10 sec
Present Countdown:      9 sec

HP iLO (module "hpwdt")

Users have reported crashes with this module; please test and update the wiki if it works reliably.

Also, disable the HP ASR (Automatic Server Recovery) feature:

http://h17007.www1.hp.com/docs/iss/proliant_uefi/s_asr_status.html

If you have installed the HP management tools, you also need to disable the "hp-asrd" daemon.

Troubleshooting

Error recovery

See High Availability - Error Recovery

Failed watchdog-mux or Multiple Watchdogs

Disable all BIOS watchdog functionality. Those settings set up the watchdog in the expectation that the OS resets it; that is not our desired use case here and may lead to problems, e.g. a reset of the node after a fixed amount of time.

Intel AMT (OS Health Watchdog) should be disabled, and with it the mei and mei_me modules, as they may cause problems.

If your host has multiple watchdogs available, only allow the one you want to use for HA, i.e. blacklist the other modules from loading. Our watchdog multiplexer uses /dev/watchdog, which maps to /dev/watchdog0.

Selecting a specific watchdog is not implemented, mainly because of this quote from the linux-watchdog mailing list:[1]

The watchdog device node <-> driver mapping is fragile and
can change from one kernel version to the next or even across reboot, so
users shouldn't assume it to be persistent.
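The blacklisting mentioned above can be sketched as follows (assumption: you keep iTCO_wdt for HA and block the IPMI watchdog plus the Intel AMT modules; adjust the module names to your hardware):

```shell
# Prevent competing watchdog/AMT modules from loading:
cat > /etc/modprobe.d/blacklist-watchdog.conf <<'EOF'
blacklist ipmi_watchdog
blacklist mei
blacklist mei_me
EOF
# Make sure the blacklist is also honoured in early boot:
update-initramfs -u
```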

Deleting Nodes From The Cluster

When deleting a node from a HA cluster you have to ensure the following:

  • Relocate all HA services to another node.
  • Remove the node from all defined groups.
  • Shut down the node you want to remove. From now on, this node MUST NOT come online in the same network again without being reinstalled/cleared of all cluster configuration.
  • Execute `pvecm delnode nodename` from a remaining node.
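A sketch of these steps on the CLI (hypothetical names: HA service vm:100 on node "pve3", which is being removed; "pve1" is a remaining node):

```shell
# 1. Move HA services away from the node that will be removed:
ha-manager migrate vm:100 pve1
# 2. Remove pve3 from all HA groups (via the GUI).
# 3. Shut down pve3; it must not rejoin this network without a reinstall.
# 4. On a remaining node, remove it from the cluster:
pvecm delnode pve3
```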

The HA stack now places the node in a 'gone' state; you still see it in the manager status. After an hour in this state it will be deleted automatically. This ensures that if the node died ungracefully, its services will still be fenced and migrated to another node.

Durations

Note that some HA actions may take time and don't happen instantly. This avoids out-of-control feedback loops and ensures that the HA stack is in a safe and consistent state at all times.

Container

Note that while containers may be put under HA, they currently (PVE 4.1) don't support live migration, so all migrate actions on them are mapped to relocate (stop, move, start). Recovery on node failure works as long as you don't use local resources.
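For example, with a hypothetical container service ct:101 and target node "pve2", both of the following currently result in a stop/move/start cycle, since containers lack live migration:

```shell
ha-manager migrate ct:101 pve2    # mapped to relocate for containers
ha-manager relocate ct:101 pve2   # explicit stop, move, start
```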

Video Tutorials

Proxmox VE Youtube channel

Testing

Before going into production it is highly recommended to do as many tests as possible. Then, do some more.

Useful command line tools

Here is a list of useful CLI tools:

  • ha-manager - to manage the HA stack of the cluster
  • pvecm - to manage the cluster manager
  • corosync-* - tools to work with corosync directly
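Typical read-only invocations of these tools, safe to run on any node, might look like:

```shell
ha-manager status         # current state of the HA manager and its services
pvecm status              # cluster membership and quorum information
corosync-quorumtool -s    # corosync's own view of quorum
```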