Proxmox VE 2.0 Cluster
Introduction
Proxmox VE 2.x (and all later versions, including 3.x) Cluster enables central management of multiple physical servers. A Proxmox VE Cluster consists of several nodes (up to 16 physical nodes, probably more).
Main features
- Centralized web management, including secure VNC console
- Support for multiple authentication sources (e.g. local, MS ADS, LDAP, ...)
- Role-based permission management for all objects (VMs, storages, nodes, etc.)
- Creates multi-master clusters (no single master anymore!)
- Proxmox Cluster file system (pmxcfs): Database-driven file system for storing configuration files, replicated in real-time on all nodes using corosync
- Migration of Virtual Machines between physical hosts
- Cluster-wide logging
- RESTful web API
Requirements
- All nodes must be on the same network, since corosync uses IP multicast to communicate between nodes (see also Corosync Cluster Engine). Note: some switches do not enable IP multicast by default and it must be turned on manually. See the multicast notes for more information, and the quick connectivity check sketched after this list.
- Date and time have to be synchronized
- An SSH tunnel on port 22 is used between nodes
- VNC console traffic is secured via SSL, using ports between 5900 and 5999
- If you are also interested in High Availability: for reliable quorum you must have at least 3 active nodes at all times, or use a qdisk as shown in Two-Node_High_Availability_Cluster
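Before creating the cluster it can be worth verifying that multicast actually works between the nodes. A minimal sketch, assuming the omping package is installed on every node and that node1, node2 and node3 are placeholders for your own hostnames (run the same command on all nodes at roughly the same time):
# run on every node in parallel; each node should see multicast replies from the others
omping node1 node2 node3
Persistent multicast loss usually points to a switch that is filtering multicast; see the multicast notes linked above.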
Proxmox VE Cluster
First, install the Proxmox VE nodes, see Installation. Make sure that each Proxmox VE node is installed with the final hostname and IP configuration. Changing the hostname and IP is not possible after cluster creation.
Currently, cluster creation has to be done on the console; you can log in to the Proxmox VE node via ssh.
All settings can be done via "pvecm", the Proxmox VE cluster manager toolkit.
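To see which subcommands are available, you can run pvecm without arguments or ask for its built-in help (a hedged example; the exact output depends on the Proxmox VE version):
node1# pvecm help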
Create the Cluster
Log in via ssh to the first Proxmox VE node. Use a unique name for your cluster; this name cannot be changed later.
Create:
node1# pvecm create YOUR-CLUSTER-NAME
To check the state of the cluster:
node1# pvecm status
Adding nodes to the Cluster
Log in via ssh to the other Proxmox VE nodes. Please note: a joining node cannot hold any VM that has the same VMID as a VM on another node, otherwise you will get conflicts with identical VMIDs. As a workaround, use vzdump to back up the VM and restore it to a different VMID after the cluster configuration (see the sketch below).
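A hedged sketch of that workaround, assuming a KVM guest with the clashing VMID 101 on the joining node, the default dump directory /var/lib/vz/dump, and 201 as an example of a free VMID (the archive name is timestamped, so use whatever file vzdump actually produced; for OpenVZ containers use vzrestore instead of qmrestore):
node2# vzdump 101
# ... join the cluster, then restore the guest under a new, unused VMID ...
node2# qmrestore /var/lib/vz/dump/vzdump-qemu-101-<timestamp>.<ext> 201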
Add a node:
node2# pvecm add IP-ADDRESS-CLUSTER
For IP-ADDRESS-CLUSTER use an IP from an existing cluster node.
To check the state of the cluster:
node2# pvecm status
Display the state of the cluster:
node2# pvecm status

CID----IPADDRESS----ROLE-STATE--------UPTIME---LOAD----MEM---ROOT---DATA
 1 : 192.168.7.104   M     A    5 days 01:43   0.54    20%     1%     4%
 2 : 192.168.7.103   N     A    2 days 05:02   0.04    26%     5%    29%
 3 : 192.168.7.105   N     A           00:13   1.41    22%     3%    15%
 4 : 192.168.7.106   N     A           00:05   0.54    17%     3%     3%
Display the nodes of the cluster:
node2# pvecm nodes

Node  Sts   Inc   Joined               Name
   1   M    156   2011-09-05 10:39:09  hp1
   2   M    156   2011-09-05 10:39:09  hp2
   3   M    168   2011-09-05 11:24:12  hp4
   4   M    160   2011-09-05 10:40:27  hp3
Remove a cluster node
Move all virtual machines off the node; use the central web-based management to migrate or delete all VMs. Make sure there are no local backups you want to keep, or save them accordingly.
Log in to one remaining node via ssh. Issue a pvecm nodes command to identify the nodeID:
hp1# pvecm nodes

Node  Sts   Inc   Joined               Name
   1   M    156   2011-09-05 10:39:09  hp1
   2   M    156   2011-09-05 10:39:09  hp2
   3   M    168   2011-09-05 11:24:12  hp4
   4   M    160   2011-09-05 10:40:27  hp3
Issue the delete command (here deleting node hp2):
hp1# pvecm delnode hp2
If the operation succeeds, no output is returned; just check the node list again with 'pvecm nodes' (or simply 'pvecm n').
ATTENTION: you need to power off the removed node, and make sure that it will not power on again.
Re-installing a cluster node
Move all virtual machines off the node.
Stop the following services:
service pvestatd stop
service pvedaemon stop
service cman stop
service pve-cluster stop
Back up /var/lib/pve-cluster/
tar -czf /root/pve-cluster-backup.tar.gz /var/lib/pve-cluster
Back up /root/.ssh/. There are two symlinks here to the shared pve config, authorized_keys and authorized_keys.orig; you need not worry about these two yet, as their targets are stored in /var/lib/pve-cluster/.
tar -czf /root/ssh-backup.tar.gz /root/.ssh
Shut the server down and re-install it. Make sure the hostname is the same as it was before you continue.
Stop the following services:
service pvestatd stop
service pvedaemon stop
service cman stop
service pve-cluster stop
Restore the files in /root/.ssh/
tar -xzf /root/ssh-backup.tar.gz -C /
Replace /var/lib/pve-cluster/ with your backup copy
rm -rf /var/lib/pve-cluster
tar -xzf /root/pve-cluster-backup.tar.gz -C /
Start pve-cluster & cman:
service pve-cluster start
service cman start
Restore the two ssh symlinks:
ln -sf /etc/pve/priv/authorized_keys /root/.ssh/authorized_keys
ln -sf /etc/pve/priv/authorized_keys /root/.ssh/authorized_keys.orig
Start the rest of the services:
service pvestatd start
service pvedaemon start
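Once all services are up again, it is worth confirming from the node (or any other cluster member) that it has rejoined the cluster, using the same commands shown earlier in this article:
pvecm status
pvecm nodes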
Working with Proxmox VE Cluster
Now, you can start creating Virtual Machines on Cluster nodes by using the Central Web-based Management on any node.
Troubleshooting
General
- Date and time have to be synchronized (check "ntpdc -p")
- Check that /etc/hosts contains the actual IP address of the node, so that the hostname does not resolve only to a loopback address (see the example below)
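For illustration, a hedged example of what a suitable /etc/hosts might look like on a node named pve1 with address 192.168.7.104 (hostname, domain and IP are placeholders; the point is that the node's own hostname resolves to its real cluster IP):
127.0.0.1       localhost
192.168.7.104   pve1.example.com pve1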
Two-node cluster and quorum issues
The problem
If you have to bring down a node of a cluster which normally has two nodes, even if it is not under HA, you have to take quorum into account, as described in this section.
If you have a two-node cluster, not HA, by default you will have something like this:
# pvecm status
Version: 6.2.0
Config Version: 2
Cluster Name: pvecluster
Cluster Id: 48308
Cluster Member: Yes
Cluster Generation: 92
Membership state: Cluster-Member
Nodes: 1
Expected votes: 2
Total votes: 1
Node votes: 1
Quorum: 2 Activity blocked
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: pve2
Node ID: 1
Multicast addresses: 239.192.188.113
Node addresses: xxx.xxx.xxx.xxx
When you bring one node down, perhaps after you have live-migrated all VMs/CTs to the other node, your cluster becomes "non operational": you cannot add VMs, all backups will fail, and possibly more. One of the symptoms is that the remaining PVE node reports that it cannot write in many places where it is normally expected to be able to, such as:
/etc/pve/nodes/<nodeid>/qemu-server/
because permissions have become "read-only"!
That is because, with the above default configuration, 2 votes are needed for quorum, each node counts as 1, and only 1 of the 2 nodes is alive!
The solution
BEWARE
Clustering is not an easy concept, and many things can (and will) go wrong, ranging from locked resources to destroyed data, if you don't know exactly what you are doing. Applying the following solution could drive the cluster into a split-brain situation, because each node is allowed to update values without having quorum.
That said, this is a good (though not brief) introduction to HA clustering, using tools similar to those PVE uses today: 2-Node Red Hat KVM Cluster Tutorial.
The simple solution is to "tell" the cluster that you expect just 1 vote for it to be operational. You can do this permanently, following the instructions in this section, by adding the
two_node="1" expected_votes="1"
parameters to the <cman> start tag, but until the "down" node is back online you cannot validate and apply the new config from the web GUI.
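For illustration only, a hedged sketch of how the edited <cman> start tag in /etc/pve/cluster.conf might then look; the keyfile attribute shown here is the usual PVE default, and whatever attributes your file already has should be kept as they are:
<cman two_node="1" expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey"/>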
If you need to, you can temporarily update the "expected votes" part of the PVE cluster config by issuing a simple command:
# pvecm expected 1
That will instantly restore write permissions where they should be, backups will work again, and so on.
Another solution would be to add a third "dummy" node, which also counts for 1 vote, in a way similar to HA setups but without fencing; that, however, would take us slightly off topic, since this section is about two-node clusters and quorum issues.