Proxmox VE 2.0 Cluster
Note: Article about Proxmox VE 2.0
Introduction
Proxmox VE 2.0 Cluster enables central management of multiple physical servers. A Proxmox VE Cluster consists of several nodes (up to 16 physical nodes, probably more).
Main features
- Centralized web management, including secure VNC console
- Support for multiple authentication sources (e.g. local, MS ADS, LDAP, ...)
- Role-based permission management for all objects (VMs, storages, nodes, etc.)
- Creates multi-master clusters (no single master anymore!)
- Proxmox Cluster file system (pmxcfs): Database-driven file system for storing configuration files, replicated in real-time on all nodes using corosync
- Migration of Virtual Machines between physical hosts
- Cluster-wide logging
- RESTful web API
Requirements
- All nodes must be on the same network, because cluster communication uses IP multicast (see also Corosync Cluster Engine). Note: on some switches IP multicast is not enabled by default and must be enabled manually first. See the multicast notes for more information, and the connectivity check sketched after this list.
- Date and time have to be synchronized
- SSH tunnel on port 22 between nodes is used
- VNC console traffic is secured via SSL, using ports between 5900 and 5999
- For reliable quorum, you must have at least 3 active nodes at all times, or use a qdisk as seen in Two-Node_High_Availability_Cluster
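If you are unsure whether your switches actually pass multicast traffic, a quick connectivity test can be done with the omping tool (this assumes omping is installed on the nodes; the node names below are placeholders for your own hosts):
omping node1 node2 node3
Run the command on every node at roughly the same time; each node should report received multicast packets from the others.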
Proxmox VE Cluster
First, install the Proxmox VE nodes, see Installation. Make sure that each Proxmox VE node is installed with the final hostname and IP configuration. Changing the hostname and IP is not possible after cluster creation.
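As a quick sanity check before creating the cluster, you can verify on each node that its hostname resolves to the final IP address (the hostname and address below are only examples):
hostname
grep "$(hostname)" /etc/hosts
The grep should show a line such as "192.168.7.104 hp1.example.com hp1" pointing at the node's real IP, not at a loopback address like 127.0.1.1.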
Currently, cluster creation has to be done on the console; you can log in to the Proxmox VE node via ssh.
All settings can be done via "pvecm", the Proxmox VE cluster manager toolkit.
Create the Cluster
Log in via ssh to the first Proxmox VE node. Use a unique name for your cluster; this name cannot be changed later.
Create:
pvecm create YOUR-CLUSTER-NAME
To check the state of the cluster:
pvecm status
Adding nodes to the Cluster
Log in via ssh to the other Proxmox VE nodes. Please note that a node must not hold any VMs when it joins. (Otherwise you will get conflicts with identical VMIDs; to work around this, use vzdump to back up the guests and restore them to a different VMID after the cluster configuration, as sketched below.)
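A minimal sketch of that workaround, assuming a KVM guest with the hypothetical VMID 101 that clashes with an existing VMID and should be restored as 201 (for OpenVZ containers use vzrestore instead of qmrestore):
vzdump 101 --dumpdir /var/lib/vz/dump
qmrestore /var/lib/vz/dump/vzdump-qemu-101-YYYY_MM_DD-HH_MM_SS.tar 201
The archive name is illustrative; use the file that vzdump actually created, and run the restore only after the node has joined the cluster.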
Add a node:
pvecm add IP-ADDRESS-CLUSTER
For IP-ADDRESS-CLUSTER use an IP from an existing cluster node.
To check the state of the cluster:
pvecm status
Display the state of the cluster:
pvecm status

CID----IPADDRESS----ROLE-STATE--------UPTIME---LOAD----MEM---ROOT---DATA
 1 : 192.168.7.104    M     A    5 days 01:43   0.54    20%    1%     4%
 2 : 192.168.7.103    N     A    2 days 05:02   0.04    26%    5%    29%
 3 : 192.168.7.105    N     A           00:13   1.41    22%    3%    15%
 4 : 192.168.7.106    N     A           00:05   0.54    17%    3%     3%
Display the nodes of the cluster:
pvecm nodes

Node  Sts   Inc   Joined               Name
   1   M    156   2011-09-05 10:39:09  hp1
   2   M    156   2011-09-05 10:39:09  hp2
   3   M    168   2011-09-05 11:24:12  hp4
   4   M    160   2011-09-05 10:40:27  hp3
Remove a cluster node
Move all virtual machines off the node; use the Central Web-based Management 2.0 to migrate or delete all VMs. Make sure there are no local backups you want to keep, or save them elsewhere first.
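If you prefer the command line over the web interface for this step, a KVM guest can also be migrated with qm; the VMID and target node below are only examples, and the command must be run on the node that still holds the guest:
qm migrate 101 hp1 --online
Leave out --online for a stopped guest.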
Log in to one remaining node via ssh. Issue a pvecm nodes command to identify the nodeID:
pvecm nodes

Node  Sts   Inc   Joined               Name
   1   M    156   2011-09-05 10:39:09  hp1
   2   M    156   2011-09-05 10:39:09  hp2
   3   M    168   2011-09-05 11:24:12  hp4
   4   M    160   2011-09-05 10:40:27  hp3
Issue the delete command (here deleting node hp2):
pvecm delnode hp2
If the operation succeeds, no output is returned; just check the node list again with 'pvecm nodes' (or simply 'pvecm n').
ATTENTION: you need to power off the removed node, and make sure that it will not power on again.
Re-installing a cluster node
Move all virtual machines off the node.
Stop the following services:
service pvestatd stop
service pvedaemon stop
service cman stop
service pve-cluster stop
Back up /var/lib/pve-cluster/
tar -czf /root/pve-cluster-backup.tar.gz /var/lib/pve-cluster
Back up /root/.ssh/. There are two symlinks here pointing to the shared pve configuration, authorized_keys and authorized_keys.orig; you do not need to worry about these two yet, as their targets are stored in /var/lib/pve-cluster/.
tar -czf /root/ssh-backup.tar.gz /root/.ssh
Shut the server down and re-install it. Before you continue, make sure the hostname is the same as it was before.
Stop the following services:
service pvestatd stop
service pvedaemon stop
service cman stop
service pve-cluster stop
Restore the files in /root/.ssh/
tar -xzf /root/ssh-backup.tar.gz -C /
Replace /var/lib/pve-cluster/ with your backup copy
rm -rf /var/lib/pve-cluster
tar -xzf /root/pve-cluster-backup.tar.gz -C /
Start pve-cluster & cman:
service pve-cluster start
service cman start
Restore the two ssh symlinks:
ln -sf /etc/pve/priv/authorized_keys /root/.ssh/authorized_keys
ln -sf /etc/pve/priv/authorized_keys /root/.ssh/authorized_keys.orig
Start the rest of the services:
service pvestatd start
service pvedaemon start
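Once all services are running again, it is worth double-checking that the re-installed node shows up in the cluster as before:
pvecm status
pvecm nodes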
Working with Proxmox VE Cluster
Now, you can start creating Virtual Machines on Cluster nodes by using the Central Web-based Management 2.0 on any node.
Troubleshooting
- Date and time have to be synchronized (check "ntpdc -p")
- Check /etc/hosts for the actual IP address of the system