Proxmox VE 2.0 Cluster

Note: This article is about the Proxmox VE 2.0 beta.

Introduction

Proxmox VE 2.0 Cluster enables central management of multiple physical servers. A Proxmox VE cluster consists of several nodes (up to 16 physical nodes; more are probably possible).

Main features

  • Centralized web management, including a secure VNC console
  • Support for multiple authentication sources (e.g. local, MS ADS, LDAP, ...)
  • Role-based permission management for all objects (VMs, storage, nodes, etc.)
  • Creates multi-master clusters (no single master anymore!)
  • Proxmox Cluster file system (pmxcfs): a database-driven file system for storing configuration files, replicated in real time to all nodes using Corosync
  • Migration of virtual machines between physical hosts
  • Cluster-wide logging
  • RESTful web API

Requirements

  • All nodes must be on the same network, because the cluster uses IP multicast to communicate between nodes (see also the Corosync Cluster Engine). Note: some switches do not support IP multicast by default, and it must be enabled manually first. See the multicast notes for more information; a quick connectivity check is sketched after this list.
  • An SSH tunnel on port 22 is used between nodes.
  • VNC console traffic is secured via SSL, using ports between 5900 and 5999.
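
Before creating the cluster it can be worth verifying multicast and basic reachability between the prospective nodes. A minimal sketch, assuming the omping and netcat tools are installed (they are not part of a default installation) and reusing the example addresses from the status output further down; adjust them to your own nodes:

# run the same omping command on every listed node at roughly the same
# time; steady replies from all peers indicate working IP multicast
omping 192.168.7.103 192.168.7.104 192.168.7.105

# check that SSH (port 22) is reachable from every other node
nc -z -w 3 192.168.7.104 22 && echo "ssh port reachable"

# VNC console traffic uses ports 5900-5999; make sure no firewall blocks them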

Proxmox VE Cluster

First, install the Proxmox VE nodes, see Installation. Make sure that each Proxmox VE node is installed with its final hostname and IP configuration. Changing the hostname or IP address is not possible after cluster creation.
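
A minimal sketch for double-checking the hostname and address before clustering, using only standard tools; the exact /etc/hosts layout depends on your installation:

# print the node's hostname and the address it resolves to
hostname
hostname --ip-address

# the hostname should resolve to the node's real, final IP address
grep "$(hostname)" /etc/hosts
ip addr show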

Currently, cluster creation has to be done on the console; you can log in to the Proxmox VE node via SSH.

All settings can be made via "pvecm", the Proxmox VE cluster manager toolkit.
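
Running pvecm with "help" should print a summary of the available subcommands; the exact list depends on the Proxmox VE version:

pvecm help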

Create the Cluster

Log in via SSH to the first Proxmox VE node. Use a unique name for your cluster; this name cannot be changed later.

Create:

pvecm create YOUR-CLUSTER-NAME

To check the state of the cluster:

pvecm status
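
For example, a minimal session on the first node (the cluster name "office" is arbitrary; pick your own unique name):

pvecm create office
# pvecm status should now report the chosen cluster name and this node
# as the only member
pvecm status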

Adding nodes to the Cluster

Log in via SSH to the other Proxmox VE nodes. Please note that the node must not hold any VMs. (Otherwise you will get conflicts with identical VMIDs; as a workaround, use vzdump to back up the VMs and restore them to a different VMID after the cluster configuration, as sketched below.)
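
A minimal sketch of that workaround, assuming a KVM guest with VMID 101 and the default dump directory; archive names, paths and VMIDs will differ on your system:

# back up the conflicting guest on the node that is about to join
vzdump 101 --dumpdir /var/lib/vz/dump

# after joining, restore the backup under a new, unused VMID (9101 is
# arbitrary); for OpenVZ containers use vzrestore instead of qmrestore
qmrestore /var/lib/vz/dump/vzdump-qemu-101-<timestamp>.tar 9101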

Add a node:

pvecm add IP-ADDRESS-CLUSTER

For IP-ADDRESS-CLUSTER, use the IP address of an existing cluster node.
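
For example, if 192.168.7.104 is already a member of the cluster (as in the status output below):

pvecm add 192.168.7.104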

To check the state of the cluster:

pvecm status

Display the state of the cluster:

pvecm status

CID----IPADDRESS----ROLE-STATE--------UPTIME---LOAD----MEM---ROOT---DATA
 1 : 192.168.7.104   M     A    5 days 01:43   0.54    20%     1%     4%
 2 : 192.168.7.103   N     A    2 days 05:02   0.04    26%     5%    29%
 3 : 192.168.7.105   N     A           00:13   1.41    22%     3%    15%
 4 : 192.168.7.106   N     A           00:05   0.54    17%     3%     3%

Display the nodes of the cluster:

pvecm nodes

Node  Sts   Inc   Joined               Name
   1   M    156   2011-09-05 10:39:09  hp1
   2   M    156   2011-09-05 10:39:09  hp2
   3   M    168   2011-09-05 11:24:12  hp4
   4   M    160   2011-09-05 10:40:27  hp3

Remove a cluster node

Move all virtual machines off the node; just use the Central Web-based Management 2.0 to migrate or delete all VMs. Make sure you have no local backups you want to keep, or save them accordingly.
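
For KVM guests the migration can also be done from the command line; a minimal sketch, assuming a guest with VMID 101 should move from the node being removed to the node hp1 (the --online option live-migrates a running guest):

qm migrate 101 hp1 --online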

Log in to one of the remaining nodes via SSH. Issue a pvecm nodes command to identify the node ID:

pvecm nodes

Node  Sts   Inc   Joined               Name
   1   M    156   2011-09-05 10:39:09  hp1
   2   M    156   2011-09-05 10:39:09  hp2
   3   M    168   2011-09-05 11:24:12  hp4
   4   M    160   2011-09-05 10:40:27  hp3

Issue the delete command (here deleting node hp2):

pvecm delnode hp2

If the operation succeeds, no output is returned; just check the node list again with 'pvecm nodes' (or simply 'pvecm n').

ATTENTION: you need to power off the removed node and make sure that it will not power on again in the existing cluster network with its current configuration.

Working with Proxmox VE Cluster

Now you can start creating virtual machines on the cluster nodes by using the Central Web-based Management 2.0 on any node.

Troubleshooting

tbd.


Video Tutorials

Proxmox VE YouTube channel