Intel Modular Server HA

Introduction

Intel Modular Server (IMS) is a powerful and affordable server platform with an integrated SAN. This how-to describes the installation and configuration of a Proxmox VE High Availability cluster on the IMS.

Note: Also take a look at the Intel Modular Server page, which covers the Proxmox VE 1.x series.

A big thanks to Intel for sponsoring the hardware for the Proxmox VE test lab.

Before you start

Make sure you have the latest firmware on the IMS - at the time of these tests we used firmware V6.8.

Depending on the workload, you need to clarify, among other things, the following in advance:

  • Number of KVM virtual machines, including storage requirements
  • Number of OpenVZ containers, including storage requirements
  • Backup considerations

Hardware configuration

Storage

The storage drive bay supports six 3.5" SAS/SATA or fourteen 2.5" SAS hard disk drives. We prefer the model with 14 drives as it provides more flexibility.

This is one of the main advantages of the IMS, as you do not need an extra SAN box - no expensive Fibre Channel (FC) infrastructure and no extra iSCSI storage network to configure. In order to get shared LVM storage you just need the optional shared LUN key.

The IMS and the Proxmox VE storage model are very flexible, so you can configure the disks to meet your requirements. A basic recommendation: use only certified disks and only redundant RAID levels; RAID 10 gives the best performance.

Compute Modules

You can use up to six compute modules, each with two Intel Xeon processors and up to 96 GB of memory.

Our test setup:

  • Three MFS5520VI compute modules
  • Each module with two Xeon E5540 CPUs and 12 GB memory (6 x 2 GB)

Network

Nothing special is needed. We just create a simple bond with eth0 and eth1 in active-backup mode; see Bond configuration (Video).
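
A minimal sketch of such a bond in /etc/network/interfaces - eth0 and eth1 as slave interfaces come from this setup, the addresses are only examples and have to be adapted to your network:

# eth0 + eth1 bonded in active-backup mode
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode active-backup

# default Proxmox VE bridge, now running on top of the bond
auto vmbr0
iface vmbr0 inet static
    address 192.168.7.60
    netmask 255.255.255.0
    gateway 192.168.7.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0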

Installation of Proxmox VE

Bare-metal ISO install

Using the integrated Java remote console, installation can be done by mapping the ISO image to the compute modules (see also Installation). After the reboot, install the latest packages on all nodes:
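
On each node, a standard Debian upgrade does the job:

apt-get update
apt-get dist-upgrade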

  • Proxmox VE package list:
pveversion -v
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-4 (running version: 3.1-4/f6816604)
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2

Create Cluster

See Proxmox VE 2.0 Cluster
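
In short - the details are in the linked article - the cluster is created on one node and the other nodes join it. With the node names used in this example (the IP address of the first node is only a placeholder):

  • On the first node (e.g. proxmox-7-60), create the cluster; the name "ims-cluster" matches the cluster.conf shown later:
pvecm create ims-cluster
  • On every other node, join the cluster using the IP of the first node:
pvecm add 192.168.7.60
  • Check that all nodes have joined:
pvecm status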

Setup the LVM disk for KVM virtual machines

This example assumes that the installation was done on /dev/sda and /dev/sdb is an additional raid volume, attached to all servers using the shared LUN key.

Login to a node via SSH and partition the disk:

  • Create a big single partition (sdb1)
fdisk /dev/sdb
  • Create the physical volume:
pvcreate /dev/sdb1
 Physical volume "/dev/sdb1" successfully created
  • Create the volume group 'vmdisks' (just choose a unique name):
vgcreate vmdisks /dev/sdb1
  Volume group "vmdisks" successfully created
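
Since /dev/sdb sits on the shared LUN, the new volume group should also be visible on the other nodes. A quick check on each node (if the new partition is not picked up yet, a partprobe /dev/sdb or a reboot helps):

pvscan
vgs vmdisks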

And finally: add the LVM group to the storage list via the web interface (this is an old screenshot, but it is more or less the same on the new 2.0 interface).

[Screenshot: Add LVM Group]

Backup Store

Depending on your needs and backup strategy, you can use part of the disks for backups. However, you should not keep your backup files on the same storage and location, so it makes sense to use a backup store in a remote location, e.g. a remote NAS server (e.g. NFSv3).
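
Such an NFS store can be added via the web interface (Datacenter -> Storage -> Add -> NFS) or directly in /etc/pve/storage.cfg; a sketch with placeholder server and export names:

nfs: backup-nas
    path /mnt/pve/backup-nas
    server 192.168.7.200
    export /export/pve-backup
    options vers=3
    content backup
    maxfiles 2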

You can also attach an external SAS tape drive to the IMS, e.g. LTO3/4/5 drives.

Install Virtual Machines

Install your first KVM virtual machine, see also:
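
For reference, a KVM guest can also be created on the command line. The following is only a sketch with placeholder values, using the shared LVM storage 'vmdisks' and VMID 110 (one of the VMs later managed by HA):

qm create 110 --name web01 --memory 2048 --sockets 1 --cores 2 --net0 virtio,bridge=vmbr0 --virtio0 vmdisks:32 --ostype l26

The installation ISO can then be attached via the GUI and the guest started and installed from the console.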

HA Configuration

The configuration of fence devices is CLI only. Adding and managing VMs and containers for HA should be done via the GUI.

Fencing is a mandatory part of Proxmox VE 2.0 HA; without fencing, HA will not work.

Configure IMS fence device

The following example shows an HA cluster with:

  • three nodes
    • proxmox-7-60
    • proxmox-7-61
    • proxmox-7-62
  • one fence device, used on all nodes
    • fence_intelmodular (configure the user and password via IMS GUI)
  • two HA VMs

The resulting cluster.conf:
<?xml version="1.0"?>
<cluster config_version="24" name="ims-cluster">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <fencedevices>
    <fencedevice agent="fence_intelmodular" ipaddr="192.168.7.10" login="snmpv3user" name="ims" passwd="12345678" power_wait="15" snmp_auth_prot="SHA" snmp_sec_level="authNoPriv" snmp_version="3"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="proxmox-7-61" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="ims" port="2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="proxmox-7-62" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="ims" port="3"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="proxmox-7-60" nodeid="3" votes="1">
      <fence>
        <method name="1">
          <device name="ims" port="1"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <pvevm autostart="1" vmid="111"/>
    <pvevm autostart="1" vmid="110"/>
  </rm>
</cluster>
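
Note: the recommended way to change the cluster configuration on Proxmox VE 2.x/3.x is to edit a copy and activate it afterwards, roughly:

cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new
nano /etc/pve/cluster.conf.new

Add the fencedevices and fence sections as shown above, increase config_version by one, and then activate the new configuration via the web interface (Datacenter -> HA -> Activate).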

Configure VM or Containers for HA
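
As mentioned above, this is done via the web interface: Datacenter -> HA -> Add -> HA managed VM/CT, then enter the VMID of the guest. Every guest managed this way ends up as a pvevm entry in the <rm> section of cluster.conf, like VMs 110 and 111 in the example above.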

Testing

Before going into production, run as many tests as possible.
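
The most important test in an HA setup is fencing itself. Two useful checks, using the node names from the example above (note that fencing a node will really power it off or reset it):

  • Check the node and HA resource status:
clustat
  • Fence a node manually from one of the other nodes:
fence_node proxmox-7-61

In addition, pulling the power or the network of a node and watching whether the HA-managed VMs are restarted on another node is a good real-world test.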