Storage Model

Introduction

Proxmox VE uses a very flexible storage model. Virtual machine images can be stored on local storage (more than one local storage type is supported) as well as on shared storage such as NFS or a SAN (e.g. via iSCSI).

All storage definitions are synchronized throughout the Proxmox_VE_2.0_Cluster, so a new SAN configuration is usable on all cluster nodes within minutes.

You may configure as many storage definitions as you like!

One major benefit of storing VMs on shared storage is the ability to live-migrate running machines without any downtime, as all nodes in the cluster have direct access to VM disk images.

Note: OpenVZ containers must be on local storage or NFS.

Storage types

The following storage types can be added via the web interface.

  • Network storage types supported
    • LVM Group (network backing with iSCSI targets)
    • iSCSI target
    • NFS Share
    • Direct to iSCSI LUN
    • GlusterFS
    • RBD/Ceph
    • Sheepdog (still not stable, so please do not use it in production environments.)
  • Local storage types supported
    • LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
    • Directory (storage on existing filesystem)
    • ZFS local pool (beta state, available since Proxmox VE 3.4)
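
Behind the scenes, every storage definition becomes a plain-text entry in /etc/pve/storage.cfg, which the cluster file system distributes to all nodes. As a rough sketch (storage names, paths and addresses below are examples only), such entries look like this:

dir: local
        path /var/lib/vz
        content images,iso,vztmpl,backup,rootdir

nfs: nfs-store
        server 192.168.1.30
        export /export/proxmox
        path /mnt/pve/nfs-store
        content images,iso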

Add storage to Proxmox VE

Storage type Content

Every defined storage has at least one purpose:

  • some storage types have only one possible use, which is available as soon as you define the storage entry; you don't need to specify it, e.g.:
    • iSCSI: this storage is either used directly (Note: this can be dangerous, see further down) or as a base for LVM or ZFS-over-iSCSI storage entries (see further down); it needs a portal address and a target ID
    • LVM: this storage type is only used to store RAW VM disk images, and first needs an iSCSI "base storage" entry defined or local disks prepared as LVM storage.
    • ZFS over iSCSI: this storage type is only used to store RAW VM disk images, and first needs an iSCSI "base storage" entry defined. (available since Proxmox VE 3.2)
    • ZFS: this storage type is used to store RAW VM disk images, but can also be used as a directory. (available since Proxmox VE 3.4)
    • RBD/Ceph: this storage type is only used to store RAW VM disk images, and needs a working "Monitor host"
    • Sheepdog: this storage type is only used to store RAW VM disk images, and needs a working "portal" (still not stable, so please do not use it in production environments.)
  • other storage types have multiple possible uses, which you choose from a list; by default none is selected, and until you choose at least one the storage is not usable, e.g.:
    • directory: this storage type is used to store any kind of content (you just have to choose); it needs a local folder path
    • NFS: this storage type is used to store any kind of content (you just have to choose); it needs a server address and an export name
    • GlusterFS: this storage type is used to store any kind of content (you just have to choose); it needs a server address and a volume name

[Screenshot: Storage content selection]

Possible choices (select multiple with CTRL+click in the list above; an example on-disk layout follows this list):

  • Images: to store working VM (kvm) disk images
    this will create a ./images folder on the filesystem
    ./images will contain a folder for each VM that has disks on this storage, named by its VMID, holding the used/unused disk images (qcow2, vmdk, raw, etc.) and, if created, qcow2 snapshot diff files
  • ISO: to store .iso files to use in VM virtual cd drives
    this will create a ./template/iso folder on the filesystem
    ./template/iso will contain all ".iso" files uploaded to this storage, available in the web GUI for the VMs' ISO selection for virtual CD/DVD drives. Not available for CTs.
  • Templates: to store CT (openvz) templates
    this will create a ./template/cache folder on the filesystem
    ./template/cache will contain all ".tar.gz" files uploaded to this storage, available in the web GUI when creating a new CT from a template.
  • Backups: to store VM/CT backups
    this will create a ./dump folder on the filesystem
    ./dump will contain all backup files (currently .vma.lzo or .vma.gz depending on backup settings) and, for each backup file, a related .log file.
  • Containers: to store working CT (openvz) disk images (file and folders directly on the storage filesystem)
    this will create a ./private folder on the filesystem,
    ./private will contain a folder for each CT defined on this storage, named by its VMID, containing the whole local CT filesystem
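
For example, on a directory-backed storage with all content types enabled (the default 'local' storage under /var/lib/vz is used here; VMIDs and file names are made up), the resulting layout could look like this:

/var/lib/vz/images/117/vm-117-disk-1.qcow2                          (disk image of VM 117)
/var/lib/vz/template/iso/debian-7.8.0-amd64-netinst.iso             (uploaded installer ISO)
/var/lib/vz/template/cache/debian-7.0-standard_7.0-2_amd64.tar.gz   (CT template)
/var/lib/vz/dump/vzdump-qemu-117-2015_06_01-03_00_01.vma.lzo        (backup of VM 117)
/var/lib/vz/dump/vzdump-qemu-117-2015_06_01-03_00_01.log            (log of that backup)
/var/lib/vz/private/118/                                            (root filesystem of CT 118)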

LVM Groups - An Ideal Solution

Using LVM groups provides the best manageability. Logical volumes can easily be created/deleted/moved between physical storage devices. If the base storage for the LVM group is accessible on all Proxmox VE nodes (e.g. an iSCSI LUN) or replicated (with DRBD) then all nodes have access to VM images, and live-migration is possible.
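
As an illustration of that flexibility, plain LVM can move a logical volume's data between physical devices while the volume stays online. A sketch (the device and volume names are placeholders; both devices must be physical volumes of the same volume group):

proxmox-ve:~# pvmove -n vm-117-disk-1 /dev/sdb1 /dev/sdc1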

LVM Groups with Network Backing

In this configuration, network block devices (iSCSI targets) are used as the physical volumes for LVM logical volume storage. This is a two-step procedure and can be fully configured via the web interface (a command-line sketch follows the steps below).

  1. First, add the iSCSI target. (On some iSCSI targets you need to add the IQN of the Proxmox VE server to allow access.)
    • Click 'Add iSCSI Target' on the Storage list
    • As the storage name use whatever you want, but take care: this name cannot be changed later.
    • Give the 'Portal' IP address or server name and scan for unused targets
    • Disable 'use LUNs directly'
    • Click save
  2. Second, add an LVM group on this target.
    • Click 'Add LVM Group' on the Storage list
    • As the storage name use whatever you want, but take care: this name cannot be changed later.
    • For 'Base Storage', use the drop down menu to select the previously defined iSCSI target.
    • For 'Base Volume' select a LUN
    • For 'Volume Group Name' give a unique name (this name cannot be changed later).
    • Enable shared use (recommended)
    • Click save
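
The same two steps can also be done from the console with the pvesm tool. The following is only a sketch; the storage names, portal address, target IQN and LUN ID are placeholders you need to adapt:

proxmox-ve:~# pvesm add iscsi san-base --portal 192.168.1.20 --target iqn.2001-04.com.example:storage.lun1 --content none
proxmox-ve:~# pvesm list san-base
proxmox-ve:~# pvesm add lvm san-lvm --base san-base:<volume-id> --vgname vg-san --shared 1 --content images

The first command defines the iSCSI base storage with 'use LUNs directly' disabled (content none), the second lists the LUNs the target exposes (their volume IDs correspond to the 'Base Volume' selection in the web interface), and the third creates the shared LVM storage on one of those LUNs.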

LVM Groups with Local Backing

In this configuration, physical block devices (which can be DRBD devices) are used as the physical volumes for LVM logical volume storage. Before you can store VMs this way, you first need to configure LVM2 using the console; full management is not possible through the web interface at this time.

This is a three-step procedure (for demonstration, an 8 GB USB stick was plugged in, recognized as /dev/sdb on this box):

  1. Physically install all devices you wish to include in a volume group
  2. Define those devices as LVM physical volumes and create a volume group on them (on the console)
  3. Add the resulting LVM group to the storage list via the web interface

First, create the physical volume (pv):

proxmox-ve:~# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
proxmox-ve:~#

Second, create a volume group (vg):

proxmox-ve:~# vgcreate usb-stick /dev/sdb1
  Volume group "usb-stick" successfully created
proxmox-ve:~#

And finally: Add the LVM Group to the storage list via the web interface:

 "Storage name: usb", "Base storage: Existing volume groups", "Volume Group Name: usb-stick"

Now you can create the KVM VM:
Type: Fully virtualized (KVM)
Disk storage: usb (lvm)
Disk space (GB): 1
Image format: raw (the only option available on LVM storage; raw images do not support Live_Snapshots)

After the creation (let's assume it'll be VM 117), you will have an additional logical volume of the size of the VM's disk space:

proxmox-ve:~# lvdisplay 
  --- Logical volume ---
  LV Name                /dev/usb-stick/vm-117-disk-1
  VG Name                usb-stick
  LV Size                1.00 GB


Note: after the experiment, to remove the test storage do this:

Remove the 'usb' storage from the web interface, then:

proxmox-ve:~# vgremove usb-stick
  Volume group "usb-stick" successfully removed

then

proxmox-ve:~# pvremove /dev/sdb1
  Labels on physical volume "/dev/sdb1" successfully wiped

and finally unplug the usb stick.

iSCSI Target

iSCSI is a widely employed technology used to connect to storage servers. Almost all vendors support iSCSI. There are also open-source solutions available (e.g. OpenMediaVault, which is based on Debian).

iSCSI targets can be fully configured via the web interface. For details see Storage_Model#LVM Groups with Network Backing.
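
To verify from the console that a portal is reachable and to see which targets it announces, the open-iscsi tools can run a sendtargets discovery (the portal address is a placeholder):

proxmox-ve:~# iscsiadm -m discovery --type sendtargets --portal 192.168.1.20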

NFS Share

NFS is a very simple way to integrate shared storage into Proxmox VE and enables live-migration. Storage on NFS shares is similar to the file-on-disk directory method, with the added benefit of shared storage and live migration.

NFS shares can be fully configured via the web interface.

  • Click 'Add NFS Share' on the Storage list
  • As the storage name use whatever you want, but take care: this name cannot be changed later.
  • Give the 'Server' IP address or server name of your NFS server and scan for 'Exports'
  • Select the 'Export'
  • Content: select what you want to store: Virtual Disk images, ISO images, OpenVZ templates, backup files or containers.
  • Click save
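
The same definition can be created from the console with pvesm; showmount (from the nfs-common package) is a quick way to check beforehand which exports the server offers. The address, export path and storage name below are placeholders:

proxmox-ve:~# showmount -e 192.168.1.30
proxmox-ve:~# pvesm add nfs nfs-store --server 192.168.1.30 --export /export/proxmox --path /mnt/pve/nfs-store --content images,iso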

Use iSCSI LUN directly

This is possible but not recommended.

Note: Currently iSCSI LUNs are not protected by the Proxmox VE management tools. This means that if you use an iSCSI LUN directly, it still shows up as available, and if you use the same LUN a second time you will lose all data on it.

Directory

Proxmox VE can use local directories or locally mounted shares for storage (Virtual disk images, ISO images, or backup files). This is the least flexible, least efficient storage solution, but is very similar to the NFS method, where images are stored on an existing filesystem as large files.

How to mount a Windows (Samba) share on Proxmox VE via /etc/fstab

In this scenario, the VM storage functions identically to the directory method. The SMB/CIFS share is mounted as a local mountpoint and appears to Proxmox VE as local storage. To mount a remote samba share, just follow this (adapt it according to your setup):

First, create a target directory, e.g.:

mkdir /mnt/samba1

Then add a line like the following to /etc/fstab (e.g. using nano /etc/fstab):

//windows-or-samba-server-name/sharename /mnt/samba1 cifs username=yourusername,password=yoursecretpassword,domain=yourdomainname 0 0
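
If you would rather not keep the password readable in /etc/fstab, the cifs 'credentials=' mount option can point to a root-only file instead (the file path is just an example):

//windows-or-samba-server-name/sharename /mnt/samba1 cifs credentials=/root/.smbcredentials 0 0

with /root/.smbcredentials (chmod 600) containing:

username=yourusername
password=yoursecretpassword
domain=yourdomainname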

Next you can activate it with:

mount //windows-or-samba-server-name/sharename

Then define a 'Directory' based storage on the web interface using the newly created directory '/mnt/samba1', for example to store backups.
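
As with any directory storage, that can also be done from the console with pvesm, for example to store backups (the storage name 'samba1' is just an example):

proxmox-ve:~# pvesm add dir samba1 --path /mnt/samba1 --content backup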
