LVM2
== Introduction ==
storage pool type: lvm

LVM is a thin software layer on top of hard disks and partitions. It can be used to split available disk space into smaller logical volumes. LVM is widely used on Linux and makes managing hard drives easier.

Another use case is to put LVM on top of a big iSCSI LUN. That way you can easily manage space on that iSCSI LUN, which would not be possible otherwise, because the iSCSI specification does not define a management interface for space allocation.

=== Configuration ===
The LVM backend supports the common storage properties content, nodes, disable, and the following LVM-specific properties:
* vgname
** LVM volume group name. This must point to an existing volume group.
* base
** Base volume. This volume is automatically activated before accessing the storage. This is mostly useful when the LVM volume group resides on a remote iSCSI server.
* saferemove
**Zero-out data when removing LVs. When removing a volume, this makes sure that all data gets erased.
* saferemove_throughput
**Wipe throughput (cstream -t parameter value).
 
=== Configuration Example (/etc/pve/storage.cfg) ===
lvm: myspace
      vgname myspace
      content rootdir,images
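
When the volume group sits on a remote iSCSI LUN, the base property can be used to make sure the LUN is activated before the volume group is accessed. A hypothetical sketch (the storage name mynas, the portal/target, and the LUN volume name are placeholders; real volume names can be listed with pvesm list <storage>):

 iscsi: mynas
       portal 10.10.10.1
       target iqn.2017-11.com.example:storage
       content none
 
 lvm: myiscsilvm
       vgname vgiscsi
       base mynas:0.0.0.scsi-example
       shared
       content rootdir,images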
 
=== General LVM advantages ===
 
LVM is a typical block storage, but this backend does not support snapshots and clones. Unfortunately, normal LVM snapshots are quite inefficient, because they interfere with all writes on the whole volume group for as long as the snapshot exists.

One big advantage is that you can use it on top of a shared storage, for example an iSCSI LUN. The backend itself implements proper cluster-wide locking.

{{Note|The newer LVM-thin backend supports snapshots and clones, but does not support shared storage.}}
 
== Standard installation ==
On a default installation, Proxmox VE uses LVM.
The layout looks as follows:
{| class="wikitable"
|-
! VG !! LV !! Mountpoint !! Note
|-
| pve || swap ||  || used as the swap partition
|-
| pve || root || / || root filesystem
|-
| pve || data || /var/lib/vz/ || Proxmox VE < 4.2
|-
| pve || data || || Proxmox VE >= 4.2
|}
In Proxmox VE 4.2 we changed the LV data to a thin pool, to provide snapshots and native disk performance. /var/lib/vz is now part of the LV root.
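
You can inspect this layout with the standard LVM reporting tools:

 lvs
 vgs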
 
==LVM-Thin==
storage pool type: lvmthin
 
LVM normally allocates blocks when you create a volume. LVM thin pools instead allocate blocks when they are written. This behavior is called thin-provisioning, because volumes can be much larger than the physically available space.
 
You can use the normal LVM command-line tools to manage and create LVM thin pools (see man lvmthin for details). Assuming you already have an LVM volume group called pve, the following commands create a new LVM thin pool (size 100G) called data:
 
 lvcreate -L 100G -n data pve
 lvconvert --type thin-pool pve/data
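
You can verify the conversion; the pool should now be listed with the t (thin pool) attribute:

 lvs -o name,attr pve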
 
Caution:

Under certain circumstances, LVM does not correctly calculate the metadata pool/chunk size. Please check that the metadata pool is big enough. The formula that has to be satisfied is:

 PoolSize / ChunkSize * 64 B = MetadataPoolSize

You can get this information via:

 lvs -a -o name,size,chunk_size
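
For example, a 100 GiB pool with a 64 KiB chunk size needs at least

 100 GiB / 64 KiB * 64 B = 100 MiB

of metadata space.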
 
=== Configuration ===
The LVM-thin backend supports the common storage properties content, nodes, disable, and the following LVM-thin specific properties:
* vgname
** LVM volume group name. This must point to an existing volume group.
* thinpool
** The name of the LVM thin pool.
 
=== Configuration Example (/etc/pve/storage.cfg) ===
 
lvmthin: local-lvm
          thinpool data
          vgname pve
          content rootdir,images
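
The same kind of storage entry can also be created from the command line with pvesm; a sketch, assuming the pool pve/data from above and a new storage ID mythinpool:

 pvesm add lvmthin mythinpool --thinpool data --vgname pve --content rootdir,images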
 
=== General LVM-Thin advantages ===
 
LVM-thin is block storage, but it fully supports efficient snapshots and clones. New volumes are automatically initialized with zeros.
 
Note that LVM thin pools cannot be shared across multiple nodes, so you can only use them as local storage.
 
=== Create an extra LV for /var/lib/vz ===
 
This can easily be done by creating a new thin LV, which is thin-provisioned:

 lvcreate -n <Name> -V <Size[M,G,T]> <VG>/<LVThin_pool>
 
A real-world example looks like this:

 lvcreate -n vz -V 10G pve/data
 
Now a filesystem must be created on the LV:

 mkfs.ext4 /dev/pve/vz
 
As a last step, it has to be mounted.
 
{{Note|Be sure that /var/lib/vz is empty. On a default installation it isn’t.}}
 
To make it permanently accessible, add the following line to /etc/fstab and then run '''mount -a''' to activate the mount point:

 echo '/dev/pve/vz /var/lib/vz ext4 defaults 0 2' >> /etc/fstab
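
Afterwards you can check that the new LV is mounted:

 df -h /var/lib/vz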
 
=== Resize metadata pool ===
 
{{Note|If the pool is extended, it is necessary to also extend the metadata pool. This can be achieved with the following command.}}

 lvresize --poolmetadatasize +<size[M,G]> <VG>/<LVThin_pool>
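
For example, to grow the metadata pool of the default pve/data thin pool by 1 GiB:

 lvresize --poolmetadatasize +1G pve/data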
 
== LVM vs LVM-Thin ==
{| class="wikitable"
|-
! Type !! Content types !! Image formats !! Shared !! Snapshots !! Clones
|-
| LVM || images,rootdir  || raw || possible || no || no
|-
| LVM-Thin || images,rootdir  || raw || no || yes || yes
|}
 
== Administration ==
=== Create a Volume Group ===
Let's assume we have an empty disk, /dev/sdb, on which we want to create a Volume Group named vmdata.
 
First create a partition:

 sgdisk -N 1 /dev/sdb

Then initialize it as a Physical Volume and create the Volume Group on it:

 pvcreate --metadatasize 250k -y -ff /dev/sdb1
 vgcreate vmdata /dev/sdb1
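
To make the new volume group available as storage in Proxmox VE, it can then be added with pvesm; a sketch, assuming the storage ID vmdata:

 pvesm add lvm vmdata --vgname vmdata --content rootdir,images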
 
== Troubleshooting and known issues ==
 
 
=== Thin Overprovisioning ===
 
 
In an LVM-thin pool there is no space limit when defining LVM volumes in it, regardless of whether these volumes are virtual disks for containers or virtual machines, or just volumes for any other purpose created with lvcreate. If the total size of all logical volumes defined within a thin pool exceeds the physical size of the pool, this is called overprovisioning.
 
Attention: you can never use more space for data than is physically available! Unfortunately, no direct warning or error message occurs when the space limit is reached. At the user interface, e.g. inside a virtual machine, it looks like all logical space can be used; but once the physical limit is exceeded, data gets corrupted!
 
Therefore it is recommended:

* to avoid overprovisioning; or at least, if that is not possible,
* to regularly check the actual physical usage via

 lvs
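
A more targeted check (using standard lvs report fields) shows how full the pool's data and metadata are:

 lvs -o lv_name,lv_size,data_percent,metadata_percent pve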
 
See also "Automatically extend thin pool LV" in
 
 man lvmthin
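
Automatic extension is configured in /etc/lvm/lvm.conf; a sketch using the stock LVM activation settings (the threshold and percent values here are illustrative):

 activation {
     thin_pool_autoextend_threshold = 80
     thin_pool_autoextend_percent = 20
 }
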
[[Category: HOWTO]]
