LVM2
== Introduction ==

 storage pool type: lvm

LVM is a thin software layer on top of hard disks and partitions. It can be used to split available disk space into smaller logical volumes. LVM is widely used on Linux and makes managing hard drives easier.

Another use case is to put LVM on top of a big iSCSI LUN. That way you can easily manage space on that iSCSI LUN, which would not be possible otherwise, because the iSCSI specification does not define a management interface for space allocation.
=== Configuration ===

The LVM backend supports the common storage properties content, nodes, disable, and the following LVM specific properties:

* vgname
** LVM volume group name. This must point to an existing volume group.
* base
** Base volume. This volume is automatically activated before accessing the storage. This is mostly useful when the LVM volume group resides on a remote iSCSI server.
* saferemove
** Zero-out data when removing LVs. When removing a volume, this makes sure that all data gets erased.
* saferemove_throughput
** Wipe throughput (cstream -t parameter value).
=== Configuration Example (/etc/pve/storage.cfg) ===

 lvm: myspace
        vgname myspace
        content rootdir,images
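The optional LVM specific properties from the list above go into the same entry. A hedged sketch of a storage where removed volumes are wiped; the saferemove_throughput value of 10485760 (roughly 10 MiB/s) is only an illustrative assumption, not a recommended setting:

 lvm: myspace
        vgname myspace
        content rootdir,images
        saferemove 1
        saferemove_throughput 10485760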
=== General LVM advantages ===

LVM is a typical block storage, but this backend does not support snapshots or clones. Unfortunately, normal LVM snapshots are quite inefficient, because they interfere with all writes on the whole volume group for as long as the snapshot exists.

One big advantage is that you can use it on top of shared storage, for example an iSCSI LUN. The backend itself implements proper cluster-wide locking.

{{Note|The newer LVM-thin backend allows snapshots and clones, but does not support shared storage.}}
== Standard installation ==

On a default installation, Proxmox VE uses LVM. The layout looks as follows:

{| class="wikitable"
|-
! VG !! LV !! Mountpoint !! Note
|-
| pve || swap || || used as swap partition
|-
| pve || root || / || root filesystem
|-
| pve || data || /var/lib/vz/ || Proxmox VE < 4.2
|-
| pve || data || || Proxmox VE >= 4.2 (thin pool)
|}

In Proxmox VE 4.2 we changed the LV data to a thin pool, to provide snapshots and native performance of the disk. /var/lib/vz is now included in the LV root.
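To inspect this layout on an installed system, the standard LVM reporting commands can be used, for example:

 vgs pve
 lvs pve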
== LVM-Thin ==

 storage pool type: lvmthin

LVM normally allocates blocks when you create a volume. LVM thin pools instead allocate blocks when they are written. This behavior is called thin-provisioning, because volumes can be much larger than the physically available space.

You can use the normal LVM command line tools to manage and create LVM thin pools (see man lvmthin for details). Assuming you already have an LVM volume group called pve, the following commands create a new LVM thin pool (size 100G) called data:

 lvcreate -L 100G -n data pve
 lvconvert --type thin-pool pve/data
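Alternatively, the same pool can be created in a single step (a short sketch using the same names and size as above):

 lvcreate -L 100G -T pve/data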
=== Configuration ===

The LVM thin backend supports the common storage properties content, nodes, disable, and the following LVM specific properties:

* vgname
** LVM volume group name. This must point to an existing volume group.
* thinpool
** The name of the LVM thin pool.
=== Configuration Example (/etc/pve/storage.cfg) ===

 lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
=== General LVM-Thin advantages ===

LVM-Thin is a block storage, but it fully supports efficient snapshots and clones. New volumes are automatically initialized with zeros.

Note that LVM thin pools cannot be shared across multiple nodes, so you can only use them as local storage.
=== Create an extra LV for /var/lib/vz ===

This can easily be done by creating a new thin LV, which is thin provisioned:

 lvcreate -n <Name> -V <Size[M,G,T]> <VG>/<LVThin_pool>

A real-world example looks like this:

 lvcreate -n vz -V 10G pve/data

Now a filesystem must be created on the LV:

 mkfs.ext4 /dev/pve/vz

As the last step, it has to be mounted.

{{Note|Be sure that /var/lib/vz is empty. On a default installation it isn't.}}

To make it always accessible, add the following line to /etc/fstab:

 echo '/dev/pve/vz /var/lib/vz ext4 defaults 0 2' >> /etc/fstab
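After the fstab entry is in place, the volume can be mounted right away (this assumes /var/lib/vz already exists and is empty, as noted above):

 mount /var/lib/vz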
=== Resize metadata pool ===

{{Note|If the thin pool is extended, it can be necessary to extend the metadata pool as well. This can be done with the following command.}}

 lvresize --poolmetadatasize +<size[M,G]> <VG>/<LVThin_pool>
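For example, to check the current metadata usage and then grow the metadata of the default pve/data pool by 1G (the size is only an illustrative value):

 # check current metadata usage of the pool
 lvs -o lv_name,metadata_percent,lv_metadata_size pve
 # grow the metadata LV by 1G (illustrative size)
 lvresize --poolmetadatasize +1G pve/data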
== LVM vs LVM-Thin ==

{| class="wikitable"
|-
! Type !! Content types !! Image formats !! Shared !! Snapshots !! Clones
|-
| LVM || images,rootdir || raw || possible || no || no
|-
| LVM-Thin || images,rootdir || raw || no || yes || yes
|}
== Administration ==

=== Create a Volume Group ===

Let's assume we have an empty disk /dev/sdb, on which we want to create a Volume Group named vmdata.

First create a partition, then initialize it as a physical volume and create the volume group:

 sgdisk -N 1 /dev/sdb
 pvcreate --metadatasize 250k -y -ff /dev/sdb1
 vgcreate vmdata /dev/sdb1
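The result can be verified with the standard LVM reporting tools:

 pvs /dev/sdb1
 vgs vmdata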
=== Create an LVM-Thin Pool ===

Now create a thin pool inside the new volume group:

 lvcreate -L 80G -T -n vmstore vmdata
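To make the new pool available to Proxmox VE, a matching entry has to be added to the storage configuration. A sketch of what this could look like in /etc/pve/storage.cfg (the storage name vmstore is only an example):

 lvmthin: vmstore
        vgname vmdata
        thinpool vmstore
        content rootdir,images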
== Troubleshooting and known issues ==

tbd.

[[Category:Technology]]