Software RAID

== Introduction ==


Software RAID is an alternative to using a hardware RAID controller for local storage.
Please note that Proxmox VE currently only supports one technology for local software-defined RAID storage: ZFS.


== Supported Technologies ==


=== ZFS ===


If you want to run a supported configuration using a proven enterprise storage technology with data integrity checks and auto-repair capabilities, ZFS is the right choice.
See the [[ZFS on Linux]] article for more information.
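As an illustration of the integrity checks and auto-repair mentioned above, a minimal sketch of a mirrored pool and a scrub cycle (the pool name <code>tank</code> and the device paths are assumptions; adjust them to your hardware):

```shell
# Create a two-way mirror (comparable to RAID1) from two disks.
zpool create tank mirror /dev/sdb /dev/sdc

# Read back every block, verify its checksum, and repair damaged
# blocks automatically from the intact mirror copy.
zpool scrub tank

# Report pool health and any read/write/checksum errors found.
zpool status tank
```

Running a scrub regularly (e.g., monthly) is what turns silent bit rot into a reported and repaired error instead of a surprise.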
 
== Technology Preview ==
 
=== btrfs ===
 
btrfs can be a safe and functional choice; it has been supported as a technology preview since Proxmox VE 7.
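Like ZFS, btrfs checksums data and metadata and can repair damaged blocks from a redundant copy when a RAID profile is used. A sketch of the scrub cycle (the mount point <code>/</code> is an assumption):

```shell
# Verify data and metadata checksums across the file system; on a
# redundant profile (e.g., RAID1), bad blocks are repaired from the
# good copy.
btrfs scrub start /

# Watch scrub progress and the error counters.
btrfs scrub status /
```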
 
== Unsupported Technologies ==
 
=== mdraid ===
 
mdraid performs no checks for bit rot or data integrity, and the file systems commonly used on top of it do not provide such checks either.
 
That means that if some data gets corrupted, which happens on any long-running system sooner or later, you normally do not notice until it is too late.
For this reason, the Proxmox projects do '''not''' support this technology, to avoid users who are unaware of these implications running into such problems.
Its user experience is also less polished than that of ZFS, and while it might provide slightly more performance in some cases, this comes only at the cost of the safety guarantees that, for example, ZFS provides.
 
You can still use MD-RAID, but you must be aware of at least the following points:
* You cannot expect any help from either the Proxmox Enterprise Support or the Proxmox Community Forum for issues that seem related to its usage, even if only tangentially.
* Years of good experience are only relevant if you have actually dealt with hardware and software failures at the layers below or above MD-RAID.
* MD-RAID is susceptible to breakage by any program that can issue <code>O_DIRECT</code> write requests to it; ensure that <code>O_DIRECT</code> writes are not relayed to the underlying storage (e.g., change the cache mode for VMs). While the specific write pattern causing such breakage can be considered an application bug, there are a lot of bugs in current software; this behavior might be triggered by some (very rare) memory-swapping pattern, and it can also be used as an attack vector, e.g., if the users of virtual guests are not fully trusted or if a virtual guest gets hijacked. MD-RAID might not be the only technology susceptible to this issue, but it is noted explicitly because ZFS is not (it ignored <code>O_DIRECT</code> before version 2.3 and supports it safely since 2.3), and because MD-RAID makes repairing the resulting corruption a lot harder.
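As an illustration of the cache-mode mitigation mentioned above, the cache mode of an existing virtual disk can be changed with the <code>qm</code> CLI. The VM ID, storage name, and volume name below are assumptions; adjust them to your setup:

```shell
# Proxmox VE's default cache mode ("No cache") issues guest writes with
# O_DIRECT. Switching the disk to writeback avoids relaying O_DIRECT
# writes to the MD-RAID device underneath.
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback
```

The same setting is available in the web UI under the VM's hard disk options.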
 
If you have read, understood, and agreed to the above points, you can create the required RAID level during the Debian installation and then install Proxmox VE on top, or create the RAID after installing Proxmox VE. See [[Install Proxmox VE on Debian 12 Bookworm]].
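For the post-install route, a minimal sketch of creating a RAID1 array with <code>mdadm</code>. The device names and mount point are assumptions, and the command destroys all data on the named disks:

```shell
# Create a two-disk RAID1 array -- WARNING: wipes /dev/sdb and /dev/sdc.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Persist the array configuration so it assembles on boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# Put a file system on the array and mount it for use as storage.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/md-storage
```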
 
If you need further assistance with md-raid, that is another good sign that you should not choose this technology.
 
[[Category: HOWTO]]

Latest revision as of 09:43, 21 October 2024
