Introduction
Software RAID is an alternative to using a hardware RAID controller for local storage. Please note that Proxmox VE currently supports only one technology for local software-defined RAID storage: ZFS
Supported Technologies
ZFS
If you want to run a supported configuration using a proven enterprise storage technology with data integrity checks and auto-repair capabilities, ZFS is the right choice.
See the ZFS on Linux article for more information.
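As a minimal sketch of what this looks like in practice (the pool name "tank" and the disk paths are placeholders, not taken from this article), a mirrored ZFS pool can be created and verified from the shell:

 zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc   # two-disk mirror; ashift=12 assumes 4K-sector disks
 zpool status tank                                         # show pool health and any detected errors
 zpool scrub tank                                          # read all data, verify checksums, repair from the mirror

A scrub is where the self-healing shows: blocks whose checksums do not match are rewritten from the intact mirror copy.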
Technology Preview
btrfs
btrfs can be a safe and functional choice; it has been supported as a technology preview since Proxmox VE 7.
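For illustration only (device names and mount point are placeholder assumptions; the Proxmox VE installer can also set this up for the root filesystem), a btrfs RAID1 volume with checksum verification looks like this:

 mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc   # mirror both metadata (-m) and data (-d)
 mount /dev/sdb /mnt/mybtrfs                      # mounting any member device mounts the whole filesystem
 btrfs scrub start /mnt/mybtrfs                   # verify checksums and repair from the good copy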
Unsupported Technologies
mdraid
mdraid has no checks for bit-rot or data integrity, and the file systems commonly used on top of it do not provide them either.
That means that if some data gets corrupted, which happens on any long-running system sooner or later, you normally do not notice until it is too late. For this reason, the Proxmox projects do not support this technology, so that users who are not aware of these implications do not run into such problems. Its user experience is also less polished than that of ZFS, and while it might provide slightly more performance in some cases, this is only achieved by not providing the safety guarantees that, for example, ZFS offers.
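To see this limitation concretely (the array name md0 is a placeholder): an md "check" pass can only count mismatches between copies, it cannot tell which copy is the correct one:

 cat /proc/mdstat                               # overview of all md arrays and their state
 echo check > /sys/block/md0/md/sync_action     # start a read-and-compare pass over the array
 cat /sys/block/md0/md/mismatch_cnt             # nonzero means the copies differ, with no way to know which is right

Contrast this with a ZFS scrub, which uses per-block checksums to identify and repair the bad copy.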
You can still use MD-RAID, but you must be aware of at least the following points:
- You cannot expect any help from either the Proxmox Enterprise Support or the Proxmox Community Forum for issues that seem related to its usage, even if only tangentially.
- Years of good experience are only relevant if you have dealt significantly with actual hardware and software failures at levels below or above MD-RAID.
- MD-RAID is susceptible to breakage from any program that can issue O_DIRECT write requests to it; ensure O_DIRECT requests are not relayed to the underlying storage (e.g., change the cache mode for VMs, see the example after this list). Note that while the specific write pattern causing such breakage can be considered an application bug, there are many bugs in current software, this behavior might be triggered by some (very rare) memory-swapping pattern, and it can also be used as an attack vector, e.g. if the users of virtual guests are not fully trusted or if a virtual guest gets hijacked. While MD-RAID might not be the only technology susceptible to this issue, it is noted explicitly here because ZFS is not (it ignores O_DIRECT before version 2.3 and supports it safely since 2.3) and because MD-RAID makes repairing corruption a lot harder.
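As a sketch of the cache-mode change mentioned above (VM ID 100 and the volume name are hypothetical placeholders): the default "No cache" mode opens disk images with O_DIRECT, so reattaching the disk with a different cache mode avoids relaying such requests to MD-RAID:

 qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writethrough   # cache mode that goes through the page cache instead of O_DIRECT

Both writethrough and writeback avoid O_DIRECT; writethrough keeps write safety at some performance cost, while "none" and "directsync" keep using O_DIRECT.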
If you have read, understood, and agreed to the above points, you can create the required RAID level during the Debian installation and then install Proxmox VE on top, or create the RAID after installing Proxmox VE. See Install Proxmox VE on Debian 12 Bookworm.
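If you proceed anyway, a minimal post-install sketch (device and array names are placeholders) for creating a RAID1 array with mdadm on Debian:

 mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc   # build a two-disk mirror
 cat /proc/mdstat                                                       # watch the initial resync progress
 mdadm --detail --scan >> /etc/mdadm/mdadm.conf                         # persist the array so it assembles at boot
 update-initramfs -u                                                    # include the updated config in the initramfs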
If you need further assistance with md-raid, that is another good sign that you should not choose this technology.