= Raid controller =

Work in progress.

== Introduction ==
A fast and reliable storage controller is one of the most important parts of a Proxmox VE server. This article lists some hardware RAID controllers that are known to work well, along with some information on configuring them. You can use the lspci command on the Proxmox VE command line to identify your controller; look for a line containing "RAID controller" or similar. For a RAID controller to be supported, it must be a "real" hardware controller rather than an embedded or "fake" RAID. Embedded controllers are not supported in Proxmox VE, and if they do work, you are using them at your own risk.

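For example, a quick way to spot the controller from the shell (a minimal sketch; the exact description string varies by vendor):

 lspci | grep -i raid
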
== Performance ==
The most important feature of a real RAID controller, if you want high performance, is an on-board cache with "write back" mode enabled. This way the OS does not have to wait until data is physically written to disk: writes go immediately to the cache, and the controller takes care of finishing the subsequent write to the disk(s). If you do not want to lose your data in case of a blackout, you need a battery backup unit (BBU). It ensures that any pending writes can be completed as soon as power is restored. Some controllers support using an SSD instead, so that a BBU is not as important, but some SSDs do not handle power loss well; do your research before relying on this solution.

A RAID controller using write-through instead of write-back behaves very poorly. You can use the command pveperf to get a general idea of performance; look at the FSYNCS/SECOND value.
Here is a real-world example of the performance you can expect:

:Single SATA WD 400GB: 1360.17
:3 x 15K rpm SAS RAID 5 with write-through: 159.03 (yes, only 159!)
:Same as above but with write-back enabled: 3133.45
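To measure this on your own host, run pveperf against a path on the storage you want to test, for example (the path is just an illustration):

 pveperf /var/lib/vz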

The configuration of the RAID array can also have a major impact on performance. RAID 5 should not be used with modern high-capacity hard drives, because rebuild times are long enough that a second drive could fail and cause the entire array to be lost. If you were planning to use RAID 5, consider RAID 6 instead. That said, RAID 6 should be considered slow. If you are looking for performance, look for a level that stripes your disks: RAID 10 or RAID 60 is generally considered a good balance of performance and redundancy, with RAID 60 providing additional layers of redundancy.

== General info ==
http://hwraid.le-vert.net/ has info about some vendors' RAID controllers:
* 3Ware cards
* LSI cards
* Adaptec cards
* HP/Compaq SmartArray

The site also has repositories for Debian/Ubuntu based systems (http://hwraid.le-vert.net/wiki/DebianPackages) where one can find tools and packages not available directly from vendors, if needed.

== 3Ware ==

'''9690SA SAS/SATA-II RAID PCIe (rev 01)'''

This adapter works well under Proxmox VE. '''Attention:''' make sure you have the write cache enabled and use a BBU! Without the write cache, the VMs sometimes seem to 'hang' or 'freeze'. Performance is about 50 MB/s on a RAID 5 with three 1 TB HDDs.

=== 3dm2 Management tool ===
:debs available from http://jonas.genannt.name

:install the key:
 wget -O - http://jonas.genannt.name/debian/jonas_genannt.pub | apt-key add -

:add to /etc/apt/sources.list:
 deb http://jonas.genannt.name/debian lenny restricted

:install:
 aptitude update
 aptitude install 3ware-3dm2-binary 3ware-cli-binary

:start the service:
 /etc/init.d/3dm2 start

:Note: Firefox does not display all of the screen correctly (and has not for the last few years), so I usually use Konqueror instead.
:Connect and log in as admin; the default password is 3ware:
 https://ipaddress:888

:configure:
:Change the password, allow remote access, change the default port, add NAT access through the firewall, etc.

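The 3ware-cli-binary package installed above also provides the tw_cli command-line tool. A quick status check might look like this (a sketch; /c0 assumes the controller is the first one, so list the controllers first if unsure):

 tw_cli show
 tw_cli /c0 show
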
== Adaptec ==
All Adaptec controllers with a BBU are known to perform well. If possible, take one with "Zero-Maintenance Cache Protection", e.g. the Adaptec 5805Z, or a 6405/6805 with the Adaptec Flash Module 600 (AFM-600).

IMPORTANT NOTE:
The Adaptec 5405/5805 does NOT work with newer UEFI BIOS boards! (This is true for almost all P67, H67 and Z68 boards.)
(Official Adaptec statement: http://ask.adaptec.com/scripts/adaptec_tic.cfg/php.exe/enduser/std_adp.php?p_faqid=17087&p_created=1305289854&p_topview=1)

=== Confirmed ===
'''Adaptec 2405'''
The controller works completely (RAID 10) out of the box. But be aware that it cannot be extended with a BBU (Battery Backup Unit).

=== Management tools ===
See [[Adaptec_Storage_Manager]] and [[Adaptec maxView Storage Manager]]

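If Adaptec's arcconf command-line tool is installed (it is usually shipped alongside the storage manager packages linked above), a basic status query could look like this (a sketch; controller number 1 is an assumption):

 arcconf getconfig 1 AD
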
== Areca ==
The ARC-1210, ARC-1212 and ARC-1222 work very well with good speed.

== HighPoint ==
HighPoint 3510 - RAID 10. Works very well.

== Compaq / HP ==
=== Smart Array (using in-kernel cciss driver) ===

* 6i
* 400i
* P400 PCI-E
* P410 PCI-E
* P600 PCI-X
* P800 PCI-E
* P812 PCI-E

The P600 and P800 RAID controllers work well with Proxmox VE 3.x and 4.x. Both controllers can access external SAS storage using, for example, HP MSA50 enclosures.

HP SmartArray controllers can be managed using the CLI tool (hpssacli) available from HP.

To install hpssacli, run the commands below:
 wget http://downloads.linux.hpe.com/SDR/repo/mcp/GPG-KEY-mcp -O /tmp/proliant.gpg
 apt-key add /tmp/proliant.gpg
 echo -e "deb http://downloads.linux.hpe.com/SDR/repo/mcp/ jessie/current non-free" > /etc/apt/sources.list.d/proliant.sources.list
 apt-get update && apt-get install hpssacli

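Once installed, hpssacli can be used to inspect the controller and logical drive configuration, for example (a sketch; the slot number depends on your hardware):

 hpssacli ctrl all show config
 hpssacli ctrl slot=0 ld all show status
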
== Dell ==

=== PERC/LSI (native in-kernel driver) ===

* PERC 5/i
* PERC 5/E
* PERC 6/i
* PERC 6/E

The above cards work well in Proxmox VE 2.3, 3.0 and 3.1.

==== Monitoring SMART status of disks in array ====
Monitoring your disks is important so that you know when to replace any disks that may be failing. The following shows how to manually fetch the status of a disk in the array. Before you can monitor the SMART status of your disks, install the smartmontools package.

:install the smartmontools package:
 apt-get install smartmontools

After that, you can use smartctl to fetch the status of any particular disk in the array. Just change the number after the comma (,) to indicate which drive in the array you care about. The following example checks the second disk; changing the 1 to a 0 would check the first disk.

:fetch the status of a disk:
 smartctl -a -d megaraid,1 /dev/sda

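For continuous monitoring instead of manual checks, the smartd daemon from smartmontools can poll the same devices. A minimal sketch for /etc/smartd.conf (the device IDs and mail address are assumptions; adjust them to match your array, then restart the smartd service):

 /dev/sda -d megaraid,0 -a -m root@localhost
 /dev/sda -d megaraid,1 -a -m root@localhost
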
== LSI ==

== Other Vendors ==
Some server vendors use rebranded LSI (and other vendors') RAID controllers in their servers. Examples:
* IBM ServeRAID M5015 SAS/SATA Controller (should be some kind of MegaRAID SAS 2108, i.e. a MegaRAID SAS 9260-8i)

See also http://www.redbooks.ibm.com/abstracts/tips0054.html
and http://www.lsi.com/channel/products/storagecomponents/Pages/MegaRAIDSAS9260-8i.aspx

In those cases, the LSI tools should still work.

=== HP Branded ===
* 3041E SAS/SATA 3Gb/s 4-port RAID Card (EH417AA) works. Note: this controller's feature set does not include cache, write-back or battery/flash backup, so performance is not improved.

=== Dell Branded ===
The cards listed below are supported with native kernel drivers.

* SAS2208 SAS/SATA 6Gb/s - PERC H710, H710P and H810
* SAS2008 SAS/SATA 6Gb/s - PERC H310

== Management/Info Tools ==
See: http://hwraid.le-vert.net/wiki/LSI

* SAS/SATA controllers
*: Working tools from the http://hwraid.le-vert.net repositories (see the '''General info''' section):
** megacli
**: Tool to read and set up LSI Logic MegaRAID SAS HW RAID HBAs (see the example below the list).
**: Homepage: http://www.lsi.com/channel/support/products/Pages/MegaRAIDSAS9285-8e.aspx
** megactl
**: This package contains both the megactl and megasasctl tools, which can be used to query the status of LSI MegaRAID adapters.
** megaclisas-status
**: Gets RAID status out of LSI MegaRAID SAS HW RAID controllers.
**: megaclisas-status is a query tool to access the running configuration and status of LSI MegaRAID SAS HBAs.
**: It uses LSI's proprietary MegaCli command line tool.

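A couple of typical megacli invocations, as a sketch (querying all adapters is an assumption; depending on the package the binary may be called megacli, MegaCli or MegaCli64):

 megacli -LDInfo -Lall -aAll
 megacli -PDList -aAll
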
[[Category: HOWTO]] [[Category: Hardware]]

= HTTPS Certificate Configuration (Version 3.x and earlier) =

{{Note|Article about the old stable Proxmox VE 3.x releases. For the current version see [[HTTPS Certificate Configuration (Version 4.x and newer)]]}}
== Introduction ==
This is a mini-howto for changing the web server certificate in Proxmox VE, so that you can use a certificate issued by a custom CA.
It has been tested on a Proxmox VE 3.0 installation, using certificates from https://www.cacert.org/.

== HTTPS Certificate Configuration ==
[[Image:screen-custom-ssl-with-java-shell.png|thumb]]
Three files are needed:

* ca.pem : CA certificate file in PEM format
* server.key : non-password-protected private key
* server.pem : server certificate from the CA in PEM format

You can create these files following any standard openssl certificate generation HOWTO.

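As a rough illustration (not part of the original HowTo), a private CA and server certificate could be generated along these lines; the subject names and validity periods are placeholders, and in a real deployment the CSR would be signed by your CA instead:

 openssl genrsa -out ca.key 2048
 openssl req -new -x509 -days 3650 -key ca.key -out ca.pem -subj "/CN=My Custom CA"
 openssl genrsa -out server.key 2048
 openssl req -new -key server.key -out server.csr -subj "/CN=pve.example.com"
 openssl x509 -req -days 730 -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out server.pem
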
=== Backup PVE created files ===
 cp /etc/pve/pve-root-ca.pem /etc/pve/pve-root-ca.pem.orig
 cp /etc/pve/local/pve-ssl.key /etc/pve/local/pve-ssl.key.orig
 cp /etc/pve/local/pve-ssl.pem /etc/pve/local/pve-ssl.pem.orig

=== Copy your own certificates ===
 cp server.key /etc/pve/local/pve-ssl.key
 cp server.pem /etc/pve/local/pve-ssl.pem
 cp ca.pem /etc/pve/pve-root-ca.pem

=== Using intermediate certificates ===
[[Image:Intermediate_certificate_test.png|thumb]]
Using intermediate certificates requires a special pve-ssl.pem that contains both
your server.pem and the intermediate_certificate.pem. It must be created this way:

 cat server.pem intermediate_certificate.pem > /etc/pve/local/pve-ssl.pem
 cat intermediate_certificate.pem ca.pem > /etc/pve/pve-root-ca.pem

After restarting pveproxy and pvedaemon, you can verify that pve-ssl.pem was created
properly by visiting the [http://www.digicert.com/help/ SSL Certificate Tester].

If everything is properly configured, you will be rewarded with something similar to
what can be seen in the image.

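The chain can also be checked locally with openssl instead of the web tester (a sketch; replace the example host name with your node's address, 8006 is the default web interface port):

 echo | openssl s_client -connect pve.example.com:8006 -showcerts
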
=== Restart the API server and pvedaemon ===
 service pveproxy restart
 service pvedaemon restart

That's it.

If you have a Proxmox VE cluster, this has to be done on all nodes (only the /etc/pve/local part). To test the changes on one node before changing the configuration on other nodes, make sure you log in to the web interface on the correct node.

[[Category: HOWTO]]

= HTTPS Certificate Configuration (Version 4.x, 5.0 and 5.1) =

== Introduction ==
This is a howto for changing the web server certificate used by Proxmox VE, in order to enable the use of publicly trusted certificates issued by a CA of your choice (like Let's Encrypt or a commercial CA).
It has been tested on a Proxmox VE 4.1 installation, using certificates from https://www.letsencrypt.org.

''Note: the previous, outdated version of this HowTo is archived at [[HTTPS Certificate Configuration (Version 3.x and earlier)]]''

== Revert to default configuration ==
If you have used the previous HowTo and replaced any of the certificate or key files generated by PVE, you need to revert to the default state before proceeding.

Delete or move the following files:

* /etc/pve/pve-root-ca.pem
* /etc/pve/priv/pve-root-ca.key
* /etc/pve/nodes/<node>/pve-ssl.pem
* /etc/pve/nodes/<node>/pve-ssl.key

The latter two need to be repeated for all nodes if you have a cluster.

Afterwards, run the following command on each node of the cluster to re-generate the certificates and keys:

 pvecm updatecerts -f

== CAs other than Let's Encrypt ==

=== Install certificate chain and key ===

Since pve-manager 4.1-20, it is possible to provide alternative SSL files for each node's web interface. The following steps need to be repeated for each node where you want to use alternative certificate files.

First check your version of pve-manager and upgrade if necessary:

 pveversion

You will need the following two files provided by your CA:

* fullchain.pem (your certificate and all intermediate certificates, excluding the root certificate, in PEM format)
* private-key.pem (your private key, in PEM format, without a password)

Now copy those files to the override locations in /etc/pve/nodes/<node> (make sure to use the correct certificate files and node!):

 cp fullchain.pem /etc/pve/nodes/<node>/pveproxy-ssl.pem
 cp private-key.pem /etc/pve/nodes/<node>/pveproxy-ssl.key

and restart the web interface:

 systemctl restart pveproxy

The system log should inform you about the usage of the alternative SSL certificate ("Using '/etc/pve/local/pveproxy-ssl.pem' as certificate for the web interface."):

 journalctl -b -u pveproxy.service

When accessing the web interface on this node, you should be presented with the new certificate. Note that the alternative certificate is only used by the web interface (including noVNC), but not by the Spice Console/Shell.

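You can also inspect the served certificate from another machine with openssl (a sketch; replace the example host name with your node's FQDN):

 echo | openssl s_client -connect pve.example.com:8006 2>/dev/null | openssl x509 -noout -subject -issuer -dates
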
== Let's Encrypt using acme.sh ==

=== Prerequisites ===

Let's Encrypt enables everyone with a publicly resolvable domain name to be issued SSL certificates for free.

Your domain name needs to be publicly resolvable both ways. (Check with <tt>$ drill -x Your.Ip.Address</tt> or <tt>$ dig -x Your.Ip.Address</tt>)

The following steps show how to achieve this using the acme.sh bash script and standalone HTTP authentication.

These steps need to be repeated on each node where you want to use Let's Encrypt certificates.

You need at least <tt>pve-manager >= 4.1-20</tt> (see <tt>$ pveversion</tt>), so upgrade if necessary.

=== Install certificate chain and key ===

==== 0) Upgrade from le.sh to acme.sh ====

If you followed a previous version of this HowTo using le.sh, please uninstall le.sh and proceed with "Install acme.sh":

 le.sh uninstall

acme.sh is the 2.x release of le.sh; the existing configuration should be migrated automatically when installing acme.sh.

==== 1) Install acme.sh ====

Install the acme.sh script from https://github.com/Neilpang/acme.sh (this howto was tested with commit 2d39b3df8893cd256257fe1f32ca6b0485a90dcf):

Via git:

 git clone https://github.com/Neilpang/acme.sh.git acme.sh-master

Or direct download:

 wget 'https://github.com/Neilpang/acme.sh/archive/master.zip'
 unzip master.zip

==== 2) Run the install script ====

You must do this from within the script's directory, otherwise it won't find <tt>acme.sh</tt>! Take care to replace <tt>$EMAIL</tt> with the address that you want to register with at Let's Encrypt. Let's Encrypt will send automatic expiration reminders to this address!

 mkdir /etc/pve/.le
 cd /root/acme.sh-master
 ./acme.sh --install --accountconf /etc/pve/.le/account.conf --accountkey /etc/pve/.le/account.key --accountemail "$EMAIL"

==== 3) Check the account config ====

Check the config file in <tt>/etc/pve/.le/account.conf</tt> and verify:
* the <tt>ACCOUNT_EMAIL</tt> variable should be set to your email address
* the <tt>ACCOUNT_KEY_PATH</tt> variable should be set to "<tt>/etc/pve/.le/account.key</tt>"

You can edit this file with your favourite text editor if either of those is incorrect.

==== 4) Make sure port 80 is open from the public internet ====

As part of the certificate creation process, acme.sh will listen for a confirmation from Let's Encrypt's servers on port 80. Check that this port is therefore not blocked by any firewall between the machine you are certifying and the public internet.

You can close the port once you're done issuing all certificates for your cluster. However, be aware that as part of the certificate renewal process (managed with a cron job that acme.sh installs), port 80 must also be open. You may therefore need to work out an automated way (not covered in this guide) of opening up port 80 for the renewal process.

==== 5) Issue your first certificate ====

Now you can issue your first certificate, replacing <tt>$DOMAIN</tt> with your node's fully qualified domain name:

 acme.sh --issue --standalone --keypath /etc/pve/local/pveproxy-ssl.key --fullchainpath /etc/pve/local/pveproxy-ssl.pem --reloadcmd "systemctl restart pveproxy" -d $DOMAIN

Warnings like "cp: preserving permissions for ‘/etc/pve/local/pveproxy-ssl.pem.bak’: Function not implemented" can be safely ignored.

By appending <tt>--test</tt> to the previous command you can issue a certificate using the staging (i.e., testing) CA instead of the production CA:

 acme.sh --issue --standalone --keypath /etc/pve/local/pveproxy-ssl.key --fullchainpath /etc/pve/local/pveproxy-ssl.pem --reloadcmd "systemctl restart pveproxy" -d $DOMAIN --test

To "upgrade" to a production certificate, rerun the issue command with <tt>--force</tt> appended instead of <tt>--test</tt>, in order to replace the existing (test) certificate even though it is not yet expired. This can also be used to force a premature renewal in case the node's domain name has changed:

 acme.sh --issue --standalone --keypath /etc/pve/local/pveproxy-ssl.key --fullchainpath /etc/pve/local/pveproxy-ssl.pem --reloadcmd "systemctl restart pveproxy" -d $DOMAIN --force

==== 6) Check that it's working ====

If necessary, close the firewall port again.

The system log should inform you about the usage of the alternative SSL certificate ("Using '/etc/pve/local/pveproxy-ssl.pem' as certificate for the web interface."):

 journalctl -b -u pveproxy.service

When accessing the web interface on this node, you should be presented with the new certificate. Note that the alternative certificate is only used by the web interface (including noVNC), but not by the Spice Console/Shell.

==== 7) Set up automatic renewal ====

acme.sh installs a cron job that checks the installed certificate(s) and automatically renews them before they expire.

The crontab entry should look like this (<tt>crontab -l</tt>):

 0 0 * * * "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh" > /dev/null

It's a good idea to test the cron entry by running it manually from the command line to check that it's working OK:

 "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh"

NOTE: The requirements for issuing certificates apply to renewals as well: the configured domain name '''must be resolvable and reachable on port 80 from the public internet when the renewal cron job runs'''.

=== Updating acme.sh ===

acme.sh can be updated with the following commands when installed from the git repository:

 cd /root/acme.sh-master
 git pull
 ./acme.sh --install --accountconf /etc/pve/.le/account.conf --accountkey /etc/pve/.le/account.key --accountemail "YOUR@EMAIL.ADDRESS"

=== Account key ===

It is recommended to make an off-site/offline backup of the account key file in <tt>/etc/pve/.le/account.key</tt>; in case one of your certificate private key files is lost or compromised, the account key can be used to revoke the associated certificate.

== Let's Encrypt using other clients ==

It should also be possible to use other Let's Encrypt clients, as long as care is taken that the issued as well as renewed certificates and the associated keys are copied to the correct locations, and the pveproxy service is restarted afterwards.

[[Category: HOWTO]]</div>Jonathan Halewoodhttps://pve.proxmox.com/mediawiki/index.php?title=HTTPSCertificateConfiguration&diff=8953HTTPSCertificateConfiguration2016-08-24T16:42:22Z<p>Jonathan Halewood: Jonathan Halewood moved page HTTPSCertificateConfiguration to HTTPS Certificate Configuration (Version 4.x and newer)</p>
<hr />
<div>#REDIRECT [[HTTPS Certificate Configuration (Version 4.x and newer)]]</div>Jonathan Halewoodhttps://pve.proxmox.com/mediawiki/index.php?title=HTTPS_Certificate_Configuration_(Version_4.x,_5.0_and_5.1)&diff=8952HTTPS Certificate Configuration (Version 4.x, 5.0 and 5.1)2016-08-24T16:42:22Z<p>Jonathan Halewood: Jonathan Halewood moved page HTTPSCertificateConfiguration to HTTPS Certificate Configuration (Version 4.x and newer)</p>
<hr />
<div>== Introduction ==<br />
This is a howto for changing the web server certificate used by Proxmox VE, in order to enable the usage of publicly trusted certificates issued by a CA of your choice (like Let's Encrypt or a commercial CA).<br />
It has been tested on a Proxmox VE 4.1 installation, using certificates from https://www.letsencrypt.org.<br />
<br />
''Note: the previous, outdated version of this HowTo is archived at [[HTTPSCertificateConfigurationOld]]''<br />
<br />
== Revert to default configuration ==<br />
If you have used the previous HowTo and replaced any of the certificate or key files generated by PVE, you need to revert to the default state before proceeding.<br />
<br />
Delete or move the following files:<br />
<br />
* /etc/pve/pve-root-ca.pem<br />
* /etc/pve/priv/pve-root-ca.key<br />
* /etc/pve/nodes/<node>/pve-ssl.pem<br />
* /etc/pve/nodes/<node>/pve-ssl.key<br />
<br />
The latter two need to be repeated for all nodes if you have a cluster.<br />
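<br />
For example (an illustrative sketch; <tt>/root/old-pve-certs</tt> is just an arbitrary backup location), you could move the files aside instead of deleting them:<br />
<br />
 mkdir /root/old-pve-certs<br />
 mv /etc/pve/pve-root-ca.pem /etc/pve/priv/pve-root-ca.key /root/old-pve-certs/<br />
 mv /etc/pve/nodes/<node>/pve-ssl.pem /etc/pve/nodes/<node>/pve-ssl.key /root/old-pve-certs/<br />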
<br />
Afterwards, run the following command on each node of the cluster to re-generate the certificates and keys:<br />
<br />
pvecm updatecerts -f<br />
<br />
== CAs other than Let's Encrypt ==<br />
<br />
=== Install certificate chain and key ===<br />
<br />
Since pve-manager 4.1-20, it is possible to provide alternative SSL files for each node's web interface. The following steps need to be repeated for each node where you want to use alternative certificate files.<br />
<br />
First check your version of pve-manager and upgrade if necessary:<br />
<br />
pveversion<br />
<br />
You will need the following two files provided by your CA:<br />
<br />
* fullchain.pem (your certificate and all intermediate certificates, excluding the root certificate, in PEM format)<br />
* private-key.pem (your private key, in PEM format, without a password)<br />
<br />
Now copy those files to the override locations in /etc/pve/nodes/<node> (make sure to use the correct certificate files and node!):<br />
<br />
cp fullchain.pem /etc/pve/nodes/<node>/pveproxy-ssl.pem<br />
cp private-key.pem /etc/pve/nodes/<node>/pveproxy-ssl.key<br />
<br />
and restart the web interface:<br />
<br />
systemctl restart pveproxy<br />
<br />
The system log should inform you about the usage of the alternative SSL certificate ("Using '/etc/pve/local/pveproxy-ssl.pem' as certificate for the web interface."):<br />
<br />
journalctl -b -u pveproxy.service<br />
<br />
When accessing the web interface on this node, you should be presented with the new certificate. Note that the alternative certificate is only used by the web interface (including noVNC), but not by the Spice Console/Shell.<br />
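<br />
If in doubt, you can also verify (an optional, illustrative check for RSA keys) that the installed certificate and key belong together by comparing their moduli; both commands should print the same hash:<br />
<br />
 openssl x509 -noout -modulus -in /etc/pve/nodes/<node>/pveproxy-ssl.pem | openssl md5<br />
 openssl rsa -noout -modulus -in /etc/pve/nodes/<node>/pveproxy-ssl.key | openssl md5<br />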
<br />
== Let's Encrypt using acme.sh ==<br />
<br />
=== Prerequisites ===<br />
<br />
Let's Encrypt enables everyone with a publicly resolvable domain name to be issued SSL certificates for free.<br />
<br />
Your domain name needs to be publicly resolvable both ways: forward (check with <tt>$ dig Your.Domain.Name</tt>) and reverse (check with <tt>$ drill -x Your.Ip.Address</tt> or <tt>$ dig -x Your.Ip.Address</tt>).<br />
<br />
The following steps show how to achieve this using the acme.sh bash script and standalone HTTP authentication.<br />
<br />
These steps need to be repeated on each node where you want to use Let's Encrypt certificates.<br />
<br />
You need at least <tt>pve-manager >= 4.1-20</tt> (see <tt>$ pveversion</tt>), so upgrade if necessary.<br />
<br />
=== Install certificate chain and key ===<br />
<br />
==== 0) Upgrade from le.sh to acme.sh ====<br />
<br />
If you followed a previous version of this HowTo using le.sh, please uninstall le.sh and proceed with "Install acme.sh":<br />
<br />
le.sh uninstall<br />
<br />
acme.sh is the 2.x release of le.sh; the existing configuration should be migrated automatically when installing acme.sh.<br />
<br />
==== 1) Install acme.sh ====<br />
<br />
Install the acme.sh script from https://github.com/Neilpang/acme.sh (this howto was tested with commit 2d39b3df8893cd256257fe1f32ca6b0485a90dcf):<br />
<br />
Via git:<br />
<br />
git clone https://github.com/Neilpang/acme.sh.git acme.sh-master<br />
<br />
Or direct download:<br />
<br />
wget 'https://github.com/Neilpang/acme.sh/archive/master.zip'<br />
unzip master.zip<br />
<br />
==== 2) Run the install script ====<br />
<br />
You must do this from within the script's directory, otherwise it won't find <tt>acme.sh</tt>! Take care to replace <tt>$EMAIL</tt> with the address that you want to register with at Let's Encrypt. Let's Encrypt will send automatic expiration reminders to this address!<br />
<br />
mkdir /etc/pve/.le<br />
cd /root/acme.sh-master<br />
./acme.sh --install --accountconf /etc/pve/.le/account.conf --accountkey /etc/pve/.le/account.key --accountemail "$EMAIL"<br />
<br />
==== 3) Check the account config ====<br />
<br />
Check the config file in <tt>/etc/pve/.le/account.conf</tt> and verify:<br />
* the <tt>ACCOUNT_EMAIL</tt> variable should be set to your email address<br />
* the <tt>ACCOUNT_KEY_PATH</tt> variable should be set to "<tt>/etc/pve/.le/account.key</tt>"<br />
<br />
You can edit this file with your favourite text editor if either of those is incorrect.<br />
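<br />
An account.conf with those two settings might look roughly like this (illustrative only; the exact formatting can differ between acme.sh versions):<br />
<br />
 ACCOUNT_EMAIL='you@example.com'<br />
 ACCOUNT_KEY_PATH="/etc/pve/.le/account.key"<br />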
<br />
==== 4) Make sure port 80 is open from the public ====<br />
<br />
As part of the certificate creation process, acme.sh will listen for a confirmation from Let's Encrypt's servers on port 80. Therefore, check that this port is not blocked by any firewall between the machine you are certifying and the public internet.<br />
<br />
You can close the port once you're done issuing all certificates for your cluster. However, be aware that port 80 must also be open during the certificate renewal process (managed by a cron job that acme.sh installs). You may therefore need to work out an automated way (not covered in this guide) of opening up port 80 for the renewal process; a rough sketch follows below.<br />
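<br />
For instance, one hypothetical approach - assuming the host's firewall is managed with plain iptables rather than other tooling - is to replace the installed cron command with a small wrapper script that opens the port, runs the renewal and closes the port again:<br />
<br />
<pre><br />
#!/bin/sh<br />
# Illustrative only: temporarily allow inbound port 80 for the renewal run.<br />
iptables -I INPUT -p tcp --dport 80 -j ACCEPT<br />
"/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh"<br />
iptables -D INPUT -p tcp --dport 80 -j ACCEPT<br />
</pre><br />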
<br />
==== 5) Issue your first certificate ====<br />
<br />
Now you can issue your first certificate, replacing <tt>$DOMAIN</tt> with your node's fully qualified domain:<br />
<br />
acme.sh --issue --standalone --keypath /etc/pve/local/pveproxy-ssl.key --fullchainpath /etc/pve/local/pveproxy-ssl.pem --reloadcmd "systemctl restart pveproxy" -d $DOMAIN<br />
<br />
Warnings like "cp: preserving permissions for ‘/etc/pve/local/pveproxy-ssl.pem.bak’: Function not implemented" can be safely ignored. <br />
<br />
By appending <tt>--test</tt> to the previous command, you can issue a certificate using the staging (i.e., testing) CA instead of the production CA:<br />
acme.sh --issue --standalone --keypath /etc/pve/local/pveproxy-ssl.key --fullchainpath /etc/pve/local/pveproxy-ssl.pem --reloadcmd "systemctl restart pveproxy" -d $DOMAIN --test<br />
<br />
To "upgrade" to a production certificate, you need to rerun the issue command with an appended <tt>--force</tt> instead of <tt>--test</tt>, in order to replace the existing (test) certificate even though it is not yet expired. This can also be used to force a premature renewal in case the node's domain name has changed:<br />
acme.sh --issue --standalone --keypath /etc/pve/local/pveproxy-ssl.key --fullchainpath /etc/pve/local/pveproxy-ssl.pem --reloadcmd "systemctl restart pveproxy" -d $DOMAIN --force<br />
<br />
==== 6) Check it's working ====<br />
<br />
If necessary, close the firewall port again.<br />
<br />
The system log should inform you about the usage of the alternative SSL certificate ("Using '/etc/pve/local/pveproxy-ssl.pem' as certificate for the web interface."):<br />
<br />
journalctl -b -u pveproxy.service<br />
<br />
When accessing the web interface on this node, you should be presented with the new certificate. Note that the alternative certificate is only used by the web interface (including noVNC), but not by the Spice Console/Shell.<br />
<br />
==== 7) Set up automatic renewal ====<br />
<br />
acme.sh installs a cron job that checks the installed certificate(s) and automatically renews them before they expire. <br />
<br />
The crontab entry should look like this (<tt>crontab -l</tt>):<br />
<br />
0 0 * * * "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh" > /dev/null<br />
<br />
It's a good idea to test the cron entry by running it manually from the command line to check that it's working OK:<br />
"/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh"<br />
<br />
NOTE: The requirements for issuing certificates apply for renewals as well: the configured domain name '''must be resolvable and reachable on port 80 from the public internet when the renewal cron job runs'''.<br />
<br />
=== Updating acme.sh ===<br />
<br />
acme.sh can be updated with the following commands when installed from the git repository:<br />
<br />
 cd /root/acme.sh-master<br />
git pull<br />
./acme.sh --install --accountconf /etc/pve/.le/account.conf --accountkey /etc/pve/.le/account.key --accountemail "YOUR@EMAIL.ADDRESS"<br />
<br />
=== Account key ===<br />
<br />
It is recommended to keep an off-site/offline backup of the account key file in <tt>/etc/pve/.le/account.key</tt>. If one of your certificate private key files is ever lost or compromised, the account key can be used to revoke the associated certificate.<br />
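<br />
For example (illustrative only; the destination host and path are placeholders you would replace with your own backup target):<br />
<br />
 scp /etc/pve/.le/account.key backup-user@backup-host:/path/to/secure/backup/<br />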
<br />
== Let's Encrypt using other clients ==<br />
<br />
It should also be possible to use other Let's Encrypt clients, as long as care is taken that both newly issued and renewed certificates and their associated keys are copied to the correct locations, and that the pveproxy service is restarted afterwards.<br />
<br />
[[Category: HOWTO]]</div>Jonathan Halewoodhttps://pve.proxmox.com/mediawiki/index.php?title=HTTPSCertificateConfigurationOld&diff=8951HTTPSCertificateConfigurationOld2016-08-24T16:41:53Z<p>Jonathan Halewood: Jonathan Halewood moved page HTTPSCertificateConfigurationOld to HTTPS Certificate Configuration (Version 3.x and earlier)</p>
<hr />
<div>#REDIRECT [[HTTPS Certificate Configuration (Version 3.x and earlier)]]</div>Jonathan Halewoodhttps://pve.proxmox.com/mediawiki/index.php?title=HTTPS_Certificate_Configuration_(Version_3.x_and_earlier)&diff=8950HTTPS Certificate Configuration (Version 3.x and earlier)2016-08-24T16:41:53Z<p>Jonathan Halewood: Jonathan Halewood moved page HTTPSCertificateConfigurationOld to HTTPS Certificate Configuration (Version 3.x and earlier)</p>
<hr />
<div>{{Note|Article about the old stable Proxmox VE 3.x releases. For the current version see [[HTTPSCertificateConfiguration]]}}<br />
== Introduction ==<br />
This is a mini-howto for changing the web server certificate in Proxmox, so that you can have a certificate created with a custom CA.<br />
It has been tested on a Proxmox VE 3.0 installation, using certificates from https://www.cacert.org/.<br />
<br />
== HTTPS Certificate Configuration ==<br />
[[Image:screen-custom-ssl-with-java-shell.png|thumb]] <br />
3 files are needed:<br />
<br />
* ca.pem : CA certificate file in PEM format<br />
* server.key : non-password protected private key<br />
* server.pem : server certificate from CA in PEM format<br />
<br />
You can create the previous files following any standard openssl certificate generation HOWTO.<br />
<br />
=== Backup PVE created files ===<br />
cp /etc/pve/pve-root-ca.pem /etc/pve/pve-root-ca.pem.orig<br />
cp /etc/pve/local/pve-ssl.key /etc/pve/local/pve-ssl.key.orig<br />
cp /etc/pve/local/pve-ssl.pem /etc/pve/local/pve-ssl.pem.orig<br />
<br />
=== Copy your own certificates ===<br />
cp server.key /etc/pve/local/pve-ssl.key<br />
cp server.pem /etc/pve/local/pve-ssl.pem<br />
cp ca.pem /etc/pve/pve-root-ca.pem<br />
<br />
=== Using intermediate certificates ===<br />
[[Image:Intermediate_certificate_test.png|thumb]]<br />
Using intermediate certificates requires a special pve-ssl.pem which has to contain both<br />
your server.pem and the intermediate_certificate.pem. Create the files as follows:<br />
<br />
cat server.pem intermediate_certificate.pem > /etc/pve/local/pve-ssl.pem<br />
cat intermediate_certificate.pem ca.pem > /etc/pve/pve-root-ca.pem<br />
<br />
After restarting pveproxy and pvedaemon, you can verify that pve-ssl.pem was created<br />
properly by using the [http://www.digicert.com/help/ SSL Certificate Tester].<br />
<br />
If everything is properly configured, you will be rewarded with a result similar to the<br />
one shown in the image.<br />
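<br />
If you prefer to check from the command line instead, an illustrative alternative is to inspect the chain the server actually sends (8006 is the default port of the Proxmox VE web interface; replace <node> with your host name):<br />
<br />
 openssl s_client -connect <node>:8006 -showcerts < /dev/null<br />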
<br />
=== Restart the API server and pvedaemon ===<br />
service pveproxy restart<br />
service pvedaemon restart<br />
<br />
That's it.<br />
<br />
If you have a Proxmox cluster, this has to be done on all nodes (only the /etc/pve/local part). To test the changes on one node before changing the configuration on the other nodes, make sure you log in to the web interface on the correct node.<br />
<br />
[[Category: HOWTO]]</div>Jonathan Halewoodhttps://pve.proxmox.com/mediawiki/index.php?title=Troubleshooting&diff=8949Troubleshooting2016-08-24T16:38:43Z<p>Jonathan Halewood: /* I can't switch virtual consoles in Linux KVM guests with alt-F1, alt-F2... */</p>
<hr />
<div>[[Category:Troubleshooting]]<br />
== I can't switch virtual consoles in Linux KVM guests with alt-F1, alt-F2... ==<br />
<br />
VNC viewer does not pass some key combinations or they may be intercepted by your operating system.<br />
<br />
To send custom key combinations to the guest, go to "Monitor" in the Virtual Machine Configuration for the given guest and use the "sendkey" command.<br />
<br />
For example, to switch to the third console (tty3) you would use:<br />
<br />
sendkey alt-f3<br />
<br />
== How can I send sysrq to Linux KVM guests? ==<br />
<br />
Similarly to the above, go to "Monitor" in the Virtual Machine Configuration for the given guest and use the "sendkey" command.<br />
<br />
For example, to issue "Emergency Sync", you would use:<br />
<br />
sendkey alt-sysrq-s<br />
<br />
In the VNC viewer for the given guest you should see:<br />
<br />
SysRq : Emergency Sync<br />
<br />
You should also see this entry if you run "dmesg" on this guest.<br />
<br />
See also http://en.wikipedia.org/wiki/Magic_SysRq_key for a full reference of possible combinations.<br />
<br />
<br />
== How can I access Linux guests through a serial terminal ==<br />
<br />
See [[Serial Terminal]]<br />
<br />
== How can I assign a physical disk to a VM? ==<br />
You don't have to do anything at the host level (i.e. no need to add it to fstab or anything); just set it as available directly to the KVM guest:<br />
qm set <vmid> -ide# /dev/sdb<br />
Or:<br />
qm set <vmid> -ide# /dev/disk/by-id/[your disk ID]<br />
<br />
The by-id variant is preferable, since having the drive letter change (should you add another drive) might have unintended consequences.<br />
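<br />
To find the stable ID for a disk (assuming, as in the example above, that it currently shows up as /dev/sdb), you can list the symlinks in /dev/disk/by-id:<br />
<br />
 ls -l /dev/disk/by-id/ | grep sdb<br />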
<br />
Also see /etc/qemu-server/<vmid>.conf if you want to add it by editing the config file by hand (i.e. adding ide1: /dev/sdb2).<br />
After that you can run the VM as usual, and you will have the new storage device available inside it.<br />
Beware that you can't assign it to more than one running VM if the filesystem is not designed for such a scenario.<br />
<br />
== How can I assign a physical disk to a CT? ==<br />
See http://wiki.openvz.org/Bind_mounts<br />
<br />
== "error: out of partition" after a fresh install ==<br />
The error message below may happen when you had two or more hard drives connected during the installation. Try disconnecting all but one disk.<br />
error: out of partition.<br />
grub rescue><br />
<br />
== NFS Client Mount Error: "mount.nfs: No such device" ==<br />
By default NFS cannot be mounted in VZ containers. See this page to set it up: [http://wiki.openvz.org/NFS OpenVZ: NFS]<br />
See also this page to make an host's directory visible to a container: [http://wiki.openvz.org/Bind_mounts OpenVZ: Bind mounts]<br />
<br />
== Cluster & Multicast issues ==<br />
See [[Troubleshooting multicast, quorum and cluster issues]].<br />
<br />
== See also ==<br />
* [[:Category:Troubleshooting]]<br />
* [[:Category:HOWTO]]</div>Jonathan Halewoodhttps://pve.proxmox.com/mediawiki/index.php?title=FAQ&diff=8948FAQ2016-08-24T16:36:10Z<p>Jonathan Halewood: /* Why do you recommend 32-bit guests over 64 bit guests? */</p>
<hr />
<div>{{#pvedocs:chapter-pve-faq-plain.html}}<br />
<br />
tbd: if possible, detailed answers should be linked to internal wiki pages<br />
<br />
==General==<br />
===What is a container, CT, VE, Virtual Private Server, VPS?===<br />
:See [[Container and Full Virtualization]]<br />
<br />
===What is a KVM guest (KVM VM)?===<br />
:A KVM guest or KVM VM is a guest system running virtualized under Proxmox VE with KVM.<br />
<br />
===What distribution is Proxmox VE based on?===<br />
:Proxmox VE is based on [http://www.debian.org Debian GNU/Linux], Proxmox VE Kernel is based on RHEL6 Kernel with OpenVZ patches<br />
<br />
==Installation and upgrade==<br />
<br />
===Where can I find installation instructions?===<br />
:See [[Installation]]<br />
<br />
===Proxmox VE command line tools===<br />
:See [[Command line tools]]<br />
<br />
===How long will my Proxmox VE version be supported?===<br />
According to [https://forum.proxmox.com/threads/version-end-of-life.22564/#post-113759 Wolfgang], Proxmox VE 3.4 is supported at least as long as the corresponding Debian version is [https://wiki.debian.org/DebianOldStable oldstable]. Proxmox VE uses a rolling release model, and using the latest stable version is always recommended.<br />
{| class="wikitable"<br />
|-<br />
! Proxmox VE Version !! Debian Version !! First Release !! Debian EOL !! Proxmox EOL<br />
|-<br />
| Proxmox VE 4.x || Debian 8 (Jessie) || 2015-10 || 2018-05 || tba<br />
|-<br />
| Proxmox VE 3.x || Debian 7 (Wheezy) || 2013-05 || 2016-04 || 2017-02<br />
|-<br />
| Proxmox VE 2.x || Debian 6 (Squeeze) || 2012-04 || 2014-05 || 2014-05<br />
|-<br />
| Proxmox VE 1.x || Debian 5 (Lenny) || 2008-10 || 2012-03 || 2013-01<br />
|}<br />
<br />
==Hardware==<br />
===CPU===<br />
====Will Proxmox VE run on a 32bit processor?====<br />
Proxmox VE works only on 64-bit CPUs (AMD or Intel). There are no plans for a 32-bit version of the platform. <br />
<br />
There are, however, unofficial (and unsupported) instructions for manually installing Proxmox on 32-bit systems:<br />
* Proxmox 2.0 on Squeeze [[Install Proxmox VE on Debian Squeeze on 32-Bit Processor]]<br />
* Proxmox 1.4 on Lenny [[Install Proxmox VE on Debian Lenny on 32-Bit Processor]]<br />
<br />
===Supported CPU chips===<br />
To check whether your CPU supports hardware virtualization, look for the "vmx" or "svm" flag in the output of this command:<br />
egrep '(vmx|svm)' /proc/cpuinfo<br />
<br />
====Intel====<br />
64-bit processors with [http://en.wikipedia.org/wiki/Virtualization_Technology#Intel_virtualization_.28VT-x.29 Intel Virtualization Technology (Intel VT-x)] support<br />
<br />
[http://ark.intel.com/search/advanced/?s=t&VTX=true&InstructionSet=64-bit List of processors with Intel VT and 64-bit]<br />
<br />
====AMD====<br />
64-bit processors with [http://en.wikipedia.org/wiki/Virtualization_Technology#AMD_virtualization_.28AMD-V.29 AMD Virtualization Technology (AMD-V)] support<br />
<br />
==Networking==<br />
<br />
===How do I configure bridged networking in an OpenVZ Ubuntu/Debian container?===<br />
<ol><br />
<li>In the web GUI, under the Virtual Machine configuration, go to the network tab.<br />
<li>Remove the IP address for venet and save. (Bridged Ethernet Devices will appear)<br />
<li>SSH into your host system and enter the container you want to set bridged networking for:<br />
# vzctl enter <VMID> <br />
<li>Edit /etc/network/interfaces using the following format and save. (Replace with the settings for your network.)<br />
<pre><br />
auto lo<br />
iface lo inet loopback<br />
<br />
auto eth0<br />
iface eth0 inet static<br />
address 10.0.0.17<br />
netmask 255.255.255.0<br />
network 10.0.0.0<br />
broadcast 10.0.0.255<br />
gateway 10.0.0.10<br />
</pre><br />
<li>Shut down the container.<br />
<li>Go back to the web GUI and, under "Bridged Ethernet Devices", configure eth0 to vmbr0 and save. (A MAC address will be assigned automatically.)<br />
<li>Start the container.<br />
</ol><br />
Finally, check that networking is working by entering the guest and viewing the output of ifconfig, as shown in the example below.<br />
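<br />
For example, using the addresses from the sample configuration above (the gateway is 10.0.0.10):<br />
<br />
 # vzctl enter <VMID><br />
 # ifconfig eth0<br />
 # ping -c 3 10.0.0.10<br />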
<br />
*In a Centos/RHEL container, check the gateway device is set correctly.<br />
edit /etc/sysconfig/network <br />
<pre><br />
NETWORKING="yes"<br />
#GATEWAYDEV="venet0" # comment this and add line below<br />
GATEWAYDEV="eth0"<br />
HOSTNAME="hostname" # should be set by proxmox<br />
GATEWAY=123.123.123.123 # CHANGE (and remove from ifcfg-eth0)<br />
</pre><br />
<br />
==Virtualization==<br />
===Why do you recommend 32-bit guests over 64 bit guests?===<br />
:64-bit guests make sense only if you need more than 4 GB of memory.<br />
:32-bit guests use less memory in certain situations, and are less resource intensive due to the shorter memory addressing scheme used.<br />
::e.g. a standard installation of Apache2 on 64-bit containers consumes much more memory than on 32-bit installations.<br />
<br />
== Troubleshooting ==<br />
See [[Troubleshooting]] page.<br />
[[Category:Reference Documentation]]</div>Jonathan Halewoodhttps://pve.proxmox.com/mediawiki/index.php?title=Ceph_Server&diff=8947Ceph Server2016-08-24T16:28:03Z<p>Jonathan Halewood: </p>
<hr />
<div>== Introduction ==<br />
<br />
[[Image:Screen-Ceph-Status.png|thumb]] Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability - See more at: http://ceph.com. <br />
<br />
Proxmox VE supports Ceph’s RADOS Block Device (RBD) for VM disks. The Ceph storage services are usually hosted on external, dedicated storage nodes. Such storage clusters can grow to several hundred nodes, providing petabytes of storage capacity. <br />
<br />
For smaller deployments, it is also possible to run Ceph services directly on your Proxmox VE nodes. Recent hardware has plenty of CPU power and RAM, so running storage services and VMs on same node is possible. <br />
<br />
This article describes how to set up and run Ceph storage services directly on Proxmox VE nodes. <br />
<br />
'''Note:''' <br />
<br />
Ceph Storage integration (CLI and GUI) is introduced in Proxmox VE 3.2 as technology preview.<br />
<br />
== Advantages ==<br />
<br />
*Easy setup and management with CLI and GUI support on Proxmox VE <br />
*Thin provisioning <br />
*Snapshots support <br />
*Self healing <br />
*No single point of failure <br />
*Scalable to the exabyte level <br />
*Setup pools with different performance and redundancy characteristics <br />
*Data is replicated, making it fault tolerant <br />
*Runs on economical commodity hardware <br />
*No need for hardware RAID controllers <br />
*Easy management <br />
*Open source<br />
<br />
== Recommended hardware ==<br />
<br />
You need at least three identical servers for a redundant setup. Here are the specifications of one of our test lab clusters with Proxmox VE and Ceph (three nodes): <br />
<br />
*Dual Xeon E5-2620v2, 32 GB RAM, Intel S2600CP mainboard, Intel RMM, Chenbro 2U chassis with eight 3.5” hot-swap drive bays, 2 fixed 2.5" SSD bays <br />
*10 GBit network for Ceph traffic (one Intel X540-T2 in each server, one 10Gb switch - Netgear XS712T) <br />
*Single enterprise class SSD for the Proxmox VE installation (because we run Ceph monitors there and quite a lot of logs), we use one Intel DC S3500 80 GB per host. <br />
*Single, fast and reliable enterprise class SSD for Ceph Journal. Just for this test lab cluster, we used instead some Samsung SSD 840 PRO with 240 GB <br />
*SATA disk for storing the data (OSDs), use at least 4 disks/OSDs per server, more OSD disks are faster. We use four Seagate Constellation ES.3 SATA 6Gb (4TB model) per server.<br />
<br />
N.B. enterprise class SSDs also have "power loss data protection", like the Intel DC S3500 or DC S3700.<br />
<br />
This setup delivers 48 TB of raw storage. With a replication factor of 3, you can store up to 16 TB (100%). But to be prepared for failed disks and hosts, you should never fill your storage to 100%. <br />
<br />
The raw storage can be expanded up to 96 TB just by plugging additional disks into the free drive bays. Of course, you can also add more servers as your business grows. <br />
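<br />
As a quick sanity check of these figures (illustrative arithmetic based on the hardware listed above):<br />
<br />
 3 servers x 4 OSDs per server x 4 TB = 48 TB raw, /3 replicas = 16 TB usable<br />
 3 servers x 8 drive bays x 4 TB = 96 TB raw, /3 replicas = 32 TB usable<br />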
<br />
If you do not want to run virtual machines and Ceph on the same host, you can just add more Proxmox VE nodes and use these for running the guests and the others just for the storage.<br />
<br />
== Installation of Proxmox VE ==<br />
<br />
Before you start with Ceph, you need a working Proxmox VE cluster with 3 nodes (or more). We install Proxmox VE on a fast and reliable enterprise class SSD, so we can use all bays for OSD data. Just follow the well known instructions on [[Installation]] and [[Proxmox VE 2.0 Cluster]]. <br />
<br />
'''Note:''' <br />
<br />
Use ext4 if you install on SSD (at the boot prompt of the installation ISO you can specify parameters, e.g. "linux ext4 swapsize=4"). <br />
<br />
== Network for Ceph ==<br />
<br />
All nodes need access to a separate 10Gb network interface, exclusively used for Ceph. We use network 10.10.10.0/24 for this tutorial. <br />
<br />
It is highly recommended to use 10Gb for that network to avoid performance problems. Bonding can be used to increase availability. <br />
<br />
If you do not have fast network switches, you can use a full mesh network, described in this wiki page:<br />
<br />
*[[Full Mesh Network for Ceph Server]]<br />
<br />
=== First node ===<br />
<br />
The network setup (ceph private network) from our first node contains: <br />
<br />
# from /etc/network/interfaces<br />
auto eth2<br />
iface eth2 inet static<br />
address 10.10.10.1<br />
netmask 255.255.255.0<br />
<br />
=== Second node ===<br />
<br />
The network setup (ceph private network) from our second node contains: <br />
<br />
# from /etc/network/interfaces<br />
auto eth2<br />
iface eth2 inet static<br />
address 10.10.10.2<br />
netmask 255.255.255.0<br />
<br />
=== Third node ===<br />
<br />
The network setup (ceph private network) from our third node contains: <br />
<br />
# from /etc/network/interfaces<br />
auto eth2<br />
iface eth2 inet static<br />
address 10.10.10.3<br />
netmask 255.255.255.0<br />
<br />
== Installation of Ceph packages ==<br />
<br />
You now need to select 3 nodes and install the Ceph software packages there. We wrote a small command line utility called 'pveceph' which helps you perform these tasks; it also lets you choose the Ceph version. Log in to all your nodes and execute the following on each of them: <br />
<br />
node1# pveceph install -version hammer<br />
<br />
node2# pveceph install -version hammer<br />
<br />
node3# pveceph install -version hammer<br />
<br />
This sets up an 'apt' package repository in /etc/apt/sources.list.d/ceph.list and installs the required software. If you want to install an older release, just use e.g. dumpling or emperor instead of hammer.<br />
<br />
== Create initial Ceph configuration ==<br />
<br />
[[Image:Screen-Ceph-Config.png|thumb]] After installation of packages, you need to create an initial Ceph configuration on just one node, based on your private network: <br />
<br />
node1# pveceph init --network 10.10.10.0/24<br />
<br />
This creates an initial config at /etc/pve/ceph.conf. That file is automatically distributed to all Proxmox VE nodes by using [http://pve.proxmox.com/wiki/Proxmox_Cluster_file_system_%28pmxcfs%29 pmxcfs]. The command also creates a symbolic link from /etc/ceph/ceph.conf pointing to that file. So you can simply run Ceph commands without the need to specify a configuration file. <br />
<br />
== Creating Ceph Monitors ==<br />
<br />
[[Image:Screen-Ceph-Monitor.png|thumb]] After that you can create the first Ceph monitor service using: <br />
<br />
node1# pveceph createmon<br />
<br />
== Continue with CLI or GUI ==<br />
<br />
As soon as you have created the first monitor, you can start using the Proxmox GUI (see the video tutorial on [http://youtu.be/ImyRUyMBrwo Managing Ceph Server]) to manage and view your Ceph configuration. <br />
<br />
Of course, you can continue to use the command line tools (CLI). We continue with the CLI in this wiki article, but you should achieve the same results no matter which way you complete the remaining steps. <br />
<br />
== Creating more Ceph Monitors ==<br />
<br />
You should run 3 monitors, one on each node. Create them via the GUI or via the CLI. Log in to the next node and run: <br />
<br />
node2# pveceph createmon<br />
<br />
And execute the same steps on the third node: <br />
<br />
node3# pveceph createmon<br />
<br />
'''Note:''' <br />
<br />
If you add a node where you do not want to run a Ceph monitor, e.g. another node for OSDs, you need to install the Ceph packages with 'pveceph install' and you need to initialize Ceph with 'pveceph init' <br />
<br />
== Creating Ceph OSDs ==<br />
<br />
[[Image:Screen-Ceph-Disks.png|thumb]] [[Image:Screen-Ceph-OSD.png|thumb]] First, please be careful when you initialize your OSD disks, because initialization removes all existing data from those disks. So it is important to select the correct device names. The Proxmox VE Ceph GUI displays a list of all disks, together with device names, usage information and serial numbers. <br />
<br />
Creating OSDs can be done via the GUI - which is self-explanatory - or via the CLI, as explained here: <br />
<br />
That said, initializing an OSD can be done with: <br />
<br />
# pveceph createosd /dev/sd[X]<br />
<br />
If you want to use a dedicated SSD journal disk: <br />
<br />
# pveceph createosd /dev/sd[X] -journal_dev /dev/sd[X]<br />
<br />
Example: /dev/sdf as data disk (4TB) and /dev/sdb is the dedicated SSD journal disk <br />
<br />
# pveceph createosd /dev/sdf -journal_dev /dev/sdb<br />
<br />
This partitions the disk (data and journal partition), creates the filesystems, starts the OSD and adds it to the existing CRUSH map. Afterwards the OSD is running and fully functional. Please create at least 12 OSDs, distributed among your nodes (4 on each node). <br />
<br />
It should be noted that this command refuses to initialize a disk when it detects existing data. So if you want to overwrite a disk, you should remove the existing data first. You can do that using: <br />
<br />
# ceph-disk zap /dev/sd[X]<br />
*In some cases, disks that used to be part of a 3ware RAID need the following in addition to zap. <br />
<br />
try this:<br />
<pre><br />
#To remove partition table and boot sector the following should be sufficient:<br />
dd if=/dev/zero of=/dev/$DISK bs=1024 count=1<br />
</pre><br />
or<br />
<pre><br />
DISK=$1<br />
<br />
if [ "$1" = "" ]; then<br />
echo "Need to supply a dev name like sdg . exiting"<br />
exit 1<br />
fi<br />
echo " make sure this is the correct disk "<br />
echo $DISK<br />
echo " you will end up with NO partition table when this procedes . example:<br />
Disk /dev/$1 doesn't contain a valid partition table<br />
Press enter to procede , or ctl-c to exit "<br />
<br />
read x<br />
dd if=/dev/zero of=/dev/$DISK bs=512 count=50000<br />
</pre><br />
<br />
You can create OSDs containing both journal and data partitions, or you can place the journal on a dedicated SSD. Using an SSD journal disk is highly recommended if you expect good performance. <br />
<br />
'''Note:''' <br />
<br />
In order to use a dedicated journal disk (SSD), the disk needs to have a GPT partition table. You can create this with 'gdisk /dev/sd(x)'. If there is no GPT, you cannot select the disk as journal. Currently the journal size is fixed to 5 GB.<br />
<br />
== Ceph Pools ==<br />
<br />
[[Image:Screen-Ceph-Pools.png|thumb]] [[Image:Screen-Ceph-Crush.png|thumb]] [[Image:Screen-Ceph-Log.png|thumb]] The standard installation creates some default pools, so you can either use the standard 'rbd' pool, or create your own pools using the GUI. <br />
<br />
In order to calculate the number of placement groups for your pools, you can use:<br />
<br />
'''Ceph PGs per Pool Calculator'''<br />
<br />
http://ceph.com/pgcalc/<br />
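<br />
As a rough rule of thumb (the calculator above is authoritative for your exact setup), the commonly cited guideline is about 100 placement groups per OSD, divided by the replica count and rounded up to the next power of two. For the 12-OSD example cluster described above, with 3 replicas:<br />
<br />
 (12 OSDs x 100) / 3 replicas = 400  ->  rounded up to 512 PGs<br />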
<br />
== Ceph Client ==<br />
<br />
You can then configure Proxmox VE to use such pools to store VM images; just use the GUI ("Add Storage": RBD). A typical entry in the Proxmox VE storage configuration looks like: <br />
<br />
# from /etc/pve/storage.cfg<br />
rbd: my-ceph-storage<br />
monhost 10.10.10.1;10.10.10.2;10.10.10.3<br />
pool rbd<br />
content images<br />
username admin<br />
<br />
You also need to copy the keyring to a predefined location.<br />
:''Note that the file name needs to be the storage ID + .keyring. The storage ID is the word following 'rbd:' in /etc/pve/storage.cfg (here: my-ceph-storage).''<br />
<br />
# cd /etc/pve/priv/<br />
# mkdir ceph<br />
# cp /etc/ceph/ceph.client.admin.keyring ceph/my-ceph-storage.keyring<br />
<br />
== Why do we need a new command line tool (pveceph)? ==<br />
<br />
For use within the specific Proxmox VE architecture we use pveceph. Proxmox VE provides a distributed file system ([http://pve.proxmox.com/wiki/Proxmox_Cluster_file_system_%28pmxcfs%29 pmxcfs]) to store configuration files. <br />
<br />
We use this to store the Ceph configuration. The advantage is that all nodes see the same file, and there is no need to copy configuration data around using ssh/scp. The tool can also use additional information from your Proxmox VE setup. <br />
<br />
Tools like ceph-deploy cannot take advantage of that architecture.<br />
<br />
== Note for users of HP SmartArray controllers ==<br />
<br />
Proxmox will struggle to add an OSD on a SmartArray controller via the GUI. This is due to an '!' being inserted into the device path where a '/' should be. For example, "/dev/cciss!c0d0" is shown instead of "/dev/cciss/c0d0".<br />
<br />
This is, however, very easy to fix with a small file edit. <br />
<br />
Connect to the terminal via SSH.<br />
<br />
Open the following file in your editor of choice (Vi, Emacs, Nano, etc):<br />
<br />
'''/usr/share/perl5/PVE/API2/Ceph.pm'''<br />
Find the line that says: <br />
<br />
'''$devname =~ s|/dev/||;'''<br />
Add the following string on a new line, immediately after the line mentioned above:<br />
<br />
'''$devname =~ s|cciss/|cciss!|;'''<br />
<br />
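After the edit, the relevant part of the file should look roughly like this (illustrative; only the second line is new):<br />
<br />
 $devname =~ s|/dev/||;<br />
 $devname =~ s|cciss/|cciss!|;<br />
<br />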
Finally, save the file.<br />
<br />
Unfortunately, this will still not allow the disk to be added via the GUI, but you can now use the instructions above to add the OSD via the CLI, and then carry on as you otherwise would with creation of pools, etc. <br />
<br />
== Video Tutorials ==<br />
<br />
*[http://youtu.be/ImyRUyMBrwo Managing Ceph Server]<br />
<br />
== Further readings about Ceph ==<br />
<br />
Ceph comes with plenty of documentation [http://ceph.com/docs/master/ here]. Even better, the dissertation from the creator of Ceph - Sage A. Weil - is also [http://ceph.com/papers/weil-thesis.pdf available]. By reading it you can get a deep insight into how Ceph works. <br />
<br />
*http://ceph.com/ <br />
*http://www.inktank.com/, Commercial support services for Ceph<br />
<br />
*https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/, Journal SSD Recommendations<br />
<br />
[[Category:HOWTO]] [[Category:Installation]] [[Category:Technology]]</div>Jonathan Halewood