Ceph Server

<div class="warn-box">This article got replaced by our reference documentation's ceph chapter: https://pve.proxmox.com/pve-docs/chapter-pveceph.html</div>


== Recommended hardware ==

Take a look at our [https://www.proxmox.com/en/downloads/item/proxmox-ve-ceph-benchmark Proxmox VE Ceph Benchmark 2018/02] for possible hardware decisions.

== Further readings about Ceph ==


Ceph comes with plenty of documentation [https://docs.ceph.com/ here]. Even better, the dissertation from the creator of Ceph, Sage A. Weil, is also [http://unixadmin.free.fr/wp-content/uploads/weil-thesis.pdf available]. Reading that thesis gives an in-depth insight into how Ceph works.


* https://ceph.com/


* https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/, Journal SSD Recommendations


== Video Tutorials ==

=== Proxmox YouTube channel ===

You can subscribe to our Proxmox VE Channel on YouTube to get updates about new videos.

== Ceph Misc ==


=== Upgrading existing Ceph Server ===

* From Hammer to Jewel: See [[Ceph Hammer to Jewel]]
* From Jewel to Luminous: See [[Ceph Jewel to Luminous]]

=== Using a disk that was part of a ZFS pool ===

As of now,

 ceph-disk zap /dev/sdX

is needed; otherwise the disk does not show up under PVE > Ceph > OSD > Create OSD.
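
A minimal sketch of the whole check-and-wipe step (the device name /dev/sdb is an example; zapping destroys all data on the disk):

 # Confirm the disk still carries a ZFS signature (device name is an example)
 lsblk -o NAME,SIZE,FSTYPE /dev/sdb
 # Wipe the partition table and signatures so the disk becomes selectable
 # under PVE > Ceph > OSD > Create OSD (this destroys all data on /dev/sdb!)
 ceph-disk zap /dev/sdb

On newer Ceph releases, ceph-volume lvm zap /dev/sdb should serve the same purpose.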


=== Restore LXC from ZFS to Ceph ===

If the container is on ZFS with compression, the actual disk usage can be far greater than expected; see https://forum.proxmox.com/threads/lxc-restore-fail-to-ceph.32419/#post-161287

One way to determine the actual disk usage: restore the backup to an ext4 directory storage and run du -sh on it, then do the restore manually, specifying the target disk size.
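
A sketch of that workflow on the CLI (the VM ID 123, the storage names local and ceph-pool, the archive path, and the 8 GiB size are all examples):

 # Restore to a directory-backed storage first to see the real, uncompressed usage
 pct restore 123 /var/lib/vz/dump/vzdump-lxc-123.tar.zst --storage local
 pct mount 123
 du -sh /var/lib/lxc/123/rootfs
 pct unmount 123
 # Restore again to Ceph, this time with an explicitly sized root disk (8 GiB here)
 pct destroy 123
 pct restore 123 /var/lib/vz/dump/vzdump-lxc-123.tar.zst --storage ceph-pool --rootfs ceph-pool:8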

=== SCSI setting ===

Make sure that you use the virtio-scsi controller (not LSI); see the VM options. There were kernel panics with the LSI controller recently, not debugged further since a modern OS should use virtio-scsi anyway. See https://forum.proxmox.com/threads/restarted-a-node-some-kvms-on-other-nodes-panic.32806
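
For an existing VM this can also be switched on the CLI; a one-line sketch (the VM ID 123 is an example):

 # Set the VM's SCSI controller type to virtio-scsi
 qm set 123 --scsihw virtio-scsi-pci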