Ceph Server
Note: This article has been replaced by our reference documentation's Ceph chapter: https://pve.proxmox.com/pve-docs/chapter-pveceph.html
Recommended hardware
Take a look at our Proxmox VE Ceph Benchmark 2018/02 for possible hardware decisions: https://www.proxmox.com/en/downloads/item/proxmox-ve-ceph-benchmark
Further reading about Ceph
Ceph itself comes with plenty of documentation. Even better, the dissertation from the creator of Ceph, Sage A. Weil, is also available. By reading that thesis you can get a deep insight into how it works.
- Journal SSD recommendations: https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
Video Tutorials
Proxmox YouTube channel
You can subscribe to our Proxmox VE Channel on YouTube to get updates about new videos.
Ceph Misc
Upgrading existing Ceph Server
- From Hammer to Jewel: See Ceph Hammer to Jewel
- From Jewel to Luminous: See Ceph Jewel to Luminous
Restore LXC from ZFS to Ceph
If the LXC container is on ZFS with compression enabled, the actual disk usage can be far greater than expected. See https://forum.proxmox.com/threads/lxc-restore-fail-to-ceph.32419/#post-161287
One way to determine the actual disk usage:
- Restore the backup to an ext4 directory and run du -sh on it, then restore manually, specifying the target disk size (see the commands below).
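A minimal sketch of that workflow, assuming a hypothetical backup file name, container ID 123 and a Ceph storage named ceph-pool (all of these are placeholders; adjust them to your setup and verify the pct options against your Proxmox VE version):

 # Unpack the backup into a directory on ext4 to measure the real, uncompressed size
 mkdir /tmp/ct123-check
 tar -xpf /var/lib/vz/dump/vzdump-lxc-123.tar.zst -C /tmp/ct123-check
 du -sh /tmp/ct123-check
 rm -rf /tmp/ct123-check

 # Restore to the Ceph storage with a rootfs size (in GiB) larger than the measured usage
 pct restore 123 /var/lib/vz/dump/vzdump-lxc-123.tar.zst --rootfs ceph-pool:12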
SCSI setting
Make sure that you use the VirtIO SCSI controller (not LSI), see the VM options. A kernel panic was observed with the LSI controller recently (not debugged further, as modern operating systems should use VirtIO SCSI anyway). See https://forum.proxmox.com/threads/restarted-a-node-some-kvms-on-other-nodes-panic.32806
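For reference, a minimal sketch of checking and changing the controller from the shell, assuming VM ID 100 as a placeholder (verify the qm options against your Proxmox VE version):

 # Show the currently configured SCSI controller of VM 100
 qm config 100 | grep scsihw
 # Switch to the VirtIO SCSI controller; takes effect on the next start of the VM
 qm set 100 --scsihw virtio-scsi-pci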