Ceph Server
The contents of this article are also covered in our reference documentation: https://pve.proxmox.com/pve-docs/chapter-pveceph.html
Recommended hardware
Take a look at our Proxmox VE Ceph Benchmark 2018/02 for guidance on hardware choices: https://www.proxmox.com/en/downloads/item/proxmox-ve-ceph-benchmark
Further reading about Ceph
Ceph comes with plenty of documentation of its own. Even better, the dissertation by the creator of Ceph, Sage A. Weil, is also available; reading it gives a deep insight into how Ceph works.
- Journal SSD Recommendations: https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
Video Tutorials
Proxmox YouTube channel
You can subscribe to our Proxmox VE Channel on YouTube to get updates about new videos.
Ceph Misc
Upgrading existing Ceph Server from Hammer to Jewel
Upgrading existing Ceph Server from Jewel to Luminous
Using a disk that was part of a ZFS pool
As of now, the disk first needs to be zapped:
ceph-disk zap /dev/sdX
Otherwise it does not show up under PVE > Ceph > OSD > Create OSD.
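For example, a minimal sketch, assuming the former ZFS member is /dev/sdd (adjust the device name to your system before running anything):
ceph-disk zap /dev/sdd    # wipes the partition table so the disk can be reused
ceph-disk list            # verify the disk state; it should now also show up under Create OSD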
Restore LXC from ZFS to Ceph
If the container is on ZFS with compression enabled, the actual disk usage can be far greater than expected; see https://forum.proxmox.com/threads/lxc-restore-fail-to-ceph.32419/#post-161287
One way to determine the actual disk usage (sketched below):
- Restore the backup to an ext4 directory and run du -sh on it, then do the restore manually, specifying a sufficiently large target disk size.
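A rough sketch of that workflow; the archive path, VMID, storage name and disk size are placeholders only:
mkdir /tmp/ct-size-check
tar -xpf /var/lib/vz/dump/vzdump-lxc-100-2017_01_01-00_00_00.tar.gz -C /tmp/ct-size-check    # extract the backup onto ext4
du -sh /tmp/ct-size-check    # actual, uncompressed disk usage of the container
rm -rf /tmp/ct-size-check
pct restore 100 /var/lib/vz/dump/vzdump-lxc-100-2017_01_01-00_00_00.tar.gz --rootfs ceph_pool:16    # restore with an explicit 16 GB rootfs on the Ceph storage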
SCSI setting
Make sure that you use the virtio-scsi controller (not LSI); see the VM options. There was a report of kernel panics when using LSI which was not debugged further, as modern operating systems should use virtio-scsi anyway. https://forum.proxmox.com/threads/restarted-a-node-some-kvms-on-other-nodes-panic.32806
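A minimal sketch of switching an existing VM to virtio-scsi from the command line (VMID 100 is just an example):
qm set 100 --scsihw virtio-scsi-pci    # set the SCSI controller type of VM 100 to virtio-scsi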