Ceph Server
Note: This article has been replaced by our reference documentation's Ceph chapter: https://pve.proxmox.com/pve-docs/chapter-pveceph.html
Recommended hardware
Take a look at our Proxmox VE Ceph Benchmark 2018/02 for guidance on hardware choices: https://www.proxmox.com/en/downloads/item/proxmox-ve-ceph-benchmark
Further reading about Ceph
Ceph comes with plenty of documentation (see the official Ceph documentation at https://docs.ceph.com/). Even better, the dissertation from the creator of Ceph - Sage A. Weil - is also available. Reading it gives a deep insight into how Ceph works.
- Journal SSD recommendations: https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/ (an example test command is sketched below)
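As a rough illustration, the kind of journal suitability test described in that article boils down to an fio run doing synchronous, direct 4k writes at queue depth 1. This is only a sketch; /dev/sdX is a placeholder for the SSD under test, and the run overwrites data on that device:

 # WARNING: this writes directly to /dev/sdX and destroys any data on it
 fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 \
     --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test

A suitable journal SSD should sustain a high IOPS figure in this test, because the Ceph journal issues small synchronous writes.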
Video Tutorials
Proxmox YouTube channel
You can subscribe to our Proxmox VE Channel on YouTube to get updates about new videos.
Ceph Misc
Upgrading existing Ceph Server
- From Hammer to Jewel: See Ceph Hammer to Jewel
- From Jewel to Luminous: See Ceph Jewel to Luminous
restore lxc from zfs to ceph
If the LXC container is on ZFS with compression, the actual disk usage can be far greater than expected; see https://forum.proxmox.com/threads/lxc-restore-fail-to-ceph.32419/#post-161287
One way to determine the actual disk usage:
- Restore the backup to an ext4-backed directory and run du -sh on it, then restore manually, specifying the target disk size (see the sketch below).
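A minimal sketch of that workflow on the command line, assuming a hypothetical VMID 123, backup archive /var/lib/vz/dump/vzdump-lxc-123.tar.gz and a Ceph RBD storage named ceph-pool; adjust all of these to your setup:

 # 1) Extract the backup to a scratch directory on an ext4 filesystem
 mkdir -p /tmp/lxc-size-check
 tar -xzf /var/lib/vz/dump/vzdump-lxc-123.tar.gz -C /tmp/lxc-size-check

 # 2) Measure the actual (uncompressed) disk usage of the container
 du -sh /tmp/lxc-size-check

 # 3) Restore to the Ceph storage, explicitly sizing the rootfs (in GiB)
 #    large enough for the usage reported above
 pct restore 123 /var/lib/vz/dump/vzdump-lxc-123.tar.gz --rootfs ceph-pool:16

 # 4) Clean up the scratch copy
 rm -rf /tmp/lxc-size-check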
scsi setting
Make sure that you use the virtio-scsi controller (not LSI); see the VM options. There have been kernel panics with the LSI controller (not debugged further), and a modern OS should use virtio-scsi anyway: https://forum.proxmox.com/threads/restarted-a-node-some-kvms-on-other-nodes-panic.32806
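For an existing VM the controller type can also be changed on the command line; a short sketch with a hypothetical VMID 100:

 # switch the SCSI controller type of VM 100 to virtio-scsi
 qm set 100 --scsihw virtio-scsi-pci
 # the VM's disks should be attached on the SCSI bus (scsi0, scsi1, ...) to use this controller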