[pve-devel] ceph osd over zfs volume

Dietmar Maurer dietmar at proxmox.com
Thu Apr 17 15:00:59 CEST 2014


> >> It is difficult to believe that using slices (or partitions) makes
> >> administration difficult.
> >
> > It is also difficult to believe that zfs is useful here.
> 
> It is actually easier when you consider a typical Proxmox cluster with OpenVZ and
> KVM machines and recall that OpenVZ currently requires simfs (btw: ploop is out
> of beta now and could be supported too).
> 
> In a small cluster that runs a mixed load, there is no space for allocating entire
> disks to Ceph. There is no need for that either. The key here is small: 3 to 6 nodes
> total.

Sorry, but you need at least 12 OSDs to get reasonable performance.
So for your small clusters you need at least 4 disks per node.

The Ceph people recommend at least 100 OSDs (one hundred).

> >> To me this is a limiting factor without a tangible advantage.
> > We do not set any limits on Ceph - you can use the original packages however
> > you want.
> 
> Following the same reasoning I could use OpenVZ and KVM+QEMU without
> Proxmox at all. I have done this in the past, but shall I do so now?
> 
> What I am talking about is making proxmox a better fit for a more general
> purpose.
> 
> The choice to limit OSDs to physical disks, when there is no such limitation in the
> underlying technology, is a plainly arbitrary choice (a political choice if you wish)
> which actually removes from the potential customer base rather than adding to
> it.
> 
> I really do not understand.

No, it is not a plain arbitrary choice - we provide a GUI for that!

If you want to manage OSDs on ZFS, please also provide patches to make the GUI
able to manage that.
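
For context, a minimal sketch of what backing an OSD with a ZFS zvol would involve at the command line (the pool name `tank`, the zvol size, and the use of the era's `ceph-disk` provisioning tool are all assumptions for illustration, not a supported Proxmox workflow):

```shell
# Hypothetical sketch -- pool name "tank" and size are illustrative.
# Create a block device (zvol) on an existing ZFS pool:
zfs create -V 100G tank/ceph-osd0

# Hand the zvol to Ceph as if it were a whole disk; ceph-disk was
# the standard provisioning tool at the time of this thread:
ceph-disk prepare /dev/zvol/tank/ceph-osd0
```

Any GUI support would additionally need to enumerate zvols and track their OSD assignments, which is the integration work being asked for above.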


More information about the pve-devel mailing list