[PVE-User] Poor CEPH performance? or normal?

Mark Adams mark at openvs.co.uk
Wed Jul 25 11:20:57 CEST 2018


Hi Alwin,

On 25 July 2018 at 07:10, Alwin Antreich <aa at ipnerd.net> wrote:

> Hi,
>
> On Wed, Jul 25, 2018, 02:20 Mark Adams <mark at openvs.co.uk> wrote:
>
> > Hi All,
> >
> > I have a proxmox 5.1 + ceph cluster of 3 nodes, each with 12 x WD 10TB
> > GOLD drives. Network is 10Gbps on X550-T2, separate network for the ceph
> > cluster.
> >
> Do a rados bench to test the cluster performance; spinners are not
> fast.
>
>
This was a typo - I'm actually on 5.2-1. I'll give rados bench a try to see
what it comes back with.


> > I have 1 VM currently running on this cluster, which is Debian stretch
> > with a zpool on it. I'm zfs sending into it, but only getting around
> > ~15MiB/s write speed. Does this sound right? It seems very slow to me.
> >
> Never ever use a CoW filesystem on top of another CoW filesystem. This
> doubles the writes that need to be made.


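For context, the send is just a standard zfs send piped over SSH into the VM
- the below is representative rather than the exact command (dataset and
host names are placeholders):

    zfs send tank/data@migrate-1 | ssh fileserver-vm zfs receive -F tank/data
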
>
> > Not only that, but when this zfs send is running - I cannot do any
> > parallel sends to any other zfs datasets inside the same VM. They just
> > seem to hang, then eventually say "dataset is busy".
> >
> Ceph already gives you the possibility of snapshots. You can let PVE do
> this through the CLI or GUI.
>
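
Right - at the PVE level that would just be something along the lines of
(VM ID and snapshot name are placeholders):

    qm snapshot 100 before-send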

The problem with this is the required feature set. I need an HA cluster,
and ZFS doesn't support that, so Ceph is ideal. However, I also need
"restore previous versions" to be usable in Samba inside a file server VM,
which Ceph snapshots at the VM layer are no use for... unless there is some
other smart way of doing this that I don't know about!
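
What I'm after inside the VM is the usual shadow_copy2-on-ZFS setup, roughly
like this (share path and snapshot name format are placeholders and depend
on whichever tool ends up taking the snapshots):

    [fileshare]
        path = /tank/fileshare
        vfs objects = shadow_copy2
        shadow:snapdir = .zfs/snapshot
        shadow:sort = desc
        shadow:format = zfs-auto-snap_hourly-%Y-%m-%d-%H%M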

I guess my main question is: are there any other config hints to speed this
up, whether in Ceph or in ZFS inside the VM? And is the blocking of other IO
normal with Ceph when the "max" write speed is being reached? That bit
doesn't seem right to me.
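
If it helps, I can gather some numbers while a send is running, e.g.:

    ceph -s           # overall cluster health and client IO
    ceph osd perf     # per-OSD commit/apply latency
    iostat -x 5       # disk utilisation on the OSD nodes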


> --
> Cheers,
> Alwin
>

Regards,
Mark


