[PVE-User] Poor CEPH performance? or normal?

Ronny Aasen ronny+pve-user at aasen.cx
Wed Jul 25 21:11:32 CEST 2018


On 25.07.2018 11:20, Mark Adams wrote:
> Hi Alwin,
>
> On 25 July 2018 at 07:10, Alwin Antreich <aa at ipnerd.net> wrote:
>
>> Hi,
>>
>> On Wed, Jul 25, 2018, 02:20 Mark Adams <mark at openvs.co.uk> wrote:
>>
>>> Hi All,
>>>
>>> I have a proxmox 5.1 + ceph cluster of 3 nodes, each with 12 x WD 10TB
>>> GOLD drives. Network is 10Gbps on X550-T2, with a separate network for
>>> the ceph cluster.
>>>
>> Do a rados bench to test the cluster performance; spinners are not
>> fast.
>>
>>
> This was a typo - I'm actually on 5.2-1. I'll give rados bench a try to see
> what it comes back with.
>
>
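
As a reference point, a raw write benchmark against the pool backing the
VM disks can be run roughly like this (the pool name is a placeholder;
--no-cleanup keeps the benchmark objects around so a read benchmark can
follow):

    rados bench -p <pool> 60 write --no-cleanup
    rados bench -p <pool> 60 seq
    rados -p <pool> cleanup
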
>>> I have 1 VM currently running on this cluster, which is Debian Stretch
>>> with a zpool on it. I'm zfs sending into it, but only getting around
>>> ~15MiB/s write speed. Does this sound right? It seems very slow to me.
>>>
>> Never ever use a CoW filesystem on top of another CoW system. This doubles
>> the writes that need to be made.
>
>>> Not only that, but when this zfs send is running I cannot do any
>>> parallel sends to any other zfs datasets inside the same VM. They just
>>> seem to hang, then eventually say "dataset is busy".
>>>
>> Ceph already gives you the possibility of snapshots. You can let PVE do
>> this through the CLI or GUI.
>>
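
For reference, a VM-level snapshot can be taken on the PVE host roughly
like this (VMID, snapshot name, pool and image name are placeholders):

    qm snapshot 100 before-change
    rbd snap create <pool>/vm-100-disk-1@before-change
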
> The problem with this is the features I require. I need an HA cluster, and
> zfs doesn't support this, so ceph is ideal; however, I also need "restore
> previous versions" to be usable inside a file server VM in samba, which
> ceph snapshots at the VM layer are no use for... unless there is some
> other smart way of doing this that I don't know about!
>
> I guess my main question is: are there any other config hints to speed this
> up, whether in ceph or in ZFS inside the VM, and is the blocking of other
> IO normal with ceph when the "max" write speed is being reached? That bit
> doesn't seem right to me.
>

You can expose rbd snapshots in samba using this script:
https://github.com/ksperis/autosnap-rbd-shadow-copy
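
The usual way this works is to combine scheduled rbd snapshots with
samba's shadow_copy2 VFS module, so the snapshots show up as "Previous
Versions" on Windows clients. A share definition could then look roughly
like this (share name, path and snapshot directory are placeholders;
check the script's README for the exact layout it expects):

    [fileshare]
        path = /srv/fileshare
        vfs objects = shadow_copy2
        shadow:snapdir = .snapshots
        shadow:sort = desc
        shadow:format = @GMT-%Y.%m.%d-%H.%M.%S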

--

But having ceph -> rbd -> filesystem -> VM mounting the filesystem -> samba
export is a long and overly complex chain.

If you mount and export a cephfs filesystem, you can remove the need for
a filesystem on an rbd:
ceph -> cephfs -> VM with cephfs mounted -> samba export
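
Inside the VM that would be roughly (monitor address, cephx user and
secret file are placeholders for your own values):

    mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
        -o name=samba,secretfile=/etc/ceph/samba.secret

and the samba share then simply points at a directory below /mnt/cephfs.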

You can take it even a step further: by using the vfs_ceph samba module,
you can translate samba calls to cephfs calls directly:
https://fosdem.org/2018/schedule/event/cephfs_gateways/attachments/slides/2636/export/events/attachments/cephfs_gateways/slides/2636/cephfs_samba_and_nfs.pdf
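
With vfs_ceph, smbd talks to the cluster itself, so the share definition
could look roughly like this (share name, cephx user and path are
placeholders; the user needs a keyring with suitable caps on the host
running samba):

    [fileshare]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        kernel share modes = no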


good luck
Ronny Aasen




