[PVE-User] (Very) basic question regarding PVE Ceph integration

Eneko Lacunza elacunza at binovo.es
Mon Dec 17 11:30:11 CET 2018


Hi Alwin,

On 17/12/18 at 10:22, Alwin Antreich wrote:
>
>>>> b) depends on the workload of your nodes. Modern server hardware has
>>>> enough power to run multiple services. It all comes down to having
>>>> enough resources for each domain (e.g. Ceph, KVM, CT, host).
>>>>
>>>> I recommend using a simple calculation to start with, just to get a
>>>> direction.
>>>>
>>>> In principle:
>>>>
>>>> ==CPU==
>>>> core='CPU with HT on'
>>>>
>>>> * reserve a core for each Ceph daemon
>>>>     (preferably on the same NUMA node as the network card; a higher
>>>>     frequency is better)
>>>> * one core for the network card (higher frequency = lower latency)
>>>> * the rest of the cores for the OS (incl. monitoring, backup, ...)
>>>>     and KVM/CT usage
>>>> * don't overcommit
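
To make that arithmetic concrete (illustrative numbers only, not a
sizing rule): a node with 16 cores (HT on), 3 OSDs, 1 MON and 1 MGR
would reserve 3 + 1 + 1 = 5 cores for the Ceph daemons plus one for
the NIC, leaving ~10 cores for the OS and KVM/CT guests.
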
>>>>
>>>> ==Memory==
>>>> * 1 GB per TB of used disk space on an OSD (more during recovery)
>> Note this is no longer true with Bluestore, because you also have to
>> take cache space into account (1 GB for HDD and 3 GB for SSD OSDs, if
>> I recall correctly), and currently OSD processes aren't that good at
>> staying within their RAM accounting... :)
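
(To put numbers on that: with the old rule, a node with 4x 4 TB HDD
OSDs would reserve ~16 GB for the OSD daemons alone, and the per-OSD
Bluestore cache, 1 GB for HDD / 3 GB for SSD by default as above,
comes on top of that.)
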
> I want to add that the recommendations for Luminous still state that.
> Also, it is more a rule of thumb than what the usage will actually be.
> http://docs.ceph.com/docs/luminous/start/hardware-recommendations/
>
> With 12.2.10 the bluestore_cache_* settings have been replaced by
> osd_memory_target, which is set to 4 GB by default.
> http://docs.ceph.com/docs/master/releases/luminous/
You're right, it seems the docs haven't been updated to reflect
Bluestore RAM use. The problem here is that Bluestore can't use the
Linux page cache, so it has an in-process cache of its own.
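
For anyone who wants to bound that in-process cache on 12.2.10+, a
minimal ceph.conf sketch (the value is in bytes; 4 GiB here, matching
the default Alwin mentioned):

    [osd]
    # total memory target per OSD process, in bytes (4294967296 = 4 GiB)
    osd_memory_target = 4294967296

The OSD then tries to grow/shrink its caches to stay near that target.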

Cheers!
Eneko

-- 
Technical Director
Binovo IT Human Project, S.L.
Tel. 943569206
Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es



