[PVE-User] [ceph-users] Ceph on Proxmox VE

Eneko Lacunza elacunza at binovo.es
Thu Jan 26 15:25:01 CET 2017


Hi Martin,

On 26/01/17 at 14:08, Martin Maurer wrote:
> We just created a new tutorial for installing Ceph Jewel on Proxmox VE.
>
> The Ceph Server integration in Proxmox VE has been available for three
> years and is a widely used component that gives smaller deployments a
> real open source hyper-converged virtualization and storage setup,
> highly scalable and without limits.
>
> Video Tutorial
> https://youtu.be/jFFLINtNnXs
>
> Documentation
> https://pve.proxmox.com/wiki/Ceph_Server
>
Replying on pve-user so as not to bother ceph-users with
Proxmox-specific stuff.

Is Jewel officially supported in Proxmox? I didn't see any notice about 
this here or in the release notes.

Also, it seems the suggested specs on the new Ceph_Server page are a 
bit high on some fronts and lacking on others. Maybe the problem is 
that you're describing the lab setup in that section?:

- A 10G network will be overkill when using magnetic drives for OSD 
storage, unless you have tens of OSDs per server. We have various 
setups with only a 1G network that work like a charm; even in one 
cluster with one SSD OSD per server, we're not able to saturate the 
1G link. (Rough numbers in the first sketch after this list.)

- I don't think it is wise to recommend Intel SSD DC S3520 drives for 
production use: they have very low warranted endurance. S3610 drives 
aren't much pricier and offer roughly 10x the endurance. (See the 
second sketch below.)
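
To put rough numbers on the network point (the per-OSD throughput
figures below are my own assumptions, not anything from the wiki), a
quick Python sketch:

# Rough sanity check: how many OSDs' worth of bandwidth fits in a link?
# The per-OSD throughput figures are assumptions, not measurements.

LINK_MB_S = {"1G": 125, "10G": 1250}   # usable link capacity, MB/s
SEQ_MB_S = 120    # one magnetic OSD, sequential best case
RAND_MB_S = 10    # one magnetic OSD, small random VM I/O

def osds_to_fill(link_mb_s, per_osd_mb_s):
    """Number of OSDs whose combined throughput saturates the link."""
    return link_mb_s / per_osd_mb_s

for name, cap in LINK_MB_S.items():
    print(f"{name}: {osds_to_fill(cap, SEQ_MB_S):.0f} OSDs (sequential), "
          f"{osds_to_fill(cap, RAND_MB_S):.0f} OSDs (random VM I/O)")

Under a typical random VM workload that works out to on the order of a
hundred spinners per node before 10G becomes the bottleneck, which
matches what we see on our 1G clusters.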
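On the endurance point, the useful comparison is total warranted
writes. A minimal helper (the DWPD ratings below are placeholders for
illustration; take the real figures from Intel's datasheets):

# Convert a drive's endurance rating into total warranted writes (TBW).
# DWPD values here are placeholders, NOT datasheet figures.

def warranted_tbw(capacity_gb, dwpd, years=5):
    """Drive-writes-per-day rating -> total TB written over warranty."""
    return capacity_gb * dwpd * 365 * years / 1000

for model, dwpd in (("read-oriented class (S3520-like)", 1.0),
                    ("mixed-workload class (S3610-like)", 3.0)):
    print(f"{model}: {warranted_tbw(480, dwpd):,.0f} TBW, 480 GB drive")

Keep in mind that with filestore (the Jewel default), each write hits
the journal first, so a colocated journal roughly doubles the writes
the drive absorbs, eating into a low-DWPD budget even faster.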

Also, a fast CPU is a must if the OSDs are SSDs; when using many 
magnetic OSDs, having more cores is probably better than more GHz. A 
rough sketch of that trade-off follows.
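
A very rough way to budget it (the per-OSD core costs are my own
guesses, not official Ceph or Proxmox guidance):

# Ballpark CPU budget for an OSD node; per-OSD core costs are guesses.

CORES_PER_SPINNER = 0.5   # magnetic OSDs are seek-bound, little CPU each
CORES_PER_SSD = 4.0       # an SSD OSD can keep several cores busy

def cores_needed(spinners=0, ssds=0):
    """Ballpark core count for a node with the given OSD mix."""
    return spinners * CORES_PER_SPINNER + ssds * CORES_PER_SSD

print(cores_needed(spinners=12))  # many spinners: core count dominates
print(cores_needed(ssds=4))       # SSD OSDs: per-core speed matters too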

Again, I think the problem is that you're describing the test lab 
setup in the recommended hardware section?

Thanks
Eneko


-- 
Technical Director
Binovo IT Human Project, S.L.
Tel. 943493611
     943324914
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es



