[PVE-User] Shared storage recommendations

Gilberto Nunes gilberto.nunes32 at gmail.com
Tue Feb 26 17:26:23 CET 2019


Hi

IMHO, Ceph is very good.
I have a production environment using 6 servers in a Proxmox Ceph cluster.
No data loss so far. Performance is good. Unfortunately the hardware I have
at hand is low on RAM... but it is still OK for most of the job.
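
If RAM on the OSD nodes is tight, one knob that can help is the per-OSD
memory target in /etc/pve/ceph.conf (a minimal sketch, assuming BlueStore
OSDs on Luminous 12.2.9 or newer; the 2 GiB value is only an example, and
the OSDs need a restart to pick it up):

    [osd]
        # limit BlueStore cache/memory per OSD daemon (value in bytes)
        osd_memory_target = 2147483648
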
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Tue, 26 Feb 2019 at 13:21, Eneko Lacunza <elacunza at binovo.es>
wrote:

> Hi
>
> > On 26/2/19 at 10:41, Thomas Lamprecht wrote:
> > On 2/25/19 6:22 PM, Frederic Van Espen wrote:
> >> We're designing a new datacenter network where we will run Proxmox
> >> nodes on about 30 servers. Of course, shared storage is a part of the
> >> design.
> >>
> >> What kind of shared storage would anyone recommend based on their
> >> experience, and what kind of network equipment would be essential in
> >> that design? Let's assume for a bit that budget is not constrained too
> >> much. We should be able to afford a vendor-specific iSCSI device, or be
> >> able to implement an open source solution like Ceph.
> >>
> >> Concerning storage space and IOPS requirements, we're very modest in
> >> the current setup (about 13TB of storage space used, very modest IOPS:
> >> about 6500 write IOPS and 4200 read IOPS currently distributed across
> >> the whole network, according to the Prometheus monitoring).
> >>
> >> Key in the whole setup is day-to-day maintainability and scalability.
> > I'd use Ceph then. Scalability is something Ceph is just made for, and
> > maintainability is also really not too bad, IMO. You can run CTs and VMs
> > on normal block devices (RBD) and also have a file-based shared FS
> > (CephFS), both well integrated into the PVE frontend/backend, which
> > other shared storage systems aren't.
> >
> We maintain 8 tiny Proxmox clusters with shared storage; 7 of them use
> Ceph and the other an EMC VNXe3200 iSCSI device.
>
> The VNXe3200 works well (a bit slow maybe) but doesn't seem to be able to
> scale.
>
> Ceph has worked really well for us, even though the clusters are tiny (3
> to 4 nodes). You'll have to learn a bit about Ceph, but it really pays
> off, and the integration in Proxmox is really cool.
>
> Just spend the money saved on proprietary iSCSI gear on some more
> servers/disks and get better performance and reliability.
>
> Cheers
> Eneko
>
> --
> Zuzendari Teknikoa / Director Técnico
> Binovo IT Human Project, S.L.
> Telf. 943569206
> Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
> www.binovo.es
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
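
For reference, the RBD/CephFS integration mentioned above boils down to a
couple of entries in /etc/pve/storage.cfg. A minimal sketch for a
hyperconverged, PVE-managed Ceph cluster (the storage IDs and pool name are
just examples; exact options vary by PVE version, and monitor addresses are
picked up automatically when Ceph is managed by PVE itself):

    rbd: ceph-vm
        pool rbd
        content images,rootdir
        krbd 0

    cephfs: cephfs
        path /mnt/pve/cephfs
        content backup,iso,vztmpl

With entries like these, VM and container disks land on the RBD pool while
ISO images, templates and backups go to CephFS, all manageable from the PVE
GUI.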


