[PVE-User] Best configuration among these : Episode 2

Philippe Schwarz phil at schwarz-fr.net
Tue Feb 24 10:39:11 CET 2015



Hi,
I read a lot of docs on Ceph and I'm convinced it's the way to go
(thanks a lot, Eneko).

So I modified my setup and have a few questions.
No more SAN (so no more SPOF), but increased hardware specs for the 3
remaining servers.


About the storage network:

I can't afford to lose the cluster because of a network failure.
I'm planning to use a Netgear XS708E, a 10GbE switch. It should be
doubled for redundancy, but then I would have to double the 10GbE NICs
too; that solution is too expensive.
So I plan to set up an active/passive bond:
1x10GbE (active) + 1x1GbE (passive), with the 10GbE plugged into the
XS708E, the 1GbE plugged into a cheap 1GbE switch, and both switches
connected by a single (or dual, LACP) link.

The other 10GbE port (the Intel NIC is dual-port) will be connected to
the LAN using the same principle (active 10GbE + passive 1GbE bond).

Is there an issue with that?
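
For reference, here is a minimal sketch of the storage bond in
/etc/network/interfaces, assuming the usual Proxmox/Debian ifupdown
syntax; the interface names (eth2 = 10GbE, eth3 = 1GbE) and the
address are made up:

  auto bond0
  iface bond0 inet static
          address 10.10.10.11
          netmask 255.255.255.0
          slaves eth2 eth3
          bond_mode active-backup
          bond_primary eth2
          bond_miimon 100

Active-backup needs no special support on either switch, and
bond_primary should move traffic back to the 10GbE link as soon as it
recovers.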


About the Ceph journal:
Given the smart use of a fast device for the journal and
slow/cheap/large devices for the data, I wonder which solution would
be best:
1. 1 SSD 200GB (Proxmox) + 1 PCIe SSD 400GB (Intel P3700, journal) + 4 SATA 1TB = 2200€
2. 1 SSD 200GB (Proxmox) + 1 SSD 200GB (journal) + 4 SATA 1TB = 900€
3. 1 SSD 200GB (Proxmox) + 5 SSD 1TB, no journal = 2600€
Not the same price, but not the same performance either...
The non-PCIe SSDs would be either Intel S3700 200GB or Samsung 850 Pro 1TB.
Any clue?
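
Whichever option wins, the journal placement is decided when each OSD
is created. If I read the Proxmox wiki correctly, it is a single flag
on pveceph; /dev/sdb (data disk) and /dev/sdc (journal SSD) are
made-up device names:

  # create an OSD on /dev/sdb, with its journal on the shared SSD /dev/sdc
  pveceph createosd /dev/sdb -journal_dev /dev/sdc

One journal SSD carries a small partition (5 GB by default) per OSD,
so a single 200GB S3700 could serve all 4 SATA OSDs; it just becomes
a single point of failure for those 4 OSDs.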

About the RAID controller:
RAID is mandatory on this controller, but JBOD (or single-disk array)
mode will be used.
The Dell H730 is sold with either 1GB or 2GB of non-volatile cache.
Is the 300€ difference (the price of a good SSD) worth it?



Other hardware considerations:
My Proxmox cluster will be made of 1 Samba + 1 WS (Trend) + 1 WS (WSUS) +
1 WS (AutoCAD licenses) + 1 Squid + 1 LTSP + many other little
servers (apt-proxy, Xibo, ...).
So, except for Squid+SquidGuard, nothing really CPU/RAM/IOPS hungry.
Before Ceph (the previous idea was a ZFS/FreeBSD SAN) I planned to
use 64 GB of RAM and dual 2630 CPUs.
Should I go up to 96 GB and dual 2650 CPUs? (Not sure I can afford both.)
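
For what it's worth, a rough estimate using the ~1 GB of RAM per 1 TB
of OSD storage rule of thumb from the Ceph hardware recommendations
(with option 2 above, 4x1TB OSDs per node; the 2 GB for the host is
my assumption):

  4 OSDs x 1 TB x ~1 GB/TB   = ~4 GB for the Ceph daemons
  host OS + Proxmox services = ~2 GB
  64 GB - ~6 GB              = ~58 GB left for the VMs

So 64 GB looks workable for these workloads; the extra 32 GB would
mostly buy headroom for recovery (OSD memory use spikes while
rebalancing) and for future VMs.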


About Proxmox only:
Is it possible to set up a fourth Proxmox node as a Ceph server only,
i.e. not join it to the existing Proxmox cluster but join it to the
Ceph cluster (I have to find the real term)? I don't see any issue with that.
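
I believe the term is simply adding the machine as an OSD host (and
possibly a monitor). A sketch with the stock ceph-deploy tool, run
from a node that already holds the cluster's ceph.conf and keys; the
hostname node4 and the device names are made up, and the box would
only need plain Debian plus the Ceph packages, not Proxmox:

  ceph-deploy install node4                  # install the Ceph packages
  ceph-deploy osd create node4:sdb:/dev/sdc  # data on sdb, journal on sdc

The Proxmox nodes would then use those OSDs like any others, since the
RBD client only needs the monitor addresses and a keyring.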

Last one:
Should I reduce the costs of these 3 servers so I can buy a fourth
one (next year) to increase my number of Ceph OSDs (it won't be a
Proxmox server), or not?
I didn't find benchmarks on how performance scales with the number of
servers (and thus OSDs).
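
Since published numbers are scarce, one way to measure the scaling
yourself after each node is added is rados bench; the pool name
'test' is an assumption:

  # 60-second write benchmark, 16 concurrent ops, keeping the objects
  rados bench -p test 60 write -t 16 --no-cleanup
  # read the same objects back, then delete them
  rados bench -p test 60 seq -t 16
  rados -p test cleanup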


(The budget has to be tightened because I'll have to replace a
brand-new HP 6200 24-port standalone switch (a stupid investment I
was against) with an HP 5406 chassis and many modules... I can't bond
them because the VLAN info would be lost.)

Thanks for all.

I have to be cautious with this new solution; I'm going to spend
public money on a solution I've never tested before... Seems crazy!

Best regards.


