[PVE-User] Ceph cluster incorrect diskspace available

Phil Schwarz infolist at schwarz-fr.net
Tue Oct 3 11:03:43 CEST 2017


Hi Ian,
if you're new to Ceph and Proxmox, I'd suggest you not go down this path...

- Don't use HW RAID for OSDs. Don't (there's an active thread on this
topic).
- Don't use partitions, use whole disks, and let pveceph createosd or
ceph-disk prepare do the (right) job; see the sketch below. <-- This one
you could escape from.
etc...
- Use more servers... more than one. You can't do distributed storage
with a single server. Far from distributed.


You'll lose a huge amount of time in lengthy debugging sessions...
Fight as hard as possible against this setup.
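
If you can free up whole disks at some point, the intended workflow is
roughly the one below. It's a minimal sketch: /dev/sdb and /dev/sdc are
placeholder device names for disks the controller exposes individually,
and it must only be run against disks holding nothing you want to keep.

  lsblk                        # confirm which devices are the SAS disks
  pveceph createosd /dev/sdb   # whole disk; pveceph partitions it and
                               # registers the OSD with the cluster
  pveceph createosd /dev/sdc   # ...and so on for each remaining disk

By default the journal ends up on the same disk; pointing it at a
separate device is optional.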



On 03/10/2017 at 10:53, Ian Coetzee wrote:
> Hi Proxmox Users,
> 
> I am quite new to Ceph (you can probably tell).
> 
> I have a quick question. I don't know whether I'm just not completely
> understanding how Ceph works, but I have a couple of SAS drives I am
> trying to adopt into a new Ceph cluster.
> 
> First, I should note that I had to mangle the setup a bit, because I am
> using the leftover partitioned space on the drives.
> 
> Background:
> We are trying to move all our freestanding hypervisors into a cluster. I
> chose Proxmox for its ease of use. We are a bit cash-strapped at the
> moment, so new kit is a definite no-go.
> 
> I have started with one server with 6 x 146GB 15K SAS drives.
> 
> I striped these drives together in pairs using the hardware RAID
> controller, to give me 3 x 292GB virtual drives.
> 
> In Debian I GPT-partitioned the disks as:
> 100MB - GPT boot
> 10GB - RAID volume
> 2GB - swap
> 261GB - Ceph data
> 
> Before you berate me, this was done before I read that you don't officially
> support this setup.
> 
> I followed this guide [1] on using the partitions as OSDs.
> 
> Now, the problem I am facing: if I add a storage pool across these OSDs,
> I can only use 10GB per drive, giving me a total of 30GB of usable space
> out of a total of 783GB. Is this because I used ceph-disk to prepare the
> drives?
> 
> I would prefer not to go the route of breaking the RAID, setting the OS
> up on one drive (wasting space), and using the rest of the drives as OSDs.
> 
> Kind regards
> 
> [1] http://dachary.org/?p=3037
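
As for the 10GB-per-drive question above: 30GB is exactly 3 x 10GB, the
size of the "RAID volume" partitions, so it's worth checking what each
OSD was actually prepared on. The read-only commands below (default
ceph-disk paths; adjust if yours differ) will show it either way.

  ceph -s                          # overall health and OSD count
  ceph osd df                      # per-OSD size, weight and utilisation
  ceph df                          # per-pool space as Ceph computes it
  ceph-disk list                   # which partition each OSD sits on
  df -h /var/lib/ceph/osd/ceph-*   # size of each mounted OSD data partition

Whatever they show, the clean fix is still whole disks on more than one
node, as above.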



