[PVE-User] Ceph cluster incorrect disk space available

Ian Coetzee proxmox at iancoetzee.za.net
Tue Oct 3 10:53:50 CEST 2017


Hi Proxmox Users,

I am quite new to ceph (you can probably tell)

I have a quick question. I may not completely understand how Ceph works
yet, but I have a couple of SAS drives I am trying to bring into a new
Ceph cluster.

Firstly, I should note that I had to mangle the setup a bit, because I am
using the leftover partitioned space on the drives.

Background:
We are trying to move all our free-standing hypervisors into a cluster. I
chose Proxmox for its ease of use. We are a bit cash-strapped at the
moment, so new kit is a definite no-go.

I have started with one server with 6 x 146 GB 15K SAS drives.

I striped these drives together in pairs using the hardware RAID
controller, giving me 3 x 292 GB virtual drives.

In Debian I GPT-partitioned the disks as:
100 MB - GPT boot
10 GB - RAID volume
2 GB - swap
261 GB - Ceph data
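
For illustration, the layout would look something like this with sgdisk
(reconstructed from memory; /dev/sda and the partition type codes are
assumptions, not exactly what I ran):

  sgdisk -n 1:0:+100M -t 1:ef02 /dev/sda   # GPT/BIOS boot
  sgdisk -n 2:0:+10G  -t 2:fd00 /dev/sda   # RAID volume
  sgdisk -n 3:0:+2G   -t 3:8200 /dev/sda   # swap
  sgdisk -n 4:0:0     -t 4:8300 /dev/sda   # remainder (~261 GB) for Ceph data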

Before you berate me, this was done before I read that you don't officially
support this setup.

I followed this guide [1] to set the partitions up as OSDs.
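
From memory, the ceph-disk steps boiled down to something like the
following (the partition name /dev/sda4 is just an example):

  ceph-disk prepare /dev/sda4    # create an OSD on the Ceph data partition
  ceph-disk activate /dev/sda4   # mount it and start the OSD daemon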

Now for the problem I am facing: if I add a storage pool across these OSDs,
I can only use 10 GB per drive, giving me a total of 30 GB of usable space
out of a total of 783 GB. Is this because I used ceph-disk to prepare the
drives?
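
In case it helps, these are the kind of commands I have been using to check
the sizes (output omitted):

  ceph osd df tree                # per-OSD capacity as Ceph sees it
  lsblk                           # which partition actually backs each OSD
  df -h /var/lib/ceph/osd/ceph-*  # size of each mounted OSD data directory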

I would prefer not to go the route of breaking the RAID, setting the OS up
on one drive (wasting space) and using the rest of the drives as OSDs.

Kind regards

[1] http://dachary.org/?p=3037


