[PVE-User] pveceph createosd after destroyed osd

Woods, Ken A (DNR) ken.woods at alaska.gov
Tue Jul 3 02:48:30 CEST 2018


You're thinking "Proxmox".  Try thinking "Ceph" instead.  Sure, Ceph runs with Proxmox, but what you're really doing is using a pretty GUI that sits on top of Debian, running Ceph and KVM.


Anyway, perhaps the GUI does all the steps needed?  Perhaps not.


If it were me, I'd NOT reinstall, as that's likely not going to fix the issue.


Follow the directions in the page I linked and see if that helps.
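
From memory, the gist of that page is roughly the below. I haven't re-checked it against your exact versions, so treat it as a sketch and swap in your real OSD id:

    ceph osd out {osd-id}                 # mark it out (nothing to drain on an empty cluster)
    systemctl stop ceph-osd@{osd-id}      # on the node that hosts that OSD
    ceph osd crush remove osd.{osd-id}    # remove it from the CRUSH map
    ceph auth del osd.{osd-id}            # delete its cephx key
    ceph osd rm {osd-id}                  # remove it from the OSD map

If the GUI skipped any of those, the old id could still be hanging around in the maps, which might explain what you're seeing.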

________________________________
From: pve-user <pve-user-bounces at pve.proxmox.com> on behalf of Mark Adams <mark at openvs.co.uk>
Sent: Monday, July 2, 2018 4:41:39 PM
To: PVE User List
Subject: Re: [PVE-User] pveceph createosd after destroyed osd

Hi, Thanks for your response!

No, I didn't do any of that on the CLI - I just did stop in the web GUI,
then out, then destroy.

Note that there were no VMs or data at all on this test ceph cluster - I
had deleted it all before doing this. I was basically just removing
everything so the OSD numbers looked "nicer" for the final setup.

It's not a huge deal, I can just reinstall proxmox. But it concerns me
that doing this through the web GUI seems so fragile, and I'd like to know
where I went wrong. Is a signature being stored somewhere, so that when
you try to add that same drive again (even though I ticked "remove
partitions") it doesn't get added back into the ceph cluster with the next
sequential OSD number after the last "live" or "valid" drive?

Is it just a rule that you never actually remove drives, and instead only
ever set them stopped/out?
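
For what it's worth, the only wipe I did was ticking "remove partitions".
I haven't tried clearing the disk by hand first - if leftover signatures
are the problem, I'm guessing it would be something along the lines of
(with /dev/sdX being whichever disk it is):

    wipefs --all /dev/sdX          # clear filesystem/LVM/ceph signatures
    sgdisk --zap-all /dev/sdX      # wipe the GPT/MBR partition tables
    ceph-volume lvm zap /dev/sdX   # or have ceph zap it, if ceph-volume is in use

but I'd rather understand what the web GUI is (or isn't) doing.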

Regards,
Mark



On 3 July 2018 at 01:34, Woods, Ken A (DNR) <ken.woods at alaska.gov> wrote:

> http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/#removing-osds-manual
>
> Are you sure you followed the directions?
>
> ________________________________
> From: pve-user <pve-user-bounces at pve.proxmox.com> on behalf of Mark Adams
> <mark at openvs.co.uk>
> Sent: Monday, July 2, 2018 4:05:51 PM
> To: pve-user at pve.proxmox.com
> Subject: [PVE-User] pveceph createosd after destroyed osd
>
> Currently running the newest 5.2-1 version, I had a test cluster which was
> working fine. I have since added more disks, first stopping, then setting
> out, then destroying each existing OSD so I could recreate it all from
> scratch.
>
> However, when adding a new OSD (either via the GUI or the pveceph CLI) the
> create appears to succeed, but the OSD does not show up in the GUI under
> the host.
>
> It's as if the OSD information is being stored by proxmox/ceph somewhere
> else and not being correctly removed and recreated?
>
> I can see that the newly created disk (after it was destroyed) is
> down/out.
>
> Is this by design? Is there a way to force the disk back? Shouldn't it
> show in the GUI once you create it again?
>
> Thanks!
>
>