[PVE-User] Proxmox CEPH 6 servers failures!

dorsy dorsyka at yahoo.com
Fri Oct 5 14:23:19 CEST 2018


Your question has already been answered: you need a majority of the monitors to have quorum.
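The majority rule is simple arithmetic: a cluster of n monitors needs floor(n/2) + 1 of them alive and mutually reachable. A small sketch (plain Python, just illustrating the math; the function names are mine, not a Ceph API):

```python
# Ceph monitor quorum: a strict majority (floor(n/2) + 1) of the
# configured monitors must be alive and able to talk to each other.

def majority(n_monitors: int) -> int:
    """Smallest number of monitors that forms a quorum."""
    return n_monitors // 2 + 1

def has_quorum(alive: int, total: int) -> bool:
    """True if `alive` monitors out of `total` can form a quorum."""
    return alive >= majority(total)

# The setup in this thread: 6 monitors, 3 per site.
print(majority(6))        # -> 4 monitors are needed
# When the fiber fails, each site sees only its own 3 monitors:
print(has_quorum(3, 6))   # -> False: neither site has quorum
# Adding one VM monitor per site (8 total) keeps the split even:
print(has_quorum(4, 8))   # -> False: 4 < majority(8) == 5
```

This is why an even split across two sites can never survive the link failing: neither half holds a majority.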

On 2018. 10. 05. 14:10, Gilberto Nunes wrote:
> Hi
> Perhaps this can help:
>
> https://imageshack.com/a/img921/6208/X7ha8R.png
>
> I was thinking about it: perhaps if I deploy a VM on each side with
> Proxmox and add these VMs to the CEPH cluster, that could help!
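A VM monitor on each side still leaves an even 4/4 split when the fiber goes down. What breaks the tie is a single extra monitor reachable from both sides, e.g. at a third location or over a separate link. A minimal sketch of such an addition to ceph.conf; the hostname pve-ceph07 and the address 10.10.10.160 are hypothetical, not from the thread:

```ini
# Hypothetical tiebreaker monitor at a third location (host/IP assumed):
[mon.pve-ceph07]
host = pve-ceph07
mon addr = 10.10.10.160:6789
```

With 7 monitors the majority is 4, so the 3 surviving monitors on one site plus the tiebreaker can keep quorum after a fiber cut.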
>
> thanks
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> Em sex, 5 de out de 2018 às 03:55, Alexandre DERUMIER <aderumier at odiso.com>
> escreveu:
>
>> Hi,
>>
>> Can you resend your diagram? It's impossible to read.
>>
>>
>> but you need to have quorum among the monitors for the cluster to keep working.
>>
>>
>> ----- Original Message -----
>> From: "Gilberto Nunes" <gilberto.nunes32 at gmail.com>
>> To: "proxmoxve" <pve-user at pve.proxmox.com>
>> Sent: Thursday, October 4, 2018 22:05:16
>> Subject: [PVE-User] Proxmox CEPH 6 servers failures!
>>
>> Hi there
>>
>> I have something like this:
>>
>> CEPH01 ----|                                       |---- CEPH04
>>            |                                       |
>> CEPH02 ----|------------- Optic Fiber ------------ |---- CEPH05
>>            |                                       |
>> CEPH03 ----|                                       |---- CEPH06
>>
>> Sometimes, when the optic fiber link fails and only CEPH01, CEPH02 and
>> CEPH03 remain, the entire cluster fails!
>> I can't find out the cause!
>>
>> ceph.conf
>>
>> [global]
>> auth client required = cephx
>> auth cluster required = cephx
>> auth service required = cephx
>> cluster network = 10.10.10.0/24
>> fsid = e67534b4-0a66-48db-ad6f-aa0868e962d8
>> keyring = /etc/pve/priv/$cluster.$name.keyring
>> mon allow pool delete = true
>> osd journal size = 5120
>> osd pool default min size = 2
>> osd pool default size = 3
>> public network = 10.10.10.0/24
>>
>> [osd]
>> keyring = /var/lib/ceph/osd/ceph-$id/keyring
>>
>> [mon.pve-ceph01]
>> host = pve-ceph01
>> mon addr = 10.10.10.100:6789
>> mon osd allow primary affinity = true
>>
>> [mon.pve-ceph02]
>> host = pve-ceph02
>> mon addr = 10.10.10.110:6789
>> mon osd allow primary affinity = true
>>
>> [mon.pve-ceph03]
>> host = pve-ceph03
>> mon addr = 10.10.10.120:6789
>> mon osd allow primary affinity = true
>>
>> [mon.pve-ceph04]
>> host = pve-ceph04
>> mon addr = 10.10.10.130:6789
>> mon osd allow primary affinity = true
>>
>> [mon.pve-ceph05]
>> host = pve-ceph05
>> mon addr = 10.10.10.140:6789
>> mon osd allow primary affinity = true
>>
>> [mon.pve-ceph06]
>> host = pve-ceph06
>> mon addr = 10.10.10.150:6789
>> mon osd allow primary affinity = true
>>
>> Any help will be welcome!
>>
>> ---
>> Gilberto Nunes Ferreira
>>
>> (47) 3025-5907
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>> Skype: gilberto.nunes36
>> _______________________________________________
>> pve-user mailing list
>> pve-user at pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>


