[PVE-User] TASK ERROR: cluster not ready - no quorum?

Shain Miley SMiley at npr.org
Fri Mar 13 19:00:39 CET 2015


Just an FYI to anyone who might be thinking about trying this...these commands stopped every single running VM we had...I assume because of the unloading of the KVM kernel modules.
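
(If anyone wants to confirm whether the KVM modules really were unloaded on an affected node, a quick lsmod should show kvm plus kvm_intel or kvm_amd if they are still loaded; this is only a sanity check, not a fix:)

lsmod | grep kvm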

Use With Caution.

Shain

P.S. We are still having problems with the cluster.

Sent from my iPhone

On Mar 13, 2015, at 5:57 AM, Juan Serra Costa <juaserco at gmail.com> wrote:

This is not a solution, but it may help. After upgrading to 3.4 I had problems with the cluster. I solved it by restarting some daemons.

/etc/init.d/pve-manager stop
/etc/init.d/pvestatd stop
/etc/init.d/pvedaemon stop
/etc/init.d/cman stop
/etc/init.d/pve-cluster stop

At this point I had a problem with cman because some processes didn't stop, so I killed them.

ps aux|egrep "(dlm_controld|fenced|corosync)"
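
If any of those are still hanging around, something simple like this should clear them out (escalate to -9 only if they refuse to exit):

pkill dlm_controld
pkill fenced
pkill corosync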

Just to be sure, I forced the unmounting of some folders.

umount -l /sys/kernel/config
umount -l /etc/pve

After that I started the daemons.

/etc/init.d/pve-cluster start
/etc/init.d/cman start
/etc/init.d/pvedaemon start
/etc/init.d/pvestatd start
/etc/init.d/pve-manager start

Doing this on all the machines one by one, I recovered the cluster without rebooting. I know that this is not a solution, but it may help.
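
If it helps anyone, the steps above can be collected into one small per-node script. This is only a sketch using the exact init scripts listed above, and given Shain's warning at the top of the thread, expect it to affect running VMs:

#!/bin/sh
# Stop the PVE services and the cluster stack (top-down).
/etc/init.d/pve-manager stop
/etc/init.d/pvestatd stop
/etc/init.d/pvedaemon stop
/etc/init.d/cman stop
/etc/init.d/pve-cluster stop

# Kill any cluster processes that did not stop cleanly.
pkill dlm_controld
pkill fenced
pkill corosync

# Lazily unmount the config filesystems, just to be sure.
umount -l /sys/kernel/config
umount -l /etc/pve

# Start everything again (bottom-up).
/etc/init.d/pve-cluster start
/etc/init.d/cman start
/etc/init.d/pvedaemon start
/etc/init.d/pvestatd start
/etc/init.d/pve-manager start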


2015-03-12 19:47 GMT+01:00 Shain Miley <smiley at npr.org>:
Thank you for the info.

No, we are not using IPv6 at the moment.

From what I can tell using ssmping and omping...server-to-server tests work when using 'all-systems.mcast.net' as the multicast host...however when I use '239.192.129.102', the address assigned by Proxmox (I guess), there is 100% multicast packet loss.

We see the 100% dropped packets when testing with two 2.6.32-34 nodes...but not when testing with two 2.6.32-37 nodes.
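
For reference, omping can be pointed at the cluster's own group rather than its default test group; run the same command on both test nodes at the same time, something like this (the -m flag and the placeholder node names are from memory, so double-check against the man page):

omping -c 600 -i 1 -q -m 239.192.129.102 node34-a node34-b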

The problem is that even with 2.6.32-37, when we did have a quorum (11 of 19 nodes joined after restarting pve-cluster), /etc/pve/ was still read-only and our web GUI showed red...so just moving everything to 2.6.32-37 does not seem like the answer either.
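
When it gets into that state it may be worth checking what the cluster stack itself reports on an affected node; on 3.x pvecm sits on top of the cman tools, if I remember right, so something along these lines shows the quorum state, and the touch is just a quick writability test (the test file name is arbitrary):

pvecm status | grep -i quorum
touch /etc/pve/.rwtest && rm /etc/pve/.rwtest || echo "/etc/pve is still read-only"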

Thanks again,

Shain



On 03/12/2015 01:57 PM, Holger Hampel | RA Consulting wrote:
Are the server names resolving to IPv6 addresses? 2.6.32-37-pve doesn't like that!

-----Original Message-----
From: pve-user [mailto:pve-user-bounces at pve.proxmox.com] On behalf of Shain Miley
Sent: Thursday, 12 March 2015 16:17
To: Dietmar Maurer; pve-user at pve.proxmox.com; Eneko Lacunza
Subject: Re: [PVE-User] TASK ERROR: cluster not ready - no quorum?

These two look similar to what we are seeing as well:

http://forum.proxmox.com/threads/21060-Adding-new-server-to-older-existing-cluster-fails

http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3


At this point it looks like the nodes on 2.6.32-37-pve can use multicast
(testing via ssmping) and the ones on 2.6.32-34-pve fail.
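
In case anyone wants to reproduce the test: the ssmping package ships a small daemon plus clients, so roughly (going from memory on the exact invocation, check the man pages, and the hostname is just a placeholder):

# on the node under test:
ssmpingd

# on the other node, pointed at the cluster group and the first node:
asmping 239.192.129.102 <first-node-hostname>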

I cannot understand why there would be a problem here. Are others
running mixed (2.6.32) kernel versions with 3.4?

Or should we try to get to a state where everything is using the same
kernel version?

Thanks,

Shain



On 03/12/2015 10:51 AM, Shain Miley wrote:
Hi.
No...we don't...and I know that the post references InfiniBand...but I
thought that maybe there was a more widespread multicast issue in
general with that kernel.

We have been using Proxmox for close to 5 or 6 years and I have never
lost a quorum, or had these kinds of issues before, so I am just
trying to figure out what went wrong and how to fix it.

Thanks,
Shain


On 03/12/2015 10:46 AM, Dietmar Maurer wrote:
I found this post here:

http://pve.proxmox.com/pipermail/pve-devel/2015-February/014356.html

which suggests there might be an issue with the 2.6.32-37 kernel.
Do you use InfiniBand?

_______________________________________________
pve-user mailing list
pve-user at pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user