[PVE-User] PVE 2.1 and HA with drbd...

Gilberto Nunes gilberto.nunes32 at gmail.com
Fri Jul 27 17:19:05 CEST 2012


Oh God!... Manually???
So I'll be forced to "downgrade" from PVE 2.1 to PVE 1.x in order to have
Heartbeat work properly...
It's a regression, from my point of view...

2012/7/27 Robert Fantini <rob at fantinibakery.com>

>
>
> On 07/27/2012 08:55 AM, Gilberto Nunes wrote:
>
> Hi....
>
>  Formerly, I always used heartbeat and drbd to provide HA for my virtual
> machines...
> I configured heartbeat to monitor the physical nodes, and if one of them
> crashed, heartbeat would run qm to start the VMs that had been running on
> the failed node...
> Now I see that PVE brings us corosync-pve...
> My question is: is it simple to do the same work that heartbeat did before?
> Is there any risk if I change some configuration in the default
> corosync-pve config?
>
>  Thanks
>
>  Cheers
>
>
>
>
> We also used drbd + heartbeat in 1.9, and on Debian etch before that.
>
> Using a Primary/Secondary drbd setup, with something like heartbeat to
> control which system is the Primary, just worked.
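>
> For anyone starting from scratch, a minimal Primary/Secondary resource
> definition looked roughly like this (a sketch only, assuming DRBD 8.x;
> the hostnames, disks, and addresses are placeholders, not our real setup):
>
> # /etc/drbd.d/r0.res -- hypothetical example
> resource r0 {
>   protocol C;                    # synchronous replication
>   on nodeA {
>     device    /dev/drbd0;
>     disk      /dev/sdb1;         # backing block device
>     address   10.0.0.1:7788;
>     meta-disk internal;
>   }
>   on nodeB {
>     device    /dev/drbd0;
>     disk      /dev/sdb1;
>     address   10.0.0.2:7788;
>     meta-disk internal;
>   }
> }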
>
>  Automatic failover would, I think, be a very good feature to have in a
> 2-node drbd cluster. Or is there something already built into the PVE
> cluster stack that deals with this?
>
>  I understand that the full heartbeat package is not compatible with pve:
>
> # aptitude install heartbeat
> The following NEW packages will be installed:
>   cluster-agents{a} cluster-glue{a} heartbeat libcluster-glue{a}
> libcorosync4{a} libesmtp5{a} libheartbeat2{a}
>   libnet1{a} libopenhpi2{a} openhpid{a} pacemaker{a}
> 0 packages upgraded, 11 newly installed, 0 to remove and 2 not upgraded.
> Need to get 2,968 kB of archives. After unpacking 10.4 MB will be used.
> The following packages have unmet dependencies:
>   libcorosync4-pve: Conflicts: libcorosync4 but 1.2.1-4 is to be installed.
> The following actions will resolve these dependencies:
>
>       Remove the following packages:
> 1)      clvm
> 2)      corosync-pve
> 3)      fence-agents-pve
> 4)      libcorosync4-pve
> 5)      libopenais3-pve
> 6)      libpve-access-control
> 7)      libpve-storage-perl
> 8)      openais-pve
> 9)      proxmox-ve-2.6.32
> 10)     pve-cluster
> 11)     pve-manager
> 12)     qemu-server
> 13)     redhat-cluster-pve
> 14)     resource-agents-pve
> 15)     vzctl
> Accept this solution? [Y/n/q/?] q
>
>
>
>  For now we are using drbd Primary/Primary.
> We have the KVMs running on one node.
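>
> For reference, dual-primary mode has to be enabled explicitly in the
> resource's net section. Roughly (again a sketch; the exact option syntax
> varies slightly between DRBD 8.3 and 8.4):
>
> resource r0 {
>   net {
>     allow-two-primaries;         # both nodes may be Primary at once
>   }
>   ...
> }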
>
> If the node we are using breaks, we'll do a manual switchover. This
> procedure was suggested by Dietmar on the forum:
>
> "
> 1-First make sure the other node is really down!
>
> 2-Then set expected votes to gain quorum: [ this may already be set in
> our 2-node cluster.conf? ]
>  # pvecm expected 1
>
> 3-Then move the config file to correct position:
> # mv /etc/pve/nodes/<oldnode>/qemu-server/<vmid>.conf
> /etc/pve/nodes/<newnode>/qemu-server/
> "
>
>
>   The KVM disks are of course on both nodes thanks to drbd, so only the
> .conf files need to be moved.
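>
> Wrapped into a small script it could look like this (a sketch only; the
> script name and arguments are hypothetical, the "qm start" at the end is
> my addition, and step 1 -- verifying the dead node -- stays manual):
>
> #!/bin/sh
> # manual-failover.sh -- hypothetical helper, run on the surviving node
> # Usage: ./manual-failover.sh <oldnode> <newnode> <vmid>
> set -e
> OLDNODE=$1; NEWNODE=$2; VMID=$3
>
> # 1. First make sure the other node is really down (fence it / pull power)!
>
> # 2. Lower expected votes so the surviving node regains quorum
> pvecm expected 1
>
> # 3. Move the VM config to the surviving node, then start the VM
> mv /etc/pve/nodes/$OLDNODE/qemu-server/$VMID.conf \
>    /etc/pve/nodes/$NEWNODE/qemu-server/
> qm start $VMID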
>
>   I was never comfortable with heartbeat for automatic failover, even after
> 5+ years of using it, so I do not mind doing the failover manually in PVE
> version 2.
>
>   PS: in the future, sheepdog seems like it will be a better way to ensure
> the survivability of KVM disks.
>
>
>
>  --
> Gilberto Nunes
>
>
>
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>


-- 
Gilberto Nunes


(47) 9676-7530

msn: gilbertonunesferreira at hotmail.com

msn: konnectati at konnectati.com.br

Skype: gilberto.nunes36

Skype: konnectati


*www.konnectati.com.br*

