Multicast notes

Revision as of 15:25, 4 March 2013

Note: Articles about Proxmox VE 2.0

Introduction

Multicast allows a single transmission to be delivered to multiple servers at the same time.

This is the basis for cluster communications in Proxmox VE 2.0.

If multicast does not work in your network infrastructure, use unicast instead.

Troubleshooting

Not all hosting companies allow multicast traffic.

Some switches have multicast disabled by default.

Test if multicast is working between nodes with omping

aptitude install omping

Start omping on all nodes with the following command and check the output, e.g.:

omping node1 node2 node3
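omping prints one line per reply, separately for unicast and multicast; if unicast replies arrive but multicast replies never do, multicast is being filtered somewhere between the nodes. A quick way to compare the two from saved output (the sample lines below are illustrative, not from a real run):

```shell
# Count unicast vs. multicast replies in saved omping output.
# Real output would come from e.g. `omping node1 node2 node3 > omping.log`;
# the sample here is only illustrative.
cat > omping.log <<'EOF'
node2 :   unicast, seq=1, size=69 bytes, dist=0, time=0.172ms
node2 : multicast, seq=1, size=69 bytes, dist=0, time=0.180ms
node2 :   unicast, seq=2, size=69 bytes, dist=0, time=0.165ms
node2 : multicast, seq=2, size=69 bytes, dist=0, time=0.174ms
EOF
ucast=$(grep -c ' unicast,' omping.log)
mcast=$(grep -c 'multicast,' omping.log)
echo "unicast=$ucast multicast=$mcast"
```

A multicast count of zero while unicast replies keep arriving points at IGMP snooping or a switch/firewall dropping multicast.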

Test if multicast is working between two nodes with ssmping

Copied from a forum post by e100.

  • This uses ssmping.

Install ssmping on all nodes.

aptitude install ssmping

Run this on Node A:

ssmpingd

Then on Node B:

asmping 224.0.2.1 ip_for_NODE_A_here

Example output:

asmping joined (S,G) = (*,224.0.2.234)
pinging 192.168.8.6 from 192.168.8.5
  unicast from 192.168.8.6, seq=1 dist=0 time=0.221 ms
  unicast from 192.168.8.6, seq=2 dist=0 time=0.229 ms
multicast from 192.168.8.6, seq=2 dist=0 time=0.261 ms
  unicast from 192.168.8.6, seq=3 dist=0 time=0.198 ms
multicast from 192.168.8.6, seq=3 dist=0 time=0.213 ms
  unicast from 192.168.8.6, seq=4 dist=0 time=0.234 ms
multicast from 192.168.8.6, seq=4 dist=0 time=0.248 ms
  unicast from 192.168.8.6, seq=5 dist=0 time=0.249 ms
multicast from 192.168.8.6, seq=5 dist=0 time=0.263 ms
  unicast from 192.168.8.6, seq=6 dist=0 time=0.250 ms
multicast from 192.168.8.6, seq=6 dist=0 time=0.264 ms
  unicast from 192.168.8.6, seq=7 dist=0 time=0.245 ms
multicast from 192.168.8.6, seq=7 dist=0 time=0.260 ms
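In healthy output like the above, each unicast reply is matched by a multicast reply; only the first multicast reply is commonly missing, simply because joining the group takes a moment. A small sketch that flags sequence numbers with a unicast but no multicast reply, using a shortened copy of the output above:

```shell
# Flag sequence numbers that got a unicast reply but no multicast reply.
# The sample is a shortened copy of the asmping output shown above.
cat > asmping.log <<'EOF'
  unicast from 192.168.8.6, seq=1 dist=0 time=0.221 ms
  unicast from 192.168.8.6, seq=2 dist=0 time=0.229 ms
multicast from 192.168.8.6, seq=2 dist=0 time=0.261 ms
EOF
missing=$(awk '/^ *unicast/ {u[$4]=1} /^multicast/ {m[$4]=1}
               END {for (s in u) if (!(s in m)) print s}' asmping.log)
echo "multicast replies missing for: $missing"
```

If every sequence number beyond the first shows up in the missing list, multicast is not getting through even though unicast works.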

For more information, see:

man ssmping

and

less /usr/share/doc/ssmping/README.gz

ssmping notes

  • There are a few other programs included in the ssmping package which may be of use. Here is a list of the files in the package:
apt-file list ssmping
ssmping: /usr/bin/asmping
ssmping: /usr/bin/mcfirst
ssmping: /usr/bin/ssmping
ssmping: /usr/bin/ssmpingd
ssmping: /usr/share/doc/ssmping/README.gz
ssmping: /usr/share/doc/ssmping/changelog.Debian.gz
ssmping: /usr/share/doc/ssmping/copyright
ssmping: /usr/share/man/man1/asmping.1.gz
ssmping: /usr/share/man/man1/mcfirst.1.gz
ssmping: /usr/share/man/man1/ssmping.1.gz
ssmping: /usr/share/man/man1/ssmpingd.1.gz
  • If you want to use apt-file, do this:
aptitude install apt-file
apt-file update

Then set up a cron job to run apt-file update weekly or monthly.
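For example, a drop-in cron file could look like this (the file name and schedule are only a suggestion):

```
# /etc/cron.d/apt-file-update  (hypothetical file name)
# m h dom mon dow user command
0 4 * * 0 root /usr/bin/apt-file update
```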

cman & iptables

If cman crashes with cpg_send_message failed: 9, add these rules to your rule set:

iptables -A INPUT -m addrtype --dst-type MULTICAST -j ACCEPT
iptables -A INPUT -p udp -m state --state NEW -m multiport --dports 5404,5405 -j ACCEPT
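The same two rules in iptables-save/iptables-restore format, in case you maintain a rules file rather than running iptables commands directly (a sketch; merge into your existing *filter section instead of replacing it):

```
*filter
-A INPUT -m addrtype --dst-type MULTICAST -j ACCEPT
-A INPUT -p udp -m state --state NEW -m multiport --dports 5404,5405 -j ACCEPT
COMMIT
```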

Use unicast instead of multicast

Unicast is a technology for sending messages to a single network destination. In corosync, unicast is implemented as UDP-unicast (UDPU). Due to increased network traffic (compared to multicast) the number of supported nodes is limited; do not use it with more than 4 cluster nodes.

  • Just create the cluster as usual (pvecm create ...)
  • Add transport="udpu" to the cman element in /etc/pve/cluster.conf


<cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>


  • Add all nodes you want to join to /etc/hosts and reboot
  • Before you add a node, make sure all other nodes are listed in /etc/hosts
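Putting these steps together, a minimal /etc/pve/cluster.conf using UDPU might look like the following sketch (cluster and node names are placeholders):

```
<?xml version="1.0"?>
<cluster name="clustername" config_version="2">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>
  <clusternodes>
    <clusternode name="node1" votes="1" nodeid="1"/>
    <clusternode name="node2" votes="1" nodeid="2"/>
    <clusternode name="node3" votes="1" nodeid="3"/>
  </clusternodes>
</cluster>
```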

Multicast with Infiniband

IP over Infiniband (IPoIB) supports multicast, but multicast traffic is limited to 2044 bytes when using connected mode, even if you set a larger MTU on the IPoIB interface.

Corosync has a setting, netmtu, that defaults to 1500, making it compatible with connected-mode Infiniband.

Changing netmtu

Changing the netmtu can increase throughput. The following information is untested.

Edit the /etc/pve/cluster.conf file and add the section:

<totem netmtu="2044" />


<?xml version="1.0"?>
<cluster name="clustername" config_version="2">
  <totem netmtu="2044" />
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey">
  </cman>

  <clusternodes>
    <clusternode name="node1" votes="1" nodeid="1"/>
    <clusternode name="node2" votes="1" nodeid="2"/>
    <clusternode name="node3" votes="1" nodeid="3"/>
  </clusternodes>

</cluster>


Netgear Managed Switches

The following are screenshots of the settings needed to get multicast working on our Netgear 7300 series switches. For more information, see http://documentation.netgear.com/gs700at/enu/202-10360-01/GS700AT%20Series%20UG-06-18.html


Multicast-netgear-1.png

Multicast-netgear-2.png

Multicast-netgear-3.png

NetGear-multicast-save-and-apply.png