Multicast notes
Note: Articles about Proxmox VE 2.0
Introduction
Multicast allows a single transmission to be delivered to multiple servers at the same time.
This is the basis for cluster communications in Proxmox VE 2.0.
If multicast does not work in your network infrastructure, use unicast instead.
Troubleshooting
Not all hosting companies allow multicast traffic.
Some switches have multicast disabled by default.
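If the cluster nodes are connected through a Linux bridge (as is usual on Proxmox VE hosts), the bridge's IGMP snooping can also interfere with multicast. A minimal, untested sketch to check it, assuming the bridge is named vmbr0:

# 1 = IGMP snooping enabled, 0 = disabled; vmbr0 is an assumed bridge name
cat /sys/class/net/vmbr0/bridge/multicast_snooping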
Test if multicast is working between two nodes
Copied from a forum post by e100.
- This uses ssmping.
Install it on all nodes:
aptitude install ssmping
Run this on Node A:
ssmpingd
Then on Node B:
asmping 224.0.2.1 ip_for_NODE_A_here
Example output:
asmping joined (S,G) = (*,224.0.2.234)
pinging 192.168.8.6 from 192.168.8.5
unicast from 192.168.8.6, seq=1 dist=0 time=0.221 ms
unicast from 192.168.8.6, seq=2 dist=0 time=0.229 ms
multicast from 192.168.8.6, seq=2 dist=0 time=0.261 ms
unicast from 192.168.8.6, seq=3 dist=0 time=0.198 ms
multicast from 192.168.8.6, seq=3 dist=0 time=0.213 ms
unicast from 192.168.8.6, seq=4 dist=0 time=0.234 ms
multicast from 192.168.8.6, seq=4 dist=0 time=0.248 ms
unicast from 192.168.8.6, seq=5 dist=0 time=0.249 ms
multicast from 192.168.8.6, seq=5 dist=0 time=0.263 ms
unicast from 192.168.8.6, seq=6 dist=0 time=0.250 ms
multicast from 192.168.8.6, seq=6 dist=0 time=0.264 ms
unicast from 192.168.8.6, seq=7 dist=0 time=0.245 ms
multicast from 192.168.8.6, seq=7 dist=0 time=0.260 ms
For more information see:
man ssmping
and
less /usr/share/doc/ssmping/README.gz
ssmping notes
- There are a few other programs included in the ssmping package which may be of use. Here is a list of the files in the package:
apt-file list ssmping
ssmping: /usr/bin/asmping
ssmping: /usr/bin/mcfirst
ssmping: /usr/bin/ssmping
ssmping: /usr/bin/ssmpingd
ssmping: /usr/share/doc/ssmping/README.gz
ssmping: /usr/share/doc/ssmping/changelog.Debian.gz
ssmping: /usr/share/doc/ssmping/copyright
ssmping: /usr/share/man/man1/asmping.1.gz
ssmping: /usr/share/man/man1/mcfirst.1.gz
ssmping: /usr/share/man/man1/ssmping.1.gz
ssmping: /usr/share/man/man1/ssmpingd.1.gz
- If you want to use apt-file, install it and update its cache:
aptitude install apt-file
apt-file update
Then set up a cron job to run apt-file update weekly or monthly.
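For example, a weekly root crontab entry could look like this (the schedule below is only an illustration):

# run apt-file update every Sunday at 04:00 (add via crontab -e as root)
0 4 * * 0 apt-file update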
Use unicast instead of multicast
Unicast is a technology for sending messages to a single network destination. In corosync, unicast is implemented as UDP unicast (UDPU). Because it generates more network traffic than multicast, the number of supported nodes is limited; do not use it with more than 4 cluster nodes.
- Just create the cluster as usual (pvecm create ...).
- Add the new transport="udpu" attribute in /etc/pve/cluster.conf:
<cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>
- Add all nodes you want to join to /etc/hosts and reboot.
- Before you add a node, make sure you add all other nodes to /etc/hosts.
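As an illustration only, /etc/hosts on every node could contain entries like these (the host names and addresses are made up; use your real cluster names and IPs):

192.168.8.5  node1
192.168.8.6  node2
192.168.8.7  node3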
Multicast with Infiniband
IP over Infiniband (IPoIB) supports multicast, but multicast traffic is limited to 2044 bytes when using connected mode, even if you set a larger MTU on the IPoIB interface.
Corosync has a setting, netmtu, which defaults to 1500 and is therefore compatible with connected-mode Infiniband.
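To see which mode and MTU an IPoIB interface is currently using, something like the following should work (untested here; ib0 is an assumed interface name):

cat /sys/class/net/ib0/mode   # prints "connected" or "datagram"
ip link show ib0              # the mtu field shows the interface MTU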
Changing netmtu
Changing the netmtu can increase throughput. The following information is untested.
Edit the /etc/pve/cluster.conf file and add the section:
<totem netmtu="2044" />
A complete cluster.conf could then look like this:
<?xml version="1.0"?>
<cluster name="clustername" config_version="2">
<totem netmtu="2044" />
<cman keyfile="/var/lib/pve-cluster/corosync.authkey">
</cman>
<clusternodes>
<clusternode name="node1" votes="1" nodeid="1"/>
<clusternode name="node2" votes="1" nodeid="2"/>
<clusternode name="node3" votes="1" nodeid="3"/></clusternodes>
</cluster>
Netgear Managed Switches
The following are pictures of the settings used to get multicast working on our Netgear 7300 series switches. For more information see http://documentation.netgear.com/gs700at/enu/202-10360-01/GS700AT%20Series%20UG-06-18.html

[[Image:Multicast-netgear-1.png]]
[[Image:Multicast-netgear-2.png]]
[[Image:Multicast-netgear-3.png]]