[PVE-User] OVH + Vrack + private IP + Proxmox 4

Régis Houssin regis.houssin at inodbox.com
Mon Apr 25 13:48:02 CEST 2016


I'm making progress! :-)

If I use this configuration, a virtual server with only a private IP
address can reach the internet (but...):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

# for Routing
auto vmbr1
iface vmbr1 inet manual
        post-up /etc/pve/kvm-networking.sh
        bridge_ports dummy0
        bridge_stp off
        bridge_fd 0

# vmbr0: Bridging. Make sure to use only MAC addresses that were assigned to you.
auto vmbr0
iface vmbr0 inet static
        address <public_ip>
        netmask 255.255.255.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        post-up route add default gw <gateway> metric 1
        pre-down route del default gw <gateway>

auto vmbr2
iface vmbr2 inet static
        address  172.31.255.250
        netmask  255.240.0.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '172.16.0.0/12' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '172.16.0.0/12' -o vmbr0 -j MASQUERADE
        post-up route add -net 224.0.0.0 netmask 240.0.0.0 vmbr2
        pre-down route del -net 224.0.0.0 netmask 240.0.0.0 vmbr2
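As a quick sanity check on why this first setup works (a sketch using Python's standard ipaddress module; the addresses are the ones from the config above), the vmbr2 bridge address sits inside the MASQUERADE source range, so guests numbered out of 172.16.0.0/12 get NATed:

```python
# Check that the vmbr2 addressing lines up with the MASQUERADE rule.
import ipaddress

nat_source = ipaddress.ip_network("172.16.0.0/12")             # -s '172.16.0.0/12'
bridge = ipaddress.ip_interface("172.31.255.250/255.240.0.0")  # vmbr2 address/netmask

# 255.240.0.0 is a /12 prefix, so the bridge sits inside the NATed range.
print(bridge.network.prefixlen)                          # 12
print(bridge.ip in nat_source)                           # True
print(ipaddress.ip_address("10.20.0.1") in nat_source)   # False: a 10.x guest would NOT be masqueraded
```

So any guest bridged on vmbr2 with an address in 172.16.0.0/12 and 172.31.255.250 as its gateway should get out through the MASQUERADE rule.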


If I add my RIPE IP block, the virtual server that only has a private IP
address can no longer reach the internet. Why?! :-)

auto vmbr3
iface vmbr3 inet static
       address  <public_ip_ripe>
       netmask  255.255.255.224
       bridge_ports eth1
       bridge_stp off
       bridge_fd 0
       post-up route add default gw <gateway> metric 1
       pre-down route del default gw <gateway>
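One guess at the cause (an assumption on my part, not verified): both vmbr0 and vmbr3 run `post-up route add default gw <gateway> metric 1`, so the host ends up with two default routes at the same metric, and which interface the masqueraded traffic leaves on becomes ambiguous. A toy sketch of that route selection:

```python
# Toy route-selection sketch. Assumption: the kernel prefers the longest
# matching prefix, then the lowest metric; with equal metrics the choice
# is undefined -- exactly the ambiguity the two 'metric 1' defaults create.
import ipaddress

routes = [
    # (destination, metric, out_interface)
    (ipaddress.ip_network("0.0.0.0/0"), 1, "vmbr0"),  # post-up on vmbr0
    (ipaddress.ip_network("0.0.0.0/0"), 1, "vmbr3"),  # post-up on vmbr3 -- same metric!
]

def pick(dst, table):
    """Return the out-interfaces tied for 'best route' to dst."""
    matches = [r for r in table if dst in r[0]]
    best_len = max(r[0].prefixlen for r in matches)
    candidates = sorted((r for r in matches if r[0].prefixlen == best_len),
                        key=lambda r: r[1])           # lower metric wins
    return [r[2] for r in candidates if r[1] == candidates[0][1]]

print(pick(ipaddress.ip_address("8.8.8.8"), routes))  # ['vmbr0', 'vmbr3'] -- a tie
```

If the kernel picks vmbr3, the NATed packets from the private guests would go out the RIPE-block interface instead of vmbr0, which would break their masqueraded path. But that is just my guess.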


Thank you for your help.


On 25/04/2016 at 08:52, Kevin Lemonnier wrote:
> Sure,
>
> Here it is :
>
> auto lo
> iface lo inet loopback
>
> auto eth0
> iface eth0 inet manual
>
> auto eth1
> iface eth1 inet manual
>
> auto eth1.10
> iface eth1.10 inet static
> 	  address  10.10.0.1
> 	  netmask  16
>
> auto vmbr0
> iface vmbr0 inet static
> 	 address <public_ip>
> 	 netmask 255.255.255.0
> 	 gateway <gateway>
> 	 bridge_ports eth0
> 	 bridge_stp off
> 	 bridge_fd 0
>
> auto vmbr1
> iface vmbr1 inet static
> 	  address 10.20.0.1
> 	  netmask 255.255.0.0
> 	  bridge_ports eth1
> 	  bridge_stp off
> 	  bridge_fd 0
>
>
> I use a VLAN (eth1.10) for the GlusterFS replication, but that's very optional; I just didn't
> want any client on the VMs to have easy access to it. Having a separate interface also makes
> firewalling easier.
> I bridge all my VMs on vmbr1. Keep in mind that if you use the Proxmox template from OVH
> you'll probably have to add a vmbr2 and use that instead; they do something weird with vmbr1. I just
> installed a Debian and added Proxmox through the repos myself, but that shouldn't change much.
>
> It will work just like that, but if you want to use multicast you should just add this to vmbr1 :
> post-up route add -net 224.0.0.0 netmask 240.0.0.0 vmbr1
>
> That will route multicast traffic through the vRack, the only way to make it work. Proxmox doesn't
> require multicast, even if they say they do: for three nodes it's just not a problem. For
> 32 it might be, though, so you might as well add that line anyway, to be safe :)
>
> What method are you planning to use for storing your disks?
> We use GlusterFS, but we've had some problems with it, so I'm currently testing a different configuration.
> Ceph seems awesome, but very hard to deploy on OVH hardware. They are planning to launch a "Ceph as a service"
> offer in the future, but who knows when, and how reliable that will be.
> I would advise against DRBD9 for now; it seems too early to roll it into production, and it's easy to break everything.
>
> On Mon, Apr 25, 2016 at 08:44:31AM +0200, Régis Houssin wrote:
>> Thank you Kevin
>>
>> are you able to ping between your host servers on their private IP addresses?
>>
>> If so, do you have an example configuration for your network interfaces?
>>
>>
>> On 25/04/2016 at 08:31, Kevin Lemonnier wrote:
>>> Hi,
>>>
>>> Yes, we are doing HA on OVH servers and it works fine. Just
>>> be prepared for numerous outages on the vRack, and don't
>>> forget to use a routed RIPE block, not FailOver IPs, and
>>> you'll be fine :)
>>> Multicast does work on vRack, everything does.
>>>
>>> On Mon, Apr 25, 2016 at 08:02:31AM +0200, Régis Houssin wrote:
>>>> Hello
>>>>
>>>> I have 2 dedicated servers at OVH (a French provider) with vRack, and the
>>>> two servers cannot talk to each other over private IP addresses (172.16.0.0/12).
>>>> Does anyone know how to set this up, with an example /etc/network/interfaces?
>>>> Also, do you know if multicast is enabled on the vRack?
>>>>
>>>> Thank you for your help
>>>>
